Interesting and original work but the proposed method might need further development before the resulting CSKB can be used as a reliable resource

This paper studies generating commonsense knowledge directly from pre-trained language models for two CSKBs - ConceptNet and Ascent++. The direction is interesting and original, and the paper is well written and easy to follow. However:
1. The paper claims that "up to now no materialized resource of commonsense knowledge generated via pre-trained language models is publicly available." However, this is not true: West et al. (2021) construct AUTOTOMIC via GPT-3, which is 10x larger than ATOMIC, and provide comprehensive and in-depth analysis and evaluation.
2. Lack of novelty: the proposed method directly applies the previous COMET pipeline to two established CSKBs without further improvement or adaptation.
3. The evaluation shows a clear gap between the proposed PLM-generated CSKs and the original human-written CSKs. Without further filtering or purification, it is questionable whether the generated noisy CSKB can be used as a reliable resource.
Citation: West, Peter et al. "Symbolic Knowledge Distillation: from General Language Models to Commonsense Models." ArXiv abs/2110.07178 (2021): n. pag.
Rating: 5: Marginally below acceptance threshold
Confidence: 3: The reviewer is fairly confident that the evaluation is correct
\pdfoutput=1 \documentclass[11pt]{article} \usepackage{acl} \usepackage{times} \usepackage{latexsym} \usepackage[T1]{fontenc} \usepackage[utf8]{inputenc} \usepackage{microtype} \usepackage{graphicx} \usepackage{booktabs} \usepackage{multirow} \newcommand{\sr}[1]{{\textcolor{violet}{SR: #1}}} \newcommand{\ph}[1]{{\textcolor{orange}{Ph: #1}}} \newcommand{\ascentpp}{\textsc{Ascent++}} \newcommand{\conceptnet}{\textsc{ConceptNet}} \newcommand{\comet}{\textsc{Comet}} \newcommand{\atomic}{\textsc{Atomic}} \newcommand{\triple}[1]{\emph{$\langle$#1$\rangle$}} \renewcommand{\paragraph}[1]{\smallskip\noindent\textbf{#1.\mbox{\ \ }}} \title{Materialized Knowledge Bases from Commonsense Transformers} \author{Tuan-Phong Nguyen \\ Max Planck Institute for Informatics \\ Saarland Informatics Campus \\ Saarbrücken, Germany \\ \texttt{tuanphong@mpi-inf.mpg.de} \And Simon Razniewski \\ Max Planck Institute for Informatics \\ Saarland Informatics Campus \\ Saarbrücken, Germany \\ \texttt{srazniew@mpi-inf.mpg.de}} \begin{document} \maketitle \begin{abstract} Starting from the \comet{} methodology by \citet{bosselut2019comet}, generating commonsense knowledge from commonsense transformers has recently received significant attention. Surprisingly, up to now no materialized resource of commonsense knowledge generated this way is publicly available. This paper fills this gap, and uses the materialized resources to perform a detailed analysis of the potential of this approach in terms of precision and recall. Furthermore, we identify common problem cases, and outline use cases enabled by materialized resources. We posit that the availability of these resources is important for the advancement of the field, as it enables an off-the-shelf use of the resulting knowledge, as well as further analyses of its strengths and weaknesses. \end{abstract} \section{Introduction} Compiling comprehensive collections of commonsense knowledge (CSK) is an old dream of AI. Besides attempts at manual compilation~\cite{liu2004conceptnet,lenat1995cyc,atomic} and text extraction~\cite{schubert2002can,webchild,mishra2017domain,quasimodo,ascentpp}, commonsense knowledge compilation from pretrained language models~\cite{bosselut2019comet,comet-atomic-2020,west2021symbolic} has recently emerged. In \citeyear{bosselut2019comet}, \citeauthor{bosselut2019comet} introduced \textit{Commonsense Transformers} (\comet{}), an approach for fine-tuning language models on existing corpora of commonsense assertions. These models have shown promising performance in generating commonsense assertions after being trained on established human-authored commonsense resources such as \atomic~\cite{atomic} and \atomic$^{20}_{20}$~\cite{comet-atomic-2020}. More recently, \citet{west2021symbolic} extract commonsense assertions from a general language model, GPT-3~\cite{GPT3}, using simple prompting techniques. Surprisingly, using this machine-authored commonsense corpus to fine-tune \comet{} helps it outperform GPT-3, which is 100x larger in size, in terms of commonsense capabilities. Despite the prominence of this approach (the seminal \comet{} paper~\cite{bosselut2019comet} having received over 300 citations in just two years), to date, no resource containing commonsense knowledge compiled from any \comet{} model is publicly available. As compilation of such a resource is a non-trivial endeavour, this is a major impediment to research that aims to understand the potential of the approach, or intends to employ its outputs in downstream tasks.
This resource paper fills this gap. We fine-tune the \comet{} pipeline on two established resources of concept-centric CSK assertions, \conceptnet{} \cite{speer2017conceptnet} and \ascentpp{} \cite{ascentpp}, and execute the pipeline for 10K prominent subjects. Unlike the \atomic{} resources, which were used to train \comet{} in \cite{bosselut2019comet,comet-atomic-2020} and have their main focus on events and social interactions, the two resources of choice are mostly about general concepts (e.g., \textit{lions can roar}, or \textit{a car has four wheels}). Furthermore, as those two resources were constructed using two fundamentally different methods, crowdsourcing and web text extraction, we can examine the potentially different impacts they have on the \comet{} models. By taking the top-10 inferences for each subject-predicate pair, we obtain four resources, \conceptnet{} (GPT2-XL, BART) and \ascentpp{} (GPT2-XL, BART), containing 900K to 1.4M ranked assertions of CSK. We perform a detailed evaluation of the intrinsic quality, including fine-grained precision (typicality and saliency) and recall of each resource, derive qualitative insights into the strengths and weaknesses of the approach, and highlight extrinsic use cases enabled by the resources. \pagebreak Our contributions are: \begin{enumerate} \item The materialization of the \comet{} approach for two language models (GPT2-XL, BART) on two concept-centered commonsense knowledge bases (\conceptnet{}, \ascentpp{}); \item Quantitative and qualitative evaluations of the resulting resources in terms of precision, recall and error categories, showing that in terms of recall, \comet{} models outperform crowdsourced construction and are competitive with web text extraction, while exhibiting moderate gaps in terms of precision to both; \item Illustrative use cases of the materialized resources in statement aggregation, join queries, and search. \end{enumerate} The materialized resources, as well as an interactive browsing interface, are available at\linebreak {\small \url{https://ascentpp.mpi-inf.mpg.de/comet}}. \section{Related work} Early approaches to CSK compilation relied on expert knowledge engineers \cite{lenat1995cyc} or crowdsourcing \cite{liu2004conceptnet}, and the latter approach has recently been revived \cite{atomic}. To overcome scalability limitations of manual compilation, text extraction is a second popular paradigm. Following early attempts on linguistic corpora \cite{mishra2017domain}, approaches have increasingly targeted larger text corpora like Wikipedia, book scans, or web documents \cite{webchild,quasimodo,ascentpp,ascent}, to build CSK resources of wide coverage and quality. Recently, both approaches have been complemented by knowledge extraction from pre-trained language models: Language models like BERT~\cite{devlin2019bert} or GPT~\cite{radford2019language, GPT3} have seen millions of documents, and latently store associations among terms. While \citet{west2021symbolic} used prompting to extract symbolic CSK from GPT-3, \citet{bosselut2019comet} proposed to tap this knowledge by supervised learning: the language models are fine-tuned on statements from existing knowledge resources, e.g., trained to predict the object \textit{Africa} when given the subject-predicate pair \textit{elephant, AtLocation}, based on the ConceptNet triple \triple{elephant, AtLocation, Africa}. After training, they can be used to predict objects for unseen subject-predicate pairs, e.g., locations of wombats.
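For illustration, the sketch below shows how such a triple could be turned into an input/target pair for this kind of fine-tuning. It is a simplified approximation for exposition only; the special tokens and the exact linearization are placeholders, and the actual format follows the official \comet{} code.
{\small
\begin{verbatim}
# Simplified sketch of COMET-style training-pair construction
# (illustrative; [GEN]/[EOS] tokens and the exact linearization
#  are placeholders for the format used in the official code).
def to_training_pair(subject, predicate, obj):
    source = f"{subject} {predicate} [GEN]"   # model input
    target = f"{obj} [EOS]"                   # expected completion
    return source, target

print(to_training_pair("elephant", "AtLocation", "Africa"))
# -> ('elephant AtLocation [GEN]', 'Africa [EOS]')
\end{verbatim}
}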
The approach gained significant attention, and variants are employed in a range of downstream tasks, e.g., commonsense question answering \cite{bosselut2019dynamic}, commonsense explanation~\cite{semeval-csk-explanation}, story generation \cite{guan2020knowledge}, or video captioning~\cite{fang2020video2commonsense}. Yet, to date, no materialized knowledge resource produced by any \comet{} model is available (\textsc{AutoTOMIC} from \cite{west2021symbolic} being based on prompting GPT-3). The closest to this is a web interface hosted by the Allen Institute for AI at {\small \url{https://mosaickg.apps.allenai.org/model_comet2020_entities}}. However, this visualizes predictions only for a single subject, making, e.g., aggregation or counting impossible, and it shows only the top-5 predictions, without scores. \section{Methodology} We follow the implementations in the official code repository\footnote{\url{https://github.com/allenai/comet-atomic-2020/}} of the \textsc{Comet-Atomic}$_{20}^{20}$ project~\cite{comet-atomic-2020} to compute assertions and to decide on output thresholds. \paragraph{Training CSKBs} We use two established concept-centered commonsense knowledge bases (CSKBs), \conceptnet{} 5.7~\cite{speer2017conceptnet} and \ascentpp{}~\cite{ascentpp}, as training resources, considering 13 CSK predicates from each of them: \textit{AtLocation}, \textit{CapableOf}, \textit{Causes}, \textit{Desires}, \textit{HasA}, \textit{HasPrerequisite}, \textit{HasProperty}, \textit{HasSubevent}, \textit{MadeOf}, \textit{MotivatedByGoal}, \textit{PartOf}, \textit{UsedFor} and \textit{ReceivesAction}. \begin{enumerate} \item \conceptnet{}~\cite{speer2017conceptnet} is arguably the most widely used CSKB, built by crowdsourcing. \conceptnet{} 5.7 is its latest version\footnote{\url{https://github.com/commonsense/conceptnet5/wiki/Downloads}}, consisting of 21 million multilingual assertions, spanning CSK as well as general linguistic and taxonomic knowledge. We retain English assertions only, resulting in 207,210 training assertions for the above-mentioned predicates. \item \ascentpp{}~\cite{ascentpp} is a project aiming at automated CSK extraction from large-scale web content, based on open information extraction (OpenIE) and judicious cleaning and ranking approaches. The \ascentpp{} KB consists of 2 million English CSK assertions for the 13 mentioned predicates. \end{enumerate} \paragraph{Language models} We consider two autoregressive language models (LMs) that were also used in the original \comet{} paper, GPT2-XL~\cite{radford2019language} and BART~\cite{lewis2019bart}. \paragraph{Materialization process} We query the fine-tuned \comet{} models for 10,926 subjects in \conceptnet{} that have at least two assertions for the 13 CSK predicates. For each subject-predicate pair, we use beam search to obtain completions, with different configurations (see Table~\ref{tab:configs}) for BART and GPT2-XL, following the parameters specified in the published code repository and models. We retain the top-10 completions for each subject-predicate pair, with their \textit{beam scores} (i.e., the sum of the log-softmax scores of all generated tokens) returned by the \textit{generate} function\footnote{\url{https://huggingface.co/docs/transformers/main/en/main\_classes/text\_generation\#transformers.generation\_utils.GenerationMixin.generate}} of the Transformers library~\cite{transformers}.
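To make this step concrete, the following sketch shows how a single subject-predicate query could be issued with the Transformers \textit{generate} function. It is a simplified sketch only: the checkpoint path and prompt format are placeholders, and the actual decoding parameters are those listed in Table~\ref{tab:configs}.
{\small
\begin{verbatim}
# Simplified sketch of the materialization step (illustrative only;
# the checkpoint path and prompt format are placeholders).
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("path/to/comet-gpt2-xl")
model = AutoModelForCausalLM.from_pretrained("path/to/comet-gpt2-xl")

prompt = "elephant AtLocation [GEN]"   # subject, predicate, gen token
inputs = tokenizer(prompt, return_tensors="pt")

out = model.generate(
    **inputs,
    num_beams=10,                # beam search, cf. configuration table
    num_return_sequences=10,     # keep the top-10 completions
    max_length=16,
    early_stopping=True,
    do_sample=False,
    output_scores=True,
    return_dict_in_generate=True,
)
# out.sequences_scores holds the beam scores used for ranking.
for seq, score in zip(out.sequences, out.sequences_scores):
    print(tokenizer.decode(seq, skip_special_tokens=True), float(score))
\end{verbatim}
}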
\paragraph{Output} The resulting resources, \conceptnet{} (GPT2-XL, BART) and \ascentpp{} (GPT2-XL, BART), contain 976,296, 1,420,380, 1,271,295 and 1,420,380 assertions after deduplication, respectively, together with their corresponding beam scores. All are available for browsing, as well as for download, at {\small \url{https://ascentpp.mpi-inf.mpg.de/comet}} (see the screenshot of the browsing interface in Figure~\ref{fig:interface}). \begin{table}[t] \centering \small \begin{tabular}{lrr} \toprule \textbf{Parameter} & \textbf{GPT2-XL} & \textbf{BART} \\ \midrule num\_beams & 10 & 10 \\ temperature & 1.0 & 1.0 \\ top\_p & 0.9 & 1.0 \\ repetition\_penalty & 1.0 & 1.0 \\ max\_length & 16 & 24 \\ no\_repeat\_ngram\_size & 0 & 3 \\ early\_stopping & True & True \\ do\_sample & False & False \\ \bottomrule \end{tabular} \caption{Configurations for beam-search decoders.} \label{tab:configs} \end{table} \section{Analysis} We perform three kinds of analyses: (1) a quantitative evaluation of the intrinsic quality of the assertions, based on crowdsourcing, (2) a qualitative evaluation that outlines major strengths and weaknesses, and (3) an illustration of use cases enabled by the materialized resources. \subsection{Quantitative evaluation} The original paper \cite{bosselut2019comet} only evaluated the top-1 triple per subject-predicate pair. Furthermore, it solely evaluated triples by plausibility, which is a necessary, but only partially sufficient, criterion for being considered commonsense \cite{chalier2020joint}. In the following, we evaluate samples from the generated resources along two \textit{precision} dimensions, typicality (top-100 assertions per subject) and saliency (top-10 assertions per subject). We also evaluate \textit{recall}, by measuring the degree to which each resource covers the statements in a human-generated ground truth. \paragraph{Precision: Typicality and saliency} Following~\citet{quasimodo,ascentpp}, we assess assertions in the CSK resources along two precision dimensions: \textit{typicality} and \textit{saliency}, which measure the degree of truth and the degree of relevance of assertions, respectively. We use the Amazon Mechanical Turk (AMT) platform to obtain human judgements. Each dimension is evaluated on a 4-point Likert scale with an option for \textit{no judgement} if the annotator is not familiar with the concepts. Assertions are transformed into human-readable sentences using the templates introduced by \citet{comet-atomic-2020}. Each assignment is done by three different workers. Following~\citet{comet-atomic-2020}, any CSK assertion that receives one of the two higher scores on the Likert scale is labelled as \textit{Typical} or \textit{Salient}, and one of the two lower scores as \textit{Untypical} or \textit{Unsalient}. The final judgement is based on a majority vote. As for the sampling process: for typicality, we draw 500 assertions from each resource, restricted to the top-100 assertions per subject; for saliency, we pick 500 random samples from the pool of top-10 assertions per subject.
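For clarity, the following sketch summarizes this per-assertion label aggregation. It is illustrative only: the exact handling of ties and of \textit{no judgement} responses is an assumption of the sketch.
{\small
\begin{verbatim}
# Illustrative sketch of the per-assertion label aggregation
# (scores: 1-4 Likert values from three workers; None = no judgement).
from collections import Counter

def aggregate(scores, positive="Typical", negative="Untypical"):
    labels = [positive if s >= 3 else negative
              for s in scores if s is not None]
    # Majority vote over the worker-level labels.
    return Counter(labels).most_common(1)[0][0] if labels else None

print(aggregate([4, 3, 1]))     # -> 'Typical'
print(aggregate([2, None, 1]))  # -> 'Untypical'
\end{verbatim}
}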
Results are reported in the left part of Table~\ref{tab:csk-eval}. We see a significant drop in the quality of assertions in the LM-based generations compared to the training resources. Regarding the neural models, for both training CSKBs, the BART models demonstrate better typicality than the GPT2-XL ones. Assertions in BART-\ascentpp{} also have significantly better saliency than in GPT2-XL-\ascentpp{}. Interestingly, BART-\conceptnet{} is nearly on par with \ascentpp{} on both metrics. \begin{table*}[t] \centering \small \begin{tabular}{rrrrrrrrr} \toprule \multirow{2}{*}{\textbf{Resource}} & \multicolumn{2}{c}{\textbf{Typicality@100}} & \multicolumn{2}{c}{\textbf{Saliency@10}} & \multicolumn{3}{c}{\textbf{Recall@100}} & \textbf{Size@100} \\ \cmidrule(l){2-3} \cmidrule(l){4-5} \cmidrule(l){6-8} \cmidrule(l){9-9} & \textbf{Typical} & \textbf{Untypical} & \textbf{Salient} & \textbf{Unsalient} & \textbf{t=0.96} & \textbf{t=0.98} & \textbf{t=1.00} & \textbf{\#triples} \\ \cmidrule{1-1} \cmidrule(l){2-3} \cmidrule(l){4-5} \cmidrule(l){6-8} \cmidrule(l){9-9} \ascentpp{} & \textbf{78.4} & \textbf{11.0} & \textbf{62.8} & \textbf{34.6} & \textbf{8.9} & \textbf{7.9} & \textbf{4.6} & 202,026 \\ GPT2-XL-\ascentpp{} & 57.2 & 27.4 & 37.2 & 58.4 & 6.0 & 4.9 & 2.6 & 1,091,662 \\ BART-\ascentpp{} & 69.8 & 17.4 & 50.6 & 42.6 & 2.6 & 1.9 & 1.0 & 1,092,600 \\ \cmidrule{1-1} \cmidrule(l){2-3} \cmidrule(l){4-5} \cmidrule(l){6-8} \cmidrule(l){9-9} \conceptnet{} & \textbf{93.6} & \textbf{3.6} & \textbf{80.0} & \textbf{16.8} & 2.3 & 1.7 & 0.9 & 164,291 \\ GPT2-XL-\conceptnet{} & 66.6 & 21.4 & 63.8 & 32.6 & \textbf{9.0} & \textbf{7.3} & \textbf{3.8} & 967,343 \\ BART-\conceptnet{} & 72.6 & 17.0 & 63.4 & 33.4 & 5.3 & 3.7 & 1.0 & 1,092,600 \\ \bottomrule \end{tabular} \caption{Intrinsic evaluation (typicality, saliency and recall, in \%) and size of the CSK resources.} \label{tab:csk-eval} \end{table*} \paragraph{Recall} We reuse the CSLB dataset~\cite{devereux2014centre} that was processed by~\citet{ascentpp} as ground truth for the recall evaluation. The CSLB dataset consists of 22.6K human-written sentences about property norms of 638 concepts. To account for minor reformulations, following \citet{ascentpp}, we use embedding-based similarity to match ground-truth sentences with statements in the CSK resources. We specifically rely on precomputed SentenceTransformers embeddings~\cite{sbert}. We also restrict all CSK resources to the top-100 assertions per subject. The evaluation results are shown in the right part of Table~\ref{tab:csk-eval}, where we report recall at similarity thresholds $0.96$, $0.98$ and $1.0$, as well as resource size. We also plot the recall values at different top-N assertions per subject in Figure~\ref{fig:recal-vs-size}, with similarity threshold $t=0.98$. As one can see, \ascentpp{} outperforms both \comet{} models trained on it, even though it is significantly smaller. We see the opposite with the \conceptnet{}-based resources, where the \comet{} models generate resources of better coverage than their training data. Our presumption is that LMs profit more from manually curated resources like \conceptnet{}, but hardly add value to resources that were extracted from the web, as the LMs have not seen fundamentally different text. Furthermore, in contrast to precision, the GPT2-XL models have better results than the BART models in terms of recall, on both input CSKBs. \begin{figure}[t] \centering \includegraphics[width=\columnwidth, trim =1cm 0 1.5cm 1.2cm,clip]{figures/recall-vs-size.pdf} \caption{Resource recall in relation to resource size, at similarity threshold $t=0.98$. } \label{fig:recal-vs-size} \end{figure}
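The matching step of this recall computation can be sketched as follows. The sketch is illustrative only: the encoder name, the verbalization of the triples and the toy sentences are placeholders, not the exact setup used for Table~\ref{tab:csk-eval}.
{\small
\begin{verbatim}
# Illustrative sketch of embedding-based recall matching
# (encoder name and toy sentences are placeholders).
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("all-MiniLM-L6-v2")   # placeholder model

gt_sentences = ["a rabbit lives in a meadow"]       # CSLB ground truth
kb_sentences = ["rabbit is located at meadow",      # verbalized triples
                "rabbit is capable of hopping"]

gt_emb = encoder.encode(gt_sentences, convert_to_tensor=True)
kb_emb = encoder.encode(kb_sentences, convert_to_tensor=True)

sims = util.cos_sim(gt_emb, kb_emb)                 # |GT| x |KB| matrix
threshold = 0.98
matched = (sims.max(dim=1).values >= threshold)
recall = float(matched.float().mean())              # fraction of GT covered
print(f"Recall@t={threshold}: {recall:.2f}")
\end{verbatim}
}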
\subsection{Qualitative observations} A strength of LMs is their ability to generate an open-ended set of objects, even for subjects seen rarely or not at all in the training data. For example, while \conceptnet{} stores only one location for \textit{rabbit}: \textit{``a meadow''}, both BART- and GPT2-XL-\conceptnet{} can generalize to other correct locations, such as \textit{wilderness}, \textit{zoo}, \textit{cage}, \textit{pet store}, etc. In the recall evaluation, we pointed out that \conceptnet{}, a manually built CSK resource of relatively small size, benefits considerably from LM generations, as they improve the coverage of the resource substantially. However, as indicated by the precision evaluation, LM generations are generally of lower precision than the assertions in the training data. Common error categories we observe are: \begin{itemize} \item \textbf{Co-occurrence misreadings:} LMs frequently predict objects that merely co-occur frequently with the subject, e.g., \triple{locomotive, atLocation, bus stop}, \triple{running, capableOf, put on shoes}, \triple{war, desires, kill people}, \triple{supermarket, capableOf, buy milk}. \item \textbf{Subject-object-copying}: LMs too often repeat the given subject in predictions. For instance, 45 of 130 objects generated by BART-\conceptnet{} for the subject \textit{chicken} also contain \textit{chicken}, such as \triple{chicken, CapableOf, kill/eat/cook chicken} or \triple{chicken, UsedFor, feed chicken}. \item \textbf{Quantity confusion}: LMs struggle to distinguish quantities. For example, GPT2-XL-\conceptnet{} generates that \textit{bike} has \textit{four wheels} (top-1 prediction), and then also \textit{two wheels} (rank 3), \textit{three wheels} (rank 4) and \textit{twelve wheels} (rank 5). This weakness in dealing with numbers is a known issue of embedding-based approaches \cite{numbers-embeddings}. \item \textbf{Redundancy}: Generated objects often overlap, bloating the output with redundancies. Most common are repetitions of singular/plural nouns, e.g., the top-2 generations by BART-\conceptnet{} for \textit{doctor-CapableOf}: \textit{``visit patient''} and \textit{``visit patients''}. Redundancies also include paraphrases, e.g., \triple{doctor, CapableOf, visit patients / see patients}; or \triple{doctor, CapableOf, prescribe medication / prescribe drug / prescribe medicine} (GPT2-XL-\ascentpp{} generations). Clustering might alleviate this issue \cite{ascentpp}. \end{itemize} \subsection{Downstream use of materialized resources} Beyond systematic evaluation, materialized resources enable a wide set of downstream use cases, for example context-enriched zero-shot question answering~\cite{petroni2020context}, or KB-based commonsense explanation~\cite{semeval-csk-explanation}. We illustrate four types of basic analyses enabled by the resources: (1) frequency aggregation, (2) join queries, (3) ranking and (4) text search. \paragraph{Frequency aggregation} Materialized resources enable counting frequencies. In Table~\ref{tab:common-objects}, we show the three most common objects for each predicate in the GPT2-XL-\conceptnet{} resource. Interestingly, the third most common location of items in the KB is \textit{``sock drawer''}, which is only ranked as the 190\textsuperscript{th} most common location in \conceptnet{}. Similarly, the top-3 objects for \textit{CapableOf} in the generated KB rarely occur in the training data.
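Such aggregations can be computed directly over the released files; a minimal sketch is shown below. The tab-separated file name and column layout are placeholders, not necessarily the exact format of the published dumps.
{\small
\begin{verbatim}
# Sketch of the frequency aggregation (assumes a TSV dump of a
# materialized resource with columns subject, predicate, object,
# score; the file name is a placeholder).
import csv
from collections import Counter, defaultdict

counts = defaultdict(Counter)
with open("gpt2xl-conceptnet.tsv", encoding="utf-8") as f:
    for subject, predicate, obj, score in csv.reader(f, delimiter="\t"):
        counts[predicate][obj] += 1

for predicate, ctr in counts.items():
    print(predicate, ctr.most_common(3))   # three most common objects
\end{verbatim}
}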
\paragraph{Join queries} One level further, materialized knowledge enables the construction of join queries. For example, we can formulate conjunctive queries like: \begin{itemize} \item Animals that eat themselves include \textit{chicken}, \textit{flies}, \textit{grasshopper}, \textit{mice}, \textit{penguin}, \textit{worm}. \item The most frequent subevents of subevents are: \textit{breathe}, \textit{swallow}, \textit{hold breath}, \textit{think}, \textit{smile}. \item The most common parts of locations are: \textit{beaches}, \textit{seeds}, \textit{lot of trees}, \textit{peel}, \textit{more than one meaning}. \end{itemize} \paragraph{Ranking} Since statements in our materialized resources come with scores, it becomes possible to rank assertions locally and globally, or to compare statements pairwise. For example, in GPT2-XL-\conceptnet{}, the triple \triple{librarian, AtLocation, library}, which is at rank 140, has a score of $-0.048$, which is much higher than that of \triple{elephant, CapableOf, climb tree} (score = $-0.839$, ranked 638,048 globally). \paragraph{Text search} Finally, we can use the materialized resources for text search. For example, we can search in GPT2-XL-\conceptnet{} for all assertions that include the term \textit{``airplane''}, finding expected matches like \triple{airplane, AtLocation, airport} and \triple{flight attendant, CapableOf, travel on airplane}, as well as surprising ones like \triple{scrap paper, UsedFor, making paper airplane} and \triple{traveling, HasSubevent, sleeping on airplane}. \begin{table}[t] \centering \scriptsize \begin{tabular}{lp{0.62\columnwidth}} \toprule \textbf{Predicate} & \textbf{Most common objects} \\ \midrule AtLocation & desk (3210), cabinet (2481), sock drawer (1771) \\ \midrule CapableOf & branch out (963), branch off (747), taste good (556) \\ \midrule Causes & death (2504), tears (1290), happiness (1254) \\ \midrule Desires & eat (949), have fun (816), sex (742) \\ \midrule HasA & more than one meaning (1387), seeds (1316), peel (1170) \\ \midrule HasPrerequisite & metal (1965), plastic (1594), water (1423) \\ \midrule HasProperty & good (2615), useful (2585), good for (1746) \\ \midrule HasSubevent & breathe (1006), swallow (721), take off shoes (658) \\ \midrule MadeOf & plastic (1427), aluminum (1297), wood (905) \\ \midrule MotivatedByGoal & have fun (994), enjoyment (493), succeed (444) \\ \midrule PartOf & new testament (914), human experience (683), alabama (667) \\ \midrule ReceivesAction & found in house (1110), eaten (800), found in hospital (779) \\ \midrule UsedFor & cooking (627), decoration (454), transport (448) \\ \bottomrule \end{tabular} \caption{Most common objects generated by GPT2-XL-\conceptnet{}. Numbers in parentheses indicate the frequency of the corresponding objects.} \label{tab:common-objects} \end{table} \section{Conclusion} We introduced four CSKBs computed using two \comet{} models (BART and GPT2-XL) trained on two existing CSK resources (\conceptnet{} and \ascentpp{}). Our findings are: \begin{enumerate} \item The \comet{} methodology produces better results on modest-sized manually curated resources (\conceptnet{}) than on larger web-extracted resources (\ascentpp{}). \item \comet{}'s recall can significantly outperform that of modest-sized manually curated resources (\conceptnet{}), and reach that of large web-extracted ones (\ascentpp{}). \item In terms of precision, a significant gap to manual curation remains, both in typicality and saliency. Compared with web extraction, a moderate gap remains in terms of statement typicality.
\end{enumerate} We also identified common problems of the \comet{} generations, such as co-occurrence misreadings, subject copying, and redundancies, which may be the subject of further research on post-filtering and clustering. \begin{figure*}[t] \centering \frame{\includegraphics[width=\textwidth]{figures/snapshot.png}} \caption{Web interface showing the top-10 assertions per predicate in six CSK resources. The number in grey next to a CSKB indicates the total number of assertions for the corresponding subject-predicate pair in the KB.} \label{fig:interface} \end{figure*} \bibliography{references} \bibliographystyle{acl_natbib} \end{document}
https://openreview.net/forum?id=HI5M4MYedZ5
https://arxiv.org/abs/2112.14815
Please evaluate the proposed method for generating commonsense knowledge directly from pre-trained language models, specifically in terms of its novelty, reliability, and comparison to existing resources.
Strong paper that fills a gap in current commonsense knowledge extraction work

This paper proposes to materialize neural commonsense predictions with COMET into concrete resources. The paper investigates two SotA knowledge bases (ConceptNet and ASCENT++) and two standard language models (GPT-2-XL and BART). The evaluation estimates the precision of the generated knowledge through salience and typicality, and the recall by comparison against a feature norms dataset, CSLB. The results indicate the promise of this approach, but also point to key obstacles in terms of redundancy, subject copying, and co-occurrence misreading. The paper is overall well-written and original, and the evaluation is solid. The challenges pointed out are thought-provoking. The paper size is in between a short and a long paper, so it is unclear to me whether this paper qualifies as long or short. If this is meant to be a long paper, it would be good to include more discussion of how the authors would propose to circumvent the key challenges of this knowledge base generation method. These mitigation strategies are currently only briefly listed in the conclusion, which leaves many questions unanswered. Furthermore, some quantitative investigation of how downstream applications would benefit from the created resources despite these challenges would be useful.
Rating: 9: Top 15% of accepted papers, strong accept
Confidence: 5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature
Please evaluate the paper based on its proposed approach of materializing neural commonsense predictions with COMET into concrete resources, including the investigation of knowledge bases and language models, and the evaluation of precision and recall.
Trained COMET models analyzed for precision+recall and used to generate new commonsense KGs -- but not clear why we need this resource?

What the paper is about: The authors offer a new resource generated from COMET models trained on commonsense knowledge graphs like ConceptNet and Ascent++. They study not just plausibility but also the precision (typicality and saliency) as well as the recall of the models' predictions. They analyze different base LMs and datasets on these metrics and offer insights. Finally, they demonstrate a web interface with wider customizations than the original one hosted by AI2.
Key shortcoming: The authors call it a resource paper (L043). However, the benefit of the new "generated" commonsense knowledge graphs is not well established. Section 4.3 hints at some use cases like aggregation, joins, ranking, and text search. But the benefit of having a static set of predictions (this new resource) is not clear. (1) How are these better than the base KGs like ConceptNet and Ascent++? Perhaps they are bigger, but they are not always more salient/typical/exhaustive than the original KGs (see Table 2). (2) How are these better than retaining the trained COMET model, which can generate such inferences and many more, on demand?
Pros: good analysis + useful resource. Cons: usefulness of the resource not demonstrated.
EDIT: Reviewer jbiB rightly points to a major missing related work, which further challenges the paper's claim to novelty.
Minor:
- Should you be referring to Untypical as Atypical instead? I was fairly confident that the latter is "correct" but words have no inherent meaning anyway so this is up to the authors.
- Saliency and Typicality could each benefit from a formal definition (in English, not just in a formula). Do they differ just in the values of k for the top-k extractions?
Rating: 5: Marginally below acceptance threshold
Confidence: 4: The reviewer is confident but not absolutely certain that the evaluation is correct
\pdfoutput=1 \documentclass[11pt]{article} \usepackage{acl} \usepackage{times} \usepackage{latexsym} \usepackage[T1]{fontenc} \usepackage[utf8]{inputenc} \usepackage{microtype} \usepackage{graphicx} \usepackage{booktabs} \usepackage{multirow} \newcommand{\sr}[1]{{\textcolor{violet}{SR: #1}}} \newcommand{\ph}[1]{{\textcolor{orange}{Ph: #1}}} \newcommand{\ascentpp}{\textsc{Ascent++}} \newcommand{\conceptnet}{\textsc{ConceptNet}} \newcommand{\comet}{\textsc{Comet}} \newcommand{\atomic}{\textsc{Atomic}} \newcommand{\triple}[1]{\emph{$\langle$#1$\rangle$}} \renewcommand{\paragraph}[1]{\smallskip\noindent\textbf{#1.\mbox{\ \ }}} \title{Materialized Knowledge Bases from Commonsense Transformers} \author{Tuan-Phong Nguyen \\ Max Planck Institute for Informatics \\ Saarland Informatics Campus \\ Saarbrücken, Germany \\ \texttt{tuanphong@mpi-inf.mpg.de} \And Simon Razniewski \\ Max Planck Institute for Informatics \\ Saarland Informatics Campus \\ Saarbrücken, Germany \\ \texttt{srazniew@mpi-inf.mpg.de}} \begin{document} \maketitle \begin{abstract} Starting from the \comet{} methodology by \citet{bosselut2019comet}, generating commonsense knowledge from commonsense transformers has recently received significant attention. Surprisingly, up to now no materialized resource of commonsense knowledge generated this way is publicly available. This paper fills this gap, and uses the materialized resources to perform a detailed analysis of the potential of this approach in terms of precision and recall. Furthermore, we identify common problem cases, and outline use cases enabled by materialized resources. We posit that the availability of these resources is important for the advancement of the field, as it enables an off-the-shelf-use of the resulting knowledge, as well as further analyses on its strengths and weaknesses. \end{abstract} \section{Introduction} Compiling comprehensive collections of commonsense knowledge (CSK) is an old dream of AI. Besides attempts at manual compilation~\cite{liu2004conceptnet,lenat1995cyc,atomic} and text extraction~\cite{schubert2002can,webchild,mishra2017domain,quasimodo,ascentpp}, commonsense knowledge compilation from pretrained language models~\cite{bosselut2019comet,comet-atomic-2020,west2021symbolic} has recently emerged. In \citeyear{bosselut2019comet}, \citeauthor{bosselut2019comet} introduced \textit{Commonsense Transformers} (\comet{}), an approach for fine-tuning language models on existing corpora of commonsense assertions. These models have shown promising performance in generating commonsense assertions after being trained on established human-authored commonsense resources such as \atomic~\cite{atomic} and \atomic$^{20}_{20}$~\cite{comet-atomic-2020}. More recently, \citet{west2021symbolic} extracts commonsense assertions from a general language model, GPT-3~\cite{GPT3}, using simple prompting techniques. Surprisingly, using this machine-authored commonsense corpus to fine-tune \comet{} helps it outperform GPT-3, which is 100x larger in size, in terms of commonsense capabilities. Despite the prominence of this approach (the seminal \comet{} paper~\cite{bosselut2019comet} receiving over 300 citations in just two years), to date, no resource containing commonsense knowledge compiled from any \comet{} model is publicly available. As compilation of such a resource is a non-trivial endeavour, this is a major impediment to research that aims to understand the potentials of the approach, or intends to employ its outputs in downstream tasks. 
This resource paper fills this gap. We fine-tune the \comet{} pipeline on two established resources of concept-centric CSK assertions, \conceptnet{} \cite{speer2017conceptnet} and \ascentpp{} \cite{ascentpp}, and execute the pipeline for 10K prominent subjects. Unlike the \atomic{} resources, which were used to train \comet{} in \cite{bosselut2019comet,comet-atomic-2020} and have their main focus on events and social interactions, the two resources of choice are mostly about general concepts (e.g., \textit{lions can roar}, or \textit{a car has four wheels}). Furthermore, as those two resources were constructed with two fundamentally different methods, crowdsourcing and web text extraction, they allow us to examine the potentially different effects each has on the \comet{} models. By taking the top-10 inferences for each subject-predicate pair, we obtain four resources, \conceptnet{} (GPT2-XL, BART) and \ascentpp{} (GPT2-XL, BART), containing 900K to 1.4M ranked assertions of CSK. We perform a detailed evaluation of the intrinsic quality, including fine-grained precision (typicality and saliency) and recall of each resource, derive qualitative insights into the strengths and weaknesses of the approach, and highlight extrinsic use cases enabled by the resources. \pagebreak Our contributions are: \begin{enumerate} \item The materialization of the \comet{} approach for two language models (GPT2-XL, BART) on two concept-centered commonsense knowledge bases (\conceptnet{}, \ascentpp{}); \item Quantitative and qualitative evaluations of the resulting resources in terms of precision, recall and error categories, showing that in terms of recall, \comet{} models outperform crowdsourced construction and are competitive with web text extraction, while exhibiting moderate gaps in terms of precision to both; \item Illustrative use cases of the materialized resources in statement aggregation, join queries, and search. \end{enumerate} The materialized resources, as well as an interactive browsing interface, are available at\linebreak {\small \url{https://ascentpp.mpi-inf.mpg.de/comet}}. \section{Related work} Early approaches to CSK compilation relied on expert knowledge engineers \cite{lenat1995cyc} or crowdsourcing \cite{liu2004conceptnet}, and the latter approach has recently been revived \cite{atomic}. To overcome the scalability limitations of manual compilation, text extraction is a second popular paradigm. Following early attempts on linguistic corpora \cite{mishra2017domain}, approaches have increasingly targeted larger text corpora like Wikipedia, book scans, or web documents \cite{webchild,quasimodo,ascentpp,ascent} to build CSK resources of wide coverage and quality. Recently, both approaches have been complemented by knowledge extraction from pre-trained language models: Language models like BERT~\cite{devlin2019bert} or GPT~\cite{radford2019language, GPT3} have seen millions of documents, and latently store associations among terms. While \citet{west2021symbolic} used prompting to extract symbolic CSK from GPT-3, \citet{bosselut2019comet} proposed to tap this knowledge by supervised learning: The language models are fine-tuned on statements from existing knowledge resources, e.g., trained to predict the object \textit{Africa} when given the subject-predicate pair \textit{elephant, AtLocation}, based on the \conceptnet{} triple \triple{elephant, AtLocation, Africa}. After training, they can be used to predict objects for unseen subject-predicate pairs, e.g., locations of wombats.
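To make this fine-tuning setup concrete, the sketch below shows how such triples could be linearized into prompt--target pairs for an autoregressive language model. The \texttt{[GEN]} and \texttt{[EOS]} markers and the exact separators are assumptions for illustration; the official \comet{} code may use a different format.
\begin{verbatim}
# Sketch: linearize KB triples into prompt/target pairs for fine-tuning
# an autoregressive LM. The "[GEN]"/"[EOS]" markers are illustrative
# assumptions, not necessarily the tokens used in the official code.
triples = [
    ("elephant", "AtLocation", "Africa"),
    ("car", "HasA", "four wheels"),
]

def linearize(subject, predicate, obj):
    prompt = f"{subject} {predicate} [GEN]"   # model input
    target = f" {obj} [EOS]"                  # completion to be learned
    return prompt, target

for s, p, o in triples:
    print(linearize(s, p, o))
\end{verbatim}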
The approach gained significant attention, and variants are employed in a range of downstream tasks, e.g., commonsense question answering \cite{bosselut2019dynamic}, commonsense explanation~\cite{semeval-csk-explanation}, story generation \cite{guan2020knowledge}, or video captioning~\cite{fang2020video2commonsense}. Yet, to date, no materialized knowledge resource produced by any \comet{} model is available (\textsc{AutoTOMIC} from \cite{west2021symbolic} being based on prompting GPT-3). The closest to this is a web interface hosted by the Allen Institute for AI at {\small \url{https://mosaickg.apps.allenai.org/model_comet2020_entities}}. However, this interface visualizes only predictions for a single subject, making, e.g., aggregation or counting impossible, and it shows only the top-5 predictions, without scores. \section{Methodology} We follow the implementations in the official code repository\footnote{\url{https://github.com/allenai/comet-atomic-2020/}} of the \textsc{Comet-Atomic}$_{20}^{20}$ project~\cite{comet-atomic-2020} to compute assertions and to decide on output thresholds. \paragraph{Training CSKBs} We use two established concept-centered commonsense knowledge bases (CSKBs), \conceptnet{} 5.7~\cite{speer2017conceptnet} and \ascentpp{}~\cite{ascentpp}, as training resources, considering 13 CSK predicates from each of them: \textit{AtLocation}, \textit{CapableOf}, \textit{Causes}, \textit{Desires}, \textit{HasA}, \textit{HasPrerequisite}, \textit{HasProperty}, \textit{HasSubevent}, \textit{MadeOf}, \textit{MotivatedByGoal}, \textit{PartOf}, \textit{UsedFor} and \textit{ReceivesAction}. \begin{enumerate} \item \conceptnet{}~\cite{speer2017conceptnet} is arguably the most widely used CSKB, built by crowdsourcing. \conceptnet{} 5.7 is its latest version\footnote{\url{https://github.com/commonsense/conceptnet5/wiki/Downloads}}, consisting of 21 million multilingual assertions, spanning CSK as well as general linguistic and taxonomic knowledge. We retain English assertions only, resulting in 207,210 training assertions for the above-mentioned predicates. \item \ascentpp{}~\cite{ascentpp} is a project aiming at automated CSK extraction from large-scale web content based on open information extraction (OpenIE) and judicious cleaning and ranking approaches. The \ascentpp{} KB consists of 2 million English CSK assertions for the 13 mentioned predicates. \end{enumerate} \paragraph{Language models} We consider two autoregressive language models (LMs) that were also used in the original \comet{} paper, GPT2-XL~\cite{radford2019language} and BART~\cite{lewis2019bart}. \paragraph{Materialization process} We query the fine-tuned \comet{} models for 10,926 subjects in \conceptnet{} that have at least two assertions for the 13 CSK predicates. For each subject-predicate pair, we use beam search to obtain completions, with different configurations (see Table~\ref{tab:configs}) for BART and GPT2-XL, following the parameters specified in the published code repository and models. We retain the top-10 completions for each subject-predicate pair, with their \textit{beam scores} (i.e., the sum of the log softmax of all generated tokens) returned by the \textit{generate} function\footnote{\url{https://huggingface.co/docs/transformers/main/en/main\_classes/text\_generation\#transformers.generation\_utils.GenerationMixin.generate}} of the Transformers library~\cite{transformers}.
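As a concrete sketch of this step, the snippet below retrieves the top-10 beam-search completions and their beam scores through the \textit{generate} function. The model name \texttt{gpt2} and the prompt format are placeholders for a fine-tuned checkpoint; the decoding parameters follow the GPT2-XL column of Table~\ref{tab:configs}.
\begin{verbatim}
# Sketch: top-10 beam-search completions with beam scores. "gpt2" and
# the prompt format are placeholder assumptions for a fine-tuned model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("elephant AtLocation [GEN]", return_tensors="pt")
with torch.no_grad():
    out = model.generate(
        **inputs,
        num_beams=10,
        num_return_sequences=10,
        max_length=16,
        early_stopping=True,
        do_sample=False,
        return_dict_in_generate=True,
        output_scores=True,
        pad_token_id=tokenizer.eos_token_id,
    )

# sequences_scores holds the beam score of each returned completion
for seq, score in zip(out.sequences, out.sequences_scores):
    print(round(score.item(), 3),
          tokenizer.decode(seq, skip_special_tokens=True))
\end{verbatim}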
\paragraph{Output} The resulting resources, \conceptnet{} (GPT2-XL, BART) and \ascentpp{} (GPT2-XL, BART), contain 976,296, 1,420,380, 1,271,295 and 1,420,380 assertions after deduplication, respectively, together with their corresponding beam scores. All are available for browsing, as well as for download, at {\small \url{https://ascentpp.mpi-inf.mpg.de/comet}} (see the screenshot of the browsing interface in Figure~\ref{fig:interface}). \begin{table}[t] \centering \small \begin{tabular}{lrr} \toprule \textbf{Parameter} & \textbf{GPT2-XL} & \textbf{BART} \\ \midrule num\_beams & 10 & 10 \\ temperature & 1.0 & 1.0 \\ top\_p & 0.9 & 1.0 \\ repetition\_penalty & 1.0 & 1.0 \\ max\_length & 16 & 24 \\ no\_repeat\_ngram\_size & 0 & 3 \\ early\_stopping & True & True \\ do\_sample & False & False \\ \bottomrule \end{tabular} \caption{Configurations for beam-search decoders.} \label{tab:configs} \end{table} \section{Analysis} We perform three kinds of analyses: (1) a quantitative evaluation of the intrinsic quality of the assertions, based on crowdsourcing, (2) a qualitative evaluation that outlines major strengths and weaknesses, and (3) an illustration of use cases enabled by both resources. \subsection{Quantitative evaluation} The original paper \cite{bosselut2019comet} only evaluated the top-1 triple per subject-predicate pair. Furthermore, it solely evaluated triples by plausibility, which is a necessary, but only partially sufficient, criterion for being considered commonsense \cite{chalier2020joint}. In the following, we evaluate samples from the generated resources along two \textit{precision} dimensions, typicality (top-100 assertions per subject) and saliency (top-10 assertions per subject). We also evaluate \textit{recall}, by measuring the degree to which each resource covers the statements in a human-generated ground truth. \paragraph{Precision: Typicality and saliency} Following~\citet{quasimodo,ascentpp}, we assess assertions in the CSK resources along two precision dimensions: \textit{typicality} and \textit{saliency}, which measure the degree of truth and the degree of relevance of assertions, respectively. We use the Amazon Mechanical Turk (AMT) platform to obtain human judgements. Each dimension is evaluated on a 4-point Likert scale, with an option for \textit{no judgement} if the annotator is not familiar with the concepts. Assertions are transformed into human-readable sentences using the templates introduced by \citet{comet-atomic-2020}. Each assignment is done by three different workers. Following~\citet{comet-atomic-2020}, any CSK assertion that receives one of the two higher scores on the Likert scale is labelled as \textit{Typical} or \textit{Salient}, and one of the two lower scores as \textit{Untypical} or \textit{Unsalient}. The final judgement is based on a majority vote. As for the sampling process: for typicality, we draw 500 assertions from each resource restricted to the top-100 assertions per subject; for saliency, we pick 500 random samples from the pool of the top-10 assertions per subject. Results are reported in the left part of Table~\ref{tab:csk-eval}. We see a significant drop in the quality of assertions in the LM-based generations compared to the training resources. In terms of the neural models, for both training CSKBs, the BART models demonstrate better typicality than the GPT2-XL ones. Assertions in BART-\ascentpp{} also have significantly better saliency than in GPT2-XL-\ascentpp{}.
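For concreteness, the aggregation of the three worker judgements described above can be sketched as follows; the numeric encoding of the 4-point scale, the threshold, and the tie handling are illustrative assumptions rather than the exact annotation pipeline.
\begin{verbatim}
# Sketch: map three 4-point Likert judgements (1 = lowest, 4 = highest,
# None = "no judgement") to a Typical/Untypical label by majority vote.
# Encoding, threshold and tie handling are illustrative assumptions.
def label(scores):
    votes = ["Typical" if s >= 3 else "Untypical"
             for s in scores if s is not None]
    if not votes:
        return None
    return max(set(votes), key=votes.count)

print(label([4, 3, 1]))     # Typical
print(label([2, 1, None]))  # Untypical
\end{verbatim}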
Interestingly, BART-\conceptnet{} is nearly on par with \ascentpp{} on both metrics. \begin{table*}[t] \centering \small \begin{tabular}{rrrrrrrrr} \toprule \multirow{2}{*}{\textbf{Resource}} & \multicolumn{2}{c}{\textbf{Typicality@100}} & \multicolumn{2}{c}{\textbf{Saliency@10}} & \multicolumn{3}{c}{\textbf{Recall@100}} & \textbf{Size@100} \\ \cmidrule(l){2-3} \cmidrule(l){4-5} \cmidrule(l){6-8} \cmidrule(l){9-9} & \textbf{Typical} & \textbf{Untypical} & \textbf{Salient} & \textbf{Unsalient} & \textbf{t=0.96} & \textbf{t=0.98} & \textbf{t=1.00} & \textbf{\#triples} \\ \cmidrule{1-1} \cmidrule(l){2-3} \cmidrule(l){4-5} \cmidrule(l){6-8} \cmidrule(l){9-9} \ascentpp{} & \textbf{78.4} & \textbf{11.0} & \textbf{62.8} & \textbf{34.6} & \textbf{8.9} & \textbf{7.9} & \textbf{4.6} & 202,026 \\ GPT2-XL-\ascentpp{} & 57.2 & 27.4 & 37.2 & 58.4 & 6.0 & 4.9 & 2.6 & 1,091,662 \\ BART-\ascentpp{} & 69.8 & 17.4 & 50.6 & 42.6 & 2.6 & 1.9 & 1.0 & 1,092,600 \\ \cmidrule{1-1} \cmidrule(l){2-3} \cmidrule(l){4-5} \cmidrule(l){6-8} \cmidrule(l){9-9} \conceptnet{} & \textbf{93.6} & \textbf{3.6} & \textbf{80.0} & \textbf{16.8} & 2.3 & 1.7 & 0.9 & 164,291 \\ GPT2-XL-\conceptnet{} & 66.6 & 21.4 & 63.8 & 32.6 & \textbf{9.0} & \textbf{7.3} & \textbf{3.8} & 967,343 \\ BART-\conceptnet{} & 72.6 & 17.0 & 63.4 & 33.4 & 5.3 & 3.7 & 1.0 & 1,092,600 \\ \bottomrule \end{tabular} \caption{Intrinsic evaluation (Typicality, Saliency, and Recall, in \%) and size of CSK resources.} \label{tab:csk-eval} \end{table*} \paragraph{Recall} We reuse the CSLB dataset~\cite{devereux2014centre} that was processed by~\citet{ascentpp} as ground truth for the recall evaluation. The CSLB dataset consists of 22.6K human-written sentences about property norms of 638 concepts. To account for minor reformulations, following \citet{ascentpp}, we also use embedding-based similarity to match ground-truth sentences with statements in the CSK resources. We specifically rely on precomputed SentenceTransformers embeddings~\cite{sbert}. We also restrict all CSK resources to the top-100 assertions per subject. The evaluation results are shown in the right part of Table~\ref{tab:csk-eval}, where we report recall at similarity thresholds $0.96$, $0.98$ and $1.0$, as well as resource size. We also plot the recall values at different top-N assertions per subject in Figure~\ref{fig:recal-vs-size} with similarity threshold $t=0.98$. As one can see, \ascentpp{} outperforms both \comet{} models trained on it even though it is significantly smaller. We see the opposite with the \conceptnet{}-based resources, where the \comet{} models generate resources of better coverage than their training data. Our presumption is that the LMs profit more from manually curated resources like \conceptnet{}, but hardly add value to resources that were extracted from the web, as the LMs have not seen fundamentally different text. Furthermore, in contrast to precision, the GPT2-XL models achieve better results than the BART models in terms of recall, on both input CSKBs. \begin{figure}[t] \centering \includegraphics[width=\columnwidth, trim =1cm 0 1.5cm 1.2cm,clip]{figures/recall-vs-size.pdf} \caption{Resource recall in relation to resource size, at similarity threshold $t=0.98$.} \label{fig:recal-vs-size} \end{figure} \subsection{Qualitative observations} LMs have the strength to generate an open-ended set of objects, even for subjects seen rarely or not at all in the training data.
For example, while \conceptnet{} stores only one location for \textit{rabbit}: \textit{``a meadow''}, both BART- and GPT2-XL-\conceptnet{} can generalize to other correct locations, such as \textit{wilderness}, \textit{zoo}, \textit{cage}, \textit{pet store}, etc. In the recall evaluation, we pointed out that \conceptnet{}, a manually built CSK resource of relatively small size, benefits considerably from LM generations, as they substantially improve the coverage of the resource. However, as indicated in the precision evaluation, LM generations are generally of lower precision than those in the training data. Common error categories we observe are: \begin{itemize} \item \textbf{Co-occurrence misreadings:} LMs frequently predict values that merely co-occur frequently, e.g., \triple{locomotive, AtLocation, bus stop}, \triple{running, CapableOf, put on shoes}, \triple{war, Desires, kill people}, \triple{supermarket, CapableOf, buy milk}. \item \textbf{Subject-object-copying}: LMs too often repeat the given subject in predictions. For instance, 45 of 130 objects generated by BART-\conceptnet{} for the subject \textit{chicken} also contain \textit{chicken}, such as \triple{chicken, CapableOf, kill/eat/cook chicken} or \triple{chicken, UsedFor, feed chicken}. \item \textbf{Quantity confusion}: LMs struggle to distinguish quantities. For example, GPT2-XL-\conceptnet{} generates that \textit{bike} has \textit{four wheels} (top-1 prediction), and then also \textit{two wheels} (rank 3), \textit{three wheels} (rank 4) and \textit{twelve wheels} (rank 5). The difficulty of dealing with numbers is a known issue of embedding-based approaches \cite{numbers-embeddings}. \item \textbf{Redundancy}: Generated objects often overlap, bloating the output with redundancies. Most common are repetitions of singular/plural nouns, e.g., the top-2 generations by BART-\conceptnet{} for \textit{doctor-CapableOf}: \textit{``visit patient''} and \textit{``visit patients''}. Redundancies also include paraphrases, e.g., \triple{doctor, CapableOf, visit patients / see patients}; or \triple{doctor, CapableOf, prescribe medication / prescribe drug / prescribe medicine} (GPT2-XL-\ascentpp{} generations). Clustering might alleviate this issue \cite{ascentpp}. \end{itemize} \subsection{Downstream use of materialized resources} Beyond systematic evaluation, materialized resources enable a wide set of downstream use cases, for example context-enriched zero-shot question answering~\cite{petroni2020context}, or KB-based commonsense explanation~\cite{semeval-csk-explanation}. We illustrate four types of basic analyses enabled by materialization: (1) frequency aggregation, (2) join queries, (3) ranking, and (4) text search. \paragraph{Frequency aggregation} Materialized resources enable frequency counting. In Table~\ref{tab:common-objects}, we show the three most common objects for each predicate in the GPT2-XL-\conceptnet{} resource. Interestingly, the third most common location of items in the KB is \textit{``sock drawer''}, which is only ranked as the 190\textsuperscript{th} most common location in \conceptnet{}. Similarly, the top-3 objects for \textit{CapableOf} in the generated KB rarely occur in the training data. \paragraph{Join queries} One level further, materialized knowledge enables the construction of join queries.
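As a concrete sketch, both frequency aggregation and a simple self-join can be expressed over the materialized triples with a dataframe library; the column names and the toy data below are assumptions, not the actual download format.
\begin{verbatim}
# Sketch: frequency aggregation and a self-join over materialized triples,
# assuming a table with columns (subject, predicate, object, score).
import pandas as pd

kb = pd.DataFrame(
    [("chicken", "CapableOf", "lay eggs", -0.2),
     ("desk", "AtLocation", "office", -0.1),
     ("book", "AtLocation", "desk", -0.4),
     ("pen", "AtLocation", "desk", -0.3)],
    columns=["subject", "predicate", "object", "score"])

# Frequency aggregation: most common objects per predicate
print(kb.groupby(["predicate", "object"]).size()
        .sort_values(ascending=False).head(3))

# Join query: locations of locations (e.g., book -> desk -> office)
loc = kb[kb.predicate == "AtLocation"]
joined = loc.merge(loc, left_on="object", right_on="subject",
                   suffixes=("_outer", "_inner"))
print(joined[["subject_outer", "object_outer", "object_inner"]])
\end{verbatim}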
For example, we can formulate conjunctive queries like: \begin{itemize} \item Animals that eat themselves include \textit{chicken}, \textit{flies}, \textit{grasshopper}, \textit{mice}, \textit{penguin}, \textit{worm}. \item The most frequent subevents of subevents are: \textit{breathe}, \textit{swallow}, \textit{hold breath}, \textit{think}, \textit{smile}. \item The most common parts of locations are: \textit{beaches}, \textit{seeds}, \textit{lot of trees}, \textit{peel}, \textit{more than one meaning}. \end{itemize} \paragraph{Ranking} Since statements in our materialized resources come with scores, it becomes possible to locally and globally rank assertions, or to compare statements pairwise. For example, in GPT2-XL-\conceptnet{}, the triple \triple{librarian, AtLocation, library}, which is at rank 140, has a score of $-0.048$, which is much higher than that of \triple{elephant, CapableOf, climb tree} (score = $-0.839$, ranked 638,048 globally). \paragraph{Text search} Finally, we can use materialized resources for text search. For example, we can search in GPT2-XL-\conceptnet{} for all assertions that include the term \textit{``airplane''}, finding expected matches like \triple{airplane, AtLocation, airport} and \triple{flight attendant, CapableOf, travel on airplane}, as well as surprising ones like \triple{scrap paper, UsedFor, making paper airplane} and \triple{traveling, HasSubevent, sleeping on airplane}. \begin{table}[t] \centering \scriptsize \begin{tabular}{lp{0.62\columnwidth}} \toprule \textbf{Predicate} & \textbf{Most common objects} \\ \midrule AtLocation & desk (3210), cabinet (2481), sock drawer (1771) \\ \midrule CapableOf & branch out (963), branch off (747), taste good (556) \\ \midrule Causes & death (2504), tears (1290), happiness (1254) \\ \midrule Desires & eat (949), have fun (816), sex (742) \\ \midrule HasA & more than one meaning (1387), seeds (1316), peel (1170) \\ \midrule HasPrerequisite & metal (1965), plastic (1594), water (1423) \\ \midrule HasProperty & good (2615), useful (2585), good for (1746) \\ \midrule HasSubevent & breathe (1006), swallow (721), take off shoes (658) \\ \midrule MadeOf & plastic (1427), aluminum (1297), wood (905) \\ \midrule MotivatedByGoal & have fun (994), enjoyment (493), succeed (444) \\ \midrule PartOf & new testament (914), human experience (683), alabama (667) \\ \midrule ReceivesAction & found in house (1110), eaten (800), found in hospital (779) \\ \midrule UsedFor & cooking (627), decoration (454), transport (448) \\ \bottomrule \end{tabular} \caption{Most common objects generated by GPT2-XL-\conceptnet{}. Numbers in parentheses indicate frequency of the corresponding objects.} \label{tab:common-objects} \end{table} \section{Conclusion} We introduced four CSKBs computed using two COMET models (BART and GPT2-XL) trained on two existing CSK resources (\conceptnet{} and \ascentpp{}). Our findings are: \begin{enumerate} \item The \comet{} methodology produces better results on modest manually curated resources (\conceptnet{}) than on larger web-extracted resources (\ascentpp{}). \item \comet{}'s recall can significantly outperform that of modest manually curated ones (\conceptnet{}), and reach that of large web-extracted ones (\ascentpp{}). \item In terms of precision, a significant gap remains to manual curation, both in typicality and saliency. To web extraction, a moderate gap remains in terms of statement typicality. 
\end{enumerate} We also identified common problems of the \comet{} generations, such as co-occurrence misreadings, subject copying, and redundancies, which may be the subject of further research on post-filtering and clustering. \begin{figure*}[t] \centering \frame{\includegraphics[width=\textwidth]{figures/snapshot.png}} \caption{Web interface showing top-10 assertions per predicate in six CSK resources. The number in grey next to a CSKB indicates the total number of assertions for the corresponding subject-predicate pair in the KB.} \label{fig:interface} \end{figure*} \bibliography{references} \bibliographystyle{acl_natbib} \end{document}
https://openreview.net/forum?id=HI5M4MYedZ5
https://arxiv.org/abs/2112.14815
Please evaluate the usefulness and benefits of the new resource generated from COMET models trained on commonsense knowledge graphs, as described in the paper.
Good paper that shows scientific significance, accept **Summary** \ This paper investigates the inductive bias in ViT and CNN models. The authors claim that previous designs that inject convolution-like inductive bias into ViT models ignore that the optimal inductive bias depends on the data scale and that a fixed inductive bias may not be optimal. Experiments on different data scales of ImageNet illustrate that a smaller data scale is needed for ViT to outperform CNN if more convolution-like inductive bias is included. The paper also shows, by conducting Fourier analysis, that frequency characteristics can explain whether the inductive bias is closer to convolution or self-attention. Then the authors show that the interpolation of inductive bias between CNN and ViT can be realized by adjusting the moment of reparameterization during training. Based on the above findings, a progressive reparameterization scheduling is proposed to make the front layers act like convolution and the rear layers act like self-attention. Experiments on CIFAR-100 show the effectiveness of PRS. **Strengths** - Investigating the inductive bias injected in CNN and ViT models is crucial, and this paper presents a new idea of making the inductive bias flexible, which provides a new research direction. - The order of the paper and the logic in conducting this research are clear. The authors first try to understand the inductive bias by conducting the 'different data ratio' experiments and the Fourier analysis, and then propose the scheduling strategy of reparameterization based on the previous findings. - The tables, equations and figures used to demonstrate the PRS are clear. - The paper is generally well-written. **Weaknesses** - The experiments used to verify the effectiveness of PRS are only conducted on CIFAR-100. It would be good if more datasets could be included. - The legend in Figure 3 is misleading. The orange line should be Conv50, SA250? **Overall rating (1-10)** \ 7 **Justification of rating** \ The scientific significance of this paper. Rating: 7: Good paper, accept Confidence: 4: The reviewer is confident but not absolutely certain that the evaluation is correct
\documentclass[runningheads]{llncs} \usepackage{graphicx} \usepackage{cite} \usepackage{hyperref} \usepackage{tikz} \usepackage{comment} \usepackage{booktabs} \usepackage{amsmath,amssymb} % \usepackage{color} \usepackage{pifont} \usepackage{subcaption} \usepackage{booktabs} \usepackage{tabularx} \usepackage{multirow} \usepackage{makecell} \usepackage[accsupp]{axessibility} % \newcommand{\bfit}[1]{\textbf{\textit{#1}}} \newcommand{\floor}[1]{\left \lfloor #1 \right \rfloor} \newcommand{\xmark}{\ding{55}}% \newcommand{\cmark}{\ding{51}}% \newcommand{\samelineand}{\qquad} \newcommand*\samethanks[1][\value{footnote}]{\footnotemark[#1]} \makeatletter \def\@fnsymbol#1{\ensuremath{\ifcase#1\or *\or \dagger\or \ddagger\or \mathsection\or \mathparagraph\or \|\or **\or \dagger\dagger \or \ddagger\ddagger \else\@ctrerr\fi}} \makeatother \begin{document} \pagestyle{headings} \mainmatter \def\ECCVSubNumber{11} % \title{Towards Flexible Inductive Bias via \\ Progressive Reparameterization Scheduling} \titlerunning{Towards Flexible Inductive Bias via P.R.S.} \authorrunning{Y. Lee et al.} \author{Yunsung Lee$^{1}$\thanks{indicates equal contributions} \and Gyuseong Lee$^{2}$\samethanks \and Kwangrok Ryoo$^{2}$\samethanks \and \\ Hyojun Go$^{1}$\samethanks \and Jihye Park$^{2}$\samethanks \and Seungryong Kim$^{2}$\thanks{indicates corresponding author.} } \institute{ $^{1}$Riiid AI Research \qquad \qquad $^{2}$Korea University } \maketitle \begin{abstract} There are two \textit{de facto} standard architectures in recent computer vision: Convolutional Neural Networks (CNNs) and Vision Transformers (ViTs). Strong inductive biases of convolutions help the model learn sample-efficiently, but such strong biases also limit the upper bound of CNNs when sufficient data are available. In contrast, ViTs are inferior to CNNs for small data but superior when sufficient data are available. Recent approaches attempt to combine the strengths of these two architectures. However, by comparing various models' accuracy on subsets of ImageNet sampled at different ratios, we show that these approaches overlook that the optimal inductive bias also changes with the target data scale. In addition, through Fourier analysis of feature maps, which reveals how the model's response patterns change with signal frequency, we observe which inductive bias is advantageous for each data scale. The more convolution-like inductive bias is included in the model, the smaller the data scale required for the ViT-like model to outperform ResNet. To obtain a model whose inductive bias is flexible with respect to the data scale, we show that reparameterization can interpolate the inductive bias between convolution and self-attention. By adjusting the number of epochs the model stays in the convolution state, we show that reparameterization from convolution to self-attention interpolates the Fourier analysis pattern between CNNs and ViTs. Building on these findings, we propose Progressive Reparameterization Scheduling (PRS), in which reparameterization adjusts the required amount of convolution-like or self-attention-like inductive bias per layer. For small-scale datasets, our PRS performs reparameterization from convolution to self-attention linearly faster at the later layers. PRS outperforms previous studies on small-scale datasets, e.g., CIFAR-100.
\keywords{Flexible Architecture, Vision Transformer, Convolution, Self-attention, Inductive Bias} \end{abstract} \section{Introduction} \newcommand{\etal}{\textit{et al.}} Architecture advances have enhanced the performance of various tasks in computer vision by improving backbone networks~\cite{he2016deep, carion2020end, tian2020fcos, he2017mask,tian2020conditional}. Following the success of Transformers in natural language processing~\cite{vaswani2017attention,devlin2019bert,brown2020language}, Vision Transformers (ViTs) have shown that they can outperform Convolutional Neural Networks (CNNs), and their variants have led to architectural advances~\cite{liu2021swin,touvron2021going,zhou2021deepvit}. ViTs lack inductive biases such as translation equivariance and locality compared to CNNs. Therefore, with sufficient training data ViTs can outperform CNNs, but with small data they perform worse than CNNs. To deal with this data-hungry problem, several works try to inject convolution-like inductive bias into ViTs. Straightforward approaches use convolutions to aid tokenization of an input image~\cite{xiao2021early,yuan2021incorporating,wu2021cvt, hassani2021escaping} or design modules~\cite{li2021localvit,zhang2021rest,dai2021coatnet,d2021convit} that improve ViTs with the inductive bias of CNNs. Other approaches introduce locality to ViTs with local attention mechanisms~\cite{liu2021swin,han2021transformer}, which attend to neighboring elements and improve the local extraction ability of global attention. These approaches yield architectures that leverage the strengths of CNNs and ViTs and can alleviate the data-hungry problem at the data scale that each work targets. However, by comparing various models' accuracy on subsets of ImageNet sampled at different ratios, we show that these approaches overlook that the optimal inductive bias also changes with the target data scale. If trained on an excessively tiny dataset, recent ViT variants still show lower accuracy than ResNet, while on the full ImageNet scale, all ViT variants outperform ResNet. Inspired by Park~\etal~\cite{park2022vision}, we perform Fourier analysis on these models to further analyze the inductive biases in the architectures. We observe that ViTs injected with convolution-like inductive bias show frequency characteristics between those of ResNet and the vanilla ViT. In this experiment, the more convolution-like inductive bias is included, the smaller the data scale at which the model outperforms ResNet. Specifically, their early layers tend to serve as high-pass filters, and layers closer to the last layer serve more as low-pass filters. Nevertheless, such a fixed architecture in previous approaches has a fixed inductive bias between CNNs and ViTs, making it difficult to design an architecture that performs well on various data scales. Therefore, each time a new target dataset is given, the required inductive bias changes, and the model's architectural design needs to be renewed. For example, a CNN-like architecture should be used for small-scale datasets such as CIFAR~\cite{krizhevsky2009learning}, and a ViT-like architecture should be designed for large-scale datasets such as JFT~\cite{sun2017revisiting}. Also, this design process requires multiple training runs to tune the inductive bias of the model, which is time-consuming.
In this paper, we confirm that the reparameterization technique~\cite{cordonnier2019relationship,li2021can} from convolution to self-attention enables a flexible inductive bias between convolution and self-attention within a single training run. The reparameterization technique can change a learned convolution layer into a self-attention layer that operates identically to the learned convolution. Performing Fourier analysis, we show that reparameterization can interpolate the inductive biases between convolution and self-attention by adjusting the moment of reparameterization during training. We observe that more training with convolution than with self-attention makes the model's frequency characteristics more similar to those of a CNN, and vice versa. This observation shows that adjusting the schedule of reparameterization can interpolate between the inductive biases of CNNs and ViTs. From these observations, we propose Progressive Reparameterization Scheduling (PRS). PRS sequentially reparameterizes the layers from the last layer to the first layer. Layers closer to the end are trained longer with self-attention than with convolution, making them behave more like self-attention. Therefore, with our schedule we can give the model a suitable inductive bias for small-scale data. We validate the effectiveness of PRS with experiments on the CIFAR-100 dataset. \vspace{5pt} Our contributions are summarized as follows: \vspace{-5pt} \begin{itemize} \item We observe that an architecture with a more convolutional inductive bias in the early-stage layers is advantageous on a small data scale. However, if the data scale is large, it is advantageous to have a self-attentional inductive bias. \item We show that adjusting how long a layer remains a convolution before reparameterization can interpolate the inductive bias between convolution and self-attention. \item Based on these observations of favorable conditions on small-scale datasets, we propose Progressive Reparameterization Scheduling (PRS), which sequentially changes convolution to self-attention from the last layer to the first layer. PRS outperforms previous approaches on small-scale datasets, e.g., CIFAR-100. \end{itemize} \section{Related Work} \input{table/related_table} \subsection{Convolutional Neural Networks} CNNs, the most representative models in computer vision, have evolved over decades from LeNet~\cite{lecun1998gradient} to ResNet~\cite{he2016deep} in a way that is faster and more accurate. CNNs can effectively capture low-level features of images through their inductive biases, namely locality and translation invariance. However, CNNs have a weakness in capturing global information due to their limited receptive field. \subsection{Vision Transformers} Despite the great success of the vision transformer~\cite{dosovitskiy2020image} in computer vision, ViT has several critical limitations: it requires high cost and has difficulty extracting the low-level features that contain fundamental structures, so it shows inferior performance to CNNs at small data scales. There have been several attempts to overcome these limitations of ViT and improve its performance by injecting a convolutional inductive bias into the Transformer. DeiT~\cite{touvron2021training} allows ViT to absorb the knowledge of convolution through a distillation token, enabling convergence of models that fail to converge as a vanilla ViT.
On the other hand, straightforward approaches~\cite{yuan2021incorporating,li2021localvit,chu2021conditional,zhang2021rest} employ inductive bias to augment ViT by adding depthwise convolution to the FFN of the Transformer. ConViT~\cite{d2021convit} presents a new form of self-attention (SA) called gated positional self-attention (GPSA) that can be initialized as a convolution layer. After being initialized as convolution at the start of training, ConViT subsequently learns only in the form of self-attention. Thus, it does not provide sufficient inductive bias in low-resource settings. Swin Transformer~\cite{liu2021swin} imposes a locality bias on ViT by limiting the receptive field through local attention mechanisms. A brief comparison of these methods is shown in Table~\ref{table:method-comparison}. \subsection{Vision Transformers and Convolutions} There have been several studies analyzing the difference between CNNs and ViTs~\cite{park2022vision,raghu2021vision}. Park~\etal~\cite{park2022vision} and Raghu~\etal~\cite{raghu2021vision} show that CNNs and Transformers extract entirely different visual representations. In particular, Park~\etal~\cite{park2022vision} present several analyses of self-attention and convolution, showing that self-attention acts as a low-pass filter while convolution acts as a high-pass filter. Furthermore, several approaches~\cite{cordonnier2019relationship,d2021transformed,li2021can} have reparameterized convolution to self-attention by proving that their operations can be substituted for each other. Cordonnier~\etal~\cite{cordonnier2019relationship} demonstrate that self-attention and convolution can perform the same operation when relative positional encoding and particular settings are applied. T-CNN~\cite{d2021transformed} presents a model using the GPSA proposed by ConViT, which reparameterizes convolution layers as GPSA layers. C-MHSA~\cite{li2021can} proves that reparameterization between the two models is possible even when the input is given in patch units, and proposes a two-phase training scheme that initializes a ViT from a well-trained CNN using the construction in the above theoretical proof. \section{Preliminaries} Here, we recall the mathematical definitions of multi-head self-attention and convolution to help understand the next section. Then, we briefly introduce the background of reparameterization from the convolution layer to the self-attention layer. We follow the notation in~\cite{cordonnier2019relationship}. \subsubsection{Convolution Layer} The convolution layer has locality and translation equivariance characteristics, which are useful inductive biases in many vision tasks. These inductive biases are encoded in the model through parameter sharing and local information aggregation. Thanks to these inductive biases, better performance can be obtained in a low-data regime compared to a Transformer, which has a global receptive field. The output of the convolution layer can be roughly formulated as follows: \begin{equation}\label{eq:conv} \mathrm{Conv}(\bfit{X}) = \sum_{\Delta}\bfit{X}\bfit{W}^C, \end{equation} where $\bfit{X}\in\mathbb{R}^{H\times W \times C}$ is an image tensor, $H$, $W$, and $C$ are the image height, width, and number of channels, $\bfit{W}^C$ is the convolution filter weight, and the set \begin{equation} \Delta = \bigg[-\floor{\frac{K}{2}},\cdots,\floor{\frac{K}{2}}\bigg] \times \bigg[-\floor{\frac{K}{2}},\cdots,\floor{\frac{K}{2}}\bigg] \end{equation} is the receptive field of the $K\times K$ kernel.
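For intuition, the following sketch verifies this ``sum over receptive-field offsets'' view of Eq.~(\ref{eq:conv}) numerically against a standard convolution; the tensor shapes and the absence of padding are simplifying assumptions.
\begin{verbatim}
# Sketch: Eq. (1) as a sum of shifted inputs times kernel slices,
# checked against torch.nn.functional.conv2d (no padding, no bias).
import torch
import torch.nn.functional as F

K, C_in, C_out, H = 3, 4, 8, 16
x = torch.randn(1, C_in, H, H)
w = torch.randn(C_out, C_in, K, K)

ref = F.conv2d(x, w)               # standard K x K convolution

H_out = H - K + 1
out = torch.zeros_like(ref)
for di in range(K):
    for dj in range(K):
        shifted = x[:, :, di:di + H_out, dj:dj + H_out]
        out += torch.einsum("bchw,oc->bohw", shifted, w[:, :, di, dj])

print(torch.allclose(ref, out, atol=1e-4))   # True
\end{verbatim}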
\subsubsection{Multi-head Self-Attention Mechanism} The multi-head self-attention (MHSA) mechanism~\cite{vaswani2017attention} trains the model to find semantic meaning by finding associations among $N$ elements using query $\bfit{Q}\in\mathbb{R}^{N\times d_{H}}$, key $\bfit{K}\in\mathbb{R}^{N\times d_{H}}$, and value $\bfit{V}\in\mathbb{R}^{N\times d_{H}}$, where $d_{H}$ is the size of each head. After embedding the sequence $\textbf{\textit{X}} \in \mathbb{R}^{N \times d}$ as a query and key using $\bfit{W}^Q\in\mathbb{R}^{d\times d_H}$ and $\bfit{W}^K\in\mathbb{R}^{d\times d_H}$, an attention score $\textbf{\textit{A}}\in\mathbb{R}^{N\times N}$ can be obtained by applying softmax to the inner product of $\textit{\textbf{Q}}$ and $\textit{\textbf{K}}$, where $d$ is the size of an input token. Self-attention (SA) is obtained through matrix multiplication of $\bfit{A}$ and $\bfit{V}$, the latter embedded by $\bfit{W}^V\in\mathbb{R}^{d\times d_{H}}$: \begin{equation}\label{eq:SA} \begin{split} \mathrm{SA}(\bfit{X}) = \bfit{A}(\bfit{XW}^Q,\bfit{XW}^K)\bfit{XW}^V,\\ \textbf{\textit{A}}(\textbf{Q},\textbf{K}) = \mathrm{softmax} \left( \frac{\bfit{QK}^\top}{\sqrt{d}}+\textbf{\textit{B}} \right), \end{split} \end{equation} where \textit{\textbf{B}} is a relative positional embedding as suggested in~\cite{dai2019transformer}. By properly setting the relative positional embedding $\bfit{B}$, we can force the query pixel to focus on only one key pixel. MHSA allows the model to attend to information from different representation subspaces by performing the attention function in parallel using multiple heads. MHSA with a total of $N_H$ heads can be formulated as follows: \begin{equation}\label{eq:mhsa} \mathrm{MHSA}(\bfit{X})=\sum_{k=1}^{N_{H}}{\mathrm{SA}}_k(\bfit{X})\bfit{W}^O_k, \end{equation} where $\bfit{W}^O$ is a learnable projection and $k$ is the index of the head. \subsubsection{Reparameterizing MHSA into Convolution Layer} Li~\etal~\cite{li2021can} showed that a $K\times K$ convolution kernel can be performed through $K^2$ heads, where $K$ is the size of the kernel. Since the convolution layer is agnostic to the context of the input, it is necessary to set $\bfit{W}^Q$ and $\bfit{W}^K$ to $\textbf{0}$ when converting the convolution to MHSA. Using equations~(\ref{eq:SA}) and~(\ref{eq:mhsa}) together, MHSA can be formulated as follows: \begin{equation} \mathrm{MHSA}(\bfit{X}) = \sum_{k=1}^{N_{H}} \bfit{A}_k\bfit{X}\bfit{W}^V_k\bfit{W}^O_k. \end{equation} As $\bfit{A}_k\bfit{X}$ is used to select the desired pixel, the knowledge of the convolution layer can be completely transferred to MHSA by setting $\bfit{W}^V$ to $\bfit{I}$ and initializing $\bfit{W}^O$ to $\bfit{W}^C$. \section{Inductive Bias Analysis of Various Architectures}\label{sec:FourierMain} \input{table/main_imagenet_subset} In this section, we analyze various architectures through Fourier analysis and their accuracy tendencies according to data scale. Previous works that design modules by mixing convolution-like inductive bias into ViTs overlook that a fixed architecture has a fixed inductive bias and that the optimal inductive bias can change with the data scale. To confirm this, we conduct experiments that measure the accuracy of various architectures while changing the data scale of ImageNet~\cite{deng2009imagenet}. In these experiments, we observe that the required data scale for outperforming ResNet is different for each architecture.
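As a minimal sketch of how such subsets could be constructed (the exact sampling protocol is not specified above, so the class-balanced sampling, the random seed, and the dataset path below are assumptions):
\begin{verbatim}
# Sketch: build a class-balanced p-fraction subset of an ImageFolder-style
# dataset. Sampling protocol, seed and path are illustrative assumptions.
import random
from collections import defaultdict
from torch.utils.data import Subset
from torchvision.datasets import ImageFolder

def subset_indices(dataset, ratio, seed=0):
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for idx, (_, label) in enumerate(dataset.samples):
        by_class[label].append(idx)
    picked = []
    for idxs in by_class.values():
        rng.shuffle(idxs)
        picked.extend(idxs[: max(1, int(len(idxs) * ratio))])
    return picked

train = ImageFolder("/path/to/imagenet/train")   # placeholder path
train_10pct = Subset(train, subset_indices(train, ratio=0.10))
\end{verbatim}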
Then, we link the frequency characteristics of recent ViT variants to the tendency of their accuracy with data scale, expanding the observations of Park~\etal~\cite{park2022vision}. In \cite{park2022blurs,park2022vision}, they analyze feature maps in Fourier space and demonstrate that self-attention is a low-pass filter, while convolution is a high-pass filter. This phenomenon of filtering noise of different frequencies is caused by the different inductive biases of self-attention and convolution. With the Fourier analysis of Park~\etal~\cite{park2022vision}, we observe that architectures with more CNN-like frequency characteristics also show CNN-like efficiency and accuracy tendencies on small-scale datasets. Park~\etal~\cite{park2022vision} conducted Fourier analysis only for ViT and ResNet, but we analyze several models representing various attempts to inject convolutional inductive biases into the ViT architecture. In this section, we show that the Fourier characteristics vary with each injected inductive bias, so we can see which of the ViT variants is more convolution-like or self-attention-like. Section \ref{sec:Reparam} will show that we can interpolate from these convolution-like Fourier features to self-attention-like Fourier features with reparameterization. \begin{figure*} \centering \begin{subfigure}{0.49\linewidth} \includegraphics[width=\linewidth]{figure/Subset_Tiny.pdf} \vspace{-5pt} \caption{} \label{fig:2a} \end{subfigure} \begin{subfigure}{0.49\linewidth} \includegraphics[width=\linewidth]{figure/Subset_Small.pdf} \vspace{-5pt} \caption{} \label{fig:2b} \end{subfigure} \vspace{-5pt} \caption{\textbf{Comparisons of accuracy between ResNet and various ViT-like architectures.} Each model is trained on subsets of ImageNet, specifically 1\%, 5\%, 10\%, 50\%, and 100\%. We plot the accuracy difference between ResNet and the other architectures with increasing subset ratio. The numbers in parentheses indicate the number of parameters of each model.}\vspace{-20pt} \label{fig:imagenet_subset} \end{figure*} \subsection{Our Hypothesis} We hypothesize that 1) the more convolution-like inductive bias is included, the smaller the data scale at which the ViT-like model outperforms CNNs, and 2) frequency characteristics can explain whether the inductive bias of a model is closer to CNNs or ViTs. Specifically, the degree to which a layer suppresses, rather than amplifies, the high-frequency signal tends to increase dramatically from the first layer to the last layer in CNNs, whereas in ViT it barely increases. ViTs injected with the inductive bias of convolutions tend to increase it as well, but not as drastically as CNNs. Here, we observe that ViTs in which this tendency increases more dramatically perform well on smaller-scale data, like CNNs. \subsection{Data Scale Experiment}\label{sec:data_exp} CNNs have inductive biases such as locality and translation invariance, and ViTs do not. Because of this difference in inductive bias, the data scale determines which architecture is superior. On small-scale data, CNNs outperform ViTs, and at some point ViTs outperform CNNs as the data scale grows. ViT variants injected with convolution-like inductive bias have a stronger inductive bias than the na\"ive ViT, and the amount of data they require to outperform ResNet will be smaller. In this subsection, we identify accuracy trends and the amount of data required to outperform ResNet for various architectures by changing the data scale.
As shown in Table~\ref{tab:main_imagenet} and Figure~\ref{fig:imagenet_subset}, we construct ImageNet subsets with ratios of 0.01, 0.05, 0.1, and 0.5 for experiments in various settings with the same data distribution but different data scales. Utilizing the taxonomy of vision transformers proposed in~\cite{liu2021survey}, we choose representatives of each category as the ViT variants to compare. ResT~\cite{zhang2021rest} injects inductive bias directly by adding convolution layers, whereas Swin~\cite{liu2021swin} and ConViT~\cite{d2021convit} add locality in a new way. Swin uses a method that constrains global attention, while ConViT proposes a new self-attention layer that can act as a convolution layer in the initial stage of training. Therefore, we select ResNet-18 and ResNet-50 as the basic CNN architectures, DeiT-Ti as the vanilla ViT, and ResT-Light, ConViT-Ti, and Swin-T as the ViT variants to be tested. Since the number of parameters also significantly affects performance, we compare the tiny version of Swin (Swin-T)~\cite{liu2021swin} with ResNet-50~\cite{he2016deep} and the remaining ViT variants with ResNet-18~\cite{he2016deep}. Swin-T has more parameters than the other models since its dimension is doubled at every stage. At 0.01, the smallest data scale, the ResNet models, consisting only of convolutions, show better performance, and between them ResNet-18, which has fewer parameters, has the highest accuracy. However, as the data scale increases, the accuracy of the ViT models increases more rapidly than that of ResNet. In particular, ResTv1-Light~\cite{zhang2021rest} and Swin-T~\cite{liu2021swin}, which have hierarchical structures, show superior performance among the ViT variants, and ResTv1-Light even records the highest accuracy of all models when the data scale is 0.05 or more. As illustrated in Figure~\ref{fig:imagenet_subset}, DeiT-Ti~\cite{touvron2021training} shows better performance than ResNet when the data scale is close to 1, while ConViT-Ti~\cite{d2021convit} and Swin-T~\cite{liu2021swin} outperform it at 0.5 or more. Meanwhile, the accuracy of ResT is higher than that of ResNet-18 from the fairly small data scale of 0.05. Therefore, we argue that the inductive bias is strong in the order of ResTv1-Light, Swin-T, ConViT-Ti, and DeiT-Ti. Through these experiments, we can see that inductive bias and hierarchical structure have a great influence on accuracy. \begin{figure*} \centering \includegraphics[width=\linewidth]{figure/Figure_fourier.pdf} \caption{\textbf{Frequency characteristics of ViTs and ResNet.} In ResNet-50, ResTv1-Light, and Swin-T, the difference in log amplitude sharply increases as the normalized depth increases. In contrast, DeiT and ConViT, which softly inject inductive biases into the model, do not show this tendency.} \vspace{-20pt} \label{fig:fourer_analysis} \end{figure*} \subsection{Fourier Analysis}\label{sec:fourier} As shown in Section~\ref{sec:data_exp}, the required data scale for outperforming ResNet is different for each architecture. Inspired by the analysis of Park~\etal~\cite{park2022vision}, we show through Fourier analysis that architectures with frequency characteristics more similar to those of ResNet tend to outperform ResNet at smaller data scales. As in~\cite{park2022vision, park2022blurs}, the feature maps of each layer can be converted to a two-dimensional frequency domain with the Fourier transform.
Transformed feature maps can be represented over normalized frequencies, where the frequency is normalized to $[-\pi,\pi]$. The high-frequency components are represented at $-\pi$ and $\pi$, and the lowest-frequency components are represented at $0$. Then, we use the difference in log amplitude to report the amplitude ratio of high-frequency to low-frequency components. For better visualization, differences in log amplitude between $0$ and $1/3\pi$, $0$ and $2/3\pi$, and $0$ and $\pi$ are used to capture the overall frequency characteristics well. Figure~\ref{fig:fourer_analysis} shows the frequency characteristics obtained through Fourier analysis. In the ResNet results, the difference in log amplitude sharply increases as the normalized depth increases. This shows that early layers tend to amplify the high-frequency signal, and that this tendency decreases sharply as we move closer to the last layer. However, DeiT and ConViT, which softly inject inductive biases into the model, do not show this tendency, and their frequency characteristics are similar throughout the layers. The results of Swin and ResT, which strongly inject inductive biases into the model through local attention mechanisms or convolution, show that the increase in the difference in log amplitude lies at an intermediate level between that of ResNet and that of DeiT. By combining the results of Figure~\ref{fig:fourer_analysis} and Table~\ref{tab:main_imagenet}, we can see that a model performs well on small-scale data if the increase in the difference in log amplitude through the layers is sharp. The increase becomes smoother in the order of ResNet, ResT, Swin, ConViT, and DeiT, and the accuracy in the low-data regime is higher in this order. These results are consistent with the observations of previous work that the inductive bias of CNNs helps the model learn on small-scale data. From these results, we argue that the difference in log amplitude through the layers can measure the CNN-like inductive bias of a model. If it increases sharply, similar to CNNs, the model has strong inductive biases and performs well in a low-data regime. \section{Reparameterization Can Interpolate Inductive Biases}\label{sec:Reparam} As shown in Section~\ref{sec:FourierMain}, a fixed architecture does not have a flexible inductive bias, so it has to be tuned for each dataset. Since modifying the architecture to have a suitable inductive bias for each dataset is too time-consuming, a method that can flexibly adjust the inductive bias during the training process is needed. Through reparameterization, we observe that a model trained more with convolution than with self-attention has more CNN-like frequency characteristics. With these results, we show that reparameterization can interpolate the inductive bias between CNNs and ViTs by adjusting the moment of reparameterization during training. \subsection{Experimental Settings} Because reparameterization can change convolution to self-attention, we can adjust the ratio of epochs for which each layer is trained with convolution versus self-attention. On a $10\%$ subset of the ImageNet data, we adjust this ratio with four settings: the model is trained with 1) convolution for 300 epochs and self-attention for 0 epochs, 2) convolution for 250 epochs and self-attention for 50 epochs, 3) convolution for 150 epochs and self-attention for 150 epochs, and 4) convolution for 50 epochs and self-attention for 250 epochs. We note that the number of epochs trained with convolution decreases from setting 1) to setting 4).
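The Fourier-analysis metric from Section~\ref{sec:fourier}, which is reused in the following interpolation analysis, can be sketched as below. The channel averaging, the chosen frequency, and the sign convention of the difference are assumptions made for illustration.
\begin{verbatim}
# Sketch: 2D FFT of a feature map and the difference in log amplitude
# between the lowest frequency and a high frequency. Channel averaging
# and the sign convention are illustrative assumptions.
import torch

def delta_log_amplitude(feature):                 # feature: (C, H, W)
    fft = torch.fft.fftshift(torch.fft.fft2(feature.float()),
                             dim=(-2, -1))
    logamp = torch.log(fft.abs() + 1e-8).mean(dim=0)  # average channels
    _, H, W = feature.shape
    low = logamp[H // 2, W // 2]                  # normalized frequency 0
    high = logamp[H // 2, -1]                     # close to frequency pi
    return (high - low).item()

print(delta_log_amplitude(torch.randn(64, 14, 14)))  # dummy feature map
\end{verbatim}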
We follow the reparameterization setting of CMHSA-3~\cite{li2021can} and the Fourier analysis of Section~\ref{sec:fourier}. \begin{figure*}[t] \centering \includegraphics[width=\linewidth]{figure/Figure_interpolation.pdf} \vspace{-5pt} \caption{\textbf{Visualization of Interpolation.} As the ratio of epochs trained with self-attention increases, the difference in log amplitude of early-stage layers tends to increase, and the difference in log amplitude of late-stage layers tends to decrease. Conv $x$, SA $y$ denotes that the model is trained with convolution for $x$ epochs and self-attention for $y$ epochs.} \label{fig:interpolation} \end{figure*} \subsection{Interpolation of Convolutional Inductive Bias}\label{sec:intconvind} Figure~\ref{fig:interpolation} shows the results of the Fourier analysis according to the ratio of epochs trained with convolution versus self-attention. Comparing settings 1) to 4), we can see that the degree of increase becomes smaller from 1) to 4). As the ratio of epochs trained with self-attention increases, the difference in log amplitude of early-stage layers tends to increase, and the difference in log amplitude of late-stage layers tends to decrease. These results show that more training with convolution makes the degree of increase sharper. As we observed in Section~\ref{sec:fourier}, a more sharply increasing difference in log amplitude over the normalized depth indicates that the model has more CNN-like inductive biases. By combining the results of Figure~\ref{fig:interpolation} with this observation, we can see that the more a model is trained with convolution, the more CNN-like its inductive biases become. \section{Progressive Reparameterization Scheduling} \begin{figure}[t] \centering \includegraphics[width=0.8\linewidth]{figure/Ours/ConvAttn4.pdf}\\ \vspace{-5pt} \caption{\textbf{Illustration of PRS.} Conv. is a block with a convolutional layer, and Self Attn. is a block with a self-attention layer. Each block is progressively transformed from a convolution block to a self-attention block as the training progresses.} \label{fig:main-network}\vspace{-10pt} \end{figure} We now propose Progressive Reparameterization Scheduling (PRS), which adjusts the inductive bias of ViT for learning on small-scale data. PRS is based on the following findings: \begin{itemize} \item As shown in Section~\ref{sec:FourierMain}, the more convolution-like inductive bias is included, the smaller the data scale at which the ViT-like model outperforms CNNs. In more detail, we can see that a model performs well on small-scale data if the increase in the difference in log amplitude through the layers is sharp. \item Furthermore, in the interpolation experiment in Section~\ref{sec:Reparam}, if a layer is trained in the convolution state for more epochs, it has more convolution-like characteristics, and if it is trained in the self-attention state for more epochs, it has more self-attention-like characteristics. That is, by adjusting the schedule, it is possible to interpolate the model's inductive bias between self-attention and convolution. \end{itemize} From these findings, PRS makes the early layers have a small difference in log amplitude, acting as high-pass filters, and the last layers a large difference in log amplitude, acting as low-pass filters.
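Concretely, the per-layer switching rule formalized below reduces to comparing the current epoch with a layer-dependent threshold; the following sketch illustrates this behaviour (variable names are assumptions).
\begin{verbatim}
# Sketch of the PRS switching rule: layer l (1-indexed, L layers) runs as
# convolution until epoch t exceeds T * (1 - l / (L + 1)), then switches
# to self-attention. Names and the printed epochs are illustrative.
def use_self_attention(t, l, L, T):
    return t > T * (1 - l / (L + 1))

L_layers, T_epochs = 12, 300
for t in (0, 100, 200, 299):
    mode = ["SA" if use_self_attention(t, l, L_layers, T_epochs) else "Conv"
            for l in range(1, L_layers + 1)]
    print(t, mode)
\end{verbatim}
Later layers cross their thresholds earlier, so reparameterization proceeds from the last layer towards the first.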
Because convolution and self-attention serve as a high-pass filter and a low-pass filter, respectively, as in Park~\etal~\cite{park2022vision}, PRS lets the rear layers play the role of self-attention and the front layers the role of convolution. In order to force the rear layers to focus more on the role of self-attention than the front layers, PRS reparameterizes from convolution to self-attention according to a linear time schedule, starting from the rear layers. PRS is depicted in Figure~\ref{fig:main-network} and can be expressed as a formula as follows: \begin{align} &\bfit{z}_0 = \mathrm{PE}(\bfit{X}), \\ &\begin{aligned} {\bfit{z}^{'}_{l}} = \begin{cases} \mathrm{Conv}(\mathrm{LN}(\bfit{z}_{l-1}))+\bfit{z}_{l-1}, & (t \leq T\cdot (1 - \frac{l}{L+1}))\\ \mathrm{MHSA}(\mathrm{LN}(\bfit{z}_{l-1}))+\bfit{z}_{l-1}, & (t > T\cdot (1 - \frac{l}{L+1})) \end{cases} \end{aligned} \\ &\bfit{z}_{l} = \mathrm{MLP}(\mathrm{LN}(\bfit{z}^{'}_{l})) + \bfit{z}^{'}_{l},\\ &\textbf{y} = \mathrm{Linear}(\mathrm{GAP}(\bfit{z}_{L})), \end{align} where $\mathrm{PE}(\cdot)$ is the patch embedding function following~\cite{li2021can}, $\mathrm{LN}(\cdot)$ is LayerNorm~\cite{ba2016layer}, $\mathrm{GAP}(\cdot)$ is the global average pooling layer, $\mathrm{Linear}(\cdot)$ is a linear layer, $t$ denotes the current training epoch, $L$ denotes the total number of layers, $l = 1, 2, \cdots, L$ denotes the layer index, $T$ denotes the total number of training epochs, and $\textbf{y}$ denotes the output of the model. \input{table/cifar100} Table~\ref{tab:cifar100} shows the effectiveness of PRS on the CIFAR-100 dataset. PRS outperforms the baseline by +2.37 points in top-1 accuracy on the CIFAR-100 dataset, showing that performance can be boosted by simple scheduling. We note that our PRS achieves better performance than the previous two-stage reparameterization strategy~\cite{li2021can}. These results show that PRS can dynamically apply an appropriate inductive bias for each layer. Through the successful results of PRS, we conjecture that flexibly inducing inductive bias with reparameterization has potential for designing models on data of various scales. \section{Conclusion} From the analysis of existing ViT-variant models, we draw the following conclusion: the more convolution-like inductive bias is included in the model, the smaller the data scale at which the ViT-like model outperforms CNNs. Furthermore, we empirically show that reparameterization can interpolate inductive biases between convolution and self-attention by adjusting the moment of reparameterization during training. Based on this empirical observation, we propose PRS, Progressive Reparameterization Scheduling, a flexible method that embeds the required amount of inductive bias in each layer. PRS outperforms existing approaches on small-scale datasets, e.g., CIFAR-100. \vspace{-10pt} \subsubsection{Limitations and Future Works} Although linear scheduling is used in this paper, there is no guarantee that linear scheduling is optimal. Therefore, through subsequent experiments on scheduling, PRS could be improved by making the schedule learnable rather than linear. In this paper, we only covered datasets up to the scale of ImageNet, but we will also proceed with an analysis of data scales larger than ImageNet. We also find that hierarchical architectures tend to have more CNN-like characteristics than non-hierarchical architectures.
This finding about hierarchy could further refine our inductive bias analysis and improve PRS.
\bibliographystyle{splncs04}
\bibliography{egbib}
\end{document}
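As a supplementary illustration of the switching rule defined in the PRS equations above, the following is a minimal, hypothetical Python sketch, not the authors' released implementation; the layer count and epoch budget in the example are assumptions chosen only to show how rear layers switch to self-attention before front layers.

```python
# Minimal sketch (ours, not the authors' code) of the PRS switching rule:
# layer l (1-indexed, out of L layers) runs in its convolutional form while
# t <= T * (1 - l / (L + 1)) and is reparameterized to self-attention after
# that point, so rear layers switch earlier than front layers.

def uses_convolution(t: int, layer: int, num_layers: int, total_epochs: int) -> bool:
    """Return True if `layer` should still run as convolution at epoch `t`."""
    return t <= total_epochs * (1 - layer / (num_layers + 1))

# Hypothetical example with L = 12 layers and T = 300 epochs: the last layer
# switches to self-attention after roughly epoch 23, the first after epoch 277.
if __name__ == "__main__":
    L, T = 12, 300
    for t in (0, 50, 150, 250, 300):
        conv_layers = [l for l in range(1, L + 1) if uses_convolution(t, l, L, T)]
        print(f"epoch {t:3d}: layers still convolutional -> {conv_layers}")
```

In a full training loop, the epoch at which a given layer crosses this threshold is where its learned convolution weights would be copied into the corresponding self-attention block, following the C-MHSA construction.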
https://openreview.net/forum?id=8gDgxLAhrXK
https://arxiv.org/abs/2210.01370
Evaluate the paper based on its scientific significance and overall quality.
Interesting method, but concerns about motivating analysis

**Summary**
The authors use existing work that reparameterizes self-attention so that it can interpolate between self-attention and convolution, and propose a linear schedule that progressively switches CNN modules over to self-attention during training. This method is motivated by a reproduction of the result that CNNs and ViT models with increased inductive biases are more data efficient than baseline ViT models, while ViT models can outperform CNNs at large data scales.

**Strengths**
- The proposed method is simple and easy to reproduce based on the paper.
- The method is effective on CIFAR-100.

**Weaknesses**
- In my view, the analysis in sections 4 and 5 does not connect to the method. The relationship between the frequency response of convolution/self-attention and their inductive bias is not properly explained and not obvious to me. The cited work by Park et al. doesn't directly connect the two either. The claim that the frequency response is indicative of inductive bias is therefore, in my opinion, not substantiated.
- Regardless of the validity of the analysis in sections 4 and 5, in my opinion sections 4 and 5 are not necessary to argue that inductive bias aids data efficiency, as that is an established property of machine learning models. Specifically for convolutions and self-attention there are many works to cite (see the related works in this work and the work of Park et al.). As such, these sections distract from the main contributions.
- The main motivation for the method is improved accuracy on CIFAR-100, but a thorough description of the hyperparameters of the method and the baselines is missing, which does not inspire trust in the results.
- The method is not evaluated for its claimed benefit of adjusting to the data scale. For example, the authors could have evaluated on the ImageNet subsets of Sec. 4.2.

Rating: marginally below acceptance threshold

**Justification**
My leading principle for this review is whether I believe the paper disseminates useful information for the field. In its current form, I do not believe sections 4 and 5 should be published, as I do not agree with the claims on the relationship between frequency response and inductive bias, and they will confuse and distract uninformed readers. I can, however, see the value of the proposed linear schedule method, and think it would otherwise be a good fit for this workshop or another similar venue. I recommend the authors revisit the way they motivate and analyze their method, and resubmit to a similar venue.

Rating: 5: Marginally below acceptance threshold
Confidence: 3: The reviewer is fairly confident that the evaluation is correct
https://openreview.net/forum?id=8gDgxLAhrXK
https://arxiv.org/abs/2210.01370
Please evaluate the paper based on its analysis of the relationship between frequency response and inductive bias, as well as the effectiveness of the proposed linear schedule method on CIFAR-100.
Good paper! Accept without doubt.

This paper first studies the inductive bias in convs and transformers using Fourier analysis. The amount of inductive bias is defined as the log-scale difference between the high and low frequencies. The observation seems to align with common sense that convs have much more inductive bias and are thus data-efficient, while transformers are the opposite. Later, the authors study a progressive reparameterization schedule which allows a model to flexibly decide the "optimal" amount of bias needed during training by progressively converting convs to transformer blocks.

In general, an interesting paper. However, I do have a concern about the lack of connection among frequencies, inductive biases, and data efficiency. The main argument is that MSA is low-pass while convs are high-pass. However, it is not clear how the high/low frequency contributes to data efficiency. Is the frequency bias universal (applicable to other datasets/tasks)? Reviewer "zxBu" shares the same opinion, as indicated in "The relationship between the frequency response of convolution/self-attention and their inductive bias is not properly explained and not obvious to me."

In general, I think Progressive Reparameterization Scheduling is an interesting idea and the experiments on CIFAR show some inspiring results. Despite the concerns on the Fourier analysis, I would still vote for acceptance.

Line 388: typo.
Line 389: how are the frequencies normalized to [-pi, pi]? Linearly?
Line 396: what is the 'normalized depth'? Is it possible to show some qualitative analysis/visual examples on low/high frequencies? What are the low/high frequencies exactly? How do they differ in early and later layers? It might be beneficial for understanding the inductive bias.
Fig. 3: why does the orange line have the same configuration as the light blue line? Is this a mistake? I would assume it should be orange - Conv 100, SA 200?

Rating: 7: Good paper, accept
Confidence: 3: The reviewer is fairly confident that the evaluation is correct
\documentclass[runningheads]{llncs} \usepackage{graphicx} \usepackage{cite} \usepackage{hyperref} \usepackage{tikz} \usepackage{comment} \usepackage{booktabs} \usepackage{amsmath,amssymb} % \usepackage{color} \usepackage{pifont} \usepackage{subcaption} \usepackage{booktabs} \usepackage{tabularx} \usepackage{multirow} \usepackage{makecell} \usepackage[accsupp]{axessibility} % \newcommand{\bfit}[1]{\textbf{\textit{#1}}} \newcommand{\floor}[1]{\left \lfloor #1 \right \rfloor} \newcommand{\xmark}{\ding{55}}% \newcommand{\cmark}{\ding{51}}% \newcommand{\samelineand}{\qquad} \newcommand*\samethanks[1][\value{footnote}]{\footnotemark[#1]} \makeatletter \def\@fnsymbol#1{\ensuremath{\ifcase#1\or *\or \dagger\or \ddagger\or \mathsection\or \mathparagraph\or \|\or **\or \dagger\dagger \or \ddagger\ddagger \else\@ctrerr\fi}} \makeatother \begin{document} \pagestyle{headings} \mainmatter \def\ECCVSubNumber{11} % \title{Towards Flexible Inductive Bias via \\ Progressive Reparameterization Scheduling} \titlerunning{Towards Flexible Inductive Bias via P.R.S.} \authorrunning{Y. Lee et al.} \author{Yunsung Lee$^{1}$\thanks{indicates equal contributions} \and Gyuseong Lee$^{2}$\samethanks \and Kwangrok Ryoo$^{2}$\samethanks \and \\ Hyojun Go$^{1}$\samethanks \and Jihye Park$^{2}$\samethanks \and Seungryong Kim$^{2}$\thanks{indicates corresponding author.} } \institute{ $^{1}$Riiid AI Research \qquad \qquad $^{2}$Korea University } \maketitle \begin{abstract} There are two \textit{de facto} standard architectures in recent computer vision: Convolutional Neural Networks (CNNs) and Vision Transformers (ViTs). Strong inductive biases of convolutions help the model learn sample effectively, but such strong biases also limit the upper bound of CNNs when sufficient data are available. On the contrary, ViT is inferior to CNNs for small data but superior for sufficient data. Recent approaches attempt to combine the strengths of these two architectures. However, we show these approaches overlook that the optimal inductive bias also changes according to the target data scale changes by comparing various models' accuracy on subsets of sampled ImageNet at different ratios. In addition, through Fourier analysis of feature maps, the model's response patterns according to signal frequency changes, we observe which inductive bias is advantageous for each data scale. The more convolution-like inductive bias is included in the model, the smaller the data scale is required where the ViT-like model outperforms the ResNet performance. To obtain a model with flexible inductive bias on the data scale, we show reparameterization can interpolate inductive bias between convolution and self-attention. By adjusting the number of epochs the model stays in the convolution, we show that reparameterization from convolution to self-attention interpolates the Fourier analysis pattern between CNNs and ViTs. Adapting these findings, we propose Progressive Reparameterization Scheduling (PRS), in which reparameterization adjusts the required amount of convolution-like or self-attention-like inductive bias per layer. For small-scale datasets, our PRS performs reparameterization from convolution to self-attention linearly faster at the late stage layer. PRS outperformed previous studies on the small-scale dataset, e.g., CIFAR-100. 
\keywords{Flexible Architecture, Vision Transformer, Convolution, Self-attention, Inductive Bias} \end{abstract} \section{Introduction} \newcommand{\etal}{\textit{et al.}} Architecture advances have enhanced the performance of various tasks in computer vision by improving backbone networks~\cite{he2016deep, carion2020end, tian2020fcos, he2017mask,tian2020conditional}. From the success of Transformers in natural language processing~\cite{vaswani2017attention,devlin2019bert,brown2020language}, Vision Transformers (ViTs) show that it can outperform Convolutional Neural Networks (CNNs) and its variants have led to architectural advances~\cite{liu2021swin,touvron2021going,zhou2021deepvit}. ViTs lack inductive bias such as translation equivariance and locality compared to CNNs. Therefore, ViTs with sufficient training data can outperform CNNs, but ViTs with small data perform worse than CNNs. To deal with the data-hungry problem, several works try to inject convolution-like inductive bias into ViTs. The straightforward approaches use convolutions to aid tokenization of an input image~\cite{xiao2021early,yuan2021incorporating,wu2021cvt, hassani2021escaping} or design the modules~\cite{li2021localvit,zhang2021rest,dai2021coatnet,d2021convit} for improving ViTs with the inductive bias of CNNs. Other approaches use the local attention mechanisms for introducing locality to ViTs~\cite{liu2021swin,han2021transformer}, which attend to the neighbor elements and improve the local extraction ability of global attention mechanisms. These approaches can design architectures that leverage the strength of CNNs and ViTs and can alleviate the data-hungry problem at some data scale that their work target. However, we show these approaches overlook that the optimal inductive bias also changes according to the target data scale by comparing various models’ accuracy on subsets of sampled ImageNet at different ratios. If trained on the excessively tiny dataset, recent ViT variants still show lower accuracy than ResNet, and on the full ImageNet scale, all ViT variants outperform ResNet. Inspired by Park~\etal~\cite{park2022vision}, we perform Fourier analysis on these models to further analyze inductive biases in the architecture. We observe that ViTs injected convolution-like inductive bias show frequency characteristics between it of ResNet and ViT. In this experiment, the more convolution-like inductive bias is included, the smaller the data scale is required where the model outperforms the ResNet performance. Specifically, their frequency characteristics tend to serve as the high-pass filter in early layers and as more low-pass filter closer to the last layer. Nevertheless, such a fixed architecture in previous approaches has a fixed inductive bias between CNNs and ViTs, making it difficult to design an architecture that performs well on various data scales. Therefore, each time a new target dataset is given, the optimal inductive bias required changes, so each time the model's architectural design needs to be renewed. For example, a CNN-like architecture should be used for small-scale dataset such as CIFAR~\cite{krizhevsky2009learning}, and a ViT-like architecture should be designed for large-scale datasets such as JFT~\cite{sun2017revisiting}. Also, this design process requires multiple training for tuning the inductive bias of the model, which is time-consuming. 
In this paper, we confirm the possibility of reparameterization technique~\cite{cordonnier2019relationship,li2021can} from convolution to self-attention towards flexible inductive bias between convolution and self-attention during a single training trial. The reparameterization technique can change the learned convolution layer to self-attention, which identically operates like learned convolution. Performing Fourier analysis, we show that reparameterization can interpolate the inductive biases between convolution and self-attention by adjusting the moment of reparameterization during training. We observe that more training with convolutions than with self-attention makes the model have a similar frequency characteristic to CNN and vice versa. This observation shows that adjusting the schedule of reparameterization can interpolate between the inductive bias of CNNs and ViTs. From these observations, we propose the Progressive Reparameterization Scheduling (PRS). PRS is to sequentially reparameterize from the last layer to the first layer. Layers closer to the last layers are more trained with self-attention than convolution, making them closer to self-attention. Therefore, we can make the model have a suitable inductive bias for small-scale data with our schedule. We validate the effectiveness of PRS with experiments on the CIFAR-100 dataset. \vspace{5pt} Our contributions are summarized as follows: \vspace{-5pt} \begin{itemize} \item We observe that architecture with a more convolutional inductive bias in the early stage layers is advantageous on a small data scale. However, if the data scale is large, it is advantageous to have a self-attentional inductive bias. \item We show that adjusting the remaining period as convolution before reparameterization can interpolate the inductive bias between convolution and self-attention. \item Based on observations of favorable conditions in small-scale datasets, we propose the Progressive Reparameterization Scheduling (PRS) which sequentially changes convolution to self-attention from the last layer to the first layer. PRS outperformed previous approaches on the small-scale dataset, e.g., CIFAR-100. \end{itemize} \section{Related Work} \input{table/related_table} \subsection{Convolution Neural Networks} CNNs, the most representative models in computer vision, have evolved over decades from LeNeT~\cite{lecun1998gradient} to ResNet~\cite{he2016deep} in a way that is faster and more accurate. CNNs can effectively capture low-level features of images through inductive biases which are locality and translation invariance. However, CNNs have a weakness in capturing global information due to their limited receptive field. \subsection{Vision Transformers} Despite the great success of vision transformer~\cite{dosovitskiy2020image} in computer vision, ViT has several fatal limitations that it requires high cost and is difficult to extract the low-level features which contain fundamental structures, so that it shows inferior performance than CNNs in small data scales. There are several attempts to overcome the limitations of ViT and improve its performance by injecting a convolution inductive bias into the Transformer. DeiT~\cite{touvron2021training} allows ViT to take the knowledge of convolution through distillation token. They can converge a model, which fails in ViT. 
On the other hand, The straightforward approaches~\cite{yuan2021incorporating,li2021localvit,chu2021conditional,zhang2021rest} employ inductive bias to augment ViT by adding depthwise convolution to the FFN of the Transformer. ConViT~\cite{d2021convit} presents a new form of self-attention(SA) called Gated positional self-attention (GPSA) that can be initialized as a convolution layer. After being initialized as convolution only at the start of learning, ConViT learns only in the form of self-attention. Thus, it does not give sufficient inductive bias on small resources. Swin Transformer~\cite{liu2021swin} imposes a bias for the locality to ViT in a way that limits the receptive field by local attention mechanisms. A brief comparison of these methods is shown in Table~\ref{table:method-comparison}. \subsection{Vision Transformers and Convolutions} There have been several studies analyzing the difference between CNNs and ViTs~\cite{park2022vision,raghu2021vision}. Park~\etal~\cite{park2022vision} and Raghu~\etal~\cite{raghu2021vision} prove that CNN and Transformer extract entirely different visual representations. In particular, Park~\etal~\cite{park2022vision} present the several analysis of self-attention and convolution that self-attention acts as a low-pass filter while convolution acts as a high pass filter. Furthermore, several approaches~\cite{cordonnier2019relationship,d2021transformed,li2021can} have reparameterized convolution to self-attention by proving that their operations can be substituted for each other. Cordonnier~\etal ~\cite{cordonnier2019relationship} demonstrates that self-attention and convolution can have the same operation when relative positional encoding and the particular settings are applied. T-CNN~\cite{d2021transformed} presents the model using GPSA proposed by ConViT, which reparameterizes convolution layer as GPSA layers. C-MHSA~\cite{li2021can} prove that reparameterization between two models is also possible even when the input was patch unit, and propose a two-phase training model, which initializes ViT from a well-trained CNN utilizing the construction in above theoretical proof. \section{Preliminaries} Here, we recall the mathematical definitions of multi-head self-attention and convolution to help understand the next section. Then, we briefly introduce the background of reparameterization from convolution layer to self-attention layer. We follow the notation in~\cite{cordonnier2019relationship}. \subsubsection{convolution layer} The convolution layer has locality and translation equivariance characteristics, which are useful inductive biases in many vision tasks. Those inductive biases are encoded in the model through parameter sharing and local information aggregation. Thanks to the inductive biases, better performance can be obtained with a low data regime compared to a transformer that has a global receptive field. The output of the convolution layer can be roughly formulated as follows: \begin{equation}\label{eq:conv} \mathrm{Conv}(\bfit{X}) = \sum_{\Delta}\bfit{X}\bfit{W}^C, \end{equation} where $\bfit{X}\in\mathbb{R}^{H\times W \times C}$ is an image tensor, $H$,$W$,$C$ is the image height, width and channel, $\bfit{W}^C$ is convolution filter weight and the set \begin{equation} \Delta = \bigg[-\floor{\frac{K}{2}},\cdot\cdot\cdot\;,\floor{\frac{K}{2}}\bigg] \times \bigg[-\floor{\frac{K}{2}},\cdot\cdot\cdot\;,\floor{\frac{K}{2}}\bigg] \end{equation} is the receptive field with $K\times K$ kernel. 
\subsubsection{Multi-head Self-Attention Mechanism} Multi-head self-attention(MHSA) mechanism~\cite{vaswani2017attention} trains the model to find semantic meaning by finding associations among a total of $N$ elements using query $\bfit{Q}\in\mathbb{R}^{N\times d_{H}}$, key $\bfit{K}\in\mathbb{R}^{N\times d_{H}}$, and value $\bfit{V}\in\mathbb{R}^{N\times d_{H}}$, where $d_{H}$ is the size of each head. After embedding the sequence $\textbf{\textit{X}} \in \mathbb{R}^{N \times d}$ as a query and key using $\bfit{W}^Q\in\mathbb{R}^{d\times d_H}$ and $\bfit{W}^K\in\mathbb{R}^{d\times d_H}$, an attention score $\textbf{\textit{A}}\in\mathbb{R}^{N\times N}$ can be obtained by applying softmax to the value obtained by inner producting $\textit{\textbf{Q}}$ and $\textit{\textbf{K}}$, where $d$ is the size of an input token. Self-attention(SA) is obtained through matrix multiplication of $\bfit{V}$ embedded by $\bfit{W}^V\in\mathbb{R}^{N\times d_{H}}$ and $\bfit{A}$: \begin{equation}\label{eq:SA} \begin{split} \mathrm{SA}(\bfit{X}) = \bfit{A}(\bfit{XW}^Q,\bfit{XW}^K)\bfit{XW}^V,\\ \textbf{\textit{A}}(\textbf{Q},\textbf{K}) = \mathrm{softmax} \left( \frac{\bfit{QK}^\top}{\sqrt{d}}+\textbf{\textit{B}} \right), \end{split} \end{equation} where \textit{\textbf{B}} is a relative position suggested in~\cite{dai2019transformer}. By properly setting the relative positional embedding $\bfit{B}$, we can force the query pixel to focus on only one key pixel. MHSA allows the model to attend information from different representation subspaces by performing an attention function in parallel using multiple heads. MHSA with a total of $N_H$ heads can be formulated as follows: \begin{equation}\label{eq:mhsa} \mathrm{MHSA}(\bfit{X})=\sum_{k=1}^{N_{H}}{\mathrm{SA}}_k(\bfit{X})\bfit{W}^O_k, \end{equation} where $\bfit{W}^O$ is learnable projection and $k$ is the index of the head. \subsubsection{Reparameterizing MHSA into Convolution Layer} ~\cite{li2021can} showed that $K\times K$ kernels can be performed through $K^2$ heads, where $K$ is the size of the kernel. Since the convolution layer is agnostic to the context of the input, it is necessary to set $\bfit{W}^Q$ and $\bfit{W}^K$ as $\textbf{0}$ to convert the convolution to MHSA. Using equations~(\ref{eq:SA}) and~(\ref{eq:mhsa}) together, MHSA can be formulated as follows: \begin{equation} \mathrm{MHSA}(\bfit{X}) = \sum_{k=1}^{N_{H}} \bfit{A}_k\bfit{X}\bfit{W}^V _k\bfit{W}^O_k. \end{equation} As $\bfit{A}_k\bfit{X}$ is used to select the desired pixel, the knowledge of the convolution layer can be completely transferred to the MHSA by setting $\bfit{W}^V$ to $\bfit{I}$ and initializing $\bfit{W}^O$ to $\bfit{W}^C$. \section{Inductive Bias Analysis of Various Architectures}\label{sec:FourierMain} \input{table/main_imagenet_subset} In this section, we analyze various architectures through Fourier analysis and accuracy tendency according to data scale. Previous works designing the modules by mixing convolution-like inductive bias to ViTs overlook that a fixed architecture has a fixed inductive bias and optimal inductive bias can change according to data scale. To confirm it, we conduct experiments that measure the accuracy of various architectures by changing the data scale of ImageNet~\cite{deng2009imagenet}. In these experiments, we observe that the required data scale for outperforming ResNet is different for each architecture. 
Then, we link frequency characteristics of the recent ViT variants and the tendency of their accuracy with data scale by expanding observations of Park~\etal~\cite{park2022vision}. In \cite{park2022blurs,park2022vision}, they analyze feature maps in Fourier space and demonstrate that self-attention is a low-pass filter, and convolution is a high-pass filter. This phenomenon of filtering noise of different frequencies is caused by different inductive biases of self-attention and convolution. With Fourier analysis of Park~\etal~\cite{park2022vision}, we observe that architecture having more CNN-like frequency characteristics shows CNN-like efficiency and accuracy tendency in the small-scale datasets. Park~\etal~\cite{park2022vision} conducted Fourier analysis only for ViT and ResNet, but we analyzed several models with various attempts to inject convolutional induction biases into ViT architecture. In this section, the Fourier characteristics vary for each injected inductive bias, and we can see which model among the ViT variables was more convolution-like or self-attention-like. Section \ref{sec:Reparam} will show that we can interpolate from these convolution-like Fourier features to self-attention-like Fourier features with reparameterization. \begin{figure*} \centering \begin{subfigure}{0.49\linewidth} \includegraphics[width=\linewidth]{figure/Subset_Tiny.pdf} \vspace{-5pt} \caption{} \label{fig:2a} \end{subfigure} \begin{subfigure}{0.49\linewidth} \includegraphics[width=\linewidth]{figure/Subset_Small.pdf} \vspace{-5pt} \caption{} \label{fig:2b} \end{subfigure} \vspace{-5pt} \caption{\textbf{Comparisons of accuracy between ResNet and various ViT-like architectures.} Each model is trained on the subsets of imagenet, specifically 1\%, 5\%, 10\%, 50\%, and 100\%. We plot the accuracy difference between ResNet and other architectures with the increasing subset ratio. The numbers in parentheses mean the number of parameters of each model.}\vspace{-20pt} \label{fig:imagenet_subset} \end{figure*} \subsection{Our Hypothesis} We hypothesize that 1) the more convolution-like inductive bias is included, the smaller the data scale is required where the ViT-like model outperforms CNNs, and 2) frequency characteristics can explain whether the inductive bias of model is closer to CNNs or ViTs. Specifically, the incapacity to which the layer amplifies the high-frequency signal tends to dramatically increase from the first layer to the last layer in CNN, whereas ViT does not increase well. ViTs injected with the inductive bias of convolutions tend to increase it, but not as drastic as CNN. Here, we observe that ViTs increasing this incapacity more dramatically perform well on smaller scale data like CNNs. \subsection{Data Scale Experiment}\label{sec:data_exp} CNNs have inductive biases such as locality and translation invariance and ViTs do not. Because of the difference in inductive bias that architecture has, the data scale determines their superiority. In small-scale data, CNNs outperform ViTs, and at some point, ViTs outperform CNNs as the data scale grows. ViT variants injected with the convolution-like inductive bias have stronger inductive bias compared to na\"ive ViT, and the amount of data required to outperform ResNet will be less than it. In this subsection, we identify accuracy trends and the amount of data required to outperform ResNet for various architectures by changing the data scale. 
By utilizing the taxonomy of vision transformers proposed in~\cite{liu2021survey}, we choose representatives of each category as the ViT variants to compare. ResT~\cite{zhang2021rest} injects inductive bias directly by adding convolution layers, whereas Swin~\cite{liu2021swin} and ConViT~\cite{d2021convit} add locality in a new way: Swin constrains global attention, while ConViT proposes a new self-attention layer that can act as a convolution layer in the initial stage of training. Therefore, we select ResNet-18 and ResNet-50 as the basic CNN architectures, DeiT-Ti as the vanilla ViT, and ResTv1-Light, ConViT-Ti, and Swin-T as the ViT variants to be tested. Since the number of parameters also significantly affects performance, we compare the tiny version of Swin (Swin-T)~\cite{liu2021swin} with ResNet-50~\cite{he2016deep} and the remaining ViT variants with ResNet-18~\cite{he2016deep}; Swin-T has more parameters than the other models since its channel dimension is doubled at every stage. At 0.01, the smallest data scale, the ResNet models, which consist only of convolutions, perform best, and between them ResNet-18, which has fewer parameters, achieves the highest accuracy. However, as the data scale increases, the accuracy of the ViT models increases more rapidly than that of ResNet. In particular, ResTv1-Light~\cite{zhang2021rest} and Swin-T~\cite{liu2021swin}, which have hierarchical structures, show superior performance among the ViT variants, and ResTv1-Light even records the highest accuracy of all models when the data scale is 0.05 or more. As illustrated in Figure~\ref{fig:imagenet_subset}, DeiT-Ti~\cite{touvron2021training} outperforms ResNet only when the data scale is close to 1, while ConViT-Ti~\cite{d2021convit} and Swin-T~\cite{liu2021swin} outperform it at 0.5 or more. Meanwhile, the accuracy of ResTv1-Light exceeds that of ResNet-18 from the fairly small data scale of 0.05. Therefore, we argue that the inductive bias is strong in the order of ResTv1-Light, Swin-T, ConViT-Ti, and DeiT-Ti. These experiments show that inductive bias and hierarchical structure have a great influence on accuracy in the low-data regime.
\begin{figure*}
\centering
\includegraphics[width=\linewidth]{figure/Figure_fourier.pdf}
\caption{\textbf{Frequency characteristics of ViTs and ResNet.} In ResNet-50, ResTv1-Light, and Swin-T, the difference in log amplitude increases sharply as the normalized depth increases. In contrast, DeiT and ConViT, which inject inductive biases into the model only softly, do not show this tendency.}
\vspace{-20pt}
\label{fig:fourer_analysis}
\end{figure*}
\subsection{Fourier Analysis}\label{sec:fourier}
As shown in Section~\ref{sec:data_exp}, the data scale required to outperform ResNet differs for each architecture. Inspired by the analysis of Park~\etal~\cite{park2022vision}, we show through Fourier analysis that architectures whose frequency characteristics are more similar to those of ResNet tend to outperform ResNet at smaller data scales. As in~\cite{park2022vision, park2022blurs}, the feature maps of each layer can be converted to the two-dimensional frequency domain with the Fourier transform.
The transformed feature maps can be represented on a normalized frequency axis, where the frequency is normalized to $[-\pi,\pi]$. High-frequency components are located at $-\pi$ and $\pi$, and the lowest-frequency components at $0$. We then use the difference in log amplitude to report the ratio of high-frequency to low-frequency amplitude. For better visualization, the differences in log amplitude between $0$ and $1/3\pi$, between $0$ and $2/3\pi$, and between $0$ and $\pi$ are used to capture the overall frequency characteristics. Figure~\ref{fig:fourer_analysis} shows the frequency characteristics obtained through this Fourier analysis. For ResNet, the difference in log amplitude increases sharply as the normalized depth increases: early layers tend to amplify the high-frequency signal, and this tendency decreases sharply toward the last layer. In contrast, DeiT and ConViT, which inject inductive biases only softly, do not show this tendency, and their frequency characteristics are similar across layers. The results of Swin and ResT, which strongly inject inductive biases through a local attention mechanism or convolution, show that the increase of the difference in log amplitude lies at an intermediate level between ResNet and DeiT. By combining the results of Figure~\ref{fig:fourer_analysis} and Table~\ref{tab:main_imagenet}, we can see that a model performs well on small-scale data if the increase in the difference in log amplitude across layers is sharp. This increase becomes smoother in the order of ResNet, ResT, Swin, ConViT, and DeiT, and the accuracy in the low-data regime is higher in the same order. These results are consistent with the observation of previous work that the inductive bias of CNNs helps a model learn on small-scale data. From this, we argue that the difference in log amplitude across layers can measure the CNN-like inductive bias of a model: if it increases sharply, as in CNNs, the model has a strong inductive bias and performs well in the low-data regime.
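For reference, this $\Delta$ log-amplitude measure can be computed per block as in the following sketch (a simplified illustration of the measure described above, assuming feature maps stored as PyTorch tensors of shape $B\times C\times H\times W$; the averaging and sign conventions of the reference implementation of \cite{park2022vision} may differ):
\begin{verbatim}
import torch

def delta_log_amplitude(feat: torch.Tensor, hi_freq: float = 1.0) -> float:
    """Difference in log amplitude between frequency 0 and hi_freq * pi
    for a feature map of shape (B, C, H, W)."""
    # 2D FFT over the spatial dimensions, shifted so frequency 0 is centered.
    f = torch.fft.fftshift(torch.fft.fft2(feat.float()), dim=(-2, -1))
    log_amp = torch.log(f.abs() + 1e-8).mean(dim=(0, 1))  # average over B and C
    h, w = log_amp.shape
    cy, cx = h // 2, w // 2
    r = int(round(hi_freq * min(cy, cx)))  # radius r ~ frequency hi_freq * pi
    low = log_amp[cy, cx]
    # Sample the high-frequency amplitude on the horizontal and vertical axes.
    high = torch.stack([log_amp[cy, min(cx + r, w - 1)],
                        log_amp[min(cy + r, h - 1), cx]]).mean()
    return (high - low).item()

# Reporting this value for the output of every block yields the
# per-depth curves discussed above.
\end{verbatim}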
\section{Reparameterization Can Interpolate Inductive Biases}\label{sec:Reparam}
As shown in Section~\ref{sec:FourierMain}, a fixed architecture does not have a flexible inductive bias, so it has to be tuned for each dataset. Since modifying the architecture to obtain a suitable inductive bias for every dataset is too time-consuming, a method that can flexibly adjust the inductive bias during training is needed. Through reparameterization, we observe that a model trained longer with convolution than with self-attention has more CNN-like frequency characteristics. With these results, we show that reparameterization can interpolate the inductive bias between CNNs and ViTs by adjusting the moment of reparameterization during training.
\subsection{Experimental Settings}
Because reparameterization can change convolution into self-attention, we can adjust the ratio of epochs for which each layer is trained with convolution and with self-attention. On a $10\%$ subset of ImageNet, we adjust this ratio with four settings: the model is trained with 1) convolution for 300 epochs and self-attention for 0 epochs, 2) convolution for 250 epochs and self-attention for 50 epochs, 3) convolution for 150 epochs and self-attention for 150 epochs, and 4) convolution for 50 epochs and self-attention for 250 epochs. Note that the number of epochs trained with convolution decreases from 1) to 4). We follow the reparameterization setting of CMHSA-3~\cite{li2021can} and the Fourier analysis of Section~\ref{sec:fourier}.
\begin{figure*}[t]
\centering
\includegraphics[width=\linewidth]{figure/Figure_interpolation.pdf}
\vspace{-5pt}
\caption{\textbf{Visualization of Interpolation.} As the ratio of epochs trained with self-attention increases, the difference in log amplitude of early-stage layers tends to increase, and the difference in log amplitude of late-stage layers tends to decrease. Conv $x$, SA $y$ denotes that the model is trained with convolution for $x$ epochs and self-attention for $y$ epochs.}
\label{fig:interpolation}
\end{figure*}
\subsection{Interpolation of Convolutional Inductive Bias}\label{sec:intconvind}
Figure~\ref{fig:interpolation} shows the results of the Fourier analysis according to the ratio of epochs trained with convolution and with self-attention. Comparing 1) to 4), we can see that the degree of increase becomes smaller from 1) to 4). As the ratio of epochs trained with self-attention increases, the difference in log amplitude of early-stage layers tends to increase, and that of late-stage layers tends to decrease. These results show that more training with convolution makes the degree of increase sharper. As observed in Section~\ref{sec:fourier}, a more sharply increasing difference in log amplitude along the normalized depth indicates that the model has more CNN-like inductive biases. Combining the results of Figure~\ref{fig:interpolation} with this observation, we can see that more training with convolution gives the model more CNN-like inductive biases.
\section{Progressive Reparameterization Scheduling}
\begin{figure}[t]
\centering
\includegraphics[width=0.8\linewidth]{figure/Ours/ConvAttn4.pdf}\\
\vspace{-5pt}
\caption{\textbf{Illustration of PRS.} Conv. is a block with a convolutional layer, and Self Attn. is a block with a self-attention layer. Each block is progressively transformed from a convolution block to a self-attention block as the training progresses.}
\label{fig:main-network}\vspace{-10pt}
\end{figure}
We now propose Progressive Reparameterization Scheduling (PRS), which adjusts the inductive bias of a ViT for learning on small-scale data. PRS is based on the following findings:
\begin{itemize}
\item As shown in Section~\ref{sec:FourierMain}, the more convolution-like inductive bias is included, the smaller the data scale at which the ViT-like model outperforms CNNs. In more detail, a model performs well on small-scale data if the increase in the difference in log amplitude across layers is sharp.
\item Furthermore, in the interpolation experiment of Section~\ref{sec:Reparam}, if a layer is trained in the convolution state for more epochs, it has more convolution-like characteristics; if it is trained in the self-attention state for more epochs, it has more self-attention-like characteristics. That is, by adjusting the schedule, it is possible to interpolate how much inductive bias the model has between self-attention and convolution.
\end{itemize}
From these findings, PRS makes the early layers have a small difference in log amplitude, acting as high-pass filters, and the last layers have a large difference in log amplitude, acting as low-pass filters.
Because convolution and self-attention serve as a high-pass filter and a low-pass filter, respectively, as shown by Park~\etal~\cite{park2022vision}, PRS lets the rear layers play the role of self-attention and the front layers the role of convolution. In order to force the rear layers to focus more on the role of self-attention than the front layers, PRS reparameterizes each layer from convolution to self-attention according to a linear time schedule, starting from the rear part of the network. PRS is depicted in Figure~\ref{fig:main-network} and can be expressed as follows:
\begin{align}
&\bfit{z}_0 = \mathrm{PE}(\bfit{X}), \\
&\begin{aligned}
{\bfit{z}^{'}_{l}} =
\begin{cases}
\mathrm{Conv}(\mathrm{LN}(\bfit{z}_{l-1}))+\bfit{z}_{l-1}, & (t \leq T\cdot (1 - \frac{l}{L+1}))\\
\mathrm{MHSA}(\mathrm{LN}(\bfit{z}_{l-1}))+\bfit{z}_{l-1}, & (t > T\cdot (1 - \frac{l}{L+1}))
\end{cases}
\end{aligned} \\
&\bfit{z}_{l} = \mathrm{MLP}(\mathrm{LN}(\bfit{z}^{'}_{l})) + \bfit{z}^{'}_{l},\\
&\textbf{y}_{\ } = \mathrm{Linear}(\mathrm{GAP}(\bfit{z}_{L})),
\end{align}
where $\mathrm{PE}(\cdot)$ is the patch embedding function following~\cite{li2021can}, $\mathrm{LN}(\cdot)$ is LayerNorm~\cite{ba2016layer}, $\mathrm{GAP}(\cdot)$ is the global average pooling layer, $\mathrm{Linear}(\cdot)$ is a linear layer, $t$ denotes the current training epoch, $L$ denotes the total number of layers, $l = 1, 2, \cdots, L$ denotes the layer index, $T$ denotes the total number of training epochs, and $\textbf{y}$ denotes the output of the model.
\input{table/cifar100}
Table~\ref{tab:cifar100} shows the effectiveness of PRS on the CIFAR-100 dataset. PRS outperforms the baseline by +2.37p top-1 accuracy on CIFAR-100, showing that performance can be boosted by a simple schedule. We note that our PRS achieves better performance than the previous two-stage reparameterization strategy~\cite{li2021can}. These results show that PRS can dynamically apply an appropriate inductive bias to each layer. From the successful results of PRS, we conjecture that flexibly inducing inductive bias with reparameterization has the potential for designing models on various data scales.
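For clarity, the layer-wise schedule defined above can be summarized in a short sketch (an illustrative snippet only; the function name and block labels are placeholders and not part of our implementation):
\begin{verbatim}
def prs_block_types(epoch_t: int, total_epochs_T: int, num_layers_L: int):
    """Return, for each layer l = 1..L, whether it runs as a convolution
    block or a (reparameterized) self-attention block at epoch t, following
    the linear schedule: layer l stays convolutional while
    t <= T * (1 - l / (L + 1))."""
    types = []
    for l in range(1, num_layers_L + 1):
        switch_epoch = total_epochs_T * (1.0 - l / (num_layers_L + 1))
        types.append("conv" if epoch_t <= switch_epoch else "self-attention")
    return types

# Example with L = 6 layers and T = 300 epochs: the last layer switches to
# self-attention first, the first layer last.
# prs_block_types(50, 300, 6)
# -> ['conv', 'conv', 'conv', 'conv', 'conv', 'self-attention']
\end{verbatim}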
\section{Conclusion}
From the analysis of existing ViT-variant models, we draw the following conclusion: the more convolution-like inductive bias a model contains, the smaller the data scale at which the ViT-like model outperforms CNNs. Furthermore, we empirically show that reparameterization can interpolate inductive biases between convolution and self-attention by adjusting the moment of reparameterization during training. Based on this observation, we propose Progressive Reparameterization Scheduling (PRS), a flexible method that embeds the required amount of inductive bias in each layer. PRS outperforms existing approaches on small-scale datasets, e.g., CIFAR-100.
\vspace{-10pt}
\subsubsection{Limitations and Future Works}
Although linear scheduling is used in this paper, there is no guarantee that it is optimal. Therefore, through subsequent experiments on scheduling, PRS could be improved by making the schedule learnable rather than linear. In this paper we only covered datasets with scales up to ImageNet, but we will also analyze data scales larger than ImageNet. We also find that hierarchical architectures tend to have more CNN-like characteristics than non-hierarchical architectures. This finding about hierarchy can further improve our inductive bias analysis and PRS.
\bibliographystyle{splncs04}
\bibliography{egbib}
\end{document}
https://openreview.net/forum?id=8gDgxLAhrXK
https://arxiv.org/abs/2210.01370
Please evaluate the paper based on the reviewer's feedback, specifically addressing the concerns about the lack of connection between frequencies, inductive biases, and data efficiency, as well as the request for qualitative analysis or visual examples of low and high frequencies.
Preliminary work with interesting future extensions The paper describes an approach for developing resource-efficient Counterfactual Generative Networks (CGNs) through Knowledge Distillation (KD) from black-box pre-trained CGNs to smaller TinyGANS. The approach is then named by the authors Source-free Knowledge Distillation of Counterfactual Generative Networks (SKDCGN). The authors employ a TinyGAN for each of the independent mechanisms (i.e., shape, texture, background) in order to increase modularity and reduce the size of the overall model. Pros: - The paper is eligible for the workshop since the concept of “prior” is applicable both to 1) the knowledge transfer from the teacher CGN and 2) the built-in approach of CGNs that employ inductive biases (i.e., shape, texture, and background) to generate realistic images. - The paper tackles two critical problems of modern deep learning literature, i.e., 1) reducing the size of state-of-the-art GANs and 2) learning from large pre-trained models through black-box access. - The paper is well written and clearly presents the objectives, methodology, and qualitative results. - The related work section provides a good overview of the literature concerning this paper (i.e., CGNs and KD). Cons: - The paper employs and combines published techniques [3, 7] rather than proposing a novel method. - The evaluation of the approach is preliminary and needs extensions. Section 4 mainly presents qualitative rather than quantitative results. It would be interesting to evaluate SKDCGN on Out-of-Domain (OOD) classification tasks as performed in [3]. The results would probably be less compelling but still interesting to observe. Furthermore, given that the objective of SKDCGN is to make current CGNs more lightweight, it would be better to report a plot/table in which metrics quantify improvements. For instance, the reduction of the number of trainable parameters, GPU memory usage, etc. - As visible in Fig. 2, the texture mechanism is the one that suffers more from the reduction of size. Textures of Fig. 2 (b) generated by SKDCGN are hardly distinguishable. On the contrary, the shape and background mechanisms mimic quite well the original generations. Future work should develop more on the texture generation mechanism. For instance, by including data augmentation or other approaches that improve image synthesis for GANs. Minor issues: - The abstract is a bit lengthy and could probably be pruned. - Is the legend of Fig. 1 partially wrong? During inference, the TinyGANs should be “fixed” and during training “trainable”. Further, the composition mechanism is said to be untrainable by default. Rating: 6: Marginally above acceptance threshold Confidence: 4: The reviewer is confident but not absolutely certain that the evaluation is correct
\documentclass[runningheads]{llncs} \usepackage{graphicx} \usepackage{orcidlink} \usepackage{tikz} \usepackage{comment} \usepackage{amsmath,amssymb} % \usepackage{color} \usepackage{ragged2e} \usepackage{xcolor} % \usepackage{graphicx} \usepackage{subcaption} \usepackage{placeins} \usepackage[export]{adjustbox} \usepackage{caption} \usepackage{float} \usepackage[utf8]{inputenc} % \usepackage[T1]{fontenc} % \usepackage{hyperref} % \usepackage{url} % \usepackage{booktabs} % \usepackage{amsmath} \usepackage{amsfonts} % \usepackage{nicefrac} % \usepackage{microtype} % \usepackage[accsupp]{axessibility} % \usepackage{etoolbox} \newcommand{\repthanks}[1]{\textsuperscript{\ref{#1}}} \makeatletter \patchcmd{\maketitle} {\def\thanks} {\let\repthanks\repthanksunskip\def\thanks} {}{} \patchcmd{\@maketitle} {\def\thanks} {\let\repthanks\@gobble\def\thanks} {}{} \newcommand\repthanksunskip[1]{\unskip{}} \makeatother \begin{document} \pagestyle{headings} \mainmatter \def\ECCVSubNumber{} % \title{SKDCGN: Source-free Knowledge Distillation of Counterfactual Generative Networks using cGANs} % \titlerunning{SKDCGN} \author{Sameer Ambekar \orcidlink{0000-0002-8650-3180}\thanks{Equal contribution.\protect\label{contrib}} \and Matteo Tafuro \orcidlink{0000-0002-6167-2156}\repthanks{contrib} \and Ankit Ankit \orcidlink{0000-0002-9399-9209}\repthanks{contrib}\and \\Diego van der Mast\index{van der Mast, Diego} \orcidlink{0000-0002-0001-3069}\repthanks{contrib} \and Mark Alence \orcidlink{0000-0002-6622-5822}\repthanks{contrib} \and Christos Athanasiadis \orcidlink{0000-0003-4376-9066}} \authorrunning{S. Ambekar et al.} \institute{University of Amsterdam, Amsterdam, the Netherlands. \\ \email{ambekarsameer@gmail.com, tafuromatteo00@gmail.com, ankitnitt1721@gmail.com, diego.vandermast@student.uva.nl, mark.alence@gmail.com, c.athanasiadis@uva.nl }} \maketitle \begin{abstract} \justifying{With the usage of appropriate inductive biases, Counterfactual Generative Networks (CGNs) can generate novel images from random combinations of shape, texture, and background manifolds. These images can be utilized to train an invariant classifier, avoiding the wide spread problem of deep architectures learning spurious correlations rather than meaningful ones. As a consequence, out-of-domain robustness is improved. However, the CGN architecture comprises multiple over parameterized networks, namely BigGAN and U2-Net. Training these networks requires appropriate background knowledge and extensive computation. Since one does not always have access to the precise training details, nor do they always possess the necessary knowledge of counterfactuals, our work addresses the following question: Can we use the knowledge embedded in pre-trained CGNs to train a lower-capacity model, assuming black-box access (i.e., only access to the pretrained CGN model) to the components of the architecture? In this direction, we propose a novel work named SKDCGN that attempts knowledge transfer using Knowledge Distillation (KD). In our proposed architecture, each independent mechanism (shape, texture, background) is represented by a student 'TinyGAN' that learns from the pretrained teacher 'BigGAN'. We demonstrate the efficacy of the proposed method using state-of-the-art datasets such as ImageNet, and MNIST by using KD and appropriate loss functions. 
Moreover, as an additional contribution, our paper conducts a thorough study on the composition mechanism of the CGNs, to gain a better understanding of how each mechanism influences the classification accuracy of an invariant classifier. Code available at: \url{https://github.com/ambekarsameer96/SKDCGN}} \end{abstract} \section{Introduction} \label{sec:intro} Deep neural networks are prone to learning simple functions that fail to capture intricacies of data in higher-dimensional manifolds \cite{DBLP:journals/corr/abs-2110-02424}, which causes networks to struggle in generalizing to unseen data. In addition to spectral bias \cite{DBLP:journals/corr/abs-2110-02424} and shortcut learning, which are properties inherent to neural networks \cite{DBLP:journals/corr/abs-2004-07780}, spurious learned correlations are also caused by biased datasets. To this end, Counterfactual Generative Networks (CGNs), proposed by Sauer and Geiger \cite{DBLP:journals/corr/abs-2101-06046}, have been shown to generate novel images that mitigate this effect. The authors expose the causal structure of image generation and split it into three Independent Mechanisms (IMs) (object shape, texture, and background), to generate synthetic and \textit{counterfactual} images whereon an invariant classifier ensemble can be trained. The CGN architecture comprises multiple over-parameterized networks, namely BigGANs \cite{brock2019large} and U2-Nets \cite{DBLP:journals/corr/abs-2005-09007}, and its training procedure generally requires appropriate domain-specific expertise. Moreover, one does not always have access to the precise training details, nor do they necessarily possess the required knowledge of counterfactuals. Motivated by these observations, we propose \textit{Source-free Knowledge Distillation of Counterfactual Generative Networks} (SKDCGN), which aims to use the knowledge embedded in a pre-trained CGN to train a lower capacity model, assuming black-box access (i.e., only inputs and outputs) to the components of the source model. More specifically, we harness the idea of Knowledge Distillation (KD) \cite{DBLP:journals/corr/abs-2106-05237} to train a network comprising three (small) generative models, i.e. TinyGANs \cite{DBLP:journals/corr/abs-2009-13829}, each being responsible for a single independent mechanism. SKDCGN carries both practical and theoretical implications, and it is intended to: \begin{enumerate} \item Obtain a lightweight version of the CGN, reducing its computational cost and memory footprint. This is meant to (i) ease the generation of counterfactual datasets and hence encourage the development of robust and invariant classifiers, as well as (ii) potentially allowing the deployment of the model on resource-constrained devices. \item Explore whether we can \textit{learn} from a fully trained CGN and distill it to a less parameterized network, assuming that we do not have access to the training process of the model. \end{enumerate} Along the lines of the original paper, we demonstrate the ability of our model to generate counterfactual images on ImageNet-1k \cite{5206848} and Double-Colored MNIST \cite{DBLP:journals/corr/abs-2101-06046}. Furthermore, we compare our outputs to \cite{DBLP:journals/corr/abs-2101-06046} and a simple baseline in terms of out-of-distribution robustness on the original classification task. As an additional contribution, we conduct a study on the shape IM of the CGN. 
The paper is organized as follows: firstly, we present a brief literature survey in Section \ref{sec:related-work}; next in Section \ref{sec:approach} the SKDCGN is dissected; Section \ref{sec:exps-results} presents the experimental setup and the empirical results, which are finally discussed in Section \ref{sec:conclusion}. \section{Related work} \label{sec:related-work} This section introduces the fundamental concepts and the related works that we use as a base for our SKDCGN. \subsubsection{Counterfactual Generative Networks. } The main idea of CGNs \cite{DBLP:journals/corr/abs-2101-06046} has already been introduced in Section \ref{sec:intro}. Nonetheless, to aid the understanding of our method to readers that are not familiar with the CGN architecture, we summarize its salient components in this paragraph and also provide the network diagram in Appendix \ref{app:cgn-architecture}, Figure \ref{fig:cgn-diagram}. The CGN consists of 4 backbones: (i) the part of the network responsible for the shape mechanism, those responsible for (ii) texture and (iii) background, and a (iv) composition mechanism that combines the previous three using a deterministic function. Given a noise vector $\mathbf{u}$ (sampled from a spherical Gaussian) and a label $y$ (drawn uniformly from the set of possible labels y) as input, (i) the shape is obtained from a BigGAN-deep-256 \cite{brock2019large}, whose output is subsequently passed through a U2-Net \cite{DBLP:journals/corr/abs-2005-09007} to obtain a binary mask of the object shape. The (ii) texture and (iii) background are obtained similarly, but the BigGAN's output does not require to be segmented by the U2-Net. Finally, the (iv) composition mechanism outputs the final counterfactual image $\mathbf{x}_{gen}$ using the following analytical function: \begin{equation} \label{eq:composition} \mathbf{x}_{g e n}=C(\mathbf{m}, \mathbf{f}, \mathbf{b})=\mathbf{m} \odot \mathbf{f}+(1-\mathbf{m}) \odot \mathbf{b}, \end{equation} where $\mathbf{m}$ is the shape mask, $\mathbf{f}$ is the foreground (or texture), $\mathbf{b}$ is the background and $\odot$ denotes element-wise multiplication. More recently, \cite{khorram2022cycleconsistent} devises an approach that learns a latent transformation that generates visual CFs automatically by steering in the latent space of generative models. Additionally, \cite{DBLP:journals/corr/abs-2109-14274} uses a deep model inversion approach that provides counterfactual explanations by examining the area of an image. \subsubsection{Knowledge Distillation. } \cite{44873} firstly proposed to transfer the knowledge of a pre-trained cumbersome network (referred to as the \textit{teacher}) to a smaller model (the \textit{student}). This is possible because networks frequently learn low-frequency functions among other things, indicating that the learning capacity of the big network is not being utilized fully \cite{DBLP:journals/corr/abs-2110-02424} \cite{DBLP:journals/corr/abs-2004-07780}. Traditional KD approaches (often referred to as \textit{black-box}) simply use the outputs of the large deep model as the teacher knowledge, but other variants have made use of activation, neurons or features of intermediate layers as the knowledge to guide the learning process \cite{kdref1,kdref2}. Existing methods like \cite{DBLP:journals/corr/abs-2009-13829} are also making use of Knowledge distillation for the task of image generation. 
Our work is similar to theirs; however, they transfer the knowledge of a BigGAN trained on the ImageNet dataset to a TinyGAN. In contrast, we transfer not just the knowledge of image generation but also the task of counterfactual generation from a BigGAN to a TinyGAN.
\subsubsection{Distilling GANs using KD. } Given its high effectiveness for model compression, KD has been widely used in different fields, including visual recognition and classification, speech recognition, natural language processing (NLP), and recommendation systems \cite{kd-survey}. However, it is less studied for image generation. \cite{DBLP:journals/corr/abs-1902-00159} first applied KD to GANs. However, our project differs from theirs, as they use \textit{unconditional} image generation and less general architectures (DCGAN \cite{dcgan}), and they do not assume a black-box generator. Our setting is much more similar to that of \cite{DBLP:journals/corr/abs-2009-13829}, where a BigGAN is distilled to a network with 16$\times$ fewer parameters, assuming no access to the teacher's training procedure or parameters. Considering its competitive performance, we use the proposed architecture (TinyGAN) as the student model and use a modified version of their loss function (further details in Section \ref{sec:method-training}) to optimize our network.
\textbf{Source-free}: We term our method Source-free since we have no access to the source data, the source training details and procedure, or any knowledge about the counterfactuals, but only to the trained source models. This setting is similar to that of \cite{yang2021generalized} \cite{ding2022source}. For large diffusion models like Imagen \cite{saharia2022photorealistic} and DALL·E 2 \cite{https://doi.org/10.48550/arxiv.2204.13807}, whose training is usually extremely expensive in terms of computation, whose precise training details are often not public, and which are often not reproducible by academic groups, we frequently only have access to pretrained models. These can be used to transfer knowledge to a smaller network that performs the same task with a model of lower capacity.
\section{Approach}
\label{sec:approach}
This section dives into the details of the SKDCGN architecture, focusing on the training and inference phases separately for ImageNet-1k and MNIST. In addition, we discuss the loss functions that were employed for Knowledge Distillation.
\subsection{SKDCGN}
Although transferring the knowledge of an entire CGN into a single generative model could drastically reduce the number of parameters, this strategy would compromise the whole purpose of CGNs, i.e., disentangling the three mechanisms and having control over each of them. Therefore, we opt to train a generative model for each individual component. As shown in the architecture diagram (Figure \ref{fig:arch_diagram}), we treat each IM backbone as a black-box teacher and aim to mimic its output by training a corresponding TinyGAN student. Note that this implies that, in the case of the shape mechanism, a single generative model learns to mimic both the BigGAN and the U2-Net: we believe a TinyGAN should be capable of learning binary masks directly, removing the need for the U2-Net and reducing the model size even further. During inference, the outputs of the three students are combined into a final counterfactual image using the composition function defined in Equation \ref{eq:composition}.
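As a minimal illustration of this composition step (a sketch only; the tensor shapes are assumptions rather than constraints of our implementation), given a shape mask $\mathbf{m}$, a texture $\mathbf{f}$ and a background $\mathbf{b}$ produced by the three students, the counterfactual follows Equation \ref{eq:composition} directly:
\begin{verbatim}
import torch

def compose(mask: torch.Tensor, texture: torch.Tensor,
            background: torch.Tensor) -> torch.Tensor:
    """Deterministic composition: x_gen = m * f + (1 - m) * b.
    `mask` is expected in [0, 1] with shape (B, 1, H, W); `texture` and
    `background` have shape (B, 3, H, W)."""
    return mask * texture + (1.0 - mask) * background

# Example with random tensors standing in for the three student outputs.
m = torch.rand(2, 1, 256, 256)
f = torch.rand(2, 3, 256, 256)
b = torch.rand(2, 3, 256, 256)
x_gen = compose(m, f, b)  # shape: (2, 3, 256, 256)
\end{verbatim}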
\begin{figure}[t] \includegraphics[width=\linewidth]{Images/final_architecture.pdf} \caption{\textit{Architecture of the SKDCGN.} During training, each independent mechanism serves as a black-box teacher model to train a corresponding student model. During inference, the outputs of the three trained TinyGANs are combined using a Composition Mechanism that returns the final counterfactual image.} \label{fig:arch_diagram} \end{figure} \subsubsection{Training: Distilling the knowledge of IMs. } \label{sec:method-training} To train SKDCGN, we utilize each IM backbone from the CGN architecture as a black-box teacher for the student network, as visualized in the training section of Figure \ref{fig:arch_diagram} (the backbones are BigGAN + U2-Net for \textit{shape}, BigGAN for \textit{texture}, and BigGAN for \textit{background}). As introduced in the \hyperref[sec:related-work]{Related work} section, \cite{DBLP:journals/corr/abs-2009-13829} proposed an effective KD framework for compressing BigGANs. As the IMs in CGNs rely on BigGANs, we utilize their proposed student architecture. For completeness, the details of the student architecture are reported in Appendix \ref{app:tinygan-architecture}, Figure \ref{fig:tinygan-generator}. We base our training objective on the loss function proposed by \cite{DBLP:journals/corr/abs-2009-13829}. Our full objective comprises multiple terms: (i) a pixel-wise distillation loss, (ii) an adversarial distillation loss, (iii) a feature-level distillation loss, and (iv) KL Divergence. In addition to introducing KL Divergence, we deviate from the original TinyGAN training objective by omitting the term that allows the model to learn from real images of the ImageNet dataset. This would inevitably compromise the quality of the generated counterfactuals. KL Divergence leads to entropy minimization between the teacher and student, which is why we propose its usage. The individual loss terms are dissected below as from \cite{DBLP:journals/corr/abs-2009-13829}: \begin{enumerate} \item \textit{Pixel-wise Distillation Loss}: To imitate the functionality of BigGAN for scaling generation to high-resolution, high-fidelity images, we minimize the pixel-level distance (L1) between the images generated by BigGAN and TinyGAN given the same input: \begin{equation} \mathcal{L}_{\text{KD\_pix}} = \mathbb{E}_{z \sim p(z), y \sim q(y)}[\|T(z,y) - S(z,y) \|_{1}] \label{pixelwise_loss} \end{equation} where $T$ represents the Teacher network, $S$ represents the Student network, $z$ is a latent variable drawn from the truncated normal distribution $p(z)$, and $y$ is the class label sampled from some categorical distribution $q(y)$. \item \textit{Adversarial Distillation Loss}: To promote sharper outputs, an adversarial loss is incorporated to make the outputs of $S$ indistinguishable from those of $T$. It includes a loss for the generator (Eq. \ref{eq:loss-adv-gen}) and one for the discriminator (Eq. \ref{eq:loss-adv-dis}): \begin{align} \mathcal{L}_{\text{KD\_G}} =& - \mathbb{E}_{z, y}[D(S(z,y), y)] \label{eq:loss-adv-gen}\\ \mathcal{L}_{\text{KD\_D}} =& - \mathbb{E}_{z, y}\left[max(0, 1 - D(T(z,y), y)) + max(0, 1 - D(S(z,y), y))\right] \label{eq:loss-adv-dis}, \end{align} where $z$ is the noise vector, $y$ is the class label, $T(z,y)$ is the image generated by the Teacher $T$, while $G$ and $D$ are -- respectively -- the generator and discriminator of the Student $S$. 
\item \textit{Feature Level Distillation Loss}: To further overcome the blurriness in the images produced by the Student network, the training objective also includes a feature-level distillation loss. More specifically, we take the features computed at each convolutional layer of the discriminator $D$ and use a loss that encourages $S$ to generate images similar to those of $T$:
\begin{equation}
\mathcal{L}_{\text{KD\_feat}} = \mathbb{E}_{z, y}\left[\sum _{i} \alpha_{i}\left\|D_{i}(T(z,y),y) - D_{i}(S(z,y), y) \right\|_{1}\right]
\label{feature_loss}
\end{equation}
where $D_{i}$ represents the feature vector extracted from the $i^{th}$ layer of the discriminator and the corresponding weights are given by $\alpha_{i}$.
\item \textit{KL Divergence}: The L1 loss alone cannot reduce the entropy between the teacher and the student outputs. To improve the proposed method, we use the KL divergence in a similar fashion to \cite{asano2021extrapolating} for the task of knowledge distillation between images drawn from the source distribution $P(x)$ and the target distribution $Q(x)$. The KL divergence is defined as
\begin{equation}
\mathcal D_{\mathrm{KL}}(P \| Q)=\sum_{x \in \mathcal{X}} P(x) \log \left(\frac{P(x)}{Q(x)}\right)
\label{feature_loss_kl}
\end{equation}
and the corresponding loss term is
\begin{equation}
\mathcal{L}_{\text{KL}} = \sum_{x \in X}-p_{x}^{t} \log p_{x}^{s}+p_{x}^{t} \log p_{x}^{t}
\label{eq:kl-loss}
\end{equation}
where $x$ is the class label and $p^{t}_{x}$, $p^{s}_{x}$ are the softmax probabilities of the teacher and student generators, softened by the temperature.
\end{enumerate}
To sum up, the student's generator ($G$) and discriminator ($D$) are respectively optimized using the following objectives:
\begin{align}
\mathcal{L}_{\text{G}} = & \mathcal{L}_{\text{KD\_feat}} + \lambda_1 \mathcal{L}_{\text{KD\_pix}} + \lambda_2\mathcal{L}_{\text{KD}\_G} \,(\;+\;\mathcal{L}_{\text{KL}}\,)\\
\mathcal{L}_{\text{D}} = & \mathcal{L}_{\text{KD\_D}}
\end{align}
where $\lambda_1$ and $\lambda_2$ are the regularization terms mentioned in \cite{DBLP:journals/corr/abs-2009-13829}, and the KL divergence term ($\mathcal{L}_{\text{KL}}$) is only used in the enhanced version of SKDCGN.
Implementing the SKDCGN architecture requires training a TinyGAN for each Independent Mechanism of the CGN (see Fig. \ref{fig:arch_diagram}). The KD training procedure, however, requires training data. Hence, prior to training, 1000 images per class (totalling 1 million samples) are generated using the IM backbones extracted from the pre-trained CGN (as provided by Sauer and Geiger \cite{DBLP:journals/corr/abs-2101-06046}). Finally, note that the original CGN architecture (illustrated in Appendix \ref{app:cgn-architecture}, Figure \ref{fig:cgn-diagram}) comprises another BigGAN trained on ImageNet-1k. It is unrelated to the three Independent Mechanisms and provides primary training supervision via a reconstruction loss. We discard this component of the architecture for two main reasons: (i) we do not have a dataset of counterfactuals whereon such a GAN could be trained, and (ii) we argue that this additional knowledge is already embedded in the backbones of a pre-trained CGN.
\subsubsection{Inference: generating counterfactuals. } Once the three student networks are trained, their outputs are combined during inference akin to \cite{DBLP:journals/corr/abs-2101-06046}, using the analytical function of Equation \ref{eq:composition}. Since the composition function is deterministic, we treat inference as a separate stage from training.
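To make the training objective of Section \ref{sec:method-training} concrete, the generator-side distillation losses of Equations \ref{pixelwise_loss}--\ref{eq:kl-loss} can be sketched as follows (a minimal, illustrative implementation under our own simplifying assumptions; \texttt{teacher}, \texttt{student} and \texttt{student\_disc} are placeholders for the frozen IM backbone, the TinyGAN generator and its discriminator, and the \texttt{features} helper is assumed rather than part of an existing API):
\begin{verbatim}
import torch
import torch.nn.functional as F

def generator_loss(teacher, student, student_disc, z, y,
                   lambda_pix=1.0, lambda_adv=1.0, use_kl=False, temp=1.0):
    """Sketch of L_G = L_feat + lambda1 * L_pix + lambda2 * L_adv (+ L_KL)."""
    with torch.no_grad():
        t_img = teacher(z, y)                  # black-box teacher output
    s_img = student(z, y)

    # (i) pixel-wise distillation: L1 between teacher and student images.
    l_pix = F.l1_loss(s_img, t_img)

    # (ii) adversarial distillation (generator side, hinge formulation).
    l_adv = -student_disc(s_img, y).mean()

    # (iii) feature-level distillation: L1 between intermediate discriminator
    # features of the teacher and student images (equal weights alpha_i here).
    t_feats = student_disc.features(t_img, y)  # assumed helper returning a list
    s_feats = student_disc.features(s_img, y)
    l_feat = sum(F.l1_loss(sf, tf) for sf, tf in zip(s_feats, t_feats))

    loss = l_feat + lambda_pix * l_pix + lambda_adv * l_adv

    # (iv) optional KL term on temperature-softened outputs (enhanced variant).
    if use_kl:
        p_t = F.softmax(t_img.flatten(1) / temp, dim=1)
        log_p_s = F.log_softmax(s_img.flatten(1) / temp, dim=1)
        loss = loss + F.kl_div(log_p_s, p_t, reduction="batchmean")
    return loss
\end{verbatim}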
\section{Experiments and results} \label{sec:exps-results} This section defines our experimental setup, then proceeds to present the results. First, we test SKDCGN -- as defined in the \hyperref[sec:approach]{Approach} section -- on both ImageNet-1k and MNIST (Section \ref{sec:exps-skdcgn}), and based on the observed findings we make some changes to the proposed architecture to improve the quality of the results (Section \ref{sec:exps-improvement}). Due to computational constraints we test these improvements on a smaller dataset, namely the double-colored variant of MNIST \cite{726791}. Finally, as an additional contribution, we conduct a thorough study on the composition mechanism, to gain a better understanding of how each mechanism influences the classification accuracy of an invariant classifier. We present the results of such a study in Section \ref{sec:exps-comp-mechanism}. \subsection{Datasets} \paragraph{ImageNet-1k.} The ImageNet-1k ILSVRC dataset \cite{5206848} contains 1,000 classes, with each class consisting of 1.2 million training images, 50,000 validation and 100,000 test images. Images were resized to $256\times256$ to maintain consistent experiments and to allow direct comparisons with the original results of \cite{DBLP:journals/corr/abs-2101-06046}. \paragraph{Double-colored MNIST.} We use the \textit{double-colored} MNIST dataset proposed by Sauer and Geiger in the original CGN paper \cite{DBLP:journals/corr/abs-2101-06046}. This is a variant of the MNIST dataset where both the digits and the background are independently colored. It consists of 60,000 $28\times28$ images of the 10 digits, along with a test set of 10,000 images. \subsection{Baseline Model: CGN with generator replaced by TinyGAN generator} The SKDCGN is compared with a modified version of the original CGN architecture, where each BigGAN has been replaced by the generator model of a TinyGAN. Training this baseline using the procedure described by \cite{DBLP:journals/corr/abs-2009-13829}, omitting KD, allows for rigorous comparisons that emphasize the effectiveness of the knowledge distillation process. Further training details are provided in Appendix \ref{app:baseline-training}. 
\subsection{Results of SKDCGN} \label{sec:exps-skdcgn} \begin{figure}[t] \begin{subfigure}{\textwidth} \centering \hspace{6mm} \textit{ImageNet-1k} \hspace{36mm} \textit{Double-colored MNIST}\\ \includegraphics[width=0.48\linewidth]{Images/ims-outputs/imagenet/shape-left.png} \hfill \includegraphics[width=0.48\linewidth]{Images/ims-outputs/mnist/shape-left.png} \caption{\textit{Shape} mechanism.} \label{fig:shape_results} \end{subfigure} \\ \begin{subfigure}{\textwidth} \centering \hspace{6mm} \textit{ImageNet-1k} \hspace{36mm} \textit{Double-colored MNIST}\\ \includegraphics[width=0.48\linewidth]{Images/ims-outputs/imagenet/fg-left.png} \hfill \includegraphics[width=0.48\linewidth]{Images/ims-outputs/mnist/fg-left.png} \caption{\textit{Texture} mechanism.} \label{fig:fg_results} \end{subfigure} \\ \begin{subfigure}{\textwidth} \centering \hspace{6mm} \textit{ImageNet-1k} \hspace{36mm} \textit{Double-colored MNIST}\\ \includegraphics[width=0.48\linewidth]{Images/ims-outputs/imagenet/bg-left.png} \hfill \includegraphics[width=0.48\linewidth]{Images/ims-outputs/mnist/bg-left.png} \caption{\textit{Background} mechanism.} \label{fig:bg_results} \end{subfigure} \caption{A comparison of images (on both ImageNet-1k and double-colored MNIST) generated by the CGN backbones and those generated by the corresponding SKDCGN's TinyGAN (given the same input), for each independent mechanism.} \label{fig:im-results_t_b} \end{figure} The proposed model was firstly trained and tested on ImageNet-1k. To further validate our method, we repeated the training procedure on MNIST. The qualitative results are collected in Figure \ref{fig:im-results_t_b} and demonstrate that TinyGANs can closely approximate the output of each IM. While this is true for both datasets, the effectiveness of our method is especially visible in the case of MNIST. It is likely the case that the reduced capacity of the TinyGANs (compared to the original CGN backbones) is sufficient to decently model the underlying data distribution. ImageNet-1k, on the other hand, reveals more apparent (though still acceptable) discrepancies between the images, especially for the \textit{texture} IM. However, careful and extensive experiments revealed that the three TinyGANs could not generalize when random noise was given to the generator, i.e., they could not produce results beyond the test set. This might be due to a number of reasons. First, the compromised generalization capabilities of each IM's TinyGAN could be caused by their reduced network capacity. Furthermore, each TinyGAN was trained on all 1000 classes of ImageNet-1K, as opposed to Chang and Lu's choice of limiting the training data to the 398 animal labels \cite{DBLP:journals/corr/abs-2009-13829}. Finally, we generate the test samples using the test noise instead of random noise, since we hypothesize that the student networks only learn the manifolds that the teacher networks have been trained on. Additional experiments are required to analyze whether samples generated using random noise are found along the same manifold; unfortunately, we were hindered by the limited time frame allocated for this project, hence we leave this question open for future works. \begin{figure}[t!] 
\centering \begin{subfigure}{\textwidth} \includegraphics[width=\linewidth]{Images/kl_l1/30-test_mask.png} \caption{\textit{Shape} mechanism.} \label{fig:mnist_mask_kl_div_fg} \end{subfigure} \\ \begin{subfigure}{\textwidth} \includegraphics[width=\linewidth]{Images/kl_l1/29-test.png} \caption{\textit{Texture} mechanism.} \label{fig:mnist_mask_kl_div_bg} \end{subfigure} \\ \begin{subfigure}{\textwidth} \includegraphics[width=\linewidth]{Images/kl_l1/30-test_bg.png} \caption{\textit{Background} mechanism.} \label{fig:mnist__mask_kl_div_mask} \end{subfigure} \caption{A comparison of double-colored MNIST images generated by the CGN backbones and those generated by the corresponding SKDCGN's TinyGAN (given the same input) for each IM. Here, SKDCGN was tuned such that KL divergence is minimized between the teacher and student networks, and the L1 loss is multiplied with the activation of every layer.} \label{fig:mnist_kl_div} \end{figure} \begin{figure}[t!] \centering \begin{subfigure}{0.47\textwidth} \includegraphics[width=\linewidth]{Images/im_kl/1-sample.png} \caption{} \end{subfigure} \hfill \begin{subfigure}{0.47\textwidth} \includegraphics[width=\linewidth]{Images/im_kl/23-sample.png} \caption{} \end{subfigure} \caption{(a) Shape masks obtained after the \textit{first} epoch of SKDCGN training on ImageNet-1k, using KL divergence. (b) Shape masks obtained after the 23$^{\text{rd}}$ epoch of SKDCGN training on ImageNet-1k, \textit{without} KL divergence. Evidently, KL enhances the quality of the masks from the first epoch, whereas its absence compromises the results even at a later stage of training.} \label{fig:Imagenet_mask_kl_div} \end{figure} \subsection{Improving the SKDCGN model} \label{sec:exps-improvement} The results presented in the previous section reveal that the outputs are noisy and ambiguous in nature when knowledge distillation is performed using the pre-trained models provided by Sauer and Geiger \cite{DBLP:journals/corr/abs-2101-06046} (note the artifacts in the SKDCGN's outputs of Fig. \ref{fig:im-results_t_b}, especially those trained on ImageNet-1k). This statement was supported by an interesting yet unexpected result of the study on the composition mechanism (refer to Section \ref{sec:exps-comp-mechanism}): it was observed that modifying Equation \ref{eq:composition} such that the shape mask $\mathbf{m}$ is multiplied with a weight factor of 0.75 (i.e., setting the transparency of the shape mask to 75\%), yielded an accuracy increase of the CGN's invariant classifier. The findings of this experiment -- conducted on the double-colored MNIST dataset -- suggest that the mask component is noisy in nature, leading to ambiguities in the decision boundaries during the classification of several digits. In light of this new hypothesis, we attempt to use the \textit{Kullback–Leibler} (KL) divergence to improve the visual quality of the outputs\footnote{It is noteworthy that other techniques were tested in the attempt to improve the visual quality of the results. Although they did not prove to be as beneficial, they are described in Appendix \ref{sec:improve_skdcgn}.}. Since KL leads to entropy minimization between the teacher and student networks, we deem such a technique adequate for the task at hand. Moreover, the choice of using KL was encouraged by the work of Asano and Saeed \cite{asano2021extrapolating}, which proved the suitability of the measure in this context. Concretely, the KL Divergence loss (as defined in Eq. 
\ref{eq:kl-loss}) was included in the overall generator loss $\mathcal{L}_{\text{G}}$ as seen in Equation \ref{eq:loss-adv-gen}. First, the modified SKDCGN was tested on the double-colored MNIST dataset. As depicted in Figure \ref{fig:mnist_kl_div}, the introduction of KL divergence improves SKDCGN's visual fidelity of both \textit{background} and \textit{texture} IMs, while the quality of the \textit{shape} masks seems to diminish after a few epochs. Contrarily, this approach appeared to be beneficial for the shape mechanism too, in the context of ImageNet-1k. The shape masks resulted more natural and consistent since the first epoch, whereas the absence of KL yielded noisy masks even at a later stage of training (refer to Figure \ref{fig:Imagenet_mask_kl_div}). \subsection{Additional results: study of the shape IM} \label{sec:exps-comp-mechanism} \begin{table}[t] \centering \begin{tabular}{lrrr} \toprule & \;\;Noise & \;\;Rotation & \;\;Transparency\\ \midrule Train Accuracy & $99.9$ & $99.1$ & $94.7$ \\ Test Accuracy & $14.96$ & $13.51$ & $\mathbf{58.86}$ \\ \bottomrule\\ \end{tabular} \caption{Results of the invariant classifier for the analysis of the shape IM. The classifier has been trained to predict whether images are CGN-generated or real. The training examples contain counterfactuals whose shape mechanism has been tuned with one of the three transformations indicated in the table (noise, rotation, transparency -- refer to Sec.\ref{sec:exps-comp-mechanism} for further details).} \label{tab:shape_exp_results} \end{table} \begin{figure}[t] \centering \begin{subfigure}{0.25\textwidth} \includegraphics[width=\linewidth]{Images/Shape_exp/noise/1_46000_mask.png} \caption{} \end{subfigure} \hfill \begin{subfigure}{0.25\textwidth} \includegraphics[width=\linewidth]{Images/Shape_exp/rot/1_46000_mask.png} \caption{} \end{subfigure} \hfill \begin{subfigure}{0.25\textwidth} \includegraphics[width=\linewidth]{Images/Shape_exp/trans/1_46000_mask_2.png} \caption{} \end{subfigure} \caption{Shape masks obtained after (a) addition of Gaussian random noise, (b) application of random rotation and (c) decrease of the mask opacity (i.e., lowering its transparency to 75\%).} \label{fig:shape_exp} \end{figure} As an additional contribution, we conduct a thorough study on the composition mechanism, to gain a better understanding of how the mechanisms influence the classification accuracy of an invariant classifier (i.e., a classifier that predicts whether an image is CGN-generated or real). Due to the limited time at our disposal, we focused on the mechanism that we deem most important in the decision-making of such a classifier, namely the \textit{shape}. To evaluate the effects of the shape IM we trained several (original) CGN models on the double-colored MNIST dataset; we tuned the resulting shape masks prior to the counterfactual image generation (governed by the composition mechanism of Equation \ref{eq:composition}) and used the generated images to train an invariant classifier. More specifically, we experimented with (i) the addition of Gaussian noise in the shape mask, (ii) random rotation of the mask, and (iii) multiplying the mask $\mathbf{m}$ in the composition mechanism (Eq. \ref{eq:composition}) with a factor smaller than 1 (or in other words, lowering the opacity of the shape mask). A transparency of 75\% (hence a weight factor of $0.75$) was experimentally found to be most beneficial for the accuracy of the classifier. 
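For concreteness, the three mask transformations can be sketched as follows (an illustrative sketch; the noise scale and rotation range shown are assumptions, not the exact values used in our experiments):
\begin{verbatim}
import torch
import torchvision.transforms.functional as TF

def perturb_mask(mask: torch.Tensor, mode: str) -> torch.Tensor:
    """Apply one of the three shape-mask transformations before composition.
    `mask` has shape (B, 1, H, W) with values in [0, 1]."""
    if mode == "noise":         # (i) additive Gaussian noise
        return (mask + 0.1 * torch.randn_like(mask)).clamp(0.0, 1.0)
    if mode == "rotation":      # (ii) random rotation (here up to +/- 30 degrees)
        angle = float(torch.empty(1).uniform_(-30.0, 30.0))
        return TF.rotate(mask, angle)
    if mode == "transparency":  # (iii) lower the mask opacity to 75%
        return 0.75 * mask
    raise ValueError(mode)

# The perturbed mask m' then replaces m in the composition:
# x_gen = m' * f + (1 - m') * b.
\end{verbatim}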
The influence of the three transformations on the invariant classifier is quantified -- in terms of accuracy -- in Table \ref{tab:shape_exp_results}; sample shape masks generated from each transformation are displayed in Figure \ref{fig:shape_exp}. It is apparent from the test accuracy values that Gaussian noise and random rotations do not lead to any remarkable performance of the classifier; on the contrary, they degrade its accuracy to values below 15\%. This is most likely the result of overfitting on the training set, as supported by the \textit{train} accuracy values. On the other hand, lowering the opacity of the mask substantially boosts the test accuracy, improving the previous results by roughly a factor of four. It is noteworthy that the masks obtained using the transparency adjustment are more akin to those achieved using regular CGNs (see Figure \ref{fig:shape_exp}). The other transformations, instead, result in mask shapes that are markedly different. As such, they can potentially be used to make classifiers more robust when mixed with regular data during training. Because this is an extensive topic, we believe it warrants further research.
\section{Discussion and conclusion}
\label{sec:conclusion}
With the prevalence of heavily parameterized architectures such as BigGANs, and with the advent of limited-access models like the trending DALL·E 2, source-free compression becomes a growing necessity. In this paper we explored the possibility of obtaining a lightweight version of the CGN network, assuming that we do not have access to the training process of the model. More specifically, we treat the backbone of each independent mechanism (shape, texture and background) as a black box, then use KD to transfer the knowledge of the pre-trained cumbersome networks to simple TinyGANs. SKDCGN achieves a remarkable compression of the overall network: it models the shape mechanism -- initially controlled by a BigGAN (55.9M parameters) and a U2-Net (44M parameters) -- using a single TinyGAN (6.4M parameters); similarly, it replaces the BigGANs responsible for the texture and background IMs with TinyGANs, and discards the fourth BigGAN of the original CGN network that provides primary training supervision via the reconstruction loss. This translates into four BigGANs and one U2-Net (55.9M$\times$4 + 44M parameters, totalling 267.6M) being replaced with three simple TinyGANs (6.4M parameters each, meaning 19.2M parameters in total). Despite the significant compression, we demonstrate the ability of our model to generate counterfactual images on the ImageNet-1k and double-colored MNIST datasets (see Figure \ref{fig:im-results_t_b}). When trained on the latter, SKDCGN's network capacity proved sufficient to model the simple data distribution. When trained on the former, the proposed method exhibited a remarkable ability to mimic the original shape and background generations, while the texture mechanism suffered more from the reduction in size. This finding reveals great potential for future works that would attempt to tune the distillation (and hence enhance the synthesis) of the texture images, for instance by including data augmentation in the training procedure. Given the obtained results, we attempt to limit the presence of noisy and ambiguous artifacts by minimizing the entropy between the teacher and student networks. We introduce a new measure into the knowledge distillation loss, i.e.,
KL divergence, which we find to enhance the visual quality results of some IMs for both Imagenet-1k and MNIST. Finally, we conduct a study on the composition mechanism to gain a better understanding of how the \textit{shape} IM influences the classification accuracy of an invariant classifier. Though other adjustments were tested, giving a lower weight to the shape mask $\mathbf{m}$ seemingly boosts the classifier performance. \section{Future work} To conclude, the experimental findings of SKDCGN prove that, upon the usage of Knowledge Distillation, one can transfer the capacity/ability of a cumbersome network to a lower-capacity model while still maintaining competitive performances. Although this paper unveils its potential, SKDCGN requires further research that we encourage other researchers to undertake. In addition to the suggestions offered throughout the sections, possible avenues of research include and are not limited to: improving the image generation process by using higher-order activation functions, since the utilized datasets consist of rich image data; improving the teacher-student architecture by introducing additional loss functions; using a learnable, neural network-based composition function instead of an analytical expression. \section*{Acknowledgments} We would like to express our sincere gratitude to Prof. dr. Efstratios Gavves and Prof. Wilker Aziz for effectively organizing the \textit{Deep Learning II} course at the University of Amsterdam, which is the main reason this paper exists. We are thankful to our supervisor, Christos Athanasiadis, for his precious guidance throughout the project. Finally, we also thank the former Program Director of the MSc. Artificial Intelligence, Prof. dr. Cees G.M. Snoek, and the current Program Manager, Prof. dr. Evangelos Kanoulas, for effectively conducting the Master's program in Artificial Intelligence at the University of Amsterdam. \clearpage \appendix \section*{Appendix} \section{Architecture details of the different models} This section contains the architectural details of the different model used in the proposed method. It brushes up the theory of the papers whereon we base our work (i.e. the CGN network \cite{DBLP:journals/corr/abs-2101-06046}, Sec. \ref{app:cgn-architecture} and the TinyGAN model \cite{DBLP:journals/corr/abs-2009-13829}, Sec. \ref{app:tinygan-architecture}) and also presents the baseline model (Sec. \ref{app:baseline-model}). \subsection{Original CGN architecture} \label{app:cgn-architecture} This section contains a diagram of the original CGN architecture, as presented in \cite{DBLP:journals/corr/abs-2101-06046}. \begin{figure}[h] \centering \includegraphics[width=0.7\linewidth]{Images/CGN_architecture.pdf} \caption{CGN architecture diagram. Retrieved from \cite{DBLP:journals/corr/abs-2101-06046}.} \label{fig:cgn-diagram} \end{figure} Figure \ref{fig:cgn-diagram} illustrates the CGN architecture. The network is split into four mechanisms, the shape mechanism $f_{shape}$, the texture mechanism $f_{text}$, the background mechanism $f_{bg}$, and the composer $C$. Components with trainable parameters are blue, components with fixed parameters are green. The primary supervision is provided by an unconstrained conditional GAN (cGAN) via the reconstruction loss $\mathcal{L}_{rec}$. The cGAN is only used for training, as indicated by the dotted lines. 
Each mechanism takes as input the noise vector $\mathbf{u}$ (sampled from a spherical Gaussian) and the label $y$ (drawn uniformly from the set of possible labels $\mathcal{Y}$) and minimizes its respective loss ($\mathcal{L}_{shape}$, $\mathcal{L}_{text}$, and $\mathcal{L}_{bg}$). To generate a set of counterfactual images, we sample $\mathbf{u}$ and then independently sample $y$ for each mechanism. \subsection{TinyGAN architecture} \label{app:tinygan-architecture} \begin{figure}[t] \centering \begin{subfigure}{0.3\textwidth} \centering \includegraphics[width=1\linewidth]{Images/Student_G.png} \caption{Student Generator $G$ \cite{DBLP:journals/corr/abs-2009-13829} } \label{fig:student generator} \end{subfigure} \begin{subfigure}{0.35\textwidth} \centering \includegraphics[width=1\linewidth]{Images/Res_S.png} \caption{A Residual Block in $G$ \cite{DBLP:journals/corr/abs-2009-13829} } \label{fig:residual block} \end{subfigure} \caption{Architecture of the TinyGAN (student) generator} \label{fig:tinygan-generator} \end{figure} This section provides an brief overview of the TinyGAN architecture. For more details, refer to \cite{DBLP:journals/corr/abs-2009-13829}. \paragraph{Generator.} As shown in Figure \ref{fig:tinygan-generator}, TinyGAN comprises a ResNet \cite{resnet}-based generator with class-conditional BatchNorm \cite{batchnorm1} \cite{batchnorm2}. To keep a tight computation budget, it does not adopt attention-based \cite{self-attention} or progressive-growing mechanisms \cite{progressing-growing}. To substantially reduce the model size compared to BigGAN, it: \begin{itemize} \item Relies on using fewer channels; \item Replaces standard convolution by depthwise separable convolution; \item Adopts a simpler way to introduce class conditions. \end{itemize} Overall, TinyGAN's generator has 16$\times$ less parameters than BigGAN's generator. \vspace{-0.5em} \paragraph{Discriminator.} Following \cite{ref-discr-1} \cite{DBLP:journals/corr/abs-1802-05957}, \cite{DBLP:journals/corr/abs-2009-13829} opt for spectral normalized discriminator and introduce the class condition via projection. But instead of utilizing complicated residual blocks, they simply stack multiple convolutional layers with stride as used in DCGAN \cite{dcgan}, which greatly reduces the number of parameters. Overall, TinyGAN's discriminator has 10$\times$ less parameters than BigGAN's discriminator. \subsection{Baseline model} \label{app:baseline-model} The baseline is a standard CGN architecture whose BigGANs have been replaced with TinyGANs. Due to the need of a pre-trained model that (i) supervises the CGN training using a reconstruction loss and (ii) serves as the initialization of the IM GANs, a TinyGAN was trained from scratch using the KD strategy described in \cite{DBLP:journals/corr/abs-2009-13829}. Section \ref{app:baseline-details} dives into the details of the training procedure, then presents qualitative results of both the newly-trained TinyGAN and of baseline model. \begin{figure}[t!] 
\begin{subfigure}{\textwidth} \centering \hspace{6mm} \textit{ImageNet-1k} \hspace{36mm} \textit{Double-colored MNIST}\\ \includegraphics[width=0.48\linewidth]{Images/ims-outputs/imagenet/shape-right.png} \hfill \includegraphics[width=0.48\linewidth]{Images/ims-outputs/mnist/shape-right.png} \caption{\textit{Shape} mechanism.} \label{fig:shape_results} \end{subfigure} \\ \begin{subfigure}{\textwidth} \centering \hspace{6mm} \textit{ImageNet-1k} \hspace{36mm} \textit{Double-colored MNIST}\\ \includegraphics[width=0.48\linewidth]{Images/ims-outputs/imagenet/fg-right.png} \hfill \includegraphics[width=0.48\linewidth]{Images/ims-outputs/mnist/fg-right.png} \caption{\textit{Texture} mechanism.} \label{fig:fg_results} \end{subfigure} \\ \begin{subfigure}{\textwidth} \centering \hspace{6mm} \textit{ImageNet-1k} \hspace{36mm} \textit{Double-colored MNIST}\\ \includegraphics[width=0.48\linewidth]{Images/ims-outputs/imagenet/bg-right.png} \hfill \includegraphics[width=0.48\linewidth]{Images/ims-outputs/mnist/bg-right.png} \caption{\textit{Background} mechanism.} \label{fig:bg_results} \end{subfigure} \caption{A comparison of images generated by the CGN backbones and those generated by the corresponding SKDCGN's TinyGAN (given the same input), for each independent mechanism. We train on both the ImageNet-1k (left images) and double-colored MNIST (right images) datasets.} \label{fig:mnist_ims} \end{figure} \section{Additional results of SKDCGN's IMs} This section expands on Section 4.3 of the main paper and contains more results obtained from each SKDCGN IM, using both the ImageNet-1k and double-colored MNIST datasets. More specifically, we compare the output of each CGN backbone with that of the corresponding SKDCGN's TinyGAN, given the same input. Please refer to Figure \ref{fig:mnist_ims}. \section{Baseline Model} \label{app:baseline-details} The baseline model is a modified version of the original CGN architecture, where each BigGAN has been replaced by the generator model of a TinyGAN. Training this baseline using the procedure described by \cite{DBLP:journals/corr/abs-2009-13829}, omitting KD, allows for rigorous comparisons that emphasize the effectiveness of the knowledge distillation process. In this section, we provide training details and collect sample outputs of the trained model. \begin{figure}[t!] \begin{subfigure}{\textwidth} \includegraphics[width=\linewidth]{Images/1-test.png} \caption{A comparison of images generated by BigGAN and the TinyGAN. Images in the top row are produced by BigGAN, while those in the bottom row are produced by SKDCGN given the same input, after the $1^{st}$ epoch.} \label{fig:tinygan_results_1} \end{subfigure} \begin{subfigure}{\textwidth} \includegraphics[width=\linewidth]{Images/18-test.png} \caption{A comparison of images generated by BigGAN and the TinyGAN. Images in the top row are produced by BigGAN, while those in the bottom row are produced by SKDCGN given the same input, after the $18^{th}$ epoch.} \label{fig:tinygan_results_18} \end{subfigure} \caption{A comparison of images generated by BigGAN and the TinyGAN. Images in the top row are produced by BigGAN, while those in the bottom row are produced by SKDCGN given the same input.} \label{tinygan_results} \end{figure} \subsection{Training Details} \label{app:baseline-training} The training procedure of a CGN requires a pre-trained GAN to provide primary supervision via the reconstruction loss. However, the original TinyGAN was trained on animal classes only; hence, the publicly available model could not be used for our baseline.
In order to consistently use the same dataset for all the experiments, we re-trained a TinyGAN from scratch (as described in \cite{DBLP:journals/corr/abs-2009-13829}) on all classes of ImageNet-1k. The images generated by TinyGAN are visualized in Appendix \ref{app:pretrained-tinygan-gen-outputs}. The images generated for each Independent Mechanism using our baseline model can be seen in \ref{app:baseline-outputs}. Apart from this, we additionally generated the counterfactuals using the baseline model which are shown in Appendix \ref{app:baseline-counterfactuals}. \begin{figure}[ht!] \centering \begin{tabular}{lllll} $\Tilde{m}$ & \includegraphics[width=.18\linewidth]{Images/Baseline_IMs/251_1_premask_ep_0000000.png} \hspace{-0.5em} \includegraphics[width=.18\linewidth]{Images/Baseline_IMs/251_1_premask_ep_0300000.png} \hspace{-0.5em} \includegraphics[width=.18\linewidth]{Images/Baseline_IMs/251_1_premask_ep_0600000.png} \hspace{-0.5em} \includegraphics[width=.18\linewidth]{Images/Baseline_IMs/251_1_premask_ep_0900000.png} \hspace{-0.5em} \includegraphics[width=.18\linewidth]{Images/Baseline_IMs/251_1_premask_ep_1200000.png} \vspace{-0.31em}\\ \vspace{-0.34em} $m$ & \includegraphics[width=.18\linewidth]{Images/Baseline_IMs/251_2_mask_ep_0000000.png} \hspace{-0.49em} \includegraphics[width=.18\linewidth]{Images/Baseline_IMs/251_2_mask_ep_0300000.png} \hspace{-0.49em} \includegraphics[width=.18\linewidth]{Images/Baseline_IMs/251_2_mask_ep_0600000.png} \hspace{-0.48em} \includegraphics[width=.18\linewidth]{Images/Baseline_IMs/251_2_mask_ep_0900000.png} \hspace{-0.5em} \includegraphics[width=.18\linewidth]{Images/Baseline_IMs/251_2_mask_ep_1200000.png}\\ \vspace{-0.33em} $f$ & \includegraphics[width=.18\linewidth]{Images/Baseline_IMs/251_3_texture_ep_0000000.png} \hspace{-0.49em} \includegraphics[width=.18\linewidth]{Images/Baseline_IMs/251_3_texture_ep_0300000.png} \hspace{-0.49em} \includegraphics[width=.18\linewidth]{Images/Baseline_IMs/251_3_texture_ep_0600000.png} \hspace{-0.48em} \includegraphics[width=.18\linewidth]{Images/Baseline_IMs/251_3_texture_ep_0900000.png} \hspace{-0.5em} \includegraphics[width=.18\linewidth]{Images/Baseline_IMs/251_3_texture_ep_1200000.png}\\ \vspace{-0.33em} $b$ & \includegraphics[width=.18\linewidth]{Images/Baseline_IMs/251_4_bgs_ep_0000000.png} \hspace{-0.49em} \includegraphics[width=.18\linewidth]{Images/Baseline_IMs/251_4_bgs_ep_0300000.png} \hspace{-0.49em} \includegraphics[width=.18\linewidth]{Images/Baseline_IMs/251_4_bgs_ep_0600000.png} \hspace{-0.48em} \includegraphics[width=.18\linewidth]{Images/Baseline_IMs/251_4_bgs_ep_0900000.png} \hspace{-0.5em} \includegraphics[width=.18\linewidth]{Images/Baseline_IMs/251_4_bgs_ep_1200000.png}\\ \vspace{-0.41em} $x_{gen}$ & \includegraphics[width=.18\linewidth]{Images/Baseline_IMs/251_5_gen_ims_ep_0000000.png} \hspace{-0.49em} \includegraphics[width=.18\linewidth]{Images/Baseline_IMs/251_5_gen_ims_ep_0300000.png} \hspace{-0.49em} \includegraphics[width=.18\linewidth]{Images/Baseline_IMs/251_5_gen_ims_ep_0600000.png} \hspace{-0.48em} \includegraphics[width=.18\linewidth]{Images/Baseline_IMs/251_5_gen_ims_ep_0900000.png} \hspace{-0.5em} \includegraphics[width=.18\linewidth]{Images/Baseline_IMs/251_5_gen_ims_ep_1200000.png} \end{tabular} \caption{Individual IM Outputs after training for baseline. From top to bottom: $m$, $\Tilde{m}$, $f$, $b$, $x_{gen}$. 
From left to right: at the start of training, after epoch $300k{th}$, epoch $600k^{th}$, epoch $900k^{th}$, and epoch $1.2million^{th}$} \label{fig:IMs_baseline_2} \end{figure} \subsubsection{Generated outputs of TinyGAN trained on ImageNet-1k} \label{app:pretrained-tinygan-gen-outputs} A TinyGAN was trained using all 1000 classes of the ImageNet-1k dataset. Training details are provided by \cite{DBLP:journals/corr/abs-2009-13829}. Although the original paper trains the model for 1.2 million epochs, we are forced to restrict the amount of iterations due to computational constraints. After distilling the knowledge of a BigGAN for 18 epochs, our TinyGAN generates reasonable images, as seen in Figure \ref{fig:tinygan_results_18}. To compare the image generation we have also presented images generated after the first epoch as well \ref{fig:tinygan_results_1}. It can be observed that if we further train the model, it could produce images better in quality. Note that animal classes are better captured by the model: this is inline with the findings of \cite{DBLP:journals/corr/abs-2009-13829}. \begin{figure}[ht!] \centering \begin{tabular}{lllll} \includegraphics[width=.18\linewidth]{Images/Baseline_Counterfactuals/test_0000000_x_gen.jpg} \hspace{-0.49em} \includegraphics[width=.18\linewidth]{Images/Baseline_Counterfactuals/test_0000019_x_gen.jpg} \hspace{-0.49em} \includegraphics[width=.18\linewidth]{Images/Baseline_Counterfactuals/test_0000070_x_gen.jpg} \hspace{-0.49em} \includegraphics[width=.18\linewidth]{Images/Baseline_Counterfactuals/test_0000096_x_gen.jpg} \hspace{-0.5em} \includegraphics[width=.18\linewidth]{Images/Baseline_Counterfactuals/test_0000142_x_gen.jpg} \vspace{-0.31em}\\ \vspace{-0.33em} \includegraphics[width=.18\linewidth]{Images/Baseline_Counterfactuals/test_0000193_x_gen.jpg} \hspace{-0.49em} \includegraphics[width=.18\linewidth]{Images/Baseline_Counterfactuals/test_0000198_x_gen.jpg} \hspace{-0.49em} \includegraphics[width=.18\linewidth]{Images/Baseline_Counterfactuals/test_0000205_x_gen.jpg} \hspace{-0.49em} \includegraphics[width=.18\linewidth]{Images/Baseline_Counterfactuals/test_0000245_x_gen.jpg} \hspace{-0.5em} \includegraphics[width=.18\linewidth]{Images/Baseline_Counterfactuals/test_0000259_x_gen.jpg}\\ \vspace{-0.32em} \includegraphics[width=.18\linewidth]{Images/Baseline_Counterfactuals/test_0001213_x_gen.jpg} \hspace{-0.49em} \includegraphics[width=.18\linewidth]{Images/Baseline_Counterfactuals/test_0001214_x_gen.jpg} \hspace{-0.49em} \includegraphics[width=.18\linewidth]{Images/Baseline_Counterfactuals/test_0001312_x_gen.jpg} \hspace{-0.49em} \includegraphics[width=.18\linewidth]{Images/Baseline_Counterfactuals/test_0001325_x_gen.jpg} \hspace{-0.5em} \includegraphics[width=.18\linewidth]{Images/Baseline_Counterfactuals/test_0001426_x_gen.jpg}\\ \vspace{-0.32em} \includegraphics[width=.18\linewidth]{Images/Baseline_Counterfactuals/test_0001460_x_gen.jpg} \hspace{-0.49em} \includegraphics[width=.18\linewidth]{Images/Baseline_Counterfactuals/test_0001486_x_gen.jpg} \hspace{-0.49em} \includegraphics[width=.18\linewidth]{Images/Baseline_Counterfactuals/test_0001521_x_gen.jpg} \hspace{-0.49em} \includegraphics[width=.18\linewidth]{Images/Baseline_Counterfactuals/test_0001642_x_gen.jpg} \hspace{-0.5em} \includegraphics[width=.18\linewidth]{Images/Baseline_Counterfactuals/test_0001683_x_gen.jpg}\\ \vspace{-0.4em} \includegraphics[width=.18\linewidth]{Images/Baseline_Counterfactuals/test_0001696_x_gen.jpg} \hspace{-0.49em} 
\includegraphics[width=.18\linewidth]{Images/Baseline_Counterfactuals/test_0001697_x_gen.jpg} \hspace{-0.49em} \includegraphics[width=.18\linewidth]{Images/Baseline_Counterfactuals/test_0001790_x_gen.jpg} \hspace{-0.49em} \includegraphics[width=.18\linewidth]{Images/Baseline_Counterfactuals/test_0001881_x_gen.jpg} \hspace{-0.5em} \includegraphics[width=.18\linewidth]{Images/Baseline_Counterfactuals/test_0001978_x_gen.jpg} \end{tabular} \caption{Counterfactuals generated by the baseline on ImageNet-1k test data.} \label{fig:counterfactuals_baseline} \end{figure} \subsubsection{Generated outputs of the baseline trained on ImageNet-1k} \label{app:baseline-outputs} Figure \ref{fig:IMs_baseline_2} illustrates the individual outputs of each IM at the start of training and after the 300k$^{\text{th}}$, 600k$^{\text{th}}$, 900k$^{\text{th}}$, and 1.2M$^{\text{th}}$ epochs (from left to right). In each figure, we show from top to bottom: pre-masks $\Tilde{m}$, masks $m$, texture $f$, background $b$, and composite images $x_{gen}$. \subsubsection{Generated Counterfactual Images of Baseline trained on ImageNet-1k} \label{app:baseline-counterfactuals} Finally, we show counterfactual images generated by the baseline model in Figure \ref{fig:counterfactuals_baseline}. \section{Improving the SKDCGN process} \label{sec:improve_skdcgn} As mentioned in Section 4.4 of the main paper, we observed that the outputs from the CGN are noisy in nature. Figure \ref{fig:mnist_cgn_noisy} illustrates how noisy the MNIST digits are. In this section, we try to improve our architecture through several methods. \begin{figure}[ht] \centering \includegraphics[width=\linewidth]{Images/MNIST_noisy_mask_cgn.pdf} \caption{Noisy outputs generated by the CGN when using the pretrained weights provided by the authors.} \label{fig:mnist_cgn_noisy} \end{figure} To improve the images generated by our architecture, we believe the room for improvement lies in the following components: \begin{itemize} \item Improving the quality of the images generated by the GAN in our architecture; typically, loss functions such as a VGG-based perceptual loss or an L1 reconstruction loss are added for this purpose. \item Improving the existing knowledge distillation framework such that the student learns better from the teacher's guidance, by adding new loss functions to the Knowledge Distillation task. \end{itemize} \begin{figure}[ht!]
\centering \begin{subfigure}{\textwidth} \includegraphics[width=\linewidth]{Images/bce/2-test.png} \caption{A comparison of images generated by the CGN \textbf{shape} backbone (\textit{top} row) and those generated by the corresponding SKDCGN given the same input (\textit{bottom} row) after 2 epochs on test data.} \label{fig:mnist_mask_bce2} \end{subfigure} \\ \begin{subfigure}{\textwidth} \includegraphics[width=\linewidth]{Images/bce/10-test.png} \caption{A comparison of images generated by the CGN \textbf{texture} backbone (\textit{top} row) and those generated by the corresponding SKDCGN given the same input (\textit{bottom} row) after 10 epochs on test data.} \label{fig:mnist_mask_bce10} \end{subfigure} \\ \begin{subfigure}{\textwidth} \includegraphics[width=\linewidth]{Images/bce/30-test.png} \caption{A comparison of images generated by the \textbf{background} backbone (\textit{top} row) and those generated by the corresponding SKDCGN given the same input (\textit{bottom} row) after 30 epochs on test data.} \label{fig:mnist__mask_bce30} \end{subfigure} \caption{A comparison of images generated by the CGN backbones and those generated by the corresponding SKDCGN (given the same input) for the mask IM trained with a cross-entropy loss.} \label{fig:mnist_ims_1} \end{figure} To improve the quality of the images, we observe that our architecture already integrates most of the relevant loss functions, implicitly or explicitly. Hence, we add a cross-entropy loss to the generator and discriminator of the mask IM and obtain the results shown in Figure \ref{fig:mnist_mask_bce2} for the second epoch. We observe that digits like '0' are reconstructed, whereas the outputs for other digits still look noisy. By the end of the 10th epoch on the test set (Fig. \ref{fig:mnist_mask_bce10}), we observe that the digits are being reconstructed. We continued training, expecting better results than what we had already seen; however, contrary to our expectations, we observe artefacts by the end of the 30th epoch, as shown in Fig. \ref{fig:mnist__mask_bce30}. \begin{figure}[ht!]
\centering \begin{subfigure}{0.9\textwidth} \includegraphics[width=\linewidth]{Images/kl_layer/2-test.png} \caption{A comparison of images generated by the CGN \textbf{shape} backbone (\textit{top} row) and those generated by the corresponding SKDCGN given the same input (\textit{bottom} row) after 2 epochs on test data.} \label{fig:mnist_kl_layer2} \end{subfigure} \\ \begin{subfigure}{0.9\textwidth} \includegraphics[width=\linewidth]{Images/kl_layer/10-test.png} \caption{A comparison of images generated by the CGN \textbf{texture} backbone (\textit{top} row) and those generated by the corresponding SKDCGN given the same input (\textit{bottom} row) after 10 epochs on test data.} \label{fig:mnist_kl_layer10} \end{subfigure} \\ \begin{subfigure}{0.9\textwidth} \includegraphics[width=\linewidth]{Images/kl_layer/30-test.png} \caption{A comparison of images generated by the \textbf{background} backbone (\textit{top} row) and those generated by the corresponding SKDCGN given the same input (\textit{bottom} row) after 30 epochs on test data.} \label{fig:mnist_kl_layer30} \end{subfigure} \caption{A comparison of images generated by the CGN backbones and those generated by the corresponding SKDCGN (given the same input) for the mask IM, with the KL divergence applied to the activation of every layer instead of L1.} \label{fig:mnist_kl_layer} \end{figure} \subsection{KL multiplied with layer instead of L1} \label{app:kl_instead_l1} Since the image generation process already includes most of the components needed to keep the reconstruction in place, we tried to improve the knowledge distillation between the teacher and student networks by applying the KL divergence to the activations of every layer, instead of the default per-layer L1 term. This did not prove beneficial, possibly because an explicit L1 reconstruction term on the activations of every layer is still needed. The results are shown in Fig. \ref{fig:mnist_kl_layer}. \begin{figure}[ht!] \begin{subfigure}{\textwidth} \includegraphics[width=\linewidth]{Images/l2/2-test.png} \caption{A comparison of images generated by the CGN \textbf{shape} backbone (\textit{top} row) and those generated by the corresponding SKDCGN given the same input (\textit{bottom} row) for mask IM after 2 epochs on test data.} \label{fig:mnist_mask_mse2} \end{subfigure} \\ \begin{subfigure}{\textwidth} \includegraphics[width=\linewidth]{Images/l2/10-test.png} \caption{A comparison of images generated by the CGN \textbf{texture} backbone (\textit{top} row) and those generated by the corresponding SKDCGN given the same input (\textit{bottom} row) for mask IM after 10 epochs on test data.} \label{fig:mnist_mask_mse10} \end{subfigure} \\ \begin{subfigure}{\textwidth} \includegraphics[width=\linewidth]{Images/l2/30-test.png} \caption{A comparison of images generated by the \textbf{background} backbone (\textit{top} row) and those generated by the corresponding SKDCGN given the same input (\textit{bottom} row) for mask IM after 30 epochs on test data.} \label{fig:mnist__mask_mse_30} \end{subfigure} \caption{A comparison of images generated by the CGN backbones and those generated by the corresponding SKDCGN (given the same input) for the mask IM, with L2 applied to the activation of every layer instead of L1.} \label{fig:mnist_mse} \end{figure} \subsection{MSE instead of L1} \label{app:mse_no_l1} In addition, we also tried an L2 loss instead of the L1 loss, but it led to noisier outputs than before; the results are shown in Figure \ref{fig:mnist_mse}.
Since L2 assumes that the influence of noise is independent of the image's local characteristics, the resulting images remain noisy. \clearpage \bibliographystyle{unsrt} \bibliography{egbib} \end{document}
https://openreview.net/forum?id=ewS9kxTKF7f
https://arxiv.org/abs/2208.04226
Please evaluate the paper based on its methodology, objectives, and qualitative results, and provide suggestions for future extensions and improvements.
A knowledge distillation method with limited novelty [Summary] This paper presents a method that trained three TinyGANs from the Counterfactual Generative Networks (CGNs) for the shape, background and texture independent components respectively. The main novelty of this paper is the combination of the TinyGANs [7] and CGNs [3]. The author shows it is feasible to train the proposed method in the knowledge distillation way. [Paper strength] - The paper contributes to the combination of the TinyGANs [7] and CGNs [3]. The TinyGAN work [7] trains one TinyGAN from one BigGAN, and the CGNs trained three BigGANs [4] for object shape, texture and background respectively. The proposed method in this paper trained three TinyGANs (using the way in [7]) from the three BigGANs in the CGNs [3]. - The secondary contribution is that the author uses the KL divergence as an additional loss compared to [7]. In section 4.4, the author shows that using KL divergence could improve performance. - In the experiment, the author shows that the proposed method could generate reasonable results on the ImageNet and MNIST datasets. [Paper weakness] - The novelty contribution is limited. The proposed method is to train the TinyGANs [7] from the CGNs [3] model. The proposed architecture shown in Fig. 1 is similar to [7] except that it is for three separate models, and the three separate models are introduced by [3]. The loss terms 1 to 3 are from [7] with the exact same equations and adapted text, and the loss term 4 is from [21]. I did not find any interesting technical contribution. - The experiment is limited in terms of the error metric and conclusion. There are only qualitative results for the main experiments in sections 4.3 and 4.4. The comparison between the proposed method and the baseline can only be judged subjectively. In the related work [7], there are other metrics such as FID to compare different methods. From these qualitative results, I can only judge that the proposed method can generate results somewhat similar to the baseline but is certainly worse than the baseline. Compared to the results achieved in [7], the TinyGAN in [7] has comparable results with the BigGAN model. - Inappropriate baseline. The baseline method is the CGN but with each BigGAN replaced by a TinyGAN. However, the author motivates the proposed method by stating that the BigGANs in the original CGN are over-parameterized (line 45). I expected the author would compare their method to the original CGN, too. If we can train the CGN with the TinyGAN, I think we do not have the over-parameterized networks problem. Other minor issues: - I do not find the necessity of section 4.5. - Misuse of the supplementary material. The author refers to the supplementary material on improving the method in section D. However, according to the author guidelines of ECCV (https://eccv2022.ecva.net/submission/call-for-papers/), "Reviewers will be encouraged to look at it (supplementary), but are not obligated to do so", and "It may not include results obtained with an improved version of the method". Rating: 6: Marginally above acceptance threshold Confidence: 3: The reviewer is fairly confident that the evaluation is correct
\documentclass[runningheads]{llncs} \usepackage{graphicx} \usepackage{orcidlink} \usepackage{tikz} \usepackage{comment} \usepackage{amsmath,amssymb} % \usepackage{color} \usepackage{ragged2e} \usepackage{xcolor} % \usepackage{graphicx} \usepackage{subcaption} \usepackage{placeins} \usepackage[export]{adjustbox} \usepackage{caption} \usepackage{float} \usepackage[utf8]{inputenc} % \usepackage[T1]{fontenc} % \usepackage{hyperref} % \usepackage{url} % \usepackage{booktabs} % \usepackage{amsmath} \usepackage{amsfonts} % \usepackage{nicefrac} % \usepackage{microtype} % \usepackage[accsupp]{axessibility} % \usepackage{etoolbox} \newcommand{\repthanks}[1]{\textsuperscript{\ref{#1}}} \makeatletter \patchcmd{\maketitle} {\def\thanks} {\let\repthanks\repthanksunskip\def\thanks} {}{} \patchcmd{\@maketitle} {\def\thanks} {\let\repthanks\@gobble\def\thanks} {}{} \newcommand\repthanksunskip[1]{\unskip{}} \makeatother \begin{document} \pagestyle{headings} \mainmatter \def\ECCVSubNumber{} % \title{SKDCGN: Source-free Knowledge Distillation of Counterfactual Generative Networks using cGANs} % \titlerunning{SKDCGN} \author{Sameer Ambekar \orcidlink{0000-0002-8650-3180}\thanks{Equal contribution.\protect\label{contrib}} \and Matteo Tafuro \orcidlink{0000-0002-6167-2156}\repthanks{contrib} \and Ankit Ankit \orcidlink{0000-0002-9399-9209}\repthanks{contrib}\and \\Diego van der Mast\index{van der Mast, Diego} \orcidlink{0000-0002-0001-3069}\repthanks{contrib} \and Mark Alence \orcidlink{0000-0002-6622-5822}\repthanks{contrib} \and Christos Athanasiadis \orcidlink{0000-0003-4376-9066}} \authorrunning{S. Ambekar et al.} \institute{University of Amsterdam, Amsterdam, the Netherlands. \\ \email{ambekarsameer@gmail.com, tafuromatteo00@gmail.com, ankitnitt1721@gmail.com, diego.vandermast@student.uva.nl, mark.alence@gmail.com, c.athanasiadis@uva.nl }} \maketitle \begin{abstract} \justifying{With the usage of appropriate inductive biases, Counterfactual Generative Networks (CGNs) can generate novel images from random combinations of shape, texture, and background manifolds. These images can be utilized to train an invariant classifier, avoiding the wide spread problem of deep architectures learning spurious correlations rather than meaningful ones. As a consequence, out-of-domain robustness is improved. However, the CGN architecture comprises multiple over parameterized networks, namely BigGAN and U2-Net. Training these networks requires appropriate background knowledge and extensive computation. Since one does not always have access to the precise training details, nor do they always possess the necessary knowledge of counterfactuals, our work addresses the following question: Can we use the knowledge embedded in pre-trained CGNs to train a lower-capacity model, assuming black-box access (i.e., only access to the pretrained CGN model) to the components of the architecture? In this direction, we propose a novel work named SKDCGN that attempts knowledge transfer using Knowledge Distillation (KD). In our proposed architecture, each independent mechanism (shape, texture, background) is represented by a student 'TinyGAN' that learns from the pretrained teacher 'BigGAN'. We demonstrate the efficacy of the proposed method using state-of-the-art datasets such as ImageNet, and MNIST by using KD and appropriate loss functions. 
Moreover, as an additional contribution, our paper conducts a thorough study on the composition mechanism of the CGNs, to gain a better understanding of how each mechanism influences the classification accuracy of an invariant classifier. Code available at: \url{https://github.com/ambekarsameer96/SKDCGN}} \end{abstract} \section{Introduction} \label{sec:intro} Deep neural networks are prone to learning simple functions that fail to capture intricacies of data in higher-dimensional manifolds \cite{DBLP:journals/corr/abs-2110-02424}, which causes networks to struggle in generalizing to unseen data. In addition to spectral bias \cite{DBLP:journals/corr/abs-2110-02424} and shortcut learning, which are properties inherent to neural networks \cite{DBLP:journals/corr/abs-2004-07780}, spurious learned correlations are also caused by biased datasets. To this end, Counterfactual Generative Networks (CGNs), proposed by Sauer and Geiger \cite{DBLP:journals/corr/abs-2101-06046}, have been shown to generate novel images that mitigate this effect. The authors expose the causal structure of image generation and split it into three Independent Mechanisms (IMs) (object shape, texture, and background), to generate synthetic and \textit{counterfactual} images whereon an invariant classifier ensemble can be trained. The CGN architecture comprises multiple over-parameterized networks, namely BigGANs \cite{brock2019large} and U2-Nets \cite{DBLP:journals/corr/abs-2005-09007}, and its training procedure generally requires appropriate domain-specific expertise. Moreover, one does not always have access to the precise training details, nor do they necessarily possess the required knowledge of counterfactuals. Motivated by these observations, we propose \textit{Source-free Knowledge Distillation of Counterfactual Generative Networks} (SKDCGN), which aims to use the knowledge embedded in a pre-trained CGN to train a lower capacity model, assuming black-box access (i.e., only inputs and outputs) to the components of the source model. More specifically, we harness the idea of Knowledge Distillation (KD) \cite{DBLP:journals/corr/abs-2106-05237} to train a network comprising three (small) generative models, i.e. TinyGANs \cite{DBLP:journals/corr/abs-2009-13829}, each being responsible for a single independent mechanism. SKDCGN carries both practical and theoretical implications, and it is intended to: \begin{enumerate} \item Obtain a lightweight version of the CGN, reducing its computational cost and memory footprint. This is meant to (i) ease the generation of counterfactual datasets and hence encourage the development of robust and invariant classifiers, as well as (ii) potentially allowing the deployment of the model on resource-constrained devices. \item Explore whether we can \textit{learn} from a fully trained CGN and distill it to a less parameterized network, assuming that we do not have access to the training process of the model. \end{enumerate} Along the lines of the original paper, we demonstrate the ability of our model to generate counterfactual images on ImageNet-1k \cite{5206848} and Double-Colored MNIST \cite{DBLP:journals/corr/abs-2101-06046}. Furthermore, we compare our outputs to \cite{DBLP:journals/corr/abs-2101-06046} and a simple baseline in terms of out-of-distribution robustness on the original classification task. As an additional contribution, we conduct a study on the shape IM of the CGN. 
The paper is organized as follows: firstly, we present a brief literature survey in Section \ref{sec:related-work}; next in Section \ref{sec:approach} the SKDCGN is dissected; Section \ref{sec:exps-results} presents the experimental setup and the empirical results, which are finally discussed in Section \ref{sec:conclusion}. \section{Related work} \label{sec:related-work} This section introduces the fundamental concepts and the related works that we use as a base for our SKDCGN. \subsubsection{Counterfactual Generative Networks. } The main idea of CGNs \cite{DBLP:journals/corr/abs-2101-06046} has already been introduced in Section \ref{sec:intro}. Nonetheless, to aid the understanding of our method to readers that are not familiar with the CGN architecture, we summarize its salient components in this paragraph and also provide the network diagram in Appendix \ref{app:cgn-architecture}, Figure \ref{fig:cgn-diagram}. The CGN consists of 4 backbones: (i) the part of the network responsible for the shape mechanism, those responsible for (ii) texture and (iii) background, and a (iv) composition mechanism that combines the previous three using a deterministic function. Given a noise vector $\mathbf{u}$ (sampled from a spherical Gaussian) and a label $y$ (drawn uniformly from the set of possible labels y) as input, (i) the shape is obtained from a BigGAN-deep-256 \cite{brock2019large}, whose output is subsequently passed through a U2-Net \cite{DBLP:journals/corr/abs-2005-09007} to obtain a binary mask of the object shape. The (ii) texture and (iii) background are obtained similarly, but the BigGAN's output does not require to be segmented by the U2-Net. Finally, the (iv) composition mechanism outputs the final counterfactual image $\mathbf{x}_{gen}$ using the following analytical function: \begin{equation} \label{eq:composition} \mathbf{x}_{g e n}=C(\mathbf{m}, \mathbf{f}, \mathbf{b})=\mathbf{m} \odot \mathbf{f}+(1-\mathbf{m}) \odot \mathbf{b}, \end{equation} where $\mathbf{m}$ is the shape mask, $\mathbf{f}$ is the foreground (or texture), $\mathbf{b}$ is the background and $\odot$ denotes element-wise multiplication. More recently, \cite{khorram2022cycleconsistent} devises an approach that learns a latent transformation that generates visual CFs automatically by steering in the latent space of generative models. Additionally, \cite{DBLP:journals/corr/abs-2109-14274} uses a deep model inversion approach that provides counterfactual explanations by examining the area of an image. \subsubsection{Knowledge Distillation. } \cite{44873} firstly proposed to transfer the knowledge of a pre-trained cumbersome network (referred to as the \textit{teacher}) to a smaller model (the \textit{student}). This is possible because networks frequently learn low-frequency functions among other things, indicating that the learning capacity of the big network is not being utilized fully \cite{DBLP:journals/corr/abs-2110-02424} \cite{DBLP:journals/corr/abs-2004-07780}. Traditional KD approaches (often referred to as \textit{black-box}) simply use the outputs of the large deep model as the teacher knowledge, but other variants have made use of activation, neurons or features of intermediate layers as the knowledge to guide the learning process \cite{kdref1,kdref2}. Existing methods like \cite{DBLP:journals/corr/abs-2009-13829} are also making use of Knowledge distillation for the task of image generation. 
Our work is similar to theirs; however, they transfer the knowledge of a BigGAN trained on the ImageNet dataset to a TinyGAN. In contrast, in our work, we transfer not just the knowledge of image generation but also the task of counterfactual generation from a BigGAN to a TinyGAN. \subsubsection{Distilling GANs using KD. } Given its high effectiveness for model compression, KD has been widely used in different fields, including visual recognition and classification, speech recognition, natural language processing (NLP), and recommendation systems \cite{kd-survey}. However, it is less studied for image generation. \cite{DBLP:journals/corr/abs-1902-00159} first applied KD to GANs. However, our project differs from theirs, as they address \textit{unconditional} image generation, use less general (DCGAN \cite{dcgan}) architectures, and do not assume a black-box generator. Our setting is much more similar to that of \cite{DBLP:journals/corr/abs-2009-13829}, where a BigGAN is distilled to a network with 16$\times$ fewer parameters, assuming no access to the teacher's training procedure or parameters. Considering its competitive performance, we use the proposed architecture (TinyGAN) as the student model and use a modified version of their loss function (further details in Section \ref{sec:method-training}) to optimize our network. \textbf{Source-free}: We term our method source-free since we do not have access to the source data, the source training details and procedure, or any knowledge about the counterfactuals, but only to the trained source models. This setting is similar to that of methods such as \cite{yang2021generalized} \cite{ding2022source}. For large diffusion models like Imagen \cite{saharia2022photorealistic} and DALL·E 2 \cite{https://doi.org/10.48550/arxiv.2204.13807}, the training process is usually extremely expensive in terms of computation, precise training details are often lacking, and training is typically not reproducible by academic groups; nevertheless, we often have access to pretrained models. These can be used to transfer knowledge to a smaller network and perform the same task with a model of lower capacity. \section{Approach} \label{sec:approach} This section dives into the details of the SKDCGN architecture, focusing on the training and inference phases separately for ImageNet-1k and MNIST. In addition, we discuss the loss functions that were employed for Knowledge Distillation. \subsection{SKDCGN} Although transferring the knowledge of an entire CGN into a single generative model could drastically reduce the number of parameters, this strategy would compromise the whole purpose of CGNs, i.e., disentangling the three mechanisms and having control over each of them. Therefore, we opt to train a generative model for each individual component. As shown in the architecture diagram (Figure \ref{fig:arch_diagram}), we treat each IM backbone as a black-box teacher and aim to mimic its output by training a corresponding TinyGAN student. Note that this implies that in the case of the shape mechanism, a single generative model learns to mimic both the BigGAN and the U2-Net. We believe a TinyGAN should be capable of learning binary masks directly, removing the need for the U2-Net and reducing the model size even further. During inference, the outputs of the three students are combined into a final counterfactual image using the composition function defined in Equation \ref{eq:composition}.
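To make this concrete, the snippet below sketches how the outputs of the three students could be combined via the composition function of Equation \ref{eq:composition}; it is only a minimal illustration, and the function and variable names (e.g. \texttt{shape\_student}) are assumptions rather than the exact interface of our implementation.
\begin{verbatim}
def compose(m, f, b):
    # Composition mechanism: x_gen = m * f + (1 - m) * b,
    # applied element-wise; the mask m is expected to lie in [0, 1].
    return m * f + (1.0 - m) * b

def sample_counterfactual(shape_student, texture_student, bg_student,
                          u, y_shape, y_texture, y_bg):
    # Same noise vector u for every mechanism, independently sampled labels.
    m = shape_student(u, y_shape)       # (approximately binary) shape mask
    f = texture_student(u, y_texture)   # texture / foreground
    b = bg_student(u, y_bg)             # background
    return compose(m, f, b)
\end{verbatim}
Since the composition is deterministic and parameter-free, it adds no trainable capacity on top of the three distilled students.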
\begin{figure}[t] \includegraphics[width=\linewidth]{Images/final_architecture.pdf} \caption{\textit{Architecture of the SKDCGN.} During training, each independent mechanism serves as a black-box teacher model to train a corresponding student model. During inference, the outputs of the three trained TinyGANs are combined using a Composition Mechanism that returns the final counterfactual image.} \label{fig:arch_diagram} \end{figure} \subsubsection{Training: Distilling the knowledge of IMs. } \label{sec:method-training} To train SKDCGN, we utilize each IM backbone from the CGN architecture as a black-box teacher for the student network, as visualized in the training section of Figure \ref{fig:arch_diagram} (the backbones are BigGAN + U2-Net for \textit{shape}, BigGAN for \textit{texture}, and BigGAN for \textit{background}). As introduced in the \hyperref[sec:related-work]{Related work} section, \cite{DBLP:journals/corr/abs-2009-13829} proposed an effective KD framework for compressing BigGANs. As the IMs in CGNs rely on BigGANs, we utilize their proposed student architecture. For completeness, the details of the student architecture are reported in Appendix \ref{app:tinygan-architecture}, Figure \ref{fig:tinygan-generator}. We base our training objective on the loss function proposed by \cite{DBLP:journals/corr/abs-2009-13829}. Our full objective comprises multiple terms: (i) a pixel-wise distillation loss, (ii) an adversarial distillation loss, (iii) a feature-level distillation loss, and (iv) KL Divergence. In addition to introducing KL Divergence, we deviate from the original TinyGAN training objective by omitting the term that allows the model to learn from real images of the ImageNet dataset. This would inevitably compromise the quality of the generated counterfactuals. KL Divergence leads to entropy minimization between the teacher and student, which is why we propose its usage. The individual loss terms are dissected below as from \cite{DBLP:journals/corr/abs-2009-13829}: \begin{enumerate} \item \textit{Pixel-wise Distillation Loss}: To imitate the functionality of BigGAN for scaling generation to high-resolution, high-fidelity images, we minimize the pixel-level distance (L1) between the images generated by BigGAN and TinyGAN given the same input: \begin{equation} \mathcal{L}_{\text{KD\_pix}} = \mathbb{E}_{z \sim p(z), y \sim q(y)}[\|T(z,y) - S(z,y) \|_{1}] \label{pixelwise_loss} \end{equation} where $T$ represents the Teacher network, $S$ represents the Student network, $z$ is a latent variable drawn from the truncated normal distribution $p(z)$, and $y$ is the class label sampled from some categorical distribution $q(y)$. \item \textit{Adversarial Distillation Loss}: To promote sharper outputs, an adversarial loss is incorporated to make the outputs of $S$ indistinguishable from those of $T$. It includes a loss for the generator (Eq. \ref{eq:loss-adv-gen}) and one for the discriminator (Eq. \ref{eq:loss-adv-dis}): \begin{align} \mathcal{L}_{\text{KD\_G}} =& - \mathbb{E}_{z, y}[D(S(z,y), y)] \label{eq:loss-adv-gen}\\ \mathcal{L}_{\text{KD\_D}} =& - \mathbb{E}_{z, y}\left[max(0, 1 - D(T(z,y), y)) + max(0, 1 - D(S(z,y), y))\right] \label{eq:loss-adv-dis}, \end{align} where $z$ is the noise vector, $y$ is the class label, $T(z,y)$ is the image generated by the Teacher $T$, while $G$ and $D$ are -- respectively -- the generator and discriminator of the Student $S$. 
\item \textit{Feature Level Distillation Loss}: To further overcome the blurriness in the images produced by the Student network, the training objective also includes a feature-level distillation loss. More specifically, we take the features computed at each convolutional layer in the Teacher discriminator, and use a loss function that encourages $S$ to generate images similar to those of $T$: \begin{equation} \mathcal{L}_{\text{KD\_feat}} = \mathbb{E}_{z, y}\left[\sum _{i} \alpha_{i}\left\|D_{i}(T(z,y),y) - D_{i}(S(z,y), y) \right\|_{1}\right] \label{feature_loss} \end{equation} where $D_{i}$ represents the feature vector extracted from the $i^{th}$ layer of the discriminator and the corresponding weights are given by $\alpha_{i}$. \item \textit{KL Divergence}: L1 alone cannot reduce the entropy between the teacher and target. To improve the proposed method, we use the KL Divergence in a similar fashion to \cite{asano2021extrapolating} for the task of knowledge distillation between images drawn from a source distribution $P(x)$ and a target distribution $Q(x)$. The KL divergence is defined as \begin{equation} \mathcal D_{\mathrm{KL}}(P \| Q)=\sum_{x \in \mathcal{X}} P(x) \log \left(\frac{P(x)}{Q(x)}\right) \label{feature_loss_kl} \end{equation} which leads to the following loss: \begin{equation} \mathcal{L}_{\text{KL}} = \sum_{x \in X}-p_{x}^{t} \log p_{x}^{s}+p_{x}^{t} \log p_{x}^{t} \label{eq:kl-loss} \end{equation} where $x$ is the class label and $p$ contains the output softmax probabilities of the Generator $G$ divided by the temperature $t$. \end{enumerate} To sum up, the student's generator ($G$) and discriminator ($D$) are respectively optimized using the following objectives: \begin{align} \mathcal{L}_{\text{G}} = & \mathcal{L}_{\text{KD\_feat}} + \lambda_1 \mathcal{L}_{\text{KD\_pix}} + \lambda_2\mathcal{L}_{\text{KD}\_G} \,(\;+\;\mathcal{L}_{\text{KL}}\,)\\ \mathcal{L}_{\text{D}} = & \mathcal{L}_{\text{KD\_D}} \end{align} where $\lambda_1$ and $\lambda_2$ are the regularization terms mentioned in \cite{DBLP:journals/corr/abs-2009-13829}, and the KL divergence term ($\mathcal{L}_{\text{KL}}$) is only used in the enhanced version of SKDCGN. Implementing the SKDCGN architecture requires training a TinyGAN for each Independent Mechanism of the CGN (see Fig. \ref{fig:arch_diagram}). The KD training procedure, however, requires training data. Hence, prior to training, 1000 images per class (totalling 1 million samples) are generated using the IM backbones extracted from the pre-trained CGN (as provided by Sauer and Geiger \cite{DBLP:journals/corr/abs-2101-06046}). Finally, note that the original CGN architecture (illustrated in Appendix \ref{app:cgn-architecture}, Figure \ref{fig:cgn-diagram}) comprises another BigGAN trained on ImageNet-1k. It is unrelated to the three Independent Mechanisms and provides primary training supervision via a reconstruction loss. We discard this component of the architecture for two main reasons: (i) we do not have a dataset of counterfactuals on which such a GAN could be trained; (ii) we argue that this additional knowledge is already embedded in the backbones of a pre-trained CGN. \subsubsection{Inference: generating counterfactuals. } Once the three student networks are trained, their outputs are combined during inference akin to \cite{DBLP:journals/corr/abs-2101-06046} using the analytical function of Equation \ref{eq:composition}. Since the composition function is deterministic, we treat inference as a separate task from training.
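For illustration, the sketch below shows one way the generator-side objective above (the pixel-wise, adversarial, feature-level and KL terms of Eqs. \ref{pixelwise_loss}, \ref{eq:loss-adv-gen}, \ref{feature_loss} and \ref{eq:kl-loss}) could be assembled for a single batch. The tensor names, the list-of-features interface to the discriminator, and the application of the temperature-softened KL term to the generator outputs are simplifying assumptions, not a description of our exact implementation.
\begin{verbatim}
import torch.nn.functional as F

def student_generator_loss(s_img, t_img, d_feats_s, d_feats_t, d_out_s,
                           lambda1=1.0, lambda2=1.0, alphas=None,
                           temperature=1.0, use_kl=True):
    # Pixel-wise distillation: L1 between teacher and student images.
    l_pix = F.l1_loss(s_img, t_img)

    # Feature-level distillation: weighted L1 on discriminator features.
    if alphas is None:
        alphas = [1.0] * len(d_feats_s)
    l_feat = sum(a * F.l1_loss(fs, ft)
                 for a, fs, ft in zip(alphas, d_feats_s, d_feats_t))

    # Adversarial term for the student generator: -E[D(S(z, y), y)].
    l_adv = -d_out_s.mean()

    loss = l_feat + lambda1 * l_pix + lambda2 * l_adv

    if use_kl:
        # Optional KL term on temperature-softened outputs (enhanced SKDCGN).
        log_p_s = F.log_softmax(s_img.flatten(1) / temperature, dim=1)
        p_t = F.softmax(t_img.flatten(1) / temperature, dim=1)
        loss = loss + F.kl_div(log_p_s, p_t, reduction="batchmean")
    return loss
\end{verbatim}
In practice, \texttt{d\_out\_s} and the two feature lists would come from the discriminator described above, evaluated on the student and teacher images respectively.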
\section{Experiments and results} \label{sec:exps-results} This section defines our experimental setup, then proceeds to present the results. First, we test SKDCGN -- as defined in the \hyperref[sec:approach]{Approach} section -- on both ImageNet-1k and MNIST (Section \ref{sec:exps-skdcgn}), and based on the observed findings we make some changes to the proposed architecture to improve the quality of the results (Section \ref{sec:exps-improvement}). Due to computational constraints, we test these improvements on a smaller dataset, namely the double-colored variant of MNIST \cite{726791}. Finally, as an additional contribution, we conduct a thorough study on the composition mechanism, to gain a better understanding of how each mechanism influences the classification accuracy of an invariant classifier. We present the results of such a study in Section \ref{sec:exps-comp-mechanism}. \subsection{Datasets} \paragraph{ImageNet-1k.} The ImageNet-1k ILSVRC dataset \cite{5206848} contains 1,000 classes and consists of 1.2 million training images, 50,000 validation images, and 100,000 test images. Images were resized to $256\times256$ to keep the experiments consistent and to allow direct comparisons with the original results of \cite{DBLP:journals/corr/abs-2101-06046}. \paragraph{Double-colored MNIST.} We use the \textit{double-colored} MNIST dataset proposed by Sauer and Geiger in the original CGN paper \cite{DBLP:journals/corr/abs-2101-06046}. This is a variant of the MNIST dataset where both the digits and the background are independently colored. It consists of 60,000 $28\times28$ images of the 10 digits, along with a test set of 10,000 images. \subsection{Baseline Model: CGN with generator replaced by TinyGAN generator} The SKDCGN is compared with a modified version of the original CGN architecture, where each BigGAN has been replaced by the generator model of a TinyGAN. Training this baseline using the procedure described by \cite{DBLP:journals/corr/abs-2009-13829}, omitting KD, allows for rigorous comparisons that emphasize the effectiveness of the knowledge distillation process. Further training details are provided in Appendix \ref{app:baseline-training}.
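As a toy illustration of the double-colored MNIST construction described above, the snippet below colors a grayscale digit and its background independently; the concrete color palettes and sampling scheme used by Sauer and Geiger \cite{DBLP:journals/corr/abs-2101-06046} may differ, so this only conveys the general idea.
\begin{verbatim}
import numpy as np

def double_color(digit, fg_rgb, bg_rgb):
    # digit: (28, 28) grayscale array with values in [0, 1];
    # fg_rgb, bg_rgb: RGB triples chosen independently of each other.
    m = digit[..., None]   # use the digit intensity as a soft foreground mask
    return m * np.asarray(fg_rgb) + (1.0 - m) * np.asarray(bg_rgb)

# Example: a red digit on a green background (colors picked arbitrarily).
# colored = double_color(x, fg_rgb=[1.0, 0.1, 0.1], bg_rgb=[0.1, 0.8, 0.1])
\end{verbatim}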
\subsection{Results of SKDCGN} \label{sec:exps-skdcgn} \begin{figure}[t] \begin{subfigure}{\textwidth} \centering \hspace{6mm} \textit{ImageNet-1k} \hspace{36mm} \textit{Double-colored MNIST}\\ \includegraphics[width=0.48\linewidth]{Images/ims-outputs/imagenet/shape-left.png} \hfill \includegraphics[width=0.48\linewidth]{Images/ims-outputs/mnist/shape-left.png} \caption{\textit{Shape} mechanism.} \label{fig:shape_results} \end{subfigure} \\ \begin{subfigure}{\textwidth} \centering \hspace{6mm} \textit{ImageNet-1k} \hspace{36mm} \textit{Double-colored MNIST}\\ \includegraphics[width=0.48\linewidth]{Images/ims-outputs/imagenet/fg-left.png} \hfill \includegraphics[width=0.48\linewidth]{Images/ims-outputs/mnist/fg-left.png} \caption{\textit{Texture} mechanism.} \label{fig:fg_results} \end{subfigure} \\ \begin{subfigure}{\textwidth} \centering \hspace{6mm} \textit{ImageNet-1k} \hspace{36mm} \textit{Double-colored MNIST}\\ \includegraphics[width=0.48\linewidth]{Images/ims-outputs/imagenet/bg-left.png} \hfill \includegraphics[width=0.48\linewidth]{Images/ims-outputs/mnist/bg-left.png} \caption{\textit{Background} mechanism.} \label{fig:bg_results} \end{subfigure} \caption{A comparison of images (on both ImageNet-1k and double-colored MNIST) generated by the CGN backbones and those generated by the corresponding SKDCGN's TinyGAN (given the same input), for each independent mechanism.} \label{fig:im-results_t_b} \end{figure} The proposed model was firstly trained and tested on ImageNet-1k. To further validate our method, we repeated the training procedure on MNIST. The qualitative results are collected in Figure \ref{fig:im-results_t_b} and demonstrate that TinyGANs can closely approximate the output of each IM. While this is true for both datasets, the effectiveness of our method is especially visible in the case of MNIST. It is likely the case that the reduced capacity of the TinyGANs (compared to the original CGN backbones) is sufficient to decently model the underlying data distribution. ImageNet-1k, on the other hand, reveals more apparent (though still acceptable) discrepancies between the images, especially for the \textit{texture} IM. However, careful and extensive experiments revealed that the three TinyGANs could not generalize when random noise was given to the generator, i.e., they could not produce results beyond the test set. This might be due to a number of reasons. First, the compromised generalization capabilities of each IM's TinyGAN could be caused by their reduced network capacity. Furthermore, each TinyGAN was trained on all 1000 classes of ImageNet-1K, as opposed to Chang and Lu's choice of limiting the training data to the 398 animal labels \cite{DBLP:journals/corr/abs-2009-13829}. Finally, we generate the test samples using the test noise instead of random noise, since we hypothesize that the student networks only learn the manifolds that the teacher networks have been trained on. Additional experiments are required to analyze whether samples generated using random noise are found along the same manifold; unfortunately, we were hindered by the limited time frame allocated for this project, hence we leave this question open for future works. \begin{figure}[t!] 
\centering \begin{subfigure}{\textwidth} \includegraphics[width=\linewidth]{Images/kl_l1/30-test_mask.png} \caption{\textit{Shape} mechanism.} \label{fig:mnist_mask_kl_div_fg} \end{subfigure} \\ \begin{subfigure}{\textwidth} \includegraphics[width=\linewidth]{Images/kl_l1/29-test.png} \caption{\textit{Texture} mechanism.} \label{fig:mnist_mask_kl_div_bg} \end{subfigure} \\ \begin{subfigure}{\textwidth} \includegraphics[width=\linewidth]{Images/kl_l1/30-test_bg.png} \caption{\textit{Background} mechanism.} \label{fig:mnist__mask_kl_div_mask} \end{subfigure} \caption{A comparison of double-colored MNIST images generated by the CGN backbones and those generated by the corresponding SKDCGN's TinyGAN (given the same input) for each IM. Here, SKDCGN was tuned such that KL divergence is minimized between the teacher and student networks, and the L1 loss is multiplied with the activation of every layer.} \label{fig:mnist_kl_div} \end{figure} \begin{figure}[t!] \centering \begin{subfigure}{0.47\textwidth} \includegraphics[width=\linewidth]{Images/im_kl/1-sample.png} \caption{} \end{subfigure} \hfill \begin{subfigure}{0.47\textwidth} \includegraphics[width=\linewidth]{Images/im_kl/23-sample.png} \caption{} \end{subfigure} \caption{(a) Shape masks obtained after the \textit{first} epoch of SKDCGN training on ImageNet-1k, using KL divergence. (b) Shape masks obtained after the 23$^{\text{rd}}$ epoch of SKDCGN training on ImageNet-1k, \textit{without} KL divergence. Evidently, KL enhances the quality of the masks from the first epoch, whereas its absence compromises the results even at a later stage of training.} \label{fig:Imagenet_mask_kl_div} \end{figure} \subsection{Improving the SKDCGN model} \label{sec:exps-improvement} The results presented in the previous section reveal that the outputs are noisy and ambiguous in nature when knowledge distillation is performed using the pre-trained models provided by Sauer and Geiger \cite{DBLP:journals/corr/abs-2101-06046} (note the artifacts in the SKDCGN's outputs of Fig. \ref{fig:im-results_t_b}, especially those trained on ImageNet-1k). This statement was supported by an interesting yet unexpected result of the study on the composition mechanism (refer to Section \ref{sec:exps-comp-mechanism}): it was observed that modifying Equation \ref{eq:composition} such that the shape mask $\mathbf{m}$ is multiplied with a weight factor of 0.75 (i.e., setting the transparency of the shape mask to 75\%), yielded an accuracy increase of the CGN's invariant classifier. The findings of this experiment -- conducted on the double-colored MNIST dataset -- suggest that the mask component is noisy in nature, leading to ambiguities in the decision boundaries during the classification of several digits. In light of this new hypothesis, we attempt to use the \textit{Kullback–Leibler} (KL) divergence to improve the visual quality of the outputs\footnote{It is noteworthy that other techniques were tested in the attempt to improve the visual quality of the results. Although they did not prove to be as beneficial, they are described in Appendix \ref{sec:improve_skdcgn}.}. Since KL leads to entropy minimization between the teacher and student networks, we deem such a technique adequate for the task at hand. Moreover, the choice of using KL was encouraged by the work of Asano and Saeed \cite{asano2021extrapolating}, which proved the suitability of the measure in this context. Concretely, the KL Divergence loss (as defined in Eq. 
\ref{eq:kl-loss}) was included in the overall generator loss $\mathcal{L}_{\text{G}}$ as seen in Equation \ref{eq:loss-adv-gen}. First, the modified SKDCGN was tested on the double-colored MNIST dataset. As depicted in Figure \ref{fig:mnist_kl_div}, the introduction of KL divergence improves SKDCGN's visual fidelity of both \textit{background} and \textit{texture} IMs, while the quality of the \textit{shape} masks seems to diminish after a few epochs. Contrarily, this approach appeared to be beneficial for the shape mechanism too, in the context of ImageNet-1k. The shape masks resulted more natural and consistent since the first epoch, whereas the absence of KL yielded noisy masks even at a later stage of training (refer to Figure \ref{fig:Imagenet_mask_kl_div}). \subsection{Additional results: study of the shape IM} \label{sec:exps-comp-mechanism} \begin{table}[t] \centering \begin{tabular}{lrrr} \toprule & \;\;Noise & \;\;Rotation & \;\;Transparency\\ \midrule Train Accuracy & $99.9$ & $99.1$ & $94.7$ \\ Test Accuracy & $14.96$ & $13.51$ & $\mathbf{58.86}$ \\ \bottomrule\\ \end{tabular} \caption{Results of the invariant classifier for the analysis of the shape IM. The classifier has been trained to predict whether images are CGN-generated or real. The training examples contain counterfactuals whose shape mechanism has been tuned with one of the three transformations indicated in the table (noise, rotation, transparency -- refer to Sec.\ref{sec:exps-comp-mechanism} for further details).} \label{tab:shape_exp_results} \end{table} \begin{figure}[t] \centering \begin{subfigure}{0.25\textwidth} \includegraphics[width=\linewidth]{Images/Shape_exp/noise/1_46000_mask.png} \caption{} \end{subfigure} \hfill \begin{subfigure}{0.25\textwidth} \includegraphics[width=\linewidth]{Images/Shape_exp/rot/1_46000_mask.png} \caption{} \end{subfigure} \hfill \begin{subfigure}{0.25\textwidth} \includegraphics[width=\linewidth]{Images/Shape_exp/trans/1_46000_mask_2.png} \caption{} \end{subfigure} \caption{Shape masks obtained after (a) addition of Gaussian random noise, (b) application of random rotation and (c) decrease of the mask opacity (i.e., lowering its transparency to 75\%).} \label{fig:shape_exp} \end{figure} As an additional contribution, we conduct a thorough study on the composition mechanism, to gain a better understanding of how the mechanisms influence the classification accuracy of an invariant classifier (i.e., a classifier that predicts whether an image is CGN-generated or real). Due to the limited time at our disposal, we focused on the mechanism that we deem most important in the decision-making of such a classifier, namely the \textit{shape}. To evaluate the effects of the shape IM we trained several (original) CGN models on the double-colored MNIST dataset; we tuned the resulting shape masks prior to the counterfactual image generation (governed by the composition mechanism of Equation \ref{eq:composition}) and used the generated images to train an invariant classifier. More specifically, we experimented with (i) the addition of Gaussian noise in the shape mask, (ii) random rotation of the mask, and (iii) multiplying the mask $\mathbf{m}$ in the composition mechanism (Eq. \ref{eq:composition}) with a factor smaller than 1 (or in other words, lowering the opacity of the shape mask). A transparency of 75\% (hence a weight factor of $0.75$) was experimentally found to be most beneficial for the accuracy of the classifier. 
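For clarity, a minimal sketch of the three mask adjustments is given below. The noise level, the rotation range, and the use of \texttt{torchvision}'s \texttt{rotate} are illustrative assumptions; only the weight factor of $0.75$ corresponds to the best-performing setting reported above.
\begin{verbatim}
import torch
from torchvision.transforms.functional import rotate

def adjust_mask(m, mode, sigma=0.1, max_angle=30.0, weight=0.75):
    # m: shape mask in [0, 1] with layout (N, 1, H, W).
    if mode == "noise":          # (i) additive Gaussian noise
        return (m + sigma * torch.randn_like(m)).clamp(0.0, 1.0)
    if mode == "rotation":       # (ii) random rotation of the mask
        angle = float(torch.empty(1).uniform_(-max_angle, max_angle))
        return rotate(m, angle)
    if mode == "transparency":   # (iii) lower the mask opacity by a factor < 1
        return weight * m
    return m

# The adjusted mask then enters the composition mechanism, e.g.:
# m_adj = adjust_mask(m, "transparency")
# x_gen = m_adj * f + (1 - m_adj) * b
\end{verbatim}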
The influence of the three transformations on the invariant classifier is quantified -- in terms of accuracy -- in Table \ref{tab:shape_exp_results}; sample shape masks generated from each transformation are displayed in Figure \ref{fig:shape_exp}. It is apparent from the test accuracy values that Gaussian noise and random rotations do not lead to any remarkable performance of the classifier; on the contrary, they degrade its accuracy to values below 15\%. This is most likely the result of overfitting on the training set, as supported by the \textit{train} accuracy values. On the other hand, lowering the opacity of the mask substantially boosts the test accuracy, improving the previous results by a factor of roughly $4\times$. It is noteworthy that the masks obtained using the transparency adjustment are more akin to those achieved using regular CGNs (see Figure \ref{fig:shape_exp}). The other transformations, instead, result in mask shapes that are markedly different. As such, they can potentially be used to make classifiers more robust when mixed with regular data during training. Because this is an extensive topic, we believe it warrants further research. \section{Discussion and conclusion} \label{sec:conclusion} With the prevalence of heavily parameterized architectures such as BigGANs, and with the advent of limited-access models like the trending DALL·E 2, source-free compression becomes a growing necessity. In this paper we explored the possibility of obtaining a lightweight version of the CGN network, assuming that we do not have access to the training process of the model. More specifically, we treat the backbone of each independent mechanism (shape, texture and background) as a black box, and then use KD to transfer the knowledge of the pre-trained cumbersome networks to simple TinyGANs. SKDCGN achieves a remarkable compression of the overall network: it models the shape mechanism -- initially controlled by a BigGAN (55.9M parameters) and a U2-Net (44M parameters) -- using a single TinyGAN (6.4M parameters); similarly, it replaces the BigGANs responsible for the texture and background IMs with TinyGANs, and discards the fourth BigGAN of the original CGN network that provides primary training supervision via a reconstruction loss. This translates into four BigGANs and one U2-Net (55.9M$\times$4 + 44M parameters, totalling 267.6M) being replaced with three simple TinyGANs (6.4M parameters each, i.e., 19.2M parameters in total). Despite the significant compression, we demonstrate the ability of our model to generate counterfactual images on the ImageNet-1k and double-colored MNIST datasets (see Figure \ref{fig:im-results_t_b}). When trained on the latter, SKDCGN's network capacity proved sufficient to model the simple data distribution. When trained on the former, the proposed method exhibited remarkable ability in mimicking the original shape and background generations, while the texture mechanism suffered more from the reduction in size. This finding reveals great potential for future work that would attempt to tune the distillation (and hence enhance the synthesis) of the texture images, for instance by including data augmentation in the training procedure. Given the obtained results, we attempt to limit the presence of noisy and ambiguous artifacts by minimizing the entropy between the teacher and student networks. To this end, we introduce a new measure in the knowledge distillation loss, i.e.
KL divergence, which we find to enhance the visual quality of some IMs on both ImageNet-1k and MNIST. Finally, we conduct a study on the composition mechanism to gain a better understanding of how the \textit{shape} IM influences the classification accuracy of an invariant classifier. Of the adjustments we tested, giving a lower weight to the shape mask $\mathbf{m}$ is the one that seemingly boosts the classifier performance.

\section{Future work}
To conclude, the experimental findings of SKDCGN show that, by using Knowledge Distillation, one can transfer the ability of a cumbersome network to a lower-capacity model while still maintaining competitive performance. Although this paper unveils its potential, SKDCGN requires further research that we encourage other researchers to undertake. In addition to the suggestions offered throughout the sections, possible avenues of research include, but are not limited to: improving the image generation process by using higher-order activation functions, since the utilized datasets consist of rich image data; improving the teacher-student architecture by introducing additional loss functions; using a learnable, neural network-based composition function instead of an analytical expression.

\section*{Acknowledgments}
We would like to express our sincere gratitude to Prof. dr. Efstratios Gavves and Prof. Wilker Aziz for effectively organizing the \textit{Deep Learning II} course at the University of Amsterdam, which is the main reason this paper exists. We are thankful to our supervisor, Christos Athanasiadis, for his precious guidance throughout the project. Finally, we also thank the former Program Director of the MSc. Artificial Intelligence, Prof. dr. Cees G.M. Snoek, and the current Program Manager, Prof. dr. Evangelos Kanoulas, for effectively conducting the Master's program in Artificial Intelligence at the University of Amsterdam.

\clearpage
\appendix
\section*{Appendix}
\section{Architecture details of the different models}
This section contains the architectural details of the different models used in the proposed method. It reviews the theory of the papers on which we base our work (i.e. the CGN network \cite{DBLP:journals/corr/abs-2101-06046}, Sec. \ref{app:cgn-architecture}, and the TinyGAN model \cite{DBLP:journals/corr/abs-2009-13829}, Sec. \ref{app:tinygan-architecture}) and also presents the baseline model (Sec. \ref{app:baseline-model}).

\subsection{Original CGN architecture}
\label{app:cgn-architecture}
This section contains a diagram of the original CGN architecture, as presented in \cite{DBLP:journals/corr/abs-2101-06046}.

\begin{figure}[h]
\centering
\includegraphics[width=0.7\linewidth]{Images/CGN_architecture.pdf}
\caption{CGN architecture diagram. Retrieved from \cite{DBLP:journals/corr/abs-2101-06046}.}
\label{fig:cgn-diagram}
\end{figure}

Figure \ref{fig:cgn-diagram} illustrates the CGN architecture. The network is split into four mechanisms, the shape mechanism $f_{shape}$, the texture mechanism $f_{text}$, the background mechanism $f_{bg}$, and the composer $C$. Components with trainable parameters are shown in blue, components with fixed parameters in green. The primary supervision is provided by an unconstrained conditional GAN (cGAN) via the reconstruction loss $\mathcal{L}_{rec}$. The cGAN is only used for training, as indicated by the dotted lines.
Each mechanism takes as input the noise vector $\mathbf{u}$ (sampled from a spherical Gaussian) and the label $y$ (drawn uniformly from the set of possible labels $\mathcal{Y}$) and minimizes its respective loss ($\mathcal{L}_{shape}$, $\mathcal{L}_{text}$, and $\mathcal{L}_{bg}$). To generate a set of counterfactual images, we sample $\mathbf{u}$ and then independently sample $y$ for each mechanism.

\subsection{TinyGAN architecture}
\label{app:tinygan-architecture}

\begin{figure}[t]
\centering
\begin{subfigure}{0.3\textwidth}
\centering
\includegraphics[width=1\linewidth]{Images/Student_G.png}
\caption{Student Generator $G$ \cite{DBLP:journals/corr/abs-2009-13829} }
\label{fig:student generator}
\end{subfigure}
\begin{subfigure}{0.35\textwidth}
\centering
\includegraphics[width=1\linewidth]{Images/Res_S.png}
\caption{A Residual Block in $G$ \cite{DBLP:journals/corr/abs-2009-13829} }
\label{fig:residual block}
\end{subfigure}
\caption{Architecture of the TinyGAN (student) generator}
\label{fig:tinygan-generator}
\end{figure}

This section provides a brief overview of the TinyGAN architecture. For more details, refer to \cite{DBLP:journals/corr/abs-2009-13829}.
\paragraph{Generator.} As shown in Figure \ref{fig:tinygan-generator}, TinyGAN comprises a ResNet \cite{resnet}-based generator with class-conditional BatchNorm \cite{batchnorm1} \cite{batchnorm2}. To keep a tight computation budget, it does not adopt attention-based \cite{self-attention} or progressive-growing mechanisms \cite{progressing-growing}. To substantially reduce the model size compared to BigGAN, it:
\begin{itemize}
    \item Uses fewer channels;
    \item Replaces standard convolutions with depthwise separable convolutions;
    \item Adopts a simpler way to introduce class conditioning.
\end{itemize}
Overall, TinyGAN's generator has 16$\times$ fewer parameters than BigGAN's generator.
\vspace{-0.5em}
\paragraph{Discriminator.} Following \cite{ref-discr-1} \cite{DBLP:journals/corr/abs-1802-05957}, \cite{DBLP:journals/corr/abs-2009-13829} opt for a spectrally normalized discriminator and introduce the class condition via projection. Instead of utilizing complicated residual blocks, they simply stack strided convolutional layers as used in DCGAN \cite{dcgan}, which greatly reduces the number of parameters. Overall, TinyGAN's discriminator has 10$\times$ fewer parameters than BigGAN's discriminator.

\subsection{Baseline model}
\label{app:baseline-model}
The baseline is a standard CGN architecture whose BigGANs have been replaced with TinyGANs. Due to the need for a pre-trained model that (i) supervises the CGN training using a reconstruction loss and (ii) serves as the initialization of the IM GANs, a TinyGAN was trained from scratch using the KD strategy described in \cite{DBLP:journals/corr/abs-2009-13829}. Section \ref{app:baseline-details} dives into the details of the training procedure, then presents qualitative results of both the newly-trained TinyGAN and of the baseline model.

\begin{figure}[t!]
\begin{subfigure}{\textwidth}
\centering
\hspace{6mm} \textit{ImageNet-1k} \hspace{36mm} \textit{Double-colored MNIST}\\
\includegraphics[width=0.48\linewidth]{Images/ims-outputs/imagenet/shape-right.png}
\hfill
\includegraphics[width=0.48\linewidth]{Images/ims-outputs/mnist/shape-right.png}
\caption{\textit{Shape} mechanism.}
\label{fig:shape_results_app}
\end{subfigure}
\\
\begin{subfigure}{\textwidth}
\centering
\hspace{6mm} \textit{ImageNet-1k} \hspace{36mm} \textit{Double-colored MNIST}\\
\includegraphics[width=0.48\linewidth]{Images/ims-outputs/imagenet/fg-right.png}
\hfill
\includegraphics[width=0.48\linewidth]{Images/ims-outputs/mnist/fg-right.png}
\caption{\textit{Texture} mechanism.}
\label{fig:fg_results_app}
\end{subfigure}
\\
\begin{subfigure}{\textwidth}
\centering
\hspace{6mm} \textit{ImageNet-1k} \hspace{36mm} \textit{Double-colored MNIST}\\
\includegraphics[width=0.48\linewidth]{Images/ims-outputs/imagenet/bg-right.png}
\hfill
\includegraphics[width=0.48\linewidth]{Images/ims-outputs/mnist/bg-right.png}
\caption{\textit{Background} mechanism.}
\label{fig:bg_results_app}
\end{subfigure}
\caption{A comparison of images generated by the CGN backbones and those generated by the corresponding SKDCGN's TinyGAN (given the same input), for each independent mechanism. We train on both the ImageNet-1k (left images) and double-colored MNIST (right images) datasets.}
\label{fig:mnist_ims}
\end{figure}

\section{Additional results of SKDCGN's IMs}
This section expands Section 4.3 of the main paper and contains more results obtained from each of SKDCGN's IMs, using both the ImageNet-1k and double-colored MNIST datasets. More specifically, we compare the output of each CGN backbone with that of the corresponding SKDCGN's TinyGAN, given the same input. Please refer to Figure \ref{fig:mnist_ims}.

\section{Baseline Model}
\label{app:baseline-details}
The baseline model is a modified version of the original CGN architecture, where each BigGAN has been replaced by the generator model of a TinyGAN. Training this baseline using the procedure described by \cite{DBLP:journals/corr/abs-2009-13829}, omitting KD, allows for rigorous comparisons that emphasize the effectiveness of the knowledge distillation process. In this section we provide training details and collect sample outputs of the trained model.

\begin{figure}[t!]
\begin{subfigure}{\textwidth}
\includegraphics[width=\linewidth]{Images/1-test.png}
\caption{A comparison of images generated by BigGAN and the TinyGAN after the $1^{\text{st}}$ epoch. Images in the top row are produced by BigGAN, while those in the bottom row are produced by the TinyGAN given the same input.}
\label{fig:tinygan_results_1}
\end{subfigure}
\begin{subfigure}{\textwidth}
\includegraphics[width=\linewidth]{Images/18-test.png}
\caption{A comparison of images generated by BigGAN and the TinyGAN after the $18^{\text{th}}$ epoch. Images in the top row are produced by BigGAN, while those in the bottom row are produced by the TinyGAN given the same input.}
\label{fig:tinygan_results_18}
\end{subfigure}
\caption{A comparison of images generated by BigGAN and the re-trained TinyGAN. Images in the top rows are produced by BigGAN, while those in the bottom rows are produced by the TinyGAN given the same input.}
\label{tinygan_results}
\end{figure}

\subsection{Training Details}
\label{app:baseline-training}
The training procedure of a CGN requires a pre-trained GAN to provide primary supervision via the reconstruction loss. However, the original TinyGAN was trained on only animal classes, hence the publicly-available model could not be used for our baseline.
In order to consistently use the same dataset for all the experiments, we re-trained a TinyGAN from scratch (as described in \cite{DBLP:journals/corr/abs-2009-13829}) on all classes of ImageNet-1k. The images generated by TinyGAN are visualized in Appendix \ref{app:pretrained-tinygan-gen-outputs}. The images generated for each Independent Mechanism using our baseline model can be seen in Appendix \ref{app:baseline-outputs}. Apart from this, we additionally generated counterfactuals using the baseline model, which are shown in Appendix \ref{app:baseline-counterfactuals}.

\begin{figure}[ht!]
\centering
\begin{tabular}{lllll}
$\Tilde{m}$ & \includegraphics[width=.18\linewidth]{Images/Baseline_IMs/251_1_premask_ep_0000000.png} \hspace{-0.5em} \includegraphics[width=.18\linewidth]{Images/Baseline_IMs/251_1_premask_ep_0300000.png} \hspace{-0.5em} \includegraphics[width=.18\linewidth]{Images/Baseline_IMs/251_1_premask_ep_0600000.png} \hspace{-0.5em} \includegraphics[width=.18\linewidth]{Images/Baseline_IMs/251_1_premask_ep_0900000.png} \hspace{-0.5em} \includegraphics[width=.18\linewidth]{Images/Baseline_IMs/251_1_premask_ep_1200000.png} \vspace{-0.31em}\\
\vspace{-0.34em} $m$ & \includegraphics[width=.18\linewidth]{Images/Baseline_IMs/251_2_mask_ep_0000000.png} \hspace{-0.49em} \includegraphics[width=.18\linewidth]{Images/Baseline_IMs/251_2_mask_ep_0300000.png} \hspace{-0.49em} \includegraphics[width=.18\linewidth]{Images/Baseline_IMs/251_2_mask_ep_0600000.png} \hspace{-0.48em} \includegraphics[width=.18\linewidth]{Images/Baseline_IMs/251_2_mask_ep_0900000.png} \hspace{-0.5em} \includegraphics[width=.18\linewidth]{Images/Baseline_IMs/251_2_mask_ep_1200000.png}\\
\vspace{-0.33em} $f$ & \includegraphics[width=.18\linewidth]{Images/Baseline_IMs/251_3_texture_ep_0000000.png} \hspace{-0.49em} \includegraphics[width=.18\linewidth]{Images/Baseline_IMs/251_3_texture_ep_0300000.png} \hspace{-0.49em} \includegraphics[width=.18\linewidth]{Images/Baseline_IMs/251_3_texture_ep_0600000.png} \hspace{-0.48em} \includegraphics[width=.18\linewidth]{Images/Baseline_IMs/251_3_texture_ep_0900000.png} \hspace{-0.5em} \includegraphics[width=.18\linewidth]{Images/Baseline_IMs/251_3_texture_ep_1200000.png}\\
\vspace{-0.33em} $b$ & \includegraphics[width=.18\linewidth]{Images/Baseline_IMs/251_4_bgs_ep_0000000.png} \hspace{-0.49em} \includegraphics[width=.18\linewidth]{Images/Baseline_IMs/251_4_bgs_ep_0300000.png} \hspace{-0.49em} \includegraphics[width=.18\linewidth]{Images/Baseline_IMs/251_4_bgs_ep_0600000.png} \hspace{-0.48em} \includegraphics[width=.18\linewidth]{Images/Baseline_IMs/251_4_bgs_ep_0900000.png} \hspace{-0.5em} \includegraphics[width=.18\linewidth]{Images/Baseline_IMs/251_4_bgs_ep_1200000.png}\\
\vspace{-0.41em} $x_{gen}$ & \includegraphics[width=.18\linewidth]{Images/Baseline_IMs/251_5_gen_ims_ep_0000000.png} \hspace{-0.49em} \includegraphics[width=.18\linewidth]{Images/Baseline_IMs/251_5_gen_ims_ep_0300000.png} \hspace{-0.49em} \includegraphics[width=.18\linewidth]{Images/Baseline_IMs/251_5_gen_ims_ep_0600000.png} \hspace{-0.48em} \includegraphics[width=.18\linewidth]{Images/Baseline_IMs/251_5_gen_ims_ep_0900000.png} \hspace{-0.5em} \includegraphics[width=.18\linewidth]{Images/Baseline_IMs/251_5_gen_ims_ep_1200000.png}
\end{tabular}
\caption{Individual IM outputs after training for the baseline. From top to bottom: $\Tilde{m}$, $m$, $f$, $b$, $x_{gen}$.
From left to right: at the start of training, after epoch 300k$^{\text{th}}$, epoch 600k$^{\text{th}}$, epoch 900k$^{\text{th}}$, and epoch 1.2M$^{\text{th}}$.}
\label{fig:IMs_baseline_2}
\end{figure}

\subsubsection{Generated outputs of TinyGAN trained on ImageNet-1k}
\label{app:pretrained-tinygan-gen-outputs}
A TinyGAN was trained using all 1000 classes of the ImageNet-1k dataset. Training details are provided in \cite{DBLP:journals/corr/abs-2009-13829}. Although the original paper trains the model for 1.2 million epochs, we were forced to restrict the number of iterations due to computational constraints. After distilling the knowledge of a BigGAN for 18 epochs, our TinyGAN generates reasonable images, as seen in Figure \ref{fig:tinygan_results_18}. For comparison, we also present the images generated after the first epoch in Figure \ref{fig:tinygan_results_1}. It can be observed that further training would likely produce images of better quality. Note that animal classes are better captured by the model: this is in line with the findings of \cite{DBLP:journals/corr/abs-2009-13829}.

\begin{figure}[ht!]
\centering
\begin{tabular}{lllll}
\includegraphics[width=.18\linewidth]{Images/Baseline_Counterfactuals/test_0000000_x_gen.jpg} \hspace{-0.49em} \includegraphics[width=.18\linewidth]{Images/Baseline_Counterfactuals/test_0000019_x_gen.jpg} \hspace{-0.49em} \includegraphics[width=.18\linewidth]{Images/Baseline_Counterfactuals/test_0000070_x_gen.jpg} \hspace{-0.49em} \includegraphics[width=.18\linewidth]{Images/Baseline_Counterfactuals/test_0000096_x_gen.jpg} \hspace{-0.5em} \includegraphics[width=.18\linewidth]{Images/Baseline_Counterfactuals/test_0000142_x_gen.jpg} \vspace{-0.31em}\\
\vspace{-0.33em} \includegraphics[width=.18\linewidth]{Images/Baseline_Counterfactuals/test_0000193_x_gen.jpg} \hspace{-0.49em} \includegraphics[width=.18\linewidth]{Images/Baseline_Counterfactuals/test_0000198_x_gen.jpg} \hspace{-0.49em} \includegraphics[width=.18\linewidth]{Images/Baseline_Counterfactuals/test_0000205_x_gen.jpg} \hspace{-0.49em} \includegraphics[width=.18\linewidth]{Images/Baseline_Counterfactuals/test_0000245_x_gen.jpg} \hspace{-0.5em} \includegraphics[width=.18\linewidth]{Images/Baseline_Counterfactuals/test_0000259_x_gen.jpg}\\
\vspace{-0.32em} \includegraphics[width=.18\linewidth]{Images/Baseline_Counterfactuals/test_0001213_x_gen.jpg} \hspace{-0.49em} \includegraphics[width=.18\linewidth]{Images/Baseline_Counterfactuals/test_0001214_x_gen.jpg} \hspace{-0.49em} \includegraphics[width=.18\linewidth]{Images/Baseline_Counterfactuals/test_0001312_x_gen.jpg} \hspace{-0.49em} \includegraphics[width=.18\linewidth]{Images/Baseline_Counterfactuals/test_0001325_x_gen.jpg} \hspace{-0.5em} \includegraphics[width=.18\linewidth]{Images/Baseline_Counterfactuals/test_0001426_x_gen.jpg}\\
\vspace{-0.32em} \includegraphics[width=.18\linewidth]{Images/Baseline_Counterfactuals/test_0001460_x_gen.jpg} \hspace{-0.49em} \includegraphics[width=.18\linewidth]{Images/Baseline_Counterfactuals/test_0001486_x_gen.jpg} \hspace{-0.49em} \includegraphics[width=.18\linewidth]{Images/Baseline_Counterfactuals/test_0001521_x_gen.jpg} \hspace{-0.49em} \includegraphics[width=.18\linewidth]{Images/Baseline_Counterfactuals/test_0001642_x_gen.jpg} \hspace{-0.5em} \includegraphics[width=.18\linewidth]{Images/Baseline_Counterfactuals/test_0001683_x_gen.jpg}\\
\vspace{-0.4em} \includegraphics[width=.18\linewidth]{Images/Baseline_Counterfactuals/test_0001696_x_gen.jpg} \hspace{-0.49em}
\includegraphics[width=.18\linewidth]{Images/Baseline_Counterfactuals/test_0001697_x_gen.jpg} \hspace{-0.49em} \includegraphics[width=.18\linewidth]{Images/Baseline_Counterfactuals/test_0001790_x_gen.jpg} \hspace{-0.49em} \includegraphics[width=.18\linewidth]{Images/Baseline_Counterfactuals/test_0001881_x_gen.jpg} \hspace{-0.5em} \includegraphics[width=.18\linewidth]{Images/Baseline_Counterfactuals/test_0001978_x_gen.jpg}
\end{tabular}
\caption{Counterfactuals generated by the baseline on test data for ImageNet-1k.}
\label{fig:counterfactuals_baseline}
\end{figure}

\subsubsection{Generated outputs of the baseline trained on ImageNet-1k}
\label{app:baseline-outputs}
Figure \ref{fig:IMs_baseline_2} illustrates the individual outputs of each IM at the start of training, after epoch 300k$^{\text{th}}$, epoch 600k$^{\text{th}}$, epoch 900k$^{\text{th}}$, and epoch 1.2M$^{\text{th}}$ (from left to right). In each figure, we show from top to bottom: pre-masks $\Tilde{m}$, masks $m$, texture $f$, background $b$, and composite images $x_{gen}$.

\subsubsection{Generated Counterfactual Images of the Baseline trained on ImageNet-1k}
\label{app:baseline-counterfactuals}
Finally, we show counterfactual images generated by the baseline model in Figure \ref{fig:counterfactuals_baseline}.

\section{Improving the SKDCGN process}
\label{sec:improve_skdcgn}
As mentioned in Section 4.4 of the main paper, we observed that the outputs from the CGN are noisy in nature. Figure \ref{fig:mnist_cgn_noisy} illustrates how noisy the MNIST digits are. In this section we try to improve our architecture through several methods.

\begin{figure}[ht]
\centering
\includegraphics[width=\linewidth]{Images/MNIST_noisy_mask_cgn.pdf}
\caption{Noisy outputs generated by the CGN when using the pretrained weights provided by the authors.}
\label{fig:mnist_cgn_noisy}
\end{figure}

To improve the images generated by our architecture, we believe the room for improvement lies in the following components:
\begin{itemize}
    \item Improving the quality of the images generated by the GAN network in our architecture. Loss functions such as a VGG-based perceptual loss or an L1 reconstruction loss are usually added for this purpose.
    \item Improving the existing knowledge distillation framework, by adding new loss functions to the distillation task, so that the student learns better from the teacher's guidance.
\end{itemize}

\begin{figure}[ht!]
\centering
\begin{subfigure}{\textwidth}
\includegraphics[width=\linewidth]{Images/bce/2-test.png}
\caption{A comparison of images generated by the CGN \textbf{shape} backbone (\textit{top} row) and those generated by the corresponding SKDCGN given the same input (\textit{bottom} row) after 2 epochs on test data.}
\label{fig:mnist_mask_bce2}
\end{subfigure}
\\
\begin{subfigure}{\textwidth}
\includegraphics[width=\linewidth]{Images/bce/10-test.png}
\caption{A comparison of images generated by the CGN \textbf{shape} backbone (\textit{top} row) and those generated by the corresponding SKDCGN given the same input (\textit{bottom} row) after 10 epochs on test data.}
\label{fig:mnist_mask_bce10}
\end{subfigure}
\\
\begin{subfigure}{\textwidth}
\includegraphics[width=\linewidth]{Images/bce/30-test.png}
\caption{A comparison of images generated by the CGN \textbf{shape} backbone (\textit{top} row) and those generated by the corresponding SKDCGN given the same input (\textit{bottom} row) after 30 epochs on test data.}
\label{fig:mnist__mask_bce30}
\end{subfigure}
\caption{A comparison of images generated by the CGN backbone and those generated by the corresponding SKDCGN (given the same input) for the mask IM, with an added cross-entropy loss.}
\label{fig:mnist_ims_1}
\end{figure}

To improve the quality of the images, we observe that our architecture already integrates most of the relevant loss functions, either implicitly or explicitly. Hence, we add a cross-entropy loss for the generator and discriminator of the mask IM and obtain the results shown in Figure \ref{fig:mnist_mask_bce2} for the second epoch. We observe that digits like `0' are reconstructed; however, for other digits the outputs look noisy. By the end of the 10th epoch on the test set (Fig. \ref{fig:mnist_mask_bce10}), we observe that the digits are reconstructed. We continued training, expecting better results than those already observed; however, contrary to our expectations, we observe artefacts by the end of the 30th epoch, as shown in Fig. \ref{fig:mnist__mask_bce30}.

\begin{figure}[ht!]
\centering
\begin{subfigure}{0.9\textwidth}
\includegraphics[width=\linewidth]{Images/kl_layer/2-test.png}
\caption{A comparison of images generated by the CGN \textbf{shape} backbone (\textit{top} row) and those generated by the corresponding SKDCGN given the same input (\textit{bottom} row) after 2 epochs on test data.}
\label{fig:mnist_kl_layer2}
\end{subfigure}
\\
\begin{subfigure}{0.9\textwidth}
\includegraphics[width=\linewidth]{Images/kl_layer/10-test.png}
\caption{A comparison of images generated by the CGN \textbf{shape} backbone (\textit{top} row) and those generated by the corresponding SKDCGN given the same input (\textit{bottom} row) after 10 epochs on test data.}
\label{fig:mnist_kl_layer10}
\end{subfigure}
\\
\begin{subfigure}{0.9\textwidth}
\includegraphics[width=\linewidth]{Images/kl_layer/30-test.png}
\caption{A comparison of images generated by the CGN \textbf{shape} backbone (\textit{top} row) and those generated by the corresponding SKDCGN given the same input (\textit{bottom} row) after 30 epochs on test data.}
\label{fig:mnist_kl_layer30}
\end{subfigure}
\caption{A comparison of images generated by the CGN backbone and those generated by the corresponding SKDCGN (given the same input) for the mask IM, with the KL divergence applied to the activations of every layer instead of L1.}
\label{fig:mnist_kl_layer}
\end{figure}

\subsection{KL multiplied with layer instead of L1}
\label{app:kl_instead_l1}
Since the image generation process already includes most of the components needed to keep the reconstruction in place, we tried to improve the knowledge distillation between the teacher and student networks by applying the KL divergence to the activations of every layer of the network, instead of the default L1 term. The results, shown in Fig. \ref{fig:mnist_kl_layer}, were not satisfactory, possibly because the L1 reconstruction loss applied to the activations of every layer is explicitly needed.

\begin{figure}[ht!]
\begin{subfigure}{\textwidth}
\includegraphics[width=\linewidth]{Images/l2/2-test.png}
\caption{A comparison of images generated by the CGN \textbf{shape} backbone (\textit{top} row) and those generated by the corresponding SKDCGN given the same input (\textit{bottom} row) for the mask IM after 2 epochs on test data.}
\label{fig:mnist_mask_mse2}
\end{subfigure}
\\
\begin{subfigure}{\textwidth}
\includegraphics[width=\linewidth]{Images/l2/10-test.png}
\caption{A comparison of images generated by the CGN \textbf{shape} backbone (\textit{top} row) and those generated by the corresponding SKDCGN given the same input (\textit{bottom} row) for the mask IM after 10 epochs on test data.}
\label{fig:mnist_mask_mse10}
\end{subfigure}
\\
\begin{subfigure}{\textwidth}
\includegraphics[width=\linewidth]{Images/l2/30-test.png}
\caption{A comparison of images generated by the CGN \textbf{shape} backbone (\textit{top} row) and those generated by the corresponding SKDCGN given the same input (\textit{bottom} row) for the mask IM after 30 epochs on test data.}
\label{fig:mnist__mask_mse_30}
\end{subfigure}
\caption{A comparison of images generated by the CGN backbone and those generated by the corresponding SKDCGN (given the same input) for the mask IM, with L2 applied to the activations of every layer instead of L1.}
\label{fig:mnist_mse}
\end{figure}

\subsection{MSE instead of L1}
\label{app:mse_no_l1}
In addition, we also tried an L2 loss instead of the L1 loss, but it led to noisier outputs than those previously generated, as shown in Figure \ref{fig:mnist_mse}.
Since L2 assumes that the influence of noise is independent of the local characteristics of the image, the resulting outputs are noisy in nature.

\clearpage
\bibliographystyle{unsrt}
\bibliography{egbib}
\end{document}
https://openreview.net/forum?id=ewS9kxTKF7f
https://arxiv.org/abs/2208.04226
Please evaluate the proposed method in our paper, focusing on its novelty and contributions, as well as the experimental results and comparisons with baseline methods.
This paper studies an interesting task: learning less parameterized models from the bigGAN teacher using knowledge distillation. However, the paper itself lacks clarity, especially in the exps. This paper aims to learn lightweight models from the bigGAN teacher using knowledge distillation (KL loss at both pixel/feature levels). The Introduction, Related work, and Approach are easy to follow and well explained. Unfortunately, I find the exp part poorly written, making it extremely hard to understand: does the result support the claim? I lost track completely there. The same also happens in the appendix. Please add concrete conclusions or take-aways for each figure and re-write the exp part. Despite the fact the task/problem addressed in the paper is both theoretically and practically meaningful, I would still recommend 'reject' given the flaws in the writing, especially the exp part. Fig 1: left: are the modules in blue non-trainable? right: are the tinyGANs trainable during inference? Or there is a mistake in legend? line 223: Abuse of notation: "S" refers to the student model in eq2, while in eq4 it is defined as the generator line 344: what is the take-away from fig 2. I do not see a conclusion? It seems the SKDCGN generates less impressive results than the CGN. line 351: "We realize that the student is as good as the teacher." where does this conclusion come from? line 356/358: Grammatically incorrect. I do not follow these claims. It seems this part (exp 4.4) is not in a good shape as a submission. Appendix B2/B3: It is unclear what the takeaways are in the figures? I find it hard to interpret appendix: line 226/227 typos. Rating: 4: Ok but not good enough - rejection Confidence: 2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper
\documentclass[runningheads]{llncs} \usepackage{graphicx} \usepackage{orcidlink} \usepackage{tikz} \usepackage{comment} \usepackage{amsmath,amssymb} % \usepackage{color} \usepackage{ragged2e} \usepackage{xcolor} % \usepackage{graphicx} \usepackage{subcaption} \usepackage{placeins} \usepackage[export]{adjustbox} \usepackage{caption} \usepackage{float} \usepackage[utf8]{inputenc} % \usepackage[T1]{fontenc} % \usepackage{hyperref} % \usepackage{url} % \usepackage{booktabs} % \usepackage{amsmath} \usepackage{amsfonts} % \usepackage{nicefrac} % \usepackage{microtype} % \usepackage[accsupp]{axessibility} % \usepackage{etoolbox} \newcommand{\repthanks}[1]{\textsuperscript{\ref{#1}}} \makeatletter \patchcmd{\maketitle} {\def\thanks} {\let\repthanks\repthanksunskip\def\thanks} {}{} \patchcmd{\@maketitle} {\def\thanks} {\let\repthanks\@gobble\def\thanks} {}{} \newcommand\repthanksunskip[1]{\unskip{}} \makeatother \begin{document} \pagestyle{headings} \mainmatter \def\ECCVSubNumber{} % \title{SKDCGN: Source-free Knowledge Distillation of Counterfactual Generative Networks using cGANs} % \titlerunning{SKDCGN} \author{Sameer Ambekar \orcidlink{0000-0002-8650-3180}\thanks{Equal contribution.\protect\label{contrib}} \and Matteo Tafuro \orcidlink{0000-0002-6167-2156}\repthanks{contrib} \and Ankit Ankit \orcidlink{0000-0002-9399-9209}\repthanks{contrib}\and \\Diego van der Mast\index{van der Mast, Diego} \orcidlink{0000-0002-0001-3069}\repthanks{contrib} \and Mark Alence \orcidlink{0000-0002-6622-5822}\repthanks{contrib} \and Christos Athanasiadis \orcidlink{0000-0003-4376-9066}} \authorrunning{S. Ambekar et al.} \institute{University of Amsterdam, Amsterdam, the Netherlands. \\ \email{ambekarsameer@gmail.com, tafuromatteo00@gmail.com, ankitnitt1721@gmail.com, diego.vandermast@student.uva.nl, mark.alence@gmail.com, c.athanasiadis@uva.nl }} \maketitle \begin{abstract} \justifying{With the usage of appropriate inductive biases, Counterfactual Generative Networks (CGNs) can generate novel images from random combinations of shape, texture, and background manifolds. These images can be utilized to train an invariant classifier, avoiding the widespread problem of deep architectures learning spurious correlations rather than meaningful ones. As a consequence, out-of-domain robustness is improved. However, the CGN architecture comprises multiple over-parameterized networks, namely BigGAN and U2-Net. Training these networks requires appropriate background knowledge and extensive computation. Since one does not always have access to the precise training details, nor does one always possess the necessary knowledge of counterfactuals, our work addresses the following question: Can we use the knowledge embedded in pre-trained CGNs to train a lower-capacity model, assuming black-box access (i.e., only access to the pretrained CGN model) to the components of the architecture? In this direction, we propose a novel work named SKDCGN that attempts knowledge transfer using Knowledge Distillation (KD). In our proposed architecture, each independent mechanism (shape, texture, background) is represented by a student 'TinyGAN' that learns from the pretrained teacher 'BigGAN'. We demonstrate the efficacy of the proposed method on state-of-the-art datasets such as ImageNet and MNIST, by using KD and appropriate loss functions. 
Moreover, as an additional contribution, our paper conducts a thorough study on the composition mechanism of the CGNs, to gain a better understanding of how each mechanism influences the classification accuracy of an invariant classifier. Code available at: \url{https://github.com/ambekarsameer96/SKDCGN}} \end{abstract} \section{Introduction} \label{sec:intro} Deep neural networks are prone to learning simple functions that fail to capture intricacies of data in higher-dimensional manifolds \cite{DBLP:journals/corr/abs-2110-02424}, which causes networks to struggle in generalizing to unseen data. In addition to spectral bias \cite{DBLP:journals/corr/abs-2110-02424} and shortcut learning, which are properties inherent to neural networks \cite{DBLP:journals/corr/abs-2004-07780}, spurious learned correlations are also caused by biased datasets. To this end, Counterfactual Generative Networks (CGNs), proposed by Sauer and Geiger \cite{DBLP:journals/corr/abs-2101-06046}, have been shown to generate novel images that mitigate this effect. The authors expose the causal structure of image generation and split it into three Independent Mechanisms (IMs) (object shape, texture, and background), to generate synthetic and \textit{counterfactual} images whereon an invariant classifier ensemble can be trained. The CGN architecture comprises multiple over-parameterized networks, namely BigGANs \cite{brock2019large} and U2-Nets \cite{DBLP:journals/corr/abs-2005-09007}, and its training procedure generally requires appropriate domain-specific expertise. Moreover, one does not always have access to the precise training details, nor do they necessarily possess the required knowledge of counterfactuals. Motivated by these observations, we propose \textit{Source-free Knowledge Distillation of Counterfactual Generative Networks} (SKDCGN), which aims to use the knowledge embedded in a pre-trained CGN to train a lower capacity model, assuming black-box access (i.e., only inputs and outputs) to the components of the source model. More specifically, we harness the idea of Knowledge Distillation (KD) \cite{DBLP:journals/corr/abs-2106-05237} to train a network comprising three (small) generative models, i.e. TinyGANs \cite{DBLP:journals/corr/abs-2009-13829}, each being responsible for a single independent mechanism. SKDCGN carries both practical and theoretical implications, and it is intended to: \begin{enumerate} \item Obtain a lightweight version of the CGN, reducing its computational cost and memory footprint. This is meant to (i) ease the generation of counterfactual datasets and hence encourage the development of robust and invariant classifiers, as well as (ii) potentially allowing the deployment of the model on resource-constrained devices. \item Explore whether we can \textit{learn} from a fully trained CGN and distill it to a less parameterized network, assuming that we do not have access to the training process of the model. \end{enumerate} Along the lines of the original paper, we demonstrate the ability of our model to generate counterfactual images on ImageNet-1k \cite{5206848} and Double-Colored MNIST \cite{DBLP:journals/corr/abs-2101-06046}. Furthermore, we compare our outputs to \cite{DBLP:journals/corr/abs-2101-06046} and a simple baseline in terms of out-of-distribution robustness on the original classification task. As an additional contribution, we conduct a study on the shape IM of the CGN. 
The paper is organized as follows: first, we present a brief literature survey in Section \ref{sec:related-work}; next, in Section \ref{sec:approach}, the SKDCGN is dissected; Section \ref{sec:exps-results} presents the experimental setup and the empirical results, which are finally discussed in Section \ref{sec:conclusion}.

\section{Related work}
\label{sec:related-work}
This section introduces the fundamental concepts and the related works that we use as a base for our SKDCGN.

\subsubsection{Counterfactual Generative Networks. }
The main idea of CGNs \cite{DBLP:journals/corr/abs-2101-06046} has already been introduced in Section \ref{sec:intro}. Nonetheless, to aid the understanding of our method for readers who are not familiar with the CGN architecture, we summarize its salient components in this paragraph and also provide the network diagram in Appendix \ref{app:cgn-architecture}, Figure \ref{fig:cgn-diagram}. The CGN consists of 4 backbones: (i) the part of the network responsible for the shape mechanism, those responsible for (ii) texture and (iii) background, and a (iv) composition mechanism that combines the previous three using a deterministic function. Given a noise vector $\mathbf{u}$ (sampled from a spherical Gaussian) and a label $y$ (drawn uniformly from the set of possible labels $\mathcal{Y}$) as input, (i) the shape is obtained from a BigGAN-deep-256 \cite{brock2019large}, whose output is subsequently passed through a U2-Net \cite{DBLP:journals/corr/abs-2005-09007} to obtain a binary mask of the object shape. The (ii) texture and (iii) background are obtained similarly, but the BigGAN's output does not need to be segmented by the U2-Net. Finally, the (iv) composition mechanism outputs the final counterfactual image $\mathbf{x}_{gen}$ using the following analytical function:
\begin{equation}
\label{eq:composition}
\mathbf{x}_{gen}=C(\mathbf{m}, \mathbf{f}, \mathbf{b})=\mathbf{m} \odot \mathbf{f}+(1-\mathbf{m}) \odot \mathbf{b},
\end{equation}
where $\mathbf{m}$ is the shape mask, $\mathbf{f}$ is the foreground (or texture), $\mathbf{b}$ is the background and $\odot$ denotes element-wise multiplication. More recently, \cite{khorram2022cycleconsistent} devised an approach that automatically generates visual counterfactuals (CFs) by learning a transformation that steers the latent space of generative models. Additionally, \cite{DBLP:journals/corr/abs-2109-14274} uses a deep model inversion approach that provides counterfactual explanations by examining the area of an image.

\subsubsection{Knowledge Distillation. }
\cite{44873} first proposed transferring the knowledge of a pre-trained cumbersome network (referred to as the \textit{teacher}) to a smaller model (the \textit{student}). This is possible because networks frequently learn low-frequency functions among other things, indicating that the learning capacity of the big network is not fully utilized \cite{DBLP:journals/corr/abs-2110-02424} \cite{DBLP:journals/corr/abs-2004-07780}. Traditional KD approaches (often referred to as \textit{black-box}) simply use the outputs of the large deep model as the teacher knowledge, but other variants have made use of activations, neurons or features of intermediate layers as the knowledge to guide the learning process \cite{kdref1,kdref2}. Existing methods like \cite{DBLP:journals/corr/abs-2009-13829} also make use of knowledge distillation for the task of image generation.
Our work is similar to theirs; however, they transfer the knowledge of a BigGAN trained on the ImageNet dataset to a TinyGAN. In contrast, in our work we transfer not just the knowledge of image generation but also the task of counterfactual generation from a BigGAN to a TinyGAN.

\subsubsection{Distilling GANs using KD. }
Given its high effectiveness for model compression, KD has been widely used in different fields, including visual recognition and classification, speech recognition, natural language processing (NLP), and recommendation systems \cite{kd-survey}. However, it is less studied for image generation. \cite{DBLP:journals/corr/abs-1902-00159} first applied KD to GANs. However, our project differs from theirs, as they consider \textit{unconditional} image generation, use less general (DCGAN \cite{dcgan}) architectures, and do not assume a black-box generator. Our setting is much more similar to that of \cite{DBLP:journals/corr/abs-2009-13829}, where a BigGAN is distilled to a network with 16$\times$ fewer parameters, assuming no access to the teacher's training procedure or parameters. Considering its competitive performance, we use the proposed architecture (TinyGAN) as the student model and use a modified version of their loss function (further details in Section \ref{sec:method-training}) to optimize our network.

\textbf{Source-free}: We term our method \textit{source-free} since we have access neither to the source data, nor to the source training details and procedure, nor to any knowledge about the counterfactuals; we only have access to the trained source models. This setting is similar to that of methods such as \cite{yang2021generalized} \cite{ding2022source}. For large diffusion models like Imagen \cite{saharia2022photorealistic} and DALL·E 2 \cite{https://doi.org/10.48550/arxiv.2204.13807}, the training process is usually extremely expensive in terms of computation, precise training details are lacking, and training is often not reproducible by academic groups; however, pretrained models are often available. These can be used to transfer knowledge to a smaller network and perform the same task with a model of lower capacity.

\section{Approach}
\label{sec:approach}
This section dives into the details of the SKDCGN architecture, focusing on the training and inference phases separately for ImageNet-1k and MNIST. In addition, we discuss the loss functions that were employed for Knowledge Distillation.

\subsection{SKDCGN}
Although transferring the knowledge of an entire CGN into a single generative model could drastically reduce the number of parameters, this strategy would compromise the whole purpose of CGNs, i.e. disentangling the three mechanisms and having control over each of them. Therefore, we opt to train a generative model for each individual component. As shown in the architecture diagram (Figure \ref{fig:arch_diagram}), we treat each IM backbone as a black-box teacher and aim to mimic its output by training a corresponding TinyGAN student. Note that this implies that in the case of the shape mechanism, a single generative model learns to mimic both the BigGAN and the U2-Net. We believe a TinyGAN should be capable of learning binary masks directly, removing the need for the U2-Net and reducing the model size even further. During inference, the outputs of the three students are combined into a final counterfactual image using the composition function defined in Equation \ref{eq:composition}.
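To make the inference step concrete, the following is a minimal PyTorch-style sketch of how the outputs of the three trained students could be combined; the latent dimensionality, the generator interface and all names are illustrative assumptions rather than the exact interface of our implementation.

\begin{verbatim}
import torch

@torch.no_grad()
def generate_counterfactual(g_shape, g_texture, g_background,
                            latent_dim=128, num_classes=1000):
    # One shared noise vector u, independently sampled labels y
    z = torch.randn(1, latent_dim)
    y = [torch.randint(num_classes, (1,)) for _ in range(3)]

    m = g_shape(z, y[0])        # (soft) binary shape mask
    f = g_texture(z, y[1])      # foreground / texture image
    b = g_background(z, y[2])   # background image

    # Deterministic composition: x_gen = m * f + (1 - m) * b
    return m * f + (1 - m) * b
\end{verbatim}

The composition itself contains no learnable parameters, which is why inference can be implemented independently of the distillation procedure.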
\begin{figure}[t] \includegraphics[width=\linewidth]{Images/final_architecture.pdf} \caption{\textit{Architecture of the SKDCGN.} During training, each independent mechanism serves as a black-box teacher model to train a corresponding student model. During inference, the outputs of the three trained TinyGANs are combined using a Composition Mechanism that returns the final counterfactual image.} \label{fig:arch_diagram} \end{figure} \subsubsection{Training: Distilling the knowledge of IMs. } \label{sec:method-training} To train SKDCGN, we utilize each IM backbone from the CGN architecture as a black-box teacher for the student network, as visualized in the training section of Figure \ref{fig:arch_diagram} (the backbones are BigGAN + U2-Net for \textit{shape}, BigGAN for \textit{texture}, and BigGAN for \textit{background}). As introduced in the \hyperref[sec:related-work]{Related work} section, \cite{DBLP:journals/corr/abs-2009-13829} proposed an effective KD framework for compressing BigGANs. As the IMs in CGNs rely on BigGANs, we utilize their proposed student architecture. For completeness, the details of the student architecture are reported in Appendix \ref{app:tinygan-architecture}, Figure \ref{fig:tinygan-generator}. We base our training objective on the loss function proposed by \cite{DBLP:journals/corr/abs-2009-13829}. Our full objective comprises multiple terms: (i) a pixel-wise distillation loss, (ii) an adversarial distillation loss, (iii) a feature-level distillation loss, and (iv) KL Divergence. In addition to introducing KL Divergence, we deviate from the original TinyGAN training objective by omitting the term that allows the model to learn from real images of the ImageNet dataset. This would inevitably compromise the quality of the generated counterfactuals. KL Divergence leads to entropy minimization between the teacher and student, which is why we propose its usage. The individual loss terms are dissected below as from \cite{DBLP:journals/corr/abs-2009-13829}: \begin{enumerate} \item \textit{Pixel-wise Distillation Loss}: To imitate the functionality of BigGAN for scaling generation to high-resolution, high-fidelity images, we minimize the pixel-level distance (L1) between the images generated by BigGAN and TinyGAN given the same input: \begin{equation} \mathcal{L}_{\text{KD\_pix}} = \mathbb{E}_{z \sim p(z), y \sim q(y)}[\|T(z,y) - S(z,y) \|_{1}] \label{pixelwise_loss} \end{equation} where $T$ represents the Teacher network, $S$ represents the Student network, $z$ is a latent variable drawn from the truncated normal distribution $p(z)$, and $y$ is the class label sampled from some categorical distribution $q(y)$. \item \textit{Adversarial Distillation Loss}: To promote sharper outputs, an adversarial loss is incorporated to make the outputs of $S$ indistinguishable from those of $T$. It includes a loss for the generator (Eq. \ref{eq:loss-adv-gen}) and one for the discriminator (Eq. \ref{eq:loss-adv-dis}): \begin{align} \mathcal{L}_{\text{KD\_G}} =& - \mathbb{E}_{z, y}[D(S(z,y), y)] \label{eq:loss-adv-gen}\\ \mathcal{L}_{\text{KD\_D}} =& - \mathbb{E}_{z, y}\left[max(0, 1 - D(T(z,y), y)) + max(0, 1 - D(S(z,y), y))\right] \label{eq:loss-adv-dis}, \end{align} where $z$ is the noise vector, $y$ is the class label, $T(z,y)$ is the image generated by the Teacher $T$, while $G$ and $D$ are -- respectively -- the generator and discriminator of the Student $S$. 
\item \textit{Feature Level Distillation Loss}: To further overcome the blurriness in the images produced by the Student network, the training objective also includes a feature-level distillation loss. More specifically, we take the features computed at each convolutional layer in the Teacher discriminator, and use a loss that encourages $S$ to generate images similar to those of $T$:
\begin{equation}
\mathcal{L}_{\text{KD\_feat}} = \mathbb{E}_{z, y}\left[\sum _{i} \alpha_{i}\left\|D_{i}(T(z,y),y) - D_{i}(S(z,y), y) \right\|_{1}\right]
\label{feature_loss}
\end{equation}
where $D_{i}$ represents the feature vector extracted from the $i^{th}$ layer of the discriminator and the corresponding weights are given by $\alpha_{i}$.
\item \textit{KL Divergence}: L1 alone cannot reduce the entropy between the teacher and student distributions. To improve the proposed method, we use KL Divergence in a similar fashion to \cite{asano2021extrapolating} for the task of knowledge distillation between real images drawn from a source distribution $P(x)$ and target images drawn from $Q(x)$. The KL divergence is defined as
\begin{equation}
\mathcal D_{\mathrm{KL}}(P \| Q)=\sum_{x \in \mathcal{X}} P(x) \log \left(\frac{P(x)}{Q(x)}\right)
\label{feature_loss_kl}
\end{equation}
which, applied to the teacher and student outputs, yields the loss
\begin{equation}
\mathcal{L}_{\text{KL}} = \sum_{x \in \mathcal{X}}-p_{x}^{t} \log p_{x}^{s}+p_{x}^{t} \log p_{x}^{t}
\label{eq:kl-loss}
\end{equation}
where $x$ ranges over the class labels, and $p^{t}$ and $p^{s}$ contain the temperature-scaled output softmax probabilities of the teacher and the student, respectively.
\end{enumerate}
To sum up, the student's generator ($G$) and discriminator ($D$) are respectively optimized using the following objectives (a code sketch of the combined generator objective is given at the end of this section):
\begin{align}
\mathcal{L}_{\text{G}} = & \mathcal{L}_{\text{KD\_feat}} + \lambda_1 \mathcal{L}_{\text{KD\_pix}} + \lambda_2\mathcal{L}_{\text{KD\_G}} \,(\;+\;\mathcal{L}_{\text{KL}}\,)\\
\mathcal{L}_{\text{D}} = & \mathcal{L}_{\text{KD\_D}}
\end{align}
where $\lambda_1$ and $\lambda_2$ are the regularization terms mentioned in \cite{DBLP:journals/corr/abs-2009-13829}, and the KL divergence term ($\mathcal{L}_{\text{KL}}$) is only used in the enhanced version of SKDCGN.

Implementing the SKDCGN architecture requires training a TinyGAN for each Independent Mechanism of the CGN (see Fig. \ref{fig:arch_diagram}). The KD training procedure, however, requires training data. Hence, prior to training, 1000 images per class (totalling 1 million samples) are generated using the IM backbones extracted from the pre-trained CGN (as provided by Sauer and Geiger \cite{DBLP:journals/corr/abs-2101-06046}). Finally, note that the original CGN architecture (illustrated in Appendix \ref{app:cgn-architecture}, Figure \ref{fig:cgn-diagram}) comprises another BigGAN trained on ImageNet-1k. It is unrelated to the three Independent Mechanisms and provides primary training supervision via a reconstruction loss. We discard this component of the architecture for two main reasons: (i) we do not have a dataset of counterfactuals on which such a GAN could be trained; and (ii) we argue that this additional knowledge is already embedded in the backbones of a pre-trained CGN.

\subsubsection{Inference: generating counterfactuals. }
Once the three student networks are trained, their outputs are combined during inference, akin to \cite{DBLP:journals/corr/abs-2101-06046}, using the analytical function of Equation \ref{eq:composition}. Since the composition function is deterministic, we treat inference as a task separate from training.
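To make the combined generator objective concrete, the following PyTorch-style sketch assembles the four terms described above. The discriminator feature hook (\texttt{disc.features}), the use of equal feature weights, the softmax normalization chosen for the KL term and the default coefficients are illustrative assumptions, not a faithful reproduction of our implementation.

\begin{verbatim}
import torch
import torch.nn.functional as F

def generator_loss(student, disc, teacher_img, z, y,
                   lam1=1.0, lam2=1.0, tau=1.0, use_kl=True):
    fake = student(z, y)

    # (1) pixel-wise distillation: L1 between teacher and student outputs
    l_pix = F.l1_loss(fake, teacher_img)

    # (2) adversarial term for the student generator (hinge style)
    l_adv = -disc(fake, y).mean()

    # (3) feature-level distillation on the discriminator's intermediate
    #     layers (equal weights for simplicity)
    feats_t = disc.features(teacher_img, y)   # assumed feature hook
    feats_s = disc.features(fake, y)
    l_feat = sum(F.l1_loss(fs, ft) for fs, ft in zip(feats_s, feats_t))

    # (4) optional KL divergence between temperature-scaled softmax
    #     distributions of teacher and student outputs (one possible
    #     realization of the KL term)
    l_kl = 0.0
    if use_kl:
        p_t = F.softmax(teacher_img.flatten(1) / tau, dim=1)
        log_p_s = F.log_softmax(fake.flatten(1) / tau, dim=1)
        l_kl = F.kl_div(log_p_s, p_t, reduction="batchmean")

    return l_feat + lam1 * l_pix + lam2 * l_adv + l_kl
\end{verbatim}

The discriminator is optimized separately with its own adversarial term ($\mathcal{L}_{\text{KD\_D}}$).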
\section{Experiments and results}
\label{sec:exps-results}
This section defines our experimental setup, then proceeds to present the results. First, we test SKDCGN -- as defined in the \hyperref[sec:approach]{Approach} section -- on both ImageNet-1k and MNIST (Section \ref{sec:exps-skdcgn}), and based on the observed findings we make some changes to the proposed architecture to improve the quality of the results (Section \ref{sec:exps-improvement}). Due to computational constraints we test these improvements on a smaller dataset, namely the double-colored variant of MNIST \cite{726791}. Finally, as an additional contribution, we conduct a thorough study on the composition mechanism, to gain a better understanding of how each mechanism influences the classification accuracy of an invariant classifier. We present the results of such a study in Section \ref{sec:exps-comp-mechanism}.

\subsection{Datasets}
\paragraph{ImageNet-1k.} The ImageNet-1k ILSVRC dataset \cite{5206848} contains 1,000 classes and consists of 1.2 million training images, 50,000 validation images and 100,000 test images. Images were resized to $256\times256$ to maintain consistent experiments and to allow direct comparisons with the original results of \cite{DBLP:journals/corr/abs-2101-06046}.

\paragraph{Double-colored MNIST.} We use the \textit{double-colored} MNIST dataset proposed by Sauer and Geiger in the original CGN paper \cite{DBLP:journals/corr/abs-2101-06046}. This is a variant of the MNIST dataset where both the digits and the background are independently colored. It consists of 60,000 $28\times28$ images of the 10 digits, along with a test set of 10,000 images.

\subsection{Baseline Model: CGN with generator replaced by TinyGAN generator}
The SKDCGN is compared with a modified version of the original CGN architecture, where each BigGAN has been replaced by the generator model of a TinyGAN. Training this baseline using the procedure described by \cite{DBLP:journals/corr/abs-2009-13829}, omitting KD, allows for rigorous comparisons that emphasize the effectiveness of the knowledge distillation process. Further training details are provided in Appendix \ref{app:baseline-training}.
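As a side note, the double coloring described above can be illustrated with a short snippet that assigns independent random colors to the digit and to the background of a grayscale MNIST image; this is only an illustrative approximation, not the generation script of \cite{DBLP:journals/corr/abs-2101-06046}.

\begin{verbatim}
import numpy as np

def double_color(digit, rng=np.random.default_rng(0)):
    # digit: (28, 28) grayscale MNIST image with values in [0, 255]
    fg = rng.uniform(0.0, 1.0, size=3)   # random digit color (RGB)
    bg = rng.uniform(0.0, 1.0, size=3)   # independent background color
    alpha = (digit.astype(np.float32) / 255.0)[..., None]  # (28, 28, 1)
    return alpha * fg + (1.0 - alpha) * bg                 # (28, 28, 3)
\end{verbatim}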
\subsection{Results of SKDCGN} \label{sec:exps-skdcgn} \begin{figure}[t] \begin{subfigure}{\textwidth} \centering \hspace{6mm} \textit{ImageNet-1k} \hspace{36mm} \textit{Double-colored MNIST}\\ \includegraphics[width=0.48\linewidth]{Images/ims-outputs/imagenet/shape-left.png} \hfill \includegraphics[width=0.48\linewidth]{Images/ims-outputs/mnist/shape-left.png} \caption{\textit{Shape} mechanism.} \label{fig:shape_results} \end{subfigure} \\ \begin{subfigure}{\textwidth} \centering \hspace{6mm} \textit{ImageNet-1k} \hspace{36mm} \textit{Double-colored MNIST}\\ \includegraphics[width=0.48\linewidth]{Images/ims-outputs/imagenet/fg-left.png} \hfill \includegraphics[width=0.48\linewidth]{Images/ims-outputs/mnist/fg-left.png} \caption{\textit{Texture} mechanism.} \label{fig:fg_results} \end{subfigure} \\ \begin{subfigure}{\textwidth} \centering \hspace{6mm} \textit{ImageNet-1k} \hspace{36mm} \textit{Double-colored MNIST}\\ \includegraphics[width=0.48\linewidth]{Images/ims-outputs/imagenet/bg-left.png} \hfill \includegraphics[width=0.48\linewidth]{Images/ims-outputs/mnist/bg-left.png} \caption{\textit{Background} mechanism.} \label{fig:bg_results} \end{subfigure} \caption{A comparison of images (on both ImageNet-1k and double-colored MNIST) generated by the CGN backbones and those generated by the corresponding SKDCGN's TinyGAN (given the same input), for each independent mechanism.} \label{fig:im-results_t_b} \end{figure} The proposed model was firstly trained and tested on ImageNet-1k. To further validate our method, we repeated the training procedure on MNIST. The qualitative results are collected in Figure \ref{fig:im-results_t_b} and demonstrate that TinyGANs can closely approximate the output of each IM. While this is true for both datasets, the effectiveness of our method is especially visible in the case of MNIST. It is likely the case that the reduced capacity of the TinyGANs (compared to the original CGN backbones) is sufficient to decently model the underlying data distribution. ImageNet-1k, on the other hand, reveals more apparent (though still acceptable) discrepancies between the images, especially for the \textit{texture} IM. However, careful and extensive experiments revealed that the three TinyGANs could not generalize when random noise was given to the generator, i.e., they could not produce results beyond the test set. This might be due to a number of reasons. First, the compromised generalization capabilities of each IM's TinyGAN could be caused by their reduced network capacity. Furthermore, each TinyGAN was trained on all 1000 classes of ImageNet-1K, as opposed to Chang and Lu's choice of limiting the training data to the 398 animal labels \cite{DBLP:journals/corr/abs-2009-13829}. Finally, we generate the test samples using the test noise instead of random noise, since we hypothesize that the student networks only learn the manifolds that the teacher networks have been trained on. Additional experiments are required to analyze whether samples generated using random noise are found along the same manifold; unfortunately, we were hindered by the limited time frame allocated for this project, hence we leave this question open for future works. \begin{figure}[t!] 
\centering \begin{subfigure}{\textwidth} \includegraphics[width=\linewidth]{Images/kl_l1/30-test_mask.png} \caption{\textit{Shape} mechanism.} \label{fig:mnist_mask_kl_div_fg} \end{subfigure} \\ \begin{subfigure}{\textwidth} \includegraphics[width=\linewidth]{Images/kl_l1/29-test.png} \caption{\textit{Texture} mechanism.} \label{fig:mnist_mask_kl_div_bg} \end{subfigure} \\ \begin{subfigure}{\textwidth} \includegraphics[width=\linewidth]{Images/kl_l1/30-test_bg.png} \caption{\textit{Background} mechanism.} \label{fig:mnist__mask_kl_div_mask} \end{subfigure} \caption{A comparison of double-colored MNIST images generated by the CGN backbones and those generated by the corresponding SKDCGN's TinyGAN (given the same input) for each IM. Here, SKDCGN was tuned such that KL divergence is minimized between the teacher and student networks, and the L1 loss is multiplied with the activation of every layer.} \label{fig:mnist_kl_div} \end{figure} \begin{figure}[t!] \centering \begin{subfigure}{0.47\textwidth} \includegraphics[width=\linewidth]{Images/im_kl/1-sample.png} \caption{} \end{subfigure} \hfill \begin{subfigure}{0.47\textwidth} \includegraphics[width=\linewidth]{Images/im_kl/23-sample.png} \caption{} \end{subfigure} \caption{(a) Shape masks obtained after the \textit{first} epoch of SKDCGN training on ImageNet-1k, using KL divergence. (b) Shape masks obtained after the 23$^{\text{rd}}$ epoch of SKDCGN training on ImageNet-1k, \textit{without} KL divergence. Evidently, KL enhances the quality of the masks from the first epoch, whereas its absence compromises the results even at a later stage of training.} \label{fig:Imagenet_mask_kl_div} \end{figure} \subsection{Improving the SKDCGN model} \label{sec:exps-improvement} The results presented in the previous section reveal that the outputs are noisy and ambiguous in nature when knowledge distillation is performed using the pre-trained models provided by Sauer and Geiger \cite{DBLP:journals/corr/abs-2101-06046} (note the artifacts in the SKDCGN's outputs of Fig. \ref{fig:im-results_t_b}, especially those trained on ImageNet-1k). This statement was supported by an interesting yet unexpected result of the study on the composition mechanism (refer to Section \ref{sec:exps-comp-mechanism}): it was observed that modifying Equation \ref{eq:composition} such that the shape mask $\mathbf{m}$ is multiplied with a weight factor of 0.75 (i.e., setting the transparency of the shape mask to 75\%), yielded an accuracy increase of the CGN's invariant classifier. The findings of this experiment -- conducted on the double-colored MNIST dataset -- suggest that the mask component is noisy in nature, leading to ambiguities in the decision boundaries during the classification of several digits. In light of this new hypothesis, we attempt to use the \textit{Kullback–Leibler} (KL) divergence to improve the visual quality of the outputs\footnote{It is noteworthy that other techniques were tested in the attempt to improve the visual quality of the results. Although they did not prove to be as beneficial, they are described in Appendix \ref{sec:improve_skdcgn}.}. Since KL leads to entropy minimization between the teacher and student networks, we deem such a technique adequate for the task at hand. Moreover, the choice of using KL was encouraged by the work of Asano and Saeed \cite{asano2021extrapolating}, which proved the suitability of the measure in this context. Concretely, the KL Divergence loss (as defined in Eq. 
\ref{eq:kl-loss}) was included in the overall generator loss $\mathcal{L}_{\text{G}}$ defined in Section \ref{sec:method-training}. First, the modified SKDCGN was tested on the double-colored MNIST dataset. As depicted in Figure \ref{fig:mnist_kl_div}, the introduction of KL divergence improves the visual fidelity of SKDCGN's \textit{background} and \textit{texture} IMs, while the quality of the \textit{shape} masks seems to diminish after a few epochs. In the context of ImageNet-1k, by contrast, this approach proved beneficial for the shape mechanism as well. The shape masks appeared more natural and consistent from the first epoch, whereas the absence of KL yielded noisy masks even at a later stage of training (refer to Figure \ref{fig:Imagenet_mask_kl_div}).

\subsection{Additional results: study of the shape IM}
\label{sec:exps-comp-mechanism}

\begin{table}[t]
\centering
\begin{tabular}{lrrr}
\toprule
 & \;\;Noise & \;\;Rotation & \;\;Transparency\\
\midrule
Train Accuracy & $99.9$ & $99.1$ & $94.7$ \\
Test Accuracy & $14.96$ & $13.51$ & $\mathbf{58.86}$ \\
\bottomrule
\end{tabular}
\caption{Results of the invariant classifier for the analysis of the shape IM. The classifier has been trained to predict whether images are CGN-generated or real. The training examples contain counterfactuals whose shape mechanism has been tuned with one of the three transformations indicated in the table (noise, rotation, transparency -- refer to Sec.~\ref{sec:exps-comp-mechanism} for further details).}
\label{tab:shape_exp_results}
\end{table}

\begin{figure}[t]
\centering
\begin{subfigure}{0.25\textwidth}
\includegraphics[width=\linewidth]{Images/Shape_exp/noise/1_46000_mask.png}
\caption{}
\end{subfigure}
\hfill
\begin{subfigure}{0.25\textwidth}
\includegraphics[width=\linewidth]{Images/Shape_exp/rot/1_46000_mask.png}
\caption{}
\end{subfigure}
\hfill
\begin{subfigure}{0.25\textwidth}
\includegraphics[width=\linewidth]{Images/Shape_exp/trans/1_46000_mask_2.png}
\caption{}
\end{subfigure}
\caption{Shape masks obtained after (a) addition of Gaussian random noise, (b) application of random rotation and (c) decrease of the mask opacity (i.e., lowering its transparency to 75\%).}
\label{fig:shape_exp}
\end{figure}

As an additional contribution, we conduct a thorough study on the composition mechanism, to gain a better understanding of how the mechanisms influence the classification accuracy of an invariant classifier (i.e., a classifier that predicts whether an image is CGN-generated or real). Due to the limited time at our disposal, we focused on the mechanism that we deem most important in the decision-making of such a classifier, namely the \textit{shape}. To evaluate the effects of the shape IM we trained several (original) CGN models on the double-colored MNIST dataset; we tuned the resulting shape masks prior to the counterfactual image generation (governed by the composition mechanism of Equation \ref{eq:composition}) and used the generated images to train an invariant classifier. More specifically, we experimented with (i) the addition of Gaussian noise in the shape mask, (ii) random rotation of the mask, and (iii) multiplying the mask $\mathbf{m}$ in the composition mechanism (Eq. \ref{eq:composition}) with a factor smaller than 1 (in other words, lowering the opacity of the shape mask). A transparency of 75\% (hence a weight factor of $0.75$) was experimentally found to be most beneficial for the accuracy of the classifier.
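For concreteness, transformation (iii) presumably amounts to replacing the composition of Equation \ref{eq:composition} with the weighted variant
\begin{equation*}
\mathbf{x}_{gen} = w\,\mathbf{m} \odot \mathbf{f} + (1 - w\,\mathbf{m}) \odot \mathbf{b}, \qquad w = 0.75,
\end{equation*}
where $w$ denotes the mask weight (our notation): the texture is weighted by the dimmed mask $w\,\mathbf{m}$, while the background receives the complementary weight $1 - w\,\mathbf{m}$.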
The influence of the three transformations on the invariant classifier is quantified -- in terms of accuracy -- in Table \ref{tab:shape_exp_results}; sample shape masks generated from each transformation are displayed in Figure \ref{fig:shape_exp}. It is apparent from the test accuracy values that Gaussian noise and random rotations do not improve the classifier; on the contrary, they degrade its test accuracy to values below 15\%. This is most likely the result of overfitting on the training set, as supported by the \textit{train} accuracy values. On the other hand, lowering the opacity of the mask substantially boosts the test accuracy, improving the previous results by roughly a factor of four. It is noteworthy that the masks obtained using the transparency adjustment are more similar to those produced by regular CGNs (see Figure \ref{fig:shape_exp}). The other transformations, in contrast, result in markedly different mask shapes. As such, they can potentially be used to make classifiers more robust when mixed with regular data during training. Because this is an extensive topic, we believe it warrants further research.

\section{Discussion and conclusion}
\label{sec:conclusion}
With the prevalence of heavily parameterized architectures such as BigGANs, and with the advent of limited-access models like the trending DALL·E 2, source-free compression becomes a growing necessity. In this paper we explored the possibility of obtaining a lightweight version of the CGN network, assuming that we do not have access to the training process of the model. More specifically, we treat the backbone of each independent mechanism (shape, texture and background) as a black box, and then use KD to transfer the knowledge of the pre-trained cumbersome networks to simple TinyGANs. SKDCGN achieves a remarkable compression of the overall network: it models the shape mechanism -- initially controlled by a BigGAN (55.9M parameters) and a U2-Net (44M parameters) -- using a single TinyGAN (6.4M parameters); similarly, it replaces the BigGANs responsible for the texture and background IMs with TinyGANs, and discards the fourth BigGAN of the original CGN network that provides primary training supervision via the reconstruction loss. This translates into four BigGANs and one U2-Net (55.9M$\times$4 + 44M parameters, totalling 267.6M) being replaced with three simple TinyGANs (6.4M parameters each, i.e., 19.2M parameters in total). Despite the significant compression, we demonstrate the ability of our model to generate counterfactual images on the ImageNet-1k and double-colored MNIST datasets (see Figure \ref{fig:im-results_t_b}). When trained on the latter, SKDCGN's network capacity proved sufficient to model the simple data distribution. When trained on the former, the proposed method exhibited a remarkable ability to mimic the original shape and background generations, while the texture mechanism suffered more from the reduction in size. This finding reveals great potential for future work on tuning the distillation (and hence enhancing the synthesis) of the texture images, for instance by including data augmentation in the training procedure. Given the obtained results, we attempt to limit the presence of noisy and ambiguous artifacts by matching the output distributions of the teacher and student networks. We introduce a new term into the knowledge distillation loss, namely the
KL divergence, which we find to enhance the visual quality of some IMs on both ImageNet-1k and MNIST. Finally, we conduct a study on the composition mechanism to gain a better understanding of how the \textit{shape} IM influences the classification accuracy of an invariant classifier. Though other adjustments were tested, giving a lower weight to the shape mask $\mathbf{m}$ is the change that clearly boosts the classifier performance.

\section{Future work}
To conclude, the experimental findings of SKDCGN show that, by means of knowledge distillation, one can transfer the capabilities of a cumbersome network to a lower-capacity model while still maintaining competitive performance. Although this paper unveils its potential, SKDCGN requires further research, which we encourage other researchers to undertake. In addition to the suggestions offered throughout the sections, possible avenues of research include, but are not limited to: improving the image generation process by using higher-order activation functions, since the utilized datasets consist of rich image data; improving the teacher-student architecture by introducing additional loss functions; using a learnable, neural network-based composition function instead of an analytical expression.

\section*{Acknowledgments}
We would like to express our sincere gratitude to Prof. dr. Efstratios Gavves and Prof. Wilker Aziz for effectively organizing the \textit{Deep Learning II} course at the University of Amsterdam, which is the main reason this paper exists. We are thankful to our supervisor, Christos Athanasiadis, for his precious guidance throughout the project. Finally, we also thank the former Program Director of the MSc. Artificial Intelligence, Prof. dr. Cees G.M. Snoek, and the current Program Manager, Prof. dr. Evangelos Kanoulas, for effectively conducting the Master's program in Artificial Intelligence at the University of Amsterdam.

\clearpage
\appendix
\section*{Appendix}
\section{Architecture details of the different models}
This section contains the architectural details of the different models used in the proposed method. It reviews the models on which we base our work (i.e., the CGN network \cite{DBLP:journals/corr/abs-2101-06046}, Sec.~\ref{app:cgn-architecture}, and the TinyGAN model \cite{DBLP:journals/corr/abs-2009-13829}, Sec.~\ref{app:tinygan-architecture}) and also presents the baseline model (Sec.~\ref{app:baseline-model}).

\subsection{Original CGN architecture}
\label{app:cgn-architecture}
This section contains a diagram of the original CGN architecture, as presented in \cite{DBLP:journals/corr/abs-2101-06046}.
\begin{figure}[h]
\centering
\includegraphics[width=0.7\linewidth]{Images/CGN_architecture.pdf}
\caption{CGN architecture diagram. Retrieved from \cite{DBLP:journals/corr/abs-2101-06046}.}
\label{fig:cgn-diagram}
\end{figure}
Figure \ref{fig:cgn-diagram} illustrates the CGN architecture. The network is split into four mechanisms: the shape mechanism $f_{shape}$, the texture mechanism $f_{text}$, the background mechanism $f_{bg}$, and the composer $C$. Components with trainable parameters are shown in blue, components with fixed parameters in green. The primary supervision is provided by an unconstrained conditional GAN (cGAN) via the reconstruction loss $\mathcal{L}_{rec}$. The cGAN is only used for training, as indicated by the dotted lines.
Each mechanism takes as input the noise vector $\mathbf{u}$ (sampled from a spherical Gaussian) and the label $y$ (drawn uniformly from the set of possible labels $\mathcal{Y}$) and minimizes its respective loss ($\mathcal{L}_{shape}$, $\mathcal{L}_{text}$, and $\mathcal{L}_{bg}$). To generate a set of counterfactual images, we sample $\mathbf{u}$ and then independently sample $y$ for each mechanism.

\subsection{TinyGAN architecture}
\label{app:tinygan-architecture}
\begin{figure}[t]
\centering
\begin{subfigure}{0.3\textwidth}
\centering
\includegraphics[width=1\linewidth]{Images/Student_G.png}
\caption{Student Generator $G$ \cite{DBLP:journals/corr/abs-2009-13829}}
\label{fig:student generator}
\end{subfigure}
\begin{subfigure}{0.35\textwidth}
\centering
\includegraphics[width=1\linewidth]{Images/Res_S.png}
\caption{A Residual Block in $G$ \cite{DBLP:journals/corr/abs-2009-13829}}
\label{fig:residual block}
\end{subfigure}
\caption{Architecture of the TinyGAN (student) generator}
\label{fig:tinygan-generator}
\end{figure}
This section provides a brief overview of the TinyGAN architecture. For more details, refer to \cite{DBLP:journals/corr/abs-2009-13829}.
\paragraph{Generator.} As shown in Figure \ref{fig:tinygan-generator}, TinyGAN comprises a ResNet \cite{resnet}-based generator with class-conditional BatchNorm \cite{batchnorm1,batchnorm2}. To keep a tight computation budget, it does not adopt attention-based \cite{self-attention} or progressive-growing mechanisms \cite{progressing-growing}. To substantially reduce the model size compared to BigGAN, it:
\begin{itemize}
\item Relies on using fewer channels;
\item Replaces standard convolution by depthwise separable convolution;
\item Adopts a simpler way to introduce class conditions.
\end{itemize}
Overall, TinyGAN's generator has 16$\times$ fewer parameters than BigGAN's generator.
\vspace{-0.5em}
\paragraph{Discriminator.} Following \cite{ref-discr-1,DBLP:journals/corr/abs-1802-05957}, \cite{DBLP:journals/corr/abs-2009-13829} opt for a spectrally normalized discriminator and introduce the class condition via projection. Instead of utilizing complicated residual blocks, they simply stack multiple convolutional layers with stride as used in DCGAN \cite{dcgan}, which greatly reduces the number of parameters. Overall, TinyGAN's discriminator has 10$\times$ fewer parameters than BigGAN's discriminator.

\subsection{Baseline model}
\label{app:baseline-model}
The baseline is a standard CGN architecture whose BigGANs have been replaced with TinyGANs. Due to the need for a pre-trained model that (i) supervises the CGN training using a reconstruction loss and (ii) serves as the initialization of the IM GANs, a TinyGAN was trained from scratch using the KD strategy described in \cite{DBLP:journals/corr/abs-2009-13829}. Section \ref{app:baseline-details} details the training procedure and presents qualitative results of both the newly-trained TinyGAN and the baseline model.
\begin{figure}[t!]
\begin{subfigure}{\textwidth} \centering \hspace{6mm} \textit{ImageNet-1k} \hspace{36mm} \textit{Double-colored MNIST}\\ \includegraphics[width=0.48\linewidth]{Images/ims-outputs/imagenet/shape-right.png} \hfill \includegraphics[width=0.48\linewidth]{Images/ims-outputs/mnist/shape-right.png} \caption{\textit{Shape} mechanism.} \label{fig:shape_results} \end{subfigure} \\ \begin{subfigure}{\textwidth} \centering \hspace{6mm} \textit{ImageNet-1k} \hspace{36mm} \textit{Double-colored MNIST}\\ \includegraphics[width=0.48\linewidth]{Images/ims-outputs/imagenet/fg-right.png} \hfill \includegraphics[width=0.48\linewidth]{Images/ims-outputs/mnist/fg-right.png} \caption{\textit{Texture} mechanism.} \label{fig:fg_results} \end{subfigure} \\ \begin{subfigure}{\textwidth} \centering \hspace{6mm} \textit{ImageNet-1k} \hspace{36mm} \textit{Double-colored MNIST}\\ \includegraphics[width=0.48\linewidth]{Images/ims-outputs/imagenet/bg-right.png} \hfill \includegraphics[width=0.48\linewidth]{Images/ims-outputs/mnist/bg-right.png} \caption{\textit{Background} mechanism.} \label{fig:bg_results} \end{subfigure} \caption{A comparison of images generated by the CGN backbones and those generated by the corresponding SKDCGN's TinyGAN (given the same input), for each independent mechanism. We train on both ImageNet-1k (left images) and double-colored MNIST datasets (right images).} \label{fig:mnist_ims} \end{figure} \section{Additional results of SKDCGN's IMs} This section expands Section 4.3 of the main paper and contains more results obtained from each SKDCGN's IM, using both ImageNet-1k and double-colored MNIST datasets. More specifically, we compare the output of each CGN backbone with that of the corresponding SKDCGN's TinyGAN, given the same input. Please refer to Figure \ref{fig:mnist_ims}. \section{Baseline Model} \label{app:baseline-details} The baseline model is a modified version of the original CGN architecture, where each BigGAN has been replaced by the generator model of a TinyGAN. Training this baseline using the procedure described by \cite{DBLP:journals/corr/abs-2009-13829}, omitting KD, allows for rigorous comparisons that emphasize the effectiveness of the knowledge distillation process. In this section we provide training details, and collect sample outputs of the trained model. \begin{figure}[t!] \begin{subfigure}{\textwidth} \includegraphics[width=\linewidth]{Images/1-test.png} \caption{A comparison of images generated by BigGAN and the TinyGAN. Images in top row are produced by BigGAN, while those in bottom row are by SKDCGN given the same input after $1^{st}$ epoch.} \label{fig:tinygan_results_1} \end{subfigure} \begin{subfigure}{\textwidth} \includegraphics[width=\linewidth]{Images/18-test.png} \caption{A comparison of images generated by BigGAN and the TinyGAN. Images in top row are produced by BigGAN, while those in bottom row are by SKDCGN given the same input after $18^{th}$ epoch.} \label{fig:tinygan_results_18} \end{subfigure} \caption{A comparison of images generated by BigGAN and the TinyGAN. Images in top row are produced by BigGAN, while those in bottom row are by SKDCGN given the same input} \label{tinygan_results} \end{figure} \subsection{Training Details} \label{app:baseline-training} The training procedure of a CGN requires a pre-trained GAN to provide primary supervision via the reconstruction loss. However, the original TinyGAN was only trained on only animal classes, hence the publicly-available model could not be used for our baseline. 
In order to consistently use the same dataset for all the experiments, we re-trained a TinyGAN from scratch (as described in \cite{DBLP:journals/corr/abs-2009-13829}) on all classes of ImageNet-1k. The images generated by TinyGAN are visualized in Appendix \ref{app:pretrained-tinygan-gen-outputs}. The images generated for each Independent Mechanism using our baseline model can be seen in \ref{app:baseline-outputs}. Apart from this, we additionally generated the counterfactuals using the baseline model which are shown in Appendix \ref{app:baseline-counterfactuals}. \begin{figure}[ht!] \centering \begin{tabular}{lllll} $\Tilde{m}$ & \includegraphics[width=.18\linewidth]{Images/Baseline_IMs/251_1_premask_ep_0000000.png} \hspace{-0.5em} \includegraphics[width=.18\linewidth]{Images/Baseline_IMs/251_1_premask_ep_0300000.png} \hspace{-0.5em} \includegraphics[width=.18\linewidth]{Images/Baseline_IMs/251_1_premask_ep_0600000.png} \hspace{-0.5em} \includegraphics[width=.18\linewidth]{Images/Baseline_IMs/251_1_premask_ep_0900000.png} \hspace{-0.5em} \includegraphics[width=.18\linewidth]{Images/Baseline_IMs/251_1_premask_ep_1200000.png} \vspace{-0.31em}\\ \vspace{-0.34em} $m$ & \includegraphics[width=.18\linewidth]{Images/Baseline_IMs/251_2_mask_ep_0000000.png} \hspace{-0.49em} \includegraphics[width=.18\linewidth]{Images/Baseline_IMs/251_2_mask_ep_0300000.png} \hspace{-0.49em} \includegraphics[width=.18\linewidth]{Images/Baseline_IMs/251_2_mask_ep_0600000.png} \hspace{-0.48em} \includegraphics[width=.18\linewidth]{Images/Baseline_IMs/251_2_mask_ep_0900000.png} \hspace{-0.5em} \includegraphics[width=.18\linewidth]{Images/Baseline_IMs/251_2_mask_ep_1200000.png}\\ \vspace{-0.33em} $f$ & \includegraphics[width=.18\linewidth]{Images/Baseline_IMs/251_3_texture_ep_0000000.png} \hspace{-0.49em} \includegraphics[width=.18\linewidth]{Images/Baseline_IMs/251_3_texture_ep_0300000.png} \hspace{-0.49em} \includegraphics[width=.18\linewidth]{Images/Baseline_IMs/251_3_texture_ep_0600000.png} \hspace{-0.48em} \includegraphics[width=.18\linewidth]{Images/Baseline_IMs/251_3_texture_ep_0900000.png} \hspace{-0.5em} \includegraphics[width=.18\linewidth]{Images/Baseline_IMs/251_3_texture_ep_1200000.png}\\ \vspace{-0.33em} $b$ & \includegraphics[width=.18\linewidth]{Images/Baseline_IMs/251_4_bgs_ep_0000000.png} \hspace{-0.49em} \includegraphics[width=.18\linewidth]{Images/Baseline_IMs/251_4_bgs_ep_0300000.png} \hspace{-0.49em} \includegraphics[width=.18\linewidth]{Images/Baseline_IMs/251_4_bgs_ep_0600000.png} \hspace{-0.48em} \includegraphics[width=.18\linewidth]{Images/Baseline_IMs/251_4_bgs_ep_0900000.png} \hspace{-0.5em} \includegraphics[width=.18\linewidth]{Images/Baseline_IMs/251_4_bgs_ep_1200000.png}\\ \vspace{-0.41em} $x_{gen}$ & \includegraphics[width=.18\linewidth]{Images/Baseline_IMs/251_5_gen_ims_ep_0000000.png} \hspace{-0.49em} \includegraphics[width=.18\linewidth]{Images/Baseline_IMs/251_5_gen_ims_ep_0300000.png} \hspace{-0.49em} \includegraphics[width=.18\linewidth]{Images/Baseline_IMs/251_5_gen_ims_ep_0600000.png} \hspace{-0.48em} \includegraphics[width=.18\linewidth]{Images/Baseline_IMs/251_5_gen_ims_ep_0900000.png} \hspace{-0.5em} \includegraphics[width=.18\linewidth]{Images/Baseline_IMs/251_5_gen_ims_ep_1200000.png} \end{tabular} \caption{Individual IM Outputs after training for baseline. From top to bottom: $m$, $\Tilde{m}$, $f$, $b$, $x_{gen}$. 
From left to right: at the start of training, after epoch $300k{th}$, epoch $600k^{th}$, epoch $900k^{th}$, and epoch $1.2million^{th}$} \label{fig:IMs_baseline_2} \end{figure} \subsubsection{Generated outputs of TinyGAN trained on ImageNet-1k} \label{app:pretrained-tinygan-gen-outputs} A TinyGAN was trained using all 1000 classes of the ImageNet-1k dataset. Training details are provided by \cite{DBLP:journals/corr/abs-2009-13829}. Although the original paper trains the model for 1.2 million epochs, we are forced to restrict the amount of iterations due to computational constraints. After distilling the knowledge of a BigGAN for 18 epochs, our TinyGAN generates reasonable images, as seen in Figure \ref{fig:tinygan_results_18}. To compare the image generation we have also presented images generated after the first epoch as well \ref{fig:tinygan_results_1}. It can be observed that if we further train the model, it could produce images better in quality. Note that animal classes are better captured by the model: this is inline with the findings of \cite{DBLP:journals/corr/abs-2009-13829}. \begin{figure}[ht!] \centering \begin{tabular}{lllll} \includegraphics[width=.18\linewidth]{Images/Baseline_Counterfactuals/test_0000000_x_gen.jpg} \hspace{-0.49em} \includegraphics[width=.18\linewidth]{Images/Baseline_Counterfactuals/test_0000019_x_gen.jpg} \hspace{-0.49em} \includegraphics[width=.18\linewidth]{Images/Baseline_Counterfactuals/test_0000070_x_gen.jpg} \hspace{-0.49em} \includegraphics[width=.18\linewidth]{Images/Baseline_Counterfactuals/test_0000096_x_gen.jpg} \hspace{-0.5em} \includegraphics[width=.18\linewidth]{Images/Baseline_Counterfactuals/test_0000142_x_gen.jpg} \vspace{-0.31em}\\ \vspace{-0.33em} \includegraphics[width=.18\linewidth]{Images/Baseline_Counterfactuals/test_0000193_x_gen.jpg} \hspace{-0.49em} \includegraphics[width=.18\linewidth]{Images/Baseline_Counterfactuals/test_0000198_x_gen.jpg} \hspace{-0.49em} \includegraphics[width=.18\linewidth]{Images/Baseline_Counterfactuals/test_0000205_x_gen.jpg} \hspace{-0.49em} \includegraphics[width=.18\linewidth]{Images/Baseline_Counterfactuals/test_0000245_x_gen.jpg} \hspace{-0.5em} \includegraphics[width=.18\linewidth]{Images/Baseline_Counterfactuals/test_0000259_x_gen.jpg}\\ \vspace{-0.32em} \includegraphics[width=.18\linewidth]{Images/Baseline_Counterfactuals/test_0001213_x_gen.jpg} \hspace{-0.49em} \includegraphics[width=.18\linewidth]{Images/Baseline_Counterfactuals/test_0001214_x_gen.jpg} \hspace{-0.49em} \includegraphics[width=.18\linewidth]{Images/Baseline_Counterfactuals/test_0001312_x_gen.jpg} \hspace{-0.49em} \includegraphics[width=.18\linewidth]{Images/Baseline_Counterfactuals/test_0001325_x_gen.jpg} \hspace{-0.5em} \includegraphics[width=.18\linewidth]{Images/Baseline_Counterfactuals/test_0001426_x_gen.jpg}\\ \vspace{-0.32em} \includegraphics[width=.18\linewidth]{Images/Baseline_Counterfactuals/test_0001460_x_gen.jpg} \hspace{-0.49em} \includegraphics[width=.18\linewidth]{Images/Baseline_Counterfactuals/test_0001486_x_gen.jpg} \hspace{-0.49em} \includegraphics[width=.18\linewidth]{Images/Baseline_Counterfactuals/test_0001521_x_gen.jpg} \hspace{-0.49em} \includegraphics[width=.18\linewidth]{Images/Baseline_Counterfactuals/test_0001642_x_gen.jpg} \hspace{-0.5em} \includegraphics[width=.18\linewidth]{Images/Baseline_Counterfactuals/test_0001683_x_gen.jpg}\\ \vspace{-0.4em} \includegraphics[width=.18\linewidth]{Images/Baseline_Counterfactuals/test_0001696_x_gen.jpg} \hspace{-0.49em} 
\includegraphics[width=.18\linewidth]{Images/Baseline_Counterfactuals/test_0001697_x_gen.jpg} \hspace{-0.49em} \includegraphics[width=.18\linewidth]{Images/Baseline_Counterfactuals/test_0001790_x_gen.jpg} \hspace{-0.49em} \includegraphics[width=.18\linewidth]{Images/Baseline_Counterfactuals/test_0001881_x_gen.jpg} \hspace{-0.5em} \includegraphics[width=.18\linewidth]{Images/Baseline_Counterfactuals/test_0001978_x_gen.jpg} \end{tabular} \caption{Counterfactuals generated by baseline on test data for ImageNet-1k} \label{fig:counterfactuals_baseline} \end{figure} \subsubsection{Generated outputs of the baseline trained on ImageNet-1k} \label{app:baseline-outputs} Figure \ref{fig:IMs_baseline_2} illustrates the individual outputs of each IMs at the start of training, after epoch 300k$^{\text{th}}$, epoch 600k$^{\text{th}}$, epoch 900k$^{\text{th}}$, and epoch 1.2M$^{\text{th}}$ (from left to right). In each figure, we show from top to bottom: pre-masks $\Tilde{m}$, masks $m$, texture $f$, background $b$, and composite images $x_{gen}$. \subsubsection{Generated Counterfactual Images of Baseline trained on ImageNet-1k} \label{app:baseline-counterfactuals} Finally, we show counterfactual images generated by the baseline model in Figure \ref{fig:counterfactuals_baseline}. \section{Improving the SKDCGN process} \label{sec:improve_skdcgn} As mentioned in Section 4.4 of the main paper, we observed that the outputs from CGN are noisy in nature. Fig \ref{fig:mnist_cgn_noisy} evidently illustrates how noisy the MNIST digits are. However in this section we try to improve our architecture by several methods. \begin{figure}[ht] \centering \includegraphics[width=\linewidth]{Images/MNIST_noisy_mask_cgn.pdf} \caption{Noisy outputs generated by the CGN when we made use of pretrained weights given by the authors. } \label{fig:mnist_cgn_noisy} \end{figure} In the direction towards improving the images that are being generated by our architecture, we strongly believe the room of improvement lies in these components: \begin{itemize} \item Improving the quality of images that are being generated by the GAN network in our architecture. Usually loss functions like VGG based perception loss, L1 reconstruction loss are added. \item Improving the existing knowledge distillation framework such that the student learns better from the teacher's guidance by adding new loss functions to the Knowledge Distillation task. \end{itemize} \begin{figure}[ht!] 
\centering
\begin{subfigure}{\textwidth}
\includegraphics[width=\linewidth]{Images/bce/2-test.png}
\caption{Images generated by the CGN backbone (\textit{top} row) and by the corresponding SKDCGN given the same input (\textit{bottom} row) for the mask IM, after 2 epochs on test data.}
\label{fig:mnist_mask_bce2}
\end{subfigure}
\\
\begin{subfigure}{\textwidth}
\includegraphics[width=\linewidth]{Images/bce/10-test.png}
\caption{Images generated by the CGN backbone (\textit{top} row) and by the corresponding SKDCGN given the same input (\textit{bottom} row) for the mask IM, after 10 epochs on test data.}
\label{fig:mnist_mask_bce10}
\end{subfigure}
\\
\begin{subfigure}{\textwidth}
\includegraphics[width=\linewidth]{Images/bce/30-test.png}
\caption{Images generated by the CGN backbone (\textit{top} row) and by the corresponding SKDCGN given the same input (\textit{bottom} row) for the mask IM, after 30 epochs on test data.}
\label{fig:mnist__mask_bce30}
\end{subfigure}
\caption{A comparison of images generated by the CGN backbone and by the corresponding SKDCGN (given the same input) for the mask IM, trained with the cross-entropy loss.}
\label{fig:mnist_ims_1}
\end{figure}
To improve the quality of the generated images, we note that our architecture already integrates most of the common loss functions, either implicitly or explicitly. Hence, we add a cross-entropy loss to the generator and discriminator of the mask IM and obtain the results shown in Fig.~\ref{fig:mnist_mask_bce2} for the second epoch. We observe that digits like `0' are reconstructed, whereas for other digits the outputs look noisy. By the end of the 10th epoch, the digits are reconstructed on the test set (Fig.~\ref{fig:mnist_mask_bce10}). We continued training, expecting better results than those already obtained; however, contrary to our expectations, we observe artefacts by the end of the 30th epoch, as shown in Fig.~\ref{fig:mnist__mask_bce30}.
\begin{figure}[ht!]
\centering
\begin{subfigure}{0.9\textwidth}
\includegraphics[width=\linewidth]{Images/kl_layer/2-test.png}
\caption{Images generated by the CGN backbone (\textit{top} row) and by the corresponding SKDCGN given the same input (\textit{bottom} row) for the mask IM, after 2 epochs on test data.}
\label{fig:mnist_kl_layer2}
\end{subfigure}
\\
\begin{subfigure}{0.9\textwidth}
\includegraphics[width=\linewidth]{Images/kl_layer/10-test.png}
\caption{Images generated by the CGN backbone (\textit{top} row) and by the corresponding SKDCGN given the same input (\textit{bottom} row) for the mask IM, after 10 epochs on test data.}
\label{fig:mnist_kl_layer10}
\end{subfigure}
\\
\begin{subfigure}{0.9\textwidth}
\includegraphics[width=\linewidth]{Images/kl_layer/30-test.png}
\caption{Images generated by the CGN backbone (\textit{top} row) and by the corresponding SKDCGN given the same input (\textit{bottom} row) for the mask IM, after 30 epochs on test data.}
\label{fig:mnist_kl_layer30}
\end{subfigure}
\caption{A comparison of images generated by the CGN backbone and by the corresponding SKDCGN (given the same input) for the mask IM, with the KL divergence applied to the activation of every layer instead of the L1 loss.}
\label{fig:mnist_kl_layer}
\end{figure}
\subsection{KL divergence on layer activations instead of L1}
\label{app:kl_instead_l1}
Since the image generation process already has most of the components needed to ensure a good reconstruction, we tried to improve the knowledge distillation between the teacher and student networks by applying the KL divergence to the activations of every layer, in place of the default per-layer L1 term. This did not prove beneficial, possibly because an explicit L1 reconstruction term on the activations of every layer is still needed. The results are shown in Fig.~\ref{fig:mnist_kl_layer}.
\begin{figure}[ht!]
\begin{subfigure}{\textwidth}
\includegraphics[width=\linewidth]{Images/l2/2-test.png}
\caption{Images generated by the CGN backbone (\textit{top} row) and by the corresponding SKDCGN given the same input (\textit{bottom} row) for the mask IM after 2 epochs on test data.}
\label{fig:mnist_mask_mse2}
\end{subfigure}
\\
\begin{subfigure}{\textwidth}
\includegraphics[width=\linewidth]{Images/l2/10-test.png}
\caption{Images generated by the CGN backbone (\textit{top} row) and by the corresponding SKDCGN given the same input (\textit{bottom} row) for the mask IM after 10 epochs on test data.}
\label{fig:mnist_mask_mse10}
\end{subfigure}
\\
\begin{subfigure}{\textwidth}
\includegraphics[width=\linewidth]{Images/l2/30-test.png}
\caption{Images generated by the CGN backbone (\textit{top} row) and by the corresponding SKDCGN given the same input (\textit{bottom} row) for the mask IM after 30 epochs on test data.}
\label{fig:mnist__mask_mse_30}
\end{subfigure}
\caption{A comparison of images generated by the CGN backbone and by the corresponding SKDCGN (given the same input) for the mask IM, with L2 applied to the activation of every layer instead of L1.}
\label{fig:mnist_mse}
\end{figure}
\subsection{MSE instead of L1}
\label{app:mse_no_l1}
In addition, we tried an L2 loss instead of the L1 loss, but it led to noisier outputs than before; the results are shown in Fig.~\ref{fig:mnist_mse}. Since L2 assumes that the influence of noise is independent of the image's local characteristics, the resulting images are noisy.
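To make these variants concrete, the following is a minimal PyTorch sketch of the per-layer feature-matching losses referred to above (L1, L2, and KL divergence on layer activations). The names and the exact normalization are illustrative assumptions rather than the actual training code.
\begin{verbatim}
import torch
import torch.nn.functional as F

def layer_distillation_loss(teacher_feats, student_feats, mode="l1"):
    # teacher_feats, student_feats: lists of activations from corresponding layers
    loss = torch.zeros(())
    for t_feat, s_feat in zip(teacher_feats, student_feats):
        if mode == "l1":
            loss = loss + F.l1_loss(s_feat, t_feat)
        elif mode == "l2":
            loss = loss + F.mse_loss(s_feat, t_feat)
        elif mode == "kl":
            # treat flattened activations as distributions over spatial positions
            log_p = F.log_softmax(s_feat.flatten(1), dim=1)
            q = F.softmax(t_feat.flatten(1), dim=1)
            loss = loss + F.kl_div(log_p, q, reduction="batchmean")
    return loss
\end{verbatim}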
\clearpage
\bibliographystyle{unsrt}
\bibliography{egbib}
\end{document}
https://openreview.net/forum?id=ewS9kxTKF7f
https://arxiv.org/abs/2208.04226
Please evaluate the clarity and coherence of the experimental section in the paper, including the presentation of results and conclusions drawn from the figures.
1) Summary: The paper reformulates Video Frame Interpolation (VFI) as a Continuous Image Transition task. The approach is based on Space Decoupled Learning (SDL) and it shows competitive results for VFI and for other CIT tasks. 2) Strengths: - SDL simplifies VFI to a CIT problem without affecting performance and without requiring any human knowledge of the domain. Additionally, SDL works well for several CIT tasks. - Paper very well written. - Hypothesis supported by experiments. - In line with workshop. 3) Weaknesses: - Figure 1 is usually an important figure in the paper. It seems a bit difficult to follow and to read. Rating: 8: Top 50% of accepted papers, clear accept Confidence: 4: The reviewer is confident but not absolutely certain that the evaluation is correct
\documentclass[runningheads]{llncs} \usepackage{graphicx} \usepackage{tikz} \usepackage{comment} \usepackage{amsmath,amssymb} % \usepackage{color} \usepackage{booktabs} \usepackage{multirow} \usepackage{tabularx} \usepackage{threeparttable} \usepackage{ragged2e} \usepackage{wrapfig} \makeatletter \@namedef{ver@everyshi.sty}{} \makeatother \usepackage{pgfplots} \usepackage{pgfplots}\pgfplotsset{compat=1.9} \usepackage[accsupp]{axessibility} % \newcommand{\ie}{\emph{i.e.}} \newcommand{\eg}{\emph{e.g.}} \begin{document} \pagestyle{headings} \mainmatter \def\ECCVSubNumber{4010} % \title{Beyond a Video Frame Interpolator: A Space Decoupled Learning Approach to Continuous Image Transition} % \titlerunning{Beyond a Video Frame Interpolator} \author{Tao Yang\inst{1} \and Peiran Ren\inst{1} \and Xuansong Xie\inst{1} \and Xiansheng Hua\inst{1} \and Lei Zhang\inst{2}} % \authorrunning{T. Yang et al.} \institute{DAMO Academy, Alibaba Group \\ \email{\{yangtao9009@gmail.com, peiran\_r@sohu.com, xingtong.xxs@taobao.com, xiansheng.hxs@alibaba-inc.com\}} \\ \and Department of Computing, The Hong Kong Polytechnic University \\ \email{\{cslzhang@comp.polyu.edu.hk\}}} \maketitle \vspace*{-0.5cm} \begin{abstract} Video frame interpolation (VFI) aims to improve the temporal resolution of a video sequence. Most of the existing deep learning based VFI methods adopt off-the-shelf optical flow algorithms to estimate the bidirectional flows and interpolate the missing frames accordingly. Though having achieved a great success, these methods require much human experience to tune the bidirectional flows and often generate unpleasant results when the estimated flows are not accurate. In this work, we rethink the VFI problem and formulate it as a continuous image transition (CIT) task, whose key issue is to transition an image from one space to another space continuously. More specifically, we learn to implicitly decouple the images into a translatable flow space and a non-translatable feature space. The former depicts the translatable states between the given images, while the later aims to reconstruct the intermediate features that cannot be directly translated. In this way, we can easily perform image interpolation in the flow space and intermediate image synthesis in the feature space, obtaining a CIT model. The proposed space decoupled learning (SDL) approach is simple to implement, while it provides an effective framework to a variety of CIT problems beyond VFI, such as style transfer and image morphing. Our extensive experiments on a variety of CIT tasks demonstrate the superiority of SDL to existing methods. The source code and models can be found at \url{https://github.com/yangxy/SDL}. % \keywords{Video Frame Interpolation, Continuous Image Transition, Image Synthesis, Space Decoupled Learning} \end{abstract} \section{Introduction} \label{sec:intro} Video frame interpolation (VFI) targets at synthesizing intermediate frames between the given consecutive frames of a video to overcome the temporal limitations of camera sensors. VFI can be used in a variety of practical applications, including slow movie generation \cite{Jiang2018Superslomo}, motion deblurring \cite{Shen2020BIN} and visual quality enhancement \cite{Xue2019TOFlow}. The conventional VFI approaches \cite{Baker2007ADA} usually calculate optical flows between the source and target images and gradually synthesize the intermediate images. 
With the great success of deep neural networks (DNNs) in computer vision tasks \cite{Dong2015SRCNN,He2016ResNet,Redmon2016YOLO}, recently researchers have been focusing on developing DNNs to address the challenging issues of VFI. Most DNN based VFI algorithms can be categorized into flow-based \cite{Jiang2018Superslomo,Bao2019DAIN,Xu2019QVI,Niklaus2020Splatting}, kernel-based \cite{Niklaus2017Adaptive,Lee2020Adacof,Shen2020BIN}, and phase-based ones \cite{Meyer2015Phase,Meyer2018PhaseNet}. With the advancement of optical flow methods \cite{Sun2018PWC-Net,Bar-Haim2020ScopeFlow}, flow-based VFI algorithms have gained increasing popularity and shown good quantitative results on benchmarks \cite{Bao2019DAIN,Niklaus2020Splatting}. However, these methods require much human experience to tune the bidirectional flows, \eg, by using the forward \cite{Jiang2018Superslomo,Bao2019DAIN} and backward \cite{Niklaus2018Context,Niklaus2020Splatting} warping algorithms. In order to improve the synthesis performance, some VFI methods have been developed by resorting to the depth information \cite{Bao2019DAIN}, the acceleration information \cite{Xu2019QVI} and the softmax splatting \cite{Niklaus2020Splatting}. These methods, however, adopt the off-the-shelf optical flow algorithms, and hence they often generate unpleasant results when the estimated flows are not accurate. To address the above issues, we rethink the VFI problem and aim to find a solution that is free of flows. Different from previous approaches, we formulate VFI as a continuous image transition (CIT) problem. It is anticipated that we could construct a smooth transition process from the source image to the target image so that the VFI can be easily done. Actually, there are many CIT tasks in computer vision applications, such as image-to-image translation \cite{Isola2017Pix2Pix,Zhu2017CycleGAN}, image morphing \cite{Liu2019Few,Park2020Crossbreed} and style transfer \cite{Gatys2016Style,Huang2017Adain}. Different DNN models have been developed for different CIT tasks. Based on the advancement of deep generative adversarial network (GAN) techniques \cite{Brock2019BigGAN,Karras2019StyleGAN,Karras2020StyleGAN2}, deep image morphing methods have been proposed to generate images with smooth semantic changes by walking in a latent space \cite{Radford2016Unsupervised,Jahanian2020GANsteerability}. Similarly, various image-to-image translation methods have been developed by exploring intermediate domains \cite{Gong2019DLOW,Wu2019RelGANMI,Choi2020StarGANV2}, interpolating attribute \cite{Mao2020ContinuousI2I} or feature \cite{Upchurch2017DFI} or kernel \cite{Wang2019DNI} vectors, using physically inspired models for guidance \cite{Pizzati2021CoMoGAN}, and navigating latent spaces with discovered paths \cite{Chen2019Homomorphic,Jahanian2020GANsteerability}. Though significant progresses have been achieved for CIT, existing methods usually rely on much human knowledge of the specific domain, and employ rather different models for different applications. % In this work, we propose to learn a translatable flow space to control the continuous and smooth translation between two images, while synthesize the image features which cannot be translated. Specifically, we present a novel space decoupled learning (SDL) approach for VFI. Our SDL implicitly decouples the image spaces into a translatable flow space and a non-translatable feature space. 
With the decoupled image spaces, we can easily perform smooth image translation in the flow space, and synthesize intermediate image features in the non-translatable feature space. Interestingly, the proposed SDL approach can not only provide a flexible solution for VFI, but also provide a general and effective solution to other CIT tasks. To the best of our knowledge, the proposed SDL is the first flow-free algorithm which is however able to synthesize consecutive interpolations, achieving leading performance in VFI. SDL is easy-to-implement, and it can be readily integrated into off-the-shelf DNNs for different CIT tasks beyond VFI, serving as a general-purpose solution to the CIT problem. We conduct extensive experiments on various CIT tasks, including, VFI, image-to-image translation and image morphing, to demonstrate its effectiveness. Though using the same framework, SDL shows highly competitive performance with those state-of-the-art methods that are specifically designed for different CIT problems. \vspace{-2mm} \section{Related Work} \label{sec:work} \subsection{Video Frame Interpolation (VFI)} With the advancement of DNNs, recently significant progresses have been made on VFI. Long \emph{et al}. \cite{Long2016VFI} first attempted to generate the intermediate frames by taking a pair of frames as input to DNNs. This method yields blurry results since the motion information of videos is not well exploited. The latter works are mostly focused on how to effectively model motion and handle occlusions. Meyer \emph{et al}. \cite{Meyer2015Phase,Meyer2018PhaseNet} proposed phase-based models which represent motion as per-pixel phase shift. Niklaus \emph{et al}. \cite{Niklaus2017Adaptive,Niklaus2017Sepconv} came up with the kernel-based approaches that estimate an adaptive convolutional kernel for each pixel. Lee \emph{et al}. \cite{Lee2020Adacof} introduced a novel warping module named Adaptive Collaboration of Flows (AdaCoF). An end-to-end trainable network with channel attention was proposed by Choi \emph{et al}. \cite{Choi2020CAIN}, where frame interpolation is achieved without explicit estimation of motion. The kernel-based methods have achieved impressive results. However, they are not able to generate missing frames with arbitrary interpolation factors and usually fail to handle large motions due to the limitation of kernel size. Unlike phase-based or kernel-based methods, flow-based models explicitly exploit motion information of videos \cite{Jiang2018Superslomo,Bao2019DAIN,Xu2019QVI,Niklaus2020Splatting}. With the advancement of optical flow methods \cite{Sun2018PWC-Net,Bar-Haim2020ScopeFlow}, flow-based VFI algorithms have become popular due to their good performance. Niklaus and Liu \cite{Niklaus2018Context} adopted forward warping to synthesize intermediate frames. This algorithm suffers from holes and overlapped pixels, and it was later improved by the softmax splatting method \cite{Niklaus2020Splatting}, which can seamlessly map multiple source pixels to the same target location. Since forward warping is not very intuitive to use, most flow-based works adopt backward warping. Jiang \emph{et al}. \cite{Jiang2018Superslomo} jointly trained two U-Nets \cite{Ronneberger2015Unet}, which respectively estimate the optical flows and perform bilateral motion approximation to generate intermediate results. Reda \emph{et al}. \cite{Reda2019UVI} and Choi \emph{et al}. \cite{Choi2020Meta} further improved this work by introducing cycle consistency loss and meta-learning, respectively. 
Bao \emph{et al}. \cite{Bao2019DAIN} explicitly detected the occlusion by exploring the depth information, but the VFI performance is sensitive to depth estimation accuracy. To exploit the acceleration information, Xu \emph{et al}. \cite{Xu2019QVI} proposed a quadratic VFI method. Recently, Park \emph{et al}. \cite{Park2020BMBC} proposed a bilateral motion network to estimate intermediate motions directly. \subsection{Continuous Image Transition (CIT)} In many image transition tasks, the key problem can be formulated as how to transform an image from one state to another state. DNN based approaches have achieved impressive results in many image transition tasks, such as image-to-image translation \cite{Isola2017Pix2Pix,Zhu2017CycleGAN,Wang2018Pix2PixHD}, style transfer \cite{Gatys2016Style,Johnson2016Perceptual}, image morphing \cite{Chen2019Homomorphic} and VFI \cite{Lee2020Adacof,Niklaus2017Sepconv}. However, these methods are difficult to achieve continuous and smooth transition between images. A continuous image transition (CIT) approach is desired to generate the intermediate results for a smooth transition process. Many researches on image-to-image translation and image morphing resort to finding a latent feature space and blending image features therein \cite{Upchurch2017DFI,Mao2020ContinuousI2I,Pizzati2021CoMoGAN}. However, these methods need to explicitly define the feature space based on human knowledge of the domain. Furthermore, encoding an image to a latent code often results in the loss of image details. Alternatively, methods on image morphing and VFI first establish correspondences between the input images, for example, by using a warping function or bidirectional optical flows, to perform shape deformation of image objects, and then gradually blend images for smooth appearance transition \cite{Wolberg1998Morph,Liao2014Morph,Bao2019DAIN,Niklaus2020Splatting}. Unfortunately, it is not easy to accurately specify the correspondences, leading to superimposed appearance of the intermediate results. In addition to generating a continuous transition between two input images (source and target), there are also methods to synthesize intermediate results between two different outputs \cite{Huang2017Adain,Hong2021Domain}. \textbf{Image-to-image Translation:} Isola \emph{et al}. \cite{Isola2017Pix2Pix} showed that the conditional adversarial networks (cGAN) can be a good solution to image-to-image (I2I) translation problems. Many following works, such as unsupervised learning \cite{Zhu2017CycleGAN}, disentangled learning \cite{Lee2018DRIT}, few-shot learning \cite{Liu2019Few}, high resolution image synthesis \cite{Wang2018Pix2PixHD}, multi-domain translation \cite{Choi2018Stargan}, multi-modal translation \cite{Zhu2017Multimodal}, have been proposed to extend cGAN to different scenarios. Continuous I2I has also attracted much attention. A common practice to this problem is to find intermediate domains by weighting discriminator \cite{Gong2019DLOW} or adjusting losses \cite{Wu2019RelGANMI}. Some methods have been proposed to enable controllable I2I by interpolating attribute \cite{Mao2020ContinuousI2I} or feature \cite{Upchurch2017DFI} or kernel \cite{Wang2019DNI} vectors. Pizzati \emph{et al}. \cite{Pizzati2021CoMoGAN} proposed a model-guided framework that allows non-linear interpolations. \textbf{Image Morphing:} Conventional image morphing methods mostly focus on reducing user-intervention in establishing correspondences between the two images \cite{Wolberg1998Morph}. 
Smythe \cite{Smythe1990Morph} used pairs of mesh nodes for correspondences. Beier and Neely \cite{Beier1992Morph} developed field morphing utilizing simpler line segments other than meshes. Liao \emph{et al}. \cite{Liao2014Morph} performed optimization of warping fields in a specific domain. Recently, methods \cite{Park2020Crossbreed,Abdal2019Img2StyleGAN,Jahanian2020GANsteerability} have been proposed to achieve efficient image morphing by manipulating the latent space of GANs \cite{Brock2019BigGAN,Karras2020StyleGAN2}. However, these methods often result in the loss of image details and require time-consuming iterative optimization during inference. Mao \emph{et al.} \cite{Mao2020ContinuousI2I} and Pizzati \emph{et al}. \cite{Pizzati2021CoMoGAN} decoupled content and style spaces using disentangled representations. They achieved continuous style interpolations by blending the style vectors. However, these methods preserve the content of source image and they are not suitable to image morphing. Park \emph{et al.} \cite{Park2020Crossbreed} overcame this limitation by performing interpolation in both the content and style spaces. As can be seen from the above discussions, existing works basically design rather different models for different CIT tasks. In this work, we aim to develop a state decoupled learning approach to perform different CIT tasks, including VFI, image-to-image translation and image morphing, by using the same framework. \section{Proposed Method} \label{sec:proposed} \subsection{Problem Formulation} \label{sec:problem} Given a source image $I_0$ and a target image $I_1$, the goal of VFI is to synthesize an intermediate result $I_t$ from them: \begin{equation} I_t=\mathcal{G}(I_0, I_1, t), \label{eqn:general} \end{equation} where $t\in(0,1)$ is a control parameter and $\mathcal{G}$ is a transition mapping function. To better preserve image details, researchers \cite{Bao2019DAIN,Xu2019QVI,Niklaus2020Splatting} have resorted to using bidirectional optical flows \cite{Sun2018PWC-Net,Teed2020RAFT} of $I_0$ and $I_1$, denoted by $F_{0\rightarrow1}$ and $F_{1\rightarrow0}$, to establish the motion correspondence between two consecutive frames. With the help of optical flows, $I_t$ can be obtained as follows: \begin{equation} I_t=\mathcal{G}(I_0, I_1, \mathcal{B}(F_{0\rightarrow1}, F_{1\rightarrow0}, t)), \label{eqn:vfi} \end{equation} where $\mathcal{B}$ is a blending function. Forward \cite{Niklaus2018Context,Niklaus2020Splatting} and backward \cite{Bao2019DAIN,Xu2019QVI} warping algorithms have been proposed to perform the blending $\mathcal{B}$ in Eq.~(\ref{eqn:vfi}). The above idea for VFI coincides with some image morphing works \cite{Wolberg1998Morph,Liao2014Morph,Fish2020MorphGAN}, where the warping function, instead of optical flow, is used to mark the object shape changes in the images. However, it is not easy to specify accurately the correspondences using warping, resulting in superimposed morphing appearance. This inspires us to model VFI as a CIT problem and seek for a more effective and common solution. One popular solution to CIT is to embed the images into a latent space, and then blend the image feature codes therein: \begin{equation} I_t=\mathcal{G}(\mathcal{B}(L_0, L_1, t)), \label{eqn:latent} \end{equation} where $L_0, L_1$ represent respectively the latent codes of $I_0, I_1$ in the latent space. For example, StyleGAN \cite{Karras2019StyleGAN} performs \emph{style mixing} by blending the latent codes at various scales. 
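As a simple illustration of the blending function $\mathcal{B}$ in Eq.~(\ref{eqn:latent}), linear interpolation of the latent codes is one common choice (spherical interpolation is another). The snippet below is only a sketch under this assumption and does not correspond to any specific generator implementation.
\begin{verbatim}
import torch

def blend_latents(l0, l1, t):
    # Linear interpolation between the latent codes of the two input images.
    return (1.0 - t) * l0 + t * l1

# i_t = generator(blend_latents(l0, l1, t))  # Eq. (3), with a pretrained generator
\end{verbatim}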
To gain flexible user control, disentangled learning methods \cite{Mao2020ContinuousI2I,Liu2019Few,Pizzati2021CoMoGAN} were later proposed to decompose the latent space into the content and style representations. The smooth style mixing can be achieved by interpolating the style vectors as follows: \begin{equation} I_t=\mathcal{G}(L_0^c, \mathcal{B}(L_0^s, L_1^s, t)), \label{eqn:disentangle} \end{equation} where $L_0^s, L_1^s$ are the style representation vectors of $L_0, L_1$, respectively, and $L_0^c$ is the content vector of $L_0$. In this case, $I_1$ serves as the ``style'' input and the content of $I_0$ is preserved. However, the above formulation is hard to use in tasks such as image morphing. Though impressive advancements have been made, the above CIT methods require much human knowledge to explicitly define the feature space, while embedding an image into a latent code needs time-consuming iterative optimization and sacrifices image details. \begin{figure*}[t!] \centering \includegraphics[width=0.9\textwidth]{imgs/SDL_arch.pdf} \caption{The architecture of our space decoupled learning (SDL) method.} \label{fig:arch} \end{figure*} \subsection{Space Decoupled Learning} \label{sec:sdl} As discussed in Section \ref{sec:problem}, previous works employ rather different models for different CIT applications. One interesting question is: can we find a common yet more effective framework to different CIT tasks? We make an in-depth investigation of this issue and present such a framework in this section. The latent space aims to depict the essential image features and patterns of original data. It is expected that in the latent space, the correspondences of input images $I_0$ and $I_1$ can be well built. In other words, the latent codes $L_0, L_1$ in Eq.~(\ref{eqn:latent}) play the role of optical flows $F_{0\rightarrow1}, F_{1\rightarrow0}$ in Eq.~(\ref{eqn:vfi}). Both of Eq.~(\ref{eqn:latent}) and Eq.~(\ref{eqn:vfi}) blend the correspondence of two images to obtain the desired output. The difference lies in that the latent code representation of an image in Eq.~(\ref{eqn:latent}) may lose certain image details, while in Eq.~(\ref{eqn:vfi}) the original inputs $I_0, I_1$ are involved into the reconstruction, partially addressing this problem. From the above discussion, we can conclude that the key to CIT tasks is how to smoothly blend the image features whose correspondences can be well built, while reconstruct the image features whose correspondences are hard to obtain. We thus propose to decouple the image space into two sub-spaces accordingly: a \textit{translatable flow space}, denoted by $P$, where the features can be smoothly and easily blended with $t$, and a \textit{non-translatable feature space}, denoted by $Q$, where the features cannot be blended but should be synthesized. With $P$ and $Q$, we propose a unified formulation of CIT problems as follows: \begin{equation} I_t=\mathcal{G}(Q_{0\rightarrow1}, \mathcal{B}(P_{0\rightarrow1}, t)). \label{eqn:sdl} \end{equation} The subscript ``$0\rightarrow1$'' means the transition is from $I_0$ to $I_1$. With Eq.~(\ref{eqn:sdl}), we continuously transition those translatable image components in $P$, and reconstruct the intermediate features that cannot be directly transitioned in $Q$. Now the question turns to how to define the spaces of $P$ and $Q$. 
Unlike many previous CIT methods \cite{Mao2020ContinuousI2I,Pizzati2021CoMoGAN} which explicitly define the feature spaces using much human knowledge, we propose to learn $P$ and $Q$ implicitly from training data. We learn a decoupling operator, denoted by $\mathcal{D}$, to decompose the image space of $I_0$ and $I_1$ to the translatable flow space $P$ and the non-translatable feature space $Q$: \begin{equation} (P_{0\rightarrow1}, Q_{0\rightarrow1}) \leftarrow \mathcal{D}(I_0, I_1). \label{eqn:decouple} \end{equation} Specifically, we use several convolutional layers to implement the space decoupling operator $\mathcal{D}$. To gain performance, $\mathcal{D}$ is learned on multiple scales. The proposed method, namely space decoupled learning (SDL), requires no human knowledge of the domain, and it can serve as an effective and unified solution to different CIT tasks. The architecture of SDL is a U-shaped DNN, as illustrated in Fig.~\ref{fig:arch}. Unlike standard U-Net \cite{Ronneberger2015Unet}, a novel \emph{SDL unit} is introduced in the decoder part of our network. The detailed structure of the SDL unit is depicted in the right-bottom corner of Fig.~\ref{fig:arch}. The inputs of the SDL unit are the feature maps decomposed in previous convolution layers. Let $C$ be the number of input feature maps and $s\in(0,1)$ be the ratio of translatable flow features to the total features. $s$ is a hyper-parameter controlled by users (we will discuss how to set it in Section~\ref{sec:expriment}). We then split the channel number of input feature maps in $P$ and $Q$ as $s*C$ and $C-s*C$, and perform the blending $\mathcal{B}$ on $P$ while keeping $Q$ unchanged. There are multiple ways to perform the blending. For example, $\mathcal{B}$ can be achieved by scaling the features with factor $t$: $\mathcal{B}(P_{0\rightarrow1}, t)=t*P_{0\rightarrow1}$, which results in linear interpolation in $P$ and is used in our experiments. Afterwards, the blended $P$ and $Q$ are concatenated as the output of the SDL unit. A merging operator $\mathcal{M}$ (also learned as several convolutional layers like $\mathcal{D}$) is followed to rebind the decoupled spaces on multiple scales. A synthesis network is also adopted to improve the final transition results. We employ a GridNet architecture \cite{Fourure2017Gridnet} for it with three rows and six columns. Following the work of Niklaus \emph{et al}. \cite{Niklaus2020Splatting}, some modifications are utilized to address the checkerboard artifacts. The detailed architecture of the synthesis network can be found in the \textbf{supplementary materials}. In addition, it is worth mentioning that $t$ works with the loss function during training if necessary. Details can be found in the section of experiments. \subsection{Training Strategy} To train SDL model for VFI, we adopt two loss functions: the Charbonnier loss \cite{Charbonnier1994Loss} $\mathcal{L}_C$ and the perceptual loss \cite{Johnson2016Perceptual} $\mathcal{L}_P$. The final loss $\mathcal{L}$ is as follows: \begin{equation} \mathcal{L}=\alpha\mathcal{L}_C+\beta\mathcal{L}_P, \end{equation} where $\alpha$ and $\beta$ are balancing parameters. The content loss $\mathcal{L}_C$ enforces the fine features and preserves the original color information. The perceptual loss $\mathcal{L}_P$ can be better balanced to recover more high-quality details. We use the $conv5\_4$ feature maps before activation in the pre-trained VGG19 network \cite{Simonyan2014VGG} as the perceptual loss. 
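To make the above description concrete, below is a simplified PyTorch sketch of the SDL unit: the input channels are split by the ratio $s$, the translatable part is blended with $t$, and the result is concatenated with the unchanged non-translatable part. The learned decoupling operator $\mathcal{D}$ and merging operator $\mathcal{M}$ (plain convolutional layers) are omitted here, and the code is illustrative rather than the exact implementation.
\begin{verbatim}
import torch
import torch.nn as nn

class SDLUnit(nn.Module):
    def __init__(self, channels, s=0.5):
        super().__init__()
        self.num_flow = int(s * channels)  # channels assigned to the flow space P

    def forward(self, feats, t):
        p = feats[:, :self.num_flow]       # translatable flow features P
        q = feats[:, self.num_flow:]       # non-translatable features Q
        p = t * p                          # blending B(P, t) = t * P
        return torch.cat([p, q], dim=1)
\end{verbatim}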
In our experiments, we empirically set $\alpha=1$ and $\beta=0.1$. For other CIT applications including image-to-image translation and image morphing, GAN plays a key role to generate high-quality results in order to alleviate superimposed appearances. In our implementation, we use PatchGAN developed by Isola \emph{et al.} \cite{Isola2017Pix2Pix} for adversarial training. The final loss is the sum of the $\mathcal{L}_1$ loss and PatchGAN loss with equal weights. \begin{table*}[t] \centering \caption{Quantitative comparison (PSNR, SSIM, runtime) of different methods on the Middleburry, UCF101, Vimeo90K and Adobe240fps datasets. The runtime is reported as the average time to process a pair of $640\times 480$ images. The numbers in \textbf{bold} represent the best performance. The upper part of the table presents the results of kernel-based methods, and the lower part presents the methods that can perform smooth frame interpolations. ``-'' means that the result is not available.} \vspace*{-3mm} \resizebox{0.9\textwidth}{!}{\begin{threeparttable}\begin{tabular}{l|c|c|c c|c c|c c|c c} \multirow{2}{*}{\textbf{Method}} & \multirow{2}{*}{\textbf{Training Dataset}} & \multicolumn{1}{c}{\textbf{Runtime}} & \multicolumn{2}{c}{\textbf{Middleburry}} & \multicolumn{2}{c}{\textbf{UCF101}} & \multicolumn{2}{c}{\textbf{Vimeo90K}} & \multicolumn{2}{c}{\textbf{Adobe240fps}} \\ & & (ms) & PSNR$\uparrow$ & SSIM$\uparrow$ & PSNR$\uparrow$ & SSIM$\uparrow$ & PSNR$\uparrow$ & SSIM$\uparrow$ & PSNR$\uparrow$ & SSIM$\uparrow$ \\ \hline SepConv \protect{\cite{Niklaus2017Sepconv}} & proprietary & 57 & 35.73 & 0.959 & 34.70 & 0.947 & 33.79 & 0.955 & - & - \\ CAIN \protect{\cite{Choi2020CAIN}} & proprietary & 56 & 35.07 & 0.950 & 34.97 & 0.950 & 34.64 & 0.958 & - & - \\ AdaCof \protect{\cite{Lee2020Adacof}} & Vimeo90K & 77 & 35.71 & 0.958 & 35.16 & 0.950 & 34.35 & 0.956 & - & - \\ CDFI \protect{\cite{Ding2021CDFI}} & Vimeo90K & 248 & 37.14 & 0.966 & 35.21 & 0.950 & 35.17 & 0.964 & - & - \\ \hline \hline SuperSloMo \protect{\cite{Jiang2018Superslomo}} & Adobe240fps+Youtube240fps & 67 & 33.64 & 0.932 & 33.14 & 0.938 & 32.68 & 0.938 & 30.76 & 0.902 \\ DAIN \protect{\cite{Bao2019DAIN}} & Vimeo90K & 831 & 36.70 & 0.964 & 35.00 & 0.949 & 34.70 & 0.963 & 29.22 & 0.877 \\ BMBC \protect{\cite{Park2020BMBC}} & Vimeo90K & 3008 & 36.78 & 0.965 & 35.15 & 0.950 & 35.01 & \textbf{0.965} & 29.56 & 0.881 \\ EDSC \protect{\cite{Cheng2021EDSC}} & Vimeo90K-Septuplet & 60 & 36.81 & \textbf{0.967} & 35.06 & 0.946 & 34.57 & 0.956 & 30.28 & 0.900 \\ SDL & Vimeo90K+Adobe240fps & \textbf{42} & \textbf{37.38} & \textbf{0.967} & \textbf{35.33} & \textbf{0.951} & \textbf{35.47} & \textbf{0.965} & \textbf{31.38} & \textbf{0.914} \\ \end{tabular} \end{threeparttable}} \label{tab:vficomp} \vspace*{-2mm} \end{table*} \begin{figure*}[t!] \centering \includegraphics[width=0.9\textwidth]{imgs/vfi.pdf} \vspace*{-5mm} \caption{Visual comparison of competing methods on the Vimeo90K test set. 
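For reference, a minimal sketch of this objective is given below. The Charbonnier term follows its standard definition; for the perceptual term we assume an L1 distance between the VGG19 $conv5\_4$ feature maps, which is an implementation detail not fixed by the text.
\begin{verbatim}
import torch
import torch.nn.functional as F

def charbonnier(pred, target, eps=1e-6):
    # Differentiable approximation of the L1 loss.
    return torch.mean(torch.sqrt((pred - target) ** 2 + eps ** 2))

def vfi_loss(pred, target, vgg_conv5_4, alpha, beta):
    # vgg_conv5_4: frozen feature extractor returning conv5_4 maps (before activation)
    l_c = charbonnier(pred, target)
    l_p = F.l1_loss(vgg_conv5_4(pred), vgg_conv5_4(target))  # assumed L1 distance
    return alpha * l_c + beta * l_p
\end{verbatim}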
(a) SepConv \protect\cite{Niklaus2017Sepconv}; (b) SuperSloMo \protect\cite{Jiang2018Superslomo}; (c) CAIN \protect\cite{Choi2020CAIN}; (d) EDSC \protect\cite{Cheng2021EDSC}; (e) DAIN \protect\cite{Bao2019DAIN}; (f) BMBC \protect\cite{Park2020BMBC}; (g) SDL; (h) Ground truth.}
\label{fig:vimeo}
\end{figure*}
\vspace*{-2mm}
\section{Experiments and Applications}
\vspace*{-1mm}
\label{sec:expriment}
In this section, we first conduct extensive experiments on VFI to validate the effectiveness of our SDL method, and then apply SDL to other CIT tasks beyond VFI, such as face aging, face toonification and image morphing, to validate the generality of SDL.
\vspace*{-2mm}
\subsection{Datasets and Training Settings for VFI}
\vspace*{-1mm}
There are several datasets publicly available for training and evaluating VFI models, including Middlebury \cite{Baker2007Middlebury}, UCF101 \cite{Soomro2012UCF101AD}, Vimeo90K \cite{Xue2019TOFlow} and Adobe240-fps \cite{Su2017Adobe240fps}. The Middlebury dataset contains two subsets, \ie, \emph{Other} and \emph{Evaluation}. The former provides ground-truth middle frames, while the latter hides the ground-truth, and the users are asked to upload their results to the benchmark website for evaluation. The UCF101 dataset \cite{Soomro2012UCF101AD} contains $379$ triplets of human action videos, which can be used for testing VFI algorithms. The frame resolution of the above two datasets is $256\times256$.
We combine the training subsets of Adobe240-fps and Vimeo90K to train our SDL model. The Vimeo90K dataset \cite{Xue2019TOFlow} has $51,312$ ($3,782$) triplets for training (testing), where each triplet contains $3$ consecutive video frames of resolution $256\times448$. This implicitly fixes the value of $t$ to $0.5$, and hence the triplets alone are insufficient to train our SDL model for finer time intervals. We further resort to the Adobe240-fps dataset \cite{Su2017Adobe240fps}, which is composed of high frame-rate videos, for model training. We first extract the frames of all video clips, and then group the extracted frames with $12$ frames per group. There is no overlap between any two groups. During training, we randomly select $3$ frames $I_a, I_b, I_c$ from a group as a triplet, where $\{a,b,c\}\in\{0,1,...,11\}$ and $a<b<c$. The corresponding value of $t$ can be calculated as $(b-a)/(c-a)$. We also randomly reverse the direction of the sequence for data augmentation ($t$ is accordingly changed to $1-t$). Each video frame is resized so that its shorter side is $360$ pixels, and a random $256\times256$ crop is taken. Horizontal flip is performed for data augmentation. Following SuperSloMo \cite{Jiang2018Superslomo}, we use $112$ video clips for training and the remaining $6$ for validation.
For optimization, we adopt the Adam \cite{Kingma2015AdamAM} optimizer with a batch size of $48$. The initial learning rate is set as $2\times 10^{-4}$, and it decays by a factor of $0.8$ every 100K iterations. The model is updated for 600K iterations.
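To make the above sampling scheme concrete, a minimal sketch of how a training triplet and its interpolation parameter $t$ can be drawn from a $12$-frame Adobe240-fps group is given below. It is an illustrative sketch only (the function name is ours, and the resizing, cropping and flipping steps are omitted), not the exact data pipeline.
\begin{verbatim}
import random

def sample_triplet(group):
    """Draw (I_a, I_b, I_c) with a < b < c from a 12-frame group and
    compute the interpolation parameter t = (b - a) / (c - a)."""
    assert len(group) == 12
    a, b, c = sorted(random.sample(range(12), 3))
    first, middle, last = group[a], group[b], group[c]
    t = (b - a) / (c - a)
    # Randomly reverse the temporal order for augmentation (t -> 1 - t).
    if random.random() < 0.5:
        first, last = last, first
        t = 1.0 - t
    # (first, last) are the network inputs; middle is the target at time t.
    return first, last, middle, t
\end{verbatim}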
\subsection{Comparisons with State-of-the-arts}
We evaluate the performance of the proposed SDL method in comparison with two categories of state-of-the-art VFI algorithms, whose source codes or pretrained models are publicly available. The first category of methods allows frame interpolation at arbitrary time, including SuperSloMo \cite{Jiang2018Superslomo}, DAIN \cite{Bao2019DAIN}, BMBC \cite{Park2020BMBC} and EDSC \cite{Cheng2021EDSC}. The second category is kernel-based algorithms, including SepConv \cite{Niklaus2017Sepconv}, CAIN \cite{Choi2020CAIN}, AdaCof \cite{Lee2020Adacof} and CDFI \cite{Ding2021CDFI}, which can only perform frame interpolation recursively, increasing the frame rate by powers of $2$. The PSNR and SSIM \cite{Wang2004SSIM} indices are used for quantitative comparisons.
Table~\ref{tab:vficomp} provides the PSNR/SSIM and runtime results on the Middlebury \emph{Other} \cite{Baker2007Middlebury}, UCF101 \cite{Soomro2012UCF101AD}, Vimeo90K \cite{Xue2019TOFlow} and Adobe240-fps \cite{Su2017Adobe240fps} testing sets. In all experiments, the first and last frames of each group are taken as inputs. On the first three datasets, we set $t=0.5$ to interpolate the middle frame, while on the high frame-rate Adobe240-fps dataset, we vary $t\in\{\frac{1}{11},\frac{2}{11},...,\frac{10}{11}\}$ to produce the $10$ intermediate frames, which is beyond the capability of the kernel-based methods \cite{Niklaus2017Sepconv,Choi2020CAIN,Lee2020Adacof,Ding2021CDFI}. All the methods are tested on an NVIDIA V100 GPU, and we report the average processing time over $10$ runs.
From Table~\ref{tab:vficomp}, one can see that the proposed SDL approach achieves the best PSNR/SSIM indices on all the datasets, while having the fastest running speed. The kernel-based method CDFI \cite{Ding2021CDFI} also achieves very good PSNR/SSIM results. However, it often fails to handle large motions due to the limitation of kernel size. The flow-based methods such as DAIN \cite{Bao2019DAIN} address this issue by referring to bidirectional flows, while inevitably suffering from inaccurate flow estimations. The proposed SDL implicitly decouples the images into a translatable flow space and a non-translatable feature space, avoiding the side effects of inaccurate flows.
Fig.~\ref{fig:vimeo} presents some visual comparisons of the VFI results of competing methods. It can be seen that our SDL method better preserves fine image details and edge structures, especially in scenarios with complex motions, where inaccurate flow estimations are commonly observed. SDL manages to address this difficulty by implicitly decoupling the images into a translatable flow space and a non-translatable feature space, and hence results in better visual quality with fewer interpolation artifacts. More visual comparison results can be found in the \textbf{supplementary material}.
In the task of VFI, optical flow is widely used to explicitly align the adjacent frames. However, this may lead to visual artifacts on pixels where the flow estimation is not accurate. In our SDL, we decouple the image space into a translatable flow space and a non-translatable feature space, and only perform interpolation in the former, avoiding the possible VFI artifacts caused by inaccurate flow estimation. In Fig.~\ref{fig:vis}, we visualize the translatable flow space and compare it with the optical flow obtained by SpyNet \cite{Ranjan2017SpyNet}. As can be seen, the translatable flow space matches the optical flow on the whole, while it focuses more on the fine details and edge structures that are important for synthesizing high-quality results.
\begin{figure}[t!]
\begin{minipage}[t]{0.5\linewidth}
\centering
\includegraphics[width=1\textwidth]{imgs/vis.jpg}
\caption{Visualization of the translatable flow space and the optical flow in VFI.
\textbf{Left:} the translatable flow space; \textbf{Right:} the optical flow.} \label{fig:vis} \end{minipage} \hspace{0.05cm} \begin{minipage}[t]{0.5\linewidth} \centering \begin{tikzpicture} \begin{axis}[ xlabel={$s$}, ylabel={PSNR (dB)}, xmin=0, xmax=1, ymin=26, ymax=36, xtick={0,0.1,0.2,0.3,0.4,0.5,0.6,0.7,0.8,0.9,1}, ytick={26,28,30,32,34,36}, legend pos=north west, ymajorgrids=true, grid style=dashed, width=5.2cm, height=3.2cm, ticklabel style={font=\tiny}, xlabel style={at={(1,0)}, right, yshift=0pt} ] \addplot[ color=blue, mark=square, ] coordinates { (0,26.5)(0.1,35.47)(0.2,35.31)(0.3,35.3)(0.4,35.57)(0.5,35.98)(0.6,35.82)(0.7,35.61)(0.8,35.4)(0.9,35.11)(1,30.95) }; \end{axis} \end{tikzpicture} \caption{PSNR vs. $s$ on the Adobe240-fps testing set. When $s=0.5$, the PSNR reaches the peak, while the performance is very stable by varying $s$ from $0.1$ to $0.9$.} \label{fig:ratio} \end{minipage} \vspace*{-1mm} \end{figure} \begin{table}[t!] \centering \caption{Quantitative comparison (PSNR, SSIM) between SDL and its variants on the Middleburry, UCF101, Vimeo90K and Adobe240fps datasets. The numbers in \textbf{bold} represent the best results.} \vspace*{-3mm} \resizebox{0.95\textwidth}{!}{ \begin{tabular}{l|c|c c|c c|c c|c c} \multirow{2}{*}{\textbf{Method}} & \multirow{2}{*}{\textbf{Training Dataset}} & \multicolumn{2}{c}{\textbf{Middleburry}} & \multicolumn{2}{c}{\textbf{UCF101}} & \multicolumn{2}{c}{\textbf{Vimeo90K}} & \multicolumn{2}{c}{\textbf{Adobe240fps}} \\ & & PSNR$\uparrow$ & SSIM$\uparrow$ & PSNR$\uparrow$ & SSIM$\uparrow$ & PSNR$\uparrow$ & SSIM$\uparrow$ & PSNR$\uparrow$ & SSIM$\uparrow$ \\ \hline SDL-vimeo90k & Vimeo90K & \textbf{37.49} & \textbf{0.967} & 35.27 & \textbf{0.951} & \textbf{35.56} & \textbf{0.965} & 26.52 & 0.811 \\ SDL-w/o-sdl & Vimeo90K+Adobe240fps & 36.96 & 0.964 & 35.24 & 0.950 & 35.38 & 0.964 & 26.51 & 0.817 \\ SDL-w/o-syn & Vimeo90K+Adobe240fps & 37.19 & 0.965 & 35.27 & \textbf{0.951} & 35.37 & 0.964 & 31.21 & 0.911 \\ SDL & Vimeo90K+Adobe240fps & 37.38 & \textbf{0.967} & \textbf{35.33} & \textbf{0.951} & 35.47 & \textbf{0.965} & \textbf{31.38} & \textbf{0.914} \\ \end{tabular}} \label{tab:vfi_ablation} \vspace*{-5mm} \end{table} \vspace*{-2mm} \subsection{Ablation Experiments} \label{sec:ablation} In this section, we conduct experiments to investigate the ratio of translatable flow features, and compare SDL with several of its variants. \textbf{Translatable Flow Features.} In order to find out the effect of $s$ (\ie, the ratio of translatable flow features to total features) of SDL, we set $s\in\{0,0.1,...,1\}$) and perform experiments on the Adobe240-fps testing set. The curve of PSNR versus $s$ is plotted in Fig.~\ref{fig:ratio}. We can see that the performance decreases significantly if all feature maps are assigned to non-translatable feature space (\ie, $s=0$) or translatable flow space (\ie, $s=1$). When $s=0.5$, the PSNR reaches the peak, while the performance is very stable by varying $s$ from $0.1$ to $0.9$. This is because SDL can learn to adjust its use of translatable and non-translatable features during training. % \textbf{The variants of SDL.} We compare SDL with several of its variants to validate the design and training of SDL. The first variant is denoted as SDL-vimeo90k, \ie, the model is trained using only the Vimeo90K dataset. The second variant is denoted as SDL-w/o-sdl, \ie, SDL without space decoupling learning by setting $s=0$. 
The third variant is denoted as SDL-w/o-syn, \ie, the synthesis network is replaced with several convolution layers. We evaluate SDL and its three variants on the Middlebury \emph{Other} \cite{Baker2007Middlebury}, UCF101 \cite{Soomro2012UCF101AD}, Vimeo90K \cite{Xue2019TOFlow} and Adobe240-fps \cite{Su2017Adobe240fps} testing sets, and the PSNR and SSIM results are listed in Table~\ref{tab:vfi_ablation}.
One can see that SDL-vimeo90k achieves the best SSIM indices on all the triplet datasets, and the best PSNR indices on Middlebury \emph{Other} and Vimeo90K, while using a smaller training dataset than SDL, which uses both Vimeo90K and Adobe240-fps in training. This is because there is a domain gap between Adobe240-fps and Vimeo90K, and hence SDL-vimeo90k can overfit to the three triplet datasets. Furthermore, SDL-vimeo90k performs poorly on the Adobe240-fps dataset. This implies that training SDL merely on triplets is insufficient for synthesizing continuous frames. Without decoupling the space, SDL-w/o-sdl performs much worse than the full SDL model, especially on the Adobe240-fps testing set. This validates that the space decoupling learning strategy boosts the VFI performance and plays a key role in continuous image transition. Without the GridNet \cite{Fourure2017Gridnet}, which is widely used as the synthesis network to improve VFI performance \cite{Niklaus2018Context,Niklaus2020Splatting}, SDL-w/o-syn maintains good VFI performance on all the datasets, with only a slight PSNR/SSIM decrease compared to the original SDL.
\begin{figure}[t!]
\centering
\includegraphics[width=0.8\textwidth]{imgs/aging_comp.pdf}
\vspace{-3mm}
\caption{Comparison of SDL with StyleGAN2 backpropagation on face aging. From left to right: input image, StyleGAN2 backpropagation \protect{\cite{Viazovetskyi2020Distillation}} and SDL. Note that artifacts can be generated by StyleGAN2 backpropagation, while SDL can synthesize the image more robustly.}
\label{fig:bad}
\vspace{-5mm}
\end{figure}
\begin{figure*}[t!]
\centering
\includegraphics[width=0.86\textwidth]{imgs/i2i_aging.pdf}
\vspace*{-5mm}
\caption{Comparison of SDL with competing methods on continuous face aging. From top to bottom: SDL, StyleGAN2 backpropagation \protect{\cite{Viazovetskyi2020Distillation}}, SAVI2I \protect{\cite{Mao2020ContinuousI2I}}, Lifespan \protect{\cite{Orel2020Lifespan}} and DNI \protect{\cite{Wang2019DNI}}.}
\label{fig:i2i}
\vspace*{-6mm}
\end{figure*}
\vspace*{-2mm}
\subsection{Applications beyond VFI}
\vspace*{-1mm}
The proposed SDL achieves leading performance in VFI without using optical flows. It can also be used to address other CIT applications beyond VFI, such as image-to-image translation and image morphing. In this section, we take face aging, face toonification and dog-to-dog image morphing as examples to demonstrate the generality of our SDL approach.
\textbf{Face Aging.}
\label{sec:I2I}
Unlike VFI, there is no public dataset available for training and assessing continuous I2I models. To solve this issue, we use StyleGAN \cite{Karras2019StyleGAN,Karras2020StyleGAN2}, which is a cutting-edge network for creating realistic images, to generate training data. Following \cite{Viazovetskyi2020Distillation}, we use StyleGAN2 distillation to synthesize datasets for face manipulation tasks such as aging. We first locate the direction vector associated with the attribute in the latent space, and then randomly sample latent codes to generate source images.
For each source image, we walk along the direction vector with equal pace to synthesize a number of target images. As shown in the middle image of Fig.~\ref{fig:bad}, StyleGAN2 distillation may not always generate faithful images. We thus manually check all the samples to remove unsatisfactory ones. Finally, $50,000$ samples are generated, and each sample contains $11$ images of resolution $1024\times 1024$. The dataset will be made publicly available.
The source image $I_0$ and a randomly selected target image $I_a$ ($a\in\{1,2,...,10\}$) are used as the inputs to train the SDL model. The corresponding value of $t$ is $a/10$. We also randomly replace the source image $I_0$ with the target image $I_{10}$ during training, and the corresponding value of $t$ is then set as $a/10-1$. In this way, the range of $t$ is extended from $[0,1]$ to $[-1,1]$, so that our model can produce both younger faces (by setting $t\in[-1,0)$) and older faces (by setting $t\in(0, 1]$). Note that SDL only needs the source image as input at inference time. Though trained on synthetic datasets, SDL can be readily used to handle real-world images.
Since only a few works have addressed the continuous I2I translation problem, we choose the methods \cite{Wang2019DNI,Mao2020ContinuousI2I,Orel2020Lifespan} whose training codes are publicly available for comparison, and re-train their models using our datasets. In particular, we employ the same supervised $L_1$ loss as ours to re-train those unsupervised methods for a fair comparison. Fig.~\ref{fig:i2i} shows the results of competing methods on continuous face aging. One can see that SDL clearly outperforms the competitors in generating realistic images. By synthesizing the non-translatable features in reconstruction, SDL also works much better in retaining the image background, for example, the mouth in the top-right corner. StyleGAN2 backpropagation \cite{Viazovetskyi2020Distillation} generates plausible aging faces; however, it fails to preserve the face identity and loses the image background. SDL also produces more stable results than StyleGAN2 backpropagation, as shown in Fig.~\ref{fig:bad}. It is worth mentioning that SDL is $10^3$ times faster than StyleGAN2 backpropagation, which requires time-consuming iterative optimization. SAVI2I \cite{Mao2020ContinuousI2I} fails to generate satisfactory intermediate images with photo-realistic details. Lifespan \cite{Orel2020Lifespan} adopts an off-the-shelf face segmentation algorithm to keep the background unchanged. However, the generated face images have low quality. To test DNI \cite{Wang2019DNI}, we train two Pix2PixHD \cite{Wang2018Pix2PixHD} models to generate younger and older faces, respectively, and blend their weights continuously. As can be seen, DNI \cite{Wang2019DNI} fails to produce reasonable transition results. Moreover, SDL can generate continuous image-to-image translations at arbitrary resolutions, which none of the competing methods can do. More visual comparison results can be found in the \textbf{supplementary materials}.
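A small sketch of the pairing scheme described above is given below, showing how an input pair and its control value $t$ could be assembled from one synthesized aging sequence $I_0,\dots,I_{10}$. The code is illustrative only (function and variable names are ours); the actual training pipeline follows the same logic.
\begin{verbatim}
import random

def sample_aging_pair(sequence):
    """Build (source, target, t) from an 11-image aging sequence
    I_0, ..., I_10 synthesized by StyleGAN2 distillation."""
    assert len(sequence) == 11
    a = random.randint(1, 10)
    if random.random() < 0.5:
        # Standard case: source I_0, target I_a, with t = a/10 in (0, 1].
        return sequence[0], sequence[a], a / 10.0
    # Replace the source by I_10 so that t = a/10 - 1 lies in [-1, 0),
    # which teaches the model to produce younger faces as well.
    return sequence[10], sequence[a], a / 10.0 - 1.0
\end{verbatim}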
\begin{figure*}[t!]
\centering
\includegraphics[width=0.86\textwidth]{imgs/I2I_toonify.pdf}
\vspace*{-5mm}
\caption{Comparison of SDL with competing methods on continuous face toonification. From top to bottom: SDL, Pinkney \emph{et al.} \cite{Pinkney2020ResolutionDG}, and SAVI2I \protect{\cite{Mao2020ContinuousI2I}}.}
\label{fig:toonification}
\vspace*{-5mm}
\end{figure*}
\textbf{Face Toonification.}
We first build a face toonification dataset by using the method of \emph{layer swapping} \cite{Pinkney2020ResolutionDG}. Specifically, we finetune a pretrained StyleGAN on a cartoon face dataset to obtain a new GAN, and then swap different scales of layers of the two GANs (\ie, the pretrained and the finetuned ones) to create a series of blended GANs, which can generate various levels of face toonification effects. Similar to face aging, we generate $50,000$ training samples, each containing $6$ images of resolution $1024\times 1024$. During training, we take the source image (\ie, $I_0$) as input and randomly choose a target image $I_a$, $a\in\{1,2,...,5\}$, as the ground-truth output. The corresponding value of $t$ is $a/5$.
We compare SDL with Pinkney \emph{et al.} \cite{Pinkney2020ResolutionDG} and SAVI2I \cite{Mao2020ContinuousI2I}, whose source codes are available. As shown in Fig.~\ref{fig:toonification}, SDL outperforms the competitors in producing visually more favourable results. Pinkney \emph{et al.} \cite{Pinkney2020ResolutionDG} produces plausible toonification effects, but it fails to retain the face identity and the image background. The generated face images of SAVI2I \cite{Mao2020ContinuousI2I} have low quality. Furthermore, SAVI2I \cite{Mao2020ContinuousI2I} merely synthesizes images with a resolution of $256\times 256$, while SDL can yield results at any resolution. More visual comparison results can be found in the \textbf{supplementary materials}.
\begin{figure*}[t!]
\centering
\includegraphics[width=0.86\textwidth,height=0.49\textwidth]{imgs/morphing.pdf}
\vspace*{-5mm}
\caption{Comparison of SDL with competing methods on dog-to-dog morphing. From top to bottom: SDL, StyleGAN2 backpropagation \protect{\cite{Viazovetskyi2020Distillation}}, CrossBreed \protect{\cite{Park2020Crossbreed}}, SAVI2I \protect{\cite{Mao2020ContinuousI2I}}, and FUNIT \protect{\cite{Liu2019Few}}.}
\label{fig:morphing}
\vspace*{-5mm}
\end{figure*}
\textbf{Dog-to-Dog Morphing.}
Similar to I2I translation, we synthesize training data for dog-to-dog morphing using StyleGAN2 \cite{Karras2020StyleGAN2} and BigGAN \cite{Brock2019BigGAN}. We randomly sample two latent codes to generate the source and target images. The intermediate images are obtained by interpolating the two codes in the latent space. We generate $50,000$ training samples, each containing $11$ images of resolution $512\times 512$. During training, we take the source and target images (\ie, $I_0, I_{10}$) as inputs and randomly choose an image $I_a$, $a\in\{1,2,...,9\}$, as the ground-truth output, with the corresponding value of $t$ being $a/10$.
Since few methods have been proposed for continuous image morphing, we compare SDL with I2I translation models, including CrossBreed \cite{Park2020Crossbreed}, SAVI2I \cite{Mao2020ContinuousI2I} and FUNIT \cite{Liu2019Few}. (We re-train their models using our datasets and the same supervised $L_1$ loss for a fair comparison.) As shown in Fig.~\ref{fig:morphing}, SDL achieves smooth morphing from one dog to another with vivid details. StyleGAN2 backpropagation \cite{Viazovetskyi2020Distillation} yields comparable results but it loses the background details. CrossBreed \cite{Park2020Crossbreed} and SAVI2I \cite{Mao2020ContinuousI2I} fail to generate satisfactory intermediate results. FUNIT \cite{Liu2019Few} produces smooth morphing; however, the generated dog images have low quality, and it fails to retain the image content when $t=0$ or $t=1$. Please refer to the \textbf{supplementary materials} for more visual comparisons.
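The morphing training sequences described above are obtained by simple linear interpolation between the two sampled latent codes before decoding; a minimal sketch is given below. It is illustrative only: \texttt{generator} stands for the respective StyleGAN2 or BigGAN generator, and the actual data synthesis (latent sampling, truncation, class conditioning) involves more details.
\begin{verbatim}
import torch

def synthesize_morphing_sequence(generator, z0, z1, steps=11):
    """Decode a sequence I_0, ..., I_10 by linearly interpolating
    between two sampled latent codes z0 and z1."""
    images = []
    with torch.no_grad():
        for i in range(steps):
            w = i / (steps - 1)            # interpolation weight in [0, 1]
            z = (1.0 - w) * z0 + w * z1    # linear interpolation in latent space
            images.append(generator(z))
    return images
\end{verbatim}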
\vspace*{-3mm}
\section{Conclusion}
\vspace*{-2mm}
We proposed a simple yet effective approach, namely space decoupled learning (SDL), for the VFI problem. We implicitly decoupled the images into a translatable flow space and a non-translatable feature space, and performed image interpolation in the flow space and intermediate image synthesis in the feature space. The proposed SDL can serve as a general-purpose solution to a variety of continuous image transition (CIT) problems. As demonstrated by our extensive experiments, SDL showed highly competitive performance with state-of-the-art methods, which were, however, specifically designed for their respective tasks. In particular, in the application of video frame interpolation, SDL is the first flow-free algorithm that can synthesize consecutive interpolations with leading performance. In other CIT tasks such as face aging, face toonification and dog-to-dog morphing, SDL exhibited much better visual quality and efficiency, preserving more foreground and background details.
\clearpage
\bibliographystyle{splncs04}
\bibliography{egbib}
\end{document}
https://openreview.net/forum?id=3d6PLMQm5Uj
https://arxiv.org/abs/2203.09771
Please evaluate the paper based on its approach of reformulating Video Frame Interpolation as a Continuous Image Transition task using Space Decoupled Learning, and its competitive results for various tasks.
The SDL method introduced in the paper is a solid contribution, as it can improve Continuous Image Transition (CIT) and be deployed for several CIT tasks. The paper seems technically correct, the experiments are exhaustive, the motivation is clear, and the paper is well written. Because of this, I vote to accept the paper.
Summary:
- The paper aims to improve video frame interpolation (VFI) by reformulating it as a continuous image transition (CIT) task. The authors propose a model based on Space Decoupled Learning (SDL) that can be used for multiple CIT tasks, including VFI. One of the advantages of SDL is that it does not require human knowledge of the data domain. Thorough experiments show that the proposed SDL achieves competitive results on a number of CIT tasks, including VFI, face ageing, face toonification, and dog-to-dog morphing.
Positive points:
+ The Space Decoupled Learning (SDL) for Continuous Image Transition (CIT) proposed in the paper seems a solid contribution and can be effectively deployed for several tasks, including VFI, face ageing, face toonification, and dog-to-dog morphing.
+ The paper seems technically correct. Thorough experiments and ablation studies are performed to show the effectiveness of the proposed method. Experiments on four VFI datasets are included, on which SDL achieves competitive performance.
+ The paper is easy to read, the motivation is clear and the literature review is exhaustive.
+ The proposed method is based on inductive priors which lead to decoupling the image space into a translatable flow space and a non-translatable feature space, therefore this paper is a good fit for the VIPriors workshop.
+ The paper seems reproducible.
Negative points:
- I did not find substantial flaws in the paper.
Per line comments:
225-234: In figure 1, the font size is very small, which makes the text hardly readable. I think the figure and caption are currently not self-explanatory. I would suggest adding an explanation of how to read the figure.
315: In Table 1 the font size is very small. I suggest increasing it to make the table more readable.
Rating: 8: Top 50% of accepted papers, clear accept
Confidence: 4: The reviewer is confident but not absolutely certain that the evaluation is correct
\documentclass[runningheads]{llncs}
\usepackage{graphicx}
\usepackage{tikz}
\usepackage{comment}
\usepackage{amsmath,amssymb} %
\usepackage{color}
\usepackage{booktabs}
\usepackage{multirow}
\usepackage{tabularx}
\usepackage{threeparttable}
\usepackage{ragged2e}
\usepackage{wrapfig}
\makeatletter
\@namedef{ver@everyshi.sty}{}
\makeatother
\usepackage{pgfplots}
\pgfplotsset{compat=1.9}
\usepackage[accsupp]{axessibility} %
\newcommand{\ie}{\emph{i.e.}}
\newcommand{\eg}{\emph{e.g.}}
\begin{document}
\pagestyle{headings}
\mainmatter
\def\ECCVSubNumber{4010} %
\title{Beyond a Video Frame Interpolator: A Space Decoupled Learning Approach to Continuous Image Transition} %
\titlerunning{Beyond a Video Frame Interpolator}
\author{Tao Yang\inst{1} \and Peiran Ren\inst{1} \and Xuansong Xie\inst{1} \and Xiansheng Hua\inst{1} \and Lei Zhang\inst{2}} %
\authorrunning{T. Yang et al.}
\institute{DAMO Academy, Alibaba Group \\ \email{\{yangtao9009@gmail.com, peiran\_r@sohu.com, xingtong.xxs@taobao.com, xiansheng.hxs@alibaba-inc.com\}} \\ \and Department of Computing, The Hong Kong Polytechnic University \\ \email{\{cslzhang@comp.polyu.edu.hk\}}}
\maketitle
\vspace*{-0.5cm}
\begin{abstract}
Video frame interpolation (VFI) aims to improve the temporal resolution of a video sequence. Most of the existing deep learning based VFI methods adopt off-the-shelf optical flow algorithms to estimate the bidirectional flows and interpolate the missing frames accordingly. Though having achieved great success, these methods require much human experience to tune the bidirectional flows and often generate unpleasant results when the estimated flows are not accurate. In this work, we rethink the VFI problem and formulate it as a continuous image transition (CIT) task, whose key issue is to transition an image from one space to another space continuously. More specifically, we learn to implicitly decouple the images into a translatable flow space and a non-translatable feature space. The former depicts the translatable states between the given images, while the latter aims to reconstruct the intermediate features that cannot be directly translated. In this way, we can easily perform image interpolation in the flow space and intermediate image synthesis in the feature space, obtaining a CIT model. The proposed space decoupled learning (SDL) approach is simple to implement, while it provides an effective framework for a variety of CIT problems beyond VFI, such as style transfer and image morphing. Our extensive experiments on a variety of CIT tasks demonstrate the superiority of SDL over existing methods. The source code and models can be found at \url{https://github.com/yangxy/SDL}. %
\keywords{Video Frame Interpolation, Continuous Image Transition, Image Synthesis, Space Decoupled Learning}
\end{abstract}
\section{Introduction}
\label{sec:intro}
Video frame interpolation (VFI) aims to synthesize intermediate frames between the given consecutive frames of a video to overcome the temporal limitations of camera sensors. VFI can be used in a variety of practical applications, including slow-motion generation \cite{Jiang2018Superslomo}, motion deblurring \cite{Shen2020BIN} and visual quality enhancement \cite{Xue2019TOFlow}. The conventional VFI approaches \cite{Baker2007ADA} usually calculate optical flows between the source and target images and gradually synthesize the intermediate images.
With the great success of deep neural networks (DNNs) in computer vision tasks \cite{Dong2015SRCNN,He2016ResNet,Redmon2016YOLO}, researchers have recently been focusing on developing DNNs to address the challenging issues of VFI. Most DNN based VFI algorithms can be categorized into flow-based \cite{Jiang2018Superslomo,Bao2019DAIN,Xu2019QVI,Niklaus2020Splatting}, kernel-based \cite{Niklaus2017Adaptive,Lee2020Adacof,Shen2020BIN}, and phase-based ones \cite{Meyer2015Phase,Meyer2018PhaseNet}. With the advancement of optical flow methods \cite{Sun2018PWC-Net,Bar-Haim2020ScopeFlow}, flow-based VFI algorithms have gained increasing popularity and shown good quantitative results on benchmarks \cite{Bao2019DAIN,Niklaus2020Splatting}. However, these methods require much human experience to tune the bidirectional flows, \eg, by using the forward \cite{Jiang2018Superslomo,Bao2019DAIN} and backward \cite{Niklaus2018Context,Niklaus2020Splatting} warping algorithms. In order to improve the synthesis performance, some VFI methods have been developed by resorting to the depth information \cite{Bao2019DAIN}, the acceleration information \cite{Xu2019QVI} and the softmax splatting \cite{Niklaus2020Splatting}. These methods, however, adopt the off-the-shelf optical flow algorithms, and hence they often generate unpleasant results when the estimated flows are not accurate.
To address the above issues, we rethink the VFI problem and aim to find a solution that is free of flows. Different from previous approaches, we formulate VFI as a continuous image transition (CIT) problem. It is anticipated that we could construct a smooth transition process from the source image to the target image so that VFI can be easily done. Actually, there are many CIT tasks in computer vision applications, such as image-to-image translation \cite{Isola2017Pix2Pix,Zhu2017CycleGAN}, image morphing \cite{Liu2019Few,Park2020Crossbreed} and style transfer \cite{Gatys2016Style,Huang2017Adain}. Different DNN models have been developed for different CIT tasks. Based on the advancement of deep generative adversarial network (GAN) techniques \cite{Brock2019BigGAN,Karras2019StyleGAN,Karras2020StyleGAN2}, deep image morphing methods have been proposed to generate images with smooth semantic changes by walking in a latent space \cite{Radford2016Unsupervised,Jahanian2020GANsteerability}. Similarly, various image-to-image translation methods have been developed by exploring intermediate domains \cite{Gong2019DLOW,Wu2019RelGANMI,Choi2020StarGANV2}, interpolating attribute \cite{Mao2020ContinuousI2I}, feature \cite{Upchurch2017DFI} or kernel \cite{Wang2019DNI} vectors, using physically inspired models for guidance \cite{Pizzati2021CoMoGAN}, and navigating latent spaces with discovered paths \cite{Chen2019Homomorphic,Jahanian2020GANsteerability}. Though significant progress has been achieved on CIT, existing methods usually rely on much human knowledge of the specific domain, and employ rather different models for different applications. %
In this work, we propose to learn a translatable flow space to control the continuous and smooth translation between two images, while synthesizing the image features that cannot be translated. Specifically, we present a novel space decoupled learning (SDL) approach for VFI. Our SDL implicitly decouples the image space into a translatable flow space and a non-translatable feature space.
With the decoupled image spaces, we can easily perform smooth image translation in the flow space, and synthesize intermediate image features in the non-translatable feature space. Interestingly, the proposed SDL approach can not only provide a flexible solution for VFI, but also provide a general and effective solution to other CIT tasks. To the best of our knowledge, the proposed SDL is the first flow-free algorithm which is however able to synthesize consecutive interpolations, achieving leading performance in VFI. SDL is easy-to-implement, and it can be readily integrated into off-the-shelf DNNs for different CIT tasks beyond VFI, serving as a general-purpose solution to the CIT problem. We conduct extensive experiments on various CIT tasks, including, VFI, image-to-image translation and image morphing, to demonstrate its effectiveness. Though using the same framework, SDL shows highly competitive performance with those state-of-the-art methods that are specifically designed for different CIT problems. \vspace{-2mm} \section{Related Work} \label{sec:work} \subsection{Video Frame Interpolation (VFI)} With the advancement of DNNs, recently significant progresses have been made on VFI. Long \emph{et al}. \cite{Long2016VFI} first attempted to generate the intermediate frames by taking a pair of frames as input to DNNs. This method yields blurry results since the motion information of videos is not well exploited. The latter works are mostly focused on how to effectively model motion and handle occlusions. Meyer \emph{et al}. \cite{Meyer2015Phase,Meyer2018PhaseNet} proposed phase-based models which represent motion as per-pixel phase shift. Niklaus \emph{et al}. \cite{Niklaus2017Adaptive,Niklaus2017Sepconv} came up with the kernel-based approaches that estimate an adaptive convolutional kernel for each pixel. Lee \emph{et al}. \cite{Lee2020Adacof} introduced a novel warping module named Adaptive Collaboration of Flows (AdaCoF). An end-to-end trainable network with channel attention was proposed by Choi \emph{et al}. \cite{Choi2020CAIN}, where frame interpolation is achieved without explicit estimation of motion. The kernel-based methods have achieved impressive results. However, they are not able to generate missing frames with arbitrary interpolation factors and usually fail to handle large motions due to the limitation of kernel size. Unlike phase-based or kernel-based methods, flow-based models explicitly exploit motion information of videos \cite{Jiang2018Superslomo,Bao2019DAIN,Xu2019QVI,Niklaus2020Splatting}. With the advancement of optical flow methods \cite{Sun2018PWC-Net,Bar-Haim2020ScopeFlow}, flow-based VFI algorithms have become popular due to their good performance. Niklaus and Liu \cite{Niklaus2018Context} adopted forward warping to synthesize intermediate frames. This algorithm suffers from holes and overlapped pixels, and it was later improved by the softmax splatting method \cite{Niklaus2020Splatting}, which can seamlessly map multiple source pixels to the same target location. Since forward warping is not very intuitive to use, most flow-based works adopt backward warping. Jiang \emph{et al}. \cite{Jiang2018Superslomo} jointly trained two U-Nets \cite{Ronneberger2015Unet}, which respectively estimate the optical flows and perform bilateral motion approximation to generate intermediate results. Reda \emph{et al}. \cite{Reda2019UVI} and Choi \emph{et al}. \cite{Choi2020Meta} further improved this work by introducing cycle consistency loss and meta-learning, respectively. 
Bao \emph{et al}. \cite{Bao2019DAIN} explicitly detected the occlusion by exploring the depth information, but the VFI performance is sensitive to depth estimation accuracy. To exploit the acceleration information, Xu \emph{et al}. \cite{Xu2019QVI} proposed a quadratic VFI method. Recently, Park \emph{et al}. \cite{Park2020BMBC} proposed a bilateral motion network to estimate intermediate motions directly. \subsection{Continuous Image Transition (CIT)} In many image transition tasks, the key problem can be formulated as how to transform an image from one state to another state. DNN based approaches have achieved impressive results in many image transition tasks, such as image-to-image translation \cite{Isola2017Pix2Pix,Zhu2017CycleGAN,Wang2018Pix2PixHD}, style transfer \cite{Gatys2016Style,Johnson2016Perceptual}, image morphing \cite{Chen2019Homomorphic} and VFI \cite{Lee2020Adacof,Niklaus2017Sepconv}. However, these methods are difficult to achieve continuous and smooth transition between images. A continuous image transition (CIT) approach is desired to generate the intermediate results for a smooth transition process. Many researches on image-to-image translation and image morphing resort to finding a latent feature space and blending image features therein \cite{Upchurch2017DFI,Mao2020ContinuousI2I,Pizzati2021CoMoGAN}. However, these methods need to explicitly define the feature space based on human knowledge of the domain. Furthermore, encoding an image to a latent code often results in the loss of image details. Alternatively, methods on image morphing and VFI first establish correspondences between the input images, for example, by using a warping function or bidirectional optical flows, to perform shape deformation of image objects, and then gradually blend images for smooth appearance transition \cite{Wolberg1998Morph,Liao2014Morph,Bao2019DAIN,Niklaus2020Splatting}. Unfortunately, it is not easy to accurately specify the correspondences, leading to superimposed appearance of the intermediate results. In addition to generating a continuous transition between two input images (source and target), there are also methods to synthesize intermediate results between two different outputs \cite{Huang2017Adain,Hong2021Domain}. \textbf{Image-to-image Translation:} Isola \emph{et al}. \cite{Isola2017Pix2Pix} showed that the conditional adversarial networks (cGAN) can be a good solution to image-to-image (I2I) translation problems. Many following works, such as unsupervised learning \cite{Zhu2017CycleGAN}, disentangled learning \cite{Lee2018DRIT}, few-shot learning \cite{Liu2019Few}, high resolution image synthesis \cite{Wang2018Pix2PixHD}, multi-domain translation \cite{Choi2018Stargan}, multi-modal translation \cite{Zhu2017Multimodal}, have been proposed to extend cGAN to different scenarios. Continuous I2I has also attracted much attention. A common practice to this problem is to find intermediate domains by weighting discriminator \cite{Gong2019DLOW} or adjusting losses \cite{Wu2019RelGANMI}. Some methods have been proposed to enable controllable I2I by interpolating attribute \cite{Mao2020ContinuousI2I} or feature \cite{Upchurch2017DFI} or kernel \cite{Wang2019DNI} vectors. Pizzati \emph{et al}. \cite{Pizzati2021CoMoGAN} proposed a model-guided framework that allows non-linear interpolations. \textbf{Image Morphing:} Conventional image morphing methods mostly focus on reducing user-intervention in establishing correspondences between the two images \cite{Wolberg1998Morph}. 
Smythe \cite{Smythe1990Morph} used pairs of mesh nodes for correspondences. Beier and Neely \cite{Beier1992Morph} developed field morphing utilizing simpler line segments instead of meshes. Liao \emph{et al}. \cite{Liao2014Morph} performed optimization of warping fields in a specific domain. Recently, methods \cite{Park2020Crossbreed,Abdal2019Img2StyleGAN,Jahanian2020GANsteerability} have been proposed to achieve efficient image morphing by manipulating the latent space of GANs \cite{Brock2019BigGAN,Karras2020StyleGAN2}. However, these methods often result in the loss of image details and require time-consuming iterative optimization during inference. Mao \emph{et al.} \cite{Mao2020ContinuousI2I} and Pizzati \emph{et al}. \cite{Pizzati2021CoMoGAN} decoupled content and style spaces using disentangled representations. They achieved continuous style interpolations by blending the style vectors. However, these methods preserve the content of the source image and are thus not suitable for image morphing. Park \emph{et al.} \cite{Park2020Crossbreed} overcame this limitation by performing interpolation in both the content and style spaces.
As can be seen from the above discussions, existing works basically design rather different models for different CIT tasks. In this work, we aim to develop a space decoupled learning approach to perform different CIT tasks, including VFI, image-to-image translation and image morphing, by using the same framework.
\section{Proposed Method}
\label{sec:proposed}
\subsection{Problem Formulation}
\label{sec:problem}
Given a source image $I_0$ and a target image $I_1$, the goal of VFI is to synthesize an intermediate result $I_t$ from them:
\begin{equation}
I_t=\mathcal{G}(I_0, I_1, t),
\label{eqn:general}
\end{equation}
where $t\in(0,1)$ is a control parameter and $\mathcal{G}$ is a transition mapping function. To better preserve image details, researchers \cite{Bao2019DAIN,Xu2019QVI,Niklaus2020Splatting} have resorted to using bidirectional optical flows \cite{Sun2018PWC-Net,Teed2020RAFT} of $I_0$ and $I_1$, denoted by $F_{0\rightarrow1}$ and $F_{1\rightarrow0}$, to establish the motion correspondence between two consecutive frames. With the help of optical flows, $I_t$ can be obtained as follows:
\begin{equation}
I_t=\mathcal{G}(I_0, I_1, \mathcal{B}(F_{0\rightarrow1}, F_{1\rightarrow0}, t)),
\label{eqn:vfi}
\end{equation}
where $\mathcal{B}$ is a blending function. Forward \cite{Niklaus2018Context,Niklaus2020Splatting} and backward \cite{Bao2019DAIN,Xu2019QVI} warping algorithms have been proposed to perform the blending $\mathcal{B}$ in Eq.~(\ref{eqn:vfi}).
The above idea for VFI coincides with some image morphing works \cite{Wolberg1998Morph,Liao2014Morph,Fish2020MorphGAN}, where the warping function, instead of optical flow, is used to mark the object shape changes in the images. However, it is not easy to accurately specify the correspondences using warping, resulting in a superimposed morphing appearance. This inspires us to model VFI as a CIT problem and to seek a more effective and unified solution.
One popular solution to CIT is to embed the images into a latent space, and then blend the image feature codes therein:
\begin{equation}
I_t=\mathcal{G}(\mathcal{B}(L_0, L_1, t)),
\label{eqn:latent}
\end{equation}
where $L_0, L_1$ represent the latent codes of $I_0, I_1$ in the latent space, respectively. For example, StyleGAN \cite{Karras2019StyleGAN} performs \emph{style mixing} by blending the latent codes at various scales.
To gain flexible user control, disentangled learning methods \cite{Mao2020ContinuousI2I,Liu2019Few,Pizzati2021CoMoGAN} were later proposed to decompose the latent space into the content and style representations. The smooth style mixing can be achieved by interpolating the style vectors as follows: \begin{equation} I_t=\mathcal{G}(L_0^c, \mathcal{B}(L_0^s, L_1^s, t)), \label{eqn:disentangle} \end{equation} where $L_0^s, L_1^s$ are the style representation vectors of $L_0, L_1$, respectively, and $L_0^c$ is the content vector of $L_0$. In this case, $I_1$ serves as the ``style'' input and the content of $I_0$ is preserved. However, the above formulation is hard to use in tasks such as image morphing. Though impressive advancements have been made, the above CIT methods require much human knowledge to explicitly define the feature space, while embedding an image into a latent code needs time-consuming iterative optimization and sacrifices image details. \begin{figure*}[t!] \centering \includegraphics[width=0.9\textwidth]{imgs/SDL_arch.pdf} \caption{The architecture of our space decoupled learning (SDL) method.} \label{fig:arch} \end{figure*} \subsection{Space Decoupled Learning} \label{sec:sdl} As discussed in Section \ref{sec:problem}, previous works employ rather different models for different CIT applications. One interesting question is: can we find a common yet more effective framework to different CIT tasks? We make an in-depth investigation of this issue and present such a framework in this section. The latent space aims to depict the essential image features and patterns of original data. It is expected that in the latent space, the correspondences of input images $I_0$ and $I_1$ can be well built. In other words, the latent codes $L_0, L_1$ in Eq.~(\ref{eqn:latent}) play the role of optical flows $F_{0\rightarrow1}, F_{1\rightarrow0}$ in Eq.~(\ref{eqn:vfi}). Both of Eq.~(\ref{eqn:latent}) and Eq.~(\ref{eqn:vfi}) blend the correspondence of two images to obtain the desired output. The difference lies in that the latent code representation of an image in Eq.~(\ref{eqn:latent}) may lose certain image details, while in Eq.~(\ref{eqn:vfi}) the original inputs $I_0, I_1$ are involved into the reconstruction, partially addressing this problem. From the above discussion, we can conclude that the key to CIT tasks is how to smoothly blend the image features whose correspondences can be well built, while reconstruct the image features whose correspondences are hard to obtain. We thus propose to decouple the image space into two sub-spaces accordingly: a \textit{translatable flow space}, denoted by $P$, where the features can be smoothly and easily blended with $t$, and a \textit{non-translatable feature space}, denoted by $Q$, where the features cannot be blended but should be synthesized. With $P$ and $Q$, we propose a unified formulation of CIT problems as follows: \begin{equation} I_t=\mathcal{G}(Q_{0\rightarrow1}, \mathcal{B}(P_{0\rightarrow1}, t)). \label{eqn:sdl} \end{equation} The subscript ``$0\rightarrow1$'' means the transition is from $I_0$ to $I_1$. With Eq.~(\ref{eqn:sdl}), we continuously transition those translatable image components in $P$, and reconstruct the intermediate features that cannot be directly transitioned in $Q$. Now the question turns to how to define the spaces of $P$ and $Q$. 
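Before detailing how $P$ and $Q$ are obtained, it may help to contrast the formulations above at the interface level: Eq.~(\ref{eqn:vfi}) blends explicitly estimated flows, Eq.~(\ref{eqn:latent}) blends latent codes, while Eq.~(\ref{eqn:sdl}) blends only the learned translatable part $P$ and synthesizes from $Q$. The following schematic pseudo-code is only an illustrative outline (all function names are placeholders), not our actual implementation.
\begin{verbatim}
# Flow-based VFI: blend bidirectional optical flows, then synthesize.
def vfi_flow_based(I0, I1, t, estimate_flow, blend, synthesize):
    F01, F10 = estimate_flow(I0, I1), estimate_flow(I1, I0)
    return synthesize(I0, I1, blend(F01, F10, t))

# Latent-space CIT: encode both images, blend the codes, then decode.
def cit_latent(I0, I1, t, encode, blend, decode):
    return decode(blend(encode(I0), encode(I1), t))

# Proposed SDL: decouple into (P, Q), blend only P, synthesize with Q.
def cit_sdl(I0, I1, t, decouple, blend, synthesize):
    P, Q = decouple(I0, I1)
    return synthesize(Q, blend(P, t))
\end{verbatim}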
Unlike many previous CIT methods \cite{Mao2020ContinuousI2I,Pizzati2021CoMoGAN} which explicitly define the feature spaces using much human knowledge, we propose to learn $P$ and $Q$ implicitly from training data. We learn a decoupling operator, denoted by $\mathcal{D}$, to decompose the image space of $I_0$ and $I_1$ to the translatable flow space $P$ and the non-translatable feature space $Q$: \begin{equation} (P_{0\rightarrow1}, Q_{0\rightarrow1}) \leftarrow \mathcal{D}(I_0, I_1). \label{eqn:decouple} \end{equation} Specifically, we use several convolutional layers to implement the space decoupling operator $\mathcal{D}$. To gain performance, $\mathcal{D}$ is learned on multiple scales. The proposed method, namely space decoupled learning (SDL), requires no human knowledge of the domain, and it can serve as an effective and unified solution to different CIT tasks. The architecture of SDL is a U-shaped DNN, as illustrated in Fig.~\ref{fig:arch}. Unlike standard U-Net \cite{Ronneberger2015Unet}, a novel \emph{SDL unit} is introduced in the decoder part of our network. The detailed structure of the SDL unit is depicted in the right-bottom corner of Fig.~\ref{fig:arch}. The inputs of the SDL unit are the feature maps decomposed in previous convolution layers. Let $C$ be the number of input feature maps and $s\in(0,1)$ be the ratio of translatable flow features to the total features. $s$ is a hyper-parameter controlled by users (we will discuss how to set it in Section~\ref{sec:expriment}). We then split the channel number of input feature maps in $P$ and $Q$ as $s*C$ and $C-s*C$, and perform the blending $\mathcal{B}$ on $P$ while keeping $Q$ unchanged. There are multiple ways to perform the blending. For example, $\mathcal{B}$ can be achieved by scaling the features with factor $t$: $\mathcal{B}(P_{0\rightarrow1}, t)=t*P_{0\rightarrow1}$, which results in linear interpolation in $P$ and is used in our experiments. Afterwards, the blended $P$ and $Q$ are concatenated as the output of the SDL unit. A merging operator $\mathcal{M}$ (also learned as several convolutional layers like $\mathcal{D}$) is followed to rebind the decoupled spaces on multiple scales. A synthesis network is also adopted to improve the final transition results. We employ a GridNet architecture \cite{Fourure2017Gridnet} for it with three rows and six columns. Following the work of Niklaus \emph{et al}. \cite{Niklaus2020Splatting}, some modifications are utilized to address the checkerboard artifacts. The detailed architecture of the synthesis network can be found in the \textbf{supplementary materials}. In addition, it is worth mentioning that $t$ works with the loss function during training if necessary. Details can be found in the section of experiments. \subsection{Training Strategy} To train SDL model for VFI, we adopt two loss functions: the Charbonnier loss \cite{Charbonnier1994Loss} $\mathcal{L}_C$ and the perceptual loss \cite{Johnson2016Perceptual} $\mathcal{L}_P$. The final loss $\mathcal{L}$ is as follows: \begin{equation} \mathcal{L}=\alpha\mathcal{L}_C+\beta\mathcal{L}_P, \end{equation} where $\alpha$ and $\beta$ are balancing parameters. The content loss $\mathcal{L}_C$ enforces the fine features and preserves the original color information. The perceptual loss $\mathcal{L}_P$ can be better balanced to recover more high-quality details. We use the $conv5\_4$ feature maps before activation in the pre-trained VGG19 network \cite{Simonyan2014VGG} as the perceptual loss. 
In our experiments, we empirically set $\alpha=1$ and $\beta=0.1$. For other CIT applications including image-to-image translation and image morphing, GAN plays a key role to generate high-quality results in order to alleviate superimposed appearances. In our implementation, we use PatchGAN developed by Isola \emph{et al.} \cite{Isola2017Pix2Pix} for adversarial training. The final loss is the sum of the $\mathcal{L}_1$ loss and PatchGAN loss with equal weights. \begin{table*}[t] \centering \caption{Quantitative comparison (PSNR, SSIM, runtime) of different methods on the Middleburry, UCF101, Vimeo90K and Adobe240fps datasets. The runtime is reported as the average time to process a pair of $640\times 480$ images. The numbers in \textbf{bold} represent the best performance. The upper part of the table presents the results of kernel-based methods, and the lower part presents the methods that can perform smooth frame interpolations. ``-'' means that the result is not available.} \vspace*{-3mm} \resizebox{0.9\textwidth}{!}{\begin{threeparttable}\begin{tabular}{l|c|c|c c|c c|c c|c c} \multirow{2}{*}{\textbf{Method}} & \multirow{2}{*}{\textbf{Training Dataset}} & \multicolumn{1}{c}{\textbf{Runtime}} & \multicolumn{2}{c}{\textbf{Middleburry}} & \multicolumn{2}{c}{\textbf{UCF101}} & \multicolumn{2}{c}{\textbf{Vimeo90K}} & \multicolumn{2}{c}{\textbf{Adobe240fps}} \\ & & (ms) & PSNR$\uparrow$ & SSIM$\uparrow$ & PSNR$\uparrow$ & SSIM$\uparrow$ & PSNR$\uparrow$ & SSIM$\uparrow$ & PSNR$\uparrow$ & SSIM$\uparrow$ \\ \hline SepConv \protect{\cite{Niklaus2017Sepconv}} & proprietary & 57 & 35.73 & 0.959 & 34.70 & 0.947 & 33.79 & 0.955 & - & - \\ CAIN \protect{\cite{Choi2020CAIN}} & proprietary & 56 & 35.07 & 0.950 & 34.97 & 0.950 & 34.64 & 0.958 & - & - \\ AdaCof \protect{\cite{Lee2020Adacof}} & Vimeo90K & 77 & 35.71 & 0.958 & 35.16 & 0.950 & 34.35 & 0.956 & - & - \\ CDFI \protect{\cite{Ding2021CDFI}} & Vimeo90K & 248 & 37.14 & 0.966 & 35.21 & 0.950 & 35.17 & 0.964 & - & - \\ \hline \hline SuperSloMo \protect{\cite{Jiang2018Superslomo}} & Adobe240fps+Youtube240fps & 67 & 33.64 & 0.932 & 33.14 & 0.938 & 32.68 & 0.938 & 30.76 & 0.902 \\ DAIN \protect{\cite{Bao2019DAIN}} & Vimeo90K & 831 & 36.70 & 0.964 & 35.00 & 0.949 & 34.70 & 0.963 & 29.22 & 0.877 \\ BMBC \protect{\cite{Park2020BMBC}} & Vimeo90K & 3008 & 36.78 & 0.965 & 35.15 & 0.950 & 35.01 & \textbf{0.965} & 29.56 & 0.881 \\ EDSC \protect{\cite{Cheng2021EDSC}} & Vimeo90K-Septuplet & 60 & 36.81 & \textbf{0.967} & 35.06 & 0.946 & 34.57 & 0.956 & 30.28 & 0.900 \\ SDL & Vimeo90K+Adobe240fps & \textbf{42} & \textbf{37.38} & \textbf{0.967} & \textbf{35.33} & \textbf{0.951} & \textbf{35.47} & \textbf{0.965} & \textbf{31.38} & \textbf{0.914} \\ \end{tabular} \end{threeparttable}} \label{tab:vficomp} \vspace*{-2mm} \end{table*} \begin{figure*}[t!] \centering \includegraphics[width=0.9\textwidth]{imgs/vfi.pdf} \vspace*{-5mm} \caption{Visual comparison of competing methods on the Vimeo90K test set. 
(a) SepConv \protect\cite{Niklaus2017Sepconv}; (b) SuperSloMo \protect\cite{Jiang2018Superslomo}; (c) CAIN \protect\cite{Choi2020CAIN}; (d) EDSC \protect\cite{Cheng2021EDSC}; (e) DAIN \protect\cite{Bao2019DAIN}; (f) BMBC \protect\cite{Park2020BMBC}; (g) SDL; (h) Ground truth.} \label{fig:vimeo} \end{figure*} \vspace*{-2mm} \section{Experiments and Applications} \vspace*{-1mm} \label{sec:expriment} In this section, we first conduct extensive experiments on VFI to validate the effectiveness of our SDL method, and then apply SDL to other CIT tasks beyond VFI, such as face aging, face toonification and image morphing, to validate the generality of SDL. \vspace*{-2mm} \subsection{Datasets and Training Settings for VFI} \vspace*{-1mm} There are several datasets publicly available for training and evaluating VFI models, including Middlebury \cite{Baker2007Middlebury}, UCF101 \cite{Soomro2012UCF101AD}, Vimeo90K \cite{Xue2019TOFlow} and Adobe240-fps \cite{Su2017Adobe240fps}. The Middlebury dataset contains two subsets, \ie, \emph{Other} and \emph{Evaluation}. The former provides ground-truth middle frames, while the later hides the ground-truth, and the users are asked to upload their results to the benchmark website for evaluation. The UCF101 dataset \cite{Soomro2012UCF101AD} contains $379$ triplets of human action videos, which can be used for testing VFI algorithms. The frame resolution of the above two datasets is $256\times256$. We combine the training subsets in Adobe240-fps and Vimeo90K to train our SDL model. The Vimeo90K dataset \cite{Xue2019TOFlow} has $51,312$ ($3,782$) triplets for training (testing), where each triplet contains $3$ consecutive video frames of resolution $256\times448$. This implicitly sets the value of $t$ to $0.5$, and hence it is insufficient to train our SDL model for finer time intervals. We further resort to the Adobe240-fps dataset \cite{Su2017Adobe240fps}, which is composed of high frame-rate videos, for model training. We first extract the frames of all video clips, and then group the extracted frames with $12$ frames per group. There is no overlap between any two groups. During training, we randomly select $3$ frames $I_a, I_b, I_c$ from a group as a triplet, where $\{a,b,c\}\in\{0,1,...,11\}$ and $a<b<c$. The corresponding value of $t$ can be calculated as $(b-a)/(c-a)$. We also randomly reverse the direction of the sequence for data augmentation ($t$ is accordingly changed to $1-t$). Each video frame is resized to have a shorter spatial dimension of $360$ and a random crop of $256\times256$. Horizontal flip is performed for data augmentation. Following SuperSloMo \cite{Jiang2018Superslomo}, we use $112$ video clips for training and the rest $6$ for validation. During model updating, we adopt the Adam \cite{Kingma2015AdamAM} optimizer with a batch size of $48$. The initial learning rate is set as $2\times 10^{-4}$, and it decays by a factor of $0.8$ for every 100K iterations. The model is updated for 600K iterations. \subsection{Comparisons with State-of-the-arts} We evaluate the performance of the proposed SDL method in comparison with two categories of state-of-the-art VFI algorithms, whose source codes or pretrained models are publicly available. The first category of methods allow frame interpolation at arbitrary time, including SuperSloMo \cite{Jiang2018Superslomo}, DAIN \cite{Bao2019DAIN}, BMBC \cite{Park2020BMBC} and EDSC \cite{Cheng2021EDSC}. 
The second category is kernel-based algorithms, including SepConv \cite{Niklaus2017Sepconv}, CAIN \cite{Choi2020CAIN}, AdaCof \cite{Lee2020Adacof} and CDFI \cite{Ding2021CDFI}, which can only perform frame interpolation iteratively at the power of $2$. The PSNR and SSIM \cite{Wang2004SSIM} indices are used for quantitative comparisons. Table~\ref{tab:vficomp} provides the PSNR/SSIM and runtime results on the Middlebury \emph{Other} \cite{Baker2007Middlebury}, UCF101 \cite{Soomro2012UCF101AD}, Vimeo90K \cite{Xue2019TOFlow} and Adobe240-fps \cite{Su2017Adobe240fps} testing sets. In all experiments, the first and last frames of each group are taken as inputs. On the first three datsets, we set $t=0.5$ to interpolate the middle frame. While on the high frame rate Adobe240-fps dataset, we vary $t\in\{\frac{1}{11},\frac{2}{11},...,\frac{10}{11}\}$ to produce the intermediate $10$ frames, which is beyond the capability of kernel-based methods \cite{Niklaus2017Sepconv,Choi2020CAIN,Lee2020Adacof,Ding2021CDFI}. All the methods are tested on a NVIDIA V100 GPU, and we calculate the average processing time for $10$ runs. From Table~\ref{tab:vficomp}, one can see that the proposed SDL approach achieves best PSNR/SSIM indices on all the datasets, while it has the fastest running speed. The kernel-based method CDFI \cite{Ding2021CDFI} also achieves very good PSNR/SSIM results. However, it often fails to handle large motions due to the limitation of kernel size. The flow-based methods such as DAIN \cite{Bao2019DAIN} address this issue by referring to bidirectional flows, while inevitably suffer from inaccurate estimations. The proposed SDL implicitly decouples the images into a translatable flow space and a non-translatable feature space, avoiding the side effect of inaccurate flows. Fig.~\ref{fig:vimeo} presents some visual comparisons of the VFI results of competing methods. It can be seen that our SDL method preserves better the image fine details and edge structures especially in scenarios with complex motions, where inaccurate flow estimations are commonly observed. SDL manages to address this difficulty by implicitly decoupling the images into a translatable flow space and a non-translatable feature space, and hence resulting in better visual quality with fewer interpolation artifacts. More visual comparison results can be found in the \textbf{supplementary material}. In the task of VFI, optical flow is widely used to explicitly align the adjacent frames. However, this may lead to visual artifacts on pixels where the flow estimation is not accurate. In our SDL, we decouple the image space into a translatable flow space and a non-translatable feature space, and only perform interpolation in the former one, avoiding the possible VFI artifacts caused by inaccurate flow estimation. In Fig.~\ref{fig:vis}, we visualize the the translatable flow space and compare it with the optical flow obtained by SpyNet \cite{Ranjan2017SpyNet}. As can be seen, the translatable flow space matches the optical flow on the whole, while it focuses more on the fine details and edge structures that are import to synthesize high-quality results. \begin{figure}[t!] \begin{minipage}[t]{0.5\linewidth} \centering \includegraphics[width=1\textwidth]{imgs/vis.jpg} \caption{Visualization of the translatable flow space and the optical flow in VFI. 
\textbf{Left:} the translatable flow space; \textbf{Right:} the optical flow.} \label{fig:vis} \end{minipage} \hspace{0.05cm} \begin{minipage}[t]{0.5\linewidth} \centering \begin{tikzpicture} \begin{axis}[ xlabel={$s$}, ylabel={PSNR (dB)}, xmin=0, xmax=1, ymin=26, ymax=36, xtick={0,0.1,0.2,0.3,0.4,0.5,0.6,0.7,0.8,0.9,1}, ytick={26,28,30,32,34,36}, legend pos=north west, ymajorgrids=true, grid style=dashed, width=5.2cm, height=3.2cm, ticklabel style={font=\tiny}, xlabel style={at={(1,0)}, right, yshift=0pt} ] \addplot[ color=blue, mark=square, ] coordinates { (0,26.5)(0.1,35.47)(0.2,35.31)(0.3,35.3)(0.4,35.57)(0.5,35.98)(0.6,35.82)(0.7,35.61)(0.8,35.4)(0.9,35.11)(1,30.95) }; \end{axis} \end{tikzpicture} \caption{PSNR vs. $s$ on the Adobe240-fps testing set. When $s=0.5$, the PSNR reaches the peak, while the performance is very stable by varying $s$ from $0.1$ to $0.9$.} \label{fig:ratio} \end{minipage} \vspace*{-1mm} \end{figure} \begin{table}[t!] \centering \caption{Quantitative comparison (PSNR, SSIM) between SDL and its variants on the Middleburry, UCF101, Vimeo90K and Adobe240fps datasets. The numbers in \textbf{bold} represent the best results.} \vspace*{-3mm} \resizebox{0.95\textwidth}{!}{ \begin{tabular}{l|c|c c|c c|c c|c c} \multirow{2}{*}{\textbf{Method}} & \multirow{2}{*}{\textbf{Training Dataset}} & \multicolumn{2}{c}{\textbf{Middleburry}} & \multicolumn{2}{c}{\textbf{UCF101}} & \multicolumn{2}{c}{\textbf{Vimeo90K}} & \multicolumn{2}{c}{\textbf{Adobe240fps}} \\ & & PSNR$\uparrow$ & SSIM$\uparrow$ & PSNR$\uparrow$ & SSIM$\uparrow$ & PSNR$\uparrow$ & SSIM$\uparrow$ & PSNR$\uparrow$ & SSIM$\uparrow$ \\ \hline SDL-vimeo90k & Vimeo90K & \textbf{37.49} & \textbf{0.967} & 35.27 & \textbf{0.951} & \textbf{35.56} & \textbf{0.965} & 26.52 & 0.811 \\ SDL-w/o-sdl & Vimeo90K+Adobe240fps & 36.96 & 0.964 & 35.24 & 0.950 & 35.38 & 0.964 & 26.51 & 0.817 \\ SDL-w/o-syn & Vimeo90K+Adobe240fps & 37.19 & 0.965 & 35.27 & \textbf{0.951} & 35.37 & 0.964 & 31.21 & 0.911 \\ SDL & Vimeo90K+Adobe240fps & 37.38 & \textbf{0.967} & \textbf{35.33} & \textbf{0.951} & 35.47 & \textbf{0.965} & \textbf{31.38} & \textbf{0.914} \\ \end{tabular}} \label{tab:vfi_ablation} \vspace*{-5mm} \end{table} \vspace*{-2mm} \subsection{Ablation Experiments} \label{sec:ablation} In this section, we conduct experiments to investigate the ratio of translatable flow features, and compare SDL with several of its variants. \textbf{Translatable Flow Features.} In order to find out the effect of $s$ (\ie, the ratio of translatable flow features to total features) of SDL, we set $s\in\{0,0.1,...,1\}$) and perform experiments on the Adobe240-fps testing set. The curve of PSNR versus $s$ is plotted in Fig.~\ref{fig:ratio}. We can see that the performance decreases significantly if all feature maps are assigned to non-translatable feature space (\ie, $s=0$) or translatable flow space (\ie, $s=1$). When $s=0.5$, the PSNR reaches the peak, while the performance is very stable by varying $s$ from $0.1$ to $0.9$. This is because SDL can learn to adjust its use of translatable and non-translatable features during training. % \textbf{The variants of SDL.} We compare SDL with several of its variants to validate the design and training of SDL. The first variant is denoted as SDL-vimeo90k, \ie, the model is trained using only the Vimeo90K dataset. The second variant is denoted as SDL-w/o-sdl, \ie, SDL without space decoupling learning by setting $s=0$. 
The third variant is denoted as SDL-w/o-syn, \ie, the synthesis network is replaced with several convolution layers. We evaluate SDL and its three variants on the Middlebury \emph{Other} \cite{Baker2007Middlebury}, UCF101 \cite{Soomro2012UCF101AD}, Vimeo90K \cite{Xue2019TOFlow} and Adobe240-fps \cite{Su2017Adobe240fps} testing sets, and the PSNR and SSIM results are listed in Table~\ref{tab:vfi_ablation}. One can see that SDL-vimeo90k achieves the best SSIM indices on all the triplet datasets, and the best PSNR indices on Middlebury \emph{Other} and Vimeo90K, by using a smaller training dataset than SDL, which uses both Vimeo90K and Adobe240-fps in training. This is because there is a domain gap between Adobe240-fps and Vimeo90K, and hence SDL-vimeo90k can overfit the three triplet datasets. Furthermore, SDL-vimeo90k performs poorly on the Adobe240-fps dataset. This implies that training SDL using merely triplets fails to synthesize continuous frames. Without decoupling the space, SDL-w/o-sdl performs much worse than the full SDL model, especially on the Adobe240-fps testing set. This validates that the space decoupling learning strategy boosts the VFI performance and plays a key role in continuous image transition. Without the GridNet \cite{Fourure2017Gridnet}, which is widely used as the synthesis network to improve VFI performance \cite{Niklaus2018Context,Niklaus2020Splatting}, SDL-w/o-syn maintains good VFI performance on all the datasets with only a slight PSNR/SSIM decrease compared to the original SDL. \begin{figure}[t!] \centering \includegraphics[width=0.8\textwidth]{imgs/aging_comp.pdf} \vspace{-3mm} \caption{Comparison of SDL with StyleGAN2 backpropagation on face aging. From left to right: input image, StyleGAN2 backpropagation \protect{\cite{Viazovetskyi2020Distillation}} and SDL. Note that artifacts can be generated by StyleGAN2 backpropagation, while SDL can synthesize the image more robustly.} \label{fig:bad} \vspace{-5mm} \end{figure} \begin{figure*}[t!] \centering \includegraphics[width=0.86\textwidth]{imgs/i2i_aging.pdf} \vspace*{-5mm} \caption{Comparison of SDL with competing methods on continuous face aging. From top to bottom: SDL, StyleGAN2 backpropagation \protect{\cite{Viazovetskyi2020Distillation}}, SAVI2I \protect{\cite{Mao2020ContinuousI2I}}, Lifespan \protect{\cite{Orel2020Lifespan}} and DNI \protect{\cite{Wang2019DNI}}.} \label{fig:i2i} \vspace*{-6mm} \end{figure*} \vspace*{-2mm} \subsection{Applications beyond VFI} \vspace*{-1mm} The proposed SDL achieves leading performance in VFI without using optical flows. It can also be used to address other CIT applications beyond VFI, such as image-to-image translation and image morphing. In this section, we take face aging, face toonification and dog-to-dog image morphing as examples to demonstrate the generality of our SDL approach. \textbf{Face Aging.} \label{sec:I2I} Unlike VFI, there is no public dataset available for training and assessing continuous I2I models. To solve this issue, we use StyleGAN \cite{Karras2019StyleGAN,Karras2020StyleGAN2}, which is a cutting-edge network for creating realistic images, to generate training data. Following \cite{Viazovetskyi2020Distillation}, we use StyleGAN2 distillation to synthesize datasets for face manipulation tasks such as aging. We first locate the direction vector associated with the attribute in the latent space, then randomly sample the latent codes to generate source images.
For each source image, we walk along the direction vector with equal pace to synthesize a number of target images. As shown in the middle image of Fig.~\ref{fig:bad}, StyleGAN2 distillation may not always generate faithful images. We thus manually check all the samples to remove unsatisfactory ones. Finally, $50,000$ samples are generated, and each sample contains $11$ images of $1024\times 1024$. The dataset will be made publicly available. The source image $I_0$ and a randomly selected target image $I_a$ ($a\in\{1,2,...,10\}$) are used as the inputs to train the SDL model. The corresponding value of $t$ is $a/10$. We also randomly replace the source image $I_0$ with the target image $I_{10}$ during training, and the corresponding value of $t$ can be set as $a/10-1$. In this way, the range of $t\in[0,1]$ can be extended to $[-1,1]$ so that our model can produce both younger faces (by setting $t\in[-1,0)$) and older faces (by setting $t\in(0, 1]$). Note that SDL only needs the source image as input at inference. Though trained on synthetic datasets, SDL can be readily used to handle real-world images. Since only a couple of works have been proposed for the continuous I2I translation problem, we choose those methods \cite{Wang2019DNI,Mao2020ContinuousI2I,Orel2020Lifespan} whose training codes are publicly available for comparison, and re-train their models using our datasets. In particular, we employ the same supervised $L_1$ loss as ours to re-train those unsupervised methods for fair comparison. Fig.~\ref{fig:i2i} shows the results of competing methods on continuous face aging. One can see that SDL clearly outperforms the competitors in generating realistic images. By synthesizing the non-translatable features in reconstruction, SDL also works much better in retaining the image background, for example, the mouth in the right-top corner. StyleGAN2 backpropagation \cite{Viazovetskyi2020Distillation} generates qualified aging faces; however, it fails to translate the face identity and loses the image background. SDL also produces more stable results than StyleGAN2 backpropagation, as shown in Fig.~\ref{fig:bad}. It is worth mentioning that SDL is $10^3$ times faster than StyleGAN2 backpropagation, which requires time-consuming iterative optimization. SAVI2I \cite{Mao2020ContinuousI2I} fails to generate qualified intermediaries with photo-realistic details. Lifespan \cite{Orel2020Lifespan} adopts an off-the-shelf face segmentation algorithm to keep the background unchanged. However, the generated face images have low quality. To test DNI \cite{Wang2019DNI}, we train two Pix2PixHD \cite{Wang2018Pix2PixHD} models to generate younger and older faces, respectively, and blend their weights continuously. As can be seen, DNI \cite{Wang2019DNI} fails to produce reasonable transition results. Moreover, SDL can generate continuous image-to-image translations at arbitrary resolutions, while none of the competing methods can do so. More visual comparison results can be found in the \textbf{supplementary materials}. \begin{figure*}[t!] \centering \includegraphics[width=0.86\textwidth]{imgs/I2I_toonify.pdf} \vspace*{-5mm} \caption{Comparison of SDL with competing methods on continuous face toonification.
From top to bottom: SDL, Pinkney \emph{et al.} \cite{Pinkney2020ResolutionDG}, and SAVI2I \protect{\cite{Mao2020ContinuousI2I}}.} \label{fig:toonification} \vspace*{-5mm} \end{figure*} \textbf{Face Toonification.} We first build a face toonification dataset by using the method of \emph{layer swapping} \cite{Pinkney2020ResolutionDG}. Specifically, we finetune a pretrained StyleGAN on a cartoon face dataset to obtain a new GAN, then swap different scales of layers of the two GANs (\ie, the pretrained and the finetuned ones) to create a series of blended GANs, which can generate various levels of face toonification effects. Similar to face aging, we generate $50,000$ training samples, each containing $6$ images of resolution $1024\times 1024$. During training, we take the source images (\ie, $I_0$) as input and randomly choose a target image $I_a$, $a\in\{1,2,...,5\}$, as the ground-truth output. The corresponding value of $t$ is $a/5$. We compare SDL with Pinkney \emph{et al.} \cite{Pinkney2020ResolutionDG} and SAVI2I \cite{Mao2020ContinuousI2I}, whose source codes are available. As shown in Fig.~\ref{fig:toonification}, SDL outperforms the competitors in producing visually more favourable results. Pinkney \emph{et al.} \cite{Pinkney2020ResolutionDG} generates qualified toonification effects but it fails to retain the face identity and the image background. The generated face images of SAVI2I \cite{Mao2020ContinuousI2I} have low quality. Furthermore, SAVI2I \cite{Mao2020ContinuousI2I} merely synthesizes images with a resolution of $256\times 256$, while SDL can yield results at any resolution. More visual comparison results can be found in the \textbf{supplementary materials}. \begin{figure*}[t!] \centering \includegraphics[width=0.86\textwidth,,height=0.49\textwidth]{imgs/morphing.pdf} \vspace*{-5mm} \caption{Comparison of SDL with competing methods on dog-to-dog morphing. From top to bottom: SDL, StyleGAN2 backpropagation \protect{\cite{Viazovetskyi2020Distillation}}, CrossBreed \protect{\cite{Park2020Crossbreed}}, SAVI2I \protect{\cite{Mao2020ContinuousI2I}}, and FUNIT \protect{\cite{Liu2019Few}}.} \label{fig:morphing} \vspace*{-5mm} \end{figure*} \textbf{Dog-to-Dog Morphing.} Similar to I2I translation, we synthesize training data for dog-to-dog morphing using StyleGAN2 \cite{Karras2020StyleGAN2} and BigGAN \cite{Brock2019BigGAN}. We randomly sample two latent codes as the source and target images. The intermediate images are obtained by interpolating the two codes in the latent space. We generate $50,000$ training samples, each containing $11$ images of resolution $512\times 512$. During training, we take the source and target images (\ie, $I_0, I_{10}$) as inputs and randomly choose an image $I_a$, $a\in\{1,2,...,9\}$, as the ground-truth output. Since few methods have been proposed for continuous image morphing, we compare SDL with I2I translation models, including CrossBreed \cite{Park2020Crossbreed}, SAVI2I \cite{Mao2020ContinuousI2I} and FUNIT \cite{Liu2019Few}. (We re-train their models using our datasets and the same supervised $L_1$ loss for fair comparison.) As shown in Fig.~\ref{fig:morphing}, SDL achieves smooth morphing from one dog to another with vivid details. StyleGAN2 backpropagation \cite{Viazovetskyi2020Distillation} yields comparable results but it loses the background details. CrossBreed \cite{Park2020Crossbreed} and SAVI2I \cite{Mao2020ContinuousI2I} fail to generate qualified intermediate results. 
FUNIT \cite{Liu2019Few} produces smooth morphing; however, the generated dog images have low quality and it fails to retain the image content when $t=0$ or $t=1$. Please refer to the \textbf{supplementary materials} for more visual comparisons. \vspace*{-3mm} \section{Conclusion} \vspace*{-2mm} We proposed a simple yet effective approach, namely space decoupled learning (SDL), for the VFI problem. We implicitly decoupled the images into a translatable flow space and a non-translatable feature space, and performed image interpolation in the flow space and intermediate image synthesis in the feature space. The proposed SDL can serve as a general-purpose solution to a variety of continuous image transition (CIT) problems. As demonstrated by our extensive experiments, SDL showed highly competitive performance with state-of-the-art methods, which were, however, specifically designed for their given tasks. In particular, in the application of video frame interpolation, SDL was the first flow-free algorithm that could synthesize consecutive interpolations with leading performance. In other CIT tasks such as face aging, face toonification and dog-to-dog morphing, SDL exhibited much better visual quality and efficiency with more foreground and background details. \clearpage \bibliographystyle{splncs04} \bibliography{egbib} \end{document}
https://openreview.net/forum?id=3d6PLMQm5Uj
https://arxiv.org/abs/2203.09771
Please evaluate the paper based on the proposed Space Decoupled Learning (SDL) method for Continuous Image Transition (CIT), its technical correctness, thorough experiments, and its potential impact on various CIT tasks.
In general, good work Quality: Good work, intensive experiments with three tasks with two major datasets Clarity: Clear problem statements, clear and tidy methodology by adding unsupervised k-means and a corresponding loss Significance: the methodology could ease the stated problems to some extent (e.g., if new unseen samples are far away from any local experts, then in this situation the hyperparameter K will be related to the performance. How can you determine this K? And what is the influence? Is there any optimal option for choosing K in general?) Questions: 1. In Fig. 6 \& 7 right side, the mAP performance dropped with the large \lambda (the rightmost square and star). And the caption for that is 'the method is stable regardless the choice of the parameters over various tasks'. Could you please give me some explanation on those two points? I think the trend of dropping mAP with large \lambda is clear. Do you still think it is stable? 2. I found some typos or grammar mistakes, please check them. Line 66, 'evcaluate'. Line 112, 'where early on in the training'. Line 236, 'Thereof'. Line 493, 'both both'. Rating: 7: Good paper, accept Confidence: 4: The reviewer is confident but not absolutely certain that the evaluation is correct
\pdfoutput=1 \documentclass[runningheads]{llncs} \usepackage{tikz} \usepackage{comment} \usepackage{amsmath,amssymb} % \usepackage{color} \usepackage{tikz} \usepackage{graphicx} \usepackage{amsmath} \usepackage{amssymb} \usepackage{booktabs} \usepackage{wrapfig} \usepackage{subcaption} \DeclareMathOperator*{\argmax}{arg\,max} \DeclareMathOperator*{\argmin}{arg\,min} \usepackage{hyperref} \hypersetup{colorlinks,allcolors=black} \newcommand{\bibi}[1]{\todo[inline]{{\textbf{Bibi:} \emph{#1}}}} \newcommand{\bibir}[1]{\textcolor{red}{Bibi: #1}} \newcommand{\csabi}[1]{\todo[inline]{{\textbf{Csabi:} \emph{#1}}}} \newcommand{\csabir}[1]{\textcolor{red}{Csabi: #1}} \usepackage[capitalize]{cleveref} \crefname{section}{Sec.}{Secs.} \Crefname{section}{Section}{Sections} \Crefname{table}{Table}{Tables} \crefname{table}{Tab.}{Tabs.} \usepackage[accsupp]{axessibility} % \begin{document} \pagestyle{headings} \mainmatter \def\ECCVSubNumber{4} % \title{Diversified Dynamic Routing for Vision Tasks} % \titlerunning{Diversified Dynamic Routing for Vision Tasks} \author{Botos Csaba\inst{1} \and Adel Bibi\inst{1} \and Yanwei Li\inst{2} \and Philip Torr\inst{1} \and Ser-Nam Lim\inst{3} } \authorrunning{Botos Cs. et al.} \institute{University of Oxford, UK\\ \email{csbotos@robots.ox.ac.uk}\\\email{\{adel.bibi,philip.torr\}@eng.ox.ac.uk}\and The Chinese University of Hong Kong, HKSAR\\ \email{ywli@cse.cuhk.edu.hk}\\ \and Meta AI\\ \email{sernamlim@fb.com} } \maketitle \begin{abstract} Deep learning models for vision tasks are trained on large datasets under the assumption that there exists a universal representation that can be used to make predictions for all samples. Whereas high complexity models are proven to be capable of learning such representations, a mixture of experts trained on specific subsets of the data can infer the labels more efficiently. However using mixture of experts poses two new problems, namely (\textbf{i}) assigning the correct expert at inference time when a new unseen sample is presented. (\textbf{ii}) Finding the optimal partitioning of the training data, such that the experts rely the least on common features. In Dynamic Routing (DR)~\cite{li2020learning} a novel architecture is proposed where each layer is composed of a set of experts, however without addressing the two challenges we demonstrate that the model reverts to using the same subset of experts. In our method, Diversified Dynamic Routing (DivDR) the model is explicitly trained to solve the challenge of finding relevant partitioning of the data and assigning the correct experts in an unsupervised approach. We conduct several experiments on semantic segmentation on Cityscapes and object detection and instance segmentation on MS-COCO showing improved performance over several baselines. \end{abstract} \section{Introduction} In recent years, deep learning models have made huge strides solving complex tasks in computer vision, e.g. segmentation~\cite{long2015fully,chen2017deeplab} and detection~\cite{fastrcnn,fasterrcnn}, and reinforcement learning, e.g. playing atari games~\cite{mnih2013atari}. Despite this progress, the computational complexity of such models still poses a challenge for practical deployment that requires accurate real-time performance. This has incited a rich body of work tackling the accuracy complexity trade-off from various angles. 
For instance, a class of methods tackles this trade-off by developing more efficient architectures~\cite{tan2019efficientnet,yu2018bisenet}, while others initially train larger models and later distill them into smaller, more efficient models~\cite{hinton2015distilling,xie2020self,gou2021knowledge}. Moreover, several works rely on sparse regularization approaches~\cite{wan2013regularization,ding2021hr,shaw2019squeezenas} during training or perform a post-training pruning of model weights that contribute marginally to the final prediction. While listing all categories of methods tackling this trade-off is beyond the scope of this paper, to the best of our knowledge, they all share the assumption that predicting the correct label requires a universal set of features that works best for all samples. We argue that such an assumption is often broken even in well-curated datasets. For example, in the task of segmentation, object sizes can vary widely across the dataset, requiring different computational effort to process. That is to say, large objects can be easily processed at lower resolutions while smaller objects require processing in high resolution to retain accuracy. This opens the door for a class of methods that rely on \textit{local experts}: efficient models trained directly on each subset separately, leveraging this local bias. However, prior art often ignores local biases in the training and validation datasets when tackling the accuracy-efficiency trade-off, for two key reasons illustrated in Figure \ref{fig:pull-figure}. (\textbf{i}) Even under the assumption that such local biases in the training data are known, during inference time, new unseen samples need to be assigned to the correct local subset so as to use the corresponding \textit{local expert} for prediction (Figure \ref{fig:pull-figure} left). (\textbf{ii}) Such local biases in datasets are not known \textbf{a priori} and may require a prohibitively expensive inspection of the underlying dataset (Figure \ref{fig:pull-figure} right). In this paper, we take an orthogonal direction to prior art on the accuracy-efficiency trade-off by addressing the two challenges in an unsupervised manner. In particular, we show that training \textit{local experts} on learnt subsets sharing local biases can jointly outperform \textit{global experts}, i.e. models that were trained over the entire dataset. Our contributions are twofold. \begin{enumerate} \item We propose Diversified Dynamic Routing (DivDR), an unsupervised learning approach that trains several local experts on learnt subsets of the training dataset. At inference time, DivDR assigns the correct local expert for prediction to newly unseen samples. \item We extensively evaluate DivDR and compare against several existing methods on semantic segmentation, object detection and instance segmentation on various datasets, i.e. Cityscapes~\cite{cordts2016cityscapes} and MS-COCO~\cite{lin2014microsoft}. We find that, compared to existing methods, DivDR better trades off accuracy and efficiency. We complement our experiments with various ablations demonstrating the robustness of DivDR to choices of hyperparameters. \end{enumerate} \begin{figure} \centering \includegraphics[width=.7\textwidth]{figures/banner.pdf} \caption{The figure depicts the two main challenges in learning local experts on subsets of the dataset with local biases.
First, even when the subsets in the training dataset are presented, with a local expert per subset, the challenge remains of assigning the correct local expert to new unseen samples (left figure). The second challenge is that the local biases in the training data are not available during training time (right figure).} \label{fig:pull-figure} \end{figure} \section{Related Work} \label{sec:related} In prior literature, model architectures were predominantly hand-designed, meaning that hyper-parameters such as the number and width of layers and the size and stride of convolution kernels were predefined. In contrast, Neural Architecture Search~\cite{zoph2016neural,liu2018darts} revealed that searching over said hyper-parameter space is feasible provided enough data and compute power, resulting in substantial improvements in model accuracy. Recently, a line of research~\cite{li2019partial,liu2019auto,chen2018searching,tan2019efficientnet,veit2018convolutional} also proposed to constrain the search space to cost-efficient models that jointly optimize the accuracy and the computational complexity of the models. Concurrently, cost-efficient inference has also been the focus of works on dynamic network architectures~\cite{mullapudi2018hydranets,you2019gate,wang2018skipnet,wu2018blockdrop}, where the idea is to allow the model to choose different architectures based on the input through gating computational blocks during inference. For example, Li et al.~\cite{li2020learning} proposed an end-to-end dynamic routing framework that generates routes within the architecture that vary per input sample. The search space of~\cite{li2020learning}, inspired by Auto-DeepLab~\cite{liu2019auto}, allows exploring spatial up- and down-sampling between subsequent layers, which distinguishes the work from prior dynamic routing methods. One common failure mode of dynamic models is mentioned in~\cite{mullapudi2018hydranets}, where during the initial phase of the training only a specific set of modules is selected and trained, leading to a static model with reduced capacity. This issue is addressed by Mullapudi \textit{et al.}~\cite{mullapudi2018hydranets} through clustering the training data in advance based on latent representations of a pretrained image classifier model, whereas~\cite{veit2018convolutional} uses the Gumbel-Softmax reparameterization~\cite{jang2016categorical} to improve the diversity of the dynamic routes. In this work, to mitigate this problem, we adopt the metric-learning Magnet Loss~\cite{rippel2015metric}, which improves over metric learning methods that act on the instance level, e.g. Triplet Loss~\cite{weinberger2009distance,koch2015siamese}, and contrastive learning methods~\cite{chopra2005learning,hadsell2006dimensionality}. This is because it considers the complete distribution of the underlying data, resulting in more stable clustering. To adapt Magnet Loss to resolving the Dynamic Routing drawbacks, we use it as an unsupervised approach to increase the distance between the forward paths learned by the Dynamic Routing model, as opposed to clustering the learned representations, i.e. learning clustered dynamic routes rather than clustered representations. We also review the recent advances in semantic segmentation and object detection, which are utilized to validate our method in this work.
For semantic segmentation, numerous works have been proposed to capture larger receptive fields~\cite{zhao2017pyramid,chen2017deeplab,chen2017rethinking,chen2018encoder} or establish long-range pixel relations~\cite{zhao2018psanet,huang2018ccnet,song2019learnable} based on Fully Convolutional Networks~\cite{long2015fully}. As mentioned above, with the development of neural networks, Neural Architecture Search (NAS)-based approaches~\cite{chen2018searching,liu2019auto,nekrasov2019fast} and dynamic networks~\cite{li2020learning} are utilized to adjust the network architecture according to the data while being jointly optimized to reduce the cost of inference. As for object detection, modern detectors can be roughly divided into one-stage and two-stage detectors. One-stage detectors usually make predictions based on prior guesses, like anchors~\cite{redmon2016you,lin2017focal} and object centers~\cite{tian2019fcos,zhou2019objects}. Meanwhile, two-stage detectors predict boxes based on predefined proposals in a coarse-to-fine manner~\cite{girshick2014rich,fastrcnn,fasterrcnn}. There are also several advances in Transformer-based approaches for image recognition tasks such as segmentation~\cite{zheng2021rethinking,xie2021segformer} and object detection~\cite{carion2020end,zhu2020deformable}; while our method can be generalized to those architectures as well, this is beyond the scope of this paper. \section{DivDR: Diversified Dynamic Routing} \label{sec:method} We first introduce Dynamic Routing. Second, we formulate our objective of the iterative clustering of the dataset and the learning of experts per dataset cluster. Finally, we propose a contrastive learning approach based on the \textit{magnet loss}~\cite{rippel2015metric} over the gate activations of the dynamic routing model to encourage the learning of different architectures over different dataset clusters. \subsection{Dynamic Routing Preliminaries} The Dynamic Routing (DR)~\cite{li2020learning} model for semantic segmentation consists of $L$ sequential feed-forward layers in which dynamic \emph{nodes} process and propagate the information. Each dynamic node has two parts: (\textbf{i}) the \emph{cell} that performs a non-linear transformation on the input of the node; and (\textbf{ii}) the \emph{gate} that decides which node receives the output of the cell operation in the subsequent layer. In particular, the gates in DR determine what resolution/scale of the activation is to be used. That is to say, each gate determines whether the activation output of the cell is to be propagated at the same resolution, up-scaled, or down-scaled by a factor of $2$ in the following layer. Observe that the gate activation determines the \textit{architecture} for a given input, since it determines a unique set of connections defining the architecture. The outputs of the final layer of nodes are up-sampled and fused by $1 \times 1$ convolutions to match the original resolution of the input image. For an input-label pair $(x,y)$ in a dataset $\mathcal{D}$ of $N$ pairs, let the DR network parameterized by $\theta$ be given as $f_\theta : \mathcal{X} \rightarrow \mathcal{Y}$ where $x \in \mathcal{X}$ and $y \in \mathcal{Y}$. Moreover, let $\mathcal{A}_{\tilde{\theta}} : \mathcal{X} \rightarrow [0,1]^n$, where $\theta \supseteq \tilde{\theta}$, denote the gate activation map for a given input, i.e.
the gates determining the architecture discussed earlier, then the training objective for DR networks under computational budget constraints has the following form: \begin{equation} \mathcal{L}_{DR}= \frac{1}{N} \sum_{i=1}^N \mathcal{L}_{seg}\big(f_\theta(x_i), y_i\big)+ \lambda\mathcal{L}_{cost}(\mathcal{A}_{\tilde{\theta}}(x_i)). \end{equation} \noindent We will drop the subscript $\tilde{\theta}$ throughout to reduce text clutter. Note that $\mathcal{L}_{seg}$ and $\mathcal{L}_{cost}$ denote the segmentation loss and the computational budget constraint, respectively. Observe that when most of the gate activations are sparse, this incurs a more efficient network that may be at the expense of accuracy, hence the trade-off through the penalty $\lambda$. \begin{figure}[t] \centering \includegraphics[width=.8\textwidth]{figures/kmeans-assignment.pdf} \caption{ \textbf{Gate Activation cluster assignment.} To update the local experts, DivDR performs K-means clustering on the gate activations $\mathcal{A}(x_i)~\forall i$ of the training examples with fixed model parameters $\theta$.} \label{fig:kmeans-assign} \end{figure} \begin{figure}[t] \centering \includegraphics[width=.8\textwidth]{figures/gate-activation-diversification.pdf} \caption{ \textbf{Gate Activation Diversification.} We use the labels from the cluster assignment to reduce the \textit{intra-cluster} variance and increase the \textit{inter-cluster} variance by updating the model parameters $\theta$.} \label{fig:kmeans-diversify} \end{figure} \subsection{Metric Learning in $\mathcal{A}$-space} Learning local experts can benefit performance both in terms of accuracy and computational cost. We propose an unsupervised approach to jointly learning the subsets of the dataset and the soft assignment of the corresponding architectures. We use the DR framework for our approach. We first assume that there are $K$ clusters in the dataset, and we seek to learn an expert for each of them. Moreover, let $\{\mu_{\mathcal{A}_i}\}_{i=1}^K$ denote the cluster centers representing $K$ different gate activations. Note that, as per the previous discussion, each gate activation $\mu_{\mathcal{A}_i} \in [0,1]^n$ corresponds to a unique architecture. The set of cluster centers representing gate activations $\{\mu_{\mathcal{A}_i}\}_{i=1}^K$ can be viewed as a set of prototypical architectures for $K$ different subsets of the dataset. Next, let $\mu(x)$ denote the nearest gate activation center to the gate activation $\mathcal{A}(x)$, i.e. $\mu(x) = \argmin_i \|\mathcal{A}(x) - \mu_{\mathcal{A}_i}\|$. Now, we seek to solve for both the gate activation centers $\{\mu_{\mathcal{A}_i}\}_{i=1}^K$ and the parameters $\theta$ such that the gate activation centers are pushed away from one another. To that end, we propose alternating between clustering and the minimization of a \textit{magnet loss}~\cite{rippel2015metric} variant. In particular, for a given fixed set of gate activation centers $\{\mu_{\mathcal{A}_i}\}_{i=1}^K$, we consider the following loss function: \begin{equation} \begin{aligned} \mathcal{L}_{\text{clustering}}(\mathcal{A}(x_i))&= \Bigg\{ \alpha + \frac{1}{2\sigma^2} \|\mathcal{A}(x_i)-\mu(x_i)\| \\ & + \log\left( \sum_{k : \mu_{\mathcal{A}_k} \neq \mu(x_i)} e^{ -\frac{1}{2\sigma^2} \|\mathcal{A}(x_i) - \mu_{\mathcal{A}_k}\| }\right) \Bigg\}_+. \end{aligned} \end{equation} \noindent Note that $\{x\}_+ = \max(x,0)$, $\sigma^2 = \frac{1}{N-1}\sum_{i}^N \|\mathcal{A}(x_i) - \mu(x_i)\|^2$, and that $\alpha \ge 0$.
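For concreteness, the snippet below gives a minimal PyTorch-style sketch of this clustering term; the batch-level estimate of $\sigma^2$, the flattened gate-activation shapes, and the small numerical constant are illustrative assumptions rather than details taken from the released implementation. \begin{verbatim}
import torch

def clustering_loss(gates, centers, alpha=1.0, eps=1e-8):
    # gates:   (B, n) flattened gate activations A(x_i) for a batch
    # centers: (K, n) current cluster centers mu_{A_k}
    dists = torch.cdist(gates, centers)            # ||A(x_i) - mu_{A_k}||
    nearest = dists.argmin(dim=1)                  # index of mu(x_i)
    d_near = dists.gather(1, nearest[:, None]).squeeze(1)

    # batch estimate of sigma^2 (the paper averages squared
    # distances to the assigned centers over the dataset)
    var = (d_near ** 2).sum() / max(gates.shape[0] - 1, 1) + eps

    # log-sum-exp over all centers except the assigned one
    others = torch.arange(centers.shape[0],
                          device=gates.device)[None, :] != nearest[:, None]
    neg = (-dists / (2.0 * var)).masked_fill(~others, float('-inf'))
    lse = torch.logsumexp(neg, dim=1)

    # hinge of the equation above: {alpha + d/(2 sigma^2) + lse}_+
    return torch.relu(alpha + d_near / (2.0 * var) + lse).mean()
\end{verbatim}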
Observe that unlike in \textit{magnet loss}, we seek to cluster the set of architectures by separating the gate activations. Note that the penultimate term pulls the architecture closer to the most similar prototypical architecture while the last term pushes it away from all other architectures. Therefore, this loss incites the learning of $K$ different architectures, where each input $x_i$ is assigned to be predicted with one of the $K$ learnt architectures. To that end, our overall \textit{Diversified} DR loss is given as follows: \begin{equation} \begin{aligned} \mathcal{L}_{\text{DivDR}} = \frac{1}{N}\sum_{i=1}^N & \mathcal{L}_{seg}(f_\theta(x_i),y_i) + \lambda_1 \mathcal{L}_{cost}(\mathcal{A}(x_i)) + \lambda_2\mathcal{L}_{clustering}(\mathcal{A}(x_i)). \end{aligned} \end{equation} We then alternate between minimizing $\mathcal{L}_{\text{DivDR}}$ over the parameters $\theta$ and updating the cluster centers $\{\mu_{\mathcal{A}_i}\}_{i=1}^K$. In particular, given $\theta$, we update the gate activation centers by performing K-means clustering~\cite{macqueen1967some} over the gate activations. That is to say, we fix $\theta$ and perform K-means clustering with $K$ clusters over all the gate activations from the dataset $\mathcal{D}$, i.e. we cluster $\mathcal{A}(x_i)~\forall i$ as shown in Figure \ref{fig:kmeans-assign}. Moreover, alternating between optimizing $\mathcal{L}_{\text{DivDR}}$ and updating the gate activation cluster centers over the dataset $\mathcal{D}$, illustrated in Figure~\ref{fig:kmeans-diversify}, results in a diversified set of data-driven architectures that are more efficient, i.e. it learns $K$ local experts that are accurate and efficient. \section{Experiments} \label{sec:experiments} We show empirically that our proposed DivDR approach can outperform existing methods by better trading off accuracy and efficiency. We demonstrate this on several vision tasks, i.e. semantic segmentation, object detection, and instance segmentation. We first introduce the datasets used in all experiments along with the implementation details. We then present the comparisons between DivDR and several other methods along with several ablations. \subsection{Datasets} We mainly prove the effectiveness of the proposed approach for semantic segmentation, object detection, and instance segmentation on two widely-adopted benchmarks, namely the Cityscapes~\cite{cordts2016cityscapes} and Microsoft COCO~\cite{lin2014microsoft} datasets. \vspace{0.5em} \noindent \textbf{Cityscapes}. The Cityscapes~\cite{cordts2016cityscapes} dataset, which is widely used for semantic segmentation, contains 19 classes in urban scenes. It consists of 5000 finely annotated images that can be divided into 2975, 500, and 1525 images for training, validation, and testing, respectively. In this work, we use the Cityscapes dataset to validate the proposed method on semantic segmentation. \vspace{0.5em} \noindent \textbf{COCO}. The Microsoft COCO~\cite{lin2014microsoft} dataset is a well-known object detection benchmark which contains 80 categories in common context. In particular, it includes 118k training images, 5k validation images, and 20k held-out testing images. To demonstrate performance generalization, we report the results on COCO's validation set for both object detection and instance segmentation tasks. \begin{table*}[t] \centering \caption{ Comparison with baselines on the Cityscapes~\cite{cordts2016cityscapes} validation set.
* Scores from~\cite{li2020learning} were reproduced using the \href{https://github.com/Megvii-BaseDetection/DynamicRouting}{official implementation}. The evaluation settings are identical to~\cite{li2020learning}. We calculate the average FLOPs with $1024\times 2048$ size input. } \begin{tabular}{lc@{\hskip 0.1in}c@{\hskip 0.1in}r} \toprule \textbf{Method} & \textbf{Backbone} & \textbf{$\mathbf{mIoU}_{val}(\%)$} & \textbf{GFLOPs} \\ \midrule BiSenet~\cite{yu2018bisenet} & ResNet-18 & 74.8 & 98.3 \\ DeepLabV3~\cite{chen2017rethinking} & ResNet-101-ASPP & 78.5 & 1778.7 \\ Semantic FPN~\cite{kirillov2019panoptic} & ResNet-101-FPN & 77.7 & 500.0 \\ DeepLabV3+~\cite{chen2018encoder} & Xception-71-ASPP & 79.6 & 1551.1 \\ PSPNet~\cite{zhao2017pyramid} & ResNet-101-PSP & 79.7 & 2017.6 \\ Auto-DeepLab~\cite{liu2019auto} & Searched-F20-ASPP & 79.7 & 333.3 \\ Auto-DeepLab~\cite{liu2019auto} & Searched-F48-ASPP & 80.3 & 695.0 \\ \midrule DR-A~\cite{li2020learning}* & Layer16 & 72.7$\pm$0.6 & 58.7$\pm$3.1 \\ DR-B~\cite{li2020learning}* & Layer16 & 72.6$\pm$1.3 & 61.1$\pm$3.3 \\ DR-C~\cite{li2020learning}* & Layer16 & 74.2$\pm$0.6 & 68.1$\pm$2.5 \\ DR-Raw~\cite{li2020learning}* & Layer16 & 75.2$\pm$0.5 & 99.2$\pm$2.5 \\ \midrule DivDR-A & Layer16 & 73.5$\pm$0.4 & 57.7$\pm$3.9 \\ DivDR-Raw & Layer16 & 75.4$\pm$1.6 & 95.7$\pm$0.9 \\ \bottomrule \end{tabular} \label{tab:full-cityscapes-comp} \end{table*} \subsection{Implementation Details} In all training settings, we use SGD with a weight decay of $10^{-4}$ and momentum of $0.9$ for both datasets. For semantic segmentation on Cityscapes, we use the exponential learning rate schedule with an initial rate of $0.05$ and a power of $0.9$. For fair comparison, we follow the setting in~\cite{li2020learning} and use a batch size $8$ of random image crops of size $768\times768$ and train for $180K$ iterations. We use random flip augmentations where input images are scaled from $0.5$ to $2$ before cropping. For object detection on COCO we use an initial learning rate of $0.02$ and re-scale the shorter edge to 800 pixels and train for 90K iterations. Following prior art, random flip is adopted without random scaling. \subsection{Semantic Segmentation}~\label{sec:experiment_seg} \begin{figure}[t] \centering \includegraphics[width=.9\textwidth]{figures/k-tsne} \caption{ Visualizing the $183$-dimensional $\mathcal{A}$-space of Dynamic Routing backbones trained for semantic segmentation on Cityscapes~\cite{cordts2016cityscapes} (\textit{top}) and $198$-dimensional $\mathcal{A}$-space for object detection on COCO~\cite{lin2014microsoft} (\textit{bottom}) using t-SNE~\cite{van2008visualizing}. \textit{Left:} varying number of \textit{local experts}, $K=2,3,4$. \textit{Right:} joint t-SNE visualization of architectures of Dynamic Routing~\cite{li2020learning} (\textit{orange}) and our approach (\textit{blue}). It is clear that our method not only encourages diversity of the learned routes but also reduces variance in a specific cluster. Low \textit{intra}-cluster variance is beneficial because it facilitates feature sharing between similar tasks } \label{fig:k-tsne} \end{figure} \begin{table}[t] \centering \caption{Quantitative analysis of semantic segmentation on Cityscapes~\cite{cordts2016cityscapes}. 
We report the \textit{Inter} and \textit{Intra} cluster variance, which show how far the cluster centers are from each other in $L_2$ space and how close the samples are to the cluster centers, respectively.} \begin{tabular}{@{}l@{\hskip 0.1in}l@{\hskip 0.1in}c@{\hskip 0.1in}l@{\hskip 0.1in}c@{}} \toprule \textbf{method} & \textbf{mIoU} & \textbf{FLOPs} & \textbf{Inter} & \textbf{Intra} \\ \midrule DR-A & 72.7 & 58.7 & 0.4 & 0.3 \\ DivDR-A & 72.0 & 49.9 & 0.6 & 0.2 \\ \midrule DR-Raw & 75.2 & 99.2 & 1.5 & 1.5 \\ DivDR-Raw & 75.7 & 98.3 & 1.2 & 0.5 \\ \bottomrule \end{tabular} \label{table:inter_v_intra} \end{table} We show the benefits of our proposed DivDR, which alternates between training with $\mathcal{L}_{\text{DivDR}}$ and computing the gate activation clusters through K-means, on Cityscapes \cite{cordts2016cityscapes} for semantic segmentation. In particular, we compare two versions of our proposed unsupervised Dynamic Routing, namely with and without the computational cost constraint ($\lambda_1=0$ denoted as DivDR-Raw and $\lambda_1=0.8$ denoted as DivDR-A), against several variants of the original dynamic routing networks, both constrained and unconstrained. All experiments are averaged over 3 seeds. As observed in Table \ref{tab:full-cityscapes-comp}, while both variants perform similarly in terms of accuracy (DR-Raw: $75.2\%$, DivDR: $75.4\%$), DivDR marginally improves the computational cost by $3.5$ GFLOPs. On the other hand, when introducing the cost efficiency constraint, DivDR-A improves both the efficiency ($58.7$ GFLOPs to $57.7$ GFLOPs) and accuracy ($72.7\%$ to $73.5\%$) as compared to DR-A. Finally, we observe that, compared to other state-of-the-art methods, our unconstrained approach performs similarly to BiSenet~\cite{yu2018bisenet} with 74.8\% accuracy while being better in computational efficiency (98.3 GFLOPs vs. 95.7 GFLOPs). \paragraph{\textbf{Visualizing Gate Activations}.} We first visualize the gate activations of DivDR-A under different choices of the number of clusters $K$. As observed from Figure \ref{fig:k-tsne}, our proposed $\mathcal{L}_{\text{DivDR}}$ indeed results in clusters of local experts, as shown by the different gate activations $\mathcal{A}$ for $k \in \{2,3,4\}$. Moreover, we also observe that our proposed loss not only results in separated clusters of local experts, i.e. gate activations, but also in small intra-cluster distances. In particular, as shown in Table \ref{table:inter_v_intra}, our proposed DivDR indeed results in inter-cluster distances that are larger than the intra-cluster distances. The inter-cluster distances are computed as the average distance over all pairs of cluster centers, i.e. $\{\mu_{\mathcal{A}_i}\}_{i=1}^K$, while the intra-cluster distances are the average distances over all pairs within every cluster. This indeed confirms that our proposed training approach results in $K$ different architectures for a given dataset. Consequently, we can group the corresponding input images into $K$ classes and visualize them to reveal common semantic features across the groups. For details see Fig.~\ref{fig:cluster-examples}. We find it interesting that, even though we do not provide any direct supervision to the gates about the objects present in the images, the clustering learns to form semantically meaningful groups.
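As a complement to the description above, the following sketch shows how the prototypical architectures can be re-estimated from the collected gate activations and how the reported \textit{Inter}/\textit{Intra} statistics can be computed; the use of scikit-learn's \texttt{KMeans} and the reading of the intra-cluster distance as the sample-to-assigned-center distance are illustrative assumptions rather than details of the released code. \begin{verbatim}
import numpy as np
from sklearn.cluster import KMeans

def update_centers(gate_acts, k):
    # gate_acts: (N, n) array of flattened gate activations A(x_i),
    # collected over the dataset with the model parameters fixed.
    km = KMeans(n_clusters=k, n_init=10).fit(gate_acts)
    return km.cluster_centers_, km.labels_

def inter_intra(gate_acts, centers, labels):
    # Inter: average pairwise L2 distance between cluster centers.
    # Intra: average distance of each sample to its assigned center
    # (one plausible reading of the metric reported in the tables).
    k = centers.shape[0]
    pairs = [np.linalg.norm(centers[i] - centers[j])
             for i in range(k) for j in range(i + 1, k)]
    inter = float(np.mean(pairs)) if pairs else 0.0
    intra = float(np.mean(np.linalg.norm(gate_acts - centers[labels],
                                         axis=1)))
    return inter, intra
\end{verbatim} In practice, the centers only need to be refreshed periodically (every 50 training iterations in our experiments, see the supplementary material), so this step is cheap relative to the training itself.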
\begin{figure} \centering \includegraphics[width=.6\textwidth]{figures/supplementary-cluster_defaced.png} \caption{ Visualization of images from the validation set of the MS-COCO 2017~\cite{lin2014microsoft} challenge. In this training, $K=3$, and we visualize the top-$5$ images that fall closest to their respective cluster centers $\mu_i$. Note that the dataset does not provide subset-level annotations; however, our method uses different pathways to process images containing meals (\textit{top row}), objects with wheels and outdoor scenes (\textit{middle row}), and electronic devices (\textit{bottom row}). } \label{fig:cluster-examples} \end{figure} \paragraph{\textbf{Ablating $\alpha$ and $\lambda_2$.}} Moreover, we also ablate the effect of $\alpha$, which is the separation margin in the hinge loss term of our proposed loss. Observe that larger values of $\alpha$ correspond to more enforced regularization on the separation between gate activation clusters. As shown in Figure \ref{fig:semseg-ablation-alpha-lambda} left, we observe that the mIoU accuracy and the FLOPs of our DivDR-A are only marginally affected by $\alpha$, indicating that a sufficient margin can be attained while maintaining the accuracy-FLOPs trade-off performance. \begin{figure}[t] \centering \includegraphics[width=\textwidth]{figures/semseg-lambda-and-alpha.pdf} \caption{Ablation on the $\alpha$ (\textit{left}) and $\lambda_2$ (\textit{right}) parameters of the diversity loss term for Semantic Segmentation. The \textit{mean} accuracy in the case of the parameter sweep for $\lambda_2$ is higher since in each case the best performing $\alpha$ was used for the training. We can see that the method is stable regardless of the choice of the parameters over various tasks. } \label{fig:semseg-ablation-alpha-lambda} \end{figure} \begin{table}[ht] \caption{ Quantitative comparison of Dynamic Routing~\cite{li2020learning} trained without the objective to diversify the paths and using various $K$ for the clustering term. We omit $K=1$ from our results as it reverts to forcing the model to use the same architecture, independent of the input image. Instead, we report the baseline scores from~\cite{li2020learning}. For comparison, we report the best Dynamic Routing~\cite{li2020learning} scores from 3 identical runs with different seeds.
} \label{tab:coco-k} \begin{subtable}{.5\linewidth} \caption{DivDR-A} \centering \begin{tabular}{@{}r@{\hskip 0.1in}c@{\hskip 0.1in}c@{\hskip 0.1in}c@{\hskip 0.1in}c@{}} \toprule \textbf{K} & \textbf{mAP}$_{val}$ & \textbf{GFLOPs} & \textbf{Inter} & \textbf{Intra} \\ \midrule * & 34.6 & 23.2 & 0.2 & 0.3 \\ \midrule 2 & \textbf{35.1} & 21.9 & 1.1 & 0.4 \\ 3 & 35.0 & \textbf{19.2} & 0.8 & 0.3 \\ 4 & 34.9 & 20.0 & 0.6 & 0.1 \\ \bottomrule \end{tabular} \end{subtable} \begin{subtable}{.5\linewidth} \caption{DivDR-Raw} \centering \begin{tabular}{@{}r@{\hskip 0.1in}c@{\hskip 0.1in}c@{\hskip 0.1in}c@{\hskip 0.1in}c@{}} \toprule \textbf{K} & \textbf{mAP}$_{val}$ & \textbf{GFLOPs} & \textbf{Inter} & \textbf{Intra} \\ \midrule * & 37.8 & 38.2 & 0.5 & 0.7 \\ \midrule 2 & 36.5 & \textbf{31.0} & 0.6 & 0.5 \\ 3 & 37.4 & 32.6 & 1.2 & 0.5 \\ 4 & \textbf{38.1} & 32.8 & 0.7 & 0.2 \\ \bottomrule \end{tabular} \end{subtable} \end{table} \subsection{Object Detection and Instance Segmentation}\label{sec:experiment_det} \label{subsec:coco} \input{tables/coco-det} \input{tables/coco-seg} To further demonstrate the effectiveness on detection and instance segmentation, we validate the proposed method on the COCO dataset with Faster R-CNN~\cite{fasterrcnn} and Mask R-CNN~\cite{he2017mask} heads. As for the backbone, we extend the original dynamic routing networks with another 5-stage layer to keep consistent with that in FPN~\cite{lin2017feature}, bringing 17 layers in total. Similar to Sec.~\ref{sec:experiment_seg}, no external supervision is provided to our proposed DivDR during training. As presented in Tables~\ref{tab:coco-det} and \ref{tab:coco-seg}, we conduct experiments with two different settings, namely without and with computational cost constraints. We illustrate the overall improvement over DR~\cite{li2020learning} across various hyper-parameters in Fig.~\ref{fig:coco-scatter}. \paragraph{\textbf{Detection.}} Given no computational constraints, DivDR attains 38.1\% mAP with 32.9 GFLOPs as opposed to 37.7$\%$ mAP for DR-R. While the average precision is similar, we observe a noticeable computational reduction of 5.3 GFLOPs. Compared with the ResNet-50-FPN backbone, DivDR achieves similar performance, with a small gain of 0.2$\%$, at about a third of the GFLOPs (32.9 GFLOPs vs. 95.7 GFLOPs). When we introduce the computational regularization, the cost is reduced to 19.8 GFLOPs while the performance is preserved with 35.4\% mAP. Compared with DR-A, we observe that the constrained DivDR enjoys 1.1 lower GFLOPs and an improved precision of 3.3$\%$ (35.4\% mAP vs. 32.1\% mAP) with a lower standard deviation. We believe that this is due to the local experts learnt for separate subsets of the data. \paragraph{\textbf{Instance Segmentation.}} As for the task of instance segmentation, as observed in Table \ref{tab:coco-seg}, the unconstrained DivDR performs similarly to DR-R with 35.1\% mAP. However, DivDR better trades off the GFLOPs, with 32.9 GFLOPs in the unconstrained regime as opposed to 38.2 GFLOPs. This is similar to the observations made in the detection experiments. Moreover, when computational constraints are introduced, DivDR enjoys similar GFLOPs to DR-A but with an improved precision of 1.6\% (33.4\% mAP vs. 31.8\% mAP). \begin{figure}[t] \centering \includegraphics[width=\textwidth]{figures/det-lambda-and-alpha.pdf} \caption{Ablation on the $\alpha$ (\textit{left}) and $\lambda_2$ (\textit{right}) parameters of the diversity loss term for Object Detection.
We can see that the method is stable regardless of the choice of the parameters over various tasks. } \label{fig:det-ablation-alpha-lambda} \end{figure} \input{figures/coco-scatter} \paragraph{\textbf{Ablating} $K$.} We compare the performance of our proposed DivDR under different choices of the number of clusters $K$ over the gate activations, for both the unconstrained and constrained settings, i.e. DivDR-Raw and DivDR-A, respectively. We note that our proposed $\mathcal{L}_{\text{DivDR}}$ effectively separates the gate activation cluster centers, as shown in Figure~\ref{fig:k-tsne}. Moreover, we also observe that our proposed loss not only results in separated clusters of local experts, but also in small intra-cluster distances, as shown in Table \ref{tab:coco-k}. In particular, we observe that our proposed DivDR results in inter-cluster distances that are larger than the intra-cluster distances (in contrast with DR~\cite{li2020learning}). \paragraph{\textbf{Ablating $\alpha$ and $\lambda_2$}.} As shown in Figure \ref{fig:det-ablation-alpha-lambda}, we observe that the choice of both $\alpha$ and $\lambda_2$ only marginally affects the performance of DivDR-A in terms of mAP on the object detection task. However, we find that $\lambda_2 > 0.5$ starts to degrade the mAP in exchange for reduced computation. \section{Discussion and Future Work} \label{conclusion} In this paper we demonstrate the superiority of networks trained on subsets of the training set that hold similar properties, which we refer to as \textit{local experts}. We address the two main challenges of training and employing local experts in real-life scenarios, where subset labels are available neither during training nor at test time. Following that, we propose a method, called Diversified Dynamic Routing, that is capable of jointly learning local experts and subset labels without supervision. In a controlled study, where the subset labels are known, we showed that we can recover the original subset labels with $98.2\%$ accuracy while maintaining the performance of a hypothetical \textit{Oracle} model in terms of both accuracy and efficiency. To analyse how well this improvement translates to real-life problems, we conducted extensive experiments on complex computer vision tasks such as segmenting street objects in images taken from the driver's perspective, as well as detecting common objects in both indoor and outdoor scenes. In each scenario we demonstrate that our method outperforms Dynamic Routing~\cite{li2020learning}. Even though this approach is powerful in the sense that it can improve on a strong baseline, we are aware that the clustering method still assumes subsets of \textit{equal} and, more importantly, \textit{sufficient} size. If the dataset is significantly imbalanced w.r.t. local biases, the K-means approach might fail. One further limitation is that if the subsets are too small for the \textit{local experts} to learn generalizable representations, our approach might also fail to generalize. Finally, the search space of the architectures in this work is defined by Dynamic Routing~\cite{li2020learning}, which is heavily focused on scale variance. We believe that our work can be further generalized by analyzing and resolving the challenges mentioned above. \section{Acknowledgement} We thank Hengshuang Zhao for the fruitful discussions and feedback. This work is supported by the UKRI grant: Turing AI Fellowship EP/W002981/1 and EPSRC/MURI grant: EP/N019474/1.
We would also like to thank the Royal Academy of Engineering. Botos Csaba was funded by Facebook Grant Number DFR05540. \clearpage \bibliographystyle{splncs04} \bibliography{references} \clearpage \section{Supplementary Material} \subsection{Sensitivity to the number of iterations between K-means updates} In our early experiments we found that our method achieves satisfactory results if we keep the number of iterations between K-means updates low: $\leq 100$. With lower-frequency updates, the diversity between the cluster centers was not sufficiently large, leading to the trivial solution, i.e. the model architecture learning to ignore the input image. In Deep Clustering~\cite{caron2018deep} another technique is mentioned to avoid such trivial solutions, namely randomizing and manually altering the cluster centers in case they happen to be too close to each other. We did not employ such techniques for our method. On another note, we found that while the cluster centers change significantly during the early phases of the training, the difference between two updates is less pronounced towards the end. This led to the hypothesis that using an annealing policy on the frequency of the updates might be more practical, as it could reduce the training time drastically; however, such a comparison is beyond the scope of this work. In our experiments we use 50 iterations per K-means update everywhere. \subsection{Gathering gate activation values before or after the non-linear layer} We have experimented with applying our method on the output of the final linear layer of each gate in our model. We found that even though much higher variances can be achieved in terms of intra-cluster and inter-cluster diversity metrics, most of these differences are marginalized by the final non-linear layer of the gates. In the most frequent case, the model learned cluster centers that had negative values, which are entirely ignored by the ReLU part of the non-linear function used by Dynamic Routing~\cite{li2020learning}. \clearpage \end{document}
https://openreview.net/forum?id=O2eyumb2ATn
https://arxiv.org/abs/2209.13071
Please evaluate the paper based on the reviewer's comments, specifically addressing the stability of the method with large lambda values and any typos or grammar mistakes that may have been found.
A well-written paper, with some interesting ideas. The results only show marginal improvement across different settings. Summary: The paper builds on the idea of Dynamic Routing (DR) in the context of mixture of experts. The authors propose an unsupervised DR approach (coined as DivDR) to train several local experts on subsets of a training dataset. The paper is well written and well structured, even though it can benefit from a proofread. The qualitative experimental results are promising and demonstrate the efficacy of the proposed approach in clustering of data subsets and assignment of correct local experts. The impact on the bottom-line performance in different settings (object detection, semantic segmentation, and so on) is rather marginal. The paper has a coherent story, but lacks a solid theoretical deep dive into the mechanics of DivDR. Major Remarks: - Any theoretical guarantees, or intuitive analyses, on why the alternating between solving eq (3) and re-clustering would work? At least try to discuss this by drawing resemblance with similar approaches in the literature.  - Given that there is still enough space, I suggest summarizing the steps involved in Fig 2 and 3 (Subsection 3.2) in algorithmic form.  - $K$ seems to be an important parameter. How to optimize or tune this parameter? In Table 3, rather contradictory results are reported. Is it better to increase, decrease or optimize $K$, and how?  - Looking at the results on semantic segmentation, the standard (non-DR based) baselines do not represent the state-of-the-art performance on Cityscapes. That said, the improvement offered by DivDR-X is rather marginal (within 1%). And, the reduction in computational complexity in FLOPs is also on the marginal side. Even though the proposed approach does well in metric learning in the $\mathcal{A}()$ space, the impact on the bottom-line performance seems to be marginal. How would you justify adopting DivDR for semantic segmentation, e.g.? The same can be said about the object detection results, btw.  Minor modifications: - Another proofread would help to fix typos such as: "evcaluate" (Lines 66-67), "on subsets on subsets" (Lines 102-3), "of of accuracy" (Line 178) and so on.  - Please define the acronyms at first use: NAS (being neural architecture search), etc.  - What is $n$ in Line 167? - Please use references here: "As shown earlier, learning local experts can benefit performance both in terms of accuracy and computational cost" Rating: 6: Marginally above acceptance threshold Confidence: 4: The reviewer is confident but not absolutely certain that the evaluation is correct
\pdfoutput=1 \documentclass[runningheads]{llncs} \usepackage{tikz} \usepackage{comment} \usepackage{amsmath,amssymb} % \usepackage{color} \usepackage{tikz} \usepackage{graphicx} \usepackage{amsmath} \usepackage{amssymb} \usepackage{booktabs} \usepackage{wrapfig} \usepackage{subcaption} \DeclareMathOperator*{\argmax}{arg\,max} \DeclareMathOperator*{\argmin}{arg\,min} \usepackage{hyperref} \hypersetup{colorlinks,allcolors=black} \newcommand{\bibi}[1]{\todo[inline]{{\textbf{Bibi:} \emph{#1}}}} \newcommand{\bibir}[1]{\textcolor{red}{Bibi: #1}} \newcommand{\csabi}[1]{\todo[inline]{{\textbf{Csabi:} \emph{#1}}}} \newcommand{\csabir}[1]{\textcolor{red}{Csabi: #1}} \usepackage[capitalize]{cleveref} \crefname{section}{Sec.}{Secs.} \Crefname{section}{Section}{Sections} \Crefname{table}{Table}{Tables} \crefname{table}{Tab.}{Tabs.} \usepackage[accsupp]{axessibility} % \begin{document} \pagestyle{headings} \mainmatter \def\ECCVSubNumber{4} % \title{Diversified Dynamic Routing for Vision Tasks} % \titlerunning{Diversified Dynamic Routing for Vision Tasks} \author{Botos Csaba\inst{1} \and Adel Bibi\inst{1} \and Yanwei Li\inst{2} \and Philip Torr\inst{1} \and Ser-Nam Lim\inst{3} } \authorrunning{Botos Cs. et al.} \institute{University of Oxford, UK\\ \email{csbotos@robots.ox.ac.uk}\\\email{\{adel.bibi,philip.torr\}@eng.ox.ac.uk}\and The Chinese University of Hong Kong, HKSAR\\ \email{ywli@cse.cuhk.edu.hk}\\ \and Meta AI\\ \email{sernamlim@fb.com} } \maketitle \begin{abstract} Deep learning models for vision tasks are trained on large datasets under the assumption that there exists a universal representation that can be used to make predictions for all samples. Whereas high complexity models are proven to be capable of learning such representations, a mixture of experts trained on specific subsets of the data can infer the labels more efficiently. However using mixture of experts poses two new problems, namely (\textbf{i}) assigning the correct expert at inference time when a new unseen sample is presented. (\textbf{ii}) Finding the optimal partitioning of the training data, such that the experts rely the least on common features. In Dynamic Routing (DR)~\cite{li2020learning} a novel architecture is proposed where each layer is composed of a set of experts, however without addressing the two challenges we demonstrate that the model reverts to using the same subset of experts. In our method, Diversified Dynamic Routing (DivDR) the model is explicitly trained to solve the challenge of finding relevant partitioning of the data and assigning the correct experts in an unsupervised approach. We conduct several experiments on semantic segmentation on Cityscapes and object detection and instance segmentation on MS-COCO showing improved performance over several baselines. \end{abstract} \section{Introduction} In recent years, deep learning models have made huge strides solving complex tasks in computer vision, e.g. segmentation~\cite{long2015fully,chen2017deeplab} and detection~\cite{fastrcnn,fasterrcnn}, and reinforcement learning, e.g. playing atari games~\cite{mnih2013atari}. Despite this progress, the computational complexity of such models still poses a challenge for practical deployment that requires accurate real-time performance. This has incited a rich body of work tackling the accuracy complexity trade-off from various angles. 
For instance, a class of methods tackles this trade-off by developing more efficient architectures~\cite{tan2019efficientnet,yu2018bisenet}, while others initially train larger models and later distill them into smaller, more efficient models~\cite{hinton2015distilling,xie2020self,gou2021knowledge}. Moreover, several works rely on sparse regularization approaches~\cite{wan2013regularization,ding2021hr,shaw2019squeezenas} during training or perform post-training pruning of model weights that contribute marginally to the final prediction. While listing all categories of methods tackling this trade-off is beyond the scope of this paper, to the best of our knowledge, they all share the assumption that predicting the correct label requires a universal set of features that works best for all samples. We argue that such an assumption is often broken even in well-curated datasets. For example, in the task of segmentation, object sizes can vary widely across the dataset, requiring different computational effort to process. That is to say, large objects can be easily processed at lower resolutions while smaller objects require processing in high resolution to retain accuracy. This opens the door for a class of methods that rely on \textit{local experts}: efficient models trained directly on each subset separately, leveraging this local bias. However, prior art often ignores local biases in the training and validation datasets when tackling the accuracy-efficiency trade-off, for two key reasons illustrated in Figure \ref{fig:pull-figure}. (\textbf{i}) Even under the assumption that such local biases in the training data are known, at inference time new unseen samples need to be assigned to the correct local subset so as to use the corresponding \textit{local expert} for prediction (Figure \ref{fig:pull-figure}, left). (\textbf{ii}) Such local biases in datasets are not known \textbf{a priori} and may require a prohibitively expensive inspection of the underlying dataset (Figure \ref{fig:pull-figure}, right). In this paper, we take an orthogonal direction to prior art on the accuracy-efficiency trade-off by addressing the two challenges in an unsupervised manner. In particular, we show that training \textit{local experts} on learnt subsets sharing local biases can jointly outperform \textit{global experts}, i.e. models that were trained over the entire dataset. Our contributions are twofold. \begin{enumerate} \item We propose Diversified Dynamic Routing (DivDR), an unsupervised learning approach that trains several local experts on learnt subsets of the training dataset. At inference time, DivDR assigns the correct local expert to newly unseen samples for prediction. \item We extensively evaluate DivDR and compare against several existing methods on semantic segmentation, object detection and instance segmentation on various datasets, i.e. Cityscapes~\cite{cordts2016cityscapes} and MS-COCO~\cite{lin2014microsoft}. We find that, compared to existing methods, DivDR better trades off accuracy and efficiency. We complement our experiments with various ablations demonstrating the robustness of DivDR to choices of hyperparameters. \end{enumerate} \begin{figure} \centering \includegraphics[width=.7\textwidth]{figures/banner.pdf} \caption{The figure depicts the two main challenges in learning local experts on subsets of the dataset with local biases.
First, even when the subsets of the training dataset are given and there is a local expert per subset, the challenge remains of assigning the correct local expert to new unseen samples (left). The second challenge is that the local biases in the training data are not available during training time (right).} \label{fig:pull-figure} \end{figure} \section{Related Work} \label{sec:related} In the prior literature, model architectures were predominantly hand-designed, meaning that hyper-parameters such as the number and width of layers and the size and stride of convolution kernels were predefined. In contrast, Neural Architecture Search~\cite{zoph2016neural,liu2018darts} revealed that searching over said hyper-parameter space is feasible given enough data and compute power, resulting in substantial improvements in model accuracy. Recently, a line of research~\cite{li2019partial,liu2019auto,chen2018searching,tan2019efficientnet,veit2018convolutional} also proposed to constrain the search space to cost-efficient models that jointly optimize the accuracy and the computational complexity of the models. Concurrently, cost-efficient inference has also been the focus of works on dynamic network architectures~\cite{mullapudi2018hydranets,you2019gate,wang2018skipnet,wu2018blockdrop}, where the idea is to allow the model to choose different architectures based on the input through gating computational blocks during inference. For example, Li et al.~\cite{li2020learning} proposed an end-to-end dynamic routing framework that generates routes within the architecture that vary per input sample. The search space of~\cite{li2020learning}, inspired by Auto-DeepLab~\cite{liu2019auto}, allows exploring spatial up- and down-sampling between subsequent layers, which distinguishes the work from prior dynamic routing methods. One common failure mode of dynamic models is mentioned in~\cite{mullapudi2018hydranets}, where during the initial phase of the training only a specific set of modules is selected and trained, leading to a static model with reduced capacity. This issue is addressed by Mullapudi \textit{et al.}~\cite{mullapudi2018hydranets} through clustering the training data in advance based on latent representations of a pretrained image classifier model, whereas~\cite{veit2018convolutional} uses the Gumbel-Softmax reparameterization~\cite{jang2016categorical} to improve the diversity of the dynamic routes. In this work, to mitigate this problem, we adopt the metric learning Magnet Loss~\cite{rippel2015metric}, which improves over metric learning methods that act on the instance level, e.g. Triplet Loss~\cite{weinberger2009distance,koch2015siamese} and Contrastive Learning methods~\cite{chopra2005learning,hadsell2006dimensionality}, because it considers the complete distribution of the underlying data, resulting in more stable clustering. To adapt Magnet Loss to resolving the Dynamic Routing drawbacks, we use it as an unsupervised approach to increase the distance between the forward paths learned by the Dynamic Routing model, as opposed to clustering the learned representations, i.e. we learn clustered dynamic routes rather than clustered representations. We now review recent advances in semantic segmentation and object detection, which are used to validate our method in this work.
For semantic segmentation, numerous works have been proposed to capture the larger receptive field~\cite{zhao2017pyramid,chen2017deeplab,chen2017rethinking,chen2018encoder} or establish long-range pixel relation~\cite{zhao2018psanet,huang2018ccnet,song2019learnable} based on Fully Convolutional Networks~\cite{long2015fully}. As mentioned above, with the development of neural network, Neural Architecture Search (NAS)-based approaches~\cite{chen2018searching,liu2019auto,nekrasov2019fast} and dynamic networks~\cite{li2020learning} are utilized to adjust network architecture according to the data while being jointly optimized to reduce the cost of inference. As for object detection, modern detectors can be roughly divided into one-stage or two-stage detectors. One-stage detectors usually make predictions based on the prior guesses, like anchors~\cite{redmon2016you,lin2017focal} and object centers~\cite{tian2019fcos,zhou2019objects}. Meanwhile, two-stage detectors predict boxes based on predefined proposals in a coarse-to-fine manner~\cite{girshick2014rich,fastrcnn,fasterrcnn}. There are also several advances in Transformer-based approaches for image recognition tasks such as segmentation~\cite{zheng2021rethinking,xie2021segformer} and object detection~\cite{carion2020end,zhu2020deformable}, and while our method can be generalized to those architectures as well, it is beyond the scope of this paper. \section{DivDR: Diversified Dynamic Routing} \label{sec:method} We first start by introducing Dynamic Routing. Second, we formulate our objective of the iterative clustering of the dataset and the learning of experts per dataset cluster. At last, we propose a contrastive learning approach based on \textit{magnet loss}~\cite{rippel2015metric} over the gate activation of the dynamic routing model to encourage the learning of different architectures over different dataset clusters. \subsection{Dynamic Routing Preliminaries} The Dynamic Routing (DR)~\cite{li2020learning} model for semantic segmentation consists of $L$ sequential feed-forward layers in which dynamic \emph{nodes} process and propagate the information. Each dynamic node has two parts: (\textbf{i}) the \emph{cell} that performs a non-linear transformation to the input of the node; and (\textbf{ii}) the \emph{gate} that decides which node receives the output of the cell operation in the subsequent layer. In particular, the gates in DR determine what resolution/scale of the activation to be used. That is to say, each gate determines whether the activation output of the cell is to be propagated at the same resolution, up-scaled, or down-scaled by a factor of $2$ in the following layer. Observe that the gate activation determines the \textit{architecture} for a given input since this determines a unique set of connections defining the architecture. The output of the final layer of the nodes are up-sampled and fused by $1 \times 1$ convolutions to match the original resolution of the input image. For an input-label pair $(x,y)$ in a dataset $\mathcal{D}$ of $N$ pairs, let the DR network parameterized by $\theta$ be given as $f_\theta : \mathcal{X} \rightarrow \mathcal{Y}$ where $x \in \mathcal{X}$ and $y \in \mathcal{Y}$. Moreover, let $\mathcal{A}_{\tilde{\theta}} : \mathcal{X} \rightarrow [0,1]^n$, where $\theta \supseteq \tilde{\theta}$, denote the gate activation map for a given input, i.e. 
the gates determining the architecture discussed earlier, then the training objective for DR networks under computational budget constraints has the following form: \begin{equation} \mathcal{L}_{DR}= \frac{1}{N} \sum_{i=1}^N \mathcal{L}_{seg}\big(f_\theta(x_i), y_i\big)+ \lambda\mathcal{L}_{cost}(\mathcal{A}_{\tilde{\theta}}(x_i)). \end{equation} \noindent We will drop the subscript $\tilde{\theta}$ throughout to reduce text clutter. Note that $\mathcal{L}_{seg}$ and $\mathcal{L}_{cost}$ denote the segmentation loss and the computational budget constraint, respectively. Observe that when most of the gate activations are sparse, the resulting network is more efficient, possibly at the expense of accuracy; the penalty $\lambda$ controls this trade-off. \begin{figure}[t] \centering \includegraphics[width=.8\textwidth]{figures/kmeans-assignment.pdf} \caption{ \textbf{Gate Activation cluster assignment.} To update the local experts, DivDR performs K-means clustering on the gate activations over the $\mathcal{A}(x_i)~\forall i$ in the training examples with fixed model parameters $\theta$.} \label{fig:kmeans-assign} \end{figure} \begin{figure}[t] \centering \includegraphics[width=.8\textwidth]{figures/gate-activation-diversification.pdf} \caption{ \textbf{Gate Activation Diversification.} We use the labels from the cluster assignment to reduce the \textit{intra-cluster} variance and increase the \textit{inter-cluster} variance by updating model parameters $\theta$.} \label{fig:kmeans-diversify} \end{figure} \subsection{Metric Learning in $\mathcal{A}$-space} Learning local experts can benefit performance both in terms of accuracy and computational cost. We propose an unsupervised approach to jointly learning the subsets of the dataset and the soft assignment of the corresponding architectures. We use the DR framework for our approach. We first assume that there are $K$ clusters in the dataset and seek to learn an expert for each. Moreover, let $\{\mu_{\mathcal{A}_i}\}_{i=1}^K$ denote the cluster centers representing $K$ different gate activations. Note that as per the previous discussion, each gate activation $\mu_{\mathcal{A}_i} \in [0,1]^n$ corresponds to a unique architecture. The set of cluster centers representing gate activations $\{\mu_{\mathcal{A}_i}\}_{i=1}^K$ can be viewed as a set of prototypical architectures for $K$ different subsets of the dataset. Next, let $\mu(x)$ denote the nearest gate activation center to the gate activation $\mathcal{A}(x)$, i.e. $\mu(x) = \argmin_i \|\mathcal{A}(x) - \mu_{\mathcal{A}_i}\|$. Now, we seek to solve for both the gate activation centers $\{\mu_{\mathcal{A}_i}\}_{i=1}^K$ and the parameters $\theta$ such that the gate activation centers are pushed away from one another. To that end, we propose alternating between clustering and minimizing a \textit{magnet loss}~\cite{rippel2015metric} variant. In particular, for a given fixed set of gate activation centers $\{\mu_{\mathcal{A}_i}\}_{i=1}^K$, we consider the following loss function: \begin{equation} \begin{aligned} \mathcal{L}_{\text{clustering}}(\mathcal{A}(x_i))&= \Bigg\{ \alpha + \frac{1}{2\sigma^2} \|\mathcal{A}(x_i)-\mu(x_i)\| \\ & + \log\left( \sum_{k : \mu_{\mathcal{A}_k} \neq \mu(x_i)} e^{ -\frac{1}{2\sigma^2} \|\mathcal{A}(x_i) - \mu_{\mathcal{A}_k}\| }\right) \Bigg\}_+. \end{aligned} \end{equation} \noindent Note that $\{x\}_+ = \max(x,0)$, $\sigma^2 = \frac{1}{N-1}\sum_{i}^N \|\mathcal{A}(x_i) - \mu(x_i)\|^2$, and that $\alpha \ge 0$.
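For concreteness, the following is a minimal NumPy sketch of the clustering term exactly as written above. It is an illustrative rendering only (the function and variable names are ours, not part of any released implementation), and in practice this term would be computed in a differentiable framework so that gradients can flow back to $\theta$.

\begin{verbatim}
import numpy as np

def clustering_loss(A, centers, alpha=1.0):
    """Magnet-loss-style clustering term over gate activations.

    A       : (N, n) gate-activation vectors A(x_i)
    centers : (K, n) fixed gate-activation cluster centers mu_A
    alpha   : separation margin (alpha >= 0)
    """
    dists = np.linalg.norm(A[:, None, :] - centers[None, :, :], axis=-1)  # (N, K)
    nearest = dists.argmin(axis=1)                                        # index of mu(x_i)
    d_near = dists[np.arange(len(A)), nearest]
    sigma2 = np.sum(d_near ** 2) / (len(A) - 1)                           # variance term

    # Sum over all centers other than the nearest one (the "pushing" term).
    other = np.where(np.arange(centers.shape[0])[None, :] == nearest[:, None], np.inf, dists)
    log_sum = np.log(np.exp(-other / (2.0 * sigma2)).sum(axis=1))

    per_sample = np.maximum(alpha + d_near / (2.0 * sigma2) + log_sum, 0.0)  # hinge {.}_+
    return per_sample.mean()                                                 # averaged over samples

# Tiny smoke test with random stand-ins for A(x_i) and the K centers.
rng = np.random.default_rng(0)
print(clustering_loss(rng.random((32, 183)), rng.random((3, 183))))
\end{verbatim}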
Observe that unlike in \textit{magnet loss}, we seek to cluster the set of architectures by separating the gate activations. Note that the penultimate term pulls the architecture closer to the most similar prototypical architecture while the last term pushes it away from all other architectures. Therefore, this loss incites the learning of $K$ different architectures, where each input $x_i$ will be assigned to one of the $K$ learnt architectures for prediction. To that end, our overall \textit{Diversified} DR loss is given as follows: \begin{equation} \begin{aligned} \mathcal{L}_{\text{DivDR}} = \frac{1}{N}\sum_{i=1}^N & \mathcal{L}_{seg}(f_\theta(x_i),y_i) + \lambda_1 \mathcal{L}_{cost}(\mathcal{A}(x_i)) + \lambda_2\mathcal{L}_{clustering}(\mathcal{A}(x_i)). \end{aligned} \end{equation} We then alternate between minimizing $\mathcal{L}_{\text{DivDR}}$ over the parameters $\theta$ and updating the cluster centers $\{\mu_{\mathcal{A}_i}\}_{i=1}^K$. In particular, given $\theta$, we update the gate activation centers by performing K-means clustering~\cite{macqueen1967some} over the gate activations. That is to say, we fix $\theta$ and perform K-means clustering with $K$ clusters over all the gate activations from the dataset $\mathcal{D}$, i.e. we cluster $\mathcal{A}(x_i)~\forall i$ as shown in Figure \ref{fig:kmeans-assign}. Moreover, alternating between optimizing $\mathcal{L}_{\text{DivDR}}$ and updating the gate activation cluster centers over the dataset $\mathcal{D}$, illustrated in Figure~\ref{fig:kmeans-diversify}, results in a diversified set of architectures driven by the data that are more efficient, i.e. learning $K$ local experts that are accurate and efficient. \section{Experiments} \label{sec:experiments} We show empirically that our proposed DivDR approach can outperform existing methods in better trading off accuracy and efficiency. We demonstrate this on several vision tasks, i.e. semantic segmentation, object detection, and instance segmentation. We first introduce the datasets used in all experiments along with the implementation details. We then present the comparisons between DivDR and several other methods along with several ablations. \subsection{Datasets} We mainly validate the effectiveness of the proposed approach for semantic segmentation, object detection, and instance segmentation on two widely-adopted benchmarks, namely the Cityscapes~\cite{cordts2016cityscapes} and Microsoft COCO~\cite{lin2014microsoft} datasets. \vspace{0.5em} \noindent \textbf{Cityscapes}. The Cityscapes~\cite{cordts2016cityscapes} dataset contains 19 classes in urban scenes and is widely used for semantic segmentation. It consists of 5000 finely annotated images that are divided into 2975, 500, and 1525 images for training, validation, and testing, respectively. In this work, we use the Cityscapes dataset to validate the proposed method on semantic segmentation. \vspace{0.5em} \noindent \textbf{COCO}. The Microsoft COCO~\cite{lin2014microsoft} dataset is a well-known object detection benchmark which contains 80 categories in common context. In particular, it includes 118k training images, 5k validation images, and 20k held-out testing images. To demonstrate generalization, we report results on COCO's validation set for both the object detection and instance segmentation tasks. \begin{table*}[t] \centering \caption{ Comparison with baselines on the Cityscapes~\cite{cordts2016cityscapes} validation set.
* Scores from~\cite{li2020learning} were reproduced using the \href{https://github.com/Megvii-BaseDetection/DynamicRouting}{official implementation}. The evaluation settings are identical to~\cite{li2020learning}. We calculate the average FLOPs with $1024\times 2048$ size input. } \begin{tabular}{lc@{\hskip 0.1in}c@{\hskip 0.1in}r} \toprule \textbf{Method} & \textbf{Backbone} & \textbf{$\mathbf{mIoU}_{val}(\%)$} & \textbf{GFLOPs} \\ \midrule BiSenet~\cite{yu2018bisenet} & ResNet-18 & 74.8 & 98.3 \\ DeepLabV3~\cite{chen2017rethinking} & ResNet-101-ASPP & 78.5 & 1778.7 \\ Semantic FPN~\cite{kirillov2019panoptic} & ResNet-101-FPN & 77.7 & 500.0 \\ DeepLabV3+~\cite{chen2018encoder} & Xception-71-ASPP & 79.6 & 1551.1 \\ PSPNet~\cite{zhao2017pyramid} & ResNet-101-PSP & 79.7 & 2017.6 \\ Auto-DeepLab~\cite{liu2019auto} & Searched-F20-ASPP & 79.7 & 333.3 \\ Auto-DeepLab~\cite{liu2019auto} & Searched-F48-ASPP & 80.3 & 695.0 \\ \midrule DR-A~\cite{li2020learning}* & Layer16 & 72.7$\pm$0.6 & 58.7$\pm$3.1 \\ DR-B~\cite{li2020learning}* & Layer16 & 72.6$\pm$1.3 & 61.1$\pm$3.3 \\ DR-C~\cite{li2020learning}* & Layer16 & 74.2$\pm$0.6 & 68.1$\pm$2.5 \\ DR-Raw~\cite{li2020learning}* & Layer16 & 75.2$\pm$0.5 & 99.2$\pm$2.5 \\ \midrule DivDR-A & Layer16 & 73.5$\pm$0.4 & 57.7$\pm$3.9 \\ DivDR-Raw & Layer16 & 75.4$\pm$1.6 & 95.7$\pm$0.9 \\ \bottomrule \end{tabular} \label{tab:full-cityscapes-comp} \end{table*} \subsection{Implementation Details} In all training settings, we use SGD with a weight decay of $10^{-4}$ and momentum of $0.9$ for both datasets. For semantic segmentation on Cityscapes, we use the exponential learning rate schedule with an initial rate of $0.05$ and a power of $0.9$. For fair comparison, we follow the setting in~\cite{li2020learning} and use a batch size $8$ of random image crops of size $768\times768$ and train for $180K$ iterations. We use random flip augmentations where input images are scaled from $0.5$ to $2$ before cropping. For object detection on COCO we use an initial learning rate of $0.02$ and re-scale the shorter edge to 800 pixels and train for 90K iterations. Following prior art, random flip is adopted without random scaling. \subsection{Semantic Segmentation}~\label{sec:experiment_seg} \begin{figure}[t] \centering \includegraphics[width=.9\textwidth]{figures/k-tsne} \caption{ Visualizing the $183$-dimensional $\mathcal{A}$-space of Dynamic Routing backbones trained for semantic segmentation on Cityscapes~\cite{cordts2016cityscapes} (\textit{top}) and $198$-dimensional $\mathcal{A}$-space for object detection on COCO~\cite{lin2014microsoft} (\textit{bottom}) using t-SNE~\cite{van2008visualizing}. \textit{Left:} varying number of \textit{local experts}, $K=2,3,4$. \textit{Right:} joint t-SNE visualization of architectures of Dynamic Routing~\cite{li2020learning} (\textit{orange}) and our approach (\textit{blue}). It is clear that our method not only encourages diversity of the learned routes but also reduces variance in a specific cluster. Low \textit{intra}-cluster variance is beneficial because it facilitates feature sharing between similar tasks } \label{fig:k-tsne} \end{figure} \begin{table}[t] \centering \caption{Quantitative analysis of semantic segmentation on Cityscapes~\cite{cordts2016cityscapes}. 
We report the \textit{Inter}- and \textit{Intra}-cluster variance, which show how far the cluster centers are from each other in $L_2$ space and how close the samples are to their cluster centers, respectively.} \begin{tabular}{@{}l@{\hskip 0.1in}l@{\hskip 0.1in}c@{\hskip 0.1in}l@{\hskip 0.1in}c@{}} \toprule \textbf{method} & \textbf{mIoU} & \textbf{FLOPs} & \textbf{Inter} & \textbf{Intra} \\ \midrule DR-A & 72.7 & 58.7 & 0.4 & 0.3 \\ DivDR-A & 72.0 & 49.9 & 0.6 & 0.2 \\ \midrule DR-Raw & 75.2 & 99.2 & 1.5 & 1.5 \\ DivDR-Raw & 75.7 & 98.3 & 1.2 & 0.5 \\ \bottomrule \end{tabular} \label{table:inter_v_intra} \end{table} We show the benefits of our proposed DivDR, which alternates between training with $\mathcal{L}_{\text{DivDR}}$ and computing the gate activation clusters through K-means, on Cityscapes \cite{cordts2016cityscapes} for semantic segmentation. In particular, we compare two versions of our proposed unsupervised Dynamic Routing, namely without and with the computational cost constraint ($\lambda_1=0$ denoted as DivDR-Raw and $\lambda_1=0.8$ denoted as DivDR-A), against several variants of the original dynamic routing networks, both constrained and unconstrained. All experiments are averaged over 3 seeds. As observed in Table \ref{tab:full-cityscapes-comp}, while both unconstrained variants perform similarly in terms of accuracy (DR-Raw: $75.2\%$, DivDR-Raw: $75.4\%$), DivDR marginally improves the computational cost by $3.5$ GFLOPs. On the other hand, when introducing the cost-efficiency constraint, DivDR-A improves both the efficiency ($58.7$ GFLOPs to $57.7$ GFLOPs) and the accuracy ($72.7\%$ to $73.5\%$) as compared to DR-A. Finally, comparing to other state-of-the-art methods, our unconstrained approach performs similarly to BiSeNet~\cite{yu2018bisenet} (74.8\% accuracy) while being more computationally efficient (95.7 GFLOPs vs. 98.3 GFLOPs). \paragraph{\textbf{Visualizing Gate Activations}.} We start by visualizing the gate activations under different choices of the number of clusters $K$ over the gate activation for DivDR-A. As observed from Figure \ref{fig:k-tsne}, our proposed $\mathcal{L}_{\text{DivDR}}$ indeed results in clusters of local experts, as shown by the different gate activations $\mathcal{A}$ for $k \in \{2,3,4\}$. Moreover, we also observe that our proposed loss not only results in separated clusters of local experts, i.e. gate activations, but also in small intra-cluster distances. In particular, as shown in Table \ref{table:inter_v_intra}, our proposed DivDR indeed results in inter-cluster distances that are larger than the intra-cluster distances. The inter-cluster distances are computed as the average distance over all pairs of cluster centers, i.e. $\{\mu_{\mathcal{A}_i}\}_{i=1}^K$, while the intra-cluster distances are the average distances over all pairs in every cluster. This confirms that our proposed training approach results in $K$ different architectures for a given dataset. Consequently, we can group the corresponding input images into $K$ classes and visualize them to reveal common semantic features across the groups. For details see Fig~\ref{fig:cluster-examples}. We find it interesting that although we do not provide any direct supervision to the gates about the objects present in the images, the clustering learns to form semantically meaningful groups.
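For reference, the following is a small sketch of how the \textit{Inter}/\textit{Intra} statistics of Table \ref{table:inter_v_intra} could be computed under the description above (average pairwise distance between cluster centers, and average pairwise distance between samples sharing a cluster, averaged over clusters). The names and the exact aggregation are illustrative and reflect our reading of the text rather than any released evaluation code.

\begin{verbatim}
import numpy as np
from itertools import combinations

def inter_intra_distances(A, labels, centers):
    """A: (N, n) gate activations, labels: (N,) cluster ids, centers: (K, n) centers."""
    # Inter: average pairwise L2 distance between the K cluster centers.
    inter = np.mean([np.linalg.norm(centers[i] - centers[j])
                     for i, j in combinations(range(len(centers)), 2)])
    # Intra: average pairwise L2 distance between samples within each cluster.
    per_cluster = []
    for k in range(len(centers)):
        members = A[labels == k]
        pairs = list(combinations(range(len(members)), 2))
        if pairs:
            per_cluster.append(np.mean([np.linalg.norm(members[i] - members[j])
                                        for i, j in pairs]))
    return float(inter), float(np.mean(per_cluster))
\end{verbatim}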
\begin{figure} \centering \includegraphics[width=.6\textwidth]{figures/supplementary-cluster_defaced.png} \caption{ Visualization of images from the validation set of the MS-COCO 2017~\cite{lin2014microsoft} challenge. In this training, $K=3$, and we visualize the top-$5$ images that fall closest to their respective cluster centers $\mu_i$. Note that the dataset does not provide subset-level annotations; however, our method uses different pathways to process images containing meals (\textit{top row}), objects with wheels and outdoor scenes (\textit{middle row}) and electronic devices (\textit{bottom row}). } \label{fig:cluster-examples} \end{figure} \paragraph{\textbf{Ablating $\alpha$ and $\lambda_2$.}} Moreover, we also ablate $\alpha$, the separation margin in the hinge loss term of our proposed loss. Observe that larger values of $\alpha$ correspond to stronger regularization of the separation between gate activation clusters. As shown in Figure \ref{fig:semseg-ablation-alpha-lambda} (left), we observe that the mIoU accuracy and the FLOPs of our DivDR-A are only marginally affected by $\alpha$, indicating that a sufficient margin can be attained while maintaining the accuracy-FLOPs trade-off. \begin{figure}[t] \centering \includegraphics[width=\textwidth]{figures/semseg-lambda-and-alpha.pdf} \caption{Ablation on the $\alpha$ (\textit{left}) and $\lambda_2$ (\textit{right}) parameter of the diversity loss term for Semantic Segmentation. The \textit{mean} accuracy in the parameter sweep for $\lambda_2$ is higher since in each case the best-performing $\alpha$ was used for training. We can see that the method is stable regardless of the choice of these parameters across various tasks. } \label{fig:semseg-ablation-alpha-lambda} \end{figure} \begin{table}[ht] \caption{ Quantitative comparison with Dynamic Routing~\cite{li2020learning} trained without the objective to diversify the paths, using various $K$ for the clustering term. We omit $K=1$ from our results as it reverts to forcing the model to use the same architecture, independent of the input image. Instead, we report the baseline scores from~\cite{li2020learning}. For comparison, we report the best Dynamic Routing~\cite{li2020learning} scores from 3 identical runs with different seeds.
} \label{tab:coco-k} \begin{subtable}{.5\linewidth} \caption{DivDR-A} \centering \begin{tabular}{@{}r@{\hskip 0.1in}c@{\hskip 0.1in}c@{\hskip 0.1in}c@{\hskip 0.1in}c@{}} \toprule \textbf{K} & \textbf{mAP}$_{val}$ & \textbf{GFLOPs} & \textbf{Inter} & \textbf{Intra} \\ \midrule * & 34.6 & 23.2 & 0.2 & 0.3 \\ \midrule 2 & \textbf{35.1} & 21.9 & 1.1 & 0.4 \\ 3 & 35.0 & \textbf{19.2} & 0.8 & 0.3 \\ 4 & 34.9 & 20.0 & 0.6 & 0.1 \\ \bottomrule \end{tabular} \end{subtable} \begin{subtable}{.5\linewidth} \caption{DivDR-Raw} \centering \begin{tabular}{@{}r@{\hskip 0.1in}c@{\hskip 0.1in}c@{\hskip 0.1in}c@{\hskip 0.1in}c@{}} \toprule \textbf{K} & \textbf{mAP}$_{val}$ & \textbf{GFLOPs} & \textbf{Inter} & \textbf{Intra} \\ \midrule * & 37.8 & 38.2 & 0.5 & 0.7 \\ \midrule 2 & 36.5 & \textbf{31.0} & 0.6 & 0.5 \\ 3 & 37.4 & 32.6 & 1.2 & 0.5 \\ 4 & \textbf{38.1} & 32.8 & 0.7 & 0.2 \\ \bottomrule \end{tabular} \end{subtable} \end{table} \subsection{Object Detection and Instance Segmentation}\label{sec:experiment_det} \label{subsec:coco} \input{tables/coco-det} \input{tables/coco-seg} To further demonstrate the effectiveness on detection and instance segmentation, we validate the proposed method on the COCO dataset with Faster R-CNN~\cite{fasterrcnn} and Mask R-CNN~\cite{he2017mask} heads. As for the backbone, we extend the original dynamic routing networks with another 5-stage layer to keep consistency with FPN~\cite{lin2017feature}, bringing 17 layers in total. Similar to Sec.~\ref{sec:experiment_seg}, no external supervision is provided to our proposed DivDR during training. As presented in Tables~\ref{tab:coco-det} and \ref{tab:coco-seg}, we conduct experiments with two different settings, namely without and with computational cost constraints. We illustrate the overall improvement over DR~\cite{li2020learning} across various hyper-parameters in Fig.~\ref{fig:coco-scatter}. \paragraph{\textbf{Detection.}} Given no computational constraints, DivDR attains 38.1\% mAP with 32.9 GFLOPs as opposed to 37.7$\%$ mAP for DR-R. While the average precision is similar, we observe a noticeable computational reduction of 5.3 GFLOPs. Compared with the ResNet-50-FPN backbone, DivDR achieves similar performance with a small gain of 0.2$\%$, but at a fraction of the GFLOPs (32.9 GFLOPs vs. 95.7 GFLOPs). When we introduce the computational regularization, the cost is reduced to 19.8 GFLOPs while the performance is preserved at 35.4\% mAP. Compared with DR-A, we observe that while constrained DivDR has 1.1 lower GFLOPs, it enjoys an improved precision of 3.3$\%$ (35.4\% mAP vs. 32.1\% mAP) with a lower standard deviation. We believe that this is due to the local experts learnt for separate subsets of the data. \paragraph{\textbf{Instance Segmentation.}} As for the task of instance segmentation, as observed in Table \ref{tab:coco-seg}, unconstrained DivDR performs similarly to DR-R with 35.1\% mAP. However, DivDR better trades off the GFLOPs, with 32.9 GFLOPs in the unconstrained regime as opposed to 38.2 GFLOPs. This is similar to the observations made in the detection experiments. Moreover, when computational constraints are introduced, DivDR enjoys similar GFLOPs to DR-A but with a 1.6\% improvement in precision (33.4\% mAP vs. 31.8\% mAP). \begin{figure}[t] \centering \includegraphics[width=\textwidth]{figures/det-lambda-and-alpha.pdf} \caption{Ablation on the $\alpha$ (\textit{left}) and $\lambda_2$ (\textit{right}) parameter of the diversity loss term for Object Detection.
We can see that the method is stable regardless of the choice of these parameters across various tasks. } \label{fig:det-ablation-alpha-lambda} \end{figure} \input{figures/coco-scatter} \paragraph{\textbf{Ablating} $K$.} We compare the performance of our proposed DivDR under different choices of the number of clusters $K$ over the gate activations, in both the constrained and unconstrained computational settings, i.e. DivDR-A and DivDR-Raw, respectively. We note that our proposed $\mathcal{L}_{\text{DivDR}}$ effectively separates the gate activation cluster centers, as shown in Figure~\ref{fig:k-tsne}. Moreover, we also observe that our proposed loss not only results in separated clusters of local experts, but also in small intra-cluster distances, as shown in Table \ref{tab:coco-k}. In particular, we observe that our proposed DivDR results in inter-cluster distances that are larger than the intra-cluster distances (in contrast with DR~\cite{li2020learning}). \paragraph{\textbf{Ablating $\alpha$ and $\lambda_2$}.} As shown in Figure \ref{fig:det-ablation-alpha-lambda}, we observe that the choice of both $\alpha$ and $\lambda_2$ only marginally affects the performance of DivDR-A in terms of mAP on the object detection task. However, we find that $\lambda_2 >0.5$ starts to affect the mAP in the reduced-computation regime. \section{Discussion and Future Work} \label{conclusion} In this paper we demonstrate the superiority of networks trained on subsets of the training set that share similar properties, which we refer to as \textit{local experts}. We address the two main challenges of training and employing local experts in real-life scenarios, where subset labels are available neither at training nor at test time. We then propose a method, called Diversified Dynamic Routing, that is capable of jointly learning local experts and subset labels without supervision. In a controlled study, where the subset labels are known, we showed that we can recover the original subset labels with $98.2\%$ accuracy while maintaining the performance of a hypothetical \textit{Oracle} model in terms of both accuracy and efficiency. To analyse how well this improvement translates to real-life problems, we conducted extensive experiments on complex computer vision tasks such as segmenting street objects in images taken from the driver's perspective, as well as detecting common objects in both indoor and outdoor scenes. In each scenario we demonstrate that our method outperforms Dynamic Routing~\cite{li2020learning}. Even though this approach is powerful in the sense that it can improve on a strong baseline, we are aware that the clustering method still assumes subsets of \textit{equal} and, more importantly, \textit{sufficient} size. If the dataset is significantly imbalanced w.r.t. local biases, the K-means approach might fail. One further limitation is that if the subsets are too small for the \textit{local experts} to learn generalizable representations, our approach might also fail to generalize. Finally, the search space of the architectures in this work is defined by Dynamic Routing~\cite{li2020learning}, which is heavily focused on scale variance. We believe that our work can be further generalized by analyzing and resolving the challenges mentioned above. \section{Acknowledgement} We thank Hengshuang Zhao for the fruitful discussions and feedback. This work is supported by the UKRI grant: Turing AI Fellowship EP/W002981/1 and EPSRC/MURI grant: EP/N019474/1.
We would also like to thank the Royal Academy of Engineering. Botos Csaba was funded by Facebook Grant Number DFR05540. \clearpage \bibliographystyle{splncs04} \bibliography{references} \clearpage \section{Supplementary Material} \subsection{Sensitivity to the number of iterations between K-means updates} In our early experiments we found that our method achieves satisfactory results if we keep the number of iterations between K-means updates low: $\leq 100$. With lower-frequency updates, the diversity between the cluster centers was not sufficiently large, leading to the trivial solution, i.e. the model architecture learning to ignore the input image. In Deep Clustering~\cite{caron2018deep} another technique is mentioned to avoid such trivial solutions, namely randomizing and manually altering the cluster centers in case they happen to be too close to each other. We did not employ such techniques in our method. On another note, we found that while the cluster centers change significantly during the early phases of training, the difference between two updates is less pronounced towards the end. This led to the hypothesis that an annealing policy on the update frequency might be more practical, as it could reduce the training time drastically; however, such a comparison is beyond the scope of this work. In all our experiments we use 50 iterations per K-means update. \subsection{Gathering gate activation values before or after the non-linear layer} We experimented with applying our method to the output of the final linear layer of each gate in our model. We found that even though much higher variances can be achieved in terms of intra-cluster and inter-cluster diversity metrics, most of these differences are marginalized by the final non-linear layer of the gates. In the most frequent case the model learned cluster centers with negative values, which are entirely ignored by the ReLU part of the non-linear function used by Dynamic Routing~\cite{li2020learning}. \clearpage \end{document}
https://openreview.net/forum?id=O2eyumb2ATn
https://arxiv.org/abs/2209.13071
Please evaluate the paper based on the provided evaluation, focusing on the theoretical analysis of the proposed approach, the impact on performance in different settings, and the optimization of the parameter K.
Nice new idea to process video with GCNs I can only recommend accepting this paper. It is highly relevant to the workshop for many reasons. 1) The authors propose GraphVid, a new idea based on GCNs to process video in an efficient way. 2) Apart from being a new and nice idea, it can offer state-of-the-art performance. 3) GraphVid uses not only optic-flow as prior knowledge but also many new specific data augmentation methods to extract information from the data in the most efficient way possible. 4) GraphVid allows making better use of the data and reducing the computational burden. 5) All ideas are supported by experiments. 6) The paper is very easy to read. ... Many others Rating: 10: Top 5% of accepted papers, seminal paper Confidence: 5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature
\documentclass[runningheads]{llncs} \usepackage{graphicx} \usepackage{tikz} \usepackage{comment} \usepackage{amsmath,amssymb} % \usepackage{color} \usepackage[accsupp]{axessibility} % \usepackage{times} \usepackage{epsfig} \usepackage{caption} \usepackage{subcaption} \usepackage{tabularx} \usepackage{makecell} \usepackage{cellspace} \usepackage{graphicx} \usepackage{wrapfig} \makeatletter \@namedef{ver@everyshi.sty}{} \makeatother \usepackage{tikz} \usepackage{pgfplots} \pgfplotsset{compat=1.17} \usetikzlibrary{pgfplots.groupplots} \usepackage{algorithm}% \usepackage{algpseudocode}% \newcommand{\methodname}{\emph{GraphVid}} \newcommand\dd[1]{\textcolor{red}{[DD: #1]}} \newcommand\ddd[1]{\textcolor{red}{#1}} \newcommand\ek[1]{\textcolor{blue}{#1}} \newcommand\lighten[1]{\textcolor{gray}{#1}} \DeclareMathOperator*{\argmax}{argmax} \DeclareMathOperator*{\argmin}{argmin} \def\Real{\mathbb{R}} \def\neighborhood{\mathcal{N}} \def\mathg{\mathcal{G}} \def\mathv{\mathcal{V}} \def\mathe{\mathcal{E}} \def\mathr{\mathcal{R}} \def\eg{\emph{e.g.}} \def\Eg{\emph{E.g.}} \def\ie{\emph{i.e.}} \def\Ie{\emph{I.e.}} \def\cf{\emph{c.f.}} \def\Cf{\emph{C.f.}} \def\etc{\emph{etc.}} \def\vs{\emph{vs.}} \def\wrt{w.r.t.} \def\dof{d.o.f.} \def\etal{\emph{et a.l}} \begin{document} \pagestyle{headings} \mainmatter \def\ECCVSubNumber{4861} % \title{\methodname:\ It Only Takes a Few Nodes to Understand a Video} % \titlerunning{\methodname:\ It Only Takes a Few Nodes to Understand a Video} \author{Eitan Kosman\orcidID{0000-0002-5538-0616} \and Dotan Di Castro} \authorrunning{E. Kosman and D. Di Castro} \institute{Bosch Center of AI, Haifa, Israel\\ \email{\{Eitan.Kosman,Dotan.DiCastro\}@bosch.com}} \maketitle \begin{abstract}\label{section:abstract} We propose a concise representation of videos that encode perceptually meaningful features into graphs. With this representation, we aim to leverage the large amount of redundancies in videos and save computations. First, we construct superpixel-based graph representations of videos by considering superpixels as graph nodes and create spatial and temporal connections between adjacent superpixels. Then, we leverage Graph Convolutional Networks to process this representation and predict the desired output. As a result, we are able to train models with much fewer parameters, which translates into short training periods and a reduction in computation resource requirements. A comprehensive experimental study on the publicly available datasets Kinetics-400 and Charades shows that the proposed method is highly cost-effective and uses limited commodity hardware during training and inference. \textbf{It reduces the computational requirements 10-fold} while achieving results that are comparable to state-of-the-art methods. We believe that the proposed approach is a promising direction that could open the door to solving video understanding more efficiently and enable more resource limited users to thrive in this research field. \end{abstract} \section{Introduction}\label{section:introduction} The field of video understanding has gained prominence thanks to the rising popularity of videos, which has become the most common form of data on the web. On each new uploaded video, a variety of tasks can be performed, such as tagging \cite{fernandez2017vits}, human action recognition \cite{pareek2021survey}, anomaly detection \cite{suarez2020survey}, etc. 
New video-processing algorithms are continuously being developed to automatically organize the web through the flawless accomplishment of the aforementioned tasks. Nowadays, Deep Neural Networks are the de-facto standard for video understanding \cite{oprea2020review}. However, with every addition of a new element to the training set (that is, a full training video), more resources are required in order to satisfy the enormous computational needs. On the one hand, the exponential increase in the amount of data raises concerns regarding our ability to handle it in the future. On the other hand, it has also spurred a highly creative research field aimed at finding ways to mitigate this burden. Among the first generation of video-processing methods were ones geared toward adopting 2D convolutional neural networks (CNNs), due to their computational efficiency \cite{simonyan2014two}. Others decomposed 3D convolutions \cite{du2017closer,xie2018rethinking} into simpler operators, or split a complex neural network into an ensemble of lightweight networks \cite{chen2018multi}. However, video understanding has greatly evolved since then, with the current state-of-the-art methods featuring costly attention mechanisms \cite{arnab2021vivit,girdhar2019video,liu2021video,akbari2021vatt,fan2021multiscale,bertasius2021space,li2021vidtr}. Beyond accuracy, a prominent advantage of the latest generation of methods is that they process raw data, that is, video frames that do not undergo any advanced pre-processing. Meanwhile, pursuing new video representations and incorporating pre-computed features to accelerate training is a promising direction that requires more extensive research. \newcommand{\thumbwidth}{0.2} \newcommand{\thumbheight}{1.2in} \begin{figure}[ht] \centering \begin{subfigure}[b]{0.4\linewidth} \centering \includegraphics[width=0.4\linewidth]{figures/guitarist.jpg} \caption{Original image} \label{fig:original_intro} \end{subfigure} \begin{subfigure}[b]{0.4\linewidth} \centering \includegraphics[width=0.4\linewidth]{figures/example_superpixels.png} \caption{Mean superpixels} \label{fig:superpixels_intro} \end{subfigure} \caption{A visual comparison between a pixel and a mean-superpixel representation. On the left, the original image is presented. On the right, we present the image formed by generating superpixel regions using SLIC and filling each region with its mean color.} \label{fig:superpixels_example} \end{figure} Prior to the renaissance of deep learning \cite{lecun2015deep}, much research was done on visual feature generation. Two prominent visual feature generation methods are superpixels\footnote{Superpixel techniques segment an image into regions by considering similarity measures, defined using perceptual features.} and optic-flow\footnote{Optic-flow is the pattern of the apparent motion of an object(s) in the image between two consecutive frames due to the movement of the object or the camera.}. These techniques' ability to encode perceptually meaningful features has greatly contributed to the success of computer vision algorithms. Superpixels provide a convenient, compact representation of images that can be very useful for computationally demanding problems, while optic-flow provides hints about motion. We rely on these methods to construct a novel representation of videos that encodes sufficient information for video understanding: 1) adjacent pixels are grouped together in the form of superpixels, and 2) temporal relations and proximities are expressed via graph connectivity.
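To make the mean-superpixel rendering of Figure \ref{fig:superpixels_example} concrete, the following is a minimal sketch, assuming scikit-image is available, of how a single frame could be segmented with SLIC, filled with per-region mean colors, and reduced to per-superpixel attributes (mean RGB color and normalized centroid). The example image and parameter values are illustrative only, not the settings used in the paper.

\begin{verbatim}
import numpy as np
from skimage import data
from skimage.segmentation import slic

frame = data.astronaut()                                  # stand-in RGB frame, shape (H, W, 3)
segments = slic(frame, n_segments=100, compactness=10, start_label=0)

# Fill each superpixel region with its mean color, as in the right-hand image.
mean_image = np.zeros_like(frame)
for label in np.unique(segments):
    mask = segments == label
    mean_image[mask] = frame[mask].mean(axis=0)

# Per-superpixel attributes: mean RGB color and normalized centroid (y/H, x/W).
H, W = frame.shape[:2]
ys, xs = np.mgrid[0:H, 0:W]
colors = [frame[segments == l].mean(axis=0) for l in np.unique(segments)]
centroids = [(ys[segments == l].mean() / H, xs[segments == l].mean() / W)
             for l in np.unique(segments)]
\end{verbatim}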
The example depicted in Figure \ref{fig:superpixels_example} provides an intuition for the sufficiency of superpixel representation for scene understanding. It contains the superpixel regions obtained via SLIC \cite{achanta2010slic}, with each region filled with the mean color. One can clearly discern a person playing a guitar in both images. A different way of depicting the relations between superpixels is a graph with nodes representing superpixels \cite{monti2017geometric,dadsetan2021superpixels,avelar2020superpixel}. Such a representation has the advantage of being invariant to rotations and flips, which obviates the need for further augmentations. We here demonstrate how this representation can reduce the computational requirements for processing videos. Recent years have seen a surge in the utilization of Graph Neural Networks (GNNs) \cite{kipf2016semi} in tasks that involve images \cite{monti2017geometric,dadsetan2021superpixels,avelar2020superpixel}, audio \cite{dokania2019graph,zhang2019few} and other data forms \cite{wang2018videos,xie2016representation,abadal2021computing}. In this paper, we propose \methodname, a concise graph representation of videos that enables video processing via GNNs. \methodname\ constructs a graph representation of videos that is subsequently processed via a GCN to predict a target. We intend to exploit the power of graphs for efficient video processing. To the best of our knowledge, we are the first to utilize a graph-based representation of videos for efficiency. \methodname\ dramatically reduces the memory footprint of a model, enabling large batch-sizes that translate to better generalization. Moreover, it utilizes models with an order-of-magnitude fewer parameters than the current state-of-the-art models while preserving the predictive power. \textbf{In summary, our contributions are:} \begin{enumerate} \item We present \methodname\ - a simple and intuitive, yet sufficient representation of video clips. This simplicity is crucial for delivering efficiency. \item We propose a dedicated GNN for processing the proposed representation. The proposed architecture is compared with conventional GNN models in order to demonstrate the importance of each component of \methodname. \item We present 4 types of new augmentations that are directly applied to the video-graph representation. A thorough ablation study of their configurations is preformed in order to demonstrate the contribution of each. \item We perform a thorough experimental study, and show that \methodname\ greatly outperforms previous methods in terms of efficiency - first and foremost, the paper utilizes GNNs for efficient video understanding. We show that it successfully reduces computations while preserving much of the performance of state-of-the-art approaches that utilize computationally demanding models. \end{enumerate} \section{Related Work}\label{section:related_work} \subsection{Deep Learning for Video Understanding} CNNs have found numerous applications in video processing \cite{mittal2021survey,tran2018closer,yue2015beyond}. These include LSTM-based networks that perform per-frame encoding \cite{srivastava2015unsupervised,ullah2017action,yue2015beyond} and the extension of 2D convolutions to the temporal dimension, \eg, 3D CNNs such as C3D \cite{tran2015learning}, R2D \cite{simonyan2014two} and R(2+1)D \cite{tran2018closer}. 
The success of the Transformer model \cite{vaswani2017attention} has led to the development of attention-based models for vision tasks, via self-attention modules that were used to model spatial dependencies in images. NLNet \cite{wang2018non} was the first to employ self-attention in a CNN. This novel attention mechanism makes it possible for NLNet to model long-range dependencies between pixels. The next model to be developed was GCNet \cite{cao2019gcnet}, which simplified the NL-module so that it requires fewer parameters and computations, while preserving its performance. A more prominent transition from CNNs to Transformers began with Vision Transformer (ViT) \cite{dosovitskiy2020image}, which prompted research aimed at improving its effectiveness on small datasets, such as Deit \cite{touvron2021training}. Later, vision-transformers were adapted for video tasks \cite{neimark2021video,arnab2021vivit,bertasius2021space,fan2021multiscale,li2021vidtr,liu2021video}, now crowned as the current state-of-the-art that tops the leader-boards of this field. Graph representations have rarely been used for video understanding; a notable exception is the work of Wang \cite{wang2018videos}. They used pre-trained Resnet variants \cite{he2016deep} for generating object bounding boxes of interest on each frame. These bounding boxes are later used for the construction of a spatio-temporal graph that describes how objects change through time, and classification is performed on top of the spatio-temporal graph with graph convolutional neural networks \cite{kipf2016semi}. However, we note that the usage of a large backbone for generating object bounding boxes is harmful to efficiency. We intend to alleviate this by proposing a lighter graph representation. In combination with a dedicated GNN architecture, our representation greatly outperforms \cite{wang2018videos} in all metrics. \subsection{Superpixel Representation of Visual Data} Superpixels are groups of perceptually similar pixels that can be used to create visually meaningful entities while heavily reducing the number of primitives for subsequent processing steps \cite{stutz2018superpixels}. The efficiency of the obtained representation has led to the development of many superpixel-generation algorithms for images \cite{stutz2018superpixels}. This approach was adapted for volumetric data via the construction of supervoxels \cite{papon2013voxel}, which are the trivial extension to depth. These methods were adjusted for use in videos \cite{6247802} by treating the temporal dimension as depth. However, this results in degraded performance, as inherent assumptions regarding neighboring points in the 3D space do not apply to videos with non-negligible motion. Recent approaches especially designed to deal with videos consider the temporal dimension when generating superpixels that are coherent in time. Xu \emph{et al.}~\cite{10.1007/978-3-642-33783-3_45} proposed a hierarchical graph-based segmentation method. This was followed by the work of Chang \emph{et al.}~\cite{chang2013video}, who suggested that Temporal Superpixels (TSPs) can serve as a representation of videos by modeling the flow between frames with a bilateral Gaussian process. \subsection{Graph Convolutional Neural Networks} Introduced in \cite{kipf2016semi}, Graph Convolutional Networks (GCNs) have been widely adopted for graph-related tasks \cite{zhang2018network,kumar2020link}.
The basic GCN uses aggregators, such as average and summation, to obtain a node representation given its neighbors. This basic form was rapidly extended to more complex architectures with more sophisticated aggregators. For instance, Graph Attention Networks \cite{velivckovic2017graph} use dot-product-based attention to calculate weights for edges. Relational GCNs \cite{schlichtkrull2018modeling} add to this framework by also considering multiple edge types, namely, relations (such as temporal and spatial relations), and the aggregating information from each relation via separate weights in a single layer. Recently, GCNs have been adopted for tasks involving audio \cite{dokania2019graph,zhang2019few} and images \cite{monti2017geometric,dadsetan2021superpixels,avelar2020superpixel}. Following the success of graph models to efficiently perform image-based tasks, we are eager to demonstrate our extension of the image-graph representation to videos. \section{\methodname\ - A Video-Graph Representation}\label{section:methodology} In this section, we introduce the methodology of \methodname. First, we present our method for video-graph representation generation, depicted in Figure \ref{fig:framework} and described in Algorithm \ref{algo:graphvid}. Then, we present our training methodology that utilizes this representation. Finally, we discuss the benefits of \methodname\ and propose several augmentations. \input{figures/framework} \subsection{Overview} In our framework, we deal with video clips that are sequences of $T$ video frames \text{$v\in \Real^{T\times~C\times~H\times~W}$}. The goal is to transform $v$ into a graph that is sufficiently informative for further processing. To achieve this, we use SLIC \cite{achanta2010slic} to generate $S$ segmented regions, called \textit{superpixels}, over each frame. We denote each segmented region as $R_{t,i}$, where \text{$t\in [T]$} represents the temporal frame index, and \text{$i\in [S]$} the superpixel-segmented region index. The following is a description of how we utilize the superpixels to construct our video-graph representation. \paragraph{Graph Elements -} We define the undirected graph $\mathg$ as a 3-tuple \text{$\mathg=(\mathv,\mathe,\mathr)$}, where \text{$\mathv=\{R_{t,i} | t\in [T], i\in [S]\}$} is the set of nodes representing the segmented regions, $\mathe$ is the set of labeled edges (to be defined hereunder) and \text{$\mathr=\{spatial,temporal\}$} is a set of relations as defined in \cite{schlichtkrull2018modeling}. Each node $R_{t,i}$ is associated with an attribute $R_{t,i}.c\in \Real^3$ representing the mean RGB color in that segmented region. Additionally, we refer to $R_{t,i}.y$ and $R_{t,i}.x$ as the coordinates of the superpixel's centroid, which we use to compute the distances between superpixels. These distances, which will later serve as the edge attributes of the graph, are computed by \begin{equation} d^{t_q\to t_p}_{i,j} = \sqrt{\left(\frac{R_{t_q,i}.y - R_{t_p,j}.y}{H}\right)^2 + \left(\frac{R_{t_q,i}.x - R_{t_p,j}.x}{W}\right)^2}. \end{equation} Here, \text{$t_q,t_p\in [T]$} denote frame indices, and \text{$i,j\in [S]$} denote superpixel indices generated for the corresponding frames. The set of edges $\mathe$ is composed of: \textbf{1)} intra-frame edges (denoted $\mathe^{spatial}$) - edges between nodes corresponding to superpixels in the same frame. We refer to these as \textit{spatial edges}. 
\textbf{2)} inter-frame edges (denoted $\mathe^{temporal}$) - edges between nodes corresponding to superpixels in two sequential frames. We refer to these as \textit{temporal edges}. Finally, the full set of edges is \text{$\mathe = \mathe^{spatial} \cup \mathe^{temporal}$}. In the following, we describe how we construct both components.
\paragraph{Spatial Edges -} Similarly to \cite{avelar2020superpixel}, we generate a region-adjacency graph for each frame, with edge attributes describing the distances between superpixel centroids. The notation \text{$\mathe^{spatial}_t$} refers to the set of spatial edges connecting nodes corresponding to superpixels in frame $t$, and \( \mathe^{spatial} = \bigcup_{t=1}^{T}{\mathe^{spatial}_t}. \) Each edge \text{$e_{i,j}^{t}\in \mathe^{spatial}$} is associated with an attribute that describes the Euclidean distance between the two superpixel centroids $i$ and $j$ in frame $t$, that is, $d^{t\to t}_{i,j}$. These distances provide information about the relations between the superpixels. Additionally, the distances are invariant to rotations and image flips, which eliminates the need for those augmentations. Note that normalization of the superpixels' centroid coordinates is required in order to obscure information regarding the resolution of the frames, which is irrelevant for many tasks, such as action classification. In Figure \ref{fig:spatial_edges}, we demonstrate the procedure of spatial edge generation for a cropped image, which results in a partial graph of the whole image. Each superpixel is associated with a node, which is connected via edges to other adjacent nodes (with the distances between the superpixels' centroids serving as edge attributes).
\begin{figure}[!ht]
    \centering
    \includegraphics[width=0.45\linewidth]{figures/spatial_graph.png}
    \caption{Spatial edge generation. First, superpixels are generated. Each superpixel is represented as a node, which is connected via its edges to other such nodes within a frame. Each node is assigned the mean color of the respective segmented region, and each edge is assigned the distance between the superpixel centroids it connects.}
    \label{fig:spatial_edges}
\end{figure}
\paragraph{Temporal Edges -} In modeling the temporal relations, we aim to connect nodes that tend to describe the same objects in subsequent frames. To do so, we rely on the assumption that, in subsequent frames, such superpixels have similar colors and remain in close spatial proximity. To achieve this, for each superpixel $R_{t,i}$, we construct a neighborhood $\neighborhood_{t,i}$ that contains the superpixels from its subsequent frame whose centroids lie within a distance of at most $d_{proximity}\in (0,1]$ with respect to the Euclidean distance. Then, we find the superpixel with the most similar color in this neighborhood. As a result, the $t^{th}$ frame is associated with the set of edges $\mathe^{temporal}_{t\to t+1}$ that model temporal relations with its subsequent frame, formally:
\begin{equation}\label{eq:neighborhood}
    \neighborhood_{t,i} = \{R_{t+1,j} | d^{t\to t+1}_{i,j} < d_{proximity}\},
\end{equation}
\begin{equation}
    neighbor(R_{t,i})=\argmin_{R_{t+1,j}\in \neighborhood_{t,i}}{|R_{t,i}.c - R_{t+1,j}.c|_2},
\end{equation}
\begin{equation}
    \mathe^{temporal}_{t\to t+1} = \{(R_{t,i}, temporal, neighbor(R_{t,i})) | i\in [S]\}.
\end{equation}
Equipped with these definitions, we define the full set of temporal edges as the union of the temporal edge sets generated for all pairs of subsequent frames: \( \mathe^{temporal} = \bigcup_{t=1}^{T-1}{\mathe^{temporal}_{t\to t+1}} \).
\input{algorithms/graph_generation}
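For illustration, the following Python sketch outlines one possible implementation of this construction (cf. Algorithm \ref{algo:graphvid}). It is a minimal sketch under simplifying assumptions rather than the implementation used for our experiments: \texttt{segment\_frame} is a placeholder for a SLIC implementation (e.g., \textit{fast-slic} \cite{fastslic}), and region adjacency is derived directly from the label map.
\begin{verbatim}
import numpy as np

def segment_frame(frame, n_segments):
    """Placeholder for SLIC (e.g. fast-slic); returns an (H, W)
    integer label map with values in [0, n_segments)."""
    raise NotImplementedError

def frame_to_nodes(frame, labels, n_segments):
    """Mean RGB color and normalized centroid (y, x) per superpixel."""
    H, W, _ = frame.shape
    colors = np.zeros((n_segments, 3))
    centroids = np.zeros((n_segments, 2))
    for s in range(n_segments):
        mask = labels == s
        if mask.any():
            colors[s] = frame[mask].mean(axis=0)
            ys, xs = np.nonzero(mask)
            centroids[s] = [ys.mean() / H, xs.mean() / W]
    return colors, centroids

def spatial_edges(labels, centroids):
    """Region-adjacency edges with centroid distances as attributes.
    Both directions of each edge are kept (undirected graph)."""
    pairs = set()
    pairs.update(zip(labels[:, :-1].ravel(), labels[:, 1:].ravel()))
    pairs.update(zip(labels[:-1, :].ravel(), labels[1:, :].ravel()))
    edges, attrs = [], []
    for a, b in pairs:
        if a != b:
            edges.append((a, b))
            attrs.append(np.linalg.norm(centroids[a] - centroids[b]))
    return edges, attrs

def temporal_edges(colors_t, cents_t, colors_t1, cents_t1, d_proximity):
    """Connect each superpixel to the most similarly colored superpixel
    within d_proximity in the next frame."""
    edges, attrs = [], []
    for i in range(len(colors_t)):
        dists = np.linalg.norm(cents_t1 - cents_t[i], axis=1)
        cand = np.nonzero(dists < d_proximity)[0]
        if len(cand) > 0:
            j = cand[np.argmin(
                np.linalg.norm(colors_t1[cand] - colors_t[i], axis=1))]
            edges.append((i, j))
            attrs.append(dists[j])
    return edges, attrs
\end{verbatim}
In a full implementation, the per-frame node indices would additionally be offset by $t\cdot S$ when assembling the clip graph, so that all frames share a single node index space.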
\subsection{Model Architecture}\label{section:model_arch}
In order to model both the spatial and temporal relations between superpixels, our model primarily relies on the Neural Relational Model \cite{schlichtkrull2018modeling}, which is an extension of GCNs \cite{kipf2016semi} to large-scale relational data. In a Neural Relational Model, the propagation model for calculating the forward-pass update of a node, denoted by $v_i$, is defined as
\begin{equation}\small
    h_{i}^{(l+1)}=\sigma \left(\sum_{r\in \mathr}\sum_{j\in \neighborhood_{i}^{r}}{\frac{1}{c_{i,r}} W_{r}^{(l)}h_{j}^{(l)}+W_{0}^{(l)}h_{i}^{(l)}} \right),
\end{equation}
where $\neighborhood^r_i$ denotes the set of neighbor indices of node $i$ under relation \text{$r\in \mathr$} (not to be confused with the notation $\neighborhood_{t,i}$ from Eq. \ref{eq:neighborhood}). $c_{i,r}$ is a problem-specific normalization constant that can either be learned or chosen in advance (such as \text{$c_{i,r}=|\neighborhood^r_i|$}). To incorporate edge features, we adapt the approach proposed in \cite{corso2020principal}, which concatenates node and edge attributes as a layer's input, yielding the following:
\begin{equation}\label{eq:concat_edges}\small
    h_{i}^{(l+1)}=\sigma \left(\sum_{r\in \mathr}\sum_{j\in \neighborhood_{i}^{r}}{\frac{1}{c_{i,r}} W_{r}^{(l)}[h_{j}^{(l)},e_{i,j}]+W_{0}^{(l)}h_{i}^{(l)}} \right),
\end{equation}
where $e_{i,j}$ is the feature of the edge connecting nodes \text{$v_i,v_j$}.
\subsection{Augmentations}\label{section:augmentations}
We introduce a few augmentations that we found useful for training our model, as they improved generalization.
\paragraph{Additive Gaussian Edge Noise (AGEN) -} Edge attributes represent distances between superpixel centroids. The coordinates of those centroids may vary due to different superpixel shapes with different centers of mass. To compensate for this, we add a certain amount of noise to each edge attribute. Given a hyper-parameter $\sigma_{edge}$, for each edge attribute $e_{u,v}$ and for each training iteration, we sample a normally distributed variable $z_{u,v}\sim N(0,\sigma_{edge})$ that is added to the edge attribute.
\paragraph{Additive Gaussian Node Noise (AGNN) -} Node attributes represent the colors of regions in each frame. Similar to the edge attributes, the mean color of each segmented region may vary due to different superpixel shapes. To compensate for this, we add a certain amount of noise to each node attribute. Given a hyper-parameter $\sigma_{node}$, for each node attribute $v.c$ of dimension $d_c$ and for each training iteration, we sample a normally distributed variable $z_{v}\sim N_{d_c}(0,\sigma_{node}\cdot I_{d_c})$ that is added to the node attribute.
\paragraph{Random Removal of Spatial Edges (RRSE) -} This augmentation mimics the regularization effect introduced in DropEdge \cite{rong2019dropedge}. Moreover, since the removal of edges leads to fewer message-passing operations in a GCN, it also accelerates training and inference. To perform this, we choose a probability \text{$p_{edge}\in[0,1]$}. Then, each edge $e$ is preserved with a probability of $p_{edge}$.
\paragraph{Random Removal of Superpixels (RRS) -} SLIC \cite{achanta2010slic} is sensitive to its initialization. Consequently, each video clip may have several graph representations during different training iterations and inference. This can be mitigated by removing a certain number of superpixels. The outcome is fewer nodes in the corresponding representative graph, as well as fewer edges. Similar to RRSE, we choose a probability \text{$p_{node}\in[0,1]$} so that each superpixel is preserved with a probability of $p_{node}$.
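A minimal sketch of how these four augmentations could be applied to a single video-graph sample is given below. It assumes the graph is stored as a PyTorch Geometric \texttt{Data} object with node colors in \texttt{x}, centroid distances in \texttt{edge\_attr}, and a relation indicator in \texttt{edge\_type} (with $0$ denoting spatial edges, by the convention of this sketch); the default hyper-parameter values are placeholders rather than the tuned values reported in Section \ref{section:ablation}.
\begin{verbatim}
import torch
from torch_geometric.data import Data

SPATIAL = 0  # relation index assumed for spatial edges in this sketch

def augment(data: Data, sigma_node=0.01, sigma_edge=0.01,
            p_edge=0.9, p_node=0.8) -> Data:
    """AGNN, AGEN, RRSE and RRS applied to one video-graph sample."""
    x = data.x + sigma_node * torch.randn_like(data.x)                        # AGNN
    edge_attr = data.edge_attr + sigma_edge * torch.randn_like(data.edge_attr)  # AGEN

    # RRSE: preserve each spatial edge with probability p_edge.
    keep_e = (data.edge_type != SPATIAL) | (torch.rand(data.edge_index.size(1)) < p_edge)
    edge_index = data.edge_index[:, keep_e]
    edge_attr, edge_type = edge_attr[keep_e], data.edge_type[keep_e]

    # RRS: preserve each superpixel (node) with probability p_node,
    # dropping its incident edges and re-indexing the remaining nodes.
    keep_n = torch.rand(x.size(0)) < p_node
    new_id = torch.full((x.size(0),), -1, dtype=torch.long)
    new_id[keep_n] = torch.arange(int(keep_n.sum()))
    keep_inc = keep_n[edge_index[0]] & keep_n[edge_index[1]]
    return Data(x=x[keep_n],
                edge_index=new_id[edge_index[:, keep_inc]],
                edge_attr=edge_attr[keep_inc],
                edge_type=edge_type[keep_inc])
\end{verbatim}
Wrapping such a transform around the data loader applies a fresh augmentation at every training iteration.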
\subsection{Benefits of \textbf{\methodname}}
\paragraph{Invariance -} The absence of coordinates leads to invariance in the spatial dimension. Such a representation is invariant to rotations and flips, since the relations between different parts of the image are characterized solely by distances. This, in turn, obviates the need to perform such augmentations during training.
\paragraph{Efficiency -} We argue that our graph-based representation is more efficient than raw frames. To illustrate this, let $T, C, H$ and $W$ be the dimensions of a clip; that is, the number of frames, number of channels, height and width of a frame, respectively. Correspondingly, the raw representation requires \text{$T\cdot C\cdot H\cdot W$} values. To calculate the size of the graph-video, let $S$ be the number of superpixels in a frame. By construction, there are at most \text{$4\cdot S$} edges in each frame, because SLIC constrains each superpixel to have 4 neighbors. Each edge contains $3$ values, corresponding to the distance on the grid and the source and target nodes. Additionally, there are, at most, $S$ edges between every temporal step. This results in \text{$3\cdot (4\cdot S + (T - 1) \cdot S) + C\cdot T\cdot S$} values in total. Typically, the latter requires far fewer values, because we choose $S$ such that \text{$S \ll H\cdot W$}. For instance, for a 20-frame RGB clip with frames of size \text{$224\times 224$} and \text{$S=800$} superpixels, the graph representation amounts to roughly $10^5$ values, compared to about $3\cdot 10^6$ values for the raw clip.
\paragraph{Prior Knowledge Incorporation -} Optical-flow and over-segmentation are encoded within the graph-video representation using the inter-frame and intra-frame edges. This incorporates strong prior knowledge within the resultant representation. For example, optical-flow dramatically improved the accuracy in the two-stream methodology that was proposed in \cite{simonyan2014two}. Additionally, over-segmentation using superpixels has been found useful as input features for machine learning models due to the limited loss of important details, accompanied by a dramatic reduction in processing time by means of reducing the number of elements in the input \cite{proceedings401,dadsetan2021superpixels,avelar2020superpixel}.
\section{Experiments}\label{section:experiments}
We validated \methodname\ on two human-action-classification benchmarks. The goal of human action classification is to determine the human-involved action that occurs within a video. The objectives of this empirical study were twofold:
\begin{itemize}
    \item Analyze the impact of the various parameters on the accuracy of the model.
    \item As we first and foremost target efficiency, we sought to examine the resource consumption of \methodname\ in terms of Floating Point Operations (FLOPs). We followed the conventional protocol \cite{feichtenhofer2020x3d}, which uses single-clip FLOPs as a basic unit of computational cost. We show that we are able to achieve a significant improvement in efficiency over previous methods while preserving state-of-the-art performance.
\end{itemize}
\subsection{Setup}
\paragraph{Datasets -} We use two common datasets for action classification: \textit{Kinetics-400 (K400)} \cite{kay2017kinetics} and \textit{Charades} \cite{sigurdsson2016hollywood}. Kinetics-400 \cite{kay2017kinetics} is a large-scale video dataset released in 2017 that contains 400 classes, with each category consisting of more than 400 videos. It originally contained, in total, around 240K, 19K, and 38K videos for the training, validation and testing subsets, respectively. Kinetics is gradually shrinking over time due to videos being taken offline, which makes it difficult to compare against less recent works. We used a dataset containing 208K, 17K and 33K videos for training, validation and testing, respectively. We report on the most recently available videos. Each video lasts approximately 10 seconds. The Charades dataset \cite{sigurdsson2016hollywood} is composed of 9,848 videos of daily indoor activities, each of an average length of 30 seconds. In total, the dataset contains 66,500 temporal annotations for 157 action classes. In the standard split, there are 7,986 training videos and 1,863 validation videos, sampled at 12 frames per second. We follow prior art by reporting the Top-1 and Top-5 recognition accuracy for Kinetics-400 and the mean average precision (mAP) for Charades.
\begin{figure}[t]
    \centering
    \includegraphics[width=0.65\linewidth]{figures/general_arch.png}
    \caption{The general graph neural network architecture we use in our experiments.}
    \label{fig:general_arch}
\end{figure}
\paragraph{Network Architecture and Training -} We use GNN variants and feed each of them with our video-graphs. Specifically, we consider Graph Convolutional Networks \cite{kipf2016semi} (GCNs), Graph Attention Networks \cite{velivckovic2017graph} (GATs) and Relational Graph Convolutional Networks \cite{schlichtkrull2018modeling} (RGCNs). The general architecture of our backbones is depicted in Fig. \ref{fig:general_arch}. It consists of $2$ fully-connected (FC) layers with exponential linear unit (ELU) activations that project the node features into a $256D$ feature space. Then come $4$ GNN layers of the corresponding type (GCN, GAT or RGCN, combined with the edge-feature concatenation from Eq. \ref{eq:concat_edges}), each with a hidden size of 512 and ELU activations, followed by global mean pooling, dropout with a probability of $0.2$ and a linear layer whose output is the predicted logits. For the GAT layers, we use 4 attention heads in each layer, and average the attention heads' results to obtain the desired hidden layer size. For the RGCN layers, we specify 2 relations, which correspond to the spatial and temporal relations, as described in Section \ref{section:methodology}. We use the Adam optimizer \cite{kingma2014adam} with a constant learning rate of \text{$10^{-3}$}. When choosing this architecture, the core idea was to keep it simple and shallow, while changing the interaction module to better model the relations between parts of the clip. We divide the videos into clips using a sliding window of 20 frames, with a stride of 2 between consecutive frames and a stride of 10 between clips. In all the experiments, we used a fixed batch size of 200.
\paragraph{Inference -} At test time, we use the same sliding-window methodology as in training. We follow the common practice of processing multiple views of a long video and averaging the per-view logits to obtain the final result. The views are drawn uniformly across the temporal dimension of the video, without spatial cropping. The number of views is determined on the validation set.
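As an illustration of how this architecture could be realized with PyTorch-Geometric-style message passing, we provide the following sketch. The layer implements the relational propagation rule with edge-feature concatenation of Eq. \ref{eq:concat_edges} (mean aggregation corresponds to \text{$c_{i,r}=|\neighborhood^r_i|$}); the class names \texttt{RelationalEdgeConv} and \texttt{GraphVidNet} are illustrative, and the sketch omits the engineering details of our actual implementation.
\begin{verbatim}
import torch
import torch.nn.functional as F
from torch import nn
from torch_geometric.nn import MessagePassing, global_mean_pool

class RelationalEdgeConv(MessagePassing):
    """Per-relation mean aggregation over [h_j, e_ij] plus a self-loop term."""
    def __init__(self, in_dim, edge_dim, out_dim, num_relations=2):
        super().__init__(aggr='mean')
        self.rel_lins = nn.ModuleList(
            [nn.Linear(in_dim + edge_dim, out_dim) for _ in range(num_relations)])
        self.self_lin = nn.Linear(in_dim, out_dim)

    def forward(self, x, edge_index, edge_attr, edge_type):
        out = self.self_lin(x)
        for r in range(len(self.rel_lins)):
            mask = edge_type == r
            if mask.any():
                self._r = r  # relation handled inside message()
                out = out + self.propagate(edge_index[:, mask], x=x,
                                           edge_attr=edge_attr[mask])
        return F.elu(out)

    def message(self, x_j, edge_attr):
        return self.rel_lins[self._r](torch.cat([x_j, edge_attr], dim=-1))

class GraphVidNet(nn.Module):
    """2 FC layers -> 4 relational conv layers -> mean pooling -> classifier."""
    def __init__(self, in_dim=3, edge_dim=1, hidden=512, num_classes=400):
        super().__init__()
        self.encode = nn.Sequential(nn.Linear(in_dim, 256), nn.ELU(),
                                    nn.Linear(256, 256), nn.ELU())
        dims = [256, hidden, hidden, hidden, hidden]
        self.convs = nn.ModuleList(
            [RelationalEdgeConv(dims[i], edge_dim, dims[i + 1]) for i in range(4)])
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, x, edge_index, edge_attr, edge_type, batch):
        if edge_attr.dim() == 1:        # distances stored as a flat vector
            edge_attr = edge_attr.unsqueeze(-1)
        h = self.encode(x)
        for conv in self.convs:
            h = conv(h, edge_index, edge_attr, edge_type)
        h = global_mean_pool(h, batch)  # one vector per clip in the batch
        return self.head(F.dropout(h, p=0.2, training=self.training))
\end{verbatim}
A batch of such video-graphs can then be assembled with PyTorch Geometric's standard batching and optimized with Adam at a learning rate of $10^{-3}$, as described above.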
\paragraph{Implementation Details -} All experiments were run on an Ubuntu 18.04 machine with an Intel i9-10920X CPU, 93GB of RAM and 2 GeForce RTX 3090 GPUs. Our implementation of \methodname\ is in Python 3. To generate superpixels, we use \textit{fast-slic} \cite{fastslic} with the AVX2 instruction set. To train the graph neural models, we use PyTorch Geometric \cite{fey2019fast}. We use a fixed seed for SLIC and cache the generated graphs during the first training epochs in order to further reduce the computations. We also store the edge indexes as int16 instead of int64 in order to reduce the memory footprint. The memory footprints of the cached datasets are comparable to those of the original ones.
\subsection{Ablation Study}\label{section:ablation}
We conduct an in-depth study on Kinetics-400 to analyze the performance gain contributed by incorporating the different components of \methodname.
\paragraph{Graph Neural Network Variants and Number of Superpixels per Frame -} We assess the performance of different GNN variants: GCN \cite{kipf2016semi} is trained without edge relations (\ie\, temporal and spatial edges are treated via the same weights). GAT \cite{velivckovic2017graph} is trained by employing the attention mechanism for neighborhood aggregation, without edge relations. RGCN \cite{schlichtkrull2018modeling} is trained with edge relations, as described in Section \ref{section:model_arch}. The results of the action classification on K-400 are shown in Figure \ref{fig:n_sp_and_model_variants_ablation}. In this series, the number of views is fixed at $8$, which was found to be the most effective number of views on the validation set. For all variants, increasing the number of superpixels per frame ($S$) improves the accuracy. We notice a significant improvement in accuracy in the lower range of the number of superpixels, while the accuracy begins to saturate for \text{$S\geq 650$}. Further increasing the number of superpixels leads to larger inputs, which require more computation. As our goal is to maximize efficiency, we do not experiment with larger inputs in this section.
\input{graphs/ablation/model_sp_grid}
We further present in Table \ref{table:models_ablation} the models' specifications for $800$ superpixels, which is the best-performing configuration in this series of experiments. Unsurprisingly, the GCN variant requires the least amount of computation. Meanwhile, the RGCN variant requires fewer computations than GAT and achieves a higher level of accuracy. We conclude that it is beneficial to incorporate edge relations when wishing to encode temporal and spatial relations in videos, and that those features are not easily learned by computationally heavy models such as GAT.
\input{tables/models}
\paragraph{Augmentations -}
\input{graphs/ablation/augmentations_grid}
We assessed the impact of the augmentations on performance and their ability to alleviate over-fitting. For this purpose, we chose the best configuration obtained from the previous experiments, that is, RGCN with 800 superpixels per frame, and trained it while adding one augmentation at a time. The results of this series are depicted in Figure \ref{fig:augmentations_grid}. Each graph shows the accuracy reached when training the model while varying one of the parameters that control the augmentations.
We begin with the analysis of AGEN and AGNN, both of which add Gaussian noise to the graph components, with the corresponding parameters controlling the standard deviations. Their impact is negligible as these parameters approach $0$, since lower values reflect the scenarios in which little or no augmentation is applied. Slightly increasing the parameter brings about a gradual improvement in the accuracy, until a turning point is reached, after which the accuracy declines until it reaches \text{$\sim \frac{1}{400}$}, which corresponds to a random classifier. The decrease in accuracy stems from the noise obscuring the original signal, effectively forcing the classifier to fit ungeneralizable noise. For RRSE and RRS, the random removal of spatial edges harms the accuracy of the model. This finding leads us to conclude that spatial edges encode meaningful information about the relations between entities. Moreover, removing a small fraction of the nodes improves the accuracy, reaching a peak at \text{$p_{node}\approx 0.8$}. To conclude, we present the values that lead to the best Top-1 accuracy score in Table \ref{table:augmentations_params}.
\input{tables/aug_params}
\subsection{Comparison to the State-of-the-Art}
\input{graphs/bubbles_grid}
\paragraph{Kinetics-400 -} We present the K-400 results for our RGCN variant in Table \ref{table:k400_sota} and Figure \ref{fig:k400_relative_sota}, along with comparisons to prior art, including convolutional-based and transformer-based methods. Our results are denoted RGCN-$d$, where $d$ represents the number of superpixels. Additionally, we use the set of augmentations with the parameters from Table \ref{table:augmentations_params}. First, when the RGCN-800 model is trained with the full set of augmentations (denoted Full-Aug), it achieves a significantly higher Top-1 accuracy than when it is trained without any augmentation (denoted No-Aug) or when each augmentation is applied individually. These results demonstrate the effectiveness of our model and show that our augmentations can alleviate overfitting and improve generalization on the test set. Second, all our RGCNs require orders-of-magnitude fewer computations than prior art, as well as more than \text{$10\times$} fewer parameters.
\input{tables/sota_comparison/k400}
\paragraph{Charades -} We train RGCN variants with $800$ and $2000$ superpixels with the set of augmentations found in Table \ref{table:augmentations_params}. We also follow prior art \cite{feichtenhofer2019slowfast,fan2021multiscale} by pre-training on K-400, then replacing the last FC layer and fine-tuning on Charades. Table \ref{table:charades_sota} and Figure \ref{fig:charades_relative_sota} show that when our RGCN model is trained with 2000 superpixels, its mAP score is comparable to the current state-of-the-art, but this score is reached with orders-of-magnitude fewer computations and considerably fewer parameters.
\input{tables/sota_comparison/charades}
\subsection{Video-Graph Generation Run-Time}
\begin{wrapfigure}[15]{r}{0.5\linewidth}
  \begin{center}
    \input{graphs/samples_generation}
  \end{center}
  \caption{Time of generation depending on the number of superpixels.}
  \label{fig:graph_run_time}
\end{wrapfigure}
The transition to a video-graph representation requires considering the time needed to generate it.
In Figure \ref{fig:graph_run_time}, we report the average time needed using our setup, which includes the whole pipeline: \textbf{1.} superpixel calculation, and \textbf{2.} graph-structure generation, that is, creating edges between adjacent superpixels and computing their features as described in Section \ref{section:methodology}. Interestingly, the first step is relatively short compared to the second. The optimized \textit{fast-slic} \cite{fastslic} performs well, while the search for adjacent superpixels is time-consuming. This opens up possibilities for further optimization.
\section{Conclusions and Future Work}\label{section:conclusions}
In this paper, we present \methodname, a graph-based video representation that enables video processing via graph neural networks. Furthermore, we propose a relational graph convolutional model that suits this representation. Our experimental study demonstrates this model's efficiency in performing video-related tasks while achieving performance comparable to the current state-of-the-art. An interesting avenue for future work is to explore new graph representations of videos, including learnable methods. Additionally, we consider the development of new dedicated graph neural models for processing the unique and dynamic structure of the video-graph an interesting research direction. Finally, unified models for image and video understanding that disregard temporal edges could be explored in order to take advantage of the amount of data in both worlds.
\clearpage
\bibliographystyle{splncs04}
\bibliography{egbib}
\end{document}
https://openreview.net/forum?id=8VUywK1AT7d
https://arxiv.org/abs/2207.01375
Please evaluate the paper based on its proposal of GraphVid, a new approach using GCNs to process video efficiently, and its potential to offer state-of-the-art performance, supported by experiments and specific data augmentation methods.
The idea presented in the paper is simple but can effectively speed up action recognition; therefore, the paper should be accepted.
Summary:
- The authors propose an efficient graph video representation, GraphVid, that can be used for action recognition with reduced time and memory requirements. GraphVid results in a large efficiency gain without decreasing the performance.
Positive points:
+ The idea presented in the paper is interesting and can facilitate future work on action recognition.
+ The paper experiments with 4 augmentation strategies to improve the model performance.
+ Thorough experiments and ablation study show GraphVid's effectiveness.
+ Competitive results on two action recognition benchmarks, Kinetics-400 and Charades.
Negative points:
- Relevant related work is missing. GCNs have been used for video modelling before.
1) Yan, Sijie, Yuanjun Xiong, and Dahua Lin. "Spatial temporal graph convolutional networks for skeleton-based action recognition." Thirty-second AAAI conference on artificial intelligence. 2018.
2) Thakkar, Kalpit, and P. J. Narayanan. "Part-based graph convolutional network for action recognition." arXiv preprint arXiv:1809.04983 (2018).
3) Korban, Matthew, and Xin Li. "Ddgcn: A dynamic directed graph convolutional network for action recognition." European Conference on Computer Vision. Springer, Cham, 2020.
4) Papadopoulos, Konstantinos, et al. "Vertex feature encoding and hierarchical temporal modeling in a spatial-temporal graph convolutional network for action recognition." arXiv preprint arXiv:1912.09745 (2019).
...
- Writing is sloppy and overly complex in places. The text can be simplified by removing sentences such as "to be defined hereunder" (line 207), "The following is a description of how we utilize the superpixels to construct our video-graph representation." (line 202-203)...
- Spatial edges and figure 5. It is unclear whether the spatial graph for each frame is complete. I do not see an explanation about edge selection; however, it seems that in figure 3 only "neighbouring super pixels" are connected.
- Missing algorithm time complexity for graph generation (i.e. extraction of super pixels, graph construction).
- Prior knowledge incorporation (line 369): I do not see how optical flow is currently encoded in the graph video representation, especially due to the absence of coordinates. For example, if an object moves fast within consecutive frames, the distance between the respective super pixels over time might be larger than d_proximity. This way, information about the object motion (direction) is lost completely.
Justification: The idea of using super pixels in combination with GCNs is, to my knowledge, novel. The experiments are thorough and show the effectiveness of the method. The paper needs some fixes in the text as indicated above. My rating is weak accept.
Rating: 6: Marginally above acceptance threshold
Confidence: 5: The reviewer is absolutely certain that the evaluation is correct
The success of the Transformer model \cite{vaswani2017attention} has led to the development of attention-based models for vision tasks, via self-attention modules that were used to model spatial dependencies in images. NLNet \cite{wang2018non} was the first to employ self-attention in a CNN. With this novel attention mechanism, NLNet is possible to model long-range dependencies between pixels. The next model to be developed was GCNet \cite{cao2019gcnet}, which simplified the NL-module, thanks to its need for fewer parameters and computations, while preserving its performance. A more prominent transition from CNNs to Transformers began with Vision Transformer (ViT) \cite{dosovitskiy2020image}, which prompted research aimed at improving its effectiveness on small datasets, such as Deit \cite{touvron2021training}. Later, vision-transformers were adapted for video tasks \cite{neimark2021video,arnab2021vivit,bertasius2021space,fan2021multiscale,li2021vidtr,liu2021video}, now crowned as the current state-of-the-art that top the leader-boards of this field. The usage of graph representation in video understanding sparsely took place in the work of Wang \cite{wang2018videos}. They used pre-trained Resnet variants \cite{he2016deep} for generating object bounding boxes of interest on each frame. These bounding boxes are later used for the construction of a spatio-temporal graph that describes how objects change through time, and perform classification on top of the spatio-temporal graph with graph convolutional neural networks \cite{kipf2016semi}. However, we note that the usage of a large backbone for generating object bounding boxes is harmful for performance. We intend to alleviate this by proposing a lighter graph representation. In combination of a dedicated GNN architecture, our representation greatly outperforms \cite{wang2018videos} in all metrics. \subsection{Superpixel Representation of Visual Data} Superpixels are groups of perceptually similar pixels that can be used to create visually meaningful entities while heavily reducing the number of primitives for subsequent processing steps \cite{stutz2018superpixels}. The efficiency of the obtained representation has led to the development of many superpixel-generation algorithms for images \cite{stutz2018superpixels}. This approach was adapted for volumetric data via the construction of supervoxels \cite{papon2013voxel}, which are the trivial extension to depth. These methods were adjusted for use in videos \cite{6247802} by treating the temporal dimension as depth. However, this results in degraded performance, as inherent assumptions regarding neighboring points in the 3D space do not apply to videos with non-negligible motion. Recent approaches especially designed to deal with videos consider the temporal dimensions for generating superpixels that are coherent in time. Xu \emph{et al.}~\cite{10.1007/978-3-642-33783-3_45} proposed a hierarchical graph-based segmentation method. This was followed by the work of Chang \emph{et al.}~\cite{chang2013video}, who suggested that Temporal Superpixels (TSPs) can serve as a representation of videos using temporal superpixels by modeling the flow between frames with a bilateral Gaussian process. \subsection{Graph Convolutional Neural Networks} Introduced in \cite{kipf2016semi}, Graph Convolutional Networks (GCNs) have been widely adopted for graph-related tasks \cite{zhang2018network,kumar2020link}. 
The basic GCN uses aggregators, such as average and summation, to obtain a node representation given its neighbors. This basic form was rapidly extended to more complex architectures with more sophisticated aggregators. For instance, Graph Attention Networks \cite{velivckovic2017graph} use dot-product-based attention to calculate weights for edges. Relational GCNs \cite{schlichtkrull2018modeling} add to this framework by also considering multiple edge types, namely, relations (such as temporal and spatial relations), and the aggregating information from each relation via separate weights in a single layer. Recently, GCNs have been adopted for tasks involving audio \cite{dokania2019graph,zhang2019few} and images \cite{monti2017geometric,dadsetan2021superpixels,avelar2020superpixel}. Following the success of graph models to efficiently perform image-based tasks, we are eager to demonstrate our extension of the image-graph representation to videos. \section{\methodname\ - A Video-Graph Representation}\label{section:methodology} In this section, we introduce the methodology of \methodname. First, we present our method for video-graph representation generation, depicted in Figure \ref{fig:framework} and described in Algorithm \ref{algo:graphvid}. Then, we present our training methodology that utilizes this representation. Finally, we discuss the benefits of \methodname\ and propose several augmentations. \input{figures/framework} \subsection{Overview} In our framework, we deal with video clips that are sequences of $T$ video frames \text{$v\in \Real^{T\times~C\times~H\times~W}$}. The goal is to transform $v$ into a graph that is sufficiently informative for further processing. To achieve this, we use SLIC \cite{achanta2010slic} to generate $S$ segmented regions, called \textit{superpixels}, over each frame. We denote each segmented region as $R_{t,i}$, where \text{$t\in [T]$} represents the temporal frame index, and \text{$i\in [S]$} the superpixel-segmented region index. The following is a description of how we utilize the superpixels to construct our video-graph representation. \paragraph{Graph Elements -} We define the undirected graph $\mathg$ as a 3-tuple \text{$\mathg=(\mathv,\mathe,\mathr)$}, where \text{$\mathv=\{R_{t,i} | t\in [T], i\in [S]\}$} is the set of nodes representing the segmented regions, $\mathe$ is the set of labeled edges (to be defined hereunder) and \text{$\mathr=\{spatial,temporal\}$} is a set of relations as defined in \cite{schlichtkrull2018modeling}. Each node $R_{t,i}$ is associated with an attribute $R_{t,i}.c\in \Real^3$ representing the mean RGB color in that segmented region. Additionally, we refer to $R_{t,i}.y$ and $R_{t,i}.x$ as the coordinates of the superpixel's centroid, which we use to compute the distances between superpixels. These distances, which will later serve as the edge attributes of the graph, are computed by \begin{equation} d^{t_q\to t_p}_{i,j} = \sqrt{\left(\frac{R_{t_q,i}.y - R_{t_p,j}.y}{H}\right)^2 + \left(\frac{R_{t_q,i}.x - R_{t_p,j}.x}{W}\right)^2}. \end{equation} Here, \text{$t_q,t_p\in [T]$} denote frame indices, and \text{$i,j\in [S]$} denote superpixel indices generated for the corresponding frames. The set of edges $\mathe$ is composed of: \textbf{1)} intra-frame edges (denoted $\mathe^{spatial}$) - edges between nodes corresponding to superpixels in the same frame. We refer to these as \textit{spatial edges}. 
\textbf{2)} inter-frame edges (denoted $\mathe^{temporal}$) - edges between nodes corresponding to superpixels in two sequential frames. We refer to edges as \textit{temporal edges}. Finally, the full set of edges is \text{$\mathe = \mathe^{spatial} \cup \mathe^{temporal}$}. Following is a description of how we construct both components. \paragraph{Spatial Edges -} In similar to \cite{avelar2020superpixel}, we generate a region-adjacency graph for each frame, with edge attributes describing the distances between superpixel centroids. The notation \text{$\mathe^{spatial}_t$} refers to the set of the spatial-edges connecting nodes corresponding to superpixels in the frame $t$, and \( \mathe^{spatial} = \bigcup_{t=1}^{T}{\mathe^{spatial}_t}. \) Each edge \text{$e_{i,j}^{t}\in \mathe^{spatial}$} is associated with an attribute that describes the euclidean distance between the two superpixel centroids $i$ and $j$ in frame $t$, that is, $d^{t\to t}_{i,j}$. These distances provide information about the relations between the superpixels. Additionally, the distances are invariant to rotations and image-flips, which eliminates the need for those augmentations. Note that normalization of the superpixels' centroid coordinates is required in order to obscure information regarding the resolution of frames, which is irrelevant for many tasks, such as action classification. In Figure \ref{fig:spatial_edges}, we demonstrate the procedure of spatial edge generation for a cropped image that results in a partial graph of the whole image. Each superpixel is associated with a node, which is connected via edges to other adjacent nodes (with the distances between the superpixels' centroids serving as edge attributes). \begin{figure}[!ht] \centering \includegraphics[width=0.45\linewidth]{figures/spatial_graph.png} \caption{Spatial edge generation. First, superpixels are generated. Each superpixel is represented as a node, which is connected via its edges to other such nodes within a frame. Each node is assigned the mean color of the respective segmented region, and each edge is assigned the distances between the superpixel centroids connected by that edge.} \label{fig:spatial_edges} \end{figure} \paragraph{Temporal Edges -} In modeling the temporal relations, we aim to connect nodes that tend to describe the same objects in subsequent frames. To do so, we rely on the assumption that in subsequent frames, such superpixels are attributed similar colors and the same spatial proximity. To achieve this, for each superpixel $R_{t,i}$, we construct a neighborhood $\neighborhood_{t,i}$ that contains superpixels from its subsequent frame whose centroids have a proximity of at most $d_{proximity}\in (0,1]$ with respect to the euclidean distance. Then, we find the superpixel with the most similar color in this neighborhood. As a result, the $t^{th}$ frame is associated with the set of edges $\mathe^{temporal}_{t\to t+1}$ that model temporal relations with its subsequent frame, formally: \begin{equation}\label{eq:neighborhood} \neighborhood_{t,i} = \{R_{t+1,j} | d^{t\to t+1}_{i,j} < d_{proximity}\}, \end{equation} \begin{equation} neighbor(R_{t,i})=\argmin_{R_{t+1,j}\in \neighborhood_{t,i}}{|R_{t,i}.c - R_{t+1,j}.c|_2}, \end{equation} \begin{equation} \mathe^{temporal}_{t\to t+1} = \{(R_{t,i}, temporal, neighbor(R_{t,i}) | i\in [S]\}. 
\end{equation} Equipped with these definitions, we define the set of temporal edges connecting nodes corresponding to superpixels in frame $t$ to superpixels in frame \text{$t+1$} as the union of the temporal edge sets generated for all the frames: \( \mathe^{temporal} = \bigcup_{t=1}^{T-1}{\mathe^{temporal}_{t\to t+1}} \). \input{algorithms/graph_generation} \subsection{Model Architecture}\label{section:model_arch} In order to model both the spatial and temporal relations between superpixels, our model primarily relies on the Neural Relational Model \cite{schlichtkrull2018modeling}, which is an extension of GCNs \cite{kipf2016semi} to large-scale relational data. In a Neural Relational Model, the propagation model for calculating the forward-pass update of a node, denoted by $v_i$, is defined as \begin{equation}\small h_{i}^{(l+1)}=\sigma \left(\sum_{r\in \mathr}\sum_{j\in \neighborhood_{i}^{r}}{\frac{1}{c_{i,r}} W_{r}^{(l)}h_{j}^{(l)}+W_{0}^{(l)}h_{i}^{(l)}} \right), \end{equation} where $\neighborhood^r_i$ denotes the set of neighbor indices of node $i$ under relation \text{$r\in \mathr$} (not to be confused with the notation $\neighborhood_{t,i}$ from Eq. \ref{eq:neighborhood}). $c_{i,r}$ is a problem-specific normalization constant that can either be learned or chosen in advance (such as \text{$c_{i,r}=|\neighborhood^r_i|)$}. To incorporate edge features, we adapt the approach proposed in \cite{corso2020principal}, that concatenates node and edge attributes as a layer's input, yielding the following: \begin{equation}\label{eq:concat_edges}\small h_{i}^{(l+1)}=\sigma \left(\sum_{r\in \mathr}\sum_{j\in \neighborhood_{i}^{r}}{\frac{1}{c_{i,r}} W_{r}^{(l)}[h_{j}^{(l)},e_{i,j}]+W_{0}^{(l)}h_{i}^{(l)}} \right), \end{equation} where $e_{i,j}$ is the feature of the edge connecting nodes \text{$v_i,v_j$}. \subsection{Augmentations}\label{section:augmentations} We introduce a few possible augmentations that we found useful for training our model as they improved the generalization. \paragraph{Additive Gaussian Edge Noise (AGEN) -} Edge attributes represent distances between superpixel centroids. The coordinates of those centroids may vary due to different superpixel shapes with different centers of mass. To compensate for this, we add a certain amount of noise to each edge attribute. Given a hyper-parameter $\sigma_{edge}$, for each edge attribute $e_{u,v}$ and for each training iteration, we sample a normally distributed variable $z_{u,v}\sim N(0,\sigma_{edge})$ that is added to the edge attribute. \paragraph{Additive Gaussian Node Noise (AGNN) -} Node attributes represent the colors of regions in each frame. Similar to edge attributes, the mean color of each segmented region may vary due to different superpixel shapes. To compensate for this, we add a certain amount of noise to each node attribute. Given a hyper-parameter $\sigma_{node}$, for each node attribute $v.c$ of dimension $d_c$ and for each training iteration, we sample a normally distributed variable $z_{v}\sim N_{d_c}(0,\sigma_{node}\cdot I_{d_c})$ that is added to the node attribute. \paragraph{Random Removal of Spatial Edges (RRSE) -} This augmentation tends to mimic the regularization effect introduced in DropEdge \cite{rong2019dropedge}. Moreover, since the removal of edges leads to fewer message-passings in a GCN, this also accelerates the training and inference. To perform this, we choose a probability \text{$p_{edge}\in[0,1]$}. Then, each edge $e$ is preserved with a probability of $p_{edge}$. 
\paragraph{Random Removal of Superpixels (RRS) -} SLIC \cite{achanta2010slic} is sensitive to its initialization. Consequently, each video clip may have several graph representations during different training iterations and inference. This can be mitigated by removing a certain amount of superpixels. The outcome is fewer nodes in the corresponding representative graph, as well as fewer edges. Similar to RRSE, we choose a probability \text{$p_{node}\in[0,1]$} so that each superpixel is preserved with a probability of $p_{node}$. \subsection{Benefits of \textbf{\methodname}} \paragraph{Invariance -}The absence of coordinates leads to invariance in the spatial dimension. It is evident that such a representation is invariant to rotations and flips since the relations between different parts of the image are solely characterized by distances. This, in turn, obviates the need to perform such augmentations during training. \paragraph{Efficiency -}We argue that our graph-based representation is more efficient than raw frames. To illustrate this, let $T, C, H$ and $W$ be the dimensions of a clip; that is, the number of frames, number of channels, height and width of a frame, respectively. Correspondingly, the raw representation requires \text{$T\cdot C\cdot H\cdot W$}. To calculate the size of the graph-video, let $S$ be the number of superpixels in a frame. By construction, there are at most \text{$4\cdot S$} edges in each frame because SLIC constraints each to have 4 neighbors. Each edge contains $3$ values, corresponding to the distance on the grid, source and target nodes. Additionally, there are, at most, $S$ edges between every temporal step. This results in \text{$3\cdot (4\cdot S + (T - 1) \cdot S) + C\cdot T\cdot S$} parameters in total. Typically, the second requires much fewer parameters because we choose $S$ so that \text{$S \ll H\cdot W$}. \paragraph{Prior Knowledge Incorporation -} Optical-flow and over-segmentation are encoded within the graph-video representation using the inter-frame and intra-frame edges. This incorporates strong prior knowledge within the resultant representation. For example, optical-flow dramatically improved the accuracy in the two-stream methodology that was proposed in \cite{simonyan2014two}. Additionally, over-segmentation using superpixels has been found useful as input features for machine learning models due to the limited loss of important details, accompanied by a dramatic reduction in the expended time by means of reducing the number of elements of the input \cite{proceedings401,dadsetan2021superpixels,avelar2020superpixel}. \section{Experiments}\label{section:experiments} We validated \methodname\ on 2 human-action-classification benchmarks. The goal of human action classification is to determine the human-involved action that occurs within a video. The objectives of this empirical study were twofold: \begin{itemize} \item Analyze the impact of the various parameters on the accuracy of the model. \item As we first and foremost target efficiency, we sought to examine the resources' consumption of \methodname\ in terms of Floating Point Operations (FLOPs). We followed the conventional protocol \cite{feichtenhofer2020x3d}, which uses single-clip FLOPs as a basic unit of computational cost. We show that we are able to achieve a significant improvement in efficiency over previous methods while preserving state-of-the-art performance. 
\end{itemize} \subsection{Setup} \paragraph{Datasets -} We use two common datasets for action classification: \textit{Kinetics-400 (K400)} \cite{kay2017kinetics} and \textit{Charades} \cite{sigurdsson2016hollywood}. Kinetics-400 \cite{kay2017kinetics} is a large-scale video dataset released in 2017 that contains 400 classes, with each category consisting of more than 400 videos. It originally had, in total, around 240K, 19K, and 38K videos for training, validation and testing subsets, respectively. Kinetics is gradually shrinking over time due to videos being taken offline, making it difficult to compare against less recent works. We used a dataset containing 208K, 17K and 33K videos for training, validation and test respectively. We report on the most recently available videos. Each video lasts approximately 10 seconds. The Charades dataset \cite{sigurdsson2016hollywood} is composed of 9,848 videos of daily indoor activities, each of an average length of 30 seconds. In total, the dataset contains 66,500 temporal annotations for 157 action classes. In the standard split, there are 7,986 training videos and 1,863 validation videos, sampled at 12 frames per second. We follow prior arts by reporting the Top-1 and Top-5 recognition accuracy for Kinetics-400 and mean average precision (mAP) for Charades. \begin{figure}[t] \centering \includegraphics[width=0.65\linewidth]{figures/general_arch.png} \caption{The general graph neural network architecture we use in our experiments.} \label{fig:general_arch} \end{figure} \paragraph{Network Architecture and Training -} We use GNN variants and feed each of them with our video-graphs. Specifically, we consider Graph Convolutional Networks \cite{kipf2016semi} (GCNs), Graph Attention Networks \cite{velivckovic2017graph} (GATs) and Relational Graph Convolutional Networks \cite{schlichtkrull2018modeling} (RGCNs). The general architecture of our backbones is depicted in Fig. \ref{fig:general_arch}. It consists of $2$ fully-connected (FC) layers with exponential linear unit (ELU) activations that project the node features into a $256D$ feature space. Then come $4$ layers of the corresponding GNN layer (either GCN, GAT or RGCN along with an edge feature concatenation from Eq. \ref{eq:concat_edges}) with a hidden size of 512 with ELU activations, followed by global mean pooling, dropout with a probability of $0.2$ and a linear layer whose output is the predicted logits. For the GAT layers, we use 4 attention heads in each layer, and average the attention heads' results to obtain the desired hidden layer size. For the RGCN layers, we specify 2 relations, which correspond to the spatial and temporal relations, as described in Section \ref{section:methodology}. We use the Adam \cite{kingma2014adam} with a constant learning rate of \text{$1e-3$} for optimization. While choosing this architecture, the core idea was to keep the architecture simple and shallow, while changing the interaction module to better model the relations between parts of the clip. We divide the videos into clips using a sliding window of 20 frames, using a stride of 2 between consecutive frames and a stride of 10 between clips. In all the experiments, we used a fixed batch size of 200. \paragraph{Inference -} At the test phase, we use the same sliding window methodology as in the training. We follow the common practice of processing multiple views of a long video and average per-view logits to obtain the final results. 
The views are drawn uniformly across the temporal dimension of the video, without spatial cropping. The number of views is determined by the validation dataset. \paragraph{Implementation Details -} All experiments were run on a Ubuntu 18.04 machine with Intel i9-10920X, 93GB RAM and 2 GeForce RTX 3090 GPUs. Our implementation of \methodname\ is in Python3. To generate superpixels, we use \textit{fast-slic} \cite{fastslic} with the AVX2 instruction set. To train the graph neural models, we use Pytorch-Geometric \cite{fey2019fast}. We use a fixed seed for SLIC and cache the generated graphs during the first training epochs in order to further reduce the computations. We also store the edge indexes as int16 instead of int64 in order to reduce the memory footprint. Eventually, the memory footprints of the cached datasets is comparable to those of the original ones. \subsection{Ablation Study}\label{section:ablation} We conduct an in-depth study on Kinetics-400 to analyze the performance gain contributed by incorporating the different components of \methodname. \paragraph{Graph Neural Network Variants and Number of Superpixels per Frame -} We assess the performance of different GNN variants: GCN \cite{kipf2016semi} is trained without edge relations (\ie\, temporal and spatial edges are treated via the same weights). GAT \cite{velivckovic2017graph} is trained by employing the attention mechanism for neighborhood aggregation without edge relations. RGCN \cite{schlichtkrull2018modeling} is trained with edge relations, as described in Section \ref{section:model_arch}. The results of the action classification on K-400 are shown in Figure \ref{fig:n_sp_and_model_variants_ablation}. In this series, the number of views is fixed at $8$, which is the number of views that was found to be most effective for the validation set. For all variants, increasing the number of superpixels per frame ($S$) contributes to the accuracy. We notice a significant improvement in accuracy for the lower range of the number of superpixels, while the accuracy begins to saturate for \text{$S\geq 650$}. Increasing further the number of superpixels leads to bigger inputs, which require more computations. As our goal is to maximize the efficiency, we do not experiment with larger inputs in this section. \input{graphs/ablation/model_sp_grid} We further present in Table \ref{table:models_ablation} the models' specifications for $800$ superpixels, which is the best-performing configuration in this series of experiments. Unsurprisingly, the GCN variant requires the least amount of computations. Meanwhile, the RGCN variant requires fewer computations than GAT and achieves a higher level of accuracy. We conclude that it is beneficial to incorporate edge relations when wishing to encode temporal and spatial relations in videos, and that those features are not easily learned by heavy computational models, such as GAT. \input{tables/models} \paragraph{Augmentations -} \input{graphs/ablation/augmentations_grid} We assessed the impact of augmentations on the performance and their ability to alleviate over-fitting. For this purpose, we chose the best configuration obtained from the previous experiments, that is, RGCN with 800 superpixels per frame, and trained it while adding one augmentation at a time. The results of this series are depicted in Figure \ref{fig:augmentations_grid}. Each graph shows the level of accuracy reached by training the model with one of the parameters that control the augmentation. 
We begin with the analysis of AGEN and AGNN, both of which add Gaussian noise to the graph components, with the corresponding parameters controlling the standard deviations. Their impact is negligible as these parameters approach $0$, since lower values correspond to scenarios in which little or no augmentation is applied. Slightly increasing the parameter brings a gradual improvement in accuracy until a turning point is reached, after which accuracy declines until it approaches \text{$\sim \frac{1}{400}$}, i.e., that of a random classifier. The decrease in accuracy stems from the noise obscuring the original signal, effectively forcing the classifier to fit ungeneralizable noise. Turning to RRSE and RRS, the random removal of spatial edges harms the accuracy of the model. This finding leads us to conclude that spatial edges encode meaningful information about the relations between entities. Moreover, removing a small fraction of the nodes improves accuracy, reaching a peak at \text{$p_{node}\approx 0.8$}. To conclude, we present the values that lead to the best Top-1 accuracy in Table \ref{table:augmentations_params}.
\input{tables/aug_params}
\subsection{Comparison to the State-of-the-Art}
\input{graphs/bubbles_grid}
\paragraph{Kinetics-400 -} We present the K-400 results for our RGCN variant in Table \ref{table:k400_sota} and Figure \ref{fig:k400_relative_sota}, along with comparisons to previous methods, including convolutional and transformer-based ones. Our results are denoted RGCN-$d$, where $d$ is the number of superpixels. Additionally, we use the set of augmentations with the parameters from Table \ref{table:augmentations_params}. First, when the RGCN-800 model is trained with the full set of augmentations (denoted Full-Aug), it achieves a significantly higher Top-1 accuracy than when it is trained without any augmentation (denoted No-Aug) or with each augmentation applied individually. These results demonstrate the effectiveness of our model and show that our augmentations can alleviate overfitting and improve generalization to the test set. Second, all our RGCNs require orders of magnitude fewer computations than previous methods, as well as more than $10\times$ fewer parameters.
\input{tables/sota_comparison/k400}
\paragraph{Charades -} We train RGCN variants with $800$ and $2000$ superpixels using the set of augmentations from Table \ref{table:augmentations_params}. We also follow prior work \cite{feichtenhofer2019slowfast,fan2021multiscale} by pre-training on K-400, replacing the last FC layer, and fine-tuning on Charades. Table \ref{table:charades_sota} and Figure \ref{fig:charades_relative_sota} show that when our RGCN model is trained with 2000 superpixels, its mAP score is comparable to the current state-of-the-art, while requiring orders of magnitude fewer computations and considerably fewer parameters.
\input{tables/sota_comparison/charades}
\subsection{Video-Graph Generation Run-Time}
\begin{wrapfigure}[15]{r}{0.5\linewidth}
\begin{center}
\input{graphs/samples_generation}
\end{center}
\caption{Time of generation depending on the number of superpixels.}
\label{fig:graph_run_time}
\end{wrapfigure}
The transition to a video-graph representation requires considering the time needed to generate it.
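Such a measurement can be instrumented as in the sketch below, which times the two stages for a single frame; the adjacency search shown here is a simplified stand-in for the procedure of Section \ref{section:methodology}, and temporal edges as well as node and edge feature computation are omitted.
\begin{verbatim}
# Timing sketch for per-frame video-graph generation (illustration only).
import time
import numpy as np
from fast_slic import Slic   # the AVX2 build is used in practice

def frame_graph(frame, n_segments=800, compactness=10):
    # Stage 1: superpixel assignment (frame: HxWx3 uint8 array).
    labels = Slic(num_components=n_segments,
                  compactness=compactness).iterate(frame)
    # Stage 2: spatial edges between superpixels that touch
    # horizontally or vertically in the label map.
    right = np.stack([labels[:, :-1].ravel(), labels[:, 1:].ravel()])
    down  = np.stack([labels[:-1, :].ravel(), labels[1:, :].ravel()])
    pairs = np.concatenate([right, down], axis=1)
    pairs = pairs[:, pairs[0] != pairs[1]]             # boundary pixels only
    edges = np.unique(np.sort(pairs, axis=0), axis=1)  # undirected, deduplicated
    return labels, edges.astype(np.int16)              # int16 edge indices

frame = np.zeros((240, 320, 3), dtype=np.uint8)
start = time.perf_counter()
labels, edges = frame_graph(frame)
print(f"graph generation: {time.perf_counter() - start:.4f}s")
\end{verbatim}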
In Figure \ref{fig:graph_run_time}, we measure the average generation time with our setup, covering the whole pipeline: \textbf{1.} superpixel calculation, and \textbf{2.} graph structure generation, that is, creating edges between adjacent superpixels and computing their features as described in Section \ref{section:methodology}. Interestingly, the first step is relatively short compared to the second: the optimized \textit{fast-slic} \cite{fastslic} performs well, while the search for adjacent superpixels is time consuming. This leaves room for further optimization.
\section{Conclusions and Future Work}\label{section:conclusions}
In this paper, we present \methodname, a graph-based video representation that enables video processing via graph neural networks. Furthermore, we propose a relational graph convolutional model suited to this representation. Our experimental study demonstrates this model's efficiency on video-related tasks while achieving performance comparable to the current state-of-the-art. An interesting avenue for future work is to explore new graph representations of videos, including learnable ones. Additionally, we consider the development of dedicated graph neural models for processing the unique and dynamic structure of the video-graph to be an interesting research direction. Finally, unified models for image and video understanding that disregard temporal edges could be explored in order to take advantage of the amount of data available in both domains.
\clearpage
\bibliographystyle{splncs04}
\bibliography{egbib}
\end{document}
https://openreview.net/forum?id=8VUywK1AT7d
https://arxiv.org/abs/2207.01375
Please evaluate the paper based on its proposed method of using super pixels and graph convolutional networks for action recognition, considering its potential impact and the thoroughness of the experiments.
Dynamic Image for 3D MRI image Alzheimer's Disease classification Summary This paper addresses the problem of 3D MRI volume segmentation of images collected from patients with Alzheimer's disease. The motivation for the work comes from automating the classification task for assigning labels such as cognitive unimpaired (CU) and Alzheimer’s disease (AD) to each patient based on the 3D MRI volume. The authors approach the problem by leveraging a 3D to 2D conversion using dynamic images according to https://openaccess.thecvf.com/content_cvpr_2015/papers/Fernando_Modeling_Video_Evolution_2015_CVPR_paper.pdf and https://www.egavves.com/data/cvpr2016bilen.pdf and introducing an attention module in a transfer model with pre-training on ImageNet dataset. The evaluations include (a) matching feature dimensionality of the features extracted from four well-established architectures and the features coming from the dynamic image based conversion, (b) optimizing the steps in feature extraction and AI-based classification, (c) analyzing inclusion/exclusion of skull, and (d) comparing execution times when 3D vs 2D raw data are used as inputs into classifiers. Strengths: The classification framework is very interesting. The introduction of dynamic image and attention module is novel Weaknesses: The experimental dataset is very limited. The theoretical description is not very clear. Comments: Is there any reason (e.g., based on visual inspection) to believe that the features of 2D dynamic images are of the same nature as the features extracted from ImageNet? Line 129: what do you refer to when mentioning the ImageNet resolution? Line 171: why did you choose three activation functions and 1x1 convolutional kernels? The section 3.3 also did not explain the details in Fig 3. For example, why do you have in Fig 3 the same blocks but the tensor sizes are HxWx512 -> HxWx256 -> HxWx64? Shouldn’t the last tensor size be HxWx128? What implementation did you use for the CAM attention module? What is the method in dynamic image based conversion that you are using to create 110 x 110 x 3 (i.e., 3 features)? The original paper in https://openaccess.thecvf.com/content_cvpr_2015/papers/Fernando_Modeling_Video_Evolution_2015_CVPR_paper.pdf refers to learning rank machines but your paper does not mention this important detail. Minor comments: Fig 1 caption: you have one row of pictures but the caption refers to two rows. Why applying dynamic image-based 3D to 2D conversion is preferred over z-axis? Is there any motivation to prefer one of the three possible planes, i.e., sagittal vs transversal vs coronal plane? Rating: 8: Top 50% of accepted papers, clear accept Confidence: 4: The reviewer is confident but not absolutely certain that the evaluation is correct
\documentclass[runningheads]{llncs} \usepackage{graphicx} \usepackage{tikz} \usepackage{comment} \usepackage{amsmath,amssymb} % \usepackage{color} \DeclareMathOperator*{\argmin}{argmin} % \newcommand*\samethanks[1][\value{footnote}]{\footnotemark[#1]} \begin{document} \pagestyle{headings} \mainmatter \title{Dynamic Image for 3D MRI Image Alzheimer’s Disease Classification} % \titlerunning{Dynamic Image for 3D MRI Image Alzheimer’s Disease Classification} \author{Xin Xing\thanks{authors show equal contribution}\orcidID{0000-0001-7207-5149}\and Gongbo Liang\samethanks \orcidID{0000-0002-6700-6664} \and Hunter Blanton \orcidID{0000-0001-8058-4218} \and Muhammad Usman Rafique\orcidID{0000-0001-5504-5482}\and Chris Wang \orcidID{0000-0003-3898-3690}\and Ai-Ling Lin \orcidID{0000-0002-5197-2219} \and Nathan Jacobs \orcidID{0000-0002-4242-8967} } \authorrunning{X. Xing et al.} \institute{University of Kentucky, Lexington KY 40506, USA \\ \email{\{xxi242, gli238\}@g.uky.edu}} \maketitle \begin{abstract} We propose to apply a 2D CNN architecture to 3D MRI image Alzheimer's disease classification. Training a 3D convolutional neural network (CNN) is time-consuming and computationally expensive. We make use of approximate rank pooling to transform the 3D MRI image volume into a 2D image to use as input to a 2D CNN. We show our proposed CNN model achieves $9.5\%$ better Alzheimer's disease classification accuracy than the baseline 3D models. We also show that our method allows for efficient training, requiring only $20\%$ of the training time compared to 3D CNN models. The code is available online: https://github.com/UkyVision/alzheimer-project. \keywords{Dynamic image, 2D CNN, MRI image, Alzheimer's Disease} \end{abstract} \section{Introduction} Alzheimer's disease (AD) is the sixth leading cause of death in the U.S.~\cite{nih}. It heavily affects the patients' families and U.S. health care system due to medical payments, social welfare cost, and salary loss. Since AD is irreversible, early stage diagnosis is crucial for helping slow down disease progression. Currently, researchers are using advanced neuroimaging techniques, such as magnetic resonance imaging (MRI), to identify AD. MRI technology produces a 3D image, which has millions of voxels. Figure~\ref{fig1} shows example slices of Cognitive Unimpaired (CU) and Alzheimer's disease (AD) MRI images. \begin{figure} \centering \includegraphics[scale=0.5]{fig1.pdf} \caption{The MRI sample slices of the CU and AD participants and the corresponding dynamic images.} \label{fig1} \end{figure} With the promising performance of deep learning in natural image classification, convolutional neural networks (CNNs) show tremendous potential in medical image diagnosis. Due to the volumetric nature of MRI images, the natural deep learning model is a 3D convolutional neural network (3D CNN)~\cite{3dcnn}. Compared to 2D CNN models, 3D CNN models are more computationally expensive and time consuming to train due to the high dimensionality of the input. Another issue is that most current medical datasets are relatively small. The limited data makes it difficult to train a deep network that generalizes to high accuracy on unseen data. To overcome the problem of limited medical image training data, transfer learning is an attractive approach for feature extraction. However, pre-trained CNN models are mainly trained on 2D image datasets. There are few suitable pre-trained 3D CNN models. 
In our paper, we propose to apply approximate rank pooling~\cite{dyi} to convert a 3D MRI volume into a 2D image over the height dimension. Thus, we can use a 2D CNN architecture for 3D MRI image classification. The main contributions of our work are the following:
\begin{itemize}
\item We propose a CNN model that transforms the 3D MRI volume into a 2D dynamic image and uses it as the input of a 2D CNN. Combined with an attention mechanism, the proposed model significantly boosts the accuracy of Alzheimer's disease MRI diagnosis.
\item We analyze the effect of MRI images that include the skull on the approximate rank pooling method, showing that approximate rank pooling is sensitive to the noise introduced by the skull. Skull stripping is therefore necessary before applying the dynamic image technique.
\end{itemize}
\section{Related Work}
Learning-based Alzheimer's disease (AD) research can be mainly divided into two branches based on the type of input: (1) manually selected region of interest (ROI) input and (2) whole image input. With ROI models~\cite{ref1}~\cite{ref2}, manual region selection is needed to extract the region of interest from the original brain image as the input to the CNN model, which is a time-consuming task. It is more straightforward and desirable to use the whole image as input. Korolev et al.~\cite{Korolev2017} propose two 3D CNN architectures based on VGGNet and ResNet; theirs is the first study to show that a manual feature extraction step is unnecessary for brain MRI image classification. Their 3D models, called 3D-VGG and 3D-ResNet, are widely used in 3D medical image classification studies. Cheng et al.~\cite{Cheng2017} propose to use multiple 3D CNN models trained on MRI images for AD classification in an ensemble learning strategy. They separate the original 3D MRI images into many patches (n=27), then forward each patch to an independent 3D CNN for feature extraction. Afterward, the extracted features are concatenated for classification. The performance is satisfactory, but the computation cost and training time overhead are very high. Yang et al.~\cite{Yang2018} use the 3D CNN models of Korolev et al.~\cite{Korolev2017} as a backbone for studying the explainability of AD classification in MRI images by extending class activation mapping (CAM)~\cite{cam} and gradient-based CAM~\cite{grad-cam} to 3D images. In our work, we use the whole brain MRI image as input and use 3D-VGG and 3D-ResNet as our baseline models.
Dynamic images were first applied to medical imagery by Liang et al.~\cite{Liang2019} for breast cancer diagnosis. The authors use the dynamic image method to convert 3D digital breast tomosynthesis images into dynamic images and combine them with 2D mammography images for breast cancer classification. In our work, we propose to combine dynamic images with an attention mechanism for 3D MRI image classification.
\section{Approach}
We provide a detailed discussion of our method. First, we summarize the high-level network architecture. Second, we provide detailed information about the approximate rank pooling method. Next, we describe our classifier structure and attention mechanism. Finally, we discuss the loss function used for training.
\subsection{Model Architecture}
\begin{figure}[h!]
\centering
\includegraphics[scale=0.3]{workflow1.pdf}
\caption{The architecture of our 2D CNN model.}
\label{fig2}
\end{figure}
Figure~\ref{fig2} illustrates the architecture of our model.
The 3D MRI image is passed to the approximate rank pooling module, which transforms the 3D MRI volume into a 2D dynamic image. We apply transfer learning for feature extraction with the dynamic image as the input, leveraging a pre-trained CNN as the backbone feature extractor. The feature extraction model is pre-trained on the ImageNet dataset~\cite{imagenet}. Because we use a lower input resolution than the resolution used for ImageNet training, we use only a portion of the pre-trained CNN. The extracted features are finally sent to a small classifier for diagnosis prediction. The attention mechanism, which is widely used in the computer vision community, can boost CNN model performance, so we embed an attention module in our classifier.
\subsection{Dynamic Image}
Temporal rank pooling~\cite{Fernando,dyi} was originally proposed for video action recognition. For a video with $T$ frames $I_{1}, \dots, I_{T}$, the method compresses the whole video into one frame by temporal rank pooling. The compressed frame is called a dynamic image. The construction of the dynamic image is based on Fernando et al.~\cite{Fernando}, who use a ranking function to represent the video. Let $\psi(I_{t})\in\Re^m$ be a feature representation of the individual frame $I_t$, and let $V_t=\frac{1}{t}\sum_{\tau=1}^{t}\psi(I_{\tau})$ be the temporal average of the features up to time $t$. Each time $t$ is assigned a ranking score $S(t|d)=\langle d, V_t\rangle$, where $d\in\Re^m$ is a learned parameter vector. Because later times accumulate more frames in the average, they should receive larger scores, i.e., $q>t\Rightarrow S(q|d)>S(t|d)$; these are the constraints of the ranking problem. The whole problem can thus be formulated as a convex optimization using RankSVM:
\begin{equation}
d^*=\rho(I_1, \dots, I_T; \psi)=\argmin_d E(d)
\label{eq:1}
\end{equation}
\begin{equation}
E(d)=\frac{\lambda}{2}\|d\|^2 + \frac{2}{T(T-1)}\sum_{q>t}\max\{0, 1-S(q|d)+S(t|d)\}
\label{eq:2}
\end{equation}
In Equation~\eqref{eq:2}, the first term is the quadratic regularizer used in SVMs, and the second term is a hinge loss counting incorrectly ranked pairs $q>t$. The RankSVM formulation can be used for dynamic image generation, but solving it is computationally expensive. Bilen et al.~\cite{dyi} propose a fast approximate rank pooling for dynamic images:
\begin{equation}
\hat{\rho}(I_1, \dots, I_T; \psi)=\sum_{t=1}^{T}\alpha_t \cdot\psi(I_t)
\label{eq:3}
\end{equation}
where $\psi(I_t)=\frac{1}{t}\sum_{\tau=1}^{t}I_{\tau}$ is the temporal average of the frames up to time $t$, and $\alpha_t=2t-T-1$ is the coefficient associated with $\psi(I_t)$. We adopt this approximate rank pooling strategy for the 3D-volume-to-2D-image transformation. In our implementation, the z-dimension of the 3D MRI image plays the role of the temporal dimension of a video.
\subsection{Classifier with Attention Mechanism}
\begin{figure}[h!]
\centering
\includegraphics[scale=0.3]{att.pdf}
\caption{The attention mechanism structure in our CNN model.}
\label{fig3}
\end{figure}
The classifier is a combination of an attention module and a basic classifier. Figure~\ref{fig3} depicts the structure of the attention module, which consists of four $1 \times 1$ convolutional layers. The first three convolutional layers use ReLU activations; the last convolutional layer is followed by a softmax activation.
The input feature maps $A \in \Re^{H\times W\times C}$ are passed through the four convolutional layers to compute an attention mask $S\in \Re^{H\times W\times 1}$. We apply element-wise multiplication between the attention mask and the input feature maps to obtain the final output feature map $O \in \Re^{H\times W\times C}$. Our basic classifier contains three fully connected (FC) layers. The output dimensions of the three FC layers are 512, 64, and 2. Dropout layers with probability 0.5 are used after the first two FC layers.
\subsection{Loss Function}
Previous AD classification studies have mainly concentrated on binary classification, and we do the same for ease of comparison. The overall loss function is binary cross-entropy. For a 3D image $V$ with label $l$ and predicted probability $p(l|V)$, the loss function is:
\begin{equation}
\mathrm{loss}(l,V)=-\left[l \cdot \log p(l|V)+(1-l) \cdot \log\big(1-p(l|V)\big)\right]
\label{eq:4}
\end{equation}
where the label $l=0$ indicates a negative sample and $l=1$ indicates a positive sample.
\section{Evaluation}
We use the publicly available dataset from the Alzheimer’s Disease Neuroimaging Initiative (ADNI)~\cite{ADNI} for our work. Specifically, we trained CNNs with the data from the ``spatially normalized, masked, and N3-corrected T1 images'' category. The brain MRI image size is $110 \times 110 \times 110$. Since a subject may have multiple MRI scans in the database, we use only the first scan of each subject to avoid data leakage. The total number of data samples is 100, containing 51 CU samples and 49 AD samples. The CNNs are implemented in PyTorch. We use five-fold cross validation to better evaluate model performance. The batch size used for our model is 16. The batch size of the baseline models is 8, which is the maximum batch size at which the 3D CNN models can be trained on a single GTX 1080 Ti GPU. We use the Adam optimizer with $\beta_1=0.9$ and $\beta_2=0.999$. The learning rate is 0.0001. We train for 150 epochs. To evaluate the performance of our model, we use accuracy (Acc), the area under the receiver operating characteristic curve (ROC), F1 score (F1), Precision, Recall and Average Precision (AP) as our evaluation metrics.
\subsection{Quantitative Results}
High-quality feature extraction is crucial for the final prediction. Different pre-trained CNN models output features that differ in size and effective receptive field. We therefore test different pre-trained CNNs to find out which model performs best as our feature extractor. Table~\ref{table1} shows the various CNN models and the corresponding output feature sizes.
\setlength{\tabcolsep}{4pt}
\begin{table}
\begin{center}
\caption{The different pre-trained CNN models used as feature extractors and their output feature sizes}
\label{table1}
\begin{tabular}{lll}
\hline\noalign{\smallskip}
CNN model & & Output feature size\\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
AlexNet~\cite{Alex} & & $256\times5\times5$ \\
VggNet11~\cite{Vgg} & & $512\times6\times6$ \\
ResNet18~\cite{He2015} & & $512\times7\times7$ \\
MobileNet\_v2~\cite{Sandler_2018_CVPR} & &$1280\times4\times4$ \\
\hline
\end{tabular}
\end{center}
\end{table}
\setlength{\tabcolsep}{1.4pt}
\setlength{\tabcolsep}{4pt}
Since our dynamic image resolution is $110\times110\times3$, much smaller than the ImageNet resolution of $256\times256\times3$, we use only part of the pre-trained CNN as the feature extractor.
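For concreteness, the sketch below shows one plausible PyTorch instantiation of this pipeline with VGG11 as the backbone: dropping only the final max-pooling layer yields the $512\times6\times6$ feature size of Table~\ref{table1} for a $110\times110\times3$ dynamic image. The intermediate channel widths of the attention block and the activations between the FC layers are illustrative assumptions rather than the exact released implementation.
\begin{verbatim}
# Sketch (our reconstruction, not the released code) of the 2D CNN with attention.
import torch
from torch import nn
from torchvision.models import vgg11

class AttentionClassifier(nn.Module):
    def __init__(self, channels=512, spatial=6, num_classes=2):
        super().__init__()
        # Four 1x1 convolutions; intermediate widths are assumptions.
        self.att = nn.Sequential(
            nn.Conv2d(channels, 256, 1), nn.ReLU(),
            nn.Conv2d(256, 128, 1), nn.ReLU(),
            nn.Conv2d(128, 64, 1), nn.ReLU(),
            nn.Conv2d(64, 1, 1))
        # Three FC layers (512, 64, 2) with dropout 0.5 after the first two;
        # ReLU between FC layers is an assumption.
        self.fc = nn.Sequential(
            nn.Linear(channels * spatial * spatial, 512), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(512, 64), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(64, num_classes))

    def forward(self, feat):                       # feat: (B, 512, 6, 6)
        b, c, h, w = feat.shape
        # Softmax over spatial positions (our reading of Fig. 3).
        mask = self.att(feat).view(b, 1, h * w).softmax(dim=-1).view(b, 1, h, w)
        feat = feat * mask                         # element-wise re-weighting
        return self.fc(feat.flatten(1))

# ImageNet-pretrained VGG11 features with the final max-pooling layer dropped,
# so a 110x110x3 dynamic image yields a 512x6x6 feature map (Table 1).
backbone = nn.Sequential(*list(vgg11(weights="IMAGENET1K_V1").features.children())[:-1])
model = nn.Sequential(backbone, AttentionClassifier())
logits = model(torch.randn(2, 3, 110, 110))        # dynamic images as input
\end{verbatim}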
Directly using the whole pre-trained CNN model as feature extractor will cause the output feature size to be too small, which decreases the classification performance. In the implementation, we get rid of the maxpooling layer of each pre-trained model except for the MobileNet\_v2~\cite{Sandler_2018_CVPR}, which contains no maxpooling layer. Also, because there is a domain gap between the natural image and medical image we set the pre-trained CNN models' parameters trainable, so that we can fine tune the models for better performance. \begin{table} \begin{center} \caption{The performance results of different backbone models with dynamic image as input} \label{table2} \begin{tabular}{llccccc} \hline\noalign{\smallskip} Model & Acc & ROC &F1 & Precision & Recall & AP\\ \noalign{\smallskip} \hline \noalign{\smallskip} AlexNet & 0.87 & 0.90 & 0.86 & 0.89 & 0.83 & 0.82 \\ ResNet18 & 0.85 & 0.84 & 0.84 & 0.86 & 0.81 & 0.79 \\ MobileNet\_v2 & 0.88 & 0.89 & 0.87 & 0.89 & 0.85 & 0.83 \\ VggNet11 & 0.91 & 0.92 & 0.91 & 0.88 & 0.93 & 0.86 \\ \hline \end{tabular} \end{center} \end{table} \setlength{\tabcolsep}{1.4pt} \begin{table} \begin{center} \caption{The performance results of different 2D and 3D CNN models} \label{table3} \begin{tabular}{llcccccc} \hline\noalign{\smallskip} Model &$\quad$ & Acc & ROC &F1 & Precision & Recall & AP\\ \noalign{\smallskip} \hline \noalign{\smallskip} 3D-VGG~\cite{Korolev2017} &$\quad$ & 0.80 & 0.78 & 0.78 & 0.82 & 0.75 & 0.74 \\ 3D-ResNet~\cite{Korolev2017}&$\quad$ & 0.84 & 0.82 & 0.82 & 0.86 & 0.79 & 0.78 \\ \hline Max. + VGG11&$\quad$ & 0.80 & 0.77 & 0.80 & 0.78 & 0.81 & 0.73 \\ Avg. + VGG11&$\quad$ & 0.86 & 0.84 & 0.86 & 0.83 & 0.89 & 0.79 \\ Max. + VGG11 + Att&$\quad$ & 0.82 & 0.76 & 0.82 & 0.80 & 0.83 & 0.75 \\ Avg. + VGG11 + Att&$\quad$ & 0.88 & 0.89 & 0.88 & 0.85 & \textbf{0.91} & 0.82 \\ \hline Ours &$\quad$ & \textbf{0.92} &\textbf{0.95} & \textbf{0.91} & \textbf{0.97} & 0.85 & \textbf{0.90} \\ \hline \end{tabular} \end{center} \end{table} \setlength{\tabcolsep}{1.4pt} When analyzing MRI images using computer-aided detectors (CADs), it is common to strip out the skulls from the brain images. Thus, we first test the proposed method using the MRI with the skull stripped. Our proposed model takes dynamic images (Dyn) as input, VGG11 as feature extractor, and a classifier with the attention mechanism: $Dyn + VGG11 + Att $. The whole experiment can be divided into three sections: the backbone and attention section, the baseline model section, and the pooling section. In the backbone and attention section, we use 4 different pre-trained models and test the selected backbone with and without the attention mechanism. Based on the performance shown in Table~\ref{table2}, we choose VGG11 as the backbone model. In the baseline model section, we compare our method with two baselines, namely 3D-VGG and 3D-ResNet. Table~\ref{table3} shows the performance under different CNN models. The proposed model achieves $9.52\%$ improvement in accuracy and $15.20\%$ better ROC over the 3D-ResNet. In the pooling section: we construct two baselines by replacing the approximate rank pooling module with the average pooling (Avg.) layer or max pooling (Max.) layer. The pooling layer processes the input 3D image over the z-dimension and outputs the same size as the dynamic image. Comparing with the different 3D-to-2D conversion methods under the same configuration, the dynamic image outperforms the two pooling methods. 
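For reference, a minimal NumPy sketch of the three 3D-to-2D conversions compared above is given below, following the approximate rank pooling of Eq.~\eqref{eq:3}; treating axis 0 as the z-dimension and replicating the normalized single-channel result into three channels for the RGB-pretrained backbone are assumptions made for illustration.
\begin{verbatim}
# Sketch of the 3D-to-2D conversions compared above (illustration only).
import numpy as np

def approximate_rank_pooling(volume):
    """Eq. (3): volume has shape (T, H, W); slices along z play the role of frames."""
    T = volume.shape[0]
    # psi(I_t): running average of the slices up to t (1-indexed).
    psi = np.cumsum(volume, axis=0) / np.arange(1, T + 1)[:, None, None]
    alpha = 2 * np.arange(1, T + 1) - T - 1          # alpha_t = 2t - T - 1
    return np.tensordot(alpha, psi, axes=1)          # weighted sum over t

def to_three_channels(img):
    # Assumption: min-max normalize and replicate to 3 channels for an RGB backbone.
    img = (img - img.min()) / (img.max() - img.min() + 1e-8)
    return np.repeat(img[..., None], 3, axis=-1)

volume = np.random.rand(110, 110, 110)               # (z, H, W) MRI volume
dynamic = to_three_channels(approximate_rank_pooling(volume))   # 110x110x3
avg_baseline = to_three_channels(volume.mean(axis=0))           # Avg. pooling over z
max_baseline = to_three_channels(volume.max(axis=0))            # Max. pooling over z
\end{verbatim}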
\subsection{Pre-processing Importance Evaluation}
\begin{table}
\begin{center}
\caption{The performance results of different 2D and 3D CNN models on the MRI images with skull.}
\label{table4}
\begin{tabular}{lccccccc}
\hline\noalign{\smallskip}
Model &$\quad$ & Acc & ROC &F1 & Precision & Recall & AP\\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
3D-VGG~\cite{Korolev2017} &$\quad$ & 0.78 & 0.62 & 0.77 & 0.80 & 0.75 & 0.72 \\
Ours &$\quad$ & 0.63 & 0.52 & 0.63 & 0.62 & 0.64 & 0.57\\
\hline
\end{tabular}
\end{center}
\end{table}
\setlength{\tabcolsep}{1.4pt}
\begin{figure}
\centering
\includegraphics[scale=0.5]{MRIwskull.pdf}
\caption{The MRI sample slices with skull of the CU and AD participants and the corresponding dynamic images.}
\label{fig4}
\end{figure}
In this section, we show results using the raw MRI images (including the skull) as input. We perform experiments on the same patients' raw brain MRI images, with the skull included, to test the performance of our model. The raw MRI image category is ``MT1, GradWarp, N3m''. The size of the raw MRI images is $176 \times 256 \times 256$. Figure~\ref{fig4} illustrates the dynamic images of different participants' brain MRI images with the skull. The dynamic images are blurrier than those obtained after skull stripping, because the variation of the skull acts as noise in the dynamic image. Table~\ref{table4} shows the significant performance decrease when using 3D brain MRI images with the skull, and Figure~\ref{fig4} shows visually how the dynamic images are affected by including the skull. In this scenario, the model cannot adequately distinguish the two groups. A potential cause of this decrease in performance is that the approximate rank pooling module is a non-trainable pre-processing step. We believe an end-to-end, learnable rank pooling module would improve performance.
\subsection{Model Training Time}
\begin{table}
\begin{center}
\caption{The total training time of the different CNN models for 150 epochs.}
\label{table5}
\begin{tabular}{lcc}
\hline\noalign{\smallskip}
&$\quad$ &Training time (s) \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
3D-VGG~\cite{Korolev2017} & &2359 \\
3D-ResNet~\cite{Korolev2017} & &3916\\
Ours & &414\\
\hline
\end{tabular}
\end{center}
\end{table}
\setlength{\tabcolsep}{1.4pt}
Another advantage of the proposed model is faster training. We train all of our CNN models for 150 epochs on the same input dataset. Table~\ref{table5} shows the total training time of the different 2D and 3D CNN models. Compared with the 3D CNN networks, the proposed model trains in about $20\%$ of the time. Moreover, due to the extra dimension, a 3D convolutional layer naturally has more parameters than its 2D counterpart. Applying MobileNet~\cite{mobilenet} or ShuffleNet~\cite{shuffle} to medical image diagnosis also opens the potential for mobile applications: using the MobileNet~v1 architecture as the feature extractor in our experiments obtained $84.84\%$ accuracy, which is similar to the 3D-ResNet.
\section{Conclusions}
We proposed to apply the approximate rank pooling method to convert 3D brain MRI images into 2D dynamic images as the inputs to a pre-trained 2D CNN. The proposed model outperforms the 3D CNNs with much less training time and achieves $9.5\%$ higher accuracy than the baselines.
We trained and evaluated on brain MRI imagery and found that skull-stripping pre-processing is important before applying the approximate rank pooling conversion. We used an offline approximate rank pooling module in our experiments, but we believe it would be interesting to explore a learnable temporal rank pooling module in the future.
\section*{Acknowledgement}
This work is supported by NIH/NIA R01AG054459.
\bibliographystyle{splncs04}
\bibliography{egbib}
\end{document}
https://openreview.net/forum?id=HdNVXBdk05
https://arxiv.org/abs/2012.00119
Evaluate the paper on 3D MRI image Alzheimer's Disease classification, focusing on the strengths and weaknesses of the classification framework, the introduction of dynamic images and attention module, and the limitations of the experimental dataset and theoretical description.
Simple and effective method to improve AD classification Summary ------- This paper presents a method for Alzheimer's Disease (AD) classification from 3D MRI scans. In contrast to earlier approaches, the 3D scans are first converted into a 2D dynamic image, which is then processed by a 2D convolutional neural network. The network consists of a pre-trained (and fine-tuned) feature extractor, an attention module, and subsequent fully connected layers for the final classification. Results on MRI scans with stripped skulls demonstrate a significant increase compared to fully 3D baselines, albeit using only 20% of the time needed for training. Quality and Clarity ------------------- The introduction, motivation, and technical description are very well written and easy to follow. The provided figures are helpful to understand the method and show qualitative results of the dynamic image generation. Originality ----------- Using 2D dynamic images instead of the full 3D scan is a compelling idea (although already explored in other medical domains). The main contribution of this paper lies therefore in the subsequent use of pre-trained 2D feature extractors, which are enabled by the 2D input images. Furthermore, the authors investigate the usefulness of an attention module between the feature extraction and classification network. Significance ------------ The results demonstrate consistent improvements over several baselines and variations of the proposed model. On top of those improvements, the proposed model trains faster and requires less resources for prediction. The authors mention that "there is potential for mobile applications" (line 360). I am not sure what this could look like in a clinical setting and suggest the authors elaborate if they want to make this point convincingly. My main concern for a clinical application is that the improved results seem to be achieved only on "skull stripped" MRI scans. From the paper, it is unclear whether this is a manual, semi-automatic, or entirely automatic process. I would appreciate if the authors could discuss this point in more detail to provide context about the amount of manual labor needed in the proposed pipeline. Pros ---- * well written * elegant architecture * thorough comparison to baselines and model variations * significant accuracy improvement Cons ---- * unclear how much manual intervention is needed for "skull stripping" Minor Comments -------------- * title: capitalize "image" and "classification" * line 56: "We" -> "we" * line 144: "association" -> "associated" * line 283: "choice" -> "choose" * line 335: "decrease if" -> "decrease in" * Equation 4: What is $i$ running over? According to the text, $l$ and $I$ are label and image of a single sample. * Figure 4: It would be helpful to see same (or similar) images without skull in the same figure. * notation: $\mathcal{R}$ vs $R$ Rating: 8: Top 50% of accepted papers, clear accept Confidence: 4: The reviewer is confident but not absolutely certain that the evaluation is correct
https://openreview.net/forum?id=HdNVXBdk05
https://arxiv.org/abs/2012.00119
Please evaluate the effectiveness of our proposed method for Alzheimer's Disease classification from 3D MRI scans, including its impact on accuracy, training time, and resource requirements.
An interesting moslty solid contribution with some problematic unclear aspects. ----------------------------- Summary ----------------------------- The authors present a novel method for the diagnosis of Alzheimer's disease from MRI volumes. The first compress the 3D volume to a 2D 'dynamic' image, which is then fed into a pre-trained features extractor. Finally, the resulting feature vector is subject to an attention mechanism and processed by a fully connected MLP to predict a probability for the disease. The method is evaluated against various baselines and achieves competitive results. ----------------------------- Strengths ----------------------------- - The paper is for the most part clearly written and structured. - The method outperforms its baselines, which are adequatly choosen as far as I can tell. ----------------------------- Weaknesses ----------------------------- My main concern is the unclear explanation of the dynamic image compression. This would not be a big issue, since they say they are using an existing method [3]. However, looking at [3], I am not sure the equation they give (Eq. 3) corresponds to what is presented in [3] and if it is really correct. To be more precise, Eq 3 computes simply an averaged image using fixed weights \alpha_t. It does not even make use of the feature representation. In contrast, the method in [3] averages the computed feature representations, as far as I understand. I am really not sure if this is just a typo in Eq 3 or if the authors are simply averaging the images with fixed weights and overselling it as a more sophisticated method. The mathematical notation is not very clear. In line 140 'd' is used as the dimension of a feature vector. In line 143 it is itself a feature vector. In Section 3.2 the 'I' denotes an input image, but in Section 3.3 the same symbol is used for the feature image produced by the network. Throughout the paper, various symbols are used for the set of real numbers. ----------------------------- Final Recommendation ----------------------------- Even though the paper seems highly relevant and generally solid, I am really concerned about the computation of the dynamic image. I thus see the paper as borderline. However, as I am not an expert on dynamic images, it might be that I simply misunderstood this part or that it can be explained as a typo. Rating: 5: Marginally below acceptance threshold Confidence: 4: The reviewer is confident but not absolutely certain that the evaluation is correct
\documentclass[runningheads]{llncs} \usepackage{graphicx} \usepackage{tikz} \usepackage{comment} \usepackage{amsmath,amssymb} % \usepackage{color} \DeclareMathOperator*{\argmin}{argmin} % \newcommand*\samethanks[1][\value{footnote}]{\footnotemark[#1]} \begin{document} \pagestyle{headings} \mainmatter \title{Dynamic Image for 3D MRI Image Alzheimer’s Disease Classification} % \titlerunning{Dynamic Image for 3D MRI Image Alzheimer’s Disease Classification} \author{Xin Xing\thanks{authors show equal contribution}\orcidID{0000-0001-7207-5149}\and Gongbo Liang\samethanks \orcidID{0000-0002-6700-6664} \and Hunter Blanton \orcidID{0000-0001-8058-4218} \and Muhammad Usman Rafique\orcidID{0000-0001-5504-5482}\and Chris Wang \orcidID{0000-0003-3898-3690}\and Ai-Ling Lin \orcidID{0000-0002-5197-2219} \and Nathan Jacobs \orcidID{0000-0002-4242-8967} } \authorrunning{X. Xing et al.} \institute{University of Kentucky, Lexington KY 40506, USA \\ \email{\{xxi242, gli238\}@g.uky.edu}} \maketitle \begin{abstract} We propose to apply a 2D CNN architecture to 3D MRI image Alzheimer's disease classification. Training a 3D convolutional neural network (CNN) is time-consuming and computationally expensive. We make use of approximate rank pooling to transform the 3D MRI image volume into a 2D image to use as input to a 2D CNN. We show our proposed CNN model achieves $9.5\%$ better Alzheimer's disease classification accuracy than the baseline 3D models. We also show that our method allows for efficient training, requiring only $20\%$ of the training time compared to 3D CNN models. The code is available online: https://github.com/UkyVision/alzheimer-project. \keywords{Dynamic image, 2D CNN, MRI image, Alzheimer's Disease} \end{abstract} \section{Introduction} Alzheimer's disease (AD) is the sixth leading cause of death in the U.S.~\cite{nih}. It heavily affects the patients' families and U.S. health care system due to medical payments, social welfare cost, and salary loss. Since AD is irreversible, early stage diagnosis is crucial for helping slow down disease progression. Currently, researchers are using advanced neuroimaging techniques, such as magnetic resonance imaging (MRI), to identify AD. MRI technology produces a 3D image, which has millions of voxels. Figure~\ref{fig1} shows example slices of Cognitive Unimpaired (CU) and Alzheimer's disease (AD) MRI images. \begin{figure} \centering \includegraphics[scale=0.5]{fig1.pdf} \caption{The MRI sample slices of the CU and AD participants and the corresponding dynamic images.} \label{fig1} \end{figure} With the promising performance of deep learning in natural image classification, convolutional neural networks (CNNs) show tremendous potential in medical image diagnosis. Due to the volumetric nature of MRI images, the natural deep learning model is a 3D convolutional neural network (3D CNN)~\cite{3dcnn}. Compared to 2D CNN models, 3D CNN models are more computationally expensive and time consuming to train due to the high dimensionality of the input. Another issue is that most current medical datasets are relatively small. The limited data makes it difficult to train a deep network that generalizes to high accuracy on unseen data. To overcome the problem of limited medical image training data, transfer learning is an attractive approach for feature extraction. However, pre-trained CNN models are mainly trained on 2D image datasets. There are few suitable pre-trained 3D CNN models. 
In our paper, we propose to apply approximate rank pooling~\cite{dyi} to convert a 3D MRI volume into a 2D image over the height dimension. Thus, we can use a 2D CNN architecture for 3D MRI image classification. The main contributions of our work are the following:
\begin{itemize}
\item We propose a CNN model that transforms the 3D MRI volume into a 2D dynamic image used as the input to a 2D CNN. Combined with an attention mechanism, the proposed model significantly boosts the accuracy of Alzheimer's disease MRI diagnosis.
\item We analyze the effect of MRI images that include the skull on the approximate rank pooling method, showing that it is sensitive to the noise introduced by the skull. Skull stripping is therefore necessary before applying the dynamic image technique.
\end{itemize}
\section{Related Work}
Learning-based Alzheimer's disease (AD) research can be mainly divided into two branches based on the type of input: (1) manually selected region of interest (ROI) input and (2) whole image input. With ROI models~\cite{ref1}~\cite{ref2}, manual region selection is needed to extract the region of interest from the original brain image as the input to the CNN model, which is a time-consuming task. It is more straightforward and desirable to use the whole image as input. Korolev et al.~\cite{Korolev2017} propose two 3D CNN architectures based on VGGNet and ResNet; theirs is the first study to show that the manual feature extraction step for brain MRI image classification is unnecessary. Their 3D models are called 3D-VGG and 3D-ResNet, and are widely used for 3D medical image classification studies. Cheng et al.~\cite{Cheng2017} propose to use multiple 3D CNN models trained on MRI images for AD classification in an ensemble learning strategy. They separate the original 3D MRI images into many patches (n=27), then forward each patch to an independent 3D CNN for feature extraction. Afterward, the extracted features are concatenated for classification. The performance is satisfactory, but the computation cost and training time overhead are very expensive. Yang et al.~\cite{Yang2018} use the 3D CNN models of Korolev et al.~\cite{Korolev2017} as a backbone for studying the explainability of AD classification in MRI images by extending class activation mapping (CAM)~\cite{cam} and gradient-based CAM~\cite{grad-cam} to 3D images. In our work, we use the whole brain MRI image as input and use 3D-VGG and 3D-ResNet as our baseline models.
Dynamic images were first applied to medical imagery by Liang et al.~\cite{Liang2019} for breast cancer diagnosis. The authors use the dynamic image method to convert 3D digital breast tomosynthesis images into dynamic images and combine them with 2D mammography images for breast cancer classification. In our work, we propose to combine dynamic images with an attention mechanism for 3D MRI image classification.
\section{Approach}
We provide a detailed discussion of our method. First, we summarize the high-level network architecture. Second, we provide detailed information about the approximate rank pooling method. Next, we show our classifier structure and attention mechanism. Finally, we discuss the loss function used for training.
\subsection{Model Architecture}
\begin{figure}[h!]
\centering
\includegraphics[scale=0.3]{workflow1.pdf}
\caption{The architecture of our 2D CNN model.}
\label{fig2}
\end{figure}
Figure~\ref{fig2} illustrates the architecture of our model.
The 3D MRI image is passed to the approximate rank pooling module to transform the 3D MRI image volume into a 2D dynamic image. We apply transfer learning for feature extraction with the dynamic image as the input. We leverage a pre-trained CNN as the backbone feature extractor. The feature extraction model is pre-trained on the ImageNet dataset~\cite{imagenet}. Because we use a lower input resolution than the resolution used for ImageNet training, we use only a portion of the pre-trained CNN. The extracted features are finally sent to a small classifier for diagnosis prediction. The attention mechanism, which is widely used in the computer vision community, can boost CNN model performance, so we embed an attention module in our classifier.
\subsection{Dynamic Image}
Temporal rank pooling~\cite{Fernando}~\cite{dyi} was originally proposed for video action recognition. For a video with $T$ frames $I_{1}, \dots, I_{T}$, the method compresses the whole video into one frame by temporal rank pooling. The compressed frame is called a dynamic image. The construction of the dynamic image is based on Fernando et al.~\cite{Fernando}. The authors use a ranking function to represent the video. Let $\psi(I_{t})\in\mathbb{R}^m$ be a feature representation of the individual frame $I_t$ of the video, and let $V_t=\frac{1}{t}\sum_{\tau=1}^{t}\psi(I_{\tau})$ be the temporal average of these features up to time $t$. Each $V_t$ is assigned a ranking score $S(t|\mathbf{d})=\langle\mathbf{d}, V_t\rangle$, where $\mathbf{d}\in\mathbb{R}^m$ is a learned parameter vector. By accumulating more frames into the average, later times are associated with larger scores, i.e., $q>t\Rightarrow S(q|\mathbf{d})>S(t|\mathbf{d})$, which are the constraints of the ranking problem. The whole problem can thus be formulated as a convex optimization using RankSVM:
\begin{equation}
\mathbf{d}^*=\rho(I_1, \dots, I_T; \psi)=\argmin_{\mathbf{d}}E(\mathbf{d})
\label{eq:1}
\end{equation}
\begin{equation}
E(\mathbf{d})=\frac{\lambda}{2}\|\mathbf{d}\|^2 + \frac{2}{T(T-1)}\times\sum_{q>t}\max\{0, 1-S(q|\mathbf{d})+S(t|\mathbf{d})\}
\label{eq:2}
\end{equation}
In Equation~\eqref{eq:2}, the first term is the quadratic regularization used in SVMs, and the second term is a hinge loss counting incorrect rankings for the pairs $q>t$. The RankSVM formulation can be used for dynamic image generation, but solving it is computationally expensive. Bilen et al.~\cite{dyi} proposed a fast approximate rank pooling for dynamic images:
\begin{equation}
\hat{\rho}(I_1, \dots, I_T)=\sum_{t=1}^{T}\alpha_t \cdot V_t
\label{eq:3}
\end{equation}
where $V_t=\frac{1}{t}\sum_{\tau=1}^{t}I_{\tau}$ is the temporal average of the raw frames up to time $t$ (i.e., following~\cite{dyi}, the feature map $\psi$ is taken to be the identity, so the pooling operates directly on the image pixels), and $\alpha_t=2t-T-1$ is the coefficient associated with $V_t$. We adopt this approximate rank pooling strategy in our work for the 3D-volume-to-2D-image transformation. In our implementation, the z-dimension of the 3D MRI image plays the role of the temporal dimension of the video.
\subsection{Classifier with Attention Mechanism}
\begin{figure}[h!]
\centering
\includegraphics[scale=0.3]{att.pdf}
\caption{The attention mechanism structure in our CNN model.}
\label{fig3}
\end{figure}
The classifier is a combination of an attention mechanism module and a basic classifier. Figure~\ref{fig3} depicts the structure of the attention mechanism, which consists of four $1 \times 1$ convolutional layers. The first three convolutional layers use ReLU activations, while the last convolutional layer is followed by a softmax activation function.
The input feature maps $A \in \mathbb{R}^{H\times W\times C}$ are passed through the four convolutional layers to calculate an attention mask $S\in \mathbb{R}^{H\times W\times 1}$. We apply element-wise multiplication between the attention mask and the input feature maps to obtain the final output feature map $O \in \mathbb{R}^{H\times W\times C}$. Our basic classifier contains three fully connected (FC) layers. The output dimensions of the three FC layers are 512, 64, and 2. Dropout layers are used after the first two FC layers with dropout probability 0.5.
\subsection{Loss Function}
In previous AD classification studies, researchers mainly concentrated on binary classification. In our work, we do the same for ease of comparison. The overall loss function is binary cross-entropy. For a 3D image $V$ with label $l$ and probability prediction $p(l|V)$, the loss function is:
\begin{equation}
\mathrm{loss}(l,V)=-[l \cdot \log(p(l|V))+(1-l) \cdot \log(1-p(l|V))]
\label{eq:4}
\end{equation}
where the label $l=0$ indicates a negative sample and $l=1$ indicates a positive sample.
\section{Evaluation}
We use the publicly available dataset from the Alzheimer's Disease Neuroimaging Initiative (ADNI)~\cite{ADNI} for our work. Specifically, we trained CNNs with the data from the ``spatially normalized, masked, and N3-corrected T1 images'' category. The brain MRI image size is $110 \times 110 \times 110$. Since a subject may have multiple MRI scans in the database, we use the first scan of each subject to avoid data leakage. The total number of data samples is 100, containing 51 CU samples and 49 AD samples.
The CNNs are implemented in PyTorch. We use five-fold cross-validation to better evaluate model performance. The batch size used for our model is 16. The batch size of the baseline models is 8, which is the maximum batch size of the 3D CNN models trained on a single GTX 1080 Ti GPU. We use the Adam optimizer with $\beta_1=0.9$ and $\beta_2=0.999$. The learning rate is 0.0001. We train for 150 epochs. To evaluate the performance of our model, we use accuracy (Acc), the area under the Receiver Operating Characteristic curve (ROC), F1 score (F1), Precision, Recall, and Average Precision (AP) as our evaluation metrics.
\subsection{Quantitative Results}
High-quality feature extraction is crucial for the final prediction. Different pre-trained CNN models can output different features in terms of size and effective receptive field. We test different pre-trained CNNs to find out which CNN models perform best as our feature extractor. Table~\ref{table1} shows the various CNN models and the corresponding output feature sizes.
\setlength{\tabcolsep}{4pt}
\begin{table}
\begin{center}
\caption{The different pre-trained CNN models used as feature extractors and the output feature sizes}
\label{table1}
\begin{tabular}{lll}
\hline\noalign{\smallskip}
CNN model & & Output feature size\\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
AlexNet~\cite{Alex} & & $256\times5\times5$ \\
VggNet11~\cite{Vgg} & & $512\times6\times6$ \\
ResNet18~\cite{He2015} & & $512\times7\times7$ \\
MobileNet\_v2~\cite{Sandler_2018_CVPR} & &$1280\times4\times4$ \\
\hline
\end{tabular}
\end{center}
\end{table}
\setlength{\tabcolsep}{1.4pt}
\setlength{\tabcolsep}{4pt}
Since our dynamic image resolution is $110\times110\times3$, which is much smaller than the ImageNet dataset resolution of $256\times256\times3$, we use only part of the pre-trained CNN as the feature extractor.
Directly using the whole pre-trained CNN model as feature extractor will cause the output feature size to be too small, which decreases the classification performance. In the implementation, we get rid of the maxpooling layer of each pre-trained model except for the MobileNet\_v2~\cite{Sandler_2018_CVPR}, which contains no maxpooling layer. Also, because there is a domain gap between the natural image and medical image we set the pre-trained CNN models' parameters trainable, so that we can fine tune the models for better performance. \begin{table} \begin{center} \caption{The performance results of different backbone models with dynamic image as input} \label{table2} \begin{tabular}{llccccc} \hline\noalign{\smallskip} Model & Acc & ROC &F1 & Precision & Recall & AP\\ \noalign{\smallskip} \hline \noalign{\smallskip} AlexNet & 0.87 & 0.90 & 0.86 & 0.89 & 0.83 & 0.82 \\ ResNet18 & 0.85 & 0.84 & 0.84 & 0.86 & 0.81 & 0.79 \\ MobileNet\_v2 & 0.88 & 0.89 & 0.87 & 0.89 & 0.85 & 0.83 \\ VggNet11 & 0.91 & 0.92 & 0.91 & 0.88 & 0.93 & 0.86 \\ \hline \end{tabular} \end{center} \end{table} \setlength{\tabcolsep}{1.4pt} \begin{table} \begin{center} \caption{The performance results of different 2D and 3D CNN models} \label{table3} \begin{tabular}{llcccccc} \hline\noalign{\smallskip} Model &$\quad$ & Acc & ROC &F1 & Precision & Recall & AP\\ \noalign{\smallskip} \hline \noalign{\smallskip} 3D-VGG~\cite{Korolev2017} &$\quad$ & 0.80 & 0.78 & 0.78 & 0.82 & 0.75 & 0.74 \\ 3D-ResNet~\cite{Korolev2017}&$\quad$ & 0.84 & 0.82 & 0.82 & 0.86 & 0.79 & 0.78 \\ \hline Max. + VGG11&$\quad$ & 0.80 & 0.77 & 0.80 & 0.78 & 0.81 & 0.73 \\ Avg. + VGG11&$\quad$ & 0.86 & 0.84 & 0.86 & 0.83 & 0.89 & 0.79 \\ Max. + VGG11 + Att&$\quad$ & 0.82 & 0.76 & 0.82 & 0.80 & 0.83 & 0.75 \\ Avg. + VGG11 + Att&$\quad$ & 0.88 & 0.89 & 0.88 & 0.85 & \textbf{0.91} & 0.82 \\ \hline Ours &$\quad$ & \textbf{0.92} &\textbf{0.95} & \textbf{0.91} & \textbf{0.97} & 0.85 & \textbf{0.90} \\ \hline \end{tabular} \end{center} \end{table} \setlength{\tabcolsep}{1.4pt} When analyzing MRI images using computer-aided detectors (CADs), it is common to strip out the skulls from the brain images. Thus, we first test the proposed method using the MRI with the skull stripped. Our proposed model takes dynamic images (Dyn) as input, VGG11 as feature extractor, and a classifier with the attention mechanism: $Dyn + VGG11 + Att $. The whole experiment can be divided into three sections: the backbone and attention section, the baseline model section, and the pooling section. In the backbone and attention section, we use 4 different pre-trained models and test the selected backbone with and without the attention mechanism. Based on the performance shown in Table~\ref{table2}, we choose VGG11 as the backbone model. In the baseline model section, we compare our method with two baselines, namely 3D-VGG and 3D-ResNet. Table~\ref{table3} shows the performance under different CNN models. The proposed model achieves $9.52\%$ improvement in accuracy and $15.20\%$ better ROC over the 3D-ResNet. In the pooling section: we construct two baselines by replacing the approximate rank pooling module with the average pooling (Avg.) layer or max pooling (Max.) layer. The pooling layer processes the input 3D image over the z-dimension and outputs the same size as the dynamic image. Comparing with the different 3D-to-2D conversion methods under the same configuration, the dynamic image outperforms the two pooling methods. 
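To make the pipeline described above more concrete, the following sketch outlines one possible PyTorch realization of the approximate rank pooling of Eq.~\eqref{eq:3}, the truncated ImageNet-pretrained VGG11 backbone, and the attention-based classifier head. It is an illustrative simplification rather than our exact implementation: the intermediate channel widths of the $1 \times 1$ attention convolutions, the replication of the single-channel dynamic image to three channels, and the intensity normalization are assumptions that are not specified above.
\begin{verbatim}
import torch
import torch.nn as nn
import torchvision

def approximate_rank_pooling(volume):
    # volume: (T, H, W) tensor; the z-dimension plays the role of time.
    T = volume.shape[0]
    t = torch.arange(1, T + 1, dtype=volume.dtype)
    V = torch.cumsum(volume, dim=0) / t[:, None, None]        # V_t in Eq. (3)
    alpha = 2.0 * t - T - 1.0                                  # alpha_t = 2t - T - 1
    dyn = (alpha[:, None, None] * V).sum(dim=0)
    dyn = (dyn - dyn.min()) / (dyn.max() - dyn.min() + 1e-8)   # assumed normalization
    return dyn.unsqueeze(0).repeat(3, 1, 1)                    # assumed 3-channel replication

class AttentionClassifier(nn.Module):
    # Four 1x1 convolutions produce a spatial attention mask (softmax over locations),
    # which re-weights the backbone features before the 512-64-2 FC head (dropout 0.5).
    def __init__(self, channels=512, feat_hw=6):
        super().__init__()
        self.attn = nn.Sequential(                             # intermediate widths assumed
            nn.Conv2d(channels, 128, 1), nn.ReLU(),
            nn.Conv2d(128, 64, 1), nn.ReLU(),
            nn.Conv2d(64, 32, 1), nn.ReLU(),
            nn.Conv2d(32, 1, 1))
        self.head = nn.Sequential(
            nn.Linear(channels * feat_hw * feat_hw, 512), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(512, 64), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(64, 2))

    def forward(self, feats):                                  # feats: (B, C, H, W)
        b, c, h, w = feats.shape
        mask = torch.softmax(self.attn(feats).view(b, 1, h * w), dim=-1).view(b, 1, h, w)
        return self.head((feats * mask).flatten(1))

# ImageNet-pretrained VGG11 with the final max-pooling layer removed, giving
# 512 x 6 x 6 features for a 110 x 110 dynamic image; the full model is then
# trained with the binary cross-entropy objective of Eq. (4).
vgg = torchvision.models.vgg11(weights="IMAGENET1K_V1")
backbone = nn.Sequential(*list(vgg.features.children())[:-1])
classifier = AttentionClassifier(channels=512, feat_hw=6)
\end{verbatim}
A forward pass then amounts to \texttt{classifier(backbone(approximate\_rank\_pooling(volume).unsqueeze(0)))}, which yields the two class scores for a single MRI volume.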
\subsection{Pre-processing Importance Evaluation}
\begin{table}
\begin{center}
\caption{The performance results of different 2D and 3D CNN models on the MRI images with the skull included.}
\label{table4}
\begin{tabular}{lccccccc}
\hline\noalign{\smallskip}
Model &$\quad$ & Acc & ROC &F1 & Precision & Recall & AP\\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
3D-VGG~\cite{Korolev2017} &$\quad$ & 0.78 & 0.62 & 0.77 & 0.80 & 0.75 & 0.72 \\
Ours &$\quad$ & 0.63 & 0.52 & 0.63 & 0.62 & 0.64 & 0.57\\
\hline
\end{tabular}
\end{center}
\end{table}
\setlength{\tabcolsep}{1.4pt}
\begin{figure}
\centering
\includegraphics[scale=0.5]{MRIwskull.pdf}
\caption{Sample MRI slices, with skull, of the CU and AD participants and the corresponding dynamic images.}
\label{fig4}
\end{figure}
In this section, we show results using the raw MRI images (including the skull) as input. We perform experiments on the same patients' raw brain MRI images with the skull included to test the performance of our model. The raw MRI image category is ``MT1,GradWarp,N3m''. The image size of the raw MRI images is $176 \times 256 \times 256$. Figure~\ref{fig4} illustrates the dynamic images of different participants' brain MRI images with the skull. The dynamic images are blurrier than those obtained after skull stripping. This is because the skull variance can be treated as noise in the dynamic image.
Table~\ref{table4} shows the significant performance decrease when using 3D brain MRI images with the skull. Figure~\ref{fig4} shows a visual representation of how the dynamic images are affected by including the skull in the image. In this scenario, the model cannot sufficiently distinguish the different groups. A potential cause of this decrease in performance is that the approximate rank pooling module is a pre-processing step, and the module is not trainable. We believe an end-to-end, learnable rank pooling module would improve performance.
\subsection{Model Training Time}
\begin{table}
\begin{center}
\caption{The total training time for 150 epochs of the different CNN models.}
\label{table5}
\begin{tabular}{lcc}
\hline\noalign{\smallskip}
&$\quad$ &Training time (s) \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
3D-VGG~\cite{Korolev2017} & &2359 \\
3D-ResNet~\cite{Korolev2017} & &3916\\
Ours & &414\\
\hline
\end{tabular}
\end{center}
\end{table}
\setlength{\tabcolsep}{1.4pt}
Another advantage of the proposed model is faster training. We train all of our CNN models for 150 epochs on the same input dataset. Table~\ref{table5} shows the total training time of the different 2D and 3D CNN models. Compared with the 3D CNN networks, the proposed model trains in about $20\%$ of the time. Also, due to the higher dimensionality of 3D convolutions, the number of parameters of a 3D convolutional layer is naturally higher than that of a 2D convolutional layer. By applying MobileNet~\cite{mobilenet} or ShuffleNet~\cite{shuffle} to medical image diagnosis, there is potential for mobile applications. In our experiments, we used the MobileNet~v1 architecture as the feature extractor and obtained $84.84\%$ accuracy, which is similar to the 3D-ResNet.
\section{Conclusions}
We proposed to apply the approximate rank pooling method to convert 3D brain MRI images into 2D dynamic images as the inputs for a pre-trained 2D CNN. The proposed model outperforms the 3D CNN baselines with much less training time and achieves $9.5\%$ better classification accuracy.
We trained and evaluated on brain MRI imagery and found that skull stripping is an important pre-processing step before applying the approximate rank pooling conversion. We used an offline approximate rank pooling module in our experiments, but we believe it would be interesting to explore a learnable temporal rank pooling module in the future.
\section*{Acknowledgement}
This work is supported by NIH/NIA R01AG054459.
\bibliographystyle{splncs04}
\bibliography{egbib}
\end{document}
https://openreview.net/forum?id=HdNVXBdk05
https://arxiv.org/abs/2012.00119
Please evaluate the clarity and correctness of the explanation of the dynamic image compression method in our paper, specifically in relation to the equation provided.
A new approach for annotating 3D+T cell tracking data Cell tracking is an important but challenging step in much biological research. By combining VR and eye tracking, the authors developed a new method for tracking cells in 3D and time. With the help of the two devices, users can generate cell trajectories by simply looking at a cell in a 3D movie. In the manuscript, detailed usage and user study data are provided. From what is described, this new approach appears to be a great addition and can potentially make cell tracking annotation faster and more enjoyable, as stated by the authors. Major comments: - The manuscript is well written with the necessary details, and the limitations are clearly stated. - Although it is hard to grasp the actual user experience without trying the device, the user study results seem promising and reflect a good overall experience. - Although the authors provide a user-estimated 10x speed-up with the new approach, it would be more convincing to actually measure the time for conventional methods and compare it with the new approach. - Since the user can only look at one cell at a time with the new approach, this will likely limit the overall annotation throughput. Instead of focusing on generating trajectories cell-by-cell, I would encourage the authors to explore ways to use the new hardware to fix and curate trajectories generated by automated algorithms. Similar to what is mentioned at the end of the manuscript, it would also be interesting to see how this can be combined with machine-learning algorithms. Minor comments: - The provided supplementary video is quite helpful for understanding the approach; it would be helpful to also mention the video in the manuscript. - When describing the hardware, could you provide detailed information about the eye tracking resolution? For users who are not familiar with the device, it is better to get some feeling for how accurate the eye tracking device is. - Could you also discuss how the cell size impacts the tracking performance? When the cell is big, should the user look at the center of the cell? Is there an optimal display cell size for the device? Rating: 6: Marginally above acceptance threshold Confidence: 4: The reviewer is confident but not absolutely certain that the evaluation is correct
\documentclass[runningheads]{llncs} % \newcommand{\preprint}{} \usepackage{graphicx} % \DeclareGraphicsExtensions{.pdf,.png,.jpg,.jpeg} % \graphicspath{{figures/}{pictures/}{images/}{./}} % \usepackage{microtype} % \PassOptionsToPackage{warn}{textcomp} % \usepackage{textcomp} % \usepackage{mathptmx} % \usepackage{times} % \renewcommand*\ttdefault{txtt} % \usepackage{cite} % \usepackage{tabu} % \usepackage{booktabs} % \makeatletter \newcommand*{\addFileDependency}[1]{% \typeout{(#1)} \@addtofilelist{#1} \IfFileExists{#1}{}{\typeout{No file #1.}} } \makeatother \newcommand*{\myexternaldocument}[1]{% \externaldocument{#1}% \addFileDependency{#1.tex}% \addFileDependency{#1.aux}% } \usepackage{xr-hyper} \usepackage{hyperref} \usepackage[svgnames]{xcolor} \hypersetup{ colorlinks=true, linkcolor={DarkBlue}, urlcolor={DarkBlue}} \ifdefined\preprint \else \myexternaldocument{supplement} \fi \usepackage[width=122mm,left=12mm,paperwidth=146mm,height=193mm,top=12mm,paperheight=217mm]{geometry} \usepackage[% font={small}, labelfont=bf, format=hang, format=plain, margin=0pt, width=1.0\textwidth, ]{caption} \usepackage[list=true]{subcaption} \usepackage{comment} \usepackage{microtype} \renewcommand*\ttdefault{txtt} % \usepackage[utf8]{inputenc} \usepackage{csquotes} \usepackage{breakcites} \newcommand{\TODO}[1]{\colorbox{red}{\color{white}\textbf{TODO}} {\color{red}#1}} \usepackage[capitalise,noabbrev]{cleveref} \begin{document} \pagestyle{headings} \mainmatter \def\ECCVSubNumber{23} % \title{Bionic Tracking: Using Eye Tracking to Track Biological Cells in Virtual Reality} % \titlerunning{Bionic Tracking} \author{Ulrik Günther\inst{1,2,3}\orcidID{0000-0002-1179-8228} \and Kyle I.S. Harrington\inst{4,5}\orcidID{0000-0002-7237-1973} \and Raimund Dachselt\inst{6,7}\orcidID{0000-0002-2176-876X} \and\\ Ivo F. Sbalzarini\inst{6,2,3,7}\orcidID{0000-0003-4414-4340} } \authorrunning{Günther, et al.} \institute{Center for Advanced Systems Understanding, Görlitz, Germany \and Center for Systems Biology, Dresden, Germany\and Max Planck Institute of Molecular Cell Biology and Genetics, Dresden, Germany \and Virtual Technology and Design, University of Idaho, Moscow, ID, USA\and HHMI Janelia Farm Research Campus, Ashburn, VA, USA\and Faculty of Computer Science, Technische Universität Dresden, Germany \and Excellence Cluster Physics of Life, Technische Universität Dresden, Germany } \maketitle \begin{abstract} We present Bionic Tracking, a novel method for solving biological cell tracking problems with eye tracking in virtual reality using commodity hardware. Using gaze data, and especially smooth pursuit eye movements, we are able to track cells in time series of 3D volumetric datasets. The problem of tracking cells is ubiquitous in developmental biology, where large volumetric microscopy datasets are acquired on a daily basis, often comprising hundreds or thousands of time points that span hours or days. The image data, however, is only a means to an end, and scientists are often interested in the reconstruction of cell trajectories and cell lineage trees. Reliably tracking cells in crowded three-dimensional space over many timepoints remains an open problem, and many current approaches rely on tedious manual annotation and curation. In our Bionic Tracking approach, we substitute the usual 2D point-and-click annotation to track cells with eye tracking in a virtual reality headset, where users simply have to follow a cell with their eyes in 3D space in order to track it. 
We detail the interaction design of our approach and explain the graph-based algorithm used to connect different time points, also taking occlusion and user distraction into account. We demonstrate our cell tracking method using the example of two different biological datasets. Finally, we report on a user study with seven cell tracking experts, demonstrating the benefits of our approach over manual point-and-click tracking.
\end{abstract}
\section{Introduction}
In cell and developmental biology, the image data generated via fluorescence microscopy is often only a means to an end: Many tasks require exact information about the positions of cells during development, or even their entire history, the so-called cell lineage tree. Both the creation of such a tree using cell tracking, and tracking of single cells, are difficult and cannot always be done in a fully automatic manner. Therefore, such lineage trees are created in a tedious manual process using a point-and-click 2D interface. Even if cells can be tracked (semi)automatically, faulty tracks have to be repaired manually. Again, this is a very tedious task, as the users have to go through each timepoint and 2D section in order to connect cells in 3D+time, with a 2D point-and-click interface. Manually tracking a single cell through 101 timepoints with this process takes 5 to 30 minutes, depending on the complexity of the dataset. Tracking an entire developmental dataset with many 3D images can take months of manual curation effort.
The 3D images from which the lineage trees are created are usually fluorescence microscopy images. Such fluorescence images do not have well-defined intensity scales, and intensities might vary strongly even within single cells. Cells also move around, divide, change their shape---sometimes drastically---or might die. Cells might also not appear alone, and may move through densely-populated tissue, making it difficult to tell one cell apart from another. These three issues are the main reasons that make the task of tracking cells so difficult. Further complicating the situation, recent advances in fluorescence microscopy, such as the advent and widespread use of lightsheet microscopy \cite{Huisken:2004ky}, have led to a large increase in size of the images, with datasets growing from about a gigabyte to several terabytes for long-term timelapse images \cite{Reynaud:2015dx}.
In this work, we reduce the effort needed to track cells through time series of 3D images by introducing \emph{Bionic Tracking}, a method that uses smooth pursuit eye movements as detected by eye trackers inside a virtual reality head-mounted display (HMD) to render cell tracking and track curation tasks easier, faster, and more ergonomic. Instead of following a cell by point-and-click, users simply have to look at a cell in Virtual Reality (VR) in order to track it.
The main contributions we present here are: \begin{itemize} \item A setup for interactively tracking cells by simply following the cell in a 3D volume rendering with the eyes, using a virtual reality headset equipped with eye trackers, \item an iterative, graph-based algorithm to connect gaze samples over time with cells in volumetric datasets, addressing both the problems of occlusion and user distraction, and \item a user study evaluating the setup and the algorithm with seven cell tracking experts \end{itemize} \section{Related Work} \label{sec:RelatedWork} The main problem we address in this paper is the manual curation or tracking step, which is necessary for both validation and for handling cases where automatic tracking produces incorrect or no results. In this section, we give a brief overview of (semi-)automatic tracking algorithms, then continue with relevant work from the VR, visualization, and eye tracking communities. Historically, software for solving tracking problems was developed for a specific model organism, such as for the roundworm \emph{Caenorhabditis elegans}, the fruitfly \emph{Drosophila melanogaster}, or the zebrafish \emph{Danio rerio} --- all highly studied animals in biology --- and relied on stereotypical developmental dynamics within an organism in order to succeed in tracking cells. This approach however either fails entirely or produces unreliable results for other organisms, or for organisms whose development is not as stereotyped. For that reason, (semi-)automated approaches have been developed that are independent of the model organism and can track large amounts of cells, but often require manual tracking of at least a subset of the cells in a dataset. Examples of such frameworks are: \begin{itemize} \item \emph{TGMM}, Tracking by Gaussian Mixture Models \cite{amat2014, ckendorf:2015ch}, is an offline tracking solution that works by generating oversegmented supervoxels from the original image data, then fit all cell nuclei with a Gaussian Mixture Model and evolve that through time, and finally use the temporal context of a cell track to create the lineage tree. \item \emph{TrackMate} \cite{tinevez2017} is a plugin for Fiji \cite{schindelin2012fiji} that provides automatic, semi-automatic, and manual tracking of single particles in image datasets. TrackMate can be extended with custom spot detection and tracking algorithms. \item \emph{MaMuT}, the Massive MultiView Tracker \cite{wolff2018}, is another plugin for Fiji that allows the user to manually track cells in large datasets, often originating from multi-view lightsheet microscopes. MaMuT's viewer is based on BigDataViewer \cite{Pietzsch:2015hl} and is able to handle terabytes of data. \end{itemize} All automated approaches have in common that they need manual curation as a final step, as they all make assumptions about cell shapes, modelling them, e.g., as blobs of Gaussian shape, as in the case of TGMM. Manual tracking and curation is usually done with mouse-and-keyboard interaction to select a cell and create a track, often while just viewing a single slice of a 3D time point of the dataset. In Bionic Tracking, we replace this interaction by leveraging the user's gaze in a virtual reality headset, while the user can move freely around or in the dataset. 
Gaze in general has been used in human-computer interaction for various interactions: It has been used as an additional input modality in conjunction with touch interaction \cite{Stellmach:2012Looka} or pedaling \cite{Klamka:2015ka}, and for building user interfaces, e.g., for text entry \cite{Lutz:2015ga}. The particular kind of eye movements we exploit for Bionic Tracking---\emph{smooth pursuits}, where the eyes follow a stimulus in a smooth, continuous manner---has not yet been explored exhaustively for interacting with 3D or VR content. Applications can be found mainly in 2D interfaces, such as in \cite{Kosch:2018Your}, where the authors use deviations from smoothness in smooth pursuits to evaluate cognitive load; or in \cite{Vidal:2013Pursuits}, where smooth pursuits are used for item selection in 2D user interfaces. For smooth pursuits in VR, we are only aware of two works, \cite{piumsomboon2017} and \cite{Khamis:2018VRpursuits}: In the first, the authors introduce \emph{Radial Pursuit}, a technique where the user can select an object in a 3D scene by tracking it with her eyes, and it will become more ``lensed-out'' the longer she focuses on a particular object. In the latter, the authors explore target selection using smooth pursuits, perform a user study, and make design recommendations for smooth pursuit-based VR interfaces. All aforementioned works are only concerned with navigation or selection tasks on structured, geometric data. In Bionic Tracking however, we use smooth pursuits to track cells in unstructured, volumetric data that cannot simply be queried for the objects contained or their positions.
In the context of biomedical image analysis, VR has been applied successfully, e.g., for virtual colonoscopy \cite{Mirhosseini:2019Immersive} and for tracing of neurons in connectome data \cite{Usher:2017bda}. In the latter, the authors show the neurons in VR in order to let the user trace them with a handheld controller. The authors state that this technique resulted in faster and better-quality annotations. Tracking cells using handheld VR controllers is an alternative to gaze, but could place higher physical strain on the user.
\section{The Bionic Tracking Approach}
For Bionic Tracking, we exploit smooth pursuit eye movements. Smooth pursuits are the only smooth movements performed by our eyes. They occur when following a stimulus, and cannot be triggered without one \cite{Duchowski:2017ii}. Instead of using a regular 2D screen, we perform the cell tracking process in VR, since VR gives the user improved navigation and situational awareness compared to 2D when exploring a complex 3D/4D dataset \cite{Slater:2016552}. In addition, the HMD tracking data can be used to impose constraints on the data acquired from the eye trackers. In order to remove outliers from the gaze data, one can calculate the quaternion distance between the eyeball rotation and the head rotation, which is physiologically limited: a 90-degree angle between eye direction and head direction is not plausible, and head movement follows eye movement via the vestibulo-ocular reflex (a minimal sketch of such a filter is given at the end of this section).
As a system consisting of both a VR HMD and an integrated eye tracking solution might be perceived as too complex, we start by explaining why we think that only using one of the technologies would not solve the problem:
\begin{itemize}
\item \emph{Without eye tracking}, the head orientation from the HMD could still be used as a cursor. However, following small and smooth movements with the head is not something humans are used to doing.
The eyes always lead the way, and the head follows via the vestibulo-ocular reflex. \item \emph{Without virtual reality}, the effective space in which the user can use to follow cells around becomes restricted to the rather small part of the visual field a regular screen occupies. The user furthermore loses the ability to move around freely without an additional input modality, e.g. to avoid obstacles (in our case, those might be cells not tracked at the moment). As an alternative to HMDs, a system using large screens or projectors, such as Powerwalls or CAVEs, could be used, but increases the technical complexity. \end{itemize} \subsection{Hardware selection} We have chosen the HTC Vive as HMD, as it is comfortable to wear, provides good resolution, and an excellent tracking system for room-scale VR experiences. Furthermore, it is usable with the SteamVR/OpenVR API. For eye tracking, we have chosen the \emph{Pupil} eye trackers produced by Pupil Labs \cite{Kassner:2014kh}, as they provide both an open-source software and competitively-priced hardware that is simple to integrate physically into off-the-shelf HMDs. The software is available as LGPL-licensed open-source code and can be extended with custom plugins. In addition to being open-source, the \emph{Pupil} software makes the measured gaze data and image frames available to external applications via a simple ZeroMQ- and MessagePack-based protocol\footnote{See \url{https://docs.pupil-labs.com/developer/core/network-api/} for details on interacting with Pupil over the network.}---in contrast to closed-source proprietary libraries required by other products---which enables using the eye tracking data in a local application or even over the network. Alternative solutions, like the HTC Vive Pro Eye, or an HTC Vive with integrated Tobii eye tracker were either not available at the time this project started, or were much more expensive. \subsection{Software framework} We have developed Bionic Tracking using the visualization framework \textit{scenery} \cite{Gunther:2019scenerya}, as it supports rendering of mesh data simultaneously with multi-timepoint volumetric data that contains the cells or nuclei to be tracked. Crucially for Bionic Tracking, scenery supports rendering to all SteamVR/OpenVR-supported VR HMDs and supports the Pupil eye trackers. In addition, scenery runs on the Java VM and is interoperable with the image analysis toolkit Fiji, just as the existing tracking tools \emph{TrackMate} and \emph{MaMuT} (see \cref{sec:RelatedWork}). \begin{figure} \vspace{-1.25\baselineskip} \centering \includegraphics[width=\textwidth]{cell-shapes.pdf} \caption{Some example nucleus shapes encountered in our \emph{Platynereis} test dataset. \label{fig:NucleusShapes}} \vspace{-3\baselineskip} \end{figure} \subsection{Rendering} We use simple, alpha blending-based volume rendering for displaying the data in the VR headset using scenery's Vulkan backend. While more advanced algorithms for volume rendering exist which provide a higher visual quality (e.g. Metropolis Light Transport \cite{Kroes:2012bo}), achieving a high and ideally consistent framerate is important for VR applications, which led us to choose alpha blending. For the data used in this work, we have only used in-core rendering, while the framework also supports out-of-core volume rendering for even larger datasets. 
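As a concrete illustration of the head/eye-angle constraint mentioned at the beginning of this section, the following minimal Python sketch shows how implausible gaze samples could be discarded. It is a simplified illustration under assumed data layouts (unit quaternions stored as $[w, x, y, z]$, a per-sample confidence value reported by the eye tracking software) and assumed thresholds, not the filter of our actual implementation.
\begin{verbatim}
import numpy as np

def quaternion_angle(q_head, q_eye):
    """Rotation angle (radians) between two orientations given as unit
    quaternions in [w, x, y, z] order; the absolute dot product handles
    the double cover of the rotation group."""
    dot = abs(float(np.dot(q_head, q_eye)))
    return 2.0 * np.arccos(np.clip(dot, 0.0, 1.0))

def filter_gaze_samples(samples, max_angle_deg=90.0, min_confidence=0.6):
    """Drop gaze samples whose eye orientation deviates implausibly from the
    head orientation, or whose tracker confidence is low (thresholds are
    illustrative assumptions)."""
    kept = []
    for s in samples:  # each sample: {'head_q': ..., 'eye_q': ..., 'confidence': ...}
        angle = np.degrees(quaternion_angle(np.asarray(s["head_q"]),
                                            np.asarray(s["eye_q"])))
        if angle <= max_angle_deg and s["confidence"] >= min_confidence:
            kept.append(s)
    return kept
\end{verbatim}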
To the user, we not only display the volume on its own, but a gray, unobtrusive box for spatial anchoring around the volume (see the supplementary video for an impression of how this looks). \section{Tracking Cells with Bionic Tracking} \subsection{Preparation} After putting on the VR HMD, making sure the eye tracker's cameras can see the user's eyes and launching the application, the calibration routine needs to be run first in order to establish a mapping between the user's gaze and world space positions in the VR scene. For calibration, we show the user a total of 18 white spheres, with 5 of them layered on three circles 1\,m apart (distances in the VR scene are the same as in the physical world). The radius of the circles increases with each layer to achieve a good coverage of the field of view. In addition to the spheres on the circles, we show three spheres in the center of the circles to also cover the area in the center of the field of view. During the calibration routine, the user has to look at these spheres as they are shown in the HMD. Since the calibration targets follow the head movements of the user, the user does not need to stay still. At the end of the calibration, the user will be notified of success or failure, and can repeat the calibration process if necessary. Calibration typically needs to be run only once per session, and can then be used to track as many cells as the user likes. Exceptions are if there is significant slippage or if the HMD is removed during the session. Our calibration routine is mostly similar to the one used in \emph{Pupil's} HMDeyes Unity example project\footnote{See \url{https://github.com/pupil-software/hmd-eyes} for details.}. Movement in VR can be performed either physically, or via buttons on the handheld controllers, which additionally allow control of the following functions (handedness can be swapped, default bindings shown in Supp. Fig.~\ref{T2TControls}): \begin{itemize} \setlength{\itemsep}{1.5pt} \setlength{\parskip}{2pt} \item move the dataset by holding the left-hand trigger and moving the controller, \item use the directional pad on the left-hand controller to move the observer (forward, backward, left, or right -- with respect to the direction the user is looking to), \item start and stop tracking by pressing the right-hand side trigger, \item deleting the most recently created track by pressing the right-side button, and confirming within three seconds with another press of the same button, \item play and pause the dataset over time by pressing the right-hand menu button, \item play the dataset faster or slower in time by pressing the right-hand directional pad up or down, and \item stepping through the timepoints of the dataset one by one, forward or backward, by pressing the right-hand directional pad left or right. \end{itemize} When the dataset is not playing, the user can also use the directional pad on the right-hand controller to scale the dataset. The initial setting for the scale of the dataset is to make it appear about 2m big. \subsection{Tracking Process} After calibration, the user can position herself freely in space. 
To track a cell, the user performs the following steps: \begin{enumerate} \setlength{\itemsep}{1.5pt} \setlength{\parskip}{2pt} \item Find the timepoint and cell with which the track should start, adjust playback speed between one and 20 volumes/second, and start to look at the cell or object of interest, \item start playback of the multi-timepoint dataset, while continuing to follow the cell by looking at it, and maybe moving physically to follow the cell around occlusions, \item end or pause the track at the final timepoint. Tracking will stop automatically when playback as reached the end of the dataset, and the dataset will play again from the beginning. \end{enumerate} In order to minimize user strain in smooth pursuit-based VR interactions, the authors of \cite{Khamis:2018VRpursuits} have provided design guidelines: They suggest large trajectory sizes, clear instructions what the user has to look at, and relatively short selection times. While physical cell size cannot be influenced, the controls available to the user enable free positioning and zooming. The selection time, here the tracking time, of course depends on the individual cell to be tracked, but as the tracking can be paused, and the playback speed adjusted, the user is free to choose both a comfortable length and speed. During the tracking procedure, we collect the following data for each timepoint: \begin{itemize} \setlength{\itemsep}{1.5pt} \setlength{\parskip}{2pt} \item the entry and exit points of the gaze ray through the volume in normalised volume-local coordinates, i.e., as a vector $\in [0.0, 1.0]^3$, \item the confidence rating -- calculated by the \emph{Pupil} software -- of the gaze ray, \item the user's head orientation and position, \item the timepoint of the volume, and \item a list of sampling points with uniform spacing along the gaze ray through the volume and the actual sample values on these points calculated by trilinear interpolation from the volume image data. \end{itemize} We call a single gaze ray including the above metadata a \emph{spine}. The set of all spines for a single track over time we call a \emph{hedgehog} -- due to its appearance, see Supp. Fig.~\ref{hedgehog}. By collecting the spines through the volume, we are effectively able to transform each 3-dimensional cell localization problem into a 1-dimensional one along a single ray through the volume and create a cell track. This analysis procedure is explained in detail in the next section. \section{Analysis of the Tracking Data} In previous applications using smooth pursuits (such as in \cite{Vidal:2013Pursuits,piumsomboon2017}), the tracked objects were geometric and not volumetric in nature, and therefore well-defined in 2D or 3D space with their extents and shape fully known. In our analysis in contrast, we use the indirect information about the objects contained in spines and hedgehogs to find the tracked object in unstructured volumetric data and follow it. After a full hedgehog has been collected to create a new cell track, all further analysis is done solely on the data contained in this hedgehog. To illustrate the analysis, it is useful to visualize a hedgehog in two dimensions by laying out all spines in a 2D plane next to each other (see \cref{fig:labelledHedgehog}). In this plane, time advances along the X axis and depth through the volume along a given spine is on the Y axis. Note that each line parallel to the Y axis represents one spine and therefore one gaze sample, of which we collect up to 60 per second. 
In \cref{fig:labelledHedgehog}, this led to 1614 spines with 16 spines per image timepoint on average collected within 30 seconds. In the figure, we have highlighted the local intensity maximum along each spine in red. The track of the cell the user was following is then mostly visible. \begin{figure}[h] \includegraphics[width=\textwidth]{hedgehog-annotated.pdf} \caption{A hedgehog visualized in 2D, with nearest local maxima marked in red. Each vertical line is one spine of the hedgehog with the observer sitting at the bottom. On the X axis, time runs from left to right, and is counted in gaze samples taken. After every 500 spines, a dotted white line is shown at 500, 1000, and 1500 spines recorded. The gray line shortly before 500 spines is the line whose profile is shown in Supp. Fig.~\ref{T2TExampleRay}. The discontinuities in the local maxima A and B have different origins: For A, the user seems to have moved further away, resulting in a gap, while for B, another cell appeared closely behind the tracked one and might have mislead the user, leaving it for the algorithm to filter out. See text for details.\label{fig:labelledHedgehog}} \vspace{-1.25\baselineskip} \end{figure} \subsection{Graph-based temporal tracking} \label{sec:graphbasedtemporaltracking} Movements of the user and temporary occlusion by other cells or objects render it challenging to reliably extract a space-time trajectory from the information contained in the hedgehog. In order to reliably link cell detections across timepoints, we use an incremental graph-based approach based on all spines that have local maxima in their sample values. A plot of an exemplary spine through a volume is shown in Supp. Fig.~\ref{T2TExampleRay}. In the figure, the distance from the observer in voxels along the spine is shown on the X axis, while the Y axis shows the intensity value of the volume data at that point along the spine. To initialize the algorithm, we assume that when starting a track the user looks at an unoccluded cell that is visible as the nearest local maximum along the spine. In Supp. Fig.~\ref{T2TExampleRay} that would be the leftmost local maximum. \begin{figure}[h] \includegraphics[width=\columnwidth]{t2t-algorithm.pdf} \caption{A graphical illustration of the incremental graph-search algorithm used to extract tracks from a hedgehog. Time runs along the X axis. $\mathrm{spine}_1$ contains the initial seed point where to start tracking. The algorithm is currently at $\mathrm{spine}_4$, determining how to proceed to $\mathrm{spine}_5$. In this case, the middle track with $\mathrm{dist}=1$ wins, as it is the shortest world-space distance away from the current point. The algorithm will continue the path search until it has reached the last spine, $\mathrm{spine}_n$. In this manner, the algorithm closes the gaps around the sample numbers 700 and 1200 in Figure~\ref{fig:labelledHedgehog}, and leaves out the detected cells further along the individual rays. $\mathrm{spine}_3$ is connected initially, but removed in the final statistical pruning step. It is therefore grayed out. See text for details. \label{fig:T2TAlgorithm}} \vspace{-1.25\baselineskip} \end{figure} For each timepoint, we have collected a variable number of spines, whose count varies between 0 and 120; zero spines might be obtained in case that the user closes her eyes, or that no detection was possible for other reasons, and 120 Hz is the maximum frame rate of the eye trackers used. 
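To make the per-sample data described in the previous section concrete, the following sketch shows one possible layout of a single spine and how the sample values along the gaze ray can be obtained by trilinear interpolation. It is an illustrative simplification assuming a NumPy volume and SciPy's \texttt{map\_coordinates}; the number of sampling points is an assumption, and our actual implementation is part of the scenery-based prototype rather than Python.
\begin{verbatim}
from dataclasses import dataclass
import numpy as np
from scipy.ndimage import map_coordinates

@dataclass
class Spine:
    """One gaze ray through the volume at a given timepoint (illustrative layout)."""
    timepoint: int
    confidence: float
    entry: np.ndarray    # ray entry point, normalised volume-local coords in [0,1]^3
    exit: np.ndarray     # ray exit point,  normalised volume-local coords in [0,1]^3
    samples: np.ndarray  # interpolated intensity values along the ray

def sample_spine(volume, entry, exit, timepoint, confidence, n_samples=256):
    """Sample the volume along the gaze ray with uniform spacing and
    trilinear interpolation (order=1)."""
    ts = np.linspace(0.0, 1.0, n_samples)
    points = entry[None, :] + ts[:, None] * (exit - entry)[None, :]  # (n, 3) in [0,1]^3
    voxel = (points * (np.array(volume.shape) - 1)).T                # (3, n) voxel coords
    values = map_coordinates(volume, voxel, order=1, mode="nearest")
    return Spine(timepoint, confidence, entry, exit, values)

# A hedgehog is then simply the list of all spines recorded while following one cell.
\end{verbatim}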
In order to correctly track a cell across spines over time, and after the initial seed point on the first spine has been determined, we step through the spines in the hedgehog one by one, performing the following operations, as illustrated in \cref{fig:T2TAlgorithm}: \begin{enumerate} \setlength{\itemsep}{1.5pt} \setlength{\parskip}{2pt} \item advance to the next spine in the hedgehog, \item find the indices of all local maxima along the spine, ordered by world-space distance to the selected point from the previous spine, \item connect the selected point from the previous spine with the closest (in world-space distance) local maximum in the current spine, \item calculate the world-space position of the new selected point, and \item add the selected point to the set of points for the current track. \end{enumerate} In addition to connecting discontinuities in the local maxima detected (discontinuity A in \cref{fig:labelledHedgehog}) world-space distance weighting also excludes cases where another cell is briefly moving close to the user and the actually tracked cell (discontinuity B in \cref{fig:labelledHedgehog}). The process of connecting a local maximum to the nearest one at a later time is a variant of \emph{dynamic fringe-saving A*} search on a grid \cite{sun2009} with all rays extended to the maximum length in the entire hedgehog along the X axis, and time increasing along the Y axis. This strategy constructs a cell track from the spines of each hedgehog. The calculation of the final track typically takes less than a second and is visualised right away, such that the user can quickly decide whether to keep it, or discard it. \subsection{Handling Distraction and Occlusions} In some cases, however, world-space distance weighting is not enough, and a kind of Midas touch problem \cite{Jacob:1995Eye} remains: When the user briefly looks somewhere else than at the cell of interest, and another local maximum is detected there, that local maximum may indeed have the smallest world-space distance and win. This would introduce a wrong link in the track. Usually, the Midas touch problem is avoided by resorting to multimodal input (see, e.g., \cite{Stellmach:2012Looka,Meena:2017bn}). Here, we aim to avoid the Midas touch problem without burdening the user with additional modalities of control. We instead use statistics: for each vertex distance $d$, we calculate the z-score $Z(d) = \left( d - \mu_\mathrm{dist}\right)/\sigma_{\mathrm{dist}}$, where $\mu_\mathrm{dist}$ is the mean distance in the entire hedgehog and $\sigma_\mathrm{dist}$ is the standard deviation of all distances in the entire hedgehog. We then prune all graph vertices with a z-score higher than 2.0. This corresponds to distances larger than double the standard deviation of all distances the hedgehog. Pruning and graph calculations are repeated iteratively until no vertices with a z-score higher than 2.0 remain, effectively filtering out discontinuities like B in \cref{fig:labelledHedgehog}. 
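The following Python sketch summarizes the linking and pruning steps in a strongly simplified form: a greedy nearest-local-maximum linking stands in for the dynamic fringe-saving A* search used in our implementation, and the candidate positions per spine are assumed to already be given in world space. Function names and the handling of empty spines are illustrative assumptions.
\begin{verbatim}
import numpy as np
from scipy.signal import argrelextrema

def local_maxima_positions(samples, entry_w, exit_w):
    """World-space positions of the local intensity maxima along one spine,
    given its sample values and world-space entry/exit points."""
    idx = argrelextrema(samples, np.greater)[0]
    ts = idx / (len(samples) - 1)
    return entry_w[None, :] + ts[:, None] * (exit_w - entry_w)[None, :]

def link_track(candidates_per_spine, seed):
    """Connect, spine by spine, the local maximum closest (in world space) to the
    previously selected point; then iteratively prune links whose length has a
    z-score above 2, as described above."""
    track = [np.asarray(seed)]
    for cands in candidates_per_spine[1:]:
        if len(cands) == 0:          # e.g. a blink: no detection for this spine
            continue
        d = np.linalg.norm(cands - track[-1], axis=1)
        track.append(cands[np.argmin(d)])
    track = np.array(track)

    while len(track) > 2:            # statistical pruning (Midas-touch handling)
        d = np.linalg.norm(np.diff(track, axis=0), axis=1)
        z = (d - d.mean()) / (d.std() + 1e-8)
        bad = np.where(z > 2.0)[0]
        if len(bad) == 0:
            break
        track = np.delete(track, bad + 1, axis=0)  # drop the far end of each outlier link
    return track
\end{verbatim}
In the actual system, the same pruning criterion is applied to the vertices of the search graph and the graph computation is repeated, rather than pruning an already linked polyline as in this simplified sketch.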
\section{Proof of concept} \label{sec:ProofOfConcept} We demonstrate the applicability of the method with two different datasets: \begin{itemize} \item A developmental 101-timepoint dataset of a \emph{Platynereis dumerilii} embryo, an ocean-dwelling ringworm, acquired using a custom-built OpenSPIM \cite{Pitrone:2013ki} lightsheet microscope, with cell nuclei tagged with the fluorescent GFP protein (16bit stacks, 700x660x113 pixel, 100MB/timepoint, 9.8 GByte total size), \item A 12-timepoint dataset of \emph{MDA231} human breast cancer cells, embedded in a collagen matrix and infected with viruses tagged with the fluorescent GFP protein, acquired using a commercial Olympus FluoView F1000 confocal microscope (dataset from the Cell Tracking Challenge \cite{Ulman:2017objective}, 16 bit TIFF stacks, 512x512x30 pixels, 15MB/timepoint, 98 MByte total size). \end{itemize} The \emph{Platynereis} dataset was chosen because it poses a current research challenge, with all tested semiautomatic algorithms failing on this dataset, due to the diverse nuclei shapes and cell movements. Examples of shapes encountered in the dataset are shown in \cref{fig:NucleusShapes}. The MDA231 dataset in turn was chosen because it had the worst success scores for automatic tracking methods on the \emph{\href{https://celltrackingchallenge.net}{celltrackingchallenge.net}} website due to the diversity of cell shapes and jerky movements in the dataset. For the \emph{Platynereis} dataset, we were able to quickly obtain high-quality cell tracks using our prototype system. A visualization of one such cell track is shown in Supplementary Figure \ref{T2TTracksPlatynereis}. In the companion video, we show both the gaze tracking process to create the track and a visualization showing all spines used to generate the track. For the MDA231 dataset, we are able to obtain tracks for six moving cells in the dataset in about 10 minutes. A visualization of these tracks is shown in Supp. Fig.~\ref{T2TTracksMDA}; see the companion video for a part of the tracking process. This example also demonstrates that the Bionic Tracking technique is useful even on nearly ``flat'' microscopy images in VR, as the dataset only has 30 Z slices, compared to a resolution of 512x512 in X and Y. All datasets are rendered at their full resolution, with a typical framerate of 60-90fps. \section{Evaluation} We evaluated Bionic tracking by first performing a user study to gain insight into user acceptance and feasibility. We then compared tracks created with Bionic Tracking to the manually annotated ground truth. Together, these evaluations serve as an initial characterization of the usability and performance of Bionic Tracking. \subsection{User Study} \label{sec:EvaluationUserStudy} We recruited seven cell tracking experts who were either proficient with manual cell tracking tasks in biology, proficient in using or developing automated tracking algorithms, or both (median age 36, s.d. 7.23, 1 female, 6 male) to take part in the study. The users were given the task to track arbitrary cells in the \emph{Platynereis} dataset already used in \cref{sec:ProofOfConcept}. One of the users was already familiar with this particular dataset. The study was conducted on a Dell Precision Tower 7910 workstation (Intel Xeon E5-2630v3 CPU, 8 cores, 64 GB RAM, GeForce GTX 1080Ti GPU) running Windows 10, build 1909. Before starting to use the software, all users were informed of the goals and potential risks (e.g., simulator sickness) of the study. 
With a questionnaire, they were asked for presence of any visual or motor impairments (apart from needing to wear glasses or contact lenses, none were reported), about previous VR experience and physical wellbeing. After using the software, users were again asked about their physical wellbeing, and had to judge their experience using the NASA Task Load Index (TLX, \cite{Hart:1988tlx}) and Simulator Sickness Questionnaire (SSQ, \cite{kennedy1993}). In addition, they were asked both qualitative and quantative questions about the software based on both the User Experience Questionnaire \cite{Laugwitz:2008Construction} and the System Usability Scale \cite{Brooke:1996SUS}. We concluded the study for each participant with a short interview where users were asked to state areas of improvement, and what they liked about the software. The full questionnaire used in the study is available in the supplementary materials. After filling the pre-study part of the questionnaire, users were given a brief introduction to the controls in the software. After ensuring a good fit of the HMD on the user's head, the interpupillary distance (IPD) of the HMD was adjusted to the user's eyes, as were the ROIs of the eye tracking cameras. The users then ran the calibration routine on their own. Then, they were able to take time to freely explore the dataset in space and time. If the calibration was found to not be sufficiently accurate, we re-adjusted HMD fit and camera ROIs, and ran the calibration routine again. Finally, all users were tasked with tracking the cells in the \emph{Platynereis} dataset. Users were then able to create cell tracks freely, creating up to 32 cell tracks in 10 to 29 minutes. All participants in the study had no or very limited experience with using VR interfaces (5-point scale, 0 means no experience, and 4 daily use: mean 0.43, s.d. 0.53), and only one had previously used any eye-tracking-based user interfaces before (same 5-point scale: mean 0.14, s.d. 0.37). \subsection{User Study Results} The average SSQ score was $25.6 \pm 29.8$ (median $14.9$), which is on par with other VR applications that have been evaluated using SSQ (see, e.g., \cite{Singla:2017Measuring}). From TLX, we used all categories (mental demand, physical demand, temporal demand, success, effort, insecurity), on a 7-point scale where 0=Very Low and 6=Very High for the demand metrics, and 0=Perfect, 6=Failure for the performance metrics. Users reported medium scores for mental demand ($2.71 \pm 1.70$) and for effort ($2.86 \pm 1.68$), while reporting low scores for physical demand ($1.86 \pm 1.95$), temporal demand ($1.57 \pm 0.98$), and insecurity ($1.14 \pm 1.68$). The participants judged themselves to have been rather successful with the tracking tasks ($1.71 \pm 0.75$). All questions asked related to software usability and acceptance are summarised in \cref{fig:StudyAnswers}. The users estimated that the Bionic Tracking method would yield a speedup of a factor 2 to 10 ($3.33 \pm 6.25$) compared to tracking cells with a regular 2D interface, and expressed high interest in using the method for their own tracking tasks ($3.43 \pm 0.53$; 5-point scale here and for the following: 0=No agreement, 4=Full agreement), as the tracks created by it looked reasonable ($2.57 \pm 0.98$), it would provide an improvement over their current methods ($3.14 \pm 0.90$), and they could create new cell tracks not only with confidence ($2.86 \pm 0.69$), but also faster ($3.29 \pm 0.76$). 
Users found the software relatively intuitive ($2.43 \pm 0.98$) and did not need a long time to learn how to use it ($0.59 \pm 0.79$), which they also remarked on in the follow-up interviews: \begin{displayquote} ``It was so relaxing, actually, looking at this [cell] and just looking.'' (P2; the user further remarked after the interview that the technique might prevent the carpal tunnel issues often encountered when tracking via mouse and keyboard.) \end{displayquote} \begin{displayquote} ``I figured this could be like a super quick way to generate the [cell] tracks.'' (P7) \end{displayquote} Furthermore, the user study showed that users tend to adjust playback speed more often than image size (in VR). After experimenting with different settings -- users could choose speeds from 1 to 20 volumes/second -- all users interestingly settled on 4-5 volumes/second, corresponding to 200 to 250\,ms of viewing time per timepoint, which coincides with the onset delay of smooth pursuit eye movements. Despite having no or only limited previous VR experience, the users did not feel irritated by the environment ($0.00 \pm 0.00$) nor by the use of eye tracking ($0.29 \pm 0.49$). \begin{figure}[h] \begin{subfigure}[b]{0.48\textwidth} \includegraphics[width=\textwidth]{figures/study-answers.pdf} \caption{Results of the usability and acceptance-related questions from the user study. Please note that the questions are formulated both positively and negatively.\label{fig:StudyAnswers}} \end{subfigure} \hfill \begin{subfigure}[b]{0.48\textwidth} \includegraphics[width=\textwidth]{figures/52tracks.png} \caption{The 52 tracks we used for comparison with manual tracking results, visualised together with the volumetric data of one timepoint. This is the same view the user had, taken from within the VR headset. See the supplementary video for a demonstration of creating these tracks.\label{fig:52tracks}} \end{subfigure} \caption{User study and cell tracking results for the \emph{Platynereis} dataset.} \vspace{-1.25\baselineskip} \end{figure} \subsection{Comparison with Manual Tracking Results} \label{sec:EvaluationComparison} To further characterize the performance of Bionic Tracking, we performed a comparison to manually annotated tracks. Our primary focus in this comparison is to assess the capacity of Bionic Tracking to recreate individual manually annotated tracks. We compared 52 tracks created by an expert annotator using Bionic Tracking (see \cref{fig:52tracks}) on the \textit{Platynereis} dataset to their respective best matching ground truth tracks. We find that 25 of the 52 tracks have a distance score \cite{Ulman:2017objective} of less than 1 cell diameter, suggesting that these tracks will, on average, intersect the volume of their corresponding cell. \section{Discussion} We were able to show that gaze in VR can be used to reconstruct tracks of biological cells in 3D microscopy. Our method not only accelerates the process, but also makes manual tracking tasks easier and less demanding. Although our expert-based user study was rather small in size, limiting its statistical power, we believe that it provides an indication that the use of Bionic Tracking can improve the user experience and speed for cell tracking tasks, and that developing it further is worthwhile. Even though the users had little previous VR experience, they were quickly able to create cell tracks with high confidence.
Multiple users complimented the ergonomics of the technique, although it remains to be seen whether this would still be the case for longer (1h+) tracking sessions. With the projected speedups, however, it might not even be necessary to have such long sessions anymore: users indicated that for manual tracking, they would not do sessions longer than 3 to 4 hours; with the estimated speedups, this could potentially be reduced to just 20 to 90 minutes using Bionic Tracking. For tracking large lineages comprising thousands of cells, Bionic Tracking on its own is going to be cumbersome for combinatorial reasons. It can, however, augment existing techniques for parts of the tracking process, e.g., to track cells only in early stages of development, where they tend to have less well-defined shapes, or it may provide constraints and training data for machine-learning algorithms used in automated tracking. Furthermore, Bionic Tracking could be used in conjunction with any automatic tracking algorithm that provides uncertainty scores, in order to restrict gaze input to regions where the algorithm's uncertainty exceeds a given threshold. This could be done, e.g., by superimposing a heatmap on the volume rendering to indicate to the user areas that need additional curation. Hybrid semi-automated/manual approaches are already among the most popular tools for challenging biological datasets \cite{Winnubst:2019Reconstruction}. \section{Future Work and Limitations} In the future, we would like to integrate Bionic Tracking into existing tracking software, such that it can be used by a general audience. Unfortunately, eye tracking-enabled HMDs are not yet widely available, but according to current announcements, this is likely to change. Current developments in eye tracking hardware and VR HMDs indicate falling prices in the near future, such that those devices might soon become more common, or even directly integrated into off-the-shelf HMDs. An institute could, for example, provide one or two eye tracking-enabled HMDs as shared, bookable equipment for its users. At the moment, the calibration of the eye trackers can still be error-prone, but this is likely to improve in the future as well, with machine learning approaches making the process faster, more reliable, and more user-friendly. In order for Bionic Tracking to become a tool that can be routinely used for research in biology, it will be necessary to implement interactions that allow the user to indicate certain events, like cell divisions. Such an interaction could, for example, consist of the user pressing a certain button whenever a cell division occurs, and then continuing to track until the next cell division. In this way, the user can skip from cell division to cell division, literally applying divide-and-conquer to track (a part of) the cell lineage tree at hand. These additional features will enable the creation of entire cell lineage trees. The design and evaluation of algorithms to detect and track entire lineage trees is currently an active focus in the systems biology community \cite{Ulman:2017objective}. In this study, we have used comparison algorithms from the Particle Tracking Challenge (PTC) \cite{Chenouard:2014Objective}, which were designed to compare single tracks. There are limitations when applying the PTC metric to compare cell tracking annotations.
However, until additional tracking events---such as the aforementioned cell divisions---can be recorded with Bionic Tracking, PTC is the only metric that can be applied. In our tests, we have still seen some spurious detections, which led to tracks obviously not taken by the cell. This calls for more evaluations within crowded environments: while Bionic Tracking seems well suited for crowded scenes in principle -- as users can, e.g., move around corners and are tracked by the HMD -- it is not yet clear whether eye tracking is precise enough in such situations. In addition, head tracking data from the HMD could be used to highlight the area of the volumetric dataset the user is looking toward (foveated rendering, \cite{levoy1990, bruder2019}), e.g., by dimming areas the user is not looking at. We have not yet explored foveation, but we could imagine that it might improve tracking accuracy and reduce mental load. \section{Conclusion} We have presented \emph{Bionic Tracking}, a new method for object tracking in volumetric image datasets, leveraging gaze data and virtual reality HMDs for biological cell tracking problems. Our method is able to augment the manual parts of cell tracking tasks in order to render them faster, more ergonomic, and more enjoyable for the user, while still generating high-quality tracks. Users estimated they could perform cell tracking tasks up to 10-fold faster with Bionic Tracking than with conventional, manual tracking methods. As part of Bionic Tracking, we have introduced a method for graph-based temporal tracking, which allows gaze samples to be robustly connected with cell or object detections in volumetric data over time. The results from our research prototype have been very encouraging, and we plan to continue this line of research with further studies, extending the evaluation to more datasets and users, and adding an evaluation of the accuracy of the created cell tracks on datasets that have known associated ground truth. Furthermore, we would like to add Bionic Tracking to a pipeline where the gaze-determined cell tracks can be used to train machine-learning algorithms to improve automatic tracking results. Our prototype software is available as open-source software at \emph{\href{https://github.com/scenerygraphics/bionic-tracking}{github.com/scenerygraphics/bionic-tracking}}. \section*{Acknowledgements} The authors thank all participants of the user study. Thanks to Mette Handberg-Thorsager for providing the \emph{Platynereis} dataset and for feedback on the manuscript. Thanks to Vladimir Ulman and Jean-Yves Tinevez for helpful discussions regarding track comparison. Thanks to Bevan Cheeseman, Aryaman Gupta, and Stefanie Schmidt for helpful discussions. Thanks to Pupil Labs for help with the eye tracking calibration. This work was partially funded by the Center for Advanced Systems Understanding (CASUS), financed by Germany’s Federal Ministry of Education and Research (BMBF) and by the Saxon Ministry for Science, Culture and Tourism (SMWK) with tax funds on the basis of the budget approved by the Saxon State Parliament. R.D. and I.F.S. were supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany’s Excellence Strategy – EXC-2068 – 390729961 – Cluster of Excellence Physics of Life of TU Dresden.
\bibliographystyle{abbrv-doi-hyperref} \bibliography{bionictracking} \ifdefined\preprint \clearpage \section*{Supplementary Material} \nopagebreak \renewcommand\thefigure{S.\arabic{figure}} \setcounter{figure}{0} \begin{figure}[h] \includegraphics[width=\textwidth]{figures/vive-controllers-t2t.pdf} \caption{Controller bindings for Bionic Tracking. Handedness can be swapped.} \label{T2TControls} \end{figure} \begin{figure}[h] \includegraphics[width=\columnwidth]{hedgehog-full-partial.png} \caption{Left: Partial hedgehogs (sets of rays of samples through the volume for one cell track) for a single time point of the \emph{Platynereis} dataset, after creating 18 cell tracks. Right: Full hedgehogs for all timepoints after creating tracks for 18 cells. Color coded by time, yellow is early, blue late along the time of the dataset. See the supplementary video for a dynamic demonstration and the main text for details.\label{hedgehog}} \end{figure} \begin{figure}[h] \includegraphics[width=\columnwidth]{t2t-ray.pdf} \caption{An example intensity value profile along an entire spine/ray through a volumetric dataset. The X axis is step along the spine in voxels, the Y axis volume sample value. In this case, there are two local maxima along the ray, one close to the observer, at index 70, and another one further away at 284. The profile was taken along the gray line shown in Figure 2 of the main text. \label{T2TExampleRay}} \end{figure} \begin{figure}[h] \includegraphics[width=\columnwidth]{t2t-track.png} \caption{Visualization of a cell track created in the \emph{Platynereis} dataset. See the companion video for the tracking process over time.\label{T2TTracksPlatynereis}} \end{figure} \begin{figure}[h] \includegraphics[width=\columnwidth]{mda231-tracks-new.png} \caption{Cell tracks created by Bionic Tracking in the MDA231 dataset, with a single spine used for creating a track shown at the top left in purple.\label{T2TTracksMDA}} \end{figure} \fi \end{document}
https://openreview.net/forum?id=3_2Zf8Rr1N
https://arxiv.org/abs/2005.00387
Please evaluate the paper based on the provided information, focusing on the user experience, potential impact, and suggestions for further improvement.
Interesting direction for tracking of cells in time-lapse video based on eye-tracking, with a small but promising user study. ### Summary - The paper proposes an approach for the “manual” tracking of 3d+t datasets of biological cells that is carried out in virtual reality, where cells are tracked via eye gaze. The approach, which the authors call Bionic Tracking, offers an alternative to the established method of manually tracking cells (and lineage trees) on a 2D screen with mouse click. - The authors investigate the question of whether using eye gaze and the movement in virtual reality can facilitate cell tracking. In order to be able to infer the cell track from the tracked eye gaze, they propose to use a graph-based algorithm. They carry out a study with seven users to test their set-up with regard to usability and accuracy. - They find that all users overall had a positive tracking experience. The users also stated that they believe that tracking with Bionic Tracking speeds up the tracking process. ### Major strengths of the paper - The paper is understandable with a clear line of thought; limitations are clearly stated. - The authors set up a pipeline for eye-tracking of cells in time-lapse videos with a focus on using commodity hardware. With this focus, chances are higher that the setup will actually be adapted by other labs. - The authors carried out a user study with promising results including very positive feedback from the users. - The approach is novel and addresses important challenges in the cell tracking community (visualization of 3D time-lapse videos and annotation of cell tracks). - The authors provide a video that nicely explains the usage of their setup and their algorithm, which facilitates the understanding of the entire paper. ### Major weaknesses of the paper - The extraction of the path from the gaze involves smoothing, but how to handle real jumps or datasets that are difficult to register? In this context, the ground-truth path has to contain real jumps that would potentially be smoothed out by the algorithm that they propose in the paper. - The study is limited as they only tested their setup on seven users, but this limitation is clearly stated. - There is no quantitative comparison of annotation time of conventional methods versus their method. The observation that their method is faster in tracking cells than conventional methods is based on the users' opinions. This is an important finding, but should be backed up with further quantitative experiments. ### Language Some sentences use vague or colloquial language: - “The initial setting for the scale of the dataset is to make it appear about 2m big.” (about 2m big) - “and a kind of Midas touch problem [10] remains” (a kind) - “At the moment, the calibration of the eye trackers can still be a bit problematic” (a bit) - “One could imagine just having one or two eye tracking-enabled HMDs as an institute” (One could imagine just….) Unclear sentences and minor mistakes - Line 42: "The 3D images the lineage trees are usually created based on fluorescence microscopy images." Unclear sentence; - Line 94: Unclear sentence, please split into two sentences. - Line 129: "The occur when following a stimilus" → They occur when following a stimulus - Line 391: "This corresponds to distances larger than double the standard deviation of all distances the hedgehog."
Unclear sentence - Line 539: "Our method does not only accelerates the process, but makes"; Incorrect grammar (--> Our method does not only accelerate the process …) Rating: 6: Marginally above acceptance threshold Confidence: 4: The reviewer is confident but not absolutely certain that the evaluation is correct
\documentclass[runningheads]{llncs} % \newcommand{\preprint}{} \usepackage{graphicx} % \DeclareGraphicsExtensions{.pdf,.png,.jpg,.jpeg} % \graphicspath{{figures/}{pictures/}{images/}{./}} % \usepackage{microtype} % \PassOptionsToPackage{warn}{textcomp} % \usepackage{textcomp} % \usepackage{mathptmx} % \usepackage{times} % \renewcommand*\ttdefault{txtt} % \usepackage{cite} % \usepackage{tabu} % \usepackage{booktabs} % \makeatletter \newcommand*{\addFileDependency}[1]{% \typeout{(#1)} \@addtofilelist{#1} \IfFileExists{#1}{}{\typeout{No file #1.}} } \makeatother \newcommand*{\myexternaldocument}[1]{% \externaldocument{#1}% \addFileDependency{#1.tex}% \addFileDependency{#1.aux}% } \usepackage{xr-hyper} \usepackage{hyperref} \usepackage[svgnames]{xcolor} \hypersetup{ colorlinks=true, linkcolor={DarkBlue}, urlcolor={DarkBlue}} \ifdefined\preprint \else \myexternaldocument{supplement} \fi \usepackage[width=122mm,left=12mm,paperwidth=146mm,height=193mm,top=12mm,paperheight=217mm]{geometry} \usepackage[% font={small}, labelfont=bf, format=hang, format=plain, margin=0pt, width=1.0\textwidth, ]{caption} \usepackage[list=true]{subcaption} \usepackage{comment} \usepackage{microtype} \renewcommand*\ttdefault{txtt} % \usepackage[utf8]{inputenc} \usepackage{csquotes} \usepackage{breakcites} \newcommand{\TODO}[1]{\colorbox{red}{\color{white}\textbf{TODO}} {\color{red}#1}} \usepackage[capitalise,noabbrev]{cleveref} \begin{document} \pagestyle{headings} \mainmatter \def\ECCVSubNumber{23} % \title{Bionic Tracking: Using Eye Tracking to Track Biological Cells in Virtual Reality} % \titlerunning{Bionic Tracking} \author{Ulrik Günther\inst{1,2,3}\orcidID{0000-0002-1179-8228} \and Kyle I.S. Harrington\inst{4,5}\orcidID{0000-0002-7237-1973} \and Raimund Dachselt\inst{6,7}\orcidID{0000-0002-2176-876X} \and\\ Ivo F. Sbalzarini\inst{6,2,3,7}\orcidID{0000-0003-4414-4340} } \authorrunning{Günther, et al.} \institute{Center for Advanced Systems Understanding, Görlitz, Germany \and Center for Systems Biology, Dresden, Germany\and Max Planck Institute of Molecular Cell Biology and Genetics, Dresden, Germany \and Virtual Technology and Design, University of Idaho, Moscow, ID, USA\and HHMI Janelia Farm Research Campus, Ashburn, VA, USA\and Faculty of Computer Science, Technische Universität Dresden, Germany \and Excellence Cluster Physics of Life, Technische Universität Dresden, Germany } \maketitle \begin{abstract} We present Bionic Tracking, a novel method for solving biological cell tracking problems with eye tracking in virtual reality using commodity hardware. Using gaze data, and especially smooth pursuit eye movements, we are able to track cells in time series of 3D volumetric datasets. The problem of tracking cells is ubiquitous in developmental biology, where large volumetric microscopy datasets are acquired on a daily basis, often comprising hundreds or thousands of time points that span hours or days. The image data, however, is only a means to an end, and scientists are often interested in the reconstruction of cell trajectories and cell lineage trees. Reliably tracking cells in crowded three-dimensional space over many timepoints remains an open problem, and many current approaches rely on tedious manual annotation and curation. In our Bionic Tracking approach, we substitute the usual 2D point-and-click annotation to track cells with eye tracking in a virtual reality headset, where users simply have to follow a cell with their eyes in 3D space in order to track it. 
We detail the interaction design of our approach and explain the graph-based algorithm used to connect different time points, also taking occlusion and user distraction into account. We demonstrate our cell tracking method using the example of two different biological datasets. Finally, we report on a user study with seven cell tracking experts, demonstrating the benefits of our approach over manual point-and-click tracking. \end{abstract} \section{Introduction} In cell and developmental biology, the image data generated via fluorescence microscopy is often only a means to an end: Many tasks require exact information about the positions of cells during development, or even their entire history, the so-called cell lineage tree. Both the creation of such a tree using cell tracking, and tracking of single cells, are difficult and cannot always be done in a fully automatic manner. Therefore, such lineage trees are created in a tedious manual process using a point-and-click 2D interface. Even if cells can be tracked (semi)automatically, faulty tracks have to be repaired manually. Again, this is a very tedious task, as the users have to go through each timepoint and 2D section in order to connect cells in 3D+time, with a 2D point-and-click interface. Manually tracking a single cell through 101 timepoints with this process takes 5 to 30 minutes, depending on the complexity of the dataset. Tracking an entire developmental dataset with many 3D images can take months of manual curation effort. The 3D images from which the lineage trees are created are usually acquired by fluorescence microscopy. Such fluorescence images do not have well-defined intensity scales, and intensities might vary strongly even within single cells. Cells also move around, divide, change their shape---sometimes drastically---or might die. Cells might also not appear alone, and may move through densely-populated tissue, making it difficult to tell one cell apart from another. These three issues (ill-defined intensities, changing cell shapes and states, and crowded tissue) are the main reasons that make the task of tracking cells so difficult. Further complicating the situation, recent advances in fluorescence microscopy, such as the advent and widespread use of lightsheet microscopy \cite{Huisken:2004ky}, have led to a large increase in size of the images, with datasets growing from about a gigabyte to several terabytes for long-term timelapse images \cite{Reynaud:2015dx}. In this work, we reduce the effort needed to track cells through time series of 3D images by introducing \emph{Bionic Tracking}, a method that uses smooth pursuit eye movements as detected by eye trackers inside a virtual reality head-mounted display (HMD) to render cell tracking and track curation tasks easier, faster, and more ergonomic. Instead of following a cell by point-and-click, users have to simply look at a cell in Virtual Reality (VR) in order to track it.
The main contributions we present here are: \begin{itemize} \item A setup for interactively tracking cells by simply following the cell in a 3D volume rendering with the eyes, using a virtual reality headset equipped with eye trackers, \item an iterative, graph-based algorithm to connect gaze samples over time with cells in volumetric datasets, addressing both the problems of occlusion and user distraction, and \item a user study evaluating the setup and the algorithm with seven cell tracking experts. \end{itemize} \section{Related Work} \label{sec:RelatedWork} The main problem we address in this paper is the manual curation or tracking step, which is necessary both for validation and for handling cases where automatic tracking produces incorrect or no results. In this section, we give a brief overview of (semi-)automatic tracking algorithms, then continue with relevant work from the VR, visualization, and eye tracking communities. Historically, software for solving tracking problems was developed for a specific model organism, such as for the roundworm \emph{Caenorhabditis elegans}, the fruitfly \emph{Drosophila melanogaster}, or the zebrafish \emph{Danio rerio} --- all highly studied animals in biology --- and relied on stereotypical developmental dynamics within an organism in order to succeed in tracking cells. This approach, however, either fails entirely or produces unreliable results for other organisms, or for organisms whose development is not as stereotyped. For that reason, (semi-)automated approaches have been developed that are independent of the model organism and can track large numbers of cells, but often require manual tracking of at least a subset of the cells in a dataset. Examples of such frameworks are: \begin{itemize} \item \emph{TGMM}, Tracking by Gaussian Mixture Models \cite{amat2014, ckendorf:2015ch}, is an offline tracking solution that works by generating oversegmented supervoxels from the original image data, then fitting all cell nuclei with a Gaussian Mixture Model, evolving that model through time, and finally using the temporal context of a cell track to create the lineage tree. \item \emph{TrackMate} \cite{tinevez2017} is a plugin for Fiji \cite{schindelin2012fiji} that provides automatic, semi-automatic, and manual tracking of single particles in image datasets. TrackMate can be extended with custom spot detection and tracking algorithms. \item \emph{MaMuT}, the Massive MultiView Tracker \cite{wolff2018}, is another plugin for Fiji that allows the user to manually track cells in large datasets, often originating from multi-view lightsheet microscopes. MaMuT's viewer is based on BigDataViewer \cite{Pietzsch:2015hl} and is able to handle terabytes of data. \end{itemize} All automated approaches have in common that they need manual curation as a final step, as they all make assumptions about cell shapes, modelling them, e.g., as blobs of Gaussian shape, as in the case of TGMM. Manual tracking and curation are usually done with mouse-and-keyboard interaction to select a cell and create a track, often while just viewing a single slice of a 3D time point of the dataset. In Bionic Tracking, we replace this interaction by leveraging the user's gaze in a virtual reality headset, while the user can move freely around or in the dataset.
Gaze has been used in human-computer interaction in various ways: as an additional input modality in conjunction with touch interaction \cite{Stellmach:2012Looka} or pedaling \cite{Klamka:2015ka}, and for building user interfaces, e.g., for text entry \cite{Lutz:2015ga}. The particular kind of eye movements we exploit for Bionic Tracking---\emph{smooth pursuits}, where the eyes follow a stimulus in a smooth, continuous manner---has not yet been explored exhaustively for interacting with 3D or VR content. Applications can be found mainly in 2D interfaces, such as in \cite{Kosch:2018Your}, where the authors use deviations from smoothness in smooth pursuits to evaluate cognitive load; or in \cite{Vidal:2013Pursuits}, where smooth pursuits are used for item selection in 2D user interfaces. For smooth pursuits in VR, we are only aware of two works, \cite{piumsomboon2017} and \cite{Khamis:2018VRpursuits}: In the first, the authors introduce \emph{Radial Pursuit}, a technique where the user can select an object in a 3D scene by tracking it with her eyes, and it will become more ``lensed-out'' the longer she focuses on a particular object. In the latter, the authors explore target selection using smooth pursuits, perform a user study, and make design recommendations for smooth pursuit-based VR interfaces. All aforementioned works are only concerned with navigation or selection tasks on structured, geometric data. In Bionic Tracking, however, we use smooth pursuits to track cells in unstructured, volumetric data that cannot simply be queried for the objects contained or their positions. In the context of biomedical image analysis, VR has been applied successfully, e.g., for virtual colonoscopy \cite{Mirhosseini:2019Immersive} and for tracing of neurons in connectome data \cite{Usher:2017bda}. In the latter, the authors show the neurons in VR in order to let the user trace them with a handheld controller. The authors state that this technique resulted in faster and better-quality annotations. Tracking cells using handheld VR controllers is an alternative to gaze, but could place higher physical strain on the user. \section{The Bionic Tracking Approach} For Bionic Tracking, we exploit smooth pursuit eye movements. Smooth pursuits are the only smooth movements performed by our eyes. They occur when following a stimulus, and cannot be triggered without one \cite{Duchowski:2017ii}. Instead of using a regular 2D screen, we perform the cell tracking process in VR, since VR gives the user improved navigation and situational awareness compared to 2D when exploring a complex 3D/4D dataset \cite{Slater:2016552}. In addition, the HMD tracking data can be used to impose constraints on the data acquired from the eye trackers. In order to remove outliers from the gaze data, one can calculate the quaternion distance between eyeball rotation and head rotation, which is physiologically limited: a 90-degree angle between eye direction and head direction is not plausible, and head movement follows eye movement via the vestibulo-ocular reflex. As a system consisting of both a VR HMD and an integrated eye tracking solution might be perceived as too complex, we start by explaining why we think that only using one of the technologies would not solve the problem: \begin{itemize} \item \emph{Without eye tracking}, the head orientation from the HMD could still be used as a cursor. However, following small and smooth movements with the head is not something humans are used to doing.
The eyes always lead the way, and the head follows via the vestibulo-ocular reflex. \item \emph{Without virtual reality}, the effective space the user can use to follow cells around is restricted to the rather small part of the visual field that a regular screen occupies. The user furthermore loses the ability to move around freely without an additional input modality, e.g., to avoid obstacles (in our case, cells that are not being tracked at the moment). As an alternative to HMDs, a system using large screens or projectors, such as Powerwalls or CAVEs, could be used, but this increases the technical complexity. \end{itemize} \subsection{Hardware selection} We have chosen the HTC Vive as HMD, as it is comfortable to wear, provides good resolution, and offers an excellent tracking system for room-scale VR experiences. Furthermore, it is usable with the SteamVR/OpenVR API. For eye tracking, we have chosen the \emph{Pupil} eye trackers produced by Pupil Labs \cite{Kassner:2014kh}, as they provide both open-source software and competitively priced hardware that is simple to integrate physically into off-the-shelf HMDs. The software is available as LGPL-licensed open-source code and can be extended with custom plugins. In addition to being open-source, the \emph{Pupil} software makes the measured gaze data and image frames available to external applications via a simple ZeroMQ- and MessagePack-based protocol\footnote{See \url{https://docs.pupil-labs.com/developer/core/network-api/} for details on interacting with Pupil over the network.}---in contrast to closed-source proprietary libraries required by other products---which enables using the eye tracking data in a local application or even over the network. Alternative solutions, like the HTC Vive Pro Eye or an HTC Vive with an integrated Tobii eye tracker, were either not available at the time this project started, or were much more expensive. \subsection{Software framework} We have developed Bionic Tracking using the visualization framework \textit{scenery} \cite{Gunther:2019scenerya}, as it supports rendering of mesh data simultaneously with multi-timepoint volumetric data that contains the cells or nuclei to be tracked. Crucially for Bionic Tracking, scenery supports rendering to all SteamVR/OpenVR-supported VR HMDs and supports the Pupil eye trackers. In addition, scenery runs on the Java VM and is interoperable with the image analysis toolkit Fiji, just like the existing tracking tools \emph{TrackMate} and \emph{MaMuT} (see \cref{sec:RelatedWork}). \begin{figure} \vspace{-1.25\baselineskip} \centering \includegraphics[width=\textwidth]{cell-shapes.pdf} \caption{Some example nucleus shapes encountered in our \emph{Platynereis} test dataset. \label{fig:NucleusShapes}} \vspace{-3\baselineskip} \end{figure} \subsection{Rendering} We use simple, alpha blending-based volume rendering for displaying the data in the VR headset using scenery's Vulkan backend. While more advanced volume rendering algorithms exist that provide higher visual quality (e.g., Metropolis Light Transport \cite{Kroes:2012bo}), achieving a high and ideally consistent framerate is important for VR applications, which led us to choose alpha blending. For the data used in this work, we have only used in-core rendering, while the framework also supports out-of-core volume rendering for even larger datasets.
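To make the compositing step concrete, the following minimal Python sketch illustrates front-to-back alpha blending along a single ray. It is for illustration only and is not part of the actual Vulkan-based renderer; the simple grayscale ramp transfer function and the sample values are assumptions.

\begin{verbatim}
import numpy as np

def composite_ray(samples, transfer_function):
    """Front-to-back alpha compositing of scalar samples along one ray.

    samples: 1D array of volume intensities along the ray, near to far.
    transfer_function: maps an intensity in [0, 1] to (r, g, b, a).
    Returns the composited RGB color of this ray.
    """
    color = np.zeros(3)
    alpha = 0.0
    for s in samples:
        r, g, b, a = transfer_function(s)
        # Weight the new sample by the transparency accumulated so far.
        color += (1.0 - alpha) * a * np.array([r, g, b])
        alpha += (1.0 - alpha) * a
        if alpha > 0.99:  # early ray termination
            break
    return color

# Illustrative grayscale ramp transfer function (an assumption).
tf = lambda s: (s, s, s, 0.05 * s)
print(composite_ray(np.linspace(0.0, 1.0, 256), tf))
\end{verbatim}

Front-to-back compositing permits early ray termination, as sketched above, which helps sustain the high framerates required for VR.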
We display not only the volume itself to the user, but also a gray, unobtrusive box around the volume for spatial anchoring (see the supplementary video for an impression of how this looks). \section{Tracking Cells with Bionic Tracking} \subsection{Preparation} After putting on the VR HMD, making sure the eye tracker's cameras can see the user's eyes, and launching the application, the calibration routine needs to be run first in order to establish a mapping between the user's gaze and world space positions in the VR scene. For calibration, we show the user a total of 18 white spheres, with five of them placed on each of three circles 1\,m apart (distances in the VR scene are the same as in the physical world). The radius of the circles increases with each layer to achieve a good coverage of the field of view. In addition to the spheres on the circles, we show three spheres in the center of the circles to also cover the area in the center of the field of view. During the calibration routine, the user has to look at these spheres as they are shown in the HMD. Since the calibration targets follow the head movements of the user, the user does not need to stay still. At the end of the calibration, the user will be notified of success or failure, and can repeat the calibration process if necessary. Calibration typically needs to be run only once per session, and can then be used to track as many cells as the user likes. Exceptions are significant slippage of the HMD or its removal during the session. Our calibration routine is mostly similar to the one used in \emph{Pupil's} HMDeyes Unity example project\footnote{See \url{https://github.com/pupil-software/hmd-eyes} for details.}. Movement in VR can be performed either physically, or via buttons on the handheld controllers, which additionally allow control of the following functions (handedness can be swapped, default bindings shown in Supp. Fig.~\ref{T2TControls}): \begin{itemize} \setlength{\itemsep}{1.5pt} \setlength{\parskip}{2pt} \item move the dataset by holding the left-hand trigger and moving the controller, \item use the directional pad on the left-hand controller to move the observer (forward, backward, left, or right -- with respect to the direction the user is looking in), \item start and stop tracking by pressing the right-hand side trigger, \item delete the most recently created track by pressing the right-side button, and confirm within three seconds with another press of the same button, \item play and pause the dataset over time by pressing the right-hand menu button, \item play the dataset faster or slower in time by pressing the right-hand directional pad up or down, and \item step through the timepoints of the dataset one by one, forward or backward, by pressing the right-hand directional pad left or right. \end{itemize} When the dataset is not playing, the user can also use the directional pad on the right-hand controller to scale the dataset. Initially, the dataset is scaled such that it appears approximately 2\,m in size. \subsection{Tracking Process} After calibration, the user can position herself freely in space.
To track a cell, the user performs the following steps: \begin{enumerate} \setlength{\itemsep}{1.5pt} \setlength{\parskip}{2pt} \item Find the timepoint and cell with which the track should start, adjust playback speed between one and 20 volumes/second, and start to look at the cell or object of interest, \item start playback of the multi-timepoint dataset, while continuing to follow the cell by looking at it, and, if necessary, moving physically to follow the cell around occlusions, \item end or pause the track at the final timepoint. Tracking will stop automatically when playback has reached the end of the dataset, and the dataset will play again from the beginning. \end{enumerate} In order to minimize user strain in smooth pursuit-based VR interactions, the authors of \cite{Khamis:2018VRpursuits} have provided design guidelines: They suggest large trajectory sizes, clear instructions on what the user has to look at, and relatively short selection times. While physical cell size cannot be influenced, the controls available to the user enable free positioning and zooming. The selection time, here the tracking time, of course depends on the individual cell to be tracked, but as the tracking can be paused and the playback speed adjusted, the user is free to choose both a comfortable duration and speed. During the tracking procedure, we collect the following data for each timepoint: \begin{itemize} \setlength{\itemsep}{1.5pt} \setlength{\parskip}{2pt} \item the entry and exit points of the gaze ray through the volume in normalised volume-local coordinates, i.e., as a vector $\in [0.0, 1.0]^3$, \item the confidence rating -- calculated by the \emph{Pupil} software -- of the gaze ray, \item the user's head orientation and position, \item the timepoint of the volume, and \item a list of sampling points with uniform spacing along the gaze ray through the volume and the actual sample values at these points, calculated by trilinear interpolation from the volume image data. \end{itemize} We call a single gaze ray including the above metadata a \emph{spine}. The set of all spines for a single track over time we call a \emph{hedgehog} -- due to its appearance, see Supp. Fig.~\ref{hedgehog}. By collecting the spines through the volume, we are effectively able to transform each 3-dimensional cell localization problem into a 1-dimensional one along a single ray through the volume and create a cell track. This analysis procedure is explained in detail in the next section. \section{Analysis of the Tracking Data} In previous applications using smooth pursuits (such as in \cite{Vidal:2013Pursuits,piumsomboon2017}), the tracked objects were geometric and not volumetric in nature, and therefore well-defined in 2D or 3D space with their extents and shape fully known. In our analysis, in contrast, we use the indirect information about the objects contained in spines and hedgehogs to find the tracked object in unstructured volumetric data and follow it. After a full hedgehog has been collected to create a new cell track, all further analysis is done solely on the data contained in this hedgehog. To illustrate the analysis, it is useful to visualize a hedgehog in two dimensions by laying out all spines in a 2D plane next to each other (see \cref{fig:labelledHedgehog}). In this plane, time advances along the X axis and depth through the volume along a given spine is on the Y axis. Note that each line parallel to the Y axis represents one spine and therefore one gaze sample, of which we collect up to 60 per second.
In \cref{fig:labelledHedgehog}, this led to 1614 spines collected within 30 seconds, with 16 spines per image timepoint on average. In the figure, we have highlighted the local intensity maximum along each spine in red. The track of the cell the user was following is then mostly visible. \begin{figure}[h] \includegraphics[width=\textwidth]{hedgehog-annotated.pdf} \caption{A hedgehog visualized in 2D, with nearest local maxima marked in red. Each vertical line is one spine of the hedgehog with the observer sitting at the bottom. On the X axis, time runs from left to right and is counted in gaze samples taken. A dotted white line is shown after every 500 spines recorded, i.e., at 500, 1000, and 1500 spines. The gray line shortly before 500 spines is the line whose profile is shown in Supp. Fig.~\ref{T2TExampleRay}. The discontinuities in the local maxima A and B have different origins: For A, the user seems to have moved further away, resulting in a gap, while for B, another cell appeared closely behind the tracked one and might have misled the user, leaving it for the algorithm to filter out. See text for details.\label{fig:labelledHedgehog}} \vspace{-1.25\baselineskip} \end{figure} \subsection{Graph-based temporal tracking} \label{sec:graphbasedtemporaltracking} Movements of the user and temporary occlusion by other cells or objects render it challenging to reliably extract a space-time trajectory from the information contained in the hedgehog. In order to reliably link cell detections across timepoints, we use an incremental graph-based approach based on all spines that have local maxima in their sample values. A plot of an exemplary spine through a volume is shown in Supp. Fig.~\ref{T2TExampleRay}. In the figure, the distance from the observer in voxels along the spine is shown on the X axis, while the Y axis shows the intensity value of the volume data at that point along the spine. To initialize the algorithm, we assume that when starting a track the user looks at an unoccluded cell that is visible as the nearest local maximum along the spine. In Supp. Fig.~\ref{T2TExampleRay} that would be the leftmost local maximum. \begin{figure}[h] \includegraphics[width=\columnwidth]{t2t-algorithm.pdf} \caption{A graphical illustration of the incremental graph-search algorithm used to extract tracks from a hedgehog. Time runs along the X axis. $\mathrm{spine}_1$ contains the initial seed point where to start tracking. The algorithm is currently at $\mathrm{spine}_4$, determining how to proceed to $\mathrm{spine}_5$. In this case, the middle track with $\mathrm{dist}=1$ wins, as it has the shortest world-space distance to the current point. The algorithm will continue the path search until it has reached the last spine, $\mathrm{spine}_n$. In this manner, the algorithm closes the gaps around the sample numbers 700 and 1200 in Figure~\ref{fig:labelledHedgehog}, and leaves out the detected cells further along the individual rays. $\mathrm{spine}_3$ is connected initially, but removed in the final statistical pruning step. It is therefore grayed out. See text for details. \label{fig:T2TAlgorithm}} \vspace{-1.25\baselineskip} \end{figure} For each timepoint, we have collected a variable number of spines, whose count varies between 0 and 120; zero spines might be obtained if the user closes her eyes or if no detection was possible for other reasons, while 120 Hz is the maximum frame rate of the eye trackers used.
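To make the data representation tangible, the following minimal Python sketch shows one possible way to represent a spine and to find the seed point as the local maximum nearest to the observer. The field names and the strict local-maximum criterion are our assumptions for this illustration; the actual prototype runs on the Java VM on top of scenery.

\begin{verbatim}
import numpy as np
from dataclasses import dataclass

@dataclass
class Spine:
    """One gaze ray through the volume; field names are illustrative."""
    samples: np.ndarray    # intensities sampled along the ray, near to far
    positions: np.ndarray  # world-space positions of the samples, shape (n, 3)
    confidence: float      # gaze confidence reported by the eye tracker
    timepoint: int         # index of the volume timepoint

def local_maxima(samples, threshold=0.0):
    """Indices of strict local maxima above a threshold along one spine."""
    s = np.asarray(samples, dtype=float)
    inner = (s[1:-1] > s[:-2]) & (s[1:-1] > s[2:]) & (s[1:-1] > threshold)
    return np.where(inner)[0] + 1

def seed_index(spine):
    """Nearest local maximum to the observer; the initial cell position."""
    maxima = local_maxima(spine.samples)
    return None if maxima.size == 0 else int(maxima[0])
\end{verbatim}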
In order to correctly track a cell across spines over time, and after the initial seed point on the first spine has been determined, we step through the spines in the hedgehog one by one, performing the following operations, as illustrated in \cref{fig:T2TAlgorithm}: \begin{enumerate} \setlength{\itemsep}{1.5pt} \setlength{\parskip}{2pt} \item advance to the next spine in the hedgehog, \item find the indices of all local maxima along the spine, ordered by world-space distance to the selected point from the previous spine, \item connect the selected point from the previous spine with the closest (in world-space distance) local maximum in the current spine, \item calculate the world-space position of the new selected point, and \item add the selected point to the set of points for the current track. \end{enumerate} In addition to connecting discontinuities in the local maxima detected (discontinuity A in \cref{fig:labelledHedgehog}), world-space distance weighting also excludes cases where another cell briefly moves close to the user and the actually tracked cell (discontinuity B in \cref{fig:labelledHedgehog}). The process of connecting a local maximum to the nearest one at a later time is a variant of \emph{dynamic fringe-saving A*} search on a grid \cite{sun2009} with all rays extended to the maximum length in the entire hedgehog along the X axis, and time increasing along the Y axis. This strategy constructs a cell track from the spines of each hedgehog. The calculation of the final track typically takes less than a second and is visualised right away, such that the user can quickly decide whether to keep or discard it. \subsection{Handling Distraction and Occlusions} In some cases, however, world-space distance weighting is not enough, and a form of the Midas touch problem \cite{Jacob:1995Eye} remains: when the user briefly looks at something other than the cell of interest, and another local maximum is detected there, that local maximum may indeed have the smallest world-space distance and win. This would introduce a wrong link in the track. Usually, the Midas touch problem is avoided by resorting to multimodal input (see, e.g., \cite{Stellmach:2012Looka,Meena:2017bn}). Here, we aim to avoid the Midas touch problem without burdening the user with additional modalities of control. We instead use statistics: for each vertex distance $d$, we calculate the z-score $Z(d) = \left( d - \mu_\mathrm{dist}\right)/\sigma_{\mathrm{dist}}$, where $\mu_\mathrm{dist}$ is the mean distance in the entire hedgehog and $\sigma_\mathrm{dist}$ is the standard deviation of all distances in the entire hedgehog. We then prune all graph vertices with a z-score higher than 2.0. This corresponds to distances that are more than two standard deviations above the mean of all distances in the hedgehog. Pruning and graph calculations are repeated iteratively until no vertices with a z-score higher than 2.0 remain, effectively filtering out discontinuities like B in \cref{fig:labelledHedgehog}.
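The following Python sketch summarizes the linking and pruning procedure described above in simplified form: the candidate detections are the world-space positions of the local maxima per spine, each spine contributes the candidate closest to the previous track point, and links whose length has a z-score above 2.0 are removed iteratively. It is a sketch under these simplifications (pruned vertices are dropped rather than re-linked, and no A* search is performed), not the actual implementation.

\begin{verbatim}
import numpy as np

def link_and_prune(candidates_per_spine, seed, z_max=2.0):
    """Connect nearest candidates across spines, then prune outlier links.

    candidates_per_spine: one (k, 3) array of world-space local-maximum
        positions per spine, ordered by time.
    seed: (3,) world-space position of the cell on the first spine.
    Returns an (n, 3) array of track points.
    """
    track = [np.asarray(seed, dtype=float)]
    for candidates in candidates_per_spine[1:]:
        candidates = np.asarray(candidates, dtype=float)
        if candidates.size == 0:   # e.g. a blink: no detection on this spine
            continue
        dists = np.linalg.norm(candidates - track[-1], axis=1)
        track.append(candidates[np.argmin(dists)])
    track = np.array(track)

    # Iteratively remove vertices whose incoming link length is an outlier
    # (z-score above z_max), until all remaining links are plausible.
    while len(track) > 2:
        d = np.linalg.norm(np.diff(track, axis=0), axis=1)
        z = (d - d.mean()) / (d.std() + 1e-9)
        if np.all(z <= z_max):
            break
        track = np.delete(track, np.argmax(z) + 1, axis=0)
    return track
\end{verbatim}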
\section{Proof of concept} \label{sec:ProofOfConcept} We demonstrate the applicability of the method with two different datasets: \begin{itemize} \item A developmental 101-timepoint dataset of a \emph{Platynereis dumerilii} embryo, an ocean-dwelling ringworm, acquired using a custom-built OpenSPIM \cite{Pitrone:2013ki} lightsheet microscope, with cell nuclei tagged with the fluorescent GFP protein (16-bit stacks, 700x660x113 pixels, 100MB/timepoint, 9.8 GByte total size), \item A 12-timepoint dataset of \emph{MDA231} human breast cancer cells, embedded in a collagen matrix and infected with viruses tagged with the fluorescent GFP protein, acquired using a commercial Olympus FluoView F1000 confocal microscope (dataset from the Cell Tracking Challenge \cite{Ulman:2017objective}, 16-bit TIFF stacks, 512x512x30 pixels, 15MB/timepoint, 98 MByte total size). \end{itemize} The \emph{Platynereis} dataset was chosen because it poses a current research challenge, with all tested semiautomatic algorithms failing on this dataset due to the diverse nucleus shapes and cell movements. Examples of shapes encountered in the dataset are shown in \cref{fig:NucleusShapes}. The MDA231 dataset in turn was chosen because it had the worst success scores for automatic tracking methods on the \emph{\href{https://celltrackingchallenge.net}{celltrackingchallenge.net}} website due to the diversity of cell shapes and jerky movements in the dataset. For the \emph{Platynereis} dataset, we were able to quickly obtain high-quality cell tracks using our prototype system. A visualization of one such cell track is shown in Supplementary Figure \ref{T2TTracksPlatynereis}. In the companion video, we show both the gaze tracking process to create the track and a visualization showing all spines used to generate the track. For the MDA231 dataset, we were able to obtain tracks for six moving cells in the dataset in about 10 minutes. A visualization of these tracks is shown in Supp. Fig.~\ref{T2TTracksMDA}; see the companion video for a part of the tracking process. This example also demonstrates that the Bionic Tracking technique is useful even on nearly ``flat'' microscopy images in VR, as the dataset only has 30 Z slices, compared to a resolution of 512x512 in X and Y. All datasets are rendered at their full resolution, with a typical framerate of 60-90fps. \section{Evaluation} We evaluated Bionic Tracking by first performing a user study to gain insight into user acceptance and feasibility. We then compared tracks created with Bionic Tracking to the manually annotated ground truth. Together, these evaluations serve as an initial characterization of the usability and performance of Bionic Tracking. \subsection{User Study} \label{sec:EvaluationUserStudy} We recruited seven cell tracking experts who were either proficient with manual cell tracking tasks in biology, proficient in using or developing automated tracking algorithms, or both (median age 36, s.d. 7.23, 1 female, 6 male) to take part in the study. The users were given the task of tracking arbitrary cells in the \emph{Platynereis} dataset already used in \cref{sec:ProofOfConcept}. One of the users was already familiar with this particular dataset. The study was conducted on a Dell Precision Tower 7910 workstation (Intel Xeon E5-2630v3 CPU, 8 cores, 64 GB RAM, GeForce GTX 1080Ti GPU) running Windows 10, build 1909. Before starting to use the software, all users were informed of the goals and potential risks (e.g., simulator sickness) of the study.
In a questionnaire, they were asked about the presence of any visual or motor impairments (apart from needing to wear glasses or contact lenses, none were reported), about previous VR experience, and about their physical wellbeing. After using the software, users were again asked about their physical wellbeing, and had to judge their experience using the NASA Task Load Index (TLX, \cite{Hart:1988tlx}) and Simulator Sickness Questionnaire (SSQ, \cite{kennedy1993}). In addition, they were asked both qualitative and quantitative questions about the software based on both the User Experience Questionnaire \cite{Laugwitz:2008Construction} and the System Usability Scale \cite{Brooke:1996SUS}. We concluded the study for each participant with a short interview where users were asked to state areas of improvement, and what they liked about the software. The full questionnaire used in the study is available in the supplementary materials. After filling in the pre-study part of the questionnaire, users were given a brief introduction to the controls in the software. After ensuring a good fit of the HMD on the user's head, the interpupillary distance (IPD) of the HMD was adjusted to the user's eyes, as were the ROIs of the eye tracking cameras. The users then ran the calibration routine on their own. Afterwards, they could take time to freely explore the dataset in space and time. If the calibration was found to not be sufficiently accurate, we re-adjusted HMD fit and camera ROIs, and ran the calibration routine again. Finally, all users were tasked with tracking the cells in the \emph{Platynereis} dataset. Users were then able to create cell tracks freely, creating up to 32 cell tracks in 10 to 29 minutes. All participants in the study had no or very limited experience with using VR interfaces (5-point scale, 0 means no experience, and 4 daily use: mean 0.43, s.d. 0.53), and only one had previously used any eye-tracking-based user interfaces (same 5-point scale: mean 0.14, s.d. 0.37). \subsection{User Study Results} The average SSQ score was $25.6 \pm 29.8$ (median $14.9$), which is on par with other VR applications that have been evaluated using SSQ (see, e.g., \cite{Singla:2017Measuring}). From TLX, we used all categories (mental demand, physical demand, temporal demand, success, effort, insecurity), on a 7-point scale where 0=Very Low and 6=Very High for the demand metrics, and 0=Perfect, 6=Failure for the performance metrics. Users reported medium scores for mental demand ($2.71 \pm 1.70$) and for effort ($2.86 \pm 1.68$), while reporting low scores for physical demand ($1.86 \pm 1.95$), temporal demand ($1.57 \pm 0.98$), and insecurity ($1.14 \pm 1.68$). The participants judged themselves to have been rather successful with the tracking tasks ($1.71 \pm 0.75$). All questions related to software usability and acceptance are summarised in \cref{fig:StudyAnswers}. The users estimated that the Bionic Tracking method would yield a speedup of a factor 2 to 10 ($3.33 \pm 6.25$) compared to tracking cells with a regular 2D interface, and expressed high interest in using the method for their own tracking tasks ($3.43 \pm 0.53$; 5-point scale here and for the following: 0=No agreement, 4=Full agreement), as the tracks created by it looked reasonable ($2.57 \pm 0.98$), it would provide an improvement over their current methods ($3.14 \pm 0.90$), and they could create new cell tracks not only with confidence ($2.86 \pm 0.69$), but also faster ($3.29 \pm 0.76$).
Users found the software relatively intuitive ($2.43 \pm 0.98$) and did not need a long time to learn how to use it ($0.59 \pm 0.79$), which they also remarked on in the follow-up interviews: \begin{displayquote} ``It was so relaxing, actually, looking at this [cell] and just looking.'' (P2; the user further remarked after the interview that the technique might prevent the carpal tunnel issues often encountered when tracking via mouse and keyboard.) \end{displayquote} \begin{displayquote} ``I figured this could be like a super quick way to generate the [cell] tracks.'' (P7) \end{displayquote} Furthermore, the user study showed that users tend to adjust playback speed more often than image size (in VR). After experimenting with different settings -- users could choose speeds from 1 to 20 volumes/second -- all users interestingly settled on 4-5 volumes/second, corresponding to 200 to 250\,ms of viewing time per timepoint, which coincides with the onset delay of smooth pursuit eye movements. Despite having no or only limited previous VR experience, the users did not feel irritated by the environment ($0.00 \pm 0.00$) nor by the use of eye tracking ($0.29 \pm 0.49$). \begin{figure}[h] \begin{subfigure}[b]{0.48\textwidth} \includegraphics[width=\textwidth]{figures/study-answers.pdf} \caption{Results of the usability and acceptance-related questions from the user study. Please note that the questions are formulated both positively and negatively.\label{fig:StudyAnswers}} \end{subfigure} \hfill \begin{subfigure}[b]{0.48\textwidth} \includegraphics[width=\textwidth]{figures/52tracks.png} \caption{The 52 tracks we used for comparison with manual tracking results, visualised together with the volumetric data of one timepoint. This is the same view the user had, taken from within the VR headset. See the supplementary video for a demonstration of creating these tracks.\label{fig:52tracks}} \end{subfigure} \caption{User study and cell tracking results for the \emph{Platynereis} dataset.} \vspace{-1.25\baselineskip} \end{figure} \subsection{Comparison with Manual Tracking Results} \label{sec:EvaluationComparison} To further characterize the performance of Bionic Tracking, we performed a comparison to manually annotated tracks. Our primary focus in this comparison is to assess the capacity of Bionic Tracking to recreate individual manually annotated tracks. We compared 52 tracks created by an expert annotator using Bionic Tracking (see \cref{fig:52tracks}) on the \textit{Platynereis} dataset to their respective best matching ground truth tracks. We find that 25 of the 52 tracks have a distance score \cite{Ulman:2017objective} of less than 1 cell diameter, suggesting that these tracks will, on average, intersect the volume of their corresponding cell. \section{Discussion} We were able to show that gaze in VR can be used to reconstruct tracks of biological cells in 3D microscopy. Our method not only accelerates the process, but also makes manual tracking tasks easier and less demanding. Although our expert-based user study was rather small in size, limiting its statistical power, we believe that it provides an indication that the use of Bionic Tracking can improve the user experience and speed for cell tracking tasks, and that developing it further is worthwhile. Even though the users had little previous VR experience, they were quickly able to create cell tracks with high confidence.
Multiple users complimented the ergonomics of the technique, although it remains to be seen whether this would still be the case for longer (1h+) tracking sessions. With the projected speedups, however, it might not even be necessary to have such long sessions anymore: users indicated that for manual tracking, they would not do sessions longer than 3 to 4 hours; with the estimated speedups, this could potentially be reduced to just 20 to 90 minutes using Bionic Tracking. For tracking large lineages comprising thousands of cells, Bionic Tracking on its own is going to be cumbersome for combinatorial reasons. It can, however, augment existing techniques for parts of the tracking process, e.g., to track cells only in early stages of development, where they tend to have less well-defined shapes, or it may provide constraints and training data for machine-learning algorithms used in automated tracking. Furthermore, Bionic Tracking could be used in conjunction with any automatic tracking algorithm that provides uncertainty scores, in order to restrict gaze input to regions where the algorithm's uncertainty exceeds a given threshold. This could be done, e.g., by superimposing a heatmap on the volume rendering to indicate to the user areas that need additional curation. Hybrid semi-automated/manual approaches are already among the most popular tools for challenging biological datasets \cite{Winnubst:2019Reconstruction}. \section{Future Work and Limitations} In the future, we would like to integrate Bionic Tracking into existing tracking software, such that it can be used by a general audience. Unfortunately, eye tracking-enabled HMDs are not yet widely available, but according to current announcements, this is likely to change. Current developments in eye tracking hardware and VR HMDs indicate falling prices in the near future, such that those devices might soon become more common, or even directly integrated into off-the-shelf HMDs. An institute could, for example, provide one or two eye tracking-enabled HMDs as shared, bookable equipment for its users. At the moment, the calibration of the eye trackers can still be error-prone, but this is likely to improve in the future as well, with machine learning approaches making the process faster, more reliable, and more user-friendly. In order for Bionic Tracking to become a tool that can be routinely used for research in biology, it will be necessary to implement interactions that allow the user to indicate certain events, like cell divisions. Such an interaction could, for example, consist of the user pressing a certain button whenever a cell division occurs, and then continuing to track until the next cell division. In this way, the user can skip from cell division to cell division, literally applying divide-and-conquer to track (a part of) the cell lineage tree at hand. These additional features will enable the creation of entire cell lineage trees. The design and evaluation of algorithms to detect and track entire lineage trees is currently an active focus in the systems biology community \cite{Ulman:2017objective}. In this study, we have used comparison algorithms from the Particle Tracking Challenge (PTC) \cite{Chenouard:2014Objective}, which were designed to compare single tracks. There are limitations when applying the PTC metric to compare cell tracking annotations.
However, until additional tracking events---such as the aforementioned cell divisions---can be recorded with Bionic Tracking, PTC is the only metric that can be applied. In our tests, we have still seen some spurious detections, which led to tracks obviously not taken by the cell. This calls for more evaluations within crowded environments: while Bionic Tracking seems well suited for crowded scenes in principle -- as users can, e.g., move around corners and are tracked by the HMD -- it is not yet clear whether eye tracking is precise enough in such situations. In addition, head tracking data from the HMD could be used to highlight the area of the volumetric dataset the user is looking toward (foveated rendering, \cite{levoy1990, bruder2019}), e.g., by dimming areas the user is not looking at. We have not yet explored foveation, but could imagine that it might improve tracking accuracy and reduce mental load. \section{Conclusion} We have presented \emph{Bionic Tracking}, a new method for object tracking in volumetric image datasets, leveraging gaze data and virtual reality HMDs for biological cell tracking problems. Our method is able to augment the manual parts of cell tracking tasks in order to render them faster, more ergonomic, and more enjoyable for the user, while still generating high-quality tracks. Users estimated they could perform cell tracking tasks up to 10-fold faster with Bionic Tracking than with conventional, manual tracking methods. As part of Bionic Tracking, we have introduced a method for graph-based temporal tracking, which enables gaze samples to be robustly connected with cell or object detections in volumetric data over time; a simplified sketch of this linking step is shown below. The results from our research prototype have been very encouraging, and we plan to continue this line of research with further studies, extending the evaluation to more datasets and users, and adding an evaluation of the accuracy of the created cell tracks on datasets that have known associated ground truth. Furthermore, we would like to add Bionic Tracking to a pipeline where the gaze-determined cell tracks can be used to train machine-learning algorithms to improve automatic tracking results. Our prototype software is available as open-source software at \emph{\href{https://github.com/scenerygraphics/bionic-tracking}{github.com/scenerygraphics/bionic-tracking}}. \section*{Acknowledgements} The authors thank all participants of the user study. Thanks to Mette Handberg-Thorsager for providing the \emph{Platynereis} dataset and for feedback on the manuscript. Thanks to Vladimir Ulman and Jean-Yves Tinevez for helpful discussions regarding track comparison. Thanks to Bevan Cheeseman, Aryaman Gupta, and Stefanie Schmidt for helpful discussions. Thanks to Pupil Labs for help with the eye tracking calibration. This work was partially funded by the Center for Advanced Systems Understanding (CASUS), financed by Germany’s Federal Ministry of Education and Research (BMBF) and by the Saxon Ministry for Science, Culture and Tourism (SMWK) with tax funds on the basis of the budget approved by the Saxon State Parliament. R.D. and I.F.S. were supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany’s Excellence Strategy – EXC-2068 – 390729961 – Cluster of Excellence Physics of Life of TU Dresden.
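As a simplified illustration of the graph-based temporal linking mentioned above, the sketch below connects per-timepoint candidate detections (e.g., local intensity maxima sampled near the gaze rays) into a single track by choosing, via dynamic programming on a layered graph, the sequence of detections with minimal total displacement. It is only a sketch under assumed data structures (one array of 3D candidate positions per timepoint) and is not the exact algorithm used in the prototype.
\begin{verbatim}
import numpy as np

def link_detections(candidates, start_index=0, max_jump=np.inf):
    """Link one candidate per timepoint into a track.

    candidates: list over timepoints; each entry is an (N_t, 3) array of
    candidate positions. Returns one 3D position per timepoint such that
    the summed displacement between consecutive timepoints is minimal."""
    T = len(candidates)
    cost = [np.full(len(c), np.inf) for c in candidates]  # accumulated cost
    prev = [np.zeros(len(c), dtype=int) for c in candidates]
    cost[0][start_index] = 0.0

    for t in range(1, T):
        for i, p in enumerate(candidates[t]):
            d = np.linalg.norm(candidates[t - 1] - p, axis=1)
            d[d > max_jump] = np.inf       # forbid implausibly large jumps
            total = cost[t - 1] + d
            j = int(np.argmin(total))
            cost[t][i], prev[t][i] = total[j], j

    i = int(np.argmin(cost[-1]))           # cheapest end point
    track = [candidates[-1][i]]
    for t in range(T - 1, 0, -1):          # backtrack through predecessors
        i = prev[t][i]
        track.append(candidates[t - 1][i])
    return track[::-1]
\end{verbatim}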
\bibliographystyle{abbrv-doi-hyperref} \bibliography{bionictracking} \ifdefined\preprint \clearpage \section*{Supplementary Material} \nopagebreak \renewcommand\thefigure{S.\arabic{figure}} \setcounter{figure}{0} \begin{figure}[h] \includegraphics[width=\textwidth]{figures/vive-controllers-t2t.pdf} \caption{Controller bindings for Bionic Tracking. Handedness can be swapped.} \label{T2TControls} \end{figure} \begin{figure}[h] \includegraphics[width=\columnwidth]{hedgehog-full-partial.png} \caption{Left: Partial hedgehogs (sets of rays of samples through the volume for one cell track) for a single time point of the \emph{Platynereis} dataset, after creating 18 cell tracks. Right: Full hedgehogs for all timepoints after creating tracks for 18 cells. Color coded by time, yellow is early, blue late along the time of the dataset. See the supplementary video for a dynamic demonstration and the main text for details.\label{hedgehog}} \end{figure} \begin{figure}[h] \includegraphics[width=\columnwidth]{t2t-ray.pdf} \caption{An example intensity value profile along an entire spine/ray through a volumetric dataset. The X axis is step along the spine in voxels, the Y axis volume sample value. In this case, there are two local maxima along the ray, one close to the observer, at index 70, and another one further away at 284. The profile was taken along the gray line shown in Figure 2 of the main text. \label{T2TExampleRay}} \end{figure} \begin{figure}[h] \includegraphics[width=\columnwidth]{t2t-track.png} \caption{Visualization of a cell track created in the \emph{Platynereis} dataset. See the companion video for the tracking process over time.\label{T2TTracksPlatynereis}} \end{figure} \begin{figure}[h] \includegraphics[width=\columnwidth]{mda231-tracks-new.png} \caption{Cell tracks created by Bionic Tracking in the MDA231 dataset, with a single spine used for creating a track shown at the top left in purple.\label{T2TTracksMDA}} \end{figure} \fi \end{document}
https://openreview.net/forum?id=3_2Zf8Rr1N
https://arxiv.org/abs/2005.00387
Evaluate the paper's approach for tracking cells in time-lapse videos using eye-tracking in virtual reality, and discuss the strengths and weaknesses of the study design and results.
Simple and effective U-Net feedback extension Summary ------- This paper presents a feedback mechanism for U-Nets, which re-uses the output of the U-Net for a second round of processing. For the incorporation of the output into an earlier feature map of the U-Net, the authors propose two attention mechanisms (source-target-attention and self-attention). On an electron microscopy dataset of neural tissue, experiments demonstrate consistent improvements of the self-attention version of the proposed method over several baselines, including a vanilla U-Net and a feedback U-Net. In further ablation studies, the authors investigate the two proposed attention mechanisms, the choice of the injection point of the output, and the usefulness of the second round of processing. Quality and Clarity ------------------- The technical description of the method is very clear and the provided figures helpful. Originality ----------- Reusing the output of a U-Net for a second round of processing is not an entirely new idea (as acknowledged by the authors in the Related Work section). The main contribution here is therefore the addition of an attention mechanism and the re-use of the same weights for the second round of processing. Significance ------------ The experimental evaluation is thorough (albeit on only a single dataset) and addresses key questions about the proposed method. Qualitative results (especially the attention maps generated by the two proposed mechanisms) are helpful to understand the contributions of the method. Pros ---- * clear presentation * elegant architecture * convincing results * thorough analysis of method components in ablation study Cons ---- * evaluated on only one dataset Minor Comments -------------- * line 44: "cell image segmentation is a difficult task because [...] there is not regularity compared to other datasets such as automatic driving" I would personally argue that the opposite is true * line 420: "menbranes" -> "membranes" * line 437: "firrst" -> "first" * line 487: "We" -> "we" * no "." after "Equation", "Table", or "Figure" * line 71: "we evaluate the proposed method on two kinds of cell image datasets" results are only presented on one dataset Rating: 8: Top 50% of accepted papers, clear accept Confidence: 5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature
\documentclass[runningheads]{llncs} \usepackage{graphicx} \usepackage{url} \usepackage{tikz} \usepackage{comment} \usepackage{amsmath,amssymb} % \usepackage{color} \begin{document} \pagestyle{headings} \mainmatter \title{Feedback Attention for Cell Image Segmentation} % \titlerunning{Feedback Attention for Cell Image Segmentation} \author{Hiroki Tsuda \and Eisuke Shibuya \and Kazuhiro Hotta} \authorrunning{H. Tsuda et al.} \institute{Meijo University, 1-501 Shiogamaguchi, Tempaku-ku, Nagoya 468-8502, Japan \url{http://www1.meijo-u.ac.jp/~kazuhotta/cms_new/} \\ \email{193427019@ccalumni.meijo-u.ac.jp,\\160442066@ccalumni.meijo-u.ac.jp,\\kazuhotta@meijo-u.ac.jp}} \maketitle \begin{abstract} In this paper, we address cell image segmentation task by Feedback Attention mechanism like feedback processing. Unlike conventional neural network models of feedforward processing, we focused on the feedback processing in human brain and assumed that the network learns like a human by connecting feature maps from deep layers to shallow layers. We propose some Feedback Attentions which imitate human brain and feeds back the feature maps of output layer to close layer to the input. U-Net with Feedback Attention showed better result than the conventional methods using only feedforward processing. \keywords{Cell Image, Semantic Segmentation, Attention Mechanism, Feedback Mechanism} \end{abstract} \section{Introduction} \label{sec:intro} Deep neural networks has achieved state-of-the-art performance in image classification~\cite{alexnet}, segmentation~\cite{fcn}, detection~\cite{faster-rcnn}, and tracking~\cite{siamesefc}. Since the advent of AlexNet~\cite{alexnet}, several Convolutional Neural Network (CNN)~\cite{lecun1998gradient} has been proposed such as VGG~\cite{vgg}, ResNet~\cite{resnet}, Deeplabv3+~\cite{deeplabv3plus}, Faster R-CNN~\cite{faster-rcnn}, and Siamese FC~\cite{siamesefc}. These networks are feedfoward processing. Neural network is mathematical model of neurons~\cite{widrow1998perceptrons} that imitate the structure of human brain. Human brain performs not only feedfoward processing from shallow layers to deep layers of neurons, but also feedback processing from deep layers to shallow layers. However, conventional neural networks consist of only feedfoward processing from shallow layers to deep layers, and do not use feedback processing to connect from deep layers to shallow layers. Therefore, in this paper, we propose some Feedback Attention methods using position attention mechanism and feedback process. Semantic segmentation assigns class labels to all pixels in an image. The study of this task can be applied to various fields such as automatic driving \cite{camvid,cordts2016cityscapes}, cartography \cite{ghamisi2014feature,maggiori2016convolutional} and cell biology \cite{sstem,imanishi2018novel,unet}. In particular, cell image segmentation requires better results in order to ensure that cell biologists can perform many experiments at the same time. In addition, overall time and cost savings are expected to be achieved by automated processing without human involvement to reduce human error. Manual segmentation by human experts is slow to process and burdensome, and there is a significant demand for algorithms that can do the segmentation quickly and accurately without human. However, cell image segmentation is a difficult task because the number of supervised images is smaller and there is not regularity compared to the other datasets such as automatic driving. 
A large number of supervised images requires expert labeling which takes a lot of effort, cost and time. Therefore, it is necessary to enhance the segmentation ability for pixel-level recognition with small number of training images. Most of the semantic segmentation approaches are based on Fully Convolutional Network (FCN)~\cite{fcn}. FCN is composed of some convolutional layers and some pooling layers, which does not require some fully connected layers. Convolutional layer and pooling layer reproduce the workings of neurons in the visual cortex. These are proposed in Neocognitron~\cite{fukushima1982neocognitron} which is the predecessor of CNN. Convolutional layer which is called S-cell extracts local features of the input. Pooling layer which is called C-cell compresses the information to enable downsampling to obtain position invariance. Thus, by repeating the feature extraction by convolutional layer and the local position invariance by pooling layer, robust pattern recognition is possible because it can react only to the difference of shape without much influence of misalignment and size change of the input pattern. Only the difference between CNN and Neocognitron is the optimization method, and the basic elements of both are same structure. We focused on the relationship between the feature map close to the input and output of the semantic segmentation, and considered that it is possible to extract effective features by using between the same size and number of channels in the feature maps close to the input and output. In this paper, we create an attention map based on the relationship between these different feature maps, and a new attention mechanism is used to generate segmentation results. We can put long-range dependent spatial information from the output into the feature map of the input. The attention mechanism is fed back into the feature map of the input to create a model that can be reconsidered in based on the output. In experiments, we evaluate the proposed method on a cell image datasets ~\cite{sstem}. We confirmed that the proposed method gave higher accuracy than conventional method. We evaluate our method by some ablation studies and show the effectiveness of our method. This paper is organized as follows. In section~\ref{sec:related}, we describe related works. The details of the proposed method are explained in section~\ref{sec:proposed}. In section~\ref{sec:experments}, we evaluate our proposed method on segmentation of cell images. Finally, we describe conclusion and future works in section~\ref{sec:conclusions}. \section{Related works} \label{sec:related} \subsection{Semantic Segmentation} \label{sec:related:seg} FCNs~\cite{fcn} based methods have achieved significant results for semantic segmentation. The original FCN used stride convolutions and pooling to gradually downsize the feature map, and finally created high-dimensional feature map with low-resolution. This feature map has semantic information but fine information such as fine objects and correct location are lost. Thus, if the upsampling is used at the final layer, the accuracy is not sufficient. Therefore, encoder-decoder structure is usually used in semantic segmentation to obtain a final feature map with high-resolution. It consists of encoder network that extracts features from input image using convolutional layers, pooling layers, and batch normalization layers, and decoder network that classifies the extracted feature map by upsampling, convolutional layers, and batch normalization layers. 
Decoder restores the low-resolution semantic feature map extracted by encoder and middle-level features to the original image to compensate for the lost spatial information, and obtains a feature map with high resolution semantic information. SegNet~\cite{segnet} is a typical network of encoder-decoder structures. Encoder uses 13 layers of VGG16~\cite{vgg}, and decoder receives some indexes selected by max pooling of encoder. In this way, decoder complements the positional information when upsampling and accelerates the calculation by unpooling, which requires no training. Another famous encoder-decoder structural model is U-net~\cite{unet}. The most important characteristic of U-Net is skip connection between encoder and decoder. The feature map with the spatial information of encoder is connected to the restored feature map of the decoder. This complements the high-resolution information and improves the resolution so that labels can be assigned more accurately to each pixel. In addition, deconvolution is used for up-sampling in decoder. \subsection{Attention Mechanism} \label{sec:related:attention} Attention mechanism is an application of the human attention mechanism to machine learning. It has been used in computer vision and natural language processing. In the field of image recognition, important parts or channels are emphasized. Residual Attention Network \cite{wang2017residual} introduced a stack network structure composed of multiple attention components, and attention residual learning applied residual learning \cite{resnet} to the attention mechanism. Squeeze-and-Excitation Network (SENet) \cite{senet} introduced an attention mechanism that adaptively emphasizes important channels in feature maps. Accuracy booster blocks \cite{accuracy-booster} and efficient channel attention module \cite{wang2019eca} made further improvements by changing the fully-connected layer in SENet. Attention Branch Network \cite{fukui2019abn} is Class Activation Mapping (CAM) \cite{cam} based structure to build visual attention maps for image classification. Transformer \cite{transformer} performed language translation only with the attention mechanism. There are Self-Attention that uses the same tensor, and Source-Target-Attention that uses two different tensors. Several networks have been proposed that use Self-Attention to learn the similarity between pixels in feature maps \cite{fu2019dual,huang2019ccnet,stand-alone,wang2018non,sagan}. \subsection{Feedback Mechanism using Recurrent Neural Networks} \label{sec:related:recurrent} Feedback is a fundamental mechanism of the human perceptual system and is expected to develop in the computer vision in the future. There have been several approaches to feedback using recurrent neural networks (RNNs)~\cite{alom2018recurrent,han2018image,zamir2017feedback}. Feedback Network~\cite{zamir2017feedback} uses convLSTM~\cite{xingjian2015convlstm} to acquire hidden states with high-level information and provide feedback with the input image. However, this is intended to solve the image classification task and is not directly applicable to the segmentation task. RU-Net~\cite{alom2018recurrent} consists of a U-Net~\cite{unet} and a recurrent neural network, where each convolutional layer is replaced by recurrent convolutional layer~\cite{liang2015recurrent}. The accumulation of feature information at each scale by the recurrent convolutional layer gives better results than the standard convolutional layer. 
However, this is not strictly feedback but the deepening of network. Feedback U-Net~\cite{shibuya2020feedback} is the segmentation method using convLSTM~\cite{xingjian2015convlstm}. The probability for segmentation at final layer is used as the input image for segmentation at the second round, while the first feature map is used as the hidden state for the second segmentation to provide feedback. Since RNNs is a neural network that contains loop connections, it can be easily used for feedback mechanisms. However, the problem with RNNs is that the amount of operations increases drastically and a lot of memory is consumed, which makes processing difficult and often results in the phenomenon that information is not transmitted. Thus, we applied RNNs-free feedback mechanism to U-Net, and excellent performance is shown by the feedback attention mechanism on the segmentation task. \section{Proposed Method} \label{sec:proposed} This section describes the details of the proposed method. Section~\ref{sec:proposed:details} outlines the network of our method. In section~\ref{sec:proposed:feedback}, we describe the details of the proposed attention mechanism. \begin{figure}[t] \centering \includegraphics[width=\linewidth]{images/proposed_method.png} \caption{Network structure of the proposed method using Feedback Attention} \label{fig:proposed} \end{figure} \subsection{Network Structure Details} \label{sec:proposed:details} The proposed method is based on U-Net~\cite{unet}, which is used as a standard in medical and cell images. Figure~\ref{fig:proposed} shows the detail network structure of our proposed method using U-net. We design to do segmentation twice using U-Net in order to use the feature maps in input and output. Since the proposed method uses the feature maps of input and output, we use the model twice with shared weights. First, we perform segmentation by U-Net to obtain high-resolution important feature maps at the final layer. Then, we connect to Feedback Attention to a feature map that is close to the input with the same size and number of channels as this feature map. In this case, we use the input feature map that was processed two times by convolution. The reason is that a feature map convolved twice can extract more advanced features than a feature map convolved once. The details of Feedback Attention is explained in section~\ref{sec:proposed:feedback}. By applying Attention between the feature maps of input and output, we can obtain an input that takes the output into account as feedback control. In training, U-Net is updated by using only the gradients at the second round using feedback attention. In addition, the loss function is trained using Softmax cross-entropy loss. \begin{figure}[t] \centering \includegraphics[width=\linewidth]{images/Source-Target.png}\\ \text{(a) Source-Target-Attention} \centering \includegraphics[width=\linewidth]{images/Self.png}\\ \text{(b) Self-Attention} \caption{Feedback Attention} \label{fig:feedback attention} \end{figure} \subsection{Feedback Attention} \label{sec:proposed:feedback} We propose two kinds of Feedback Attentions to aggregate feature maps with the shape of $C \times H \times W$. Figure~\ref{fig:feedback attention} (a) shows the Source-Target-Attention method that directly aggregates similar features between the feature maps of input and output. Figure~\ref{fig:feedback attention} (b) shows the self-attention method that performs self-attention for output feature map and finally adds it to the feature map of input. 
Both Feedback Attentions are explained in the following subsections. \subsubsection{Feedback Attention using Source-Target-Attention} \label{sec:proposed:feedback:st} We use Source-Target-Attention to aggregate the correlation between feature maps based on the relationship between input and output. Since the feature map in the final layer close to the output contains all the information for judging, it can be fed back using attention and effectively extract features again from the shallow input layer. We elaborate the process to aggregate each feature map. As shown in Figure~\ref{fig:feedback attention} (a), we feed the feature maps of input or output into $1\times1$ convolutions and batch normalization to generate two new feature maps \textbf{Query} and \textbf{Key}, respectively, we are inspired by Self-Attention GAN (SAGAN) \cite{sagan} to reduce the channel number to $C/8$ for memory efficiency. Then, we reshape them to $C/8 \times (H \times W)$. After we perform a matrix multiplication between the transpose of \textbf{Query} and \textbf{Key}, and we use a softmax function to calculate the attention map. Attention map in vector form is as follows. \begin{equation} w_{ij}=\frac{1}{Z_i}\exp({Query}_{i}^T ~{Key}_{j}), \end{equation} where $w_{ij}$ measures the $i^{th}$ \textbf{Query}'s impact on $j^{th}$ \textbf{Key}. $Z_i$ is the sum of similarity scores as \begin{equation} Z_{i}={\sum_{j=1}^{H \times W} {\exp({Query}_{i}^T ~{Key}_{j})}}, \end{equation} where $H \times W$ is the total number of pixels in \textbf{Query}. By increasing the correlation between two locations, we can create an attention map that takes into account output's feature map. On the other hand, we feed the feature map of output into $1\times1$ convolution and batch normalization to generate a new feature map \textbf{Value} and reshape it to $C/2 \times (H \times W)$. Then, we perform a matrix multiplication between attention map and the transpose of \textbf{Value} and reshape the result to $C/2 \times H \times W$. In addition, we feed the new feature map into $1\times1$ convolution and batch normalization to generate feature map the same size as the feature map of input $C \times H \times W$. Finally, we multiply it by a scale parameter $\alpha$ and perform a element-wise sum operation with the input feature map to obtain the final output as follows. \begin{equation} \label{source-target} A_i=\alpha \sum_{j=1}^{H \times W}{(w_{ij}~ Value_j^T)^T+F_i}, \end{equation} where $\alpha$ is initialized as 0 and gradually learns to assign more weight \cite{sagan}. $A_i$ indicates the feedbacked output and $F_i$ indicates the feature map of the input. By adding $\alpha \sum_{j=1}^{H \times W}(w_{ij}~ Value_j^T)^T$ to the feature map close to input, we can get the feature map considering feature map of output. The new feature map $A_i$ is fed into the network again, and we obtain the segmentation result. From Equation~(\ref{source-target}), it can be inferred that the output $A_i$ is the weighted sum of all positions in output and the feature map of input. Therefore, the segmentation accuracy is improved by transmitting the information of the output to the input. \subsubsection{Feedback Attention using Self-Attention} \label{sec:proposed:feedback:self} In Source-Target-Attention, the feature map between input and output is aggregated. Thus, the relationship between each feature map can be emphasized. 
However, the feature map of the input may not extract enough information and therefore may result in poorly relational coordination. We construct Feedback Attention using Self-Attention that aggregates only the feature map of output. The structure is shown in Figure~\ref{fig:feedback attention} (b). We feed the feature maps of output into $1\times1$ convolution and batch normalization to generate new feature maps \textbf{Query}, \textbf{Key} and \textbf{Value}. This is similar to Source-Target-Attention. We also reshape \textbf{Query} and \textbf{Key} to $C/8 \times (H \times W)$. Then, we perform a matrix multiplication between the transpose of \textbf{Query} and \textbf{Key}, and use a softmax function to calculate the attention map. Attention map in vector form is as follows. \begin{equation} w_{pq}=\frac{\exp({Query}_{p}^T ~{Key}_{q})}{\sum_{q=1}^{H \times W} {\exp({Query}_{p}^T~{Key}_{q})}}, \end{equation} where $w_{pq}$ measures the $p^{th}$ \textbf{Query}'s impact on $q^{th}$ \textbf{Key}. We reshape \textbf{Value} to $C/2 \times (H \times W)$. Then, we perform a matrix multiplication between attention map and the transpose of \textbf{Value} and reshape the result to $C \times H \times W$ after $1 \times 1$ convolution. Finally, we multiply it by a scale parameter $\beta$ and perform a element-wise sum operation with the feature maps of input to obtain the final output as follows. \begin{equation} \label{self} A_p=\beta \sum_{q=1}^{H \times W}{(w_{pq}~ Value_q^T)^T+F_p}, \end{equation} where $\beta$ is initialized as 0 and gradually learns to assign more weight \cite{sagan}. $A_p$ indicates the output, $F_p$ indicates the feature map of input. New feature map $A_p$ is fed into the network again, and we obtain the segmentation result. Unlike Equation~(\ref{source-target}), Equation~(\ref{self}) calculates the similarity using only the information of output. In addition, consistency can be improved because information can be selectively passed to the input by the scale parameter. \begin{figure}[t] \centering \begin{tabular}{c} \begin{minipage}[t]{0.19\hsize} \centering \includegraphics[width=\linewidth]{drosophila/drosophila_input.png} \text{Input image} \end{minipage} \begin{minipage}[t]{0.19\hsize} \centering \includegraphics[width=\linewidth]{drosophila/drosophila_GT.png} \text{Ground truth} \end{minipage} \begin{minipage}[t]{0.19\hsize} \centering \includegraphics[width=\linewidth]{drosophila/drosophila_unet.png} \text{U-Net\cite{unet}} \end{minipage} \begin{minipage}[t]{0.19\hsize} \centering \includegraphics[width=\linewidth]{drosophila/drosophila_feedback_st.png} \text{Feedback}\\ \text{Attention(ST)} \end{minipage} \begin{minipage}[t]{0.19\hsize} \centering \includegraphics[width=\linewidth]{drosophila/drosophila_feedback_self.png} \text{Feedback}\\ \text{Attention(Self)} \end{minipage} \end{tabular} \caption{Examples of segmentation results on ssTEM dataset. ST indicates Source-Target-Attention, Self indicates Self-Attention.} \label{fig:sstem} \end{figure} \begin{table}[t] \centering \caption{Segmentation accuracy (IoU and mIoU) on ssTEM Dataset. 
ST indicates Source-Target-Attention, Self indicates Self-Attention.} \label{table:sstem} \begin{tabular}{l|ccccc} \hline Method & Membrane & Mitochondria & Synapse & Cytoplasm & Mean IoU\% \\ \hline \hline U-Net\cite{unet} & 74.24 & 71.01 & 43.08 & 92.03 & 70.09 \\ RU-Net\cite{alom2018recurrent} & 75.81 & 74.39 & 43.26 & 92.25 & 71.43 \\ Feedback\\U-Net\cite{shibuya2020feedback} & 76.44 & 75.20 & 42.30 & 92.43 & 71.59 \\ Feedback\\Attention(ST) & {\textbf{76.65}} & {\textbf{78.27}} & {\textbf{43.32}} & {\textbf{92.64}} & {\textbf{72.72}} \\ Feedback\\Attention(Self) & {\color{red} \textbf{76.94}} & {\color{red} \textbf{79.52}} & {\color{red} \textbf{45.29}} & {\color{red} \textbf{92.80}} & {\color{red} \textbf{73.64}} \\\hline \end{tabular} \end{table} \section{Experiments} \label{sec:experments} This section shows evaluation results by the proposed method. We explain the datasets used in experiments in section~\ref{sec:experments:dataset}. Experimental results are shown in section~\ref{sec:experments:results}. Finally, section~\ref{sec:experments:ablation studies} describes Ablation studies to demonstrate the effectiveness of the proposed method. \subsection{Dataset} \label{sec:experments:dataset} In experiments, we evaluated all methods 15 times with 5-fold cross-validation using three kinds of initial values on the Drosophila cell image data set \cite{sstem}. We use Intersection over Union (IoU) as evaluation measure. Average IoU of 15 times evaluations is used as final measure. This dataset shows neural tissue from a Drosophila larva ventral nerve cord and was acquired using serial section Transmission Electron Microscopy at HHMI Janelia Research Campus \cite{sstem}. This dataset is called ssTEM dataset. There are 20 images of $1024 \times 1024$ pixels and ground truth. In this experiment, semantic segmentation is performed for four classes; membrane, mitochondria, synapses and cytoplasm. We augmented 20 images to 320 images by cropping 16 regions of $256 \times 256$ pixels without overlap from an image. We divided those images into 192 training, 48 validation and 80 test images. \begin{figure}[t] \centering \begin{tabular}{c} \begin{minipage}[t]{0.24\hsize} \centering \includegraphics[width=\linewidth]{attention/image_in.png} \text{Input image} \end{minipage} \begin{minipage}[t]{0.24\hsize} \centering \includegraphics[width=\linewidth]{attention/image_gen_st.png} \text{Output image(ST)} \end{minipage} \begin{minipage}[t]{0.24\hsize} \centering \includegraphics[width=\linewidth]{attention/image_st_attention_mem.png} \text{Attention map}\\ \text{Membrane(ST)} \end{minipage} \begin{minipage}[t]{0.24\hsize} \centering \includegraphics[width=\linewidth]{attention/image_st_attention_cyt.png} \text{Attention map}\\ \text{Cytoplasm(ST)} \end{minipage} \\ \begin{minipage}[t]{0.24\hsize} \centering \includegraphics[width=\linewidth]{attention/mCherry0305.png} \text{Ground Truth} \end{minipage} \begin{minipage}[t]{0.24\hsize} \centering \includegraphics[width=\linewidth]{attention/image_gen_self.png} \text{Output image(Self)} \end{minipage} \begin{minipage}[t]{0.24\hsize} \centering \includegraphics[width=\linewidth]{attention/image_self_attention_mem.png} \text{Attention map}\\ \text{Membrane(Self)} \end{minipage} \begin{minipage}[t]{0.24\hsize} \centering \includegraphics[width=\linewidth]{attention/image_self_attention_cyt.png} \text{Attention map}\\ \text{Cytoplasm(Self)} \end{minipage} \end{tabular} \caption{Visualization results of Attention Map on ssTEM dataset. 
ST indicates Source-Target-Attention, Self indicates Self-Attention.} \label{fig:attention maps} \end{figure} \subsection{Experimental Results} \label{sec:experments:results} Table~\ref{table:sstem} shows the accuracy on ssTEM dataset, and Figure~\ref{fig:sstem} shows the segmentation results. Bold red letters in the Table represent the best IoU and black bold letters represent the second best IoU. Table~\ref{table:sstem} shows that our proposed Feedback Attention improved the accuracy of all classes compared to conventional U-Net~\cite{unet}. We also evaluated two feedback methods using RNNs; RU-Net~\cite{alom2018recurrent} with recurrent convolution applied to U-Net and Feedback U-Net~\cite{shibuya2020feedback} with feedback segmentation applied to U-Net. The result shows that the proposed method gave high accuracy in all classes. In addition, we can see that Self-Attention, which calculates the similarity in the output, is more accurate than Source-Target-Attention which calculates the similarity from the relationship between the input and the output. This indicates that the feature map of the input does not extract enough features and therefore the similarity representation between the input and the output does not work well. From the yellow frames in Figure~\ref{fig:sstem}, our method using Feedback Attention can identify mitochondria that were detected excessively by conventional methods. In the conventional methods, cell membranes were interrupted, but in our proposed method, we confirm that cell membranes are segmented in such a way that they are cleanly connected. Experimental results show that cell membrane and the mitochondria have been successfully identified even in places where it is difficult to detect by conventional methods. We visualize some attention maps in Figure~\ref{fig:attention maps} to understand our two kinds of Feedback Attentions. White indicates similarity and black indicates dissimilarity. We find that Self-Attention maps has many similar pixels but Source-Target-Attention maps has fewer pixels. This is because Source-Target-Attention uses the feature maps of input and output, and the feature map near input is different from that of output, so the number of similar pixels are smaller than Self-Attention map. However, the membranes and cytoplasm have different values in the attention map. This means that they are emphasized as different objects. On the other hand, Self-Attention generates attention maps from only the feature map of output. Therefore, as shown in the Figure~\ref{fig:attention maps}, when cell membrane and cytoplasm are selected, they are highlighted as similar pixels. 
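To complement section~\ref{sec:proposed:feedback} and the attention maps in Figure~\ref{fig:attention maps}, the following PyTorch-style sketch shows how the Self-Attention variant of Feedback Attention can be realized, following Equation~(\ref{self}) and the channel reductions to $C/8$ and $C/2$. It is an illustrative reconstruction rather than the exact training code, and the class and argument names are assumptions for illustration only.
\begin{verbatim}
import torch
import torch.nn as nn

class FeedbackSelfAttention(nn.Module):
    """Sketch of the Self-Attention variant of Feedback Attention.
    Query/Key use C/8 channels, Value uses C/2 channels (following SAGAN);
    the aggregated result is projected back to C channels, scaled by a
    parameter initialized to zero, and added to the input-side feature map."""

    def __init__(self, channels):
        super().__init__()
        def conv_bn(c_in, c_out):
            return nn.Sequential(nn.Conv2d(c_in, c_out, 1),
                                 nn.BatchNorm2d(c_out))
        self.query = conv_bn(channels, channels // 8)
        self.key   = conv_bn(channels, channels // 8)
        self.value = conv_bn(channels, channels // 2)
        self.out   = conv_bn(channels // 2, channels)
        self.beta  = nn.Parameter(torch.zeros(1))   # starts at 0, learned

    def forward(self, feat_in, feat_out):
        # feat_in: early feature map F (B, C, H, W); feat_out: final-layer map
        B, C, H, W = feat_out.shape
        q = self.query(feat_out).flatten(2)                  # (B, C/8, HW)
        k = self.key(feat_out).flatten(2)                    # (B, C/8, HW)
        v = self.value(feat_out).flatten(2)                  # (B, C/2, HW)
        attn = torch.softmax(q.transpose(1, 2) @ k, dim=-1)  # w_pq, (B, HW, HW)
        agg = v @ attn.transpose(1, 2)                       # sum_q w_pq * Value_q
        agg = self.out(agg.view(B, C // 2, H, W))
        return self.beta * agg + feat_in   # A_p = beta * sum_q w_pq Value_q + F_p
\end{verbatim}
For the Source-Target variant, the only change would be to compute \texttt{q} from the input-side feature map instead of the output-side one, while \texttt{k} and \texttt{v} remain computed from the output-side feature map.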
\begin{table}[t] \centering \caption{Comparison of different feedback connections.} \label{table:connection} \begin{tabular}{l|ccccc} \hline Method & Membrane & Mitochondria & Synapse & Cytoplasm & Mean IoU\% \\ \hline \hline Add & 75.56 & 77.36 & 41.84 & 92.46 & 71.81 \\ 1$\times$1 Conv & 75.22 & \textbf{78.39} & \textbf{43.46} & 92.49 & 72.39 \\ SE-Net\cite{senet} & 75.89 & 77.31 & 42.92 & 92.49 & 72.15 \\ Light Attention\cite{hiramatsu2020semantic} & 76.20 & 78.27 & 43.18 & 92.57 & 72.56 \\ Feedback\\Attention(ST) & \textbf{76.65} & 78.27 & 43.32 & \textbf{92.64} & \textbf{72.72} \\ Feedback\\Attention(Self) & {\color{red} \textbf{76.94}} & {\color{red} \textbf{79.52}} & {\color{red} \textbf{45.29}} & {\color{red} \textbf{92.80}} & {\color{red} \textbf{73.64}} \\ \hline \end{tabular} \end{table} \subsection{Ablation Studies} \label{sec:experments:ablation studies} We performed three ablation studies to show the effectiveness of the proposed method. The first ablation study evaluated the different connection methods. The second ablation study confirmed the effectiveness of connection location from the output to the input. The last ablation study confirmed the effectiveness of before and after Feedback Attention was used. \subsubsection{Comparison of difference feedback connection} The effectiveness of the other feedback connection methods from the output to the input was experimentally confirmed. We compare four methods. We compare two methods that do not use the attention mechanism. The first one is that we simply add the feature map in the output to the input. The second one is that we feed the feature map in the output to $1 \times 1$ convolution and then add it to the feature map in the input. Both methods use scale parameter as our propose method. In addition, we compare two methods using attention mechanism. The first one is that we apply SE-Net~\cite{senet}, which suppresses and emphasizes the feature map between channels, to the output feature map, and add it to the input feature map. The second one is that we apply Light Attention~\cite{hiramatsu2020semantic}, which suppresses and emphasizes the important locations and channels in feature map by $3 \times 3$ convolutional processing, to the output feature map and adding it to the input feature map. From Table~\ref{table:connection}, we can see that the above four methods improve the accuracy from U-Net~\cite{unet} because the feedback mechanism is effective. However, our proposed method is more accurate than those four methods. This shows that our proposed Feedback Attention can use the output's information effectively in the input. 
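To make the two-round scheme of section~\ref{sec:proposed:details} concrete, the following sketch shows how the shared-weight U-Net can be run twice, with Feedback Attention injecting the first-round output features into the early feature map, and with gradients taken only at the second round. The \texttt{return\_features} and \texttt{inject\_early} arguments denote a hypothetical U-Net interface assumed purely for illustration; they are not part of any released code.
\begin{verbatim}
import torch
import torch.nn as nn

def feedback_forward(unet, feedback_attention, image):
    """Two-round forward pass with shared U-Net weights (illustrative only).
    `unet(image, return_features=True, ...)` is assumed to return the logits,
    the early feature map after the first two convolutions, and the
    final-layer feature map of matching shape."""
    with torch.no_grad():                  # gradients only at the second round
        _, feat_early, feat_final = unet(image, return_features=True)
    # feedback_attention: e.g., an instance of the FeedbackSelfAttention sketch
    feat_fb = feedback_attention(feat_early, feat_final)
    logits, _, _ = unet(image, return_features=True, inject_early=feat_fb)
    return logits

# Training step sketch: softmax cross-entropy on the second-round output.
# criterion = nn.CrossEntropyLoss()
# loss = criterion(feedback_forward(unet, fb_attn, images), labels)
# loss.backward(); optimizer.step()
\end{verbatim}
Because the first round runs under \texttt{torch.no\_grad()}, only the second-round computation contributes gradients, matching the training procedure described in section~\ref{sec:proposed:details}.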
\begin{table}[t] \centering \caption{Comparison between different connection locations.} \label{table:location} \begin{tabular}{lccccc} \hline \multicolumn{1}{l|}{Method} & Membrane & Mitochondria & Synapse & Cytoplasm & Mean IoU\% \\ \hline \hline \multicolumn{6}{c}{Feedback Attention using Source-Target-Attention} \\ \hline \multicolumn{1}{l|}{One conv} & 76.54 & 77.39 & 43.06 & 91.96 & 72.24 \\ \multicolumn{1}{l|}{Two conv(Ours)} & 76.65 & 78.27 & 43.32 & 92.64 & 72.72 \\ \hline \multicolumn{6}{c}{Feedback Attention using Self-Attention} \\ \hline \multicolumn{1}{l|}{One conv} & \textbf{76.69} & \textbf{78.73} & \textbf{45.23} & \textbf{92.66} & \textbf{73.33} \\ \multicolumn{1}{l|}{Two conv(Ours)} & {\color{red} \textbf{76.94}} & {\color{red} \textbf{79.52}} & {\color{red} \textbf{45.29}} & {\color{red} \textbf{92.80}} & {\color{red} \textbf{73.64}} \\ \hline \end{tabular} \end{table} \begin{table}[t] \centering \caption{Comparison before and after Feedback Attention.} \label{table:bafore_after} \begin{tabular}{lccccc} \hline \multicolumn{1}{l|}{Method} & Membrane & Mitochondria & Synapse & Cytoplasm & Mean IoU\% \\ \hline \hline \multicolumn{6}{c}{Feedback Attention using Source-Target-Attention} \\ \hline \multicolumn{1}{l|}{First output} & 76.07 & 76.76 & 41.28 & 92.39 & 71.62 \\ \multicolumn{1}{l|}{Second output(Ours)} & \textbf{76.65} & \textbf{78.27} & \textbf{43.32} & \textbf{92.64} & \textbf{72.72} \\ \hline \multicolumn{6}{c}{Feedback Attention using Self-Attention} \\ \hline \multicolumn{1}{l|}{First output} & 75.49 & 74.29 & 41.57 & 92.03 & 70.84 \\ \multicolumn{1}{l|}{Second output(Ours)} & {\color{red} \textbf{76.94}} & {\color{red} \textbf{79.52}} & {\color{red} \textbf{45.29}} & {\color{red} \textbf{92.80}} & {\color{red} \textbf{73.64}} \\ \hline \end{tabular} \end{table} \subsubsection{Comparison between different connection locations} We experimentally evaluated the location of the input feature map which is the destination of feedback. Since the size of feature map should be the same as final layer, the candidates are only two layers close to input. The first one is the feature map closest to the input which is obtained by only one convolution process. The other one is the feature map obtained after convolution is performed two times. We compared the two feature map locations that we use Feedback Attention. Table~\ref{table:location} shows that the Feedback Attention to the feature map after two convolution process is better for both Source-Target-Attention and Self-Attention. This indicates that only one convolution process does not extract good features than two convolution processes. \subsubsection{Comparison before and after Feedback Attention} When we use Feedback Attention, the output of network is feedback to input as attention. Thus, we get the outputs twice. Although we use the output using Feedback Attention at the second round is used as final result, we compare the results of the outputs at the first and second rounds to show the effectiveness of Feedback Attention. From Table~\ref{table:bafore_after}, the output using Feedback Attention as the second round is better than that at the first round. This demonstrates that the accuracy was improved through the feedback mechanism. \section{Conclusions} \label{sec:conclusions} In this paper, we have proposed two Feedback Attention for cell image segmentation. 
Feedback Attention allows us to take advantage of the feature map information of the output and improve the accuracy of the segmentation, and segmentation accuracy is improved in comparison with conventional feedforward network, RU-Net~\cite{alom2018recurrent} which uses local feedback at each convolutional layer and Feedback U-Net~\cite{shibuya2020feedback} which uses global feedback between input and output. Ablation studies show that Feedback Attention can obtain accurate segmentation results by choosing the location and attention mechanism that conveys the output information. In the future, we aim to develop a top-down attention mechanism that directly utilizes ground truth, such as self-distillation~\cite{zhang2019your}. Feedback networks are also categorized as a kind of top-down networks, because the representation of feature extraction will be expanded if the ground truth can be used for direct learning in the middle layer as well. In addition, Reformer~\cite{reformer} using Locality Sensitive Hashing has been proposed in recent years. Since Transformer-based Attention uses a lot of memory, Reformer will work well in our Feedback Attention. These are subjects for future works. \bibliographystyle{splncs04} \bibliography{egbib} \end{document}
https://openreview.net/forum?id=7G1GGjdzrde
https://arxiv.org/abs/2008.06474
Please evaluate the proposed method in our paper, including the clarity of the technical description, the originality of the approach, the significance of the results, and any potential limitations or improvements.
Thoroughly-investigated addition of feedback attention to U-Net segmentation ### Quality This work approaches the addition of a feedback attention mechanism into a U-Net based segmentation method with commendable rigour. While the authors only demonstrate their method on one dataset, they do so in a methodical manner whereby they carefully validate what sort of feedback works best, and also offer explanations as to why this is the case. The authors also validate which layers of the network work most effectively for their chosen network architecture, and that the improvement in segmentation performance is indeed due to the addition of feedback attention rather than just from running data through the U-Net twice. The figures are generally good quality. The attention maps in Figure 4 are particularly striking and are a good demonstration of the difference between the two proposed feedback attention methods introduced here. I would have been very interested to see the authors look more closely at the attention maps and performance for the mitochondria and synapse classes, as these classes display the most striking segmentation accuracy improvement over the other methods compared. It would also have been interesting for the authors to have discussed cross-applicability of this approach to other microscopy modalities, for example digital pathology data. ### Clarity It took me a few reads of the paper in order to sufficiently understand the method, and there are quite a lot of grammatical errors and typos throughout. I personally found that the Query/Key/Value terminology was quite clunky and detracted from my ability to understand section 3.2. For example, the phrase ‘the $i$th Query’s impact’ (line 255) threw me for a while, as I was unclear which dimension (C, H, W) $i$ was indexing. This may well be my own unfamiliarity with the field, but being slightly more explicit in the explanation of the indices would have helped me understand much quicker. It also felt like there was too much unnecessary repetition between the explanations of the Source-Target-Attention and Self-Attention methods. Again, if I understand correctly, the Source-Target-Attention and Self-Attention methods are identical except for the ‘Query’ being identical to the ‘Key’ in the latter case? If so, this could have been displayed in a more compact mathematical way. ### Originality This work appears to be the first instance in which attention is integrated into a U-Net via a feedback mechanism. I am not an expert in this field, but a cursory literature search retrieved a [paper discussing the use of attention in medical image segmentation via attention gates](https://arxiv.org/abs/1804.03999) – perhaps this should have been mentioned as ‘Related Work’ ### Significance High-quality semantic segmentation is an undoubtedly significant and impactful challenge for microscopy. The technical merit of this work has been made clear, but I think that the authors could have expanded on the significance of the improved performance in the field of cell imaging. For example, the conclusion does not mention the application at all, and dwells more on further technical adaptations, which feels a little short-sighted. For that reason I also have concerns regarding deployment of this technique and ensuring that biologists are actually able to reap the benefits for their research. 
### Pros * Excellent technical rigour demonstrated in investigating novel method through ablation studies * Significant increase in segmentation performance achieved with this method ### Cons * The sections describing the generation of attention maps were quite difficult to follow and understand * While the performance is very good, the authors do not discuss a route by which this method can actually be used for the benefit of the application demonstrated. Rating: 8: Top 50% of accepted papers, clear accept Confidence: 3: The reviewer is fairly confident that the evaluation is correct
\documentclass[runningheads]{llncs} \usepackage{graphicx} \usepackage{url} \usepackage{tikz} \usepackage{comment} \usepackage{amsmath,amssymb} % \usepackage{color} \begin{document} \pagestyle{headings} \mainmatter \title{Feedback Attention for Cell Image Segmentation} % \titlerunning{Feedback Attention for Cell Image Segmentation} \author{Hiroki Tsuda \and Eisuke Shibuya \and Kazuhiro Hotta} \authorrunning{H. Tsuda et al.} \institute{Meijo University, 1-501 Shiogamaguchi, Tempaku-ku, Nagoya 468-8502, Japan \url{http://www1.meijo-u.ac.jp/~kazuhotta/cms_new/} \\ \email{193427019@ccalumni.meijo-u.ac.jp,\\160442066@ccalumni.meijo-u.ac.jp,\\kazuhotta@meijo-u.ac.jp}} \maketitle \begin{abstract} In this paper, we address cell image segmentation task by Feedback Attention mechanism like feedback processing. Unlike conventional neural network models of feedforward processing, we focused on the feedback processing in human brain and assumed that the network learns like a human by connecting feature maps from deep layers to shallow layers. We propose some Feedback Attentions which imitate human brain and feeds back the feature maps of output layer to close layer to the input. U-Net with Feedback Attention showed better result than the conventional methods using only feedforward processing. \keywords{Cell Image, Semantic Segmentation, Attention Mechanism, Feedback Mechanism} \end{abstract} \section{Introduction} \label{sec:intro} Deep neural networks has achieved state-of-the-art performance in image classification~\cite{alexnet}, segmentation~\cite{fcn}, detection~\cite{faster-rcnn}, and tracking~\cite{siamesefc}. Since the advent of AlexNet~\cite{alexnet}, several Convolutional Neural Network (CNN)~\cite{lecun1998gradient} has been proposed such as VGG~\cite{vgg}, ResNet~\cite{resnet}, Deeplabv3+~\cite{deeplabv3plus}, Faster R-CNN~\cite{faster-rcnn}, and Siamese FC~\cite{siamesefc}. These networks are feedfoward processing. Neural network is mathematical model of neurons~\cite{widrow1998perceptrons} that imitate the structure of human brain. Human brain performs not only feedfoward processing from shallow layers to deep layers of neurons, but also feedback processing from deep layers to shallow layers. However, conventional neural networks consist of only feedfoward processing from shallow layers to deep layers, and do not use feedback processing to connect from deep layers to shallow layers. Therefore, in this paper, we propose some Feedback Attention methods using position attention mechanism and feedback process. Semantic segmentation assigns class labels to all pixels in an image. The study of this task can be applied to various fields such as automatic driving \cite{camvid,cordts2016cityscapes}, cartography \cite{ghamisi2014feature,maggiori2016convolutional} and cell biology \cite{sstem,imanishi2018novel,unet}. In particular, cell image segmentation requires better results in order to ensure that cell biologists can perform many experiments at the same time. In addition, overall time and cost savings are expected to be achieved by automated processing without human involvement to reduce human error. Manual segmentation by human experts is slow to process and burdensome, and there is a significant demand for algorithms that can do the segmentation quickly and accurately without human. However, cell image segmentation is a difficult task because the number of supervised images is smaller and there is not regularity compared to the other datasets such as automatic driving. 
A large number of supervised images requires expert labeling which takes a lot of effort, cost and time. Therefore, it is necessary to enhance the segmentation ability for pixel-level recognition with small number of training images. Most of the semantic segmentation approaches are based on Fully Convolutional Network (FCN)~\cite{fcn}. FCN is composed of some convolutional layers and some pooling layers, which does not require some fully connected layers. Convolutional layer and pooling layer reproduce the workings of neurons in the visual cortex. These are proposed in Neocognitron~\cite{fukushima1982neocognitron} which is the predecessor of CNN. Convolutional layer which is called S-cell extracts local features of the input. Pooling layer which is called C-cell compresses the information to enable downsampling to obtain position invariance. Thus, by repeating the feature extraction by convolutional layer and the local position invariance by pooling layer, robust pattern recognition is possible because it can react only to the difference of shape without much influence of misalignment and size change of the input pattern. Only the difference between CNN and Neocognitron is the optimization method, and the basic elements of both are same structure. We focused on the relationship between the feature map close to the input and output of the semantic segmentation, and considered that it is possible to extract effective features by using between the same size and number of channels in the feature maps close to the input and output. In this paper, we create an attention map based on the relationship between these different feature maps, and a new attention mechanism is used to generate segmentation results. We can put long-range dependent spatial information from the output into the feature map of the input. The attention mechanism is fed back into the feature map of the input to create a model that can be reconsidered in based on the output. In experiments, we evaluate the proposed method on a cell image datasets ~\cite{sstem}. We confirmed that the proposed method gave higher accuracy than conventional method. We evaluate our method by some ablation studies and show the effectiveness of our method. This paper is organized as follows. In section~\ref{sec:related}, we describe related works. The details of the proposed method are explained in section~\ref{sec:proposed}. In section~\ref{sec:experments}, we evaluate our proposed method on segmentation of cell images. Finally, we describe conclusion and future works in section~\ref{sec:conclusions}. \section{Related works} \label{sec:related} \subsection{Semantic Segmentation} \label{sec:related:seg} FCNs~\cite{fcn} based methods have achieved significant results for semantic segmentation. The original FCN used stride convolutions and pooling to gradually downsize the feature map, and finally created high-dimensional feature map with low-resolution. This feature map has semantic information but fine information such as fine objects and correct location are lost. Thus, if the upsampling is used at the final layer, the accuracy is not sufficient. Therefore, encoder-decoder structure is usually used in semantic segmentation to obtain a final feature map with high-resolution. It consists of encoder network that extracts features from input image using convolutional layers, pooling layers, and batch normalization layers, and decoder network that classifies the extracted feature map by upsampling, convolutional layers, and batch normalization layers. 
Decoder restores the low-resolution semantic feature map extracted by encoder and middle-level features to the original image to compensate for the lost spatial information, and obtains a feature map with high resolution semantic information. SegNet~\cite{segnet} is a typical network of encoder-decoder structures. Encoder uses 13 layers of VGG16~\cite{vgg}, and decoder receives some indexes selected by max pooling of encoder. In this way, decoder complements the positional information when upsampling and accelerates the calculation by unpooling, which requires no training. Another famous encoder-decoder structural model is U-net~\cite{unet}. The most important characteristic of U-Net is skip connection between encoder and decoder. The feature map with the spatial information of encoder is connected to the restored feature map of the decoder. This complements the high-resolution information and improves the resolution so that labels can be assigned more accurately to each pixel. In addition, deconvolution is used for up-sampling in decoder. \subsection{Attention Mechanism} \label{sec:related:attention} Attention mechanism is an application of the human attention mechanism to machine learning. It has been used in computer vision and natural language processing. In the field of image recognition, important parts or channels are emphasized. Residual Attention Network \cite{wang2017residual} introduced a stack network structure composed of multiple attention components, and attention residual learning applied residual learning \cite{resnet} to the attention mechanism. Squeeze-and-Excitation Network (SENet) \cite{senet} introduced an attention mechanism that adaptively emphasizes important channels in feature maps. Accuracy booster blocks \cite{accuracy-booster} and efficient channel attention module \cite{wang2019eca} made further improvements by changing the fully-connected layer in SENet. Attention Branch Network \cite{fukui2019abn} is Class Activation Mapping (CAM) \cite{cam} based structure to build visual attention maps for image classification. Transformer \cite{transformer} performed language translation only with the attention mechanism. There are Self-Attention that uses the same tensor, and Source-Target-Attention that uses two different tensors. Several networks have been proposed that use Self-Attention to learn the similarity between pixels in feature maps \cite{fu2019dual,huang2019ccnet,stand-alone,wang2018non,sagan}. \subsection{Feedback Mechanism using Recurrent Neural Networks} \label{sec:related:recurrent} Feedback is a fundamental mechanism of the human perceptual system and is expected to develop in the computer vision in the future. There have been several approaches to feedback using recurrent neural networks (RNNs)~\cite{alom2018recurrent,han2018image,zamir2017feedback}. Feedback Network~\cite{zamir2017feedback} uses convLSTM~\cite{xingjian2015convlstm} to acquire hidden states with high-level information and provide feedback with the input image. However, this is intended to solve the image classification task and is not directly applicable to the segmentation task. RU-Net~\cite{alom2018recurrent} consists of a U-Net~\cite{unet} and a recurrent neural network, where each convolutional layer is replaced by recurrent convolutional layer~\cite{liang2015recurrent}. The accumulation of feature information at each scale by the recurrent convolutional layer gives better results than the standard convolutional layer. 
However, this is not strictly feedback but rather a deepening of the network. Feedback U-Net~\cite{shibuya2020feedback} is a segmentation method using convLSTM~\cite{xingjian2015convlstm}. The class probabilities at the final layer are used as the input image for a second round of segmentation, while the feature map from the first round is used as the hidden state of the second round to provide feedback. Since RNNs contain loop connections, they can easily be used to build feedback mechanisms. However, the amount of computation increases drastically and a lot of memory is consumed, which makes processing difficult and often causes information to be lost. Thus, we apply an RNN-free feedback mechanism to U-Net and show that the proposed feedback attention mechanism achieves excellent performance on the segmentation task.
\section{Proposed Method} \label{sec:proposed} This section describes the details of the proposed method. Section~\ref{sec:proposed:details} outlines the network of our method. In section~\ref{sec:proposed:feedback}, we describe the details of the proposed attention mechanism. \begin{figure}[t] \centering \includegraphics[width=\linewidth]{images/proposed_method.png} \caption{Network structure of the proposed method using Feedback Attention} \label{fig:proposed} \end{figure}
\subsection{Network Structure Details} \label{sec:proposed:details} The proposed method is based on U-Net~\cite{unet}, which is a standard architecture for medical and cell images. Figure~\ref{fig:proposed} shows the detailed network structure of our proposed method. We perform segmentation twice with U-Net in order to use both the input-side and output-side feature maps; since both passes use the same model, the weights are shared. First, we perform segmentation with U-Net to obtain an important high-resolution feature map at the final layer. Then, we connect Feedback Attention between this feature map and an input-side feature map of the same size and number of channels. Specifically, we use the input feature map obtained after two convolutions, because a feature map convolved twice extracts more advanced features than a feature map convolved only once. The details of Feedback Attention are explained in section~\ref{sec:proposed:feedback}. By applying attention between the input and output feature maps, we obtain an input that takes the output into account, in the manner of feedback control. During training, U-Net is updated using only the gradients of the second round, which uses Feedback Attention, and the network is trained with the softmax cross-entropy loss.
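To make the two-pass scheme concrete, the following simplified PyTorch-style sketch illustrates one possible training step; it is an illustrative interpretation rather than our released code, and the arguments \texttt{return\_features} and \texttt{feedback} are hypothetical interfaces for extracting and injecting the relevant feature maps.
\begin{verbatim}
# Simplified sketch of the two-pass training scheme (illustrative only).
# The same U-Net is run twice with shared weights; Feedback Attention fuses
# the first-pass output feature map into the input-side feature map, and
# only the second pass contributes gradients to the update.
import torch
import torch.nn.functional as F

def train_step(unet, feedback_attention, images, labels, optimizer):
    # First pass: obtain the input-side feature map (after two convolutions)
    # and the high-resolution feature map at the final layer. Running it
    # without gradient tracking is one way to restrict updates to the
    # second round. 'return_features=True' is a hypothetical interface.
    with torch.no_grad():
        input_feat, output_feat, _ = unet(images, return_features=True)

    # Feedback Attention between the output and input feature maps.
    fed_back = feedback_attention(input_feat, output_feat)

    # Second pass: the fed-back map is injected at the corresponding layer
    # (hypothetical 'feedback' argument); gradients flow through this pass.
    logits = unet(images, feedback=fed_back)

    loss = F.cross_entropy(logits, labels)  # softmax cross-entropy loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
\end{verbatim}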
\begin{figure}[t] \centering \includegraphics[width=\linewidth]{images/Source-Target.png}\\ \text{(a) Source-Target-Attention} \centering \includegraphics[width=\linewidth]{images/Self.png}\\ \text{(b) Self-Attention} \caption{Feedback Attention} \label{fig:feedback attention} \end{figure}
\subsection{Feedback Attention} \label{sec:proposed:feedback} We propose two kinds of Feedback Attention to aggregate feature maps of shape $C \times H \times W$. Figure~\ref{fig:feedback attention} (a) shows the Source-Target-Attention variant, which directly aggregates similar features between the input and output feature maps. Figure~\ref{fig:feedback attention} (b) shows the Self-Attention variant, which performs self-attention on the output feature map and finally adds the result to the input feature map. Both Feedback Attention mechanisms are explained in the following subsections.
\subsubsection{Feedback Attention using Source-Target-Attention} \label{sec:proposed:feedback:st} We use Source-Target-Attention to aggregate the correlation between feature maps based on the relationship between input and output. Since the feature map in the final layer, close to the output, contains all the information used for the prediction, it can be fed back through attention so that features are effectively extracted again from the shallow input layer. We now describe how the feature maps are aggregated. As shown in Figure~\ref{fig:feedback attention} (a), we feed the input and output feature maps into $1\times1$ convolutions with batch normalization to generate two new feature maps, \textbf{Query} and \textbf{Key}, respectively. Following Self-Attention GAN (SAGAN)~\cite{sagan}, we reduce the number of channels to $C/8$ for memory efficiency. We then reshape both maps to $C/8 \times (H \times W)$, perform a matrix multiplication between the transpose of \textbf{Query} and \textbf{Key}, and apply a softmax function to calculate the attention map. In vector form, the attention map is \begin{equation} w_{ij}=\frac{1}{Z_i}\exp({Query}_{i}^T ~{Key}_{j}), \end{equation} where $w_{ij}$ measures the impact of the $i^{th}$ \textbf{Query} on the $j^{th}$ \textbf{Key}, and $Z_i$ is the sum of similarity scores, \begin{equation} Z_{i}={\sum_{j=1}^{H \times W} {\exp({Query}_{i}^T ~{Key}_{j})}}, \end{equation} where $H \times W$ is the total number of pixels in \textbf{Query}. By increasing the correlation between two locations, we create an attention map that takes the output feature map into account. In parallel, we feed the output feature map into a $1\times1$ convolution with batch normalization to generate a new feature map \textbf{Value} and reshape it to $C/2 \times (H \times W)$. We then perform a matrix multiplication between the attention map and the transpose of \textbf{Value} and reshape the result to $C/2 \times H \times W$. This result is fed into another $1\times1$ convolution with batch normalization to produce a feature map with the same size as the input feature map, $C \times H \times W$. Finally, we multiply it by a scale parameter $\alpha$ and perform an element-wise sum with the input feature map to obtain the final output: \begin{equation} \label{source-target} A_i=\alpha \sum_{j=1}^{H \times W}(w_{ij}~ Value_j^T)^T+F_i, \end{equation} where $\alpha$ is initialized to 0 and gradually learns to assign more weight~\cite{sagan}, $A_i$ denotes the fed-back output, and $F_i$ denotes the input feature map. By adding $\alpha \sum_{j=1}^{H \times W}(w_{ij}~ Value_j^T)^T$ to the feature map close to the input, we obtain a feature map that takes the output feature map into account. The new feature map $A_i$ is fed into the network again, and we obtain the segmentation result. Equation~(\ref{source-target}) shows that the output $A_i$ is the weighted sum over all positions of the output feature map plus the input feature map. Therefore, segmentation accuracy is improved by transmitting information from the output to the input.
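The following sketch shows a simplified PyTorch-style re-implementation of this Source-Target Feedback Attention block, following the channel reductions ($C/8$ for \textbf{Query}/\textbf{Key}, $C/2$ for \textbf{Value}) and the zero-initialized scale described above; it is illustrative and not necessarily identical to our implementation.
\begin{verbatim}
# Illustrative sketch of Source-Target Feedback Attention (not the exact
# released code). Query comes from the input-side feature map; Key and
# Value come from the output feature map; alpha gates the feedback.
import torch
import torch.nn as nn

class FeedbackAttentionST(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.query = nn.Sequential(nn.Conv2d(channels, channels // 8, 1),
                                   nn.BatchNorm2d(channels // 8))
        self.key   = nn.Sequential(nn.Conv2d(channels, channels // 8, 1),
                                   nn.BatchNorm2d(channels // 8))
        self.value = nn.Sequential(nn.Conv2d(channels, channels // 2, 1),
                                   nn.BatchNorm2d(channels // 2))
        self.out   = nn.Sequential(nn.Conv2d(channels // 2, channels, 1),
                                   nn.BatchNorm2d(channels))
        self.alpha = nn.Parameter(torch.zeros(1))  # initialized to 0

    def forward(self, input_feat, output_feat):
        b, c, h, w = input_feat.shape
        q = self.query(input_feat).view(b, -1, h * w)   # B x C/8 x HW
        k = self.key(output_feat).view(b, -1, h * w)    # B x C/8 x HW
        v = self.value(output_feat).view(b, -1, h * w)  # B x C/2 x HW
        # Attention map w_ij: softmax over output positions j.
        attn = torch.softmax(torch.bmm(q.transpose(1, 2), k), dim=-1)
        # Aggregate Value over j and restore the spatial shape.
        agg = torch.bmm(v, attn.transpose(1, 2)).view(b, -1, h, w)
        return self.alpha * self.out(agg) + input_feat
\end{verbatim}
The Self-Attention variant of Figure~\ref{fig:feedback attention} (b) differs mainly in that \textbf{Query} is also computed from the output feature map.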
\subsubsection{Feedback Attention using Self-Attention} \label{sec:proposed:feedback:self} In Source-Target-Attention, the input and output feature maps are aggregated together, so the relationship between the two feature maps can be emphasized. However, the input feature map may not yet contain sufficiently informative features, which can result in a poor relational alignment between the two maps. We therefore also construct a Feedback Attention variant using Self-Attention, which aggregates only the output feature map. The structure is shown in Figure~\ref{fig:feedback attention} (b). We feed the output feature map into $1\times1$ convolutions with batch normalization to generate new feature maps \textbf{Query}, \textbf{Key}, and \textbf{Value}, similar to Source-Target-Attention. We reshape \textbf{Query} and \textbf{Key} to $C/8 \times (H \times W)$, perform a matrix multiplication between the transpose of \textbf{Query} and \textbf{Key}, and use a softmax function to calculate the attention map. In vector form, the attention map is \begin{equation} w_{pq}=\frac{\exp({Query}_{p}^T ~{Key}_{q})}{\sum_{q'=1}^{H \times W} {\exp({Query}_{p}^T~{Key}_{q'})}}, \end{equation} where $w_{pq}$ measures the impact of the $p^{th}$ \textbf{Query} on the $q^{th}$ \textbf{Key}. We reshape \textbf{Value} to $C/2 \times (H \times W)$, perform a matrix multiplication between the attention map and the transpose of \textbf{Value}, and reshape the result to $C \times H \times W$ after a $1 \times 1$ convolution. Finally, we multiply it by a scale parameter $\beta$ and perform an element-wise sum with the input feature map to obtain the final output: \begin{equation} \label{self} A_p=\beta \sum_{q=1}^{H \times W}(w_{pq}~ Value_q^T)^T+F_p, \end{equation} where $\beta$ is initialized to 0 and gradually learns to assign more weight~\cite{sagan}, $A_p$ denotes the output, and $F_p$ denotes the input feature map. The new feature map $A_p$ is fed into the network again, and we obtain the segmentation result. Unlike Equation~(\ref{source-target}), Equation~(\ref{self}) calculates the similarity using only the information of the output. In addition, consistency can be improved because the scale parameter allows information to be passed to the input selectively.
\begin{figure}[t] \centering \begin{tabular}{c} \begin{minipage}[t]{0.19\hsize} \centering \includegraphics[width=\linewidth]{drosophila/drosophila_input.png} \text{Input image} \end{minipage} \begin{minipage}[t]{0.19\hsize} \centering \includegraphics[width=\linewidth]{drosophila/drosophila_GT.png} \text{Ground truth} \end{minipage} \begin{minipage}[t]{0.19\hsize} \centering \includegraphics[width=\linewidth]{drosophila/drosophila_unet.png} \text{U-Net\cite{unet}} \end{minipage} \begin{minipage}[t]{0.19\hsize} \centering \includegraphics[width=\linewidth]{drosophila/drosophila_feedback_st.png} \text{Feedback}\\ \text{Attention(ST)} \end{minipage} \begin{minipage}[t]{0.19\hsize} \centering \includegraphics[width=\linewidth]{drosophila/drosophila_feedback_self.png} \text{Feedback}\\ \text{Attention(Self)} \end{minipage} \end{tabular} \caption{Examples of segmentation results on ssTEM dataset. ST indicates Source-Target-Attention, Self indicates Self-Attention.} \label{fig:sstem} \end{figure} \begin{table}[t] \centering \caption{Segmentation accuracy (IoU and mIoU) on ssTEM Dataset.
ST indicates Source-Target-Attention, Self indicates Self-Attention.} \label{table:sstem} \begin{tabular}{l|ccccc} \hline Method & Membrane & Mitochondria & Synapse & Cytoplasm & Mean IoU\% \\ \hline \hline U-Net~\cite{unet} & 74.24 & 71.01 & 43.08 & 92.03 & 70.09 \\ RU-Net~\cite{alom2018recurrent} & 75.81 & 74.39 & 43.26 & 92.25 & 71.43 \\ Feedback U-Net~\cite{shibuya2020feedback} & 76.44 & 75.20 & 42.30 & 92.43 & 71.59 \\ Feedback Attention (ST) & \textbf{76.65} & \textbf{78.27} & \textbf{43.32} & \textbf{92.64} & \textbf{72.72} \\ Feedback Attention (Self) & {\color{red} \textbf{76.94}} & {\color{red} \textbf{79.52}} & {\color{red} \textbf{45.29}} & {\color{red} \textbf{92.80}} & {\color{red} \textbf{73.64}} \\ \hline \end{tabular} \end{table}
\section{Experiments} \label{sec:experments} This section presents the evaluation of the proposed method. We explain the datasets used in the experiments in section~\ref{sec:experments:dataset}. Experimental results are shown in section~\ref{sec:experments:results}. Finally, section~\ref{sec:experments:ablation studies} describes ablation studies that demonstrate the effectiveness of the proposed method.
\subsection{Dataset} \label{sec:experments:dataset} In the experiments, we evaluated all methods 15 times, using 5-fold cross-validation with three different initial values, on the Drosophila cell image dataset~\cite{sstem}. We use Intersection over Union (IoU) as the evaluation measure, and the average IoU over the 15 evaluations is used as the final measure. This dataset, referred to as the ssTEM dataset, shows neural tissue from a Drosophila larva ventral nerve cord and was acquired using serial section Transmission Electron Microscopy at the HHMI Janelia Research Campus~\cite{sstem}. It contains 20 images of $1024 \times 1024$ pixels with ground truth annotations. In this experiment, semantic segmentation is performed for four classes: membrane, mitochondria, synapse, and cytoplasm. We expanded the 20 images to 320 images by cropping 16 non-overlapping regions of $256 \times 256$ pixels from each image. We divided those images into 192 training, 48 validation, and 80 test images.
\begin{figure}[t] \centering \begin{tabular}{c} \begin{minipage}[t]{0.24\hsize} \centering \includegraphics[width=\linewidth]{attention/image_in.png} \text{Input image} \end{minipage} \begin{minipage}[t]{0.24\hsize} \centering \includegraphics[width=\linewidth]{attention/image_gen_st.png} \text{Output image(ST)} \end{minipage} \begin{minipage}[t]{0.24\hsize} \centering \includegraphics[width=\linewidth]{attention/image_st_attention_mem.png} \text{Attention map}\\ \text{Membrane(ST)} \end{minipage} \begin{minipage}[t]{0.24\hsize} \centering \includegraphics[width=\linewidth]{attention/image_st_attention_cyt.png} \text{Attention map}\\ \text{Cytoplasm(ST)} \end{minipage} \\ \begin{minipage}[t]{0.24\hsize} \centering \includegraphics[width=\linewidth]{attention/mCherry0305.png} \text{Ground Truth} \end{minipage} \begin{minipage}[t]{0.24\hsize} \centering \includegraphics[width=\linewidth]{attention/image_gen_self.png} \text{Output image(Self)} \end{minipage} \begin{minipage}[t]{0.24\hsize} \centering \includegraphics[width=\linewidth]{attention/image_self_attention_mem.png} \text{Attention map}\\ \text{Membrane(Self)} \end{minipage} \begin{minipage}[t]{0.24\hsize} \centering \includegraphics[width=\linewidth]{attention/image_self_attention_cyt.png} \text{Attention map}\\ \text{Cytoplasm(Self)} \end{minipage} \end{tabular} \caption{Visualization results of Attention Map on ssTEM dataset.
ST indicates Source-Target-Attention, Self indicates Self-Attention.} \label{fig:attention maps} \end{figure}
\subsection{Experimental Results} \label{sec:experments:results} Table~\ref{table:sstem} shows the accuracy on the ssTEM dataset, and Figure~\ref{fig:sstem} shows the segmentation results. Bold red numbers in the table indicate the best IoU and bold black numbers the second best. Table~\ref{table:sstem} shows that our proposed Feedback Attention improves the accuracy of all classes compared to the conventional U-Net~\cite{unet}. We also evaluated two feedback methods using RNNs: RU-Net~\cite{alom2018recurrent}, which applies recurrent convolution to U-Net, and Feedback U-Net~\cite{shibuya2020feedback}, which applies feedback segmentation to U-Net. The results show that the proposed method gives high accuracy in all classes. In addition, Self-Attention, which calculates the similarity within the output, is more accurate than Source-Target-Attention, which calculates the similarity between the input and the output. This indicates that the input feature map does not yet contain sufficiently discriminative features, so the similarity representation between input and output does not work well. The yellow frames in Figure~\ref{fig:sstem} show that our method using Feedback Attention can correctly identify mitochondria that were over-detected by the conventional methods. In the conventional methods, cell membranes were interrupted, whereas with our proposed method the cell membranes are segmented as cleanly connected structures. The experimental results show that cell membranes and mitochondria are successfully identified even in regions that are difficult for the conventional methods. We visualize some attention maps in Figure~\ref{fig:attention maps} to better understand our two kinds of Feedback Attention. White indicates similarity and black indicates dissimilarity. We find that the Self-Attention maps contain many similar pixels, while the Source-Target-Attention maps contain fewer. This is because Source-Target-Attention uses the feature maps of both input and output, and the feature map near the input differs from that of the output, so the number of similar pixels is smaller than in the Self-Attention map. However, membranes and cytoplasm take different values in the attention map, which means that they are emphasized as different objects. On the other hand, Self-Attention generates attention maps only from the output feature map. Therefore, as shown in Figure~\ref{fig:attention maps}, when cell membrane and cytoplasm are selected, they are highlighted as similar pixels.
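All quantitative comparisons in this section are reported as per-class IoU and mean IoU. As a reference, the following minimal sketch shows one common way these scores can be computed from predicted and ground-truth label maps; it is illustrative and not necessarily identical to the evaluation script used for the tables.
\begin{verbatim}
# Minimal sketch of per-class IoU and mean IoU on label maps
# (illustrative only; the exact evaluation code may differ).
import numpy as np

def iou_scores(pred, target, num_classes=4):
    ious = []
    for c in range(num_classes):
        intersection = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        ious.append(intersection / union if union > 0 else float("nan"))
    return ious, np.nanmean(ious)
\end{verbatim}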
\begin{table}[t] \centering \caption{Comparison of different feedback connections.} \label{table:connection} \begin{tabular}{l|ccccc} \hline Method & Membrane & Mitochondria & Synapse & Cytoplasm & Mean IoU\% \\ \hline \hline Add & 75.56 & 77.36 & 41.84 & 92.46 & 71.81 \\ 1$\times$1 Conv & 75.22 & \textbf{78.39} & \textbf{43.46} & 92.49 & 72.39 \\ SE-Net\cite{senet} & 75.89 & 77.31 & 42.92 & 92.49 & 72.15 \\ Light Attention\cite{hiramatsu2020semantic} & 76.20 & 78.27 & 43.18 & 92.57 & 72.56 \\ Feedback Attention (ST) & \textbf{76.65} & 78.27 & 43.32 & \textbf{92.64} & \textbf{72.72} \\ Feedback Attention (Self) & {\color{red} \textbf{76.94}} & {\color{red} \textbf{79.52}} & {\color{red} \textbf{45.29}} & {\color{red} \textbf{92.80}} & {\color{red} \textbf{73.64}} \\ \hline \end{tabular} \end{table}
\subsection{Ablation Studies} \label{sec:experments:ablation studies} We performed three ablation studies to show the effectiveness of the proposed method. The first ablation study evaluates different feedback connection methods. The second confirms the effect of the location in the input at which the feedback from the output is connected. The last compares the outputs obtained before and after Feedback Attention is applied.
\subsubsection{Comparison of different feedback connections} We experimentally examined other feedback connection methods from the output to the input, comparing four alternatives; a sketch of the two attention-free variants is given after this paragraph. The first two do not use an attention mechanism: the first simply adds the output feature map to the input feature map, and the second feeds the output feature map through a $1 \times 1$ convolution before adding it to the input feature map. Both use a scale parameter, as in our proposed method. In addition, we compare two methods that use an attention mechanism: the first applies SE-Net~\cite{senet}, which suppresses and emphasizes feature map channels, to the output feature map before adding it to the input feature map; the second applies Light Attention~\cite{hiramatsu2020semantic}, which suppresses and emphasizes important locations and channels in the feature map through $3 \times 3$ convolutions, to the output feature map before adding it to the input feature map. Table~\ref{table:connection} shows that all four alternatives improve the accuracy over U-Net~\cite{unet}, confirming that the feedback mechanism itself is effective. However, our proposed method is more accurate than all four. This shows that our proposed Feedback Attention uses the information of the output more effectively in the input.
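The two attention-free feedback connections compared above can be sketched as follows; this is an illustrative interpretation of the ablation with hypothetical module names, not the exact code used in the experiments.
\begin{verbatim}
# Sketch of the "Add" and "1x1 Conv" feedback connections from the ablation.
# Both gate the feedback with a learnable scale initialized to zero, as in
# the proposed method. Illustrative only.
import torch
import torch.nn as nn

class AddFeedback(nn.Module):
    def __init__(self):
        super().__init__()
        self.scale = nn.Parameter(torch.zeros(1))

    def forward(self, input_feat, output_feat):
        return input_feat + self.scale * output_feat

class ConvFeedback(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, kernel_size=1)
        self.scale = nn.Parameter(torch.zeros(1))

    def forward(self, input_feat, output_feat):
        return input_feat + self.scale * self.conv(output_feat)
\end{verbatim}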
\begin{table}[t] \centering \caption{Comparison between different connection locations.} \label{table:location} \begin{tabular}{lccccc} \hline \multicolumn{1}{l|}{Method} & Membrane & Mitochondria & Synapse & Cytoplasm & Mean IoU\% \\ \hline \hline \multicolumn{6}{c}{Feedback Attention using Source-Target-Attention} \\ \hline \multicolumn{1}{l|}{One conv} & 76.54 & 77.39 & 43.06 & 91.96 & 72.24 \\ \multicolumn{1}{l|}{Two conv (Ours)} & 76.65 & 78.27 & 43.32 & 92.64 & 72.72 \\ \hline \multicolumn{6}{c}{Feedback Attention using Self-Attention} \\ \hline \multicolumn{1}{l|}{One conv} & \textbf{76.69} & \textbf{78.73} & \textbf{45.23} & \textbf{92.66} & \textbf{73.33} \\ \multicolumn{1}{l|}{Two conv (Ours)} & {\color{red} \textbf{76.94}} & {\color{red} \textbf{79.52}} & {\color{red} \textbf{45.29}} & {\color{red} \textbf{92.80}} & {\color{red} \textbf{73.64}} \\ \hline \end{tabular} \end{table} \begin{table}[t] \centering \caption{Comparison before and after Feedback Attention.} \label{table:bafore_after} \begin{tabular}{lccccc} \hline \multicolumn{1}{l|}{Method} & Membrane & Mitochondria & Synapse & Cytoplasm & Mean IoU\% \\ \hline \hline \multicolumn{6}{c}{Feedback Attention using Source-Target-Attention} \\ \hline \multicolumn{1}{l|}{First output} & 76.07 & 76.76 & 41.28 & 92.39 & 71.62 \\ \multicolumn{1}{l|}{Second output (Ours)} & \textbf{76.65} & \textbf{78.27} & \textbf{43.32} & \textbf{92.64} & \textbf{72.72} \\ \hline \multicolumn{6}{c}{Feedback Attention using Self-Attention} \\ \hline \multicolumn{1}{l|}{First output} & 75.49 & 74.29 & 41.57 & 92.03 & 70.84 \\ \multicolumn{1}{l|}{Second output (Ours)} & {\color{red} \textbf{76.94}} & {\color{red} \textbf{79.52}} & {\color{red} \textbf{45.29}} & {\color{red} \textbf{92.80}} & {\color{red} \textbf{73.64}} \\ \hline \end{tabular} \end{table}
\subsubsection{Comparison between different connection locations} We experimentally evaluated which input feature map should serve as the destination of the feedback. Since its size must match that of the final layer, only the two layers closest to the input are candidates: the feature map closest to the input, obtained after a single convolution, and the feature map obtained after two convolutions. We compared Feedback Attention applied at these two locations. Table~\ref{table:location} shows that applying Feedback Attention to the feature map after two convolutions is better for both Source-Target-Attention and Self-Attention. This indicates that a single convolution does not extract features as good as those obtained after two convolutions.
\subsubsection{Comparison before and after Feedback Attention} When we use Feedback Attention, the output of the network is fed back to the input as attention, so we obtain two outputs. Although the output of the second round, which uses Feedback Attention, is used as the final result, we also compare the outputs of the first and second rounds to show the effectiveness of Feedback Attention. Table~\ref{table:bafore_after} shows that the output of the second round, which uses Feedback Attention, is better than that of the first round. This demonstrates that accuracy is improved through the feedback mechanism.
\section{Conclusions} \label{sec:conclusions} In this paper, we have proposed two Feedback Attention mechanisms for cell image segmentation.
Feedback Attention allows us to take advantage of the information in the output feature map and improves segmentation accuracy in comparison with the conventional feedforward network, with RU-Net~\cite{alom2018recurrent}, which uses local feedback at each convolutional layer, and with Feedback U-Net~\cite{shibuya2020feedback}, which uses global feedback between input and output. Ablation studies show that Feedback Attention obtains accurate segmentation results when the connection location and the attention mechanism that conveys the output information are chosen appropriately. In the future, we aim to develop a top-down attention mechanism that directly utilizes the ground truth, such as self-distillation~\cite{zhang2019your}. Feedback networks can also be regarded as a kind of top-down network, and the representational power of feature extraction should increase further if the ground truth can also be used for direct supervision of intermediate layers. In addition, the Reformer~\cite{reformer}, which uses locality-sensitive hashing, has recently been proposed. Since Transformer-style attention consumes a lot of memory, the Reformer could work well with our Feedback Attention. These are subjects for future work. \bibliographystyle{splncs04} \bibliography{egbib} \end{document}
https://openreview.net/forum?id=7G1GGjdzrde
https://arxiv.org/abs/2008.06474
Please evaluate the paper based on the thorough investigation of adding feedback attention to a U-Net segmentation method, including the validation of different feedback methods and the analysis of the impact on segmentation performance.
Interesting practical method but questionable concepts and results The paper addresses the problem of denoising of microscopy images and the fact that traditional methods as well as various recent supervised deep learning methods make assumptions about the noise statistics that may not hold. The authors advocate the use of self-supervised deep learning methods, as high-quality paired training data is often not available to properly train supervised methods. But they observe that self-supervised methods typically produce high-frequency artifacts and achieve inferior results compared to supervised methods. To remedy this, they propose to exploit the fact that the images are usually diffraction-limited, by adding a convolution with a point-spread function model to an existing self-supervised deep learning-based denoising method (Noise2Void) and training it accordingly. Experimental results on a range of microscopy images illustrate the potential of the proposed method. This paper is well written and the presentation is easy to follow. While the idea is interesting, I am not convinced it is theoretically sound. As explained (in Section 3.3 and also in Section 3.4), the Noise2Void method estimates the image s. Since s=z*h (Section 3.1), this makes it a denoising method, not a deconvolution method, and that is indeed how the method was designed. Thus, simply processing the estimated s by convolution with an assumed PSF model h (Figure 1 and Section 3.4) is questionable. Of course, doing so will force Noise2Void to behave more like it, and you can claim to "view the direct output before the convolution as an estimate of the phantom image ... i.e. an attempt at deconvolution" and get some visually pleasing results, but that does not make the approach theoretically right. Rather, it seems a practical trick that apparently happens to work to some extent. Other specific comments: - Section 4.1: Synthetic data is generated using a Gaussian PSF and pixel-wise additive Gaussian noise, but that is not realistic. As the authors admit elsewhere (multiple times), the dominant sources of noise are Poisson photon noise and Gaussian readout noise (Sections 1 and 3.1). - Section 4.2: "Our implementation is based on the pytorch Noise2Void implementation from [10]. We use the exact same network architecture, with the only difference being the added convolution with the PSF at the end of the network." This, combined with the above major concern, make both the theoretical and the practical contribution of the paper rather limited. - In Section 2 many methods are discussed but the comparison in Figure 2 is limited to only N2V (and a variant). Are there really no available software implementations of other methods to compare with? - Section 4.3: The only quantitative measure used is PSNR. It is tricky to make the entire quantitative comparison hinge on a single measure that is known to be questionable. It would be good to also evaluate using other measures, such as SSIM. - The authors claim "considerable visual improvements" (Figure 2) and even "stunning visual improvement (Section 4.4). These are subjective statements that in my opinion are not supported by the provided evidence. Rating: 6: Marginally above acceptance threshold Confidence: 4: The reviewer is confident but not absolutely certain that the evaluation is correct
\documentclass[runningheads]{llncs} \usepackage{graphicx} \usepackage{soul} \usepackage[normalem]{ulem} \usepackage{tikz} \usepackage{comment} \usepackage{amsmath,amssymb} % \usepackage{color} \usepackage[misc]{ifsym} \usepackage{xspace} \usepackage{tabularx} \usepackage{multirow} \newcommand{\miniheadline}[1]{\noindent\textbf{#1.}} \newcommand\todo[1]{\textcolor{red}{TODO: #1}} \makeatletter \DeclareRobustCommand\onedot{\futurelet\@let@token\@onedot} \def\@onedot{\ifx\@let@token.\else.\null\fi\xspace} \def\eg{\emph{e.g}\onedot} \def\Eg{\emph{E.g}\onedot} \def\ie{\emph{i.e}\onedot} \def\Ie{\emph{I.e}\onedot} \def\cf{\emph{c.f}\onedot} \def\Cf{\emph{C.f}\onedot} \def\etc{\emph{etc}\onedot} \def\vs{\emph{vs}\onedot} \def\wrt{w.r.t\onedot} \def\dof{d.o.f\onedot} \def\etal{\emph{et al}\onedot} \makeatother \newcommand{\E}[2]{\mathbb{E}_{#2} {\left[ #1 \right]} } \newcommand{\KL}[2]{\mathbb{KL}(#1||#2)} \newcommand{\oursm}{ours{\tiny{$^-$}}} \newcommand{\oursp}{ours{\tiny{$^+$}}} \newcommand{\VAE}{\mbox{\textsc{VAE}}\xspace} \newcommand{\VAEs}{\mbox{\textsc{VAE}}s\xspace} \newcommand{\CARE}{\mbox{\textsc{CARE}}\xspace} \newcommand{\CSBDeep}{\mbox{\textsc{CSBDeep}}\xspace} \newcommand{\NoiseNoise}{\mbox{\textsc{Noise2Noise}}\xspace} \newcommand{\NoiseVoid}{\mbox{\textsc{Noise2Void}}\xspace} \newcommand{\NoiseSelf}{\mbox{\textsc{Noise2Self}}\xspace} \newcommand{\DenoiSeg}{\mbox{\textsc{DenoiSeg}}\xspace} \newcommand{\DivNoising}{\mbox{\textsc{DivNoising}}\xspace} \newcommand{\NtoN}{\mbox{\textsc{N2N}}\xspace} \newcommand{\NtoV}{\mbox{\textsc{N2V}}\xspace} \newcommand{\PNtoV}{\mbox{\textsc{PN2V}}\xspace} \newcommand{\PNtoVgmm}{\mbox{\textsc{PN2V-GMM}}\xspace} \newcommand{\PNtoVhist}{\mbox{\textsc{PN2V-H}}\xspace} \newcommand{\UNet}{\mbox{\textsc{U-Net}}\xspace} \newcommand{\imgp}{x} \newcommand{\sigp}{s} \newcommand{\sigpe}{\hat{s}} \newcommand{\img}{\mathbf{x}} \newcommand{\sig}{\mathbf{s}} \newcommand{\sige}{\hat{\mathbf{s}}} \newcommand{\seg}{\mathbf{c}} \newcommand{\loss}[1]{\mathcal{L}_{\pars}{(#1)}} \newcommand{\losskl}[1]{\mathcal{L}_\encopas^\textsc{KL}{(#1)}} \newcommand{\lossr}[1]{\mathcal{L}_{\encopas,\decopas}^\textsc{R}{(#1)}} \newcommand{\recf}{\img^\textsc{RF}} \newcommand{\latente}{\hat{\mathbf{z}}} \newcommand{\latentpe}{\hat{z}} \newcommand{\sample}{\mathbf{s}} \newcommand{\latent}{{\mathbf{z}}} \newcommand{\psf}{{\mathbf{h}}} \newcommand{\latentp}{z} \newcommand{\encopas}{{\mathbf{\phi}}} \newcommand{\enc}[1]{f_\encopas(#1)} \newcommand{\pars}{{\mathbf{\theta} }} \newcommand{\dec}[1]{g_\decopas(#1)} \newcommand{\q}[1]{q_{\encopas}(#1)} \newcommand{\p}[1]{p(#1)} \newcommand{\pt}[1]{p_{\decopas}(#1)} \newcommand{\pnm}[1]{p_\textsc{NM}(#1)} \newcommand{\numpix}{N} \newcommand{\numimgs}{M} \newcommand{\numsamples}{K} \newcommand{\numlatdim}{D} \newcommand{\setRandPix}{M} \newcommand{\MMSE}{\textsc{MMSE}\xspace} \newcommand{\MAP}{\textsc{MAP}\xspace} \newcommand{\GMM}{\textsc{GMM}\xspace} \newcommand{\PSF}{\textsc{PSF}\xspace} \newcommand{\PSFs}{\textsc{PSF}s\xspace} \newcommand{\SURE}{\textsc{SURE}\xspace} \usepackage{booktabs} \usepackage{multirow} \usepackage[normalem]{ulem} \useunder{\uline}{\ul}{} \newcommand\figSchema{ \begin{figure}[h!] \centering \includegraphics[width=1\linewidth]{figs/network.pdf} \caption{ \textbf{Improved Denoising for Diffraction-Limited Data.} \textbf{Top:} Given a noisy input, self-supervised methods like \NoiseVoid (N2V)~\cite{krull2019noise2void} often produce high-frequency artifacts that do not occur in diffraction-limited data. 
Based on the assumption that the true signal must be the product of a convolution with a \emph{point spread function} (\PSF), our method is able to considerably improve denoising quality and remove these artifacts. \textbf{Bottom:} Our method is based on the \NoiseVoid masking scheme. Unpaired training images simultaneously serve as input and target. The loss is only calculated for a randomly selected set of pixels, which are masked in the input image. Our contribution is to convolve the output of the network with the \PSF in order to produce a denoising result that is guaranteed to be consistent with diffraction-limited imaging. The output of the network before the convolution operation can be interpreted as a deconvolution result, which is a byproduct of our method. Our system can be trained in an end-to-end fashion, calculating the loss between our denoising result and the selected pixel set of the input image. } \label{fig:schema} \end{figure} } \newcommand\figTable{ \begin{figure}[h!] \centering \includegraphics[width=1\linewidth]{figs/example_results_table_v1.pdf} \caption{ \textbf{Denoising results.} We show cropped denoising results for various fluorescence microscopy datasets. Our method achieves considerable visual improvements for all datasets compared to \NoiseVoid. The \emph{N2V~(conv.)} baseline corresponds to the \NoiseVoid result convolved with the same \PSF we use for our proposed method. } \label{fig:table} \end{figure} } \newcommand\figDeconv{ \begin{figure}[h!] \centering \includegraphics[width=1\linewidth]{figs/deconvolution_results_examples.pdf} \caption{\textbf{Effect of the proposed Positivity Constraint.} We show cropped denoising and deconvolution results from various datasets with (\emph{\oursp}) and without positivity constraint (\emph{\oursm}), see Section~\ref{sec:posConstr} for details. While the denoising results are almost indistinguishable, the deconvolution results show a drastic reduction of artifacts when the positivity constraint is used. } \vspace{-2mm} \label{fig:deconv} \end{figure} } \newcommand\figPSF{ \begin{figure}[h!] \centering \includegraphics[width=1\linewidth]{figs/psf_text.pdf} \caption{ \textbf{Effects of Point Spread Function Mismatch.} We use synthetic data to investigate how the choice of \PSF influences the resulting denoising quality. The data was generated by convolving rendered text with a Gaussian \PSF of standard deviation $\sigma=1$ (highlighted in red) and subsequently adding noise. Here, we show the results of our method when trained using Gaussian \PSFs of various sizes. We achieve the best results by using the true \PSF. Smaller \PSFs produce high-frequency artifacts. Larger \PSFs produce overly smooth images. } \vspace{-2mm} \label{fig:psf} \end{figure} } \newcommand\tablePSNR{ \begin{table}[] \centering \begin{tabular}{|l|c|cccc|cc|c|} \hline \multicolumn{1}{|c|}{\multirow{3}{*}{\begin{tabular}[c]{@{}c@{}}dataset/\\ network\end{tabular}}} & \multirow{3}{*}{raw data} & \multicolumn{6}{c|}{self-supervised} & \multirow{2}{*}{superv.} \\ \cline{3-8} \multicolumn{1}{|c|}{} & & \multicolumn{4}{c|}{no noise model} & \multicolumn{2}{c|}{noise model} & \\ \cline{3-9} \multicolumn{1}{|c|}{} & & N2V & \multicolumn{1}{l}{\begin{tabular}[c]{@{}l@{}}N2V \\ conv.\end{tabular}} & ours$^-$, & ours$^+$ & PN2V & DivN. 
& CARE \\ \hline Convallaria & 28.98 & 35.85 & 32.86 & \textbf{36.39} & 36.26 & 36.47 & {\ul 36.94} & 36.71 \\ Mouse actin & 23.71 & 33.35 & 33.48 & 33.94 & \textbf{34.04} & 33.86 & 33.98 & {\ul 34.20} \\ Mouse nuclei & 28.10 & 35.86 & 34.59 & \textbf{36.34} & 36.27 & 36.35 & 36.31 & {\ul 36.58} \\ Flywing (DenoiSeg) & 11.15 & 23.62 & 23.51 & 24.10 & \textbf{24.30} & 24.85 & 25.10 & {\ul 25.60} \\ Mouse (DenoiSeg) & 20.84 & 33.61 & 32.27 & \textbf{33.91} & 33.83 & 34.19 & 34.03 & {\ul 34.63} \\ W2S avg1 ch0 & 21.86 & 34.30 & 34.38 & {\ul \textbf{34.90}} & 34.24 & - & 34.13 & 34.30 \\ W2S avg1 ch1 & 19.35 & 31.80 & 32.23 & {\ul \textbf{32.31}} & 32.24 & - & 32.28 & 32.11 \\ W2S avg1 ch2 & 20.43 & 34.65 & {\ul \textbf{35.19}} & 35.03 & 35.09 & 32.48 & 35.18 & 34.73 \\ W2S avg16 ch0 & 33.20 & 38.80 & 38.73 & \textbf{39.17} & 37.84 & 39.19 & 39.62 & {\ul 41.94} \\ W2S avg16 ch1 & 31.24 & 37.81 & 37.49 & \textbf{38.33} & 38.19 & 38.24 & 38.37 & {\ul 39.09} \\ W2S avg16 ch2 & 32.35 & 40.19 & 40.32 & 40.60 & \textbf{40.74} & 40.49 & 40.52 & {\ul 40.88} \\ \hline \end{tabular} \vspace{.3cm} \caption{\textbf{Quantitative Denoising Results.} We report the average peak signal to noise ratio for each dataset and method. Here, \textit{\oursp} and \textit{\oursm} correspond to our method with ($\lambda=1$) and without positivity constraint ($\lambda=0$), see Section~\ref{sec:posConstr} for details. The best results among self-supervised methods without noise model are highlighted in bold. The best results overall are underlined. Here \emph{DivN.} is short for \DivNoising~\cite{prakash2020divnoising}. } \label{tab:results} \end{table} } \begin{document} \pagestyle{headings} \mainmatter \title{Improving Blind Spot Denoising\\ for Microscopy} % \author{Anna~S.~Goncharova\inst{1,2} \and Alf~Honigmann\inst{1} \and Florian~Jug\inst{1,2,3, \text{\Letter}} \and Alexander~Krull\inst{1,2,4, \text{\Letter}}} \authorrunning{A. Goncharova et al.} \institute{Max-Planck Institute of Molecular Cell Biology and Genetics, Dresden, Germany \and Center for Systems Biology Dresden (CSBD), Dresden, Germany \and Fondazione Human Technopole, Milano, Italy \and Max Planck Institute for the Physics of Complex Systems, Dresden, Germany\\ \Letter \: \text{jug@mpi-cbg.de}, \text{krull@mpi-cbg.de}} \maketitle \begin{abstract} Many microscopy applications are limited by the total amount of usable light and are consequently challenged by the resulting levels of noise in the acquired images. This problem is often addressed via (supervised) deep learning based denoising. Recently, by making assumptions about the noise statistics, self-supervised methods have emerged. Such methods are trained directly on the images that are to be denoised and do not require additional paired training data. While achieving remarkable results, self-supervised methods can produce high-frequency artifacts and achieve inferior results compared to supervised approaches. Here we present a novel way to improve the quality of self-supervised denoising. Considering that light microscopy images are usually diffraction-limited, we propose to include this knowledge in the denoising process. We assume the clean image to be the result of a convolution with a point spread function (PSF) and explicitly include this operation at the end of our neural network. As a consequence, we are able to eliminate high-frequency artifacts and achieve self-supervised results that are very close to the ones achieved with traditional supervised methods. 
\keywords{denoising, CNN, light microscopy, deconvolution} \end{abstract} \figSchema \section{Introduction} For most microscopy applications, finding the right exposure and light intensity to be used involves a trade-off between maximizing the signal to noise ratio and minimizing undesired effects such as phototoxicity. As a consequence, researchers often have to cope with considerable amounts of noise. To mitigate this issue, denoising plays an essential role in many data analysis pipelines, enabling otherwise impossible experiments~\cite{belthangady2019applications}. Currently, deep learning based denoising, also known as content-aware image restoration (\CARE)~\cite{weigert2018content}, achieves the highest quality results. \CARE methods learn a mapping from noisy to clean images. Before being applied, they must be trained with pairs of corresponding noisy and clean training data. In practice, this dependence on training pairs can be a bottleneck. While noisy images can usually be produced in abundance, recording their clean counterparts is difficult or impossible. Over the last years, various solutions to the problem have been proposed. Lehtinen \etal showed that a network can be trained for denoising using only pairs of corresponding noisy images. This method is known as \NoiseNoise~\cite{lehtinen2018noise2noise}. The first self-supervised approaches \NoiseVoid~\cite{krull2019noise2void} and \NoiseSelf~\cite{batson2019noise2self} were introduced soon after this. These methods can be trained on unpaired noisy image data. In fact, they can be trained on the very same data that is to be denoised in the first place. The underlying approach relies on the assumption that (given the true signal) the noise in an image is generated independently for each pixel, as is indeed the case for the dominant sources of noise in light microscopy (Poisson shot noise and Gaussian readout noise)~\cite{luisier2010image,zhang2019poisson}. Both methods employ so-called \emph{blind spot} training, in which random pixels are masked in the input image with the network trying to predict their value from the surrounding patch. Unfortunately, the original self-supervised methods typically produce visible high-frequency artifacts (see Figure~\ref{fig:schema}) and can often not reach the quality achieved by supervised \CARE training. It is worth noting that the high-frequency artifacts produced by these self-supervised methods never occur in the real fluorescence signal. Since the image is diffraction-limited and oversampled, the true signal has to be smooth to some degree. Multiple extensions of \NoiseVoid and \NoiseSelf have been proposed~\cite{Krull:2020_PN2V,laine2019high,Prakash2019ppn2v,khademi2020self}. All of them improve results by explicitly modeling the noise distribution. Here, we propose an alternate and novel route to high-quality self-supervised denoising. Instead of making additional assumptions about the noise, we show that the result can be improved by including additional knowledge about the structure of our signal. We believe that our approach might ultimately complement existing methods that are based on noise modeling, to further improve denoising quality. We assume that the true signal is the product of a convolution of an unknown \emph{phantom image} and an approximately known point spread function (PSF) -- a common assumption in established deconvolution approaches~\cite{richardson1972bayesian}. 
We use a \UNet~\cite{ronneberger2015u} to predict the phantom image and then explicitly perform the convolution to produce the final denoised result (see Figure~\ref{fig:schema}). We follow~\cite{krull2019noise2void,batson2019noise2self} and use a blind spot masking scheme, allowing us to train our network in an end-to-end fashion from unpaired noisy data. We demonstrate that our method achieves denoising quality close to supervised methods on a variety of real and publicly available datasets. Our approach is generally on par with modern noise model based methods~\cite{Krull:2020_PN2V,prakash2020divnoising}, while relying on a much simpler pipeline. As a byproduct, our method outputs the predicted phantom image, which can be interpreted as a deconvolution result. While we focus on the denoising task in this paper, we find that we can produce visually convincing deconvolved images by including a positivity constraint for the deconvolved output.
\section{Related work} \label{sec:relatedWork} In the following, we will discuss related work on self-supervised blind spot denoising and other unsupervised denoising methods. We will focus on deep learning-based methods and omit the more traditional approaches that directly operate on individual images without training. Finally, we will briefly discuss concurrent work that tries to jointly solve denoising and inverse problems such as deconvolution.
\subsection{Self-Supervised Blind Spot Denoising} By now, there is a variety of different blind spot based methods. While the first self-supervised methods (\NoiseVoid and \NoiseSelf) use a masking scheme to implement blind spot training, Laine \etal~\cite{laine2019high} suggest an alternative approach. Instead of masking, the authors present a specific network architecture that directly implements the blind spot receptive field. Additionally, the authors proposed a way to improve denoising quality by including a simple pixel-wise Gaussian based noise model. In parallel, Krull \etal~\cite{Krull:2020_PN2V} introduced a similar noise model based technique for improving denoising quality, this time using the pixel masking approach. Instead of Gaussians, Krull~\etal use histogram-based noise models together with a sampling scheme. Follow-up work additionally introduces parametric noise models and demonstrates how they can be bootstrapped (estimated) directly from the raw data~\cite{Prakash2019ppn2v}. All mentioned methods improve denoising quality by modeling the imaging noise. We, in contrast, are the first to show how blind spot denoising can be improved by including additional knowledge of the signal itself, namely the fact that it is diffraction-limited and oversampled. While the blind spot architecture introduced in~\cite{laine2019high} is computationally cheaper than the masking scheme from \cite{krull2019noise2void,khademi2020self}, it is unfortunately incompatible with our setup (see Figure~\ref{fig:schema}). Applying a convolution after a blind spot network would break the blind spot structure of the overall architecture. We thus stick with the original masking scheme, which is architecture-independent and can directly be applied for end-to-end training.
\subsection{Other Unsupervised Denoising Approaches} An important alternative route is based on the theoretical work known as \emph{Stein's unbiased risk estimator} (\SURE)~\cite{stein1981estimation}.
Given noisy observation, such as an image corrupted by additive Gaussian noise, Stein's 1981 theoretical work enables us to calculate the expected mean-squared error of an estimator that tries to predict the underlying signal without requiring access to the true signal. The approach was put to use for conventional (non-deep-learning-based) denoising in~\cite{ramani2008monte} and later applied to derive a loss function for neural networks~\cite{metzler2018unsupervised}. While it has been shown that the same principle can theoretically be applied for other noise models beyond additive Gaussian noise~\cite{raphan2007learning}, this has to our knowledge not yet been used to build a general unsupervised deep learning based denoiser. In a very recent work called \DivNoising~\cite{prakash2020divnoising} unsupervised denoising was achieved by training a variational autoencoder (\VAE)~\cite{KingmaW13} as a generative model of the data. Once trained, the \VAE can produce samples from an approximate posterior of clean images given a noisy input, allowing the authors to provide multiple diverse solutions or to combine them to a single estimate. Like the previously discussed extensions of blind spot denoising~\cite{laine2019high,Krull:2020_PN2V,Prakash2019ppn2v,khademi2020self} all methods based on \SURE as well as \DivNoising rely on a known noise model or on estimating an approximation. We, in contrast, do not model the noise distribution in any way (except assuming it is zero centered and applied at the pixel level) and achieve improved results. A radically different path that does not rely on modeling the noise distribution was described by Ulyanov \etal~\cite{ulyanov2018deep}. This technique, known as \emph{deep image prior}, trains a network using a fixed pattern of random inputs and the noisy image as a target. If trained until convergence, the network will simply produce the noisy image as output. However, by stopping the training early (at an adequate time) this setup can produce high-quality denoising results. Like our self-supervised method, deep image prior does not require additional training data to be applied. However, it is fundamentally different in that it is trained and applied separately for each image that is to be denoised, while our method can, once it is trained, be readily applied to previously unseen data. \subsection{Concurrent Work on Denoising and Inverse Problems} Kobayashi \etal~\cite{kobayashi2020image} developed a similar approach in parallel to ours. They provide a mathematical framework on how inverse problems such as deconvolution can be tackled using a blind spot approach. However, while we use a comparable setup, our perspective is quite different. Instead of deconvolution, we focus on the benefits for the denoising task and show that the quality of the results on real data can be dramatically improved. Yet another alternative approach was developed by Hendriksen \etal~\cite{hendriksen2020noise2inverse}. However, this technique is limited to well-conditioned inverse problems like computer tomography reconstruction and is not directly applicable to the type of microscopy data we consider here. \section{Methods} \label{sec:methods} In the following, we first describe our model of the image formation process, which is the foundation of our method, and then formally describe the denoising task. 
Before finally describing our method for blind spot denoising with diffraction-limited data, we include a brief recap of the original \NoiseVoid method described in \cite{krull2019noise2void}. \subsection{Image Formation} \label{sec:imageFormation} We think of the observed noisy image $\img$ recorded by the microscope, as being created in a two-stage process. Light originates from the excited fluorophores in the sample. We will refer to the unknown distribution of excited fluorophores as the \emph{phantom image} and denote it as $\latent$. The phantom image is mapped through the optics of the microscope to form a distorted image $\sig$ on the detector, which we will refer to as \emph{signal}. We assume the signal is the result of a convolution $\sig = \latent * \psf$ between the phantom image $\latent$ and a known \PSF $\psf$~\cite{richardson1972bayesian}. Finally, the signal is subject to different forms of imaging noise, resulting in the noisy observation $\img$. We think of $\img$ as being drawn from a distribution $\img \sim \pnm{\img|\sig}$, which we call the \emph{noise model}. Assuming that (given a signal $\sig$) the noise is occurring independently for each pixel, we can factorize the noise model as \begin{equation} \pnm{\img|\sig} = \prod_i^N \pnm{\imgp_i, \sigp_i}, \end{equation} where $\pnm{\imgp_i, \sigp_i}$ is the unknown probability distribution, describing how likely it is to measure the noisy value $\imgp_i$ at pixel $i$ given an underlying signal $\sigp_i$. Note that such a noise model that factorizes over pixels can describe the most dominant sources of noise in fluorescent microscopy, the Poisson shot noise and readout noise~\cite{foi2008practical,zhang2019poisson}. Here, the particular shape of the noise model does not have to be known. The only additional assumption we make (following the original \NoiseVoid~\cite{krull2019noise2void}) is that the added noise is centered around zero, that is the expected value of the noisy observations at a pixel is equal to the signal $\E{\imgp_i}{ \pnm{\imgp_i, \sigp_i}}= \sigp_i$. \subsection{Denoising Task} \label{sec:denoisingTask} Given an observed noisy image $\img$, the denoising task as we consider it in this paper is to find a suitable estimate $\sige \approx \sig$. Note that this is different from the deconvolution task, attempting to find an estimate $\latente \approx \latent$ for the original phantom image. \subsection{Blind Spot Denoising Recap} \label{sec:bsdRecap} In the originally proposed \NoiseVoid, the network is seen as implementing a function $\sigpe_i = f(\recf_i;\pars)$, that predicts an estimate for each pixel's signal $\sigpe_i$ from its surrounding patch $\recf_i$, which includes the noisy pixel values in a neighborhood around the pixel $i$ but excludes the value $\imgp_i$ at the pixel itself. We use $\pars$ to denote the network parameters. The authors of~\cite{krull2019noise2void} refer to $\recf_i$ as a \emph{blind spot receptive field}. It allows us to train the network using unpaired noisy training images $x$, with the training loss computed as a sum over pixels comparing the predicted results directly to the corresponding values of the noisy observation \begin{equation} \sum_{i} \left( \sigpe_i - \imgp_i \right)^2 . 
\label{eq:loss} \end{equation} Note that the blind spot receptive field is necessary for this construction, as a standard network, in which each pixel prediction is also based on the value at the pixel itself, would simply learn the identity transformation when trained using the same image as input and as target. To implement a network with a blind spot receptive field, \NoiseVoid uses a standard \UNet~\cite{ronneberger2015u} together with a masking scheme during training. The loss is only computed for a randomly selected subset of pixels $\setRandPix$. These pixels are \emph{masked} in the input image, replacing their value with a random pixel value from a local neighborhood. A network trained in this way acts as if it had a blind spot receptive field, enabling the network to denoise images once it has been trained on unpaired noisy observations.
\subsection{Blind Spot Denoising for Diffraction-Limited Data} \label{sec:ourMethod} While the self-supervised \NoiseVoid method~\cite{krull2019noise2void} can be readily applied to the data $\img$ with the goal of directly producing an estimate $\sige \approx \sig$, this is a sub-optimal strategy in our setting. Considering the above-described process of image formation, we know that, since $\sig$ is the result of a convolution with a \PSF, high frequencies must be drastically reduced or completely removed. It is thus extremely unlikely that the true signal would include high-frequency features like those visible \eg in the \NoiseVoid result in Figure~\ref{fig:schema}. While a network might in principle learn this from data, we find that blind spot methods usually fail at this and produce high-frequency artifacts. To avoid this problem, we propose to add a convolution with the \PSF after the \UNet (see Figure~\ref{fig:schema}). When we now interpret the final output after the convolution as an estimate of the signal $\sige \approx \sig$, we can be sure that this output is consistent with our model of image formation and can \eg not contain unrealistic high-frequency artifacts. In addition, we can view the direct output before the convolution as an estimate of the phantom image $\latente \approx \latent$, \ie an attempt at deconvolution. To train our model using unpaired noisy data, we adhere to the same masking scheme and training loss (Eq.~\ref{eq:loss}) as in \NoiseVoid. The only difference is that our signal estimate is produced using the additional convolution, which enforces an adequate dampening of high frequencies in the final denoising estimate.
\subsection{A Positivity Constraint for the Deconvolved Image} \label{sec:posConstr} Considering that the predicted deconvolved phantom image $\latente$ describes the distribution of excited fluorophores in our sample (see Section~\ref{sec:imageFormation}), we know that it cannot take negative values. After all, a negative fluorophore concentration can never occur in a physical sample. We propose to enforce this constraint using an additional loss component, linearly punishing negative values. Together with the original \NoiseVoid loss, our full loss is computed as \begin{equation} \frac{1}{|\setRandPix|} \sum_{i \in \setRandPix} \left( \sigpe_i - \imgp_i \right)^2 + \lambda \frac{1}{\numpix} \sum_{i=1}^\numpix \max(0, -\latentpe_i) \label{eq:lossFull}, \end{equation} where $\numpix$ is the number of pixels and $\lambda$ is a hyperparameter controlling the influence of the positivity constraint.
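For concreteness, the combination of the masked \NoiseVoid term, the convolution of the network output with the \PSF, and the positivity term of Eq.~(\ref{eq:lossFull}) can be sketched in simplified PyTorch-style code as follows. This is an illustrative sketch under stated assumptions (the \PSF given as a fixed convolution kernel, \texttt{unet} a placeholder for the network, \texttt{mask} marking the selected pixels) rather than the exact implementation used here.
\begin{verbatim}
# Illustrative sketch of the combined loss: masked N2V term on the
# PSF-convolved output plus a positivity penalty on the phantom image.
# 'unet', 'psf_kernel' (shape 1x1xkxk) and 'mask' are assumed inputs.
import torch
import torch.nn.functional as F

def loss_fn(unet, psf_kernel, masked_input, target, mask, lam=1.0):
    phantom = unet(masked_input)                          # estimate of z
    pad = psf_kernel.shape[-1] // 2
    signal = F.conv2d(phantom, psf_kernel, padding=pad)   # estimate of s = z*h

    # N2V term: squared error only at the masked pixel positions.
    n2v = ((signal - target) ** 2 * mask).sum() / mask.sum()

    # Positivity term: linearly penalize negative phantom values (all pixels).
    positivity = torch.clamp(-phantom, min=0).mean()

    return n2v + lam * positivity
\end{verbatim}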
Note that the new positivity term can be evaluated at each pixel in the image, while the \NoiseVoid component can only be computed at the masked pixels.
\section{Experiments and Results} \label{sec:experiments} In the following, we evaluate the denoising performance of our method, comparing it to various baselines. Additionally, we investigate the effect of the positivity constraint (see Section~\ref{sec:posConstr}). Finally, we describe an experiment on the role of the \PSF used for reconstruction.
\subsection{Datasets} \label{sec:data} \miniheadline{Fluorescence Microscopy Data with Real Noise} We used 6 fluorescence microscopy datasets with real noise. The \textit{Convallaria}~\cite{Krull:2020_PN2V,Prakash2019ppn2v} and \textit{Mouse actin}~\cite{Krull:2020_PN2V,Prakash2019ppn2v} datasets each consist of a set of 100 noisy images of $1024 \times 1024$ pixels showing a static sample. The \textit{Mouse skull nuclei} dataset~\cite{Krull:2020_PN2V,Prakash2019ppn2v} consists of a set of 200 images of $512 \times 512$ pixels. In all 3 datasets, the ground truth is derived by averaging all images. We use 5 images of each dataset for validation and the rest for training. The authors of~\cite{Krull:2020_PN2V,Prakash2019ppn2v} define a region of each image that is to be used for testing, while the whole image can be used for training of self-supervised methods. We adhere to this procedure. We additionally use data from~\cite{zhou2020w2s}, which provides 3 channels with training and test sets consisting of $80$ and $40$ images, respectively. We use 15\% of the training data for validation. Images are $512 \times 512$ pixels in size. Note that, like~\cite{prakash2020divnoising}, we use the raw data made available to us by the authors, as the provided normalized data is not suitable for our purpose. The dataset provides 5 different versions of each image with different levels of noise. In this work, we use only the versions with the minimum and maximum amount of noise. We will refer to them as \textit{W2S avg1} and \textit{W2S avg16}, respectively, as they are created by averaging different numbers of raw images. \miniheadline{Fluorescence Microscopy Data with Synthetic Noise} Additionally, we use 2 fluorescence microscopy datasets from~\cite{buchholz2020denoiseg} and add synthetic noise. We will refer to them as \textit{Mouse (DenoiSeg)} and \textit{Flywing (DenoiSeg)}. While the original data contains almost no noise, we add pixel-wise Gaussian noise with standard deviation 20 and 70 for \textit{Mouse (DenoiSeg)} and \textit{Flywing (DenoiSeg)}, respectively. Both datasets are split into a training, validation, and test fraction. The \textit{Mouse} dataset provides 908 images of $128 \times 128$ pixels for training, 160 images of the same size for validation, and 67 images of $256 \times 256$ pixels as a test set. The \textit{Flywing} dataset provides 1428 images of $128 \times 128$ pixels as a training set, 252 images of the same size for validation, and 42 images of $512 \times 512$ pixels as a test set. As our method does not require ground truth, we follow \cite{prakash2020divnoising} and add the test fraction to the training data in order to achieve a fair comparison. \miniheadline{Synthetic Data} While the above-mentioned datasets are highly realistic, we do not know the true \PSF that produced the images.
To investigate the effect of a mismatch between the true \PSF and the \PSF used in the training of our method, we used the clean rendered text data from the book \emph{The beetle}~\cite{marsh2004beetle}, previously introduced in~\cite{prakash2020divnoising}, and synthetically convolved it with a Gaussian \PSF with a standard deviation of 1 pixel width. Finally, we added pixel-wise Gaussian noise with a standard deviation of 100. The resulting data consists of 40800 small images of $128 \times 128$ pixels. We split off a validation fraction of 15\%. \subsection{Implementation Details and Training} \label{sec:implementation} Our implementation is based on the \emph{PyTorch} \NoiseVoid implementation from~\cite{Krull:2020_PN2V}. We use the exact same network architecture, with the only difference being the added convolution with the \PSF at the end of the network. In all our experiments, we use the same network parameters: a \UNet of depth 3 with 1 input channel and 64 channels in the first layer. All networks were trained for 200 epochs, with 10 steps per epoch. We set the initial learning rate to 0.001 and used the Adam optimizer with a batch size of 1, a virtual batch size of 20, and a patch size of 100. We mask 3.125\% of the pixels in each patch (the default of this implementation). We use the positivity constraint with $\lambda=1$ (see Section~\ref{sec:posConstr}). \subsection{Denoising Performance} \label{sec:denoisingPerformance} We report the results for all fluorescence microscopy datasets in Table~\ref{tab:results}. Denoising performance is measured quantitatively as the average peak signal-to-noise ratio (\textbf{PSNR}). Qualitative results can be found in Figure~\ref{fig:table}. We run our method using a Gaussian \PSF with a standard deviation of 1 pixel width for all datasets. \figTable \tablePSNR To assess the denoising quality of our method, we compare its results to various baselines: \NoiseVoid, noise model based self-supervised methods (\PNtoV~\cite{Krull:2020_PN2V}, \DivNoising~\cite{prakash2020divnoising}), as well as the well-known supervised \CARE~\cite{weigert2018content} approach. While we run \NoiseVoid ourselves, the PSNR values for all other methods are taken from \cite{prakash2020divnoising}. We created a simple additional baseline by convolving the \NoiseVoid result with the same \PSF used in our own method. This baseline is referred to as \emph{N2V (conv.)}. \subsection{Effect of the Positivity Constraint} \label{sec:effectOfPosConstr} Here we discuss the effect of the positivity constraint (see Section~\ref{sec:posConstr}) on the denoising and deconvolution results. We compare our method without positivity constraint ($\lambda = 0$, see Eq.~\ref{eq:lossFull}) and with positivity constraint ($\lambda = 1$). Choosing different values for $\lambda$ did not have a noticeable effect. We find that the constraint does not provide a systematic advantage or disadvantage with respect to denoising quality (see Table~\ref{tab:results}). In Figure~\ref{fig:deconv} we compare the results visually. While it is difficult to make out any differences in the denoising results, we see a clear visual improvement of the deconvolution result when the positivity constraint is used.
While the deconvolution result without positivity constraint contains various artifacts such as random repeating structures and grid patterns, these problems largely disappear when the positivity constraint is used. It is an interesting observation that such different predicted phantom images can lead to virtually indistinguishable denoising results after convolution with the \PSF, demonstrating how ill-posed the unsupervised deconvolution problem really is. \figDeconv \subsection{Effect of the Point Spread Function} \label{sec:effectOfPSF} Here we discuss an additional experiment on the role of the \PSF used in the reconstruction and the effect of a mismatch with respect to the \PSF that actually produced the data. We use our synthetic \emph{The beetle} dataset (see Section~\ref{sec:data}), which has been convolved with a Gaussian \PSF of standard deviation $\sigma=1$ pixel width and subjected to Gaussian noise of standard deviation 100. We train our method on this data using different Gaussian \PSFs with standard deviations between $\sigma=0$ and $\sigma=2$. We used an active positivity constraint with $\lambda=1$. The results of the experiment can be found in Figure~\ref{fig:psf}. We find that the true \PSF of $\sigma=1$ gives the best results. While lower values lead to increased artifacts, similar to those produced by \NoiseVoid, larger values lead to an overly smooth result. \figPSF \section{Discussion and Outlook} \label{sec:Discussion} Here, we have proposed a novel way of improving self-supervised denoising for microscopy, making use of the fact that images are typically diffraction-limited. While our method can be easily applied, its results are often on par with more sophisticated second-generation self-supervised methods~\cite{Krull:2020_PN2V,prakash2020divnoising}. We believe that the simplicity and general applicability of our method will facilitate fast and widespread use in fluorescence microscopy, where oversampled and diffraction-limited data is the default. While the standard deviation of the \PSF is currently a parameter that has to be set by the user, we believe that future work can optimize it as a part of the training procedure. This would provide the user with a \emph{de facto} parameter-free turn-key system that could readily be applied to unpaired noisy raw data and achieve results very close to supervised training. In addition to providing a denoising result, our method outputs a deconvolved image as well. Even though deconvolution is not the focus of this work, we find that including a positivity constraint in our loss enables us to predict visually plausible results. However, the fact that dramatically different predicted deconvolved images give rise to virtually indistinguishable denoising results (see Figure~\ref{fig:deconv}) illustrates just how underconstrained the deconvolution task is. Hence, further regularization might be required to achieve deconvolution results of optimal quality. In concurrent work, Kobayashi \etal~\cite{kobayashi2020image} have generated deconvolution results in a similar fashion and report encouraging results in their evaluation. We expect that future work will quantify to what degree the positivity constraint and other regularization terms can further improve self-supervised deconvolution methods.
We believe that the use of a convolution after the network output to account for diffraction-limited imaging will in the future be combined with noise model based techniques, such as the self-supervised approaches of~\cite{Krull:2020_PN2V,laine2019high}, or with novel techniques like \DivNoising. In the latter case, this might even enable us to produce diverse deconvolution results and allow us to tackle the uncertainty introduced by the under-constrained nature of the deconvolution problem in a systematic way. \subsubsection*{Code Availability.} \label{sec:code} Our code is available at \url{https://github.com/juglab/DecoNoising}. \subsubsection*{Acknowledgments.} \label{sec:acknowledgments} We thank the Scientific Computing Facility at MPI-CBG for giving us access to their HPC cluster. \par\vfill\par \clearpage \bibliographystyle{splncs04} \bibliography{refs} \end{document}
https://openreview.net/forum?id=BcAWplCftE
https://arxiv.org/abs/2008.08414
Please evaluate the theoretical soundness, practical contribution, and visual improvements of our proposed method for denoising microscopy images using self-supervised deep learning and convolution with a point-spread function model.
Theoretically and conceptually very strong, but I have concerns regarding the performance In this work, the authors present an extension to the Noise2Void denoising framework that incorporates convolution with a point spread function in order to better approximate the image formation process in microscopy. ### Quality The premise of the work is very solid and represents a shift towards making denoising approaches specific to bioimaging data rather than just the direct translation of computer vision techniques originally formulated for e.g. photographs, video data. This is an important conceptual advancement in the field. The performance of the new method is assessed in comparison to a selection of other denoising frameworks, and when measured by the PSNR metric is shown to out-perform comparable self-supervised methods. However, I find the general reliance on the PSNR as a performance assessment to be problematic, as this does not take into account the structural content of the images post-denoising. For example, in Fig. 2 there appears to be some structural discrepancies in the ‘Flywing’ data. While the ‘N2V (conv.)’ image has a lower PSNR than ‘ours’, the visual agreement between the N2V (conv.) and ground truth data appears better than that between ours and ground truth. As a sanity check for myself, I thresholded and skeletonised these images and while the N2V (conv.) and ground truth skeletons matched well, the ‘ours’ skeleton deviated substantially at the central junction. Apologies if this seems facetious, but I think it underlines the necessity for another measure of performance, especially as the ultimate goal of denoising microscopy images is to produce a better baseline from which quantitative measurements of structure can be made (rather than just a visually pleasing image). I would suggest that the authors remove the phrase ‘stunning visual improvement’ (line 443) as this is rather subjective – for example, using the positivity constraint in the mouse actin deconvolution does not improve the prevalence of patterned noise (which can be seen if the images are Fourier-transformed). Section 4.5, wherein the effects of the PSF size are investigated, seems a little abrupt. Although the PSF size parameter is clearly critical for the performance of the method, this section would have benefitted from additional discussion of e.g. a non-uniform PSF throughout the image or tolerance to the PSF deviating from a Gaussian function, as these are both relevant considerations in real-life microscopy applications. ### Clarity The paper is overall incredibly clear and I managed to understand the majority of what was written on my first pass (in contrast to my general experience reading papers in this field). The repeated use of the phrase ‘diffraction-limited’ is somewhat misleading and may even be doing the work a disservice, This phrase is normally used in the context of referring to conventional widefield or confocal fluorescence imaging data; however there is no reason that the application of this method is limited to this regime. For example, given that the condition of a point spread function whose spatial distribution can be approximated by a Nyquist-sampled Gaussian distribution of known width, this approach could be readily applied to some super-resolution data such as STED images. For this reason, the authors may wish to reconsider the use of the phrase ‘diffraction-limited’ although this is just a suggestion. 
### Originality The work described here is a very similar concept to that described in [Kobayashi et al (2020)](https://arxiv.org/abs/2006.06156). However the authors acknowledge this work in their discussion and given that there was less than one month between the submission of the work by Kobayashi et al and the BIC submission deadline I do not see this as shortcoming in originality (rather, unfortunate timing). Setting aside the paper by Kobayashi et al, this paper displays interesting conceptual novelty. In comparison to the paper by Kobayashi et al, this work (in my opinion) is much better focused toward the application of fluorescence microscopy and is reported in such a way that I feel it is more likely that a microscopist would preferentially use the method presented here. ### Significance The overall conceptual significance – integrating knowledge about the image formation process into the denoising method – is high, as I mentioned above. I am still not entirely convinced, however, that there is a *significant* increase in performance, as the PSNR values represent fairly marginal gains over to Noise2Void (alongside my above concerns regarding structural fidelity). ### Pros * The paper is very well written, explained, and presented * The theoretical benefits of the approach are substantial, and again the explanation of these is well-integrated into the paper * The discussion of the paper shows that this work is a starting point and that the authors have thought about concrete ways to extend and improve it going forward. ### Cons * The authors have not convinced me from a quantitative point of view that the results are superior to existing self-supervised methods. Rating: 6: Marginally above acceptance threshold Confidence: 4: The reviewer is confident but not absolutely certain that the evaluation is correct
\documentclass[runningheads]{llncs} \usepackage{graphicx} \usepackage{soul} \usepackage[normalem]{ulem} \usepackage{tikz} \usepackage{comment} \usepackage{amsmath,amssymb} % \usepackage{color} \usepackage[misc]{ifsym} \usepackage{xspace} \usepackage{tabularx} \usepackage{multirow} \newcommand{\miniheadline}[1]{\noindent\textbf{#1.}} \newcommand\todo[1]{\textcolor{red}{TODO: #1}} \makeatletter \DeclareRobustCommand\onedot{\futurelet\@let@token\@onedot} \def\@onedot{\ifx\@let@token.\else.\null\fi\xspace} \def\eg{\emph{e.g}\onedot} \def\Eg{\emph{E.g}\onedot} \def\ie{\emph{i.e}\onedot} \def\Ie{\emph{I.e}\onedot} \def\cf{\emph{c.f}\onedot} \def\Cf{\emph{C.f}\onedot} \def\etc{\emph{etc}\onedot} \def\vs{\emph{vs}\onedot} \def\wrt{w.r.t\onedot} \def\dof{d.o.f\onedot} \def\etal{\emph{et al}\onedot} \makeatother \newcommand{\E}[2]{\mathbb{E}_{#2} {\left[ #1 \right]} } \newcommand{\KL}[2]{\mathbb{KL}(#1||#2)} \newcommand{\oursm}{ours{\tiny{$^-$}}} \newcommand{\oursp}{ours{\tiny{$^+$}}} \newcommand{\VAE}{\mbox{\textsc{VAE}}\xspace} \newcommand{\VAEs}{\mbox{\textsc{VAE}}s\xspace} \newcommand{\CARE}{\mbox{\textsc{CARE}}\xspace} \newcommand{\CSBDeep}{\mbox{\textsc{CSBDeep}}\xspace} \newcommand{\NoiseNoise}{\mbox{\textsc{Noise2Noise}}\xspace} \newcommand{\NoiseVoid}{\mbox{\textsc{Noise2Void}}\xspace} \newcommand{\NoiseSelf}{\mbox{\textsc{Noise2Self}}\xspace} \newcommand{\DenoiSeg}{\mbox{\textsc{DenoiSeg}}\xspace} \newcommand{\DivNoising}{\mbox{\textsc{DivNoising}}\xspace} \newcommand{\NtoN}{\mbox{\textsc{N2N}}\xspace} \newcommand{\NtoV}{\mbox{\textsc{N2V}}\xspace} \newcommand{\PNtoV}{\mbox{\textsc{PN2V}}\xspace} \newcommand{\PNtoVgmm}{\mbox{\textsc{PN2V-GMM}}\xspace} \newcommand{\PNtoVhist}{\mbox{\textsc{PN2V-H}}\xspace} \newcommand{\UNet}{\mbox{\textsc{U-Net}}\xspace} \newcommand{\imgp}{x} \newcommand{\sigp}{s} \newcommand{\sigpe}{\hat{s}} \newcommand{\img}{\mathbf{x}} \newcommand{\sig}{\mathbf{s}} \newcommand{\sige}{\hat{\mathbf{s}}} \newcommand{\seg}{\mathbf{c}} \newcommand{\loss}[1]{\mathcal{L}_{\pars}{(#1)}} \newcommand{\losskl}[1]{\mathcal{L}_\encopas^\textsc{KL}{(#1)}} \newcommand{\lossr}[1]{\mathcal{L}_{\encopas,\decopas}^\textsc{R}{(#1)}} \newcommand{\recf}{\img^\textsc{RF}} \newcommand{\latente}{\hat{\mathbf{z}}} \newcommand{\latentpe}{\hat{z}} \newcommand{\sample}{\mathbf{s}} \newcommand{\latent}{{\mathbf{z}}} \newcommand{\psf}{{\mathbf{h}}} \newcommand{\latentp}{z} \newcommand{\encopas}{{\mathbf{\phi}}} \newcommand{\enc}[1]{f_\encopas(#1)} \newcommand{\pars}{{\mathbf{\theta} }} \newcommand{\dec}[1]{g_\decopas(#1)} \newcommand{\q}[1]{q_{\encopas}(#1)} \newcommand{\p}[1]{p(#1)} \newcommand{\pt}[1]{p_{\decopas}(#1)} \newcommand{\pnm}[1]{p_\textsc{NM}(#1)} \newcommand{\numpix}{N} \newcommand{\numimgs}{M} \newcommand{\numsamples}{K} \newcommand{\numlatdim}{D} \newcommand{\setRandPix}{M} \newcommand{\MMSE}{\textsc{MMSE}\xspace} \newcommand{\MAP}{\textsc{MAP}\xspace} \newcommand{\GMM}{\textsc{GMM}\xspace} \newcommand{\PSF}{\textsc{PSF}\xspace} \newcommand{\PSFs}{\textsc{PSF}s\xspace} \newcommand{\SURE}{\textsc{SURE}\xspace} \usepackage{booktabs} \usepackage{multirow} \usepackage[normalem]{ulem} \useunder{\uline}{\ul}{} \newcommand\figSchema{ \begin{figure}[h!] \centering \includegraphics[width=1\linewidth]{figs/network.pdf} \caption{ \textbf{Improved Denoising for Diffraction-Limited Data.} \textbf{Top:} Given a noisy input, self-supervised methods like \NoiseVoid (N2V)~\cite{krull2019noise2void} often produce high-frequency artifacts that do not occur in diffraction-limited data. 
Based on the assumption that the true signal must be the product of a convolution with a \emph{point spread function} (\PSF), our method is able to considerably improve denoising quality and remove these artifacts. \textbf{Bottom:} Our method is based on the \NoiseVoid masking scheme. Unpaired training images simultaneously serve as input and target. The loss is only calculated for a randomly selected set of pixels, which are masked in the input image. Our contribution is to convolve the output of the network with the \PSF in order to produce a denoising result that is guaranteed to be consistent with diffraction-limited imaging. The output of the network before the convolution operation can be interpreted as a deconvolution result, which is a byproduct of our method. Our system can be trained in an end-to-end fashion, calculating the loss between our denoising result and the selected pixel set of the input image. } \label{fig:schema} \end{figure} } \newcommand\figTable{ \begin{figure}[h!] \centering \includegraphics[width=1\linewidth]{figs/example_results_table_v1.pdf} \caption{ \textbf{Denoising results.} We show cropped denoising results for various fluorescence microscopy datasets. Our method achieves considerable visual improvements for all datasets compared to \NoiseVoid. The \emph{N2V~(conv.)} baseline corresponds to the \NoiseVoid result convolved with the same \PSF we use for our proposed method. } \label{fig:table} \end{figure} } \newcommand\figDeconv{ \begin{figure}[h!] \centering \includegraphics[width=1\linewidth]{figs/deconvolution_results_examples.pdf} \caption{\textbf{Effect of the proposed Positivity Constraint.} We show cropped denoising and deconvolution results from various datasets with (\emph{\oursp}) and without positivity constraint (\emph{\oursm}), see Section~\ref{sec:posConstr} for details. While the denoising results are almost indistinguishable, the deconvolution results show a drastic reduction of artifacts when the positivity constraint is used. } \vspace{-2mm} \label{fig:deconv} \end{figure} } \newcommand\figPSF{ \begin{figure}[h!] \centering \includegraphics[width=1\linewidth]{figs/psf_text.pdf} \caption{ \textbf{Effects of Point Spread Function Mismatch.} We use synthetic data to investigate how the choice of \PSF influences the resulting denoising quality. The data was generated by convolving rendered text with a Gaussian \PSF of standard deviation $\sigma=1$ (highlighted in red) and subsequently adding noise. Here, we show the results of our method when trained using Gaussian \PSFs of various sizes. We achieve the best results by using the true \PSF. Smaller \PSFs produce high-frequency artifacts. Larger \PSFs produce overly smooth images. } \vspace{-2mm} \label{fig:psf} \end{figure} } \newcommand\tablePSNR{ \begin{table}[] \centering \begin{tabular}{|l|c|cccc|cc|c|} \hline \multicolumn{1}{|c|}{\multirow{3}{*}{\begin{tabular}[c]{@{}c@{}}dataset/\\ network\end{tabular}}} & \multirow{3}{*}{raw data} & \multicolumn{6}{c|}{self-supervised} & \multirow{2}{*}{superv.} \\ \cline{3-8} \multicolumn{1}{|c|}{} & & \multicolumn{4}{c|}{no noise model} & \multicolumn{2}{c|}{noise model} & \\ \cline{3-9} \multicolumn{1}{|c|}{} & & N2V & \multicolumn{1}{l}{\begin{tabular}[c]{@{}l@{}}N2V \\ conv.\end{tabular}} & ours$^-$, & ours$^+$ & PN2V & DivN. 
& CARE \\ \hline Convallaria & 28.98 & 35.85 & 32.86 & \textbf{36.39} & 36.26 & 36.47 & {\ul 36.94} & 36.71 \\ Mouse actin & 23.71 & 33.35 & 33.48 & 33.94 & \textbf{34.04} & 33.86 & 33.98 & {\ul 34.20} \\ Mouse nuclei & 28.10 & 35.86 & 34.59 & \textbf{36.34} & 36.27 & 36.35 & 36.31 & {\ul 36.58} \\ Flywing (DenoiSeg) & 11.15 & 23.62 & 23.51 & 24.10 & \textbf{24.30} & 24.85 & 25.10 & {\ul 25.60} \\ Mouse (DenoiSeg) & 20.84 & 33.61 & 32.27 & \textbf{33.91} & 33.83 & 34.19 & 34.03 & {\ul 34.63} \\ W2S avg1 ch0 & 21.86 & 34.30 & 34.38 & {\ul \textbf{34.90}} & 34.24 & - & 34.13 & 34.30 \\ W2S avg1 ch1 & 19.35 & 31.80 & 32.23 & {\ul \textbf{32.31}} & 32.24 & - & 32.28 & 32.11 \\ W2S avg1 ch2 & 20.43 & 34.65 & {\ul \textbf{35.19}} & 35.03 & 35.09 & 32.48 & 35.18 & 34.73 \\ W2S avg16 ch0 & 33.20 & 38.80 & 38.73 & \textbf{39.17} & 37.84 & 39.19 & 39.62 & {\ul 41.94} \\ W2S avg16 ch1 & 31.24 & 37.81 & 37.49 & \textbf{38.33} & 38.19 & 38.24 & 38.37 & {\ul 39.09} \\ W2S avg16 ch2 & 32.35 & 40.19 & 40.32 & 40.60 & \textbf{40.74} & 40.49 & 40.52 & {\ul 40.88} \\ \hline \end{tabular} \vspace{.3cm} \caption{\textbf{Quantitative Denoising Results.} We report the average peak signal to noise ratio for each dataset and method. Here, \textit{\oursp} and \textit{\oursm} correspond to our method with ($\lambda=1$) and without positivity constraint ($\lambda=0$), see Section~\ref{sec:posConstr} for details. The best results among self-supervised methods without noise model are highlighted in bold. The best results overall are underlined. Here \emph{DivN.} is short for \DivNoising~\cite{prakash2020divnoising}. } \label{tab:results} \end{table} } \begin{document} \pagestyle{headings} \mainmatter \title{Improving Blind Spot Denoising\\ for Microscopy} % \author{Anna~S.~Goncharova\inst{1,2} \and Alf~Honigmann\inst{1} \and Florian~Jug\inst{1,2,3, \text{\Letter}} \and Alexander~Krull\inst{1,2,4, \text{\Letter}}} \authorrunning{A. Goncharova et al.} \institute{Max-Planck Institute of Molecular Cell Biology and Genetics, Dresden, Germany \and Center for Systems Biology Dresden (CSBD), Dresden, Germany \and Fondazione Human Technopole, Milano, Italy \and Max Planck Institute for the Physics of Complex Systems, Dresden, Germany\\ \Letter \: \text{jug@mpi-cbg.de}, \text{krull@mpi-cbg.de}} \maketitle \begin{abstract} Many microscopy applications are limited by the total amount of usable light and are consequently challenged by the resulting levels of noise in the acquired images. This problem is often addressed via (supervised) deep learning based denoising. Recently, by making assumptions about the noise statistics, self-supervised methods have emerged. Such methods are trained directly on the images that are to be denoised and do not require additional paired training data. While achieving remarkable results, self-supervised methods can produce high-frequency artifacts and achieve inferior results compared to supervised approaches. Here we present a novel way to improve the quality of self-supervised denoising. Considering that light microscopy images are usually diffraction-limited, we propose to include this knowledge in the denoising process. We assume the clean image to be the result of a convolution with a point spread function (PSF) and explicitly include this operation at the end of our neural network. As a consequence, we are able to eliminate high-frequency artifacts and achieve self-supervised results that are very close to the ones achieved with traditional supervised methods. 
\keywords{denoising, CNN, light microscopy, deconvolution} \end{abstract} \figSchema \section{Introduction} For most microscopy applications, finding the right exposure and light intensity to be used involves a trade-off between maximizing the signal to noise ratio and minimizing undesired effects such as phototoxicity. As a consequence, researchers often have to cope with considerable amounts of noise. To mitigate this issue, denoising plays an essential role in many data analysis pipelines, enabling otherwise impossible experiments~\cite{belthangady2019applications}. Currently, deep learning based denoising, also known as content-aware image restoration (\CARE)~\cite{weigert2018content}, achieves the highest quality results. \CARE methods learn a mapping from noisy to clean images. Before being applied, they must be trained with pairs of corresponding noisy and clean training data. In practice, this dependence on training pairs can be a bottleneck. While noisy images can usually be produced in abundance, recording their clean counterparts is difficult or impossible. Over the last years, various solutions to the problem have been proposed. Lehtinen \etal showed that a network can be trained for denoising using only pairs of corresponding noisy images. This method is known as \NoiseNoise~\cite{lehtinen2018noise2noise}. The first self-supervised approaches \NoiseVoid~\cite{krull2019noise2void} and \NoiseSelf~\cite{batson2019noise2self} were introduced soon after this. These methods can be trained on unpaired noisy image data. In fact, they can be trained on the very same data that is to be denoised in the first place. The underlying approach relies on the assumption that (given the true signal) the noise in an image is generated independently for each pixel, as is indeed the case for the dominant sources of noise in light microscopy (Poisson shot noise and Gaussian readout noise)~\cite{luisier2010image,zhang2019poisson}. Both methods employ so-called \emph{blind spot} training, in which random pixels are masked in the input image with the network trying to predict their value from the surrounding patch. Unfortunately, the original self-supervised methods typically produce visible high-frequency artifacts (see Figure~\ref{fig:schema}) and can often not reach the quality achieved by supervised \CARE training. It is worth noting that the high-frequency artifacts produced by these self-supervised methods never occur in the real fluorescence signal. Since the image is diffraction-limited and oversampled, the true signal has to be smooth to some degree. Multiple extensions of \NoiseVoid and \NoiseSelf have been proposed~\cite{Krull:2020_PN2V,laine2019high,Prakash2019ppn2v,khademi2020self}. All of them improve results by explicitly modeling the noise distribution. Here, we propose an alternate and novel route to high-quality self-supervised denoising. Instead of making additional assumptions about the noise, we show that the result can be improved by including additional knowledge about the structure of our signal. We believe that our approach might ultimately complement existing methods that are based on noise modeling, to further improve denoising quality. We assume that the true signal is the product of a convolution of an unknown \emph{phantom image} and an approximately known point spread function (PSF) -- a common assumption in established deconvolution approaches~\cite{richardson1972bayesian}. 
We use a \UNet~\cite{ronneberger2015u} to predict the phantom image and then explicitly perform the convolution to produce the final denoised result (see Figure~\ref{fig:schema}). We follow~\cite{krull2019noise2void,batson2019noise2self} and use a blind spot masking scheme allowing us to train our network in an end-to-end fashion from unpaired noisy data. We demonstrate that our method achieves denoising quality close to supervised methods on a variety of real and publicly available datasets. Our approach is generally on-par with modern noise model based methods~\cite{Krull:2020_PN2V,prakash2020divnoising}, while relying on a much simpler pipeline. As a byproduct, our method outputs the predicted phantom image, which can be interpreted as a deconvolution result. While we focus on the denoising task in this paper, we find that we can produce visually convincing deconvolved images by including a positivity constraint for the deconvolved output. \section{Related work} \label{sec:relatedWork} In the following, we will discuss related work on self-supervised blind spot denoising and other unsupervised denoising methods. We will focus on deep learning-based methods and omit the more traditional approaches that directly operate on individual images without training. Finally, we will briefly discuss concurrent work that tries to jointly solve denoising and inverse problems such as deconvolution. \subsection{Self-Supervised Blind Spot Denoising} By now, there is a variety of different blind spot based methods. While the first self-supervised methods (\NoiseVoid and \NoiseSelf) use a masking scheme to implement blind spot training, Laine \etal~\cite{laine2019high} suggest an alternative approach. Instead of masking, the authors present a specific network architecture that directly implements the blind spot receptive field. Additionally, the authors proposed a way to improve denoising quality by including a simple pixel-wise Gaussian based noise model. In parallel, Krull \etal~\cite{Krull:2020_PN2V} introduced a similar noise model based technique for improving denoising quality, this time using the pixel masking approach. Instead of Gaussians, Krull~\etal use histogram-based noise models together with a sampling scheme. Follow-up work additionally introduces parametric noise models and demonstrates how they can be bootstrapped (estimated) directly from the raw data~\cite{Prakash2019ppn2v}. All mentioned methods improve denoising quality by modeling the imaging noise. We, In contrast, are the first to show how blind spot denoising can be improved by including additional knowledge of the signal itself, namely the fact that it is diffraction-limited and oversampled. While the blind spot architecture introduced in~\cite{laine2019high} is computationally cheaper than the masking scheme from \cite{krull2019noise2void,khademi2020self}, it is unfortunately incompatible with our setup (see Figure~\ref{fig:schema}). Applying a convolution after a blind spot network would break the blind spot structure of the overall architecture. We thus stick with the original masking scheme, which is architecture-independent and can directly be applied for end-to-end training. \subsection{Other Unsupervised Denoising Approaches} An important alternative route is based on the theoretical work known as \emph{Stein's unbiased risk estimator} (\SURE)~\cite{stein1981estimation}. 
Given noisy observation, such as an image corrupted by additive Gaussian noise, Stein's 1981 theoretical work enables us to calculate the expected mean-squared error of an estimator that tries to predict the underlying signal without requiring access to the true signal. The approach was put to use for conventional (non-deep-learning-based) denoising in~\cite{ramani2008monte} and later applied to derive a loss function for neural networks~\cite{metzler2018unsupervised}. While it has been shown that the same principle can theoretically be applied for other noise models beyond additive Gaussian noise~\cite{raphan2007learning}, this has to our knowledge not yet been used to build a general unsupervised deep learning based denoiser. In a very recent work called \DivNoising~\cite{prakash2020divnoising} unsupervised denoising was achieved by training a variational autoencoder (\VAE)~\cite{KingmaW13} as a generative model of the data. Once trained, the \VAE can produce samples from an approximate posterior of clean images given a noisy input, allowing the authors to provide multiple diverse solutions or to combine them to a single estimate. Like the previously discussed extensions of blind spot denoising~\cite{laine2019high,Krull:2020_PN2V,Prakash2019ppn2v,khademi2020self} all methods based on \SURE as well as \DivNoising rely on a known noise model or on estimating an approximation. We, in contrast, do not model the noise distribution in any way (except assuming it is zero centered and applied at the pixel level) and achieve improved results. A radically different path that does not rely on modeling the noise distribution was described by Ulyanov \etal~\cite{ulyanov2018deep}. This technique, known as \emph{deep image prior}, trains a network using a fixed pattern of random inputs and the noisy image as a target. If trained until convergence, the network will simply produce the noisy image as output. However, by stopping the training early (at an adequate time) this setup can produce high-quality denoising results. Like our self-supervised method, deep image prior does not require additional training data to be applied. However, it is fundamentally different in that it is trained and applied separately for each image that is to be denoised, while our method can, once it is trained, be readily applied to previously unseen data. \subsection{Concurrent Work on Denoising and Inverse Problems} Kobayashi \etal~\cite{kobayashi2020image} developed a similar approach in parallel to ours. They provide a mathematical framework on how inverse problems such as deconvolution can be tackled using a blind spot approach. However, while we use a comparable setup, our perspective is quite different. Instead of deconvolution, we focus on the benefits for the denoising task and show that the quality of the results on real data can be dramatically improved. Yet another alternative approach was developed by Hendriksen \etal~\cite{hendriksen2020noise2inverse}. However, this technique is limited to well-conditioned inverse problems like computer tomography reconstruction and is not directly applicable to the type of microscopy data we consider here. \section{Methods} \label{sec:methods} In the following, we first describe our model of the image formation process, which is the foundation of our method, and then formally describe the denoising task. 
Before finally describing our method for blind spot denoising with diffraction-limited data, we include a brief recap of the original \NoiseVoid method described in \cite{krull2019noise2void}. \subsection{Image Formation} \label{sec:imageFormation} We think of the observed noisy image $\img$, recorded by the microscope, as being created in a two-stage process. Light originates from the excited fluorophores in the sample. We will refer to the unknown distribution of excited fluorophores as the \emph{phantom image} and denote it as $\latent$. The phantom image is mapped through the optics of the microscope to form a distorted image $\sig$ on the detector, which we will refer to as the \emph{signal}. We assume the signal is the result of a convolution $\sig = \latent * \psf$ between the phantom image $\latent$ and a known \PSF $\psf$~\cite{richardson1972bayesian}. Finally, the signal is subject to different forms of imaging noise, resulting in the noisy observation $\img$. We think of $\img$ as being drawn from a distribution $\img \sim \pnm{\img|\sig}$, which we call the \emph{noise model}. Assuming that (given a signal $\sig$) the noise occurs independently for each pixel, we can factorize the noise model as \begin{equation} \pnm{\img|\sig} = \prod_{i=1}^{\numpix} \pnm{\imgp_i|\sigp_i}, \end{equation} where $\pnm{\imgp_i|\sigp_i}$ is the unknown probability distribution describing how likely it is to measure the noisy value $\imgp_i$ at pixel $i$ given an underlying signal $\sigp_i$. Note that such a noise model that factorizes over pixels can describe the most dominant sources of noise in fluorescence microscopy, the Poisson shot noise and readout noise~\cite{foi2008practical,zhang2019poisson}. Here, the particular shape of the noise model does not have to be known. The only additional assumption we make (following the original \NoiseVoid~\cite{krull2019noise2void}) is that the added noise is centered around zero, that is, the expected value of the noisy observation at a pixel is equal to the signal, $\E{\imgp_i}{ \pnm{\imgp_i|\sigp_i}}= \sigp_i$. \subsection{Denoising Task} \label{sec:denoisingTask} Given an observed noisy image $\img$, the denoising task as we consider it in this paper is to find a suitable estimate $\sige \approx \sig$. Note that this is different from the deconvolution task, which attempts to find an estimate $\latente \approx \latent$ of the original phantom image. \subsection{Blind Spot Denoising Recap} \label{sec:bsdRecap} In the originally proposed \NoiseVoid, the network is seen as implementing a function $\sigpe_i = f(\recf_i;\pars)$ that predicts an estimate of each pixel's signal $\sigpe_i$ from its surrounding patch $\recf_i$, which includes the noisy pixel values in a neighborhood around pixel $i$ but excludes the value $\imgp_i$ at the pixel itself. We use $\pars$ to denote the network parameters. The authors of~\cite{krull2019noise2void} refer to $\recf_i$ as a \emph{blind spot receptive field}. It allows us to train the network using unpaired noisy training images $\img$, with the training loss computed as a sum over pixels, comparing the predicted results directly to the corresponding values of the noisy observation \begin{equation} \sum_{i} \left( \sigpe_i - \imgp_i \right)^2 .
\label{eq:loss} \end{equation} Note that the blind spot receptive field is necessary for this construction, as a standard network, in which each pixel prediction is also based on the value at the pixel itself, would simply learn the identity transformation when trained using the same image as input and as target. To implement a network with a blind spot receptive field, \NoiseVoid uses a standard \UNet~\cite{ronneberger2015u} together with a masking scheme during training. The loss is only computed for a randomly selected subset of pixels $\setRandPix$. These pixels are \emph{masked} in the input image, replacing their value with a random pixel value from a local neighborhood. A network trained in this way acts as if it had a blind spot receptive field, enabling it to denoise images once it has been trained on unpaired noisy observations. \subsection{Blind Spot Denoising for Diffraction-Limited Data} \label{sec:ourMethod} While the self-supervised \NoiseVoid method~\cite{krull2019noise2void} can be readily applied to the data $\img$ with the goal of directly producing an estimate $\sige \approx \sig$, this is a sub-optimal strategy in our setting. Considering the above-described process of image formation, we know that, since $\sig$ is the result of a convolution with a \PSF, high frequencies must be drastically reduced or completely removed. It is thus extremely unlikely that the true signal would include high-frequency features such as those visible, \eg, in the \NoiseVoid result in Figure~\ref{fig:schema}. While a network might in principle learn this from data, we find that blind spot methods usually fail to do so and produce high-frequency artifacts. To avoid this problem, we propose to add a convolution with the \PSF after the \UNet (see Figure~\ref{fig:schema}). When we now interpret the final output after the convolution as an estimate of the signal $\sige \approx \sig$, we can be sure that this output is consistent with our model of image formation and, \eg, cannot contain unrealistic high-frequency artifacts. In addition, we can view the direct output before the convolution as an estimate of the phantom image $\latente \approx \latent$, \ie an attempt at deconvolution. To train our model using unpaired noisy data, we adhere to the same masking scheme and training loss (Eq.~\ref{eq:loss}) as in \NoiseVoid. The only difference is that our signal estimate is produced using the additional convolution, thus enforcing an adequate dampening of high frequencies in the final denoising estimate. \subsection{A Positivity Constraint for the Deconvolved Image} \label{sec:posConstr} Considering that the predicted deconvolved phantom image $\latente$ describes the distribution of excited fluorophores in our sample (see Section~\ref{sec:imageFormation}), we know that it cannot take negative values. After all, a negative fluorophore concentration can never occur in a physical sample. We propose to enforce this constraint using an additional loss component, linearly penalizing negative values. Together with the original \NoiseVoid loss, our loss is computed as \begin{equation} \frac{1}{|\setRandPix|} \sum_{i \in \setRandPix} \left( \sigpe_i - \imgp_i \right)^2 + \lambda \frac{1}{\numpix} \sum_{i=1}^{\numpix} \max(0, -\latentpe_i) \label{eq:lossFull}, \end{equation} where $\numpix$ is the number of pixels and $\lambda$ is a hyperparameter controlling the influence of the positivity constraint.
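The quantity penalized above is the pre-convolution output of the network. For intuition, the following PyTorch-style sketch illustrates how the phantom estimate and the final denoising estimate are obtained in a single forward pass; it is an illustrative sketch only, not the implementation we release, and the kernel size as well as the helper names (\texttt{gaussian\_psf}, \texttt{forward\_pass}) are placeholder choices.
\begin{verbatim}
import torch
import torch.nn.functional as F

def gaussian_psf(sigma=1.0, size=9):
    # Normalized 2D Gaussian kernel used as the PSF
    # (the kernel size is chosen here for illustration only).
    ax = torch.arange(size, dtype=torch.float32) - size // 2
    xx, yy = torch.meshgrid(ax, ax, indexing="ij")
    k = torch.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
    return (k / k.sum()).view(1, 1, size, size)

def forward_pass(unet, noisy, psf):
    # The U-Net predicts the phantom image (deconvolution estimate);
    # convolving it with the PSF yields the denoising estimate,
    # which is compared to the noisy input at the masked pixels.
    phantom = unet(noisy)
    denoised = F.conv2d(phantom, psf, padding=psf.shape[-1] // 2)
    return denoised, phantom
\end{verbatim}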
Note that the new positivity term can be evaluated at each pixel in the image, while the \NoiseVoid component can only be computed at the masked pixels. \section{Experiments and Results} \label{sec:experiments} In the following, we evaluate the denoising performance of our method, comparing it to various baselines. Additionally, we investigate the effect of the positivity constraint (see Section~\ref{sec:posConstr}). Finally, we describe an experiment on the role of the \PSF used for reconstruction. \subsection{Datasets} \label{sec:data} \miniheadline{Fluorescence Microscopy Data with Real Noise} We used 6 fluorescence microscopy datasets with real noise. The \textit{Convallaria}~\cite{Krull:2020_PN2V,Prakash2019ppn2v} and \textit{Mouse actin}~\cite{Krull:2020_PN2V,Prakash2019ppn2v} datasets each consist of a set of 100 noisy images of $1024 \times 1024$ pixels showing a static sample. The \textit{Mouse skull nuclei} dataset~\cite{Krull:2020_PN2V,Prakash2019ppn2v} consists of a set of 200 images of $512 \times 512$ pixels. In all 3 datasets, the ground truth is derived by averaging all images. We use 5 images from each dataset for validation and the rest for training. The authors of~\cite{Krull:2020_PN2V,Prakash2019ppn2v} define a region of each image that is to be used for testing, while the whole image can be used for training of self-supervised methods. We adhere to this procedure. We additionally use data from~\cite{zhou2020w2s}, which provides 3 channels with training and test sets consisting of $80$ and $40$ images, respectively. We use 15\% of the training data for validation. Images are $512 \times 512$ pixels in size. Note that, like~\cite{prakash2020divnoising}, we use the raw data made available to us by the authors, as the provided normalized data is not suitable for our purpose. The dataset provides 5 different versions of each image with different levels of noise. In this work, we use only the versions with the minimum and the maximum amount of noise. We will refer to them as \textit{W2S avg1} and \textit{W2S avg16}, respectively, as they are created by averaging different numbers of raw images. \miniheadline{Fluorescence Microscopy Data with Synthetic Noise} Additionally, we use 2 fluorescence microscopy datasets from~\cite{buchholz2020denoiseg} and add synthetic noise. We will refer to them as \textit{Mouse (DenoiSeg)} and \textit{Flywing (DenoiSeg)}. While the original data contains almost no noise, we add pixel-wise Gaussian noise with standard deviations of 20 and 70 for \textit{Mouse (DenoiSeg)} and \textit{Flywing (DenoiSeg)}, respectively. Both datasets are split into a training, validation, and test fraction. The \textit{Mouse} dataset provides 908 images of $128 \times 128$ pixels for training, 160 images of the same size for validation, and 67 images of $256 \times 256$ pixels as a test set. The \textit{Flywing} dataset provides 1428 images of $128 \times 128$ pixels for training, 252 images of the same size for validation, and 42 images of $512 \times 512$ pixels as a test set. As our method does not require ground truth, we follow \cite{prakash2020divnoising} and add the test fraction to the training data in order to achieve a fair comparison. \miniheadline{Synthetic Data} While the above-mentioned datasets are highly realistic, we do not know the true \PSF that produced the images.
To investigate the effect of a mismatch between the true \PSF and the \PSF used in the training of our method, we used the clean rendered text data from the book \emph{The beetle}~\cite{marsh2004beetle}, previously introduced in~\cite{prakash2020divnoising}, and synthetically convolved it with a Gaussian \PSF with a standard deviation of 1 pixel width. Finally, we added pixel-wise Gaussian noise with a standard deviation of 100. The resulting data consists of 40800 small images of $128 \times 128$ pixels. We split off a validation fraction of 15\%. \subsection{Implementation Details and Training} \label{sec:implementation} Our implementation is based on the \emph{PyTorch} \NoiseVoid implementation from~\cite{Krull:2020_PN2V}. We use the exact same network architecture, with the only difference being the added convolution with the \PSF at the end of the network. In all our experiments, we use the same network parameters: a \UNet of depth 3 with 1 input channel and 64 channels in the first layer. All networks were trained for 200 epochs, with 10 steps per epoch. We set the initial learning rate to 0.001 and used the Adam optimizer with a batch size of 1, a virtual batch size of 20, and a patch size of 100. We mask 3.125\% of the pixels in each patch (the default of this implementation). We use the positivity constraint with $\lambda=1$ (see Section~\ref{sec:posConstr}). \subsection{Denoising Performance} \label{sec:denoisingPerformance} We report the results for all fluorescence microscopy datasets in Table~\ref{tab:results}. Denoising performance is measured quantitatively as the average peak signal-to-noise ratio (\textbf{PSNR}). Qualitative results can be found in Figure~\ref{fig:table}. We run our method using a Gaussian \PSF with a standard deviation of 1 pixel width for all datasets. \figTable \tablePSNR To assess the denoising quality of our method, we compare its results to various baselines: \NoiseVoid, noise model based self-supervised methods (\PNtoV~\cite{Krull:2020_PN2V}, \DivNoising~\cite{prakash2020divnoising}), as well as the well-known supervised \CARE~\cite{weigert2018content} approach. While we run \NoiseVoid ourselves, the PSNR values for all other methods are taken from \cite{prakash2020divnoising}. We created a simple additional baseline by convolving the \NoiseVoid result with the same \PSF used in our own method. This baseline is referred to as \emph{N2V (conv.)}. \subsection{Effect of the Positivity Constraint} \label{sec:effectOfPosConstr} Here we discuss the effect of the positivity constraint (see Section~\ref{sec:posConstr}) on the denoising and deconvolution results. We compare our method without positivity constraint ($\lambda = 0$, see Eq.~\ref{eq:lossFull}) and with positivity constraint ($\lambda = 1$). Choosing different values for $\lambda$ did not have a noticeable effect. We find that the constraint does not provide a systematic advantage or disadvantage with respect to denoising quality (see Table~\ref{tab:results}). In Figure~\ref{fig:deconv} we compare the results visually. While it is difficult to make out any differences in the denoising results, we see a clear visual improvement of the deconvolution result when the positivity constraint is used.
While the deconvolution result without positivity constraint contains various artifacts such as random repeating structures and grid patterns, these problems largely disappear when the positivity constraint is used. It is an interesting observation that such different predicted phantom images can lead to virtually indistinguishable denoising results after convolution with the \PSF, demonstrating how ill-posed the unsupervised deconvolution problem really is. \figDeconv \subsection{Effect of the Point Spread Function} \label{sec:effectOfPSF} Here we discuss an additional experiment on the role of the \PSF used in the reconstruction and the effect of a mismatch with respect to the \PSF that actually produced the data. We use our synthetic \emph{The beetle} dataset (see Section~\ref{sec:data}), which has been convolved with a Gaussian \PSF of standard deviation $\sigma=1$ pixel width and subjected to Gaussian noise of standard deviation 100. We train our method on this data using different Gaussian \PSFs with standard deviations between $\sigma=0$ and $\sigma=2$. We used an active positivity constraint with $\lambda=1$. The results of the experiment can be found in Figure~\ref{fig:psf}. We find that the true \PSF of $\sigma=1$ gives the best results. While lower values lead to increased artifacts, similar to those produced by \NoiseVoid, larger values lead to an overly smooth result. \figPSF \section{Discussion and Outlook} \label{sec:Discussion} Here, we have proposed a novel way of improving self-supervised denoising for microscopy, making use of the fact that images are typically diffraction-limited. While our method can be easily applied, its results are often on par with more sophisticated second-generation self-supervised methods~\cite{Krull:2020_PN2V,prakash2020divnoising}. We believe that the simplicity and general applicability of our method will facilitate fast and widespread use in fluorescence microscopy, where oversampled and diffraction-limited data is the default. While the standard deviation of the \PSF is currently a parameter that has to be set by the user, we believe that future work can optimize it as a part of the training procedure. This would provide the user with a \emph{de facto} parameter-free turn-key system that could readily be applied to unpaired noisy raw data and achieve results very close to supervised training. In addition to providing a denoising result, our method outputs a deconvolved image as well. Even though deconvolution is not the focus of this work, we find that including a positivity constraint in our loss enables us to predict visually plausible results. However, the fact that dramatically different predicted deconvolved images give rise to virtually indistinguishable denoising results (see Figure~\ref{fig:deconv}) illustrates just how underconstrained the deconvolution task is. Hence, further regularization might be required to achieve deconvolution results of optimal quality. In concurrent work, Kobayashi \etal~\cite{kobayashi2020image} have generated deconvolution results in a similar fashion and report encouraging results in their evaluation. We expect that future work will quantify to what degree the positivity constraint and other regularization terms can further improve self-supervised deconvolution methods.
We believe that the use of a convolution after the network output to account for diffraction-limited imaging will in the future be combined with noise model based techniques, such as the self-supervised approaches of~\cite{Krull:2020_PN2V,laine2019high}, or with novel techniques like \DivNoising. In the latter case, this might even enable us to produce diverse deconvolution results and allow us to tackle the uncertainty introduced by the under-constrained nature of the deconvolution problem in a systematic way. \subsubsection*{Code Availability.} \label{sec:code} Our code is available at \url{https://github.com/juglab/DecoNoising}. \subsubsection*{Acknowledgments.} \label{sec:acknowledgments} We thank the Scientific Computing Facility at MPI-CBG for giving us access to their HPC cluster. \par\vfill\par \clearpage \bibliographystyle{splncs04} \bibliography{refs} \end{document}
https://openreview.net/forum?id=BcAWplCftE
https://arxiv.org/abs/2008.08414
Please evaluate the performance and significance of our paper, specifically in terms of its denoising method for microscopy images and its comparison to existing self-supervised methods.
DenoiSeg: Joint Denoising and Segmentation Summary This paper addresses the problem of image segmentation and denoising when a very few training segmentation masks are available. The problem is motivated by the need to train neural networks with a high capacity (millions of coefficients) while having only tens of ground truth segmentation masks. The authors approach the problem by combining the self-supervised denoising task with the segmentation task in one neural network optimized using a joint loss. The joint loss is a weighted contribution of denoising and segmentation loss contributions. The authors develop the joint denoising and segmentation framework as an extension to the Noise2Void work accessible at https://openaccess.thecvf.com/content_CVPR_2019/papers/Krull_Noise2Void_-_Learning_Denoising_From_Single_Noisy_Images_CVPR_2019_paper.pdf Strengths: The practical value of training a segmentation model with very few ground truth segmentation masks is very high. The novelty lies in formulating a joint loss and delivering denoised images as well as segmentation masks. Weaknesses: The paper is missing an assumption paragraph which is misleading for a reader who would like to use this technique. For example, one of the assumptions is the i.i.d. property of the noise. Another assumption is that the very few ground truth segmentation masks must be representative of the dataset. The authors showed the performance on three datasets that have spatially distributed very similar pattern/content and thus sampling is very easy. Comments: How did the authors decide on the size of the blind spot patches? The paper focused on Noise2Void is using patches of 64 x 64 pixels while this work is using patches of 128 x 128 pixels. Lines 249 – 257: The authors refer to patches and then to images. Please, verify the terminology Lines 314, 345-350: It is not clear how delta is computed. Is it AP(alpha=0.2) – AP(alpha=0.5) or AP(alpha=0.2) – AP(alpha=0.7) compared to AP(alpha=0.5)? Please, clarify. I could not follow the Figure 6 vertical axis (you might include an equation for delta). Rating: 9: Top 15% of accepted papers, strong accept Confidence: 4: The reviewer is confident but not absolutely certain that the evaluation is correct
\documentclass[runningheads]{llncs} \usepackage{amsmath,graphicx} \usepackage{textcomp} \usepackage{xspace} \usepackage{tikz} \usepackage{xcolor} \usepackage{marvosym} \usepackage{tabularx} \usepackage{dirtytalk} \usepackage{float} \makeatletter \DeclareRobustCommand\onedot{\futurelet\@let@token\@onedot} \def\@onedot{\ifx\@let@token.\else.\null\fi\xspace} \def\eg{\emph{e.g}\onedot} \def\Eg{\emph{E.g}\onedot} \def\ie{\emph{i.e}\onedot} \def\Ie{\emph{I.e}\onedot} \def\cf{\emph{cf.}\xspace} \def\Cf{\emph{Cf.}\xspace} \def\etc{\emph{etc}\onedot} \def\vs{\emph{vs}\onedot} \def\wrt{w.r.t\onedot} \def\dof{d.o.f\onedot} \def\etal{\emph{et~al}\onedot} \newcommand{\CARE}{\mbox{\textsc{CARE}}\xspace} \newcommand{\CSBDeep}{\mbox{\textsc{CSBDeep}}\xspace} \newcommand{\NoiseNoise}{\mbox{\textsc{Noise2Noise}}\xspace} \newcommand{\NoiseVoid}{\mbox{\textsc{Noise2Void}}\xspace} \newcommand{\DenoiSeg}{\mbox{\textsc{DenoiSeg}}\xspace} \newcommand{\NtoN}{\mbox{\textsc{N2N}}\xspace} \newcommand{\NtoV}{\mbox{\textsc{N2V}}\xspace} \newcommand{\UNet}{\mbox{\textsc{U-Net}}\xspace} \newcommand{\img}{\boldsymbol{x}} \newcommand{\seg}{\boldsymbol{y}} \usepackage{array} \newcolumntype{L}[1]{>{\raggedright\let\newline\\\arraybackslash\hspace{0pt}}m{#1}} \newcolumntype{C}[1]{>{\centering\let\newline\\\arraybackslash\hspace{0pt}}m{#1}} \newcolumntype{R}[1]{>{\raggedleft\let\newline\\\arraybackslash\hspace{0pt}}m{#1}} \newcommand\blfootnote[1]{% \begingroup \renewcommand\thefootnote{}\footnote{#1}% \addtocounter{footnote}{-1}% \endgroup } \newcommand\figTeaser{ \begin{figure}[t] \centering \includegraphics[width=.8\linewidth]{Figs/Teaser_1.pdf} \caption{The proposed \DenoiSeg training scheme. A \UNet is trained with a joint self-supervised denoising loss~($\mathcal{L}_d$) and a classical segmentation loss~($\mathcal{L}_s$). Both losses are weighted with respect to each other by a hyperparameter $\alpha$. In this example, $\mathcal{L}_d$ can be computed on all $3800$ training patches, while $\mathcal{L}_s$ can only be computed on the $10$ available annotated ground truth patches that are available for segmentation. } \label{fig:teaser} \end{figure} } \newcommand\figDSB{ \begin{figure}[t] \centering \begin{minipage}{.02\linewidth} \begin{tikzpicture} \draw (0, 0) node[rotate=90] {\textcolor{white}{wsp}DSB n20}; \end{tikzpicture} \end{minipage} \begin{minipage}{.97\linewidth} \includegraphics[width=.49\linewidth,trim={0.6cm 1.3cm 0.6cm 0.5cm},clip]{Figs/AP_n20_area.pdf} \includegraphics[width=.49\linewidth,trim={0.6cm 1.3cm 0.6cm 0.5cm},clip]{Figs/SEG_n20_area.pdf} \end{minipage} \begin{minipage}{.02\linewidth} \begin{tikzpicture} \draw (0, 0) node[rotate=90] {\textcolor{white}{ws}DSB n10}; \end{tikzpicture} \end{minipage} \begin{minipage}{.97\linewidth} \includegraphics[width=.49\linewidth,trim={0.6cm 1.3cm 0.6cm 0.5cm},clip]{Figs/AP_n10_area.pdf} \includegraphics[width=.49\linewidth,trim={0.6cm 1.3cm 0.6cm 0.5cm},clip]{Figs/SEG_n10_area.pdf} \end{minipage} \begin{minipage}{.02\linewidth} \begin{tikzpicture} \draw (0, 0) node[rotate=90] {\textcolor{white}{ws}DSB n0}; % \end{tikzpicture} \end{minipage} \begin{minipage}{.97\linewidth} \includegraphics[width=.49\linewidth,trim={0.6cm 0.6cm 0.6cm 0.5cm},clip]{Figs/AP_n0_area.pdf} \includegraphics[width=.49\linewidth,trim={0.6cm 0.6cm 0.6cm 0.5cm},clip]{Figs/SEG_n0_area.pdf} \end{minipage} \caption{Results for DSB n0, n10 and n20, evaluated with Average Precision (AP)~\cite{schmidt2018} and SEG-Score~\cite{ulman2017objective}. 
\DenoiSeg outperforms both baseline methods, mainly when only limited segmentation ground truth is available. Note that the advantage of our proposed method is at least partially compromised when the image data is not noisy (row 3).} \label{fig:DSB} \end{figure} } \newcommand\figDeltaNoise{ \begin{figure}[ht] \centering \begin{minipage}{\linewidth} \begin{tikzpicture} \draw (0, 0) node[inner sep=0] {\includegraphics[width=\linewidth,trim={0.6cm 0.6cm 0.8cm 0.5cm},clip]{Figs/alpha_delta_and_additional_noise.pdf}}; \draw (-5.8, 4.4) node[inner sep=0] {(a)}; \draw (0.45, 4.4) node[inner sep=0] {(b)}; \end{tikzpicture} \end{minipage} \caption{In \textbf{(a)}, we show that \DenoiSeg consistently improves results over the baseline for a broad range of hyperparameter $\alpha$ values. The results come close to what would be achievable by choosing the best possible $\alpha$ (see main text). In \textbf{(b)}, we show that adding synthetic noise can lead to improved \DenoiSeg performance. For the DSB, Fly Wing, and Mouse Nuclei data, we compare baseline results with \DenoiSeg results on the same data (n0) and with added synthetic noise (n10 and n20, see main text). } \label{fig:deltaNoise} \end{figure} } \newcommand\figQualitative{ \begin{figure}[H] \centering \begin{minipage}{\linewidth} \begin{tikzpicture} \draw (1.06, 0) node[inner sep=0] {\includegraphics[width=.96\linewidth]{Figs/dsb_qualitative.pdf}}; \draw (-5, 0) node[rotate=90] {{\textcolor{white}{g}}DSB n10{\textcolor{white}{g}}}; % \draw (-4.3, 1.6) node {Input}; \draw (-1.11, 1.6) node {{\textcolor{white}{g}}Insets{\textcolor{white}{g}}}; \draw (0.35, 1.6) node {{\textcolor{white}{g}}GT{\textcolor{white}{g}}}; \draw (1.8, 1.6) node {{\textcolor{white}{g}}Baseline{\textcolor{white}{g}}}; \draw (3.25, 1.6) node {Sequent.}; \draw (5.47, 1.9) node {$\overbrace{\text{\textcolor{white}{blablablablablablab}}}^{\text{\textbf{Ours}}}$}; \draw (4.75, 1.6) node {Segm.}; \draw (6.17, 1.6) node {{\textcolor{white}{g}}Denoised{\textcolor{white}{g}}}; \draw (-3.34, 1.24) node {\textcolor{white}{\textbf{3800 (GT for 10)}}}; \end{tikzpicture} \begin{tikzpicture} \draw (1.06, 0) node[inner sep=0] {\includegraphics[width=.96\linewidth]{Figs/flywing_qualitative.pdf}}; \draw (-5, 0) node[rotate=90] {Fly Wing n10}; \draw (-3.44, 1.24) node {\textcolor{white}{\textbf{1428 (GT for 2)}}}; \end{tikzpicture} \begin{tikzpicture} \draw (1.06, 0) node[inner sep=0] {\includegraphics[width=.96\linewidth]{Figs/mouse_qualitative.pdf}}; \draw (-5, -0.1) node[rotate=90] {{\textcolor{white}{g}}Mouse Nuclei n10}; \draw (-3.52, 1.24) node {\textcolor{white}{\textbf{908 (GT for 2)}}}; \end{tikzpicture} \end{minipage} \caption{Qualitative results on DSB n10 (first row), Fly Wing n10 (second row) and Mouse Nuclei n10 (third row). The first column shows an example test image. Numbers indicate how many noisy input and annotated ground truth (GT) patches were used for training. Note that segmentation GT was only available for at most 10 images, accounting for less than 0.27\% of the available raw data. 
Other columns show depicted inset regions, from left to right showing: raw input, segmentation GT, results of two baseline methods, and our \DenoiSeg segmentation and denoising results.} \label{fig:qualitative} \end{figure} } \newcommand\tabDenoising{ \begin{table}[h] \centering \begin{tabular}{p{0.8cm}||p{1.75cm}p{1.75cm}| p{1.75cm}p{1.75cm} | p{1.75cm}p{1.75cm}} \hline \multicolumn{1}{c||}{} & \multicolumn{2}{c|}{DSB \small{(GT for 10)}} & \multicolumn{2}{c|}{Fly Wing \small{(GT for 2)}} & \multicolumn{2}{c}{Mouse N. \small{(GT for 1)}} \\ \hline Noise & $\DenoiSeg$ & $\NoiseVoid$ & $\DenoiSeg$ & $\NoiseVoid$ & $\DenoiSeg$ & $\NoiseVoid$ \\ \hline n10 & \small{37.57$\pm$0.07} & \small{38.01$\pm$0.05} & \small{33.12$\pm$0.01} & \small{33.16$\pm$0.01} & \small{37.42$\pm$0.10} & \small{37.86$\pm$0.01} \\ n20 & \small{35.38$\pm$0.08} & \small{35.53$\pm$0.02} & \small{30.45$\pm$0.20} & \small{30.72$\pm$0.01} & \small{34.21$\pm$0.19} & \small{34.59$\pm$0.01} \\ \hline \end{tabular} \vspace{0.2cm} \caption{Comparing the denoising performance of \DenoiSeg and \NoiseVoid. Mean Peak Signal-to-Noise Ratio values (with $\pm 1$ SEM over 5 runs) are shown. Similar tables for \DenoiSeg results when more segmentation GT was available can be found online in the \DenoiSeg-Wiki. } \label{tab:denoising} \end{table} } \begin{document} \title{\DenoiSeg: Joint Denoising and Segmentation} \titlerunning{\DenoiSeg: Joint Denoising and Segmentation} \author{Tim-Oliver Buchholz\inst{\ast,1,2} \and Mangal Prakash\inst{\ast,1,2} \and Alexander Krull\inst{1,2,3} \and Florian Jug\inst{1,2,4,\text{\Letter}}} \authorrunning{T. Buchholz and M. Prakash \etal} \institute{$^1$Center for Systems Biology, Dresden, Germany\\$^2$Max Planck Institute of Molecular Cell Biology and Genetics, Dresden, Germany\\ $^3$Max Planck Institute for Physics of Complex Systems, Dresden, Germany\\ $^4$Fondatione Human Technopole, Milano, Italy\\ \Letter \: \text{jug@mpi-cbg.de}, \text{florian.jug@fht.org}} \maketitle % \blfootnote{$^\ast$ Equal contribution (alphabetical order).} \begin{abstract} Microscopy image analysis often requires the segmentation of objects, but training data for this task is typically scarce and hard to obtain. Here we propose \DenoiSeg, a new method that can be trained end-to-end on only a few annotated ground truth segmentations. We achieve this by extending \NoiseVoid\cite{krull2019noise2void}, a self-supervised denoising scheme that can be trained on noisy images alone, to also predict dense 3-class segmentations. The reason for the success of our method is that segmentation can profit from denoising, especially when performed jointly within the same network. The network becomes a denoising expert by seeing all available raw data, while co-learning to segment, even if only a few segmentation labels are available. This hypothesis is additionally fueled by our observation that the best segmentation results on high quality (very low noise) raw data are obtained when moderate amounts of synthetic noise are added. This renders the denoising-task non-trivial and unleashes the desired co-learning effect. We believe that \DenoiSeg offers a viable way to circumvent the tremendous hunger for high quality training data and effectively enables few-shot learning of dense segmentations. 
\keywords{segmentation \and denoising \and co-learning \and few shot learning} \end{abstract} \section{Introduction} \label{sec:introduction} The advent of modern microscopy techniques has enabled the routine investigation of biological processes at sub-cellular resolution. The growing amount of microscopy image data necessitates the development of automated analysis methods, with object segmentation often being one of the desired analyses. Over the years, a sheer endless array of methods have been proposed for segmentation~\cite{jug2014bioimage}, but deep learning (DL) based approaches are currently best performing~\cite{caicedo2019evaluation,moen2019deep,razzak2018deep}. Still, even the best existing methods offer plenty of scope for improvements, motivating further research in this field~\cite{schmidt2018,stringer2020cellpose,hirsch2020patchperpix}. A trait common to virtually all DL-based segmentation methods is their requirement for tremendous amounts of labeled ground truth (GT) training data, the creation of which is extraordinarily time consuming. In order to make the most out of a given amount of segmentation training data, data augmentation~\cite{shorten2019survey,zhao2019data} is used in most cases. Another way to increase the amount of available training data for segmentation is to synthetically generate it, \eg by using Generative Adversarial Networks (GANs)~\cite{ihle2019unsupervised,osokin2017gans,sandfort2019data}. However, the generated training data needs to capture all statistical properties of the real data and the respective generated labels, thereby making this approach cumbersome in its own right. For other image processing tasks, such as denoising~\cite{lehtinen2018noise2noise,weigert2018content,buchholz2019cryo}, the annotation problem has been addressed via self-supervised training~\cite{krull2019noise2void,batson2019noise2self,alex2019probabilistic,2019ppn2v}. While previous denoising approaches~\cite{weigert2018content} require pairs of noisy and clean ground truth training images, self-supervised methods can be trained directly on the noisy raw data that is to be denoised. Very recently, Prakash~\etal~\cite{prakash2019leveraging} demonstrated on various microscopy datasets that self-supervised denoising~\cite{krull2019noise2void} prior to object segmentation leads to greatly improved segmentation results, especially when only small numbers of segmentation GT images are available for training. The advantage of this approach stems from the fact that the self-supervised denoising module can be trained on the full body of available microscopy data. In this way, the subsequent segmentation module receives images that are easier to interpret, leading to an overall gain in segmentation quality even without having a lot of GT data to train on. In the context of natural images, a similar combination of denoising and segmentation was proposed by Liu~\etal~\cite{liu2017image} and Wang~\etal~\cite{wang2019segmentation}. However, both methods lean heavily on the availability of paired low- and high-quality image pairs for training their respective denoising module. Additionally, their cascaded denoising and segmentation networks make the training comparatively computationally expensive. \figTeaser Here, we present \DenoiSeg, a novel training scheme that leverages denoising for object segmentation (see Fig.~\ref{fig:teaser}). Like Prakash~\etal, we employ the self-supervised \NoiseVoid~\cite{krull2019noise2void} for denoising. 
However, while Prakash~\etal rely on two sequential steps for denoising and segmentation, we propose to use a single network to jointly predict the denoised image and the desired object segmentation. We use a simple \UNet~\cite{RFB15a} architecture, making training fast and accessible on moderately priced consumer hardware. Our network is trained on noisy microscopy data and requires only a small fraction of images to be annotated with GT segmentations. We evaluate our method on different datasets and with different amounts of annotated training images. When only small amounts of annotated training data are available, our method consistently outperforms not only networks trained purely for segmentation~\cite{chen2016dcan,guerrero2018multiclass}, but also the currently best performing training schemes proposed by Prakash~\etal~\cite{prakash2019leveraging}. \section{Methods} \label{sec:methods} We propose to jointly train a single \UNet for segmentation and denoising tasks. While for segmentation only a small amount of annotated GT labels are available, the self-supervised denoising module does benefit from all available raw images. In the following we will first discuss how these tasks can be addressed separately and then introduce a joint loss function combining the two. \subsubsection{Segmentation.} \label{sec:segmentation} We see segmentation as a 3-class pixel classification problem~\cite{chen2016dcan,guerrero2018multiclass,prakash2019leveraging} and train a \UNet to classify each pixel as foreground, background or border (this yields superior results compared to a simple classification into foreground and background~\cite{schmidt2018}). Our network uses three output channels to predict each pixel's probability of belonging to the respective class. We train it using the standard cross-entropy loss, which will be denoted as $\mathcal{L}_{s}\big( \seg_i,f(\img_i) \big)$, where $\img_i$ is the $i$-th training image, $\seg_i$ is the ground truth 3-class segmentation, and $f(\img_i)$ is the network output. \subsubsection{Self-Supervised Denoising.} \label{sec:selfsupervised_denoising} We use the \NoiseVoid setup described in~\cite{krull2019noise2void} as our self-supervised denoiser of choice. We extend the above mentioned 3-class segmentation \UNet by adding a forth output channel, which is used for denoising and trained using the \NoiseVoid scheme. \NoiseVoid uses a Mean Squared Error (MSE) loss, which is calculated over a randomly selected subset of blind spot pixels that are masked in the input image. Since the method is self-supervised and does not require ground truth, this loss $\mathcal{L}_{d}\big( \img_i,f(\img_i) \big)$ can be calculated as a function of the input image $\img_i$ and the network output~$f(\img_i)$. \subsubsection{Joint-Loss.} \label{sec:joint_loss} To jointly train our network for denoising and segmentation we use a combined loss. For a given training batch $(\img_1,\seg_1,\dots,\img_m,\seg_m)$ of $m$ images, we assume that GT segmentation is available only for a subset of the raw images. We define $\seg_i=\boldsymbol{0}$ for images where no segmentation GT is present. The loss over a batch is calculated as \begin{equation}\label{eq:loss} \mathcal{L} = \frac{1}{m}\sum_{i=1}^m \alpha \cdot \mathcal{L}_{d}\big( \img_i,f(\img_i) \big) + (1 - \alpha) \cdot \mathcal{L}_{s}\big( \seg_i,f(\img_i) \big), \end{equation} where $0\leq \alpha \leq 1$ is a tunable hyperparameter that determines the relative weight of denoising and segmentation during training. 
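% A minimal NumPy sketch of the joint loss above (Eq. 1), given for illustration only and
% not taken from the released DenoiSeg code. It assumes the network output has four
% channels (denoised image followed by three class scores), that y_i is an all-zero array
% when no segmentation GT is available, and that blind_mask marks the N2V blind-spot
% pixels. Averaging the cross-entropy over all pixels is a simplification of the
% per-image formulation in the text.
%
% import numpy as np
%
% def denoiseg_loss(x, y, out, blind_mask, alpha=0.5):
%     # x: (m, H, W) noisy inputs; y: (m, H, W, 3) one-hot GT (all zeros if unlabeled);
%     # out: (m, H, W, 4) network output; blind_mask: (m, H, W) boolean blind-spot mask.
%     l_d = np.mean((out[..., 0] - x)[blind_mask] ** 2)            # N2V-style masked MSE
%     logits = out[..., 1:] - out[..., 1:].max(axis=-1, keepdims=True)
%     probs = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)
%     # Pixel-wise cross-entropy; an all-zero y_i contributes 0, i.e. L_s(y_i = 0, f) = 0.
%     l_s = -(y * np.log(probs + 1e-8)).sum(axis=-1).mean()
%     return alpha * l_d + (1.0 - alpha) * l_s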
Note that the \NoiseVoid loss is self-supervised, therefore it can be calculated for all raw images in the batch. The cross-entropy loss however requires GT segmentation and can only be evaluated on a subset of images, where this information is available. For images where no GT segmentation is available we define $\mathcal{L}_{s}\big( \seg_i=\boldsymbol{0},f(\img_i) \big)=0$. In the setup described above, setting $\alpha=1$ corresponds to pure \NoiseVoid denoising. However, setting $\alpha=0$ does not exactly correspond to the vanilla 3-class segmentation, due to two reasons. Firstly, only some of the images are annotated but in Eq.~\ref{eq:loss} the loss is divided by the constant batch size $m$. This effectively corresponds to a reduced batch size and learning rate, compared to the vanilla method. Secondly, our method applies \NoiseVoid masking of blind spot pixels in the input image. \subsubsection{Implementation Details.} \label{sec:implementation} Our \DenoiSeg implementation is publicly available\footnote{https://github.com/juglab/DenoiSeg}. The proposed network produces four output channels corresponding to denoised images, foreground, background and border segmentation. For all our experiments we use a \UNet architecture of depth $4$, convolution kernel size of $3$, a linear activation function in the last layer, $32$ initial feature maps, and batch normalization during training. All networks are trained for $200$ epochs with an initial learning rate of $0.0004$. The learning rate is reduced if the validation loss is not decreasing over ten epochs. For training we use $8$-fold data augmentation by adding $90^\circ$ rotated and flipped versions of all images. \section{Experiments and Results} \label{sec:results} We use three publicly available datasets for which GT annotations are available (data available at \DenoiSeg-Wiki\footnote{https://github.com/juglab/DenoiSeg/wiki}). For each dataset we generate noisy versions by adding pixel-wise independent Gaussian noise with zero-mean and standard deviations of $10$ and $20$. The dataset names are extended by n0, n10, and n20 to indicate the respective additional noise. For network training, patches of size $128 \times 128$ are extracted and randomly split into training ($85\%$) and validation ($15\%$) sets. \begin{itemize} \item \textbf{DSB.} From the Kaggle 2018 Data Science Bowl challenge, we take the same images as used by ~\cite{prakash2019leveraging}. The training and validation sets consist of $3800$ and $670$ patches respectively, while the test set counts $50$ images. \item \textbf{Fly Wing.} This dataset from our collaborators consist of $1428$ training and $252$ validation patches of a membrane labeled fly wing. The test set is comprised of $50$ additional images. \item \textbf{Mouse Nuclei.} Finally, we choose a challenging dataset depicting diverse and non-uniformly clustered nuclei in the mouse skull, consisting of $908$ training and 160 validation patches. The test set counts $67$ additional images. \end{itemize} \figQualitative For each dataset, we train \DenoiSeg and compare it to two different competing methods: \DenoiSeg trained purely for segmentation with $\alpha = 0$ (referred to as \textit{Baseline}), and a sequential scheme based on~\cite{prakash2019leveraging} that first trains a denoiser and then the aforementioned baseline (referred to as \textit{Sequential}). 
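% An illustrative sketch of how the noisy dataset variants (n10/n20) and the 128 x 128
% patch extraction with the 85%/15% split described above could be generated. The released
% data pipeline may differ (e.g. in patch overlap and shuffling), so this function is an
% assumption rather than the authors' code.
%
% import numpy as np
%
% def make_noisy_patches(images, sigma, patch=128, val_frac=0.15, seed=0):
%     # images: iterable of 2-D arrays; sigma: 0, 10 or 20 for n0/n10/n20.
%     rng = np.random.default_rng(seed)
%     patches = []
%     for img in images:
%         noisy = img + rng.normal(0.0, sigma, img.shape)
%         h, w = noisy.shape
%         for i in range(0, h - patch + 1, patch):
%             for j in range(0, w - patch + 1, patch):
%                 patches.append(noisy[i:i + patch, j:j + patch])
%     patches = np.stack(patches)
%     rng.shuffle(patches)                      # shuffle along the first axis
%     n_val = int(round(val_frac * len(patches)))
%     return patches[n_val:], patches[:n_val]   # (training, validation)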
We chose our network with $\alpha = 0$ as baseline to mitigate the effect of batch normalization on the learning rate as described in Section~\ref{sec:methods}. A comparison of our baseline to a vanilla 3-class \UNet with the same hyperparameters leads to very similar results and can be found in the supplementary material. Furthermore, we investigate \DenoiSeg performance when trained with different amounts of available GT segmentation images. This is done by picking random subsets of various sizes from the available GT annotations. Note that the self-supervised denoising task still has access to all raw input images. A qualitative comparison of \DenoiSeg results with other baselines (see Figure~\ref{fig:qualitative}) indicates the effectiveness of our method. As evaluation metrics, we use Average Precision (AP)~\cite{everingham2010pascal} and SEG~\cite{ulman2017objective} scores. The AP metric measures both instance detection and segmentation accuracy while SEG captures the degree of overlap between instance segmentations and GT. To compute the scores, the predicted foreground channel is thresholded and connected components are interpreted as instance segmentations. The threshold values are optimized for each measure on the validation data. All conducted experiments were repeated $5$ times and the mean scores along with $\pm 1$ standard error of the mean are reported in Figure~\ref{fig:DSB}. \subsubsection{Performance with Varying Quantities of GT Data and Noise.} \figDSB Figure~\ref{fig:DSB} shows the results of \DenoiSeg with $\alpha = 0.5$ (equally weighting denoising and segmentation losses) for DSB n0, n10 and n20 datasets. For low numbers of GT training images, \DenoiSeg outperforms all other methods. Figures for the other two datasets can be found in the supplementary material. Results for all performed experiments showing overall similar trends and can be found on the \DenoiSeg-Wiki. \subsubsection{Importance of $\alpha$.} \figDeltaNoise We further investigated the sensitivity of our results to the hyperparameter $\alpha$. In Figure~\ref{fig:deltaNoise}(a) we look at the difference in resulting AP ($\Delta$) when instead of $\alpha=0.5$ we use values of $\alpha=0.3$ and $\alpha=0.7$. Additionally we also compare to the Baseline and results that use (the a priori unknown) best $\alpha$. The best $\alpha$ for each trained network is found by a grid search for $\alpha \in \{0.1, 0.2, \dots, 0.9\}$. Figure~\ref{fig:deltaNoise}(a) shows that our proposed method is extraordinarily robust with respect to the choice of $\alpha$. Results for the other datasets showing similar trends can be found in the supplementary material. \subsubsection{Noisy Inputs Lead to Elevated Segmentation Performance.} Here we want to elaborate on the interesting observation we made in Figure~\ref{fig:DSB}: when additional noise is synthetically added to the raw data, the segmentation performance reaches higher AP and SEG scores, even though segmentation should be more difficult in the presence of noise. We investigate this phenomenon in Figure~\ref{fig:deltaNoise}(b). We believe that in the absence of noise the denoising task can be solved trivially, preventing the regularizing effect that allows \DenoiSeg to cope with small amounts of training data. \sloppy \subsubsection{Evaluation of Denoising Performance.} Although we are not training \DenoiSeg networks for their denoising capabilities, it is interesting to know how their denoising predictions compare to dedicated denoising networks. 
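% A small sketch of the evaluation procedure described above: the predicted foreground
% channel is thresholded, connected components are interpreted as instance segmentations,
% and the threshold is selected on the validation data. score_fn is a placeholder for the
% AP or SEG implementations cited in the paper, and the threshold grid is an assumption.
%
% import numpy as np
% from scipy.ndimage import label
%
% def instances_from_prediction(prob_foreground, threshold):
%     # Connected components of the thresholded foreground map become instances.
%     instances, _ = label(prob_foreground > threshold)
%     return instances
%
% def pick_threshold(val_probs, val_gts, score_fn, grid=np.linspace(0.1, 0.9, 17)):
%     # Choose the threshold maximizing the mean validation score (e.g. AP or SEG).
%     scores = [np.mean([score_fn(instances_from_prediction(p, t), g)
%                        for p, g in zip(val_probs, val_gts)]) for t in grid]
%     return grid[int(np.argmax(scores))]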
Table~\ref{tab:denoising} compares our denoising results with results obtained by \NoiseVoid~\cite{krull2019noise2void}. It can be seen that co-learning segmentation is only marginally impeding the network's ability to denoise its inputs. \fussy \tabDenoising \section{Discussion} \label{sec:discussion} Here we have shown that $(i)$~joint segmentation and self-supervised denoising leads to improved segmentation quality when only limited amounts of segmentation ground truth is available (Figures~\ref{fig:qualitative} and~\ref{fig:DSB}), $(ii)$~the hyperparameter $\alpha$ is modulating the quality of segmentation results but leads to similarly good solutions for a broad range of values, and $(iii)$~results on input data that are subject to a certain amount of intrinsic or synthetically added noise lead to better segmentations than \DenoiSeg trained on essentially noise-free raw data. We reason that the success of our proposed method originates from the fact that similar \say{skills} are required for denoising and segmentation. The segmentation task can profit from denoising, and compared to~\cite{prakash2019leveraging}, performs even better when jointly trained within the same network. When a low number of annotated images are available, denoising is guiding the training and the features learned from this task, in turn, facilitate segmentation. We believe that \DenoiSeg offers a viable way to enable few-shot learning of dense segmentations and can therefore be applied in cases where other methods cannot. We also show that the amount of required training data can be so little, even ad-hoc label generation by human users is a valid possibility, expanding the practical applicability of our proposed method manyfold. \subsubsection*{Acknowledgments.} \label{sec:acknowledgments} The authors would like to acknowledge Romina Piscitello-Gomez and Suzanne Eaton from MPI-CBG for fly wing data, Diana Afonso and Jacqueline Tabler from MPI-CBG for mouse nuclei data and the Scientific Computing Facility at MPI-CBG for giving us access to their HPC cluster. \newpage \bibliographystyle{splncs04} \bibliography{refs} \end{document}
https://openreview.net/forum?id=UWm7zRhPoMX
https://arxiv.org/abs/2005.02987
Please evaluate the effectiveness of our paper in addressing the problem of image segmentation and denoising with limited training segmentation masks, and provide feedback on any assumptions or clarifications that should be included.
Potentially VERY useful public dataset, some questions regarding neglected related work

While the submitted manuscript is indeed aiming at a joint denoising and super-resolution task, there is quite a body of work that was not included in the related-work sections or in the comparative parts of the paper (CARE, N2V, PN2V, etc.). I have the feeling that the utility of the data should be valued higher than the incompleteness of the comparisons, but if the camera-ready version could at least acknowledge the broader existing literature, it would certainly be a positive.

With respect to the data, its availability appears to still be pending to some degree. From the GitHub repo of the paper:
```
To those who have cloned or forked our repository, we now removed the png data and are working with the raw data pre-processed only with a single global z-score normalization. All the consequent modifications are being made. The full raw data will be made public very soon, and pretrained models (with raw data) will be made available by mid July.
```
In my opinion it would be highly desirable to bring the work on this public dataset to completion together with the submission of the camera-ready version.

Rating: 7: Good paper, accept
Confidence: 5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature
\documentclass[runningheads]{llncs} \usepackage{graphicx} \usepackage{comment} \usepackage{amsmath,amssymb} % \usepackage{xcolor} \usepackage{subfigure} \usepackage{array} \usepackage{booktabs} \usepackage{colortbl} \usepackage{hhline} \usepackage{arydshln} \usepackage{verbatim} % \usepackage{gensymb} % \usepackage{multirow} \usepackage{tabu} \usepackage{epsfig} \usepackage{caption} \usepackage{ulem} \usepackage[pagebackref=true,breaklinks=true,letterpaper=true,colorlinks,bookmarks=false]{hyperref} \begin{document} \newcommand{\fullname}{\textbf{W}idefield\textbf{2S}IM} \newcommand{\name}{W2S} \newcommand\blfootnote[1]{% \begingroup \renewcommand\thefootnote{}\footnote{#1}% \addtocounter{footnote}{-1}% \endgroup } \pagestyle{headings} \mainmatter \def\ECCVSubNumber{2} % \title{W2S: Microscopy Data with Joint Denoising and Super-Resolution for Widefield to SIM Mapping} \titlerunning{W2S} \authorrunning{R. Zhou et al.} \author{\author{Ruofan Zhou\inst{*}\orcidID{0000-0002-5645-4541} \and Majed El Helou\inst{*}\orcidID{0000-0002-7469-2404} \and Daniel Sage\orcidID{0000-0002-1150-1623} \and Thierry Laroche \and Arne Seitz \and\\ Sabine S\"usstrunk\orcidID{0000-0002-0441-6068}} \authorrunning{R. Zhou et al.} \institute{\'Ecole Poletechnique F\'ed\'erale de Lausanne (EPFL), Switzerland \\ \email{\{ruofan.zhou,majed.elhelou,sabine.susstrunk\}@epfl.ch}}} \maketitle \begin{abstract} \blfootnote{$^*$ The first two authors have similar contributions.} In fluorescence microscopy live-cell imaging, there is a critical trade-off between the signal-to-noise ratio and spatial resolution on one side, and the integrity of the biological sample on the other side. To obtain clean high-resolution (HR) images, one can either use microscopy techniques, such as structured-illumination microscopy (SIM), or apply denoising and super-resolution (SR) algorithms. However, the former option requires multiple shots that can damage the samples, and although efficient deep learning based algorithms exist for the latter option, no benchmark exists to evaluate these algorithms on the joint denoising and SR (JDSR) tasks. To study JDSR on microscopy data, we propose such a novel JDSR dataset, \fullname{} (\name{}), acquired using a conventional fluorescence widefield and SIM imaging. \name{} includes 144,000 real fluorescence microscopy images, resulting in a total of 360 sets of images. A set is comprised of noisy low-resolution (LR) widefield images with different noise levels, a noise-free LR image, and a corresponding high-quality HR SIM image. W2S allows us to benchmark the combinations of 6 denoising methods and 6 SR methods. We show that state-of-the-art SR networks perform very poorly on noisy inputs. Our evaluation also reveals that applying the best denoiser in terms of reconstruction error followed by the best SR method does not necessarily yield the best final result. Both quantitative and qualitative results show that SR networks are sensitive to noise and the sequential application of denoising and SR algorithms is sub-optimal. Lastly, we demonstrate that SR networks retrained end-to-end for JDSR outperform any combination of state-of-the-art deep denoising and SR networks\footnote{Code and data available at \url{https://github.com/IVRL/w2s}}. 
\keywords{Image Restoration Dataset, Denoising, Super-resolution, Microscopy Imaging, Joint Optimization} \end{abstract} \newcommand{\etal}{\textit{et al.}} \section{Introduction} \label{sec:introduction} \newcommand{\teaserimg}[1]{\includegraphics[width=0.115\linewidth,clip]{#1}} \begin{figure}[t] \centering \begin{tabu}{cccccccc} \rowfont{\tiny} \multicolumn{8}{c}{Single Channel}\\ \teaserimg{IMAGES/dataset_imgs/003_0/full_frame.png}& \teaserimg{IMAGES/dataset_imgs/003_0/avg1.png}& \teaserimg{IMAGES/dataset_imgs/003_0/avg2.png}& \teaserimg{IMAGES/dataset_imgs/003_0/avg4.png}& \teaserimg{IMAGES/dataset_imgs/003_0/avg8.png}& \teaserimg{IMAGES/dataset_imgs/003_0/avg16.png}& \teaserimg{IMAGES/dataset_imgs/003_0/avg400.png}& \teaserimg{IMAGES/dataset_imgs/003_0/sim.png}\\ \teaserimg{IMAGES/dataset_imgs/008_1/full_frame.png}& \teaserimg{IMAGES/dataset_imgs/008_1/avg1.png}& \teaserimg{IMAGES/dataset_imgs/008_1/avg2.png}& \teaserimg{IMAGES/dataset_imgs/008_1/avg4.png}& \teaserimg{IMAGES/dataset_imgs/008_1/avg8.png}& \teaserimg{IMAGES/dataset_imgs/008_1/avg16.png}& \teaserimg{IMAGES/dataset_imgs/008_1/avg400.png}& \teaserimg{IMAGES/dataset_imgs/008_1/sim.png}\\ \rowfont{\tiny} \multicolumn{8}{c}{Multi Channel}\\ \teaserimg{IMAGES/dataset_imgs/010/full_frame.png}& \teaserimg{IMAGES/dataset_imgs/010/avg1.png}& \teaserimg{IMAGES/dataset_imgs/010/avg2.png}& \teaserimg{IMAGES/dataset_imgs/010/avg4.png}& \teaserimg{IMAGES/dataset_imgs/010/avg8.png}& \teaserimg{IMAGES/dataset_imgs/010/avg16.png}& \teaserimg{IMAGES/dataset_imgs/010/avg400.png}& \teaserimg{IMAGES/dataset_imgs/010/sim.png}\\ \teaserimg{IMAGES/dataset_imgs/013/full_frame.png}& \teaserimg{IMAGES/dataset_imgs/013/avg1.png}& \teaserimg{IMAGES/dataset_imgs/013/avg2.png}& \teaserimg{IMAGES/dataset_imgs/013/avg4.png}& \teaserimg{IMAGES/dataset_imgs/013/avg8.png}& \teaserimg{IMAGES/dataset_imgs/013/avg16.png}& \teaserimg{IMAGES/dataset_imgs/013/avg400.png}& \teaserimg{IMAGES/dataset_imgs/013/sim.png}\\ \rowfont{\tiny} Full frame & Raw crop & 2$\times$ Average & 4$\times$ Average &8$\times$ Average &16$\times$ Average& Noise-free LR & Target HR \end{tabu} \caption{Example of image sets in the proposed \name. We obtain LR images with 5 different noise levels by either taking a single raw image or averaging different numbers of raw images of the same field of view. The more images we average, the lower the noise level, as shown in the different columns of the figure. The noise-free LR images are the average of 400 raw images, and the HR images are obtained using structured-illumination microscopy (SIM)~\cite{gustafsson2000surpassing}. The multi-channel images are formed by mapping the three single-channel images of different wavelengths to RGB. A gamma correction is applied for better visualization. Best viewed on screen.} \label{fig:teaser} \end{figure} Fluorescence microscopy allows to visualize sub-cellular structures and protein-protein interaction at the molecular scale. However, due to the weak signals and diffraction limit, fluorescence microscopy images suffer from high noise and limited resolution. One way to obtain high-quality, high-resolution (HR) microscopy images is to leverage super-resolution fluorescence microscopy, such as structure illumination microscopy (SIM)~\cite{gustafsson2000surpassing}. This technique requires multiple captures with several parameters requiring expert tuning to get high-quality images. Multiple or high-intensity-light acquisitions can cause photo-bleach and even damage the samples. 
The imaged cells could be affected and, if imaged in sequence for live tracking, possibly killed. This is because a single SIM acquisition already requires a set of captures with varying structured illumination. Hence, a large set of SIM captures would add up to high illumination and an overhead in capture time that is detrimental to imaging and tracking of live cells. Therefore, developing an algorithm to effectively denoise and super-resolve a fluorescence microscopy image is of great importance to biomedical research. However, a high-quality dataset is needed to benchmark and evaluate joint denoising and super-resolution (JDSR) on microscopy data. Deep-learning-based methods in denoising~\cite{anwar2019real,tai2017memnet,zhang2017beyond,el2020blind} and SR~\cite{wang2018esrgan,zhang2018image,zhang2018residual} today are outperforming classical signal processing approaches. A major limitation in the literature is, however, the fact that these two restoration tasks are addressed separately. This is in great part due to a missing dataset that would allow both to train and to evaluate JDSR. Such a dataset must contain aligned pairs of LR and HR images, with noise and noise-free LR images, to allow retraining retrain prior denoising and SR methods for benchmarking the consecutive application of a denoiser and an SR network as well as candidate one-shot JDSR methods. In this paper, we present such a dataset, which, to the best of our knowledge, is the first JDSR dataset. This dataset allows us to evaluate the existing denoising and SR algorithms on microscopy data. We leverage widefield microscopy and SIM techniques to acquire data fulfilling the described requirements above. Our noisy LR images are captured using widefield imaging of human cells. We capture a total of 400 replica raw images per field of view. We average several of the LR images to obtain images with different noise levels, and all of the 400 replicas to obtain the noise-free LR image. Using SIM imaging~\cite{gustafsson2000surpassing}, we obtain the corresponding high-quality HR images. Our resulting \fullname{} (\name{}) dataset consists of 360 sets of LR and HR image pairs, with different fields of view and acquisition wavelengths. Visual examples of the images in \name{} are shown in Fig.~\ref{fig:teaser}. We leverage our JDSR dataset to benchmark different approaches for denoising and SR restoration on microscopy images. We compare the sequential use of different denoisers and SR methods, of directly using an SR method on a noisy LR image, and of using SR methods on the noise-free LR images of our dataset for reference. We additionally evaluate the performance of retraining SR networks on our JDSR dataset. Results show a significant drop in the performance of SR networks when the low-resolution (LR) input is noisy compared to it being noise-free. We also find that the consecutive application of denoising and SR achieves better results. It is, however, not as performing in terms of RMSE and perceptual texture reconstruction as training a single model on the JDSR task, due to the accumulation of error. The best results are thus obtained by training a single network for the joint optimization of denoising and SR. In summary, we create a microscopy JDSR dataset, \name{}, containing noisy images with 5 noise levels, noise-free LR images, and the corresponding high-quality HR images. We analyze our dataset by comparing the noise magnitude and the blur kernel of our images to those of existing denoising and SR datasets. 
We benchmark state-of-the-art denoising and SR algorithms on \name{}, by evaluating different settings and on different noise levels. Results show the networks can benefit from joint optimization. \section{Related Work} \subsection{Biomedical Imaging Techniques for Denoising and Super-resolution} Image averaging of multiple shots is one of the most employed methods to obtain a clean microscopy image. This is due to its reliability and to avoid the potential blurring or over-smoothing effects of denoisers. For microscopy experiments requiring long observation and minimal degradation of specimens, low-light conditions and short exposure times are, however, preferred as multiple shots might damage the samples. To reduce the noise influence and increase the resolution, denoising methods and SR imaging techniques are leveraged. To recover a clean image from a single shot, different denoising methods have been designed, including PURE-LET~\cite{luisier2011image}, EPLL~\cite{zoran2011learning}, and BM3D~\cite{BM3D}. Although these methods provide promising results, recent deep learning methods outperform them by a big margin~\cite{zhang2019poisson}. To achieve resolution higher than that imposed by the diffraction limit, a variety of SR microscopy techniques exist, which achieve SR either by spatially modulating the fluorescence emission using patterned illumination (\textit{e.g.}, STED~\cite{hein2008stimulated} and SIM~\cite{gustafsson2000surpassing}), or by stochastically switching on and off individual molecules using photo-switchable probes (\textit{e.g.}, STORM~\cite{rust2006sub}), or photo-convertible fluorescent proteins (\textit{e.g.}, PALM~\cite{shroff2008live}). However, all of these methods require multiple shots over a period of time, which is not suitable for live cells because of the motion and potential damage to the cell. Thus, in this work, we aim to develop a deep learning method to reconstruct HR images from a single microscopy capture. \subsection{Datasets for Denoising and Super-resolution} \label{sec:work} Several datasets have commonly been used in benchmarking SR and denoising, including Set5~\cite{bevilacqua2012low}, Set14~\cite{zeyde2010single}, BSD300~\cite{martin2001database}, Urban100~\cite{huang2015single}, Manga109~\cite{matsui2017sketch}, and DIV2K~\cite{timofte2018ntire}. None of these datasets are optimized for microscopy and they only allow for synthetic evaluation. Specifically, the noisy inputs are generated by adding Gaussian noise for testing denoising algorithms, and the LR images are generated by downsampling the blurred HR images for testing SR methods. These degradation models deviate from the degradations encountered in real image capture~\cite{chen2019camera}. To better take into account realistic imaging characteristics and thus evaluate denoising and SR methods in real scenarios, real-world denoising and SR datasets have recently been proposed. Here we discuss these real datasets and compare them to our proposed \name{}. \noindent\textbf{Real Denoising Dataset } Only a few datasets allow to quantitatively evaluate denoising algorithms on real images, such as DND~\cite{plotz2017benchmarking} and SSID~\cite{abdelhamed2018high}. These datasets capture images with different noise levels, for instance by changing the ISO setting at capture. More related to our work, Zhang~\etal{}~\cite{zhang2019poisson} collect a dataset of microscopy images. 
All three datasets are designed only for denoising, and no HR images are provided that would allow them to be used for SR evaluation. According to our benchmark results, the best denoising algorithm does not necessarily provide the best input for the downstream SR task, and the JDSR learning is the best overall approach. This suggests a dataset on joint denoising and SR can provide a more comprehensive benchmark for image restoration. \noindent\textbf{Real Super-resolution Dataset } Recently, capturing LR and HR image pairs by changing camera parameters has been proposed. Chen~\etal{} collect 100 pairs of images of printed postcards placed at different distances. SR-RAW~\cite{zhang2019zoom} consists of 500 real scenes captured with multiple focal lengths. Although this dataset provides real LR-HR pairs, it suffers from misalignment due to the inevitable perspective changes or lens distortion. Cai~\etal{} thus introduce an iterative image registration scheme into the registration of another dataset, RealSR~\cite{cai2019toward}. However, to have high-quality images, all these datasets are captured with low ISO setting, and the images thus contain very little noise as shown in our analysis. Qian~\etal{} propose a dataset for joint demosaicing, denoising and SR~\cite{qian2019trinity}, but the noise in their dataset is simulated by adding white Gaussian noise. Contrary to these datasets, our proposed \name{} is constructed using SR microscopy techniques~\cite{gustafsson2000surpassing}, all pairs of images are well aligned, and it contains raw LR images with different noise levels and the noise-free LR images, thus enabling the benchmarking of both denoising and SR under real settings. \subsection{Deep Learning based Image Restoration} Deep learning based methods have shown promising results on various image restoration tasks, including denoising and SR. We briefly present prior work and the existing problems that motivate joint optimization. \noindent\textbf{Deep Learning for Denoising } Recent deep learning approaches for image denoising achieve state-of-the-art results on recovering the noise-free images from images with additive noise.% Whether based on residual learning~\cite{zhang2017beyond}, using memory blocks~\cite{tai2017memnet}, bottleneck architecture~\cite{weigert2018content}, % attention mechanisms~\cite{anwar2019real}, internally modeling Gaussian noise parameters~\cite{el2020blind}, these deep learning methods all require training data. For real-world raw-image denoising, the training data should include noisy images with a Poisson noise component, and a corresponding aligned noise-free image, which is not easy to acquire. % Some recent self-supervised methods can learn without having training targets~\cite{batson2019noise2self,krull2019noise2void,lehtinen2018noise2noise}, however, their performance does not match that of supervised methods. We hence focus on the better-performing supervised methods in our benchmark, since targets are available. All these networks are typically evaluated only on the denoising task, often only on the one they are trained on. They optimize for minimal squared pixel error, leading to potentially smoothed out results that favour reconstruction error at the expense of detail preservation. When a subsequent task such as SR is then applied on the denoised outputs from these networks, the quality of the final results does not, as we see in our benchmark, necessarily correspond to the denoising performance of the different approaches. 
This highlights the need for a more comprehensive perspective that jointly considers both restoration tasks. \noindent\textbf{Deep Learning for Super-resolution } Since the first convolutional neural network for SR~\cite{dong2014learning} outperformed conventional methods on synthetic datasets, many new architectures~\cite{kim2016accurate,lim2017enhanced,shi2016real,vasu2018analyzing,wang2018esrgan,zhang2018image,zhang2018residual} and loss functions~\cite{johnson2016perceptual,ledig2017photo,sajjadi2017enhancenet,zhang2019ranksrgan,zhang2019image} have been proposed to improve the effectiveness and the efficiency of the networks. To enable the SR networks generalize better on the real-world LR images where the degradation is unknown, works have been done on kernel prediction~\cite{cai2019toward,gu2019blind} and kernel modeling~\cite{zhang2019deep,zhou2019kernel}. However, most of the SR networks assume that the LR images are noise-free or contain additive Gaussian noise with very small variance. Their predictions are easily affected by noise if the distribution of the noise is different from their assumptions~\cite{choi2019evaluating}. This again motivates a joint approach developed for the denoising and SR tasks. \noindent\textbf{Joint Optimization in Deep Image Restoration } Although a connection can be drawn between the denoising and super-resolution tasks in the frequency domain~\cite{elhelou2020stochastic}, their joint optimization was not studied before due to the lack of a real benchmark. Recent studies have shown the performance of joint optimization in image restoration, for example, the joint demosaicing and denoising~\cite{gharbi2016deep,klatzer2016learning}, joint demosaicing and super-resolution~\cite{zhang2019zoom,zhou2018deep}. All these methods show that the joint solution outperforms the sequential application of the two stages. More relevant to JDSR, % Xie~\etal{}~\cite{xie2015joint} present a dictionary learning approach with constraints tailored for depth maps, and Miao~\etal{}~\cite{miao2020handling} propose a cascade of two networks for joint denoising and deblurring, evaluated on synthetic data only. Similarly, our results show that a joint solution for denoising and SR also obtains better results than any sequential application. Note that our W2S dataset allows us to draw such conclusions on \textit{real} data, rather than degraded data obtained through simulation. \section{Joint Denoising and Super-Resolution Dataset for Widefield to SIM Mapping} In this section, we describe the experimental setup that we use to acquire the sets of LR and HR images and present an analysis of the noise levels and blur kernels of our dataset. \subsection{Structured-Illumination Microscopy} \label{sec:sim} Structured-illumination microscopy (SIM) is a technique used in microscopy imaging that allows samples to be captured with a higher resolution than the one imposed by the physical limits of the imaging system~\cite{gustafsson2000surpassing}. Its operation is based on the interference principle of the Moir{\'e} effect. We present how SIM works in more detail in our supplementary material. We use SIM to extend the resolution of standard widefield microscopy images. This allows us to obtain aligned LR and HR image pairs to create our dataset. The acquisition details are described in the next section. \subsection{Data Acquisition} \label{sec:acquisition} We capture the LR images of the \name{} dataset using widefield microscopy~\cite{verveer1999comparison}. 
Images are acquired with a high-quality commercial fluorescence microscope and with real biological samples, namely, human cells. \noindent\textbf{Widefield Images } A time-lapse widefield of 400 images is acquired using a Nikon SIM setup (Eclipse T1) microscope. The details of the setup are given in the supplementary material. In total, we capture 120 different fields-of-view (FOVs), each FOV with 400 captures in 3 different wavelengths. All images are \textit{raw}, \textit{i.e.}, are linear with respect to focal plane illuminance, and are made up of $512 \times 512$ pixels. We generate different noise-level images by averaging 2, 4, 8, and 16 raw images of the same FOV. The larger the number of averaged raw images is, the lower the noise level. The noise-free LR image is estimated as the average of all 400 captures of a single FOV. Examples of images with different noise levels and the corresponding noise-free LR images are presented in Fig.~\ref{fig:teaser}. \noindent\textbf{SIM Imaging } The HR images are captured using SIM imaging. We acquire the SIM images using the same Nikon SIM setup (Eclipse T1) microscope as above. We present the details of the setup in the supplementary material. The HR images have a resolution that is higher by a factor of 2, resulting in $1024 \times 1024$ pixel images. \subsection{Data Analysis} \label{sec:ana} \name{} includes 120 different FOVs, each FOV is captured in 3 channels, corresponding to the wavelengths 488nm, 561nm and 640nm. As the texture of the cells is different and independent across different channels, the different channels can be considered as different images, thus resulting in 360 views. For each view, 1 HR image and 400 LR images are captured. We obtain LR images with different noise levels by averaging different numbers of images of the same FOV and the same channel. In summary, \name{} provides 360 different sets of images, each image set includes LR images with 5 different noise levels (corresponding to 1, 2, 4, 8, and 16 averaged LR images), the corresponding noise-free LR image (averaged over 400 LR images) and the corresponding HR image acquired with SIM. The LR images have dimensions $512 \times 512$, and the HR images $1024 \times 1024$. To quantitatively evaluate the difficulty of recovering the HR image from the noisy LR observation in \name{}, we analyze the degradation model relating the LR observations to their corresponding HR images. We adopt a commonly used degradation model~\cite{chen2019camera,dong2014learning,gu2019blind,zhou2019kernel}, with an additional noise component, % \begin{equation}\label{eq:LRdegradation} I_{LR}^{noisy} = (I_{HR} \circledast k) \downarrow_m + n, \end{equation} where $I_{LR}^{noisy}$ and $I_{HR}$ correspond, respectively, to the noisy LR observation and the HR image, $\circledast$ is the convolution operation, $k$ is a blur kernel, $\downarrow_m$ is a downsampling operation with a factor of $m$, and $n$ is the additive noise. Note that $n$ is usually assumed to be zero in most of the SR networks' degradation models, while it is not the case for our dataset. As the downsampling factor $m$ is equal to the targeted super-resolution factor, it is well defined for each dataset. We thus analyze in what follows the two unknown variables of the degradation model for \name{}; namely the noise $n$ and the blur kernel $k$. 
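% An illustrative simulation of the degradation model in Eq. (1). This is not how W2S is
% acquired (its LR images are real widefield captures); it only shows the roles of the
% kernel k, the downsampling factor m and the Poisson-Gaussian noise. The scaled-Poisson-
% plus-Gaussian parameterization and the values of noise_a and noise_sigma are assumptions.
%
% import numpy as np
% from scipy.signal import fftconvolve
%
% def degrade(hr, k, m=2, noise_a=0.1, noise_sigma=1.0, rng=None):
%     rng = rng or np.random.default_rng(0)
%     blurred = fftconvolve(hr, k, mode='same')                         # I_HR convolved with k
%     lr = blurred[::m, ::m]                                            # downsampling by m
%     signal = rng.poisson(np.clip(lr, 0, None) / noise_a) * noise_a    # Poisson component
%     return signal + rng.normal(0.0, noise_sigma, lr.shape)            # Gaussian component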
Comparing to other denoising datasets, \name{} contains 400 noisy images for each view, DND~\cite{choi2019evaluating} contains only 1, SSID~\cite{abdelhamed2018high} contains 150, and FMD~\cite{zhang2019poisson}, which also uses widefield imaging, contains 50. \name{} can thus provide a wide range of noise levels by averaging a varying number of images out of the 400. In addition, \name{} provides LR and HR image pairs that do not suffer from misalignment problems often encountered in SR datasets. \noindent\textbf{Noise Estimation } We use the noise modeling method in~\cite{foi2008practical} to estimate the noise magnitude in raw images taken from \name{}, from the denoising dataset FMD~\cite{zhang2019poisson}, and from the SR datasets RealSR~\cite{cai2019toward} and City100~\cite{chen2019camera}. The approach of~\cite{foi2008practical} models the noise as Poisson-Gaussian. The measured noisy pixel intensity is given by $y=x+n_P(x)+n_G$, where $x$ is the noise-free pixel intensity, $n_G$ is zero-mean Gaussian noise, and $x+n_P(x)$ follows a Poisson distribution of mean $ax$ for some $a>0$. This approach yields an estimate for the parameter $a$ of the Poisson distribution. % We evaluate the Poisson parameter of the noisy images from the three noise levels (obtained by averaging 1, 4 and 8 images) of \name{}, the raw noisy images of FMD, and the LR images of the SR datasets for comparison. We show the mean of the estimated noise magnitude for the different datasets in Fig.~\ref{fig:noise_stats}. We see that the raw noisy images of \name{} have a high noise level, comparable to that of FMD. On the other hand, the estimated noise parameters of the SR datasets are almost zero, up to small imprecision, and are thus significantly lower than even the estimated noise magnitude of the LR images from the lowest noise level in \name{}. Our evaluation highlights the fact that the additive noise component is not taken into consideration in current state-of-the-art SR datasets. The learning-based SR methods using these datasets are consequently not tailored to deal with noisy inputs that are common in many practical applications, leading to potentially poor performance. In contrast, \name{} contains images with high (and low) noise magnitude comparable to the noise magnitude of a recent denoising dataset~\cite{zhang2019poisson}. \begin{figure}[t] \centering \subfigure[Estimated noise (log)]{ \includegraphics[width=0.45\linewidth,height=0.31\linewidth]{IMAGES/dataset_imgs/noise.png} \label{fig:noise_stats} } \subfigure[Estimated kernels]{ \includegraphics[width=0.45\linewidth,trim={0 0 0 7},clip,height=0.31\linewidth]{IMAGES/dataset_imgs/kernel.png} \label{fig:kernel_stats} } \caption{Noise and kernel estimation on images from different datasets. A comparably-high noise level and a wide kernel indicate that the HR images of \name{} are challenging to recover from the noisy LR observation.} \label{fig:dataset_stats} \end{figure} \noindent\textbf{Blur Kernel Estimation } We estimate the blur kernel $k$ shown in Eq.~\eqref{eq:LRdegradation} as \begin{equation} k = \underset{k}{argmin} ||I_{LR}^{noise-free}\uparrow^{bic} - k \circledast I_{HR} ||^2_2, \end{equation} where $I_{LR}^{noise-free}\uparrow^{bic}$ is the noise-free LR image upscaled using bicubic interpolation. We solve for $k$ directly in the frequency domain using the Fast Fourier Transform~\cite{helou2018fourier}. The estimated blur kernel is visualized in Fig.~\ref{fig:kernel_stats}. 
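% A compact sketch of the frequency-domain least-squares kernel estimate described above.
% The plain regularized division and the stabilizer eps are assumptions and not necessarily
% the exact procedure of the cited FFT-based method.
%
% import numpy as np
%
% def estimate_kernel(lr_upscaled, hr, eps=1e-3):
%     # argmin_k || I_LR^bic - k (*) I_HR ||^2, solved independently per frequency.
%     B = np.fft.fft2(lr_upscaled)
%     H = np.fft.fft2(hr)
%     K = B * np.conj(H) / (np.abs(H) ** 2 + eps)
%     return np.fft.fftshift(np.real(np.fft.ifft2(K)))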
For the purpose of comparison, we show the estimated blur kernel from two SR datasets: RealSR~\cite{cai2019toward} and City100~\cite{chen2019camera}. We also visualize the two other blur kernels: the MATLAB bicubic kernel that is commonly used in the synthetic SR datasets, and the Gaussian blur kernel with a sigma of 2.0, which is the largest kernel used by the state-of-the-art blind SR network~\cite{gu2019blind} for the upscaling factor of 2. From the visualization we clearly see the bicubic kernel and Gaussian blur kernel that are commonly used in synthetic datasets are very different from the blur kernels of real captures. The blur kernel of \name{} has a long tail compared to the blur kernels estimated from the other SR datasets, illustrating that more high-frequency information is removed for the LR images in \name. This is because a wider space-domain filter corresponds to a narrower frequency-domain low pass, and vice versa. Hence, the recovery of HR images from such LR images is significantly more challenging. Compared to the SR datasets, the LR and HR pairs in \name{} are well-aligned during the capture process, and no further registration is needed. Furthermore, to obtain high-quality images, the SR datasets are captured under high ISO and contain almost zero noise, whereas \name{} contains LR images with different noise levels. This makes it a more comprehensive benchmark for testing under different imaging conditions. Moreover, as shown in Sec.~\ref{sec:ana}, the estimated blur kernel of \name{} is wider than that of other datasets, and hence it averages pixels over a larger window, filtering out more frequency components and making \name{} a more challenging dataset for SR. \section{Benchmark} \label{sec:benchmark} We benchmark on the sequential application of state-of-the-art denoising and SR algorithms on \name{} using RMSE and SSIM. Note that we do not consider the inverse order, \textit{i.e.}, first applying SR methods on noisy images, as this amplifies the noise and causes a large increase in RMSE as shown in the last row of Table~\ref{table:PSNR_dsr}. With current methods, it would be extremely hard for a subsequent denoiser to recover the original clean signal. \subsection{Setup} We split \name{} into two disjoint training and test sets. The training set consists of 240 LR and HR image sets, and the test set consists of 120 sets of images, with no overlap between the two sets. We retrain the learning-based methods on the training set, and the evaluation of all methods is carried out on the test set. For denoising, we evaluate different approaches from both classical methods and deep-learning methods. We use a method tailored to address Poisson denoising, PURE-LET~\cite{luisier2011image}, and the classical Gaussian denoising methods EPLL~\cite{zoran2011learning} and BM3D~\cite{BM3D}. The Gaussian denoisers are combined with the Anscombe variance-stabilization transform (VST)~\cite{makitalo2012optimal} to first modify the distribution of the image noise into a Gaussian distribution, denoise, and then invert the result back with the inverse VST. We estimate the noise magnitude using the method in~\cite{foi2008practical}, to be used as input for both the denoiser and for the VST when the latter is needed. We also use the state-of-the-art deep-learning methods MemNet~\cite{tai2017memnet}, DnCNN~\cite{zhang2017beyond}, and RIDNet~\cite{anwar2019real}. 
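% A short sketch of the VST + Gaussian-denoiser pipeline mentioned above, using the
% classical Anscombe transform for the Poisson component. The Poisson-Gaussian setting in
% the paper calls for the generalized Anscombe transform and its optimal inverse, so this
% is a simplified illustration; gaussian_denoiser stands in for EPLL or BM3D.
%
% import numpy as np
%
% def anscombe(x):
%     # Approximately stabilizes Poisson noise to unit variance.
%     return 2.0 * np.sqrt(x + 3.0 / 8.0)
%
% def inverse_anscombe(y):
%     # Simple algebraic inverse; an unbiased inverse is preferable in practice.
%     return (y / 2.0) ** 2 - 3.0 / 8.0
%
% # denoised = inverse_anscombe(gaussian_denoiser(anscombe(noisy)))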
For a fair comparison with the traditional non-blind methods that are given a noise estimate, we separately train each of these denoising methods for every noise level, and test with the appropriate model per noise level. The training details are presented in the supplementary material. We use six state-of-the-art SR networks for the benchmark: four pixel-wise distortion based SR networks, RCAN~\cite{zhang2018image}, RDN~\cite{zhang2018residual}, SAN~\cite{dai2019second}, SRFBN~\cite{li2019feedback}, and two perceptually-optimized SR networks, EPSR~\cite{vasu2018analyzing} and ESRGAN~\cite{wang2018esrgan}. The networks are trained for SR and the inputs are assumed to be noise-free, \textit{i.e.}, they are trained to map from the noise-free LR images to the high-quality HR images. All these networks are trained using the same settings, the details of which are presented in the supplementary material. \begin{table}[t] \centering \begin{tabular}{ccccccc} \toprule & & \multicolumn{5}{c}{Number of raw images averaged before denoising} \\ \cline{3-7} & Method & {1} & {2} & {4} & {8} & {16} \\ \cline{1-7} \parbox[t]{2mm}{\multirow{6}{*}{\rotatebox[origin=c]{90}{Denoisers}}}& PURE-LET~\cite{luisier2011image} & \cellcolor{gray!20}0.089/0.864&0.076/0.899&0.062/0.928&0.052/0.944&0.044/0.958 \\ &VST+EPLL~\cite{zoran2011learning} & \cellcolor{gray!20}0.083/0.887&0.074/0.916&0.061/0.937&0.051/0.951&0.044/0.962 \\ &VST+BM3D~\cite{BM3D} & \cellcolor{gray!20}0.080/0.897&0.072/0.921&0.059/0.939&0.050/0.953&0.043/0.962 \\ &MemNet$^\dagger$~\cite{tai2017memnet} &\cellcolor{gray!20}0.090/0.901&0.072/0.909&0.063/0.925&0.059/0.944&0.059/0.944 \\ &DnCNN$^\dagger$~\cite{zhang2017beyond} &\cellcolor{gray!20}0.078/0.907&0.061/0.926&\textcolor{red}{0.049}/0.944&\textcolor{red}{0.041}/0.954&\textcolor{red}{0.033}/\textcolor{red}{0.964} \\ &RIDNet$^\dagger$~\cite{anwar2019real} & \cellcolor{gray!20}\textcolor{red}{0.076}/\textcolor{red}{0.910}&\textcolor{red}{0.060}/\textcolor{red}{0.928}&\textcolor{red}{0.049}/\textcolor{red}{0.943}&\textcolor{red}{0.041}/\textcolor{red}{0.955}&0.034/\textcolor{red}{0.964} \\ \cline{1-7} \bottomrule \end{tabular} \caption{RMSE/SSIM results on denoising the \name{} test images. We benchmark three classical methods and three deep learning based methods. The larger the number of averaged raw images is, the lower the noise level. $^\dagger$The learning based methods are trained for each noise level separately. An interesting observation is that the best RMSE results (in red) do not necessarily give the best result after the downstream SR method as show in Table~\ref{table:PSNR_dsr}. We highlight the results under the highest noise level with gray background for easier comparison with Table~\ref{table:PSNR_dsr}.} \label{table:PSNR_den} \end{table} \subsection{Results and Discussion} \newcommand{\benchmarkA}[1]{\includegraphics[width=0.135\linewidth]{#1}} We apply the denoising algorithms on the noisy LR images, and calculate the RMSE and SSIM values between the denoised image and the corresponding noise-free LR image in the test set of \name{}. The results of the 6 benchmarked denoising algorithms are shown in Table~\ref{table:PSNR_den}. DnCNN and RIDNet outperform the classical denoising methods for all noise levels. Although MemNet achieves worse results than the classical denoising methods in terms of RMSE and SSIM, the results of MemNet contain fewer artifacts as shown in Fig.~\ref{fig:result:denoising}. 
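% A brief sketch of the RMSE/SSIM evaluation against the noise-free LR image used for the
% denoising table; the intensity normalization (data_range) is an assumption.
%
% import numpy as np
% from skimage.metrics import structural_similarity
%
% def rmse_ssim(denoised, clean, data_range=1.0):
%     rmse = float(np.sqrt(np.mean((denoised - clean) ** 2)))
%     ssim = structural_similarity(clean, denoised, data_range=data_range)
%     return rmse, ssim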
One interesting observation is that a better denoising with a lower RMSE or a higher SSIM, in some cases, results in unwanted smoothing in the form of a local filtering that incurs a loss of detail. Although the RMSE results of DnCNN are not the best (Table~\ref{table:PSNR_den}), when they are used downstream by the SR networks in Table~\ref{table:PSNR_dsr}, the DnCNN denoised images achieve the best final performance. Qualitative denoising results are shown in the first row of Fig.~\ref{fig:result:denoising}. We note that the artifacts created by denoising algorithms are amplified when SR methods are applied on the denoised results (\textit{e.g.}, (a) and (b) of Fig.~\ref{fig:result:denoising}). Although the denoised images are close to the clean LR image according to the evaluation metrics, the SR network is unable to recover faithful texture from these denoised images as the denoising algorithms remove part of the high-frequency information. \begin{figure}[t] \centering \begin{tabu}{ccccccc} \rowfont{\tiny} \multicolumn{7}{c}{Denoising Results}\\ \benchmarkA{IMAGES/jdsr/100_0/PURELET.png} & \benchmarkA{IMAGES/jdsr/100_0/EPLL.png} & \benchmarkA{IMAGES/jdsr/100_0/BM3D.png} & \benchmarkA{IMAGES/jdsr/100_0/M_1.png} & \benchmarkA{IMAGES/jdsr/100_0/D_1.png} & \benchmarkA{IMAGES/jdsr/100_0/R_1.png} & \benchmarkA{IMAGES/jdsr/100_0/avg400.png} \\ \rowfont{\tiny} (a) PURE-LET & (b) EPLL & (c) BM3D & (d) MemNet & (e) DnCNN & (f) RIDNet & (g) clean LR \\ \rowfont{\tiny} \multicolumn{7}{c}{RDN~\cite{zhang2018residual} applied on denoised results}\\ \benchmarkA{IMAGES/jdsr/100_0/RDN_PURELET.png} & \benchmarkA{IMAGES/jdsr/100_0/RDN_EPLL.png} & \benchmarkA{IMAGES/jdsr/100_0/RDN_BM3D.png} & \benchmarkA{IMAGES/jdsr/100_0/RDN_M_1.png} & \benchmarkA{IMAGES/jdsr/100_0/RDN_D_1.png} & \benchmarkA{IMAGES/jdsr/100_0/RDN_R_1.png} & \benchmarkA{IMAGES/jdsr/100_0/RDN.png} \\ \rowfont{\tiny} (a) RDN+ & (b) RDN+ & (c) RDN+ & (d) RDN+ & (e) RDN+ & (f) RDN+ & (g) RDN+\\ \rowfont{\tiny} PURE-LET & EPLL & BM3D & MemNet & DnCNN & RIDNet & clean LR\\ \end{tabu} \caption{The first row shows qualitative results of the denoising algorithms on a test LR image with the highest noise level. The second row shows qualitative results of the SR network RDN~\cite{zhang2018residual} applied on top of the denoised results. RDN amplifies the artifacts created by PURE-LET and EPLL, and is unable to recover faithful texture when the input image is over-smoothed by denoising algorithms. A gamma correction is applied for better visualization. 
Best viewed on screen.} \label{fig:result:denoising} \end{figure}
\begin{table}[t] \centering \begin{tabular}{lcccccccc} \toprule & & \multicolumn{6}{c}{Super-resolution networks} \\ \cline{3-8} & \textbf{} & RCAN & RDN & SAN & SRFBN & EPSR & ESRGAN \\ \cline{3-8} \parbox[t]{2mm}{\multirow{6}{*}{\rotatebox[origin=c]{90}{Denoisers}}} & PURE-LET & .432/.697&.458/.695&.452/.693&.444/.694&.658/.594&.508/.646\\ & VST+EPLL & .425/.716&.434/.711&.438/.707&.442/.710&.503/.682&.485/.703\\ & VST+BM3D & .399/.753&.398/.748&.418/.745&.387/.746&.476/.698&.405/.716\\ & MemNet & .374/.755&.392/\textcolor{red}{.749}&.387/.746&.377/.752&.411/.713&.392/.719\\ & DnCNN & \textcolor{red}{.357}/\textcolor{red}{.756}&\textcolor{red}{.365}/\textcolor{red}{.749}&\textcolor{red}{.363}/\textcolor{red}{.753}&\textcolor{red}{.358}/\textcolor{red}{.754}&\textcolor{red}{.402}/\textcolor{red}{.719}&\textcolor{red}{.373}/\textcolor{red}{.726}\\ & RIDNet & .358/\textcolor{red}{.756}&.371/.747&.364/.752&.362/.753&.411/.710&.379/.725\\ \cline{1-8} & Noise-free LR & .255/.836&.251/.837&.258/.834&.257/.833&.302/.812&.289/.813\\ \hline & Noisy LR & .608/.382&.589/.387&.582/.388&.587/.380&.627/.318&.815/.279\\ \hline \bottomrule \end{tabular} \caption{RMSE/SSIM results on the sequential application of denoising and SR methods on the \name{} test images with the highest noise level, corresponding to the first column of Table~\ref{table:PSNR_den}. We omit the leading `0' in the results for better readability. For each SR method, we highlight the best RMSE and SSIM values in red. The SR networks applied on the denoised results are trained to map the noise-free LR images to the high-quality HR images. } \label{table:PSNR_dsr} \end{table}
The SR networks are applied to the denoised outputs of the denoising algorithms, and are evaluated using RMSE and SSIM. We also include the results of applying the SR networks on the noise-free LR images. As mentioned above, we notice a significant drop in performance when the SR networks are given denoised LR images instead of the noise-free LR images, as shown in Table~\ref{table:PSNR_dsr}. For example, applying RDN to noise-free LR images yields an SSIM of 0.837, while the best SSIM the same network reaches on denoised inputs at the highest noise level is 0.749 (Table~\ref{table:PSNR_dsr}). This illustrates that the SR networks are strongly affected by noise or over-smoothing in the inputs. We also notice that a better SR network according to the evaluation on a single SR task does not necessarily provide better final results when applied on the denoised images. Although RDN outperforms RCAN in both RMSE and SSIM when applied on noise-free LR images, RCAN is more robust when the input is a denoised image. Among all the distortion-based SR networks, RCAN shows the most robustness, as it outperforms all other networks in terms of RMSE and SSIM when applied on denoised LR images. As mentioned above, another interesting observation is that although DnCNN does not achieve the lowest RMSE and the highest SSIM for denoising at the highest noise level, it still provides a better input for the SR networks. We note generally that better denoisers according to the denoising benchmark do not necessarily provide better denoised images for the downstream SR task.
Although the denoised results from MemNet have a larger RMSE than those of the conventional methods, as shown in Table~\ref{table:PSNR_den}, the SR results on MemNet's denoised images achieve higher quality in terms of RMSE and SSIM. Qualitative results are given in Fig.~\ref{fig:result:benchmark}, where for each SR network we show the results for the denoising algorithm that achieves the best (lowest) RMSE for the joint task (\textit{i.e.}, using the denoised results of DnCNN). We note that none of the networks is able to produce results with detailed texture. As denoising algorithms remove some high-frequency signals along with the noise, the SR results from the distortion-based networks are blurry and many texture details are lost. Although the perception-based methods (EPSR and ESRGAN) are able to produce sharp results, they fail to reproduce faithful texture and suffer a drop in SSIM.
\begin{figure}[!ht] \centering \begin{tabu}{ccccccc} \benchmarkA{IMAGES/benchmark/113_1/RCAN_D_1.png}& \benchmarkA{IMAGES/benchmark/113_1/RDN_D_1.png}& \benchmarkA{IMAGES/benchmark/113_1/SAN_D_1.png}& \benchmarkA{IMAGES/benchmark/113_1/SRFBN_D_1.png}& \benchmarkA{IMAGES/benchmark/113_1/EPSR_D_1.png}& \benchmarkA{IMAGES/benchmark/113_1/ESRGAN_D_1.png}&\benchmarkA{IMAGES/benchmark/113_1/sim.png}\\ \rowfont{\tiny} (a) 0.313 &(b) 0.322 &(c) 0.322 &(d) 0.344 &(e) 0.405 &(f) 0.400 & Ground-truth\\ \end{tabu} \caption{Qualitative results with the corresponding RMSE values on the sequential application of denoising and SR algorithms on the \name{} test images with the highest noise level. (a) DnCNN+RCAN, (b) DnCNN+RDN, (c) DnCNN+SAN, (d) DnCNN+SRFBN, (e) DnCNN+EPSR, (f) DnCNN+ESRGAN. A gamma correction is applied for better visualization. Best viewed on screen.} \label{fig:result:benchmark} \end{figure}
\section{Joint Denoising and Super-Resolution (JDSR)}
Our benchmark results in Sec.~\ref{sec:benchmark} show that the successive application of denoising and SR algorithms does not produce the highest-quality HR outputs. In this section, we demonstrate that it is more effective to train a JDSR model that directly transforms the noisy LR image into an HR image.
\subsection{Training Setup}
For JDSR, we adopt a 16-layer RRDB network~\cite{wang2018esrgan}. To enable the network to better recover texture, we replace the GAN loss in the training with a novel texture loss. The GAN loss often results in SR networks producing realistic but fake textures that are different from the ground-truth, and may cause a significant drop in SSIM~\cite{wang2018esrgan}. Instead, we introduce a texture loss that exploits the second-order statistics of the features to help the network produce high-quality, realistic textures. This choice is motivated by the fact that second-order descriptors have proven effective for tasks such as texture recognition~\cite{harandi2014bregman}. We leverage the difference in second-order statistics of VGG features to measure the similarity of the texture between the reconstructed HR image and the ground-truth HR image. The texture loss is defined as
\begin{equation}
\mathcal{L}_{\mathrm{texture}} = \left\| \mathrm{Cov}(\phi(I_{SR})) - \mathrm{Cov}(\phi(I_{HR})) \right\|_2^2,
\end{equation}
where $I_{SR}$ is the estimated result of the JDSR network, $I_{HR}$ is the ground-truth HR image, $\phi(\cdot)$ is a neural network feature space, and $\mathrm{Cov}(\cdot)$ computes the covariance. We follow the implementation of MPN-CONV~\cite{li2017is} for the forward and backward feature covariance calculation.
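A minimal PyTorch sketch of this texture loss is given below. It is not our exact implementation: the VGG layer choice and input normalization are assumptions, and it uses a plain batched covariance, whereas our implementation follows MPN-CONV for the covariance computation.
\begin{verbatim}
import torch
import torch.nn as nn
from torchvision import models

class TextureLoss(nn.Module):
    # || Cov(phi(I_SR)) - Cov(phi(I_HR)) ||^2 on deep features (sketch).
    def __init__(self):
        super().__init__()
        vgg = models.vgg19(pretrained=True).features[:36].eval()
        for p in vgg.parameters():
            p.requires_grad_(False)
        self.phi = vgg  # frozen feature extractor; layer choice is an assumption

    @staticmethod
    def covariance(feat):
        b, c, h, w = feat.shape
        x = feat.reshape(b, c, h * w)
        x = x - x.mean(dim=2, keepdim=True)         # zero-mean per channel
        return x @ x.transpose(1, 2) / (h * w - 1)  # (b, c, c) covariance matrices

    def forward(self, sr, hr):
        cov_sr = self.covariance(self.phi(sr))
        cov_hr = self.covariance(self.phi(hr))
        # squared Frobenius norm of the covariance difference, averaged over the batch
        return ((cov_sr - cov_hr) ** 2).sum(dim=(1, 2)).mean()
\end{verbatim}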
To improve visual quality, we further incorporate a perceptual loss into the training objective,
\begin{equation}
\mathcal{L}_{\mathrm{perceptual}} = \left\| \phi(I_{SR}) - \phi(I_{HR}) \right\|_2^2.
\end{equation}
Our final loss function is then given by
\begin{equation}
\mathcal{L} = \mathcal{L}_1 + \alpha \cdot \mathcal{L}_{\mathrm{perceptual}} + \beta \cdot \mathcal{L}_{\mathrm{texture}},
\end{equation}
where $\mathcal{L}_1$ represents the $\ell_1$ loss between the estimated image and the ground-truth. We empirically set $\alpha = 0.05$ and $\beta = 0.05$. We follow the same training setup as the experiments in Sec.~\ref{sec:benchmark}. For comparison, we also train RCAN~\cite{zhang2018image} and ESRGAN~\cite{wang2018esrgan} on JDSR.
\begin{table}[t] \centering \begin{tabular}{cccccc} \toprule & \multicolumn{4}{c}{Number of raw images averaged before JDSR} & \multirow{2}{*}{\#Parameters} \\ \cline{2-5} Method & {1} & {2} & {4} & {8} \\ \cline{1-6} DnCNN$^\dagger$+RCAN$^\ddagger$&0.357/0.756&0.348/0.779&0.332/0.797&0.320/0.813&0.5M+15M\\ DnCNN$^\dagger$+ESRGAN$^\ddagger$&0.373/0.726&0.364/0.770&0.349/0.787&0.340/0.797&0.5M+18M\\ \cline{1-6} JDSR-RCAN$^*$&0.343/0.767&0.330/0.780&0.314/0.799&0.308/0.814&15M\\ JDSR-ESRGAN$^*$&0.351/0.758&0.339/0.771&0.336/0.788&0.322/0.798&18M\\ Ours$^*$&0.340/0.760&0.326/0.779&0.318/0.797&0.310/0.801&11M \\ \cline{1-6} \end{tabular} \caption{JDSR RMSE/SSIM results on the \name{} test set. $^\dagger$The denoising networks are retrained per noise level. $^\ddagger$The SR networks are trained to map noise-free LR images to HR images. $^*$The networks trained for JDSR are also retrained per noise level. } \label{table:PSNR_jdsr} \end{table}
\subsection{Results and Discussion}
The quantitative results of the different methods are reported in Table~\ref{table:PSNR_jdsr}. The results indicate that, compared to the sequential application of denoising and SR, a single network trained for JDSR is more effective even though it has fewer parameters. GAN-based methods generate fake textures and lead to low SSIM scores. Our model, trained with the texture loss, is able to effectively recover high-fidelity texture information even when high noise levels are present in the LR inputs. We show the qualitative results of JDSR on the highest noise level (which corresponds to the first column of Table~\ref{table:PSNR_den}) in Fig.~\ref{fig:jdsr}. We see that the other networks have difficulty recovering the shape of the cells in the presence of noise, whereas our method trained with the texture loss is able to generate a higher-quality HR image with faithful texture.
\newcommand{\jdsrimg}[1]{\includegraphics[width=0.16\linewidth]{#1}}
\begin{figure}[t] \centering \begin{tabu}{cccccc} \jdsrimg{IMAGES/jdsr/090_0/RCAN_D_1_avg1.png}& \jdsrimg{IMAGES/jdsr/090_0/RCAN_jdsravg1.png}& \jdsrimg{IMAGES/jdsr/090_0/ESRGAN_D_1_avg1.png}& \jdsrimg{IMAGES/jdsr/090_0/ESRGAN_jdsravg1.png}& \jdsrimg{IMAGES/jdsr/090_0/ours_jdsravg1.png}& \jdsrimg{IMAGES/jdsr/090_0/sim.png} \\ \rowfont{\tiny} (a) 0.101 &(b) 0.065 &(c) 0.160 &(d) 0.124 &(e) 0.084 & Ground-truth\\ \end{tabu} \caption{Qualitative results with the corresponding RMSE values of denoising and SR on the \name{} test images with the highest noise level. (a) DnCNN+RCAN, (b) RCAN, (c) DnCNN+ESRGAN, (d) ESRGAN, (e) a 16-layer RRDB network~\cite{wang2018esrgan} trained with the texture loss. A gamma correction is applied for better visualization.
Best viewed on screen.} \label{fig:jdsr} \vspace{-0.2cm} \end{figure}
\section{Conclusion}
We propose the first joint denoising and SR microscopy dataset, \fullname{}. We use image averaging to obtain LR images with different noise levels as well as the noise-free LR images. The HR images are obtained with SIM imaging. With \name{}, we benchmark combinations of various denoising and SR methods. Our results indicate that SR networks are very sensitive to noise, and that the consecutive application of the two approaches is sub-optimal and suffers from the accumulation of errors from both stages. We also observe from the experimental results that the networks benefit from joint optimization for denoising and SR. \name{} is publicly available, and we believe it will be useful in advancing image restoration in medical imaging. Although the data is limited to the domain of microscopy, \name{} can also serve as a useful benchmark for deep denoising and SR algorithms in general.
\clearpage
\bibliographystyle{splncs04}
\bibliography{egbib}
\end{document}
https://openreview.net/forum?id=Hjw2saQPB5G
https://arxiv.org/abs/2003.05961
Please evaluate the completeness of the related work section and the comparative analysis in the paper, and provide feedback on the availability of the dataset and the importance of making it public.
An extensive new dataset and a new state-of-the-art algorithm for denoising and super-resolution. The authors address the problem of denoising and image resolution improvement in microscopy. A new public benchmark dataset is presented, much more extensive than the currently available alternatives. The authors perform a detailed evaluation of the existing denoising and super-resolution algorithms and finally propose their own, which handles both tasks and outperforms the others. All code and data are available. Rating: 9: Top 15% of accepted papers, strong accept Confidence: 3: The reviewer is fairly confident that the evaluation is correct
\documentclass[runningheads]{llncs}
\usepackage{graphicx}
\usepackage{comment}
\usepackage{amsmath,amssymb}
\usepackage{xcolor}
\usepackage{subfigure}
\usepackage{array}
\usepackage{booktabs}
\usepackage{colortbl}
\usepackage{hhline}
\usepackage{arydshln}
\usepackage{verbatim}
\usepackage{gensymb}
\usepackage{multirow}
\usepackage{tabu}
\usepackage{epsfig}
\usepackage{caption}
\usepackage{ulem}
\usepackage[pagebackref=true,breaklinks=true,letterpaper=true,colorlinks,bookmarks=false]{hyperref}
\begin{document}
\newcommand{\fullname}{\textbf{W}idefield\textbf{2S}IM}
\newcommand{\name}{W2S}
\newcommand\blfootnote[1]{%
\begingroup
\renewcommand\thefootnote{}\footnote{#1}%
\addtocounter{footnote}{-1}%
\endgroup
}
\pagestyle{headings}
\mainmatter
\def\ECCVSubNumber{2}
\title{W2S: Microscopy Data with Joint Denoising and Super-Resolution for Widefield to SIM Mapping}
\titlerunning{W2S}
\authorrunning{R. Zhou et al.}
\author{Ruofan Zhou\inst{*}\orcidID{0000-0002-5645-4541} \and Majed El Helou\inst{*}\orcidID{0000-0002-7469-2404} \and Daniel Sage\orcidID{0000-0002-1150-1623} \and Thierry Laroche \and Arne Seitz \and\\ Sabine S\"usstrunk\orcidID{0000-0002-0441-6068}}
\institute{\'Ecole Polytechnique F\'ed\'erale de Lausanne (EPFL), Switzerland \\ \email{\{ruofan.zhou,majed.elhelou,sabine.susstrunk\}@epfl.ch}}
\maketitle
\begin{abstract}
\blfootnote{$^*$ The first two authors contributed equally.}
In fluorescence microscopy live-cell imaging, there is a critical trade-off between the signal-to-noise ratio and spatial resolution on one side, and the integrity of the biological sample on the other side. To obtain clean high-resolution (HR) images, one can either use microscopy techniques, such as structured-illumination microscopy (SIM), or apply denoising and super-resolution (SR) algorithms. However, the former option requires multiple shots that can damage the samples, and although efficient deep-learning-based algorithms exist for the latter option, no benchmark exists to evaluate these algorithms on the joint denoising and SR (JDSR) task. To study JDSR on microscopy data, we propose such a novel JDSR dataset, \fullname{} (\name{}), acquired using conventional fluorescence widefield and SIM imaging. \name{} includes 144,000 real fluorescence microscopy images, resulting in a total of 360 sets of images. A set consists of noisy low-resolution (LR) widefield images with different noise levels, a noise-free LR image, and a corresponding high-quality HR SIM image. W2S allows us to benchmark the combinations of 6 denoising methods and 6 SR methods. We show that state-of-the-art SR networks perform very poorly on noisy inputs. Our evaluation also reveals that applying the best denoiser in terms of reconstruction error followed by the best SR method does not necessarily yield the best final result. Both quantitative and qualitative results show that SR networks are sensitive to noise and that the sequential application of denoising and SR algorithms is sub-optimal. Lastly, we demonstrate that SR networks retrained end-to-end for JDSR outperform any combination of state-of-the-art deep denoising and SR networks\footnote{Code and data available at \url{https://github.com/IVRL/w2s}}.
\keywords{Image Restoration Dataset, Denoising, Super-resolution, Microscopy Imaging, Joint Optimization}
\end{abstract}
\newcommand{\etal}{\textit{et al.}}
\section{Introduction}
\label{sec:introduction}
\newcommand{\teaserimg}[1]{\includegraphics[width=0.115\linewidth,clip]{#1}}
\begin{figure}[t] \centering \begin{tabu}{cccccccc} \rowfont{\tiny} \multicolumn{8}{c}{Single Channel}\\ \teaserimg{IMAGES/dataset_imgs/003_0/full_frame.png}& \teaserimg{IMAGES/dataset_imgs/003_0/avg1.png}& \teaserimg{IMAGES/dataset_imgs/003_0/avg2.png}& \teaserimg{IMAGES/dataset_imgs/003_0/avg4.png}& \teaserimg{IMAGES/dataset_imgs/003_0/avg8.png}& \teaserimg{IMAGES/dataset_imgs/003_0/avg16.png}& \teaserimg{IMAGES/dataset_imgs/003_0/avg400.png}& \teaserimg{IMAGES/dataset_imgs/003_0/sim.png}\\ \teaserimg{IMAGES/dataset_imgs/008_1/full_frame.png}& \teaserimg{IMAGES/dataset_imgs/008_1/avg1.png}& \teaserimg{IMAGES/dataset_imgs/008_1/avg2.png}& \teaserimg{IMAGES/dataset_imgs/008_1/avg4.png}& \teaserimg{IMAGES/dataset_imgs/008_1/avg8.png}& \teaserimg{IMAGES/dataset_imgs/008_1/avg16.png}& \teaserimg{IMAGES/dataset_imgs/008_1/avg400.png}& \teaserimg{IMAGES/dataset_imgs/008_1/sim.png}\\ \rowfont{\tiny} \multicolumn{8}{c}{Multi Channel}\\ \teaserimg{IMAGES/dataset_imgs/010/full_frame.png}& \teaserimg{IMAGES/dataset_imgs/010/avg1.png}& \teaserimg{IMAGES/dataset_imgs/010/avg2.png}& \teaserimg{IMAGES/dataset_imgs/010/avg4.png}& \teaserimg{IMAGES/dataset_imgs/010/avg8.png}& \teaserimg{IMAGES/dataset_imgs/010/avg16.png}& \teaserimg{IMAGES/dataset_imgs/010/avg400.png}& \teaserimg{IMAGES/dataset_imgs/010/sim.png}\\ \teaserimg{IMAGES/dataset_imgs/013/full_frame.png}& \teaserimg{IMAGES/dataset_imgs/013/avg1.png}& \teaserimg{IMAGES/dataset_imgs/013/avg2.png}& \teaserimg{IMAGES/dataset_imgs/013/avg4.png}& \teaserimg{IMAGES/dataset_imgs/013/avg8.png}& \teaserimg{IMAGES/dataset_imgs/013/avg16.png}& \teaserimg{IMAGES/dataset_imgs/013/avg400.png}& \teaserimg{IMAGES/dataset_imgs/013/sim.png}\\ \rowfont{\tiny} Full frame & Raw crop & 2$\times$ Average & 4$\times$ Average &8$\times$ Average &16$\times$ Average& Noise-free LR & Target HR \end{tabu} \caption{Examples of image sets in the proposed \name. We obtain LR images with 5 different noise levels by either taking a single raw image or averaging different numbers of raw images of the same field of view. The more images we average, the lower the noise level, as shown in the different columns of the figure. The noise-free LR images are the average of 400 raw images, and the HR images are obtained using structured-illumination microscopy (SIM)~\cite{gustafsson2000surpassing}. The multi-channel images are formed by mapping the three single-channel images of different wavelengths to RGB. A gamma correction is applied for better visualization. Best viewed on screen.} \label{fig:teaser} \end{figure}
Fluorescence microscopy makes it possible to visualize sub-cellular structures and protein-protein interactions at the molecular scale. However, due to the weak signals and the diffraction limit, fluorescence microscopy images suffer from high noise and limited resolution. One way to obtain high-quality, high-resolution (HR) microscopy images is to leverage super-resolution fluorescence microscopy, such as structured-illumination microscopy (SIM)~\cite{gustafsson2000surpassing}. This technique requires multiple captures, with several parameters that require expert tuning to obtain high-quality images. Multiple or high-intensity-light acquisitions can cause photobleaching and even damage the samples.
The imaged cells could be affected and, if imaged in sequence for live tracking, possibly killed. This is because a single SIM acquisition already requires a set of captures with varying structured illumination. Hence, a large set of SIM captures would add up to high illumination and an overhead in capture time that is detrimental to the imaging and tracking of live cells. Therefore, developing an algorithm to effectively denoise and super-resolve a fluorescence microscopy image is of great importance to biomedical research. However, a high-quality dataset is needed to benchmark and evaluate joint denoising and super-resolution (JDSR) on microscopy data. Deep-learning-based methods in denoising~\cite{anwar2019real,tai2017memnet,zhang2017beyond,el2020blind} and SR~\cite{wang2018esrgan,zhang2018image,zhang2018residual} today are outperforming classical signal processing approaches. A major limitation in the literature is, however, the fact that these two restoration tasks are addressed separately. This is in great part due to a missing dataset that would allow both to train and to evaluate JDSR. Such a dataset must contain aligned pairs of LR and HR images, with noisy and noise-free LR images, to allow retraining prior denoising and SR methods for benchmarking the consecutive application of a denoiser and an SR network as well as candidate one-shot JDSR methods. In this paper, we present such a dataset, which, to the best of our knowledge, is the first JDSR dataset. This dataset allows us to evaluate existing denoising and SR algorithms on microscopy data. We leverage widefield microscopy and SIM techniques to acquire data fulfilling the requirements described above. Our noisy LR images are captured using widefield imaging of human cells. We capture a total of 400 replica raw images per field of view. We average several of the LR images to obtain images with different noise levels, and all of the 400 replicas to obtain the noise-free LR image. Using SIM imaging~\cite{gustafsson2000surpassing}, we obtain the corresponding high-quality HR images. Our resulting \fullname{} (\name{}) dataset consists of 360 sets of LR and HR image pairs, with different fields of view and acquisition wavelengths. Visual examples of the images in \name{} are shown in Fig.~\ref{fig:teaser}. We leverage our JDSR dataset to benchmark different approaches for denoising and SR restoration on microscopy images. We compare the sequential use of different denoisers and SR methods, the direct application of an SR method to a noisy LR image, and, for reference, the application of SR methods to the noise-free LR images of our dataset. We additionally evaluate the performance of retraining SR networks on our JDSR dataset. Results show a significant drop in the performance of SR networks when the low-resolution (LR) input is noisy compared to when it is noise-free. We also find that the consecutive application of denoising and SR achieves better results. However, due to the accumulation of errors, it does not perform as well in terms of RMSE and perceptual texture reconstruction as training a single model on the JDSR task. The best results are thus obtained by training a single network for the joint optimization of denoising and SR. In summary, we create a microscopy JDSR dataset, \name{}, containing noisy images with 5 noise levels, noise-free LR images, and the corresponding high-quality HR images. We analyze our dataset by comparing the noise magnitude and the blur kernel of our images to those of existing denoising and SR datasets.
We benchmark state-of-the-art denoising and SR algorithms on \name{}, by evaluating different settings and on different noise levels. Results show the networks can benefit from joint optimization. \section{Related Work} \subsection{Biomedical Imaging Techniques for Denoising and Super-resolution} Image averaging of multiple shots is one of the most employed methods to obtain a clean microscopy image. This is due to its reliability and to avoid the potential blurring or over-smoothing effects of denoisers. For microscopy experiments requiring long observation and minimal degradation of specimens, low-light conditions and short exposure times are, however, preferred as multiple shots might damage the samples. To reduce the noise influence and increase the resolution, denoising methods and SR imaging techniques are leveraged. To recover a clean image from a single shot, different denoising methods have been designed, including PURE-LET~\cite{luisier2011image}, EPLL~\cite{zoran2011learning}, and BM3D~\cite{BM3D}. Although these methods provide promising results, recent deep learning methods outperform them by a big margin~\cite{zhang2019poisson}. To achieve resolution higher than that imposed by the diffraction limit, a variety of SR microscopy techniques exist, which achieve SR either by spatially modulating the fluorescence emission using patterned illumination (\textit{e.g.}, STED~\cite{hein2008stimulated} and SIM~\cite{gustafsson2000surpassing}), or by stochastically switching on and off individual molecules using photo-switchable probes (\textit{e.g.}, STORM~\cite{rust2006sub}), or photo-convertible fluorescent proteins (\textit{e.g.}, PALM~\cite{shroff2008live}). However, all of these methods require multiple shots over a period of time, which is not suitable for live cells because of the motion and potential damage to the cell. Thus, in this work, we aim to develop a deep learning method to reconstruct HR images from a single microscopy capture. \subsection{Datasets for Denoising and Super-resolution} \label{sec:work} Several datasets have commonly been used in benchmarking SR and denoising, including Set5~\cite{bevilacqua2012low}, Set14~\cite{zeyde2010single}, BSD300~\cite{martin2001database}, Urban100~\cite{huang2015single}, Manga109~\cite{matsui2017sketch}, and DIV2K~\cite{timofte2018ntire}. None of these datasets are optimized for microscopy and they only allow for synthetic evaluation. Specifically, the noisy inputs are generated by adding Gaussian noise for testing denoising algorithms, and the LR images are generated by downsampling the blurred HR images for testing SR methods. These degradation models deviate from the degradations encountered in real image capture~\cite{chen2019camera}. To better take into account realistic imaging characteristics and thus evaluate denoising and SR methods in real scenarios, real-world denoising and SR datasets have recently been proposed. Here we discuss these real datasets and compare them to our proposed \name{}. \noindent\textbf{Real Denoising Dataset } Only a few datasets allow to quantitatively evaluate denoising algorithms on real images, such as DND~\cite{plotz2017benchmarking} and SSID~\cite{abdelhamed2018high}. These datasets capture images with different noise levels, for instance by changing the ISO setting at capture. More related to our work, Zhang~\etal{}~\cite{zhang2019poisson} collect a dataset of microscopy images. 
All three datasets are designed only for denoising, and no HR images are provided that would allow them to be used for SR evaluation. According to our benchmark results, the best denoising algorithm does not necessarily provide the best input for the downstream SR task, and JDSR learning is the best overall approach. This suggests that a dataset for joint denoising and SR can provide a more comprehensive benchmark for image restoration. \noindent\textbf{Real Super-resolution Dataset } Recently, capturing LR and HR image pairs by changing camera parameters has been proposed. Chen~\etal{}~\cite{chen2019camera} collect 100 pairs of images of printed postcards placed at different distances. SR-RAW~\cite{zhang2019zoom} consists of 500 real scenes captured with multiple focal lengths. Although this dataset provides real LR-HR pairs, it suffers from misalignment due to the inevitable perspective changes or lens distortion. Cai~\etal{} thus introduce an iterative image registration scheme for the construction of another dataset, RealSR~\cite{cai2019toward}. However, to have high-quality images, all these datasets are captured with a low ISO setting, and the images thus contain very little noise, as shown in our analysis. Qian~\etal{} propose a dataset for joint demosaicing, denoising and SR~\cite{qian2019trinity}, but the noise in their dataset is simulated by adding white Gaussian noise. Contrary to these datasets, our proposed \name{} is constructed using SR microscopy techniques~\cite{gustafsson2000surpassing}, all pairs of images are well aligned, and it contains raw LR images with different noise levels as well as the noise-free LR images, thus enabling the benchmarking of both denoising and SR under real settings. \subsection{Deep Learning Based Image Restoration} Deep-learning-based methods have shown promising results on various image restoration tasks, including denoising and SR. We briefly present prior work and the existing problems that motivate joint optimization. \noindent\textbf{Deep Learning for Denoising } Recent deep learning approaches for image denoising achieve state-of-the-art results on recovering the noise-free images from images with additive noise. Whether based on residual learning~\cite{zhang2017beyond}, memory blocks~\cite{tai2017memnet}, bottleneck architectures~\cite{weigert2018content}, attention mechanisms~\cite{anwar2019real}, or internal modeling of Gaussian noise parameters~\cite{el2020blind}, these deep learning methods all require training data. For real-world raw-image denoising, the training data should include noisy images with a Poisson noise component and corresponding aligned noise-free images, which are not easy to acquire. Some recent self-supervised methods can learn without training targets~\cite{batson2019noise2self,krull2019noise2void,lehtinen2018noise2noise}; however, their performance does not match that of supervised methods. We hence focus on the better-performing supervised methods in our benchmark, since targets are available. All these networks are typically evaluated only on the denoising task, often only on the one they are trained on. They optimize for minimal squared pixel error, leading to potentially smoothed-out results that favor reconstruction error at the expense of detail preservation. When a subsequent task such as SR is then applied to the denoised outputs of these networks, the quality of the final results does not, as we see in our benchmark, necessarily correspond to the denoising performance of the different approaches.
This highlights the need for a more comprehensive perspective that jointly considers both restoration tasks. \noindent\textbf{Deep Learning for Super-resolution } Since the first convolutional neural network for SR~\cite{dong2014learning} outperformed conventional methods on synthetic datasets, many new architectures~\cite{kim2016accurate,lim2017enhanced,shi2016real,vasu2018analyzing,wang2018esrgan,zhang2018image,zhang2018residual} and loss functions~\cite{johnson2016perceptual,ledig2017photo,sajjadi2017enhancenet,zhang2019ranksrgan,zhang2019image} have been proposed to improve the effectiveness and the efficiency of the networks. To enable SR networks to generalize better to real-world LR images where the degradation is unknown, work has been done on kernel prediction~\cite{cai2019toward,gu2019blind} and kernel modeling~\cite{zhang2019deep,zhou2019kernel}. However, most SR networks assume that the LR images are noise-free or contain additive Gaussian noise with very small variance. Their predictions are easily affected by noise if the distribution of the noise is different from their assumptions~\cite{choi2019evaluating}. This again motivates a joint approach developed for the denoising and SR tasks. \noindent\textbf{Joint Optimization in Deep Image Restoration } Although a connection can be drawn between the denoising and super-resolution tasks in the frequency domain~\cite{elhelou2020stochastic}, their joint optimization has not been studied before, due to the lack of a real benchmark. Recent studies have shown the benefit of joint optimization in image restoration, for example, for joint demosaicing and denoising~\cite{gharbi2016deep,klatzer2016learning} and joint demosaicing and super-resolution~\cite{zhang2019zoom,zhou2018deep}. All these methods show that the joint solution outperforms the sequential application of the two stages. More relevant to JDSR, Xie~\etal{}~\cite{xie2015joint} present a dictionary learning approach with constraints tailored for depth maps, and Miao~\etal{}~\cite{miao2020handling} propose a cascade of two networks for joint denoising and deblurring, evaluated on synthetic data only. Similarly, our results show that a joint solution for denoising and SR also obtains better results than any sequential application. Note that our W2S dataset allows us to draw such conclusions on \textit{real} data, rather than degraded data obtained through simulation.
\section{Joint Denoising and Super-Resolution Dataset for Widefield to SIM Mapping}
In this section, we describe the experimental setup that we use to acquire the sets of LR and HR images, and present an analysis of the noise levels and blur kernels of our dataset.
\subsection{Structured-Illumination Microscopy}
\label{sec:sim}
Structured-illumination microscopy (SIM) is a technique used in microscopy imaging that allows samples to be captured with a higher resolution than the one imposed by the physical limits of the imaging system~\cite{gustafsson2000surpassing}. Its operation is based on the interference principle of the Moir{\'e} effect. We present how SIM works in more detail in our supplementary material. We use SIM to extend the resolution of standard widefield microscopy images. This allows us to obtain aligned LR and HR image pairs to create our dataset. The acquisition details are described in the next section.
\subsection{Data Acquisition}
\label{sec:acquisition}
We capture the LR images of the \name{} dataset using widefield microscopy~\cite{verveer1999comparison}.
Images are acquired with a high-quality commercial fluorescence microscope and with real biological samples, namely, human cells. \noindent\textbf{Widefield Images } A time-lapse widefield sequence of 400 images is acquired using a Nikon SIM setup (Eclipse T1) microscope. The details of the setup are given in the supplementary material. In total, we capture 120 different fields-of-view (FOVs), each FOV with 400 captures at 3 different wavelengths. All images are \textit{raw}, \textit{i.e.}, linear with respect to focal-plane illuminance, and are made up of $512 \times 512$ pixels. We generate images with different noise levels by averaging 2, 4, 8, and 16 raw images of the same FOV. The larger the number of averaged raw images, the lower the noise level. The noise-free LR image is estimated as the average of all 400 captures of a single FOV. Examples of images with different noise levels and the corresponding noise-free LR images are presented in Fig.~\ref{fig:teaser}. \noindent\textbf{SIM Imaging } The HR images are captured using SIM imaging. We acquire the SIM images using the same Nikon SIM setup (Eclipse T1) microscope as above. We present the details of the setup in the supplementary material. The HR images have a resolution that is higher by a factor of 2, resulting in $1024 \times 1024$ pixel images.
\subsection{Data Analysis}
\label{sec:ana}
\name{} includes 120 different FOVs, each captured in 3 channels, corresponding to the wavelengths 488nm, 561nm, and 640nm. As the texture of the cells is different and independent across different channels, the different channels can be considered as different images, thus resulting in 360 views. For each view, 1 HR image and 400 LR images are captured. We obtain LR images with different noise levels by averaging different numbers of images of the same FOV and the same channel. In summary, \name{} provides 360 different sets of images; each image set includes LR images with 5 different noise levels (corresponding to 1, 2, 4, 8, and 16 averaged LR images), the corresponding noise-free LR image (averaged over 400 LR images), and the corresponding HR image acquired with SIM. The LR images have dimensions $512 \times 512$, and the HR images $1024 \times 1024$. To quantitatively evaluate the difficulty of recovering the HR image from the noisy LR observation in \name{}, we analyze the degradation model relating the LR observations to their corresponding HR images. We adopt a commonly used degradation model~\cite{chen2019camera,dong2014learning,gu2019blind,zhou2019kernel}, with an additional noise component,
\begin{equation}\label{eq:LRdegradation}
I_{LR}^{noisy} = (I_{HR} \circledast k) \downarrow_m + n,
\end{equation}
where $I_{LR}^{noisy}$ and $I_{HR}$ correspond, respectively, to the noisy LR observation and the HR image, $\circledast$ is the convolution operation, $k$ is a blur kernel, $\downarrow_m$ is a downsampling operation with a factor of $m$, and $n$ is the additive noise. Note that $n$ is usually assumed to be zero in most SR networks' degradation models, which is not the case for our dataset. As the downsampling factor $m$ is equal to the targeted super-resolution factor, it is well defined for each dataset. We thus analyze in what follows the two unknown variables of the degradation model for \name{}, namely the noise $n$ and the blur kernel $k$.
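As an illustration of Eq.~\eqref{eq:LRdegradation}, the following Python sketch simulates a noisy LR observation from an HR image; the Gaussian kernel and the noise parameters used here are arbitrary placeholders, not the values estimated for \name{} below.
\begin{verbatim}
import numpy as np
from scipy.ndimage import gaussian_filter

def degrade(hr, m=2, blur_sigma=2.0, gain=0.01, read_std=0.002, seed=0):
    # I_LR_noisy = (I_HR convolved with k), downsampled by m, plus noise n.
    # `hr` is an HR image with values in [0, 1]; all parameters are
    # illustrative placeholders, not estimated dataset values.
    rng = np.random.default_rng(seed)
    blurred = gaussian_filter(hr, sigma=blur_sigma)    # I_HR * k (k assumed Gaussian)
    lr = blurred[::m, ::m]                             # downsampling by factor m
    shot = rng.poisson(lr / gain) * gain               # signal-dependent (Poisson) part
    return shot + rng.normal(0.0, read_std, lr.shape)  # additive Gaussian read noise
\end{verbatim}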
Compared to other denoising datasets, \name{} contains 400 noisy images for each view, whereas DND~\cite{plotz2017benchmarking} contains only 1, SSID~\cite{abdelhamed2018high} contains 150, and FMD~\cite{zhang2019poisson}, which also uses widefield imaging, contains 50. \name{} can thus provide a wide range of noise levels by averaging a varying number of images out of the 400. In addition, \name{} provides LR and HR image pairs that do not suffer from the misalignment problems often encountered in SR datasets. \noindent\textbf{Noise Estimation } We use the noise modeling method in~\cite{foi2008practical} to estimate the noise magnitude in raw images taken from \name{}, from the denoising dataset FMD~\cite{zhang2019poisson}, and from the SR datasets RealSR~\cite{cai2019toward} and City100~\cite{chen2019camera}. The approach of~\cite{foi2008practical} models the noise as Poisson-Gaussian. The measured noisy pixel intensity is given by $y=x+n_P(x)+n_G$, where $x$ is the noise-free pixel intensity, $n_G$ is zero-mean Gaussian noise, and $x+n_P(x)$ follows a Poisson distribution of mean $ax$ for some $a>0$. This approach yields an estimate for the parameter $a$ of the Poisson distribution. We evaluate the Poisson parameter of the noisy images from three noise levels (obtained by averaging 1, 4, and 8 images) of \name{}, the raw noisy images of FMD, and the LR images of the SR datasets for comparison. We show the mean of the estimated noise magnitude for the different datasets in Fig.~\ref{fig:noise_stats}. We see that the raw noisy images of \name{} have a high noise level, comparable to that of FMD. On the other hand, the estimated noise parameters of the SR datasets are almost zero, up to small imprecision, and are thus significantly lower than even the estimated noise magnitude of the LR images from the lowest noise level in \name{}. Our evaluation highlights the fact that the additive noise component is not taken into consideration in current state-of-the-art SR datasets. The learning-based SR methods using these datasets are consequently not tailored to deal with noisy inputs that are common in many practical applications, leading to potentially poor performance. In contrast, \name{} contains images with high (and low) noise magnitude, comparable to the noise magnitude of a recent denoising dataset~\cite{zhang2019poisson}.
\begin{figure}[t] \centering \subfigure[Estimated noise (log)]{ \includegraphics[width=0.45\linewidth,height=0.31\linewidth]{IMAGES/dataset_imgs/noise.png} \label{fig:noise_stats} } \subfigure[Estimated kernels]{ \includegraphics[width=0.45\linewidth,trim={0 0 0 7},clip,height=0.31\linewidth]{IMAGES/dataset_imgs/kernel.png} \label{fig:kernel_stats} } \caption{Noise and kernel estimation on images from different datasets. A comparably high noise level and a wide kernel indicate that the HR images of \name{} are challenging to recover from the noisy LR observations.} \label{fig:dataset_stats} \end{figure}
\noindent\textbf{Blur Kernel Estimation } We estimate the blur kernel $k$ of Eq.~\eqref{eq:LRdegradation} as
\begin{equation}
k = \underset{k}{\operatorname{argmin}} \; || I_{LR}^{noise-free}\uparrow^{bic} - k \circledast I_{HR} ||^2_2,
\end{equation}
where $I_{LR}^{noise-free}\uparrow^{bic}$ is the noise-free LR image upscaled using bicubic interpolation. We solve for $k$ directly in the frequency domain using the Fast Fourier Transform~\cite{helou2018fourier}. The estimated blur kernel is visualized in Fig.~\ref{fig:kernel_stats}.
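A minimal NumPy sketch of this frequency-domain solve is shown below; the small regularization constant and the crop size of the returned kernel are assumptions added for numerical stability and display, while our implementation follows~\cite{helou2018fourier}.
\begin{verbatim}
import numpy as np

def estimate_kernel(lr_up, hr, eps=1e-3, size=25):
    # Least-squares estimate of k from  min_k || lr_up - k * hr ||^2,
    # where lr_up is the noise-free LR image upscaled to the HR grid.
    L = np.fft.fft2(lr_up)
    H = np.fft.fft2(hr)
    K = (np.conj(H) * L) / (np.abs(H) ** 2 + eps)  # regularized Wiener-style solution
    k = np.fft.fftshift(np.real(np.fft.ifft2(K)))  # centered spatial kernel
    cy, cx = k.shape[0] // 2, k.shape[1] // 2
    half = size // 2
    k = k[cy - half:cy + half + 1, cx - half:cx + half + 1]
    return k / k.sum()                             # normalize to unit sum
\end{verbatim}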
For the purpose of comparison, we show the estimated blur kernels from two SR datasets: RealSR~\cite{cai2019toward} and City100~\cite{chen2019camera}. We also visualize two other blur kernels: the MATLAB bicubic kernel that is commonly used in synthetic SR datasets, and the Gaussian blur kernel with a sigma of 2.0, which is the largest kernel used by the state-of-the-art blind SR network~\cite{gu2019blind} for the upscaling factor of 2. From the visualization, we clearly see that the bicubic and Gaussian blur kernels commonly used in synthetic datasets are very different from the blur kernels of real captures. The blur kernel of \name{} has a long tail compared to the blur kernels estimated from the other SR datasets, illustrating that more high-frequency information is removed from the LR images in \name. This is because a wider space-domain filter corresponds to a narrower frequency-domain low pass, and vice versa. Hence, the recovery of HR images from such LR images is significantly more challenging. Compared to the SR datasets, the LR and HR pairs in \name{} are well-aligned during the capture process, and no further registration is needed. Furthermore, to obtain high-quality images, the SR datasets are captured at low ISO and contain almost no noise, whereas \name{} contains LR images with different noise levels. This makes \name{} a more comprehensive benchmark for testing under different imaging conditions. Moreover, as shown above, the estimated blur kernel of \name{} is wider than those of the other datasets; it therefore averages pixels over a larger window, filtering out more frequency components and making \name{} a more challenging dataset for SR.
\section{Benchmark}
\label{sec:benchmark}
We benchmark the sequential application of state-of-the-art denoising and SR algorithms on \name{}, using RMSE and SSIM. Note that we do not consider the inverse order, \textit{i.e.}, first applying SR methods on noisy images, as this amplifies the noise and causes a large increase in RMSE, as shown in the last row of Table~\ref{table:PSNR_dsr}. With current methods, it would be extremely hard for a subsequent denoiser to recover the original clean signal.
\subsection{Setup}
We split \name{} into two disjoint sets: a training set of 240 LR and HR image sets, and a test set of the remaining 120 image sets. We retrain the learning-based methods on the training set, and the evaluation of all methods is carried out on the test set. For denoising, we evaluate both classical and deep-learning methods. We use a method tailored to Poisson denoising, PURE-LET~\cite{luisier2011image}, and the classical Gaussian denoising methods EPLL~\cite{zoran2011learning} and BM3D~\cite{BM3D}. The Gaussian denoisers are combined with the Anscombe variance-stabilization transform (VST)~\cite{makitalo2012optimal}: the image noise is first brought close to a Gaussian distribution, the image is denoised, and the result is mapped back with the inverse VST. We estimate the noise magnitude using the method in~\cite{foi2008practical}; the estimate is used as input both for the denoiser and for the VST when the latter is needed. We also use the state-of-the-art deep-learning methods MemNet~\cite{tai2017memnet}, DnCNN~\cite{zhang2017beyond}, and RIDNet~\cite{anwar2019real}.
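For illustration, the VST-based pipeline can be sketched in a few lines of Python. This is a minimal sketch, not our exact implementation: it uses the standard Anscombe transform with its simple algebraic inverse (the exact unbiased inverse of~\cite{makitalo2012optimal} is preferable at low photon counts), and \texttt{gaussian\_denoiser} is a placeholder for any Gaussian denoiser such as BM3D or EPLL; the scaling by the estimated Poisson gain is an assumption of the sketch.
\begin{verbatim}
import numpy as np

def anscombe(x):
    # Variance-stabilizing transform: approximately maps Poisson
    # observations to unit-variance Gaussian observations.
    return 2.0 * np.sqrt(x + 3.0 / 8.0)

def inverse_anscombe(y):
    # Simple algebraic inverse; an exact unbiased inverse is
    # preferable when the photon counts are very low.
    return (y / 2.0) ** 2 - 3.0 / 8.0

def vst_denoise(noisy, gain, gaussian_denoiser):
    # `gain` is the estimated Poisson parameter; `gaussian_denoiser`
    # is a placeholder for e.g. BM3D or EPLL.
    counts = noisy / gain                   # back to (approximate) photon counts
    stabilized = anscombe(counts)           # noise is now roughly N(0, 1)
    denoised = gaussian_denoiser(stabilized)
    return gain * inverse_anscombe(denoised)
\end{verbatim}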
For a fair comparison with the traditional non-blind methods that are given a noise estimate, we separately train each of these denoising methods for every noise level, and test with the appropriate model per noise level. The training details are presented in the supplementary material. We use six state-of-the-art SR networks for the benchmark: four pixel-wise distortion based SR networks, RCAN~\cite{zhang2018image}, RDN~\cite{zhang2018residual}, SAN~\cite{dai2019second}, SRFBN~\cite{li2019feedback}, and two perceptually-optimized SR networks, EPSR~\cite{vasu2018analyzing} and ESRGAN~\cite{wang2018esrgan}. The networks are trained for SR and the inputs are assumed to be noise-free, \textit{i.e.}, they are trained to map from the noise-free LR images to the high-quality HR images. All these networks are trained using the same settings, the details of which are presented in the supplementary material. \begin{table}[t] \centering \begin{tabular}{ccccccc} \toprule & & \multicolumn{5}{c}{Number of raw images averaged before denoising} \\ \cline{3-7} & Method & {1} & {2} & {4} & {8} & {16} \\ \cline{1-7} \parbox[t]{2mm}{\multirow{6}{*}{\rotatebox[origin=c]{90}{Denoisers}}}& PURE-LET~\cite{luisier2011image} & \cellcolor{gray!20}0.089/0.864&0.076/0.899&0.062/0.928&0.052/0.944&0.044/0.958 \\ &VST+EPLL~\cite{zoran2011learning} & \cellcolor{gray!20}0.083/0.887&0.074/0.916&0.061/0.937&0.051/0.951&0.044/0.962 \\ &VST+BM3D~\cite{BM3D} & \cellcolor{gray!20}0.080/0.897&0.072/0.921&0.059/0.939&0.050/0.953&0.043/0.962 \\ &MemNet$^\dagger$~\cite{tai2017memnet} &\cellcolor{gray!20}0.090/0.901&0.072/0.909&0.063/0.925&0.059/0.944&0.059/0.944 \\ &DnCNN$^\dagger$~\cite{zhang2017beyond} &\cellcolor{gray!20}0.078/0.907&0.061/0.926&\textcolor{red}{0.049}/0.944&\textcolor{red}{0.041}/0.954&\textcolor{red}{0.033}/\textcolor{red}{0.964} \\ &RIDNet$^\dagger$~\cite{anwar2019real} & \cellcolor{gray!20}\textcolor{red}{0.076}/\textcolor{red}{0.910}&\textcolor{red}{0.060}/\textcolor{red}{0.928}&\textcolor{red}{0.049}/\textcolor{red}{0.943}&\textcolor{red}{0.041}/\textcolor{red}{0.955}&0.034/\textcolor{red}{0.964} \\ \cline{1-7} \bottomrule \end{tabular} \caption{RMSE/SSIM results on denoising the \name{} test images. We benchmark three classical methods and three deep learning based methods. The larger the number of averaged raw images is, the lower the noise level. $^\dagger$The learning based methods are trained for each noise level separately. An interesting observation is that the best RMSE results (in red) do not necessarily give the best result after the downstream SR method as show in Table~\ref{table:PSNR_dsr}. We highlight the results under the highest noise level with gray background for easier comparison with Table~\ref{table:PSNR_dsr}.} \label{table:PSNR_den} \end{table} \subsection{Results and Discussion} \newcommand{\benchmarkA}[1]{\includegraphics[width=0.135\linewidth]{#1}} We apply the denoising algorithms on the noisy LR images, and calculate the RMSE and SSIM values between the denoised image and the corresponding noise-free LR image in the test set of \name{}. The results of the 6 benchmarked denoising algorithms are shown in Table~\ref{table:PSNR_den}. DnCNN and RIDNet outperform the classical denoising methods for all noise levels. Although MemNet achieves worse results than the classical denoising methods in terms of RMSE and SSIM, the results of MemNet contain fewer artifacts as shown in Fig.~\ref{fig:result:denoising}. 
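For concreteness, a minimal sketch of the per-image evaluation behind these numbers is given below, using the SSIM implementation of scikit-image; it assumes images normalized to $[0,1]$ and may differ from our exact evaluation script in data-range handling and averaging order.
\begin{verbatim}
import numpy as np
from skimage.metrics import structural_similarity

def rmse_ssim(denoised, clean, data_range=1.0):
    # RMSE and SSIM between a denoised LR image and its
    # noise-free LR target (the 400-frame average).
    rmse = float(np.sqrt(np.mean((denoised - clean) ** 2)))
    ssim = float(structural_similarity(denoised, clean,
                                       data_range=data_range))
    return rmse, ssim

def evaluate(denoised_images, clean_images):
    # Average the per-image metrics over the whole test set.
    scores = [rmse_ssim(d, c) for d, c in zip(denoised_images, clean_images)]
    rmses, ssims = zip(*scores)
    return float(np.mean(rmses)), float(np.mean(ssims))
\end{verbatim}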
One interesting observation is that a better denoising with a lower RMSE or a higher SSIM, in some cases, results in unwanted smoothing in the form of a local filtering that incurs a loss of detail. Although the RMSE results of DnCNN are not the best (Table~\ref{table:PSNR_den}), when they are used downstream by the SR networks in Table~\ref{table:PSNR_dsr}, the DnCNN denoised images achieve the best final performance. Qualitative denoising results are shown in the first row of Fig.~\ref{fig:result:denoising}. We note that the artifacts created by denoising algorithms are amplified when SR methods are applied on the denoised results (\textit{e.g.}, (a) and (b) of Fig.~\ref{fig:result:denoising}). Although the denoised images are close to the clean LR image according to the evaluation metrics, the SR network is unable to recover faithful texture from these denoised images as the denoising algorithms remove part of the high-frequency information. \begin{figure}[t] \centering \begin{tabu}{ccccccc} \rowfont{\tiny} \multicolumn{7}{c}{Denoising Results}\\ \benchmarkA{IMAGES/jdsr/100_0/PURELET.png} & \benchmarkA{IMAGES/jdsr/100_0/EPLL.png} & \benchmarkA{IMAGES/jdsr/100_0/BM3D.png} & \benchmarkA{IMAGES/jdsr/100_0/M_1.png} & \benchmarkA{IMAGES/jdsr/100_0/D_1.png} & \benchmarkA{IMAGES/jdsr/100_0/R_1.png} & \benchmarkA{IMAGES/jdsr/100_0/avg400.png} \\ \rowfont{\tiny} (a) PURE-LET & (b) EPLL & (c) BM3D & (d) MemNet & (e) DnCNN & (f) RIDNet & (g) clean LR \\ \rowfont{\tiny} \multicolumn{7}{c}{RDN~\cite{zhang2018residual} applied on denoised results}\\ \benchmarkA{IMAGES/jdsr/100_0/RDN_PURELET.png} & \benchmarkA{IMAGES/jdsr/100_0/RDN_EPLL.png} & \benchmarkA{IMAGES/jdsr/100_0/RDN_BM3D.png} & \benchmarkA{IMAGES/jdsr/100_0/RDN_M_1.png} & \benchmarkA{IMAGES/jdsr/100_0/RDN_D_1.png} & \benchmarkA{IMAGES/jdsr/100_0/RDN_R_1.png} & \benchmarkA{IMAGES/jdsr/100_0/RDN.png} \\ \rowfont{\tiny} (a) RDN+ & (b) RDN+ & (c) RDN+ & (d) RDN+ & (e) RDN+ & (f) RDN+ & (g) RDN+\\ \rowfont{\tiny} PURE-LET & EPLL & BM3D & MemNet & DnCNN & RIDNet & clean LR\\ \end{tabu} \caption{The first row shows qualitative results of the denoising algorithms on a test LR image with the highest noise level. The second row shows qualitative results of the SR network RDN~\cite{zhang2018residual} applied on top of the denoised results. RDN amplifies the artifacts created by PURE-LET and EPLL, and is unable to recover faithful texture when the input image is over-smoothed by denoising algorithms. A gamma correction is applied for better visualization. 
Best viewed on screen.} \label{fig:result:denoising} \end{figure}
\begin{table}[t] \centering \begin{tabular}{lcccccccc} \toprule & & \multicolumn{6}{c}{Super-resolution networks} \\ \cline{3-8} & \textbf{} & RCAN & RDN & SAN & SRFBN & EPSR & ESRGAN \\ \cline{3-8} \parbox[t]{2mm}{\multirow{6}{*}{\rotatebox[origin=c]{90}{Denoisers}}} & PURE-LET & .432/.697&.458/.695&.452/.693&.444/.694&.658/.594&.508/.646\\ & VST+EPLL & .425/.716&.434/.711&.438/.707&.442/.710&.503/.682&.485/.703\\ & VST+BM3D & .399/.753&.398/.748&.418/.745&.387/.746&.476/.698&.405/.716\\ & MemNet & .374/.755&.392/\textcolor{red}{.749}&.387/.746&.377/.752&.411/.713&.392/.719\\ & DnCNN & \textcolor{red}{.357}/\textcolor{red}{.756}&\textcolor{red}{.365}/\textcolor{red}{.749}&\textcolor{red}{.363}/\textcolor{red}{.753}&\textcolor{red}{.358}/\textcolor{red}{.754}&\textcolor{red}{.402}/\textcolor{red}{.719}&\textcolor{red}{.373}/\textcolor{red}{.726}\\ & RIDNet & .358/\textcolor{red}{.756}&.371/.747&.364/.752&.362/.753&.411/.710&.379/.725\\ \cline{1-8} & Noise-free LR & .255/.836&.251/.837&.258/.834&.257/.833&.302/.812&.289/.813\\ \hline & Noisy LR & .608/.382&.589/.387&.582/.388&.587/.380&.627/.318&.815/.279\\ \hline \bottomrule \end{tabular} \caption{RMSE/SSIM results on the sequential application of denoising and SR methods on the \name{} test images with the highest noise level, corresponding to the first column of Table~\ref{table:PSNR_den}. We omit the leading `0' in the results for better readability. For each SR method, we highlight the best RMSE and SSIM values in red. The SR networks applied on the denoised results are trained to map the noise-free LR images to the high-quality HR images. } \label{table:PSNR_dsr} \end{table}
The SR networks are applied to the denoised outputs of the denoising algorithms, and are evaluated using RMSE and SSIM. We also include the results of applying the SR networks on the noise-free LR images. As mentioned above, we notice a significant drop in performance when the SR networks are given denoised LR images instead of the noise-free LR images, as shown in Table~\ref{table:PSNR_dsr}. For example, applying RDN to noise-free LR images yields an SSIM of 0.837, while the best SSIM the same network reaches on denoised inputs at the highest noise level is 0.749 (Table~\ref{table:PSNR_dsr}). This illustrates that the SR networks are strongly affected by noise or over-smoothing in the inputs. We also notice that a better SR network according to the evaluation on a single SR task does not necessarily provide better final results when applied on the denoised images. Although RDN outperforms RCAN in both RMSE and SSIM when applied on noise-free LR images, RCAN is more robust when the input is a denoised image. Among all the distortion-based SR networks, RCAN shows the most robustness, as it outperforms all other networks in terms of RMSE and SSIM when applied on denoised LR images. As mentioned above, another interesting observation is that although DnCNN does not achieve the lowest RMSE and the highest SSIM for denoising at the highest noise level, it still provides a better input for the SR networks. We note generally that better denoisers according to the denoising benchmark do not necessarily provide better denoised images for the downstream SR task.
Although the denoised results from MemNet have a larger RMSE than those of the conventional methods, as shown in Table~\ref{table:PSNR_den}, the SR results on MemNet's denoised images achieve higher quality in terms of RMSE and SSIM. Qualitative results are given in Fig.~\ref{fig:result:benchmark}, where for each SR network we show the results for the denoising algorithm that achieves the best (lowest) RMSE for the joint task (\textit{i.e.}, using the denoised results of DnCNN). We note that none of the networks is able to produce results with detailed texture. As denoising algorithms remove some high-frequency signals along with the noise, the SR results from the distortion-based networks are blurry and many texture details are lost. Although the perception-based methods (EPSR and ESRGAN) are able to produce sharp results, they fail to reproduce faithful texture and suffer a drop in SSIM.
\begin{figure}[!ht] \centering \begin{tabu}{ccccccc} \benchmarkA{IMAGES/benchmark/113_1/RCAN_D_1.png}& \benchmarkA{IMAGES/benchmark/113_1/RDN_D_1.png}& \benchmarkA{IMAGES/benchmark/113_1/SAN_D_1.png}& \benchmarkA{IMAGES/benchmark/113_1/SRFBN_D_1.png}& \benchmarkA{IMAGES/benchmark/113_1/EPSR_D_1.png}& \benchmarkA{IMAGES/benchmark/113_1/ESRGAN_D_1.png}&\benchmarkA{IMAGES/benchmark/113_1/sim.png}\\ \rowfont{\tiny} (a) 0.313 &(b) 0.322 &(c) 0.322 &(d) 0.344 &(e) 0.405 &(f) 0.400 & Ground-truth\\ \end{tabu} \caption{Qualitative results with the corresponding RMSE values on the sequential application of denoising and SR algorithms on the \name{} test images with the highest noise level. (a) DnCNN+RCAN, (b) DnCNN+RDN, (c) DnCNN+SAN, (d) DnCNN+SRFBN, (e) DnCNN+EPSR, (f) DnCNN+ESRGAN. A gamma correction is applied for better visualization. Best viewed on screen.} \label{fig:result:benchmark} \end{figure}
\section{Joint Denoising and Super-Resolution (JDSR)}
Our benchmark results in Sec.~\ref{sec:benchmark} show that the successive application of denoising and SR algorithms does not produce the highest-quality HR outputs. In this section, we demonstrate that it is more effective to train a JDSR model that directly transforms the noisy LR image into an HR image.
\subsection{Training Setup}
For JDSR, we adopt a 16-layer RRDB network~\cite{wang2018esrgan}. To enable the network to better recover texture, we replace the GAN loss in the training with a novel texture loss. The GAN loss often results in SR networks producing realistic but fake textures that are different from the ground-truth, and may cause a significant drop in SSIM~\cite{wang2018esrgan}. Instead, we introduce a texture loss that exploits the second-order statistics of the features to help the network produce high-quality, realistic textures. This choice is motivated by the fact that second-order descriptors have proven effective for tasks such as texture recognition~\cite{harandi2014bregman}. We leverage the difference in second-order statistics of VGG features to measure the similarity of the texture between the reconstructed HR image and the ground-truth HR image. The texture loss is defined as
\begin{equation}
\mathcal{L}_{\mathrm{texture}} = \left\| \mathrm{Cov}(\phi(I_{SR})) - \mathrm{Cov}(\phi(I_{HR})) \right\|_2^2,
\end{equation}
where $I_{SR}$ is the estimated result of the JDSR network, $I_{HR}$ is the ground-truth HR image, $\phi(\cdot)$ is a neural network feature space, and $\mathrm{Cov}(\cdot)$ computes the covariance. We follow the implementation of MPN-CONV~\cite{li2017is} for the forward and backward feature covariance calculation.
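A minimal PyTorch sketch of this texture loss is given below. It is not our exact implementation: the VGG layer choice and input normalization are assumptions, and it uses a plain batched covariance, whereas our implementation follows MPN-CONV for the covariance computation.
\begin{verbatim}
import torch
import torch.nn as nn
from torchvision import models

class TextureLoss(nn.Module):
    # || Cov(phi(I_SR)) - Cov(phi(I_HR)) ||^2 on deep features (sketch).
    def __init__(self):
        super().__init__()
        vgg = models.vgg19(pretrained=True).features[:36].eval()
        for p in vgg.parameters():
            p.requires_grad_(False)
        self.phi = vgg  # frozen feature extractor; layer choice is an assumption

    @staticmethod
    def covariance(feat):
        b, c, h, w = feat.shape
        x = feat.reshape(b, c, h * w)
        x = x - x.mean(dim=2, keepdim=True)         # zero-mean per channel
        return x @ x.transpose(1, 2) / (h * w - 1)  # (b, c, c) covariance matrices

    def forward(self, sr, hr):
        cov_sr = self.covariance(self.phi(sr))
        cov_hr = self.covariance(self.phi(hr))
        # squared Frobenius norm of the covariance difference, averaged over the batch
        return ((cov_sr - cov_hr) ** 2).sum(dim=(1, 2)).mean()
\end{verbatim}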
To improve visual quality, we further incorporate a perceptual loss into the training objective:
\begin{equation}
\mathcal{L}_{perceptual} = \left\| \phi(I_{SR}) - \phi(I_{HR}) \right\|_2^2.
\end{equation}
Our final loss function is then given by
\begin{equation}
\mathcal{L} = \mathcal{L}_1 + \alpha \cdot \mathcal{L}_{perceptual} + \beta \cdot \mathcal{L}_{texture},
\end{equation}
where $\mathcal{L}_1$ represents the $\ell_1$ loss between the estimated image and the ground-truth. We empirically set $\alpha = 0.05$ and $\beta = 0.05$. We follow the same training setup as the experiments in Sec.~\ref{sec:benchmark}. For comparison, we also train RCAN~\cite{zhang2018residual} and ESRGAN~\cite{wang2018esrgan} on JDSR.
\begin{table}[t]
\centering
\begin{tabular}{cccccc}
\toprule
& \multicolumn{4}{c}{Number of raw images averaged before JDSR} & \multirow{2}{*}{\#Parameters} \\
\cline{2-5}
Method & {1} & {2} & {4} & {8} & \\
\cline{1-6}
DnCNN$^\dagger$+RCAN$^\ddagger$&0.357/0.756&0.348/0.779&0.332/0.797&0.320/0.813&0.5M+15M\\
DnCNN$^\dagger$+ESRGAN$^\ddagger$&0.373/0.726&0.364/0.770&0.349/0.787&0.340/0.797&0.5M+18M\\
\cline{1-6}
JDSR-RCAN$^*$&0.343/0.767&0.330/0.780&0.314/0.799&0.308/0.814&15M\\
JDSR-ESRGAN$^*$&0.351/0.758&0.339/0.771&0.336/0.788&0.322/0.798&18M\\
Ours$^*$&0.340/0.760&0.326/0.779&0.318/0.797&0.310/0.801&11M \\
\bottomrule
\end{tabular}
\caption{JDSR RMSE/SSIM results on the \name{} test set. $^\dagger$The denoising networks are retrained per noise level. $^\ddagger$The SR networks are trained to map noise-free LR images to HR images. $^*$The networks trained for JDSR are also retrained per noise level.}
\label{table:PSNR_jdsr}
\end{table}
\subsection{Results and Discussion}
The quantitative results of the different methods are reported in Table~\ref{table:PSNR_jdsr}. The results indicate that, compared to the sequential application of denoising and SR, a single network trained for JDSR is more effective even though it has fewer parameters. GAN-based methods generate fake textures and lead to lower SSIM scores. Our model, trained with the texture loss, is able to effectively recover high-fidelity texture information even when high noise levels are present in the LR inputs. We show the qualitative results of JDSR at the highest noise level (which corresponds to the first column of Table~\ref{table:PSNR_den}) in Fig.~\ref{fig:jdsr}. We see that the other networks have difficulty recovering the shape of the cells in the presence of noise, whereas our method trained with the texture loss is able to generate a higher-quality HR image with faithful texture.
\newcommand{\jdsrimg}[1]{\includegraphics[width=0.16\linewidth]{#1}}
\begin{figure}[t]
\centering
\begin{tabu}{cccccc}
\jdsrimg{IMAGES/jdsr/090_0/RCAN_D_1_avg1.png}&
\jdsrimg{IMAGES/jdsr/090_0/RCAN_jdsravg1.png}&
\jdsrimg{IMAGES/jdsr/090_0/ESRGAN_D_1_avg1.png}&
\jdsrimg{IMAGES/jdsr/090_0/ESRGAN_jdsravg1.png}&
\jdsrimg{IMAGES/jdsr/090_0/ours_jdsravg1.png}&
\jdsrimg{IMAGES/jdsr/090_0/sim.png} \\
\rowfont{\tiny}
(a) 0.101 &(b) 0.065 &(c) 0.160 &(d) 0.124 &(e) 0.084 & Ground-truth\\
\end{tabu}
\caption{Qualitative results with the corresponding RMSE values of denoising and SR on the \name{} test images with the highest noise level. (a) DnCNN+RCAN, (b) RCAN, (c) DnCNN+ESRGAN, (d) ESRGAN, (e) a 16-layer RRDB network~\cite{wang2018esrgan} trained with the texture loss. A gamma correction is applied for better visualization.
Best viewed on screen.}
\label{fig:jdsr}
\vspace{-0.2cm}
\end{figure}
\section{Conclusion}
We propose the first joint denoising and SR microscopy dataset, \fullname{}. We use image averaging to obtain LR images with different noise levels as well as the noise-free LR images. The HR images are obtained with SIM imaging. With \name{}, we benchmark combinations of various denoising and SR methods. Our results indicate that SR networks are very sensitive to noise, and that the consecutive application of the two approaches is sub-optimal and suffers from the accumulation of errors from both stages. We also observe from the experimental results that the networks benefit from joint optimization for denoising and SR. \name{} is publicly available, and we believe it will be useful in advancing image restoration in medical imaging. Although the data is limited to the domain of microscopy, it can be a useful dataset for benchmarking deep denoising and SR algorithms.
\clearpage
\bibliographystyle{splncs04}
\bibliography{egbib}
\end{document}
https://openreview.net/forum?id=Hjw2saQPB5G
https://arxiv.org/abs/2003.05961
Please evaluate the effectiveness and novelty of our paper, which introduces a new algorithm for denoising and super-resolution in microscopy, along with a comprehensive evaluation using a newly created benchmark dataset.
What leads to generalization of object proposals?
1. [Summary] In 2-3 sentences, describe the key ideas, experiments, and their significance.
In this paper, the authors propose guidelines for building proper datasets for object proposals that offer good generalization when training models on them. Concretely, the paper introduces the idea of prototypical classes as the sufficient and necessary classes to achieve good generalization. To prove this, they conduct a series of experiments on the OIV4 and COCO datasets. As detection models, they choose Faster RCNN and RetinaNet.
2. [Strengths] What are the strengths of the paper? Clearly explain why these aspects of the paper are valuable.
- The paper is very well written. The story is very easy to follow.
- Generalization has typically been studied from the model perspective. However, interpreting the problem from the data perspective is very interesting.
- Although the authors focus on data, they also offer a study of what is happening with the models to validate the results.
- In particular, prototypical classes seem pretty interesting.
3. [Weaknesses] What are the weaknesses of the paper? Clearly explain why these aspects of the paper are weak.
- The effect of label-space granularity has a weird interpretation. The model used for this experiment is a class-specific Faster RCNN. Have the authors tried the class-agnostic version?
- Visual and semantic diversity seems obvious. It would also be interesting to study the number of samples, as well as how similar they are.
4. [Overall rating] Paper rating. 7
5. [Justification of rating] Please explain how the strengths and weaknesses aforementioned were weighed in for the rating.
The whole paper is interesting, even more so from the efficiency perspective. Prototypical classes play an important role.
6. [Detailed comments] Additional comments regarding the paper (e.g. typos or other possible improvements you would like to see for the camera-ready version of the paper, if any.)
Rating: 7: Good paper, accept
Confidence: 3: The reviewer is fairly confident that the evaluation is correct
\documentclass[runningheads]{llncs} \usepackage{graphicx} \usepackage{comment} \usepackage{amsmath,amssymb} % \usepackage{color} \usepackage{xspace} \usepackage{tabularx,colortbl} \usepackage{graphicx, caption, subcaption} \newcommand{\seenclasses}{$\mathrm{S}$} \newcommand{\trainclasses}{${\mathrm{L}}$ } \newcommand{\unseenclasses}{${\mathrm{U}}$ } \newcommand{\seendataset}{${\mathrm{D_{L}}}$ } \newcommand{\seenimages}{${\mathrm{I_{S}}}$ } \newcommand{\oivlong}{Open Images V4\xspace} \newcommand{\oiv}{OIV4\xspace} \newcommand{\oivsource}{OIV4-source\xspace} \newcommand{\oivtarget}{OIV4-target\xspace} \newcommand{\oivsourcetrain}{OIV4-source-train\xspace} \newcommand{\oivsourceval}{OIV4-source-val\xspace} \newcommand{\oivtargettrain}{OIV4-target-train\xspace} \newcommand{\oivtargetval}{OIV4-target-val\xspace} \newcommand{\oivall}{OIV4-all\xspace} \newcommand{\cocoall}{COCO-all\xspace} \newcommand{\coco}{COCO\xspace} \newcommand{\cocotarget}{COCO-target\xspace} \newcommand{\cocosource}{COCO-source\xspace} \newcommand{\AR}[1] {AR@#1} \newcommand{\frcnn}{Faster R-CNN\xspace} \newcommand{\retina}{RetinaNet\xspace} \newcommand{\deepti}[1]{{\color{blue}{Deepti: #1}}} \newcolumntype{P}[1]{>{\centering\arraybackslash}p{#1}} \newcolumntype{C}[1]{>{\centering\arraybackslash}c{#1}} \begin{document} \pagestyle{headings} \mainmatter \def\ECCVSubNumber{13} % \title{What leads to generalization of object proposals?} % \author{Rui Wang \and Dhruv Mahajan \and Vignesh Ramanathan} \authorrunning{R. Wang et al.} \institute{Facebook AI \\ \email{\{ruiw, dhruvm, vigneshr\}@fb.com}} \maketitle \begin{abstract} Object proposal generation is often the first step in many detection models. It is lucrative to train a good proposal model, that generalizes to unseen classes. This could help scaling detection models to larger number of classes with fewer annotations. Motivated by this, we study how a detection model trained on a small set of source classes can provide proposals that \emph{generalize} to unseen classes. We systematically study the properties of the dataset -- visual diversity and label space granularity -- required for good generalization. We show the trade-off between using fine-grained labels and coarse labels. We introduce the idea of prototypical classes: a set of sufficient and necessary classes required to train a detection model to obtain generalized proposals in a more data-efficient way. On the \oivlong dataset, we show that only $25\%$ of the classes can be selected to form such a prototypical set. The resulting proposals from a model trained with these classes is only $4.3\%$ worse than using all the classes, in terms of average recall (AR). We also demonstrate that \frcnn model leads to better generalization of proposals compared to a single-stage network like \retina. \keywords{object proposals, object detection, generalization} \end{abstract} \section{Introduction} \label{sec:intro} Object detection systems have shown considerable improvements for fully supervised settings \cite{ren2015faster,lin2017focal,liu2016ssd,redmon2017yolo9000,dai2016r}, as well as weakly supervised settings~\cite{Gao_2019_ICCV,arun2019dissimilarity,tang2018pcl} that only use image-level labels. Both approaches typically consider detection as a combination of two tasks: (a) spatial localization of the objects using proposals and (b) classification of the proposals into correct classes. A generalized proposal model that localizes all classes can help in scaling object detection. 
This could lead to the use of fewer or no bounding box annotations to only solve the classification task and development of more sophisticated classifiers, as explored in works like \cite{uijlings2018revisiting,singh2018r}. Many detection models \cite{ren2015faster,lin2017focal} have been developed in recent years, which can be used to obtain high quality object proposals. However, an equally important aspect that determines the generalization ability of proposals is \emph{the dataset} used to train these models. As illustrated in Fig.~\ref{fig:pull_fig}, the objects and class labels in a dataset significantly impact the ability to generalize to new classes. Intuitively, to localize a fine-grained vehicle like taxi in a target dataset, it might be sufficient to train a localization model with other vehicles like cars or vans in the source dataset. For localization (unlike classification), we may not need any training data for this class. On the other hand, training with these classes will not help in localizing other vehicles like boat. While few works leverage this intuition for weakly supervised learning~\cite{uijlings2018revisiting}, the extent to which object localization depends on the categories used to train the model has not been well quantified and studied in detail. Towards this end, we define ``generalization" as the ability of a model to localize (not classify) objects not annotated in the training dataset. In our work, we answer the question: \emph{What kind of dataset is best suited to train a model that generalizes even to unseen object classes?} We further study the ability of popular detection models like \frcnn \cite{ren2015faster} and \retina \cite{lin2017focal} to generate proposals that generalize to unseen classes. These networks are designed to improve the detection quality for the small set of seen classes in the training dataset. We carefully study these design choices and provide a way to obtain proposals that generalize to a larger set of unseen classes. \begin{figure}[t!] \centering \includegraphics[width=0.95\linewidth]{figures/rui_pull2} \caption{Proposal models learned on seen vehicle classes can localize unseen classes which share similar localization structure like ``bus" and ``taxi". However, ``barge" and ``gondola", which are also vehicles will not be precisely localized by this model, due to lack of visual diversity in the training dataset for vehicles} \label{fig:pull_fig} \vspace{-0.2in} \end{figure} We answer several questions about dataset properties and modeling choices required for generalized proposals: \begin{itemize} \item \textbf{What are the properties of object classes to ensure generalization of proposals from a model?} First, we show that it is crucial to have visual diversity to obtain generalized proposals. We need examples of different vehicles like ``car" and ``boats", even if the examples are only labelled as ``vehicle". Further, we hypothesize the existence of {\it{prototypical classes}} as a subset of leaf classes in a semantic hierarchy that are sufficient and necessary to construct a dataset to train a model for proposal generalization. We define new quantitative metrics to measure these properties for any set of classes and show that it is possible to construct a small prototypical set of object classes. This has positive implications for large taxonomies, since it is sufficient to annotate examples only for the prototypical classes. \item \textbf{Does the label-granularity of the dataset affect generalization? 
If so, what is the coarsest granularity that can be used?} Coarse-grained labels (``vehicles" instead of ``taxis") are significantly less tedious to annotate and more accurate than fine-grained labels. Past works like RFCNN-3000 \cite{singh2018r} argued that a single super class might be sufficient to obtain good proposals. However, we show that there is a trade-off between using very few coarse classes and large-number of fine-grained classes, and a middle-ground approach leads to best generalization. \item \textbf{What are the \emph{modeling} choices that are critical for leveraging state-of-the-art detectors to obtain generalized proposals?} We show that: (a) detections from two-stage networks like \frcnn are better for obtaining generalized proposals than a single-stage network like \retina, (b) while class-specific bounding box regression is typically used in \frcnn, it is beneficial only when considering larger number of proposals (average recall AR@1000) and class-agnostic regression is better when considering fewer proposals (AR@100) and (c) choice of NMS threshold is dependent on the number of proposals being considered (AR@100 or AR@1000). \end{itemize} On \oiv \cite{kuznetsova2018open}, we show that compared to training with all the object classes, using a prototypical subset of $25\%$ of the object classes only leads to a drop of $4.3\%$ in average recall (AR@100), while training with $50\%$ of such classes leads to a negligible drop of $0.9\%$. We also show how the detections from \frcnn can be fused to obtain high quality proposals that have $10\%$ absolute gain in AR@100 compared to the class-agnostic proposals of the RPN from the same network and $3.5\%$ better than \retina. To stress the practical importance of generalized proposals, we also show that generalization ability is directly correlated with the performance of weakly supervised detection models. \section{Related Work} \label{sec:relwork} \noindent \textbf{Generalizing localization across multiple classes: }The idea of different object classes sharing the same structure has been exploited in building detection models for a long time\cite{felzenszwalb2009object,novotny2016have,ott2011shared,salakhutdinov2011learning,torralba2004sharing}. More recently, \cite{dai2016r,ren2015faster} also have a dedicated proposal network for object localization. However these works do not measure the transferability of proposals trained on one set of classes to another. Uijlings \textit{et al.} \cite{uijlings2018revisiting} tried to transfer information from coarse source classes to fine-grained target classes that share similar localization properties. They showed that this can help weakly supervised detection for the target classes. LSDA \cite{hoffman2014lsda} transformed classifiers into detectors by sharing knowledge between classes. Multiple works \cite{tang2016large,hoffman2016large,rochan2015weakly,guillaumin2012large} showed the benefit of sharing localization information between similar classes to improve semi supervised and weakly supervised detection. Yang \textit{et al.} \cite{yang2019detecting} trained a large-scale detection model following similar principles. Singh \textit{et al.} \cite{singh2018r} showed that even a detector trained with one class can localize objects of different classes sufficiently well due to commonality between classes. We generalize this idea further. 
There has also been work on learning models \cite{yang2019detecting,redmon2017yolo9000,gao2019note} with a combination of bounding boxes for certain classes and only class labels for others. They inherently leverage the idea that localization can generalize across multiple classes. We provide systematic ways to quantify and measure this property for proposal models. \noindent \textbf{Object proposal generation models:} There have been many seminal works on generating class-agnostic object proposals \cite{uijlings2013selective,zitnick2014edge,pont2016multiscale,krahenbuhl2014geodesic}. A comprehensive study of different methods can be found in \cite{hosang2015makes} and a study of proposal evaluation metrics can be found in \cite{chavali2016object}. Proposal models have also been trained with dedicated architectures and objectives in \cite{pinheiro2015learning,kuo2015deepbox,szegedy2014scalable}. In our work, we leverage standard models like \frcnn and focus on the dataset properties required to achieve generalization with this model. \section{Approach} \vspace{-0.1in} \label{sec:approach} We study two important aspects involved in obtaining generalized proposals from a detection model: (1) {\bf{Data Properties}} such as the granularity of the label space (shown in Fig.~\ref{fig:g1}), and the visual diversity of object classes under each label, required for generalization of proposals. The idea of label granularity and visual diversity is shown in Fig.~\ref{fig:g2}. We investigate how a smaller subset of ``prototypical" object classes in a dataset which is representative of all other classes can be identified. \begin{figure}[t!] \centering \begin{subfigure}[t]{0.56\textwidth} \centering \includegraphics[width=0.99\textwidth]{figures/label_granularity.pdf} \caption{Label semantic hierarchy} \label{fig:g1} \end{subfigure}\hfill \begin{subfigure}[t]{0.4\textwidth} \centering \includegraphics[width=0.99\textwidth]{figures/rui_figs_gran_v2.pdf} \caption{Granularity vs. Diversity} \label{fig:g2} \end{subfigure} \caption{We study two important dataset properties needed to train a proposal model: label granularity and visual diversity. (a) Label granularity can be represented by different levels in a semantic hierarchy as shown. (b) The difference between label granularity and visual diversity is illustrated. At the same granularity, we can either have high or low visual diversity as shown} \label{fig:gran_visual} \vspace{-0.2in} \end{figure} (2) {\bf{Modeling Choice}} for leveraging a detector trained on a dataset with seen classes to obtain proposals that generalize to unseen classes. \subsection{Dataset Properties} \label{sec:data_prop} The choice of labels and data used to train the model is crucial for generalization. To study these properties, we assume: (a) classes are organized in a semantic tree and (b) internal nodes do not have any data of their own, that are not categorized into one of its child nodes. In practice, such a hierarchy is either already available (\oiv) or can be obtained from Wordnet~\cite{wordnet}. These assumptions help us study the datasets under controlled settings. However, later we explore a way to identify ``prototypical" subsets even when a semantic hierarchy is unavailable. \subsubsection{Label Space Granularity} \label{sec:label_space} As we noted through some examples earlier, it is intuitive that we might not need fine-grained labels to train a good localization model. 
To quantitatively study the effect of granularity, we construct different datasets with the same set of images and object bounding boxes, but consider classes at different levels of semantic hierarchy (Fig.~\ref{fig:g1}). We then train a model with these datasets and evaluate the generalization ability as a function of label granularity. For instance, for the coarsest root level, we assign all the bounding boxes the same ``object" label and train a detector to distinguish objects from all non-objects. This pertains to the idea of objectness used in weakly supervised algorithms~\cite{uijlings2013selective} and super-class in \cite{singh2018r}. For an intermediate level, we collapse all leaf-labels to their corresponding parent labels at that level to train the model. While a fine-grained label space provides more information, a model trained at this level also attempts to distinguish object classes with similar structure and this could affect generalization. We quantify this trade-off in Sec.~\ref{sec:exp_data}. \subsubsection{Prototypical classes to capture visual diversity} \label{sec:proto} One of the main aims of our work is to see if we can identify a significantly smaller number of classes than the full object-label space, so that bounding boxes from this set of classes are sufficient to train a generalized proposal model. Note that in Sec.~\ref{sec:label_space}, we wanted to study if a small set of coarse labels are sufficient to train a generalized proposal model. However, this does not answer anything about the visual diversity of objects within each sub-category that is required for generalization. As an example (shown in Fig.~\ref{fig:gran_visual}), in order to localize different types of vehicles like ``car" or ``airplane" it might be sufficient to collapse the label for all these objects into a single label named ``vehicle", however dropping all instances of airplane during training will lead to a drop in performance for this class. To quantitatively study this effect, we introduce the notion of ``prototypical" classes. Given a large set of leaf classes, these are the smallest subset such that a model trained only with instances from them is sufficient to localize objects from the remaining classes. Note that due to the long-tail distribution of real-world data, obtaining images for large number of semantic classes is a tedious task. If a small set of prototypical classes does exist, this makes the data collection process much easier when scaling detection to large number of classes. \noindent{\bf{Properties: }}We identify the two properties that are required to quantify the prototypicality of a set of classes : \textit{Sufficient set}: is a set of classes such that training a model only with examples from them should be sufficient to localize objects from all other classes. The most superfluous sufficient set would be the entire set of leaf classes themselves. \textit{Necessary set}: is a set of classes such that dropping any class from this set will lead to a significant drop in generalization. A simple example would be a very coarse vertical like ``vehicle". Intuitively dropping all vehicles would affect their localization as they do not share localization properties with other classes. We provide concrete ways to measure both these properties in Sec.~\ref{sec:exp_data}. \noindent{\bf{Identifying prototypical classes: }} Given a set of $N$ leaf classes $\mathbb{C}$, we wish to identify a set of $P$ prototypical classes $\mathbb{P} \subset \mathbb{C}$. 
Intuitively, this is similar to clustering the classes that have the same localization structure and then choosing a representative class from each cluster. Below, we discuss three approaches: \noindent(a) \textbf{Oracle visual clustering}: To get an upper bound for choosing the best $P$ prototypical classes, we assume that bounding box annotations for all the $N$ leaf classes are available. We then use these bounding boxes to compute visual similarity between classes. We note that this is not a practical approach, but is crucial to evaluate the effectiveness of proxies we introduce later. We first train a detection model using the annotations of all the leaf classes. We then measure the visual similarity between two classes $i, j$ as \vspace{-0.05in} {\small \begin{align} \label{eq:max_ap} S_{ij} = \max \left( \frac{\text{AP}^i(j)}{\text{AP}^j(j)}, \frac{\text{AP}^j(i)}{\text{AP}^i(i)}\right), \end{align}}where $AP^i(j)$ is the detection average precision (AP) for the $j^{th}$ class when we use the detections corresponding to the $i^{th}$ class as detections of class $j$. $S_{ij}$ is a measure of how well one class can replace another class in localizing it. We then use the resulting similarity measure to hierarchically cluster the classes into $P$ clusters using agglomerative clustering. We then pick the class with the highest number of examples in each cluster to construct the set of prototypical classes. For practical reasons, we use frequency to choose the representative class, since this results in the construction of the largest dataset. \noindent(b) \textbf{Semantic clustering based on frequency}: Semantic similarity is often viewed as a good proxy for visual similarity as shown through datasets like Imagenet \cite{deng2009imagenet} and \oiv. Hence, we use the semantic tree to cluster the classes in an hierarchical fashion starting from the leaves. At any given step, we cluster together two leaf classes that share a common parent if they jointly have the lowest number of examples. The algorithm stops when $P$ clusters are left. We then select the most frequent class from each cluster as a prototypical class. Here we assume that apriori we know the frequency of each class in a dataset. This is a very weak assumption, since a rough estimate of class distribution in a dataset can often be obtained even from weak labels like hashtags. This doesn't require any image-level label or bounding boxes and is easy to implement in practice. \noindent(c) \textbf{Most frequent prototypical subset}: For this baseline, we choose the top $P$ most frequently occurring classes in the dataset as the prototypical classes. Note that unlike the previous approaches, this does not require any knowledge of the semantic hierarchy. \subsection{Modeling Choice\label{subsec:model}} \label{sec:model_choice} Once the dataset is fixed, the next step is to train a detection model. In our work, we explore the use of two models: \frcnn and \retina. The observations made in our work should nevertheless generalize to other two-stage and single-stage detection models as well. In the case of a single-stage network, the detections from a model trained on a source dataset with seen classes can directly be treated as proposals. Their ability to localize novel classes in a target dataset can be evaluated to test generalization. 
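A minimal sketch of how class-wise detections can be pooled into class-agnostic proposals is shown below: per-class boxes are concatenated, ranked by confidence, and filtered with class-agnostic NMS to yield a fixed proposal budget. The tensor format, the thresholds, and the use of torchvision's NMS routine are illustrative assumptions rather than our exact implementation.
\begin{verbatim}
# Sketch: pool per-class detections into class-agnostic proposals.
# Input format and thresholds are assumptions chosen for illustration.
import torch
from torchvision.ops import nms

def detections_to_proposals(boxes_per_class, scores_per_class,
                            iou_thresh=0.5, topk=1000):
    # boxes_per_class: list of (N_c, 4) tensors in [x1, y1, x2, y2] format
    # scores_per_class: list of (N_c,) tensors with detection confidences
    boxes = torch.cat(boxes_per_class, dim=0)
    scores = torch.cat(scores_per_class, dim=0)
    keep = nms(boxes, scores, iou_thresh)   # indices sorted by decreasing score
    keep = keep[:topk]                      # keep a fixed proposal budget
    return boxes[keep], scores[keep]
\end{verbatim}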
However, for a two-stage network, another natural choice would be to use the Region Proposal Network (RPN) of the model, since it is trained in a class-agnostic fashion and aims to localize all objects in the image. However, as noted by He et al. \cite{he2017mask}, the detection part of the model is better at localizing the object due to more fine-tuned bounding box regression and better background classification. We study this more rigorously, by comparing the generalization of proposals obtained from the detection head as well as RPN. We vary different model parameters to obtain the optimal setting for proposal generalization. \section{Experiments} \label{sec:expts} We evaluate the ability of the object proposal obtained from detection models learned with different settings in Section~\ref{sec:model_choice} to generalize to new unseen classes. We also explore the effects of label-space granularity and the need for semantic and visual diversity. Finally, we show that a small set of prototypical classes could be used to train an effective proposal model for all classes in the dataset. \subsection{Experimental Setup} \noindent \textbf{Source and target splits: } We split each dataset into two parts: (a) {\it{Source dataset}} consisting of a set of seen classes called {\it{source classes}} and (b) {\it{Target dataset}} consisting of a set of unseen classes called {\it{target classes}}. {\it{Target dataset}} is used to evaluate the generalization of proposal models trained with the {\it{Source dataset}}. Since an image can contain both source and target classes, we ensure that such images are not present in the source class dataset. However, there may be a small number of images in the target dataset that contain source classes. We use the following two datasets for our experiments: (1) {\it{\oivlong(\oiv)~\cite{kuznetsova2018open}}} consists of $600$ classes. We retain only object classes which have more than $100$ training images. This results in a total of $482$ leaf classes. We randomly split all the leaf classes into $432$ source (\oivsource dataset) and $50$ target (\oivtarget dataset) classes. There are also annotations associated only with internal nodes (for example, "animal") and without a specific leaf label (like the type of animal). We remove such annotations and all associated images, since such images cannot be unambiguously assigned to a source or target split. This leaves us with $1.2M$ images with $7.96M$ boxes in the train split and $73k$ images with $361K$ boxes in the test split. For training proposal models, we always use the train split and for evaluation we use the test split. Wherever needed, we explicitly suffix the dataset with "train" and "test" (for example, \oivsource-train and \oivsource-test). (2) {\it{\coco~\cite{coco}}}: We use the 2017 version of the \coco dataset and randomly split the classes in to $70$ source (\cocosource dataset) and $10$ target (\cocotarget dataset) classes. For training, we use the train split and for evaluation, we use the $5000$ images from the validation set. Wherever needed, we explicitly suffix the dataset with ``train" and ``test". Target classes list is provided in the supplementary. \noindent\textbf{Evaluation metrics: } We report the standard average recall (\AR{k})~\cite{hosang2015makes} metric to evaluate the quality of proposals. One of the main motivations for building a generalized proposal model is to use the resulting proposals to train detection models for unseen classes with limited or no bounding box annotation. 
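For completeness, the average recall metric can be computed per image as in the following sketch, which follows the common definition of \AR{k} (recall of ground-truth boxes by the top $k$ proposals, averaged over IoU thresholds from $0.5$ to $0.95$). It is an illustration rather than the exact evaluation code, and in practice the recall is further averaged over all test images.
\begin{verbatim}
# Sketch of AR@k for a single image: recall of ground-truth boxes by the
# top-k proposals, averaged over IoU thresholds 0.5:0.05:0.95.
# Proposals are assumed to be sorted by decreasing confidence.
import numpy as np

def iou_matrix(gt, props):
    # gt: (G, 4), props: (P, 4), boxes in [x1, y1, x2, y2] format
    x1 = np.maximum(gt[:, None, 0], props[None, :, 0])
    y1 = np.maximum(gt[:, None, 1], props[None, :, 1])
    x2 = np.minimum(gt[:, None, 2], props[None, :, 2])
    y2 = np.minimum(gt[:, None, 3], props[None, :, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area_g = (gt[:, 2] - gt[:, 0]) * (gt[:, 3] - gt[:, 1])
    area_p = (props[:, 2] - props[:, 0]) * (props[:, 3] - props[:, 1])
    return inter / (area_g[:, None] + area_p[None, :] - inter)

def average_recall(gt, proposals, k=100):
    thresholds = np.arange(0.5, 1.0, 0.05)
    best_iou = iou_matrix(gt, proposals[:k]).max(axis=1)  # best match per GT box
    return float(np.mean([(best_iou >= t).mean() for t in thresholds]))
\end{verbatim}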
A typical proposal-based supervised detection model RCNN could also be used to evaluate the quality of proposals. However, the application to weakly supervised detection is more compelling since their performance is closely tied to proposals than supervised models which can correct the inaccuracies in proposals due to availability of labelled bounding boxes. Hence, we implement a weakly supervised detector with the approach used in YOLO9000~\cite{redmon2017yolo9000}\footnote{We chose~\cite{redmon2017yolo9000} due to its simplicity. In practice, we can use other weakly supervised approaches too.}. We report the detection AP (averaged over IoU thresholds ranging from $0.5$ to $0.95$) on the test set of the target dataset. Please see the supplementary material for more details. \noindent\textbf{Implementation details: } We fix Imagenet pre-trained ResNet-50 with Feature Pyramid Networks \cite{lin2017feature} as the backbone for all models. We use the Detectron codebase~\cite{girshick2018detectron}. For \coco, we train the models for $90k$ iterations with an initial learning rate and the decay suggested in \cite{ren2015faster}. For \oiv, we train the models for $800k$ iterations with an initial learning rate of $0.01$ and cosine learning rate decay. When training the weakly supervised model (\cite{redmon2017yolo9000}), we use the top $100$ proposals in each image to choose pseudo ground truth at every training iteration. \subsection{Modeling Choices} We first identify the best detection model and setting to extract proposals that generalize to new unseen classes. We then analyze generalization ability under different settings from this model. We reiterate that in order to test generalization, evaluation is done on target classes that have no intersection with the source classes used during training. \noindent {\textbf{Choice of detection model:}} We compare the generalization ability of a two-stage network (\frcnn) and a single-stage network (\retina) in Fig.~\ref{fig:mod1}. Since, in a two-stage model like \frcnn, the output from the RPN is class-agnostic and can be used as proposals too, we compare the performance of the RPN as well. The models are trained on \cocosource-train dataset. We report AR@100 on seen classes in the \cocosource-test dataset, as well as unseen classes in the \cocotarget-test. The difference in performance between seen and unseen classes reflects the generalization gap. We also show an upper-bound performance on \cocotarget-test obtained by models trained on the full training dataset containing both \cocosource-train and \cocotarget-train. \begin{figure}[t] \begin{subfigure}[t]{0.45\textwidth} \centering \includegraphics[width=0.99\textwidth]{figures/coco_rpn_vs.pdf} \caption{Comparison of detection models} \label{fig:mod1} \end{subfigure}\hfill \begin{subfigure}[t]{0.45\textwidth} \centering \includegraphics[width=0.99\textwidth]{figures/coco_ar_breakdown_v2.pdf} \caption{RPN vs. detection head} \label{fig:mod2} \end{subfigure} \caption{(a) \AR{100} corresponding to different models trained on \cocosource-train and evaluated on different test splits. Upper-bound corresponds to model trained on full \coco dataset and evaluated on \cocotarget-test. (b) Average recall of RPN and detection head at different IoU thresholds, for model trained on \cocosource-train and evaluated on \cocotarget-test} \label{fig:rpn_vs_det} \vspace{-0.2in} \end{figure} We notice that on seen classes, \retina achieves a lower performance compared to \frcnn (drop of $2.4\%$). 
However, the drop is larger for unseen target classes ($3.5\%$), indicating a larger generalization gap for \retina. One reason for this is that \retina is more sensitive to missing bounding boxes corresponding to unlabelled unseen classes in the source dataset. Proposals corresponding to unseen object classes that are not annotated in the training data are treated as hard-negatives, due to the use of focal-loss. Hence, the model heavily penalizes proposals corresponding to unannotated bounding boxes, leading to overall drop in AR. Since some seen classes share visual similarity with unseen classes, this sensitivity to missing annotations affects AR for seen classes too. However, this effect is more magnified for unseen target classes. On the other hand, in \frcnn, only a small number of proposals (less than $512$) which do not intersect with annotated bounding boxes are sampled at random as negatives. The probability that a proposal corresponding to an unseen object class is chosen as a negative is lower, leading to better generalization. Hence, for the rest of the paper, we use \frcnn as the detection model. We also notice that the detection head of \frcnn provides better overall performance \emph{without} sacrificing generalization. This can be attributed to better bounding box regression from the detection head which has additional layers, following the RPN in the model. To investigate this effect, we measure AR at different IoU thresholds for both sets of proposals for the model trained on \cocosource and evaluated on \cocotarget in Fig.~\ref{fig:mod2}. We see that the difference in \AR{1000} increases drastically at higher values of IoU threshold, and is negligible at a threshold of $0.5$. This implies that the boxes from the detection head are more fine-tuned to exactly localize objects, unlike the RPN. \noindent {\textbf{Choice of \frcnn settings:}} The results so far were obtained using class-specific bounding box regression (which is the standard setting in \frcnn) for the detection head. Since we want the bounding boxes to generalize to unseen classes, class agnostic regression could be a valid choice too. We study this in Fig.~\ref{fig:cls_ag} for \oiv and \coco. We see that class agnostic regression is better for small number of proposals as seen by \AR{10,20,50}. However, when we consider more proposals (\AR{1000}), class specific regression provides a significant gain ($4.5\%$ for \oiv and $7.5\%$ for \coco). It results in multiple regressed versions (one corresponding to each class) of the same proposal generated from the RPN. This helps in improving recall at higher number of proposals. Previously, we fixed the NMS threshold to $0.5$. We study the effect of this threshold in Fig.~\ref{fig:nms_fig}. We train on \oivsource, \cocosource and test on \oivtarget, \cocotarget respectively. Intuitively, a low threshold can improve spatial coverage of objects by ensuring proposals are spatially well spread out. When considering a larger number of proposals, there are sufficient boxes to ensure spatial coverage, and having some redundancy is helpful. This is witnessed by the steeper drop in \AR{1000} at low NMS thresholds, unlike \AR{100}. Based on these observations, we use class-specific bounding box regression with an NMS threshold of $0.5$ for rest of the experiments. \begin{figure}[t] \vspace{-0.1in} \centering \begin{minipage}[t]{0.45\textwidth} \includegraphics[width=\textwidth]{figures/cls_specific_vs_agnostic.pdf} \caption{Effect of class agnostic regression vs. 
class specific regression} \label{fig:cls_ag} \end{minipage} \hfill \begin{minipage}[t]{0.45\textwidth} \includegraphics[width=\textwidth]{figures/nms.pdf} \caption{Effect of NMS threshold on performance of proposals} \label{fig:nms_fig} \end{minipage} \vspace{-0.21in} \end{figure} \begin{table}[h] \vspace{-0.4in} \centering \begin{center} \caption{Comparing performance of proposals generated by RPN head and detection head for weakly supervised detection. We also show the \AR{100} numbers which are seen to be correlated with detection AP}\label{tab:det_map} \begin{tabular}{l|c|c|c|c} \hline \multicolumn{5}{c}{Target Dataset - \oivtarget}\\ \hline & \multicolumn{2}{c|}{Source: \oivsource} & \multicolumn{2}{c}{Source: \oivall}\\ & Det. AP & \AR{100} & Det. AP & \AR{100} \\\hline \frcnn RPN & 8.7 & 55.0 & 9.6 & 60.4\\ \frcnn Detection & \textbf{24.0} & \textbf{69.4} & \textbf{30.8} & \textbf{76.9} \\ \hline \end{tabular} \end{center} \vspace{-0.35in} \end{table} \noindent {\textbf{Weakly supervised detection:}} A strong practical utility for generalized proposals that localize all objects is that, no bounding box annotations should be needed to train a detection model for new object classes. Hence, we measure the effect of better generalized proposals on the performance of a weakly supervised detection model, trained without bounding box annotations. We show results corresponding to the RPN head and detection head of \frcnn in Tab.~\ref{tab:det_map}. The weakly supervised model is trained on \oivtarget-train and evaluated on \oivtarget-test. We also show results for proposals obtained from training with \oivsource as well as \oivall (upper-bound). We see that the performance of the weakly supervised detection model is directly correlated with the quality of the proposals being used, showing the need for good generalized proposals. \subsection{Dataset Properties} \label{sec:exp_data} \noindent {\textbf{Effect of label space granularity: }} \oiv organizes object classes in a semantic hierarchy with $5$ levels. We directly leverage this hierarchy to measure the effect of label granularity (Fig.~\ref{fig:g1}). We construct a dataset at each level $L_i$ (\oivsource-$L_i$) by retaining all the images in \oivsource, but relabeling bounding boxes corresponding to leaf labels with their ancestor at $L_i$. We construct 5 datasets, one for each level with the same set of images and bounding boxes. We report the performance of these models on \oivtarget in Tab.~\ref{tab:label_gran}. Along with \AR{100/1000}, we also report the detection AP of the weakly supervised detection models trained with the proposals obtained from the corresponding levels. The weakly supervised models are trained on \oivtarget-train and evaluated on \oivtarget-test. \vspace*{-8mm} \setlength{\tabcolsep}{4pt} \begin{table} \begin{center} \caption{Effect of different label space granularities on the quality of proposal for \oiv dataset. The number of classes at each level is shown in brackets. Evaluation is done on \oivtarget-eval dataset. 
Both AR and weakly supervised detection AP are reported} \label{tab:label_gran} \begin{tabular}{cccc} \hline\noalign{\smallskip} Source Dataset & AR@100 & AR@1000 & AP (weak)\\ \noalign{\smallskip} \hline \noalign{\smallskip} \oivsource-$L_0 (1)$ & 61.7 & 72.0 & 19.5\\ \hline \oivsource-$L_1 (86)$ & 63.4 & 73.0 & 22.6\\ \hline \oivsource-$L_2 (270)$ & 63.7 & 75.2 & 23.1\\ \hline \oivsource-$L_3 (398)$ & 65.2 & 77.2 & 24.3\\ \hline \oivsource-$L_4 (432)$ & 64.2 & 76.1 & 24.0\\ \hline \end{tabular} \end{center} \end{table} \vspace{-0.35in} \setlength{\tabcolsep}{1.4pt} Some past works like \cite{singh2018r} postulated that one super-class (similar to $L_0$) could be sufficient. However, we observe that both \AR{100} and \AR{1000} increase as we move from $L_0$ to $L_1$ along with a significant gain ($3.1\%$) in AP. This indicates that training with just a binary label yields lower quality proposals compared to training with at least a coarse set of labels at $L_1$. While both AP and \AR{100} increase as the granularity increases from $L_1$ to $L_3$, the difference is fairly small for both metrics ($ < 2\%$ change). However, annotating bounding boxes with labels at $L_1$ ($86$ labels) is significantly cheaper than $L_3$ ($398$ labels). Hence, $L_1$ can be seen as a good trade-off in terms of labelling cost, and training a good model. \vspace*{0.02in} \noindent {\textbf{Need for visual and semantic diversity: }} We noticed that training with coarse labels can yield good proposals. It would be interesting to observe if all or only some of these coarse classes are crucial to build a good proposal model. To study this, we conduct ablation experiments where we train a model with \oivsource-train after dropping all images having a specific $L_1$ label and evaluate the proposals on the \oivsource-test images belonging to this label in Fig.~\ref{fig:drop_classes_fig}a. We repeat this experiment for a few fine-grained classes at $L_4$ in Fig.~\ref{fig:drop_classes_fig}b. \begin{figure}[t] \vspace{-0.1in} \begin{subfigure}[t]{0.49\textwidth} \centering \includegraphics[width=0.7\textwidth]{figures/drop_coarse.pdf} \label{fig:1} \end{subfigure}\hfill \begin{subfigure}[t]{0.49\textwidth} \centering \includegraphics[width=0.7\textwidth]{figures/drop_fine.pdf} \label{fig:2} \end{subfigure} \vspace{-0.1in} \caption{Effect of Semantic Diversity, measured by dropping an object class during training and measuring the resulting change in AR for that class: (a) dropping L1 classes and (b) dropping L4 classes} \label{fig:drop_classes_fig} \vspace{-0.2in} \end{figure} We notice that certain coarse classes (like ``clothing" and ``vehicle") experience a huge drop in performance. On the other hand, ``animal" and ``food" are less affected. This can be explained from the fact that, there are many toy-animal images within the coarse label ``toy", similarly ``containers" is a coarse class in \oiv which is often depicted with food in it. These classes can act as proxies for ``animal" and ``food" respectively. However, ``clothing" and ``vehicle" do not have good proxies. More interestingly, we make a similar observation for finer classes at $L_4$ like airplanes and helicopters. This suggests that there is a smaller set of objects that have unique localization properties in \oiv. \noindent {\textbf{Prototypical classes: }} Some object classes are similar to others in terms of localization, while there are classes that are unique and need to be included in training. 
Motivated by this observation, we try to identify a small set of classes called ``prototypical" classes which are both necessary and sufficient to train a generalizable proposal model. We use the \oivsource dataset as before with 432 leaf classes. We use the different approaches outlined in Sec.~\ref{sec:proto} to identify a subset of ``prototypical" classes. Note that among these methods, oracle visual clustering assumes availability of bounding boxes for all classes and serves as an upper bound on how to identify a really good prototypical set. Some sample clusters of classes obtained by this method are shown in Tab.~\ref{tab:visual_clusters}. The remaining methods make weaker assumptions and are more useful in practice. In addition to these methods,we also train models with a set of randomly chosen prototypical classes. \vspace*{-0.3in} \setlength{\tabcolsep}{4pt} \begin{table} \begin{center} \caption{Sample clusters obtained by oracle visual clustering for $P=50$. The most frequent class in each cluster chosen as a prototypical class is highlighted} \label{tab:visual_clusters} \begin{tabular}{lll} \hline\noalign{\smallskip} \scriptsize{\textbf{Woman}, Girl, Doll} & \scriptsize{\textbf{Wheel}, Tire, Bicyclewheel} & \scriptsize{\textbf{Lobster}, Scorpion, Centipede} \\ \hline \scriptsize{\textbf{Glasses}, Goggles} & \scriptsize{\textbf{Jeans}, Shorts, Miniskirt} & \scriptsize{\textbf{Goose}, Ostrich, Turkey} \\ \hline \scriptsize{\textbf{Book}, Shelf, Bookcase} & \scriptsize{\textbf{Musicalkeyboard}, Piano} & \scriptsize{\textbf{Swimmingpool}, Bathtub, Jacuzzi} \\ \hline \scriptsize{\textbf{Man}, Boy, Shirt} & \scriptsize{\textbf{Apple}, Pomegranate, Peach} & \scriptsize{\textbf{Raven}, Woodpecker, Bluejay}\\ \hline \end{tabular} \end{center} \vspace{-0.3in} \end{table} \setlength{\tabcolsep}{1.4pt} We introduce two ways to measure \textit{sufficiency} and \textit{necessity}. From the $432$ classes, once we pick a subset of $P$ prototypical classes, we train a proposal model and evaluate the resulting model on the $50$ target classes in \oivtarget, to measure \textit{sufficiency} and \textit{necessity}. \noindent{\textbf{Dataset construction for fair comparison}} We ensure that the total number of images as well as bounding box annotations are kept fixed when we construct datasets for different prototypical subsets. This is important to ensure that proposals trained with different subsets are comparable. Once we chose a set of $P$ prototypical classes, we uniformly sub-sample \oivsource images having any of these prototypical classes to get a subset of $920K$ images. And within each subset, we uniformly sub-sample the bounding boxes corresponding to the prototypical classes to retain $5.2M$ bounding boxes. We do not retain any bounding boxes outside the chosen prototypical classes. \noindent{\textbf{Training with prototypical subsets}} For a set of prototypical classes and the corresponding dataset, we train a \frcnn with those classes as labels. We combine the detections as described in Sec.~\ref{sec:model_choice} to obtain proposals. \noindent{\textbf{Measuring sufficiency of prototypical classes}} A subset of classes are sufficient, if a proposal model trained with them generalizes as well as a model trained with all classes. We follow this notion and evaluate the proposals obtained from the models trained with different prototypical subsets on \oivtarget and report the average recall (\AR{100}) in Fig.~\ref{fig:proto_properties}a. 
Similar trends are observed with \AR{1000} as well (shown in supplementary). Looking at the proposals obtained from oracle visual clustering, training with less than 25\% of the classes (100) leads to only a drop of $4.8\%$ in \AR{100}, compared to training with images belonging to all object classes. This gap reduces to $0.4\%$ if we train with 50\% (200) of all the classes. This provides an empirical proof for the existence of a significantly smaller number of object classes that are sufficient to train a generalizable proposal model. Next, we look at the prototypical classes obtained from a more practical approach: semantic clustering. We notice that the proposal model trained with these prototypical classes always outperform other approaches such as choosing a random set of classes or the most frequent set of classes. Further, the performance of this method is only lower by a margin of $3\%$ compared to oracle visual clustering for different value of $P$. Selecting most frequent set of classes as the prototypical subset performs slightly worse than semantic clustering. This shows that semantic clustering can serve as a good way to identify prototypical classes for large taxonomies when the semantic hierarchy is available for the dataset, else the most frequent subset is a weaker alternative. \begin{figure}[t] \vspace{-0.1in} \begin{minipage}[t]{0.99\textwidth} \centering \begin{subfigure}[t]{0.49\textwidth} \centering \includegraphics[width=1.0\textwidth]{figures/sampling_method_ar100.pdf} \label{fig:suff_1} \end{subfigure}\hfill \begin{subfigure}[t]{0.49\textwidth} \centering \includegraphics[width=\textwidth]{figures/necessity.pdf} \label{fig:necessity} \end{subfigure} \vspace{-0.2in} \caption{(a) Average recall \AR{100} for proposals obtained from models trained with varying number of prototypical classes chosen by different methods. We show the average recall on the \oivtarget dataset with $50$ unseen classes. $P$ denotes the number of prototypical classes. Higher value indicates higher sufficiency. (b) The relative change in AR for target classes by dropping proposals corresponding to the most similar class in the prototypical subset. Higher value indicates lower redundancy in prototypical subset and higher necessity} \label{fig:proto_properties} \end{minipage} \vspace{-0.25in} \end{figure} \noindent{\textbf{Measuring necessity of prototypical classes}} A set of classes are considered necessary, if there is no redundancy among the classes in terms of localization properties. For a given class in the set, there should be no equivalent class which can provide similar bounding boxes. We measure this property for a prototypical subset by evaluating the corresponding proposal model on \oivtarget dataset using the following method. For every target class in \oivtarget, we measure the relative change in \AR{100} and \AR{1000} by removing proposals corresponding to the most similar class in the prototypical subset (similarity measured by Eq.~\ref{eq:max_ap}). The change in AR would be minimal if there is another class in the prototypical subset which can localize the target class. This measure, averaged over all target classes provides a good estimate of necessity. A high value symbolizes a high degree of necessity, while a low value corresponds to redundancy among the prototypical classes. We plot this for different number of prototypical classes for oracle visual clustering and semantic clustering in Fig.~\ref{fig:proto_properties}b. 
We notice that at any given number of prototypical classes, the change in average recall is higher for oracle visual clustering compared to semantic clustering. This demonstrates that visual clustering leads to prototypical classes which are less redundant (and more necessary). As expected, we see the necessity drops, as we increase the number of prototypical classes for both methods. Again, this is expected since redundancy between classes increases with more number of classes. The relative change in \AR{1000} is also seen to be lower than \AR{100}, since when considering a larger number of proposals, we expect more redundancy among the proposals. Finally, for oracle visual clustering as we move from $200$ to $300$ classes, sufficiency changes by a small amount from $73.2$ to $75.9$ ( Fig.~\ref{fig:proto_properties}a), while the necessity drops steeply in Fig.~\ref{fig:proto_properties}b. This suggests that the ideal number of prototypical classes for \oiv could be around $200$. \section{Conclusion} We studied the ability of detection models trained on a set of seen classes to localize unseen classes. We showed that \frcnn can be used to obtain better proposals for unseen classes than \retina, and studied the effect of model choices on generalization of proposals, like class-agnostic bounding box regression and NMS threshold. We quantitatively measured the importance of visual diversity and showed that using a very fine-grained or very coarse label-space can both affect generalization, while a middle-ground approach is best suited. We introduced the idea of prototypical classes that are sufficient and necessary to obtain generalized proposals. We demonstrated different approaches to determine small prototypical subsets for a given dataset. We believe that our work is a step forward towards learning proposals that generalize to a large number of classes and scaling up detection in a more data-efficient way. \clearpage \bibliographystyle{splncs04} \bibliography{egbib} \end{document}
https://openreview.net/forum?id=ZHT0ZpxQO5E
https://arxiv.org/abs/2008.05700
Evaluate the strengths and weaknesses of the paper, focusing on the significance of the proposed guidelines for building datasets that lead to good generalization of object proposals.
Prototype classes can be sufficient to localize in object detection
[Summary] In 2-3 sentences, describe the key ideas, experiments, and their significance.
The authors propose to learn object localizations using only prototype classes. They explore what defines prototype classes and experiment with many ablations and hyperparameters of their method.
[Strengths] What are the strengths of the paper? Clearly explain why these aspects of the paper are valuable.
Powerful idea; clear definitions; practical modeling choices for determining prototypical classes; extensive experimentation.
[Weaknesses] What are the weaknesses of the paper? Clearly explain why these aspects of the paper are weak.
Marginally related to visual inductive priors.
[Overall rating] Paper rating: Accept
Rating: 7: Good paper, accept
Confidence: 3: The reviewer is fairly confident that the evaluation is correct
\documentclass[runningheads]{llncs} \usepackage{graphicx} \usepackage{comment} \usepackage{amsmath,amssymb} % \usepackage{color} \usepackage{xspace} \usepackage{tabularx,colortbl} \usepackage{graphicx, caption, subcaption} \newcommand{\seenclasses}{$\mathrm{S}$} \newcommand{\trainclasses}{${\mathrm{L}}$ } \newcommand{\unseenclasses}{${\mathrm{U}}$ } \newcommand{\seendataset}{${\mathrm{D_{L}}}$ } \newcommand{\seenimages}{${\mathrm{I_{S}}}$ } \newcommand{\oivlong}{Open Images V4\xspace} \newcommand{\oiv}{OIV4\xspace} \newcommand{\oivsource}{OIV4-source\xspace} \newcommand{\oivtarget}{OIV4-target\xspace} \newcommand{\oivsourcetrain}{OIV4-source-train\xspace} \newcommand{\oivsourceval}{OIV4-source-val\xspace} \newcommand{\oivtargettrain}{OIV4-target-train\xspace} \newcommand{\oivtargetval}{OIV4-target-val\xspace} \newcommand{\oivall}{OIV4-all\xspace} \newcommand{\cocoall}{COCO-all\xspace} \newcommand{\coco}{COCO\xspace} \newcommand{\cocotarget}{COCO-target\xspace} \newcommand{\cocosource}{COCO-source\xspace} \newcommand{\AR}[1] {AR@#1} \newcommand{\frcnn}{Faster R-CNN\xspace} \newcommand{\retina}{RetinaNet\xspace} \newcommand{\deepti}[1]{{\color{blue}{Deepti: #1}}} \newcolumntype{P}[1]{>{\centering\arraybackslash}p{#1}} \newcolumntype{C}[1]{>{\centering\arraybackslash}c{#1}} \begin{document} \pagestyle{headings} \mainmatter \def\ECCVSubNumber{13} % \title{What leads to generalization of object proposals?} % \author{Rui Wang \and Dhruv Mahajan \and Vignesh Ramanathan} \authorrunning{R. Wang et al.} \institute{Facebook AI \\ \email{\{ruiw, dhruvm, vigneshr\}@fb.com}} \maketitle \begin{abstract} Object proposal generation is often the first step in many detection models. It is lucrative to train a good proposal model, that generalizes to unseen classes. This could help scaling detection models to larger number of classes with fewer annotations. Motivated by this, we study how a detection model trained on a small set of source classes can provide proposals that \emph{generalize} to unseen classes. We systematically study the properties of the dataset -- visual diversity and label space granularity -- required for good generalization. We show the trade-off between using fine-grained labels and coarse labels. We introduce the idea of prototypical classes: a set of sufficient and necessary classes required to train a detection model to obtain generalized proposals in a more data-efficient way. On the \oivlong dataset, we show that only $25\%$ of the classes can be selected to form such a prototypical set. The resulting proposals from a model trained with these classes is only $4.3\%$ worse than using all the classes, in terms of average recall (AR). We also demonstrate that \frcnn model leads to better generalization of proposals compared to a single-stage network like \retina. \keywords{object proposals, object detection, generalization} \end{abstract} \section{Introduction} \label{sec:intro} Object detection systems have shown considerable improvements for fully supervised settings \cite{ren2015faster,lin2017focal,liu2016ssd,redmon2017yolo9000,dai2016r}, as well as weakly supervised settings~\cite{Gao_2019_ICCV,arun2019dissimilarity,tang2018pcl} that only use image-level labels. Both approaches typically consider detection as a combination of two tasks: (a) spatial localization of the objects using proposals and (b) classification of the proposals into correct classes. A generalized proposal model that localizes all classes can help in scaling object detection. 
This could lead to the use of fewer or no bounding box annotations to only solve the classification task and development of more sophisticated classifiers, as explored in works like \cite{uijlings2018revisiting,singh2018r}. Many detection models \cite{ren2015faster,lin2017focal} have been developed in recent years, which can be used to obtain high quality object proposals. However, an equally important aspect that determines the generalization ability of proposals is \emph{the dataset} used to train these models. As illustrated in Fig.~\ref{fig:pull_fig}, the objects and class labels in a dataset significantly impact the ability to generalize to new classes. Intuitively, to localize a fine-grained vehicle like taxi in a target dataset, it might be sufficient to train a localization model with other vehicles like cars or vans in the source dataset. For localization (unlike classification), we may not need any training data for this class. On the other hand, training with these classes will not help in localizing other vehicles like boat. While few works leverage this intuition for weakly supervised learning~\cite{uijlings2018revisiting}, the extent to which object localization depends on the categories used to train the model has not been well quantified and studied in detail. Towards this end, we define ``generalization" as the ability of a model to localize (not classify) objects not annotated in the training dataset. In our work, we answer the question: \emph{What kind of dataset is best suited to train a model that generalizes even to unseen object classes?} We further study the ability of popular detection models like \frcnn \cite{ren2015faster} and \retina \cite{lin2017focal} to generate proposals that generalize to unseen classes. These networks are designed to improve the detection quality for the small set of seen classes in the training dataset. We carefully study these design choices and provide a way to obtain proposals that generalize to a larger set of unseen classes. \begin{figure}[t!] \centering \includegraphics[width=0.95\linewidth]{figures/rui_pull2} \caption{Proposal models learned on seen vehicle classes can localize unseen classes which share similar localization structure like ``bus" and ``taxi". However, ``barge" and ``gondola", which are also vehicles will not be precisely localized by this model, due to lack of visual diversity in the training dataset for vehicles} \label{fig:pull_fig} \vspace{-0.2in} \end{figure} We answer several questions about dataset properties and modeling choices required for generalized proposals: \begin{itemize} \item \textbf{What are the properties of object classes to ensure generalization of proposals from a model?} First, we show that it is crucial to have visual diversity to obtain generalized proposals. We need examples of different vehicles like ``car" and ``boats", even if the examples are only labelled as ``vehicle". Further, we hypothesize the existence of {\it{prototypical classes}} as a subset of leaf classes in a semantic hierarchy that are sufficient and necessary to construct a dataset to train a model for proposal generalization. We define new quantitative metrics to measure these properties for any set of classes and show that it is possible to construct a small prototypical set of object classes. This has positive implications for large taxonomies, since it is sufficient to annotate examples only for the prototypical classes. \item \textbf{Does the label-granularity of the dataset affect generalization? 
If so, what is the coarsest granularity that can be used?} Coarse-grained labels (``vehicles" instead of ``taxis") are significantly less tedious to annotate and more accurate than fine-grained labels. Past works like RFCNN-3000 \cite{singh2018r} argued that a single super class might be sufficient to obtain good proposals. However, we show that there is a trade-off between using very few coarse classes and large-number of fine-grained classes, and a middle-ground approach leads to best generalization. \item \textbf{What are the \emph{modeling} choices that are critical for leveraging state-of-the-art detectors to obtain generalized proposals?} We show that: (a) detections from two-stage networks like \frcnn are better for obtaining generalized proposals than a single-stage network like \retina, (b) while class-specific bounding box regression is typically used in \frcnn, it is beneficial only when considering larger number of proposals (average recall AR@1000) and class-agnostic regression is better when considering fewer proposals (AR@100) and (c) choice of NMS threshold is dependent on the number of proposals being considered (AR@100 or AR@1000). \end{itemize} On \oiv \cite{kuznetsova2018open}, we show that compared to training with all the object classes, using a prototypical subset of $25\%$ of the object classes only leads to a drop of $4.3\%$ in average recall (AR@100), while training with $50\%$ of such classes leads to a negligible drop of $0.9\%$. We also show how the detections from \frcnn can be fused to obtain high quality proposals that have $10\%$ absolute gain in AR@100 compared to the class-agnostic proposals of the RPN from the same network and $3.5\%$ better than \retina. To stress the practical importance of generalized proposals, we also show that generalization ability is directly correlated with the performance of weakly supervised detection models. \section{Related Work} \label{sec:relwork} \noindent \textbf{Generalizing localization across multiple classes: }The idea of different object classes sharing the same structure has been exploited in building detection models for a long time\cite{felzenszwalb2009object,novotny2016have,ott2011shared,salakhutdinov2011learning,torralba2004sharing}. More recently, \cite{dai2016r,ren2015faster} also have a dedicated proposal network for object localization. However these works do not measure the transferability of proposals trained on one set of classes to another. Uijlings \textit{et al.} \cite{uijlings2018revisiting} tried to transfer information from coarse source classes to fine-grained target classes that share similar localization properties. They showed that this can help weakly supervised detection for the target classes. LSDA \cite{hoffman2014lsda} transformed classifiers into detectors by sharing knowledge between classes. Multiple works \cite{tang2016large,hoffman2016large,rochan2015weakly,guillaumin2012large} showed the benefit of sharing localization information between similar classes to improve semi supervised and weakly supervised detection. Yang \textit{et al.} \cite{yang2019detecting} trained a large-scale detection model following similar principles. Singh \textit{et al.} \cite{singh2018r} showed that even a detector trained with one class can localize objects of different classes sufficiently well due to commonality between classes. We generalize this idea further. 
There has also been work on learning models \cite{yang2019detecting,redmon2017yolo9000,gao2019note} with a combination of bounding boxes for certain classes and only class labels for others. They inherently leverage the idea that localization can generalize across multiple classes. We provide systematic ways to quantify and measure this property for proposal models. \noindent \textbf{Object proposal generation models:} There have been many seminal works on generating class-agnostic object proposals \cite{uijlings2013selective,zitnick2014edge,pont2016multiscale,krahenbuhl2014geodesic}. A comprehensive study of different methods can be found in \cite{hosang2015makes} and a study of proposal evaluation metrics can be found in \cite{chavali2016object}. Proposal models have also been trained with dedicated architectures and objectives in \cite{pinheiro2015learning,kuo2015deepbox,szegedy2014scalable}. In our work, we leverage standard models like \frcnn and focus on the dataset properties required to achieve generalization with this model. \section{Approach} \vspace{-0.1in} \label{sec:approach} We study two important aspects involved in obtaining generalized proposals from a detection model: (1) {\bf{Data Properties}} such as the granularity of the label space (shown in Fig.~\ref{fig:g1}), and the visual diversity of object classes under each label, required for generalization of proposals. The idea of label granularity and visual diversity is shown in Fig.~\ref{fig:g2}. We investigate how a smaller subset of ``prototypical" object classes in a dataset which is representative of all other classes can be identified. \begin{figure}[t!] \centering \begin{subfigure}[t]{0.56\textwidth} \centering \includegraphics[width=0.99\textwidth]{figures/label_granularity.pdf} \caption{Label semantic hierarchy} \label{fig:g1} \end{subfigure}\hfill \begin{subfigure}[t]{0.4\textwidth} \centering \includegraphics[width=0.99\textwidth]{figures/rui_figs_gran_v2.pdf} \caption{Granularity vs. Diversity} \label{fig:g2} \end{subfigure} \caption{We study two important dataset properties needed to train a proposal model: label granularity and visual diversity. (a) Label granularity can be represented by different levels in a semantic hierarchy as shown. (b) The difference between label granularity and visual diversity is illustrated. At the same granularity, we can either have high or low visual diversity as shown} \label{fig:gran_visual} \vspace{-0.2in} \end{figure} (2) {\bf{Modeling Choice}} for leveraging a detector trained on a dataset with seen classes to obtain proposals that generalize to unseen classes. \subsection{Dataset Properties} \label{sec:data_prop} The choice of labels and data used to train the model is crucial for generalization. To study these properties, we assume: (a) classes are organized in a semantic tree and (b) internal nodes do not have any data of their own, that are not categorized into one of its child nodes. In practice, such a hierarchy is either already available (\oiv) or can be obtained from Wordnet~\cite{wordnet}. These assumptions help us study the datasets under controlled settings. However, later we explore a way to identify ``prototypical" subsets even when a semantic hierarchy is unavailable. \subsubsection{Label Space Granularity} \label{sec:label_space} As we noted through some examples earlier, it is intuitive that we might not need fine-grained labels to train a good localization model. 
To quantitatively study the effect of granularity, we construct different datasets with the same set of images and object bounding boxes, but consider classes at different levels of semantic hierarchy (Fig.~\ref{fig:g1}). We then train a model with these datasets and evaluate the generalization ability as a function of label granularity. For instance, for the coarsest root level, we assign all the bounding boxes the same ``object" label and train a detector to distinguish objects from all non-objects. This pertains to the idea of objectness used in weakly supervised algorithms~\cite{uijlings2013selective} and super-class in \cite{singh2018r}. For an intermediate level, we collapse all leaf-labels to their corresponding parent labels at that level to train the model. While a fine-grained label space provides more information, a model trained at this level also attempts to distinguish object classes with similar structure and this could affect generalization. We quantify this trade-off in Sec.~\ref{sec:exp_data}. \subsubsection{Prototypical classes to capture visual diversity} \label{sec:proto} One of the main aims of our work is to see if we can identify a significantly smaller number of classes than the full object-label space, so that bounding boxes from this set of classes are sufficient to train a generalized proposal model. Note that in Sec.~\ref{sec:label_space}, we wanted to study if a small set of coarse labels are sufficient to train a generalized proposal model. However, this does not answer anything about the visual diversity of objects within each sub-category that is required for generalization. As an example (shown in Fig.~\ref{fig:gran_visual}), in order to localize different types of vehicles like ``car" or ``airplane" it might be sufficient to collapse the label for all these objects into a single label named ``vehicle", however dropping all instances of airplane during training will lead to a drop in performance for this class. To quantitatively study this effect, we introduce the notion of ``prototypical" classes. Given a large set of leaf classes, these are the smallest subset such that a model trained only with instances from them is sufficient to localize objects from the remaining classes. Note that due to the long-tail distribution of real-world data, obtaining images for large number of semantic classes is a tedious task. If a small set of prototypical classes does exist, this makes the data collection process much easier when scaling detection to large number of classes. \noindent{\bf{Properties: }}We identify the two properties that are required to quantify the prototypicality of a set of classes : \textit{Sufficient set}: is a set of classes such that training a model only with examples from them should be sufficient to localize objects from all other classes. The most superfluous sufficient set would be the entire set of leaf classes themselves. \textit{Necessary set}: is a set of classes such that dropping any class from this set will lead to a significant drop in generalization. A simple example would be a very coarse vertical like ``vehicle". Intuitively dropping all vehicles would affect their localization as they do not share localization properties with other classes. We provide concrete ways to measure both these properties in Sec.~\ref{sec:exp_data}. \noindent{\bf{Identifying prototypical classes: }} Given a set of $N$ leaf classes $\mathbb{C}$, we wish to identify a set of $P$ prototypical classes $\mathbb{P} \subset \mathbb{C}$. 
Intuitively, this is similar to clustering the classes that have the same localization structure and then choosing a representative class from each cluster. Below, we discuss three approaches: \noindent(a) \textbf{Oracle visual clustering}: To get an upper bound for choosing the best $P$ prototypical classes, we assume that bounding box annotations for all the $N$ leaf classes are available. We then use these bounding boxes to compute visual similarity between classes. We note that this is not a practical approach, but is crucial to evaluate the effectiveness of proxies we introduce later. We first train a detection model using the annotations of all the leaf classes. We then measure the visual similarity between two classes $i, j$ as \vspace{-0.05in} {\small \begin{align} \label{eq:max_ap} S_{ij} = \max \left( \frac{\text{AP}^i(j)}{\text{AP}^j(j)}, \frac{\text{AP}^j(i)}{\text{AP}^i(i)}\right), \end{align}}where $AP^i(j)$ is the detection average precision (AP) for the $j^{th}$ class when we use the detections corresponding to the $i^{th}$ class as detections of class $j$. $S_{ij}$ is a measure of how well one class can replace another class in localizing it. We then use the resulting similarity measure to hierarchically cluster the classes into $P$ clusters using agglomerative clustering. We then pick the class with the highest number of examples in each cluster to construct the set of prototypical classes. For practical reasons, we use frequency to choose the representative class, since this results in the construction of the largest dataset. \noindent(b) \textbf{Semantic clustering based on frequency}: Semantic similarity is often viewed as a good proxy for visual similarity as shown through datasets like Imagenet \cite{deng2009imagenet} and \oiv. Hence, we use the semantic tree to cluster the classes in an hierarchical fashion starting from the leaves. At any given step, we cluster together two leaf classes that share a common parent if they jointly have the lowest number of examples. The algorithm stops when $P$ clusters are left. We then select the most frequent class from each cluster as a prototypical class. Here we assume that apriori we know the frequency of each class in a dataset. This is a very weak assumption, since a rough estimate of class distribution in a dataset can often be obtained even from weak labels like hashtags. This doesn't require any image-level label or bounding boxes and is easy to implement in practice. \noindent(c) \textbf{Most frequent prototypical subset}: For this baseline, we choose the top $P$ most frequently occurring classes in the dataset as the prototypical classes. Note that unlike the previous approaches, this does not require any knowledge of the semantic hierarchy. \subsection{Modeling Choice\label{subsec:model}} \label{sec:model_choice} Once the dataset is fixed, the next step is to train a detection model. In our work, we explore the use of two models: \frcnn and \retina. The observations made in our work should nevertheless generalize to other two-stage and single-stage detection models as well. In the case of a single-stage network, the detections from a model trained on a source dataset with seen classes can directly be treated as proposals. Their ability to localize novel classes in a target dataset can be evaluated to test generalization. 
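For concreteness, the oracle visual clustering step described above can be sketched as follows. This is a minimal illustrative reimplementation rather than the authors' code: the cross-class AP matrix \texttt{ap} (with \texttt{ap[i,j]} $=\mathrm{AP}^i(j)$ and a non-zero diagonal) and the per-class example counts are assumed to be precomputed, SciPy's average-linkage agglomerative clustering stands in for whatever implementation was actually used, and all names are ours.
\begin{verbatim}
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

def oracle_prototypical_classes(ap, counts, num_protos):
    # ap[i, j] = AP^i(j): AP on class j when detections of class i are
    # used as detections of class j (diagonal assumed > 0);
    # counts[i] = number of training examples of class i.
    n = ap.shape[0]
    sim = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            sim[i, j] = max(ap[i, j] / (ap[j, j] + 1e-9),
                            ap[j, i] / (ap[i, i] + 1e-9))   # Eq. (1)
    dist = 1.0 - np.clip(sim, 0.0, 1.0)       # similarity -> distance
    condensed = dist[np.triu_indices(n, k=1)]
    labels = fcluster(linkage(condensed, method="average"),
                      t=num_protos, criterion="maxclust")
    # representative of each cluster = its most frequent class
    return sorted(int(max(np.flatnonzero(labels == c),
                          key=lambda i: counts[i]))
                  for c in np.unique(labels))
\end{verbatim}
The frequency-based semantic clustering variant would only change the grouping step (merging sibling leaves with the lowest joint example counts until $P$ clusters remain); the most-frequent-class representative selection stays the same.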
However, for a two-stage network, another natural choice would be to use the Region Proposal Network (RPN) of the model, since it is trained in a class-agnostic fashion and aims to localize all objects in the image. However, as noted by He et al. \cite{he2017mask}, the detection part of the model is better at localizing the object due to more fine-tuned bounding box regression and better background classification. We study this more rigorously, by comparing the generalization of proposals obtained from the detection head as well as RPN. We vary different model parameters to obtain the optimal setting for proposal generalization. \section{Experiments} \label{sec:expts} We evaluate the ability of the object proposal obtained from detection models learned with different settings in Section~\ref{sec:model_choice} to generalize to new unseen classes. We also explore the effects of label-space granularity and the need for semantic and visual diversity. Finally, we show that a small set of prototypical classes could be used to train an effective proposal model for all classes in the dataset. \subsection{Experimental Setup} \noindent \textbf{Source and target splits: } We split each dataset into two parts: (a) {\it{Source dataset}} consisting of a set of seen classes called {\it{source classes}} and (b) {\it{Target dataset}} consisting of a set of unseen classes called {\it{target classes}}. {\it{Target dataset}} is used to evaluate the generalization of proposal models trained with the {\it{Source dataset}}. Since an image can contain both source and target classes, we ensure that such images are not present in the source class dataset. However, there may be a small number of images in the target dataset that contain source classes. We use the following two datasets for our experiments: (1) {\it{\oivlong(\oiv)~\cite{kuznetsova2018open}}} consists of $600$ classes. We retain only object classes which have more than $100$ training images. This results in a total of $482$ leaf classes. We randomly split all the leaf classes into $432$ source (\oivsource dataset) and $50$ target (\oivtarget dataset) classes. There are also annotations associated only with internal nodes (for example, "animal") and without a specific leaf label (like the type of animal). We remove such annotations and all associated images, since such images cannot be unambiguously assigned to a source or target split. This leaves us with $1.2M$ images with $7.96M$ boxes in the train split and $73k$ images with $361K$ boxes in the test split. For training proposal models, we always use the train split and for evaluation we use the test split. Wherever needed, we explicitly suffix the dataset with "train" and "test" (for example, \oivsource-train and \oivsource-test). (2) {\it{\coco~\cite{coco}}}: We use the 2017 version of the \coco dataset and randomly split the classes in to $70$ source (\cocosource dataset) and $10$ target (\cocotarget dataset) classes. For training, we use the train split and for evaluation, we use the $5000$ images from the validation set. Wherever needed, we explicitly suffix the dataset with ``train" and ``test". Target classes list is provided in the supplementary. \noindent\textbf{Evaluation metrics: } We report the standard average recall (\AR{k})~\cite{hosang2015makes} metric to evaluate the quality of proposals. One of the main motivations for building a generalized proposal model is to use the resulting proposals to train detection models for unseen classes with limited or no bounding box annotation. 
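As a point of reference for the evaluation metric above, the following is a simplified per-image sketch of \AR{k} (not the exact evaluation code): each ground-truth box is matched to its best-overlapping proposal among the top-$k$ score-sorted proposals, recall is averaged over a discrete set of IoU thresholds ($0.5{:}0.05{:}0.95$ here, approximating the IoU integral of \cite{hosang2015makes}), and dataset-level \AR{k} additionally averages over all annotated boxes in the test split. Function and argument names are ours.
\begin{verbatim}
import numpy as np

def iou_matrix(a, b):
    # pairwise IoU between boxes in (x1, y1, x2, y2) format
    area_a = (a[:, 2] - a[:, 0]) * (a[:, 3] - a[:, 1])
    area_b = (b[:, 2] - b[:, 0]) * (b[:, 3] - b[:, 1])
    lt = np.maximum(a[:, None, :2], b[None, :, :2])
    rb = np.minimum(a[:, None, 2:], b[None, :, 2:])
    wh = np.clip(rb - lt, 0.0, None)
    inter = wh[..., 0] * wh[..., 1]
    return inter / (area_a[:, None] + area_b[None, :] - inter + 1e-9)

def average_recall(proposals, gt_boxes, k=100,
                   thresholds=np.arange(0.5, 1.0, 0.05)):
    # proposals: boxes sorted by objectness score; gt_boxes: annotated
    # boxes of one image.  Each ground truth is matched to its best
    # overlapping proposal among the top-k (no one-to-one matching).
    if len(gt_boxes) == 0:
        return None
    if len(proposals) == 0:
        return 0.0
    best_iou = iou_matrix(gt_boxes, proposals[:k]).max(axis=1)
    return float(np.mean([(best_iou >= t).mean() for t in thresholds]))
\end{verbatim}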
A typical proposal-based supervised detection model RCNN could also be used to evaluate the quality of proposals. However, the application to weakly supervised detection is more compelling since their performance is closely tied to proposals than supervised models which can correct the inaccuracies in proposals due to availability of labelled bounding boxes. Hence, we implement a weakly supervised detector with the approach used in YOLO9000~\cite{redmon2017yolo9000}\footnote{We chose~\cite{redmon2017yolo9000} due to its simplicity. In practice, we can use other weakly supervised approaches too.}. We report the detection AP (averaged over IoU thresholds ranging from $0.5$ to $0.95$) on the test set of the target dataset. Please see the supplementary material for more details. \noindent\textbf{Implementation details: } We fix Imagenet pre-trained ResNet-50 with Feature Pyramid Networks \cite{lin2017feature} as the backbone for all models. We use the Detectron codebase~\cite{girshick2018detectron}. For \coco, we train the models for $90k$ iterations with an initial learning rate and the decay suggested in \cite{ren2015faster}. For \oiv, we train the models for $800k$ iterations with an initial learning rate of $0.01$ and cosine learning rate decay. When training the weakly supervised model (\cite{redmon2017yolo9000}), we use the top $100$ proposals in each image to choose pseudo ground truth at every training iteration. \subsection{Modeling Choices} We first identify the best detection model and setting to extract proposals that generalize to new unseen classes. We then analyze generalization ability under different settings from this model. We reiterate that in order to test generalization, evaluation is done on target classes that have no intersection with the source classes used during training. \noindent {\textbf{Choice of detection model:}} We compare the generalization ability of a two-stage network (\frcnn) and a single-stage network (\retina) in Fig.~\ref{fig:mod1}. Since, in a two-stage model like \frcnn, the output from the RPN is class-agnostic and can be used as proposals too, we compare the performance of the RPN as well. The models are trained on \cocosource-train dataset. We report AR@100 on seen classes in the \cocosource-test dataset, as well as unseen classes in the \cocotarget-test. The difference in performance between seen and unseen classes reflects the generalization gap. We also show an upper-bound performance on \cocotarget-test obtained by models trained on the full training dataset containing both \cocosource-train and \cocotarget-train. \begin{figure}[t] \begin{subfigure}[t]{0.45\textwidth} \centering \includegraphics[width=0.99\textwidth]{figures/coco_rpn_vs.pdf} \caption{Comparison of detection models} \label{fig:mod1} \end{subfigure}\hfill \begin{subfigure}[t]{0.45\textwidth} \centering \includegraphics[width=0.99\textwidth]{figures/coco_ar_breakdown_v2.pdf} \caption{RPN vs. detection head} \label{fig:mod2} \end{subfigure} \caption{(a) \AR{100} corresponding to different models trained on \cocosource-train and evaluated on different test splits. Upper-bound corresponds to model trained on full \coco dataset and evaluated on \cocotarget-test. (b) Average recall of RPN and detection head at different IoU thresholds, for model trained on \cocosource-train and evaluated on \cocotarget-test} \label{fig:rpn_vs_det} \vspace{-0.2in} \end{figure} We notice that on seen classes, \retina achieves a lower performance compared to \frcnn (drop of $2.4\%$). 
However, the drop is larger for unseen target classes ($3.5\%$), indicating a larger generalization gap for \retina. One reason for this is that \retina is more sensitive to missing bounding boxes corresponding to unlabelled unseen classes in the source dataset. Proposals corresponding to unseen object classes that are not annotated in the training data are treated as hard-negatives, due to the use of focal-loss. Hence, the model heavily penalizes proposals corresponding to unannotated bounding boxes, leading to overall drop in AR. Since some seen classes share visual similarity with unseen classes, this sensitivity to missing annotations affects AR for seen classes too. However, this effect is more magnified for unseen target classes. On the other hand, in \frcnn, only a small number of proposals (less than $512$) which do not intersect with annotated bounding boxes are sampled at random as negatives. The probability that a proposal corresponding to an unseen object class is chosen as a negative is lower, leading to better generalization. Hence, for the rest of the paper, we use \frcnn as the detection model. We also notice that the detection head of \frcnn provides better overall performance \emph{without} sacrificing generalization. This can be attributed to better bounding box regression from the detection head which has additional layers, following the RPN in the model. To investigate this effect, we measure AR at different IoU thresholds for both sets of proposals for the model trained on \cocosource and evaluated on \cocotarget in Fig.~\ref{fig:mod2}. We see that the difference in \AR{1000} increases drastically at higher values of IoU threshold, and is negligible at a threshold of $0.5$. This implies that the boxes from the detection head are more fine-tuned to exactly localize objects, unlike the RPN. \noindent {\textbf{Choice of \frcnn settings:}} The results so far were obtained using class-specific bounding box regression (which is the standard setting in \frcnn) for the detection head. Since we want the bounding boxes to generalize to unseen classes, class agnostic regression could be a valid choice too. We study this in Fig.~\ref{fig:cls_ag} for \oiv and \coco. We see that class agnostic regression is better for small number of proposals as seen by \AR{10,20,50}. However, when we consider more proposals (\AR{1000}), class specific regression provides a significant gain ($4.5\%$ for \oiv and $7.5\%$ for \coco). It results in multiple regressed versions (one corresponding to each class) of the same proposal generated from the RPN. This helps in improving recall at higher number of proposals. Previously, we fixed the NMS threshold to $0.5$. We study the effect of this threshold in Fig.~\ref{fig:nms_fig}. We train on \oivsource, \cocosource and test on \oivtarget, \cocotarget respectively. Intuitively, a low threshold can improve spatial coverage of objects by ensuring proposals are spatially well spread out. When considering a larger number of proposals, there are sufficient boxes to ensure spatial coverage, and having some redundancy is helpful. This is witnessed by the steeper drop in \AR{1000} at low NMS thresholds, unlike \AR{100}. Based on these observations, we use class-specific bounding box regression with an NMS threshold of $0.5$ for rest of the experiments. \begin{figure}[t] \vspace{-0.1in} \centering \begin{minipage}[t]{0.45\textwidth} \includegraphics[width=\textwidth]{figures/cls_specific_vs_agnostic.pdf} \caption{Effect of class agnostic regression vs. 
class specific regression} \label{fig:cls_ag} \end{minipage} \hfill \begin{minipage}[t]{0.45\textwidth} \includegraphics[width=\textwidth]{figures/nms.pdf} \caption{Effect of NMS threshold on performance of proposals} \label{fig:nms_fig} \end{minipage} \vspace{-0.21in} \end{figure} \begin{table}[h] \vspace{-0.4in} \centering \begin{center} \caption{Comparing performance of proposals generated by RPN head and detection head for weakly supervised detection. We also show the \AR{100} numbers which are seen to be correlated with detection AP}\label{tab:det_map} \begin{tabular}{l|c|c|c|c} \hline \multicolumn{5}{c}{Target Dataset - \oivtarget}\\ \hline & \multicolumn{2}{c|}{Source: \oivsource} & \multicolumn{2}{c}{Source: \oivall}\\ & Det. AP & \AR{100} & Det. AP & \AR{100} \\\hline \frcnn RPN & 8.7 & 55.0 & 9.6 & 60.4\\ \frcnn Detection & \textbf{24.0} & \textbf{69.4} & \textbf{30.8} & \textbf{76.9} \\ \hline \end{tabular} \end{center} \vspace{-0.35in} \end{table} \noindent {\textbf{Weakly supervised detection:}} A strong practical utility for generalized proposals that localize all objects is that, no bounding box annotations should be needed to train a detection model for new object classes. Hence, we measure the effect of better generalized proposals on the performance of a weakly supervised detection model, trained without bounding box annotations. We show results corresponding to the RPN head and detection head of \frcnn in Tab.~\ref{tab:det_map}. The weakly supervised model is trained on \oivtarget-train and evaluated on \oivtarget-test. We also show results for proposals obtained from training with \oivsource as well as \oivall (upper-bound). We see that the performance of the weakly supervised detection model is directly correlated with the quality of the proposals being used, showing the need for good generalized proposals. \subsection{Dataset Properties} \label{sec:exp_data} \noindent {\textbf{Effect of label space granularity: }} \oiv organizes object classes in a semantic hierarchy with $5$ levels. We directly leverage this hierarchy to measure the effect of label granularity (Fig.~\ref{fig:g1}). We construct a dataset at each level $L_i$ (\oivsource-$L_i$) by retaining all the images in \oivsource, but relabeling bounding boxes corresponding to leaf labels with their ancestor at $L_i$. We construct 5 datasets, one for each level with the same set of images and bounding boxes. We report the performance of these models on \oivtarget in Tab.~\ref{tab:label_gran}. Along with \AR{100/1000}, we also report the detection AP of the weakly supervised detection models trained with the proposals obtained from the corresponding levels. The weakly supervised models are trained on \oivtarget-train and evaluated on \oivtarget-test. \vspace*{-8mm} \setlength{\tabcolsep}{4pt} \begin{table} \begin{center} \caption{Effect of different label space granularities on the quality of proposal for \oiv dataset. The number of classes at each level is shown in brackets. Evaluation is done on \oivtarget-eval dataset. 
Both AR and weakly supervised detection AP are reported} \label{tab:label_gran} \begin{tabular}{cccc} \hline\noalign{\smallskip} Source Dataset & AR@100 & AR@1000 & AP (weak)\\ \noalign{\smallskip} \hline \noalign{\smallskip} \oivsource-$L_0 (1)$ & 61.7 & 72.0 & 19.5\\ \hline \oivsource-$L_1 (86)$ & 63.4 & 73.0 & 22.6\\ \hline \oivsource-$L_2 (270)$ & 63.7 & 75.2 & 23.1\\ \hline \oivsource-$L_3 (398)$ & 65.2 & 77.2 & 24.3\\ \hline \oivsource-$L_4 (432)$ & 64.2 & 76.1 & 24.0\\ \hline \end{tabular} \end{center} \end{table} \vspace{-0.35in} \setlength{\tabcolsep}{1.4pt} Some past works like \cite{singh2018r} postulated that one super-class (similar to $L_0$) could be sufficient. However, we observe that both \AR{100} and \AR{1000} increase as we move from $L_0$ to $L_1$ along with a significant gain ($3.1\%$) in AP. This indicates that training with just a binary label yields lower quality proposals compared to training with at least a coarse set of labels at $L_1$. While both AP and \AR{100} increase as the granularity increases from $L_1$ to $L_3$, the difference is fairly small for both metrics ($ < 2\%$ change). However, annotating bounding boxes with labels at $L_1$ ($86$ labels) is significantly cheaper than $L_3$ ($398$ labels). Hence, $L_1$ can be seen as a good trade-off in terms of labelling cost, and training a good model. \vspace*{0.02in} \noindent {\textbf{Need for visual and semantic diversity: }} We noticed that training with coarse labels can yield good proposals. It would be interesting to observe if all or only some of these coarse classes are crucial to build a good proposal model. To study this, we conduct ablation experiments where we train a model with \oivsource-train after dropping all images having a specific $L_1$ label and evaluate the proposals on the \oivsource-test images belonging to this label in Fig.~\ref{fig:drop_classes_fig}a. We repeat this experiment for a few fine-grained classes at $L_4$ in Fig.~\ref{fig:drop_classes_fig}b. \begin{figure}[t] \vspace{-0.1in} \begin{subfigure}[t]{0.49\textwidth} \centering \includegraphics[width=0.7\textwidth]{figures/drop_coarse.pdf} \label{fig:1} \end{subfigure}\hfill \begin{subfigure}[t]{0.49\textwidth} \centering \includegraphics[width=0.7\textwidth]{figures/drop_fine.pdf} \label{fig:2} \end{subfigure} \vspace{-0.1in} \caption{Effect of Semantic Diversity, measured by dropping an object class during training and measuring the resulting change in AR for that class: (a) dropping L1 classes and (b) dropping L4 classes} \label{fig:drop_classes_fig} \vspace{-0.2in} \end{figure} We notice that certain coarse classes (like ``clothing" and ``vehicle") experience a huge drop in performance. On the other hand, ``animal" and ``food" are less affected. This can be explained from the fact that, there are many toy-animal images within the coarse label ``toy", similarly ``containers" is a coarse class in \oiv which is often depicted with food in it. These classes can act as proxies for ``animal" and ``food" respectively. However, ``clothing" and ``vehicle" do not have good proxies. More interestingly, we make a similar observation for finer classes at $L_4$ like airplanes and helicopters. This suggests that there is a smaller set of objects that have unique localization properties in \oiv. \noindent {\textbf{Prototypical classes: }} Some object classes are similar to others in terms of localization, while there are classes that are unique and need to be included in training. 
Motivated by this observation, we try to identify a small set of classes called ``prototypical" classes which are both necessary and sufficient to train a generalizable proposal model. We use the \oivsource dataset as before with 432 leaf classes. We use the different approaches outlined in Sec.~\ref{sec:proto} to identify a subset of ``prototypical" classes. Note that among these methods, oracle visual clustering assumes availability of bounding boxes for all classes and serves as an upper bound on how to identify a really good prototypical set. Some sample clusters of classes obtained by this method are shown in Tab.~\ref{tab:visual_clusters}. The remaining methods make weaker assumptions and are more useful in practice. In addition to these methods,we also train models with a set of randomly chosen prototypical classes. \vspace*{-0.3in} \setlength{\tabcolsep}{4pt} \begin{table} \begin{center} \caption{Sample clusters obtained by oracle visual clustering for $P=50$. The most frequent class in each cluster chosen as a prototypical class is highlighted} \label{tab:visual_clusters} \begin{tabular}{lll} \hline\noalign{\smallskip} \scriptsize{\textbf{Woman}, Girl, Doll} & \scriptsize{\textbf{Wheel}, Tire, Bicyclewheel} & \scriptsize{\textbf{Lobster}, Scorpion, Centipede} \\ \hline \scriptsize{\textbf{Glasses}, Goggles} & \scriptsize{\textbf{Jeans}, Shorts, Miniskirt} & \scriptsize{\textbf{Goose}, Ostrich, Turkey} \\ \hline \scriptsize{\textbf{Book}, Shelf, Bookcase} & \scriptsize{\textbf{Musicalkeyboard}, Piano} & \scriptsize{\textbf{Swimmingpool}, Bathtub, Jacuzzi} \\ \hline \scriptsize{\textbf{Man}, Boy, Shirt} & \scriptsize{\textbf{Apple}, Pomegranate, Peach} & \scriptsize{\textbf{Raven}, Woodpecker, Bluejay}\\ \hline \end{tabular} \end{center} \vspace{-0.3in} \end{table} \setlength{\tabcolsep}{1.4pt} We introduce two ways to measure \textit{sufficiency} and \textit{necessity}. From the $432$ classes, once we pick a subset of $P$ prototypical classes, we train a proposal model and evaluate the resulting model on the $50$ target classes in \oivtarget, to measure \textit{sufficiency} and \textit{necessity}. \noindent{\textbf{Dataset construction for fair comparison}} We ensure that the total number of images as well as bounding box annotations are kept fixed when we construct datasets for different prototypical subsets. This is important to ensure that proposals trained with different subsets are comparable. Once we chose a set of $P$ prototypical classes, we uniformly sub-sample \oivsource images having any of these prototypical classes to get a subset of $920K$ images. And within each subset, we uniformly sub-sample the bounding boxes corresponding to the prototypical classes to retain $5.2M$ bounding boxes. We do not retain any bounding boxes outside the chosen prototypical classes. \noindent{\textbf{Training with prototypical subsets}} For a set of prototypical classes and the corresponding dataset, we train a \frcnn with those classes as labels. We combine the detections as described in Sec.~\ref{sec:model_choice} to obtain proposals. \noindent{\textbf{Measuring sufficiency of prototypical classes}} A subset of classes are sufficient, if a proposal model trained with them generalizes as well as a model trained with all classes. We follow this notion and evaluate the proposals obtained from the models trained with different prototypical subsets on \oivtarget and report the average recall (\AR{100}) in Fig.~\ref{fig:proto_properties}a. 
Similar trends are observed with \AR{1000} as well (shown in supplementary). Looking at the proposals obtained from oracle visual clustering, training with less than 25\% of the classes (100) leads to only a drop of $4.8\%$ in \AR{100}, compared to training with images belonging to all object classes. This gap reduces to $0.4\%$ if we train with 50\% (200) of all the classes. This provides an empirical proof for the existence of a significantly smaller number of object classes that are sufficient to train a generalizable proposal model. Next, we look at the prototypical classes obtained from a more practical approach: semantic clustering. We notice that the proposal model trained with these prototypical classes always outperform other approaches such as choosing a random set of classes or the most frequent set of classes. Further, the performance of this method is only lower by a margin of $3\%$ compared to oracle visual clustering for different value of $P$. Selecting most frequent set of classes as the prototypical subset performs slightly worse than semantic clustering. This shows that semantic clustering can serve as a good way to identify prototypical classes for large taxonomies when the semantic hierarchy is available for the dataset, else the most frequent subset is a weaker alternative. \begin{figure}[t] \vspace{-0.1in} \begin{minipage}[t]{0.99\textwidth} \centering \begin{subfigure}[t]{0.49\textwidth} \centering \includegraphics[width=1.0\textwidth]{figures/sampling_method_ar100.pdf} \label{fig:suff_1} \end{subfigure}\hfill \begin{subfigure}[t]{0.49\textwidth} \centering \includegraphics[width=\textwidth]{figures/necessity.pdf} \label{fig:necessity} \end{subfigure} \vspace{-0.2in} \caption{(a) Average recall \AR{100} for proposals obtained from models trained with varying number of prototypical classes chosen by different methods. We show the average recall on the \oivtarget dataset with $50$ unseen classes. $P$ denotes the number of prototypical classes. Higher value indicates higher sufficiency. (b) The relative change in AR for target classes by dropping proposals corresponding to the most similar class in the prototypical subset. Higher value indicates lower redundancy in prototypical subset and higher necessity} \label{fig:proto_properties} \end{minipage} \vspace{-0.25in} \end{figure} \noindent{\textbf{Measuring necessity of prototypical classes}} A set of classes are considered necessary, if there is no redundancy among the classes in terms of localization properties. For a given class in the set, there should be no equivalent class which can provide similar bounding boxes. We measure this property for a prototypical subset by evaluating the corresponding proposal model on \oivtarget dataset using the following method. For every target class in \oivtarget, we measure the relative change in \AR{100} and \AR{1000} by removing proposals corresponding to the most similar class in the prototypical subset (similarity measured by Eq.~\ref{eq:max_ap}). The change in AR would be minimal if there is another class in the prototypical subset which can localize the target class. This measure, averaged over all target classes provides a good estimate of necessity. A high value symbolizes a high degree of necessity, while a low value corresponds to redundancy among the prototypical classes. We plot this for different number of prototypical classes for oracle visual clustering and semantic clustering in Fig.~\ref{fig:proto_properties}b. 
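A rough sketch of this necessity measurement is given below, reusing the \texttt{average\_recall} helper sketched earlier. It simplifies the actual protocol to a single image and assumes score-sorted per-class proposal arrays; \texttt{nearest\_proto}, \texttt{props\_by\_class} and \texttt{gt\_by\_class} are hypothetical containers, not data structures from the paper.
\begin{verbatim}
import numpy as np

def necessity(target_classes, nearest_proto, props_by_class,
              gt_by_class, k=100):
    # props_by_class[c]: (m, 5) array of (x1, y1, x2, y2, score) boxes the
    #                    proposal model attributes to prototypical class c
    # gt_by_class[t]:    ground-truth boxes of target class t
    # nearest_proto[t]:  prototypical class most similar to t (Eq. 1)
    def top_k_boxes(class_subset):
        pooled = np.concatenate([props_by_class[c] for c in class_subset])
        return pooled[np.argsort(-pooled[:, 4])][:k, :4]
    rel_changes = []
    for t in target_classes:
        keep_all = top_k_boxes(list(props_by_class.keys()))
        keep_rest = top_k_boxes([c for c in props_by_class
                                 if c != nearest_proto[t]])
        ar_full = average_recall(keep_all, gt_by_class[t], k)
        ar_drop = average_recall(keep_rest, gt_by_class[t], k)
        rel_changes.append((ar_full - ar_drop) / max(ar_full, 1e-9))
    return float(np.mean(rel_changes))   # averaged over target classes
\end{verbatim}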
We notice that at any given number of prototypical classes, the change in average recall is higher for oracle visual clustering compared to semantic clustering. This demonstrates that visual clustering leads to prototypical classes which are less redundant (and more necessary). As expected, we see the necessity drops, as we increase the number of prototypical classes for both methods. Again, this is expected since redundancy between classes increases with more number of classes. The relative change in \AR{1000} is also seen to be lower than \AR{100}, since when considering a larger number of proposals, we expect more redundancy among the proposals. Finally, for oracle visual clustering as we move from $200$ to $300$ classes, sufficiency changes by a small amount from $73.2$ to $75.9$ ( Fig.~\ref{fig:proto_properties}a), while the necessity drops steeply in Fig.~\ref{fig:proto_properties}b. This suggests that the ideal number of prototypical classes for \oiv could be around $200$. \section{Conclusion} We studied the ability of detection models trained on a set of seen classes to localize unseen classes. We showed that \frcnn can be used to obtain better proposals for unseen classes than \retina, and studied the effect of model choices on generalization of proposals, like class-agnostic bounding box regression and NMS threshold. We quantitatively measured the importance of visual diversity and showed that using a very fine-grained or very coarse label-space can both affect generalization, while a middle-ground approach is best suited. We introduced the idea of prototypical classes that are sufficient and necessary to obtain generalized proposals. We demonstrated different approaches to determine small prototypical subsets for a given dataset. We believe that our work is a step forward towards learning proposals that generalize to a large number of classes and scaling up detection in a more data-efficient way. \clearpage \bibliographystyle{splncs04} \bibliography{egbib} \end{document}
https://openreview.net/forum?id=ZHT0ZpxQO5E
https://arxiv.org/abs/2008.05700
Evaluate the paper based on its proposal to use prototype classes for object localization, including the clarity of the definitions, practical modeling choices, and the extent of experimentation.
Unsupervised Image Classification for Deep Representation Learning

1. [Summary] The authors describe a modification of the embedding-clustering method DeepCluster [2]. Unlike DeepCluster, this work proposes a unified pipeline in which clustering is performed directly through an image classification task. Experiments show that their method achieves the same or better performance than DeepCluster.

2. [Strengths] The authors show that the embedding-clustering phase of DeepCluster [2] can be avoided, and that its two phases can be carried out directly through a classification task. The method is quite simple, yet it obtains state-of-the-art results.

3. [Weaknesses] The paper is somewhat difficult to read; I found it hard to follow the story. The whole paper is built on top of [2], although, considering the results, it could be described as a contribution to the field in its own right. Although the authors propose different visual data augmentations (initially only random crops, later extended with SimCLR techniques) and claim efficiency because they avoid storing embeddings, the work does not seem closely related to the study of visual inductive priors.

4. [Overall rating] Paper rating: 6.

5. [Justification of rating] With a simpler method, the paper shows good results. However, readability, structure, and the possibility of being out of scope lower the score.

6. [Detailed comments] I suggest the authors increase the size of Figs. 1, 2 and 4.

Rating: 6: Marginally above acceptance threshold
Confidence: 3: The reviewer is fairly confident that the evaluation is correct
\documentclass[runningheads]{llncs} \usepackage{graphicx} \usepackage{comment} \usepackage{amsmath,amssymb} % \usepackage{color} \usepackage{multirow} \usepackage{booktabs} \usepackage{floatrow} \floatsetup[table]{capposition=top} \floatsetup[figure]{capposition=bottom} \newfloatcommand{capbtabbox}{table}[][\FBwidth] \usepackage{subfigure} \begin{document} \pagestyle{headings} \mainmatter \def\ECCVSubNumber{Anonymous} % \title{Unsupervised Image Classification for Deep Representation Learning} % \titlerunning{Unsupervised Image Classification} \author{Weijie Chen\inst{1} \and Shiliang Pu\inst{1}\thanks{Corresponding Author} \and Di Xie\inst{1}\and Shicai Yang\inst{1}\and Yilu Guo\inst{1}\and Luojun Lin\inst{2}} \authorrunning{Chen et al.} \institute{Hikvision Research Institute, Hangzhou, China \\ \email{\{chenweijie5, pushiliang.hri, xiedi, yangshicai, guoyilu5\}@hikvision.com}\\ School of Electronic and Information Engineering, South China University of Technology\\ \email{linluojun2009@126.com}} \maketitle \begin{abstract} Deep clustering against self-supervised learning (SSL) is a very important and promising direction for unsupervised visual representation learning since it requires little domain knowledge to design pretext tasks. However, the key component, embedding clustering, limits its extension to the extremely large-scale dataset due to its prerequisite to save the global latent embedding of the entire dataset. In this work, we aim to make this framework more simple and elegant without performance decline. We propose an unsupervised image classification framework without using embedding clustering, which is very similar to standard supervised training manner. For detailed interpretation, we further analyze its relation with deep clustering and contrastive learning. Extensive experiments on ImageNet dataset have been conducted to prove the effectiveness of our method. Furthermore, the experiments on transfer learning benchmarks have verified its generalization to other downstream tasks, including multi-label image classification, object detection, semantic segmentation and few-shot image classification. \keywords{Unsupervised Learning, Representation Learning} \end{abstract} \section{Introduction} Convolutional neural networks (CNN) \cite{he2016deep,huang2017densely,chen2019all} had been applied to many computer vision applications \cite{girshick2015fast,long2015fully,lin2019attribute} due to their powerful representational capacity. The normal working flow is to pretrain the networks on a very large-scale dataset with annotations like ImageNet \cite{russakovsky2015imagenet} and then transfer to a small dataset via fine-tuning. However, the dataset collection with manually labelling for pre-training is strongly resource-consuming, which draws lots of researchers' attention to develop unsupervised representation learning approaches. Among the existing unsupervised learning methods, self-supervision is highly sound since it can directly generate supervisory signal from the input images, like image inpainting \cite{doersch2015unsupervised,pathak2016context} and jigsaw puzzle solving \cite{noroozi2016unsupervised}. However, it requires rich empirical domain knowledge to design pretext tasks and is not well-transferred to downsteam tasks. Compared with this kind of self-supervised approaches, DeepCluster is a simple yet effective method which involves litter domain knowledge. 
It simply adopts embedding clustering to generate pseudo labels by capturing the manifold and mining the relation of all data points in the dataset. This process is iteratively alternated with an end-to-end representation learning which is exactly the same with supervised one. However, along with the advantage brought by embedding clustering, an obvious defect naturally appears that the latent embedding of each data point in the dataset should be saved before clustering, which leads to extra memory consumption linearly growing with the dataset size. It makes it difficult to scale to the very large-scale datasets. Actually, this problem also happens in the work of DeeperCluster \cite{caron2019unsupervised}, which uses distributed $k$-means to ease the problem. However, it still did not solve the problem in essence. Also, the data points in most of datasets are usually independently identically distributed (\emph{i.i.d}). Therefore, building a framework analogous to DeepCluster, we wonder if we can directly generate pseudo class ID for each image without explicitly seeing other images and take it as an image classification task for representation learning. \begin{figure}[tp] \centering \includegraphics[width=1.0\columnwidth]{./figures/pipeline2.png} \caption{The pipeline of unsupervised image classification learning. The black and red arrows separately denote the processes of pseudo-label generation and representation learning. These two processes are alternated iteratively. For efficient implementation, the pseudo labels in current epoch are updated by the forward results from the previous epoch which means our training framework is twice faster than DeepCluster.} \label{pipeline} \end{figure} The answer is excitedly YES! We integrate both the processes of pseudo label generation and representation learning into an unified framework of image classification. Briefly speaking, during the pseudo label generation, we directly feed each input image into the classification model with softmax output and pick the class ID with highest softmax score as pseudo label. It is very similar to the inference phase in supervised image classification. After pseudo class IDs are generated, the representation learning period is exactly the same with supervised training manner. These two periods are iteratively alternated until convergence. A strong concern is that if such unsupervised training method will be easily trapped into a local optima and if it can be well-generalized to other downstream tasks. In supervised training, this problem is usually solved by data augmentation which can also be applied to our proposed framework. It is worth noting that we not only adopt data augmentation in representation learning but also in pseudo label generation. It can bring disturbance to label assignment and make the task more challenging to learn data augmentation agnostic features. The entire pipeline is shown in Fig.\ref{pipeline}. To the best of our knowledge, this unsupervised framework is the closest to the supervised one compared with other existing works. Since it is very similar to supervised image classification, we name our method as \emph {Unsupervised Image Classification} (UIC) correspondingly. For simplicity, without any specific instruction, \emph{clustering} in this paper only refers to embedding clustering via $k$-mean, and \emph{classification} refers to CNN-based classification model with cross-entropy loss function. 
To further explain why UIC works, we analyze its hidden relation with both deep clustering and contrastive learning. We point out that UIC can be considered as a special variant of them. We hope our work can bring a deeper understanding of deep clustering series work to the self-supervision community. We empirically validate the effectiveness of UIC by extensive experiments on ImageNet. The visualization of classification results shows that UIC can act as clustering although lacking explicit clustering. We also validate its generalization ability by the experiments on transfer learning benchmarks. All these experiments indicate that UIC can work comparable with deep clustering. To summarize, our main contributions are listed as follows: \begin{itemize} \item A simple yet effective unsupervised image classification framework is proposed for visual representation learning, which can be taken as a strong prototype to develop more advanced unsupervised learning methods. \item Our framework simplifies DeepCluster by discarding embedding clustering while keeping no performance degradation and surpassing most of other self-supervised learning methods. We demonstrate that embedding clustering is not the main reason why DeepCluster works. \item Our training framework is twice faster than DeepCluster since we do not need an extra forward pass to generate pseudo labels. \end{itemize} \section{Related Work} \subsection{Self-supervised learning} Self-supervised learning is a major form of unsupervised learning, which defines pretext tasks to train the neural networks without human-annotation, including image inpainting \cite{doersch2015unsupervised,pathak2016context}, automatic colorization \cite{larsson2016learning,zhang2016colorful}, rotation prediction \cite{gidaris2018unsupervised}, cross-channel prediction \cite{zhang2017split}, image patch order prediction \cite{noroozi2016unsupervised}, and so on. These pretext tasks are designed by directly generating supervisory signals from the raw images without manually labeling, and aim to learn well-pretrained representations for downstream tasks, like image classification, object detection, and semantic segmentation. Recently, contrastive learning \cite{tian2019contrastive,he2019momentum,hjelm2018learning,oord2018representation} is developed to improve the performance of self-supervised learning. Its corresponding pretext task is that the features encoded from multi-views of the same image are similar to each others. The core insight behind these methods is to learn multi-views invariant representations. This is also the essence of our proposed method. \subsection{Clustering-based methods} Clustering-based methods are mostly related to our proposed method. Coates et al. \cite{coates2012learning} is the first to pretrain CNNs via clustering in a layer-by-layer manner. The following works \cite{yang2016joint,xie2016unsupervised,liao2016learning,caron2018deep} are also motivated to jointly cluster images and learn visual features. Among them, DeepCluster \cite{caron2018deep} is one of the most representative methods in recent years, which applies $k$-means clustering to the encoded features of all data points and generates pseudo labels to drive an end-to-end training of the target neural networks. The embedding clustering and representation learning are iterated by turns and contributed to each other along with training. 
Compared with other SSL methods with fixed pseudo labels, this kind of works not only learn good features but also learn meaningful pseudo labels. However, as a prerequisite for embedding clustering, it has to save the latent features of each sample in the entire dataset to depict the global data relation, which leads to excessive memory consumption and constrains its extension to the very large-scale datasets. Although another work DeeperCluster \cite{caron2019unsupervised} proposes distributed $k$-means to ease this problem, it is still not efficient and elegant enough. Another work SelfLabel \cite{asano2019self-labelling} treats clustering as a complicated optimal transport problem. It proposes label optimization as a regularized term to the entire dataset to simulate clustering with the hypothesis that the generated pseudo labels should partition the dataset equally. However, it is hypothesized and not an \emph{i.i.d} solution. Interestingly, we find that our method can naturally divide the dataset into nearly equal partitions without using label optimization. \section{Methods} \subsection{Preliminary: Deep Clustering} We first review deep clustering to illustrate the process of pseudo label generation and representation learning, from which we analyze the disadvantages of embedding clustering and dig out more room for further improvement. \subsubsection{Pseudo Label Generation.} Most self-supervised learning approaches focus on how to generate pseudo labels to drive unsupervised training. In deep clustering, this is achieved via $k$-means clustering on the embedding of all provided training images $X=x_1, x_2, ..., x_N$. In this way, the images with similar embedding representations can be assigned to the same label. Commonly, the clustering problem can be defined as to optimize cluster centroids and cluster assignments for all samples, which can be formulated as: \begin{equation} \label{label_generation} \mathop{\min}_{C\in \mathbb{R}^{d\times k}}\frac{1}{N}\sum_{n=1}^{N}\mathop{\min}_{y_n\in \{0, 1\}^{k}\,\,s.t. y_n^T\textbf{1}_k=1}\parallel C_{y_n}-f_\theta(x_n)\parallel \end{equation} where $f_\theta(\cdot)$ denotes the embedding mapping, and $\theta$ is the trainable weights of the given neural network. $C$ and $y_n$ separately denote cluster centroid matrix with shape $d\times k$ and label assignment to $n_{th}$ image in the dataset, where $d$, $k$ and $N$ separately denote the embedding dimension, cluster number and dataset size. For simplicity in the following description, $y_n$ is presented as an one-hot vector, where the non-zero entry denotes its corresponding cluster assignment. \subsubsection{Representation Learning.} After pseudo label generation, the representation learning process is exactly the same with supervised manner. To this end, a trainable linear classifier $W$ is stacked on the top of main network and optimized with $\theta$ together, which can be formulated as: \begin{equation} \label{representation_learning} \mathop{\min}_{\theta, W}\frac{1}{N}\sum_{n=1}^{N}l(y_n, Wf_{\theta}(x_n)) \end{equation} where $l$ is the loss function. Certainly, a correct label assignment is beneficial for representation learning, even approaching the supervised one. Likewise, a disentangled embedding representation will boost the clustering performance. These two steps are iteratively alternated and contribute positively to each other during optimization. 
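To make this alternation concrete, the following is a minimal PyTorch-style sketch of one DeepCluster-style epoch; it is our illustration under stated assumptions, not the original implementation. The data loader is assumed to yield \texttt{(image, index)} pairs, the classifier is assumed to be a single linear layer, and the PCA-whitening, $\ell_2$-normalisation and class-balanced sampling of \cite{caron2018deep} are omitted. Note that step 1 has to embed the entire dataset before clustering, which is precisely the memory cost analysed next.
\begin{verbatim}
import numpy as np
import torch
import torch.nn.functional as F
from sklearn.cluster import KMeans

def deepcluster_epoch(backbone, classifier, loader, optimizer, k, device):
    # --- Step 1 (Eq. 1): embed the whole dataset and run k-means --------
    backbone.eval()
    feats, idxs = [], []
    with torch.no_grad():
        for images, index in loader:           # loader yields (image, index)
            feats.append(backbone(images.to(device)).cpu().numpy())
            idxs.append(index.numpy())
    feats = np.concatenate(feats)              # memory grows with dataset size
    pseudo = np.empty(len(loader.dataset), dtype=np.int64)
    pseudo[np.concatenate(idxs)] = KMeans(n_clusters=k,
                                          n_init=1).fit_predict(feats)
    # --- Step 2 (Eq. 2): supervised-style training on the pseudo labels -
    classifier.reset_parameters()              # cluster IDs change each epoch
    backbone.train()
    for images, index in loader:
        logits = classifier(backbone(images.to(device)))
        targets = torch.from_numpy(pseudo[index.numpy()]).to(device)
        loss = F.cross_entropy(logits, targets)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
\end{verbatim}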
\subsubsection{Analysis.}
Clustering captures the global data relation, which requires storing the global latent embedding matrix $E\in \mathbb{R}^{d\times N}$ of the given dataset. Taking $k$-means as an example, it uses $E$ to iteratively compute the cluster centroids $C$. A problem naturally arises: it is difficult to scale to extremely large datasets, especially those with millions or even billions of images, since the memory footprint of $E$ grows linearly with the dataset size. Thus, an open question is: how can we group the images into clusters without explicitly using the global relation? A further, smaller problem is that the classifier $W$ has to be reinitialized after each clustering step and trained from scratch, since the cluster IDs change after every clustering, which keeps the loss curve fluctuating even at the end of training.

\subsection{Unsupervised Image Classification}
From the above section, we can see that the two steps of deep clustering (Eq.\ref{label_generation} and Eq.\ref{representation_learning}) illustrate two different ways of grouping images, namely clustering and classification. The former groups images into clusters based on the similarities among them and is usually used in unsupervised learning, while the latter learns a classification model that directly assigns each image to one of the predefined classes without seeing other images, and is usually used in supervised learning. Given the considerations discussed above, a natural question is: why not directly use the classification model to generate pseudo labels and avoid clustering altogether? In this way, pseudo label generation and representation learning are integrated into a more unified framework. Here pseudo label generation is formulated as:
\begin{equation}
\label{label_generation2}
\mathop{\min}_{y_n}\frac{1}{N}\sum_{n=1}^{N}l(y_n, f^{'}_{\theta^{'}}(x_n))\,\,\,s.t. \,\,\,y_n\in \{0, 1\}^{k},y_n^T\textbf{1}_k=1
\end{equation}
where $f^{'}_{\theta^{'}}(\cdot)$ is the network composed of $f_{\theta}(\cdot)$ and $W$. Since cross-entropy with a softmax output is the most commonly used loss function for image classification, Eq.\ref{label_generation2} can be rewritten as:
\begin{equation}
\label{label_generation3}
y_n=p(f^{'}_{\theta^{'}}(x_n))
\end{equation}
where $p(\cdot)$ is the $\arg\max$ operator that selects the non-zero entry of $y_n$. If we simply alternate Eq.\ref{label_generation3} and Eq.\ref{representation_learning} for pseudo label generation and representation learning, can the model really learn a disentangled representation? In practice, it will easily fall into a local optimum and learn less representative features. The key is data augmentation, which is at the core of many supervised and unsupervised learning algorithms. Normally, data augmentation is adopted only in the representation learning step. However, this alone does not make the task challenging enough. We therefore also adopt data augmentation in pseudo label generation: it perturbs the pseudo labels and makes the task challenging enough to learn more robust features.
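As a minimal illustration of Eq.~\ref{label_generation3}, the pseudo label of an image is simply the argmax of the classifier output and no other image is consulted; the snippet below is a sketch with illustrative names (\texttt{model} stands for $f^{'}_{\theta^{'}}$), not the implementation used in our experiments.
\begin{verbatim}
import torch

@torch.no_grad()
def classify_pseudo_labels(model, x):
    # Eq. (4): the non-zero entry of y_n is the argmax of the softmax output;
    # taking the argmax of the logits is equivalent, since softmax is monotonic.
    logits = model(x)               # shape: (batch_size, k)
    return logits.argmax(dim=1)     # pseudo class IDs
\end{verbatim}
The augmented variant actually used in training is formalized next.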
Hence, Eq.\ref{label_generation3} and Eq.\ref{representation_learning} are rewritten as:
\begin{equation}
\label{label_generation4}
y_n=p(f^{'}_{\theta^{'}}(t_1(x_n)))
\end{equation}
\begin{equation}
\label{representation_learning2}
\mathop{\min}_{\theta^{'}}\frac{1}{N}\sum_{n=1}^{N}l(y_n, f^{'}_{\theta^{'}}(t_2(x_n)))
\end{equation}
where $t_{1}(\cdot)$ and $t_{2}(\cdot)$ denote two different random transformations. For efficiency, the forward pass of label generation can reuse the forward results of representation learning from the previous epoch. The entire pipeline of our proposed framework is illustrated in Fig.\ref{pipeline}. Since our proposed method closely mirrors supervised image classification in form, we name it unsupervised image classification accordingly. Compared with deep clustering, our method is simpler and more elegant. It can be easily scaled to large datasets, since it does not need the global latent embedding of the entire dataset for image grouping. Furthermore, the classifier $W$ is optimized simultaneously with the backbone network instead of being reinitialized after each clustering step. This makes our method a truly end-to-end training framework.

\subsection{Interpretation}
\begin{figure}[tp]
\centering
\includegraphics[width=0.7\columnwidth]{./figures/cluster_vs_classify.png}
\caption{The difference and relation between embedding clustering and classification.}
\label{contact1}
\end{figure}

\subsubsection{The Relation with Embedding Clustering.}
Embedding clustering is the key component of deep clustering and mainly involves three aspects: 1) sample embedding generation, 2) the distance metric, and 3) the grouping manner (i.e., cluster centroid generation). From these aspects, using image classification to generate pseudo labels can be seen as a special variant of embedding clustering, as visualized in Fig.\ref{contact1}. In classification, the embedding is the output of the softmax layer and its dimension is exactly the number of classes; it is usually interpreted as the probability assigned to each class. As for the distance metric, whereas embedding clustering uses the Euclidean distance, classification uses cross-entropy, which can also be viewed as a distance measure. The most significant difference is the grouping manner. In $k$-means clustering, the cluster centroids are dynamically determined and iteratively updated to reduce intra-class distances and enlarge inter-class distances. Conversely, the class centroids in classification are predefined and fixed as $k$ orthonormal one-hot vectors, which allows images to be classified directly via cross-entropy. Briefly speaking, \emph{the key difference between embedding clustering and classification is whether the class centroids are dynamically determined or not}. In DeepCluster \cite{caron2018deep}, $k$-means is run for 20 iterations, while in DeeperCluster \cite{caron2019unsupervised} 10 iterations are already enough, which suggests that the clustering itself is not that important. Our method can be taken as a 1-iteration variant with fixed class centroids. Since the representations are not yet well learnt at the beginning of training, neither clustering nor classification can correctly partition the images into semantically coherent groups at that stage. We argue that it is redundant to tune both the embedding features and the class centroids during training.
It is enough to fix the class centroids as orthonormal vectors and tune only the embedding features. As representation learning, driven by learning data augmentation invariance, proceeds, images with the same semantic information move closer to the same class centroid. Moreover, compared with deep clustering, the class centroids in UIC are consistent between pseudo label generation and representation learning.

\subsubsection{The Relation with Contrastive Learning.}
Contrastive learning has recently become a popular approach to unsupervised learning. Implicitly, unsupervised image classification can also be connected to contrastive learning to explain why it works. Although Eq.\ref{label_generation4} for pseudo label generation and Eq.\ref{representation_learning2} for representation learning are carried out in turn, we can merge Eq.\ref{label_generation4} into Eq.\ref{representation_learning2} and obtain:
\begin{equation}
\label{contrastive learning}
\mathop{\min}_{\theta^{'}}\frac{1}{N}\sum_{n=1}^{N}l(p(f^{'}_{\theta^{'}}(t_1(x_n))), f^{'}_{\theta^{'}}(t_2(x_n)))
\end{equation}
which is optimized to maximize the mutual information between the representations of different transformations of the same image and to learn data-augmentation-agnostic features. This is a basic formulation used in many contrastive learning methods. More concretely, our method uses one random view of an image to select its nearest class centroid, namely the positive class, by taking the argmax of the softmax scores. During optimization, we push the representation of another random view of the image closer to its positive class; implicitly, the remaining $k-1$ orthonormal classes automatically act as negative classes. Since we use cross-entropy with softmax as the loss function, the representation is simultaneously pushed away from the negative classes. Intuitively, this may be a more appropriate way to generate negative samples: in standard contrastive learning methods, given an image $I$ in a (large) minibatch, the other images in the minibatch are treated as negative samples, but some of these negatives may share the same semantic information as $I$.

\section{Experimental Results}
\subsection{Dataset Benchmarks and Network Architectures}
We mainly apply our proposed unsupervised image classification to the ImageNet dataset \cite{russakovsky2015imagenet} without annotations, which is designed for 1000-category image classification and consists of 1.28 million images. As for network architectures, we select the most representative one in unsupervised representation learning, AlexNet \cite{krizhevsky2012imagenet}, as our baseline model for performance analysis and comparison. It is composed of five convolutional layers for feature extraction and three fully-connected layers for classification. Note that the Local Response Normalization layers are replaced by batch normalization layers. After unsupervised training, the performance is mainly evaluated by
\begin{itemize}
\item linear probes;
\item transfer learning on downstream tasks.
\end{itemize}
Linear probing \cite{zhang2017split} has been a standard evaluation protocol followed by many related works. It quantitatively evaluates the representations produced by different convolutional layers by freezing the convolutional layers (and batch normalization layers) up to a given depth and training a linear classifier on top of them using the annotated labels.
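To make the protocol concrete, the sketch below shows the linear-probing procedure just described: freeze the pretrained layers and train only a linear classifier on the annotated labels. Function names, dimensions and hyperparameters are placeholders, not the exact settings reported later.
\begin{verbatim}
import torch
import torch.nn as nn

def linear_probe(frozen_features, feat_dim, num_classes, loader, epochs=32):
    # Freeze the pretrained convolutional (and BN) layers and train only a
    # linear classifier on top of them using the annotated labels.
    frozen_features.eval()
    for p in frozen_features.parameters():
        p.requires_grad = False
    classifier = nn.Linear(feat_dim, num_classes)
    optimizer = torch.optim.SGD(classifier.parameters(), lr=0.1, momentum=0.9)
    criterion = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:          # y: annotated labels, used only here
            with torch.no_grad():
                feats = frozen_features(x).flatten(1)
            loss = criterion(classifier(feats), y)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return classifier
\end{verbatim}
The same classifier-on-frozen-features recipe is applied separately on top of each convolutional block (conv1--conv5) in the experiments below.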
For linear probing, we conduct experiments on ImageNet with its annotated labels. Linear probing directly evaluates the features learnt by unsupervised learning while keeping the feature extractor fixed. Compared with this protocol, transfer learning on downstream tasks is closer to practical scenarios. Following existing works, we transfer the model pretrained on ImageNet without supervision to the PASCAL VOC dataset \cite{Everingham2015the} for multi-label image classification, object detection and semantic segmentation via fine-tuning. To avoid the performance gap caused by hyperparameter differences during fine-tuning, we additionally evaluate the representations by metric-based few-shot classification on \emph{mini}ImageNet \cite{vinyals2016matching} without fine-tuning.

\subsection{Unsupervised Image Classification}
\begin{table}[tp]
\tabcolsep=2pt
\begin{floatrow}
\begin{minipage}{0.5\linewidth}
\centering
\begin{floatrow}
\ttabbox{\caption{Ablation study on the class number. We also report NMI t/labels, the NMI between pseudo labels and annotated labels. FFT means further fine-tuning with fixed label assignments.}}{%
\begin{tabular}[t]{lcccc}
\toprule[2pt]
\multirow{2}{*}{Methods}& \multicolumn{3}{c}{Top1 Accuracy} & \multirow{2}{*}{NMI t/labels}\\
\cline{2-4}
&conv3&conv4&conv5&\\
\hline
UIC 3k &41.2&41.0&38.1& 38.5\\
UIC 5k &40.6&40.9&38.2& 40.8\\
UIC 10k &40.6&40.8&37.9&42.6\\
UIC 3k (FFT)& 41.6 &41.5 &39.0 &-\\
\bottomrule[2pt]
\label{table_class_number}
\end{tabular}}
\end{floatrow}
\end{minipage}
\begin{minipage}{0.5\linewidth}
\centering
\ttabbox{\caption{Ablation study on whether data augmentation is adopted in pseudo label generation.}}{
\begin{tabular}[t]{lcccc}
\toprule[2pt]
\multirow{2}{*}{Methods}&\multirow{2}{*}{Aug}& \multicolumn{3}{c}{Top1 Accuracy}\\
\cline{3-5}
&& conv3 & conv4 & conv5\\
\hline
UIC 3k &$\times$&39.5&39.9&37.9\\
UIC 3k &$\surd$&41.6&41.5&39.0\\
\bottomrule[2pt]
\label{table_augmentation}
\end{tabular}}
\end{minipage}
\end{floatrow}
\end{table}

\subsubsection{Implementation Details.}
Similar to DeepCluster, two implementation details of unsupervised image classification have to be highlighted: 1) avoiding empty classes and 2) class-balanced sampling. At the beginning of training, due to the random initialization of the network parameters, some classes inevitably receive zero samples. To avoid a trivial solution, we must avoid empty classes: whenever a class receives zero samples, we split the class with the most samples into two equal partitions and assign one of them to the empty class. We observe that empty classes only occur at the beginning of training. As for class-balanced sampling, this technique is also used in supervised training to prevent the solution from being biased towards the classes with the most samples.

\subsubsection{Optimization Settings.}
We optimize AlexNet for 500 epochs with the SGD optimizer, using a batch size of 256, momentum of 0.9, weight decay of 1e-4, a drop-out ratio of 0.5 and a learning rate of 0.1 decayed linearly. Analogous to DeepCluster, we apply a Sobel filter to the input images to remove color information. In both pseudo label generation and representation learning, we adopt randomly resized cropping and horizontal flipping to augment the input data. Compared with standard supervised training, the optimization settings are exactly the same except for one extra hyperparameter, the class number.
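Putting the preceding pieces together, the sketch below outlines one training epoch of unsupervised image classification under these settings. It is a simplified illustration, assuming a data loader that yields image indices together with freshly augmented views and a label buffer initialized by an initial forward pass; it is not the exact training code.
\begin{verbatim}
import torch
import torch.nn as nn

def uic_epoch(model, loader, prev_labels, optimizer):
    # `loader` yields (idx, x_aug), where x_aug is a freshly augmented view of
    # image idx. Because the augmentation is re-sampled every epoch, the view
    # that generated a pseudo label (previous epoch, t1) differs from the view
    # it supervises (current epoch, t2), matching Eq. (5)-(6).
    # `prev_labels` is a LongTensor of length N holding last epoch's labels.
    criterion = nn.CrossEntropyLoss()
    new_labels = prev_labels.clone()
    model.train()
    for idx, x_aug in loader:
        logits = model(x_aug)                        # Eq. (6): representation learning
        loss = criterion(logits, prev_labels[idx])   # labels from the previous epoch
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        # Reuse this forward pass as the pseudo labels of the next epoch
        # (Eq. (5)), so no extra label-generation pass is needed.
        new_labels[idx] = logits.detach().argmax(dim=1)
    # Empty-class handling and class-balanced sampling, as described in the
    # implementation details above, post-process `new_labels` before the
    # next epoch starts.
    return new_labels
\end{verbatim}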
Since over-clustering has become a consensus for clustering-based methods, we only conduct an ablation study on the class number, covering 3k, 5k and 10k classes.
\begin{figure}[tp]
\centering
\includegraphics[width=0.7\columnwidth]{./figures/class_distribution2.png}
\caption{Nearly uniform distribution of the number of images assigned to each class.}
\label{image_number}
\end{figure}
\begin{figure}[tp]
\centering
\includegraphics[width=0.3\columnwidth]{./figures/visualized.png}
\caption{Visualization of the classification results with low entropy.}
\label{vis}
\end{figure}

\subsubsection{Evaluation via Normalized Mutual Information.}
Normalized mutual information (NMI) is the main metric for evaluating the classification results; it lies in the interval between 0 and 1, and an NMI approaching 1 means that two label assignments are strongly coherent. Since annotated labels are unavailable in practical scenarios, we do not use them to tune the hyperparameters. However, when annotated labels are given, we can use the NMI between the label assignment and the annotated labels (NMI t/labels) to evaluate the classification results after training. As shown in the fifth column of Tab.\ref{table_class_number}, when the class number is 10k, the NMI t/labels is comparable with that of DeepCluster (cf. Fig.2(a) in \cite{caron2018deep}), which means that the performance of our proposed unsupervised image classification approaches that of DeepCluster even without explicit embedding clustering. However, a larger class number tends to yield a higher NMI t/labels, so this metric cannot be used directly to compare performance across different class numbers.

\subsubsection{Evaluation via Visualization.}
At the end of training, we count the number of images assigned to each class. As shown in Fig.\ref{image_number}, our classification model divides the images in the dataset into nearly equal partitions. This is an interesting finding: in \cite{asano2019self-labelling}, this result is achieved via label optimization solved by the \emph{Sinkhorn-Knopp algorithm}, whereas our method achieves the same result without label optimization. We infer that training with class-balanced sampling implicitly biases the label assignment towards a uniform distribution. Furthermore, we visualize the classification results in Fig.\ref{vis}. Our method classifies images with similar semantic information into the same class.

\subsection{Linear Classification on Activations}
\begin{table}[tp]
\begin{floatrow}
\newcommand{\tabincell}[2]{\begin{tabular}{@{}#1@{}}#2\end{tabular}}
\centering
\label{linearprobing}
\caption{Linear probing evaluation on ImageNet. We mainly compare the performance of our method with DeepCluster. For reference, we also list the results of other methods.
}
\begin{tabular}{lccccc}
\toprule[1pt]
\multirow{2}{*}{Methods}& \multicolumn{5}{c}{ImageNet}\\
\cline{2-6}
&conv1&conv2&conv3&conv4&conv5\\
\hline
ImageNet labels &19.3&36.3&44.2&48.3&50.5\\
Random&11.6&17.1&16.9&16.3&14.1\\
\hline
DeepCluster \cite{caron2018deep}&13.4&32.3&41.0&39.6&38.2\\
SelfLabel $3k\times1$ \cite{asano2019self-labelling}&-&-&43.0&44.7&40.9\\
SelfLabel $3k\times10$ \cite{asano2019self-labelling}&22.5&37.4&44.7&47.1&44.1\\
\textbf{Ours} & 12.8 & 34.3 & 41.6 & 41.5 & 39.0\\
\bottomrule[1pt]
\multicolumn{6}{c}{Other self-supervised learning methods, for reference}\\
\toprule[1pt]
Context \cite{doersch2015unsupervised} & 16.2 & 23.3 & 30.2 & 31.7 & 29.6\\
BiGAN \cite{donahue2017adversarial} & 17.7&24.5&31.0&29.9&28.0\\
Split-brain \cite{zhang2017split} & 17.7 & 29.3 & 35.4 & 35.2&32.8\\
Jigsaw puzzle \cite{noroozi2016unsupervised} & 18.2 & 28.8 & 34.0 & 33.9&27.1\\
RotNet \cite{gidaris2018unsupervised} &18.8&31.7&38.7&38.2&36.5\\
AND \cite{huang2019unsupervised} & 15.6&27.0&35.9&39.7&37.9\\
AET \cite{zhang2019aet} & 19.3&35.4&44.0&43.6&42.4\\
RotNet+retrieval \cite{feng2019self} & 22.2&38.2&45.7&48.7&48.3\\
\bottomrule[1pt]
\label{linearProbes}
\end{tabular}
\end{floatrow}
\end{table}

\subsubsection{Optimization Settings.}
We use linear probing for a more quantitative evaluation. Following \cite{zhang2017split}, we use max-pooling to reduce the activation dimensions to 9600, 9216, 9600, 9600 and 9216 (conv1-conv5), respectively. With the feature extractor frozen, we train only the inserted linear layers. The linear layers are trained for 32 epochs with zero weight decay and a learning rate of 0.1, divided by ten at epochs 10, 20 and 30. The shorter side of each image is resized to 256 pixels, and we then use 224$\times$224 random crops as well as horizontal flipping to train the linear layers. After training, the accuracy is measured with 10 crops (center crop and four corner crops, each with horizontal flipping).

\subsubsection{Ablation Study on Class Number Selection.}
We conduct an ablation study on the class number, as shown in Tab.\ref{table_class_number}. Different from DeepCluster, the performance with 3k classes is slightly better than with 5k or 10k, which is also confirmed by \cite{asano2019self-labelling}.

\subsubsection{Further Fine-Tuning.}
During training, the label assignment changes every epoch. We fix the label assignment of the last epoch, using center-crop inference for pseudo label generation, and further fine-tune the network for 30 epochs. As shown in Tab.\ref{table_class_number}, the performance can be further improved.

\subsubsection{Ablation Study on Data Augmentation.}
Data augmentation plays an important role in clustering-based self-supervised learning: at the beginning of training, the features are not yet well learnt and the pseudo labels are mostly wrong, so representation learning is mainly driven by learning data augmentation invariance. In this paper, we also use data augmentation in pseudo label generation; as shown in Tab.\ref{table_augmentation}, this improves performance. We simply adopt randomly resized cropping to augment the data in both pseudo label generation and representation learning.

\subsubsection{Comparison with Other State-of-The-Art Methods.}
Since our method aims at simplifying DeepCluster by discarding clustering, we mainly compare our results with DeepCluster.
As shown in Tab.\ref{linearProbes}, our performance is comparable with that of DeepCluster, which validates that the clustering operation can be replaced by more challenging data augmentation. This is also corroborated by the NMI t/labels reported above. SelfLabel [$3k\times1$] simulates clustering via label optimization, which partitions the data into equally sized classes. However, as discussed above for Fig.\ref{image_number}, our framework also divides the dataset into nearly equal partitions without the complicated label optimization term. Therefore, our framework should in principle achieve results comparable with SelfLabel [$3k\times1$], and we attribute the remaining gap to their extra augmentation. With strong augmentation, ours surpasses SelfLabel, as shown in Tab.6. Compared with other self-supervised learning methods that use only a single type of supervisory signal, our method surpasses most of them. We believe our framework can serve as a strong baseline for self-supervised learning and can be further boosted when combined with other supervisory signals, which we will validate in future work.
\begin{table}[tp]
\newcommand{\tabincell}[2]{\begin{tabular}{@{}#1@{}}#2\end{tabular}}
\centering
\caption{Transfer of the pretrained model to downstream tasks on the PASCAL VOC dataset.}
\label{downstreamtask}
\begin{tabular}{lcccc}
\toprule[2pt]
\multirow{3}{*}{Methods}& \multicolumn{2}{c}{Classification} & \multicolumn{1}{c}{Detection} & \multicolumn{1}{c}{Segmentation} \\
&\multicolumn{2}{c}{(\%mAP)}&(\%mAP)&(\%mIU)\\
\cline{2-5}
& FC6-8 & ALL & ALL & ALL \\
\hline
ImageNet Labels&78.9&79.9&56.8&48.0\\
Random-RGB&33.2&57.0&44.5&30.1\\
Random-Sobel&29.0&61.9&47.9&32.0\\
\hline
DeepCluster \cite{caron2018deep}&72.0&73.7&55.4&45.1\\
SelfLabel $3k\times10$ \cite{asano2019self-labelling} & - & 75.3 & 55.9 & 43.7\\
\textbf{Ours} & 76.2 & 75.9 & 54.9 & 45.9 \\
\bottomrule[2pt]
\multicolumn{5}{c}{Other self-supervised methods, for reference}\\
\toprule[2pt]
BiGAN \cite{donahue2017adversarial}& 52.5 & 60.3 & 46.9 & 35.2 \\
Context \cite{doersch2015unsupervised} & 55.1 & 63.1 & 51.1 & - \\
Split-brain \cite{zhang2017split} & 63.0 & 67.1&46.7&36.0\\
Jigsaw puzzle \cite{noroozi2016unsupervised} & - & 67.6&53.2&37.6\\
RotNet \cite{gidaris2018unsupervised}& 70.87 & 72.97 & 54.4 & 39.1 \\
RotNet+retrieval \cite{feng2019self} & -&74.7&58.0&45.9\\
\bottomrule[2pt]
\label{table_downstream_tasks}
\end{tabular}
\end{table}

\subsection{Transfer to Downstream Tasks}
\begin{table}[tp]
\newcommand{\tabincell}[2]{\begin{tabular}{@{}#1@{}}#2\end{tabular}}
\centering
\caption{Evaluation via few-shot classification on the test set of \emph{mini}ImageNet. Note that the 224 resolution is obtained by center-cropping from 256, which is upsampled from the 84-pixel low-resolution images; this can be regarded as inserting an upsampling layer at the bottom of the network while the input is still 84$\times$84. MP is short for max-pooling.
For reference, the 5-way 5-shot accuracy of prototypical networks \cite{snell2017prototypical} trained in a supervised manner is 68.2\%.}
\begin{tabular}{lccccc}
\toprule[1pt]
\multirow{2}{*}{Methods} & \multirow{2}{*}{Resolution} & \multicolumn{4}{c}{5-way 5-shot accuracy}\\
\cline{3-6}
&&conv3 & conv4 & conv5 & conv5+MP\\
\hline
UIC 3k & 224$\times$224 & 48.79 & 53.03 & 62.46 & 65.05\\
DeepCluster & 224$\times$224 & 51.33 & 54.42 & 60.32 & 65.04\\
UIC 3k & 84$\times$84 & 52.43 & 54.76 & 54.40 & 52.85\\
DeepCluster & 84$\times$84 & 53.46 & 54.87 & 49.81 & 50.18\\
\bottomrule[1pt]
\end{tabular}
\label{fewshot2}
\end{table}

\subsubsection{Evaluation via Fine-Tuning: Multi-label Image Classification, Object Detection, Semantic Segmentation on PASCAL VOC.}
In practical scenarios, self-supervised learning is usually used to provide a well-pretrained model that boosts the representations for downstream tasks. Following other works, the representation learnt by our proposed method is also evaluated by fine-tuning the models on the PASCAL VOC dataset. Specifically, we run the object detection task with the Fast R-CNN \cite{girshick2015fast} framework and the semantic segmentation task with the FCN \cite{long2015fully} framework. As shown in Tab.\ref{table_downstream_tasks}, our performance is comparable with that of other clustering-based methods and surpasses most other SSL methods.

\subsubsection{Evaluation without Fine-Tuning: Metric-based Few-shot Image Classification on \emph{mini}ImageNet.}
Few-shot classification \cite{vinyals2016matching,snell2017prototypical} is a natural protocol for representation evaluation, since it can directly use unsupervised pretrained models for feature extraction and apply metric-based methods for few-shot classification without any fine-tuning, thereby avoiding the performance gap introduced by fine-tuning tricks. In this paper, we use Prototypical Networks \cite{snell2017prototypical} for representation evaluation on the test set of \emph{mini}ImageNet. As shown in Tab.\ref{fewshot2}, our method is comparable with DeepCluster overall; in particular, our performance in the highest layers is better than that of DeepCluster.

\section{More Experiments}
In the above sections, we keep the training settings the same as DeepCluster for a fair comparison. Although achieving state-of-the-art results is not the main goal of this work, we further improve our results by combining training tricks proposed by other methods.

\subsection{More Data Augmentations}
As discussed above, the data augmentation used in pseudo label generation and network training plays a very important role in representation learning. Recently, SimCLR \cite{chen2020a} spent considerable computational resources on a thorough ablation study of data augmentation, and used strong color jittering and random Gaussian blur to boost performance. We find that such strong augmentation also benefits our method, as shown in Tab.6. Our conv5 result with strong augmentation surpasses DeepCluster and SelfLabel by a large margin and is comparable with SelfLabel with 10 heads. Note that the results in this section do not use further fine-tuning.

\subsection{More Network Architectures}
To further convince the reader, we supplement experiments with ResNet50 (500 epochs) using the strong data augmentation and the extra MLP head proposed by SimCLR \cite{chen2020a} (we keep, rather than discard, the MLP head during linear probing).
As shown in Tab.7, our method surpasses SelfLabel and achieves state-of-the-art results among non-contrastive-learning methods. Although there is still a performance gap with SimCLR and MoCo v2 (which are trained for far more than 500 epochs), our method is the simplest among them, and we believe further improvements can be obtained by applying more useful tricks.
\begin{table}[tp]
\begin{floatrow}
\newcommand{\tabincell}[2]{\begin{tabular}{@{}#1@{}}#2\end{tabular}}
\centering
\label{withmoreaugmentations2}
\caption{More experimental results with stronger data augmentation.}
\begin{tabular}{llcccc}
\toprule[1pt]
\multirow{2}{*}{Methods}&\multirow{2}{*}{Arch}&\multicolumn{4}{c}{ImageNet}\\
\cline{3-6}
&&conv3&conv4&conv5&NMI t/labels\\
\hline
DeepCluster \cite{caron2018deep}&AlexNet&41.0&39.6&38.2&-\\
SelfLabel $3k\times1$ \cite{asano2019self-labelling}&AlexNet&43.0&44.7&40.9&-\\
SelfLabel $3k\times10$ \cite{asano2019self-labelling}&AlexNet+10heads&44.7&47.1&44.1&-\\
UIC (Ours) & AlexNet & 41.6 & 41.5 & 39.0 & 38.5\\
UIC + strong aug (Ours) & AlexNet & 43.5 & 45.6 & 44.3 & 40.0\\
\bottomrule[1pt]
\end{tabular}
\end{floatrow}
\end{table}
\begin{table}[tp]
\begin{floatrow}
\newcommand{\tabincell}[2]{\begin{tabular}{@{}#1@{}}#2\end{tabular}}
\centering
\label{withmorearchitectures2}
\caption{More experimental results with more network architectures.}
\begin{tabular}{llll}
\toprule[1pt]
Methods&Arch&Top-1&NMI t/labels\\
\hline
Jigsaw \cite{kolesnikov2019revisiting}&Res50&38.4&-\\
Rotation \cite{kolesnikov2019revisiting}&Res50&43.8&-\\
InstDisc \cite{wu2018unsupervised}&Res50&54.0&-\\
BigBiGAN \cite{donahue2019large}&Res50&56.6&-\\
Local Agg. \cite{zhuang2019local}&Res50&60.2&-\\
MoCo \cite{he2019momentum}&Res50&60.6&-\\
PIRL \cite{misra2019self-supervised}&Res50&63.6&-\\
CPCv2 \cite{henaff2019data-efficient}&Res50&63.8&-\\
SimCLR \cite{chen2020a}&Res50 + MLP-head&69.3&-\\
MoCo v2 \cite{chen2020improved}&Res50 + MLP-head&71.1&-\\
SelfLabel $3k\times10$ \cite{asano2019self-labelling}&Res50+10heads&61.5&-\\
UIC + strong aug (Ours) & VGG16 & 57.7 & 46.9\\
UIC + strong aug (Ours) & Res50 & 62.7 & 50.6\\
UIC + strong aug (Ours) & Res50 + MLP-head & 64.4 & 53.3\\
\bottomrule[1pt]
\end{tabular}
\end{floatrow}
\end{table}

\section{Conclusions}
We believe that the greatest truths are the simplest. Our method validates that embedding clustering is not the main reason why DeepCluster works, and it makes training an SSL model as easy as training a supervised image classification model. It can therefore be adopted as a strong prototype for developing more advanced unsupervised learning approaches, and we hope it makes SSL more accessible to the community and beneficial to academic development.

\bibliographystyle{splncs04}
\bibliography{egbib}
\end{document}
https://openreview.net/forum?id=PnuDpxJvR0q
https://arxiv.org/abs/2006.11480
Please evaluate the paper based on its clarity, contribution to the field, and the strength of its results.
Simple and effective method for unsupervised feature learning #### 1. [Summary] In 2-3 sentences, describe the key ideas, experiments, and their significance. The paper proposes a simple method for unsupervised deep clustering by iteratively (1) generating pseudo-labels by performing a forward pass through a CNN and (2) training the CNN using the generated pseudo-labels. The method is evaluated on ImageNet and performs competitively with other unsupervised learning methods. #### 2. [Strengths] What are the strengths of the paper? Clearly explain why these aspects of the paper are valuable. * The method is surprisingly simple and seems to perform competitively with clustering in latent space, which is much more computationally expensive. * The paper is well written and easy to understand. * The performed experiments are sound and demonstrate the effectiveness of the method. #### 3. [Weaknesses] What are the weaknesses of the paper? Clearly explain why these aspects of the paper are weak. * The bold numbers in Table 3 are rather misleading as they do not actually denote the best performance. #### 4. [Overall rating] Paper rating * 8: Top 50% of accepted papers, clear accept #### 5. [Justification of rating] Please explain how the strengths and weaknesses aforementioned were weighed in for the rating. The method is simple and effective and the paper is well written. Rating: 8: Top 50% of accepted papers, clear accept Confidence: 4: The reviewer is confident but not absolutely certain that the evaluation is correct
\documentclass[runningheads]{llncs}
\usepackage{graphicx}
\usepackage{comment}
\usepackage{amsmath,amssymb} %
\usepackage{color}
\usepackage{multirow}
\usepackage{booktabs}
\usepackage{floatrow}
\floatsetup[table]{capposition=top}
\floatsetup[figure]{capposition=bottom}
\newfloatcommand{capbtabbox}{table}[][\FBwidth]
\usepackage{subfigure}
\begin{document}
\pagestyle{headings}
\mainmatter
\def\ECCVSubNumber{Anonymous} %
\title{Unsupervised Image Classification for Deep Representation Learning} %
\titlerunning{Unsupervised Image Classification}
\author{Weijie Chen\inst{1} \and Shiliang Pu\inst{1}\thanks{Corresponding Author} \and Di Xie\inst{1}\and Shicai Yang\inst{1}\and Yilu Guo\inst{1}\and Luojun Lin\inst{2}}
\authorrunning{Chen et al.}
\institute{Hikvision Research Institute, Hangzhou, China \\
\email{\{chenweijie5, pushiliang.hri, xiedi, yangshicai, guoyilu5\}@hikvision.com}\\
School of Electronic and Information Engineering, South China University of Technology\\
\email{linluojun2009@126.com}}
\maketitle

\begin{abstract}
Deep clustering, compared with other self-supervised learning (SSL) approaches, is an important and promising direction for unsupervised visual representation learning, since it requires little domain knowledge to design pretext tasks. However, its key component, embedding clustering, limits its extension to extremely large-scale datasets, because it requires storing the global latent embedding of the entire dataset. In this work, we aim to make this framework simpler and more elegant without a performance decline. We propose an unsupervised image classification framework that does not use embedding clustering and closely resembles standard supervised training. For a detailed interpretation, we further analyze its relation with deep clustering and contrastive learning. Extensive experiments on the ImageNet dataset demonstrate the effectiveness of our method. Furthermore, experiments on transfer learning benchmarks verify its generalization to downstream tasks, including multi-label image classification, object detection, semantic segmentation and few-shot image classification.
\keywords{Unsupervised Learning, Representation Learning}
\end{abstract}

\section{Introduction}
Convolutional neural networks (CNNs) \cite{he2016deep,huang2017densely,chen2019all} have been applied to many computer vision applications \cite{girshick2015fast,long2015fully,lin2019attribute} due to their powerful representational capacity. The usual workflow is to pretrain the networks on a very large annotated dataset such as ImageNet \cite{russakovsky2015imagenet} and then transfer them to smaller datasets via fine-tuning. However, collecting and manually labelling such pre-training datasets is highly resource-consuming, which has drawn many researchers to develop unsupervised representation learning approaches. Among existing unsupervised learning methods, self-supervision is appealing since it can generate supervisory signals directly from the input images, as in image inpainting \cite{doersch2015unsupervised,pathak2016context} and jigsaw puzzle solving \cite{noroozi2016unsupervised}. However, it requires rich empirical domain knowledge to design pretext tasks and often does not transfer well to downstream tasks. Compared with such self-supervised approaches, DeepCluster is a simple yet effective method that involves little domain knowledge.
It simply adopts embedding clustering to generate pseudo labels by capturing the manifold and mining the relations among all data points in the dataset. This process is iteratively alternated with end-to-end representation learning, which is exactly the same as the supervised one. However, along with the advantage brought by embedding clustering comes an obvious defect: the latent embedding of every data point in the dataset must be stored before clustering, which leads to extra memory consumption that grows linearly with the dataset size and makes it difficult to scale to very large datasets. This problem also appears in DeeperCluster \cite{caron2019unsupervised}, which uses distributed $k$-means to ease it, but it is not solved in essence. Moreover, the data points in most datasets are usually assumed to be independently and identically distributed (\emph{i.i.d}). Therefore, building a framework analogous to DeepCluster, we ask whether we can directly generate a pseudo class ID for each image without explicitly seeing other images, and treat this as an image classification task for representation learning.
\begin{figure}[tp]
\centering
\includegraphics[width=1.0\columnwidth]{./figures/pipeline2.png}
\caption{The pipeline of unsupervised image classification. The black and red arrows denote the processes of pseudo-label generation and representation learning, respectively. These two processes are alternated iteratively. For an efficient implementation, the pseudo labels in the current epoch are taken from the forward results of the previous epoch, which makes our training framework twice as fast as DeepCluster.}
\label{pipeline}
\end{figure}
The answer is yes. We integrate both pseudo label generation and representation learning into a unified image classification framework. Briefly, during pseudo label generation we directly feed each input image into the classification model with a softmax output and pick the class ID with the highest softmax score as the pseudo label, which is very similar to the inference phase of supervised image classification. After the pseudo class IDs are generated, the representation learning step is exactly the same as in supervised training. These two steps are iteratively alternated until convergence. A natural concern is whether such an unsupervised training method will be easily trapped in a local optimum, and whether it can generalize well to other downstream tasks. In supervised training, this problem is usually addressed by data augmentation, which can also be applied to our framework. Notably, we adopt data augmentation not only in representation learning but also in pseudo label generation; this perturbs the label assignment and makes the task challenging enough to learn data-augmentation-agnostic features. The entire pipeline is shown in Fig.\ref{pipeline}. To the best of our knowledge, among existing works this unsupervised framework is the closest to the supervised one. Since it is very similar to supervised image classification, we correspondingly name our method \emph{Unsupervised Image Classification} (UIC). For simplicity, unless otherwise specified, \emph{clustering} in this paper refers to embedding clustering via $k$-means, and \emph{classification} refers to a CNN-based classification model with a cross-entropy loss function.
To further explain why UIC works, we analyze its hidden relation with both deep clustering and contrastive learning. We point out that UIC can be considered as a special variant of them. We hope our work can bring a deeper understanding of deep clustering series work to the self-supervision community. We empirically validate the effectiveness of UIC by extensive experiments on ImageNet. The visualization of classification results shows that UIC can act as clustering although lacking explicit clustering. We also validate its generalization ability by the experiments on transfer learning benchmarks. All these experiments indicate that UIC can work comparable with deep clustering. To summarize, our main contributions are listed as follows: \begin{itemize} \item A simple yet effective unsupervised image classification framework is proposed for visual representation learning, which can be taken as a strong prototype to develop more advanced unsupervised learning methods. \item Our framework simplifies DeepCluster by discarding embedding clustering while keeping no performance degradation and surpassing most of other self-supervised learning methods. We demonstrate that embedding clustering is not the main reason why DeepCluster works. \item Our training framework is twice faster than DeepCluster since we do not need an extra forward pass to generate pseudo labels. \end{itemize} \section{Related Work} \subsection{Self-supervised learning} Self-supervised learning is a major form of unsupervised learning, which defines pretext tasks to train the neural networks without human-annotation, including image inpainting \cite{doersch2015unsupervised,pathak2016context}, automatic colorization \cite{larsson2016learning,zhang2016colorful}, rotation prediction \cite{gidaris2018unsupervised}, cross-channel prediction \cite{zhang2017split}, image patch order prediction \cite{noroozi2016unsupervised}, and so on. These pretext tasks are designed by directly generating supervisory signals from the raw images without manually labeling, and aim to learn well-pretrained representations for downstream tasks, like image classification, object detection, and semantic segmentation. Recently, contrastive learning \cite{tian2019contrastive,he2019momentum,hjelm2018learning,oord2018representation} is developed to improve the performance of self-supervised learning. Its corresponding pretext task is that the features encoded from multi-views of the same image are similar to each others. The core insight behind these methods is to learn multi-views invariant representations. This is also the essence of our proposed method. \subsection{Clustering-based methods} Clustering-based methods are mostly related to our proposed method. Coates et al. \cite{coates2012learning} is the first to pretrain CNNs via clustering in a layer-by-layer manner. The following works \cite{yang2016joint,xie2016unsupervised,liao2016learning,caron2018deep} are also motivated to jointly cluster images and learn visual features. Among them, DeepCluster \cite{caron2018deep} is one of the most representative methods in recent years, which applies $k$-means clustering to the encoded features of all data points and generates pseudo labels to drive an end-to-end training of the target neural networks. The embedding clustering and representation learning are iterated by turns and contributed to each other along with training. 
Compared with other SSL methods with fixed pseudo labels, this kind of works not only learn good features but also learn meaningful pseudo labels. However, as a prerequisite for embedding clustering, it has to save the latent features of each sample in the entire dataset to depict the global data relation, which leads to excessive memory consumption and constrains its extension to the very large-scale datasets. Although another work DeeperCluster \cite{caron2019unsupervised} proposes distributed $k$-means to ease this problem, it is still not efficient and elegant enough. Another work SelfLabel \cite{asano2019self-labelling} treats clustering as a complicated optimal transport problem. It proposes label optimization as a regularized term to the entire dataset to simulate clustering with the hypothesis that the generated pseudo labels should partition the dataset equally. However, it is hypothesized and not an \emph{i.i.d} solution. Interestingly, we find that our method can naturally divide the dataset into nearly equal partitions without using label optimization. \section{Methods} \subsection{Preliminary: Deep Clustering} We first review deep clustering to illustrate the process of pseudo label generation and representation learning, from which we analyze the disadvantages of embedding clustering and dig out more room for further improvement. \subsubsection{Pseudo Label Generation.} Most self-supervised learning approaches focus on how to generate pseudo labels to drive unsupervised training. In deep clustering, this is achieved via $k$-means clustering on the embedding of all provided training images $X=x_1, x_2, ..., x_N$. In this way, the images with similar embedding representations can be assigned to the same label. Commonly, the clustering problem can be defined as to optimize cluster centroids and cluster assignments for all samples, which can be formulated as: \begin{equation} \label{label_generation} \mathop{\min}_{C\in \mathbb{R}^{d\times k}}\frac{1}{N}\sum_{n=1}^{N}\mathop{\min}_{y_n\in \{0, 1\}^{k}\,\,s.t. y_n^T\textbf{1}_k=1}\parallel C_{y_n}-f_\theta(x_n)\parallel \end{equation} where $f_\theta(\cdot)$ denotes the embedding mapping, and $\theta$ is the trainable weights of the given neural network. $C$ and $y_n$ separately denote cluster centroid matrix with shape $d\times k$ and label assignment to $n_{th}$ image in the dataset, where $d$, $k$ and $N$ separately denote the embedding dimension, cluster number and dataset size. For simplicity in the following description, $y_n$ is presented as an one-hot vector, where the non-zero entry denotes its corresponding cluster assignment. \subsubsection{Representation Learning.} After pseudo label generation, the representation learning process is exactly the same with supervised manner. To this end, a trainable linear classifier $W$ is stacked on the top of main network and optimized with $\theta$ together, which can be formulated as: \begin{equation} \label{representation_learning} \mathop{\min}_{\theta, W}\frac{1}{N}\sum_{n=1}^{N}l(y_n, Wf_{\theta}(x_n)) \end{equation} where $l$ is the loss function. Certainly, a correct label assignment is beneficial for representation learning, even approaching the supervised one. Likewise, a disentangled embedding representation will boost the clustering performance. These two steps are iteratively alternated and contribute positively to each other during optimization. 
\subsubsection{Analysis.} Actually, clustering is to capture the global data relation, which requires to save the global latent embedding matrix $E\in \mathbb{R}^{d\times N}$ of the given dataset. Taking $k$-means as an example, it uses $E$ to iteratively compute the cluster centroids $C$. Here naturally comes a problem. It is difficult to scale to the extremely large datasets especially for those with millions or even billions of images since the memory of $E$ is linearly related to the dataset size. Thus, an existing question is, how can we group the images into several clusters without explicitly using global relation? Also, another slight problem is, the classifier $W$ has to reinitialize after each clustering and train from scratch, since the cluster IDs are changeable all the time, which makes the loss curve fluctuated all the time even at the end of training. \subsection{Unsupervised Image Classification} From the above section, we can find that the two steps in deep clustering (Eq.\ref{label_generation} and Eq.\ref{representation_learning}) actually illustrate two different manners for images grouping, namely clustering and classification. The former one groups images into clusters relying on the similarities among them, which is usually used in unsupervised learning. While the latter one learns a classification model and then directly classifies them into one of pre-defined classes without seeing other images, which is usually used in supervised learning. For the considerations discussed in the above section, we can't help to ask, why not directly use classification model to generate pseudo labels to avoid clustering? In this way, it can integrate these two steps pseudo label generation and representation learning into a more unified framework. Here pseudo label generation is formulated as: \begin{equation} \label{label_generation2} \mathop{\min}_{y_n}\frac{1}{N}\sum_{n=1}^{N}l(y_n, f^{'}_{\theta^{'}}(x_n))\,\,\,s.t. \,\,\,y_n\in \{0, 1\}^{k},y_n^T\textbf{1}_k=1 \end{equation} where $f^{'}_{\theta^{'}}(\cdot)$ is the network composed by $f_{\theta}(\cdot)$ and $W$. Since cross-entropy with softmax output is the most commonly-used loss function for image classification, Eq.\ref{label_generation2} can be rewritten as: \begin{equation} \label{label_generation3} y_n=p(f^{'}_{\theta^{'}}(x_n)) \end{equation} where $p(\cdot)$ is an $\arg\max$ function indicating the non-zero entry for $y_n$. Iteratively alternating Eq.\ref{label_generation3} and Eq.\ref{representation_learning} for pseudo label generation and representation learning, can it really learn a disentangled representation? Apparently, it will easily fall in a local optima and learn less-representative features. The breaking point is data augmentation which is the core of many supervised and unsupervised learning algorithms. Normally, data augmentation is only adopted in representation learning process. However, this is not enough, which can not make this task challenging. Here data augmentation is also adopted in pseudo label generation. It brings disturbance for pseudo label, and make the task challenging enough to learn more robust features. 
Hence, Eq.\ref{label_generation3} and Eq.\ref{representation_learning} are rewritten as: \begin{equation} \label{label_generation4} y_n=p(f^{'}_{\theta^{'}}(t_1(x_n))) \end{equation} \begin{equation} \label{representation_learning2} \mathop{\min}_{\theta^{'}}\frac{1}{N}\sum_{n=1}^{N}l(y_n, f^{'}_{\theta^{'}}(t_2(x_n))) \end{equation} where $t_{1}(\cdot)$ and $t_{2}(\cdot)$ denote two different random transformations. For efficiency, the forward pass of label generation can reuse the forward results of representation learning in the previous epoch. The entire pipeline of our proposed framework is illustrated in Fig.\ref{pipeline}. Since our proposed method is very similar to the supervised image classification in format. Correspondingly, we name our method as unsupervised image classification. Compared with deep clustering, our method is more simple and elegant. It can be easily scaled to large datasets, since it does not need global latent embedding of the entire dataset for image grouping. Further, the classifier $W$ is optimized with the backbone network simultaneously instead of reinitializing after each clustering. Our method makes it a real end-to-end training framework. \subsection{Interpretation} \begin{figure}[tp] \centering \includegraphics[width=0.7\columnwidth]{./figures/cluster_vs_classify.png} \caption{The difference and relation between embedding clustering and classification.} \label{contact1} \end{figure} \subsubsection{The Relation with Embedding Clustering.} Embedding clustering is the key component in deep clustering, which mainly focuses on three aspects: 1) sample embedding generation, 2) distance metric, 3) grouping manner (or cluster centroid generation). Actually, from these aspects, using image classification to generate pseudo labels can be taken as a special variant of embedding clustering, as visualized in Fig.\ref{contact1}. Compared with embedding clustering, the embedding in classification is the output of softmax layer and its dimension is exactly the class number. Usually, we call it the probability assigned to each class. As for distance metric, compared with the euclidean distance used in embedding clustering, cross-entropy can also be considered as an distance metric used in classification. The most significant point is the grouping manner. In $k$-means clustering, the cluster centroids are dynamicly determined and iteratively updated to reduce the intra-classes distance and enlarge the inter-classes distance. Conversely, the class centroids for classification are predefined and fixed as $k$ orthonormal one-hot vectors, which helps directly classify images via cross-entropy. Briefly speaking, \emph{the key difference between embedding clustering and classification is whether the class centroids are dynamicly determined or not}. In DeepCluster \cite{caron2018deep}, 20-iterations $k$-means clustering is operated, while in DeeperCluster \cite{caron2019unsupervised}, 10-iterations $k$-means clustering is enough. It means that clustering actually is not that important. Our method actually can be taken as an 1-iteration variant with fixed class centroids. Considering the representations are still not well-learnt at the beginning of training, both clustering and classification cannot correctly partition the images into groups with the same semantic information. During training, we claim that it is redundant to tune both the embedding features and class centroids meanwhile. 
It is enough to fix the class centroids as orthonormal vectors and only tune the embedding features. Along with representation learning drived by learning data augmentation invariance, the images with the same semantic information will get closer to the same class centroid. What's more, compared with deep clustering, the class centroids in UIC are consistent in between pseudo label generation and representation learning. \subsubsection{The Relation with Contrastive Learning.} Contrastive learning has become a popular method for unsupervised learning recently. Implicitly, unsupervised image classification can also be connected to contrastive learning to explain why it works. Although Eq.\ref{label_generation4} for pseudo label generation and Eq.\ref{representation_learning2} for representation learning are operated by turns, we can merge Eq.\ref{label_generation4} into Eq.\ref{representation_learning2} and get: \begin{equation} \label{contrastive learning} \mathop{\min}_{\theta^{'}}\frac{1}{N}\sum_{n=1}^{N}l(p(f^{'}_{\theta^{'}}(t_1(x_n))), f^{'}_{\theta^{'}}(t_2(x_n))) \end{equation} which is optimized to maximize the mutual information between the representations from different transformations of the same image and learn data augmentation agnostic features. This is a basic formula used in many contrastive learning methods. More concretely, our method use a random view of the images to select their nearest class centroid, namely positive class, in a manner of taking the argmax of the softmax scores. During optimization, we push the representation of another random view of the images to get closer to their corresponding positive class. Implicitly, the remaining orthonormal \emph{k}-1 classes will automatically turn into negative classes. Since we use cross-entropy with softmax as the loss function, they will get farther to the negative classes during optimization. Intuitively, this may be a more proper way to generate negative samples. In normal contrastive learning methods, given an image I in a (large) minibatch , they treat the other images in the minibatch as the negative samples. But there exist the risk that the negative samples may share the same semantic information with I. \section{Experimental Results} \subsection{Dataset Benchmarks and Network Architectures} We mainly apply our proposed unsupervised image classification to ImageNet dataset \cite{russakovsky2015imagenet} without annotations, which is designed for 1000-categories image classification consisting of 1.28 millions images. As for network architectures, we select the most representative one in unsupervised representation learning, AlexNet \cite{krizhevsky2012imagenet}, as our baseline model for performance analysis and comparison. It is composed by five convolutional layers for features extraction and three fully-connected layers for classification. Note that the Local Response Normalization layers are replaced by batch normalization layers. After unsupervised training, the performance is mainly evaluated by \begin{itemize} \item linear probes; \item transfer learning on downstream tasks. \end{itemize} Linear probes \cite{zhang2017split} had been a standard metric followed by lots of related works. It quantitatively evaluates the representation generated by different convolutional layers through separately freezing the convolutional layers (and Batch Normalization layers) from shallow layers to higher layers and training a linear classifier on top of them using annotated labels. 
For evaluation by linear probing, we conduct experiments on ImageNet datasets with annotated labels. Linear probes is a direct approach to evaluate the features learnt by unsupervised learning through fixing the feature extractors. Compared with this approach, transfer learning on downsteam tasks is closer to practical scenarios. Following the existing works, we transfer the unsupervised pretrained model on ImageNet to PASCAL VOC dataset \cite{Everingham2015the} for multi-label image classification, object detection and semantic segmentation via fine-tuning. To avoid the performance gap brought by hyperparameter difference during fine-tuning, we further evaluate the representations by metric-based few-shot classification on \emph{mini}ImageNet \cite{vinyals2016matching} without fine-tuning. \subsection{Unsupervised Image Classification} \begin{table}[tp] \tabcolsep=2pt \begin{floatrow} \begin{minipage}{0.5\linewidth} \centering \begin{floatrow} \ttabbox{\caption{Ablation study on class number. We also report NMI t/labels, denoting the NMI between pseudo labels and annotated labels. FFT means further fine-tuning with fixed label assignments.}}{% \begin{tabular}[t]{lcccc} \toprule[2pt] \multirow{2}{*}{Methods}& \multicolumn{3}{c}{Top1 Accuracy} & \multirow{2}{*}{NMI t/labels}\\ \cline{2-4} &conv3&conv4&conv5&\\ \hline UIC 3k &41.2&41.0&38.1& 38.5\\ UIC 5k &40.6&40.9&38.2& 40.8\\ UIC 10k &40.6&40.8&37.9&42.6\\ UIC 3k (FFT)& 41.6 &41.5 &39.0 &-\\ \bottomrule[2pt] \label{table_class_number} \end{tabular}} \end{floatrow} \end{minipage} \begin{minipage}{0.5\linewidth} \centering \ttabbox{\caption{Ablation study on whether data augmentation is adopted in pseudo label generation.}}{ \begin{tabular}[t]{lcccc} \toprule[2pt] \multirow{2}{*}{Methods}&\multirow{2}{*}{Aug}& \multicolumn{3}{c}{Top1 Accuracy}\\ \cline{3-5} && conv3 & conv4 & conv5\\ \hline UIC 3k &$\times$&39.5&39.9&37.9\\ UIC 3k &$\surd$&41.6&41.5&39.0\\ \bottomrule[2pt] \label{table_augmentation} \end{tabular}} \end{minipage} \end{floatrow} \end{table} \subsubsection{Implementation Details.} Similar to DeepCluster, two important implementation details during unsupervised image classification have to be highlighted: 1) Avoid empty classes, 2) Class balance sampling. At the beginning of training, due to randomly initialization for network parameters, some classes are unavoidable to assign zero samples. To avoid trivial solution, we should avoid empty classes. When we catch one class with zero samples, we split the class with maximum samples into two equal partitions and assign one to the empty class. We observe that this situation of empty classes only happens at the beginning of training. As for class balance sampling, this technique is also used in supervised training to avoid the solution biasing to those classes with maximum samples. \subsubsection{Optimization Settings.} We optimize AlexNet for 500 epochs through SGD optimizer with 256 batch size, 0.9 momentum, 1e-4 weight decay, 0.5 drop-out ratio and 0.1 learning rate decaying linearly. Analogous to DeepCluster, we apply Sobel filter to the input images to remove color information. During pseudo label generation and representation learning, we both adopt randomly resized cropping and horizontally flipping to augment input data. Compared with standard supervised training, the optimization settings are exactly the same except one extra hyperparameter, class number. 
Since over-clustering has been a consensus for clustering-based methods, here we only conduct an ablation study on the class number with 3k, 5k and 10k classes.
\begin{figure}[tp]
\centering
\includegraphics[width=0.7\columnwidth]{./figures/class_distribution2.png}
\caption{Nearly uniform distribution of the number of images assigned to each class.}
\label{image_number}
\end{figure}
\begin{figure}[tp]
\centering
\includegraphics[width=0.3\columnwidth]{./figures/visualized.png}
\caption{Visualization of the classification results with low entropy.}
\label{vis}
\end{figure}
\subsubsection{Evaluation via Normalized Mutual Information.}
Normalized mutual information (NMI) is the main metric to evaluate the classification results; it ranges in the interval between 0 and 1. If the NMI approaches 1, the two label assignments are strongly coherent. The annotated labels are unknown in practical scenarios, so we did not use them to tune the hyperparameters. However, if the annotated labels are given, we can also use the NMI of the label assignment against the annotated one (NMI t/labels) to evaluate the classification results after training. As shown in the fifth column of Tab.\ref{table_class_number}, when the class number is 10k, the NMI t/labels is comparable with DeepCluster (refer to Fig.2(a) in the paper \cite{caron2018deep}), which means the performance of our proposed unsupervised image classification approaches that of DeepCluster even without explicit embedding clustering. However, a larger class number easily leads to a higher NMI t/labels, so we cannot directly use this metric to compare the performance across different class numbers.
\subsubsection{Evaluation via Visualization.}
At the end of training, we count the number of images assigned to each class. As shown in Fig.\ref{image_number}, our classification model divides the images in the dataset into nearly equal partitions. This is an interesting finding. In the work of \cite{asano2019self-labelling}, this result is achieved via label optimization solved by the \emph{Sinkhorn-Knopp algorithm}. However, our method achieves the same result without label optimization. We infer that the class-balanced sampling training manner implicitly biases the label assignment towards a uniform distribution. Furthermore, we also visualize the classification results in Fig.\ref{vis}. Our method classifies images with similar semantic information into one class.
\subsection{Linear Classification on Activations}
\begin{table}[tp]
\begin{floatrow}
\newcommand{\tabincell}[2]{\begin{tabular}{@{}#1@{}}#2\end{tabular}}
\centering
\label{linearprobing}
\caption{Linear probing evaluation on ImageNet. We mainly compare the performance of our method with DeepCluster. For reference, we also list the results of other methods.
}
\begin{tabular}{lccccc}
\toprule[1pt]
\multirow{2}{*}{Methods}& \multicolumn{5}{c}{ImageNet}\\
\cline{2-6}
&conv1&conv2&conv3&conv4&conv5\\
\hline
ImageNet labels &19.3&36.3&44.2&48.3&50.5\\
Random&11.6&17.1&16.9&16.3&14.1\\
\hline
DeepCluster \cite{caron2018deep}&13.4&32.3&41.0&39.6&38.2\\
SelfLabel $3k\times1$ \cite{asano2019self-labelling}&-&-&43.0&44.7&40.9\\
SelfLabel $3k\times10$ \cite{asano2019self-labelling}&22.5&37.4&44.7&47.1&44.1\\
\textbf{Ours} & \textbf{12.8} & \textbf{34.3} & \textbf{41.6} & \textbf{41.5} & \textbf{39.0}\\
\bottomrule[1pt]
\multicolumn{6}{c}{Other self-supervised learning methods for reference}\\
\toprule[1pt]
Context \cite{doersch2015unsupervised} & 16.2 & 23.3 & 30.2 & 31.7 & 29.6\\
BiGAN \cite{donahue2017adversarial} & 17.7&24.5&31.0&29.9&28.0\\
Split-brain \cite{zhang2017split} & 17.7 & 29.3 & 35.4 & 35.2&32.8\\
Jigsaw puzzle \cite{noroozi2016unsupervised} & 18.2 & 28.8 & 34.0 & 33.9&27.1\\
RotNet \cite{gidaris2018unsupervised} &18.8&31.7&38.7&38.2&36.5\\
AND \cite{huang2019unsupervised} & 15.6&27.0&35.9&39.7&37.9\\
AET \cite{zhang2019aet} & 19.3&35.4&44.0&43.6&42.4\\
RotNet+retrieval \cite{feng2019self} & 22.2&38.2&45.7&48.7&48.3\\
\bottomrule[1pt]
\label{linearProbes}
\end{tabular}
\end{floatrow}
\end{table}
\subsubsection{Optimization Settings.}
We use linear probes for a more quantitative evaluation. Following \cite{zhang2017split}, we use max-pooling to separately reduce the activation dimensions to 9600, 9216, 9600, 9600 and 9216 (conv1-conv5). Freezing the feature extractors, we only train the inserted linear layers. We train the linear layers for 32 epochs with zero weight decay and a learning rate of 0.1 divided by ten at epochs 10, 20 and 30. The shorter side of the images in the dataset is resized to 256 pixels, and we then use a 224$\times$224 random crop as well as horizontal flipping to train the linear layers. After training, the accuracy is determined with 10 crops (center crop and four-corner crops as well as their horizontal flips).
\subsubsection{Ablation Study on Class Number Selection.}
We conduct an ablation study on the class number as shown in Tab.\ref{table_class_number}. Different from DeepCluster, the performance with 3k classes is slightly better than with 5k and 10k, which is also confirmed by \cite{asano2019self-labelling}.
\subsubsection{Further Fine-Tuning.}
During training, the label assignment changes every epoch. We fix the label assignment of the last epoch, using center-crop inference for pseudo label generation, and further fine-tune the network for 30 epochs. As shown in Tab.\ref{table_class_number}, the performance can be further improved.
\subsubsection{Ablation Study on Data Augmentation.}
Data augmentation plays an important role in clustering-based self-supervised learning, since the pseudo labels are mostly wrong at the beginning of training when the features are not yet well learnt, and representation learning is then mainly driven by learning data augmentation invariance. In this paper, we also use data augmentation in pseudo label generation. As shown in Tab.\ref{table_augmentation}, it improves the performance. We simply adopt random resized cropping to augment the data in both pseudo label generation and representation learning.
\subsubsection{Comparison with Other State-of-the-Art Methods.}
Since our method aims at simplifying DeepCluster by discarding clustering, we mainly compare our results with DeepCluster.
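For reference, the NMI t/labels values used in the ablations above can be computed with scikit-learn in a few lines (a minimal sketch; the toy label arrays below are placeholders, not real assignments).
\begin{verbatim}
from sklearn.metrics import normalized_mutual_info_score

def nmi_to_labels(pseudo_labels, gt_labels):
    """NMI between the unsupervised class assignments and the annotated labels."""
    return normalized_mutual_info_score(gt_labels, pseudo_labels)

# Toy example: the two partitions group the samples identically, so NMI = 1.0.
print(nmi_to_labels([0, 0, 1, 1, 2, 2], [5, 5, 3, 3, 9, 9]))
\end{verbatim}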
As shown in Tab.\ref{linearProbes}, our performance is comparable with DeepCluster, which validates that the clustering operation can be replaced by more challenging data augmentation. Note that this is also validated by the NMI t/labels mentioned above. SelfLabel [$3k\times1$] simulates clustering via label optimization, which classifies the data into equal partitions. However, as discussed above for Fig.\ref{image_number}, our proposed framework also divides the dataset into nearly equal partitions without the complicated label optimization term. Therefore, theoretically, our framework can achieve results comparable with SelfLabel [$3k\times1$], and we attribute the performance gap to their extra augmentation. With strong augmentation, ours can still surpass SelfLabel, as shown in Tab.6. Compared with other self-supervised learning methods, our method surpasses most of those that use only a single type of supervisory signal. We believe our proposed framework can be taken as a strong baseline for self-supervised learning and can achieve a further performance boost when combined with other supervisory signals, which will be validated in our future work.
\begin{table}[tp]
\newcommand{\tabincell}[2]{\begin{tabular}{@{}#1@{}}#2\end{tabular}}
\centering
\caption{Transferring the pretrained model to downstream tasks on the PASCAL VOC dataset.}
\label{downstreamtask}
\begin{tabular}{lcccc}
\toprule[2pt]
\multirow{3}{*}{Methods}& \multicolumn{2}{c}{Classification} & \multicolumn{1}{c}{Detection} & \multicolumn{1}{c}{Segmentation} \\
&\multicolumn{2}{c}{(\%mAP)}&(\%mAP)&(\%mIU)\\
\cline{2-5}
& FC6-8 & ALL & ALL & ALL \\
\hline
ImageNet Labels&78.9&79.9&56.8&48.0\\
Random-RGB&33.2&57.0&44.5&30.1\\
Random-Sobel&29.0&61.9&47.9&32.0\\
\hline
DeepCluster \cite{caron2018deep}&72.0&73.7&55.4&45.1\\
SelfLabel $3k\times10$ \cite{asano2019self-labelling} & - & 75.3 & 55.9 & 43.7\\
\textbf{Ours} & 76.2 & 75.9 & 54.9 & 45.9 \\
\bottomrule[2pt]
\multicolumn{5}{c}{Other kinds of self-supervised methods for reference}\\
\toprule[2pt]
BiGAN \cite{donahue2017adversarial}& 52.5 & 60.3 & 46.9 & 35.2 \\
Context \cite{doersch2015unsupervised} & 55.1 & 63.1 & 51.1 & - \\
Split-brain \cite{zhang2017split} & 63.0 & 67.1&46.7&36.0\\
Jigsaw puzzle \cite{noroozi2016unsupervised} & - & 67.6&53.2&37.6\\
RotNet \cite{gidaris2018unsupervised}& 70.87 & 72.97 & 54.4 & 39.1 \\
RotNet+retrieval \cite{feng2019self} & -&74.7&58.0&45.9\\
\bottomrule[2pt]
\label{table_downstream_tasks}
\end{tabular}
\end{table}
\subsection{Transfer to Downstream Tasks}
\begin{table}[tp]
\newcommand{\tabincell}[2]{\begin{tabular}{@{}#1@{}}#2\end{tabular}}
\centering
\caption{Evaluation via few-shot classification on the test set of \emph{mini}ImageNet. Note that the 224 resolution is center-cropped from 256, which is upsampled from the 84-pixel low-resolution images; this can be regarded as inserting an upsampling layer at the bottom of the network while the input is still 84$\times$84. MP is short for max-pooling.
For reference, the 5way-5shot accuracy of prototypical networks \cite{snell2017prototypical} trained in a supervised manner is 68.2\%.}
\begin{tabular}{lccccc}
\toprule[1pt]
\multirow{2}{*}{Methods} & \multirow{2}{*}{Resolution} & \multicolumn{4}{c}{5way-5shot accuracy}\\
\cline{3-6}
&&conv3 & conv4 & conv5 & conv5+MP\\
\hline
UIC 3k & 224$\times$224 & 48.79 & 53.03 & 62.46 & 65.05\\
DeepCluster & 224$\times$224 & 51.33 & 54.42 & 60.32 & 65.04\\
UIC 3k & 84$\times$84 & 52.43 & 54.76 & 54.40 & 52.85\\
DeepCluster & 84$\times$84 & 53.46 & 54.87 & 49.81 & 50.18\\
\bottomrule[1pt]
\end{tabular}
\label{fewshot2}
\end{table}
\subsubsection{Evaluation via Fine-Tuning: Multi-label Image Classification, Object Detection, Semantic Segmentation on PASCAL VOC.} In practical scenarios, self-supervised learning is usually used to provide a good pretrained model to boost the representations for downstream tasks. Following other works, the representation learnt by our proposed method is also evaluated by fine-tuning the models on the PASCAL VOC dataset. Specifically, we run the object detection task using the Fast R-CNN \cite{girshick2015fast} framework and the semantic segmentation task using the FCN \cite{long2015fully} framework. As shown in Tab.\ref{table_downstream_tasks}, our performance is comparable with other clustering-based methods and surpasses most other SSL methods.
\subsubsection{Evaluation without Fine-Tuning: Metric-based Few-shot Image Classification on \emph{mini}ImageNet.} Few-shot classification \cite{vinyals2016matching,snell2017prototypical} is naturally a protocol for representation evaluation, since it can directly use unsupervised pretrained models for feature extraction and metric-based methods for few-shot classification without any fine-tuning. It avoids the performance gap brought by fine-tuning tricks. In this paper, we use Prototypical Networks \cite{snell2017prototypical} for representation evaluation on the test set of \emph{mini}ImageNet. As shown in Tab.\ref{fewshot2}, our method is comparable with DeepCluster overall. Specifically, our performance in the highest layers is better than that of DeepCluster.
\section{More Experiments}
In the sections above, we keep the training settings the same as DeepCluster for a fair comparison. Although achieving SOTA results is not the main goal of this work, we further improve our results by combining the training tricks proposed by other methods.
\subsection{More Data Augmentations}
As discussed above, the data augmentation used in pseudo label generation and network training plays a very important role in representation learning. Recently, SimCLR\cite{chen2020a} spent substantial computational resources on a thorough ablation study of data augmentation, and used strong color jittering and random Gaussian blur to boost performance. We find such strong augmentation can also benefit our method, as shown in Tab.6. Our result on conv5 with strong augmentation surpasses DeepCluster and SelfLabel by a large margin and is comparable with SelfLabel with 10 heads. Note that the results in this section do not use further fine-tuning.
\subsection{More Network Architectures}
To further convince the reader, we supplement experiments with ResNet50 (500 epochs), the strong data augmentation and an extra MLP-head proposed by SimCLR\cite{chen2020a} (we fix the MLP-head and do not discard it when linear probing).
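The MLP-head mentioned above is assumed here to take the two-layer form used by SimCLR; a minimal sketch of attaching such a head to a ResNet50 backbone is given below (illustrative only; the hidden and output dimensions are assumptions, not necessarily the values used in the experiments).
\begin{verbatim}
import torch.nn as nn
from torchvision.models import resnet50

def resnet50_with_mlp_head(hidden_dim=2048, proj_dim=128):
    """ResNet-50 whose final fc layer is replaced by a 2-layer MLP head
    in the style of SimCLR; the head is kept (not discarded) at probing time."""
    backbone = resnet50(pretrained=False)
    in_dim = backbone.fc.in_features      # 2048 for ResNet-50
    backbone.fc = nn.Sequential(
        nn.Linear(in_dim, hidden_dim),
        nn.ReLU(inplace=True),
        nn.Linear(hidden_dim, proj_dim),
    )
    return backbone
\end{verbatim}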
As shown in Tab.7, our method surpasses SelfLabel and achieves SOTA results when compared with non-contrastive-learning methods. Although our method still has a performance gap with SimCLR and MoCo v2 ($\gg$500 epochs), it is the simplest one among them. We believe it can bring further improvement by applying more useful tricks.
\begin{table}[tp]
\begin{floatrow}
\newcommand{\tabincell}[2]{\begin{tabular}{@{}#1@{}}#2\end{tabular}}
\centering
\label{withmoreaugmentations2}
\caption{More experimental results with more data augmentations.}
\begin{tabular}{llcccc}
\toprule[1pt]
\multirow{2}{*}{Methods}&\multirow{2}{*}{Arch}&\multicolumn{4}{c}{ImageNet}\\
\cline{3-6}
&&conv3&conv4&conv5&NMI t/labels\\
\hline
DeepCluster \cite{caron2018deep}&AlexNet&41.0&39.6&38.2&-\\
SelfLabel $3k\times1$ \cite{asano2019self-labelling}&AlexNet&43.0&44.7&40.9&-\\
SelfLabel $3k\times10$ \cite{asano2019self-labelling}&AlexNet+10heads&44.7&47.1&44.1&-\\
UIC (Ours) & AlexNet & 41.6 & 41.5 & 39.0 & 38.5\\
UIC + strong aug (Ours) & AlexNet & 43.5 & 45.6 & 44.3 & 40.0\\
\bottomrule[1pt]
\end{tabular}
\end{floatrow}
\end{table}
\begin{table}[tp]
\begin{floatrow}
\newcommand{\tabincell}[2]{\begin{tabular}{@{}#1@{}}#2\end{tabular}}
\centering
\label{withmorearchitectures2}
\caption{More experimental results with more network architectures.}
\begin{tabular}{llll}
\toprule[1pt]
Methods&Arch&Top-1&NMI t/labels\\
\hline
Jigsaw \cite{kolesnikov2019revisiting}&Res50&38.4&-\\
Rotation \cite{kolesnikov2019revisiting}&Res50&43.8&-\\
InstDisc \cite{wu2018unsupervised}&Res50&54.0&-\\
BigBiGAN \cite{donahue2019large}&Res50&56.6&-\\
Local Agg. \cite{zhuang2019local}&Res50&60.2&-\\
MoCo \cite{he2019momentum}&Res50&60.6&-\\
PIRL \cite{misra2019self-supervised}&Res50&63.6&-\\
CPCv2 \cite{henaff2019data-efficient}&Res50&63.8&-\\
SimCLR \cite{chen2020a}&Res50 + MLP-head&69.3&-\\
MoCo v2 \cite{chen2020improved}&Res50 + MLP-head&71.1&-\\
SelfLabel $3k\times10$ \cite{asano2019self-labelling}&Res50+10heads&61.5&-\\
UIC + strong aug (Ours) & VGG16 & 57.7 & 46.9\\
UIC + strong aug (Ours) & Res50 & 62.7 & 50.6\\
UIC + strong aug (Ours) & Res50 + MLP-head & 64.4 & 53.3\\
\bottomrule[1pt]
\end{tabular}
\end{floatrow}
\end{table}
\section{Conclusions}
We always believe that the greatest truths are the simplest. Our method validates that embedding clustering is not the main reason why DeepCluster works. Our method makes training an SSL model as easy as training a supervised image classification model, and it can be adopted as a strong prototype to further develop more advanced unsupervised learning approaches. This makes SSL more accessible to the community and friendlier to academic development.
\bibliographystyle{splncs04}
\bibliography{egbib}
\end{document}
https://openreview.net/forum?id=PnuDpxJvR0q
https://arxiv.org/abs/2006.11480
Evaluate the effectiveness and simplicity of our proposed method for unsupervised feature learning in comparison to other methods in the field.
Distilling Visual Priors from Self-Supervised Learning #### 1. [Summary] In 2-3 sentences, describe the key ideas, experiments, and their significance. The method has two stages: (i) a teacher network is trained with contrastive learning to obtain feature representation, (ii) the knowledge of the teacher network is transferred to student network by distillation, in the meantime, the student network is also finetuned with labels. #### 2. [Strengths] What are the strengths of the paper? Clearly explain why these aspects of the paper are valuable. - 2-stage method - Using a margin to overcome the small bank size problem #### 3. [Weaknesses] What are the weaknesses of the paper? Clearly explain why these aspects of the paper are weak. - No conclusion and discussion section. #### 4. [Overall rating] Paper rating 7 #### 5. [Justification of rating] Please explain how the strengths and weaknesses aforementioned were weighed in for the rating. #### 6. [Detailed comments] Additional comments regarding the paper (e.g. typos or other possible improvements you would like to see for the camera-ready version of the paper, if any.) - How is the margin value chosen? In the text, it is given as 0.6 but in the table, the value is 0.4. - Related works: Why is there any margin loss part? - Missing citations: > L.29:".. simply memorize the dataset and can not generalize well to unseen data.." > L.31:"..some works.." (only given one) Rating: 7: Good paper, accept Confidence: 5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature
\documentclass[runningheads]{llncs}
\usepackage{graphicx}
\usepackage{tikz}
\usepackage{comment}
\usepackage{amsmath,amssymb} %
\usepackage{color}
\usepackage{xcolor}
\usepackage{subfig}
\usepackage{booktabs}
\usepackage{multirow}
\usepackage{bm}
\usepackage{wasysym}
\usepackage{mathrsfs}
\usepackage{xspace}
\usepackage{bbm}
\usepackage[width=122mm,left=12mm,paperwidth=146mm,height=193mm,top=12mm,paperheight=217mm]{geometry}
\newcommand{\TODOFIG}[1]{\textbf{TODO Figure: #1}} %
\newcommand{\TODOTAB}[1]{\textbf{TODO Table: #1}} %
\newcommand{\TODO}[1]{\textbf{TODO: #1}} %
\newcommand{\bb}[1]{\bm{\mathrm{#1}}}
\newcommand{\norm}[1]{\left\lVert#1\right\rVert}
\newcommand{\tabincell}[2]{\begin{tabular}{@{}#1@{}}#2\end{tabular}}
\DeclareRobustCommand\onedot{\futurelet\@let@token\@onedot}
\def\@onedot{\ifx\@let@token.\else.\null\fi\xspace}
\def\eg{\emph{e.g}\onedot} \def\Eg{\emph{E.g}\onedot}
\def\ie{\emph{i.e}\onedot} \def\Ie{\emph{I.e}\onedot}
\def\cf{\emph{c.f}\onedot} \def\Cf{\emph{C.f}\onedot}
\def\etc{\emph{etc}\onedot} \def\vs{\emph{vs}\onedot}
\def\wrt{w.r.t\onedot} \def\dof{d.o.f\onedot}
\def\etal{\emph{et al}.}
\DeclareMathOperator*{\argmin}{argmin}
\makeatletter
\begin{document}
\pagestyle{headings}
\mainmatter
\def\ECCVSubNumber{2} %
\title{Distilling Visual Priors from \\ Self-Supervised Learning}
\titlerunning{Distilling Visual Priors from Self-Supervised Learning}
\author{Bingchen Zhao\inst{1,2}\and Xin Wen\inst{2}}
\authorrunning{B. Zhao and X. Wen}
\institute{Megvii Research Nanjing\\ \and Tongji University, Shanghai, China\\
\email{zhaobc.gm@gmail.com, wx99@tongji.edu.cn}}
\maketitle
\begin{abstract}
Convolutional Neural Networks (CNNs) are prone to overfit small training datasets. We present a novel two-phase pipeline that leverages self-supervised learning and knowledge distillation to improve the generalization ability of CNN models for image classification under the data-deficient setting. The first phase is to learn a teacher model which possesses rich and generalizable visual representations via self-supervised learning, and the second phase is to distill the representations into a student model in a self-distillation manner, and meanwhile fine-tune the student model for the image classification task. We also propose a novel margin loss for the self-supervised contrastive learning proxy task to better learn the representation under the data-deficient scenario. Together with other tricks, we achieve competitive performance in the VIPriors image classification challenge.
\keywords{Self-supervised Learning, Knowledge-distillation}
\end{abstract}
\section{Introduction}
\label{sec:intro}
Convolutional Neural Networks (CNNs) have achieved breakthroughs in image classification~\cite{he2016deep} via supervised training on large-scale datasets, e.g., ImageNet~\cite{deng2009imagenet}. However, when the dataset is small, the over-parameterized CNNs tend to simply memorize the dataset and cannot generalize well to unseen data. To alleviate this over-fitting problem, several regularization techniques have been proposed, such as Dropout~\cite{srivastava14dropout} and BatchNorm~\cite{ioffe2015batch}. In addition, some works seek to combat over-fitting by re-designing the CNN building blocks to endow the model with some encouraging properties (e.g., translation invariance~\cite{kayhan2020translation}). Recently, self-supervised learning has shown great potential for learning useful representations from data without external label information.
In particular, the contrastive learning methods~\cite{he2020momentum,chen2020simple} have demonstrated advantages over other self-supervised learning methods in learning better transferable representations for downstream tasks. Compared to supervised learning, representations learned by self-supervised learning are unbiased to image labels, which can effectively prevent the model from over-fitting the patterns of any object category. Furthermore, the data augmentation in modern contrastive learning~\cite{chen2020simple} typically involves diverse transformation strategies, which significantly differ from those used by supervised learning. This may also suggest that contrastive learning can better capture the diversity of the data than supervised learning. In this paper, we go one step further by exploring the capability of contrastive learning under the data-deficient setting. Our key motivation lies in the realization that the label-unbiased and highly expressive representations learned by self-supervised learning can largely prevent the model from over-fitting the small training dataset. Specifically, we design a new two-phase pipeline for data-deficient image classification. The first phase is to utilize self-supervised contrastive learning as a proxy task for learning useful representations, which we regard as visual priors before using the image labels to train a model in a supervised manner. The second phase is to use the weights obtained from the first phase as the starting point, and to leverage the label information to further fine-tune the model to perform classification. In principle, self-supervised pre-training is an intuitive approach for preventing over-fitting when the labeled data are scarce, yet constructing the pre-training and fine-tuning pipeline properly is critical for good results. Specifically, there are two problems to be solved. First, the common practice in self-supervised learning is to maintain a memory bank for negative sampling. While MoCo~\cite{he2020momentum} has demonstrated accuracy gains with increased bank size, the maximum bank size, however, is limited in the data-deficient setting. To address this issue, we propose a margin loss that can reduce the bank size while maintaining the same performance. We hope that this method can be helpful for fast experiments and evaluation. Second, directly fine-tuning the model on a small dataset still faces the risk of over-fitting; this is based on the observation that fine-tuning only a linear classifier on top of the pre-trained representation can already yield a good result. We propose to utilize a recently published feature distillation method~\cite{heo2019comprehensive} to perform self-distillation between the pre-trained teacher model and a student model. This self-distillation module plays the role of regularizing the model against forgetting the visual priors learned in the contrastive learning phase, and thus can further prevent the model from over-fitting on the small dataset.
\section{Related Works}\label{sec:related}
\noindent
\textbf{Self-supervised learning}~~ focuses on how to obtain good representations of data from heuristically designed proxy tasks, such as image colorization~\cite{zhang2016colorful}, tracking objects in videos~\cite{wang2015unsupervised}, de-noising auto-encoders~\cite{vincent2008extracting} and predicting image rotations~\cite{gidaris2018unsupervised}.
Recent works using contrastive learning objectives~\cite{wu2018unsupervised} have achieved remarkable performance, among which MoCo~\cite{he2020momentum,chen2020improved} is the first self-supervised method that outperforms supervised pre-training methods on multiple downstream tasks. In SimCLR~\cite{chen2020simple}, the authors show that the augmentation policy used by self-supervised methods is quite different from that of supervised methods, and is often harder. This phenomenon suggests that self-supervised learned representations can be richer and more diverse than their supervised counterparts.
\noindent
\textbf{Knowledge distillation}~~ aims to distill useful knowledge or representations from a teacher model to a student model~\cite{hinton2015kd}. The original knowledge distillation uses the predicted logits to transfer knowledge from teacher to student~\cite{hinton2015kd}. Later, some works found that transferring the knowledge conveyed by the feature maps from the teacher to the student can lead to better performance~\cite{romero2014fitnets,zagoruyko2016paying}. Heo~\etal~\cite{heo2019comprehensive} provided an overhaul study of how to effectively distill knowledge from the feature maps, which also inspires our design for knowledge distillation. Self-distillation uses the same model for both teacher and student~\cite{furlanello2018born}, which has been shown to improve the performance of the model. We utilize the self-distillation method as a regularization term to prevent our model from over-fitting.
\section{Method}
Our method contains two phases. The first phase uses the recently published MoCo v2~\cite{chen2020improved} to pre-train the model on the given dataset to obtain good representations. The learned representations can be considered as visual priors before using the label information. The second phase initializes both the teacher and the student model used in the self-distillation process with the pre-trained weights. The weights of the teacher are frozen, and the student is updated using a combination of the classification loss and the overhaul-feature-distillation (OFD)~\cite{heo2019comprehensive} loss from the teacher. As a result, the student model is regularized by the representation of the teacher when performing the classification task. The two phases are visualized in Fig.~\ref{fig:distill}.
\begin{figure}
\centering
\includegraphics[height=6cm]{figs/Distill.pdf}
\caption{The two phases of our proposed method. The first phase is to construct a useful visual prior with self-supervised contrastive learning, and the second phase is to perform self-distillation on the pre-trained checkpoint. The student model is fine-tuned with a distillation loss and a classification loss, while the teacher model is frozen.}
\label{fig:distill}
\end{figure}
\subsection{Phase-1: Pre-Train with Self-Supervised Learning}
The original loss used by MoCo is as follows:
\begin{equation}
\label{eq:moco}
\mathcal{L}_{\text{moco}}=- \log\left[\frac{\exp\left(\mathbf{q} \cdot \mathbf{k^{+}} / \tau\right)}{\exp\left(\mathbf{q} \cdot \mathbf{k^{+}} / \tau\right) + \sum_{\mathbf{k^{-}}} \exp\left(\mathbf{q} \cdot \mathbf{k^{-}} / \tau\right)} \right] \,,
\end{equation}
where $\mathbf{q}$ and $\mathbf{k^{+}}$ are a positive pair (different views of the same image) sampled from the given dataset $\mathcal{D}$, $\mathbf{k^{-}}$ are negative examples (different images), and $\tau$ is a temperature hyperparameter.
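For concreteness, a minimal PyTorch-style sketch of this loss is given below. It is an illustration rather than the official MoCo code, and it assumes that $\mathbf{q}$ and $\mathbf{k^{+}}$ are already L2-normalized and that the negatives come from the memory queue.
\begin{verbatim}
import torch
import torch.nn.functional as F

def moco_loss(q, k_pos, queue, tau=0.2):
    """InfoNCE loss used by MoCo: one positive key per query, negatives from the queue.
    q:     N x D query embeddings (L2-normalized)
    k_pos: N x D positive keys from the momentum encoder (L2-normalized)
    queue: K x D negative keys stored in the memory bank"""
    l_pos = torch.einsum("nd,nd->n", q, k_pos).unsqueeze(1)  # N x 1
    l_neg = torch.einsum("nd,kd->nk", q, queue)              # N x K
    logits = torch.cat([l_pos, l_neg], dim=1) / tau
    labels = torch.zeros(q.size(0), dtype=torch.long, device=q.device)  # positive at index 0
    return F.cross_entropy(logits, labels)
\end{verbatim}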
As shown in Fig.~\ref{fig:distill}, MoCo uses a momentum encoder $\theta_{k}$ to encode all the keys $\mathbf{k}$ and puts them in a queue for negative sampling; the momentum encoder is a momentum average of the encoder $\theta_{q}$:
\begin{equation}
\theta_k \leftarrow \eta\theta_k+(1-\eta)\theta_q.
\end{equation}
As shown in MoCo~\cite{he2020momentum}, the size of the negative sampling queue is crucial to the performance of the learned representation. Since the maximum size of the queue is limited in a data-deficient setting, we propose to add a margin to the original loss function to encourage a larger margin between data samples, thus helping the model obtain a similar result with fewer negative examples:
\begin{equation}
\mathcal{L}_{\text{margin}}=-\log\left[\frac{\exp\left(\left(\mathbf{q} \cdot \mathbf{k^{+}} - m \right) / \tau\right)}{\exp\left(\left(\mathbf{q} \cdot \mathbf{k^{+}} - m \right) / \tau\right) + \sum_{\mathbf{k^{-}}} \exp\left(\mathbf{q} \cdot \mathbf{k^{-}} / \tau\right)} \right] \,.
\end{equation}
\subsection{Phase-2: Self-Distill on Labeled Dataset}
The self-supervised checkpoint from phase-1 is then used to initialize the teacher and the student for fine-tuning on the whole dataset with labels. We choose to use OFD~\cite{heo2019comprehensive} to distill the visual priors from the teacher to the student. The distillation process can be seen as a regularization that prevents the student from over-fitting the small training dataset and gives the student a more diverse representation for classification. The distillation loss can be formulated as follows:
\begin{equation}
\label{eq:distill}
\mathcal{L}_{\text{distill}}=\sum_{\mathbf{F}}d_{p}\left(\text{StopGrad}\left(\mathbf{F}_{t}\right), r(\mathbf{F}_{s})\right) \,,
\end{equation}
where $\mathbf{F}_t$ and $\mathbf{F}_s$ stand for the feature maps of the teacher and the student model respectively, StopGrad means the weights of the teacher are not updated by gradient descent, $d_p$ is a distance metric, and $r$ is a connector function that transforms the features of the student to those of the teacher. Along with a cross-entropy loss for classification:
\begin{equation}\label{eq:ce_loss}
\mathcal{L}_{\text{ce}}=- \log p(y=i|\mathbf{x}) \,,
\end{equation}
the final loss function for the student model is:
\begin{equation}\label{eq:stu_loss}
\mathcal{L}_{\text{stu}}=\mathcal{L}_{\text{ce}} +\lambda \mathcal{L}_{\text{distill}} \,.
\end{equation}
The student model is then used for evaluation.
\section{Experiments}
\subsubsection{Dataset}
Only the subset of the ImageNet~\cite{deng2009imagenet} dataset given by the VIPriors challenge is used for our experiments; no external data or pre-trained checkpoints are used. The VIPriors challenge dataset contains 1,000 classes, the same as the original ImageNet~\cite{deng2009imagenet}, and is split into train, val and test splits; each split has 50 images per class, resulting in a total of 150,000 images. For comparison, we use the train split to train the model and test it on the validation split.
\subsubsection{Implementation Details}
For phase-1, we set the momentum $\eta$ to 0.999 in all the experiments as it yields better performance, and the size of the queue is set to 4,096. The margin $m$ in our proposed margin loss is set to 0.6. We train the model for 800 epochs in phase-1; the initial learning rate is set to 0.03 and is dropped by 10x at epoch 120 and epoch 160.
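To illustrate how the proposed margin term modifies the standard contrastive loss above, a hedged PyTorch-style sketch follows (not the authors' code; the temperature value is an assumption taken from the common MoCo v2 default, and the inputs are assumed to be L2-normalized as before).
\begin{verbatim}
import torch
import torch.nn.functional as F

def margin_moco_loss(q, k_pos, queue, margin=0.6, tau=0.2):
    """Margin variant of the MoCo loss: the positive similarity is reduced by a
    margin m before the softmax, encouraging a larger gap to the negatives so
    that a smaller negative queue suffices. Illustrative sketch only."""
    l_pos = (torch.einsum("nd,nd->n", q, k_pos) - margin).unsqueeze(1)  # (q.k+ - m)
    l_neg = torch.einsum("nd,kd->nk", q, queue)
    logits = torch.cat([l_pos, l_neg], dim=1) / tau
    labels = torch.zeros(q.size(0), dtype=torch.long, device=q.device)
    return F.cross_entropy(logits, labels)
\end{verbatim}
Subtracting the margin only from the positive logit makes the positive pair harder to satisfy, which matches the intuition given above for why fewer negatives can suffice.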
Other hyperparameters are set to be the same as MoCo v2~\cite{chen2020improved}. For phase-2, the $\lambda$ in Eq.~\ref{eq:stu_loss} is set to $10^{-4}$. We also choose the $\ell_2$ distance as the distance metric $d_p$ in Eq.~\ref{eq:distill}. We train the model for 100 epochs in phase-2; the initial learning rate is set to 0.1 and is dropped by 10x every 30 epochs.
\subsubsection{Ablation Results}
We first present the overall performance of our proposed two-phase pipeline, then show some ablation results. As shown in Tab.~\ref{tab:r50_phases}, supervised training of ResNet50~\cite{he2016deep} leads to over-fitting on the train split, thus the validation top-1 accuracy is low. By first pre-training the model with phase-1 of our pipeline and fine-tuning a linear classifier on top of the obtained feature representation~\cite{wu2018unsupervised}, we reach a 6.6-point gain in top-1 accuracy. This indicates that the features learned by self-supervised learning contain more information and generalize well to the validation set. We also show that fine-tuning the full model from phase-1 reaches better performance than only fine-tuning a linear classifier, which indicates that the weights from phase-1 can also serve as a good initialization, but the supervised training process may still cause the model to suffer from over-fitting. Finally, by combining phase-1 and phase-2, our proposed pipeline achieves a 16.7-point gain in top-1 accuracy over the supervised baseline.
\begin{table}[]
\begin{center}
\begin{tabular}{cccc}
\toprule
ResNet50 & \#Pretrain Epoch & \#Finetune Epoch & Val Acc \\ \midrule
Supervised Training & - & 100 & 27.9 \\
Phase-1 + finetune fc & 800 & 100 & 34.5 \\
Phase-1 + finetune & 800 & 100 & 39.4 \\
\begin{tabular}[c]{c}Phase-1 + Phase-2\\ (Ours)\end{tabular} & 800 & 100 & 44.6 \\ \bottomrule
\end{tabular}
\vspace{0.2cm}
\caption{\label{tab:r50_phases} Training and pre-training the model on the train split and evaluating the performance on the validation split of the given dataset. `finetune fc' stands for training a linear classifier on top of the pretrained representation; `finetune' stands for training the weights of the whole model. Our proposed pipeline (Phase-1 + Phase-2) obtains a 16.7-point gain in top-1 validation accuracy.}
\end{center}
\end{table}
\subsubsection{The effect of our margin loss}
Tab.~\ref{tab:margin_moco} shows the effect of the number of negative samples in the contrastive learning loss. The original loss function used by MoCo~v2~\cite{he2020momentum} is sensitive to the number of negatives: the fewer negatives, the lower the linear classification result. Our modified margin loss helps alleviate this issue by encouraging the model to learn a larger margin between data points. The experiments show that our margin loss is less sensitive to the number of negatives and can be used in a data-deficient setting.
\begin{table}[]
\begin{center}
\begin{tabular}{@{}ccccc@{}}
\toprule
 & \#Neg & Margin & Val Acc \\ \midrule
\multicolumn{1}{c}{\multirow{3}{*}{MoCo v2~\cite{he2020momentum}}} & 4096 & - & 34.5 \\
\multicolumn{1}{c}{} & 1024 & - & 32.1 \\
\multicolumn{1}{c}{} & 256 & - & 29.1 \\\midrule
\multirow{3}{*}{Margin loss} & 4096 & 0.4 & 34.6 \\
 & 1024 & 0.4 & 34.2 \\
 & 256 & 0.4 & 33.7 \\\bottomrule
\end{tabular}
\end{center}
\vspace{0.1cm}
\caption{\label{tab:margin_moco} Val Acc means the linear classification accuracy obtained by fine-tuning a linear classifier on top of the learned representation.
The original MoCo v2 is sensitive to the number of negatives; the performance drops drastically when the number of negatives is small. Our modified margin loss is less sensitive to the number of negatives: as shown in the table, even with 16x fewer negatives the performance only drops by 0.9 points.}
\end{table}
\vspace{-0.5cm}
\begin{table}[]
\begin{center}
\begin{tabular}{lccc}
\toprule
 & \#Pretrain Epoch & \#Finetune Epoch & Test Acc \\ \midrule
Phase-1 + Phase-2 & 800 & 100 & 47.2 \\
+Input Resolution 448 & 800 & 100 & 54.8 \\
+ResNeXt101~\cite{xie2017aggregated} & 800 & 100 & 62.3 \\
+Label-Smooth~\cite{muller2019does} & 800 & 100 & 64.2 \\
+Auto-Aug~\cite{cubuk2019autoaugment} & 800 & 100 & 65.7 \\
+TenCrop & 800 & 100 & 66.2 \\
+Ensemble two models & 800 & 100 & 68.8 \\ \bottomrule
\end{tabular}
\vspace{0.2cm}
\caption{\label{tab:tricks} The tricks used in the competition; our final accuracy is 68.8, which is a competitive result in the challenge. Our code will be made public. Results in this table are obtained by training the model on the combination of the train and validation splits.}
\end{center}
\end{table}
\subsubsection{Competition Tricks}
For better performance in the competition, we combine the train and val splits to train the model that generates the submission. Several other tricks and stronger backbone models are used for better performance, such as Auto-Augment~\cite{cubuk2019autoaugment}, ResNeXt~\cite{xie2017aggregated}, label smoothing~\cite{muller2019does}, TenCrop and model ensembling. The detailed tricks are listed in Tab.~\ref{tab:tricks}.
\section{Conclusion}
This paper proposes a novel two-phase pipeline for image classification using CNNs under the data-deficient setting. The first phase learns a teacher model which obtains a rich visual representation of the dataset using self-supervised learning. The second phase transfers this representation into a student model in a self-distillation manner, while the student is fine-tuned for the downstream classification task. Experiments show the effectiveness of our proposed method. Combined with additional tricks, our method achieves a competitive result in the VIPriors Image Classification Challenge.
\bibliographystyle{splncs04}
\bibliography{egbib}
\end{document}
https://openreview.net/forum?id=8V9lE-zP0ZL
https://arxiv.org/abs/2008.00261
Evaluate the paper on "Distilling Visual Priors from Self-Supervised Learning" based on its strengths and weaknesses.
Interesting method, well-written paper #### 1. [Summary] In 2-3 sentences, describe the key ideas, experiments, and their significance. The paper proposes to use contrastive pre-training to construct a visual prior (i.e. the model weights) and subsequently initializes both a teacher and student model with the pre-trained weights to finetune the student network on the dataset while imposing an additional distillation loss between the frozen teacher and unfrozen student networks. #### 2. [Strengths] What are the strengths of the paper? Clearly explain why these aspects of the paper are valuable. * The paper is clear and is easy to understand * The method is interesting and seems to perform well * The ablation studies clearly show which portion of the performance gain can be accredited to the proposed method and which portion is due to additional tricks, such as data augmentation. #### 3. [Weaknesses] What are the weaknesses of the paper? Clearly explain why these aspects of the paper are weak. * It would have been nice to include experiments on an additional (toy) dataset such as MNIST or SVHN to show that the method generalizes to other tasks. * Conclusion section is missing; the paper is ending rather abruptly. #### 4. [Overall rating] Paper rating 8: Top 50% of accepted papers, clear accept #### 5. [Justification of rating] Please explain how the strengths and weaknesses aforementioned were weighed in for the rating. The method is interesting and seems to perform well. In addition, the paper is well-written. Please consider extending the paper with a "Conclusion" section. #### 6. [Detailed comments] Additional comments regarding the paper (e.g. typos or other possible improvements you would like to see for the camera-ready version of the paper, if any.) * What is $\tau$ in equation (1)? Rating: 8: Top 50% of accepted papers, clear accept Confidence: 4: The reviewer is confident but not absolutely certain that the evaluation is correct
https://openreview.net/forum?id=8V9lE-zP0ZL
https://arxiv.org/abs/2008.00261
Evaluate the paper based on its clarity, the strength of the proposed method, and the inclusion of ablation studies, and provide a rating and justification for the rating.
Multiple interaction learning with question-type prior knowledge for constraining answer search space in visual question answering #### 1. [Summary] In 2-3 sentences, describe the key ideas, experiments, and their significance. The paper proposes a method which constrains search space by using question type information as prior information and utilizes different attentions to obtain better results. #### 2. [Strengths] What are the strengths of the paper? Clearly explain why these aspects of the paper are valuable. - Search space constraints according to question types - Using multiple attention mechanisms - Performance and better attention maps #### 3. [Weaknesses] What are the weaknesses of the paper? Clearly explain why these aspects of the paper are weak. - Question types are prior knowledge yet not visual prior knowledge. #### 4. [Overall rating] Paper rating 7 #### 5. [Justification of rating] Please explain how the strengths and weaknesses aforementioned were weighed in for the rating. #### 6. [Detailed comments] Additional comments regarding the paper (e.g. typos or other possible improvements you would like to see for the camera-ready version of the paper, if any.) - Fig.3: Some of the notations are not visible. In addition, you can show modules with dashed areas with different colors. - Table 4: Why did you make your results as bold? - The effectiveness of multi-hypothesis interaction learning proposed in Section 3.3: The explanation in the subsection makes confusion because the order of showing the results. It can be better to have a paragraph for each result (table). - Limitations and failing cases. - Time and memory usage - Will you share the code and models? Typos: - L80: constraint - L.436: not fitting in the line - L.590:modality Rating: 7: Good paper, accept Confidence: 5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature
\documentclass[runningheads]{llncs} \usepackage{graphicx} \usepackage{comment} \usepackage{amsmath,amssymb} % \usepackage{color} \usepackage[width=122mm,left=12mm,paperwidth=146mm,height=193mm,top=12mm,paperheight=217mm]{geometry} \usepackage{epsfig} \usepackage{ mathrsfs } \usepackage{xcolor} \usepackage{tablefootnote} \usepackage{ stmaryrd } \usepackage[ruled,vlined,linesnumbered]{algorithm2e} \usepackage{tabularx} % \usepackage{multirow} \usepackage{array} \newcolumntype{L}[1]{>{\raggedright\let\newline\\\arraybackslash\hspace{0pt}}m{#1}} \newcolumntype{C}[1]{>{\centering\let\newline\\\arraybackslash\hspace{0pt}}m{#1}} \newcolumntype{R}[1]{>{\raggedleft\let\newline\\\arraybackslash\hspace{0pt}}m{#1}} \usepackage{supertabular} \usepackage{enumitem} \usepackage{ dsfont } \usepackage[toc,page]{appendix} \DeclareMathOperator*{\argmax}{arg\,max} \newcommand\red[1]{{\color{red}#1}} \newcommand\brown[1]{{\color{brown}#1}} \newcommand{\tuong}[1]{\brown{#1}} \usepackage{pifont}% \newcommand{\cmark}{\ding{51}}% \newcommand{\xmark}{\ding{55}}% \def\R{{\mathbb R}} \def\httilde{\mbox{\tt\raisebox{-.5ex}{\symbol{126}}}} \renewcommand{\baselinestretch}{0.98} \normalsize \usepackage{floatrow} \begin{document} \pagestyle{headings} \mainmatter \def\ECCVSubNumber{1} % \title{Multiple interaction learning with question-type prior knowledge for constraining answer search space in visual question answering} % \titlerunning{MILQT} \author{Tuong Do\inst{1} \and Binh X. Nguyen\inst{1} \and Huy Tran\inst{1} \and Erman Tjiputra\inst{1} \and Quang D. Tran\inst{1}\and Thanh-Toan Do\inst{2} } \authorrunning{Tuong Do et al.} \institute{AIOZ, Singapore \\ \email{\{tuong.khanh-long.do,binh.xuan.nguyen,huy.tran,\\erman.tjiputra,quang.tran\}@aioz.io} \and University of Liverpool \\ \email{thanh-toan.do@liverpool.ac.uk}} \maketitle \begin{abstract} Different approaches have been proposed to Visual Question Answering (VQA). However, few works are aware of the behaviors of varying joint modality methods over question type prior knowledge extracted from data in constraining answer search space, of which information gives a reliable cue to reason about answers for questions asked in input images. In this paper, we propose a novel VQA model that utilizes the question-type prior information to improve VQA by leveraging the multiple interactions between different joint modality methods based on their behaviors in answering questions from different types. The solid experiments on two benchmark datasets, i.e., VQA 2.0 and TDIUC, indicate that the proposed method yields the best performance with the most competitive approaches. \keywords{visual question answering, multiple interaction learning.} \end{abstract} \section{Introduction} The task of Visual Question Answering (VQA) is to provide a correct answer to a given question such that the answer is consistent with the visual content of a given image. The VQA research raises a rich set of challenges because it is an intersection of different research fields including computer vision, natural language processing, and reasoning. Thanks to its wide applications, the VQA has attracted great attention in recent years~\cite{VQA,Xu2016AskAA,Yang2016StackedAN,bottom-up2017,Kim2018BilinearAN,MTL_QTA}. This also leads to the presence of large scale datasets~\cite{VQA,vqav22016,Kushal2018Tdiuc} and evaluation protocols~\cite{VQA,Kushal2018Tdiuc}. 
There are works that consider the question type as side information which gives a strong cue to reason about the answer \cite{2017AgrawalPriorVQA,MTL_QTA,kafle2016answer}. However, the relation between question types and answers in the training data has not been investigated yet. Fig.~\ref{fig:distribution_graph} shows the correlation between question types and some answers in the VQA 2.0 dataset \cite{vqav22016}. It clearly suggests, e.g., that a question regarding a quantity should be answered by a number, not a color. This observation indicates that the prior information obtained from the correlations between question types and answers provides an answer search space constraint for the VQA model. This constraint helps the VQA model make its final prediction and thus improves the overall performance.
\begin{figure} \centering \includegraphics[width = \columnwidth*8/9, keepaspectratio=True]{Distribution_graph.png} \caption{The distribution of candidate answers in each question type in VQA 2.0. } \label{fig:distribution_graph} \end{figure}
\begin{figure*}[!t] \centering \includegraphics[width=\textwidth*9/10, keepaspectratio=true]{diff_attentions.png} \caption{Examples of attention maps of different attention mechanisms. BAN~\cite{Kim2018BilinearAN} and SAN~\cite{Yang2016StackedAN} identify different visual areas when answering questions from different types. \cmark\ and \xmark\ indicate correct and wrong answers, respectively.} \label{fig:diff_attentions} \end{figure*}
In current state-of-the-art VQA systems, the joint modality component plays an important role since it learns meaningful joint representations between linguistic and visual inputs~\cite{Xu2016AskAA,Yang2016StackedAN,bottom-up2017,Kim2018BilinearAN,dense-attention,tan2019lxmert}. Although different joint modality methods or attention mechanisms have been proposed, we hypothesize that each method may capture different aspects of the input. That means different attentions may provide different answers for questions belonging to different question types. Fig.~\ref{fig:diff_attentions} shows examples in which the attention models (BAN~\cite{Kim2018BilinearAN} and SAN~\cite{Yang2016StackedAN}) attend to different regions of the input images when dealing with questions from different types. Unfortunately, most recent VQA systems are based on a single attention model~\cite{Xu2016AskAA,Yang2016StackedAN,bottom-up2017,Kim2018BilinearAN,MTL_QTA,Fukui2016MultimodalCB}. Based on the above observation, it is necessary to develop a VQA system which leverages the power of different attention models to deal with questions from different question types.
In this paper, we propose multiple interaction learning with question-type prior knowledge (MILQT), which extracts question-type prior knowledge from the data to constrain the answer search space and leverages the different behaviors of multiple attentions in dealing with questions from different types. Our contributions are summarized as follows. (i) We propose a novel VQA model that leverages the question-type information to augment the VQA loss. (ii) We identify that different attentions show different performance in dealing with questions from different types, and we leverage this characteristic to raise performance through our designed model.
(iii) Extensive experiments show that the proposed model achieves competitive performance with state-of-the-art approaches on the widely used VQA 2.0~\cite{vqav22016} and TDIUC~\cite{Kushal2018Tdiuc} datasets.
\section{Related Work} \textbf{Visual Question Answering}. In recent years, VQA has attracted large attention from both the computer vision and natural language processing communities. Recent VQA research mainly focuses on the development of different attention models. In~\cite{Fukui2016MultimodalCB}, the authors proposed Multimodal Compact Bilinear (MCB) pooling, which projects the visual and linguistic features to a higher-dimensional space and then convolves both vectors efficiently using the element-wise product in Fast Fourier Transform space. In \cite{Yang2016StackedAN}, the authors proposed Stacked Attention Networks (SAN) which locate, via multi-step reasoning, image regions that are relevant to the question for answer prediction. In~\cite{bottom-up2017,tip-trick}, the authors employed top-down attention that learns an attention weight for each image region by applying non-linear transformations to the combination of image features and linguistic features. In~\cite{dense-attention}, the authors proposed a dense, symmetric attention model that allows each question word to attend to image regions and each image region to attend to question words. In~\cite{Kim2018BilinearAN}, the authors proposed Bilinear Attention Networks (BAN) that find bilinear attention distributions to utilize the given visual-linguistic information seamlessly. Recently, in \cite{tan2019lxmert}, the authors introduced Cross-Modality Encoder Representations (LXMERT) to learn the alignments and relationships between visual concepts and language semantics.
Regarding the question type, previous works have considered question-type information to improve VQA results. Agrawal et al. \cite{2017AgrawalPriorVQA} trained a separate question-type classifier to classify input questions into two categories, i.e., Yes-No and non Yes-No. Each category is subsequently processed in a different way. In other words, the question-type information is only used for selecting the suitable subsequent processing. Shi et al. \cite{MTL_QTA} also trained a question-type classifier to predict the question type. The predicted one-hot question type is only used to weight the importance of different visual features. Kafle et al. \cite{kafle2016answer} also used the question type to improve the performance of VQA prediction. Similar to \cite{2017AgrawalPriorVQA}, the authors separately trained a classifier to predict the type of the input question. The predicted question type is then used to improve the VQA prediction through a Bayesian inference model. In our work, different from~\cite{2017AgrawalPriorVQA}, \cite{MTL_QTA} and \cite{kafle2016answer}, question types act as prior knowledge, which constrains the answer search space through the loss function. Additionally, we can further identify the performance of different joint modality methods on questions from different types. Besides, through multiple interaction learning, the behaviors of the joint modality methods are utilized in producing the final answer, which further improves VQA performance.
\section{Methodology} \begin{figure*} \centering \includegraphics[width=\textwidth*8/10, keepaspectratio=true]{vqa-net-diagram.png} \caption{The proposed MILQT for VQA.
} \label{fig:framework} \end{figure*}
The proposed multiple interaction learning with question-type prior knowledge (MILQT) is illustrated in Fig.~\ref{fig:framework}. Similar to most VQA systems \cite{Kim2018BilinearAN,Yang2016StackedAN,bottom-up2017}, MILQT consists of a joint learning solution for input questions and images, followed by a multi-class classification over a set of predefined candidate answers. However, MILQT leverages multiple joint modality methods under the guidance of question types to output better answers. As in Fig.~\ref{fig:framework}, MILQT consists of two modules: Question-type awareness $\mathcal{A}$, and Multi-hypothesis interaction learning $\mathcal{M}$. The first module aims to learn the question-type representation, which is further used to enhance the joint visual-question embedding features and to constrain the answer search space through prior knowledge extracted from the data. Based on the question-type information, the second module aims to identify the behaviors of the multiple joint learning methods and then adjust their contributions to the final predictions. In the following, we describe the representation of input questions and images in Section~\ref{subsec:rep}. Section~\ref{subsec:qt-awa} presents the Question-type awareness module $\mathcal{A}$. Section~\ref{subsec:interaction} presents the Multi-hypothesis interaction learning module $\mathcal{M}$. Section~\ref{subsec:overall-loss} presents the multi-task loss for training the entire model.
\subsection{Input Representation} \label{subsec:rep} \textbf{Question representation.} Given an input question, following the recent state of the art~\cite{bottom-up2017,Kim2018BilinearAN}, we trim the question to a maximum of 12 words. Questions that are shorter than 12 words are zero-padded. Each word is then represented by a 600-D vector that is a concatenation of the 300-D GloVe word embedding \cite{pennington2014glove} and the augmenting embedding learned from training data as in~\cite{Kim2018BilinearAN}. This step results in a sequence of word embeddings with size $12 \times 600$, denoted as $f_w$ in Fig.~\ref{fig:framework}. In order to obtain the intent of the question, $f_w$ is passed through a Gated Recurrent Unit (GRU)~\cite{2014ChoGRU}, which results in a 1024-D vector representation $f_q$ of the input question.
\textbf{Image representation.} Several object detectors have been proposed in the literature, whose outputs vary in size and location. Inspired by recent advances in VQA~\cite{bottom-up2017,MTL_QTA,tip-trick}, we use bottom-up attention, i.e., an object detector with a Faster R-CNN \cite{Ren2015FasterRCNN} backbone, to extract the image representation. The input image is passed through the bottom-up network to get a $K \times 2048$ bounding-box representation, which is denoted as $f_v$ in Fig.~\ref{fig:framework}.
\subsection{Question-type Awareness} \label{subsec:qt-awa} \textbf{Question-type classification.} This component in module $\mathcal{A}$ aims to learn the question-type representation. Specifically, this component takes the question embedding $f_q$ as input, passes it through several fully-connected (FC) layers, and ends with a softmax layer which produces a probability distribution $h$ over $P$ question types, where $P$ depends on the dataset, i.e., $P$ equals $3$ for VQA 2.0~\cite{vqav22016} and $12$ for TDIUC~\cite{Kushal2018Tdiuc}.
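As a rough illustration of this component, a minimal PyTorch sketch of such a question-type classifier head is given below. The layer sizes and names are our own assumptions (the paper only specifies several FC layers ending in a softmax over $P$ question types), not the authors' implementation.

```python
import torch
import torch.nn as nn

# Illustrative question-type classifier head (module A). Layer sizes are
# assumptions; the paper only states "several FC layers" ending in a
# softmax over P question types.
class QuestionTypeClassifier(nn.Module):
    def __init__(self, q_dim=1024, hidden=1024, num_types=3):  # P = 3 for VQA 2.0
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(q_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.out = nn.Linear(hidden, num_types)

    def forward(self, f_q):
        f_qt = self.fc(f_q)                      # question-type feature (reused for fusion)
        h = torch.softmax(self.out(f_qt), -1)    # distribution over question types
        return f_qt, h
```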
The question-type embedding $f_{qt}$ extracted from the question-type classification component will be combined with the attention features to enhance the joint semantic representation between the input image and question, while the predicted question type will be used to augment the VQA loss.
\textbf{Multi-level multi-modal fusion.} Previous works perform only one level of fusion between linguistic and visual features, which may limit the capacity of these models to learn a good joint semantic space. In our work, we introduce a multi-level multi-modal fusion that encourages the model to learn a better joint semantic space and takes the question-type representation obtained from the question-type classification component as one of its inputs.
\textit{First level multi-modal fusion:} The first-level fusion is similar to previous works~\cite{bottom-up2017,Kim2018BilinearAN,Yang2016StackedAN}. Given visual features $f_v$, question features $f_{q}$, and any joint modality mechanism (e.g., bilinear attention~\cite{Kim2018BilinearAN}, stacked attention~\cite{Yang2016StackedAN}, bottom-up~\cite{bottom-up2017}, etc.), we combine the visual features with the question features and learn attention weights to weight the visual and/or linguistic features. Different attention mechanisms have different ways of learning the joint semantic space. The details of each attention mechanism can be found in the corresponding studies~\cite{Yang2016StackedAN,Kim2018BilinearAN,bottom-up2017}. The output of the first-level multi-modal fusion is denoted as $f_{att}$ in Fig.~\ref{fig:framework}.
\textit{Second level multi-modal fusion:} In order to enhance the joint semantic space, the output of the first-level multi-modal fusion $f_{att}$ is combined with the question-type feature $f_{qt}$, which is the output of the last FC layer of the ``Question-type classification'' component. We try two simple but effective operators, i.e., \textit{element-wise multiplication --- EWM} and \textit{element-wise addition --- EWA}, to combine $f_{att}$ and $f_{qt}$. The output of the second-level multi-modal fusion, denoted as $f_{att-qt}$ in Fig.~\ref{fig:framework}, can be seen as an attention representation that is aware of the question-type information. Given an attention mechanism, $f_{att-qt}$ is used as the input to a classifier that predicts an answer for the corresponding question. This is shown in the ``Answer prediction'' boxes in Fig.~\ref{fig:framework}.
\textbf{Augmented VQA loss.} The introduced loss function takes the model-predicted question types and the question-type prior knowledge extracted from the data to constrain the answer search space when the model outputs predicted answers.
\textit{Prior computation.} In order to make the VQA classifier pay more attention to the answers corresponding to the question type of the input question, we use statistical information from the training data to identify the relation between question types and answers. Alg.~\ref{alg:mapping} presents the calculation of the prior information between the question types and the answers. To calculate the prior, we first count the frequency of each question type for each VQA candidate answer. This results in a matrix $m_{qt-ans}$ (lines 2 to 4). We then column-wise normalize the matrix $m_{qt-ans}$ by dividing the elements in a column by the sum of the column (lines 5 to 7).
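For concreteness, the counting-and-normalization procedure described above can be sketched in a few lines of NumPy; the function and variable names mirror the algorithm but are otherwise our own and do not come from the authors' code.

```python
import numpy as np

def compute_prior(qt_labels, ans_labels, num_types, num_answers):
    """Question-type/answer prior m_qt_ans: count co-occurrences of
    (question type, answer) over the training set, then column-normalize
    so that each answer column sums to 1.
    qt_labels, ans_labels: integer arrays of length Q (0-indexed here)."""
    m = np.zeros((num_types, num_answers))
    for qt, ans in zip(qt_labels, ans_labels):   # counting step (lines 2-4)
        m[qt, ans] += 1
    col_sums = m.sum(axis=0, keepdims=True)
    # column-wise normalization (lines 5-7); empty columns stay zero
    m = np.divide(m, col_sums, out=np.zeros_like(m), where=col_sums > 0)
    return m
```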
\begin{algorithm} \label{alg:mapping} \DontPrintSemicolon \SetAlgoLined \SetKwInOut{Input}{Input}\SetKwInOut{Output}{Output} \Input{$Q$: number of questions in training set.\\ $P$: number of question types.\\ $A$: number of candidate answers.\\ $qtLabels \in \{1,...,P\}^{Q \times 1}$: type labels of questions in training set. \\ $ansLabels \in \{1,...,A\}^{Q \times 1}$: answer labels of questions in training set.} \Output{$m_{qt-ans}$ $\in \R^{P \times A}$: relational prior of question types and answers.} $m_{qt-ans} = zeros(P,A)$ /* init $m_{qt-ans}$ with all zero values */\; \For {$q = 1 \rightarrow Q$}{ $m_{qt-ans} [qtLabels[q], ansLabels[q]]$ += 1 \; } \For {$a = 1 \rightarrow A$}{ $m_{qt-ans}[:,a]$ = $normalize (m_{qt-ans}[:,a])$ \\ } \caption{Question type - answer relational prior computation} \end{algorithm}
\textit{Augmented VQA loss function design $l_{vqa}$.} Let $y_i \in \R^{A \times 1}$, $g_i \in \R^{A \times 1}$, $h_i \in \R^{P \times 1}$ be the VQA groundtruth answer, the VQA answer prediction, and the question-type prediction of the $i^{th}$ input question-image pair, respectively. Given the question, our goal is to increase the chances of the answers corresponding to the question type of the question. To this end, we first define the weighting (question-type) awareness matrix $m_{awn}$ by combining the predicted question type $h_i$ and the prior information $m_{qt-ans}$ as follows:
\begin{equation} m_{awn} = {h_i}^T m_{qt-ans} \label{eq:m_awn} \end{equation}
This weighting matrix is used to weight the VQA groundtruth $y_i$ and the VQA answer prediction $g_i$ as follows:
\begin{equation} \hat{y}_i= m_{awn}^{T} \odot y_i \end{equation}
\begin{equation} \hat{g}_i= m_{awn}^{T} \odot g_i \end{equation}
where $\odot$ is the element-wise product. As a result, this weighting increases the chances of the answers corresponding to the question type of the question. Finally, the VQA loss $l_{vqa}$ is computed as follows:
\begin{equation} \begin{aligned} \label{eq:vqaloss} &l_{vqa} = - \frac{1}{QA}\sum_{i=1}^{Q}\sum_{j=1}^{A} \hat{y}_{ij} \log (\sigma(\hat{g}_{ij}))+ (1-\hat{y}_{ij})\log(1-\sigma(\hat{g}_{ij}))\\ \end{aligned} \end{equation}
where $Q$ and $A$ are the number of training questions and candidate answers; $\sigma$ is the element-wise sigmoid function. Eq.~(\ref{eq:vqaloss}) is a \textit{soft} cross-entropy loss and has been shown to be more effective than softmax in the VQA problem~\cite{tip-trick}.
It is worth noting that when computing the weighting matrix $m_{awn}$ in Eq.~(\ref{eq:m_awn}), instead of using the predicted question type $h_i$, we could also use the groundtruth question type. However, we found that there are some inconsistencies between the groundtruth question types and the groundtruth answers. For example, in the VQA 2.0 dataset, most questions starting with ``how many'' are classified with the question type ``number'', and the answers to these questions are numbers. However, there are also some exceptions. For example, the question \textit{``How many stripes are there on the zebra?''} is annotated with the groundtruth question type ``number'' but its annotated groundtruth answer is ``many'', which is not a number. By using the groundtruth question type to augment the loss, the predicted answer to that question would likely be a number, which is incorrect compared to the groundtruth answer. In order to make the model robust to these exceptions, we use the predicted question type to augment the VQA loss.
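As an illustration of the augmented loss just defined, a minimal batched PyTorch sketch follows. It assumes the prior matrix $m_{qt-ans}$ has been precomputed as above; all names are ours rather than the authors' code.

```python
import torch
import torch.nn.functional as F

def augmented_vqa_loss(g, y, h, m_qt_ans):
    """Sketch of the augmented VQA loss described above (batched version).
    g: (B, A) answer logits, y: (B, A) soft groundtruth answer scores in [0, 1],
    h: (B, P) predicted question-type distribution, m_qt_ans: (P, A) prior."""
    m_awn = h @ m_qt_ans     # question-type awareness weights, one row per question
    y_hat = m_awn * y        # weighted groundtruth
    g_hat = m_awn * g        # weighted prediction
    # soft binary cross-entropy over all answers, averaged over the batch
    return F.binary_cross_entropy_with_logits(g_hat, y_hat)
```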
Using the predicted question type can be seen as a self-adaptation mechanism that allows the system to adapt to exceptions. In particular, for the above example, the predicted question type may not necessarily be ``number''; it can be ``other''.
\subsection{Multi-hypothesis interaction learning} \label{subsec:interaction} As presented in Fig.~\ref{fig:framework}, MILQT can utilize multiple hypotheses (i.e., joint modality mechanisms). Specifically, we propose a multi-hypothesis interaction learning design $\mathcal{M}$ that takes the answer predictions produced by different joint modality mechanisms and interactively learns to combine them. Let $g \in \R^{A \times J}$ be the matrix of predicted probability distributions over $A$ answers from the $J$ joint modality mechanisms. $\mathcal{M}$ outputs the distribution $\rho \in \R^{A}$, which is calculated from $g$ through Eq.~(\ref{eq:multi-hypothesis}).
\begin{equation} \begin{aligned} &\rho = \mathcal{M} \left(g,w_{mil}\right) = \sum_{j}\left(m^T_{qt-ans}w_{mil} \odot g\right) \end{aligned} \label{eq:multi-hypothesis} \end{equation}
$w_{mil} \in \mathds{R}^{P \times J}$ is the learnable weight which controls the contributions of the $J$ considered joint modality mechanisms to the answer prediction under the guidance of the $P$ question types; $\odot$ denotes the Hadamard product.
\subsection{Multi-task loss} \label{subsec:overall-loss} In order to train the proposed MILQT, we define a multi-task loss to jointly optimize the question-type classification, the answer prediction of each individual attention mechanism, and the VQA loss in Eq.~(\ref{eq:vqaloss}). Formally, our multi-task loss is defined as follows:
\begin{equation} l = \alpha_1\sum_{j=1}^{J} l_{H_j} +\alpha_2 l_{vqa} + \alpha_3 l_{qt} \label{eq:final_loss} \end{equation}
where $\alpha_1, \alpha_2, \alpha_3$ are parameters controlling the importance of each loss; $l_{qt}$ is the question-type classification loss; $l_{H_j}$ is the answer prediction loss of the $j^{th}$ of the $J$ joint modality mechanisms; $l_{vqa}$ is the introduced VQA loss augmented by the predicted question type and the prior information, as defined in Eq.~(\ref{eq:vqaloss}).
\section{Experiments} \subsection{Dataset and implementation detail} \textbf{Dataset.} We conduct the experiments on two benchmark VQA datasets, VQA 2.0~\cite{vqav22016} and TDIUC~\cite{Kushal2018Tdiuc}. The VQA 2.0 dataset is the most popular and widely used dataset in the VQA problem. In the VQA 2.0 dataset, questions are divided into three question types, i.e., ``Yes-No'', ``Number'' and ``Other'', while the TDIUC dataset has 12 different question types. As is standard in the literature, we use the standard VQA accuracy metric \cite{VQA} when evaluating on the VQA 2.0 dataset, and Arithmetic MPT as well as Harmonic MPT proposed in \cite{Kushal2018Tdiuc} when evaluating on TDIUC\footnote{In \cite{Kushal2018Tdiuc}, the authors show that using Arithmetic MPT and Harmonic MPT is more suitable than the standard VQA accuracy metric \cite{VQA} when evaluating on TDIUC.}.
\textbf{Implementation detail.} \label{subsec:implement} Our proposed MILQT is implemented using PyTorch \cite{paszke2017automaticPyTorch}. The experiments are conducted on a single NVIDIA Titan V with 12GB RAM.
\begin{figure*}[!t] \centering \includegraphics[width=\textwidth*9/10, keepaspectratio=true]{exp_examples.png} \caption{Example results of SAN \cite{Yang2016StackedAN}, BAN \cite{Kim2018BilinearAN}, and our method on the validation set of VQA 2.0.
In all cases, the proposed method produces better attention maps. It also produces more accurate answers than the compared methods (second row).} \label{fig:exp_figure} \end{figure*}
In all experiments, the learning rate is set to $10^{-3}$ (or $7\times 10^{-4}$ if using Visual Genome \cite{visualgenome} as augmenting data) and the batch size is set to $256$. The number of detected bounding boxes is set to $50$ when extracting visual features. The GRU \cite{2014ChoGRU} for question embedding has one layer with a $1024$-D hidden state and processes words in forward order. During training, all components except the image representation $f_v$ are trained end-to-end with the multi-task loss in Eq.~(\ref{eq:final_loss}). The AdaMax optimizer \cite{Kingma2014AdamAM} is used to train our model.
\begin{table}[!t] \begin{center} \small \begin{tabular}{l c} \hline \begin{tabular}[l]{@{}l@{}}\textbf{Models}\end{tabular} &\textbf{VQA score}\\ \hline \multicolumn{2}{c}{\textbf{Contribution of question type awareness}} \\ BAN-2-Counter \cite{Kim2018BilinearAN} &65.25 \\ \quad + add &65.68\\ \quad\quad + prior &66.04\\ \quad + mul &65.80\\ \quad\quad + prior &66.13\\ \hline \multicolumn{2}{c}{\textbf{Contribution of hypothesis interaction learning}} \\ BAN-2-Counter \cite{Kim2018BilinearAN} &65.25 \\ \quad + BAN-2 \cite{Kim2018BilinearAN} &66.15\\ \quad + SAN \cite{Yang2016StackedAN} &65.64\\ \hline \multicolumn{2}{c}{\textbf{Whole model testing}} \\ BAN-2-Counter \cite{Kim2018BilinearAN} &65.25 \\ \quad + BAN-2 \cite{Kim2018BilinearAN} + Mul + prior &66.31\\ \quad + SAN \cite{Yang2016StackedAN} + Mul + prior &66.48\\ \hline \end{tabular} \end{center} \caption{Contributions of the proposed components and the whole model on the VQA 2.0 validation set.} \label{tab:valeval} \end{table}
\begin{table}[!t] \small \begin{center} \begin{tabular}{|c|c|c|c|c|} \hline {Models} & BAN-2 & \begin{tabular}[c]{@{}c@{}}BAN-2-\\ Counter \ \end{tabular} & \begin{tabular}[c]{@{}c@{}}Averaging\\ Ens.\end{tabular} & \begin{tabular}[c]{@{}c@{}}Interaction\\ Learning\end{tabular} \\ \hline {Accuracy} & 65.36 & 65.25 & 65.61 & {66.15} \\ \hline \end{tabular} \end{center} \caption{Performance on the VQA 2.0 validation set where BAN-2 \cite{Kim2018BilinearAN} and BAN-2-Counter \cite{Kim2018BilinearAN} are ensembled using averaging ensembling and the proposed interaction learning. } \label{tab:ens} \end{table}
\subsection{Ablation study} To evaluate the contributions of the question-type awareness module $\mathcal{A}$ and the multi-hypothesis interaction learning module $\mathcal{M}$ in our method, we conduct ablation studies by training on the train set and testing on the validation set of VQA 2.0 \cite{vqav22016}. Starting with BAN with 2 glimpses and the counter sub-module (BAN-2-Counter) \cite{Kim2018BilinearAN} as the baseline, we show the effectiveness of the proposed modules when they are integrated into the baseline. The counter sub-module \cite{Zhang2018LearningToCount} is used in the baseline to demonstrate the extensibility of the proposed model in supporting ``Number'' questions. However, other sub-modules can also be applied, e.g., the relational reasoning sub-module \cite{2017SantoroRelationalNet} to support ``Yes/No'' and ``Other'' questions. It is worth noting that, in order to make a fair comparison, we use the same visual features and question embedding features for both the BAN-2-Counter baseline and our model.
\begin{table}[!t] \begin{center} \small \begin{tabular}{|c|c|c|c|} \hline \multirow{2}{*}{\textbf{\begin{tabular}[c]{@{}c@{}}Question\\ types\end{tabular}}} & \multicolumn{3}{c|}{\textbf{Correlation scores}} \\ \cline{2-4} & \textbf{BAN-Counter} & \textbf{BAN} & \textbf{SAN} \\ \hline \textit{Yes/No} & 0.40 & 0.55 & 0.05 \\ \hline \textit{Numbers} & 0.55 & 0.23 & 0.22 \\ \hline \textit{Others} & 0.35 & 0.38 & 0.27 \\ \hline \end{tabular} \end{center} \caption{The correlation scores extracted from $w_{mil}$ of MILQT. The extracted information got from model trained in VQA 2.0 train set.} \label{tab:corr} \end{table} \begin{table*}[!t] \centering \small \begin{center} \begin{tabular}{l| c| c c c|c |c c c} \hline \multirow{2}{*}{\textbf{Models}} &\multicolumn{4}{c|}{\textbf{VQA - test-dev}} &\multicolumn{4}{c}{\textbf{VQA - test-std}} \\ \cline{2-9} &\textbf{Overall} &\textbf{Yes/No} &\textbf{Nums} &\textbf{Other} &\textbf{Overall} &\textbf{Yes/No} &\textbf{Nums} &\textbf{Other}\\ \hline SAN \cite{Yang2016StackedAN} &64.80 &79.63 &43.21 &57.09 &65.21 &80.06 &43.57 &57.24 \\ Up-Down \cite{bottom-up2017} &65.32 &81.82 &44.21 &56.05 &65.67 &82.20 &43.90 &56.26 \\ \begin{tabular}[c]{@{}c@{}}CMP \cite{tan2019lxmert}\ \end{tabular} &68.7 &84.91 &50.15 &59.11 &69.23 &85.48 &49.53 &59.6\\ Pythia \cite{Jiang2018PythiaVT} &70.01 &86.12 &48.97 &61.06 &70.24 &86.37 &48.46 &61.18 \\ BAN \cite{Kim2018BilinearAN} &70.04 &85.42 &54.04 &60.52 &70.35 &85.82 &53.71 &60.69 \\ \begin{tabular}[c]{@{}c@{}}LXMERT\cite{tan2019lxmert} \\ \end{tabular} &\textbf{72.4} &88.3 &54.2 &62.9 &\textbf{72.5} &88.0 &56.7 &65.2\\ \hline \textbf{MILQT} &70.62 &86.47 &54.24 &60.79 &70.93 &86.80 &53.79 &61.03\\ \hline \end{tabular} \end{center} \caption[Test-dev and test-standard results on VQA 2.0 dataset with single-models of different methods] {Comparison to the state of the arts on the test-dev and test-standard of VQA 2.0. For fair comparison, in all setup except LXMERT which uses BERT \cite{Devlin2019BERTPO} as question embedding, Glove embedding and GRU are leveraged for question embedding and Bottom-up features are used to extract visual information. CMP, i.e.Cross-Modality with Pooling, is the LXMERT with the aforementioned setup. } \label{tab:VQA} \end{table*} \textbf{The effectiveness of question-type awareness and prior information proposed in Section~\ref{subsec:qt-awa}.} The first section in Table \ref{tab:valeval} shows that by having second level multi-modal fusion (Section~\ref{subsec:qt-awa}) which uses element-wise multiplication (\textit{+mul}) to combine the question-type feature $f_{qt}$ and the attention feature $f_{att}$, the overall performance increases from $65.25\%$ (baseline) to $65.80\%$. By further using the predicted question type and the prior information (\textit{+prior}) to augment the VQA loss, the performance increases to $66.13\%$ which is $+0.88\%$ improvement over the baseline. The results in the first section in Table \ref{tab:valeval} confirm that combining question-type features with attention features helps to learn a better joint semantic space, which leads to the performance boost over the baseline. These results also confirm that using the predicted question type and the prior provides a further boost in the performance. We also find out that using EWM provides better accuracy than EWA at the second level fusion. 
\textbf{The effectiveness of multi-hypothesis interaction learning proposed in Section~\ref{subsec:interaction}.} The second section in Table \ref{tab:valeval} shows the effectiveness of leveraging different joint modality mechanisms using multi-hypothesis interaction learning. By using BAN-2-Counter \cite{Kim2018BilinearAN} and BAN-2 \cite{Kim2018BilinearAN} (BAN-2-Counter + BAN-2), the overall performance is $66.15\%$, which is a $+0.9\%$ improvement over the BAN-2-Counter baseline.
Table \ref{tab:corr} illustrates the correlation between different joint modality mechanisms and question types. This information is extracted from $w_{mil}$, which identifies the contribution of each mechanism to the final VQA results under the guidance of the question-type information. The results in Table \ref{tab:VQA} indicate that some joint modality methods achieve better performance on some specific question types, e.g., the joint modality method BAN outperforms other methods on the Number question type by a large margin. The correlations in Table \ref{tab:corr} and the performance in Table \ref{tab:VQA} also indicate that the MILQT model tends to weight the contribution of the joint methods in proportion to their performance on each specific question type. Besides, the results in Table \ref{tab:ens} indicate that, under the guidance of the question type, the $\mathcal{M}$ module produces better performance than not using it or than the weighted-sum method \cite{li2019regat}, in which the predictions of different joint modality mechanisms are summed up and the answer with the highest score is taken as the final answer.
\textbf{The effectiveness of the entire proposed model.} The third section in Table \ref{tab:valeval} presents results when all components (except the visual feature extractor) are combined in a unified model and trained end-to-end. To verify the effectiveness of the proposed framework, we evaluate two configurations. In the first configuration, we use the two joint modality mechanisms BAN-2-Counter and BAN-2, EWM in the second-level multi-modal fusion, and the predicted question type together with the prior information to augment the loss. The second configuration is similar to the first, except that we use BAN-2-Counter and SAN in the interaction learning. The third section of Table \ref{tab:valeval} shows that both configurations give a performance boost over the baseline. The second configuration achieves better performance, i.e., $66.48\%$ accuracy, which outperforms the baseline BAN-2-Counter by $+1.23\%$. Table \ref{tab:valeval} also shows that using ``question-type awareness'' gives a further boost over using interaction learning only, i.e., the performance of ``BAN-2-Counter + SAN + Mul + prior'' (66.48) outperforms that of ``BAN-2-Counter + SAN'' (65.64). Fig.~\ref{fig:exp_figure} presents some visualization results of our second configuration and other methods on the VQA 2.0 validation set.
\textbf{Question-type classification analysis.} The proposed MILQT allows joint training of question-type classification and VQA answer classification. Multi-task learning helps to improve performance on both tasks. To further analyze the effectiveness of MILQT in question-type classification, we report the question-type classification results on the TDIUC dataset.
We follow QTA~\cite{MTL_QTA} to calculate the accuracy, i.e., the overall accuracy is the number of correct predictions over the number of testing questions, across all categories. The results are presented in Table \ref{tab:state-of-the-art-qt}. Our MILQT uses BAN-2 \cite{Kim2018BilinearAN}, BAN-2-Counter~\cite{Kim2018BilinearAN}, and SAN~\cite{Yang2016StackedAN} in the interaction learning, element-wise multiplication in the second level of multi-modal fusion, and the predicted question type with prior information to augment the VQA loss. Compare to the state-of-the-art QTA~\cite{MTL_QTA}, our MILQT outperforms QTA for most of question types. In overall, we achieve state-of-the-art performance on question-type classification task on TDIUC dataset with $96.45\%$ accuracy. It is worth noting that for the ``Utility and Affordances" category, the question type classification accuracy is $0\%$ for both QTA and MILQT. It is because the imbalanced data problem in TDIUC dataset. The ``Utility and Affordances" category has only $\approx 0.03\%$ samples in the dataset. Hence this category is strongly dominated by other categories when learning the question type classifier. Note that, there are cases in which questions belonging to the ``Utility and Affordances" category have similar answers with questions belonging to other categories. Thus, the data becomes less bias w.r.t. answers (in comparing to question categories). This explains why although both MILQT and QTA have $0\%$ accuracy for the ``Utility and Affordances" on the question category classification, both of them achieve some accuracy on the VQA classification (see Table \ref{tab:state-of-the-art-qt}). \begin{table}[!t] \begin{center} \small \begin{tabular}{|l|c |c|} \hline \multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}\textbf{Question-type accuracy}\end{tabular}} & \multicolumn{2}{c|}{\textbf{Reference Models}} \\ \cline{2-3} &\textbf{QTA \cite{MTL_QTA}}& \begin{tabular}[l]{@{}l@{}}\textbf{MILQT}\end{tabular}\\ \hline \begin{tabular}[c]{@{}c@{}}\textbf{Scene Recognition} \end{tabular} &99.40 &\textbf{99.84} \\ \begin{tabular}[c]{@{}c@{}}\textbf{Sport Recognition}\end{tabular} &73.08 &\textbf{85.81} \\ \begin{tabular}[c]{@{}c@{}}\textbf{Color Attributes} \end{tabular} &86.10 &\textbf{89.60} \\ \begin{tabular}[c]{@{}c@{}}\textbf{Other Attributes} \end{tabular} &77.76 &\textbf{85.03} \\ \begin{tabular}[c]{@{}c@{}}\textbf{Activity Recognition}\end{tabular} &13.18 &\textbf{16.43} \\ \begin{tabular}[c]{@{}c@{}}\textbf{Positional Recognition}\end{tabular} &89.52 &\textbf{89.55} \\ \begin{tabular}[c]{@{}c@{}}\textbf{Sub-Object Recognition}\end{tabular} &98.96 &\textbf{99.42} \\ \begin{tabular}[c]{@{}c@{}}\textbf{Absurd}\end{tabular} &\textbf{95.46} &95.12 \\ \begin{tabular}[c]{@{}c@{}}\textbf{Utility and Affordances}\end{tabular} &00.00 &00.00 \\ \begin{tabular}[c]{@{}c@{}}\textbf{Object Presence}\end{tabular} &\textbf{100.00} &\textbf{100.00} \\ \begin{tabular}[c]{@{}c@{}}\textbf{Counting}\end{tabular} &99.90 &\textbf{99.99}\\ \begin{tabular}[c]{@{}c@{}}\textbf{Sentiment Understanding}\end{tabular} &60.51 &\textbf{67.82} \\ \hline \begin{tabular}[c]{@{}c@{}}\textbf{Overall}\end{tabular} &95.66 &\textbf{96.45} \\ \hline \end{tabular} \end{center} \caption{The comparative question-type classification results between MILQT and state-of-the-art QTA \cite{MTL_QTA} on the TDIUC validation set.} \label{tab:state-of-the-art-qt} \end{table} \begin{table*}[!t] \centering \small \begin{center} \begin{tabular}{|l|c c c|c|} \hline 
\multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}\textbf{Score}\end{tabular}} & \multicolumn{4}{c|}{\textbf{Reference Models}} \\ \cline{2-5} &\textbf{QTA-M \cite{MTL_QTA}}& \textbf{MCB-A \cite{Kushal2018Tdiuc}}& \textbf{RAU \cite{Kushal2018Tdiuc}}& \begin{tabular}[l]{@{}l@{}}\textbf{MILQT}\end{tabular}\\ \hline \begin{tabular}[c]{@{}c@{}}\textbf{Scene Recognition} \end{tabular} &93.74 &93.06 &93.96 &\textbf{94.74} \\ \begin{tabular}[c]{@{}c@{}}\textbf{Sport Recognition}\end{tabular} &94.80 &92.77 &93.47 &\textbf{96.47} \\ \begin{tabular}[c]{@{}c@{}}\textbf{Color Attributes} \end{tabular} &57.62 &68.54 &66.86 &\textbf{75.23} \\ \begin{tabular}[c]{@{}c@{}}\textbf{Other Attributes} \end{tabular} &52.05 &56.72 &56.49 &\textbf{61.93} \\ \begin{tabular}[c]{@{}c@{}}\textbf{Activity Recognition}\end{tabular} &53.13 &52.35 &51.60 &\textbf{65.03} \\ \begin{tabular}[c]{@{}c@{}}\textbf{Positional Recognition}\end{tabular} &33.90 &35.40 &35.26 &\textbf{42.31} \\ \begin{tabular}[c]{@{}c@{}}\textbf{Sub-Object Recognition}\end{tabular} &86.89 &85.54 &86.11 &\textbf{89.63} \\ \begin{tabular}[c]{@{}c@{}}\textbf{Absurd}\end{tabular} &\textbf{98.57} &84.82 &96.08 &88.95 \\ \begin{tabular}[c]{@{}c@{}}\textbf{Utility and Affordances}\end{tabular} &24.07 &35.09 &31.58 &\textbf{38.60} \\ \begin{tabular}[c]{@{}c@{}}\textbf{Object Presence}\end{tabular} &94.57 &93.64 &94.38 &\textbf{96.21} \\ \begin{tabular}[c]{@{}c@{}}\textbf{Counting}\end{tabular} &53.59 &51.01 &48.43 &\textbf{62.41} \\ \begin{tabular}[c]{@{}c@{}}\textbf{Sentiment Understanding}\end{tabular} &60.06 &\textbf{66.25} &60.09 &64.98 \\ \hline \begin{tabular}[c]{@{}c@{}}\textbf{Arithmetic MPT}\end{tabular} &66.92 &67.90 &67.81 &\textbf{73.04} \\ \begin{tabular}[c]{@{}c@{}}\textbf{Harmonic MPT}\end{tabular} &55.77 &60.47 &59.00 &\textbf{66.86} \\ \hline \end{tabular} \end{center} \caption{The comparative results between the proposed model and other models on the validation set of TDIUC. } \label{tab:TDIUC} \end{table*} \subsection{Comparison to the state of the art} \textbf{Experiments on VQA 2.0 test-dev and test-standard.} We evaluate MILQT on the test-dev and test-standard of VQA 2.0 dataset \cite{vqav22016}. To train the model, similar to previous works~\cite{Yang2016StackedAN,tip-trick,Jiang2018PythiaVT,Kim2018BilinearAN}, we use both training set and validation set of VQA 2.0. We also use the Visual Genome~\cite{visualgenome} as additional training data. MILQT consists of three joint modality mechanisms, i.e., {BAN-2}, {BAN-2-Counter}, and {SAN} accompanied with the EWM for the multi-modal fusion, and the predicted question type together with the prior information to augment the VQA loss. Table~\ref{tab:VQA} presents the results of different methods on test-dev and test-std of VQA 2.0. The results show that our MILQT yields the good performance with the most competitive approaches. \textbf{Experiments on TDIUC.} In order to prove the stability of MILQT, we evaluate MILQT on TDIUC dataset \cite{Kushal2018Tdiuc}. The results in Table \ref{tab:TDIUC} show that the proposed model establishes the state-of-the-art results on both evaluation metrics Arithmetic MPT and Harmonic MPT \cite{Kushal2018Tdiuc}. Specifically, our model significantly outperforms the recent QTA~\cite{MTL_QTA}, i.e., on the overall, we improve over QTA $6.1\%$ and $11.1\%$ with Arithemic MPT and Harmonic MPT metrics, respectively. 
It is worth noting that the results of QTA~\cite{MTL_QTA} in Table \ref{tab:TDIUC}, which are cited from \cite{MTL_QTA}, are achieved when \cite{MTL_QTA} uses the one-hot \textit{predicted question type} of the testing question to weight the visual features. When using \textit{the groundtruth question type} to weight the visual features, \cite{MTL_QTA} reported $69.11\%$ and $60.08\%$ for the Arithmetic MPT and Harmonic MPT metrics, respectively. Our model also outperforms these results by a large margin, i.e., the improvements are $3.9\%$ and $6.8\%$ for the Arithmetic MPT and Harmonic MPT metrics, respectively.
We also note that for the question type ``Absurd'', we get lower performance than QTA \cite{MTL_QTA}. For this question type, the question is irrelevant to the image content. Consequently, this question type does not help to learn a meaningful joint embedding between the input question and image, which explains our lower performance on this question type.
\section{Conclusion} We present multiple interaction learning with question-type prior knowledge for constraining the answer search space --- MILQT --- which takes the question-type information into account to improve VQA performance at different stages. The system also utilizes and learns different attentions under a unified model in an interacting manner. The extensive experimental results show that all proposed components improve VQA performance. Our method achieves competitive performance with state-of-the-art approaches on VQA 2.0 and state-of-the-art results on TDIUC. \bibliographystyle{splncs04} \bibliography{egbib} \end{document}
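To complement the description of the interaction-learning module and the multi-task objective in the paper above, here is a minimal PyTorch-style sketch. Shapes follow the paper's notation (P question types, A answers, J mechanisms), but the class and variable names, the initialization of w_mil, and the default loss weights are our own assumptions, not the authors' released code.

```python
import torch
import torch.nn as nn

# Sketch of multi-hypothesis interaction learning: J hypotheses are combined
# with weights derived from the question-type/answer prior m_qt_ans (P x A)
# and a learnable P x J matrix w_mil. Names are illustrative.
class InteractionLearning(nn.Module):
    def __init__(self, m_qt_ans, num_hypotheses):
        super().__init__()
        self.register_buffer("m_qt_ans", m_qt_ans)  # (P, A) precomputed prior tensor
        num_types = m_qt_ans.shape[0]
        # uniform initialization of w_mil is an assumption, not from the paper
        self.w_mil = nn.Parameter(torch.ones(num_types, num_hypotheses) / num_hypotheses)

    def forward(self, g):
        # g: (A, J) stacked answer predictions from the J mechanisms
        weights = self.m_qt_ans.t() @ self.w_mil    # (A, J) per-answer, per-hypothesis weights
        return (weights * g).sum(dim=-1)            # rho: (A,) combined prediction

# Multi-task objective: per-hypothesis answer losses, the augmented VQA loss,
# and the question-type loss, weighted by alpha_1..alpha_3 (values are
# hyperparameters not specified here).
def multi_task_loss(l_hyp, l_vqa, l_qt, a1=1.0, a2=1.0, a3=1.0):
    return a1 * sum(l_hyp) + a2 * l_vqa + a3 * l_qt
```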
https://openreview.net/forum?id=A4ft_k1rJ1C
https://arxiv.org/abs/2009.11118
Evaluate the paper on "Multiple interaction learning with question-type prior knowledge for constraining answer search space in visual question answering" based on its strengths and weaknesses.
An ensemble of VQA methods with prior knowledge on question types [Summary] In 2-3 sentences, describe the key ideas, experiments, and their significance. The authors propose a VQA method that jointly optimizes question answering and answer type classification using an ensemble of existing attention-based VQA methods. Prior information about the answer types is integrated into a weighted loss. Modalities are fused by first merging visual and linguistic features, then merging in question type features. [Strengths] What are the strengths of the paper? Clearly explain why these aspects of the paper are valuable. The authors use prior information on answer types while allowing the model to account for outlier questions. The authors perform extensive experiments to show all aspects of their works. [Weaknesses] What are the weaknesses of the paper? Clearly explain why these aspects of the paper are weak. It is not clear how significant the individual performance increases of each contribution are. Crucial design choices for the VQA loss are not motivated (e.g. where to integrate awareness matrix). Presentation needs work (typos, grammar, typesetting). [Overall rating] Paper rating: Accept (tentative rating, subject to revision until deadline) [Detailed comments] Additional comments regarding the paper (e.g. typos or other possible improvements you would like to see for the camera-ready version of the paper, if any.) - Please review the paper for typos, incorrect grammar and typesetting. Specifically lines 40 (grammar), 114, 178, 260 "constraints", 264, 287 (capitalizing VQA), 389 "standardly", 436 end of line, 590 "modaality" - Table 4: your method is not the highest (LXMERT is), so please do not use bold numbers. See also the claim on line 594. - Algorithm 1 should be expressed in math, like the other equations. - Source for claim on line 153 Rating: 7: Good paper, accept Confidence: 3: The reviewer is fairly confident that the evaluation is correct
\documentclass[runningheads]{llncs} \usepackage{graphicx} \usepackage{comment} \usepackage{amsmath,amssymb} % \usepackage{color} \usepackage[width=122mm,left=12mm,paperwidth=146mm,height=193mm,top=12mm,paperheight=217mm]{geometry} \usepackage{epsfig} \usepackage{ mathrsfs } \usepackage{xcolor} \usepackage{tablefootnote} \usepackage{ stmaryrd } \usepackage[ruled,vlined,linesnumbered]{algorithm2e} \usepackage{tabularx} % \usepackage{multirow} \usepackage{array} \newcolumntype{L}[1]{>{\raggedright\let\newline\\\arraybackslash\hspace{0pt}}m{#1}} \newcolumntype{C}[1]{>{\centering\let\newline\\\arraybackslash\hspace{0pt}}m{#1}} \newcolumntype{R}[1]{>{\raggedleft\let\newline\\\arraybackslash\hspace{0pt}}m{#1}} \usepackage{supertabular} \usepackage{enumitem} \usepackage{ dsfont } \usepackage[toc,page]{appendix} \DeclareMathOperator*{\argmax}{arg\,max} \newcommand\red[1]{{\color{red}#1}} \newcommand\brown[1]{{\color{brown}#1}} \newcommand{\tuong}[1]{\brown{#1}} \usepackage{pifont}% \newcommand{\cmark}{\ding{51}}% \newcommand{\xmark}{\ding{55}}% \def\R{{\mathbb R}} \def\httilde{\mbox{\tt\raisebox{-.5ex}{\symbol{126}}}} \renewcommand{\baselinestretch}{0.98} \normalsize \usepackage{floatrow} \begin{document} \pagestyle{headings} \mainmatter \def\ECCVSubNumber{1} % \title{Multiple interaction learning with question-type prior knowledge for constraining answer search space in visual question answering} % \titlerunning{MILQT} \author{Tuong Do\inst{1} \and Binh X. Nguyen\inst{1} \and Huy Tran\inst{1} \and Erman Tjiputra\inst{1} \and Quang D. Tran\inst{1}\and Thanh-Toan Do\inst{2} } \authorrunning{Tuong Do et al.} \institute{AIOZ, Singapore \\ \email{\{tuong.khanh-long.do,binh.xuan.nguyen,huy.tran,\\erman.tjiputra,quang.tran\}@aioz.io} \and University of Liverpool \\ \email{thanh-toan.do@liverpool.ac.uk}} \maketitle \begin{abstract} Different approaches have been proposed to Visual Question Answering (VQA). However, few works are aware of the behaviors of varying joint modality methods over question type prior knowledge extracted from data in constraining answer search space, of which information gives a reliable cue to reason about answers for questions asked in input images. In this paper, we propose a novel VQA model that utilizes the question-type prior information to improve VQA by leveraging the multiple interactions between different joint modality methods based on their behaviors in answering questions from different types. The solid experiments on two benchmark datasets, i.e., VQA 2.0 and TDIUC, indicate that the proposed method yields the best performance with the most competitive approaches. \keywords{visual question answering, multiple interaction learning.} \end{abstract} \section{Introduction} The task of Visual Question Answering (VQA) is to provide a correct answer to a given question such that the answer is consistent with the visual content of a given image. The VQA research raises a rich set of challenges because it is an intersection of different research fields including computer vision, natural language processing, and reasoning. Thanks to its wide applications, the VQA has attracted great attention in recent years~\cite{VQA,Xu2016AskAA,Yang2016StackedAN,bottom-up2017,Kim2018BilinearAN,MTL_QTA}. This also leads to the presence of large scale datasets~\cite{VQA,vqav22016,Kushal2018Tdiuc} and evaluation protocols~\cite{VQA,Kushal2018Tdiuc}. 
There are works that consider types of question as the side information which gives a strong cue to reason about the answer \cite{2017AgrawalPriorVQA,MTL_QTA,kafle2016answer}. However, the relation between question types and answers from training data have not been investigated yet. Fig.~\ref{fig:distribution_graph} shows the correlation between question types and some answers in the VQA 2.0 dataset \cite{vqav22016}. It suggests that a question regarding the quantity should be answered by a number, not a color. The observation indicated that the prior information got from the correlations between question types and answers open an answer search space constrain for the VQA model. The search space constrain is useful for VQA model to give out final prediction and thus, improve the overall performance. The Fig.~\ref{fig:distribution_graph} is consistent with our observation, e.g., it clearly suggests that a question regarding the quantity should be answered by a number, not a color. \begin{figure} \centering \includegraphics[width = \columnwidth*8/9, keepaspectratio=True]{Distribution_graph.png} \caption{The distribution of candidate answers in each question type in VQA 2.0. } \label{fig:distribution_graph} \end{figure} \begin{figure*}[!t] \centering \includegraphics[width=\textwidth*9/10, keepaspectratio=true]{diff_attentions.png} \caption{Examples of attention maps of different attention mechanisms. BAN~\cite{Kim2018BilinearAN} and SAN~\cite{Yang2016StackedAN} identify different visual areas when answering questions from different types. \cmark\ and \xmark\ indicate correct and wrong answers, respectively.} \label{fig:diff_attentions} \end{figure*} In current state-of-the-art VQA systems, the joint modality component plays an important role since it would learn meaningful joint representations between linguistic and visual inputs~\cite{Xu2016AskAA,Yang2016StackedAN,bottom-up2017,Kim2018BilinearAN,dense-attention,tan2019lxmert}. Although different joint modality methods or attention mechanisms have been proposed, we hypothesize that each method may capture different aspects of the input. That means different attentions may provide different answers for questions belonged to different question types. Fig.~\ref{fig:diff_attentions} shows examples in which the attention models (BAN~\cite{Kim2018BilinearAN} and SAN~\cite{Yang2016StackedAN}) attend on different regions of input images when dealing with questions from different types. Unfortunately, most of recent VQA systems are based on single attention models~\cite{Xu2016AskAA,Yang2016StackedAN,bottom-up2017,Kim2018BilinearAN,MTL_QTA,Fukui2016MultimodalCB}. From the above observation, it is necessary to develop a VQA system which leverages the power of different attention models to deal with questions from different question types. In this paper, we propose a multiple interaction learning with question-type prior knowledge (MILQT) which extracts the question-type prior knowledge from questions to constrain the answer search space and leverage different behaviors of multiple attentions in dealing with questions from different types. Our contributions are summarized as follows. (i) We propose a novel VQA model that leverages the question-type information to augment the VQA loss. (ii) We identified that different attentions shows different performance in dealing with questions from different types and then leveraged this characteristic to rise performance through our designed model. 
(iii) The extensive experiments show that the proposed model yields the best performance with the most competitive approaches in the widely used VQA 2.0~\cite{vqav22016} and TDIUC~\cite{Kushal2018Tdiuc} datasets. \section{Related Work} \textbf{Visual Question Answering}. In recent years, VQA has attracted a large attention from both computer vision and natural language processing communities. The recent VQA researches mainly focus on the development of different attention models. In~\cite{Fukui2016MultimodalCB}, the authors proposed the Multimodal Compact Bilinear (MCB) pooling by projecting the visual and linguistic features to a higher dimensional space and then convolving both vectors efficiently by using element-wise product in Fast Fourier Transform space. In \cite{Yang2016StackedAN}, the authors proposed Stacked Attention Networks (SAN) which locate, via multi-step reasoning, image regions that are relevant to the question for answer prediction. In~\cite{bottom-up2017,tip-trick}, the authors employed the top-down attention that learns an attention weight for each image region by applying non-linear transformations on the combination of image features and linguistic features. In~\cite{dense-attention}, the authors proposed a dense, symmetric attention model that allows each question word attends on image regions and each image region attends on question words. In~\cite{Kim2018BilinearAN} the authors proposed Bilinear Attention Networks (BAN) that find bilinear attention distributions to utilize given visual-linguistics information seamlessly. Recently, in \cite{tan2019lxmert} the authors introduced Cross Modality Encoder Representations (LXMERT) to learn the alignment/ relationships between visual concepts and language semantics. Regarding the question type, previous works have considered question-type information to improve VQA results. Agrawal et al. \cite{2017AgrawalPriorVQA} trained a separated question-type classifier to classify input questions into two categories, i.e., Yes-No and non Yes-No. Each category will be subsequently processed in different ways. In the other words, the question type information is only used for selecting suitable sub-sequence processing. Shi et al. \cite{MTL_QTA} also trained a question-type classifier to predict the question type. The predicted one-hot question type is only used to weight the importance of different visual features. Kafle et al. \cite{kafle2016answer} also used question type to improve the performance of VQA prediction. Similar to \cite{2017AgrawalPriorVQA}, the authors separately trained a classifier to predict the type of the input question. The predicted question type is then used to improve VQA prediction through a Bayesian inference model. In our work, different from~\cite{2017AgrawalPriorVQA}, \cite{MTL_QTA} and \cite{kafle2016answer}, question types work as the prior knowledge, which constrain answer search space through loss function. Additionally, we can further identify the performance of different joint modality methods over questions from different types. Besides, through the multiple interaction learning, the behaviors of the joint modality methods are utilized on giving out the final answer which further improve VQA performance. \section{Methodology} \begin{figure*} \centering \includegraphics[width=\textwidth*8/10, keepaspectratio=true]{vqa-net-diagram.png} \caption{The proposed MILQT for VQA. 
}
\label{fig:framework}
\end{figure*}
The proposed multiple interaction learning with question-type prior knowledge (MILQT) is illustrated in Fig.~\ref{fig:framework}. Similar to most VQA systems \cite{Kim2018BilinearAN,Yang2016StackedAN,bottom-up2017}, MILQT consists of joint learning over the input questions and images, followed by a multi-class classification over a set of predefined candidate answers. However, MILQT leverages multiple joint modality methods under the guidance of question types to output better answers. As shown in Fig.~\ref{fig:framework}, MILQT consists of two modules: Question-type awareness $\mathcal{A}$ and Multi-hypothesis interaction learning $\mathcal{M}$. The first module aims to learn the question-type representation, which is further used to enhance the joint visual-question embedding features and to constrain the answer search space through prior knowledge extracted from the data. Based on the question-type information, the second module aims to identify the behaviors of the multiple joint learning methods and then adjust their contributions to the final predictions. In the following, we describe the representation of input questions and images in Section~\ref{subsec:rep}. Section~\ref{subsec:qt-awa} presents the Question-type awareness module $\mathcal{A}$. Section~\ref{subsec:interaction} presents the Multi-hypothesis interaction learning module $\mathcal{M}$. Section~\ref{subsec:overall-loss} presents the multi-task loss used to train the entire model.
\subsection{Input Representation}
\label{subsec:rep}
\textbf{Question representation.} Given an input question, following the recent state of the art~\cite{bottom-up2017,Kim2018BilinearAN}, we trim the question to a maximum of 12 words. Questions shorter than 12 words are zero-padded. Each word is then represented by a 600-D vector that is a concatenation of the 300-D GloVe word embedding \cite{pennington2014glove} and an augmenting embedding learned from the training data, as in~\cite{Kim2018BilinearAN}. This step results in a sequence of word embeddings of size $12 \times 600$, denoted as $f_w$ in Fig.~\ref{fig:framework}. In order to obtain the intent of the question, $f_w$ is passed through a Gated Recurrent Unit (GRU)~\cite{2014ChoGRU}, which results in a 1024-D vector representation $f_q$ of the input question.
\textbf{Image representation.} Several object detectors have been proposed in the literature, whose outputs vary in size and location. Inspired by recent advances in VQA~\cite{bottom-up2017,MTL_QTA,tip-trick}, we use bottom-up attention, i.e., an object detector with a Faster R-CNN \cite{Ren2015FasterRCNN} backbone, to extract the image representation. The input image is passed through the bottom-up network to obtain a $K \times 2048$ bounding-box representation, which is denoted as $f_v$ in Fig.~\ref{fig:framework}.
\subsection{Question-type Awareness}
\label{subsec:qt-awa}
\textbf{Question-type classification.} This component in module $\mathcal{A}$ aims to learn the question-type representation. Specifically, it takes the question embedding $f_q$ as input, passes it through several fully-connected (FC) layers, and ends with a softmax layer that produces a probability distribution $h$ over $P$ question types, where $P$ depends on the dataset, i.e., $P$ equals $3$ for VQA 2.0~\cite{vqav22016} and $12$ for TDIUC~\cite{Kushal2018Tdiuc}.
The question-type embedding $f_{qt}$ extracted from the question-type classification component is combined with the attention features to enhance the joint semantic representation of the input image and question, while the predicted question type is used to augment the VQA loss.
\textbf{Multi-level multi-modal fusion.} Previous works perform only one level of fusion between linguistic and visual features, which may limit the capacity of these models to learn a good joint semantic space. In our work, we introduce a multi-level multi-modal fusion that encourages the model to learn a better joint semantic space and takes the question-type representation obtained from the question-type classification component as one of its inputs.
\textit{First level multi-modal fusion:} The first-level fusion is similar to previous works~\cite{bottom-up2017,Kim2018BilinearAN,Yang2016StackedAN}. Given visual features $f_v$, question features $f_{q}$, and any joint modality mechanism (e.g., bilinear attention~\cite{Kim2018BilinearAN}, stacked attention~\cite{Yang2016StackedAN}, bottom-up attention~\cite{bottom-up2017}, etc.), we combine the visual features with the question features and learn attention weights for the visual and/or linguistic features. Different attention mechanisms learn the joint semantic space in different ways; the details of each mechanism can be found in the corresponding studies~\cite{Yang2016StackedAN,Kim2018BilinearAN,bottom-up2017}. The output of the first-level multi-modal fusion is denoted as $f_{att}$ in Fig.~\ref{fig:framework}.
\textit{Second level multi-modal fusion:} In order to enhance the joint semantic space, the output of the first-level multi-modal fusion $f_{att}$ is combined with the question-type feature $f_{qt}$, which is the output of the last FC layer of the ``Question-type classification'' component. We try two simple but effective operators, i.e., \textit{element-wise multiplication --- EWM} or \textit{element-wise addition --- EWA}, to combine $f_{att}$ and $f_{qt}$. The output of the second-level multi-modal fusion, denoted as $f_{att-qt}$ in Fig.~\ref{fig:framework}, can be seen as an attention representation that is aware of the question-type information. Given an attention mechanism, $f_{att-qt}$ is used as the input of a classifier that predicts an answer for the corresponding question, as shown in the ``Answer prediction'' boxes in Fig.~\ref{fig:framework}.
\textbf{Augmented VQA loss.} The introduced loss function combines the model-predicted question types with the question type--answer prior extracted from the data to constrain the answer search space when the model outputs its predicted answers.
\textit{Prior computation.} In order to make the VQA classifier pay more attention to the answers corresponding to the question type of the input question, we use statistics of the training data to identify the relation between question types and answers. Alg.~\ref{alg:mapping} presents the calculation of this prior. To compute it, we first count the frequency of each question type for each VQA candidate answer, which results in a matrix $m_{qt-ans}$ (lines 2 to 4). We then column-wise normalize $m_{qt-ans}$ by dividing the elements of each column by the sum of that column (lines 5 to 7).
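For concreteness, the following is a minimal, vectorized NumPy sketch of the prior computation in Alg.~\ref{alg:mapping}. It assumes zero-based question-type and answer labels, and the guard for candidate answers that never appear in the training set is our own addition.
\begin{verbatim}
import numpy as np

def compute_qt_ans_prior(qt_labels, ans_labels, P, A):
    """Question type--answer relational prior (vectorized Alg. 1).

    qt_labels:  (Q,) int array, question-type label per training question.
    ans_labels: (Q,) int array, answer label per training question.
    Returns m_qt_ans of shape (P, A); each column is normalized so that it
    forms a distribution over question types for that candidate answer.
    """
    m = np.zeros((P, A), dtype=np.float64)
    np.add.at(m, (qt_labels, ans_labels), 1.0)   # frequency counts (lines 2-4)
    col_sum = m.sum(axis=0, keepdims=True)
    col_sum[col_sum == 0] = 1.0                  # guard: answers unseen in training
    return m / col_sum                           # column-wise normalization (lines 5-7)
\end{verbatim}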
\begin{algorithm}
\label{alg:mapping}
\DontPrintSemicolon
\SetAlgoLined
\SetKwInOut{Input}{Input}\SetKwInOut{Output}{Output}
\Input{$Q$: number of questions in training set.\\
$P$: number of question types.\\
$A$: number of candidate answers.\\
$qtLabels \in \{1,...,P\}^{Q \times 1}$: type labels of questions in training set. \\
$ansLabels \in \{1,...,A\}^{Q \times 1}$: answer labels of questions in training set.}
\Output{$m_{qt-ans}$ $\in \R^{P \times A}$: relational prior of question types and answers.}
$m_{qt-ans} = zeros(P,A)$ /* init $m_{qt-ans}$ with all zero values */\;
\For {$q = 1 \rightarrow Q$}{
$m_{qt-ans} [qtLabels[q], ansLabels[q]]$ += 1 \;
}
\For {$a = 1 \rightarrow A$}{
$m_{qt-ans}[:,a]$ = $normalize (m_{qt-ans}[:,a])$ \\
}
\caption{Question type--answer relational prior computation}
\end{algorithm}
\textit{Augmented VQA loss function design $l_{vqa}$.} Let $y_i \in \R^{A \times 1}$, $g_i \in \R^{A \times 1}$, $h_i \in \R^{P \times 1}$ be the VQA groundtruth answer, the VQA answer prediction, and the question-type prediction of the $i^{th}$ input question--image pair, respectively. Given the question, our goal is to increase the chance of the answers that correspond to the question type of that question. To this end, we first define the weighting (question-type) awareness matrix $m_{awn}$ by combining the predicted question type $h_i$ and the prior information $m_{qt-ans}$ as follows:
\begin{equation}
m_{awn} = {h_i}^T m_{qt-ans}
\label{eq:m_awn}
\end{equation}
This weighting matrix is used to weight the VQA groundtruth $y_i$ and the VQA answer prediction $g_i$ as follows:
\begin{equation}
\hat{y}_i= m_{awn}^{T} \odot y_i
\end{equation}
\begin{equation}
\hat{g}_i= m_{awn}^{T} \odot g_i
\end{equation}
where $\odot$ is the element-wise product. As a result, this weighting increases the chance of the answers corresponding to the question type of the question. Finally, the VQA loss $l_{vqa}$ is computed as follows:
\begin{equation}
\begin{aligned}
\label{eq:vqaloss}
&l_{vqa} = - \frac{1}{QA}\sum_{i=1}^{Q}\sum_{j=1}^{A} \left[ \hat{y}_{ij} \log (\sigma(\hat{g}_{ij}))+ (1-\hat{y}_{ij})\log(1-\sigma(\hat{g}_{ij})) \right]\\
\end{aligned}
\end{equation}
where $Q$ and $A$ are the numbers of training questions and candidate answers, and $\sigma$ is the element-wise sigmoid function. Eq.~(\ref{eq:vqaloss}) is a \textit{soft} cross-entropy loss and has been shown to be more effective than the softmax loss for the VQA problem~\cite{tip-trick}. It is worth noting that when computing the weighting matrix $m_{awn}$ in (\ref{eq:m_awn}), instead of using the predicted question type $h_i$, we could also use the groundtruth question type. However, we found some inconsistencies between the groundtruth question types and the groundtruth answers. For example, in the VQA 2.0 dataset, most questions starting with ``how many'' are assigned the question type ``number'', and the answers to these questions are numbers. However, there are also exceptions. For example, the question \textit{``How many stripes are there on the zebra?''} is annotated with the groundtruth question type ``number'', but its annotated groundtruth answer is ``many'', which is not a number. If the groundtruth question type were used to augment the loss, the predicted answer to that question would likely be a number, which is incorrect with respect to the groundtruth answer. In order to make the model robust to these exceptions, we use the predicted question type to augment the VQA loss.
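To make the augmented loss concrete, the following is a minimal PyTorch-style sketch of Eqs.~(\ref{eq:m_awn})--(\ref{eq:vqaloss}). It assumes that $g$ holds the pre-sigmoid answer logits and $y$ the soft groundtruth answer scores, that a mini-batch plays the role of the $Q$ training questions, and the function name is ours.
\begin{verbatim}
import torch
import torch.nn.functional as F

def augmented_vqa_loss(g, y, h, m_qt_ans):
    """Augmented VQA loss sketch.

    g: (B, A) answer logits, y: (B, A) soft groundtruth answer scores,
    h: (B, P) predicted question-type distribution,
    m_qt_ans: (P, A) question type--answer prior.
    """
    m_awn = h @ m_qt_ans        # (B, A) awareness weights
    y_hat = m_awn * y           # weighted targets
    g_hat = m_awn * g           # weighted logits
    # soft binary cross entropy averaged over all (question, answer) pairs
    return F.binary_cross_entropy_with_logits(g_hat, y_hat, reduction='mean')
\end{verbatim}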
As noted above, using the predicted question type can be seen as a self-adaptation mechanism that allows the system to adapt to such exceptions. In particular, for the above example, the predicted question type is not necessarily ``number''; it can be ``other''.
\subsection{Multi-hypothesis interaction learning}
\label{subsec:interaction}
As presented in Fig.~\ref{fig:framework}, MILQT utilizes multiple hypotheses (i.e., joint modality mechanisms). Specifically, we propose a multi-hypothesis interaction learning design $\mathcal{M}$ that takes the answer predictions produced by the different joint modality mechanisms and interactively learns to combine them. Let $g \in \R^{A \times J}$ be the matrix of predicted probability distributions over the $A$ answers from the $J$ joint modality mechanisms. $\mathcal{M}$ outputs the distribution $\rho \in \R^{A}$, which is calculated from $g$ through Equation (\ref{eq:multi-hypothesis}).
\begin{equation}
\begin{aligned}
&\rho = \mathcal{M} \left(g,w_{mil}\right) = \sum_{j}\left(m^T_{qt-ans}w_{mil} \odot g\right)
\end{aligned}
\label{eq:multi-hypothesis}
\end{equation}
$w_{mil} \in \R^{P \times J}$ is a learnable weight that controls the contributions of the $J$ considered joint modality mechanisms when predicting answers, guided by the $P$ question types; $\odot$ denotes the Hadamard product.
\subsection{Multi-task loss}
\label{subsec:overall-loss}
In order to train the proposed MILQT, we define a multi-task loss to jointly optimize the question-type classification, the answer prediction of each individual attention mechanism, and the VQA loss (\ref{eq:vqaloss}). Formally, our multi-task loss is defined as follows:
\begin{equation}
l = \alpha_1\sum_{j=1}^{J} l_{H_j} +\alpha_2 l_{vqa} + \alpha_3 l_{qt}
\label{eq:final_loss}
\end{equation}
where $\alpha_1, \alpha_2, \alpha_3$ are parameters controlling the importance of each loss; $l_{qt}$ is the question-type classification loss; $l_{H_j}$ is the answer prediction loss of the $j^{th}$ of the $J$ joint modality mechanisms; and $l_{vqa}$ is the introduced VQA loss augmented by the predicted question type and the prior information, defined in (\ref{eq:vqaloss}).
\section{Experiments}
\subsection{Dataset and implementation detail}
\textbf{Dataset.} We conduct experiments on two benchmark VQA datasets, VQA 2.0~\cite{vqav22016} and TDIUC~\cite{Kushal2018Tdiuc}. VQA 2.0 is the most popular and widely used dataset for the VQA problem. In VQA 2.0, questions are divided into three question types, i.e., ``Yes-No'', ``Number'', and ``Other'', while the TDIUC dataset has 12 different question types. Following standard practice, we use the standard VQA accuracy metric \cite{VQA} when evaluating on VQA 2.0, and Arithmetic MPT as well as Harmonic MPT, proposed in \cite{Kushal2018Tdiuc}, when evaluating on TDIUC\footnote{In \cite{Kushal2018Tdiuc}, the authors show that Arithmetic MPT and Harmonic MPT are more suitable than the standard VQA accuracy metric \cite{VQA} when evaluating on TDIUC.}.
\textbf{Implementation detail. }
\label{subsec:implement}
Our proposed MILQT is implemented using PyTorch \cite{paszke2017automaticPyTorch}. The experiments are conducted on a single NVIDIA Titan V GPU with 12GB of memory.
\begin{figure*}[!t]
\centering
\includegraphics[width=\textwidth*9/10, keepaspectratio=true]{exp_examples.png}
\caption{Example results of SAN \cite{Yang2016StackedAN}, BAN \cite{Kim2018BilinearAN}, and our method on the validation set of VQA 2.0.
In all cases, the proposed method produces better attention maps. It also produces more accurate answers than the compared methods (second row).}
\label{fig:exp_figure}
\end{figure*}
In all experiments, the learning rate is set to $10^{-3}$ (or $7\times 10^{-4}$ if using Visual Genome \cite{visualgenome} as augmenting data) and the batch size is set to $256$. The number of detected bounding boxes is set to $50$ when extracting visual features. The GRU \cite{2014ChoGRU} for question embedding has one layer with a $1024$-D hidden state and processes words in forward order. During training, all components except the image representation $f_v$ are trained end-to-end with the multi-task loss (\ref{eq:final_loss}). The AdaMax optimizer \cite{Kingma2014AdamAM} is used to train our model.
\begin{table}[!t]
\begin{center}
\small
\begin{tabular}{l c}
\hline
\begin{tabular}[l]{@{}l@{}}\textbf{Models}\end{tabular} &\textbf{VQA score}\\
\hline
\multicolumn{2}{c}{\textbf{Contribution of question type awareness}} \\
BAN-2-Counter \cite{Kim2018BilinearAN} &65.25 \\
\quad + add &65.68\\
\quad\quad + prior &66.04\\
\quad + mul &65.80\\
\quad\quad + prior &66.13\\
\hline
\multicolumn{2}{c}{\textbf{Contribution of hypothesis interaction learning}} \\
BAN-2-Counter \cite{Kim2018BilinearAN} &65.25 \\
\quad + BAN-2 \cite{Kim2018BilinearAN} &66.15\\
\quad + SAN \cite{Yang2016StackedAN} &65.64\\
\hline
\multicolumn{2}{c}{\textbf{Whole model testing}} \\
BAN-2-Counter \cite{Kim2018BilinearAN} &65.25 \\
\quad + BAN-2 \cite{Kim2018BilinearAN} + Mul + prior &66.31\\
\quad + SAN \cite{Yang2016StackedAN} + Mul + prior &66.48\\
\hline
\end{tabular}
\end{center}
\caption{Contributions of the proposed components and the whole model on the VQA 2.0 validation set.}
\label{tab:valeval}
\end{table}
\begin{table}[!t]
\small
\begin{center}
\begin{tabular}{|c|c|c|c|c|}
\hline
{Models} & BAN-2 & \begin{tabular}[c]{@{}c@{}}BAN-2-\\ Counter \ \end{tabular} & \begin{tabular}[c]{@{}c@{}}Averaging\\ Ens.\end{tabular} & \begin{tabular}[c]{@{}c@{}}Interaction\\ Learning\end{tabular} \\
\hline
{Accuracy} & 65.36 & 65.25 & 65.61 & {66.15} \\
\hline
\end{tabular}
\end{center}
\caption{Performance on the VQA 2.0 validation set where BAN-2 \cite{Kim2018BilinearAN} and BAN-2-Counter \cite{Kim2018BilinearAN} are ensembled using averaging ensembling and the proposed interaction learning. }
\label{tab:ens}
\end{table}
\subsection{Ablation study}
To evaluate the contributions of the question-type awareness module $\mathcal{A}$ and the multi-hypothesis interaction learning module $\mathcal{M}$ in our method, we conduct ablation studies by training on the train set and testing on the validation set of VQA 2.0 \cite{vqav22016}. Starting with BAN with two glimpses and the counter sub-module (BAN-2-Counter) \cite{Kim2018BilinearAN} as the baseline, we show the effectiveness of the proposed modules when they are integrated into the baseline. The counter sub-module \cite{Zhang2018LearningToCount} is used in the baseline to demonstrate the extensibility of the proposed model in supporting ``Number'' questions. However, other sub-modules can also be applied, e.g., a relational reasoning sub-module \cite{2017SantoroRelationalNet} to support ``Yes/No'' and ``Other'' questions. It is worth noting that, to make a fair comparison, we use the same visual features and question embedding features for both the BAN-2-Counter baseline and our model.
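For reference when reading the correlation scores in Table~\ref{tab:corr} and the interaction-learning ablations below, the following is a minimal PyTorch-style sketch of the combination in Equation~(\ref{eq:multi-hypothesis}); the module name and the uniform initialization of $w_{mil}$ are our own assumptions.
\begin{verbatim}
import torch
import torch.nn as nn

class InteractionLearning(nn.Module):
    """Combine the predictions of J joint modality mechanisms,
    guided by the question type--answer prior."""
    def __init__(self, m_qt_ans, J):
        super().__init__()
        P = m_qt_ans.shape[0]
        self.register_buffer('m_qt_ans', m_qt_ans)        # (P, A) prior
        self.w_mil = nn.Parameter(torch.full((P, J), 1.0 / J))

    def forward(self, g):
        # g: (B, A, J) stacked answer predictions of the J mechanisms
        weights = self.m_qt_ans.t() @ self.w_mil           # (A, J)
        rho = (weights.unsqueeze(0) * g).sum(dim=-1)       # (B, A)
        return rho
\end{verbatim}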
\begin{table}[!t] \begin{center} \small \begin{tabular}{|c|c|c|c|} \hline \multirow{2}{*}{\textbf{\begin{tabular}[c]{@{}c@{}}Question\\ types\end{tabular}}} & \multicolumn{3}{c|}{\textbf{Correlation scores}} \\ \cline{2-4} & \textbf{BAN-Counter} & \textbf{BAN} & \textbf{SAN} \\ \hline \textit{Yes/No} & 0.40 & 0.55 & 0.05 \\ \hline \textit{Numbers} & 0.55 & 0.23 & 0.22 \\ \hline \textit{Others} & 0.35 & 0.38 & 0.27 \\ \hline \end{tabular} \end{center} \caption{The correlation scores extracted from $w_{mil}$ of MILQT. The extracted information got from model trained in VQA 2.0 train set.} \label{tab:corr} \end{table} \begin{table*}[!t] \centering \small \begin{center} \begin{tabular}{l| c| c c c|c |c c c} \hline \multirow{2}{*}{\textbf{Models}} &\multicolumn{4}{c|}{\textbf{VQA - test-dev}} &\multicolumn{4}{c}{\textbf{VQA - test-std}} \\ \cline{2-9} &\textbf{Overall} &\textbf{Yes/No} &\textbf{Nums} &\textbf{Other} &\textbf{Overall} &\textbf{Yes/No} &\textbf{Nums} &\textbf{Other}\\ \hline SAN \cite{Yang2016StackedAN} &64.80 &79.63 &43.21 &57.09 &65.21 &80.06 &43.57 &57.24 \\ Up-Down \cite{bottom-up2017} &65.32 &81.82 &44.21 &56.05 &65.67 &82.20 &43.90 &56.26 \\ \begin{tabular}[c]{@{}c@{}}CMP \cite{tan2019lxmert}\ \end{tabular} &68.7 &84.91 &50.15 &59.11 &69.23 &85.48 &49.53 &59.6\\ Pythia \cite{Jiang2018PythiaVT} &70.01 &86.12 &48.97 &61.06 &70.24 &86.37 &48.46 &61.18 \\ BAN \cite{Kim2018BilinearAN} &70.04 &85.42 &54.04 &60.52 &70.35 &85.82 &53.71 &60.69 \\ \begin{tabular}[c]{@{}c@{}}LXMERT\cite{tan2019lxmert} \\ \end{tabular} &\textbf{72.4} &88.3 &54.2 &62.9 &\textbf{72.5} &88.0 &56.7 &65.2\\ \hline \textbf{MILQT} &70.62 &86.47 &54.24 &60.79 &70.93 &86.80 &53.79 &61.03\\ \hline \end{tabular} \end{center} \caption[Test-dev and test-standard results on VQA 2.0 dataset with single-models of different methods] {Comparison to the state of the arts on the test-dev and test-standard of VQA 2.0. For fair comparison, in all setup except LXMERT which uses BERT \cite{Devlin2019BERTPO} as question embedding, Glove embedding and GRU are leveraged for question embedding and Bottom-up features are used to extract visual information. CMP, i.e.Cross-Modality with Pooling, is the LXMERT with the aforementioned setup. } \label{tab:VQA} \end{table*} \textbf{The effectiveness of question-type awareness and prior information proposed in Section~\ref{subsec:qt-awa}.} The first section in Table \ref{tab:valeval} shows that by having second level multi-modal fusion (Section~\ref{subsec:qt-awa}) which uses element-wise multiplication (\textit{+mul}) to combine the question-type feature $f_{qt}$ and the attention feature $f_{att}$, the overall performance increases from $65.25\%$ (baseline) to $65.80\%$. By further using the predicted question type and the prior information (\textit{+prior}) to augment the VQA loss, the performance increases to $66.13\%$ which is $+0.88\%$ improvement over the baseline. The results in the first section in Table \ref{tab:valeval} confirm that combining question-type features with attention features helps to learn a better joint semantic space, which leads to the performance boost over the baseline. These results also confirm that using the predicted question type and the prior provides a further boost in the performance. We also find out that using EWM provides better accuracy than EWA at the second level fusion. 
\textbf{The effectiveness of multi-hypothesis interaction learning proposed in Section~\ref{subsec:interaction}.} The second section of Table \ref{tab:valeval} shows the effectiveness of leveraging different joint modality mechanisms through multi-hypothesis interaction learning. Using BAN-2-Counter \cite{Kim2018BilinearAN} and BAN-2 \cite{Kim2018BilinearAN} (BAN-2-Counter + BAN-2), the overall performance is $66.15\%$, a $+0.9\%$ improvement over the BAN-2-Counter baseline. Table \ref{tab:corr} illustrates the correlation between the different joint modality mechanisms and the question types. This information is extracted from $w_{mil}$, which identifies the contribution of each mechanism to the final VQA results under the guidance of the question-type information. The results in Table \ref{tab:VQA} indicate that some joint modality methods achieve better performance on specific question types, e.g., BAN outperforms the other methods on the Number question type by a large margin. The correlations in Table \ref{tab:corr} and the performance in Table \ref{tab:VQA} also indicate that MILQT tends to weight the contribution of each joint method in proportion to its performance on each specific question type. Besides, the results in Table \ref{tab:ens} indicate that, under the guidance of the question type, the $\mathcal{M}$ module produces better performance than not using it or than the weighted-sum method \cite{li2019regat}, in which the predictions of the different joint modality mechanisms are summed up and the answer with the highest score is taken as the final answer.
\textbf{The effectiveness of the entire proposed model.} The third section in Table \ref{tab:valeval} presents results when all components (except the visual feature extractor) are combined in a unified model and trained end-to-end. To verify the effectiveness of the proposed framework, we evaluate two configurations. In the first configuration, we use the two joint modality mechanisms BAN-2-Counter and BAN-2, EWM in the second-level multi-modal fusion, and the predicted question type together with the prior information to augment the loss. The second configuration is similar, except that we use BAN-2-Counter and SAN in the interaction learning. The third section of Table \ref{tab:valeval} shows that both configurations boost performance over the baseline. The second configuration achieves better performance, i.e., $66.48\%$ accuracy, which outperforms the baseline BAN-2-Counter by $+1.23\%$. Table \ref{tab:valeval} also shows that using ``question-type awareness'' gives a further boost over using interaction learning only, i.e., ``BAN-2-Counter + SAN + Mul + prior'' (66.48) outperforms ``BAN-2-Counter + SAN'' (65.64). Fig.~\ref{fig:exp_figure} presents visualization results of our second configuration and other methods on the VQA 2.0 validation set.
\textbf{Question-type classification analysis.} The proposed MILQT jointly trains question-type classification and VQA answer classification, and this multi-task learning helps to improve the performance of both tasks. To further analyze the effectiveness of MILQT for question-type classification, we report question-type classification results on the TDIUC dataset.
We follow QTA~\cite{MTL_QTA} to calculate the accuracy, i.e., the overall accuracy is the number of correct predictions over the number of testing questions, across all categories. The results are presented in Table \ref{tab:state-of-the-art-qt}. Our MILQT uses BAN-2 \cite{Kim2018BilinearAN}, BAN-2-Counter~\cite{Kim2018BilinearAN}, and SAN~\cite{Yang2016StackedAN} in the interaction learning, element-wise multiplication in the second level of multi-modal fusion, and the predicted question type with prior information to augment the VQA loss. Compare to the state-of-the-art QTA~\cite{MTL_QTA}, our MILQT outperforms QTA for most of question types. In overall, we achieve state-of-the-art performance on question-type classification task on TDIUC dataset with $96.45\%$ accuracy. It is worth noting that for the ``Utility and Affordances" category, the question type classification accuracy is $0\%$ for both QTA and MILQT. It is because the imbalanced data problem in TDIUC dataset. The ``Utility and Affordances" category has only $\approx 0.03\%$ samples in the dataset. Hence this category is strongly dominated by other categories when learning the question type classifier. Note that, there are cases in which questions belonging to the ``Utility and Affordances" category have similar answers with questions belonging to other categories. Thus, the data becomes less bias w.r.t. answers (in comparing to question categories). This explains why although both MILQT and QTA have $0\%$ accuracy for the ``Utility and Affordances" on the question category classification, both of them achieve some accuracy on the VQA classification (see Table \ref{tab:state-of-the-art-qt}). \begin{table}[!t] \begin{center} \small \begin{tabular}{|l|c |c|} \hline \multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}\textbf{Question-type accuracy}\end{tabular}} & \multicolumn{2}{c|}{\textbf{Reference Models}} \\ \cline{2-3} &\textbf{QTA \cite{MTL_QTA}}& \begin{tabular}[l]{@{}l@{}}\textbf{MILQT}\end{tabular}\\ \hline \begin{tabular}[c]{@{}c@{}}\textbf{Scene Recognition} \end{tabular} &99.40 &\textbf{99.84} \\ \begin{tabular}[c]{@{}c@{}}\textbf{Sport Recognition}\end{tabular} &73.08 &\textbf{85.81} \\ \begin{tabular}[c]{@{}c@{}}\textbf{Color Attributes} \end{tabular} &86.10 &\textbf{89.60} \\ \begin{tabular}[c]{@{}c@{}}\textbf{Other Attributes} \end{tabular} &77.76 &\textbf{85.03} \\ \begin{tabular}[c]{@{}c@{}}\textbf{Activity Recognition}\end{tabular} &13.18 &\textbf{16.43} \\ \begin{tabular}[c]{@{}c@{}}\textbf{Positional Recognition}\end{tabular} &89.52 &\textbf{89.55} \\ \begin{tabular}[c]{@{}c@{}}\textbf{Sub-Object Recognition}\end{tabular} &98.96 &\textbf{99.42} \\ \begin{tabular}[c]{@{}c@{}}\textbf{Absurd}\end{tabular} &\textbf{95.46} &95.12 \\ \begin{tabular}[c]{@{}c@{}}\textbf{Utility and Affordances}\end{tabular} &00.00 &00.00 \\ \begin{tabular}[c]{@{}c@{}}\textbf{Object Presence}\end{tabular} &\textbf{100.00} &\textbf{100.00} \\ \begin{tabular}[c]{@{}c@{}}\textbf{Counting}\end{tabular} &99.90 &\textbf{99.99}\\ \begin{tabular}[c]{@{}c@{}}\textbf{Sentiment Understanding}\end{tabular} &60.51 &\textbf{67.82} \\ \hline \begin{tabular}[c]{@{}c@{}}\textbf{Overall}\end{tabular} &95.66 &\textbf{96.45} \\ \hline \end{tabular} \end{center} \caption{The comparative question-type classification results between MILQT and state-of-the-art QTA \cite{MTL_QTA} on the TDIUC validation set.} \label{tab:state-of-the-art-qt} \end{table} \begin{table*}[!t] \centering \small \begin{center} \begin{tabular}{|l|c c c|c|} \hline 
\multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}\textbf{Score}\end{tabular}} & \multicolumn{4}{c|}{\textbf{Reference Models}} \\ \cline{2-5} &\textbf{QTA-M \cite{MTL_QTA}}& \textbf{MCB-A \cite{Kushal2018Tdiuc}}& \textbf{RAU \cite{Kushal2018Tdiuc}}& \begin{tabular}[l]{@{}l@{}}\textbf{MILQT}\end{tabular}\\ \hline \begin{tabular}[c]{@{}c@{}}\textbf{Scene Recognition} \end{tabular} &93.74 &93.06 &93.96 &\textbf{94.74} \\ \begin{tabular}[c]{@{}c@{}}\textbf{Sport Recognition}\end{tabular} &94.80 &92.77 &93.47 &\textbf{96.47} \\ \begin{tabular}[c]{@{}c@{}}\textbf{Color Attributes} \end{tabular} &57.62 &68.54 &66.86 &\textbf{75.23} \\ \begin{tabular}[c]{@{}c@{}}\textbf{Other Attributes} \end{tabular} &52.05 &56.72 &56.49 &\textbf{61.93} \\ \begin{tabular}[c]{@{}c@{}}\textbf{Activity Recognition}\end{tabular} &53.13 &52.35 &51.60 &\textbf{65.03} \\ \begin{tabular}[c]{@{}c@{}}\textbf{Positional Recognition}\end{tabular} &33.90 &35.40 &35.26 &\textbf{42.31} \\ \begin{tabular}[c]{@{}c@{}}\textbf{Sub-Object Recognition}\end{tabular} &86.89 &85.54 &86.11 &\textbf{89.63} \\ \begin{tabular}[c]{@{}c@{}}\textbf{Absurd}\end{tabular} &\textbf{98.57} &84.82 &96.08 &88.95 \\ \begin{tabular}[c]{@{}c@{}}\textbf{Utility and Affordances}\end{tabular} &24.07 &35.09 &31.58 &\textbf{38.60} \\ \begin{tabular}[c]{@{}c@{}}\textbf{Object Presence}\end{tabular} &94.57 &93.64 &94.38 &\textbf{96.21} \\ \begin{tabular}[c]{@{}c@{}}\textbf{Counting}\end{tabular} &53.59 &51.01 &48.43 &\textbf{62.41} \\ \begin{tabular}[c]{@{}c@{}}\textbf{Sentiment Understanding}\end{tabular} &60.06 &\textbf{66.25} &60.09 &64.98 \\ \hline \begin{tabular}[c]{@{}c@{}}\textbf{Arithmetic MPT}\end{tabular} &66.92 &67.90 &67.81 &\textbf{73.04} \\ \begin{tabular}[c]{@{}c@{}}\textbf{Harmonic MPT}\end{tabular} &55.77 &60.47 &59.00 &\textbf{66.86} \\ \hline \end{tabular} \end{center} \caption{The comparative results between the proposed model and other models on the validation set of TDIUC. } \label{tab:TDIUC} \end{table*} \subsection{Comparison to the state of the art} \textbf{Experiments on VQA 2.0 test-dev and test-standard.} We evaluate MILQT on the test-dev and test-standard of VQA 2.0 dataset \cite{vqav22016}. To train the model, similar to previous works~\cite{Yang2016StackedAN,tip-trick,Jiang2018PythiaVT,Kim2018BilinearAN}, we use both training set and validation set of VQA 2.0. We also use the Visual Genome~\cite{visualgenome} as additional training data. MILQT consists of three joint modality mechanisms, i.e., {BAN-2}, {BAN-2-Counter}, and {SAN} accompanied with the EWM for the multi-modal fusion, and the predicted question type together with the prior information to augment the VQA loss. Table~\ref{tab:VQA} presents the results of different methods on test-dev and test-std of VQA 2.0. The results show that our MILQT yields the good performance with the most competitive approaches. \textbf{Experiments on TDIUC.} In order to prove the stability of MILQT, we evaluate MILQT on TDIUC dataset \cite{Kushal2018Tdiuc}. The results in Table \ref{tab:TDIUC} show that the proposed model establishes the state-of-the-art results on both evaluation metrics Arithmetic MPT and Harmonic MPT \cite{Kushal2018Tdiuc}. Specifically, our model significantly outperforms the recent QTA~\cite{MTL_QTA}, i.e., on the overall, we improve over QTA $6.1\%$ and $11.1\%$ with Arithemic MPT and Harmonic MPT metrics, respectively. 
It is worth noting that the QTA~\cite{MTL_QTA} results in Table \ref{tab:TDIUC}, cited from \cite{MTL_QTA}, are achieved when the one-hot \textit{predicted question type} of the testing question is used to weight the visual features. When using the \textit{groundtruth question type} to weight the visual features, \cite{MTL_QTA} reported $69.11\%$ and $60.08\%$ for the Arithmetic MPT and Harmonic MPT metrics, respectively. Our model also outperforms these results by a large margin, i.e., the improvements are $3.9\%$ and $6.8\%$ for the Arithmetic MPT and Harmonic MPT metrics, respectively.
We also note that for the question type ``Absurd'', we obtain lower performance than QTA \cite{MTL_QTA}. For this question type, the question is irrelevant to the image content. Consequently, this question type does not help to learn a meaningful joint embedding between the input question and image, which explains our lower performance on it.
\section{Conclusion}
We present MILQT, a multiple interaction learning approach with question-type prior knowledge that constrains the answer search space and takes the question-type information into account at different stages to improve VQA performance. The system also utilizes and learns different attention mechanisms under a unified model in an interacting manner. Extensive experimental results show that all proposed components improve VQA performance, and our model performs competitively with the strongest approaches on the VQA 2.0 and TDIUC datasets.
\bibliographystyle{splncs04}
\bibliography{egbib}
\end{document}
https://openreview.net/forum?id=A4ft_k1rJ1C
https://arxiv.org/abs/2009.11118
Please evaluate our paper on an ensemble of VQA methods with prior knowledge on question types, focusing on the strengths and weaknesses of our approach.
Simple, effective and very well discussed 1. [Summary] In 2-3 sentences, describe the key ideas, experiments, and their significance. This paper proposes to adapt several data-level augmentation techniques from image field to videos. To study the effects of such techniques, authors conduct experiments on the action recognition topic. Results showcase that some of the proposed techniques helps improve the performance. Additionally, they described their participation in the 1st VIPriors Action Recognition Challenge. 2. [Strengths] What are the strengths of the paper? Clearly explain why these aspects of the paper are valuable. - Regarding the results, the paper is really well motivated and clearly understandable. The story is clear and is very easy to follow. - Authors clearly described how they have adapted all the techniques to video. - The results are marginal (or even worse than baseline) in some situations. However, I like the fact that authors recognise it and discuss it in Section 4.6, suggesting some interesting reasons. 3. [Weaknesses] What are the weaknesses of the paper? Clearly explain why these aspects of the paper are weak. - Results are marginal (or even worse than baseline) for some augmentation techniques. - Authors talk always about temporal distortions. However, it seems that basically they apply frame distortions during a concrete temporal window. For me, this is not a temporal distortion. Authors can check the paper “Learning Temporal Action Proposals With Fewer Labels”. In this paper, data augmentation on videos is performed by modifying the temporal information accumulated by the features. Concretely, they use time warping and Time masking. 4. [Overall rating] Paper rating. 7 5. [Justification of rating] Please explain how the strengths and weaknesses aforementioned were weighed in for the rating. Despite weaknesses, the paper is well written, and very well discussed. 6. [Detailed comments] Additional comments regarding the paper (e.g. typos or other possible improvements you would like to see for the camera-ready version of the paper, if any.) - Line 198: augmentation operation(s). - Line 350: consists (of) Rating: 7: Good paper, accept Confidence: 4: The reviewer is confident but not absolutely certain that the evaluation is correct
\documentclass[runningheads]{styles/llncs} \usepackage{graphicx} \usepackage{comment} \usepackage{amsmath,amssymb} % \usepackage{color} \usepackage{adjustbox} \usepackage{subfig} \captionsetup[subfigure]{labelformat=empty} \usepackage{multirow} \usepackage{amssymb}% \usepackage{pifont}% \usepackage{multirow, boldline} \usepackage{ctable} \usepackage{xcolor} \usepackage[bottom]{footmisc} \usepackage{listings} \usepackage{wrapfig} \usepackage{makecell} \usepackage{hyperref} \hypersetup{pagebackref=true,breaklinks=true,letterpaper=true,colorlinks,bookmarks=false} \usepackage{breakcites} \hypersetup{ colorlinks = true, citecolor = green } \hypersetup{linkcolor=red} \newcommand*\samethanks[1][\value{footnote}]{\footnotemark[#1]} \newcommand{\etal}{\textit{et al}.} \newcommand{\ie}{\textit{i.e.} } \newcommand{\eg}{\textit{e.g.} } \newcommand{\ours}{DMV} \newcommand{\ourss}{DMV } \newcommand{\cmark}{\ding{51}}% \definecolor{darkgreen}{rgb}{0.0, 0.6, 0.2} \definecolor{MyRed}{rgb}{0.8,0.2,0} \def\red#1{\textcolor{MyRed}{#1}} \definecolor{MyBlue}{rgb}{0,0,1.0} \def\first#1{\textcolor{MyBlue}{#1}} \definecolor{dkgreen}{rgb}{0,0.6,0} \definecolor{gray}{rgb}{0.5,0.5,0.5} \definecolor{mauve}{rgb}{0.58,0,0.82} \lstset{frame=topbottom, language=Python, aboveskip=3mm, belowskip=3mm, showstringspaces=false, columns=flexible, basicstyle={\scriptsize\ttfamily}, numbers=none, numberstyle=\tiny\color{gray}, keywordstyle=\color{blue}, commentstyle=\color{dkgreen}, stringstyle=\color{mauve}, breaklines=true, breakatwhitespace=true, tabsize=3 } \newcommand{\tref}[1]{Tab.~\ref{#1}} \newcommand{\Tref}[1]{Table~\ref{#1}} \newcommand{\eref}[1]{Eq.~(\ref{#1})} \newcommand{\Eref}[1]{Equation~(\ref{#1})} \newcommand{\fref}[1]{Fig.~\ref{#1}} \newcommand{\Fref}[1]{Figure~\ref{#1}} \newcommand{\sref}[1]{Sec.~\ref{#1}} \newcommand{\Sref}[1]{Section~\ref{#1}} \newcommand{\dummyfig}[1]{ \centering \fbox{ \begin{minipage}[c][0.33\textheight][c]{0.5\textwidth} \centering{#1} \end{minipage} } } \newcommand{\similarity}{s} \newcommand{\scoremap}{M} \begin{document} \pagestyle{headings} \mainmatter \def\ECCVSubNumber{0000} % \def\JL#1{{\color{red}JL: \it #1}} \title{Learning Temporally Invariant and \\ Localizable Features via Data Augmentation \\ for Video Recognition} \titlerunning{Temporally Invariant Data Augmentation for Video Recognition} \author{Taeoh Kim\thanks{Equal contribution}\inst{1} \and Hyeongmin Lee\samethanks\inst{1} \and MyeongAh Cho\samethanks\inst{1} \and Ho Seong Lee\inst{2} \and \\ Dong Heon Cho\inst{2} \and Sangyoun Lee\inst{1}\thanks{Corresponding Author}} \authorrunning{T. Kim et al} \institute{Yonsei University, Seoul, South Korea \and Cognex Deep Learning Lab, Seoul, South Korea \\ \email{\{kto, minimonia, maycho0305, syleee\}@yonsei.ac.kr} \\ \email{\{hoseong.lee, david.cho\}@cognex.com}} \maketitle \begin{abstract} Deep-Learning-based video recognition has shown promising improvements along with the development of large-scale datasets and spatiotemporal network architectures. In image recognition, learning spatially invariant features is a key factor in improving recognition performance and robustness. Data augmentation based on visual inductive priors, such as cropping, flipping, rotating, or photometric jittering, is a representative approach to achieve these features. Recent state-of-the-art recognition solutions have relied on modern data augmentation strategies that exploit a mixture of augmentation operations. 
In this study, we extend these strategies to the temporal dimension for videos to learn temporally invariant or temporally localizable features to cover temporal perturbations or complex actions in videos. Based on our novel temporal data augmentation algorithms, video recognition performances are improved using only a limited amount of training data compared to the spatial-only data augmentation algorithms, including the 1st Visual Inductive Priors (VIPriors) for data-efficient action recognition challenge. Furthermore, learned features are temporally localizable that cannot be achieved using spatial augmentation algorithms. Our source code is available at \url{https://github.com/taeoh-kim/temporal_data_augmentation}. \end{abstract} \section{Introduction} Many augmentation techniques have been proposed to increase the recognition performance and robustness for an environment with limited training data or to prevent overconfidence and overfitting of large-scale data, such as ImageNet~\cite{krizhevsky2012imagenet}. These techniques can be categorized into data-level augmentation~\cite{krizhevsky2012alexnet, vggnet, autoaugment, fastautoaugment, randaugment, augmix, cutout, hideandseek}, data-level mixing~\cite{mixup, cutmix, cutblur, attributemix, attentivecutmix, smoothmix}, and in-network augmentation~\cite{dropout, dropblock, stochasticdepth, shakeshake, shakedrop, regvideo, manimixup}. Data augmentation is an important component for recent state-of-the-art self-supervised learning~\cite{moco, simclr, pirl}, semi-supervised learning~\cite{uda, mixmatch, remixmatch, fixmatch}, self-learning~\cite{noisystudent}, and generative models~\cite{crgan, diffauggan, bcrgan, dagan} because of its ability to learn invariant features. The purpose of data augmentation in image recognition is to enhance the generalizability via learning spatially invariant features. Augmentation, such as geometric (cropping, flipping, rotating, \textit{etc.}) and photometric (brightness, contrast, color, \textit{etc.}) transformation, can model uncertain variances in a dataset. Recent algorithms have exhibited state-of-the-art performances in terms of the complexity-accuracy trade-off~\cite{fastautoaugment, randaugment} or robustness~\cite{robustness, augmix}. Some approaches~\cite{cutmix, cutblur} learn localizable features that can be used as transferable features for the localization-related tasks, such as object detection and image captioning. They simultaneously learn what to and where to focus for recognition. Despite evolving through numerous algorithms in image recognition, exploration into data augmentation and regularization in video recognition has rarely been done. In videos, temporal variations and perturbations should be considered. For example, Fig. \ref{fig_perturbation} depicts temporal perturbations across frames in a video. This perturbation can be a geometric perturbation, such as translation, rotation, scale, and so on, or a photometric perturbation, such as brightness, contrast, and so on. To handle perturbation, both well-studied spatial augmentation and temporally varying data augmentation should be considered. In this paper, we propose several extensions for temporal robustness. More specifically, temporally invariant and localizable features can be modeled via data augmentations. In this paper, we extend upon two recent examples of well-studied spatial augmentation techniques: data-level augmentation and data-level mixing. 
To the best of our knowledge, this is the first study that deeply analyzes temporal perturbation modeling via data augmentation in video recognition. The contributions of this paper can summarized as follows: \begin{itemize} \item {We propose an extension of RandAugment~\cite{randaugment}, called RandAugment-T, to conduct data-level augmentation for video recognition. It can temporally model varying levels of augmentation operations.} \item {We also propose the temporal extensions of CutOut~\cite{cutout}, MixUp~\cite{mixup}, and CutMix~\cite{cutmix} as examples of deleting, blending, and cut-and-pasting data samples. Considering the temporal dimension improves recognition performance and the temporal localization abilities.} \item {The recognition results of the proposed extensions on the UCF-101~\cite{soomro2012ucf101} subset for the 1st Visual Inductive Priors (VIPriors) for data-efficient action recognition challenge, and the HMDB-51~\cite{kuehne2011hmdb} dataset exhibit performance improvements compared to the spatial-only versions in a simple baseline.} \end{itemize} \begin{figure*}[!t] \centering \subfloat {\includegraphics[width=0.155\linewidth]{./fig/g1.png}}\ \subfloat {\includegraphics[width=0.155\linewidth]{./fig/g2.png}}\ \subfloat {\includegraphics[width=0.155\linewidth]{./fig/g3.png}}\ \hfill \subfloat {\includegraphics[width=0.155\linewidth]{./fig/p1.png}}\ \subfloat {\includegraphics[width=0.155\linewidth]{./fig/p2.png}}\ \subfloat {\includegraphics[width=0.155\linewidth]{./fig/p3.png}}\ \\ \caption{Example clips of temporal perturbations. \textit{Left}: Geometric perturbation across frames in a sky-diving video due to extreme camera and object movement. \textit{Right}: Photometric perturbation across frames in a basketball stadium due to camera flashes.} \label{fig_perturbation} \end{figure*} \section{Related Works} \subsection{Data augmentation} \subsubsection{Data-level augmentation} First, to enlarge the generalization performance of a dataset and to reduce the overfitting problem of preliminary networks, various data augmentation methods, such as rotate, flip, crop, color jitter~\cite{krizhevsky2012imagenet}, and scale jitter~\cite{vggnet} have been proposed. CutOut~\cite{cutout} deletes a square-shaped box at a random location to encourage the network focus on various properties of images, to avoid relying on the most discriminative regions. Hide-and-Seek~\cite{hideandseek} is a similar approach, but it deletes multiple regions that are sampled from grid patches. Recently, the methodology of combining more than one augmentation operation has been proposed. Cubuk~\etal~\cite{autoaugment} propose a reinforcement learning-based approach to search for the optimal data augmentation policy in the given dataset. However, because the search space is too large, it requires extensive time to determine the optimal policy. Although an approach to mitigate this problem has been proposed~\cite{fastautoaugment}, it is difficult hard and time-consuming to determine the optimal augmentation strategy. To solve this, Cubuk~\etal~\cite{randaugment} propose RandAugment, which randomly samples augment operations from the candidate list and cascades them. Similarly, Hendrycks~\etal~\cite{augmix} propose an approach called AugMix that parallelly blends images that have been augmented by the operations sampled from a set of candidates. These techniques can model uncertain spatial perturbation, such as the geometric transform, photometric transform, or both. 
Because studies have focused on static images, applying these approaches to videos is a straightforward extension. For videos, Ji~\etal~\cite{ji2019learning} propose temporal augmentation operations called time warping and time masking, which randomly adjust or skip temporal frames. In contrast, in this paper, we focus on the temporally varying augmentation. \subsubsection{Data-level mixing} Together with data augmentation algorithms, augmentation strategies using multiple samples have been proposed. Zhang~\etal~\cite{mixup} propose an approach called MixUp to manipulate images with more than one image. This approach makes a new sample by blending two arbitrary images and interpolating their one-hot ground-truth labels. This encourages the model to behave linearly in-between training examples. CutMix~\cite{cutmix} combines the concepts of CutOut and MixUp, by taking the best of both worlds. It replaces a square-shaped deleted region in CutOut with a patch from another image. This encourages the model to learn not only what to recognize but also where to recognize it. It can be interpreted as spatially localizable feature learning. Inspired by CutMix, several methods have been proposed. CutBlur~\cite{cutblur} propose a CutMix-like approach to solving the restoration problem by cut-and-pasting between low-resolution and high-resolution images. They also proposed CutMixUp, which combines MixUp and CutMix. CutMixUp blends the two images inside the one of the masks of CutMix to relax extreme changes in boundary pixels. Attribute~Mix~\cite{attributemix} uses masks of any shape, not only square-shaped masks. Attentive~CutMix~\cite{attentivecutmix} also discards the square-shaped masks. It uses multiple patches sampled from the grid and replaces the regions with another image. Smoothmix~\cite{smoothmix} focuses on the 'strong edge' problem caused by the boundary of the masks. Although numerous data manipulation methods, including deleting, blending, and cut-and-pasting, have successfully augmented many image datasets, their ability when applied to video recognition to learn temporally invariant and localizable features has not yet been explored. \subsubsection{In-network augmentation} Apart from the data-level approaches, several studies have proposed in-network augmentation algorithms. These have usually involved the design of stochastic networks to undergo augmentation at the feature-level to reduce predictive variance and to learn more high-level augmented features rather than to learn features from low-level augmentations. Dropout~\cite{dropout} is the very first approach to regularize the overfitted models. Other approaches, such as DropBlock~\cite{dropblock}, Stochastic depth~\cite{stochasticdepth}, Shake-Shake~\cite{shakeshake}, and ShakeDrop~\cite{shakedrop} regularization, have been proposed. Manifold-MixUp~\cite{manimixup} propose a mixing strategy like MixUp but is used instead in the feature space. The most similar approach to this study is a regularization method for video recognition called Random Mean Scaling~\cite{regvideo}. It randomly adjusts spatiotemporal features in video networks. In contrast, our approach focuses on data-level manipulation and is extended from spatial-only algorithms into the temporal worlds. \subsection{Video recognition} For video action recognition, like image recognition, various architectures have been proposed to capture spatiotemporal features from videos. 
In \cite{tran2015learning}, Tran \textit{et al.} proposed C3D, which extracts features containing objects, scenes, and action information through 3D convolutional layers and then simply passes them through a linear classifier. In \cite{tran2018closer}, a (2+1)D convolution that focuses on layer factorization rather than 3D convolution is proposed. It is composed using a 2D spatial convolution followed by 1D temporal convolution. In addition, the non-local block~\cite{wang2018non} and GloRe~\cite{chen2019graph} modules have been suggested to capture long-range dependencies via self-attention and graph-based modules. By plugging them into 3D ConvNet, the network can learn long-distance relations in both space and time. Another approach is two-stream architecture~\cite{wang2016temporal, stroud2020d3d, ryoo2019assemblenet}. In \cite{carreira2017quo}, a two-stream 3D ConvNet inflated from the deep image classification network and pre-trained features is proposed and achieves state-of-the-art performance by pre-training with the Kinetics dataset, a large-scale action recognition dataset. Based on this architecture, Xie \textit{et al.} \cite{xie2017rethinking} combined a top-heavy model design, temporally separable convolution, and spatiotemporal feature-gating blocks to make low-cost and meaningful features. Recently, SlowFast~\cite{feichtenhofer2019slowfast} networks that consist of a slow path for semantic information and a fast path for rapidly changing motion information exhibit competitive performance with a different frame rate sampling strategy. In addition, RESOUND~\cite{li2018resound} proposed a method to reduce the static bias of the dataset, and an Octave convolution~\cite{chen2019drop} is proposed to reduce spatial redundancy by dividing the frequency of features. A debiasing loss function~\cite{choi2019can} is proposed to mitigate the strong scene bias of networks and focus on the actual action information. Since the advent of the large-scale Kinetics dataset, most action recognition studies have pre-trained the backbone on Kinetics, which guarantees basic performance. However, based on the results of the study by \cite{hara2018can}, architectures with numerous parameters are significantly overfitted when learning from scratch on relatively small datasets, such as UCF-101~\cite{soomro2012ucf101} and HMDB-51~\cite{kuehne2011hmdb}. This indicates that training without a pre-trained backbone is a challenging issue. Compared to existing studies that have been focused on novel dataset and architectures, we focus on regularization techniques, such as data augmentation, to prevent overfitting via learning invariance and robustness in terms of spatiality and temporality. \section{Methods} \subsection{Data-level temporal data augmentations} \begin{wrapfigure}{r}{0.5\linewidth} \vspace{-1.0cm} \begin{lstlisting} def randaugment_T(X, N, M1, M2): """Generate a set of distortions. Args: X: Input video (T x H x W) N: Number of augmentation transformations to apply sequentially. M1, M2: Magnitudes for both temporal ends. """ ops = np.random.choice(transforms, N) M = np.linspace(M1, M2, T) return [[op(X, M[t]) for t in range(T)] for op in ops] \end{lstlisting} \vspace{-0.5cm} \caption{\small{Pseudo-code for RandAugment-T based on Numpy in Python. Template is borrowed from~\cite{randaugment}}} \label{fig:randaugt} \vspace{-0.5cm} \end{wrapfigure} First, we extend the existing RandAugment~\cite{randaugment} method for video recognition. RandAugment has two hyper-parameters for optimization. 
One is the number of augmentation operations to apply, N, and the other is the magnitude of the operation, M. A grid search of these two parameters in a given dataset produces state-of-the-art performance in image recognition. For video recognition, RandAugment is directly applicable to every frame of a video; however, this limits temporal perturbation modeling. To cover temporally varying transformations, we propose RandAugment-T, which linearly interpolates between two magnitudes from the first frame to the last frame in a video clip. The pseudo-code for RandAugment-T is described in Fig.~\ref{fig:randaugt}. It receives three hyper-parameters: N, M1, and M2, where N is the number of operations, which is the same as RandAugment, and M1 and M2 indicate the magnitudes for both temporal ends, which can be any combination of levels. The set of augmentation operations (\texttt{transforms} in Fig.~\ref{fig:randaugt}) is identical to RandAugment. However, \texttt{rotate}, \texttt{shear-x}, \texttt{shear-y}, \texttt{translate-x}, and \texttt{translate-y} can model temporally varying geometric transformation, such as camera or object movements (Fig.~\ref{fig:taugexample}(a)), and \texttt{solarize}, \texttt{color}, \texttt{posterize}, \texttt{contrast}, \texttt{brightness}, and \texttt{sharpness} can model photometric transformation, such as brightness or contrast changes due to the auto-shot mode in a camera (Fig. ~\ref{fig:taugexample}(b)). The remaining operations (\texttt{identity}, \texttt{autocontrast}, and \texttt{equalize}) have no magnitudes that are applied evenly across frames. \begin{figure*}[!t] \centering \subfloat {\includegraphics[width=0.8\linewidth]{./fig/translation_5frame.png}}\ \\[0.2ex] \subfloat[(a) Temporally varying geometric augmentations (Top: vertical-down translation, Bottom: clockwise rotation)] {\includegraphics[width=0.8\linewidth]{./fig/rotation_5frame.png}}\ \\ \subfloat {\includegraphics[width=0.8\linewidth]{./fig/brightness_5frame.png}}\ \\[0.2ex] \subfloat[(b) Temporally varying photometric augmentations (Top: increasing brightness, Bottom: decreasing contrast)] {\includegraphics[width=0.8\linewidth]{./fig/contrast_5frame.png}}\ \\ \caption{Example of temporally varying data augmentation operations for RandAugment-T} \label{fig:taugexample} \end{figure*} \subsection{Data-level temporal deleting, blending, and cut-and-pasting} \label{regularization} \begin{figure*}[!t] \centering \subfloat {\includegraphics[width=0.49\linewidth]{./fig/cutout_5frame.png}}\ \hfill \subfloat {\includegraphics[width=0.49\linewidth]{./fig/cutmix_5frame.png}}\ \\[-2ex] \subfloat {\includegraphics[width=0.49\linewidth]{./fig/framecutout_5frame.png}}\ \hfill \subfloat {\includegraphics[width=0.49\linewidth]{./fig/framecutmix_5frame.png}}\ \\[-2ex] \subfloat[\small{(a) \textit{Top}: CutOut~\cite{cutout}, \textit{Middle}: FrameCutOut, \textit{Bottom}: CubeCutOut}] {\includegraphics[width=0.49\linewidth]{./fig/cubecutout_5frame.png}}\ \hfill \subfloat[\small{(b) \textit{Top}: CutMix~\cite{cutmix}, \textit{Middle}: FrameCutMix, \textit{Bottom}: CubeCutMix}] {\includegraphics[width=0.49\linewidth]{./fig/cubecutmix_5frame.png}}\ \\[-2ex] \subfloat {\includegraphics[width=0.49\linewidth]{./fig/mixup_5frame.png}}\ \hfill \subfloat {\includegraphics[width=0.49\linewidth]{./fig/framemixup_5frame.png}}\ \\[-2ex] \subfloat[\small{(c) \textit{Top}: MixUp~\cite{mixup}, \textit{Bottom}: CutMixUp~\cite{cutblur}}] {\includegraphics[width=0.49\linewidth]{./fig/cutmixup_5frame.png}}\ \hfill 
\subfloat[\small{(d) \textit{Top}: FrameCutMixUp, \textit{Bottom}: CubeCutMixUp}] {\includegraphics[width=0.49\linewidth]{./fig/cubemixup_5frame.png}}\ \\[0.5ex]
\subfloat[\small{(e) FadeMixUp}] {\includegraphics[width=0.49\linewidth]{./fig/fademixup_5frame.png}}\ \hfill
\caption{Visual comparison of data-level deleting, blending, and cut-and-pasting for videos. Desired ground-truth labels are calculated by the ratio of each class: \textit{Fencing} and \textit{PlayingGuitar}.}
\label{fig_frameworkcomparison}
\end{figure*}
Regularization techniques proposed for image recognition, such as CutOut~\cite{cutout}, MixUp~\cite{mixup}, and CutMix~\cite{cutmix}, can be applied identically across the frames of a video. CutMixUp, a combination of MixUp and CutMix proposed in~\cite{cutblur}, can also be used to relax unnatural boundary changes. In this section, we propose temporal extensions of the above algorithms.
FrameCutOut and CubeCutOut are the temporal and spatiotemporal extensions of CutOut (Fig.~\ref{fig_frameworkcomparison} (a)), respectively. CutOut encourages the network to better use the full context of the images, rather than relying on a small portion of specific spatial regions. Similarly, FrameCutOut encourages the network to better use the full temporal context, and CubeCutOut the full spatiotemporal context.
FrameCutMix and CubeCutMix are extensions of CutMix~\cite{cutmix} (Fig.~\ref{fig_frameworkcomparison} (b)). CutMix is designed for the learning of spatially localizable features. Cut-and-paste mixing between two images encourages the network to learn where to recognize features. Similarly, FrameCutMix and CubeCutMix are designed for the learning of temporally and spatiotemporally localizable features in a video. Like CutMix, the mixing ratio $\lambda$ is sampled from the beta distribution $Beta(\alpha, \alpha)$, where $\alpha$ is a hyper-parameter, and the locations of the random frames or random spatiotemporal cubes are selected based on $\lambda$.
Like CutMixUp~\cite{cutblur}, which is the unified version of MixUp~\cite{mixup} and CutMix~\cite{cutmix}, FrameCutMixUp and CubeCutMixUp can be designed similarly (Fig.~\ref{fig_frameworkcomparison} (c) and (d)) to relax extreme boundary changes between two samples. For these blend$+$cut-and-paste algorithms, MixUp is applied between the two data samples with the mixing ratio $\lambda_1$, and the other hyper-parameter $\lambda_2$ is sampled from $Beta(2, 2)$. Based on $\lambda_2$, the region mask $\mathbf{M}$ is selected randomly, similar to CutMix, to cut-and-paste between the MixUp-ed sample and one of the two original samples. The final mixed data and desired ground-truth labels are formulated as follows:
\begin{equation}
\begin{split}
\Tilde{x} = \left\{ \begin{array}{ll}
(\lambda_1 x_A + (1-\lambda_1) x_B) \odot \mathbf{M} + x_A \odot (\mathbf{1} - \mathbf{M}) & \quad \mbox{if } \lambda_1 < 0.5 \\
(\lambda_1 x_A + (1-\lambda_1) x_B) \odot \mathbf{M} + x_B \odot (\mathbf{1} - \mathbf{M}) & \quad \mbox{if } \lambda_1 \geq 0.5
\end{array} \right. \\
\Tilde{y} = \left\{ \begin{array}{ll}
(\lambda_1 \lambda_2 + (1 - \lambda_2)) y_A + (1-\lambda_1) \lambda_2 y_B & \quad \mbox{if } \lambda_1 < 0.5 \\
\lambda_1 \lambda_2 y_A + (1 - \lambda_1 \lambda_2) y_B & \quad \mbox{if } \lambda_1 \geq 0.5
\end{array} \right.
\end{split}
\end{equation}
where $\Tilde{x}$, $\Tilde{y}$, and $\odot$ indicate the mixed data, the modified label, and element-wise multiplication, respectively. In both cases, a fraction $\lambda_2$ of the clip carries the MixUp-ed content, so the label weights sum to one.
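To make the blend$+$cut-and-paste variants concrete, the following is a minimal NumPy sketch of FrameCutMixUp following the formulation above. The function name is ours, the masked frames are sampled at random positions (the text leaves open whether they are contiguous), and $\lambda_1$ is assumed to be drawn as in MixUp.
\begin{lstlisting}
import numpy as np

def frame_cutmixup(x_a, y_a, x_b, y_b, lam1, alpha2=2.0):
    """FrameCutMixUp sketch.
    x_a, x_b: float clips of shape (T, H, W, C); y_a, y_b: one-hot labels.
    lam1: MixUp ratio (e.g., drawn from Beta(alpha, alpha)).
    lam2 ~ Beta(2, 2) sets how many frames receive the MixUp-ed content."""
    T = x_a.shape[0]
    lam2 = np.random.beta(alpha2, alpha2)
    n = int(round(lam2 * T))
    mask = np.zeros((T, 1, 1, 1), dtype=np.float32)
    if n > 0:
        idx = np.random.choice(T, size=n, replace=False)  # region mask M
        mask[idx] = 1.0
    mixed = lam1 * x_a + (1.0 - lam1) * x_b               # MixUp-ed clip
    keep = x_a if lam1 < 0.5 else x_b                     # clip kept outside M
    x = mixed * mask + keep * (1.0 - mask)
    if lam1 < 0.5:
        y = (lam1 * lam2 + (1.0 - lam2)) * y_a + (1.0 - lam1) * lam2 * y_b
    else:
        y = lam1 * lam2 * y_a + (1.0 - lam1 * lam2) * y_b
    return x, y
\end{lstlisting}
In both branches, the returned label weights sum to one, mirroring the equation above.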
Finally, we propose another extension of MixUp, called FadeMixUp, inspired by the fade-in, fade-out, and dissolve overlap effects in videos. In FadeMixUp, the mixing ratio changes smoothly across the temporal frames (Fig.~\ref{fig_frameworkcomparison} (e)): the per-frame mixing ratios $\Tilde{\lambda}_t$ are calculated by linear interpolation between $\lambda - \gamma$ and $\lambda + \gamma$, where $\lambda$ is the mixing ratio of MixUp and $\gamma$ is sampled from $Uniform(0, min(\lambda, 1-\lambda))$. Because the adjustments of the mixing ratio at both temporal ends are symmetric, the label is the same as in MixUp:
\begin{equation}
\begin{split}
\Tilde{x}_t & = \Tilde{\lambda}_t X_{A_t} + (1-\Tilde{\lambda}_t) X_{B_t} \\ \Tilde{y} & = \lambda y_A + (1-\lambda) y_B. \\
\end{split}
\label{eq:fademixup}
\end{equation}
Like the cut-and-pasting algorithms above, FadeMixUp models temporal variations and can learn temporally localizable features, but without sharp boundary changes (a short illustrative sketch is given below). Because many videos contain such overlapping effects at scene changes, FadeMixUp can be applied naturally.
A summary of the deleting, blending, and cut-and-pasting data augmentation algorithms is given in Table~\ref{tb:mixcomp}. In the table, a checkmark indicates the axis (spatial or temporal) along which a method can alter pixels. Compared to the existing algorithms~\cite{cutout, cutmix, mixup, cutblur}, our proposed methods are extended temporally and spatiotemporally.
\begin{table}[!t] \centering \caption{\small{Comparison between deleting, blending, and cut-and-pasting frameworks.}} \resizebox{1.0\linewidth}{!}{ \begin{tabular}{ll|ccc|ccc|cc|ccc} \toprule & Type & \multicolumn{3}{c|}{Delete} & \multicolumn{3}{c|}{Cut-and-paste} & \multicolumn{2}{c|}{Blend} & \multicolumn{3}{c}{Blend $+$ Cut-and-paste} \\ \cmidrule{2-13} & Name & \makecell{CutOut \\ \cite{cutout}} & \makecell{Frame \\ CutOut} & \makecell{Cube\\CutOut} & \makecell{CutMix \\ \cite{cutmix}} & \makecell{Frame\\CutMix} & \makecell{Cube\\CutMix} & \makecell{MixUp \\ \cite{mixup}} & \makecell{Fade\\MixUp} & \makecell{CutMixUp\\ \cite{cutblur}} & \makecell{Frame\\CutMixUp} & \makecell{Cube\\CutMixUp} \\ \midrule Axis & Spatial & \cmark & & \cmark & \cmark & & \cmark & & & \cmark & & \cmark \\ & Temporal & & \cmark & \cmark & & \cmark & \cmark & & \cmark & & \cmark & \cmark \\ \bottomrule \end{tabular}} \label{tb:mixcomp} \end{table}
\section{Experiments} \subsection{Experimental Settings} For video action recognition, we train and evaluate the proposed method on the UCF-101~\cite{soomro2012ucf101} and HMDB-51~\cite{kuehne2011hmdb} datasets. The UCF-101 dataset originally consists of 13,320 videos with 101 classes. The dataset provides three training/testing splits, but we use the modified split provided by the 1st VIPriors action recognition challenge, which consists of 4,795 training videos and 4,742 validation videos. The HMDB-51 dataset consists of 6,766 videos with 51 classes. We use the original three training/testing splits for training and evaluation. All models are trained and evaluated on a single GTX 1080-Ti GPU and implemented using the PyTorch framework. We use SlowFast-50~\cite{feichtenhofer2019slowfast} as the backbone network with 64 temporal frames because it is lighter and faster than other networks such as C3D~\cite{tran2015learning}, I3D~\cite{carreira2017quo}, and S3D~\cite{xie2017rethinking}, and requires neither pre-training nor optical flow.
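As a concrete illustration of the FadeMixUp formulation in Eq.~(\ref{eq:fademixup}) above, a minimal NumPy sketch is given below; the tensor shapes, the one-hot label format, and the $Beta(\alpha, \alpha)$ sampling of the base MixUp ratio are assumptions of the sketch rather than fixed choices of the implementation.
\begin{lstlisting}
import numpy as np

def fade_mixup(x_a, y_a, x_b, y_b, alpha=1.0):
    # x_a, x_b: clips of shape (T, H, W, C); y_a, y_b: one-hot labels.
    T = x_a.shape[0]
    lam = np.random.beta(alpha, alpha)                    # base MixUp ratio
    gamma = np.random.uniform(0.0, min(lam, 1.0 - lam))   # fade amplitude
    lam_t = np.linspace(lam - gamma, lam + gamma, T)      # per-frame mixing ratios
    out = (lam_t[:, None, None, None] * x_a
           + (1.0 - lam_t)[:, None, None, None] * x_b)
    y = lam * y_a + (1.0 - lam) * y_b                     # label identical to MixUp
    return out, y
\end{lstlisting}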
For the baseline, basic data augmentation, such as random crop with a size of 160, random scale jittering between [160, 200] for the short side of a video, and random horizontal flip, are applied. For optimization, the batch size is set to 16, the learning rate is set to 1e-4, and a weight decay of 1e-5 is used. Moreover, we incorporate the learning rate warm-up~\cite{cosinewarmup} and cosine learning rate scheduling~\cite{cosinelr} with the Adam optimizer~\cite{adam}. We train all models for 150 epochs. For evaluation, we sample 10 clips uniformly along the temporal axis and average softmax predictions. For the challenge, following \cite{feichtenhofer2019slowfast}, we sample 30 clips. \subsection{Data-level temporal data augmentations} Table \ref{table:taugres} presents the recognition results on the UCF-101 validation set for the VIPriors challenge. For all result tables, \textbf{boldface} indicates the best results, and an \underline{underline} indicates the second best. RandAugment-spatial indicates an original implementation without temporal variations. In the temporal version, M1 of Fig. \ref{fig:randaugt} is sampled from $Uniform(0.1, M2)$, and M2 is set to M of the spatial RandAugment. For temporal$+$, M1 and M2 are set to M$-\delta$ and M$+\delta$, respectively, where $\delta$ is sampled from $Uniform(0, 0.5\times M)$. For Mix in Table \ref{table:taugres}, it randomly chooses the spatial or temporal$+$ variations. The results reveal that solely applying RandAugment drastically improves recognition performance. Among them, temporally expanded RandAugment-T (temporal$+$) exhibits the best performance. For all RandAugment results, to produce the best accuracy, a grid search of two hyper-parameters: N $\in[1, 2, 3]$ and M $\in[3, 5, 10]$, is used. \begin{table}[!t] \setlength{\tabcolsep}{3pt} \centering \begin{minipage}{.5\linewidth} \centering \caption{\small{Data Augmentation Results}} \label{table:taugres} \begin{adjustbox}{width=1.0\linewidth} \begin{tabular}{l|l|cc} \toprule & Range & Top-1 Acc. & Top-5 Acc. \\ \midrule Baseline & & 49.37 & 73.62 \\ RandAugment & Spatial & 66.87 & 88.04 \\ & Temporal & 67.33 & 88.42 \\ & Temporal+ & \textbf{69.23} & \textbf{89.20} \\ & Mix & \underline{68.24} & \underline{89.25} \\ \end{tabular} \end{adjustbox} \end{minipage} \quad% \begin{minipage}{.4\linewidth} \centering \caption{\small{Data Deleting Results}} \label{table:toutres} \begin{adjustbox}{width=1.0\linewidth} \begin{tabular}{l|cc} \toprule & Top-1 Acc. & Top-5 Acc. \\ \midrule Baseline & \textbf{49.37} & \textbf{73.62} \\ CutOut & 46.01 & 69.80 \\ FrameCutOut & \underline{47.60} & 71.32 \\ CubeCutOut & 47.45 & \underline{72.06} \\ \end{tabular} \end{adjustbox} \end{minipage}% \vspace{-0.4cm} \end{table} \begin{table}[!t] \setlength{\tabcolsep}{3pt} \centering \begin{minipage}{.46\linewidth} \centering \caption{\small{Data Cut-and-paste Results}} \label{table:tmixres} \begin{adjustbox}{width=1.0\linewidth} \begin{tabular}{l|cc} \toprule & Top-1 Acc. & Top-5 Acc. 
\\ \midrule Baseline & 49.37 & 73.62 \\ CutMix($\alpha=2$) & 50.81 & \underline{75.62} \\ FrameCutMix($\alpha=2$) & 51.29 & 74.99 \\ FrameCutMix($\alpha=5$) & \textbf{53.10} & \textbf{76.61} \\ CubeCutMix($\alpha=2$) & \underline{51.86} & 74.34 \\ CubeCutMix($\alpha=5$) & 51.81 & 75.16 \\ \end{tabular} \end{adjustbox} \end{minipage} \quad \quad \begin{minipage}{.4\linewidth} \centering \caption{\small{Data Blending Results}} \label{table:tblendres} \begin{adjustbox}{width=1.0\linewidth} \begin{tabular}{l|cc} \toprule & Top-1 Acc. & Top-5 Acc. \\ \midrule Baseline & 49.37 & 73.62 \\ MixUp & 59.60 & \underline{82.56} \\ FadeMixUp & 59.22 & 82.24 \\ \midrule CutMixUp & 59.35 & 81.99 \\ FrameMixUp & \textbf{60.67} & \textbf{83.47} \\ CubeMixUp & \underline{59.85} & 82.20 \\ \end{tabular} \end{adjustbox} \end{minipage} \quad% \vspace{-0.4cm} \end{table} \subsection{Data-level temporal deleting, cut-and-pasting, and blending} The results of deleting data (CutOut, FrameCutOut, and CubeCutOut) are described in Table \ref{table:toutres}. For CutOut, an $80\times 80$ spatial patch is randomly deleted, and for FrameCutOut, 16 frames are randomly deleted. For CubeCutOut, an $80\times 80\times 16$ cube is randomly deleted. The results reveal that deleting patches, frames, or spatiotemporal cubes reduces recognition performance in a limited number of training datasets. Among them, CutOut exhibits the worst performance. For data cut-and-pasting, like that of CutMix~\cite{cutmix} and its extensions, the results are described in Table \ref{table:tmixres}. We apply the mixing probability of 0.5 for all methods and employ different hyper-parameters $\alpha$. Because the object size in the action recognition dataset is smaller than that in ImageNet~\cite{krizhevsky2012imagenet}, the mixing ratio should be sampled in a region close to 0.5 by sampling the large $\alpha$ in the beta distribution. The results demonstrate that the temporal and spatiotemporal extensions outperform the spatial-only mixing strategy. Because the probability of object occlusion during temporal mixing is lower than during spatial mixing, the performance of FrameCutMix is the most improved. Finally, for data blending, compared to MixUp~\cite{mixmatch} and CutMixUp~\cite{cutblur}, the temporal and spatiotemporal extensions show slightly superior performance, which is described in Table \ref{table:tblendres}. Compared to deleting and cut-and-pasting augmentations, blending presents the best performances. Because the number of training data is limited, a linear convex combination of samples easily and effectively augments the sample space. \begin{table}[!t] \centering \caption{\small{Temporal Augmentation Results on HMDB51 Dataset}} \resizebox{1.0\linewidth}{!}{ \begin{tabular}{l|cc|cc|cc|cc} \toprule & \multicolumn{2}{c}{Split-1} & \multicolumn{2}{c}{Split-2} & \multicolumn{2}{c}{Split-3} & \multicolumn{2}{c}{Average}\\ \cmidrule{2-9} & Top-1 Acc. & Top-5 Acc.& Top-1 Acc. & Top-5 Acc. & Top-1 Acc. & Top-5 Acc. & Top-1 Acc. & Top-5 Acc. 
\\ \midrule Baseline & 36.60 & 67.25 & 37.19 & 65.75 & 32.88 & 65.82 & 35.56 & 66.27 \\ \midrule RandAug & \underline{47.45} & \underline{79.21} & \underline{47.12} & \underline{76.86} & \underline{47.45} & \underline{77.97} & \underline{47.34} & \underline{78.01} \\ RandAug-T & \textbf{48.17} & \textbf{79.35} & \textbf{47.84} & \textbf{77.00} & \textbf{48.37} & \textbf{78.17} & \textbf{48.13} & \textbf{78.17} \\ \midrule CutOut & \textbf{34.71} & \textbf{65.49} & \textbf{32.35} & 63.79 & \underline{31.76} & \underline{62.94} & \textbf{32.94 }& \textbf{64.07} \\ FrameCutOut & 31.05 & 61.57 & \underline{32.16} & \textbf{65.36} & \textbf{31.87} & \textbf{64.18} & 31.69 & \underline{63.70} \\ CubeCutOut & \underline{33.01} & \underline{63.99} & 32.04 & \underline{64.25} & 30.59 & 62.81 & \underline{31.88} & 63.68 \\ \midrule CutMix & 33.95 & 64.27 & 33.69 & \underline{66.84} & 31.24 & \underline{63.53} & 32.96 & 64.88 \\ FrameCutMix & \underline{34.97} & \textbf{65.56} & \underline{34.84} & \textbf{67.91} & \underline{33.27} & \underline{63.53} & \underline{34.36} & \underline{65.67} \\ CubeCutMix & \textbf{35.10} & \underline{65.10} & \textbf{35.95} & 65.62 & \textbf{36.54} & \textbf{67.97} & \textbf{35.86} & \textbf{66.23} \\ \midrule MixUp & 38.95 & 68.10 & \textbf{40.72} & 70.92 & \underline{40.20} & 71.31 & 39.96 & 70.11 \\ CutMixUp &\textbf{ 40.92} & \textbf{71.07} &40.16 & 71.55 & 39.28 & \underline{71.48} & \underline{40.12} & \underline{71.37} \\ FrameMixUp & 40.33 & \underline{70.98} & 40.52 & 70.85 & 39.02 & 70.65 & 39.96 & 70.83 \\ CubeMixUp & \underline{40.72} & 70.65 & \underline{40.70} & \textbf{72.88} & \textbf{40.92} & \textbf{71.83} & \textbf{40.78} & \textbf{71.79} \\ FadeMixUp & 39.80 & 70.39 & 40.46 & \underline{71.70} & 39.61 & 70.00 & 39.96 & 70.70 \\ \bottomrule \end{tabular}} \label{tb:hmdb51} \end{table} \begin{table}[!t] \centering \caption{\small{Model Evaluation for VIPriors Challenge}} \resizebox{0.85\linewidth}{!}{ \begin{tabular}{cc|c|c|c|c|cc} \toprule & Train Data & Test Data & Augmentation & Regularization & Others & Top-1 Acc. & Top-5 Acc. \\ \midrule & Train & Val & & & & 49.37 & 73.62 \\ \midrule & Train & Val & & FrameMixUp & & 60.67 & 83.47 \\ & Train & Val & RandAug & & & 66.87 & 88.04 \\ & Train & Val & RandAug-T & & & \underline{69.23} & 89.20 \\ & Train & Val & RandAug-T & FadeMixUp & & 68.73 & \underline{89.27} \\ & Train & Val & RandAug-T & FrameMixUp & & \textbf{69.70} & \textbf{89.84} \\ \midrule & Train+Val & Test & & & & 68.99 & - \\ & Train+Val & Test & RandAug-T & & & 81.43 & - \\ & Train+Val & Test & RandAug-T & FadeMixUp & & \underline{82.16} & - \\ & Train+Val & Test & RandAug-T & All Methods & Ensemble & \textbf{86.04} & - \\ \bottomrule \end{tabular}} \label{tb:challenge} \end{table} \begin{table}[!t] \centering \caption{\small{Comparison between Entries of VIPriors Challenge}} \resizebox{0.75\linewidth}{!}{ \begin{tabular}{cc|c|c|c|c} \toprule & Entry & Backbone & Two-stream & Ensemble & Top-1 Acc. 
\\ \midrule & 1st place team & I3D, C3D, 3D-ResNet, R(2+1)D & \cmark & Across Model & \textbf{90.8} \\ & 2nd place team~\cite{chen2020viprior} & TCDC & \cmark & Within Model & \underline{88.3} \\ & 3rd place team~\cite{luo2020viprior} & SlowFast50, TSM & \cmark & Across Model & 87.6 \\ \midrule & Ours & SlowFast50 & & & 82.2 \\ & Ours & SlowFast50 & & Within Model & 86.0 \\ \bottomrule \end{tabular}} \label{tb:challenge_entry} \end{table}
\subsection{Results on HMDB-51 dataset} To assess generalization to other datasets, we train and evaluate on the HMDB-51 dataset with its original splits. Generally, recognition performance on HMDB-51 is lower than on UCF-101 due to the limited number of training samples. We use the same model and hyper-parameters as for UCF-101. The results in Table~\ref{tb:hmdb51} indicate that the temporal extensions generally outperform the spatial-only versions and that, similar to UCF-101, RandAugment and the blending methods achieve the best accuracy.
\begin{figure*}[!t] \centering \subfloat[\small{(a) Sample clip A: \textit{FrisbeeCatch}}] {\includegraphics[width=0.495\linewidth]{./fig/cam/blend/A13.jpg}}\ \hfill \subfloat[\small{(b) Sample clip B: \textit{JugglingBalls}}] {\includegraphics[width=0.495\linewidth]{./fig/cam/blend/B94.jpg}}\ \\[-2ex] \subfloat[\small{(c) MixUp-ed Clip}] {\includegraphics[width=0.495\linewidth]{./fig/cam/blend/MixUp.jpg}}\ \hfill \subfloat[\small{(d) FadeMixUp-ed Clip}] {\includegraphics[width=0.495\linewidth]{./fig/cam/blend/FadeMixUp.jpg}}\ \\[-2ex] \subfloat[\small{(e) CAM for \textit{FrisbeeCatch} on (c)}] {\includegraphics[width=0.495\linewidth]{./fig/cam/blend/MixUp_A.jpg}}\ \hfill \subfloat[\small{(f) CAM for \textit{FrisbeeCatch} on (d)}] {\includegraphics[width=0.495\linewidth]{./fig/cam/blend/FadeMixUp_A.jpg}}\ \\[-2ex] \subfloat[\small{(g) CAM for \textit{JugglingBalls} on (c)}] {\includegraphics[width=0.495\linewidth]{./fig/cam/blend/MixUp_B.jpg}}\ \hfill \subfloat[\small{(h) CAM for \textit{JugglingBalls} on (d)}] {\includegraphics[width=0.495\linewidth]{./fig/cam/blend/FadeMixUp_B.jpg}}\ \\[-2ex] \subfloat[\small{(i) CAM for \textit{FrisbeeCatch} on (a)}] {\includegraphics[width=0.495\linewidth]{./fig/cam/blend/MixUp_CAM.jpg}}\ \hfill \subfloat[\small{(j) CAM for \textit{FrisbeeCatch} on (a)}] {\includegraphics[width=0.495\linewidth]{./fig/cam/blend/FadeMixUpCAM.jpg}}\ \caption{Class activation maps. \textit{Left}: MixUp, \textit{Right}: FadeMixUp} \label{fig_camforblend} \end{figure*}
\subsection{1st VIPriors action recognition challenge} Based on these comprehensive experimental results, we participated in the 1st VIPriors action recognition challenge. In this challenge, pre-training and external datasets are not allowed. The performance of various models is described in Table~\ref{tb:challenge}. On the validation set, applying both RandAugment-T and FrameMixUp performs best. For the test set, 3,783 videos are provided without ground truths; therefore, we report the results based on the challenge leaderboard. A combination of the training and validation sets, comprising 9,537 videos, is used to train the final challenge entries. From a baseline accuracy of 68.99\%, applying RandAugment-T alone improves the performance to 81.43\%. Finally, we submitted an ensemble of models trained with RandAugment-T and the various mixing augmentations, which produces 86.04\% top-1 accuracy. The results, including the other challenge entries, are described in Table~\ref{tb:challenge_entry}.
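For reference, the clip-level inference used above (uniform temporal sampling of 10 clips, or 30 for the challenge, followed by softmax averaging, and averaging over several trained models for the ensemble entry) can be summarized by the schematic sketch below; the model interface, the clip length, and the absence of padding for short videos are simplifying assumptions.
\begin{lstlisting}
import numpy as np

def predict_video(models, video, num_clips=10, clip_len=64):
    # video: array of shape (T, H, W, C); models: list of callables that map
    # a clip of shape (clip_len, H, W, C) to class logits. A single-element
    # list corresponds to one model; several elements form an ensemble.
    T = video.shape[0]
    starts = np.linspace(0, max(T - clip_len, 0), num_clips).astype(int)
    probs = []
    for s in starts:                         # uniform sampling along the temporal axis
        clip = video[s:s + clip_len]
        for model in models:
            logits = model(clip)
            e = np.exp(logits - logits.max())
            probs.append(e / e.sum())        # softmax
    return np.mean(probs, axis=0)            # averaged class probabilities
\end{lstlisting}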
The 1st place team proposes a two-stream multi-scale spatiotemporal fusion strategy based on hand-crafted optical flow and various 3D ConvNets. The 2nd place team~\cite{chen2020viprior} also proposes a two-stream network, called 3D Temporal Central Difference Convolution (TCDC), based on a C3D backbone. The 3rd place team~\cite{luo2020viprior} combines the SlowFast network and the Temporal Shift Module (TSM)~\cite{lin2019tsm} in a two-stream setup. Although our final challenge results are inferior to these methods, our framework is much simpler and remains competitive without any two-stream strategy or cross-model ensemble.
\subsection{Discussions} \subsubsection{Why are the improvements not large?} Although the temporal extensions generally outperform the spatial-only versions of the data augmentation algorithms, the performance improvements may not be large. The possible reasons are three-fold: the lack of sufficient training data, the lack of temporal perturbation in the datasets, and the fact that the datasets used for the experiments consist of trimmed videos. Both UCF-101 and HMDB-51 contain little temporal perturbation; therefore, applying spatial augmentation is already sufficient to learn the context. Furthermore, both datasets are trimmed and contain few temporal occlusions; therefore, little room is left to learn the ability to localize temporally. Compared to image datasets, the action region is relatively small, so removing spatial regions in the deleting and cut-and-pasting augmentations can hurt basic recognition performance when the amount of training data is not adequate. In contrast, although blending produces unnatural images, as noted in~\cite{cutmix}, it can exploit the full region of the frames and therefore yields reasonable performance improvements.
\begin{figure*}[!t] \centering \subfloat[\small{(a) Sample clip A: \textit{Swing}}] {\includegraphics[width=0.495\linewidth]{./fig/cam/mix/A18.jpg}}\ \hfill \subfloat[\small{(b) Sample clip B: \textit{Basketball}}] {\includegraphics[width=0.495\linewidth]{./fig/cam/mix/B23.jpg}}\ \\[-2ex] \subfloat {\includegraphics[width=0.24\linewidth]{./fig/cam/mix/MixUp.jpg}}\ \hfill \subfloat {\includegraphics[width=0.24\linewidth]{./fig/cam/mix/FrameMix.jpg}}\ \hfill \subfloat {\includegraphics[width=0.24\linewidth]{./fig/cam/mix/CutMix.jpg}}\ \hfill \subfloat {\includegraphics[width=0.24\linewidth]{./fig/cam/mix/CubeMix.jpg}}\ \\[-2ex] \subfloat {\includegraphics[width=0.24\linewidth]{./fig/cam/mix/MixUp_A.jpg}}\ \hfill \subfloat {\includegraphics[width=0.24\linewidth]{./fig/cam/mix/FrameMix_A.jpg}}\ \hfill \subfloat {\includegraphics[width=0.24\linewidth]{./fig/cam/mix/CutMix_A.jpg}}\ \hfill \subfloat {\includegraphics[width=0.24\linewidth]{./fig/cam/mix/CubeMix_A.jpg}}\ \\[-2ex] \subfloat {\includegraphics[width=0.24\linewidth]{./fig/cam/mix/MixUp_B.jpg}}\ \hfill \subfloat {\includegraphics[width=0.24\linewidth]{./fig/cam/mix/FrameMix_B.jpg}}\ \hfill \subfloat {\includegraphics[width=0.24\linewidth]{./fig/cam/mix/CutMix_B.jpg}}\ \hfill \subfloat {\includegraphics[width=0.24\linewidth]{./fig/cam/mix/CubeMix_B.jpg}}\ \\[-2ex] \subfloat[\small{(c) MixUp}] {\includegraphics[width=0.24\linewidth]{./fig/cam/mix/MixUp_Pure.jpg}}\ \hfill \subfloat[\small{(d) FrameCutMix}] {\includegraphics[width=0.24\linewidth]{./fig/cam/mix/FrameMix_Pure.jpg}}\ \hfill \subfloat[\small{(e) CutMix}] {\includegraphics[width=0.24\linewidth]{./fig/cam/mix/CutMix_Pure.jpg}}\ \hfill \subfloat[\small{(f) CubeCutMix}] {\includegraphics[width=0.24\linewidth]{./fig/cam/mix/CubeMix_Pure.jpg}}\ \caption{Class activation maps. For (c)-(f), from the top to the bottom row: mixed clips, CAMs for {\textit{Swing}}, CAMs for {\textit{Basketball}}, and CAMs for {\textit{Swing}} on the pure clip (a), respectively.} \label{fig_camforstloc} \end{figure*}
\subsubsection{Spatiotemporal class activation map visualization} We visualize the learned features using the class activation map~\cite{cam} in Fig.~\ref{fig_camforblend}. In the SlowFast network, we use the features of the last convolutional layer of the slow pathway. Fig.~\ref{fig_camforblend} (a) and (b) present example clips. Fig.~\ref{fig_camforblend} (c) and (d) show the MixUp-ed and FadeMixUp-ed clips, respectively. Comparing Fig.~\ref{fig_camforblend} (f) and (h) with Fig.~\ref{fig_camforblend} (e) and (g), the features of FadeMixUp are more temporally localized than those of MixUp. Comparing Fig.~\ref{fig_camforblend} (j) with Fig.~\ref{fig_camforblend} (i), the activations of FadeMixUp are better spatiotemporally localized than those of MixUp on the pure clip A.
Fig.~\ref{fig_camforstloc} compares the spatiotemporal localization abilities of MixUp, CutMix, FrameCutMix, and CubeCutMix. Compared to MixUp, as stated in the CutMix paper~\cite{cutmix}, CutMix can spatially localize the basketball court or the person on a swing. However, compared to CubeCutMix, the activations of CutMix are not well localized temporally. Like MixUp, FrameCutMix cannot localize features spatially, but it can separate the activation weights along the temporal axis.
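The class activation maps above can be reproduced with a short computation once the last convolutional feature maps of the slow pathway and the final classifier weights are extracted (e.g., with standard feature hooks); the sketch below is schematic, and the feature shapes and the min-max normalization are assumptions for visualization rather than part of the method.
\begin{lstlisting}
import numpy as np

def class_activation_map(features, fc_weights, class_idx):
    # features: last-layer convolutional features of shape (K, T, H, W),
    #           e.g. from the slow pathway of the SlowFast network.
    # fc_weights: classifier weight matrix of shape (num_classes, K).
    w = fc_weights[class_idx]                                 # (K,)
    cam = np.tensordot(w, features, axes=(0, 0))              # weighted sum -> (T, H, W)
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalize for display
    return cam  # upsample to the input resolution to overlay on the frames
\end{lstlisting}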
\section{Conclusion} In this paper, we proposed several extensions of data-level augmentation and of data-level deleting, blending, and cut-and-pasting algorithms, from the spatial (image) domain to the temporal and spatiotemporal (video) domain. Although spatial data augmentation alone already increases recognition performance when the amount of training data is limited, extending it temporally and spatiotemporally boosts performance further. Moreover, models trained with temporal augmentation acquire a temporal and spatiotemporal localization ability that models trained only with spatial augmentation do not. Our next step is an extension to large-scale datasets, such as Kinetics~\cite{carreira2017quo}, and to untrimmed videos.
\section*{Acknowledgments} This research was supported by the R\&D program for Advanced Integrated-intelligence for Identification (AIID) through the National Research Foundation of Korea (NRF) funded by the Ministry of Science and ICT (NRF-2018M3E3A1057289). \clearpage \bibliographystyle{utils/splncs04} \bibliography{egbib} \end{document}
https://openreview.net/forum?id=tqz0rQvz_58
https://arxiv.org/abs/2008.05721
Please evaluate the strengths and weaknesses of the paper, providing justification for your rating.
Good execution of a simple idea [Summary] In 2-3 sentences, describe the key ideas, experiments, and their significance. The authors extend popular data augmentation methods to the temporal domain in a straightforward manner. Experiments show minor improvements over the spatial data augmentations. [Strengths] What are the strengths of the paper? Clearly explain why these aspects of the paper are valuable. Simple, powerful idea; simple implementations; clear explanations; self-critical in analyzing the magnitude of their contribution; extensive evaluations. [Weaknesses] What are the weaknesses of the paper? Clearly explain why these aspects of the paper are weak. Marginally related to workshop topic. [Overall rating] Paper rating: Strong accept [Detailed comments] Additional comments regarding the paper (e.g. typos or other possible improvements you would like to see for the camera-ready version of the paper, if any.) - Grammar: lines 40, 254, 521 (whole paragraph needs revision) - Typos: line 130 "squre", 141 "researches" - Figure 2 could have been pseudocode Rating: 9: Top 15% of accepted papers, strong accept Confidence: 3: The reviewer is fairly confident that the evaluation is correct
\documentclass[runningheads]{styles/llncs} \usepackage{graphicx} \usepackage{comment} \usepackage{amsmath,amssymb} % \usepackage{color} \usepackage{adjustbox} \usepackage{subfig} \captionsetup[subfigure]{labelformat=empty} \usepackage{multirow} \usepackage{amssymb}% \usepackage{pifont}% \usepackage{multirow, boldline} \usepackage{ctable} \usepackage{xcolor} \usepackage[bottom]{footmisc} \usepackage{listings} \usepackage{wrapfig} \usepackage{makecell} \usepackage{hyperref} \hypersetup{pagebackref=true,breaklinks=true,letterpaper=true,colorlinks,bookmarks=false} \usepackage{breakcites} \hypersetup{ colorlinks = true, citecolor = green } \hypersetup{linkcolor=red} \newcommand*\samethanks[1][\value{footnote}]{\footnotemark[#1]} \newcommand{\etal}{\textit{et al}.} \newcommand{\ie}{\textit{i.e.} } \newcommand{\eg}{\textit{e.g.} } \newcommand{\ours}{DMV} \newcommand{\ourss}{DMV } \newcommand{\cmark}{\ding{51}}% \definecolor{darkgreen}{rgb}{0.0, 0.6, 0.2} \definecolor{MyRed}{rgb}{0.8,0.2,0} \def\red#1{\textcolor{MyRed}{#1}} \definecolor{MyBlue}{rgb}{0,0,1.0} \def\first#1{\textcolor{MyBlue}{#1}} \definecolor{dkgreen}{rgb}{0,0.6,0} \definecolor{gray}{rgb}{0.5,0.5,0.5} \definecolor{mauve}{rgb}{0.58,0,0.82} \lstset{frame=topbottom, language=Python, aboveskip=3mm, belowskip=3mm, showstringspaces=false, columns=flexible, basicstyle={\scriptsize\ttfamily}, numbers=none, numberstyle=\tiny\color{gray}, keywordstyle=\color{blue}, commentstyle=\color{dkgreen}, stringstyle=\color{mauve}, breaklines=true, breakatwhitespace=true, tabsize=3 } \newcommand{\tref}[1]{Tab.~\ref{#1}} \newcommand{\Tref}[1]{Table~\ref{#1}} \newcommand{\eref}[1]{Eq.~(\ref{#1})} \newcommand{\Eref}[1]{Equation~(\ref{#1})} \newcommand{\fref}[1]{Fig.~\ref{#1}} \newcommand{\Fref}[1]{Figure~\ref{#1}} \newcommand{\sref}[1]{Sec.~\ref{#1}} \newcommand{\Sref}[1]{Section~\ref{#1}} \newcommand{\dummyfig}[1]{ \centering \fbox{ \begin{minipage}[c][0.33\textheight][c]{0.5\textwidth} \centering{#1} \end{minipage} } } \newcommand{\similarity}{s} \newcommand{\scoremap}{M} \begin{document} \pagestyle{headings} \mainmatter \def\ECCVSubNumber{0000} % \def\JL#1{{\color{red}JL: \it #1}} \title{Learning Temporally Invariant and \\ Localizable Features via Data Augmentation \\ for Video Recognition} \titlerunning{Temporally Invariant Data Augmentation for Video Recognition} \author{Taeoh Kim\thanks{Equal contribution}\inst{1} \and Hyeongmin Lee\samethanks\inst{1} \and MyeongAh Cho\samethanks\inst{1} \and Ho Seong Lee\inst{2} \and \\ Dong Heon Cho\inst{2} \and Sangyoun Lee\inst{1}\thanks{Corresponding Author}} \authorrunning{T. Kim et al} \institute{Yonsei University, Seoul, South Korea \and Cognex Deep Learning Lab, Seoul, South Korea \\ \email{\{kto, minimonia, maycho0305, syleee\}@yonsei.ac.kr} \\ \email{\{hoseong.lee, david.cho\}@cognex.com}} \maketitle \begin{abstract} Deep-Learning-based video recognition has shown promising improvements along with the development of large-scale datasets and spatiotemporal network architectures. In image recognition, learning spatially invariant features is a key factor in improving recognition performance and robustness. Data augmentation based on visual inductive priors, such as cropping, flipping, rotating, or photometric jittering, is a representative approach to achieve these features. Recent state-of-the-art recognition solutions have relied on modern data augmentation strategies that exploit a mixture of augmentation operations. 
In this study, we extend these strategies to the temporal dimension of videos in order to learn temporally invariant or temporally localizable features, covering temporal perturbations and complex actions in videos. With our novel temporal data augmentation algorithms, video recognition performance is improved over spatial-only data augmentation using only a limited amount of training data, including on the 1st Visual Inductive Priors (VIPriors) for data-efficient action recognition challenge. Furthermore, the learned features are temporally localizable, which cannot be achieved with spatial augmentation algorithms. Our source code is available at \url{https://github.com/taeoh-kim/temporal_data_augmentation}. \end{abstract}
\section{Introduction} Many augmentation techniques have been proposed to increase recognition performance and robustness when training data are limited, or to prevent overconfidence and overfitting on large-scale data such as ImageNet~\cite{krizhevsky2012imagenet}. These techniques can be categorized into data-level augmentation~\cite{krizhevsky2012alexnet, vggnet, autoaugment, fastautoaugment, randaugment, augmix, cutout, hideandseek}, data-level mixing~\cite{mixup, cutmix, cutblur, attributemix, attentivecutmix, smoothmix}, and in-network augmentation~\cite{dropout, dropblock, stochasticdepth, shakeshake, shakedrop, regvideo, manimixup}. Data augmentation is an important component of recent state-of-the-art self-supervised learning~\cite{moco, simclr, pirl}, semi-supervised learning~\cite{uda, mixmatch, remixmatch, fixmatch}, self-learning~\cite{noisystudent}, and generative models~\cite{crgan, diffauggan, bcrgan, dagan} because of its ability to learn invariant features.
The purpose of data augmentation in image recognition is to enhance generalizability by learning spatially invariant features. Augmentations such as geometric (cropping, flipping, rotating, \textit{etc.}) and photometric (brightness, contrast, color, \textit{etc.}) transformations can model uncertain variances in a dataset. Recent algorithms have exhibited state-of-the-art performance in terms of the complexity-accuracy trade-off~\cite{fastautoaugment, randaugment} or robustness~\cite{robustness, augmix}. Some approaches~\cite{cutmix, cutblur} learn localizable features that can be transferred to localization-related tasks, such as object detection and image captioning; they simultaneously learn what to and where to focus for recognition.
Although numerous such algorithms have evolved in image recognition, data augmentation and regularization for video recognition have rarely been explored. In videos, temporal variations and perturbations should be considered. For example, Fig.~\ref{fig_perturbation} depicts temporal perturbations across frames in a video. Such a perturbation can be geometric (translation, rotation, scale, and so on) or photometric (brightness, contrast, and so on). To handle such perturbations, both well-studied spatial augmentation and temporally varying data augmentation should be considered. In this paper, we propose several extensions for temporal robustness; more specifically, temporally invariant and localizable features can be modeled via data augmentation. We build upon two recent families of well-studied spatial augmentation techniques: data-level augmentation and data-level mixing.
To the best of our knowledge, this is the first study that deeply analyzes temporal perturbation modeling via data augmentation in video recognition. The contributions of this paper can be summarized as follows:
\begin{itemize}
\item {We propose an extension of RandAugment~\cite{randaugment}, called RandAugment-T, to conduct data-level augmentation for video recognition. It can model temporally varying levels of augmentation operations.}
\item {We also propose the temporal extensions of CutOut~\cite{cutout}, MixUp~\cite{mixup}, and CutMix~\cite{cutmix} as examples of deleting, blending, and cut-and-pasting data samples. Considering the temporal dimension improves recognition performance and the temporal localization abilities.}
\item {On the UCF-101~\cite{soomro2012ucf101} subset of the 1st Visual Inductive Priors (VIPriors) data-efficient action recognition challenge and on the HMDB-51~\cite{kuehne2011hmdb} dataset, the proposed extensions improve recognition performance over the spatial-only versions on a simple baseline.}
\end{itemize}
\begin{figure*}[!t] \centering \subfloat {\includegraphics[width=0.155\linewidth]{./fig/g1.png}}\ \subfloat {\includegraphics[width=0.155\linewidth]{./fig/g2.png}}\ \subfloat {\includegraphics[width=0.155\linewidth]{./fig/g3.png}}\ \hfill \subfloat {\includegraphics[width=0.155\linewidth]{./fig/p1.png}}\ \subfloat {\includegraphics[width=0.155\linewidth]{./fig/p2.png}}\ \subfloat {\includegraphics[width=0.155\linewidth]{./fig/p3.png}}\ \\ \caption{Example clips of temporal perturbations. \textit{Left}: Geometric perturbation across frames in a sky-diving video due to extreme camera and object movement. \textit{Right}: Photometric perturbation across frames in a basketball stadium due to camera flashes.} \label{fig_perturbation} \end{figure*}
\section{Related Works} \subsection{Data augmentation} \subsubsection{Data-level augmentation} First, to improve generalization on a dataset and to reduce the overfitting of early networks, various data augmentation methods, such as rotate, flip, crop, color jitter~\cite{krizhevsky2012imagenet}, and scale jitter~\cite{vggnet}, have been proposed. CutOut~\cite{cutout} deletes a square-shaped box at a random location to encourage the network to focus on various properties of images rather than relying only on the most discriminative regions. Hide-and-Seek~\cite{hideandseek} is a similar approach, but it deletes multiple regions that are sampled from grid patches. Recently, methodologies combining more than one augmentation operation have been proposed. Cubuk~\etal~\cite{autoaugment} propose a reinforcement learning-based approach to search for the optimal data augmentation policy on a given dataset. However, because the search space is too large, it requires extensive time to determine the optimal policy. Although an approach to mitigate this problem has been proposed~\cite{fastautoaugment}, it is still difficult and time-consuming to determine the optimal augmentation strategy. To solve this, Cubuk~\etal~\cite{randaugment} propose RandAugment, which randomly samples augmentation operations from a candidate list and cascades them. Similarly, Hendrycks~\etal~\cite{augmix} propose an approach called AugMix that blends, in parallel, images augmented by operations sampled from a set of candidates. These techniques can model uncertain spatial perturbations, such as geometric transforms, photometric transforms, or both.
Because these studies have focused on static images, applying their operations identically to every frame is the straightforward extension to videos. For videos, Ji~\etal~\cite{ji2019learning} propose temporal augmentation operations called time warping and time masking, which randomly adjust or skip temporal frames. In contrast, in this paper, we focus on temporally varying augmentation.
\subsubsection{Data-level mixing} Together with data augmentation algorithms, augmentation strategies using multiple samples have been proposed. Zhang~\etal~\cite{mixup} propose an approach called MixUp that manipulates a sample using more than one image. This approach makes a new sample by blending two arbitrary images and interpolating their one-hot ground-truth labels, which encourages the model to behave linearly in-between training examples. CutMix~\cite{cutmix} combines the concepts of CutOut and MixUp, taking the best of both worlds. It replaces the square-shaped region deleted in CutOut with a patch from another image. This encourages the model to learn not only what to recognize but also where to recognize it, and can be interpreted as spatially localizable feature learning. Inspired by CutMix, several methods have been proposed. CutBlur~\cite{cutblur} proposes a CutMix-like approach to the restoration problem, cut-and-pasting between low-resolution and high-resolution images. The same work also proposes CutMixUp, which combines MixUp and CutMix: the two images are blended inside one of the CutMix masks to relax extreme changes in boundary pixels. Attribute~Mix~\cite{attributemix} uses masks of any shape, not only square-shaped masks. Attentive~CutMix~\cite{attentivecutmix} also discards the square-shaped mask; it uses multiple patches sampled from a grid and replaces these regions with another image. Smoothmix~\cite{smoothmix} focuses on the 'strong edge' problem caused by the boundary of the masks. Although numerous data manipulation methods, including deleting, blending, and cut-and-pasting, have successfully augmented many image datasets, their ability to learn temporally invariant and localizable features when applied to video recognition has not yet been explored.
\subsubsection{In-network augmentation} Apart from the data-level approaches, several studies have proposed in-network augmentation algorithms. These usually design stochastic networks that perform augmentation at the feature level, to reduce predictive variance and to learn higher-level augmented features rather than features from low-level augmentations. Dropout~\cite{dropout} is the first such approach to regularize overfitted models. Other approaches, such as DropBlock~\cite{dropblock}, Stochastic depth~\cite{stochasticdepth}, Shake-Shake~\cite{shakeshake}, and ShakeDrop~\cite{shakedrop} regularization, have been proposed. Manifold-MixUp~\cite{manimixup} proposes a MixUp-like mixing strategy applied in the feature space instead. The most similar approach to this study is a regularization method for video recognition called Random Mean Scaling~\cite{regvideo}, which randomly adjusts spatiotemporal features in video networks. In contrast, our approach focuses on data-level manipulation and extends spatial-only algorithms into the temporal domain.
\subsection{Video recognition} As in image recognition, various architectures have been proposed for video action recognition to capture spatiotemporal features from videos.
In \cite{tran2015learning}, Tran \textit{et al.} proposed C3D, which extracts features containing object, scene, and action information through 3D convolutional layers and then simply passes them through a linear classifier. In \cite{tran2018closer}, a (2+1)D convolution that focuses on layer factorization rather than full 3D convolution is proposed; it is composed of a 2D spatial convolution followed by a 1D temporal convolution. In addition, the non-local block~\cite{wang2018non} and GloRe~\cite{chen2019graph} modules have been suggested to capture long-range dependencies via self-attention and graph-based modules. By plugging them into a 3D ConvNet, the network can learn long-distance relations in both space and time. Another approach is the two-stream architecture~\cite{wang2016temporal, stroud2020d3d, ryoo2019assemblenet}. In \cite{carreira2017quo}, a two-stream 3D ConvNet inflated from a deep image classification network and its pre-trained features is proposed; it achieves state-of-the-art performance by pre-training on the large-scale Kinetics action recognition dataset. Based on this architecture, Xie \textit{et al.} \cite{xie2017rethinking} combined a top-heavy model design, temporally separable convolution, and spatiotemporal feature-gating blocks to obtain low-cost and meaningful features. Recently, SlowFast~\cite{feichtenhofer2019slowfast} networks, which consist of a slow pathway for semantic information and a fast pathway for rapidly changing motion information with different frame-rate sampling, exhibit competitive performance. In addition, RESOUND~\cite{li2018resound} proposes a method to reduce the static bias of datasets, and Octave convolution~\cite{chen2019drop} reduces spatial redundancy by decomposing features by frequency. A debiasing loss function~\cite{choi2019can} is proposed to mitigate the strong scene bias of networks and focus on the actual action information.
Since the advent of the large-scale Kinetics dataset, most action recognition studies have pre-trained the backbone on Kinetics, which guarantees basic performance. However, as shown in \cite{hara2018can}, architectures with numerous parameters overfit significantly when trained from scratch on relatively small datasets, such as UCF-101~\cite{soomro2012ucf101} and HMDB-51~\cite{kuehne2011hmdb}. This indicates that training without a pre-trained backbone is a challenging issue. Compared to existing studies that focus on novel datasets and architectures, we focus on regularization techniques, such as data augmentation, that prevent overfitting by learning spatially and temporally invariant and robust features.
\section{Methods} \subsection{Data-level temporal data augmentations}
\begin{wrapfigure}{r}{0.5\linewidth} \vspace{-1.0cm}
\begin{lstlisting}
def randaugment_T(X, N, M1, M2):
  """Apply temporally varying distortions.
  Args:
    X: Input video (T x H x W)
    N: Number of augmentation transformations
       to apply sequentially.
    M1, M2: Magnitudes for both temporal ends.
  """
  ops = np.random.choice(transforms, N)
  T = len(X)
  M = np.linspace(M1, M2, T)
  for op in ops:
    X = [op(X[t], M[t]) for t in range(T)]
  return X
\end{lstlisting}
\vspace{-0.5cm} \caption{\small{Pseudo-code for RandAugment-T based on Numpy in Python. Template is borrowed from~\cite{randaugment}}} \label{fig:randaugt} \vspace{-0.5cm} \end{wrapfigure}
First, we extend the existing RandAugment~\cite{randaugment} method for video recognition. RandAugment has two hyper-parameters for optimization.
One is the number of augmentation operations to apply, N, and the other is the magnitude of the operation, M. A grid search of these two parameters in a given dataset produces state-of-the-art performance in image recognition. For video recognition, RandAugment is directly applicable to every frame of a video; however, this limits temporal perturbation modeling. To cover temporally varying transformations, we propose RandAugment-T, which linearly interpolates between two magnitudes from the first frame to the last frame in a video clip. The pseudo-code for RandAugment-T is described in Fig.~\ref{fig:randaugt}. It receives three hyper-parameters: N, M1, and M2, where N is the number of operations, which is the same as RandAugment, and M1 and M2 indicate the magnitudes for both temporal ends, which can be any combination of levels. The set of augmentation operations (\texttt{transforms} in Fig.~\ref{fig:randaugt}) is identical to RandAugment. However, \texttt{rotate}, \texttt{shear-x}, \texttt{shear-y}, \texttt{translate-x}, and \texttt{translate-y} can model temporally varying geometric transformation, such as camera or object movements (Fig.~\ref{fig:taugexample}(a)), and \texttt{solarize}, \texttt{color}, \texttt{posterize}, \texttt{contrast}, \texttt{brightness}, and \texttt{sharpness} can model photometric transformation, such as brightness or contrast changes due to the auto-shot mode in a camera (Fig. ~\ref{fig:taugexample}(b)). The remaining operations (\texttt{identity}, \texttt{autocontrast}, and \texttt{equalize}) have no magnitudes that are applied evenly across frames. \begin{figure*}[!t] \centering \subfloat {\includegraphics[width=0.8\linewidth]{./fig/translation_5frame.png}}\ \\[0.2ex] \subfloat[(a) Temporally varying geometric augmentations (Top: vertical-down translation, Bottom: clockwise rotation)] {\includegraphics[width=0.8\linewidth]{./fig/rotation_5frame.png}}\ \\ \subfloat {\includegraphics[width=0.8\linewidth]{./fig/brightness_5frame.png}}\ \\[0.2ex] \subfloat[(b) Temporally varying photometric augmentations (Top: increasing brightness, Bottom: decreasing contrast)] {\includegraphics[width=0.8\linewidth]{./fig/contrast_5frame.png}}\ \\ \caption{Example of temporally varying data augmentation operations for RandAugment-T} \label{fig:taugexample} \end{figure*} \subsection{Data-level temporal deleting, blending, and cut-and-pasting} \label{regularization} \begin{figure*}[!t] \centering \subfloat {\includegraphics[width=0.49\linewidth]{./fig/cutout_5frame.png}}\ \hfill \subfloat {\includegraphics[width=0.49\linewidth]{./fig/cutmix_5frame.png}}\ \\[-2ex] \subfloat {\includegraphics[width=0.49\linewidth]{./fig/framecutout_5frame.png}}\ \hfill \subfloat {\includegraphics[width=0.49\linewidth]{./fig/framecutmix_5frame.png}}\ \\[-2ex] \subfloat[\small{(a) \textit{Top}: CutOut~\cite{cutout}, \textit{Middle}: FrameCutOut, \textit{Bottom}: CubeCutOut}] {\includegraphics[width=0.49\linewidth]{./fig/cubecutout_5frame.png}}\ \hfill \subfloat[\small{(b) \textit{Top}: CutMix~\cite{cutmix}, \textit{Middle}: FrameCutMix, \textit{Bottom}: CubeCutMix}] {\includegraphics[width=0.49\linewidth]{./fig/cubecutmix_5frame.png}}\ \\[-2ex] \subfloat {\includegraphics[width=0.49\linewidth]{./fig/mixup_5frame.png}}\ \hfill \subfloat {\includegraphics[width=0.49\linewidth]{./fig/framemixup_5frame.png}}\ \\[-2ex] \subfloat[\small{(c) \textit{Top}: MixUp~\cite{mixup}, \textit{Bottom}: CutMixUp~\cite{cutblur}}] {\includegraphics[width=0.49\linewidth]{./fig/cutmixup_5frame.png}}\ \hfill 
\subfloat[\small{(d) \textit{Top}: FrameCutMixUp, \textit{Bottom}: CubeCutMixUp}] {\includegraphics[width=0.49\linewidth]{./fig/cubemixup_5frame.png}}\ \\[0.5ex] \subfloat[\small{(e) FadeMixUp}] {\includegraphics[width=0.49\linewidth]{./fig/fademixup_5frame.png}}\ \hfill \caption{Visual comparison of data-level deleting, blending, and cut-and-pasting for videos. Desired ground-truth labels are calculated by the ratio of each class: \textit{Fencing} and \textit{PlayingGuitar}.} \label{fig_frameworkcomparison} \end{figure*}
Regularization techniques that have been proposed for image recognition, such as CutOut~\cite{cutout}, MixUp~\cite{mixup}, and CutMix~\cite{cutmix}, can be applied identically across frames in a video. CutMixUp, a combination of MixUp and CutMix proposed in~\cite{cutblur}, can also be used to relax unnatural boundary changes. In this section, we propose temporal extensions of the above algorithms.
FrameCutOut and CubeCutOut are the temporal and spatiotemporal extensions of CutOut (Fig.~\ref{fig_frameworkcomparison} (a)), respectively. CutOut encourages the network to better use the full context of the images, rather than relying on a small portion of specific spatial regions. Similarly, FrameCutOut encourages the network to better use the full temporal context, and CubeCutOut the full spatiotemporal context.
FrameCutMix and CubeCutMix are extensions of CutMix~\cite{cutmix} (Fig.~\ref{fig_frameworkcomparison} (b)). CutMix is designed for the learning of spatially localizable features. Cut-and-paste mixing between two images encourages the network to learn where to recognize features. Similarly, FrameCutMix and CubeCutMix are designed for the learning of temporally and spatiotemporally localizable features in a video. Like CutMix, the mixing ratio $\lambda$ is sampled from the beta distribution $Beta(\alpha, \alpha)$, where $\alpha$ is a hyper-parameter, and the locations of the random frames or random spatiotemporal cubes are selected based on $\lambda$.
Following CutMixUp~\cite{cutblur}, which unifies MixUp~\cite{mixup} and CutMix~\cite{cutmix}, FrameCutMixUp and CubeCutMixUp can be designed similarly (Fig.~\ref{fig_frameworkcomparison} (c) and (d)) to relax extreme boundary changes between two samples. For these blend$+$cut-and-paste algorithms, MixUp is first applied between the two data samples with the mixing ratio $\lambda_1$, and the other hyper-parameter $\lambda_2$ is sampled from $Beta(2, 2)$. Based on $\lambda_2$, the region mask $\mathbf{M}$ is selected randomly, as in CutMix, to cut-and-paste the MixUp-ed sample and one of the two original samples. The final mixed data and desired ground-truth labels are formulated as follows:
\begin{equation}
\begin{split}
\Tilde{x} = \left\{ \begin{array}{ll} (\lambda_1 x_A + (1-\lambda_1) x_B) \odot \mathbf{M} + x_A \odot (\mathbf{1} - \mathbf{M}) & \quad \mbox{if } \lambda_1 < 0.5 \\ (\lambda_1 x_A + (1-\lambda_1) x_B) \odot \mathbf{M} + x_B \odot (\mathbf{1} - \mathbf{M}) & \quad \mbox{if } \lambda_1 \geq 0.5 \end{array} \right. \\ \Tilde{y} = \left\{ \begin{array}{ll} (\lambda_1 \lambda_2 + (1 - \lambda_2)) y_A + (1-\lambda_1) \lambda_2 y_B & \quad \mbox{if } \lambda_1 < 0.5 \\ \lambda_1 \lambda_2 y_A + (1 - \lambda_1 \lambda_2) y_B & \quad \mbox{if } \lambda_1 \geq 0.5 \end{array} \right.
\end{split}
\end{equation}
where $\Tilde{x}$, $\Tilde{y}$, and $\odot$ indicate the mixed data, the modified label, and element-wise multiplication, respectively.
Finally, we propose another extension of MixUp, called FadeMixUp, inspired by the fade-in, fade-out, and dissolve overlap effects in videos. In FadeMixUp, the mixing ratio changes smoothly across the temporal frames (Fig.~\ref{fig_frameworkcomparison} (e)): the per-frame mixing ratios $\Tilde{\lambda}_t$ are calculated by linear interpolation between $\lambda - \gamma$ and $\lambda + \gamma$, where $\lambda$ is the mixing ratio of MixUp and $\gamma$ is sampled from $Uniform(0, min(\lambda, 1-\lambda))$. Because the adjustments of the mixing ratio at both temporal ends are symmetric, the label is the same as in MixUp:
\begin{equation}
\begin{split}
\Tilde{x}_t & = \Tilde{\lambda}_t X_{A_t} + (1-\Tilde{\lambda}_t) X_{B_t} \\ \Tilde{y} & = \lambda y_A + (1-\lambda) y_B. \\
\end{split}
\label{eq:fademixup}
\end{equation}
Like the cut-and-pasting algorithms above, FadeMixUp models temporal variations and can learn temporally localizable features, but without sharp boundary changes. Because many videos contain such overlapping effects at scene changes, FadeMixUp can be applied naturally.
A summary of the deleting, blending, and cut-and-pasting data augmentation algorithms is given in Table~\ref{tb:mixcomp}. In the table, a checkmark indicates the axis (spatial or temporal) along which a method can alter pixels. Compared to the existing algorithms~\cite{cutout, cutmix, mixup, cutblur}, our proposed methods are extended temporally and spatiotemporally.
\begin{table}[!t] \centering \caption{\small{Comparison between deleting, blending, and cut-and-pasting frameworks.}} \resizebox{1.0\linewidth}{!}{ \begin{tabular}{ll|ccc|ccc|cc|ccc} \toprule & Type & \multicolumn{3}{c|}{Delete} & \multicolumn{3}{c|}{Cut-and-paste} & \multicolumn{2}{c|}{Blend} & \multicolumn{3}{c}{Blend $+$ Cut-and-paste} \\ \cmidrule{2-13} & Name & \makecell{CutOut \\ \cite{cutout}} & \makecell{Frame \\ CutOut} & \makecell{Cube\\CutOut} & \makecell{CutMix \\ \cite{cutmix}} & \makecell{Frame\\CutMix} & \makecell{Cube\\CutMix} & \makecell{MixUp \\ \cite{mixup}} & \makecell{Fade\\MixUp} & \makecell{CutMixUp\\ \cite{cutblur}} & \makecell{Frame\\CutMixUp} & \makecell{Cube\\CutMixUp} \\ \midrule Axis & Spatial & \cmark & & \cmark & \cmark & & \cmark & & & \cmark & & \cmark \\ & Temporal & & \cmark & \cmark & & \cmark & \cmark & & \cmark & & \cmark & \cmark \\ \bottomrule \end{tabular}} \label{tb:mixcomp} \end{table}
\section{Experiments} \subsection{Experimental Settings} For video action recognition, we train and evaluate the proposed method on the UCF-101~\cite{soomro2012ucf101} and HMDB-51~\cite{kuehne2011hmdb} datasets. The UCF-101 dataset originally consists of 13,320 videos with 101 classes. The dataset provides three training/testing splits, but we use the modified split provided by the 1st VIPriors action recognition challenge, which consists of 4,795 training videos and 4,742 validation videos. The HMDB-51 dataset consists of 6,766 videos with 51 classes. We use the original three training/testing splits for training and evaluation. All models are trained and evaluated on a single GTX 1080-Ti GPU and implemented using the PyTorch framework. We use SlowFast-50~\cite{feichtenhofer2019slowfast} as the backbone network with 64 temporal frames because it is lighter and faster than other networks such as C3D~\cite{tran2015learning}, I3D~\cite{carreira2017quo}, and S3D~\cite{xie2017rethinking}, and requires neither pre-training nor optical flow.
For the baseline, basic data augmentation, such as random crop with a size of 160, random scale jittering between [160, 200] for the short side of a video, and random horizontal flip, are applied. For optimization, the batch size is set to 16, the learning rate is set to 1e-4, and a weight decay of 1e-5 is used. Moreover, we incorporate the learning rate warm-up~\cite{cosinewarmup} and cosine learning rate scheduling~\cite{cosinelr} with the Adam optimizer~\cite{adam}. We train all models for 150 epochs. For evaluation, we sample 10 clips uniformly along the temporal axis and average softmax predictions. For the challenge, following \cite{feichtenhofer2019slowfast}, we sample 30 clips. \subsection{Data-level temporal data augmentations} Table \ref{table:taugres} presents the recognition results on the UCF-101 validation set for the VIPriors challenge. For all result tables, \textbf{boldface} indicates the best results, and an \underline{underline} indicates the second best. RandAugment-spatial indicates an original implementation without temporal variations. In the temporal version, M1 of Fig. \ref{fig:randaugt} is sampled from $Uniform(0.1, M2)$, and M2 is set to M of the spatial RandAugment. For temporal$+$, M1 and M2 are set to M$-\delta$ and M$+\delta$, respectively, where $\delta$ is sampled from $Uniform(0, 0.5\times M)$. For Mix in Table \ref{table:taugres}, it randomly chooses the spatial or temporal$+$ variations. The results reveal that solely applying RandAugment drastically improves recognition performance. Among them, temporally expanded RandAugment-T (temporal$+$) exhibits the best performance. For all RandAugment results, to produce the best accuracy, a grid search of two hyper-parameters: N $\in[1, 2, 3]$ and M $\in[3, 5, 10]$, is used. \begin{table}[!t] \setlength{\tabcolsep}{3pt} \centering \begin{minipage}{.5\linewidth} \centering \caption{\small{Data Augmentation Results}} \label{table:taugres} \begin{adjustbox}{width=1.0\linewidth} \begin{tabular}{l|l|cc} \toprule & Range & Top-1 Acc. & Top-5 Acc. \\ \midrule Baseline & & 49.37 & 73.62 \\ RandAugment & Spatial & 66.87 & 88.04 \\ & Temporal & 67.33 & 88.42 \\ & Temporal+ & \textbf{69.23} & \textbf{89.20} \\ & Mix & \underline{68.24} & \underline{89.25} \\ \end{tabular} \end{adjustbox} \end{minipage} \quad% \begin{minipage}{.4\linewidth} \centering \caption{\small{Data Deleting Results}} \label{table:toutres} \begin{adjustbox}{width=1.0\linewidth} \begin{tabular}{l|cc} \toprule & Top-1 Acc. & Top-5 Acc. \\ \midrule Baseline & \textbf{49.37} & \textbf{73.62} \\ CutOut & 46.01 & 69.80 \\ FrameCutOut & \underline{47.60} & 71.32 \\ CubeCutOut & 47.45 & \underline{72.06} \\ \end{tabular} \end{adjustbox} \end{minipage}% \vspace{-0.4cm} \end{table} \begin{table}[!t] \setlength{\tabcolsep}{3pt} \centering \begin{minipage}{.46\linewidth} \centering \caption{\small{Data Cut-and-paste Results}} \label{table:tmixres} \begin{adjustbox}{width=1.0\linewidth} \begin{tabular}{l|cc} \toprule & Top-1 Acc. & Top-5 Acc. 
\\ \midrule Baseline & 49.37 & 73.62 \\ CutMix($\alpha=2$) & 50.81 & \underline{75.62} \\ FrameCutMix($\alpha=2$) & 51.29 & 74.99 \\ FrameCutMix($\alpha=5$) & \textbf{53.10} & \textbf{76.61} \\ CubeCutMix($\alpha=2$) & \underline{51.86} & 74.34 \\ CubeCutMix($\alpha=5$) & 51.81 & 75.16 \\ \end{tabular} \end{adjustbox} \end{minipage} \quad \quad \begin{minipage}{.4\linewidth} \centering \caption{\small{Data Blending Results}} \label{table:tblendres} \begin{adjustbox}{width=1.0\linewidth} \begin{tabular}{l|cc} \toprule & Top-1 Acc. & Top-5 Acc. \\ \midrule Baseline & 49.37 & 73.62 \\ MixUp & 59.60 & \underline{82.56} \\ FadeMixUp & 59.22 & 82.24 \\ \midrule CutMixUp & 59.35 & 81.99 \\ FrameMixUp & \textbf{60.67} & \textbf{83.47} \\ CubeMixUp & \underline{59.85} & 82.20 \\ \end{tabular} \end{adjustbox} \end{minipage} \quad% \vspace{-0.4cm} \end{table} \subsection{Data-level temporal deleting, cut-and-pasting, and blending} The results of deleting data (CutOut, FrameCutOut, and CubeCutOut) are described in Table \ref{table:toutres}. For CutOut, an $80\times 80$ spatial patch is randomly deleted, and for FrameCutOut, 16 frames are randomly deleted. For CubeCutOut, an $80\times 80\times 16$ cube is randomly deleted. The results reveal that deleting patches, frames, or spatiotemporal cubes reduces recognition performance in a limited number of training datasets. Among them, CutOut exhibits the worst performance. For data cut-and-pasting, like that of CutMix~\cite{cutmix} and its extensions, the results are described in Table \ref{table:tmixres}. We apply the mixing probability of 0.5 for all methods and employ different hyper-parameters $\alpha$. Because the object size in the action recognition dataset is smaller than that in ImageNet~\cite{krizhevsky2012imagenet}, the mixing ratio should be sampled in a region close to 0.5 by sampling the large $\alpha$ in the beta distribution. The results demonstrate that the temporal and spatiotemporal extensions outperform the spatial-only mixing strategy. Because the probability of object occlusion during temporal mixing is lower than during spatial mixing, the performance of FrameCutMix is the most improved. Finally, for data blending, compared to MixUp~\cite{mixmatch} and CutMixUp~\cite{cutblur}, the temporal and spatiotemporal extensions show slightly superior performance, which is described in Table \ref{table:tblendres}. Compared to deleting and cut-and-pasting augmentations, blending presents the best performances. Because the number of training data is limited, a linear convex combination of samples easily and effectively augments the sample space. \begin{table}[!t] \centering \caption{\small{Temporal Augmentation Results on HMDB51 Dataset}} \resizebox{1.0\linewidth}{!}{ \begin{tabular}{l|cc|cc|cc|cc} \toprule & \multicolumn{2}{c}{Split-1} & \multicolumn{2}{c}{Split-2} & \multicolumn{2}{c}{Split-3} & \multicolumn{2}{c}{Average}\\ \cmidrule{2-9} & Top-1 Acc. & Top-5 Acc.& Top-1 Acc. & Top-5 Acc. & Top-1 Acc. & Top-5 Acc. & Top-1 Acc. & Top-5 Acc. 
\\ \midrule Baseline & 36.60 & 67.25 & 37.19 & 65.75 & 32.88 & 65.82 & 35.56 & 66.27 \\ \midrule RandAug & \underline{47.45} & \underline{79.21} & \underline{47.12} & \underline{76.86} & \underline{47.45} & \underline{77.97} & \underline{47.34} & \underline{78.01} \\ RandAug-T & \textbf{48.17} & \textbf{79.35} & \textbf{47.84} & \textbf{77.00} & \textbf{48.37} & \textbf{78.17} & \textbf{48.13} & \textbf{78.17} \\ \midrule CutOut & \textbf{34.71} & \textbf{65.49} & \textbf{32.35} & 63.79 & \underline{31.76} & \underline{62.94} & \textbf{32.94 }& \textbf{64.07} \\ FrameCutOut & 31.05 & 61.57 & \underline{32.16} & \textbf{65.36} & \textbf{31.87} & \textbf{64.18} & 31.69 & \underline{63.70} \\ CubeCutOut & \underline{33.01} & \underline{63.99} & 32.04 & \underline{64.25} & 30.59 & 62.81 & \underline{31.88} & 63.68 \\ \midrule CutMix & 33.95 & 64.27 & 33.69 & \underline{66.84} & 31.24 & \underline{63.53} & 32.96 & 64.88 \\ FrameCutMix & \underline{34.97} & \textbf{65.56} & \underline{34.84} & \textbf{67.91} & \underline{33.27} & \underline{63.53} & \underline{34.36} & \underline{65.67} \\ CubeCutMix & \textbf{35.10} & \underline{65.10} & \textbf{35.95} & 65.62 & \textbf{36.54} & \textbf{67.97} & \textbf{35.86} & \textbf{66.23} \\ \midrule MixUp & 38.95 & 68.10 & \textbf{40.72} & 70.92 & \underline{40.20} & 71.31 & 39.96 & 70.11 \\ CutMixUp &\textbf{ 40.92} & \textbf{71.07} &40.16 & 71.55 & 39.28 & \underline{71.48} & \underline{40.12} & \underline{71.37} \\ FrameMixUp & 40.33 & \underline{70.98} & 40.52 & 70.85 & 39.02 & 70.65 & 39.96 & 70.83 \\ CubeMixUp & \underline{40.72} & 70.65 & \underline{40.70} & \textbf{72.88} & \textbf{40.92} & \textbf{71.83} & \textbf{40.78} & \textbf{71.79} \\ FadeMixUp & 39.80 & 70.39 & 40.46 & \underline{71.70} & 39.61 & 70.00 & 39.96 & 70.70 \\ \bottomrule \end{tabular}} \label{tb:hmdb51} \end{table} \begin{table}[!t] \centering \caption{\small{Model Evaluation for VIPriors Challenge}} \resizebox{0.85\linewidth}{!}{ \begin{tabular}{cc|c|c|c|c|cc} \toprule & Train Data & Test Data & Augmentation & Regularization & Others & Top-1 Acc. & Top-5 Acc. \\ \midrule & Train & Val & & & & 49.37 & 73.62 \\ \midrule & Train & Val & & FrameMixUp & & 60.67 & 83.47 \\ & Train & Val & RandAug & & & 66.87 & 88.04 \\ & Train & Val & RandAug-T & & & \underline{69.23} & 89.20 \\ & Train & Val & RandAug-T & FadeMixUp & & 68.73 & \underline{89.27} \\ & Train & Val & RandAug-T & FrameMixUp & & \textbf{69.70} & \textbf{89.84} \\ \midrule & Train+Val & Test & & & & 68.99 & - \\ & Train+Val & Test & RandAug-T & & & 81.43 & - \\ & Train+Val & Test & RandAug-T & FadeMixUp & & \underline{82.16} & - \\ & Train+Val & Test & RandAug-T & All Methods & Ensemble & \textbf{86.04} & - \\ \bottomrule \end{tabular}} \label{tb:challenge} \end{table} \begin{table}[!t] \centering \caption{\small{Comparison between Entries of VIPriors Challenge}} \resizebox{0.75\linewidth}{!}{ \begin{tabular}{cc|c|c|c|c} \toprule & Entry & Backbone & Two-stream & Ensemble & Top-1 Acc. 
\\ \midrule
& 1st place team & I3D, C3D, 3D-ResNet, R(2+1)D & \cmark & Across Model & \textbf{90.8} \\
& 2nd place team~\cite{chen2020viprior} & TCDC & \cmark & Within Model & \underline{88.3} \\
& 3rd place team~\cite{luo2020viprior} & SlowFast50, TSM & \cmark & Across Model & 87.6 \\
\midrule
& Ours & SlowFast50 & & & 82.2 \\
& Ours & SlowFast50 & & Within Model & 86.0 \\
\bottomrule
\end{tabular}}
\label{tb:challenge_entry}
\end{table}

\subsection{Results on HMDB-51 dataset}
To assess generalization to other datasets, we train and evaluate on the HMDB-51 dataset using its original splits. Recognition performance on HMDB-51 is generally lower than on UCF-101 because of its smaller number of training samples. We use the same model and hyper-parameters as for UCF-101. The results in Table~\ref{tb:hmdb51} indicate that the temporal extensions generally outperform the spatial-only versions and that, as on UCF-101, RandAugment and blending achieve the best accuracy.

\begin{figure*}[!t]
\centering
\subfloat[\small{(a) Sample clip A: \textit{FrisbeeCatch}}] {\includegraphics[width=0.495\linewidth]{./fig/cam/blend/A13.jpg}}\ \hfill
\subfloat[\small{(b) Sample clip B: \textit{JugglingBalls}}] {\includegraphics[width=0.495\linewidth]{./fig/cam/blend/B94.jpg}}\ \\[-2ex]
\subfloat[\small{(c) MixUp-ed Clip}] {\includegraphics[width=0.495\linewidth]{./fig/cam/blend/MixUp.jpg}}\ \hfill
\subfloat[\small{(d) FadeMixUp-ed Clip}] {\includegraphics[width=0.495\linewidth]{./fig/cam/blend/FadeMixUp.jpg}}\ \\[-2ex]
\subfloat[\small{(e) CAM for \textit{FrisbeeCatch} on (c)}] {\includegraphics[width=0.495\linewidth]{./fig/cam/blend/MixUp_A.jpg}}\ \hfill
\subfloat[\small{(f) CAM for \textit{FrisbeeCatch} on (d)}] {\includegraphics[width=0.495\linewidth]{./fig/cam/blend/FadeMixUp_A.jpg}}\ \\[-2ex]
\subfloat[\small{(g) CAM for \textit{JugglingBalls} on (c)}] {\includegraphics[width=0.495\linewidth]{./fig/cam/blend/MixUp_B.jpg}}\ \hfill
\subfloat[\small{(h) CAM for \textit{JugglingBalls} on (d)}] {\includegraphics[width=0.495\linewidth]{./fig/cam/blend/FadeMixUp_B.jpg}}\ \\[-2ex]
\subfloat[\small{(i) CAM for \textit{FrisbeeCatch} on (a)}] {\includegraphics[width=0.495\linewidth]{./fig/cam/blend/MixUp_CAM.jpg}}\ \hfill
\subfloat[\small{(j) CAM for \textit{FrisbeeCatch} on (a)}] {\includegraphics[width=0.495\linewidth]{./fig/cam/blend/FadeMixUpCAM.jpg}}\
\caption{Class activation maps. \textit{Left}: MixUp, \textit{Right}: FadeMixUp}
\label{fig_camforblend}
\end{figure*}

\subsection{1st VIPriors action recognition challenge}
Based on these comprehensive experimental results, we participated in the 1st VIPriors action recognition challenge, in which pre-training and external datasets are not allowed. The performance of the various models is reported in Table~\ref{tb:challenge}. On the validation set, applying both RandAugment-T and FrameMixUp performs best. For the test set, 3,783 videos are provided without ground-truth labels; we therefore report results from the challenge leaderboard. A combination of the training and validation sets, comprising 9,537 videos, is used to train the final challenge entries. From the baseline accuracy of 68.99\%, applying RandAugment-T alone improves the accuracy to 81.43\%. Finally, we submitted an ensemble of models trained with RandAugment-T and various mixing augmentations, which achieves 86.04\% top-1 accuracy. The results, together with the other challenge entries, are listed in Table~\ref{tb:challenge_entry}.
The 1st place team proposes a two-stream multi-scale spatiotemporal fusion strategy based on hand-crafted optical flow and various 3D ConvNets. The 2nd place team~\cite{chen2020viprior} also proposes a two-stream network, 3D Temporal Central Difference Convolution (TCDC), built on a C3D backbone. The 3rd place team~\cite{luo2020viprior} combines the SlowFast network and the Temporal Shift Module (TSM)~\cite{lin2019tsm} in a two-stream design. Although our final challenge result is lower than these entries, our framework is much simpler and remains competitive without any two-stream strategy or across-model ensemble.

\subsection{Discussions}
\subsubsection{Why are the improvements not large?}
Although the temporal extensions generally outperform the spatial-only versions of the data augmentation algorithms, the performance improvements are not large. The possible reasons are three-fold: the lack of sufficient training data, the lack of temporal perturbation in the datasets, and the fact that the datasets used in our experiments consist of trimmed videos. Both UCF-101 and HMDB-51 contain little temporal perturbation, so spatial augmentation alone is largely sufficient to learn the context. Furthermore, both datasets are trimmed and contain few temporal occlusions, which leaves little room to learn temporal localization. Compared to image datasets, the action region in videos is relatively small, so removing spatial regions in the deleting and cut-and-pasting augmentations can hurt basic recognition performance when the amount of training data is inadequate. In contrast, blending, although it produces unnatural images, as noted in~\cite{cutmix}, can exploit the full region of every frame and therefore yields reasonable performance improvements.
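To make the blending variants concrete, the following minimal PyTorch-style sketch contrasts clip-level MixUp with a temporally varying blend whose mixing weight changes linearly across frames. The $(B, C, T, H, W)$ clip layout and the linear fade are illustrative assumptions, not our exact implementation.

\begin{verbatim}
import torch

def mixup_clip(x1, x2, y1, y2, alpha=1.0):
    # Standard MixUp on whole clips: one mixing weight per sample pair.
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    x = lam * x1 + (1.0 - lam) * x2
    # Train with: lam * CE(pred, y1) + (1 - lam) * CE(pred, y2)
    return x, (y1, y2, lam)

def temporal_mixup_clip(x1, x2, y1, y2, alpha=1.0):
    # Illustrative temporal variant: the mixing weight fades linearly
    # over the T frames, so the label weight is the temporal average.
    beta = torch.distributions.Beta(alpha, alpha)
    lam_a, lam_b = beta.sample().item(), beta.sample().item()
    T = x1.shape[2]                                   # (B, C, T, H, W)
    lam_t = torch.linspace(lam_a, lam_b, T).view(1, 1, T, 1, 1)
    x = lam_t * x1 + (1.0 - lam_t) * x2
    return x, (y1, y2, lam_t.mean().item())
\end{verbatim}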
\begin{figure*}[!t]
\centering
\subfloat[\small{(a) Sample clip A: \textit{Swing}}] {\includegraphics[width=0.495\linewidth]{./fig/cam/mix/A18.jpg}}\ \hfill
\subfloat[\small{(b) Sample clip B: \textit{Basketball}}] {\includegraphics[width=0.495\linewidth]{./fig/cam/mix/B23.jpg}}\ \\[-2ex]
\subfloat {\includegraphics[width=0.24\linewidth]{./fig/cam/mix/MixUp.jpg}}\ \hfill
\subfloat {\includegraphics[width=0.24\linewidth]{./fig/cam/mix/FrameMix.jpg}}\ \hfill
\subfloat {\includegraphics[width=0.24\linewidth]{./fig/cam/mix/CutMix.jpg}}\ \hfill
\subfloat {\includegraphics[width=0.24\linewidth]{./fig/cam/mix/CubeMix.jpg}}\ \\[-2ex]
\subfloat {\includegraphics[width=0.24\linewidth]{./fig/cam/mix/MixUp_A.jpg}}\ \hfill
\subfloat {\includegraphics[width=0.24\linewidth]{./fig/cam/mix/FrameMix_A.jpg}}\ \hfill
\subfloat {\includegraphics[width=0.24\linewidth]{./fig/cam/mix/CutMix_A.jpg}}\ \hfill
\subfloat {\includegraphics[width=0.24\linewidth]{./fig/cam/mix/CubeMix_A.jpg}}\ \\[-2ex]
\subfloat {\includegraphics[width=0.24\linewidth]{./fig/cam/mix/MixUp_B.jpg}}\ \hfill
\subfloat {\includegraphics[width=0.24\linewidth]{./fig/cam/mix/FrameMix_B.jpg}}\ \hfill
\subfloat {\includegraphics[width=0.24\linewidth]{./fig/cam/mix/CutMix_B.jpg}}\ \hfill
\subfloat {\includegraphics[width=0.24\linewidth]{./fig/cam/mix/CubeMix_B.jpg}}\ \\[-2ex]
\subfloat[\small{(c) MixUp}] {\includegraphics[width=0.24\linewidth]{./fig/cam/mix/MixUp_Pure.jpg}}\ \hfill
\subfloat[\small{(d) FrameCutMix}] {\includegraphics[width=0.24\linewidth]{./fig/cam/mix/FrameMix_Pure.jpg}}\ \hfill
\subfloat[\small{(e) CutMix}] {\includegraphics[width=0.24\linewidth]{./fig/cam/mix/CutMix_Pure.jpg}}\ \hfill
\subfloat[\small{(f) CubeCutMix}] {\includegraphics[width=0.24\linewidth]{./fig/cam/mix/CubeMix_Pure.jpg}}\
\caption{Class activation maps. For (c)-(f), from the top to the bottom row: mixed clips, CAMs for {\textit{Swing}}, CAMs for {\textit{Basketball}}, and CAMs for {\textit{Swing}} on pure clip (a), respectively.}
\label{fig_camforstloc}
\end{figure*}

\subsubsection{Spatiotemporal class activation map visualization}
We visualize the learned features using class activation maps (CAMs)~\cite{cam} in Fig.~\ref{fig_camforblend}. In the SlowFast network, we use the features of the last convolutional layer of the slow pathway. Fig.~\ref{fig_camforblend} (a) and (b) present example clips, and Fig.~\ref{fig_camforblend} (c) and (d) show the clips mixed by MixUp and FadeMixUp, respectively. Comparing Fig.~\ref{fig_camforblend} (f) and (h) with (e) and (g), the features of FadeMixUp are more temporally localized than those of MixUp. Likewise, comparing Fig.~\ref{fig_camforblend} (j) with (i), the activations of FadeMixUp on the pure clip A are better localized spatiotemporally than those of MixUp. Fig.~\ref{fig_camforstloc} compares the spatiotemporal localization abilities of MixUp, CutMix, FrameCutMix, and CubeCutMix. Compared to MixUp, as stated in the original paper~\cite{cutmix}, CutMix can spatially localize a basketball court or a person on a swing. However, compared to CubeCutMix, the activations of CutMix are not well localized temporally. FrameCutMix, like MixUp, cannot localize features spatially, but it can separate the activation weights along the temporal axis.
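For completeness, a minimal sketch of how such a spatiotemporal CAM can be computed from the last convolutional features is given below. It assumes a global-average-pooling classifier head as in the original CAM formulation~\cite{cam}; the variable names are illustrative rather than taken from our implementation.

\begin{verbatim}
import torch
import torch.nn.functional as F

def spatiotemporal_cam(features, fc_weight, class_idx):
    # features:  (C, T', H', W') activations of the last conv layer
    # fc_weight: (num_classes, C) weights of the final linear classifier
    # returns:   (T', H', W') normalized activation map for class_idx
    w = fc_weight[class_idx].view(-1, 1, 1, 1)      # (C, 1, 1, 1)
    cam = (w * features).sum(dim=0)                 # weighted sum over channels
    cam = F.relu(cam)
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
    # Upsample to the input clip size (e.g. with F.interpolate) to overlay
    # the map on the frames for visualization.
    return cam
\end{verbatim}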
\section{Conclusion}
In this paper, we proposed several extensions of data-level augmentation as well as data-level deleting, blending, and cut-and-pasting augmentation algorithms from the spatial (image) domain to the temporal and spatiotemporal (video) domain. Although spatial data augmentation already increases recognition performance when the amount of training data is limited, extending it to temporal and spatiotemporal augmentation boosts performance further. Moreover, models trained with temporal augmentation acquire a temporal and spatiotemporal localization ability that models trained only with spatial augmentation do not. Our next step is to extend this study to large-scale datasets, such as Kinetics~\cite{carreira2017quo}, and to untrimmed videos.

\section*{Acknowledgments}
This research was supported by the R\&D program for Advanced Integrated-intelligence for Identification (AIID) through the National Research Foundation of Korea (NRF), funded by the Ministry of Science and ICT (NRF-2018M3E3A1057289).

\clearpage
\bibliographystyle{utils/splncs04}
\bibliography{egbib}

\end{document}
https://openreview.net/forum?id=tqz0rQvz_58
https://arxiv.org/abs/2008.05721
Please evaluate the strengths and weaknesses of our paper, focusing on the execution of our simple idea and its significance, as well as any potential improvements or revisions you would suggest for the camera-ready version.
Injecting Prior Knowledge into Image Caption Generation
1. [Summary] In 2-3 sentences, describe the key ideas, experiments, and their significance.
The paper tries to mitigate overfitting and the generation of easy captions by introducing prior knowledge from the dataset during training. To this end, the authors propose to add visual-semantic relation prior knowledge by defining a series of Latent Topics, and semantic prior knowledge by training a Seq2seq module on the text. While the former is introduced in the training procedure as a self-attention with image region features, the latter is utilized to remove visual bias on semantic structures. Apart from improving the results of state-of-the-art approaches, they demonstrate that with their approach, image captioning models can rely on less data when training.
2. [Strengths] What are the strengths of the paper? Clearly explain why these aspects of the paper are valuable.
- The paper is easy to read. Ideas are easy to follow.
- It is very well motivated.
- Benefits of both modules (CLTA and SAE Regularizer) are clearly demonstrated in the experiments.
- The implementation is very well explained in detail.
- The benefits of adding prior knowledge (visual and semantic) are shown.
- Additionally, the authors demonstrate the relevance of prior knowledge as it allows training models with less data.
3. [Weaknesses] What are the weaknesses of the paper? Clearly explain why these aspects of the paper are weak.
- Although the improvement exists, in some situations it is marginal.
4. [Overall rating] Paper rating. 9
5. [Justification of rating] Please explain how the strengths and weaknesses aforementioned were weighed in for the rating.
Good paper. Well written, well motivated, simple method and positive results. On top of that, very much in line with the workshop.
6. [Detailed comments] Additional comments regarding the paper (e.g. typos or other possible improvements you would like to see for the camera-ready version of the paper, if any.)
Rating: 9: Top 15% of accepted papers, strong accept
Confidence: 4: The reviewer is confident but not absolutely certain that the evaluation is correct
\documentclass[runningheads]{llncs} \usepackage{graphicx} \usepackage{comment} \usepackage{amsmath,amssymb} % \usepackage{color} \usepackage{epsfig} \usepackage{graphicx} \usepackage{amsmath} \usepackage{amssymb} \usepackage{multirow} \usepackage{graphicx} \usepackage[table]{xcolor} \usepackage[export]{adjustbox} \usepackage{cellspace, tabularx} \newcommand{\tabitem}{~~\llap{\textbullet}~~} \usepackage{caption} \usepackage{siunitx} \setlength{\belowcaptionskip}{-2ex} \usepackage{floatrow} \newfloatcommand{capbtabbox}{table}[][\FBwidth] \usepackage{blindtext} \usepackage{subcaption} \captionsetup{compatibility=false} \newcommand{\etal}{\textit{et al.}} \newcommand{\eg}{\textit{e.g.}} \newcommand{\ie}{\textit{i.e.}} \usepackage{color, colortbl} \definecolor{LightCyan}{rgb}{0.88,1,1} \definecolor{Gray}{gray}{0.9} \usepackage{cleveref} \usepackage{bm} \newcommand{\bv}{\bm{v}} \newcommand{\bx}{\bm{x}} \newcommand{\by}{\bm{y}} \newcommand{\bz}{\bm{z}} \newcommand{\bc}{\bm{c}} \newcommand{\bh}{\bm{h}} \newcommand{\softatt}{{\textbf{Soft-Att}}} \newcommand{\mimlOneTwoEight}{{\textbf{Two-Stream Att(128)}}} \newcommand{\firstIC}{{\textbf{Vanilla-$\Theta_D(\bh^{\text{first}})$}}} \newcommand{\lastIC}{{\textbf{Denoising-$\Theta_D(\bh^{\text{last}})$}}} \newcommand{\lastSAE}{{\textbf{Denoising SAE-Decoder}}} \newcommand{\base}{\footnotesize{\textbf{Baseline}}} \newcommand{\cla}{\footnotesize{\textbf{+Conditional Latent Attn.}}} \newcommand{\sae}{\footnotesize{\textbf{+SAE-Regularizer}}} \newcommand{\gtone}{\footnotesize{\textbf{GT1}}} \newcommand{\gttwo}{\footnotesize{\textbf{GT2}}} \usepackage{soul} \newcommand{\hbnote}[1]{\textbf{\color{red}HB\@:~#1}} \begin{document} \pagestyle{headings} \mainmatter \def\ECCVSubNumber{8} % \title{Injecting Prior Knowledge into Image Caption Generation} \titlerunning{Injecting Prior Knowledge into Image Caption Generation} \author{Arushi Goel\inst{1}\and Basura Fernando\inst{2} \and Thanh-Son Nguyen\inst{2} \and Hakan Bilen\inst{1}} \authorrunning{A. Goel et al.} \institute{School of Informatics, University of Edinburgh, Scotland \and AI3, Institute of High Performance Computing, A*STAR, Singapore } \maketitle \begin{abstract} Automatically generating natural language descriptions from an image is a challenging problem in artificial intelligence that requires a good understanding of the visual and textual signals and the correlations between them. The state-of-the-art methods in image captioning struggles to approach human level performance, especially when data is limited. In this paper, we propose to improve the performance of the state-of-the-art image captioning models by incorporating two sources of prior knowledge: (i) a conditional latent topic attention, that uses a set of latent variables (topics) as an anchor to generate highly probable words and, (ii) a regularization technique that exploits the inductive biases in syntactic and semantic structure of captions and improves the generalization of image captioning models. Our experiments validate that our method produces more human interpretable captions and also leads to significant improvements on the MSCOCO dataset in both the full and low data regimes. \end{abstract} \section{Introduction} \label{sec.intro} In recent years there has been a growing interest to develop end-to-end learning algorithms in computer vision tasks. 
Despite the success in many problems such as image classification~\cite{he2016deep} and person recognition~\cite{joon2015person}, the state-of-the-art methods struggle to reach human-level performance in solving more challenging tasks such as image captioning within limited time and data which involves understanding the visual scenes and describing them in a natural language. This is in contrast to humans who are effortlessly successful in understanding the scenes which they have never seen before and communicating them in a language. It is likely that this efficiency is due to the strong prior knowledge of structure in the visual world and language~\cite{chomsky2014aspects}. Motivated by this observation, in this paper we ask ``How can such prior knowledge be represented and utilized to learn better image captioning models with deep neural networks?''. To this end, we look at the state-of-the-art encoder-decoder image captioning methods~\cite{vinyals2015show,xu2015show,Anderson2018} where a Convolutional Neural Network (CNN) encoder extracts an embedding from the image, a Recurrent Neural Network (RNN) decoder generates the text based on the embedding. This framework typically contains two \emph{dynamic} mechanisms to model the sequential output: i) an attention module \cite{bahdanau2014neural,xu2015show} that identifies the relevant parts of the image embedding based on the previous word and visual features and ii) the RNN decoder that predicts the next words based on the its previous state and attended visual features. While these two components are very powerful to model complex relations between the visual and language cues, we hypothesize that they are also capable of and at the same time prone to overfitting to wrong correlations, thus leading to poor generalization performance when the data is limited. Hence, we propose to regulate these modules with two sources of prior knowledge. \begin{figure}[t] \begin{center} \includegraphics[width=0.82\linewidth]{highlevel_introfig.pdf} \end{center} \caption{Our Final Model with Conditional Latent Topic Attention (CLTA) and Sentence Prior (Sentence Auto-Encoder (SAE) regularizer) both rely on prior knowledge to find relevant words and generate non-template like and generalized captions compared to the same Baseline caption for both images - \emph{A man hitting a tennis ball with a racket.}} \label{fig:introfig} \end{figure} First, we propose an attention mechanism that accurately attends to relevant image regions and better cope with complex associations between words and image regions. For instance, in the example of a ``man playing tennis'', the input visual attention encoder might only look at the local features (\emph{tennis ball}) leaving out the global visual information (\emph{tennis court}). Hence, it generates a trivial caption as ``A man is hitting a tennis ball'', which is not the full description of the image in context (as shown in \cref{fig:introfig}). We solve this ambiguity by incorporating prior knowledge of context via latent topic models~\cite{blei2003latent}, which are known to identify semantically meaningful topics~\cite{chang2009reading}, into our attention module. In particular we introduce a Conditional Latent Topic Attention (CLTA) module that models relationship between a word and image regions through a latent shared space \ie~latent topics to find salient regions in an image. 
\emph{Tennis ball} steers the model to associate this word with the latent topic, ``tennis'', which further is responsible for localizing \emph{tennis court} in the image. If a region-word pair has a higher probability with respect to a latent topic and if the same topic has a higher probability with respect to some other regions, then it is also a salient region and will be highly weighted. Therefore, we compute two sets of probabilities conditioned on the current word of the captioning model. We use conditional-marginalized probability where marginalization is done over latent topics to find salient image regions to generate the next word. Our CLTA is modeled as a neural network where marginalized probability is used to weight the image region features to obtain a context vector that is passed to a image captioning decoder to generate the next word. Second, the complexity in the structure of natural language makes it harder to generate fluent sentences while preserving a higher amount of encoded information (high Bleu-4 scores). Although current image captioning models are able to model this linguistic structure, the generated captions follow a more template-like form, for instance, ``A \ul{man} \ul{hitting} a \ul{tennis ball} with a \ul{racket}.'' As shown in \cref{fig:introfig}, visually similar images have template-like captions from the baseline model. Inspired from sequence-to-sequence (seq2seq) machine translation \cite{sutskever2014sequence,luong2015multi,wiseman2016sequence,gehring2017convolutional}, we introduce a new regularization technique for captioning models coined SAE Regularizer. In particular, we design and train an additional seq2seq sentence auto-encoder model (``SAE'') that first reads in a whole sentence as input, generates a fixed dimensional vector, then the vector is further used to reconstruct the input sentence. Human languages are highly structured and follows immense amount of regularity. Certain words are more likely to co-appear and certain word patterns can be observed more often. Our SAE is trained to learn the structure of the input (sentence) space in an offline manner by exploiting the regularity of the sentence space. The continuous latent space learned by SAE blends together both the syntactic and semantic information from the input sentence space and generates high quality sentences during the reconstruction via the SAE decoder. This suggests that the continuous latent space of SAE contains sufficient information regarding the syntactic and semantic structure of input sentences. Specifically, we use SAE-Dec as an auxiliary decoder branch (see \cref{fig:sae}). Adding this regularizer forces the representation from the image encoder and language decoder to be more representative of the visual content and less likely to overfit. SAE-Dec is employed along with the original image captioning decoder (``IC-Dec'') to output the target sentence during training, however, we do not use SAE regularizer at test time reducing additional computations. Both of the proposed improvements also help to overcome the problem of training on large image-caption paired data \cite{lin2014microsoft,liu2004conceptnet} by incorporating prior knowledge which is learned from unstructured data in the form of latent topics and SAE. These priors -- also known as ``inductive biases'' -- help the models make inferences that go beyond the observed training data. 
Through an extensive set of experiments, we demonstrate that our proposed CLTA module and SAE-Dec regularizer improves the image captioning performance both in the limited data and full data training regimes on the MSCOCO dataset \cite{lin2014microsoft}. \section{Related Work} \label{sec.rel} Here, we first discuss related attention mechanisms and then the use of knowledge transfer in image captioning models. \noindent \textbf{Attention mechanisms in image captioning. } The pioneering work in neural machine translation \cite{bahdanau2014neural,luong2015effective,cho2014properties} has shown that attention in encoder-decoder architectures can significantly boost the performance in sequential generation tasks. Visual attention is one of the biggest contributor in image captioning \cite{fang2015captions,xu2015show,Anderson2018,Huang_2019_ICCV}. Soft attention and hard attention variants for image captioning were introduced in~\cite{xu2015show}. Bottom-Up and Top-Down self attention is effectively used in~\cite{Anderson2018}. Attention on attention is used in recent work~\cite{Huang_2019_ICCV}. Interestingly, they use attention at both encoder and the decoder step of the captioning process. Our proposed attention significantly differs in comparison to these attention mechanisms. First, the traditional attention methods, soft-attention \cite{bahdanau2014neural} and scaled dot product attention \cite{vaswani2017attention} aims to find features or regions in an image that highly correlates with a word representation~\cite{Anderson2018,bahdanau2014neural,sharma2018conceptual}. In contrast, our \emph{conditional-latent topic attention} uses latent variables \ie topics as anchors to find relationship between word representations and image regions (features). Some image regions and word representations may project to the same set of latent topics more than the others and therefore more likely to co-occur. Our method learns to model these relationships between word-representations and image region features using our latent space. We allow competition among regions and latent topics to compute two sets of probabilities to find salient regions. This competing strategy and our latent topics guided by pre-trained LDA topics \cite{blei2003latent} allow us to better model relationships between visual features and word representations. Hence, the neural structure and our attention mechanism is quite different from all prior work~\cite{xu2015show,Anderson2018,Huang_2019_ICCV,bahdanau2014neural}. \noindent \textbf{Knowledge transfer in image captioning. } It is well known that language consists of semantic and syntactic biases \cite{bao2019generating,marcheggiani2018exploiting}. We exploit these biases by first training a recurrent caption auto-encoder to capture this useful information using \cite{sutskever2014sequence}. Our captioning auto-encoder is trained to reconstruct the input sentence and hence, this decoder encapsulates the structural, syntactic and semantic information of input captions. During captioning process we regularize the captioning RNN with this pretrained caption-decoder to exploit biases in the language domain and transfer them to the visual-language domain. To the best of our knowledge, no prior work has attempted such knowledge transfer in image captioning. Zhou \etal \cite{zhou2019improving} encode external knowledge in the form of knowledge graphs using Concept-Net \cite{liu2004conceptnet} to improve image captioning. 
The closest to ours is the work of \cite{yang2019auto} where they propose to generate scene graphs from both sentences and images and then encode the scene graphs to a common dictionary before decoding them back to sentences. However, generation of scene graphs from images itself is an extremely challenging task. Finally, we propose to transfer syntactic and semantic information as a regularization technique during the image captioning process as an auxiliary loss. Our experiments suggest that this leads to considerable improvements, specially in more structured measures such as CIDEr \cite{vedantam2015cider}. \section{Method} \label{sec.method} In this section, we first review image captioning with attention, introduce our CLTA mechanism, and then our sentence auto-encoder (SAE) regularizer. \subsection{Image Captioning with Attention} \label{sec.overview} Image captioning models are based on encoder-decoder architecture \cite{xu2015show} that use a CNN as image encoder and a Long Short-Term Memory (LSTM)~\cite{hochreiter1997long} as the decoder -- see~Fig.\ref{fig:introfig}. The encoder takes an image as input and extracts a feature set $v=\{\bv_1,\ldots,\bv_R\}$ corresponding to $R$ regions of the image, where $\bv_i \in \mathbb{R}^D$ is the $D$-dimensional feature vector for the $i^{th}$ region. The decoder outputs a caption $y$ by generating one word at each time step. At time step $t$, the feature set $v$ is combined into a single vector $\bv^t_a$ by taking weighted sum as follows: \begin{equation} \bv^t_a = \sum_{i=1}^R \alpha_{i}^{t} \bv_{i} \label{eq.ct} \end{equation} where $\alpha^t_i$ is the CLTA weight for region $i$ at time $t$, that is explained in the next section. The decoder LSTM $\phi$ then takes a concatenated vector $[\bv^t_a|\by_{t-1}]$ and the previous hidden state $\mathbf{h_{t-1}}$ as input and generates the next hidden state $\mathbf{h_t}$: \begin{align} \mathbf{h_t} &= \phi([\bv^t_a|E \by_{t-1}], \mathbf{h_{t-1}},\Theta_{\phi}) \label{eq.lstm.hil} \end{align} where, $|$ denotes concatenation, $\by_{t-1}\in \mathbb{R}^K$ is the one-hot vector of the word generated at time $t-1$, $K$ is the vocabulary size, $\bh^t \in \mathbb{R}^{n}$ is the hidden state of the LSTM at time $t$, $n$ is the LSTM dimensionality, and $\Theta_{\phi}$ are trainable parameters of the LSTM. Finally, the decoder predicts the output word by applying a linear mapping $\psi$ on the hidden state and $\bv^t_a$ as follows: \begin{align} \by_{t} &= \psi([\mathbf{h_t}|\bv^t_a],\Theta_{\psi}) \end{align} where $\Theta_{\psi}$ are trainable parameters. Our LSTM implementation closely follows the formulation in \cite{zaremba2014recurrent}. The word embedding matrix $E \in \mathbb{R}^{m\times K}$ is trained to translate one-hot vectors to word embeddings as in \cite{xu2015show}, where $m$ is the word embedding dimension. In the next section, we describe our proposed CLTA mechanism. \subsection{CLTA: Conditional Latent Topic Attention} \label{sec.method.att} At time step $t$, our CLTA module takes the previous LSTM hidden state ($\bh^{t-1}$) and image features to output the attention weights $\alpha^t$. Specifically, we use a set of latent topics to model the associations between textual ($\bh^{t-1}$) and visual features ($\bv$) to compute the attention weights. 
The attention weight for region $i$ is obtained by taking the conditional-marginalization over the latent topic $l$ as follows: \begin{align} \alpha^t_i & = P(\text{region}=i|h^{t-1}, \bv) = \sum_{l=1}^C P(\text{region}=i|h^{t-1}, \bv, l) P(l|h^{t-1}, \bv_{i}) \end{align} where $l$ is a topic variable in the $C$-dimensional latent space. To compute $P(l|h^{t-1}, \bv_i)$, we first project both textual and visual features to a common $C$-dimensional shared latent space, and obtain the associations by summing the projected features as follows: \begin{equation} \bm{q}^t_{i}= W_{sc} \bv_i + W_{hc} \bh^{t-1} \end{equation} where $W_{sc}\in \mathbb{R}^{C\times D}$ and $W_{hc}\in \mathbb{R}^{C\times n}$ are the trainable projection matrices for visual and textual features, respectively. Then the latent topic probability is given by: \begin{equation} P_L = P(l|\bh^{t-1}, \bv_{i}) = \frac{\exp({\bm{q}^t_{il}})}{\sum_{k=1}^{C}\exp({\bm{q}^t_{ik}})} \label{eq.ltopic} \end{equation} Afterwards, we compute the probability of a region given the textual, vision features and latent topic variable as follows: \begin{equation} \bm{r}^t_{i} = W_{sr} \bv_i + W_{hr} \bh^{t-1} \end{equation} \begin{align} P(\text{region}=i|\bh^{t-1}, v, l) &= \frac{\exp({\bm{r}^t_{il}})}{\sum_{k=1}^{R}\exp({\bm{r}^t_{kl}})} \end{align} where $W_{sr}\in \mathbb{R}^{C\times D}$ and $W_{hr}\in \mathbb{R}^{C\times n}$ are the trainable projection matrices for visual and textual features, respectively. The latent topic posterior in \cref{eq.ltopic} is pushed to the pre-trained LDA topic prior by adding a KL-divergence term to the image captioning objective. We apply Latent Dirichlet Allocation (LDA) \cite{blei2003latent} on the caption data. Then, each caption has an inferred topic distribution $Q_T$ from the LDA model which acts as a prior on the latent topic distribution, $P_L$. For doing this, we take the average of the C-dimensional latent topics at all time steps from $0,\ldots,t-1$ as: \begin{equation} P_{L_{avg}} = \frac{1}{t}\sum_{k=0}^{t-1} P(l|\bh^{k}, \bv_{i}) \end{equation} Hence, the KL-divergence objective is defined as: \begin{equation} D_{KL}(P_{L_{avg}}||Q_T) = \sum_{c \in C} P_{L_{avg}}(c) \times log(\frac{P_{L_{avg}}(c)}{Q_T(c)}) \label{eq.kl} \end{equation} \begin{figure}[t] \centering \includegraphics[width=0.9\linewidth]{latent-topics.pdf} \caption{Image-Caption pairs generated from our CLTA module with $128$ dimensions and visualization of Top-20 words from the latent topics.} \label{fig:latentcategory} \end{figure} This learnt latent topic distribution captures the semantic relations between the visual and textual features in the form of visual topics, and therefore we also use this latent posterior, $P_L$ as a source of meaningful information during generation of the next hidden state. The modified hidden state $\mathbf{h_t}$ in \cref{eq.lstm.hil} is now given by: \begin{align} \mathbf{h_t} &= \phi([\bv^t_a|E \by_{t-1}|P_L], \mathbf{h_{t-1}},\Theta_{\phi}) \label{eq.lstm.hil.new} \end{align} We visualize the distribution of latent topics in \Cref{fig:latentcategory}. While traditional ``soft-max" attention exploit simple correlation among textual and visual information, we make use of latent topics to model associations between them. \subsection{SAE Regularizer} \label{sec.method.sae} Encoder-decoder methods are widely used for translating one language to another \cite{cho2014learning,sutskever2014sequence,bahdanau2014neural}. 
When the input and target sentences are the same, these models function as auto-encoders by first encoding an entire sentence into a fixed low-dimensional vector in a latent space and then reconstructing it. Auto-encoders are commonly employed for unsupervised training in text classification \cite{dai2015semi} and machine translation \cite{luong2015multi}. In this paper, our SAE regularizer has two advantages: i) it acts as a soft constraint on the image captioning model to regularize the syntactic and semantic space of the captions for better generalization and, ii) it encourages the image captioning model to extract more context information for better modelling of long-term memory. These two properties of the SAE regularizer generate semantically meaningful captions for an image with syntactic generalization and prevent the generation of naive and template-like captions.

Our SAE model uses the network architecture of \cite{sutskever2014sequence} with Gated Recurrent Units (GRU) \cite{chung2014empirical}. Let us denote the parameters of the decoder GRU by $\Theta_{\text{D}}$. A stochastic variation of the vanilla sentence auto-encoder is the de-noising auto-encoder~\cite{vincent2008extracting}, which is trained to ``de-noise'' corrupted versions of its inputs. To inject such input noise, we drop each word in the input sentence with a probability of 50\% to reduce the contribution of any single word to the semantics of a sentence. We train the SAE model in an offline stage on the training set of the captioning dataset. After the SAE model is trained, we discard its encoder and integrate only its decoder to regularize the captioning model.

As depicted in \Cref{fig:sae}, the pretrained SAE decoder takes the last hidden state vector $\bh$ of the captioning LSTM as input and generates an extra caption (denoted as $y_{\text{sae}}$) in addition to the output of the captioning model (denoted as $y_{\text{lstm}}$). We use the output of the SAE decoder only at training time to regularize the captioning model $\phi$ by implicitly transferring the latent structure previously learned by the SAE decoder.

\begin{figure}[t]
\begin{center}
\includegraphics[width=0.9\linewidth]{SAEReg.pdf}
\end{center}
\caption{Illustration of our proposed Sentence Auto-Encoder (SAE) regularizer with the image captioning decoder. The captioning model is trained by adding the SAE decoder as an auxiliary branch, which thus acts as a regularizer.}
\label{fig:sae}
\end{figure}

Our integrated model is optimized to generate two accurate captions (\ie $y_{\text{sae}}$ and $y_{\text{lstm}}$) by minimizing a weighted average of two loss values:
\begin{equation}
\arg \min_{\Omega}~~~\lambda L(y^*,y_{\text{lstm}}) + (1-\lambda) L(y^*,y_{\text{sae}})
\label{eq.loss}
\end{equation}
where $L$ is the cross-entropy loss computed for each caption, word by word, against the ground-truth caption $y^*$, $\lambda$ is the trade-off parameter, and $\Omega$ are the parameters of our model. We consider two scenarios during our experimentation.
\begin{itemize}
\item First, we set the parameters of the SAE decoder $\Theta_D$ to the weights of the pre-trained SAE decoder and freeze them while optimizing \Cref{eq.loss} in terms of $\Omega=\{ \Theta_{\phi},\Theta_{\psi},E \}$.
\item Second, we initialize $\Theta_D$ with the weights of the pre-trained SAE decoder and fine-tune them along with the LSTM parameters, \ie $\Omega=\{\Theta_{\phi},\Theta_{\psi},E,\Theta_{\text{D}}\}$.
\end{itemize}
As discussed in \cref{sec.method.att}, we also minimize the KL divergence in \cref{eq.kl} along with the final regularized objective in \cref{eq.loss} as:
\begin{equation}
\arg \min_{\Omega}~~~\lambda L(y^*,y_{\text{lstm}}) + (1-\lambda) L(y^*,y_{\text{sae}}) + \gamma D_{KL}(P_{L_{avg}}||Q_T)
\label{eq.totalloss}
\end{equation}
where $\gamma$ is the weight of the KL divergence loss.

\paragraph{Discussion. } An alternative way of exploiting the information from the pre-trained SAE model is to bring the representations from the captioning decoder closer to the encodings of the SAE encoder by minimizing the Euclidean distance between the hidden state of the SAE encoder and the hidden state of the captioning decoder at each time step. However, we found this setting to be too restrictive on the learned hidden state of the LSTM.

\section{Experiments}
\label{sec.exp}
\noindent \textbf{Dataset. } Our models are evaluated on the standard MSCOCO 2014 image captioning dataset~\cite{lin2014microsoft}. For fair comparisons, we use the same data splits for training, validation and testing as in \cite{karpathy2015deep}, which have been used extensively in prior works. This split has 113,287 training images and 5k images each for validation and testing, with 5 captions per image. We evaluate on all relevant metrics for generated sentences: CIDEr \cite{vedantam2015cider}, Bleu \cite{papineni2002bleu}, METEOR \cite{denkowski2014meteor}, ROUGE-L \cite{lin2004automatic}, and SPICE \cite{anderson2016spice}. \hfill

\noindent \textbf{Implementation Details. } For training our image captioning model, we compute the image features with the Bottom-Up architecture proposed by \cite{Anderson2018}, where the model is trained using a Faster-RCNN \cite{ren2015faster} on the Visual Genome dataset \cite{krishna2017visual} with object and attribute information. These features are extracted from $R$ regions, and each region feature has $D$ dimensions, where $R$ and $D$ are 36 and 2048, respectively, as proposed in \cite{Anderson2018}. We use these $36\times 2048$ image features in all our experiments.

\subsection{Experimental Setup}
\label{sec.expsetup}
\paragraph{LDA Topic Models.} The LDA \cite{blei2003latent} model is learned in an offline manner to generate a $C$-dimensional topic distribution for each caption. Briefly, the LDA model treats the captions as word documents and groups the words into $C$ topics (clusters of words), learns the word distribution $(C \times V)$ for each topic, where $V$ is the vocabulary size, and also generates a topic distribution $Q_T$ for each input caption, in which each of the $C$ dimensions denotes the probability of the corresponding topic.

\paragraph{Sentence Auto-Encoder.} The Sentence Auto-Encoder is trained offline on the MSCOCO 2014 captioning dataset \cite{lin2014microsoft} with the same splits as discussed above. For the architecture, we use a single-layer GRU for both the encoder and the decoder. The word embeddings are learned with the network using an embedding layer, and the dimension of both the hidden state and the word embeddings is 1024. During training, the decoder is trained with teacher forcing \cite{bengio2015scheduled} with a probability of 0.5. For inference, the decoder decodes until it reaches the end-of-caption token. The learning rate for this network is 2e-3 and it is trained using the ADAM \cite{kingma2014adam} optimizer.
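For illustration, a minimal PyTorch-style sketch of this denoising sentence auto-encoder is given below. It assumes batch-first integer token tensors with index 0 reserved for padding, replaces dropped words with the padding token, and always applies teacher forcing for brevity; these details are illustrative and may differ from the released implementation.

\begin{verbatim}
import torch
import torch.nn as nn

class DenoisingSAE(nn.Module):
    def __init__(self, vocab_size, dim=1024, p_drop=0.5):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim, padding_idx=0)
        self.encoder = nn.GRU(dim, dim, batch_first=True)
        self.decoder = nn.GRU(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, vocab_size)
        self.p_drop = p_drop

    def corrupt(self, tokens):
        # Drop each word with probability p_drop (here: replace by <pad>).
        keep = torch.rand_like(tokens, dtype=torch.float) > self.p_drop
        return tokens * keep.long()

    def forward(self, tokens):
        # Encode the corrupted caption into a single hidden vector h.
        _, h = self.encoder(self.embed(self.corrupt(tokens)))   # (1, B, dim)
        # Decode with teacher forcing on the clean caption.
        dec_out, _ = self.decoder(self.embed(tokens[:, :-1]), h)
        return self.out(dec_out)       # logits for reconstructing tokens[:, 1:]
\end{verbatim}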
\paragraph{Image Captioning Decoder with SAE Regularizer.} The architecture of our image captioning decoder is the same as the Up-Down model \cite{Anderson2018}, with their ``soft-attention'' replaced by our CLTA module and trained with the SAE regularizer. We also retrain the AoANet model proposed by Huang \etal \cite{Huang_2019_ICCV} by incorporating our CLTA module and the SAE regularizer. In the results section, we show improvements over the Up-Down and AoANet models using our proposed approaches. Note that the parameters for training the Up-Down and AoANet baselines are the same as in their original settings. While training the captioning models together with the SAE decoder, we jointly learn an affine embedding layer (dimension 1024) by combining the embeddings from the image captioning decoder and the SAE decoder. During inference, we use beam search to generate captions from the captioning decoder, with a beam size of 5 for Up-Down and a beam size of 2 for AoANet. For training with the overall objective function in Equation~\ref{eq.totalloss}, $\lambda$ is initialized to 0.7 and increased by a factor of 1.1 every 5 epochs until it reaches 0.9, and $\gamma$ is fixed to 0.1. We use the ADAM optimizer with a learning rate of 2e-4. Our code is implemented using PyTorch \cite{pytorch} and will be made publicly available.

\section{Results and Analysis}
\label{sec.results}
First, we study the caption reconstruction performance of the vanilla and denoising SAE, then report our model's image captioning performance on the MS-COCO dataset with full and limited data, investigate multiple design decisions, and analyze our results qualitatively.

\subsection{Sentence Auto-Encoder Results}
\label{sec.quantresults}
An ideal SAE must learn to map its input to a fixed low-dimensional space such that a whole sentence can be summarized and reconstructed accurately. To this end, we experiment with two SAEs, Vanilla-SAE and Denoising-SAE, and report their reconstruction performance in terms of Bleu-4 and cross-entropy (CE) loss in Fig.~\ref{fig:sea_loss} and Table~\ref{table:sea_results}.
\newsavebox{\testbox}%
\newlength{\testheight}%
\savebox{\testbox}{%
\centering
\begin{tabular}{c|c|c}
\hline
Models & Bleu-4 $\uparrow$ & CE-Loss $\downarrow$\\
\hline\hline
Vanilla SAE & \textbf{96.33} & \textbf{0.12} \\
Denoising SAE & 89.79 & 0.23\\
\hline
\end{tabular}
}%
\settoheight{\testheight}{\usebox{\testbox}}
\begin{figure}
\begin{floatrow}
\ffigbox{
\includegraphics[width=\linewidth,height=3.0\testheight]{SAE_plot_v2.pdf}
}
{
\caption{Error Curve for the Sentence Auto-Encoder on the Karpathy test split. The error starts increasing approximately after 20 epochs.}
\label{fig:sea_loss}
}
\capbtabbox{
\usebox{\testbox}
}
{
\caption{Bleu-4 Evaluation and Reconstruction Cross-Entropy Loss for the Sentence Auto-Encoder on the Karpathy test split of the MSCOCO 2014 caption dataset \cite{lin2014microsoft}.}
\label{table:sea_results}
}
\end{floatrow}
\end{figure}
The vanilla model, whose input words are not corrupted, outperforms the denoising one in both metrics. This is expected, as the denoising model is only trained with corrupted input sequences. The losses for the Vanilla and Denoising SAE start from relatively high values of approximately 0.8 and 0.4, respectively, and converge to significantly lower errors of 0.1 and 0.2. For a better analysis, we also compute Bleu-4 on the decoded captions against the 5 ground-truth captions. As reported in Table~\ref{table:sea_results}, both models obtain significantly high Bleu-4 scores.
This indicates that an entire caption can be compressed in a low dimensional vector ($1024$) and can be successfully reconstructed. \begin{table*}[t] \renewcommand*{\arraystretch}{1.13} \resizebox{0.98\textwidth}{!}{ \begin{tabular}{|l|c c c c c c|c c c c c c|} \hline \multirow{2}{*}{Models} & \multicolumn{6}{c|}{cross-entropy loss} & \multicolumn{6}{c|}{cider optimization}\\ & B-1 & B-4 & M & R &C & S & B-1 & B-4 & M & R &C & S \\ \hline\hline LSTM-A \cite{yao2017boosting} & 75.4 & 35.2 & 26.9 & 55.8 & 108.8 & 20.0 & 78.6 & 35.5& 27.3& 56.8& 118.3 & 20.8 \\ RFNet \cite{jiang2018recurrent} & 76.4 & 35.8 & 27.4 & 56.8 &112.5& 20.5 & 79.1& 36.5& 27.7& 57.3 &121.9& 21.2 \\ Up-Down \cite{Anderson2018} & 77.2 & 36.2 & 27.0 & 56.4 & 113.5 & 20.3 & 79.8& 36.3& 27.7& 56.9 &120.1& 21.4 \\ GCN-LSTM \cite{yao2018exploring} & 77.3 & 36.8 & 27.9 & 57.0 &116.3& 20.9 & 80.5 & 38.2& 28.5& 58.3 &127.6& 22.0 \\ AoANet \cite{Huang_2019_ICCV} & 77.4 & 37.2 & 28.4 & 57.5 & 119.8 & 21.3 & 80.2& 38.9& 29.2& 58.8 &129.8 & 22.4 \\ \hline \hline Up-Down$^{\dagger}$ & 75.9 & 36.0 & 27.3 & 56.1 & 113.3 & 20.1 & 79.2 & 36.3 & 27.7 & 57.3 & 120.8 & 21.2 \\ Up-Down$^{\dagger}$ + CLTA + SAE-Reg &\textbf{ 76.7} &\textbf{37.1} & \textbf{28.1} & \textbf{57.1} & \textbf{116.2}& \textbf{21.0} & \textbf{80.2} &\textbf{37.4} &\textbf{ 28.4} & \textbf{58.1} & \textbf{127.4} &\textbf{22.0} \\ \rowcolor{LightCyan} Relative Improvement & +0.8 & +1.1 & +0.8 & +1.0 & +2.9 & +0.9 & +1.0 & +1.1 & +0.7 & +0.8 & +6.6 & +0.8\\ \hline AoANet$^{*}$ & 77.3 & 36.9 & \textbf{28.5} & 57.3 & 118.4 & 21.6 & 80.5 & 39.1 & 29.0 & 58.9 & 128.9 & 22.7 \\ AoANet$^{\dagger}$ + CLTA + SAE-Reg & \textbf{78.1} & \textbf{37.9} & 28.4 & \textbf{57.5} & \textbf{119.9} & \textbf{21.7} & \textbf{80.8} & \textbf{39.3} & \textbf{29.1} & \textbf{59.1} & \textbf{130.1} & \textbf{22.9}\\ \rowcolor{LightCyan} Relative Improvement & +0.8 & +1.0 & -0.1 & +0.2 & +1.5 & +0.1 & +0.3 & +0.2 & +0.1 & +0.2 & +1.2 & +0.2 \\ \hline \end{tabular}} \caption{Image captioning performance on the ``Karpathy'' test split of the MSCOCO 2014 caption dataset \cite{lin2014microsoft} from other state-of-the-art methods and our models. Our Conditional Latent Topic Attention with the SAE regularizer significantly improves across all the metrics using both \textit{cross-entropy loss} and \textit{cider optimization}. \small{$\dagger$ denotes our trained models} and * indicates the results obtained from the publicly available pre-trained model. } \label{table:celoss} \end{table*} \subsection{Image Captioning Results} \label{sec.ic.results} Here we incorporate the proposed CLTA and SAE regularizer to recent image-captioning models including Up-Down~\cite{Anderson2018} and AoANet~\cite{Huang_2019_ICCV} and report their performance on MS-COCO dataset in multiple metrics (see \Cref{table:celoss}). The tables report the original results of these methods from their publications in the top block and the rows in cyan show relative improvement of our models when compared to the baselines. The baseline models are trained for two settings - 1)Up-Down$^{\dagger}$, is the model re-trained on the architecture of Anderson \etal \cite{Anderson2018} and, 2) AoANet$^{\dagger}$, is the Attention-on-Attention model re-trained as in Huang \etal \cite{Huang_2019_ICCV}. Note that for both Up-Down and AoANet, we use the original source code to train them in our own hardware. We replace the ``soft-attention" module in our Up-Down baseline by CLTA directly. 
The AoANet model is based on the powerful Transformer \cite{vaswani2017attention} architecture with the multi-head dot attention in both encoder and decoder. For AoANet, we replace the dot attention in the decoder of AoANet at each head by the CLTA which results in multi-head CLTA. The SAE-decoder is added as a regularizer on top of these models as also discussed in \cref{sec.expsetup}. As discussed later in \cref{sec.ablation}, we train all our models with $128$ dimensions for the CLTA and with the Denoising SAE decoder (initialized with $\bh^{last}$). We evaluate our models with the cross-entropy loss training and also by using the CIDEr score oprimization \cite{rennie2017self} after the cross-entropy pre-training stage (\cref{table:celoss}). For the cross-entropy one, our combined approach consistently improves over the baseline performances across all metrics. It is clear from the results that improvements in CIDEr and Bleu-4 are quite significant which shows that our approach generates more human-like and accurate sentences. It is interesting to note that AoANet with CLTA and SAE-regularizer also gives consistent improvements despite having a strong transformer language model. We show in \cref{sec.qualitative} the differences between our captions and the captions generated from Up-Down and AoANet. Our method is modular and improves on state-of-the-art models despite the architectural differences. Moreover, the SAE decoder is discarded after training and hence it brings no additional computational load during test-time but with significant performance boost. For CIDEr optimization, our models based on Up-Down and AoANet also show significant improvements in all metrics for our proposed approach. \begin{table}[t] \renewcommand*{\arraystretch}{1.1} \begin{center} \resizebox{0.8\textwidth}{!}{ \begin{tabular}{|l|c|c|c|c|c|c|} \hline Models & \multicolumn{2}{c|}{50\% data} & \multicolumn{2}{c|}{75\% data} & \multicolumn{2}{c|}{100\% data} \\ % \hline & Bleu-4 & CIDEr & Bleu-4 & CIDEr & Bleu-4 & CIDEr \\ \hline Up-Down & 35.4 & 112.0 & 35.8 & 112.7 & 36.0 & 113.3 \\ \hline Up-Down+CLTA& 36.3 & 113.7 & 36.3 & 114.5 & 36.5 & 115.0 \\ \hline Up-Down+CLTA+SAE-Reg & \textbf{36.6} & \textbf{114.8}& \textbf{36.8} &\textbf{115.6} & \textbf{37.1} &\textbf{116.2} \\ \hline \hline AoANet & 36.6 & 116.1 & 36.8 & 118.1 & 36.9 & 118.4 \\ \hline AoANet+CLTA& 36.9 & 116.7 & 37.1 & 118.4 & 37.4 & 119.1 \\ \hline AoANet+CLTA+SAE-Reg & \textbf{37.2} & \textbf{117.5}& \textbf{37.6} &\textbf{118.9} & \textbf{37.9} &\textbf{119.9} \\ \hline \end{tabular}} \end{center} \caption{Evaluation of our CLTA and SAE-Regularizer methods by training on a subset of the MSCOCO ``Karpathy'' Training split.} \label{table:lowdata} \end{table} \subsection{Learning to Caption with Less Data} \label{sec.lessdata} Table \ref{table:lowdata} evaluates the performance of our proposed models for a subset of the training data, where $x$\% is the percentage of the total data that is used for training. All these subsets of the training samples are chosen randomly. Our CLTA module is trained with $128$ dimensions for the latent topics along with the Denoising SAE Regularizer initialized with the last hidden state of the LSTM (Up-Down+CLTA+SAE-Reg). Despite the number of training samples, our average improvement with CLTA and SAE-Regularizer is around 1\% in Bleu-4 and 2.9\% in CIDEr for the Up-Down model and 0.8\% in Bleu-4 and 1.2\% in CIDEr for the AoANet model. 
The significant improvements in Bleu-4 and CIDEr scores with only 50\% and 75\% of the data compared to the baseline validate our proposed methods as a form of rich prior.

\subsection{Qualitative Results}
\label{sec.qualitative}
In \cref{fig:qualitative}, we show examples of images and captions generated by the baselines Up-Down and AoANet along with our proposed methods, CLTA and SAE-Regularizer. The baseline models produce repetitive words and errors while generating captions (\textit{in front of a mirror}, \textit{a dog in the rear view mirror}). Our models correct these mistakes by finding relevant words according to the context and putting them together in a human-like caption format (\textit{a rear view mirror shows a dog} conveys the intended meaning of \textit{a rear view mirror shows a dog in the rear view mirror} without the redundancy). From all the examples shown, we can see that our model overcomes the overfitting of current methods by completing a caption with more semantic and syntactic generalization (\eg~\textit{different flavoured donuts} and \textit{several trains on the tracks}).

\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{qualitative_new.pdf}
\caption{Examples of generated captions from the baselines Up-Down and AoANet, our proposed CLTA, and our final models with both CLTA and SAE Regularizer.}
\label{fig:qualitative}
\end{figure}

\subsection{Ablation Study}
\label{sec.ablation}
\textbf{Conditional Latent Topic Attention (CLTA).} Table \ref{table:mil_ablation} depicts the results for the CLTA module described in \cref{sec.method.att}. Soft-attention is used as a baseline and corresponds to the attention mechanism in \cite{xu2015show}, which is the main attention module in the Up-Down image captioning model by Anderson \etal \cite{Anderson2018}. We replace this attention with the CLTA and evaluate its performance for different numbers of latent dimensions, \ie~topics ($C$). The models trained with latent topic dimensions of $128$, $256$ and $512$ all outperform the baseline significantly. The higher CIDEr and Bleu-4 scores for these latent topics show the model's capability to generate more descriptive and accurate human-like sentences. As we increase the dimension of the latent topics from $128$ to $512$, we predict more relevant keywords, as the new topics learnt by the CLTA module with $512$ dimensions are useful in encoding more information and hence generating meaningful captions.
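For reference, the CLTA module evaluated in this ablation, following the equations in \cref{sec.method.att}, can be sketched as a standalone PyTorch module as below; tensor shapes and variable names are illustrative, the sketch omits the KL term of \cref{eq.kl}, and it is not the released implementation.

\begin{verbatim}
import torch
import torch.nn as nn

class CLTA(nn.Module):
    def __init__(self, D, n, C):
        super().__init__()
        self.W_sc = nn.Linear(D, C, bias=False)   # visual  -> topic space
        self.W_hc = nn.Linear(n, C, bias=False)   # textual -> topic space
        self.W_sr = nn.Linear(D, C, bias=False)
        self.W_hr = nn.Linear(n, C, bias=False)

    def forward(self, v, h):
        # v: (B, R, D) region features, h: (B, n) previous hidden state
        q = self.W_sc(v) + self.W_hc(h).unsqueeze(1)     # (B, R, C)
        p_topic = torch.softmax(q, dim=2)                # P(l | h, v_i)
        r = self.W_sr(v) + self.W_hr(h).unsqueeze(1)     # (B, R, C)
        p_region = torch.softmax(r, dim=1)               # P(i | h, v, l)
        alpha = (p_region * p_topic).sum(dim=2)          # marginalize topics
        context = (alpha.unsqueeze(2) * v).sum(dim=1)    # (B, D) weighted sum
        return context, alpha, p_topic
\end{verbatim}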
\begin{table}[t] \centering \begin{subtable}{.49\textwidth} \centering% \raggedright \begin{tabular}{|c|c|c|c|c|c|} \hline Models & Baseline & \multicolumn{3}{c|}{CLTA}\\ % \hline & Soft-Attention & 128 & 256 & 512 \\ \hline Bleu-4 & 36.0 & 36.5 & 36.6 & \textbf{36.7} \\ \hline CIDEr & 113.3 & 115.0 & 115.2 & \textbf{115.3} \\ \hline \end{tabular} \caption{Evaluation scores for the Up-Down model with soft-attention and ablations of our CLTA module.}\label{table:mil_ablation} \end{subtable}\hfill \begin{subtable}{.49\textwidth} \centering% \renewcommand*{\arraystretch}{1.1} \resizebox{0.95\textwidth}{!}{ \begin{tabular}{|l|l|c|c|c|} \hline Models & SAE-Decoder & $\bh$ & Bleu-4 &CIDEr \\ \hline\hline Baseline& No & - & 36.0 & 113.3 \\ \hline \multirow{4}{*}{CLTA-128} &\multirow{2}{*}{Vanilla} & First & 36.9 & 115.8 \\ & & Last & 36.8 & 115.3 \\ \cline{2-5} &\multirow{2}{*}{Denoising} & First & 36.8 & 116.1 \\ & & Last & 37.1 & \textbf{116.2} \\ \hline CLTA-512& Denoising & Last & \textbf{37.2} & 115.9 \\ \hline \end{tabular}} \caption{Additional quantitative evaluation results from different settings of the SAE decoder when trained with image captioning decoder. $\bh$ denotes the hidden state.} \label{table:sae_ablation} \end{subtable} \caption{Ablative Analysis for different settings on our (a) CLTA module and, (b) SAE regularizer training.} \end{table} \noindent \textbf{Image Captioning Decoder with SAE Regularizer. } \Cref{table:sae_ablation} reports ablations for our full image captioning model (Up-Down with CLTA) and the SAE regularizer. As discussed in \cref{sec.method.sae}, SAE decoder (parameters defined by $\Theta_D$) is initialized with the hidden state of the image captioning decoder. During training, we test different settings of how the SAE decoder is trained with the image captioning decoder: (1) Vanilla vs Denoising SAE and, (2) $\bh^{\text{first}}$ vs $\bh^{\text{last}}$, whether the SAE decoder is initialized with the first or last hidden state of the LSTM decoder. For all the settings, we fine-tune the parameters of GRU$_\text{D}$ ($\Theta_D$) when trained with the image captioning model (the parameters are initialized with the weights of the pre-trained Vanilla or Denoising SAE decoder). The results in Table \ref{table:sae_ablation} are reported on different combinations from the settings described above, with the CLTA having $128$ and $512$ dimensions in the image captioning decoder. Adding the auxiliary branch of SAE decoder significantly improves over the baseline model with CLTA and in the best setting, Denoising SAE with $\bh^{\text{last}}$ improves the CIDEr and Bleu-4 scores by 1.2 and 0.6 respectively. As the SAE decoder is trained for the task of reconstruction, fine-tuning it to the task of captioning improves the image captioning decoder. Initializing the Vanilla SAE decoder with $\bh^{\text{last}}$ does not provide enough gradient during training and quickly converges to a lower error, hence this brings lower generalization capacity to the image captioning decoder. As $\bh^{\text{first}}$ is less representative of an entire caption compared to $\bh^{\text{last}}$, vanilla SAE with $\bh^{\text{first}}$ is more helpful to improve the captioning decoder training. 
On the other hand, the Denoising SAE, being robust to noisy summary vectors, provides enough training signal to improve the image captioning decoder when initialized with either $\bh^{\text{first}}$ or $\bh^{\text{last}}$, with slightly better Bleu-4 and CIDEr performance for $\bh^{\text{last}}$, since it forces $\bh^{\text{last}}$ to have an accurate low-dimensional representation for the SAE and hence better generalization. It is clear from the results in \cref{table:sae_ablation} that the Denoising SAE with $\bh^{\text{last}}$ helps to generate accurate and generalizable captions. From our experiments, we found that CLTA with $128$ topics and the Denoising SAE (with $\bh^{\text{last}}$) performs better than even its counterpart with $512$ topics. Hence, for all our experiments in \cref{sec.ic.results} and \cref{sec.lessdata}, our topic dimension is $128$ with the Denoising SAE initialized with $\bh^{\text{last}}$.
\section{Conclusion}
\label{sec.conclusion}
In this paper, we have introduced two novel methods for image captioning that exploit prior knowledge and hence help to improve state-of-the-art models even when the data is limited. The first method exploits the association between visual and textual features by learning latent topics via an LDA topic prior and obtains robust attention weights for each image region. The second one is an SAE regularizer that is pre-trained in an autoencoder framework to learn the structure of the captions and is plugged into the image captioning model to regulate its training. Using these modules, we obtain consistent improvements on the two investigated models, the bottom-up top-down and the AoANet image captioning models, indicating the usefulness of our two modules as a strong prior. In future work, we plan to further investigate the potential use of label space structure learning for other challenging vision tasks with limited data and to improve generalization.
\bibliographystyle{splncs04}
\bibliography{egbib}
\end{document}
https://openreview.net/forum?id=f75kMo1dnKD
https://arxiv.org/abs/1911.10082
Please evaluate our paper on injecting prior knowledge into image caption generation, focusing on the clarity of the ideas, the significance of the experiments, and the overall strength of the paper.
Powerful implementation of prior knowledge into image captioning models [Summary] In 2-3 sentences, describe the key ideas, experiments, and their significance. The authors propose adding two prior-knowledge-based modules to image captioning models. One module uses prior knowledge on the association of keywords to image regions. The other module regularizes generated captions to be more realistic. [Strengths] What are the strengths of the paper? Clearly explain why these aspects of the paper are valuable. Simple but powerful ideas; clear methods; topical submission; excellent writing. [Weaknesses] What are the weaknesses of the paper? Clearly explain why these aspects of the paper are weak. By the nature of the method, the experimental settings are complex. [Overall rating] Paper rating: Strong accept [Confidence] 4/5 [Detailed comments] Additional comments regarding the paper (e.g. typos or other possible improvements you would like to see for the camera-ready version of the paper, if any.) - Grammar: lines 40, 189 - Formatting: line 221 Rating: 9: Top 15% of accepted papers, strong accept Confidence: 3: The reviewer is fairly confident that the evaluation is correct
\documentclass[runningheads]{llncs}
\usepackage{graphicx}
\usepackage{comment}
\usepackage{amsmath,amssymb} %
\usepackage{color}
\usepackage{epsfig}
\usepackage{graphicx}
\usepackage{amsmath}
\usepackage{amssymb}
\usepackage{multirow}
\usepackage{graphicx}
\usepackage[table]{xcolor}
\usepackage[export]{adjustbox}
\usepackage{cellspace, tabularx}
\newcommand{\tabitem}{~~\llap{\textbullet}~~}
\usepackage{caption}
\usepackage{siunitx}
\setlength{\belowcaptionskip}{-2ex}
\usepackage{floatrow}
\newfloatcommand{capbtabbox}{table}[][\FBwidth]
\usepackage{blindtext}
\usepackage{subcaption}
\captionsetup{compatibility=false}
\newcommand{\etal}{\textit{et al.}}
\newcommand{\eg}{\textit{e.g.}}
\newcommand{\ie}{\textit{i.e.}}
\usepackage{color, colortbl}
\definecolor{LightCyan}{rgb}{0.88,1,1}
\definecolor{Gray}{gray}{0.9}
\usepackage{cleveref}
\usepackage{bm}
\newcommand{\bv}{\bm{v}}
\newcommand{\bx}{\bm{x}}
\newcommand{\by}{\bm{y}}
\newcommand{\bz}{\bm{z}}
\newcommand{\bc}{\bm{c}}
\newcommand{\bh}{\bm{h}}
\newcommand{\softatt}{{\textbf{Soft-Att}}}
\newcommand{\mimlOneTwoEight}{{\textbf{Two-Stream Att(128)}}}
\newcommand{\firstIC}{{\textbf{Vanilla-$\Theta_D(\bh^{\text{first}})$}}}
\newcommand{\lastIC}{{\textbf{Denoising-$\Theta_D(\bh^{\text{last}})$}}}
\newcommand{\lastSAE}{{\textbf{Denoising SAE-Decoder}}}
\newcommand{\base}{\footnotesize{\textbf{Baseline}}}
\newcommand{\cla}{\footnotesize{\textbf{+Conditional Latent Attn.}}}
\newcommand{\sae}{\footnotesize{\textbf{+SAE-Regularizer}}}
\newcommand{\gtone}{\footnotesize{\textbf{GT1}}}
\newcommand{\gttwo}{\footnotesize{\textbf{GT2}}}
\usepackage{soul}
\newcommand{\hbnote}[1]{\textbf{\color{red}HB\@:~#1}}
\begin{document}
\pagestyle{headings}
\mainmatter
\def\ECCVSubNumber{8} %
\title{Injecting Prior Knowledge into Image Caption Generation}
\titlerunning{Injecting Prior Knowledge into Image Caption Generation}
\author{Arushi Goel\inst{1}\and Basura Fernando\inst{2} \and Thanh-Son Nguyen\inst{2} \and Hakan Bilen\inst{1}}
\authorrunning{A. Goel et al.}
\institute{School of Informatics, University of Edinburgh, Scotland \and AI3, Institute of High Performance Computing, A*STAR, Singapore }
\maketitle
\begin{abstract}
Automatically generating natural language descriptions from an image is a challenging problem in artificial intelligence that requires a good understanding of the visual and textual signals and the correlations between them. The state-of-the-art methods in image captioning struggle to approach human-level performance, especially when data is limited. In this paper, we propose to improve the performance of the state-of-the-art image captioning models by incorporating two sources of prior knowledge: (i) a conditional latent topic attention that uses a set of latent variables (topics) as an anchor to generate highly probable words and (ii) a regularization technique that exploits the inductive biases in the syntactic and semantic structure of captions and improves the generalization of image captioning models. Our experiments validate that our method produces more human-interpretable captions and also leads to significant improvements on the MSCOCO dataset in both the full and low data regimes.
\end{abstract}
\section{Introduction}
\label{sec.intro}
In recent years there has been a growing interest in developing end-to-end learning algorithms for computer vision tasks.
Despite the success in many problems such as image classification~\cite{he2016deep} and person recognition~\cite{joon2015person}, the state-of-the-art methods struggle to reach human-level performance on more challenging tasks such as image captioning with limited time and data, which involves understanding visual scenes and describing them in natural language. This is in contrast to humans, who effortlessly understand scenes they have never seen before and communicate them in language. It is likely that this efficiency is due to strong prior knowledge of structure in the visual world and language~\cite{chomsky2014aspects}.

Motivated by this observation, in this paper we ask ``How can such prior knowledge be represented and utilized to learn better image captioning models with deep neural networks?''. To this end, we look at the state-of-the-art encoder-decoder image captioning methods~\cite{vinyals2015show,xu2015show,Anderson2018}, where a Convolutional Neural Network (CNN) encoder extracts an embedding from the image and a Recurrent Neural Network (RNN) decoder generates the text based on the embedding. This framework typically contains two \emph{dynamic} mechanisms to model the sequential output: i) an attention module \cite{bahdanau2014neural,xu2015show} that identifies the relevant parts of the image embedding based on the previous word and visual features and ii) the RNN decoder that predicts the next words based on its previous state and the attended visual features. While these two components are very powerful in modeling complex relations between the visual and language cues, we hypothesize that they are capable of, and at the same time prone to, overfitting to wrong correlations, thus leading to poor generalization performance when the data is limited. Hence, we propose to regulate these modules with two sources of prior knowledge.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.82\linewidth]{highlevel_introfig.pdf}
\end{center}
\caption{Our final model with Conditional Latent Topic Attention (CLTA) and a sentence prior (Sentence Auto-Encoder (SAE) regularizer); both rely on prior knowledge to find relevant words and generate non-template-like, generalized captions, in contrast to the same baseline caption for both images --- \emph{A man hitting a tennis ball with a racket.}}
\label{fig:introfig}
\end{figure}

First, we propose an attention mechanism that accurately attends to relevant image regions and better copes with complex associations between words and image regions. For instance, in the example of a ``man playing tennis'', the input visual attention encoder might only look at the local features (\emph{tennis ball}), leaving out the global visual information (\emph{tennis court}). Hence, it generates a trivial caption such as ``A man is hitting a tennis ball'', which is not a full description of the image in context (as shown in \cref{fig:introfig}). We solve this ambiguity by incorporating prior knowledge of context via latent topic models~\cite{blei2003latent}, which are known to identify semantically meaningful topics~\cite{chang2009reading}, into our attention module. In particular, we introduce a Conditional Latent Topic Attention (CLTA) module that models the relationship between a word and image regions through a shared latent space, \ie~latent topics, to find salient regions in an image.
\emph{Tennis ball} steers the model to associate this word with the latent topic ``tennis'', which in turn is responsible for localizing the \emph{tennis court} in the image. If a region-word pair has a higher probability with respect to a latent topic and if the same topic has a higher probability with respect to some other regions, then those regions are also salient and will be highly weighted. Therefore, we compute two sets of probabilities conditioned on the current word of the captioning model. We use a conditional-marginalized probability, where marginalization is done over latent topics, to find salient image regions for generating the next word. Our CLTA is modeled as a neural network in which the marginalized probability is used to weight the image region features and obtain a context vector that is passed to an image captioning decoder to generate the next word.

Second, the complexity of the structure of natural language makes it harder to generate fluent sentences while preserving a higher amount of encoded information (high Bleu-4 scores). Although current image captioning models are able to model this linguistic structure, the generated captions follow a more template-like form, for instance, ``A \ul{man} \ul{hitting} a \ul{tennis ball} with a \ul{racket}.'' As shown in \cref{fig:introfig}, visually similar images have template-like captions from the baseline model. Inspired by sequence-to-sequence (seq2seq) machine translation \cite{sutskever2014sequence,luong2015multi,wiseman2016sequence,gehring2017convolutional}, we introduce a new regularization technique for captioning models, coined the SAE Regularizer. In particular, we design and train an additional seq2seq sentence auto-encoder model (``SAE'') that first reads in a whole sentence as input and generates a fixed-dimensional vector, which is then used to reconstruct the input sentence. Human languages are highly structured and follow an immense amount of regularity. Certain words are more likely to co-occur and certain word patterns can be observed more often. Our SAE is trained to learn the structure of the input (sentence) space in an offline manner by exploiting the regularity of the sentence space. The continuous latent space learned by the SAE blends together both the syntactic and semantic information from the input sentence space and generates high-quality sentences during reconstruction via the SAE decoder. This suggests that the continuous latent space of the SAE contains sufficient information regarding the syntactic and semantic structure of input sentences. Specifically, we use SAE-Dec as an auxiliary decoder branch (see \cref{fig:sae}). Adding this regularizer forces the representation from the image encoder and language decoder to be more representative of the visual content and less likely to overfit. SAE-Dec is employed along with the original image captioning decoder (``IC-Dec'') to output the target sentence during training; however, we do not use the SAE regularizer at test time, avoiding additional computations.

Both of the proposed improvements also help to reduce the reliance on large amounts of paired image-caption data \cite{lin2014microsoft,liu2004conceptnet} by incorporating prior knowledge that is learned from unstructured data in the form of latent topics and the SAE. These priors -- also known as ``inductive biases'' -- help the models make inferences that go beyond the observed training data.
Through an extensive set of experiments, we demonstrate that our proposed CLTA module and SAE-Dec regularizer improve the image captioning performance in both the limited-data and full-data training regimes on the MSCOCO dataset \cite{lin2014microsoft}.
\section{Related Work}
\label{sec.rel}
Here, we first discuss related attention mechanisms and then the use of knowledge transfer in image captioning models.

\noindent \textbf{Attention mechanisms in image captioning. } The pioneering work in neural machine translation \cite{bahdanau2014neural,luong2015effective,cho2014properties} has shown that attention in encoder-decoder architectures can significantly boost the performance in sequential generation tasks. Visual attention is one of the biggest contributors to image captioning \cite{fang2015captions,xu2015show,Anderson2018,Huang_2019_ICCV}. Soft attention and hard attention variants for image captioning were introduced in~\cite{xu2015show}. Bottom-Up and Top-Down self attention is effectively used in~\cite{Anderson2018}. Attention on attention is used in recent work~\cite{Huang_2019_ICCV}. Interestingly, they use attention at both the encoder and the decoder steps of the captioning process. Our proposed attention differs significantly from these attention mechanisms. First, the traditional attention methods, soft-attention \cite{bahdanau2014neural} and scaled dot-product attention \cite{vaswani2017attention}, aim to find features or regions in an image that highly correlate with a word representation~\cite{Anderson2018,bahdanau2014neural,sharma2018conceptual}. In contrast, our \emph{conditional-latent topic attention} uses latent variables, \ie~topics, as anchors to find the relationship between word representations and image regions (features). Some image regions and word representations may project to the same set of latent topics more than others and are therefore more likely to co-occur. Our method learns to model these relationships between word representations and image region features using our latent space. We allow competition among regions and latent topics to compute two sets of probabilities to find salient regions. This competing strategy and our latent topics guided by pre-trained LDA topics \cite{blei2003latent} allow us to better model relationships between visual features and word representations. Hence, the neural structure and our attention mechanism are quite different from all prior work~\cite{xu2015show,Anderson2018,Huang_2019_ICCV,bahdanau2014neural}.

\noindent \textbf{Knowledge transfer in image captioning. } It is well known that language exhibits semantic and syntactic biases \cite{bao2019generating,marcheggiani2018exploiting}. We exploit these biases by first training a recurrent caption auto-encoder to capture this useful information using \cite{sutskever2014sequence}. Our captioning auto-encoder is trained to reconstruct the input sentence; hence, this decoder encapsulates the structural, syntactic and semantic information of the input captions. During the captioning process, we regularize the captioning RNN with this pretrained caption-decoder to exploit biases in the language domain and transfer them to the visual-language domain. To the best of our knowledge, no prior work has attempted such knowledge transfer in image captioning. Zhou \etal \cite{zhou2019improving} encode external knowledge in the form of knowledge graphs using Concept-Net \cite{liu2004conceptnet} to improve image captioning.
The closest to ours is the work of \cite{yang2019auto}, where the authors propose to generate scene graphs from both sentences and images and then encode the scene graphs into a common dictionary before decoding them back to sentences. However, generation of scene graphs from images is itself an extremely challenging task. Finally, we propose to transfer syntactic and semantic information to the image captioning process through a regularization technique implemented as an auxiliary loss. Our experiments suggest that this leads to considerable improvements, especially in more structured measures such as CIDEr \cite{vedantam2015cider}.
\section{Method}
\label{sec.method}
In this section, we first review image captioning with attention, then introduce our CLTA mechanism, and finally our sentence auto-encoder (SAE) regularizer.
\subsection{Image Captioning with Attention}
\label{sec.overview}
Image captioning models are based on an encoder-decoder architecture \cite{xu2015show} that uses a CNN as the image encoder and a Long Short-Term Memory (LSTM)~\cite{hochreiter1997long} as the decoder -- see Fig.~\ref{fig:introfig}. The encoder takes an image as input and extracts a feature set $v=\{\bv_1,\ldots,\bv_R\}$ corresponding to $R$ regions of the image, where $\bv_i \in \mathbb{R}^D$ is the $D$-dimensional feature vector for the $i^{th}$ region. The decoder outputs a caption $y$ by generating one word at each time step. At time step $t$, the feature set $v$ is combined into a single vector $\bv^t_a$ by taking a weighted sum as follows:
\begin{equation} \bv^t_a = \sum_{i=1}^R \alpha_{i}^{t} \bv_{i} \label{eq.ct} \end{equation}
where $\alpha^t_i$ is the CLTA weight for region $i$ at time $t$, which is explained in the next section. The decoder LSTM $\phi$ then takes the concatenated vector $[\bv^t_a|E \by_{t-1}]$ and the previous hidden state $\mathbf{h_{t-1}}$ as input and generates the next hidden state $\mathbf{h_t}$:
\begin{align} \mathbf{h_t} &= \phi([\bv^t_a|E \by_{t-1}], \mathbf{h_{t-1}},\Theta_{\phi}) \label{eq.lstm.hil} \end{align}
where $|$ denotes concatenation, $\by_{t-1}\in \mathbb{R}^K$ is the one-hot vector of the word generated at time $t-1$, $K$ is the vocabulary size, $\mathbf{h_t} \in \mathbb{R}^{n}$ is the hidden state of the LSTM at time $t$, $n$ is the LSTM dimensionality, and $\Theta_{\phi}$ are the trainable parameters of the LSTM. Finally, the decoder predicts the output word by applying a linear mapping $\psi$ on the hidden state and $\bv^t_a$ as follows:
\begin{align} \by_{t} &= \psi([\mathbf{h_t}|\bv^t_a],\Theta_{\psi}) \end{align}
where $\Theta_{\psi}$ are trainable parameters. Our LSTM implementation closely follows the formulation in \cite{zaremba2014recurrent}. The word embedding matrix $E \in \mathbb{R}^{m\times K}$ is trained to translate one-hot vectors to word embeddings as in \cite{xu2015show}, where $m$ is the word embedding dimension. In the next section, we describe our proposed CLTA mechanism.
\subsection{CLTA: Conditional Latent Topic Attention}
\label{sec.method.att}
At time step $t$, our CLTA module takes the previous LSTM hidden state ($\bh^{t-1}$) and image features to output the attention weights $\alpha^t$. Specifically, we use a set of latent topics to model the associations between textual ($\bh^{t-1}$) and visual features ($\bv$) to compute the attention weights.
The attention weight for region $i$ is obtained by conditional marginalization over the latent topic $l$ as follows:
\begin{align} \alpha^t_i & = P(\text{region}=i|h^{t-1}, \bv) = \sum_{l=1}^C P(\text{region}=i|h^{t-1}, \bv, l) P(l|h^{t-1}, \bv_{i}) \end{align}
where $l$ is a topic variable in the $C$-dimensional latent space. To compute $P(l|h^{t-1}, \bv_i)$, we first project both textual and visual features to a common $C$-dimensional shared latent space, and obtain the associations by summing the projected features as follows:
\begin{equation} \bm{q}^t_{i}= W_{sc} \bv_i + W_{hc} \bh^{t-1} \end{equation}
where $W_{sc}\in \mathbb{R}^{C\times D}$ and $W_{hc}\in \mathbb{R}^{C\times n}$ are the trainable projection matrices for visual and textual features, respectively. Then the latent topic probability is given by:
\begin{equation} P_L = P(l|\bh^{t-1}, \bv_{i}) = \frac{\exp({\bm{q}^t_{il}})}{\sum_{k=1}^{C}\exp({\bm{q}^t_{ik}})} \label{eq.ltopic} \end{equation}
Afterwards, we compute the probability of a region given the textual features, visual features and latent topic variable as follows:
\begin{equation} \bm{r}^t_{i} = W_{sr} \bv_i + W_{hr} \bh^{t-1} \end{equation}
\begin{align} P(\text{region}=i|\bh^{t-1}, v, l) &= \frac{\exp({\bm{r}^t_{il}})}{\sum_{k=1}^{R}\exp({\bm{r}^t_{kl}})} \end{align}
where $W_{sr}\in \mathbb{R}^{C\times D}$ and $W_{hr}\in \mathbb{R}^{C\times n}$ are the trainable projection matrices for visual and textual features, respectively. The latent topic posterior in \cref{eq.ltopic} is pushed towards the pre-trained LDA topic prior by adding a KL-divergence term to the image captioning objective. We apply Latent Dirichlet Allocation (LDA) \cite{blei2003latent} to the caption data. Then, each caption has an inferred topic distribution $Q_T$ from the LDA model, which acts as a prior on the latent topic distribution $P_L$. To do this, we take the average of the $C$-dimensional latent topic distributions over all time steps $0,\ldots,t-1$ as:
\begin{equation} P_{L_{avg}} = \frac{1}{t}\sum_{k=0}^{t-1} P(l|\bh^{k}, \bv_{i}) \end{equation}
Hence, the KL-divergence objective is defined as:
\begin{equation} D_{KL}(P_{L_{avg}}||Q_T) = \sum_{c \in C} P_{L_{avg}}(c) \log\left(\frac{P_{L_{avg}}(c)}{Q_T(c)}\right) \label{eq.kl} \end{equation}
\begin{figure}[t]
\centering
\includegraphics[width=0.9\linewidth]{latent-topics.pdf}
\caption{Image-Caption pairs generated from our CLTA module with $128$ dimensions and visualization of the Top-20 words from the latent topics.}
\label{fig:latentcategory}
\end{figure}
This learnt latent topic distribution captures the semantic relations between the visual and textual features in the form of visual topics, and therefore we also use this latent posterior $P_L$ as a source of meaningful information during the generation of the next hidden state. The modified hidden state $\mathbf{h_t}$ in \cref{eq.lstm.hil} is now given by:
\begin{align} \mathbf{h_t} &= \phi([\bv^t_a|E \by_{t-1}|P_L], \mathbf{h_{t-1}},\Theta_{\phi}) \label{eq.lstm.hil.new} \end{align}
We visualize the distribution of latent topics in \Cref{fig:latentcategory}. While traditional ``soft-max'' attention exploits simple correlations between textual and visual information, we make use of latent topics to model the associations between them.
\subsection{SAE Regularizer}
\label{sec.method.sae}
Encoder-decoder methods are widely used for translating one language to another \cite{cho2014learning,sutskever2014sequence,bahdanau2014neural}.
When the input and target sentences are the same, these models function as auto-encoders by first encoding an entire sentence into a fixed low-dimensional vector in a latent space, and then reconstructing it. Auto-encoders are commonly employed for unsupervised training in text classification \cite{dai2015semi} and machine translation \cite{luong2015multi}. In this paper, our SAE regularizer has two advantages: i) it acts as a soft constraint on the image captioning model to regularize the syntactic and semantic space of the captions for better generalization and ii) it encourages the image captioning model to extract more context information for better modelling of long-term memory. These two properties of the SAE regularizer yield semantically meaningful captions with syntactic generalization and prevent the generation of naive, template-like captions. Our SAE model uses the network architecture of \cite{sutskever2014sequence} with Gated Recurrent Units (GRU) \cite{chung2014empirical}. Let us denote the parameters of the decoder GRU by $\Theta_{\text{D}}$. A stochastic variation of the vanilla sentence auto-encoder is the de-noising auto-encoder~\cite{vincent2008extracting}, which is trained to ``de-noise'' corrupted versions of its inputs. To inject such input noise, we drop each word in the input sentence with a probability of 50\% to reduce the contribution of any single word to the semantics of a sentence. We train the SAE model in an offline stage on the training set of the captioning dataset. After the SAE model is trained, we discard its encoder and integrate only its decoder to regularize the captioning model. As depicted in \Cref{fig:sae}, the pretrained SAE decoder takes the last hidden state vector $\bh$ of the captioning LSTM as input and generates an extra caption (denoted as $y_{\text{sae}}$) in addition to the output of the captioning model (denoted as $y_{\text{lstm}}$). We use the output of the SAE decoder only at training time to regularize the captioning model $\phi$ by implicitly transferring the latent structure previously learned by the SAE decoder.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.9\linewidth]{SAEReg.pdf}
\end{center}
\caption{Illustration of our proposed Sentence Auto-Encoder (SAE) regularizer with the image captioning decoder. The captioning model is trained by adding the SAE decoder as an auxiliary branch, which thus acts as a regularizer.}
\label{fig:sae}
\end{figure}
Our integrated model is optimized to generate two accurate captions (\ie $y_{\text{sae}}$ and $y_{\text{lstm}}$) by minimizing a weighted average of two loss values:
\begin{equation} \arg \min_{\Omega}~~~\lambda L(y^*,y_{\text{lstm}}) + (1-\lambda) L(y^*,y_{\text{sae}}) \label{eq.loss} \end{equation}
where $L$ is the cross-entropy loss computed for each caption, word by word, against the ground-truth caption $y^*$, $\lambda$ is the trade-off parameter, and $\Omega$ are the parameters of our model. We consider two scenarios during our experimentation.
\begin{itemize}
\item First, we set the parameters of the SAE decoder $\Theta_D$ to be the weights of the pre-trained SAE decoder and freeze them while optimizing \Cref{eq.loss} in terms of $\Omega=\{ \Theta_{\phi},\Theta_{\psi},E \}$.
\item Second, we initialize $\Theta_D$ with the weights of the pre-trained SAE decoder and fine-tune them along with the LSTM parameters, \ie $\Omega=\{\Theta_{\phi},\Theta_{\psi},E,\Theta_{\text{D}}\}$.
\end{itemize}
As discussed in \cref{sec.method.att}, we also minimize the KL divergence in \cref{eq.kl} along with the final regularized objective in \cref{eq.loss} as:
\begin{equation} \arg \min_{\Omega}~~~\lambda L(y^*,y_{\text{lstm}}) + (1-\lambda) L(y^*,y_{\text{sae}}) + \gamma D_{KL}(P_{L_{avg}}||Q_T) \label{eq.totalloss} \end{equation}
where $\gamma$ is the weight of the KL divergence loss.
\paragraph{Discussion. } An alternative way of exploiting the information from the pre-trained SAE model is to bring the representations from the captioning decoder closer to the encodings of the SAE encoder by minimizing the Euclidean distance between the hidden state from the SAE encoder and the hidden state from the captioning decoder at each time step. However, we found this setting to be too restrictive on the learned hidden state of the LSTM.
\section{Experiments}
\label{sec.exp}
\noindent \textbf{Dataset. } Our models are evaluated on the standard MSCOCO 2014 image captioning dataset~\cite{lin2014microsoft}. For fair comparisons, we use the same data splits for training, validation and testing as in \cite{karpathy2015deep}, which have been used extensively in prior work. This split has 113,287 images for training and 5k images each for validation and testing, with 5 captions per image. We perform evaluation on all relevant metrics for generated sentences --- CIDEr \cite{vedantam2015cider}, Bleu \cite{papineni2002bleu}, METEOR \cite{denkowski2014meteor}, ROUGE-L \cite{lin2004automatic} and SPICE \cite{anderson2016spice}. \hfill
\noindent \textbf{Implementation Details. } For training our image captioning model, we compute the image features based on the Bottom-Up architecture proposed by \cite{Anderson2018}, where the model is trained using a Faster-RCNN model \cite{ren2015faster} on the Visual-Genome Dataset \cite{krishna2017visual} with object and attribute information. These features are extracted from $R$ regions and each region feature has $D$ dimensions, where $R$ and $D$ are 36 and 2048, respectively, as proposed in \cite{Anderson2018}. We use these $36\times 2048$ image features in all our experiments.
\subsection{Experimental Setup}
\label{sec.expsetup}
\paragraph{LDA Topic Models.} The LDA \cite{blei2003latent} model is learned in an offline manner to generate a $C$-dimensional topic distribution for each caption. Briefly, the LDA model treats the captions as word-documents and groups these words into $C$ topics (clusters of words), learns the word distribution for each topic $(C \times V)$, where $V$ is the vocabulary size, and also generates a topic distribution $Q_T$ for each input caption, where each of the $C$ dimensions denotes the probability of the corresponding topic.
\paragraph{Sentence Auto-Encoder.} The Sentence Auto-Encoder is trained offline on the MSCOCO 2014 captioning dataset \cite{lin2014microsoft} with the same splits as discussed above. For the architecture, we have a single-layer GRU for both the encoder and the decoder. The word embeddings are learned with the network using an embedding layer, and the dimension of both the hidden state and the word embeddings is 1024. During training, the decoder is trained with teacher forcing \cite{bengio2015scheduled} with a probability of 0.5. For inference, the decoder decodes until it reaches the end-of-caption token. The learning rate for this network is 2e-3 and it is trained using the ADAM \cite{kingma2014adam} optimizer.
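For concreteness, the short sketch below (an illustrative PyTorch reading under our own assumptions about tensor shapes and layer names, not the authors' released code) summarizes how the CLTA weights of \cref{sec.method.att} and the combined objective of \cref{eq.totalloss} fit together; the captioning decoder trained with this objective is described next.
\begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F

R, D, n, C = 36, 2048, 512, 128   # regions, feature dim, LSTM size, topics (assumed)

W_sc, W_hc = nn.Linear(D, C, bias=False), nn.Linear(n, C, bias=False)
W_sr, W_hr = nn.Linear(D, C, bias=False), nn.Linear(n, C, bias=False)

def clta(v, h_prev):
    # v: (R, D) region features; h_prev: (n,) previous LSTM hidden state
    q = W_sc(v) + W_hc(h_prev)          # joint projection, shape (R, C)
    P_L = q.softmax(dim=1)              # P(l | h, v_i): softmax over topics
    r = W_sr(v) + W_hr(h_prev)          # shape (R, C)
    P_reg = r.softmax(dim=0)            # P(region = i | h, v, l): softmax over regions
    alpha = (P_reg * P_L).sum(dim=1)    # marginalize over topics -> attention weights
    return alpha, P_L

def total_loss(logits_lstm, logits_sae, y_star, P_L_avg, Q_T, lam=0.7, gamma=0.1):
    # logits_*: (T, K) per-step vocabulary scores; y_star: (T,) ground-truth word ids
    L_lstm = F.cross_entropy(logits_lstm, y_star)    # captioning branch
    L_sae = F.cross_entropy(logits_sae, y_star)      # auxiliary SAE-decoder branch
    kl = (P_L_avg * torch.log((P_L_avg + 1e-8) / (Q_T + 1e-8))).sum()
    return lam * L_lstm + (1.0 - lam) * L_sae + gamma * kl
\end{verbatim}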
\paragraph{Image Captioning Decoder with SAE Regularizer.} The architecture of our image captioning decoder is the same as the Up-Down model \cite{Anderson2018}, with their ``soft-attention'' replaced by our CLTA module, and is trained with the SAE regularizer. We also retrain the AoANet model proposed by Huang \etal \cite{Huang_2019_ICCV} by incorporating our CLTA module and the SAE regularizer. In the results section, we show improvements over the Up-Down and AoANet models using our proposed approaches. Note that the parameters for training the Up-Down and AoANet baselines are the same as in the original settings. While training the captioning models together with the SAE decoder, we jointly learn an affine embedding layer (dimension 1024) by combining the embeddings from the image captioning decoder and the SAE decoder. During inference, we use beam search to generate captions from the captioning decoder with a beam size of 5 for Up-Down and a beam size of 2 for AoANet. For training with the overall objective function in Equation \ref{eq.totalloss}, the value of $\lambda$ is initialized to 0.7 and increased by a factor of 1.1 every 5 epochs until it reaches a value of 0.9, and $\gamma$ is fixed to 0.1. We use the ADAM optimizer with a learning rate of 2e-4. Our code is implemented using PyTorch \cite{pytorch} and will be made publicly available.
\section{Results and Analysis}
\label{sec.results}
First, we study the caption reconstruction performance of the vanilla and denoising SAE, then report our model's image captioning performance on the MS-COCO dataset with full and limited data, investigate multiple design decisions and analyze our results qualitatively.
\subsection{Sentence Auto-Encoder Results}
\label{sec.quantresults}
An ideal SAE must learn to map its input to a fixed low-dimensional space such that a whole sentence can be summarized and reconstructed accurately. To this end, we experiment with two SAEs, Vanilla-SAE and Denoising-SAE, and report their reconstruction performance in terms of Bleu-4 and cross-entropy (CE) loss in Table~\ref{table:sea_results} and Fig.~\ref{fig:sea_loss}.
\newsavebox{\testbox}%
\newlength{\testheight}%
\savebox{\testbox}{%
\centering
\begin{tabular}{c|c|c}
\hline
Models & Bleu-4 $\uparrow$ & CE-Loss $\downarrow$\\
\hline\hline
Vanilla SAE & \textbf{96.33} & \textbf{0.12} \\
Denoising SAE & 89.79 & 0.23\\
\hline
\end{tabular}
}%
\settoheight{\testheight}{\usebox{\testbox}}
\begin{figure}
\begin{floatrow}
\ffigbox{
\includegraphics[width=\linewidth,height=3.0\testheight]{SAE_plot_v2.pdf}
}
{
\caption{Error curve for the Sentence Auto-Encoder on the Karpathy test split. The error starts increasing after approximately 20 epochs.}
\label{fig:sea_loss}
}
\capbtabbox{
\usebox{\testbox}
}
{
\caption{Bleu-4 evaluation and reconstruction cross-entropy loss for the Sentence Auto-Encoder on the Karpathy test split of the MSCOCO 2014 caption dataset \cite{lin2014microsoft}.}
\label{table:sea_results}
}
\end{floatrow}
\end{figure}
The vanilla model, when the input words are not corrupted, outperforms the denoising one in both metrics. This is expected, as the denoising model is only trained with corrupted input sequences. The losses for the Vanilla and Denoising SAE start from relatively high values of approximately 0.8 and 0.4, respectively, and converge to significantly lower errors of 0.1 and 0.2. For a better analysis, we also compute the Bleu-4 metric for our decoded captions against the 5 ground-truth captions. As reported in Table~\ref{table:sea_results}, both models obtain significantly high Bleu-4 scores.
This indicates that an entire caption can be compressed in a low dimensional vector ($1024$) and can be successfully reconstructed. \begin{table*}[t] \renewcommand*{\arraystretch}{1.13} \resizebox{0.98\textwidth}{!}{ \begin{tabular}{|l|c c c c c c|c c c c c c|} \hline \multirow{2}{*}{Models} & \multicolumn{6}{c|}{cross-entropy loss} & \multicolumn{6}{c|}{cider optimization}\\ & B-1 & B-4 & M & R &C & S & B-1 & B-4 & M & R &C & S \\ \hline\hline LSTM-A \cite{yao2017boosting} & 75.4 & 35.2 & 26.9 & 55.8 & 108.8 & 20.0 & 78.6 & 35.5& 27.3& 56.8& 118.3 & 20.8 \\ RFNet \cite{jiang2018recurrent} & 76.4 & 35.8 & 27.4 & 56.8 &112.5& 20.5 & 79.1& 36.5& 27.7& 57.3 &121.9& 21.2 \\ Up-Down \cite{Anderson2018} & 77.2 & 36.2 & 27.0 & 56.4 & 113.5 & 20.3 & 79.8& 36.3& 27.7& 56.9 &120.1& 21.4 \\ GCN-LSTM \cite{yao2018exploring} & 77.3 & 36.8 & 27.9 & 57.0 &116.3& 20.9 & 80.5 & 38.2& 28.5& 58.3 &127.6& 22.0 \\ AoANet \cite{Huang_2019_ICCV} & 77.4 & 37.2 & 28.4 & 57.5 & 119.8 & 21.3 & 80.2& 38.9& 29.2& 58.8 &129.8 & 22.4 \\ \hline \hline Up-Down$^{\dagger}$ & 75.9 & 36.0 & 27.3 & 56.1 & 113.3 & 20.1 & 79.2 & 36.3 & 27.7 & 57.3 & 120.8 & 21.2 \\ Up-Down$^{\dagger}$ + CLTA + SAE-Reg &\textbf{ 76.7} &\textbf{37.1} & \textbf{28.1} & \textbf{57.1} & \textbf{116.2}& \textbf{21.0} & \textbf{80.2} &\textbf{37.4} &\textbf{ 28.4} & \textbf{58.1} & \textbf{127.4} &\textbf{22.0} \\ \rowcolor{LightCyan} Relative Improvement & +0.8 & +1.1 & +0.8 & +1.0 & +2.9 & +0.9 & +1.0 & +1.1 & +0.7 & +0.8 & +6.6 & +0.8\\ \hline AoANet$^{*}$ & 77.3 & 36.9 & \textbf{28.5} & 57.3 & 118.4 & 21.6 & 80.5 & 39.1 & 29.0 & 58.9 & 128.9 & 22.7 \\ AoANet$^{\dagger}$ + CLTA + SAE-Reg & \textbf{78.1} & \textbf{37.9} & 28.4 & \textbf{57.5} & \textbf{119.9} & \textbf{21.7} & \textbf{80.8} & \textbf{39.3} & \textbf{29.1} & \textbf{59.1} & \textbf{130.1} & \textbf{22.9}\\ \rowcolor{LightCyan} Relative Improvement & +0.8 & +1.0 & -0.1 & +0.2 & +1.5 & +0.1 & +0.3 & +0.2 & +0.1 & +0.2 & +1.2 & +0.2 \\ \hline \end{tabular}} \caption{Image captioning performance on the ``Karpathy'' test split of the MSCOCO 2014 caption dataset \cite{lin2014microsoft} from other state-of-the-art methods and our models. Our Conditional Latent Topic Attention with the SAE regularizer significantly improves across all the metrics using both \textit{cross-entropy loss} and \textit{cider optimization}. \small{$\dagger$ denotes our trained models} and * indicates the results obtained from the publicly available pre-trained model. } \label{table:celoss} \end{table*} \subsection{Image Captioning Results} \label{sec.ic.results} Here we incorporate the proposed CLTA and SAE regularizer to recent image-captioning models including Up-Down~\cite{Anderson2018} and AoANet~\cite{Huang_2019_ICCV} and report their performance on MS-COCO dataset in multiple metrics (see \Cref{table:celoss}). The tables report the original results of these methods from their publications in the top block and the rows in cyan show relative improvement of our models when compared to the baselines. The baseline models are trained for two settings - 1)Up-Down$^{\dagger}$, is the model re-trained on the architecture of Anderson \etal \cite{Anderson2018} and, 2) AoANet$^{\dagger}$, is the Attention-on-Attention model re-trained as in Huang \etal \cite{Huang_2019_ICCV}. Note that for both Up-Down and AoANet, we use the original source code to train them in our own hardware. We replace the ``soft-attention" module in our Up-Down baseline by CLTA directly. 
The AoANet model is based on the powerful Transformer \cite{vaswani2017attention} architecture with multi-head dot-product attention in both the encoder and decoder. For AoANet, we replace the dot-product attention at each head of the AoANet decoder with the CLTA, which results in a multi-head CLTA. The SAE decoder is added as a regularizer on top of these models, as also discussed in \cref{sec.expsetup}. As discussed later in \cref{sec.ablation}, we train all our models with $128$ dimensions for the CLTA and with the Denoising SAE decoder (initialized with $\bh^{\text{last}}$). We evaluate our models with cross-entropy loss training and also by using CIDEr score optimization \cite{rennie2017self} after the cross-entropy pre-training stage (\cref{table:celoss}). For cross-entropy training, our combined approach consistently improves over the baseline performance across all metrics. It is clear from the results that the improvements in CIDEr and Bleu-4 are quite significant, which shows that our approach generates more human-like and accurate sentences. It is interesting to note that AoANet with CLTA and the SAE regularizer also gives consistent improvements despite having a strong transformer language model. We show in \cref{sec.qualitative} the differences between our captions and the captions generated from Up-Down and AoANet. Our method is modular and improves on state-of-the-art models despite the architectural differences. Moreover, the SAE decoder is discarded after training; hence, it brings no additional computational load at test time while providing a significant performance boost. For CIDEr optimization, our models based on Up-Down and AoANet also show significant improvements in all metrics.
\begin{table}[t]
\renewcommand*{\arraystretch}{1.1}
\begin{center}
\resizebox{0.8\textwidth}{!}{
\begin{tabular}{|l|c|c|c|c|c|c|}
\hline
Models & \multicolumn{2}{c|}{50\% data} & \multicolumn{2}{c|}{75\% data} & \multicolumn{2}{c|}{100\% data} \\
%\hline
 & Bleu-4 & CIDEr & Bleu-4 & CIDEr & Bleu-4 & CIDEr \\
\hline
Up-Down & 35.4 & 112.0 & 35.8 & 112.7 & 36.0 & 113.3 \\
\hline
Up-Down+CLTA & 36.3 & 113.7 & 36.3 & 114.5 & 36.5 & 115.0 \\
\hline
Up-Down+CLTA+SAE-Reg & \textbf{36.6} & \textbf{114.8} & \textbf{36.8} & \textbf{115.6} & \textbf{37.1} & \textbf{116.2} \\
\hline \hline
AoANet & 36.6 & 116.1 & 36.8 & 118.1 & 36.9 & 118.4 \\
\hline
AoANet+CLTA & 36.9 & 116.7 & 37.1 & 118.4 & 37.4 & 119.1 \\
\hline
AoANet+CLTA+SAE-Reg & \textbf{37.2} & \textbf{117.5} & \textbf{37.6} & \textbf{118.9} & \textbf{37.9} & \textbf{119.9} \\
\hline
\end{tabular}}
\end{center}
\caption{Evaluation of our CLTA and SAE-Regularizer methods by training on a subset of the MSCOCO ``Karpathy'' training split.}
\label{table:lowdata}
\end{table}
\subsection{Learning to Caption with Less Data}
\label{sec.lessdata}
Table \ref{table:lowdata} evaluates the performance of our proposed models for a subset of the training data, where $x$\% is the percentage of the total data that is used for training. All these subsets of the training samples are chosen randomly. Our CLTA module is trained with $128$ dimensions for the latent topics along with the Denoising SAE regularizer initialized with the last hidden state of the LSTM (Up-Down+CLTA+SAE-Reg). Regardless of the number of training samples, our average improvement with CLTA and the SAE regularizer is around 1\% in Bleu-4 and 2.9\% in CIDEr for the Up-Down model and 0.8\% in Bleu-4 and 1.2\% in CIDEr for the AoANet model.
The significant improvements in Bleu-4 and CIDEr scores with only 50\% and 75\% of the data compared to the baseline validate our proposed methods as a form of rich prior.
\subsection{Qualitative Results}
\label{sec.qualitative}
In \cref{fig:qualitative}, we show examples of images and captions generated by the baselines Up-Down and AoANet along with our proposed methods, CLTA and SAE-Regularizer. The baseline models produce repetitive words and errors while generating captions (\textit{in front of a mirror}, \textit{a dog in the rear view mirror}). Our models correct these mistakes by finding relevant words according to the context and putting them together in a human-like caption format (\textit{a rear view mirror shows a dog} conveys the same meaning as the baseline's redundant \textit{a rear view mirror shows a dog in the rear view mirror}, which our models correct by keeping only the intended meaning). From all the examples shown, we can see that our models overcome the overfitting of current methods by completing a caption with more semantic and syntactic generalization (\eg: \textit{different flavoured donuts} and \textit{several trains on the tracks}).
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{qualitative_new.pdf}
\caption{Examples of generated captions from the baseline Up-Down, AoANet, our proposed CLTA, and our final models with both CLTA and SAE Regularizer.}
\label{fig:qualitative}
\end{figure}
\subsection{Ablation Study}
\label{sec.ablation}
\textbf{Conditional Latent Topic Attention (CLTA).} Table \ref{table:mil_ablation} reports the results for the CLTA module described in \cref{sec.method.att}. Soft-attention is used as a baseline and corresponds to the attention mechanism in \cite{xu2015show}, which is the main attention module in the Up-Down image captioning model by Anderson \etal \cite{Anderson2018}. We replace this attention with the CLTA and evaluate its performance for different numbers of latent dimensions, \ie~topics ($C$). The models trained with latent topic dimensions of $128$, $256$ and $512$ all outperform the baseline significantly. The higher CIDEr and Bleu-4 scores for these latent topics show the model's capability to generate more descriptive and accurate human-like sentences. As we increase the number of latent topics from $128$ to $512$, the CLTA module predicts more relevant keywords, since the additional topics encode more information and hence yield more meaningful captions.
\begin{table}[t]
\centering
\begin{subtable}{.49\textwidth}
\centering%
\raggedright
\begin{tabular}{|c|c|c|c|c|}
\hline
Models & Baseline & \multicolumn{3}{c|}{CLTA}\\
%\hline
 & Soft-Attention & 128 & 256 & 512 \\
\hline
Bleu-4 & 36.0 & 36.5 & 36.6 & \textbf{36.7} \\
\hline
CIDEr & 113.3 & 115.0 & 115.2 & \textbf{115.3} \\
\hline
\end{tabular}
\caption{Evaluation scores for the Up-Down model with soft-attention and ablations of our CLTA module.}\label{table:mil_ablation}
\end{subtable}\hfill
\begin{subtable}{.49\textwidth}
\centering%
\renewcommand*{\arraystretch}{1.1}
\resizebox{0.95\textwidth}{!}{
\begin{tabular}{|l|l|c|c|c|}
\hline
Models & SAE-Decoder & $\bh$ & Bleu-4 & CIDEr \\
\hline\hline
Baseline & No & - & 36.0 & 113.3 \\
\hline
\multirow{4}{*}{CLTA-128} & \multirow{2}{*}{Vanilla} & First & 36.9 & 115.8 \\
 & & Last & 36.8 & 115.3 \\
\cline{2-5}
 & \multirow{2}{*}{Denoising} & First & 36.8 & 116.1 \\
 & & Last & 37.1 & \textbf{116.2} \\
\hline
CLTA-512 & Denoising & Last & \textbf{37.2} & 115.9 \\
\hline
\end{tabular}}
\caption{Additional quantitative evaluation results for different settings of the SAE decoder when trained with the image captioning decoder. $\bh$ denotes the hidden state.}
\label{table:sae_ablation}
\end{subtable}
\caption{Ablative analysis of different settings for (a) our CLTA module and (b) SAE regularizer training.}
\end{table}
\noindent \textbf{Image Captioning Decoder with SAE Regularizer. } \Cref{table:sae_ablation} reports ablations for our full image captioning model (Up-Down with CLTA) and the SAE regularizer. As discussed in \cref{sec.method.sae}, the SAE decoder (parameters defined by $\Theta_D$) is initialized with the hidden state of the image captioning decoder. During training, we test different settings of how the SAE decoder is trained with the image captioning decoder: (1) Vanilla vs.\ Denoising SAE and (2) $\bh^{\text{first}}$ vs.\ $\bh^{\text{last}}$, \ie~whether the SAE decoder is initialized with the first or last hidden state of the LSTM decoder. For all the settings, we fine-tune the parameters of GRU$_\text{D}$ ($\Theta_D$) when trained with the image captioning model (the parameters are initialized with the weights of the pre-trained Vanilla or Denoising SAE decoder). The results in Table \ref{table:sae_ablation} are reported for different combinations of the settings described above, with the CLTA having $128$ and $512$ dimensions in the image captioning decoder. Adding the auxiliary SAE decoder branch significantly improves over the baseline model with CLTA; in the best setting, the Denoising SAE with $\bh^{\text{last}}$ improves the CIDEr and Bleu-4 scores by 1.2 and 0.6, respectively. As the SAE decoder is trained for the task of reconstruction, fine-tuning it to the task of captioning improves the image captioning decoder. Initializing the Vanilla SAE decoder with $\bh^{\text{last}}$ does not provide enough gradient during training: it quickly converges to a low reconstruction error and hence contributes little generalization capacity to the image captioning decoder. As $\bh^{\text{first}}$ is less representative of an entire caption than $\bh^{\text{last}}$, the Vanilla SAE with $\bh^{\text{first}}$ is more helpful in improving the captioning decoder training.
On the other hand, the Denoising SAE, being robust to noisy summary vectors, provides enough training signal to improve the image captioning decoder when initialized with either $\bh^{\text{first}}$ or $\bh^{\text{last}}$, with slightly better Bleu-4 and CIDEr performance for $\bh^{\text{last}}$, since it forces $\bh^{\text{last}}$ to have an accurate low-dimensional representation for the SAE and hence better generalization. It is clear from the results in \cref{table:sae_ablation} that the Denoising SAE with $\bh^{\text{last}}$ helps to generate accurate and generalizable captions. From our experiments, we found that CLTA with $128$ topics and the Denoising SAE (with $\bh^{\text{last}}$) performs better than even its counterpart with $512$ topics. Hence, for all our experiments in \cref{sec.ic.results} and \cref{sec.lessdata}, our topic dimension is $128$ with the Denoising SAE initialized with $\bh^{\text{last}}$.
\section{Conclusion}
\label{sec.conclusion}
In this paper, we have introduced two novel methods for image captioning that exploit prior knowledge and hence help to improve state-of-the-art models even when the data is limited. The first method exploits the association between visual and textual features by learning latent topics via an LDA topic prior and obtains robust attention weights for each image region. The second one is an SAE regularizer that is pre-trained in an autoencoder framework to learn the structure of the captions and is plugged into the image captioning model to regulate its training. Using these modules, we obtain consistent improvements on the two investigated models, the bottom-up top-down and the AoANet image captioning models, indicating the usefulness of our two modules as a strong prior. In future work, we plan to further investigate the potential use of label space structure learning for other challenging vision tasks with limited data and to improve generalization.
\bibliographystyle{splncs04}
\bibliography{egbib}
\end{document}
https://openreview.net/forum?id=f75kMo1dnKD
https://arxiv.org/abs/1911.10082
Please evaluate the paper based on its implementation of prior knowledge into image captioning models, including the strengths and weaknesses of the approach.
Interesting and novel idea #### 1. [Summary] In 2-3 sentences, describe the key ideas, experiments, and their significance. The paper proposes a novel, computationally efficient artificial "strong neuron" for sparse neural networks that combines low-level features through AND and OR operations and a corresponding training strategy. The resulting networks are evaluated on the GTSRB (German traffic sign) and SVHN datasets and show competitive results in both classification error and adversarial stability. #### 2. [Strengths] What are the strengths of the paper? Clearly explain why these aspects of the paper are valuable. * The proposed "strong neurons" are definitely a novel and interesting idea and the motivation is clearly explained through Figure 1. * The method seems effective on the evaluated datasets. * The paper is generally well written and easy to follow. * Code is made available which will hopefully spark interest for research into alternatives to traditional CNNs. #### 3. [Weaknesses] What are the weaknesses of the paper? Clearly explain why these aspects of the paper are weak. * The optimization method is based on a lot of heuristics to simplify the otherwise intractable brute force approach. Although the optimization method seems effective (based on the performance), it is hard to evaluate the possible negative effects of these simplifications on the model performance. * The training strategy not allowing to use mini-batches seems like a major drawback for training on large-scale or high resolution datasets like ImageNet or CityScapes. #### 4. [Overall rating] Paper rating * 8. Top 50% of accepted papers, clear accept #### 5. [Justification of rating] Please explain how the strengths and weaknesses aforementioned were weighed in for the rating. The authors have proposed a novel and interesting alternative to traditional CNNs and have shown its effectiveness. #### 6. [Detailed comments] Additional comments regarding the paper (e.g. typos or other possible improvements you would like to see for the camera-ready version of the paper, if any.) * Combine multiple references into same brackets, i.e. ([1], [2], [3]) should be [1,2,3]. * What do $k$ and $i$ represent in equation (1) and line 125/126? * The paper would be easier to read if a conclusion would be included in the image captions (i.e. what point is the image trying to make). Rating: 8: Top 50% of accepted papers, clear accept Confidence: 3: The reviewer is fairly confident that the evaluation is correct
\documentclass{article}
\usepackage{arxiv}
\usepackage{textcomp}
\usepackage[utf8]{inputenc} %
\usepackage[T1]{fontenc} %
\usepackage{enumerate}
\usepackage{hyperref} %
\usepackage{url} %
\usepackage{booktabs} %
\usepackage{amsfonts} %
\usepackage{amsmath} %
\usepackage{gensymb} %
\usepackage{nicefrac} %
\usepackage{microtype} %
\usepackage{lipsum}
\usepackage{graphicx}
\usepackage{wrapfig}
\usepackage{float}
\usepackage{todonotes}
\graphicspath{ {./images/} }
\usepackage{svg}
\title{On sparse connectivity, adversarial robustness, and a novel model of the artificial neuron}
\author{ Sergey Bochkanov \\ ALGLIB Project \\ Russian Federation \\ \texttt{sergey.bochkanov@alglib.net} \\ }
\begin{document}
\maketitle
\begin{abstract}
Deep neural networks have achieved human-level accuracy on almost all perceptual benchmarks. It is interesting that these advances were made using two ideas that are decades old: (a) an artificial neuron based on a linear summator and (b) SGD training. However, there are important metrics beyond accuracy: computational efficiency and stability against adversarial perturbations. In this paper, we propose two closely connected methods to improve these metrics on contour recognition tasks: (a) a novel model of an artificial neuron, a ``strong neuron,'' with low hardware requirements and inherent robustness against adversarial perturbations and (b) a novel constructive training algorithm that generates sparse networks with $O(1)$ connections per neuron. We demonstrate the feasibility of our approach through experiments on SVHN and GTSRB benchmarks. We achieved an impressive 10x-100x reduction in operations count (10x when compared with other sparsification approaches, 100x when compared with dense networks) and a substantial reduction in hardware requirements (8-bit fixed-point math was used) with no reduction in model accuracy. Superior stability against adversarial perturbations (exceeding that of adversarial training) was achieved without any counteradversarial measures, relying on the robustness of strong neurons alone. We also proved that constituent blocks of our strong neuron are the only activation functions with perfect stability against adversarial attacks.
\end{abstract}
\section{Introduction}
In recent decades, artificial neural networks have achieved impressive results on all computer vision benchmarks. Perhaps the correct phrase would be ``unbelievably good'' because a hypothetical time traveller from the year 2000 would be shocked by today's progress in this area. One could have predicted, relying on Moore's law, the computing power of today's CPUs. However, it would have been impossible to predict the completely unexpected success in the training of large nonconvex multiextremal models --- object recognition, neural text translation, style transfer, and deep fakes. Interestingly, this progress was achieved using two ideas that are decades old: (1) an artificial neuron with a linear summator at its core and (2) stochastic gradient (SGD) training. The combination of these ideas was fortuitous, allowing us to fit any decision function, no matter how complex. As a result, in recent years neural models surpassed human-level accuracy on ImageNet and other benchmarks. However, we believe (and will justify below) that the very properties of summators and SGD impede progress in improving two other important metrics: the sparsity of the neural connections and adversarial stability.
In our work, we propose (1) a novel model of an artificial neuron with inherent robustness against adversarial perturbations and (2) a novel training algorithm that allows us to build extremely sparse networks with $O(1)$ connections per neuron. With these proposals, we achieved state-of-the-art performance and adversarial stability on a number of contour recognition benchmarks. The article is structured as follows. In section \ref{sect:novelneuron}, we will discuss the deficiencies of linear summators and propose a new model of an artificial neuron that we call the "strong neuron." In section \ref{sect:rationale}, we will show that the structure of our strong neuron is motivated by obvious stability requirements and that our strong neuron is the only perfectly stable artificial neuron possible. In section \ref{sect:overview}, we will discuss three blocks of the Contour Engine, a neural architecture that utilizes our proposed strong neurons: a feature detection unit, sparse inference unit, and shallow classifier. The key part of our network --- the sparsely connected geometric inference engine --- and its training algorithm will be discussed in section \ref{sect:sparselayers}. The initial feature detection layer will be briefly discussed in section \ref{sect:featuredetector} (with a more detailed discussion in Appendix B). The shallow classifier that performs post-processing of the network output will be discussed in section \ref{sect:shallowclassifier}. In section \ref{sect:comparison}, we will compare our architecture with similar and related approaches. In section \ref{sect:results}, we will discuss the experimental results. Finally, in section \ref{sect:conclusions}, we present a brief summary of our findings and a few thoughts on future research directions. \section{The novel artificial neuron ("strong neuron")} \label{sect:novelneuron} In this work we propose to replace traditional summator-based artificial neurons with a more powerful one that (a) can separate input images with decision surfaces much more complex than hyperplanes, (b) has better stability properties with respect to the adversarial perturbations of its inputs, (c) inherently favors sparsity of connections and (d) has fairly low hardware requirements (8-bit fixed point hardware is enough in most cases). \begin{figure}[h!] \centering \includegraphics[width=10cm]{figure-1-strongnn.pdf} \caption{A summator-based neuron and a strong neuron} \label{fig:fig1_strongnn} \end{figure} In the following subsections, we discuss specifics of the contour recognition problems, strong and weak points of the summator-based artificial neuron and, finally, our proposal. \subsection{Contour recognition = logical AND + logical OR} Contour recognition is an important subset of computer vision problems. It is deeply connected with properties of our world --- we live in a universe full of localized objects with distinctive edges. Many important problems are contour based: handwritten digit recognition, traffic light detection, traffic sign recognition and number plate recognition. There are also non-contour tasks --- for example, ones that can only be solved by gathering information from many small cues scattered throughout an image (e.g., distinguishing a food store from an electronics store). A degenerate counterexample is a task that involves computing the mean intensity of the image pixels --- its decision function ignores any kind of spatial structure in the image. 
Contour recognition has interesting mathematical properties:
\begin{itemize}
\item It naturally leads to $[0,1]$-bounded activities. Not all computer vision problems have this property (e.g., object counting tasks have unbounded activities).
\item Contours are localized and independent of their surroundings (e.g., a crosswalk sign is a crosswalk sign, regardless of who uses the crosswalk --- a pedestrian, a tank or a bird).
\item An ideal contour detector should have a monotonic response with respect to the full/partial "dimming" of the contour or some of its parts. In other words, if you start to progressively remove parts of the contour, you should observe monotonically decreasing detector responses.
\end{itemize}
Our insight is that contour recognition is essentially a combination of two basic operations on low-level features:
\begin{itemize}
\item logical AND (detection), which decomposes high-level features as combinations of several low-level ones, placed at different locations
\item logical OR (generalization), which allows detectors to be activated by more diverse inputs
\end{itemize}
\begin{figure}[H]
\centering
\includegraphics[width=10cm]{figure-and-or.pdf}
\caption{Pattern recognition: AND + OR}
\label{fig:andor}
\end{figure}
\subsection{What is wrong with the linear summator and SGD?}
A linear summator trained with SGD is an excellent basic building block for a number of reasons:
\begin{itemize}
\item First, it is flexible. It smoothly implements soft-AND/soft-OR logic within a single framework: $AND_{RELU}(A,B)=ReLU(A+B-1)$, $OR_{RELU}(A,B)=ReLU(A+B)$. It may also implement more general decision functions (including ones with negative weights).
\item Second, it is trainable. We usually accept it as a given that one can stack many linear units interleaved with nonlinearities, construct a huge nonlinear nonconvex model and \emph{successfully} fit it with SGD to some complex and noisy decision function.
\end{itemize}
However, it has some deficiencies as well. First, the summator-based implementation of AND/OR logic is very brittle, especially in high-dimensional spaces. The neuron can be set to an arbitrarily high value (or, alternatively, zeroed) by feeding it with many small activities in different channels. Many researchers believe that this is the reason behind the adversarial instability of modern neural networks. We also feel (more intuition than concrete proof) that SGD-based training has limited potential for sparsification. There are multiple sparsification strategies that share one common trait: they start from the same dense network and progressively sparsify it (via $L_1$ regularization or by other means). As a result, the final connection count is typically \emph{a fraction} of the initial connection count $C$: $O(s{\times}C)$, where $s$ is a sparsity coefficient that may be quite small --- 0.1, 0.01 or even less --- although it is asymptotically different from zero. Thus, we believe that sparsity via regularization is inferior to sparsity achieved by other means (explicit channel selection or sparsifying constraints).
\subsection{Our proposal}
We propose to use $f(A,B)=\min(A,B,1)$ to implement AND-logic, to use $f(A,B)=\max(A,B,0)$ to implement OR-logic and to combine both kinds of logic in a novel summator-free artificial neuron --- the "strong neuron" (see Figure \ref{fig:stronger}).
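To make the contrast with the summator-based implementation concrete, the following sketch (an illustrative Python fragment, not the implementation used in our experiments) shows how many small per-channel changes affect summator-based AND/OR elements versus the $\min$/$\max$ elements used by the strong neuron.
\begin{verbatim}
# Illustrative sketch: effect of many small per-channel changes on a
# summator-based element versus the min/max elements of the strong neuron.

def relu(x):
    return max(0.0, x)

def summator_and(xs):
    # ReLU(x1 + ... + xn - (n - 1)): summator-based AND over n channels
    return relu(sum(xs) - (len(xs) - 1))

def summator_or(xs):
    # ReLU(x1 + ... + xn): summator-based OR over n channels
    return relu(sum(xs))

def strong_and(xs):
    return min(min(xs), 1.0)   # min-based AND, clamped at 1

def strong_or(xs):
    return max(max(xs), 0.0)   # max-based OR, floored at 0

n, eps = 16, 0.0625

# All n features fully present, then every channel dimmed by eps.
full, dimmed = [1.0] * n, [1.0 - eps] * n
print(summator_and(full), summator_and(dimmed))  # 1.0 -> 0.0    (change 1.0)
print(strong_and(full), strong_and(dimmed))      # 1.0 -> 0.9375 (change eps)

# No feature present, then every channel raised by eps.
absent, noisy = [0.0] * n, [eps] * n
print(summator_or(absent), summator_or(noisy))   # 0.0 -> 1.0    (change n*eps)
print(strong_or(absent), strong_or(noisy))       # 0.0 -> 0.0625 (change eps)
\end{verbatim}
The summator accumulates the per-channel changes (its sensitivity grows with the number of connections), whereas the $\min$/$\max$ elements respond by at most the size of the largest single-channel change.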
\begin{figure}[ht]
\centering
\includegraphics[width=12cm]{figure-stronger.pdf}
\caption{The strong neuron is better at pattern recognition than the linear one}
\label{fig:stronger}
\end{figure}
We call our artificial neuron "strong" because it has a much more complex decision boundary than the summator-based neuron. The shape of this boundary naturally fits into the pattern recognition framework. Even with binary weights (which allowed us to achieve state-of-the-art results on GTSRB and SVHN benchmarks), standalone strong neurons can separate large chunks of the target class from the rest of the training set. In the somewhat exaggerated example shown in Figure \ref{fig:stronger}, the standalone summator-based neuron cannot distinguish between the full image dimmed by 50\% (reduced contrast) and the image with a completely dropped bottom half. The linearity of the summator means that it is possible to compensate for the lack of activity in one channel by increasing the activity in another one. In contrast, the strong neuron easily and naturally distinguishes between these two images. Another important property of our strong neuron is that its amplification of adversarial perturbations can be precisely controlled. Further, with binary weights the layer of strong neurons becomes robust with respect to adversarial attacks: an $\epsilon$-bounded perturbation of the inputs produces an at most $\epsilon$-bounded perturbation of the outputs. We also propose a novel training algorithm that can train strong neurons with sparse connectivity. This algorithm reformulates the initial nonlinear least squares problem subject to sparsity constraints as a discrete problem with discrete (binary or nonbinary) weights and discrete sparsity constraints, which is efficiently solved by a newly proposed heuristic. The properties of strong neurons and their training algorithm can be used to reduce hardware requirements --- in particular, to avoid expensive floating point units. With binary weights, our strong neurons are summation-free and multiplication-free --- only $min$ and $max$ operations are needed to implement strong neurons. Moreover, the adversarial stability of strong neurons means that they are also resistant to random perturbations from rounding errors (i.e., it is possible to reduce precision from full 32-bit floating point to 8-bit fixed-point without sacrificing inference accuracy).
\section{The motivation behind our model}
\label{sect:rationale}
In this section, we will show that our artificial neuron model is motivated by some fundamental considerations, that is, there are some reasonable and intuitive requirements that are satisfied by our model --- and are not satisfied by summator-based neurons. First, we define an $L_\infty$-nonexpansive function as one that, in the general $N$-dimensional case, satisfies
\begin{align*}
|f(x+{\Delta}x)-f(x)| \leq \max\limits_i|{\Delta}x_i| = {\lVert}{\Delta}x{\rVert}_\infty
\end{align*}
for any $N$-dimensional input perturbation ${\Delta}x$. Similarly, we define an $L_1$-nonexpansive function as one that satisfies
\begin{align*}
|f(x+{\Delta}x)-f(x)| \leq \sum\limits_i|{\Delta}x_i| = {\lVert}{\Delta}x{\rVert}_1
\end{align*}
Clearly, both kinds of nonexpansive functions produce bounded output under bounded input perturbation. However, the $L_\infty$ version provides stricter bounds than the $L_1$ one --- it does not accumulate perturbations.
For a 32x32x1 input image, $L_\infty$-nonexpansivity means that a change of $0.01$ in every pixel changes the output by at most $0.01$, and $L_1$-nonexpansivity means that the output change may be as large as $10.24=1024\times0.01$! Another interesting question is how different kinds of nonexpansivity perform in a multilayer setting. It is easy to see that $L_\infty$-nonexpansivity is preserved under superposition: $f_\infty(f_\infty(x),\dots,f_\infty(x))$ still produces an $\epsilon$-bounded output under an $\epsilon$-bounded input. Conversely, stacking $L_1$-nonexpansive functions does not preserve this property: given that $f_1(x)$ produces an $N\epsilon$-bounded output under an $\epsilon$-bounded input, $f_1(f_1(x),\dots,f_1(x))$ will produce an $N^{2}\epsilon$-bounded output. Human vision --- and any artificial vision system that should be robust --- has a bounded reaction to bounded perturbations of the input image. The bounding ratio is not always 1:1 because sometimes we want to amplify weak signals. Thus, enforcing $L_\infty$-nonexpansivity on the entire classifier may overconstrain it. However, it makes sense to enforce this constraint at least for some parts of the classifier. Our computational results show that stacking nonexpansive layers and performing potentially nonrobust inference only in the last step greatly improves stability against adversarial perturbations. The rationale behind our model of the artificial neuron should be obvious --- making inference as robust as possible. However, we present an even more interesting result --- the fact that our model is the only perfectly stable artificial neuron that implements AND/OR logic. One familiar with the history of artificial neural networks may remember the so-called "XOR problem" --- a problem of fitting the simple four-point dataset below: \begin{center} \begin{tabular}[H]{ c c c } $x_0$ & $x_1$ & $y$ \\ \hline 0 & 0 & 0 \\ 0 & 1 & 1 \\ 1 & 0 & 1 \\ 1 & 1 & 0 \end{tabular} \end{center} This problem is an elegant example of a dataset that cannot be separated by the single linear summator. Inspired by its minimalistic beauty, we formulate two similar problems, which address the accumulation of perturbations in multilayer networks: \paragraph{Theorem 1: $L_\infty$-nonexpansive AND problem.} $\exists!{\enspace}f(x,y)=min(x,y)$ such that the following holds: \begin{enumerate} \item $f(x,y)$ is defined for $x,y \in [0,1]$ \item $f(0,0)=f(0,1)=f(1,0)=0$ \item $f(1,1)=1$ \item $a{\leq}A,\ \ b{\leq}B \implies f(a,b){\leq}f(A,B)$ (monotonicity) \item $|f(a+{\Delta}a,b+{\Delta}b)-f(a,b)| \leq max(|{\Delta}a|,|{\Delta}b|)$ \end{enumerate} \paragraph{Theorem 2: $L_\infty$-nonexpansive OR problem.} $\exists!{\enspace}g(x,y)=max(x,y)$ such that the following holds: \begin{enumerate} \item $g(x,y)$ is defined for $x,y \in [0,1]$ \item $g(0,0)=0$ \item $g(0,1)=g(1,0)=g(1,1)=1$ \item $a{\leq}A,\ \ b{\leq}B \implies g(a,b){\leq}g(A,B)$ (monotonicity) \item $|g(a+{\Delta}a,b+{\Delta}b)-g(a,b)| \leq max(|{\Delta}a|,|{\Delta}b|)$ \end{enumerate} Proofs of theorems 1 and 2 can be found in Appendix A \ref{sect:appendixa}. 
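The conditions above are also easy to spot-check numerically. The sketch below (an illustrative Python fragment, not part of the proofs) samples random points and perturbations in $[0,1]^2$, confirms the $L_\infty$ bound (condition 5) for $\min$ and $\max$, and exhibits a violation for the summator-based soft-AND $ReLU(a+b-1)$, which is only $L_1$-nonexpansive.
\begin{verbatim}
# Spot-check of |f(a+da, b+db) - f(a, b)| <= max(|da|, |db|) for min/max,
# and a counterexample for the summator-based soft-AND ReLU(a + b - 1).
import random

def soft_and(a, b):
    return max(0.0, a + b - 1.0)

def linf_nonexpansive(f, trials=100000):
    for _ in range(trials):
        a, b = random.random(), random.random()
        a2, b2 = random.random(), random.random()  # perturbed point in [0,1]^2
        da, db = a2 - a, b2 - b
        if abs(f(a2, b2) - f(a, b)) > max(abs(da), abs(db)) + 1e-12:
            return False
    return True

print(linf_nonexpansive(min))       # True
print(linf_nonexpansive(max))       # True
print(linf_nonexpansive(soft_and))  # False (with overwhelming probability)

# Explicit counterexample for the soft-AND: per-input changes of 0.5
# change the output by 1.0, i.e. the element is only L1-nonexpansive.
print(soft_and(0.5, 0.5), soft_and(1.0, 1.0))   # 0.0 1.0
\end{verbatim}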
These theorems have the following consequences:
\begin{itemize}
\item Our $min$-based AND and $max$-based OR elements are the only perfectly robust implementations of AND/OR logic
\item It is impossible to implement a robust AND (robust OR) element with just one ReLU neuron --- the best that can be achieved is $L_1$-nonexpansivity, which is not robust
\item It is possible to implement robust AND/OR logic by performing tricks with many traditional ReLU neurons ($max(a,b)=a+ReLU(b-a)$, $max(a,b,c)=max(a,max(b,c))$ and so on), but the result will be just another implementation of our robust AND/OR logic --- although it is much harder to achieve with SGD training
\end{itemize}
\section{Contour Engine: architecture overview}
\label{sect:overview}
In previous sections, we presented our model of the artificial neuron and discussed the motivation behind it, its significance and the differences between the novel neuron and traditional summator-based ones. In this section, we briefly discuss the architecture of our network before moving to more detailed explanations in the following sections.
\begin{figure}[H]
\centering
\includegraphics[width=14cm]{figure-contourengine.pdf}
\caption{Three blocks of the Contour Engine network}
\label{fig:contourengine}
\end{figure}
The three key parts of our neural architecture are:
\begin{itemize}
\item a shallow feature detector
\item sparse contour detection layers
\item a shallow classifier
\end{itemize}
The feature detection layer produces initial low-level features. The contour detection layers (one or two are usually enough) combine them in order to produce medium and high-level features. Finally, a linear or nonlinear classifier post-processes the features produced by the robust contour detection stage. The training algorithm includes three distinct, sequential stages:
\begin{itemize}
\item train (iteratively) or build (noniteratively) a shallow feature detector
\item create sparse contour detection layers in a constructive manner (add layer by layer, create each layer neuron by neuron)
\item train a shallow classifier using activities of the sparse layers as inputs
\end{itemize}
In our experiments, we used noniterative construction of the shallow feature detector --- either analytically constructed edge detection filters or filters obtained via unsupervised training were used (running k-means over image patches \cite{Coates11}). Such an approach makes the input layer independent of the label assignment, which allows us to make some interesting conclusions regarding the asymptotic complexity of image recognition. Our approach to the construction of sparse layers --- adding layers and neurons one by one --- is similar to and was inspired by the Cascade-Correlation network \cite{Fahlman90}. The difference from the original work is that in order to generate new neurons we have to solve a \emph{nonsmooth} nonlinear least squares subproblem with additional $L_0$ sparsity constraints (for comparison, traditional summator-based neurons result in smooth unconstrained nonlinear least squares subproblems). The second important contribution of our work (in addition to the robust artificial neuron) is a heuristic that can efficiently find approximate solutions of such subproblems. This heuristic is discussed in more detail in the next section. Finally, the shallow classifier can be implemented as a linear layer (with SOFTMAX normalization) processing outputs of the sparse block.
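In code, the inference path through these three blocks can be summarized as follows. This is a schematic sketch under our own simplifying assumptions (plain Python lists instead of image tensors, hand-picked connection structures and classifier weights); it is meant only to illustrate the data flow, not to reproduce the actual implementation.
\begin{verbatim}
# Schematic inference path: feature activities -> sparse strong layer ->
# shallow softmax classifier. Binary weights are encoded implicitly by the
# indices selected for each max-element.
import math

def strong_neuron(features, groups):
    # groups: list of index lists; each inner list is one max-element (OR),
    # the outer min combines them (AND), clamped at 1.
    return min(min(max(features[i] for i in group) for group in groups), 1.0)

def sparse_layer(features, neurons):
    return [strong_neuron(features, groups) for groups in neurons]

def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def shallow_classifier(h, weights, biases):
    return softmax([sum(w * x for w, x in zip(ws, h)) + b
                    for ws, b in zip(weights, biases)])

# Toy run: 8 low-level feature activities, 2 strong neurons, 2 classes.
features = [0.9, 0.1, 0.8, 0.0, 0.7, 0.2, 0.0, 0.6]
neurons = [
    [[0, 2], [4]],     # min(max(f0, f2), f4)
    [[1, 3], [5, 6]],  # min(max(f1, f3), max(f5, f6))
]
h = sparse_layer(features, neurons)
print(h)                                            # [0.7, 0.1]
print(shallow_classifier(h, [[2.0, -1.0], [-2.0, 1.0]], [0.0, 0.0]))
\end{verbatim}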
\section{Training sparsely connected layers} \label{sect:sparselayers} This section discusses the core contribution of our work --- the constructive training of sparsely connected strong neurons. \subsection{Issues with SGD training} Based on our experience, online SGD training does not work well for networks with $min$-based activation functions. We failed to achieve good results with SGD --- but maybe someone else will be able to do better. We believe that the extreme nonconvexity of the $min$ function contributed to this failure ($max$ is less of a problem in our opinion), as it makes training much more difficult and prone to stalling in bad local extrema. Our solution to these problems is the constructive training algorithm, which creates networks layer by layer, and each layer is created by adding neurons one by one. This approach was investigated many times by many researchers with mixed results. We again refer here to the work of Fahlman et al. on the Cascade-Correlation network \cite{Fahlman90}, which, in our opinion, was the most successful one and inspired our own research. \subsection{The constructive training algorithm} Training networks composed of highly nonconvex and nonsmooth elements is difficult. Suppose, however, that \emph{somehow} you can train just one such element to fit some target function of your choice. How can it help you train a network? The answer is to build your model incrementally, training new elements to fit the current residual and adding them one by one. \begin{figure}[H] \centering \includegraphics[width=14cm]{figure-train-layers.pdf} \caption{Incremental training procedure} \label{fig:trainlayers} \end{figure} New neurons are trained to fit the current residual of the classifier, and every time you add a neuron to the layer you have to retrain the classifier to obtain new residuals. One may see some similarity to boosting here (we will return to this point later). The algorithm listed above can be easily generalized to multilayer training. One choice to be made is whether or not to maintain shortcut connections to the classifier from the previously learned layer. The training procedure can easily fast-forward information from bottom to top by learning identity mapping if necessary, so it is mostly a matter of taste. \subsection{Training strong neurons} In the subsection above, we reduced the problem of training sparse multilayer networks to training just one neuron with sparse connections: \begin{align*} \min\limits_{w} \sum\limits_{i}\left(N(w,X_i)-y_i\right)^2\ \ \ s.t.\ \ sparsity\ \ constraints \end{align*} where $w$ is a weight vector, $X_i$ is an $i$-th row of the input activities matrix $X$ (activities of the bottom layer at $i$-th image), $N(w,x)$ is a neuron output and $y_i$ is a target to fit (in our case, the current residual). For a three-input strong neuron, the formulation above becomes: \begin{equation} \label{eq:strong_nls_nonsmooth} \begin{split} \min\limits_{w_0, w_1, w_2} &\sum\limits_{i}\left[\min\left(\max\limits_{j}(w_{0,j}{\cdot}X_{i,j})\ ,\ \max\limits_{j}(w_{1,j}{\cdot}X_{i,j})\ ,\ \max\limits_{j}(w_{2,j}{\cdot}X_{i,j})\ ,\ \textbf{1}\right)-y_i\right]^2 s.t. \\ &{\lVert}w_0{\rVert}_0 \leq k\ ,\ \ {\lVert}w_1{\rVert}_0 \leq k\ ,\ \ {\lVert}w_2{\rVert}_0 \leq k \end{split} \end{equation} This problem has no easy solution, even in an unconstrained setting, and $L_0$ constraints are hard to handle with present nonsmooth solvers. 
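For orientation, the outer constructive loop described above can be sketched as follows. This is an illustrative simplification: train_strong_neuron stands in for an approximate solver of (\ref{eq:strong_nls_nonsmooth}), and the retraining of the shallow classifier is reduced to a plain least-squares fit of the newly added output.
\begin{verbatim}
# Schematic outer loop of the constructive procedure (illustrative only).

def constructive_training(X, y, n_neurons, train_strong_neuron):
    # X: list of input-activity vectors, y: list of targets in [0, 1].
    neurons = []                      # strong neurons added so far
    predictions = [0.0] * len(y)      # the empty model predicts zero
    for _ in range(n_neurons):
        residual = [t - p for t, p in zip(y, predictions)]
        neuron = train_strong_neuron(X, residual)  # fit one neuron to residual
        neurons.append(neuron)
        # a real implementation retrains the shallow classifier on the
        # activities of all neurons; here we just least-squares-fit the new one
        outputs = [neuron(x) for x in X]
        scale = (sum(o * r for o, r in zip(outputs, residual))
                 / max(sum(o * o for o in outputs), 1e-12))
        predictions = [p + scale * o for p, o in zip(predictions, outputs)]
    return neurons, predictions

def toy_train_strong_neuron(X, residual):
    # Trivial stand-in so the sketch runs end to end: pick the single input
    # channel best correlated with the residual.
    best = max(range(len(X[0])),
               key=lambda j: abs(sum(x[j] * r for x, r in zip(X, residual))))
    return lambda x: min(x[best], 1.0)

X = [[0.1, 0.9], [0.8, 0.2], [0.9, 0.1], [0.2, 0.7]]
y = [1.0, 0.0, 0.0, 1.0]
neurons, fit = constructive_training(X, y, 2, toy_train_strong_neuron)
print(fit)   # rough approximation of y built from two greedily added neurons
\end{verbatim}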
Our proposal is to replace (\ref{eq:strong_nls_nonsmooth}) with a similar, albeit nonequivalent, form that can be solved more efficiently and robustly. One attractive property of contour recognition problems is that they deal with $[0,1]$-bounded activities, where $0$ stands for the absence of some feature and $1$ stands for the maximum activity possible. Thus, one may reasonably expect that all weights in (\ref{eq:strong_nls_nonsmooth}) will be nonnegative (connections with negative weights simply will not activate the neuron). Furthermore, it makes sense to place further restrictions on the weights --- that is, to choose weights from some short fixed list, for example $\{0,\nicefrac{1}{2},1,1\nicefrac{1}{2},2\}$. Now, instead of a nonconvex, nonsmooth, nonlinear least squares problem we have a combinatorial optimization problem:
\begin{equation}
\label{eq:strong_nls_discrete}
\begin{split}
\min\limits_{w_0, w_1, w_2} &\sum\limits_{i}\left[\min\left(\max\limits_{j}(w_{0,j}{\cdot}X_{i,j})\ ,\ \max\limits_{j}(w_{1,j}{\cdot}X_{i,j})\ ,\ \max\limits_{j}(w_{2,j}{\cdot}X_{i,j})\ ,\ \textbf{1}\right)-y_i\right]^2 s.t. \\
&w_{0,j},w_{1,j},w_{2,j} \in W\\
&{\lVert}w_0{\rVert}_0 \leq k\ ,\ \ {\lVert}w_1{\rVert}_0 \leq k\ ,\ \ {\lVert}w_2{\rVert}_0 \leq k
\end{split}
\end{equation}
where $W$ can be binary $\{0,\ 1\}$ or something more fine-grained, such as $\{0,\ \nicefrac{1}{2},\ 1,\ 1\nicefrac{1}{2},\ 2\}$ or $\{0,\ \nicefrac{1}{4},\ \nicefrac{1}{2},\ \nicefrac{3}{4},\ 1,\ 1\nicefrac{1}{4},\ 1\nicefrac{1}{2},\ 1\nicefrac{3}{4},\ 2\}$. Discrete optimization problems are usually harder to solve precisely than continuous ones. Furthermore, \emph{this} discrete problem cannot be reduced to well-studied mixed-integer LP or mixed-integer QP, so there is likely no other way to solve it except for a brute-force search. However, we do not need an exact solution --- having a good one is sufficient. Our insight is that there is a simple heuristic that can generate good strong neurons without dealing with nonconvex multiextremal optimization problems. The original discrete optimization problem has no constraints except for sparsity. A $max$-element can gather information from any element of the input tensor (see figure below). As a result, we have to evaluate a prohibitively large number of possible connection structures. For instance, for 15 unit-weight connections to elements of a 32x32x20 input tensor we have roughly $10^{58}$ possible geometries.
\begin{figure}[H]
\centering
\includegraphics[width=8cm]{figure-trn0.pdf}
\caption{Totally unconstrained neuron}
\label{fig:trn0}
\end{figure}
It is possible to significantly reduce the configuration count by adding some additional restrictions on the inter-layer connections.
For example, we may impose two additional constraints: \begin{itemize} \item Require that $max$-elements are spatially local (i.e., each element gathers inputs from just one location $(x,y)$ of the input tensor) \item Require that $max$-elements feeding data into the same $min$-element are located close to each other \end{itemize} Alternatively --- for 1x1xD input tensors with no spatial component --- these restrictions can be reformulated as follows: \begin{itemize} \item Require that $max$-elements are correlationally local (i.e., each element gathers inputs from strongly correlated channels) \item Require that $max$-elements feeding data into the same $min$-element are correlated strongly enough \end{itemize} Having such constraints on the connections of the strong neuron significantly reduces the number of configurations that must be evaluated to solve the problem (\ref{eq:strong_nls_discrete}). In our toy example, the configuration count is reduced from $10^{58}$ to just $10^{18}$. \begin{figure}[H] \centering \includegraphics[width=8cm]{figure-trn1.pdf} \caption{Strong neuron with spatial/correlational constraints} \label{fig:trn1} \end{figure} We can achieve a further reduction in search complexity through a two-step search procedure: \begin{itemize} \item Evaluate all possible "seed detectors" --- strong neurons with single-input $max$-elements (AND without OR) \item Expand the best seed found --- sequentially add connections to its $max$-elements \end{itemize} \begin{figure}[H] \centering \includegraphics[width=8cm]{figure-trn2.pdf} \caption{Seed detector --- a strong neuron without $max$-elements} \label{fig:trn2} \end{figure} As a result of this improvement, the search complexity for our 32x32x20 example is reduced from $10^{18}$ to $10^{9}$ neural configurations. However, it is still too costly --- each of these configurations requires a full pass over the entire dataset in order to evaluate the neuron's performance. Further improvements can be achieved by assuming the following: \begin{itemize} \item Good $f_3=\min(A,B,C)$ can be found by extending good $f_2=\min(A,B)$ with the best-suited $C$ \item Good $f_2=\min(A,B)$ can be found by extending good $f_1=A$ with the best-suited $B$ \item Good $f_1=A$ can be found by simply evaluating all possible single-input seed detectors \end{itemize} \begin{figure}[H] \centering \includegraphics[width=8cm]{figure-trn3.pdf} \caption{Growth of seed detectors} \label{fig:trn3} \end{figure} This improvement makes the problem (\ref{eq:strong_nls_discrete}) computationally tractable. For example, the complexity of our toy example is reduced to just $20000$ combinations (compare this with the initial $10^{58}$ estimate). \paragraph{Algorithm outline.} The simplified algorithm (only $\{0,1\}$ weights, input activities are $[0,1]$-bounded) is shown below: \begin{enumerate} \item Setup the initial model (empty with zero output) and a vector of its residuals over the entire dataset. Select a neuron pool size $P$ (a few hundreds works in most cases). \item Competition phase: generate seed detectors and select the winner from the combined pool: \begin{itemize} \item Select a set of $P$ promising input features, "gen-1 seeds," $f_1=A$. Some form of quick and dirty feature selection is usually enough. \item Produce $P$ gen-2 seeds by extending gen-1 seeds $f_1=A$ with such $B$ that $f_2=\min(A,B)$ produces the best linear fit to the current residual. Only the spatial/correlational neighborhood of $f_1$ is evaluated. 
\item Produce $P$ gen-3 seeds by extending gen-2 seeds $f_2=\min(A,B)$ with such $C$ that $f_3=\min(A,B,C)$ produces the best linear fit to the current residual. Only the spatial/correlational neighborhood of $f_1$ is evaluated.
\end{itemize}
\item Generalization phase. Having determined a winning seed detector, sequentially extend its inputs with new $max$-connections:
\begin{itemize}
\item $f = \min(A, B, ...)$
\item $A \xrightarrow{} \max(A)$
\item $\max(A) \xrightarrow{} \max(A,A_2)$
\item $\max(A,A_2) \xrightarrow{} \max(A,A_2,A_3)$ and so on
\end{itemize}
Extending is performed in such a way that the extended detector fits the residual better than its previous version. Only the spatial/correlational neighborhood of $A$ is investigated. The procedure stops after the maximum number of connections is formed (a good value is 5 connections per $max$-element) or when there is no connection that can improve the fit.
\item Add a detector to the model, and update the classifier and residual vector. Stop after the user-specified number of detectors has been formed; otherwise go to step 2.
\end{enumerate}
Although it is not explicitly stated, the algorithm above is a batch algorithm --- it requires us to keep an entire dataset in memory and make a full pass over it in order to generate new strong neurons. The reason for this is that the algorithm has no way of correcting the neuron structure once it has been added to the model --- so, if you train a suboptimal neuron using a subsample of the entire training set, you will be unable to improve it later. The only way to properly generate a neuron is to use all the available data. This property raises the old question of the balance between network stability and plasticity. Networks trained with SGD have high plasticity but zero stability. Plasticity allows us to use SGD --- an algorithm that makes only marginal improvements in the network being trained --- because these small decrements in the loss function will accumulate over time. At the same time, it impedes cheap nondestructive retraining --- once an image is removed from the training set, it is quickly forgotten. In contrast, our algorithm has zero plasticity --- it will not improve the neurons it generated previously --- but perfect stability. The drawback of such an approach is that it is necessary to use an entire training set to generate just one strong neuron, and this job has to be done in the best way possible. The upside is that the network never forgets what it learned before. If your task has changed a bit, you can restart training and add a few new neurons without damaging previously learned ones.
\section{The feature detection layer}
\label{sect:featuredetector}
In this section, we briefly discuss the feature detection layer based on \cite{Coates11} and several proposed improvements. We deem this part of our work less important than the results discussed in the previous section (sparsely connected layers of the robust neurons). Nevertheless, there are several interesting ideas we want to share here. This section provides only a brief summary, with a detailed description presented in Appendix B \ref{sect:appendixb}.
\begin{wrapfigure}{r}{0.5\textwidth}
\includegraphics[width=0.95\linewidth]{figure-filters-chromaluma.pdf}
\caption{Filters learned with our (improved) procedure}
\label{fig:chromaluma}
\end{wrapfigure}
Strong neurons can perform logical inference on low-level features, but they cannot \emph{produce} these features from raw pixel values.
Thus, a separate feature extraction block is essential in order to "prime" the Contour Engine. The purpose of our feature extraction layer is to describe the input image using a rich dictionary of visual words. The description includes features such as oriented edges, more complex shapes, colors and gradients, computed at multiple scales and orientations. The key point of Coates et al. is that one may achieve surprisingly good classification performance by processing images with a single convolutional layer whose filters are trained in an unsupervised manner (k-means on random image patches). The authors also proposed to post-process the raw convolutions with a simple activity sparsification filter $y_{sparse,i} = ReLU\left(y_i - \lambda\cdot mean(y)\right)$. Filters as large as 4x4, 5x5 or 6x6 typically give the best results. Figure \ref{fig:chromaluma} shows an example of the filters found with our training procedure. We extend their results as follows:
\begin{itemize}
\item separate processing of color-agnostic (shape sensitive) and color-based features
\item multiple downsampling levels of the layer outputs (2x and 4x max-pooling are used together)
\item feature detection at multiple scales
\item completeness with respect to image transformations --- multiple versions of the same feature corresponding to positive/negative phases, permutations in color space, rotations and so on
\end{itemize}
\begin{figure}[H]
\centering
\includegraphics[width=10cm]{figure-v1layer.pdf}
\caption{Multiscale multimodal feature extraction layer}
\label{fig:v1layer}
\end{figure}
\section{The shallow classifier layer}
\label{sect:shallowclassifier}
Our proposed strong neurons have unique stability and sparsity properties, but some limitations are also present. They have a rigid piecewise linear output with a fixed slope, but in order to separate image classes one often needs nonlinearities with steep slopes in some places and flat spots in other parts of the feature space. Hence, a separate classifier layer is needed at the top of the network. This classifier layer can be as deep as you wish --- but strong neurons perform data processing extremely well, so all you need in most cases is a single linear summator followed by SOFTMAX. Training such a classifier is straightforward: we only have to sample the activities of the bottom sparsely connected block over the entire dataset and then train a single-layer neural network (a logit model) using the resulting activity matrix as the input. \emph{One important point to note is that the shallow classifier layer is the only place in our model where significant adversarial instability is introduced.} The sparsely connected layers of strong neurons amplify adversarial perturbations in a completely controllable manner (and do not amplify them when binary weights are used). The initial feature detection layer is a single layer of convolutions with bounded coefficients, and thus it has limited adversarial perturbation growth. As a result, any adversary targeting our model will actually target its last layer. In effect, this means that we reduced the problem of building a robust deep classifier to one of building a robust \emph{shallow} classifier. In this work, we will show that, due to the stability of the bottom layers, a simple linear classifier performs well enough in terms of adversarial stability.
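Concretely, this stage amounts to ordinary multinomial logistic regression on the activity matrix of the sparse block. The sketch below is a minimal illustration (plain gradient descent, arbitrary hyperparameters, random toy data); it is not the training code used in our experiments.
\begin{verbatim}
# Minimal sketch of the shallow classifier: a linear summator + SOFTMAX
# trained by gradient descent on the activity matrix of the sparse block.
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def train_shallow_classifier(H, y, n_classes, lr=0.1, epochs=200):
    # H: (n_samples, n_features) activities in [0, 1]; y: integer labels
    n, d = H.shape
    W, b = np.zeros((d, n_classes)), np.zeros(n_classes)
    Y = np.eye(n_classes)[y]                 # one-hot targets
    for _ in range(epochs):
        P = softmax(H @ W + b)               # predicted class probabilities
        G = (P - Y) / n                      # cross-entropy gradient wrt logits
        W -= lr * H.T @ G
        b -= lr * G.sum(axis=0)
    return W, b

# Toy usage with random activities and a label derived from channel 0.
rng = np.random.default_rng(0)
H = rng.random((100, 20))
y = (H[:, 0] > 0.5).astype(int)
W, b = train_shallow_classifier(H, y, n_classes=2)
print((softmax(H @ W + b).argmax(axis=1) == y).mean())   # training accuracy
\end{verbatim}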
\section{Comparison with related approaches} \label{sect:comparison} In this section we discuss several other machine learning algorithms that are related to our work: \begin{itemize} \item Cascade-Correlation \item Boosting \item Forward-Thinking architecture \item Deep neural decision forests \item BagNet \item $L_2$-nonexpansive networks \end{itemize} We also would like to briefly review some present defenses against adversarial attacks: \begin{itemize} \item Adversarial training \item $L_2$-nonexpansive networks \item Convex Outer Adversarial Polytope (Wong Defense) \end{itemize} \paragraph{Cascade-Correlation.} We already mentioned and referred to the Cascade-Correlation architecture. Our network construction algorithm reproduces Fahlman's idea in many respects. Two important differences can be noted: (1) our algorithm trains sparsely connected strong neurons, and (2) unlike CasCor we try to avoid long chains of nonlinearities, which contribute to various instabilities, so our network has a shallow and wide layered structure. \paragraph{Boosting.} There is some similarity between our training algorithm and boosting. Both algorithms expand the model by sequentially adding new units trained to fit the current residual. Thus, one may consider our approach to be a special case of boosting. However, boosting algorithms do not pay attention to the properties of weak classifiers added to the model; that is, any kind of weak classifier will fit into the boosting framework. In contrast, robust strong neurons are essential to our network architecture. \paragraph{Forward-Thinking architecture.} Another interesting approach to discuss is Forward-Thinking architecture (see \cite{forwardthinking}). This architecture is a constructive algorithm that trains the network layer by layer in a greedy manner. Both Forward Thinking and Contour Engine use the same approach to create a layered network structure (different from both modern CNNs and Cascade-Correlation). \paragraph{Deep neural decision forests.} We also note some similarity between Contour Engine and one novel deep learning algorithm: deep neural decision forests \cite{deepneuraldf}. First, there is a correspondence between our strong neurons and shallow decision trees. Indeed, a strong neuron without $max$-units, the seed detector $f(A,B)=\min(A,B)$, is in some sense equivalent to a short decision tree. One may generate such a tree, which returns $1$ for $A>0.5$ and $B>0.5$ and returns 0 otherwise. The difference is that our strong neuron is more powerful than a shallow decision tree. Adding $max$-connections achieves a quadratic/cubic increase in the model capacity with just a linear increase in its size. Conversely, the capacity of the decision tree is linearly proportional to its size. \paragraph{BagNet.} BagNet, an experimental neural architecture \cite{bagnet}, achieves impressive classification results on ImageNet with the bag-of-local-features model. By averaging predictions of the local models (each seeing just $\nicefrac{1}{7}\times\nicefrac{1}{7}$ of the entire image) it is possible to achieve results competitive with those of deep networks. Authors have proposed this architecture as a proof of concept, which demonstrates that we have an incomplete understanding of the underlying mechanisms of computer vision algorithms. For us, this approach is an interesting counterexample to Contour Engine. Our architecture is based on a large-scale spatial structure, whereas BagNet works with scattered small-scale hints. 
\paragraph{Adversarial training.} A simple yet universal defense is to train the network using both original and adversarial examples\cite{advtrn}. These additional examples make the inferences more robust by explicitly telling the network about the expected behavior under adversarial perturbation. In theory, this may guide the network so that it will implement internally robust AND/OR logic (indeed, it is possible to implement $max$/$min$ with ReLU units). The benefit of this approach is that it works for any kind of model --- all that is needed is a training code and a code that generates adversarial examples. \paragraph{$L_2$-nonexpansive networks.} This approach \cite{l2nonexpansive} is a class of neural networks in which "a unit amount of change in the inputs causes at most a unit amount of change in the outputs or any of the internal layers." Due to the utilization of traditional summators, the authors were unable to achieve $L_\infty$-nonexpansivity, so they had to resort to weaker $L_2$-nonexpansivity (although it is still much better than $L_1$-nonexpansivity). \paragraph{Convex Outer Adversarial Polytope (Wong Defense).} This approach \cite{wongdefense} models network behavior under adversarial perturbation of its inputs. An input image is provided along with per-component bounds of adversarial perturbation. Wong's algorithm models the perturbation of activities of internal units and provides differentiable error bounds for network outputs. It thus enables the use of straightforward SGD training on error bounds in order to reduce errors under adversarial perturbation. \section{Experimental results} \label{sect:results} \subsection{Datasets} We tested Contour Engine on two popular computer vision benchmarks: GTSRB and SVHN. \paragraph{German Traffic Sign Recognition Benchmark.} This benchmark is a multi-class single-image classification challenge \cite{gtsrb}. The dataset has more than 50000 images of centered traffic signs belonging to 43 classes. The classes are unequally sampled --- some "popular" traffic signs have many more instances than rare ones. The images in the dataset were captured in the wild under slightly (sometimes wildly) different orientations, lighting conditions, image sizes (bounding rectangles from 18x18 pixels to 64x64 and larger) and amounts of motion blur. \begin{figure}[H] \centering \includegraphics[width=5cm]{gtsrb.png} \caption{GTSRB dataset} \label{fig:gtsrb} \end{figure} We applied the following post-processing: we resized all images to standard 32x32 resolution, adding padding when necessary, and standardized brightness (mean 0.5). In numerical experiments, affine distortions were used to augment the dataset. \paragraph{Street View House Numbers.} This dataset is a well-known 10-class digit recognition problem \cite{svhn}. It has 630420 training and test images belonging to 10 classes. The image size is 32x32 in all cases. \begin{figure}[H] \centering \includegraphics[width=5cm]{svhn.jpeg} \caption{SVHN dataset} \label{fig:svhn} \end{figure} We normalized images in the dataset by making white the dominant color --- images with a majority of black pixels were inverted. No augmentation was applied to the images. \subsection{Software} Our neural architecture is quite nonstandard, and the training algorithms are even more nonstandard. Many machine learning frameworks can perform inferences on models like ours (the framework has to be flexible enough to allow scattered operations on tensors; in particular, TensorFlow can do this). 
However, no present framework can \emph{train} such models. Thus, we had to write the training and inference code in C++ from scratch. This code --- an experimental machine learning framework with several examples --- can be downloaded from \url{https://www.alglib.net/strongnet/}. \subsection{Network architecture} In this work, we evaluated a multi-column architecture with a shared unsupervised feature detection layer and separate supervised classification columns (see Figure \ref{fig:resultsnetwork}). The $K$-th column is individually trained to separate class $K$ from the rest of the dataset. \begin{figure}[htp] \centering \includegraphics[width=7cm]{figure-results-network.pdf} \caption{Network structure} \label{fig:resultsnetwork} \end{figure} The feature detection layer has two separate blocks: contour (color-agnostic) features and color-based ones. The contour filter bank has a capacity equal to 50 different filters. These filters have a size of 6x6, which allows the detection of medium complexity shapes; that is, ones more complex than simple edges. Each of these filters produces two features --- one corresponding to the "positive" phase and one to the "negative" phase --- so the total channel count is 100. The color filter bank is much smaller and stores just 10 filters, each having a size of 4x4, which is adequate to detect uniformly colored patches. In both cases (contour and color), we perform multiscale feature analysis, processing 32x32 (scale 0) and downsampled 16x16 (scale 1) versions of the image. The contour block requires 4.6 MFLOP to be computed, while the color block needs 0.4 MFLOP. Thus, the total amount of floating point operations required to perform initial feature detection is \textbf{5.0 MFLOP}. Classification columns are composed of our novel strong neurons grouped into two sparsely connected "strong layers" followed by a single output sigmoid neuron (linear summator + logistic function). Shortcut connections are present between all strong layers and outputs. In our experiments, columns with widths equal to just 200 strong neurons were powerful enough to separate GTSRB classes. Such columns needed roughly \textbf{0.007 MFLOP} (7000 FLOP). The output of the $k$-th column is the probability of the image belonging to class $K$. Due to logistic model properties, this probability is usually well calibrated. However, it is important to remember that different columns are trained separately, so their outputs do not have to sum to one. \subsection{Results: low-cost inference on GTSRB} The GTSRB dataset has 43 classes, so our network has a shared feature detection layer and 43 class-specific sparse columns. This means that the inference cost of our model is \textbf{$5.0+43\times0.007=5.3$ MFLOP}. The test set error of our model on this dataset is \textbf{1.6\%}. \begin{figure}[H] \centering \includegraphics[width=10cm]{figure-gtsrb-results.pdf} \caption{GTSRB: accuracy vs inference cost} \label{fig:gtsrbresults} \end{figure} The table above compares Contour Engine with Targeted Kernel Networks \cite{targetedkernelnets} and pruning \cite{yiming}. Targeted Kernel Networks (TSTN and STN rows) reduce computational complexity by dropping some of the inner convolutions using attentional modulation. They may be regarded as a type of spatial pruning. The work by Yiming Hu et al. involved channel-based pruning performed using a genetic algorithm. Contour Engine outperforms both approaches by an order of magnitude. 
One more interesting point is that the $5.3$ MFLOP required by our model are mostly spent in the unsupervised part of the network. Only $0.3$ MFLOP ($0.007$ MFLOP per class) are performed in the supervised part of our network. Most of the time is spent on unsupervised preprocessing, which consumes about $95\%$ of the computational budget. This result suggests that the actual complexity of contour-based classification is on the kiloflop rather than on the megaflop or gigaflop scale.
\subsection{Results: low-cost inference on SVHN}
The Street View House Numbers dataset has 10 classes, so our network uses a shared feature detection layer similar to the one employed on GTSRB with 10 class-specific sparse columns. We note here that in this task color does not carry any classification-related information (e.g., the green-vs-blue edge is important because it is an edge, not because it is green or blue), so we dropped the color part of the feature extraction layer. The inference cost for our model was \textbf{4.8 MFLOP}, and the test set error was \textbf{4.8\%}.
\begin{figure}[H]
\centering
\includegraphics[width=10cm]{figure-svhn-results.pdf}
\caption{SVHN: accuracy vs inference cost}
\label{fig:svhnresults}
\end{figure}
For this dataset, we compare our network with the pruning approach by Yiming Hu et al. (again) and with Capsule Networks (\cite{capsnets}, \cite{targetedkernelnets}). Again, Contour Engine outperforms its competitors by an order of magnitude.
\subsection{Results: improved adversarial stability}
We tested the adversarial stability of the Contour Engine network trained on the SVHN dataset. We used a powerful PGD attack (iterated FGSM with 20 iterations and backtracking line search) with the perturbation $L_\infty$-norm bounded by 0.01, 0.02 and 0.03.
\begin{figure}[H]
\centering
\includegraphics[width=10cm]{figure-adversarial-results.pdf}
\caption{SVHN: adversarial attack success rate}
\label{fig:adversarialresults}
\end{figure}
The table above compares the attack success rate for Contour Engine with reference values from three independent works (\cite{wongdefense}, \cite{atda}, \cite{iat}). It can be seen that an unprotected network can be successfully attacked in 83\% of cases with a perturbation as small as 0.01. Different kinds of adversarial protection (when used on traditional summator-based networks) significantly reduce the attack success rate. However, in all cases Contour Engine outperforms these results without any special counter-adversarial measures.
\subsection{Results: hardware requirements}
Our neural network has fairly low hardware requirements. We already mentioned its low floating point operation count, but another interesting property is that it is easy to switch from floating point operations to fixed point ones. Stability with respect to adversarial perturbations (maliciously targeted ones) implies stability with respect to perturbations arising from rounding (untargeted ones) --- thus one may expect graceful degradation with a progressive decrease in mantissa length. Different parts of the network have different hardware requirements with respect to working accuracy:
\paragraph{Feature detection layer.} This part of the network is just a single layer of convolutions with bounded coefficients, performed on $[0,1]$-bounded inputs, producing $[0,1]$-bounded outputs. Thus, it can be efficiently implemented with no drop in the inference quality using just 8-bit fixed point inputs and outputs and 8-bit unsigned integer multiplicator/summator units with 24-bit accumulators.
\paragraph{Strong layers.} This part of the network can also be implemented with 8-bit fixed-point units. With binary weights, this part of the network is multiplication-free and summation-free, so only 8-bit min and max units are needed. With non-binary weights, strong neurons may need multiplication by fixed-point numbers with short mantissas (e.g., $1\nicefrac{1}{2}$), which may be performed with just a few shifts/adds.
\paragraph{Shallow classifier.} This part of the network is just a single summator with bounded coefficients. Hence, it may work well with 8-bit fixed point inputs and outputs, 8-bit unsigned integer multiplicator units and 24-bit internal accumulators.
In fact, our model's accuracy and stability results were obtained with 7-bit precision to store the activity matrices. We had to utilize this reduced precision due to the immense memory requirements of some parts of our training algorithm. However, this also allowed us to experimentally verify our claims about low hardware requirements. Experimenting with a 4-bit version of our network also looks promising.
\section{Summary}
\label{sect:conclusions}
In this work, we have proposed a novel model of the artificial neuron --- the strong neuron --- which can separate classes with decision boundaries more complex than hyperplanes and which is resistant to adversarial perturbations of its inputs. We proved that our proposal is a fundamental and well-motivated change and that the constituent elements of our strong neuron, $min$/$max$ units, are the only robust implementations of AND/OR logic. We also proposed a novel training algorithm that can generate sparse networks with $O(1)$ connections per strong neuron, a result that far surpasses any present advances in neural network sparsification. State-of-the-art efficiency (inference cost) is achieved on GTSRB and SVHN benchmarks. We also achieved state-of-the-art results in terms of stability against adversarial attacks on SVHN --- without any kind of adversarial training --- which surpassed much more sophisticated defenses. Further, our network has low hardware requirements and gracefully degrades when numerical precision is decreased (we managed to achieve the results listed above using just 8-bit fixed point math for the unit activities). One more interesting result is related to our decision to separate unsupervised feature detection and supervised classification. We found that Contour Engine spends most of the inference time in the unsupervised preprocessor --- less than 10,000 FLOP per class is used by the supervised part of the network (the part composed of strong neurons). This result suggests that contour recognition is much easier than was previously thought. Once initial unsupervised image preprocessing is done, centered contours can be recognized with just a few kiloflops. Finally, we want to highlight future directions of our work:
\begin{itemize}
\item \textbf{Convolutional training.} Our proof-of-concept network is nonconvolutional, which limits its applicability to well-centered image recognition problems, such as MNIST, GTSRB, and SVHN. The next step is to implement computationally feasible convolutional training.
\item \textbf{Better adversarial stability.} We already achieved state-of-the-art stability with a simple linear output. However, we believe that further improvements are possible with a better shallow classifier layer (output layer).
This layer is the only adversarially unstable part of the network --- we managed to reduce the problem of building a \emph{deep} and robust network to one of building a \emph{shallow} and robust one. One promising robust classifier model is a maxout~\cite{maxout} neuron with an $L_1$ constraint on internal linear subunits.
\item \textbf{Transfer learning and fast retraining.} The filters of the unsupervised feature detection layer look quite generic (edges, bars, blobs, arcs), which strongly suggests that this layer could be reused across multiple pattern detection problems. Thus, one obvious direction of research involves the transfer properties of the feature detection layer. Furthermore, we feel that the strong neurons generated by the sparse training algorithm may also allow some limited reuse. When combined with the extremely cheap inference performed by strong neurons, this opens the door to pretrained "universal columns," which contain strong neurons capable of detecting a wide range of "popular contours."
\end{itemize}
\bibliographystyle{alpha}
\begin{thebibliography}{1}
\bibitem[Coates11]{Coates11} Coates, A. and Lee, H. and Ng, A.Y.
\newblock "An Analysis of Single-Layer Networks in Unsupervised Feature Learning".
\newblock Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, PMLR 15:215-223, 2011.
\bibitem[Fahlman90]{Fahlman90} Scott E. Fahlman, Christian Lebiere.
\newblock "The cascade-correlation learning architecture".
\newblock Advances in Neural Information Processing Systems 2, June 1990, pages 524–532.
\bibitem[Hettinger17]{forwardthinking} Chris Hettinger, Tanner Christensen, Ben Ehlert, Jeffrey Humpherys, Tyler Jarvis, Sean Wade.
\newblock "Forward Thinking: Building and Training Neural Networks One Layer at a Time".
\newblock arXiv:1706.02480.
\bibitem[Kontschieder15]{deepneuraldf} Peter Kontschieder, Madalina Fiterau, Antonio Criminisi, Samuel Rota Bulo.
\newblock "Deep Neural Decision Forests".
\newblock Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence (IJCAI-16), 2016.
\bibitem[Brendel19]{bagnet} Wieland Brendel, Matthias Bethge.
\newblock "Approximating CNNs with Bag-of-local-Features models works surprisingly well on ImageNet".
\newblock arXiv:1904.00760.
\bibitem[Qian18]{l2nonexpansive} Haifeng Qian, Mark N. Wegman.
\newblock "L2-Nonexpansive Neural Networks".
\newblock arXiv:1802.07896.
\bibitem[Wong17]{wongdefense} Eric Wong, J. Zico Kolter.
\newblock "Provable defenses against adversarial examples via the convex outer adversarial polytope".
\newblock arXiv:1711.00851.
\bibitem[Goodfellow14]{advtrn} Ian J. Goodfellow, Jonathon Shlens, Christian Szegedy.
\newblock "Explaining and Harnessing Adversarial Examples".
\newblock arXiv:1412.6572.
\bibitem[Stallkamp12]{gtsrb} J. Stallkamp, M. Schlipsing, J. Salmen, C. Igel.
\newblock "Man vs. computer: Benchmarking machine learning algorithms for traffic sign recognition".
\newblock Neural Networks Special Issue.
\bibitem[Netzer11]{svhn} Yuval Netzer, Tao Wang, Adam Coates, Alessandro Bissacco, Bo Wu, Andrew Y. Ng.
\newblock "Reading Digits in Natural Images with Unsupervised Feature Learning".
\newblock NIPS Workshop on Deep Learning and Unsupervised Feature Learning 2011.
\bibitem[Kashyap18]{targetedkernelnets} Kashyap Chitta.
\newblock "Targeted Kernel Networks: Faster Convolutions with Attentive Regularization".
\newblock Computer Vision – ECCV 2018 Workshops. ECCV 2018. Lecture Notes in Computer Science, vol 11132. Springer, Cham.
\bibitem[Yiming18]{yiming} Yiming Hu, Siyang Sun, Jianquan Li, Xingang Wang, Qingyi Gu. \newblock "A novel channel pruning method for deep neural network compression". \newblock arXiv:1805.11394 \bibitem[Sabour17]{capsnets} Sara Sabour, Nicholas Frosst, Geoffrey E Hinton. \newblock "Dynamic Routing Between Capsules". \newblock arXiv:1710.09829. \bibitem[Song19]{atda} Chuanbiao Song, Kun He, Liwei Wang, John E. Hopcroft. \newblock "Improving the generalization of adversarial training with domain adaptation". \newblock arXiv:1810.00740. \bibitem[Lamb19]{iat} Alex Lamb, Vikas Verma, Juho Kannala, Yoshua Bengio. \newblock "Interpolated Adversarial Training: Achieving Robust Neural Networks without Sacrificing Too Much Accuracy". \newblock arXiv:1906.06784. \bibitem[Goodfellow19]{maxout} Ian J. Goodfellow, David Warde-Farley, Mehdi Mirza, Aaron Courville, Yoshua Bengio. \newblock "Maxout Networks". \newblock arXiv:1302.4389. \bibitem[Shang16]{crelu} Wenling Shang, Kihyuk Sohn, Diogo Almeida, Honglak Lee. \newblock "Understanding and Improving Convolutional Neural Networks via Concatenated Rectified Linear Units". \newblock arXiv:1603.05201. \bibitem[Blot16]{maxmin} Michael Blot, Matthieu Cord, Nicolas Thome. \newblock "Maxmin convolutional neural networks for image classification". \newblock arXiv:1610.07882. \end{thebibliography} \newpage \section{Appendix A: proofs of theorems 1 and 2} \label{sect:appendixa} \paragraph{Theorem 1: $L_\infty$-nonexpansive AND problem.} $\exists!{\enspace}f(x,y)=min(x,y)$ such that following holds: \begin{enumerate} \item[C1] $f(x,y)$ is defined for $x,y \in [0,1]$ \item[C2] $f(0,0)=f(0,1)=f(1,0)=0$ \item[C3] $f(1,1)=1$ \item[C4] $a{\leq}A,\ \ b{\leq}B \implies f(a,b){\leq}f(A,B)$ (monotonicity) \item[C5] $|f(a+{\Delta}a,b+{\Delta}b)-f(a,b)| \leq max(|{\Delta}a|,|{\Delta}b|)$ \end{enumerate} \paragraph{Proof.} We will prove Theorem 1 by demonstrating that conditions C1...C5 constrain $f(x,y)$ in such a way that the only possible solution is $f(x,y)=min(x,y)$. The monotonicity condition C4 combined with C2 means that \begin{equation} \label{eq:f0y} {\forall}\ y{\in}[0,1]\ \ \ f(0,y)=0 \end{equation} Condition C5, when combined with C2 and C3, means that ${\forall}y{\in}[0,1]\ \ f(y,y)=y$. Indeed, C5 combined with C2 means that $|f(y,y)-f(0,0)| \leq |y|\ \implies\ f(y,y){\leq}y$. Similarly, C5 combined with C3 means that $|f(y,y)-f(1,1)| \leq |1-y|\ \implies\ f(y,y){\geq}y$. As result, we have \begin{equation} \label{eq:fyy} {\forall}\ y{\in}[0,1]\ \ \ f(y,y)=y \end{equation} Similarly to the previous paragraph, condition C5 combined with \ref{eq:f0y} and \ref{eq:fyy} constrains function values between $f(0,y)$ and $f(y,y)$ to \begin{align*} {\forall}\ 0{\leq}x{\leq}y{\leq}1\ \ \ f(x,y)=x=min(x,y) \end{align*} Due to the symmetry of the problem, it is obvious that the following also holds: \begin{align*} {\forall}\ 0{\leq}y{\leq}x{\leq}1\ \ \ f(x,y)=y=min(x,y) \end{align*} So, finally, \begin{align*} {\forall}x,y\in[0,1]\ \ \ f(x,y)=min(x,y) \end{align*} which has been shown. 
\paragraph{Theorem 2: $L_\infty$-nonexpansive OR problem.} $\exists!{\enspace}g(x,y)=max(x,y)$ such that following holds: \begin{enumerate} \item[C1] $g(x,y)$ is defined for $x,y \in [0,1]$ \item[C2] $g(0,0)=0$ \item[C3] $g(0,1)=g(1,0)=g(1,1)=1$ \item[C4] $a{\leq}A,\ \ b{\leq}B \implies g(a,b){\leq}g(A,B)$ (monotonicity) \item[C5] $|g(a+{\Delta}a,b+{\Delta}b)-g(a,b)| \leq max(|{\Delta}a|,|{\Delta}b|)$ \end{enumerate} \paragraph{Proof.} Similarly to the previous proof, we will prove Theorem 2 by demonstrating that conditions C1...C5 constrain $g(x,y)$ in such a way that the only possible solution is $g(x,y)=max(x,y)$. C5 combined with C2 and C3 constrains $g(x,y)$ along $x=y$: $g(0,0)=0 \implies g(y,y) \leq y$ and $g(1,1)=1 \implies g(y,y) \geq y$, so finally we have \begin{equation} \label{eq:gyy} \forall\ y\in[0,1]\ \ \ g(y,y)=y \end{equation} Similarly, for $g(0,y)$ from the nonexpansivity constraint C5 combined with boundary values $g(0,0)=0$ and $g(0,1)=1$, it immediately follows that \begin{equation} \label{eq:g0y} \forall\ y\in[0,1]\ \ \ g(0,y)=y \end{equation} and, due to monotonicity constraint C4, from \ref{eq:gyy} and \ref{eq:g0y} we get \begin{align*} \forall\ 0 \leq x \leq y \leq 1\ \ \ g(x,y)=y=max(x,y) \end{align*} Due to the obvious symmetry, it is easy to prove that \begin{align*} \forall\ x,y\in[0,1]\ \ \ g(x,y)=max(x,y) \end{align*} which has been shown. \section{Appendix B. The feature detection layer} \label{sect:appendixb} In this section we discuss a feature detection layer based on \cite{Coates11} with several proposed improvements. There are several interesting ideas we want to share here, so this section is quite long. Nevertheless, we deem this part of our work as less important than the results on strong neurons, so we moved it to the end of the article. Modern convolutional networks tend to have many layers with filters as small as 3x3. One well-known pattern is to have two layers with 3x3 convolutions followed by a max-pooling layer. Almost all architectures lack a clear distinction between feature extraction and subsequent geometric inference --- both tasks are performed using the same sequence of standard building blocks. Due to the quadratic dependence between the network width and weights count, preference is given to deep and narrow networks --- making the network 2x deeper and 2x narrower results in a 2x decrease in computing power. In contrast, our neural architecture has sparse layers with $O(1)$ connections per neuron. It thus inherently favors shallow and wide networks. Another difference from traditional architectures is that our strong neurons can perform logical inferences on low-level features, although they cannot \emph{produce} these features from raw pixel values. Thus, a separate feature extraction block is essential in order to "prime" Contour Engine. The purpose of our feature extraction layer is to describe an input image using a rich dictionary of visual words. The description includes features such as oriented edges, more complex shapes, colors and gradients, computed at multiple scales and orientations. The following subsections discuss our implementation of the feature extraction layer, starting from the very basic setup and progressively improving it. 
\subsection{The basic structure}

The basic implementation of the feature extraction unit is a single layer of 4x4 and/or 6x6 convolutions followed by sparsification and normalization layers (see \cite{Coates11}):
\begin{align*}
y_{raw}[i,j,k] &= ReLU\left(CONV(W,x)\right) \\
y_{sparse}[i,j,k] &= ReLU\left(y_{raw}[i,j,k] - \lambda\underset{k}{MEAN}(y_{raw}[i,j,k])\right) \\
y_{nrm}[i,j,k] &= \frac{y_{sparse}[i,j,k]}{\epsilon+\max\limits_{i,j,k} y_{sparse}[i,j,k]}
\end{align*}
where $W$ is a $K{\times}3{\times}M{\times}M$ tensor (here $K$ is the output filter count, $M$ is the convolution size and $3$ stands for the RGB input channels) and $\lambda$ is a tunable sparsification parameter. The typical number of filters in a feature bank ranges from 8 (just edge detectors) to 100 (medium-complexity shapes).

We experimented with different methods of generating feature banks and found that training them in a completely unsupervised manner (see \cite{Coates11}) tends to give good results with interesting generalization properties, which will be discussed later.

\subsection{Separating contour and color}

One improvement we propose is to separate contour-based and color-based features. We require the former to be color-agnostic (the feature detector output does not change under permutation of RGB channels) and the latter to be lightness-agnostic (the feature detector output does not change with the addition/subtraction of gray color).

There are several reasons behind this proposal. First, it is well known that the human visual cortex (the best universal visual processor known so far) performs separate processing of contour and color signals in the first regions of the ventral stream, also known as the "what pathway." We want to replicate this separation here because our work was partially inspired by the unique properties of the human visual system. Second, having such orthogonality in our model accelerates training in later stages (creating sparse connectivity) because it greatly reduces the number of possible connections in the network. Finally, such separation makes our network more controllable --- we can easily measure the amount of information provided by edges and by color, and easily introduce some invariants into the model (e.g., invariance with respect to various color and lightness corrections).

Color-agnostic processing can be implemented by requiring that components of the tensor $W$ corresponding to different RGB channels have the same value. However, we prefer to explicitly replace the $K{\times}3{\times}M{\times}M$ weight tensor $W$ with the $K{\times}M{\times}M$ tensor $W_L$:
\begin{math}
y_{L,raw}[i,j,k] = ReLU\left(CONV(W_L,\frac{1}{3}\left(x_R+x_G+x_B\right))\right)
\end{math}

One more normalization we introduce is the requirement that the feature detector output be invariant with respect to lightness shift (addition/removal of gray). Mathematically, this condition means that we require tensor elements within each filter to sum to zero:
\begin{math}
{\forall}k:\quad \sum\limits_{i,j}W_L[k,i,j] = 0
\end{math}

One possible way to enforce such requirements is to tweak the data fed to the "k-means over image patches" procedure proposed by Coates et al. Color-agnostic filters can be learned by replacing colors with monochrome values prior to running k-means. The second requirement --- invariance with respect to lightness shift --- can be enforced by subtracting the mean lightness from the image patches.
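For concreteness, the NumPy sketch below reproduces the three steps above (convolution, sparsification, normalization) for the color-agnostic branch. It is an illustrative toy implementation rather than the actual ALGLIB code; the helper name \texttt{conv2d\_valid}, the filter shapes and all constants are assumptions made purely for the example.
\begin{verbatim}
import numpy as np

def conv2d_valid(img, filters):
    # img: (H, W); filters: (K, M, M); naive "valid" correlation, one map per filter
    K, M, _ = filters.shape
    H, W = img.shape
    out = np.zeros((H - M + 1, W - M + 1, K))
    for k in range(K):
        for i in range(H - M + 1):
            for j in range(W - M + 1):
                out[i, j, k] = np.sum(img[i:i+M, j:j+M] * filters[k])
    return out

def feature_layer(rgb, W_L, lam=0.5, eps=1e-6):
    # Color-agnostic branch: average the RGB channels, convolve with zero-mean
    # filters, then apply ReLU, sparsification and global normalization.
    gray = rgb.mean(axis=2)                               # (1/3)(x_R + x_G + x_B)
    y_raw = np.maximum(conv2d_valid(gray, W_L), 0.0)      # ReLU(CONV(W_L, .))
    y_sparse = np.maximum(
        y_raw - lam * y_raw.mean(axis=2, keepdims=True),  # subtract lambda * MEAN_k
        0.0)
    return y_sparse / (eps + y_sparse.max())              # divide by max over i,j,k

# Toy usage: 8 random zero-mean 6x6 filters applied to a random 32x32 RGB image
rng = np.random.default_rng(0)
W_L = rng.normal(size=(8, 6, 6))
W_L -= W_L.mean(axis=(1, 2), keepdims=True)               # enforce sum-to-zero filters
print(feature_layer(rng.uniform(size=(32, 32, 3)), W_L).shape)   # (27, 27, 8)
\end{verbatim}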
Similarly, color-based lightness-agnostic processing can be implemented by requiring that components of the weight tensor $W$ corresponding to different RGB channels sum to zero (invariance with respect to lightness shift is implicitly enforced by this constraint):
\begin{math}
\forall i,j,k:\quad W_C[k,0,i,j]+W_C[k,1,i,j]+W_C[k,2,i,j] = 0
\end{math}

As with color-agnostic filters, color-based ones can be learned by manipulating the data fed to the Coates procedure --- one simply subtracts the lightness value from each pixel. The following filters were learned by running this procedure on the CIFAR dataset:
\begin{figure}[H]
\centering
\includegraphics[width=10cm]{figure-filters-chromaluma.pdf}
\caption{Chroma and luma filters}
\label{fig:appbfilters}
\end{figure}

\subsection{Downsampling (max-pooling) layer}

The max-pooling layer is well known for its ability to simultaneously reduce the dimensionality of the data and improve its linear separability (the latter is achieved due to the introduction of shift-invariance). We again refer to \cite{Coates11} for some interesting quantitative results. In this subsection, we focus on the max-pooling layer that performs max-downsampling of the input tensor (pooling with a filter width equal to the stride).

The question is: which downsampling factor is the best one? Numerical experiments showed that, for 4x4- and 6x6-sized features, good results could be achieved with 2x downsampling. This provides a good balance between generalization and loss of essential spatial information. While 4x downsampling loses too much information to be used alone, it can supplement 2x-downsampled activities if both are used together.

\subsection{Feature detection at multiple scales}

Although the initial formulation covers just small 4x4 or 6x6 image patches, one may reasonably want to have a multiscale description that includes small (roughly 4x4 pixels), medium (8x8) and large (16x16) features. Traditional convolutional architectures do not explicitly form such multiscale representations. Since the beginning, the dominant approach has been to stack standard building blocks and allow SGD to do the rest. We, however, aim to develop an architecture that performs some standardized kinds of processing (feature extraction, spatial pooling, multiscale processing) in a standardized manner, with a limited number of learned, controllable nonlinearities.

\subsection{Introducing completeness}

Now, we have everything we need to prime the Contour Engine --- shape/color separation, multiple downsampling levels and multiscale image processing. The key parts of our feature detection layer are present. However, we may add one more improvement --- completeness.

It is preferable to have a feature detection layer that is complete under some particular set of transformations. For example, if feature $F_0$ detects some particularly oriented shape, the feature detection layer may also be required to have $F_1$, $F_2$ and $F_3$ that detect the same shape rotated by $90{\degree}$, $180{\degree}$ and $270{\degree}$, respectively. Another option is to require completeness with respect to permutations in color space --- one may require a color gradient to be detected for any combination of constituent colors (red-green, red-blue, green-blue, yellow-blue, violet-green and so on).
This requirement may be a bit too much for specialized computer vision systems like those that detect traffic lights --- red blobs against black backgrounds are important, but violet blobs against a green background are irrelevant for solving the problem. However, to design a general-purpose vision system that can be specialized for any task, having such a feature detection layer may be essential for success. \emph{What is usually achieved by training a "prototype network" on a large, diverse dataset (say, ImageNet) can also be achieved by introducing completeness in a network trained on a much smaller dataset}.

In this work, however, we focus on another aspect of complete feature subsets: computational complexity. Some types of completeness allow us to achieve a constant 2x-6x performance boost, that is, to have subsets of two features (completeness with respect to lightness inversion) or six features (completeness with respect to color rotation) computed in roughly the same time as is usually needed to compute just one feature.

Completeness with respect to lightness inversion means that color-agnostic features now come in two subsets --- one corresponding to the "positive phase" of a filter and one corresponding to the "negative phase":
\begin{align*}
y_{f}[i,j,k] &= CONV(W,x) \\
y_{raw}[i,j,k] &= CONCAT\left[ ReLU(+y_f), ReLU(-y_f) \right] \\
y_{sparse}[i,j,k] &= ReLU\left(y_{raw}[i,j,k] - \lambda\underset{k}{MEAN}(y_{raw}[i,j,k])\right) \\
y_{nrm}[i,j,k] &= \frac{y_{sparse}[i,j,k]}{\epsilon+\max\limits_{i,j,k} y_{sparse}[i,j,k]}
\end{align*}
This improvement allows us to achieve a constant 2x performance boost for the color-agnostic part of our feature detection layer. This means that we can either have a 2x wider layer (more features detected) with the same performance budget, or, alternatively, roughly the same level of quality with a 2x smaller running time. Similar, albeit more complex, changes can be made to introduce completeness with respect to rotations in color space.

Capturing both positive and negative phases of ReLU units was proposed long before this work (e.g., \cite{crelu}, \cite{maxmin}). However, most previous authors failed to consider the fact that capturing positive/negative phases is just a special case of the more general movement toward having a complete feature detection layer.
\end{document}
https://openreview.net/forum?id=S4kvQ7_XBxP
https://arxiv.org/abs/2006.09510
Evaluate the paper based on its novel and interesting idea of using "strong neurons" for sparse neural networks, its effectiveness on the evaluated datasets, and the potential drawbacks of the optimization method and training strategy.
A novel approach to contour recognition

[Summary] In 2-3 sentences, describe the key ideas, experiments, and their significance.
The authors propose a new type of neuron designed for contour recognition. They detail an extensive algorithm for training these neurons without back-propagation. They show their method can outperform convolutional methods in the low-FLOPs regime and is more robust against adversarial attacks.

[Strengths] What are the strengths of the paper? Clearly explain why these aspects of the paper are valuable.
Creative approach to a hard problem (replacing the convolutional neuron); builds on related work where possible; solid experiments.

[Weaknesses] What are the weaknesses of the paper? Clearly explain why these aspects of the paper are weak.
Many heuristics are needed to optimize the strong neuron, while the effects of the heuristics are not analyzed or explored. In the same vein, an ablation study, including the effects of the unsupervised backbone, would have helped to make the work more solid.

[Overall rating] Paper rating: Accept

[Detailed comments] Additional comments regarding the paper (e.g. typos or other possible improvements you would like to see for the camera-ready version of the paper, if any.)
- Lines 156-157, 189-190 are unclear to me
- Section 7.1 addresses the reader as "you". Use of the more formal "one" would be advised.
- Change brackets to separate sentences (e.g. lines 370-371)
- Minor typos: line 490 "shalow"

Rating: 7: Good paper, accept
Confidence: 3: The reviewer is fairly confident that the evaluation is correct
\documentclass{article} \usepackage{arxiv} \usepackage{textcomp} \usepackage[utf8]{inputenc} % \usepackage[T1]{fontenc} % \usepackage{enumerate} \usepackage{hyperref} % \usepackage{url} % \usepackage{booktabs} % \usepackage{amsfonts} % \usepackage{amsmath} % \usepackage{gensymb} % \usepackage{nicefrac} % \usepackage{microtype} % \usepackage{lipsum} \usepackage{graphicx} \usepackage{wrapfig} \usepackage{float} \usepackage{todonotes} \graphicspath{ {./images/} } \usepackage{svg} \title{On sparse connectivity, adversarial robustness, and a novel model of the artificial neuron} \author{ Sergey Bochkanov \\ ALGLIB Project \\ Russian Federation \\ \texttt{sergey.bochkanov@alglib.net} \\ } \begin{document} \maketitle \begin{abstract} Deep neural networks have achieved human-level accuracy on almost all perceptual benchmarks. It is interesting that these advances were made using two ideas that are decades old: (a) an artificial neuron based on a linear summator and (b) SGD training. However, there are important metrics beyond accuracy: computational efficiency and stability against adversarial perturbations. In this paper, we propose two closely connected methods to improve these metrics on contour recognition tasks: (a) a novel model of an artificial neuron, a "strong neuron," with low hardware requirements and inherent robustness against adversarial perturbations and (b) a novel constructive training algorithm that generates sparse networks with $O(1)$ connections per neuron. We demonstrate the feasibility of our approach through experiments on SVHN and GTSRB benchmarks. We achieved an impressive 10x-100x reduction in operations count (10x when compared with other sparsification approaches, 100x when compared with dense networks) and a substantial reduction in hardware requirements (8-bit fixed-point math was used) with no reduction in model accuracy. Superior stability against adversarial perturbations (exceeding that of adversarial training) was achieved without any counteradversarial measures, relying on the robustness of strong neurons alone. We also proved that constituent blocks of our strong neuron are the only activation functions with perfect stability against adversarial attacks. \end{abstract} \section{Introduction} In recent decades, artificial neural networks have achieved impressive results on all computer vision benchmarks. Perhaps the correct phrase would be "unbelievably good" because a hypothetical time traveller from the year 2000 would be shocked by today's progress in this area. One could have predicted, relying on Moore's law, the computing power of today's CPUs. However, it would have been impossible to predict the completely unexpected success in the training of large nonconvex multiextremal models --- object recognition, neural text translation, style transfer, and deep fakes. Interestingly, this progress was achieved using two ideas that are decades old: (1) an artificial neuron with a linear summator at its core and (2) stochastic gradient (SGD) training. The combination of these ideas was fortuitous, allowing us to fit any decision function, no matter how complex. As a result, in recent years neural models surpassed human-level accuracy on ImageNet and other benchmarks. However, we believe (and will justify below) that the very properties of summators and SGD impede progress in improving two other important metrics: the sparsity of the neural connections and adversarial stability. 
In our work, we propose (1) a novel model of an artificial neuron with inherent robustness against adversarial perturbations and (2) a novel training algorithm that allows us to build extremely sparse networks with $O(1)$ connections per neuron. With these proposals, we achieved state-of-the-art performance and adversarial stability on a number of contour recognition benchmarks. The article is structured as follows. In section \ref{sect:novelneuron}, we will discuss the deficiencies of linear summators and propose a new model of an artificial neuron that we call the "strong neuron." In section \ref{sect:rationale}, we will show that the structure of our strong neuron is motivated by obvious stability requirements and that our strong neuron is the only perfectly stable artificial neuron possible. In section \ref{sect:overview}, we will discuss three blocks of the Contour Engine, a neural architecture that utilizes our proposed strong neurons: a feature detection unit, sparse inference unit, and shallow classifier. The key part of our network --- the sparsely connected geometric inference engine --- and its training algorithm will be discussed in section \ref{sect:sparselayers}. The initial feature detection layer will be briefly discussed in section \ref{sect:featuredetector} (with a more detailed discussion in Appendix B). The shallow classifier that performs post-processing of the network output will be discussed in section \ref{sect:shallowclassifier}. In section \ref{sect:comparison}, we will compare our architecture with similar and related approaches. In section \ref{sect:results}, we will discuss the experimental results. Finally, in section \ref{sect:conclusions}, we present a brief summary of our findings and a few thoughts on future research directions. \section{The novel artificial neuron ("strong neuron")} \label{sect:novelneuron} In this work we propose to replace traditional summator-based artificial neurons with a more powerful one that (a) can separate input images with decision surfaces much more complex than hyperplanes, (b) has better stability properties with respect to the adversarial perturbations of its inputs, (c) inherently favors sparsity of connections and (d) has fairly low hardware requirements (8-bit fixed point hardware is enough in most cases). \begin{figure}[h!] \centering \includegraphics[width=10cm]{figure-1-strongnn.pdf} \caption{A summator-based neuron and a strong neuron} \label{fig:fig1_strongnn} \end{figure} In the following subsections, we discuss specifics of the contour recognition problems, strong and weak points of the summator-based artificial neuron and, finally, our proposal. \subsection{Contour recognition = logical AND + logical OR} Contour recognition is an important subset of computer vision problems. It is deeply connected with properties of our world --- we live in a universe full of localized objects with distinctive edges. Many important problems are contour based: handwritten digit recognition, traffic light detection, traffic sign recognition and number plate recognition. There are also non-contour tasks --- for example, ones that can only be solved by gathering information from many small cues scattered throughout an image (e.g., distinguishing a food store from an electronics store). A degenerate counterexample is a task that involves computing the mean intensity of the image pixels --- its decision function ignores any kind of spatial structure in the image. 
Contour recognition has interesting mathematical properties:
\begin{itemize}
\item It naturally leads to $[0,1]$-bounded activities. Not all computer vision problems have this property (e.g., object counting tasks have unbounded activities).
\item Contours are localized and independent of their surroundings (e.g., a crosswalk sign is a crosswalk sign, regardless of who uses the crosswalk --- a pedestrian, a tank or a bird).
\item An ideal contour detector should have a monotonic response with respect to full/partial "dimming" of the contour or some of its parts. In other words, if you start to progressively remove parts of the contour, you should observe monotonically decreasing detector responses.
\end{itemize}

Our insight is that contour recognition is essentially a combination of two basic operations on low-level features:
\begin{itemize}
\item logical AND (detection), which decomposes high-level features as combinations of several low-level ones, placed at different locations
\item logical OR (generalization), which allows detectors to be activated by more diverse inputs
\end{itemize}

\begin{figure}[H]
\centering
\includegraphics[width=10cm]{figure-and-or.pdf}
\caption{Pattern recognition: AND + OR}
\label{fig:andor}
\end{figure}

\subsection{What is wrong with the linear summator and SGD?}

A linear summator trained with SGD is an excellent basic building block for a number of reasons:
\begin{itemize}
\item First, it is flexible. It smoothly implements soft-AND/soft-OR logic within a single framework: $AND_{RELU}(A,B)=ReLU(A+B-1)$, $OR_{RELU}(A,B)=ReLU(A+B)$. It may also implement more general decision functions (including ones with negative weights).
\item Second, it is trainable. We usually accept it as a given that one can stack many linear units interleaved with nonlinearities, construct a huge nonlinear nonconvex model and \emph{successfully} fit it with SGD to some complex and noisy decision function.
\end{itemize}

However, it has some deficiencies as well. First, the summator-based implementation of AND/OR logic is very brittle, especially in high-dimensional spaces. The neuron can be set to an arbitrarily high value (or, alternatively, zeroed) by feeding it with many small activities in different channels. Many researchers believe that this is the reason behind the adversarial instability of modern neural networks.

We also feel (more intuition than concrete proof) that SGD-based training has limited potential for sparsification. There are multiple sparsification strategies that share one common trait: they start from a dense network and progressively sparsify it (via $L_1$ regularization or by other means). As a result, the final connection count is typically \emph{a fraction} of the initial connection count $C$: $O(s{\times}C)$, where $s$ is a sparsity coefficient that may be quite small --- 0.1, 0.01 or even less --- although it is asymptotically different from zero. Thus, we believe that sparsity via regularization is inferior to sparsity achieved by other means (explicit channel selection or sparsifying constraints).

\subsection{Our proposal}

We propose to use $f_{AND}(A,B)=min(A,B,1)$ to implement AND-logic and $f_{OR}(A,B)=max(A,B,0)$ to implement OR-logic, and to combine both kinds of logic in a novel summator-free artificial neuron --- the "strong neuron" (see Figure \ref{fig:stronger}).
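The following minimal NumPy sketch illustrates the two kinds of units and their composition into a strong neuron under the binary-weight assumption. The toy wiring, the input vector and the function names are assumptions made for the example; this is not the production implementation.
\begin{verbatim}
import numpy as np

# Summator-based soft AND/OR (brittle: many small activities can flip the result) ...
def and_relu(a, b):
    return np.maximum(a + b - 1.0, 0.0)

def or_relu(a, b):
    return np.maximum(a + b, 0.0)

# ... versus the min/max units that make up the strong neuron
def and_min(*args):
    return min(list(args) + [1.0])        # min(A, B, ..., 1)

def or_max(*args):
    return max(list(args) + [0.0])        # max(A, B, ..., 0)

def strong_neuron(x, groups):
    # One strong neuron with binary weights: each group of input indices feeds one
    # max-element, and the min-element combines the groups (output capped at 1).
    return and_min(*[or_max(*[x[i] for i in g]) for g in groups])

# Toy wiring over a 6-dimensional [0,1]-bounded input vector
x = [0.9, 0.1, 0.0, 0.8, 0.2, 0.7]
print(strong_neuron(x, groups=[[0, 1], [2, 3], [4, 5]]))
# min(max(0.9, 0.1), max(0.0, 0.8), max(0.2, 0.7), 1) = 0.7
print(and_relu(0.6, 0.6), and_min(0.6, 0.6))   # 0.2 vs 0.6: different AND semantics
\end{verbatim}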
\begin{figure}[ht]
\centering
\includegraphics[width=12cm]{figure-stronger.pdf}
\caption{The strong neuron is better at pattern recognition than the linear one}
\label{fig:stronger}
\end{figure}

We call our artificial neuron "strong" because it has a much more complex decision boundary than the summator-based neuron. The shape of this boundary naturally fits into the pattern recognition framework. Even with binary weights (which allowed us to achieve state-of-the-art results on GTSRB and SVHN benchmarks), standalone strong neurons can separate large chunks of the target class from the rest of the training set.

In the somewhat exaggerated example shown in Figure \ref{fig:stronger}, the standalone summator-based neuron cannot distinguish between the full image dimmed by 50\% (reduced contrast) and the image with a completely dropped bottom half. The linearity of the summator means that it is possible to compensate for the lack of activity in one channel by increasing the activity in another one. In contrast, the strong neuron easily and naturally distinguishes between these two images.

Another important property of our strong neuron is that its amplification of adversarial perturbations can be precisely controlled. Further, with binary weights the layer of strong neurons becomes robust with respect to adversarial attacks: an $\epsilon$-bounded perturbation of its inputs produces an output perturbation bounded by the same $\epsilon$.

We also propose a novel training algorithm that can train strong neurons with sparse connectivity. This algorithm reformulates the initial sparsity-constrained nonlinear least squares problem as a discrete optimization problem with discrete (binary or nonbinary) weights and discrete sparsity constraints, which is efficiently solved by a newly proposed heuristic.

The properties of strong neurons and their training algorithm can be used to reduce hardware requirements --- in particular, to avoid expensive floating point units. With binary weights, our strong neurons are summation-free and multiplication-free --- only $min$ and $max$ operations are needed to implement them. Moreover, the adversarial stability of strong neurons means that they are also resistant to random perturbations from rounding errors (i.e., it is possible to reduce precision from full 32-bit floating point to 8-bit fixed-point without sacrificing inference accuracy).

\section{The motivation behind our model}
\label{sect:rationale}

In this section, we will show that our artificial neuron model is motivated by some fundamental considerations; that is, there are some reasonable and intuitive requirements that are satisfied by our model --- and are not satisfied by summator-based neurons.

First, we define an $L_\infty$-nonexpansive function as one that, in the general N-dimensional case, satisfies
\begin{align*}
|f(x+{\Delta}x)-f(x)| \leq \max\limits_i|{\Delta}x_i| = {\lVert}{\Delta}x{\rVert}_\infty
\end{align*}
for any N-dimensional input perturbation ${\Delta}x$. Similarly, we define an $L_1$-nonexpansive function as one that satisfies
\begin{align*}
|f(x+{\Delta}x)-f(x)| \leq \sum\limits_i|{\Delta}x_i| = {\lVert}{\Delta}x{\rVert}_1
\end{align*}
Clearly, both kinds of nonexpansive functions produce bounded output under bounded input perturbation. However, the $L_\infty$ version provides stricter bounds than the $L_1$ one --- it does not accumulate perturbations.
For a 32x32x1 input image, $L_\infty$-nonexpansivity means that a change of $0.01$ in every pixel changes the output by at most $0.01$, and $L_1$-nonexpansivity means that the output change may be as large as $10.24=1024\times0.01$! Another interesting question is how different kinds of nonexpansivity perform in a multilayer setting. It is easy to see that $L_\infty$-nonexpansivity is preserved under superposition: $f_\infty(f_\infty(x),\dots,f_\infty(x))$ still produces an $\epsilon$-bounded output under an $\epsilon$-bounded input. Conversely, stacking $L_1$-nonexpansive functions does not preserve this property: given that $f_1(x)$ produces an $N\epsilon$-bounded output under an $\epsilon$-bounded input, $f_1(f_1(x),\dots,f_1(x))$ will produce an $N^{2}\epsilon$-bounded output. Human vision --- and any artificial vision system that should be robust --- has a bounded reaction to bounded perturbations of the input image. The bounding ratio is not always 1:1 because sometimes we want to amplify weak signals. Thus, enforcing $L_\infty$-nonexpansivity on the entire classifier may overconstrain it. However, it makes sense to enforce this constraint at least for some parts of the classifier. Our computational results show that stacking nonexpansive layers and performing potentially nonrobust inference only in the last step greatly improves stability against adversarial perturbations. The rationale behind our model of the artificial neuron should be obvious --- making inference as robust as possible. However, we present an even more interesting result --- the fact that our model is the only perfectly stable artificial neuron that implements AND/OR logic. One familiar with the history of artificial neural networks may remember the so-called "XOR problem" --- a problem of fitting the simple four-point dataset below: \begin{center} \begin{tabular}[H]{ c c c } $x_0$ & $x_1$ & $y$ \\ \hline 0 & 0 & 0 \\ 0 & 1 & 1 \\ 1 & 0 & 1 \\ 1 & 1 & 0 \end{tabular} \end{center} This problem is an elegant example of a dataset that cannot be separated by the single linear summator. Inspired by its minimalistic beauty, we formulate two similar problems, which address the accumulation of perturbations in multilayer networks: \paragraph{Theorem 1: $L_\infty$-nonexpansive AND problem.} $\exists!{\enspace}f(x,y)=min(x,y)$ such that the following holds: \begin{enumerate} \item $f(x,y)$ is defined for $x,y \in [0,1]$ \item $f(0,0)=f(0,1)=f(1,0)=0$ \item $f(1,1)=1$ \item $a{\leq}A,\ \ b{\leq}B \implies f(a,b){\leq}f(A,B)$ (monotonicity) \item $|f(a+{\Delta}a,b+{\Delta}b)-f(a,b)| \leq max(|{\Delta}a|,|{\Delta}b|)$ \end{enumerate} \paragraph{Theorem 2: $L_\infty$-nonexpansive OR problem.} $\exists!{\enspace}g(x,y)=max(x,y)$ such that the following holds: \begin{enumerate} \item $g(x,y)$ is defined for $x,y \in [0,1]$ \item $g(0,0)=0$ \item $g(0,1)=g(1,0)=g(1,1)=1$ \item $a{\leq}A,\ \ b{\leq}B \implies g(a,b){\leq}g(A,B)$ (monotonicity) \item $|g(a+{\Delta}a,b+{\Delta}b)-g(a,b)| \leq max(|{\Delta}a|,|{\Delta}b|)$ \end{enumerate} Proofs of theorems 1 and 2 can be found in Appendix A \ref{sect:appendixa}. 
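The difference between the two bounds can also be illustrated numerically (this sketch is an illustration only, not a substitute for the proofs in Appendix A; it assumes NumPy and uses $ReLU(\sum_i x_i-(n-1))$ as the natural $n$-input generalization of the summator-based AND):
\begin{verbatim}
import numpy as np

n = 1024                         # e.g., a flattened 32x32 single-channel input
eps = 0.01
x = np.full(n, 1.0)              # all features fully active

def and_relu(v):
    # n-input generalization of ReLU(A + B - 1): only L_1-nonexpansive
    return max(v.sum() - (n - 1), 0.0)

def and_min(v):
    # min(A, B, ..., 1): L_infty-nonexpansive (Theorem 1)
    return min(v.min(), 1.0)

dx = np.full(n, eps)             # a worst-case perturbation with |dx_i| <= eps
print(and_relu(x + dx) - and_relu(x))   # = n * eps = 10.24
print(and_min(x + dx) - and_min(x))     # = 0.0, and never exceeds eps
\end{verbatim}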
These theorems have the following consequences:
\begin{itemize}
\item Our $min$-based AND and $max$-based OR elements are the only perfectly robust implementations of AND/OR logic
\item It is impossible to implement a robust AND (robust OR) element with just one ReLU neuron --- the best that can be achieved is $L_1$-nonexpansivity, which is not robust
\item It is possible to implement robust AND/OR logic by performing tricks with many traditional ReLU neurons ($max(a,b)=a+ReLU(b-a)$, $max(a,b,c)=max(a,max(b,c))$ and so on), but the result will be just another implementation of our robust AND/OR logic --- although it is much harder to achieve with SGD training
\end{itemize}

\section{Contour Engine: architecture overview}
\label{sect:overview}

In previous sections, we presented our model of the artificial neuron and discussed the motivation behind it, its significance and the differences between the novel neuron and traditional summator-based ones. In this section, we briefly discuss the architecture of our network before moving to more detailed explanations in the following sections.

\begin{figure}[H]
\centering
\includegraphics[width=14cm]{figure-contourengine.pdf}
\caption{Three blocks of the Contour Engine network}
\label{fig:contourengine}
\end{figure}

The three key parts of our neural architecture are:
\begin{itemize}
\item shallow feature detector
\item sparse contour detection layers
\item shallow classifier
\end{itemize}
The feature detection layer produces initial low-level features. The contour detection layers (one or two are usually enough) combine them in order to produce medium- and high-level features. Finally, a linear or nonlinear classifier post-processes the features produced by the robust contour detection stage.

The training algorithm includes three distinct, sequential stages:
\begin{itemize}
\item train (iteratively) or build (noniteratively) a shallow feature detector
\item create sparse contour detection layers in a constructive manner (add layer by layer, create each layer neuron by neuron)
\item train a shallow classifier using activities of sparse layers as inputs
\end{itemize}

In our experiments, we used noniterative construction of the shallow feature detector --- either analytically constructed edge detection filters or filters obtained via unsupervised training were used (running k-means over image patches \cite{Coates11}). Such an approach makes the input layer independent of the label assignment, which allows us to draw some interesting conclusions regarding the asymptotic complexity of image recognition.

Our approach to the construction of sparse layers --- adding layers and neurons one by one --- is similar to and was inspired by the Cascade-Correlation network \cite{Fahlman90}. The difference from the original work is that in order to generate new neurons we have to solve a \emph{nonsmooth} nonlinear least squares subproblem with additional $L_0$ sparsity constraints (for comparison, traditional summator-based neurons result in smooth unconstrained nonlinear least squares subproblems). The second important contribution of our work (in addition to the robust artificial neuron) is a heuristic that can efficiently find approximate solutions of such subproblems. This heuristic is discussed in more detail in the next section.

Finally, the shallow classifier can be implemented as a linear layer (with SOFTMAX normalization) processing outputs of the sparse block.
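To make the data flow through the three blocks concrete, the sketch below strings them together on random data. Everything here is an assumption made purely for illustration: the dense stand-in for the convolutional feature detector, the toy wiring of the strong neurons, and all shapes and names.
\begin{verbatim}
import numpy as np

def feature_detector(image, filters):
    # Stand-in for the shallow unsupervised front-end (a dense projection + clipping
    # instead of real convolutions, purely to keep the example short).
    return np.clip(filters @ image.ravel(), 0.0, 1.0)

def strong_layer(h, neurons):
    # Each strong neuron: min over a few max-groups of input indices, capped at 1.
    return np.array([min(min(max(h[i] for i in g) for g in groups), 1.0)
                     for groups in neurons])

def shallow_classifier(z, W, b):
    s = W @ z + b
    e = np.exp(s - s.max())
    return e / e.sum()                       # SOFTMAX over classes

rng = np.random.default_rng(0)
image = rng.uniform(size=(32, 32))
filters = rng.uniform(-1.0, 1.0, size=(100, 32 * 32))
neurons = [[[0, 1], [2, 3], [4, 5]],         # toy wiring: 2 strong neurons,
           [[6, 7], [8, 9]]]                 # 2-3 max-groups each
W, b = rng.normal(size=(10, 2)), np.zeros(10)
probs = shallow_classifier(strong_layer(feature_detector(image, filters), neurons), W, b)
print(probs.round(3), probs.sum())           # 10 class probabilities summing to 1
\end{verbatim}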
\section{Training sparsely connected layers}
\label{sect:sparselayers}

This section discusses the core contribution of our work --- the constructive training of sparsely connected strong neurons.

\subsection{Issues with SGD training}

Based on our experience, online SGD training does not work well for networks with $min$-based activation functions. We failed to achieve good results with SGD --- but maybe someone else will be able to do better. We believe that the extreme nonconvexity of the $min$ function contributed to this failure ($max$ is less of a problem in our opinion), as it makes training much more difficult and prone to stalling in bad local extrema.

Our solution to these problems is a constructive training algorithm that creates the network layer by layer, with each layer created by adding neurons one by one. This approach was investigated many times by many researchers with mixed results. We again refer here to the work of Fahlman et al. on the Cascade-Correlation network \cite{Fahlman90}, which, in our opinion, was the most successful one and inspired our own research.

\subsection{The constructive training algorithm}

Training networks composed of highly nonconvex and nonsmooth elements is difficult. Suppose, however, that \emph{somehow} you can train just one such element to fit some target function of your choice. How can it help you train a network? The answer is to build your model incrementally, training new elements to fit the current residual and adding them one by one.

\begin{figure}[H]
\centering
\includegraphics[width=14cm]{figure-train-layers.pdf}
\caption{Incremental training procedure}
\label{fig:trainlayers}
\end{figure}

New neurons are trained to fit the current residual of the classifier, and every time you add a neuron to the layer you have to retrain the classifier to obtain new residuals. One may see some similarity to boosting here (we will return to this point later). The algorithm outlined above can easily be generalized to multilayer training. One choice to be made is whether or not to maintain shortcut connections to the classifier from the previously learned layer. The training procedure can easily fast-forward information from bottom to top by learning an identity mapping if necessary, so it is mostly a matter of taste.

\subsection{Training strong neurons}

In the subsection above, we reduced the problem of training sparse multilayer networks to training just one neuron with sparse connections:
\begin{align*}
\min\limits_{w} \sum\limits_{i}\left(N(w,X_i)-y_i\right)^2 \quad \text{s.t. sparsity constraints}
\end{align*}
where $w$ is a weight vector, $X_i$ is the $i$-th row of the input activities matrix $X$ (activities of the bottom layer on the $i$-th image), $N(w,x)$ is the neuron output and $y_i$ is the target to fit (in our case, the current residual).

For a three-input strong neuron, the formulation above becomes:
\begin{equation}
\label{eq:strong_nls_nonsmooth}
\begin{split}
\min\limits_{w_0, w_1, w_2} &\sum\limits_{i}\left[\min\left(\max\limits_{j}(w_{0,j}{\cdot}X_{i,j})\ ,\ \max\limits_{j}(w_{1,j}{\cdot}X_{i,j})\ ,\ \max\limits_{j}(w_{2,j}{\cdot}X_{i,j})\ ,\ \textbf{1}\right)-y_i\right]^2 \quad \text{s.t.} \\
&{\lVert}w_0{\rVert}_0 \leq k\ ,\ \ {\lVert}w_1{\rVert}_0 \leq k\ ,\ \ {\lVert}w_2{\rVert}_0 \leq k
\end{split}
\end{equation}
This problem has no easy solution, even in an unconstrained setting, and $L_0$ constraints are hard to handle with present nonsmooth solvers.
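Before turning to how we attack this subproblem, the outer constructive loop described above can be summarized in a few lines. This is a schematic NumPy sketch: the subproblem solver is treated as a black box, and the stand-in \texttt{toy\_fit\_strong\_neuron} used in the usage example is an assumption for illustration, not the heuristic developed below.
\begin{verbatim}
import numpy as np

def fit_linear(H, y):
    # Refit the linear readout on the current detector activities (least squares).
    return np.linalg.lstsq(H, y, rcond=None)[0]

def constructive_training(X, y, fit_strong_neuron, n_neurons=10):
    detectors = []
    H = np.ones((len(X), 1))                    # start with a bias column only
    w = fit_linear(H, y)
    for _ in range(n_neurons):
        residual = y - H @ w                    # what the current model still misses
        d = fit_strong_neuron(X, residual)      # subproblem: one sparse strong neuron
        detectors.append(d)
        H = np.column_stack([H, d(X)])          # append its activities as a feature
        w = fit_linear(H, y)                    # retrain the classifier
    return detectors, w

# Toy stand-in for the subproblem solver: pick the single input channel that best
# correlates with the residual and use it as a degenerate (single-input) detector.
def toy_fit_strong_neuron(X, residual):
    j = int(np.argmax(np.abs(X.T @ residual)))
    return lambda Z: np.minimum(Z[:, j], 1.0)

rng = np.random.default_rng(0)
X = rng.uniform(size=(500, 40))
y = np.minimum(X[:, 0], X[:, 1])                # a min-shaped target to recover
detectors, w = constructive_training(X, y, toy_fit_strong_neuron, n_neurons=5)
print(len(detectors), w.shape)                  # 5 detectors + (6,) readout weights
\end{verbatim}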
Our proposal is to replace (\ref{eq:strong_nls_nonsmooth}) with a similar, albeit nonequivalent, form, which can be solved more efficiently and robustly.

One attractive property of contour recognition problems is that they deal with $[0,1]$-bounded activities, where $0$ stands for the absence of some feature and $1$ stands for the maximum activity possible. Thus, one may reasonably expect that all weights in (\ref{eq:strong_nls_nonsmooth}) will be nonnegative (connections with negative weights simply will not activate the neuron). Furthermore, it makes sense to place further restrictions on the weights --- that is, to choose weights from some short fixed list, for example $\{0,\nicefrac{1}{2},1,1\nicefrac{1}{2},2\}$.

Now, instead of a nonconvex, nonsmooth, nonlinear least squares problem we have a combinatorial optimization problem:
\begin{equation}
\label{eq:strong_nls_discrete}
\begin{split}
\min\limits_{w_0, w_1, w_2} &\sum\limits_{i}\left[\min\left(\max\limits_{j}(w_{0,j}{\cdot}X_{i,j})\ ,\ \max\limits_{j}(w_{1,j}{\cdot}X_{i,j})\ ,\ \max\limits_{j}(w_{2,j}{\cdot}X_{i,j})\ ,\ \textbf{1}\right)-y_i\right]^2 \quad \text{s.t.} \\
&w_{0,j},w_{1,j},w_{2,j} \in W\\
&{\lVert}w_0{\rVert}_0 \leq k\ ,\ \ {\lVert}w_1{\rVert}_0 \leq k\ ,\ \ {\lVert}w_2{\rVert}_0 \leq k
\end{split}
\end{equation}
where $W$ can be binary $\{0,\ 1\}$ or something more fine-grained, such as $\{0,\ \nicefrac{1}{2},\ 1,\ 1\nicefrac{1}{2},\ 2\}$ or $\{0,\ \nicefrac{1}{4},\ \nicefrac{1}{2},\ \nicefrac{3}{4},\ 1,\ 1\nicefrac{1}{4},\ 1\nicefrac{1}{2},\ 1\nicefrac{3}{4},\ 2\}$.

Discrete optimization problems are usually harder to solve precisely than continuous ones. Furthermore, \emph{this} discrete problem cannot be reduced to well-studied mixed-integer LP or mixed-integer QP, so there is likely no other way to solve it exactly except for a brute-force search. However, we do not need an exact solution --- having a good one is sufficient. Our insight is that there is a simple heuristic that can generate good strong neurons without dealing with nonconvex multiextremal optimization problems.

The original discrete optimization problem has no constraints except for sparsity. A $max$-element can gather information from any element of the input tensor (see figure below). As a result, we have to evaluate a prohibitively large number of possible connection structures. For instance, for 15 unit-weight connections to elements of a 32x32x20 input tensor we have roughly $10^{58}$ possible geometries.

\begin{figure}[H]
\centering
\includegraphics[width=8cm]{figure-trn0.pdf}
\caption{Totally unconstrained neuron}
\label{fig:trn0}
\end{figure}

It is possible to significantly reduce the configuration count by adding some additional restrictions on the inter-layer connections.
For example, we may impose two additional constraints:
\begin{itemize}
\item Require that $max$-elements are spatially local (i.e., each element gathers inputs from just one location $(x,y)$ of the input tensor)
\item Require that $max$-elements feeding data into the same $min$-element are located close to each other
\end{itemize}

Alternatively --- for 1x1xD input tensors with no spatial component --- these restrictions can be reformulated as follows:
\begin{itemize}
\item Require that $max$-elements are correlationally local (i.e., each element gathers inputs from strongly correlated channels)
\item Require that $max$-elements feeding data into the same $min$-element are correlated strongly enough
\end{itemize}

Having such constraints on the connections of the strong neuron significantly reduces the number of configurations that must be evaluated to solve the problem (\ref{eq:strong_nls_discrete}). In our toy example, the configuration count is reduced from $10^{58}$ to just $10^{18}$.

\begin{figure}[H]
\centering
\includegraphics[width=8cm]{figure-trn1.pdf}
\caption{Strong neuron with spatial/correlational constraints}
\label{fig:trn1}
\end{figure}

We can achieve a further reduction in search complexity through a two-step search procedure:
\begin{itemize}
\item Evaluate all possible "seed detectors" --- strong neurons with single-input $max$-elements (AND without OR)
\item Expand the best seed found --- sequentially add connections to its $max$-elements
\end{itemize}

\begin{figure}[H]
\centering
\includegraphics[width=8cm]{figure-trn2.pdf}
\caption{Seed detector --- a strong neuron without $max$-elements}
\label{fig:trn2}
\end{figure}

As a result of this improvement, the search complexity for our 32x32x20 example is reduced from $10^{18}$ to $10^{9}$ neural configurations. However, it is still too costly --- each of these configurations requires a full pass over the entire dataset in order to evaluate the neuron's performance. Further improvements can be achieved by assuming the following:
\begin{itemize}
\item Good $f_3=\min(A,B,C)$ can be found by extending good $f_2=\min(A,B)$ with the best-suited $C$
\item Good $f_2=\min(A,B)$ can be found by extending good $f_1=A$ with the best-suited $B$
\item Good $f_1=A$ can be found by simply evaluating all possible single-input seed detectors
\end{itemize}

\begin{figure}[H]
\centering
\includegraphics[width=8cm]{figure-trn3.pdf}
\caption{Growth of seed detectors}
\label{fig:trn3}
\end{figure}

This improvement makes the problem (\ref{eq:strong_nls_discrete}) computationally tractable. For example, the complexity of our toy example is reduced to just $20000$ combinations (compare this with the initial $10^{58}$ estimate).

\paragraph{Algorithm outline.} The simplified algorithm (only $\{0,1\}$ weights, input activities are $[0,1]$-bounded) is shown below:
\begin{enumerate}
\item Set up the initial model (empty, with zero output) and a vector of its residuals over the entire dataset. Select a neuron pool size $P$ (a few hundred works in most cases).
\item Competition phase: generate seed detectors and select the winner from the combined pool:
\begin{itemize}
\item Select a set of $P$ promising input features, "gen-1 seeds," $f_1=A$. Some form of quick and dirty feature selection is usually enough.
\item Produce $P$ gen-2 seeds by extending gen-1 seeds $f_1=A$ with a $B$ such that $f_2=\min(A,B)$ produces the best linear fit to the current residual. Only the spatial/correlational neighborhood of $f_1$ is evaluated.
\item Produce $P$ gen-3 seeds by extending gen-2 seeds $f_2=\min(A,B)$ with a $C$ such that $f_3=\min(A,B,C)$ produces the best linear fit to the current residual. Only the spatial/correlational neighborhood of $f_1$ is evaluated.
\end{itemize}
\item Generalization phase. Having determined a winning seed detector, sequentially extend its inputs with new $max$-connections:
\begin{itemize}
\item $f = \min(A, B, ...)$
\item $A \xrightarrow{} \max(A)$
\item $\max(A) \xrightarrow{} \max(A,A_2)$
\item $\max(A,A_2) \xrightarrow{} \max(A,A_2,A_3)$ and so on
\end{itemize}
Extending is performed in such a way that the extended detector fits the residual better than its previous version. Only the spatial/correlational neighborhood of $A$ is investigated. The procedure stops after the maximum number of connections is formed (a good value is 5 connections per $max$-element) or when there is no connection that can improve the fit.
\item Add the detector to the model, and update the classifier and residual vector. Stop after the user-specified number of detectors has been formed; go to step 2 otherwise.
\end{enumerate}

Although it is not explicitly stated, the algorithm above is a batch algorithm --- it requires us to keep the entire dataset in memory and make a full pass over it in order to generate new strong neurons. The reason for this is that the algorithm has no way of correcting the neuron structure once it has been added to the model --- so, if you train a suboptimal neuron using a subsample of the entire training set, you will be unable to improve it later. The only way to properly generate a neuron is to use all the available data.

This property raises the old question of the balance between network stability and plasticity. Networks trained with SGD have high plasticity but zero stability. Plasticity allows us to use SGD --- an algorithm that makes only marginal improvements in the network being trained --- because these small decrements in the loss function will accumulate over time. At the same time, it impedes cheap nondestructive retraining --- once an image is removed from the training set, it is quickly forgotten.

In contrast, our algorithm has zero plasticity --- it will not improve the neurons it generated previously --- but perfect stability. The drawback of such an approach is that it is necessary to use the entire training set to generate just one strong neuron, and this job has to be done in the best way possible. The upside is that the network never forgets what it learned before. If your task has changed a bit, you can restart training and add a few new neurons without damaging previously learned ones.

\section{The feature detection layer}
\label{sect:featuredetector}

In this section, we briefly discuss the feature detection layer based on \cite{Coates11} and several proposed improvements. We deem this part of our work less important than the results discussed in the previous section (sparsely connected layers of robust neurons). Nevertheless, there are several interesting ideas we want to share here. This section provides only a brief summary, with a detailed description presented in Appendix B (\ref{sect:appendixb}).

\begin{wrapfigure}{r}{0.5\textwidth}
\includegraphics[width=0.95\linewidth]{figure-filters-chromaluma.pdf}
\caption{Filters learned with our (improved) procedure}
\label{fig:chromaluma}
\end{wrapfigure}

Strong neurons can perform logical inference on low-level features, but they cannot \emph{produce} these features from raw pixel values.
Thus, a separate feature extraction block is essential in order to "prime" the Contour Engine. The purpose of our feature extraction layer is to describe the input image using a rich dictionary of visual words. The description includes features such as oriented edges, more complex shapes, colors and gradients, computed at multiple scales and orientations.

The key point of Coates et al. is that one may achieve surprisingly good classification performance by processing images with a single convolutional layer whose filters are trained in an unsupervised manner (k-means on random image patches). The authors also proposed to post-process the raw convolutions with a simple activity sparsification filter $y_{sparse,i} = ReLU\left(y_i - \lambda\cdot mean(y)\right)$. Filters as large as 4x4, 5x5 or 6x6 typically give the best results. Figure \ref{fig:chromaluma} shows an example of the filters found with our training procedure.

We extend their results as follows:
\begin{itemize}
\item separate processing of color-agnostic (shape sensitive) and color-based features
\item multiple downsampling levels of the layer outputs (2x and 4x max-pooling are used together)
\item feature detection at multiple scales
\item completeness with respect to image transformations --- multiple versions of the same feature corresponding to positive/negative phases, permutations in color space, rotations and so on
\end{itemize}

\begin{figure}[H]
\centering
\includegraphics[width=10cm]{figure-v1layer.pdf}
\caption{Multiscale multimodal feature extraction layer}
\label{fig:v1layer}
\end{figure}

\section{The shallow classifier layer}
\label{sect:shallowclassifier}

Our proposed strong neurons have unique stability and sparsity properties, but some limitations are also present. They have a rigid piecewise linear output with a fixed slope, but in order to separate image classes one often needs nonlinearities with steep slopes in some places and flat spots in other parts of the feature space. Hence, a separate classifier layer is needed at the top of the network.

This classifier layer can be as deep as you wish --- but strong neurons perform data processing extremely well, so all you need in most cases is a single linear summator followed by SOFTMAX. Training such a classifier is straightforward: one only has to sample the activities of the bottom sparsely connected block over the entire dataset and train a single-layer neural network (logit model) using the activities matrix as the input.

\emph{One important point to note is that the shallow classifier layer is the only place in our model where significant adversarial instability is introduced.} The sparsely connected layers of strong neurons amplify adversarial perturbations in a completely controllable manner (and do not amplify them when binary weights are used). The initial feature detection layer is a single layer of convolutions with bounded coefficients, and thus it has limited adversarial perturbation growth. As a result, any adversary targeting our model will actually target its last layer.

In effect, this means that we reduced the problem of building a robust deep classifier to one of building a robust \emph{shallow} classifier. In this work, we will show that, due to the stability of the bottom layers, a simple linear classifier performs well enough in terms of adversarial stability.
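A minimal sketch of this training step is given below: plain NumPy gradient descent on the cross-entropy loss of a linear summator followed by SOFTMAX. The shapes, learning rate and epoch count are illustrative assumptions, not the settings used for the reported results.
\begin{verbatim}
import numpy as np

def softmax(Z):
    E = np.exp(Z - Z.max(axis=1, keepdims=True))
    return E / E.sum(axis=1, keepdims=True)

def train_logit(H, labels, n_classes, lr=0.5, epochs=300):
    # H: (n_samples, n_features) activity matrix of the sparse block;
    # plain full-batch gradient descent on the cross-entropy loss.
    n, d = H.shape
    W, b = np.zeros((d, n_classes)), np.zeros(n_classes)
    Y = np.eye(n_classes)[labels]                  # one-hot targets
    for _ in range(epochs):
        P = softmax(H @ W + b)                     # predicted class probabilities
        G = (P - Y) / n                            # gradient of the loss w.r.t. logits
        W -= lr * (H.T @ G)
        b -= lr * G.sum(axis=0)
    return W, b

# Toy usage on random [0,1]-bounded activities of 200 strong neurons
rng = np.random.default_rng(0)
H = rng.uniform(size=(1000, 200))
labels = rng.integers(0, 10, size=1000)
W, b = train_logit(H, labels, n_classes=10)
print(softmax(H @ W + b).shape)                    # (1000, 10)
\end{verbatim}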
\section{Comparison with related approaches} \label{sect:comparison} In this section we discuss several other machine learning algorithms that are related to our work: \begin{itemize} \item Cascade-Correlation \item Boosting \item Forward-Thinking architecture \item Deep neural decision forests \item BagNet \item $L_2$-nonexpansive networks \end{itemize} We also would like to briefly review some present defenses against adversarial attacks: \begin{itemize} \item Adversarial training \item $L_2$-nonexpansive networks \item Convex Outer Adversarial Polytope (Wong Defense) \end{itemize} \paragraph{Cascade-Correlation.} We already mentioned and referred to the Cascade-Correlation architecture. Our network construction algorithm reproduces Fahlman's idea in many respects. Two important differences can be noted: (1) our algorithm trains sparsely connected strong neurons, and (2) unlike CasCor we try to avoid long chains of nonlinearities, which contribute to various instabilities, so our network has a shallow and wide layered structure. \paragraph{Boosting.} There is some similarity between our training algorithm and boosting. Both algorithms expand the model by sequentially adding new units trained to fit the current residual. Thus, one may consider our approach to be a special case of boosting. However, boosting algorithms do not pay attention to the properties of weak classifiers added to the model; that is, any kind of weak classifier will fit into the boosting framework. In contrast, robust strong neurons are essential to our network architecture. \paragraph{Forward-Thinking architecture.} Another interesting approach to discuss is Forward-Thinking architecture (see \cite{forwardthinking}). This architecture is a constructive algorithm that trains the network layer by layer in a greedy manner. Both Forward Thinking and Contour Engine use the same approach to create a layered network structure (different from both modern CNNs and Cascade-Correlation). \paragraph{Deep neural decision forests.} We also note some similarity between Contour Engine and one novel deep learning algorithm: deep neural decision forests \cite{deepneuraldf}. First, there is a correspondence between our strong neurons and shallow decision trees. Indeed, a strong neuron without $max$-units, the seed detector $f(A,B)=\min(A,B)$, is in some sense equivalent to a short decision tree. One may generate such a tree, which returns $1$ for $A>0.5$ and $B>0.5$ and returns 0 otherwise. The difference is that our strong neuron is more powerful than a shallow decision tree. Adding $max$-connections achieves a quadratic/cubic increase in the model capacity with just a linear increase in its size. Conversely, the capacity of the decision tree is linearly proportional to its size. \paragraph{BagNet.} BagNet, an experimental neural architecture \cite{bagnet}, achieves impressive classification results on ImageNet with the bag-of-local-features model. By averaging predictions of the local models (each seeing just $\nicefrac{1}{7}\times\nicefrac{1}{7}$ of the entire image) it is possible to achieve results competitive with those of deep networks. Authors have proposed this architecture as a proof of concept, which demonstrates that we have an incomplete understanding of the underlying mechanisms of computer vision algorithms. For us, this approach is an interesting counterexample to Contour Engine. Our architecture is based on a large-scale spatial structure, whereas BagNet works with scattered small-scale hints. 
\paragraph{Adversarial training.} A simple yet universal defense is to train the network using both original and adversarial examples\cite{advtrn}. These additional examples make the inferences more robust by explicitly telling the network about the expected behavior under adversarial perturbation. In theory, this may guide the network so that it will implement internally robust AND/OR logic (indeed, it is possible to implement $max$/$min$ with ReLU units). The benefit of this approach is that it works for any kind of model --- all that is needed is a training code and a code that generates adversarial examples. \paragraph{$L_2$-nonexpansive networks.} This approach \cite{l2nonexpansive} is a class of neural networks in which "a unit amount of change in the inputs causes at most a unit amount of change in the outputs or any of the internal layers." Due to the utilization of traditional summators, the authors were unable to achieve $L_\infty$-nonexpansivity, so they had to resort to weaker $L_2$-nonexpansivity (although it is still much better than $L_1$-nonexpansivity). \paragraph{Convex Outer Adversarial Polytope (Wong Defense).} This approach \cite{wongdefense} models network behavior under adversarial perturbation of its inputs. An input image is provided along with per-component bounds of adversarial perturbation. Wong's algorithm models the perturbation of activities of internal units and provides differentiable error bounds for network outputs. It thus enables the use of straightforward SGD training on error bounds in order to reduce errors under adversarial perturbation. \section{Experimental results} \label{sect:results} \subsection{Datasets} We tested Contour Engine on two popular computer vision benchmarks: GTSRB and SVHN. \paragraph{German Traffic Sign Recognition Benchmark.} This benchmark is a multi-class single-image classification challenge \cite{gtsrb}. The dataset has more than 50000 images of centered traffic signs belonging to 43 classes. The classes are unequally sampled --- some "popular" traffic signs have many more instances than rare ones. The images in the dataset were captured in the wild under slightly (sometimes wildly) different orientations, lighting conditions, image sizes (bounding rectangles from 18x18 pixels to 64x64 and larger) and amounts of motion blur. \begin{figure}[H] \centering \includegraphics[width=5cm]{gtsrb.png} \caption{GTSRB dataset} \label{fig:gtsrb} \end{figure} We applied the following post-processing: we resized all images to standard 32x32 resolution, adding padding when necessary, and standardized brightness (mean 0.5). In numerical experiments, affine distortions were used to augment the dataset. \paragraph{Street View House Numbers.} This dataset is a well-known 10-class digit recognition problem \cite{svhn}. It has 630420 training and test images belonging to 10 classes. The image size is 32x32 in all cases. \begin{figure}[H] \centering \includegraphics[width=5cm]{svhn.jpeg} \caption{SVHN dataset} \label{fig:svhn} \end{figure} We normalized images in the dataset by making white the dominant color --- images with a majority of black pixels were inverted. No augmentation was applied to the images. \subsection{Software} Our neural architecture is quite nonstandard, and the training algorithms are even more nonstandard. Many machine learning frameworks can perform inferences on models like ours (the framework has to be flexible enough to allow scattered operations on tensors; in particular, TensorFlow can do this). 
However, no present framework can \emph{train} such models. Thus, we had to write the training and inference code in C++ from scratch. This code --- an experimental machine learning framework with several examples --- can be downloaded from \url{https://www.alglib.net/strongnet/}.

\subsection{Network architecture}

In this work, we evaluated a multi-column architecture with a shared unsupervised feature detection layer and separate supervised classification columns (see Figure \ref{fig:resultsnetwork}). The $K$-th column is individually trained to separate class $K$ from the rest of the dataset.

\begin{figure}[htp]
\centering
\includegraphics[width=7cm]{figure-results-network.pdf}
\caption{Network structure}
\label{fig:resultsnetwork}
\end{figure}

The feature detection layer has two separate blocks: contour (color-agnostic) features and color-based ones. The contour filter bank has a capacity of 50 different filters. These filters have a size of 6x6, which allows the detection of medium-complexity shapes; that is, ones more complex than simple edges. Each of these filters produces two features --- one corresponding to the "positive" phase and one to the "negative" phase --- so the total channel count is 100. The color filter bank is much smaller and stores just 10 filters, each having a size of 4x4, which is adequate to detect uniformly colored patches.

In both cases (contour and color), we perform multiscale feature analysis, processing 32x32 (scale 0) and downsampled 16x16 (scale 1) versions of the image. The contour block requires 4.6 MFLOP to be computed, while the color block needs 0.4 MFLOP. Thus, the total number of floating point operations required to perform initial feature detection is \textbf{5.0 MFLOP}.

Classification columns are composed of our novel strong neurons grouped into two sparsely connected "strong layers" followed by a single output sigmoid neuron (linear summator + logistic function). Shortcut connections are present between all strong layers and outputs. In our experiments, columns with a width of just 200 strong neurons were powerful enough to separate GTSRB classes. Such columns needed roughly \textbf{0.007 MFLOP} (7000 FLOP).

The output of the $K$-th column is the probability of the image belonging to class $K$. Due to logistic model properties, this probability is usually well calibrated. However, it is important to remember that different columns are trained separately, so their outputs do not have to sum to one.

\subsection{Results: low-cost inference on GTSRB}

The GTSRB dataset has 43 classes, so our network has a shared feature detection layer and 43 class-specific sparse columns. This means that the inference cost of our model is \textbf{$5.0+43\times0.007=5.3$ MFLOP}. The test set error of our model on this dataset is \textbf{1.6\%}.

\begin{figure}[H]
\centering
\includegraphics[width=10cm]{figure-gtsrb-results.pdf}
\caption{GTSRB: accuracy vs inference cost}
\label{fig:gtsrbresults}
\end{figure}

The table above (Figure \ref{fig:gtsrbresults}) compares Contour Engine with Targeted Kernel Networks \cite{targetedkernelnets} and pruning \cite{yiming}. Targeted Kernel Networks (TSTN and STN rows) reduce computational complexity by dropping some of the inner convolutions using attentional modulation. They may be regarded as a type of spatial pruning. The work by Hu et al. \cite{yiming} involved channel-based pruning performed using a genetic algorithm. Contour Engine outperforms both approaches by an order of magnitude.
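The cost figures quoted above follow directly from the per-block budget. The small helper below simply restates that arithmetic (the constants are taken from the text; the function name is an assumption of the example):
\begin{verbatim}
SHARED_FRONTEND_MFLOP = 5.0     # unsupervised features: contour (4.6) + color (0.4)
PER_COLUMN_MFLOP = 0.007        # one 200-neuron sparse column with its output neuron

def inference_cost_mflop(n_classes, frontend=SHARED_FRONTEND_MFLOP):
    return frontend + n_classes * PER_COLUMN_MFLOP

print(inference_cost_mflop(43))          # GTSRB: 5.0 + 43 * 0.007 = 5.301 ~ 5.3 MFLOP
print(inference_cost_mflop(10))          # SVHN uses a different front-end (no color
                                         # block), so the paper reports 4.8 MFLOP
\end{verbatim}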
One more interesting point is that the $5.3$ MFLOP required by our model are mostly unsupervised. Only $0.3$ MFLOP ($0.007$ MFLOP per class) are performed in the supervised part of our network. Most of the time is spent on unsupervised preprocessing, which consumes about $95\%$ of the computational budget. This result suggests that the actual complexity of the contour-based classification is on the kiloflop rather than on the megaflop or gigaflop scale.
\subsection{Results: low-cost inference on SVHN}
The Street View House Numbers dataset has 10 classes, so our network uses a shared feature detection layer similar to the one employed on GTSRB with 10 class-specific sparse columns. We note here that in this task color does not carry any classification-related information (e.g., the green-vs-blue edge is important because it is an edge, not because it is green or blue), so we dropped the color part of the feature extraction layer. The inference cost for our model was \textbf{4.8 MFLOP}, and the test set error was \textbf{4.8\%}.
\begin{figure}[H] \centering \includegraphics[width=10cm]{figure-svhn-results.pdf} \caption{SVHN: accuracy vs inference cost} \label{fig:svhnresults} \end{figure}
For this dataset, we compare our network with the pruning by Yiming Hu et al. (again) and with Capsule Networks (\cite{capsnets}, \cite{targetedkernelnets}). Again, Contour Engine outperforms its competitors by an order of magnitude.
\subsection{Results: improved adversarial stability}
We tested the adversarial stability of the Contour Engine network trained on the SVHN dataset. We used a powerful PGD attack (iterated FGSM with 20 iterations and backtracking line search) with the perturbation $L_\infty$-norm bounded by 0.01, 0.02 and 0.03.
\begin{figure}[H] \centering \includegraphics[width=10cm]{figure-adversarial-results.pdf} \caption{SVHN: adversarial attack success rate} \label{fig:adversarialresults} \end{figure}
Figure \ref{fig:adversarialresults} compares the attack success rate for Contour Engine with reference values from three independent works (\cite{wongdefense}, \cite{atda}, \cite{iat}). It can be seen that an unprotected network can be successfully attacked in 83\% of cases with a perturbation as small as 0.01. Different kinds of adversarial protection (when used on traditional summator-based networks) significantly reduce the attack success rate. However, in all cases Contour Engine outperforms these results without any special counter-adversarial measures.
\subsection{Results: hardware requirements}
Our neural network has fairly low hardware requirements. We already mentioned its low floating point operation count, but another interesting property is that it is easy to switch from floating point operations to fixed point ones. Stability with respect to adversarial perturbations (maliciously targeted ones) implies stability with respect to perturbations arising from rounding (untargeted ones) --- thus one may expect graceful degradation with a progressive decrease in mantissa length. Different parts of the network have different hardware requirements with respect to working accuracy:
\paragraph{Feature detection layer.} This part of the network is just a single layer of convolutions with bounded coefficients, performed on $[0,1]$-bounded inputs, producing $[0,1]$-bounded outputs. Thus, it can be efficiently implemented with no drop in the inference quality using just 8-bit fixed point inputs and outputs and 8-bit unsigned integer multiplicator/summator units with 24-bit accumulators.
\paragraph{Strong layers.} This part of the network can also be implemented with 8-bit fixed-point units. With binary weights, this part of the network is multiplication free and summation free, so only 8-bit min and max units are needed. With non-binary weights, strong neurons may need multiplication by fixed-point numbers with short mantissas (e.g., $1\nicefrac{1}{2}$), which may be performed with just a few shifts/adds. \paragraph{Shallow classifier.} This part of network is just a single summator with bounded coefficients. Hence, it may work well with 8-bit fixed point inputs and outputs, 8-bit unsigned integer multiplicator units and 24-bit internal accumulators. In fact, our model's accuracy and stability results were obtained with 7-bit precision to store the activity matrices. We had to utilize this reduced precision due to the immense memory requirements of some parts of our training algorithm. However, this also allowed us to experimentally verify our claims with low hardware requirements. Experimenting with a 4-bit version of our network also looks promising. \section{Summary} \label{sect:conclusions} In this work, we have proposed a novel model of the artificial neuron --- the strong neuron --- which can separate classes with decision boundaries more complex than hyperplanes and which is resistant to adversarial perturbations of its inputs. We proved that our proposal is a fundamental and well-motivated change and that constituent elements of our strong neuron, $min$/$max$ units, are the only robust implementations of the AND/OR logic. We also proposed a novel training algorithm that can generate sparse networks with $O(1)$ connections per strong neuron, a result that far surpasses any present advances in neural network sparsification. State-of-the-art efficiency (inference cost) is achieved on GTSRB and SVHN benchmarks. We also achieved state-of-the-art results in terms of stability against adversarial attacks on SVHN --- without any kind of adversarial training --- which surpassed much more sophisticated defenses. Further, our network has low hardware requirements and gracefully degrades when numerical precision is decreased (we managed to achieve the results listed above using just 8-bit fixed point math for the unit activities). One more interesting result is related to our decision to separate unsupervised feature detection and supervised classification. We found that Contour Engine spends most of the inference time in the unsupervised preprocessor --- less than 10.000 FLOP per class is used by the supervised part of the network (one which is composed of strong neurons). This result suggests that contour recognition is much easier than was previously thought. Once initial unsupervised image preprocessing is done, centered contours can be recognized with just a few kiloflops. Finally, we want to highlight future directions of our work: \begin{itemize} \item \textbf{Convolutional training.} Our proof-of-concept network is nonconvolutional, which limits its applicability to well-centered image recognition problems, such as MNIST, GTSRB, and SVHN. The next step is to implement computationally feasible convolutional training. \item \textbf{Better adversarial stability.} We already achieved state-of-the-art stability with a simple linear output. However, we believe that further improvements are possible with a better shallow classifier layer (output layer). 
This layer is the only adversarially unstable part of the network --- we managed to reduce the problem of building a \emph{deep} and robust network to one of building a \emph{shallow } and robust one. One promising robust classifier model is a maxout\cite{maxout} neuron with an $L_1$ constraint on internal linear subunits. \item \textbf{Transfer learning and fast retraining.} The filters of the unsupervised feature detection layer look quite generic (edges, bars, blobs, arcs), which strongly suggests that this layer could be reused across multiple pattern detection problems. Thus, one obvious direction of research involves the transfer properties of the feature detection layer. Furthermore, we feel that the strong neurons generated by the sparse training algorithm may also allow some limited reuse. When combined with extremely cheap inference performed by strong neurons, this opens the door to pretrained "universal column," which contain strong neurons capable of detecting a wide range of "popular contours." \end{itemize} \bibliographystyle{alpha} \begin{thebibliography}{1} \bibitem[Coates11]{Coates11} Coates, A. and Lee, H. and Ng, A.Y. \newblock "An Analysis of Single-Layer Networksin Unsupervised Feature Learning". \newblock Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, PMLR 15:215-223, 2011. \bibitem[Fahlman90]{Fahlman90} Scott E. Fahlman, Christian Lebiere \newblock "The cascade-correlation learning architecture". \newblock Advances in neural information processing systems 2, June 1990, Pages 524–532 \bibitem[Hettinger17]{forwardthinking} Chris Hettinger, Tanner Christensen, Ben Ehlert, Jeffrey Humpherys, Tyler Jarvis, Sean Wade \newblock "Forward Thinking: Building and Training Neural Networks One Layer at a Time". \newblock arXiv:1706.02480 \bibitem[Kontschieder15]{deepneuraldf} Peter Kontschieder, Madalina Fiterau, Antonio Criminisi, Samuel Rota Bulo. \newblock "Deep Neural Decision Forests". \newblock Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence (IJCAI-16), 2016. \bibitem[Brendel19]{bagnet} Wieland Brendel, Matthias Bethge. \newblock "Approximating CNNs with Bag-of-local-Features models works surprisingly well on ImageNet". \newblock arXiv:1904.00760. \bibitem[Qian18]{l2nonexpansive} Haifeng Qian, Mark N. Wegman. \newblock "L2-Nonexpansive Neural Networks". \newblock arXiv:1802.07896. \bibitem[Wong17]{wongdefense} Eric Wong, J. Zico Kolter. \newblock "Provable defenses against adversarial examples via the convex outer adversarial polytope". \newblock arXiv:1711.00851. \bibitem[Goodfellow14]{advtrn} Ian J. Goodfellow, Jonathon Shlens, Christian Szegedy. \newblock "Explaining and Harnessing Adversarial Examples". \newblock arXiv:1412.6572. \bibitem[Stallkamp12]{gtsrb} J. Stallkamp, M. Schlipsing, J. Salmen, C. Igel. \newblock "Man vs. computer: Benchmarking machine learning algorithms for traffic sign recognition". \newblock Neural Networks Special Issue. \bibitem[Netzer11]{svhn} Yuval Netzer, Tao Wang, Adam Coates, Alessandro Bissacco, Bo Wu, Andrew Y. Ng. \newblock "Reading Digits in Natural Images with Unsupervised Feature Learning". \newblock NIPS Workshop on Deep Learning and Unsupervised Feature Learning 2011. \bibitem[Kashyap18]{targetedkernelnets} Kashyap Chitta. \newblock "Targeted Kernel Networks: Faster Convolutions with Attentive Regularization". \newblock Computer Vision – ECCV 2018 Workshops. ECCV 2018. Lecture Notes in Computer Science, vol 11132. Springer, Cham. 
\bibitem[Yiming18]{yiming} Yiming Hu, Siyang Sun, Jianquan Li, Xingang Wang, Qingyi Gu. \newblock "A novel channel pruning method for deep neural network compression". \newblock arXiv:1805.11394 \bibitem[Sabour17]{capsnets} Sara Sabour, Nicholas Frosst, Geoffrey E Hinton. \newblock "Dynamic Routing Between Capsules". \newblock arXiv:1710.09829. \bibitem[Song19]{atda} Chuanbiao Song, Kun He, Liwei Wang, John E. Hopcroft. \newblock "Improving the generalization of adversarial training with domain adaptation". \newblock arXiv:1810.00740. \bibitem[Lamb19]{iat} Alex Lamb, Vikas Verma, Juho Kannala, Yoshua Bengio. \newblock "Interpolated Adversarial Training: Achieving Robust Neural Networks without Sacrificing Too Much Accuracy". \newblock arXiv:1906.06784. \bibitem[Goodfellow19]{maxout} Ian J. Goodfellow, David Warde-Farley, Mehdi Mirza, Aaron Courville, Yoshua Bengio. \newblock "Maxout Networks". \newblock arXiv:1302.4389. \bibitem[Shang16]{crelu} Wenling Shang, Kihyuk Sohn, Diogo Almeida, Honglak Lee. \newblock "Understanding and Improving Convolutional Neural Networks via Concatenated Rectified Linear Units". \newblock arXiv:1603.05201. \bibitem[Blot16]{maxmin} Michael Blot, Matthieu Cord, Nicolas Thome. \newblock "Maxmin convolutional neural networks for image classification". \newblock arXiv:1610.07882. \end{thebibliography} \newpage \section{Appendix A: proofs of theorems 1 and 2} \label{sect:appendixa} \paragraph{Theorem 1: $L_\infty$-nonexpansive AND problem.} $\exists!{\enspace}f(x,y)=min(x,y)$ such that following holds: \begin{enumerate} \item[C1] $f(x,y)$ is defined for $x,y \in [0,1]$ \item[C2] $f(0,0)=f(0,1)=f(1,0)=0$ \item[C3] $f(1,1)=1$ \item[C4] $a{\leq}A,\ \ b{\leq}B \implies f(a,b){\leq}f(A,B)$ (monotonicity) \item[C5] $|f(a+{\Delta}a,b+{\Delta}b)-f(a,b)| \leq max(|{\Delta}a|,|{\Delta}b|)$ \end{enumerate} \paragraph{Proof.} We will prove Theorem 1 by demonstrating that conditions C1...C5 constrain $f(x,y)$ in such a way that the only possible solution is $f(x,y)=min(x,y)$. The monotonicity condition C4 combined with C2 means that \begin{equation} \label{eq:f0y} {\forall}\ y{\in}[0,1]\ \ \ f(0,y)=0 \end{equation} Condition C5, when combined with C2 and C3, means that ${\forall}y{\in}[0,1]\ \ f(y,y)=y$. Indeed, C5 combined with C2 means that $|f(y,y)-f(0,0)| \leq |y|\ \implies\ f(y,y){\leq}y$. Similarly, C5 combined with C3 means that $|f(y,y)-f(1,1)| \leq |1-y|\ \implies\ f(y,y){\geq}y$. As result, we have \begin{equation} \label{eq:fyy} {\forall}\ y{\in}[0,1]\ \ \ f(y,y)=y \end{equation} Similarly to the previous paragraph, condition C5 combined with \ref{eq:f0y} and \ref{eq:fyy} constrains function values between $f(0,y)$ and $f(y,y)$ to \begin{align*} {\forall}\ 0{\leq}x{\leq}y{\leq}1\ \ \ f(x,y)=x=min(x,y) \end{align*} Due to the symmetry of the problem, it is obvious that the following also holds: \begin{align*} {\forall}\ 0{\leq}y{\leq}x{\leq}1\ \ \ f(x,y)=y=min(x,y) \end{align*} So, finally, \begin{align*} {\forall}x,y\in[0,1]\ \ \ f(x,y)=min(x,y) \end{align*} which has been shown. 
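Beyond the formal argument, condition C5 can also be probed numerically: $min$ never expands an $L_\infty$ perturbation on a grid of test points, whereas a summator-based AND such as $ReLU(x+y-1)$ does. The snippet below is only an informal sanity check, not part of the proof; the grid size and the choice of the competing AND implementation are ours.
\begin{verbatim}
import numpy as np

def max_C5_violation(f, n=21):
    # Largest value of |f(A,B)-f(a,b)| - max(|A-a|,|B-b|) over a grid of
    # point pairs in [0,1]^2; a value <= 0 means condition C5 holds there.
    xs = np.linspace(0.0, 1.0, n)
    a, b, A, B = np.meshgrid(xs, xs, xs, xs, indexing="ij")
    lhs = np.abs(f(A, B) - f(a, b))
    rhs = np.maximum(np.abs(A - a), np.abs(B - b))
    return float((lhs - rhs).max())

print(max_C5_violation(np.minimum))                    # <= 0: min is nonexpansive
print(max_C5_violation(lambda x, y: np.maximum(x + y - 1, 0)))
# ~0.5: a summator-based AND expands small input perturbations
\end{verbatim}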
\paragraph{Theorem 2: $L_\infty$-nonexpansive OR problem.} $\exists!{\enspace}g(x,y)=max(x,y)$ such that following holds: \begin{enumerate} \item[C1] $g(x,y)$ is defined for $x,y \in [0,1]$ \item[C2] $g(0,0)=0$ \item[C3] $g(0,1)=g(1,0)=g(1,1)=1$ \item[C4] $a{\leq}A,\ \ b{\leq}B \implies g(a,b){\leq}g(A,B)$ (monotonicity) \item[C5] $|g(a+{\Delta}a,b+{\Delta}b)-g(a,b)| \leq max(|{\Delta}a|,|{\Delta}b|)$ \end{enumerate} \paragraph{Proof.} Similarly to the previous proof, we will prove Theorem 2 by demonstrating that conditions C1...C5 constrain $g(x,y)$ in such a way that the only possible solution is $g(x,y)=max(x,y)$. C5 combined with C2 and C3 constrains $g(x,y)$ along $x=y$: $g(0,0)=0 \implies g(y,y) \leq y$ and $g(1,1)=1 \implies g(y,y) \geq y$, so finally we have \begin{equation} \label{eq:gyy} \forall\ y\in[0,1]\ \ \ g(y,y)=y \end{equation} Similarly, for $g(0,y)$ from the nonexpansivity constraint C5 combined with boundary values $g(0,0)=0$ and $g(0,1)=1$, it immediately follows that \begin{equation} \label{eq:g0y} \forall\ y\in[0,1]\ \ \ g(0,y)=y \end{equation} and, due to monotonicity constraint C4, from \ref{eq:gyy} and \ref{eq:g0y} we get \begin{align*} \forall\ 0 \leq x \leq y \leq 1\ \ \ g(x,y)=y=max(x,y) \end{align*} Due to the obvious symmetry, it is easy to prove that \begin{align*} \forall\ x,y\in[0,1]\ \ \ g(x,y)=max(x,y) \end{align*} which has been shown. \section{Appendix B. The feature detection layer} \label{sect:appendixb} In this section we discuss a feature detection layer based on \cite{Coates11} with several proposed improvements. There are several interesting ideas we want to share here, so this section is quite long. Nevertheless, we deem this part of our work as less important than the results on strong neurons, so we moved it to the end of the article. Modern convolutional networks tend to have many layers with filters as small as 3x3. One well-known pattern is to have two layers with 3x3 convolutions followed by a max-pooling layer. Almost all architectures lack a clear distinction between feature extraction and subsequent geometric inference --- both tasks are performed using the same sequence of standard building blocks. Due to the quadratic dependence between the network width and weights count, preference is given to deep and narrow networks --- making the network 2x deeper and 2x narrower results in a 2x decrease in computing power. In contrast, our neural architecture has sparse layers with $O(1)$ connections per neuron. It thus inherently favors shallow and wide networks. Another difference from traditional architectures is that our strong neurons can perform logical inferences on low-level features, although they cannot \emph{produce} these features from raw pixel values. Thus, a separate feature extraction block is essential in order to "prime" Contour Engine. The purpose of our feature extraction layer is to describe an input image using a rich dictionary of visual words. The description includes features such as oriented edges, more complex shapes, colors and gradients, computed at multiple scales and orientations. The following subsections discuss our implementation of the feature extraction layer, starting from the very basic setup and progressively improving it. 
\subsection{The basic structure}
The basic implementation of the feature extraction unit is a single layer of 4x4 and/or 6x6 convolutions followed by normalization and sparsification (see \cite{Coates11}) layers: \begin{align*} y_{raw}[i,j,k] &= ReLU\left(CONV(W,x)\right) \\ y_{sparse}[i,j,k] &= ReLU\left(y_{raw}[i,j,k] - \lambda\underset{k}{MEAN}(y_{raw}[i,j,k])\right) \\ y_{nrm}[i,j,k] &= \frac{y_{sparse}[i,j,k]}{\epsilon+\max\limits_{i,j,k} y_{sparse}[i,j,k]} \end{align*} where $W$ is a $K{\times}3{\times}M{\times}M$ tensor (here $K$ is the output filter count, $M$ is the convolution size and $3$ stands for RGB input) and $\lambda$ is a tunable sparsification parameter. The typical number of filters within feature banks ranges from 8 (just edge detectors) to 100 (medium-complexity shapes). We experimented with different methods of generating feature banks and found that training them in a completely unsupervised manner (see \cite{Coates11}) tends to give good results with interesting generalization properties, which will be discussed later.
\subsection{Separating contour and color}
One improvement we propose is to separate contour-based and color-based features. We require the former to be color-agnostic (the feature detector output does not change under permutation of RGB channels) and the latter to be lightness-agnostic (the feature detector output does not change with the addition/subtraction of gray color). We have several reasons behind our proposal. First, it is well known that the human visual cortex (the best universal visual processor known so far) performs separate processing of contour and color signals in the first regions of the ventral stream, also known as the "what pathway." We want to duplicate it here because our work was partially inspired by unique properties of the human visual system. Second, having such orthogonality in our model accelerates training in later stages (creating sparse connectivity) because it greatly reduces the number of possible connections in the network. Finally, such separation makes our network more controllable --- we can easily measure the amount of information provided by the edges and color and easily introduce some invariants into the model (e.g., invariance with respect to various color and lightness corrections). Color-agnostic processing can be implemented by requiring that components of the tensor $W$ corresponding to different RGB channels have the same value. However, we prefer to explicitly replace the $K{\times}3{\times}M{\times}M$ weight tensor $W$ with the $K{\times}M{\times}M$ tensor $W_L$: \begin{math} y_{L,raw}[i,j,k] = ReLU\left(CONV(W_L,\frac{1}{3}\left(x_R+x_G+x_B\right))\right) \end{math} One more normalization we introduce is a requirement that the feature detector output be invariant with respect to lightness shift (addition/removal of the gray color). Mathematically, this condition means that we require tensor elements within each filter to sum to zero: \begin{math} {\forall}k:\quad \sum\limits_{i,j}W_L[k,i,j] = 0 \end{math} One possible way to enforce such requirements is to tweak the data fed to the "k-means over image patches" procedure proposed by Coates et al. Color-agnostic filters can be learned by replacing colors with monochrome values prior to running k-means. The second requirement --- the invariance with respect to lightness shift --- can be enforced by subtracting the mean lightness from image patches.
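A minimal sketch of this preprocessing is given below, assuming scikit-learn's KMeans; the patch whitening used by \cite{Coates11} is omitted for brevity, and all array shapes and names are illustrative.
\begin{verbatim}
import numpy as np
from sklearn.cluster import KMeans

def learn_color_agnostic_filters(patches_rgb, n_filters=50):
    # patches_rgb: (N, M, M, 3) patches sampled from training images.
    # Color-agnostic: collapse RGB into a single luma channel.
    luma = patches_rgb.mean(axis=3)
    # Lightness-shift invariance: make every patch zero-mean, so the
    # k-means centroids (our filters W_L) also sum to roughly zero.
    luma = luma - luma.mean(axis=(1, 2), keepdims=True)
    km = KMeans(n_clusters=n_filters, n_init=10)
    km.fit(luma.reshape(len(luma), -1))
    filters = km.cluster_centers_.reshape(n_filters,
                                          luma.shape[1], luma.shape[2])
    # numerical cleanup: enforce the exact zero-sum constraint
    return filters - filters.mean(axis=(1, 2), keepdims=True)
\end{verbatim}
The same recipe, with the per-pixel lightness subtracted instead, yields the color-based filters discussed next.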
Similarly, color-based lightness-agnostic processing can be implemented by requiring that components of the weight tensor $W$ corresponding to different RGB channels sum to zero (invariance with respect to lightness shift is implicitly enforced by this constraint): \begin{math} \forall i,j,k:\quad W_C[k,0,i,j]+W_C[k,1,i,j]+W_C[k,2,i,j] = 0 \end{math} As with color-agnostic filters, color-based ones can be learned by manipulating the data fed to the Coates procedure --- one can simply subtract the lightness value from each pixel. The following filters were learned by running this procedure on the CIFAR dataset:
\begin{figure}[H] \centering \includegraphics[width=10cm]{figure-filters-chromaluma.pdf} \caption{Chroma and luma filters} \label{fig:appbfilters} \end{figure}
\subsection{Downsampling (max-pooling) layer}
The max-pooling layer is well known for its ability to simultaneously reduce the dimensionality of the data and improve its linear separability (the latter is achieved due to the introduction of shift-invariance). We again refer to \cite{Coates11} for some interesting quantitative results. In this section, we focus on the max-pooling layer, which performs max-downsampling of the input tensor (pooling with a filter width equal to the stride). The question is, what downsampling factor is the best one? Numerical experiments showed that, for 4x4- and 6x6-sized features, good results could be achieved with 2x downsampling. This provides a good balance between generalization and loss of essential spatial information. While 4x downsampling loses too much information to be used alone, it can supplement 2x-downsampled activities if both are used together.
\subsection{Feature detection at multiple scales}
Although the initial formulation covers just small 4x4 or 6x6 image patches, one may reasonably want to have a multiscale description that includes small (e.g., $\sim$4x4 pixels), medium ($\sim$8x8) and large ($\sim$16x16) features. Traditional convolutional architectures do not explicitly form such multiscale representations. Since the beginning, the dominant approach has been to stack standard building blocks and allow SGD to do the rest. We, however, aim to develop an architecture that performs some standardized kinds of processing (feature extraction, spatial pooling, multiscale processing) in a standardized manner, with a limited number of learned, controllable nonlinearities.
\subsection{Introducing completeness}
Now, we have everything we need to prime Contour Engine --- shape/color separation, multiple downsampling levels and multiscale image processing. The key parts of our feature detection layer are present. However, we may add one more improvement --- completeness. It is preferable to have a feature detection layer that is complete under some particular set of transformations. For example, if feature $F_0$ detects some particularly oriented shape, the feature detection layer may also be required to have $F_1$, $F_2$ and $F_3$ that detect the same shape rotated by $90{\degree}$, $180{\degree}$ and $270{\degree}$, respectively. Another option is to require completeness with respect to permutations in color space --- one may require a color gradient to be detected for any combination of constituent colors (red-green, red-blue, green-blue, yellow-blue, violet-green and so on).
This requirement may be a bit too much for specialized computer vision systems like those that detect traffic lights --- red blobs against black backgrounds are important, but violet blobs against a green background are irrelevant for solving the problem. However, to design a general-purpose vision system that can be specialized for any task, having such a feature detection layer may be essential for success. \emph{What is usually achieved by training a "prototype network" on a large, diverse dataset (say, ImageNet) can also be achieved by introducing completeness in a network trained on a much smaller dataset}. In this work, however, we focus on another aspect of complete feature subsets: computational complexity. Some types of completeness allow us to achieve a constant 2x--6x performance boost, that is, to have subsets of two features (completeness with respect to lightness inversion) or six features (completeness with respect to color rotation) computed in roughly the same time as is usually needed to compute just one feature. Completeness with respect to lightness inversion means that color-agnostic features now come in two subsets --- corresponding to the "positive phase" of some filter and corresponding to the "negative phase": \begin{align*} y_{f}[i,j,k] &= CONV(W,x) \\ y_{raw}[i,j,k] &= CONCAT\left[ ReLU(+y_f), ReLU(-y_f) \right] \\ y_{sparse}[i,j,k] &= ReLU\left(y_{raw}[i,j,k] - \lambda\underset{k}{MEAN}(y_{raw}[i,j,k])\right) \\ y_{nrm}[i,j,k] &= \frac{y_{sparse}[i,j,k]}{\epsilon+\max\limits_{i,j,k} y_{sparse}[i,j,k]} \end{align*} This improvement allows us to achieve a constant 2x performance boost for the color-agnostic part of our feature detection layer. This means that we can either have a 2x wider layer (more features detected) with the same performance budget, or alternatively, we can have roughly the same level of quality with a 2x smaller running time. Similar, albeit more complex, changes can be made to introduce completeness with respect to rotations in color space. Capturing both positive and negative phases of ReLU units was proposed long before this work (e.g., \cite{crelu}, \cite{maxmin}). However, most previous authors failed to consider the fact that capturing positive/negative phases is just a special case of the more general movement toward having a complete feature detection layer. \end{document}
https://openreview.net/forum?id=S4kvQ7_XBxP
https://arxiv.org/abs/2006.09510
Please evaluate the strengths and weaknesses of the paper, including any suggestions for improvement or clarification.
Unsupervised Learning of Video Representations via Dense Trajectory Clustering
1. [Summary] In 2-3 sentences, describe the key ideas, experiments, and their significance.
This paper addresses the task of unsupervised learning of video representations for action recognition. Following the current trend in image representation learning, the authors first propose to adapt [46] and [49] for video instance recognition and local aggregation, respectively. Since results show that these methods do not capture motion, which is clearly important for action recognition, they propose to force a 3D ConvNet to learn embeddings from IDTs. Experimental results justify their framework.
2. [Strengths] What are the strengths of the paper? Clearly explain why these aspects of the paper are valuable.
The whole paper is well written and the motivations are very well stated. Although the authors obtained promising results by correctly adapting [46] and [49], they went further and analyzed errors.
3. [Weaknesses] What are the weaknesses of the paper? Clearly explain why these aspects of the paper are weak.
A strong hypothesis of the paper is that motion is important for recognizing actions in videos, and that current 3D ConvNet models cannot learn it, tending to learn only appearance. For this reason, the authors use IDTs. This hypothesis seems unfair since 3D ConvNets can be trained on optical flow, or even in a two-stream setup. Have the authors tried their Video IR and Video LA directly on optical flow, or with a two-stream model? (C3D, I3D, R(2+1), TSN, ...)
4. [Overall rating] Paper rating. 6
5. [Justification of rating] Please explain how the strengths and weaknesses aforementioned were weighed in for the rating.
Good research story and good experimental set-up, but an arguable assumption.
6. [Detailed comments] Additional comments regarding the paper (e.g. typos or other possible improvements you would like to see for the camera-ready version of the paper, if any.)
Do the authors plan to release the supplementary material they claim to have in the paper?
Rating: 6: Marginally above acceptance threshold
Confidence: 3: The reviewer is fairly confident that the evaluation is correct
\documentclass[runningheads]{llncs}
\usepackage{graphicx}
\usepackage{comment}
\usepackage{amsmath,amssymb}
% \usepackage{color}
\usepackage{microtype}
\usepackage{wrapfig}
\usepackage{pifont}
\usepackage{color}
\usepackage{booktabs}
\usepackage{multirow}
\usepackage{subfigure}
\usepackage{etoolbox}
\usepackage{epsfig}
\usepackage{subfiles}
\newcommand{\smallsec}[1]{\vspace{0.2em}\noindent\textbf{#1}}
\usepackage[width=122mm,left=12mm,paperwidth=146mm,height=193mm,top=12mm,paperheight=217mm]{geometry}
\usepackage[pagebackref=true,breaklinks=true,letterpaper=true,colorlinks,bookmarks=false]{hyperref}
\begin{document}
\pagestyle{headings}
\mainmatter
\title{Unsupervised Learning of Video Representations via Dense Trajectory Clustering} %
\titlerunning{Unsupervised Learning of Video Representations via IDT Clustering}
\author{Pavel Tokmakov\inst{1}\and Martial Hebert\inst{1}\and Cordelia Schmid\inst{2}}
\institute{Carnegie Mellon University \and Inria}
\authorrunning{P. Tokmakov, et al.}
\maketitle
\begin{abstract}
This paper addresses the task of unsupervised learning of representations for action recognition in videos. Previous works proposed to utilize future prediction or other domain-specific objectives to train a network, but achieved only limited success. In contrast, in the relevant field of image representation learning, simpler, discrimination-based methods have recently bridged the gap to fully-supervised performance. We first propose to adapt two top-performing objectives in this class --- instance recognition and local aggregation --- to the video domain. In particular, the latter approach iterates between clustering the videos in the feature space of a network and updating it to respect the clusters with a non-parametric classification loss. We observe promising performance, but qualitative analysis shows that the learned representations fail to capture motion patterns, grouping the videos based on appearance. To mitigate this issue, we turn to the heuristic-based IDT descriptors, which were manually designed to encode motion patterns in videos. We form the clusters in the IDT space, using these descriptors as an unsupervised prior in the iterative local aggregation algorithm. Our experiments demonstrate that this approach outperforms prior work on the UCF101 and HMDB51 action recognition benchmarks\footnote{\url{https://github.com/pvtokmakov/video_cluster}}. We also qualitatively analyze the learned representations and show that they successfully capture video dynamics.
\keywords{unsupervised representation learning, action recognition}
\end{abstract}
\section{Introduction}
Research on self-supervised learning of image representations has recently experienced a major breakthrough. Early approaches carefully designed objective functions to capture properties that the authors believed would result in learning rich representations~\cite{doersch2015unsupervised,noroozi2016unsupervised,gidaris2018unsupervised,zhang2016colorful}. For instance, Doersch et al.~\cite{doersch2015unsupervised} proposed to predict relative positions of two patches in an image, and Zhang et al.~\cite{zhang2016colorful} trained a network to colorize images. However, they have achieved only limited success. The methods that have brought the performance of self-supervised image representations close to those learned in a fully-supervised way rely on a different principle instead.
They use the standard cross-entropy loss and either treat each image as an individual class~\cite{dosovitskiy2014discriminative,wu2018unsupervised,oord2018representation}, or switch between clustering images in the feature space of the network and updating the model to classify them into clusters~\cite{caron2018deep,zhuang2019local}. The resulting representations effectively capture discriminative image cues without having to manually separate images into categories. Self-supervised feature learning for videos has so far mostly relied on manually designed objective functions. While some works adopted their objectives directly from the image-based methods, such as predicting video rotation~\cite{jing2018self} or the relative position of space-time patches~\cite{kim2019self}, others utilize video-specific cues, such as predicting feature representations of video patches in future frames~\cite{han2019video}. Very recently, Sun et al.~\cite{sun2019contrastive} have proposed a variant of the instance classification objective for videos. In this work we first investigate whether the recent classification-based objectives proposed for image representation learning can be applied to videos. We introduce a video variant of the non-parametric Instance Recognition approach of Wu et al.~\cite{wu2018unsupervised} (Video IR). It simply treats each video as its own class and trains a 3D ConvNet~\cite{tran2015learning,hara2018can} to discriminate between the videos. We observe that this naive approach is already competitive with prior work in the video domain. To further improve the results, we capitalize on the observation of Zhuang et al.~\cite{zhuang2019local} that embedding semantically similar instances close to each other in feature space is as important as being able to discriminate between any two of them. We adapt their Local Aggregation approach to videos (Video LA). As shown in the top part of Figure~\ref{fig:meth}, this method first encodes a video using a 3D ConvNet, and the resulting embeddings are clustered with K-means. A non-parametric clustering loss proposed in~\cite{zhuang2019local} is then used to update the network, and the algorithm is iterated in an Expectation-Maximization framework. This approach results in an improvement over Video IR, but the gap between the two objectives remains smaller than in the image domain. We identify the reasons behind this phenomenon by examining the video clusters discovered by the algorithm. Our analysis shows that they mainly capture appearance cues, such as scene category, and tend to ignore the temporal information, which is crucial for the downstream task of action recognition. For instance, as shown in the top right corner of Figure~\ref{fig:meth}, videos with a similar background but different activities are embedded closer than examples of the same action. This is not surprising, since appearance cues are both dominant in the data itself and better reflected in the 3D ConvNet architecture. To mitigate this issue, we turn to the heuristic-based video representations of the past. Improved Dense Trajectories (IDT)~\cite{wang2013action} were the state-of-the-art approach for action recognition in the pre-deep learning era, and remained competitive on some datasets until very recently. The idea behind IDT is to manually encode the cues in videos that help to discriminate between human actions.
To this end, individual pixels are first tracked with optical flow, and heuristics-based descriptors~\cite{dalal2005histograms,dalal2006human,wang2013dense} are aggregated along the trajectories to encode both appearance and motion cues. In this work, we propose to transfer the notion of similarity between videos encoded in IDTs to 3D ConvNets via non-parametric clustering. To this end, we first compute IDT descriptors for a collection of unlabeled videos. We then cluster these videos in the resulting feature space and use the non-parametric classification objective of~\cite{zhuang2019local} to train a 3D ConvNet to respect the discovered clusters (bottom part of Figure~\ref{fig:meth}). The network is first trained until convergence using the fixed IDT clusters, and then finetuned in the joint IDT and 3D ConvNet space with the iterative Video LA approach. The resulting representation outperforms the baselines described above by a significant margin. We also qualitatively analyze the clusters and find that they effectively capture motion information. Following prior work~\cite{han2019video,jing2018self,sun2019contrastive}, we use the large-scale Kinetics~\cite{carreira2017quo} dataset for self-supervised pretraining, ignoring the labels. The learned representations are evaluated by finetuning on the UCF101~\cite{soomro2012ucf101} and HMDB51~\cite{kuehne2011hmdb} action recognition benchmarks. To gain better insight into the quality of the representations, we additionally provide an evaluation in a few-shot regime, using the model as a fixed feature extractor.
\section{Related work}
\label{sec:rl}
In this section, we first briefly review previous work on image-based unsupervised representation learning. We then discuss various approaches to video modeling, and conclude by presenting relevant video representation learning methods. \textbf{Image representation} learning from unlabeled data is a well-explored topic. Due to space limitations, we will only review the most relevant approaches here. The earliest methods were built around auto-encoder architectures: one network is trained to compress an image into a vector in such a way that another network is able to reconstruct the original image from the encoding~\cite{hinton2006fast,lee2009convolutional,kingma2013auto,donahue2016adversarial,goodfellow2014generative}. In practice, however, the success of generative methods in discriminative representation learning has been limited. Until very recently, manually designing self-supervised objectives has been the dominant paradigm. For example, Doersch et al.~\cite{doersch2015unsupervised} and Noroozi and Favaro~\cite{noroozi2016unsupervised} predict relative positions of patches in an image, Zhang et al.~\cite{zhang2016colorful} learn to colorize images, and Gidaris et al.~\cite{gidaris2018unsupervised} learn to recognize image rotations. While these methods have shown some performance improvements compared to random network initialization, they remain significantly below a fully-supervised baseline. The most recent methods, instead of designing specialized objective functions, propose to use the standard cross-entropy loss and either treat every image as its own class~\cite{dosovitskiy2014discriminative,oord2018representation,wu2018unsupervised}, or switch between clustering the examples in the feature space of the network and updating the network with a classification loss to respect the clusters~\cite{caron2018deep,zhuang2019local}.
These methods exploit the structural similarity between semantically similar images to automatically learn a semantic image embedding. In this paper we adapt the methods of Wu et al.~\cite{wu2018unsupervised} and Zhuang et al.~\cite{zhuang2019local} to the video domain, but demonstrate that they do not perform as well due to the structural priors being less strong in videos. We then introduce an explicit prior in the form of IDT descriptors and show that this indeed improves performance. \textbf{Video modeling} has traditionally been approached with heuristics-based methods. Most notably, Dense Trajectories (DT)~\cite{wang2013dense} sample points in frames and track them with optical flow. Then appearance and motion descriptors are extracted along each track and encoded into a single vector. The discriminative ability of DT descriptors was later improved in~\cite{wang2013action} by suppressing camera motion with the help of a human detector, and removing trajectories that fall into background regions. The resulting representation focuses on relevant regions in videos (humans and objects in motion) and encodes both their appearance and motion patterns. More recently, the success of end-to-end trainable CNN representations has been extended to the video domain. Simonyan et al.~\cite{simonyan2014two} proposed to directly train 2D CNNs for action recognition, fusing several frames at the first layer of the network. Their approach, however, had a very limited capacity for modeling temporal information. This issue was later addressed in~\cite{tran2015learning} by extending the 2D convolution operation in time. The introduction of the large-scale Kinetics dataset for action recognition~\cite{carreira2017quo} was a major step forward for 3D CNNs. Pretrained on this dataset, they were finally able to outperform the traditional, heuristic-based representations. Several variants of 3D ConvNet architectures have been proposed since, to improve performance and efficiency~\cite{carreira2017quo,hara2018can,xie2017rethinking}. In this work, we demonstrate how the IDT descriptors can be used to improve unsupervised learning of 3D ConvNet representations. \textbf{Video representation} learning from unlabeled data is a less explored topic. This is largely because the community has only recently converged upon 3D ConvNets as the standard architecture. Early methods used recurrent networks or 2D CNNs, and relied on future prediction~\cite{srivastava2015unsupervised}, as well as on various manually designed objectives~\cite{mobahi2009deep,misra2016shuffle,lee2017unsupervised,gan2018geometry,fernando2017self}. In particular, several works utilized temporal consistency between consecutive frames as a learning signal~\cite{misra2016shuffle,lee2017unsupervised,mobahi2009deep}, whereas Gan et al.~\cite{gan2018geometry} used geometric cues, and Fernando et al.~\cite{fernando2017self} proposed the odd-one-out objective function. With 3D ConvNets, generative architectures~\cite{kim2019self,vondrick2016generating}, as well as some self-supervised objectives, have been explored~\cite{jing2018self,kim2019self,wang2019self}. For example, Jing et al.~\cite{jing2018self} train a model to predict video rotation, Kim et al.~\cite{kim2019self} use relative spatio-temporal patch location prediction as an objective, and Wang et al.~\cite{wang2019self} regress motion and appearance statistics. In another line of work, future frame colorization was explored as a self-supervision signal~\cite{vondrick2018tracking}.
Recently, Han et al.~\cite{han2019video} proposed to predict feature representations of video patches in future frames. Most similarly, Sun et al.~\cite{sun2019contrastive} use a variant of the instance discrimination loss. In this work, we demonstrate that simply adapting instance discrimination~\cite{wu2018unsupervised} and local aggregation~\cite{zhuang2019local} objectives from the image to the video domain already achieves competitive results, and augmenting local aggregation with IDT priors further improves the results, outperforming the state-of-the-art. \section{Method} \label{sec:meth} Our goal is to learn an embedding function $f_{\boldsymbol{\theta}}$ that maps videos $V = \{v_1, v_2, ..., v_N\}$ into compact descriptors $f_{\boldsymbol{\theta}}(v_i) = \boldsymbol{d}_i$ in such a way, that they can be discriminated based on human actions, using unlabeled videos. For instance, as shown in Figure~\ref{fig:meth}, we want the two videos of people to doing handstands to be close to each other in the embedding space, and well separated from the video of a person training a dog. Below, we first introduce the two objective functions used in our work - instance recognition~\cite{wu2018unsupervised} and local aggregation~\cite{zhuang2019local}, and then describe our approach of using IDT~\cite{wang2013action} descriptors as unsupervised priors in non-parametric clustering. \subsection{Video instance recognition} This objective is based on the intuition that the best way to learn a discriminative representation is to use a discriminative loss. And, in the absence of supervised class labels, treating each instance as a distinct class of its own is a natural surrogate. Using the standard softmax classification criterion, the probability of every video $v$ with the feature $\boldsymbol{d}$ belonging to its own class $i$ is expressed as: \begin{equation} P(i | \boldsymbol{d}) = \frac{\exp(\boldsymbol{w}_{i}^T \boldsymbol{d})}{\sum_{j=1}^N{\exp(\boldsymbol{w}_{j}^T \boldsymbol{d})}}, \end{equation} where $\boldsymbol{w}_j$ is the weight vector of the $j$'th classifier. In this case, however, every class contains only a single example, thus $\boldsymbol{w}_j$ can be directly replaced with $\boldsymbol{d}_j$. The authors of~\cite{wu2018unsupervised} then propose the following formulation of the class probability: \begin{equation} P(i | \boldsymbol{d}) = \frac{\exp(\boldsymbol{d}_{i}^T \boldsymbol{d} / \tau)}{\sum_{j=1}^N{\exp(\boldsymbol{d}_{j}^T \boldsymbol{d} / \tau)}}, \label{eq:instance_prob} \end{equation} where $\tau$ is a temperature parameter that controls the concentration level of the distribution, and helps convergence~\cite{wang2017normface,hinton2015distilling}. The final learning objective is the standard negative log likelihood over the training set. Recall that training is done in batches, thus a memory bank of encodings $D = \{\boldsymbol{d}_1, \boldsymbol{d}_2, ..., \boldsymbol{d}_N\}$ has to be maintained to compute Equation~\ref{eq:instance_prob}. \begin{figure} \begin{center} \makebox[0.9\textwidth]{\includegraphics[width=0.8\paperwidth]{figures/method2.png}} \caption{Our approach for unsupervised representation learning from video collections. Directly applying a non-parametric clustering objective results in a representation that groups videos based on appearance (top right corner). To mitigate this issue, we propose to first cluster the videos in the space of IDT descriptors (bottom right corner), which results in a grouping that better reflects video dynamics. 
We then apply the non-parametric clustering loss to transfer the properties of this embedding to a 3D ConvNet.} \label{fig:meth} \end{center} \vspace{-20px} \end{figure}
\subsection{Video local aggregation}
While being able to separate any two instances is a key property for an image or video embedding space, another, complementary and equally desirable property is minimizing the distance between semantically similar instances. To this end, Zhuang et al.~\cite{zhuang2019local} proposed to use clusters of instances instead of individual examples as class surrogates. We adapt their approach to the video domain, and briefly describe it below. Firstly, the video embedding vectors ${\boldsymbol{d}_1, \boldsymbol{d}_2, ..., \boldsymbol{d}_N}$ are grouped into $K$ clusters $G = \{G_1, G_2, ..., G_K\}$ using K-means. The embedding function $f_{\boldsymbol{\theta}}$ is then updated to respect the clusters, using the non-parametric clustering objective proposed in~\cite{zhuang2019local}, and the two steps are iterated in an EM framework. In particular, for every instance $v_i$ together with its embedding $\boldsymbol{d}_i$, two sets of neighbors are identified: close neighbors $\boldsymbol{C}_i$ (shown with a dashed circle in Figure~\ref{fig:meth}) and background neighbors $\boldsymbol{B}_i$. Intuitively, close neighbors are those examples that fall into the same cluster as $v_i$, and background neighbors are simply those that have a small distance to $\boldsymbol{d}_i$ in the feature space (they include both close neighbors and hard negative examples). Please see~\cite{zhuang2019local} for more details on how $\boldsymbol{C}_i$ and $\boldsymbol{B}_i$ are constructed. The objective is then to minimize the distance between $\boldsymbol{d}_i$ and its close neighbors (instances in the same cluster), while maximizing the distance to those background neighbors that are not in $\boldsymbol{C}_i$ (hard negatives). The authors formulate this objective in a probabilistic way as minimizing the negative log likelihood of $\boldsymbol{d}_i$ being recognized as a close neighbor, given that it is recognized as a background neighbor: \begin{equation} L(\boldsymbol{C}_i, \boldsymbol{B}_i | \boldsymbol{d}_i, \boldsymbol{\theta}) = -\log \frac{P(\boldsymbol{C}_i \cap \boldsymbol{B}_i | \boldsymbol{d}_i)}{P(\boldsymbol{B}_i | \boldsymbol{d}_i)}, \label{eq:localagg} \end{equation} where the probability of $\boldsymbol{d}$ being a member of a set $\boldsymbol{A}$ is defined as: \begin{equation} P(\boldsymbol{A}| \boldsymbol{d}) = \sum_{i \in \boldsymbol{A}} P(i | \boldsymbol{d}), \end{equation} and the definition of $P(i | \boldsymbol{d})$ is adapted from Equation~\ref{eq:instance_prob}. Despite the involved formulation, one can see that this objective does exactly what it is intended to do --- minimize the distance between examples inside a cluster and maximize it between examples belonging to different clusters, in a non-parametric way. Intuitively, the Local Aggregation objective relies on the structural similarity between semantically similar images, together with the deep image prior in CNN architectures~\cite{ulyanov2018deep}, to form meaningful clusters in the embedding space. In videos, however, both structural and architectural priors are less strong. Indeed, pixels that are close to each other in the spatio-temporal volume of a video are not always strongly correlated due to the presence of object and camera motion.
On the architecture side, 3D ConvNets are also worse at capturing spatio-temporal patterns, compared to CNNs at capturing spatial patterns. To mitigate this lack of implicit priors, we propose to introduce an explicit one in the form of IDT descriptors.
\subsection{IDT descriptors as priors for video representation learning}
While state-of-the-art architectures for action recognition~\cite{tran2015learning,carreira2017quo,hara2018can} simply extend 2D CNN filters into the temporal dimension, treating videos as spatio-temporal cuboids of pixels, classical approaches~\cite{wang2013dense,wang2013action} explicitly identified and encoded spatio-temporal interest points that are rich in motion patterns relevant to action classification. In our experiments, we use the original implementation of IDT~\cite{wang2013action} to compute video descriptors for unlabeled videos (shown in the lower part of Figure~\ref{fig:meth}). We supply the IDT extractor with human detections from the state-of-the-art Mask-RCNN~\cite{he2017mask} model trained on MS COCO~\cite{lin2014microsoft} for improved camera stabilization (see~\cite{wang2013action} for details). This method, however, produces thousands of descriptors $\boldsymbol{x} \in \mathcal{X}$ per video. To encode them into a compact vector, we follow prior work~\cite{wang2013action,wang2019hallucinating} and first apply PCA to reduce the dimensionality of each individual trajectory descriptor $\boldsymbol{x_i}$. We then utilize Fisher vector coding~\cite{perronnin2010improving}, which is based on a Gaussian Mixture Model (GMM) with $K$ components $G(w_k, \boldsymbol{\mu}_k, \boldsymbol{\sigma}_k)$, parameterized by mixing probability, mean, and diagonal standard deviation. The encoding for a trajectory descriptor $\boldsymbol{x}$ is then computed by stacking the derivatives of each component of the GMM with respect to mean and variance: \begin{equation} \phi^*_k(\boldsymbol{x}) = \frac{p(\boldsymbol{\mu}_k | \boldsymbol{x})}{\sqrt{w_k}}[\phi_k(\boldsymbol{x}), \frac{\phi_k^{'}(\boldsymbol{x})}{\sqrt{2}}], \end{equation} where the first- and second-order features $\phi_k, \phi_k^{'} \in \mathbb{R}^D$ are defined as: \begin{equation} \phi_k(\boldsymbol{x}) = \frac{(\boldsymbol{x} - \boldsymbol{\mu_k})}{\boldsymbol{\sigma}_k}, \phi_k^{'}(\boldsymbol{x}) = \phi_k(\boldsymbol{x})^{2} - 1, \end{equation} thus, the resulting Fisher vector encoding $\phi(\boldsymbol{x}) = [\phi^*_1(\boldsymbol{x}), \phi^*_2(\boldsymbol{x}), ..., \phi^*_K(\boldsymbol{x})]$ is of dimensionality $2KD$. To obtain the video-level descriptor $\boldsymbol{\psi}$, individual trajectory encodings are averaged, $\boldsymbol{\psi} = avg_{\boldsymbol{x} \in \mathcal{X}}\phi(\boldsymbol{x})$, and power-~\cite{koniusz2018deeper} and l2-normalization are applied. Finally, to further reduce dimensionality, count sketching~\cite{weinberger2009feature} is used: $p(\boldsymbol{\psi}) = \boldsymbol{P}\boldsymbol{\psi}$, where $\boldsymbol{P}$ is the sketch projection matrix (see~\cite{weinberger2009feature} for details). The resulting encoding $p(\boldsymbol{\psi})$ is a 2000-dimensional vector, providing a compact representation of a video, which captures discriminative motion and appearance information. Importantly, it is completely unsupervised. Both the PCA projection and the parameters of the Gaussian mixture model are estimated using a random sample of trajectory encodings, and the matrix $\boldsymbol{P}$ is selected at random as well.
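To make this encoding pipeline concrete, the sketch below implements a standard Fisher vector with a diagonal-covariance GMM (scikit-learn), followed by power and l2 normalization and a count-sketch projection. The exact variant we use follows~\cite{wang2019hallucinating}, so the code should be read as an illustration with our own names and shapes rather than a faithful reimplementation.
\begin{verbatim}
import numpy as np
from sklearn.mixture import GaussianMixture

# gmm = GaussianMixture(n_components=256, covariance_type="diag").fit(sample)
# where `sample` is a random subset of PCA-reduced IDT descriptors.

def fisher_vector(X, gmm):
    # X: (N, D) PCA-reduced IDT descriptors of one video.
    q = gmm.predict_proba(X)                                 # (N, K) posteriors
    mu, var, w = gmm.means_, gmm.covariances_, gmm.weights_
    diff = (X[:, None, :] - mu[None]) / np.sqrt(var)[None]   # (N, K, D)
    d_mu = (q[..., None] * diff).mean(axis=0) / np.sqrt(w)[:, None]
    d_var = (q[..., None] * (diff**2 - 1)).mean(axis=0) / np.sqrt(2 * w)[:, None]
    fv = np.concatenate([d_mu, d_var], axis=1).ravel()       # length 2*K*D
    fv = np.sign(fv) * np.sqrt(np.abs(fv))                   # power normalization
    return fv / (np.linalg.norm(fv) + 1e-12)                 # l2 normalization

def count_sketch(fv, h, s, out_dim=2000):
    # h: random output bucket per input dimension, s: random signs; both are
    # sampled once and reused for every video so encodings stay comparable.
    out = np.zeros(out_dim)
    np.add.at(out, h, s * fv)
    return out
\end{verbatim}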
To transfer the cues encoded in IDT descriptors to a 3D ConvNet, we first cluster the videos in the $p(\boldsymbol{\psi})$ space with K-means, to obtain the clusters $G$. We then use $G$ to compute the sets of neighborhoods $(\boldsymbol{C}_i, \boldsymbol{B}_i)$ for each video $v_i$ in an unlabeled collection (shown in the bottom right corner of Figure~\ref{fig:meth}), and apply the objective in Equation~\ref{eq:localagg} to train the network. This forces the learned representation to capture the motion patterns that dominate the IDT space (note that IDTs encode appearance cues as well in the form of HOG descriptors). Finally, we construct a joint space of IDT and 3D ConvNet representations by concatenating the vectors $\boldsymbol{d}$ and $p(\boldsymbol{\psi})$ for each video. We further finetune the network in this joint space for a few epochs. This step allows the model to capitalize on appearance cues encoded by the expressive 3D ConvNet architecture. We analyze the resulting model quantitatively and qualitatively, and find that it both outperforms the state of the art and is better at capturing motion information.
\section{Experiments}
\label{sec:exp}
\subsection{Datasets and evaluation}
We use the Kinetics~\cite{carreira2017quo} dataset for unsupervised representation learning and evaluate the learned models on UCF101~\cite{soomro2012ucf101} and HMDB51~\cite{kuehne2011hmdb} in a fully-supervised regime. Below, we describe each dataset in more detail. \textbf{Kinetics} is a large-scale action classification dataset collected by querying videos on YouTube. We use the training set of Kinetics-400, which contains 235,000 videos, for most of the experiments in the paper, but additionally report results using fewer as well as more videos in Section~\ref{sec:vids}. Note that we do not use any annotations provided in Kinetics. \textbf{UCF101} is a classic dataset for human action recognition, which consists of 13,320 videos, covering 101 action classes. It is much smaller than Kinetics, and 3D ConvNets fail to outperform heuristic-based methods on it without fully-supervised pretraining on larger datasets. Following prior work~\cite{jing2018self,han2019video}, we use UCF101 to evaluate the quality of representations learned on Kinetics in an unsupervised way via transfer learning. In addition to using the full training set of UCF101, we report few-shot learning results to gain more insight into the learned representations. We use the first split of the dataset for ablation analysis, and report results averaged over all splits when comparing to prior work. \textbf{HMDB51} is another benchmark for action recognition, which consists of 6,770 videos, collected from movies, and split into 51 categories. Due to the small size of the training set, it poses an even larger challenge for learning-based methods. As with UCF101, we report ablation results on the first split, and use the results averaged over all splits for comparison to prior work. Following standard protocol, we report classification accuracy as the main evaluation criterion on UCF101 and HMDB51. However, this makes direct comparison between different approaches difficult, due to the differences in network architectures. Thus, whenever possible, we additionally report the fraction of the fully-supervised performance for the same architecture.
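Before turning to implementation details, the sketch below summarizes, in code form, the clustering step of Section~\ref{sec:meth} that injects the IDT prior and the construction of the joint space used for finetuning. Function and variable names are ours, and the actual training uses the memory-bank machinery of~\cite{zhuang2019local}.
\begin{verbatim}
import numpy as np
from sklearn.cluster import KMeans

def idt_prior_clusters(idt_fv, n_clusters=6000):
    # Step 1: K-means over the 2000-d IDT Fisher-vector encodings p(psi).
    # The resulting labels define the close-neighbor sets C_i used by the
    # Local Aggregation loss, instead of clusters in the network's own space.
    # (In practice this is run once over the full unlabeled collection.)
    return KMeans(n_clusters=n_clusters, n_init=3).fit_predict(idt_fv)

def joint_space(net_emb, idt_fv):
    # Step 2 (finetuning stage): concatenate the l2-normalized 3D ConvNet
    # embeddings d with p(psi) and re-cluster in this joint space.
    d = net_emb / (np.linalg.norm(net_emb, axis=1, keepdims=True) + 1e-12)
    return np.concatenate([d, idt_fv], axis=1)
\end{verbatim}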
\subsection{Implementation details}
\label{sec:impl}
\subsubsection{Self-supervised objectives}
We study three self-supervised objective functions: Video Instance Recognition (Video IR), Video Local Aggregation (Video LA) and Video Local Aggregation with IDT prior. For Video IR we follow the setting of~\cite{wu2018unsupervised} and set $\tau$ in Equation~\ref{eq:instance_prob} to 0.07. We use 4096 negative samples for approximating the denominator of Equation~\ref{eq:instance_prob}. In addition to the parameters described above, Local Aggregation requires choosing the number of clusters $K$, as well as the number of runs of K-means that are combined for robustness. The authors of~\cite{zhuang2019local} do not provide clear guidelines on selecting these hyperparameters, so we choose to take the values used in their ImageNet experiments and decrease them proportionally to the size of Kinetics. As a result, we set $K$ to 6000 and the number of clusterings to 3. We validate the importance of this choice in Appendix~\ref{sec:obj}. For experiments with IDT priors, we use exactly the same hyper-parameters for the LA objective as described above. We use the original implementation of~\cite{wang2013action} to extract IDT descriptors. Human detections are computed with the ResNet101 variant of the Mask-RCNN~\cite{he2017mask} model pretrained on MS COCO~\cite{lin2014microsoft}. We evaluate the importance of human detections for the final performance of our approach in Appendix~\ref{sec:abl}. When computing Fisher vector encoding, we generally follow the setting of~\cite{wang2019hallucinating}. In particular, we set the feature importance to 90\% when computing PCA, and the number of components in the GMM to 256. When fitting the PCA and GMM models, we randomly choose 3500 videos from Kinetics and 500 IDT descriptors from each video, to get a representative sample. Note that extracting IDTs and encoding them into Fisher vectors does not require GPUs, and thus the code can be efficiently run in parallel on a CPU cluster. As a result, we were able to compute the descriptors for Kinetics in just 5 days.
\vspace{-15px}
\subsubsection{Network architecture and optimization}
Following most of the prior work, we use a 3D ResNet18 architecture~\cite{hara2018can} in all the experiments, but also report results with deeper variants in Appendix~\ref{sec:depth}. The embedding dimension for self-supervised objectives is set to 128, as in~\cite{zhuang2019local}. We use SGD with momentum to train the networks, and apply multi-scale, random spatio-temporal cropping for data augmentation, with exactly the same setting as in~\cite{hara2018can}. We also perform the standard mean subtraction. All the models are trained on 16-frame clips with a spatial resolution of $112 \times 112$, unless stated otherwise. During self-supervised learning we follow the setting of~\cite{zhuang2019local} and set the learning rate to 0.03, and momentum to 0.9, with a batch size of 256. All the models are trained for 200 epochs, and the learning rate is dropped by a factor of 10 at epochs 160 and 190. As in~\cite{zhuang2019local}, we initialize the LA models with 40 epochs of IR pretraining. When finetuning on UCF101 and HMDB51, we set the learning rate to 0.1 and momentum to 0.9, using a batch size of 128. We drop the learning rate by a factor of 10 when the validation performance stops improving. Following~\cite{jing2018self}, we freeze the first ResNet block when finetuning on UCF101, and the first two blocks on HMDB51 to avoid overfitting.
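As an illustration of how these hyperparameters enter the objective, the snippet below computes the loss of Equation~\ref{eq:localagg} for a single video, given its close neighbors $\boldsymbol{C}_i$ (from the IDT clusters) and background neighbors $\boldsymbol{B}_i$. It is a simplified sketch with our own variable names; the full implementation additionally maintains the memory bank of embeddings, as in~\cite{zhuang2019local}.
\begin{verbatim}
import numpy as np

def local_aggregation_loss(d_i, memory, close_idx, background_idx, tau=0.07):
    # d_i: l2-normalized embedding of video i; memory: (N, 128) bank of
    # l2-normalized embeddings d_j; close_idx: videos sharing i's IDT cluster;
    # background_idx: background neighbors B_i. The common denominator of
    # P(j|d) in Eq. 2 cancels in the ratio of Eq. 3, so raw exponentials
    # can be summed directly.
    sims = np.exp(memory @ d_i / tau)
    both = np.intersect1d(close_idx, background_idx)
    return -np.log(sims[both].sum() / sims[background_idx].sum() + 1e-12)
\end{verbatim}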
During inference, for every video we sample five clips at random, using the center crop. The final prediction is obtained by averaging softmax scores over the five clips. For few-shot experiments, we use the protocol of~\cite{chen2019closer} and freeze the entire network, only learning a linear classifier.
\subsection{Analysis of self-supervised objectives}
We begin by comparing the different variants of self-supervised objectives described in Section~\ref{sec:meth}. They are used to learn a representation on Kinetics-400 in a self-supervised way, and the resulting models are transferred to UCF101 and HMDB51. We additionally evaluate two baselines: Supervised, which is pretrained on Kinetics using ground-truth labels, and Scratch, which is initialized with random weights. The results are reported in Table~\ref{tab:anal}.
\begin{table}[bt]
\caption{Comparison between variants of the unsupervised learning objective using classification accuracy and fraction of fully supervised performance on the first split of UCF101 and HMDB51. All models use a 3D ResNet18 backbone, and take 16 frames with resolution of $112 \times 112$ as input. Video LA with IDT prior consistently outperforms other objectives, with improvements on HMDB51 being especially significant.}
\label{tab:anal}
\centering
{
\begin{tabular}{l|c@{\hspace{1em}}c@{\hspace{1em}}|c@{\hspace{1em}}c@{\hspace{1em}}}
Method & \multicolumn{2}{c|}{UCF101} & \multicolumn{2}{c}{HMDB51} \\\hline
& Accuracy & \% sup. & Accuracy & \% sup. \\ \hline
Scratch~\cite{hara2018can} & 42.4 & 50.2 & 17.1 & 30.3 \\\hline
Video IR & 70.0 & 82.9 & 39.9 & 70.7 \\
Video LA & 71.4 & 84.6 & 41.7 & 73.9 \\
Video LA + IDT prior & \textbf{72.8} & \textbf{86.3} & \textbf{44.0} & \textbf{78.0} \\ \hline
Supervised~\cite{hara2018can} & 84.4 & 100 & 56.4 & 100 \\ \hline
\end{tabular}
}
\vspace{-10px}
\end{table}
Firstly, we observe that supervised pretraining is indeed crucial for achieving top performance on both datasets, with the variant trained from scratch reaching only 50.2\% and 30.3\% of the accuracy of the fully supervised model on UCF101 and HMDB51 respectively. The gap is especially large on HMDB51, due to the small size of the dataset. Using the video variant of the Instance Recognition objective (Video IR in the table), however, results in a 27.6\% accuracy improvement on UCF101 and 22.8\% on HMDB51, reaching 82.9\% and 70.7\% of the supervised accuracy respectively. Notice that this simple method already outperforms some of the approaches proposed in prior works~\cite{jing2018self,han2019video,kim2019self}.
Next, we can see that the Local Aggregation objective (Video LA in the table) further improves the results, reaching 84.6\% and 73.9\% of the fully-supervised performance on UCF101 and HMDB51 respectively. This shows that, despite the higher dimensionality of the video data, this method is still able to discover meaningful clusters in an unsupervised way. However, the gap to the IR objective is smaller than in the image domain~\cite{zhuang2019local}. Finally, our full method, which uses IDT descriptors as an unsupervised prior when clustering the videos (Video LA + IDT prior in the table), is indeed able to further boost the performance, reaching 86.3\% and 78.0\% of fully supervised performance on the two datasets. The improvement over Video LA is especially significant on HMDB51. We explain this by the fact that categories in UCF101 are largely explainable by appearance, thus the benefits of better modeling the temporal information are limited on this dataset.
In contrast, on HMDB51 capturing scene dynamics is crucial for accurate classification.
\subsection{Few-shot evaluation}
When finetuning a model, even on a dataset of modest size like UCF101, the effect of self-supervised pretraining is confounded by the effectiveness of the adaptation strategy itself. Indeed, it has been shown recently that, on several tasks that were traditionally used to measure the effectiveness of image-based unsupervised learning approaches, fully supervised performance can be achieved with no pretraining at all, by simply better utilizing the existing data~\cite{he2019rethinking}. Thus, to gain more insight into our objectives, we propose to use pretrained models as feature extractors, and learn linear classifiers in a few-shot regime. The results on UCF101 are reported in Table~\ref{tab:fs}.
\begin{table}[bt]
\caption{Comparison between variants of the unsupervised learning objective on the first split of UCF101 in a few-shot regime, using classification accuracy. The networks are fully frozen, and a linear classifier is learned, gradually decreasing the amount of training data. The gap between unsupervised and supervised representations increases, but our full method (`Video LA + IDT') still outperforms other variants across the board.}
\label{tab:fs}
\centering
{
\begin{tabular}{l|c@{\hspace{1em}}c@{\hspace{1em}}c@{\hspace{1em}}c@{\hspace{1em}}c@{\hspace{1em}}}
Method & 1-shot & 5-shot & 10-shot & 20-shot & All \\ \hline
Scratch & 1.7 & 7.5 & 10.6 & 17.2 & 38.2 \\\hline
Video IR & 13.4 & 27.7 & 35.2 & 42.4 & 56.5 \\
Video LA & 15.6 & 30.6 & 36.4 & 44.2 & 58.6 \\
Video LA + IDT prior & \textbf{17.8} & \textbf{31.5} & \textbf{38.4} & \textbf{45.5} & \textbf{58.8} \\ \hline
Supervised & 46.4 & 62.0 & 67.7 & 73.3 & 81.8 \\ \hline
\end{tabular}
}
\vspace{-10px}
\end{table}
The most important observation here is that the gap between fully-supervised and unsupervised representations increases as the data becomes scarcer. This shows that, despite being useful in practice, unsupervised pretraining is still far from making large datasets obsolete. Among the objectives studied in our work, however, Video LA with IDT prior shows the strongest performance across the board, and is especially effective in the low-data regime.
\subsection{Qualitative analysis of the representations}
To gain further insight into the effect of our IDT prior on representation learning, we now visualize some of the clusters discovered by the vanilla LA objective and by the variant with the prior in Figures~\ref{fig:la} and~\ref{fig:fv} respectively. Firstly, we observe that, in the absence of external constraints, LA defaults to using appearance, and primarily scene information, to cluster the videos. For instance, the first cluster (top left corner) corresponds to swimming pools, the one on the top right seems to focus on grass, and the two clusters in the bottom row capture vehicles and backyards, irrespective of the actual scene dynamics. This is not surprising, since appearance cues are both more dominant in the data itself, and are better reflected by the 3D ConvNet architecture.
\begin{figure}
\vspace{-3px}
\begin{center}
\makebox[0.9\textwidth]{\includegraphics[width=0.8\paperwidth]{figures/la1.png}}
\caption{Visualization of the clusters discovered by the Video LA objective without IDT prior. This variant groups videos in the space of a 3D ConvNet. As a result, the clusters are primarily defined by appearance, grouping swimming pools, grass fields, vehicles, and backyards.
The activity happening in the videos does not seem to play a significant role.}
\label{fig:la}
\end{center}
\vspace{-25px}
\end{figure}
\begin{figure}
\begin{center}
\makebox[0.9\textwidth]{\includegraphics[width=0.8\paperwidth]{figures/fv1.png}}
\vspace{-5px}
\caption{Visualization of the clusters discovered by the variant of the Video LA objective that uses the IDT prior. In contrast to the examples above, the videos are mainly grouped by motion properties, such as forward-backward hand motion, person rotation, fast person motion, and the `riding' action.}
\label{fig:fv}
\end{center}
\vspace{-25px}
\end{figure}
In contrast, the model learned with the IDT prior is better at capturing motion cues. For example, the cluster in the top left corner of Figure~\ref{fig:fv} is characterized by forward-backward hand motion, such as that observed during cleaning or barbecuing. The cluster in the top right captures humans spinning or rotating. The bottom left cluster mostly contains videos with very fast actor motion, and the one in the bottom right closely corresponds to the action `riding'.
Importantly, neither set of clusters is perfectly aligned with the definition of actions in popular computer vision datasets. For instance, despite having a clear motion-based interpretation, the top left cluster in Figure~\ref{fig:fv} combines the Kinetics categories `cleaning window', `cleaning floor', and `barbecuing'. Indeed, the action vocabulary used in the literature is defined by a complex combination of actor motion and scene appearance, making automatic discovery of well-aligned clusters challenging, and partially explaining the remaining gap between clustering-based methods and fully-supervised pretraining.
\subsection{Learning long-term temporal dependencies}
\begin{table}[bt]
\caption{Evaluation of the effect of clip length on the Video LA objective with and without IDT prior on the first split of UCF101 and HMDB51 using classification accuracy. Scratch and Supervised baselines are also reported. All models use a 3D ResNet18 backbone, and take frames with resolution of $112 \times 112$ as input. Both self-supervised and fully-supervised variants benefit from longer sequences, but the model trained from scratch is not able to capitalize on more information.}
\label{tab:clip_len}
\centering
{
\begin{tabular}{l|c@{\hspace{1em}}c@{\hspace{1em}}c@{\hspace{1em}}|c@{\hspace{1em}}c@{\hspace{1em}}c@{\hspace{1em}}}
Method & \multicolumn{3}{c|}{UCF101} & \multicolumn{3}{c}{HMDB51} \\\hline
& 16-fr & 32-fr & 64-fr & 16-fr & 32-fr & 64-fr \\ \hline
Scratch & 42.4 & 44.9 & 45.3 & 17.1 & 18.0 & 17.4 \\\hline
Video LA & 71.4 & 75.0 & 79.4 & 41.7 & 43.1 & 48.9 \\
Video LA + IDT prior & \textbf{72.8} & \textbf{76.3} & \textbf{81.5} & \textbf{44.0} & \textbf{44.7} & \textbf{49.6} \\ \hline
Supervised & 84.4 & 87.0 & 91.2 & 56.4 & 63.1 & 67.5 \\ \hline
\end{tabular}
}
\end{table}
Next, we experiment with applying our Video LA objective with IDT prior over longer clips. Recall that this approach attempts to capture the notion of similarity between the videos encoded in the IDT descriptors, which are computed over the whole video. The model reported so far, however, only takes 16-frame clips as input, which makes the objective highly ambiguous. In Table~\ref{tab:clip_len} we evaluate networks trained using 32- and 64-frame clips instead, reporting results on UCF101 and HMDB51.
We observe that, as expected, the performance of our approach (`Video LA + IDT' in the table) increases with more temporal information, but the improvement is non-linear, and our model is indeed able to better capture long-term motion cues when trained using longer clips. Similar improvements are observed for the plain Video LA objective, but our approach still shows top performance. The supervised model is also able to capitalize on longer videos, but on UCF101 the improvements are smaller than those of our approach (6.8\% for the supervised model, compared to 8.7\% for ours). Interestingly, the model trained from scratch does not benefit from longer videos as much as the self-supervised or supervised variants. In particular, on HMDB51 its performance improves by about 1-2\% with 32 frames, but actually decreases with 64. We attribute this to the fact that using longer clips lowers the diversity of the training set, which is crucial for optimizing an untrained representation. These results further demonstrate the importance of model pretraining for video understanding.
\subsection{Effect of the number of videos}
\label{sec:vids}
So far, we have reported all the results using 235 000 videos in the training set of Kinetics-400~\cite{carreira2017quo}. We now train the model with our final objective (Video LA with IDT prior) using a varying number of videos to study the effect of the dataset size on the quality of the learned representations. In particular, we subsample the training set to 185 000 and 135 000 examples at random to see whether smaller datasets can be used for representation learning. We also add the videos from the larger Kinetics-600 dataset to see if our method scales to larger video collections. We use the 3D ResNet18 architecture with 16-frame clips and an input resolution of $112 \times 112$ in all experiments, and report results on the first split of UCF101 and HMDB51 in Figure~\ref{fig:data}.
\begin{figure}
\vspace{-15px}
\begin{center}
\makebox[0.9\textwidth]{\includegraphics[width=0.8\paperwidth]{figures/data1.png}}
\caption{Varying the number of Kinetics videos when training a 3D ConvNet with the `Video LA with IDT prior' objective. Using more data for unsupervised pretraining results in better representations, as evident from transfer learning results on the first split of UCF101 and HMDB51 (reported using classification accuracy).}
\label{fig:data}
\end{center}
\vspace{-20px}
\end{figure}
Firstly, we observe that useful representations can be learned with as few as 135 000 videos. However, using more data results in improved performance on both datasets. On UCF101 the improvements are mostly linear, but accuracy drops somewhat for the largest training set (370 000 videos). We attribute this to the randomness in training and hypothesize that further improvements can be achieved with more data. On HMDB51 accuracy seems to plateau after 235 000 videos, but improves with 370 000. We will use the model trained on the largest available dataset for comparison to the state of the art in the next section.
\subsection{Comparison to the state-of-the-art}
Finally, we compare our approach (Video LA with IDT prior) to the state-of-the-art unsupervised video representations in Table~\ref{tab:sot}. As noted in Section~\ref{sec:impl}, to fairly compare results achieved by methods with different network architectures, we use the fraction of fully supervised performance as an additional metric, whenever this information is available.
To make the table size manageable, we only report approaches that use 3D ConvNets pretrained on Kinetics. These, however, cover all the top performing methods in the literature.
\begin{table}[bt]
\caption{Comparison to the state-of-the-art using accuracy and fraction of the fully-supervised performance on UCF101 and HMDB51, averaged over 3 splits. `Ours': Video LA with IDT prior. DPC uses a non-standard version of 3D ResNet, and does not report fully-supervised performance for it. Our method shows top accuracy among the models using the same network architecture. When normalized for the architecture differences, it outperforms all the approaches.}
\label{tab:sot}
\centering
{
\begin{tabular}{l|c|c|c|c@{\hspace{0.5em}}c@{\hspace{0.5em}}|c@{\hspace{0.5em}}c@{\hspace{0.5em}}}
Method & Network & Frame size & \#Frames & \multicolumn{2}{c|}{UCF101} & \multicolumn{2}{c}{HMDB51} \\\hline
\multicolumn{4}{c|}{} & Acc. & \% sup. & Acc. & \% sup. \\\hline
PMAS~\cite{wang2019self} & C3D & $112 \times 112$ & 16 & 61.2 & 74.3 & 33.4 & - \\ \hline
3D-Puzzle~\cite{kim2019self} & 3D ResNet18 & $224 \times 224$ & 16 & 65.8 & 78.0 & 33.7 & 59.8 \\
DPC~\cite{han2019video} & 3D ResNet18 & $112 \times 112$ & 40 & 68.2 & - & 34.5 & - \\
Ours & 3D ResNet18 & $112 \times 112$ & 16 & 73.0 & 86.5 & 41.6 & 73.8 \\ \hline
3D-RotNet~\cite{jing2018self} & 3D ResNet18 & $112 \times 112$ & 64 & 66.0 & 72.1 & 37.1 & 55.5 \\
Ours & 3D ResNet18 & $112 \times 112$ & 64 & \textbf{83.0} & \textbf{90.7} & \textbf{50.4} & \textbf{75.6} \\ \hline
DPC~\cite{han2019video} & 3D ResNet34 & $224 \times 224$ & 40 & 75.7 & - & 35.7 & - \\ \hline
CBT~\cite{sun2019contrastive} & S3D & $112 \times 112$ & 16 & 79.5 & 82.1 & 44.6 & 58.8 \\ \hline
IDT~\cite{wang2013action} & - & Full & All & 85.9 & - & 57.2 & -
\end{tabular}
}
\vspace{-10px}
\end{table}
Firstly, we observe that our principled approach is indeed a lot more effective than the manually designed objectives used in PMAS~\cite{wang2019self} or 3D-Puzzle~\cite{kim2019self}, confirming the effectiveness of clustering-based training. The improvements are especially large on HMDB, which, as we have shown previously, can be attributed to the IDT prior helping to better model the temporal information. Our approach also outperforms DPC~\cite{han2019video} when the network depth is the same for both methods, even though DPC uses much longer sequences (40 frames with a stride of 2, so the effective length is 120). Notably, on HMDB our approach even outperforms a variant of DPC with a deeper network and a bigger frame size by a large margin. When trained with longer temporal sequences, our method also outperforms the deeper variant of DPC on UCF by 7.3\%. On HMDB we are 14.7\% ahead.
The very recent approach of Sun et al.~\cite{sun2019contrastive} (`CBT' in the table) reports high accuracy on both datasets. However, we show that this is due to the authors of~\cite{sun2019contrastive} using a much deeper network than other methods in the literature. In terms of the fraction of fully-supervised performance, the 16-frame variant of our method outperforms CBT by 4.4\% on UCF and by 15.0\% on HMDB. Moreover, the 64-frame variant also outperforms CBT in raw accuracy on both datasets. Finally, we report the performance of Fisher vector encoded IDT descriptors (`IDT' in the table; the numbers are taken from~\cite{simonyan2014two}). Please note that these descriptors are computed on the full length of the video, using the original resolution.
Despite this, our 64-frame model comes close to the IDT performance on both datasets. Training a deeper variant of this model with a larger input resolution could close the remaining gap.
\section{Conclusions}
\label{sec:concl}
This paper introduced a novel approach for unsupervised video representation learning. Our method transfers the heuristic-based IDT descriptors, which are effective at capturing motion information, to 3D ConvNets via non-parametric clustering, using an unlabeled collection of videos. We quantitatively evaluated the learned representations on the UCF101 and HMDB51 action recognition benchmarks, and demonstrated that they outperform prior work. We also qualitatively analyzed the discovered video clusters, showing that they successfully capture video dynamics, in addition to appearance. This analysis highlighted that the clusters do not perfectly match the human-defined action classes, partially explaining the remaining gap to fully-supervised performance.
{\footnotesize \smallsec{Acknowledgements:} We thank Piotr Koniusz and Lei Wang for sharing their implementation of Fisher vector encoding. This work was supported in part by the Inria associate team GAYA, and by the Intelligence Advanced Research Projects Activity (IARPA) via Department of Interior/Interior Business Center (DOI/IBC) contract number D17PC00345. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon. Disclaimer: The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of IARPA, DOI/IBC, or the U.S. Government.}
\clearpage
\bibliographystyle{splncs04}
\bibliography{egbib}
\clearpage
\appendix
\begin{center}\Large\bfseries Appendix\end{center}
\subfile{supplementary.tex}
\end{document}
https://openreview.net/forum?id=yc54rY6_tX6
https://arxiv.org/abs/2006.15731
Please evaluate the strengths and weaknesses of our paper, specifically focusing on the hypothesis that motion is important for action recognition and whether or not we have explored alternative methods such as using optical flow or a two-stream model.
Unsupervised Learning of Video Representations via Dense Trajectory Clustering
#### 1. [Summary] In 2-3 sentences, describe the key ideas, experiments, and their significance.
The paper proposes a method which uses IDT descriptors and a 3D ConvNet to obtain action clusters and learn unsupervised video representations.
#### 2. [Strengths] What are the strengths of the paper? Clearly explain why these aspects of the paper are valuable.
- Using IDT as prior knowledge
- Effective motion capturing
- Performance
- Extensive related works and ablation study
#### 3. [Weaknesses] What are the weaknesses of the paper? Clearly explain why these aspects of the paper are weak.
- It requires training with another big dataset.
- Do the authors have any evidence of the performance without the usage of any other dataset?
#### 4. [Overall rating] Paper rating
9
#### 5. [Justification of rating] Please explain how the strengths and weaknesses aforementioned were weighed in for the rating.
The paper has nice analyses and the proposed method outperforms other methods by using IDT descriptors and a 3D ConvNet.
#### 6. [Detailed comments] Additional comments regarding the paper (e.g. typos or other possible improvements you would like to see for the camera-ready version of the paper, if any.)
- L.318: Can you elaborate on the fine-tuning stage?
- Can you please clarify for Table 3 whether you train your network with Kinetics-400 or 600? Most of the methods (DPC, 3D-Puzzle, PMAS) in the table use Kinetics-400 for self-supervised training.
- What are the limitations?
- Do you have any memory usage and time analyses?
Typos:
- L.406: first
- L.507: Kinetics
Rating: 9: Top 15% of accepted papers, strong accept
Confidence: 5: The reviewer is absolutely certain that the evaluation is correct
\documentclass[runningheads]{llncs}
\usepackage{graphicx}
\usepackage{comment}
\usepackage{amsmath,amssymb}
% \usepackage{color}
\usepackage{microtype}
\usepackage{wrapfig}
\usepackage{pifont}
\usepackage{color}
\usepackage{booktabs}
\usepackage{multirow}
\usepackage{subfigure}
\usepackage{etoolbox}
\usepackage{epsfig}
\usepackage{subfiles}
\newcommand{\smallsec}[1]{\vspace{0.2em}\noindent\textbf{#1}}
\usepackage[width=122mm,left=12mm,paperwidth=146mm,height=193mm,top=12mm,paperheight=217mm]{geometry}
\usepackage[pagebackref=true,breaklinks=true,letterpaper=true,colorlinks,bookmarks=false]{hyperref}
\begin{document}
\pagestyle{headings}
\mainmatter
\title{Unsupervised Learning of Video Representations via Dense Trajectory Clustering}
% \titlerunning{Unsupervised Learning of Video Representations via IDT Clustering}
\author{Pavel Tokmakov\inst{1}\and Martial Hebert\inst{1}\and Cordelia Schmid\inst{2}}
\institute{Carnegie Mellon University \and Inria}
\authorrunning{P. Tokmakov, et al.}
\maketitle
\begin{abstract}
This paper addresses the task of unsupervised learning of representations for action recognition in videos. Previous works proposed to utilize future prediction, or other domain-specific objectives, to train a network, but achieved only limited success. In contrast, in the relevant field of image representation learning, simpler, discrimination-based methods have recently bridged the gap to fully-supervised performance. We first propose to adapt two top performing objectives in this class, instance recognition and local aggregation, to the video domain. In particular, the latter approach iterates between clustering the videos in the feature space of a network and updating it to respect the clusters with a non-parametric classification loss. We observe promising performance, but qualitative analysis shows that the learned representations fail to capture motion patterns, grouping the videos based on appearance. To mitigate this issue, we turn to the heuristic-based IDT descriptors, which were manually designed to encode motion patterns in videos. We form the clusters in the IDT space, using these descriptors as an unsupervised prior in the iterative local aggregation algorithm. Our experiments demonstrate that this approach outperforms prior work on the UCF101 and HMDB51 action recognition benchmarks\footnote{\url{https://github.com/pvtokmakov/video_cluster}}. We also qualitatively analyze the learned representations and show that they successfully capture video dynamics.
\keywords{unsupervised representation learning, action recognition}
\end{abstract}
\section{Introduction}
The research on self-supervised learning of image representations has recently experienced a major breakthrough. Early approaches carefully designed objective functions to capture properties that the authors believed would result in learning rich representations~\cite{doersch2015unsupervised,noroozi2016unsupervised,gidaris2018unsupervised,zhang2016colorful}. For instance, Doersch et al.~\cite{doersch2015unsupervised} proposed to predict relative positions of two patches in an image, and Zhang et al.~\cite{zhang2016colorful} trained a network to colorize images. However, they have achieved only limited success. The methods that have brought the performance of self-supervised image representations close to those learned in a fully-supervised way rely on a different principle instead.
They use the standard cross-entropy loss and either treat each image as an individual class~\cite{dosovitskiy2014discriminative,wu2018unsupervised,oord2018representation}, or switch between clustering images in the feature space of the network and updating the model to classify them into clusters~\cite{caron2018deep,zhuang2019local}. The resulting representations effectively capture discriminative image cues without having to manually separate images into categories.
Self-supervised feature learning for videos has so far mostly relied on manually designed objective functions. While some works adopted their objectives directly from the image-based methods, such as predicting video rotation~\cite{jing2018self}, or relative position of space-time patches~\cite{kim2019self}, others utilize video-specific cues, such as predicting feature representations of video patches in future frames~\cite{han2019video}. Very recently, Sun et al.~\cite{sun2019contrastive} have proposed a variant of the instance classification objective for videos.
In this work, we first investigate whether the recent, classification-based objectives proposed for image representation learning can be applied to videos. We introduce a video variant of the non-parametric Instance Recognition approach of Wu et al.~\cite{wu2018unsupervised} (Video IR). It simply treats each video as its own class and trains a 3D ConvNet~\cite{tran2015learning,hara2018can} to discriminate between the videos. We observe that this naive approach is already competitive with prior work in the video domain. To further improve the results, we capitalize on the observation of Zhuang et al.~\cite{zhuang2019local} that embedding semantically similar instances close to each other in feature space is as important as being able to discriminate between any two of them. We adapt their Local Aggregation approach to videos (Video LA). As shown in the top part of Figure~\ref{fig:meth}, this method first encodes a video using a 3D ConvNet, and the resulting embeddings are clustered with K-means. A non-parametric clustering loss proposed in~\cite{zhuang2019local} is then used to update the network, and the algorithm is iterated in an Expectation-Maximization framework. This approach results in an improvement over Video IR, but the gap between the two objectives remains smaller than in the image domain.
We identify the reasons behind this phenomenon by examining the video clusters discovered by the algorithm. Our analysis shows that they mainly capture appearance cues, such as scene category, and tend to ignore the temporal information, which is crucial for the downstream task of action recognition. For instance, as shown in the top right corner of Figure~\ref{fig:meth}, videos with similar backgrounds but different activities are embedded closer than examples of the same action. This is not surprising, since appearance cues are both dominant in the data itself and better reflected in the 3D ConvNet architecture.
To mitigate this issue, we turn to the heuristic-based video representations of the past. Improved Dense Trajectories (IDT)~\cite{wang2013action} were the state-of-the-art approach for action recognition in the pre-deep-learning era, and remained competitive on some datasets until very recently. The idea behind IDT is to manually encode the cues in videos that help to discriminate between human actions.
To this end, individual pixels are first tracked with optical flow, and heuristics-based descriptors~\cite{dalal2005histograms,dalal2006human,wang2013dense} are aggregated along the trajectories to encode both appearance and motion cues.
In this work, we propose to transfer the notion of similarity between videos encoded in IDTs to 3D ConvNets via non-parametric clustering. To this end, we first compute IDT descriptors for a collection of unlabeled videos. We then cluster these videos in the resulting feature space and use the non-parametric classification objective of~\cite{zhuang2019local} to train a 3D ConvNet to respect the discovered clusters (bottom part of Figure~\ref{fig:meth}). The network is first trained until convergence using the fixed IDT clusters, and then finetuned in the joint IDT and 3D ConvNet space with the iterative Video LA approach. The resulting representation outperforms the baselines described above by a significant margin. We also qualitatively analyze the clusters and find that they effectively capture motion information.
Following prior work~\cite{han2019video,jing2018self,sun2019contrastive}, we use the large-scale Kinetics~\cite{carreira2017quo} dataset for self-supervised pretraining, ignoring the labels. The learned representations are evaluated by finetuning on the UCF101~\cite{soomro2012ucf101} and HMDB51~\cite{kuehne2011hmdb} action recognition benchmarks. To gain a better insight into the quality of the representations, we additionally provide an evaluation in a few-shot regime, using the model as a fixed feature extractor.
\section{Related work}
\label{sec:rl}
In this section, we first briefly review previous work on image-based unsupervised representation learning. We then discuss various approaches to video modeling, and conclude by presenting relevant video representation learning methods.
\textbf{Image representation} learning from unlabeled data is a well explored topic. Due to space limitations, we will only review the most relevant approaches here. The earliest methods were built around auto-encoder architectures: one network is trained to compress an image into a vector in such a way that another network is able to reconstruct the original image from the encoding~\cite{hinton2006fast,lee2009convolutional,kingma2013auto,donahue2016adversarial,goodfellow2014generative}. In practice, however, the success of generative methods in discriminative representation learning has been limited.
Until very recently, manually designing self-supervised objectives has been the dominant paradigm. For example, Doersch et al.~\cite{doersch2015unsupervised} and Noroozi and Favaro~\cite{noroozi2016unsupervised} predict relative positions of patches in an image, Zhang et al.~\cite{zhang2016colorful} learn to colorize images, and Gidaris et al.~\cite{gidaris2018unsupervised} learn to recognize image rotations. While these methods have shown some performance improvements compared to random network initialization, they remain significantly below a fully-supervised baseline. The most recent methods, instead of designing specialized objective functions, propose to use the standard cross-entropy loss and either treat every image as its own class~\cite{dosovitskiy2014discriminative,oord2018representation,wu2018unsupervised}, or switch between clustering the examples in the feature space of the network and updating the network with a classification loss to respect the clusters~\cite{caron2018deep,zhuang2019local}.
These methods exploit the structural similarity between semantically similar images to automatically learn a semantic image embedding. In this paper we adapt the methods of Wu et al.~\cite{wu2018unsupervised} and Zhuang et al.~\cite{zhuang2019local} to the video domain, but demonstrate that they do not perform as well, due to the structural priors being less strong in videos. We then introduce an explicit prior in the form of IDT descriptors and show that this indeed improves performance.
\textbf{Video modeling} has traditionally been approached with heuristics-based methods. Most notably, Dense Trajectories (DT)~\cite{wang2013dense} sample points in frames and track them with optical flow. Then appearance and motion descriptors are extracted along each track and encoded into a single vector. The discriminative ability of DT descriptors was later improved in~\cite{wang2013action} by suppressing camera motion with the help of a human detector, and removing trajectories that fall into background regions. The resulting representation focuses on relevant regions in videos (humans and objects in motion) and encodes both their appearance and motion patterns.
More recently, the success of end-to-end trainable CNN representations has been extended to the video domain. Simonyan et al.~\cite{simonyan2014two} proposed to directly train 2D CNNs for action recognition, fusing several frames at the first layer of the network. Their approach, however, had a very limited capacity for modeling temporal information. This issue was later addressed in~\cite{tran2015learning} by extending the 2D convolution operation in time. The introduction of the large-scale Kinetics dataset for action recognition~\cite{carreira2017quo} was a major step forward for 3D CNNs. Pretrained on this dataset, they were finally able to outperform the traditional, heuristic-based representations. Several variants of 3D ConvNet architectures have been proposed since, to improve performance and efficiency~\cite{carreira2017quo,hara2018can,xie2017rethinking}. In this work, we demonstrate how the IDT descriptors can be used to improve unsupervised learning of 3D ConvNet representations.
\textbf{Video representation} learning from unlabeled data is a less explored topic. This is largely because the community has only recently converged upon 3D ConvNets as the standard architecture. Early methods used recurrent networks, or 2D CNNs, and relied on future prediction~\cite{srivastava2015unsupervised}, as well as various manually designed objectives~\cite{mobahi2009deep,misra2016shuffle,lee2017unsupervised,gan2018geometry,fernando2017self}. In particular, several works utilized temporal consistency between consecutive frames as a learning signal~\cite{misra2016shuffle,lee2017unsupervised,mobahi2009deep}, whereas Gan et al.~\cite{gan2018geometry} used geometric cues, and Fernando et al.~\cite{fernando2017self} proposed the odd-one-out objective function. With 3D ConvNets, generative architectures~\cite{kim2019self,vondrick2016generating}, as well as some self-supervised objectives, have been explored~\cite{jing2018self,kim2019self,wang2019self}. For example, Jing et al.~\cite{jing2018self} train a model to predict video rotation, Kim et al.~\cite{kim2019self} use relative spatio-temporal patch location prediction as an objective, and Wang et al.~\cite{wang2019self} regress motion and appearance statistics. In another line of work, future frame colorization was explored as a self-supervision signal~\cite{vondrick2018tracking}.
Recently, Han et al.~\cite{han2019video} proposed to predict feature representations of video patches in future frames. Most similar to our work, Sun et al.~\cite{sun2019contrastive} use a variant of the instance discrimination loss. In this work, we demonstrate that simply adapting the instance discrimination~\cite{wu2018unsupervised} and local aggregation~\cite{zhuang2019local} objectives from the image to the video domain already achieves competitive results, and that augmenting local aggregation with IDT priors further improves the results, outperforming the state-of-the-art.
\section{Method}
\label{sec:meth}
Our goal is to learn an embedding function $f_{\boldsymbol{\theta}}$ that maps videos $V = \{v_1, v_2, ..., v_N\}$ into compact descriptors $f_{\boldsymbol{\theta}}(v_i) = \boldsymbol{d}_i$ in such a way that they can be discriminated based on human actions, using unlabeled videos. For instance, as shown in Figure~\ref{fig:meth}, we want the two videos of people doing handstands to be close to each other in the embedding space, and well separated from the video of a person training a dog. Below, we first introduce the two objective functions used in our work, instance recognition~\cite{wu2018unsupervised} and local aggregation~\cite{zhuang2019local}, and then describe our approach of using IDT~\cite{wang2013action} descriptors as unsupervised priors in non-parametric clustering.
\subsection{Video instance recognition}
This objective is based on the intuition that the best way to learn a discriminative representation is to use a discriminative loss. In the absence of supervised class labels, treating each instance as a distinct class of its own is a natural surrogate. Using the standard softmax classification criterion, the probability of every video $v$ with the feature $\boldsymbol{d}$ belonging to its own class $i$ is expressed as:
\begin{equation}
P(i | \boldsymbol{d}) = \frac{\exp(\boldsymbol{w}_{i}^T \boldsymbol{d})}{\sum_{j=1}^N{\exp(\boldsymbol{w}_{j}^T \boldsymbol{d})}},
\end{equation}
where $\boldsymbol{w}_j$ is the weight vector of the $j$-th classifier. In this case, however, every class contains only a single example, thus $\boldsymbol{w}_j$ can be directly replaced with $\boldsymbol{d}_j$. The authors of~\cite{wu2018unsupervised} then propose the following formulation of the class probability:
\begin{equation}
P(i | \boldsymbol{d}) = \frac{\exp(\boldsymbol{d}_{i}^T \boldsymbol{d} / \tau)}{\sum_{j=1}^N{\exp(\boldsymbol{d}_{j}^T \boldsymbol{d} / \tau)}},
\label{eq:instance_prob}
\end{equation}
where $\tau$ is a temperature parameter that controls the concentration level of the distribution, and helps convergence~\cite{wang2017normface,hinton2015distilling}. The final learning objective is the standard negative log likelihood over the training set. Recall that training is done in batches, thus a memory bank of encodings $D = \{\boldsymbol{d}_1, \boldsymbol{d}_2, ..., \boldsymbol{d}_N\}$ has to be maintained to compute Equation~\ref{eq:instance_prob}.
\begin{figure}
\begin{center}
\makebox[0.9\textwidth]{\includegraphics[width=0.8\paperwidth]{figures/method2.png}}
\caption{Our approach for unsupervised representation learning from video collections. Directly applying a non-parametric clustering objective results in a representation that groups videos based on appearance (top right corner). To mitigate this issue, we propose to first cluster the videos in the space of IDT descriptors (bottom right corner), which results in a grouping that better reflects video dynamics.
We then apply the non-parametric clustering loss to transfer the properties of this embedding to a 3D ConvNet.}
\label{fig:meth}
\end{center}
\vspace{-20px}
\end{figure}
\subsection{Video local aggregation}
While being able to separate any two instances is a key property for an image or video embedding space, another, complementary and equally desirable property is minimizing the distance between semantically similar instances. To this end, Zhuang et al.~\cite{zhuang2019local} proposed to use clusters of instances instead of individual examples as class surrogates. We adapt their approach to the video domain, and briefly describe it below.
Firstly, the video embedding vectors ${\boldsymbol{d}_1, \boldsymbol{d}_2, ..., \boldsymbol{d}_N}$ are grouped into $K$ clusters $G = \{G_1, G_2, ..., G_K\}$ using K-means. The embedding function $f_{\boldsymbol{\theta}}$ is then updated to respect the clusters, using the non-parametric clustering objective proposed in~\cite{zhuang2019local}, and the two steps are iterated in an EM-framework. In particular, for every instance $v_i$ together with its embedding $\boldsymbol{d}_i$, two sets of neighbors are identified: close neighbors $\boldsymbol{C}_i$ (shown with a dashed circle in Figure~\ref{fig:meth}) and background neighbors $\boldsymbol{B}_i$. Intuitively, close neighbors are those examples that fall into the same cluster as $v_i$, and background neighbors are simply those that have a small distance to $\boldsymbol{d}_i$ in the feature space (they include both close neighbors and hard negative examples). Please see~\cite{zhuang2019local} for more details on how $\boldsymbol{C}_i$ and $\boldsymbol{B}_i$ are constructed.
The objective is then to minimize the distance between $\boldsymbol{d}_i$ and its close neighbors (instances in the same cluster), while maximizing the distance to those background neighbors that are not in $\boldsymbol{C}_i$ (hard negatives). The authors formulate this objective in a probabilistic way, as minimizing the negative log likelihood of $\boldsymbol{d}_i$ being recognized as a close neighbor, given that it is recognized as a background neighbor:
\begin{equation}
L(\boldsymbol{C}_i, \boldsymbol{B}_i | \boldsymbol{d}_i, \boldsymbol{\theta}) = -\log \frac{P(\boldsymbol{C}_i \cap \boldsymbol{B}_i | \boldsymbol{d}_i)}{P(\boldsymbol{B}_i | \boldsymbol{d}_i)},
\label{eq:localagg}
\end{equation}
where the probability of $\boldsymbol{d}$ being a member of a set $\boldsymbol{A}$ is defined as:
\begin{equation}
P(\boldsymbol{A}| \boldsymbol{d}) = \sum_{i \in \boldsymbol{A}} P(i | \boldsymbol{d}),
\end{equation}
and the definition of $P(i | \boldsymbol{d})$ is adapted from Equation~\ref{eq:instance_prob}. Despite the involved formulation, one can see that this objective does exactly what it is intended to do: it minimizes the distance between examples inside a cluster and maximizes it between examples belonging to different clusters, in a non-parametric way.
Intuitively, the Local Aggregation objective relies on the structural similarity between semantically similar images, together with the deep image prior in CNN architectures~\cite{ulyanov2018deep}, to form meaningful clusters in the embedding space. In videos, however, both structural and architectural priors are less strong. Indeed, pixels that are close to each other in the spatio-temporal volume of a video are not always strongly correlated, due to the presence of object and camera motion.
On the architecture side, 3D ConvNets are also worse at capturing spatio-temporal patterns than 2D CNNs are at capturing spatial patterns. To mitigate this lack of implicit priors, we propose to introduce an explicit one in the form of IDT descriptors.
\subsection{IDT descriptors as priors for video representation learning}
While state-of-the-art architectures for action recognition~\cite{tran2015learning,carreira2017quo,hara2018can} simply extend 2D CNN filters into the temporal dimension, treating videos as spatio-temporal cuboids of pixels, classical approaches~\cite{wang2013dense,wang2013action} explicitly identified and encoded spatio-temporal interest points that are rich in motion patterns relevant to action classification.
In our experiments, we use the original implementation of IDT~\cite{wang2013action} to compute video descriptors for unlabeled videos (shown in the lower part of Figure~\ref{fig:meth}). We supply the IDT extractor with human detections from the state-of-the-art Mask-RCNN~\cite{he2017mask} model trained on MS COCO~\cite{lin2014microsoft} for improved camera stabilization (see~\cite{wang2013action} for details).
This method, however, produces thousands of descriptors $\boldsymbol{x} \in \mathcal{X}$ per video. To encode them into a compact vector we follow prior work~\cite{wang2013action,wang2019hallucinating} and first apply PCA to reduce the dimensionality of each individual trajectory descriptor $\boldsymbol{x_i}$. We then utilize Fisher vector coding~\cite{perronnin2010improving}, which is based on a Gaussian Mixture Model (GMM) with $K$ components $G(w_k, \boldsymbol{\mu}_k, \boldsymbol{\sigma}_k)$, parameterized by mixing probability, mean, and diagonal standard deviation. The encoding for a trajectory descriptor $\boldsymbol{x}$ is then computed by stacking the derivatives of each component of the GMM with respect to the mean and variance:
\begin{equation}
\phi^*_k(\boldsymbol{x}) = \frac{p(\boldsymbol{\mu}_k | \boldsymbol{x})}{\sqrt{w_k}}[\phi_k(\boldsymbol{x}), \frac{\phi_k^{'}(\boldsymbol{x})}{\sqrt{2}}],
\end{equation}
where the first- and second-order features $\phi_k, \phi_k^{'} \in \mathbb{R}^D$ are defined as:
\begin{equation}
\phi_k(\boldsymbol{x}) = \frac{(\boldsymbol{x} - \boldsymbol{\mu_k})}{\boldsymbol{\sigma}_k}, \quad \phi_k^{'}(\boldsymbol{x}) = \phi_k(\boldsymbol{x})^{2} - 1,
\end{equation}
thus, the resulting Fisher vector encoding $\phi(\boldsymbol{x}) = [\phi^*_1(\boldsymbol{x}), \phi^*_2(\boldsymbol{x}), ..., \phi^*_K(\boldsymbol{x})]$ is of dimensionality $2KD$. To obtain the video-level descriptor $\boldsymbol{\psi}$, individual trajectory encodings are averaged, $\boldsymbol{\psi} = \mathrm{avg}_{\boldsymbol{x} \in \mathcal{X}}\,\phi(\boldsymbol{x})$, and power-~\cite{koniusz2018deeper} and l2-normalization are applied. Finally, to further reduce dimensionality, count sketching~\cite{weinberger2009feature} is used: $p(\boldsymbol{\psi}) = \boldsymbol{P}\boldsymbol{\psi}$, where $\boldsymbol{P}$ is the sketch projection matrix (see~\cite{weinberger2009feature} for details). The resulting encoding $p(\boldsymbol{\psi})$ is a 2000-dimensional vector, providing a compact representation of a video, which captures discriminative motion and appearance information. Importantly, it is completely unsupervised. Both the PCA projection and the parameters of the Gaussian mixture model are estimated using a random sample of trajectory encodings, and the matrix $\boldsymbol{P}$ is selected at random as well.
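To make this encoding pipeline concrete, a simplified NumPy/scikit-learn sketch is given below. It follows the equations above, with signed square-root power normalization and a random hash-based count sketch; the PCA dimensionality (\texttt{n\_pca}) is a placeholder, and the snippet is an illustration under these assumptions rather than the exact code used in our experiments.
\begin{verbatim}
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

def fit_encoders(sample_descriptors, n_pca=64, n_gmm=256):
    """Fit PCA and a diagonal-covariance GMM on a random sample of raw
    IDT descriptors (e.g., 500 descriptors from each of 3500 videos)."""
    pca = PCA(n_components=n_pca).fit(sample_descriptors)
    gmm = GaussianMixture(n_components=n_gmm, covariance_type="diag")
    gmm.fit(pca.transform(sample_descriptors))
    return pca, gmm

def fisher_vector(descriptors, pca, gmm):
    """Video-level Fisher vector: average of per-trajectory encodings,
    followed by power and l2 normalization."""
    x = pca.transform(descriptors)                     # (T, D)
    q = gmm.predict_proba(x)                           # posteriors p(k | x)
    mu = gmm.means_                                    # (K, D)
    sigma = np.sqrt(gmm.covariances_)                  # (K, D)
    w = gmm.weights_                                   # (K,)
    parts = []
    for k in range(gmm.n_components):
        phi = (x - mu[k]) / sigma[k]                   # first-order features
        phi2 = phi ** 2 - 1.0                          # second-order features
        coeff = (q[:, k] / np.sqrt(w[k]))[:, None]
        parts.append(np.hstack([coeff * phi, coeff * phi2 / np.sqrt(2.0)]))
    psi = np.concatenate(parts, axis=1).mean(axis=0)   # (2 * K * D,)
    psi = np.sign(psi) * np.sqrt(np.abs(psi))          # power normalization
    return psi / (np.linalg.norm(psi) + 1e-12)         # l2 normalization

def count_sketch(psi, out_dim=2000, seed=0):
    """Random sketch p(psi) = P psi, with the projection chosen at random."""
    rng = np.random.RandomState(seed)
    buckets = rng.randint(0, out_dim, size=psi.shape[0])
    signs = rng.choice([-1.0, 1.0], size=psi.shape[0])
    sketch = np.zeros(out_dim)
    np.add.at(sketch, buckets, signs * psi)
    return sketch
\end{verbatim}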
To transfer the cues encoded in IDTs descriptors to a 3D ConvNet, we first cluster the videos in the $p(\boldsymbol{\psi})$ space with K-means, to obtain the clusters $G$. We then use $G$ to compute the sets of neighborhoods $(\boldsymbol{C}_i, \boldsymbol{B}_i)$ for each video $v_i$ in an unlabeled collection (shown in the bottom right corner on Figure~\ref{fig:meth}), and apply the objective in Equation~\ref{eq:localagg} to train the network. This forces the learned representation to capture the motion patterns that dominate the IDT space (note that IDTs encode appearance cues as well in the form of HOG descriptors). Finally, we construct a joint space of IDT and 3D ConvNet representations by concatenating the vectors $\boldsymbol{d}$ and $p(\boldsymbol{\psi})$ for each video. We further finetune the network in this joint space for a few epochs. This step allows the model to capitalize on appearance cues encoded by the the expressive 3D ConvNet architecture. We analyze the resulting model quantitatively and qualitatively, and find that it both outperforms the state-of-the-art, and is better at capturing motion information. \section{Experiments} \label{sec:exp} \subsection{Datasets and evaluation} We use the Kinetics~\cite{carreira2017quo} dataset for unsupervised representation learning and evaluate the learned models on UCF101~\cite{soomro2012ucf101} and HMDB51~\cite{kuehne2011hmdb} in a fully-supervised regime. Below, we describe each dataset in more detail. \textbf{Kinetics} is a large-scale, action classification dataset collected by querying videos on YouTube. We use the training set of Kinetics-400, which contains 235 000 videos, for most of the experiments in the paper, but additionally report results using fewer as well as more videos in Section~\ref{sec:vids}. Note that we do not use any annotations provided in Kinetics. \textbf{UCF101} is a classic dataset for human action recognition, which consists of 13,320 videos, covering 101 action classes. It is much smaller than Kinetics, and 3D ConvNets fail to outperform heuristic-based methods on it without fully-supervised pretraining on larger datasets. Following prior work~\cite{jing2018self,han2019video}, we use UCF101 to evaluate the quality of representations learned on Kinetics in an unsupervised way via transfer learning. In addition to using the full training set of UCF101, we report few-shot learning results to gain more insight into the learned representations. We use the first split of the dataset for ablation analysis, and report results averaged over all splits when comparing to prior work. \textbf{HMDB51} is another benchmark for action recognition, which consists of 6,770 videos, collected from movies, and split into 51 categories. Due to the small size of the training set, it, poses an even larger challenge for learning-based methods. As with UCF101, we report ablation results on the first split, and use the results averaged over all splits for comparison to prior work. Following standard protocol, we report classification accuracy as the main evaluation criteria on UCF101 and HMDB51. However, this makes direct comparison between different approaches difficult, due to the differences in network architectures. Thus, whenever possible, we additionally report the fraction of the fully-supervised performance for the same architecture. 
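Before turning to implementation details, we make the construction of the IDT prior and the objective in Equation~\ref{eq:localagg} from Section~\ref{sec:meth} concrete with the following sketch. Here \texttt{fisher\_vectors} (the $p(\boldsymbol{\psi})$ encodings) and \texttt{memory\_bank} (the current embeddings $\boldsymbol{d}_i$, assumed l2-normalized) are placeholders, a single K-means run is shown instead of the three we combine in practice, and the snippet is a simplified illustration rather than our exact implementation.
\begin{verbatim}
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import NearestNeighbors

def build_neighborhoods(fisher_vectors, memory_bank,
                        n_clusters=6000, n_background=4096):
    """Close neighbors C_i: videos in the same IDT cluster as video i.
    Background neighbors B_i: nearest videos to d_i in the embedding space."""
    clusters = KMeans(n_clusters=n_clusters).fit_predict(fisher_vectors)
    knn = NearestNeighbors(n_neighbors=n_background).fit(memory_bank)
    _, background = knn.kneighbors(memory_bank)
    close = [np.where(clusters == clusters[i])[0]
             for i in range(len(clusters))]
    return close, background

def local_aggregation_loss(i, d, memory_bank, close, background, tau=0.07):
    """Objective of Equation 3 (eq:localagg) for one video with embedding d."""
    p = np.exp(memory_bank @ d / tau)
    p /= p.sum()                             # P(j | d) over all videos
    p_background = p[background[i]].sum()    # P(B_i | d)
    both = np.intersect1d(close[i], background[i])
    p_close_and_bg = p[both].sum()           # P(C_i and B_i | d)
    return -np.log(p_close_and_bg / p_background + 1e-12)
\end{verbatim}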
\subsection{Implementation details} \label{sec:impl} \subsubsection{Self-supervised objectives} We study three self-supervised objective functions: Video Instance Recognition (Video IR), Video Local Aggregation (Video LA) and Video Local Aggregation with IDT prior. For Video IR we follow the setting of ~\cite{wu2018unsupervised} and set $\tau$ in Equation~\ref{eq:instance_prob} to 0.07. We use 4096 negative samples for approximating the denominator of Equation~\ref{eq:instance_prob}. In addition to the parameters described above, Local Aggregation requires choosing the number of clusters $K$, as well as the number of runs of K-means that are combined for robustness. The authors of~\cite{zhuang2019local} do not provide clear guidelines on selecting these hyperparameters, so we choose to take the values used in their ImageNet experiments and decrease them proportionally to the size of Kinetics. As a result, we set $K$ to 6000 and the number of clusterings to 3. We validate the importance of this choice in Appendix~\ref{sec:obj}. For experiments with with IDT priors we use exactly the same hyper-parameters for the LA objective as described above. We use the original implementation of ~\cite{wang2013action} to extract IDT descriptors. Human detections are computed with ResNet101 variant of Mask-RCNN~\cite{he2017mask} model pretrained on MS COCO~\cite{lin2014microsoft}. We evaluate the importance of human detections for the final performance of our approach in Appendix~\ref{sec:abl}. When computing Fisher vector encoding, we generally follow the setting of~\cite{wang2019hallucinating}. In particular, we set the feature importance to 90\% when computing PCA, and the number of components in GMM to 256. When fitting the PCA and GMM models we randomly choose 3500 videos from Kinetics and 500 IDT descriptors from each video, to get a representative sample. Note that extracting IDTs and encoding them into Fisher vectors does not require GPUs, and thus the code can be efficiently run in parallel on a CPU cluster. As a result, we were able to compute the descriptors for Kinetics in just 5 days. \vspace{-15px} \subsubsection{Network architecture and optimization} Following most of the prior work, we use a 3D ResNet18 architecture~\cite{hara2018can} in all the experiments, but also report results with deeper variants in Appendix~\ref{sec:depth}. The embedding dimension for self-supervised objectives is set to 128, as in~\cite{zhuang2019local}. We use SGD with momentum to train the networks, and apply multi-scale, random spatio-temporal cropping for data augmentation, with exactly the same setting as in~\cite{hara2018can}. We also perform the standard mean subtraction. All the models are trained on 16 frames clips of spatial resolution of $112 \times 112$, unless stated otherwise. During self-supervised learning we follow the setting of~\cite{zhuang2019local} and set the learning rate to 0.03, and momentum to 0.9, with batch size of 256. All the models are trained for 200 epoch, and the learning rate is dropped by a factor 0.1 at epochs 160 and 190. As in~\cite{zhuang2019local}, we initialize the LA models with 40 epoch of IR pretraining. When finetuning on UCF101 and HMDB51, we set the learning rate to 0.1 and momentum to 0.9, using batch size 128. We drop the learning rate by a factor of 0.1 when the validation performance stops improving. Following~\cite{jing2018self}, we freeze the first ResNet block when finetuning on UCF101, and the first two blocks on HMDB51 to avoid overfitting. 
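As an illustration of the finetuning setup just described, a minimal PyTorch-style sketch is given below. The loader and helper names are placeholders, and we assume a backbone whose first residual block is exposed as \texttt{model.layer1}, which may differ from the exact architecture of~\cite{hara2018can}.
\begin{verbatim}
import torch

# Placeholder: a 3D ResNet18 initialized with the self-supervised weights
# and a new 101-way classification head for UCF101.
model = load_selfsup_3d_resnet18("kinetics_video_la_idt.pth", num_classes=101)

# Freeze the first residual block when finetuning on UCF101
# (the first two blocks on HMDB51); assumes blocks named layer1..layer4.
for p in model.layer1.parameters():
    p.requires_grad = False

optimizer = torch.optim.SGD(
    [p for p in model.parameters() if p.requires_grad],
    lr=0.1, momentum=0.9)
# Drop the learning rate by 10x when validation accuracy stops improving.
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode="max", factor=0.1)

for epoch in range(num_epochs):                        # placeholder epoch count
    train_one_epoch(model, ucf_train_loader, optimizer)    # placeholder helpers
    val_acc = evaluate(model, ucf_val_loader)
    scheduler.step(val_acc)
\end{verbatim}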
During inference, for every video we sample five clips at random, using the center crop. The final prediction is obtained by averaging softmax scores over the five clips. For few-shot experiments, we use the protocol of~\cite{chen2019closer} and freeze the entire network, only learning a linear classifier. \subsection{Analysis of self-supervised objectives} We begin by comparing different variants of self-supervised objectives described in Section~\ref{sec:meth}. They are used to learn a representation on Kinetics-400 in a self-supervised way, and the resulting models are transferred to UCF101 and HMDB51. We additionally evaluate two baselines - Supervised, which is pretrained on Kinetics using ground-truth labels, and Scratch, which is initialized with random weights. The results are reported in Table~\ref{tab:anal}. \begin{table}[bt] \caption{Comparison between variants of unsupervised learning objective using classification accuracy and fraction of fully supervised performance on the fist split of UCF101 and HMDB51. All models use a 3D ResNet18 backbone, and take 16 frames with resolution of $112 \times 112$ as input. Video LA with IDT prior consistently outperforms other objectives, with improvements on HMDB51 being especially significant.} \label{tab:anal} \centering { \begin{tabular}{l|c@{\hspace{1em}}c@{\hspace{1em}}|c@{\hspace{1em}}c@{\hspace{1em}}} Method & \multicolumn{2}{c|}{UCF101} & \multicolumn{2}{c}{HMDB51} \\\hline & Accuracy & \% sup. & Accuracy & \% sup. \\ \hline Scratch~\cite{hara2018can} & 42.4 & 50.2 & 17.1 & 30.3 \\\hline Video IR & 70.0 & 82.9 & 39.9 & 70.7 \\ Video LA & 71.4 & 84.6 & 41.7 & 73.9 \\ Video LA + IDT prior & \textbf{72.8} & \textbf{86.3} & \textbf{44.0} & \textbf{78.0} \\ \hline Supervised~\cite{hara2018can} & 84.4 & 100 & 56.4 & 100 \\ \hline \end{tabular} } \vspace{-10px} \end{table} Firstly, we observe that supervised pretraining is indeed crucial for achieving top performance on both datasets, with the variant trained from scratch reaching only 50.2\% and 30.3\% of the accuracy of the fully supervised model on UCF101 and HMDB51 respectively. The gap is especially large on HMDB51, due to the small size of the dataset. Using the video variant of the Instance Recognition objective (Video IR in the table), however, results in a 27.6\% accuracy improvement on UCF101 and 22.8\% HMDB51, reaching 82.9\% and 70.7\% of the supervised accuracy respectively. Notice that this simple method already outperforms some of the approaches proposed in prior works~\cite{jing2018self,han2019video,kim2019self}. Next, we can see that the Local Aggregation objective (Video LA in the table) further improves the results, reaching 84.6\% and 73.9\% of the fully-supervised performance on UCF101 and HMDB51 respectively. This shows that despite the higher-dimensionality of the video data, this method is still able to discover meaningful clusters in an unsupervised way. However, the gap to the IR objective is smaller than in the image domain~\cite{zhuang2019local}. Finally, our full method, which uses IDT descriptors as an unsupervised prior when clustering the videos (Video LA + IDT prior in the table), is indeed able to further boost the performance, reaching 86.3\% and 78.0\% of fully supervised performance on the two datasets. The improvement over Video LA is especially significant on HMDB51. We explain this by the fact that categories in UCF101 are largely explainable by appearance, thus the benefits of better modeling the temporal information are limited on this dataset. 
In contrast, on HMDB51 capturing scene dynamics is crucial for accurate classification. \subsection{Few-shot evaluation} When finetuning a model, even on a datasets of modest size, like UCF101, the effect of self-supervised pretraining is confounded by the effectiveness of the adaptation strategy itself. Indeed, it has been show recently that, on several tasks that were traditionally used to measure the effectiveness of image-based unsupervised learning approaches, fully supervised performance can be achieved with no pretraining at all, by simply better utilizing the existing data~\cite{he2019rethinking}. Thus, to gain more insight into our objectives, we propose to use pretrained models as feature extractors, and learn linear classifiers in a few-shot regime. The results on UCF101 are reported in Table~\ref{tab:fs}. \begin{table}[bt] \caption{Comparison between variants of unsupervised learning objective on the first split of UCF101 in a few-shot regime, using classification accuracy. The networks are fully frozen, and a linear classifier is learned, gradually decreasing the amount of training data. The gap between unsupervised and supervised representations increases, but our full method (`Video LA + IDT') still outperforms other variants across the board.} \label{tab:fs} \centering { \begin{tabular}{l|c@{\hspace{1em}}c@{\hspace{1em}}c@{\hspace{1em}}c@{\hspace{1em}}c@{\hspace{1em}}} Method & 1-shot & 5-shot & 10-shot & 20-shot & All \\ \hline Scratch & 1.7 & 7.5 & 10.6 & 17.2 & 38.2 \\\hline Video IR & 13.4 & 27.7 & 35.2 & 42.4 & 56.5 \\ Video LA & 15.6 & 30.6 & 36.4 & 44.2 & 58.6 \\ Video LA + IDT prior & \textbf{17.8} & \textbf{31.5} & \textbf{38.4} & \textbf{45.5} & \textbf{58.8} \\ \hline Supervised & 46.4 & 62.0 & 67.7 & 73.3 & 81.8 \\ \hline \end{tabular} } \vspace{-10px} \end{table} The most important observation here is that the gap between fully-supervised and unsupervised representations increases as the data becomes scarcer. This shows that, despite being useful in practice, unsupervised pretraining is still far from making large datasets obsolete. Among the objectives studied in our work, however, Video LA with IDT prior shows the strongest performance across the board, and is especially effective in the low-data regime. \subsection{Qualitative analysis of the representations} To gain further insight into the effect of our IDT prior on representation learning, we now visualize some of the clusters discovered by the vanilla LA, and the variant with the prior in Figures~\ref{fig:la} and~\ref{fig:fv} respectively. Firstly, we observe that, in the absence of external constraints LA defaults to using appearance, and primarily scene information to cluster the videos. For instance, the first cluster (top left corner) corresponds to swimming pools, the one on the top right seems to focus on grass, and the two clusters in the bottom row capture vehicles and backyards, irrespective of the actual scene dynamics. This is not surprising, since appearance cues are both more dominant in the data itself, and are better reflected by the 3D ConvNet architecture. \begin{figure} \vspace{-3px} \begin{center} \makebox[0.9\textwidth]{\includegraphics[width=0.8\paperwidth]{figures/la1.png}} \caption{Visualization of the clusters discovered by the Video LA objective without IDT prior. This variant groups videos in the space of a 3D ConvNet. As a results, the clusters are primarily defined by the appearance, grouping swimming pools, grass fields, vehicles, and backyards. 
\begin{figure} \begin{center} \makebox[0.9\textwidth]{\includegraphics[width=0.8\paperwidth]{figures/fv1.png}} \vspace{-5px} \caption{Visualization of the clusters discovered by the variant of the Video LA objective that uses the IDT prior. In contrast to the examples above, the videos are mainly grouped by motion properties, such as forward-backward hand motion, person rotation, fast person motion, and the `riding' action.} \label{fig:fv} \end{center} \vspace{-25px} \end{figure} In contrast, the model learned with the IDT prior is better at capturing motion cues. For example, the cluster in the top left corner of Figure~\ref{fig:fv} is characterized by forward-backward hand motion, such as observed during cleaning or barbecuing. The cluster in the top-right captures humans spinning or rotating. The bottom left cluster mostly contains videos with very fast actor motion, and the one in the bottom right closely corresponds to the action `riding'. Importantly, neither set of clusters is perfectly aligned with the definition of actions in popular computer vision datasets. For instance, despite having a clear motion-based interpretation, the top left cluster in Figure~\ref{fig:fv} combines the Kinetics categories `cleaning window', `cleaning floor', and `barbecuing'. Indeed, the action vocabulary used in the literature is defined by a complex combination of actor's motion and scene appearance, making automatic discovery of well-aligned clusters challenging, and partially explaining the remaining gap between clustering-based methods and fully-supervised pretraining. \subsection{Learning long-term temporal dependencies} \begin{table}[bt] \caption{Evaluation of the effect of clip length on the Video LA objective with and without the IDT prior on the first split of UCF101 and HMDB51 using classification accuracy. Scratch and Supervised baselines are also reported. All models use a 3D ResNet18 backbone, and take frames with resolution of $112 \times 112$ as input. Both self-supervised and fully-supervised variants benefit from longer sequences, but the model trained from scratch is not able to capitalize on more information.} \label{tab:clip_len} \centering { \begin{tabular}{l|c@{\hspace{1em}}c@{\hspace{1em}}c@{\hspace{1em}}|c@{\hspace{1em}}c@{\hspace{1em}}c@{\hspace{1em}}} Method & \multicolumn{3}{c|}{UCF101} & \multicolumn{3}{c}{HMDB51} \\\hline & 16-fr & 32-fr & 64-fr & 16-fr & 32-fr & 64-fr \\ \hline Scratch & 42.4 & 44.9 & 45.3 & 17.1 & 18.0 & 17.4 \\\hline Video LA & 71.4 & 75.0 & 79.4 & 41.7 & 43.1 & 48.9 \\ Video LA + IDT prior & \textbf{72.8} & \textbf{76.3} & \textbf{81.5} & \textbf{44.0} & \textbf{44.7} & \textbf{49.6} \\ \hline Supervised & 84.4 & 87.0 & 91.2 & 56.4 & 63.1 & 67.5 \\ \hline \end{tabular} } \end{table} Next, we experiment with applying our Video LA objective with the IDT prior over longer clips. Recall that this approach attempts to capture the notion of similarity between the videos encoded in the IDT descriptors, which are computed over the whole video. The models reported so far, however, only take 16-frame clips as input, which makes the objective highly ambiguous. In Table~\ref{tab:clip_len} we evaluate networks trained using 32- and 64-frame long clips instead, reporting results on UCF101 and HMDB51.
We observe that, as expected, the performance of our approach (`Video LA + IDT' in the table) increases with more temporal information, but the improvement is non-linear, and our model is indeed able to better capture long-term motion cues when trained using longer clips. Similar improvements are observed for the plain Video LA objective, but our approach still shows top performance. The supervised model is also able to capitalize on longer videos, but on UCF101 the improvements are lower than those seen by our approach (6.8\% for the supervised model, compared to 8.7\% for ours). Interestingly, the model trained from scratch does not benefit from longer videos as much as the self-supervised or supervised variants. In particular, on HMDB51 its performance improves by about 1-2\% with 32 frames, but actually decreases with 64. We attribute this to the fact that using longer clips lowers the diversity of the training set, which is crucial for optimizing an untrained representation. These results further demonstrate the importance of model pretraining for video understanding. \subsection{Effect of the number of videos} \label{sec:vids} So far, we have reported all the results using 235 000 videos in the training set of Kinetics-400~\cite{carreira2017quo}. We now train the model with our final objective (Video LA with IDT prior) using a varying number of videos to study the effect of the dataset size on the quality of the learned representations. In particular, we subsample the training set to 185 000 and 135 000 examples at random to see whether smaller datasets can be used for representation learning. We also add the videos from the larger Kinetics-600 dataset to see if our method scales to larger video collections. We use the 3D ResNet18 architecture with 16-frame-long clips and an input resolution of $112 \times 112$ in all experiments, and report results on the first split of UCF101 and HMDB51 in Figure~\ref{fig:data}. \begin{figure} \vspace{-15px} \begin{center} \makebox[0.9\textwidth]{\includegraphics[width=0.8\paperwidth]{figures/data1.png}} \caption{Varying the number of Kinetics videos when training a 3D ConvNet with the `Video LA with IDT prior' objective. Using more data for unsupervised pretraining results in better representations, as evident from the transfer learning results on the first split of UCF101 and HMDB51 (reported using classification accuracy).} \label{fig:data} \end{center} \vspace{-20px} \end{figure} Firstly, we observe that useful representations can be learned with as few as 135 000 videos. However, using more data results in improved performance on both datasets. On UCF101 the improvements are mostly linear, but accuracy drops somewhat for the largest training set (370 000 videos). We attribute this to the randomness in training and hypothesize that further improvements can be achieved with more data. On HMDB51 accuracy seems to plateau after 235 000 videos, but improves with 370 000. We will use the model trained on the largest available dataset for comparison to the state-of-the-art in the next section. \subsection{Comparison to the state-of-the-art} Finally, we compare our approach (Video LA with IDT prior) to the state-of-the-art unsupervised video representations in Table~\ref{tab:sot}. As noted in Section~\ref{sec:impl}, to fairly compare results achieved by methods with different network architectures, we use the fraction of fully supervised performance as an additional metric, whenever this information is available.
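Concretely, this normalization is simply the ratio of an approach's accuracy to the accuracy of the fully supervised model with the same backbone; for example, for Video LA with the IDT prior on UCF101 in Table~\ref{tab:anal}:
\begin{verbatim}
def fraction_of_supervised(accuracy, supervised_accuracy):
    # Percentage of the fully supervised performance of the same backbone.
    return 100.0 * accuracy / supervised_accuracy

print(round(fraction_of_supervised(72.8, 84.4), 1))  # 86.3, as in Table 1
\end{verbatim}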
To make the table size manageable, we only report approaches that use 3D ConvNets pretrained on Kinetics. These, however, cover all the top-performing methods in the literature. \begin{table}[bt] \caption{Comparison to the state-of-the-art using accuracy and fraction of the fully-supervised performance on UCF101 and HMDB51, averaged over 3 splits. `Ours': Video LA with IDT prior. DPC uses a non-standard version of 3D ResNet, and does not report fully-supervised performance for it. Our method shows top accuracy among the models using the same network architecture. When normalized for the architecture differences, it outperforms all the approaches.} \label{tab:sot} \centering { \begin{tabular}{l|c|c|c|c@{\hspace{0.5em}}c@{\hspace{0.5em}}|c@{\hspace{0.5em}}c@{\hspace{0.5em}}} Method & Network & Frame size & \#Frames & \multicolumn{2}{c|}{UCF101} & \multicolumn{2}{c}{HMDB51} \\\hline \multicolumn{4}{c|}{} & Acc. & \% sup. & Acc. & \% sup. \\\hline PMAS~\cite{wang2019self} & C3D & $112 \times 112$ & 16 & 61.2 & 74.3 & 33.4 & - \\ \hline 3D-Puzzle~\cite{kim2019self} & 3D ResNet18 & $224 \times 224$ & 16 & 65.8 & 78.0 & 33.7 & 59.8 \\ DPC~\cite{han2019video} & 3D ResNet18 & $112 \times 112$ & 40 & 68.2 & - & 34.5 & - \\ Ours & 3D ResNet18 & $112 \times 112$ & 16 & 73.0 & 86.5 & 41.6 & 73.8 \\ \hline 3D-RotNet~\cite{jing2018self} & 3D ResNet18 & $112 \times 112$ & 64 & 66.0 & 72.1 & 37.1 & 55.5 \\ Ours & 3D ResNet18 & $112 \times 112$ & 64 & \textbf{83.0} & \textbf{90.7} & \textbf{50.4} & \textbf{75.6} \\ \hline DPC~\cite{han2019video} & 3D ResNet34 & $224 \times 224$ & 40 & 75.7 & - & 35.7 & - \\ \hline CBT~\cite{sun2019contrastive} & S3D & $112 \times 112$ & 16 & 79.5 & 82.1 & 44.6 & 58.8 \\ \hline IDT~\cite{wang2013action} & - & Full & All & 85.9 & - & 57.2 & - \end{tabular} } \vspace{-10px} \end{table} Firstly, we observe that our principled approach is indeed a lot more effective than the manually designed objectives used in PMAS~\cite{wang2019self} or 3D-Puzzle~\cite{kim2019self}, confirming the effectiveness of clustering-based training. The improvements are especially large on HMDB, which, as we have shown previously, can be attributed to the IDT prior helping to better model the temporal information. Our approach also outperforms DPC~\cite{han2019video} when the network depth is the same for both methods, even though DPC uses much longer sequences (40 frames with a stride of 2, so the effective length is 120). Notably, on HMDB our approach even outperforms a variant of DPC with a deeper network and a bigger frame size by a large margin. When trained with longer temporal sequences, our method also outperforms the deeper variant of DPC on UCF by 7.3\%. On HMDB we are 14.7\% ahead. The very recent approach of Sun et al.~\cite{sun2019contrastive} (`CBT' in the table) reports high accuracy on both datasets. However, we show that this is due to the authors of~\cite{sun2019contrastive} using a much deeper network than other methods in the literature. In terms of the fraction of fully-supervised performance, the 16-frame variant of our method outperforms CBT by 4.4\% on UCF and by 15.0\% on HMDB. Moreover, the 64-frame variant also outperforms CBT in raw accuracy on both datasets. Finally, we report the performance of Fisher vector encoded IDT descriptors (`IDT' in the table; the numbers are taken from~\cite{simonyan2014two}). Please note that these descriptors are computed on the full length of the video, using the original resolution.
Despite this, our 64-frame model comes close to the IDT performance on both datasets. Training a deeper variant of this model with a larger input resolution may close the remaining gap. \section{Conclusions} \label{sec:concl} This paper introduced a novel approach for unsupervised video representation learning. Our method transfers the heuristic-based IDT descriptors, that are effective at capturing motion information, to 3D ConvNets via non-parametric clustering, using an unlabeled collection of videos. We quantitatively evaluated the learned representations on the UCF101 and HMDB51 action recognition benchmarks, and demonstrated that they outperform prior work. We also qualitatively analyzed the discovered video clusters, showing that they successfully capture video dynamics, in addition to appearance. This analysis highlighted that the clusters do not perfectly match the human-defined action classes, partially explaining the remaining gap to the fully-supervised performance. {\footnotesize \smallsec{Acknowledgements:} We thank Piotr Koniusz and Lei Wang for sharing their implementation of Fisher vector encoding. This work was supported in part by the Inria associate team GAYA, and by the Intelligence Advanced Research Projects Activity (IARPA) via Department of Interior/Interior Business Center (DOI/IBC) contract number D17PC00345. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon. Disclaimer: The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of IARPA, DOI/IBC or the U.S. Government.} \clearpage \bibliographystyle{splncs04} \bibliography{egbib} \clearpage \appendix \begin{center}\Large\bfseries Appendix\end{center} \subfile{supplementary.tex} \end{document}
https://openreview.net/forum?id=yc54rY6_tX6
https://arxiv.org/abs/2006.15731
Please evaluate the strengths and weaknesses of the paper, focusing on the proposed method's use of IDT descriptors and 3DConvNet for unsupervised video representation learning.
EfficientSeg: An Efficient Semantic Segmentation Network #### 1. [Summary] In 2-3 sentences, describe the key ideas, experiments, and their significance. The paper proposes a new U-Net architecture which uses MobileNetV3 blocks. #### 2. [Strengths] What are the strengths of the paper? Clearly explain why these aspects of the paper are valuable. - New U-Net architecture #### 3. [Weaknesses] What are the weaknesses of the paper? Clearly explain why these aspects of the paper are weak. - Only the architecture is changed; no prior is used other than generic data augmentation - It is more like a technical report. #### 4. [Overall rating] Paper rating 4 #### 5. [Justification of rating] Please explain how the strengths and weaknesses aforementioned were weighed in for the rating. #### 6. [Detailed comments] Additional comments regarding the paper (e.g. typos or other possible improvements you would like to see for the camera-ready version of the paper, if any.) - Why the 'related works' are related to the paper is missing. - L.32: Is any meaningful pretraining available for medical data? - Simple present tense instead of past tense - Fig.1 image resolution Typos: - L.173: 10 at . Missing citations: - L.24: "..appoaches.." (only one is given.) - L.26: "..various problems.." - L.34: "..newly emerging..". What are those fields? - L.57: classifying objects[?,?,?], detecting objects[?,?,?], estimating pose[?,?,?] - L.72: easy-to-access applications[?,?,?] - L.61: offline[?,?,?].., real-time[?,?,?] Rating: 4: Ok but not good enough - rejection Confidence: 5: The reviewer is absolutely certain that the evaluation is correct
\documentclass[runningheads]{llncs} \usepackage{graphicx} \usepackage{tikz} \usepackage{comment} \usepackage{amsmath,amssymb} % \usepackage{color} \begin{document} \pagestyle{headings} \mainmatter \def\ECCVSubNumber{100} % \title{EfficientSeg: An Efficient Semantic Segmentation Network} % \titlerunning{EfficientSeg} \author{Vahit Bugra Yesilkaynak \and Yusuf H. Sahin \and Gozde Unal} \authorrunning{Yesilkaynak et al.} \institute{ Istanbul Technical University, Istanbul, Turkey\\ \email{\{yesilkaynak15, sahinyu, gozde.unal\}@itu.edu.tr}\\ } \maketitle \begin{abstract} Deep neural network training without pre-trained weights and with few data is shown to need more training iterations. It is also known that deeper models are more successful than their shallow counterparts for the semantic segmentation task. Thus, we introduce the EfficientSeg architecture, a modified and scalable version of U-Net, which can be efficiently trained despite its depth. We evaluated the EfficientSeg architecture on the Minicity dataset and outperformed the U-Net baseline score ($40\%$ mIoU) using the same parameter count ($51.5\%$ mIoU). Our most successful model obtained a $58.1\%$ mIoU score and took fourth place in the semantic segmentation track of the ECCV 2020 VIPriors challenge. \keywords{semantic segmentation, few data, MobileNet, data efficiency} \end{abstract} \section{Introduction} \label{sec:intro} Typical machine learning approaches, especially deep learning, draw their strength from the usage of a high number of supervised examples\cite{NIPS2012_4824}. However, reliance on large training sets restricts the applicability of deep learning solutions to various problems where high amounts of data may not be available. Thus, in few-shot learning approaches, it is very common to start the network training from a pre-trained network or network backbone to obtain prior knowledge \cite{wang2020generalizing} from a larger dataset like ImageNet\cite{imagenet_cvpr09}. However, for tasks defined on domains that are different from that of natural images, such as medical image segmentation \cite{ronneberger2015u,kamnitsas2017efficient}, it is not meaningful to start from pre-trained weights. This distinction makes learning from scratch using a low number of data instances an important objective. This is also the objective of the newly emerging data-efficient deep learning field. In \cite{he2019rethinking}, the authors argued that non-pre-trained models can perform similarly to their pre-trained counterparts even if it takes more iterations and/or fewer data to train them. Also in \cite{zoph2020rethinking}, it is shown that, with stronger data augmentation, the need to pre-train the network lessens. Even when using pre-trained networks, there is strong evidence that data augmentation improves the results \cite{howard2013some,long2015fully,chen2017rethinking}. In semantic segmentation, it is known that building deeper networks or using deeper backbones affects the results positively \cite{he2016deep,li2019global}. Yet deeper networks come with limitations. Ideally, a baseline network which is subject to scaling should be memory- and time-efficient. The latter is due to the fact that the number of needed training iterations will be increased for a large network. Using MobileNetV3\cite{DBLP:journals/corr/abs-1905-02244} blocks, we are able to create a baseline model which is still expressive and deep with a lower parameter count.
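To make the parameter saving offered by such blocks concrete, the following PyTorch snippet (with arbitrary example layer sizes, not the actual EfficientSeg configuration) compares a standard convolution with the depthwise separable factorization used by the MobileNet family, which we review in the next section.
\begin{verbatim}
# Illustrative parameter-count comparison (layer sizes chosen arbitrarily).
import torch.nn as nn

cin, cout, k = 64, 128, 3
standard  = nn.Conv2d(cin, cout, k, padding=1, bias=False)
depthwise = nn.Conv2d(cin, cin, k, padding=1, groups=cin, bias=False)  # per-channel spatial filter
pointwise = nn.Conv2d(cin, cout, 1, bias=False)                        # 1x1 channel mixing

count = lambda m: sum(p.numel() for p in m.parameters())
print(count(standard))                      # 73728 = cin * cout * k * k
print(count(depthwise) + count(pointwise))  # 8768  = cin * k * k + cin * cout
\end{verbatim}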
Regarding all these considerations, in this article we present a new deep learning architecture for segmentation, using MobileNetV3 blocks. As we focused on the problem of training with few data, we evaluated our network on the Minicity dataset\footnote{https://github.com/VIPriors/vipriors-challenges-toolkit/tree/master/semantic-segmentation}, which is a subset of Cityscapes \cite{cordts2016cityscapes}. Our method obtained fourth place in the semantic segmentation challenge of the ECCV VIPriors workshop \footnote{https://vipriors.github.io/challenges/}. \section{Related Work} \textbf{Semantic Segmentation.} Computer vision problems focus on automatically extracting useful information from images, such as classifying objects, detecting objects, estimating pose, and so on. Semantic segmentation is one such problem, where the main concern is to group the pixels of an image to state which pixels belong to which entity in the image. Semantic segmentation finds many applications in real-life problems, and we can divide the efforts in the field into two main categories: offline segmentation and real-time segmentation. Real-time segmentation networks need to be both fast and accurate; with this constraint, they generally have lower mIoU compared to their offline counterparts. To our knowledge, the current state of the art is U-HarDNet-70\cite{chao2019hardnet}, with a reported 75.9\% class mIoU at 53 frames per second on a 1080Ti GPU. On the other hand, offline segmentation has no time constraints, and thus the proposed solutions are generally slower. To our knowledge, the state-of-the-art technique for offline Cityscapes segmentation is HRNet-OCR\cite{tao2020hierarchical}, with a class mIoU of 85.1\%. We next describe a popular architectural paradigm in efficient image recognition, namely MobileNet. \textbf{MobileNet Blocks.} With the increasing popularity of CNNs, the demand for easy-to-access applications based on CNNs has also increased. One way to provide this accessibility is to use mobile devices, yet the competition in image recognition challenges has generally pushed CNNs to be too big to run on mobile devices. In this environment, there are two main solutions to make mobile CNN applications feasible: running the networks on powerful servers for external computation, or using smaller networks that fit on mobile devices. In this paper, we focus on the second solution, which aims at creating smaller networks. Howard et al. introduced a family of networks called MobileNets\cite{howard2013some} with this motivation. The main idea behind MobileNets is utilizing Depthwise Separable Convolutional (DSC) layers. A DSC layer is very much like a standard 2D convolutional layer and serves the same purpose, yet it has fewer parameters and is faster than its standard counterpart. Figure \ref{fig:depthwise} depicts the difference between a standard convolution layer and a DSC layer. The MobileNet architecture has two improved versions, namely MobileNetV2\cite{DBLP:journals/corr/abs-1801-04381} and MobileNetV3\cite{DBLP:journals/corr/abs-1905-02244}. Before going into the details of MobileNetV3, we describe MobileNetV2 and another work based on it, EfficientNet\cite{DBLP:journals/corr/abs-1905-11946}.\\ \textbf{MobileNetV2 Blocks and EfficientNet.} MobileNetV2 relies on two main components: depthwise separable convolutional layers, and an inverted residual architecture with linear bottlenecks. The inverted residual architecture is implemented by adding a middle phase called the expansion phase: inside MobileNetV2 blocks, the input tensor is expanded to a depth of $ t \times d $ by a convolution operation, where $t$ and $d$ are the expansion ratio and the depth of the input tensor respectively; the depthwise separable convolution phase then follows.
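As an illustration of the block structure described above, a MobileNetV2-style inverted residual block with a linear bottleneck can be sketched in PyTorch as follows; this is a simplified example, not the exact MobileNetV2/V3 implementation, which additionally uses different activations and, in V3, Squeeze-and-Excite.
\begin{verbatim}
# Simplified inverted residual block with a linear bottleneck.
import torch
import torch.nn as nn

class InvertedResidual(nn.Module):
    def __init__(self, cin, cout, expansion=4, stride=1):
        super().__init__()
        hidden = cin * expansion  # the "t x d" expansion phase
        self.block = nn.Sequential(
            nn.Conv2d(cin, hidden, 1, bias=False), nn.BatchNorm2d(hidden), nn.ReLU6(inplace=True),
            nn.Conv2d(hidden, hidden, 3, stride=stride, padding=1, groups=hidden, bias=False),
            nn.BatchNorm2d(hidden), nn.ReLU6(inplace=True),
            nn.Conv2d(hidden, cout, 1, bias=False), nn.BatchNorm2d(cout),  # linear: no activation
        )
        self.use_residual = (stride == 1 and cin == cout)

    def forward(self, x):
        out = self.block(x)
        return x + out if self.use_residual else out

print(InvertedResidual(32, 32)(torch.randn(1, 32, 56, 56)).shape)  # (1, 32, 56, 56)
\end{verbatim}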
EfficientNets\cite{DBLP:journals/corr/abs-1905-11946} are a family of networks built to be small, fast and accurate on the image classification task. They consist of blocks quite similar to those of MobileNetV2, yet instead of making the networks mobile, the authors used the advantages of MobileNetV2 blocks to create bigger networks. EfficientNets have a significantly smaller number of parameters compared to their similarly performing counterparts, and thus they are both memory and time efficient. After the success of EfficientNet, Howard et al. published another work, called MobileNetV3\cite{DBLP:journals/corr/abs-1905-02244}.\\ \begin{figure}[h] \centering \includegraphics[width=0.5\linewidth]{figure_conv.PNG} \caption{The difference between a standard convolution layer (a) and a depthwise separable convolution layer (b). A depthwise separable layer consists of two convolution operations, which decreases the number of parameters. In the figure, "k" is the kernel size and "d" is the depth of the input tensor.} \label{fig:depthwise} \end{figure} \textbf{MobileNetV3.} We use MobileNetV3 blocks as the building blocks of our network EfficientSeg. Howard et al. added a Squeeze-and-Excite\cite{DBLP:journals/corr/abs-1709-01507} operation to the residual layer and introduced a new architecture scheme. In our work we use this architecture to create a U-shaped semantic segmentation network. We discuss further details in the following sections. \textbf{Data augmentation.} As stated in Section \ref{sec:intro}, data augmentation is important for learning from few data. In traditional neural network training, transformations like flipping, cropping, scaling and rotating are widely used. In \cite{ma2019optimizing}, \cite{cubuk2020randaugment} and \cite{imgaug}, more complex data augmentation methods such as JPEG compression, local copying of segmentation masks, contrast, brightness and sharpness changes, and blurring are suggested. There are also data augmentation methods focusing on generating new data by GANs or style transfer\cite{zhu2017data,DBLP:journals/corr/abs-1904-09135,frid2018gan}, but they are out of scope for the Minicity segmentation task since they are not generally applicable when training from scratch. \section{Method} In this paper, we present a new neural architecture called EfficientSeg, which can be seen as a modified version of the classic U-Net architecture\cite{ronneberger2015u}, obtained by replacing its blocks with the inverted residual blocks presented in MobileNetV3\cite{DBLP:journals/corr/abs-1905-02244}. The EfficientSeg network, which is illustrated in Figure \ref{fig:my_label}, is a U-shaped architecture with 4 concatenation shortcuts between an encoder and a decoder. Our encoder, the down-sampling branch of the network, is essentially a MobileNetV3-Large classifier without the classification layers, whereas the decoder is its mirror-symmetric version, where down-sampling is replaced with an upsampling operation. In the decoder part, we need to upsample the input tensors to retrieve a segmentation mask of the same size as the input image. We apply an upsampling step with bilinear interpolation and a scale factor of 2 at each block whose symmetric counterpart is a downsampling block on the encoder side. We have 4 shortcut connections from the encoder to the decoder at the same layer. Each shortcut is implemented by concatenating the input of a downsampling block in the encoder part with the corresponding upsampled output in the decoder part. In this way, we enable the network to capture fine details through these shortcuts rather than solely preserving them in the bottleneck. As in MobileNetV3, a width scaling parameter to upscale the network also exists in EfficientSeg, making it suitable for creating networks of different scales. We discuss two of them: EfficientSeg (1.5), which has the same number of parameters as the baseline U-Net in the Minicity challenge, and our larger network EfficientSeg (6.0).
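The upsample-and-concatenate pattern described above can be illustrated with the following minimal PyTorch sketch of a single encoder-decoder stage; the real EfficientSeg uses four such shortcuts and MobileNetV3 blocks instead of the plain convolutions shown here.
\begin{verbatim}
# Minimal illustration of one bilinear-upsample + concatenation shortcut.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinySkipNet(nn.Module):
    def __init__(self, cin=3, base=16, num_classes=19):
        super().__init__()
        self.enc1 = nn.Conv2d(cin, base, 3, padding=1)
        self.enc2 = nn.Conv2d(base, base * 2, 3, stride=2, padding=1)  # downsample by stride
        self.dec  = nn.Conv2d(base * 2 + base, base, 3, padding=1)     # consumes the shortcut
        self.head = nn.Conv2d(base, num_classes, 1)

    def forward(self, x):
        s1 = F.relu(self.enc1(x))
        s2 = F.relu(self.enc2(s1))
        up = F.interpolate(s2, scale_factor=2, mode="bilinear", align_corners=False)
        d  = F.relu(self.dec(torch.cat([up, s1], dim=1)))              # skip connection
        return self.head(d)

print(TinySkipNet()(torch.randn(1, 3, 64, 64)).shape)  # (1, 19, 64, 64)
\end{verbatim}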
\begin{figure}[h] \centering \includegraphics[width=\linewidth]{effseg.pdf} \caption{EfficientSeg architecture. There are 5 different types of blocks. Inverted Residual Blocks are MobileNetV3 blocks as described in the paper. 1x1 and 3x3 blocks are standard convolution blocks which have activation and batch normalization. Downsampling operations are done by increasing the stride, and for upsampling, bilinear interpolation is used.} \label{fig:my_label} \end{figure} \section{Experiment} In our experiments, we train the EfficientSeg network with $384\times768$ sized cropped images using the Adam\cite{kingma2014adam} optimization algorithm with a learning rate of $\textit{lr=1e-3}$ at the start. We divide the learning rate by 10 at the $200^{th}$ and $400^{th}$ epochs. As the objective function, we use a weighted cross-entropy loss. In the dataset, we observe that some of the categories are underrepresented relative to the others. We incorporate that information into the objective function in the form of increased weights: a weight of 2 (wall, fence, pole, rider, motorcycle, bicycle) and a weight of 3 (bus, train, truck) are used for the rare classes. For every epoch, 20 extra images for each rare class are also fed to the network. Deciding on which data augmentations to use requires prior knowledge of the domain \cite{cubuk2020randaugment}. Since in our train set we have few objects of the same category with different color and texture properties, we decided to reduce texture dependency and increase color invariance by (i) multiplying the hue and brightness values of the image by uniformly distributed random values in ($0.4,1.6$), and (ii) JPEG compression. We also applied (iii) non-uniform scaling, (iv) random rotation ($\pm20^\circ$) and (v) flipping, as in standard deep learning approaches. At evaluation time, we feed the network with both the original test images and their flipped versions, then average their scores to obtain the final segmentation. Using nearly the same parameter count with a scaling parameter of 1.5, we obtain an mIoU score of $51.5\%$ on the test set, whereas the baseline U-Net model has a score of $40\%$. To further improve the model, we also tested a scaling parameter of 6.0 and obtained an improved mIoU of $58.1\%$. To demonstrate the importance of texture-based data augmentation, we also train the network without the aforementioned augmentations. As can be seen in Table \ref{table:table1}, by both using the aforementioned augmentation strategy and increasing the scale of the network, we obtain our highest score.
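For reference, the class weighting described above can be passed directly to the cross-entropy loss; the snippet below uses the class order of Table~\ref{table:table1}, and the ignore label of 255 is an assumption following the usual Cityscapes convention rather than a detail stated in this paper.
\begin{verbatim}
# Weighted cross-entropy with the class weights described in the text.
import torch
import torch.nn as nn

classes = ["road", "sidewalk", "building", "wall", "fence", "pole",
           "traffic light", "traffic sign", "vegetation", "terrain", "sky",
           "person", "rider", "car", "truck", "bus", "train",
           "motorcycle", "bicycle"]
weights = torch.ones(len(classes))
for name in ["wall", "fence", "pole", "rider", "motorcycle", "bicycle"]:
    weights[classes.index(name)] = 2.0
for name in ["bus", "train", "truck"]:
    weights[classes.index(name)] = 3.0

criterion = nn.CrossEntropyLoss(weight=weights, ignore_index=255)  # 255: assumed ignore label
\end{verbatim}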
Our code for these experiments is publicly available\footnote{https://github.com/MrGranddy/EfficientSeg}. \begin{table}[h] \begin{center} \begin{tabular}{ccccc} \multicolumn{1}{l|}{} & \textbf{EfficientSeg (1.5)} & \begin{tabular}[c]{@{}c@{}}\textbf{EfficientSeg (6.0)}\\ \textbf{w/o aug.}\end{tabular} & \textbf{EfficientSeg (6.0)} & \\ \hline \multicolumn{1}{l|}{road} & 0.960 & 0.954 & 0.962 & \\ \multicolumn{1}{l|}{sidewalk} & 0.707 & 0.685 & 0.738 & \\ \multicolumn{1}{l|}{building} & 0.846 & 0.832 & 0.864 & \\ \multicolumn{1}{l|}{wall} & 0.277 & 0.165 & 0.318 & \\ \multicolumn{1}{l|}{fence} & 0.285 & 0.197 & 0.304 & \\ \multicolumn{1}{l|}{pole} & 0.449 & 0.471 & 0.517 & \\ \multicolumn{1}{l|}{traffic light} & 0.239 & 0.382 & 0.450 & \\ \multicolumn{1}{l|}{traffic sign} & 0.491 & 0.517 & 0.615 & \\ \multicolumn{1}{l|}{vegetation} & 0.885 & 0.888 & 0.899 & \\ \multicolumn{1}{l|}{terrain} & 0.501 & 0.464 & 0.576 & \\ \multicolumn{1}{l|}{sky} & 0.912 & 0.919 & 0.932 & \\ \multicolumn{1}{l|}{person} & 0.580 & 0.575 & 0.710 & \\ \multicolumn{1}{l|}{rider} & 0.222 & 0.179 & 0.353 & \\ \multicolumn{1}{l|}{car} & 0.864 & 0.842 & 0.899 & \\ \multicolumn{1}{l|}{truck} & 0.342 & 0.106 & 0.497 & \\ \multicolumn{1}{l|}{bus} & 0.264 & 0.128 & 0.325 & \\ \multicolumn{1}{l|}{train} & 0.169 & 0.002 & 0.137 & \\ \multicolumn{1}{l|}{motorcycle} & 0.278 & 0.191 & 0.333 & \\ \multicolumn{1}{l|}{bicycle} & 0.518 & 0.544 & 0.611 & \\ \hline \multicolumn{1}{l|}{mIoU} & 0.515 & 0.476 & 0.581 & \\ & & & & \end{tabular} \end{center} \caption{Class IoU and mIoU scores on the Minicity test set for differently trained EfficientSeg architectures} \label{table:table1} \end{table} It is also worth mentioning that the effect of the aforementioned data augmentation techniques is more significant than that of up-scaling the network. This result empirically shows the importance of texture-based data augmentation. \section{Conclusions} In conclusion, we introduced a novel semantic segmentation architecture, EfficientSeg, which consists of scalable blocks, making it easy to fit problems of different scales. In our work we empirically show how selecting the most beneficial augmentations improves the success of the network, making them even more advantageous than up-scaling the network. When trained with our augmentation set, EfficientSeg (1.5) achieves 51.5\% mIoU, outperforming its much larger counterpart EfficientSeg (6.0) when no augmentation is applied; on the other hand, when EfficientSeg (6.0) is trained with our augmentation set, we achieve our best score of 58.1\%. Utilizing prior knowledge is especially important for tasks that provide few data to train on. As the popularity of efficient image recognition networks increases, we expect data efficiency to be the next step towards simple, efficient and elegant solutions to image recognition tasks. \bibliographystyle{splncs04} \bibliography{egbib} \end{document}
https://openreview.net/forum?id=s-OSwnzXvEi
https://arxiv.org/abs/2009.06469
Evaluate the paper based on its strengths and weaknesses, and provide an overall rating for the paper.
Effective method, out of scope #### 1. [Summary] In 2-3 sentences, describe the key ideas, experiments, and their significance. The paper proposes a novel CNN architecture for semantic segmentation based on U-Net and MobileNetV3 blocks. The proposed architecture is applied on the MiniCity dataset and performance improvements are shown. #### 2. [Strengths] What are the strengths of the paper? Clearly explain why these aspects of the paper are valuable. * Clarity: the paper is clear and easy to read. * Effectiveness: the method seems effective on small-size datasets. #### 3. [Weaknesses] What are the weaknesses of the paper? Clearly explain why these aspects of the paper are weak. * Scope: The proposed architecture change is motivated by computational efficiency and parameter reduction rather than incorporating prior knowledge. Prior knowledge is only considered to motivate the techniques used for data augmentation, which are generic and widely used techniques. #### 4. [Overall rating] Paper rating * 5. Marginally below acceptance threshold #### 5. [Justification of rating] Please explain how the strengths and weaknesses aforementioned were weighed in for the rating. Despite being effective, the proposed architecture is not motivated by incorporating prior knowledge but rather by computational efficiency and therefore the paper falls outside of the intended scope of this workshop. #### 6. [Detailed comments] Additional comments regarding the paper (e.g. typos or other possible improvements you would like to see for the camera-ready version of the paper, if any.) (line 173) "We divide the learning rate by 10 at . " Missing word. (line 177) "poll" --> pole Rating: 5: Marginally below acceptance threshold Confidence: 4: The reviewer is confident but not absolutely certain that the evaluation is correct
https://openreview.net/forum?id=s-OSwnzXvEi
https://arxiv.org/abs/2009.06469
Evaluate the effectiveness and strengths of the proposed method in the paper, considering its clarity and performance on small-size datasets.
2nd Place Scheme on Action Recognition Track of ECCV 2020 VIPriors Challenges: An Efficient Optical Flow Stream Guided Framework 1. [Summary] In 2-3 sentences, describe the key ideas, experiments, and their significance. This paper introduces a data-efficient pipeline to address the problem of action recognition. It is based on a two-stream model that utilizes an enhanced C3D network. The convolutions in the C3D are modified to include a 3D Temporal Central Difference Convolution term. Instead of working with RGB, the authors propose to use Rank Pooling guided by Optical Flow. Additionally, this work is ranked 2nd in the VIPriors Action Recognition Challenge. 2. [Strengths] What are the strengths of the paper? Clearly explain why these aspects of the paper are valuable. - Modification of the convolution, integrating a new term. - 2nd position in the challenge. 3. [Weaknesses] What are the weaknesses of the paper? Clearly explain why these aspects of the paper are weak. - The paper feels more like a technical report than a proper paper. - The introduction needs more motivation. - Experiments are very much focused on the challenge. Only modifying the convolution would need more justification and a deeper study. 4. [Overall rating] Paper rating. 4 5. [Justification of rating] Please explain how the strengths and weaknesses aforementioned were weighed in for the rating. Weaknesses of point 3 justify the rating. 6. [Detailed comments] Additional comments regarding the paper (e.g. typos or other possible improvements you would like to see for the camera-ready version of the paper, if any.) Please include a Rank Pooling citation; it currently seems as if it were introduced by the authors. Rating: 4: Ok but not good enough - rejection Confidence: 4: The reviewer is confident but not absolutely certain that the evaluation is correct
\documentclass[runningheads]{llncs} \usepackage{graphicx} \usepackage{booktabs} \usepackage[normalem]{ulem} \usepackage{tikz} \usepackage{comment} \usepackage{amsmath,amssymb} % \usepackage{color} \usepackage[width=122mm,left=12mm,paperwidth=146mm,height=193mm,top=12mm,paperheight=217mm]{geometry} \begin{document} \titlerunning{An Efficient Optical Flow Stream Guided Framework} \title{2nd Place Scheme on Action Recognition Track of ECCV 2020 VIPriors Challenges: An Efficient Optical Flow Stream Guided Framework} % \author{Haoyu Chen\inst{1} \and Zitong Yu\inst{1} \and Xin Liu\inst{1}\and Wei Peng\inst{1}\and Yoon Lee\inst{2}\and Guoying Zhao\inst{1}} \authorrunning{H. Chen et al.} \institute{CMVS, University of Oulu, Finland. \and CEL, Delft University of Technology, the Netherlands.\\ \email{\{chen.haoyu, zitong.yu, xin,liu, wei.peng, guoying.zhao\}@oulu.fi}, \email{\{y.lee\}@tudelft.nl}\\ } \maketitle \begin{abstract} To address the problem of training on small datasets for action recognition tasks, most prior works either rely on a large number of training samples or require models pre-trained on other large datasets to tackle the overfitting problem. However, this limits such research to organizations with strong computational resources. In this work, we propose a data-efficient framework that can be trained from scratch on small datasets while achieving promising results. Specifically, by introducing a 3D central difference convolution operation, we propose a novel C3D neural network-based two-stream (Rank Pooling RGB and Optical Flow) framework for the task. The method is validated on the action recognition track of the ECCV 2020 VIPriors challenges and obtained 2nd place (88.31\%) \footnote[1]{https://competitions.codalab.org/competitions/23706\#results}. This shows that our method can achieve a promising result even without a model pre-trained on large-scale datasets. The code will be released soon. \keywords{from-scratch training, 3D difference convolution, Rank Pooling, over-fitting} \end{abstract} \section{Introduction} Nowadays, with the strong ability of deep learning methods, training on massive datasets consistently yields substantial performance gains on the action recognition task. However, this only works for a few very large companies that have thousands of expensive GPUs, while the majority of smaller companies and universities with few hardware clusters cannot enjoy these benefits. In this work, we try to train a model from scratch, without large datasets or large-scale pre-trained models, while still achieving strong performance on the action recognition task. Specifically, we introduce an enhanced convolution operation, 3D temporal central difference convolution (TCDC), into a traditional 3D CNN structure to efficiently capture spatio-temporal features in the basic convolution operators with less overfitting. Besides, instead of using raw RGB frames, from which the model might learn too many unnecessary details, we propose to use an efficient representation called Rank Pooling to serve as an enhanced RGB stream. Furthermore, the Optical Flow stream is used to guide the learning of the Rank Pooling stream to tackle the overfitting issue. At last, the Optical Flow stream and the Rank Pooling stream are combined and trained jointly on the task for better performance. The framework of our method is illustrated in Fig. \ref{fig:framework}.
Our contributions to tackling this training-from-scratch task include: a novel temporal convolution operator (3D TCDC), an Optical Flow guided Rank Pooling stream, and a joint two-stream learning strategy for action recognition. \begin{figure*} \includegraphics[width=\linewidth]{framwork.pdf} \caption{Network architecture of our hybrid two-stream framework. The Optical Flow stream is used to enhance the learning of the Rank Pooling stream to overcome the overfitting problem} \label{fig:framework} \end{figure*} \section{Related work} The first commonly used two-stream 2D CNN architecture for action recognition was proposed by Simonyan and Zisserman \cite{twostream}, with one stream of RGB frames and the other of Optical Flow. The two streams are trained separately and fused by averaging the scores of both streams. A transition from 2D CNNs to 3D CNNs was made since the spatio-temporal features learned by 3D CNNs perform better than their 2D equivalents \cite{3dcnn}. This transition comes with the problem of overfitting, caused by small datasets and the large number of parameters that need to be optimized \cite{closer} \cite{longterm} in the model. Specifically, in a two-stream (RGB and Optical Flow) framework, directly training models on RGB frames from scratch on a small dataset can lead to a severe overfitting problem for the RGB stream, while the Optical Flow stream can still achieve relatively high performance. The reason is that RGB frames contain too many noisy details, and a large model could learn some irrelevant features, which leads to overfitting with local optima. Many previous works have reported this overfitting issue: for instance, training from scratch on a single RGB stream, the 3D ResNet50 model \cite{mars} can achieve 55.2\% accuracy, the SlowFast model \cite{slowfast} 40.1\%, and even with neural architecture search \cite{nas}, the accuracy can only reach 61\%. To deal with the problem of overfitting, Carreira and Zisserman \cite{I3D} introduced the Kinetics dataset with the I3D network, which was large enough to let 3D CNNs be trained sufficiently. Using RGB and Flow streams pre-trained on Kinetics \cite{kinetics}, I3D achieved the state of the art on the UCF101 \cite{ucf101} dataset. However, when large-scale datasets and pre-trained models are not available, especially for those who do not have access to powerful computing facilities, how to overcome overfitting is still an unsolved problem. In this work, we introduce a new 3D CNN operator, TCDC \cite{yupr}, which is inspired by the 2D CDC \cite{yucvpr}, and use a Rank Pooling RGB stream with an Optical Flow guided strategy to tackle this issue, which can achieve a promising result with a low computational cost. \section{Methodology} \subsection{C3D Backbones with Central Difference Convolution} Based on the traditional 3D CNN framework \cite{3dcnn}, we introduce a unified 3D convolution operator called 3D temporal central difference convolution (3D TCDC) for better integrating local gradient information. In a TCDC operation, the sampled local receptive field cube $\mathcal{C}$ consists of two kinds of regions: 1) the region in the current moment $\mathcal{R'}$, and 2) the regions in the adjacent moments $\mathcal{R''}$. In the setting of a TCDC, the central difference term is only calculated from $\mathcal{R''}$.
Thus the generalized TCDC can be formulated as: \vspace{-1.5em} \begin{equation} \small \setlength{\belowdisplayskip}{-1.5em} \begin{split} y(p_0) &=\underbrace{\sum_{p_n\in \mathcal{C}}w(p_n)\cdot x(p_0+p_n)}_{\text{vanilla 3D convolution}}+\theta\cdot \underbrace{\left(-x(p_0)\cdot\sum_{p_n\in \mathcal{R''}}w(p_n)\right)}_{\text{temporal CD term}}, \label{eq:CDC-T} \end{split} \end{equation} where $w$, $x$, $p_0$ and $p_n$ denote the kernel weights, the input feature map, the current position and the sampled positions in $\mathcal{C}$, respectively. The first term on the right-hand side is a vanilla 3D convolution, while the second term is the temporal central difference (CD) term. Note that $w(p_n)$ is shared between the vanilla 3D convolution and the temporal CD term, so no extra parameters are added. The hyperparameter $\theta \in [0,1]$ balances the contributions of gradient-level (temporal CD) and intensity-level (vanilla 3D) information. As a result, our C3D framework combines vanilla 3D convolution with 3D TCDC and can provide more robust and diverse modeling capability. \subsection{Rank Pooling for Optical Flow guided learning} We introduce a more explicit representation, Rank Pooling, instead of raw RGB frames to alleviate the overfitting problem on the RGB stream. The definition of Rank Pooling is given below. Let an RGB sequence with $k$ frames be represented as $\langle I_1, I_2, \dots, I_t, \dots, I_k \rangle$, where $I_t$ is the average of the RGB features over the frames up to timestamp $t$. The process of Rank Pooling is formulated as the following objective function: \begin{equation} \begin{split} \underset{\omega,\,\xi}{\arg\min}\ \frac{1}{2}\left \| \omega \right \|^{2} + \delta \sum_{i>j}\xi _{ij}, \\ \text{s.t.}\ \omega ^{T}\cdot (I_{i}-I_{j})\geq 1-\xi _{ij},\ \xi _{ij}\geq 0 \label{eq:rankpooling} \end{split} \end{equation} where $\omega$ is the ranking parameter encoding the temporal order of the frames, $\xi_{ij}$ are slack variables and $\delta$ is a trade-off constant. By optimizing Eq. \ref{eq:rankpooling}, we map a sequence of $k$ frames to a single vector $d$ given by the learned parameter $\omega$. In this paper, Rank Pooling is applied directly to the pixels of the RGB frames, and the dynamic image $d$ is of the same size as the input frames. After the Rank Pooling images are generated, we combine the Rank Pooling stream with the Optical Flow stream as input to the above C3D networks, which enhances the learning of the Rank Pooling stream. \section{Experiments} We validate our method on the action recognition track of the ECCV 2020 VIPriors challenges, which uses part (split 1) of the well-known action recognition dataset UCF101 \cite{ucf101}. There are 9537 video clips for training and validation, and 3783 for testing.
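For illustration, a minimal PyTorch-style sketch of the TCDC operator in Eq. \ref{eq:CDC-T} is given below. It is not the authors' released code: the $3\times3\times3$ kernel, the padding and the module structure are assumptions. The temporal CD term reuses the vanilla kernel weights by collapsing the adjacent-time slices $\mathcal{R''}$ into a $1\times1\times1$ kernel, so no extra parameters are introduced.
\begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F

class TCDC3d(nn.Module):
    # Sketch of 3D temporal central difference convolution (Eq. 1),
    # assuming a 3x3x3 kernel with padding 1.
    def __init__(self, in_channels, out_channels, theta=0.7):
        super().__init__()
        self.conv = nn.Conv3d(in_channels, out_channels,
                              kernel_size=3, padding=1, bias=False)
        self.theta = theta

    def forward(self, x):
        out = self.conv(x)           # vanilla 3D convolution term
        if self.theta == 0:
            return out
        w = self.conv.weight         # (out_c, in_c, T=3, H=3, W=3)
        # sum the kernel weights over the adjacent-time slices R''
        # (all temporal offsets except the centre one)
        w_adj = torch.cat([w[:, :, :1], w[:, :, 2:]], dim=2)
        kernel_diff = w_adj.sum(dim=(2, 3, 4), keepdim=True)
        # x(p0) * summed weights, realised as a 1x1x1 convolution
        diff = F.conv3d(x, kernel_diff)
        return out - self.theta * diff   # Eq. (1): vanilla + theta * CD term
\end{verbatim}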
\subsection{Different backbones} \begin{table}[] \centering \caption{Comparison of different backbone networks} \label{tab:backbone} \begin{tabular}{@{}ccccc@{}} \toprule \textbf{Backbone} & \textbf{Stream} & \textbf{Training Acc} & \textbf{Testing Acc} & \textbf{Overfitting gap} \\ \midrule Slowfast\cite{slowfast} & RGB & 84.1\% & 40.1\% & 44.1\% \\ Slowfast \cite{slowfast} & Optical Flow & 75.2\% & 56.4\% & 18.8\% \\ ResNet 3D 101 \cite{mars} & RGB & 82.8\% & 48.8\% & 34.0\% \\ ResNet 3D 101 \cite{mars} & Optical Flow & 84.4\% & 66.3\% & 18.1\% \\ ResNet 3D 50 \cite{mars} & RGB & 84.1\% & 51.8\% & \underline{32.3\%} \\ ResNet 3D 50 \cite{mars} & Optical Flow & 86.1\% & 67.6\% & 18.5\% \\ NAS \cite{nas} & RGB & 88.9\% & 50.2\% & 38.7\% \\ C3D \cite{3dcnn} & RGB & 88.3\% & 51.9\% & 36.4\% \\ C3D \cite{3dcnn} & Optical Flow & 84.2\% & 68.1\% & 16.1\% \\ \midrule \textbf{TCDC (ours)} & \textbf{RGB} & \textbf{91.4\%} & \underline{\textbf{55.8\%}} & \textbf{35.6\%} \\ \textbf{TCDC (ours)} & \textbf{Optical Flow} & \textbf{85.4\%} & \underline{\textbf{77.2\%}} & \underline{\textbf{8.2\%}} \\ \bottomrule \end{tabular} \end{table} In this experiment, we compare our stacked 3D temporal CDC network (TCDC network) with C3D \cite{3dcnn}, ResNet 3D 50 \cite{mars}, ResNet 3D 101 \cite{mars}, the SlowFast network \cite{slowfast} and a searched neural network \cite{nas}. Our network performs best among these networks. As shown in Table \ref{tab:backbone}, the TCDC network clearly reduces the overfitting gap. However, there is still room to improve the performance, especially for the RGB stream. We therefore introduce the Rank Pooling representation. \subsection{Effectiveness of the Rank Pooling stream} \begin{table}[] \centering \caption{Comparison of different stream fusions} \label{tab:stream} \begin{tabular}{@{}cccc@{}} \toprule \textbf{Fusing streams} & \multicolumn{3}{c}{\textbf{Accuracy}} \\ \midrule \textbf{Theta in TCDC network} & \textbf{0.2} & \textbf{0.5} & \textbf{0.7} \\ \midrule RGB & 52.6\% & 53.1\% & 55.8\% \\ \begin{tabular}[c]{@{}c@{}}RGB\\ (Optical Flow enhanced)\end{tabular} & 52.8\% & 54.2\% & 58.9\% \\ \begin{tabular}[c]{@{}c@{}}Rank Pooling\\ (Optical Flow enhanced)\end{tabular} & 69.7\% & 71.2\% & 78.5\% \\ \begin{tabular}[c]{@{}c@{}}Rank Pooling \\ (Optical Flow enhanced) \\ + Optical Flow\end{tabular} & - & - & 83.8\% \\ \begin{tabular}[c]{@{}c@{}}Rank Pooling (Optical Flow enhanced) \\ + Optical Flow (ensemble 12 \&16 frame)\end{tabular} & - & - & \underline{\textbf{88.3\%}} \\ \bottomrule \end{tabular} \end{table} To further overcome the severe overfitting problem of networks on the RGB stream, we concatenate the Optical Flow stream with the RGB stream to enhance the learning procedure. However, as shown in Table \ref{tab:stream}, the benefit is limited. We assume this is because the RGB stream still learns irrelevant features that lead to local optima. Thus we propose to use a more explicit and efficient representation of RGB frames, Rank Pooling, to tackle the problem. By introducing the Rank Pooling representation, the overfitting problem is relieved (Rank Pooling 78.5\% vs. RGB 58.9\%), as shown in the third row of Table \ref{tab:stream}. The best result is achieved by ensembling the two-stream results at clip lengths of 12 and 16 frames (all the data augmentations are implemented in all these frameworks).
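As a concrete illustration of how the Rank Pooling input can be produced, the sketch below computes a dynamic-image-style representation using a commonly used approximation (approximate rank pooling weights applied to the running averages $I_t$) instead of solving Eq. \ref{eq:rankpooling} exactly; function and variable names are illustrative and this is not the authors' implementation.
\begin{verbatim}
import numpy as np

def approximate_rank_pooling(frames):
    # frames: array of shape (T, H, W, C), one video clip.
    # Returns a single dynamic image d of shape (H, W, C).
    frames = np.asarray(frames, dtype=np.float64)
    T = frames.shape[0]
    # I_t in Eq. (2): running average of the frames up to time t
    running_avg = np.cumsum(frames, axis=0)
    running_avg = running_avg / np.arange(1, T + 1)[:, None, None, None]
    # approximate rank pooling weights alpha_t = 2t - T - 1
    weights = 2.0 * np.arange(1, T + 1) - T - 1
    d = np.tensordot(weights, running_avg, axes=(0, 0))
    # rescale to image range so it can be fed to the C3D network
    d = (d - d.min()) / (d.max() - d.min() + 1e-8) * 255.0
    return d
\end{verbatim}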
\subsection{Other experimental settings} Data augmentation techniques such as random cropping and horizontal flipping have proved very effective for avoiding over-fitting. Here, we implement the same two data augmentation techniques as \cite{wanglinm}: 1. a corner cropping strategy, in which only the 4 corners and the center of the images are cropped; 2. a horizontal flipping strategy, which enlarges the training set to twice its original size. We fix the input image size at $112 \times 112$. The clip length is 16 frames (ensembled with 12 frames). The training parameters are set as 32, 0.1, 0.9, 10, 200 for the batch size, the initial learning rate, the momentum, the learning rate patience, and the number of epochs, respectively. The optimizer is standard SGD. The optical flow is extracted by an OpenCV wrapper for TV-L1 optical flow and then processed by FlowNet2 \footnote[2]{https://github.com/lmb-freiburg/flownet2-docker} to generate 2-channel frames. The implementation platform is PyTorch with a single GPU: an NVIDIA V100 (RAM: 32 GB). \section{Conclusions} In this work, we propose a data-efficient two-stream framework that can train a model from scratch on small datasets while achieving state-of-the-art results. By introducing a TCDC network on an Optical Flow guided Rank Pooling stream, we can substantially reduce the overfitting problem when dealing with small datasets. The method is validated on the action recognition track of the ECCV 2020 VIPriors challenges. The results show that our method can achieve promising performance even without a model pre-trained on a large-scale dataset. \bibliographystyle{splncs04} \bibliography{eccv2020submission} \end{document}
https://openreview.net/forum?id=R6YWiPVOQBo
https://arxiv.org/abs/2008.03996
Evaluate the paper on "An Efficient Optical Flow Stream Guided Framework for Action Recognition" based on its strengths and weaknesses.
Interesting method, unclear explanation #### 1. [Summary] In 2-3 sentences, describe the key ideas, experiments, and their significance. The paper proposes a new temporal convolution operator (3D TCDC), which combines a vanilla 3D convolution operator with a "Temporal Central Difference" term, and an optical flow guided Rank Pooling operation to compress the raw RGB input stream into a more compact single vector. A two-stream network leverages both optical flow and the rank pooling representation to perform the task of action recognition. The method is evaluated on the VIPriors action recognition dataset and significant performance improvements are shown. #### 2. [Strengths] What are the strengths of the paper? Clearly explain why these aspects of the paper are valuable. * I like that the authors propose 3D TCDC as a new fundamental building block for 3D CNNs; the method seems interesting. * The method seems effective on the dataset it has been evaluated on. #### 3. [Weaknesses] What are the weaknesses of the paper? Clearly explain why these aspects of the paper are weak. * The paper is too compact and omits some prerequisites that would make the method much easier to understand. Especially section 3 would benefit from a more detailed explanation: * What does the temporal CD term in equation (1) do and what is the motivation behind it? It would have helped to provide a short recap of [13]. * Similarly for the Rank Pooling operation: what is the motivation behind equation (2)? Which variable is optimized and what does the optimization represent? What are $\omega$, $\delta$, $\xi$? The authors could have at least referred to previous work on rank pooling. * How are the two streams fused, i.e. what is "Probability fusion" in Fig. 1? * It would have been nice if the method had been evaluated on multiple datasets. #### 4. [Overall rating] Paper rating * 6. Marginally above acceptance threshold #### 5. [Justification of rating] Please explain how the strengths and weaknesses aforementioned were weighed in for the rating. The proposed method seems interesting and effective. However, the paper provides too little explanation of the method. Nevertheless, I'm willing to accept the paper in the hope that the authors can further elaborate on the method for the camera-ready version. #### 6. [Detailed comments] Additional comments regarding the paper (e.g. typos or other possible improvements you would like to see for the camera-ready version of the paper, if any.) * The abbreviation "C3D" is never explained. * (line 202) "We assume it’s caused by the irrelevant features with local optima." Unclear explanation. * See 3. for additional comments. Rating: 6: Marginally above acceptance threshold Confidence: 4: The reviewer is confident but not absolutely certain that the evaluation is correct
Evaluate the strengths and weaknesses of the paper, focusing on the clarity of the method explanation and the effectiveness of the proposed approach.
The proposed method is somewhat novel, but it lacks ablation studies to validate the effects of using single modalities and cross-modality subspaces. This paper presents a speech-driven gesture generation method based on multimodal representation learning. Each modality is projected to two distinct subspaces: modality-invariant and modality-specific. Gradient reversal layer based adversarial classifier and modality reconstruction decoders are used during training. Average human-likeness ratings below 50% were achieved, indicating that the proposed approach was not enough to generate natural gesture motions. Ablation studies should be included to show the effects of modality-invariant only, or modality-specific only. The video sample on the anonymous webpage shows an example of the generated motion. I am not aware of what the other dialogue segments used in the evaluation look like, but in the sample the target speaker is basically in listening mode, so the moments in which co-speech gestures are generated are very short. Rating: 6: Marginally above acceptance threshold Confidence: 4: The reviewer is confident but not absolutely certain that the evaluation is correct
\documentclass[manuscript]{acmart} \usepackage{subfigure} \AtBeginDocument{% \providecommand\BibTeX{{% \normalfont B\kern-0.5em{\scshape i\kern-0.25em b}\kern-0.8em\TeX}}} \copyrightyear{2022} \acmYear{2022} \setcopyright{rightsretained} \acmConference[ICMI '22]{INTERNATIONAL CONFERENCE ON MULTIMODAL INTERACTION}{November 7--11, 2022}{Bengaluru, India} \acmBooktitle{INTERNATIONAL CONFERENCE ON MULTIMODAL INTERACTION (ICMI '22), November 7--11, 2022, Bengaluru, India}\acmDOI{10.1145/3536221.3558066} \acmISBN{978-1-4503-9390-4/22/11} \begin{document} \title{The ReprGesture entry to the GENEA Challenge 2022} \author{Sicheng Yang} \email{yangsc21@mails.tsinghua.edu.cn} \affiliation{% \institution{Tsinghua University} \city{Shenzhen} \country{China} } \author{Zhiyong Wu} \authornote{Corresponding authors} \affiliation{% \institution{Tsinghua University} \city{Shenzhen} \country{China} } \affiliation{% \institution{The Chinese University of Hong Kong} \city{Hong Kong SAR} \country{China} } \email{zywu@sz.tsinghua.edu.cn} \orcid{0000-0001-8533-0524} \author{Minglei Li} \authornotemark[1] \email{liminglei29@huawei.com} \affiliation{% \institution{Huawei Cloud Computing Technologies Co., Ltd} \city{Shenzhen} \country{China} } \author{Mengchen Zhao} \email{zhaomengchen@huawei.com} \affiliation{% \institution{Huawei Noah's Ark Lab} \city{Shenzhen} \country{China} } \author{Jiuxin Lin} \email{linjx21@mails.tsinghua.edu.cn} \author{Liyang Chen} \email{cly21@mails.tsinghua.edu.cn} \author{Weihong Bao} \email{bwh21@mails.tsinghua.edu.cn} \affiliation{% \institution{Tsinghua University} \city{Shenzhen} \country{China} } \renewcommand{\shortauthors}{Sicheng Yang et al.} \begin{abstract} This paper describes the ReprGesture entry to the Generation and Evaluation of Non-verbal Behaviour for Embodied Agents (GENEA) challenge 2022. The GENEA challenge provides the processed datasets and performs crowdsourced evaluations to compare the performance of different gesture generation systems. In this paper, we explore an automatic gesture generation system based on multimodal representation learning. We use WavLM features for audio, FastText features for text, and position and rotation matrix features for gesture. Each modality is projected to two distinct subspaces: modality-invariant and modality-specific. To learn inter-modality-invariant commonalities and capture the characteristics of modality-specific representations, a gradient reversal layer based adversarial classifier and modality reconstruction decoders are used during training. The gesture decoder generates proper gestures using all representations and features related to the rhythm in the audio. Our code, pre-trained models and demo are available at \url{https://github.com/YoungSeng/ReprGesture}.
\end{abstract} \begin{CCSXML} <ccs2012> <concept> <concept_id>10010147.10010178</concept_id> <concept_desc>Computing methodologies~Artificial intelligence</concept_desc> <concept_significance>500</concept_significance> </concept> <concept> <concept_id>10003120.10003121</concept_id> <concept_desc>Human-centered computing~Human computer interaction (HCI)</concept_desc> <concept_significance>500</concept_significance> </concept> <concept> <concept_id>10010147.10010178.10010179</concept_id> <concept_desc>Computing methodologies~Natural language processing</concept_desc> <concept_significance>500</concept_significance> </concept> </ccs2012> \end{CCSXML} \ccsdesc[500]{Computing methodologies~Artificial intelligence} \ccsdesc[500]{Human-centered computing~Human computer interaction (HCI)} \ccsdesc[500]{Computing methodologies~Natural language processing} \keywords{gesture generation, data-driven animation, modality-invariant, modality-specific, representation learning, deep learning} \maketitle \section{Introduction} Nonverbal behavior plays a key role in conveying messages in human communication \cite{10.1145/3397481.3450692}, including facial expressions, hand gestures and body gestures. Co-speech gestures enable better self-expression, and in the virtual world they help to present a more realistic digital avatar. Gesture generation studies how to generate human-like, natural, speech-oriented gestures. There are many different techniques for gesture generation. In this paper, we focus on the task of speech-driven gesture generation. Representative speech-driven gesture generation methods are either rule-based or data-driven \cite{10.1145/3414685.3417838}. Many data-driven works for gesture generation are based on multimodal fusion and representation learning. Taras et al. map speech acoustic and semantic features into continuous 3D gestures \cite{10.1145/3382507.3418815}. Youngwoo et al. propose an end-to-end model to generate co-speech gestures using text, audio, and speaker identity \cite{10.1145/3414685.3417838}. Jing et al. sample gestures in a variational autoencoder (VAE) latent space and infer rhythmic motion from speech prosody to address the non-deterministic mapping from speech to gesture \cite{Xu2022FreeformBM}. Taras et al. propose a speech-driven gesture-production method based on representation learning \cite{doi:10.1080/10447318.2021.1883883}. Xian et al. propose a hierarchical audio feature extractor and pose inferrer to learn discriminative representations \cite{liu2022learning}. Jing et al. present a co-speech gesture generation model whose latent space is split into shared code and motion-specific code \cite{9710107}. However, gesture generation is a challenging task because of the cross-modality learning issue and the weak correlation between speech and gestures. The inherent heterogeneity of the representations creates a gap among different modalities. It is necessary to address the weak correlation among different modalities and provide a holistic view of the multimodal data during gesture generation. Inspired by \cite{10.1145/3414685.3417838} and \cite{10.1145/3394171.3413678}, we propose a gesture generation system based on multimodal representation learning. In particular, we first extract features of audio, text and gestures. Then, a system consisting of four components is proposed: (1) Each modality is projected to two distinct representations: modality-invariant and modality-specific.
(2) A gradient reversal layer based adversarial classifier is used to reduce the discrepancy among the modality-invariant representations of the different modalities. (3) Modality decoders are used to reconstruct each modality, allowing modality-specific representations to capture the details of their respective modality. (4) The gesture decoder takes six modality representations (two per modality) and rhythm-related features in the audio as its input and generates proper gestures. The main contributions of our work are: (1) A multimodal representation learning approach is proposed for gesture generation, which ensures comprehensive decoupling of multimodal data. (2) To solve the problem of the heterogeneity of different modalities in feature fusion, each modality is projected to two subspaces (modality-invariant and modality-specific) to get multimodal representations using domain learning and modality reconstruction. (3) Ablation studies demonstrate the role of different components in the system. The task of the GENEA 2022 challenge is to generate corresponding gestures from the given audio and text. A complete task description can be found in \cite{yoon2022genea}. We submitted our system to the GENEA 2022 challenge to be evaluated with other gesture generation systems in a large user study. \section{Method} \begin{figure}[h] \centering \includegraphics[width=0.95\linewidth]{fig/1_3.pdf} \caption{Gesture generation through modality-invariant and modality-specific subspaces.} \Description{Gesture generation through modality-invariant and modality-specific subspaces.} \label{Architecture} \end{figure} \subsection{The architecture of the proposed system} As shown in Figure \ref{Architecture}, the system generates a sequence of human gestures from sequences $\mathbf{u}_{m}$ ($m \in \{t,a,g\}$) that contain the features of text, audio and seed gestures. The architecture of the proposed model consists of five modules: feature extraction, modality representation, modality reconstruction, domain learning and gesture generation. The following describes each of these modules in detail. \subsubsection{Feature extraction} ~\\ For each modality, the feature extraction pipeline is as follows: \begin{itemize} \item Text: We first use FastText \cite{10.1162/tacl_a_00051} to get the word embeddings. Padding tokens are inserted to make the words temporally match the gestures by following \cite{10.1145/3414685.3417838}. One-dimensional (1D) convolutional layers are then adopted to generate a 32-D text feature sequence $\mathbf{U}_{t}$ (`$t$' for `text') from the 300-D word embeddings. \item Audio: All audio recordings are downsampled to 16 kHz, and features are generated from the pre-trained WavLM Large model \cite{DBLP:journals/corr/abs-2110-13900}. We further adjust the sizes, strides and padding in the 1D convolutional layers to reduce the feature dimension from 1024 to 128, forming the final audio feature sequence $\mathbf{U}_{a}$ (`$a$' for `audio'). \item Gesture: Due to the poor quality of hand motion-capture, we only use 18 joints corresponding to the upper body without hands or fingers. Root normalization is used to make objects face the same direction. We apply standard normalization (zero mean and unit variance) to all joints. Seed gestures for the first few frames are utilized for better continuity between consecutive syntheses, as in \cite{10.1145/3414685.3417838}.
On top of these, position and $3 \times 3$ rotation matrix features are computed, and the dimension of the final gesture feature sequence $\mathbf{U}_{g}$ (`$g$' for `gesture') is 216. \end{itemize} \subsubsection{Modality representation} ~\\ First, for each modality $m \in \{t,a,g\}$, we use a linear layer with leaky ReLU activation and layer normalization to map its feature sequence $\mathbf{U}_{m}$ into a new feature sequence $\mathbf{u}_{m} \in \mathbb{R}^{T \times d_{h}}$ with the same feature dimension $d_{h}$. Then, we project each sequence $\mathbf{u}_{m}$ to two distinct representations: modality-invariant $\mathbf{h}_{m}^{c}$ and modality-specific $\mathbf{h}_{m}^{p}$. Here, $\mathbf{h}_{m}^{c}$ learns a shared representation in a common subspace with distributional similarity constraints \cite{8715409}, while $\mathbf{h}_{m}^{p}$ captures the unique characteristics of that modality. We derive the representations using simple feed-forward neural encoding functions: \begin{equation} \mathbf{h}_{m}^{c}=E_{c}\left(\mathbf{u}_{m} ; \theta^{c}\right), \quad \mathbf{h}_{m}^{p}=E_{p}\left(\mathbf{u}_{m} ; \theta_{m}^{p}\right) \end{equation} Encoder $E_{c}$ shares the parameters $\theta^{c}$ across all three modalities, whereas $E_{p}$ assigns separate parameters $\theta_{m}^{p}$ to each modality. \subsubsection{Representation learning} ~\\ Domain learning can improve a model’s ability to extract domain-invariant features \cite{NIPS2016_45fbc6d3}. We use an adversarial classifier to minimize a domain loss that reduces the discrepancy among the shared representations of the modalities. The domain loss can be formulated as: \begin{equation} \mathcal{L}_{domain}=-\sum_{m \in\{t, a, g\}} \mathbb{E}\left[ \log \left(D_{repr}(d_m)\right)\right] \end{equation} where $D_{repr}$ is a feed-forward neural discriminator and $d_m$ is the result of applying gradient reversal to $\mathbf{h}_{m}^{p}$. The modality reconstruction loss $\mathcal{L}_{\text {recon}}$ is computed between the reconstructed modality and the original input $\mathbf{u}_{m}$. It is used to ensure that the hidden representations capture the details of their respective modality. Specifically, a modality decoder $D$ is proposed to reconstruct $\mathbf{u}_{m}$: \begin{equation} \hat{\mathbf{u}}_{m}=D\left(\mathbf{h}_{m}^{c}+\mathbf{h}_{m}^{p} ; \theta^{d}\right) \end{equation} where $\theta^{d}$ are the parameters of the modality decoder. The modality reconstruction loss can then be computed as: \begin{equation} \mathcal{L}_{\text {recon}}=\frac{1}{3}\left(\sum_{m \in\{t, a, g\}} \frac{\left\|\mathbf{u}_{m}-\hat{\mathbf{u}}_{m}\right\|_{2}^{2}}{d_{h}}\right) \end{equation} where $\|\cdot\|_{2}^{2}$ is the squared $L_2$-norm. \subsubsection{Gesture generation} ~\\ \begin{figure}[h] \centering \includegraphics[width=0.82\linewidth]{fig/2_.pdf} \caption{Architecture of the gesture generation module.} \Description{Architecture of the gesture generation module.} \label{generation} \end{figure} We use a generative adversarial network (GAN) based gesture decoder to generate gestures. Gestures are closely related to rhythm and beat; thus we concatenate the rhythm-related audio features (pitch, energy and volume) with the six stacked modality representations and feed them into Transformer encoders with multi-head self-attention as the generator, as shown in Figure \ref{generation}.
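For clarity, the following is a minimal PyTorch-style sketch of the shared and private encoders $E_c$ and $E_p$ described above, together with the gradient reversal used for the domain loss. It is an illustration under our own assumptions: the layer choices, the modality classifier and the exact placement of the gradient reversal are not taken from the released code, while the feature dimension $d_h=48$ follows the experimental setup below.
\begin{verbatim}
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    # Gradient reversal: identity in the forward pass, negated (scaled)
    # gradient in the backward pass.
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

class ModalityEncoders(nn.Module):
    # Sketch of E_c (shared across modalities) and E_p (one per modality);
    # the single-linear-layer encoders and the modality classifier are
    # illustrative assumptions.
    def __init__(self, d_h=48, n_modalities=3):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(d_h, d_h), nn.Sigmoid())
        self.private = nn.ModuleList(
            [nn.Sequential(nn.Linear(d_h, d_h), nn.Sigmoid())
             for _ in range(n_modalities)])
        self.modality_clf = nn.Linear(d_h, n_modalities)

    def forward(self, u_list, lambd=1.0):
        h_c = [self.shared(u) for u in u_list]                  # invariant
        h_p = [enc(u) for enc, u in zip(self.private, u_list)]  # specific
        # adversarial modality classification on gradient-reversed features
        # (applied here to the shared representations, as in the usual
        # domain-adversarial setup; the exact choice may differ in the paper)
        logits = [self.modality_clf(GradReverse.apply(h, lambd)) for h in h_c]
        return h_c, h_p, logits
\end{verbatim}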
The generator is trained using $\mathcal{L}_{gesture}$, consisting of a Huber loss and an MSE loss, and the discriminator is trained with $\mathcal{L}_{GAN}$: \begin{equation} \mathcal{L}_{gesture}=\alpha \cdot \mathbb{E}\left[\frac{1}{t} \sum_{i=1}^{t} \operatorname{HuberLoss}\left(g_{i}, \hat{g}_{i}\right)\right] + \beta \cdot \mathbb{E}\left[\frac{1}{t} \sum_{i=1}^{t} \left\|g_{i}-\hat{g}_{i}\right\|_{2}^{2}\right] \label{Lgesture} \end{equation} \begin{equation} \mathcal{L}_{GAN}=-\mathbb{E}[\log (D_{gesture}(g))]-\mathbb{E}[\log (1-D_{gesture}(\hat{g}))] \end{equation} where $D_{gesture}$ is the gesture discriminator, a multilayered bidirectional gated recurrent unit (GRU) \cite{KyunghyunCho2014LearningPR} that outputs a binary decision for each time step, $t$ is the length of the gesture sequence, $g_i$ is the $i$th human gesture and $\hat{g}_i$ is the $i$th generated gesture. The loss of the proposed system can be computed as: \begin{equation} \mathcal{L}_{total} = \mathcal{L}_{gesture} + \gamma \cdot \mathcal{L}_{GAN} + \delta \cdot \mathcal{L}_{domain} + \epsilon \cdot \mathcal{L}_{recon} \label{total} \end{equation} \subsection{Data processing and experiment setup} \subsubsection{Data and data processing} ~\\ In the challenge, the Talking With Hands 16.2M dataset \cite{9010909} is used as the standard dataset. Each video is separated into two independent sides with one speaker each. The audio and text in the dataset have been aligned. For more details, please refer to the challenge paper \cite{yoon2022genea}. We note that the data in the training, validation and test sets are extremely unbalanced, so we only use the data from the speaker with identity ``1'' for training. We believe that if the speech and gesture data come from the same person, the generated gesture behavior will better match the speech. \subsubsection{Experiment setup} ~\\ The proposed system is trained on the training data only, using the ADAM \cite{2014Adam} optimizer (learning rate 1e-4, $\beta_1$ = 0.5, $\beta_2$ = 0.98) with a batch size of 128 for 100 steps. We set $\alpha=300$, $\beta=50$ for Equation (\ref{Lgesture}) and $\gamma=5, \delta=0.1, \epsilon=0.1$ (we noticed in our experiments that too large values of $\delta$ and $\epsilon$ lead to non-convergence) for Equation (\ref{total}). There is a warm-up period of 10 epochs in which $\mathcal{L}_{GAN}$ is not used ($\gamma$ = 0). The feature dimension $d_h$ of the sequences $\textbf{u}_m$ is 48. During training, each training sample of 100 frames is sampled with a stride of 10 from the valid motion sections; the initial 10 frames are used as seed gesture poses and the model is trained to generate the remaining 90 poses (3 seconds). \section{Evaluation} \subsection{Evaluation setup} The GENEA Challenge 2022 evaluation is divided into two tiers, and we participated in the upper-body motion tier. The challenge organizers conducted a detailed evaluation comparing all submitted systems \cite{yoon2022genea}. The challenge evaluates human-likeness to assess motion quality, and appropriateness to assess how well the gestures match the speech. The evaluation is based on the HEMVIP methodology \cite{10.1145/3462244.3479957} and the Mean Opinion Score (MOS) \cite{1996Methods}. In total, 11 systems participated in the upper-body tier. The following abbreviations are used to represent each model in the evaluation: \begin{itemize} \item UNA: Ground truth (`U' for the upper-body tier, `NA' for `natural').
\item UBT: The official text-based baseline \cite{8793720}, which takes transcribed speech text with word-level timing information as the input modality (`B' for `baseline', `T' for `text'). \item UBA: The official audio-based baseline \cite{10.1145/3308532.3329472}, which takes speech audio into account when generating output (`A' for `audio'). \item USJ–USQ: The 8 participants’ submissions to the upper-body tier (ours is USN). \end{itemize} For more details about the evaluation studies, please refer to the challenge paper \cite{yoon2022genea}. \subsection{Subjective evaluation results and discussion} \subsubsection{Human-likeness Evaluation} ~\\ \begin{figure}[h] \centering \subfigure[Box plots visualizing the rating distribution in the upper-body study.]{ \label{Fig.sub.1} \includegraphics[width=0.43\linewidth]{fig/upper-body_human-likeness_boxplot.pdf}} \quad \subfigure[Significance of pairwise differences between conditions.]{ \label{Fig.sub.2} \includegraphics[width=0.43\linewidth]{fig/upper-body_human-likeness_median_pref.pdf}} \caption{(a) Red bars are the median ratings (each with a 0.05 confidence interval); yellow diamonds are mean ratings (also with a 0.05 confidence interval). Box edges are at the 25 and 75 percentiles, while whiskers cover 95\% of all ratings for each condition. (b) White means that the condition listed on the ${y}$-axis was rated significantly above the condition on the $x$-axis, black means the opposite ($y$ rated below $x$), and grey means no statistically significant difference at the level $\alpha$ = 0.05 after Holm-Bonferroni correction.} \Description{Box plots visualizing the rating distribution in the upper-body study.} \label{Upper_result} \end{figure} In this evaluation, study participants are asked to rate ``How human-like does the gesture motion appear?'' on a scale from 0 (worst) to 100 (best). Box plots and significance comparisons are shown in Figure \ref{Upper_result}. Our system (USN) receives a median score of 44 and a mean score of 44.2, and is ranked fourth among the participating systems. \subsubsection{Appropriateness evaluation} ~\\ \begin{figure}[h] \centering \includegraphics[width=0.45\linewidth]{fig/upper-body_appropriateness_matched_pref.pdf} \caption{Bar plots visualizing the response distribution in the appropriateness studies. The blue bar (bottom) represents responses where subjects preferred the matched motion, the light grey bar (middle) represents tied (``They are equal'') responses, and the red bar (top) represents responses preferring mismatched motion, with the height of each bar being proportional to the fraction of responses in each category. The black horizontal line bisecting the light grey bar shows the proportion of matched responses after splitting ties, each with a 0.05 confidence interval. The dashed black line indicates chance-level performance.} \Description{Bar plots visualizing the response distribution in the appropriateness study.} \label{appropriateness} \end{figure} In this evaluation, participants are asked to choose the character on the left, the character on the right, or to indicate that the two are equally well matched, in response to the question ``Please indicate which character’s motion best matches the speech, both in terms of rhythm and intonation and in terms of meaning.'' Bar plots are shown in Figure \ref{appropriateness}. Our system (USN) receives a ``percent matched'' score of 54.6\%, which indicates how often participants preferred matched over mismatched motion in terms of appropriateness.
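For illustration, the ``percent matched'' statistic with ties split evenly between the matched and mismatched sides can be computed from raw response counts as in the minimal sketch below (function and variable names are our own, not challenge code):
\begin{verbatim}
def percent_matched(n_matched, n_tied, n_mismatched):
    # Proportion of matched responses after splitting ties evenly,
    # expressed as a percentage (illustrative helper).
    total = n_matched + n_tied + n_mismatched
    return 100.0 * (n_matched + 0.5 * n_tied) / total
\end{verbatim}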
Our system is ranked seventh in appropriateness among the participants’ submissions. It should be noted that the difference between our system and the five higher-ranked systems (USL, UBA, USO, USK and USJ) is not significant. Furthermore, if we only consider the ratio of matched motion, i.e., the blue bar in Figure \ref{appropriateness}, our system is ranked fifth among the participating systems. \subsection{Ablation studies} \begin{table}[] \caption{Ablation study results. `w/o' is short for `without'. Bold indicates the best metric, i.e. the one closest to the ground truth.} \label{tab:Ablation} \resizebox{\textwidth}{!} { \begin{tabular}{cccccccc} \toprule Name & Average jerk & \begin{tabular}[c]{@{}c@{}}Average \\ acceleration\end{tabular} & \begin{tabular}[c]{@{}c@{}}Global \\ CCA\end{tabular} & \begin{tabular}[c]{@{}c@{}}CCA for \\ each sequence\end{tabular} & \begin{tabular}[c]{@{}c@{}}Hellinger\\ distance average\end{tabular} $\downarrow$ & \begin{tabular}[c]{@{}c@{}}FGD on \\ feature space\end{tabular} $\downarrow$ & \begin{tabular}[c]{@{}c@{}}FGD on raw \\ data space\end{tabular} $\downarrow$ \\ \midrule Ground Truth (GT) & 18149.74 $\pm$ 2252.61 & 401.24 $\pm$ 67.57 & 1.000 & 1.00 $\pm$ 0.00 & 0.0 & 0.0 & 0.0 \\ ReprGesture & 2647.59 $\pm$ 1200.05 & 146.90 $\pm$ 46.09 & 0.726 & \textbf{0.95 $\pm$ 0.02} & \textbf{0.155} & 0.86 & \textbf{184.753} \\ w/o WavLM & 1775.09 $\pm$ 512.08 & 77.53 $\pm$ 21.92 & \textbf{0.761} & 0.94 $\pm$ 0.03 & 0.353 & 3.054 & 321.383 \\ w/o $\mathcal{L}_{GAN}$ & \textbf{9731.54 $\pm$ 3636.06} & \textbf{242.15 $\pm$ 81.81} & 0.664 & 0.93 $\pm$ 0.03 & 0.342 & 2.053 & 277.539 \\ w/o $\mathcal{L}_{recon}$ & 533.95 $\pm$ 193.18 & 39.49 $\pm$ 12.23 & 0.710 & 0.93 $\pm$ 0.03 & 0.283 & 0.731 & 659.150 \\ w/o $\mathcal{L}_{domain}$ & 2794.79 $\pm$ 1153.75 & 135.62 $\pm$ 25.13 & 0.707 & 0.94 $\pm$ 0.03 & 0.267 & \textbf{0.653} & 874.209 \\ w/o Repr & 2534.34 $\pm$ 1151.38 & 123.02 $\pm$ 40.90 & 0.723 & 0.94 $\pm$ 0.04 & 0.298 & 0.829 & 514.706 \\ \bottomrule \end{tabular} } \end{table} Moreover, we conduct ablation studies to analyze the performance effects of the different components in the system. The GENEA challenge computes several objective metrics of motion quality using the GENEA numerical evaluation tools\footnote{\url{https://github.com/genea-workshop/genea_numerical_evaluations}}. For the calculation and meaning of these objective evaluation metrics, please refer to the challenge paper \cite{yoon2022genea}. A perfectly natural system should have average jerk and acceleration very similar to natural motion. The closer the canonical correlation analysis (CCA) coefficient is to 1, the better. Lower Hellinger distance and Fr\'{e}chet gesture distance (FGD) are better. To compute the FGD, we train an autoencoder using the training set of the challenge. The results of our ablation studies are summarized in Table \ref{tab:Ablation}. The results show that when we do not use WavLM to extract audio features but use 1D convolution instead, the Hellinger distance average and the FGD on feature space are the worst. When the model is trained without the GAN loss, the average jerk and average acceleration are better, but the global CCA and the CCA for each sequence decrease. When the reconstruction loss is removed, the average jerk and average acceleration are the worst; the generated gesture movements are few and of small range.
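For reference, the FGD reported above can be computed as a standard Fr\'{e}chet (FID-style) distance between Gaussians fitted to autoencoder features; the sketch below is a generic implementation and is not necessarily identical to the evaluation scripts used in the challenge.
\begin{verbatim}
import numpy as np
from scipy import linalg

def frechet_gesture_distance(feats_real, feats_gen):
    # feats_real, feats_gen: arrays of shape (N, D) holding latent features
    # of real and generated gesture clips (e.g. from the autoencoder above).
    mu_r, mu_g = feats_real.mean(axis=0), feats_gen.mean(axis=0)
    cov_r = np.cov(feats_real, rowvar=False)
    cov_g = np.cov(feats_gen, rowvar=False)
    covmean, _ = linalg.sqrtm(cov_r.dot(cov_g), disp=False)
    if np.iscomplexobj(covmean):
        covmean = covmean.real  # discard tiny imaginary parts
    diff = mu_r - mu_g
    return float(diff.dot(diff) + np.trace(cov_r + cov_g - 2.0 * covmean))
\end{verbatim}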
When the model is trained using the Central Moment Discrepancy (CMD) loss \cite{10.1145/3394171.3413678} instead of the domain loss, the best FGD on feature space and the worst FGD on raw data space are obtained. When the modality representations are removed (w/o Repr) and we feed the modality sequences $\mathbf{u}_t$, $\mathbf{u}_a$ and $\mathbf{u}_g$ directly to the gesture decoder and only use the $\mathcal{L}_{task}$ loss, the performance on all metrics deteriorates except for the FGD on feature space. \section{Conclusions and discussion} In this paper, we propose a gesture generation system based on multimodal representation learning, where the considered modalities include text, audio and gesture. Each modality is projected into two different subspaces: modality-invariant and modality-specific. To learn the commonality among different modalities, an adversarial classifier based on a gradient reversal layer is used. To capture the features of the modality-specific representations, we adopt a modality reconstruction decoder. The gesture decoder utilizes all representations and audio rhythmic features to generate appropriate gestures. In the subjective evaluation, our system is ranked fourth among the participating systems in human-likeness and seventh in appropriateness. However, for appropriateness, the differences between our system and the five higher-ranked systems are not significant. For the appropriateness evaluation, whether there is a relationship between the subjective ratings and the segment duration deserves to be investigated. The segments are around 8 to 10 seconds long during evaluation \cite{yoon2022genea}. We believe that longer segments (e.g. 20-30 seconds) might produce more pronounced and convincing appropriateness results. There is room for improvement in this research. First, we only use data from one speaker to learn gestures due to the unbalanced dataset issue. Such a one-to-one mapping could produce boring and homogeneous gestures during inference. Second, the finger motions are not considered because of the low motion-capture quality. Such finger motions could be included in the future if data cleanup procedures are conducted. Third, besides text and audio, more modalities (e.g. emotions, facial expressions and semantic meaning of gestures \cite{Liu2022BEATAL}) could be taken into consideration to generate more appropriate gestures. \begin{acks} This work is supported by Shenzhen Science and Technology Innovation Committee (WDZC20200818121348001), National Natural Science Foundation of China (62076144) and Shenzhen Key Laboratory of next generation interactive media innovative technology (ZDSYS20210623092001004). \end{acks} \bibliographystyle{ACM-Reference-Format} \bibliography{my} \end{document}
https://openreview.net/forum?id=atWaELmguNj7
https://arxiv.org/abs/2208.12133
Please evaluate the effectiveness of our proposed speech-driven gesture generation method and provide feedback on whether the generated gesture motions appear natural.
The evaluation and supplementary materials show good results. The paper proposes a gesture generation method by incorporating representation learning into previously proposed network architectures. Six representations for audio, text and gesture (two per modality) are used. The method takes audio, text, and seed gestures as input, and the output is a sequence of gestures. The paper is well organized and written, but it would be easier to read if new lines were put after sub-sub-section titles. The technical descriptions are well written and the experiments appear reproducible. Some research, such as [10], has already addressed the importance of audio or human-pose representations for gesture generation. So the contribution is incremental, but having modality-mixed representations is somewhat novel. The evaluation and supplementary materials show good results. Having an ablation study also helps us to better understand the effects of the proposed method. However, the effects of the proposed representations were not very significant. Rating: 9: Top 15% of accepted papers, strong accept Confidence: 4: The reviewer is confident but not absolutely certain that the evaluation is correct
\documentclass[manuscript]{acmart} \usepackage{subfigure} \AtBeginDocument{% \providecommand\BibTeX{{% \normalfont B\kern-0.5em{\scshape i\kern-0.25em b}\kern-0.8em\TeX}}} \copyrightyear{2022} \acmYear{2022} \setcopyright{rightsretained} \acmConference[ICMI '22]{INTERNATIONAL CONFERENCE ON MULTIMODAL INTERACTION}{November 7--11, 2022}{Bengaluru, India} \acmBooktitle{INTERNATIONAL CONFERENCE ON MULTIMODAL INTERACTION (ICMI '22), November 7--11, 2022, Bengaluru, India}\acmDOI{10.1145/3536221.3558066} \acmISBN{978-1-4503-9390-4/22/11} \begin{document} \title{The ReprGesture entry to the GENEA Challenge 2022} \author{Sicheng Yang} \email{yangsc21@mails.tsinghua.edu.cn} \affiliation{% \institution{Tsinghua University} \city{Shenzhen} \country{China} } \author{Zhiyong Wu} \authornote{Corresponding authors} \affiliation{% \institution{Tsinghua University} \city{Shenzhen} \country{China} } \affiliation{% \institution{The Chinese University of Hong Kong} \city{Hong Kong SAR} \country{China} } \email{zywu@sz.tsinghua.edu.cn} \orcid{0000-0001-8533-0524} \author{Minglei Li} \authornotemark[1] \email{liminglei29@huawei.com} \affiliation{% \institution{Huawei Cloud Computing Technologies Co., Ltd} \city{Shenzhen} \country{China} } \author{Mengchen Zhao} \email{zhaomengchen@huawei.com} \affiliation{% \institution{Huawei Noah's Ark Lab} \city{Shenzhen} \country{China} } \author{Jiuxin Lin} \email{linjx21@mails.tsinghua.edu.cn} \author{Liyang Chen} \email{cly21@mails.tsinghua.edu.cn} \author{Weihong Bao} \email{bwh21@mails.tsinghua.edu.cn} \affiliation{% \institution{Tsinghua University} \city{Shenzhen} \country{China} } \renewcommand{\shortauthors}{Sicheng Yang et al.} \begin{abstract} This paper describes the ReprGesture entry to the Generation and Evaluation of Non-verbal Behaviour for Embodied Agents (GENEA) challenge 2022. The GENEA challenge provides the processed datasets and performs crowdsourced evaluations to compare the performance of different gesture generation systems. In this paper, we explore an automatic gesture generation system based on multimodal representation learning. We use WavLM features for audio, FastText features for text and position and rotation matrix features for gesture. Each modality is projected to two distinct subspaces: modality-invariant and modality-specific. To learn inter-modality-invariant commonalities and capture the characters of modality-specific representations, gradient reversal layer based adversarial classifier and modality reconstruction decoders are used during training. The gesture decoder generates proper gestures using all representations and features related to the rhythm in the audio. Our code, pre-trained models and demo are available at \url{https://github.com/YoungSeng/ReprGesture}. 
\end{abstract} \begin{CCSXML} <ccs2012> <concept> <concept_id>10010147.10010178</concept_id> <concept_desc>Computing methodologies~Artificial intelligence</concept_desc> <concept_significance>500</concept_significance> </concept> <concept> <concept_id>10003120.10003121</concept_id> <concept_desc>Human-centered computing~Human computer interaction (HCI)</concept_desc> <concept_significance>500</concept_significance> </concept> <concept> <concept_id>10010147.10010178.10010179</concept_id> <concept_desc>Computing methodologies~Natural language processing</concept_desc> <concept_significance>500</concept_significance> </concept> </ccs2012> \end{CCSXML} \ccsdesc[500]{Computing methodologies~Artificial intelligence} \ccsdesc[500]{Human-centered computing~Human computer interaction (HCI)} \ccsdesc[500]{Computing methodologies~Natural language processing} \keywords{gesture generation, data-driven animation, modality-invaiant, modality-specific, representation learning, deep learning} \maketitle \section{Introduction} Nonverbal behavior plays a key role in conveying messages in human communication \cite{10.1145/3397481.3450692}, including facial expressions, hand gestures and body gestures. Co-speech gestures introduce better self-expression. In the virtual world, it helps to present a rather realistic digital avatar. Gesture generation studies how to generate human-like, natural, speech-oriented gestures. There are many different techniques for gesture generation. In this paper, we focus on the task of speech-driven gesture generation. Representative speech-driven gesture generation are either rule-based or data-driven \cite{10.1145/3414685.3417838}. Many data-driven works for gesture generation are based on multimodal fusion and representation learning. Taras et al. map speech acoustic and semantic features into continuous 3D gestures \cite{10.1145/3382507.3418815}. Youngwoo et al. propose an end-to-end model to generate co-speech gestures using text, audio, and speaker identity \cite{10.1145/3414685.3417838}. Jing et al. sample gesture in a variational autoencoder (VAE) latent space and infer rhythmic motion from speech prosody to address the non-deterministic mapping from speech to gesture \cite{Xu2022FreeformBM}. Taras et al. propose a speech-driven gesture-production method based on representation learning \cite{doi:10.1080/10447318.2021.1883883}. Xian et al. propose the hierarchical audio features extractor and pose inferrer to learn discriminative representations \cite{liu2022learning}. Jing et al. present a co-speech gesture generation model whose latent space is split into shared code and motion-specific code \cite{9710107}. However, gesture generation is a challenging task because of cross-modality learning issue and the weak correlation between speech and gestures. The inherent heterogeneity of the representations creates a gap among different modalities. It is necessary to address the weak correlation among different modalities and provide a holistic view of the multimodal data during gesture generation. Inspired by \cite{10.1145/3414685.3417838} and \cite{10.1145/3394171.3413678}, we propose a gesture generation system based on multimodal representation learning. In particular, we first extract features of audio, text and gestures. Then, a system consisting of four components is proposed: (1) Each modality is projected to two distinct representations: modality-invariant and modality-specific. 
(2) A gradient-reversal-layer-based adversarial classifier is used to reduce the discrepancy among the modality-invariant representations of the different modalities. (3) Modality decoders are used to reconstruct each modality, allowing the modality-specific representations to capture the details of their respective modality. (4) The gesture decoder takes the six modality representations (two per modality) and rhythm-related audio features as its input and generates appropriate gestures.

The main contributions of our work are: (1) A multimodal representation learning approach is proposed for gesture generation, which ensures a comprehensive decoupling of the multimodal data. (2) To address the heterogeneity of the different modalities during feature fusion, each modality is projected to two subspaces (modality-invariant and modality-specific) to obtain multimodal representations using domain learning and modality reconstruction. (3) Ablation studies demonstrate the role of the different components of the system.

The task of the GENEA 2022 challenge is to generate corresponding gestures from given audio and text. A complete task description can be found in \cite{yoon2022genea}. We submitted our system to the GENEA 2022 challenge to be evaluated against other gesture generation systems in a large user study.

\section{Method}
\begin{figure}[h]
  \centering
  \includegraphics[width=0.95\linewidth]{fig/1_3.pdf}
  \caption{Gesture generation through modality-invariant and modality-specific subspaces.}
  \Description{Gesture generation through modality-invariant and modality-specific subspaces.}
  \label{Architecture}
\end{figure}

\subsection{The architecture of the proposed system}
As shown in Figure \ref{Architecture}, the system generates a sequence of human gestures from the sequences $\mathbf{u}_{m}$ ($m \in \{t,a,g\}$) that contain the features of text, audio and seed gestures. The architecture of the proposed model consists of five modules: feature extraction, modality representation, modality reconstruction, domain learning and gesture generation. The following describes each of these modules in detail.

\subsubsection{Feature extraction}
~\\
For each modality, features are extracted as follows:
\begin{itemize}
\item Text: We first use FastText \cite{10.1162/tacl_a_00051} to obtain the word embeddings. Padding tokens are inserted to make the words temporally match the gestures, following \cite{10.1145/3414685.3417838}. One-dimensional (1D) convolutional layers are then adopted to generate a 32-D text feature sequence $\mathbf{U}_{t}$ (`$t$' for `text') from the 300-D word embeddings.
\item Audio: All audio recordings are downsampled to 16 kHz, and features are generated using the pre-trained WavLM Large model \cite{DBLP:journals/corr/abs-2110-13900}. We further adjust the sizes, strides and padding of the 1D convolutional layers to reduce the feature dimension from 1024 to 128, forming the final audio feature sequence $\mathbf{U}_{a}$ (`$a$' for `audio').
\item Gesture: Due to the poor quality of the hand motion capture, we only use 18 joints corresponding to the upper body, without hands or fingers. Root normalization is used to make objects face the same direction. We apply standard normalization (zero mean and unit variance) to all joints. Seed gestures for the first few frames are utilized for better continuity between consecutive syntheses, as in \cite{10.1145/3414685.3417838}.
On top of these, position and $3\times3$ rotation matrix features are computed, and the dimension of the final gesture feature sequence $\mathbf{U}_{g}$ (`$g$' for `gesture') is 216.
\end{itemize}

\subsubsection{Modality representation}
~\\
First, for each modality $m \in \{t,a,g\}$, we use a linear layer with leaky ReLU activation and layer normalization to map its feature sequence $\mathbf{U}_{m}$ into a new feature sequence $\mathbf{u}_{m} \in \mathbb{R}^{T \times d_{h}}$ with a common feature dimension $d_{h}$. Then, we project each sequence $\mathbf{u}_{m}$ to two distinct representations: a modality-invariant representation $\mathbf{h}_{m}^{c}$ and a modality-specific representation $\mathbf{h}_{m}^{p}$. The shared representation $\mathbf{h}_{m}^{c}$ is learned in a common subspace with distributional similarity constraints \cite{8715409}, while $\mathbf{h}_{m}^{p}$ captures the unique characteristics of that modality. We derive the representations using simple feed-forward neural encoding functions:
\begin{equation}
\mathbf{h}_{m}^{c}=E_{c}\left(\mathbf{u}_{m} ; \theta^{c}\right), \quad \mathbf{h}_{m}^{p}=E_{p}\left(\mathbf{u}_{m} ; \theta_{m}^{p}\right)
\end{equation}
Encoder $E_{c}$ shares its parameters $\theta^{c}$ across all three modalities, whereas $E_{p}$ has separate parameters $\theta_{m}^{p}$ for each modality.

\subsubsection{Representation learning}
~\\
Domain learning can improve a model's ability to extract domain-invariant features \cite{NIPS2016_45fbc6d3}. We use an adversarial classifier to minimize a domain loss that reduces the discrepancy among the shared representations of the three modalities. The domain loss can be formulated as:
\begin{equation}
\mathcal{L}_{domain}=-\sum_{m \in\{t, a, g\}} \mathbb{E}\left[\log \left(D_{repr}\left(d_{m}\right)\right)\right]
\end{equation}
where $D_{repr}$ is a feed-forward neural discriminator (a modality classifier) and $d_{m}$ is the result of applying gradient reversal to the shared representation $\mathbf{h}_{m}^{c}$.

The modality reconstruction loss $\mathcal{L}_{\text{recon}}$ is computed between the reconstructed modality and the original input $\mathbf{u}_{m}$. It is used to ensure that the hidden representations capture the details of their respective modality. Specifically, a modality decoder $D$ is used to reconstruct $\mathbf{u}_{m}$:
\begin{equation}
\hat{\mathbf{u}}_{m}=D\left(\mathbf{h}_{m}^{c}+\mathbf{h}_{m}^{p} ; \theta^{d}\right)
\end{equation}
where $\theta^{d}$ are the parameters of the modality decoder. The modality reconstruction loss can then be computed as:
\begin{equation}
\mathcal{L}_{\text {recon}}=\frac{1}{3}\left(\sum_{m \in\{t, a, g\}} \frac{\left\|\mathbf{u}_{m}-\hat{\mathbf{u}}_{m}\right\|_{2}^{2}}{d_{h}}\right)
\end{equation}
where $\|\cdot\|_{2}^{2}$ is the squared $L_2$-norm.
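To make the representation learning pipeline above concrete, we include a minimal PyTorch-style sketch below. It is illustrative only and is not our released implementation: the layer choices, the hidden size and the instantiation of the adversarial term as a three-way cross-entropy modality classifier are simplifying assumptions, and readers should consult our repository for the actual code.

{\small
\begin{verbatim}
# Minimal PyTorch-style sketch of the modality representation module
# (shared/private encoders, gradient reversal, modality reconstruction).
# Layer choices and sizes are illustrative assumptions only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    # Identity in the forward pass, negated gradient in the backward pass.
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -grad_output

class ReprModule(nn.Module):
    def __init__(self, d_h=48, modalities=("t", "a", "g")):
        super().__init__()
        self.modalities = modalities
        # Shared encoder E_c (parameters tied across modalities).
        self.E_c = nn.Sequential(nn.Linear(d_h, d_h), nn.Sigmoid())
        # One private encoder E_p per modality.
        self.E_p = nn.ModuleDict(
            {m: nn.Sequential(nn.Linear(d_h, d_h), nn.Sigmoid())
             for m in modalities})
        # Modality decoder D reconstructs u_m from h_c + h_p.
        self.D = nn.Linear(d_h, d_h)
        # Domain discriminator: modality classifier on the shared code.
        self.D_repr = nn.Linear(d_h, len(modalities))

    def forward(self, u):
        # u: dict of sequences, e.g. {"t": (B, T, d_h), "a": ..., "g": ...}
        h_c, h_p, l_recon, l_domain = {}, {}, 0.0, 0.0
        for idx, m in enumerate(self.modalities):
            h_c[m] = self.E_c(u[m])
            h_p[m] = self.E_p[m](u[m])
            # Mean-squared reconstruction error (Eq. 4, up to constants).
            u_hat = self.D(h_c[m] + h_p[m])
            l_recon = l_recon + F.mse_loss(u_hat, u[m])
            # Eq. 2, instantiated as cross-entropy over modality labels,
            # with gradient reversal applied to the shared code.
            logits = self.D_repr(GradReverse.apply(h_c[m]))
            labels = torch.full(logits.shape[:-1], idx, dtype=torch.long,
                                device=logits.device)
            l_domain = l_domain + F.cross_entropy(
                logits.reshape(-1, len(self.modalities)),
                labels.reshape(-1))
        # Six representations (two per modality) for the gesture decoder.
        reprs = torch.cat([h_c[m] for m in self.modalities] +
                          [h_p[m] for m in self.modalities], dim=-1)
        return reprs, l_recon / 3.0, l_domain
\end{verbatim}
}

In this sketch, \texttt{l\_recon} and \texttt{l\_domain} play the roles of $\mathcal{L}_{recon}$ and $\mathcal{L}_{domain}$, and \texttt{reprs} is concatenated with the rhythm-related audio features before being passed to the gesture decoder described next.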
\subsubsection{Gesture generation}
~\\
\begin{figure}[h]
  \centering
  \includegraphics[width=0.82\linewidth]{fig/2_.pdf}
  \caption{Architecture of the gesture generation module.}
  \Description{Architecture of the gesture generation module.}
  \label{generation}
\end{figure}
We use a generative adversarial network (GAN) based gesture decoder to generate gestures. Gestures are closely related to rhythm and beat; we therefore concatenate rhythm-related audio features (pitch, energy and volume) with the six stacked modality representations and feed them to Transformer encoders with multi-head self-attention, which serve as the generator, as shown in Figure \ref{generation}.

The generator is trained using $\mathcal{L}_{gesture}$, which consists of a Huber loss and an MSE loss, and the discriminator is trained with $\mathcal{L}_{GAN}$.
\begin{equation}
\mathcal{L}_{gesture}=\alpha \cdot \mathbb{E}\left[\frac{1}{t} \sum_{i=1}^{t} \operatorname{HuberLoss}\left(g_{i}, \hat{g}_{i}\right)\right] + \beta \cdot \mathbb{E}\left[\frac{1}{t} \sum_{i=1}^{t} \left\|g_{i}-\hat{g}_{i}\right\|_{2}^{2}\right]
\label{Lgesture}
\end{equation}
\begin{equation}
\mathcal{L}_{GAN}=-\mathbb{E}[\log (D_{gesture}(g))]-\mathbb{E}[\log (1-D_{gesture}(\hat{g}))]
\end{equation}
where $D_{gesture}$ is a gesture discriminator based on a multi-layered bidirectional gated recurrent unit (GRU) \cite{KyunghyunCho2014LearningPR} that outputs a binary prediction for each time step, $t$ is the length of the gesture sequence, $g_{i}$ is the $i$th human gesture and $\hat{g}_{i}$ is the $i$th generated gesture. The total loss of the proposed system can be computed as:
\begin{equation}
\mathcal{L}_{total} = \mathcal{L}_{gesture} + \gamma \cdot \mathcal{L}_{GAN} + \delta \cdot \mathcal{L}_{domain} + \epsilon \cdot \mathcal{L}_{recon}
\label{total}
\end{equation}

\subsection{Data processing and experiment setup}
\subsubsection{Data and data processing}
~\\
In the challenge, the Talking With Hands 16.2M dataset \cite{9010909} is used as the standard dataset. Each video is separated into two independent sides with one speaker each. The audio and text in the dataset have been aligned. For more details, please refer to the challenge paper \cite{yoon2022genea}. We note that the data in the training, validation and test sets are extremely unbalanced, so we only use the data from the speaker with identity ``1'' for training. We also believe that training on speech and gesture data from the same person helps the generated gesture behavior match the speech.

\subsubsection{Experiment setup}
~\\
The proposed system is trained on the training data only, using the Adam optimizer \cite{2014Adam} (learning rate $10^{-4}$, $\beta_1 = 0.5$, $\beta_2 = 0.98$) with a batch size of 128 for 100 steps. We set $\alpha=300$ and $\beta=50$ in Equation (\ref{Lgesture}), and $\gamma=5$, $\delta=0.1$ and $\epsilon=0.1$ in Equation (\ref{total}); we observed in our experiments that too large values of $\delta$ and $\epsilon$ lead to non-convergence. There is a warm-up period of 10 epochs in which $\mathcal{L}_{GAN}$ is not used ($\gamma = 0$). The feature dimension $d_h$ of the sequences $\mathbf{u}_m$ is 48. During training, each sample of 100 frames is taken with a stride of 10 from the valid motion sections; the initial 10 frames are used as seed gesture poses and the model is trained to generate the remaining 90 poses (3 seconds).
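To make the interaction between these coefficients and the warm-up explicit, the following schematic sketch combines the individual loss terms as in Equation (\ref{total}); the loss values themselves are assumed to be computed as described above, and the function is illustrative rather than part of our released code.

{\small
\begin{verbatim}
# Schematic combination of the losses in Eq. (7) with the GAN warm-up.
# Coefficient values follow the paper; the individual loss terms are
# assumed to be computed elsewhere.
def total_loss(l_gesture, l_gan, l_domain, l_recon, epoch,
               gamma=5.0, delta=0.1, epsilon=0.1, warmup_epochs=10):
    g = 0.0 if epoch < warmup_epochs else gamma  # L_GAN off in warm-up
    return l_gesture + g * l_gan + delta * l_domain + epsilon * l_recon
\end{verbatim}
}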
\section{Evaluation}
\subsection{Evaluation setup}
The GENEA Challenge 2022 evaluation is divided into two tiers, and we participated in the upper-body motion tier. The challenge organizers conducted a detailed evaluation comparing all submitted systems \cite{yoon2022genea}. The challenge evaluates human-likeness to assess motion quality and appropriateness to assess how well the gestures match the speech. The evaluation is based on the HEMVIP methodology \cite{10.1145/3462244.3479957} and the Mean Opinion Score (MOS) \cite{1996Methods}. In total, 11 systems participated in the upper-body tier. The following abbreviations are used to denote the conditions in the evaluation:
\begin{itemize}
\item UNA: Ground truth (`U' for the upper-body tier, `NA' for `natural').
\item UBT: The official text-based baseline \cite{8793720}, which takes transcribed speech text with word-level timing information as the input modality (`B' for `baseline', `T' for `text').
\item UBA: The official audio-based baseline \cite{10.1145/3308532.3329472}, which takes speech audio into account when generating output (`A' for `audio').
\item USJ--USQ: The 8 participants' submissions to the upper-body tier (ours is USN).
\end{itemize}
For more details about the evaluation studies, please refer to the challenge paper \cite{yoon2022genea}.

\subsection{Subjective evaluation results and discussion}
\subsubsection{Human-likeness evaluation}
~\\
\begin{figure}[h]
  \centering
  \subfigure[Box plots visualizing the rating distribution in the upper-body study.]{
  \label{Fig.sub.1}
  \includegraphics[width=0.43\linewidth]{fig/upper-body_human-likeness_boxplot.pdf}}
  \quad
  \subfigure[Significance of pairwise differences between conditions.]{
  \label{Fig.sub.2}
  \includegraphics[width=0.43\linewidth]{fig/upper-body_human-likeness_median_pref.pdf}}
  \caption{(a) Red bars are the median ratings (each with a 0.05 confidence interval); yellow diamonds are mean ratings (also with a 0.05 confidence interval). Box edges are at the 25 and 75 percentiles, while whiskers cover 95\% of all ratings for each condition. (b) White means that the condition listed on the $y$-axis was rated significantly above the condition on the $x$-axis, black means the opposite ($y$ rated below $x$), and grey means no statistically significant difference at the level $\alpha = 0.05$ after Holm-Bonferroni correction.}
  \Description{Box plots visualizing the rating distribution and pairwise significance in the upper-body human-likeness study.}
  \label{Upper_result}
\end{figure}
In this evaluation, study participants are asked to rate ``How human-like does the gesture motion appear?'' on a scale from 0 (worst) to 100 (best). Box plots and significance comparisons are shown in Figure \ref{Upper_result}. Our system (USN) receives a median score of 44 and a mean score of 44.2, and is ranked fourth among the participating systems.

\subsubsection{Appropriateness evaluation}
~\\
\begin{figure}[h]
  \centering
  \includegraphics[width=0.45\linewidth]{fig/upper-body_appropriateness_matched_pref.pdf}
  \caption{Bar plots visualizing the response distribution in the appropriateness studies. The blue bar (bottom) represents responses where subjects preferred the matched motion, the light grey bar (middle) represents tied (``They are equal'') responses, and the red bar (top) represents responses preferring mismatched motion, with the height of each bar being proportional to the fraction of responses in each category. The black horizontal line bisecting the light grey bar shows the proportion of matched responses after splitting ties, each with a 0.05 confidence interval. The dashed black line indicates chance-level performance.}
  \Description{Bar plots visualizing the response distribution in the upper-body appropriateness study.}
  \label{appropriateness}
\end{figure}
In this evaluation, participants are asked to choose the character on the left, the character on the right, or to indicate that the two are equally well matched, in response to the question ``Please indicate which character's motion best matches the speech, both in terms of rhythm and intonation and in terms of meaning.'' Bar plots are shown in Figure \ref{appropriateness}. Our system (USN) receives a ``percent matched'' score of 54.6, which indicates how often participants preferred matched over mismatched motion in terms of appropriateness.
Our system is ranked seventh in appropriateness among the participants' submissions. It should be noted that the differences between our system and the five higher-ranked systems (USL, UBA, USO, USK and USJ) are not significant. Furthermore, if we only consider the ratio of matched motion, i.e., the blue bar in Figure \ref{appropriateness}, our system is ranked fifth among the participating systems.

\subsection{Ablation studies}
\begin{table}[h]
\caption{Ablation study results. `w/o' is short for `without'. Bold indicates the best value for each metric, i.e., the one closest to the ground truth.}
\label{tab:Ablation}
\resizebox{\textwidth}{!}
{
\begin{tabular}{cccccccc}
\toprule
Name & Average jerk & \begin{tabular}[c]{@{}c@{}}Average \\ acceleration\end{tabular} & \begin{tabular}[c]{@{}c@{}}Global \\ CCA\end{tabular} & \begin{tabular}[c]{@{}c@{}}CCA for \\ each sequence\end{tabular} & \begin{tabular}[c]{@{}c@{}}Hellinger\\ distance average\end{tabular} $\downarrow$ & \begin{tabular}[c]{@{}c@{}}FGD on \\ feature space\end{tabular} $\downarrow$ & \begin{tabular}[c]{@{}c@{}}FGD on raw \\ data space\end{tabular} $\downarrow$ \\
\midrule
Ground Truth (GT) & 18149.74 $\pm$ 2252.61 & 401.24 $\pm$ 67.57 & 1.000 & 1.00 $\pm$ 0.00 & 0.0 & 0.0 & 0.0 \\
ReprGesture & 2647.59 $\pm$ 1200.05 & 146.90 $\pm$ 46.09 & 0.726 & \textbf{0.95 $\pm$ 0.02} & \textbf{0.155} & 0.86 & \textbf{184.753} \\
w/o WavLM & 1775.09 $\pm$ 512.08 & 77.53 $\pm$ 21.92 & \textbf{0.761} & 0.94 $\pm$ 0.03 & 0.353 & 3.054 & 321.383 \\
w/o $\mathcal{L}_{GAN}$ & \textbf{9731.54 $\pm$ 3636.06} & \textbf{242.15 $\pm$ 81.81} & 0.664 & 0.93 $\pm$ 0.03 & 0.342 & 2.053 & 277.539 \\
w/o $\mathcal{L}_{recon}$ & 533.95 $\pm$ 193.18 & 39.49 $\pm$ 12.23 & 0.710 & 0.93 $\pm$ 0.03 & 0.283 & 0.731 & 659.150 \\
w/o $\mathcal{L}_{domain}$ & 2794.79 $\pm$ 1153.75 & 135.62 $\pm$ 25.13 & 0.707 & 0.94 $\pm$ 0.03 & 0.267 & \textbf{0.653} & 874.209 \\
w/o Repr & 2534.34 $\pm$ 1151.38 & 123.02 $\pm$ 40.90 & 0.723 & 0.94 $\pm$ 0.04 & 0.298 & 0.829 & 514.706 \\
\bottomrule
\end{tabular}
}
\end{table}
Moreover, we conduct ablation studies to assess the contribution of the different components of the system. The GENEA challenge provides several objective metrics of motion quality through the GENEA numerical evaluation scripts\footnote{\url{https://github.com/genea-workshop/genea_numerical_evaluations}}. For the calculation and meaning of these objective metrics, please refer to the challenge paper \cite{yoon2022genea}. A system producing perfectly natural motion should have average jerk and acceleration very similar to those of natural motion. The closer the canonical correlation analysis (CCA) values are to 1, the better. Lower Hellinger distance and Fr\'{e}chet gesture distance (FGD) values are better. To compute the FGD, we train an autoencoder on the training set of the challenge. The results of our ablation studies are summarized in Table \ref{tab:Ablation}.
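For reference, the FGD reported in Table \ref{tab:Ablation} is the Fr\'{e}chet distance between Gaussians fitted to features of natural and of generated motion, computed either in the latent space of the autoencoder mentioned above (``feature space'') or directly on the poses (``raw data space''). A minimal sketch of this computation is given below; the feature extraction is left abstract and the code is illustrative rather than the official challenge script.

{\small
\begin{verbatim}
# Illustrative computation of the Frechet gesture distance (FGD):
# the Frechet distance between Gaussians fitted to motion features
# (autoencoder latents or raw poses). Not the official challenge code.
import numpy as np
from scipy import linalg

def frechet_distance(feat_real, feat_gen):
    # feat_real, feat_gen: (N, D) arrays of motion features.
    mu_r, mu_g = feat_real.mean(axis=0), feat_gen.mean(axis=0)
    cov_r = np.cov(feat_real, rowvar=False)
    cov_g = np.cov(feat_gen, rowvar=False)
    covmean, _ = linalg.sqrtm(cov_r @ cov_g, disp=False)
    if np.iscomplexobj(covmean):
        covmean = covmean.real  # drop tiny imaginary parts
    diff = mu_r - mu_g
    return float(diff @ diff
                 + np.trace(cov_r + cov_g - 2.0 * covmean))
\end{verbatim}
}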
Supported by the results, when we do not use WavLM to extract audio features but use 1D convolutions instead, the average Hellinger distance and the FGD on feature space are the worst. When the model is trained without the GAN loss, the average jerk and average acceleration are better (closer to the ground truth), but the global CCA and the CCA for each sequence decrease. When the reconstruction loss is removed, the average jerk and average acceleration are the farthest from the ground truth; the generated gesture movements are few and of small range. When the model is trained using the Central Moment Discrepancy (CMD) loss \cite{10.1145/3394171.3413678} instead of the domain loss (the `w/o $\mathcal{L}_{domain}$' row), the best FGD on feature space and the worst FGD on raw data space are obtained. When the modality representations are removed (w/o Repr), i.e., the modality sequences $\mathbf{u}_t$, $\mathbf{u}_a$ and $\mathbf{u}_g$ are fed directly to the gesture decoder and only the gesture-related losses are used (without $\mathcal{L}_{domain}$ and $\mathcal{L}_{recon}$), all metrics deteriorate except for the FGD on feature space.

\section{Conclusions and discussion}
In this paper, we propose a gesture generation system based on multimodal representation learning, where the considered modalities are text, audio and gesture. Each modality is projected into two different subspaces: modality-invariant and modality-specific. To learn the commonality among the different modalities, an adversarial classifier based on a gradient reversal layer is used. To capture the characteristics specific to each modality, we adopt modality reconstruction decoders. The gesture decoder utilizes all representations and rhythm-related audio features to generate appropriate gestures.

In the subjective evaluation, our system is ranked fourth among the participating systems for human-likeness and seventh for appropriateness. However, for appropriateness, the differences between our system and the five higher-ranked systems are not significant.

For the appropriateness evaluation, whether there is a relationship between the subjective ratings and the segment duration deserves further investigation. The segments used during the evaluation are around 8 to 10 seconds long \cite{yoon2022genea}. We believe that longer segments (e.g., 20--30 seconds) might produce more pronounced and convincing appropriateness results.

There is room for improvement in this research. First, due to the unbalanced dataset, we only use data from one speaker to learn gestures. Such a one-to-one mapping could produce boring and homogeneous gestures during inference. Second, finger motions are not considered because of the low motion-capture quality. They could be included in the future if suitable data cleanup procedures are conducted. Third, besides text and audio, more modalities (e.g., emotions, facial expressions and the semantic meaning of gestures \cite{Liu2022BEATAL}) could be taken into consideration to generate more appropriate gestures.

\begin{acks}
This work is supported by the Shenzhen Science and Technology Innovation Committee (WDZC20200818121348001), the National Natural Science Foundation of China (62076144) and the Shenzhen Key Laboratory of next generation interactive media innovative technology (ZDSYS20210623092001004).
\end{acks}

\bibliographystyle{ACM-Reference-Format}
\bibliography{my}
\end{document}
https://openreview.net/forum?id=atWaELmguNj7
https://arxiv.org/abs/2208.12133
Please evaluate the proposed gesture generation method in terms of its organization, reproducibility, novelty, and the significance of the proposed representations.
Novel and interesting paper with some points of unclarity.

The paper proposes a novel deterministic speech-to-gesture generation model that learns modality-invariant and modality-specific features for the different input modalities, i.e., audio, text, and seed poses. The ablation study showed that this method could improve the accuracy of the generated gestures.

Strengths:
1. Although the system was not the best among the proposed systems, the ablation study showed that the separation into modality-invariant and modality-specific features was helpful for the accuracy of the generated gestures, i.e., it yielded a lower Hellinger distance and a lower FGD on raw data space. This indicates that the proposed method is more effective at determining the gesture shape than purely mixing different modality features.
2. The use of WavLM, a pre-trained neural network, for audio feature extraction improves the jerk and acceleration of the generated gestures, even though pitch, energy and volume were used at the same time. This shows that prosodic features of audio, i.e., pitch and energy, are not enough for predicting the rhythmic movements in the gestures.

Potential issue:
The paper uses an autoencoder-like training scheme to help learn the hidden feature space. However, the reconstruction error is computed between the reconstructed results and the output of one of the hidden layers, whose parameters are being updated during training. This could lead to the undesired behavior that the encoder and decoder agree with each other while not considering the original input at all. A more common approach for training an autoencoder is to compute the reconstruction error on the original input or a transformation of the original input.

Weaknesses:
1. More details are necessary for understanding the domain learning. While the authors propose to extract modality-invariant features from the modalities as in [5], they did not use the original similarity loss for these features. Instead, they propose to use domain learning to reach this goal. Although this could be one of the original aspects of this paper, the authors did not thoroughly explain how domain learning works, how it can achieve a similar effect to the central moment discrepancy (CMD) loss, or its potential advantages and disadvantages compared with CMD.
2. The authors claim that the reconstruction loss is used to ensure that the hidden representations capture the details. However, no ablation was provided for this. Thus, it is unclear how much, or whether, this loss is useful.
3. The authors did not use a traditional feature extraction method for the audio, such as the mel-spectrogram, which is more common in the literature. The difference between using WavLM and basic features is unknown.

Questions:
1. The scales of the coefficients for the different loss terms range widely, e.g., alpha is 300 and epsilon is 0.1 in equation (5). How did the authors adjust the hyper-parameters and arrive at such different scales?
2. Why did the authors choose 3 seconds as the length for generation? Is there any reference for this?

Rating: 7: Good paper, accept
Confidence: 4: The reviewer is confident but not absolutely certain that the evaluation is correct
Please evaluate the proposed speech to gesture generation model in terms of its novelty, clarity, and effectiveness in improving the accuracy of generated gestures.