\input{figures/fig8.tex}
\input{tables/tab1.tex}
\input{tables/tab2.tex}
\section{Conclusions and open problems}
In this paper, we proposed a novel integral representation of neural networks that allows us to generate conventional neural networks of arbitrary shape at inference time through a simple re-discretization of the integral kernel. Our results show that the proposed continuous INNs match the performance of their discrete DNN counterparts while remaining stable under structured pruning without any fine-tuning.
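To make the re-discretization step concrete, the following is a minimal sketch in NumPy. It assumes the continuous kernel is represented along one dimension by a piecewise-linear interpolant of trained values on $[0,1]$; the function name \texttt{resample\_kernel} and the uniform-quadrature scaling are illustrative assumptions, not the exact implementation used in our experiments.
\begin{verbatim}
import numpy as np

def resample_kernel(values, m):
    # 'values' holds n samples of a continuous 1-D kernel on [0, 1].
    # Evaluate its piecewise-linear interpolant at m new uniform points.
    n = len(values)
    x_old = np.linspace(0.0, 1.0, n)
    x_new = np.linspace(0.0, 1.0, m)
    resampled = np.interp(x_new, x_old, values)
    # Attach the uniform quadrature weight 1/m so that the weighted sum
    # over the m samples still approximates the underlying integral.
    return resampled / m

# Example: shrink a 64-channel slice of the kernel to 48 channels
# at inference time, with no fine-tuning.
weights_48 = resample_kernel(np.random.randn(64), 48)
\end{verbatim}
In this new direction, the following questions and problems are worth further investigation: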
\begin{itemize}
  \item INNs open up new possibilities for investigating the capacity of neural networks: since the integral kernel is a continuous function, classical sampling theory applies, and the Nyquist theorem can be used to select the number of sampling points (see the illustration after this list).
  \item Adaptive integral quadratures. In this work, we have investigated only uniform partitions for training INNs. Data-free estimation of non-uniform partitions could also have a strong impact on INNs (see the sketch after this list).
  \item Training INNs from scratch still requires improvement for classification networks. The current accuracy drop is most likely caused by the absence of batch-normalization layers; a smooth analogue of normalization is required.
\end{itemize}
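As a hedged illustration of the sampling-theoretic view in the first item (the band limit $\nu_{\max}$ is an assumed quantity, not one estimated in this paper): if the trained kernel $K$ is band-limited along the resampled dimension, i.e.\ its Fourier transform satisfies $\hat{K}(\nu) = 0$ for $|\nu| > \nu_{\max}$, then by the Nyquist--Shannon theorem $K$ is exactly recoverable from uniform samples on the unit interval whenever the number of samples satisfies
\[
    n > 2\,\nu_{\max},
\]
which would give a principled lower bound on the width a re-discretized layer needs in order to preserve the continuous kernel.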
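One building block that the adaptive-quadrature direction in the second item would require is the computation of quadrature weights on an arbitrary partition. Below is a minimal NumPy sketch of the composite trapezoidal rule on a non-uniform grid; how the partition itself should be estimated in a data-free way remains the open problem.
\begin{verbatim}
import numpy as np

def trapezoid_weights(x):
    # Quadrature weights of the composite trapezoidal rule on an
    # arbitrary (possibly non-uniform) increasing partition x.
    w = np.zeros_like(x)
    dx = np.diff(x)          # interval lengths
    w[:-1] += dx / 2.0       # left endpoint of each interval
    w[1:] += dx / 2.0        # right endpoint of each interval
    return w                 # sum(f(x) * w) ~ integral of f over [x0, xn]

# Example: a random non-uniform partition of [0, 1];
# the 32 weights sum to 1, the length of the interval.
x = np.concatenate(([0.0], np.sort(np.random.rand(30)), [1.0]))
w = trapezoid_weights(x)
\end{verbatim}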
