\input{figures/fig1.tex}

\section{Neural Networks and Integral Operators}
\label{sec:neuralnetworks}

The intuition behind our proposed idea is based on the observation that fully-connected and convolution layers can be interpreted as the numerical integration of specific integrals. To illustrate this, we consider the following example. Let $ W(x) $, $ S(x) $ be univariate functions; then we have \cite{hughes2020calculus}:
\begin{equation}
    \label{eq:eq1}
    \int_{0}^{1} W(x)S(x)\,dx \approx \sum_{i = 0}^{n} q_i W(x_i)S(x_i) = \overrightarrow{w}_{q} \cdot \overrightarrow{s},
\end{equation}
where $ \overrightarrow{w}_{q} = (q_0W(x_0),\ldots ,q_nW(x_n)) $, $ \overrightarrow{s} = (S(x_0),\ldots ,S(x_n)) $, $ \overrightarrow{q} = (q_0,\ldots ,q_n) $ are the \textit{weights of the integration quadrature}, and $ \overrightarrow{P}^{x} = (x_0,\ldots ,x_n) $ is the \textit{segment partition} that satisfies the following inequality: $ 0 = x_0 < x_1 < \ldots < x_{n-1} < x_n = 1 $. The pair $ ( \overrightarrow{P}^{x} , \overrightarrow{q} ) $ is called a \textit{numerical integration method} \cite{leader2022numerical}. General numerical integration methods are built using different approximations of input functions (we refer to the examples depicted in \FigRef{fig:fig2}). From \EqRef{eq:eq1} we can see that the integral of a product of two univariate functions can be approximated by the dot product of two vectors using a specific numerical integration method. The size of the vectors $ \overrightarrow{w}_{q} $ and $ \overrightarrow{s} $ can be set to arbitrary values by selecting a larger or smaller partition $ \overrightarrow{P}^{x} $. For more details on the numerical integration of multiple integrals we refer to Appendix A. The proposed use of these integrals for representing basic network layers, such as convolution and fully-connected ones, allows for various segment partition lengths along the filter and channel dimensions or the height and width. This leads to the generation of layers with the desired number of filters, channels, height, and width.
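As a minimal sketch of \EqRef{eq:eq1}, assuming the composite trapezoidal rule as the numerical integration method and the illustrative choice $ W(x) = S(x) = x $, the dot product of the quadrature-weighted vectors approximates the integral:

```python
import numpy as np

# Minimal sketch of Eq. (1), assuming the composite trapezoidal rule
# as the numerical integration method and W(x) = S(x) = x as examples.
n = 100
x = np.linspace(0.0, 1.0, n + 1)     # segment partition P^x
q = np.full(n + 1, 1.0 / n)          # trapezoidal quadrature weights q_i
q[0] = q[-1] = 0.5 / n

W = lambda t: t                      # example weight function W(x)
S = lambda t: t                      # example signal function S(x)

w_q = q * W(x)                       # w_q = (q_0 W(x_0), ..., q_n W(x_n))
s = S(x)                             # s = (S(x_0), ..., S(x_n))
approx = np.dot(w_q, s)              # approximates the integral of x^2 on [0, 1], i.e. 1/3
```

Any other quadrature (e.g. Simpson's rule) fits the same template by changing the weight vector $ \overrightarrow{q} $.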

\subsection{DNNs layers as integral operators}
Commonly used linear network layers can be presented as integral operators with a specific integral kernel. While such layers act as linear operators on the real linear space $ \mathbb{R}^k $, integral operators act as linear operators on the linear space of integrable functions $ \mathbf{L}^{2} $. Therefore, not all input data can be considered as continuous integrable functions in a meaningful way. Nevertheless, digital images and audio signals are discretizations of analog signals and, therefore, they can be naturally used in integral networks.
\newline
\newline
\textbf{Convolution or cross-correlation layer} The convolution layer defines a transform of a multichannel signal to another multichannel signal. In the case of integral operators, the weights of this layer are represented by an integrable function $ F_W(\lambda , x^{out} , x^{in} , \mathbf{x^s} ) $, where $ \mathbf{x^s} $ is a scalar or vector representing the dimensions over which the convolution is performed, and $ \lambda $ is a vector of trainable parameters. Input and output images are represented by integrable functions $ F_I(x^{in},\mathbf{x^s}) $ and $ F_O(x^{out},\mathbf{x^{s'}}) $, and are connected through the
weight function in the following way:
\begin{equation}
    \begin{gathered}
        \label{eq:eq2}
        F_O\left(x^{\text {out }}, \mathbf{x}^{\mathbf{s}^{\prime}}\right)= \\
        \int_{\Omega} F_W\left(\lambda, x^{\text {out }}, x^{\text {in }}, \mathbf{x}^{\mathbf{s}}\right) F_I\left(x^{\text {in }}, \mathbf{x}^{\mathbf{s}}+\mathbf{x}^{\mathbf{s}^{\prime}}\right) d x^{\text {in }} d \mathbf{x}^{\mathbf{s}} .
    \end{gathered}
\end{equation}
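To make the connection concrete, the following sketch (with illustrative sizes and random samples) discretizes the double integral of \EqRef{eq:eq2} on uniform grids and recovers an ordinary multichannel cross-correlation:

```python
import numpy as np

# Hedged sketch: discretizing Eq. (2) on uniform grids recovers an
# ordinary multichannel cross-correlation (all sizes are illustrative;
# constant quadrature weights are folded into the sampled kernel F_W).
rng = np.random.default_rng(0)
c_in, c_out, k, length = 3, 2, 5, 32
F_W = rng.normal(size=(c_out, c_in, k))      # samples of the weight function
F_I = rng.normal(size=(c_in, length))        # samples of the input signal

# Double sum over x^in (channels) and x^s (spatial offsets) from Eq. (2).
out_len = length - k + 1
F_O = np.zeros((c_out, out_len))
for o in range(c_out):
    for t in range(out_len):
        F_O[o, t] = np.sum(F_W[o] * F_I[:, t:t + k])

# Same result with the library cross-correlation, channel by channel.
ref = np.stack([
    sum(np.correlate(F_I[i], F_W[o, i], mode="valid") for i in range(c_in))
    for o in range(c_out)
])
```

In practice the discretized kernel is simply handed to the framework's convolution layer, which performs exactly this double sum.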
\newline
\newline
\textbf{Fully-connected layer} A fully-connected layer defines a transform of a vector to a vector by means of matrix multiplication. The weights of this layer are represented by an integrable function $ F_W(\lambda,x^{out},x^{in}) $. Similar to the convolution operator, $ \lambda $ defines a vector of trainable parameters of the integral kernel. The input and output functions are represented by the integrable functions $ F_I(x^{in}) $ and $ F_O(x^{out}) $, respectively, and are connected via the weight function as follows:
\begin{equation}
    \label{eq:eq3}
    F_O\left(x^{out}\right)=\int_0^1 F_W\left(\lambda, x^{out}, x^{in}\right) F_I\left(x^{in}\right) d x^{in} .
\end{equation}
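A hedged sketch of \EqRef{eq:eq3}, with an example smooth kernel and input of our choosing: sampling $ F_W $ on a grid and folding in trapezoidal quadrature weights turns the integral operator into a plain matrix-vector product:

```python
import numpy as np

# Hedged sketch of Eq. (3): sampling the kernel F_W on a grid and
# applying quadrature weights turns the integral into a matrix product.
def F_W(x_out, x_in):                        # example smooth integral kernel
    return np.cos(np.pi * x_out[:, None]) * np.sin(np.pi * x_in[None, :])

def F_I(x_in):                               # example input function
    return x_in ** 2

n_out, n_in = 8, 200
x_out = np.linspace(0.0, 1.0, n_out)
x_in = np.linspace(0.0, 1.0, n_in)
q = np.full(n_in, 1.0 / (n_in - 1))          # trapezoidal weights on [0, 1]
q[0] = q[-1] = 0.5 / (n_in - 1)

W_q = F_W(x_out, x_in) * q                   # quadrature folded into the weights
y = W_q @ F_I(x_in)                          # fully-connected forward pass

# Analytic reference: integral of x^2 sin(pi x) on [0, 1] is (pi^2 - 4) / pi^3.
ref = np.cos(np.pi * x_out) * (np.pi ** 2 - 4) / np.pi ** 3
```

The output length is controlled by $ n_{out} $ and the integration accuracy by $ n_{in} $, independently of each other.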
\newline
\newline
\input{figures/fig2.tex}
\input{figures/fig3.tex}
\textbf{Pooling and activation functions} Pooling layers also exhibit a meaningful interpretation in terms of integration or signal discretization. Average pooling could be interpreted as a convolution along the spatial dimensions with a piecewise constant function. MaxPooling could be interpreted as a way of signal discretization. Activation functions in integral networks are naturally connected with activation functions in conventional networks by the following equation:
\begin{equation}
    \label{eq:eq4}
    \mathcal{D}\left(\operatorname{ActFunction}(x), P_x\right)=\operatorname{ActFunction}\left(\mathcal{D}\left(x, P_x\right)\right),
\end{equation}
where by $ \mathcal{D} $ we denote the \textit{discretization operation} that
evaluates a scalar function on the given partition $ P_x $. This equation implies that applying an activation function on a discretized signal is equivalent to discretizing the output of the activation function applied to the continuous signal.
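Since activation functions act pointwise, this commutation can be sketched directly (signal, activation, and partition are illustrative choices):

```python
import numpy as np

# Sketch of Eq. (4): because activations act pointwise, evaluating the
# activated continuous signal on a partition equals activating the
# discretized signal (signal and partition are illustrative).
signal = lambda t: np.sin(2.0 * np.pi * t) - 0.25   # continuous signal x
P_x = np.linspace(0.0, 1.0, 17)                     # partition P_x

discretize = lambda f, P: f(P)                      # D(f, P): evaluate f on P

lhs = discretize(lambda t: np.maximum(signal(t), 0.0), P_x)  # D(ActFunction(x), P_x)
rhs = np.maximum(discretize(signal, P_x), 0.0)               # ActFunction(D(x, P_x))
```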
\newline
\newline
\textbf{Evaluation and backpropagation through integration} For fast integral evaluation, the integral kernel goes through a discretization procedure and is then passed to a conventional layer for numerical integration. It turns out that any composite quadrature may be represented by such a conventional layer evaluation. For backpropagation through integration we use the chain-rule to evaluate the gradients of the trainable parameters $ \lambda $ as in discrete networks. The validity of the described procedure is guaranteed by the following lemma, whose proof can be found in Appendix A.
\newline
\newline
\textbf{Lemma 1 (Neural Integral Lemma)} \textit{Given that an integral kernel $ F(\lambda, x) $ is smooth and has continuous partial derivatives $ \frac{\partial F(\lambda,x)}{\partial\lambda} $ on the unit cube $ [0, 1]^n $, any composite quadrature can be represented as a forward pass of the corresponding discrete operator. The backward pass of the discrete operator corresponds to the evaluation of the integral operator with the kernel $ \frac{\partial F(\lambda,x)}{\partial\lambda} $ using the same quadrature as in the forward pass.}
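The lemma can be checked numerically on a toy kernel (here $ F(\lambda, x) = \lambda_0 \sin(\lambda_1 x) $, an illustrative choice): the finite-difference gradient of the forward-pass quadrature matches the same quadrature applied to $ \partial F / \partial \lambda $:

```python
import numpy as np

# Numerical check of Lemma 1 for the toy kernel F(lam, x) = lam0 * sin(lam1 * x):
# the gradient of the quadrature sum w.r.t. lam equals the same quadrature
# applied to the analytic partial derivatives of the kernel.
n = 64
x = np.linspace(0.0, 1.0, n + 1)
q = np.full(n + 1, 1.0 / n); q[0] = q[-1] = 0.5 / n   # trapezoidal weights
lam = np.array([0.7, 2.0])

F = lambda lam: lam[0] * np.sin(lam[1] * x)
quad = lambda vals: np.dot(q, vals)                   # forward-pass quadrature

# Quadrature of dF/dlam (the backward pass described by Lemma 1).
grad_quad = np.array([quad(np.sin(lam[1] * x)),
                      quad(lam[0] * x * np.cos(lam[1] * x))])

# Central finite differences of the forward-pass quadrature.
eps = 1e-6
grad_fd = np.zeros(2)
for j in range(2):
    d = np.zeros(2); d[j] = eps
    grad_fd[j] = (quad(F(lam + d)) - quad(F(lam - d))) / (2 * eps)
```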

\subsection{Continuous parameter representation} The richer and more generalized continuous parameter representation allows sampling discrete weights at inference time at any given resolution. We propose to compactly parameterize the continuous weights as a linear combination of interpolation kernels with uniformly distributed interpolation nodes on the line segment $ [0, 1] $: $ F_W(\lambda,x) = \sum_{i = 0}^{m} \lambda_i u(xm-i) $, where $ m $ and $ \lambda_i $ are the number of interpolation nodes and their values, respectively. For efficiency purposes, we suggest exploiting the available hardware and existing Deep Learning (DL) frameworks and relying on the cubic convolutional interpolation as shown in \FigRef{fig:fig4}, which is used for efficient image interpolation on GPUs. Despite its slight deviation from the cubic spline interpolation, this approach is significantly faster and yet preserves the details better than linear interpolation. In the case of multiple dimensions, we propose to define the interpolation on the cube $ [0, 1]^n $ with separable kernels, which is fully compatible with the existing DL frameworks.
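This parameterization can be sketched as follows, assuming the Keys cubic convolution kernel with $ a = -0.5 $ (the variant commonly used for image interpolation); note that this kernel interpolates its nodes exactly:

```python
import numpy as np

# Hedged sketch of the continuous weight representation
# F_W(lam, x) = sum_i lam_i * u(x*m - i), assuming the Keys cubic
# convolution kernel with a = -0.5 (common in GPU image interpolation).
def u(s, a=-0.5):
    s = np.abs(s)
    out = np.zeros_like(s)
    near = s <= 1.0
    far = (s > 1.0) & (s < 2.0)
    out[near] = (a + 2) * s[near] ** 3 - (a + 3) * s[near] ** 2 + 1
    out[far] = a * s[far] ** 3 - 5 * a * s[far] ** 2 + 8 * a * s[far] - 4 * a
    return out

def F_W(lam, x):
    m = len(lam) - 1                     # interpolation nodes i/m on [0, 1]
    i = np.arange(m + 1)
    return (lam[None, :] * u(np.asarray(x)[:, None] * m - i[None, :])).sum(axis=1)

lam = np.array([0.3, -1.2, 0.5, 2.0, -0.7])   # example node values lam_i
nodes = np.linspace(0.0, 1.0, len(lam))
vals = F_W(lam, nodes)                        # evaluating at the nodes returns lam
```

Between the nodes, $ F_W $ gives a smooth interpolant that can be sampled at any resolution.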

The continuous representation is discretized into a standard weight tensor $ W $ which is used by the corresponding layer in the forward pass. A schematic visualization of a continuous parameter representation and discretization is depicted in \FigRef{fig:fig4}.
\newline
\newline
\textbf{Representation of weights for fully-connected and convolutional layers} Fully-connected layers are defined by a two-dimensional weight tensor and, thus, we represent them with a linear combination of two-dimensional kernels on a
uniform 2D grid within the square $ [0, 1]^2 $:
\begin{equation}
    F_W\left(\lambda, x^{\text {out }}, x^{i n}\right)=\sum_{i, j} \lambda_{i j} u\left(x^{\text {out }} m^{\text {out }}-i\right) u\left(x^{\text {in }} m^{\text {in }}-j\right) .
\end{equation}
The discretized weight tensor $ W_q $ of the fully-connected layer is obtained by sampling the continuous representation on partitions $ \overrightarrow{\mathbf{P}}^{out} $ and $ \overrightarrow{\mathbf{P}}^{in} $ and by weighting the result according to the integration quadrature of \EqRef{eq:eq1}:
\begin{equation}
    \label{eq:eq6}
    W_q[k, l]=q_l W[k, l]=q_l F_W\left(\lambda, P_k^{\text {out }}, P_l^{\text {in }}\right)
\end{equation}
Uniform partitions with steps $ h^{out} $ and $ h^{in} $ are defined as follows: $ \overrightarrow{\mathbf{P}}^{out} = \{ kh^{out}\}_k $ and $ \overrightarrow{\mathbf{P}}^{in} = \{lh^{in}\}_l $. Fewer or more filters and channels at inference time are obtained with a varying partition size.
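A sketch of this resampling, with an illustrative continuous weight surface: the same representation yields weight matrices of different shapes, and, thanks to the quadrature weights of \EqRef{eq:eq6}, each row remains a consistent approximation of the underlying integral:

```python
import numpy as np

# Hedged sketch of Eq. (6): sampling an example continuous weight surface
# on uniform partitions of different sizes yields fully-connected weight
# matrices of different shapes from the same representation.
def F_W(x_out, x_in):                        # example continuous weight surface
    return np.cos(np.pi * x_out[:, None] + 2.0 * x_in[None, :])

def sample_weights(n_out, n_in):
    P_out = np.arange(n_out) / (n_out - 1)   # uniform partition, step h_out
    P_in = np.arange(n_in) / (n_in - 1)      # uniform partition, step h_in
    q = np.full(n_in, 1.0 / (n_in - 1))      # trapezoidal quadrature weights q_l
    q[0] = q[-1] = 0.5 / (n_in - 1)
    return q[None, :] * F_W(P_out, P_in)     # W_q[k, l] = q_l * F_W(P_k, P_l)

W_small = sample_weights(4, 8)               # fewer neurons at inference time
W_large = sample_weights(16, 64)             # same representation, finer partition
```

Row sums of both matrices approximate the same integral of the weight surface, which is what makes resampled layers interchangeable.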

As for convolutional layers, in this study we omit resampling convolution kernels along the spatial dimensions $ \mathbf{x^s} $. Therefore, the continuous representation of weights can be viewed as already sampled at each spatial location $ t $ and defined by $ F_W $ with a location-dependent set of interpolation nodes $ \lambda(t) $.
\input{figures/fig4.tex}
\textbf{Trainable partition} So far, we considered only uniform partitions with a fixed sampling step. However, nonuniform sampling can improve numerical integration without increasing the partition size. This relaxation of the fixed sampling points introduces new degrees of freedom and leads to a trainable partition. By training the separable partitions we can obtain an arbitrary rectangular partition in a smooth and efficient way. Such a technique opens up the opportunity for a new structured pruning approach. Combined with the conversion strategy of Section 4, this can reduce the size of pre-trained discrete DNNs without tuning the rest of the parameters. Instead of using a direct partition parameterization $ \overrightarrow{P} $ we employ a latent representation by the vector $\vec{\delta}=\left(0, \delta_1, \ldots, \delta_n\right)$
so that the following holds: $\vec{\delta}_{\text {norm }}=\frac{\vec{\delta}^2}{\operatorname{sum}\left(\vec{\delta}^2\right)}$ , $ \vec{P} = \mathrm{cumsum}(\vec{\delta}_{norm}) $. Such parameterization guarantees that the result is a correctly defined (sorted) partition $\overrightarrow{\mathbf{P}}$ stretched over the whole segment $[0, 1]$.
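A minimal sketch of this parameterization, with illustrative latent values: squaring and normalizing $ \vec{\delta} $ guarantees nonnegative steps that sum to one, so the cumulative sum is a sorted partition of $ [0, 1] $:

```python
import numpy as np

# Sketch of the latent partition parameterization: squaring and normalizing
# delta gives nonnegative steps summing to one, so the cumulative sum is a
# correctly defined (sorted) partition stretched over [0, 1].
delta = np.array([0.0, 1.3, -0.4, 0.9, 2.0, -1.1])   # trainable latent vector
delta_norm = delta ** 2 / np.sum(delta ** 2)         # nonnegative, sums to 1
P = np.cumsum(delta_norm)                            # partition points on [0, 1]
```

Because the map from $ \vec{\delta} $ to $ \overrightarrow{\mathbf{P}} $ is differentiable, the partition can be trained by ordinary gradient descent.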


