\section{Training Integral Neural Networks}
\label{sec:approach}

A large variety of pre-trained discrete networks is available today, so a procedure for converting such networks into integral ones is desirable: the converted networks serve as a better initialization for training integral networks. To this end, we propose an algorithm that permutes the filters and channels of the weight tensors so that the discrete network acquires a smooth structure. A visual illustration of this strategy is provided in \FigRef{fig:fig5}. We also propose an algorithm that optimizes the smooth parameter representation of INNs using gradient descent. This allows us to obtain a network that can be re-sampled (structurally pruned) at inference time without any fine-tuning.
\newline
\newline
\textbf{Conversion of DNNs to INNs} To find a permutation that leads to the smoothest structure possible, we minimize the total variation along a specific dimension of the weight tensor. This problem is equivalent to the well-known Traveling Salesman Problem (TSP) \cite{lawler1985traveling}. In our task, the slices along the $ c^{out} $ dimension in the weight tensor (i.e., filters) correspond to the “cities” and the total variation to the “distance” between those cities. Then, the optimal permutation can be considered as an optimal “route” in TSP terms. We use the 2-opt algorithm \cite{croes1958method} to find the permutation of filters that minimizes the total variation along that dimension:
\begin{equation}
    \label{eq:eq7}
    \min_{\sigma\in S_n}\sum_{i=1}^{n-1}\left|W[\sigma(i)]-W[\sigma(i+1)]\right|,
\end{equation}
where $W$ is the weight tensor, $\sigma$ denotes the permutation, $\sigma(i)$ is the new position of the $i$-th element under the permutation, and $S_n$ is the set of all permutations of length $n$. The permutation is applied so that the filter permutation in each layer is matched by the same channel permutation in the following layer. Since the model output stays exactly the same, our algorithm makes it possible to initialize an integral neural network from a discrete one without any quality degradation.
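The conversion step can be sketched as follows; this is a minimal illustration under our own assumptions (function names and the toy tensors are ours, not from the paper's code), treating flattened filters as TSP "cities" and their pairwise $L_1$ distance as the edge cost, with 2-opt segment reversals over an open tour:

```python
import numpy as np

def total_variation(w, order):
    # Total variation of Eq. (7): sum of L1 distances between
    # consecutive filters under the given order
    return sum(float(np.abs(w[order[i]] - w[order[i + 1]]).sum())
               for i in range(len(order) - 1))

def two_opt_permutation(w, max_passes=20):
    """Locally minimize total variation along the c_out axis of
    weight tensor w via 2-opt moves (interior segment reversals
    only, for brevity)."""
    n = w.shape[0]
    flat = w.reshape(n, -1)
    # Pairwise L1 distances between flattened filters ("cities")
    dist = np.abs(flat[:, None, :] - flat[None, :, :]).sum(axis=-1)
    order = list(range(n))
    for _ in range(max_passes):
        improved = False
        for i in range(n - 2):
            for j in range(i + 2, n - 1):
                # Reversing order[i+1 .. j] swaps only two boundary edges
                a, b = order[i], order[i + 1]
                c, d = order[j], order[j + 1]
                if dist[a, c] + dist[b, d] < dist[a, b] + dist[c, d]:
                    order[i + 1:j + 1] = order[i + 1:j + 1][::-1]
                    improved = True
        if not improved:
            break
    return order

# The same order must also be applied to the next layer's input
# channels, which leaves the composed output unchanged:
rng = np.random.default_rng(0)
W1 = rng.normal(size=(6, 3))      # layer 1: filters (c_out) x c_in
W2 = rng.normal(size=(4, 6))      # layer 2: c_out x channels (= 6)
order = two_opt_permutation(W1)
x = rng.normal(size=3)
assert np.allclose(W2 @ (W1 @ x), W2[:, order] @ (W1[order] @ x))
```

The final assertion mirrors the output-preservation property stated above: permuting filters of one layer and channels of the next by the same order is a pure re-indexing of the intermediate activations.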
\newline
\textbf{Optimization of continuous weights} Any gradient descent-based method can be used to train the proposed integral neural networks. We use Lemma 1 to construct the training algorithm described below. We train our networks with a random $c^{out}$ drawn from a predefined range, where $c^{out}$ denotes the number of filters or rows of a convolutional or fully-connected layer. The discretization of $x^{in}$ in each layer is therefore determined by the discretization of $x^{out}$ of the preceding layer. Training integral neural networks in this way improves the generalization of the integral computation and avoids overfitting the weights to a fixed partition, since the partition size changes at every training iteration. Formally, our training algorithm minimizes the difference between the network outputs under different cube partitions of each layer using the following objective:
\begin{equation}
    \begin{aligned}
        \label{eq:eq8}
        \left|\mathrm{Net}(X,P_1)-\mathrm{Net}(X,P_2)\right| & \leq\left|\mathrm{Net}(X,P_1)-Y\right| \\&+\left|\mathrm{Net}(X,P_2)-Y\right|,
    \end{aligned}
\end{equation}
where $\mathrm{Net}(X,P_{i})$ is the neural network evaluated on input data $X$ with labels $Y$, and $P_{1},P_{2}$ are two different partitions for each layer. One can note that the optimization under stochastic sampling of the partition sizes leads to a reduction of differences between the outputs of integral neural networks of different sizes. Such an optimization therefore ensures that a trained integral neural network has a similar performance when pruned to arbitrary sizes.
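As a numerical illustration of this objective, consider a toy sketch under our own assumptions (a hand-picked smooth weight surface and a scalar input signal, not the paper's architecture): a layer whose weights are sampled from a smooth continuous surface produces nearly identical outputs under two different partitions, and the triangle-inequality bound of Eq. (8) holds for any reference output standing in for the labels $Y$:

```python
import numpy as np

def weight_surface(t, s):
    # A smooth continuous weight function W(t, s) on [0, 1]^2
    # (our illustrative stand-in for learned continuous weights)
    return np.sin(2 * np.pi * t)[:, None] * np.cos(np.pi * s)[None, :]

def integral_layer(x_fn, c_out, c_in):
    """Evaluate the continuous layer on a c_out x c_in partition:
    y_i approximates the integral of W(t_i, s) x(s) over s,
    via a uniform Riemann sum."""
    t = np.linspace(0.0, 1.0, c_out)
    s = np.linspace(0.0, 1.0, c_in)
    return weight_surface(t, s) @ x_fn(s) / c_in

x_fn = lambda s: np.exp(-s)                        # continuous input signal

# The same layer evaluated on two different partitions P1, P2
y1 = integral_layer(x_fn, c_out=8, c_in=32)
y2 = integral_layer(x_fn, c_out=8, c_in=64)
y_ref = integral_layer(x_fn, c_out=8, c_in=4096)   # stand-in for labels Y

# Smoothness keeps the two discretizations consistent ...
gap = np.abs(y1 - y2).sum()
# ... and the bound of Eq. (8) holds
bound = np.abs(y1 - y_ref).sum() + np.abs(y2 - y_ref).sum()
assert gap <= bound + 1e-12
```

In training, the right-hand side of Eq. (8) is exactly the sum of two standard data-fitting losses, so minimizing the loss under randomly drawn partition sizes implicitly tightens the left-hand side, i.e., the disagreement between differently sized networks.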
