A matrix with the aforementioned properties can indeed be represented as the sum of $m$ elementary $n \times n$ matrices, where an elementary matrix is one with exactly one 1 in each row and each column.
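For concreteness, here is a small instance of the claim (an illustrative example of our own choosing): for $n = 3$ and $m = 2$,
\begin{center}
$\left(\begin{array}{ccc} 1 & 1 & 0 \\ 0 & 1 & 1 \\ 1 & 0 & 1 \end{array}\right)
=
\left(\begin{array}{ccc} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{array}\right)
+
\left(\begin{array}{ccc} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 1 & 0 & 0 \end{array}\right)$,
\end{center}
where each summand is elementary, having exactly one 1 in every row and column.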

We will prove this by induction on $m$, for a fixed $n$. For $n = 2$ the matrix is either itself elementary ($m = 1$) or the all-ones matrix, which is the sum of the two $2 \times 2$ elementary matrices ($m = 2$). Hence, assume any $n > 2$. Then:

\begin{enumerate}
\item For $m=1$ the claim is true, since the representation by elementary matrices is the matrix itself, as mentioned earlier.
\item Assume that the statement is valid for some $m = k \leq n - 1$.
\item We will show it is valid for $m = k + 1$. Observe that, since a matrix with $m = k$ 1s per row and column can by hypothesis be represented by $k$ elementary matrices, if we can add $n$ new 1s to it, one in each row and column, and obtain the $m = k + 1$ matrix, then the previous $k$ elementary matrices together with the matrix containing only the newly inserted 1s form a representation of the $m = k + 1$ matrix by elementary matrices.

Thus, we have to show that, starting from \textbf{any possible} matrix with $m = k$ 1s in every row and column, we can add exactly one 1 to every row and column and obtain an $m = k + 1$ matrix.

So, suppose we have an $n \times n$ matrix with $k \leq n - 1$ 1s in every row and column. We start by adding one 1 to every row, from top to bottom. Each insertion also ``covers'' the column in which the 1 is placed, excluding that column from use in later rows.

Every row has $n-k$ 0s, and therefore $n-k$ possible spots for its new 1; we pick one of them arbitrarily. Before row $r$ is processed, at most $r - 1$ columns are covered, so for rows $1$ through $n-k$ (where $r - 1 < n - k$) a free 0 is guaranteed to exist.
From row $n-k+1$ onwards, however, a problem can occur. The following table depicts such a case (for $n-k = 3$).

\begin{center}
\begin{tabular}{|l|l|l|l|l|l|l|l|l|}
\hline
& & & 1 & & & & & \\\hline
& & & x & & 1 & & & \\\hline
& 1 & & x & & x & & & \\\hline
& 0 & & 0 & & 0 & & & \\\hline
& x & & x & & x & & & \\\hline
& x & & x & & x & & & \\\hline
\end{tabular} 
\end{center}

Assume that the first three new 1s were inserted as in the example (each x marks a cell excluded because its column is already covered) and that the fourth row has its three 0s exactly as depicted; with the current allocation there is no available 0 in the fourth row to be turned into a 1. We argue that the algorithm is always able to go back to some previously processed row and move its newly placed 1 so as to make ``space'' for the current row.

Assume this is not possible. Then none of the three already placed 1s can be moved to a 0 lying outside these three columns, i.e.\ the first three rows, like the fourth, have all of their 0s inside the three depicted columns. This situation is shown in the following matrix:
\begin{center}
\begin{tabular}{|l|l|l|l|l|l|l|l|l|}
\hline
& 0 & & 0 & & 0 & & & \\\hline
& 0 & & 0 & & 0 & & & \\\hline
& 0 & & 0 & & 0 & & & \\\hline
& 0 & & 0 & & 0 & & & \\\hline
& & & & & & & & \\\hline
& & & & & & & & \\\hline
\end{tabular} 
\end{center}
This contradicts the fact that each column (and row) contains exactly $n - k = 3$ 0s: the four depicted rows contribute $4 \cdot 3 = 12$ 0s to only three columns, so some column would have to contain at least four 0s. Therefore, at least one of the first three rows has a 0 in a column other than the three depicted; its new 1 can be moved there, freeing its old column and making ``space'' for an insertion in the fourth row.

Applying the same logic whenever a row has no free spot (i.e.\ reallocating the newly added 1s of earlier rows appropriately) eventually yields the desired $m = k+1$ matrix.
\end{enumerate}
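The insertion step of the inductive argument can be sketched in code. The following Python function is a hypothetical illustration (the name and structure are ours, not from the proof): it realizes the ``move a previous 1 to make space'' reallocation as an augmenting-path search over the 0 cells, placing one new 1 in every row and column.

```python
def add_one_per_row_and_col(A):
    """Given an n x n 0/1 matrix A with the same number k <= n-1 of 1s in
    every row and column, place one extra 1 in every row and column.
    Earlier rows' new 1s are moved when needed, as in the proof."""
    n = len(A)
    match_col = [-1] * n  # match_col[j] = row whose new 1 sits in column j

    def try_row(r, seen):
        # Try to place row r's new 1 on a 0 cell; if the column is taken,
        # recursively try to relocate the row currently occupying it.
        for c in range(n):
            if A[r][c] == 0 and c not in seen:
                seen.add(c)
                if match_col[c] == -1 or try_row(match_col[c], seen):
                    match_col[c] = r
                    return True
        return False

    for r in range(n):
        if not try_row(r, set()):
            raise ValueError("no valid placement: input violates the hypothesis")

    B = [row[:] for row in A]
    for c, r in enumerate(match_col):
        B[r][c] = 1
    return B
```

For example, feeding it the $3 \times 3$ identity matrix ($k = 1$) returns a matrix with exactly two 1s in every row and column.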

Since the induction covered any $n > 2$ and any possible matrix with $m = k$ 1s in every row and column, and since the construction can produce every possible $m = k + 1$ matrix\footnote{because at every row we select the spot arbitrarily among the free ones}, we conclude that the initial claim is valid in every case.
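Reading the result in the other direction suggests a decomposition procedure: repeatedly extract one elementary matrix and subtract it. The sketch below is again our own illustration, not part of the proof; the existence of each extracted elementary matrix is what the theorem guarantees, and the search mirrors the reallocation argument above, only over 1 cells instead of 0 cells.

```python
def extract_elementary(A):
    """Find an elementary matrix inside A: n 1-cells, one per row and column."""
    n = len(A)
    match_col = [-1] * n  # match_col[j] = row using column j

    def try_row(r, seen):
        for c in range(n):
            if A[r][c] == 1 and c not in seen:
                seen.add(c)
                if match_col[c] == -1 or try_row(match_col[c], seen):
                    match_col[c] = r
                    return True
        return False

    for r in range(n):
        if not try_row(r, set()):
            raise ValueError("no elementary submatrix found")
    E = [[0] * n for _ in range(n)]
    for c, r in enumerate(match_col):
        E[r][c] = 1
    return E

def decompose(A, m):
    """Write A (with m 1s in every row and column) as a list of m elementary matrices."""
    A = [row[:] for row in A]
    parts = []
    for _ in range(m):
        E = extract_elementary(A)
        parts.append(E)
        for i in range(len(A)):
            for j in range(len(A)):
                A[i][j] -= E[i][j]
    return parts
```

Applied to the $m = 2$ matrix $\bigl(\begin{smallmatrix}1&1&0\\0&1&1\\1&0&1\end{smallmatrix}\bigr)$, it returns two elementary matrices summing back to the input.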



