%\documentclass[twoside,titlepage]{report}
%\usepackage{fullpage}
%\usepackage{graphicx}
%\usepackage{verbatim}
%\usepackage{wrapfig}
%\usepackage{amsmath}
%\usepackage{moreverb}
%\begin{document}
\section{Introduction}
Now we will give an introduction to using MatPix by demonstrating some of its more salient language features. We present all of the code necessary to implement a simple linear regression algorithm; understanding the language features demonstrated here is crucial to building more complex MatPix programs. Detailed step-by-step instructions follow.

We also note that many MatPix constructs are taken from the widely used Matlab language. Matlab syntax is described in more detail in \cite{matlab}.
\subsection{Regression Details}
A multidimensional linear regression exercises the MatPix-specific details necessary for constructing complex programs.

We begin by giving a brief overview of the mathematics behind a regression. Let $X$ be an $N \times (D+1)$ dimensional matrix of our input points, where $N$ is the number of points and $D$ is the dimensionality of each point (the zeroth column holds the constant $1$). Next, our output $Y$ is an $N\times 1$ dimensional matrix. The idea behind a regression is that we wish to find a hyperplane in $D$ dimensions that best relates $X$ to $Y.$

Writing down our empirical risk equation and minimizing it by setting the gradient to zero, we obtain
\begin{eqnarray}
\label{regressioneqn}
\theta^* = \left ( X^T X\right )^{-1}X^TY
\end{eqnarray}
where $\theta^*$ is the optimal set of parameters for our hyperplane. We have that $\theta^*$ is a $(D+1) \times 1$ matrix representing the hyperplane

$$y= \begin{pmatrix}1 & x_1 & \cdots & x_D \end{pmatrix} \begin{pmatrix} \theta^*_0 \\ \theta^*_1 \\ \vdots \\ \theta^*_D \end{pmatrix}$$


The general idea here is that we take as input a set of $D$-dimensional points (represented by $X$) and an equal number of one-dimensional outputs (represented by $Y$) and return $\theta^*$, which relates future $D$-dimensional points to hypothesized outputs.

A linear regression is useful for real data analysis when we wish to find empirical trends. The regression gives us a model relating input data to output data. Given some input data, this enables us to make output predictions.
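For readers who want to check the algebra outside MatPix, here is a small illustrative sketch in plain Python (not MatPix) of the regression equation for the single-feature case, where $X^TX$ is a $2\times 2$ matrix and can be inverted in closed form. The function name and data here are hypothetical, chosen only for illustration:

```python
# Solve theta* = (X^T X)^{-1} X^T Y for a single-feature regression.
# X has columns [1, x]; the 2x2 inverse is written in closed form.

def regress_1d(xs, ys):
    n = len(xs)
    # Entries of X^T X
    a = n                       # sum of 1*1
    b = sum(xs)                 # sum of 1*x
    d = sum(x * x for x in xs)  # sum of x*x
    det = a * d - b * b
    # Entries of X^T Y
    p = sum(ys)
    q = sum(x * y for x, y in zip(xs, ys))
    # theta* = (X^T X)^{-1} X^T Y, using the 2x2 inverse [[d,-b],[-b,a]]/det
    theta0 = (d * p - b * q) / det
    theta1 = (a * q - b * p) / det
    return theta0, theta1

# Points generated from y = 0.3 + 5.5*x should be recovered almost exactly.
xs = list(range(16))
ys = [0.3 + 5.5 * x for x in xs]
print(regress_1d(xs, ys))  # close to (0.3, 5.5)
```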
\section{Implementation}
\subsection{Matrix Inversion}
A crucial component of a matrix regression is the ability to invert a matrix. In general, matrix inversion is one of the most important algorithms for mathematical computation.

Here we present a simple Gauss-Jordan elimination technique for matrix inversion. For conciseness, we leave out error checking for singularities (determinant equal to zero).

\subsubsection{Implementing the Inversion}
The Gauss-Jordan technique involves augmenting the input matrix with the identity, and then eliminating rows until the inverse is obtained where the identity previously resided.

Below is pseudo-code for this familiar algorithm:

\begin{verbatimtab}
GaussJordan (X)
1:	A  <-- X horizontally concatenated with the identity of size X.size
2:	for i <-- each row of A
3:		while the i-th position of row i is zero
4:			swap row i of A with a row below it
5:		divide row i by the value at position i of row i
6:		for j <-- each row of A below i
7:			row j <-- row j - position i of row j * row i

8:	Repeat 2-7 for A starting at the last row and eliminating upwards
9:	Return all rows of A for columns X.size+1 to the end
\end{verbatimtab}
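To make the steps concrete, the pseudo-code can be rendered in plain Python (not MatPix). This sketch assumes a nonsingular square matrix given as a list of rows and, like the MatPix version below, omits singularity checks:

```python
def gauss_jordan(X):
    """Invert a nonsingular square matrix via Gauss-Jordan elimination."""
    n = len(X)
    # Step 1: augment X with the n x n identity.
    A = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
         for i, row in enumerate(X)]
    # Steps 2-7: eliminate downwards.
    for i in range(n):
        # Swap with a lower row while the pivot is zero.
        r = i
        while A[i][i] == 0 and r < n - 1:
            r += 1
            A[i], A[r] = A[r], A[i]
        # Normalize the pivot row.
        p = A[i][i]
        A[i] = [v / p for v in A[i]]
        # Eliminate the pivot column from all rows below.
        for j in range(i + 1, n):
            f = A[j][i]
            A[j] = [vj - f * vi for vj, vi in zip(A[j], A[i])]
    # Step 8: repeat the elimination upwards.
    for i in range(n - 1, 0, -1):
        for j in range(i - 1, -1, -1):
            f = A[j][i]
            A[j] = [vj - f * vi for vj, vi in zip(A[j], A[i])]
    # Step 9: the right half of A is now the inverse.
    return [row[n:] for row in A]

print(gauss_jordan([[4.0, 7.0], [2.0, 6.0]]))  # approximately [[0.6, -0.7], [-0.2, 0.4]]
```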

\subsubsection{The Code}
Now we have the following MatPix code for the above algorithm:

\begin{quote}
\verbatimtabinput[8]{../LRM/tests/mpx/gauss_jordan.mpx}
\end{quote}

We will describe this code piece by piece. 

First, we comment on how these functions have been written. We note that all MatPix arguments are passed by value; thus, we cannot write a function that modifies a matrix in an enclosing scope, and must instead return the result.


We now have the function
\begin{verbatim}
eye(m)
\end{verbatim}
which takes a dimension, $m$, and returns the identity matrix of size $m.$ First, we create an $m \times m$ matrix, which is automatically initialized to the zero matrix. Next, we iterate through the rows and set each diagonal element to $1.$

Here we notice two important features of MatPix. First, matrix indices start at position $0$ and end at position $m-1.$ Second, we have a special Matlab-like loop construct. This construct allows us to create an iterator and designate its starting and finishing positions with a very simple syntax. Such a construct is very convenient for mathematical algorithms, which make heavy use of matrix iteration.
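For comparison only (this is Python, not MatPix), the same zero-based, iterator-driven pattern for building an identity matrix looks like:

```python
def eye(m):
    # Build an m x m zero matrix, then set the diagonal to 1.
    identity = [[0.0] * m for _ in range(m)]
    for i in range(m):  # i runs 0 .. m-1, as in MatPix's 0:m-1
        identity[i][i] = 1.0
    return identity

print(eye(3))  # [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
```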

Next, we have the first portion of the main inversion function
\begin{verbatimtab}
function gauss_jordan(V,n) //expand to use size function
{
    matrix Q[n,n*2];
    matrix E[n,n];
    E=eye(E, n);
    
    //Set Q to V augmented with nxn identity
    Q[:, 0:n-1] = V;
    Q[:, n:n*2-1] = E;
\end{verbatimtab}

We first notice that there are two function arguments. The first, $V$, is the square matrix that we wish to invert. The second, $n$, is the dimensionality of $V.$ Note that MatPix provides no way to obtain the dimensionality of a matrix, so we must pass the size explicitly when it is required. Explicitly passing the size may also help the user keep track of the dimensionality of intermediate matrices in complex algorithms.

Next, we declare and assign our augmented matrix (A in the pseudocode). Here we notice another key feature of MatPix derived from Matlab. We can assign submatrices to slices of a matrix. This flexibility allows us to construct the augmented matrix in two simple steps. 

The first step assigns V to the left side of Q; the second assigns E to the right side. This slice assignment gives us the flexibility to manipulate matrices for many common applications such as this one.
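As a rough Python analogue (not MatPix) of the two slice assignments, using list slicing to build the augmented matrix from hypothetical example values:

```python
# Build the augmented matrix [V | I] with slice assignment, mirroring
# Q[:, 0:n-1] = V and Q[:, n:n*2-1] = E from the MatPix code above.
n = 2
V = [[4.0, 7.0], [2.0, 6.0]]
E = [[1.0, 0.0], [0.0, 1.0]]

Q = [[0.0] * (2 * n) for _ in range(n)]
for i in range(n):
    Q[i][0:n] = V[i]        # left half  <- V
    Q[i][n:2 * n] = E[i]    # right half <- identity
print(Q)  # [[4.0, 7.0, 1.0, 0.0], [2.0, 6.0, 0.0, 1.0]]
```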

Next, we have

\begin{verbatimtab}
    r = 0;
    temp=0;//swap storage
    for (i=0:n-2)  //for each row
    {	
        r = i;
        //Check for zero in eliminator row
        //If zero, then find row to swap with
        while((Q[i,i] ==0) && (r !=n-1))
        {   
            r = r+1;
            temp = Q[i,:];
            Q[i,:] = Q[r,:];
            Q[r,:] = temp;
        }
\end{verbatimtab}

Here, $i$ iterates through each row except the last.  We then use the standard while-loop construct to check if we have a zero in the leading column of row $i.$ We then swap row $i$ for a row beneath it until we have exhausted all rows below. If we find a row with a non-zero leading element, then that row becomes our new eliminating row. If we find no such row, then the entire leading column is zero.

Notice that the array slice assignment operation is very useful for swapping rows.
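The swap idiom translates directly; here is a Python sketch (not MatPix) of the same three-assignment row swap on a hypothetical pair of rows:

```python
# Swap two rows through a temporary, as the MatPix code does with Q[i,:].
Q = [[0.0, 2.5], [1.0, 3.0]]
i, r = 0, 1
temp = Q[i][:]   # temp = Q[i,:]
Q[i] = Q[r][:]   # Q[i,:] = Q[r,:]
Q[r] = temp      # Q[r,:] = temp
print(Q)  # [[1.0, 3.0], [0.0, 2.5]]
```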

Next,

\begin{verbatimtab}
         //Normalize the eliminator row
    	if (Q[i,i]!=0)
    	{ 
        	Q[i,:] = Q[i,:]./Q[i,i];
        }
		//Use Eliminator row to eliminate
		//the column of all rows beneath
        for (k=i+1:n-1) 
        {
            Q[k,:] = Q[k,:] - Q[k,i] .* Q[i,:];
        }
    }
\end{verbatimtab}

The first ``if'' clause checks that column $i$ is not all zeros. This condition is false only if the previous while loop iterated through every row without finding a row with a non-zero leading element to swap in.

We then normalize the row by its leading element, dividing the entire row by element $i.$ This allows us to use row $i$ to eliminate all rows below it. Row elimination occurs through a sequence of elementary row operations: we multiply row $i$ by element $i$ of row $j$, then subtract the result from row $j$ so that the new row $j$ has a $0$ in the leading position. We continue this process for each row beneath $i.$ Note that each pass of the outer loop eliminates a single column of the matrix; this is the column that we refer to as the leading column.
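A single normalize-and-eliminate step, sketched in Python (not MatPix) on a hypothetical $2\times 4$ augmented matrix:

```python
# One elimination step: normalize row i, then zero out column i in row j.
Q = [[2.0, 4.0, 1.0, 0.0],   # row i
     [1.0, 3.0, 0.0, 1.0]]   # row j
i, j = 0, 1

p = Q[i][i]
Q[i] = [v / p for v in Q[i]]                        # Q[i,:] = Q[i,:] ./ Q[i,i]
f = Q[j][i]
Q[j] = [vj - f * vi for vj, vi in zip(Q[j], Q[i])]  # Q[j,:] -= Q[j,i] .* Q[i,:]
print(Q)  # [[1.0, 2.0, 0.5, 0.0], [0.0, 1.0, -0.5, 1.0]]
```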

The last section of the code,

\begin{verbatimtab}
    for (i=n-1:1)
    {
        for (k = i-1:-1:0)
        {
            Q[k,:] = Q[k,:] - Q[k,i] .* Q[i,:];
        }
    }
    ret = Q[:,n:n*2-1];
    print("inverted result");
    print(ret);
\end{verbatimtab}

simply repeats the elimination in reverse, ensuring that the upper triangle of $Q$ is eliminated as well. Here we notice two convenient ``for''-loop constructs. First, the outer loop allows us to iterate from some number, in this case $n-1$, down to $1$. For extra clarity, we can also iterate backwards using the more familiar Matlab style (the inner loop).

After this entire process, the left section of matrix $Q$ is the identity matrix and the right section is the inverse of $V.$ Thus, we use the matrix slicing operator to return the right half of $Q.$

\section{Putting it all together}
By simply renaming the function ``gauss\_jordan'' to ``inv'', we obtain the following sample code for a multidimensional linear regression.

\begin{verbatimtab}
n = 16; d = 2;
matrix X[n,d];
X[:, 0] = 1;
for(i=0:n-1){
	X[i, 1:d-1] = i;
}

print("X input value");
print(X);

matrix theta[d,1];
for(i=0:d-1){
	theta[i, 0] = 5.2*i+.3;
}
print("the theta regression values");
print(theta);

Y = X*theta;

print("the actual Y values");
print(Y);

theta_solve = inv(X'*X, d)*X'*Y;
print("regression solution for theta:");
print(theta_solve);

err = theta-theta_solve;
print("solution error:");
print(err);
\end{verbatimtab}

We test the linear regression as follows. We create our inputs, $X$, as well as our regression parameters, $\theta.$ We then calculate $Y$ as $X\theta.$ Thus, $Y$ is simply the output of applying $\theta$ to $X.$ We then use the regression formula to calculate $\theta^*$ from $X$ and $Y$, and compare $\theta^*$ to $\theta$ to verify that the regression yielded accurate results. The regression is a success if $\theta^* = \theta.$
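For readers without a MatPix interpreter, the same end-to-end check can be sketched in plain Python (not MatPix); here a closed-form $2\times 2$ inverse stands in for the Gauss-Jordan call:

```python
# Mirror the MatPix test: build X and theta, form Y = X * theta,
# then recover theta via (X^T X)^{-1} X^T Y.
n, d = 16, 2
X = [[1.0, float(i)] for i in range(n)]
theta = [5.2 * i + 0.3 for i in range(d)]  # [0.3, 5.5]
Y = [sum(X[i][k] * theta[k] for k in range(d)) for i in range(n)]

# X^T X (2x2) and X^T Y (2x1)
xtx = [[sum(X[i][a] * X[i][b] for i in range(n)) for b in range(d)]
       for a in range(d)]
xty = [sum(X[i][a] * Y[i] for i in range(n)) for a in range(d)]

# Closed-form 2x2 inverse stands in for the gauss_jordan call.
det = xtx[0][0] * xtx[1][1] - xtx[0][1] * xtx[1][0]
inv = [[xtx[1][1] / det, -xtx[0][1] / det],
       [-xtx[1][0] / det, xtx[0][0] / det]]

theta_solve = [sum(inv[a][b] * xty[b] for b in range(d)) for a in range(d)]
print(theta_solve)  # close to [0.3, 5.5]
```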


The first $6$ lines construct our input matrix, $X.$ The output of the first two print statements is thus

\begin{verbatim}
X input value
1, 0
1, 1
1, 2
1, 3
1, 4
1, 5
1, 6
1, 7
1, 8
1, 9
1, 10
1, 11
1, 12
1, 13
1, 14
1, 15
\end{verbatim}

Next, we assign our $\theta$ values. This vector represents the parameters for our hyperplane. The output of the next print statement is
\begin{verbatim}
the theta regression values
0.3
5.5
\end{verbatim}

Using our $\theta$ values, we next derive our $Y$ values as a linear transformation on $X$. The next print statements output
\begin{verbatim}
the actual Y values
0.3
5.8
11.3
16.8
22.3
27.8
33.3
38.8
44.3
49.8
55.3
60.8
66.3
71.8
77.3
82.8
\end{verbatim}

Finally, the crucial part of this code,
\begin{verbatim}
theta_solve = inv(X'*X, d)*X'*Y;
\end{verbatim}
solves for $\theta^*$ using the linear regression formula that we derived earlier in Equation~\ref{regressioneqn}. The output of the final print statements is
\begin{verbatim}
regression solution for theta:
0.3
5.5

solution error:
0
0
\end{verbatim}

We note that our regressed $\theta^*$ is equal to $\theta$; thus, we have solved and verified the linear regression. The GPU computations yield numerical inaccuracies when run under GPU emulation software; as we can see, however, the hardware GPU calculations are indeed accurate.

\section{Notes} 
We have presented a detailed description of the features of MatPix using the example of a linear regression. We see that the language features of MatPix are well suited to the very general class of mathematical problems involving matrix manipulation. The distinguishing feature of MatPix is that each of these matrix manipulations is computed on the GPU.

The MatPix language features allow us to compute a multidimensional linear regression in very few lines. The same algorithm would require many more lines of code in commonly used high-level languages such as Java, C, or C++.

The power of MatPix is that a relatively small set of language features allows this succinct algorithmic description. First, all MatPix objects are matrices; having matrices as first-class objects is crucial when dealing with sets of data. Next, the key feature of matrix slicing allows very complex assignment and indexing in a single command: we can both read from and write to a matrix using simple syntax. Finally, the ``for''-loop index constructor allows us to effortlessly implement the commonly used paradigm of iteration.

From this tutorial, we can see that it would not be difficult to implement a comprehensive matrix library in MatPix. While the inverse is a key function, we could implement the determinant, eigenvalue decomposition, and other such routines with similar ease.



%\end{document}