\documentclass[12pt]{article}
% Use utf-8 encoding for foreign characters
%\usepackage[utf8]{inputenc}

\usepackage{datatool} % Import and display CSV files


%\usepackage{setspace}
\usepackage{fullpage}
%\usepackage{JASA_manu} We'll use this later.

\usepackage[round]{natbib}

\usepackage{enumerate}
\usepackage{verbatim}
\usepackage{amsfonts,amsmath,amssymb,amsthm}
\usepackage{graphicx,subfigure}
\usepackage{epsf}
\usepackage{epsfig}
\usepackage{fancyhdr}
\usepackage{graphics}
\usepackage{psfrag,latexsym}
\usepackage{url}
\input{RatbertMacros2}
\input{ExtraCommands}
%\input{ellcom}


%%%%%%%%%%%%%%%%%%%%%%%%%%%%%


%\newcommand{\myparagraph}[1]{\noindent{\bf{#1}}}

\newcommand{\weightedloss}{{L_{\text{quadratic}}(\beta, \beta_{hca})}}

\newcommand{\histogramSize}{.5}

%\newtheorem{lem}{Lemma}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\title{Penalized Regression Models for the NBA}
\author{Dapo Omidiran \footnotemark}
\date{}

%\doublespacing
\begin{document}

\maketitle

\footnotetext[1]{
Dapo Omidiran is Doctoral Candidate, University of California, Berkeley, CA 94720 (e-mail: dapo@eecs.berkeley.edu). This work was partially supported by the National Science Foundation. The author thanks A, B, C, and the referees for helpful comments.}

\begin{abstract}

In the National Basketball Association (NBA), teams must make choices about which players to acquire, how much to pay them, and other decisions that are fundamentally dependent on a player's effectiveness. Thus, there is great interest in quantitatively understanding each player's impact on the game.
In this paper we develop a penalized regression model for the NBA, use cross-validation to select its tuning parameters, and use the resulting model to produce player ratings. We then apply the model to the 2010-2011 NBA season to predict the outcome of games.
We compare its performance to two other known techniques, and demonstrate empirically that our model produces substantially better
predictions.

\end{abstract}

{\bf Keywords:} Basketball; Penalized Regression; Lasso; Convex Programming

\section{Introduction}
The National Basketball Association (NBA) is both a game and a multi-billion dollar business. Each of the 30 franchises in the NBA tries its
best to field the most competitive team possible. Each team has a roster of up to 15 players, of which 5 play at any one time. Players can be added to or removed from a roster via the NBA draft, trades, the free agent market, contract expiration, and injury. The task of a team's general manager is to build the most competitive roster of players possible within his budget. A very important component of team-building is to understand how players interact competitively (against other teams) and cooperatively (with each other).

At least 18 of 30 NBA teams have quantitative groups analyzing data to evaluate and rate players \citep{teamswithstatgrps}. In addition, many third-party organizations apply quantitative analysis to the NBA. The website ESPN.com has many analysts providing statistical analysis
for casual fans, John Hollinger perhaps being the most prominent \citep{hollingersite}. Sophisticated gamblers use quantitative analysis to find attractive bets, and sports books use it to price bets on games. Finally, APBRmetrics \citep*{APBRmetrics} is an online forum where amateurs and professionals analyze basketball.

\subsection{Notation}
Let $A$ and $B$ be two length $\numobs$ vectors. Let $C$ be a vector of length $\numobs$ with $C_i > 0$ for all $i=1, 2, \ldots, \numobs$.

We can then define
\begin{align*}
A^T B \defn& \sum_{i=1}^{\numobs} A_i B_i & \text{ (Inner product)}\\
||A||_2 \defn& \sqrt{\sum_{i=1}^{\numobs} A_i^2} & \text{ ($\ell_2$ norm)}\\
||A||_1 \defn& \sum_{i=1}^{\numobs} |A_i| & \text{ ($\ell_1$ norm)}\\
||A||_{C} \defn& \sqrt{\sum_{i=1}^{\numobs} C_i A_i^2} & \text{ (C-weighted $\ell_2$ norm)}
\end{align*}

We shall use the notations
\begin{align*}
\text{$\ones_\numobs$ to denote the $\numobs \times 1$ vector of ones.}\\
\text{$\natbasis_i$ to denote the $i^{th}$ natural basis vector.} \\
\text{$\eyemat_\pdim$ to denote a $\pdim \times \pdim$ identity matrix.}
\end{align*}

and $\text{Diag}(w)$ to denote a diagonal matrix with diagonal entries specified by the vector $w$.

Finally, given a scalar $t$, we define the function
\begin{align*}
(t)_+ \defn \max(t,0).
\end{align*}

\subsection{A Brief Introduction to the game of Basketball}
There are 30 teams in the league, and each team has a roster of roughly 12-15 players. Games are usually 48 minutes long,
and each of the two competing teams has exactly five players on the floor at a time. Thus, there are ten players on the floor for the 48 minutes of the game.
Each team plays 82 games in a season. 41 of these games are at their home arena and 41 are played away from their home arena. Thus,
there are 1230 total games in an NBA regular season.

Associated with each game is a box score, which records the statistics for the players who played in the game.
Figure \ref{boxscore} contains a box score from an NBA game played on February 2nd, 2011 by the Dallas Mavericks
 (the home team) against the New York Knicks (the away team). Note that we only display the box score for the Mavericks players.
Observe that there are 12 players listed in the box score, but only 11 who actually
played for the Mavericks in this game. Each of the columns of this box score corresponds to a basic statistic of possible interest.
\begin{figure}
\caption{Sample single-game boxscore for the Dallas Mavericks}
\centering
\includegraphics[scale=.5]{images/boxscore.eps}
\label{boxscore}
\end{figure}



%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Standard Statistical Modeling}
To statistically model the NBA, we must first translate from the game of basketball to a dataset. There is a standard
procedure for this currently used by many basketball analysts \citep{Kubatko07, Oliver2004}, which we describe as follows.

We model each basketball game as a sequence of $\numobs$ independent events between two teams. During event $i$ the home team scores $\Ysca_i$ net points (the exact definition of $\Ysca_i$ is slightly
different, but this is close enough for intuition). During event $i$, we also record the 10 players currently on the floor.
We use the variable $\pdim$ to denote the total number of players in the league ($\pdim$ is approximately 450). We can then represent the players on the floor for possession $i$ with a vector $X_{i}$ of length $\pdim$ defined as follows:

\begin{align*}
X_{ij} =
\begin{cases}
1 & \mbox{Player $j$ is on the floor for the home team} \\
-1 & \mbox{Player $j$ is on the floor for the away team} \\
0 & \mbox{otherwise}
\end{cases}
\end{align*}

Associated with event $i$ is a weighting factor $\wsca_i$. Roughly speaking, the $i^{th}$ possession happens for $\wsca_i$ minutes.
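The encoding of $X_i$ above can be sketched in a few lines (a toy illustration; the helper name and the 12-player ``league'' are hypothetical):

```python
import numpy as np

def encode_event(home_on_floor, away_on_floor, p):
    """Build the signed indicator vector X_i for one event.

    home_on_floor / away_on_floor: indices of the five players on the
    floor for each team; p: total number of players in the league.
    """
    x = np.zeros(p)
    x[list(home_on_floor)] = 1.0    # +1 for home players
    x[list(away_on_floor)] = -1.0   # -1 for away players
    return x

# Toy example with a 12-player "league": players 0-4 home, 5-9 away.
x = encode_event(range(5), range(5, 10), p=12)
```

Note that every such vector sums to zero, since exactly five entries are $+1$ and five are $-1$.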

We also summarize box score data like that of Figure \ref{boxscore} with the matrix $\Rmat_{\text{Mavericks},\text{Game 1}}$ of size $12$ by $\ddim$ defined as
\[
 \Rmat_{\text{Mavericks},\text{Game 1}} =
\bordermatrix{\text{}&\text{MIN}&\text{FGM}&\text{FGA}&\ldots &\text{PTS}\cr
                \text{Brian Cardinal}&10 &  {1} &{1}  & \ldots & 3\cr
                \text{Dirk Nowitzki}& 33  &  {10} & {16} & \ldots & 29\cr
                & \vdots & \vdots & \vdots &\ddots & \vdots\cr
                \text{Peja Stojakovic}& 0  &  0 & 0       &\ldots & 0}
\]

The above box score only records the statistics of the 12 players on the Dallas Mavericks roster for that particular game.
One can imagine computing the summed box score matrix
\begin{align}
\Rmat_{\text{Mavericks}} = \sum_{i=1}^{82}  \Rmat_{\text{Mavericks, Game }i}
\end{align}

that summarizes the total statistics of these 12 players for an entire season.

Finally, one can imagine the $\pdim \times \ddim$ matrix $\Rmat$ that vertically concatenates $\Rmat_{j}$ across the $30$ teams in the NBA:
\[
 \Rmat =
\bordermatrix{\text{}&\text{}\cr
                \text{Team 1}&\Rmat_{\text{Mavericks}}\cr
                \text{Team 2}&\Rmat_{\text{Bulls}}\cr
                \vdots & \vdots\cr
                \text{Team 30}&\Rmat_{\text{Celtics}}
                }
\]

This matrix $\Rmat$ summarizes the season box score statistics for all $\pdim$ players who played in the NBA for that year.

It turns out that for the experiments done in this paper, the rows of $\Rmat$ don't actually contain raw box score statistics like
blocks, rebounds, and assists; the rows of $\Rmat$ are defined slightly differently. For further information on the precise quantities
contained in each row, see Section \ref{bsweights_interpretation}.

\subsection{Functional Relationship}

We want to define a functional relationship between $\Xmat_i$ and $\Ysca_i$, i.e., find a function $f$ such that $\Ysca_i \approx f(\Xmat_i)$. One natural way to do this is through a linear regression model,
which defines the relationship

\begin{align*}
\Ysca_i = \betastar_{hca} + \Xmat_i^{T} \betastar + \epsilon_i 
\end{align*}

Recall that the event $i$ has a weighting factor $\wsca_i$ associated with it. For notational convenience, we stack the variables $\Ysca_i$, $\wsca_i$, and $\epsilon_i$ into the $\numobs$ vectors $\Yvec$, $\Wvec$, and $\Epsilonvec$ and the variables $\Xmat_i$ into the $\numobs \times \pdim$ matrix $\Xmat$.

This yields the matrix expression
\begin{align}
\label{apm_model}
\Yvec = \ones_\numobs \betastar_{hca} + \Xmat\betastar + \Epsilonvec
\end{align}

The scalar variable $\betastar_{hca}$ in Equation \eqref{apm_model} represents a home court advantage term,
while the $\pdim$ vector $\betastar$ is then interpreted as the net number of points each of the $\pdim$ players in
the league ``produce'' per minute. Note that this is different from the number of points they directly score.
The model described by Equation \eqref{apm_model} recognizes players for whom their team is more effective through their presence on the floor.
For example, both Kevin Garnett of the Boston Celtics and Joakim Noah of the Chicago Bulls are known to set hard screens, play excellent defense, and boost the morale of their teammates while not necessarily actually scoring lots of points themselves. These contributions do not appear in the box score, but are arguably just as important for a winning team.

This linear model is called the Adjusted Plus/Minus (APM) model \citep{apm}; it was independently proposed by several people in the early 2000s.

Given observations $(\Yvec, \Xmat)$ and weights $\Wvec$, we can define the loss function
\begin{align}
\label{quadraticloss}
L_{\text{quadratic}}(\beta, \beta_{hca}) \defn \frac{1}{\sum_{i=1}^\numobs \wsca_i} ||\Yvec-\ones_\numobs \beta_{hca} - \Xmat\beta||_{\Wvec}^2.
\end{align}
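For concreteness, the loss of Equation \eqref{quadraticloss} can be evaluated directly (a minimal numpy sketch; the toy data below are hypothetical):

```python
import numpy as np

def quadratic_loss(beta, beta_hca, X, y, w):
    """L_quadratic: w-weighted squared residual norm, normalized by sum(w)."""
    r = y - beta_hca - X @ beta
    return float(np.sum(w * r**2) / np.sum(w))

# Toy check: a model that fits the data exactly has zero loss.
X = np.array([[1.0, -1.0], [1.0, 1.0]])
beta = np.array([2.0, 1.0])
y = 3.5 + X @ beta            # generated with beta_hca = 3.5, no noise
w = np.array([1.0, 2.0])
loss = quadratic_loss(beta, 3.5, X, y, w)
```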

A natural technique for learning the parameters $\betastar$ is the weighted least squares (WLS) estimate:
\begin{align}
\label{apm_algorithm}
(\betaols, \betahcaols) = \arg\min_{\beta, \beta_{hca}} L_{\text{quadratic}}(\beta, \beta_{hca})
\end{align}

The values $\betaols$ from the WLS are called the adjusted plus/minus (APM) values \citep{apm} for the players.
Basketballvalue.com \citep{bbvalue} has computed $\betaols$ values for different players in the league for several recent seasons.
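The WLS estimate of Equation \eqref{apm_algorithm} has a standard closed-form computation: scale each row by $\sqrt{\wsca_i}$ and solve ordinary least squares. A minimal sketch (one standard approach; not necessarily the implementation used by Basketballvalue.com):

```python
import numpy as np

def apm_wls(X, y, w):
    """Weighted least squares over (beta_hca, beta): equivalent to OLS
    after scaling each observation by sqrt(w_i)."""
    n = X.shape[0]
    A = np.column_stack([np.ones(n), X])   # prepend intercept column for beta_hca
    sw = np.sqrt(w)
    coef, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)
    return coef[0], coef[1:]               # (beta_hca_hat, beta_hat)

# Sanity check: recover known parameters from noiseless synthetic events.
rng = np.random.default_rng(0)
X = rng.choice([-1.0, 0.0, 1.0], size=(200, 5))
beta_true = np.array([2.0, -1.0, 0.5, 0.0, 3.0])
y = 1.5 + X @ beta_true
w = rng.uniform(0.5, 2.0, size=200)
b_hca, b = apm_wls(X, y, w)
```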

\subsection{Evaluating the APM Estimate}
Table \ref{tab:my_table} lists the top ten players in the NBA for the combined 2009-2010 and 2010-2011 NBA regular seasons by their APM values.

\DTLloaddb{bbvalue}{tables/bbvalueapm.csv}

\begin{table}
  \caption{APM Player ratings}
\centering
\begin{tabular}{lc}
              Player & $\betaols_{i}$  \\ \hline
\DTLforeach{bbvalue}{%
\firstname=Rank,\surname=Player,\score=Rating}{%
\surname & \score \\}
\end{tabular}
\label{tab:my_table}
\end{table}

The APM approach rates LeBron James as the best player in the league
for the combined 2009-2010 and 2010-2011 regular seasons. Since $\betaols_{\text{LeBron James}}=12.62$, this procedure suggests that he is worth an additional
12.62 net points to his team for every 100 possessions the team plays.
Since there are roughly 90 possessions in a typical NBA game, this means
that LeBron adds roughly 11 points per game to his team's final margin
of victory.
How significant is this quantity? The ``home court advantage'' factor
in the NBA has previously been estimated to be worth 3-4 points. In other words, for two equally matched teams, we expect the team playing at home to win by roughly 3-4 points, on average.
Thus, this APM estimate suggests that LeBron is worth roughly three times as much as home court advantage.

Now, just how believable are the player ratings produced by APM in Table \ref{tab:my_table}?

Overall, this top ten list has many of the widely-considered best players in the NBA. However, there are also some names on this list
that are very questionable. For example, if we believe these ratings, then Nick Collison, a player believed by most
fans and analysts to be at best a merely average player at his position, is better than Dwyane Wade and Dwight Howard, two of the premier superstars in the league. Similarly, while Nene Hilario
and Luol Deng are very good players at their positions, they are not considered by most fans and analysts to be amongst the top ten players in the NBA.

This contradiction between common wisdom and what APM tells us about players is useful, since it can either reveal to us
that the common wisdom is wrong (for example, perhaps Dwight Howard is not a top five player as commonly believed), or that the APM approach is misleading.

We therefore need some basis of comparison to evaluate how well APM is performing.

\subsection{Evaluating APM}
In classical linear regression, there are various tools we can use (standard error, goodness-of-fit tests \citep{R:Faraway:2004}) to evaluate
a linear regression model like APM. However, these tests typically assume that the underlying model
satisfies certain technical conditions (e.g., Gaussianity, linearity, statistical independence).

Unfortunately, it is unreasonable to expect these technical conditions to hold for the NBA. Thus, we must
find other ways to evaluate how trustworthy the $\betaols$ values are, and whether they should be trusted over
common wisdom about players.
One simple approach for evaluating the usefulness of the $\betaols$ values is to test their predictive power against a simple dummy
estimator.
To be explicit, we
\begin{enumerate}
\item Define a dummy estimator called the home court advantage (HCA) estimator that sets $\betadummy_i=0$ for each player,
and the home court advantage term $\betahcadummy=3.5$. In other words, each player is rated a zero, and the home team is predicted to win every
100 possessions by $3.5$ points.
\item Take the 1230 games of the NBA regular season and compute both the APM and HCA estimates from the first 820 games.
\item Use the ratings produced by each of the above techniques to estimate the margin of victory for the home team for the remaining 410 games of the season.
\end{enumerate}
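The comparison above can be sketched as follows (a toy illustration with hypothetical margins; the HCA estimator predicts a constant $3.5$-point home margin for every game):

```python
import numpy as np

def evaluate(pred_margin, actual_margin):
    """Return the prediction errors E_hat_i and the fraction of games whose
    winner is called incorrectly (sign of predicted vs. actual home margin)."""
    pred = np.asarray(pred_margin, dtype=float)
    actual = np.asarray(actual_margin, dtype=float)
    errors = pred - actual
    wrong_winner = float(np.mean(np.sign(pred) != np.sign(actual)))
    return errors, wrong_winner

# HCA dummy predictions for four hypothetical test games.
actual = np.array([5.0, -2.0, 10.0, -7.0])   # actual home margins
hca_pred = np.full(4, 3.5)                   # constant home-court prediction
errors, wrong = evaluate(hca_pred, actual)
```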

If APM accurately models the behavior in the NBA, then at a minimum it must substantially outperform the HCA estimator.

Let us define the variables
\begin{align}
\label{eval_apm}
A_i &\defn \text{the number of points by which the home team wins game } i \\
\hat{A}_i &\defn \text{the predicted number of points by which the home team wins game } i \notag \\
\hat{E_i} &\defn \hat{A}_i - A_i \notag
\end{align}

Figure \ref{figdump} is a histogram of the variable $\hat{E_i}$ over the course of the 410 games of interest in the 2010-2011 NBA season for each technique.

A perfect estimator would have a spike of height 410 centered around zero. Thus, the ``spikier'' the histogram looks, the better
a method performs.

It is hard to immediately say from Figure \ref{figdump} that the APM approach yields better predictions than the dummy estimate.

\begin{figure}
\caption{Comparison of HCA estimate and APM estimate.}
\centering
\includegraphics[scale=\histogramSize]{images/plot1.eps}
\label{figdump}
\end{figure}

We can also study some of the empirical properties of the variable $\hat{E_i}$ for each approach. Table \ref{tab:hca_vs_apm}
summarizes the results. 

\DTLloaddb{results}{tables/results.csv}

\begin{table}
  \caption{Performance of Statistical Algorithms}
\centering
%\scalebox{.9}{
\begin{tabular}{l*{4}{c}r}

              Metric & HCA & APM & $\lambdapm$ & $\lambdapm(\Rmat, 2)$ & KRR\\
\hline
\DTLforeach{results}{%
\firstname=metric,\surname=hca,\score=apm,\lpm=one,\lpmtwo=two,\krr=krr}{%
\firstname & \surname & \score & \lpm & \lpmtwo & \krr \\}
\end{tabular}
%}
\label{tab:hca_vs_apm}
\end{table}

When comparing HCA to APM, we notice that
\begin{enumerate}
\item APM reduces the percentage of games in which the wrong winner is identified from $39.27\%$ to $34.15\%$ over the block
of 410 games of interest.
\item Unfortunately, the empirical tail behavior of the variable $\hat{E_i}$ is substantially worse. For example, the average of $\absoluteerr$ increases from $10.98$ to $17.97$, meaning that APM makes larger errors
when predicting the final margin of victory of games.
\item While the HCA estimator has errors greater than $10$ only $43\%$ of the time, this increases
to $68\%$ with APM.
\end{enumerate}

As a result, it is hard to say that APM represents a meaningful improvement over the HCA estimator.

\section{Improvements to APM}
The APM algorithm \eqref{apm_algorithm} doesn't take into account the following three key pieces of information:

\begin{enumerate}
\item Sparsity. By and large, the NBA is a game dominated by elite players; lesser players have far less impact on wins and losses. While this is
not an established fact, it is commonly accepted wisdom and thus informs player acquisitions and salaries. For example, with a \$60 million
budget for players, one would much rather hire three elite \$15 million players and fill out the rest of the roster with cheap role players than spend
heavily on role players and skimp on elite players. This was the strategy of the Boston Celtics in the summer of 2007 when they traded
their role players and other assets to build a team around Kevin Garnett, Paul Pierce and Ray Allen \citep{garnett}, and of the Miami Heat in the summer of 2010 when they built a team
around LeBron James, Dwyane Wade and Chris Bosh \citep{lebron}. We shall incorporate this prior information through $\ell_1$ regularization,
which penalizes non-sparse models and should cause only elite players to stand out in the regression.
\item Box score information. Another valuable source of information for inferring player worth is box score statistics.
We expect good players not only to have good APM numbers, but also to generally produce assists, blocks, steals, etc. Thus, we prefer
ratings vectors $\hat{\beta}$ that are consistent with box score statistics.
\item Centering. We expect that some sort of weighted sum of player ratings should be close to zero, simply because the net number of points
scored as a whole in the NBA is zero. Thus, the inner product of the vector $v \defn y^{T} |X|$ and $\beta$ should be close to zero.
\end{enumerate}

We can encode the above prior information through the function
$g \defn g(\beta_{hca}, \beta, z_0, z; \lambdavec)$ defined as
\begin{align}
\label{lambdapm_objective_function}
g(\beta_{hca}, \beta, z_0, z; \lambdavec)
 = \underbrace{\weightedloss}_{\text{Weighted least squares}} + 
{{{\underbrace{\lambda_1 ||\beta ||_1}_{\text{Sparse player ratings}}}} }+ \\
{{{\underbrace{\lambda_2 ||\beta - z_0 - \Rmat z||_2^2}_{\text{Box score}}}}} +
{\underbrace{\lambda_3 ||z||_1}_{\text{Sparse box score weights}}} + 
{{{\underbrace{\lambda_4 \beta^{T} v v^{T} \beta}_{\text{Centering}}}}} \notag
\end{align}

{where $\lambdavec$ is shorthand notation for $(\lambda_1, \lambda_2, \lambda_3, \lambda_4)^T$.}

In the function $g$ defined in Equation \eqref{lambdapm_objective_function}
\begin{enumerate}
\item $\Rmat$ is a $\pdim \times \ddim$ matrix containing the box-score statistics of the $\pdim$ different players.
\item The variable $z$ gives us weights for each of the box score statistics.
\item The entries of $\lambdavec$ are regularization weights.
\end{enumerate}
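The objective $g$ of Equation \eqref{lambdapm_objective_function} can be transcribed directly (an evaluation sketch only, not a solver; all toy inputs below are hypothetical):

```python
import numpy as np

def g(beta_hca, beta, z0, z, X, y, w, R, v, lam):
    """Evaluate the lambdapm objective; lam = (lambda1, lambda2, lambda3, lambda4)."""
    l1, l2, l3, l4 = lam
    r = y - beta_hca - X @ beta
    wls = np.sum(w * r**2) / np.sum(w)     # weighted least-squares term
    box = beta - z0 - R @ z                # box-score consistency residual
    return float(wls
                 + l1 * np.abs(beta).sum()    # sparse player ratings
                 + l2 * box @ box             # box score term
                 + l3 * np.abs(z).sum()       # sparse box score weights
                 + l4 * (v @ beta) ** 2)      # centering: beta^T v v^T beta
```

With all four regularization weights set to zero, $g$ reduces to the quadratic loss of Equation \eqref{quadraticloss}.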

We can then find a model consistent with both the data and the prior information by solving the following convex optimization problem:
\begin{align}
\label{my_cvx}
\betalamhca, \betalam, \zzerolam, \zlam = \arg\min_{\beta_{hca}, \beta, z_0, z} g(\beta_{hca}, \beta, z_0, z; \lambdavec)
\end{align}

We call the procedure described by Equation \eqref{my_cvx} the $\lambdapm$ algorithm.

\subsection{Comments on \lambdapm}
%\item Also has a Bayesian interpretation.
\begin{enumerate}
\item{ 
$\lambdapm$ can be understood as an iterative procedure for improving APM estimates.

Note that one possible way to solve the convex program \eqref{my_cvx} is as the limiting solution of the following iterative procedure
as $K \to \infty$:
\begin{enumerate}
\item Fix a choice of $z_0^{(0)}$ and $z^{(0)}$.
\item For iterations $k=1,2, \ldots, K$
\begin{enumerate}[(I)]
\item Set $\theta^{(k)} \defn z_0^{(k-1)} \ones_\pdim + \Rmat z^{(k-1)}$.
\item {Define the function $h = h(\beta_{hca}, \beta)$
\begin{align*}
h \defn \weightedloss + \lambda_1 ||\beta ||_1 + \lambda_2 ||\beta - \theta^{(k)}||_2^2
\end{align*}
}
\item {Solve the following
Lasso problem
\begin{align}
\label{step1cvx}
\beta_{hca}^{(k)}, \beta^{(k)} = \arg\min_{\beta_{hca}, \beta}
h(\beta_{hca}, \beta)
\end{align}}
\item {
Use the returned solutions $\beta_{hca}^{(k)}$, $\beta^{(k)}$ to solve the Lasso
\begin{align}
\label{step2cvx}
z^{(k)}, z_0^{(k)} = \arg\min_{z, z_0}
\lambda_2 ||\beta^{(k)} - z_0 - \Rmat z||_2^2 +  \lambda_3 ||z||_1
\end{align}
}
\end{enumerate}
\end{enumerate}

At each iteration of the above procedure, we solve two ``coupled'' Lasso regression problems. At step \eqref{step1cvx} of the
iterative procedure (temporarily ignoring the term $\lambda_1 ||\beta ||_1$ in the definition of the function $h$), we are solving
a ridge regression problem that is quite similar to APM, but biases our solution $\beta^{(k)}$ towards values that are close to $\theta^{(k)}$.
The quantity $\theta^{(k)}$ can be interpreted as a Bayesian prior for our estimate $\beta^{(k)}$.
At step \eqref{step2cvx}, we use the computed estimate $\beta^{(k)}$ to produce a pair $(z^{(k)}, z_0^{(k)})$ which refines/improves the old
value $(z^{(k-1)}, z_0^{(k-1)})$. Thus, we have an improved Bayesian prior for the next step of the algorithm.

In summary, we iteratively use the box score matrix $\Rmat$ and a weighting vector $z$ to compute a prior $\theta \defn z_0 \ones_\pdim + \Rmat z$
that is used to improve our player ratings $\beta$, and then use those improved ratings
to improve the weighting vector $z$.
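Both subproblems \eqref{step1cvx} and \eqref{step2cvx} are Lasso-type programs, so a single $\ell_1$ proximal-gradient routine can serve as the building block of the alternation. A minimal sketch of such a routine (ISTA with soft-thresholding; an illustration, not the solver used in our experiments):

```python
import numpy as np

def soft_threshold(u, t):
    """Prox of t*||.||_1: shrink each coordinate toward zero by t."""
    return np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

def lasso_ista(A, b, lam, n_iter=2000):
    """Minimize ||Ax - b||_2^2 + lam*||x||_1 by proximal gradient (ISTA)."""
    step = 1.0 / (2 * np.linalg.norm(A, 2) ** 2)   # 1/L for the gradient 2*A^T(Ax - b)
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = 2 * A.T @ (A @ x - b)
        x = soft_threshold(x - step * grad, step * lam)
    return x
```

Each pass of the alternation would call such a routine twice: once on the (appropriately augmented) design for $(\beta_{hca}, \beta)$, and once on $[\ones_\pdim \;\; \Rmat]$ for $(z_0, z)$.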
}
\item {
One important difference between $\lambdapm$ and the APM approach is that it yields both a player
rating vector $\betalam$ and a box score weights vector $\zlam$. Thus for player $i$, we can compare the variable $\betalam_i$
to the variable $\Rmat_i^T \zlam + \betalamhca$ to understand how ``overrated'' or underrated player $i$ is relative to his box score
production (note that $\Rmat_i^T$ is the $i^{th}$ row of the matrix $\Rmat$). This is useful, since many players produce great box score stats but are less effective than those statistics indicate due to poor defense, selfish play, and other activities that reduce team competitiveness.
Antawn Jamison of the Cleveland Cavaliers, for example, is often derided by fans for ``empty stats'' that look good
but seem to make little difference to the outcome of games. In Section \ref{underrated_analysis}, we explore this aspect of $\lambdapm$ further. 
}

\end{enumerate}

\subsection{Challenges}
For $\lambdapm$ to be useful, we need to be able to select a good choice of the regularization parameters $\lambdavec$ quickly.
Cross validation \citep{Stone1974} is one standard technique in statistics for doing this.
We describe our cross-validation based heuristic for selecting $\lambdavec$ in Appendix \ref{appendix_cv}.

However, K-fold cross validation on $T$ different values of $\lambdavec$ means solving $T K$ different $\lambdapm$ problems, each
of which are convex programs of considerable size ($\numobs \approx 20000$, $\pdim \approx 450$, $\ddim \approx  20$).
Thus, it is necessary that for each fixed value of $\lambdavec$, $\lambdapm$ can be solved quickly.
Our exploration of different numerical algorithms for solving $\lambdapm$ is contained in Appendix \ref{appendix_cyclical}.
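To make the computational cost concrete, a sketch of the grid enumeration (the grid values below are illustrative, chosen as powers of two on the same scale as the $\lambdavec_{CV}$ values we report):

```python
from itertools import product

# Hypothetical grid: each lambda_j ranges over a few powers of two.
powers = (2.0 ** -10, 2.0 ** -7, 2.0 ** -5, 2.0 ** 0)
grid = list(product(powers, repeat=4))   # T = 4^4 candidate lambda vectors

K = 5                                    # number of folds
total_solves = K * len(grid)             # K*T lambdapm convex programs to solve
```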

Finally, our ultimate goal is to produce substantially better estimates than APM.
If it turns out that despite all the additional computational work $\lambdapm$ requires, there is little or no statistical improvement,
then $\lambdapm$ is not of much practical value.

In the next section, we evaluate the statistical performance of $\lambdapm$.


%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%5
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Experimental Results}
In this section, we discuss our experimental findings on the 2010-2011 NBA dataset.


\subsection{Predictive Power}
Applying the cross-validation based heuristic described in Appendix \ref{appendix_cv} to the first 820 games of the 2010-2011
season yields the regularization parameter
\begin{align*}
\lambdavec_{CV} = (2^{-7}, 2^{-5}, 2^{-10}, 2^{-10})^T.
\end{align*}

Armed with this choice of parameter, we can now compare APM to $\lambdapm$ on the final 410 games of the 2010-2011 NBA regular season. Each procedure produces a player rating vector $\betahat$, and we can use these ratings to predict the final margin of victory in the last 410 games of the regular season. Recall the definitions of $\hat{A}_i$, $A_i$ and $\hat{E_i}$ from Equation \eqref{eval_apm}. Figure \ref{fig2} is a histogram of the variable $\hat{E_i}$ over the final 410 games of the 2010-2011 season for each technique.

\begin{figure}
\centering
\includegraphics[scale=\histogramSize]{images/plot2.eps}
\caption{Comparison of Home Court Advantage estimate, APM estimate, and $\lambdapm$ estimate.}
\label{fig2}
\end{figure}

It is clear from Figure \ref{fig2} that $\lambdapm$ produces better estimates than APM: the histogram
of the $\lambdapm$ errors is ``spikier'' around the origin than that of the APM errors.

We can also study some of the empirical properties of the variable $\hat{E_i}$ for each approach. Table \ref{tab:hca_vs_apm}
summarizes the results. 

As Table \ref{tab:hca_vs_apm} indicates, $\lambdapm$ represents a substantial improvement on APM in nearly all of these statistical measures.
In particular
\begin{enumerate}
\item The fraction of games in which the wrong winner is guessed decreases from 34.15\% with APM to 27.07\% with \lambdapm.
\item The average absolute error in predicting the margin of victory decreases from 17.97 to 11.05.
\end{enumerate}

Comparing $\lambdapm$ to the HCA estimate, we
\begin{enumerate}
\item see an enormous improvement in the ability to predict the winning team (from a failure rate of $39.27\%$ to $27.07\%$), and
\item obtain a similar average absolute error in predicting the margin of victory, with $10.98$ for the HCA estimate and $11.05$ with \lambdapm.
\end{enumerate}

Overall, this suggests that $\lambdapm$ more accurately models the NBA than either the HCA estimate or APM.

\subsection{Comments on APM}
Although Figure \ref{fig2} and Table \ref{tab:hca_vs_apm} suggest that APM performs poorly, this does not necessarily mean that APM is useless.
Part of the problem with APM is that it is sensitive to outliers. An average player who plays only one minute of
the first 820 games, but whose team plays well during that minute, might receive a very high $\hat{\beta}_{i}$ from APM. If he then starts
playing major minutes later in the season, his (erroneously) high rating will cause large errors on those games.

Regularization procedures like $\lambdapm$ help to reduce this sensitivity, at great additional computational cost.
It is possible that replacing the quadratic loss function of APM \eqref{quadraticloss} with a less sensitive loss function such as the Huber function
\begin{align*}
L_{\text{huber}}(\beta, \beta_{hca}) &\defn \sum_{i=1}^\numobs \wsca_i \times \text{Huber}_\delta([\Yvec-\ones_\numobs \beta_{hca} - \Xmat\beta]_i)\\
\text{Huber}_\delta(x) &\defn \left\{
     \begin{array}{lr}
       \frac{1}{2}x^2 &: |x| \leq \delta\\
       \delta(|x|-\delta/2) &: \text{otherwise.}
     \end{array}
   \right.
\end{align*}
would reduce this sensitivity.
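A direct transcription of the Huber alternative above (a sketch; the toy inputs in the test are hypothetical):

```python
import numpy as np

def huber(x, delta):
    """Huber_delta: quadratic inside [-delta, delta], linear outside."""
    x = np.asarray(x, dtype=float)
    return np.where(np.abs(x) <= delta,
                    0.5 * x ** 2,
                    delta * (np.abs(x) - 0.5 * delta))

def huber_loss(beta, beta_hca, X, y, w, delta):
    """Weighted Huber analogue of the quadratic APM loss."""
    r = y - beta_hca - X @ beta
    return float(np.sum(w * huber(r, delta)))
```

Because the Huber penalty grows only linearly beyond $\delta$, a single outlying event influences the fit far less than under the quadratic loss.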


\subsection{Polynomials In $\Rmat$}
The box score matrix $\Rmat$ keeps track of raw statistics like rebounds, assists, and steals.
However, one might imagine augmenting this basic box score matrix with columns
that record products of raw statistics, such as rebounds $\times$ assists.

By capturing some of these combined statistics and using them in $\lambdapm$, the hope
is to more accurately model the value of multifaceted players. For example, part of the reason
a player like LeBron
James is considered so valuable is his versatility: on any given night he can give his team 20+ points, 10+ assists and 10+ rebounds.

We expand the matrix $\Rmat$ to include all pairwise products of the basic variables.
If $\Rmat$ is a $\pdim$ by $\ddim$ matrix, this leads to a $\pdim$ by $\ddim + \binom{\ddim}{2}$ matrix $\Rmatexp$.
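The pairwise expansion can be sketched as follows (a minimal numpy illustration; the helper name and toy matrix are hypothetical):

```python
import numpy as np
from itertools import combinations

def expand_pairwise(R):
    """Append every pairwise product of columns: d columns become d + C(d, 2)."""
    d = R.shape[1]
    cross = [R[:, i] * R[:, j] for i, j in combinations(range(d), 2)]
    return np.column_stack([R] + cross)

R = np.array([[1.0, 2.0, 3.0],
              [0.0, 1.0, 4.0]])
R2 = expand_pairwise(R)   # 3 columns -> 3 + C(3,2) = 6 columns
```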

Let us use the notations $\lambdapm(\Rmat)$ and $\lambdapm(\Rmat, 2)$ to denote $\lambdapm$ with the box score matrices
$\Rmat$ and $\Rmatexp$, respectively.
 
Applying the cross-validation based heuristic described in Appendix \ref{appendix_cv} to the first 820 games of the 2010-2011
season produces the regularization parameter $\lambdavec_{CV, 2} = (2^{-5}, 2^{0}, 2^{-6}, 2^{-10})^T$ for $\lambdapm(\Rmat, 2)$.
Armed with this choice of parameter for the expanded box score matrix $\Rmatexp$, we can now compare $\lambdapm$
with the basic box score matrix $\Rmat$ to $\lambdapm$ with the augmented matrix $\Rmatexp$ on the
final 410 games of the 2010-2011 NBA regular season.

Figure \ref{figorder2} demonstrates the result of this experiment, using the 2010-2011 NBA season.

\begin{figure}
\centering
\includegraphics[scale=\histogramSize]{images/plot3.eps}
\caption{Comparison of $\lambdapm(\Rmat)$  and $\lambdapm(\Rmat, 2)$.}
\label{figorder2}
\end{figure}

From Figure \ref{figorder2}, we see that the additional box score statistics improve performance: the histogram
of the $\lambdapm(\Rmat, 2)$ errors is ``spikier'' around the origin than that of the $\lambdapm(\Rmat)$ errors.

We can also study some of the empirical properties of the variable $\hat{E_i}$ for each approach. Table \ref{tab:hca_vs_apm}
summarizes the results. 

As Table \ref{tab:hca_vs_apm} indicates, $\lambdapm(\Rmat, 2)$ improves on the predictive power of simple $\lambdapm(\Rmat)$.
The fraction of games in which the wrong
winner is guessed decreases from $27.07\%$ to $26.83\%$, the average absolute error in predicting games
decreases from 11.05 to 9.37, and the statistics of the variable $\hat{E_i}$ are all comparable or improved.

\section{Interpretation of $\lambdapm$}
In the previous section, we evaluated the performance of $\lambdapm$ by testing its ability to predict the outcome of unseen games. In this section, we interpret the box score weights vector $\zlam$ and player rating vector $\betalam$ returned by $\lambdapm$.


\subsection{Box score weights produced by $\lambdapm$}
\label{bsweights_interpretation}
From the $\lambdapm$ regression, we can extract the box score weights $\zlam$ that tell us the relative importance of the different box
score statistics. Since we normalized the columns of the matrix $\Rmat$ to unit length before solving $\lambdapm$, a
rescaled version $\zlamrescaled$ turns out to be of more practical interest.

Table \ref{tab:bsweights} summarizes the results. We also display the relevant row of the box score matrix $\Rmat$ for LeBron James ($\Rmat_{\text{LeBron James}}^T$).

Examining this table, we see that LeBron James made two point shots at a rate of $.2174$/minute, and missed two point shots at a rate
of $.1760$/minute. The corresponding weightings from $\zlamrescaled$ are $52.955$ and $-44.15$, suggesting that overall
LeBron's rating from his two point shooting is $3.742$. In fact, from these weightings we can calculate that $\lambdapm$
believes that all players in the league must hit their two point shots at least $45.47\%$ of the time for their
rating from two point shooting to be non-negative.
Interestingly enough, a similar calculation reveals that three pointers need only be hit at an $18.85\%$ rate to break even. This is counterintuitive:
our basic instinct is that hitting two point shots $Q$ percent of the time should be equivalent to hitting
three point shots $\frac{2}{3} Q$ percent of the time.
However, three point shooting increases the amount of spacing on the floor and perhaps missed
three point shots are easier to rebound for the offensive team.
Turnovers appear to be extremely costly, with the corresponding entry of $\zlamrescaled$ equal to $-71.511$. Thus, LeBron's
turnover rate of $.0927$ turnovers/minute hurts his box score rating by $6.6291$ points.
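The break-even calculation above is simple enough to sketch in a few lines of code. The weights below are the $\zlamrescaled$ entries quoted in the text, and \texttt{break\_even\_rate} is our own illustrative helper, not part of the $\lambdapm$ implementation:

```python
# Break-even shooting percentage implied by a pair of box score weights:
# a player who makes a fraction p of his attempts gains p*w_make + (1-p)*w_miss
# per attempt, which is non-negative once p >= -w_miss / (w_make - w_miss).

def break_even_rate(w_make, w_miss):
    """Fraction of attempts that must be made for the net rating to be >= 0."""
    return -w_miss / (w_make - w_miss)

# Two point weights quoted in the text (entries of the rescaled weight vector).
print(round(100 * break_even_rate(52.955, -44.15), 2))   # -> 45.47, as in the text

# LeBron's per-minute value from two point shooting under these weights.
lebron_two_pt = 0.2174 * 52.955 + 0.1760 * (-44.15)
print(round(lebron_two_pt, 3))                           # -> 3.742, as in the text
```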

\DTLloaddb{weights1}{tables/weights_allgames.csv}

\begin{table}
  \caption{Box Score Weights}
\centering
%\begin{tabular*}{0.75\textwidth}
\begin{tabular*}{0.75\textwidth}{ | c | c | c | c | r | }
Statistic & Description	& $\zlam$ & $\zlamrescaled$ & $\Rmat_{\text{LeBron James}}^T$\\
\hline
\DTLforeach{weights1}{%
\description=Description,\type=Type,\surname=z,\score=zrescale,\lbj=LeBron}{%
\description & \type & \surname & \score  & \lbj \\}
\end{tabular*}
\label{tab:bsweights}
\end{table}

\subsection{Top 25 players in the league}
From the vector $\betalam$ we can extract a list of the top 25 players in the league, restricted to players who have played at least 400 minutes. Table \ref{tab:top25players} summarizes these results.

This list contains some of the most prominent star players in the league (LeBron James, Dwyane Wade, Dwight Howard, Blake Griffin), thus
agreeing with common basketball wisdom.

However, this ranking contradicts common basketball wisdom in the following ways:
\begin{enumerate}
\item The list noticeably omits Kobe Bryant, a player whom popular culture and common basketball wisdom consider one of the league's superstars. Yet $\lambdapm$ thinks very highly of Pau Gasol, Andrew Bynum and Lamar Odom, three of Kobe Bryant's teammates who are individually
credited far less for the success of the Lakers than Kobe is.
\item The list includes players like Nene Hilario, Tyson Chandler, Paul Millsap, Gerald Wallace and Amir Johnson, who are typically
not considered to be amongst the top 25 players in the league.
\end{enumerate}

\DTLloaddb{table1}{tables/top25players.csv}

\begin{table}
  \caption{Player Ratings}
\centering
\begin{tabular}{l*{3}{c}r}
              Player & $\betalam$ & Position\\ \hline
\DTLforeach{table1}{%
\firstname=Name,\surname=Rating,\score=Position}{%
\firstname & \surname & \score \\}
\end{tabular}
\label{tab:top25players}
\end{table}

%\DTLdisplaylongdb[%
%caption={Player Ratings},%
%label=
%contcaption={Player Ratings (continued)},%
%foot={\em Continued},%
%lastfoot={}%
%]{table1}

\subsection{Top 10 Most underrated and overrated players}
\label{underrated_analysis}
There are certain players in the NBA whose impact on the game seems to be far greater than their raw box score production suggests.
$\lambdapm$ allows us to identify these players and quantify their impact by measuring the discrepancy between their $\lambdapm$
rating (the $i^\text{th}$ entry of the vector $\betalam$) and their weighted box score rating (the $i^\text{th}$ entry of the vector $\Rmat \zlam + \zzerolam
\ones_\pdim$):
\begin{align*}
\text{Underrated}_i \defn \betalam_i - (\Rmat_i^T \zlam + \zzerolam).
\end{align*}
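In matrix form this discrepancy is a one-line computation. A minimal sketch with synthetic numbers (the names \texttt{beta}, \texttt{R}, \texttt{z} and \texttt{z0} stand in for $\betalam$, $\Rmat$, $\zlam$ and $\zzerolam$; the data are made up for illustration):

```python
import numpy as np

# Underrated_i = beta_i - (R_i^T z + z0): the gap between a player's
# lambda-PM rating and the rating his box score statistics alone predict.
def underrated(beta, R, z, z0):
    return beta - (R @ z + z0)

# Toy example: 3 players, 2 box score statistics (synthetic numbers).
beta = np.array([4.0, 1.0, -2.0])                  # lambda-PM ratings
R = np.array([[0.2, 0.10],
              [0.3, 0.05],
              [0.1, 0.20]])                        # per-minute box score rates
z = np.array([10.0, -5.0])                         # box score weights
z0 = 0.5                                           # intercept
u = underrated(beta, R, z, z0)                     # [2.0, -2.25, -2.5]
most_underrated_first = np.argsort(-u)             # player indices, descending in u
```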

Table \ref{tab:underratedplayers} lists the top 10 most underrated players in the league relative to their box score production.

For at least a few of these players, it is easy to understand why box scores alone do a poor job of capturing their impact:
\begin{itemize}
\item Dirk Nowitzki (underrated by 3.32 points) is an extremely dangerous offensive player who forces the opposing team to pay so much attention to him that his teammates have an easier time scoring.
\item Omer Asik (3 points) is an excellent defensive player who makes it difficult not only for the player he is guarding to score but for all other players on the opposing team.
\item Jared Jeffries (2.81 points) is an offensively inept but versatile defender with quick feet and hands who can guard any position from point guard up to center.
\end{itemize}

%\DTLloaddb[headers={Name, $\lambdapm$ Rating, Boxscore Rating, Underrated, Position}]{underrated}{underrated.csv}
\DTLloaddb{underrated}{tables/underrated.csv}

\begin{table}
  \caption{Underrated Players}
\centering
\begin{tabular}{l*{3}{c}r}
              Player & $\betalam$ & $\Rmat \zlam + \zzerolam \ones_\pdim$ & Underrated \\ \hline
\DTLforeach{underrated}{%
\firstname=Name,\surname=LambdaPM Rating,\score=LambdaPM BoxScore Rating,\scorea=Underrated}{%
\firstname & \surname & \score & \scorea\\}
\end{tabular}
\label{tab:underratedplayers}
\end{table}

%\DTLdisplaylongdb[%
%caption={Underrated Players},%
%label={tab:underratedplayers},%
%contcaption={Player Ratings (continued)},%
%foot={\em Continued},%
%lastfoot={}%
%]{underrated}

Similarly, we can examine which players impact the game much less than their box score production suggests. We define the quantity

\begin{align*}
\text{Overrated}_i \defn -\text{Underrated}_i.
\end{align*}

Table \ref{tab:overratedplayers} lists the top 10 most overrated players in the league relative to their box score production.

For at least a few of these players, it is easy to understand why box scores alone do a poor job of capturing their impact:
\begin{itemize}
\item Andris Biedrins (overrated by 3.33 points) is a severe liability offensively, due both to his inability to score outside of 5 feet from the basket and to his poor free throw shooting
(an abysmal 32.3\% in 2010-2011). This makes it much more difficult for his teammates to score, since his defender
can shift attention away from him and instead provide help elsewhere. Biedrins is also a liability defensively.
\item Goran Dragic, Ramon Sessions, and Aaron Brooks are all point guards with a scoring mentality. While a ``shoot-first'' PG is not necessarily
harmful to a team (both Derrick Rose and Russell Westbrook are successful scoring PGs), a point guard who does not do a good enough
job of setting up his teammates and creating easy scoring opportunities for them can really hurt the team's ability to score. Perhaps
this explains the poor ratings for these players.
\end{itemize}


\DTLloaddb{overrated}{tables/overrated.csv}

\begin{table}
  \caption{Overrated Players}
\centering
\begin{tabular}{l*{3}{c}r}
              Player & $\betalam$ & $\Rmat \zlam + \zzerolam \ones_\pdim$ & Overrated\\ \hline
\DTLforeach{overrated}{%
\firstname=Name,\surname=LambdaPM Rating,\score=LambdaPM BoxScore Rating,\scorea=Overrated}{%
\firstname & \surname & \score & \scorea\\}
\end{tabular}
\label{tab:overratedplayers}
\end{table}



%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Robustness of Results}
How sensitive is the $\lambdapm$ algorithm to our choice of training on the first 820 games? Does
the performance relative to the HCA estimate and APM estimate degrade if the algorithms are trained
only on the first 205 games, for example?

To evaluate this, we train all of the algorithms (HCA, APM, $\lambdapm$, $\lambdapm(\Rmat, 2)$) on the first 205 games of the season and then evaluate predictive power on the remaining 1025 games. We use the same regularization parameters for each method as described in the previous sections. Table \ref{tab:robust1} summarizes the results of this experiment. Observe that $\lambdapm$ and $\lambdapm(\Rmat, 2)$ both dramatically improve on the simple HCA estimate and the APM estimate.

Table \ref{tab:robust2} summarizes the results of repeating this experiment for the first 410 games.

\DTLloaddb{robust205}{tables/robust205.csv}

\begin{table}
  \caption{Robustness Experiment, First 205 Games}
\centering
\begin{tabular}{l*{3}{c}r}
              Metric & HCA & APM & $\lambdapm$ & $\lambdapm(\Rmat, 2)$ \\
\hline
\DTLforeach{robust205}{%
\firstname=metric,\surname=hca,\score=apm,\pore=one,\bore=two}{%
\firstname & \surname & \score & \pore & \bore \\}
\end{tabular}
\label{tab:robust1}
\end{table}

\DTLloaddb{robust410}{tables/robust410.csv}

\begin{table}
  \caption{Robustness Experiment, First 410 Games}
\centering
\begin{tabular}{l*{3}{c}r}
              Metric & HCA & APM & $\lambdapm$ & $\lambdapm(\Rmat, 2)$ \\
\hline
\DTLforeach{robust410}{%
\firstname=metric,\surname=hca,\score=apm,\pore=one,\bore=two}{%
\firstname & \surname & \score & \pore & \bore \\}
\end{tabular}
\label{tab:robust2}
\end{table}


%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Kernel Ridge Regression}
Both the APM and $\lambdapm$ models assume that the NBA satisfies the following linear relationship:
\begin{align*}
\Ysca_i = \betastar_{hca} + \Xmat_i^{T} \betastar + \epsilon_i.
\end{align*}

This model is in certain ways too simple. If two players have very high chemistry with each other, then perhaps their cumulative impact
on the game will be more than just the sum of their player ratings.

However, a linear regression model like APM or $\lambdapm$ can never capture these interactions.

Instead of the above linear model, we can model the NBA as

\begin{align*}
\Ysca_i = \tilde{\betastar}_{hca} + \Phi(\Xmat_i)^T \tilde{\betastar} + \epsilon_i,
\end{align*}

where $\Phi$ is a function mapping the $\pdim$-dimensional vector $\Xmat_i$ to $\pdimstar$-dimensional space.

Observe that
\begin{enumerate}
\item $\tilde{\betastar}$ is a length $\pdimstar$ vector.
\item The model is linear in $\Phi(\Xmat_i)$ but non-linear in $\Xmat_i$.
\end{enumerate}
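To get a feel for how quickly $\pdimstar$ can grow, consider the order-2 polynomial kernel used later in this section: its explicit feature map contains one constant term, $\pdim$ linear terms, and $\pdim(\pdim+1)/2$ degree-two monomials. A quick sketch of the count (illustrative only):

```python
# Dimension of the explicit feature map for the order-2 polynomial kernel
# (1 + <x, y>)^2: one constant term, p linear terms, and p*(p+1)/2
# degree-two monomials x_i * x_j with i <= j.
def poly2_feature_dim(p):
    return 1 + p + p * (p + 1) // 2

print(poly2_feature_dim(450))   # on the order of 1e5 for p ~ 450
```

This is the quadratic growth behind the $\pdimstar \approx 101000$ figure quoted later for $\pdim \approx 450$.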

We could then solve the convex optimization problem
\begin{align}
\label{krr_problem}
\min \frac{1}{\sum{\wsca_i}} \sum_{i=1}^\numobs \wsca_i (\Ysca_i - \betahca - \Phi(\Xmat_i)^{T} \beta)^2 + \lambda ||\beta||_2^2
\end{align}


Problem \eqref{krr_problem} is a ridge regression problem, and can be solved in closed form with complexity
$O(\max{(\numobs, \pdimstar)}^3)$. However, we will typically be dealing with cases where $\Phi$ is a high-dimensional (or even infinite-dimensional) map, and so $\pdimstar$ is extremely large.

Fortunately, if we restrict our choice of $\Phi$ to functions $\Phi(x)$
for which there exists a positive semi-definite kernel function $K(x,y) = \Phi(x)^T\Phi(y)$,
then we can use the representer theorem and the kernel trick \citep{Hastie} to solve Problem \eqref{krr_problem} more efficiently,
with reduced complexity $O(\min{(\numobs, \pdimstar)}^3)$. This is a substantial savings.

For the purposes of our experiment, we choose the polynomial kernel of order $m=2$, which sets $K(x,y) = (1+\langle x,y\rangle)^m$. For
computational reasons, we further prune our training set by selecting only observations with $W_i > 18$.
Keep in mind that $\Phi(x)$ is implicitly specified once a kernel $K(x,y)$ is chosen.
Cross-validation on the parameter $\lambda$ in Problem \eqref{krr_problem} over a grid of values of the form $\lambda = 2^k$
yields an estimated regularization parameter $\hat{\lambda} = 2^{-1}$.

Figure \ref{fig4} and Table \ref{tab:hca_vs_apm} summarize the results of this experiment, using the 2010-2011 NBA season. We compare
APM and kernel ridge regression (KRR) with the regularization parameter chosen via cross-validation.

Comparing APM and KRR, we see that
\begin{enumerate}
\item APM is better able to correctly predict the winner of games, with a failure rate of $34.15\%$ for APM versus $35.61\%$ for KRR.
\item APM does a much worse job predicting the final margin of victory, with an average absolute error of 17.97 for APM versus 10.71 for KRR.
\end{enumerate}

This improvement in predicting final margin of victory suggests that there is value
in the higher-order player interactions that KRR captures, and it would be worthwhile to capture these interactions in the $\lambdapm$
model.

\begin{figure}
\centering
\includegraphics[scale=\histogramSize]{images/plot4.eps}
\caption{Comparison of the $\lambdapm(\Rmat)$ estimate and the KRR estimate. $K(x,y) = (1+\langle x,y\rangle)^2$, pruned at $W_i > 18$.}
\label{fig4}
\end{figure}

Concretely, we could try to incorporate these higher-order interactions into the $\lambdapm$ algorithm
by replacing each $\Xmat_i$ with $\Phi(\Xmat_i)$.

However, for several reasons, it isn't clear how exactly to do this:
\begin{enumerate}
\item We would like to utilize the kernel trick approach that worked so well for KRR.
Unfortunately, since there is no representer theorem for $\ell_1$-penalization \citep{Aronszajn50}, we cannot use the kernel trick.
\item Rather than using the kernel trick, we can simply directly replace $\Xmat_i$ with $\Phi(\Xmat_i)$. Unfortunately, when $\pdimstar$ is large
relative to $\pdim$, this greatly increases the computational complexity of $\lambdapm$.
For example, if we let $\Phi(\Xmat_i)$ be a second order polynomial expansion, then since $\pdim \approx 450$, we have $\pdimstar \approx 101000$,
increasing the size of the $\lambdapm$ problem from $(\numobs, \pdim, \ddim) = (20000, 450, 20)$ to $(\numobs, \pdim, \ddim) = (20000, 101000, 20)$.
Thus, a naive and direct use of expansions like $\Phi$ is not practical.
\item The intuition behind $\lambdapm$ is that each $\Ysca_i \approx \Xmat_i^T \betastar$ and that $\betastar_j \approx \Rmat_j^T \zstar$.
However, if $\Ysca_i \approx \Phi(\Xmat_i)^T\tilde{\betastar}$, where $\tilde{\betastar}$ is of length $\pdimstar$,
then it isn't at all clear what this $\tilde{\betastar}$ variable should be approximated by. We can no longer simply regress 
the $\pdimstar$ vector $\tilde{\betastar}$ onto the $\pdim \times \ddim$ matrix $\Rmat$, since the dimensions no longer match.
\end{enumerate}

\section{Discussion}
We have introduced $\lambdapm$, a powerful new statistical inference procedure for the NBA,
which contains APM as a special case. We found a fast, iterative numerical algorithm for solving the convex optimization problem
defining $\lambdapm$. We then utilize cross-validation to perform parameter selection. We compare the statistical performance
of our approach to APM and find that the performance is dramatically improved. We also compare APM to kernel
regression and discover that KRR outpeforms APM in one important metric.

This suggests that a variant of $\lambdapm$ incorporating not only box score information but also the non-linear player interactions of KRR could
perform even better. As discussed in the previous section, it isn't clear how exactly to do this; perhaps the $\lambdapm$ framework is too rigid. This remains an interesting open question.

%\subsection{Future Work: Possible Solution: Multiple Kernel Learning}
%Rich literature on MKL both theory (Lanckriet et al., 2004, Bach et al., 2005, Srebro and Ben-David, 2006)
%    and real-world applications (Fletcher et al. 2010).
%Basic idea is to treat the Kernel matrix as an optimization variable.
%KRR dual looks like:

%F(K_{\regparn}) \defn \max_{\alpha} {2 <\Y, \alpha> - \alpha^T K_{\regparn} \alpha} \\
%\text{ where } K_{\regparn} \defn I_{\numobs} + \frac{1}{{\regparn}} K
%\end{align*}
%    \item $F(K_{\regparn})$ is a convex function. Optimize it!
%    \item We will specialize to conic combinations
%\begin{align*}
%K = \sum_{i=1}^{q} \theta_i K_i, \theta_i \geq 0
%\end{align*}
%    \item And further assume that $\sum_i \theta_i =1$
%\ei
%        

%How do we then have the best of both worlds? Possible solution:
%\benum
%    \item For each of the $d$ boxscore vectors $\Rmat_j$, construct a kernel $K_j(\Xmat_i)$ (e.g., a rebounding kernel.)
%    \item View kernels as hypotheses to be combined/selected.
%    \item Run MKL on those kernels.
%\eenum

%Conceptual Advantages:
%    Can take advantage of closure properties (conical combination, products, exponentiation, etc) of kernels.
%    Less constrained by $\lambda PM$ framework, but should be able to get most of the benefits.


\appendix
%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Cross Validation}
\label{appendix_cv}
%%%%%%%%%%%%%%%%%%%%%%%%%%%
We implemented the cyclical coordinate descent solver for $\lambdapm$ described in Appendix \ref{appendix_cyclical}
as well as a K-fold cross validation procedure and ran it on the first 820 games of the 2010-2011 NBA season.
Experimentally we determined that the best value of $\lambdavec$ is
\begin{align}
\label{lambdastar}
\lambdavec_{CV} = (2^{-7}, 2^{-5}, 2^{-10}, 2^{-10})^T
\end{align}

Our procedure for selecting this value was to start from the vector $\vec{\lambda_0} = (1,1,1,1)^T$ and then,
for each of the four components $\lambda_1$, $\lambda_2$, $\lambda_3$ and $\lambda_4$, iterate over the following sequences
of values:
\begin{align*}
\vec{\lambda}_{\text{first}} = {\{ (2^{-k}, 1, 1, 1)^T\}}_{k=-20}^{20} \\
\vec{\lambda}_{\text{second}} = {\{ (1, 2^{-k}, 1, 1)^T\}}_{k=-20}^{20} \\
\vec{\lambda}_{\text{third}} = {\{ (1, 1, 2^{-k}, 1)^T\}}_{k=-20}^{20} \\
\vec{\lambda}_{\text{fourth}} = {\{ (1, 1, 1, 2^{-k})^T\}}_{k=-20}^{20}
\end{align*}

where the largest grid value $2^{k_{\max}} = 2^{20}$ is chosen large enough to drive the corresponding optimization variables all to zero.
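This coordinate-by-coordinate search can be sketched as follows. The \texttt{cv\_error} argument is a hypothetical stand-in for the K-fold cross-validation error of the $\lambdapm$ solver; the toy check at the end plants a known minimizer on the grid:

```python
import numpy as np

def coordinate_grid_search(cv_error, k_range=range(-20, 21)):
    """Tune a 4-vector of regularization parameters one coordinate at a time.

    cv_error: hypothetical stand-in for the K-fold lambda-PM cross-validation
    error, mapping a length-4 lambda vector to a scalar score.
    """
    lam = np.ones(4)                       # start from lambda_0 = (1, 1, 1, 1)^T
    for i in range(4):
        candidates = []
        for k in k_range:
            trial = lam.copy()
            trial[i] = 2.0 ** (-k)         # exponential grid in coordinate i
            candidates.append((cv_error(trial), trial))
        lam = min(candidates, key=lambda c: c[0])[1]
    return lam

# Sanity check with a known minimizer placed on the grid.
target = np.array([2.0**-7, 2.0**-5, 2.0**-10, 2.0**-10])
best = coordinate_grid_search(lambda v: float(np.sum((v - target) ** 2)))
```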
%For each value $\lambda_i$, we get the following plot:

%\begin{figure}[h]
%\label{fig_cv_heuristic}
%\caption{25 fold CV, exponential grid in $\lambda_i$, all other components fixed. Best value roughly $(2^{15}, 2^{11}, 2^{5}, 1)^T$}
%\centering
%\includegraphics{order_1/25foldcv.eps}
%\end{figure}
%
%Again, note that Figure \ref{fig_cv_heuristic} is generated with $\vec{\lambda_0} = (1,1,1,1)^T$. We can take the best point on each
%plot to obtain the parameter vector $\vec{\lambda_1} = (2^{15},2^{11},2^5,1)^T$, and re-run to get similar plots.

We can use this same heuristic for the expanded box score matrix $\Rmatexp$.
This yields
\begin{align*}
\lambdavec_{CV, 2} = (2^{-5}, 2^{0}, 2^{-6}, 2^{-10})^T.
\end{align*}


%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Algorithms for solving $\lambdapm$ Problems}
There are a variety of techniques for solving the convex program \eqref{my_cvx}. In Appendix \ref{appendix_cyclical}, we discuss
Cyclical Coordinate Descent (CCD), the technique we ultimately found to be the fastest.

However, before settling on CCD, we experimented with the following approaches:
\benum
\item Interior-point methods \citep{Boyd02}: We can rewrite the $\lambdapm$ problem as a convex quadratic program with linear inequality constraints,
and then use interior-point methods to handle the constraints. Unfortunately, IP methods are slow for Lasso-type problems like $\lambdapm$. Furthermore, interior-point methods cannot take advantage of the solution for one choice $\vec{\lambda_a}$
to ``warm-start'' the solution for another choice $\vec{\lambda_b}$.
\item LARS \citep{efron04}: The $\lambdapm$ objective function can be rewritten to look like a Lasso. Therefore, the LARS algorithm for solving Lasso problems can also be applied to $\lambdapm$. Moreover, each run of LARS yields the entire solution path
through the space in $\lambdavec$ parameterized by $\vec{d}_{lars} =(t_1, 0, t_3, 0)^T$. Unfortunately, this LARS-based approach was also extremely slow, taking roughly a minute to obtain a solution path. Since our cross-validation procedure samples at most a few points
from each path, LARS ultimately isn't fast enough.
\item Smooth approximation of $\ell_1$ terms
\benum
\item Iteratively reweighted least squares (IRLS) \citep{huber:1974}: IRLS approximates the $\ell_1$ penalties in the $\lambdapm$ objective function with weighted
quadratic terms, then solves a sequence of least squares problems whose solutions hopefully converge to that of the original optimization problem. Unfortunately, we found
IRLS to require too many iterations to converge to an accurate solution.
\item Approximate $|x|$ with $\sqrt{x^2+\epsilon}$ \citep{LeeLeeAbbNg06}: In Appendix \ref{approx_ell_1}, we discuss a particular scheme for approximating the $\ell_1$ constraints with a smooth function. This performed poorly, requiring too many iterations to converge.
\eenum
\item Gradient approaches
\benum
\item Subgradient method \citep{shor1985}: The subgradient method is an iterative procedure that at each step calculates the subgradient of the $\lambdapm$ objective function in order to refine the solution. Unfortunately, although the subgradient method is known to converge, the theoretical convergence
rate is slow. We found that in practice it was also slow, requiring too many iterations to converge.
\item Nesterov's accelerated gradient \citep{nesterov2004}: This is a variant of the ordinary subgradient method with better theoretical guarantees. We noticed
improved empirical performance.
\item Nesterov's proximal gradient method \citep{nesterov2004}: The $\lambdapm$ objective function can be separated into
smooth and non-smooth components, and thus Nesterov's proximal gradient method can be applied. This algorithm gave the best
performance of all the gradient-based algorithms.
\eenum
\eenum


\section{Cyclic Coordinate Descent}
\label{appendix_cyclical}
Ultimately, we found an algorithm that outperformed all of the approaches above: cyclical coordinate descent (CCD) \citep{friedman_cyclical, wu_cyclical}.
We ran Hastie's R implementation of CCD \citep{glmnet} on the pure lasso problem (which corresponds to setting $\lambda_2$, $\lambda_3$ and $\lambda_4$ to zero). The algorithm was extremely fast. This motivated us to derive and implement a CCD algorithm for $\lambdapm$, first in Ruby/C++ and then in Matlab/C++.

\subsection{Cyclic Coordinate Descent Derivation}
We want to minimize the following function:
\begin{align*}
\underbrace{\weightedloss}_{\text{APM}} + {{\underbrace{\lambda_1 ||\beta ||_1}_{\text{Sparse player ratings}}}} + 
{{\underbrace{\lambda_2 ||\beta - z_0 \ones_\pdim - \Rmat z||_2^2}_{\text{Box score}}}} + \underbrace{\lambda_3 ||z||_1}_{\text{Sparse box score weights}} + 
{{\underbrace{\lambda_4 \beta^{T} v v^{T} \beta}_{\text{Centering}}}}
\end{align*}

Taking derivatives with respect to each variable and considering special cases, we obtain the following update equations:
\begin{align*}
 \beta_{hca} &= \frac{\ones_{\numobs}^T \text{Diag}(w) \left[\Yvec - \Xmat \beta \right]}{\ones_{\numobs}^T w} \\
z_0 &=  \frac{1}{\pdim} \ones_\pdim^T \left[\beta - \Rmat z \right] & z_i &= \frac{\shrink( \Rmat_i^T [\beta - \ones_\pdim z_0 - \sum_{j \neq i} \Rmat_j z_j], \frac{\lambda_3}{2\lambda_2})}{\Rmat_i^T \Rmat_i}
\end{align*}
\begin{align*}
\beta_i &= \frac{\shrink(M_{ii}[\beta_i ] + \natbasis_i^T[H-M\beta], \frac{\lambda_1}{2})}{M_{ii}} 
\end{align*}

where
\begin{align*}
M &\defn \Xmat^T \text{Diag}(w) \Xmat + \lambda_2 {\eyemat}_\pdim + \lambda_4 v v^T \\
H &\defn \lambda_2 (z_0 \ones_\pdim + \Rmat z) + \Xmat^T \text{Diag}(w) (\Yvec - \ones_\numobs \beta_{hca}) \\
\shrink(a,b) &\defn \sign{(a)} (|a| - b)_{+}
\end{align*}
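The soft-thresholding operator $\shrink(a, b)$ at the heart of these updates is easy to sanity-check numerically: it is the minimizer of the scalar problem $\min_x \frac{1}{2}(x-a)^2 + b|x|$, which is exactly the subproblem behind each $\beta_i$ and $z_i$ update. A standalone sketch:

```python
import numpy as np

def shrink(a, b):
    """Soft-thresholding: sign(a) * max(|a| - b, 0)."""
    return np.sign(a) * np.maximum(np.abs(a) - b, 0.0)

# shrink(a, b) solves min_x 0.5*(x - a)**2 + b*|x|; check against a grid search.
a, b = 3.0, 1.0
xs = np.linspace(-10, 10, 2001)
numerical = xs[np.argmin(0.5 * (xs - a) ** 2 + b * np.abs(xs))]
print(shrink(a, b), numerical)   # both ~2.0
```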

\section{Approximating $\ell_1$ constraint with smooth function}
\label{approx_ell_1}
This procedure simply approximates the $\ell_1$ norm
\begin{align*}
||\beta||_1 \defn \sum_i |\beta_i|
\end{align*}
with the function
\begin{align*}
\theta(\beta) \defn \sum_{i} \sqrt{\beta_i^2 +\epsilon}.
\end{align*}
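The quality of this approximation is easy to check numerically: $\sqrt{x^2+\epsilon} \to |x|$ pointwise as $\epsilon \to 0$, and the worst-case gap, attained at $x=0$, is exactly $\sqrt{\epsilon}$. A quick sketch:

```python
import numpy as np

# Smooth surrogate for |x|: the gap sqrt(x^2 + eps) - |x| is largest at
# x = 0, where it equals sqrt(eps), and decays as |x| grows.
def smooth_abs(x, eps):
    return np.sqrt(x * x + eps)

x = np.linspace(-5.0, 5.0, 1001)
for eps in (1e-2, 1e-4, 1e-6):
    worst = float(np.max(np.abs(smooth_abs(x, eps) - np.abs(x))))
    print(eps, worst)   # worst-case gap shrinks like sqrt(eps)
```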

We can then approximate the $\lambdapm$ objective function with the function
$\tilde{g} = \tilde{g}(\beta_{hca}, \beta, z_0, z; \lambdavec)$ defined
as:

\begin{align*}
\tilde{g} \defn \underbrace{\weightedloss}_{\text{APM}} + {{\underbrace{\lambda_1 \theta(\beta )}_{\text{Approximate sparse player ratings}}}} + \\
{{\underbrace{\lambda_2 ||\beta - z_0 - \Rmat z||_2^2}_{\text{Box score}}}} + \underbrace{\lambda_3 \theta(z)}_{\text{Approximate sparse box score weights}} + 
{{\underbrace{\lambda_4 \beta^{T} v v^{T} \beta}_{\text{Centering}}}}
\end{align*}


We then solve the convex program

\begin{align}
\label{cvx_approx}
\beta_{hca}, \beta, z_0, z = \arg\min \tilde{g}(\beta_{hca}, \beta, z_0, z; \lambdavec)
\end{align}

The convex program \eqref{cvx_approx} has a smooth objective function, and thus can be solved using Newton's method. If $\epsilon$ is small enough, then the solution to \eqref{cvx_approx} will be very close to the solution of $\lambdapm$.
Furthermore, because Newton's method is iterative and uses an initialization, we can use the solution for one choice of $\lambdavec$ to warm-start another problem. This is a very useful property, since we will perform cross-validation.
Note that Newton's method requires solving the linear system $H v = g$ at each iteration, where $H$ is the Hessian at the current iterate and $g$ its
gradient. This takes roughly order $\pdim^2$ computations per iteration (we avoid the $\pdim^3$ cost of a naive equation solver
by using a Cholesky matrix decomposition).

However, the above approach has a significant problem, one unavoidable for any approach that approximates
a non-smooth function like the $\ell_1$ norm with a smooth one: the closer $\epsilon$ gets to zero, the worse conditioned the Hessian becomes.

Newton's method can be thought of as having two phases of behavior: a ``damped'' phase, in which the quadratic approximation of the objective function is poor and
the stepsizes chosen by, say, a backtracking rule are small, and a second phase, in which the quadratic approximation is excellent,
the stepsize is 1, and convergence is rapid. When the Hessian has a bad (large) condition number, Newton's method spends
a long time in the damped phase, and thus performs poorly.

We deal with this ill-conditioning using the following heuristic, inspired by interior-point methods:
\begin{enumerate}
\item Fix a target value $\epsilon_{target}$ for which we want to solve the above convex program.
\item Start solving the convex program \eqref{cvx_approx} using Newton's method.
\item If the solver remains in the damped phase for too long, the current problem is too hard. Use the current iterate to warm-start
a Newton solver for a new convex program with $\epsilon_{new}=10\epsilon$. Then take the solution for $\epsilon_{new}$ and use it to warm-start
the current problem.
\end{enumerate}
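The spirit of the heuristic can be illustrated on a one-dimensional toy problem. This sketch anneals $\epsilon$ from a large value down to the target, warm-starting each solve from the previous one; the actual heuristic triggers the increase of $\epsilon$ adaptively when the solver stalls, and the \texttt{newton\_1d} solver below uses undamped Newton steps, which suffice for this well-conditioned scalar instance:

```python
import numpy as np

# Toy 1-D instance of the smoothed problem:
#   min_x 0.5*(x - a)^2 + lam * sqrt(x^2 + eps),
# whose eps -> 0 limit is soft-thresholding: sign(a) * max(|a| - lam, 0).
def newton_1d(a, lam, eps, x0, iters=50):
    x = x0
    for _ in range(iters):
        g = (x - a) + lam * x / np.sqrt(x * x + eps)     # gradient
        h = 1.0 + lam * eps / (x * x + eps) ** 1.5       # Hessian (always > 0)
        x -= g / h                                       # undamped Newton step
    return x

def continuation(a, lam, eps_target=1e-10):
    x, eps = 0.0, 1.0
    while eps >= eps_target:            # solve the easy (large-eps) problems first,
        x = newton_1d(a, lam, eps, x)   # warm-starting each harder one
        eps /= 10.0
    return x

print(continuation(3.0, 1.0))   # ~2.0, the soft-thresholding solution
```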

This heuristic dramatically speeds up convergence in practice, since we avoid spending many useless iterations in
the ill-conditioned damped Newton phase.
\bibliographystyle{plainnat}
\bibliography{new_dapo}
 
\end{document}
