\section{Authorship Classification}

In our authorship classification problem, we assume a set of known programmers who cooperate on a project. Each programmer works at the function level, so every function has a single associated author.
Note that under this assumption a binary executable or library can still have multiple authors, which is the common case in real-world software development.
Formally, we are given an author set $\mathcal{Y}$ of $n$ authors and
a training set $\mathcal{M}$ of $m$ functions with author labels $y_1,y_2,\cdots,y_m\in\mathcal{Y}$.
The task is to learn a classifier that, given a new function written by some author $y\in\mathcal{Y}$,
predicts the most likely author.

We use the binary code features described in Section 3 to represent the training functions. For each feature $\phi$, define an indicator function:

\begin{center}
$I_{\phi}(f,p)=\left\{\begin{array}{ccc}
            0 & \mbox{for} & OCCUR_f(\phi, p)=false \\
            1 & \mbox{for} & OCCUR_f(\phi,p)=true
	    \end{array}\right.$
\end{center}

In the above definition, $OCCUR_f(\phi,p)$ is true when feature $\phi$ occurs at position $p$ in function $f$. Based on these indicator functions, we define the feature vector of function $f$ as

\begin{center}
$x = \left(\begin{array}{c}
           \sum_{p\in P_1} I_{\phi_1}(f,p) \\
           \sum_{p\in P_2} I_{\phi_2}(f,p) \\
           \vdots                        \\
           \sum_{p\in P_t} I_{\phi_t}(f,p)
           \end{array}\right)$ 
\end{center}
     
where $t$ is the total number of features and $P_i$ is the set of valid positions in function $f$ for feature $\phi_i$.
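The construction above can be sketched in Python. The extraction step producing (feature, position) pairs is hypothetical here; the point is that each entry $x_i$ of the vector counts how often feature $\phi_i$ occurs across its valid positions in $f$:

```python
from collections import defaultdict

def feature_vector(function_features, feature_list):
    """Build the count vector x for one function.

    function_features: list of (feature, position) pairs extracted from a
    disassembled function (hypothetical extraction output).
    feature_list: ordered list of the t candidate features phi_1 .. phi_t.
    """
    counts = defaultdict(int)
    for phi, pos in function_features:
        counts[phi] += 1  # I_phi(f, p) = 1 when phi occurs at position p
    # x_i = sum over valid positions p in P_i of I_{phi_i}(f, p)
    return [counts[phi] for phi in feature_list]

# Toy example: a function containing two occurrences of "push ebp"
feats = [("push ebp", 0), ("mov ebp,esp", 1), ("push ebp", 7)]
x = feature_vector(feats, ["push ebp", "mov ebp,esp", "ret"])
# x == [2, 1, 0]
```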

One challenge is the huge number of candidate features (more than 300,000 in our evaluation data set).
Using all of them would not only slow down training and prediction but also risk overfitting the training data, leaving the learned model unable to generalize to new instances.
It is therefore necessary to perform feature selection to reduce the number of features.
We adopt a simple yet effective strategy: rank the features by the mutual information between each feature and the author labels, and keep only the top-ranked ones.
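As an illustration of this ranking criterion, the following sketch computes empirical mutual information between a binarized feature column and the author labels; the toy data and feature names are made up for the example:

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Empirical mutual information I(X;Y), in nats, between one
    feature column xs and the author labels ys."""
    n = len(xs)
    px = Counter(xs)
    py = Counter(ys)
    pxy = Counter(zip(xs, ys))
    mi = 0.0
    for (x, y), c in pxy.items():
        p_xy = c / n
        # p(x,y) * log( p(x,y) / (p(x) * p(y)) )
        mi += p_xy * math.log(p_xy * n * n / (px[x] * py[y]))
    return mi

# A feature occurring only in author A's functions outranks noise.
labels     = ["A", "A", "B", "B"]
feat_good  = [1, 1, 0, 0]   # perfectly discriminates the authors
feat_noise = [1, 0, 1, 0]   # independent of the author labels
ranked = sorted([("good", feat_good), ("noise", feat_noise)],
                key=lambda kv: mutual_information(kv[1], labels),
                reverse=True)
# ranked[0][0] == "good"
```

In practice one would compute this score for every candidate feature and retain the highest-scoring subset.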
We use the LIBLINEAR support vector machine implementation [3] to train our model and make predictions.
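The paper uses LIBLINEAR directly; as a minimal sketch, scikit-learn's \texttt{LinearSVC} wraps the same library and accepts the count vectors defined above. The feature vectors and author names below are invented for illustration:

```python
from sklearn.svm import LinearSVC

# Toy training set: rows are function feature vectors x, labels are authors.
X_train = [[2, 1, 0], [3, 1, 0], [0, 0, 4], [0, 1, 5]]
y_train = ["alice", "alice", "bob", "bob"]

clf = LinearSVC()  # LIBLINEAR-backed linear SVM, one-vs-rest multiclass
clf.fit(X_train, y_train)

# Predict the most likely author of a previously unseen function.
pred = clf.predict([[2, 0, 0]])
# pred[0] == "alice"
```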

