
Learning science techniques use \textit{student models} to estimate a student's understanding of the subject matter.
Such techniques require a mapping of items to skills~\cite{corbett1994knowledge}.
Other authors refer to skills as topic skills~\cite{desmarais11}, knowledge components, or factors~\cite{cen_factor_analysis}.
Knowledge Tracing~\cite{corbett1994knowledge}, the de facto standard for student modeling from longitudinal data,
uses one Hidden Markov Model (HMM) per skill to model a student's knowledge of that skill as a latent variable.
In this section we describe \methodname, a method to refine item-to-skill mappings automatically.
\methodname identifies skills that need refinement and splits each such skill into an easy skill and a hard skill when the item difficulty variance within the skill is high enough.






\begin{algorithm}
\caption{\methodname Algorithm \label{alg:ascend}}
   \begin{algorithmic}[1]
    \Require{A longitudinal dataset (\{$X_{ks}$\}, \{$Y_{ks}$\}) with the binary correctness $Y_{ks}$ of student $s$ at practice opportunity $X_{ks}$ for skill $k$; original skills $\mathbf{K}$; item difficulty point estimates $\mathbf{I}$ with corresponding standard errors $\mathbf{E}$; an original item-to-skill mapping $\mathbf{Q}$.}\\
     \Function{Ascend}{(\{$X_{ks}$\}, \{$Y_{ks}$\}), $\mathbf{K}$, $\mathbf{I}$, $\mathbf{E}$, $\mathbf{Q}$}
    \State $\rhd{\text{  Identify skills with non-decreasing learning curves:}}$
   	\For{each skill $k \leftarrow 1$ to |$\mathbf{K}$|}
   		\State $\rhd{\text{    $M_{k}$ measures the monotonicity of the curve}}$
  		\State \text{$M_{k} \leftarrow \text{rank correlation of (\{$X_{ks}$\}, \{$Y_{ks}$\})}$}
  		\If{\textbf{not} ($M_{k} < 0$ \textbf{and} $p$-value $< 0.05$)}
  			\State $\mathbf{B} \leftarrow \mathbf{B} \cup \{k\} \hspace{8pt} \rhd{\text{skills with no or little learning}}$
  		\EndIf
  	\EndFor
  \State $\rhd{\text{  Cluster each ill-defined skill's items:}}$
    	\For{each skill $b \leftarrow 1$ to |$\mathbf{B}$|}
	    	\For{each pair of items ($i$, $j$) from skill $b$}
   			\State $\rhd{\text{    $D_{ij}^{b}$ measures the distance between items $i$ and $j$}}$
  			\State \text{$D_{ij}^{b} \leftarrow \frac{|I_{i} - I_{j}|}{2\max(E_{i}, E_{j})}$}
		\EndFor
		\If{$\max(\mathbf{D}^{b}) > 1$}
			\State $\rhd{\text{    Use the two most distant items as initial}}$
			\State \text{\hspace{10pt}centroids; one iteration of simplified}
			\State \text{\hspace{10pt}$K$-Means clusters the remaining items:}
		 	\State $g_{easy}, g_{hard} \leftarrow K\text{-Means}(\arg\max(\mathbf{D}^{b}), \mathbf{D}^{b})$
    		 	\State $\rhd{\text{  Split the skill into two new skills:}}$
		 	\State $\mathbf{K'} \leftarrow \mathbf{K'} \cup \{b\text{-easy}, b\text{-hard}\}$
		\Else
			\State $\mathbf{K'} \leftarrow \mathbf{K'} \cup \{b\}$
		  \EndIf
  	\EndFor
     \State $\rhd{\text{  Re-evaluate the learning curves of the new skills $\mathbf{K'}$}}$
     \State \Return $\mathbf{K'}$
    \EndFunction
  \end{algorithmic}
\end{algorithm}
Algorithm~\ref{alg:ascend} describes the ASCEND algorithm.
Firstly, we identify ill-defined skills, i.e., skills on which students show no or little learning.
These skills have non-decreasing learning curves.
A learning curve plots performance (error rate) across students versus the number of practice opportunities for a skill. Figure~\ref{fig:before_split} shows a learning curve whose error rate increases with additional practice.
We measure the trend (monotonicity) of each curve with the non-parametric \textit{Spearman rank-order correlation}.
Secondly, for each identified ill-defined skill, we measure the pairwise distance between items as the difference between their item difficulty point estimates normalized by twice the larger of the two standard errors.
Based on the distance matrix, we identify the skills whose item difficulties differ significantly (the difference between the difficulty point estimates exceeds two standard errors).
We then split each such skill with a simplified $K$-Means clustering: the two items with the largest distance serve as initial centroids, and a single iteration assigns the remaining items. This yields one easier group of items and one harder group.
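The distance test and the one-iteration split can be sketched as follows (a minimal illustration under our own naming; the threshold mirrors the $\max(\mathbf{D}) > 1$ condition in Algorithm~\ref{alg:ascend}):

```python
def split_skill(difficulty, stderr):
    """Split a skill's items into (easy, hard) groups by IRT difficulty.

    difficulty, stderr: dicts mapping item id -> point estimate / std. error.
    Returns None when no pair of items differs by more than two standard
    errors (i.e. the maximum normalized distance is <= 1).
    """
    items = list(difficulty)
    # Pairwise distance: difficulty gap over twice the larger standard error.
    far_pair, d_max = None, 0.0
    for a in range(len(items)):
        for b in range(a + 1, len(items)):
            i, j = items[a], items[b]
            d = abs(difficulty[i] - difficulty[j]) / (2 * max(stderr[i], stderr[j]))
            if d > d_max:
                d_max, far_pair = d, (i, j)
    if d_max <= 1:
        return None  # difficulties are not significantly different
    # The two most distant items seed the centroids; one assignment pass
    # of K-Means (K = 2) clusters the remaining items by difficulty.
    c_easy, c_hard = sorted(far_pair, key=lambda i: difficulty[i])
    easy, hard = set(), set()
    for i in items:
        if abs(difficulty[i] - difficulty[c_easy]) <= abs(difficulty[i] - difficulty[c_hard]):
            easy.add(i)
        else:
            hard.add(i)
    return easy, hard
```

For instance, three items at difficulties $-1.0$, $-0.8$, and $1.0$, each with standard error $0.2$, split into an easy group (the first two) and a hard group (the third).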
Thirdly, we replace the original skill with two new skills: an easier one consisting of the easier items and a harder one consisting of the harder items. The harder new skill likely covers additional skills that experts failed to label.
Finally, we re-evaluate the new skills by learning curve analysis and performance prediction; in this paper we focus on learning curve analysis using the monotonicity metric.

ASCEND takes item difficulty point estimates and their standard errors as input. Estimating these parameters directly from the longitudinal homework data is problematic.
Response data from homework sessions, with multiple attempts per item, use of learning aids, and student learning occurring between attempts, are not well suited for IRT analysis: they violate local independence, the IRT assumption that a student's responses are unrelated beyond what his or her latent trait explains.
Data that capture a snapshot in time, such as end-of-chapter tests, are more likely to satisfy this assumption.
We therefore use such test data for IRT calibration.
We specify a two-parameter logistic (2PL) model in which each item is represented by a difficulty parameter (its location on the latent scale) and a discrimination parameter (the strength of the relationship between the item and the latent scale).
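Under the 2PL model, the probability that a student with latent ability $\theta$ answers item $i$ correctly takes the standard form
\begin{equation}
P(Y_{i} = 1 \mid \theta) = \frac{1}{1 + e^{-a_{i}(\theta - b_{i})}},
\end{equation}
where $b_{i}$ is the difficulty and $a_{i}$ the discrimination of item $i$.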
The current version of ASCEND only uses the IRT difficulty information.

%calibration stuff
The item parameters are calibrated for each chapter separately.
This simplification is warranted because each chapter shows strong unidimensionality and support for the IRT model.
The trade-off of chapter-specific calibration is that the latent scale constructed by the IRT model is also chapter-specific, so claims spanning multiple chapters are not supported without defining a common scale across chapters~\cite{kolen2004}.
Because ASCEND improves existing skill definitions, and skill definitions exist within a chapter, chapter-specific calibration is not a limitation.
Item parameters are estimated by marginal maximum likelihood with the \texttt{R}~\cite{R2011} package \texttt{ltm}~\cite{Rizopoulos2006}, which provides the point estimates and standard errors used in ASCEND.



