In many instructional settings, students are graded based on their performance on instruments such as exams or homework assignments.
Usually, these instruments are composed of \textit{items}~-- questions, problems, or parts of questions~-- which are graded individually.
The learning sciences and assessment communities have shown great interest in statistical methods for modeling such data.
For example, a popular assessment method, \textit{Item Response Theory} (IRT), infers individual differences among students and items, but it does not account for student learning over time.
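As a point of reference, the simplest IRT formulation (the one-parameter logistic, or Rasch, model) expresses the probability that student $s$ answers item $i$ correctly in terms of a student ability $\theta_s$ and an item difficulty $b_i$:
\[
P(Y_{si} = 1 \mid \theta_s, b_i) = \frac{1}{1 + e^{-(\theta_s - b_i)}}.
\]
Since neither parameter depends on how much practice the student has had, the predicted probability is static over time.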
On the other hand, the learning science community uses models that make predictions based on learning curves~\cite{cen_factor_analysis, pavlik2009performance}, assuming that all items that practice a given skill have the same difficulty.
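For illustration, a widely used model of this kind is the Additive Factor Model~\cite{cen_factor_analysis}, which writes the log-odds of a correct response by student $s$ on item $i$ as
\[
\log \frac{P(Y_{si}=1)}{1 - P(Y_{si}=1)} = \theta_s + \sum_{k} q_{ik} \left( \beta_k + \gamma_k\, t_{sk} \right),
\]
where $q_{ik}$ indicates whether item $i$ practices skill $k$, $\beta_k$ and $\gamma_k$ are the skill's easiness and learning rate, and $t_{sk}$ counts the student's prior practice opportunities on skill $k$. Because $\beta_k$ and $\gamma_k$ are shared across all items tagged with skill $k$, all such items are modeled as equally difficult.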


Learning science and IRT models differ in how they explain a student's low performance on a particular item.
For example, IRT would attribute a student's low probability of a correct response to the student's ability being insufficient for the item's difficulty.
In contrast, the learning science paradigm~\cite{koedinger2012knowledge} assumes that the item involves one or more skills that the student has yet to master.
Therefore, learning science models require a careful skill definition, which can be an expensive requirement: item-to-skill mappings are often constructed manually by an expert.
Although new techniques allow for fully automatic discovery of skill definitions~\cite{thmm_edm2013}, such automatically discovered item-to-skill mappings are often hard to interpret.
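For concreteness, an item-to-skill mapping is commonly encoded as a binary $Q$-matrix with one row per item and one column per skill; a small, purely hypothetical example for fraction addition might be
\[
\begin{array}{l|cc}
\text{item} & \text{add numerators} & \text{common denominator} \\ \hline
1/4 + 2/4 & 1 & 0 \\
1/3 + 1/2 & 1 & 1 \\
2/5 + 1/5 & 1 & 0
\end{array}
\]
An expert can audit such a mapping row by row; an automatically discovered one typically lacks skill labels of this kind, which is what makes it hard to interpret.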

Prior work~\cite{desmarais11,winters2005topic} that uses multidimensional IRT or matrix factorization algorithms to discover the item-to-skill mapping from data has had limited success, distinguishing only between broad subject areas (e.g., French vs.\ math) but not making finer distinctions within an area.
Moreover, matrix factorization is only useful when the data vary along multiple dimensions.
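In a typical formulation, such approaches factor a student-by-item response matrix $R \in \{0,1\}^{S \times I}$ into low-rank components,
\[
R \approx A\, Q^{\top},
\]
where the rows of $A \in \mathbb{R}^{S \times K}$ hold per-student skill levels and $Q \in \mathbb{R}^{I \times K}$, after thresholding, is read as the item-to-skill mapping. If responses vary along essentially a single dimension, the factorization has little signal from which to separate skills beyond broad subject areas.
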
In this paper, we propose a novel method, \textit{Assessment-Driven Skill Component Discovery} (\methodname), which uses assessment methods to improve an existing item-to-skill definition.
\methodname{} exploits IRT difficulty information that has traditionally been ignored by skill refinement algorithms. We demonstrate \methodname{} on data that are locally unidimensional.
