%Traditionally, the assessment and learning science communities have focused on different aspects of student modeling. The assessment community uses Item Response Theory (IRT), which models static item difficulties and student abilities, while the learning science community uses learning curve analysis and student models that account for learning. Prior efforts to unify Item Response Theory with student models improve prediction accuracy, but item difficulty is prone to be underestimated because it can be confounded with learning. Meanwhile, few attempts have been made to use Item Response Theory to refine the underlying item-to-skill mapping (knowledge component model), on which student models rely for both high prediction accuracy and interpretability.


%ASCEND exploits difficulty information that has traditionally been ignored in skill refinement algorithms.
Methods that model learning and personalize tutoring activities require an item-to-skill mapping (sometimes called a Q-matrix).
We propose Assessment-Driven Skill Component Discovery (ASCEND), an automatic method that uses assessment insight to improve an existing item-to-skill mapping.
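As a toy illustration (these items and skills are hypothetical, not drawn from our dataset), a Q-matrix is a binary item-by-skill matrix in which entry $Q_{ij}=1$ indicates that item $i$ exercises skill $j$; refining the mapping amounts to editing this matrix, e.g., splitting one skill column into two:
\[
Q =
\bordermatrix{
        & \text{add} & \text{subtract} \cr
i_1 & 1 & 0 \cr
i_2 & 1 & 0 \cr
i_3 & 0 & 1 \cr
}
\]
A refinement that splits the \emph{add} skill (say, into with- and without-carrying variants) would replace its column with two new columns and reassign $i_1$ and $i_2$ between them.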
% Our contributions are (1) using monotonicity in learning curve analysis to identify badly defined knowledge components, and (2) using well-estimated item difficulties from which we extract factors for splitting the knowledge components.
% \yh{Prior work on skill refinement assumes that items practicing a skill have the same difficulty. We are the first to consider item difficulty discrepancies within a skill in skill refinement, exploiting a well-developed assessment method (IRT). Our method, ASCEND, is able to detect new latent skill components automatically without complex modeling of item-to-skill or skill-to-skill relationships.}
Using data from a commercial math tutor, we demonstrate that ASCEND improves about 35\% of the items that were originally mapped to ill-defined skills in the expert model.
