%% v1.5 [2003/10/29] Config. gooConf -> Docs/gooConf 
\documentclass[paper]{ieice}
%\documentclass[tutorial]{ieice}
%\documentclass[invited]{ieice}
%\documentclass[survey]{ieice}
%\documentclass[invitedsurvey]{ieice}
%\documentclass[review]{ieice}
%\documentclass[letter]{ieice}
%\documentclass[referee]{ieice}
\usepackage{graphicx}
\usepackage{latexsym}
\usepackage{url}

\setcounter{page}{1}

%% <local definitions>
\def\ClassFile{\texttt{ieice.cls}}
\newcommand{\PS}{{\scshape Post\-Script}}
\newcommand{\AmSLaTeX}{%
 $\mathcal A$\lower.4ex\hbox{$\!\mathcal M\!$}$\mathcal S$-\LaTeX}
\def\BibTeX{{\rmfamily B\kern-.05em
 \textsc{i\kern-.025em b}\kern-.08em
  T\kern-.1667em\lower.7ex\hbox{E}\kern-.125emX}}
\hyphenation{man-u-script}
\makeatletter
\def\tmpcite#1{\@ifundefined{b@#1}{\textbf{?}}{\csname b@#1\endcsname}}%
\makeatother
%% </local definitions>

\field{}
\vol{}
\no{}
%\SpecialIssue{}
%\SpecialSection{}
\title[A Framework for the Conceptual Modeling of Data Mining and Data Warehouses]
      {A Framework for the Conceptual Modeling of Data Mining and Data Warehouses }
%\titlenote{This paper was presented at ...}
\authorlist{%
 \authorentry[Jose.Zubcoff@ua.es]{Jose Zubcoff}{n}{labelA}
 \authorentry{Juan Trujillo}{m}{labelB}[labelC]
}
%\breakauthorline{2}
\affiliate[labelA]{The author is with the school of sea science and applied biology, Alicante University, Spain}
\affiliate[labelB]{The author is with the school of computer science, Alicante University, Spain}
%\paffiliate[labelC]{Presently, the author is with ...}

%\received{2007}{11}{25}
%\revised{2007}{12}{16}
%\finalreceived{2007}{12}{28}

\begin{document}
\maketitle

\begin{summary}
\textit{Data mining} (DM) is currently performed more as an art than as a science. 
The lack of a conceptualization of the whole \textit{knowledge discovery in databases} 
(KDD) process leads to a single-isolated perspective of DM. 
The main drawbacks of viewing DM as an isolated process are: 
(i) the duplication of time-consuming preprocessing tasks, and 
(ii) the impossibility of assuring data quality, that is, the availability 
of the input data needed for a mining process. An integrated design process which 
allows all the stages of KDD to be modeled by using standards and expressive 
formalism is still a necessity. While previous stages 
(preprocessing and integration into a unique repository) can be modeled 
by using specific domain languages, DM still lacks a formalism that allows 
models to be designed from the early stages of development. 
We do not propose a new DM algorithm, but rather provide conceptual models 
for modeling various already existing DM algorithms on top of 
\textit{data warehouses} (DWs). To this end we propose to extend UML in order 
to design DM models on top of multidimensional DW models. 
We show the effectiveness of the proposed modeling technique with a 
step-by-step guideline for a real case study which aims to discover 
clusters near \textit{marine protected areas} (MPAs).


\end{summary}
\begin{keywords}
data mining, data warehouses, conceptual modeling, KDD
\end{keywords}

\section{Introduction}\label{intro}

The \textit{knowledge discovery in databases} (KDD) process~\cite{DBLP:books/mit/PF91/FrawleyPM91} 
defines the successive steps through which 
knowledge is obtained from data as an iterative series of actions. 
It involves data preparation, data selection, data mining and knowledge discovery (Fig.~\ref{kdd}). 
\textit{Data mining} (DM) is a specific stage in the KDD process.

Each stage is \textit{highly dependent on} the previous one. For instance, 
success in the DM stage depends to a great extent upon  
the adequacy of the input data. 

DM currently takes place in isolated sessions with the main objective 
of using a specific technique to obtain knowledge.  
This \textit{single-isolated} perspective is mainly due to the lack of a 
data-mining-representation formalism that can be used from early stages of the 
development. 
However, implementing DM techniques as \textit{single-isolated} operations 
leads to a common pitfall related to 
data quality~\cite{IWAD:conf/icdm/GonzalezMMS04}.
There are several drawbacks with this \textit{single-isolated} perspective: 
while some are concerned with the suitability of data for achieving the 
business goals, 
others affect the time required to prepare and to integrate data. 
For instance, because DM takes place after the integration step has been completed, 
it is not always possible to guarantee the presence of the required input data 
for the specific technique. A further issue related to the \textit{single-isolated} 
perspective of DM is the inability to take advantage of highly time-consuming 
tasks such as the preparation and integration of data. 

As seen in Fig.~\ref{kdd}, DM is applied on top of a repository 
which integrates all the data. These data were previously preprocessed, transformed, 
cleaned and loaded to the \textit{data warehouse} (DW) (Fig.~\ref{kdd}.a). 
Once data is in the repository (Fig.~\ref{kdd}.b), DM is applied to obtain the knowledge (Fig.~\ref{kdd}.c). 
Whilst all previous stages (extraction, transformation and loading process, and the 
multidimensional model of the DW) can be modeled by using specific
domain languages, DM has until now lacked such a specific 
modeling notation. 

Domain knowledge and 
data quality can be dramatically improved by using an integrated framework for modeling 
DWs and DM techniques. 
Every step in the KDD process increases the analyst's knowledge of 
the data under analysis~\cite{DBLP:books/mit/PF91/FrawleyPM91}; without an 
integrated framework, analysts miss the opportunity to incorporate this prior knowledge. 
For instance, the DW is represented by using 
a multidimensional (MD) model which is close to the analyst's way of thinking. 
 
In order to tackle these drawbacks we propose a set of conceptual models for 
DM which will assist in the integration of DM into the KDD process by aligning 
DM and DW modeling. Our approach improves various aspects of the knowledge 
discovery process: 
(i) by assuring the \textit{data quality}, which is achieved by modeling the 
requisites of the DM techniques and 
the repository from the early stages of the DW project development; 
(ii) by avoiding duplication of time-consuming \textit{preprocessing tasks};
(iii) by enriching the prior model at each step of the \textit{iterative process}.
In this way, we present a suitable framework with which to model
DM on top of a \textit{multidimensional} model of a \textit{data warehouse} (DW). 

\begin{figure*}[tb] 
\begin{center} 
\includegraphics[width=0.65\textwidth]{figures/kdd-process1.png} %{file.eps} 
\end{center} 
\caption{Schema of the iterative KDD process } 
\label{kdd} 
\end{figure*} 


\subsection{Data Quality}
Knowledge discovery applications involve vast amounts
of data, which are likely to have originated from many
diverse, and possibly external, sources. This means that the initial
quality of the data cannot be assured. Data may thus be
noisy, unformatted, obsolete, inadequate, inaccurate, or 
incomplete~\cite{ACM:books/Fayyad96}. Moreover, although data preprocessing 
is undertaken before a mining application to
improve data quality, decisions about the right data and their updating 
frequency are not obvious when modeling the DW. A bad decision at this 
stage may cause, for instance, 
personal data to expire rapidly or certain financial data to be lost.

The adoption of management strategies from the early stages noticeably helps 
to improve data quality. This will furthermore assure the presence of the 
DM input requisites when modeling the repository, that is, at early stages 
of the development of a DW project.


\subsection{Preprocessing Tasks}
\textit{Extraction, transformation and loading} (ETL)  (Fig.~\ref{kdd}.a) is the most time-consuming task 
of the whole KDD process. It accounts for up to 70\%
of the effort. Each data mining technique requires data to be 
integrated, formatted, and cleansed from errors.
A single-isolated perspective of the data mining techniques 
is not the best choice for reducing duplication of these time-consuming tasks.
In this paper we propose an integrated framework for DM modeling 
which will allow analysts to avoid redundancies in the preprocessing tasks.  

\subsection{KDD-Iterative-Enriching Process}

Data-mining modeling must be tightly integrated with the development of a 
DW project if it is to assure the success of knowledge discovery.  
As Fig.~\ref{kdd} illustrates, the KDD process starts by integrating the data sources (Fig.~\ref{kdd}.a). 
Then, once the DW is modeled (Fig.~\ref{kdd}.b), the data-mining and other techniques (Fig.~\ref{kdd}.c) 
can analyze the repository to finally obtain knowledge (Fig.~\ref{kdd}.d).
In~\cite{DBLP:conf/er/TrujilloL03} we have presented an approach through which
to model the integration stage. In~\cite{DBLP:conf/er/Lujan-MoraTS02,DBLP:conf/uml/Lujan-MoraTS02,DBLP:journals/dke/Lujan-MoraTS06} 
we presented the elements and methodology with which to model the DW (the main support for the DM step).
As KDD is iterative by nature, 
we model the DM stage by following the same modeling paradigm used in the previous 
stages~\cite{DBLP:conf/dawak/ZubcoffT05,IWAD:journals/dke/ZubcoffT07,DBLP:conf/dawak/ZubcoffT06,DBLP:conf/dawak/ZubcoffPT07,IWAD:conf/ecdm/ZubcoffCT07,IWAD:conf/ecdm/ZubcoffT07}. 

The entire knowledge discovery process can thus be coherently modeled by using 
a common notation, the same visual language and the same tools.
In this fashion, analysts can reuse the previously designed semantic-rich 
multidimensional models of DWs for DM. In other words, DM can be aligned with the DW development 
in order to improve the whole KDD process.

Moreover, our approach allows analysts to work with domain concepts at the 
correct level of abstraction. For instance, in initial KDD-iterations it 
might be highly impractical to define the settings of the DM technique. 
These models can thus be enhanced by the specialist at each iteration cycle 
(by adding restrictions, parameters, attributes or new elements to the model).
Furthermore, the decision about the platform or tool to be used for DM
can be taken once the models are defined, therefore obtaining the most from 
each platform or tool. To sum up, once a DM model is designed, it can be 
enriched, along with other KDD-models, with new information. These previous 
models will therefore be improved and enhanced with new elements or parameters 
at each iteration. 


\subsection{Outline}
Section~\ref{rw} presents the work related to DM conceptual models, ontologies, 
and standards. 
Section~\ref{DM} explains the stages through which data and information pass 
in the KDD process when distinct DM techniques are used. 
In Section~\ref{CM} we propose the conceptual modeling for DM.
Section~\ref{CS} presents a case study. Finally, Section~\ref{CO} draws the main 
conclusions and outlines our immediate future work.




\section{Related Work}
\label{rw}

Various approaches dealing with parts of the KDD process exist in the literature. 
None of these deal with the entire KDD process from an integrated view. 
Furthermore, some are directed towards the management of the results of KDD, 
i.e. the  \textit{pattern base management system}
(PBMS)~\cite{DBLP:conf/er/RizziBCGHTVVV03,DBLP:conf/parma/Rizzi04}, 
while others deal with the data mining process, as is the case of the 
\textit{intelligent discovery assistants} 
(IDA)~\cite{IWAD:journal/TKDE/BernsteinPH05}.

The PBMS is focused on storing and managing
patterns (as data is managed by a database management system), that is,
on the management of the results. PBMS does not, therefore,
address the designing of DM models.

The IDA approach enumerates valid
DM processes by proposing an ontology for DM. 
It is an ontology-based approach 
that can assist data miners in navigating the DM processes as
isolated processes.
Nevertheless,
such a proposal neither helps to deal with complex-process modeling, 
nor does it fully describe any of the KDD subprocesses.


\subsection{Standards}
Several standards can be used to model DM, 
but, none of them improves the modeling stage of DM
or takes advantage of an integral view of KDD.

The \textit{common
warehouse metamodel} (CWM)~\cite{IWAD:specs/omg/CWM}
addresses the metadata definition of the business
intelligence field, including data mining. It solves the metadata
management problem for DWs, but it is too
complex to be handled by both final users and analysts. 

The \textit{predictive model markup language}
(PMML)~\cite{IWAD:specs/dmg/PMML}
describes machine-learned models in XML. It includes facilities with which to
model the data set and some data transformations which are executed upon it
before modeling. However, with respect to our goals, PMML has two
major drawbacks: (i) it is not process-oriented, and thus does not allow a 
data flow to be modeled through a complex KDD process; (ii) its data model is 
restricted to one table only. This is due to its inherently isolated DM perspective. 
The PMML can be used by other modeling tools as an intermediate format 
for interchanging models or results. 

The DM process model proposed by the \textit{cross industry standard process
for data mining} (CRISP-DM) consortium~\cite{IWAD:misc/crispdm}
provides an ``overview of the life cycle of the DM project'' (Fig.~\ref{crispdm}). 
This is a detailed description of the DM process which covers the phases of a DM 
project. It divides the DM process into six phases: business understanding, 
data understanding, data preparation, 
modeling~\footnote[1]{CRISP-DM uses ``modeling'' as the action 
of selecting the input data and the DM algorithm for the analysis}, evaluation 
and deployment.
DM is a data-centric process: the deeper the knowledge of the business, the greater 
the probability of success in the analysis. While CRISP-DM recommends moving back 
and forth between any phase (within the circle in Fig.~\ref{crispdm}), the 
whole process tends to gain knowledge of the domain
at each step.  
The outer circle in Fig.~\ref{crispdm} symbolizes the cyclic nature of DM itself.
  
As a result, the lessons learned during the DM process may trigger new, and
often more focused business questions.  
 
\begin{figure}[tb] 
\begin{center} 
\includegraphics[width=0.45\textwidth]{figures/crispdm.png} %{file.eps} 
\end{center} 
\caption{CRISP-DM process model} 
\label{crispdm} 
\end{figure} 



\subsection{Our Previous Work}


In our previous works, we
have dealt with the modeling of the first stages of the KDD process, 
from the ETL modeling~\cite{DBLP:conf/er/TrujilloL03} to the MD model of
DW~\cite{DBLP:conf/er/Lujan-MoraTS02,DBLP:conf/uml/Lujan-MoraTS02,DBLP:journals/dke/Lujan-MoraTS06} and the 
guideline for designing the entire DW
project~\cite{DBLP:conf/advis/Lujan-MoraT04}. 


Continuing with the KDD process, we have presented specific conceptual models, 
and their respective UML extensions, 
for the following DM techniques: association
rules~\cite{DBLP:conf/dawak/ZubcoffT05,IWAD:journals/dke/ZubcoffT07},
classification~\cite{DBLP:conf/dawak/ZubcoffT06},
clustering~\cite{DBLP:conf/dawak/ZubcoffPT07,IWAD:conf/ecdm/ZubcoffT07}
and time series analysis~\footnote[1]{The time series analysis profile is 
currently under revision}~\cite{IWAD:conf/ecdm/ZubcoffCT07}.
All of these were developed by using the UML profiling extension mechanism.
In these papers, we dealt with specific DM techniques, 
such as association rule, classification and clustering mining respectively, 
showing the suitability of our approach in tackling these mining tasks.
We then started modeling the main DM techniques.

In this paper, we propose the alignment of DM and DWing from the early stages of 
a DW project. The ultimate objective is to integrate the whole KDD process
into the adequate framework which will allow us to deal with the design 
of all stages in the KDD process, optimizing the results.
As we have previously argued,
our approach dramatically simplifies the process of designing
DM models as it reuses the existing semantic-rich MD model of
the DW which is beneath, at the conceptual level, hiding low-level details
and focusing only upon the DM process.


\section{Data Mining on Multidimensional Data}
\label{DM}

Each \textit{data mining} (DM) technique uses the data in a different manner. 
Moreover, each technique has a particular usage 
depending on the given business goal. 
For instance, an association rule algorithm can be used to discover links 
between data. This technique requires a specific data schema, and its  
own constraints upon input data and settings.
Analysts can apply a classification-mining technique or a clustering algorithm 
if they wish to classify data into groups. However, the data schemas for these 
two techniques are different. While classification-mining techniques require a 
specific attribute to predict their classes, clustering techniques automatically 
discover the classes based on the input dataset. 
Furthermore, constraints for classification are more restricting than those for clustering.
In other cases, when it is necessary to identify trends or seasonality, 
historic data can be analyzed by using time series analysis.
Here, the main restriction
is the mandatory presence of a time attribute in the input dataset and 
the special use that can be made of such an attribute in time series analysis.
In this section, we shall summarize the main aspects of association rules, 
classification, clustering and time series analysis.  
  
\subsection{Association Rules}

Association rules look for relationships between items in an entity~\cite{DBLP:conf/sigmod/AgrawalIS93}. 
The rules that the algorithm 
identifies can be used, for example, to predict a customer's probable 
future purchases, based on the items that 
already exist in the customer's shopping cart. 
This mining process is based on identifying frequent sets of items in the repository.

A group of items in a case is called an itemset. For instance, 
in the shopping-cart analysis a case may be the ticket number, 
so that all the items in the same market basket are related to each other.
We have defined the grouping factor (ticket number) as a \textit{case} attribute. 
This analysis can be carried out by grouping data by any entity (or attribute) 
by using the \textit{case}. 
That is, this analysis is not restricted to assigning the ticket number as a \textit{case}.
For instance, users can analyze sales per customer to discover buyers' behavior, 
or how correlated items are per shop by assigning the attribute shop as a case, 
and so on. 
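The role of the \textit{case} attribute can be illustrated with a minimal Python sketch (the `sales` rows and attribute names below are hypothetical, not part of the case study): grouping the same rows by different case attributes yields different itemsets.

```python
from collections import defaultdict

# Hypothetical (case, item) rows: grouping by "ticket" or by "customer"
# produces different itemsets from the same underlying sales data.
sales = [
    {"ticket": 1, "customer": "ann", "item": "bread"},
    {"ticket": 1, "customer": "ann", "item": "milk"},
    {"ticket": 2, "customer": "bob", "item": "milk"},
    {"ticket": 3, "customer": "ann", "item": "eggs"},
]

def itemsets_by_case(rows, case_attr):
    """Group items into one itemset per value of the chosen case attribute."""
    groups = defaultdict(set)
    for row in rows:
        groups[row[case_attr]].add(row["item"])
    return dict(groups)

print(itemsets_by_case(sales, "ticket"))    # one itemset per market basket
print(itemsets_by_case(sales, "customer"))  # one itemset per buyer
```

Assigning `"customer"` as the case, for example, merges all of a buyer's purchases into a single itemset, which is exactly the buyers'-behavior analysis described above.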

An association rule mining model is 
made up of a series of itemsets that are related to each other and the 
rules that describe how those items are grouped within 
the cases. 

The association rule mining 
algorithm can be decomposed into two main sub-processes~\cite{DBLP:conf/sigmod/AgrawalIS93}. 
The first process finds all combinations of items, called 
\textit{frequent itemsets}, whose 
support is greater than a threshold called \textit{minimum support}. 
Most of the existing algorithms are derived 
from Agrawal's Apriori algorithm, which searches for itemsets 
that satisfy the required minimum support. 
The second process generates the desired rules based on the 
frequent itemsets. The idea is that if $ABCD$ 
and $AB$ are frequent, then the rule $AB \Rightarrow CD$ holds if the 
ratio of $support(ABCD)$ to $support(AB)$ is at 
least as large as the \textit{minimum confidence}. Note 
that the rule will have minimum support because 
$ABCD$ is frequent. The confidence originates from 
the idea of conditional probability, i.e. the 
ratio of the supports of the frequent itemsets. Thus, 
by definition of conditional probability, the 
confidence is calculated as $support(ABCD) / support(AB)$. 
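The support and confidence computations above can be sketched in a few lines of Python (a minimal sketch with toy transactions; the itemset letters follow the $ABCD$ example in the text):

```python
# Toy transactions; frozensets allow subset tests and hashing.
transactions = [frozenset("ABCD"), frozenset("ABD"), frozenset("AB"), frozenset("CD")]

def support(itemset, transactions):
    """Fraction of transactions that contain every item of the itemset."""
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(antecedent, consequent, transactions):
    """conf(X => Y) = support(X union Y) / support(X), as defined in the text."""
    return support(antecedent | consequent, transactions) / support(antecedent, transactions)

# Rule AB => D: support(ABD) / support(AB) = 0.5 / 0.75
print(confidence(frozenset("AB"), frozenset("D"), transactions))  # 0.666...
```

An algorithm would enumerate the frequent itemsets first and only then derive rules whose confidence exceeds the minimum threshold; the sketch shows only the two measures being thresholded.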

The use of this technique requires a profound knowledge of the data. 
Analysts must decide which attribute will be used as input, and 
which will be used as the predictable attribute. The settings for the association rules
algorithm are also important because they control the results of the analysis.
The existence of thousands of rules is highly common, and this makes the 
knowledge discovery process difficult. However, having very few of them may 
also be useless because the rules obtained are really obvious. Deciding the 
minimum threshold for the support and confidence of the rules or setting the 
maximum number of itemsets are some of the decisions that allow this process 
to be controlled. A specific data schema is therefore needed for the input 
data when it is analyzed with association rule mining. Furthermore, a common 
framework which is able to take advantage of the previous domain knowledge 
will improve the DM stage.

The preprocessing stage is a time-consuming task. The repository must contain 
data integrated from diverse sources (usually operational databases) and be 
preprocessed if we are to analyze it. Before applying a DM technique, 
data must be previously cleaned, formatted and mapped to the adequate schema. 
This preprocessing task may occupy up to 70\% 
of the time. However, if the KDD process is not designed from the early stages, 
analysts cannot assure availability of the data required for this DM technique. 
We propose using the multidimensional model of the DW in order to improve 
knowledge discovery with association rule modeling. That is, we align DM 
with data warehousing from the early stages of the project development. 

\subsection{Classification}


\begin{figure}[tb] 
\begin{center} 
\includegraphics[width=0.48\textwidth]{figures/TaxaSupTree.png} %{file.eps} 
\end{center} 
\caption{Classification of taxa for marine protected areas} 
\label{classification} 
\end{figure} 

The goal of classification is to sequentially partition the data to maximize 
the differences between the values of the dependent variable (that which is 
predicted)~\cite{IWAD:conf/ml/Quinlan86}. Classes which correspond to certain 
data features are labeled and the algorithm then attempts to maximize the 
differences between them. The process of classification is recursive, so the 
process starts again at each class label. 
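The partitioning step that maximizes the differences between classes can be illustrated with a one-level split (a hedged Python sketch; the Gini impurity criterion and the `distance`/`taxa` toy data are illustrative assumptions, not figures from the case study):

```python
def gini(labels):
    """Gini impurity: 1 minus the sum of squared class proportions."""
    n = len(labels)
    return 1.0 - sum((labels.count(c) / n) ** 2 for c in set(labels))

def best_split(values, labels):
    """Threshold on one numeric attribute minimizing the weighted impurity."""
    best = (None, float("inf"))
    for t in sorted(set(values)):
        left = [l for v, l in zip(values, labels) if v <= t]
        right = [l for v, l in zip(values, labels) if v > t]
        if not left or not right:
            continue
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(labels)
        if score < best[1]:
            best = (t, score)
    return best

# Hypothetical data: distance to the next MPA vs. a binary taxa label.
distance = [1, 2, 3, 10, 12, 15]
taxa = ["rich", "rich", "rich", "poor", "poor", "poor"]
print(best_split(distance, taxa))  # (3, 0.0): threshold 3 separates the classes
```

A decision-tree algorithm would apply `best_split` over every candidate attribute and then recurse on each resulting partition, which is the recursion described above.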

This is a very useful diagnostic tool when, for instance, analyzing several 
characteristics of marine areas and \textit{marine protected areas} (MPA) to obtain a 
classification of the \textit{taxa}. Decision makers need information about the 
suitability of MPAs, which is obtained by analyzing economic data, their main 
structural characteristics, and biological information. They can then analyze 
the biodiversity of the MPA and the main factors that affect the MPAs' behavior, 
with regard to the distribution of the taxa (species). 

In Fig.~\ref{classification}, we show some of the nodes derived from the DM 
algorithm in the form of a decision tree. As a result, the taxa distribution 
(a biological aspect that represents the biodiversity and other important features) 
can be classified first by the  
distance to the next MPA (a structural aspect) (Fig.~\ref{classification}.a), 
followed by the years since the MPA's creation or the MPA's distance to the main town
(Fig.~\ref{classification}.b), depending on the first-order classification, 
and then (Fig.~\ref{classification}.c) by 
other structural and economic parameters such as the MPA's total annual budget, 
the years since enforcement was applied to the MPA or its level of protection.
The order of classification is meaningful: for instance, Fig.~\ref{classification} shows that 
the distance to the next MPA is more important than the years since the MPA's creation or the total annual budget.
Furthermore, these attributes are more significant than the localization (Atlantic or Mediterranean),
which participates in the data mining process but does not appear in Fig.~\ref{classification}.

This case study has more than two hundred variables in the repository 
containing diverse information about MPAs. 
Using a DM technique in such a complex project necessitates a 
profound domain knowledge. A flat-file view may confuse rather than clarify the 
concepts stored in the repository. Furthermore, some variables may contain 
similar names but refer to different measures (for instance, ``site'' may refer 
to the site from which the statistical sampling was taken or to a structural 
attribute). Thus, the data selection process may be critical. Wrong decisions 
in this process can lead to a bad interpretation of the results. For instance,
 highly correlated variables will appear at the maximum level of the 
classification tree. Association between two variables may therefore hide 
other important classification attributes. Thus, classification must be carried 
out with care when analyzing correlated variables. 

A profound knowledge of the attributes would be very useful in applying any 
classification algorithm. Because a flat-file with hundreds of columns can be 
very difficult to understand, it would be helpful to have additional information 
about the columns or a more semantic-rich structure that can simplify the 
analysis of the input data. In the following section, we shall show the 
suitability of a DW multidimensional model  for complex DM projects. 

\subsection{Clustering}

Clusters are ``conceptually meaningful groups of objects that share common 
characteristics''~\cite{DBLP:journal/ACM/jain99data}. Clustering is 
the process of grouping cases with common behaviors. This mining technique 
is very helpful in characterizing the data under analysis.  
Clustering groups data by only using the information found in the selected 
attributes. Because these groups capture the natural structure of the data,
they are meaningful for describing the system. 
  
Clustering in two dimensions can easily be done by any non-expert
analyst (Fig.~\ref{clustering}). However, when the number of dimensions increases, 
a data mining tool becomes essential for the analysis.   
Clustering, as with other DM techniques, is commonly used 
to discover groups in large amounts of data. 
It is widely known that ``the more information the user has about the data at hand, 
the more likely the user would be able to succeed in 
assessing its true class structure''~\cite{DBLP:journal/ACM/jain99data}. Furthermore, 
this domain information can also be used ``to improve the quality of 
feature extraction, similarity computation, grouping, and cluster 
representation''~\cite{DBLP:journal/ACM/jain99data}. However, clustering mining is usually 
applied by using large flat files as the input data source. In 
this way, mining techniques are seen as simple tools which attempt to discover patterns. 

\begin{figure}[tb] 
\begin{center} 
\includegraphics[width=0.45\textwidth]{figures/clustering.png} %{file.eps} 
\end{center} 
\caption{Clustering in a multidimensional space} 
\label{clustering} 
\end{figure} 

For this DM analysis, there is no clustering technique that is universally applicable
to the variety of structures and types of data present in the 
\textit{multidimensional} (MD) data sets. The classic 
clustering technique is called k-means~\cite{IWAD:book/jain88,IWAD:conf/ir/Rasmussen92}. This data mining technique is applied 
by iterating several successive steps. The user must first specify how many 
clusters s/he wishes to obtain
(k). The algorithm then starts by partitioning the input points into k 
initial sets. All instances are next assigned to their closest cluster center, 
after which the centroid, or mean point, of each cluster is calculated. 
These centroids are taken to be the new center values for their 
respective clusters. Finally, the whole process is repeated with the 
new cluster centers.  
The basic process of hierarchical clustering starts by assigning 
each item to a cluster~\cite{IWAD:book/jain88,IWAD:conf/ir/Rasmussen92}. It then finds the  
closest (most similar) pair of clusters and merges them into a 
single cluster. After this step, it computes 
distances (similarities) between the new cluster and each of the 
old clusters. This process is repeated until all the
items are clustered into a single cluster. Many different clustering techniques 
exist. There are, however, several common issues that must be addressed if the 
clustering algorithm is to be successful. With regard to the second stage of 
the KDD process, data selection may avoid the use of irrelevant attributes which
reduce the chance of a successful clustering, as they negatively affect 
proximity measures and eliminate clustering tendency. In addition, the data 
selection may alter the manner in which clusters are formed due to the 
sensitivity of the clustering to the data structure.  
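The k-means iteration described above can be sketched as follows (a minimal Python sketch; the two-dimensional points are hypothetical and the squared Euclidean distance is an illustrative choice):

```python
import random

def kmeans(points, k, iterations=20, seed=0):
    """Plain k-means: assign points to the nearest centroid, then recompute centroids."""
    random.seed(seed)
    centroids = random.sample(points, k)  # k initial cluster centers
    for _ in range(iterations):
        clusters = [[] for _ in range(k)]
        for p in points:  # assignment step: closest centroid by squared distance
            i = min(range(k),
                    key=lambda j: sum((a - b) ** 2 for a, b in zip(p, centroids[j])))
            clusters[i].append(p)
        # update step: mean of each cluster (keep the old centroid if a cluster is empty)
        centroids = [
            tuple(sum(c) / len(cl) for c in zip(*cl)) if cl else centroids[i]
            for i, cl in enumerate(clusters)
        ]
    return centroids, clusters

points = [(0, 0), (0, 1), (1, 0), (9, 9), (9, 10), (10, 9)]
centroids, clusters = kmeans(points, k=2)
print(sorted(centroids))
```

With two well-separated groups of points, as here, the centroids converge to the two group means after a few iterations; in higher dimensions the same loop applies unchanged, which is why a tool becomes essential only for visualizing and interpreting the result.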


\subsection{Time Series Analysis}

Time series analysis allows us to discover patterns or trends over time. It is mainly used
to forecast future values. This DM technique also helps users to describe 
large amounts of historical data. Time series analysis thus has two main goals: 
that of identifying the nature of the phenomenon represented by the sequence 
of observations, and that of forecasting. 

A time dimension is of principal importance in this DM technique. It must be 
present in each of the models to be analyzed. The time dimension thus allows users 
is the temporal interval throughout which the observed variable is analyzed. 
These time scales are defined according to business goal requirements. For 
instance, in a point-of-sales system, sales are recorded by using the 
lowest-level time-stamp required. However, forecasting monthly sales may be an 
important business objective. Therefore, time series must contain at least one 
time attribute describing the data at any level of the time magnitude. This 
constraint is shared with the DW constraints. All data in a DW is associated 
with a determined time dimension element. It is clear in this case that 
isolated perspectives of this DM technique lead to redundancies of 
preprocessing tasks. 
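The roll-up from detailed time stamps to the month level of a time dimension can be sketched as follows (a minimal Python sketch; the point-of-sale records are hypothetical):

```python
from collections import defaultdict
from datetime import date

# Hypothetical point-of-sale records at the lowest-level time stamp (day).
sales = [
    (date(2007, 1, 3), 120.0),
    (date(2007, 1, 17), 80.0),
    (date(2007, 2, 5), 200.0),
]

def roll_up_monthly(records):
    """Aggregate detailed time-stamped measures to the month level of the time dimension."""
    totals = defaultdict(float)
    for day, amount in records:
        totals[(day.year, day.month)] += amount
    return dict(totals)

print(roll_up_monthly(sales))  # {(2007, 1): 200.0, (2007, 2): 200.0}
```

Because every fact in a DW is already linked to a time dimension element, this aggregation comes for free when the DM model is designed on top of the DW, instead of being repeated as an ad-hoc preprocessing step.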

\begin{figure}[tb] 
\begin{center} 
\includegraphics[width=0.45\textwidth]{figures/tsdata.png} %{file.eps} 
\end{center} 
\caption{Time series data representation} 
\label{ts} 
\end{figure} 



\section{Conceptual Modeling for Data Mining on top of Multidimensional Models}
\label{CM}

An integrated perspective of the global KDD process 
can improve the whole knowledge discovery process. 
This means viewing DM as a subprocess of KDD as a whole. 
Thus, DM is applied on top of a DW (Fig.~\ref{kdd}.c). 
As was previously mentioned, a DW can serve as a valuable 
source of data for on-line analytical processing (OLAP) as well as for
DM. Moreover, the use of \textit{multidimensional} (MD) models, which are close
to the analyst's way of thinking, dramatically improves data understanding. 
The integration of DM and DW projects is a natural way of observing the KDD process model. 

It is widely accepted that the development of DWs is based on 
MD modeling. This semantic-rich structure expresses data 
elements by using facts, dimensions and hierarchy levels, describing the 
system domain. This kind of representation is completely intuitive for analysts.
So, for instance, they can easily
drill along any dimension in a data cube to find interesting
patterns at multiple levels of abstraction.

The conceptual modeling paradigm helps to clarify the 
structure of a system (or part of it) at any development stage. 
The model is developed as part of the analysis process. Thus, the DM model 
helps to synthesize the main characteristics of the analysis in terms of its objectives. 
Furthermore, it must reveal the requisites for achieving the goals. 

The main advantages of conceptual modeling are the abstraction of specific 
issues (platform, algorithms, parameters, etc.), flexibility, and reusability. 
Moreover, a conceptual model remains the same when it is implemented 
on different platforms, or whenever the platform is updated.

We employ the unified modeling language (UML)~\cite{IWAD:specs/omg/UML},
which allows us to extend its metamodel and
semantics to a specific domain. In order to adapt it to each particular domain, we use
the ``lightweight''~\cite{IWAD:specs/omg/UML} extension method: the UML profiling mechanism. 

A UML profile extends the UML (metamodel) by stereotyping the existing UML 
modeling elements. Stereotypes are entities which translate specific domain concepts 
into UML modeling elements (metaclasses). For each domain concept, we have to 
decide on the most suitable UML metaclass onto which to map it. These semantics 
(abstract syntax in UML terminology) are defined in terms of attributes, 
relationships, constraints, and the meaning of the concepts modeled. UML 
provides us with a standard visual notation through which to define UML profiles. 
We can thus present UML profile specifications by means of this visual notation, 
thereby making the extension concepts easy to visualize. For the sake of 
simplicity we have not included a complete definition of the elements and constraints 
of each UML profile. 


\subsection{Multidimensional Models}

The \textit{multidimensional} (MD) model structures data into facts and dimensions. 
A fact, the focus of our analysis, contains a collection of measures
(typically numerical data); the associated dimensions provide an analytical 
context through descriptive attributes which form hierarchies along which 
measures can be aggregated. 

\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{figures/mdprofile.png}
\caption{Fragment of the UML profile for modeling MD data}
\label{fig:mdprofile}
\end{figure}

In order to model \textit{data mining} (DM) techniques on top of MD data,
we reuse a previously
defined~\cite{DBLP:journals/dke/Lujan-MoraTS06} UML profile which contains the 
stereotypes, tag definitions and constraints needed to properly accomplish the 
MD modeling, thus allowing us to represent the principal MD properties of DWs 
at the conceptual level with the
additional benefits of the profiling mechanism. In this section
we provide an overview of the definition of these MD-related
stereotypes in Fig.~\ref{fig:mdprofile}.


In a UML profile (see Fig.~\ref{fig:mdprofile}), a set of
stereotypes (modeling elements with \textit{\cf{stereotype}} labels in
their headings) extends (the arrows with arrowhead triangles)
certain UML modeling elements (labeled with \textit{\cf{metaclass}}). Each
stereotype is able to define a new notation for the extended modeling
element; for instance, facts (cells of data) can be shown with a
grid icon. Thus, the MD-mapping previously
proposed~\cite{DBLP:journals/dke/Lujan-MoraTS06} defines 
\textit{Fact}, \textit{Dimension}, and \textit{Base} stereotypes of the UML
\textit{Class} modeling element to represent facts, dimensions and
hierarchy levels, respectively. The measures of analysis and various
types of dimension attributes are respectively represented
(extending the UML \textit{Property}) by the \textit{Fact\-Attribute} stereotype
and by, for instance, \textit{OID} or \textit{DimensionAttribute}
stereotypes. The profile also defines the \textit{Rolls-upTo}
stereotype representing each aggregation step in the dimension
hierarchies (with a UML \textit{Association} extension). Moreover, each stereotype 
is able to define tags with which to model related data. Since dimensions may 
be temporal descriptions, an \textit{isTime} tag definition is
provided. In addition, in order to preserve the correct usage of the defined 
stereotypes (according to their semantics), a set of constraints
can be formally specified for each stereotype. A detailed list
of these constraints, and the complete MD profile 
can be found in our previous
work~\cite{DBLP:journals/dke/Lujan-MoraTS06}.

\subsection{Metamodel for Association Rules}

The MD model can be used to design the entire DW by using stereotypes 
containing important information about the elements represented. It is then 
possible to identify hierarchies of dimensions, measures of the fact, 
degenerate dimensions, and so on in an MD structure. With this information at 
hand, users can easily design association rule mining models. 

In order to 
design mining models by taking advantage of the semantic-rich MD model, 
we propose the following UML profile. 
The defined metamodel is represented in Fig.~\ref{arPro}. This 
metamodel is based on the needs and functionalities of the association rule mining technique. The 
application domain represented is the association rule. The association rule mining 
class contains the structure of the model. We have called this class the association rule mining 
model (\textit{ARMModel}). It contains the basic attributes needed to define an AR model, 
thus permitting the unequivocal definition of an 
association rule mining model structure. The attributes in 
\textit{ARMModel} are the \textit{Case}, \textit{Input}, and/or \textit{Predict} attributes. 
Besides selecting the case, input, and predicted attributes, users can also 
set the parameters that tune the 
extraction of knowledge from the data warehouse. The minimum and maximum support levels 
(\textit{MinSupp}, \textit{MaxSupp}) specify the thresholds for the support probability of the extracted rules. 
\textit{MinSupp} is used to exclude rules with low support; in market basket analysis, for example, 
association relationships between items whose occurrence is lower 
than this threshold will be discarded. \textit{MaxSupp} is used to filter 
itemsets, thus avoiding very frequent items (i.e., well-known association 
relationships). The default values are 
0.01 and 1, respectively. The confidence shows how frequently the rule head occurs among all the
groups containing the rule body. The higher the confidence value, the more often this set of items is 
associated. Therefore, the minimum confidence (\textit{MinConf}) threshold allows users to obtain rules 
with higher conditional probability. The default value for the \textit{MinConf} is set to 0.40. 
This means that we only obtain rules with a confidence that is higher than 0.40.
The maximum itemset size (\textit{MISS}) sets a limit on the 
number of items found in an association rule. The default value for \textit{MISS} 
is 2000. This default value prevents 
the generation of an unnecessarily large number of rules and is used by the 
algorithm in the first step of 
the mining process. The maximum number of predicates (\textit{MNOP}) specifies 
the number of instances that 
may appear in the head of the resulting rule. The default value for \textit{MNOP} 
is 3; therefore, rules with more than 
three predicates (instances of attributes in the \textit{Head} of the rule; for example, C, D, and E are the predicates 
of the rule ``If A,B Then C,D,E'') will be excluded. A \textit{Filter} 
parameter is used to 
exclude itemsets, thus discarding unwanted results from the rules. 

The model can be improved at each iteration, thus setting new constraints on 
the rules to be mined or tuning the parameters. 
The main reason is that not all of these parameters are known at the early stages of 
development, so users can design the main structure of the AR mining model at this stage. 

\begin{figure}[tb] 
\begin{center} 
\includegraphics[width=0.45\textwidth]{figures/Association_Rules.png} %{file.eps} 
\end{center} 
\caption{Profile for Association Rule Mining} 
\label{arPro} 
\end{figure} 


There are certain requisites for analyzing data with association rules: the data must be discrete or categorized. 
Whether it will be used as \textit{Input} or \textit{Predict}, the quantitative continuous data of dimensional attributes must first be 
categorized. This precondition introduces a constraint into the model (a type constraint). 
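As an illustration of this precondition, the following minimal Python sketch categorizes a continuous attribute into discrete labels; the bin edges, labels, and the \texttt{categorize} helper are hypothetical examples, not part of the profile:

```python
def categorize(value, edges, labels):
    """Map a continuous value onto a categorical label via bin edges."""
    for upper, label in zip(edges, labels):
        if value < upper:
            return label
    return labels[-1]

# Hypothetical categorization of a continuous "temperature" attribute
# before it may appear as Input or Predict in an AR mining model.
edges = [10.0, 18.0]               # bin boundaries (degrees Celsius)
labels = ["cold", "mild", "warm"]

readings = [4.2, 12.7, 21.3]
print([categorize(t, edges, labels) for t in readings])  # ['cold', 'mild', 'warm']
```

In practice this categorization is a preprocessing step; modeling it once alongside the DW design is precisely what avoids repeating it for each isolated mining exercise.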

Knowledge 
is obtained in the form of if-then statements (if A then B, with \% Support and \% Confidence) where A is 
the antecedent or body of the rule, B is the consequence or head of the rule, and support and confidence are the 
respective probabilities. The stereotype \textit{ARMResults} therefore contains the following attributes: head, 
body, support and confidence. The \textit{Body} of a rule contains the instances of attributes used to 
predict the head of the rule; it contains the item or items for which the association mining algorithm has 
found an associated item. \textit{Head}, in turn, contains the 
instances of the predicted attributes, the consequence of the rule (the ``Then'' part). The 
\textit{Support} is the number of transactions that include all items in both the antecedent and consequent parts of the 
rule, expressed as a percentage of the total number of transactions. 
\textit{Confidence} is the conditional probability, that is the ratio of the number of transactions that 
include all items in the consequence as well as the antecedent (namely, the support) to the number of 
transactions that include all items in the antecedent. It shows how frequently the rule head occurs among 
all the itemsets containing the rule body. 
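The support and confidence measures defined above can be computed directly. The following Python sketch uses hypothetical market-basket transactions and helper names, and also shows how the \textit{MinSupp} and \textit{MinConf} thresholds of the \textit{ARMModel} settings would retain or discard a rule:

```python
def support(transactions, itemset):
    """Fraction of transactions containing every item in the itemset."""
    hits = sum(1 for t in transactions if itemset <= t)
    return hits / len(transactions)

def confidence(transactions, body, head):
    """Conditional probability P(head | body): supp(body+head) / supp(body)."""
    return support(transactions, body | head) / support(transactions, body)

# Hypothetical market-basket transactions.
transactions = [
    {"bread", "milk"},
    {"bread", "milk", "eggs"},
    {"bread", "eggs"},
    {"milk"},
]

s = support(transactions, {"bread", "milk"})       # 2/4 = 0.5
c = confidence(transactions, {"bread"}, {"milk"})  # 0.5 / 0.75 = 2/3
print(s, round(c, 2))

# The rule "If bread Then milk" is retained only if it meets the
# thresholds (defaults 0.01 and 0.40 in the text).
MinSupp, MinConf = 0.01, 0.40
print(s >= MinSupp and c >= MinConf)  # True
```

Note that confidence is exactly the ratio described in the text: the support of the whole rule divided by the support of its body.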

This metamodel therefore improves the process of data mining modeling by reusing the
semantic-rich multidimensional model of the repository. This also helps to 
reduce duplication of time-consuming preprocessing steps and to gain knowledge 
of the domain under analysis. 
Furthermore, from the early steps of the development process, it can guarantee 
data quality by
assuring that the data needed for the association rule analysis will be in the repository.


\subsection{Metamodel for Classification}

The main goal of classification is to maximize the differences between the 
values of a selected measure. Analysts must decide which attributes will 
be classified by using the data at hand. For instance, well-known highly 
correlated attributes must be omitted from the analysis
because they will appear at the highest hierarchical level of classification.
The underlying model must, therefore, be 
understandable and close to the analysts' way of thinking. 
In this way they can clearly identify which attributes will be used in the 
classification, even when they have to deal with hundreds of attributes.

\begin{figure}[tb] 
\begin{center} 
\includegraphics[width=0.45\textwidth]{figures/Classification.png} %{file.eps} 
\end{center} 
\caption{Profile for Classification Mining} 
\label{cmPro} 
\end{figure} 

Several parameters control the tree growth, shape and other characteristics,
thus helping to provide users with more accurate trees. \textit{MinSupp} specifies that a
node should never be split if it contains fewer rows than a specified value. \textit{MinConf} is
the minimum confidence that a node must fulfill. \textit{MaxTreeLevels} is the maximum number
of levels in the tree that the user wishes to obtain. This is a threshold parameter of
feature selection: when the number of predictable attributes is greater than 
this parameter value, it is necessary to select the most significant attributes. 
The same selection applies when the number of selected input attributes is 
greater than \textit{MaxInputAttributes}.
The function used to classify the data is the \textit{Algorithm}. 
\textit{SplitMethod} specifies whether the nodes
split into two (binary) or more branches. \textit{HomogeneityMetrics} indicates the homogeneity
criterion with which to split the nodes (the most common criteria are entropy and the Gini index). The
\textit{TrainingSize} establishes the maximum size of the training data. The \textit{Filter} parameter
specifies the exclusions of the itemset. Users can provide additional constraints on the
input data in order to improve the generated tree. Useless branches of the tree can be
pruned using these settings. 
These settings are attributes of the classification mining settings (\textit{CMSettings}) class.
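The two common homogeneity criteria mentioned above can be computed as follows; this Python sketch, with a hypothetical class distribution at a tree node, is illustrative only:

```python
from math import log2

def entropy(counts):
    """Shannon entropy (in bits) of a class distribution given as counts."""
    total = sum(counts)
    probs = [c / total for c in counts if c > 0]
    return -sum(p * log2(p) for p in probs)

def gini(counts):
    """Gini impurity of a class distribution given as counts."""
    total = sum(counts)
    return 1.0 - sum((c / total) ** 2 for c in counts)

# Hypothetical node containing 8 records of class A and 2 of class B.
print(round(entropy([8, 2]), 3))  # 0.722
print(round(gini([8, 2]), 3))     # 0.32

# A pure node is perfectly homogeneous under both criteria.
print(entropy([10, 0]) == 0.0, gini([10, 0]) == 0.0)  # True True
```

A split is chosen so that the child nodes are, on average, more homogeneous (lower entropy or Gini impurity) than the parent.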

As a result, the whole process of data mining with classification can be
easily designed by using our approach.  Furthermore, 
the first models from the early stages of development can be improved at each KDD iteration step. 

\subsection{Metamodel for Time Series}

There are three fundamental elements for time series analysis: the time 
dimension attribute, the dimension to be analyzed, and the measure under study. 
We have therefore defined new stereotypes with which to model these 
concepts:
\textit{TimeSeriesAnalysis}, \textit{Input}, \textit{Case}, \textit{Predict}
and \textit{Filter}. We also need to define other elements in our profile which 
are not stereotypes but which also model elements from the TS analysis domain.  

\begin{figure}[tb] 
\begin{center} 
\includegraphics[width=0.45\textwidth]{figures/tsaprofile.png} %{file.eps} 
\end{center} 
\caption{Profile for Time Series Analysis} 
\label{tsPro} 
\end{figure} 


The \textit{Time Series Analysis Profile} is shown in
Fig.~\ref{tsPro}. Since this profile is a UML extension,
the referenced metamodel (the extended modeling language) is UML.
The referenced metaclasses (the extended modeling elements) are the
\textit{InstanceSpecification} modeling element, the \textit{Usage}
dependency relationship and the \textit{Constraint} element. 
In this way, we
create the \textit{Settings} class (a \textit{Class} typically models the
structural abstraction of the domain entities; one of the key
modeling elements in UML) and the supporting
\textit{MissingValueTreatment} enumeration (an \textit{Enumeration} is a
data type whose values are predefined lists of literals). Since a
\textit{Settings} is a \textit{Class}, it contains attributes with which to model 
the parameters of each specific TS analysis. Thus, we can model a TS
analysis by instantiating the \textit{Settings} class. Each instance will also be 
stereotyped as a \textit{TimeSeriesAnalysis}, so it will be possible to check 
integrity constraints (related to TS analyses).
As with the rest of the metamodels, we have reused the \textit{Multidimensional Modeling
Profile}~\cite{DBLP:journals/dke/Lujan-MoraTS06} to help define the data under analysis.


In Fig.~\ref{tsPro} each time series analysis is modeled by \textit{Settings} instances
stereotyped as \textit{TimeSeriesAnalysis}. Thus, cardinality
constraints can be assured between TS analysis and data-mining
attributes (\textit{Input}, \textit{Case}, and \textit{Predict}).

The instances (\textit{InstanceSpecification} modeling elements stereotyped as \textit{TimeSeriesAnalysis})
of this class model allow us to set up parameters with which to tune the underlying algorithm.

The main concepts modeled are: \textit{Input}, \textit{Case}, and \textit{Predict}. An instance of \textit{Settings}
stereotyped as a \textit{TimeSeriesAnalysis} depends on data-mining
attributes from the multidimensional model. An \textit{Input} models the usage of the 
data under analysis by the time series analysis. A \textit{Case} specifies the usage of 
aggregated multidimensional data. Finally, the \textit{Predict} attribute 
indicates a usage dependency on the variables predicted by the underlying algorithms.

Time series analysis cases can be filtered by selection conditions on their 
values. Each filtered case is thus modeled by attaching a stereotyped constraint 
such as \textit{Filter} to the related
\textit{Case}-stereotyped usage dependency (Fig.~\ref{tsPro}).



\subsection{Metamodel for Clustering}


Clustering aims to discover common behavior in a dataset. The basic features 
to be conceptualized are the 
definition of the input data and the grouping attribute. For this DM 
technique there is no need to define a predicted attribute. 

We have therefore defined four stereotypes with which to map clustering-related 
concepts into UML: (i) \textit{Clustering}
(the generalization of clustering algorithms including their parameters),
(ii) \textit{Input} (input attributes of the DM technique referring to MD data), 
(iii) \textit{Case} (case attributes), and (iv) the abstract \textit{Attribute} (the DM 
attributes referring to the MD data). 

The \textit{input} and \textit{case} mappings 
are implemented in a profiling mechanism by
specializing the \textit{Attribute} stereotype that extends the UML \textit{Class} metaclass to
model references from DM attributes to MD data (with its reference tag
definition). The algorithm mapping is implemented by means of the \textit{Clustering}
stereotype by extending the UML \textit{InstanceSpecification} metaclass (which models specific
objects that conform to previously defined classes) and the \textit{Settings} class
by modeling the algorithm settings (which is not a stereotype but which can 
also be considered as a mapping resource). Each settings parameter is 
represented by indicating its
default value and domain in the UML visual notation. For instance, the number
of clusters (\textit{numCluster}) takes values in the domain of natural numbers (\textit{UnlimitedNatural}),
with a default value of 10.

\begin{figure}[tb] 
\begin{center} 
\includegraphics[width=0.50\textwidth]{figures/profile-clustering.png} %{file.eps} 
\end{center} 
\caption{Profile for Clustering} 
\label{cluPro} 
\end{figure}

A UML profile allows us to enhance the visualization of the stereotyped concepts. 
The addition of an icon to the standard visual notation of the extended
metaclass is applied to \textit{inputs} and \textit{cases} (Fig.~\ref{clumm}). 
Otherwise (as with the
\textit{Clustering} stereotype), the default UML notation represents the application of
a stereotype following the instance naming guidelines (i.e., a clustering is represented
as ``clustered'').

\begin{figure}[tb] 
\begin{center} 
\includegraphics[width=0.50\textwidth]{figures/mm-clustering.png} %{file.eps} 
\end{center} 
\caption{Metamodel for Clustering} 
\label{clumm} 
\end{figure} 

As is shown in Fig.~\ref{clumm}, the fact under analysis contains measures 
which are contextualized by dimensions. Each dimension used as an input 
represents an axis, and
each case corresponds to partitions of these axes. A clustering algorithm~\cite{IWAD:book/jain88,IWAD:conf/ir/Rasmussen92}
differs from other algorithms (such as classification or association rule algorithms)
in that no specific predict attribute is used to build the clustering model. 
Clustering algorithms therefore use the input attributes to build an MD space 
on which we can measure similarities existing in the data to be clustered. 

\begin{figure}[tb] 
\begin{center} 
\includegraphics[width=0.50\textwidth]{figures/mm-settings.png} %{file.eps} 
\end{center} 
\caption{Metamodel for Clustering Settings} 
\label{cluSmm} 
\end{figure} 


The algorithm's parameters are shown in Fig.~\ref{cluSmm}. These settings are: 
the maximum number of iterations needed to build
clusters, the maximum number of clusters built, the number of clusters to preselect,
the minimum support necessary to specify the number of cases that are needed to
build a cluster, the minimum error tolerance to stop the building of clusters,
the maximum number of input attributes used to build a cluster, the sample
size controlling the number of cases used to build the clusters, the maximum
number of classes to specify the number of categories in an attribute value, and
the sensitivity to detect smaller or larger density variation as clusters.
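The settings enumerated above can be captured as a plain settings object at the conceptual level. The following Python sketch mirrors those parameters; the attribute names and default values are hypothetical illustrations (only the default of 10 clusters follows the \textit{numCluster} default given earlier):

```python
from dataclasses import dataclass

# Hypothetical mirror of the clustering Settings class; names and defaults
# are illustrative, except num_clusters, whose default of 10 is from the text.
@dataclass
class ClusteringSettings:
    max_iterations: int = 100           # iterations allowed to build clusters
    max_clusters: int = 20              # upper bound on clusters built
    num_clusters: int = 10              # clusters to preselect
    min_support: int = 1                # cases needed to form a cluster
    min_error_tolerance: float = 0.05   # stopping threshold for building
    max_input_attributes: int = 255     # input attributes used per cluster
    sample_size: int = 10000            # cases used to build the clusters
    max_classes: int = 100              # categories per attribute value
    sensitivity: float = 0.5            # density variation detected as clusters

# Refining the conceptual model at a later KDD iteration is just
# overriding the defaults that have become known.
settings = ClusteringSettings(num_clusters=4, sensitivity=0.8)
print(settings.num_clusters, settings.max_iterations)  # 4 100
```

This reflects the iterative refinement described above: early iterations keep the defaults, while later ones override only the parameters that the analysis has made concrete.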

The use of this visual representation enhances the comprehension of the models.
Thus, clustering mining models can be designed from the very beginning of the 
KDD development, therefore improving the KDD process by avoiding duplicities 
of time-consuming tasks  (such as preprocessing steps), 
improving the business and data understanding, and
assuring data quality.

   
\section{Designing Data Mining: a Step-By-Step process}
\label{CS}


The \textit{data mining} (DM) stage can be modeled from the early stages of a 
\textit{data warehouse} (DW) project. 
As each \textit{knowledge discovery in databases} (KDD) stage is highly dependent 
on the previous one, 
modeling all stages within the same framework dramatically improves the discovery process.
In this way, we can assure data quality by modeling the DW before 
the DM technique is applied.
The iterative modeling of DM and DW structures facilitates communication between specialists.
Finally, our framework avoids redundancies in preprocessing tasks. 
In this section we will present a guideline for modeling DM and DW by using our approach.

The discovery process is guided by business goals. The selection of a DM technique 
is based on the objectives given by the requisites analysis. 
For instance, decision-makers may wish to discover classes in their customers' 
data in order to send a selective marketing plan. In this case, any classification 
algorithm will be suitable for this goal.
If they wish to discover common purchase behavior, then clustering algorithms 
are the appropriate technique. The KDD process is thus driven by business 
goals. The KDD process is iterative by nature 
(as was recognized in its definition~\cite{ACM:books/Fayyad96}). 
This is a data-centric process which is refined through each iteration.
The integration stage (Fig.~\ref{kdd}.a), which includes \textit{extraction, transformation and loading} (ETL), and 
the design of the repository (the DW) can be modeled using a common notation 
and a well known visual modeling language (see our previous 
work~\cite{DBLP:conf/er/TrujilloL03,DBLP:conf/er/Lujan-MoraTS02,DBLP:conf/uml/Lujan-MoraTS02,DBLP:journals/dke/Lujan-MoraTS06,DBLP:conf/advis/Lujan-MoraT04}). 
However, the DM process is still developed as an art. 
Therefore, the whole process is only as strong as the ``weakest link'' in the chain. 
Taking into consideration that our previous work dealt with the 
conceptualization of specific DM techniques 
(\cite{IWAD:conf/ecdm/ZubcoffT07,IWAD:conf/ecdm/ZubcoffCT07,DBLP:conf/dawak/ZubcoffT05,IWAD:journals/dke/ZubcoffT07,DBLP:conf/dawak/ZubcoffT06,DBLP:conf/dawak/ZubcoffPT07}), 
this paper presents a global approach through which to align DM and DW design. 

\begin{figure}[tb] 
\begin{center} 
\includegraphics[width=0.45\textwidth]{figures/CapturesMD.png} %{file.eps} 
\end{center} 
\caption{Multidimensional model of captures} 
\label{MDcs} 
\end{figure} 

The DW (Fig.~\ref{kdd}.b) is designed in response to the analysis requisites, and the main 
objective of the DW is to serve as a repository for the business analysis 
in support of the decision makers. OLAP queries and reports, together with DM analysis, 
are the main sources of information (Fig.~\ref{kdd}.c).
However, DM and DW integration must be considered from the beginning of the project:
from the requisites analysis, through deciding which data will be available in the repository, 
to selecting the DM technique based on the business goal. 
The DW developer must verify that all data used in the DM stage are 
present in the DW. If data are missing, s/he must assure their presence by 
incorporating these new data from the sources (by designing the integration 
process). In this fashion, data quality (the basis of data mining success) 
can be guaranteed from the DW design stage. The first iteration of the KDD 
can include a formal definition of the DM technique 
to be used and the data required for the analysis.

The models can be enhanced through each successive iteration. DM and DW models 
can be improved at each iteration step. Further steps may allow us to refine 
algorithm parameters, or to add other input attributes, and so on. 

\begin{figure}[tb] 
\begin{center} 
\includegraphics[width=0.45\textwidth]{figures/ClusteredCapturesWOsettings.png} %{file.eps} 
\end{center} 
\caption{Clustering model for analyzing captures} 
\label{CLUcs} 
\end{figure}

We present a case study showing a particular knowledge discovery with clustering.
These data store information about fish captures. They were gathered in various 
European ports, each of which had distinct natural factors and capture 
composition. Several dimensions must be analyzed: the methodology used for the 
captures, regional geographical aspects (habitat, depth, temperature, salinity, 
etc), the ship used for the capture, commercial interests, and the type of 
species captured. There is, of course, also a time dimension which allows us 
to analyze the seasonality of the captures, but we will focus on clustering 
the characteristics of species based on regional factors. 

The multidimensional model of the captures DW is shown in Fig.~\ref{MDcs}\footnote[1]{Some attributes are omitted for the sake 
of simplicity}. 
In this figure, the data miner can easily identify measures, and the dimension 
to be analyzed. This model furthermore provides a profound comprehension of 
the system. For instance, the existing hierarchies for each dimension are 
clearly shown by using this approach. The \textit{Time} dimension is a five-level hierarchy, 
which we can use to aggregate data at any level. There is, however, only one 
level in the \textit{Ship} dimension hierarchy. This is a view of the 
\textit{multidimensional} (MD) model of the DW. 


The data miner can easily design a clustering model based on this 
MD model. The methodology is that of selecting the attributes 
that will participate in the algorithm. If grouping conditions exist, then 
those attributes must be selected as \textit{Case}. Otherwise, all attributes 
will be used as \textit{Input}. 

In this case study (Fig.~\ref{CLUcs}), the business goal is that of analyzing 
the behavior of captures per month according to the marine area, species name, 
habitat, capture depth, salinity, and water temperature. To design the 
clustering we then set as \textit{Input} all these attributes except month (the 
grouping condition) which is a \textit{Case} attribute.
This may be the first approach for the conceptual modeling of clustering. 
At this stage we abstract specific details from the platform or the algorithm. 
Further iterations will enrich this model by selecting the most suitable 
algorithm and its settings. 
We implemented the designed clustering model on a well-known platform, 
Microsoft SQL Server, using its Analysis Services tools to implement the DM 
models. An excerpt of the results is shown in Fig.~\ref{CLUimp}. 

\begin{figure}[tb] 
\begin{center} 
\includegraphics[width=0.45\textwidth]{figures/Clustering-Implementation.png} %{file.eps} 
\end{center} 
\caption{Details of the captures Clustering results} 
\label{CLUimp} 
\end{figure} 

\section{Conclusion}
\label{CO}
\textit{Data mining} (DM) is performed more as an art than as a science. 
The lack of a conceptualization of the entire KDD process leads to a 
single-isolated perspective of DM. 
The main drawbacks of viewing DM as isolated processes are: 
(i) the duplicities of time-consuming preprocessing tasks, and 
(ii) the impossibility of assuring data quality, 
that is, the availability of input data needed for a mining process.
In order to deal with these issues, we have defined an expressive formalism 
with which to design DM models.
It is based on our previous work~\cite{DBLP:conf/er/TrujilloL03,DBLP:conf/er/Lujan-MoraTS02,DBLP:conf/uml/Lujan-MoraTS02,DBLP:journals/dke/Lujan-MoraTS06,DBLP:conf/advis/Lujan-MoraT04}, 
which models the first stages of the KDD process: the preprocessing, integration, and 
multidimensional modeling of the DW. 

Our main goal is the integration of the design of DW and DM techniques. 
As the knowledge discovery process is iterative by nature, the designed 
models can thus be improved at each iteration step. In this way, data 
quality can be guaranteed from the early stages of the development of a 
DW project. Furthermore, designing the DM together with the DW avoids 
redundant time-consuming preprocessing tasks. Finally, all the KDD stages 
can be properly designed using the same standard and visual notation tool. 
This allows us to take advantage of previously designed semantic-rich models, 
and to refine them at each iteration step. 

Our immediate future work is to extend the UML with a profile for Time Series Analysis.
Other future work aims at incorporating privacy-preserving concepts into the DM 
models, and at proposing metrics with which to measure the models' 
understandability. 

\section*{Acknowledgments}
This work has been partially supported by the METASIGN project (TIN2004-00779) from 
the Spanish Ministry of Education and Science, and by the DADS (PBC-05-012-2) project 
from the Castilla-La Mancha Ministry of Science and Technology (Spain). Special thanks 
to Jesus Pardillo for his useful comments. 

%\bibliographystyle{ieicetr}
%\bibliography{myrefs}
\begin{thebibliography}{99}

\bibitem{ACM:Inmon96}
Inmon, W.H.:
\newblock {Building the Data Warehouse}. Second edn.
\newblock John Wiley \& Sons, Inc., New York, NY, USA (1996)

\bibitem{DBLP:books/mit/PF91/FrawleyPM91}
Frawley, W.J., Piatetsky-Shapiro, G., Matheus, C.J.:
\newblock {Knowledge Discovery in Databases: An Overview.}
\newblock In: Knowledge Discovery in Databases.
\newblock AAAI/MIT Press (1991)  1--30


\bibitem{IWAD:conf/icdm/GonzalezMMS04}
Gonz{\'a}lez-Aranda, P., Menasalvas, E., Mill{\'a}n, S., Segovia, J.:
\newblock {Towards a Methodology for Data mining Project Development: The
  Importance of abstraction.}
\newblock In: ICDM Workshops (FDM). (2004)  39--46

\bibitem{IWAD:conf/ml/Quinlan86}
Quinlan, J.:
\newblock {Induction of Decision Trees.}
\newblock Machine Learning \textbf{1}(1) (1986)  81--106

\bibitem{IWAD:specs/omg/CWM}
{Object Management Group}:
\newblock {Common Warehouse Metamodel (CWM), version 1.1.}
\newblock \url{http://www.omg.org/technology/documents/formal/cwm.htm} (March
  2003)

\bibitem{IWAD:specs/dmg/PMML}
{Data Mining Group}:
\newblock {Predictive Model Markup Language (PMML), version 3.1.}
\newblock \url{http://www.dmg.org/pmml-v3-1.html} (Visited April 2007)

\bibitem{DBLP:conf/er/Lujan-MoraVT04}
Luj{\'a}n-Mora, S., Vassiliadis, P., Trujillo, J.:
\newblock {Data Mapping Diagrams for Data Warehouse Design with UML.}
\newblock In: ER. (2004)  191--204

\bibitem{DBLP:conf/er/Lujan-MoraTS02}
Luj{\'a}n-Mora, S., Trujillo, J., Song, I.Y.:
\newblock {Multidimensional Modeling with UML Package Diagrams.}
\newblock In: ER. (2002)  199--213

\bibitem{DBLP:conf/uml/Lujan-MoraTS02}
Luj{\'a}n-Mora, S., Trujillo, J., Song, I.Y.:
\newblock {Extending the UML for Multidimensional Modeling.}
\newblock In: UML. (2002)  290--304

\bibitem{DBLP:journals/dke/Lujan-MoraTS06}
Luj{\'a}n-Mora, S., Trujillo, J., Song, I.Y.:
\newblock {A UML profile for multidimensional modeling in data warehouses.}
\newblock Data Knowl. Eng. \textbf{59}(3) (2006)  725--769

\bibitem{IWAD:specs/omg/UML}
{Object Management Group}:
\newblock {Unified Modeling Language (UML), version 2.1.1.}
\newblock \url{http://www.omg.org/technology/documents/formal/uml.htm}
  (February 2007)

\bibitem{DBLP:journals/is/PedersenJD01}
Pedersen, T.B., Jensen, C.S., Dyreson, C.E.:
\newblock A foundation for capturing and querying complex multidimensional
  data.
\newblock Inf. Syst. \textbf{26}(5) (2001)  383--423

\bibitem{ACM:books/Pyle99}
Pyle, D.:
\newblock {Data preparation for data mining}.
\newblock Morgan Kaufmann Publishers Inc., San Francisco, CA, USA (1999)

\bibitem{ACM:books/Fayyad96}
Fayyad, U., Piatetsky-Shapiro, G., Smyth, P., Uthurusamy, R.:
\newblock {Advances in Knowledge Discovery and Data Mining}.
\newblock {American Association for Artificial Intelligence}, Menlo Park, CA, USA (1996)

\bibitem{DBLP:conf/sigmod/AgrawalIS93}
Agrawal, R., Imielinski, T., Swami, A.:
\newblock {Mining Association Rules between Sets of Items in Large Databases.}
\newblock In: ACM SIGMOD. (1993)  207--216
 
\bibitem{DBLP:conf/er/TrujilloL03}
Trujillo, J., Luj{\'a}n-Mora, S.:
\newblock {A UML Based Approach for Modeling ETL Processes in Data Warehouses.}
\newblock In: ER. (2003)  307--320

\bibitem{IWAD:books/Bowerman93}
Bowerman, B.J., O'Connell, R.T.:
\newblock {Forecasting and Time Series: An Applied Approach.} Third edn.
\newblock Duxbury Pr., Belmont, California, USA (1993)

\bibitem{IWAD:journal/TKDE/BernsteinPH05}
Bernstein, A., Provost, F., Hill, S.:
\newblock {Toward Intelligent Assistance for a Data Mining Process: An Ontology-Based Approach for Cost-Sensitive Classification.}
\newblock IEEE Trans. Knowl. Data Eng. \textbf{17}(4) (2005)  503--518

\bibitem{DBLP:conf/sdm/MeekCH02}
Meek, C., Chickering, D.M., Heckerman, D.:
\newblock {Autoregressive Tree Models for Time-Series Analysis.}
\newblock In: SDM. (2002)

\bibitem{RePEc:Yar90}
Yar, M., Chatfield, C.:
\newblock {Prediction intervals for the Holt-Winters forecasting procedure}.
\newblock International Journal of Forecasting \textbf{6}(1) (1990)  127--137

\bibitem{RePEc:Johansen91}
Johansen, S.:
\newblock {Estimation and Hypothesis Testing of Cointegration Vectors in
  Gaussian Vector Autoregressive Models}.
\newblock Econometrica \textbf{59}(6) (November 1991)  1551--1580

\bibitem{DBLP:journal/ACM/jain99data}
Jain, A., Murty, M., Flynn, P.:
\newblock {Data clustering: a review}.
\newblock ACM Comput. Surv. \textbf{31}(3) (1999)  264--323

\bibitem{IWAD:book/jain88}
Jain, A.K., Dubes, R.C.:
\newblock {Algorithms for Clustering Data}.
\newblock Prentice-Hall (1988)


\bibitem{IWAD:conf/ir/Rasmussen92}
Rasmussen, E.M.:
\newblock {Clustering Algorithms}.
\newblock In: Information Retrieval: Data Structures \& Algorithms. (1992)  419--442

\bibitem{IWAD:books/Westphal98}
Westphal, C., Blaxton, T.:
\newblock {Data Mining Solutions: Methods and Tools for Solving Real-World
  Problems.}
\newblock John Wiley \& Sons, Inc., USA (1998)

\bibitem{ACM:conf/isict/HofmannT03}
Hofmann, M., Tierney, B.:
\newblock {The involvement of human resources in large scale data mining
  projects}.
\newblock In: ISICT '03: Proceedings of the 1st international symposium on
  Information and communication technologies, Trinity College Dublin (2003)
  103--109

\bibitem{IWAD:specs/omg/OCL}
{Object Management Group}:
\newblock {Object Constraint Language (OCL), version 2.0.}
\newblock \url{http://www.omg.org/technology/documents/formal/ocl.htm}
  (September 2007)

\bibitem{IWAD:misc/crispdm}
{CRISP-DM Consortium}:
\newblock {CRISP-DM, version 1.0.}
\newblock \url{http://www.crisp-dm.org/} (September 2007)

\bibitem{DBLP:conf/er/RizziBCGHTVVV03}
Rizzi, S., Bertino, E., Catania, B., Golfarelli, M., Halkidi, M., Terrovitis,
  M., Vassiliadis, P., Vazirgiannis, M., Vrachnos, E.:
\newblock {Towards a Logical Model for Patterns.}
\newblock In: ER. (2003)  77--90

\bibitem{DBLP:conf/parma/Rizzi04}
Rizzi, S.:
\newblock {UML-Based Conceptual Modeling of Pattern-Bases.}
\newblock In: PaRMa. (2004)

\bibitem{DBLP:conf/advis/Lujan-MoraT04}
Luj{\'a}n-Mora, S., Trujillo, J.:
\newblock {A Data Warehouse Engineering Process.}
\newblock In: ADVIS. (2004)  14--23

\bibitem{DBLP:conf/dawak/ZubcoffT05}
Zubcoff, J.J., Trujillo, J.:
\newblock {Extending the UML for Designing Association Rule Mining Models for
  Data Warehouses.}
\newblock In: DaWaK. (2005)  11--21

\bibitem{IWAD:journals/dke/ZubcoffT07}
Zubcoff, J., Trujillo, J.:
\newblock {A UML 2.0 profile to design Association Rule mining models in the
  multidimensional conceptual modeling of data warehouses.}
\newblock Data Knowl. Eng. \textbf{63}(1) (2007)  44--62

\bibitem{DBLP:conf/dawak/ZubcoffT06}
Zubcoff, J.J., Trujillo, J.:
\newblock Conceptual modeling for classification mining in data warehouses.
\newblock In: DaWaK. (2006)  566--575

\bibitem{DBLP:conf/dawak/ZubcoffPT07}
Zubcoff, J.J., Pardillo, J., Trujillo, J.:
\newblock {Integrating Clustering Data Mining into the Multidimensional Modeling
  of Data Warehouses with UML Profiles.}
\newblock In: DaWaK. (2007)  199--208

\bibitem{IWAD:conf/ecdm/ZubcoffT07}
Zubcoff, J., Trujillo, J.:
\newblock {An Approach for the Conceptual Modeling of Clustering Mining in the KDD Process.}
\newblock In: IADIS-ECDM (2007) 119--124

\bibitem{IWAD:conf/ecdm/ZubcoffCT07}
Zubcoff, J., Cuzzocrea, A., Trujillo, J.:
\newblock {On the Suitability of Time Series Analysis on Data Warehouses.}
\newblock In: IADIS-ECDM (2007) 17--25


\end{thebibliography}


\profile{Jose Zubcoff}{%
is a full-time lecturer at the Science Faculty of the University
of Alicante, Spain, and a Ph.D. student in the Computer Science School of the
University of Alicante, Spain. His research interests include data mining
modeling, the conceptual design of data mining techniques, the integration of
data warehouses and data mining, applications of statistical techniques,
machine learning, and data mining, specifically association rule discovery,
semi-supervised learning, classification, clustering, active learning,
and Bayesian statistics, with applications to the sea sciences.
He has published and presented papers at various national and
international workshops and conferences in computer science such as DAWAK,
ECDM, IDEAS, JISBD, and IBC. He has also published papers in highly
cited international journals such as Data and Knowledge Engineering (DKE),
as well as in Springer LNCS (2005). Contact him at Jose.Zubcoff@ua.es.
}


\profile{Juan Trujillo}{%
is an associate professor at the Computer Science School of the
University of Alicante, Spain. Trujillo received a Ph.D. in Computer Science
from the University of Alicante (Spain) in 2001. His research interests include
database modeling, data warehouses, the conceptual design of data warehouses,
multidimensional databases, data warehouse security and quality, mining data
warehouses, OLAP, and object-oriented analysis and design with UML.
He has published many papers in high-quality international conferences such as
ER, UML, ADBIS, CAiSE, WAIM, and DAWAK. He has also published papers in highly
cited international journals such as IEEE Computer, Decision Support Systems
(DSS), Data and Knowledge Engineering (DKE), and Information Systems (IS). Dr.
Trujillo has served as a program committee member of several workshops and
conferences such as ER, DOLAP, DAWAK, DSS, JISBD, and SCI, and has also served
as a reviewer for several journals such as JDM, KAIS, ISOFT, and JODS.
He has been Program Chair of DOLAP'05 and BP-UML'05, and Program Co-chair of
DAWAK'05, DAWAK'06, BP-UML'06, and BP-UML'07. Contact him at jtrujillo@dlsi.ua.es.
}

\end{document}
