\documentclass[12pt,a4paper]{article}

\usepackage[a4paper]{geometry}
\usepackage[onehalfspacing]{setspace}
\usepackage{tocloft}
\usepackage{caption}
\usepackage{fancyhdr}
\usepackage[rm,bf,tiny,indentafter]{titlesec}
\usepackage{amsmath}
\usepackage{datetime}
\usepackage{url}
\usepackage{hyperref}
\usepackage{graphicx}
\usepackage{listings}
\usepackage{color}
\usepackage{times}
\usepackage{xcolor}

\hypersetup{
    colorlinks,
    citecolor=black,
    filecolor=black,
    linkcolor=black,
    urlcolor=black
}

\geometry{left=20mm,right=10mm,top=20mm,bottom=30mm}

\renewcommand{\theenumi}{\alph{enumi}}
\renewcommand{\theenumii}{\arabic{enumii}}
\renewcommand{\labelenumi}{\theenumi )}
\renewcommand{\labelenumii}{\theenumii )}
\renewcommand{\labelenumiii}{ -)}
\renewcommand{\thetable}{\thesection.\arabic{table}}
\renewcommand{\thefigure}{\arabic{figure}}
\renewcommand{\theequation}{\thesection.\arabic{equation}}
\newdateformat{dateFormat}{\twodigit{\THEDAY}-\twodigit{\THEMONTH}-\THEYEAR}

\renewcommand{\contentsname}{Contents}
\renewcommand{\cftsecleader}{\cftdotfill{\cftsecdotsep}}
\renewcommand{\cftsecdotsep}{\cftdotsep}

\newcommand\refitem{\refstepcounter{refer}\therefer}
\newcommand\resetcounters{\setcounter{equation}{0}\setcounter{table}{0}\setcounter{figure}{0}}

\DeclareCaptionLabelSeparator{tire}{\; -- \;}
\captionsetup[figure]{labelformat=simple,labelsep=tire}
\captionsetup[table]{labelformat=simple,labelsep=tire,position=top,justification=justified,singlelinecheck=false}
\captionsetup[lstlisting]{labelformat=simple,labelsep=tire,position=top,justification=justified, singlelinecheck=false}

\titleformat{\section}[hang]{\normalfont \bfseries}{\hspace{6mm} \thesection}{1em}{}
\titleformat{\subsection}[hang]{\normalfont \bfseries}{\hspace{6mm} \thesubsection}{1em}{}
\titleformat{\subsubsection}[hang]{\normalfont \bfseries}{\hspace{6mm} \thesubsubsection}{1em}{}
\titleformat{name=\section,numberless}[hang]{\normalfont \bfseries}{\hspace{6mm}}{0em}{}
\titleformat{name=\subsection,numberless}[hang]{\normalfont \bfseries}{\hspace{6mm} I.}{1em}{}

\pagestyle{fancy}
\fancyhf{}
\setlength{\footskip}{1cm}
\cfoot{\thepage}

\begin{document}

\lstset{language=[Sharp]C, rulecolor=\color{blue!80!black},
	basicstyle=\footnotesize, % print whole listing small
	keywordstyle=\color{black}\bfseries, % bold black keywords
	identifierstyle=, % nothing happens
	stringstyle=\ttfamily, % typewriter type for strings
	showstringspaces=false,
	breaklines=true,
	backgroundcolor=\color{white},
	frame=single,
	float=p}
\fontfamily{ptm}\selectfont

\tableofcontents
\thispagestyle{empty}

%toc (table of contents), lof (list of figures), or lot (list of tables)
\addtocontents{toc}{\protect\thispagestyle{empty}}

%ends the page and forces the printing of all elements up to this point
\clearpage
\setcounter{page}{2} %can set for figures pages etc...
\newcounter{refer}
\pagestyle{plain}

\section*{Introduction
	\addcontentsline{toc}{section}{Introduction}
}
This paper aims at the study and implementation of a task from a specific area of the ``Data mining'' subject. The topic of the paper is ``Implementation of classification and clustering techniques for an application that analyses the customers of a retail chain sharing a common database''. To carry out this task, two algorithms were implemented; they are described later in this paper.

The concept of \textit{Data mining}, also known as ``knowledge discovery in large databases'', is a modern and powerful instrument of Information and Communication Technology, one that can be used to extract useful but still unknown information. It automates the process of discovering relationships and combinations in raw data, and the findings can be embedded into an automated decision-support system.

Data mining methods originate from classical statistical computing, from database administration and from artificial intelligence. They do not replace the traditional methods of statistics, but are rather considered extensions of graphical and statistical techniques. Because human intuition is absent from the software implementation (for distinguishing what is relevant from what is not), the results of data mining methods must be systematically subjected to human supervision.

A classic example that motivated the appearance of the problem was the analysis of a consumer's market basket. In other words, it was an attempt to discover rules in consumer behaviour and habits. Such a problem was obviously driven by the needs of marketing, which develops very quickly, and the result of such an analysis can be used to build marketing strategies.

When applying classification and clustering techniques in real systems, the following stages can be distinguished:
\begin{enumerate}
\item defining the problem;
\item identifying the data sources;
\item collecting and selecting the data;
\item preparing the data;
\item defining and building the model;
\item evaluating the model;
\item integrating the model.
\end{enumerate}

\textbf{Defining the problem} consists in recognising a business opportunity or necessity. Therefore, one must delimit what is to be solved through Data Mining, the objectives pursued and the expected results. The problem to be solved through Data Mining is a component of the organisation's opportunity, but is not identical to it. The problem must also be given an adequate form so that it can be treated with this technique.

\textbf{Identifying the data sources} consists in establishing the general structure of the data needed to solve the problem, the rules by which these data are constituted, and their location. Each data source is examined in order to become familiar with its content and to identify inconsistencies or definition problems.

\textbf{Collecting and selecting the data} is the stage in which the data to be used later are extracted and placed into a common base. This stage takes a long time, roughly 80\% of the total, and the existence of data warehouses is a real advantage.
In our case the data analysis starts precisely from this stage: we assume that the data have already been collected into a single database, according to the conditions of the task set initially.

\textbf{Preparing the data}: the data are usually stored in collections built for other purposes, so it is natural to have a preliminary preparation phase before extraction through Data Mining. The transformations the data undergo for Data Mining concern: extreme values, missing values, text values, tables. Extreme values can be treated by bounding them between limits defined by the mean and a number of standard deviations, by exclusion or limitation, or by isolating the peaks.
In the implementation produced for this paper, data preparation is the responsibility of the data layer, which extracts the data from the database and converts them into the type required by the interface that runs the algorithms.

\textbf{Defining and building the model} is the stage closest to the notion of Data Mining itself and refers to creating the software model that performs the exploration. It is accompanied by a training or learning phase, depending on the Data Mining techniques used.

We therefore apply two algorithms to solve the problem of finding the frequent sets inside transactions and the probabilistic relations between them. The algorithms described below are \textbf{FP-Growth} and \textbf{Apriori}.

The solution of the problem consists of two separate tasks:
\begin{enumerate}
\item finding the frequent itemsets, i.e. those exceeding the minimum support threshold minsup (this is the stage that consumes the largest amount of resources);
\item building, on top of these, the association rules whose confidence is greater than the minimum given by the value minconf.
\end{enumerate}
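The two subtasks above can be sketched as follows (an illustrative Python sketch under our own assumptions, not the C\# implementation described later; the brute-force counting is for clarity only and all names are ours):

```python
from itertools import chain, combinations

def mine_rules(transactions, minsup, minconf):
    """Phase 1: find itemsets with support >= minsup (brute force,
    for illustration only); Phase 2: derive rules with confidence
    >= minconf from the frequent itemsets."""
    n = len(transactions)
    items = sorted(set(chain.from_iterable(transactions)))
    frequent = {}
    for size in range(1, len(items) + 1):
        found = False
        for cand in combinations(items, size):
            sup = sum(1 for t in transactions if set(cand) <= t) / n
            if sup >= minsup:
                frequent[frozenset(cand)] = sup
                found = True
        if not found:          # no frequent itemset of this size:
            break              # larger ones cannot be frequent either
    rules = []
    for itemset, sup in frequent.items():
        for r in range(1, len(itemset)):
            for x in combinations(itemset, r):
                x = frozenset(x)
                conf = sup / frequent[x]   # sup(X u Y) / sup(X)
                if conf >= minconf:
                    rules.append((set(x), set(itemset - x), conf))
    return frequent, rules
```

On a toy database of four baskets such as bread/milk, bread/butter, bread/milk/butter and milk, with minsup $= 0.5$ and minconf $= 0.6$, phase 1 keeps five itemsets and phase 2 yields four rules.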


The \textbf{APRIORI} algorithm is the best-known algorithm for discovering association rules and is used in most commercial products. It was proposed by Agrawal and Srikant in 1994 [AS94] to extract frequent itemsets using candidate generation. The algorithm relies on the following property of frequent itemsets: every subset of a frequent itemset must itself be frequent.


The \textbf{FP-Growth} algorithm is a more efficient algorithm for discovering association rules. Its main idea is to replace the multiple scans of the database with the traversal of a single element that describes the data, a metadata element. The structure chosen to describe the data is a tree, the FP-tree. Building the FP-tree uses a divide-and-conquer technique, which decomposes a complex problem into several elementary ones. In this way the costly candidate-generation procedure, which is part of the Apriori algorithm, is eliminated.
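The prefix-sharing idea behind the FP-tree can be sketched as follows (illustrative Python, not the paper's implementation; the node layout and names are our assumption):

```python
class FPNode:
    """Node of an FP-tree: item label, count, and children keyed by item."""
    def __init__(self, item=None, parent=None):
        self.item, self.count, self.parent = item, 0, parent
        self.children = {}

def build_fp_tree(transactions, minsup_count):
    # First scan: count single items and keep only the frequent ones.
    counts = {}
    for t in transactions:
        for item in t:
            counts[item] = counts.get(item, 0) + 1
    frequent = {i: c for i, c in counts.items() if c >= minsup_count}
    root = FPNode()
    # Second scan: insert each transaction with its items ordered by
    # descending frequency, sharing common prefixes so the database is
    # compressed into one in-memory structure (the "metadata element").
    for t in transactions:
        ordered = sorted((i for i in t if i in frequent),
                         key=lambda i: (-frequent[i], i))
        node = root
        for item in ordered:
            node = node.children.setdefault(item, FPNode(item, node))
            node.count += 1
    return root, frequent
```

Because frequent items are inserted first, transactions sharing a frequent prefix share a single path, which is what makes the later divide-and-conquer mining possible without candidate generation.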

FP-Growth is about an order of magnitude more efficient than Apriori and scales well when mining frequent sets, for short and long patterns alike. A comparison of the running times of Apriori and FP-Growth is presented in Figure~\ref{figure:AprioriFpgPerformance}.

\begin{figure}	
	\center \includegraphics {AprioriFpgPerformance.png}
	\caption{Running time of the Apriori and FP-Growth algorithms}
	\label{figure:AprioriFpgPerformance}
\end{figure}

\clearpage
\section{Theoretical considerations}
\subsection{Algorithms for discovering association rules}
The discovery of association rules aims at finding a set of common attributes shared by a large number of objects in a database. For discovering associations, we assume there is a set of transactions, each transaction being a list of items (a list of books, a list of groceries). A user might be interested in finding all the associations that have support $s$ (prevalence) and confidence $\alpha$ (trust), therefore:
\begin{enumerate}
\item all the associations that satisfy the user's constraints must be found;
\item the associations must be found efficiently in a database of large dimensions.
\end{enumerate}
 
With the progress of bar-code technology, retail companies have accumulated and stored huge quantities of data about products and sales, referred to as basket data. For these kinds of data, a record (item) typically consists of a transaction identifier, the transaction date and the products bought in that transaction.
Successful companies regard these databases as essential parts of their marketing infrastructure. They are interested in introducing information-driven marketing processes, coordinated through database technologies, that allow marketers to develop and implement marketing programmes and strategies tailored to the various categories of customers.
The first approaches to the problem were made by Agrawal, who proposed the AIS algorithm in 1993 [AIS93]. In 1994 he also proposed the APRIORI algorithm [AS94]. In 1995, the SETM algorithm proposed by Houtsma and Swami appeared [HS95]. However, APRIORI has had the greatest impact, and it remains to this day the major technique used by commercial vendors to detect frequent itemsets.
Rules of this kind are often used for:
\begin{enumerate}
\item market basket analysis;
\item catalogue design;
\item arranging goods on shelves;
\item categorising customers based on their purchasing patterns;
\item finding market segments and trends in consumer behaviour;
\item identifying goods that should be promoted together.
\end{enumerate}

As already mentioned, the databases involved in such applications are usually of very large dimensions. For this reason, using algorithms that are as fast as possible is very important for these applications.

\subsection{Problem definition}
The classical formulation of an association-rule mining problem is the following:
let $I = \lbrace I_1, I_2, \ldots, I_m \rbrace$ be a set of items and $D = \lbrace t_1, t_2, \ldots, t_n \rbrace$ a transactional database, where each transaction has a unique identifier (TID) and contains a set of items $t_i = \lbrace I_{i1}, I_{i2}, \ldots, I_{ik} \rbrace$ with $I_{ij} \in I$.

An association rule is an implication of the form $X \Rightarrow Y$, where $X, Y \subset I$ are sets of items called itemsets and $X \cap Y = \emptyset$.
An association rule must have two measures: a minimum support (prevalence) $s$ and a minimum confidence (trust) $\alpha$. These values ($s$, $\alpha$) are given as input parameters.

Support indicates how frequently a pattern occurs, for example how often the items appear together. It is computed with the formulas:
\begin{equation}
\label{SupportEq}
S(X \Rightarrow Y) = P(X \cup Y)
\end{equation}

or

\begin{equation}
\mathit{Support} = \frac{\text{number of transactions that contain both } X \text{ and } Y}{\text{total number of transactions}}
\end{equation}

The second measure, confidence, indicates the strength of an association, for example how much one item depends on another (for instance, if product $X$ is bought by a person, then in $\alpha\%$ of the cases product $Y$ is bought as well). It is computed with the formulas:
\begin{equation}
\alpha(X \Rightarrow Y) = P(Y \mid X) = \frac{P(X \cup Y)}{P(X)}
\end{equation}

or

\begin{equation}
\mathit{Confidence} = \frac{\text{number of transactions that contain both } X \text{ and } Y}{\text{number of transactions that contain } X}
\end{equation}

The problem of discovering association rules consists in generating rules whose support is greater than the minimum support $s$ and whose confidence is greater than the minimum confidence $\alpha$. The approach is independent of the representation of the database $D$, which may be a data file, a relational table or the result of a query.
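As a small numeric illustration of the two measures (the toy data are our own, not from the paper):

```python
# Toy transaction database: four shopping baskets.
D = [{"bread", "milk"}, {"bread", "butter"},
     {"bread", "milk", "butter"}, {"milk"}]

def support(X, Y):
    """Fraction of transactions containing every item of X and Y."""
    return sum(1 for t in D if X | Y <= t) / len(D)

def confidence(X, Y):
    """support(X u Y) / support(X): how often Y accompanies X."""
    return support(X, Y) / support(X, set())

s = support({"bread"}, {"milk"})      # 2 of 4 baskets -> 0.5
c = confidence({"bread"}, {"milk"})   # 0.5 / 0.75 = 2/3
```

Here the rule bread $\Rightarrow$ milk has support $0.5$ (it holds in two of the four baskets) and confidence $2/3$ (milk appears in two of the three baskets containing bread).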

\subsection{Description of the Apriori algorithm}
The algorithm starts from the assumption that an itemset consists of two fields: one that keeps the number of transactions supporting that itemset (a counter) and one holding a set of items.
In the first part of the algorithm, for each item $i \in I$ the number of transactions in which $i$ appears is counted. If the result exceeds the minimum support $s_{min}$, that item becomes a frequent 1-itemset (a frequent itemset of length 1). Every subsequent step, say step $k$, then consists of two phases.

Figure~\ref{figure:AprioriPseudocode} presents the Apriori algorithm in pseudocode.

\begin{figure}	
	\center \includegraphics {AprioriPseudocode.png}
	\caption{Pseudocode of the Apriori algorithm}
	\label{figure:AprioriPseudocode}
\end{figure}

In the first phase, the frequent itemsets found at step $k-1$ are used to generate the candidate itemsets $C_k$, using the $AprioriGen$ function described in Figure~\ref{figure:AprioriGen}.

\begin{figure}	
	\center \includegraphics {AprioriGen.png}
	\caption{The AprioriGen function used in phase 1 of the Apriori algorithm}
	\label{figure:AprioriGen}
\end{figure}
  
In the second phase, the database $D$ is scanned and, for each transaction $t$, the candidates contained in $t$ are determined. If a candidate is found in $t$, its counter is incremented by 1. For fast computation, the candidates in $C_k$ contained in a given transaction $t$ are determined efficiently with the $Subset$ function presented in the section on the implementation of the algorithm.
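The two phases of step $k$ can be sketched as follows (illustrative Python under our own naming, not the actual $AprioriGen$ and $Subset$ implementations from the figures):

```python
from itertools import combinations

def apriori_gen(prev_frequent, k):
    """Join frequent (k-1)-itemsets sharing their first k-2 items, then
    prune candidates with an infrequent (k-1)-subset (downward closure)."""
    prev = sorted(sorted(s) for s in prev_frequent)
    candidates = []
    for i in range(len(prev)):
        for j in range(i + 1, len(prev)):
            if prev[i][:k - 2] == prev[j][:k - 2]:
                cand = frozenset(prev[i]) | frozenset(prev[j])
                if len(cand) == k and all(
                        frozenset(sub) in prev_frequent
                        for sub in combinations(sorted(cand), k - 1)):
                    candidates.append(cand)
    return candidates

def count_pass(transactions, candidates):
    """Second phase of step k: one database scan, incrementing each
    candidate's counter for every transaction that contains it."""
    counters = {c: 0 for c in candidates}
    for t in transactions:
        for c in candidates:
            if c <= t:
                counters[c] += 1
    return counters
```

A real implementation replaces the inner membership loop with a hash-tree $Subset$ lookup, but the counting semantics are the same.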

\subsection{Advantages and disadvantages of the Apriori algorithm}
\emph{Advantages:}
\begin{enumerate}
\item it uses the properties of frequent itemsets;
\item it can easily be parallelised;
\item it is easy to implement.
\end{enumerate}

\emph{Disadvantages:}
\begin{enumerate}
\item it assumes that the transactional database is loaded into memory;
\item it requires more than $m$ scans of the database.
\end{enumerate}

\clearpage
\section{Architectural solution and implementation}
This section describes the architectural solutions used to obtain a more flexible and efficient implementation from the point of view of the solution architecture. We also give implementation examples both of the architectural solution and of the technical solution used to implement the Apriori algorithm.

\subsection{Platforms and systems used}
The system is implemented in the C\# language on top of .NET Framework 4.0. Either MS SQL Server or the MySQL database engine can be used as the database management system. Thanks to the architectural approach adopted, adapting a new database management system requires only minimal effort.

The basic functionality of the system is described in Figure~\ref{figure:SystemUsecase}.

\begin{figure}	
	\center \includegraphics {SystemUsecase.png}
	\caption{Basic functionalities of the system}
	\label{figure:SystemUsecase}
\end{figure}

\subsubsection{Connect to database use case}
The connection to a database management system is implemented through a generic interface which can later be easily extended with another database management system or another data storage system.
Listing~\ref{DbInterface} presents the database connection interface.

\begin{lstlisting}[caption = Connection to database interface, label=DbInterface]
public interface IConnectionStringType
{
    DbConnectionStringBuilder BuildConnectionString();
    string DataBaseManagementSystemName { get; }
    IRelationalSchemaExplorer GetSchemaExporter { get; }
    IDataRepository GetDataRepository { get; }
    DbConnection DataConnection { get; }
    DbCommand GetDbCommand(string query);
    DbDataReader DataReader { get; }
}
\end{lstlisting}

\subsection{Deployment}
The deployment of a web project is the final stage before the system goes live. The most important factors that determine the environment and services used here are:
\begin{enumerate}
\item the hardware capacity of the database server, sufficient to satisfy the calls coming from the application;
\item the hardware resources of the web server hosting the application, with enough CPU power to process the web requests;
\item a web server environment and services that fit the chosen technologies and can handle automated jobs.
\end{enumerate}
The deployment diagram is shown in Figure~\ref{fig:deploymentDiagram}.

\begin{figure}	
	\center \includegraphics[scale=0.9]{deploymentDiagram.png}
	\caption{Deployment diagram of the system}
	\label{fig:deploymentDiagram}
\end{figure}

Ideally, the web application and the database should be hosted on different servers in order to improve performance and system speed.

\clearpage
\subsection{Domain model}
During the development process we have to think about our entities in terms of real life scenarios. In order to understand better the domain entities of our system we can build the domain model of it. Also the domain model is created in order to represent the vocabulary and key concepts of the problem domain. The domain model also identifies the relationships among all the entities within the scope of the problem domain, and commonly identifies their attributes. A domain model that encapsulates methods within the entities is more properly associated with object oriented models. The domain model provides a structural view of the domain that can be complemented by other dynamic views, such as Use Case models.

An important advantage of a domain model is that it describes and constrains the scope of the problem domain. The domain model can be effectively used to verify and validate the understanding of the problem domain among various stakeholders. It is especially helpful as a communication tool and a focusing point both amongst the different members of the business team as well as between the technical and business teams.

While developing the system described in this paper, we used a graphical representation of the domain model based on UML class diagrams. Building the domain model during the architecture design proved its efficiency in business terms: we found holes in the logic of the system implementation and business rules that contradicted each other, considering the functional specifications in the specification document provided by the stakeholders.

A well-thought domain model serves as a clear depiction of the conceptual fabric of the problem domain and therefore is invaluable to ensure all stakeholders are aligned in the scope and meaning of the concepts indigenous to the problem domain. A high fidelity domain model can also serve as an essential input to solution implementation within a software development cycle since the model elements comprising the problem domain can serve as key inputs to code construction, whether that construction is achieved manually or through automated code generation approaches. It is important, however, not to compromise the richness and clarity of the business meaning depicted in the domain model by expressing it directly in a form influenced by design or implementation concerns.

In this section we reveal the core domain model of the application entities. There are, of course, many more entities in the system, but some of them are unimportant or simply trivial for the system we have developed, so we do not describe them here.

Figure~\ref{fig:modelRelations} shows the main entities of the system and the relations between them. As we can see, the core entity is $User$. This is natural, because the domain of our system is a social one and we focus on users and their social connections.

The $User$ entity is the core connecting entity in the system and has access to all other entities, apart from the system helper methods, which are not shown in the model representations because of their irrelevance. As in any social-oriented system, the user has contacts. $Contacts$ is a container that essentially holds the connection of one $User$ entity to another $User$ entity.

One of the core relations of the user in our system is the $User$--$Company$ relation. As Figure~\ref{fig:modelRelations} shows, the user is connected to a company through the position he holds at some moment. $UserPosition$ is the holder entity that stores the information about the user's activity while hired at some company. As in real-life scenarios, while working at a company people develop projects; at this point we have a helper entity, $Project$. A project is created for the first time by a user, who is from then on considered the owner of the project. After the project is created, the users working on the same project can be linked to it. When a user is assigned to a given project, he can add project descriptions to it. To hold this information in the system we have a helper entity, $UserPositionProjectDescription$, which keeps all the information about the description the user wants to add to the given project.

\begin{figure}[!h]
	\center \includegraphics[scale=0.85]{modelRelations.png}
	\caption{Core model relations}
	\label{fig:modelRelations}
\end{figure}


More detailed information about each entity is given further on, where each core entity is presented with all its members and methods.
\subsubsection{User model}
\begin{figure}[!h]
	\center \includegraphics[scale=1]{userModel.png}
	\caption{User model}
	\label{fig:userModel}
\end{figure}
Figure~\ref{fig:userModel} shows the $User$ entity with all its properties and some of its methods. Note that it is not relevant to list here all the methods of the user entity; the methods shown in Figure~\ref{fig:userModel} are just examples.
The user entity contains the properties needed to store all the basic information about the user, such as $FirstName$, $LastName$, etc., and all the methods that operate on this data. The $User$ entity also holds some methods that are triggered by certain system events. A good example of such a method is $sendRecentActivityMail()$ (see Listing~\ref{recentActivityMailLst}). As its name suggests, it grabs the data about the user's recent activity and sends it to the email address the user indicated during registration.
\begin{lstlisting}[language=PHP, caption=Method of sending recent user activity, label=recentActivityMailLst]
public function sendRecentActivityMail() {
        $emailTemplate = EmailTemplateTable::getByName('recent_activity');
        $data = $emailTemplate->getReplacedHoldersForRecentActivity($this);
        $mailer = sfContext::getInstance()->getMailer();
        $message = $mailer->compose(sfConfig::get('app_mailer_from'), $this->getEmail(), $data['subject'], $data['content']);
        $message->setContentType('text/html; charset=utf-8');
        
        return $mailer->send($message);
    }
\end{lstlisting}

\subsubsection{Companies model}
The system described in this paper focuses not only on relations between people but mostly on their professional interaction. That is why we have a complex domain model tied to companies and to the positions taken by users. In this context we also take into account the connections users gain by holding jobs at the same company. The users' connections through the company are reflected in the search process shown in Figure~\ref{fig:searchActivity}. The search screen presenting the result of the connection-grabbing process is shown in Figure~\ref{fig:directory}.

The process shown in Figure~\ref{fig:searchActivity} describes how the system interacts with the user during a contact search in the directory. It is important to note that the results returned by the system differ depending on the input given by the user and on the data the user has filled in his profile. The system returns all the contacts from a given industry if the user has no position, and consequently no company, assigned to his profile. Otherwise the system takes the contacts related to the company chosen in the filters. There is one more detail in this process: if the user does not specify a company in the filters but does have a current job position set, the system grabs from the database all the users connected to the company of the user's current job position.

\begin{figure}[h!]
	\center \includegraphics[scale=1]{contactsSearchProcess.png}
	\caption{Company contacts search process}
	\label{fig:searchActivity}
\end{figure}

At the database level, the filtering criteria are optimised as much as possible. Listing~\ref{peopleFilterQueryLst} shows the principle of applying the filters. From the code snippet we can see that the results are fetched from the database only after all the filters have been applied. This principle constrains the result set returned by the database; applying the filters at the database level also improves performance by minimising the execution time of the query. The parent
$doBuildQuery(values)$ method belongs to the $UserFormFilter$ class, which applies the base user filters, such as location and contacts, to the criteria.

The company area model is shown in Figure~\ref{fig:companyModel}. The $Company$ entity holds all the data about the organisation in which users work. Companies are initially introduced by the system administrator and later added by privileged users. Each user is linked to a company through the $UserPosition$ entity, which holds the custom user data related to the job position taken by the user. In addition, each user position is related to the $Project$ entity and to the $UserPositionProjectDescription$ entity in one-to-many relations. All these entities are valid in the context of a given industry, represented in the system by the $Industry$ entity.
\pagebreak

\begin{lstlisting}[language=PHP, caption=People directory people filter implementation, label=peopleFilterQueryLst]
    protected function doBuildQuery(array $values) {
        $query = parent::doBuildQuery($values);
        if ($this->getOption('only_active_users')) {
            $query->addWhere('r.is_active = 1');
        }
        if ($this->getOption('company_id')) {
            $query->innerJoin('r.Positions UserPosition up')->andWhere('up.company_id = ?', $this->getOption('company_id'))->groupBy('r.id');
        }
        if ($this->getOption('university_id')) {
            $query->innerJoin('r.Universities University uni')->andWhere('uni.university_id = ?', $this->getOption('university_id'))->groupBy('r.id');
        }
        return $query;
    }
\end{lstlisting}

\begin{figure}[h!]
	\center \includegraphics[scale=0.75]{companyModel.png}
	\caption{Company area model class diagram}
	\label{fig:companyModel}
\end{figure}

The structure represented in Figure~\ref{fig:companyModel} gives us a strict cascading separation of our domain objects. The hierarchy begins at Industry and then descends through Company, User position, Project and finally Project description. This kind of separation gives us powerful control over the domain aggregates: for example, if we want to fetch from the database all the users currently active in a given Industry, we can easily do it. It also reduces the database query processing time.

\clearpage
\subsection{Unit testing}
Automated tests are one of the greatest advances in programming since object orientation. Particularly conducive to developing web applications, they can guarantee the quality of an application even if releases are numerous.

Any developer with experience developing web applications is well aware of the time it takes to do testing well. Writing test cases, running them, and analysing the results is a tedious job. In addition, the requirements of web applications tend to change constantly, which leads to an ongoing stream of releases and a continuing need for code refactoring. In this context, new errors are likely to regularly crop up.

\subsubsection{Unit and Functional testing}
Unit tests confirm that a unitary code component provides the correct output for a given input. They validate how functions and methods work in every particular case. Unit tests deal with one case at a time, so for instance a single method may need several unit tests if it works differently in certain situations.

Functional tests validate not a simple input-to-output conversion, but a complete feature. For instance, a cache system can only be validated by a functional test, because it involves more than one step: The first time a page is requested, it is rendered; the second time, it is taken from the cache. So functional tests validate a process and require a scenario.

For the most complex interactions, these two types may fall short. Ajax interactions, for instance, require a web browser to execute JavaScript, so automatically testing them requires a special third-party tool. Furthermore, visual effects can only be validated by a human.
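As an illustration of the idea that a single method may need several unit tests, here is a minimal sketch (written with Python's unittest for brevity, not with the PHP test framework used in the project; the helper function and its rules are hypothetical):

```python
import unittest

def activation_status(is_active, has_token):
    """Hypothetical helper: a dormant account with a valid token may be
    activated; an already-active account must be left untouched."""
    if is_active:
        return "already_active"
    return "activated" if has_token else "rejected"

class ActivationTests(unittest.TestCase):
    # One method, three unit tests: each test covers one branch.
    def test_activates_dormant_account_with_token(self):
        self.assertEqual(activation_status(False, True), "activated")

    def test_rejects_dormant_account_without_token(self):
        self.assertEqual(activation_status(False, False), "rejected")

    def test_ignores_already_active_account(self):
        self.assertEqual(activation_status(True, True), "already_active")
```

Each test exercises exactly one input-to-output case, which keeps failures localised to a single broken branch.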

Listing~\ref{unitTestExampleLst} shows an example of a functional test that checks the correct behaviour of activating the user account after registration, together with the handling of alternative scenarios.
\begin{lstlisting}[language=PHP, caption=Functional test for activating user account, label=unitTestExampleLst]
public function testActivate() {

        //get redirect to login module
        $this->securityTestForLogin('dashboard/activate');
        // login user
        $this->loginAs('userTestForLogin@test.com', 'admin');

        // force the user inactive and remember the current status
        $user = UserTable::getByEmail('userTestForLogin@test.com');
        $user->setIsActive('0');
        $user->save();
        $isActive = $user->isActive();

        $this->getTestBrowser()->
            get('dashboard/activate')->
                with('request')->begin()->
                    isParameter('module', 'dashboard')->
                    isParameter('action', 'activate')->
                end()->
                with('response')->begin()->
                    isRedirected()->
                    followRedirect()->
                end()->
                with('request')->begin()->
                    isParameter('module', 'dashboard')->
                    isParameter('action', 'index')->
                end();
        // check that the activation status has changed in the database
        $user = UserTable::getByEmail('userTestForLogin@test.com');
        $this->getTest()->is($isActive, !($user->getIsActive()));
        $this->logout();
    }
\end{lstlisting}

As we partially practised Extreme Programming while developing the project, we used the Test-Driven Development (TDD) methodology, in which the tests are written before the code. Writing tests first helps you focus on the tasks a function should accomplish before actually developing it. It also takes into account the undeniable fact that if you don't write unit tests first, you never write them.
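As a toy illustration of this test-first rhythm in plain PHP (the \texttt{isStrongPassword()} helper is hypothetical and not part of the project):

\begin{lstlisting}[language=PHP, caption=Test-first sketch, label=tddSketchLst]
// Step 1: write the test first; it pins down the expected behaviour
// before a single line of the implementation exists.
function testIsStrongPassword() {
    assert(isStrongPassword('abc') === false);            // too short
    assert(isStrongPassword('longenough1') === true);     // length and digit
    assert(isStrongPassword('longbutnodigit') === false); // missing digit
}

// Step 2: only now write the implementation, until the test passes.
function isStrongPassword($password) {
    return strlen($password) >= 8 && preg_match('/[0-9]/', $password) === 1;
}

testIsStrongPassword();
\end{lstlisting}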

In a unit test, the autoloading feature is not active by default. Each class used in a test must be either defined in the test file or required as an external dependency.
In unit tests, you need to instantiate not only the object you are testing, but also the objects it depends upon. Since unit tests must remain unitary, depending on other real classes means that a single broken class can make many tests fail. In addition, setting up real objects can be expensive, both in lines of code and in execution time.
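One common remedy, sketched below in plain PHP with hypothetical class names, is to hand the object under test a cheap stub instead of the real, expensive dependency, so that a broken collaborator cannot break the unit test:

\begin{lstlisting}[language=PHP, caption=Stubbing an expensive dependency, label=stubSketchLst]
// Contract of an expensive dependency (illustrative names).
interface MailerInterface {
    public function send($to, $subject);
}

// A stub that records calls instead of sending real mail.
class MailerStub implements MailerInterface {
    public $sent = array();
    public function send($to, $subject) {
        $this->sent[] = array($to, $subject);
        return true;
    }
}

// The unit under test depends only on the interface, so the test
// stays unitary and cheap to set up.
class Registration {
    private $mailer;
    public function __construct(MailerInterface $mailer) {
        $this->mailer = $mailer;
    }
    public function register($email) {
        return $this->mailer->send($email, 'Activate your account');
    }
}

$stub = new MailerStub();
$registration = new Registration($stub);
$registration->register('user@test.com');
assert(count($stub->sent) === 1);
assert($stub->sent[0][0] === 'user@test.com');
\end{lstlisting}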

In the project we have developed a wrapper over the testing features that the symfony framework provides by default.
This class contains custom methods that improve the automation of testing; it also initialises the mock data in the database and the connection to it. The wrapper is shown in Listing~\ref{testBaseLst}.
\begin{lstlisting}[language=PHP, caption=Base testing class wrapper, label=testBaseLst]
class TestCase {

    protected $_browser;
    protected $_lime;

    public function __construct() {
        $this->_browser = new sfTestBrowser();
        $this->_lime = $this->_browser->test();
    }
   
    protected function setUp() {
        Doctrine::getTable('Admin')->createQuery('a')->delete()->execute();
        Doctrine::getTable('CareerTrack')->createQuery('a')->delete()->execute();
        // ... the remaining tables are cleared in the same way ...
        Doctrine::getTable('User')->createQuery('a')->delete()->execute();
    }

    protected function tearDown() {
        // per-test clean-up hook; empty by default, called by run() after each test
    }
   
    protected function getTest() {
        return $this->_lime;
    }   
   
    protected function getTestBrowser() {
        return $this->_browser;
    }    
  
    public function run() {
        $methods = get_class_methods($this);
        foreach ($methods as $method) {
            if (substr($method, 0, 4) === 'test') {
                $this->getTest()->info('start test case: "' . substr($method, 4) . '"');
                $this->setUp();
                call_user_func(array($this, $method));
                $this->tearDown();
                $this->getTest()->info('end test case: "' . substr($method, 4) . '"');
            }
        }
    }   
    public function loginAs($email = 'email@email.com', $password = 'admin') {
        $login = array(
            'email' => $email,
            'password' => $password,
        );

        $this->getTestBrowser()->
          post('login/index', array('login' => $login));
        //sfContext::getInstance()->getUser()->signIn($user);
    }    
    public function securityTestFor($login, $url) {
        $this->loginAs($login);
        $this->getTestBrowser()->
            get($url)->
            isForwardedTo('default', 'secure');
        $this->logout();
    }
}
\end{lstlisting}
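A concrete test class then only has to extend the wrapper and declare \texttt{test*} methods; \texttt{run()} discovers and executes them, calling \texttt{setUp()} and \texttt{tearDown()} around each one (a sketch assuming the wrapper above):

\begin{lstlisting}[language=PHP, caption=Using the testing wrapper, label=testBaseUsageLst]
class ActivateTest extends TestCase {
    public function testActivate() {
        // test body as in the functional test listing above
    }
}

$test = new ActivateTest();
$test->run(); // runs every test* method with setUp()/tearDown() around it
\end{lstlisting}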
\clearpage

\section*{Conclusions
	\addcontentsline{toc}{section}{Conclusions}
}

Among the existing Data Mining methods, the most frequently used are the methods for discovering association rules. These methods are commonly applied in retail sales analysis, but they can be applied just as successfully in marketing services (market basket analysis) to determine the common characteristics of customers. The financial services industry also makes frequent use of association rule mining. Analysts use these techniques to examine massive amounts of data in order to build business and risk models for developing investment strategies. Many companies in the financial sector have tested these techniques, which have produced positive results in analysing customer accounts and in identifying the financial services that customers request together.

Discovering the frequent association rules in a large database is a complex problem, because the search space grows exponentially with the number of attributes in the database and with the number of database objects. The most recent approaches are iterative in nature, requiring multiple scans of the database, which is very expensive.

The APRIORI algorithm performs poorly when run on large databases containing a large number of items per transaction. This poor performance is due to the fact that the size of a frequent itemset is large: for example, for a frequent itemset of size N > 4, the APRIORI algorithm needs N passes over the database to discover that itemset, which is expensive in terms of time.
On large databases and with a high support factor (> 20%), the APRIORI algorithm performs better than the PARTITIONING and SAMPLING algorithms, but for a low support factor (< 5%) the performance of APRIORI drops dramatically.

\clearpage

\end{document}
