\documentclass[a4paper, 11pt]{article}

\usepackage[left=3cm,top=3cm,bottom=3cm,right=3cm]{geometry}


\title{Rascal Visualization - Series 2}
\author{\textbf{Arie van der Veek \& Ben Kwint}\\ University of Amsterdam}

\begin{document}

\maketitle

\section {Goal of visualization}
With our visualization we tried to achieve the following goals:
\begin{enumerate}
  \item \textbf{Easy navigation through the Java structure}
  \item \textbf{Pinpoint possible hot spots of the application (High complexity, God Classes)}
\end{enumerate}
Java is package based, so easy navigation through the Java structure means navigating through the package structure. The hot spots are determined by calculating code metrics on the source code.
The visualization gives information at package level, namely the number of sub-packages, the number of classes, the total lines of code, the total cyclomatic complexity, and the cyclomatic complexity per 100 non-comment source lines of code (NCSLOC).
This can be used to get hints on whether a package is good or not; for packages there is no single metric that indicates this. For classes we use the module size based on NCSLOC, and for methods the McCabe cyclomatic complexity.
\\\\
For each package a top 10 is shown of the classes with the highest module size (potential God Classes) and of the methods with the highest McCabe complexity.
These lists indicate hot spots in the application.
The user can follow the structure of the application downwards through the packages; the deeper you go, the more focused you get on particular problems, the hot spots.
Each class and method is given a color indicating a good or bad score.
A scale of 1 to 4 is used, where the highest value is the worst score: red is 4, amber is 3, yellow is 2, and green is 1.
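The top-10 lists described above amount to sorting by a metric and truncating. A minimal Java sketch, where the \texttt{Scored} pair type and method names are illustrative assumptions rather than the tool's actual Rascal code:

```java
import java.util.*;

// Hypothetical (name, metric) pair: a class scored by its NCSLOC, or a
// method scored by its McCabe cyclomatic complexity.
record Scored(String name, int value) {}

class TopTen {
    // Sort descending by the metric and keep at most the first ten entries.
    static List<Scored> topTen(List<Scored> items) {
        return items.stream()
                .sorted(Comparator.comparingInt(Scored::value).reversed())
                .limit(10)
                .toList();
    }
}
```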
\\\\
We did not find any literature with concrete numbers on what is and is not a good module size in Java.
For example, in ``Code Complete'' \cite{mcconnell2009code} Steve McConnell states that research indicates that higher numbers of routines per class are associated with higher fault rates.
We therefore base the color coding on programming experience and intuition.
A class of up to 250 NCSLOC is rated as a good size, because such code is likely to be comprehensible and to have low complexity (though this is no guarantee).
Up to 400 lines it gets less comprehensible and is coded as yellow.
Up to 700 lines it is even less comprehensible and is coded as amber.
A class beyond that, growing toward 1k of executable code (NCSLOC) and more, is too large and is indicated as red.
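The size scale can be expressed as a small lookup. Treating everything above the amber bound of 700 NCSLOC as red is our reading of the scale, and the method name is illustrative:

```java
// Maps a class's NCSLOC to the four-point color scale described in the text.
// Assumption: anything above the 700-line amber bound counts as red.
class SizeRating {
    static String colorForSize(int ncsloc) {
        if (ncsloc <= 250) return "green";  // score 1: good size
        if (ncsloc <= 400) return "yellow"; // score 2: less comprehensible
        if (ncsloc <= 700) return "amber";  // score 3: even less comprehensible
        return "red";                       // score 4: too large
    }
}
```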
\\\\
For the coding of the McCabe cyclomatic complexity we use the risk evaluation of cyclomatic complexity described by Heitlager et al. in the SIG model \cite{heitlager2007practical}. This translates to the defined colors as follows: 1--10 is green, indicating low risk; 11--20 is yellow, indicating moderate risk; 21--50 is amber, indicating high risk; and above 50 is red, indicating very high risk.
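The same kind of mapping for the SIG risk bands, again as an illustrative sketch:

```java
// Maps a method's McCabe cyclomatic complexity to the SIG risk bands
// (Heitlager et al.): 1-10 low, 11-20 moderate, 21-50 high, >50 very high.
class CcRating {
    static String colorForCc(int cc) {
        if (cc <= 10) return "green";  // low risk
        if (cc <= 20) return "yellow"; // moderate risk
        if (cc <= 50) return "amber";  // high risk
        return "red";                  // very high risk
    }
}
```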
\\\\
By browsing the package structure and seeing the packages and classes, the user can get a feeling for the structure of the application.
Commonly the structure of a Java application is reflected in its package structure.
For example, each layer of an application is put in a different package: in a simple web application the package structure has separate packages for the domain model, utility classes, GUI components, and the persistence layer.
It is hard to give a score to the package structure or layering of a Java program in the way software metrics such as the SIG model score code, so this is not implemented.
The user does, however, get insight into the structure and can use his own intuition to judge it based on size and complexity.
\\\\
The choice has been made to also include the cyclomatic complexity per 100 NCSLOC, to have a length-adjusted complexity metric as proposed in \cite{gill1991cyclomatic}.
This enables the user to compare the cyclomatic complexity regardless of package size.
We changed the suggested metric to count cyclomatic complexity per 100 NCSLOC instead of per 1000 NCSLOC, so the number remains readable for smaller packages.
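The length-adjusted metric is simply the total complexity scaled to 100 lines. A sketch, where guarding against empty packages is our own assumption:

```java
// Length-adjusted complexity: total cyclomatic complexity per 100
// non-comment source lines, usable to compare packages of different sizes.
class CcDensity {
    static double ccPer100(int totalCc, int ncsloc) {
        if (ncsloc == 0) return 0.0; // assumption: score empty packages as 0
        return 100.0 * totalCc / ncsloc;
    }
}
```

For example, a package with a total complexity of 400 spread over 1000 NCSLOC scores 40.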
\\\\
The visualization is aimed at programs written in Java; for other languages a correct view is not guaranteed.


\section {Requirements}
To meet the goal of our visualization we set the following requirements for it.
\\\\
Aesthetics:
\begin{enumerate}
\item \textbf{Show only what is relevant:} The user is shown only the objects that are of interest at that moment. This helps prevent the user from being overloaded with information.
\item \textbf{Use color coding to indicate good or bad score on metrics:} The user should be able to easily see how an item scores on a metric. Color coding shall be used to indicate this. The user can use the color coding to form an opinion on the quality and easily spot hot spots. For example, a package with many red classes and methods indicates a hot spot in the code.
\item \textbf{Show a structure that is ``natural'' to Java users:} The user should not be confronted with a completely different view of the application that he cannot relate to or translate back to the real application. We want to keep the ``natural'' Java structure intact by basing the visualization on packages.
\end{enumerate}

Usability:
\begin{enumerate}
\item \textbf{The visualization must react fast if the user clicks on an item:} If the user clicks on an item, it must not take more than 10 seconds to view the next item. If the user cannot browse fairly quickly through the structure, exploring the model becomes tedious. This results in the design decision that a tree containing the information to build the visualization is built beforehand. The tree contains information per node, for example the cyclomatic complexity per method. Data is aggregated per view by visiting the nodes of the part of the tree the user is currently working with.
\item \textbf{It must be possible to browse forwards and backwards:} To be able to effectively browse and explore the application, the user should be able to go back to the parent package.
\item \textbf{Hovering over an item shows details:} To implement the aesthetics requirement ``Show only what is relevant'', the user should get detailed information when hovering over an item of interest. For example, if the user hovers over a package, its NCSLOC or number of sub-packages should be shown.
\end{enumerate}
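The precomputed tree from the first requirement can be sketched as follows. The node layout and names are illustrative assumptions, since the actual tool is written in Rascal:

```java
import java.util.*;

// Each node caches the metrics of its own package; aggregates for the
// current view are obtained by one walk over the subtree, so no source
// code has to be re-analyzed while the user browses.
class PackageNode {
    final String name;
    final int ncsloc;      // non-comment source lines directly in this package
    final int cyclomatic;  // summed complexity of the methods in this package
    final List<PackageNode> subPackages = new ArrayList<>();

    PackageNode(String name, int ncsloc, int cyclomatic) {
        this.name = name;
        this.ncsloc = ncsloc;
        this.cyclomatic = cyclomatic;
    }

    // Total NCSLOC of this package and all of its sub-packages.
    int totalNcsloc() {
        int total = ncsloc;
        for (PackageNode p : subPackages) total += p.totalNcsloc();
        return total;
    }

    // Total cyclomatic complexity over the subtree.
    int totalCc() {
        int total = cyclomatic;
        for (PackageNode p : subPackages) total += p.totalCc();
        return total;
    }
}
```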


\section {Evaluation of Visualization}
The aesthetics requirements are all implemented. The most effective of them is the color coding of a good or bad score on the metrics. For example, if a package is shown with 3 red classes regarding unit size and 4 red methods regarding cyclomatic complexity, it is a good indication that the package should be examined. One thing that could help in identifying classes with multiple high-complexity methods is showing the class name next to each method in the top 10 CC list. In hindsight this should have been implemented.
\\\\
All of the usability requirements have been implemented. What really worked well for us was building the tree beforehand. The visualization reacted very fast and the user experience was really good. If this had been slow, we do not think the visualization would have been really usable. The speed at which a user can browse through the model is a main contributor to the speed at which the user finds the hot spots. The downside is that creating the tree can take a long while: for HSQLDB, for example, it takes 20 minutes.
\\\\
Even though there is no concrete metric for package structure, the visualization is capable of showing problems at package level. For example, if a visualization is made of the SmallSQL program, one can easily see that there is only one main package containing the majority of the classes, about 160+. This is an indication that the application is not well structured. The downside of not having metrics at package level is that how good or bad a package is cannot be quantified; the visualization can only give indications.
\\\\
Adding the CC per 100 NCSLOC also worked well for comparing the complexity of packages. For example, in HSQLDB it was quickly noticed that the util package (40 CC per 100 NCSLOC) has a higher complexity than the test package (17 CC per 100 NCSLOC). This gives a hint to examine the util package; the top 10 CC list for util indeed shows mainly red methods.
\\\\
As a final test we imported a large Java application, Jakarta JMeter 2.5.1 (see the Apache Foundation site), and ran the visualization on it. JMeter has more classes and a more extensive package structure than HSQLDB and SmallSQL. The visualization still ran, and the reaction time when a user clicks on an item is still very good. It takes a lot longer to start up, though. The main difference we spotted was that the cyclomatic complexity peaks were a lot lower than in HSQLDB and SmallSQL.
\bibliographystyle{alpha}
\bibliography{rationalvis}

\end{document}