\documentclass[10pt,english,a4paper]{article}

\usepackage[utf8]{inputenc}
\usepackage{babel,graphicx}
\usepackage{fancybox} 


\usepackage{times}
\usepackage{url}

\usepackage{float}
 
\floatstyle{ruled}
\newfloat{program}{thb}{lop}
\floatname{program}{Code example}



\title{Dynamic Presentation Generator \\ Part II: Static analysis}
\author{Lars Hopland Nestås - lma029@student.uib.no \\ Tobias Rusås Olsen -
tol060@student.uib.no \\Group 7}
\date{\today}

\tolerance = 5000

\hbadness = \tolerance 
\pretolerance = 2000
% many long compound words.


\begin{document}

\begin{titlepage}
\maketitle
\thispagestyle{empty}
\begin{figure}[htb]
  \centering
	\includegraphics[width=1\textwidth]{logo.png}
\end{figure}

\begin{abstract}
We have tested the Dynamic Presentation Generator with static analysis tools and performed a manual review of the system. The findings, such as XSS, DoS attacks, information leaks, modification of protected data, session ID theft, and more, are presented in this report.
\end{abstract}

\end{titlepage}

\pagenumbering{arabic}

\newpage

\section{Introduction}
In this paper, we run Fortify's static analysis tool and the program
FindBugs on the source code of the Dynamic Presentation Generator (DPG),
presented in the introduction paper. In the introduction paper we reported some
weaknesses, such as Cross-Site Scripting (XSS) and information leaks, and we
were interested to see whether the static analysis tools would find the same
issues.

We first go through the results from Fortify, then look at the results from
FindBugs, and finally perform some manual code review before discussing the
findings.


\section{Findings - Fortify}

The main part of this paper concerns the findings reported by Fortify.
The vulnerabilities are divided into three categories: Hot, Warning, and Info.
We pay most attention to the first two categories.

\begin{center}

\begin{tabular}{l r}
\textbf{Group} & \textbf{\# of issues reported} \\ 
\hline Hot & 47 \\ 
Warning &  27 \\ 
Info &  407 \\ 
\hline \hline SUM & 481\\  
\hline
\end{tabular} 
\end{center}

\subsection{Hot}

The vulnerabilities in this category are considered the most dangerous, and
must be resolved before the application is shipped.

\subsubsection{Cross-site scripting}
The Fortify analysis report contained 47 issues marked \textit{Hot}. All 47 issues are in the Cross-site scripting category and appear in the following six files:
\begin{itemize}
\item WEB-INF/jsp/pce/content/editContentForm.jsp
\item WEB-INF/jsp/pm/editPresentationDetailsSuccess.jsp
\item WEB-INF/jsp/pce/content/listContent.jsp
\item WEB-INF/jsp/pce/resources/listResources.jsp
\item WEB-INF/jsp/pm/overridePatternDefaultsOverview.jsp
\item WEB-INF/jsp/pce/content/viewDetails.jsp
\end{itemize}

XSS occurs when the browser executes malicious code.
According to Fortify, this can take place if
\begin{enumerate}
\item Data enters a web application through an untrusted source, most frequently a web request or database.
\item The data is included in dynamic content that is sent to a web user without being validated for malicious code.
\end{enumerate}

The solution to this problem is to validate the data before it is used to
generate output, such as a dynamic web page.
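As an illustration, a minimal HTML output-encoding helper in Java could look like the sketch below. The class and method names are our own, not part of DPG, and a real application should rather use a maintained library escaper.

```java
// Minimal HTML output encoding: replaces the characters that allow
// injected markup to break out of its context. Illustrative sketch only.
public class HtmlEscaper {
    public static String escapeHtml(String input) {
        if (input == null) {
            return "";
        }
        StringBuilder sb = new StringBuilder(input.length());
        for (char c : input.toCharArray()) {
            switch (c) {
                case '<':  sb.append("&lt;");   break;
                case '>':  sb.append("&gt;");   break;
                case '&':  sb.append("&amp;");  break;
                case '"':  sb.append("&quot;"); break;
                case '\'': sb.append("&#39;");  break;
                default:   sb.append(c);
            }
        }
        return sb.toString();
    }
}
```

Encoded this way, a parameter value containing script tags is rendered as plain text instead of being executed by the browser.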

33 of the reported XSS issues are caused by using a URL parameter to generate hyperlinks without validating it. For example, the parameter \textit{pid} (an id for each presentation) is used to generate a ``back'' hyperlink:

\begin{scriptsize}
  \begin{verbatim}
<a href="<c:url value='/pce/content/listContent.htm?pid=${param.pid}' />">Back</a>
\end{verbatim}
\end{scriptsize}

In three cases the pid is displayed directly on the page, for example as a header in overridePatternDefaultsOverview.jsp:
\begin{scriptsize}
\begin{verbatim}
<h1>Override pattern defaults - '${param.pid}'</h1>
\end{verbatim}
\end{scriptsize}
However, it would be very difficult to inject malicious code through the \textit{pid} parameter, and get it executed, in this application. Such an attempt would simply cause an HTTP 500 error, like the one in Figure \ref{fig:error}. 
 
\begin{figure}[htb]
  \centering
	\includegraphics[width=.9\textwidth]{error.png}
	\caption{Typical HTTP 500 error in DPG caused by a thrown exception. This exception is thrown because a presentation with this id does not exist.} 
	\label{fig:error}
\end{figure}
 

In ``Part I - Proposal'', we pointed out that the parameter \textit{statusMessage}, which is used in listContent.jsp and listResources.jsp, could be used for XSS. Fortify also reported this. The code causing this problem is shown in Code example 1. It is a typical example of using URL parameters to generate dynamic content without validating the input. 
\begin{program}
\begin{scriptsize}
  \begin{verbatim}
  
<% if (request.getParameter("statusMessage") != null) { %>
        <center>
          <span class="statusMessage">
          	${param.statusMessage}
          </span>
        </center>
<% } %>
\end{verbatim}
\end{scriptsize}
  \caption{An XSS attack is possible through the \textit{statusMessage} parameter.}
\end{program}


\subsection{Warning}

Vulnerabilities in the Warning category are important to handle, but not as
critical as Hot issues. Typical items covered by this category are system
information leaks and bad practices.

\subsubsection{System Information Leak}
Fortify reports 10 occurrences of HTML comments in 5 different files of the DPG project. In general, HTML comments may help an attacker learn about the system and form a plan of attack.
\begin{scriptsize}
\begin{verbatim}
<!-- Ignoring file ${resource.name}  -->
<!-- Show links to page only if page is enabled -->
\end{verbatim}
\end{scriptsize}
In our opinion there are no serious information leaks caused by HTML comments in the DPG project, but HTML comments should still be replaced with JSP comments (which are not transmitted to the user).

Two further examples of system information leaks are in the PluginManager
class. On line 271 a try block is executed, and on line 277 the catch block
catches exceptions of the very general class ``Exception''. It then simply
prints the stack trace of the exception, which can reveal serious information
about the system. The same thing happens later in the same class, on line 293.

\subsubsection{Missing Check for Null parameter}

In the class Resource.java, the compareTo method does not check whether the
input parameter is null before comparing the objects. This is bad practice.
The solution is to check for null and throw an exception before comparing.
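A sketch of the suggested fix, using a simplified Resource class of our own (DPG's actual class has more fields):

```java
// Sketch: compareTo checks its argument before comparing. Following the
// Comparable contract, a null argument raises a NullPointerException.
public class Resource implements Comparable<Resource> {
    private final String name;

    public Resource(String name) {
        this.name = name;
    }

    @Override
    public int compareTo(Resource other) {
        if (other == null) {
            throw new NullPointerException("cannot compare to null Resource");
        }
        return this.name.compareTo(other.name);
    }
}
```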

\subsubsection{J2EE Bad Practices}

J2EE applications can make use of multiple Java Virtual Machines in order to
increase reliability and performance. To make these JVMs appear as one
application to the end user, the application shares objects from the
HttpSession class among all the machines. For this to work, the objects put
as attributes in the session scope must implement the Serializable interface.

In this application, the class ``Form'' is put into the session on line 86 of
EditContentFormController.java, and this class is not serializable.
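A minimal sketch of the fix is shown below; the field is our own illustration, since we have not reproduced DPG's actual Form class:

```java
import java.io.Serializable;

// Sketch: a session-scoped bean implements the Serializable interface so
// the container can replicate it across JVMs. serialVersionUID pins the
// serialized form to a version of the class.
public class Form implements Serializable {
    private static final long serialVersionUID = 1L;

    private String title; // illustrative field, not from DPG

    public void setTitle(String title) {
        this.title = title;
    }

    public String getTitle() {
        return title;
    }
}
```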

\subsubsection{Log Forging}

Log forging occurs when the application writes untrusted user data to the log
file. By supplying crafted data, a malicious user can write his own entries
into the log file. This is how it is typically exploited:

The user sends in a presentation id, which is a String. In normal use, this
would be written to the log file:

\begin{verbatim}
INFO: Presentation ID=5 not found.
\end{verbatim}

If the user supplies a different string, he can forge the log. If we supply the
string ``5 not found. $\backslash$n INFO: Logout user=Badguy.'', it appears as
if Badguy has been logged out. The log file will then look like this:

\begin{verbatim}
INFO: Presentation ID=5 not found.
INFO: Logout user=Badguy.
\end{verbatim}

In the application, we find this example in the method ``renderView'' in
PresentationContentEditorServiceImpl.java on lines 215, 223 and 231.
\begin{program}
\begin{scriptsize}
  \begin{verbatim}
209  public String renderView(String presentationId, String pageId, String viewId) {
210
211        // Locate presentation by id
212        PresentationSpecification presentationSpecification 
                   = presentationSpecificationDao.getById(presentationId);
213        if (presentationSpecification == null) {
214            String errorMessage = "No presentation found by presentation id '" 
                   + presentationId + "'";
215            logger.error(errorMessage);
216            throw new PresentationContentEditorException(errorMessage);
[...]
        }
\end{verbatim}
\end{scriptsize}
  \caption{Log forging is possible through the \textit{presentationId} parameter in the renderView method of PresentationContentEditorServiceImpl.java.}
\end{program}

The best way to prevent log forging is to whitelist the allowed characters in
the log. The most important character to handle is $\backslash$n, which
usually represents a line break. Even though an incorrect presentationId may
generate an error before the vulnerable method is invoked, it should still be
validated, as other parts of the system might change over time, making the
vulnerability reachable.
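A minimal sketch of such filtering in Java (the class and method names are ours, and a real whitelist could be stricter than just removing line breaks):

```java
// Sketch: strip carriage returns and line feeds from user-supplied data
// before logging it, so a crafted value cannot start a new log line.
public class LogSanitizer {
    public static String sanitize(String userInput) {
        if (userInput == null) {
            return "";
        }
        return userInput.replaceAll("[\\r\\n]", "_");
    }
}
```

With this in place, the forged entry from the example above would appear on a single line and be easy to spot in the log.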

A subcategory of log forging is when the log is used for debugging purposes.
Such a log can be used to follow the flow of the system, and is a good tool for
programmers. The problem occurs when this debugging log is, for some reason,
not removed before the application is shipped. Log forging vulnerabilities
should not be excused just because the code was not meant to be production
ready.

Fortify reports two occurrences of this vulnerability: one in
AbstractDpgAuthenticationFilter.java on line 49, and one in XmlTransformer.java
on line 78.

\subsubsection{Trust Boundary Violation}
In the file \_header.jsp there is a request for the URL parameter pid. The pid string is immediately set on pageContext without any validation. 

\begin{program}
\begin{scriptsize}
  \begin{verbatim}
 <%
	String presentationId = request.getParameter("pid");
[...]
	pageContext.setAttribute("pid", presentationId);
 %>
\end{verbatim}
\end{scriptsize}
  \caption{The \textit{pid} parameter is set on pageContext without validation (\_header.jsp).}
\end{program}

According to Fortify, commingling trusted and untrusted data in the same data
structure encourages programmers to mistakenly trust unvalidated data, which
could lead to vulnerabilities such as XSS.
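One way to enforce the trust boundary is to validate the parameter before it is stored in a trusted scope. A minimal sketch in Java, under our own assumption that presentation ids are purely numeric:

```java
// Sketch: reject any pid that is not purely numeric before it crosses
// the trust boundary into pageContext. The numeric-id assumption is ours.
public class PidValidator {
    public static String requireNumericPid(String pid) {
        if (pid == null || !pid.matches("\\d+")) {
            throw new IllegalArgumentException("invalid presentation id");
        }
        return pid;
    }
}
```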
\subsubsection{Unreleased Resource}
The analysis tool reveals three examples of unreleased resources in the file
RepositoryInitializer.java.

An unreleased resource occurs when you fail to close a resource properly. An
attacker can sometimes exploit such a resource leak and cause a
Denial-of-Service attack by draining the resource pool.

In the code we see that a FileInputStream is initialized on lines 258, 276 and
298.
\begin{program}
\begin{scriptsize}
  \begin{verbatim}
249   // Pattern spec
250        InputStream patternSpecStream = new FileInputStream(patternFolderPath
251                + SEP + "pattern.xml");
252        patternSpecificationDao.importPattern(name, patternSpecStream);
253        patternSpecStream.close();
254
255        logger.debug("Imported pattern spec for '" + name + "'");
256
257        // Plugin pattern
258        InputStream pluginSpecStream = new FileInputStream(patternFolderPath
259                + SEP + "pluginPattern.xml");
260        pluginSpecificationDao.importPluginSpecification(name, pluginSpecStream);
261        patternSpecStream.close();
[...]
276        InputStream javascriptFileStream = new FileInputStream(
277                            stylesheetFolderPath + SEP + fileName
278                    );
[...]
298        InputStream stylesheetFileStream = new FileInputStream(
299                            styleFolderPath + SEP + fileName
300                    );                    
\end{verbatim}
\end{scriptsize}
  \caption{Unreleased streams in RepositoryInitializer.java.}
\end{program}

This stream is never closed; the code on lines 250-253 has apparently been
copied to lines 258-261 and changed until the compiler stopped complaining,
which is why only lines 258-260 were updated. The code on line 261 is a bug:
that stream was already closed on line 253. The correct call would be
pluginSpecStream.close(), not patternSpecStream.close().

On lines 276 and 298 the streams named ``javascriptFileStream'' and
``stylesheetFileStream'' are opened without ever being closed. This is a
typical example of a potential resource leak.
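The standard remedy is to close the stream in a finally block, so it is released even if an exception is thrown along the way (in Java 7 and later, a try-with-resources block achieves the same more concisely). A sketch with our own class name and an illustrative file path:

```java
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;

// Sketch: the finally block guarantees that the stream is closed even
// if reading throws an exception.
public class StreamReader {
    public static int readFirstByte(String path) throws IOException {
        InputStream in = new FileInputStream(path);
        try {
            return in.read();
        } finally {
            in.close(); // always runs, even on exception
        }
    }
}
```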
\subsection{Info}

This category gives us information about things that are not quite the way
they should be, such as poor error handling, dead/unused code and some bad
practices.

\subsubsection{Poor Error Handling}
We can divide this section into three parts: Overly broad catch, overly broad
throws, and empty catch blocks.

1: Overly broad catch.
There are multiple classes catching the very general exception class
``Exception''. This is bad because it means that every exception thrown in the
block is handled the same way, and you also lose the information carried by
more specific exception classes. Even though catching general exceptions is
easy and results in less code, it defeats the purpose of typed exceptions in
Java. Fortify reports 14 such issues.

2: Overly broad throws.
This is much the same as \#1, but instead of catching a general exception, the
method throws the general ``Exception'', making it harder to perform the right
error handling. This bad practice occurs 316 times in the program, which
illustrates that this is not something the programmers have paid attention to.

3: Empty catch block.
Fortify reports 31 occurrences of empty catch blocks. With an empty catch
block, the programmer has assumed one of two things: a) this method can never
fail, or b) it does not matter if this method fails. This so-called ostrich
maneuver is bad because if something goes wrong, you will have a hard time
noticing, since execution of the program continues as normal. The least you
should do is log the event, which can help you find out what went wrong.
Another approach is to throw a RuntimeException or an Error, so you guarantee
that the error does not go unnoticed.
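A sketch of the least-effort alternative, logging the failure and falling back to a default; the class and the parse example are our own, not from DPG:

```java
// Sketch: instead of an empty catch block, record the failure and
// return a sensible default.
public class CatchExamples {
    public static String parseOrDefault(String value, String fallback) {
        try {
            return Integer.valueOf(value).toString();
        } catch (NumberFormatException e) {
            // At minimum, note what went wrong instead of swallowing it.
            System.err.println("could not parse '" + value + "': " + e.getMessage());
            return fallback;
        }
    }
}
```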

\subsubsection{Other}

In the classes MasterPageTemplate.java and PageTemplate.java, only equals and
not hashCode has been implemented. If two objects are equal, they should have
the same hash code, or this will likely cause problems if the objects are
used in a hash-based collection such as a Map or a Set.
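The contract can be sketched with an illustrative class of our own (DPG's template classes are more involved):

```java
// Sketch: equals and hashCode are implemented together, so two equal
// objects always land in the same hash bucket.
public class PageKey {
    private final String id;

    public PageKey(String id) {
        this.id = id;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) {
            return true;
        }
        if (!(o instanceof PageKey)) {
            return false;
        }
        return id.equals(((PageKey) o).id);
    }

    @Override
    public int hashCode() {
        return id.hashCode();
    }
}
```

Without the hashCode override, a HashSet holding one of two equal PageKey objects would typically fail to report that it contains the other.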

Logger instances should be declared static final, because you want to use the
same logger throughout the system. Fortify reports 29 logger declarations that
are not static final.

There are two occurrences of the method Math.random() being used to generate
ids for unique entities. This is a problem because the method is a pseudorandom
number generator, which is not truly random: its output is determined by a
small set of initial values. It is therefore not unlikely that the ``unique''
presentation id will be generated twice, resulting in the following error:

\begin{figure}[htb]
  \centering
	\includegraphics[width=1\textwidth]{iderror.png}
	\caption{Error message shown if the generated id for a new presentation already exists.} 
\end{figure}
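A common alternative, sketched below with our own class name, is java.util.UUID, whose randomly generated ids make a collision practically impossible:

```java
import java.util.UUID;

// Sketch: UUID.randomUUID() draws 122 random bits, so the chance of
// generating the same id twice is negligible in practice.
public class IdGenerator {
    public static String newPresentationId() {
        return UUID.randomUUID().toString();
    }
}
```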

\section{Findings - FindBugs}
FindBugs returned a report containing 237 unrated issues divided into six main categories, as shown in the table below. 

\begin{center}

\begin{tabular}{l r}
\textbf{Group} & \textbf{\# of bugs reported} \\ 
\hline Bad practice & 13 \\ 
Correctness &  1 \\ 
Internationalization &  1 \\ 
Malicious code vulnerability &  5 \\ 
Performance & 14 \\ 
Dodgy & 203 \\
\hline \hline SUM & 237\\  
\hline
\end{tabular} 
\end{center}

We discovered that most of the issues reported by FindBugs were either already
covered by Fortify, or not actual problems. We still believe FindBugs has the
potential to find bugs that Fortify does not, since it seems to focus more on
bad practice and less on security.

\section{Findings - Manual code review}
\subsection{System Information Leak}
As we mentioned in the previous section about XSS, changing the \textit{pid} URL parameter could cause an HTTP 500 error message. In the DPG system an HTTP 500 error message causes a major system information leak. As we can see in Figure \ref{fig:error}, information about the exception is published, together with information about the package structure, the frameworks used and the Apache Tomcat version number. 

An exception thrown by the login form also exposed parts of the SQL query associated with the login procedure.

\subsection{Input validation}
The WYSIWYG editor used in DPG2 is a third-party solution named TinyMCE, an open-source JavaScript-based WYSIWYG editor. In Part I: Proposal, we discovered a possibility for stored cross-site scripting. Using the WYSIWYG editor, we were able to store JavaScript code like 
\begin{scriptsize}
\begin{verbatim}
<script>alert('Some code')</script>
\end{verbatim}
\end{scriptsize} 
into the XML content files. It is possible to avoid using this editor by switching off JavaScript in the browser; the editor then becomes a simple HTML form. It is possible to enter JavaScript code directly into the TinyMCE editor, but you have better control in the simple HTML form you get when JavaScript is turned off.

Input validation should therefore not be an integrated part of TinyMCE, but done separately.

\subsubsection{Stealing sessions}
Because of this poor input validation, it is possible for an attacker to take over another user's session. All the attacker must do is get hold of the user's \textit{session ID}. The DPG session ID is stored in a cookie. 

Step one for the attacker is to set up his own server to receive the victim's cookie data. As an example we set up our own server, \url{http://lysfontene.com/jafu/}, and uploaded a PHP file with the code in Code example \ref{code:cookielog}.

\begin{program}
\begin{scriptsize}  
\begin{verbatim}
  
<?php
    $cookie = $_GET["cookie"];
    $file = fopen('cookielog.txt', 'a');
    fwrite($file, date(DATE_RFC822) . " - ");
    fwrite($file, $cookie . "\n\n");
    fclose($file);
?> 
\end{verbatim}
\end{scriptsize}
  \caption{This php file receives data from the victim's cookie, and stores it in a log file \textit{cookielog.txt}.}
  \label{code:cookielog}
\end{program}

The next step is to inject some code into the DPG system. As you can see in Figure \ref{fig:cookie1}, the JavaScript code inside the red box sends the victim's cookie data to the attacker's server. Each time a page with this code is loaded, both inside the presentation content editor and the presentation viewer, the data is sent. The attacker can then put the session id into his own cookie and bypass the login procedure.

\begin{figure}[htb]
  \centering
	\includegraphics[width=.9\textwidth]{cookie1.png} 
	\caption{By switching off JavaScript in the browser it gets easier to put in malicious code.}
	\label{fig:cookie1}
\end{figure}

Since we are disguising the malicious code as an image file, the code will not show up in the presentation viewer. The user will not notice that the cookie data is sent to the attacker, as we can see in Figure \ref{fig:webpage}. 

\begin{figure}[htb]
  \centering
	\includegraphics[width=.8\textwidth]{webpage.png}
	\caption{The malicious code is hidden in the presentation viewer.}
	\label{fig:webpage} 
\end{figure}

The malicious JavaScript code is also hidden inside the TinyMCE editor (Figure \ref{fig:editor}), because of the input validation features in the editor.

\begin{figure}[htb]
  \centering
	\includegraphics[width=.8\textwidth]{editor.png}
	\caption{The TinyMCE editor hides the malicious code, but does not remove it.} 
	\label{fig:editor}
\end{figure}

This attack could be used by a malicious publisher to take over an administrator's session.

\subsubsection{Override pattern defaults}
The override pattern tool makes it possible to change the layout of the presentation in many ways. It is a simple but powerful tool: simple in that the user/administrator only needs some knowledge of basic web programming, and powerful in that it gives an experienced user many possibilities for configuring the layout of the presentation. 

All changes to the templates, master transformations, view transformations and stylesheets are done through simple HTML forms. On the page for overriding templates it says:
\begin{quote}\textit{Warning:  It is not recommended to do major modifications directly in the form below, as it does not have syntax highlighting or any form for \textbf{validation}. Copying the contents into a proper editor, modify the file and paste back in again is therefore the recommended approach.}
\begin{flushright}
\textit{--- from the page overrideMasterTemplate.htm} % --- produces a dash.
\end{flushright}
\end{quote}

We might think that the developers had ``code correctness'' in mind when they wrote \textit{any form for validation} (such as forgetting to close a tag). But again, the lack of validation is a huge security risk. It is for example possible to reuse the ``cookie stealing attack'' from above via the override pattern functionality, as illustrated in Figure \ref{fig:override}. 

\begin{figure}[htb]
  \centering
	\includegraphics[width=.9\textwidth]{masterover.png}
	\caption{Malicious code which steals cookie data is included in the master template.} 
	\label{fig:override}
\end{figure}

We therefore think that this feature should be reevaluated, as it is a great
security risk. Possible solutions are to remove it completely, or to change
the way the editable information is presented. We believe that the feature has
its place, as it gives the opportunity to make the site more flexible
according to special requirements. The feature has two opposing forces to
consider: if you make it more secure, you restrict the possibilities it
provides, which defeats its purpose. In part three, we will go into more depth
on possible solutions to this issue.

\subsection{Resources}
The resources tool lets an administrator or a publisher upload content, create folders and organize the content in different folders. Content can be pictures, text documents, music, etc. There is no filter on what content can be uploaded to the server; it is therefore possible to upload malicious code, for example as JavaScript files. 

\subsubsection{Denial of Service attack}
It is possible to use the resources tool to perform a Denial-of-Service attack (DoS attack) on the DPG lobby. This can easily be done by creating a new folder with a name such as ``../../DoSAttack''. The folder ``DoSAttack'' will then be created in the folder containing the other presentations.  

\begin{figure}[htb]
  \centering
	\includegraphics[width=.9\textwidth]{doserror.png}
	\caption{After the DoS attack, this exception is thrown.} 
	\label{fig:doserror}
\end{figure}

When the page \textit{presentations.htm} is loaded, an exception (Figure \ref{fig:doserror}) is now thrown. 
\textit{presentations.htm} is the main file in the DPG lobby. The reason for the exception is code in the class \textit{JcrPresentationSpecificationDao}, which loads the available presentations. We have not had time to go through this code in detail, but as we can read from the error message, it seems that the method listing all presentations iterates through the folder structure searching for a folder named \textit{specifications}. Since the folder ``DoSAttack'' is empty, an exception is thrown. 
 
\subsubsection{Modifying and replacing content}
The ``trick'' of adding ``../../'' when creating a new folder can also be used for other malicious purposes. Let us say that a publisher has access to the presentation with id 1494438 and wants to change a picture in the presentation with id 2254284, which he cannot access. The first thing he has to do is upload the new picture to his own presentation, using the same filename as the picture he wants to replace. The next step is to delete the picture in the 2254284 presentation. He can do this by copying the URL for deleting the picture in his own presentation, and adding ``../../2254284/resources'' to the path parameter before executing the URL. Finally, all he has to do is move his picture to ``../../2254284/resources''.
This attack can also be used to delete or modify other critical files on the server, such as some of the JavaScript files loaded via the header file.  
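A sketch of the kind of server-side check that would stop both this attack and the DoS attack above: reject any folder or file name containing a path separator or a parent-directory reference before using it to build a path (class and method names are ours):

```java
// Sketch: a name is safe only if it cannot navigate out of the
// presentation's own folder.
public class PathChecker {
    public static boolean isSafeName(String name) {
        if (name == null || name.length() == 0) {
            return false;
        }
        return !name.contains("..")
                && !name.contains("/")
                && !name.contains("\\");
    }
}
```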


\section{Conclusion}
We have found several weaknesses, and we consider the XSS vulnerabilities and the vulnerabilities in the ``resources tool'' to be the
main concerns. It is important to note that most of the vulnerabilities were found not by the static analysis tools, but by manual penetration testing. Most of them would have been much more difficult to find if error handling had been done properly.  

The four main things the developers of the DPG should focus on are:
\begin{itemize}
\item Input validation should be done before data is stored, and outside the JavaScript
editor. Stored data from XML files or databases should also be validated before
it is presented to the user. If validation is not taken seriously, it will become
a huge problem as more functionality is added to the system.
\item The ``resource tool'' must be fixed. 
\item A redesign of the ``Override pattern defaults'' part is necessary. 
\item Stop the information leak caused by poor exception handling. 
\end{itemize}
The other weaknesses are of less importance, but we still
think it is important to have them addressed. We consider the analysis a
success, since it revealed several issues to address, and we will take a
closer look at the solutions in part three. 



\end{document}