%\documentclass[12pt]{report}
%\documentclass[12pt, twoside]{report}
%\documentclass[12pt, draft]{report}
\documentclass[12pt, inner=.5in]{book}
\usepackage{setspace}
%\usepackage{geometry}                % See geometry.pdf to learn the layout options. There are lots.
\usepackage[bindingoffset=.75in]{geometry}
\geometry{letterpaper}                   % ... or a4paper or a5paper or ... 
%\geometry{landscape}                % Activate for rotated page geometry
%\usepackage[parfill]{parskip}    % Activate to begin paragraphs with an empty line rather than an indent
\usepackage{graphicx}
\usepackage{amssymb}
\usepackage{epstopdf}
\DeclareGraphicsRule{.tif}{png}{.png}{`convert #1 `dirname #1`/`basename #1 .tif`.png}

\usepackage[table]{xcolor}

\usepackage{fancyhdr}
\setlength{\headheight}{15pt}
\pagestyle{fancy}
\fancyhead{}
\renewcommand{\chaptermark}[1]{\markboth{\chaptername~\thechapter.\ #1}{}}
\renewcommand{\sectionmark}[1]{\markright{\thesection.\ #1}}
\renewcommand{\headrulewidth}{0pt}
\fancyhead[RO]{\small\nouppercase{\leftmark}}
\fancyhead[LE]{\small\nouppercase{\rightmark}}

\pagestyle{plain}

\usepackage[Lenny]{fncychap}
\makeatletter
\renewcommand*{\@makechapterhead}[1]{%
\vspace*{-50\p@}%
{\parindent \z@ \raggedright \normalfont
  \ifnum \c@secnumdepth >\m@ne
    \if@mainmatter%%%%% Fix for frontmatter, mainmatter, and backmatter 040920
      \DOCH
    \fi
  \fi       \vskip -30\p@
     \interlinepenalty\@M
  \if@mainmatter%%%%% Fix for frontmatter, mainmatter, and backmatter 060424
    \DOTI{\singlespacing#1}%
  \else%

    \DOTIS{\singlespacing#1}%
  \fi
  \vskip -20\p@
}}
\makeatletter
\renewcommand*{\@makeschapterhead}[1]{%
\vspace*{0\p@}%
{\parindent \z@ \raggedright
  \normalfont
  \interlinepenalty\@M
  \DOTIS{#1}
  \vskip 0\p@
}}

\makeatletter
\def\cleardoublepage{\clearpage\if@twoside \ifodd\c@page\else
    \hbox{}
    \thispagestyle{empty}
    \newpage
    \if@twocolumn\hbox{}\newpage\fi\fi\fi}
\makeatother \clearpage{\pagestyle{plain}\cleardoublepage}

%todonotes
\usepackage{todonotes}
\newcommand{\todoin}{\todo[inline]}

%code listings%%%
\usepackage{listings}
%\lstset{language=C}
\lstset{
	basicstyle=\ttfamily,
	tabsize=2,
	frame=single,
	breaklines=true,
	breakatwhitespace=true,
	breakindent=25pt,
	defaultdialect=[ANSI]C,
	showstringspaces=false
 }
%\def\lstlistlistingname{Code Excerpts}
\def\lstlistingname{Figure}
\usepackage[scaled=.8]{luximono}
   \usepackage[T1]{fontenc}
   \usepackage{textcomp}
%%%

%project name
\usepackage{xspace}
\newcommand{\programName}{CritTer\xspace}

\usepackage{booktabs} %pretty tables
\usepackage[pdfauthor={Erin Rosenbaum},pagebackref=true,pdftex, hyperfootnotes=false]{hyperref}
\def\chapterautorefname{Chapter}
\def\sectionautorefname{Section}
%pdftitle={\projectname}

\usepackage[bottom]{footmisc}

\usepackage{color}

\usepackage{versions}
\includeversion{PRINT}
\excludeversion{COLOR}

\usepackage[font={sf, small}, labelfont=bf]{caption, subcaption}
\ExecuteOptions{tight,TABTOPCAP}

\usepackage{longtable}
\usepackage{multirow}

\usepackage{arydshln}

\usepackage{appendix}

\doublespacing			% Ask per individual, also remove extra space in section headings
\setlength\parskip{0cm}

\newcommand{\abstractname}{Abstract}
\newenvironment{abstract}% 
{\cleardoublepage\null \vfill\begin{center}%
 \bfseries \abstractname \end{center}}% 
 {\vfill\null}

\begin{document}
\singlespacing
\input{titlePage}
\pagenumbering{roman}

%% honor code
\cleardoublepage
\pagestyle{empty}
\begin{flushright}
	\vspace*{1.5in}
	This paper represents my own work in accordance with University regulations.\\
	\vspace*{.75in}
	Erin Rosenbaum\\
	April 15, 2011
\end{flushright}
\newpage

\begin{abstract}
Stylistic errors are a symptom of poorly written code. Sadly, relatively few tools have implemented 
automated stylistic error checking and even fewer are customizable or written for C. \programName 
(Critique from the Terminal) fills this void. It provides a tool to check for administrator-defined stylistic 
errors in C code. \programName uses a SAX style of event-based programming to perform checks 
and produces warnings as the code is being read. Administrators can use predefined checks or 
create their own to enforce coding standards. Additionally, they can use \programName to help grade 
and teach ``good style'' to students. To test \programName's abilities, I ran it over a series of graded 
student submissions and compared \programName's performance to the grader's. The results proved 
that \programName is both helpful and reliable. Not only did \programName find 98.1\% of errors, it had a 
precision rate of 90.0\%. These rates are excellent, especially given that the grader found 83.6\% of errors 
and had a precision of 100\%.
\end{abstract}

\renewcommand{\abstractname}{Acknowledgements}
\begin{abstract}

This thesis has been simultaneously one of the hardest and most rewarding experiences of my 
academic career. Transforming a simple idea into a functional and, more importantly, useful program 
was extremely satisfying and is a boost to my confidence as I enter the professional world. I am 
thoroughly indebted to many individuals who supported me through this process. Unequivocally, I owe 
my greatest and most profound thanks to my advisor, Dr.~Robert Dondero. Dr.~Dondero patiently met 
with me every week this year and helped me in everything from research and general software 
development to programming design tactics and writing skills. Without his support and appreciation of 
good style, this thesis would never have made it to its current state.

I would also like to express my gratitude to Professor Brian Kernighan for helping me enter the Computer 
Science department my sophomore year and for putting up with my two Junior Projects. Without his help 
and support, I would not have become a successful CS major.

There are various individuals who have supported me who also deserve mention. I would like to thank 
Alice Zheng for her wonderful help in creating the \programName logo. Ashton Brown and 
Slater Stich have been two of my closest friends and biggest supporters this year and throughout my 
entire college career. Without their help, I would not have been as productive or successful in my efforts 
on this thesis; without their friendship, my college career would have been significantly less enjoyable. I 
would also like to thank my teammates for being a wonderful, if often slightly annoying, set of brothers --- 
I will actively miss all of our `family' dinners next year. Additionally, I'd like to thank Marty Crotty, my 
coach, who has made me a better competitor and tougher person. Finally I would like to thank my 
parents and my sisters who constantly provide me with support and laughter. I owe all my success to 
them.

\end{abstract}

\begin{spacing}{0.95}
\tableofcontents
\end{spacing}
\listoftables
\listoffigures

\cleardoublepage
\pagenumbering{arabic}
\pagestyle{fancy}
\doublespacing

\chapter{Introduction}
Writing typically contains three types of errors: syntactic, stylistic and semantic.  In the case of writing 
prose, these errors take the form of spelling and grammar mistakes, poorly phrased passages and logic 
errors. When writing code, they take the form of syntax errors, poorly styled code and malfunctioning code. Both 
spelling mistakes and syntax errors represent text that is not within the language (be it English, C, etc.). A 
poorly phrased passage in prose or code denotes text that is technically valid but hard to understand. 
Finally, illogical arguments in prose and malfunctioning code both imply errors in the ideas behind the 
text. There are ways to find these errors in both prose and code (see \autoref{errorChecking}). While 
all these methods are useful, syntactic error checking is largely automated and therefore much more 
available and helpful. Good automated semantic error checking requires a currently unavailable level of 
artificial intelligence. Stylistic error checking, on the other hand, is feasible but has been addressed by 
very few tools. 

\begin{table}%[h]
	\begin{center}
	\begin{tabular}{ccc}
		\toprule
		Types of Errors & Tools for Prose & Tools for Code \\
		\midrule
		Syntactic & Spell \& Grammar Check & Compiler \\
		Stylistic & Editor & Code Reviewers\slash Graders \\ 
		Semantic & Reader & User\slash Tests \\
		\bottomrule
	\end{tabular}
	\end{center}
	\caption{Error Checking}
	\label{errorChecking}
\end{table}


From a different point of view, this problem can be formulated in terms of software quality. 
There are two perspectives on software quality:\ that of the user and that of the programmer. 
Users evaluate software on whether or not it behaves as it ought. Programmers, in addition, evaluate 
software on whether or not it is easily maintainable. Minimally, maintainability implies that code is easy to 
read and update. Evaluating a program from the user's perspective is common practice and most easily 
accomplished through automated testing. Though it is possible to evaluate a program from the 
programmer's perspective, existing tools that do so only check for specific qualities. Unfortunately, code 
quality is subjective, so any tool that only performs pre-defined inspections will never be satisfactory to 
every programmer.

The biggest reason to perform stylistic error checking is to improve readability (the ease with which 
another programmer can understand a piece of code). In the same way that poor phrasing in a paper 
often confounds its underlying arguments, poorly written code can easily obscure its underlying function. 
Furthermore, readable code is easier to revise and update. 

In the academic world, professors and teaching assistants (TAs) often read students' code, especially in 
introductory level courses. In these courses, much of the focus is on enforcing ``good style'' (though the 
definition varies from professor to professor).  The successful implementation of an automated stylistic 
error checker can immediately save work for professors by replacing the process of individually writing 
the same set of stylistic comments to multiple students with a set of automated warnings. In addition to 
reducing this repetitive and time-consuming task, it also allows for a consistent evaluation. Students also 
directly benefit by applying this tool to their code prior to submitting assignments --- giving them the 
chance to improve their grades as well as their coding habits.

In an industrial setting, where it is necessary to read or edit another's code, maintaining readability is 
essential. Projects are often handed over to new employees or teams who are then expected to be 
able to contribute immediately. Poorly organized or written code makes this daunting task onerous. Many 
successful software companies make use of a codified internal style but the enforcement of this policy 
falls to the employees. Many transgressions are simply due to inattention and could easily be solved by 
an automated reminder system. Such a tool would improve readability and reduce the need to bother 
one's peers with another round of code reviews, thereby allowing the entire team to be more productive. 

To address these needs, I have created \programName (Critique from the Terminal), a customizable 
style-checker for C code. \programName is run from the command line and executes a set of stylistic 
checks on the source files. Additionally, administrators can create checks to satisfy their personal needs.

\chapter{Related Products}

Many tools exist to help improve code. Minimally, compilers often produce warnings about unused code 
or assignments within \lstinline{if} statements. Tools like Clang\cite{clang} and Uno\cite{Uno} go 
even further and look for bugs such as uninitialized variables, out-of-bounds array indexing and memory 
errors. These tools do not focus on style or readability explicitly and still largely operate on the same 
level as a compiler.  Other tools try very hard to fill the stylistic error checking void. Each approaches the 
problem differently, but all succeed in finding some stylistic errors. Three such tools are 
\hyperref[sec:splint]{Splint}\cite{splint-manual}, \hyperref[sec:pmdAndCheckstyle]{PMD}\cite{pmd}, and 
\hyperref[sec:pmdAndCheckstyle]{Checkstyle}\cite{checkstyle}.

\section{Splint}
\label{sec:splint}

Splint is a tool for ``statically checking C programs for security vulnerabilities and programming 
mistakes''\cite[p.\ 9]{splint-manual}. It works exactly as \programName does from the user's
perspective, i.e.\ as a command-line program which prints warnings to \lstinline{stdout}. 
Splint displays warnings about basic stylistic errors such as assignments with mismatched types and 
ignored return values. With more effort, programmers can add annotations (fancy comments) to their 
code that gives Splint a specification against which to check. These annotations allow for stronger 
checks like memory management, null pointers and ``violations of information 
hiding''\cite[p. 9]{splint-manual}. Examples of annotations in action are shown in 
\autoref{splint-annotations}. These checks supersede the set found in the original Lint, Splint's 
namesake:\ ``Specification Lint'' and ``Secure Programming Lint''. 

\begin{figure}
\begin{lstlisting}[frame=single, language=C]
typedef /*@abstract@*/ /*@mutable@*/ char *mstring;
typedef /*@abstract@*/ /*@immutable@*/ int weekDay;
\end{lstlisting}
\caption[Splint Annotations]{Splint Annotations which define \lstinline{mstring} and \lstinline{weekDay} as abstract data types and further specify that they are mutable\slash immutable respectively.} 
\label{splint-annotations}
\end{figure}

While these annotations provide an extensive feature set, they are a huge inconvenience. They 
require programmers to write their code to meet both the specification of the client and that of the tool. 
For new programmers --- often the ones who need the most error checking --- these 
annotations are almost impossible to implement on top of learning to program.
David Evans, one of the authors of Splint, says as much in a private email. He states:
\begin{quote} \singlespacing
One of the goals of the original design of Splint was for programmers who add no annotations to start 
getting some useful warnings right away, including warnings that encourage them to start adding 
annotations.  For some aspects, such as \lstinline{/*@null@*/} annotations I think this has worked okay, 
but for others like abstract types, memory management, etc., I don't think it has worked very well, and the 
warnings on these issues tend to either make developers want to stop using Splint, or at least just turn off 
all the warnings of that type, rather than start adding the annotations needed to enable better 
checking.\cite{evans-email} 
\end{quote}

Splint and \programName differ in two significant ways. Splint performs a lot of inter-file checks 
regarding headers, interfaces, etc., whereas \programName primarily focuses on intra-file checks. 
They also differ in how they specify what to check. Splint uses a configuration file and command line 
arguments to determine which of the several-hundred pre-defined messages and warnings to display. In 
contrast, \programName allows administrators to easily write their own checks and always runs every 
check that is defined. Because of this disparity, Splint is limited to checking for commonly accepted errors 
but \programName has the freedom to operate idiosyncratically and check many different --- possibly 
quite arbitrary --- coding standards.

\section{PMD and Checkstyle}
\label{sec:pmdAndCheckstyle}

PMD is a tool for checking Java code. It is integrated into a dozen or so popular IDEs.
PMD comes with over 250 checks, which are mostly organized by purpose such as Braces Rules, 
Basic Rules, Coupling Rules, etc. Some checks also deal explicitly with a certain 
library or platform like Android, Jakarta and JUnit. PMD works by passing source code into a 
JavaCC-generated parser and receiving an Abstract Syntax Tree (a.k.a.\ AST, a tree-based model of 
the source code). PMD then traverses the AST and calls each rule to check for any
violations. This pattern of examining a tree of nodes is called the Visitor Pattern\cite{design-patterns}. 
Rules are written in their own classes and extend a base implementation. The rule itself can 
override three functions (start, visit and end) to perform various checks against the source code based 
on the nodes in the AST. The ``dummy'' example from the PMD website which counts how 
many expressions are in the source code is shown in \autoref{pmd-rule}. PMD keeps track of these 
custom rules by reading additional XML files, called rulesets, which specify the various attributes of the 
rule (such as name, message, corresponding class, examples, etc.).

\begin{figure}
\begin{lstlisting}[language=Java]
package net.sourceforge.pmd.rules;

import java.util.concurrent.atomic.AtomicLong;
import net.sourceforge.pmd.AbstractJavaRule;
import net.sourceforge.pmd.RuleContext;
import net.sourceforge.pmd.ast.ASTExpression;

public class CountRule extends AbstractJavaRule {

	private static final String COUNT = "count";

	@Override
	public void start(RuleContext ctx) {
		ctx.setAttribute(COUNT, new AtomicLong());
		super.start(ctx);
	}

	@Override
	public Object visit(ASTExpression node, Object data) {
		// How many Expression nodes are there in all files parsed! 
		RuleContext ctx = (RuleContext)data;
		AtomicLong total = (AtomicLong)ctx.getAttribute(COUNT);
		total.incrementAndGet();
		return super.visit(node, data);
	}

	@Override
	public void end(RuleContext ctx) {
		AtomicLong total = (AtomicLong)ctx.getAttribute(COUNT);
		addViolation(ctx, null, new Object[] { total });
		ctx.removeAttribute(COUNT);
		super.end(ctx);
	}
}
\end{lstlisting}
\caption[Example of a PMD Rule]{Example of a PMD rule which counts the number of expressions in the source code.}
\label{pmd-rule}
\end{figure}

Checkstyle provides similar functionality to PMD in that it checks Java code for stylistic errors.  It was 
designed to help programmers adhere to coding standards. Later, its designers added checks for bug 
prevention, class design problems, and other common errors. Accordingly, Checkstyle comes standard 
with many checks, including those regarding duplicate code, class design, whitespace, etc. Like PMD, it 
uses an AST and the Visitor Pattern to check code. Custom rules are registered through an XML file and 
passed to Checkstyle at runtime. An example check which determines how many methods are in a class 
is shown in \autoref{checkstyle-rule}.

\begin{figure}
\begin{lstlisting}[language=Java]
package com.mycompany.checks;
import com.puppycrawl.tools.checkstyle.api.*;

public class MethodLimitCheck extends Check
{
    private static final int DEFAULT_MAX = 30;
    private int max = DEFAULT_MAX;

    @Override
    public int[] getDefaultTokens()
    {
        return new int[]{TokenTypes.CLASS_DEF, TokenTypes.INTERFACE_DEF};
    }

    @Override
    public void visitToken(DetailAST ast)
    {
        // find the OBJBLOCK node below the CLASS_DEF/INTERFACE_DEF
        DetailAST objBlock = ast.findFirstToken(TokenTypes.OBJBLOCK);
        
        // count the number of direct children of the OBJBLOCK that 
        // are METHOD_DEFS
        int methodDefs = objBlock.getChildCount(TokenTypes.METHOD_DEF);
        
        // report error if limit is reached
        if (methodDefs > this.max) {
            log(ast.getLineNo(),
                "too many methods, only " + this.max + " are allowed");
        }
   }
}
\end{lstlisting}
\caption[Example of a Checkstyle Check]{Example of a Checkstyle check which counts the number of methods in a class.}
\label{checkstyle-rule}
\end{figure}

PMD and Checkstyle are great tools; nevertheless, because they only work for Java, they do not solve 
my problem:\ stylistic error checking in C. In essence PMD, Checkstyle and \programName perform 
very similarly; however, PMD and Checkstyle are built upon entirely different frameworks from 
\programName. The use of the Visitor Pattern and an AST requires PMD and Checkstyle to read through 
the entirety of the code before they can produce any warnings. In contrast, \programName performs error 
checking as it reads the code. PMD and Checkstyle also contain graphical user interfaces, both to aid 
writing checks and to find errors (the latter due to their integration with IDEs). \programName, on the 
other hand, is a command-line program. Another difference is that \programName must be recompiled in 
order to take advantage of any added checks, as opposed to responding at runtime to a configuration file.

\chapter{What \programName Does}

\programName reads in a set of C source code files and determines if they contain any of the 
defined stylistic errors. It is run from the command line inside one's working directory. \programName is 
given a list of .c files to check and reads through each in the given order, pausing to read through 
included header files. Upon encountering an error, \programName prints a warning to \lstinline{stderr} 
containing the full location of the error, an error level and a message (an example is shown in 
\autoref{errorExample}). 

\begin{figure}
\begin{subfigure}[b]{.49\linewidth}
\caption{test.c}
\label{errorExampleCode}
\begin{lstlisting}[numbers=left, firstnumber=92, xleftmargin=.8cm]
	for (int q = 0; q<5; q++) {
		printf("hi");
	}
\end{lstlisting}
\end{subfigure}
\begin{subfigure}[b]{.49\linewidth}
\caption{stderr}
\label{errorExampleStderr}
\begin{lstlisting}[xleftmargin=.7cm]
$ critTer test.c
test.c:92.22-92.23: big problem: Do not use magic numbers (5)
\end{lstlisting}
\end{subfigure}
\caption[Example of an Error and Corresponding Warning]{Example of an Error and Corresponding Warning. Here, \programName is complaining that the for loop's exit condition contains a `magic number'. The warning  contains the location of the error, the error level as well as the message.}
\label{errorExample}
\end{figure}

\programName places the responsibility on the administrator to define the set of stylistic errors to check.
\programName comes with a set of predefined checks that the administrator may use or discard at 
his\slash her discretion. Checks are event driven and are called when the appropriate element in the 
code is reached. For example, it is often desirable to make sure that variables have long enough names 
to be adequately descriptive. In order to check this property, whenever \programName recognizes a 
variable inside a declaration it checks that the variable's name exceeds a minimum length. The 
administrator can write his\slash her own checks as functions to be invoked at each of the relevant 
callback points.
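The shape of such an event-driven check can be sketched in C. The function name mirrors the predefined isVariableNameTooShort check, but the signature, threshold, and return convention here are illustrative assumptions, not \programName's actual interface:

\begin{lstlisting}[language=C]
#include <string.h>

/* Illustrative sketch only: the signature, threshold, and return
 * convention are assumptions, not critTer's actual interface. */
enum { MIN_NAME_LENGTH = 3 };  /* assumed minimum length */

/* Called when the parser reports a variable inside a declaration.
 * Returns 1 if the name is too short (critTer itself would print a
 * warning to stderr with the token's location), 0 otherwise. */
int isVariableNameTooShort(const char *name)
{
   return strlen(name) < MIN_NAME_LENGTH;
}
\end{lstlisting}

In \programName, such a function would be invoked from the relevant declaration callback, receiving the token's text and location.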

The predefined checks are listed in \autoref{predefinedChecks}. These checks reflect two main 
goals:\ demonstrating the abilities of \programName and capturing my own stylistic choices. To show off 
some of \programName's power, I wrote a variety of checks that exercise the 
distinct elements that can be examined. Some checks, such as isFunctionTooLongByLines and 
isFunctionTooLongByStatements, are the same check defined on a different unit of length (line vs.\ 
statement). The style choices I made represent ideas from a variety of sources including Fowler's Bad 
Smells of Code\cite{refactoring}, Google's style guide\cite{googleStyle}, Code 
Complete\cite{code-complete}, C-Style: Standards and Guidelines\cite{standards}, The Practice of 
Programming\cite{practice-of-programming}, PMD\cite{pmd} and Checkstyle\cite{checkstyle}, as well as 
my own experience and preferences.

\begin{table}
\small
	\begin{center}
	\rowcolors{3}{gray!15}{}
	\begin{tabular}{p{5.4cm} p{8.8cm}}
		\toprule
		Check Name & Purpose \\
		\midrule
		isFileTooLong & Check if the file exceeds a maximum length. \\
		hasBraces & Check if the statement within an \lstinline!if!, \lstinline!else!, \lstinline!for!, \lstinline!while!, and \lstinline!do while! statement is a compound statement.\\
		isFunctionTooLongByLines & Check if a function exceeds a maximum line count. \\
		isFunctionTooLongByStatements & Check if a function exceeds a maximum statement count. \\
		tooManyParameters & Check if there are too many parameters in the function declaration. \\
		neverUseCPlusPlusComments & Warn against using C++ style single line comments. \\
		hasComment & Check for comments before some construct. \\
		switchHasDefault & Check that each \lstinline!switch! statement has a \lstinline!default! case. \\
		switchCasesHaveBreaks & Check that each \lstinline!switch! case has a \lstinline!break! statement. \\
		isTooDeeplyNested & Check whether a region of code (i.e. a compound statement) nests too deeply. \\
		useEnumNotDefine & Warn against using \lstinline!#define! instead of \lstinline!enum! for declarations. \\
		neverUseGotos & Warn against using \lstinline!goto! statements. \\
		isVariableNameTooShort & Check if a variable's name exceeds a minimum length. \\
		isMagicNumber & Warn against using magic numbers outside of a declaration. \\
		globalHasComment & Check if each global variable has a comment. \\
		isLoopTooLong & Check if a loop exceeds a maximum length. \\
		isCompoundStatementEmpty  & Check if the compound statement is empty. \\
		tooManyFunctionsInFile & Check if there are too many functions in a file. \\
		isIfElsePlacementValid & Warn against poor \lstinline!if!\slash\lstinline!else! placement as defined by the Google style guide. \\
		isFunctionCommentValid & Check if function comments have the appropriate contents. Specifically check that the comment mentions each parameter (by name) and what the function returns. \\
		arePointerParametersValidated & Check if each pointer type parameter into a function is mentioned within an \lstinline!assert()! before being used. \\
		doFunctionsHaveCommonPrefix & Check that function names contain a common prefix. \\
		functionHasEnoughLocal\-Comments & Check that there are enough local comments in the function relative to the number of control\slash selection statements. \\
		structFieldsHaveComments & Check that all fields in a struct have a comment. \\
		\bottomrule
	\end{tabular}
	\end{center}
\caption{Predefined Checks}
\label{predefinedChecks}
\end{table}

\chapter{How \programName Works}
\label{howItWorks}

\autoref{moduleInteraction} shows how \programName is divided up into multiple, loosely coupled 
modules. Each of these has a unique purpose and is designed to keep the code as clean as possible. 
The ``Knowledge Barrier'' distinguishes the easily customizable and understandable modules from 
those which should not be modified without extreme caution. The modules can also be conceptually 
grouped into three categories: \hyperref[parsingTheCode]{Parsing the Code} (Lexer and Parser), 
\hyperref[callingTheChecks]{Calling the Checks} (Hooks and Sax), and 
\hyperref[writingTheChecks]{Writing the Checks} (Checks, Comments, and Locations).

\begin{figure}[h!]
\begin{center}
\processifversion{PRINT}{\includegraphics[scale=0.6]{ClassInteractionPrint}}
\processifversion{COLOR}{\includegraphics[scale=0.6]{ClassInteractionColor}}
\end{center}
\caption[Module Interaction of \programName]{Module Interaction of \programName. Arrows represent the flow of control between the modules.}
\label{moduleInteraction}
\end{figure}

\section{Parsing the Code}
\label{parsingTheCode}

\programName is built on top of Flex and Bison, a lexical analyzer and parser generator, respectively. 
These two programs each take in a specification file (which defines a set of tokens and a corresponding 
context-free grammar) and output a set of C files that parse code. Control goes back and forth between the 
lexical analyzer --- which divides the code into distinct tokens --- and the parser --- which determines how the 
tokens fit together. \programName is able to parse valid ANSI C code, but it does not compile or in 
any way track the contents. This means, for example, that \programName sees any variable or function 
name just as an \lstinline{IDENTIFIER} (any set of letters\footnote{Strictly, \lstinline{IDENTIFIER}s fit 
the regular expression: \lstinline{[a-zA-Z_]([a-zA-Z_]|[0-9])*}.} that does not already designate a data 
type) without any context as to where it was defined or used before. Because \programName does not 
compile code, it cannot evaluate expressions within preprocessor directives. Therefore, \programName 
cannot perform conditional compilation (specifically, it cannot follow \lstinline{#if} and \lstinline{#define}). 
In order to combat the issue of multiple inclusion of header files, \programName stores the name of each 
file it opens and does not open that file again, even if it is included from another file. \programName does 
not read standard header files (such as \lstinline{stdlib.h} and \lstinline{strings.h}) because they define 
data types within \lstinline{#define}s (which \programName cannot evaluate and therefore cannot 
recognize as types). For example, the file \lstinline{sys/cdefs.h}, which \lstinline{stdio.h} eventually includes, has the line 
\lstinline{#define __signed    signed} and then uses \lstinline{__signed} throughout the file. To adjust for 
this issue, the lexer instead contains a hack for determining if character strings in the code are 
\lstinline{IDENTIFIER}s or data type names. The lexer does a string comparison against common 
types defined in standard headers such as \lstinline{size_t}, \lstinline{FILE}, \lstinline{pid_t}, etc. If any of 
these hardcoded checks pass, then the lexer tells the parser it has found a type name instead of an 
\lstinline{IDENTIFIER}.\footnote{The lack of preprocessing also means that escaped new lines are left in the code. If these characters occur inside a string, \programName's lexical analysis fails, which in turn causes Bison to falsely report syntax errors. Since this style is rare (and often discouraged), I decided to ignore this issue.}
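A minimal sketch of this lexer hack follows; the function and array names are assumptions for illustration, not \programName's actual code. Before returning an \lstinline{IDENTIFIER} token to the parser, the lexer would consult such a comparison:

\begin{lstlisting}[language=C]
#include <string.h>

/* Sketch with assumed names, not critTer's actual code: common type
 * names that standard headers define through #defines the tool
 * cannot evaluate. */
static const char *standardTypes[] = {
   "size_t", "FILE", "pid_t", "ssize_t", "time_t"
};

/* Returns 1 if text names a hardcoded standard type, telling the
 * lexer to report a type name to the parser instead of an
 * IDENTIFIER; returns 0 otherwise. */
int isStandardTypeName(const char *text)
{
   size_t i;
   for (i = 0; i < sizeof standardTypes / sizeof standardTypes[0]; i++)
      if (strcmp(text, standardTypes[i]) == 0)
         return 1;
   return 0;
}
\end{lstlisting}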

Bison and Flex track the location of any token or grammar construct. They store this information in a 
\lstinline{YYLTYPE} structure. Normally a \lstinline{YYLTYPE} contains 4 fields, \lstinline{first_line}, 
\lstinline{first_column}, \lstinline{last_line} and \lstinline{last_column}; however, I have also added a 
field in order to produce more accurate checks and warnings across a set of 
files. Each grammar rule can contain multiple actions that consist of C code. These actions can 
reference the location of the entire construct, or any single component of it, through the prebuilt 
location mechanism. Event handlers are functions in the Sax and Hooks modules that 
respond to finding different code constructs. \programName calls the event handlers from actions, passing in the location of the relevant text.  \autoref{grammar} shows an excerpt of the grammar where actions and the event handlers within 
the actions are underlined. Some actions are `hidden' in dummy rules (called `subroutines' by Bison) in 
order to avoid ambiguities within the grammar. Examples of this practice are shown in \autoref{grammar} 
with the \lstinline{beginCompound}, \lstinline{beginFOR} and \lstinline{beginIF} rules. Locations passed from the middle of a rule (all 
those passed to \lstinline{begin} handlers) represent only the location of that segment and not 
the entire construct. For example, in \autoref{grammar}, \lstinline{beginWhile} will only be passed the 
location of the word ``\lstinline{while}'' whereas \lstinline{endWhile} will be passed the location of the 
entire \lstinline{while} statement.
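The location machinery can be sketched as follows. The four standard fields come from Bison; the added file-name field and the formatting helper are assumptions for illustration (the prefix format matches the warnings shown in \autoref{errorExample}):

\begin{lstlisting}[language=C]
#include <stdio.h>

/* Bison's four standard location fields plus one added field; the
 * file-name field is an assumed illustration of how warnings can
 * stay accurate across a set of input files. */
typedef struct YYLTYPE {
   int first_line, first_column;
   int last_line, last_column;
   const char *filename;   /* assumed added field */
} YYLTYPE;

/* Hypothetical helper: format the location prefix used in
 * critTer-style warnings, e.g. "test.c:92.22-92.23". */
int formatLocation(char *buf, size_t size, YYLTYPE loc)
{
   return snprintf(buf, size, "%s:%d.%d-%d.%d", loc.filename,
                   loc.first_line, loc.first_column,
                   loc.last_line, loc.last_column);
}
\end{lstlisting}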

\begin{figure} 
\begin{lstlisting}[language=Caml, escapechar=\%, morekeywords={beginIF, beginFOR, beginCompound}]
beginCompound : /* empty */ %\underline{\{beginCompoundStatement(@\$);\}}%

compound_statement
	: '{' beginCompound '}'										%\underline{\{endCompoundStatement(@\$);\}}%
	| '{' beginCompound statement_list '}'		%\underline{\{endCompoundStatement(@\$);\}}%
	| '{' beginCompound declaration_list '}'	%\underline{\{endCompoundStatement(@\$);\}}%
	| '{' beginCompound declaration_list statement_list '}'	%\underline{\{endCompoundStatement(@\$);\}}%
	;

beginIF : /*empty*/ %\underline{\{beginIf(@\$);\}}%

selection_statement
	: IF beginIF '(' expression ')' statement %\underline{\{endIf(@\$);\}}%
	| IF beginIF '(' expression ')' statement ELSE %\underline{\{endIf(@6); beginElse(@7);\}}% statement	%\underline{\{endElse(@9);\}}%
	| SWITCH %\underline{\{beginSwitch(@1);\}}% '(' expression ')' statement %\underline{\{endSwitch(@\$);\}}%
	;

beginFOR : /*empty*/ %\underline{\{beginFor(@\$);\}}%

iteration_statement
	: WHILE %\underline{\{beginWhile(@1);\}}% '(' expression ')' statement %\underline{\{endWhile(@\$);\}}%
	| DO %\underline{\{beginDoWhile(@1);\}}% statement WHILE '(' expression ')' ';' %\underline{\{endDoWhile(@\$);\}}%
	| FOR beginFOR '(' expression_statement expression_statement ')' statement	%\underline{\{endFor(@\$);\}}%
	| FOR beginFOR '(' expression_statement expression_statement expression ')' statement %\underline{\{endFor(@\$);\}}%
	| FOR beginFOR '(' declaration expression_statement ')' statement %\underline{\{endFor(@\$);\}}%
	| FOR beginFOR '(' declaration expression_statement expression ')' statement	%\underline{\{endFor(@\$);\}}%
	;

\end{lstlisting}
\caption[Excerpt of the Grammar]{Excerpt of the Grammar. Here, the different `paragraphs' are the different grammar rules. Actions are underlined and my additions to the grammar are in bold. }
\label{grammar}
\end{figure} 

Rather than writing them from scratch, I found Flex and Bison input files specifying the C language 
online\cite{originalGrammar} and modified them to add functionality. The only major 
modification of the actual grammar was to add the ability to recognize and dynamically add 
\lstinline{typedef} definitions as types. \programName stores these type names in an internal symbol 
table and the lexer checks to make sure that potential \lstinline{IDENTIFIER}s are not already listed 
in the table. Additionally, I modified some of the grammar rules to include dummy rules with actions. In 
order to accommodate the inclusion of header files, I had to expand the given lexer functionality to 
transfer control between files. The specific method of using a stack of buffers and file pointers is heavily 
inspired by the examples in the O'Reilly Flex \& Bison book\cite{flex-and-bison}. When transferring to a 
different file, the lexer adds the current file to a stack with its file pointer, internal state, and current line 
number. When it reaches the end of the file, the lexer pops the current file off the stack and goes back 
to its previous state. The end of the program occurs when there are no more files on the 
stack.\footnote{\programName starts by adding all of the given files to the stack and dynamically adding 
additional header files. I chose to pre-load .c files in this manner because the mechanism was already in 
place and it simplified the interface between the main module and the lexer\slash parser. In order to hide 
this implementation detail, the .c files are pushed onto the stack in reverse order.

Additionally, the lexer reads in header filenames without any additional context about the path of the 
current file. Because of this, \programName is unable to find and read header files that are included from 
within subdirectories. It is relatively rare within an academic context to break a single program into 
subdirectories and accordingly I decided not to focus on this issue.} 
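The file stack described above can be sketched as follows. This is a simplified illustration rather than critTer's code: the names \lstinline{pushFile} and \lstinline{popFile} are hypothetical, and the real lexer also stores a flex buffer state and a \lstinline{FILE*} with each entry.

```c
#include <assert.h>
#include <stddef.h>

/* Simplified sketch of the include-file stack; names are hypothetical. */
#define MAX_DEPTH 32

struct fileState {
	const char *name;   /* file to resume reading */
	int line;           /* line number to resume at */
};

static struct fileState stack[MAX_DEPTH];
static int depth = 0;

/* Called when an #include is found: save the current file's state. */
int pushFile(const char *name, int line) {
	if (depth == MAX_DEPTH)
		return 0;   /* includes nested too deeply */
	stack[depth].name = name;
	stack[depth].line = line;
	depth++;
	return 1;
}

/* Called at end-of-file: return the state to resume, or NULL when
 * no files remain (the end of the whole program). */
struct fileState *popFile(void) {
	if (depth == 0)
		return NULL;
	return &stack[--depth];
}
```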

\section{Some Theory}
Bison uses an LALR(1) parsing algorithm: it scans the input from left to right, producing a rightmost 
derivation in reverse, with one token of lookahead. The algorithm drives a precomputed parser table, which allows it to avoid 
backtracking as it parses the source file. Because of this property, calls are never mistakenly made from 
the lexer or parser into other modules of \programName.

When compilers translate code into machine language, they first perform lexical analysis and parsing, 
just like \programName. Where the two start to differ is in the actions for each grammar rule. Compilers 
store information regarding the semantic value, or meaning of the source code, inside each construct. 
This practice allows compilers to build an Abstract Syntax Tree, which ``conveys the phrase structure of 
the source program, with all parsing issues resolved but without any semantic 
interpretation''\cite{compiler-implementation}. Other modules can then look over the entire tree and 
determine the meaning of the code as well as stylistic attributes. Both PMD and Checkstyle use this 
method.

Originally, \programName operated in a similar manner; it stored an enumerated value corresponding to 
the type of construction as the semantic value of each node (see \autoref{progressionOfDevelopment}). 
The checks then examined the value of each construction. Instead of expanding along this line of 
development, I decided to transition to an event-based system, largely inspired by 
SAX\cite{saxHomepage}. This kind of framework ``reports parsing events (such as the start and end of 
elements) directly to the application through callbacks, and does not usually build an internal tree. The 
application implements handlers to deal with the different events, much like handling events in a 
graphical user interface''\cite{saxHomepage}. Specifically, SAX uses event handlers to parse XML files. 
There are three main handlers which are used at the beginning and end of each XML element as well as 
to capture the text in between. 

I chose to make this transition because the event-based system required far less overhead to 
implement. Instead of spending time building the tree framework and the corresponding methods to 
traverse it, I was able to focus on implementing stylistic checks. Additionally, the SAX-style framework 
has the added benefit of being able to examine code as it parses, as opposed to after the file has been 
parsed completely. This feature also increases scalability because the SAX framework discards the parts 
of the file(s) it has already read.
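The callback pattern behind such an event-based system can be illustrated in miniature (all names here are invented for the sketch; critTer's actual handlers appear in the figures below): the application supplies handler functions, and the parser reports begin and end events directly without ever building a tree.

```c
#include <assert.h>

/* Illustration of the SAX-style callback pattern; names are invented. */
typedef struct {
	void (*beginElement)(const char *name);
	void (*endElement)(const char *name);
} Handlers;

static int depth = 0;

static void onBegin(const char *name) { (void)name; depth++; }
static void onEnd(const char *name)   { (void)name; depth--; }

/* Drive the handlers as a parser would while reading nested input
 * such as <a><b></b></a>; balanced input returns to depth 0. */
int parseNested(const Handlers *h) {
	h->beginElement("a");
	h->beginElement("b");
	h->endElement("b");
	h->endElement("a");
	return depth;
}
```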

\singlespacing
\section{Calling the Checks}
\label{callingTheChecks}
\doublespacing 

Much like SAX, \programName calls event handlers at the beginning and end of constructs (functions, 
declarations, statements, parameter lists, etc.) as well as when it finds singular elements (variable 
names, \lstinline{break} statements, parameters, etc.). In the code, handlers are prefaced by the words 
``\lstinline{begin}'', ``\lstinline{end}'' and ``\lstinline{register}'' to signal at what point each is called (as 
shown in \autoref{grammar}). At minimum, each handler is passed the location of the relevant text --- in 
the case of \lstinline{IDENTIFIER}s and numeric constants, the handler is also passed the relevant 
text itself. Each of these event handlers exists in the file sax.c which in turn calls the 
administrator-defined checks (as shown in \autoref{saxHandlers}). While these checks could be written 
into the event handlers themselves, it is advantageous to separate them into their own functions in order 
to preserve the readability of the sax.c file and the code in general. \autoref{predefinedChecksFunctions} 
lists the predefined checks, their function signatures, and the handlers which call them. 
 
\begin{figure}
\begin{lstlisting}[language=C]
void registerConstant(YYLTYPE location, char* constant) {
	isMagicNumber(location, MIDDLE, constant);
}

void beginCompoundStatement(YYLTYPE location) {
	isCompoundStatementEmpty(location, BEGINNING);
	lastCalled_set(beginCompoundStatement);
	isTooDeeplyNested(location, BEGINNING);
	isFunctionCommentValid(location, BEGIN_FUNCTION_BODY, NULL);
}

void endCompoundStatement(YYLTYPE location) {
	isCompoundStatementEmpty(location, END);
	lastCalled_set(endCompoundStatement);
	isTooDeeplyNested(location, END);
}

void beginDeclaration(YYLTYPE location) {
	isMagicNumber(location, BEGINNING, NULL);
	isVariableNameTooShort(location, BEGINNING, NULL);
}

void endDeclaration(YYLTYPE location) {
	isMagicNumber(location, END, NULL);
	globalHasComment(location, MIDDLE);
	isVariableNameTooShort(location, END, NULL);
}
\end{lstlisting}
\caption{A Subset of the Event Handlers in the Sax Module}
\label{saxHandlers}
\end{figure}

Unfortunately, some handlers cannot actually be called at the time the construct is recognized. This 
is because Bison executes actions as it encounters them inside each grammar rule. If actions were 
placed at the very beginning of a rule with several similar alternatives, Bison would not know which alternative to act upon. In all the rules listed in 
\autoref{grammar}, the action is preceded by some distinguishing token (e.g. \lstinline{WHILE}) or by the 
entire rule. However, \autoref{hookGrammar} shows some rules that both need actions at the beginning 
of the statement and lack distinguishing tokens. Specifically we would like to know when we start a 
function definition, but we cannot be sure that we are in a function definition until Bison finishes parsing 
the function's signature. To fix this issue I added the Hooks module. This module intercepts what would 
be normal calls within the SAX framework and then reorders them at the appropriate time. Each call into 
the Hooks module does one of two things:\ it enqueues a Sax level function call and its location or it 
dequeues any item after a specified location (\autoref{hooksQueues}). At the beginning of a function, for example, 
all the elements of the signature are placed on the queue and then dequeued when 
\lstinline{h_beginFunctionDefinition} is called. 
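A minimal sketch of the two queue operations follows, using the names \lstinline{enqueueFunctionAndLocation} and \lstinline{dequeueUntil} that appear in the code. Plain \lstinline{int}s stand in for \lstinline{YYLTYPE} locations, and the release semantics follow my reading of \autoref{hooksQueues}: entries recorded after the given location are called and removed, while earlier entries stay queued.

```c
#include <assert.h>

/* Sketch of the Hooks queue; an int stands in for a YYLTYPE location. */
#define QMAX 64

typedef void (*SaxCall)(int location);

static SaxCall queuedFn[QMAX];
static int     queuedLoc[QMAX];
static int     queued = 0;

void enqueueFunctionAndLocation(SaxCall fn, int location) {
	queuedFn[queued] = fn;
	queuedLoc[queued] = location;
	queued++;
}

/* Call and remove every stored event recorded after `location`;
 * entries at or before it remain queued. */
void dequeueUntil(int location) {
	int i, kept = 0;
	for (i = 0; i < queued; i++) {
		if (queuedLoc[i] > location) {
			queuedFn[i](queuedLoc[i]);   /* release into the Sax layer */
		} else {
			queuedFn[kept] = queuedFn[i];
			queuedLoc[kept] = queuedLoc[i];
			kept++;
		}
	}
	queued = kept;
}

/* A stand-in Sax handler that records what was released. */
static int releasedSum = 0;
void recordRelease(int location) { releasedSum += location; }
```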


\begin{figure}
\begin{lstlisting}[language=Caml, escapechar=\%]
declarator
	: pointer direct_declarator
	| direct_declarator
	;

direct_declarator
	: IDENTIFIER			 %\underline{\{h\_registerIdentifier(@\$);\}}%
	| '(' declarator ')'
	| direct_declarator '['  %\underline{\{h\_beginDirectDeclarator(@1);\}}% constant_expression ']'	 %\underline{\{h\_endDirectDeclarator(@\$);\}}%
	| direct_declarator '['  %\underline{\{h\_beginDirectDeclarator(@1);\}}% ']'						%\underline{\{h\_endDirectDeclarator(@\$);\}}%
	| direct_declarator '('  %\underline{\{h\_beginDirectDeclarator(@1);\}}% parameter_type_list ')'	%\underline{\{h\_endDirectDeclarator(@\$);\}}%
	| direct_declarator '('  %\underline{\{h\_beginDirectDeclarator(@1);\}}% identifier_list ')'		%\underline{\{h\_endDirectDeclarator(@\$);\}}%
	| direct_declarator '('  %\underline{\{h\_beginDirectDeclarator(@1);\}}% ')'						%\underline{\{h\_endDirectDeclarator(@\$);\}}%
	;

function_definition
	: declaration_specifiers declarator %\underline{\{h\_beginFunctionDefinition(@2);\}}% declaration_list compound_statement %\underline{\{endFunctionDefinition(@\$);\}}%
	| declaration_specifiers declarator %\underline{\{h\_beginFunctionDefinition(@2);\}}% compound_statement %\underline{\{endFunctionDefinition(@\$);\}}%
	| declarator %\underline{\{h\_beginFunctionDefinition(@1);\}}% declaration_list compound_statement %\underline{\{endFunctionDefinition(@\$);\}}%
	| declarator %\underline{\{h\_beginFunctionDefinition(@1);\}}% compound_statement %\underline{\{endFunctionDefinition(@\$);\}}%
	;
\end{lstlisting}
\caption{Additional Excerpt of the Grammar}
\label{hookGrammar}
\end{figure}

\begin{figure}
\begin{center}
\begin{subfigure}[t]{.4\linewidth}
	\caption{}
	\label{hooksQueuesA}
	\includegraphics[scale=0.5]{hooksQueuesPartA.pdf}
\end{subfigure}
\begin{subfigure}[t]{.4\linewidth}
	\caption{}
	\label{hooksQueuesB}
	\includegraphics[scale=0.5]{hooksQueuesPartB.pdf}
\end{subfigure} \\
\vspace{4mm}
\begin{subfigure}[t]{.4\linewidth}
	\caption{}
	\label{hooksQueuesC}
	\includegraphics[scale=0.5]{hooksQueuesPartC.pdf}
\end{subfigure}
\begin{subfigure}[t]{.4\linewidth}
	\caption{}
	\label{hooksQueuesD}
	\includegraphics[scale=0.5]{hooksQueuesPartD.pdf}
\end{subfigure}
\end{center}
\caption[Representation of the Hooks Module]{Representation of the Hooks Module. (\subref{hooksQueuesA}) The initial queue with functions associated to locations 11, 12 and 13. (\subref{hooksQueuesB}) The queue after another function\slash location pair has been enqueued. (\subref{hooksQueuesC}) The call at location 15 causes a call into the Sax layer followed by every stored call after the given location (12) in the queue. (\subref{hooksQueuesD}) The resulting queue.}
\label{hooksQueues}
\end{figure}

The Hooks module also makes the appropriate calls into the Sax layer regarding 
\lstinline{IDENTIFIER}s and numeric constants. The lowest-level structure Bison can manipulate is the 
token, meaning it cannot know the textual representation of a given \lstinline{IDENTIFIER} or constant. 
Flex, on the other hand, operates on the actual text. For each \lstinline{IDENTIFIER} or constant, Flex 
makes one call into Hooks with the text and Bison makes one with the location. The Hooks module then 
combines the information from these separate calls into a single call into the Sax layer. This can 
best be seen in \autoref{handlerTimeline}.

\begin{figure}
\begin{center}
\begin{tabular}{llc}
\toprule
Hooks & Sax  & Relevant Code\\
\midrule
h\_registerIdentifierText & & \lstinline!example! \\
h\_registerIdentifier & & \lstinline!example! \\
h\_beginParameterList & & \lstinline!(! \\
h\_registerIdentifierText & & \lstinline!a! \\
h\_registerIdentifier & & \lstinline!a! \\
h\_registerParameter & & \lstinline!int a! \\
h\_registerIdentifierText & & \lstinline!b! \\
h\_registerIdentifier & & \lstinline!b! \\
h\_registerParameter & & \lstinline!double b! \\
h\_endParameterList & & \lstinline!)! \\
h\_beginFunctionDefinition & beginFunctionDefinition & \\
 & registerIdentifier & \lstinline!example! \\
 & beginParameterList & \lstinline!(! \\
 & registerIdentifier & \lstinline!a! \\
 & registerParameter & \lstinline!int a! \\
 & registerIdentifier & \lstinline!b!\\
 & registerParameter & \lstinline!double b!\\
 & endParameterList & \lstinline!)!\\ \hdashline[1pt/4pt]
N/A & beginCompoundStatement & \lstinline!{!\\
\ldots & \ldots & \lstinline!...! \\
N/A & endCompoundStatement & \lstinline!}!\\
N/A & endFunctionDefinition \\
 \bottomrule
\end{tabular}
\end{center}
\caption[Timeline of Event Handler Calls]{Timeline of event handler calls into the hooks and sax module for: \lstinline!void example(int a, double b) \{...\}!}
\label{handlerTimeline}
\end{figure}

Event handlers for major and common constructs, as well as for the beginning and end of each file and 
of the program as a whole, have been implemented. Smaller items, including the handlers for registration of operators and 
data types, have yet to be implemented. It is easy to add more handlers; however, administrators should 
only attempt to do so after fully comprehending how the system works. Specifically, it is crucial to route 
handlers through Hooks only when they occur inside constructs which are already rerouted inside Hooks 
(such as statements and declarations). Otherwise events could be called in the wrong order (if they miss 
going through Hooks) or not at all (if they go through Hooks without anything to release them from the 
queue).

In addition to the basic SAX style system, I have implemented one shortcut to help identify code context 
without an excess of global variables. Every time a handler is called, it sets the 
\lstinline{lastCalledFunction} through a setter. Checks can then use this variable to easily figure out 
what the previous context was without additional calls or variables (as shown in 
\autoref{lastCalledExample}).
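The mechanism amounts to a single static function pointer with a setter and getter. The following self-contained sketch uses the real names \lstinline{lastCalled_set} and \lstinline{lastCalled_get}, but reduces the handlers and \lstinline{YYLTYPE} to stubs.

```c
#include <assert.h>
#include <stddef.h>

/* Sketch of the lastCalledFunction mechanism; YYLTYPE and the
 * handler bodies are reduced to stubs for the illustration. */
typedef struct { int first_line; } YYLTYPE;
typedef void (*Handler)(YYLTYPE);

static Handler lastCalledFunction = NULL;

void lastCalled_set(Handler h) { lastCalledFunction = h; }
Handler lastCalled_get(void)   { return lastCalledFunction; }

/* Each handler records itself as the last one called. */
void beginIf(YYLTYPE location)    { (void)location; lastCalled_set(beginIf); }
void beginWhile(YYLTYPE location) { (void)location; lastCalled_set(beginWhile); }
```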

\begin{figure}
\begin{lstlisting}[language=C]
/**
 * Check if the compound statement is empty.
 */
void isCompoundStatementEmpty(YYLTYPE location, int progress) {
	static void (*context)(YYLTYPE);
	
	switch (progress) {
		case BEGINNING:
			context = lastCalled_get();
			break;
		case END:
			if (lastCalled_get() == beginCompoundStatement) {
				/* create a good error message */
				char *parent = NULL;
				
				if (context == beginIf) { parent = "if statements"; }
				else if (context == beginElse) { parent = "else statements"; }
				else if (context == beginFor) { parent = "for loops"; }
				else if (context == beginWhile) { parent = "while loops"; }
				else if (context == beginDoWhile) { parent = "doWhile loops"; }
				
				if (parent) {
					lyyerrorf(ERROR_HIGH, location, "Do not use empty %s", parent);
				} else {
					lyyerror(ERROR_HIGH, location, 
						       "Do not use empty block statements");
				}
			}
			break;
		default:
			break;
	}
}
\end{lstlisting}
\caption{Example of a Check which Utilizes lastCalledFunction}
\label{lastCalledExample}
\end{figure}
\newpage

\section{Writing the Checks}
\label{writingTheChecks}

Minimally, each check needs access to the location of the code construct in order to be able to produce 
a warning. Additionally, checks often need further information regarding the surrounding context 
of the possible error. A simple example is the check against using ``magic 
numbers''\cite[p.~19]{practice-of-programming} (see \autoref{checkWithContext}). Many programmers 
consider using numeric constants directly inside the code very poor style and recommend defining a 
symbolic constant to hold that value. Therefore \programName should only throw a warning when it 
finds a magic number inside a normal statement, as opposed to inside a declaration, where the number 
is presumably being given a name. This check therefore needs to know every time a declaration begins and ends as well 
as each time a number is found. This contextual information can be stored in global variables in the 
Checks module or passed into the individual check through its parameters (as in 
\autoref{checkWithContext}).

\begin{figure}
\begin{lstlisting}[language=C]
void isMagicNumber(YYLTYPE location, int progress, char* constant) {
	int acceptableNumbers[3] = {0, 1, 2};
	int numAcceptable = sizeof(acceptableNumbers)/sizeof(int);

	static int inDeclaration = 0;
	
	switch (progress) {
		case BEGINNING:
			inDeclaration++;
			break;
		case MIDDLE:
			if (lastCalled_get() == registerCase) {
				lyyerror(ERROR_HIGH, location, "Do not use magic numbers");
			} else if (inDeclaration == 0) {
				int number = (int)strtol(constant, (char**)NULL, 0);
				int i;

				/* see if number is within the acceptableNumbers array */
				for (i = 0; i < numAcceptable; i++) {
					if (number == acceptableNumbers[i]) {
						return;
					}
				}
				lyyerror(ERROR_HIGH, location, "Do not use magic numbers");
			}
			break;
		case END:
			inDeclaration--;
			break;
		default:
			break;
	}
}
\end{lstlisting}
\caption[\programName Check with Additional Context]{\programName check with additional context that throws a warning on encountering a magic number outside of a declaration.}
\label{checkWithContext}
\end{figure}

\newcommand{\yyerror}{\lstinline{yyerror}\xspace}
\newcommand{\lyyerror}{\lstinline{lyyerror}\xspace}
\newcommand{\lyyerrorf}{\lstinline{lyyerrorf}\xspace}

In order to throw a warning, the administrator can call one of three functions: \yyerror, \lyyerror, and 
\lyyerrorf. Each of these functions prints a warning message to \lstinline{stderr} preceded by the error's 
location in the code and an error level (as seen in \autoref{errorExample}). \yyerror and \lyyerrorf 
are each wrappers to \lyyerror, which takes in an error level (\lstinline{enum errorLevel}), a location 
(\lstinline{YYLTYPE}) and a warning message (\lstinline{char *}). In essence, \lyyerror is really a wrapper 
to \lstinline{fprintf} and defines the formatting for the warning messages and locations. Instead of 
receiving a warning message, \lyyerrorf accepts a format string and a variable argument list, which it 
uses with \lstinline{vsprintf} to create a warning message. It then passes the newly created message to 
\lyyerror with the rest of its arguments. \yyerror only accepts a warning message and calls \lyyerror with 
Bison's internal location in the code and a default high error level. This is because \yyerror is called 
internally by Bison to report syntax errors. \yyerror is the only predefined error reporting 
function; \lyyerror is an extension suggested by O'Reilly\cite{flex-and-bison} when using location tracking 
with Bison\slash Flex. The `l' represents the variable location. \lyyerrorf is modeled after \lstinline{printf} 
and deals with formatted warning messages in one consolidated function.
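The layering can be sketched as follows. The delegation from \lyyerrorf to \lyyerror mirrors the description above, but this version formats into a buffer rather than printing to \lstinline{stderr} so the result can be inspected, the exact warning format is invented for the sketch, and only \lstinline{ERROR_HIGH} among the level names is taken from the source.

```c
#include <assert.h>
#include <stdarg.h>
#include <stdio.h>
#include <string.h>

/* Sketch of the error-reporting layering; the warning format and the
 * lower level names are assumptions. */
typedef struct { int first_line; int first_column; } YYLTYPE;

enum errorLevel { ERROR_LOW, ERROR_NORMAL, ERROR_HIGH };

static char lastWarning[256];

/* The real lyyerror prints to stderr; this one formats into a buffer. */
void lyyerror(enum errorLevel level, YYLTYPE loc, const char *msg) {
	snprintf(lastWarning, sizeof lastWarning, "%d.%d: (%d) %s",
	         loc.first_line, loc.first_column, (int)level, msg);
}

/* lyyerrorf builds a message from a format string and variable
 * arguments, then delegates to lyyerror. */
void lyyerrorf(enum errorLevel level, YYLTYPE loc, const char *fmt, ...) {
	char msg[200];
	va_list ap;
	va_start(ap, fmt);
	vsnprintf(msg, sizeof msg, fmt, ap);
	va_end(ap);
	lyyerror(level, loc, msg);
}
```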

Additionally, there are two helper modules designed to facilitate writing checks: Locations and 
Comments. The Locations module contains several functions to manipulate \lstinline{YYLTYPE}s.  
Specifically it contains methods to compare locations as well as allocate, copy and free 
\lstinline{YYLTYPE}s. The Comments module tracks all the comments found while running 
\programName. Comments' contents and locations are registered through calls from the Sax layer and 
are stored in a dynamic array. Adjacent comments (like those in \autoref{adjacentComments}) are 
recognized and combined into one larger comment. Additionally the module provides the ability to 
search through this array to find comments in or near a given location (using the methods from 
Locations). Finally, the Comments module provides two useful functions for analyzing the comment itself. 
The first determines if a comment has words --- signifying that it is not a delimiting comment in the form of 
\lstinline{/*----*/}. The second checks whether a comment contains a given string with or without case 
sensitivity.
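The first of these functions might be sketched as follows. The name \lstinline{commentHasWords} is hypothetical; only the behavior (a comment counts as having words when it contains at least one letter, so a run of dashes does not qualify) comes from the description above.

```c
#include <assert.h>
#include <ctype.h>

/* Sketch of the "has words" test: a delimiting comment made only of
 * dashes or stars contains no letters and is filtered out. */
int commentHasWords(const char *text) {
	for (; *text != '\0'; text++) {
		if (isalpha((unsigned char)*text))
			return 1;
	}
	return 0;
}
```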

\begin{figure}
\begin{lstlisting}[language=C]
/* Warn against using magic numbers outside of a declaration.        */
/* (Presumably, inside a declaration, a variable will be initialized */
/* to a magic number and then used throughout the rest of the code). */
\end{lstlisting}
\caption{Example of Adjacent Comments}
\label{adjacentComments}
\end{figure}

\programName has difficulty checking some coding styles. For example, when 
checking that each global variable has a comment, it is trivial to check if there is a comment before that 
declaration. If, however, the comment appears after the declaration, as is common in some header 
files, \programName is unable to find the comment. This is because, at the end of a declaration, 
\programName has not yet read or stored the forthcoming comment. There are three solutions to this 
issue. The first is a creative hack in which one stores every location of a global variable and then 
searches for comments after each location at the end of the file. The second method would be to perform 
lookahead within the lexer to determine if a comment was about to follow. The third and preferred 
solution is to change the coding standard to have comments precede declarations.\footnote{Princeton 
University's Introduction to Programming Systems course\cite{cos217} will be changing its coding 
standards this summer to facilitate using \programName this fall and thereafter. One of the largest 
changes will be to put comments before function declarations in header files.}

\chapter{How to Use \programName}

\section{Users}
Ask your administrator where to find their version of \programName. After following 
their installation instructions, go to your working directory from the command line. Type 
``\lstinline{critTer *.c}'' (or if you only want to check one or two files, type their names instead of the 
\lstinline{*.c}). \programName will output any warnings about your code to \lstinline{stderr}.

\section{Administrators}

\subsection{Use}
In academics, the best use of \programName is as an automated grading system. It is simple to assign a 
point reduction system based on the number of warnings \programName returns over a submission. For 
example, one might deduct a two point penalty per high error level message, a point per normal error 
level message and half a point per low error level message. Not only does this reduce the work needed 
to grade a submission, but because students can pre-check their work, submissions arrive with better, 
more consistent style and are easier to read.

In industry, \programName should be used by programmers before submitting code for peer review. This 
creates an automated system to alert against any code that does not adhere to the accepted coding 
standard. In this way, \programName helps make the code base more consistent and readable without 
direct peer enforcement. Furthermore, the team can be more productive when they spend less time 
correcting their peers' stylistic errors.

\subsection{Customization}
Before customizing the code, it is important to both understand how \programName works (see 
\autoref{howItWorks}) and have looked through the code conventions in \autoref{conventions}.

\subsubsection{Add a Check}

The first step of adding a new stylistic check is to determine precisely what you wish to check and which 
handlers will give you the necessary information. Then determine how much context is needed 
to implement this check. For example, throwing a warning on C++ style comments (comments in the form of 
``\lstinline{// Comment text which ends on a newline}'') requires no context --- it is only dependent on the 
existence of that code. In contrast, checking for braces around the content of \lstinline{for} loops requires 
very minimal context:\ whether or not \lstinline{endCompoundStatement} was the last function called (i.e.\ 
right before \lstinline{endFor} was called, did \programName encounter a `\lstinline!}!' or something else). 
This minimal context can be established through the use of the \lstinline{lastCalled_get()} function which 
returns a pointer to the last Sax handler called. More complex checks may need additional context. For 
example, to check that each \lstinline{switch} statement has a \lstinline{default} case, the check needs to 
keep track of whether it has seen a \lstinline{default} within the current \lstinline{switch} block. The 
easiest way to do this is to have the check called from \lstinline{beginSwitch}, \lstinline{registerDefault} 
and \lstinline{endSwitch} and pass in a different `progress' value at each different call. Throughout the 
code, the enumerated values \lstinline{BEGINNING}, \lstinline{MIDDLE} and \lstinline{END} provide such 
values (as shown in \autoref{addCheckExample}).

\begin{figure}
\begin{lstlisting}[language=C]
/**
 * Check that each switch statement has a default case.
 */
void switchHasDefault(YYLTYPE location, int progress) {
	static int started = 0;
	static int found = 0;
	
	switch (progress) {
		case BEGINNING:
			started = 1;
			found = 0;
			break;
		case MIDDLE:
			found = 1;
			break;
		case END:
			if (!found && started) {
				lyyerror(ERROR_HIGH, location, 
					       "Always include a default in switch statements");
			}
			started = 0;
			break;
		default: 
			break;
	}
}
\end{lstlisting}
\caption[\programName Check with Contextual Processing]{\programName Check with Contextual Processing. \lstinline{switchHasDefault} is called from \lstinline{beginSwitch} with \lstinline{BEGINNING}, \lstinline{endSwitch} with \lstinline{END} and \lstinline{registerDefault} with \lstinline{MIDDLE}.}
\label{addCheckExample}
\end{figure}

After determining the relevant handlers and additional necessary parameters,\footnote{Each check 
should have at least one parameter:\ \lstinline{YYLTYPE location}. This value is necessary in order to 
produce a proper warning message. All other parameters are optional and should follow 
\lstinline{location}. Many checks can be completed using only a progress value or informative string.} 
one must actually write the check. The easiest way to deal with contextual processing is to use a 
\lstinline{switch} statement and conditionally set static local variables (\autoref{addCheckExample}). In 
order to throw a warning, one must pass \lstinline{location} to either \lstinline{lyyerror} or 
\lstinline{lyyerrorf} with an error level and either a message or format string and arguments respectively 
(for additional information see \autoref{writingTheChecks}).

\subsubsection{Add an Event Handler}

In order to add an event handler, it is necessary to edit the grammar file. This is not trivial and should be 
undertaken with great care. Having said that, creating a handler itself is actually quite simple. The first 
step is to figure out which grammar rule(s) are relevant to the event you would like to capture. In some 
cases this is trivial; in others, it takes some effort to understand what the grammar is describing. 
In my experience, the best method for working out the various rules is to perform a manual 
depth-first search through the different components of a rule until its structure becomes clear. 

After finding the grammar rule, adding a \lstinline{register} or \lstinline{end} handler is very simple: 
define the handler in sax.c/h and add the action ``\lstinline!{newHandler(@X);}!'' after the component you 
want to recognize. The \lstinline{@X} references the location of either the component (where \lstinline{X} 
= the number of the component) or the entire rule (where \lstinline{X} = `\lstinline{$}'). \autoref{grammar} 
and \autoref{hookGrammar} show examples of these calls. Adding \lstinline{begin} handlers can be 
much more difficult than the previous cases although, in principle, the process is identical, because a 
badly placed action can introduce ambiguities into the grammar.\footnote{The shift\slash reduce conflict for if-else 
statements was removed by giving explicit precedence for if-else statements over if statements as 
suggested by O'Reilly\cite[p.~188]{flex-and-bison}.} When this happens, Bison will throw several errors 
during compilation and the parser will most likely break. The first tactic to avoid this issue is to never 
place actions as the first element in a rule; they should always appear after (at least) one component. If 
this approach is insufficient, you should try burying the action inside a dummy rule (such as 
\lstinline{beginCompound}, \lstinline{beginIF}, and \lstinline{beginFOR} in \autoref{grammar}).

There are only two reasons to route your new handler through the Hooks module instead of going 
directly to Sax. The more likely reason is that the event occurs inside a construct that already goes through 
Hooks. Declarations, function signatures, and statements currently go through Hooks. This means that 
events like \lstinline{registerConst} must also go through Hooks in order to be released to Sax at the right 
time (see \autoref{callingTheChecks}). The second reason is the same reason those constructs 
go through Hooks in the first place: it is the only way of getting an accurate \lstinline{begin} handler. By this, I 
mean it is either impossible or exceptionally complicated to create a \lstinline{begin} handler in the 
correct place in the grammar such that it is executed before all of its components. These constructs 
dequeue all the appropriate previous calls once the \lstinline{h_endXX} handler is called.

If you do not need to go through Hooks, after defining the action in the grammar file and the handler in 
sax.c/h, all you need to do is call \lstinline{lastCalled_set()} from the handler. If the new 
handler needs to go through Hooks, you need to create two handlers: \lstinline{newHandler} in Sax and 
\lstinline{h_newHandler} in Hooks (where \lstinline{h_newHandler} is called from the action in the 
grammar file). If the event just needs to be released at the correct time (i.e.\ it appears within a hooked 
construct), the Hooks handler should call \lstinline{enqueueFunctionAndLocation} to enqueue the Sax 
handler. If the handler needs to dequeue some elements, it should call \lstinline{dequeueUntil}, followed 
by the sax handler and \lstinline{lastCalled_set()}. Examples of both direct and indirect routes from the 
Lexer\slash Parser to Sax are shown in \autoref{saxAndHooksHandlers}.

\begin{figure}

\begin{subfigure}[t]{\linewidth}
\caption{Direct Grammar}
\label{directGrammar}
\begin{lstlisting}[language=Caml]
iteration_statement
	: WHILE {beginWhile(@1);} '(' expression ')' statement   {endWhile(@$);}
\end{lstlisting}
\end{subfigure}

\begin{subfigure}[b]{\linewidth}
\caption{Direct Sax Level Event Handlers}
\label{directSax}
\begin{lstlisting}[language=C]
void beginWhile(YYLTYPE location) {
	lastCalled_set(beginWhile);
	functionHasEnoughLocalComments(location, MIDDLE, 0);
}

void endWhile(YYLTYPE location) {
	hasBraces(location, "while");
	lastCalled_set(endWhile);
	isLoopTooLong(location);
}
\end{lstlisting}
\end{subfigure}

\begin{subfigure}[t]{\linewidth}
\caption{Indirect Grammar}
\label{indirectGrammar}
\begin{lstlisting}[language=Caml]
parameter_list
	: parameter_declaration	{h_registerParameter(@$);}
\end{lstlisting}
\end{subfigure}

\begin{subfigure}[b]{\linewidth}
\caption{Indirect Hooks Level Event Handlers}
\label{indirectHooks}
\begin{lstlisting}[language=C]
void h_registerParameter(YYLTYPE location) {
	enqueueFunctionAndLocation(registerParameter, location);
}
\end{lstlisting}
\end{subfigure}

\begin{subfigure}[b]{\linewidth}
\caption{Indirect Sax Level Event Handlers}
\label{indirectSax}
\begin{lstlisting}[language=C]
void registerParameter(YYLTYPE location) {
	tooManyParameters(location, MIDDLE);
	arePointerParametersValidated(location, REGISTER_PARAM, NULL);
}
\end{lstlisting}
\end{subfigure}

\caption[Direct vs.\ Indirect Event Handlers]{Direct vs.\ Indirect Event Handlers. (\subref{directGrammar}, \subref{directSax}) A while statement has a direct call into the Sax module. (\subref{indirectGrammar}, \subref{indirectHooks}, \subref{indirectSax}) The registration of function parameters must route indirectly through Hooks so that the Sax-level event handlers are called after \lstinline{beginFunctionDefinition}.}
\label{saxAndHooksHandlers}
\end{figure}
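The dequeue path has no analogue in \autoref{saxAndHooksHandlers}. The following toy model illustrates the underlying mechanism only; it replaces the real Hooks queue and \lstinline{YYLTYPE} locations with a fixed-size array and plain integers, and every name in it is an invented stand-in rather than \programName's actual code.

\begin{lstlisting}[language=C]
#include <stdio.h>

/* Toy stand-in for the Hooks queue of function pointers.  The real
 * module pairs each handler with a YYLTYPE location; an int suffices
 * here. */
typedef void (*Handler)(int);

static Handler queuedHandlers[16];
static int queuedLocations[16];
static int queueLength = 0;

/* Analogue of enqueueFunctionAndLocation() */
static void enqueue(Handler handler, int location) {
   queuedHandlers[queueLength] = handler;
   queuedLocations[queueLength] = location;
   queueLength++;
}

/* Analogue of the dequeue step: flush every queued call in order */
static void dequeueAll(void) {
   int i;
   for (i = 0; i < queueLength; i++)
      queuedHandlers[i](queuedLocations[i]);
   queueLength = 0;
}

/* Invented Sax-level handlers */
static void beginFunctionDefinition(int loc) { printf("begin @%d\n", loc); }
static void registerParameter(int loc)       { printf("param @%d\n", loc); }
static void endFunctionDefinition(int loc)   { printf("end @%d\n", loc); }

int main(void) {
   /* Bottom-up parsing recognizes the parameters before it can know
    * that a function definition has begun, so the hooks queue them. */
   enqueue(registerParameter, 3);
   enqueue(registerParameter, 4);

   /* Once the end-of-construct handler fires, the whole construct is
    * known: emit begin first, flush the queue, then emit end. */
   beginFunctionDefinition(1);
   dequeueAll();
   endFunctionDefinition(9);
   return 0;
}
\end{lstlisting}

Although the parameters were recognized first, they reach the Sax level only after \lstinline{beginFunctionDefinition}, which is exactly the ordering the checks expect.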

\subsection{Compilation, Testing and Installation}

\programName includes a Makefile with targets for compilation, testing and installation. To 
compile the given version of \programName (or a customized version without additional files), simply 
type ``\lstinline{make}''. To compile \programName with additional files, edit the \lstinline{all} target to 
include the new files and then type ``\lstinline{make}''. To install \programName (i.e.\ copy it into 
\lstinline{/usr/local/bin/}), type ``\lstinline{make install}''.

Testing can be accomplished by typing ``\lstinline{make test}'' which uses two shell scripts to run the 
local version of \programName over a set of test files and then compares the new output to the previous 
output. The first script, \lstinline{runOnTests.sh}, has a set of paths over which to run \programName. The 
error messages are piped to \lstinline{output.txt} after the old output has been copied to 
\lstinline{output_old.txt}. The second script, \lstinline{checkTestOutput.sh}, uses a list of all the warning 
messages and \lstinline{grep} to break apart the output files check by check. The script then 
\lstinline{diff}s each section of the files to determine if a check has been broken. In order to add new 
checks to the testing mechanism, one simply needs to add the check's function name and a significant 
(non-variable) part of the warning message into the appropriate arrays inside \lstinline{runOnTests.sh}.

\chapter{Evaluation}

\newcommand{\human}{Dr.~Dondero\xspace}

In order to evaluate \programName's performance, Dr.\ Robert Dondero graded 10 randomly chosen 
final project submissions for Princeton University's Introduction to Programming Systems (COS 217) 
\cite{cos217}. There were over 650 assignments available, each of which was anonymized in order to 
protect the students' privacy. Dr.~Dondero has taught the course for many years, making him the perfect 
person to judge each submission's style. We judged \programName's performance against ``true'' 
errors, which were determined after both Dr.~Dondero and \programName looked at each submission. 
This post-analysis judgement of errors was necessary in order to properly take into account errors that 
Dr.~Dondero originally missed and  \programName found. We did allow \programName to perform an 
iterative analysis of the submissions as a whole, mirroring the development process we hope other 
administrators will go through as they develop their own checks. The iterative process mostly affected 
checks like \lstinline{functionIsTooLong} where there was a threshold value that needed to be tuned. 

We measured two properties:\ precision and recall. Precision represents what fraction of the output was 
in response to a true error. In our experiment, \human will always have a precision of 100\%.\footnote{It is 
possible for \human to lower this value by changing his mind regarding an error but this never happened 
in practice.} Recall is a concept from information retrieval representing the fraction of relevant documents 
that were retrieved; here we use it to mean the fraction of true errors that were found. Throughout 
\programName's development, we considered recall to be more important than precision. We prefer 
\programName to find all the true errors and produce extraneous output rather than missing 
some of the true errors and minimizing extraneous output. We think it is better for students and instructors 
to have to defend their code rather than letting possible errors slide.\footnote{The notable exception is 
the check for magic numbers in which producing warnings against the use of 0, 1 and 2 in the code 
would far outweigh the few times those numbers would be used `magically'.} The checks I wrote reflect 
this preference. However, other administrators can write more `conservative' checks which alter this 
relationship between recall and precision.
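As a minimal sketch of how these two measures are computed from the contingency tables that follow (the helper names here are ours, not part of \programName):

\begin{lstlisting}[language=C]
#include <assert.h>
#include <stdio.h>

/* Precision: the fraction of reported warnings that flag a true
 * error.  Recall: the fraction of true errors that were reported. */
static double precision(int truePositives, int falsePositives) {
   return (double) truePositives / (truePositives + falsePositives);
}

static double recall(int truePositives, int falseNegatives) {
   return (double) truePositives / (truePositives + falseNegatives);
}

int main(void) {
   /* Counts from the magic-number results: 46 true errors caught,
    * 11 extraneous warnings, 9 true errors missed. */
   printf("precision = %.1f%%\n", 100.0 * precision(46, 11));
   printf("recall    = %.1f%%\n", 100.0 * recall(46, 9));
   return 0;
}
\end{lstlisting}

Applied to the magic-number counts, these formulas yield the 80.7\% precision and 83.6\% recall reported in \autoref{resultsMagicNumbers}.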

Four checks were very useful and represented about half of all of \programName's output: magic 
numbers, validating pointer parameters, comments above global variables, and validating function 
comments. The check for magic numbers was one of the qualitatively hardest for Dr.~Dondero to perform 
because numerals do not stand out from code. This is shown in the data in 
\autoref{resultsMagicNumbers} where Dr.~Dondero missed almost as many magic numbers as he 
caught. This check is also interesting in that, by its very nature, it will miss some errors; specifically when 
0, 1 or 2 is used `magically'. 

\begin{table}
\begin{center}
\begin{tabular}{lccc}
	\toprule
	&& \multicolumn{2}{c}{Was an error present?} \\
	&& Yes & No \\ \midrule
\multirow{2}{*}{Did \programName report an error?} & Yes & 46 & 11  \\
										& No  &  9 & \\ \hdashline[2pt/4pt]
\multirow{2}{*}{Did \human report an error?} & Yes & 33 & 0 \\
								     & No  & 22 & \\
	\bottomrule
\end{tabular}
\end{center}
\caption[Test Results for Magic Numbers]{Test Results for Magic Numbers. For this check, \programName had a recall of 83.6\% ($\mathtt{\frac{46}{46+9}}$) and a precision of 80.7\% ($\mathtt{\frac{46}{46+11}}$). \human had a recall of 60.0\% ($\mathtt{\frac{33}{33+22}}$).}
\label{resultsMagicNumbers}
\end{table}

\programName checks whether functions validate pointer parameters inside an \lstinline{assert()} before 
the parameter is actually used. The check fails systematically in two cases. The first is that 
\programName will output a warning when the parameter is being properly validated through code like 
`\lstinline{if (param != NULL)}'. This is unavoidable as \programName does not detect the meaning 
behind the code. The second systematic failure involves opaque pointer types; \programName has no 
way of knowing whether a newly defined type is a wrapper around a pointer. Therefore, it does not make 
sure parameters of those types are validated. Data are shown in 
\autoref{resultsValidatePointerParameters}.
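As a concrete illustration (both functions below are invented for this sketch), the first definition satisfies the check, while the second draws an extraneous warning even though its guard is semantically sound:

\begin{lstlisting}[language=C]
#include <assert.h>
#include <string.h>

/* Passes the check: the pointer parameter appears inside an
 * assert() before it is used. */
static size_t checkedLength(const char *text) {
   assert(text != NULL);
   return strlen(text);
}

/* Draws a warning: the guard is semantically fine, but the checker
 * looks only for an assert(), not for equivalent if-tests. */
static size_t guardedLength(const char *text) {
   if (text == NULL)
      return 0;
   return strlen(text);
}

int main(void) {
   assert(checkedLength("style") == 5);
   assert(guardedLength(NULL) == 0);
   return 0;
}
\end{lstlisting}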

\begin{table}
\begin{center}
\begin{tabular}{lccc}
	\toprule
	&& \multicolumn{2}{c}{Was an error present?} \\
	&& Yes & No \\ \midrule
\multirow{2}{*}{Did \programName report an error?} & Yes & 102 & 21  \\
										& No  &  9 & \\ \hdashline[2pt/4pt]
\multirow{2}{*}{Did \human report an error?} & Yes & 74 & 0 \\
								     & No  & 37 & \\
	\bottomrule
\end{tabular}
\end{center}
\caption[Test Results for Validating Pointer Parameters]{Test Results for Validating Pointer Parameters. 
For this check, \programName had a recall of 91.9\% and a precision of 82.9\%. \human had a recall of 
66.7\%.}
\label{resultsValidatePointerParameters}
\end{table}

One of the more interesting checks looked for comments above each global variable. Yet again, 
\programName came up against systematic failure. While \programName found every missing comment, 
it also produced a large number of extraneous warnings. There were two main causes: self-commenting 
code and an uncheckable coding standard. Many times, \programName encountered global variables 
similar to ``\lstinline!enum BOOLEAN {FALSE, TRUE}!''. Declarations of this kind are self-commenting, 
and further comments would decrease readability, so warnings to add comments are extraneous. 
The other systematic failure was due to a previously acceptable standard of putting comments beneath 
declarations. As mentioned in \autoref{writingTheChecks}, \programName is unable to associate 
comments beneath a declaration to that code. Because of this, and the upcoming change to the COS 
217 coding standard, Dr.\ Dondero and I felt that it was inappropriate to include these results against 
\programName. To accomplish this, we have filtered the data to remove the 224 extraneous warnings 
due to comments placed after global declarations. \autoref{resultsCommentsAboveGlobals} shows the 
data both before and after filtering.

\begin{table}
\begin{center}
\begin{tabular}{lccc}
	\toprule
	&& \multicolumn{2}{c}{Was an error present?} \\
	&& Yes & No \\ \midrule
\multirow{2}{*}{Did \programName report an error?} & Yes & 76 & 53 (224)  \\
										& No  &  0 & \\ \hdashline[2pt/4pt]
\multirow{2}{*}{Did \human report an error?} & Yes & 62 & 0 \\
								     & No  & 14 & \\
	\bottomrule
\end{tabular}
\end{center}
\caption[Test Results for Comments Above Global Variables]{Test Results for Comments Above Global 
Variables. The number in parentheses in the upper-right corner is the raw number of errors 
reported; the first number is the number of reported errors remaining after superimposing 
the upcoming change in coding standards. For this check, with the raw data, \programName had a recall 
of 100\% and a precision of 25.3\%. After filtering the data, \programName had a precision of 58.9\%. 
\human had a recall of 81.6\%.}
\label{resultsCommentsAboveGlobals}
\end{table}

The check to validate function comments is an ideal check. One of Dr.~Dondero's biggest time 
drains in grading assignments is checking that each function has an appropriate comment. 
\programName is able to easily check whether the function has a comment and whether that comment 
refers to each of its parameters and its return value. More importantly, \programName had a perfect 
record in finding these errors within our dataset as seen in \autoref{resultsValidateFunctionComments}.  
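For instance, a comment shaped like the one below (the function itself is invented for illustration) names each parameter and states the return value, and so would satisfy the check:

\begin{lstlisting}[language=C]
#include <assert.h>

/* Return the sum of the integers from lower to upper inclusive.
 * lower is the first term and upper the last; if lower exceeds
 * upper, return 0. */
static int rangeSum(int lower, int upper) {
   int sum = 0;
   int i;
   for (i = lower; i <= upper; i++)
      sum += i;
   return sum;
}

int main(void) {
   assert(rangeSum(1, 4) == 10);
   assert(rangeSum(5, 2) == 0);
   return 0;
}
\end{lstlisting}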

\begin{table}
\begin{center}
\begin{tabular}{llccc}
	\toprule
	&&& \multicolumn{2}{c}{Was an error present?} \\
	&&& Yes & No   \\ \midrule
     \multirow{4}{2cm}{Comment Present?}
	& \multirow{2}{*}{Did \programName report an error?} & Yes & 138  & 0   \\
	&								  		   & No  & 0  & \\ \cdashline{2-5}[2pt/4pt]
	& \multirow{2}{*}{Did \human report an error?} & Yes & 131 & 0 \\
	&								        & No  & 7 & \\  \hdashline
      \multirow{4}{2cm}{Refers to Parameters?}
	& \multirow{2}{*}{Did \programName report an error?} & Yes & 36 & 0   \\
	&								  		   & No  &  0 & \\ \cdashline{2-5}[2pt/4pt]
	& \multirow{2}{*}{Did \human report an error?} & Yes & 18 & 0 \\
	&								        & No  & 18 & \\  \hdashline
     \multirow{4}{2cm}{Refers to Return Value?}
	& \multirow{2}{*}{Did \programName report an error?} & Yes & 9 &  0 \\
	&								  		   & No  &  0 & \\ \cdashline{2-5}[2pt/4pt]
	& \multirow{2}{*}{Did \human report an error?} & Yes & 4 & 0 \\
	&								        & No  & 5 & \\ \midrule
     \multirow{4}{2cm}{Totals:}
	& \multirow{2}{*}{Did \programName report an error?} & Yes & 183 &  0 \\
	&								  		   & No  &  0 & \\ \cdashline{2-5}[2pt/4pt]
	& \multirow{2}{*}{Did \human report an error?} & Yes & 153 & 0 \\
	&								        & No  & 30 & \\
	\bottomrule
\end{tabular}
\end{center}
\caption[Test Results for Validating Function Comments]{Test Results for Validating Function Comments. 
For this check, \programName had a recall of 100\% and a precision of 100\%. \human had a recall of 
83.6\%.}
\label{resultsValidateFunctionComments}
\end{table}

In addition to those mentioned above, \programName ran 16 checks over the dataset. In some cases, 
like \lstinline{useEnumNotDefine} and \lstinline{functionIsTooLong}, \programName and \human found the same set of 
errors. In other cases, as with \lstinline{neverUseCPlusPlusStyleComments} and \lstinline{switchHasDefault}, \human did not 
find any errors yet \programName found a set with 100\% precision. Several checks did not have enough 
data points to perform any meaningful analysis; for example, \lstinline{tooManyFunctionsInFile}, 
\lstinline{tooManyParameters}, \lstinline{fileIsTooLong}, \lstinline{isCompoundStatementEmpty} and \lstinline{switchCasesHaveBreaks} each 
produced fewer than four warnings. In some cases, specifically for \lstinline{isTooDeeplyNested}, \human agreed that 
\programName's output was correct even if there was not a specific remedy for the code. During the term, 
Dr.\ Dondero normally resolves this issue by telling the student that ``it would be better to refactor the code, 
but it's not clear how''.

In going over the graded assignments, it became clear that \programName had not yet been tuned to find 
all the desired types of style errors. One of the most prevalent ignored issues was lines that exceeded a 
maximum length. \programName can be configured to check for this error by editing the code in the lexer 
which tracks the column position of the source file. Another useful check would be to recognize 
duplicate definitions of the same \lstinline{struct} in different files. While this would be possible to 
implement in \programName by storing all references to \lstinline{struct}s, it would not be a simple 
process. Splint, however, does check for this type of error and can be used in conjunction with 
\programName to provide a more complete set of checks. Other issues that appeared were: missing 
\lstinline{#include} guards, needing to create an opaque pointer type from a \lstinline{struct}, and poor 
indentation.
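A line-length check of the kind described could first be prototyped outside the lexer as a simple scan over the source text. The sketch below counts columns the way the lexer's column tracker would (ignoring tab expansion); the threshold and all names are invented for the example:

\begin{lstlisting}[language=C]
#include <assert.h>
#include <stdio.h>
#include <string.h>

enum { MAX_LINE_LENGTH = 72 };   /* invented threshold */

/* Return the number of lines in src longer than MAX_LINE_LENGTH
 * columns.  The real check would live in the lexer's column
 * tracking and report a location for each overlong line. */
static int countOverlongLines(const char *src) {
   int count = 0;
   int column = 0;
   for (; *src != '\0'; src++) {
      if (*src == '\n') {
         if (column > MAX_LINE_LENGTH)
            count++;
         column = 0;
      } else {
         column++;
      }
   }
   if (column > MAX_LINE_LENGTH)   /* a final line with no newline */
      count++;
   return count;
}

int main(void) {
   char longLine[101];
   memset(longLine, 'x', 100);
   longLine[100] = '\0';
   assert(countOverlongLines("short line\n") == 0);
   assert(countOverlongLines(longLine) == 1);
   return 0;
}
\end{lstlisting}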

Overall, \programName did very well. \programName produced a total of 1226 warnings over the entire 
dataset, 951 of which flagged true stylistic errors in the code. \programName's recall was 14.5\% 
higher than \human's, and it maintained a relatively high precision of 77.2\%. After filtering, 
\programName's precision jumped to 90.0\%; Dr.~Dondero and I both judged that precision to be excellent. 
Data, before and after filtering, can be seen in \autoref{resultsAllChecks}. These tests demonstrate 
the benefit that \programName can provide in terms of automatic style checking. Even without filtering the 
data, over three quarters of \programName's output was pertinent. Furthermore, it increased error 
detection by 16.3\%.

\begin{table}
\begin{center}
\begin{tabular}{lccc}
	\toprule
	&& \multicolumn{2}{c}{Was an error present?} \\
	&& Yes & No \\ \midrule
\multirow{2}{*}{Did \programName report an error?} & Yes & 933 & 104 (275)  \\
										& No  &  18 & \\ \hdashline[2pt/4pt]
\multirow{2}{*}{Did \human report an error?} & Yes & 796 & 0 \\
								     & No  & 155 & \\
	\bottomrule
\end{tabular}
\end{center}
\caption[Test Results Across All Checks]{Test Results Across All Checks. Across all checks, 
\programName had a recall of 98.1\% and a precision of 77.2\%. After filtering the data, \programName 
had a precision of 90.0\%. \human had a recall of 83.6\%.}
\label{resultsAllChecks}
\end{table}

Given these results, it is clear \programName can help fill the automated stylistic error 
checking gap. \programName can help professors (and their TAs) grade assignments both by 
saving time and by increasing the number of stylistic errors found. \programName can also help students 
improve their coding habits and their grades. Additionally, its strong performance and 
high degree of customizability give us reason to believe that it will be useful outside of 
academia: companies with a defined coding standard can customize \programName to help clean up 
their code base and ease the code review process. 

\appendix
\appendixpage
\addappheadtotoc

\renewcommand{\chaptermark}[1]{\markboth{Appendix~\thechapter.\ #1}{}}

\singlespacing

\chapter[Predefined Check Functions]{Predefined Check Functions and their Relevant Event Handlers}
 \label{predefinedChecksFunctions}
 
\newlength\saxColSize
\addtolength\saxColSize{4.9cm}

\newlength\checkSize
\setlength\checkSize{\linewidth}
\addtolength\checkSize{-\saxColSize}
\addtolength\checkSize{-.7cm}

\newcommand{\vertSize}{3mm}

\addcontentsline{lot}{table}{~\ref{predefinedChecksFunctions} \hspace{4mm}Predefined Check Functions and their Relevant Event Handlers}
\begin{longtable}{p{\checkSize} p{\saxColSize}}
\toprule
Check Function \& Purpose & Relevant Sax Handlers \\ \midrule
\endfirsthead
\toprule
Check Function \& Purpose & Relevant Sax Handlers \\ \midrule
\endhead
\hline
\multicolumn{2}{c}{Continued}\\
\bottomrule
\endfoot
\bottomrule
\endlastfoot

		\lstinline!isFileTooLong(YYLTYPE location)! & \multirow{2}{\saxColSize}{endFile} \\*
			 Check if the file exceeds a maximum length.  \vspace{\vertSize} \\
		\lstinline!hasBraces(YYLTYPE location, char* construct)! & \multirow{2}{\saxColSize}{endWhile, endDoWhile, endFor, endIf, endElse} \\*
			 Check if the statement within an \lstinline!if! statement, \lstinline!else! clause, \lstinline!for! statement, \lstinline!while! statement, and \lstinline!do while! statement is a compound statement. \vspace{\vertSize} \\
		\lstinline!isFunctionTooLongByLines(YYLTYPE location)! & \multirow{2}{\saxColSize}{endFunctionDefinition} \\*
			 Check if a function exceeds a maximum line count. \vspace{\vertSize} \\
		\lstinline!isFunctionTooLongByStatements(YYLTYPE location, int progress)! & \multirow{2}{\saxColSize}{beginFunctionDefinition, endFunctionDefinition, endStatement} \\*
			 Checks if a function exceeds a maximum statement count. \vspace{\vertSize} \\
		\lstinline!tooManyParameters(YYLTYPE location, int progress)! & \multirow{2}{\saxColSize}{beginParameterList, registerParameter, endParameterList} \\*
			 Check if there are too many parameters in the function declaration. \vspace{\vertSize} \\
		\lstinline!neverUseCPlusPlusComments(YYLTYPE location)! & \multirow{2}{\saxColSize}{N/A (Called from the Lexer)} \\*
		Warn against using C++ style single line comments. \vspace{\vertSize} \\
		\lstinline!hasComment(YYLTYPE location, char* construct)!  & \multirow{2}{\saxColSize}{endFile (also from \lstinline!globalHasComment!)} \\*
			 Check for comments before some construct. \vspace{\vertSize} \\
		\lstinline!switchHasDefault(YYLTYPE location, int progress)! & \multirow{2}{\saxColSize}{beginSwitch, registerDefault, endSwitch} \\*
		 Check that each \lstinline!switch! statement has a \lstinline!default! case. \vspace{\vertSize} \\ 
		\lstinline!switchCasesHaveBreaks(YYLTYPE location, int progress, int isCase)! & \multirow{2}{\saxColSize}{beginSwitch, registerDefault, registerCase, registerBreak, registerReturn, registerReturnSomething, endSwitch} \\*
		 Check that each \lstinline!switch! case has a \lstinline!break! or \lstinline!return! statement. \vspace{\vertSize} \\ \\ \\ \\
		\lstinline!isTooDeeplyNested(YYLTYPE location, int progress)! & \multirow{2}{\saxColSize}{beginCompoundStatement, endCompoundStatement} \\*
		 Check whether a region of code (i.e. a compound statement) nests too deeply. \vspace{\vertSize} \\
		\lstinline!useEnumNotDefine(YYLTYPE location, int progress)! & \multirow{2}{\saxColSize}{registerDefineIntegralType} \\*
			 Warn against using \lstinline!#define! instead of \lstinline!enum! for declarations. \vspace{\vertSize} \\
		\lstinline!neverUseGotos(YYLTYPE location)! & \multirow{2}{\saxColSize}{registerGoto} \\*
		Warn against using \lstinline!goto! statements. \vspace{\vertSize} \\
		\lstinline!isVariableNameTooShort(YYLTYPE location, int progress, char* identifier)! & \multirow{2}{\saxColSize}{registerIdentifier, beginDeclaration, endDeclaration} \\*
		 Check if a variable's name exceeds a minimum length. \vspace{\vertSize} \\
		\lstinline!isMagicNumber(YYLTYPE location, int progress, char* constant)! & \multirow{2}{\saxColSize}{registerConstant, beginDeclaration, endDeclaration} \\*
			 Warn against using magic numbers outside of a declaration. \vspace{\vertSize} \\
		\lstinline!globalHasComment(YYLTYPE location, int progress)! & \multirow{2}{\saxColSize}{beginFunctionDefinition, endFunctionDefinition, endDeclaration} \\*
			 Check if each global variable has a comment. \vspace{\vertSize} \\ 
		\lstinline!isLoopTooLong(YYLTYPE location)! & \multirow{2}{\saxColSize}{endWhile, endDoWhile, endFor} \\*
			Check if a loop exceeds a maximum length. \vspace{\vertSize} \\
		\lstinline!isCompoundStatementEmpty(YYLTYPE location, int progress)!  & \multirow{2}{\saxColSize}{beginCompoundStatement, endCompoundStatement} \\*
			Check if the compound statement is empty. \vspace{\vertSize} \\
		\lstinline!tooManyFunctionsInFile(YYLTYPE location, int progress)! & \multirow{2}{\saxColSize}{endFile, beginFunctionDefinition} \\*
			 Check if there are too many functions in a file. \vspace{\vertSize} \\
		\lstinline!isIfElsePlacementValid(YYLTYPE location, int progress)! & \multirow{2}{\saxColSize}{endIf, beginElse} \\*
			Warn against poor \lstinline!if!\slash\lstinline!else! placement as defined by the Google style guide. \vspace{\vertSize} \\
		\lstinline!isFunctionCommentValid(YYLTYPE location, enum commandType command, char* text)! & \multirow{2}{\saxColSize}{beginFunctionDefinition, endFunctionDefinition, beginParameterList, endParameterList, registerIdentifier, beginCompoundStatement, registerReturnSomething} \\*
			Check if function comments have the appropriate contents. Specifically check that the comment mentions each parameter (by name) and what the function returns. \vspace{\vertSize}  \\ \\
		\lstinline!arePointerParametersValidated(YYLTYPE location, enum commandType command, char* identifier)! & \multirow{2}{\saxColSize}{beginFunctionDefinition, endFunctionDefinition, beginParameterList, registerParameter, endParameterList, registerIdentifier, registerPointer, endStatement} \\*
			Check that each pointer parameter of a function is mentioned within an \lstinline!assert()! before being used. \vspace{\vertSize} \\ \\ \\ \\
		\lstinline!doFunctionsHaveCommonPrefix(YYLTYPE location, int progress, char* identifier)! & \multirow{2}{\saxColSize}{beginProgram, endProgram, endFile, beginFunctionDefinition, endFunctionDefinition, registerIdentifier} \\*
			Check that function names contain a common prefix.  \vspace{\vertSize} \\ \\
		\lstinline!functionHasEnoughLocalComments(YYLTYPE location, int progress, int isComment)! & \multirow{2}{\saxColSize}{beginComment, beginFunctionDefinition, endFunctionDefinition, beginWhile, beginDoWhile, beginFor, beginIf, beginSwitch} \\*
			Check that there are enough local comments in the function relative to the number of control/selection statements.  \vspace{\vertSize} \\ \\ 
		\lstinline!structFieldsHaveComments(YYLTYPE location, int progress)! & \multirow{2}{\saxColSize}{beginStructDefinition, registerStructField, endStructDefinition} \\*
			Check that all fields in a struct have a comment.  \\

\vspace{1mm}
\end{longtable}

\doublespacing
\chapter{Conventions and Necessities}
\label{conventions}

\section{General}

\begin{itemize}
\item Use the DynArray class to handle persistent arrays, stacks and queues. It is a dynamically growing array which holds void pointers. This is the base implementation of how comments are stored as well as the Hook module queues.
\item To change the tab size, edit the \lstinline{count} function inside c.l.
\item To change the string representation for each error level, edit the \lstinline{yyerror} function in c.y. Currently, \lstinline{ERROR_HIGH} = ``big problem'', \lstinline{ERROR_NORMAL} = ``error'', and \lstinline{ERROR_LOW} = ``low priority''.  
\item Make sure that each instance of \lstinline{IDENTIFIER} in the grammar is either explicitly registered through \lstinline{h_registerIdentifier} or ignored by invoking \lstinline{h_ignoreIdentifierText}. Otherwise the wrong identifier text will be dequeued and passed into the Sax module. This is because the lexer will always enqueue the \lstinline{IDENTIFIER}'s text before the grammar recognizes it.
\item Throughout the code, 0 and 1 are used synonymously for false and true respectively whereas \lstinline{NULL} is used for pointers.
\end{itemize}

\section{Sax and Hooks Modules}

\begin{itemize}
\item When adding new calls, use the ``\lstinline{begin}'', ``\lstinline{register}'' and ``\lstinline{end}'' prefixes where \lstinline{begin} and \lstinline{end} are used with larger constructs and \lstinline{register} is used with constructs that are conceptually a single item (like a parameter).
\item Always make sure to add the \lstinline{lastCalled_set} function for any handlers that do not go through the Hooks module.
\item Be careful of doing memory management at a file level. Most of it is done at a program wide level because files are added onto a stack. This means that for three files, \programName sees three calls to \lstinline{beginFile} before any of the \lstinline{endFile} calls. This makes it far easier to have allocation and releasing of memory at the single \lstinline{beginProgram} and \lstinline{endProgram} calls.
\item Keep all actual checks outside of the Sax handlers to improve code readability.
\item Prefix all calls that go through the Hooks module with ``\lstinline{h_}''.
\end{itemize}

\section{Checks Module}

\begin{itemize}
\item All warning messages should be passed to their respective function without an ending newline; one will be added so that each warning appears on one line.
\item Use local static variables as opposed to global variables to determine context within checks.
\item Phrase check function names as questions if there is something to check, or as commands if there is an automatic warning. For example, if it is never acceptable to use \lstinline{goto} statements, phrase the check as a command like ``neverUseGotos''.
\item To compare entire locations, use the methods in the Locations module; comparing single elements can be done inline. 
\item Be careful not to free things which were never allocated (e.g.\ the current location passed to the Sax\slash Hooks handler), might be shared between objects (e.g.\ filename strings), or will be freed on their own later (e.g.\ comment texts and locations).
\item When adding a new check, add the function name and a significant (non-variable) part of the warning message to the arrays in \lstinline{runOnTests.sh}. If changing the warning message of an existing check, make sure to update the arrays.
\end{itemize}
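The convention of using local static variables for context can be sketched with a toy nesting check; the names, threshold, and progress values below are simplified stand-ins, not \programName's actual \lstinline{isTooDeeplyNested}:

\begin{lstlisting}[language=C]
#include <assert.h>

enum progress { BEGINNING, END };
enum { MAX_NESTING = 3 };   /* invented threshold */

/* Toy nesting check: a local static variable carries the current
 * depth between calls, so no global state is needed.  Returns 1 if
 * entering this compound statement nests too deeply. */
static int isNestingTooDeep(enum progress progress) {
   static int depth = 0;    /* context survives across calls */
   if (progress == BEGINNING)
      return ++depth > MAX_NESTING;
   depth--;
   return 0;
}

int main(void) {
   int i, warned = 0;
   for (i = 0; i < 4; i++)          /* open four nested blocks */
      warned |= isNestingTooDeep(BEGINNING);
   for (i = 0; i < 4; i++)          /* and close them again */
      isNestingTooDeep(END);
   assert(warned == 1);             /* the fourth block was too deep */
   return 0;
}
\end{lstlisting}

Because the state is local to the function, no other part of the program can disturb it, which is the point of the convention.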

\singlespacing
\chapter{Progression of Development}
\label{progressionOfDevelopment}
\doublespacing

\programName's development has been reasonably linear, consisting of slowly adding modules 
as new situations arose. Getting Bison and Flex to parse the sample C code took a significant amount of 
time at the beginning, especially added elements such as tracking \lstinline{typedef}s and 
dynamically reading header files. Once the code could be read without issue, the next step was to start 
adding some minimal checks.

The first version of \programName consisted of the Main, Lexer and Parser, and Checks modules 
(although they were not conceived as such at that point). At this stage, Checks contained four checks 
which were called directly from the actions in the grammar (\autoref{1.0grammar}). The Checks module 
also contained a single function which performed minimal comment tracking. Context was 
determined by setting an enumerated value for the statement as a whole (underlined in 
\autoref{1.0grammar}). While this initial version was functional, it involved tedious manipulation of the 
grammar as well as very little ability to perform contextual processing.

\begin{figure}
\begin{lstlisting}[language=Caml, escapechar=\%]
selection_statement
	: IF '(' expression ')' statement {%\underline{\$\$ = IF\_SELECTION;}% ifHasBraces($5, @$);}
	| IF '(' expression ')' statement ELSE statement {%\underline{\$\$ = IF\_ELSE\_SELECTION;}% ifHasBraces($5, @$); ifHasBraces($7, @$);}
	| SWITCH '(' expression ')' statement
	;
\end{lstlisting}
\caption[Version 1.0 Grammar Excerpt]{Version 1.0 Grammar Excerpt where \lstinline{ifHasBraces} is a check that determines if an \lstinline{if} statement had braces.}
\label{1.0grammar}
\end{figure}

This dependency on the grammar file motivated the change in framework in version 2. The first 
step was to adopt the SAX style of processing. The alternative was to try to use an AST and Visitor 
Pattern (like PMD and Checkstyle), but this would have required writing a large amount of framework 
code. The SAX style is relatively lightweight and was by far the simpler alternative. The 
transition from version 1 to version 2 was fairly straightforward and immediately allowed for additional 
checks to be added. The comment tracking system was moved into its own module and all of the 
previous enumerated value contexts were removed. However, in adding more checks, I realized that 
some calls into the Sax module were simply not going to be in the right order. 

This problem prompted the final version (3) which added the Hooks module. The inspiration behind the 
Hooks module was the queue of function pointers. The insight of storing the pointers to the event 
handlers made this entire implementation (and version) possible. After adding the Hooks module, I was 
able to add even more checks and focus on refactoring the code. In the process of adding more checks, 
I decided to store the last called event handler in order to minimize the number of handlers any given 
check needed to be called from. This eventually led to the \lstinline{lastCalledFunction} methods.  At 
this point I moved all the code regarding \lstinline{YYLTYPE}s into the Locations module and added 
finer-grained methods to find relevant comments by location. 

\singlespacing
\nocite{*}
\bibliographystyle{plain}
\clearpage
\phantomsection
\addcontentsline{toc}{chapter}{Bibliography}
\bibliography{Bibliography}

\end{document}  