% The following is a list of subsections that will appear within this section 
% and a brief description:
\section{Agile Practices}
\label{agilePractices}
\subsection{Design and Documentation within an Agile Environment}
\subsubsection{Design}
\label{design}
% Only design and code for what is needed.  Documentation of the design should
% not be overbearing.  Also, add a part
% about designing for reuse.  Many followers of agile software do not consider this a 
% concrete practice, but it does have some benefit for any future development within 
% the same area.  
% The following can be used as sources:
% 1. Extreme Programming Explained
% 2. The Pragmatic Programmer
% 3. Sustainable Software Development
% 4. Continuous Design
% 5. Is Design Dead?
Many people believe agile software practices concentrate solely on writing code; in fact,
agile supports design.  Software should be designed to be easy to use, maintain, extend and 
understand~\cite{sustainable}.  In addition, design should allow the software to change.
Design is often considered the most frustrating part of the software development process, due
to scheduling pressures, people coming and going, and changes to the product.  In addition, many projects
run the risk of being over- or under-designed.  Design yields the most benefit
when a balance is struck between too much
and too little.  

Extreme Programming (XP) promotes what has come to be known as ``Incremental Design''~\cite{extremeExplained}.
Incremental design is based on the belief that a project will run more smoothly if the design is revisited
on a daily basis.  XP favors less up-front design so that software can be developed, feedback can be gathered, and the design 
and software can be updated.  Hence, XP believes in designing always~\cite{extremeExplained}.   
Another factor that must be considered during the design phase is cost.  Too much up-front design can 
become expensive if it
needs to be revised in the future.  Too little design can result in code oscillation~\cite{sustainable}, in which
the same code is modified repeatedly because there was not enough up-front design; this
also drives up cost.  

Incremental design is enabled by many of the agile practices outlined throughout this section.  
However, the downfall of incremental design 
can be too little up-front design.  A project needs a good balance
between planned design and incremental 
design.  In addition, an appropriate amount of time for documenting the design should be budgeted.  
Documentation in general will be covered further in Section~\ref{documentation}.  

A good start toward achieving balance in the design phase is to form a vision of the design.  A picture
of the software pieces with their associated tasks helps communicate the goal of the project, and
the Unified Modeling Language (UML) is well suited to this task.  Although methodologies such as XP de-emphasize diagrams,
many people find software diagrams useful: they can effectively communicate the important aspects
of the software being built before any coding has started~\cite{designDead}.  When creating UML diagrams and documents, 
keep the following in mind~\cite{designDead}:
\begin{itemize}
\singlespacing
\item ``Keep them short''
\item ``Don't try to address all the details, just the important ones''
\item ``Treat the resulting design as a sketch, not as a final design''
\end{itemize}
\doublespacing
The last point is important. Sometimes during the coding phase of a project, it is discovered that the initial design
was incorrect.  If the design is being captured in documents, those documents should be updated at that time.  
In industry, captured designs are often expected to be kept up-to-date.  Keeping the documentation
in an easily accessible location makes regular updates practical, and the design documents 
also remain available for other developers and the customer to view.  

Within UML, there is a wide variety of available diagrams and documentation techniques.  
For a scripting language such as Tcl, sequence diagrams and use case description documents are useful
in the design phase.  These two UML artifacts are easy to work from because Tcl interprets one line at 
a time, so a sequence can easily be formulated.  Section~\ref{documentation} contains an example sequence diagram 
and use case description
document.  

Perhaps the most important aspect of the design phase is to keep it simple.  In other words, only design and code for what will be 
needed~\cite{sustainable}.  This does not necessarily mean designing only for what is needed today.  The development team may 
recognize features that will eventually be needed but do not benefit the customer immediately.  By planning for the future, 
the adjustment will be easier when the feature becomes necessary.  A good example
is adding a database to a project: the team may see the database as necessary in the future but not need it today.  By planning
for it now, the team avoids the difficulties of adding a database late in the project.  

In the end, the code should be as simple as possible.  Adding features to and building upon a complex system is difficult, time-consuming
and costly~\cite{designDead}.  Kent Beck outlined the following principles for keeping the 
design simple~\cite{extremeExplained}:
\begin{itemize}
\singlespacing
\item Appropriate for the people who need to work with it
\item Reveals the intention of the product
\item No duplication
\item Fewest possible number of elements
\end{itemize}
\doublespacing
It is also important not to become fixated on perfecting the design.  The process of ``Continuous Design,'' 
which is derived from simple design, uses refactoring throughout
the life of a project to improve the design~\cite{continuousDesign}.  
Through frequent refactoring, continuous integration and unit testing, continuous design can help a project that is striving for success.  
These practices are covered thoroughly in Section~\ref{developingAgile}.  

Many followers of agile software development do not consider designing software for reuse a concrete agile practice.  However, there
are many benefits to creating and using reusable software.  Since reusable software has been time tested, it can be counted on to
have fewer defects and cleaner interfaces~\cite{sustainable}.  By creating an environment that fosters software reuse, knowledge will
not be duplicated and new ideas can be formed~\cite{pragmaticProgrammer}.  

Although some developers do not see design as a key element of agile software development, design has the capability
of making a project stronger.  Whether the team elects for more up-front design or a more incremental approach,
keeping the design simple is important.  Simple designs make changes easier to handle, whereas modifications to a complex system
are difficult and a major burden on the development team.  Lastly, reusable software allows the development team to pursue
fresh ideas while knowing a thoroughly tested product is readily available.  

\subsubsection{Documentation}
\label{documentation}
% Discuss how some situations require documentation.  
% Some initial design can be done by using UML practices such as Use Cases.  
% Use tools like JavaDoc (refer to tcl's autodoc tools available)
% to extract design details contained in the source code.  This allows more time 
% to actually developing and creating software rather than creating documents that 
% may never be seen again.  
% Just remember to budget time for this and may be try to hold it off the creation of 
% large documents until the software is about ready for release.  
% This allows the product to be developed and with the use of the autodoc utility, the
% documents can actually point to the source code for design details.  
% The following can be used as sources:
% 1. Using UML Collaboration Diagrams for Static Checking and Test Generation
% 2. Extending Use Cases throughout the Software Lifecycle
% 3. Is Design Dead?
% Display example sequence diagram, and use case description document.  
Documentation can serve as a form of communication between the development team and the
customer.  The process of creating documentation can be valuable to a team because it 
fosters collaboration, understanding and creativity~\cite{sustainable}.  However, 
the resulting documents become less valuable over time because they must be maintained.  
In addition, development teams can forget the original purpose behind creating the documents
in the first place: to develop a working product.  

Many agile methodologies hold that documentation is unnecessary in the software
development process.  The belief is that it is wasted effort to create documents that may 
never be seen by anyone except their authors.  XP holds that only
the source code and tests should be permanent artifacts of the project~\cite{extremeExplained};
any other form of documentation should be derived from the code and tests.  

Generating documents by extracting formatted comments from source code has become popular, and tools for doing so
are available for many programming languages.  These tools typically extract documentation from comments embedded
in the source code and generate a set of web pages.  The best-known extraction
tool is Sun Microsystems' JavaDoc utility for the Java programming language.  Putting
design details into the source code allows these extraction utilities to create
a solid set of design documents that can be placed online and viewed anywhere with a 
web browser.  

The Tcl programming language also has an open source code documentation tool known
as AutoDOC~\cite{tclAutodoc}.  AutoDOC was created by Andreas Kupries to convert embedded 
comments within Tcl code into a cross-referenced set of web pages.  The utility AutoDOC
itself was written in Tcl and, since it is an open source project, can be updated
to fit a development team's needs.  AutoDOC provides a wide variety of information
that can be output to the set of web pages including version, author, bugs, specific
comments, notes, variable information and any dangers or shortfalls associated with
the source code.  An Eclipse IDE plug-in, TUnit, that was developed as part of this project has the ability to create
new files with appropriately formatted comments for use with AutoDOC.  TUnit is described
in further detail in Section~\ref{availableTools}.  

Within industry, many companies still rely heavily on up-front design.  They are slowly becoming
more agile in nature, but the need for documentation has not completely subsided.  
Design and requirements documents are often needed to further define a product.  Agile is not 
completely opposed to documentation; it must simply be recognized that documentation has a 
cost and is often not cost effective.  If the team recognizes the cost
and schedules time within the development process to create and modify documents, then there should
not be a problem.  An alternative approach is to create a set of high-level documents
at the end of the project~\cite{extremeField}.  That way, a working product is available first
and the documents follow from it.  

In addition to the typical design and requirement specifications, UML can be used as well
to better define a product.  In a systems test industry, a systems engineer is typically the 
customer.  The systems engineer would write a set of requirements documents that define
how to fulfill the test requirements of a system.  The requirements documents can be used
to extract a set of use cases.  This process is known as use case identification~\cite{extremeField}.  
Once the use cases are identified, a use case description document and sequence diagram can
be developed.  

Use case descriptions can act as an aid for describing software functionality as it relates to
the customer~\cite{useCasesLifecycle}.  Use cases also accurately 
reflect the requirements and design of a software module.  Taken as a whole, the use cases 
represent the requirements specified by the systems engineer.  Figure~\ref{fig:useCase} shows 
an example of a use case description document with a synopsis for each section of the document.  
\begin{figure}[htp]
\centering
\includegraphics[scale=0.90]{../images/useCase.jpg}
\caption{\small \sl An example Use Case Description Document.} 
\label{fig:useCase}
\end{figure}
Applying these use case description documents to Tcl, the description section of
the use case can accurately describe how a Tcl module can
be set up to perform the requirement specified by the systems engineer.  Furthermore, the description
section can be used to generate a sequence diagram.  

UML sequence diagrams visually model the flow of logic within a system.  A 
sequence diagram is typically used to model a potential way the system will be used, and the logic
it contains comes from a use case.  Creating a sequence diagram
for a Tcl procedure is fairly easy to do: procedure calls made from a main Tcl script 
can readily model what is outlined in the use
case description document.  Figure~\ref{fig:sequenceDiagram} shows an example sequence diagram
specifically created for the Tcl scripting language.  
\begin{figure}[htp]
\centering
\includegraphics[scale=0.85]{../images/sequenceDiagram.jpg}
\caption{\small \sl An example Sequence Diagram for the Tcl scripting language.} 
\label{fig:sequenceDiagram}
\end{figure}
Tcl does not require multiple levels of procedure calls, but this example shows procedures nested
two levels deep.  
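The call structure such a diagram captures can be illustrated with a small Tcl script.  This is
an illustrative sketch only; the procedure names are hypothetical, chosen to mirror the two-level
nesting described above:

```tcl
# Hypothetical example: a main script calling procedures nested two
# levels deep, mirroring the message flow a sequence diagram captures.

# Second-level procedure: formats a single result string.
proc formatResult {name passed} {
    if {$passed} {
        return "$name: PASS"
    }
    return "$name: FAIL"
}

# First-level procedure: performs one step and delegates formatting.
proc runStep {name} {
    set passed [expr {[string length $name] > 0}]
    return [formatResult $name $passed]
}

# Main script: the top of the sequence, invoking each call in order.
foreach step {setup execute teardown} {
    puts [runStep $step]
}
```

Each `puts` in the main loop corresponds to one top-level message in the diagram, with the
`runStep` to `formatResult` call forming the second level of nesting.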

\subsection{Developing Agile Software}
\label{developingAgile}
% This section will discuss practices typically associated with the coding of software
% within an agile environment.  

\subsubsection{Using Available Tools}
\label{availableTools}
% First discuss tools that can be used, such as Configuration
% Management tools.  This allows for everyone to be the owner of code, rather than having
% it stored on a single person's machine.  This also allows everyone to be reponsible for the
% software.  Other tools available include Apache Ant for builds and the Eclipse IDE for
% development.  Refer to the appropriate section for more information on how these tools can be 
% applied to a scripting language environment such as tcl.  
A rather easy agile practice to follow is to use readily available tools
that enable better defect prevention as well as
better communication.  The most basic of these tools is a simple text editor.
A wide variety of text editors is available.  TextPad for Windows
stands out because it offers a wide variety of configuration options.  A
particularly useful option for programmers is syntax highlighting, which
is available even for the Tcl programming language.  

According to \underline{The Pragmatic Programmer}, there are three main
benefits of keeping data in plain text~\cite{pragmaticProgrammer}:
\begin{itemize}
\singlespacing
\item ``Insurance against obsolescence''
\item ``Leverage''
\item ``Easier testing''
\end{itemize}
\doublespacing
Insurance against obsolescence means that data captured in plain text will
outlive other forms of data and the applications that created them; the data
can still be used long after the original application
is obsolete.  Leverage means that plain text can be processed by
almost any conceivable tool in the world of computer science.  And lastly, testing
becomes easier when the tests are written in plain text, because such tests
are easy to add, update and modify.  

Another set of useful tools comes in what is known as an office suite.  Office
suites typically include the basic tools for editing documents, spreadsheets,
and presentations.  Microsoft Office and OpenOffice are two of the best-known
office suites; Microsoft Office is a commercial product, whereas OpenOffice is
open source and freely available on all platforms.  These office suites
are particularly useful in the design phase.  The design documents shown in
Figures~\ref{fig:useCase} and~\ref{fig:sequenceDiagram} were developed using
Microsoft Office, but could equally have been created using
OpenOffice.  

Another important tool in the software development phase is a configuration
management (CM) system.  CM systems are widely accepted and used to track
every change to source code and documentation.  A good CM tool allows
simple check-in and check-out of source code and documents over a network
connection~\cite{sustainable}, and can even merge
changes together.  The ability to track changes lets developers
undo any mistakes they have made and see which change
may have broken the system.  This makes the whole team responsible for the code and documentation, not just a single person.  
These tools can also report very specific information about each change.  
A number of CM tools are available, both free and off the shelf.  CVS and
Subversion, a compelling replacement for CVS, are both open source and freely
available on the Internet.  

The next tool that can be used in parallel with configuration management is an
automated build process.  Continuous integration, which is covered in detail in
Section~\ref{continuousIntegration}, uses the automated build process as part of
a practice to integrate work frequently to reduce integration problems at a
later time~\cite{continuousIntegration}.  The automated build will be used to check the project out of the
configuration management tool, build the project, and run the set of automated
tests on the project.  Automated testing will be covered in depth within
Section~\ref{testing}.  By keeping the build automated, the development team
can easily track problems, while not having to manually build and test the project
themselves.  

The most notable automated build tool available today is Apache's Ant.  Ant is
open source and freely available.  Using an Ant build allows for the
following~\cite{pragmaticProject}: 
\begin{itemize}
\singlespacing
\item Portability.  Ant can run on any machine because it detects the
operating system it is on.  The whole team can therefore use a common build
file.  
\item Ant tracks file dependencies.  When compiling, only files that
have changed need to be recompiled.  
\item Ant includes a comprehensive set of tasks.  These tasks include executing
JUnit tests or opening a Tcl shell for use.  
\end{itemize}
\doublespacing
An example Ant build for the Tcl programming language is available in
Appendix~\ref{sampleBuild}.  The check-out of source code is not shown in the sample
build because of the many options for CM tools, but the build file includes the clean,
prepare, compile, release and test
targets.  The clean target deletes the build, while prepare creates the new
build directory; prepare is also commonly used to check out the source code.  Since Tcl
is a scripting language, the compile target simply copies the code to a build
directory.  The release target puts the build into a ready-to-release state,
whereas the test target runs the unit tests against the build.  All of this is
done in an automated fashion and is painless for the user.  
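A minimal sketch of such a build file is shown below.  The directory names and the
\texttt{tclsh} invocation are illustrative assumptions, not the project's actual build file:

```xml
<!-- Minimal illustrative Ant build for a Tcl project; paths and the
     test-runner script name are hypothetical. -->
<project name="tcl-project" default="test" basedir=".">

  <!-- clean: delete the previous build -->
  <target name="clean">
    <delete dir="build"/>
  </target>

  <!-- prepare: create a fresh build directory
       (a CM check-out task would also go here) -->
  <target name="prepare" depends="clean">
    <mkdir dir="build"/>
  </target>

  <!-- compile: for a scripting language, copy sources into the build -->
  <target name="compile" depends="prepare">
    <copy todir="build">
      <fileset dir="src" includes="**/*.tcl"/>
    </copy>
  </target>

  <!-- test: run the unit test suite against the build -->
  <target name="test" depends="compile">
    <exec executable="tclsh">
      <arg value="build/allTests.tcl"/>
    </exec>
  </target>

  <!-- release: package the tested build for delivery -->
  <target name="release" depends="test">
    <zip destfile="release.zip" basedir="build"/>
  </target>
</project>
```

Because each target declares its dependencies, invoking `ant release` runs clean, prepare,
compile and test in order before packaging the build.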

The last tool has the capability of incorporating some of the above tools within
one application.  The Eclipse open development platform is a vendor-neutral application
used for building software.  Eclipse gives developers freedom of choice in a
multi-platform, multi-language and multi-vendor environment.  Eclipse comes
with many of the tools outlined in this section built in: a text editor that
can be configured to work with multiple programming languages, CVS support, and
Apache Ant support.  

The appealing part about Eclipse is that it allows for developers to create
plug-ins to expand the platform even further.  Eclipse and the Subversion
plug-in were widely used for this project.  If something is not available within
Eclipse and there are no plug-ins available, then there is nothing stopping a
developer from creating a plug-in.  This was the motivation behind
TUnit.  

``t-unit'' is an automated unit testing framework that was created for the Tcl
programming language.  The design and development of ``t-unit'' is outlined in
detail within Section~\ref{testing}.  One of the goals of the ``t-unit'' package
was to be as similar as possible to the JUnit Java unit test
framework~\cite{junitCook}.  JUnit has been available as an Eclipse plug-in since
version 1.0, and is now available with Eclipse without a separate download.  In
order to incorporate ``t-unit'' into Eclipse, a plug-in needed to be developed. 
Hence, TUnit was born.  The goal of TUnit was to be able to use Eclipse for all
Tcl related project needs.  

The first part of the TUnit plug-in to be developed was the Tcl file editor. 
The editor includes the following features: syntax highlighting,
auto-indentation and content assistance (i.e., completion of a command).  All of the
editor preferences, including syntax coloring, soft tabbing and content assistance,
can be configured from the TUnit preferences menu.  

Once the editor was fully functional, an Eclipse perspective was created for the
TUnit environment.  An Eclipse perspective groups views and actions for a
particular task, which in this case was to house a Tcl
development environment~\cite{commercialEclipse}.  Within the TUnit
perspective, a set of wizards was created.  The ``Tcl Project'' wizard creates a new
project within TUnit and creates all the necessary files in order to incorporate
the ``t-unit'' package.  The ``Tcl Package'' wizard creates a new Tcl package
and updates the project to make use of the new package.  And lastly, the ``Tcl
File'' wizard creates a new Tcl file with AutoDOC comments filled in.  

To
further distinguish packages and files, the Tcl ``Namespace Explorer'' was
developed.  The ``Namespace Explorer'' view shows the available Tcl packages and
the procedures available within each package.  Further, ``Namespace Explorer''
allows the user to see if their code meets current Tcl programming standards,
covered in detail in Section~\ref{standards}.  

The most important need of the TUnit plug-in was for use with the ``t-unit''
unit test framework.  Within TUnit, any file with a .tcl extension can be
executed as a unit test suite.  Once the user begins executing the file, the
plug-in analyzes the results of the run and reports the results via a popup. 
In addition, the full test results can be viewed within the console view of the
TUnit perspective.  

The TUnit plug-in was made available as an open source project in September
2006.  TUnit and its set of unit tests are hosted on Google Code under the
GNU Lesser General
Public License. Figure~\ref{fig:TUnit} shows the TUnit plug-in within Eclipse.  
\begin{figure}[htp] 
\centering
\includegraphics[scale=0.95]{../images/TUnit.jpg}
\caption{\small \sl The TUnit Plug-in.} 
\label{fig:TUnit}
\end{figure}
The
current release is 1.0.0 and is available from the TUnit homepage.  More
information about the Google Code repository and the TUnit homepage is available in
Appendix~\ref{sourceCode}.  Within weeks of the initial release, an alternate
plug-in with goals similar to TUnit's was discovered.  

The Eclipse Dynamic
Languages Toolkit (DLTK) project was created to develop a framework for tool
vendors, researchers, and end-users who rely on dynamic languages such as Tcl~\cite{eclipseDLTK}.
The plug-in was developed by Xored Software Inc., a company based in
Russia.   In its current state the plug-in includes an editor, package
explorer, and
wizards similar to TUnit's. Since the group was moving at a good pace, it was
hoped that the ``t-unit'' package and the goal of running unit tests from within
Eclipse could be incorporated into the project.  Hence the TUnit extension
was developed.  

Discussions about an extension to the Eclipse DLTK project were slow at first. 
With the language barrier acting as an obstacle, work was delayed for days while
each group tried to figure out what the other group wanted.  However, after a
few weeks of preliminary discussions, the TUnit extension was
created as an add-on to the Eclipse DLTK plug-in. 

Eclipse DLTK did not include a
unit test framework as part of the plug-in.  Since the DLTK plug-in was under the
Eclipse Public License, the code used to develop the JUnit plug-in
could serve as a baseline for the TUnit extension.  As a result, the TUnit
extension
includes a separate launch configuration for TUnit and a view resembling
the JUnit plug-in's.  When a unit test is launched, the TUnit extension
view is displayed and the results are shown immediately within the view. 
Runs, failures and errors are counted, and each failure is listed in the same
view along with the test file that produced the error and the line number.  

The TUnit extension was made available as an open source project in November
2006.  The source code is currently in Loyola Marymount's private CVS repository. 
However, the source code can also be obtained by downloading the plug-in from the
TUnit homepage. Lastly, the TUnit extension is made available under the Eclipse
Public License.  Figure~\ref{fig:TUnitExtension} shows the TUnit extension view within
Eclipse. 
\begin{figure}[htp] 
\centering
\includegraphics[scale=0.95]{../images/TUnitExtension.jpg}
\caption{\small \sl The TUnit Extension View for the Eclipse DLTK project.} 
\label{fig:TUnitExtension}
\end{figure}
The
current release is 1.0.0 and is available from the TUnit homepage.  More
information about the repository and TUnit homepage is available in
Appendix~\ref{sourceCode}.  Lastly, Section~\ref{futureResearch} outlines some of
the upcoming goals for the TUnit extension.  

\subsubsection{Paired Programming and Code Reviews}
\label{pairedProgramming}
% Two more practices that 
% should be used during the development process are code reviews and refactoring.  At this 
% point it can be explained how code reviews can be used either with or without the paired
% programming practice.  Explain that many companies in the software industry are still
% reluctant to allow for paired programming.  In addition, give some of the negatives of
% using paired programming.  But code reviews will allow for everyone to become familiar 
% a particular piece of software also.  
Paired programming involves two people working together at one computer, arranged so that each person can sit comfortably
and work can get done.  
Paired programming is a primary XP practice.  Kent Beck believes the following benefits can be seen
when paired programming is applied to a project~\cite{extremeExplained}:
\begin{itemize}
\singlespacing
\item Keep one another on task
\item Refine the system
\item Come up with ideas
\item When one partner is stuck the other can take over, lowering the stress level
\item Hold each other responsible for following the other practices the team is following
\end{itemize}
\doublespacing
Paired programming within a project offers better communication, more feedback, a simpler system, and 
better error catching.  The result can be a program with fewer defects.  

Working in a pair does not necessarily mean that a person cannot work alone some of the time,
but it is better if most of the programming is done as a pair to better prevent
defects in the software.  
In addition, the code that is developed is shared among the team.  Code should never be a single
person's problem, but the responsibility of the whole team.  Using a configuration management system together with 
paired programming creates more accountability across the whole team for the code base and the documentation.  
In addition to allowing everyone ownership of the code base, paired programming allows two differing perspectives 
to come together to actively solve a problem.  And lastly, paired programming is
a great practice to 
introduce new team members to a project.  By sitting together, a new developer can better understand the goals
and intricacies of an existing program.  

Paired programming does have some downsides as well.  First, personal space can become an issue when
working in close contact with a fellow programmer: some cultures communicate better at
close range, whereas others prefer to keep their distance.  A second issue with
paired programming is having to watch the other person type, which can be slow and error-prone~\cite{sustainable}
and frustrating for the partner who has to watch.  It is therefore better not to 
program in pairs all of the time, keeping the stress within the team low.  

Another problem with paired programming is that much of industry has not bought into it.  Companies
view two developers at one desk as a waste of a resource, 
feeling that less work is getting done.  In fact, paired programming can prevent defects 
from ending up in the software, saving the time that would be spent rewriting code to
remove them.  Nevertheless, these companies typically try to prevent defects from ending up in
software solely by holding code reviews.  

Another practice that can be done with or without paired programming is code reviews.  Code reviews in 
conjunction with paired programming allow for the whole team to own the code.  When the team owns the code,
everyone will know the behavior of the whole system.  The team will also be able to foresee future problems 
when new code is integrated.  Code reviews alone allow for the team to learn and share ideas about the 
project.  

The problem with holding code reviews without following the paired programming practice is that 
errors in code logic are typically missed.  The best time to find logic errors is when the code is being 
written~\cite{sustainable}.  When code is presented in a code review setting, there is too much code 
to accurately find logic issues.  A good mix of paired programming and lightweight code reviews will
better prevent defects from showing up in the software development process.  

\subsubsection{Coding Conventions}
\label{standards}
% Creating standards makes it easy for a new team member to read and understand existing code.  
% Show the standards that are actually online and typically used with the tcl programming
% envrionment.  
Before a team begins coding, a set of coding conventions should be in place.  
These conventions should be written down and captured in CM.  If a design document
is being created as part of the project, the coding standards can be placed in it.  By
adopting a coding standard, the team makes the code easy to read, especially for 
a new team member.  Lastly, when one team member edits another's source code, 
they will not have to struggle to understand the way it is formatted.  

Ray Johnson of Sun Microsystems created a style guide for the Tcl programming language~\cite{styleGuide}.  
Many of the conventions described in Johnson's style guide were used during
the development of the ``t-unit'' 
package described in Section~\ref{testing}.  The conventions Johnson outlines ensure 
that each file and procedure is adequately documented. Secondly,
the conventions guarantee that all of the Tcl code has a uniform style, which makes it
easier to use, read and maintain. Lastly, the conventions help avoid
common coding practices that can lead to mistakes, such as building lists by hand
instead of using the list-building procedures.
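The list-building pitfall can be seen in a short sketch: concatenating strings by hand breaks
as soon as an element contains whitespace, whereas the list commands quote elements correctly.
The file name used here is an arbitrary illustration:

```tcl
set file "my report.txt"

# Building a "list" by hand: the embedded space splits one element in two.
set byHand "$file other.txt"
puts [llength $byHand]    ;# 3 -- "my", "report.txt", "other.txt"

# Using the list-building commands quotes the element properly.
set proper [list $file]
lappend proper other.txt
puts [llength $proper]    ;# 2 -- "my report.txt", "other.txt"
```

The hand-built string silently corrupts the data, while `list` and `lappend` preserve each
element regardless of its contents.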

The Eclipse ``Namespace Explorer'' view described in Section~\ref{availableTools} assumes that the
package and namespace conventions Johnson outlines are followed.  Tcl packages are the units in which code is
developed and distributed.  A single package is typically developed by a single person or group
and distributed as a unit, such as ``t-unit.''  Each package should have its own unique name.  Package names
are typically a single word in all lowercase letters.  Package names consisting of multiple words should start with a lowercase
letter, with each subsequent word capitalized.  

A Tcl namespace hides the internal structure of a package.  Using a namespace helps avoid name collisions
and provides an easier method for managing packages.  When creating a new namespace, it should
be given the same name as the package it belongs to.  
The TUnit Eclipse plug-in has a package wizard that creates a new namespace along with the package, both of the same name.  
The package wizard assumes that the package will consist of multiple files.  A file with the name of the package
and a .tcl extension will be created to properly export all the variables and procedures to be used by the package.  
Then when the TUnit file wizard is used, the file is created within the proper package and namespace.  
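
As a rough sketch of the layout just described (the package name and procedure
are hypothetical), a package file exports its public procedures from a
namespace that shares the package's name:

\singlespacing
\begin{verbatim}
# myUtils.tcl -- illustrative package file; the namespace name
# matches the package name, per Johnson's convention.
package provide myUtils 1.0

namespace eval ::myUtils {
    # Export only the procedures intended for use by callers.
    namespace export add1
}

# Internal structure stays hidden inside the namespace.
proc ::myUtils::add1 {value} {
    return [expr {$value + 1}]
}
\end{verbatim}
\doublespacing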

Johnson also outlines a set of guidelines for creating file and procedure headers.  The file header should contain 
relevant information about the contents of the file.  A procedure header gives overall documentation for the 
procedure that follows and should contain enough information for the caller to be able to use the 
procedure.  Johnson created the style guide in 1997, before Andreas Kupries created the AutoDOC application in
2001~\cite{tclAutodoc}.  AutoDOC defines a special format for file and procedure headers; the 
header information can then be extracted into a set of web pages for documentation purposes.  
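
A procedure header in the spirit of Johnson's guidelines might look like the
following sketch (the procedure is hypothetical, and the exact AutoDOC markup
may differ):

\singlespacing
\begin{verbatim}
#--------------------------------------------------------------
# add1 --
#
#     Adds one to the given value.
#
# Arguments:
#     value    The number to be incremented.
#
# Results:
#     Returns the argument incremented by one.
#
# Side effects:
#     None.
#--------------------------------------------------------------
proc add1 {value} {
    return [expr {$value + 1}]
}
\end{verbatim}
\doublespacing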

There are many other guidelines outlined in Johnson's style guide.  The naming conventions that Johnson 
discusses are typical of most programming languages.  In addition, many of the basic syntax standards are
similar to guidelines developed for other languages.  One convention Johnson outlines that
other language guides do not usually address is spacing.  Johnson recommends using four spaces rather than tabs
for indentation.  Many editors, including the TUnit editor, can be configured to use spaces instead of tabs,
which allows the source code to be edited in any editor without losing
its formatting.  

\subsubsection{Refactoring}
\label{refactoring}
% Lastly, refactoring can improve the structure of
% code without changing its behavior.  Discuss that most methods available for refactoring
% are for object oriented languages, but that some of these methods can still be applied
% to a scripting language environment.  
Refactoring is the rewriting, reworking, and re-architecting of source code in
a way that does not alter the external behavior of the code but improves its
internal structure.  Refactoring is typically done in a
disciplined, step-by-step manner with unit tests in place to catch problems that
may arise.  It is important to distinguish between rewriting code and
refactoring code: rewriting code may not improve its internal structure,
whereas the purpose of refactoring is to do just that.  

There are multiple reasons to refactor source code, including the following:
\begin{itemize}
\singlespacing
\item Duplication of code throughout the project
\item The design could be improved
\item Outdated code
\item Performance needs to be improved
\item The software is difficult to understand
\item To find hidden bugs
\item To develop faster
\end{itemize}
\doublespacing
The first four reasons listed are self-explanatory; the last three
need further explanation.  When software is difficult to
understand, a future developer may not be able to grasp the
intention of the code.  This slows down the development process, as the new
team member has to take the time to get caught up.  In addition to creating
a better understanding, refactoring can lead to finding hidden bugs.  When
refactoring, a developer becomes more familiar with the intent of the code and
can see what it actually does; at the same time, bugs that were not seen before
come to the forefront.  Lastly, refactoring helps developers
program faster.  Since refactoring leads to a better design, new
code can be added with less difficulty.  If the design of the
system were poor, the developer would have to take the time to understand the
code's intent before being able to add any new features.  

Within industry, managers often do not completely understand the need for refactoring.
If the code is working as expected, what is the point of improving it?  In
addition, the time it takes to refactor could be spent elsewhere, such
as adding new features.  As a result, it is up to the developer to sell the
importance of refactoring.  By discussing the improvement in the quality of the
software after refactoring, a manager may become more comfortable.  If the
time is not available to refactor immediately, refactoring can be added to the
schedule or to the documentation, specifying the portions of the source code
that could use a refactoring.  Going a step further and noting which
refactoring methods could be applied will help a future developer better
understand the intention of the code.  

In 1999, Martin Fowler wrote a book cataloging the available
refactorings~\cite{refactorFowler}.  A majority of the refactorings outlined can
only be applied to an object-oriented programming language.  The book offers
a safe, step-by-step manner of performing refactorings.  The
Eclipse development platform described in Section~\ref{availableTools} has added
support for refactoring Java programs from within the Java editor.  A future
part of the Eclipse DLTK project described in
Section~\ref{availableTools} is to add refactoring support for Tcl.  

The following is a list of refactorings from Fowler's book that may be applied
to a scripting language, such as Tcl:
\begin{itemize}
\singlespacing
\item ``Add Parameter''
\item ``Consolidate Conditional Expression''
\item ``Consolidate Duplicate Conditional Fragments''
\item ``Decompose Conditional''
\item ``Extract Method''
\item ``Remove Parameter''
\item ``Rename Method''
\item ``Split Temporary Variable''
\item ``Substitute Algorithm''
\end{itemize}
\doublespacing
The ``Add Parameter'' refactoring is typically done when a procedure needs more
information from the caller.  ``Remove Parameter'' is the exact opposite: when
a procedure has a parameter that is no longer being used, remove it.  ``Extract
Method'' is a good refactoring when a code fragment is too long or a comment is
needed to understand its purpose.  By extracting the code and placing it in its
own method, the code becomes easier to understand.  It is important to give the method a
name that describes its purpose; otherwise, a ``Rename Method'' refactoring may
be needed.  This refactoring is typically done when a procedure name does not
adequately express its purpose.  
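
The following sketch shows an ``Extract Method'' refactoring applied to a
hypothetical Tcl procedure; the commented loop becomes a procedure whose name
documents its intent:

\singlespacing
\begin{verbatim}
# Before: the loop needs a comment to explain its purpose.
proc printReport {items} {
    set total 0
    # compute the sum of all item prices
    foreach item $items {
        set total [expr {$total + [lindex $item 1]}]
    }
    puts "Total: $total"
}

# After: the extracted procedure's name documents the intent.
proc sumPrices {items} {
    set total 0
    foreach item $items {
        set total [expr {$total + [lindex $item 1]}]
    }
    return $total
}

proc printReport {items} {
    puts "Total: [sumPrices $items]"
}
\end{verbatim}
\doublespacing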

Sometimes the algorithm
outlined within a procedure can be made more understandable by replacing it with
a different algorithm.  This can be done
with a ``Substitute Algorithm'' refactoring.  However, the algorithm could be
complex, so thorough unit
tests should be in place to make sure that nothing is broken after the
substitution.  A ``Consolidate Conditional Expression'' refactoring can be done
when there is a sequence of conditional tests with the same result.  Combining
the conditional expressions into a single expression results in
code that is easier to read.  The ``Consolidate Duplicate Conditional
Fragments'' refactoring is typically done when the same fragment of code appears
in all branches of a conditional expression.  By moving the code outside the
expression, it becomes clear what varies and what stays the same for each
condition.  The ``Decompose Conditional'' refactoring is useful when there is
a complicated conditional statement.  By extracting procedures from the
conditions, the developer makes it clear what the condition is and what is being
branched on.  The last refactoring that can be applied to a scripting language is
``Split Temporary Variable.''  This refactoring is helpful when a temporary
variable is assigned more than once but is neither a loop variable nor a
collecting temporary variable.  By making a separate temporary variable for each
assignment, the meaning of each variable becomes clear and the source
code becomes easier to understand.  
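
A ``Split Temporary Variable'' refactoring might look like the following
sketch (the procedure is hypothetical):

\singlespacing
\begin{verbatim}
# Before: "temp" is reused for two unrelated computations.
proc describeRectangle {height width} {
    set temp [expr {2 * ($height + $width)}]
    puts "Perimeter: $temp"
    set temp [expr {$height * $width}]
    puts "Area: $temp"
}

# After: one clearly named variable per assignment.
proc describeRectangle {height width} {
    set perimeter [expr {2 * ($height + $width)}]
    puts "Perimeter: $perimeter"
    set area [expr {$height * $width}]
    puts "Area: $area"
}
\end{verbatim}
\doublespacing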

\subsubsection{Continuous Integration}
\label{continuousIntegration}
% Frequent integration insures that modules that must fit together will, and that 
% the product continues to work with changes added.  Automate the build with unit 
% tests to detect errors as soon as possible.  Show an example of an Ant build that 
% can be used for the tcl environment.  It also makes use of automated testing capabilities.  
% Will need to refer to the testing section for more information.  A single source 
% repository keeps the necessary items for a build together.  Keep the build fast so time 
% is not wasted.  Perhaps discuss why it was seen that a nightly build was not needed for this
% particular project but may be useful in others.  
Continuous integration is the agile practice in which each member of a team
integrates his or her work frequently to ensure that changes to the software do
not break the
build.  The build is tested against a suite of unit tests to detect integration
problems as quickly as possible~\cite{continuousIntegration}.  The result of
adopting continuous integration as an everyday practice is fewer integration
problems and a project that can be developed at a more rapid pace. 

Adopting continuous integration as an everyday practice will be easy if a few
other measures are already in place.  The first is to have a
configuration management (CM) system available; the benefits of a CM system are
clearly explained in Section~\ref{availableTools}.  Another measure that should
be in place is an automated build.  Using a tool such as Apache's Ant makes
automating the build fairly easy.  More information on automating a build can be
found in Section~\ref{availableTools}, and a sample build file can be found in
Appendix~\ref{sampleBuild}.  The last
measure that should be in place is a build that is self-testing.  
Unit testing and test driven development are described in detail in
Section~\ref{testing}.  

The greatest benefit of continuous integration is reduced
risk~\cite{continuousIntegration}.  When integration is deferred, it is hard to
predict how long it will take, and the development team opens
itself up to criticism if integration should fail.  With the continuous
itself up to criticism if integration should fail.  With the continuous
integration practice in place, the development team knows what works, what does
not, and any outstanding bugs in the software.  By having in place a
self-testing automated build, the development team can find and remove bugs
from the software right away.  One other benefit of continuous integration is a
working product at all times.  The customer will be able to see the software in
its current state at any moment with or without all the desired features.  The
customer can then offer feedback on the current build before any new features
are added.  Keeping the software in a releasable state is part of the iterative
and incremental development practice described in
Section~\ref{iterativeDevelopment}.  

\subsubsection{Iterative and Incremental Development}
\label{iterativeDevelopment}
% ``Plan a little, do a little, learn a little.''  By developing the software incrementally,
% the software can stay in a releasable state.  The only problem with the software is that
% all the features are not implemented.  But by putting in the most important features
% first, the software could theoretically be released to the customer if they decide that
% some of the features initially envisioned are no longer necessary.  
% Combine these two sections
%%\subsubsection{Keep the Software Releasable}
% This section will discuss how to keep software in a releasble state.  This means
% that the software will have "no broken windows" (clean, readable source code
% with no hacks) and will contain no defects.  New features are only added to the
% software once the defects are eliminated from the project.  The following 
% subsections contribute to keeping the software in a releasable state.  
Iterative and incremental development is the agile practice of developing
software in a manner such that the software will always be in a releasable
state.  A typical schedule is broken up into equal-length iterations of usually
between two and six weeks.  At the end of an iteration, the software is given to
the customer for feedback in real time.  In essence, each iteration contains a
small amount of planning followed by coding and feedback.  The feedback taken
from the customer can then be used in the next iteration to improve the
software.  

The planning phase of each iteration is a lightweight process.  New additions or
features to be added to the project can be placed on an index
card~\cite{sustainable}.  An alternative to the index card could be a use case
document or sequence diagram as shown in Section~\ref{documentation}.  The index
cards or UML diagrams that are created can then be sorted by what has been done,
what will be done during the current iteration, and what will be done in future
iterations.  With input
from the customer, future iterations may not need features that were previously
discussed.  Those index cards or UML diagrams can then be discarded without
changes being made to the current build of the software.  

By following the incremental development practice in conjunction with the 
continuous
integration practice described in Section~\ref{continuousIntegration}, the project will always be in a
releasable state. The continuous integration practice will ensure that a test suite will be in
place resulting in minimal defects in the build delivered to the customer.  If
defects exist, no new features should be added until the defects are eliminated.  
Further, the source code should be clean and readable with no workarounds to
get the project working.  It is better to release the software when it is clean
and ready rather than show a build with workarounds and hacks to the customer.  
In addition to a releasable build, a
collaborative relationship will be formed with the customer.  The customer will know
what to expect from the current iteration as well as any future iterations.  In
addition, they can offer feedback on the current build so that any changes that
need to be made can be captured and done within the current iteration without a
major hit to the development schedule.  

A scripting language such as Tcl makes it easy to incrementally develop
software.  Because there is no compile step, releasing a build is as easy as running the
unit test suite against the source code to make sure the build is working.  At
that point, a releasable version can be passed on to the customer.  The build
file shown in Appendix~\ref{sampleBuild} contains a release target that can be
used to generate a build for the customer.  The main
difference between the release and test targets is the running of the unit tests.
Unit testing sometimes results in extra files being created; by
separating the test and release targets, these extra files are not created in
a release build.  

\subsubsection{The Importance of Testing}
\label{testing}
% Discuss how unit testing is at the heart of the agile software movement.  It helps
% in preventing defects being seen in the end.  Gives confidence that the product is behaving 
% as expected.  Code that is modified and refactored can be done so with a greater confidence.
% Documents the behavior of the software.  Also discuss test-driven development and its 
% importance.  Then move onto JUnit and it influence for the t-unit framework.  Lastly,
% discuss why TUnit was developed and discuss that it also has its own set of unit tests.  
The most important of the agile practices is unit testing.  Extreme programming (XP) considers
unit testing the heart of the methodology~\cite{embrace}.  
Unit testing allows the developer
of the software to test at the lowest level possible~\cite{sustainable}.  For an object-oriented
language, this means testing the software at the level of classes, methods, and functions.  For a 
scripting language such as Tcl, unit testing typically tests the packages and procedures of a 
project.  The unit tests are usually run from an artificial environment set up by the tester.  From
within this environment, procedures and methods are invoked in the modules being tested.  
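
With the tcltest framework, for example, such an environment can be built and
torn down around each test with the -setup and -cleanup options (the test name
and variable are illustrative):

\singlespacing
\begin{verbatim}
package require tcltest 2
namespace import tcltest::*

# The -setup and -cleanup scripts create and dispose of the
# artificial environment surrounding the test body.
test counter-1.1 {increment starts from the initial value} -setup {
    set counter 0
} -body {
    incr counter
} -cleanup {
    unset counter
} -result 1
cleanupTests
\end{verbatim}
\doublespacing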

Test driven development (TDD) is a practice that should also be followed.  TDD 
calls for writing unit tests before any new code is created.  So, if new code needs to be added to an existing
module, new tests should be created before work on the code is started.  The same holds true for a 
new module that needs to be developed.  By following unit testing and test driven development, the 
team can feel comfortable when following the rest of the agile practices.  

Refactoring code can be done with a great deal of confidence when unit tests are in place.  By updating the unit
tests before refactoring, performing the refactoring, and then running the unit tests, the team can feel comfortable
that any errors introduced will be detected by the tests in place.  The continuous integration 
practice described in Section~\ref{continuousIntegration} cannot be followed without unit tests in place.  
Continuous integration calls for an automated build process with unit tests in place to check the code base
for defects, and the build process then reports any failures.  

In addition to being able to follow the other agile practices, there are multiple other benefits to unit testing
and test driven development.  The results of the tests act as invaluable documentation of the behavior of
the current build~\cite{sustainable}.  There are also significant benefits to TDD.  Debugging of software
can be reduced when following TDD.  Scope creep is also minimized.  Sometimes developers feel the need to add 
code to their software ``just in case''~\cite{extremeExplained}.  With TDD in place, the developer will be more 
hesitant to do that because the unit test will need to be written first.  

Test driven development has popularized the XUnit family of open source tools.  XUnit is the name given to the family of unit test frameworks derived from JUnit,
the Java programming language unit test framework.  Kent Beck and Erich Gamma are the authors of JUnit~\cite{junitCook}.  
The framework is simple to use with a fair amount of features.  JUnit also introduced
the red/green progress bar to 
the software development community.  Red meant that there was a failure within the project and green meant that all
the tests passed.  The progress bar is an integral part of the TUnit extension described in Section~\ref{availableTools} 
and shown in Figure~\ref{fig:TUnitExtension}.  Eventually JUnit became popular as more and more developers 
discovered its simplicity and small size.  Developers then began creating
their own frameworks for other languages, and the XUnit family of frameworks was born.  

The main goal of JUnit was to create an environment in which developers would feel comfortable writing tests.  Within
JUnit, test cases are created with a name so that if a test fails, the failure can easily be identified.  The 
developer is also given the ability to set up and tear down each test case.  After a test case is run, the goal is
to provide a summary of what did and did not work; the main result should focus on the failures, 
with only a brief summary of the successes.  JUnit provides a set of ``assert'' methods for unit testing~\cite{junitCook}.  
Using ``asserts'' allows for a clear distinction between errors, such as an ``ArrayIndexOutOfBoundsException,''
and failures.  
The following is a list of ``assert'' methods available:
\begin{itemize}
\singlespacing
\item assertEquals
\item assertFalse
\item assertNotNull
\item assertNotSame
\item assertNull
\item assertSame
\item assertTrue
\end{itemize}
\doublespacing
Typically, the unit tests will be written with a large set of ``assertEquals'' calls to verify that the result of 
a method or function is what was expected.  

The Tcl scripting language has two known unit test frameworks available.  tcltest is part of the standard Tcl 
distribution.  It is a functional unit testing framework, but it does not offer the
simplicity of the XUnit family
of frameworks.  The following unit test for the built-in string command ``toupper'' shows the difficulty 
involved in creating a unit test:

\singlespacing
\begin{verbatim}
package require tcltest 2
namespace import tcltest::*
test example-1.1 {an example test} -body {
   string toupper "hello world"
} -result "HELLO WORLD"
cleanupTests
\end{verbatim}
\doublespacing
The framework does not use ``assert'' calls like many of the other XUnit frameworks.  But it does work if a developer
is willing to take the time to write the tests.  The goal of JUnit was to make writing unit tests simple so that 
developers would feel comfortable writing tests.  This does not appear to be the main focus of the tcltest framework.  

TclTkUnit is another unit test framework available for the Tcl scripting language.  This
framework follows the goals
of the XUnit frameworks by being simple and similar in nature.  In addition, many of the ``assert'' calls
found throughout JUnit can be made from within TclTkUnit.  Toshikazu appears to have wanted to develop a framework
that could be adopted by any user of the XUnit frameworks.  However, a few characteristics of the framework 
make it difficult to use.  The results reported by TclTkUnit do not suffice when a failure
is encountered.  Here are the results of a test case in which a procedure that adds
one to its argument was tested: 

\singlespacing
\begin{verbatim}
[NG] ::add1Test::testAdd1
ERROR: expected:<3> but was:<2>
    while executing
"::error $message"
    (procedure "error" line 14)
    invoked from within
"error "$message expected:<$expected> but was:<$actual>""
    invoked from within
"if [expr $expected != $actual] {
            error "$message expected:<$expected> but was:<$actual>"
        }"
    (procedure "Test::assertEquals" line 7)
    invoked from within
"Test::assertEquals $answer $req"
    (procedure "::add1Test::testAdd1" line 4)
    invoked from within
"::add1Test::testAdd1"
    ("eval" body line 1)
    invoked from within
"eval $evalString"

!!!FAILURES!!!
Test Results:
1) ::add1Test::testAdd1

Run: 1, Error: 1
3
\end{verbatim}
\doublespacing
TclTkUnit uses the Tcl global variable errorInfo to report the results.  The problem is that the actual line number
where the ``assertEquals'' call is made is never referenced.  In addition, if multiple ``assert'' calls are made 
and there is more than one error, the displayed errorInfo stack trace becomes hard to follow.  A second downside
of the TclTkUnit framework is that it cannot handle all of the available Tcl data structures.  For instance,
Tcl offers lists to hold a sequence of values~\cite{tclTk}.  Tcl lists cannot be tested for equality 
by the TclTkUnit framework; a failure will be shown even though the lists are indeed equal.  
A second data structure that cannot be tested for equality is the array.  Arrays
in Tcl can have a string or a number as an index, so testing for equality is difficult, but TclTkUnit cannot do it 
at all.  

A better unit testing package has been developed for the Tcl programming language. 
``t-unit'' was designed with simplicity for 
the user in mind as well as similarity to the XUnit family of unit test frameworks available.  Figure~\ref{fig:packageDiagram} 
shows the simplicity of the framework.  
\begin{figure}[htp] 
\centering
\includegraphics[width=6.0in]{../images/packageDiagram.jpg}
\caption{\small \sl ``t-unit'' package diagram.} 
\label{fig:packageDiagram}
\end{figure}
The ``t-unit'' package contains eight modules.  The ``t-unit::testHarness'' module allows for suites of tests to be created.  
The test harness calls a setup and teardown procedure that is expected to be defined.  
Within the suite are multiple calls to ``t-unit::testHarness'' with a file name as an argument.  The file is expected to be
an ASCII text file.  
Within each file is a set
of calls to the variety of ``assert'' procedures available.  The following is an example test case extracted from the 
unit tests for the ``t-unit'' package:

\singlespacing
\begin{verbatim}
t-unit::assertEquals      12.34     12.34    |  PASS  |  OK
t-unit::assertEquals      "hello"   "hello"  |  PASS  |  OK
t-unit::assertEquals      "hello"   "world"  |  FAIL  |  NOT_EQUAL
t-unit::assertEquals      2         2        |  PASS  |  OK
t-unit::assertEquals      2         3        |  FAIL  |  NOT_EQUAL
t-unit::assertEquals      12345     12345    |  PASS  |  OK
t-unit::assertEquals      12345     12345.0  |  FAIL  |  ARG_MISMATCH
t-unit::assertEquals      TRUE      TRUE     |  PASS  |  OK
t-unit::assertNotEquals   TRUE      FALSE    |  PASS  |  OK
t-unit::assertNotEquals   "hello"   "world"  |  PASS  |  OK
t-unit::assertEqualsList  {1 2 3}   {1 2 3}  |  PASS  |  OK
t-unit::assertEqualsArray A1        A2       |  PASS  |  OK
\end{verbatim}
\doublespacing
The first column contains the appropriate ``assert'' procedure.  The second column is the resulting expression; if the results
of a procedure call are to be compared, the procedure should be called from this column.  Following Tcl conventions, using brackets
around the procedure call will evaluate the expression before the call to the ``assert'' procedure is made.  The third column
is the expected results column.  Since this is a test harness file, pipes are used to delimit whether each ``assert'' procedure
is expected to pass or fail.  The last column specifies what is returned from the ``assert'' call.  

The ``t-unit'' package contains a variety of ``assert'' procedures similar to the ``asserts'' found in the JUnit framework.  
``t-unit::assertEquals'' allows for testing of any two expressions.  
It assumes that the expressions will result in integers or strings to be compared.  ``t-unit::assertEqualsArray'' tests two
arrays for equality.  It assumes that the arrays are global and then compares all the elements of the arrays.  
``t-unit::assertEqualsList'' compares two Tcl lists for equality.  ``t-unit::assertEqualsReal'' compares two expressions
to make sure that they are within a certain range of one another.  Real numbers that are mathematically equal may not
compare as equal with the ``=='' operator because of floating-point representation; by adding in a range to check against,
``t-unit'' can verify that two real numbers are equal within a
specified tolerance.  ``t-unit::assertFalse'' and ``t-unit::assertTrue'' check whether boolean FALSE or TRUE is returned from an expression.  
Lastly, ``t-unit::assertNotEquals'' verifies that two expressions are not equal to one another.  

The ``t-unit'' package can be used in a variety of ways.  The package can be accessed from a Tcl shell or from within the 
TUnit or TUnit extension Eclipse plug-ins.  Figure~\ref{fig:classDiagram} shows a typical use of the  ``t-unit'' package.  
\begin{figure}[htp] 
\centering
\includegraphics[width=6.0in]{../images/classDiagram.jpg}
\caption{\small \sl Using the ``t-unit'' package.} 
\label{fig:classDiagram}
\end{figure}
Typically a user following TDD will create multiple test cases.  The test cases come together to form a test suite.  Once
all the unit tests are in place, the developer creates packages and modules for use within the package.  The unit tests 
can then be run and each ``assert'' call creates a result.  After all the unit
tests are run, a report is created letting
the user know where the failures occurred and the line numbers for the
failures.  In addition, the TUnit and TUnit extension 
plug-ins can graphically show the results of the test suite.  