<?xml version="1.0" encoding="iso-8859-1"?>
<!DOCTYPE html PUBLIC "-//Tigris//DTD XHTML 1.0 Transitional//EN"
"http://style.tigris.org/tigris_transitional.dtd">
<html>
<head>
 <style type="text/css">
/* <![CDATA[ */ 
@import "css/readyset.css"; 
@import "css/inst.css";
/*  ]]> */
 </style>

<link rel="stylesheet" type="text/css" href="css/print.css" media="print" />
 <title>QA Plan</title>
</head>

<body>
<div class="app">
<div class="readyset">

 <h2>QA Plan</h2>

 <div class="todo">
  TODO: For each release, update this file by filling in answers to
  the questions.  In cases where multiple answers are already written,
  delete those answers that do not apply.
 </div>

 <div id="releaseinfo">
 <h3>Release Information</h3>
 <table border="1" cellpadding="3" cellspacing="2" class="axial">
  <tr>
   <th>Project:</th> 
   <td><a href="index.html">PROJECTNAME</a></td> 
  </tr>
  <tr>
   <th>Internal Release Number:</th> 
   <td>X.Y.Z</td> 
  </tr>
  <tr>
   <th>Release Audience:</th> 
   <td>
    <div class="sample1">General availability release</div>
    <div class="sample2">Customer-specific release: CUSTOMER(S)</div>
    <div class="sample3">Developer release (Internal usage only)</div>
    <div class="sample4">Early access release (Controlled external access)</div>
   </td> 
  </tr>
  <tr>
   <th>Attached Worksheets:</th> 
   <td>
    <div>QA plan &gt; <a href="review-meeting-notes.html">Review meeting notes</a></div>
    <div>QA plan &gt; <a href="test-suite.html">System test case suite</a></div>
    <div>QA plan &gt; <a href="test-run-suite.html">System test runs</a></div>
   </td>
  </tr>
  <tr>
   <th>Related Documents:</th> 
   <td>
    <div><a href="srs.html">Software Requirements Specification</a></div>
    <div><a href="design.html">Design</a></div>
    <div><a href="plan.html">Project plan</a></div>
    <div><a href="sdm.html">Software development methodology</a></div>
    <div>LINKS TO RELEVANT STANDARDS</div>
    <div>LINKS TO OTHER DOCUMENTS</div>
   </td>
  </tr>
 </table>
 </div> <!-- /releaseinfo -->

 <div id="processimpact">
  <strong>Process impact:</strong> This document
  specifies quality goals, selects strategies for assuring that those
  goals have been met, and details a plan of action to carry out those
  strategies.
 </div> <!-- /processimpact -->


 <div id="introduction">
 <h3>Introduction</h3>
 <dl>
  <dt>Why is this QA plan needed?</dt> 

  <dd>"Quality" refers to all the good things that we
  would like to see in our product.  We build a quality product and
  assure its quality by keeping quality in mind all the time and
  performing the selected activities below.  Testing is one QA
  activity, but it is neither the best nor the only one; other QA
  activities include the use of style guides and checklists, review
  meetings, use of analysis tools, and careful quality measurements
  and estimates.  A plan is needed to select and coordinate all the
  QA activities.</dd>


  <dt>What QA lessons were learned in previous releases?</dt>

  <dd class="sample1">None yet.  This is the first release.</dd>
 
  <dd class="sample2">
   <ul>
    <li>Different browsers render the same HTML page differently, so
    we must test each version of each supported browser.</li>

    <li>In a previous release, customers found that punctuation (e.g.,
    quotation marks and less-than signs) was entered and processed
    properly but not displayed properly.  From now on, we must test
    both validation and display of special characters.</li>

    <li>Large datasets can make our system fail when the space
    allotted for temporary data is exhausted.  Our test plans should
    include more data-volume tests.</li>
   </ul>
  </dd>
 

  <dt>What is the scope of this QA plan?</dt> 
 
  <dd class="sample1">All components and aspects of the system will be
  evaluated in this release.
  </dd>
 
  <dd class="sample2">There are many quality goals and approaches to
  assuring them.  Since we have limited time and resources for this
  release, we will focus on the following components and aspects:
  <ul>
   <li>COMPONENT-1</li>
   <li>COMPONENT-2</li>
   <li>COMPONENT-3</li>
   <li>FEATURE-1</li>
   <li>FEATURE-2</li>
  </ul>
  </dd> 

  <div class="todo">
   TODO: Sum up the plan in a few sentences.  The text below is just a
   sample.
  </div>
 
  <dt>What is the summary of this plan?</dt> 
 
  <dd class="sample1">In this release we will continue to use
  development practices that support all of our quality goals, but we
  will focus on functional correctness and robustness.  We will do that
  with the following major activities:
 
  <ul>
   <li>using if-statements to test preconditions and assert statements
       to test invariants and postconditions</li>
   <li>conducting frequent reviews</li>
   <li>performing automated unit and regression testing with JUnit</li>
   <li>carrying out structured manual system testing</li>
   <li>keeping all issues up-to-date in an issue tracking database</li>
  </ul>
  </dd>
 </dl>
 </div> <!-- /introduction -->



 <div id="goals">
 <h3>Quality Goals for this Release</h3>
 <div class="todo">
  TODO: Add or edit goals to fit your project.  Group them by priorities
  that make sense for your project on this particular release.
 </div>

 <ul>
  <li>Essential
   <ul>
    <li><a class="def" href="glossary-std.html#qg_Func_Correctness">Functionality &gt; Correctness</a></li>
    <li><a class="def" href="glossary-std.html#qg_Func_Robustness">Functionality &gt; Robustness</a></li>
   </ul>
  </li>
  <li>Expected
  <ul>
   <li><a class="def" href="glossary-std.html#qg_Func_Accuracy">Functionality &gt; Accuracy</a></li>
   <li><a class="def" href="glossary-std.html#qg_Func_Compatibility">Functionality &gt; Compatibility</a></li>
   <li><a class="def" href="glossary-std.html#qg_Func_Factual">Functionality &gt; Factual correctness</a></li>
   <li><a class="def" href="glossary-std.html#qg_Use_Understand">Usability &gt; Understandability and Readability</a></li>
   <li><a class="def" href="glossary-std.html#qg_Use_Learnability">Usability &gt; Learnability and Memorability</a></li>
   <li><a class="def" href="glossary-std.html#qg_Use_Task">Usability &gt; Task support</a></li>
   <li><a class="def" href="glossary-std.html#qg_Use_Efficiency">Usability &gt; Efficiency</a></li>
   <li><a class="def" href="glossary-std.html#qg_Use_Safety">Usability &gt; Safety</a></li>
   <li><a class="def" href="glossary-std.html#qg_Use_Consistency">Usability &gt; Consistency and Familiarity</a></li>
   <li><a class="def" href="glossary-std.html#qg_Use_Subjective">Usability &gt; Subjective satisfaction</a></li>
   <li><a class="def" href="glossary-std.html#qg_Security">Security</a></li>
  </ul>
  </li>
  <li>Desired
  <ul>
   <li><a class="def" href="glossary-std.html#qg_Rely_ConsistLoad">Reliability &gt; Consistency under load</a></li>
   <li><a class="def" href="glossary-std.html#qg_Rely_ConsistConcur">Reliability &gt; Consistency under concurrency</a></li>
   <li><a class="def" href="glossary-std.html#qg_Rely_AvailLoad">Reliability &gt; Availability under load</a></li>
   <li><a class="def" href="glossary-std.html#qg_Rely_Longevity">Reliability &gt; Longevity</a></li>
   <li><a class="def" href="glossary-std.html#qg_Efficiency">Efficiency</a></li>
   <li><a class="def" href="glossary-std.html#qg_Scalability">Scalability</a></li>
   <li><a class="def" href="glossary-std.html#qg_Scale_PerformLoad">Scalability &gt; Performance under load</a></li>
   <li><a class="def" href="glossary-std.html#qg_Scale_Volume">Scalability &gt; Large data volume</a></li>
   <li><a class="def" href="glossary-std.html#qg_Operability">Operability</a></li>
   <li><a class="def" href="glossary-std.html#qg_Maint_Understand">Maintainability &gt; Understandability</a></li>
   <li><a class="def" href="glossary-std.html#qg_Maint_Evolvability">Maintainability &gt; Evolvability</a></li>
   <li><a class="def" href="glossary-std.html#qg_Maint_Testability">Maintainability &gt; Testability</a></li>
  </ul>
  </li>
 </ul>

 </div> <!-- /goals -->



 <div id="strategy">
 <h3>QA Strategy</h3>

 <div class="todo">
  TODO: Consider the activities listed below and delete those that are
  not applicable to your project.  Edit and add new activities if
  needed.  For each activity, specify the coverage or frequency that
  you plan to achieve.  If you do not plan to perform an activity, write "N/A".
 </div>

 <table border="1" cellpadding="3" cellspacing="2" width="100%">
  <tr>
   <th>Activity</th>
   <th>Coverage or Frequency</th>
   <th>Description</th>
  </tr>

  <tr>
   <td>Preconditions</td>
   <td>
    <div class="sample1">Every public method</div>
    <div class="sample2">Every public method in COMPONENT-NAME</div>
    <div class="sample3">All public methods that modify data</div>
   </td>

   <td>We will use if-statements at the beginning of public methods to
   validate each argument value.  This helps to document assumptions
   and catch invalid values before they can cause faults.</td>
  </tr>

  <tr>
   <td>Assertions</td>
   <td>
    <div class="sample1">Every private method</div>
    <div class="sample2">Every private method in COMPONENT-NAME</div>
    <div class="sample3">All private methods that modify data</div>
   </td>

   <td>Assertions will be used to validate all arguments to private
   methods.  Since these methods are only called from our other
   methods, arguments passed to them should always be valid, unless
   our code is defective.  Assertions will also be used to test class
   invariants and some postconditions.</td>
  </tr>

  <tr>
   <td>Static analysis</td>
   <td>
    <div class="sample1">Strict compiler warnings</div>
    <div class="sample2">Automated style checking</div>
    <div class="sample3">XML validation</div>
    <div class="sample4">Detect common errors</div>
   </td>

   <td>We will use source code analysis tools to automatically detect 
   errors.  Style checkers will help make all of our code
   consistent with our coding standards.  XML validation ensures that
   each XML document conforms to its DTD.  Lint-like tools help detect
   common programming errors.  E.g.:
    <a href="http://www.freebsd.org/cgi/man.cgi?query=lint">lint</a>, 
    <a href="http://www.splint.org/">lclint/splint</a>, 
    <a href="http://artho.com/jlint/">jlint</a>, 
    <a href="http://sourceforge.net/projects/checkstyle/">checkstyle</a>,
    <a href="http://sourceforge.net/projects/jcsc">Jcsc</a>,
    <a href="http://www.logilab.org/projects/pylint">PyLint</a>,
    <a href="http://pychecker.sourceforge.net/">PyChecker</a>,
    <a href="http://tidy.sourceforge.net/">Tidy</a>
   </td>

  </tr>

  <tr>
   <td>Buddy review</td>
   <td>
    <div class="sample1">All changes to release branches</div>
    <div class="sample2">All changes to COMPONENT-NAME</div>
    <div class="sample3">All changes</div>
   </td>

   <td>Whenever changes must be made to code on a release branch
   (e.g., to prepare a maintenance release) the change will be
   reviewed by another developer before it is committed.  The goal is
   to make sure that fixes do not introduce new defects.</td>
  </tr>

  <tr>
   <td nowrap="nowrap">Review meetings</td>
   <td>
    <div class="sample1">Weekly</div>
    <div class="sample2">Once before release</div>
    <div class="sample3">Every source file</div>
   </td>

   <td>We will hold review meetings where developers will
   perform formal inspections of selected code or documents.  We
   choose to spend a small, predetermined amount of time and to
   maximize its value by carefully selecting the material to be
   reviewed.  In the review process we will use and maintain a
   variety of checklists.</td>
  </tr>

  <tr>
   <td>Unit testing</td>
   <td>
    <div class="sample1">100% of public methods, and 75% of statements</div>
    <div class="sample2">100% of public methods</div>
    <div class="sample3">75% of statements</div>
   </td>

   <td>We will develop and maintain a unit test suite using the JUnit
   framework.  We will consider the boundary conditions for each
   argument and test both sides of each boundary.  Tests must be run
   and passed before each commit, and they will also be run by the
    testing team.  Each public method will have at least one test, and
    the overall test suite will exercise at least 75% of all executable
    statements in the system.</td>
  </tr>

  <tr>
   <td>Manual system testing</td>
   <td>
    <div class="sample1">100% of UI screens and fields</div>
    <div class="sample2">100% of specified requirements</div>
   </td>

   <td>The QA team will author and maintain a detailed written suite
   of manual tests to test the entire system through the user
   interface.  This plan will be detailed enough that a person could
   repeatably carry out the tests from the test suite document and
   other associated documents.</td>
  </tr>

  <tr>
   <td>Automated system testing</td>
   <td>
    <div class="sample1">100% of UI screens and fields</div>
    <div class="sample2">100% of specified requirements</div>
   </td>

   <td>The QA team will use a system test automation tool to author
   and maintain a suite of test scripts to test the entire system
   through the user interface.</td>
  </tr>

  <tr>
   <td>Regression testing</td>
   <td>
    <div class="sample1">Run all unit tests before each commit</div>
    <div class="sample2">Run all unit tests nightly</div>
    <div class="sample3">Add new unit test when verifying fixes</div>
   </td>

   <td>We will adopt a policy of frequently re-running all automated
   tests, including those that have previously been successful.  This
   will help catch regressions (bugs that we thought were fixed, but
   that appear again).</td>
  </tr>

  <tr>
   <td>Load, stress, and capacity testing</td>
   <td>
    <div class="sample1">Simple load testing</div>
    <div class="sample2">Detailed analysis of each scalability parameter</div>
   </td>

   <td>We will use a load-testing tool and/or custom scripts to simulate
   heavy usage of the system.  Load will be defined by scalability
   parameters such as number of concurrent users, number of
   transactions per second, or number/size of data items
   stored/processed. We will verify that the system can handle loads
   within its capacity without crashing, producing incorrect results,
   mixing up results for distinct users, or corrupting the data.  We
   will verify that when capacity limits are exceeded, the system
   safely rejects, ignores, or defers requests that it cannot
   handle.</td>
  </tr>

  <tr>
   <td>Beta testing</td>
   <td>
    <div class="sample1">4 current customers</div>
    <div class="sample2">40 members of our developers network</div>
    <div class="sample3">1000 members of the public</div>
   </td>

   <td>We will involve outsiders in a beta test, or early access,
   program.  We will give beta testers directions to focus on specific
   features of the system.  We will actively follow up with beta
   testers to encourage them to report issues.</td>
  </tr>

  <tr>
   <td>Instrumentation and monitoring</td>
   <td>
    <div class="sample1">Monitor our ASP servers</div>
    <div class="sample2">Remotely monitor customer servers</div>
   </td>

   <td>As part of our SLA, we will monitor the behavior of servers to
  automatically detect service outages or performance degradation.  We
  have policies and procedures in place for failure notification,
  escalation, and correction.</td>

  </tr>

  <tr>
   <td>Field failure reports</td>
   <td>
    <div class="sample1">Prompt users to report failures</div>
    <div class="sample2">Automatically report failures</div>
   </td>

   <td>We want to understand each post-deployment system failure and
   actively take steps to correct the defect.  The system has built-in
   capabilities for gathering detailed information from each system
   failure (e.g., error message, stack traceback, operating system
   version).  This information will be transmitted back to us so that
   we may analyze it and act on it.</td>

  </tr>  
 </table>
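 <p>The precondition and assertion activities described above can be
 sketched in Java.  The <code>Account</code> class, its methods, and the
 amounts are hypothetical, for illustration only:</p>

```java
// PreconditionDemo.java -- hypothetical sketch of the precondition and
// assertion activities.  Run with "java -ea PreconditionDemo" so that
// assert statements are enabled (they are off by default).
public class PreconditionDemo {

    public static class Account {
        private long balanceCents;

        /** Public method: validate each argument with an if-statement. */
        public void deposit(long amountCents) {
            if (amountCents <= 0) {
                throw new IllegalArgumentException(
                    "amountCents must be positive: " + amountCents);
            }
            addToBalance(amountCents);
        }

        /** Private method: callers are our own code, so use assertions. */
        private void addToBalance(long amountCents) {
            assert amountCents > 0 : "caller violated precondition";
            long before = balanceCents;
            balanceCents += amountCents;
            // Postcondition: a deposit never lowers the balance.
            assert balanceCents >= before : "postcondition violated";
        }

        public long getBalance() { return balanceCents; }
    }

    public static void main(String[] args) {
        Account a = new Account();
        a.deposit(500);                      // valid argument passes the check
        System.out.println(a.getBalance());
        try {
            a.deposit(-1);                   // invalid argument is caught early
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```

 <p>The if-statement documents the public contract and fails loudly on bad
 input from outside; the assertions only guard against defects in our own
 code.</p>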
 </div> <!-- /strategy -->



 <div id="evaluation">
 <h3>QA Strategy Evaluation</h3>
 <div class="todo">
  TODO: Use the following table to evaluate how well your QA Strategy
  will assure your QA goals.
 </div>

 <table border="1" cellpadding="3" cellspacing="2" width="100%" class="sample1">
  <tr>
   <th>Goal</th>
   <th>Preconditions</th>
   <th>Assertions</th>
   <th>Buddy review</th>
   <th>Review meeting</th>
   <th>Unit testing</th>
   <th>Manual system testing</th>
   <th>Overall assurance</th>
  </tr>

  <tr>
   <td nowrap="nowrap">Functionality</td>
   <td>Medium</td>
   <td>Medium</td>
   <td>Medium</td>
   <td>High</td>
   <td>High</td>
   <td>High</td>
   <td>Strong</td>
  </tr>

  <tr>
   <td nowrap="nowrap">&nbsp;&nbsp; Correctness</td>
   <td>High</td>
   <td>High</td>
   <td>Medium</td>
   <td>Medium</td>
   <td>High</td>
   <td>Medium</td>
   <td>Strong</td>
  </tr>

  <tr>
   <td nowrap="nowrap">&nbsp;&nbsp; Robustness</td>
   <td>High</td>
   <td>High</td>
   <td>Medium</td>
   <td>Medium</td>
   <td>High</td>
   <td>Medium</td>
   <td>Strong</td>
  </tr>

  <tr>
   <td nowrap="nowrap">Usability</td>
   <td>None</td>
   <td>None</td>
   <td>None</td>
   <td>High</td>
   <td>None</td>
   <td>Medium</td>
   <td>Strong</td>
  </tr>

  <tr>
   <td nowrap="nowrap">Security</td>
   <td>Medium</td>
   <td>None</td>
   <td>Medium</td>
   <td>High</td>
   <td>None</td>
   <td>Medium</td>
   <td>Strong</td>
  </tr>

  <tr>
   <td nowrap="nowrap">Reliability</td>
   <td>None</td>
   <td>Medium</td>
   <td>Low</td>
   <td>Medium</td>
   <td>Medium</td>
   <td>Medium</td>
   <td>Weak</td>
  </tr>

  <tr>
   <td nowrap="nowrap">Efficiency</td>
   <td>None</td>
   <td>None</td>
   <td>Low</td>
   <td>Medium</td>
   <td>None</td>
   <td>Low</td>
   <td>At-Risk</td>
  </tr>

  <tr>
   <td nowrap="nowrap">Scalability</td>
   <td>None</td>
   <td>None</td>
   <td>Low</td>
   <td>Medium</td>
   <td>Low</td>
   <td>Low</td>
   <td>At-Risk</td>
  </tr>

  <tr>
   <td nowrap="nowrap">Operability</td>
   <td>None</td>
   <td>None</td>
   <td>None</td>
   <td>Low</td>
   <td>None</td>
   <td>Low</td>
   <td>At-Risk</td>
  </tr>

  <tr>
   <td nowrap="nowrap">Maintainability</td>
   <td>Medium</td>
   <td>Low</td>
   <td>Medium</td>
   <td>High</td>
   <td>Low</td>
   <td>None</td>
   <td>Weak</td>
  </tr>
 </table>
 </div> <!-- /evaluation -->


 
 <p>Cell values in the table above are subjective estimates of the
 effectiveness of each activity.  This table helps to identify quality
 goals that are not being adequately assured.</p>

 <div class="tier2">
 <h4>Evaluation cell values:</h4>
 <ul>
  <li>High: This activity gives a strong assurance that the goal has been met in development.</li>
  <li>Medium: This activity gives a medium assurance that the goal has been met in development.</li>
  <li>Low: This activity gives only a little assurance that the goal has been met in development.</li>
  <li>None: This activity does not address the goal.</li>
 </ul>

 <h4>Overall assurance values:</h4>
 <ul>
  <li>Strong: The set of activities together provide strong assurance that the goal has been met in development.</li>
  <li>Weak: The activities together provide limited assurance that the goal has been met in development.</li>
  <li>At-Risk: There is little or no assurance that this goal has been met.</li>
 </ul>

 <p>Note: As a rule of thumb, it takes at least two "high" activities
 and one "medium" to earn a "strong" overall rating.  Likewise, it
 takes at least two "medium" activities and one "low" to earn a "weak"
 overall rating.</p>
 </div>



 <div id="planofaction">
 <h3>Plan of Action</h3>
 <div class="todo">
  TODO: Adjust this plan to fit your project.
 </div>
 <div class="todo">
  TODO: Once the plan is outlined, tasks should be assigned to
  individuals and tracked to completion.
 </div>

 <ol class="sample1">
  <li>Preconditions and Assertions
   <ul>
    <li>Refine requirements document whenever preconditions are not already determined</li>

    <li>Create validation functions for use by preconditions and assertions, as needed</li>

    <li>Write preconditions and assertions in code</li>
   </ul>
  </li>


  <li>Review meetings
   <ul>
    <li>Assign buddy reviewers whenever a change to a release branch is considered</li>
    <li>Select an at-risk document or section of code for weekly review meetings</li>
    <li>Each week, identify reviewers and schedule review meetings</li>
    <li>Reviewers study the material individually for 2 hours</li>
    <li>Reviewers meet to inspect the material for 2 hours</li>
    <li>Place <a href="review-meeting-notes.html">review meeting
    notes</a> in the repository and track any issues identified in
    review meetings</li>
   </ul>
  </li>

  <li>Unit tests
   <ul>
    <li>Set up infrastructure for easy execution of JUnit tests (this is just
    an Ant target)</li>

    <li>Create unit tests for each class when the class is created</li>

     <li>Execute unit tests before each commit.  All tests must pass
     before a developer can commit; otherwise, open new issue(s) for
     the failed tests.  These "smoke tests" will be executed in each
     developer's normal development environment.</li>

    <li>Execute unit tests completely on each release candidate to
    check for regressions. These regression tests will be executed on
    a dedicated QA machine.</li>

    <li>Update unit tests whenever requirements change</li>
   </ul>
  </li>

  <li>System tests
   <ul>
    <li>Design and specify a detailed manual <a
    href="test-suite.html">test suite</a>.</li>

    <li>Review the system test suite to make sure that every UI screen
    and element is covered</li>

    <li>Execute system tests completely on each release candidate.
    These system tests will be carried out on a dedicated QA
    machine.</li>

    <li>Update system tests whenever requirements change</li>
   </ul>
  </li>

  <li>QA Management
   <ul>
    <li>Update this QA plan whenever requirements change</li>

    <li>Document test results and communicate them to the entire
    development team</li>

    <li>Estimate remaining (not yet detected) defects based on current
    issue tracking data, defect rates, and metrics on code size and
    the impact of changes.</li>

    <li>Keep all issues up-to-date in an issue tracking database.  The
    issue tracker is available to all project members <a
    href="LINK-TO-ISSUE-TRACKER">here</a>.  The meanings of issue
    states, priorities, and other attributes are defined in the <a
    href="sdm.html#issuetracking">SDM</a>.</li>
   
   </ul>
  </li>

 </ol>
 </div> <!-- /planofaction -->
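 <p>The boundary-condition tactic from the unit-testing activity can be
 sketched as follows.  The <code>clampPercent</code> helper is hypothetical;
 in the real suite each check would be a JUnit test method run from the Ant
 target:</p>

```java
// BoundaryTestSketch.java -- exercises both sides of each boundary of a
// hypothetical clampPercent(int) helper that clamps values into 0..100.
public class BoundaryTestSketch {

    public static int clampPercent(int value) {
        if (value < 0)   return 0;
        if (value > 100) return 100;
        return value;
    }

    public static void main(String[] args) {
        // Lower boundary: one below, on, and one above.
        check(clampPercent(-1) == 0);
        check(clampPercent(0)  == 0);
        check(clampPercent(1)  == 1);
        // Upper boundary: one below, on, and one above.
        check(clampPercent(99)  == 99);
        check(clampPercent(100) == 100);
        check(clampPercent(101) == 100);
        System.out.println("all boundary checks passed");
    }

    private static void check(boolean ok) {
        if (!ok) throw new AssertionError("boundary check failed");
    }
}
```

 <p>Testing one value below, on, and above each boundary is what catches the
 common off-by-one mistakes (<code>&lt;</code> vs. <code>&lt;=</code>) that
 single "typical value" tests miss.</p>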


 <div id="checklist">
 <h3>QA-Plan Checklist</h3>

 <dl>

  <dt>Do the selected activities in the QA Strategy provide enough
  assurance that the project will meet its quality goals?</dt>

  <dd>Yes, if all activities are carried out as planned, we are
  confident that the quality goals will be satisfied.  We will, of
  course, adjust this plan as needed.</dd>

  <dd>No, this plan leaves open several quality risks that have been
  noted in the <a href="plan.html#risks">Risk Management</a> section
  of the <a href="plan.html">Project Plan</a>.</dd>

  

  <dt>Have human resources been allocated to carry out the QA activities?</dt>

  <dd>Yes, human resources have been allocated.  They are listed in
  the <a href="resource-needs.html">Resource Needs</a> document.</dd>

  <dd>No, human resources have not been allocated.  They are listed as
  "pending" in the <a href="resource-needs.html">Resource Needs</a> document.</dd>

  

  <dt>Have machine and software resources been allocated as needed for
  the QA activities?</dt>

  <dd>Yes, the QA team will use desktop machines and servers that are
  already allocated to them.</dd>

  <dd>Yes, a QA Lab has been set up.  The needed machine and software
  resources are listed in the <a href="resource-needs.html">Resource
  Needs</a> document.</dd>

  <dd>No, needed machine and software resources are listed as pending
  in the <a href="resource-needs.html">Resource Needs</a> document.</dd>

  

  <dt>Has this QA Plan been communicated to the development team and
  other stakeholders?</dt>

  <dd>Yes, everyone is aware of our prioritized quality goals for this
  release and understands how their work will help achieve those
  goals.  Feedback is welcome.</dd>

  <dd>Yes, this document is being posted to the project website.
  Feedback is welcome.</dd>

  <dd>No, some developers are not aware of the quality goals and
  planned QA activities for this release.  This is a risk that is
  noted in the <a href="plan.html#risks">Risk Management</a> section
  of the <a href="plan.html">Project Plan</a>.</dd>

 </dl>
 </div> <!-- /checklist -->

 
 <div class="todo">
  TODO: Check for <a
  href="http://readyset.tigris.org/words-of-wisdom/qa-plan.html">words
  of wisdom</a> and discuss ways to improve this template.
  Or, evaluate the ReadySET Pro <a title="pro use case template and sample test plan"
  href="http://www.readysetpro.com/">professional test plan template</a>.
 </div>

</div>

<div class="legal1">Company Proprietary</div>

<div class="footnote">
 Copyright &#169; 2003-2004 Jason Robbins.  All rights reserved. <a href="readyset-license.html">License terms</a>.
 Retain this copyright statement whenever this file is used as a
 template.
</div>

</div>
</body>
</html>
