%
% Copyright (C) Håkan Lindqvist 2006, 2007.
% May be copied freely for educational and personal use. Any other use
% needs my written consent (i.e. no commercial use without my approval).
%

\label{chapter:tcb}

%%%%%%%%%%%%%%%%%%%%
%
\section{Trust and assurance}
In order to implement security in a computer system, there must
obviously be one or more system components that can be \texttt{trusted}
to be correct. One such concept will be discussed in this chapter: the
\texttt{Trusted Computing Base}. After the introduction to the notion of
trust, the concept of assurance will be lightly touched upon.

%%%%%%%%%%%%%%%%%%%%
\subsection{Trust} \label{tcb:trust}
Trust is probably one of the most important factors in real life: on a
daily basis we \texttt{trust} that people will do what they are
expected to do in different situations, for example that they will not
break the law or contradict social norms.
For computer security, the question becomes~\cite{Bishop, Amoroso}:

\centerline{\texttt{``To what degree can I trust this system to be
secure?''}}

Bishop~\cite{Bishop} provides a definition of trust which fully
encapsulates the above question:

\begin{definition}
\label{definition:trust}
\index{definition!trust} \index{trust}
An entity is \texttt{trustworthy} if there is sufficient credible
evidence leading one to believe that the system will meet a set of given
requirements. \texttt{Trust} is a measure of trustworthiness, relying on
the evidence provided.
\end{definition}

The definition also captures two key points: trust is a subjective
measure that depends both on how much evidence of a system's security is
provided and on what kind of evidence it is.

So, for a system that has been designed and implemented to be secure,
potential users of the system must be provided with evidence showing
that any trust they might place in the system's correctness is not
misplaced.

This problem is especially pressing for companies that sell software;
only the trust of customers in a company's ability to deliver a
functionally correct and secure product will ensure continued
investment in software from that company. This brings up the issue of
assuring correctness, which is the topic of the following section.


%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Assurance}
As was mentioned in the previous section, there is a need to provide
convincing arguments that a product works as stated -- that it functions
as claimed and that it is secure. Stated differently, the problem is to
provide convincing proof of the system's correctness.

%
\subsubsection{Defining assurance}
The process of gathering evidence for the security, and correctness, of
a system is called \texttt{assurance}. The following definition provides
a good starting point for the rest of the discussion~\cite{Bishop}:

\begin{definition}
\label{definition:assurance}
\index{definition!assurance} \index{assurance}
\texttt{Security assurance}, or simply \texttt{assurance}, is confidence
that an entity meets its security requirements, based on specific
evidence provided by the application of assurance techniques.
\end{definition}

The definition does not specify which assurance techniques should be
employed, only that some should be. The goal of the assurance
techniques, though, is to eliminate the mistakes that can be made during
the development of a \texttt{system}.

%
\subsubsection{Problems and countermeasures}
As was mentioned above, the goal of applying assurance is to reduce the
occurrence of errors in the system. In that respect, assurance has very
much in common with software engineering (see 
e.g.~\cite{software_engineering, SecureCoding, Andersson}) in that
it tries to eliminate as many problem factors as possible. The following
list of errors and trouble sources is given in~\cite{Bishop} as issues
in computer systems:

\begin{enumerate}
 \item Requirement definition, omissions and mistakes 
 \item System design flaws 
 \item Hardware flaws 
 \item Software flaws, programming and compiler bugs 
 \item System use and operation errors and inadvertent mistakes 

 \item Willful system misuse 
 \item Hardware, communication or other equipment malfunction 
 \item Environment problem, natural causes and random events
 \item Evolution, maintenance, faulty upgrades and decommissions 
\end{enumerate}

All of these sources of security issues are well known, and there are
different assurance techniques to counter most of them. Natural causes
and random events are hard to counter, though. The countermeasures
discussed below are adapted from~\cite{Bishop}.

The first thing whose correctness must be asserted is the design.
Correctness and completeness testing is an absolute necessity to ensure
that the correct security measures are being taken and that they cover
the problem space. This kind of assurance deals with problems of type 1,
2 and 6.

Second, the implementation of the designed system must be checked, both
hardware and software, which handles the problems listed under 3, 4 and
7. Moreover, this will also handle problems with maintenance,
environmental problems and willful misuse (list items 6, 8 and 9) since
the assurance should stretch to the deployment of the system.

Operational assurance counters the operational problems under list item
5. Techniques for this are, for example, monitoring and auditing. These
techniques will hopefully find flaws in the system while it is in use
and in the maintenance phase, so that those errors can be dealt
with~\cite{op_assurance}.

%
\subsubsection{Real world example: OpenBSD}
As a real world example of how several of the above techniques are being
used, the free POSIX--compatible system \texttt{OpenBSD}\footnote{
Official website: \href{http://www.openbsd.org}{http://www.openbsd.org}}
is presented. 

\texttt{OpenBSD} has a very strong focus on security, and therefore
includes a significant number of well-designed countermeasures to
well-known vulnerabilities such as buffer overflows and privilege
escalation.

One of the most impressive parts of the \texttt{OpenBSD} effort is the
thorough code audit that the operating system and its userland tools
(i.e.\ tools used in application space, as opposed to the kernel space
in which the operating system's kernel operates) have undergone:
the code is constantly checked for bugs and possible errors, which has
uncovered a large number of flaws over the years~\cite{openbsd}.

The open nature of \texttt{OpenBSD} gives the discussion on design
issues a natural forum in the project's mailing lists; flaws in the
software are tested for through test releases and actual use; and
maintenance is a constant factor thanks to the commitment of the
community and the developers.


%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Certification}
\index{certification}
Although assurance at different levels while constructing a system is an
obvious way of increasing the quality of the end product, different
types of assurance techniques are likely to uncover different types of
errors and flaws unequally well. To make it possible for someone not
involved in the actual work on a system to judge the process used for
verification, some common ground between different projects is needed.
Certification is such an effort.

A certification is a formalized way of placing trust in a system by
using a well defined set of assurance techniques. Different
certifications will provide different levels of trust for a
\texttt{system} based on how the validation is performed.

For example, lower certification levels may require the use of software
development methodologies and testing of the correctness of the system's
functionality against specification documents. At the other end of
the scale is formal proof of the system's functionality, in which the
entire system is subjected to mathematical proof of the correctness of
each line of code.
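To make the phrase ``proof of the correctness of each line of code''
concrete: formal verification is commonly phrased in terms of Hoare
triples $\{P\}\ S\ \{Q\}$, read as ``if precondition $P$ holds before
statement $S$ executes, then postcondition $Q$ holds afterwards''. A
minimal example is
\[
\{x \geq 0\} \quad y := x + 1 \quad \{y \geq 1\}
\]
and a full correctness proof composes such triples over every statement
in the program, which illustrates why this kind of verification scales
poorly with system size.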

An industry standard used extensively is the Common Criteria, which has
seven levels of trust. The levels range from ``no trust'' to ``Formally
Verified Design and Tested''~\cite{Bishop, CommonCriteria}. Very few
projects ever reach the higher levels of the criteria.


%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Trusted Computing Base}
Previously in the chapter, the topics of assurance and certification
have been discussed. As is easily understood, thorough verification
of a piece of software's correctness is time consuming. In commercial
projects, cost also becomes an important factor.

For these reasons it is a good idea to minimize the code base that is
responsible for enforcing the security of the system, that is, the
mechanism that implements the system's security policy.

A definition of the parts of a system that are responsible for
implementing these mechanisms is provided in~\cite{Bishop}:

\begin{definition}
\label{definition:tcb}
\index{definition!TCB}
\index{definition!Trusted Computing Base}
\index{Trusted Computing Base}
A \texttt{Trusted Computing Base} (TCB) consists of all protection
mechanisms within a computer system -- including hardware, firmware and
software -- that are responsible for enforcing a security policy.
\end{definition}

One important conclusion can be reached from this: if the \texttt{TCB}
is small enough to be properly verified, using techniques such as
assurance and certification conformance, and all policy-dependent
decisions in the system pass through the \texttt{TCB}, then the level of
trust that can be placed in the system will be directly dependent on the
results of the verification of the \texttt{TCB}.

The challenge is often to make the size of the \texttt{TCB} manageable
for any form of verification. It often includes so many parts of an
operating system's kernel that it becomes too large for any more formal
type of correctness testing to be practically feasible. It is not
unusual for the entire memory management subsystem to be part of the
\texttt{TCB}~\cite{Amoroso}.


