#+SETUPFILE: ../../../template/level-2.org
#+TITLE: Safety, Security, Reliability
#+DATE: <2011-03-03 Thu 20:47>
#+AUTHOR: vaeringjar
#+EMAIL: vaeringjar@land
#+DESCRIPTION: My casual discussion thoughts from OMSE525W9 on Safety, Security, and Reliability with comments.
#+KEYWORDS: ities


* Post

My casual discussion thoughts from OMSE525W9 on Safety, Security, and
Reliability with comments.

TODO: I should go back and add proper citations.


* ORIGINAL TOPIC 1 OF 1
  :PROPERTIES:
  :CUSTOM_ID: original-topic-1-of-1
  :END:

Recall, early in this course we discussed the idea of selecting
processes that meet the needs of the problem at hand. This week we will
go back to this theme for mission-critical problems and also examine the
sorts of metrics we should consider measuring and applying.

During our discussions it will generally make good sense to relate the
metrics we choose to the processes we choose. Systems developed to meet
stringent security, reliability and safety requirements, among others,
typically require fairly rigorous specifications, designs, reviews,
tests and measurements to provide the level of assurances customers and
their users demand. But how much process ceremony and how much process
measurement is enough? Clearly, this is a very tough question to answer.
We have little quantitative guidance to lean on [mind you, some authors
have addressed this problem qualitatively].

This week the textbook and my online notes cover the needs of
mission-critical projects. I have also posted two readings to stimulate
our discussion. The first one, by David Parnas et al., is an older and
more traditional article pointing out the processes that should be
applied when building high-reliability software for safety-critical
systems. The second article, by Drobka et al., explores the possibilities
of applying Agile/XP processes to build adequate solutions for
mission-critical projects.

This week I would like you to consider processes and metrics needed to
support mission-critical projects, distinguishing between a. "baseline",
b. "domain-specific" and c. "gold-plated" projects. When making your
posting, address all three areas in a single posting. If after some
further discussion and thinking you change your mind, I recommend that
you post your altered position/comments as a response to your first
post - this shows you are really reflecting on the possibilities after
all.

9a. Baseline: What do you believe to be the essential processes and
metrics for any mission-critical project? That is, which processes and
which measurements would you implement for projects where reliability,
safety and security (any of these) are critical aspects of the
requirements?

9b: Domain-Specific: What do you believe to be the processes and metrics
that are specific or distinct when tackling: i. reliability-critical;
ii. safety-critical; and iii. security-critical projects? In other
words, beyond the "baseline", which processes and metrics do you feel
should be put in place for each of these domains?

9c. Gold-Plated: What are the processes and metrics that you would
incorporate into a project to mitigate residual risks? These are cases
where you have virtually unlimited budget and the risks are exceedingly
high -- like sending men to the moon. What additional processes and
metrics would you want to include?

-kt


* My Response
  :PROPERTIES:
  :CUSTOM_ID: my-response
  :END:

9a - Baseline:

My proposed baseline is rather verbose. I find value in most of the
ideas we have read and discussed; having knowledge of each of them is
necessary for mission-critical projects. Traceability (or at least an
understanding of the requirements), some method for determining quality,
logic or safe assumptions in verification and testing, and risk
identification are all crucial.

Traceability is one of the most important characteristics. Without this
we cannot understand which requirements have or have not been met.
Furthermore, lacking this understanding of the requirements throughout
the project means that we do not understand why we have the
requirements.

Even after last week's discussion, I still think that simplicity is not
a direct measure of customer satisfaction; there are other metrics that
are just as important. Keeping things simple could mean that
understandability is greater. This, again, feels correlated with
traceability.

At my job we are really mostly dealing with states of objects,
components, etc. We assume most parts already work on their own, so it's
our job to make sure that each state transfers to the next without
losing functionality. Some of it is obvious or intuitive, other times
not. While I love logic, I think that in this context using formal
verification or fault tolerance assumptions is exhaustive in a different
sense of the word: exhausting. For instances of safety, security, or
reliability, I
think it is necessary. For my own work, testing works well enough most
of the time. We bring out the big guns for the harder stuff.

9b - Domain-Specific:

1. reliability-critical:

From my point of view, a safety issue is a security issue is a
reliability issue.
Understandability, and thus documentation, becomes increasingly
important as the needed level of reliability rises. Each set is a subset of
the next. Safety has all of the issues that security and reliability
have but with the added need to protect or preserve life. Security
protects or preserves data. Both need to be reliable. All are
mission-critical. Many of the concepts, metrics, and quality
characteristics are intertwined.

My perspective on the matter also affects how I would define the
domain-specific sub-points. Without others to measure against,
reliability-critical is equivalent to mission-critical. But if I were to
add another layer to distinguish the two I would make
reliability-critical a subset of mission-critical. What defines
mission-critical loses some ground with fewer quanta.

On the other hand, I probably have a bias in that I am imposing my own
ethics to measure the importance of safety, security, or reliability.
Tian's chapter on software reliability engineering (SRE) introduces
time domain and input domain approaches to measuring reliability using
SRGMs, IDRMs, and TBRMs (Tian 372). SRE has its own set of assumptions, of which
A3 concedes that its own techniques are not perfect (Tian 373). This
feels like the rare occasions assumption in formal verification,
described below.
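To make the time domain idea concrete, here is a minimal sketch of one
well-known SRGM, the Goel-Okumoto model; the parameter values are
invented for illustration and are not from Tian:

```python
import math

def go_mean_failures(t, a, b):
    """Goel-Okumoto SRGM: expected cumulative failures observed by time t.
    a = total expected latent faults, b = per-fault detection rate."""
    return a * (1.0 - math.exp(-b * t))

def go_failure_intensity(t, a, b):
    """Instantaneous failure rate at time t (derivative of the mean)."""
    return a * b * math.exp(-b * t)

# Hypothetical parameters: ~100 latent faults, 5% detected per unit of
# test time. In practice a and b are fitted to observed failure data.
a, b = 100.0, 0.05
print(round(go_mean_failures(40, a, b), 1))      # 86.5 failures by t = 40
print(round(go_failure_intensity(40, a, b), 3))  # 0.677 failures per unit time
```

As testing time grows, the intensity decays toward zero, which is the
quantitative version of "no significant gain" from further testing.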

2. safety-critical:

This is the most important domain because survival of a life or an
entity is at stake.

Formal verification, like testing, is exhaustive, though if all
variables are accounted for and the code is translated into logic then a
proof can be used to solve the problem. Each one has its own point of
view, defects or correctness, respectively (Tian 252). I think that
formal verification, like testing, is another way for developers to
search for software quality, but only on rare occasions, or with great
and careful effort, do I think that either is a silver bullet. Translating
the code back into logic is sometimes an intensive process. An analogue
to this is taking user needs and specifying requirements. Or taking each
one down from there to design, code, etc.

Fault tolerance falls under a similar category. Instead of simply
showing where defects or correctness exists, now the issue is dealing
with defects with the assumption that there are problems either rare or
causal (lacking a better word for a domino effect) with the
requirements, design, or implementation (Tian 268). It is possible to
avoid problems, such as deadlock (always my favorite example), by making
conditions impossible or rare. Risk identification, in this case the
management of resources and states of the processes/threads that are
using them, can increase fault tolerance handling. But in order to
reduce the potential for faults and failures one needs to have an
understanding of the program or experience with similar previous
projects. There is a point when statistically there is no gain from
reducing risk. The optimal set reduction algorithm stops when "no
significant gain can be made in entropy reduction" (Tian 362).
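One concrete way to make the deadlock conditions impossible rather than
merely rare is a global lock ordering; the locks and ranks below are a
hypothetical sketch, not from any of the readings:

```python
import threading

# Give every lock in the system a fixed global rank. If all threads
# acquire locks in rank order, a circular wait (a necessary condition
# for deadlock) cannot form.
lock_a = threading.Lock()
lock_b = threading.Lock()
LOCK_RANK = {id(lock_a): 1, id(lock_b): 2}

def acquire_in_order(*locks):
    """Acquire the given locks sorted by global rank; return that order."""
    ordered = sorted(locks, key=lambda lock: LOCK_RANK[id(lock)])
    for lock in ordered:
        lock.acquire()
    return ordered

def release_all(locks):
    """Release in reverse order of acquisition."""
    for lock in reversed(locks):
        lock.release()

# Even a caller that names the locks "backwards" takes a before b.
held = acquire_in_order(lock_b, lock_a)
release_all(held)
```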

3. security-critical projects:

There must be a way for me to break out of my is-a perspective. I will
contrast with safety.

If the customer (and possibly all stakeholders) dies, satisfaction is
not really an issue. I know that sounds cold. But the metric of customer
satisfaction is not really important in some cases for safety-critical
projects. Parnas brings up the space shuttle. I think all would be
willing to trade the dissatisfaction of riding in 3Gs to orbit instead
of freefall back down to the ground. Safety is concerned with the most
important satisfaction which is still being alive; it matters little
that it's hard to breathe if you're still breathing. Security, on the
other hand, allows for a multitude of dissatisfaction(s).

So if security-critical projects are more apt to customer
dissatisfaction, then perhaps they are higher-risk systems than
safety-critical systems because more can go wrong with them, concerning
the customer satisfaction metric. Complexity of software itself is a
problem, but I think that is merely due to the fact that most humans
have more mental capacity than physical (hardware) capabilities (Parnas
637). Building something in the mind is easier, but it might also mean
that we have more opportunities to mess up.

This feels like Tian's "failure independent assumption" as described
above, which gives the example, "multiple engines does not reduce the
likelihood of failures over a single engine" (Tian 268). In a
security-critical project, having multiple services to please the
customer, such as online banking, does not reduce risk: if one part of
the service fails, such as online bill-pay, the customer's satisfaction
drops even if all other parts of the system are intact.

On the other hand, I am not convinced that it is strictly a
security-critical problem. If there are several systems keeping someone
alive, then losing one, such as one of the engines on the space shuttle,
is still a critical and possibly fatal problem.

9c - Gold-Plated Silver Bullets:

The Drobka-Noftz-Raghu article on XP is a start. My own attitudes have
changed quite a bit the last year about XP. I'm not sure if I am
ambitious or confident enough to slap down a set of ideas. I am really
happy to see that documentation got a section all to itself. "To
facilitate communication with the other teams, we wrote use cases rather
than user stories [which] provided a common language." They took a great
idea/workflow and adapted it into something more appropriate for
themselves (Drobka 72).


* Interactions
  :PROPERTIES:
  :CUSTOM_ID: interactions
  :END:

lal, Do you think there are any metrics specific to safety that are not
(or to a much less extent) relevant to security or reliability?

One point I was trying to make in my own post was that safety is not
directly related to customer satisfaction as a metric. In fact, there
could be much dissatisfaction with aviation or medical systems (so
not necessarily strictly software), such as pain or discomfort, but as
long as survival and well-being are maintained, the customer's
requirements are still primarily met. Though, that might not match up
well with a dentist patient... I would put pain management systems,
morphine dispensers, etc in a different category than defibrillators.

I contrasted this with security systems which are held to a much
different standard. Banking systems need to not only take care of the
money, but also the peace of mind of the customers. You made a good
point in 2.c about how social and economic metrics are important here.
-me

--------------

al, I think that having a severity and priority status is a great idea.
It seems like something that I would have thought about also, but never
made any mention of it in my own post. I think that having the tracking
system in place would allow for a way to distinguish between
mission-critical and non-mission-critical issues.

I can see this being important when noting the difference between a
defect that exists because it was implemented incorrectly versus a
defect that exists because it was not implemented well enough. For
example, the difference between using an EKG that does not play a long,
constant beep versus one that plays the wrong sound or the wrong
interval when someone's heart stops.

Do you think that fault tolerance or failure containment techniques are
only available for gold-plated projects? My own thought is that they are
usually fully taken advantage of only in gold-plated projects, but I
also think that they have a limited place in the baseline. Thoughts? -me

--------------

luke, After reading your post I looked back over mine and thought of
some things that I think both of us could have expanded on. I wrote my
post right after reading the Drobka article on XP. I was really
impressed with some of the ideas they had. Reading your Gold-Plated
section made me think that you were equally impressed with SCR method (I
might be right in assuming from Manny's class?). So it looks like both
of our answers to the Gold-Plated section are along the lines of that we
wanted more of what was in the baseline and more of the domain-specific
parts. More V&V, more testing, and more teams.

The question also notes that we would have a nearly unlimited budget.
While being wary of throwing man-months at the problem, I would be
interested to see what would happen if, instead of hiring more teams of
people for tasks such as QA, parts of the project were made open-source
with a prize for the winners. X-Prize style. -me

--------------

kt, After reading most of the posts concerning processes and metrics, I
think that the most interesting part of this discussion is where people
would send things via the Gold-Plated portion. I made a comment to luke
that if I had an unlimited budget I might be willing to open up some of
the modules of the project; open-source seems like a great idea. But on
the other hand, there are some wonderful products out there that are
open-source that are not operating on a "let's send someone to the moon"
mentality. Web browsers, particularly, are on my mind. Even Google has
opened up portions of the code from Chrome.

Now, this is a very general case... I would probably put more energy
into combining software reliability engineering with the methodology of
the team, though it probably depends on how much time the team has since
we have been hammering the idea that more money does not directly or
linearly translate into faster development. If we could exhaustively
learn all we could about the hardware, then writing the software could
be decomposed into simpler and simpler pieces to the point that all
metrics are simply a checklist of ideas that the software either has or
lacks. In other words, I would alter my process so that all metrics were
essentially boolean measurements of obvious conditions. -me
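A toy sketch of that checklist idea, with the check names invented
purely for illustration:

```python
# Hypothetical gold-plated checklist: every metric reduced to a boolean
# measurement of an obvious condition rather than a number to interpret.
checklist = {
    "every requirement traced to at least one test": True,
    "worst-case timing verified against the hardware spec": True,
    "all hardware failure modes enumerated and handled": False,
}

passed = sum(checklist.values())
print(f"{passed}/{len(checklist)} checks pass")

# The project-level measurement is then just the conjunction.
ready = all(checklist.values())
print("ready" if ready else "not ready")
```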

--------------

vi, To add to your comment on information hiding, the less coupling
there is in a system, the easier it is to maintain. I can speak from
experience that when coupling is high enough, sometimes it is easier to
just start over rather than maintain the current system.

I think that most people could use more experience and understanding
with concern to which metrics and processes to include in a baseline.
Your baseline metrics are defects per 1000 lines of code, defects per
function, and the MTBF. In the domain-specific section for reliability
and safety you listed "develop redundant components." Is there a reason
why you did not include it in the security section? If not, perhaps that
would be another baseline metric. Or, if you took the approach I wrote
about this week, the rigor of some metrics might change depending on the
context.
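For concreteness, those three baseline measurements reduce to small
computations; the numbers below are invented to show the units involved,
not taken from your post:

```python
def defects_per_kloc(defect_count, lines_of_code):
    """Defect density normalized to thousands of lines of code."""
    return 1000.0 * defect_count / lines_of_code

def defects_per_function(defect_count, function_count):
    """Defect density per function."""
    return defect_count / function_count

def mtbf(times_between_failures):
    """Mean time between failures from observed inter-failure times."""
    return sum(times_between_failures) / len(times_between_failures)

# Invented example data.
print(defects_per_kloc(12, 48000))     # 0.25 defects per KLOC
print(defects_per_function(12, 300))   # 0.04 defects per function
print(mtbf([120.0, 90.0, 150.0]))      # 120.0 hours between failures
```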

As a cautionary to redundancy, sometimes having a redundant component
does not improve the MTBF of the system. Tian's "failure independent
assumption" as described in chapter 16, gives the example, "multiple
engines does not reduce the likelihood of failures over a single
engine." By this, one would need to be cautious as to how they implement
redundant components (Tian 268). On the other hand, some cases might
need different software for different hardware; if the redundant
components are designed in a way that they can be selected during a
configuration per hardware, then maybe you meant "specialized" or
"specific" components rather than "redundant" in the way that I am
interpreting it.
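A back-of-the-envelope sketch of why the failure independence assumption
matters so much here; the probabilities are invented:

```python
# Probability that a single component fails during a mission (invented).
p_fail = 0.01

# If both redundant copies share the same defect, their failures are
# perfectly correlated and redundancy buys nothing.
p_system_correlated = p_fail

# If their failures are truly independent, both must fail for the
# system to fail.
p_system_independent = p_fail * p_fail

print(p_system_correlated, p_system_independent)
```

Redundancy only improves the numbers to the extent the independence
assumption actually holds, which is Tian's caution.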

Those were just my thoughts. What do you think? -me

--------------

juju, Do you think that in an unlimited budget situation that experience
or innovation of employees is a useful thing to have? You could add
staff improvement (education or credentials) to your processes. Also,
staff retention is another way to add legacy experience. While neither
necessarily increases process improvement or metrics evaluation, chances
are that improving one part of business will have positive influences on
others. Do you think there are any social factors that can be improved
that money can buy? -me

--------------

al, Yeah, that makes more sense. In the off chance that you were
strongly for using fault tolerance or failure containment techniques
only for gold-plated projects I was interested to know why. =) -me

--------------
