
%
%structure for each section:
%motivation
%benefits
%implementation
%drawbacks
%mitigation


\section{Usability}
\subsection{Relevant Usability Paradigms}


A primary design consideration of BioCompute is its usability and
user-friendliness.  During its development, two usability metaphors
were used as guiding principles: electronic lab notebooks and portals.
In this context, electronic lab notebooks seek to improve upon their
paper-based brethren while portals seek to hide the perceived
complexity of underlying systems.  This section will discuss how
BioCompute was implemented to achieve the desirable aspects of
electronic lab notebooks and portals while recognizing and mitigating
their drawbacks.

%modeling issues:
%Electronic lab notebooks aim to enhance the detailed logging and
%note-taking process in experiments while preserving the flexibility
%inherent in paper-based systems.  

%reproduction,
%Detailed log notes and records are vital to the reproducibility 
%of experiments and providing a log to jog memories when 
%preparing results for publications.

\subsection{Portal Abstraction for Distributed Resources}

\subsubsection{Hiding Complexity}
BioCompute aims to lower barriers surrounding distributed computing
resources by presenting them for use in the form of a web portal.
While such resources already exist, existence is not the sole
prerequisite for effective use.  By providing an easy, streamlined,
and simplified means of access to pre-existing resources---in this
case, the Condor pool at the University of Notre Dame---a web portal
acts as a catalyst for the production of meaningful results.


A portal for bioinformatics makes existing distributed computing
resources accessible to researchers whose domain research can utilize
such resources but whose domain expertise does not cover the effective
engagement of such resources.  BioCompute succeeds insofar as it
serves as an abstraction for the distributed resources it provides; if
the user regards the system only ever as a ``faster, easier BLAST that
keeps track of things for me'', then it may be considered to be
working well.


In addition to hiding the complexity of distributed resources, a
portal can mask the complexity of underlying domain applications.
Many ``wet sciences'' researchers are not familiar or
comfortable with command-line interfaces.  Such interfaces, while
efficient for those proficient in their use, often do not lend
themselves to discoverability and do not meet the usability
expectations of those whose background is not computer-based.  In its
original form, BLAST is a command-line tool.  BioCompute can
empower researchers to use BLAST by hiding the command line behind a
web interface, which is more familiar to typical modern computer
users.

\subsubsection{``Leaky'' Abstractions}


All abstractions are ``leaky'' to some
degree~\cite{spolsky-leaky}---that is, at times they break down and
expose details which should be abstracted.  Inasmuch as BioCompute is
a layer of abstraction, it sits atop and depends upon a number of
other independent, existing systems.  There are a number of dangers
which present themselves in such a situation.  Users of BioCompute
should not have to worry about the state of the distributed resources
upon which it runs, nor should they even need to know it runs in a
distributed environment.  In the event of a failure in an underlying
system, the abstraction ``leaks'' and users are affected.

%ref: spolsky leaky abstractions

\subsubsection{User Interface Issues}

The portal must accommodate users with varying levels of familiarity
with the underlying system while minimizing user frustration on all
levels.  Often, novice users are discouraged by poor documentation or
vague options, while experienced users feel constrained by rigid
interfaces and unproductive while learning a new environment.
BioCompute should not be frustrating to users who are familiar with
the BLAST command-line tool.  While BioCompute should be inviting to
users with less familiarity with computers than its developers, it
should not rebuff users who require additional flexibility in order to
effectively use the system.  BioCompute delivers unique benefits in
its harnessing of distributed resources and its transcription
features; forcing advanced users to choose between those benefits and
additional flexibility in the domain application should be avoided.


\subsubsection{Sequence Database Management}

BioCompute will only be useful to researchers if it hosts the
databases they are interested in querying.  

%expand from earlier processing section


Inasmuch as it is BioCompute's goal to be an eminently useful and
flexible tool, hosting relevant sequence databases furthers this goal.
So far, these databases have been sourced from public outlets, e.g.,
the NCBI, or from the bioinformatics researchers
themselves.  Since there is currently a small group of users who are
in close contact with those developing BioCompute, there has been
little issue collaborating and sharing data to host.  It would be
desirable to streamline the sequence database submission process so
that in the future user requests for hosting databases can be
accommodated quickly.  There is concern, however, with accepting
dataset submissions and hosting and distributing them automatically.
To this end, a set of scripts has been developed that enables easy
addition of databases to the BioCompute system but requires explicit
administrator action.
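The staging-plus-approval flow described above could be sketched as
follows.  The directory paths, function names, and the use of NCBI's
legacy \texttt{formatdb} tool are illustrative assumptions, not
BioCompute's actual implementation.

```python
import os
import shutil
import subprocess

PENDING_DIR = "/var/biocompute/pending"   # hypothetical staging area
LIVE_DIR = "/var/biocompute/databases"    # hypothetical live database dir

def looks_like_fasta(text):
    """Cheap sanity check: the submission starts with a '>' header line."""
    return text.lstrip().startswith(">")

def stage_database(fasta_path, pending_dir=PENDING_DIR):
    """Copy a user-submitted FASTA file into the staging area.
    Nothing is served to users until an administrator approves it."""
    with open(fasta_path) as f:
        if not looks_like_fasta(f.read(1024)):
            raise ValueError("not a FASTA file: %s" % fasta_path)
    os.makedirs(pending_dir, exist_ok=True)
    dest = os.path.join(pending_dir, os.path.basename(fasta_path))
    shutil.copy(fasta_path, dest)
    return dest

def approve_database(name, pending_dir=PENDING_DIR, live_dir=LIVE_DIR):
    """Explicit administrator action: move the staged FASTA file into
    the live directory and build BLAST index files with formatdb."""
    src = os.path.join(pending_dir, name)
    os.makedirs(live_dir, exist_ok=True)
    dest = os.path.join(live_dir, name)
    shutil.move(src, dest)
    # -i names the input file; -p T assumes a protein database.
    subprocess.check_call(["formatdb", "-i", dest, "-p", "T"])
    return dest
```

The key design point is that \texttt{stage\_database} can be run on
any submission, while \texttt{approve\_database} remains a deliberate
administrator step.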


Currently, the initial task of getting a database hosted on BioCompute
presents additional burden to users compared to executing BLAST on
their local machine.  However, this should be a relatively infrequent
activity for users; a typical use case would be a researcher
conducting a large number of queries against a small number of
databases over a period of time.  In this case, the benefits of
harnessing the distributed system are obvious and would outweigh the
initial burden of getting the database hosted.  In addition, there may
be future scenarios in which it is necessary to limit the number of
databases a user may upload.  Regardless, development of BioCompute
must proceed with this issue in mind.

\subsubsection{Input File Limitations}

A specific drawback of BioCompute in its current implementation rests
in its use of HTTP file uploads for the sourcing of FASTA input
sequences.  There is currently an upper limit on the size of the input
file, which depends on the user's Internet connection, network
proximity to the server, and the HTTP server's timeout settings.
Although comprehensive tests have not been conducted, files as large
as 53 megabytes have been uploaded through a direct connection on the
Notre Dame network.  The server's timeout length has since been
increased to facilitate larger input files.


\subsubsection{BLAST Results File Formatting}
The results returned from BioCompute differ slightly from those
returned from BLAST.  Currently, the results aggregation algorithm is
a na\"{i}ve ordered concatenation.  This provides results which do not
differ significantly from BLAST, but contain formatting differences
which could be distracting and betray the abstraction of the
distributed system.  Developing a results post-processor to reconcile
these differences is complicated by the fact that BLAST has multiple
output formats which are neither isomorphic nor equivalent.
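The aggregation step amounts to concatenating the per-task outputs in
input order; a minimal sketch of this na\"{i}ve approach (the function
and the index-keyed structure are hypothetical, not BioCompute's
actual code):

```python
def aggregate_results(partials):
    """Naive ordered concatenation of per-task BLAST outputs.

    `partials` maps a task index (the position of the task's input
    chunk in the original query file) to that task's raw BLAST output.
    Sorting by index keeps results in the same order as the input
    sequences, but each chunk's BLAST header and footer text survives,
    which is the source of the formatting differences relative to a
    single monolithic BLAST run.
    """
    return "".join(partials[i] for i in sorted(partials))
```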

\subsubsection{Mitigation}
Many of these issues could be addressed by providing a command-line
client to interface with the BioCompute system.  By providing such a
client, one could separate the interfaces used by advanced and regular
users so as to minimize frustration in both groups and focus on the
needs and expectations of each group separately.  It could address the
input sequence size limitation by providing a less restrictive
file-upload delivery vector.
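A command-line client of this kind might look like the following
sketch.  The submission endpoint, field names, and option set are all
assumptions for illustration; BioCompute exposes no such API today.

```python
import argparse

# Hypothetical endpoint; used only to show where a POST would go.
SUBMIT_URL = "https://biocompute.example.edu/api/jobs"

def build_job_request(program, database, fasta_path, notes=""):
    """Assemble the form fields a command-line client would POST.
    Streaming the file from disk sidesteps the browser-upload limits
    discussed above."""
    return {
        "program": program,        # e.g. blastn, blastp
        "database": database,      # name of a hosted sequence database
        "input": open(fasta_path, "rb"),
        "notes": notes,            # free-form lab-notebook annotation
    }

def main(argv=None):
    parser = argparse.ArgumentParser(
        description="Submit a BLAST job to BioCompute (sketch)")
    parser.add_argument("program", choices=["blastn", "blastp", "blastx"])
    parser.add_argument("database")
    parser.add_argument("fasta")
    parser.add_argument("--notes", default="")
    args = parser.parse_args(argv)
    fields = build_job_request(args.program, args.database,
                               args.fasta, args.notes)
    # A real client would now POST `fields` to SUBMIT_URL (e.g. with a
    # multipart-encoding HTTP library) and print the new job's id.
    return fields
```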


%frustrating for users familiar with command line
%limits on databases
%leaky abstraction - spolsky
%HTTP upload limitdrawback


\subsection{Electronic Notebook}


BioCompute currently provides text-based
``blank-page''~\cite{chem-eln} electronic lab notebook functionality
for BLAST jobs; it allows users to add textual details about a job.
This allows researchers to tightly couple their observations and notes
about the job with the input, results, and recorded metadata of the
job itself.


It is important that an electronic lab notebook be an unobtrusive
element in research workflow; the use of a single component should not
materially impinge on existing workflows nor set unreasonable
restrictions on defining new processes.  To this end, BioCompute's
electronic lab notebook functionality has been of the ``blank-page''
philosophy; it provides room for free-form text input instead of
limiting users to entering data in a rigid form.  While many other
data types are found and recorded in paper notebooks, including
images, equations, graphs, and plots, BioCompute does not currently
support them.




%basic text
%clean slate
%todo:other document types in future - images, eqns (mathbin), 


\subsection{Automatic Transcription}



Much bioinformatics research is conducted via computer-based
experimentation, and as experiments become faster, larger, and easier
to run, traditional paper-based lab notebooks reach the limits of
their feasibility.  An electronic lab notebook can
especially aid in transcription of experiment details in a
computer-based milieu.


Non-computer-based research traditionally required manually
maintaining detailed notes of conducted experiments, but
computer-based research, such as that facilitated by BioCompute, could
easily benefit from an electronic lab notebook.  The parameters and
results of such research are digitally sourced and can be
automatically recorded by an electronic notebook and presented for
annotation or observation, obviating the need for rote manual
transcription of details which can be automatically recorded and
reducing the possibility of human error or forgetfulness in
transcription.
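What automatic transcription records might be sketched as below; the
field names are illustrative, not BioCompute's actual schema.  The
point is that every field is digitally sourced at submission time, so
none of it needs manual copying.

```python
import getpass
import time

def transcribe_job(program, database, options, input_filename):
    """Automatically record the parameters of a submitted job.

    The researcher never transcribes these by hand; the system captures
    them when the job is submitted, and the user only adds free-form
    annotations afterwards in the blank 'notes' field.
    """
    return {
        "program": program,              # BLAST variant requested
        "database": database,            # hosted database queried
        "options": options,              # raw BLAST option string
        "input_filename": input_filename,
        "submitted_by": getpass.getuser(),
        "submitted_at": time.strftime("%Y-%m-%dT%H:%M:%S"),
        "notes": "",                     # blank page for the researcher
    }
```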


In short, electronic notebooks can reduce the amount of drudgery and
accidental complexity inherent in the processes of conducting
computer-based research and allow researchers to focus instead on the
problem domain.

\subsection{Data Explosion}

Even in the face of exponentially-growing datasets, the absolute
financial costs surrounding computer-based bioinformatics experiments
are decreasing.  Additionally, BioCompute aims to ameliorate temporal
limitations on bioinformatics through distributed processing.  This
decrease in barriers begets an increase in the frequency and iteration
cycle of experiments.  In an environment with greater processing
resources, experiments need less justification for execution, and the
consideration given each experiment before running it is decreased.
The ability and opportunity to conduct relatively rapid iterations of
experiments at lower costs highlights the benefits of managing the
resultant data and automatically transcribing the details and results
of each job.  In addition, the exponential growth of datasets provides
a greater domain over which experiments can be run, adding further to
the volume of bioinformatics data to be managed.
%ref:iowa blast paper - 2800 sequences per day


\subsection{Collaboration and Sharing Results}


In environments where research has scaled beyond one person, effective
collaboration is limited by the fact that a paper-based lab notebook
is a single physical artifact in a single location.  An electronic lab
notebook can aid in the
effective dissemination of experiment data throughout a research team.
BioCompute provides functionality which lets researchers easily share
their queries and associated notes with other BioCompute users.






\subsection{Reproducibility}
BioCompute is expected to regularly undergo changes, revisions, and
updates to its execution model.  It is important to enable the
reproducibility of jobs executed within BioCompute in the face of
these changes.  In order to ensure against these likely future
changes, and additionally to ensure system modifications in
BioCompute's testing environments do not affect adversely affect data
processing in production environments, a snapshot of the scripts used
to execute the job is stored alongside the results.  This snapshot
provides a trace of how the job was executed and records the
command-line call to the BLAST executable and the surrounding
computing environment.
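The snapshot mechanism could take a shape like the following sketch;
the file names, trace format, and recorded fields are assumptions for
illustration rather than BioCompute's actual layout.

```python
import os
import platform
import shutil

def snapshot_job(job_dir, script_paths, blast_command):
    """Store, alongside a job's results, a copy of the execution
    scripts and a trace of how BLAST was invoked.  If the scripts later
    change, this snapshot still shows exactly how this job was run."""
    snap_dir = os.path.join(job_dir, "snapshot")
    os.makedirs(snap_dir, exist_ok=True)
    # Preserve the execution scripts as they existed at run time.
    for path in script_paths:
        shutil.copy(path, snap_dir)
    # Record the exact BLAST command line and the host environment.
    trace = os.path.join(snap_dir, "trace.txt")
    with open(trace, "w") as f:
        f.write("command: %s\n" % " ".join(blast_command))
        f.write("host: %s\n" % platform.node())
        f.write("platform: %s\n" % platform.platform())
    return snap_dir
```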


In addition to being affected by BioCompute's script structure,
reproducibility and an accurate log of research queries is affected by
the possibility of changing databases on BioCompute's system.
It is anticipated that as BioCompute grows in use, researchers will
request custom sequence databases to be included in the system.  In
addition, some sequence databases currently hosted on BioCompute are
publicly sourced and regularly updated, e.g. {\it nr}, the standard
non-redundant database maintained by the NCBI.


In order to maintain strict reproducibility of results, sequence
databases should ideally be static, neither updated nor removed once
included in the BioCompute system.  If an update to a database were to be
included, it would be added under a related name with a
differentiating timestamp, while still including the full copy of the
earlier version.
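A timestamped naming scheme of this kind is simple to implement; the
convention below is an illustration, not BioCompute's actual one.

```python
import time

def versioned_name(base_name, when=None):
    """Derive a timestamped database name so that an updated copy can
    be added alongside, rather than in place of, the earlier version.
    Jobs recorded against the older name remain reproducible."""
    when = when if when is not None else time.localtime()
    return "%s_%s" % (base_name, time.strftime("%Y%m%d", when))
```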


This model is not universally feasible in the long-term, especially
considering the continued growth of datasets.  In addition, keeping
all old datasets may clutter the user interface and make it difficult
to find and select the desired database.  However, if the database
against which a job was run is no longer available or has been
modified since the job was run, then the integrity of the lab notebook
and the ability to confirm results is compromised.


There are special cases of sequence databases which should be
update-able in place without negatively affecting reproducibility.
These databases should be publicly transparent in the criteria by
which sequences are added and should be modified in an append-only
mode.  {\it nr} fits these criteria, as do similar databases from the
NCBI, while custom datasets sourced from individual researchers or
labs do not have these attributes.  As such, a scenario is foreseeable
wherein datasets with these attributes are updated in place
frequently, while other sequence datasets remain static and are
updated in full under a differentiating timestamp.
 




\subsection{Future Work}

By providing these functionalities, BioCompute provides a platform for
structured storage of data behind a consistent interface, as
opposed to a tool with an ephemeral, unstructured, and
inconsistent relationship with the data upon which it operates.
BioCompute automatically records meta-data associated with the job and
stores it alongside the results of the BLAST program, thereby giving
context and meaning to the results and facilitating reproduction and
verification of the data.  In the future, BioCompute can be expanded
to provide these benefits to other types of data and for programs
other than BLAST.

There are many possibilities for BioCompute to expand and become more
useful in its function as an electronic lab notebook.  First, adding
fine-grained access controls similar to Unix groups would allow
collaborating researchers to share results and progress without
giving the results visibility to the entirety of BioCompute.
While this may be antithetical to a wholly collaborative environment,
it would improve usability by alleviating researchers' worries about
the exposure of proprietary and custom datasets and results.  Similar
constructs could be added around user-sourced sequence databases.

%todo: check InterProScan, maq, GMAP

BioCompute could be expanded to accommodate other types of data and
other programs.  InterProScan is another program which would fit
BioCompute's model well.  In addition, BioCompute could provide a
flexible tool for chaining processes together in a manner conceptually
similar to the stream-pipe functionality found in Unix command lines.  A first
step in this direction would be the ability to return a FASTA file of
the sequences matched from a BLAST query (a normal BLAST query returns
just the names of the matched sequences).
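That first step could be sketched as follows, assuming tabular
(\texttt{-m 8}) BLAST output, where the second column is the subject
sequence id; both helper functions are hypothetical.

```python
def matched_sequence_names(tabular_blast_output):
    """Collect, in order of first appearance, the subject-sequence
    names hit by a BLAST query, given tabular (-m 8) output."""
    names = []
    for line in tabular_blast_output.splitlines():
        if not line or line.startswith("#"):
            continue  # skip blanks and comment lines
        subject = line.split("\t")[1]
        if subject not in names:
            names.append(subject)
    return names

def extract_fasta(fasta_text, wanted_names):
    """Return a FASTA file containing only the wanted sequences.
    A record is kept if the first word of its header line is one of
    wanted_names; its sequence lines are kept with it."""
    keep, out = False, []
    for line in fasta_text.splitlines():
        if line.startswith(">"):
            name = line[1:].split()[0]
            keep = name in wanted_names
        if keep:
            out.append(line)
    return "\n".join(out) + ("\n" if out else "")
```

Chaining the two (run BLAST, collect hit names, extract those
sequences as FASTA) would let one query's output serve directly as the
next query's input, in the spirit of a Unix pipe.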


BioCompute could also serve as a data-processing workbench.  A model
could be constructed wherein data from experiments is sent first to
BioCompute for hosting and further processing options.  This would
provide a common and familiar environment where users could use common
metaphors to manipulate, process, and add value to disparate data
types.




%harness distr. resources effectively   CHECK 
%encapsulate problems of distr resources and encapsulate them from users
%wet researchers unfamiliar with CL     CHECK
%additional processing capabilities
%value-add of meta-data storage



