
/*! \mainpage Acquire Backend System - Overview
 * 
 * To install and test a minimal insecure installation, use the
 * \ref quickstart. After this is complete, the installation can
 * then be secured using either \ref secureinstallexisting or
 * \ref secureinstallown.
 * 
 * \section whatis What is Acquire
 * 
 * Acquire is a backend which allows users of simulation software to
 * easily submit and manage distributed jobs across clusters, all from
 * the comfort of their laptops or tablets. Acquire allows personal
 * cluster accounts to be amalgamated into a single, easy-to-use
 * resource. From the Acquire system, it is possible to script
 * together compute resources from disparate locations into a single
 * secure service that can be called from executable code to
 * distribute computation for simulations and more. This software was
 * funded as part of an EPSRC software development project.
 * 
 * \subsection acquire_protocol Acquire Protocol
 * 
 * \image html simple.svg
 * \image latex simple.eps
 * 
 * The Acquire system involves connecting 'client' services to a
 * central TCP server, known as the broker. The broker service
 * coordinates the movement of work, as well as the selection and
 * curation of jobs. Both 'user' clients (which produce jobs and are
 * driven by users) and 'compute' clients (which run jobs and are
 * instantiated automatically on remote machines) connect to the
 * central service and push and pull data to and from it. It is
 * intended that this distinction between user and compute will become
 * blurred as further scripting of the system becomes possible.
 * 
 * \subsection submitting_job Submitting a Job
 * 
 * \image html submit.svg
 * \image latex submit.eps
 * 
 * The user-side client communicates only with the broker, giving it
 * work to do.  The broker then selects an appropriate cluster that
 * the user has access to, and initiates an SSH connection (blue
 * arrow). Over this secure connection, the broker sends keys that
 * allow a client to authenticate itself and communicate with the
 * system. Then, over the same connection, the job is submitted to the
 * cluster queue as a simple bootstrapping script, which waits until
 * it is allocated a resource on which to run (red dotted line). When
 * the queue runs this script, it downloads the 'compute' client from
 * the broker, decompresses it and executes it. The 'compute' client
 * finds the keys left behind by the SSH connection and uses these to
 * authenticate with the broker, before downloading and running the
 * work assigned to it. Once complete, the work is uploaded again to
 * the broker server, and then becomes available for download from the
 * broker by the user's original client application.
 * 
 * \subsection addclusters Adding Clusters to the Broker
 * 
 * An XML file is used to describe the clusters and the machines that
 * can be used to access them. To add the test cluster definition XML
 * to the database permanently (which you may not want to do), one
 * uses: <p><tt>./broker \-\-load-clusters </tt><i>&lt; path to shared
 * install files &gt;</i><tt>/test_cluster.xml</tt><p> This loads the
 * information in the XML file into the database permanently, although
 * there is an option described in \ref tinned_broker that can be used
 * to accept cluster definition files temporarily.<p>The cluster
 * definition files are simple XML, and examples of the syntax are
 * given in the shared install directory.
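 * 
 * As an illustration only (the element and attribute names here are
 * invented; consult the <tt>test_cluster.xml</tt> example in the
 * shared install directory for the real syntax), a cluster definition
 * might look something like:
 * 
```xml
<!-- Hypothetical sketch of a cluster definition; not the real schema. -->
<clusters>
  <cluster name="test_cluster">
    <headnode host="login.example.ac.uk" port="22"/>
    <queue type="slurm"/>
  </cluster>
</clusters>
```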
 * 
 * \section features Features
 * 
 * \subsection login_ssh SSH-Based Authentication
 * 
 * As a set of SSH credentials is required to run work, and the work
 * is on a set of trusted machines, it can be assumed that if the
 * trusted machines give you a set of SSH credentials then you must be
 * trustworthy.  Because of this, the system has no formal account
 * set-up; authentication is simply a matter of having working SSH
 * access to a machine that has been added to a password-protected
 * remote account.
 * 
 * While the system supports both password and key-based
 * authentication, the only authentication method used internally is
 * key-based; other authentication methods are only used to
 * 'bootstrap' the login process. This is because adding and deleting
 * auxiliary keys is much easier, more secure and more transparent to
 * the user than requiring the system to hold unique passwords.
 * 
 * The broker system holds a set of SSH keys which are added to the
 * <tt>authorized_keys</tt> file on each trusted machine as you add
 * logins to the system. These keys are held remotely, encrypted with
 * your account password. Each time a job is submitted, the password
 * is sent alongside to unlock your remote SSH keyring, resulting in
 * an SSH login to the cluster that you or the broker has chosen to
 * run your work. This password is never written to disk.
 * 
 * If a remote cluster is found to be unavailable, the system will
 * attempt a search of the resources provided by the user to see if
 * access can be obtained indirectly.  This is achieved through use of
 * an automated recursive SSH tunnelling technique which we have
 * dubbed 'telescoping'. This same technique can also be used to find
 * and access resources on disparate private networks.
 * 
 * \subsection message_encryption Cryptographically-Protected Messaging
 * 
 * Messaging to and from the Acquire broker is protected by DSA, RSA,
 * ECDSA and AES. The Acquire system has a rigorous internal
 * cryptography system that provides simple semantic abstractions of
 * tried and tested cryptographic algorithms to ensure message
 * security and integrity. These abstractions are built on top of
 * OpenSSL's libcrypto, the <i>de facto</i> standard cryptography
 * suite, which offers both speed and reliability.
 * 
 * \section how_it_works How It Works
 * 
 * \subsection broker_messages Broker Messages
 *
 * Broker messages are the basic unit of communication in the system.
 * They are a simple stream of data sent either client-to-server or
 * server-to-client. They can be sent between 'actors', arbitrarily
 * named entities with a cryptographic identity that is reflected in
 * the cryptography of the message, meaning that only the intended
 * recipient can read it (while also being able to verify its
 * origin). The actor concept anticipates the future creation of a
 * distributed broker service, with implementations of delegated
 * actors that can be created and destroyed with different levels of
 * access and a finite lifetime. Broker messages are split into three
 * parts.
 * 
 * \subsubsection routing Unencrypted routing information.
 *
 * The first is an unencrypted header with simple <tt>TO</tt> and
 * <tt>FROM</tt> keys in plain text so that messages can be routed
 * without knowledge of their contents. The <tt>TO</tt> and
 * <tt>FROM</tt> keys name the actors that the remainder of the
 * message is sent to and from.
 *
 * \subsubsection headers Encrypted headers
 *
 * The remainder of the message is encrypted. The second part of the
 * message comprises a set of headers, similar to HTTP headers, which
 * are encrypted. The first header is always a
 * '&lt;messagetype&gt; &lt;messagesubject&gt;' pair, known as a
 * 'type' and a 'processee'.
 *
 * Messages usually fall into two types: 'active' messages, where the
 * processee is a real actor, and 'passive' messages, where the
 * processee is a status or error message. Again similarly to HTTP, a
 * double CR LF marker marks the end of the headers. This is so that
 * the protocol is easy to read in plain text once decrypted. Also
 * encoded into the headers is information about a final part of the
 * stream that can only be obtained once the headers have been
 * decrypted, as the sizes and data offsets are present in this second
 * encrypted section.
 *
 * \subsubsection content Encrypted content
 *
 * This third section contains a list of payloads, again
 * encrypted. The length of each section and payload is given by a
 * 'Content-Length'-style header in the encrypted headers
 * section. This two-block encryption structure is designed to
 * frustrate analysis of the encrypted stream.
 *
 * Both the encrypted headers and encrypted content are in general
 * compressed, signed and encrypted (in that order) by the originating
 * actor. The receiving actor must then decrypt, check signatures and
 * decompress, in that order. The system in general supports and wraps
 * all the types of keys available in OpenSSL (DSA, RSA, ECDSA and
 * AES), allowing easy generalisation to future cryptography.
 *
 * \subsection mp_hierarchy Memory Pool Hierarchy
 * 
 * The Acquire system makes use of the memory pool concept from the
 * Apache Portable Runtime, which is appropriate as many of the
 * operations performed by the system may be initiated locally by
 * either a client or a server. This is essentially a scoped
 * allocator: anything that has its memory provided by the allocator
 * is freed when the allocator is destroyed (or falls out of scope if
 * the AutoMemoryPool is used). Generally this means that a global
 * memory pool has sub-pools for client threads, which in turn have
 * sub-pools for client sessions. In the case of the compute and
 * client API there is a sub-pool for results, and local pools for
 * server exchanges that do not need permanent memory storage. This
 * suits a server application: by deriving memory pools down a chain,
 * memory leaks are prevented by the guarantee that the space is freed
 * on return from a function.
 *
 * \subsection data_store Data Store
 *
 * A data store is an actor created by either a 'client' client (user)
 * or a 'compute' client (remote). These are then furnished (at some
 * computational expense) with a set of communication keys by these
 * systems. A data store contains a serialised directory secured
 * (encrypted) with a public-private key pair. While the use of these
 * types of keys is computationally costly (RSA), only the 'client'
 * client (user) or 'compute' client (remote) will actively encrypt or
 * decrypt this data. Most of the time it will be exchanged as a set
 * of compressed and encrypted data blocks, with no processing applied
 * at this level. On the server side, a permanent data store of this
 * sort is held as a set of files, with the sizes and hashes of the
 * plain-text (so that changes in the cryptographic padding do not
 * cause retransmission of blocks) present in an auxiliary file.
 *
 * Once a data store is complete, it can be marked either as
 * 'workpacket' or 'results' (possibly both!). If it is marked as
 * 'workpacket', it can be given extra information which allows it to
 * be run as a work packet on a remote machine. If it is marked as
 * 'results', it should be downloadable by the user (client). A
 * DataStore takes up disk space on the broker, and so creating a
 * DataStore decreases the quota available to the particular user.
 *
 * \subsection work_instance Work Instance
 *
 * A single DataStore can be split into N WorkInstances which execute
 * in parallel, each knowing the total number of instances and its own
 * instance number within the set instantiated from the origin
 * DataStore. Instances on remote machines can be instantiated as a
 * WorkInstance, which is limited to the work of that instance and
 * holds job-specific keys, or as a Compute (only partially
 * implemented), which has independent keys, the ability to query the
 * broker for general "apt work", and support for serial work and
 * independent scheduling.
 *
 * \subsection remote_execution Remote Compute Execution
 *
 * At compile time, if special 'compute' building is enabled, we use
 * 'ldd' to analyse the dependencies of the C++ and Python
 * implementations of the compute and build packages of the required
 * libraries, filtering out some assumed dependencies. These are then
 * placed ready for download.
 *
 * (why are we still downloading this? we have secure copy which would
 * close security holes! we could secure copy the queueing script and
 * have a hash check on the bootstrap and executable packages)
 *
 * Execution on the remote machine begins with the broker tunnelling
 * in via telescoping and triggering a download of the current
 * queueing script onto the cluster head node. This queues a short
 * script which downloads and executes a bootstrap script.
 * 
 * When the cluster queueing system executes the short script, the
 * bootstrapping system downloads the compute implementation and
 * decompresses it. A binary patching script is then run which
 * compares the versions of system libraries on the currently running
 * remote system with those the included libraries were built with,
 * and then formulates a binary glibc_patch.so to reimplement any
 * features missing from the system. Links and references to newer
 * versions are also weakened inside the libraries, to prevent them
 * refusing to accept the patching scheme on version grounds.
 * 
 * Then the implementation is executed using LD_PRELOAD to patch all
 * of the binaries run with glibc_patch.so. This is done on both the
 * Python and C++ implementations. The Python implementation merely
 * has slightly different libraries.
 * 
 * When run, the WorkInstance initialises using the securely copied
 * keys in its environment and requests work. It is told that it is a
 * WorkInstance and that the work it is to do is in a named
 * DataStore. The script fetches the DataStore (it already has all the
 * appropriate keys) and executes it.
 * 
 * When complete, the 'compute' client (a WorkInstance) serialises and
 * encrypts the directory and uploads it to the broker. This DataStore
 * can then be downloaded from the broker and de-serialised into a
 * directory of results again at the user end.
 *
 * \subsection databases Databases
 *
 * All databases are SQLite and are accessed using SQL queries. There
 * are four databases, ActorsDB, KeyDB, NetworkDB and PendingDB.
 *
 * ActorsDB contains all the actors. These fall into four classes:
 * 'users', 'computes', 'workinstances' and 'datastores'. The 'users'
 * are real clients which do work. As these are associated with
 * logins, they are allowed to have quota and resources and are
 * generally top-level actors (they have no parents or origins).
 * Computes are not really implemented, but are a class of
 * authenticated automaton clients which have the ability to do work
 * and create more work (WorkInstance keys and actors are used in lieu
 * of this at the moment, for simplicity). WorkInstances are single
 * instances of work from a data store that exist on a compute
 * node. DataStores hold permanent records of encrypted data for input
 * and output from clients and computes.
 *
 * KeyDB contains all the keys. These are split into four tables, one
 * per key purpose: encryption, decryption, signing and checking
 * (signature verification). These keys can be of any type, within
 * the limitations of the cryptographic primitives involved. Each of
 * the key tables has a Work-In-Progress (WIP) password which encrypts
 * the keys while they are on disk. For special keys, the WIP password
 * can also be overridden with a user password, used for instance when
 * a datastore is activated and SSH keys need to be unlocked.
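 * 
 * As an illustration only (the real schema lives in the broker's
 * SQLite databases and the column names here are invented), the four
 * per-purpose key tables might look something like:
 * 
```sql
-- Hypothetical sketch of KeyDB's four per-purpose tables.
CREATE TABLE encryption (actor TEXT, keytype TEXT, keyblob BLOB);
CREATE TABLE decryption (actor TEXT, keytype TEXT, keyblob BLOB);
CREATE TABLE signing    (actor TEXT, keytype TEXT, keyblob BLOB);
CREATE TABLE checking   (actor TEXT, keytype TEXT, keyblob BLOB);
```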
 *
 * (why is the host fingerprinting still disabled?)
 *
 * NetworkDB contains knowledge about the network. This is mostly used
 * for finding network places (hosts and clusters) with SSH
 * telescoping and for recording SSH key fingerprints to authenticate
 * remote hosts. It also holds metadata about each user's SSH
 * authentication with all the hosts in the system.
 * 
 * PendingDB contains unverified and unauthenticated information about
 * changes in the network. For instance, host key changes and other
 * things that need human intervention. This is then relayed to a
 * simple HTTP console running on the broker port, accessible only
 * from localhost.
 *
 * All of these databases create SQL backups when they are destroyed
 * via their destructors (hence 'inspect signal_setflag(2)' in
 * gdb). If the values in a database need poking, delete the '.db'
 * variant and edit the SQL; the SQL will be seen as a good backup and
 * be reloaded.
 * 
 * \section outstanding_bugs Known Bugs
 * 
 * A malformed SSH handshake banner can crash the broker.<p> Renaming
 * the external names of actors is unimplemented.<p> Instances do not
 * retry cluster submission on failure.<p> Host name aliasing is
 * unsupported in the SSH telescope.<p> Using the username of another
 * user whose keys you don't have crashes the client.<p> The password
 * is given on the command line in Python, but asked for at a prompt
 * in C++; given the advantages for scripting, which is right? <p> */
