<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE html
        PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
        "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml"
      xmlns:ui="http://java.sun.com/jsf/facelets"
      xmlns:f="http://java.sun.com/jsf/core"
      xmlns:h="http://java.sun.com/jsf/html"
      xmlns:tr="http://myfaces.apache.org/trinidad"
        >

<f:view>

<ui:composition template="../pages/initPage.xhtml">

<ui:define name="pageContent">

<tr:panelGroupLayout>


    <h1>Manual</h1>
    <br/>
    <br/>
    <h2>Overview</h2>
    The aim of the Database on Demand project is to develop a web-based database pre-processing tool that
    will generate custom FASTA formatted sequence databases according to a set of user-selectable criteria.
    <br/>
    <br/>
    <h2>User manual</h2>
    When entering the Database on Demand web site, you will be presented with a short introduction header
    followed by an input form divided into four parts. These are presented top to bottom in the order of the
    general workflow (from input through processing to output).<br/>

    <br/>

    <h3>Input configuration</h3>
    The first section is, as the header 'INPUT' suggests, all about specifying the input you want to use to
    create your custom database. Initially it only shows a drop-down field where you can choose a
    database, and an 'Add as source' button:<br/>
    <br/>
    <tr:image source="../images/manual/dod-input-simple-1.png"/><br/>
    <br/>
    As soon as you have chosen and added a source database, the page will update and also show a table
    containing the resources you have selected.<br/>
    <br/>
    <tr:image source="../images/manual/dod-input-one-source-1.png"/><br/>
    <br/>
    This table serves two purposes at once: it gives you an overview of your selected resources and also
    enables you to optionally add filters you want to apply to each input. Filters are added by selecting
    the appropriate filter from the drop-down field, providing parameters in the following text field and
    clicking on the '+' button.<br/>
    <br/>
    <b>NOTE</b> that the number of filters in the 'Filters' column will increase with each new filter you add. To
    see more detail about all the filters you have added, click on the 'Show' link in the first column,
    'Details'. This will open a view of all the filters and their parameters for this particular database. You
    will also be able to remove unwanted filters here by clicking the '-' button.<br/>
    <br/>
    <tr:image source="../images/manual/dod-input-one-source-filter-1.png"/><br/>
    <br/>
    You will have to select at least one input database (we cannot make up data out of nowhere), but you can
    also add multiple sources. You are also allowed to specify more than one filter per resource, although you
    do not have to use any if you prefer not to perform a pre-selection.<br/>
    <br/>
    <b>NOTE:</b> Since TrEMBL is quite a large database, it is strongly recommended that you use a filter (for
    example: a species filter for human: TaxID '9606').<br/>
    <br/>
    Once you are confident you have found the combination of input sources and filters that suits you best, you
    can move on to the next section.<br/>
    <br/>

    <h3>Processing configuration</h3>

    The second section is where you can choose to digest the proteins into peptides.<br/>
    <br/>
    <tr:image source="../images/manual/dod-process-simple-1.png"/><br/>
    <br/>
    The first row in this form provides you with a set of options to define the enzyme you want to use for the
    digestion. You can either choose a predefined enzyme from the drop-down list or define your own custom
    enzyme by selecting the 'regex' option, which will disable the drop-down box but enable the enzyme
    specification fields (name, cleave, restrict and Cterm/Nterm). To create a new enzyme from scratch, simply
    give it a name, specify the regular expression pattern that is to be used to determine the cleavage site,
    optionally specify the restricting amino acids (which will prevent cleaving) and choose whether you want to
    cleave on the N-terminal or C-terminal side of the residue defining the cleavage site. The enzymatic digestion
    allows for a number of missed cleavages, which can be defined for both predefined and custom enzymes by
    changing the number in the missed cleavages selection box.<br/>
    <br/>
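    To make the digestion rules above more concrete, here is a simplified Python sketch (an illustration only,
    not the actual Database on Demand implementation; the function name and the treatment of restricting
    residues are our own assumptions):
    <br/>

```python
import re

def digest(sequence, cleave_pattern, restrict="", missed_cleavages=0, cterm=True):
    """Sketch of a regex-defined enzymatic digest.

    cleave_pattern: regex matching the residue defining the cleavage site
                    (e.g. r"[KR]" for a trypsin-like enzyme).
    restrict:       residues that prevent cleavage when they follow the cut
                    (a simplifying assumption for this sketch).
    cterm:          cut C-terminally (True) or N-terminally (False) of the site.
    """
    cuts = []
    for m in re.finditer(cleave_pattern, sequence):
        pos = m.end() if cterm else m.start()
        if 0 < pos < len(sequence) and sequence[pos] not in restrict:
            cuts.append(pos)
    bounds = [0] + cuts + [len(sequence)]
    fragments = [sequence[i:j] for i, j in zip(bounds, bounds[1:])]
    # Combine up to 'missed_cleavages' adjacent fragments.
    peptides = set()
    for i in range(len(fragments)):
        for j in range(i + 1, min(i + missed_cleavages + 2, len(fragments) + 1)):
            peptides.add("".join(fragments[i:j]))
    return peptides
```

    With a trypsin-like pattern r"[KR]" and restricting residue 'P', digest("AKPLRG", r"[KR]", "P") yields the
    peptides 'AKPLR' and 'G', since the K followed by P is not cleaved.
    <br/>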
    After you have added the enzyme of your choice by clicking on the '+' button, the page will display a table of
    enzymes, and you can go on and add more if you wish.<br/>
    <br/>
    <tr:image source="../images/manual/dod-process-table-1.png"/><br/>
    <br/>
    By selecting the ‘Ragging of database’ option you can also choose to rag all the peptides in the database.<br/>
    <br/>
    <tr:image source="../images/manual/dod-process-rag-full-1.png"/><br/>
    <br/>
    Ragging is a process by which a single sequence produces a set of derived sequences by consecutively removing
    one terminal residue (depending on the settings, this will be the amino (N) terminus or carboxy (C) terminus).
    An example is given below by N-terminally ragging the sequence 'YSFVATAER':
    <pre>
        YSFVATAER
         SFVATAER
          FVATAER
           VATAER
            ATAER
             TAER
              AER
               ER
                R
    </pre>
    Ragging is extremely useful if you want to detect proteolytic degradation, which results in the formation of
    a novel N-terminus (and/or C-terminus) and you do not <i>a priori</i> know where this processing will take place.
    The truncation option allows you to truncate the sequence to the specified number of terminal residues before
    ragging. For instance, in the case of N-terminal ragging, setting this to 100 will only include the first 100
    (N-terminal) residues of the sequence for the ragging process, disregarding the rest of the sequence. This can be
    useful if you happen to know that the processing you aim to identify occurs in the first X residues
    (e.g.: mitochondrial target sequences). <br/>
    Note that ragging is applied AFTER enzymatic digest, if a digest is requested. <br/>
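    The ragging procedure, including the optional truncation, can be sketched as follows (an illustrative
    Python snippet under our own naming, not the tool's actual code):
    <br/>

```python
def rag(sequence, n_terminal=True, truncate=None):
    """Ragging sketch: repeatedly remove one terminal residue.

    truncate: if given, first keep only that many terminal residues
              (the first N for N-terminal ragging, the last N otherwise).
    """
    if truncate is not None:
        sequence = sequence[:truncate] if n_terminal else sequence[-truncate:]
    ragged = []
    while sequence:
        ragged.append(sequence)
        # Drop the N-terminal or the C-terminal residue.
        sequence = sequence[1:] if n_terminal else sequence[:-1]
    return ragged
```

    rag("YSFVATAER") reproduces the nine sequences of the example above, from 'YSFVATAER' down to 'R'.
    <br/>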



    <br/><br/>

    <b>Important note</b>:
    Whenever an enzymatic digestion step is specified, the software will also automatically apply a step to clear
    peptide-level sequence redundancy in the database. This means that each peptide sequence will be present only
    once in the output database, thus maximizing the information ratio of the database (which is defined as the number
    of unique sequences in a database, divided by the total number of sequences in the database).
    Whenever a peptide could be derived from more than one protein, the accession numbers and peptide locations for each
    potential precursor protein are included in the peptide sequence header. These alternative precursor proteins are
    annotated at the end of the FASTA header description part, and the individual protein accession numbers and locations
    are separated by ‘^A’ characters (which is the FASTA standard annotation for protein isoforms).
    For instance, a peptide that matches to both P12345 and P54321, will carry a header like:
    <pre>
       >sw|P12345 (17-25)|RNAS1_ONDZI RecName: Full=Ribonuclease pancreatic;EC=3.1.27.5;^Asw|P54321 (18-26)
       YSFVATAER
    </pre>
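    The redundancy-clearing step and the information ratio can be illustrated with a short Python sketch
    (our own simplification: the real pipeline operates on full FASTA records, and the '^A' separator is
    written as two literal characters here):
    <br/>

```python
from collections import OrderedDict

def clear_redundancy(entries):
    """entries: (header, peptide) pairs. Each peptide is kept once; headers of
    further precursors are appended, separated by '^A' (written literally here)."""
    unique = OrderedDict()
    for header, peptide in entries:
        if peptide in unique:
            unique[peptide] += "^A" + header
        else:
            unique[peptide] = header
    return [(header, peptide) for peptide, header in unique.items()]

def information_ratio(total_sequences, unique_sequences):
    # Unique sequences divided by total sequences, as defined above.
    return unique_sequences / total_sequences
```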

    <br/>

    <h3>Output configuration</h3>
    The third section is where you can choose the output filters.<br/>
    <br/>
    <tr:image source="../images/manual/dod-output-simple-1.png"/><br/>
    <br/>
    The first option allows you to filter sequences by their amino acid composition. You can use a simple
    yet powerful query language to define your compositional requirements. This language is explained below.
    Amino acid notation is the single-letter notation, extended with 'U' for methionine <i>without</i>
    initiator methionine. The format supports boolean operators ('AND', 'OR' and '^' (NOT)) and is
    vaguely reminiscent of regular expressions. It does not have the full power of regular expressions,
    nor exactly the same syntax, but it is simpler and more powerful for specifying compositional
    requirements. (If you still prefer regular expression syntax, see the note below for instructions on how
    to achieve this.)
    You can specify residues or sequence stretches and combine these, e.g.:
    <pre>
      (K and R) or (S or T)    -> selects all entries having either a K and an R, or that have an S or a T
      ((K and R) or S) and L   -> selects all entries having an L and either an S or both K and R
      ^R and ^K                -> selects all entries lacking R and lacking K
    </pre>
    Another feature of the language concerns the counting of residues:
    <pre>
      2K or 2R or (K and R)    -> selects all entries having either exactly 2 R's or 2 K's, or that have
                                  both R and K
    </pre>
    Yet another feature is logical comparisons on counts:
    <pre>
      &gt;3K or &lt;5P         -> selects all entries with strictly more than 3 K's or strictly less than 5 P's
      &gt;=2K and &lt;=2L      -> selects all entries with 2 or more K's and 2 or fewer L's
    </pre>

    <br/><br/>

    <b>Important note</b>: If you prefer to use regular expression syntax instead of the format described above,
    you can persuade the system to use regular expressions by specifying at least one '.' in your sequence match
    query. Java regular expression matching will then be used by the system.


    <br/><br/>
    The second option allows you to specify mass limits for the output sequences. Masses are calculated based
    on the unmodified residues making up the sequences. Both limits are inclusive, so a minimum mass of
    600 Da will permit all sequences of mass 600 Da and above to appear in the output database. Note that you
    must always specify both a lower and an upper mass limit if you choose to use a mass filter!
    <br/>
    <br/>
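    The mass filter can be pictured like this (a sketch assuming monoisotopic residue masses; whether the
    tool uses monoisotopic or average masses is not stated above):
    <br/>

```python
# Monoisotopic residue masses in Da (water is added once per sequence).
MONO = {
    "G": 57.02146, "A": 71.03711, "S": 87.03203, "P": 97.05276,
    "V": 99.06841, "T": 101.04768, "C": 103.00919, "L": 113.08406,
    "I": 113.08406, "N": 114.04293, "D": 115.02694, "Q": 128.05858,
    "K": 128.09496, "E": 129.04259, "M": 131.04049, "H": 137.05891,
    "F": 147.06841, "R": 156.10111, "Y": 163.06333, "W": 186.07931,
}
WATER = 18.01056

def sequence_mass(sequence):
    return sum(MONO[residue] for residue in sequence) + WATER

def mass_filter(sequences, lower, upper):
    # Both limits are inclusive, as described above.
    return [s for s in sequences if lower <= sequence_mass(s) <= upper]
```

    'YSFVATAER' has a monoisotopic mass of about 1042.5 Da, so it passes a 600 to 4000 Da window, while the
    single residue 'R' (about 174.1 Da) does not.
    <br/>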

    <h3>Workflow review</h3>
    At the end of the page you reach the final section, where you can review and submit the workflow which will
    generate your custom database.<br/>
    <br/>
    Initially you will only see two buttons. The 'Clear all' button will delete all input data so you can start
    over from scratch. The 'Generate workflow' button will generate a text-based summary of the workflow which
    will be used to generate the Database on Demand.<br/>
    <br/>
    <tr:image source="../images/manual/dod-workflow-simple-1.png"/><br/>
    <br/>
    Depending on your selected data there might be different summary views.<br/>
    In case the workflow is missing essential information (e.g. the source database to use as input), you will
    only be presented with an ERROR message making you aware of the missing data. In those cases you will have to
    make the required changes to your workflow and re-generate it (which will result in a re-evaluation
    of the input and an updated workflow).<br/>
    In certain other cases the workflow may be possible to execute but may not represent a standard use case
    (e.g. if no enzymatic digest at all is selected, or TrEMBL is used unfiltered as source database); you will
    then be presented with WARNING messages. You can review your selected criteria and choose to update the
    workflow or to continue without changes.<br/>
    In the usual case, however, you will directly be presented with a workflow summarizing the steps that will be
    executed to produce the customised Database on Demand.<br/>
    <br/>
    <tr:image source="../images/manual/dod-workflow-expanded-1.png"/><br/>
    <br/>
    Before you finally submit your workflow, we ask you to provide an email address. We need this information since
    the creation of a custom database will take too long for you to wait for it. But no need to worry, we will not
    send you advertisements or give away any information to other parties. The sole purpose of this measure is to
    be able to contact you once your database is available for download, or to get in touch should we discover
    unexpected problems. So it is in your own interest to provide a working email address that you can check for
    news about your data.<br/>
    In the usual case you will receive one email within minutes confirming your request. It will contain the unique
    ID for your request, which makes it possible for us to identify your job and track down possible problems. A second
    email containing a link to the download address will be sent as soon as your database is ready. This usually
    happens in a matter of hours, but please be patient, since some workflows can take days to complete (e.g. if many
    processing steps are requested, applied filters have not significantly reduced the size of the source database(s),
    or our processing cluster is under heavy load).<br/>
    Should your workflow have failed, we will most likely be aware of the situation and will of course try to solve the
    problem. Should you, however, not get any response, or only failure emails, please do not hesitate to contact us. We
    are always eager to help.<br/>
    <br/>

    <br/>
    <br/>

</tr:panelGroupLayout>

</ui:define>

</ui:composition>

</f:view>


</html>