'.\" t
.TH "clbuildapp" "1M" "Jun 21, 2006" "1\&.2\&.0"
.SH NAME
clbuildapp \- Build or Reconfigure a Clustered Application

.SH SYNOPSIS
.TS
l.
clbuildapp \fB--application\fP \fIapp\fP [\fB--keeplogs\fP]
           Configure or Reconfigure Application

clbuildapp \fB--application\fP \fIapp\fP [\fB--sync\fP]
           Perform Application storage synchronisation

clbuildapp \fB-?\fP
           Show brief usage information
.TE

.SH DESCRIPTION
This utility is used to add new applications to the cluster, or to perform
application reconfigurations if the application is changed in \fBany\fP way.

The utility provides a summary of its work as it proceeds. If a problem occurs,
the detailed log currently being written should be examined.

Only after an application has been configured using this tool will it be considered
a usable application by the cluster daemons. If the cluster is already running,
then following successful use of the \fBclbuildapp(1M)\fP command the application will
be listed in the \fIclstat(1M)\fP summary output.

If the application is already registered with the cluster, but its configuration
has been modified, the routine must be run again - whether or not the
application or cluster is currently running. The tool is typically run
whenever the \fIappconf.xml(5)\fP file is changed, or the storage associated with
the application changes in some way [such as extending a file system]. File system
changes must be performed in a particular way - see the administration guide or the
\fBREBUILDING EXISTING APPLICATIONS\fP section later for more details.

As with the \fIclbuild(1M)\fP utility, this should ideally be run on the 'primary'
server in the cluster - that is, the node defined first in the main cluster
configuration file.

This utility requires that the administrator has built a valid
\fIappconf.xml(5)\fP file - the tool will quickly abort with a suitable
error message if the file does not exist, contains invalid settings, or contains
settings which are not suitable for the cluster configuration or available hardware.

The time taken to run the command depends on the amount of work that must be
undertaken - anywhere from a few seconds to several hours [in the case where
large amounts of storage are being synchronised]. Interrupting the command,
however long it has been running, is strongly discouraged.

.TP 10
.B NOTE [1]
Adding applications to a cluster requires careful planning. You are strongly
recommended to read the \fIappconf.xml(5)\fP page for details of the expected
settings and the documentation available from the web site - \fBwww.linuxha.net\fP.

The web site also contains useful information and even tool sets for clustering
some of the most commonly used applications such as Samba or Apache.

.TP
.B NOTE [2]
As with the \fIclbuild(1M)\fP utility, this utility requires that both nodes
in the cluster are operational. This should not really be considered a
limitation - changing the configuration of a cluster when it is not
operating correctly is probably a bad idea.

.SS Steps undertaken during Application Build
The \fIclbuildapp(1M)\fP utility attempts to make as many checks as possible to ensure
the cluster and operating environment are suitable for the application
being configured. The aim is to minimise the likelihood of the application
failing to work when started on either node.

Naturally, the cluster topology itself must have been validated
before any applications are configured or reconfigured.

Some of the more important checks that are always performed by this tool include:

.TP 4
[1]
The format and contents of the \fIappconf.xml(5)\fP file are valid XML,
the 'globals' section contains all mandatory options,
and those options have sensible settings.
.TP
[2]
The volume groups listed in the configuration file are
available (and, when using LVM2, activated) on both the local and the
remote machine. The logical volumes themselves need not exist locally -
the software will create them if necessary - but it will not create
or extend a volume group; that task is left to the
administrator to perform.
.TP
[3]
If this is the initial configuration, all logical volumes in the configured
volume groups are expected to be mounted, so that Linuxha.net can
retrieve the file system information needed to run the application as
part of the cluster.
.RE
Once all checks are completed successfully a special checksum file is
written, indicating that the 'Build' action can be run for this application. This
checksum file is also copied to the remote node for consistency.
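
As a rough illustration, an administrator might perform similar sanity checks
by hand before invoking the tool. This is only a sketch - the tool performs
its own, far more thorough validation internally, and the use of xmllint here
is an assumption (it may not be installed on every system):

```shell
# Hand-run pre-flight check on an appconf.xml file; illustrative only.
appconf_sanity() {
    conf="$1"
    # The file must exist and be non-empty.
    [ -s "$conf" ] || { echo "missing or empty: $conf" >&2; return 1; }
    # It must at least look like an XML document.
    head -c 512 "$conf" | grep -q '<' || { echo "not XML: $conf" >&2; return 1; }
    # If xmllint is available, check well-formedness properly.
    if command -v xmllint >/dev/null 2>&1; then
        xmllint --noout "$conf" || return 1
    fi
    echo "appconf looks sane"
}
```

A typical invocation would be
"appconf_sanity /etc/cluster/application/appconf.xml".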

.SS Allocation of Application Resources
The purpose of this action is twofold: firstly to assign all necessary
resources for the application on both nodes in the cluster; secondly to
capture details of the file systems used for the volume groups in the cluster.

The build action performs the following resource allocations:

.TP 4
[1]
Allocates a separate network port from the pool available for each file system
defined in the volume group(s) for the application.
.TP
[2]
Allocates a DRBD device number for each file system [the same on both
nodes in the cluster].
.TP
[3]
Writes IP address allocation information to resource files [these are tracked
to allow dynamic changes to this list when performing application reconfigurations].
.RE

For more information on the resources allocated please see the \fBFILES\fP
section below. Once the resources have been allocated the routine will examine
the mounted file systems and record which logical volumes are
mounted (for the application's volume groups), the mount points used,
the file system type each one uses, and the mount options currently
in effect.
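
The kind of information recorded can be sketched as follows; the sample input
mimics /proc/mounts, and the volume group name "vg01" together with the mount
points shown are invented for illustration:

```shell
# Sample /proc/mounts-style data (invented): device, mount point, type, options.
sample='/dev/mapper/vg01-lv_data /data xfs rw,noatime 0 0
/dev/sda1 /boot ext3 rw 0 0
/dev/mapper/vg01-lv_logs /logs reiserfs rw 0 0'

# For each mounted logical volume of the given VG, emit the details
# clbuildapp would record: device, mount point, fs type and options.
record_vg_mounts() {
    vg="$1"
    while read -r dev mnt fstype opts _; do
        case "$dev" in
            /dev/mapper/"$vg"-*) echo "$dev $mnt $fstype $opts" ;;
        esac
    done
}

printf '%s\n' "$sample" | record_vg_mounts vg01
```

Only the two vg01 volumes are reported; /boot belongs to no application
volume group and is ignored.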

The type of file system used should be one of the following, to
reduce recovery times in the event of a crash. All of those below have
been used successfully with Linuxha.net - others may work, but are not
considered 'supported'.

.TP 6
* XFS
XFS has been designed to support large file systems with high levels of I/O.
The file system is fully journalled and relies on B-trees to handle large
files and directories efficiently.

This file system can handle file system sizes of up to 9 Exabytes - far larger
than Linuxha.net supports - which is currently limited to 2 Terabytes per
file system.

For more information see the SGI project home page:
\fBhttp://oss.sgi.com/projects/xfs\fP.
.TP
* ReiserFS (Version 3)
For larger file systems (100MB or more), this is the recommended file system
to use. Currently the stable version is 'reiser3' - this does
support use as a fully logged file system (not just metadata), though that
mode has not yet been tested. For more information please see \fBhttp://www.namesys.com\fP.
.TP
* JFS
This file system appears stable and is suitable for most purposes. It supports
very small file systems well (as small as 20MB). Again the metadata is
journalled to support faster recovery times.
For more information please see \fBhttp://www-124.ibm.com/jfs\fP.
.TP
* Ext3
A journalled version of Ext2, this is probably the most popular file system
and offers good performance provided directories do not grow too large.

One problem with choosing Ext3, however, is that support for on-line
file system growth is not widespread. This can be a major problem given
the high-availability requirements of many installations. Please ensure
the distribution supports on-line growth for this file system before
making use of it.

.RE
The mount options currently in use will be taken into account and
used when the file system is mounted as part of the cluster. Please note that
the availability of the file system types is not currently checked
on the other node, though this will be added in future releases.

Although it is not mandatory, having different OS revisions or
distributions on the two servers is strongly discouraged - for reasons of
consistency.

If you need help choosing between these file systems, the
recommendation is to use reiser3 if available - it is fast, stable, works
well with LVM and supports on-line file system expansion.

.SS Validation of Volume Groups
Once the details of the mounted file systems have been recorded all volume
groups that are to be considered as part of the application are checked.

It expects that all the volume groups defined to be part of the application
are present on both nodes. It attempts to ensure that the logical volumes
defined in the volume groups on the local host are present on the other node
in the cluster. 

If these logical volumes are not available they will be created automatically
as part of this process. This only ensures that they have the correct name
and size - if the placement of the volumes must meet particular performance
requirements, the administrator should create them manually,
at the correct size, before running the \fIclbuildapp(1M)\fP tool against
the application in question.

Linuxha.net works with Linux versions 2.4 and 2.6 - hence it will automatically
detect and work with the differences between the LVM version 1 and LVM version 2
volume managers. Commercial volume managers - such as
those produced by EMC (PowerPath) or Symantec (Storage Foundation Suite - VxVM v4)
- are not currently supported, since the author does not have access to the
software or licenses.

It should be noted that, since Linuxha.net attempts to allow almost any
environment to be used, it will work fully even when different versions of
LVM are used on the two nodes [such as might occur whilst a rolling
Operating System upgrade is being performed].

.SS Storage Synchronisation
If this is a new application definition then typically the "--sync" option
would have been specified - since the application data needs to be copied from
the local machine to the other machine in the cluster. 

Although this step is not strictly necessary before starting the application 
in the cluster for the first time it is strongly recommended. 

To ensure no data is corrupted the following checks are performed prior to
performing the data synchronisation.

.TP 4
[1]
Ensure that all file systems defined for the application are un-mounted,
attempting to un-mount any that are not. Failure here will abort the
synchronisation - if a file system remains mounted at this point,
synchronisation is not possible.
.TP
[2]
Synchronise the contents of all file systems that are used as part of this
application. This uses the contents of the local storage and OVERWRITES THE
CONTENTS OF THE LOGICAL VOLUMES ON THE REMOTE SERVER.
.RE

Of course this step follows all the other checks, making it very
unlikely that the nodes in question are in a state where this option
might harm data in any way.
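
The un-mount pass in step [1] can be sketched as follows. This is a minimal
illustration with a simulated mount table - real code would consult
/proc/mounts and call umount(8) - and the mount point names are invented:

```shell
# Simulated set of currently mounted application file systems.
MOUNTED="/data /logs"

# Attempt to un-mount every configured file system; abort if any remains
# mounted, since synchronisation would otherwise corrupt in-use data.
unmount_all() {
    for mnt in "$@"; do
        # attempt the un-mount if currently mounted (always succeeds here)
        case " $MOUNTED " in
        *" $mnt "*) MOUNTED=${MOUNTED/$mnt/} ;;
        esac
        # abort if it is somehow still mounted
        case " $MOUNTED " in
        *" $mnt "*)
            echo "abort: $mnt still mounted" >&2
            return 1
            ;;
        esac
    done
    echo "all application file systems un-mounted"
}

unmount_all /data /logs
```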

The time taken to complete the synchronisation obviously depends on the 
amount of data to synchronise, the disk and network performance - and also
the configured rate of synchronisation for the application. If this rate
is too slow it can be temporarily changed using the \fIdrbd_tool(1M)\fP
utility.

Of course setting the value too high might impede other running applications
so some care must be taken when setting a value. It should also be remembered
that this is the rate per file system - so if many file systems are being
synchronised this number should be set accordingly.

If the "--sync" option is used on an interactive terminal (as is usually
the case), it provides feedback by showing the amount of storage 
currently requiring synchronisation as a decreasing number of megabytes, 
and a percentage complete.

.SH REBUILDING EXISTING APPLICATIONS
If an application has already been defined but needs to be changed in any
way then the \fIclbuildapp(1M)\fP utility will need to be called again. In
all cases this command can be called whilst the cluster - and indeed the
application in the cluster - is up and running.

The utility should be run following any changes to the \fIappconf.xml(5)\fP 
file for the application - or when the logical volumes for the application
change in any way.

However in this scenario care must be taken, depending on whether the
application is currently running or not. Unlike most clustering software
products, such changes are recommended \fBwhilst the application is running\fP.
Off-line changes are supported, but are more complex and prone to error
conditions.

To rebuild existing applications off-line (i.e. when they are not currently 
running), see the detailed documentation in the "Administrator's Guide".

The following points should be taken into account when wishing to
reconfigure an existing application:

.TP 4
.B *
A rebuild is required if any changes are made to the 'appconf.xml'
file for an application, or file systems need to be added, removed or
even re-sized.
.TP
.B *
Failure to perform a rebuild will result in an invalid configuration
for the application, and many subsequent commands that attempt to
utilise that configuration will fail - such as "clstat -A application"
for example.
.TP
.B *
Rebuilding an application on-line has no impact on the availability of
that application - the existing or remaining resources can continue
to be used.
.TP
.B *
When resizing an existing logical volume/file system used by the
application, the administrator should resize only the underlying logical
volume on the machine where the application is currently running -
doing so on the other node is optional, as it will be resized
automatically if necessary.

Do not attempt to resize the file system manually! Simply run the 
\fIclbuildapp(1M)\fP command at this point to perform all necessary
work. This will ensure the DRBD device is informed about the resize,
the new storage synchronised and the new size of the logical volume
recorded by Linuxha.net.
.TP
.B *
If you wish to remove an existing file system from the application,
ensure that the steps are performed whilst the application is
running. On that node simply un-mount the file system before running
\fIclbuildapp(1M)\fP again. Once it has completed, the underlying
logical volumes on each node can be removed by the administrator.
.TP
.B *
If you wish to add a new file system to an existing application
then please ensure the application is up and running on a node.
On that particular node create the logical volume in one of the
volume groups defined for the application and mount it at the required
mount point.

Once the \fIclbuildapp(1M)\fP tool has completed the file system
will have been remounted at the same location but instead on an 
allocated DRBD device rather than directly on the logical volume.
In such cases the contents of the file system will synchronise to
the other node in the background. The file system is perfectly 
usable whilst this occurs.
.RE
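
The on-line resize flow described above amounts to just two commands. The
sketch below is illustrative rather than the tool's actual implementation:
"app1", "vg01" and "lv_data" are invented names, and the DRY_RUN helper
prints the commands instead of executing them so the sequence can be shown
safely:

```shell
# Print commands rather than running them; set DRY_RUN=0 on a real cluster.
DRY_RUN=1
run() { if [ "${DRY_RUN:-0}" = "1" ]; then echo "$*"; else "$@"; fi; }

resize_app_fs() {
    app=$1 lv=$2 growth=$3
    # Grow only the underlying logical volume on the node where the
    # application is running -- never resize the file system by hand.
    run lvextend -L "+$growth" "$lv"
    # clbuildapp informs DRBD of the resize, synchronises the new extent
    # and records the new size of the logical volume.
    run clbuildapp --application "$app"
}

resize_app_fs app1 /dev/vg01/lv_data 2G
```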

For examples of typical rebuild processes, (such as adding new filesystems,
or changing the application configuration), see the "Administrator's Guide",
available for download from the linuxha.net web site.

.SH ARGUMENTS
All arguments [apart from the application specification] are optional - 
and if none are specified the default is to perform a standard application 
build or rebuild [depending on whether the application has been previously 
defined or not].

All supported arguments can be shortened if required as long as the string 
remains unique - for example '--app' could be entered rather than '--application'. 
The arguments are case sensitive - single character arguments should be in
upper case whilst multi-character ones should be in lower case.

.TP 8
.B -A,--application
Specifies the name of the application that should be built or rebuilt.
.TP
.B --sync
Synchronise the contents of the file systems used for the application. The 
data from the local node overwrites the contents of the logical volumes
on the remote node.
.TP
.B --keeplogs
The detailed logs generated by the tool as it proceeds are usually
deleted unless a fatal error occurs. If you wish to keep them for any
reason this option can be specified. The files will be kept in the
following directory, which cannot yet be changed:

/var/log/cluster/build

The filenames generated can be found in the \fBFILES\fP section at the
end of this manual page.
.TP
.B --forcesync
When this option is chosen it will \fBforce\fP the cluster to overwrite
all data file systems belonging to the application on the remote node. This
should only be necessary in exceptional circumstances - the most obvious being
when a file system on either node has been mounted natively rather than via
the DRBD devices.
.TP
.B --nobackup
By default this tool always creates a backup on both nodes in the cluster
of the existing configuration. This is useful for restoring the configuration
in the event of an unforeseen problem that somehow corrupts the cluster
state [such as a node failing whilst an application rebuild is in progress].

Any such file is compressed and typically only occupies a few Kilobytes - hence
there is rarely a good reason to use this argument, unless performing a large
number of changes in a short period of time.

The file names generated are a timestamp of the current time in the following
format:

YYYYMMDDhhmmss.tgz

The same filename is used on both nodes. As the name suggests the file is
a compressed tar archive.
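
Because the name is simply the current time, an equivalent name can be
reproduced with date(1); a small sketch:

```shell
# Generate a backup file name in the documented YYYYMMDDhhmmss.tgz format.
backup_name() { echo "$(date +%Y%m%d%H%M%S).tgz"; }

backup_name
```

Since the names sort lexically by time, the most recent backup in the backup
directory is simply the last entry of a sorted listing, e.g.
"ls /clbackup/*.tgz | sort | tail -1".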
.TP
.B --backupdir
By default the backups of the cluster configuration are written to the
following directory:

/clbackup

This parameter can be used to override this setting and save the backups
elsewhere. The directory in question is expected to exist on both nodes
in the cluster.
.TP
.B --fsmap
This flag only takes effect when performing an off-line rebuild of an application.
By default the file systems currently mounted are not used to determine the
file system list for the application. If you wish to change that list during
an off-line build, all required file systems must be mounted locally, and the
"--fsmap" option used on the build stage to force the mounted list
to be used.

If this is confusing the best approach is to always perform any changes
whilst the specified application is up and running - even if the application 
is started with the \fB--fsonly\fP option of \fIclstartapp(1M)\fP.
.TP
.B -C,--config
This is used to specify the name of an alternative cluster configuration 
file. This is only really used for development and testing and typically
can be ignored.
.TP
.B --nolocking
By default any changes to the configuration will arbitrate for access
to the resources via the local cluster resource lock tool. This ensures 
that no clashes occur even if two applications are configured at the 
same time, or an application changes whilst another one is stopping or
starting.

By acquiring locks the process may take a few seconds longer to execute.
If you only ever run a single application, or want maximum speed at some
risk, this option can be specified.

It should be noted that the resource locking daemon is optional - if it
is not running the commands will continue even without specification of
this argument - though of course a warning will be issued.
.TP
.B --commands
This is mainly useful for developers attempting to debug problems.
It writes to the relevant log files each external command that the
tool executes.

.SH EXIT STATUS
The \fIclbuildapp(1M)\fP utility makes use of many error codes, but in summary
it will return a non-zero number for an error or zero if the configuration
has been processed successfully.

For many of the more obscure error conditions several lines of information
are provided to help the administrator resolve the problem as quickly
as possible.

.SH FILES 
.SS Log Files
All log files mentioned below can be found in the directory 
\fB/var/log/cluster/build\fP. In the descriptions below \fBAPP\fP is the 
name of the application being built, whilst \fBTIME\fP is the timestamp given 
to this build.

.TP 10
APP-check-TIME.log
Contains the basic application and cluster environment checks.
.TP
APP-envcheck-TIME.log
Detailed checking of the application and cluster environment,
verifying that the configuration file options specified lie within
expected parameters and that all mandatory options are present.
.TP
APP-lvm-TIME.log
Contains details of all checks and allocations that are 
Logical Volume Manager related.
.TP
APP-build-TIME.log
Contains details of all application resource allocation - such 
as assignment [or removal] of devices, ports and IP addresses.
.TP
APP-syncdata-TIME.log
Contains the detailed steps that lead to the application data
synchronisation taking place.

.SS Configuration Files
The configuration of a cluster application does not write to a log file;
instead it sends any messages to standard output if the '--verbose' flag
is specified. However, it does of course update resource settings if
necessary (those resources are shown below).

.TP 10
/etc/cluster/application/appconf.xml
This file is expected to contain the complete configuration of the application
on the local machine. A copy of this file (and associated checksums) is
sent to the remote machine as each action mentioned above is completed.
.TP
/etc/cluster/.resources/ports
The directory is created and contains details of which network ports are in
use or available. A new port for each file system is allocated if
one is not already defined for it. The format of an entry in this directory
is as follows:

NNNN.appname.vgname.lvname

This indicates that port "NNNN" has been allocated to the application
"appname", for the logical volume "lvname" in the volume group "vgname".
.TP
/etc/cluster/.resources/nbd
This directory contains files which are used to handle Enbd device allocation
for each of the file systems managed in the cluster. A separate entry is
required for each file system that the application uses. Again the
format of an allocated device file is as follows:

NN.appname.vgname.lvname

The "appname", "vgname" and "lvname" are as previously described. Please note
that the "NN" is a number which is converted to the /dev/nbd* entry by
the software, for example "0" is /dev/ndaa, whilst "27" is "/dev/ndba".

.SH SEE ALSO
.TS
l l.
clbuild(1M)	- Build / Synchronise cluster application 
clstat(1M)	- Show cluster status information
cldeamon(1M)	- Cluster status Daemon
clstartapp(1M)	- Start a clustered application
clhaltapp(1M)	- Halt a clustered application
clconf.xml(5)	- Overall cluster topology configuration file
appconf.xml(5)	- Configuration of an application used by the cluster
.TE

.SH AUTHOR
The \fIclbuildapp(1M)\fP utility was written by Simon Edwards, 2003-2005.
The author can be contacted via the website mentioned below.

.SH AVAILABILITY
This software is freely available from the Linuxha website - please see
\fBhttp://www.linuxha.net\fP for more details.

.SH WARRANTY
This is Open Source Software, released under the GNU GPL. It is free to use and
distribute but \fIcomes with no warranty whatsoever\fP. For more information
on the license please see \fBwww.gnu.org/copyleft/gpl.html\fP.

