.TH lha_stat 1 "11 February 2009" "TrueCL Commands"

.SH NAME
lha_stat \- Check current status of Cluster

.SH SYNOPSIS
.TS
l l l.
lha_stat	[\fB--summary\fP]	- Show Cluster summary status

lha_stat	\fB--daemons\fP	- Show Cluster daemon status

lha_stat	\fB--application\fP \fIA\fP	- Show Application status

lha_stat	\fB--diskhb\fP	- Show disk heartbeat status

lha_stat	\fB--monitors\fP	- Show all monitors

lha_stat	\fB--monitors\fP \fB--node\fP \fIN\fP	- Show monitors on node

lha_stat	\fB--monitors\fP \fB-A\fP \fIA\fP	- Show monitors for application

lha_stat	\fB--topology\fP	- Show network topologies status

lha_stat	\fB--app-routes\fP	- Show all application routes

lha_stat	\fB--app-routes\fP \fB--node\fP \fIN\fP	- Show routes checks on node

lha_stat	\fB--app-routes\fP \fB-A\fP \fIA\fP	- Show route checks for application
.TE

.SH DESCRIPTION
The \fIlha_stat(1)\fP command is used to check the current status of the 
running cluster. It can be used on any node defined to be part of the cluster - 
that node does not need to be joined to the active cluster to run this command.

.SH ARGUMENTS
Each of the above usages is separate, though several can be combined where necessary to
produce different output from a single command. There are also some other arguments,
common to all the above usage examples, that can be used. These are:

.TP 4
--xml
The standard form of output generated by \fIlha_stat(1)\fP is in a tabular, 
human-readable format. If this option is used the output is instead shown as XML, which
may be easier to parse.
.TP
--timeout
The amount of time to wait for certain responses to requests that are 
necessary to determine the status of the cluster or applications. If not specified
this defaults to 2 seconds.
.TP
--debug
Run the command in 'debug' mode - this might produce significant levels 
of output to the standard output device, most of which is only useful for
developers.
.TP
--verbose
Verbose mode generates some additional output to standard output to 
show the progress of any query operations taking place. This is the recommended flag if
the administrator wishes to see any output apart from the status information requested.
.TP
--quiet
This will only produce errors and warnings on the standard output device.
.TP
--silent
Only produce output if a fatal error occurs while querying the
cluster to determine its current status.

.RE
The remaining arguments define the output generated. Each will be described with examples
where possible, both for the normal mode and when \fB--xml\fP is supplied.

.TP 4
--summary
This is the default output generated if no other status option is specified. It
provides a summary of whether the cluster is active, which nodes are part of the
cluster and the status of all applications. Hence a typical example output might
look like the following:

.TS
l.
cluster: cl1 - UP
nodes:   4 [1 DOWN/3 UP]

Node          Status  Apps
myhost1       UP      1
hostname2     UP      1
hostname3     DOWN    0
hostname4     UP      1

Appname        Status   Node         F/O   Notes
lsf_hostname4  UP       hostname4    1     Failover unavailable
metasvr        UP       myhost1      2     Failover disabled
share2         UP       hostname2    4     Failover disabled
share3         DOWN     N/A          N/A   
test           DOWN     N/A          N/A 
.TE

The information generated should be self-explanatory. The only column that is 
perhaps not obvious is the 'F/O' column in the applications section. This indicates
the number of nodes, including the current node, suitable for running that 
application. The 'Notes' column indicates any status settings or points that
are considered important for the application.

When generating XML an example output might look like the following:

.TS
l.
<?xml version="1.0" standalone="yes"?>
<lha_stat-summary>
  <cluster name="cl1" status="UP"/>
   <nodes>
       <node name="myhost1" state="UP"/>
       <node name="hostname2" state="UP"/>
       <node name="hostname3" state="DOWN"/>
       <node name="hostname4" state="UP"/>
   </nodes>
   <applications>
       <application name="lsf_hostname4" state="UP" node="hostname4" 
         failover="0" notes="Failover unavailable"/>
       <application name="metasvr" state="UP" node="myhost1"
         failover="1" notes="Failover disabled"/>
       <application name="share2" state="UP" node="hostname2"
         failover="3" notes="Failover disabled"/>
       <application name="share3" state="DOWN" node="N/A" failover="N/A" notes=""/>
       <application name="test" state="DOWN" node="N/A" failover="N/A" notes=""/>
   </applications>
</lha_stat-summary>
.TE
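As an illustration of parsing this format, the following sketch [not part of TrueCL,
and assuming the XML has already been captured, e.g. with 'lha_stat --summary --xml']
uses Python's standard xml.etree module to pick out node and application states:

```python
import xml.etree.ElementTree as ET

# Example 'lha_stat --summary --xml' output, embedded as a string for
# illustration; in practice it would be read from the command's stdout.
SUMMARY_XML = """<?xml version="1.0" standalone="yes"?>
<lha_stat-summary>
  <cluster name="cl1" status="UP"/>
  <nodes>
    <node name="myhost1" state="UP"/>
    <node name="hostname3" state="DOWN"/>
  </nodes>
  <applications>
    <application name="metasvr" state="UP" node="myhost1"
      failover="1" notes="Failover disabled"/>
    <application name="test" state="DOWN" node="N/A" failover="N/A" notes=""/>
  </applications>
</lha_stat-summary>"""

root = ET.fromstring(SUMMARY_XML)
cluster = root.find("cluster")
# Collect node names by state and any applications that are down
up_nodes = [n.get("name") for n in root.iter("node") if n.get("state") == "UP"]
down_apps = [a.get("name") for a in root.iter("application")
             if a.get("state") == "DOWN"]

print(f"cluster {cluster.get('name')} is {cluster.get('status')}")
print("nodes up:", ", ".join(up_nodes))
print("applications down:", ", ".join(down_apps))
```

The same approach applies to the other XML output forms described below, since each
wraps one element per table row with attributes matching the column names.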

.TP
--daemons
This option indicates which daemons are currently active on the nodes
that are defined as part of the cluster. Any node that is not active in the
cluster will only show the Request daemon as up, as shown in this example output -
again presented in a tabular style:

.TS
l.
Nodename      Req   Cluster  Net   Sync  Lock  HB    Stat  Mon 
myhost1       UP    UP       UP    UP    UP    UP    UP    UP  
hostname2     UP    UP       UP    UP    UP    UP    UP    UP  
hostname3     UP    DOWN     DOWN  DOWN  DOWN  DOWN  DOWN  DOWN
hostname4     UP    UP       UP    UP    UP    UP    UP    UP  
.TE

The above columns represent the following daemons in order:

.TS
l l.
* Req	Request Daemon
* Cluster	Cluster Daemon
* Net	Network Daemon
* Sync	Sync Daemon
* Lock	Locking Daemon
* HB	Heartbeat Daemon
* Stat	Status Daemon
* Mon	Monitor Daemon
.TE

Although not all daemons are critical [depending on the exact configuration
of the cluster], typically they should all be running on all nodes.
Again the XML output format is very straightforward to parse:

.TS
l.
<?xml version="1.0" standalone="yes"?>
<lha_stat-daemons>
    <daemons node="myhost1" req="1"  cluster="1" net="1" sync="1" 
       lock="1" hb="1" stat="1" mon="1"/>
    <daemons node="hostname2" req="1"  cluster="1" net="1" sync="1" 
       lock="1" hb="1" stat="1" mon="1"/>
    <daemons node="hostname3" req="1"  cluster="0" net="0" sync="0" 
       lock="0" hb="0" stat="0" mon="0"/>
    <daemons node="hostname4" req="1"  cluster="1" net="1" sync="1" 
       lock="1" hb="1" stat="1" mon="1"/>
</lha_stat-daemons>
.TE
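Since each daemon attribute is simply "1" for up and "0" for down, a script can
flag any node with daemons missing. This is a minimal sketch [not part of TrueCL,
with the example output embedded as a string rather than read from the command]:

```python
import xml.etree.ElementTree as ET

# Example 'lha_stat --daemons --xml' output, embedded for illustration.
DAEMONS_XML = """<?xml version="1.0" standalone="yes"?>
<lha_stat-daemons>
    <daemons node="myhost1" req="1" cluster="1" net="1" sync="1"
       lock="1" hb="1" stat="1" mon="1"/>
    <daemons node="hostname3" req="1" cluster="0" net="0" sync="0"
       lock="0" hb="0" stat="0" mon="0"/>
</lha_stat-daemons>"""

root = ET.fromstring(DAEMONS_XML)
problems = {}
for entry in root.iter("daemons"):
    # every attribute other than 'node' is a daemon flag: "1" up, "0" down
    down = [name for name, val in entry.attrib.items()
            if name != "node" and val == "0"]
    if down:
        problems[entry.get("node")] = down

for node, down in problems.items():
    print(f"{node}: daemons not running: {', '.join(down)}")
```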

.TP
--application
This shows summary information for the specified application. The output generated
includes information on whether and where the application is currently running,
the status of all configured file systems, and also those of any configured IP
addresses. Hence a typical example might look like:

.TS
l.
Application  Status   Node     Storage  Validated  Valid Nodes
metasvr      RUNNING  myhost1  SHARED   Y          myhost1,hostname4

VG/LV        Type   Mount Point             Size      Status
dg190/vol01  vxfs   /data/ds_mountpoint1    20971520  Active
dg190/vol02  vxfs   /tmpmnt/path/to/mnt2    1048576   Active

Topology  IP Address      Interface
prod      192.168.100.40  lan900:1 
.TE

The summary information for the application includes whether and
where the application is running, which nodes are suitable for the application,
whether the current configuration for the application has been validated, and the
storage mechanism defined for the application.

The next section describes the storage associated with the application. It indicates
the volume or disk group, the volume name, the type of file system, the mount point,
volume size [in KB], and the status. In the above example from an HP-UX host the 
file system type is 'vxfs' - on other platforms entries such as 'ext3', 'ufs' or 'jfs'
might appear instead.

.RS 4
.TP 6
Note:
If an application is defined without storage this section will not 
be present in the output.
.RE
.RS 4

The final section shows IP addresses that are configured as part of the 
application and to which logical interfaces they are currently defined to.
Again, the XML output is straightforward to understand:

.TS
l.
<?xml version="1.0" standalone="yes"?>
<lha_stat-application>
    <status name="metasvr" status="RUNNING" node="myhost1"
            storage="SHARED" validated="Y" 
            validnodes="myhost1,hostname4"/>
    <fsinfo>
        <fs vg="dg190" lv="vol01" mnt="/data/ds_mountpoint1"
          type="vxfs" size="20971520" status="Active"/>
        <fs vg="dg190" lv="vol02" mnt="/tmpmnt/path/to/mnt2"
          type="vxfs" size="1048576" status="Active"/>
    </fsinfo>
    <ipinfo>
        <ip addr="192.168.100.40" topology="prod"
         interface="lan900:1"/>
    </ipinfo>
</lha_stat-application>
.TE
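A script checking application health might walk the nested 'fsinfo' elements to
spot file systems that are not active. The following is a sketch only [not part of
TrueCL; the 'Inactive' status value for the second file system is assumed here for
illustration - the example above shows only 'Active' entries]:

```python
import xml.etree.ElementTree as ET

# Example 'lha_stat --application metasvr --xml' output, embedded for
# illustration. The second file system's 'Inactive' status is a hypothetical
# value used to exercise the check below.
APP_XML = """<lha_stat-application>
    <status name="metasvr" status="RUNNING" node="myhost1"
            storage="SHARED" validated="Y"
            validnodes="myhost1,hostname4"/>
    <fsinfo>
        <fs vg="dg190" lv="vol01" mnt="/data/ds_mountpoint1"
          type="vxfs" size="20971520" status="Active"/>
        <fs vg="dg190" lv="vol02" mnt="/tmpmnt/path/to/mnt2"
          type="vxfs" size="1048576" status="Inactive"/>
    </fsinfo>
</lha_stat-application>"""

root = ET.fromstring(APP_XML)
status = root.find("status")
# Any configured file system whose status is not 'Active' is worth flagging
inactive = [fs.get("mnt") for fs in root.iter("fs")
            if fs.get("status") != "Active"]

print(f"{status.get('name')} is {status.get('status')} on {status.get('node')}")
for mnt in inactive:
    print(f"warning: file system {mnt} is not active")
```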

.TP
--diskhb
This shows summary information regarding any disk heartbeats that
are configured. This is at a very high level, with very little output, for
example:

.TS
l.
Name  Status     Notes
hb1   Active     
.TE

The XML format is similarly limited:

.TS
l.
<?xml version="1.0" standalone="yes"?>
<lha_stat-diskhb>
    <diskhb name="hb1" status="1" notes=""/>
</lha_stat-diskhb>
.TE

If you need to see more information on heartbeat configurations please use the
command:

.TS
l.
lha_build --diskhb --info
.TE

.TP
--monitors --all
This generates details of all monitors. Monitors are additional checks specified
by the administrator that are used to check the status of a particular node or a
particular application.

In tabular mode it will generate output with the following columns:

.RS 5
.TP 6
Node
The name of the node the monitor is running on.
.TP
Type
The type of the monitor in question - will be 'A' for an application monitor,
or 'N' for a node monitor. Application monitors stop and start on nodes in
conjunction with the application they are defined for.
.TP
Appl
The name of the application this monitor refers to. If not applicable [for
node monitors] this will be set to '-'.
.TP
Name
The name of the monitor - this is used as a handle to stop/start and delete
the monitor when required.
.TP
Montype
The monitor type - various monitors are available and more can be added
by the administrator if necessary. See the Administrators Guide for
more details.
.TP
Status
Will be set to 'Running' if the monitor is loaded and active. It will be set 
to 'Stopped' if the monitor is loaded, but paused from monitoring.
.TP
Fails
A count of the number of times the monitor has surpassed a failure threshold
since the last re-arm of the counter.
.TP
Last Fail
A time in hours, minutes and seconds since the last failure occurred. If no
failures have occurred or the monitor has been re-armed, this will be set to 
'00:00:00'.
.TP
Rearm
The time before the next re-arm of the monitor takes place. If a re-arm is not
necessary [since the monitor has not failed since it started or the last re-arm
time], it will be set to '00:00:00'. As with the 'Last Fail' column this is a
time given in hours, minutes and seconds.
.RE
.RS 4

The XML output option is, as usual, very easy to understand - an example being:

.TS
l.
<?xml version="1.0" standalone="yes"?>
<lha_stat-monitors>
    <line node="hostname2" type="A" appl="test" name="testapp"
          montype="process" status="Running" fails="0"
          lastfail="00:00:00" rearm="00:00:00"/>
    <line node="hostname4" type="N" appl="-" name="swap"
          montype="swap_space" status="Stopped" fails="0"
          lastfail="00:00:00" rearm="00:00:00"/>

</lha_stat-monitors>
.TE

The names of the attributes for each element follow the names of the columns
previously described.

.TP
--monitors --node node
This generates the same form of output as described in the '--monitors --all' section
but only for the monitors currently running on the specified node, irrespective of 
the application they are for.

.TP
--monitors -A app
This generates the same form of output as described in the '--monitors --all' section
but only for the specified application - irrespective of which node the 
application is currently running on.

.TP
--topology
This shows the status of each defined network topology for those nodes that are
currently part of the formed cluster. Remember that not all nodes need to define 
the same topologies [though often of course they do]. As usual the default format
of the topology output is a form of table, one for each topology:

.TS
l.
Topology backup

Node       Status   Active   Card    Last Fail  Failures  Reset
myhost1    Alive    Yes      lan0    N/A        N/A       N/A  
myhost2    Alive    Paused   lan900  N/A        N/A       N/A  
.TE

The columns in the table above should be fairly obvious:

.RS 5
.TP 6
Node
The name of the node for this particular entry.
.TP
Status
Whether the topology is 'Alive' [has a working physical network link], or 'Dead' [no connectivity].
.TP
Active
Indicates whether monitoring on the interface is taking place [set to 'Yes' if so], or has been
stopped [indicated by the word 'Paused'].
.TP
Card
Indicates the name of the logical card currently in use to represent this topology. 
.TP
Last Fail
How long ago [in hours, minutes and seconds] the last network card failure was detected.
If no failure has occurred this will be indicated by a value of 'N/A'.
.TP
Failures
The number of failures of this topology since the last reset - set to 'N/A' if no failures
have occurred.
.TP
Reset
If one or more failures have occurred this will show the time [in hours, minutes and seconds] 
before the failure count and last-fail time settings are reset.
.RE
.RS 4

It should be noted that some network links are not monitored - though these are typically
those making use of aggregation to provide protection anyway. The XML format is very
straightforward as usual:

.TS
l.
<?xml version="1.0" standalone="yes"?>
<lha_stat-topology>
  <topology name="backup">
    <node name="node1" status="1" active="1" card="lan0" lfail="0"
     failures="0" reset="0"/>
    <node name="node2" status="1" active="0" card="lan0" lfail="0" 
     failures="0" reset="0"/>
  </topology>
  <topology name="prod">
    <node name="node1" status="1" active="1" card="lan1" lfail="0" 
     failures="0" reset="0"/>
    <node name="node2" status="1" active="1" card="lan0" lfail="0" 
     failures="0" reset="0"/>
  </topology>
</lha_stat-topology>
.TE

.TP
--app-routes
This option shows information on application routes. Application routes are IP 
level checks that are related to a particular application. They are optional
and run on the node that an application currently runs on. All three usage
types generate the same output format, which in tabular form will look as 
follows:

.TS
l.
Appl  Route         Status  Active  Last Fail  Failures  Reset   
test  a_long_label  Alive   Yes     N/A        0         N/A     
test  fsds          Failed  No      00:38:05   0         N/A     
test  new_route     Alive   Yes     00:37:34   1         00:22:25
.TE

The columns should be fairly obvious, but in summary are:

.RS 5
.TP 6
Application
The name of the application that this entry refers to. An application can have any
number of application route checks. Each application has a dedicated process
for its own route handling, so each series of routes acts independently.
.TP
Route
The label associated with the entry. This should be a short string describing the
particular IP address being checked.
.TP
Status
Will be set to 'Alive' if the IP address has not failed a certain number of times
within a given period, or 'Failed' if IP ping requests have failed more
frequently.
.TP
Active
Will be set to 'Yes' if checking of the route is currently being performed. Alternative
values include 'Paused' if this particular route has been paused, or 'No' if this 
route has failed. 
.TP
Last Fail
How long ago the last failure for this check occurred. If no failures have occurred since
the last reset period this will be 'N/A'; otherwise it will be a time in hours, minutes and
seconds.
.TP
Failures
A count of the number of failures that have occurred for this check since the 
last reset.
.TP
Reset
A time in hours, minutes and seconds until the number of failures for this check is
reset to 0. If there are no current failures this will be '00:00:00'.
.RE

The XML output format is just as straightforward, for example:

.TS
l.
<?xml version="1.0" standalone="yes"?>
<lha_stat-app-routes>
  <route app="test" route="a_long_label" status="1" 
   active="1" lfail="0" fails="0" reset="0"/>
  <route app="test" route="fsds" status="0" 
   active="0" lfail="1234534738.22695" fails="0" reset="0"/>
  <route app="test" route="new_route" status="1" 
   active="1" lfail="1234540808.20259" fails="6" reset="1234544408.20259"/>
</lha_stat-app-routes>
.TE
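Note that in the XML form the 'lfail' and 'reset' attributes appear to carry raw
timestamp values rather than the elapsed hours-minutes-seconds of the tabular form.
A script can nonetheless pick out failed route checks from the 'status' flag alone,
as in this sketch [not part of TrueCL; the example output is embedded as a string]:

```python
import xml.etree.ElementTree as ET

# Example 'lha_stat --app-routes --xml' output, embedded for illustration.
ROUTES_XML = """<?xml version="1.0" standalone="yes"?>
<lha_stat-app-routes>
  <route app="test" route="a_long_label" status="1"
   active="1" lfail="0" fails="0" reset="0"/>
  <route app="test" route="fsds" status="0"
   active="0" lfail="1234534738.22695" fails="0" reset="0"/>
  <route app="test" route="new_route" status="1"
   active="1" lfail="1234540808.20259" fails="6" reset="1234544408.20259"/>
</lha_stat-app-routes>"""

root = ET.fromstring(ROUTES_XML)
# status="1" means 'Alive', status="0" means 'Failed'
failed = [(r.get("app"), r.get("route")) for r in root.iter("route")
          if r.get("status") == "0"]
for app, route in failed:
    print(f"route check '{route}' for application '{app}' has FAILED")
```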

.SH OUTPUT
All output, whether in XML or human-readable format, is sent to the standard
output device and can be redirected into a file or piped into
a script if required.

.SH EXIT CODES
If the information is extracted and returned as expected then a return 
code of '0' [success] will be given. If the operation
fails completely a return code of '1' will be given.

.SH FILES
Not applicable. All information is taken from the cluster configuration and status
databases present on each node in the cluster, or queried from the status daemon
on each node as necessary.

.SH NOTES
This command is typically used alongside \fIlha_query(1)\fP to get an
overview of the cluster configuration as well as the current cluster status.

.SH AUTHOR
The TrueCL software was written by Simon Edwards, (C) 2006-2009, working
for Advantsys Computer Services Ltd - www.advantsys.co.uk.

.SH SEE ALSO
.BR lha_query(1).

.SH AVAILABILITY
This utility was specifically written under the GNU GPL license and as required
by such software comes with \fIno warranty or guarantee of any kind\fP. For
more information, please see the following page: truecl.advantsys.co.uk.

