% User guide for vlan

\documentclass[12pt, report, oneside]{memoir}
\usepackage[pdftex]{graphicx}
\graphicspath{{../thesis/images/}, {./images/}}
\usepackage{color}
\usepackage{url}
\usepackage[caption=false]{caption}
\usepackage[font=footnotesize]{subfig}
\usepackage{fancyvrb}
\usepackage{rotating}
\definecolor{myred}{rgb}{.647,.129,.149}

\newcommand{\vlan}{Vlan}
\newcommand{\vmap}{Vmap}
\newcommand{\lids}{Lids}
\newcommand{\showgrid}{Showgrid}

\newcommand{\homo}{\ensuremath{S_H}}
\newcommand{\cano}{\ensuremath{S_C}}
\newcommand{\cpu}{\ensuremath{S_C^{cpu}}}
\newcommand{\bdw}{\ensuremath{S_C^{inter\beta}}}
\newcommand{\lat}{\ensuremath{S_C^{inter\lambda}}}
\newcommand{\full}{\ensuremath{S_C^{full}}}

\DefineVerbatimEnvironment{verbatim}{Verbatim}{formatcom=\scriptsize}
\DefineVerbatimEnvironment{verbatim2}{Verbatim}{formatcom=\tiny}
%\renewenvironment{verbatim}{\begin{lstlisting}}{\end{lstlisting}}
\setcounter{secnumdepth}{3}

\begin{document}


% Special memoir class
\pretitle{\begin{center}\Huge\normalfont\bfseries\scshape\color{myred}}
\posttitle{\end{center}}

\title{\vlan\ user guide}
\author{Basile Clout}
\date{\today}

\maketitle

\pagestyle{plain}
\setcounter{chapter}{0}

\begin{abstract}
  \vlan\ is a heterogeneous computational grid emulator (See related
  thesis and paper). In this document, I describe the installation,
  configuration and execution of \vlan\ and the monitoring tools
  \vmap\ and \showgrid.
\end{abstract}


\chapter{Experimental platform}
\label{sec:platform}


\lids\ is a 16-node Compaq AlphaServer cluster physically located in
the server room of the Faculty of Computer Science, University of
New Brunswick. Each node incorporates an Alpha 21264A chip (617MHz,
model EV67 variation 7, system type Tsunami Webrick). The nodes are
interconnected by a 24-port switched 100 Mbit/s Ethernet switch
(\texttt{Nortel Networks BayStack}). All nodes are also connected by
serial links to a terminal server \texttt{DECserver 900TM}
(\url{http://www.dnpg.com/datasheets_pr/ds_decserver_900tm.pdf})
accessible with \texttt{telnet async} or \texttt{telnet 172.16.0.100},
and to \texttt{pivot.cs.unb.ca}, an FTP, TFTP, etc.\ server. Only the
head node \verb/lids.cs.unb.ca/ (\verb/131.202.240.97/) is accessible
from the Internet, protected by iptables. From \lids, it is possible
to access each node, \verb/lids02/ (\verb/172.16.0.10/) to
\verb/lids16/ (\verb/172.16.0.24/), by ssh or rsh. Table
\ref{tab:lids_nodes} lists the 9 (out of 16) functional nodes:

\begin{table}[tbp]
  \centering
  \begin{tabular}{llclc}
    \hline
    \textbf{Node} & \textbf{IP} & \textbf{\texttt{async} port} &
    \textbf{Model} & \textbf{Memory (MBytes)} \\ 
    \hline
     lids & 172.16.0.1 & 2001 & DS10 & 252 \\
     lids02 & 172.16.0.10 & 2002 & DS10L & 252 \\
     lids04 & 172.16.0.12 & 2004 & DS10L & 252 \\
     lids05 & 172.16.0.13 & 2005 & DS10L & 252 \\
     lids06 & 172.16.0.14 & 2006 & DS10L & 252 \\
     lids09 & 172.16.0.17 & 2009 & DS10L & 252 \\
     lids13 & 172.16.0.21 & 2013 & DS10L & 252 \\
     lids15 & 172.16.0.23 & 2015 & DS10 & 382 \\
     lids16 & 172.16.0.24 & 2016 & DS10 & 382 \\
     \hline
  \end{tabular}
  \caption{Nodes of \lids}
  \label{tab:lids_nodes}
\end{table}

The other nodes have CPU, fan or power supply problems. lids05 was
brought back online by replacing the power supply and fan (\texttt{ELINA FAN MODELB
  DF7531-12HB-01A 12VC 210mA 06301}) and updating the DHCPd
server on lids (correct MAC address; the motherboard was taken from
lids12). Replacing the fan on 6 nodes and the
power supply on 2 would completely fix the cluster.
Currently, the nodes run a Debian GNU/Linux 4.0 operating system with
a custom 2.6.21.1 kernel.

\subsection{RMC}
\label{sec:rmc}

The Remote Management Console (RMC) allows the administrator to
perform console operations remotely using a dial-in modem. It
provides, among other things, remote power on/off, halt and reset, as
well as thermal sensors, power supply and fan monitoring. The RMC is
accessible through the serial link connected to the terminal
server. Access to the RMC of any node from any other node is possible
with telnet (connection to the serial port of the target node through the
terminal server) and the successive \verb/^[^[RMC/ keystrokes
(\verb/C-[ C-[ RMC/). For instance, from any node of the cluster (here
lids), access to the RMC of lids02 is possible with:
\begin{verbatim}
lids:~# telnet async 2002
Trying 172.16.0.100...
Connected to lids-async.
Escape character is '^]'.

RMC>
\end{verbatim}
From here, it is possible to obtain the detailed status of the node:

\begin{verbatim}
RMC>status
       PLATFORM STATUS
Firmware Revision: V1.1
Server Power: ON
RMC Halt: Deasserted
RMC Power Control: ON
Power Supply: OK   
System Fans: OK       CPU Fan: OK   
Temperature: 39.0 C (warnings at 55.0 C, power-off at 60.0 C)
Escape Sequence: ^[^[RMC
Remote Access: Disabled
RMC Password: not set
Alert Enable: Disabled
Alert Pending: YES
Init String: 
Dial String: 
Alert String: 
Com1_mode: SNOOP
Last Alert: AC Loss
Watchdog Timer: 60 seconds
Autoreboot: OFF
Logout Timer: 20 minutes
User String: 
\end{verbatim}
This confirms that the node is powered on and
functional. Should something happen (e.g.\ a frozen OS), it is
possible to reboot the system remotely:

\begin{verbatim}
RMC>power off

RMC>status
       PLATFORM STATUS
Firmware Revision: V1.1
Server Power: OFF
RMC Halt: 
RMC Power Control: OFF
Power Supply: 
System Fans:     CPU Fan: 
Temperature: 40.0 C (warnings at 55.0 C, power-off at 60.0 C)
Escape Sequence: ^[^[RMC
Remote Access: Disabled
RMC Password: not set
Alert Enable: Disabled
Alert Pending: YES
Init String: 
Dial String: 
Alert String: 
Com1_mode: SNOOP
Last Alert: AC Loss
Watchdog Timer: 60 seconds
Autoreboot: OFF
Logout Timer: 20 minutes
User String: 

RMC>power on

Returning to COM port
*** keyboard not plugged in...
256 Meg of system memory
probing hose 0, PCI
probing PCI-to-ISA bridge, bus 1
bus 0, slot 9 -- ewa -- DE500-BA Network Controller
bus 0, slot 11 -- ewb -- DE500-BA Network Controller
bus 0, slot 13 -- dqa -- Acer Labs M1543C IDE
bus 0, slot 13 -- dqb -- Acer Labs M1543C IDE
initializing GCT/FRU at 1ec000
Testing the System
Testing the Disks (read only)
Testing ew* devices.
System Temperature is 38 degrees C

COMPAQ AlphaServer DS10L 617 MHz Console V5.9-6, May  3 2001 15:29:28

CPU 0 booting

(boot dqa0.0.0.13.0 -file 2/vmlinux.gz -flags 0)
block 0 of dqa0.0.0.13.0 is a valid boot block
reading 171 blocks from dqa0.0.0.13.0
bootstrap code read in
base = 200000, image_start = 0, image_bytes = 15600
initializing HWRPB at 2000
initializing page table at ff2e000
initializing machine state
setting affinity to the primary CPU
jumping to bootstrap code
aboot: Linux/Alpha SRM bootloader version 0.9b
aboot: switching to OSF/1 PALcode version 1.86
aboot: booting from device 'IDE 0 13 0 0 0 0 0'
aboot: valid disklabel found: 6 partitions.
aboot: loading uncompressed vmlinuz-2.6.21.1...
aboot: loading compressed vmlinuz-2.6.21.1...
aboot: zero-filling 144368 bytes at 0xfffffc00007caca8
aboot: starting kernel vmlinuz-2.6.21.1 with arguments ro root=/dev/hda1
\end{verbatim}

The RMC then gives control back to telnet. Exiting telnet is classic:
press \verb/^]/ (to get the telnet prompt) and enter the quit command:

\begin{verbatim}
aboot: zero-filling 144368 bytes at 0xfffffc00007caca8
aboot: starting kernel vmlinuz-2.6.21.1 with arguments ro root=/dev/hda1

telnet> quit
Connection closed.
lids:~# 
\end{verbatim}


Finer control of the bootstrap process (installation of a new OS, a new
hard drive, ...) and other hardware configurations can be obtained from
the SRM firmware (see next section). To get interactive access to it,
we need to assert the RMC halt switch. To do that, reenter the RMC
while the node is booting and enter \texttt{halt in} (or \texttt{halt
  out}).  After a simple hardware check (similar to a BIOS), the RMC
hands control to the SRM.

\begin{verbatim}
RMC>power off

RMC>power on

Returning to COM port

RMC>halt in

*** keyboard not plugged in...

RMC>halt out

initializing GCT/FRU at 1ec000

RMC>halt in

Returning to COM port
RMC>halt out

Testing the Disks (read only)
Testing ew* devices.
System Temperature is 39 degrees C

COMPAQ AlphaServer DS10L 617 MHz Console V5.9-6, May  3 2001 15:29:28

Halt Button is IN, AUTO_ACTION ignored

>>>    

\end{verbatim}

The \texttt{>>>} is the SRM prompt.

\subsection{SRM}
\label{sec:srm}

The
SRM\footnote{\url{http://www.faqs.org/docs/Linux-HOWTO/SRM-HOWTO.html#AEN34}}
console is the firmware installed on the Alpha machines. The SRM
console is very much like a Unix shell: it views its NVRAM and devices
as a pseudo-filesystem. It is also used to boot the operating system
(currently Linux). A full listing of the available commands and their
descriptions is available with the command \texttt{help}. The list of
predefined environment variables can be displayed with \texttt{show
  *}. The SRM does not directly boot a kernel. Instead, it launches an
external loader,
\texttt{aboot}\footnote{\url{http://www.alphalinux.org/faq/aboot.html}}. \texttt{aboot}
is the Alpha architecture equivalent of \texttt{Lilo} or \texttt{Grub}
on x86.

There are two ways to configure and launch \texttt{aboot}:

\subsubsection{Command line}
\label{sec:subsection}
The classic way to boot Linux is to issue the command:
\begin{verbatim}
>>> boot devicename -fi filename -fl flags
\end{verbatim}

\emph{devicename} corresponds to the device from which the SRM will
attempt to boot. \texttt{dqa0} is the SRM device name of the primary
IDE device and, in our case, the hard disk, corresponding to
\texttt{/dev/hda} under Linux.

\emph{filename} is the path of the compressed image of the kernel,
starting from the root \texttt{/} or \texttt{/boot} directory. The
number preceding the file name gives the partition number of the
\emph{devicename} from which to boot.

\emph{flags} specify various bootflags for the \texttt{aboot}
configuration or the Linux kernel (see below).

\texttt{aboot} also supports a simple interactive command mode,
entered with the \emph{flag} ``i''.

\begin{verbatim}
>>>boot dqa0 -fl i
(boot dqa0.0.0.13.0 -file 2/vmlinux.gz -flags i)
block 0 of dqa0.0.0.13.0 is a valid boot block
reading 171 blocks from dqa0.0.0.13.0
bootstrap code read in
base = 200000, image_start = 0, image_bytes = 15600
initializing HWRPB at 2000
initializing page table at ff2e000
initializing machine state
setting affinity to the primary CPU
jumping to bootstrap code
aboot: Linux/Alpha SRM bootloader version 0.9b
aboot: switching to OSF/1 PALcode version 1.86
aboot: booting from device 'IDE 0 13 0 0 0 0 0'
aboot: valid disklabel found: 6 partitions.
Welcome to aboot 0.9b
Commands:
 h, ?                   Display this message
 q                      Halt the system and return to SRM
 p 1-8                  Look in partition <num> for configuration/kernel
 l                      List preconfigured kernels
 d <dir>                List directory <dir> in current filesystem
 b <file> <args>        Boot kernel in <file> (- for raw boot)
 i <file>               Use <file> as initial ramdisk
                        with arguments <args>
 0-9                    Boot preconfiguration 0-9 (list with 'l')
aboot> l
#
# aboot default configurations
#
#0:2/vmlinux.gz ro root=/dev/hda1
#1:3/vmlinux.old.gz ro root=/dev/sda2
#2:3/vmlinux.new.gz ro root=/dev/sda2
#3:3/vmlinux ro root=/dev/sda2
#8:- ro root=/dev/sda2          # fs less boot of raw kernel
#9:0/- ro root=/dev/sda2                # fs less boot of (compressed) ECOFF kernel
0:2/vmlinuz-2.6.21.1 ro root=/dev/hda1

aboot> b 2/vmlinuz-2.6.21.1 ro root=/dev/hda
aboot: loading uncompressed vmlinuz-2.6.21.1...
aboot: loading compressed vmlinuz-2.6.21.1...
aboot: zero-filling 144368 bytes at 0xfffffc00007caca8
aboot: starting kernel vmlinuz-2.6.21.1 with arguments ro root=/dev/hda
\end{verbatim}

\texttt{aboot} also allows the user to define short-hands for
frequently used command lines. A single digit option (0--9) requests
that \texttt{aboot} use the corresponding option string stored in the
file \texttt{/etc/aboot.conf}. The partition where this file is
located has to be specified:
\begin{verbatim}
lids02:/boot# abootconf /dev/hda 2
\end{verbatim}

For example, \texttt{>>> boot dqa0 -fl 0} with the following .conf
file will boot the same kernel configuration as previously.

\begin{verbatim}
#
# aboot default configurations
#
#0:2/vmlinux.gz ro root=/dev/hda1
#1:3/vmlinux.old.gz ro root=/dev/sda2
#2:3/vmlinux.new.gz ro root=/dev/sda2
#3:3/vmlinux ro root=/dev/sda2
#8:- ro root=/dev/sda2          # fs less boot of raw kernel
#9:0/- ro root=/dev/sda2                # fs less boot of (compressed) ECOFF kernel
0:2/vmlinuz-2.6.21.1 ro root=/dev/hda1
\end{verbatim}

This \texttt{aboot} configuration allows for a flexible selection of
which kernel to boot and how to boot it.

\subsubsection{Default configuration}
\label{sec:default}

A default kernel with default options can be quickly loaded by the
SRM. The correct parameters are saved in the \texttt{boot*}
environment variables.  The relevant default configuration of the
nodes currently looks like that of Table \ref{tab:config_SRM}
(\texttt{show boot*}):

\begin{table}[tbp]
  \centering
  \begin{tabular}{ll}
    \hline
    \textbf{Environment variable} & \textbf{Value} \\
    \hline
    auto\_action & BOOT \\
    boot\_dev & dqa0.0.0.13.0 \\
    boot\_file & 2/vmlinuz-2.6.21.1 \\
    boot\_osflags & 0 \\
    boot\_reset & OFF \\
    \hline
  \end{tabular}
  \caption{SRM default configuration}
  \label{tab:config_SRM}
\end{table}

The \texttt{bootdef\_dev} variable specifies the device which will be
booted from if no device is specified on the \texttt{boot} command
line. \texttt{boot\_file} contains the filename to be loaded by
\texttt{aboot}, while \texttt{boot\_osflags} contains any extra
flags. \texttt{auto\_action} specifies the action which the console
should take on power-up. Set it to \texttt{HALT} if you want an
interactive SRM prompt by default; then just enter \texttt{>>> boot}
to boot with the default configuration. If set to \texttt{BOOT}, the
SRM directly fires up \texttt{aboot} and bootstraps the system.

Experimentally, the \texttt{boot\_osflags} option has priority over
\texttt{boot\_file} if it contains an integer corresponding to a
configuration in the \texttt{/etc/aboot.conf} file.  To set the
environment variables, use the \texttt{set} command, like this:
\begin{verbatim}
>>> set boot_file 2/vmlinuz-2.6.21.1
>>> set boot_osflags "ro root=/dev/hda1"
\end{verbatim}


\subsection{Software}
\label{sec:softs}

Lids is the cluster's gateway to the Internet and provides apt-cacher,
a daemon acting as a local Debian mirror. Packages should preferably
be installed on the cluster nodes after retrieving them through lids.

All nodes run a carefully configured 2.6.21.1 kernel. Important
options to take into account (\texttt{make config}) are in the sections
Networking, Networking options, Core Netfilter Configuration, IP:
Netfilter configuration, QoS and/or fair queuing, Queuing/Scheduling,
and Classification (Appendix A). Before compiling the kernel, make sure
that everything needed is installed (gcc, ...), and \texttt{apt-get
  install module-init-tools}. The kernel is then directly compiled and
packaged the Debian way (\texttt{make-kpkg}). The .deb package
\texttt{linux-image-2.6.21.1\_custom.1.0\_alpha.deb}\footnote{Located
  in /usr/src on lids} is then copied onto the different nodes and
locally installed (\texttt{dpkg -i .../deb; cd /boot; mkinitrd -o
  /boot/initrd.img-2.6.21.1 2.6.21.1}). Therefore, all nodes run
exactly the same kernel with the correct kernel drivers and loadable
modules compiled.

\subsubsection{MPI}
\label{sec:mpi}

The cluster runs MPICH2 (compiled from source). 
\begin{verbatim}
lids:/usr/src# mpich2version
MPICH2 Version:         1.0.6
MPICH2 Release date:    Unknown, built on Mon Oct 29 17:55:24 ADT 2007
MPICH2 Device:          ch3:sock
MPICH2 configure:       --enable-f77=no --enable-f90=no
MPICH2 CC:      gcc  -O2
MPICH2 CXX:     c++  -O2
MPICH2 F77:       -O2
MPICH2 F90:       -O2
MPICH2 Patch level:      none
\end{verbatim}

Installations of newer MPI libraries, including LAM and OpenMPI, have
all failed. Although OpenMPI packages exist in Debian's testing
distribution, the OpenMPI developers do not officially support this
architecture, and the maintainer-compiled package in the Debian tree
is buggy. Several attempts at compiling OpenMPI from source have also
failed (among other things, problems with the Fortran compiler and
with Debian's auto-update because of incompatible libc versions;
problems remained even with Fortran disabled).

All the MPI files are in the \texttt{/home/basil/mpi} directory of lids,
exported to the other nodes by NFS. \texttt{hosts.mf} contains the working
nodes of the cluster, while \texttt{mpd.hosts} contains the nodes
belonging to the MPI ring (which includes the head node lids).


To start the MPI ring, first clean everything for all the users
(including root):
\begin{verbatim}
mpdallexit;
mpdcleanup;
/home/basil/scripts/send_command "killall python2.4";
\end{verbatim}
Killing all python2.4 processes is heavy-handed, but it is the
easiest way to make sure that no rogue \texttt{mpd} is running
on any node. Indeed, \texttt{mpd} is the MPI daemon and executes as a
python script. Then, boot the ring:
\begin{verbatim}
mpdboot -n 9 -f /home/basil/mpi/mpd.hosts
\end{verbatim}

Here, the ring contains 9 nodes defined in \texttt{mpd.hosts}.
An MPI process is executed on the ring with:
\begin{verbatim}
lids:~# mpiexec -machinefile mfile -n nbnodes -wdir mydir process 
\end{verbatim}

\emph{mfile} is a text file containing the nodes of the ring in
order. 
\emph{nbnodes} specifies how many nodes of the ring have to be used
for this MPI instance. The selected nodes will be those declared
in the first \emph{nbnodes} lines of \emph{mfile}. 
\emph{mydir} gives the working directory (just to be sure).

\texttt{mpdtrace} lists each node in the ring. \texttt{mpdlistjobs}
lists all the current MPI jobs including the jobID. This ID can be
used to cancel a job with \texttt{mpdkilljob}.

\subsubsection{Other programs}
\label{sec:progs}

Several other tools need to be installed after a fresh Debian
install. To compile and execute Wrekavoc:
\begin{verbatim}
apt-get install libgsl0 libgsl0-dev libxml2 libxml2-dev pkg-config
\end{verbatim}
These requirements are specific to Wrekavoc and are not necessary for \vlan.

Mandatory packages to correctly execute \vlan\ are:
\begin{verbatim}
aptitude install iproute iperf python nmap
\end{verbatim}
Iproute provides \emph{tc} and \emph{ip}. Because of bugs in the
different versions of Iperf (daemon in UDP mode), \vlan\ uses a mix of
iperf 1.0.7 and 2.0.2.  

Make sure that ssh and sshd are working on all the nodes. Rsh is
optional but faster in some configurations (\vlan\ works with
both). In the case of \texttt{ssh}, the keys first have to be
exchanged. Create a private and public key on lids
(\texttt{ssh-keygen}). The public key is to be exported and appended
to the \texttt{authorized\_keys} file in the \verb|~/.ssh| directory
of every node in the cluster. The installation and configuration of
\texttt{rsh} is a lot more complicated (one needs to enable inetd and
rsh-redone, and modify the pam.d and securetty configurations).
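The key exchange described above can be sketched as follows. This is an illustrative sketch only: the key type and paths are the \texttt{ssh-keygen} defaults, the node list is an example, and an empty passphrase is assumed so that \vlan\ can connect non-interactively.

```shell
# Generate a key pair on lids (empty passphrase so scripts can log in
# non-interactively).
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa

# Append the public key to authorized_keys on every node of the cluster.
# The node list here is illustrative.
for node in lids02 lids04 lids05 lids06 lids09 lids13 lids15 lids16; do
  ssh "$node" "mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys" < ~/.ssh/id_rsa.pub
done
```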


\chapter{Emulation}
\label{sec:emulation}

Proper emulation of heterogeneous computational grids is performed
with \vlan. \vlan\ is located on lids in
\url{/root/export_cluster/code/tools}. It requires root privileges.


\section{Theory}
\label{sec:vlan_theory}

A heterogeneous computational grid is a network of heterogeneous
computing and communication resources. From a homogeneous cluster
represented by a processor graph $C=C(P,L_C)$ (Figure \ref{fig:lids}),
\vlan\ creates virtual topologies $S=S(P, L)$ consisting of
interconnected clusters $C_i=C(P_i, L_i)$. Figure \ref{fig:grids}
presents the demonstration grids emulated in the thesis. Figure
\ref{fig:grid_full} is a more complicated example studied throughout
this section. Table \ref{tab:nodes} gives the mapping between the
hostname of the node and its rank in the MPI ring used in all the
graphic representations of Figures \ref{fig:lids}, \ref{fig:grids} and
\ref{fig:grid_full}.

\begin{figure}[!t]
  \centering  
  \includegraphics[width=2.8in]{grids/homo.pdf} 
  \caption{lids: \homo}
  \label{fig:lids}
\end{figure}

\begin{figure}[!t]
  \centering
  \subfloat[\cano]{
    \includegraphics[width=2.5in]{grids/cano8.pdf} 
    \label{fig:grids_topo}}
  \subfloat[\cpu]{
    \includegraphics[width=2.5in]{grids/cano_cpu8.pdf} 
    \label{fig:grids_cpu}}
  \hfill
  \subfloat[\bdw]{
    \includegraphics[width=2.5in]{grids/cano_inter8.pdf} 
    \label{fig:grids_bdw}}
  \subfloat[\lat]{
    \includegraphics[width=2.5in]{grids/cano_interlat8.pdf} 
    \label{fig:grids_lat}}
  \caption{Emulated heterogeneous computational grids: \footnotesize{(a)
      modification of the topology, (b) Topology of \cano\ with
      degradation of the processor performance in one cluster, (c)
      Topology of \cano\ with inter-cluster bandwidth degradation and
      (d) Topology of \cano\ with inter-cluster latency degradation}}
  \label{fig:grids}
\end{figure}

\begin{figure}[!t]
  \centering  
  \includegraphics[width=3.5in]{grids/cano_full8.pdf} 
  \caption{Heterogeneous grid \full}
  \label{fig:grid_full}
\end{figure}

\begin{table}[tbp]
  \centering
  \begin{tabular}{cc}
    \hline
    \textbf{hostname} & \textbf{MPI rank} \\
    \hline
    lids02 & 0 \\
    lids04 & 1 \\
    lids05 & 2 \\
    lids06 & 3 \\
    lids09 & 4 \\
    lids13 & 5 \\
    lids15 & 6 \\
    lids16 & 7 \\
    \hline
  \end{tabular}
  \caption{Mapping node hostname $\leftrightarrow$ rank in the MPI ring}
  \label{tab:nodes}
\end{table}


\vlan\ first statically modifies the set of routes in the routing
table of each node in order to create a custom virtual topology
(section \ref{sec:vlan_topo}). Next, \vlan\ degrades the bandwidth and
latency characteristics of the links in the grid (section
\ref{sec:vlan_links}). Finally, \vlan\ degrades the CPU performance
for processes belonging to specified users (section
\ref{sec:vlan_cpu}). \vlan\ uses tools present in some Linux kernel
subsystems (netfilter, tc, ...).


\subsection{Topology}
\label{sec:vlan_topo}

The virtual topology of a computational grid is created by statically
specifying each route between each pair of processors in the grid. If
there are $p$ processors in the grid, then the routing table of each
node of this grid contains $p-1$ routes corresponding to the $p-1$
other processors, as well as the default entries (for the nodes
outside the grid; in lids, the default route is the 172.16.0.0/16
network). There are two types of routes. If two nodes are directly
connected by a link of the virtual topology, the destination node has
the \texttt{scope link} attribute and is directly reachable. If the
path is not direct, the route gives the corresponding gateway, or next
hop in the path from the local node to the destination.
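The two types of routes can be installed with \texttt{iproute2}. The following is a sketch for lids02: it requires root privileges, and the realm names must already be declared in \url{/etc/iproute2/rt_realms}.

```shell
# Direct link of the virtual topology: lids05 is reachable with scope link.
ip route replace 172.16.0.13 dev eth0 scope link src 172.16.0.10 realm local_c1

# Indirect path: lids13 is reached through the gateway lids09 (next hop).
ip route replace 172.16.0.21 via 172.16.0.17 dev eth0 realm dist_c2
```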

The following routing table is the modified routing table of lids02
(rank 0), the (virtual) router of $C_1$ used to emulate \full\ (Figure
\ref{fig:grid_full}):
\begin{verbatim}
172.16.0.21 via 172.16.0.17 dev eth0 realm dist_c2 
172.16.0.23 via 172.16.0.17 dev eth0 realm dist_c2 
172.16.0.17 dev eth0  proto kernel  scope link  src 172.16.0.10 realm dist_c2 
172.16.0.13 dev eth0  proto kernel  scope link  src 172.16.0.10 realm local_c1 
172.16.0.12 dev eth0  proto kernel  scope link  src 172.16.0.10 realm local_c1 
172.16.0.14 dev eth0  proto kernel  scope link  src 172.16.0.10 realm local_c1 
172.16.0.24 via 172.16.0.17 dev eth0 realm dist_c2 
172.16.0.0/16 dev eth0  proto kernel  scope link  src 172.16.0.10 
\end{verbatim}

lids04 (rank 1), a simple node (not a router) of $C_1$ has a simpler
routing table:
\begin{verbatim}
172.16.0.21 via 172.16.0.10 dev eth0 realm local_c1 
172.16.0.23 via 172.16.0.10 dev eth0 realm local_c1 
172.16.0.17 via 172.16.0.10 dev eth0 realm local_c1 
172.16.0.13 dev eth0  proto kernel  scope link  src 172.16.0.12 realm local_c1 
172.16.0.14 dev eth0  proto kernel  scope link  src 172.16.0.12 realm local_c1 
172.16.0.24 via 172.16.0.10 dev eth0 realm local_c1 
172.16.0.10 dev eth0  proto kernel  scope link  src 172.16.0.12 realm local_c1 
172.16.0.0/16 dev eth0  proto kernel  scope link  src 172.16.0.12 
\end{verbatim}

The next-hop nodes (or gateways) are determined with a
Floyd-Warshall algorithm inspired by the RIP protocol (see thesis).
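The next-hop computation can be illustrated with a minimal Floyd-Warshall sketch. This is illustrative only, not \vlan's actual implementation; the node names and links are a reduced example.

```python
# Minimal sketch of next-hop computation with Floyd-Warshall.
# Illustrative only -- not vlan's actual code.

INF = float("inf")

def next_hops(nodes, links):
    """links: set of direct (a, b) edges of the virtual topology.
    Returns nxt[a][b] = first hop on the shortest path from a to b."""
    dist = {a: {b: (0 if a == b else INF) for b in nodes} for a in nodes}
    nxt = {a: {b: None for b in nodes} for a in nodes}
    for a, b in links:
        for u, v in ((a, b), (b, a)):        # links are bidirectional
            dist[u][v] = 1
            nxt[u][v] = v                    # direct neighbour: scope link
    for k in nodes:                          # classic Floyd-Warshall relaxation
        for i in nodes:
            for j in nodes:
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
                    nxt[i][j] = nxt[i][k]    # first hop towards k
    return nxt

# Tiny example: lids04 -- lids02 (router) -- lids09; lids04 reaches
# lids09 through its gateway lids02.
hops = next_hops(["lids04", "lids02", "lids09"],
                 {("lids04", "lids02"), ("lids02", "lids09")})
```

Destinations with a next hop equal to themselves correspond to \texttt{scope link} entries; the others become \texttt{via} routes.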


\subsection{Links}
\label{sec:vlan_links}

After the links $L$ of the topology of the heterogeneous grid have
been created, \vlan\ modifies their bandwidth and latency
characteristics. \vlan\ uses the properties of the type of
computational grid we chose to emulate (cluster of clusters) to
optimize the configuration. \vlan\ makes use of the Linux Advanced
Routing and Traffic Control (LARTC) subsystem of recent Linux kernels
to precisely control the way network packets are sent by the
kernel. \texttt{iproute2} and \texttt{tc} are part of the LARTC and
provide user-space tools to modify the configuration of the
kernel. They also make use of the Netfilter framework. These tools
need root privileges.

\texttt{tc} puts the packets to be sent through an interface in a
system of queues, or queuing disciplines, forming a tree. When the
kernel is ready to send a packet, the tree is dequeued from the root
while respecting the queuing discipline algorithms, their classes and
associated filters. A queuing discipline defines rules to classify and
give a priority order to network packets. Each class contains packets
matched by a corresponding filter. \vlan\ uses the \emph{Hierarchical
  Token Bucket} (HTB) and \emph{netem} qdiscs. HTB provides bandwidth
policing and shaping (average and maximum data rate, maximum burst,
etc.) while \emph{netem} introduces latency (among other effects such as
packet loss, reordering, duplication, etc.). qdiscs can be combined to add
different characteristics to a link. In \vlan, filters match the
cluster of the next hop on the route taken by the packet. This is
implemented with the \emph{route} filter. The route filter matches a
tag introduced in the entries of the routing table (see the previous
examples of routing tables). This tag is previously declared in the
\url{/etc/iproute2/rt_realms} configuration file. Therefore, a packet
matching a given route will be classified in a class where the latency
of the packets and the global bandwidth are controlled by the
heterogeneous configuration of the emulated grid. A schematic
representation of this algorithm is presented in Figure \ref{fig:tc}.

\begin{figure}[tbp]
  \centering
  \includegraphics[width=4in]{schematc_vlan.pdf}
  \caption{\emph{Vlan}'s queuing structure}
  \label{fig:tc}
\end{figure}

The set of \emph{tc} rules of lids02 in the grid configuration \full\
is:

\begin{verbatim}
---> Queing disciplines:
qdisc htb 1: r2q 10 default 0 direct_packets_stat 85
qdisc netem 2: parent 1:1 limit 1000 delay 10.0ms
qdisc netem 3: parent 1:2 limit 1000 delay 100.0ms

---> Classes:
class htb 1:1 root leaf 2: prio 0 rate 80000Kbit ceil 80000Kbit burst 11Kb cburst 11Kb 
class htb 1:2 root leaf 3: prio 0 rate 10000Kbit ceil 10000Kbit burst 2720b cburst 2720b 
class netem 2:1 parent 2: 
class netem 3:1 parent 3: 

---> Filters:
filter parent 1: protocol ip pref 100 route 
filter parent 1: protocol ip pref 100 route fh 0xffff0001 flowid 1:1 to local_c1 
filter parent 1: protocol ip pref 100 route fh 0xffff0002 flowid 1:2 to dist_c2 
\end{verbatim}

The set of rules is simpler on lids04. Indeed, packets are always sent
to next-hop nodes in the same cluster: either another simple node, or
(one of) the routers if the packet is addressed to it or to a node outside
the cluster.

\begin{verbatim}
---> Queing disciplines:
qdisc htb 1: r2q 10 default 0 direct_packets_stat 77
qdisc netem 2: parent 1:1 limit 1000 delay 10.0ms

---> Classes:
class htb 1:1 root leaf 2: prio 0 rate 80000Kbit ceil 80000Kbit burst 11Kb cburst 11Kb 
class netem 2:1 parent 2: 

---> Filters:
filter parent 1: protocol ip pref 100 route 
filter parent 1: protocol ip pref 100 route fh 0xffff0001 flowid 1:1 to local_c1 
\end{verbatim}

\vlan\ is smart enough to set only the necessary rules when possible.


\subsection{CPU}
\label{sec:vlan_cpu}

The apparent computational power of a processor is degraded with
\emph{CPU-lim}, a tool implemented in Wrekavoc. The version included
in \vlan\ is an Alpha port accepting float percentages. \emph{CPU-lim}
consists of two programs requiring root privileges. \texttt{cpulimd}
regularly checks the list of currently executing processes. If one of
these processes belongs to a user listed in the \texttt{cpulimd}
configuration file (\url{/etc/cpulim.conf}), \texttt{cpulimd} executes
\texttt{cpulim} on this process with the percentage $p_{perf}$ of
original performance (a concept closely related to the level of
performance degradation) corresponding to the effective user
(euid). \texttt{cpulim} gives itself the highest priority of a FIFO
scheduler. It measures the time since the process started ($t_{tot} =
\textrm{uptime}-\textrm{process\_start\_time}$) as well as the time during
which the process it supervises executes in user-space ($t_{exec}$,
from the number of jiffies in utime). When the ratio
$\frac{t_{exec}}{t_{tot}}$ reaches $p_{perf}$, \texttt{cpulim}
sends a SIGSTOP signal to the supervised process. When this ratio
returns below $p_{perf}$, \texttt{cpulim} sends a SIGCONT
signal. Thus, the process executes only $p_{perf}$\% of the
time: in practice, the process is $(100-p_{perf})\%$ slower than
normal.
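The stop/continue decision can be sketched as follows. This is an illustration of the duty-cycle logic only; the real \texttt{cpulim} computes $t_{tot}$ and $t_{exec}$ from \texttt{/proc} and sends the signals to the supervised process.

```python
# Sketch of CPU-lim's duty-cycle decision.  Illustrative only: the real
# cpulim reads uptime and utime from /proc and signals the process.

def next_signal(t_exec, t_tot, p_perf):
    """Return the signal cpulim would send, given the user-space
    execution time t_exec, the total time since process start t_tot,
    and the allowed percentage p_perf of original performance."""
    if t_tot <= 0:
        return "SIGCONT"                # nothing measured yet
    ratio = 100.0 * t_exec / t_tot      # percentage of time spent executing
    # Stop the process once it has used its share of the time,
    # resume it when the ratio falls back below p_perf.
    return "SIGSTOP" if ratio >= p_perf else "SIGCONT"

# With p_perf = 33 (the /etc/cpulim.conf example below), a process that
# ran 40 of the last 100 seconds is stopped; one that ran 20 is resumed.
```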

\vlan\ simply writes the $p_{perf}$ corresponding to the different
users in \url{/etc/cpulim.conf} and launches \texttt{cpulimd}.

The synopsis of \texttt{cpulim} is \texttt{\textbf{cpulim}
  <percentage> <euid>}. The following is the \texttt{cpulimd}
configuration file corresponding to \full. The processes belonging to
user \emph{basil} (uid 1001) run at $33\%$ of their original
performance, while all the other processes are not supervised at all
(100\% means no supervision).

\begin{verbatim}
lids16:~# cat /etc/cpulim.conf 
33 1001
100 *
lids16:~# 
\end{verbatim}

It is important to notice that \texttt{cpulim} has a significant
overhead. Experimentally, this overhead corresponds to 40\% of the
performance of a process (performance loss measured for a degradation
of 0.01\%). Although \emph{CPU-lim} is not accurate, the relative
degradations between processes supervised by cpulim are correct, as
this overhead is constant. Moreover, \emph{CPU-lim} is not suited to
interactive processes.


\section{Usage}
\label{sec:vlan_usage}


\subsection{Synopsis}
\label{sec:vlan_syno}

\texttt{./vlan4.py [-a (--add\_conf)] [-d (--del\_conf)] [-v
  (--verbose)] [-h (--help)] [--debug] [-f (--fake)] [--silent] [-w
  (--wrekagrid)] [--topo] [--net] [--cpu] [-c (--conf\_file)
  <conf\_file>]}
 
If \vlan\ is executed without arguments, the default configuration
(the 8 nodes of lids) is deleted.

The option \emph{-a} combined with a configuration file (\emph{-c}) in
the \vlan\ file format emulates the heterogeneous computational grid
described in the file. The option \emph{-d} deletes this
configuration. In fact, \emph{-d} just launches the delete scripts on
the nodes composing the described grid. It is not necessary to give
the exact configuration file to \vlan\ for deleting a
configuration; only the set of nodes has to be similar. It is even
possible to emulate or delete only some characteristics of the grid
with the options \emph{--topo} (only topology), \emph{--net} (only
network links) and \emph{--cpu}. These options can be freely
combined. However, some combinations may not make sense:
\emph{--net} alone, for example, will modify the characteristics of
links that do not form a coherent topology.

The verbose option \emph{-v} displays a lot of text (contrary to
\emph{--silent}): configuration, scripts to be sent to the nodes, and
error and warning messages. The \emph{-f} option creates the correct
scripts but neither sends them to nor executes them on the cluster's
nodes. This allows for checking the scripts or simply learning how
\vlan\ works ;).

The \emph{--debug} option produces a lot of cryptic messages. The
\emph{-w} option translates the given \vlan\ configuration file into
the Wrekagrid file format (which is considerably more complicated). This
allows for quickly creating configurations for Wrekagrid without the
hassle of writing hundreds of error-prone lines by hand.

When running, \vlan\ prints what it is sending to and executing on
each node, and whether the connection is successful or not.

\begin{verbatim}
lids:~/export_cluster/code/tools# ./vlan4.py -a -c vlanbench/cano.vlan
Sending config to 172.16.0.13 ...
ssh 172.16.0.13 "if [ -f /etc/vlan/del_topo_172.16.0.13.sh ];
then sh /etc/vlan/del_topo_172.16.0.13.sh;fi
if [ -f /etc/vlan/del_net_172.16.0.13.sh ]; 
then sh /etc/vlan/del_net_172.16.0.13.sh;fi
if [ -f /etc/vlan/del_cpu_172.16.0.13.sh ]; 
then sh /etc/vlan/del_cpu_172.16.0.13.sh;fi
";
ssh 172.16.0.13 "if [ ! -d /etc/vlan/ ]; 
then mkdir -p /etc/vlan/;fi";
scp ./vlan_tmp/add_topo_172.16.0.13.sh
./vlan_tmp/del_topo_172.16.0.13.sh 
./vlan_tmp/add_net_172.16.0.13.sh ./vlan_tmp/del_net_172.16.0.13.sh 
./vlan_tmp/add_cpu_172.16.0.13.sh ./vlan_tmp/del_cpu_172.16.0.13.sh 
172.16.0.13:/etc/vlan/;
ssh 172.16.0.13 "if [ -f /etc/vlan/add_topo_172.16.0.13.sh ]; 
then sh /etc/vlan/add_topo_172.16.0.13.sh;fi
if [ -f /etc/vlan/add_net_172.16.0.13.sh ]; 
then sh /etc/vlan/add_net_172.16.0.13.sh;fi
if [ -f /etc/vlan/add_cpu_172.16.0.13.sh ]; 
then sh /etc/vlan/add_cpu_172.16.0.13.sh;fi
";
... OK!


Sending config to 172.16.0.23 ...
ssh 172.16.0.23 "if [ -f /etc/vlan/del_topo_172.16.0.23.sh ]; 
then sh /etc/vlan/del_topo_172.16.0.23.sh;fi
if [ -f /etc/vlan/del_net_172.16.0.23.sh ];
 then sh /etc/vlan/del_net_172.16.0.23.sh;fi
if [ -f /etc/vlan/del_cpu_172.16.0.23.sh ];
 then sh /etc/vlan/del_cpu_172.16.0.23.sh;fi
";
ssh 172.16.0.23 "if [ ! -d /etc/vlan/ ]; 
then mkdir -p /etc/vlan/;fi";
scp ./vlan_tmp/add_topo_172.16.0.23.sh
./vlan_tmp/del_topo_172.16.0.23.sh 
./vlan_tmp/add_net_172.16.0.23.sh ./vlan_tmp/del_net_172.16.0.23.sh 
./vlan_tmp/add_cpu_172.16.0.23.sh ./vlan_tmp/del_cpu_172.16.0.23.sh 
172.16.0.23:/etc/vlan/;
ssh 172.16.0.23 "if [ -f /etc/vlan/add_topo_172.16.0.23.sh ];
 then sh /etc/vlan/add_topo_172.16.0.23.sh;fi
if [ -f /etc/vlan/add_net_172.16.0.23.sh ]; 
then sh /etc/vlan/add_net_172.16.0.23.sh;fi
if [ -f /etc/vlan/add_cpu_172.16.0.23.sh ]; 
then sh /etc/vlan/add_cpu_172.16.0.23.sh;fi
";
... OK!

...
6 other nodes
...
...
\end{verbatim}


\subsection{File format}
\label{sec:vlan_conf}

Here is the \vlan\ configuration file describing the heterogeneous
computational grid \full:

\begin{verbatim}
# Vlan cano topology full parameters

# Fix topology of the eight nodes
# of the homogeneous cluster

@topo
c1:  c2
172.16.0.10: rc1, rc2
172.16.0.12: m1c1
172.16.0.13: m2c1
172.16.0.14: m3c1

c2: c1
172.16.0.17: rc2, rc1
172.16.0.21: m1c2
172.16.0.23: m2c2
172.16.0.24: m3c2

# Link configuration
@network
c1: 80 10
c2: 40 20
c1<->c2: 10 100

# CPU degradation
@cpu
c1: basil=100
c2: basil=33
\end{verbatim}

Lines starting with a ``\#'' (comment lines) are not parsed and are
simply ignored. Blank lines are not significant before the first line
starting with ``@''.

The \vlan\ configuration file contains at most three paragraphs
starting with \texttt{@topo}, \texttt{@network} and \texttt{@cpu}. The
convention is that each line contains one \verb/key:value/ pair. The
key can be a cluster name or a node address. The values contain
topology, network and processor information.

The first paragraph is mandatory and has to appear first as it
describes the topology of $S$, the grid to emulate. Each group of
lines separated by a blank line (paragraph) describes a cluster of
$S$. The first line of a paragraph gives the cluster name (or id)
before the colon ``:'', and a comma-separated list of clusters (names)
directly connected to it. In \full, the cluster $C_1$ is directly
connected to one cluster only, $C_2$. The following lines list the
nodes within the current cluster. Such lines start with the address
(IP or hostname) of the node, a colon, the name (id) of the node and a
comma-separated list of nodes from other clusters to which this node
is connected. If the node is a router, this list contains one or
more node names from other clusters. Otherwise the list is empty. In
\full, rc1 (IP 172.16.0.10) is a router of $C_1$ directly connected to
rc2, the unique router of $C_2$. In this paragraph, blank lines are
important as they separate the definition of the clusters.

The \texttt{@network} paragraph is optional. It describes the
heterogeneous characteristics of the computational grid links. Two
kinds of entries may appear. The key can be the name of a cluster
described in the paragraph \texttt{@topo}. In this case, the
corresponding value is a pair of floats (bandwidth, latency) specifying
the bandwidth and latency of all the links within the cluster named by
the key. The key can also contain two cluster names
separated by a ``$<->$'' symbol. In this case, the pair of floats
(bandwidth, latency) describes the characteristics of the link
connecting the two clusters $C_1$ and $C_2$. This link must be unique
(that means that there cannot be two intercluster links connecting the
two same clusters with different routers). The bandwidth must be
presented in MBits/s and the latency in ms (milliseconds). A null (0)
or missing value (for the latency) means that the corresponding
parameter is not modified and, if possible, no rule is created. It is
possible to omit a cluster or an intercluster link. In this case, no
modification of the corresponding links is performed.
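This guide does not reproduce the exact rules that \vlan\ generates. Purely as a hedged illustration, the sketch below assembles plausible hand-written \texttt{tc} equivalents for one \texttt{@network} entry, using the standard \texttt{netem} (latency) and \texttt{tbf} (bandwidth) queueing disciplines; the interface name and the tbf burst/latency parameters are assumptions, not \vlan's actual choices.

```python
# Illustrative sketch only: Vlan generates its own tc rules. These are
# plausible hand-written equivalents for one "@network" entry such as
# "c1<->c2: 10 100" (10 MBits/s bandwidth, 100 ms latency).
def tc_commands(dev, bw_mbit, lat_ms):
    """Build tc commands shaping one link: netem for latency, tbf for
    bandwidth. A null (0) value leaves the parameter unmodified and
    creates no rule, as described above."""
    cmds = []
    if lat_ms:
        cmds.append(f"tc qdisc add dev {dev} root handle 1: "
                    f"netem delay {lat_ms}ms")
    if bw_mbit:
        parent = "parent 1: handle 10:" if lat_ms else "root handle 1:"
        cmds.append(f"tc qdisc add dev {dev} {parent} tbf "
                    f"rate {bw_mbit}mbit burst 32kbit latency 400ms")
    return cmds

for cmd in tc_commands("eth0", 10, 100):   # the intercluster link above
    print(cmd)
```

Running such commands, like \vlan\ itself, requires root privileges on the node.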

The last paragraph, \texttt{@cpu}, is also optional. For each cluster,
the processes belonging to the given users are degraded by the given
amount. Each key is the name of a cluster. It is not possible to
modify the performance of the nodes individually. This cluster must
have been declared and defined in the \texttt{@topo} paragraph. The
corresponding value is a comma-separated list of pairs (user, level
of degradation) joined by an ``='' sign. The level of degradation
$p_{perf}$ for a given user must be a float between 0 and 100. This
value specifies the maximum ratio $p_{perf}=\frac{t_{exec}}{t_{tot}}$
allowed for the processes belonging to the corresponding user (in
fact, with the same uid and euid) on all the nodes in the current
cluster. Practically, a process will be in sleep or stop mode
$1-p_{perf}$ of the time it could have been running. $1-p_{perf}$ can be
interpreted as the percentage of degradation of these processes. All
nodes within one cluster will have the same \texttt{@cpu}
configuration. In \full, the processes running with
the same euid as user basil's uid are supervised in both
clusters. There will be no performance degradation in $C_1$ and 66\% in
$C_2$. In fact, the configuration given for $C_1$ is not necessary. By
default, if the configuration of a cluster is omitted, no process is
degraded at all.

\vlan\ checks the consistency of the file. The declarations of nodes
and clusters (in \texttt{@topo}) must be unique and consistent with
each other. The network definitions have to reference clusters and
intercluster links declared in the \texttt{@topo} paragraph. The same
is true for the \texttt{@cpu} paragraph.
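As a summary of the format, the three paragraphs can be read with a short sketch like the following; this is illustrative only, and the actual \texttt{vlan4.py} parser performs the full consistency checks described above.

```python
# Illustrative sketch of a parser for the Vlan file format described
# above (the real vlan4.py parser performs many more checks).
def parse_vlan(text):
    topo, network, cpu = {}, {}, {}
    section, cluster = None, None
    for raw in text.splitlines():
        line = raw.strip()
        if not line:
            if section == "@topo":
                cluster = None          # blank lines separate clusters
            continue
        if line.startswith("#"):        # comment lines are ignored
            continue
        if line.startswith("@"):
            section = line
            continue
        key, value = [s.strip() for s in line.split(":", 1)]
        if section == "@topo":
            if cluster is None:         # first line: cluster and links
                cluster = key
                topo[cluster] = {"links": value.split(", "), "nodes": {}}
            else:                       # node line: address: name[, ...]
                topo[cluster]["nodes"][key] = value.split(", ")
        elif section == "@network":     # "bandwidth [latency]" floats
            bw, lat = (value.split() + ["0"])[:2]
            network[key] = (float(bw), float(lat))
        elif section == "@cpu":         # comma-separated user=level pairs
            cpu[key] = dict(p.split("=") for p in value.split(", "))
    return topo, network, cpu
```

For the \full\ configuration above, for instance, the \texttt{@network} entry for the intercluster link parses to the pair (10.0, 100.0).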

\subsection{Architecture}
\label{sec:vlan_arc}

\vlan\ can be executed on one of the nodes of the cluster, the head
node, or even on another computer, possibly offline, for greater
flexibility (with the option \emph{--fake}). \vlan\ simply writes 6 shell (sh)
scripts (in \texttt{vlan\_tmp} by default) for each node in the
cluster. These scripts contain commands to run delete scripts if
existing (to delete any previous grid configuration), set the rules
for creating the custom (new) topology
(\texttt{add\_topo\_\emph{ip}.sh}) and modifying the network links
characteristics (\texttt{add\_net\_\emph{ip}.sh}) and processor
performance (\texttt{add\_cpu\_\emph{ip}.sh}), as well as to delete
the configuration and return to the previous state (homogeneous
cluster): \texttt{del\_topo\_\emph{ip}.sh, del\_net\_\emph{ip}.sh and
  del\_cpu\_\emph{ip}.sh}. \emph{ip} is the IP address of the node
where the scripts have to be run to enable or disable parts of the
emulation. During normal execution (\emph{-a -c}), \vlan\ copies these
scripts in the \url{/etc/vlan} directory of the nodes, and executes
the 3 scripts starting with \texttt{add}. \vlan\ reports an error if
these scripts fail to execute or encounter an error. Only one or two of
them are executed if one or more of the options \emph{--topo},
\emph{--net} and \emph{--cpu} are set. However, these scripts can also
be copied manually to the target computers and run by hand. This
allows for greater flexibility and debugging capabilities. When
deleting the configuration, one or more scripts starting with
\texttt{del\_} are run and deleted.

\vlan\ needs root privileges in order to run \emph{tc} commands and
\texttt{cpulim(d)}, create the directory \url{/etc/vlan} and write
files in \url{/etc/iproute2/}, \url{/etc/} (\url{cpulim.conf}) and
\url{/etc/vlan}.

By default, \vlan\ creates two log files in addition to the shell
scripts: a ``.sum'' file summarizing the written set of orders and
rules for each node, and a ``.log'' file detailing the algorithm
steps, warnings and errors of \vlan.


\section{Examples}
\label{sec:vlan_examples}

In this section we present examples of configurations emulated on
Lids.

\subsection{Cano}
\label{sec:ex_cano}

We call Cano a grid consisting of two clusters $C_1$ and $C_2$
connected by one intercluster link. \cano, \cpu, \bdw, \lat\ and
\full\ are examples of Cano grids.

\paragraph{grid \cano}

\begin{verbatim}
@topo
c1:  c2
172.16.0.10: rc1, rc2
172.16.0.12: m1c1
172.16.0.13: m2c1
172.16.0.14: m3c1

c2: c1
172.16.0.17: rc2, rc1
172.16.0.21: m1c2
172.16.0.23: m2c2
172.16.0.24: m3c2

@network
c1: 0 0
c2: 0 0
c1<->c2: 0 0

@cpu
c1: basil=100
c2: basil=100
\end{verbatim}

\paragraph{grid \cpu}

\begin{verbatim}
@topo
c1:  c2
172.16.0.10: rc1, rc2
172.16.0.12: m1c1
172.16.0.13: m2c1
172.16.0.14: m3c1

c2: c1
172.16.0.17: rc2, rc1
172.16.0.21: m1c2
172.16.0.23: m2c2
172.16.0.24: m3c2

@cpu
c1: basil=99
c2: basil=20
\end{verbatim}

\paragraph{grid \bdw}

\begin{verbatim}
@topo
c1:  c2
172.16.0.10: rc1, rc2
172.16.0.12: m1c1
172.16.0.13: m2c1
172.16.0.14: m3c1

c2: c1
172.16.0.17: rc2, rc1
172.16.0.21: m1c2
172.16.0.23: m2c2
172.16.0.24: m3c2

@network
c1<->c2: 10 0

\end{verbatim}

\paragraph{grid \lat}

\begin{verbatim}
@topo
c1:  c2
172.16.0.10: rc1, rc2
172.16.0.12: m1c1
172.16.0.13: m2c1
172.16.0.14: m3c1

c2: c1
172.16.0.17: rc2, rc1
172.16.0.21: m1c2
172.16.0.23: m2c2
172.16.0.24: m3c2

@network
c1<->c2: 0 10
\end{verbatim}

All these configurations can be set with the command
\verb/vlan4.py -a -c cano_conf.vlan/. This is not the only
solution. Once the \cano\ grid has been emulated, emulating \cpu\ for
example is as simple as \verb/vlan4.py -a --cpu -c cano_cpu.vlan/.
This is faster for big grids.


\subsection{Triangle}
\label{sec:ex_triangle}

The Triangle configuration is made of three clusters $C_1$, $C_2$ and
$C_3$ interconnected in a triangle, e.g. $S_T^{inter}$ (Figure
\ref{fig:grid_triangle}).

\begin{figure}[tbp]
  \centering
  \includegraphics[width=3.2in]{grids/triangle.pdf}
  \caption{$S_T^{inter}$}
  \label{fig:grid_triangle}
\end{figure}

The corresponding configuration file is:
\begin{verbatim}
@topo
c1:  c2, c3
172.16.0.10: rc11, rc32
172.16.0.12: m1c1
172.16.0.13: rc12, rc21

c2: c1, c3
172.16.0.14: rc21, rc12
172.16.0.17: m1c2
172.16.0.21: rc22, rc31

c3: c2, c1
172.16.0.23: rc31, rc22
172.16.0.24: rc32, rc11

@network
c1<->c2: 50 1
c2<->c3: 90 2
c3<->c1: 70 3
\end{verbatim}


\subsection{Circle}
\label{sec:ex_circle}

If each cluster contains only one node, all kinds of topologies can
easily be emulated. All clusters in the Circle1 configuration are in
fact single nodes interconnected to form a circle shape,
e.g. $S_{Cir1}^{inter}$ (Figure \ref{fig:grid_circle1}). The corresponding
configuration file is:

\begin{figure}[tbp]
  \centering
  \includegraphics[width=3.2in]{grids/circle1.pdf}
  \caption{$S_{Cir1}^{inter}$}
  \label{fig:grid_circle1}
\end{figure}

\begin{verbatim}
@topo
c1:  c2, c8
172.16.0.10: rc1, rc8, rc2

c2: c3, c1
172.16.0.12: rc2, rc1, rc3

c3: c2, c4
172.16.0.13: rc3, rc2, rc4

c4: c3, c5
172.16.0.14: rc4, rc3, rc5

c5: c4, c6
172.16.0.17: rc5, rc4, rc6

c6: c5, c7
172.16.0.21: rc6, rc5, rc7

c7: c6, c8
172.16.0.23: rc7, rc6, rc8

c8: c7, c1
172.16.0.24: rc8, rc7, rc1


@network
c1<->c2: 50 1
c2<->c3: 100 2
c3<->c4: 80 0.5
c4<->c5: 30 3
c5<->c6: 70 2
c6<->c7: 40 5
c7<->c8: 50 1
c1<->c8: 60 1.5
\end{verbatim}



\subsection{Star}
\label{sec:ex_star}

Star is another configuration where each cluster contains only one
node. In this configuration, one node is connected to all the other
nodes, e.g. $S_{star}^{inter}$ (Figure \ref{fig:grid_star}). The
corresponding configuration file is:
\begin{verbatim}
@topo
c1:  c2, c3, c4, c5, c6, c7, c8
172.16.0.10: rc1, rc2, rc3, rc4, rc5, rc6, rc7, rc8

c2: c1
172.16.0.12: rc2, rc1

c3: c1
172.16.0.13: rc3, rc1

c4: c1
172.16.0.14: rc4, rc1

c5: c1
172.16.0.17: rc5, rc1

c6: c1
172.16.0.21: rc6, rc1

c7: c1
172.16.0.23: rc7, rc1

c8: c1
172.16.0.24: rc8, rc1


@network 
c2<->c1: 100 2
c3<->c1: 80 0.5
c4<->c1: 30 3
c7<->c1: 50 1
c5<->c1: 70 2
c6<->c1: 60 4
c8<->c1: 60 1.5

\end{verbatim}


\begin{figure}[tbp]
  \centering
  \includegraphics[width=3.2in]{grids/star.pdf}
  \caption{$S_{Star}^{inter}$}
  \label{fig:grid_star}
\end{figure}


\chapter{Checking and monitoring}
\label{sec:monitoring}

\vlan\ is distributed with a number of tools to check and monitor
emulated heterogeneous computational grids. \vmap\ builds the
processor graph of a grid (emulated or not), and \showgrid graphically
displays it. This allows for quick and flexible assessment of the
topology and characteristics of a grid.

\section{Vmap}
\label{sec:vmap}

\vmap\ is a set of Perl and Python scripts that use iperf (versions
1.7 and patched 2.0.2 for Linux kernel 2.6.21) and nmap (version 4.20)
to determine the adjacency graph of given processors and build the
corresponding weighted processor graph. It can output processor graphs
with link bandwidth (measured for UDP, TCP or MPI), link latency and
absolute or relative processor performance benchmark values, as well
as in the format required by PaGridL.


\subsection{Synopsis}
\label{sec:vmap_syn}

\texttt{vmap.pl [-f filename] [-d] [-h] [-user username] [-l log
  filename] [--mpi] [--udp] [--tcp] [--cpu] [-n number of bits
  exchanged] [--pagrid] [--values] [--weights] [--fast] [<list of
  nodes>] }

\vmap\ without arguments writes the processor graph of \lids\ in the
PaGridL format (see the PaGridL User Guide for more information). One
has to be the root user to execute \vmap.

The option \emph{-f} takes as argument the name of a file containing
the list of nodes present in the grid to be analysed. For \lids, this
simple file looks like this:
\begin{verbatim}
lids:~/export_cluster/code/tools# cat lidshosts.txt
lids02 lids04 lids05 lids06 lids09 lids13 lids15 lids16
\end{verbatim}
The list of nodes can also be added at the end of the command line,
after all the options are given.

Several options control which characteristic(s) of the grid is (are)
to be measured and how. \emph{--mpi}, \emph{--udp} and \emph{--tcp}
specify the method used to measure the bandwidth between pairs of
nodes in the grid. \emph{--mpi} uses the custom \texttt{wrekaMPI}
program (requires a running MPI ring), and \emph{--udp} and
\emph{--tcp} measure the available bandwidth with Iperf for UDP and
TCP respectively (\texttt{chkUDP.pl}). \emph{--cpu} measures the time
required to compute $5000$ decimals of $\pi$ with the \texttt{pim2}
program (provided with Wrekavoc) for the user defined after
\emph{--user} (\texttt{chkCPU.pl}). If no user is given, then
\texttt{pim2} is executed with the euid of the current user
(root). \vmap\ always measures the latency (with ping).

\emph{--pagrid} (default mode) writes the processor graph in the
PaGridL format. It also sets the options \texttt{--mpi --cpu}. By
default, the created graph will have its second line set to \texttt{0
  1 0.01} ($R_{Ref}=0.01$ between nodes 0 and 1). The option
\emph{-n} can be set to reflect the number of bits exchanged between
two nodes in the grid for a given application ($n=1$ by
default). \emph{--values} writes the processor graph with the raw
measured values, while \emph{--weights} displays the same graphs but
with relative values (see examples below, Section
\ref{sec:monitoring_ex}). If the \emph{--fast} switch is set, then the
bandwidth and latency of each link are measured only once (for speed;
do not set this switch if you are checking the validity of an
emulation with \vlan).

The option \emph{-d} prints some debug information and \emph{-h}
prints a short help. \emph{-l} allows for specifying the name of the
logfile (\texttt{vmap.log} by default).


\subsection{Architecture}
\label{sec:vmap_arch}

In reality, \vmap\ is just a client that executes other programs to
measure the characteristics of the grid. These programs are completely
independent and more flexible, and as such, can be used independently
to get more detailed information about the current state of the cluster.


\subsubsection{pywrekamapd.py}
\label{sec:pywreka}


\texttt{pywrekamapd.py} gives neighbors, cpu and MPI information for
each node:
\begin{verbatim}
lids:~/export_cluster/code/tools# pywrekamapd.py -h
 
Basile Clout, September 2nd, 2007
pyWrekaMapd is a local deamon waiting for the client user to interrogate it
 Give:
     - direct neighbors: nmap
     - cpu relative power (pim2 10000)
     - latency and bandwidth with the nearest neighbors (using MPI: 
wreka_MPI)
 Interrogated by pyWrekaMapc.py, give the characteristics of the 
network corresponding to the local node

Command:
-h, --help Display this help
--cpu_on Enable cpu measurement
--cpu_off Disable cpu measurement
--set_user Give the user under which we measure cpu power
--net_on Enable network (bandwidth and latency) measurements
--net_off Disable network measurements
--set_list arg Set the list of available nodes in the cluster to arg
--set_nmap arg Give the absolute path to nmap
--set_nmapout arg Give the address of the nmap output
--set_wrekaMPI arg Give the absolute path to wreka_MPI
--set_pim2 arg Give the absolute path to pim2
--set_pim2size arg Determines the size of the pim2 test
--set_mpirun arg Give the absolute path to mpirun (to launch mpi)
--set_logfile arg Give the name of the logfile

Example:
lids04:~/wrekamap# ./pywrekamapd.py --net_off --cpu_off
node lids04 172.16.0.12 0 1 0 0
neighbors 172.16.0.12 172.16.0.14 172.16.0.17 172.16.0.10 
\end{verbatim}

\texttt{pywrekamapd.py} is executed on each node in the grid, with a
list of the nodes present in the grid. It provides 4 kinds of
information corresponding to the four types of lines \emph{node},
\emph{neighbors}, \emph{cpu} and \emph{to}. Only the \emph{node}
line is always displayed.

\texttt{pywrekamapd.py} first prints a line starting with
\texttt{node} giving the hostname and IP of the node on which it is
running, as well as how the program has been called. In the previous
example, \texttt{0 1 0 0} means that only the neighborhood information
is queried.

The second line gives an indication of the computational power of the
node for the processes belonging to a given user (root by
default). This value is the time required by \texttt{pim2} to compute
10000 decimals of $\pi$.

The third line lists the nodes of the grid directly connected to the
local node. \texttt{pywrekamapd.py} orders \texttt{nmap} to send an IP
packet with a Time-To-Live set to 0 to all the other nodes in the grid.
For example, the neighborhood test from lids02 for the heterogeneous
grid \full\ emulated on \lids\ confirms that lids02 has four direct
neighbors:
\begin{verbatim}
order sent: /usr/bin/nmap -send-ip -sP --ttl 0 lids02 lids04 lids05 
lids06 lids09 lids13 lids15 lids16 -oG /tmp/wmapd/nmap_out.txt
Output nmap:
Starting Nmap 4.20 ( http://insecure.org ) at 2008-02-12 14:40 AST
Host lids02 (172.16.0.10) appears to be up.
Host lids04 (172.16.0.12) appears to be up.
MAC Address: 00:10:64:30:51:5D (DNPG)
Host lids05 (172.16.0.13) appears to be up.
MAC Address: 00:10:64:30:8E:DB (DNPG)
Host lids06 (172.16.0.14) appears to be up.
MAC Address: 00:10:64:30:50:89 (DNPG)
Host lids09 (172.16.0.17) appears to be up.
MAC Address: 00:10:64:30:52:25 (DNPG)
Nmap finished: 8 IP addresses (5 hosts up) scanned in 1.091 seconds
\end{verbatim}

The following lines start with \texttt{to} and provide information on
the links between the local node and its neighbors (previously deduced
with \texttt{nmap}). This information is a simple parsing of the
\texttt{wrekaMPI} output. The IP and hostname of the neighbor at the
other end of the analysed link follow the keyword \texttt{to}. The
next four float numbers indicate the minimum, maximum, average and
standard deviation of the latency of the link. The last four values
are the minimum, maximum, average and standard deviation of the MPI
bandwidth between the two nodes. \texttt{wrekaMPI} is launched with
its default values:
\begin{verbatim}
lids02:~# /usr/local/bin/mpiexec -machinefile /home/basil/mpi/hosts.mf
-n 8 /home/basil/mpi/netMPI/wreka_MPI -h

wreka_MPI uses the netMPI library to evaluate the MPI's latency 
and bandwidth in the parallel network.

Usage: mpirun -np 4 --hostfile nodes wreka_MPI [OPTIONS]

Examples:
    mpirun -np 4 --hostfile lamhosts wreka_MPI --bsr -p network.log
    # Describes the network using the bidirectional MPISend/MPIRecv 
method and print the output in network.log.

Options:
         --sr            Use Monodirectional MPI Send/Recv.
         --bsr           Use Bidirectional MPI Send/Recv.
         --isr           Use Monodirectional MPI ISend/IRecv.
         --bisr          Use Bidirectional MPI ISend/IRecv.
-m,      --master                Node number of the master node 
(DEFAULT: 0)
-s,      --size                  Bit left switch defining the 
size of the test (in bytes) (DEFAULT: 22)
-k --skip                        Effectively skip the s first 
tests (DEFAULT: 3).
-n,      --tests                 Number of (effective) tests 
for one test size (DEFAULT: 10)
-r,      --requests              Number of parallel requests 
for an Asynchronous MPI ISend/IRecv (DEFAULT: 16).
-d,      --debug                         Print the complete 
list of measured values.
-l       --live                  Print the values as they are 
calculated.
-p,      --print                 Reroute the standard output.
-h,      --help                  Print this help


Example
: ex_MPI --isr -m 2 -k 2 -n 5 -s 21 -r 4 -p example.log
        Print the bandwidth and latency values in the network
 between node 2 (-m 2) and the other nodes of the cluster. 
Use the monodirectional MPI Send/Recv method, for datagram sizes 
between 1 << 21 (-b 21) and 1 << 22. For each test performed, 
average the value on 3 attempts (-n 3) abd skipping the 2 (-s 2) 
first values. If the test is a isr or bisr, use 4 parallel requests 
(-r 4)Output:
         config: master rank, number of tasks, method (0=SR, 2=ISR),
 bandwidth test's size (bytes), latency (bytes) test's size, 
total # of tests,  # of tests skipped, # of parallel requests 
for ISR, help?, debug?, live?
         tasks: task rank, hostname, IP address (ipv4)
         results: task number, avg latency (ms), min lat, max lat,
 stdev lat, avg bandwidth (Mbits/s), min bdw, max bdw, stdev bdw


lids02:~/config/code/tools# sudo -H -u root /usr/local/bin/mpiexec 
-machinefile /tmp/wmapd/hostfile.txt -n 8 /home/basil/mpi/netMPI/wreka_MPI -m 0
config
 0 8 2 2097152 1 5 2 4 0 0 0
tasks
0 lids02 172.16.0.10
1 lids04 172.16.0.12
2 lids05 172.16.0.13
3 lids06 172.16.0.14
4 lids09 172.16.0.17
5 lids13 172.16.0.21
6 lids15 172.16.0.23
7 lids16 172.16.0.24
results
0 1 10.75 10.74 10.75 0.01 73.92 73.87 73.95 0.04
0 2 10.59 10.27 10.75 0.28 73.86 73.85 73.86 0.00
0 3 10.75 10.75 10.76 0.01 73.95 73.95 73.95 0.00
0 4 100.67 100.66 100.68 0.01 9.17 9.17 9.17 0.00
0 5 121.19 121.19 121.20 0.01 9.10 9.10 9.10 0.00
0 6 121.19 121.19 121.20 0.00 9.10 9.10 9.10 0.00
0 7 121.19 121.19 121.19 0.00 9.10 9.10 9.10 0.00
\end{verbatim}

Below is a \texttt{pywrekamapd.py} run on lids02 while the grid
\full\ is emulated on \lids. lids02, the router of $C_1$, correctly has four
neighbors.
\begin{verbatim}
lids02:~/config/code/tools# ./pywrekamapd.py
node lids02 172.16.0.10 1 1 1 0
cpu 6.59103488922 root
neighbors 172.16.0.12 172.16.0.13 172.16.0.14 172.16.0.17 
to 172.16.0.12 lids04 10.75 10.74 10.75 0.01 73.92 73.87 73.95 0.04
to 172.16.0.13 lids05 10.59 10.27 10.75 0.28 73.86 73.85 73.86 0.00
to 172.16.0.14 lids06 10.75 10.75 10.76 0.01 73.95 73.95 73.95 0.00
to 172.16.0.17 lids09 100.67 100.66 100.68 0.01 9.17 9.17 9.17 0.00
\end{verbatim}
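The four line types can also be consumed programmatically. The minimal parser below is an illustration, not part of the \vmap\ distribution; the sample lines in the test are copied from the lids02 output above.

```python
# Illustrative sketch: parse the four line types (node, cpu, neighbors,
# to) emitted by pywrekamapd.py, as described in the text above.
def parse_wmapd(output):
    info = {"neighbors": [], "links": {}}
    for line in output.splitlines():
        fields = line.split()
        if not fields:
            continue
        if fields[0] == "node":
            info["hostname"], info["ip"] = fields[1], fields[2]
        elif fields[0] == "cpu":
            info["cpu"], info["cpu_user"] = float(fields[1]), fields[2]
        elif fields[0] == "neighbors":
            info["neighbors"] = fields[1:]
        elif fields[0] == "to":
            ip, host = fields[1], fields[2]
            vals = [float(v) for v in fields[3:]]
            # four latency values, then four bandwidth values
            info["links"][ip] = {"host": host,
                                 "latency": vals[:4],
                                 "bandwidth": vals[4:]}
    return info
```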



\subsubsection{chkCPU.pl}
\label{sec:chckCPU}

In order to measure the CPU performance of a node, \vmap\ does not
directly call \texttt{pywrekamapd.py} but in fact calls another
script, \texttt{chkCPU.pl}:
\begin{verbatim}
lids:~/export_cluster/code/tools# ./chkCPU.pl -h

Usage: ./chkCPU.pl [-f file containing the list of nodes] [-h print help and
exit] [-u user] [-l logfile] [--size size of pim2 test
(DEFAULT=1000)] [--min unit=1 goes for the minimum CPU] [--max unit 1
goes for the maximum (DEFAULT)] <list of nodes>
\end{verbatim}

\texttt{chkCPU.pl} simply runs the \texttt{pim2} program on all the
nodes in the grid and outputs the corresponding execution times. It can
also print the weights relative to the minimum or maximum execution
time. Below is an example for user basil on the emulated grid
\full\footnote{The weights are around 3.5 instead of 3 ($C_1$ has 0\%
  degradation and $C_2$ 66\%). This is because of the overhead
  involved when degrading the CPU performance. 0\% of degradation
  means that \texttt{cpulimd} is not run at all on the corresponding
  nodes in $C_1$. However, the degradation of the nodes in $C_2$
  includes the 66\% plus the overhead. This effect is explained in
  detail in the thesis.}.
\begin{verbatim}
lids:~/export_cluster/code/tools# ./chkCPU.pl -f lamhosts.txt -u basil
lids02 1.81 3.55
lids04 1.80 3.59
lids05 1.82 3.54
lids06 1.82 3.54
lids09 6.38 1.01
lids13 6.36 1.01
lids15 6.37 1.01
lids16 6.44 1.00
\end{verbatim}


\subsubsection{chkUDP.pl}
\label{sec:chkUDP}

\texttt{chkUDP.pl} measures the UDP and TCP available bandwidths
between all pairs of nodes in the grid:

\begin{verbatim}
lids:~/export_cluster/code/tools# ./chkUDP.pl -h
Usage: ./chkUDP.pl [-c list of nodes] [-f file
        containing nodes] [-t time] [-p protocol ("udp" or "tcp")]
        [--log log file] [--iperf_server path of iperf on the servers]
        [--iperf_client path of iperf on the client] [-r network
        protocol (ssh, rsh, ...)] [-u user] [-h this help] [--fast
        check only once per link (instead of 2)] 
Example:
        ./wrekaUDP.pl -p udp -t 3 --log mylog.log -f lamhosts.txt -r
        ssh -u basil
\end{verbatim}

\texttt{chkUDP.pl} runs \texttt{wrekaUDP.pl} on each
node. \texttt{wrekaUDP.pl} launches, monitors and correctly kills the
Iperf servers (daemons) on all the other nodes of the grid. It also
executes the iperf client on the local node and parses the
results. For example, on the node lids16 of the grid \full:
\begin{verbatim}
lids16:~# /root/config/code/tools/wrekaUDP.pl -h -c "lids02 lids04 
lids05 lids06 lids09 lids13 lids15 lids16"
Usage: ./wrekaUDP.pl [-c list of nodes] [-f file containing nodes] 
[-t time] [-p protocol ("udp" or "tcp")] [--log log file] 
[--iperf_server path of iperf on the servers] 
[--iperf_client path of iperf on the client] [-r network protocol 
(ssh, rsh, ...)] [-u user] [-h this help]
Example: ./wrekaUDP.pl -p udp -t 3 --log mylog.log -f lamhosts.txt 
-r ssh -u basil

lids16 172.16.0.24 8 udp 5
lids02 9.7 121.07
lids04 9.7 131.49
lids05 9.7 132.06
lids06 9.7 132.00
lids09 39.0 20.67
lids13 39.0 20.48
lids15 39.0 20.43
\end{verbatim}

\texttt{wrekaUDP.pl} makes use of two different versions of Iperf,
1.0.7 and 2.0.2 patched against the 2.6.21 Linux kernels. Indeed, both
have different bugs that get mitigated if version 2.0.2 is used as the
client of the 1.0.7 version. The server is run as a daemon on each
node and replies to the queries from the Iperf client. A ping test is
also performed to measure the latency. The time of measurement can be
set with the option \emph{-t} (useful for TCP).  

\texttt{chkUDP.pl} controls the measurements (\emph{--fast} option,
...), processes the various outputs of \texttt{wrekaUDP.pl} over the
network and outputs a nice table.

Below is an example of \texttt{chkUDP.pl} on \full:
\begin{verbatim}
lids:~/export_cluster/code/tools# ./chkUDP.pl -f lamhosts.txt -p udp
5 8 udp
        lids02  lids04  lids05  lids06  lids09  lids13  lids15  lids16
lids02     -     77.5    76.3    77.9    9.7     9.7     9.7     9.7
           -     10.68   10.66   10.58   100.63  121.12  121.08  121.05
lids04   77.9      -     77.9    77.9    9.7     9.7     9.7     9.7
         10.82     -     10.92   10.85   111.29  132.11  132.14  131.68
lids05   77.9    77.9      -     77.9    9.7     9.7     9.7     9.7
         10.50   10.77     -     10.61   111.30  131.75  131.75  131.92
lids06   76.2    76.2    77.9      -     9.7     9.7     9.7     9.7
         10.61   10.50   10.62     -     110.72  131.69  131.69  131.68
lids09   9.7     9.7     9.7     9.7       -     39.0    39.0    38.9
         100.45  111.35  111.28  110.79    -     20.43   20.68   20.43
lids13   9.7     9.7     9.7     9.7     38.9      -     38.9    38.9
         121.19  131.74  131.40  131.54  20.52     -     20.40   20.46
lids15   9.7     9.7     9.7     9.7     39.0    39.0      -     39.0
         121.28  132.02  131.48  132.01  20.69   20.68     -     20.50
lids16   9.7     9.7     9.7     9.7     39.0    39.0    39.0      -
         121.35  131.20  131.76  131.34  20.36   20.44   20.38     -
\end{verbatim}


\section{Showgrid}
\label{sec:showgrid}

\texttt{showgrid.py} displays a 3D (OpenGL) graphic representation of
a processor graph in one of the \vmap\ formats. It uses Graphviz for
the layout algorithm and the VTK (Visualization ToolKit) library for
the visualization part. \texttt{showgrid.py} can output pdf and png
representations in 2 dimensions (no 3D pdf yet ... Meshlab not usable
yet to create u3d from obj ...), as well as 3D formats such as VRML or
Wavefront OBJ files (zooming, rotating, ...). However, the interactive
visualization is the most complete, as \texttt{showgrid.py} can also
display the weights of the links and nodes. \texttt{showgrid.py} also
tries to determine the clusters of a grid by computing the fully
connected subgraphs of the grid. The diameter and color of the links
and the nodes correspond to their weight (small diameter = small
weight). \texttt{showgrid.py} uses \texttt{vmap2dot.pl} to translate a
\vmap\ file into a .dot file suitable for processing with the programs
of the Graphviz package.
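The exact \texttt{.dot} output of \texttt{vmap2dot.pl} is not reproduced in this guide. Purely as an illustration of the kind of Graphviz input it produces, the sketch below translates a weighted adjacency list (the link weights are taken from the Triangle example above; the attribute choices such as \texttt{penwidth} are assumptions):

```python
# Illustrative only: a minimal Graphviz (.dot) file built from an
# adjacency list with link weights (bandwidth in MBits/s).
def to_dot(links):
    lines = ["graph grid {"]
    for (a, b), bw in sorted(links.items()):
        # heavier links drawn thicker, as showgrid.py does with diameters
        lines.append(f'  "{a}" -- "{b}" [label="{bw}", penwidth={bw/20:.1f}];')
    lines.append("}")
    return "\n".join(lines)

# the three intercluster links of the Triangle configuration
dot = to_dot({("c1", "c2"): 50, ("c2", "c3"): 90, ("c3", "c1"): 70})
print(dot)
```

The resulting text can be laid out with one of the Graphviz programs, e.g. \texttt{neato -Tpdf}.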


\begin{verbatim}
student164:~/Desktop/repo/wcolor/vtk basileclout$ ./showgrid.py -h

Showgrid.py
Graphically represent a processor grid.

Renderer
-r gnuplot: Not yet implemented
-r vtk: VTK toolkit (DEFAULT)
-r graphviz: Direct graphviz output (with -Tpdf option). 
Good for testing purposes for example

Options:
-h/--help: this help
-o, --output: output filename prefix for non-interactive output)
-2, --2D: 2D Graphviz. With VTK, produces a flat 3D 
representation, suitable for some kind of topologies
-3, --3D: 3d Graphviz (DEFAULT)
-p, --pdf: pdf output
--png: png output (Internet format)
--vrml: VRML 2.0 (VRML97) output (VTK only)
--obj: OBJ Wavefront file format. VTK only 
-w, --weights (VTK): display link and processor weights (DEFAULT=OFF)
-s, --space arg: space between nodes in Graphviz algorithm (DEFAULT=5)  
-d, --debug: Print debug messages (DEFAULT=OFF)
--rsphere: sphere radius (DEFAULT=15)

Usage: ./showgrid.py [-r renderer] [--rsphere radius spheres] 
[-s, --space length of edges] [-2 2D representation] [-3 3D] 
[-d/--debug debug flag] [-o/--output output file] [-w print weights]
 [-h this help] <vmap file>

Examples:

With Graphviz
./showgrid.py -r graphviz -2 --png ./vmaps/vmap_triangle.out

With VTK
./showgrid.py -3 -w --vrml ./vmaps/vmap_triangle.out 

Basile Clout, November 2007
\end{verbatim}



\section{Examples}
\label{sec:monitoring_ex}

In this section, we present the \vmap\ processor graphs and the
showgrid representations (2D or 3D) of some computational grids.

\subsection{Homogeneous grid lids}
\label{sec:mon_lids}

\subsubsection{Processor graphs}
\label{sec:mon_lids_vmap}

\begin{tiny}
\begin{verbatim}
% pagrid's processor grid:
8 28
0 1 0.01
1 4 1 10322 1 1 10322 2 1 10322 3 1 10322 7 1 10322 6 1 10322 5 1 10322 
1 4 1 10322 5 1 10322 2 1 10322 3 1 10322 7 1 10322 6 1 10322 0 1 10322 
1 4 1 10322 1 1 10322 3 1 10322 7 1 10322 6 1 10322 5 1 10322 0 1 10322 
1 4 1 11261 1 1 10322 2 1 10322 7 1 10322 6 1 10322 5 1 10322 0 1 10322 
1 1 1 10322 2 1 10322 3 1 11261 7 1 10322 6 1 10322 5 1 10322 0 1 10322 
1 4 1 10322 1 1 10322 2 1 10322 3 1 10322 7 1 10322 6 1 11261 0 1 10322 
1 4 1 10322 1 1 10322 2 1 10322 3 1 10322 7 1 11261 0 1 10322 5 1 11261 
1 4 1 10322 1 1 10322 2 1 10322 3 1 10322 0 1 10322 5 1 10322 6 1 11261 

% fast 
% weights: 
lids02: 1 lids09 1 1 lids04 1 1 lids05 1 1 lids06 1 1 lids16 1 1 lids15 1 1 lids13 1 1 
lids04: 1 lids09 1 1 lids13 1 1 lids05 1 1 lids06 1 1 lids16 1 1 lids15 1 1 lids02 1 1 
lids05: 1 lids09 1 1 lids04 1 1 lids06 1 1 lids16 1 1 lids15 1 1 lids13 1 1 lids02 1 1 
lids06: 1 lids09 1 1 lids04 1 1 lids05 1 1 lids16 1 1 lids15 1 1 lids13 1 1 lids02 1 1 
lids09: 1 lids04 1 1 lids05 1 1 lids06 1 1 lids16 1 1 lids15 1 1 lids13 1 1 lids02 1 1 
lids13: 1 lids09 1 1 lids04 1 1 lids05 1 1 lids06 1 1 lids16 1 1 lids15 1 1 lids02 1 1 
lids15: 1 lids09 1 1 lids04 1 1 lids05 1 1 lids06 1 1 lids16 1 1 lids02 1 1 lids13 1 1 
lids16: 1 lids09 1 1 lids04 1 1 lids05 1 1 lids06 1 1 lids02 1 1 lids13 1 1 lids15 1 1 

% fast 
% values: 
lids02: 1.79 lids09 93.79 0.11 lids04 93.79 0.11 lids05 93.79 0.11 lids06 93.79 0.11 lids16 93.79 0.11 lids15 93.79 0.11 lids13 93.79 0.11 
lids04: 1.83 lids09 93.78 0.11 lids13 93.79 0.11 lids05 93.77 0.11 lids06 93.79 0.11 lids16 93.79 0.11 lids15 93.79 0.11 lids02 93.79 0.11 
lids05: 1.79 lids09 93.80 0.11 lids04 93.77 0.11 lids06 93.81 0.11 lids16 93.79 0.11 lids15 93.80 0.11 lids13 93.79 0.11 lids02 93.79 0.11 
lids06: 1.79 lids09 93.83 0.12 lids04 93.79 0.11 lids05 93.81 0.11 lids16 93.83 0.11 lids15 93.83 0.11 lids13 93.83 0.11 lids02 93.79 0.11 
lids09: 1.82 lids04 93.78 0.11 lids05 93.80 0.11 lids06 93.83 0.12 lids16 93.79 0.11 lids15 93.79 0.11 lids13 93.79 0.11 lids02 93.79 0.11 
lids13: 1.84 lids09 93.79 0.11 lids04 93.79 0.11 lids05 93.79 0.11 lids06 93.83 0.11 lids16 93.83 0.11 lids15 93.84 0.12 lids02 93.79 0.11 
lids15: 1.81 lids09 93.79 0.11 lids04 93.79 0.11 lids05 93.80 0.11 lids06 93.83 0.11 lids16 93.79 0.12 lids02 93.79 0.11 lids13 93.84 0.12 
lids16: 1.80 lids09 93.79 0.11 lids04 93.79 0.11 lids05 93.79 0.11 lids06 93.83 0.11 lids02 93.79 0.11 lids13 93.83 0.11 lids15 93.79 0.12 
\end{verbatim}
\end{tiny}
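The structure of these listings can be read mechanically. The sketch
below is a hypothetical reader for the \texttt{pagrid} processor-grid
format: the interpretation of each node line as a node weight followed
by (neighbour, weight, value) triples is inferred from the listings in
this guide and should be checked against the thesis, and the second
header line (\texttt{0 1 0.01}) is skipped because its meaning cannot
be recovered from the examples alone.

```python
# Hypothetical parser for the pagrid processor-grid listings.
# Field semantics are inferred from the examples in this guide:
#   line 1: "<n_nodes> <n_edges>"
#   line 2: skipped (meaning not recoverable from the listings)
#   then one line per node: <node weight> followed by
#   (<neighbour index> <edge weight> <edge value>) triples.

def parse_pagrid(text):
    lines = [l for l in text.splitlines()
             if l.strip() and not l.lstrip().startswith("%")]
    n_nodes, n_edges = map(int, lines[0].split())
    nodes = []
    for line in lines[2:2 + n_nodes]:      # skip the "0 1 0.01" line
        fields = line.split()
        weight = int(fields[0])
        links = [(int(fields[i]), int(fields[i + 1]), int(fields[i + 2]))
                 for i in range(1, len(fields), 3)]
        nodes.append({"weight": weight, "links": links})
    # Each undirected edge appears once per endpoint.
    assert sum(len(node["links"]) for node in nodes) == 2 * n_edges
    return n_nodes, n_edges, nodes

# A tiny synthetic 3-node triangle in the same layout.
sample = """3 3
0 1 0.01
1 1 1 100 2 1 100
1 0 1 100 2 1 100
1 0 1 100 1 1 100
"""
n, m, nodes = parse_pagrid(sample)
```

Note that the edge-count check holds for the listings above: the
homogeneous grid (\texttt{8 28}) lists 7 neighbours per node, and
$8 \times 7 = 56 = 2 \times 28$.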


\subsubsection{Showgrid representation}
\label{sec:mon_lids_showgrid}

\begin{figure}[htbp]
  \centering  
  \includegraphics[width=4.2in]{lids.png} 
  \caption{lids in showgrid.py}
  \label{fig:lids_showgrid}
\end{figure}


\subsection{Emulated grids Cano}
\label{sec:mon_cano}


The grids with the Cano configuration are made of two clusters of four
nodes each, connected by a single intercluster link. This
configuration is used extensively in the thesis to analyse the
performance of mesh partitioners on grids with various kinds of
heterogeneity.
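As a quick sanity check on the listings that follow: two fully
connected four-node clusters plus one intercluster link give
$2\binom{4}{2}+1=13$ edges, which matches the \texttt{8 13} header of
the processor-grid files.

```python
from math import comb

# Edge count of the Cano configuration: two fully connected 4-node
# clusters joined by a single intercluster link.
nodes_per_cluster = 4
intra = 2 * comb(nodes_per_cluster, 2)   # 2 * 6 = 12 intracluster edges
edges = intra + 1                        # plus one intercluster link
# edges == 13, matching the "8 13" header of the grid files.
```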

\subsubsection{Grid \cano}

\begin{verbatim}
lids:~/export_cluster/code/tools# ./vmap.pl -f lamhosts.txt --pagrid --weights --values -u basil --fast
% pagrid's processor grid:
8 13
0 1 0.01
1 1 1 10322 2 1 10322 3 1 10322 4 1 10322 
1 2 1 11261 3 1 10322 0 1 10322 
1 1 1 11261 3 1 10322 0 1 10322 
1 1 1 10322 2 1 10322 0 1 10322 
1 5 1 10322 6 1 10322 0 1 10322 7 1 10322 
1 4 1 10322 6 1 11261 7 1 10322 
1 5 1 11261 4 1 10322 7 1 11261 
1 5 1 10322 4 1 10322 6 1 11261 

% fast 
% weights: 
lids02: 1 lids04 1 1 lids05 1 1 lids06 1 1 lids09 1 1 
lids04: 1 lids05 1 1 lids06 1 1 lids02 1 1 
lids05: 1 lids04 1 1 lids06 1 1 lids02 1 1 
lids06: 1 lids04 1 1 lids05 1 1 lids02 1 1 
lids09: 1 lids13 1 1 lids15 1 1 lids02 1 1 lids16 1 1 
lids13: 1 lids09 1 1 lids15 1 1 lids16 1 1 
lids15: 1 lids13 1 1 lids09 1 1 lids16 1 1 
lids16: 1 lids13 1 1 lids09 1 1 lids15 1 1 

% fast 
% values: 
lids02: 1.85 lids04 93.79 0.11 lids05 93.79 0.11 lids06 93.79 0.11 lids09 93.79 0.11 
lids04: 1.81 lids05 93.79 0.12 lids06 93.78 0.11 lids02 93.79 0.11 
lids05: 1.82 lids04 93.79 0.12 lids06 93.79 0.11 lids02 93.79 0.11 
lids06: 1.83 lids04 93.78 0.11 lids05 93.79 0.11 lids02 93.79 0.11 
lids09: 1.82 lids13 93.79 0.11 lids15 93.79 0.11 lids02 93.79 0.11 lids16 93.79 0.11 
lids13: 1.83 lids09 93.79 0.11 lids15 93.84 0.12 lids16 93.83 0.11 
lids15: 1.84 lids13 93.84 0.12 lids09 93.79 0.11 lids16 93.78 0.12 
lids16: 1.85 lids13 93.83 0.11 lids09 93.79 0.11 lids15 93.78 0.12 
\end{verbatim}

\begin{figure}[htbp]
  \centering  
  \includegraphics[width=4.2in]{cano.png} 
  \caption{\cano\ in showgrid.py (3D output)}
  \label{fig:cano_showgrid}
\end{figure}

\begin{figure}[htbp]
  \centering  
  \includegraphics[width=4.2in]{cano_2d.pdf} 
  \caption{\cano\ in showgrid.py (2D output)}
  \label{fig:lids_showgrid_2d}
\end{figure}


\subsubsection{Grid \cpu}

\begin{verbatim}
% pagrid's processor grid:
8 13
0 1 0.01
5 1 1 10317 2 1 10317 3 1 12193 4 1 10317
5 2 1 11255 3 1 11255 0 1 10317
5 1 1 11255 3 1 11255 0 1 10317
5 1 1 11255 2 1 11255 0 1 12193
1 5 1 10317 6 1 11255 0 1 10317 7 1 10317
1 4 1 10317 6 1 11255 7 1 10317
1 5 1 11255 4 1 11255 7 1 12193
1 5 1 10317 4 1 10317 6 1 12193


% weights:
lids02: 5 lids04 1 1 lids05 1 1 lids06 1 1 lids09 1 1
lids04: 5 lids05 1 1 lids06 1 1 lids02 1 1
lids05: 5 lids04 1 1 lids06 1 1 lids02 1 1
lids06: 5 lids04 1 1 lids05 1 1 lids02 1 1
lids09: 1 lids13 1 1 lids15 1 1 lids02 1 1 lids16 1 1
lids13: 1 lids09 1 1 lids15 1 1 lids16 1 1
lids15: 1 lids13 1 1 lids09 1 1 lids16 1 1
lids16: 1 lids13 1 1 lids09 1 1 lids15 1 1


% values:
lids02: 2.06 lids04 93.71 0.11 lids05 93.74 0.11 lids06 93.74 0.13 lids09 93.71 0.11
lids04: 2.09 lids05 93.74 0.12 lids06 93.76 0.12 lids02 93.71 0.11
lids05: 2.17 lids04 93.74 0.12 lids06 93.73 0.12 lids02 93.74 0.11
lids06: 2.08 lids04 93.76 0.12 lids05 93.73 0.12 lids02 93.74 0.13
lids09: 10.48 lids13 93.74 0.11 lids15 93.71 0.12 lids02 93.71 0.11 lids16 93.67 0.11
lids13: 10.52 lids09 93.74 0.11 lids15 93.71 0.12 lids16 93.67 0.11
lids15: 10.48 lids13 93.71 0.12 lids09 93.71 0.12 lids16 93.79 0.13
lids16: 10.63 lids13 93.67 0.11 lids09 93.67 0.11 lids15 93.79 0.13
\end{verbatim}

\begin{figure}[htbp]
  \centering  
  \includegraphics[width=4.2in]{cpu.png} 
  \caption{\cpu\ in showgrid.py}
  \label{fig:cpu_showgrid}
\end{figure}

\subsubsection{Grid \bdw}

\begin{verbatim}
% pagrid's processor grid:
8 13
0 1 0.01
1 1 1 10321 2 1 10321 3 1 10321 4 10 11260
1 2 1 10321 3 1 10321 0 1 10321
1 1 1 10321 3 1 10321 0 1 10321
1 1 1 10321 2 1 10321 0 1 10321
1 5 1 10321 6 1 10321 0 10 11260 7 1 10321
1 4 1 10321 6 1 10321 7 1 10321
1 5 1 10321 4 1 10321 7 1 11260
1 5 1 10321 4 1 10321 6 1 11260


% weights:
lids02: 1 lids04 10 1 lids05 10 1 lids06 10 1 lids09 1 1
lids04: 1 lids05 10 1 lids06 10 1 lids02 10 1
lids05: 1 lids04 10 1 lids06 10 1 lids02 10 1
lids06: 1 lids04 10 1 lids05 10 1 lids02 10 1
lids09: 1 lids13 10 1 lids15 10 1 lids02 1 1 lids16 10 1
lids13: 1 lids09 10 1 lids15 10 1 lids16 10 1
lids15: 1 lids13 10 1 lids09 10 1 lids16 10 1
lids16: 1 lids13 10 1 lids09 10 1 lids15 10 1


% values:
lids02: 1.81 lids04 93.79 0.11 lids05 93.79 0.11 lids06 93.79 0.11 lids09 9.57 0.12
lids04: 1.83 lids05 93.79 0.11 lids06 93.79 0.11 lids02 93.79 0.11
lids05: 1.84 lids04 93.79 0.11 lids06 93.79 0.11 lids02 93.79 0.11
lids06: 1.80 lids04 93.79 0.11 lids05 93.79 0.11 lids02 93.79 0.11
lids09: 1.81 lids13 93.79 0.11 lids15 93.80 0.11 lids02 9.57 0.12 lids16 93.79 0.11
lids13: 1.79 lids09 93.79 0.11 lids15 93.83 0.11 lids16 93.81 0.11
lids15: 1.85 lids13 93.83 0.11 lids09 93.80 0.11 lids16 93.79 0.12
lids16: 1.83 lids13 93.81 0.11 lids09 93.79 0.11 lids15 93.79 0.12
\end{verbatim}

\begin{figure}[htbp]
  \centering  
  \includegraphics[width=4.2in]{bdw.png} 
  \caption{\bdw\ in showgrid.py}
  \label{fig:bdw_showgrid}
\end{figure}

\subsubsection{Grid \lat}

\begin{verbatim}
% pagrid's processor grid:
8 13
0 1 0.01
1 1 1 10321 2 1 10321 3 1 10321 4 1 1008672
1 2 1 10321 3 1 10321 0 1 10321
1 1 1 10321 3 1 10321 0 1 10321
1 1 1 10321 2 1 10321 0 1 10321
1 5 1 11260 6 1 11260 0 1 1008672 7 1 10321
1 4 1 11260 6 1 11260 7 1 10321
1 5 1 11260 4 1 11260 7 1 11260
1 5 1 10321 4 1 10321 6 1 11260


% weights:
lids02: 1 lids04 1 1 lids05 1 1 lids06 1 1 lids09 1 98
lids04: 1 lids05 1 1 lids06 1 1 lids02 1 1
lids05: 1 lids04 1 1 lids06 1 1 lids02 1 1
lids06: 1 lids04 1 1 lids05 1 1 lids02 1 1
lids09: 1 lids13 1 1 lids15 1 1 lids02 1 98 lids16 1 1
lids13: 1 lids09 1 1 lids15 1 1 lids16 1 1
lids15: 1 lids13 1 1 lids09 1 1 lids16 1 1
lids16: 1 lids13 1 1 lids09 1 1 lids15 1 1


% values:
lids02: 1.81 lids04 93.79 0.11 lids05 93.78 0.11 lids06 93.75 0.11 lids09 89.85 10.75
lids04: 1.82 lids05 93.79 0.11 lids06 93.79 0.11 lids02 93.79 0.11
lids05: 1.79 lids04 93.79 0.11 lids06 93.79 0.11 lids02 93.78 0.11
lids06: 1.81 lids04 93.79 0.11 lids05 93.79 0.11 lids02 93.75 0.11
lids09: 1.84 lids13 93.79 0.12 lids15 93.79 0.12 lids02 89.85 10.75 lids16 93.79 0.11
lids13: 1.83 lids09 93.79 0.12 lids15 93.83 0.12 lids16 93.83 0.11
lids15: 1.82 lids13 93.83 0.12 lids09 93.79 0.12 lids16 93.78 0.12
lids16: 1.81 lids13 93.83 0.11 lids09 93.79 0.11 lids15 93.78 0.12
\end{verbatim}

\begin{figure}[htbp]
  \centering  
  \includegraphics[width=4.2in]{lat.png} 
  \caption{\lat\ in showgrid.py}
  \label{fig:lat_showgrid}
\end{figure}


\subsubsection{Grid \full}
\begin{verbatim}
% pagrid's processor grid:
8 13
0 1 0.01
4 1 1 757350 2 1 758830 3 1 795070 4 8 7444814
4 2 1 794330 3 1 795070 0 1 757350
4 1 1 794330 3 1 794330 0 1 758830
4 1 1 795070 2 1 794330 0 1 795070
1 5 2 1509524 6 2 1517659 0 8 7444814 7 2 1516180
1 4 2 1509524 6 2 1553160 7 2 1516920
1 5 2 1553160 4 2 1517659 7 2 1517659
1 5 2 1516920 4 2 1516180 6 2 1517659


% weights:
lids02: 4 lids04 8 1 lids05 8 1 lids06 8 1 lids09 1 10
lids04: 4 lids05 8 1 lids06 8 1 lids02 8 1
lids05: 4 lids04 8 1 lids06 8 1 lids02 8 1
lids06: 4 lids04 8 1 lids05 8 1 lids02 8 1
lids09: 1 lids13 4 2 lids15 4 2 lids02 1 10 lids16 4 2
lids13: 1 lids09 4 2 lids15 4 2 lids16 4 2
lids15: 1 lids13 4 2 lids09 4 2 lids16 4 2
lids16: 1 lids13 4 2 lids09 4 2 lids15 4 2


% values:
lids02: 1.83 lids04 73.96 10.24 lids05 73.94 10.26 lids06 73.87 10.75 lids09 9.17 100.66
lids04: 1.85 lids05 73.87 10.74 lids06 73.87 10.75 lids02 73.96 10.24
lids05: 1.82 lids04 73.87 10.74 lids06 73.95 10.74 lids02 73.94 10.26
lids06: 1.80 lids04 73.87 10.75 lids05 73.95 10.74 lids02 73.87 10.75
lids09: 6.34 lids13 37.00 20.41 lids15 37.00 20.52 lids02 9.17 100.66 lids16 36.99 20.50
lids13: 6.32 lids09 37.00 20.41 lids15 36.98 21.00 lids16 37.00 20.51
lids15: 6.51 lids13 36.98 21.00 lids09 37.00 20.52 lids16 36.99 20.52
lids16: 6.40 lids13 37.00 20.51 lids09 36.99 20.50 lids15 36.99 20.52
\end{verbatim}

\begin{figure}[htbp]
  \centering  
  \includegraphics[width=4.2in]{full.png} 
  \caption{\full\ in showgrid.py}
  \label{fig:full_showgrid}
\end{figure}


\subsection{Various topologies}
\label{sec:mon_topo}

The emulation of these topologies was presented in the previous
section. Although not realistic, they illustrate the power and
flexibility of \vlan\ and the monitoring tools.

\subsubsection{Grid $S_{Cir}^{inter}$}
\label{sec:mon_circle}

\begin{verbatim}
% pagrid's processor grid:
8 8
0 1 0.01
1 1 2 133848 7 2 181252
1 2 1 226798 0 2 133848
1 1 1 226798 3 1 135707
1 2 1 135707 4 3 363434
1 5 1 226798 3 3 363434
1 4 1 226798 6 2 543757
1 5 2 543757 7 2 135707
1 6 2 135707 0 2 181252


% weights:
lids02: 1 lids04 2 1 lids16 2 1
lids04: 1 lids05 3 2 lids02 2 1
lids05: 1 lids04 3 2 lids06 3 1
lids06: 1 lids05 3 1 lids09 1 3
lids09: 1 lids13 2 2 lids06 1 3
lids13: 1 lids09 2 2 lids15 1 4
lids15: 1 lids13 1 4 lids16 2 1
lids16: 1 lids15 2 1 lids02 2 1


% values:
lids02: 1.80 lids04 47.87 1.44 lids16 57.25 1.95
lids04: 1.79 lids05 92.95 2.44 lids02 47.87 1.44
lids05: 1.82 lids04 92.95 2.44 lids06 76.30 1.46
lids06: 1.83 lids05 76.30 1.46 lids09 28.59 3.91
lids09: 1.81 lids13 66.76 2.44 lids06 28.59 3.91
lids13: 1.80 lids09 66.76 2.44 lids15 37.96 5.85
lids15: 1.81 lids13 37.96 5.85 lids16 47.89 1.46
lids16: 1.80 lids15 47.89 1.46 lids02 57.25 1.95
\end{verbatim}

\begin{figure}[htbp]
  \centering  
  \includegraphics[width=4.2in]{circle.png} 
  \caption{circle1 in showgrid.py}
  \label{fig:circle_showgrid}
\end{figure}


\subsubsection{Grid $S_T^{inter}$}
\label{sec:triangle}

\begin{verbatim}
% pagrid's processor grid:
8 8
0 1 0.01
1 1 2 133848 7 2 181252
1 2 1 226798 0 2 133848
1 1 1 226798 3 1 135707
1 2 1 135707 4 3 363434
1 5 1 226798 3 3 363434
1 4 1 226798 6 2 543757
1 5 2 543757 7 2 135707
1 6 2 135707 0 2 181252


% weights:
lids02: 1 lids04 2 1 lids16 2 1
lids04: 1 lids05 3 2 lids02 2 1
lids05: 1 lids04 3 2 lids06 3 1
lids06: 1 lids05 3 1 lids09 1 3
lids09: 1 lids13 2 2 lids06 1 3
lids13: 1 lids09 2 2 lids15 1 4
lids15: 1 lids13 1 4 lids16 2 1
lids16: 1 lids15 2 1 lids02 2 1


% values:
lids02: 1.80 lids04 47.87 1.44 lids16 57.25 1.95
lids04: 1.79 lids05 92.95 2.44 lids02 47.87 1.44
lids05: 1.82 lids04 92.95 2.44 lids06 76.30 1.46
lids06: 1.83 lids05 76.30 1.46 lids09 28.59 3.91
lids09: 1.81 lids13 66.76 2.44 lids06 28.59 3.91
lids13: 1.80 lids09 66.76 2.44 lids15 37.96 5.85
lids15: 1.81 lids13 37.96 5.85 lids16 47.89 1.46
lids16: 1.80 lids15 47.89 1.46 lids02 57.25 1.95
\end{verbatim}

\begin{figure}[htbp]
  \centering  
  \includegraphics[width=4.2in]{triangle.png} 
  \caption{triangle in showgrid.py}
  \label{fig:triangle_showgrid}
\end{figure}

\subsubsection{Grid $S_{Star}^{inter}$}
\label{sec:star}

\begin{verbatim}
8 7
0 1 0.01
1 4 1 226505 1 1 226505 2 1 136460 3 3 317479 7 2 181018 6 2 181018 5 2 453010
1 0 1 226505
1 0 1 136460
1 0 3 317479
1 0 1 226505
1 0 2 453010
1 0 2 181018
1 0 2 181018


% weights:
lids02: 1 lids09 2 2 lids04 3 2 lids05 3 1 lids06 1 2 lids16 2 1 lids15 2 1 lids13 2 3
lids04: 1 lids02 3 2
lids05: 1 lids02 3 1
lids06: 1 lids02 1 2
lids09: 1 lids02 2 2
lids13: 1 lids02 2 3
lids15: 1 lids02 2 1
lids16: 1 lids02 2 1


% values:
lids02: 1.82 lids09 66.78 2.44 lids04 92.83 2.44 lids05 76.38 1.47 lids06 28.58 3.42 lids16 57.25 1.95 lids15 47.84 1.95 lids13 56.89 4.88
lids04: 1.81 lids02 92.83 2.44
lids05: 1.78 lids02 76.38 1.47
lids06: 1.81 lids02 28.58 3.42
lids09: 1.81 lids02 66.78 2.44
lids13: 1.79 lids02 56.89 4.88
lids15: 1.82 lids02 47.84 1.95
lids16: 1.79 lids02 57.25 1.95
\end{verbatim}

\begin{figure}[htbp]
  \centering  
  \includegraphics[width=4.2in]{star.png} 
  \caption{star in showgrid.py}
  \label{fig:star_showgrid}
\end{figure}





\appendix
\chapter{Kernel configuration}

\begin{verbatim}

#
# Networking
#
CONFIG_NET=y

#
# Networking options
#
# CONFIG_NETDEBUG is not set
CONFIG_PACKET=y
# CONFIG_PACKET_MMAP is not set
CONFIG_UNIX=y
CONFIG_XFRM=y
CONFIG_XFRM_USER=m
CONFIG_XFRM_SUB_POLICY=y
# CONFIG_XFRM_MIGRATE is not set
CONFIG_NET_KEY=m
# CONFIG_NET_KEY_MIGRATE is not set
CONFIG_INET=y
CONFIG_IP_MULTICAST=y
CONFIG_IP_ADVANCED_ROUTER=y
CONFIG_ASK_IP_FIB_HASH=y
# CONFIG_IP_FIB_TRIE is not set
CONFIG_IP_FIB_HASH=y
CONFIG_IP_MULTIPLE_TABLES=y
# CONFIG_IP_ROUTE_MULTIPATH is not set
CONFIG_IP_ROUTE_VERBOSE=y
# CONFIG_IP_PNP is not set
# CONFIG_NET_IPIP is not set
# CONFIG_NET_IPGRE is not set
# CONFIG_IP_MROUTE is not set
# CONFIG_ARPD is not set
# CONFIG_SYN_COOKIES is not set
CONFIG_INET_AH=m
CONFIG_INET_ESP=m
# CONFIG_INET_IPCOMP is not set
# CONFIG_INET_XFRM_TUNNEL is not set
# CONFIG_INET_TUNNEL is not set
CONFIG_INET_XFRM_MODE_TRANSPORT=y
CONFIG_INET_XFRM_MODE_TUNNEL=y
CONFIG_INET_XFRM_MODE_BEET=y
CONFIG_INET_DIAG=y
CONFIG_INET_TCP_DIAG=y
CONFIG_TCP_CONG_ADVANCED=y
CONFIG_TCP_CONG_BIC=m
CONFIG_TCP_CONG_CUBIC=y
CONFIG_TCP_CONG_WESTWOOD=m
CONFIG_TCP_CONG_HTCP=m
# CONFIG_TCP_CONG_HSTCP is not set
# CONFIG_TCP_CONG_HYBLA is not set
# CONFIG_TCP_CONG_VEGAS is not set
# CONFIG_TCP_CONG_SCALABLE is not set
# CONFIG_TCP_CONG_LP is not set
# CONFIG_TCP_CONG_VENO is not set
# CONFIG_DEFAULT_BIC is not set
CONFIG_DEFAULT_CUBIC=y
# CONFIG_DEFAULT_HTCP is not set
# CONFIG_DEFAULT_VEGAS is not set
# CONFIG_DEFAULT_WESTWOOD is not set
# CONFIG_DEFAULT_RENO is not set
CONFIG_DEFAULT_TCP_CONG="cubic"
# CONFIG_TCP_MD5SIG is not set

#
# IP: Virtual Server Configuration
#
# CONFIG_IP_VS is not set
# CONFIG_IPV6 is not set
# CONFIG_INET6_XFRM_TUNNEL is not set
# CONFIG_INET6_TUNNEL is not set
# CONFIG_NETWORK_SECMARK is not set
CONFIG_NETFILTER=y
CONFIG_NETFILTER_DEBUG=y

#
# Core Netfilter Configuration
#
CONFIG_NETFILTER_NETLINK=y
CONFIG_NETFILTER_NETLINK_QUEUE=y
CONFIG_NETFILTER_NETLINK_LOG=y
CONFIG_NF_CONNTRACK_ENABLED=y
CONFIG_NF_CONNTRACK_SUPPORT=y
# CONFIG_IP_NF_CONNTRACK_SUPPORT is not set
CONFIG_NF_CONNTRACK=y
CONFIG_NF_CT_ACCT=y
CONFIG_NF_CONNTRACK_MARK=y
CONFIG_NF_CONNTRACK_EVENTS=y
# CONFIG_NF_CT_PROTO_SCTP is not set
# CONFIG_NF_CONNTRACK_AMANDA is not set
# CONFIG_NF_CONNTRACK_FTP is not set
# CONFIG_NF_CONNTRACK_H323 is not set
# CONFIG_NF_CONNTRACK_IRC is not set
# CONFIG_NF_CONNTRACK_NETBIOS_NS is not set
# CONFIG_NF_CONNTRACK_PPTP is not set
# CONFIG_NF_CONNTRACK_SANE is not set
# CONFIG_NF_CONNTRACK_SIP is not set
CONFIG_NF_CONNTRACK_TFTP=m
CONFIG_NF_CT_NETLINK=m
CONFIG_NETFILTER_XTABLES=y
CONFIG_NETFILTER_XT_TARGET_CLASSIFY=y
CONFIG_NETFILTER_XT_TARGET_CONNMARK=m
CONFIG_NETFILTER_XT_TARGET_DSCP=m
CONFIG_NETFILTER_XT_TARGET_MARK=y
CONFIG_NETFILTER_XT_TARGET_NFQUEUE=y
CONFIG_NETFILTER_XT_TARGET_NFLOG=y
CONFIG_NETFILTER_XT_TARGET_NOTRACK=m
CONFIG_NETFILTER_XT_TARGET_TCPMSS=y
CONFIG_NETFILTER_XT_MATCH_COMMENT=y
CONFIG_NETFILTER_XT_MATCH_CONNBYTES=y
CONFIG_NETFILTER_XT_MATCH_CONNMARK=y
CONFIG_NETFILTER_XT_MATCH_CONNTRACK=y
# CONFIG_NETFILTER_XT_MATCH_DCCP is not set
# CONFIG_NETFILTER_XT_MATCH_DSCP is not set
# CONFIG_NETFILTER_XT_MATCH_ESP is not set
CONFIG_NETFILTER_XT_MATCH_HELPER=y
CONFIG_NETFILTER_XT_MATCH_LENGTH=y
CONFIG_NETFILTER_XT_MATCH_LIMIT=y
CONFIG_NETFILTER_XT_MATCH_MAC=y
CONFIG_NETFILTER_XT_MATCH_MARK=y
CONFIG_NETFILTER_XT_MATCH_POLICY=y
CONFIG_NETFILTER_XT_MATCH_MULTIPORT=y
CONFIG_NETFILTER_XT_MATCH_PKTTYPE=y
CONFIG_NETFILTER_XT_MATCH_QUOTA=y
CONFIG_NETFILTER_XT_MATCH_REALM=m
CONFIG_NETFILTER_XT_MATCH_SCTP=m
CONFIG_NETFILTER_XT_MATCH_STATE=y
CONFIG_NETFILTER_XT_MATCH_STATISTIC=y
CONFIG_NETFILTER_XT_MATCH_STRING=y
CONFIG_NETFILTER_XT_MATCH_TCPMSS=y
CONFIG_NETFILTER_XT_MATCH_HASHLIMIT=y

#
# IP: Netfilter Configuration
#
CONFIG_NF_CONNTRACK_IPV4=y
CONFIG_NF_CONNTRACK_PROC_COMPAT=y
CONFIG_IP_NF_QUEUE=m
CONFIG_IP_NF_IPTABLES=y
CONFIG_IP_NF_MATCH_IPRANGE=m
CONFIG_IP_NF_MATCH_TOS=m
CONFIG_IP_NF_MATCH_RECENT=m
CONFIG_IP_NF_MATCH_ECN=m
CONFIG_IP_NF_MATCH_AH=m
CONFIG_IP_NF_MATCH_TTL=m
CONFIG_IP_NF_MATCH_OWNER=m
CONFIG_IP_NF_MATCH_ADDRTYPE=m
CONFIG_IP_NF_FILTER=m
CONFIG_IP_NF_TARGET_REJECT=m
CONFIG_IP_NF_TARGET_LOG=m
CONFIG_IP_NF_TARGET_ULOG=m
CONFIG_NF_NAT=y
CONFIG_NF_NAT_NEEDED=y
CONFIG_IP_NF_TARGET_MASQUERADE=y
CONFIG_IP_NF_TARGET_REDIRECT=y
CONFIG_IP_NF_TARGET_NETMAP=y
CONFIG_IP_NF_TARGET_SAME=y
# CONFIG_NF_NAT_SNMP_BASIC is not set
# CONFIG_NF_NAT_FTP is not set
# CONFIG_NF_NAT_IRC is not set
CONFIG_NF_NAT_TFTP=m
# CONFIG_NF_NAT_AMANDA is not set
# CONFIG_NF_NAT_PPTP is not set
# CONFIG_NF_NAT_H323 is not set
# CONFIG_NF_NAT_SIP is not set
CONFIG_IP_NF_MANGLE=m
CONFIG_IP_NF_TARGET_TOS=m
# CONFIG_IP_NF_TARGET_ECN is not set
# CONFIG_IP_NF_TARGET_TTL is not set
CONFIG_IP_NF_TARGET_CLUSTERIP=m
CONFIG_IP_NF_RAW=m
CONFIG_IP_NF_ARPTABLES=y
CONFIG_IP_NF_ARPFILTER=y
CONFIG_IP_NF_ARP_MANGLE=y

#
# DCCP Configuration (EXPERIMENTAL)
#
# CONFIG_IP_DCCP is not set

#
# SCTP Configuration (EXPERIMENTAL)
#
# CONFIG_IP_SCTP is not set

#
# TIPC Configuration (EXPERIMENTAL)
#
# CONFIG_TIPC is not set
# CONFIG_ATM is not set
# CONFIG_BRIDGE is not set
CONFIG_VLAN_8021Q=m
# CONFIG_DECNET is not set
# CONFIG_LLC2 is not set
# CONFIG_IPX is not set
# CONFIG_ATALK is not set
# CONFIG_X25 is not set
# CONFIG_LAPB is not set
# CONFIG_ECONET is not set
# CONFIG_WAN_ROUTER is not set

#
# QoS and/or fair queueing
#
CONFIG_NET_SCHED=y
CONFIG_NET_SCH_FIFO=y
# CONFIG_NET_SCH_CLK_JIFFIES is not set
CONFIG_NET_SCH_CLK_GETTIMEOFDAY=y
# CONFIG_NET_SCH_CLK_CPU is not set

#
# Queueing/Scheduling
#
CONFIG_NET_SCH_CBQ=y
CONFIG_NET_SCH_HTB=y
CONFIG_NET_SCH_HFSC=y
CONFIG_NET_SCH_PRIO=y
CONFIG_NET_SCH_RED=y
CONFIG_NET_SCH_SFQ=y
CONFIG_NET_SCH_TEQL=y
CONFIG_NET_SCH_TBF=y
CONFIG_NET_SCH_GRED=y
CONFIG_NET_SCH_DSMARK=y
CONFIG_NET_SCH_NETEM=y
CONFIG_NET_SCH_INGRESS=y

#
# Classification
#
CONFIG_NET_CLS=y
CONFIG_NET_CLS_BASIC=y
CONFIG_NET_CLS_TCINDEX=y
CONFIG_NET_CLS_ROUTE4=y
CONFIG_NET_CLS_ROUTE=y
CONFIG_NET_CLS_FW=y
CONFIG_NET_CLS_U32=y
CONFIG_CLS_U32_PERF=y
CONFIG_CLS_U32_MARK=y
CONFIG_NET_CLS_RSVP=y
CONFIG_NET_CLS_RSVP6=y
CONFIG_NET_EMATCH=y
CONFIG_NET_EMATCH_STACK=32
CONFIG_NET_EMATCH_CMP=y
CONFIG_NET_EMATCH_NBYTE=y
CONFIG_NET_EMATCH_U32=y
CONFIG_NET_EMATCH_META=y
CONFIG_NET_EMATCH_TEXT=y
CONFIG_NET_CLS_ACT=y
CONFIG_NET_ACT_POLICE=y
CONFIG_NET_ACT_GACT=y
CONFIG_GACT_PROB=y
CONFIG_NET_ACT_MIRRED=y
CONFIG_NET_ACT_IPT=y
CONFIG_NET_ACT_PEDIT=y
# CONFIG_NET_ACT_SIMP is not set
CONFIG_NET_CLS_IND=y
CONFIG_NET_ESTIMATOR=y
CONFIG_HAS_IOPORT=y

\end{verbatim}

\end{document}


% January 2008
%%% Local Variables: 
%%% mode: latex
%%% TeX-master: t
%%% End: 
