\documentclass[11pt]{book}
\usepackage{nath}
\thispagestyle{empty}
\topmargin=0pt
\evensidemargin=0cm
\oddsidemargin=0cm
\textwidth=16cm
\textheight=22cm
\skip\footins 5ex 

\def\Jets{{\sc Jets}}
\def\Maple{{\sc Maple}}

\newtheorem{remark}{Remark}[section]

\begin{document}



\chapter{Machine computations}


Many algorithms presented in this book eventually lead to the solution 
of equations in total derivatives.
Linear systems of equations in total derivatives arise, e.g., when 
computing the kernel of a linear C-differential operator.
Typical examples are the operator of universal linearization 
(see \ref{...}) and its formal adjoint (see \ref{...}), 
which are relevant to computation of symmetries and conservation 
laws, respectively.
Such computations have already gained due attention in computer algebra.
The surveys~\cite{He,He2} review the software available in the early 
nineties; a number of the packages reviewed there actually solve linear 
equations in total derivatives.

This chapter contains a description of the package \Jets, which is 
capable of automated or semiautomated solution of nonlinear systems 
in total derivatives.
The package, developed by one of the authors of this book, is a
set of procedures over the computer algebra system \Maple.
\Jets\ was to a great extent inspired by the package {\sc DeLiA} 
by Bernstein and Bocharov~\cite{BB}.
Independently, a loosely similar package named {\sc Jet} is being 
developed by Meshkov~\cite{Mesh}.

\Jets\ is freely available from {\tt www.diffiety.ac.org}
along with a detailed tutorial.
In this chapter the exposition concentrates on one of the 
underlying algorithms.
It is nevertheless advisable to install \Jets\ and to experiment 
hands-on while reading the following pages.
Understandably, a basic familiarity with \Maple\ is indispensable.



\section{Equations in total derivatives}


Let $(E,C)$ be a diffiety equipped with standard internal 
coordinates $x^i$ and $u^k,\dots,u^k_I,\dots$.
Here capital Latin letters such as $I$ denote symmetric multiindices.
Hence, the total derivatives (generators of the Cartan 
distribution $C$) are 
$$
\numbered\label{TD}
D_j = \frac{\partial}{\partial x^j}
 + \sum_{k,I} u^k_{Ij} \frac{\partial}{\partial u^k_I},
$$
where $u^k_I$ run through all internal coordinates. 
By a system of equations in total derivatives we mean 
a system of equations of the form 
$$
\numbered\label{ETD}
F^k(U^l, \dots, U^l_J, \dots) = 0.
$$
Its solution is a set of functions $U^l$ on $E$ that satisfy 
eq.~(\ref{ETD}) when every $U^l_J$ is substituted by the total 
derivative $D_J U^l$, where 
$D_{j_1 \dots j_s} U = D_{j_1} \dots D_{j_s} U$. 
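For orientation, the action of the total derivative (\ref{TD}) can be checked numerically: along a concrete solution $u(x)$, applying $D_x$ to an expression in the jet coordinates must reproduce the ordinary $x$-derivative. A minimal Python sketch in one independent variable (an illustration only, entirely independent of \Jets):

```python
import math

def Dx(F, x, u, ux, uxx, h=1e-6):
    """Total derivative D_x F = dF/dx + ux*dF/du + uxx*dF/dux,
    with the partials of F taken by central differences."""
    dFdx  = (F(x + h, u, ux) - F(x - h, u, ux)) / (2 * h)
    dFdu  = (F(x, u + h, ux) - F(x, u - h, ux)) / (2 * h)
    dFdux = (F(x, u, ux + h) - F(x, u, ux - h)) / (2 * h)
    return dFdx + ux * dFdu + uxx * dFdux

# F(x, u, u_x) = u * u_x, evaluated along the solution u = sin x
F = lambda x, u, ux: u * ux
x0 = 0.7
u0, ux0, uxx0 = math.sin(x0), math.cos(x0), -math.sin(x0)

lhs = Dx(F, x0, u0, ux0, uxx0)
# d/dx [sin x * cos x] = cos^2 x - sin^2 x
rhs = math.cos(x0)**2 - math.sin(x0)**2
print(abs(lhs - rhs) < 1e-6)
```

The jet variables $u$, $u_x$, $u_{xx}$ are treated as independent arguments of $F$; the chain rule (\ref{TD}) then glues them back together along the chosen solution.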

A particularly awkward feature of total derivatives is that 
they tend to be huge expressions, especially higher order ones.
This is essentially why we cannot simply evaluate all total 
derivatives in eq.~(\ref{ETD}) and solve the resulting system.
Instead, any efficient method to solve equations in total
derivatives must delay evaluation of total derivatives as long as 
possible.

The goal is to replace eq.~(\ref{ETD}) with a system of equations 
in partial derivatives with the same solutions, but much simpler.
This is usually possible because eq.~(\ref{ETD}) constitutes a 
highly overdetermined system.
To understand why, observe that each of the unknown functions 
$U^l$ depends on a finite number of variables 
(all smooth functions do); let $`var U^l$ denote the dependence
set of $U^l$.
Because of the presence of total derivatives, the dependence set 
of the expression $F^k(U^l, \dots,\ D_I U^l, \dots)$, which we 
denote $`var F^k$, is strictly larger than the sets $`var U^l$.
In other words, the system eq.~(\ref{ETD}) implicitly includes
a number of equations $\frac{\partial U^h}{\partial p} = 0$, one 
for each of the extra variables 
$p \in \bigcup_k `var F^k \setminus `var U^h$.
These implicit equations are what makes the system 
eq.~(\ref{ETD}) overdetermined.

A well-established theory of overdetermined systems is available, 
which can be applied to the above-mentioned system, provided we 
avoid undue evaluation of total derivatives.
Since manipulating overdetermined systems involves a considerable 
amount of computation by itself, a slice-by-slice approach is clearly 
needed.
The suggested procedure consists of repeating two steps:

\paritem{1.} deriving simple differential consequences of
eq.~(\ref{ETD});

\paritem{2.} resolving them with respect to `leading' derivatives 
and back-substituting the simplest of them into eq.~(\ref{ETD}).

Substitutions made during Step~2 simplify eq.~(\ref{ETD}), 
so that Step~1 can be applied again.
At the same time, the equivalent system of simpler equations
is gradually created.
The algorithm stops when all the expressions 
$F^k(U^l, \dots,\ D_I U^l, \dots)$ become zero.

Essentially, both steps can be automated.




\subsection{Deriving differential consequences}



By {\it differential consequences} of system~(\ref{ETD}) we mean 
the (partial) derivatives
$$
\numbered\label{pdETD}
0 = \frac{\partial F(U^l, \dots, D_J U^l, \dots)}{\partial u^k_I}
 = \sum_{l,J} \frac{\partial F}{\partial U^l_J}
   \frac{\partial D_J U^l}{\partial u^k_I}.
$$
The derivative $\frac\partial{\partial p}$ of the total 
derivative $D_j U$ of a function $U$ on $E$ can be computed 
according to the formula
$$
\numbered\label{pdTD}
\frac{\partial D_j U}{\partial p}
 = D_j(\frac{\partial U}{\partial p})
 + \sum_q \frac{\partial U}{\partial q}
          \frac{\partial D_j q}{\partial p},
$$
where $q$ runs over all variables the function $U$ depends on.
In the case of higher total derivatives, (\ref{pdTD}) is
applied recursively.
The formula allows us to compute (some of the) derivatives 
$\frac{\partial F}{\partial U^l_J}
   \frac{\partial D_J U^l}{\partial u^k_I}$ on the right-hand side 
of the equation~(\ref{pdETD}) without actually evaluating the 
total derivatives $D_j U^l$.
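Formula (\ref{pdTD}) can be sanity-checked numerically as well. For $U = U(x,u,u_x)$ and $p = u_{xx}$ it reduces to $\partial(D_x U)/\partial u_{xx} = \partial U/\partial u_x$, since $q = u_x$ is the only variable of $U$ with $D_x q$ depending on $u_{xx}$. A Python sketch with an arbitrarily chosen $U$ (illustration only):

```python
def pd(f, args, i, h=1e-6):
    """Central-difference partial derivative of f w.r.t. its i-th argument."""
    up = list(args); up[i] += h
    dn = list(args); dn[i] -= h
    return (f(*up) - f(*dn)) / (2 * h)

def U(x, u, ux):                     # an arbitrary test function on E
    return x * u * ux + ux**2

def DxU(x, u, ux, uxx):              # D_x U = U_x + ux*U_u + uxx*U_ux
    return (pd(U, (x, u, ux), 0) + ux * pd(U, (x, u, ux), 1)
            + uxx * pd(U, (x, u, ux), 2))

pt = (0.3, 1.1, -0.5, 0.8)           # a generic point (x, u, ux, uxx)
lhs = pd(DxU, pt, 3)                 # d(D_x U)/d u_xx
rhs = pd(U, pt[:3], 2)               # d U / d u_x
print(abs(lhs - rhs) < 1e-4)
```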

The idea is to derive differential consequences as long as their
size decreases.
Let $`size(a)$ denote an integer-valued function, which we may
interpret as measuring the complexity of the input expression~$a$.
E.g., $`size(a)$ may be the value returned by the \Maple\ command
\verb|length(a)|, or the number of unknowns and their 
derivatives involved in $a$, or a weighted combination of both.

To start with, let an expression $a$ depend on a single 
variable~$p$. 
We are interested in obtaining the smallest differential 
consequence of $a$, i.e., in locating the smallest member of the 
integer sequence 
$$
\numbered\label{size}
\{`size(\frac{\partial^i a}{\partial p^i})\}_{i = 0}^\infty.
$$
We distinguish two basic types of possible behaviour:
the {\it polynomial\/} type, when the size grows smaller and 
smaller and finally $\frac{\partial^i a}{\partial p^i}$ becomes 
zero, as illustrated by the picture
\begin{center}
\unitlength=.6cm
\begin{picture}(8,4.3)
\put(.5,1){\vector(1,0){7}}
\thicklines
\put(1,1){\line(0,1){2.2}}
\put(2,1){\line(0,1){1.7}}
\put(3,1){\line(0,1){0.4}}
\footnotesize
\put(1,.5){\makebox[0pt]{$0$}}
\put(2,.5){\makebox[0pt]{$1$}}
\put(3,.5){\makebox[0pt]{$2$}}
\put(4,.5){\makebox[0pt]{$3$}}
\put(5,.5){\makebox[0pt]{$4$}}
\put(6.5,.5){\mbox{$i$}}
\end{picture}
\end{center}
and the {\it exponential\/} type, when the size grows bigger and 
bigger, as illustrated by the picture
\begin{center}
\unitlength=.6cm
\begin{picture}(7.5,4.3)
\put(.5,1){\vector(1,0){7}}
\thicklines
\put(1,1){\line(0,1){1.2}}
\put(2,1){\line(0,1){1.3}}
\put(3,1){\line(0,1){1.6}}
\put(4,1){\line(0,1){2.1}}
\put(5,1){\line(0,1){2.8}}
\footnotesize
\put(1,.5){\makebox[0pt]{$0$}}
\put(2,.5){\makebox[0pt]{$1$}}
\put(3,.5){\makebox[0pt]{$2$}}
\put(4,.5){\makebox[0pt]{$3$}}
\put(5,.5){\makebox[0pt]{$4$}}
\put(6.5,.5){\mbox{$i$}}
\end{picture}
\end{center}
The polynomial (exponential) type of behaviour tends to occur 
when $a$ is polynomial (nonpolynomial) in~$p$.

Polynomial behaviour prevails at the early stages of computation.
After a while, a typical expression $a$ is a sum of
a term of polynomial behaviour and another term of exponential
behaviour.
The sequence (\ref{size}) may then exhibit an unpredictable number 
of local minima, but usually the diagram has a single minimum, 
as in
\begin{center}
\unitlength=.6cm
\begin{picture}(8.5,4.3)
\put(.5,1){\vector(1,0){8}}
\thicklines
\put(1,1){\line(0,1){1.8}}
\put(2,1){\line(0,1){1.2}}
\put(3,1){\line(0,1){0.8}}
\put(4,1){\line(0,1){1.3}}
\put(5,1){\line(0,1){2.0}}
\put(6,1){\line(0,1){2.8}}
\footnotesize
\put(1,.5){\makebox[0pt]{$0$}}
\put(2,.5){\makebox[0pt]{$1$}}
\put(3,.5){\makebox[0pt]{$2$}}
\put(4,.5){\makebox[0pt]{$3$}}
\put(5,.5){\makebox[0pt]{$4$}}
\put(6,.5){\makebox[0pt]{$5$}}
\put(7.5,.5){\mbox{$i$}}
\end{picture}
\end{center}

Let us consider the algorithm
$$
\text{\bf do } \\
\quad b := \frac{\partial a}{\partial p} \\
\quad \text{\bf if } b = 0 \text{ \bf or }
  `size(b) \ge `size(a) \text{ \bf then} \\
\quad \quad  \text{\bf return } a \\
\quad \text{\bf else } a := b \\
\quad \text{\bf end-if} \\
\text{\bf end-do}
$$
The algorithm is guaranteed to stop, since otherwise there 
would exist an infinite decreasing sequence of integers.
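The loop is easily modelled in Python; in the toy version below the expressions are polynomials in $p$ given by coefficient lists and $`size$ counts the nonzero coefficients (this is a model of the algorithm, not \Jets' actual code):

```python
def deriv(coeffs):
    """Derivative of a polynomial c0 + c1*p + c2*p^2 + ..."""
    return [i * c for i, c in enumerate(coeffs)][1:]

def size(coeffs):
    return sum(1 for c in coeffs if c != 0)

def smallest_consequence(a):
    """Differentiate while the size strictly decreases; return the
    first nonzero expression of (locally) minimal size."""
    while True:
        b = deriv(a)
        if size(b) == 0 or size(b) >= size(a):
            return a
        a = b

# a = 5 + 3p^2 + 2p^3 (size 3)  ->  6p + 6p^2 (size 2)  ->  stop
print(smallest_consequence([5, 0, 3, 2]))
```

For polynomial input the size decreases monotonically, so the loop stops at the first step where differentiating no longer helps, exactly as in the pseudocode above.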

For any $a$, the algorithm returns the 
{\it first nonzero term of locally minimal size}.
This may be interpreted as the `smallest reasonable' 
differential consequence of~$a$, considering the fact 
that no algorithm can decide whether any further minimum exists.
In the case of a stabilized sequence, the algorithm returns the 
first expression of the stabilized size.

The effectiveness of the algorithm depends on the function 
`size' being a true measure of the complexity of its input expression. 
Expressions that involve the least possible number of unknown 
functions and their derivatives are usually preferable.

When $a$ depends on a number of variables $p_1,\dots,p_n$, one 
can apply similar reasoning to a diagram over an 
$n$-dimensional frame, such as
\begin{center}
\unitlength=.6cm
\begin{picture}(8.5,7.3)
\put(2.50,5.00){\vector(-1,-1){2.5}}
\put(3.25,4.75){\line(-1,-1){2}}
\put(4.00,4.50){\line(-1,-1){2}}
\put(4.75,4.25){\line(-1,-1){2}}
\put(5.50,4.00){\line(-1,-1){2}}
\put(6.25,3.75){\line(-1,-1){2}}
\put(7.00,3.50){\line(-1,-1){2}}
\put(7.75,3.25){\line(-1,-1){2}}
\put(2.5,5.0){\vector( 3,-1){7}}
\put(2.0,4.5){\line( 3,-1){5.25}}
\put(1.5,4.0){\line( 3,-1){5.25}}
\put(1.0,3.5){\line( 3,-1){5.25}}
\put(0.5,3.0){\line( 3,-1){5.25}}
\thicklines
\put(2.50,5.00){\line(0,1){1.8}}
\put(2.00,4.50){\line(0,1){1.2}}
\put(1.50,4.00){\line(0,1){0.8}}
\put(1.00,3.50){\line(0,1){1.3}}
\put(0.50,3.00){\line(0,1){2.0}}
%
\put(3.25,4.75){\line(0,1){2.0}}
\put(2.75,4.25){\line(0,1){1.2}}
\put(2.25,3.75){\line(0,1){1.2}}
\put(1.75,3.25){\line(0,1){1.2}}
\put(1.25,2.75){\line(0,1){1.2}}
%
\put(4.00,4.50){\line(0,1){.8}}
\put(3.50,4.00){\line(0,1){.6}}
\put(3.00,3.50){\line(0,1){.5}}
\put(2.50,3.00){\line(0,1){.4}}
\put(2.00,2.50){\line(0,1){.3}}
%
\put(4.75,4.25){\line(0,1){2}}
\put(4.25,3.75){\line(0,1){1}}
\put(3.75,3.25){\line(0,1){1.2}}
\put(3.25,2.75){\line(0,1){.1}}
%
\put(5.50,4.00){\line(0,1){2}}
\put(5.00,3.50){\line(0,1){1.2}}
%
\put(6.25,3.75){\line(0,1){1.2}}
\put(5.75,3.25){\line(0,1){1.2}}
%
\put(7.00,3.50){\line(0,1){1.2}}
\put(6.50,3.00){\line(0,1){0.2}}
%
\put(7.75,3.25){\line(0,1){0.2}}
%
\footnotesize
\put(-0.3,2.75){\makebox[0pt]{$p$}}
\put(9.2,3.15){\makebox[0pt]{$q$}}
\end{picture}
\end{center}



In our implementation, the procedure that performs this search is {\tt derive}.


\section{Assignments to partial derivatives}



Equations produced by \verb|derive| do not always allow for an explicit 
solution.
Very often, however, they can be resolved with 
respect to one of the leading derivatives and then used as substitutions.  
Example: instead of trying to solve the PDE 
$\partial^2 U/\partial u_x^2 + u_x\,\partial U/\partial u = 0$
we may routinely introduce the substitution 
$\partial^2 U/\partial u_x^2 = -u_x\,\partial U/\partial u$.

\Jets\ provides a procedure \verb|put| to make assignments to partial 
derivatives that propagate to all differential consequences.
For instance:
\begin{verbatim}
> put('pd(U,u_x^2)' = -u_x*pd(U,u));
> pd(U,u_x^2);
                                  /d   \
                             -u_x |-- U|
                                  \du  /

> pd(U,u_x^3);
                                    /   2     \
                       /d   \       |  d      |
                      -|-- U| - u_x |------- U|
                       \du  /       \du du_x  /
\end{verbatim} 

Another example: Resolving the famous Cauchy--{}Riemann conditions
$$
\partial U/\partial u = \partial V/\partial v,
\qquad
\partial V/\partial u = -\partial U/\partial v,
$$
via the pair of substitutions 
\begin{verbatim}
> put('pd(U,u)' = pd(V,v), 'pd(V,u)' = -pd(U,v));
\end{verbatim} 
produces the Laplace equation
$\partial^2 U/\partial u^2 + \partial^2 U/\partial v^2 = 0$ effortlessly:
\begin{verbatim}
> pd(U,u^2);
                                / 2   \
                                |d    |
                               -|--- U|
                                |  2  |
                                \dv   /
\end{verbatim} 


The behaviour of \verb|put| is quite similar to that of \verb|equation|.
Again, substitutions are stored in a table.
One exception is that a substitution of the form 
\verb|pd(U,u) = 0| merely changes the dependence record of
$U$, in case it exists.
Another exception is that a substitution of the form
\verb|U = f| will result in the direct assignment \verb|U := f|.


There are two obvious sources of problems if \verb|put| is used
arbitrarily.
Firstly, substitutions can form vicious circles, generating stack overflows.
The standard way to avoid them is explained in the section 
`Avoiding vicious circles' below.

Secondly, substitutions can contradict one another, generating chaotic and 
session-dependent results.
For instance, the above Cauchy--{}Riemann conditions can be alternatively 
resolved as
\begin{verbatim}
> put('pd(V,v)' = pd(U,u), 'pd(V,u)' = -pd(U,v));
\end{verbatim}
Then the input
\begin{verbatim}
> pd(V,u*v);
\end{verbatim}
will produce either 
\begin{verbatim}
                                 2
                                d
                                --- U
                                  2
                                du
\end{verbatim}
or 
\begin{verbatim}
                                / 2   \
                                |d    |
                               -|--- U|
                                |  2  |
                                \dv   /
\end{verbatim}
depending on whether it resulted from the first or the second substitution.
Such questions are addressed by the formal integrability theory of systems
of PDEs.
For simplicity, \Jets\ performs only the standard cross-derivative checks:
\begin{verbatim}
> pd(pd(V,v),u) - pd(pd(V,u),v);

                          / 2   \   / 2   \
                          |d    |   |d    |
                          |--- U| + |--- U|
                          |  2  |   |  2  |
                          \du   /   \dv   /
\end{verbatim}
(which is another way of obtaining the Laplace equation).
To ensure uniqueness of substitutions, all cross-derivatives must be
balanced, i.e., non-zero expressions resulting from cross-derivatives must
be resolved and the results substituted back again.
In this way, substitutions force other substitutions, e.g.,
\begin{verbatim}
> put('pd(U,u^2)' = -pd(U,v^2)):
\end{verbatim}

There is a procedure \verb|cc()| which computes the
compatibility conditions:
\begin{verbatim}
> cc();
                                  {}
\end{verbatim}
This output shows that all cross-derivatives have been already balanced.

A nonempty set of compatibility conditions returned by \verb|cc| 
need not be resolved immediately.
Resolution may be postponed, e.g., to see whether a subsequent 
\verb|derive| produces a condition that is simpler to resolve.

Once all cross-derivatives are balanced, it is possible to reduce the 
table by eliminating all derivatives that are derivatives of the others:
\begin{verbatim}
> reduce();
                                          2       / 2   \
            d      d     d       /d   \  d        |d    |
            -- V = -- U, -- V = -|-- U|, --- U = -|--- U|
            dv     du    du      \dv  /    2      |  2  |
                                         du       \dv   /
\end{verbatim}






\section{Avoiding vicious circles}



Suppose we want to resolve the equation 
$\partial U/\partial u = \partial^2 U/\partial u^2$ with respect to one of
the partial derivatives.
Obviously, the substitution
$\partial U/\partial u \to \partial^2 U/\partial u^2$ will induce a vicious
circle of substitutions
$$
\partial U/\partial u
 \to \partial^2 U/\partial u^2
 \to \partial^3 U/\partial u^3
 \to \partial^4 U/\partial u^4
 \to \cdots
$$
Not so the inverse substitution 
$\partial^2 U/\partial u^2 \to \partial U/\partial u$, which therefore 
presents the only acceptable way to resolve the equation.


Standard methods to avoid vicious circles use special orderings $\prec$ of 
partial derivatives.
By a partial derivative we mean an expression \verb|pd(|$U,X$\verb|)| where 
$U$ is a function and $X$ is a count over some set of variables.
Functions themselves are regarded as partial derivatives with $X = 1$.
The orderings should satisfy
\begin{equation} \label{C1}
P \prec \frac{\partial P}{\partial s} 
\end{equation}
and
\begin{equation} \label{C2}
Q \prec P 
 \quad\Rightarrow\quad
 \frac{\partial Q}{\partial s} \prec \frac{\partial P}{\partial s}
\end{equation}
for all partial derivatives $P,Q$ and every variable $s$.
Given such an ordering, one imposes the rule that
\begin{equation} \label{R}
\parbox{7.6cm}{\rm
in every substitution $P \to \sigma$, the expression $\sigma$
involves only derivatives $Q$ satisfying $Q \prec P$\rlap{.}}
\end{equation}
Then it is easy to prove that 
{\it if substitutions satisfy the rule~{\rm (\ref{R})},
then every composition of substitutions and every differential consequence 
of a substitution satisfy the same rule~{\rm (\ref{R})}}.
[Prove as an exercise. 
Hint: A differential consequence of the rule $P \to \sigma$ is the rule
$$
\frac{\partial P}{\partial s}
 \to \sum_Q \frac{\partial \sigma}{\partial Q}
  \cdot \frac{\partial Q}{\partial s},
$$
where $Q$ runs over all partial derivatives occurring in $\sigma$.]

Substitutions satisfying rule~(\ref{R}) cannot produce vicious circles.
Every string of substitutions will eventually arrive at a ``minimal
state'' where no further substitution can be applied.

Orderings that satisfy conditions (\ref{C1}), (\ref{C2}) can be introduced 
as follows.
First we need a workable ordering $\prec_{\rm var}$ of variables.
\begin{enumerate}
\item
$x \prec_{\rm var} u$ for every base variable $x$ and every fibre
variable $u$;

\item $u \prec_{\rm var} u_X$ for every fibre variable $u$ and every count $X$;

\item given an ordering $\prec_{\rm b}$ of base variables and an ordering 
$\prec_{\rm f}$ of fibre variables, we have 
$u_X \prec_{\rm var} v_Y$ if

\quad {\tt (degree)} \ 
the degree of the count $X$ is less than the degree of the count $Y$;

with ties broken by

\quad {\tt (function)} \ 
$u \prec_{\rm f} v$; 

with ties broken by

\quad {\tt (count)} \ 
under the ordering $\prec_{\rm b}$, the greatest variable 
that occurs in $Y/X$ has a positive exponent.
\end{enumerate}
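The third rule, which compares two derivatives $u_X$ and $v_Y$, can be modelled in Python with a count represented as a dictionary from base variables to exponents (so $u_{x^2y}$ becomes \verb|{'x': 2, 'y': 1}|); this is a toy comparator illustrating the definition, not \Jets' internal code:

```python
def precedes(u, X, v, Y, base_order, fibre_order):
    """True iff u_X < v_Y under the (degree, function, count) ordering.
    X, Y are counts as {base_variable: exponent}; the order lists go
    from smallest to greatest variable."""
    dX, dY = sum(X.values()), sum(Y.values())
    if dX != dY:                          # (degree)
        return dX < dY
    if u != v:                            # (function)
        return fibre_order.index(u) < fibre_order.index(v)
    # (count): the greatest base variable occurring in Y/X
    # must have a positive exponent
    for s in reversed(base_order):
        e = Y.get(s, 0) - X.get(s, 0)
        if e != 0:
            return e > 0
    return False                          # X == Y

base, fibre = ['x', 'y'], ['u', 'v']
print(precedes('u', {'x': 1}, 'v', {'x': 2}, base, fibre))           # degree
print(precedes('u', {'x': 2}, 'u', {'x': 1, 'y': 1}, base, fibre))   # count
```

Rules 1 and 2 (base variables precede fibre variables, and $u \prec u_X$) are omitted for brevity; they would be checked before the comparison above.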


By default, the orderings $\prec_{\rm b}$ and $\prec_{\rm f}$ 
are those introduced by the \verb|coordinates| command.
Alternative orderings $\prec_{\rm var}$ are obtained by changing the order in 
which criteria {\tt degree}, {\tt function}, {\tt count} are applied
(obviously, after {\tt count} there is no room for {\tt degree}).
E.g., the ordering governed by the succession 
{\tt function}, {\tt degree}, {\tt count}, is introduced by 
\begin{verbatim}
varordering(function,degree,count);
\end{verbatim}

Having defined an ordering of variables, one obtains an ordering of partial 
derivatives by replacing $\prec_{\rm f}$ with any 
ordering $\prec_{\rm fun}$ of functions and $\prec_{\rm b}$ with any 
ordering $\prec_{\rm var}$ of variables.

In \Jets, the ordering $\prec_{\rm fun}$ is determined by the particular order 
the functions are declared with \verb|unknowns|.
The~ordering $\prec_{\rm var}$ is that introduced above.
The command to change the succession of criteria is \verb|Varordering|.
The default is \verb|Varordering(function,degree,reverse)|, where 
\verb|reverse| means \verb|count| with respect to the ordering reverse to the 
ordering of variables.
The orderings just constructed are linear (every two elements are comparable).

In general, computations in automatic regime are prone to expression swell.
The risk of memory overflow is smallest if the 
\verb|Varordering| declaration starts with \verb|degree|.

However, in the presence of two or more unknowns, it is often desirable
to obtain the resulting system $Z$ in a ``triangular'' form.
It is then possible to solve the equations on the first unknown first; 
then, knowing an explicit expression for the first unknown, to solve the 
equations on the second unknown, etc.
Triangular form is produced if the ordering options start with 
\verb|function|.
To select expressions from a set $Z$ that involve only unknowns from a set
$A$, call \verb|unksselect(Z,A)|. 

To obtain the triangular form while avoiding expression swell, two runs 
should be performed: 
first \verb|run(S)| with respect to the ordering by
\verb|degree,function,reverse|, followed by 
\verb|Z := clear(pds)|, and then \verb|run(op(Z))| with respect to
\verb|function,degree,reverse|.



\section{Classification problems}



Another example, taken from~\cite[8.1.5]{VKL}, is
\begin{verbatim}
> equation('u_t' = f*u_x + u_xx):
> dependence(f(u)):
\end{verbatim} 
Here the right-hand side of the equation depends on $u$ via an unknown 
function $f$.  
In loc.\ cit.\ the authors solve the following problem: how does the
symmetry algebra depend on $f$?
Cases 1--5 found in the book will be reconstructed below.
The universal linearization is 
\begin{verbatim}
> S := symmetries(u = U);
                                           /d   \              2
            S := TD(U, t) - TD(U, x) f - U |-- f| u_x - TD(U, x )
                                           \du  /
\end{verbatim} 
Assuming
\begin{verbatim}
> dependence(f(u), U(t,x,u,u_x,u_xx));
\end{verbatim} 
we soon get the following output:
\begin{verbatim}
<10>   linear resolving failed for   pd(U,x*u_x)

       /   2     \             / 2   \
/d   \ |  d      |    / d    \ |d    |       /d   \ /d   \
|-- f| |------- U| = -|---- U| |--- f| u_x + |-- U| |-- f|
\du  / \du_x dx  /    \du_x  / |  2  |       \du  / \du  /
                               \du   /

         / 2   \   // 2   \                   \
         |d    |   ||d    |        /d   \2    | /  d    \
     + U |--- f| - ||--- f| u_xx + |-- f|  u_x| |----- U|
         |  2  |   ||  2  |        \du  /     | \du_xx  /
         \du   /   \\du   /                   /

                                 FAIL
\end{verbatim}
This means that \Jets\ is about to solve a linear equation on 
$\partial^2 U/\partial u\,\partial u_x$ with leading coefficient
$\partial f/\partial u$, but lacks information that the leading coefficient
is nonzero.

In such a case, \Jets\ is expecting an instruction from the user.
If there is more than one possible continuation, then it is
recommended to fork the computation: to save the worksheet, make one 
more copy, and continue the copies separately. 
The current status of the computation can be stored in a file (see the 
command \verb|store| below) and then read into another session. 

\medskip\noindent 1.
When $\partial f/\partial u = 0$, we say \verb|put('pd(f,u)' = 0)|
and continue. 
The result will be that of Case~5 of loc.\ cit.

\medskip
When $\partial f/\partial u \ne 0$, we instruct \Jets\ about this 
fact as follows:
\begin{verbatim}
> nonzero(pd(f,u));
                                 d
                                {-- f}
                                 du
\end{verbatim}
Continuing, we rather quickly get another condition:
\begin{verbatim}
<9>   linear resolving failed for   pd(U,u)

        / 2   \
 /d   \ |d    | /d   \
-|-- U| |--- f| |-- f| =
 \du  / |  2  | \du  /
        \du   /

     /    / 3   \              / 2   \2\              / 2   \2
     |    |d    | /d   \       |d    | | / d    \     |d    |
    -|u_x |--- f| |-- f| - u_x |--- f| | |---- U| - U |--- f|
     |    |  3  | \du  /       |  2  | | \du_x  /     |  2  |
     \    \du   /              \du   / /              \du   /

         / 3   \
         |d    | /d   \
     + U |--- f| |-- f| -
         |  3  | \du  /
         \du   /

    /        / 2   \              / 3   \        / 2   \2     \
    |/d   \2 |d    |       /d   \ |d    |        |d    |      |
    ||-- f|  |--- f| u_x + |-- f| |--- f| u_xx - |--- f|  u_xx|
    |\du  /  |  2  |       \du  / |  3  |        |  2  |      |
    \        \du   /              \du   /        \du   /      /

    /  d    \
    |----- U|
    \du_xx  /

                                 FAIL
\end{verbatim}
Now we choose between $\partial^2 f/\partial u^2$ zero and nonzero.

\medskip\noindent 2.
The first possibility leads to Case 4 of loc. cit. 

\medskip
The alternative leads to:
\begin{verbatim}
<19>   linear resolving failed for   pd(U,x)

/ / 3   \            / 2   \2\ / 2   \
| |d    | /d   \     |d    | | |d    | /d   \ /d   \
|-|--- f| |-- f| + 2 |--- f| | |--- f| |-- f| |-- U| =
| |  3  | \du  /     |  2  | | |  2  | \du  / \dx  /
\ \du   /            \du   / / \du   /

     / / 3   \            / 2   \2\2
     | |d    | /d   \     |d    | |     2 / d    \
    -|-|--- f| |-- f| + 2 |--- f| |  u_x  |---- U|
     | |  3  | \du  /     |  2  | |       \du_x  /
     \ \du   /            \du   / /

       / / 3   \            / 2   \2\2
       | |d    | /d   \     |d    | |           /  d    \
     - |-|--- f| |-- f| + 2 |--- f| |  u_x u_xx |----- U|
       | |  3  | \du  /     |  2  | |           \du_xx  /
       \ \du   /            \du   / /

       / / 3   \            / 2   \2\2
       | |d    | /d   \     |d    | |
     + |-|--- f| |-- f| + 2 |--- f| |  U u_x
       | |  3  | \du  /     |  2  | |
       \ \du   /            \du   / /

                                 FAIL
\end{verbatim}
Now 
\begin{equation} \label{c}
\frac{\partial^3 f}{\partial u^3}\cdot \frac{\partial f}{\partial u}
 - 2 \biggl(\frac{\partial^2 f}{\partial u^2}\biggr)^2
\end{equation}
is either zero or nonzero.

\medskip\noindent 3.
In the first case, $f := c_1 \ln(u + c_2) + c_3$, where 
$c_1,c_2,c_3$ are constants:
\begin{verbatim}
> f := c1*ln(u + c2) + c3;
> parameters(c1,c2,c3);
> refresh();
                       f := c1 ln(u + c2) + c3

                              c3, c1, c2
\end{verbatim} 
({\tt refresh} clears remember tables which could still contain
unassigned $f$.)
This is exactly Case~2 of loc. cit.
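That this $f$ indeed annihilates expression (\ref{c}) is easily checked by hand; the following Python snippet repeats the check numerically, with the derivatives of $c_1\ln(u+c_2)+c_3$ written out explicitly (a sanity check only, not part of the \Jets\ session):

```python
def check_c_vanishes(c1, c2, u):
    """Expression (c) = f''' f' - 2 (f'')^2 for f = c1*ln(u+c2) + c3;
    the derivatives below are computed by hand."""
    f1 = c1 / (u + c2)            # f'
    f2 = -c1 / (u + c2)**2        # f''
    f3 = 2 * c1 / (u + c2)**3     # f'''
    return f3 * f1 - 2 * f2**2    # = 2c1^2/(u+c2)^4 - 2c1^2/(u+c2)^4 = 0

print(abs(check_c_vanishes(3.0, 0.5, 1.7)) < 1e-12)
```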

\medskip
Alternatively, we can declare the expression (\ref{c}) to be nonzero.
After that we get one more fork very soon, namely:


\begin{verbatim}
<15>   linear resolving failed for   pd(U,u_x)

/ / 2   \2 / 3   \   / 2   \        / 4   \            / 3   \2\
| |d    |  |d    |   |d    | /d   \ |d    |     /d   \ |d    | |
|-|--- f|  |--- f| - |--- f| |-- f| |--- f| + 2 |-- f| |--- f| | u_x
| |  2  |  |  3  |   |  2  | \du  / |  4  |     \du  / |  3  | |
\ \du   /  \du   /   \du   /        \du   /            \du   / /

    / d    \
    |---- U| = -
    \du_x  /

    / / 2   \2 / 3   \   / 2   \        / 4   \            / 3   \2\
    | |d    |  |d    |   |d    | /d   \ |d    |     /d   \ |d    | |
    |-|--- f|  |--- f| - |--- f| |-- f| |--- f| + 2 |-- f| |--- f| |
    | |  2  |  |  3  |   |  2  | \du  / |  4  |     \du  / |  3  | |
    \ \du   /  \du   /   \du   /        \du   /            \du   / /

         /  d    \
    u_xx |----- U| +
         \du_xx  /

    / / 2   \2 / 3   \   / 2   \        / 4   \            / 3   \2\
    | |d    |  |d    |   |d    | /d   \ |d    |     /d   \ |d    | |
    |-|--- f|  |--- f| - |--- f| |-- f| |--- f| + 2 |-- f| |--- f| |
    | |  2  |  |  3  |   |  2  | \du  / |  4  |     \du  / |  3  | |
    \ \du   /  \du   /   \du   /        \du   /            \du   / /

    U

                                 FAIL
\end{verbatim} 
\Maple\ can solve the condition that the coefficient is zero after we 
rewrite it in the form
$$
\biggl(\frac{f' f'''}{f''{}^2}\biggr)' = 0,
$$
which is 
$$
f' f''' = a f''{}^2,
$$
where $a$ is a constant.
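Two simple instances, chosen here purely for illustration (they are not taken from the computation above), show that both branches are nonempty: $f = e^u$ satisfies $f'f''' = f''^2$, i.e.\ $a = 1$, while $f = (u+c)^k$ yields $a = (k-2)/(k-1) \ne 1$:

```python
import math

# f = e^u: all derivatives equal e^u, so f'f''' = e^{2u} = (f'')^2, a = 1
u = 0.4
e = math.exp(u)
a_exp = (e * e) / (e * e)

# f = (u + c)^k: f'f''' / (f'')^2 = (k - 2)/(k - 1), independent of u
c, k = 1.0, 3.5
f1 = k * (u + c)**(k - 1)                      # f'
f2 = k * (k - 1) * (u + c)**(k - 2)            # f''
f3 = k * (k - 1) * (k - 2) * (u + c)**(k - 3)  # f'''
a_pow = (f1 * f3) / (f2 * f2)

print(a_exp, abs(a_pow - (k - 2) / (k - 1)) < 1e-12)
```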

\medskip\noindent 4. For $a \ne 1$ we get Case 3.

\medskip\noindent 5. For $a = 1$ we get Case 1.

\medskip 
Another reason for failure can be nonlinearity of the equation to be 
solved, which occurs only if the input equation in total derivatives is 
nonlinear.
Solving nonlinear equations is not recommended, since their solutions 
tend to be expressions that are difficult to simplify.



\section{Fighting expression swell}


\Jets\ is highly prone to expression swell when running in automatic 
regime.
We already observed sensitivity to the \verb|Varordering|, with ordering 
by \verb|degree| giving more stable behaviour than the others.

The succession of unknowns is also significant.
The rule to be followed is that unknowns should be listed in the order of 
increasing size of their dependence set. 
E.g., 
\begin{verbatim}
dependence(a(x), b(x,u,u_x)):
unknowns(a,b):
\end{verbatim}
is preferable to \verb|unknowns(b,a)|.
The reason is that with \verb|unknowns(a,b)| (the derivatives of) $b$ 
will become expressed through (the derivatives of) $a$ rather than the
converse.

Four global variables control the data flow in automatic regime. 
The output of \verb|derive| and \verb|cc| typically contains a large
number of expressions.
The global variable \verb|ressize| determines the portion of these 
expressions that becomes the input of \verb|resolve|.
Namely, expressions are sorted by size and harvested one by one as long as 
the product of their sizes does not exceed \verb|ressize|.
The default definition of \verb|size| ensures that one-term expressions
have (nearly) unit size and therefore all pass to \verb|resolve|.
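The harvesting rule may be sketched as follows (a hypothetical reimplementation of the selection logic only; names and details differ from \Jets' internals):

```python
def harvest(expressions, size, ressize):
    """Sort by size and take expressions one by one while the running
    product of their sizes stays within ressize."""
    picked, product = [], 1
    for e in sorted(expressions, key=size):
        product *= size(e)
        if product > ressize:
            break
        picked.append(e)
    return picked

# toy expressions, with size taken to be the string length
exprs = ["U", "pd(U,u)", "u_x", "pd(U,u_x^2)+u_x*pd(U,u)"]
print(harvest(exprs, len, 50))
```

Note that expressions of unit size never increase the product, so they always pass, consistent with the remark above about one-term expressions.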

The results of \verb|resolve| also involve expressions of a variety of sizes.
Expressions of size exceeding \verb|maxsize| never pass to the output and
never generate a `resolving failure' report.
Of \verb|resolve|'s results only those of size not exceeding \verb|putsize| 
are used in substitutions.
It makes sense to set \verb|maxsize| higher than \verb|putsize|, because
otherwise important `resolving failure' reports could be missed.  

Finally, \verb|Blimit| is the threshold of `bytes used' where \verb|run| 
will call \verb|reduce| during every loop.

Very often, however, the expression swell can be traced to the growing 
size of numerators, leading to various `object too big' errors.
In such cases, one should prefer \verb|reverse| to \verb|count| in
\verb|Varordering|.
Transforming the system to another set of variables can also be helpful.



\section*{Transformation of variables}


The procedure \verb|transform| is suitable for transforming an expression to 
another set of variables.
For example,
\begin{verbatim}
> normal(transform(x = u_x, y = u_y, u = x*u_x + y*u_y - u, u_x)); 

                                  x

> normal(transform(x = u_x, y = u_y, u = x*u_x + y*u_y - u, u_xx + u_yy));

                             u_xx + u_yy
                          -----------------
                                          2
                          u_xx u_yy - u_xy
\end{verbatim}
is the famous Legendre transformation.
Needless to say, the {\tt equation} table should be empty.
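The first of the two outputs can be confirmed by hand on a concrete function: writing $X = u_x$, $Y = u_y$, $w = xu_x + yu_y - u$ for the transformed quantities, one must have $\partial w/\partial X = x$. A numeric check in Python for the arbitrarily chosen $u = x^2/2 + xy + y^2$, for which the change of variables inverts to $x = 2X - Y$, $y = Y - X$:

```python
def w(X, Y):
    # invert u_x = x + y = X, u_y = x + 2y = Y for u = x^2/2 + x*y + y^2
    x, y = 2 * X - Y, Y - X
    u = x**2 / 2 + x * y + y**2
    return x * X + y * Y - u          # w = x*u_x + y*u_y - u

X0, Y0, h = 1.3, 0.8, 1e-5
dwdX = (w(X0 + h, Y0) - w(X0 - h, Y0)) / (2 * h)
x0 = 2 * X0 - Y0                       # the original variable x
print(abs(dwdX - x0) < 1e-6)
```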



\section{About declarations}


A procedure whose purpose is to change settings of global variables 
is called a declaration.
If named in the plural, it resets the previous declaration of the same
type.
If named in the singular, it has a cumulative effect.
Some declarations exist only in the plural form (e.g., \verb|coordinates|).

Through a declaration, unassigned names acquire a meaning and become
{\it registered}.
For example, base and fibre variables are registered names once they are 
declared through the \verb|coordinates| command.

Another possible meaning for a registered name is {\it parameter}.
The declaration 
\begin{verbatim}
> parameter(c);
                                  c
\end{verbatim} 
has the same effect as \verb|dependence(c())|, but still allows $c$ to be 
used as a variable. 
For instance,
\begin{verbatim}
> dependences(f(c,x)); 

                              f = {c, x}
> pd(f,c);
                                 d
                                 -- f
                                 dc
\end{verbatim} 

Meanings other than `variable' are stored in a special table.
The command \verb|registered()| prints meanings of all registered 
names;
\verb|registered(|$m$\verb|)| prints all names of the given meaning $m$.
\begin{verbatim}
> registered();
                              c = {parameter}

> registered(parameter);
                                  c
\end{verbatim} 
To remove a particular meaning from all names that have it, use
\verb|clear|. For instance
\begin{verbatim}
> clear(parameter);
                                  c
> pd(f,c);
Error, pd expects its 2nd argument, p, to be of type ar/count, but received c
\end{verbatim} 


\section{Storing intermediate results}


The command \verb|store(|{\it filename}\verb|)| writes all assignments to 
unknowns and their derivatives, all dependence sets, and the set of all 
expressions declared nonzero to the specified file.
The format is chosen so that the file can be read into another \Maple\ session.
\begin{verbatim}
> unknowns(a,b,c):
> a := b + c:
> put('pd(b,x)' = 1):
> nonzero(b,c):
> dependence(b(x,u)):
> store(terminal):
                           storing in terminal
assign({a = b+c})
; 
dependence(b = {u, x})
; 
put(
pd(b,x) = 1
); 
nonzero(b,c)
; 
\end{verbatim} 

\Maple\ also offers an alternative, namely to store a snapshot of the 
memory content in a file; this is, however, of little use in case of 
memory overflow.




\section{Computation of coverings}


\Jets\ can be useful when computing coverings, albeit not in automatic mode.
As an example we consider the Burgers equation~\cite[Ch. 6, Section 2.1]{VK}.
What follows is a commented full record of computation. 
\begin{verbatim}
> coordinates([x,t], [u], 3):
\end{verbatim} 
The Burgers equation is
\begin{verbatim}
> equation('u_t' = u_xx + u*u_x);

                          u_t = u_xx + u u_x
\end{verbatim}
For further reference, we denote by $B$ the corresponding diffiety.

Recall that Phase I of computation of coverings uses abstract vector 
fields defined on the product $\tilde B = B \times W$, where $W$ is an 
abstract manifold, finite-\ or infinite-dimensional.

\Jets' objects are vector fields if they are of one of the 
following five types: a)~names declared as such, i.e., names that have 
been written among arguments of the command \verb|vectorfields|;
b)~total derivatives; c)~partial derivatives; d)~commutators of
vector fields; e)~linear combinations of the above.






The total derivative with respect to $x$ is denoted by \verb|TD[x]|
(we mean the local one, i.e., one defined on the diffiety $B$).
It is possible to apply a total derivative to a function $f$ by saying either
\begin{verbatim}
> apply(TD[x],f);
                               TD(f, x)
\end{verbatim} 
or simply
\begin{verbatim}
> TD[x](f);
                               TD(f, x)
\end{verbatim} 
E.g., we have
\begin{verbatim}
> TD[x](x*t);
                                  t
\end{verbatim} 
Of course, no expanded form of the total derivative of the abstract 
function $f$ is available. 




Similarly, \verb|pd[u]| means the vector field $\partial/\partial u$ 
and can be applied to a function in one of the following ways:
\begin{verbatim}
> apply(pd[u],f);
> pd[u](f);
                                 d
                                 -- f
                                 du

                                 d
                                 -- f
                                 du
\end{verbatim} 
The commutator is \verb|comm|. As expected, $D_x$ commutes with $D_t$: 
\begin{verbatim}
> comm(TD[x], TD[t]);
                                  0
\end{verbatim} 
It is advisable to alias a shorter name to \verb|comm|, say $C$:
\begin{verbatim}
> alias(C = comm);

     I, u_xxx, u_xxt, u_xtt, u_ttt, u_xx, u_xt, u_tt, u_x, u_t, C

> C(TD[x], TD[t]);
                                  0
\end{verbatim} 




The nonlocal (extended) total derivatives $\tilde D_x$, $\tilde D_t$ on 
the covering, denoted \verb|Ex|, \verb|Et|, are the local total 
derivatives plus the corresponding {\it tails} $X,T$:
\begin{verbatim}
> Ex := TD[x] + X;
> Et := TD[t] + T;
                           Ex := TD[x] + X

                           Et := TD[t] + T
\end{verbatim} 


Now we must tell \Jets\ that $X,T$ serve as tails, i.e., that they are abstract 
vector fields and, moreover, purely nonlocal ones. 
This is done by a single command:
\begin{verbatim}
> tail(X,T);
                                 X, T
\end{verbatim} 
Tails are distinguished by the property that their action on local variables 
is zero:
\begin{verbatim}
> apply(X,u_xx);

                                  0
\end{verbatim} 







Let $R$ be the commutator $[\tilde D_x, \tilde D_t]$:
\begin{verbatim}
> R := C(Ex,Et);

                 R := TD(T, x) - TD(X, t) + C(X, T)
\end{verbatim} 
Vector fields $\tilde D_x$, $\tilde D_t$ determine a covering if and only 
if $R = 0$.

To explain the above formula for $R$, we note that
the second argument $g$ of \verb|apply(f,g)| can be a function, but can 
also be another vector field, in which case the vector field $f$ is 
applied to the coefficients of~$g$.
E.g., 
\begin{verbatim}
> apply(pd[u], u^2*pd[x]);
                              2 u pd[x]
\end{verbatim} 
A tail applied to a local total derivative gives zero:
\begin{verbatim}
> apply(T, TD[x]);
                                  0
\end{verbatim} 
Therefore, we have
$$     
[D_x,T] = {\rm apply}(D_x,T) - {\rm apply}(T,D_x)
 = {\rm apply}(D_x,T)
$$
which explains the simplification
\begin{verbatim}
> C(TD[x],T);
                               TD(T, x)
\end{verbatim} 
in the above formula for $R$.
This ends Phase I.

In Phase II we solve the equation $R = 0$ under some assumptions on the 
dependence of $X,T$:
\begin{verbatim}
> dependence(X(u,u_x), T(u,u_x));

                      X = {u_x, u}, T = {u_x, u}
> vars(R);
                        {u_x, u, u_xxx, u_xx}

> pd(R,u_xxx);
                               / d    \
                              -|---- X|
                               \du_x  /

> dependence(X(u));
                        X = {u}, T = {u_x, u}

> pd(R,u_xx*u_x);
                                 2
                                d
                               ----- T
                                   2
                               du_x
> T := T1*u_x + T0;
                           T := T1 u_x + T0

> tail(T1,T0);
                              X, T1, T0

> dependence(T1(u),T0(u));

                     T1 = {u}, T0 = {u}, X = {u}

> pd(R,u_xx);
                                  /d   \
                             T1 - |-- X|
                                  \du  /
> T1 := pd(X,u);
                                    d
                              T1 := -- X
                                    du
> pd(R,u_x^2);
                                / 2   \
                                |d    |
                              2 |--- X|
                                |  2  |
                                \du   /
> X := X1*u + X0;
                            X := X1 u + X0
> tail(X1,X0);                 
                              X1, X0, T0
                 
> dependence(X1(), X0());

                      X1 = {}, T0 = {u}, X0 = {}

> pd(R,u_x*u^2);
                                 3
                                d
                                --- T0
                                  3
                                du

> T0 := T02*u^2 + T01*u + T00;
                                 2
                      T0 := T02 u  + T01 u + T00

> tail(T02,T01,T00);
                         T00, T01, X0, T02, X1

> dependence(T02(), T01(), T00());

            X1 = {}, T02 = {}, T01 = {}, T00 = {}, X0 = {}

> pd(R,u_x*u);
                              2 T02 - X1

> T02 := 1/2*X1;
                            T02 := 1/2 X1

> pd(R,u_x);
                           T01 - C(X1, X0)

> T01 := -C(X0,X1);
                           T01 := C(X1, X0)

> F := collect(R,u);
                                           2
F := (- 1/2 C(X1, X0) + C(X1, C(X1, X0))) u

     + (-C(T00, X1) + C(X0, C(X1, X0))) u - C(T00, X0)
\end{verbatim} 
The coefficients of the powers of $u$ give the three equations (2.8) of 
loc. cit., albeit in different notation: $A = X_1$, $B = X_0$, $C = T_{00}$.
This ends Phase II.

In Phase III we choose the space $W$ coordinatized by nonlocal variables. 
One-dimensional coverings have one nonlocal variable, which we denote $w$:
\begin{verbatim}
> nonlocal(w);
                                  w
\end{verbatim} 
We restrict our demonstration to the case $X_1 = A \ne 0$. 
Then we can put $X_1 = \partial/\partial w$ (see loc. cit.), therefore:
\begin{verbatim}
> X1 := pd[w];
                             X1 := pd[w]

> X0 := b*pd[w];
                            X0 := b pd[w]

> T00 := c*pd[w];           
                           T00 := c pd[w]

> dependence(b(w), c(w));

                           b = {w}, c = {w}
\end{verbatim} 
and insert into $F$ above:
\begin{verbatim}
> G := collect(F, u, expand);

     /                     / 2   \      \
     |      /d   \         |d    |      |  2
G := |- 1/2 |-- b| pd[w] + |--- b| pd[w]| u
     |      \dw  /         |  2  |      |
     \                     \dw   /      /

       /                       / 2   \                \
       |/d   \                 |d    |   /d   \2      |
     + ||-- c| pd[w] + b pd[w] |--- b| - |-- b|  pd[w]| u
       |\dw  /                 |  2  |   \dw  /       |
       \                       \dw   /                /

               /d   \     /d   \
     - c pd[w] |-- b| + b |-- c| pd[w]
               \dw  /     \dw  /
\end{verbatim} 
The coefficients of the powers of $u$ provide us with the three 
expressions found in loc. cit.:
\begin{verbatim}
> coeff(coeff(G,u,2), pd[w]);
                                       / 2   \
                              /d   \   |d    |
                        - 1/2 |-- b| + |--- b|
                              \dw  /   |  2  |
                                       \dw   /

> coeff(coeff(G,u,1), pd[w]);

                                / 2   \
                     /d   \     |d    |   /d   \2
                     |-- c| + b |--- b| - |-- b|
                     \dw  /     |  2  |   \dw  /
                                \dw   /

> coeff(coeff(G,u,0), pd[w]);

                            /d   \     /d   \
                         -c |-- b| + b |-- c|
                            \dw  /     \dw  /
\end{verbatim} 
One-dimensional coverings correspond to functions $b,c$ such that
the above expressions are zero.
It is now a matter of routine to find them (see loc. cit. for the final 
result).
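As a sketch of this routine step (our own computation; see loc. cit. for 
the authoritative result): writing $b' = db/dw$, the first expression 
gives $b'' = \frac{1}{2} b'$, the third gives $c = \lambda b$ for a 
constant $\lambda$ wherever $b \ne 0$, and substitution into the second 
yields
$$
b' \left( \lambda + \frac{1}{2} b - b' \right) = 0,
$$
so that, in the case $b' \ne 0$, one gets $b' = \frac{1}{2} b + \lambda$ 
and hence $b = \alpha e^{w/2} - 2\lambda$, $c = \lambda b$.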




\section{Zero-curvature representations}


\Jets\ originated as a tool to compute zero-curvature representations
in automatic mode, using the method described in~\cite{M1}.
We introduce the commands performing this task by means of the 
example of the mKdV equation:
\begin{verbatim}
> with(linalg):
Warning, new definition for norm
Warning, new definition for trace

> coordinates([t,x],[u],12):
> equation('u_t' = u_xxx - 6*u^2*u_x);

                                         2
                        u_t = u_xxx - 6 u  u_x

> A := matrix(2,2, [a1,a2,1,-a1]);
> B := matrix(2,2, [b1,b2,b3,-b1]);
> R := matrix(2,2, [r,0,0,-r]);

                                [a1    a2 ]
                           A := [         ]
                                [1     -a1]

                                [b1    b2 ]
                           B := [         ]
                                [b3    -b1]

                                 [r    0 ]
                            R := [       ]
                                 [0    -r]
\end{verbatim}
The matrix $R$ is the characteristic matrix of~\cite{M1} in the normal form
with respect to conjugation (the Jordan normal form). 
The matrix $A$ is in the normal form with respect to gauge equivalence 
due to the stabilizer of $R$.
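Recall (cf.~\cite{M1}) that, in one common sign convention, matrices 
$A$, $B$ constitute a zero-curvature representation if
$$
D_t A - D_x B + [A, B] = 0
$$
holds by virtue of the equation; this is, essentially, the condition 
handled by the \verb|zero_curvature| procedure below.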
\begin{verbatim}
> S := zero_curvature(x = A, t = B, 1 = R):
> unknowns(a2,a1,b3,b2,b1,r):
> dependence(a1(u,u_x,u_xx), a2(u,u_x,u_xx),
    b1(u,u_x,u_xx), b2(u,u_x,u_xx), b3(u,u_x,u_xx),
    r(u,u_x,u_xx)):
> nonzero(a2,r):
\end{verbatim}
We assume $a_2$ to be nonzero, since otherwise the zero-curvature 
representation comes out as lower triangular, hence equivalent to a 
pair of conservation laws.
After a while, \verb|run(S)| stops at:
\begin{verbatim}
> run(S);
\end{verbatim}
\centerline{$\vdots$}
\begin{verbatim}
<25>   Put: 
                               3
                              d
                              --- b2 = 0
                                3
                              du

                   2          2      2           /d    \
          b2 = 4 a2  + 4 a2 a1  - 6 u  a2 - 2 a2 |-- a1| u_x
                                                 \du   /

<27>   resolving failed for   pd(a1,u)   nonlinear 

                             /d    \2
                             |-- a1|  - 1
                             \du   /

<27>   linear resolving failed for   pd(a1,u)

                            /d    \
                            |-- a1| a1 = u
                            \du   /

                                 FAIL
\end{verbatim}
We end up with two simple nonlinear equations that are easy to solve by 
hand: the first gives $\partial a_1/\partial u = \pm 1$, and then the 
second, $a_1\,\partial a_1/\partial u = u$, forces $a_1 = \pm u$. 
Taking $a_1 = u$ immediately leads to the 
well-known zero-curvature representation of mKdV with $a_2$ as a 
parameter (the spectral parameter):
\begin{verbatim}
> a1 := u;
                               a1 := u
> run(S);

<2>   Success!

> dependence();
                           a2 = {}, r = {}

> map(eval, op(A));
                              [u    a2]
                              [       ]
                              [1    -u]

> map(expand@eval, op(B));

         [          3                 2      2              ]
         [u_xx - 2 u  + 4 u a2    4 a2  - 2 u  a2 - 2 a2 u_x]
         [                                                  ]
         [           2                         3            ]
         [2 u_x - 2 u  + 4 a2       -u_xx + 2 u  - 4 u a2   ]
\end{verbatim}



\section*{Other features}


\Jets\ contains a number of other undocumented procedures. 
See {\tt Jets.s} for their definitions.



\section*{License}


\Jets\ is freeware, distributed under the GNU General 
Public License as published by the Free Software Foundation 
\verb|<http://www.gnu.org/copyleft/gpl.html>|.
In particular, it is distributed without any warranty, just in the 
hope that it will be useful. 
Whenever results of your scientific work depend on this software, you
should consider making a proper reference~\cite{Jets}. 





\section{Installing and starting Jets}

The distribution consists of two files: {\tt Jets.s} and {\tt Jets.t}. 
To install \Jets, proceed as follows:
\begin{enumerate}
\item Start a fresh \Maple\ session.
\item Read in the file {\tt Jets.s},
which contains the source code of all procedures.
There should be the following output on the screen:
\begin{verbatim}
                               JETS 4.9

          Differential calculus on jet spaces and diffieties

                        for Maple V Release 4

as of 18 December 2003

Blimit = 25000   ressize = 500   putsize = 40   maxsize = 20
\end{verbatim} 
If only the first line, JETS 4.9, is printed, then \Maple\ for your platform
uses different end-of-line characters.
Please open {\tt Jets.s} in an editor and fix the problem. 

If error messages are produced, then your copy of the file is corrupted 
and should be replaced.
\item Read in the file \verb|Jets.t|,
which contains typical examples of computations and tests them against 
correct results. 
Running \verb|Jets.t| should produce no output other than {\it O.K.} and 
a time stamp; otherwise \Jets\ is not compatible with your version of 
\Maple, or \verb|Jets.t| is outdated. 

The time stamp produced by the {\tt Jets.t} test gives an estimate 
of how fast \Jets\ will run on your hardware. 
A few seconds is perfect for average computations.
\end{enumerate}
The file {\tt Jets.m} can then be found in 
the \Maple\ directory. 
Next time, \Jets\ can be started simply by issuing the command
\begin{verbatim}
> read `Jets.m`; 
\end{verbatim} 




\begin{thebibliography}{9}
\bibitem{BB} A.V. Bocharov and M.L. Bronstein, 
Efficiently implementing two methods of the geometrical theory 
of differential equations: An experience in algorithm and 
software design, 
{\it Acta Appl. Math.} {\bf 16} (1989) 143--166.

\bibitem{He} W. Hereman, 
Review of symbolic software for the computation of Lie symmetries 
of differential equations, 
{\it Euromath Bulletin} {\bf 1} (1994) 45--82.

\bibitem{M1} M. Marvan, 
A direct procedure to compute zero-curvature representations. 
The case sl$_2$, in: {\it Secondary Calculus and Cohomological Physics}, 
Proc. Conf. Moscow 1997 (electronic version: ELibEMS, 
http://www.emis.de/proceedings/SCCP97, 1998) 10 pp.

\bibitem{Jets} M. Marvan, 
Jets. A software for differential calculus on jet spaces and 
diffieties, ver.~4.9 (December~2003) for Maple V Release 4.

\bibitem{Mesh} A.G. Meshkov, 
Computer package for investigation of the complete integrability, 
in: A.M. Samoilenko, ed., {\it Symmetry in Nonlinear Mathematical 
Physics}, Proc. Third Int. Conf., Kyiv, 1999 
(Inst. Math. N.A.S. Ukraine, Kyiv, 2000) 35--46.

\bibitem{Top} V.L. Topunov, 
Reducing systems of linear differential equations to a passive form, 
{\it Acta Appl. Math.} {\bf 16} (1989) 191--206. 

\bibitem{V} A.M. Vinogradov, 
Local symmetries and conservation laws, 
{\it Acta Appl. Math.} {\bf 2} (1984) 21--78.

\bibitem{VK} A.M. Vinogradov, I.S. Krasil'shchik, eds., 
{\it Symmetries and Conservation Laws of Systems of Equations of 
Mathematical Physics} 
(Amer. Math. Soc., Providence, Rhode Island, 1999).

\bibitem{VKL} A.M. Vinogradov, I.S. Krasil'shchik and V.V. Lychagin, 
{\it Geometry of Jet Spaces and Nonlinear Partial Differential 
Equations} (Gordon and Breach, New York, 1986).
\end{thebibliography}


%%\end{document}






\section{Implementation}



Like any other \Maple\ package, \Jets\ is a set of procedures;
any computation with \Jets\ is interactive and consists in 
invoking a suitable procedure on the corresponding input data.

Unlike \Maple, \Jets\ introduces a number of declarations.
While, for instance, any dependence of a function on variables 
must be explicit in \Maple\ (one must write, say, \verb|f(x,y,z)| 
at every occurrence of the identifier \verb|f|), in \Jets\ 
the same effect is achieved by the declaration 
\verb|dependence(f(x,y,z))|, made just once. 


\section{Variables}


For unexplained notions from the geometry of partial differential equations 
(jets, diffieties, etc.) see \cite{VK}. 
For unexplained \Maple\ commands consult the online help to \Maple.

Most of the procedures will not work until 
{\it base} and {\it fibre variables}
(coordinates on the underlying fibred manifold) are specified 
by using the command {\tt coordinates}. 
Unlike in version 3.9, there are no default settings.

To introduce two base variables, $x,t$ and a single fibre 
variable $u$, enter
\begin{verbatim}
> coordinates([t,x], [u], 5);

I, u_5t, u_4t_x, u_tttxx, u_ttxxx, u_t_4x, u_5x, u_4t, u_tttx, u_ttxx,
    u_txxx, u_4x, u_ttt, u_ttx, u_txx, u_xxx, u_tt, u_tx, u_xx, u_t, u_x
\end{verbatim} 
As a side effect, this command creates aliases for {\it jet variables}
(coordinates on the jet space) up to the order given by the last argument 
(five in this example). 
See the \Maple\ help for explanation of what is an alias.

When in doubt, one can reason as follows: 
To have, say, the eighth-order jet variable $v_{8y}$ at one's disposal, 
$v$ must be a fibre variable, $y$ must be a base variable, and the last 
argument must be at least eight;
the declaration then can be, e.g., 
\verb|coordinates([x,y],[u,v],10)|.

Creation of aliases is only possible when names given to base variables 
are strings, i.e., not indexed. 
Indexed fibre variables are allowed (try and see what happens).
The creation of aliases is suppressed when the third argument is omitted.

There is no upper bound on the number of base and fibre variables.
Every new declaration overrides the former. 

Internally, jet variables are represented as unevaluated calls to the 
procedure {\tt jet}. For example, 
\begin{verbatim}
> jet(u,t);
                                     u_t
\end{verbatim} 
The output is the name \verb|u_t| that has been aliased to the expression 
{\tt jet(u,t)}. 
For higher order jets one has likewise
\begin{verbatim}
> jet(u,x^3*t);
                                    u_txxx

> jet(u,x^4*t);
                                    u_t_4x
\end{verbatim} 
The rule is that more than triple repetition of a base variable is 
indicated by a numeric coefficient and separated by underscores.
Needless to say, one should normally input jet variables as the
aliases, e.g., \verb|u_t|, \verb|u_txxx|, \verb|u_t_4x|.

Jet variables for which no aliases were created appear in
their internal representation.
This has no impact on the validity of results.
To create missing aliases, reinsert the {\tt coordinates} command with 
the third argument sufficiently high.

Be careful to input your variables in exactly the same form as they 
appear in the output of the {\tt coordinates} command.
E.g., if {\tt u\_tx} is aliased, then {\tt u\_xt} is {\it not\/}.
(The name {\tt u\_xt} then represents a function of unassigned dependence.)

In the above inputs, expressions like \verb|x^3*t| are {\it counts}. 
Counts are traditional for the differential calculus 
(cf. $\partial^4 u / \partial x^3 \partial t$), so there should be no
confusion. 
\Maple\ handles them by the fast multiplication routines of its kernel.
Formally, a count is an element of the free commutative semigroup over
the set of variables.
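Since counts multiply commutatively, the order of factors in a count is 
immaterial; for instance, one would expect
\begin{verbatim}
> jet(u,t*x) - jet(u,x*t);
                                  0
\end{verbatim} 
both inputs denoting the same jet variable \verb|u_tx|.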



\section{Total and partial derivatives}


{\it Total derivatives\/} are named {\tt TD} in \Jets.
The procedure {\tt TD} accepts the same counts as {\tt jet} does 
(see above) but is not prettyprinted.
E.g., 
\begin{verbatim}
> TD(f,x); TD(f,x^3*t);
                                   TD(f, x)

                                        3
                                 TD(f, x  t)
\end{verbatim} 
are total derivatives $D_x f$ and $D_{txxx} f$ of a function $f$.
Any name different from names of base and fibre variables (and names 
special to \Jets\ and \Maple) is regarded as a name of a differentiable 
function.

Unlike {\tt jet}, total derivatives {\tt TD} are true differentiations and 
accept any algebraic expression as the first argument. 
They are linear, and satisfy the Leibniz rule as well as the chain rule:
\begin{verbatim}
> TD(f + g, x);
TD(f*g, x);
TD(sin(f), x);
                              TD(f, x) + TD(g, x)

                            TD(f, x) g + f TD(g, x)

                               cos(f) TD(f, x)
\end{verbatim} 

\Jets\ does not automatically evaluate total derivatives of 
functions, since these are typically huge expressions 
carrying a rather tiny portion of useful information.
This is especially true for higher order total derivatives.

However, when appropriate, the evaluation of total 
derivatives can be forced:
\begin{verbatim}
> dependence(f(t,x,u,u_x)); 
evalTD(TD(f,x));
                            f = {x, u_x, t, u}

                    /d   \   /d   \       / d    \     
                    |-- f| + |-- f| u_x + |---- f| u_xx
                    \dx  /   \du  /       \du_x  /  
\end{verbatim} 
The first command declares $f$ as a function depending on $t,x,u,u_x$ only.
Without such a declaration, \verb|evalTD| would have no effect. 
To declare a constant $c$, one can say either \verb|dependence(c())| or 
simply \verb|parameter(c)|.

The output of the second command involves {\it partial derivatives}.
Partial derivatives are input as follows:
\begin{verbatim}
> pd(g,x*u_x^2);
                                    3
                                   d
                                -------- g
                                    2
                                du_x  dx
\end{verbatim}
Observe that partial derivatives admit counts composed of jet variables.

Partial derivatives are our main tool to investigate expressions in total 
derivatives.
Actually, partial derivatives of total derivatives can be evaluated without 
evaluating the latter.
Typical examples are ($f$ is assumed to depend on $t,x,u,u_x$):
\begin{verbatim}
> pd(TD(f,x),u_xx); 
                                   d
                                  ---- f
                                  du_x
\end{verbatim}
The output may contain another total derivative:
\begin{verbatim}
> pd(TD(f,x),u_x); 
                              d           /d   \
                          TD(---- f, x) + |-- f|
                             du_x         \du  /
\end{verbatim}
(cf. the evaluated $D_x f$ above).
\begin{verbatim}
> pd(TD(f,x^2),u_xx);
                               d           /d   \
                         2 TD(---- f, x) + |-- f|
                              du_x         \du  /
\end{verbatim}




\section{Conversions}


It is often desirable to make a substitution for a fibre variable.
For instance, let us verify that 
$a = 2 k\,{\rm sech}^2\bigl(\sqrt k (x + 4 k t)\bigr)$ is a 
solution of the KdV equation $u_t = u_{xxx} + 6 u u_x$ 
(the famous soliton solution), $k$ being an arbitrary constant.
One cannot say \verb|subs(u = a, -u_t + u_xxx + 6*u*u_x)|, since the procedure 
{\tt jet} requires its first argument to be a fibre variable, and therefore
expressions like \verb|jet(a,x)| do not make sense.
Instead one should apply the procedure {\tt convert} with {\tt TD} as the 
second argument:
\begin{verbatim}
> a := 2*k*(sech((k)^(1/2)*(x + 4*k*t)))^2:
> dependence(k()):
> T := convert(-u_t + u_xxx + 6*u*u_x, TD, u = a):
> simplify(T);
                                    0 
\end{verbatim}
The procedure effectively replaces {\tt u} with {\tt a}, \verb|jet(u,x)| 
with \verb|TD(a,x)|, etc. 
A much more powerful command, {\tt transform}, is described in one of the 
following sections.

With the second argument {\tt diff} instead of {\tt TD}, the same procedure 
{\tt convert} replaces \Jets' partial derivative {\tt pd} with the standard 
\Maple\ derivative {\tt diff}:
\begin{verbatim}
> dependence(a(x,u,u_x,u_xx));

                        a = {u_xx, u_x, u, x}

> convert(a*pd(a,u_x^2*u_xx), diff, u_x = u1, u_xx = u2);
                              /    3                   \
                              |   d                    |
              a(u1, u2, u, x) |-------- a(u1, u2, u, x)|
                              |       2                |
                              \du2 du1                 /
\end{verbatim}
Here the third and fourth arguments denote a substitution to be applied
to jet variables (recall that jet variables are unevaluated function calls, 
while {\tt diff} requires the indeterminates to be names).
Expressions obtained this way are ready for use in standard \Maple\ 
procedures, e.g., {\tt dsolve} and {\tt pdesolve}.

The last type of conversion we consider in this section is performed 
automatically any time an expression is prettyprinted.
Namely, the standard \Maple\ procedure {\tt print} has been instructed to
convert {\tt pd} to {\tt Diff}, which is an inert (unevaluated)
differentiation in \Maple.
This explains why \Jets' partial derivatives are prettyprinted.

But {\tt Diff} has also been redefined: it has been assigned the same 
meaning as {\tt pd}, which ensures that the copy-and-paste feature of 
\Maple\ works with {\tt pd}.
Namely, if the display option is set to ``typeset notation,'' then one can 
copy and paste an output into the input area to get a syntactically correct 
input of the same meaning (although sometimes in the inert form).
With {\tt Diff} redefined to {\tt pd}, the output of \verb|pd(a, u_x^2*u_xx)|
becomes \verb|Diff(a,u_x,u_x,u_xx)| with exactly the same meaning: 
\begin{verbatim}
> Diff(a,u_x,u_x,u_xx) - pd(a, u_x^2*u_xx);

                                  0
\end{verbatim}




\section{Diffieties}



A computation dealing with, say, the KdV equation should start with the
declaration
\begin{verbatim}
> equation (u_t = u_xxx + 6*u*u_x);
\end{verbatim} 
The command accepts any number of arguments separated by commas.
Every new {\tt equation} overrides the former.
It may be necessary to use quotes to prevent evaluation on the left hand 
side:
\begin{verbatim}
> equation ('u_t' = u_xxx + 6*u*u_x);
\end{verbatim} 
It is also possible to remove any previous equation completely:
\begin{verbatim}
> equation ();
\end{verbatim} 
Every equation must be resolved with respect to a {\it leading} derivative
$q_X$.
This means that neither $q_X$ nor any of its derivatives $q_{XY}$
occurs on the right-hand side. 
For example, $u_t$ and $u_{xxx}$ are leading derivatives of the KdV 
equation $u_t = u_{xxx} + 6 u u_x$, while $u_x$ is not, since $u_{xxx}$ is 
forbidden on the right-hand side of~$u_x = (u_t - u_{xxx})/(6u)$.

After an equation is introduced, certain jet variables obtain values: 
\begin{verbatim}
> u_t;
                               u_xxx + 6 u u_x
> u_tx;
                                     2
                         u_4x + 6 u_x  + 6 u u_xx
> u_tt;
                 2                                      2       2
   u_6x + 18 u_xx  + 30 u_x u_xxx + 12 u u_4x + 72 u u_x  + 36 u  u_xx
\end{verbatim} 
Actually, these are the results returned by the procedure {\tt jet} 
(recall that the above inputs are equivalent to invoking \verb|jet(u,t)|,
\verb|jet(u,t*x)|, \verb|jet(u,t^2)|, respectively).

Systems of PDEs are introduced by giving more than one argument.
In that case, for {\tt jet} to give correct (unique) results, the system 
must be {\it passive}, which roughly speaking means that there are no 
non-trivial integrability conditions resulting from the system and its 
differential consequences (see~\cite{Top}).

The complexity of the expressions grows dramatically with the jet order. 
This is why computed values are (transparently to the user) stored in 
a table and then retrieved rather than recomputed every time when needed. 
The table is emptied automatically when a new equation is introduced. 

However, if the right hand side of an equation depends on a function $f$
and the function $f$ is changed, then it is the user who must 
take care of refreshing the table.

For example:
\begin{verbatim}
> dependence(f(x,u));
                              f = {x, u}

> equation('u_t' = u_xxx + f*u_x);

                         u_t = u_xxx + f u_x
> u_tx;
                          /d   \   /d   \    2
               u_4x + u_x |-- f| + |-- f| u_x  + f u_xx
                          \dx  /   \du  /
\end{verbatim} 
Now we make an assignment to $f$:
\begin{verbatim}
> f := x*u;
                               f := x u
\end{verbatim} 
Then the jet variables will keep their previous values
\begin{verbatim}
> u_tx;
                          /d   \   /d   \    2
               u_4x + u_x |-- f| + |-- f| u_x  + f u_xx
                          \dx  /   \du  /
\end{verbatim} 
unless explicit evaluation is invoked
\begin{verbatim}
> eval(u_tx);
                                       2
                   u_4x + u_x u + x u_x  + x u u_xx
\end{verbatim} 
or the command \verb|refresh| is invoked to reevaluate the table.

\begin{verbatim}
> refresh();
> u_tx;
                                       2
                   u_4x + u_x u + x u_x  + x u u_xx
\end{verbatim} 
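
The caching-and-invalidation behaviour just described can be sketched in a 
few lines of Python (a toy model of ours; the names {\tt jet\_cache}, 
{\tt lookup\_jet}, and the string-based ``computation'' are hypothetical and 
not part of \Jets):

```python
# Toy model of the value cache: computed jet expressions are stored in a
# table and invalidated only explicitly, mimicking the session above.
# (All names here are hypothetical, not part of Jets.)

jet_cache = {}

def compute_jet(name, env):
    # Stand-in for the real computation of a jet variable's value; we
    # merely substitute the current value of f into a fixed template.
    return "u_4x + {f}*u_xx".format(f=env["f"])

def lookup_jet(name, env):
    # Values are served from the cache, so a later change of f is NOT
    # reflected until the cache is refreshed.
    if name not in jet_cache:
        jet_cache[name] = compute_jet(name, env)
    return jet_cache[name]

def refresh():
    # Analogue of refresh(): drop every cached value.
    jet_cache.clear()

env = {"f": "f"}
first = lookup_jet("u_tx", env)   # computed with symbolic f
env["f"] = "x*u"                  # the user changes f ...
stale = lookup_jet("u_tx", env)   # ... but the cached value persists
refresh()
fresh = lookup_jet("u_tx", env)   # recomputed with the new f
```

The point of the sketch is that {\tt stale} still contains the symbolic $f$,
exactly as {\tt u\_tx} did in the session above before \verb|refresh()|.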







\section{Linear equations in total derivatives. Symmetries of KdV}




In this section we introduce our first concrete example: computation of
fifth-order symmetries of the KdV equation.
As usual, the KdV equation is introduced as 
\begin{verbatim}
> equation('u_t' = u_xxx + 6*u*u_x);

                        u_t = u_xxx + 6 u u_x
\end{verbatim} 
Symmetries (more precisely, their generating functions $U$) are solutions of
the {\it determining equation\/} $S = 0$, where $S$ is the result of applying 
the operator of universal linearization to~$U$:
\begin{verbatim}
> S := symmetries(u = U);
                                             3
            S := TD(U, t) - 6 U u_x - TD(U, x ) - 6 TD(U, x) u
\end{verbatim} 
($U$ corresponds to the fibre variable $u$, as 
indicated by the argument $u = U$.)
The PDE referred to is always the one introduced by the last {\tt equation} command.


Likewise, the determining equation for generating functions of 
conservation laws is $L = 0$ where
\begin{verbatim}
> L := laws(1 = V);
                                      3
              L := -TD(V, t) + TD(V, x ) + 6 u TD(V, x)
\end{verbatim} 
($V$ denotes the unknown corresponding to the first equation,
as indicated by the relation $1 = V$.)

In general, determining equations are equations in total derivatives.
Those for symmetries and conservation laws are linear,
but \Jets\ allows one to solve nonlinear equations as well.
Determining systems for coverings and zero-curvature representations, 
for example, are nonlinear.

The mission of \Jets\ is to solve equations in total derivatives, linear or
nonlinear.
The solution process repeats two steps:

(1) deriving simple differential consequences;

(2) resolving and back-substituting the simplest of them;

\noindent
until the input expressions become zero.
Essentially, both steps can be automated.
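
The resolve-and-back-substitute part of this loop can be illustrated by a 
toy Python model (ours, not \Jets' code) in which the ``equations'' are 
linear expressions in numeric unknowns rather than PDEs:

```python
from fractions import Fraction

# Toy model of step (2): "equations" are linear expressions
# a1*x1 + ... + an*xn + c, stored as dicts {name: coeff} with the
# constant term under the key 1.  We repeatedly resolve the simplest
# equation and back-substitute until everything becomes zero.
# (All names here are ours; this is not Jets code.)

def resolve_simplest(eqs):
    solution = {}
    pending = [dict(e) for e in eqs]
    while pending:
        # the simplest expression = the one with fewest unknowns
        pending.sort(key=lambda e: len([k for k in e if k != 1]))
        eq = pending.pop(0)
        names = [k for k in eq if k != 1]
        if not names:
            # the expression reduced to a constant; it must be zero
            assert eq.get(1, 0) == 0, "inconsistent system"
            continue
        if len(names) > 1:
            raise ValueError("this toy solver needs a triangular system")
        x = names[0]
        solution[x] = Fraction(-eq.get(1, 0), eq[x])
        for e in pending:  # back-substitute into the remaining equations
            if x in e:
                e[1] = e.get(1, 0) + e.pop(x) * solution[x]
    return solution

# x + y - 3 = 0 and y - 1 = 0: resolve y = 1 first, then x = 2
sol = resolve_simplest([{"x": 1, "y": 1, 1: -3}, {"y": 1, 1: -1}])
```

In \Jets\ the resolved objects are, of course, partial derivatives of the 
unknown functions rather than numbers, but the control flow is the same.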

Turning back to our example, we need to solve the equation 
$S = 0$ for~$U$. 
As a smooth function, $U$ can depend only on a finite number of
variables, i.e., there must be an upper bound on the jet order of~$U$.
Assume that the bound is five.
A fifth-order generating function is one that depends on 
$x$, $t$, $u$, $u_x, \dots, u_{5x}$. 
Accordingly, our assumption is made known to the computer as follows:
\begin{verbatim}
> dependence(U(x,t,u,u_x,u_xx,u_xxx,u_4x,u_5x));

            U = {t, x, u, u_x, u_xx, u_xxx, u_4x, u_5x}
\end{verbatim} 
To run in automatic regime, \Jets\ also needs to know what the unknowns are:
\begin{verbatim}
> unknowns(U);
                                  U
\end{verbatim} 
Now launch the computation: 
\begin{verbatim}
> run(S);

<0>   Put: 
                                2
                               d
                             ------ U = 0
                                  2
                             du_5x

<1>   Put: 
                               2
                              d
                          ----------- U = 0
                          du_5x du_4x

<1>   Put: 
                                2
                               d
                          ------------ U = 0
                          du_5x du_xxx
\end{verbatim}
etc.
The report lines have the syntax 
\bigskip

\hbox{\verb|<|{\it elapsed time\/}\verb|>   Put: |}

\bigskip
\noindent
The centered formulas following every report line are assignments to partial 
derivatives, to be explained below.

The whole computation takes 49 such steps and ends with 
\begin{verbatim}
<77>   Put: 
d                    /d   \           /d   \              2 / d    \
-- U = 6 U u_x + 3/2 |-- U| u_xxx + 3 |-- U| u_x u - 6 u_x  |---- U|
dt                   \du  /           \du  /                \du_x  /

                  /  d    \      /  d    \
     - 6 u_x u_5x |----- U| - 30 |----- U| u u_x u_xxx
                  \du_5x  /      \du_5x  /

               2 /  d    \                /  d     \
     - 15 u_xxx  |----- U| - 15 u_x u_xxx |------ U|
                 \du_5x  /                \du_xxx  /

          /  d     \      2
     - 18 |------ U| u u_x
          \du_xxx  /

<79>   Success! 
\end{verbatim}

To retrieve the results, one should proceed as follows.
\begin{verbatim}
> dependence();

                 {U(t, u_5x, u_x, u_xx, u, u_xxx, x)}
\end{verbatim}
The result shows which jet variables the symmetry 
generators depend on (in our case, all but $u_{4x}$).
Then invoke the command \verb|clear(pds)|
\begin{verbatim}
> Z := clear(pds);

         2         2            2              2
        d         d            d              d 
Z := {------ U, ------- U, ------------ U, ----------- U, 
           2          2    du_5x du_xxx    du_xxx du_x
      du_5x     du_xxx

     /   2     \                 /    2      \
     |  d      |     /  d     \  |   d       |      /  d    \
     |------- U| - 6 |------ U|, |--------- U| - 10 |----- U|,
     \du_x du  /     \du_xxx  /  \du_xxx du  /      \du_5x  /
\end{verbatim}
\centerline{$\vdots$}
\medskip 
Here $Z = \{Z_i\}$ is a set of expressions, which we truncated because of
its considerable size. 
The meaning is that the unknown $U$ must satisfy differential equations 
$Z_i = 0$.
The command also removes all assignments to partial derivatives 
(otherwise $Z$ would be a set of zeroes).
This is why \verb|clear(pds)| should not be called repeatedly.

The equations $Z_i = 0$ must be solved by hand, keeping in mind that the 
integration ``constants'' depend on the ``remaining'' variables.
In our example, the ODE $\partial^2 U/\partial u_{5x}^2 = 0$ is 
immediately solved as $U = U_1 u_{5x} + U_0$, where $U_i$ are arbitrary 
functions depending neither on $u_{4x}$ (because $U$ did not) nor on
$u_{5x}$ (which was the independent variable of the ODE in question). 
\begin{verbatim}
> U := U1*u_5x + U0;
> dependence(U1(t,x,u,u_x,u_xx,u_xxx), U0(t,x,u,u_x,u_xx,u_xxx));

                          U := U1 u_5x + U0

       U1 = {t, x, u, u_x, u_xx, u_xxx}, U0 = {t, x, u, u_x, u_xx, u_xxx}
\end{verbatim}
The next step may be
\begin{verbatim}
> map(expand, Z);
\end{verbatim}
which shows all the equations after the assignment $U = U_1 u_{5x} + U_0$;
or better
\begin{verbatim}
> map(expand@pd, Z, u_5x);
\end{verbatim}
which gives the (more substantial) equations that arise as coefficients of 
$u_{5x}$; or even better
\begin{verbatim}
> unknowns(U1,U0):
> run(Z);
\end{verbatim}
\centerline{$\vdots$}
\begin{verbatim}
> dependence();
                   U1 = {}, U0 = {t, u, u_x, u_xx, u_xxx, x}
> Z := clear(pds);
        2          2             2
       d          d             d
Z := {----- U0, ------- U0, ----------- U0,
          2           2     du_xxx du_x
      du_x      du_xxx

    /   2      \                  / 2    \
    |  d       |     /  d      \  |d     |
    |------- U0| - 6 |------ U0|, |--- U0| - 60 U1 u_x,
    \du_x du   /     \du_xxx   /  |  2   |
                                  \du    /

    /d    \                         2 /  d      \           /d    \
    |-- U0| + 5 u_x u_xxx U1 + 3 u_x  |------ U0| - 1/2 u_x |-- U0|,
    \dx   /                           \du_xxx   /           \du   /

    /    2       \
    |   d        |          /  d     \
    |--------- U0| - 10 U1, |----- U0| - 20 U1 u_x,
    \du_xxx du   /          \du_xx   /
\end{verbatim}
\centerline{$\vdots$}
\medskip 
Prior to the new {\tt run}, one must not forget to declare the new unknowns 
$U_1,U_0$ 
(in this order, since $U_0$ depends on a larger set of variables than $U_1$
does).

We highly recommend that the reader perform the whole computation.
Omitting details, we state the final answer: the fifth-order symmetries 
of KdV are 
\def\fr#1/#2 {\frac{#1}{#2}}
\begin{eqnarray}&&
U = \fr1/6 a_2 + \fr1/9 a_4 u  \nonumber
\\&&\hbox to 2em{}
 + (a_1 + a_2 t + \fr1/18 a_4 x + (a_3 + a_4 t) u + a_5 u^2) u_x 
\\&&\hbox to 2em{}
 + \fr2/3 a_5 u_x u_{xx} 
 + (\fr1/6 a_3 + \fr1/6 a_4 t + \fr1/3 a_5 u) u_{xxx} 
 + \fr1/30 a_5 u_{5x},  \nonumber
\end{eqnarray}
where $a_1, a_2, a_3, a_4, a_5$ are constants.
This means that the Lie algebra of fifth-order symmetries is five-dimensional,
with basis formed by
\begin{eqnarray*}
U_1 &=& u_x,  \\
U_2 &=& \fr1/6 + t u_x, \\ 
U_3 &=& u u_x + \fr1/6 u_{xxx}, \\ 
U_4 &=& \fr1/9 u + (\fr1/18 x + t u) u_x + \fr1/6 t u_{xxx}, \\
U_5 &=& u^2 u_x + \fr2/3 u_x u_{xx} + \fr1/3 u u_{xxx} + \fr1/30 u_{5x}.
\end{eqnarray*}
The Jacobi bracket (the commutator of symmetries in terms of their 
generating functions) is computed by the procedure {\tt Jacobi}.
Its two arguments are the generating functions formatted as lists of 
expressions.
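For a scalar equation, the bracket computed by {\tt Jacobi} is the usual 
Jacobi bracket of generating functions,
$$
\{\varphi,\psi\}
 = \sum_\sigma D_\sigma(\varphi)\,\frac{\partial\psi}{\partial u_\sigma}
 - \sum_\sigma D_\sigma(\psi)\,\frac{\partial\varphi}{\partial u_\sigma},
$$
where $\sigma$ runs over symmetric multiindices and $D_\sigma$ denotes the 
corresponding composition of total derivatives; the outputs below are 
easily checked against this formula by hand.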
E.g.,
\begin{verbatim}
> Jacobi([u_x], [1/6 + t*u_x]);
                                   [0]

> Jacobi([u_x], [u*u_x + 1/6*u_xxx]);

                                   [0]

> map(expand, Jacobi([1/6 + t*u_x], [u*u_x + 1/6*u_xxx]));

                                [1/6 u_x]
\end{verbatim}
In the case of systems of equations, the ordering of the lists is determined
by the order in which the corresponding fibre variables appeared in the 
{\tt coordinates} declaration.


Obviously, {\tt run} can work reliably only if it recognizes zero.
An expression $f$ is considered zero if and only if the output of
\verb|simpl(f)| is {\tt 0}, where the default definition of \verb|simpl(f)| 
is \verb|normal(f)|.
Under this definition, \verb|simpl| reliably recognizes zero in the domain
of rational functions, which was the case with the KdV equation.
In other domains, \verb|simpl| should be redefined appropriately.



\section{Deriving differential consequences}


Here we explain how \verb|run| performs the step (1): deriving the simplest
differential consequences.
We use the KdV example of the preceding section.

An expression $S$ in total derivatives is better kept unevaluated as long 
as possible (if in doubt, try \verb|evalTD(S)|).
So, instead of evaluating $S$, we use partial differentiation:
if $S$ is zero, then all derivatives of $S$ are zero as well.
It is then important to know which variables $S$ depends on. 
\begin{verbatim}
> coordinates([t,x],[u],8);
vars(S);

      {t, u_6x, x, u, u_7x, u_8x, u_xxx, u_xx, u_4x, u_5x, u_x}
\end{verbatim} 
(the first command creates the aliases $u_{6x}, u_{7x}$, and $u_{8x}$).
Among these variables, $u_{8x}$ is of highest order.
When differentiating $S$ with respect to $u_{8x}$, one may expect that
the result will be a rather simple expression. 
And indeed,
\begin{verbatim}
> pd(S,u_8x);
                                      0
\end{verbatim} 
which shows that $S$ is actually independent of $u_{8x}$ (all occurrences of 
$u_{8x}$ would cancel out under {\tt evalTD} and proper simplification).
The next variable in line is $u_{7x}$: 
\begin{verbatim}
> T := pd(S,u_7x);
                                        d
                          T := - 3 TD(----- U, x)
                                      du_5x
\end{verbatim} 
This result can be further processed in the same way.
One easily checks that $T$ depends on $u_{6x}$ at most, and then 
the derivative
\begin{verbatim}
> pd(T,u_6x);
                                  /   2    \
                                  |  d     |
                              - 3 |------ U|
                                  |     2  |
                                  \du_5x   /
\end{verbatim} 
gives the first meaningful 
piece of knowledge: $\partial^2 U/\partial u_{5x}^2 = 0$ (i.e.,
$U$ linearly depends on $u_{5x}$).
This explains the first intermediate result reported by \verb|run|  
(see the preceding section):
\begin{verbatim}
> run(S);

1.   <1>   Put: 
                                2
                               d
                             ------ U = 0
                                  2
                             du_5x
\end{verbatim}
In fact, \verb|run| makes an assignment 
$\partial^2 U/\partial u_{5x}^2 := 0$.
Assignments to partial derivatives will
be discussed in one of the following sections. 

The algorithm to automate derivation of differential consequences 
is implemented as the procedure \verb|derive|
(which is one of the subroutines called by \verb|run|):
\begin{verbatim}
> derive(S);
                                  /   2    \
                                  |  d     |
                              - 3 |------ U|
                                  |     2  |
                                  \du_5x   /
\end{verbatim} 

The procedure \verb|derive| recursively computes derivatives of the input 
expression and returns nonzero expressions of minimal size.
More precisely, \verb|derive| stops if the subsequent derivative either
vanishes or fails to decrease the size.
By default, {\tt size}$(f)$ is the number of unknowns and derivatives of 
unknowns that $f$ depends on, plus length$(f) \times 10^{-9}$.
Hence, minimal expressions are those with the minimal number of unknowns, 
with ties broken by length.

The derivation will terminate (produce a result in finite time) provided 
that {\tt size(}$f${\tt )} returns a nonnegative integer for every input 
expression $f$.
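
A toy Python model of this recursion (ours, not \Jets' actual code) may 
clarify the stopping criterion; here expressions are polynomials in named 
variables, stored as dicts mapping monomials to coefficients:

```python
# Toy model of derive() (our sketch, not Jets' code): a polynomial is a
# dict {monomial: coefficient}, a monomial a frozenset of (var, exp) pairs.

def pd(f, x):
    """Partial derivative of the polynomial f with respect to x."""
    out = {}
    for mon, c in f.items():
        d = dict(mon)
        if x in d:
            e = d.pop(x)
            if e > 1:
                d[x] = e - 1
            new_mon = frozenset(d.items())
            out[new_mon] = out.get(new_mon, 0) + c * e
    return {m: c for m, c in out.items() if c != 0}

def size(f, unknowns):
    # number of unknowns occurring in f, expression length as tiebreak
    present = {v for mon in f for (v, _) in mon}
    return len(present & unknowns) + len(str(sorted(map(sorted, f)))) * 1e-9

def derive(f, variables, unknowns):
    """Differentiate recursively; stop when the derivative vanishes or
    fails to decrease the size, and return the minimal consequence."""
    for x in variables:          # in Jets: highest-order jets first
        g = pd(f, x)
        if g and size(g, unknowns) < size(f, unknowns):
            return derive(g, variables, unknowns)
    return f

# 3*A*x^2 + B, unknowns A and B: differentiating twice in x leaves the
# smaller consequence 6*A, analogous to reducing S to -3*dU/du_5x^2 above
f = {frozenset({("A", 1), ("x", 2)}): 3, frozenset({("B", 1)}): 1}
consequence = derive(f, ["x"], {"A", "B"})
```

The real {\tt derive} differentiates with respect to jet variables and 
measures size as described above; the toy keeps only the recursion shape.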
 
In order to keep the space and time complexity low, every step in the 
derivation process uses only the highest-order jet derivatives
the input expression depends on.
Therefore, derivation is in fact restricted to maximal elements of the 
set of jet variables ordered as follows:
$$
u_X < v_Y \quad\Leftrightarrow\quad u = v \hbox{ and } X \hbox{ divides } Y
$$
(a multiindex $X$ divides $Y$ if each variable occurs in $X$ at most as 
many times as it occurs in $Y$).
After maximal elements are exhausted, the procedure continues with 
maximal elements among the remaining variables, etc.
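
In Python terms, the divisibility ordering and the selection of maximal jet 
variables could be sketched as follows (our own names and data layout; 
\Jets\ of course works with \Maple\ structures):

```python
# Divisibility of symmetric multiindices (a sketch of ours, not Jets'
# code): a multiindex is a dict counting occurrences of base variables,
# e.g. the multiindex of u_xxt is {"x": 2, "t": 1}.

def divides(X, Y):
    """X divides Y iff each variable occurs in X at most as often as in Y."""
    return all(Y.get(v, 0) >= e for v, e in X.items())

def maximal(jet_vars):
    """Jet variables (u, X) maximal w.r.t. the ordering u_X < v_Y."""
    return [
        (u, X) for (u, X) in jet_vars
        if not any(u == v and X != Y and divides(X, Y) for (v, Y) in jet_vars)
    ]

# u_x divides both u_xxx and u_xt, so only the latter two are maximal
jets = [("u", {"x": 3}), ("u", {"x": 1, "t": 1}), ("u", {"x": 1})]
top = maximal(jets)
```

Derivation would then proceed with respect to the variables in {\tt top} 
only, postponing the dominated variable $u_x$ to a later stage.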

Still there may exist maximal jet variables $e$ that produce a large 
expression $\partial S/\partial e$.
Nonlocal variables of coverings are typical examples.
Fortunately, these variables can be safely excluded from derivation 
%(certainly at the first stages), 
since the derivative $\partial S/\partial e$ would be left unused anyway.
To exclude certain fibre variables from derivation (along 
with all jet variables they generate), use

\bigskip\noindent
\verb|noderive(|{\it list of fibre variables}\verb|);| 



