\begin{center}
{by J. M. Baird\footnote{Research Professor, Dept. of Electrical
Engineering, Univ. of Utah}}
\end{center}

\section{Introduction}
Presented here is a simplified technique for manipulating vector
differential equations. The technique requires the memorization of
a few key forms and, of course, some practice, but it is relatively
easy to master and has proven to be an exceptionally valuable tool
for working with vector field theory.

Provided here are both the background and the derivations of the
key forms required to use the RCN (rectangular component notation)
technique, and the treatment therefore gives much more detail than
some readers will want. For those who want just the results, the
key forms and examples are emphasized with \emph{italic} text and
the key equations are boxed.

Manipulating vector equations often amounts to searching through a
table of vector identities for the one that will transform a
complex vector expression into a more useful form. The value of
the technique presented here is that it
frees the researcher to work directly with the vector equations as
if they were simple algebraic expressions which obey chain rule
differentiation. One therefore tends to identify the more
desirable forms by the usual algebraic processes, and the vector
identities are a byproduct of the process. In fact, the technique
described here can be used to derive the usual tabulated vector
identities in just a few lines.

\section{Vector Equations and the Main Theorem of Tensor Calculus}
By design, vector equations provide a description of physical
fields which is independent of any particular coordinate system.
As such, vector equations are just a special form of tensor
equations\footnote{It is not difficult, using tensor theory, to
demonstrate that any vector equation can be written as an equation
between tensors of the first rank.} and come under ``the main
theorem of tensor calculus: An equation between tensors of the
same type which is true in one system, is true in all
systems.''\cite{Rindler:1960} This says that \emph{any vector
equation which can be proven in one coordinate system is true in
all coordinate systems}.

Because of this fact, we are free to manipulate vector equations
in their simplest form---rectangular components---and as long as
the final result can be returned to vector form, we are assured
that the result is a general one which holds for any coordinate
system. This provides the means for greatly simplifying the vector
manipulations of both algebraic and differential equations.

\section{The Summation Convention in RCN}
Any vector $\vec{A}$ can be expanded in terms of the rectangular
unit vectors $\hat{x}_i\;(i=1,2,3)$ as
\begin{equation}\label{eqn:Aexpanded}
    \vec{A}=A_1\hat{x}_1+A_2\hat{x}_2+A_3\hat{x}_3=\sum_{i=1}^{3}\hat{x}_i(A_i)
\end{equation}
Using the summation notation, the vector dot product
$\vec{A}\cdot\vec{B}$ becomes
\begin{equation}\nonumber
    \vec{A}\cdot\vec{B}=\left(\sum_{j=1}^{3}\hat{x}_jA_j\right)\cdot\left(\sum_{k=1}^{3}\hat{x}_kB_k\right)
    =\sum_{j=1}^3\sum_{k=1}^3(\hat{x}_j\cdot\hat{x}_k)A_jB_k
\end{equation}
Since the unit vectors in RCN are orthogonal,
\begin{equation}\nonumber
    \hat{x}_j\cdot\hat{x}_k=\delta_{jk}
\end{equation}
(where $\delta_{jk}$ is the Kronecker delta), we get
\begin{equation}\label{eqn:AdotB}
    \vec{A}\cdot\vec{B}=\sum_{j=1}^3\sum_{k=1}^3\delta_{jk}A_jB_k=\sum_{j=1}^3(A_jB_j)
\end{equation}
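As a side illustration (not part of the original development), the collapse of the double sum by the Kronecker delta is easy to mimic in a few lines of Python; the zero-based indices are purely an implementation convenience:

```python
# Sketch: the Kronecker delta collapsing the double sum for the dot
# product (plain Python; indices run 0..2 instead of 1..3).

def delta(j, k):
    # Kronecker delta: 1 when the indices match, 0 otherwise
    return 1 if j == k else 0

def dot_double_sum(A, B):
    # sum_j sum_k delta_jk A_j B_k
    return sum(delta(j, k) * A[j] * B[k]
               for j in range(3) for k in range(3))

def dot_single_sum(A, B):
    # the collapsed form, sum_j A_j B_j
    return sum(A[j] * B[j] for j in range(3))
```

Both functions agree on every pair of vectors, which is precisely the content of the reduction above.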
The vector cross product $\vec{A}\times\vec{B}$ is slightly more
complicated.
\begin{align*}
    \vec{A}\times\vec{B}&=\sum_{i=1}^3\hat{x}_i(\hat{x}_i\cdot\vec{A}\times\vec{B})\\
    &=\sum_{i=1}^3\hat{x}_i\left[\hat{x}_i\cdot\left(\sum_{j=1}^{3}\hat{x}_jA_j\right)\times\left(\sum_{k=1}^{3}\hat{x}_kB_k\right)\right]\\
    &=\sum_{i=1}^3\hat{x}_i\sum_{j=1}^{3}\sum_{k=1}^{3}(\hat{x}_i\cdot\hat{x}_j\times\hat{x}_k)A_jB_k
\end{align*}
To simplify this expression, we define the ``permutation tensor''
$e_{ijk}$ such that
\begin{equation}\label{eqn:AcrossB}
    \vec{A}\times\vec{B}=\sum_{i=1}^3\hat{x}_i\sum_{j=1}^{3}\sum_{k=1}^{3}(e_{ijk}A_jB_k)
\end{equation}
where
\begin{equation}\label{eqn:permuatationtensor}
\boxed{e_{ijk}=\hat{x}_i\cdot\hat{x}_j\times\hat{x}_k}
\end{equation}

It is found that if we adopt a suitable ``summation convention'',
equations (\ref{eqn:Aexpanded}), (\ref{eqn:AdotB}) and
(\ref{eqn:AcrossB}) can be unambiguously denoted by the quantities
in parentheses; i.e.,
\begin{equation}\label{eqn:vecA2}
    \boxed{\vec{A}\equiv A_i}
\end{equation}
\begin{equation}\label{eqn:AdotB2}
    \boxed{\vec{A}\cdot\vec{B}\equiv A_iB_i}
\end{equation}
\begin{equation}\label{eqn:AcrossB2}
    \boxed{\vec{A}\times\vec{B}\equiv e_{ijk}A_jB_k}
\end{equation}
The summation convention is thus the set of rules for interpreting
the RCN short forms defined in equations
(\ref{eqn:vecA2})--(\ref{eqn:AcrossB2}) as if they were the actual
equations given in (\ref{eqn:Aexpanded}), (\ref{eqn:AdotB}) and
(\ref{eqn:AcrossB}).  Thus we have:
\begin{center}
{\emph{Summation Convention}}
\end{center}
\begin{enumerate}
    \item \emph{Any index which appears twice in a given term indicates
    a summation over that index.}  For example
\begin{equation*}
    {A_jB_j+C_kD_k\equiv\sum_{j=1}^3A_jB_j+\sum_{k=1}^3C_kD_k=\vec{A}\cdot\vec{B}+\vec{C}\cdot\vec{D}}
\end{equation*}
    \item \emph{Any index which appears only once represents the vector
    component of the term.} Reconstruction in this case requires
    multiplication by the unit vector prior to summing over this
    index. An example of this is
\begin{equation*}
    {A_iB_jC_j\equiv\sum_{i=1}^3\hat{x}_iA_i\left(\sum_{j=1}^3B_jC_j\right)=\vec{A}(\vec{B}\cdot\vec{C})}
\end{equation*}
\emph{If an index appears more than twice in the same term, an
error is indicated.} Note also that any index which appears twice
is a ``dummy'' index, because it can be changed to any other index
not already appearing in the term without changing the meaning
under the summation rules. Thus,
\begin{equation*}
    e_{ijk}A_jB_k=e_{i\ell m}A_\ell B_m=\vec{A}\times\vec{B}
\end{equation*}
\end{enumerate}
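The two rules above can be sketched as explicit loops. The following plain-Python fragment is an illustration only (zero-based indices are an implementation choice): \texttt{contract} realizes rule 1 and \texttt{free\_index\_term} realizes rule 2 for the term $A_iB_jC_j$.

```python
# Sketch of the summation-convention rules as explicit loops.

def contract(A, B):
    # Rule 1: a repeated index, as in A_j B_j, indicates a sum over j
    return sum(A[j] * B[j] for j in range(3))

def free_index_term(A, B, C):
    # Rule 2: in A_i B_j C_j the index i appears once, so it is the
    # free (vector) index; j is a summed "dummy" index.  The result
    # is the component list of the vector A (B . C).
    return [A[i] * contract(B, C) for i in range(3)]
```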

Equations (\ref{eqn:vecA2})--(\ref{eqn:AcrossB2}) provide the
vector-RCN forms which are used in practice. These equations are
used to convert back and forth between vector notation and RCN
notation and therefore must be memorized. Note that the order of
the indices in the form for the cross product is important. The
first index in the permutation tensor $e_{ijk}$ must be the vector
index for the product $\vec{A}\times\vec{B}$ and the second and
third indices must correspond to the first and second vectors
respectively. The indices of the permutation tensor can be
interchanged when needed, but only according to the rules
governing the permutation tensor which are now described.

\section{The Permutation Tensor in RCN}
The rules governing the permutation tensor in RCN notation may be
derived from its definition in equation
(\ref{eqn:permuatationtensor}).
\begin{equation*}
    e_{ijk}=\hat{x}_i\cdot\hat{x}_j\times\hat{x}_k
    \tag{\ref{eqn:permuatationtensor}}
\end{equation*}
Note from the properties of the vector triple product that
$$
    (\hat{x}_i\cdot\hat{x}_j\times\hat{x}_k)
    =(\hat{x}_j\cdot\hat{x}_k\times\hat{x}_i)
    =(\hat{x}_k\cdot\hat{x}_i\times\hat{x}_j)
$$
Thus,
\begin{equation}\label{eqn:permutationtensorprop1}
    \boxed{e_{ijk}=e_{jki}=e_{kij}}
\end{equation}
\emph{This rotation of the indices in the permutation tensor is
called cyclic permutation and it does not change the value of the
tensor.}

Alternatively, when two of the unit vectors are directly
interchanged (permuted) we get
$$
    (\hat{x}_i\cdot\hat{x}_j\times\hat{x}_k)
    =-(\hat{x}_j\cdot\hat{x}_i\times\hat{x}_k)
    =-(\hat{x}_i\cdot\hat{x}_k\times\hat{x}_j)
    =-(\hat{x}_k\cdot\hat{x}_j\times\hat{x}_i)
$$
so that
\begin{equation}\label{eqn:permutationtensorprop2}
    \boxed{e_{ijk}=-e_{jik}=-e_{ikj}=-e_{kji}}
\end{equation}
\emph{The direct interchange of any two indices in the permutation
tensor must be accompanied by a change in sign.}
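Both permutation rules can be checked by brute force over all index values. The closed form used for $e_{ijk}$ below (with zero-based indices) is an implementation convenience of this sketch, not part of the derivation:

```python
# Brute-force check of the cyclic and interchange rules.

def e(i, j, k):
    # Compact closed form: +1 for cyclic orders of (0, 1, 2),
    # -1 for a single interchange, 0 whenever an index repeats.
    return (i - j) * (j - k) * (k - i) // 2

for i in range(3):
    for j in range(3):
        for k in range(3):
            # cyclic permutation leaves the value unchanged
            assert e(i, j, k) == e(j, k, i) == e(k, i, j)
            # a direct interchange of two indices flips the sign
            assert e(i, j, k) == -e(j, i, k) == -e(i, k, j) == -e(k, j, i)
```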

If specific evaluations of the permutation tensor are required, we
note that the triple product of the rectangular unit vectors in
equation (\ref{eqn:permuatationtensor}) will only have a value
when the three unit vectors are in different directions. For
example, if $(i,j,k)=(1,2,3)$, indicating unit vectors along the
$x$, $y$, and $z$ axes respectively, the value of the permutation
tensor is $e_{123}=(\hat{x}\cdot\hat{y}\times\hat{z})=1$. Cyclic
permutations of $(1,2,3)$ thus also have the value $+1$, and the
three interchanged conditions [equation
(\ref{eqn:permutationtensorprop2})] have values of $-1$. All other
combinations of the indices $(i,j,k)$ give a tensor value of zero
(e.g., $e_{112}=0$). Thus only 6 of the possible 27 terms in the
summation of $e_{ijk}A_jB_k$ are nonzero. These six correspond to
the six terms in
$$
    \vec{A}\times\vec{B}=\hat{x}(A_yB_z-A_zB_y)
    +\hat{y}(A_zB_x-A_xB_z)
    +\hat{z}(A_xB_y-A_yB_x)
$$
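A short script confirms the count of nonzero terms and reproduces the component formula above (a plain-Python sketch; the indices 0, 1, 2 stand in for $x$, $y$, $z$, and the closed form for $e_{ijk}$ is an implementation convenience):

```python
# Only 6 of the 27 values of e_ijk are nonzero, and the RCN sum
# e_ijk A_j B_k reproduces the familiar cross-product components.

def e(i, j, k):
    # compact closed form for the permutation tensor (0-based)
    return (i - j) * (j - k) * (k - i) // 2

nonzero = [(i, j, k) for i in range(3) for j in range(3)
           for k in range(3) if e(i, j, k) != 0]

def cross(A, B):
    # (A x B)_i = e_ijk A_j B_k
    return [sum(e(i, j, k) * A[j] * B[k]
                for j in range(3) for k in range(3))
            for i in range(3)]
```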

\section{Examples of Compound Vector Terms in RCN}
By repetitive re-use of the vector-RCN forms in equations
(\ref{eqn:vecA2})--(\ref{eqn:AcrossB2}), we can write any vector
expression in terms of RCN. The key to doing this properly is to
maintain the proper relationships between the vector indices for
each factor and never permit the same index to appear more than
twice in a single term. Some examples follow in which the
intermediate steps are shown to indicate the vector indices of the
various factors.
\begin{subequations}
\begin{align}
    \vec{A}\cdot\vec{B}\times\vec{C}&=A_i(\vec{B}\times\vec{C})_i=A_ie_{ijk}B_jC_k\label{eqn:RCNex1}\\
    \vec{A}\times\vec{B}\cdot\vec{C}&=(\vec{A}\times\vec{B})_iC_i\notag\\&=e_{ijk}A_jB_kC_i
    =A_je_{jki}B_kC_i=\vec{A}\cdot\vec{B}\times\vec{C}\label{eqn:RCNex2}\\
    \vec{A}\times(\vec{B}\times\vec{C})&=e_{ijk}A_j(\vec{B}\times\vec{C})_k
    =e_{ijk}A_je_{k\ell m}B_\ell C_m\label{eqn:RCNex3}\\
    (\vec{A}\times\vec{B})\times\vec{C}&=e_{ijk}(\vec{A}\times\vec{B})_jC_k
    =e_{ijk}e_{j\ell m}A_\ell B_mC_k\label{eqn:RCNex4}\\
    \vec{A}\cdot\vec{B}\times(\vec{C}\times\vec{D})
    &=A_i[\vec{B}\times(\vec{C}\times\vec{D})]_i\notag\\
    &=A_ie_{ijk}B_j(\vec{C}\times\vec{D})_k
    =A_ie_{ijk}B_je_{k\ell m}C_\ell D_m\label{eqn:RCNex5}
\end{align}
\end{subequations}
With a little experience, one finds that the final RCN forms in
this example can easily be written directly from the vector form
without the need for the intermediate steps shown. Note that in
the second equation, a minor rearrangement of the factors and a
cyclic permutation of the permutation-tensor indices lead to
recognition of the vector identity
$$
\vec{A}\times\vec{B}\cdot\vec{C}=\vec{A}\cdot\vec{B}\times\vec{C}
$$
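This identity is easy to spot-check numerically from the RCN form $e_{ijk}A_iB_jC_k$ (a plain-Python sketch; the closed form for $e_{ijk}$ and the zero-based indices are implementation conveniences):

```python
# Numeric spot check: the scalar triple product is invariant under
# cyclic rotation of its three vectors.

def e(i, j, k):
    return (i - j) * (j - k) * (k - i) // 2  # permutation tensor

def triple(A, B, C):
    # A . (B x C) = e_ijk A_i B_j C_k
    return sum(e(i, j, k) * A[i] * B[j] * C[k]
               for i in range(3) for j in range(3) for k in range(3))
```

Here $\vec{A}\times\vec{B}\cdot\vec{C}$ is \texttt{triple(C, A, B)}, which cyclic invariance makes equal to \texttt{triple(A, B, C)}.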

\section{The Vector Operator $\nabla$ in RCN}
In rectangular component notation, the vector operator $\nabla$ is
simply
\begin{equation}\label{eqn:RCNnabla}
    \boxed{\nabla=\hat{x}\nabla_x+\hat{y}\nabla_y+\hat{z}\nabla_z=\sum_{i=1}^3\hat{x}_i\nabla_i\equiv\nabla_i}
\end{equation}
where $\nabla_x=\frac{\partial}{\partial x}$, etc. In RCN,
therefore, we treat this vector operator in exactly the same way
as any other vector with the precaution that we must keep track of
which factors in a given term are to be operated upon. The
following examples will illustrate this point.
\begin{subequations}
\begin{align}
    \nabla\Phi&=\nabla_i\Phi\label{eqn:RCNex6}\\
    \nabla\cdot\vec{A}&=\nabla_iA_i\label{eqn:RCNex7}\\
    \nabla\times\vec{A}&=e_{ijk}\nabla_jA_k\label{eqn:RCNex8}\\
    (\nabla\times\vec{A})\cdot\vec{B}&=e_{ijk}(\nabla_jA_k)B_i\label{eqn:RCNex9}\\
    \nabla\cdot(\vec{A}\times\vec{B})&=\nabla_i(e_{ijk}A_jB_k)\notag\\
    &=e_{ijk}\nabla_i(A_jB_k)\notag\\
    &=e_{ijk}[(\nabla_iA_j)B_k+A_j(\nabla_iB_k)]\notag\\
    &= B_ke_{kij}\nabla_iA_j-A_je_{jik}\nabla_iB_k\notag\\
    &=\vec{B}\cdot\nabla\times\vec{A}-\vec{A}\cdot\nabla\times\vec{B}\label{eqn:RCNex10}\\
    \nabla\times\nabla\times\vec{A}&=e_{ijk}\nabla_je_{k\ell m}\nabla_\ell A_m=e_{ijk}e_{k\ell m}\nabla_j\nabla_\ell A_m\label{eqn:RCNex11}
\end{align}
\end{subequations}

Note in the equation for $\nabla\cdot(\vec{A}\times\vec{B})$ above
that the permutation tensor can be moved through the operator
$\nabla_i$ because $e_{ijk}$ is a constant in RCN. Note also that
when the chain rule is applied to $\nabla_i(A_jB_k)$ and the terms
are rearranged, including permutations of $e_{ijk}$, it is easy to
identify the new forms for the terms in the identity
$$
    \nabla\cdot(\vec{A}\times\vec{B})=\vec{B}\cdot\nabla\times\vec{A}-\vec{A}\cdot\nabla\times\vec{B}
$$
This begins to show the power of the RCN technique.
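The identity can also be confirmed numerically by evaluating both sides at a point with central differences. The fields $\vec{A}$ and $\vec{B}$, the test point, and the step size below are arbitrary choices made for this sketch, not part of the text:

```python
# Central-difference check of div(A x B) = B . curl A - A . curl B
# at a single point (plain Python; sample fields chosen arbitrarily).

h = 1e-4  # finite-difference step

def e(i, j, k):
    return (i - j) * (j - k) * (k - i) // 2  # permutation tensor

def A(p):
    x, y, z = p
    return [y * z, x * x * z, x * y]

def B(p):
    x, y, z = p
    return [x * x, y + z, x * y * z]

def partial(F, p, j, i):
    # central difference of component F_i with respect to x_j at p
    pp, pm = list(p), list(p)
    pp[j] += h
    pm[j] -= h
    return (F(pp)[i] - F(pm)[i]) / (2 * h)

def cross(u, v):
    return [sum(e(i, j, k) * u[j] * v[k]
                for j in range(3) for k in range(3)) for i in range(3)]

def AxB(q):
    return cross(A(q), B(q))

p = [0.3, -0.7, 1.1]  # arbitrary test point

div_AxB = sum(partial(AxB, p, i, i) for i in range(3))
curlA = [sum(e(i, j, k) * partial(A, p, j, k)
             for j in range(3) for k in range(3)) for i in range(3)]
curlB = [sum(e(i, j, k) * partial(B, p, j, k)
             for j in range(3) for k in range(3)) for i in range(3)]
rhs = sum(B(p)[i] * curlA[i] - A(p)[i] * curlB[i] for i in range(3))
```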

Note from equations (\ref{eqn:RCNex1})--(\ref{eqn:RCNex5}) and
(\ref{eqn:RCNex6})--(\ref{eqn:RCNex11}) that whenever a factor
involves two cross products, we get products of permutation
tensors in the term. The decomposition of this product is probably
the single most important tool of the RCN technique, because it
nearly always leads to great simplification of the vector
equations.

\section{Simplification of Double Cross Products}
We see from equation (\ref{eqn:RCNex3}) that the double cross
product $\vec{A}\times(\vec{B}\times\vec{C})$ can be written as
\begin{equation}\label{eqn:eijkeilm}
    \vec{A}\times(\vec{B}\times\vec{C})=e_{ijk}e_{k\ell m}A_jB_\ell C_m
\end{equation}
To find a simplifying form for this type of term we revert to the
full vector notation and derive an equivalent but simpler form for
equation (\ref{eqn:eijkeilm}).

Taking our cue from equation (\ref{eqn:eijkeilm}) to achieve the
desired indices, we write
\begin{align*}
\vec{A}\times(\vec{B}\times\vec{C})
&=\sum_{i=1}^3\hat{x}_i[\vec{A}\times(\vec{B}\times\vec{C})]_i\\
&=\sum_{i=1}^3\hat{x}_i\left\{\hat{x}_i\cdot\left(\sum_{j=1}^3\hat{x}_jA_j\right)\times\left[\left(\sum_{\ell=1}^3\hat{x}_\ell B_\ell\right)\times\left(\sum_{m=1}^3\hat{x}_mC_m\right)\right]\right\}\\
&=\sum_{i=1}^3\hat{x}_i\left\{\sum_{j=1}^3\sum_{\ell=1}^3\sum_{m=1}^3\left[\hat{x}_i\cdot\hat{x}_j\times(\hat{x}_\ell\times\hat{x}_m)\right]A_jB_\ell C_m\right\}
\end{align*}

Writing equation (\ref{eqn:eijkeilm}) out in full according to the
summation convention gives
\begin{equation}\label{eqn:RCNsimp1}
    \vec{A}\times(\vec{B}\times\vec{C})=\sum_{i=1}^3\hat{x}_i\left\{\sum_{j=1}^3\sum_{\ell=1}^3\sum_{m=1}^3\left[\sum_{k=1}^3e_{ijk}e_{k\ell m}\right]A_jB_\ell C_m\right\}
\end{equation}
from which we identify
\begin{equation}\label{eqn:eijkeilm2}
    \sum_{k=1}^3e_{ijk}e_{k\ell m}=\hat{x}_i\cdot\hat{x}_j\times(\hat{x}_\ell\times\hat{x}_m)
\end{equation}
Using the vector identity
$\vec{A}\times(\vec{B}\times\vec{C})=\vec{B}(\vec{A}\cdot\vec{C})-\vec{C}(\vec{A}\cdot\vec{B})$
we can rewrite equation (\ref{eqn:eijkeilm2}) as
\begin{equation}\label{eqn:eijkeilm3}
    \sum_{k=1}^3e_{ijk}e_{k\ell m}=(\hat{x}_i\cdot\hat{x}_\ell)(\hat{x}_j\cdot\hat{x}_m)-(\hat{x}_i\cdot\hat{x}_m)(\hat{x}_j\cdot\hat{x}_\ell)
\end{equation}
and simplify the result to
\begin{equation}\label{eqn:eijkeilm4}
    \sum_{k=1}^3e_{ijk}e_{k\ell m}=\delta_{i\ell }\delta_{jm}-\delta_{im}\delta_{j\ell }
\end{equation}
Applying the cyclic permutation $e_{ijk}=e_{kij}$ and dropping the
explicit sum per the summation convention, equation
(\ref{eqn:eijkeilm4}) yields the final desired form of the RCN
identity
\begin{equation}\label{eqn:eijkeilm5}
    \boxed{e_{kij}e_{k\ell m}=\delta_{i\ell }\delta_{jm}-\delta_{im}\delta_{j\ell
    }}
\end{equation}
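Because the identity involves only finitely many index values, it can also be verified by brute force (a plain-Python sketch; the closed form for $e_{ijk}$ and the zero-based indices are implementation conveniences):

```python
# Brute-force verification of e_kij e_klm = d_il d_jm - d_im d_jl
# over all 81 combinations of (i, j, l, m).

def e(i, j, k):
    return (i - j) * (j - k) * (k - i) // 2  # permutation tensor

def d(i, j):
    return 1 if i == j else 0  # Kronecker delta

def lhs(i, j, l, m):
    # e_kij e_klm, summed over the repeated index k
    return sum(e(k, i, j) * e(k, l, m) for k in range(3))

def rhs(i, j, l, m):
    return d(i, l) * d(j, m) - d(i, m) * d(j, l)
```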
In practice the identity (\ref{eqn:eijkeilm5}) is used in the
following way:
\begin{enumerate}
    \item First, divide the RCN term in question into two
    factors, the first containing the product of the two
    permutation tensors to be simplified and the second containing
    all remaining factors; e.g.,
$$
    (e_{ijk}A_je_{k\ell m})(B_\ell C_m)=(e_{ijk}e_{k\ell m})(A_jB_\ell C_m)
$$
    \item Next, identify the index which appears in both
    permutation tensors and, using index permutations, rearrange
    the indices so that this common index is first in each tensor;
    e.g. (using cyclic permutation),
$$
    (e_{ijk}e_{k\ell m})(A_jB_\ell C_m)=(e_{kij}e_{k\ell m})(A_jB_\ell C_m)
$$
    \item Substitute identity (\ref{eqn:eijkeilm5}):
\begin{equation*}
\begin{split}
    (e_{kij}e_{k\ell m})(A_jB_\ell C_m)&=(\delta_{i\ell }\delta_{jm}-\delta_{im}\delta_{j\ell})A_jB_\ell{}C_m\\
    &=(\delta_{i\ell }\delta_{jm}A_jB_\ell C_m-\delta_{im}\delta_{j\ell }A_jB_\ell C_m)
    \end{split}
\end{equation*}
    \item Last, use the substitution property of the Kronecker delta, $\delta_{ij}A_i=A_j$:
$$
(\delta_{i\ell
}B_\ell\delta_{jm}A_jC_m-\delta_{im}C_m\delta_{j\ell }A_jB_\ell)=
A_mB_iC_m-A_\ell B_\ell C_i
$$
Note that in the above expression $\delta_{j\ell}A_jB_\ell$ is
equal to $A_\ell B_\ell$ or $A_mB_m$. Either index can be used
since the double index signifies a summation and the index is a
``dummy'' index.
\end{enumerate}

In practice, the preceding RCN manipulations would appear as
follows in the proof of a standard vector identity:
\begin{align*}
    \vec{A}\times(\vec{B}\times\vec{C})&=e_{ijk}A_je_{k\ell m}B_\ell C_m\\
    &=e_{kij}e_{k\ell m}A_jB_\ell C_m\\
    &=(\delta_{i\ell }\delta_{jm}-\delta_{im}\delta_{j\ell })A_jB_\ell{}C_m\\
    &=A_mB_iC_m-A_\ell B_\ell C_i\\&=(\vec{A}\cdot\vec{C})\vec{B}-(\vec{A}\cdot\vec{B})\vec{C}
\end{align*}
This is otherwise not an easy derivation, which illustrates the
power of the RCN technique. Notice that a nearly identical
procedure gives the curl of the curl as
\begin{align*}
    \nabla\times\nabla\times\vec{A}&=e_{ijk}\nabla_je_{k\ell m}\nabla_\ell A_m\\
    &=e_{kij}e_{k\ell m}\nabla_j\nabla_\ell A_m\\
    &=(\delta_{i\ell }\delta_{jm}-\delta_{im}\delta_{j\ell })\nabla_j\nabla_\ell A_m\\
    &=\nabla_i(\nabla_mA_m)-\nabla_\ell\nabla_\ell A_i\\
    &=\nabla(\nabla\cdot\vec{A})-(\nabla\cdot\nabla)\vec{A}\\
    &=\nabla(\nabla\cdot\vec{A})-\nabla^2\vec{A}
\end{align*}
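The $\vec{A}\times(\vec{B}\times\vec{C})$ result above can be spot-checked numerically (a plain-Python sketch; the closed form for $e_{ijk}$ and the zero-based indices are implementation conveniences):

```python
# Numeric spot check of A x (B x C) = (A . C)B - (A . B)C.

def e(i, j, k):
    return (i - j) * (j - k) * (k - i) // 2  # permutation tensor

def cross(u, v):
    # (u x v)_i = e_ijk u_j v_k
    return [sum(e(i, j, k) * u[j] * v[k]
                for j in range(3) for k in range(3)) for i in range(3)]

def dot(u, v):
    return sum(u[i] * v[i] for i in range(3))

def bac_cab(A, B, C):
    # the right-hand side (A . C)B - (A . B)C, component by component
    return [dot(A, C) * B[i] - dot(A, B) * C[i] for i in range(3)]
```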
Another proof of a vector identity illustrates the use of the
chain rule.
\begin{align*}
    \nabla\times(\vec{A}\times\vec{B})&=e_{ijk}\nabla_je_{k\ell m}A_\ell B_m\\
    &=e_{kij}e_{k\ell m}\nabla_j(A_\ell B_m)\\
    &=(\delta_{i\ell }\delta_{jm}-\delta_{im}\delta_{j\ell })\nabla_j(A_\ell B_m)\\
    &=\nabla_m(A_iB_m)-\nabla_\ell(A_\ell B_i)\\
    &=(A_i\nabla_m B_m+B_m\nabla_m A_i)-(A_\ell\nabla_\ell B_i+B_i\nabla_\ell A_\ell)\\
    &=\vec{A}(\nabla\cdot\vec{B})+(\vec{B}\cdot\nabla)\vec{A}
    -(\vec{A}\cdot\nabla)\vec{B}-\vec{B}(\nabla\cdot\vec{A})
\end{align*}
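As with the divergence identity earlier, this result can be confirmed at a point with central differences. The sample fields, test point, and step size below are arbitrary choices made for this sketch, not part of the text:

```python
# Central-difference check of
# curl(A x B) = A(div B) + (B . grad)A - (A . grad)B - B(div A)
# at a single point (plain Python; sample fields chosen arbitrarily).

h = 1e-4  # finite-difference step

def e(i, j, k):
    return (i - j) * (j - k) * (k - i) // 2  # permutation tensor

def A(p):
    x, y, z = p
    return [y * z, x * x * z, x * y]

def B(p):
    x, y, z = p
    return [x * x, y + z, x * y * z]

def partial(F, p, j, i):
    # central difference of component F_i with respect to x_j at p
    pp, pm = list(p), list(p)
    pp[j] += h
    pm[j] -= h
    return (F(pp)[i] - F(pm)[i]) / (2 * h)

def cross(u, v):
    return [sum(e(i, j, k) * u[j] * v[k]
                for j in range(3) for k in range(3)) for i in range(3)]

def AxB(q):
    return cross(A(q), B(q))

p = [0.3, -0.7, 1.1]  # arbitrary test point

# left side: (curl(A x B))_i = e_ijk d_j (A x B)_k
lhs = [sum(e(i, j, k) * partial(AxB, p, j, k)
           for j in range(3) for k in range(3)) for i in range(3)]

divA = sum(partial(A, p, i, i) for i in range(3))
divB = sum(partial(B, p, i, i) for i in range(3))
rhs = [A(p)[i] * divB
       + sum(B(p)[j] * partial(A, p, j, i) for j in range(3))
       - sum(A(p)[j] * partial(B, p, j, i) for j in range(3))
       - B(p)[i] * divA
       for i in range(3)]
```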
