\subsection{Inverses in two dimensions}
Consider a linear map \(T\colon \mathbb R^n \to \mathbb R^n\).
If \(T\) is invertible (i.e.\ bijective), then \(\ker T = \{ \vb 0 \}\) as \(T\) is injective, and \(\Im T = \mathbb R^n\) as \(T\) is surjective.
These conditions are actually equivalent due to the rank-nullity theorem.
Conversely, if the conditions hold, then \(T(\vb e_1), T(\vb e_2), \cdots, T(\vb e_n)\) must be a basis of \(\mathbb R^n\), so we can define \(T^{-1}\) by its action on these basis vectors, specifically mapping each \(T(\vb e_i)\) back to \(\vb e_i\).

How can we test whether the conditions above hold for a matrix \(M\) representing \(T\), and how can we find \(M^{-1}\) from \(M\) explicitly?
For any \(n \times n\) matrix \(M\) (not necessarily invertible), we will define the adjugate matrix \(\adjugate M\) and the determinant \(\det M\) such that
\[
	\adjugate M M = (\det M) I \tag{\(\ast\)}
\]
Then if \(\det M \neq 0\), \(M\) is invertible, where
\[
	M^{-1} = \frac{1}{\det M}\adjugate M
\]
For \(n=2\), recall that (\(\ast\)) holds with
\[
	M = \begin{pmatrix}
		M_{11} & M_{12} \\
		M_{21} & M_{22}
	\end{pmatrix};\quad \adjugate M = \begin{pmatrix}
		M_{22}  & -M_{12} \\
		-M_{21} & M_{11}
	\end{pmatrix};\quad \det M = [M\vb e_1, M\vb e_2] = \varepsilon_{ij}M_{i1}M_{j2}
\]
The determinant in this case is the factor by which areas scale under \(M\).
\(\det M \neq 0\) if and only if \(M\vb e_1, M\vb e_2\) are linearly independent.
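As a concrete check (an illustrative example, not from the original notes), take
\[
	M = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix};\quad \adjugate M = \begin{pmatrix} 4 & -2 \\ -3 & 1 \end{pmatrix};\quad \det M = 1 \cdot 4 - 2 \cdot 3 = -2
\]
Multiplying out confirms \(\adjugate M M = -2I\), so \(M^{-1} = -\frac{1}{2}\adjugate M\).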

\subsection{Three dimensions}
For \(n=3\), we will define similarly
\[
	\det M = [M\vb e_1, M\vb e_2, M\vb e_3] = \varepsilon_{ijk}M_{i1}M_{j2}M_{k3}
\]
We choose this definition because, in three dimensions, this is the factor by which volumes scale under \(M\).
So
\[
	\det M \neq 0 \iff \{ M \vb e_1, M \vb e_2, M \vb e_3 \} \text{ linearly independent, or } \Im M = \mathbb R^3
\]
Now we define \(\adjugate M\) from \(M\) using row/column notation.
\begin{align*}
	\vb R_1(\adjugate M) & = \vb C_2(M) \times \vb C_3(M) \\
	\vb R_2(\adjugate M) & = \vb C_3(M) \times \vb C_1(M) \\
	\vb R_3(\adjugate M) & = \vb C_1(M) \times \vb C_2(M)
\end{align*}
Note that therefore,
\[
	(\adjugate M M)_{ij} = \vb R_i(\adjugate M) \cdot \vb C_j(M) = \underbrace{(\vb C_1(M) \times \vb C_2(M) \cdot \vb C_3(M))}_{\det M}\delta_{ij}
\]
as claimed.
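In index notation (a restatement of the cross-product definition above, not used elsewhere in these notes), the first row reads \((\adjugate M)_{1a} = \varepsilon_{abc} M_{b2} M_{c3}\), and all three rows are captured by
\[
	(\adjugate M)_{ia} = \frac{1}{2} \varepsilon_{ijk} \varepsilon_{abc} M_{bj} M_{ck}
\]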
For example, let us invert the following matrix.
\begin{align*}
	M                      & = \begin{pmatrix}
		1 & 3 & 0 \\ 0 & -1 & 2 \\ 4 & 1 & -1
	\end{pmatrix}                                                                 \\
	\vb C_2 \times \vb C_3 & = \begin{pmatrix} 3 \\ -1 \\ 1 \end{pmatrix} \times \begin{pmatrix} 0 \\ 2 \\ -1 \end{pmatrix} = \begin{pmatrix} -1 \\ 3 \\ 6 \end{pmatrix}    \\
	\vb C_3 \times \vb C_1 & = \begin{pmatrix} 0 \\ 2 \\ -1 \end{pmatrix} \times \begin{pmatrix} 1 \\ 0 \\ 4 \end{pmatrix} = \begin{pmatrix} 8 \\ -1 \\ -2 \end{pmatrix}   \\
	\vb C_1 \times \vb C_2 & = \begin{pmatrix} 1 \\ 0 \\ 4 \end{pmatrix} \times \begin{pmatrix} 3 \\ -1 \\ 1 \end{pmatrix} = \begin{pmatrix} 4 \\ 11 \\ -1 \end{pmatrix} \\
	\adjugate M            & = \begin{pmatrix}
		-1 & 3 & 6 \\ 8 & -1 & -2 \\ 4 & 11 & -1
	\end{pmatrix}                                                                \\
	\det M                 & = \vb C_1 \cdot \vb C_2 \times \vb C_3 = 23                                                 \\
	\adjugate M M          & = 23 I
\end{align*}

\subsection{Levi-Civita \texorpdfstring{\( \varepsilon \)}{𝜀} in higher dimensions}
Recall (from IA Groups):
\begin{itemize}
	\item A permutation \(\sigma\) on the set \(\{ 1, 2, \cdots, n \}\) is a bijection from the set to itself, specified by an ordered list \(\sigma(1), \sigma(2), \cdots, \sigma(n)\).
	\item Permutations form a group \(S_n\), called the symmetric group; it has order \(n!\).
	\item A transposition \(\tau = (p, q)\) where \(p \neq q\) is a permutation that swaps \(p\) and \(q\).
	\item Any permutation is a product of \(k\) transpositions; \(k\) itself is not unique, but for a given \(\sigma\) its value modulo 2 is.
	      In this course, we will write \(\varepsilon(\sigma)\) to mean the sign (or signature) of the permutation, \((-1)^k\).
	      \(\sigma\) is even if the sign is 1, and odd if the sign is \(-1\).
\end{itemize}
The alternating symbol \(\varepsilon\) in \(\mathbb R^n\) or \(\mathbb C^n\) is an \(n\)-index object (tensor) defined by
\[
	\varepsilon_{\underbrace{ij\cdots l}_{\mathclap{n \text{ indices}}}} = \begin{cases}
		+1 & \text{if } i, j, \cdots, l \text{ is an even permutation of } 1, 2, \cdots, n \\
		-1 & \text{if } i, j, \cdots, l \text{ is an odd permutation of } 1, 2, \cdots, n  \\
		0  & \text{otherwise, i.e.\ if any indices take the same value}
	\end{cases}
\]
Thus if \(\sigma\) is any permutation, then
\[
	\varepsilon_{\sigma(1)\cdots\sigma(n)} = \varepsilon(\sigma)
\]
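For example, with \(n = 3\):
\[
	\varepsilon_{213} = -1 \text{ (one transposition, } (1\ 2)\text{)};\quad \varepsilon_{231} = +1 \text{ (two transpositions)};\quad \varepsilon_{113} = 0
\]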
So \(\varepsilon_{ij\cdots l}\) is totally antisymmetric and changes sign whenever a pair of indices are exchanged.
\begin{definition}
	Given vectors \(\vb v_1, \cdots, \vb v_n \in \mathbb R^n\) or \(\mathbb C^n\), the alternating form combines them to give the scalar
	\begin{align*}
		[\vb v_1, \vb v_2, \cdots, \vb v_n ] & = \varepsilon_{ij\cdots l} (\vb v_1)_i (\vb v_2)_j \cdots (\vb v_n)_l                                                            \\
		                                     & = \sum_{\sigma \in S_n} \varepsilon(\sigma) \cdot (\vb v_1)_{\sigma(1)} \cdot (\vb v_2)_{\sigma(2)} \cdots (\vb v_n)_{\sigma(n)}
	\end{align*}
\end{definition}
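For \(n = 3\), this recovers the scalar triple product used earlier:
\[
	[\vb u, \vb v, \vb w] = \varepsilon_{ijk} u_i v_j w_k = \vb u \cdot (\vb v \times \vb w)
\]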

\subsection{Properties}
\begin{enumerate}
	\item The alternating form is multilinear.
	      \begin{align*}
			[ \vb v_1, \cdots, \vb v_{p-1}, \alpha \vb u + \beta \vb w, \vb v_{p+1}, \cdots, \vb v_n ] &= \alpha [ \vb v_1, \cdots, \vb v_{p-1}, \vb u, \vb v_{p+1}, \cdots, \vb v_n ] \\
			&+ \beta [ \vb v_1, \cdots, \vb v_{p-1}, \vb w, \vb v_{p+1}, \cdots, \vb v_n ]
		  \end{align*}
	\item It is totally antisymmetric.
	      \([ \vb v_{\sigma(1)}, \vb v_{\sigma(2)}, \cdots, \vb v_{\sigma(n)} ] = \varepsilon(\sigma) [ \vb v_1, \cdots, \vb v_n ]\)
	\item The standard basis gives a positive result: \([\vb e_1, \vb e_2, \cdots, \vb e_n] = 1\).
\end{enumerate}
These three properties fix the alternating form completely, and they also imply
\begin{enumerate}
	\setcounter{enumi}{3}
	\item If \(\vb v_p = \vb v_q\) where \(p \neq q\), then
	      \[
		      [\vb v_1, \cdots, \vb v_p, \cdots, \vb v_q, \cdots, \vb v_n ] = 0
	      \]
	\item If \(\vb v_p\) can be written as a non-trivial linear combination of the other vectors, then
	      \[
		      [\vb v_1, \cdots, \vb v_p, \cdots, \vb v_n ] = 0
	      \]
\end{enumerate}
Property (iv) follows from property (ii): swapping \(\vb v_p\) and \(\vb v_q\) leaves the arguments unchanged but flips the sign, so the value equals its own negative and must be zero.
Property (v) follows from substituting the linear combination representation of \(\vb v_p\) into the alternating form expression, then using properties (i) and (iv).
To justify (ii), it suffices to check a single transposition \(\tau = (p\ q)\) with (without loss of generality) \(p < q\); since transpositions generate all permutations, the general result then follows.
\begin{align*}
	 & [\vb v_1, \cdots, \vb v_{p-1}, \vb v_q, \vb v_{p+1}, \cdots, \vb v_{q-1}, \vb v_p, \vb v_{q+1}, \cdots, \vb v_n]                                                                                                                          \\
	 & = \sum_\sigma \varepsilon(\sigma) (\vb v_1)_{\sigma(1)} \cdots (\vb v_{p-1})_{\sigma(p-1)}(\vb v_q)_{\sigma(p)}(\vb v_{p+1})_{\sigma(p+1)} \\
	 &\quad\quad\quad\cdots (\vb v_{q-1})_{\sigma(q-1)}(\vb v_p)_{\sigma(q)}(\vb v_{q+1})_{\sigma(q+1)}             \\
	 & = \sum_\sigma \varepsilon(\sigma) (\vb v_1)_{\sigma'(1)} \cdots (\vb v_{p-1})_{\sigma'(p-1)}(\vb v_q)_{\sigma'(q)}(\vb v_{p+1})_{\sigma'(p+1)} \\
	 &\quad\quad\quad\cdots (\vb v_{q-1})_{\sigma'(q-1)}(\vb v_p)_{\sigma'(p)}(\vb v_{q+1})_{\sigma'(q+1)}      \\
	\intertext{where \(\sigma' = \sigma\tau\)}
	 & = -\sum_{\sigma'} \varepsilon(\sigma') (\vb v_1)_{\sigma'(1)} \cdots (\vb v_{p-1})_{\sigma'(p-1)}(\vb v_p)_{\sigma'(p)}(\vb v_{p+1})_{\sigma'(p+1)} \\
	 &\quad\quad\quad\cdots (\vb v_{q-1})_{\sigma'(q-1)}(\vb v_q)_{\sigma'(q)}(\vb v_{q+1})_{\sigma'(q+1)} \\
	 & = -[\vb v_1, \cdots, \vb v_{p-1}, \vb v_p, \vb v_{p+1}, \cdots, \vb v_{q-1}, \vb v_q, \vb v_{q+1}, \cdots, \vb v_n]
\end{align*}
as required.

\begin{proposition}
	\([ \vb v_1, \vb v_2, \cdots, \vb v_n] \neq 0\) if and only if \(\vb v_1, \vb v_2, \cdots, \vb v_n\) are linearly independent.
\end{proposition}
\begin{proof}
	To show the forward implication, let us suppose that they are not linearly independent and use property (v).
	Then we can express some \(\vb v_p\) as a linear combination of the others.
	Then \([\vb v_1, \vb v_2, \cdots, \vb v_n] = 0\).

	To show the other direction, note that \(\vb v_1, \vb v_2, \cdots, \vb v_n\) being linearly independent (\(n\) vectors in an \(n\)-dimensional space) means that they span, and if they span then each of the standard basis vectors \(\vb e_i\) can be written as a linear combination of the \(\vb v\) vectors, i.e.\ \(\vb e_i = U_{ai} \vb v_a\).
	Then
	\begin{align*}
		[\vb e_1, \vb e_2, \cdots, \vb e_n] & = [U_{a1}\vb v_a, U_{b2}\vb v_b, \cdots, U_{cn}\vb v_c]                                 \\
		                                    & = U_{a1}U_{b2}\cdots U_{cn}[\vb v_a, \vb v_b, \cdots, \vb v_c]                          \\
		                                    & = U_{a1}U_{b2}\cdots U_{cn} \varepsilon_{ab\cdots c}[\vb v_1, \vb v_2, \cdots, \vb v_n]
	\end{align*}
	By definition, the left hand side is \(+1\), so \([\vb v_1, \vb v_2, \cdots, \vb v_n]\) is nonzero.
\end{proof}
As an example of these ideas, let
\[
	\vb v_1 = \begin{pmatrix} i \\ 0 \\ 0 \\ 2 \end{pmatrix};\quad\vb v_2 = \begin{pmatrix} 0 \\ 0 \\ 5i \\ 0 \end{pmatrix};\quad\vb v_3 = \begin{pmatrix} 3 \\ 2i \\ 0 \\ 0 \end{pmatrix};\quad\vb v_4 = \begin{pmatrix} 0 \\ 0 \\ i \\ 1 \end{pmatrix};\quad \text{where }\vb v_j \in \mathbb C^4
\]
Then
\begin{align*}
	[\vb v_1, \vb v_2, \vb v_3, \vb v_4]
	 & = 5i[\vb v_1, \vb e_3, \vb v_3, \vb v_4]                                      \\
	 & = 5i[i\vb e_1 + 2\vb e_4, \vb e_3, 3\vb e_1 + 2i\vb e_2, i\vb e_3 + \vb e_4] \\
	\intertext{By multilinearity and property (iv), any \(\vb e_3\) term outside the second slot gives a vanishing contribution (two equal arguments), so we may drop it, giving}
	 & = 5i[i\vb e_1 + 2\vb e_4, \vb e_3, 3\vb e_1 + 2i\vb e_2, \vb e_4]             \\
	\intertext{And likewise with \(\vb e_4\):}
	 & = 5i[i\vb e_1, \vb e_3, 3\vb e_1 + 2i\vb e_2, \vb e_4]                        \\
	\intertext{And again with \(\vb e_1\):}
	 & = 5i[i\vb e_1, \vb e_3, 2i\vb e_2, \vb e_4]                                   \\
	 & = 5i\cdot 2i \cdot i[\vb e_1, \vb e_3, \vb e_2, \vb e_4]                      \\
	\intertext{Swapping \(\vb e_2\) and \(\vb e_3\) flips the sign by property (ii), and \(5i \cdot 2i \cdot i = -10i\), so}
	 & = 10i[\vb e_1, \vb e_2, \vb e_3, \vb e_4]                                     \\
	 & = 10i
\end{align*}
