\subsection{Transpose}
If \(M\) is an \(m \times n\) (real or complex) matrix, the transpose \(M^\transpose\) is an \(n \times m\) matrix defined by
\[
	(M^\transpose)_{ia} = M_{ai}
\]
which exchanges the rows and columns of \(M\).
Here are some key properties.
\begin{itemize}
	\item \((\alpha A + \beta B)^\transpose = \alpha A^\transpose + \beta B^\transpose\) for \(\alpha, \beta\) scalars, and \(A, B\) both \(m \times n\) matrices.
	\item \((AB)^\transpose = B^\transpose A^\transpose\), where \(A\) is \(m \times n\) and \(B\) is \(n \times p\).
	      This is because
	      \begin{align*}
		      [(AB)^\transpose]_{ra} & = (AB)_{ar}                               \\
		                             & = A_{ai} B_{ir}                           \\
		                             & = (A^\transpose)_{ia} (B^\transpose)_{ri} \\
		                             & = (B^\transpose)_{ri} (A^\transpose)_{ia} \\
		                             & = (B^\transpose A^\transpose)_{ra}
	      \end{align*}
	\item If \(\vb x\) is a column vector (or an \(n \times 1\) matrix), \(\vb x^\transpose\) is the equivalent row vector (a \(1 \times n\) matrix).
	\item The inner product in \(\mathbb R^n\) can therefore be written \(\vb x \cdot \vb y = \vb x^\transpose \vb y\).
	      Note that this is not the same as \(\vb x \vb y^\transpose\), known as the outer product, which yields an \(n \times n\) matrix rather than a scalar.
	\item If \(M\) is \(n \times n\) (square) then \(M\) is:
	      \begin{itemize}
		      \item symmetric iff \(M^\transpose = M\), or \(M_{ij} = M_{ji}\)
		      \item antisymmetric iff \(M^\transpose = -M\), or \(M_{ij} = -M_{ji}\)
	      \end{itemize}
	\item Any square matrix \(M\) can be written as the sum of a symmetric part and an antisymmetric part
	      \[
		      M = S + A\quad\text{where } S = \frac{1}{2}(M + M^\transpose);\quad A = \frac{1}{2}(M - M^\transpose)
	      \]
	      since \(S\) is symmetric and \(A\) is antisymmetric by construction.
	\item If \(A\) is \(3 \times 3\) and antisymmetric, then we can write
	      \[
		      A_{ij} = \varepsilon_{ijk}a_k\text{ where } A = \begin{pmatrix}
			      0    & a_3  & -a_2 \\
			      -a_3 & 0    & a_1  \\
			      a_2  & -a_1 & 0
		      \end{pmatrix}
	      \]
	      Then, we have
	      \[
		      (A \vb x)_i = \varepsilon_{ijk}a_k x_j = (\vb x \times \vb a)_i
	      \]
	      so \(A \vb x = \vb x \times \vb a\).
\end{itemize}
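These identities are easy to verify numerically. Below is a minimal pure-Python sketch checking the product rule \((AB)^\transpose = B^\transpose A^\transpose\), the symmetric/antisymmetric split, and the cross-product identity; the helper functions and the sample matrices are illustrative choices, not part of the notes.

```python
def transpose(M):
    # (M^T)_{ia} = M_{ai}: swap rows and columns
    return [list(row) for row in zip(*M)]

def matmul(A, B):
    # standard matrix product (A is m x n, B is n x p)
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[1, 2, 3],
     [4, 5, 6]]           # 2 x 3
B = [[7, 8],
     [9, 10],
     [11, 12]]            # 3 x 2

# (AB)^T = B^T A^T
assert transpose(matmul(A, B)) == matmul(transpose(B), transpose(A))

# Symmetric/antisymmetric decomposition of a square matrix
M = [[1, 2], [3, 4]]
S  = [[(M[i][j] + M[j][i]) / 2 for j in range(2)] for i in range(2)]
Aa = [[(M[i][j] - M[j][i]) / 2 for j in range(2)] for i in range(2)]
assert transpose(S) == S                                   # S^T = S
assert transpose(Aa) == [[-x for x in r] for r in Aa]      # Aa^T = -Aa
assert all(S[i][j] + Aa[i][j] == M[i][j] for i in range(2) for j in range(2))

# 3x3 antisymmetric matrix built from a = (a1, a2, a3): A x = x cross a
a = [1.0, 2.0, 3.0]
A3 = [[0.0,   a[2], -a[1]],
      [-a[2], 0.0,   a[0]],
      [a[1], -a[0],  0.0]]
x = [4.0, 5.0, 6.0]
Ax = [sum(A3[i][j] * x[j] for j in range(3)) for i in range(3)]
cross_xa = [x[1]*a[2] - x[2]*a[1],
            x[2]*a[0] - x[0]*a[2],
            x[0]*a[1] - x[1]*a[0]]
assert Ax == cross_xa
```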

\subsection{Hermitian conjugate}
Let \(M\) be an \(m \times n\) matrix.
Then the Hermitian conjugate (also known as the conjugate transpose) \(M^\dagger\) is an \(n \times m\) matrix defined by
\[
	(M^\dagger)_{ia} = \overline{M_{ai}}
\]
If \(M\) is square, then \(M\) is Hermitian if and only if \(M^\dagger = M\), or equivalently \(M_{ia} = \overline{M_{ai}}\); \(M\) is anti-Hermitian if and only if \(M^\dagger = -M\), or equivalently \(M_{ia} = -\overline{M_{ai}}\).
Similarly to above, if \(\vb z\) is a column vector in \(\mathbb C^n\) (an \(n \times 1\) matrix), then the complex inner product is given by \(\vb z \cdot \vb w = \vb z^\dagger \vb w\).
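A short sketch of the Hermitian conjugate using Python's built-in complex type; the function name and the sample matrix and vectors are illustrative, not from the notes.

```python
def dagger(M):
    # (M^dagger)_{ia} = conjugate of M_{ai}
    return [[M[a][i].conjugate() for a in range(len(M))]
            for i in range(len(M[0]))]

# A Hermitian matrix: diagonal real, off-diagonal entries conjugate pairs
M = [[2 + 0j, 1 - 1j],
     [1 + 1j, 5 + 0j]]
assert dagger(M) == M

# Complex inner product <z, w> = z^dagger w for column vectors in C^2
z = [1 + 2j, 3 - 1j]
w = [2 - 1j, 0 + 1j]
inner = sum(zk.conjugate() * wk for zk, wk in zip(z, w))
assert inner == -1 - 2j
```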

\subsection{Trace}
For a complex \(n \times n\) (square) matrix \(M\), the trace of the matrix, denoted \(\tr(M)\), is defined by
\[
	\tr(M) = M_{ii} = M_{11} + M_{22} + \cdots + M_{nn}
\]
It has a number of key properties.
\begin{itemize}
	\item \(\tr(\alpha M + \beta N) = \alpha \tr M + \beta \tr N\) where \(\alpha\) and \(\beta\) are scalars, and \(M\) and \(N\) are \(n \times n\) matrices.
	\item \(\tr(MN) = \tr(NM)\) where \(M\) is \(m \times n\) and \(N\) is \(n \times m\).
	      \(MN\) and \(NM\) need not have the same dimension, but their traces are identical.
	      We can check this as follows: \(\tr(MN) = (MN)_{aa} = M_{ai} N_{ia} = N_{ia} M_{ai} = (NM)_{ii} = \tr(NM)\).
	\item \(\tr(M^\transpose) = \tr(M)\)
	\item \(\tr(I) = \delta_{ii} = n\) where \(n\) is the dimensionality of the vector space.
	\item If \(S\) is \(n \times n\) and symmetric, let
	      \begin{align*}
		      T                             & = S - \frac{1}{n}\tr(S) I                \\
		      \text{or } T_{ij}             & = S_{ij} - \frac{1}{n}\tr(S) \delta_{ij} \\
		      \text{then } \tr (T) = T_{ii} & = S_{ii} - \frac{1}{n}\tr(S) \delta_{ii} \\
		                                    & = \tr(S) - \frac{1}{n}\tr(S) = 0
	      \end{align*}
	      Then \(S = T + \frac{1}{n}\tr(S)I\) where \(T\) is traceless and the right hand term \(\frac{1}{n}\tr(S)I\) is `pure trace'.
	\item If \(A\) is \(n \times n\) antisymmetric, \(\tr(A) = A_{ii} = 0\).
\end{itemize}
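As a numerical sanity check of the cyclic property and the traceless split, here is a pure-Python sketch; the matrices are illustrative choices.

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def tr(M):
    # trace of a square matrix: sum of diagonal entries
    return sum(M[i][i] for i in range(len(M)))

M = [[1, 2, 3],
     [4, 5, 6]]           # 2 x 3
N = [[1, 0],
     [2, 1],
     [0, 3]]              # 3 x 2

# MN is 2x2 and NM is 3x3, but their traces agree
assert tr(matmul(M, N)) == tr(matmul(N, M))

# Traceless part T = S - (1/n) tr(S) I of a symmetric matrix
S = [[4.0, 1.0],
     [1.0, 2.0]]          # symmetric, tr S = 6
n = 2
T = [[S[i][j] - tr(S) / n * (i == j) for j in range(n)] for i in range(n)]
assert tr(T) == 0.0
```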

\subsection{Orthogonal matrices}
A real \(n \times n\) matrix \(U\) is orthogonal if and only if its transpose is its inverse.
\[
	U^\transpose U = UU^\transpose = I
\]
These conditions can be written
\[
	U_{ki}U_{kj} = U_{ik}U_{jk} = \delta_{ij}
\]
In words, the first expression states that the columns of \(U\) are orthonormal, and the second states that the rows of \(U\) are orthonormal.
\[
	U^\transpose U = \begin{pmatrix}
		           & \vdots  &             \\
		\leftarrow & \vb C_i & \rightarrow \\
		           & \vdots  &
	\end{pmatrix}
	\begin{pmatrix}
		       & \uparrow   &        \\
		\cdots & \vb C_j    & \cdots \\
		       & \downarrow &
	\end{pmatrix}
	= \begin{pmatrix}
		1      & \cdots & 0      \\
		\vdots & \ddots & \vdots \\
		0      & \cdots & 1
	\end{pmatrix}
\]
For example, if \(U = R(\theta)\) is a rotation through \(\theta\) around an axis \(\nhat\), then \(U^\transpose = R(\theta)^\transpose = R(-\theta) = R(\theta)^{-1} = U^{-1}\).
An equivalent definition for orthogonality is: \(U\) is orthogonal if and only if it preserves the inner product on \(\mathbb R^n\).
\[
	(U\vb x)\cdot(U \vb y) = \vb x \cdot \vb y\quad \forall \vb x, \vb y \in \mathbb R^n
\]
To check equivalence:
\begin{align*}
	(U\vb x)\cdot(U \vb y) & = (U\vb x)^\transpose (U\vb y)             \\
	                       & = (\vb x^\transpose U^\transpose) (U\vb y) \\
	                       & = \vb x^\transpose (U^\transpose U) \vb y  \\
	                       & = \vb x^\transpose \vb y                   \\
	                       & = \vb x \cdot \vb y
\end{align*}
which is true if and only if \(U^\transpose U = I\).
Note that in \(\mathbb R^n\), the columns of \(U\) are \(U\vb e_1, \cdots, U\vb e_n\), so the inner product is preserved on the standard basis vectors if and only if
\[
	(U\vb e_i)\cdot(U\vb e_j) = \vb e_i \cdot \vb e_j = \delta_{ij}
\]
i.e.\ the columns of \(U\) are orthonormal.
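A quick numerical sketch of the equivalence: a matrix with orthonormal columns preserves the dot product. The sample \(U\) (a rotation) and the test vectors are our own illustrative choices.

```python
import math

t = 0.7
U = [[math.cos(t), -math.sin(t)],
     [math.sin(t),  math.cos(t)]]   # columns are orthonormal

def apply(U, v):
    # matrix-vector product U v
    return [sum(U[i][j] * v[j] for j in range(len(v))) for i in range(len(U))]

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

x, y = [1.0, 2.0], [3.0, -1.0]
# (U x) . (U y) = x . y up to floating-point error
assert abs(dot(apply(U, x), apply(U, y)) - dot(x, y)) < 1e-12
```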

Let us now try to find a general \(2 \times 2\) orthogonal matrix.
We begin by transforming the basis vectors.
\(\vb e_1 = \begin{pmatrix} 1 \\ 0 \end{pmatrix}\) must be transformed to a unit vector.
Therefore, in the most general case:
\[
	U \begin{pmatrix}
		1 \\ 0
	\end{pmatrix} = \begin{pmatrix}
		\cos \theta \\ \sin \theta
	\end{pmatrix}
\]
for some parameter \(\theta\).
Now, the image of the other basis vector \(\vb e_2\) must be a unit vector orthogonal to \(U \vb e_1\), and so it must be
\[
	U \begin{pmatrix}
		0 \\ 1
	\end{pmatrix} = \pm\begin{pmatrix}
		-\sin \theta \\
		\cos \theta
	\end{pmatrix}
\]
So we have two cases:
\[
	U = R = \begin{pmatrix}
		\cos \theta & -\sin\theta \\ \sin \theta & \cos \theta
	\end{pmatrix};\quad U = H = \begin{pmatrix}
		\cos \theta & \sin \theta \\ \sin \theta & -\cos \theta
	\end{pmatrix}
\]
where \(R\) is a rotation by \(\theta\) and \(H\) is a reflection in \(\mathbb R^2\) in the line through the origin with unit normal
\[
	\nhat = \begin{pmatrix}
		-\sin \frac{\theta}{2} \\ \cos \frac{\theta}{2}
	\end{pmatrix}
\]
because
\[
	H_{ij} = \delta_{ij} - 2n_i n_j \therefore\ H = \begin{pmatrix}
		1 - 2 \sin^2 \frac{\theta}{2}              & 2\sin\frac{\theta}{2}\cos\frac{\theta}{2} \\
		2\sin\frac{\theta}{2} \cos\frac{\theta}{2} & 1-2\cos^2\frac{\theta}{2}
	\end{pmatrix}
\]
which simplifies as required.
Note that \(\det R = +1\), but \(\det H = -1\).
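The two families above can be checked numerically: \(R\) has determinant \(+1\), \(H\) has determinant \(-1\), and \(H\) is recovered from \(H_{ij} = \delta_{ij} - 2 n_i n_j\). This is a pure-Python sketch with an arbitrary sample angle.

```python
import math

t = 0.9
R = [[math.cos(t), -math.sin(t)],
     [math.sin(t),  math.cos(t)]]   # rotation by t
H = [[math.cos(t),  math.sin(t)],
     [math.sin(t), -math.cos(t)]]   # reflection

def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

assert abs(det2(R) - 1) < 1e-12     # det R = +1
assert abs(det2(H) + 1) < 1e-12     # det H = -1

# Rebuild H from its unit normal n = (-sin(t/2), cos(t/2))
n = [-math.sin(t / 2), math.cos(t / 2)]
H2 = [[(i == j) - 2 * n[i] * n[j] for j in range(2)] for i in range(2)]
assert all(abs(H[i][j] - H2[i][j]) < 1e-12 for i in range(2) for j in range(2))
```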

\subsection{Unitary matrices}
A complex \(n \times n\) matrix \(U\) is called unitary if and only if
\[
	U^\dagger U = U U^\dagger = I
\]
Equivalently, \(U\) is unitary if and only if it preserves the complex inner product on \(\mathbb C^n\):
\[
	\langle U \vb z, U \vb w \rangle = \langle \vb z, \vb w \rangle\quad \forall \vb z, \vb w \in \mathbb C^n
\]
To check equivalence:
\begin{align*}
	\langle U \vb z, U \vb w \rangle & = (U \vb z)^\dagger (U \vb w)         \\
	                                 & = (\vb z^\dagger U^\dagger) (U \vb w) \\
	                                 & = \vb z^\dagger (U^\dagger U) \vb w   \\
	                                 & = \vb z^\dagger \vb w
\end{align*}
which equals \(\langle \vb z, \vb w \rangle\) if and only if \(U^\dagger U = I\).
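A sketch of the same check in pure Python: the \(2 \times 2\) matrix below (our own example) satisfies \(U^\dagger U = I\), and preserves the complex inner product on sample vectors.

```python
import math

c = 1 / math.sqrt(2)
U = [[c + 0j,  c + 0j],
     [c * 1j, -c * 1j]]   # a unitary 2x2 matrix

def dagger(M):
    return [[M[a][i].conjugate() for a in range(len(M))]
            for i in range(len(M[0]))]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

# U^dagger U = I (up to floating-point error)
I2 = matmul(dagger(U), U)
assert all(abs(I2[i][j] - (i == j)) < 1e-12 for i in range(2) for j in range(2))

def inner(z, w):
    # <z, w> = z^dagger w, conjugate-linear in the first argument
    return sum(zk.conjugate() * wk for zk, wk in zip(z, w))

def apply(U, v):
    return [sum(U[i][j] * v[j] for j in range(len(v))) for i in range(len(U))]

z, w = [1 + 1j, 2 - 1j], [0 + 1j, 1 + 0j]
assert abs(inner(apply(U, z), apply(U, w)) - inner(z, w)) < 1e-12
```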
