\subsection{Orthogonal transformations and rotations}
A matrix \(R\) is orthogonal if \(R^\transpose R = I\); equivalently, \((R\vb x) \cdot (R\vb y) = \vb x \cdot \vb y\) for all vectors \(\vb x, \vb y\), and equivalently the rows (and likewise the columns) of \(R\) are orthonormal.
The set of \(n \times n\) orthogonal matrices forms the orthogonal group \(O_n = O(n)\).
If \(R \in O(n)\) then \(\det R = \pm 1\).
\(SO_n = SO(n)\) is the special orthogonal group, which is the subgroup of \(O(n)\) defined by \(\det R = 1\).
If some matrix \(R\) is an element of \(O(n)\), then \(R\) preserves the modulus of \(n\)-dimensional volume.
If \(R \in SO(n)\), then \(R\) preserves not only the modulus but also the sign of such a volume.

\(SO(n)\) consists precisely of all rotations in \(\mathbb R^n\).
\(O(n) \setminus SO(n)\) consists of the orientation-reversing orthogonal transformations; in particular, it contains all reflections.
Fixing any \(H \in O(n) \setminus SO(n)\), every element of \(O(n)\) can be written as \(R\) or \(RH\) for some \(R \in SO(n)\).
For example, if \(n\) is odd, we can choose \(H = -I\).
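As a quick numerical sanity check of these properties (a sketch using NumPy; the particular rotation and vectors are illustrative, not from the text):

```python
import numpy as np

# A rotation about the z-axis by angle t: an element of SO(3).
t = 0.7
R = np.array([[np.cos(t), -np.sin(t), 0],
              [np.sin(t),  np.cos(t), 0],
              [0,          0,         1]])

assert np.allclose(R.T @ R, np.eye(3))        # R^T R = I
x, y = np.array([1., 2., 3.]), np.array([-1., 0., 4.])
assert np.isclose((R @ x) @ (R @ y), x @ y)   # dot products preserved
assert np.isclose(np.linalg.det(R), 1.0)      # det R = +1

# For odd n we may take H = -I in O(n) \ SO(n): det(-I) = (-1)^3 = -1.
H = -np.eye(3)
assert np.isclose(np.linalg.det(H), -1.0)
assert np.isclose(np.linalg.det(R @ H), -1.0)  # RH is orientation-reversing
```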

Now, we can consider the transformation \(x'_i = R_{ij} x_j\) (summing over \(j\)) from two distinct points of view.
\begin{itemize}
	\item (active) The rotation \(R\) acts on the vector \(\vb x\) and yields a new vector \(\vb x'\).
	      The \(x'_i\) are components of the transformed vector in terms of the standard basis vectors.
	\item (passive) The \(x'_i\) are components of the same vector \(\vb x\) but with respect to new orthonormal basis vectors \(\vb u_i\).
	      In general, \(\vb x = \sum_i x_i \vb e_i = \sum_i x'_i \vb u_i\), where \(\vb u_i = \sum_j R_{ij} \vb e_j = \sum_j \vb e_j P_{ji}\).
	      So \(P = R^{-1} = R^\transpose\) where \(P\) is the change of basis matrix.
\end{itemize}
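The two viewpoints can be checked numerically (a sketch with an illustrative 2D rotation, assuming NumPy):

```python
import numpy as np

# An illustrative rotation R and vector x in 2D.
t = 0.3
R = np.array([[np.cos(t), -np.sin(t)],
              [np.sin(t),  np.cos(t)]])
x = np.array([2.0, 1.0])

# Active view: x' = R x is a new vector, components in the standard basis.
x_prime = R @ x

# Passive view: the same x has components x'_i w.r.t. the new basis
# u_i = sum_j R_ij e_j, i.e. the rows of R.
u = R  # u[i] is the basis vector u_i
reconstructed = sum(x_prime[i] * u[i] for i in range(2))
assert np.allclose(reconstructed, x)          # x = sum_i x'_i u_i

# The change-of-basis matrix P with u_i = sum_j e_j P_ji is P = R^T = R^{-1}.
P = R.T
assert np.allclose(P, np.linalg.inv(R))
```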

\subsection{2D Minkowski space}
Consider a new `inner product' on \(\mathbb R^2\) given by
\[
	(\vb x, \vb y) = \vb x^\transpose J \vb y;\quad J = \begin{pmatrix}
		1 & 0 \\ 0 & -1
	\end{pmatrix}
\]
\[
	\therefore\ \left( \begin{pmatrix}
			x_0 \\ x_1
		\end{pmatrix}, \begin{pmatrix}
			y_0 \\ y_1
		\end{pmatrix} \right) = x_0 y_0 - x_1 y_1
\]
We start indexing these vectors from zero, not one.
Here are some important properties.
\begin{itemize}
	\item This `inner product' is not positive definite.
	      In fact, \((\vb x, \vb x) = x_0^2 - x_1^2\).
	      (This is a quadratic form in \(\vb x\) whose matrix \(J\) has eigenvalues \(\pm 1\).)
	\item It is bilinear and symmetric.
	\item Defining \(\vb e_0 = \begin{pmatrix}
		      1 \\ 0
	      \end{pmatrix}\) and \(\vb e_1 = \begin{pmatrix}
		      0 \\ 1
	      \end{pmatrix}\), they obey
	      \[
		      (\vb e_0, \vb e_0) = -(\vb e_1, \vb e_1) = 1;\quad (\vb e_0, \vb e_1) = 0
	      \]
	      So \(\vb e_0\) and \(\vb e_1\) are orthonormal in this generalised sense.
\end{itemize}
This inner product is known as the Minkowski metric on \(\mathbb R^2\).
\(\mathbb R^2\) with this metric is called Minkowski space.
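The properties above can be verified directly (an illustrative NumPy sketch; the vectors chosen are arbitrary):

```python
import numpy as np

J = np.diag([1.0, -1.0])  # the Minkowski metric on R^2

def mink(x, y):
    """(x, y) = x^T J y = x0*y0 - x1*y1."""
    return x @ J @ y

x = np.array([3.0, 2.0])
y = np.array([1.0, 5.0])
assert np.isclose(mink(x, y), 3*1 - 2*5)      # x0*y0 - x1*y1
assert np.isclose(mink(x, y), mink(y, x))     # symmetric

# Not positive definite: e1 has 'norm' -1.
e0, e1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
assert np.isclose(mink(e0, e0), 1.0)
assert np.isclose(mink(e1, e1), -1.0)
assert np.isclose(mink(e0, e1), 0.0)
```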

\subsection{Lorentz transformations}
Let us consider a matrix
\[
	M = \begin{pmatrix}
		M_{00} & M_{01} \\
		M_{10} & M_{11}
	\end{pmatrix}
\]
giving a map \(\mathbb R^2 \to \mathbb R^2\); this preserves the Minkowski metric if and only if \((M\vb x, M\vb y) = (\vb x, \vb y)\) for any vectors \(\vb x, \vb y\).
Expanded, this condition is
\[
	(M\vb x)^\transpose J(M \vb y) = \vb x^\transpose M^\transpose J M \vb y = \vb x^\transpose J \vb y
\]
\[
	\implies M^\transpose J M = J
\]
The set of such matrices forms a group.
Also, taking determinants of \(M^\transpose J M = J\) gives \((\det M)^2 = 1\), so \(\det M = \pm 1\) as before.
Furthermore, \(M_{00}^2 \geq 1\) (shown below), so either \(M_{00} \geq 1\) or \(M_{00} \leq -1\).
The subgroup with \(\det M = +1\) and \(M_{00} \geq 1\) is known as the Lorentz group.

Let us find the general form of \(M\), using the fact that the columns \(M \vb e_0\) and \(M \vb e_1\) are orthonormal (in the generalised sense) with respect to the Minkowski metric.
\[
	(M \vb e_0, M \vb e_0) = M_{00}^2 - M_{10}^2 = (\vb e_0, \vb e_0) = 1\quad (\text{hence } M_{00}^2 \geq 1)
\]
Taking \(M_{00} \geq 1\), we can write
\[
	M\vb e_0 = \begin{pmatrix}
		\cosh \theta \\ \sinh \theta
	\end{pmatrix}
\]
for some real value \(\theta\).
For the other column,
\[
	(M \vb e_0, M \vb e_1) = 0;\; (M \vb e_1, M \vb e_1) = -1 \implies M \vb e_1 = \pm\begin{pmatrix}
		\sinh \theta \\
		\cosh \theta
	\end{pmatrix}
\]
The sign is fixed to be positive by the condition that \(\det M = +1\).
\[
	M = \begin{pmatrix}
		\cosh \theta & \sinh \theta \\
		\sinh \theta & \cosh \theta
	\end{pmatrix}
\]
The curves defined by \((\vb x, \vb x) = k\) where \(k\) is a constant are hyperbolas.
This is analogous to how the curves defined by \(\vb x \cdot \vb x = k\) are circles.
So applying \(M\) to any vector on a given branch of such a hyperbola yields a vector on the same branch.
Note that these matrices obey the rule \(M(\theta_1) M(\theta_2) = M(\theta_1 + \theta_2)\), which follows from the addition formulae for \(\cosh\) and \(\sinh\).
This confirms that they form a group.
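Both the metric preservation and the group law can be checked numerically (an illustrative NumPy sketch; the rapidity values are arbitrary):

```python
import numpy as np

J = np.diag([1.0, -1.0])

def M(theta):
    """The Lorentz matrix with det = +1 and M_00 >= 1."""
    return np.array([[np.cosh(theta), np.sinh(theta)],
                     [np.sinh(theta), np.cosh(theta)]])

t1, t2 = 0.5, 1.2
# Metric preservation: M^T J M = J.
assert np.allclose(M(t1).T @ J @ M(t1), J)
# Group law: M(t1) M(t2) = M(t1 + t2), via the cosh/sinh addition formulae.
assert np.allclose(M(t1) @ M(t2), M(t1 + t2))
# A vector on the hyperbola (x, x) = 1 stays on it under M.
x = np.array([np.cosh(0.3), np.sinh(0.3)])
assert np.isclose((M(t1) @ x) @ J @ (M(t1) @ x), 1.0)
```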

\subsection{Application to special relativity}
Let
\[
	M(\theta) = \gamma(v) \begin{pmatrix}
		1 & v \\ v & 1
	\end{pmatrix};\quad v = \tanh \theta;\quad \gamma = (1 - v^2)^{-\frac{1}{2}}
\]
Here, \(v\) lies in the range \(-1 < v < 1\).
We will rename \(x_0\) to be \(t\), which is now our time coordinate.
\(x_1\) will just be written \(x\), our one-dimensional space coordinate.
Then,
\[
	\vb x' = M\vb x \iff \begin{cases}
		t' & = \gamma \cdot (t + vx) \\
		x' & = \gamma \cdot (x + vt)
	\end{cases}
\]
This is a Lorentz transformation, or `boost', relating the time and space coordinates for observers moving with relative velocity \(v\) in Special Relativity, in units where the speed of light \(c\) is taken to be 1.
The \(\gamma\) factor in the Lorentz transformation gives rise to time dilation and length contraction effects.
The group property \(M(\theta_3) = M(\theta_1)M(\theta_2)\) with \(\theta_3 = \theta_1 + \theta_2\) corresponds, via \(v_i = \tanh \theta_i\), to the relativistic addition of velocities: by the addition formula for \(\tanh\),
\[
	v_3 = \tanh(\theta_1 + \theta_2) = \frac{\tanh \theta_1 + \tanh \theta_2}{1 + \tanh \theta_1 \tanh \theta_2} = \frac{v_1 + v_2}{1 + v_1 v_2}
\]
Since \(\abs{\tanh \theta} < 1\) for every real \(\theta\), we have \(\abs{v_3} < 1\): composing any two velocities below the speed of light (here \(1\)) gives another velocity below the speed of light.
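The velocity-addition rule can be checked against the matrix composition directly (an illustrative NumPy sketch; the velocities \(0.6\) and \(0.7\) are arbitrary choices below the speed of light):

```python
import numpy as np

def gamma(v):
    return 1.0 / np.sqrt(1.0 - v**2)

def boost(v):
    """M(theta) written in terms of v = tanh(theta), with c = 1."""
    return gamma(v) * np.array([[1.0, v], [v, 1.0]])

v1, v2 = 0.6, 0.7
v3 = (v1 + v2) / (1 + v1 * v2)                # relativistic velocity addition
assert np.allclose(boost(v1) @ boost(v2), boost(v3))
assert abs(v3) < 1                            # still below the speed of light

# The same composition via rapidities theta_i = artanh(v_i):
assert np.isclose(v3, np.tanh(np.arctanh(v1) + np.arctanh(v2)))
```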
