\subsection{Definitions}
\begin{definition}
	Let \( R \) be a ring.
	A \textit{module over \( R \)} is a triple \( (M, +, \cdot) \) consisting of a set \( M \) and two operations \( + \colon M \times M \to M \) and \( \cdot \colon R \times M \to M \), that satisfy
	\begin{enumerate}
		\item \( (M, +) \) is an abelian group with identity \( 0 = 0_M \);
		\item \( (r_1 + r_2) \cdot m = r_1 \cdot m + r_2 \cdot m \);
		\item \( r \cdot (m_1 + m_2) = r \cdot m_1 + r \cdot m_2 \);
		\item \( r_1 \cdot (r_2 \cdot m) = (r_1 \cdot r_2) \cdot m \);
		\item \( 1_R \cdot m = m \);
	\end{enumerate}
\end{definition}
\begin{remark}
	Closure is implicitly required by the types of the \( + \) and \( \cdot \) operations.
\end{remark}
\begin{example}
	A module over a field is precisely a vector space.

	A \( \mathbb Z \)-module is precisely the same as an abelian group, since
	\[
		\cdot \colon \mathbb Z \times A \to A;\quad n \cdot a = \begin{cases}
			\underbrace{a + \dots + a}_{n \text{ times}}         & \text{if } n > 0 \\
			0                                                    & \text{if } n = 0 \\
			-\qty(\underbrace{a + \dots + a}_{-n \text{ times}}) & \text{if } n < 0
		\end{cases}
	\]

	Let \( F \) be a field, and \( V \) be a vector space over \( F \).
	Let \( \alpha \colon V \to V \) be an endomorphism.
	We can turn \( V \) into an \( F[X] \)-module by
	\[
		\cdot \colon F[X] \times V \to V;\quad f \cdot v = (f(\alpha))(v)
	\]
	Note that the structure of the \( F[X] \)-module depends on the choice of \( \alpha \).
	We can write \( V = V_\alpha \) to disambiguate.

	For any ring \( R \), we can consider \( R^n \) as an \( R \)-module via
	\[
		r \cdot (r_1, \dots, r_n) = (r \cdot r_1, \dots, r \cdot r_n)
	\]
	In particular, the case \( n = 1 \) shows that any ring \( R \) can be regarded as a module over itself, where the ring multiplication serves as the scalar multiplication.

	For an ideal \( I \vartriangleleft R \), we can regard \( I \) as an \( R \)-module, since \( I \) is preserved under multiplication by elements in \( R \).
	The quotient ring \( \faktor{R}{I} \) is also an \( R \)-module, defining multiplication as \( r \cdot (s+I) = rs + I \).

	Let \( \varphi \colon R \to S \) be a ring homomorphism.
	Then any \( S \)-module can be regarded as an \( R \)-module.
	We define \( r \cdot m = \varphi(r) \cdot m \).
	In particular, this applies when \( R \) is a subring of \( S \), and \( \varphi \) is the inclusion map.
	So any module over a ring can be viewed as a module over any subring.
\end{example}
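As a computational aside (not part of the notes), the \( \mathbb Z \)-module structure above can be spot-checked directly: the following Python sketch implements \( n \cdot a \) by repeated addition, taking \( \faktor{\mathbb Z}{6\mathbb Z} \) as a hypothetical test group, and verifies the module axioms over a small range of scalars.

```python
# A minimal sketch: the Z-module structure on an abelian group, with
# n . a given by repeated addition as in the cases formula above.
# The group Z/6 under addition mod 6 is a hypothetical test case.

MOD = 6

def add(a, b):
    return (a + b) % MOD

def neg(a):
    return (-a) % MOD

def scalar_mul(n, a):
    """n . a = a + ... + a (n times); 0 if n == 0; -(...) if n < 0."""
    if n == 0:
        return 0
    if n < 0:
        return neg(scalar_mul(-n, a))
    result = 0
    for _ in range(n):
        result = add(result, a)
    return result

# Spot-check the four module axioms on a small range of scalars.
for r1 in range(-6, 7):
    for r2 in range(-6, 7):
        for m in range(MOD):
            assert scalar_mul(r1 + r2, m) == add(scalar_mul(r1, m), scalar_mul(r2, m))
            assert scalar_mul(r1 * r2, m) == scalar_mul(r1, scalar_mul(r2, m))
for m1 in range(MOD):
    for m2 in range(MOD):
        assert scalar_mul(5, add(m1, m2)) == add(scalar_mul(5, m1), scalar_mul(5, m2))
        assert scalar_mul(1, m1) == m1
```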
\begin{definition}
	Let \( M \) be an \( R \)-module.
	Then \( N \subseteq M \) is an \textit{\( R \)-submodule of \( M \)}, written \( N \leq M \), if \( (N, +) \leq (M, +) \) and \( rn \in N \) for all \( r \in R \) and \( n \in N \).
\end{definition}
\begin{example}
	By considering \( R \) as an \( R \)-module, a subset of \( R \) is an \( R \)-submodule if and only if it is an ideal.
	If \( R = F \) is a field, this definition corresponds to the definition of a vector subspace.
\end{example}
\begin{definition}
	Let \( N \leq M \) be \( R \)-modules.
	Then, the \textit{quotient} \( \faktor{M}{N} \) is defined as the quotient of groups under addition, and with scalar multiplication defined as \( r \cdot (m + N) = rm + N \).
	This is well-defined, since \( N \) is preserved under scalar multiplication.
	This makes \( \faktor{M}{N} \) an \( R \)-module.
\end{definition}
\begin{remark}
	Submodules are analogous both to subrings and to ideals.
\end{remark}
\begin{definition}
	Let \( M, N \) be \( R \)-modules.
	Then \( f \colon M \to N \) is a \textit{\( R \)-module homomorphism} if it is a homomorphism of \( (M, +) \) and \( (N, +) \), and scalar multiplication is preserved: \( f(r \cdot m) = r \cdot f(m) \).
	An \textit{\( R \)-module isomorphism} is an \( R \)-module homomorphism that is a bijection.
\end{definition}
\begin{example}
	If \( R = F \) is a field, \( F \)-module homomorphisms are exactly linear maps.
\end{example}
\begin{theorem}
	Let \( f \colon M \to N \) be an \( R \)-module homomorphism.
	Then
	\begin{enumerate}
		\item \( \ker f = \qty{m \in M \colon f(m) = 0} \leq M \);
		\item \( \Im f = \qty{f(m) \in N \colon m \in M} \leq N \);
		\item \( \faktor{M}{\ker f} \cong \Im f \).
	\end{enumerate}
\end{theorem}
\begin{theorem}
	Let \( A, B \leq M \) be \( R \)-submodules.
	Then
	\begin{enumerate}
		\item \( A + B = \qty{a + b \colon a \in A, b \in B} \leq M \);
		\item \( A \cap B \leq M \);
		\item \( \faktor{A}{A \cap B} \cong \faktor{A + B}{B} \).
	\end{enumerate}
\end{theorem}
\begin{theorem}
	If \( N \leq L \leq M \) are \( R \)-modules, then
	\[
		\faktor{M/N}{L/N} \cong \faktor{M}{L}
	\]
\end{theorem}
For \( N \leq M \), there is a correspondence between submodules of \( \faktor{M}{N} \) and submodules of \( M \) containing \( N \).
These isomorphism theorems can be proved exactly as before.
Note that these results apply to vector spaces; for example, the first isomorphism theorem immediately gives the rank-nullity theorem.

\subsection{Finitely generated modules}
\begin{definition}
	Let \( M \) be an \( R \)-module.
	If \( m \in M \), then we write \( Rm = \qty{rm \colon r \in R} \).
	This is an \( R \)-submodule of \( M \), known as the submodule \textit{generated by \( m \)}.

	If \( A, B \leq M \), we can define \( A + B = \qty{a + b \colon a \in A, b \in B} \), known as the \textit{sum of submodules}.
	Note that this sum is commutative and associative.
\end{definition}
\begin{definition}
	A module \( M \) is \textit{finitely generated} if it is the sum of finitely many submodules generated by a single element.
	In other words, \( M = Rm_1 + \dots + Rm_n \).
\end{definition}
This is the analogue of finite dimensionality in linear algebra.
\begin{lemma}
	An \( R \)-module \( M \) is finitely generated if and only if there exists a surjective \( R \)-module homomorphism \( f \colon R^n \to M \) for some \( n \).
\end{lemma}
\begin{proof}
	If \( M \) is finitely generated, we have \( M = Rm_1 + \dots + Rm_n \).
	We define \( f \colon R^n \to M \) by \( (r_1, \dots, r_n) \mapsto r_1 m_1 + \dots + r_n m_n \).
	This is surjective.

	Conversely, suppose such a surjective homomorphism \( f \) exists.
	Let \( e_i = (0, \dots, 1, \dots, 0) \) be the element of \( R^n \) with all entries zero except for 1 in the \( i \)th place.
	Let \( m_i = f(e_i) \).
	Then, since \( f \) is surjective, any element \( m \in M \) is contained in the image of \( f \), so is of the form \( f(r_1, \dots, r_n) = r_1 m_1 + \dots + r_n m_n \).
\end{proof}
\begin{corollary}
	Any quotient by a submodule of a finitely generated module is finitely generated.
\end{corollary}
\begin{proof}
	Let \( N \leq M \), where \( M \) is finitely generated.
	Then there exists a surjective \( R \)-module homomorphism \( f \colon R^n \to M \).
	Then \( q \circ f \), where \( q \) is the quotient map, is also a surjective homomorphism.
	So \( \faktor{M}{N} \) is finitely generated.
\end{proof}
\begin{example}
	It is not always the case that a submodule of a finitely generated module is finitely generated.
	Let \( R \) be a non-Noetherian ring, and \( I \) an ideal in \( R \) that is not finitely generated (in the ring sense).
	\( R \) is a finitely generated \( R \)-module, since \( R1 = R \).
	\( I \) is a submodule of \( R \), which is not finitely generated (in the module sense).
\end{example}
\begin{remark}
	If \( R \) is Noetherian, it is always the case that submodules of finitely generated \( R \)-modules are finitely generated.
	This will be shown on the example sheets.
\end{remark}

\subsection{Torsion}
\begin{definition}
	Let \( M \) be an \( R \)-module.
	\begin{enumerate}
		\item \( m \in M \) is \textit{torsion} if there exists \( 0 \neq r \in R \) such that \( rm = 0 \);
		\item \( M \) is a \textit{torsion module} if every element is torsion;
		\item \( M \) is a \textit{torsion-free module} if 0 is the only torsion element.
	\end{enumerate}
\end{definition}
\begin{example}
	The torsion elements in a \( \mathbb Z \)-module (which is an abelian group) are precisely the elements of finite order.
	If \( F \) is a field, any \( F \)-module is torsion-free.
\end{example}
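The first claim in the example can be illustrated computationally; the following Python sketch (a hypothetical test case, not from the notes) works in the \( \mathbb Z \)-module \( \faktor{\mathbb Z}{4\mathbb Z} \oplus \mathbb Z \), where an element \( (a, b) \) is torsion exactly when \( b = 0 \).

```python
# Hypothetical illustration: torsion elements of the Z-module Z/4 (+) Z.
# An element (a, b) is torsion iff some nonzero integer kills it, which
# here happens exactly when b = 0 (then 4 . (a, 0) = (0, 0)).

def scalar_mul(r, m):
    a, b = m
    return ((r * a) % 4, r * b)

def is_torsion(m, search_bound=100):
    # Search for a witness 0 != r with r . m = 0; a finite bound suffices
    # here because any torsion element of Z/4 (+) Z is killed by r = 4.
    return any(scalar_mul(r, m) == (0, 0) for r in range(1, search_bound))

assert is_torsion((3, 0))        # finite order: killed by 4
assert not is_torsion((0, 1))    # infinite order in the Z factor
assert not is_torsion((2, -5))
```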

\subsection{Direct sums}
\begin{definition}
	Let \( M_1, \dots, M_n \) be \( R \)-modules.
	Then the \textit{direct sum} of \( M_1, \dots, M_n \), written \( M_1 \oplus \dots \oplus M_n \), is the set \( M_1 \times \dots \times M_n \), with the operations of addition and scalar multiplication defined componentwise.
	We can show that the direct sum of (finitely many) \( R \)-modules is an \( R \)-module.
\end{definition}
\begin{example}
	\( R^n = R \oplus \dots \oplus R \), where we take the direct sum of \( n \) copies of \( R \).
\end{example}
\begin{lemma}
	Let \( M = \bigoplus_{i=1}^n M_i \), and for each \( M_i \), let \( N_i \leq M_i \).
	Then \( N = \bigoplus_{i=1}^n N_i \) is a submodule of \( M \).
	Further,
	\[
		\faktor{M}{N} = \faktor{\bigoplus_{i=1}^n M_i}{\bigoplus_{i=1}^n N_i} \cong \bigoplus_{i=1}^n \faktor{M_i}{N_i}
	\]
\end{lemma}
\begin{proof}
	First, we can see that this \( N \) is a submodule.
	Applying the first isomorphism theorem to the surjective \( R \)-module homomorphism \( M \to \bigoplus_{i=1}^n \faktor{M_i}{N_i} \) given by \( (m_1, \dots, m_n) \mapsto (m_1 + N_1, \dots, m_n + N_n) \), the result follows as required, since the kernel is \( N \).
\end{proof}

\subsection{Free modules}
\begin{definition}
	Let \( m_1, \dots, m_n \in M \).
	The set \( \qty{m_1, \dots, m_n} \) is \textit{independent} if \( \sum_{i=1}^n r_i m_i = 0 \) implies that the \( r_i \) are all zero.
\end{definition}
\begin{definition}
	A subset \( S \subseteq M \) \textit{generates \( M \) freely} if:
	\begin{enumerate}
		\item \( S \) generates \( M \), so for all \( m \in M \), we can find finitely many entries \( s_i \) and coefficients \( r_i \) such that \( m = \sum_{i=1}^k r_i s_i \);
		\item any function \( \psi \colon S \to N \), where \( N \) is an \( R \)-module, extends to an \( R \)-module homomorphism \( \theta \colon M \to N \).
	\end{enumerate}
\end{definition}
\begin{remark}
	In (ii), such an extension \( \theta \) is always unique if it exists, by (i).
\end{remark}
\begin{definition}
	An \( R \)-module \( M \) freely generated by some subset \( S \subseteq M \) is called \textit{free}.
	We say that \( S \) is a \textit{free basis} for \( M \).
\end{definition}
\begin{remark}
	Free bases in the study of modules are analogous to bases in linear algebra.
	All vector spaces are free modules, but not all modules are free.
\end{remark}
\begin{proposition}
	For a finite subset \( S = \qty{m_1, \dots, m_n} \subseteq M \), the following are equivalent.
	\begin{enumerate}
		\item \( S \) generates \( M \) freely;
		\item \( S \) generates \( M \), and \( S \) is independent;
		\item every element of \( M \) can be written uniquely as \( r_1 m_1 + \dots + r_n m_n \) for some \( r_i \in R \);
		\item the \( R \)-module homomorphism \( R^n \to M \) given by \( (r_1, \dots, r_n) \mapsto r_1 m_1 + \dots + r_n m_n \) is bijective, so is an isomorphism.
	\end{enumerate}
\end{proposition}
\begin{proof}
	Not all implications are shown, but they are similar to arguments found in Part IB Linear Algebra.
	We show (i) implies (ii).
	Let \( S \) generate \( M \) freely.
	Suppose \( S \) is not independent.
	Then there exist \( r_i \) such that \( \sum_{i=1}^n r_i m_i = 0 \) but not all \( r_i \) are zero.
	Let \( r_j \neq 0 \).
	Since \( S \) generates \( M \) freely, the function \( \psi \colon S \to R \) given by
	\[
		\psi(m_i) = \begin{cases}
			1 & \text{if } i = j \\
			0 & \text{otherwise}
		\end{cases}
	\]
	extends to an \( R \)-module homomorphism \( \theta \colon M \to R \).
	Then
	\[
		0 = \theta(0) = \theta\qty(\sum_{i=1}^n r_i m_i) = \sum_{i=1}^n r_i \theta(m_i) = r_j \neq 0
	\]
	This is a contradiction, so \( S \) is independent.

	To show (ii) implies (iii), it suffices to show uniqueness.
	If there exist two ways to write an element as a linear combination, consider their difference to find a contradiction from (ii).

	We can show (iii) implies (i).
	Then it remains to show (iii) and (iv) are equivalent.
\end{proof}
\begin{example}
	A non-trivial finite abelian group is not a free \( \mathbb Z \)-module.

	The set \( \qty{2,3} \) generates \( \mathbb Z \) as a \( \mathbb Z \)-module.
	This is not a free basis, since they are not independent: \( 2 \cdot 3 - 3 \cdot 2 = 0 \).
	However, it contains no subset that is a free basis.
	This is different to vector spaces, where we can always construct a basis from a subset of a spanning set.
\end{example}
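The two claims about \( \qty{2,3} \) can be made concrete (a computational aside, not part of the notes): Bézout coefficients witness generation, and an explicit relation witnesses dependence.

```python
# A sketch of the example: {2, 3} generates Z as a Z-module because
# Bezout gives r, s with 2r + 3s = 1, yet the pair is not independent.

def extended_gcd(a, b):
    """Return (g, x, y) with a*x + b*y = g = gcd(a, b)."""
    if b == 0:
        return (a, 1, 0)
    g, x, y = extended_gcd(b, a % b)
    return (g, y, x - (a // b) * y)

g, r, s = extended_gcd(2, 3)
assert g == 1 and 2 * r + 3 * s == 1   # so every n = (n*r).2 + (n*s).3

# Not independent: a relation with not-all-zero integer coefficients.
assert 3 * 2 + (-2) * 3 == 0
```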
\begin{proposition}[invariance of dimension]
	Let \( R \) be a nonzero ring.
	If \( R^m \cong R^n \) as \( R \)-modules, then \( m = n \).
\end{proposition}
\begin{proof}
	Let \( I \vartriangleleft R \), and \( M \) an \( R \)-module.
	We define \( IM = \qty{\sum a_i m_i \colon a_i \in I, m_i \in M} \).
	Since \( I \) is an ideal, we can show that \( IM \) is a submodule of \( M \).
	The quotient module \( \faktor{M}{IM} \) is an \( R \)-module, but we can also show that it is an \( \faktor{R}{I} \)-module, by defining scalar multiplication as
	\[
		(r+I) \cdot (m+IM) = (r \cdot m + IM)
	\]
	We can check that this is well-defined; this follows from the fact that for \( b \in I \), \( b \cdot (m + IM) = bm + IM \), but \( b \in I \) so \( bm \in IM \).

	Now, suppose that \( R^m \cong R^n \).
	Then let \( I \vartriangleleft R \) be a maximal ideal in \( R \).
	We can prove the existence of such an ideal under the assumption of the axiom of choice, and in particular using Zorn's lemma.
	By the above discussion, we find an isomorphism of \( \faktor{R}{I} \)-modules
	\[
		\qty(\faktor{R}{I})^m \cong \faktor{R^m}{IR^m} \cong \faktor{R^n}{IR^n} \cong \qty(\faktor{R}{I})^n
	\]
	This is an isomorphism of vector spaces over \( \faktor{R}{I} \) which is a field, since \( I \) is maximal.
	Hence, using the corresponding result from linear algebra, \( n = m \).
\end{proof}

\subsection{Row and column operations}
We will assume that \( R \) is a Euclidean domain in this subsection, and let \( \varphi \) be a Euclidean function for \( R \).
We will consider an \( m \times n \) matrix with entries in \( R \).
\begin{definition}
	The \textit{elementary row operations} on a matrix are
	\begin{enumerate}
		\item add \( \lambda \in R \) multiplied by the \( j \)th row to the \( i \)th row, where \( i \neq j \);
		\item swap the \( i \)th row and the \( j \)th row;
		\item multiply the \( i \)th row by \( u \in R^\times \).
	\end{enumerate}
	Each of these operations can be realised by left-multiplication by some \( m \times m \) matrix.
	These operations are all invertible, so their matrices are all invertible.
\end{definition}
We can define elementary column operations in an analogous way, using right-multiplication by an \( n \times n \) matrix instead.
\begin{definition}
	Two \( m \times n \) matrices \( A, B \) are \textit{equivalent} if there exists a sequence of elementary row and column operations that transforms one matrix into the other.
	If they are equivalent, then there exist invertible matrices \( P, Q \) such that \( B = QAP \).
\end{definition}
\begin{definition}
	A \( k \times k \) \textit{minor} of an \( m \times n \) matrix \( A \) is the determinant of a \( k \times k \) submatrix of \( A \), that is, of a matrix obtained from \( A \) by removing \( m-k \) rows and \( n-k \) columns.

	The \( k \)th Fitting ideal \( \mathrm{Fit}_k(A) \vartriangleleft R \) is the ideal generated by the \( k \times k \) minors of \( A \).
\end{definition}
\begin{lemma}
	The \( k \)th Fitting ideal of a matrix is invariant under elementary row and column operations.
\end{lemma}
\begin{proof}
	It suffices by symmetry to show that the elementary row operations do not change the Fitting ideal.
	For the first elementary row operation on a matrix \( A \), suppose we add \( \lambda \in R \) multiplied by the \( j \)th row to the \( i \)th row, yielding a matrix \( A' \).
	In particular, \( a_{ik} \mapsto a_{ik} + \lambda a_{jk} \) for all \( k \).
	Let \( C \) be a \( k \times k \) submatrix of \( A \) and \( C' \) the corresponding submatrix of \( A' \).

	If row \( i \) was not chosen in \( C \), then \( C \) and \( C' \) are the same matrix.
	Hence the corresponding minors are equal.
	If row \( i \) and row \( j \) were both chosen in \( C \), we have that \( C, C' \) differ by a row operation.
	Since the determinant is invariant under this elementary row operation, the corresponding minors are equal.

	If row \( i \) was chosen but row \( j \) was not chosen, by expanding the determinant along the \( i \)th row, we find
	\[
		\det C' = \det C + \lambda \det D
	\]
	where we can show that \( D \) is a \( k \times k \) submatrix of \( A \) that includes row \( j \) but not row \( i \).
	By definition, \( \det D \in \mathrm{Fit}_k(A) \) and \( \det C \in \mathrm{Fit}_k(A) \), so certainly \( \det C' \in \mathrm{Fit}_k(A) \).
	Hence \( \mathrm{Fit}_k(A') \subseteq \mathrm{Fit}_k(A) \).
	By the invertibility of the elementary row operations, \( \mathrm{Fit}_k(A') \supseteq \mathrm{Fit}_k(A) \).

	The proofs for the other elementary row operations are left as an exercise.
\end{proof}
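Over \( R = \mathbb Z \), every ideal is generated by the greatest common divisor of its generators, so \( \mathrm{Fit}_k(A) \) is the ideal generated by the gcd of all \( k \times k \) minors. The following Python sketch (with a hypothetical test matrix) checks invariance under the first elementary row operation.

```python
# Over Z, Fit_k(A) = (g) where g = gcd of all k x k minors of A.
# This checks that g is unchanged by the row operation r1 -> r1 + 7*r2.
from itertools import combinations
from math import gcd

def det(M):
    # Laplace expansion along the first row (fine for small matrices).
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def fitting_gcd(A, k):
    """gcd of all k x k minors of A (0 corresponds to the zero ideal)."""
    m, n = len(A), len(A[0])
    g = 0
    for rows in combinations(range(m), k):
        for cols in combinations(range(n), k):
            g = gcd(g, det([[A[i][j] for j in cols] for i in rows]))
    return g

A = [[2, -1, 4], [1, 2, 0], [3, 3, 5]]
A2 = [[A[0][j] + 7 * A[1][j] for j in range(3)], A[1], A[2]]
for k in (1, 2, 3):
    assert fitting_gcd(A, k) == fitting_gcd(A2, k)
```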

\subsection{Smith normal form}
\begin{theorem}
	An \( m \times n \) matrix \( A = (a_{ij}) \) over a Euclidean domain \( R \) is equivalent to a matrix of the form
	\[
		\begin{pmatrix}
			d_1                            \\
			 & \ddots                      \\
			 &        & d_t                \\
			 &        &     & 0            \\
			 &        &     &   & \ddots   \\
			 &        &     &   &        &
		\end{pmatrix};\quad d_1 \mid d_2 \mid \dots \mid d_t
	\]
	The \( d_i \) are known as \textit{invariant factors}, and they are unique up to associates.
\end{theorem}
\begin{proof}
	If \( A = 0 \), the matrix is already in Smith normal form.
	Otherwise, we can swap columns and rows such that \( a_{11} \neq 0 \).
	We will reduce \( \varphi(a_{11}) \) as much as possible until it divides every other element in the matrix, using the following algorithm.

	If \( a_{11} \nmid a_{1j} \) for some \( j \geq 2 \), write \( a_{1j} = q a_{11} + r \) where \( q, r \in R \) and \( \varphi(r) < \varphi(a_{11}) \).
	Subtracting \( q \) multiplied by column 1 from column \( j \) replaces \( a_{1j} \) with \( r \), and then swapping columns 1 and \( j \) leaves \( r \) in position \( (1,1) \).
	If \( a_{11} \nmid a_{i1} \) for some \( i \geq 2 \), we repeat the above process using row operations.
	These steps are repeated until \( a_{11} \) divides all entries of the first row and first column.
	This algorithm terminates, because the Euclidean function takes values in \( \mathbb Z_{\geq 0} \) and \( \varphi(a_{11}) \) strictly decreases in each iteration.

	Now, we can subtract multiples of the first row and column from the others to give
	\[
		A = \begin{pmatrix}
			a_{11} & 0 & \cdots & 0 \\
			0                       \\
			\vdots &   & A'         \\
			0
		\end{pmatrix}
	\]
	If \( a_{11} \nmid a_{ij} \) for \( i,j \geq 2 \), then add the \( i \)th row to the first row.
	There is now an element in the first row that \( a_{11} \) does not divide.
	We can then perform column operations as above to decrease \( \varphi(a_{11}) \).
	We will then restart the algorithm.
	After finitely many steps, this algorithm will terminate and \( a_{11} \) will divide all elements \( a_{ij} \) of the matrix.
	\[
		A = \begin{pmatrix}
			a_{11} & 0 & \cdots & 0 \\
			0                       \\
			\vdots &   & A'         \\
			0
		\end{pmatrix};\quad a_{11} = d_1,\quad d_1 \mid a_{ij}
	\]
	We can now apply the algorithm to \( A' \), since column and row operations not including the first row or column do not change whether \( a_{11} \mid a_{ij} \).

	We now demonstrate uniqueness of the invariant factors.
	Suppose \( A \) has Smith normal form with invariant factors \( d_i \) where \( d_1 \mid \dots \mid d_t \).
	Then, for all \( k \), \( \mathrm{Fit}_k(A) \) can be evaluated in Smith normal form by invariance of the Fitting ideal under row and column operations.
	Hence \( \mathrm{Fit}_k(A) = (d_1 d_2 \cdots d_k) \vartriangleleft R \).
	Thus, the product \( d_1 \cdots d_k \) depends only on \( A \), and is unique up to associates.
	Cancelling, we can see that each \( d_i \) depends only on \( A \), up to associates.
\end{proof}
\begin{example}
	Consider the matrix over \( \mathbb Z \) given by
	\[
		A = \begin{pmatrix}
			2 & -1 \\
			1 & 2
		\end{pmatrix}
	\]
	Using elementary row and column operations,
	\[
		\begin{pmatrix}
			2 & -1 \\
			1 & 2
		\end{pmatrix} \xrightarrow{c_1 \mapsto c_1 + c_2} \begin{pmatrix}
			1 & -1 \\
			3 & 2
		\end{pmatrix} \xrightarrow{c_2 \mapsto c_1 + c_2} \begin{pmatrix}
			1 & 0 \\
			3 & 5
		\end{pmatrix} \xrightarrow{r_2 \mapsto -3r_1 + r_2} \begin{pmatrix}
			1 & 0 \\
			0 & 5
		\end{pmatrix}
	\]
	This is in Smith normal form as \( 1 \mid 5 \).

	Alternatively, \( (d_1) = \mathrm{Fit}_1(A) = (2, -1, 1, 2) = (1) \), so \( d_1 = \pm 1 \).
	Further, \( (d_1 d_2) = \mathrm{Fit}_2(A) = (\det A) = (5) \), so \( d_1 d_2 = \pm 5 \) and hence \( d_2 = \pm 5 \).
\end{example}
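The reduction in this example follows the general algorithm from the proof. As a computational aside, the following Python sketch implements that algorithm for \( R = \mathbb Z \) (Euclidean function \( \varphi = \abs{\,\cdot\,} \)), returning the diagonal of an equivalent matrix in Smith normal form; it is a sketch of the proof's procedure, not a reference implementation.

```python
# Smith normal form over Z via elementary row and column operations,
# following the proof: reduce |a_11| with division-with-remainder until
# it divides everything, then recurse on the lower-right block.

def smith_diagonal(A):
    A = [row[:] for row in A]
    m, n = len(A), len(A[0])
    diag = []
    t = 0  # current pivot position
    while t < min(m, n):
        pos = [(i, j) for i in range(t, m) for j in range(t, n) if A[i][j]]
        if not pos:
            break  # the remaining block is zero
        # Move an entry of smallest |.| to position (t, t).
        i, j = min(pos, key=lambda p: abs(A[p[0]][p[1]]))
        A[t], A[i] = A[i], A[t]
        for row in A:
            row[t], row[j] = row[j], row[t]
        # Clear the pivot column and row; a nonzero remainder means
        # |pivot| can still shrink, so restart with a smaller pivot.
        dirty = False
        for i in range(t + 1, m):
            q = A[i][t] // A[t][t]
            for j in range(t, n):
                A[i][j] -= q * A[t][j]
            dirty = dirty or A[i][t] != 0
        for j in range(t + 1, n):
            q = A[t][j] // A[t][t]
            for i in range(t, m):
                A[i][j] -= q * A[i][t]
            dirty = dirty or A[t][j] != 0
        if dirty:
            continue
        # Ensure the pivot divides every remaining entry (r_t -> r_t + r_i).
        bad = next(((i, j) for i in range(t + 1, m) for j in range(t + 1, n)
                    if A[i][j] % A[t][t]), None)
        if bad:
            for j in range(t, n):
                A[t][j] += A[bad[0]][j]
            continue
        diag.append(abs(A[t][t]))
        t += 1
    return diag

assert smith_diagonal([[2, -1], [1, 2]]) == [1, 5]   # the example above
```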

\subsection{The structure theorem}
\begin{lemma}
	Let \( R \) be a Euclidean domain with Euclidean function \( \varphi \) (or, indeed, a principal ideal domain).
	Any submodule of the free module \( R^m \) is generated by at most \( m \) elements.
\end{lemma}
\begin{proof}
	Let \( N \leq R^m \).
	Consider
	\[
		I = \qty{r \in R \colon \exists r_2, \dots, r_m \in R,\, (r,r_2, \dots, r_m) \in N}
	\]
	Since \( N \) is a submodule, this is an ideal.
	Since \( R \) is a principal ideal domain, \( I = (a) \) for some \( a \in R \).
	Let \( n = (a, a_2, \dots, a_m) \in N \).
	For \( (r_1, \dots, r_m) \in N \), we have \( r_1 = ra \) for some \( r \).
	Hence \( (r_1, \dots, r_m) - rn = (0,r_2 - ra_2, \dots, r_m - ra_m) \), which lies in \( N' = N \cap \qty(\qty{0} \times R^{m-1}) \); identifying \( \qty{0} \times R^{m-1} \) with \( R^{m-1} \), we may regard \( N' \leq R^{m-1} \), and \( N = Rn + N' \).
	By induction, \( N' \) is generated by \( n_2, \dots, n_m \), hence \( (n, n_2, \dots, n_m) \) generate \( N \).
\end{proof}
\begin{theorem}
	Let \( R \) be a Euclidean domain, and \( N \leq R^m \).
	Then there is a free basis \( x_1, \dots, x_m \) for \( R^m \) such that \( N \) is generated by \( d_1 x_1, \dots, d_t x_t \) for some \( d_i \in R \) and \( t \leq m \), and such that \( d_1 \mid \dots \mid d_t \).
\end{theorem}
\begin{proof}
	By the above lemma, we have \( N = R y_1 + \dots + R y_n \) for some \( y_i \in R^m \), where \( n \leq m \).
	Each \( y_i \) belongs to \( R^m \) so we can form the \( m \times n \) matrix \( A \) which has columns \( y_i \).
	\( A \) is equivalent to a matrix \( A' \) in Smith normal form with invariant factors \( d_1 \mid \dots \mid d_t \).

	\( A' \) is obtained from \( A \) by elementary row and column operations.
	Switching row \( i \) and row \( j \) in \( A \) corresponds to reassigning the standard basis elements \( e_i \) and \( e_j \) to each other.
	Adding a multiple of row \( i \) to row \( j \) corresponds to replacing \( e_1, \dots, e_m \) with a linear combination of these basis elements which is a free basis.
	In general, each row operation simply changes the choice of free basis used for \( R^m \).
	Analogously, each column operation changes the set of generators \( y_i \) for \( N \).

	Hence, after applying these row and column operations, the free basis \( e_i \) of \( R^m \) is converted into \( x_1, \dots, x_m \), and \( N \) is generated by \( d_1 x_1, \dots, d_t x_t \).
\end{proof}
\begin{theorem}[structure theorem for finitely generated modules over Euclidean domains]
	Let \( R \) be a Euclidean domain, and \( M \) a finitely generated module over \( R \).
	Then
	\[
		M \cong \faktor{R}{(d_1)} \oplus \dots \oplus \faktor{R}{(d_t)} \oplus \underbrace{R \oplus \dots \oplus R}_{k \text{ copies}} \cong \faktor{R}{(d_1)} \oplus \dots \oplus \faktor{R}{(d_t)} \oplus R^k
	\]
	for some \( 0 \neq d_i \in R \) and \( d_1 \mid \dots \mid d_t \), and where \( k \geq 0 \).
	The \( d_i \) are called invariant factors.
\end{theorem}
\begin{proof}
	Since \( M \) is a finitely generated module, there exists a surjective \( R \)-module homomorphism \( \varphi \colon R^m \to M \) for some \( m \).
	By the first isomorphism theorem, \( M \cong \faktor{R^m}{\ker \varphi} \).
	By the previous theorem, there exists a free basis \( x_1, \dots, x_m \) for \( R^m \) such that \( \ker \varphi \leq R^m \) is generated by \( d_1 x_1, \dots, d_t x_t \) and where \( d_1 \mid \dots \mid d_t \).
	Then,
	\begin{align*}
		M & \cong \frac{\underbrace{R \oplus \dots \oplus R}_{m \text{ copies}}}{d_1 R \oplus \dots \oplus d_t R \oplus \underbrace{0 \oplus \dots \oplus 0}_{m-t \text{ copies}}} \\
		  & \cong \faktor{R}{(d_1)} \oplus \dots \oplus \faktor{R}{(d_t)} \oplus \underbrace{R \oplus \dots \oplus R}_{m-t \text{ copies}}
	\end{align*}
\end{proof}
\begin{remark}
	After deleting those \( d_i \) which are units, the invariant factors of \( M \) are unique up to associates.
	The proof is omitted.
\end{remark}
\begin{corollary}
	Let \( R \) be a Euclidean domain.
	Then any finitely generated torsion-free module is free.
\end{corollary}
\begin{proof}
	Since \( M \) is torsion-free, no summand of the form \( \faktor{R}{(d)} \) with \( d \) a nonzero non-unit can occur in the decomposition given by the structure theorem, since the image of \( 1 \) in such a summand would be a nonzero element killed by \( d \), hence a nonzero torsion element.
	Hence, by the structure theorem, \( M \cong R^m \) for some \( m \).
\end{proof}
\begin{example}
	Consider \( R = \mathbb Z \), and the abelian group \( G = \genset{a,b} \) subject to the relations \( 2a + b = 0 \) and \( -a + 2b = 0 \), so \( G \cong \faktor{\mathbb Z^2}{N} \) where \( N \) is the \( \mathbb Z \)-submodule of \( \mathbb Z^2 \) generated by \( (2,1) \) and \( (-1,2) \).
	Consider
	\[
		A = \begin{pmatrix}
			2 & -1 \\
			1 & 2
		\end{pmatrix}
	\]
	which has Smith normal form \( d_1 = 1 \) and \( d_2 = 5 \).
	Hence, by changing basis for \( \mathbb Z^2 \), we can let \( N \) be generated by \( (1,0) \) and \( (0,5) \).
	Hence,
	\[
		G \cong \faktor{\mathbb Z \oplus \mathbb Z}{\mathbb Z \oplus 5 \mathbb Z} \cong \faktor{\mathbb Z}{5\mathbb Z}
	\]
\end{example}
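As a computational aside, the invariant factors in this example can be read off from gcds of minors, as in the uniqueness argument: over \( \mathbb Z \), \( \mathrm{Fit}_k(A) = (d_1 \cdots d_k) \).

```python
# Invariant factors of the example via Fitting ideals over Z:
# d1 = gcd of the 1x1 minors, d1*d2 = |det A| (the 2x2 minor).
from math import gcd

A = [[2, -1], [1, 2]]   # columns generate N <= Z^2

d1 = gcd(gcd(A[0][0], A[0][1]), gcd(A[1][0], A[1][1]))
d1d2 = abs(A[0][0] * A[1][1] - A[0][1] * A[1][0])
d2 = d1d2 // d1

assert (d1, d2) == (1, 5)   # so G = Z^2 / N is cyclic of order 5
```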

\subsection{Primary decomposition theorem}
More generally, applying the structure theorem to \( \mathbb Z \)-modules, we obtain the structure theorem for finitely generated abelian groups:
\begin{theorem}
	Let \( G \) be a finitely generated abelian group.
	Then
	\[
		G \cong C_{d_1} \times \dots \times C_{d_t} \times \mathbb Z^r
	\]
	where \( d_1 \mid \dots \mid d_t \) in \( \mathbb Z \), and \( r \geq 0 \).
\end{theorem}
We have replaced the quotient notation \( \faktor{\mathbb Z}{n\mathbb Z} \) and \( \oplus \) with the group notation \( C_n \) and \( \times \).
The previous theorem for the structure of finite abelian groups is a special case of this theorem, where \( r = 0 \).
We have also seen that any finite abelian group can be written as a product of cyclic groups of prime power order.
This also has a generalisation for modules.
The previous result relied on the lemma \( C_{mn} \cong C_m \times C_n \) where \( m \) and \( n \) are coprime.
There is an analogous result for principal ideal domains.
\begin{lemma}
	Let \( R \) be a principal ideal domain, and \( a, b \in R \) with unit greatest common divisor.
	Then, treating these quotients as \( R \)-modules,
	\[
		\faktor{R}{(ab)} \cong \faktor{R}{(a)} \oplus \faktor{R}{(b)}
	\]
\end{lemma}
\begin{proof}
	Since \( R \) is a principal ideal domain, \( (a,b) = (d) \) for some \( d \in R \).
	The greatest common divisor of \( a, b \) is a unit, so \( d \) is a unit, giving \( (a,b) = R \).
	Hence, there exist \( r,s \in R \) such that \( ra + sb = 1 \).
	This is a generalisation of B\'ezout's theorem.

	Now, we define an \( R \)-module homomorphism \( \psi \colon R \to \faktor{R}{(a)} \oplus \faktor{R}{(b)} \) by \( \psi(x) = (x+(a), x+(b)) \).
	Then \( \psi(sb) = (sb+(a), sb+(b)) = (1-ra+(a), sb+(b)) = (1+(a), 0+(b)) \), and similarly \( \psi(ra) = (0+(a), 1+(b)) \).
	Hence, \( \psi(sbx + ray) = (x+(a), y+(b)) \), so \( \psi \) is surjective.

	Clearly we have \( (ab) \subset \ker \psi \), so it suffices to show the converse.
	If \( x \in \ker \psi \), then \( x \in (a) \) and \( x \in (b) \).
	Since \( x \in (b) \), we have \( ax \in (ab) \); since \( x \in (a) \), we have \( bx \in (ab) \).
	Hence \( x = x(ra+sb) = r(ax) + s(bx) \in (ab) \).
	Hence \( \ker \psi = (ab) \), and the result follows from the first isomorphism theorem for modules.
\end{proof}
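For \( R = \mathbb Z \) this lemma is the Chinese remainder theorem, and the proof's isomorphism and its inverse can be checked exhaustively; the following Python sketch uses the hypothetical coprime pair \( a = 8 \), \( b = 15 \).

```python
# A check of the lemma for R = Z: with gcd(a, b) = 1 and ra + sb = 1,
# psi(x) = (x mod a, x mod b) is a bijection Z/(ab) -> Z/(a) x Z/(b),
# with inverse (x, y) -> sbx + ray mod ab, as in the surjectivity step.

def extended_gcd(a, b):
    if b == 0:
        return (a, 1, 0)
    g, x, y = extended_gcd(b, a % b)
    return (g, y, x - (a // b) * y)

a, b = 8, 15
g, r, s = extended_gcd(a, b)
assert g == 1 and r * a + s * b == 1

psi = {x: (x % a, x % b) for x in range(a * b)}
# Bijectivity: psi hits every pair exactly once.
assert sorted(psi.values()) == [(x, y) for x in range(a) for y in range(b)]
# The explicit inverse from the proof.
for x in range(a):
    for y in range(b):
        assert psi[(s * b * x + r * a * y) % (a * b)] == (x, y)
```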
\begin{lemma}[primary decomposition theorem]
	Let \( R \) be a Euclidean domain and \( M \) a finitely generated \( R \)-module.
	Then
	\[
		M \cong \faktor{R}{(p_1^{n_1})} \oplus \dots \oplus \faktor{R}{(p_k^{n_k})} \oplus R^m
	\]
	where the quotients are considered as \( R \)-modules, where \( p_i \) are primes in \( R \), which are not necessarily distinct, and where \( m \geq 0 \).
\end{lemma}
\begin{proof}
	By the structure theorem,
	\[
		M \cong \faktor{R}{(d_1)} \oplus \dots \oplus \faktor{R}{(d_t)} \oplus \underbrace{R \oplus \dots \oplus R}_{m \text{ copies}} \cong \faktor{R}{(d_1)} \oplus \dots \oplus \faktor{R}{(d_t)} \oplus R^m
	\]
	where \( d_1 \mid \dots \mid d_t \).
	So it suffices to show that each \( \faktor{R}{(d_i)} \) can be written as a product of factors of the form \( \faktor{R}{(p_j^{n_j})} \).
	Since \( R \) is a unique factorisation domain and a principal ideal domain, \( d_i \) can be written as a product \( u p_1^{\alpha_1} \cdots p_r^{\alpha_r} \) where \( u \) is a unit and the \( p_j \) are pairwise non-associate primes.
	By the previous lemma,
	\[
		\faktor{R}{(d_i)} \cong \faktor{R}{(p_1^{\alpha_1})} \oplus \dots \oplus \faktor{R}{(p_r^{\alpha_r})}
	\]
\end{proof}
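For \( R = \mathbb Z \), the splitting of each \( \faktor{R}{(d_i)} \) amounts to factorising \( d_i \) into prime powers; the following Python sketch does this by trial division (adequate for a toy illustration) and records the decomposition \( \faktor{\mathbb Z}{(360)} \cong \faktor{\mathbb Z}{(8)} \oplus \faktor{\mathbb Z}{(9)} \oplus \faktor{\mathbb Z}{(5)} \).

```python
# Sketch: over R = Z the lemma splits Z/(d) into prime-power factors
# Z/(p^n), one for each prime power exactly dividing d.

def primary_factors(d):
    """Return the prime powers p^n with product d (for d > 0)."""
    factors, p = [], 2
    while p * p <= d:
        if d % p == 0:
            q = 1
            while d % p == 0:
                d //= p
                q *= p
            factors.append(q)
        p += 1
    if d > 1:
        factors.append(d)
    return factors

# Z/(360) decomposes as Z/(8) (+) Z/(9) (+) Z/(5).
assert primary_factors(360) == [8, 9, 5]
```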

\subsection{Rational canonical form}
Let \( V \) be a vector space over a field \( F \), and \( \alpha \colon V \to V \) be a linear map.
Let \( V_\alpha \) denote the \( F[X] \)-module \( V \) where scalar multiplication is defined by \( f(X) \cdot v = f(\alpha)(v) \).
\begin{lemma}
	If \( V \) is finite-dimensional as a vector space, then \( V_\alpha \) is finitely generated as an \( F[X] \)-module.
\end{lemma}
\begin{proof}
	Consider a basis \( v_1, \dots, v_n \) of \( V \), so \( v_1, \dots, v_n \) generate \( V \) as an \( F \)-vector space.
	Then, these vectors generate \( V_\alpha \) as an \( F[X] \)-module, since \( F \leq F[X] \).
\end{proof}
\begin{example}
	Suppose \( V_\alpha \cong \faktor{F[X]}{(X^n)} \) as an \( F[X] \)-module.
	Then, \( 1, X, X^2, \dots, X^{n-1} \) is a basis for \( \faktor{F[X]}{(X^n)} \) as an \( F \)-vector space.
	With respect to this basis, \( \alpha \) has the matrix form
	\begin{equation}
		\begin{pmatrix}
			0      & 0      & 0      & \cdots & 0      & 0      \\
			1      & 0      & 0      & \cdots & 0      & 0      \\
			0      & 1      & 0      & \cdots & 0      & 0      \\
			0      & 0      & 1      & \cdots & 0      & 0      \\
			\vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\
			0      & 0      & 0      & \cdots & 1      & 0
		\end{pmatrix}
		\tag{\(\ast\)}
	\end{equation}
\end{example}
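The matrix (\(\ast\)) records that \( \alpha \) sends \( X^k \mapsto X^{k+1} \) and \( X^{n-1} \mapsto 0 \) in \( \faktor{F[X]}{(X^n)} \), so it is nilpotent of index exactly \( n \). A small Python check (using a hypothetical \( n = 4 \) with integer entries):

```python
# The matrix (*) has 1s on the subdiagonal and 0s elsewhere; it shifts
# basis vectors down by one, so its n-th power vanishes but not its
# (n-1)-th.  We verify this for n = 4.

def shift_matrix(n):
    """The matrix (*): 1s on the subdiagonal, 0s elsewhere."""
    return [[int(i == j + 1) for j in range(n)] for i in range(n)]

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

n = 4
N = shift_matrix(n)
power = [[int(i == j) for j in range(n)] for i in range(n)]  # N^0 = I
for _ in range(n - 1):
    power = mat_mul(power, N)
assert any(any(row) for row in power)                  # N^(n-1) != 0
assert not any(any(row) for row in mat_mul(power, N))  # N^n = 0
```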
\begin{example}
	Suppose \( V_\alpha \cong \faktor{F[X]}{\qty((X-\lambda)^n)} \) as an \( F[X] \)-module.
	Consider the basis \( 1, X-\lambda, (X-\lambda)^2, \dots, (X-\lambda)^{n-1} \) for \( \faktor{F[X]}{\qty((X-\lambda)^n)} \) as an \( F \)-vector space.
	Here, \( \alpha - \lambda \id \) has matrix (\(\ast\)) from the previous example.
	Hence, \( \alpha \) has matrix \( (\ast) + \lambda I \).
\end{example}
\begin{example}
	Suppose \( V_\alpha \cong \faktor{F[X]}{(f)} \) as an \( F[X] \)-module, where \( f \in F[X] \) is monic.
	Let
	\[
		f(X) = X^n + a_{n-1} X^{n-1} + \dots + a_0
	\]
	With respect to basis \( 1, X, \dots, X^{n-1} \), \( \alpha \) has matrix
	\[
		C(f) =
		\begin{pmatrix}
			0      & 0      & 0      & \cdots & 0      & -a_0     \\
			1      & 0      & 0      & \cdots & 0      & -a_1     \\
			0      & 1      & 0      & \cdots & 0      & -a_2     \\
			0      & 0      & 1      & \cdots & 0      & -a_3     \\
			\vdots & \vdots & \vdots & \ddots & \vdots & \vdots   \\
			0      & 0      & 0      & \cdots & 1      & -a_{n-1}
		\end{pmatrix}
	\]
	The last column records the image of \( X^{n-1} \): since \( f \) is monic, \( X \cdot X^{n-1} = X^n \equiv -a_0 - a_1 X - \dots - a_{n-1} X^{n-1} \pmod{f} \).
	The above matrix is known as the \textit{companion matrix} of the monic polynomial.
\end{example}
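As a concrete (hypothetical) instance, take \( f(X) = X^3 - 2X + 1 \), so \( a_2 = 0 \), \( a_1 = -2 \), \( a_0 = 1 \). Then
\[
	C(f) = \begin{pmatrix}
		0 & 0 & -1 \\
		1 & 0 & 2  \\
		0 & 1 & 0
	\end{pmatrix}
\]
and indeed the last column encodes \( X \cdot X^2 = X^3 \equiv -1 + 2X \pmod{f} \).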
\begin{theorem}[Rational canonical form]
	Let \( F \) be a field, \( V \) be a finite-dimensional \( F \)-vector space, and \( \alpha \colon V \to V \) be a linear map.
	Then the \( F[X] \)-module \( V_\alpha \) decomposes as
	\[
		V_\alpha \cong \faktor{F[X]}{(f_1)} \oplus \dots \oplus \faktor{F[X]}{(f_t)}
	\]
	for some monic polynomials \( f_i \in F[X] \), and \( f_1 \mid \dots \mid f_t \).
	Moreover, with respect to a suitable basis, \( \alpha \) has matrix
	\begin{equation}
		\begin{pmatrix}
			C(f_1)                      \\
			 & C(f_2)                   \\
			 &        & \ddots          \\
			 &        &        & C(f_t)
		\end{pmatrix}
		\tag{\(\ast\ast\)}
	\end{equation}
\end{theorem}
\begin{proof}
	We know that \( V_\alpha \) is finitely generated as an \( F[X] \)-module, since \( V \) is finite-dimensional.
	Since \( F[X] \) is a Euclidean domain, the structure theorem applies, and
	\[
		V_\alpha \cong \faktor{F[X]}{(f_1)} \oplus \dots \oplus \faktor{F[X]}{(f_t)} \oplus F[X]^m
	\]
	for some \( m \), where \( f_1 \mid \dots \mid f_t \).
	Since \( V \) is finite-dimensional and \( F[X] \) is infinite-dimensional as an \( F \)-vector space, \( m = 0 \).
	As \( F \) is a field, without loss of generality we may multiply each \( f_i \) by a unit to ensure that they are monic.
	Then, using the previous example, we can construct the companion matrices for each polynomial and obtain the matrix as required.
\end{proof}
\begin{remark}
	If \( \alpha \) is represented by an \( n \times n \) matrix \( A \), there exists a change of basis matrix \( P \) such that \( PAP^{-1} \) has form (\(\ast\ast\)) as stated in the theorem, so \( A \) is similar to such a block diagonal matrix of companion matrices.
	Note further that (\(\ast\ast\)) can be used to find the minimal and characteristic polynomials of \( \alpha \); the minimal polynomial is \( f_t \), and the characteristic polynomial is \( f_1 \cdots f_t \).
	In particular, the minimal polynomial divides the characteristic polynomial, so the characteristic polynomial annihilates \( \alpha \); this is the Cayley--Hamilton theorem.
\end{remark}
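For instance, if \( V_\alpha \cong \faktor{F[X]}{(f_1)} \oplus \faktor{F[X]}{(f_2)} \) with \( f_1 = X - 1 \) and \( f_2 = (X-1)(X-2) = X^2 - 3X + 2 \), then \( f_1 \mid f_2 \), and with respect to a suitable basis \( \alpha \) has matrix
\[
	\begin{pmatrix}
		1 & 0 & 0  \\
		0 & 0 & -2 \\
		0 & 1 & 3
	\end{pmatrix}
\]
Here, the minimal polynomial is \( f_2 = (X-1)(X-2) \), and the characteristic polynomial is \( f_1 f_2 = (X-1)^2(X-2) \).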
\begin{example}
	Consider \( \dim V = 2 \).
	Here, \( \sum \deg f_i = 2 \), so there are two cases: one polynomial of degree two, or two polynomials of degree one.
	Consider \( V_\alpha \cong \faktor{F[X]}{(X-\lambda)} \oplus \faktor{F[X]}{(X-\mu)} \).
	Since one of the \( f_i \) must divide the other, we have \( \lambda = \mu \).
	If we have one polynomial of degree two, we have \( V_\alpha \cong \faktor{F[X]}{(f)} \), where \( f \) is the characteristic polynomial of \( \alpha \).
\end{example}
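Concretely, in the first case \( f_1 = f_2 = X - \lambda \), so \( \alpha = \lambda \id \) is represented by the scalar matrix \( \lambda I \). In the second case, writing \( f(X) = X^2 + aX + b \), the rational canonical form of \( \alpha \) is
\[
	C(f) = \begin{pmatrix}
		0 & -b \\
		1 & -a
	\end{pmatrix}
\]
So a \( 2 \times 2 \) matrix over \( F \) is either scalar or similar to the companion matrix of its characteristic polynomial.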
\begin{corollary}
	Let \( A, B \) be invertible \( 2 \times 2 \) non-scalar matrices over a field \( F \).
	Then \( A, B \) are similar if and only if their characteristic polynomials are equal.
\end{corollary}
\begin{proof}
	Certainly if \( A, B \) are similar they have the same characteristic polynomial, which is proven in Part IB Linear Algebra.
	Conversely, let \( \alpha, \beta \) be the linear maps represented by \( A, B \).
	Since the matrices are non-scalar, the modules \( V_\alpha, V_\beta \) cannot decompose into two degree-one factors, as that would force the matrices to be scalar; so by the previous example they are both of the form \( \faktor{F[X]}{(f)} \), where \( f \) is the common characteristic polynomial.
	Hence \( A, B \) are both similar to the companion matrix \( C(f) \), and thus similar to each other.
\end{proof}
\begin{definition}
	The \textit{annihilator} of an \( R \)-module \( M \) is
	\[
		\mathrm{Ann}_R(M) = \qty{r \in R \colon \forall m \in M,\, rm = 0} \vartriangleleft R
	\]
\end{definition}
\begin{example}
	Let \( I \vartriangleleft R \).
	Then \( \mathrm{Ann}_R\qty(\faktor{R}{I}) = I \).

	Let \( A \) be a finite abelian group.
	Then, considering \( A \) as a \( \mathbb Z \)-module, \( \mathrm{Ann}_{\mathbb Z}(A) = (e) \) where \( e \) is the \textit{exponent} of the group, which is the lowest common multiple of the orders of elements in the group.

	Let \( V_\alpha \) be as above.
	Then \( \mathrm{Ann}_{F[X]}(V_\alpha) = (f) \) where \( f \) is the minimal polynomial of \( \alpha \).
\end{example}
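As a sanity check for the abelian group example, consider \( A = \faktor{\mathbb Z}{2\mathbb Z} \times \faktor{\mathbb Z}{4\mathbb Z} \). Then \( n \cdot a = 0 \) for all \( a \in A \) if and only if \( 2 \mid n \) and \( 4 \mid n \), which holds if and only if \( 4 \mid n \). Hence \( \mathrm{Ann}_{\mathbb Z}(A) = (4) = \qty(\operatorname{lcm}(2,4)) \), even though \( \abs{A} = 8 \).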

\subsection{Jordan normal form}
Jordan normal form concerns matrix similarity over \( \mathbb C \).
The following results are therefore restricted to this particular field.
\begin{lemma}
	The primes (or equivalently, irreducibles) in \( \mathbb C[X] \) are the polynomials \( X - \lambda \) for \( \lambda \in \mathbb C \), up to associates.
\end{lemma}
\begin{proof}
	By the fundamental theorem of algebra, any non-constant polynomial with complex coefficients has a complex root.
	By the Euclidean algorithm, we can show that having a root \( \lambda \) is equivalent to having a linear factor \( X - \lambda \).
	Hence the irreducibles have degree one, and thus are \( X - \lambda \) exactly, up to associates.
\end{proof}
\begin{theorem}
	Let \( \alpha \colon V \to V \) be an endomorphism of a finite-dimensional \( \mathbb C \)-vector space \( V \).
	Let \( V_\alpha \) be the set \( V \) as a \( \mathbb C[X] \)-module, where scalar multiplication is defined by \( f\cdot v = f(\alpha)(v) \).
	Then, there exists an isomorphism of \( \mathbb C[X] \)-modules
	\[
		V_\alpha \cong \faktor{\mathbb C[X]}{\qty((X-\lambda_1)^{n_1})} \oplus \dots \oplus \faktor{\mathbb C[X]}{\qty((X-\lambda_t)^{n_t})}
	\]
	where \( \lambda_i \in \mathbb C \) are not necessarily distinct.
	In particular, there exists a basis for this vector space such that \( \alpha \) has matrix in block diagonal form
	\[
		\begin{pmatrix}
			J_{n_1}(\lambda_1)                                  \\
			 & J_{n_2}(\lambda_2)                               \\
			 &                    & \ddots                      \\
			 &                    &        & J_{n_t}(\lambda_t)
		\end{pmatrix}
	\]
	where each \textit{Jordan block} \( J_{n_i}(\lambda_i) \) is an \( n_i \times n_i \) matrix of the form
	\[
		J_{n_i}(\lambda_i) = \begin{pmatrix}
			\lambda_i & 0         & 0         & \cdots & 0         \\
			1         & \lambda_i & 0         & \cdots & 0         \\
			0         & 1         & \lambda_i & \cdots & 0         \\
			\vdots    & \vdots    & \vdots    & \ddots & \vdots    \\
			0         & 0         & 0         & \cdots & \lambda_i
		\end{pmatrix}
	\]
\end{theorem}
\begin{proof}
	Note \( \mathbb C[X] \) is a Euclidean domain using the degree function, and \( V_\alpha \) is finitely generated as a \( \mathbb C[X] \)-module.
	These are the assumptions of the primary decomposition theorem.
	Applying this, we find the module decomposition as required, noting that the primes in \( \mathbb C[X] \) are the linear polynomials.
	Note that the free factor \( \mathbb C[X] \) cannot appear in the decomposition since \( V \) is finite-dimensional.

	We have already seen that for a module \( W_\alpha \cong \faktor{F[X]}{\qty((X-\lambda)^n)} \), multiplication by \( X \) is represented by the matrix \( J_n(\lambda) \) with respect to the basis \( 1, (X-\lambda), \dots, (X-\lambda)^{n-1} \).
	Hence the result follows by concatenating these bases.
\end{proof}
\begin{remark}
	If \( \alpha \) is represented by a matrix \( A \), then \( A \) is similar to a matrix in Jordan normal form.
	This is the form of the result often used in linear algebra.

	The Jordan blocks are uniquely determined up to reordering.
	This can be proven by considering the dimensions of the \textit{generalised eigenspaces}, which are \( \ker\qty((\alpha - \lambda \id)^m) \) for some \( m \in \mathbb N \).

	The minimal polynomial of \( \alpha \) is \( \prod_{\lambda} (X-\lambda)^{c_\lambda} \) where \( c_\lambda \) is the size of the largest \( \lambda \)-block.
	The characteristic polynomial of \( \alpha \) is \( \prod_{\lambda} (X-\lambda)^{a_\lambda} \) where \( a_\lambda \) is the sum of the sizes of the \( \lambda \)-blocks.

	The number of \( \lambda \)-blocks is the dimension of the eigenspace of \( \lambda \).
\end{remark}
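As an example with hypothetical data, suppose
\[
	V_\alpha \cong \faktor{\mathbb C[X]}{\qty((X-1)^2)} \oplus \faktor{\mathbb C[X]}{\qty(X-1)} \oplus \faktor{\mathbb C[X]}{\qty(X-2)}
\]
Then the Jordan normal form of \( \alpha \) consists of the blocks \( J_2(1), J_1(1), J_1(2) \), the minimal polynomial is \( (X-1)^2(X-2) \), the characteristic polynomial is \( (X-1)^3(X-2) \), and the eigenspaces of \( 1 \) and \( 2 \) have dimensions \( 2 \) and \( 1 \) respectively.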

\subsection{Modules over principal ideal domains (non-examinable)}
The structure theorem above was proven for Euclidean domains.
This also holds for principal ideal domains.
Some of the ideas relevant to this proof are illustrated in this subsection.
\begin{theorem}
	Let \( R \) be a principal ideal domain.
	Then any finitely generated torsion-free \( R \)-module is free.
\end{theorem}
If \( R \) is a Euclidean domain, this was proven as a corollary to the structure theorem.
\begin{lemma}
	Let \( R \) be a principal ideal domain and \( M \) be an \( R \)-module.
	Let \( r_1, r_2 \in R \) be not both zero, and let \( d \) be their greatest common divisor.
	Then,
	\begin{enumerate}
		\item there exists \( A \in SL_2(R) \) such that
		      \[
			      A \begin{pmatrix}
				      r_1 \\
				      r_2
			      \end{pmatrix} = \begin{pmatrix}
				      d \\
				      0
			      \end{pmatrix}
		      \]
		\item if \( x_1, x_2 \in M \), then there exist \( x_1', x_2' \in M \) such that \( Rx_1 + Rx_2 = Rx_1' + Rx_2' \), and \( r_1 x_1 + r_2 x_2  = d x_1' + 0 \cdot x_2' \).
	\end{enumerate}
\end{lemma}
\begin{proof}
	Since \( R \) is a principal ideal domain, \( (r_1, r_2) = (d) \).
	Hence, by definition, \( d = \alpha r_1 + \beta r_2 \) for some \( \alpha, \beta \in R \).
	Let \( r_1 = s_1 d \) and \( r_2 = s_2 d \).
	Then \( \alpha s_1 + \beta s_2 = 1 \).
	Now, let
	\[
		A = \begin{pmatrix}
			\alpha & \beta \\
			-s_2   & s_1
		\end{pmatrix} \implies \det A = 1;\quad A \begin{pmatrix}
			r_1 \\
			r_2
		\end{pmatrix} = \begin{pmatrix}
			d \\
			0
		\end{pmatrix}
	\]
	as required.

	For the second part, let \( x_1' = s_1 x_1 + s_2 x_2 \) and \( x_2' = -\beta x_1 + \alpha x_2 \).
	Then \( Rx_1' + Rx_2' \subseteq Rx_1 + Rx_2 \).
	The matrix defining \( x_1', x_2' \) in terms of \( x_1, x_2 \) is invertible since its determinant is a unit; we can solve for \( x_1, x_2 \) in terms of \( x_1', x_2' \).
	So \( Rx_1' + Rx_2' = Rx_1 + Rx_2 \).
	Then by direct computation we can see that \( r_1 x_1 + r_2 x_2 = d x_1' + 0 \cdot x_2' \).
\end{proof}
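To illustrate the lemma, take \( R = \mathbb Z \), \( r_1 = 4 \), \( r_2 = 6 \), so \( d = 2 \). We can write \( 2 = (-1) \cdot 4 + 1 \cdot 6 \), giving \( \alpha = -1 \), \( \beta = 1 \), and \( s_1 = 2 \), \( s_2 = 3 \). Then
\[
	A = \begin{pmatrix}
		-1 & 1 \\
		-3 & 2
	\end{pmatrix};\quad \det A = 1;\quad A \begin{pmatrix}
		4 \\
		6
	\end{pmatrix} = \begin{pmatrix}
		2 \\
		0
	\end{pmatrix}
\]
as the lemma predicts.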
The structure theorem for principal ideal domains follows by the same method as in the Euclidean case: it is deduced from Smith normal form, which also holds over principal ideal domains, and the lemma above is the key step in proving this.
In a Euclidean domain, we used the Euclidean function as a notion of size in order to perform induction; in a principal ideal domain, we can instead count the irreducibles in a factorisation.
\begin{proof}[Proof of theorem]
	Let \( M = Rx_1 + \dots + Rx_n \) where \( n \) is minimal.
	If \( x_1, \dots, x_n \) are independent, then \( M \) is free as required.
	Suppose that the \( x_i \) are not independent, so there exist \( r_1, \dots, r_n \in R \), not all zero, such that \( \sum r_i x_i = 0 \).
	By reordering, we can suppose that \( r_1 \neq 0 \).
	By using part (ii) of the previous lemma, after replacing \( x_1 \) and \( x_2 \) by suitable \( x_1', x_2' \), we may assume that \( r_1 \neq 0 \) and \( r_2 = 0 \).
	By repeating this process with the pairs \( x_1, x_i \) for all \( i \geq 2 \), we obtain generators \( x_1'', \dots, x_n'' \) of \( M \) together with a relation \( r_1 x_1'' = 0 \), where \( r_1 \neq 0 \).
	Since \( M \) is torsion-free, \( x_1'' = 0 \), so \( M \) is generated by the \( n - 1 \) elements \( x_2'', \dots, x_n'' \), contradicting the minimality of \( n \).
\end{proof}
