diff --git "a/wiki/proofwiki/shard_9.txt" "b/wiki/proofwiki/shard_9.txt"
new file mode 100644
--- /dev/null
+++ "b/wiki/proofwiki/shard_9.txt"
@@ -0,0 +1,13681 @@
+\section{Generated Sigma-Algebra by Generated Monotone Class}
+Tags: Sigma-Algebras, Monotone Classes
+
+\begin{theorem}
+Let $X$ be a [[Definition:Set|set]], and let $\mathcal G \subseteq \mathcal P \left({X}\right)$ be a [[Definition:Empty Set|nonempty]] collection of [[Definition:Subset|subsets]] of $X$.
+Suppose that $\mathcal G$ satisfies the following condition:
+:$(1):\quad A \in \mathcal G \implies \complement_X \left({A}\right) \in \mathcal G$
+that is, $\mathcal G$ is closed under [[Definition:Relative Complement|complement in $X$]].
+Then:
+:$\mathfrak m \left({\mathcal G}\right) = \sigma \left({\mathcal G}\right)$
+where $\mathfrak m$ denotes [[Definition:Generated Monotone Class|generated monotone class]], and $\sigma$ denotes [[Definition:Sigma-Algebra Generated by Collection of Subsets|generated $\sigma$-algebra]].
+\end{theorem}

+\begin{proof}
+By [[Sigma-Algebra is Monotone Class]], and the definition of [[Definition:Generated Monotone Class|generated monotone class]], it follows that:
+:$\mathfrak m \left({\mathcal G}\right) \subseteq \sigma \left({\mathcal G}\right)$
+Next, define $\Sigma$ by:
+:$\Sigma := \left\{{M \in \mathfrak m \left({\mathcal G}\right): X \setminus M \in \mathfrak m \left({\mathcal G}\right)}\right\}$
+By $(1)$, it follows that $\mathcal G \subseteq \Sigma$.
+Next, we will show that $\Sigma$ is a [[Definition:Sigma-Algebra|$\sigma$-algebra]].
+To this end, let $G \in \mathcal G$ be arbitrary.
+By $(1)$, also $X \setminus G \in \mathcal G$.
+Hence from the definition of [[Definition:Monotone Class|monotone class]]: +:$\varnothing = G \cap \left({X \setminus G}\right) \in \mathfrak m \left({\mathcal G}\right)$ +:$X = G \cup \left({X \setminus G}\right) \in \mathfrak m \left({\mathcal G}\right)$ +by virtue of [[Set Difference Intersection with Second Set is Empty Set]] and [[Set Difference and Intersection form Partition]]. +Since $\varnothing = X \setminus X$, it follows that: +:$X, \varnothing \in \Sigma$ +Further, from [[Set Difference with Set Difference]], it follows that: +:$E \in \Sigma \implies X \setminus E \in \Sigma$ +Lastly, for any [[Definition:Sequence|sequence]] $\left({E_n}\right)_{n \in \N}$ in $\Sigma$, the definition of [[Definition:Monotone Class|monotone class]] implies that: +:$\displaystyle \bigcup_{n \mathop \in \N} E_n \in \mathfrak m \left({\mathcal G}\right)$ +Now to ensure that it is in fact in $\Sigma$, compute: +{{begin-eqn}} +{{eqn|l = X \setminus \left({\bigcup_{n \mathop \in \N} E_n}\right) + |r = \bigcap_{n \mathop \in \N} \left({X \setminus E_n}\right) + |c = [[De Morgan's Laws (Set Theory)/Set Difference/General Case/Difference with Union|De Morgan's Laws: Difference with Union]] +}} +{{end-eqn}} +All of the $X \setminus E_n$ are in $\mathfrak m \left({\mathcal G}\right)$ as each $E_n$ is in $\Sigma$. +Hence, from the definition of [[Definition:Monotone Class|monotone class]], we conclude: +:$\displaystyle \bigcap_{n \mathop \in \N} \left({X \setminus E_n}\right) \in \mathfrak m \left({\mathcal G}\right)$ +which subsequently implies that: +:$\displaystyle \bigcup_{n \mathop \in \N} E_n \in \Sigma$ +Thus, having verified all three axioms, $\Sigma$ is a [[Definition:Sigma-Algebra|$\sigma$-algebra]]. 
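The verification just completed can be exercised on a small finite universe. There, every countable union or intersection of a sequence of sets collapses to a finite one, so closing under pairwise unions and intersections computes the generated monotone class (closure under countable unions and intersections, as used in this proof), and adding complements yields the generated $\sigma$-algebra. The following sketch, with names of our own choosing, checks the equality asserted by the theorem for one complement-closed collection:

```python
def close_under(ops, collection):
    """Smallest superset of `collection` closed under the given operations.

    On a finite universe every countable union or intersection of a
    sequence of sets reduces to a finite one, so closure under pairwise
    unions and intersections computes the generated monotone class (in
    the sense of closure under countable unions and intersections),
    while adding complementation as well computes the generated
    sigma-algebra.
    """
    sets = set(collection)
    while True:
        new = {op(a, b) for op in ops for a in sets for b in sets} - sets
        if not new:
            return sets
        sets |= new

X = frozenset({1, 2, 3, 4})
# G is closed under complement in X, as condition (1) requires
G = {frozenset({1}), frozenset({2, 3, 4}), frozenset({1, 2}), frozenset({3, 4})}

union = lambda a, b: a | b
inter = lambda a, b: a & b
comp = lambda a, b: X - a  # second argument ignored: closure adds complements

monotone_class = close_under([union, inter], G)
sigma_algebra = close_under([union, inter, comp], G)

assert monotone_class == sigma_algebra  # the theorem, on this tiny example
```

Running this for other complement-closed collections on small universes gives the same agreement; dropping the complement-closure of $\mathcal G$ would be expected to break it, as hypothesis $(1)$ suggests.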
+Since $\mathcal G \subseteq \Sigma$, this means, by definition of [[Definition:Sigma-Algebra Generated by Collection of Subsets|generated $\sigma$-algebra]], that: +:$\sigma \left({\mathcal G}\right) \subseteq \Sigma \subseteq \mathfrak m \left({\mathcal G}\right)$ +By definition of [[Definition:Set Equality/Definition 2|set equality]]: +:$\mathfrak m \left({\mathcal G}\right) = \sigma \left({\mathcal G}\right)$ +{{qed}} +\end{proof}<|endoftext|> +\section{Totally Bounded Metric Space is Second-Countable} +Tags: Metric Spaces, Second-Countable Spaces, Totally Bounded Metric Spaces, Totally Bounded Metric Space is Second-Countable + +\begin{theorem} +Let $M = \struct {A, d}$ be a [[Definition:Metric Space|metric space]] which is [[Definition:Totally Bounded Metric Space|totally bounded]]. +Then $M$ is [[Definition:Second-Countable Space|second-countable]]. +\end{theorem} + +\begin{proof} +Let $M = \struct {A, d}$ be [[Definition:Totally Bounded Metric Space|totally bounded]]. +Let $\epsilon = 1, \dfrac 1 2, \dfrac 1 3, \ldots$ +As $M$ is [[Definition:Totally Bounded Metric Space|totally bounded]], for each $\epsilon$ there exists a [[Definition:Finite Net|finite $\epsilon$-net]] $\CC$ for $M$. +From [[Net forms Basis for Metric Space]], $\CC$ is a [[Definition:Countable Basis|countable basis]] for $M$. +That is, $M$ is [[Definition:Second-Countable Space|second-countable]]. +{{qed}} +\end{proof} + +\begin{proof} +Follows directly from: +: [[Totally Bounded Metric Space is Separable]] +: [[Separable Metric Space is Second-Countable]] +{{qed}} +\end{proof}<|endoftext|> +\section{Heine-Borel Theorem/Metric Space} +Tags: Totally Bounded Metric Spaces, Complete Metric Spaces, Compact Spaces + +\begin{theorem} +A [[Definition:Metric Space|metric space]] is [[Definition:Compact Metric Space|compact]] {{iff}} it is both [[Definition:Complete Metric Space|complete]] and [[Definition:Totally Bounded Metric Space|totally bounded]]. 
+\end{theorem}
+
+\begin{proof}
+=== Necessary Condition ===
+This follows directly from:
+* [[Compact Metric Space is Complete]]
+* [[Compact Metric Space is Totally Bounded]]
+{{qed|lemma}}
+=== Sufficient Condition ===
+This follows directly from:
+* [[Complete and Totally Bounded Metric Space is Sequentially Compact]]
+* [[Sequentially Compact Metric Space is Compact]]
+{{qed}}
+{{ACC|Complete and Totally Bounded Metric Space is Sequentially Compact}}
+{{namedfor|Heinrich Eduard Heine|name2=Émile Borel}}
+\end{proof}<|endoftext|>
+\section{Borel Sigma-Algebra on Euclidean Space by Monotone Class}
+Tags: Sigma-Algebras, Monotone Classes
+
+\begin{theorem}
+Let $\left({\R^n, \tau}\right)$ be the $n$-dimensional [[Definition:Euclidean Space|Euclidean space]].
+Then:
+:$\mathcal B \left({\R^n, \tau}\right) = \mathfrak m \left({\tau}\right)$
+where $\mathcal B$ denotes [[Definition:Borel Sigma-Algebra|Borel $\sigma$-algebra]], and $\mathfrak m$ denotes [[Definition:Generated Monotone Class|generated monotone class]].
+\end{theorem}
+
+\begin{proof}
+In the following, write $X := \R^n$, and let $d$ denote the [[Definition:Euclidean Metric|Euclidean metric]] on $X$.
+Let $U \in \tau$ be an [[Definition:Open Set (Topology)|open set]], and define $C$ by:
+:$C := X \setminus U$
+hence $C$ is a [[Definition:Closed Set (Topology)|closed set]].
+Further, define, for all $n \in \N$:
+:$C_n := \displaystyle \bigcup_{c \mathop \in C} B \left({c; \frac 1 n}\right)$
+where $B$ denotes [[Definition:Open Ball|open ball]].
+The $C_n$ are [[Definition:Open Set (Topology)|open sets]], being the [[Definition:Set Union|union]] of [[Definition:Open Ball|open balls]].
+It is clear that $C \subseteq C_n$ for all $n \in \N$.
+Conversely, as $U$ is open, for any $u \in U$ (that is, $u \notin C$), there exists $n \in \N$ such that:
+:$B \left({u; \dfrac 1 n}\right) \subseteq U$
+by the definition of [[Definition:Open Set (Metric Space)|open set in a metric space]].
+Thus, for all $c \in C = X \setminus U$, this means:
+:$d \left({u, c}\right) \ge \dfrac 1 n$
+whence $u \notin C_n$.
+That is, we have established that: +:$c \in C \iff \forall n \in \N: c \in C_n$ +Phrased in terms of [[Definition:Set Intersection|intersection]], this means: +:$C = \displaystyle \bigcap_{n \mathop \in \N} C_n$ +Thus, since $C_n \in \tau \subseteq \mathfrak m \left({\tau}\right)$: +:$C \in \mathfrak m \left({\tau}\right)$ +Now define: +:$\complement_X \left({\tau}\right) := \left\{{X \setminus U: U \in \tau}\right\}$ +Then we have shown: +:$\mathfrak m \left({\tau \cup \complement_X \left({\tau}\right)}\right) \subseteq \mathfrak m \left({\tau}\right)$ +and the reverse inclusion (and hence equality) follows from [[Generated Monotone Class Preserves Subset]]. +Now applying [[Generated Sigma-Algebra by Generated Monotone Class/Corollary|Generated Sigma-Algebra by Generated Monotone Class: Corollary]]: +:$\sigma \left({\tau}\right) = \mathfrak m \left({\tau \cup \complement_X \left({\tau}\right)}\right)$ +Combining these two equalities gives the result, by definition of [[Definition:Borel Sigma-Algebra|Borel $\sigma$-algebra]]. +{{qed}} +\end{proof}<|endoftext|> +\section{Measure is Strongly Additive} +Tags: Measure Theory + +\begin{theorem} +Let $\left({X, \Sigma, \mu}\right)$ be a [[Definition:Measure Space|measure space]]. +Then $\mu$ is [[Definition:Strongly Additive Function|strongly additive]], that is: +: $\forall E, F \in \Sigma: \mu \left({E \cap F}\right) + \mu \left({E \cup F}\right) = \mu \left({E}\right) + \mu \left({F}\right)$ +\end{theorem} + +\begin{proof} +Combine [[Measure is Finitely Additive Function]] with [[Additive Function is Strongly Additive]]. +{{qed}} +\end{proof}<|endoftext|> +\section{Measure is Subadditive} +Tags: Measure Theory + +\begin{theorem} +Let $\left({X, \Sigma, \mu}\right)$ be a [[Definition:Measure Space|measure space]]. 
+Then $\mu$ is [[Definition:Subadditive Function (Measure Theory)|subadditive]], that is: +:$\forall E, F \in \Sigma: \mu \left({E \cup F}\right) \le \mu \left({E}\right) + \mu \left({F}\right)$ +\end{theorem} + +\begin{proof} +A [[Definition:Measure (Measure Theory)|measure]] is an [[Definition:Additive Function (Measure Theory)|additive function]], and, by definition, nowhere negative. +So [[Additive Nowhere Negative Function is Subadditive]] applies. +Hence the result directly: +:$\mu \left({E \cup F}\right) \le \mu \left({E}\right) + \mu \left({F}\right)$ +{{qed}} +\end{proof}<|endoftext|> +\section{Representation of Degree One is Irreducible} +Tags: Representation Theory + +\begin{theorem} +Let $\left({G, \cdot}\right)$ be a [[Definition:Finite Group|finite group]]. +Let $\rho: G \to \operatorname{GL} \left({V}\right)$ be a [[Definition:Linear Representation|linear representation]] of $G$ on $V$ of [[Definition:Dimension (Representation Theory)|degree]] $1$. +Then $\rho$ is an [[Definition:Irreducible Linear Representation|irreducible linear representation]]. +\end{theorem} + +\begin{proof} +By the definition of [[Definition:Dimension (Representation Theory)|degree]] of a [[Definition:Linear Representation|linear representation]], it is known that $\dim \left({V}\right) = 1$. +Let $W$ be a [[Definition:Proper Vector Subspace|proper vector subspace]] of $V$. +It follows from [[Dimension of Proper Subspace is Less Than its Superspace]] that: +:$\dim \left({W}\right) < 1$ +and hence $\dim \left({W}\right) = 0$. +Now from [[Trivial Vector Space iff Zero Dimension]], it follows that: +:$W = \left\{{\mathbf 0}\right\}$ +But this is not a non-[[Definition:Trivial Subspace|trivial]] [[Definition:Proper Vector Subspace|proper subspace]] of $V$. +Thus $V$ has no non-[[Definition:Trivial Subspace|trivial]] [[Definition:Proper Vector Subspace|proper vector subspaces]]. 
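Before concluding, a concrete instance may help (the group, field and representation below are our own illustrative choices, not part of the theorem): a degree $1$ representation is just a homomorphism into the multiplicative group of nonzero scalars, and a $1$-dimensional space simply offers no candidate for a non-trivial proper invariant subspace.

```python
import cmath

# rho(k) = exp(2*pi*i*k/3): a degree 1 representation of the cyclic
# group Z/3 on the 1-dimensional space C, i.e. a homomorphism into
# GL_1(C), the nonzero complex numbers.
def rho(k):
    return cmath.exp(2j * cmath.pi * (k % 3) / 3)

# Homomorphism property: rho(a) * rho(b) == rho(a + b)
for a in range(3):
    for b in range(3):
        assert cmath.isclose(rho(a) * rho(b), rho(a + b))

# In a 1-dimensional space the only subspaces are {0} and the whole
# space, so there is no non-trivial proper subspace to witness
# reducibility -- exactly the argument of the surrounding proof.
```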
+Hence, by definition, $\rho$ is an [[Definition:Irreducible Linear Representation|irreducible linear representation]].
+{{qed}}
+[[Category:Representation Theory]]
+\end{proof}<|endoftext|>
+\section{Irreducible Representations of Abelian Group}
+Tags: Representation Theory
+
+\begin{theorem}
+Let $\left({G, \cdot}\right)$ be a [[Definition:Finite Group|finite]] [[Definition:Abelian Group|abelian group]].
+Let $V$ be a non-[[Definition:Null Module|null]] [[Definition:Vector Space|vector space]] over an [[Definition:Algebraically Closed Field|algebraically closed field]] $k$.
+Let $\rho: G \to \operatorname{GL} \left({V}\right)$ be a [[Definition:Linear Representation|linear representation]].
+Then $\rho$ is [[Definition:Irreducible Linear Representation|irreducible]] [[Definition:Iff|iff]] $\dim \left({V}\right) = 1$, where $\dim$ denotes [[Definition:Dimension (Linear Algebra)|dimension]].
+\end{theorem}
+
+\begin{proof}
+=== Sufficient Condition ===
+Suppose that $\dim \left({V}\right) = 1$.
+That $\rho$ is [[Definition:Irreducible Linear Representation|irreducible]] is shown in [[Representation of Degree One is Irreducible]].
+{{qed|lemma}}
+=== Necessary Condition ===
+Suppose that $\rho$ is an [[Definition:Irreducible Linear Representation|irreducible linear representation]].
+Let $g \in G$ be arbitrary.
+Now, for all $h \in G$, we have:
+{{begin-eqn}}
+{{eqn|l = \rho \left({g}\right) \rho \left({h}\right)
+ |r = \rho \left({g h}\right)
+ |c = $\rho$ is a [[Definition:Group Homomorphism|group homomorphism]]
+}}
+{{eqn|r = \rho \left({h g}\right)
+ |c = $G$ is an [[Definition:Abelian Group|abelian group]]
+}}
+{{eqn|r = \rho \left({h}\right) \rho \left({g}\right)
+ |c = $\rho$ is a [[Definition:Group Homomorphism|group homomorphism]]
+}}
+{{end-eqn}}
+Now, combining [[Commutative Linear Transformation is G-Module Homomorphism]] and [[Schur's Lemma (Representation Theory)/Corollary|Corollary to Schur's Lemma (Representation Theory)]] yields that:
+:$\exists \lambda_g \in k: \rho \left({g}\right) = \lambda_g \operatorname{Id}_V$
+That is, there is a $\lambda_g \in k$ such that $\rho \left({g}\right)$ is the [[Definition:Linear Transformation|linear mapping]] of multiplying by $\lambda_g$.
+Hence, for all $v \in V$, $\rho \left({g}\right) \left({v}\right) = \lambda_g v$.
+It follows that any [[Definition:Vector Subspace|vector subspace]] of $V$ of [[Definition:Dimension (Linear Algebra)|dimension]] $1$ is [[Definition:Invariant Subspace|invariant]].
+So, had $V$ any [[Definition:Proper Vector Subspace|proper vector subspace]] of [[Definition:Dimension (Linear Algebra)|dimension]] $1$, $\rho$ would not be [[Definition:Irreducible Linear Representation|irreducible]].
+If $\dim \left({V}\right) > 1$, then the span of any nonzero vector of $V$ would be such a subspace.
+Since $V$ is non-[[Definition:Null Module|null]], it follows from [[Trivial Vector Space iff Zero Dimension]] that $\dim \left({V}\right) > 0$.
+Hence necessarily $\dim \left({V}\right) = 1$.
+{{qed}}
+[[Category:Representation Theory]]
+\end{proof}<|endoftext|>
+\section{Schur's Lemma (Representation Theory)}
+Tags: Representation Theory
+
+\begin{theorem}
+Let $\left({G, \cdot}\right)$ be a [[Definition:Finite Group|finite group]].
+Let $V$ and $V'$ be two [[Definition:Irreducible Linear Representation|irreducible]] [[Definition:G-Module|$G$-modules]].
+Let $f: V \to V'$ be a [[Definition:G-Module Homomorphism|homomorphism of $G$-modules]].
+Then either:
+:$f \left({v}\right) = 0$ for all $v \in V$
+or:
+: $f$ is an [[Definition:Module Isomorphism|isomorphism]].
+\end{theorem}
+
+\begin{proof}
+From [[Kernel is G-Module]], $\ker \left({f}\right)$ is a $G$-submodule of $V$.
+From [[Image is G-Module]], $\operatorname{Im} \left({f}\right)$ is a $G$-submodule of $V'$.
+By the definition of [[Definition:Reducible Linear Representation|irreducible]]:
+:$\ker \left({f}\right) = \left\{{0}\right\}$
+or:
+:$\ker \left({f}\right) = V$
+{{explain|Link to a result which shows this. While it does indeed follow from the definition, it would be useful to have a page directly demonstrating this.}}
+If $\ker \left({f}\right) = V$ then by definition:
+:$f \left({v}\right) = 0$ for all $v \in V$
+Let $\ker \left({f}\right) = \left\{{0}\right\}$.
+Then from [[Linear Transformation is Injective iff Kernel Contains Only Zero]]:
+:$f$ is [[Definition:Injection|injective]].
+{{explain|Establish whether the above result (which discusses linear transformations on $R$-modules, not $G$ modules) can be directly applied. If so, amend its wording so as to make this clear.}}
+It also follows that:
+:$\operatorname{Im} \left({f}\right) = V'$
+Indeed, $\operatorname{Im} \left({f}\right)$ is a $G$-submodule of the [[Definition:Irreducible Linear Representation|irreducible]] $G$-module $V'$, so either $\operatorname{Im} \left({f}\right) = \left\{{0}\right\}$ or $\operatorname{Im} \left({f}\right) = V'$.
+The former would mean $f \left({v}\right) = 0$ for all $v \in V$, which contradicts $\ker \left({f}\right) = \left\{{0}\right\}$ as $V$ is non-[[Definition:Null Module|null]].
+Thus $f$ is [[Definition:Surjection|surjective]] and [[Definition:Injection|injective]].
+Thus by definition $f$ is a [[Definition:Bijection|bijection]] and thence an [[Definition:Module Isomorphism|isomorphism]].
+{{qed}}
+{{namedfor|Issai Schur|cat = Schur}}
+[[Category:Representation Theory]]
+\end{proof}<|endoftext|>
+\section{G-Module is Irreducible iff no Non-Trivial Proper Submodules}
+Tags: Representation Theory
+
+\begin{theorem}
+Let $\left({G, \circ}\right)$ be a [[Definition:Finite Group|finite group]].
+Let $\left({V, \phi}\right)$ be a [[Definition:G-Module|$G$-module]].
+Then $V$ is an [[Definition:Irreducible G-Module|irreducible $G$-module]] {{iff}} $V$ has no [[Definition:Trivial G-Module|non-trivial]] [[Definition:Proper G-Submodule|proper $G$-submodules]].
+\end{theorem}
+
+\begin{proof}
+=== Necessary Condition ===
+Assume that $V$ is an [[Definition:Irreducible G-Module|irreducible]] [[Definition:G-Module|$G$-module]], but that it has a [[Definition:Trivial G-Module|non-trivial]] [[Definition:Proper G-Submodule|proper $G$-submodule]].
+By the definition of [[Definition:Irreducible Linear Representation|irreducible]], its associated [[Definition:Linear Representation|representation]] is [[Definition:Irreducible Linear Representation|irreducible]].
+Let this [[Definition:Linear Representation|representation]] be denoted $\tilde \phi = \rho: G \to \operatorname{GL} \left({V}\right)$.
+In [[Correspondence between Linear Group Actions and Linear Representations]] it is defined as:
+:$\rho \left({g}\right) \left({v}\right) = \phi \left({g, v}\right)$
+where $g \in G$ and $v \in V$.
+Since $V$ has a non-trivial [[Definition:Proper Subset|proper]] [[Definition:G-Submodule|$G$-submodule]], there exists a non-trivial [[Definition:Proper Subset|proper]] [[Definition:Vector Subspace|vector subspace]] $W$ such that $\phi \left({G, W}\right) \subseteq W$, and so $\rho \left({G}\right) W \subseteq W$.
+Hence $W$ is [[Definition:Invariant Subspace|invariant]] under every [[Definition:Linear Transformation|linear operator]] in $\left\{ {\rho \left({g}\right): g \in G}\right\}$.
+By definition, $\rho$ cannot be [[Definition:Irreducible Linear Representation|irreducible]].
+Thus we have reached a contradiction, and so $V$ has no non-trivial [[Definition:Proper G-Submodule|proper $G$-submodules]].
+{{qed|lemma}}
+=== Sufficient Condition ===
+Assume now that $V$ has no non-trivial [[Definition:Proper G-Submodule|proper $G$-submodules]], but that it is a [[Definition:Reducible G-Module|reducible $G$-module]].
+By the definition of [[Definition:Reducible G-Module|reducible $G$-module]], it follows that its associated [[Definition:Linear Representation|representation]] is [[Definition:Reducible Linear Representation|reducible]].
+Let this [[Definition:Linear Representation|representation]] be denoted $\tilde \phi = \rho: G \to \operatorname{GL} \left({V}\right)$.
+From the definition of [[Definition:Reducible Linear Representation|reducible representation]], it follows that there exists a non-trivial [[Definition:Proper Subset|proper]] [[Definition:Vector Subspace|vector subspace]] $W$ of $V$ which is [[Definition:Invariant Subspace|invariant]] under all the [[Definition:Linear Transformation|linear operators]] in $\left\{ {\rho \left({g}\right): g \in G}\right\}$.
+Then:
+:$\phi \left({G, W}\right) = \rho \left({G}\right) W \subseteq W$
+which is the definition of a [[Definition:G-Submodule|$G$-submodule]] of $V$.
+By our assumption, $V$ has no non-trivial [[Definition:Proper G-Submodule|proper $G$-submodules]].
+Thus we have reached a contradiction, and $V$ must then be an [[Definition:Irreducible G-Module|irreducible $G$-module]].
+{{qed}}
+[[Category:Representation Theory]]
+\end{proof}<|endoftext|>
+\section{Kernel is G-Module}
+Tags: Representation Theory
+
+\begin{theorem}
+Let $\struct {G, \cdot}$ be a [[Definition:Group|group]].
+Let $f: \struct {V, \phi} \to \struct {V', \mu}$ be a [[Definition:G-Module Homomorphism|homomorphism of $G$-modules]].
+Then its [[Definition:Kernel of Linear Transformation|kernel]] $\map \ker f$ is a [[Definition:G-Submodule|$G$-submodule]] of $V$.
+\end{theorem}
+
+\begin{proof}
+From [[G-Submodule Test]] it suffices to prove that $\phi \sqbrk {\struct {G, \map \ker f} } \subseteq \map \ker f$.
+That is, it is to be shown that, if $g \in G$ and $v \in \map \ker f$, then $\map \phi {g, v} \in \map \ker f$.
+Assume that $g \in G$ and $v \in \map \ker f$.
+{{begin-eqn}}
+{{eqn | l = \map f {\map \phi {g, v} }
+ | r = \map \mu {g, \map f v}
+ | c = $f$ is a [[Definition:G-Module Homomorphism|$G$-module homomorphism]]
+}}
+{{eqn | r = \map \mu {g, 0}
+ | c = $v \in \map \ker f$
+}}
+{{eqn | r = 0
+ | c = $\mu$ is a [[Definition:Linear Group Action|linear action]]
+}}
+{{end-eqn}}
+Thus $\map \phi {g, v} \in \map \ker f$.
+Hence $\map \ker f$ is a [[Definition:G-Submodule|$G$-submodule]] of $V$.
+{{qed}}
+[[Category:Representation Theory]]
+\end{proof}<|endoftext|>
+\section{Image is G-Module}
+Tags: Representation Theory
+
+\begin{theorem}
+Let $\left({G, \cdot}\right)$ be a [[Definition:Group|group]] and let $f: \left({V, \phi}\right) \to \left({V', \mu}\right)$ be a [[Definition:G-Module Homomorphism|homomorphism of $G$-modules]].
+Then $\operatorname{Im} \left({f}\right)$ is a [[Definition:G-Submodule|$G$-submodule]] of $V'$.
+\end{theorem}
+
+\begin{proof}
+From [[G-Submodule Test]] it suffices to prove that $\mu \left({G, \operatorname{Im} \left({f}\right) }\right) \subseteq \operatorname{Im} \left({f}\right)$.
+In other words: for any $g \in G$ and $w \in \operatorname{Im} \left({f}\right)$, it is to be shown that $\mu \left({g, w}\right) \in \operatorname{Im} \left({f}\right)$.
+Assume that $g \in G$ and $w \in \operatorname{Im} \left({f}\right)$.
+Then there exists a $v \in V$ such that $f \left({v}\right) = w$.
+By definition of [[Definition:G-Module Homomorphism|homomorphism]], we have:
+:$\mu \left({g, w}\right) = \mu \left({g, f \left({v}\right) }\right) = f \left({\phi \left({g, v}\right) }\right)$
+Hence, for all $g \in G$ and $w \in \operatorname{Im} \left({f}\right)$, $\mu \left({g, w}\right) \in \operatorname{Im} \left({f}\right)$.
+By [[G-Submodule Test]], it follows that $\operatorname{Im} \left({f}\right)$ is a [[Definition:G-Submodule|$G$-submodule]] of $V'$.
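A toy example may make [[Kernel is G-Module]] and [[Image is G-Module]] concrete (the action and the map below are our own illustrative choices): let $G = \Z / 2$ act on $\R^2$ by swapping coordinates, and let $f$ be the averaging map, which commutes with the action; its kernel and image are then each carried into themselves by the action.

```python
# G = Z/2 acts on R^2 by swapping coordinates; the averaging map f
# commutes with that action, making it a G-module homomorphism, so
# ker(f) and Im(f) should both be invariant under the action.
def swap(v):
    return (v[1], v[0])

def f(v):
    m = (v[0] + v[1]) / 2
    return (m, m)

for v in [(1.0, 2.0), (-3.0, 5.0), (0.5, 0.5)]:
    assert f(swap(v)) == swap(f(v))  # f is equivariant: f(g.v) == g.f(v)

w = f((1.0, 4.0))
assert swap(w) == w                  # Im(f) = {(t, t)} is swap-invariant
k = (3.0, -3.0)
assert f(k) == (0.0, 0.0)            # k lies in ker(f)
assert f(swap(k)) == (0.0, 0.0)      # and the action keeps it in ker(f)
```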
+{{qed}}
+[[Category:Representation Theory]]
+\end{proof}<|endoftext|>
+\section{Set Intersection Preserves Subsets/Families of Sets}
+Tags: Set Intersection Preserves Subsets, Indexed Families
+
+\begin{theorem}
+Let $I$ be an [[Definition:Indexing Set|indexing set]].
+Let $\family {A_\alpha}_{\alpha \mathop \in I}$ and $\family {B_\alpha}_{\alpha \mathop \in I}$ be [[Definition:Indexed Family of Subsets|indexed families of subsets]] of a [[Definition:Set|set]] $S$.
+Let:
+:$\forall \beta \in I: A_\beta \subseteq B_\beta$
+Then:
+:$\displaystyle \bigcap_{\alpha \mathop \in I} A_\alpha \subseteq \bigcap_{\alpha \mathop \in I} B_\alpha$
+\end{theorem}
+
+\begin{proof}
+{{begin-eqn}}
+{{eqn | l = x
+ | o = \in
+ | r = \bigcap_{\alpha \mathop \in I} A_\alpha
+ | c =
+}}
+{{eqn | lll=\leadsto
+ | ll= \forall \alpha \in I:
+ | l = x
+ | o = \in
+ | r = A_\alpha
+ | c = {{Defof|Intersection of Family}}
+}}
+{{eqn | lll=\leadsto
+ | ll= \forall \alpha \in I:
+ | l = x
+ | o = \in
+ | r = B_\alpha
+ | c = {{Defof|Subset}}
+}}
+{{eqn | lll=\leadsto
+ | l = x
+ | o = \in
+ | r = \bigcap_{\alpha \mathop \in I} B_\alpha
+ | c = {{Defof|Intersection of Family}}
+}}
+{{end-eqn}}
+By definition of [[Definition:Subset|subset]]:
+:$\displaystyle \bigcap_{\alpha \mathop \in I} A_\alpha \subseteq \bigcap_{\alpha \mathop \in I} B_\alpha$
+{{qed}}
+\end{proof}<|endoftext|>
+\section{Set Intersection Preserves Subsets}
+Tags: Set Intersection, Subsets, Set Intersection Preserves Subsets
+
+\begin{theorem}
+Let $A, B, S, T$ be [[Definition:Set|sets]].
+Then:
+:$A \subseteq B, \ S \subseteq T \implies A \cap S \subseteq B \cap T$
+\end{theorem}
+
+\begin{proof}
+Let $A \subseteq B$ and $S \subseteq T$.
+Then:
+{{begin-eqn}}
+{{eqn| l = x \in A
+ | o = \implies
+ | r = x \in B
+ | c = {{Defof|Subset}}
+}}
+{{eqn| l = x \in S
+ | o = \implies
+ | r = x \in T
+ | c = {{Defof|Subset}}
+}}
+{{end-eqn}}
+Now we invoke the [[Praeclarum Theorema]] of [[Definition:Propositional Logic|propositional logic]]:
+:$\paren {p \implies q} \land \paren {r \implies s} \vdash \paren {p \land r} \implies \paren {q \land s}$
+applying it as:
+:$\paren {x \in A \implies x \in B, \ x \in S \implies x \in T} \leadsto \paren {x \in A \land x \in S \implies x \in B \land x \in T}$
+The result follows directly from the definition of [[Definition:Set Intersection|set intersection]]:
+:$\paren {x \in A \implies x \in B, \ x \in S \implies x \in T} \leadsto \paren {x \in A \cap S \implies x \in B \cap T}$
+and from the definition of [[Definition:Subset|subset]]:
+:$A \subseteq B, \ S \subseteq T \implies A \cap S \subseteq B \cap T$
+{{qed}}
+[[Category:Set Intersection]]
+[[Category:Subsets]]
+[[Category:Set Intersection Preserves Subsets]]
+\end{proof}<|endoftext|>
+\section{Null Space Contains Zero Vector}
+Tags: Linear Algebra, Null Spaces, Null Space Contains Zero Vector
+
+\begin{theorem}
+Let:
+:$\map {\mathrm N} {\mathbf A} = \set {\mathbf x \in \R^n: \mathbf A \mathbf x = \mathbf 0}$
+be the [[Definition:Null Space|null space]] of $\mathbf A$, where:
+:$ \mathbf A_{m \times n} = \begin {bmatrix}
+a_{1 1} & a_{1 2} & \cdots & a_{1 n} \\
+a_{2 1} & a_{2 2} & \cdots & a_{2 n} \\
+ \vdots & \vdots & \ddots & \vdots \\
+a_{m 1} & a_{m 2} & \cdots & a_{m n} \\
+\end{bmatrix}$
+is a [[Definition:Matrix|matrix]] in the [[Definition:Matrix Space|matrix space]] $\map {\MM_\R} {m, n}$.
+Then the [[Definition:Null Space|null space]] of $\mathbf A$ contains the [[Definition:Zero Vector|zero vector]]:
+:$\mathbf 0 \in \map {\mathrm N} {\mathbf A}$
+where:
+:$\mathbf 0 = \mathbf 0_{n \times 1} = \begin {bmatrix} 0 \\ 0 \\ \vdots \\ 0 \end {bmatrix}$
+\end{theorem}
+
+\begin{proof}
+{{begin-eqn}}
+{{eqn | l = \mathbf A \mathbf 0
+ | r = \begin {bmatrix}
+a_{1 1} & a_{1 2} & \cdots & a_{1 n} \\
+a_{2 1} & a_{2 2} & \cdots & a_{2 n} \\
+ \vdots & \vdots & \ddots & \vdots \\
+a_{m 1} & a_{m 2} & \cdots & a_{m n} \\
+\end {bmatrix} \begin {bmatrix} 0 \\ 0 \\ \vdots \\ 0 \end {bmatrix}
+}}
+{{eqn | r = \begin {bmatrix} 0 \\ 0 \\ \vdots \\ 0 \end {bmatrix}
+}}
+{{eqn | r = \mathbf 0
+}}
+{{end-eqn}}
+The [[Definition:Order of Matrix|orders]] of the [[Definition:Matrix|matrices]] involved are compatible [[Definition:By Hypothesis|by hypothesis]].
+The result follows by the definition of [[Definition:Null Space|null space]].
+{{qed}}
+\end{proof}
+
+\begin{proof}
+From [[Matrix Product as Linear Transformation]], $\mathbf x \mapsto \mathbf A \mathbf x$ defines a [[Definition:Linear Transformation on Vector Space|linear transformation]] from $\R^n$ to $\R^m$.
+The result then follows from [[Linear Transformation Maps Zero Vector to Zero Vector]].
+\end{proof}<|endoftext|>
+\section{Homogeneous System has Zero Vector as Solution}
+Tags: Linear Algebra, Null Spaces
+
+\begin{theorem}
+Every [[Definition:Homogeneous Linear Equations|homogeneous system of linear equations]] has the [[Definition:Zero Vector|zero vector]] as a [[Definition:Solution to System of Simultaneous Equations|solution]].
+\end{theorem}
+
+\begin{proof}
+By the definition of [[Definition:Null Space|null space]], $\mathbf 0$ is a [[Definition:Solution to System of Simultaneous Equations|solution]] {{iff}} the [[Definition:Null Space|null space]] contains the [[Definition:Zero Vector|zero vector]].
+The result follows from [[Null Space Contains Zero Vector]].
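In concrete terms, both results come down to the computation $\mathbf A \mathbf 0 = \mathbf 0$, which can be checked directly for any particular matrix (the $2 \times 3$ matrix below is our own example):

```python
A = [[1, 2, 3],
     [4, 5, 6]]

def matvec(M, x):
    # ordinary matrix-vector product, computed row by row
    return [sum(row[j] * x[j] for j in range(len(x))) for row in M]

zero = [0, 0, 0]                   # the zero vector of R^3
assert matvec(A, zero) == [0, 0]   # A 0 = 0: the zero vector solves A x = 0
```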
+{{qed}}
+[[Category:Linear Algebra]]
+[[Category:Null Spaces]]
+\end{proof}<|endoftext|>
+\section{Commutative Linear Transformation is G-Module Homomorphism}
+Tags: Representation Theory
+
+\begin{theorem}
+Let $\rho: G \to \operatorname{GL} \left({V}\right)$ be a [[Definition:Linear Representation|representation]].
+Let $f: V \to V$ be a [[Definition:Linear Mapping|linear mapping]].
+Let:
+:$\forall g \in G: \rho \left({g}\right) \circ f = f \circ \rho \left({g}\right)$
+Then $f: V \to V$ is a [[Definition:G-Module Homomorphism|$G$-module homomorphism]].
+{{explain|between which $G$-modules? (probably $\left({G, \rho}\right) \to \left({G, \rho}\right)$, which makes it rather tautologous)}}
+\end{theorem}
+
+\begin{proof}
+Let:
+:$\forall g \in G: \rho \left({g}\right) \circ f = f \circ \rho \left({g}\right)$
+Let $v \in V$ be a [[Definition:Vector Space|vector]].
+Then:
+:$\rho \left({g}\right) \left({f \left({v}\right)}\right) = f \left({\rho \left({g}\right) \left({v}\right)}\right)$
+Using the properties from [[Correspondence between Linear Group Actions and Linear Representations]]:
+:there exists a [[Definition:G-Module|$G$-module]] $\left({V, \phi}\right)$ associated with $\rho$ such that:
+::$\phi \left({g, v}\right) = \rho \left({g}\right) \left({v}\right)$
+Applying the last formula:
+:$\rho \left({g}\right) \left({f \left({v}\right)}\right) = \phi \left({g, f \left({v}\right)}\right)$
+and:
+:$f \left({\phi \left({g, v}\right)}\right) = f \left({\rho \left({g}\right) \left({v}\right)}\right)$
+Thus our assumption is equivalent to:
+:$f \left({\phi \left({g, v}\right)}\right) = \phi \left({g, f \left({v}\right)}\right)$
+Hence, by definition of [[Definition:G-Module Homomorphism|$G$-module homomorphism]], $f: V \to V$ is a [[Definition:G-Module Homomorphism|$G$-module homomorphism]].
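For a concrete check of the hypothesis (the representation and the map below are our own illustrative choices, not part of the theorem): represent $\Z / 2$ on $\R^2$ by the identity and the coordinate-swap matrix; any matrix of the form $\left({\begin{smallmatrix} a & b \\ b & a \end{smallmatrix}}\right)$ commutes with both, and so, by the theorem, defines a $G$-module homomorphism.

```python
def matmul(A, B):
    # product of two 2 x 2 matrices
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

I = [[1, 0], [0, 1]]   # rho(0): the identity
S = [[0, 1], [1, 0]]   # rho(1): swap the two coordinates

F = [[2, 5], [5, 2]]   # of the form [[a, b], [b, a]]: commutes with S

for R in (I, S):
    assert matmul(R, F) == matmul(F, R)   # rho(g) o f = f o rho(g)
```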
+{{qed}}
+[[Category:Representation Theory]]
+\end{proof}<|endoftext|>
+\section{Characterization of Measures}
+Tags: Measure Theory
+
+\begin{theorem}
+Let $\struct {X, \Sigma}$ be a [[Definition:Measurable Space|measurable space]].
+Let $\overline \R_{\ge 0}$ denote the set of [[Definition:Positive Real Number|positive]] [[Definition:Extended Real Number Line|extended real numbers]].
+A [[Definition:Mapping|mapping]] $\mu: \Sigma \to \overline \R_{\ge 0}$ is a [[Definition:Measure (Measure Theory)|measure]] {{iff}}:
+:$(1):\quad \map \mu \O = 0$
+:$(2):\quad \mu$ is [[Definition:Finitely Additive Function|finitely additive]]
+:$(3):\quad$ For every [[Definition:Increasing Sequence of Sets|increasing sequence]] $\sequence {E_n}_{n \mathop \in \N}$ in $\Sigma$, if $E_n \uparrow E$, then:
+::::$\map \mu E = \ds \lim_{n \mathop \to \infty} \map \mu {E_n}$
+where $E_n \uparrow E$ denotes [[Definition:Limit of Increasing Sequence of Sets|limit of increasing sequence of sets]].
+Alternatively, and equivalently, $(3)$ may be replaced by either of:
+:$(3'):\quad$ For every [[Definition:Decreasing Sequence of Sets|decreasing sequence]] $\sequence {E_n}_{n \mathop \in \N}$ in $\Sigma$ for which $\map \mu {E_1}$ is [[Definition:Finite|finite]], if $E_n \downarrow E$, then:
+::::$\map \mu E = \ds \lim_{n \mathop \to \infty} \map \mu {E_n}$
+:$(3''):\quad$ For every [[Definition:Decreasing Sequence of Sets|decreasing sequence]] $\sequence {E_n}_{n \mathop \in \N}$ in $\Sigma$ for which $\map \mu {E_1}$ is [[Definition:Finite|finite]], if $E_n \downarrow \O$, then:
+::::$\ds \lim_{n \mathop \to \infty} \map \mu {E_n} = 0$
+where $E_n \downarrow E$ denotes [[Definition:Limit of Decreasing Sequence of Sets|limit of decreasing sequence of sets]].
+\end{theorem}
+
+\begin{proof}
+=== Necessary Condition ===
+It is to be shown that a [[Definition:Measure (Measure Theory)|measure]] $\mu$ has the properties $(1)$, $(2)$, $(3)$, $(3')$ and $(3'')$.
+Property $(1)$ is part of the definition of [[Definition:Measure (Measure Theory)|measure]], and hence is immediate.
+Property $(2)$ is precisely the statement of [[Measure is Finitely Additive Function]].
+Next, let $\sequence {E_n}_{n \mathop \in \N} \uparrow E$ in $\Sigma$ be an [[Definition:Increasing Sequence of Sets|increasing sequence]].
+Define $F_1 = E_1$, and, for $n \in \N$:
+:$F_{n + 1} = E_{n + 1} \setminus E_n$
+Then as $\Sigma$ is a [[Definition:Sigma-Algebra|$\sigma$-algebra]]:
+:$\forall n \in \N: F_n \in \Sigma$
+Also, the $F_n$ are [[Definition:Pairwise Disjoint|pairwise disjoint]] as $\sequence {E_n}_{n \mathop \in \N}$ is an [[Definition:Increasing Sequence of Sets|increasing sequence]].
+By construction, we have for all $k \in \N$ that:
+:$\ds E_k = \bigcup_{n \mathop = 1}^k F_n$
+and so:
+:$\ds E = \bigcup_{n \mathop \in \N} F_n$
+Hence, as $\mu$ is a [[Definition:Measure (Measure Theory)|measure]], compute:
+{{begin-eqn}}
+{{eqn | l = \map \mu E
+ | r = \map \mu {\bigcup_{n \mathop \in \N} F_n}
+ | c = by the above reasoning
+}}
+{{eqn | r = \sum_{n \mathop = 1}^\infty \map \mu {F_n}
+ | c = $\mu$ is a [[Definition:Measure (Measure Theory)|Measure]]
+}}
+{{eqn | r = \lim_{k \mathop \to \infty} \sum_{n \mathop = 1}^k \map \mu {F_n}
+ | c = {{Defof|Series}}
+}}
+{{eqn | r = \lim_{k \mathop \to \infty} \map \mu {\bigcup_{n \mathop = 1}^k F_n}
+ | c = [[Measure is Finitely Additive Function]], [[Finite Union of Sets in Additive Function]]
+}}
+{{eqn | r = \lim_{k \mathop \to \infty} \map \mu {E_k}
+ | c = by the above reasoning
+}}
+{{end-eqn}}
+This establishes property $(3)$ for [[Definition:Measure (Measure Theory)|measures]].
+For $(3'')$, note that it is a special case of $(3')$, which is proved next.
+For property $(3')$, let $\sequence {E_n}_{n \mathop \in \N} \downarrow E$ be a [[Definition:Decreasing Sequence of Sets|decreasing sequence]] in $\Sigma$.
+Suppose that $\map \mu {E_1} < +\infty$.
+By [[Measure is Monotone]], this implies:
+:$\forall n \in \N: \map \mu {E_n} < +\infty$
+and also:
+:$\map \mu E < +\infty$
+Now define:
+:$\forall n \in \N: F_n := E_1 \setminus E_n$
+Then:
+:$F_n \uparrow E_1 \setminus E$
+Indeed, as $\sequence {E_n}_{n \mathop \in \N}$ is decreasing, the sequence $\sequence {F_n}_{n \mathop \in \N}$ is increasing, and:
+:$\ds \bigcup_{n \mathop \in \N} \paren {E_1 \setminus E_n} = E_1 \setminus \bigcap_{n \mathop \in \N} E_n = E_1 \setminus E$
+by [[De Morgan's Laws (Set Theory)|De Morgan's Laws]].
+Hence, property $(3)$ can be applied as follows:
+{{begin-eqn}}
+{{eqn | l = \map \mu {E_1} - \map \mu E
+ | r = \map \mu {E_1 \setminus E}
+ | c = [[Measure of Set Difference with Subset]]
+}}
+{{eqn | r = \lim_{n \mathop \to \infty} \map \mu {E_1 \setminus E_n}
+ | c = by property $(3)$
+}}
+{{eqn | r = \lim_{n \mathop \to \infty} \paren {\map \mu {E_1} - \map \mu {E_n} }
+ | c = [[Measure of Set Difference with Subset]]
+}}
+{{eqn | r = \map \mu {E_1} - \lim_{n \mathop \to \infty} \map \mu {E_n}
+}}
+{{end-eqn}}
+Here, all expressions involving subtraction are well-defined as $\mu$ takes [[Definition:Finite|finite]] values.
+It follows that:
+:$\ds \map \mu E = \lim_{n \mathop \to \infty} \map \mu {E_n}$
+as required.
+{{qed|lemma}}
+=== Sufficient Condition ===
+The [[Definition:Mapping|mapping]] $\mu$ already satisfies axiom $(1)$ for a [[Definition:Measure (Measure Theory)|measure]] by the imposition on its [[Definition:Codomain of Mapping|codomain]].
+Also, axiom $(3')$ is identical to assumption $(1)$.
+It remains to check axiom $(2)$.
+So let $\sequence {E_n}_{n \mathop \in \N}$ be a [[Definition:Sequence|sequence]] of [[Definition:Pairwise Disjoint|pairwise disjoint]] sets in $\Sigma$.
+Define, for $n \in \N$: +:$F_n = \ds \bigcup_{k \mathop = 1}^n E_k$ +Then: +:$\forall n \in \N: F_n \subseteq F_{n + 1}$ +Also, by [[Additive Function is Strongly Additive]]: +:$\ds \forall n \in \N: \map \mu {F_n} = \map \mu {\bigcup_{k \mathop = 1}^n E_k} = \sum_{k \mathop = 1}^n \map \mu {E_k}$ +Hence, using condition $(3)$ on the $F_n$, obtain: +{{begin-eqn}} +{{eqn | l = \map \mu {\bigcup_{n \mathop \in \N} E_n} + | r = \lim_{n \mathop \to \infty} \map \mu {F_n} + | c = Condition $(3)$ +}} +{{eqn | r = \lim_{n \mathop \to \infty} \sum_{k \mathop = 1}^n \map \mu {E_k} + | c = by the reasoning above +}} +{{eqn | r = \sum_{k \mathop = 1}^\infty \map \mu {E_k} + | c = {{Defof|Series}} +}} +{{end-eqn}} +This establishes that $\mu$ also satisfies axiom $(2)$ for a [[Definition:Measure (Measure Theory)|measure]], and so it is a [[Definition:Measure (Measure Theory)|measure]]. +Now to show that $(3')$ and $(3'')$ can validly replace $(3)$. +As $(3')$ clearly implies $(3'')$ (which is a special case of the former), it will suffice to show that $(3'')$ implies $(3)$. +{{finish|this really isn't hard (use set difference and finiteness on decreasing sequence $G_n :{{=}} \bigcup_{k \mathop \in \N} E_k \setminus F_n \downarrow \O$) but I can't formulate it nicely; the case where {{LHS}}, {{RHS}} are infinite for axiom $(2)$ has to be covered. Feel free to finish it.}} +{{qed}} +\end{proof}<|endoftext|> +\section{Measure is Countably Subadditive} +Tags: Measure Theory + +\begin{theorem} +Let $\left({X, \Sigma, \mu}\right)$ be a [[Definition:Measure Space|measure space]]. +Then $\mu$ is a [[Definition:Countably Subadditive Function|countably subadditive function]]. +\end{theorem} + +\begin{proof} +Let $\left({E_n}\right)_{n \in \N}$ be a [[Definition:Sequence|sequence]] of sets in $\Sigma$. 
+It is required to show that:
+:$\displaystyle \mu \left({\bigcup_{n \mathop \in \N} E_n}\right) \le \sum_{n \mathop \in \N} \mu \left({E_n}\right)$
+Now define the sequence $\left({F_n}\right)_{n\in\N}$ in $\Sigma$ by:
+:$F_n := \displaystyle \bigcup_{k \mathop = 1}^n E_k$
+By [[Subset of Union]], it follows that, for all $n \in \N$, $F_n \subseteq F_{n+1}$.
+Hence, $\left({F_n}\right)_{n\in\N}$ is [[Definition:Increasing Sequence of Sets|increasing]].
+It is immediate that $F_n \uparrow \displaystyle \bigcup_{n \mathop \in \N} E_n$, where $\uparrow$ signifies the [[Definition:Limit of Increasing Sequence of Sets|limit of an increasing sequence of sets]].
+Now reason as follows:
+{{begin-eqn}}
+{{eqn|l = \mu \left({\bigcup_{n \mathop \in \N} E_n}\right)
+ |r = \lim_{n \to \infty} \mu \left({F_n}\right)
+ |c = [[Characterization of Measures]], $(3)$
+}}
+{{eqn|r = \lim_{n \to \infty} \mu \left({E_1 \cup \cdots \cup E_n}\right)
+ |c = Definition of $F_n$
+}}
+{{eqn|o = \le
+ |r = \lim_{n \to \infty} \sum_{k \mathop = 1}^n \mu \left({E_k}\right)
+ |c = [[Measure is Subadditive/Corollary|Measure is Subadditive: Corollary]]
+}}
+{{eqn|r = \sum_{k \mathop \in \N} \mu \left({E_k}\right)
+}}
+{{end-eqn}}
+Hence the result.
+{{qed}}
+\end{proof}<|endoftext|>
+\section{Dirac Measure is Measure}
+Tags: Measure Theory
+
+\begin{theorem}
+Let $\left({X, \Sigma}\right)$ be a [[Definition:Measurable Space|measurable space]].
+Let $x \in X$, and let $\delta_x$ be the [[Definition:Dirac Measure|Dirac measure at $x$]].
+Then $\delta_x$ is a [[Definition:Measure (Measure Theory)|measure]].
+\end{theorem}
+
+\begin{proof}
+Let us verify in turn that $\delta_x$ satisfies the axioms for a [[Definition:Measure (Measure Theory)|measure]].
+=== Axiom $(1)$ ===
+By definition of the [[Definition:Dirac Measure|Dirac measure]], $\delta_x \left({E}\right) \ge 0$ for all $E \in \Sigma$.
+{{qed|lemma}}
+=== Axiom $(2)$ ===
+Let $\left({E_n}\right)_{n \in \N}$ be a [[Definition:Sequence|sequence]] of [[Definition:Pairwise Disjoint|pairwise disjoint sets]].
+It follows that if for some $m \in \N$, $x \in E_m$, it must be that $n \ne m$ implies $x \notin E_n$.
+Now suppose $x \in E_m$ for some $m \in \N$.
+Then by definition of [[Definition:Set Union|set union]], $x \in \displaystyle \bigcup_{n \mathop \in \N} E_n$.
+Thus:
+{{begin-eqn}}
+{{eqn|l = \delta_x \left({\bigcup_{n \mathop \in \N} E_n}\right)
+ |r = 1
+}}
+{{eqn|r = \sum_{n \mathop \in \N} \delta_x \left({E_n}\right)
+}}
+{{end-eqn}}
+because $\delta_x \left({E_n}\right) = 0$ for $n \ne m$, while $\delta_x \left({E_m}\right) = 1$.
+Finally, if $x \notin E_n$ for all $n \in \N$, then by definition of [[Definition:Set Union|set union]]:
+:$x \notin \displaystyle \bigcup_{n \mathop \in \N} E_n$
+so that:
+{{begin-eqn}}
+{{eqn|l = \delta_x \left({\bigcup_{n \mathop \in \N} E_n}\right)
+ |r = 0
+}}
+{{eqn|r = \sum_{n \mathop \in \N} 0
+}}
+{{eqn|r = \sum_{n \mathop \in \N} \delta_x \left({E_n}\right)
+}}
+{{end-eqn}}
+Hence, from [[Proof by Cases]]:
+:$\displaystyle \sum_{n \mathop \in \N} \delta_x \left({E_n}\right) = \delta_x \left({\bigcup_{n \mathop \in \N} E_n}\right)$
+{{qed|lemma}}
+=== Axiom $(3)$ ===
+By definition of the [[Definition:Dirac Measure|Dirac measure]], $\delta_x \left({X}\right) = 1$.
+Hence there is an $E \in \Sigma$ such that $\delta_x \left({E}\right)$ is [[Definition:Finite Extended Real Number|finite]].
+{{qed|lemma}}
+Thus, $\delta_x$, satisfying all the axioms, is a [[Definition:Measure (Measure Theory)|measure]].
+{{qed}}
+[[Category:Measure Theory]]
+\end{proof}<|endoftext|>
+\section{Dirac Measure is Probability Measure}
+Tags: Measure Theory
+
+\begin{theorem}
+Let $\left({X, \mathcal A}\right)$ be a [[Definition:Measurable Space|measurable space]].
+Let $x \in X$, and let $\delta_x$ be the [[Definition:Dirac Measure|Dirac measure at $x$]].
+Then $\delta_x$ is a [[Definition:Probability Measure|probability measure]].
+\end{theorem}
+
+\begin{proof}
+By [[Dirac Measure is Measure]], $\delta_x$ is a [[Definition:Measure (Measure Theory)|measure]].
+Also, $\delta_x \left({X}\right) = 1$ because $x \in X$.
+Hence $\delta_x$ is a [[Definition:Probability Measure|probability measure]].
+{{qed}}
+[[Category:Measure Theory]]
+\end{proof}<|endoftext|>
+\section{Definition:Co-Countable Measure}
+Tags: Definitions: Measure Theory
+
+\begin{theorem}
+Let $X$ be an [[Definition:Uncountable Set|uncountable set]].
+Let $\Sigma$ be the [[Sigma-Algebra of Countable Sets|$\sigma$-algebra of countable sets]] on $X$.
+Then the '''co-countable measure (on $X$)''' is the [[Definition:Measure (Measure Theory)|measure]] defined by:
+:$\mu: \Sigma \to \overline{\R}, \ \mu \left({E}\right) := \begin{cases} 0 & : \text{if $E$ is countable}\\ 1 & : \text{if $E$ is co-countable}\end{cases}$
+where:
+: $\overline{\R}$ denotes the [[Definition:Extended Real Number Line|extended real numbers]]
+: $E$ is [[Definition:Co-Countable Set|co-countable]] [[Definition:Iff|iff]] $X \setminus E$ is [[Definition:Countable|countable]].
+\end{theorem}<|endoftext|>
+\section{Co-Countable Measure is Measure}
+Tags: Measure Theory
+
+\begin{theorem}
+Let $X$ be an [[Definition:Uncountable Set|uncountable set]].
+Let $\Sigma$ be the [[Sigma-Algebra of Countable Sets|$\sigma$-algebra of countable sets]] on $X$.
+Then the [[Definition:Co-Countable Measure|co-countable measure]] $\mu$ on $X$ is a [[Definition:Measure (Measure Theory)|measure]].
+\end{theorem}
+
+\begin{proof}
+Let us verify the [[Definition:Measure (Measure Theory)|measure axioms]] $(1)$, $(2)$ and $(3')$ for $\mu$.
+=== Proof of $(1)$ ===
+For all $S \in \Sigma$, $\mu \left({S}\right)$ is $0$ or $1$.
+In either case, $\mu \left({S}\right) \ge 0$.
+{{qed|lemma}} +=== Proof of $(2)$ === +It is to be shown that (for a [[Definition:Sequence|sequence]] $\left({S_n}\right)_{n \in \N}$ of [[Definition:Pairwise Disjoint|pairwise disjoint sets]]): +:$\displaystyle \sum_{n \mathop = 1}^\infty \mu \left({S_n}\right) = \mu \left({\bigcup_{n \mathop = 1}^\infty S_n}\right)$ +Suppose that at least one $S_n$ is [[Definition:Co-Countable Set|co-countable]], say $S_N$. +Since the $S_n$ are [[Definition:Pairwise Disjoint|pairwise disjoint]], it follows that, for all $n \in \N$, $n \ne N$: +:$S_n \subseteq X \setminus S_N$ +by [[Empty Intersection iff Subset of Complement]]. +From [[Subset of Countably Infinite Set is Countable]], the $S_n$ with $n \ne N$ are all [[Definition:Countable Set|countable]], as $S_N$ is [[Definition:Co-Countable Set|co-countable]]. +Therefore: +:$\mu \left({S_n}\right) = \begin{cases}1 & \text{if $n = N$}\\ 0 & \text{if $n \ne N$}\end{cases}$ +and subsequently: +:$\displaystyle \sum_{n \mathop = 1}^\infty \mu \left({S_n}\right) = 1$ +From [[Superset of Co-Countable Set]], $\displaystyle \bigcup_{n \mathop = 1}^\infty S_n$ is [[Definition:Co-Countable Set|co-countable]]. +Hence: +$\displaystyle \mu \left({\bigcup_{n \mathop = 1}^\infty S_n}\right) = 1$ +verifying $(2)$ for $\mu$ in this case. +If on the other hand, all $S_n$ are [[Definition:Countable Set|countable]], then for all $n \in \N$: +:$\mu \left({S_n}\right) = 0$ +and so: +:$\displaystyle \sum_{n \mathop = 1}^\infty \mu \left({S_n}\right) = 0$ +From [[Countable Union of Countable Sets is Countable]]: +:$\displaystyle \bigcup_{n \mathop = 1}^\infty S_n$ +is also [[Definition:Countable Set|countable]], hence: +:$\displaystyle \mu \left({\bigcup_{n \mathop = 1}^\infty S_n}\right) = 0$ +verifying $(2)$ for $\mu$ in this case as well. +Hence $(2)$ holds for $\mu$, from [[Proof by Cases]]. +{{qed|lemma}} +=== Proof of $(3')$ === +Note that $\varnothing \in \Sigma$ as $\Sigma$ is a [[Definition:Sigma-Algebra|$\sigma$-algebra]]. 
+By [[Empty Set is Countable]], $\varnothing$ is [[Definition:Countable Set|countable]], whence:
+:$\mu \left({\varnothing}\right) = 0$
+{{qed|lemma}}
+Having verified the axioms, it follows that $\mu$ is a [[Definition:Measure (Measure Theory)|measure]].
+{{qed}}
+[[Category:Measure Theory]]
+\end{proof}<|endoftext|>
+\section{Co-Countable Measure is Probability Measure}
+Tags: Measure Theory
+
+\begin{theorem}
+Let $X$ be an [[Definition:Uncountable Set|uncountable set]].
+Let $\mathcal A$ be the [[Sigma-Algebra of Countable Sets|$\sigma$-algebra of countable sets]] on $X$.
+Then the [[Definition:Co-Countable Measure|co-countable measure]] $\mu$ on $X$ is a [[Definition:Probability Measure|probability measure]].
+\end{theorem}
+
+\begin{proof}
+By [[Co-Countable Measure is Measure]], $\mu$ is a [[Definition:Measure (Measure Theory)|measure]].
+By [[Relative Complement with Self is Empty Set]], we have $\complement_X \left({X}\right) = \varnothing$.
+As $\varnothing$ is [[Definition:Countable Set|countable]], it follows that $X$ is [[Definition:Co-Countable Set|co-countable]].
+Hence $\mu \left({X}\right) = 1$, and so $\mu$ is a [[Definition:Probability Measure|probability measure]].
+{{qed}}
+[[Category:Measure Theory]]
+\end{proof}<|endoftext|>
+\section{Counting Measure is Measure}
+Tags: Measure Theory
+
+\begin{theorem}
+Let $\left({X, \Sigma}\right)$ be a [[Definition:Measurable Space|measurable space]].
+Then the [[Definition:Counting Measure|counting measure]] $\left\vert{\cdot}\right\vert$ on $\left({X, \Sigma}\right)$ is a [[Definition:Measure (Measure Theory)|measure]].
+\end{theorem}
+
+\begin{proof}
+Let us verify the [[Definition:Measure (Measure Theory)|measure axioms]] $(1)$, $(2)$ and $(3')$ for $\left\vert{\cdot}\right\vert$.
+=== Proof of $(1)$ ===
+The values that $\left\vert{\cdot}\right\vert$ can take are the [[Definition:Natural Number|natural numbers]] $\N$ and $+\infty$.
+All of these are positive, whence:
+:$\forall S \in \Sigma: \left\vert{S}\right\vert \ge 0$
+{{qed|lemma}}
+=== Proof of $(2)$ ===
+It is to be shown that (for a [[Definition:Sequence|sequence]] $\left({S_n}\right)_{n \in \N}$ of [[Definition:Pairwise Disjoint|pairwise disjoint sets]]):
+:$\displaystyle \sum_{n \mathop = 1}^\infty \left\vert{S_n}\right\vert = \left\vert{\bigcup_{n \mathop = 1}^\infty S_n}\right\vert$
+Suppose that the [[Definition:Cardinality|cardinality]] of at least one $S_i$ is [[Definition:Infinite Set|infinite]].
+Then the cardinality of:
+:$\displaystyle \bigcup_{n \mathop = 1}^\infty S_n$
+is infinite, by the contrapositive of [[Subset of Finite Set is Finite]].
+Hence:
+:$\displaystyle \left\vert{\bigcup_{n \mathop = 1}^\infty S_n}\right\vert = +\infty$
+Now as:
+:$\left\vert{S_i}\right\vert = +\infty$
+it follows by the definition of [[Definition:Extended Real Addition|extended real addition]] that:
+:$\displaystyle \sum_{n \mathop = 1}^\infty \left\vert{S_n}\right\vert = +\infty$
+Suppose now that all $S_i$ are [[Definition:Finite Set|finite]], and $\displaystyle \sum_{n \mathop = 1}^\infty \left\vert{S_n}\right\vert$ converges.
+Then by [[Convergent Series of Natural Numbers]], it follows that for some $N \in \N$:
+:$\forall m \ge N: \left\vert{S_m}\right\vert = 0$
+which by [[Cardinality of Empty Set]] means $S_m = \varnothing$.
+Therefore we conclude that:
+:$\displaystyle \sum_{n \mathop = 1}^\infty \left\vert{S_n}\right\vert = \sum_{n \mathop = 1}^N \left\vert{S_n}\right\vert$
+and by [[Union with Empty Set]]:
+:$\displaystyle \bigcup_{n \mathop = 1}^\infty S_n = \bigcup_{n \mathop = 1}^N S_n$
+By [[Cardinality of Set Union/Corollary|Cardinality of Set Union: Corollary]]:
+:$\displaystyle \left\vert{\bigcup_{n \mathop = 1}^N S_n}\right\vert = \sum_{n \mathop = 1}^N \left\vert{S_n}\right\vert$
+and combining this with the above yields the desired identity.
+Suppose finally that $\displaystyle \sum_{n \mathop = 1}^\infty \left\vert{S_n}\right\vert$ diverges.
+To establish the desired identity, it is to be shown that $\displaystyle \bigcup_{n \mathop = 1}^\infty S_n$ is [[Definition:Infinite Set|infinite]].
+Suppose to the contrary that it has finite [[Definition:Cardinality|cardinality]], say $k$.
+By [[Cardinality of Set Union/Corollary|Cardinality of Set Union: Corollary]], for each $N \in \N$:
+:$\displaystyle \sum_{n \mathop = 1}^N \left\vert{S_n}\right\vert = \left\vert{\bigcup_{n \mathop = 1}^N S_n}\right\vert$
+Now since:
+:$\displaystyle \bigcup_{n \mathop = 1}^N S_n \subseteq \bigcup_{n \mathop = 1}^\infty S_n$
+it follows by [[Cardinality of Subset of Finite Set]] that, for each $N \in \N$:
+:$\displaystyle \left\vert{\bigcup_{n \mathop = 1}^N S_n}\right\vert \le k$
+so that the partial sums of $\displaystyle \sum_{n \mathop = 1}^\infty \left\vert{S_n}\right\vert$ are bounded above by $k$, contradicting the assumption that this series diverges.
+Therefore $\displaystyle \bigcup_{n \mathop = 1}^\infty S_n$ is [[Definition:Infinite Set|infinite]] and the identity follows.
+{{qed|lemma}}
+=== Proof of $(3')$ ===
+By [[Cardinality of Empty Set]]:
+:$\left\vert{\varnothing}\right\vert = 0$
+{{qed|lemma}}
+Having verified the axioms, it follows that $\left\vert{\cdot}\right\vert$ is a [[Definition:Measure (Measure Theory)|measure]].
+{{qed}}
+[[Category:Measure Theory]]
+\end{proof}<|endoftext|>
+\section{Trivial Vector Space iff Zero Dimension}
+Tags: Linear Algebra, Vector Spaces
+
+\begin{theorem}
+Let $V$ be a [[Definition:Vector Space|vector space]].
+Then $V = \left\{{\mathbf 0}\right\}$ [[Definition:Iff|iff]] $\dim \left({V}\right) = 0$, where $\dim$ signifies [[Definition:Dimension (Linear Algebra)|dimension]].
+\end{theorem}
+
+\begin{proof}
+=== Necessary Condition ===
+Suppose $V = \left\{{\mathbf 0}\right\}$.
+Then there is no $\mathbf v \in V$ such that $\mathbf v \ne \mathbf 0$.
+Thus there exists no $\left\{{\mathbf v}\right\} \subseteq V$ such that $\mathbf v \ne \mathbf 0$.
+Such a $\left\{{\mathbf v}\right\}$ would be a [[Definition:Linearly Independent Set|linearly independent set]].
+Hence $\varnothing$ is the only possible [[Definition:Basis (Linear Algebra)|basis]] for $V$.
+Hence $\dim \left({V}\right) = \left|{\varnothing}\right| = 0$.
+{{qed|lemma}}
+=== Sufficient Condition ===
+Suppose $\dim \left({V}\right) = 0$.
+Then by definition of [[Definition:Dimension (Linear Algebra)|dimension]], $0 = \dim \left({V}\right) = \left\vert{B}\right\vert$, where $B$ is a [[Definition:Basis (Linear Algebra)|basis]] for $V$.
+Thus $B = \varnothing$.
+Suppose there are sets $\left\{{\mathbf v}\right\}$ such that $\mathbf v \in V, \mathbf v \ne \mathbf 0$.
+By [[Singleton is Linearly Independent]], any such $\left\{{\mathbf v}\right\}$ would be a [[Definition:Linearly Independent Set|linearly independent set]].
+So $V$ has no such $\mathbf v \ne \mathbf 0$.
+Thus $V = \left\{{\mathbf 0}\right\}$.
+{{qed}}
+[[Category:Linear Algebra|{{SUBPAGENAME}}]]
+[[Category:Vector Spaces|{{SUBPAGENAME}}]]
+\end{proof}<|endoftext|>
+\section{G-Submodule Test}
+Tags: Representation Theory
+
+\begin{theorem}
+Let $\left({V, \phi}\right)$ be a [[Definition:G-Module|$G$-module]] over a [[Definition:Field (Abstract Algebra)|field]] $k$.
+Let $W$ be a [[Definition:Vector Subspace|vector subspace]] of $V$.
+Then $\left({W, \phi_W}\right)$, where $\phi_W: G \times W \to W$ is the [[Definition:Restriction of Mapping|restriction]] of $\phi$ to $G \times W$,
+is a [[Definition:G-Submodule|$G$-submodule]] of $V$ [[Definition:Iff|iff]] $\phi \left({G, W}\right) \subseteq W$.
+\end{theorem}
+
+\begin{proof}
+=== Necessary Condition ===
+Assume that $W$ is a [[Definition:G-Submodule|$G$-submodule]] of $V$.
+Hence by definition $\phi_W: G \times W \to W$ is a [[Definition:Linear Group Action|linear action]] on $W$.
+Also by definition, $\phi_W \left({G, W}\right) = \phi \left({G, W}\right) \subseteq W$.
+{{qed|lemma}}
+=== Sufficient Condition ===
+Assume now that $\phi \left({G, W}\right) = \phi_W \left({G, W}\right) \subseteq W$.
+Then it is correct to define $\phi_W: G \times W \to W$; it is a well-defined [[Definition:Mapping|mapping]].
+We need to check if $\phi_W$ is a [[Definition:Linear Group Action|linear action]] on $W$:
+Assume $a, b \in W$ and $g \in G$; in particular, then, $a, b \in V$ and:
+{{begin-eqn}}
+{{eqn|l = \phi_W \left({g, a + b}\right)
+ |r = \phi \left({g, a + b}\right)
+ |c = Definition of $\phi_W$
+}}
+{{eqn|r = \phi \left({g, a}\right) + \phi \left({g, b}\right)
+ |c = $\phi$ is a [[Definition:Linear Group Action|linear action]] on $V$
+}}
+{{eqn|r = \phi_W \left({g, a}\right) + \phi_W \left({g, b}\right)
+ |c = Definition of $\phi_W$
+}}
+{{end-eqn}}
+Further, assume $\lambda \in k$ and $g \in G$, then:
+{{begin-eqn}}
+{{eqn|l = \phi_W \left({g, \lambda b}\right)
+ |r = \phi \left({g, \lambda b}\right)
+ |c = Definition of $\phi_W$
+}}
+{{eqn|r = \lambda \phi \left({g, b}\right)
+ |c = $\phi$ is a [[Definition:Linear Group Action|linear action]] on $V$
+}}
+{{eqn|r = \lambda \phi_W \left({g, b}\right)
+ |c = Definition of $\phi_W$
+}}
+{{end-eqn}}
+Thus $W$ is a [[Definition:G-Submodule|$G$-submodule]] of $V$.
+{{qed}}
+[[Category:Representation Theory|{{SUBPAGENAME}}]]
+\end{proof}<|endoftext|>
+\section{Composite of Continuous Mappings is Continuous/Point}
+Tags: Composite Mappings, Continuous Mappings
+
+\begin{theorem}
+Let $T_1, T_2, T_3$ be [[Definition:Topological Space|topological spaces]].
+Let the [[Definition:Mapping|mapping]] $f : T_1 \to T_2$ be [[Definition:Continuous Mapping at Point (Topology)|continuous at $x$]].
+Let the [[Definition:Mapping|mapping]] $g : T_2 \to T_3$ be [[Definition:Continuous Mapping at Point (Topology)|continuous]] at $\map f x$.
+Then the [[Definition:Composition of Mappings|composite mapping]] $g \circ f : T_1 \to T_3$ is [[Definition:Continuous Mapping at Point (Topology)|continuous at $x$]].
+\end{theorem}
+
+\begin{proof}
+Let $N$ be any [[Definition:Neighborhood (Topology)|neighborhood]] of $\map {\paren {g \circ f} } x$.
+By the [[Definition:Continuous Mapping at Point (Topology)|definition of continuity at a point]]:
+:there exists a [[Definition:Neighborhood (Topology)|neighborhood]] $L$ of $\map f x$ such that $g \sqbrk L \subseteq N$
+and
+:there exists a [[Definition:Neighborhood (Topology)|neighborhood]] $M$ of $x$ such that $f \sqbrk M \subseteq L$.
+Thus $\paren {g \circ f} \sqbrk M \subseteq g \sqbrk L \subseteq N$, as desired.
+{{qed}}
+[[Category:Composite Mappings]]
+[[Category:Continuous Mappings]]
+\end{proof}<|endoftext|>
+\section{Equivalence of Definitions of Continuous Mapping between Topological Spaces/Everywhere}
+Tags: Topology
+
+\begin{theorem}
+Let $T_1 = \struct {S_1, \tau_1}$ and $T_2 = \struct {S_2, \tau_2}$ be [[Definition:Topological Space|topological spaces]].
+Let $f: S_1 \to S_2$ be a [[Definition:Mapping|mapping]] from $S_1$ to $S_2$.
+{{TFAE|def = Everywhere Continuous Mapping Between Topological Spaces|view = everywhere continuous mapping between topological spaces}}
+=== [[Definition:Continuous Mapping (Topology)/Everywhere/Pointwise|Definition by Pointwise Continuity]] ===
+{{Definition:Continuous Mapping (Topology)/Everywhere/Pointwise}}
+=== [[Definition:Continuous Mapping (Topology)/Everywhere/Open Sets|Definition by Open Sets]] ===
+{{Definition:Continuous Mapping (Topology)/Everywhere/Open Sets}}
+\end{theorem}
+
+\begin{proof}
+=== Sufficient Condition ===
+Suppose that:
+:$U \in \tau_2 \implies f^{-1} \sqbrk U \in \tau_1$
+Let $x \in S_1$.
+Let $N \subseteq S_2$ be a [[Definition:Neighborhood (Topology)|neighborhood]] of $\map f x$.
+By the definition of a neighborhood, there exists a $U \in \tau_2$ such that $\map f x \in U \subseteq N$.
+By hypothesis, $f^{-1} \sqbrk U \in \tau_1$, and $x \in f^{-1} \sqbrk U$, so $f^{-1} \sqbrk U$ is a [[Definition:Neighborhood (Topology)|neighborhood]] of $x$.
+Then:
+:$f \sqbrk {f^{-1} \sqbrk U} \subseteq U \subseteq N$
+as desired.
+{{qed|lemma}}
+=== Necessary Condition ===
+Now, suppose that $f$ is [[Definition:Continuous Mapping at Point (Topology)|continuous at every point]] in $S_1$.
+We wish to show that:
+:$U \in \tau_2 \implies f^{-1} \sqbrk U \in \tau_1$
+So, let $U \in \tau_2$.
+Assume that $f^{-1} \sqbrk U$ is non-[[Definition:Empty Set|empty]], otherwise $f^{-1} \sqbrk U = \O \in \tau_1$ by [[Empty Set is Element of Topology]].
+Let $x \in f^{-1} \sqbrk U$.
+By the [[Definition:Continuous Mapping at Point (Topology)|definition of continuity at a point]], there exists a [[Definition:Neighborhood (Topology)|neighborhood]] $N$ of $x$ such that $f \sqbrk N \subseteq U$.
+By the definition of a neighborhood, there exists an $X \in \tau_1$ such that $x \in X \subseteq N$.
+By [[Image of Subset under Mapping is Subset of Image]]:
+:$f \sqbrk X \subseteq f \sqbrk N$
+This gives:
+:$f \sqbrk X \subseteq f \sqbrk N \subseteq U$
+Let $\CC = \set {X \in \tau_1: f \sqbrk X \subseteq U}$.
+Let $\ds H = \bigcup \CC$.
+From the above argument:
+:$f^{-1} \sqbrk U \subseteq H$
+It follows directly from the definition of $H$ (and the definition of $\CC$) that:
+:$H \subseteq f^{-1} \sqbrk U$
+Hence:
+:$H = f^{-1} \sqbrk U$
+By definition, $H$ is the [[Definition:Set Union|union]] of [[Definition:Open Set (Topology)|open sets]] (of $S_1$). Hence $H$ is open by the [[Definition:Topology|definition of a topology]].
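As an informal illustration of the open-sets criterion (not part of the formal proof), for finite topological spaces the condition can be checked exhaustively. The spaces and mappings below are hypothetical examples, not objects from the proof above:

```python
# Informal finite illustration: check the open-sets criterion directly.

def preimage(f, U):
    """Return f^{-1}[U] for a mapping f given as a dict."""
    return frozenset(x for x in f if f[x] in U)

def is_continuous(f, tau1, tau2):
    """Open-sets criterion: U in tau2 implies f^{-1}[U] in tau1."""
    return all(preimage(f, U) in tau1 for U in tau2)

# Sierpinski-like topologies on {1, 2} and {'a', 'b'}
tau1 = {frozenset(), frozenset({1}), frozenset({1, 2})}
tau2 = {frozenset(), frozenset({'a'}), frozenset({'a', 'b'})}

f = {1: 'a', 2: 'b'}   # every preimage of an open set is open
g = {1: 'b', 2: 'a'}   # preimage of {'a'} is {2}, which is not open

print(is_continuous(f, tau1, tau2), is_continuous(g, tau1, tau2))  # True False
```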
+{{qed}} +\end{proof}<|endoftext|> +\section{Null Space Closed under Vector Addition} +Tags: Linear Algebra, Null Spaces + +\begin{theorem} +Let: +:$\map {\mathrm N} {\mathbf A} = \set {\mathbf x \in \R^n : \mathbf A \mathbf x = \mathbf 0}$ +be the [[Definition:Null Space|null space]] of $\mathbf A$, where: +:$\mathbf A_{m \times n} = \begin {bmatrix} +a_{11} & a_{12} & \cdots & a_{1n} \\ +a_{21} & a_{22} & \cdots & a_{2n} \\ +\vdots & \vdots & \ddots & \vdots \\ +a_{m1} & a_{m2} & \cdots & a_{mn} \\ +\end{bmatrix}$, $\mathbf x_{n \times 1} = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix}$ and $\mathbf 0_{m \times 1} = \begin {bmatrix} 0 \\ 0 \\ \vdots \\ 0 \end {bmatrix}$ are [[Definition:Matrix|matrices]] +:the [[Definition:Column Matrix|column matrix]] $\mathbf x_{n \times 1}$ is interpreted as a [[Definition:Vector (Euclidean Space)|vector in $\R^n$]]. +Then $\map {\mathrm N} {\mathbf A}$ is [[Definition:Closed Algebraic Structure|closed]] under [[Definition:Vector Sum|vector addition]]: +:$\forall \mathbf v, \mathbf w \in \map {\mathrm N} {\mathbf A}: \mathbf v + \mathbf w \in \map {\mathrm N} {\mathbf A}$ +\end{theorem} + +\begin{proof} +Let $\mathbf v, \mathbf w \in \map {\mathrm N} {\mathbf A}$. +By the definition of [[Definition:Null Space|null space]]: +{{begin-eqn}} +{{eqn | l = \mathbf A \mathbf v + | r = \mathbf 0 +}} +{{eqn | l = \mathbf A \mathbf w + | r = \mathbf 0 +}} +{{end-eqn}} +Next, observe that: +{{begin-eqn}} +{{eqn | l = \mathbf A \paren {\mathbf v + \mathbf w} + | r = \mathbf A \mathbf v + \mathbf A \mathbf w + | c = [[Matrix Multiplication Distributes over Matrix Addition]] +}} +{{eqn | r = \mathbf 0 + \mathbf 0 +}} +{{eqn | r = \mathbf 0 +}} +{{end-eqn}} +The [[Definition:Order of Matrix|order]] is correct, [[Definition:By Hypothesis|by hypothesis]]. +Hence the result, by the definition of [[Definition:Null Space|null space]]. 
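As an informal numerical check (not part of the formal proof), the distributivity argument above can be verified on a small hypothetical example: a matrix $\mathbf A$ and two null-space vectors $\mathbf v, \mathbf w$, confirming $\mathbf A \paren {\mathbf v + \mathbf w} = \mathbf 0$:

```python
# Informal check: A v = 0 and A w = 0 imply A(v + w) = 0.

def mat_vec(A, x):
    """Matrix-vector product over plain nested lists."""
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

A = [[1, 1, 2],
     [2, 2, 4]]        # null space contains every multiple of (1, 1, -1)

v = [1, 1, -1]         # A v = 0
w = [2, 2, -2]         # A w = 0

s = [vi + wi for vi, wi in zip(v, w)]

print(mat_vec(A, v), mat_vec(A, w), mat_vec(A, s))  # [0, 0] [0, 0] [0, 0]
```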
+{{qed}} +\end{proof}<|endoftext|> +\section{Null Space Closed under Scalar Multiplication} +Tags: Linear Algebra, Null Spaces + +\begin{theorem} +Let: +:$\map {\mathrm N} {\mathbf A} = \set {\mathbf x \in \R^n : \mathbf {A x} = \mathbf 0}$ +be the [[Definition:Null Space|null space]] of $\mathbf A$, where: +:$ \mathbf A_{m \times n} = \begin{bmatrix} +a_{11} & a_{12} & \cdots & a_{1n} \\ +a_{21} & a_{22} & \cdots & a_{2n} \\ +\vdots & \vdots & \ddots & \vdots \\ +a_{m1} & a_{m2} & \cdots & a_{mn} \\ +\end {bmatrix}$, $\mathbf x_{n \times 1} = \begin {bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end {bmatrix}$, $\mathbf 0_{m \times 1} = \begin {bmatrix} 0 \\ 0 \\ \vdots \\ 0 \end {bmatrix}$ +are [[Definition:Matrix|matrices]] where each [[Definition:Column of Matrix|column]] is an [[Definition:Element|element]] of a [[Definition:Real Vector Space|real vector space]]. +Then $\map {\mathrm N} {\mathbf A}$ is [[Definition:Closed for Scalar Product|closed]] under [[Definition:Scalar Multiplication on Vector Space|scalar multiplication]]: +:$\forall \mathbf v \in \map {\mathrm N} {\mathbf A} ,\forall \lambda \in \R: \lambda \mathbf v \in \map {\mathrm N} {\mathbf A}$ +\end{theorem} + +\begin{proof} +Let $\mathbf v \in \map {\mathrm N} {\mathbf A}$, $\lambda \in \R$. +By the definition of [[Definition:Null Space|null space]]: +{{begin-eqn}} +{{eqn | l = \mathbf {A v} + | r = \mathbf {0} +}} +{{end-eqn}} +Observe that: +{{begin-eqn}} +{{eqn | l = \mathbf A \paren {\lambda \mathbf v} + | r = \lambda \paren {\mathbf {A v} } + | c = [[Matrix Multiplication is Homogeneous of Degree 1|Matrix Multiplication is Homogeneous of Degree $1$]] +}} +{{eqn | r = \lambda \mathbf 0 + | c = +}} +{{eqn | r = \mathbf 0 +}} +{{end-eqn}} +Hence the result, by the definition of [[Definition:Null Space|null space]]. 
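As an informal numerical check (not part of the formal proof), the homogeneity argument above can be verified on a hypothetical matrix, confirming $\mathbf A \paren {\lambda \mathbf v} = \mathbf 0$ for several scalars $\lambda$:

```python
# Informal check: A v = 0 implies A(lambda v) = lambda (A v) = 0.

def mat_vec(A, x):
    """Matrix-vector product over plain nested lists."""
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

A = [[1, 1, 2], [2, 2, 4]]   # null space contains (1, 1, -1)
v = [1, 1, -1]

results = []
for lam in (0, 1, -3, 7):
    lv = [lam * vi for vi in v]
    results.append(mat_vec(A, lv))

print(results)  # [[0, 0], [0, 0], [0, 0], [0, 0]]
```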
+{{qed}} +\end{proof}<|endoftext|> +\section{Null Space is Subspace} +Tags: Linear Algebra, Null Spaces + +\begin{theorem} +Let: +:$\operatorname{N} \left({\mathbf A}\right) = \left\{{\mathbf x \in \R^n: \mathbf {Ax} = \mathbf 0}\right\}$ +be the [[Definition:Null Space|null space]] of $\mathbf A$, where: +:$ \mathbf A_{m \times n} = \begin{bmatrix} +a_{11} & a_{12} & \cdots & a_{1n} \\ +a_{21} & a_{22} & \cdots & a_{2n} \\ +\vdots & \vdots & \ddots & \vdots \\ +a_{m1} & a_{m2} & \cdots & a_{mn} \\ +\end{bmatrix}$, $\mathbf x_{n \times 1} = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix}$, $\mathbf 0_{m \times 1} = \begin{bmatrix} 0 \\ 0 \\ \vdots \\ 0 \end{bmatrix}$ +are [[Definition:Matrix|matrices]]. +Then $\operatorname{N} \left({\mathbf A}\right)$ is a [[Definition:Vector Subspace|linear subspace]] of $\R^n$. +\end{theorem} + +\begin{proof} +$\operatorname{N} \left({\mathbf A}\right) \subseteq \R^n$, by construction. +We have: +* $\mathbf 0 \in \operatorname{N} \left({\mathbf A}\right)$, from [[Null Space Contains Zero Vector]] +* $\forall \mathbf v, \mathbf w \in \operatorname{N} \left({\mathbf A}\right): \mathbf v + \mathbf w \in \operatorname{N} \left({\mathbf A}\right)$, from [[Null Space Closed under Vector Addition]] +* $\forall\mathbf v \in \operatorname{N} \left({\mathbf A}\right), \lambda \in \R: \lambda \mathbf v \in \operatorname{N} \left({\mathbf A}\right)$, from [[Null Space Closed under Scalar Multiplication]] +The result follows from [[Vector Subspace of Real Vector Space]]. +{{qed}} +\end{proof}<|endoftext|> +\section{Norm is Continuous} +Tags: Norm Theory, Continuous Mappings + +\begin{theorem} +Let $\struct {V, \norm {\,\cdot\,} }$ be a [[Definition:Normed Vector Space|normed]] [[Definition:Vector Space|vector space]]. +Then the [[Definition:Mapping|mapping]] $x \mapsto \norm x$ is [[Definition:Continuous Mapping (Metric Spaces)|continuous]]. 
+Here, the [[Definition:Metric|metric]] used is the metric $d$ [[Definition:Metric Induced by Norm|induced]] by $\norm {\,\cdot\,}$.
+\end{theorem}
+
+\begin{proof}
+Since $\norm x = \map d {x, \mathbf 0}$, the result follows directly from [[Distance Function of Metric Space is Continuous]].
+{{qed}}
+[[Category:Norm Theory]]
+[[Category:Continuous Mappings]]
+\end{proof}<|endoftext|>
+\section{P-adic Valuation of Rational Number is Well Defined}
+Tags: P-adic Valuations
+
+\begin{theorem}
+The [[Definition:P-adic Valuation/Rational Numbers|$p$-adic valuation]]:
+: $\nu_p: \Q \to \Z \cup \left\{{+\infty}\right\}$
+is [[Definition:Well-Defined Operation|well defined]].
+\end{theorem}
+
+\begin{proof}
+Let $\dfrac a b = \dfrac c d \in \Q$.
+Thus:
+:$a d = b c \in \Z$
+Then:
+{{begin-eqn}}
+{{eqn | l = \nu_p^\Z \left({a}\right) + \nu_p^\Z \left({d}\right)
+ | r = \nu_p^\Z \left({a d}\right)
+ | c = [[P-adic Valuation on Integers is Valuation]]: [[Definition:Valuation Axioms|Axiom $V1$]]
+}}
+{{eqn | r = \nu_p^\Z \left({b c}\right)
+}}
+{{eqn | r = \nu_p^\Z \left({c}\right) + \nu_p^\Z \left({b}\right)
+ | c = [[P-adic Valuation on Integers is Valuation]]: [[Definition:Valuation Axioms|Axiom $V1$]]
+}}
+{{eqn | n = 1
+ | ll= \leadsto
+ | l = \nu_p^\Z \left({a}\right) - \nu_p^\Z \left({b}\right)
+ | r = \nu_p^\Z \left({c}\right) - \nu_p^\Z \left({d}\right)
+}}
+{{end-eqn}}
+So:
+{{begin-eqn}}
+{{eqn | l = \nu_p^\Q \left({\frac a b}\right)
+ | r = \nu_p^\Z \left({a}\right) - \nu_p^\Z \left({b}\right)
+ | c = {{Defof|P-adic Valuation|$p$-adic Valuation}}
+}}
+{{eqn | r = \nu_p^\Z \left({c}\right) - \nu_p^\Z \left({d}\right)
+ | c = from $(1)$
+}}
+{{eqn | r = \nu_p^\Q \left({\dfrac c d}\right)
+ | c = {{Defof|P-adic Valuation|$p$-adic Valuation}}
+}}
+{{end-eqn}}
+Thus, by definition, $\nu_p: \Q \to \Z \cup \left\{{+\infty}\right\}$ is [[Definition:Well-Defined Operation|well defined]].
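As an informal check of well-definedness (not part of the formal proof), $\nu_p$ can be computed on two representations of the same rational and the values compared. The helper names below are illustrative, not ProofWiki notation:

```python
# Informal check: nu_p(a/b) does not depend on the chosen representation.

def nu_p_int(p, n):
    """p-adic valuation of a nonzero integer: the largest k with p^k dividing n."""
    n = abs(n)
    k = 0
    while n % p == 0:
        n //= p
        k += 1
    return k

def nu_p_rat(p, a, b):
    """nu_p(a/b) := nu_p(a) - nu_p(b), for b nonzero."""
    return nu_p_int(p, a) - nu_p_int(p, b)

# 18/12 and 3/2 are the same rational number
print(nu_p_rat(2, 18, 12), nu_p_rat(2, 3, 2))   # -1 -1
print(nu_p_rat(3, 18, 12), nu_p_rat(3, 3, 2))   # 1 1
```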
+{{qed}}
+[[Category:P-adic Valuations]]
+\end{proof}<|endoftext|>
+\section{P-adic Valuation is Valuation}
+Tags: P-adic Valuations
+
+\begin{theorem}
+The [[Definition:P-adic Valuation|$p$-adic valuation]] $\nu_p: \Q \to \Z \cup \left\{{+\infty}\right\}$ is a [[Definition:Valuation|valuation]] on $\Q$.
+\end{theorem}
+
+\begin{proof}
+To prove that $\nu_p$ is a [[Definition:Valuation|valuation]] it is necessary to demonstrate:
+{{begin-axiom}}
+{{axiom | n = V1
+ | q = \forall q, r \in \Q
+ | ml= \nu_p \left({q r}\right)
+ | mo= =
+ | mr= \nu_p \left({q}\right) + \nu_p \left({r}\right)
+}}
+{{axiom | n = V2
+ | q = \forall q \in \Q
+ | ml= \nu_p \left({q}\right) = +\infty
+ | mo= \iff
+ | mr= q = 0
+}}
+{{axiom | n = V3
+ | q = \forall q, r \in \Q
+ | ml= \nu_p \left({q + r}\right)
+ | mo= \ge
+ | mr= \min \left\{ {\nu_p \left({q}\right), \nu_p \left({r}\right) }\right\}
+}}
+{{end-axiom}}
+Let $q := \dfrac a b, r := \dfrac c d \in \Q$.
+=== Axiom $(V1)$ ===
+{{begin-eqn}}
+{{eqn | l = \nu_p \left({q r}\right)
+ | r = \nu_p \left({\frac a b \cdot \frac c d}\right)
+}}
+{{eqn | r = \nu_p \left({\frac {a c} {b d} }\right)
+ | c = {{Defof|Rational Multiplication}}
+}}
+{{eqn | r = \nu_p^\Z \left({a c}\right) - \nu_p^\Z \left({b d}\right)
+ | c = {{Defof|P-adic Valuation/Rational Numbers|$p$-adic Valuation}}
+}}
+{{eqn | r = \left({\nu_p^\Z \left({a}\right) + \nu_p^\Z \left({c}\right)}\right) - \left({\nu_p^\Z \left({b}\right) + \nu_p^\Z \left({d}\right)}\right)
+ | c = [[P-adic Valuation on Integers is Valuation|$p$-adic Valuation on Integers is Valuation]]: [[Definition:Valuation Axioms|Axiom $V1$]]
+}}
+{{eqn | r = \nu_p^\Z \left({a}\right) - \nu_p^\Z \left({b}\right) + \nu_p^\Z \left({c}\right) - \nu_p^\Z \left({d}\right)
+ | c = [[Integer Addition is Commutative]]
+}}
+{{eqn | r = \nu_p \left({\frac a b}\right) + \nu_p \left({\frac c d}\right)
+ | c = {{Defof|P-adic Valuation|$p$-adic Valuation}}
+}}
+{{eqn | r = \nu_p
\left({q}\right) + \nu_p \left({r}\right) + | c = +}} +{{end-eqn}} +{{qed|lemma}} +=== Axiom $(V2)$ === +{{begin-eqn}} +{{eqn | l = \dfrac a b + | r = 0 +}} +{{eqn | ll= \iff + | l = a + | r = 0 + | c = {{Defof|Rational Number}} +}} +{{eqn | ll= \iff + | l = \nu_p^\Z \left({a}\right) + | r = +\infty + | c = {{Defof|P-adic Valuation|$p$-adic Valuation}} +}} +{{eqn | ll= \iff + | l = \nu_p^\Z \left({a}\right) - \nu_p^\Z \left({b}\right) + | r = +\infty + | c = as $b \ne 0$ +}} +{{eqn | ll= \iff + | l = \nu_p \left({\frac a b}\right) + | r = +\infty + | c = {{Defof|P-adic Valuation|$p$-adic Valuation}} +}} +{{end-eqn}} +{{qed|lemma}} +=== Axiom $(V3)$ === +From [[P-adic Valuation on Integers]] follows that: +{{begin-eqn}} +{{eqn | l = \nu_p \left({\frac a b + \dfrac c d}\right) + | r = \nu_p \left({\frac {a d + b c} {b d} }\right) + | c = {{Defof|Rational Addition}} +}} +{{eqn | r = \nu_p^\Z \left({a d + c b}\right) - \nu_p^\Z \left({b d}\right) + | c = {{Defof|P-adic Valuation|$p$-adic Valuation}} +}} +{{eqn | o = \ge + | r = \min \left\{ {\nu_p^\Z \left({a d}\right), \nu_p^\Z \left({c b}\right)}\right\} - \nu_p^\Z \left({b d}\right) + | c = [[P-adic Valuation on Integers is Valuation|$p$-adic Valuation on Integers is Valuation]]: [[Definition:Valuation Axioms|Axiom $V3$]] +}} +{{eqn | r = \min \left\{ {\nu_p^\Z \left({a}\right) + \nu_p^\Z \left({d}\right), \nu_p^\Z \left({c}\right) + \nu_p^\Z \left({b}\right)}\right\} - \nu_p^\Z \left({b}\right) - \nu_p^\Z \left({d}\right) + | c = [[P-adic Valuation on Integers is Valuation|$p$-adic Valuation on Integers is Valuation]]: [[Definition:Valuation Axioms|Axiom $V1$]] +}} +{{eqn | r = \min \left\{ {\nu_p^\Z \left({a}\right) - \nu_p^\Z \left({b}\right), \nu_p^\Z \left({c}\right) - \nu_p^\Z \left({d}\right)}\right\} +}} +{{eqn | r = \min \left\{ {\nu_p \left({\frac a b}\right), \nu_p \left({\frac c d}\right)}\right\} + | c = {{Defof|P-adic Valuation|$p$-adic Valuation}} +}} +{{end-eqn}} +Hence: +:$\nu_p \left({\dfrac a b + 
\dfrac c d}\right) \ge \min \left\{ {\nu_p \left({\dfrac a b}\right), \nu_p \left({\dfrac c d}\right)}\right\}$ +Thus $\nu_p: \Q \to \Z \cup \left\{ {+\infty}\right\}$ is a [[Definition:Valuation|valuation]] on $\Q$ by definition. +{{qed}} +\end{proof}<|endoftext|> +\section{P-adic Norm not Complete on Rational Numbers} +Tags: Metric Spaces, Normed Spaces, P-adic Number Theory, P-adic Norm not Complete on Rational Numbers + +\begin{theorem} +Let $\norm {\,\cdot\,}_p$ be the [[Definition:P-adic Norm|$p$-adic norm]] on the [[Definition:Rational Numbers|rationals $\Q$]] for some [[Definition:Prime Number|prime]] $p$. +Then: +:the [[Definition:Valued Field|valued field]] $\struct {\Q, \norm {\,\cdot\,}_p}$ is not [[Definition:Complete Normed Division Ring|complete]]. +That is, there exists a [[Definition:Cauchy Sequence in Normed Division Ring|Cauchy sequence]] in $\struct {\Q, \norm{\,\cdot\,}_p}$ which does not [[Definition:Convergent Sequence in Normed Division Ring|converge]] to a [[Definition:Limit of Sequence (Normed Division Ring)|limit]] in $\Q$. +\end{theorem} + +\begin{proof} +=== [[P-adic Norm not Complete on Rational Numbers/Proof 1/Case 1|Case: $p \gt 3$]] === +{{:P-adic Norm not Complete on Rational Numbers/Proof 1/Case 1}}{{qed|lemma}} +=== [[P-adic Norm not Complete on Rational Numbers/Proof 1/Case 2|Case: $p = 2$ or $3$]] === +{{:P-adic Norm not Complete on Rational Numbers/Proof 1/Case 2}}{{qed}} +\end{proof} + +\begin{proof} +[[Hensel's Lemma/First Form|Hensel's Lemma]] is used to prove the existence of a [[Definition:Cauchy Sequence in Normed Division Ring|Cauchy sequence]] that does not [[Definition:Convergent Sequence in Normed Division Ring|converge]]. 
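As an illustrative aside (not part of this page's formal content), the Hensel lifting invoked in this proof can be carried out numerically. The sketch below uses the hypothetical concrete choices $p = 5$, $k = 2$, $x_1 = 4$ and $a = x_1^k + p = 21$, which satisfy the conditions $p \nmid x_1$, $x_1 \ge \dfrac {p + 1} 2$, $k \ge 2$ and $p \nmid k$ assumed in the construction:

```python
# Illustrative sketch only: Hensel lifting for f(X) = X^2 - a, with the
# hypothetical parameters p = 5, x_1 = 4, a = x_1^2 + p = 21.
def hensel_sequence(a, p, x1, terms):
    """Lift a simple root of f(X) = X^2 - a through successive powers of p."""
    xs = [x1 % p]
    for n in range(1, terms):
        x = xs[-1]
        mod = p ** (n + 1)
        fx = x * x - a                   # f(x_n)
        inv = pow(2 * x, -1, mod)        # f'(x_n)^(-1) mod p^(n+1), needs Python 3.8+
        xs.append((x - fx * inv) % mod)  # Newton step: x_(n+1) = x_n - f(x_n)/f'(x_n)
    return xs

p, a = 5, 21
xs = hensel_sequence(a, p, 4, 6)
# condition (1): f(x_n) is divisible by p^n
assert all((x * x - a) % p ** (n + 1) == 0 for n, x in enumerate(xs))
# condition (2): x_(n+1) - x_n is divisible by p^n,
# which is what makes the sequence Cauchy in the p-adic norm
assert all((xs[n + 1] - xs[n]) % p ** (n + 1) == 0 for n in range(len(xs) - 1))
```

Each term is produced by one Newton step modulo the next power of $p$, which is exactly the successive-approximation mechanism behind Hensel's Lemma; since $21$ is not a square in $\Z$, the sequence has no rational limit.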
+==== [[P-adic Norm not Complete on Rational Numbers/Proof 2/Lemma 1|Lemma 1]] ==== +{{:P-adic Norm not Complete on Rational Numbers/Proof 2/Lemma 1}} +Let $x_1 \in \Z_{>0}: p \nmid x_1, x_1 \ge \dfrac {p + 1} 2$ +Let $k$ be a [[Definition:Positive Integer|positive integer]] such that $k \ge 2, p \nmid k$. +Let $a = x_1^k + p$. +==== [[P-adic Norm not Complete on Rational Numbers/Proof 2/Lemma 2|Lemma 2]] ==== +{{:P-adic Norm not Complete on Rational Numbers/Proof 2/Lemma 2}} +Let $\map f X \in \Z \sqbrk X$ be the [[Definition:Polynomial (Abstract Algebra)|polynomial]]: +:$X^k - a$ +==== [[P-adic Norm not Complete on Rational Numbers/Proof 2/Lemma 3|Lemma 3]] ==== +{{:P-adic Norm not Complete on Rational Numbers/Proof 2/Lemma 3}} +Let $\map {f'} X \in \Z \sqbrk X$ be the [[Definition:Formal Derivative of Polynomial|formal derivative]] of $\map f X$. +==== [[P-adic Norm not Complete on Rational Numbers/Proof 2/Lemma 4|Lemma 4]] ==== +{{:P-adic Norm not Complete on Rational Numbers/Proof 2/Lemma 4}} +By [[Hensel's Lemma/First Form|Hensel's Lemma]] there exists a [[Definition:Sequence|sequence]] of [[Definition:Integer|integers]] $\sequence {x_n}$ such that: +:$(1) \quad \forall n: \map f {x_n} \equiv 0 \pmod {p^n}$ +:$(2) \quad \forall n: x_{n + 1} \equiv x_n \pmod {p^n}$ +==== [[P-adic Norm not Complete on Rational Numbers/Proof 2/Lemma 5|Lemma 5]] ==== +{{:P-adic Norm not Complete on Rational Numbers/Proof 2/Lemma 5}} +By [[Characterisation of Cauchy Sequence in Non-Archimedean Norm/Corollary 1|corollary of Characterisation of Cauchy Sequence in Non-Archimedean Norm]] then: +:$\sequence {x_n}$ is a [[Definition:Cauchy Sequence in Normed Division Ring|Cauchy sequence]] in $\struct {\Q, \norm {\,\cdot\,}_p}$ +{{AimForCont}} $\sequence {x_n}$ is a [[Definition:Sequence|sequence]] such that for some $c \in \Q$: +:$\displaystyle \lim_{n \mathop \to \infty} x_n = c$ +in $\struct {\Q, \norm {\,\cdot\,}_p}$ +By [[Combination Theorem for Sequences/Normed Division 
Ring/Product Rule|product rule for convergent sequences]] then: +:$\displaystyle \lim_{n \mathop \to \infty} x_n^k = c^k$ +Hence: +:$c^k = a \in \Z$. +By [[Nth Root of Integer is Integer or Irrational]] then: +:$c \in \Z$ +This [[Definition:Contradiction|contradicts]] [[P-adic Norm not Complete on Rational Numbers/Proof 2/Lemma 2|Lemma 2]]. +So the [[Definition:Sequence|sequence]] $\sequence {x_n}$ does not [[Definition:Convergent Sequence in Normed Division Ring|converge]] in $\struct {\Q, \norm{\,\cdot\,}_p}$. +The result follows. +{{qed}} +\end{proof} + +\begin{proof} +By [[Rational Numbers are Countably Infinite]], the [[Definition:Set|set]] of [[Definition:Rational Number|rational numbers]] is [[Definition:Countably Infinite Set|countably infinite]]. +By [[P-adic Numbers are Uncountable|P-adic Numbers are Uncountable]], the [[Definition:Set|set]] of [[Definition:P-adic Number|$p$-adic numbers $\Q_p$]] is [[Definition:Uncountable Set|uncountably infinite]]. +Let $\CC$ be the [[Definition:Ring of Cauchy Sequences|commutative ring of Cauchy sequences]] over $\struct {\Q, \norm {\,\cdot\,}_p}$. +Let $\NN$ be the [[Definition:Set|set]] of [[Definition:Null Sequence|null sequences]] in $\struct {\Q, \norm {\,\cdot\,}_p}$. +The [[Definition:P-adic Number/Quotient of Cauchy Sequences in P-adic Norm|$p$-adic numbers]] $\Q_p$ is the [[Definition:Quotient Ring|quotient ring]] $\CC \, \big / \NN$ by definition. +By [[Embedding Division Ring into Quotient Ring of Cauchy Sequences]], the mapping $\phi: \Q \to \Q_p$ defined by: +:$\map \phi r = \sequence {r, r, r, \dotsc} + \NN$ +where $\sequence {r, r, r, \dotsc} + \NN$ is the [[Definition:Left Coset|left coset]] in $\CC \, \big / \NN$ that contains the constant [[Definition:Sequence|sequence]] $\sequence {r, r, r, \dotsc}$, is a [[Definition:Distance-Preserving Mapping|distance-preserving]] [[Definition:Ring Monomorphism|monomorphism]]. 
+By [[Surjection from Natural Numbers iff Countable/Corollary 2|Corollary to Surjection from Natural Numbers iff Countable]] then $\phi$ is not a [[Definition:Surjection|surjection]]. +Hence: +:$\exists \sequence {x_n} \in \CC: \sequence {x_n} + \NN \not \in \map \phi \Q$ +By [[Cauchy Sequence Converges Iff Equivalent to Constant Sequence]] then $\sequence {x_n}$ is not [[Definition:Convergent Sequence in Normed Division Ring|convergent]] in $\struct {\Q, \norm {\,\cdot\,}_p}$. +The result follows. +{{qed}} +\end{proof}<|endoftext|> +\section{P-adic Valuation on Integers is Valuation} +Tags: P-adic Valuations + +\begin{theorem} +Let $\nu_p^\Z: \Z \to \Z \cup \set {+\infty}$ be the [[Definition:P-adic Valuation/Integers|$p$-adic valuation restricted to the integers]]. +Then $\nu_p^\Z$ is a [[Definition:Valuation|valuation]]. +\end{theorem} + +\begin{proof} +To prove that $\nu_p^\Z$ is a [[Definition:Valuation|valuation]] it is necessary to demonstrate: +{{begin-axiom}} +{{axiom | n = \text V 1 + | q = \forall m, n \in \Z + | ml= \map {\nu_p^\Z} {m n} + | mo= = + | mr= \map {\nu_p^\Z} m + \map {\nu_p^\Z} n +}} +{{axiom | n = \text V 2 + | q = \forall n \in \Z + | ml= \map {\nu_p^\Z} n = +\infty + | mo= \iff + | mr= n = 0 +}} +{{axiom | n = \text V 3 + | q = \forall m, n \in \Z + | ml= \map {\nu_p^\Z} {m + n} + | mo= \ge + | mr= \min \set {\map {\nu_p^\Z} m, \map {\nu_p^\Z} n} +}} +{{end-axiom}} +=== Axiom $(\text V 1)$ === +Let $m, n \in \Z$. +Let $m = 0$ or $n = 0$. +Then: +:$\map {\nu_p^\Z} m = +\infty$ +or: +:$\map {\nu_p^\Z} n = +\infty$ +Also: +:$n m = 0$ +and hence: +{{begin-eqn}} +{{eqn | l = \map {\nu_p^\Z} {n m} + | r = +\infty + | c = +}} +{{eqn | r = \map {\nu_p^\Z} n + \map {\nu_p^\Z} m + | c = +}} +{{end-eqn}} +Let $n m \ne 0$. 
+Then by definition of the [[Definition:P-adic Valuation/Integers|restricted $p$-adic valuation]]:
+:$p^{\map {\nu_p^\Z} n} \divides n$
+:$p^{\map {\nu_p^\Z} n + 1} \nmid n$
+Also:
+:$p^{\map {\nu_p^\Z} m} \divides m$
+:$p^{\map {\nu_p^\Z} m + 1} \nmid m$
+Hence:
+:$p^{\map {\nu_p^\Z} n + \map {\nu_p^\Z} m} \divides n m$
+:$p^{\map {\nu_p^\Z} n + \map {\nu_p^\Z} m + 1} \nmid n m$
+So:
+:$\map {\nu_p^\Z} {n m} = \map {\nu_p^\Z} n + \map {\nu_p^\Z} m$
+{{qed|lemma}}
+=== Axiom $(\text V 2)$ ===
+By definition of the [[Definition:P-adic Valuation/Integers|restricted $p$-adic valuation]]:
+:$\forall n \in \Z: \map {\nu_p^\Z} n = +\infty \iff n = 0$
+{{qed|lemma}}
+=== Axiom $(\text V 3)$ ===
+Let $m, n \in \Z$.
+{{WLOG}} let:
+:$p^\alpha \divides n$
+:$p^\beta \divides m$
+where $\alpha \ge \beta$.
+Then $\exists t \in \Z, k \in \Z$ such that:
+{{begin-eqn}}
+{{eqn | l = n + m
+ | r = p^\alpha k + p^\beta t
+}}
+{{eqn | r = p^\beta \paren {p^{\alpha - \beta} k + t}
+}}
+{{end-eqn}}
+Thus:
+:$p^\beta \divides \paren {m + n}$
+Hence by the [[Definition:P-adic Valuation/Integers|definition of $\nu_p^\Z$]]:
+:$\map {\nu_p^\Z} {m + n} \ge \min \set {\map {\nu_p^\Z} m, \map {\nu_p^\Z} n}$
+{{qed}}
+[[Category:P-adic Valuations]]
+\end{proof}<|endoftext|>
+\section{Null Measure is Measure}
+Tags: Measure Theory
+
+\begin{theorem}
+Let $\struct {X, \Sigma}$ be a [[Definition:Measurable Space|measurable space]].
+Then the [[Definition:Null Measure|null measure]] $\mu$ on $\struct {X, \Sigma}$ is a [[Definition:Measure (Measure Theory)|measure]].
+\end{theorem}
+
+\begin{proof}
+Let us verify the [[Definition:Measure (Measure Theory)|measure axioms]] $(1)$, $(2)$ and $(3')$ for $\mu$.
+=== Proof of $(1)$ ===
+Let $S \in \Sigma$.
+Then $\map \mu S = 0 \ge 0$.
+{{qed|lemma}}
+=== Proof of $(2)$ ===
+It is to be shown that (for a [[Definition:Sequence|sequence]] $\sequence {S_n}_{n \in \N}$ of [[Definition:Pairwise Disjoint|pairwise disjoint sets]]):
+:$\displaystyle \sum_{n \mathop = 1}^\infty \map \mu {S_n} = \map \mu {\bigcup_{n \mathop = 1}^\infty S_n}$
+Now [[Definition:Null Measure|by definition]] of $\mu$:
+:$\displaystyle \map \mu {S_n} = \map \mu {\bigcup_{n \mathop = 1}^\infty S_n} = 0$
+Thus, the desired equation becomes:
+:$\displaystyle \sum_{n \mathop = 1}^\infty 0 = 0$
+which trivially holds.
+{{qed|lemma}}
+=== Proof of $(3')$ ===
+Note that $\O \in \Sigma$ as $\Sigma$ is a [[Definition:Sigma-Algebra|$\sigma$-algebra]].
+Hence $\map \mu \O = 0$.
+{{qed|lemma}}
+The axioms are fulfilled, and it follows that $\mu$ is a [[Definition:Measure (Measure Theory)|measure]].
+{{qed}}
+[[Category:Measure Theory]]
+\end{proof}<|endoftext|>
+\section{Infinite Measure is Measure}
+Tags: Measure Theory
+
+\begin{theorem}
+Let $\struct {X, \Sigma}$ be a [[Definition:Measurable Space|measurable space]].
+Then the [[Definition:Infinite Measure|infinite measure]] $\mu$ on $\struct {X, \Sigma}$ is a [[Definition:Measure (Measure Theory)|measure]].
+\end{theorem}
+
+\begin{proof}
+Let us verify the [[Definition:Measure (Measure Theory)|measure axioms]] $(1)$, $(2)$ and $(3')$ for $\mu$.
+=== Proof of $(3')$ ===
+We have [[Definition:Infinite Measure|by definition]] $\map \mu \O = 0$.
+{{qed|lemma}}
+=== Proof of $(1)$ ===
+Let $S \in \Sigma$.
+Then:
+:$\map \mu S = + \infty > 0$ for $S \ne \O$
+:$\map \mu S = 0 \ge 0$ for $S = \O$
+So $\map \mu S \ge 0$ for all $S \in \Sigma$.
+{{qed|lemma}}
+=== Proof of $(2)$ ===
+It is to be shown that (for a [[Definition:Sequence|sequence]] $\sequence {S_n}_{n \in \N}$ of [[Definition:Pairwise Disjoint|pairwise disjoint sets]]):
+:$\displaystyle \sum_{n \mathop = 1}^\infty \map \mu {S_n} = \map \mu {\bigcup_{n \mathop = 1}^\infty S_n}$
+Suppose $S_n = \O$ for all $n \in \N$.
+Then [[Definition:Infinite Measure|by definition]] of $\mu$:
+:$\displaystyle \map \mu {S_n} = \map \mu {\bigcup_{n \mathop = 1}^\infty S_n} = \map \mu \O = 0$
+Thus, the desired equation becomes:
+:$\displaystyle \sum_{n \mathop = 1}^\infty 0 = 0$
+which trivially holds.
+Suppose $S_n \ne \O$ for some $n \in \N$.
+Then $S_n \subseteq \displaystyle \bigcup_{n \mathop = 1}^\infty S_n \ne \O$.
+Hence:
+:$\displaystyle \map \mu {S_n} = \map \mu {\bigcup_{n \mathop = 1}^\infty S_n} = + \infty$
+and so, as all terms of the sum are non-negative:
+:$\displaystyle \sum_{n \mathop = 1}^\infty \map \mu {S_n} = + \infty = \map \mu {\bigcup_{n \mathop = 1}^\infty S_n}$
+Thus $\displaystyle \sum_{n \mathop = 1}^\infty \map \mu {S_n} = \map \mu {\bigcup_{n \mathop = 1}^\infty S_n}$.
+{{qed|lemma}}
+The axioms are fulfilled, and it follows that $\mu$ is a [[Definition:Measure (Measure Theory)|measure]].
+{{qed}}
+[[Category:Measure Theory]]
+\end{proof}<|endoftext|>
+\section{Linear Combination of Measures}
+Tags: Measure Theory
+
+\begin{theorem}
+Let $\left({X, \Sigma}\right)$ be a [[Definition:Measurable Space|measurable space]].
+Let $\mu, \nu$ be [[Definition:Measure (Measure Theory)|measures]] on $\left({X, \Sigma}\right)$.
+Then for all [[Definition:Positive Real Number|positive real numbers]] $a, b \in \R_{\ge 0}$, the [[Definition:Pointwise Addition|pointwise sum]]:
+:$a \mu + b \nu: \Sigma \to \overline \R, \ \left({a \mu + b \nu}\right) \left({E}\right) := a \mu \left({E}\right) + b \nu \left({E}\right)$
+is also a [[Definition:Measure (Measure Theory)|measure]] on $\left({X, \Sigma}\right)$.
+\end{theorem} + +\begin{proof} +Verifying the axioms $(1)$, $(2)$ and $(3')$ for a [[Definition:Measure (Measure Theory)|measure]] in turn: +=== Axiom $(1)$ === +The statement of axiom $(1)$ for $a \mu + b \nu$ is: +:$\forall E \in \Sigma: \left({a \mu + b \nu}\right) \left({E}\right) \ge 0$ +Let $E \in \Sigma$. +Then $\mu \left({E}\right), \nu \left({E}\right) \ge 0$ as $\mu$ and $\nu$ are [[Definition:Measure (Measure Theory)|measures]]. +Hence, $a \mu \left({E}\right) \ge 0$ as $a \ge 0$. +Also, $b \nu \left({E}\right) \ge 0$ since $b \ge 0$. +Therefore it follows that: +:$a \mu \left({E}\right) + b \nu \left({E}\right) \ge 0$ +as desired. +{{qed|lemma}} +=== Axiom $(2)$ === +Let $\left({E_n}\right)_{n \in \N}$ be a [[Definition:Sequence|sequence]] of [[Definition:Pairwise Disjoint|pairwise disjoint]] sets in $\Sigma$. +The statement of axiom $(2)$ for $a \mu + b \nu$ is: +:$\displaystyle \left({a \mu + b \nu}\right) \left({\bigcup_{n \mathop \in \N} E_n}\right) = \sum_{n \mathop \in \N} \left({a \mu + b \nu}\right) \left({E_n}\right)$ +So let us do a direct computation: +{{begin-eqn}} +{{eqn | l = \left({a \mu + b \nu}\right) \left({\bigcup_{n \mathop \in \N} E_n}\right) + | r = a \mu \left({\bigcup_{n \mathop \in \N} E_n}\right) + b \nu \left({\bigcup_{n \mathop \in \N} E_n}\right) + | c = {{Defof|Pointwise Addition}} +}} +{{eqn | r = a \sum_{n \mathop \in \N} \mu \left({E_n}\right) + b \sum_{n \mathop \in \N} \nu \left({E_n}\right) + | c = $\mu$ and $\nu$ are [[Definition:Measure (Measure Theory)|measures]] and satisfy $(2)$ +}} +{{eqn | r = \sum_{n \mathop \in \N} a \mu \left({E_n}\right) + b \nu \left({E_n}\right) + | c = [[Combined Sum Rule for Real Sequences]] +}} +{{eqn | r = \sum_{n \mathop \in \N} \left({a \mu + b \nu}\right) \left({E_n}\right) +}} +{{end-eqn}} +which establishes $a \mu + b \nu$ satisfies $(2)$. 
+{{qed|lemma}} +{{handwaving|glosses over $+\infty$ intricacies, but that can be dealt with more generally for monotone sequences in $\overline{\R}$}} +=== Axiom $(3')$ === +The statement of axiom $(3')$ for $a \mu + b \nu$ is: +:$\left({a \mu + b \nu}\right) \left({\varnothing}\right) = 0$ +This is verified by the following: +{{begin-eqn}} +{{eqn | l = \left({a \mu + b \nu}\right) \left({\varnothing}\right) + | r = a \mu \left({\varnothing}\right) + b \nu \left({\varnothing}\right) + | c = {{Defof|Pointwise Addition}} +}} +{{eqn | r = a \cdot 0 + b \cdot 0 + | c = $\mu$ and $\nu$ are [[Definition:Measure (Measure Theory)|measures]] and satisfy $(3')$ +}} +{{eqn | r = 0 +}} +{{end-eqn}} +Thus, $a \mu + b \nu$ satisfies $(3')$. +{{qed|lemma}} +Having verified an appropriate set of axioms, it follows that $a \mu + b \nu$ is a [[Definition:Measure (Measure Theory)|measure]]. +{{qed}} +\end{proof}<|endoftext|> +\section{Series of Measures is Measure} +Tags: Measure Theory + +\begin{theorem} +Let $\left({X, \Sigma}\right)$ be a [[Definition:Measurable Space|measurable space]]. +Let $\left({\mu_n}\right)_{n \mathop \in \N}$ be a [[Definition:Sequence|sequence]] of [[Definition:Measure (Measure Theory)|measures]] on $\left({X, \Sigma}\right)$. +Let $\left({a_n}\right)_{n \mathop \in \N} \subseteq \R_{\ge 0}$ be a [[Definition:Sequence|sequence]] of [[Definition:Positive Real Number|positive real numbers]]. +Then the [[Definition:Series of Measures|series of measures]] $\mu: \Sigma \to \overline{\R}$, defined by: +:$\displaystyle \mu \left({E}\right) := \sum_{n \mathop \in \N} a_n \mu_n \left({E}\right)$ +is also a [[Definition:Measure (Measure Theory)|measure]] on $\left({X, \Sigma}\right)$. +\end{theorem} + +\begin{proof} +Let us verify the conditions for a [[Definition:Measure (Measure Theory)|measure]] in turn. +Let $E \in \Sigma$. +Then for all $n \in \N$, $\mu_n \left({E}\right) \ge 0$ and so $a_n \mu_n \left({E}\right) \ge 0$. 
+Therefore, by [[Series of Positive Real Numbers has Positive Limit]]: +:$\displaystyle \mu \left({E}\right) = \sum_{n \mathop \in \N} a_n \mu_n \left({E}\right) \ge 0$ +For every $n \in \N$, also $\mu_n \left({\varnothing}\right) = 0$. +Therefore, it immediately follows that: +:$\displaystyle \mu \left({\varnothing}\right) = \sum_{n \mathop \in \N} a_n \mu_n \left({\varnothing}\right) = \sum_{n \mathop \in \N} 0 = 0$ +Finally, let $\left({E_n}\right)_{n \mathop \in \N}$ be a [[Definition:Sequence|sequence]] of [[Definition:Pairwise Disjoint|pairwise disjoint sets]] in $\Sigma$. +Then: +{{begin-eqn}} +{{eqn | l = \mu \left({\bigcup_{m \mathop \in \N} E_m}\right) + | r = \sum_{n \mathop \in \N} a_n \mu_n \left({\bigcup_{m \mathop \in \N} E_m}\right) + | c = Definition of $\mu$ +}} +{{eqn | r = \sum_{n \mathop \in \N} a_n \left({\sum_{m \mathop \in \N} \mu_n \left({E_m}\right)}\right) + | c = The $\mu_n$ are [[Definition:Measure (Measure Theory)|measures]] +}} +{{eqn | r = \sum_{n \mathop \in \N} \sum_{m \mathop \in \N} a_n \mu_n \left({E_m}\right) +}} +{{eqn | r = \sum_{m \mathop \in \N} \sum_{n \mathop \in \N} a_n \mu_n \left({E_m}\right) + | c = [[Double Series of Positive Real Numbers]] +}} +{{eqn | r = \sum_{m \mathop \in \N} \mu \left({E_m}\right) + | c = Definition of $\mu$ +}} +{{end-eqn}} +Therefore, having verified all three axioms, $\mu$ is a [[Definition:Measure (Measure Theory)|measure]]. +{{qed}} +\end{proof}<|endoftext|> +\section{Relation Induced by Quotient Set is Equivalence} +Tags: Quotient Sets, Equivalence Relations + +\begin{theorem} +Let $S$ be a [[Definition:Set|set]]. +Let $\mathcal R$ be an [[Definition:Equivalence Relation|equivalence relation]] on $S$. +Let $S / \mathcal R$ be the [[Definition:Quotient Set|quotient set]] of $S$ determined by $\mathcal R$. +Let $\mathcal R'$ be the [[Definition:Relation Induced by Partition|relation induced by $S / \mathcal R$]] on $S$. +Then $\mathcal R' = \mathcal R$. 
+\end{theorem} + +\begin{proof} +Let $\mathcal R$ be an [[Definition:Equivalence Relation|equivalence relation]] on $S$. +Let $\left({x, y}\right) \in \mathcal R$. +By definition of [[Definition:Equivalence Class|equivalence class]], $y \in \left[\!\left[{x}\right]\!\right]_\mathcal R$ and $x \in \left[\!\left[{x}\right]\!\right]_\mathcal R$. +By definition of [[Definition:Quotient Set|quotient set]], $x$ and $y$ both belong to the same [[Definition:Element|element]] of $S / \mathcal R$. +So, by definition of $\mathcal R'$, it follows that $\left({x, y}\right) \in \mathcal R'$. +That is: +:$\left({x, y}\right) \in \mathcal R \implies \left({x, y}\right) \in \mathcal R'$ +and so by definition of [[Definition:Subset|subset]]: +:$\mathcal R \subseteq \mathcal R'$ +Now let $\left({x, y}\right) \in \mathcal R'$. +Then $y$ belongs to the same [[Definition:Element|element]] of $S / \mathcal R$ that $x$ does. +That is: +:$y \in \left[\!\left[{x}\right]\!\right]_\mathcal R$ +and so $\left({x, y}\right) \in \mathcal R$. +That is: +:$\left({x, y}\right) \in \mathcal R' \implies \left({x, y}\right) \in \mathcal R$ +and so by definition of [[Definition:Subset|subset]]: +:$\mathcal R' \subseteq \mathcal R$ +The result follows by definition of [[Definition:Set Equality/Definition 2|set equality]]. +{{qed}} +\end{proof}<|endoftext|> +\section{Quotient Set Determined by Relation Induced by Partition is That Partition} +Tags: Quotient Sets, Equivalence Relations + +\begin{theorem} +Let $S$ be a [[Definition:Set|set]]. +Let $\mathcal P$ be a [[Definition:Partition (Set Theory)|partition]] of $S$. +Let $\mathcal R$ be the [[Definition:Relation Induced by Partition|relation induced by $\mathcal P$]]. +Then the [[Definition:Quotient Set|quotient set]] $S / \mathcal R$ of $S$ is $\mathcal P$ itself. +\end{theorem} + +\begin{proof} +Let $P \subseteq S$ such that $P \in \mathcal P$. +Let $x \in P$. 
+Then:
+{{begin-eqn}}
+{{eqn | l=y
+ | o=\in
+ | r=\left[\!\left[{x}\right]\!\right]_\mathcal R
+ | c=
+}}
+{{eqn | ll=\iff
+ | l=\left({x, y}\right)
+ | o=\in
+ | r=\mathcal R
+ | c=by definition of [[Definition:Equivalence Class|equivalence class]]
+}}
+{{eqn | ll=\iff
+ | l=y
+ | o=\in
+ | r=P
+ | c=by definition of [[Definition:Relation Induced by Partition|relation induced by $\mathcal P$]]
+}}
+{{end-eqn}}
+Therefore:
+: $P = \left[\!\left[{x}\right]\!\right]_\mathcal R$
+and so:
+: $P \in S / \mathcal R$
+and so:
+: $\mathcal P \subseteq S / \mathcal R$
+Now let $x \in S$.
+As $\mathcal P$ is a [[Definition:Partition (Set Theory)|partition]]:
+:$\exists P \in \mathcal P: x \in P$
+Then:
+{{begin-eqn}}
+{{eqn | l=y
+ | o=\in
+ | r=P
+ | c=
+}}
+{{eqn | ll=\iff
+ | l=\left({x, y}\right)
+ | o=\in
+ | r=\mathcal R
+ | c=by definition of [[Definition:Relation Induced by Partition|relation induced by $\mathcal P$]]
+}}
+{{eqn | ll=\iff
+ | l=y
+ | o=\in
+ | r=\left[\!\left[{x}\right]\!\right]_\mathcal R
+ | c=by definition of [[Definition:Equivalence Class|equivalence class]]
+}}
+{{end-eqn}}
+Therefore:
+: $\left[\!\left[{x}\right]\!\right]_\mathcal R = P$
+and so:
+: $\left[\!\left[{x}\right]\!\right]_\mathcal R \in \mathcal P$
+That is:
+: $S / \mathcal R \subseteq \mathcal P$
+It follows by definition of [[Definition:Set Equality/Definition 2|set equality]] that:
+:$S / \mathcal R = \mathcal P$
+Hence the result.
+{{qed}}
+\end{proof}<|endoftext|>
+\section{Null Sets Closed under Subset}
+Tags: Measure Theory
+
+\begin{theorem}
+Let $\left({X, \Sigma, \mu}\right)$ be a [[Definition:Measure Space|measure space]].
+Let $N \in \Sigma$ be a [[Definition:Null Set|$\mu$-null set]], and let $M \in \Sigma$ be a [[Definition:Subset|subset]] of $N$.
+Then $M$ is also a [[Definition:Null Set|$\mu$-null set]].
+\end{theorem} + +\begin{proof} +As $\mu$ is a [[Definition:Measure (Measure Theory)|measure]], $\mu \left({M}\right) \ge 0$. +Also, by [[Measure is Monotone]], $\mu \left({M}\right) \le \mu \left({N}\right) = 0$. +Hence $\mu \left({M}\right) = 0$, and $M$ is a [[Definition:Null Set|$\mu$-null set]]. +{{qed}} +\end{proof}<|endoftext|> +\section{Null Sets Closed under Countable Union} +Tags: Measure Theory + +\begin{theorem} +Let $\left({X, \Sigma, \mu}\right)$ be a [[Definition:Measure Space|measure space]]. +Let $\left({N_n}\right)_{n \in \N}$ be a [[Definition:Sequence|sequence]] of [[Definition:Null Set|$\mu$-null sets]]. +Then $N := \displaystyle \bigcup_{n \mathop \in \N} N_n$ is also a [[Definition:Null Set|$\mu$-null set]]. +\end{theorem} + +\begin{proof} +As $\mu$ is a [[Definition:Measure (Measure Theory)|measure]], $\mu \left({N}\right) \ge 0$. Also: +{{begin-eqn}} +{{eqn|l = \mu \left({N}\right) + |o = \le + |r = \sum_{n \mathop \in \N} \mu \left({N_n}\right) + |c = [[Measure is Countably Subadditive]] +}} +{{eqn|r = \sum_{n \mathop \in \N} 0 + |c = The $N_n$ are [[Definition:Null Set|$\mu$-null sets]] +}} +{{eqn|r = 0 +}} +{{end-eqn}} +Hence necessarily $\mu \left({N}\right) = 0$, and $N$ is a [[Definition:Null Set|$\mu$-null set]]. +{{qed}} +\end{proof}<|endoftext|> +\section{Absolute Value induces Equivalence Compatible with Integer Multiplication} +Tags: Congruence Relations, Integer Multiplication, Absolute Value Function + +\begin{theorem} +Let $\Z$ be the [[Definition:Integer|set of integers]]. +Let $\RR$ be the [[Definition:Relation|relation]] on $\Z$ defined as: +:$\forall x, y \in \Z: \struct {x, y} \in \RR \iff \size x = \size y$ +where $\size x$ denotes the [[Definition:Absolute Value|absolute value]] of $x$. +Then $\RR$ is a [[Definition:Congruence Relation|congruence relation]] for [[Definition:Integer Multiplication|integer multiplication]]. 
+\end{theorem} + +\begin{proof} +From [[Absolute Value Function on Integers induces Equivalence Relation]], $\RR$ is an [[Definition:Equivalence Relation|equivalence relation]]. +Let: +:$\size {x_1} = \size {x_2}$ +:$\size {y_1} = \size {y_2}$ +Then by definition of [[Definition:Absolute Value|absolute value]]: +{{begin-eqn}} +{{eqn | l = \size {x_1 y_1} + | r = \size {x_1} \size {y_1} + | c = +}} +{{eqn | r = \size {x_2} \size {y_2} + | c = +}} +{{eqn | r = \size {x_2 y_2} + | c = +}} +{{end-eqn}} +That is: +:$\paren {x_1 y_1, x_2 y_2} \in \RR$ +That is, $\RR$ is a [[Definition:Congruence Relation|congruence relation]] for [[Definition:Integer Multiplication|integer multiplication]]. +{{qed}} +\end{proof}<|endoftext|> +\section{Completion Theorem (Measure Spaces)} +Tags: Measure Theory + +\begin{theorem} +Let $\left({X, \Sigma, \mu}\right)$ be a [[Definition:Measure Space|measure space]]. +Then there exists a [[Definition:Completion (Measure Space)|completion]] $\left({X, \Sigma^*, \bar \mu}\right)$ of $\left({X, \Sigma, \mu}\right)$. +\end{theorem} + +\begin{proof} +We give an explicit construction of $\left({X, \Sigma^*, \bar \mu}\right)$. +To this end, define $\mathcal N$ to be the collection of [[Definition:Subset|subsets]] of [[Definition:Null Set|$\mu$-null sets]]: +:$\mathcal N := \left\{{N \subseteq X: \exists M \in \Sigma: \mu \left({M}\right) = 0, N \subseteq M}\right\}$ +Now, we define: +:$\Sigma^* := \left\{{E \cup N: E \in \Sigma, N \in \mathcal N}\right\}$ +and assert $\Sigma^*$ is a [[Definition:Sigma-Algebra|$\sigma$-algebra]]. +By [[Empty Set is Null Set]], $\varnothing \in \mathcal N$, and thus by [[Union with Empty Set]]: +:$\forall E \in \Sigma: E \cup \varnothing = E \in \Sigma^*$ +that is to say, $\Sigma \subseteq \Sigma^*$. +As a consequence, $X \in \Sigma^*$. +Now, suppose that $E \cup N \in \Sigma^*$, and $N \subseteq M, M \in \Sigma$. 
Then: +{{begin-eqn}} +{{eqn | l = X \setminus \left({E \cup N}\right) + | r = \left({X \setminus E}\right) \cap \left({X \setminus N}\right) + | c = [[De Morgan's Laws (Set Theory)/Set Difference/Difference with Union|De Morgan's Laws: Difference with Union]] +}} +{{eqn | r = \left({X \setminus E}\right) \cap \left({\left({X \setminus M}\right) \cup \left({M \setminus N}\right)}\right) + | c = [[Union of Relative Complements of Nested Subsets]] +}} +{{eqn | r = \left({\left({X \setminus E}\right) \cap \left({X \setminus M}\right)}\right) \cup \left({\left({X \setminus E}\right) \cap \left({M \setminus N}\right)}\right) + | c = [[Intersection Distributes over Union]] +}} +{{eqn|o =}} +{{end-eqn}} +{{begin-eqn}} +{{eqn | l = \left({\left({X \setminus E}\right) \cap \left({X \setminus M}\right)}\right) + | o = \in + | r = \Sigma + | c = $E, M \in \Sigma$, [[Sigma-Algebra Closed under Intersection]] +}} +{{eqn | l = \left({X \setminus E}\right) \cap \left({M \setminus N}\right) + | o = \subseteq + | r = M + | c = [[Set Difference is Subset]], [[Set Intersection Preserves Subsets]] +}} +{{eqn | ll= \implies + | l = X \setminus \left({E \cup N}\right) + | o = \in + | r = \Sigma^* +}} +{{end-eqn}} +Finally, let $\left({E_n}\right)_{n \in \N}$ and $\left({N_n}\right)_{n \in \N}$ be [[Definition:Sequence|sequences]] in $\Sigma$ and $\mathcal N$, respectively. 
+Let $\left({M_n}\right)_{n \in \N}$ be a [[Definition:Sequence|sequence]] of [[Definition:Null Set|$\mu$-null sets]] such that: +:$\forall n \in \N: N_n \subseteq M_n$ +Then, compute: +{{begin-eqn}} +{{eqn | l = \bigcup_{n \mathop \in \N} \left({E_n \cup N_n}\right) + | r = \left({\bigcup_{n \mathop \in \N} E_n}\right) \cup \left({\bigcup_{n \mathop \in \N} N_n}\right) + | c = [[Union Distributes over Union/Families of Sets|Union Distributes over Union: Families of Sets]] +}} +{{eqn | l = \bigcup_{n \mathop \in \N} N_n + | o = \subseteq + | r = \bigcup_{n \mathop \in \N} M_n + | c = [[Set Union Preserves Subsets]] +}} +{{end-eqn}} +From [[Null Sets Closed under Countable Union]], also: +:$\displaystyle \mu \left({\bigcup_{n \mathop \in \N} M_n}\right) = 0$ +hence it follows that: +:$\displaystyle \bigcup_{n \mathop \in \N} N_n \in \mathcal N$ +Next, as $\Sigma$ is a [[Definition:Sigma-Algebra|$\sigma$-algebra]], it follows that: +:$\displaystyle \bigcup_{n \mathop \in \N} E_n \in \Sigma$ +and finally, we conclude: +:$\displaystyle \bigcup_{n \mathop \in \N} \left({E_n \cup N_n}\right) \in \Sigma^*$ +Therefore, we have shown that $\Sigma^*$ is a [[Definition:Sigma-Algebra|$\sigma$-algebra]]. +Next, define $\bar \mu: \Sigma^* \to \overline{\R}_{\ge 0}$ by: +:$\bar \mu \left({E \cup N}\right) := \mu \left({E}\right)$ +It needs verification that this well-defines $\bar \mu$. +=== [[Completion Theorem (Measure Spaces)/Lemma|Lemma]] === +{{:Completion Theorem (Measure Spaces)/Lemma}}{{qed|lemma}} +Next, let us verify that $\bar \mu$ is a [[Definition:Measure (Measure Theory)|measure]]. 
+From [[Union with Empty Set]], we have $\varnothing \cup \varnothing = \varnothing$, so by [[Empty Set is Null Set]]: +:$\bar \mu \left({\varnothing}\right) = \mu \left({\varnothing}\right) = 0$ +For a [[Definition:Sequence|sequence]] of [[Definition:Pairwise Disjoint|pairwise disjoint]] sets $\left({E_n \cup N_n}\right)_{n \in \N}$ in $\Sigma^*$, compute: +{{begin-eqn}} +{{eqn | l = \bar \mu \left({\bigcup_{n \mathop \in \N} \left({E_n \cup N_n}\right)}\right) + | r = \bar \mu \left({\left({\bigcup_{n \mathop \in \N} E_n}\right) \cup \left({\bigcup_{n \mathop \in \N} N_n}\right)}\right) + | c = [[Union Distributes over Union/Families of Sets|Union Distributes over Union: Families of Sets]] +}} +{{eqn | r = \mu \left({\bigcup_{n \mathop \in \N} E_n}\right) + | c = Definition of $\bar \mu$ +}} +{{eqn | r = \sum_{n \mathop \in \N} \mu \left({E_n}\right) + | c = $\mu$ is a [[Definition:Measure (Measure Theory)|measure]] +}} +{{eqn | r = \sum_{n \mathop \in \N} \bar \mu \left({E_n \cup N_n}\right) + | c = Definition of $\bar \mu$ +}} +{{end-eqn}} +Thus, $\bar \mu$ is a [[Definition:Measure (Measure Theory)|measure]]. +Since for all $E \in \Sigma$ trivially: +:$\bar \mu \left({E}\right) = \mu \left({E}\right)$ +if $\left({X, \Sigma^*, \bar \mu}\right)$ is a [[Definition:Complete Measure Space|complete measure space]], it also [[Definition:Completion (Measure Space)|completes]] $\left({X, \Sigma, \mu}\right)$. +So suppose that $E \cup N \in \Sigma^*$ is a [[Definition:Null Set|$\bar \mu$-null set]]. +Suppose that $N \subseteq M$, with $M$ a [[Definition:Null Set|$\mu$-null set]]. +Then by [[Set Union Preserves Subsets]], we have: +:$E \cup N \subseteq E \cup M$ +and from $0 = \bar \mu \left({E \cup N}\right) = \mu \left({E}\right)$, $E$ is also a [[Definition:Null Set|$\mu$-null set]]. +Hence by [[Null Sets Closed under Union]], $E \cup M$ is a [[Definition:Null Set|$\mu$-null set]]. 
+Therefore, for any $E' \in \Sigma^*$ with $E' \subseteq E \cup N$, we also have by [[Subset Relation is Transitive]]: +:$E' \subseteq E \cup M$ +whence $E' \in \mathcal N$, and this means that (by [[Union with Empty Set]]): +:$\bar \mu \left({E'}\right) = \bar \mu \left({\varnothing \cup E'}\right) = \mu \left({\varnothing}\right) = 0$ +So, any [[Definition:Subset|subset]] of $E \cup N$ is again a [[Definition:Null Set|$\bar \mu$-null set]]. +That is, $\left({X, \Sigma^*, \bar \mu}\right)$ is [[Definition:Complete Measure Space|complete]]. +It follows that $\left({X, \Sigma^*, \bar \mu}\right)$ [[Definition:Completion (Measure Space)|completes]] $\left({X, \Sigma, \mu}\right)$. +{{qed}} +\end{proof}<|endoftext|> +\section{Restricted Measure is Measure} +Tags: Measure Theory + +\begin{theorem} +Let $\left({X, \Sigma, \mu}\right)$ be a [[Definition:Measure Space|measure space]]. +Let $\Sigma'$ be a [[Definition:Sub-Sigma-Algebra|sub-$\sigma$-algebra]] of $\Sigma$. +Then the [[Definition:Restricted Measure|restricted measure]] $\mu \restriction_{\Sigma'}$ is a [[Definition:Measure (Measure Theory)|measure]] on the [[Definition:Measurable Space|measurable space]] $\left({X, \Sigma'}\right)$. 
+\end{theorem} + +\begin{proof} +Verify the axioms for a [[Definition:Measure (Measure Theory)|measure]] in turn for $\mu \restriction_{\Sigma'}$: +=== Axiom $(1)$ === +The statement of axiom $(1)$ for $\mu \restriction_{\Sigma'}$ is: +:$\forall E' \in \Sigma': \mu \restriction_{\Sigma'} \left({E'}\right) \ge 0$ +Now, for every $E' \in \Sigma'$, compute: +{{begin-eqn}} +{{eqn|l = \mu \restriction_{\Sigma'} \left({E'}\right) + |r = \mu \left({E'}\right) + |c = Definition of $\mu \restriction_{\Sigma'}$ +}} +{{eqn|o = \ge + |r = 0 + |c = $\mu$ is a [[Definition:Measure (Measure Theory)|measure]] +}} +{{end-eqn}} +{{qed|lemma}} +=== Axiom $(2)$ === +Let $\left({E'_n}\right)_{n \in \N}$ be a [[Definition:Sequence|sequence]] of [[Definition:Pairwise Disjoint|pairwise disjoint]] sets in $\Sigma'$. +Then the statement of axiom $(2)$ for $\mu \restriction_{\Sigma'}$ is: +:$\displaystyle \mu \restriction_{\Sigma'} \left({\bigcup_{n \mathop \in \N} E'_n}\right) = \sum_{n \mathop \in \N} \mu \restriction_{\Sigma'} \left({E'_n}\right)$ +One can show this by means of the following computation: +{{begin-eqn}} +{{eqn|l = \mu \restriction_{\Sigma'} \left({\bigcup_{n \mathop \in \N} E'_n}\right) + |r = \mu \left({\bigcup_{n \mathop \in \N} E'_n}\right) + |c = Definition of $\mu \restriction_{\Sigma'}$ +}} +{{eqn|r = \sum_{n \mathop \in \N} \mu \left({E'_n}\right) + |c = $\mu$ is a [[Definition:Measure (Measure Theory)|measure]] +}} +{{eqn|r = \sum_{n \mathop \in \N} \mu \restriction_{\Sigma'} \left({E'_n}\right) + |c = Definition of $\mu \restriction_{\Sigma'}$ +}} +{{end-eqn}} +{{qed|lemma}} +=== Axiom $(3')$ === +The statement of axiom $(3')$ for $\mu \restriction_{\Sigma'}$ is: +:$\mu \restriction_{\Sigma'} \left({\varnothing}\right) = 0$ +By [[Sigma-Algebra Contains Empty Set]], $\varnothing \in \Sigma'$. Hence: +:$\mu \restriction_{\Sigma'} \left({\varnothing}\right) = \mu \left({\varnothing}\right) = 0$ +because $\mu$ is a [[Definition:Measure (Measure Theory)|measure]]. 
+{{qed|lemma}} +Having verified a suitable set of axioms, it follows that $\mu \restriction_{\Sigma'}$ is a [[Definition:Measure (Measure Theory)|measure]]. +{{qed}} +\end{proof}<|endoftext|> +\section{Restricting Measure Preserves Finiteness} +Tags: Measure Theory + +\begin{theorem} +Let $\left({X, \Sigma, \mu}\right)$ be a [[Definition:Measure Space|measure space]]. +Let $\mu$ be a [[Definition:Finite Measure|finite measure]]. +Let $\Sigma'$ be a [[Definition:Sub-Sigma-Algebra|sub-$\sigma$-algebra]] of $\Sigma$. +Then the [[Definition:Restricted Measure|restricted measure]] $\mu \restriction_{\Sigma'}$ is also a [[Definition:Finite Measure|finite measure]]. +\end{theorem} + +\begin{proof} +By [[Restricted Measure is Measure]], $\mu \restriction_{\Sigma'}$ is a [[Definition:Measure (Measure Theory)|measure]]. +Now by [[Definition:Restricted Measure|definition of $\mu \restriction_{\Sigma'}$]], have: +:$\mu \restriction_{\Sigma'} \left({X}\right) = \mu \left({X}\right) < \infty$ +as $\mu$ is a [[Definition:Finite Measure|finite measure]]. +Hence $\mu \restriction_{\Sigma'}$ is also a [[Definition:Finite Measure|finite measure]]. +{{qed}} +\end{proof}<|endoftext|> +\section{Measure Space Sigma-Finite iff Cover by Sets of Finite Measure} +Tags: Measure Theory + +\begin{theorem} +Let $\left({X, \Sigma, \mu}\right)$ be a [[Definition:Measure Space|measure space]]. +Then $\left({X, \Sigma, \mu}\right)$ is [[Definition:Sigma-Finite Measure Space|$\sigma$-finite]] [[Definition:Iff|iff]] there exists a [[Definition:Sequence|sequence]] $\left({E_n}\right)_{n \in \N}$ in $\Sigma$ such that: +:$(1):\quad \displaystyle \bigcup_{n \mathop \in \N} E_n = X$ +:$(2):\quad \forall n \in \N: \mu \left({E_n}\right) < +\infty$ +\end{theorem} + +\begin{proof} +=== Necessary Condition === +Let $\mu$ be a [[Definition:Sigma-Finite Measure|$\sigma$-finite measure]]. 
+Let $\left({F_n}\right)_{n \in \N}$ be an [[Definition:Exhausting Sequence of Sets|exhausting sequence]] in $\Sigma$ such that: +:$\forall n \in \N: \mu \left({F_n}\right) < \infty$ +Then as $\left({F_n}\right)_{n \in \N}$ is [[Definition:Exhausting Sequence of Sets|exhausting]], have: +:$\displaystyle \bigcup_{n \mathop \in \N} F_n = X$ +It follows that the [[Definition:Sequence|sequence]] $\left({F_n}\right)_{n \in \N}$ satisfies $(1)$ and $(2)$. +{{qed|lemma}} +=== Sufficient Condition === +Let $\mu$ be any [[Definition:Measure (Measure Theory)|measure]]. +Let $\left({E_n}\right)_{n \in \N}$ be a [[Definition:Sequence|sequence]] satisfying $(1)$ and $(2)$. +Define $F_n := \displaystyle \bigcup_{k \mathop = 1}^n E_k$. +Then by [[Sigma-Algebra Closed under Union]]: +: $F_n \in \Sigma$ for all $n \in \N$ +Also, by [[Set is Subset of Union]]: +: $F_{n+1} = F_n \cup E_{n+1}$, hence $F_n \subseteq F_{n+1}$ +The definition of the $F_n$ assures that: +: $X = \displaystyle \bigcup_{n \mathop \in \N} E_n = \bigcup_{n \mathop \in \N} F_n$ +Hence $\left({F_n}\right)_{n \in \N}$ is an [[Definition:Exhausting Sequence of Sets|exhausting sequence]] in $\Sigma$. +Furthermore, compute, for any $n \in \N$: +{{begin-eqn}} +{{eqn|l = \mu \left({F_n}\right) + |r = \mu \left({\bigcup_{k \mathop = 1}^n E_k}\right) + |c = Definition of $F_n$ +}} +{{eqn|o = \le + |r = \sum_{k \mathop = 1}^n \mu \left({E_k}\right) + |c = [[Measure is Subadditive]] +}} +{{eqn|o = < + |r = +\infty + |c = By $(2)$ +}} +{{end-eqn}} +Hence, by definition, $\mu$ is [[Definition:Sigma-Finite Measure|$\sigma$-finite]]. +Thus, $\left({X, \Sigma, \mu}\right)$ is also [[Definition:Sigma-Finite Measure Space|$\sigma$-finite]]. +{{qed}} +\end{proof}<|endoftext|> +\section{Central Subgroup is Normal} +Tags: Normal Subgroups, Central Subgroups, Central Subgroup is Normal + +\begin{theorem} +Let $G$ be a [[Definition:Group|group]]. +Let $H$ be a [[Definition:Central Subgroup|central subgroup]] of $G$. 
+Then $H$ is a [[Definition:Normal Subgroup|normal subgroup]] of $G$.
+\end{theorem}
+
+\begin{proof}
+Let $H$ be a [[Definition:Central Subgroup|central subgroup]] of $G$.
+By definition of [[Definition:Central Subgroup|central subgroup]]:
+:$H \subseteq \map Z G$
+where $\map Z G$ is the [[Definition:Center of Group|center]] of $G$.
+Thus we have that $H$ is a [[Definition:Group|group]] which is a [[Definition:Subset|subset]] of $\map Z G$.
+Therefore by definition $H$ is a [[Definition:Subgroup|subgroup]] of $\map Z G$.
+We also have from [[Center of Group is Abelian Subgroup]] that $\map Z G$ is an [[Definition:Abelian Group|abelian group]].
+It follows from [[Subgroup of Abelian Group is Normal]] that $H$ is a [[Definition:Normal Subgroup|normal subgroup]] of $G$.
+{{Qed}}
+\end{proof}
+
+\begin{proof}
+Let $H$ be a [[Definition:Central Subgroup|central subgroup]] of $G$.
+By definition of [[Definition:Central Subgroup|central subgroup]]:
+:$H \subseteq \map Z G$
+where $\map Z G$ is the [[Definition:Center of Group|center]] of $G$.
+Then: +{{begin-eqn}} +{{eqn | ll= \forall x \in G: \forall h \in H: + | l = x h x^{-1} + | r = x x^{-1} h + | c = as $h \in H \implies h \in \map Z G$ +}} +{{eqn | r = h + | c = +}} +{{eqn | lll=\leadsto + | l = x h x^{-1} + | o = \in + | r = H + | c = as $h \in H$ +}} +{{eqn | lll=\leadsto + | l = H + | o = \lhd + | r = G + | c = {{Defof|Normal Subgroup}} +}} +{{end-eqn}} +{{Qed}} +\end{proof}<|endoftext|> +\section{Angle Between Vectors in Terms of Dot Product} +Tags: Vector Algebra, Dot Product + +\begin{theorem} +The [[Definition:Angle Between Vectors|angle between]] two non-[[Definition:Zero Vector|zero]] [[Definition:Vector (Euclidean Space)|vectors in $\R^n$]] can be calculated by: +{{begin-eqn}} +{{eqn | l = \theta + | r = \arccos \frac {\mathbf v \cdot \mathbf w} {\norm {\mathbf v} \norm {\mathbf w} } +}} +{{end-eqn}} +where: +:$\mathbf v \cdot \mathbf w$ represents the [[Definition:Dot Product|dot product]] of $\mathbf v$ and $\mathbf w$ +:$\norm {\, \cdot \,}$ represents [[Definition:Vector Length|vector length]]. 
+:$\arccos$ represents [[Definition:Arccosine|arccosine]]
+\end{theorem}
+
+\begin{proof}
+{{begin-eqn}}
+{{eqn | l = \norm {\mathbf v} \norm {\mathbf w} \cos \theta
+ | r = \mathbf v \cdot \mathbf w
+ | c = [[Cosine Formula for Dot Product]]
+}}
+{{eqn | l = \cos \theta
+ | r = \frac {\mathbf v \cdot \mathbf w} {\norm {\mathbf v} \norm {\mathbf w} }
+ | c = because $\mathbf v, \mathbf w \ne \mathbf 0 \implies \norm {\mathbf v}, \norm {\mathbf w} \ne 0$
+}}
+{{eqn | l = \arccos \paren {\cos \theta}
+ | r = \arccos \paren {\frac {\mathbf v \cdot \mathbf w} {\norm {\mathbf v} \norm {\mathbf w} } }
+ | c = [[Definition:Angle Between Vectors#Comment|because $0 \le \theta \le \pi$]] $\implies -1 \le \cos \theta \le 1$
+}}
+{{eqn | l = \theta
+ | r = \arccos \paren {\frac {\mathbf v \cdot \mathbf w} {\norm {\mathbf v} \norm {\mathbf w} } }
+ | c = [[Composite of Bijection with Inverse is Identity Mapping]]
+}}
+{{end-eqn}}
+{{qed}}
+\end{proof}<|endoftext|>
+\section{Angle Between Non-Zero Vectors Always Defined}
+Tags: Vector Algebra, Analytic Geometry
+
+\begin{theorem}
+The [[Definition:Angle Between Vectors|angle between]] two non-[[Definition:Zero Vector|zero]] [[Definition:Vector (Euclidean Space)|vectors in $\R^n$]] is always defined.
+\end{theorem}

+\begin{proof}
+=== Case 1 ===
+Suppose that $\mathbf v$ and $\mathbf w$ are not [[Definition:Scalar Multiplication on Vector Space|scalar multiples]] of each other.
+:[[File:AngleBetweenTwoVectors.png|400px]]
+From [[Construction of Triangle from Given Lengths]], it is [[Definition:Sufficient Condition|sufficient]] to show that the sum of the lengths of any two sides is greater than the length of the third side.
+Consider the side with length $\norm {\mathbf v}$.
+From the [[Triangle Inequality|triangle inequality for vectors]]:
+{{begin-eqn}}
+{{eqn | l = \norm {\mathbf v}
+ | r = \norm {\mathbf {w + v - w} }
+}}
+{{eqn | o = <
+ | r = \norm {\mathbf w} + \norm {\mathbf {v - w} }
+}}
+{{end-eqn}}
+Note that this is a [[Definition:Strict Inequality|strict inequality]] [[Triangle Inequality|because the vectors are not scalar multiples of each other]].
+{{explain|Pick a better way to explain this than linking to the above}}
+Consider the side with length $\norm {\mathbf w}$.
+{{begin-eqn}}
+{{eqn | l = \norm {\mathbf w}
+ | r = \norm {\mathbf {v + w - v} }
+}}
+{{eqn | o = <
+ | r = \norm {\mathbf v} + \norm {\mathbf {w - v} }
+}}
+{{eqn | r = \norm {\mathbf v} + \norm {\mathbf {v - w} }
+}}
+{{end-eqn}}
+Lastly, consider the side with length $\norm {\mathbf v - \mathbf w}$.
+{{begin-eqn}}
+{{eqn | l = \norm {\mathbf {v - w} }
+ | r = \norm {\mathbf {v + \paren {-w} } }
+}}
+{{eqn | o = <
+ | r = \norm {\mathbf v} + \norm {\mathbf {-w} }
+}}
+{{eqn | r = \norm {\mathbf v} + \norm {\mathbf w}
+}}
+{{end-eqn}}
+{{qed|lemma}}
+=== Case 2 ===
+Suppose that $\mathbf v$ and $\mathbf w$ ''are'' [[Definition:Scalar Multiplication on Vector Space|scalar multiples]] of each other.
+Then the existence of $\theta$ follows directly from the definition of the angle between vectors that are scalar multiples of each other.
+{{explain|Link to the above definition, or word it here.}}
+{{qed}}
+\end{proof}<|endoftext|>
+\section{Dynkin System Contains Empty Set}
+Tags: Dynkin Systems, Empty Set

+\begin{theorem}
+Let $X$ be a [[Definition:Set|set]], and let $\mathcal D$ be a [[Definition:Dynkin System|Dynkin system]] on $X$.
+Then the [[Definition:Empty Set|empty set]] $\varnothing$ is an element of $\mathcal D$.
+\end{theorem}

+\begin{proof}
+As $\mathcal D$ is a [[Definition:Dynkin System|Dynkin system]], $X \in \mathcal D$.
+By [[Set Difference with Self is Empty Set]], $X \setminus X = \varnothing$.
+Hence, by property $(2)$ of a [[Definition:Dynkin System|Dynkin system]], $\varnothing = X \setminus X \in \mathcal D$.
+{{qed}}
+\end{proof}<|endoftext|>
+\section{Dynkin System Closed under Disjoint Union}
+Tags: Dynkin Systems

+\begin{theorem}
+Let $X$ be a [[Definition:Set|set]], and let $\mathcal D$ be a [[Definition:Dynkin System|Dynkin system]] on $X$.
+Let $D, E \in \mathcal D$ be [[Definition:Disjoint Sets|disjoint]].
+Then the [[Definition:Set Union|union]] $D \cup E$ is also an element of $\mathcal D$.
+\end{theorem}

+\begin{proof}
+Define $D_1 = D, D_2 = E$, and for $n \ge 3$, $D_n = \varnothing$.
+Then by [[Dynkin System Contains Empty Set]], $D_n \in \mathcal D$ for all $n \in \N$.
+Also, by [[Intersection with Empty Set]], it follows that $\left({D_n}\right)_{n \in \N}$ is a [[Definition:Pairwise Disjoint|pairwise disjoint]] [[Definition:Sequence|sequence]].
+Hence, by property $(3)$ of a [[Definition:Dynkin System|Dynkin system]], have:
+:$D \cup E = \displaystyle \bigcup_{n \mathop \in \N} D_n \in \mathcal D$
+{{qed}}
+\end{proof}<|endoftext|>
+\section{Sigma-Algebra is Dynkin System}
+Tags: Sigma-Algebras, Dynkin Systems

+\begin{theorem}
+Let $X$ be a [[Definition:Set|set]], and let $\Sigma$ be a [[Definition:Sigma-Algebra|$\sigma$-algebra]] on $X$.
+Then $\Sigma$ is a [[Definition:Dynkin System|Dynkin system]] on $X$.
+\end{theorem}

+\begin{proof}
+The axioms $(1)$ and $(2)$ for both [[Definition:Sigma-Algebra|$\sigma$-algebras]] and [[Definition:Dynkin System|Dynkin systems]] are identical.
+[[Definition:Dynkin System|Dynkin system]] axiom $(3)$ is seen to be a specification of [[Definition:Sigma-Algebra|$\sigma$-algebra]] axiom $(3)$ to [[Definition:Pairwise Disjoint|pairwise disjoint]] [[Definition:Sequence|sequences]].
+Hence $\Sigma$ is trivially a [[Definition:Dynkin System|Dynkin system]] on $X$.
+{{qed}} +\end{proof}<|endoftext|> +\section{Existence and Uniqueness of Dynkin System Generated by Collection of Subsets} +Tags: Dynkin Systems + +\begin{theorem} +Let $X$ be a [[Definition:Set|set]]. +Let $\mathcal G \subseteq \mathcal P \left({X}\right)$ be a collection of [[Definition:Subset|subsets]] of $X$. +Then $\delta \left({\mathcal G}\right)$, the [[Definition:Dynkin System Generated by Collection of Subsets|Dynkin system generated by $\mathcal G$]], exists and is unique. +\end{theorem} + +\begin{proof} +=== Existence === +By [[Power Set is Dynkin System]], there exists at least one [[Definition:Dynkin System|Dynkin system]] containing $\mathcal G$. +Next, let $\Bbb D$ be the collection of [[Definition:Dynkin System|Dynkin systems]] containing $\mathcal G$: +:$\Bbb D := \left\{{\mathcal{D}': \mathcal G \subseteq \mathcal{D}', \text{$\mathcal{D}'$ is a Dynkin system}}\right\}$ +By [[Intersection of Dynkin Systems is Dynkin System]], $\mathcal D := \bigcap \Bbb D$ is a [[Definition:Dynkin System|Dynkin system]]. +Also, by [[Set Intersection Preserves Subsets/Families of Sets|Set Intersection Preserves Subsets]]: +: $\mathcal G \subseteq \mathcal D$ +Now let $\mathcal{D}'$ be a [[Definition:Dynkin System|Dynkin system]] containing $\mathcal G$. +By construction of $\mathcal D$, and [[Intersection is Subset/General Result|Intersection is Subset: General Result]]: +: $\mathcal D \subseteq \mathcal{D}'$ +{{qed|lemma}} +=== Uniqueness === +Suppose both $\mathcal{D}_1$ and $\mathcal{D}_2$ are [[Definition:Dynkin System Generated by Collection of Subsets|Dynkin systems generated by $\mathcal G$]]. +Then property $(2)$ for these [[Definition:Dynkin System|Dynkin systems]] implies both $\mathcal{D}_1 \subseteq \mathcal{D}_2$ and $\mathcal{D}_2 \subseteq \mathcal{D}_1$. 
+By definition of [[Definition:Set Equality/Definition 2|set equality]]: +: $\mathcal{D}_1 = \mathcal{D}_2$ +{{qed}} +\end{proof}<|endoftext|> +\section{Generated Sigma-Algebra Contains Generated Dynkin System} +Tags: Dynkin Systems, Sigma-Algebras + +\begin{theorem} +Let $X$ be a [[Definition:Set|set]]. +Let $\mathcal G \subseteq \mathcal P \left({X}\right)$ be a collection of [[Definition:Subset|subsets]] of $X$. +Then $\delta \left({\mathcal G}\right) \subseteq \sigma \left({\mathcal G}\right)$. +Here $\delta$ denotes [[Definition:Generated Dynkin System|generated Dynkin system]], and $\sigma$ denotes [[Definition:Sigma-Algebra Generated by Collection of Subsets|generated $\sigma$-algebra]]. +\end{theorem} + +\begin{proof} +By [[Sigma-Algebra is Dynkin System]], $\sigma \left({\mathcal G}\right)$ is a [[Definition:Dynkin System|Dynkin system]]. +The [[Definition:Generated Dynkin System|definition of $\delta \left({\mathcal G}\right)$]] now ensures that $\delta \left({\mathcal G}\right) \subseteq \sigma \left({\mathcal G}\right)$. +{{qed}} +\end{proof}<|endoftext|> +\section{Dynkin System Closed under Intersections is Sigma-Algebra} +Tags: Dynkin Systems, Sigma-Algebras + +\begin{theorem} +Let $X$ be a [[Definition:Set|set]], and let $\DD$ be a [[Definition:Dynkin System|Dynkin system]] on $X$. +Suppose that $\DD$ satisfies the following condition: +:$(1):\quad \forall D, E \in \DD: D \cap E \in \DD$ +That is, $\DD$ is closed under [[Definition:Set Intersection|intersection]]. +Then $\DD$ is a [[Definition:Sigma-Algebra|$\sigma$-algebra]]. +\end{theorem} + +\begin{proof} +The first two conditions for a [[Definition:Dynkin System|Dynkin system]] are identical to those for a [[Definition:Sigma-Algebra|$\sigma$-algebra]]. +Hence it is only required to verify that $(1)$ implies that $\DD$ is closed under arbitrary [[Definition:Countable Union|countable unions]]. +So let $\sequence {D_n}_{n \mathop \in \N}$ be a [[Definition:Sequence|sequence]] in $\DD$. 
+Now define the [[Definition:Sequence|sequence]] $\sequence {E_n}_{n \mathop \in \N}$ by:
+:$\displaystyle E_n := D_n \cap \paren {X \setminus \bigcup_{m \mathop < n} D_m}$
+By [[Dynkin System Closed under Union]] and $(1)$, it follows that $E_n \in \DD$ for all $n \in \N$.
+=== Lemma ===
+For all $n \in \N$, it holds that:
+:$\displaystyle \bigcup_{k \mathop = 0}^n E_k = \bigcup_{k \mathop = 0}^n D_k$
+=== Proof of Lemma ===
+{{finish|tedium, best postponed until after the implementation of the new extension}}
+The lemma, combined with the definition of the $E_n$, gives immediately that for all $n \in \N$:
+:$\displaystyle E_n \subseteq X \setminus \bigcup_{m \mathop < n} D_m = X \setminus \bigcup_{m \mathop < n} E_m$
+whence the $E_n$ are [[Definition:Pairwise Disjoint|pairwise disjoint]].
+Another consequence is that:
+:$\displaystyle \bigcup_{n \mathop \in \N} D_n = \bigcup_{n \mathop \in \N} E_n$
+Now since the $E_n$ are [[Definition:Pairwise Disjoint|pairwise disjoint]], it follows that:
+:$\displaystyle \bigcup_{n \mathop \in \N} E_n \in \DD$
+which, combined with the above equality, yields:
+:$\displaystyle \bigcup_{n \mathop \in \N} D_n \in \DD$
+Therefore, $\DD$ is closed under [[Definition:Countable Union|countable unions]], making it a [[Definition:Sigma-Algebra|$\sigma$-algebra]].
+{{qed}}
+\end{proof}<|endoftext|>
+\section{Dynkin System with Generator Closed under Intersection is Sigma-Algebra}
+Tags: Dynkin Systems, Sigma-Algebras

+\begin{theorem}
+Let $X$ be a [[Definition:Set|set]].
+Let $\GG \subseteq \powerset X$ be a collection of [[Definition:Subset|subsets]] of $X$.
+Suppose that $\GG$ satisfies the following condition:
+:$(1):\quad \forall G, H \in \GG: G \cap H \in \GG$
+That is, $\GG$ is [[Definition:Closed Algebraic Structure|closed]] under [[Definition:Set Intersection|intersection]].
+Then:
+:$\map \delta \GG = \map \sigma \GG$
+where $\delta$ denotes [[Definition:Generated Dynkin System|generated Dynkin system]], and $\sigma$ denotes [[Definition:Sigma-Algebra Generated by Collection of Subsets|generated $\sigma$-algebra]].
+\end{theorem}

+\begin{proof}
+From [[Sigma-Algebra is Dynkin System]] and the definition of [[Definition:Generated Dynkin System|generated Dynkin system]], it follows that:
+:$\map \delta \GG \subseteq \map \sigma \GG$
+Let $D \in \map \delta \GG$, and define:
+:$\delta_D := \set {E \subseteq X: E \cap D \in \map \delta \GG}$
+Let us verify that these $\delta_D$ form [[Definition:Dynkin System|Dynkin systems]].
+First of all, note $X \cap D = D$, hence $X \in \delta_D$.
+Next, compute, for any $E \in \delta_D$:
+{{begin-eqn}}
+{{eqn | l = \paren {X \setminus E} \cap D
+ | r = \paren {\paren {X \setminus E} \cap D} \cup \paren {\paren {X \setminus D} \cap D}
+ | c = [[Set Difference Intersection with Second Set is Empty Set]], [[Union with Empty Set]]
+}}
+{{eqn | r = \paren {\paren {X \setminus E} \cup \paren {X \setminus D} } \cap D
+ | c = [[Intersection Distributes over Union]]
+}}
+{{eqn | r = \paren {X \setminus \paren {E \cap D} } \cap D
+ | c = [[De Morgan's Laws (Set Theory)/Set Difference/Difference with Intersection|De Morgan's Laws: Difference with Intersection]]
+}}
+{{eqn | r = \paren {X \setminus \paren {E \cap D} } \cap \paren {X \setminus \paren {X \setminus D} }
+ | c = [[Set Difference with Set Difference]]
+}}
+{{eqn | r = X \setminus \paren {\paren {E \cap D} \cup \paren {X \setminus D} }
+ | c = [[De Morgan's Laws (Set Theory)/Set Difference/Difference with Union|De Morgan's Laws: Difference with Union]]
+}}
+{{end-eqn}}
+Now from [[Intersection is Associative]], [[Set Difference Intersection with Second Set is Empty Set]] and [[Intersection with Empty Set]]:
+:$\paren {E \cap D} \cap \paren {X \setminus D} = E \cap \paren {D \cap \paren {X \setminus D} } = E \cap \O = \O$
+Thus, since $E \cap D, X
\setminus D \in \map \delta \GG$, it follows that their [[Definition:Disjoint Union (Probability Theory)|disjoint union]] is as well. +Finally, combining the above, it follows that: +:$\paren {X \setminus E} \cap D \in \map \delta \GG$ +Thus: +:$E \in \delta_D \implies X \setminus E \in \delta_D$ +Finally, let $\sequence {E_n}_{n \mathop \in \N}$ be a [[Definition:Pairwise Disjoint|pairwise disjoint]] [[Definition:Sequence|sequence]] of sets in $\delta_D$. +Then it is immediate that $\sequence {E_n \cap D}_{n \mathop \in \N}$ is also [[Definition:Pairwise Disjoint|pairwise disjoint]]. +Hence: +{{begin-eqn}} +{{eqn | l = \paren {\bigcup_{n \mathop \in \N} E_n} \cap D + | r = \bigcup_{n \mathop \in \N} \paren {E_n \cap D} + | c = [[Intersection Distributes over Union]] +}} +{{end-eqn}} +and since the $E_n \cap D$ are in $\map \delta \GG$ by assumption, this [[Definition:Disjoint Union (Probability Theory)|disjoint union]] is also in $\map \delta \GG$. +Therefore, $\delta_D$ is a [[Definition:Dynkin System|Dynkin system]]. +Now by definition of [[Definition:Generated Dynkin System|generated Dynkin system]], $\GG \subseteq \map \delta \GG$. +By this observation and $(1)$, we immediately obtain, for any $G \in \GG$: +:$\GG \subseteq \delta_G$ +Therefore, by definition of [[Definition:Generated Dynkin System|generated Dynkin system]], for all $G \in \GG$: +:$\map \delta \GG \subseteq \delta_G$ +Hence, for any $D \in \map \delta \GG$ and $G \in \GG$: +:$D \cap G \in \map \delta \GG$ +Thus we have established that, for all $D \in \map \delta \GG$: +:$\GG \subseteq \delta_D$ +whence, by the definition of $\delta_D$ and [[Definition:Generated Dynkin System|generated Dynkin system]]: +:$\map \delta \GG \subseteq \delta_D$ +That is to say: +:$\forall D, E \in \map \delta \GG: D \cap E \in \map \delta \GG$ +Hence, by [[Dynkin System Closed under Intersections is Sigma-Algebra]], $\map \delta \GG$ is a [[Definition:Sigma-Algebra|$\sigma$-algebra]]. 
+Thus, it follows that $\map \sigma \GG \subseteq \map \delta \GG$. +Hence the result, by definition of [[Definition:Set Equality|set equality]]. +{{qed}} +\end{proof}<|endoftext|> +\section{Carathéodory's Theorem (Measure Theory)} +Tags: Measure Theory + +\begin{theorem} +Let $X$ be a [[Definition:Set|set]]. +Let $\SS \subseteq \powerset X$ be a [[Definition:Semiring of Sets|semi-ring]] of [[Definition:Subset|subsets]] of $X$. +Let $\mu: \SS \to \overline \R$ be a [[Definition:Pre-Measure|pre-measure]] on $\SS$. +Let $\map \sigma \SS$ be the [[Definition:Sigma-Algebra Generated by Collection of Subsets|$\sigma$-algebra generated by $\SS$]]. +Then $\mu$ [[Definition:Extension (Measure Theory)|extends]] to a [[Definition:Measure (Measure Theory)|measure]] $\mu^*$ on $\map \sigma \SS$. +{{Refactor|extract corollary}} +\end{theorem} + +\begin{proof} +{{ProofWanted}} +{{Namedfor|Constantin Carathéodory|cat = Carathéodory}} +\end{proof}<|endoftext|> +\section{Congruence Modulo Normal Subgroup is Congruence Relation} +Tags: Normal Subgroups, Congruence Relations + +\begin{theorem} +Let $\struct {G, \circ}$ be a [[Definition:Group|group]]. +Let $N$ be a [[Definition:Normal Subgroup|normal subgroup]] of $G$. +Then [[Definition:Congruence Modulo Subgroup|congruence modulo $N$]] is a [[Definition:Congruence Relation|congruence relation]] for the [[Definition:Group Operation|group operation]] $\circ$. +\end{theorem} + +\begin{proof} +Let $x \mathrel {\RR_N} y$ denote that $x$ and $y$ are in the same [[Definition:Coset|coset]], that is: +:$x \mathrel {\RR_N} y \iff x \circ N = y \circ N$ +as specified in the definition of [[Definition:Congruence Modulo Subgroup|congruence modulo $N$]]. +Let $x \mathrel {\RR_N} x'$ and $y \mathrel {\RR_N} y'$. 
+To demonstrate that $\RR_N$ is a [[Definition:Congruence Relation|congruence relation]] for $\circ$, we need to show that:
+:$\paren {x \circ y} \mathrel {\RR_N} \paren {x' \circ y'}$
+So:
+{{begin-eqn}}
+{{eqn | l = \paren {x \circ y} \circ \paren {x' \circ y'}^{-1}
+ | r = \paren {x \circ y} \circ \paren {y'^{-1} \circ x'^{-1} }
+ | c = [[Inverse of Group Product]]
+}}
+{{eqn | r = \paren {\paren {x \circ y} \circ y'^{-1} } \circ x'^{-1}
+ | c =
+}}
+{{eqn | r = \paren {x \circ \paren {y \circ y'^{-1} } } \circ x'^{-1}
+ | c =
+}}
+{{end-eqn}}
+By [[Cosets are Equal iff Product with Inverse in Subgroup]]:
+:$x \circ x'^{-1} \in N$ and $y \circ y'^{-1} \in N$
+Thus:
+:$\paren {x \circ y} \circ \paren {x' \circ y'}^{-1} \in x \circ N \circ x'^{-1}$
+But we also have that:
+{{begin-eqn}}
+{{eqn | l = x \circ N \circ x'^{-1}
+ | r = N \circ x \circ x'^{-1}
+ | c = {{Defof|Normal Subgroup}}
+}}
+{{eqn | o = \subseteq
+ | r = N \circ N
+ | c = {{Defof|Subset Product}}: $x \circ x'^{-1} \in N$
+}}
+{{eqn | r = N
+ | c = [[Product of Subgroup with Itself]]
+}}
+{{end-eqn}}
+That is:
+:$\paren {x \circ y} \circ \paren {x' \circ y'}^{-1} \in N$
+and so:
+:$\paren {x \circ y} \mathrel {\RR_N} \paren {x' \circ y'}$
+Hence the result, by definition of [[Definition:Congruence Relation|congruence relation]].
+{{qed}}
+\end{proof}<|endoftext|>
+\section{Intersection Measure is Measure}
+Tags: Measure Theory

+\begin{theorem}
+Let $\left({X, \Sigma, \mu}\right)$ be a [[Definition:Measure Space|measure space]].
+Let $F \in \Sigma$ be a [[Definition:Measurable Set|measurable set]].
+Then the [[Definition:Intersection Measure|intersection measure]] $\mu_F$ is a [[Definition:Measure (Measure Theory)|measure]] on the [[Definition:Measurable Space|measurable space]] $\left({X, \Sigma}\right)$.
+\end{theorem} + +\begin{proof} +Verify the axioms for a [[Definition:Measure (Measure Theory)|measure]] in turn for $\mu_F$: +=== Axiom $(1)$ === +The statement of axiom $(1)$ for $\mu_F$ is: +:$\forall E \in \Sigma: \mu_F \left({E}\right) \ge 0$ +For every $E \in \Sigma$ have: +{{begin-eqn}} +{{eqn|l = \mu_F \left({E}\right) + |r = \mu \left({E \cap F}\right) + |c = Definition of $\mu_F$ +}} +{{eqn|o = \ge + |r = 0 + |c = $\mu$ is a [[Definition:Measure (Measure Theory)|measure]] +}} +{{end-eqn}} +{{qed|lemma}} +=== Axiom $(2)$ === +Let $\left({E_n}\right)_{n \in \N}$ be a [[Definition:Sequence|sequence]] of [[Definition:Pairwise Disjoint|pairwise disjoint]] sets in $\Sigma$. +The statement of axiom $(2)$ for $\mu_F$ is: +:$\displaystyle \mu_F \left({\bigcup_{n \mathop \in \N} E_n}\right) = \sum_{n \mathop \in \N} \mu_F \left({E_n}\right)$ +This is verified by the following computation: +{{begin-eqn}} +{{eqn|l = \mu_F \left({\bigcup_{n \mathop \in \N} E_n}\right) + |r = \mu \left({ \left({\bigcup_{n \mathop \in \N} E_n}\right) \cap F}\right) + |c = Definition of $\mu_F$ +}} +{{eqn|r = \mu \left({ \bigcup_{n \mathop \in \N} \left({E_n \cap F}\right) }\right) + |c = [[Intersection Distributes over Union#General Result|Intersection Distributes over Union: General Result]] +}} +{{eqn|r = \sum_{n \mathop \in \N} \mu \left({E_n \cap F}\right) + |c = $\mu$ is a [[Definition:Measure (Measure Theory)|measure]] +}} +{{eqn|r = \sum_{n \mathop \in \N} \mu_F \left({E_n}\right) + |c = Definition of $\mu_F$ +}} +{{end-eqn}} +{{qed|lemma}} +=== Axiom $(3')$ === +The statement of axiom $(3')$ for $\mu_F$ is: +:$\mu_F \left({\varnothing}\right) = 0$ +By [[Intersection with Empty Set]], $\varnothing \cap F = \varnothing$. Hence: +:$\mu_F \left({\varnothing}\right) = \mu \left({\varnothing \cap F}\right) = 0$ +because $\mu$ is a [[Definition:Measure (Measure Theory)|measure]]. 
+{{qed|lemma}} +Having verified a suitable set of axioms, it follows that $\mu_F$ is a [[Definition:Measure (Measure Theory)|measure]]. +{{qed}} +\end{proof}<|endoftext|> +\section{Uniqueness of Measures/Proof 1} +Tags: Measure Theory + +\begin{theorem} +Let $\left({X, \Sigma}\right)$ be a [[Definition:Measurable Space|measurable space]]. +Let $\mathcal G \subseteq \mathcal P \left({X}\right)$ be a [[Definition:Sigma-Algebra Generated by Collection of Subsets#Generator|generator]] for $\Sigma$; i.e., $\Sigma = \sigma \left({\mathcal G}\right)$. +Suppose that $\mathcal G$ satisfies the following conditions: +:$(1):\quad \forall G, H \in \mathcal G: G \cap H \in \mathcal G$ +:$(2):\quad$ There exists an [[Definition:Exhausting Sequence of Sets|exhausting sequence]] $\left({G_n}\right)_{n \in \N} \uparrow X$ in $\mathcal G$ +Let $\mu, \nu$ be [[Definition:Measure (Measure Theory)|measures]] on $\left({X, \Sigma}\right)$, and suppose that: +:$(3):\quad \forall G \in \mathcal G: \mu \left({G}\right) = \nu \left({G}\right)$ +:$(4):\quad \forall n \in \N: \mu \left({G_n}\right)$ is [[Definition:Finite Extended Real Number|finite]] +Then $\mu = \nu$. +Alternatively, by [[Countable Cover induces Exhausting Sequence]], the [[Definition:Exhausting Sequence of Sets|exhausting sequence]] in $(2)$ may be replaced by a [[Definition:Countable Cover|countable $\mathcal G$-cover]] $\left({G_n}\right)_{n \in \N}$, still subject to $(4)$. +\end{theorem} + +\begin{proof} +Define, for all $n \in \N$, $\mathcal{D}_n$ by: +:$\mathcal{D}_n := \left\{{E \in \Sigma: \mu \left({G_n \cap E}\right) = \nu \left({G_n \cap E}\right)}\right\}$ +Let us show that $\mathcal{D}_n$ is a [[Definition:Dynkin System|Dynkin system]]. +By [[Intersection with Subset is Subset]], $G_n \cap X = G_n$, whence $(3)$ implies that $X \in \mathcal{D}_n$. +Now, let $D \in \mathcal{D}_n$. 
Then: +{{begin-eqn}} +{{eqn|l = \mu \left({G_n \cap \left({X \setminus D}\right)}\right) + |r = \mu \left({G_n \setminus D}\right) + |c = [[Intersection with Set Difference is Set Difference with Intersection]] +}} +{{eqn|r = \mu \left({G_n}\right) - \mu \left({G_n \cap D}\right) + |c = $\mu \left({G_n}\right) < +\infty$, [[Set Difference and Intersection form Partition]], [[Measure is Finitely Additive Function]] +}} +{{eqn|r = \nu \left({G_n}\right) - \nu \left({G_n \cap D}\right) + |c = $(3)$, $D \in \mathcal{D}_n$ +}} +{{eqn|r = \nu \left({G_n \cap \left({X \setminus D}\right)}\right) + |c = Above reasoning in opposite order +}} +{{end-eqn}} +Therefore, $X \setminus D \in \mathcal{D}_n$. +Finally, let $\left({D_m}\right)_{m \in \N}$ be a [[Definition:Sequence|sequence]] of [[Definition:Pairwise Disjoint|pairwise disjoint]] sets in $\mathcal{D}_n$. +Then: +{{begin-eqn}} +{{eqn|l = \mu \left({G_n \cap \left({\bigcup_{m \mathop \in \N} D_m}\right)}\right) + |r = \mu \left({\bigcup_{m \mathop \in \N} \left({G_n \cap D_m}\right)}\right) + |c = [[Intersection Distributes over Union]] +}} +{{eqn|r = \sum_{m \mathop \in \N} \mu \left({G_n \cap D_m}\right) + |c = $\mu$ is a [[Definition:Measure (Measure Theory)|measure]] +}} +{{eqn|r = \sum_{m \mathop \in \N} \nu \left({G_n \cap D_m}\right) + |c = $D_m \in \mathcal{D}_n$ +}} +{{eqn|r = \nu \left({G_n \cap \left({\bigcup_{m \mathop \in \N} D_m}\right)}\right) + |c = Above reasoning in opposite order +}} +{{end-eqn}} +Therefore, $\displaystyle \bigcup_{m \mathop \in \N} D_m \in \mathcal{D}_n$. +Thus, we have shown that $\mathcal{D}_n$ is a [[Definition:Dynkin System|Dynkin system]]. 
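As an informal aside (not part of the formal proof), the Dynkin system structure of such a family $\mathcal{D}_n$ can be sanity-checked by brute force on a small finite example. The point weights, the set $G$, and all helper names below are hypothetical choices made up for illustration; $\mu$ and $\nu$ agree in total on $G$ but not pointwise, so the resulting family is a proper subfamily of the power set.

```python
from itertools import combinations

X = frozenset({0, 1, 2, 3})

def powerset(s):
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

# Two hypothetical finite measures on the power set, given by point weights.
mu_w = {0: 1, 1: 3, 2: 5, 3: 0}
nu_w = {0: 2, 1: 2, 2: 1, 3: 4}

def mu(E): return sum(mu_w[x] for x in E)
def nu(E): return sum(nu_w[x] for x in E)

G = frozenset({0, 1})            # plays the role of one G_n
assert mu(G) == nu(G)            # the measures agree on the generator set

# D := {E ⊆ X : mu(G ∩ E) = nu(G ∩ E)}, the analogue of D_n
D = [E for E in powerset(X) if mu(G & E) == nu(G & E)]

assert X in D                                # Dynkin axiom (1)
assert all(X - E in D for E in D)            # Dynkin axiom (2): complements
for E in D:                                  # Dynkin axiom (3), finite case:
    for F in D:                              # disjoint unions stay in D
        if not (E & F):
            assert E | F in D
assert len(D) == 8                           # proper subfamily of the 16 subsets
```

Here $D$ consists of exactly those subsets containing both or neither of the points of $G$, which matches the hand computation of $\mu \left({G \cap E}\right)$ and $\nu \left({G \cap E}\right)$ case by case.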
+Combining $(1)$ and $(3)$, it follows that: +:$\forall n \in \N: \mathcal G \subseteq \mathcal{D}_n$ +From $(1)$ and [[Dynkin System with Generator Closed under Intersection is Sigma-Algebra]]: +:$\delta \left({\mathcal G}\right) = \sigma \left({\mathcal G}\right) = \Sigma$ +where $\delta$ denotes [[Definition:Generated Dynkin System|generated Dynkin system]]. +By definition of $\delta \left({\mathcal G}\right)$, this means: +:$\forall n \in \N: \delta \left({\mathcal G}\right) \subseteq \mathcal{D}_n$ +That is, for all $n \in \N$, $\Sigma \subseteq \mathcal{D}_n \subseteq \Sigma$. +By definition of [[Definition:Set Equality/Definition 2|set equality]]: +: $\Sigma = \mathcal{D}_n$ for all $n \in \N$ +Thus, for all $n \in \N$ and $E \in \Sigma$: +:$\mu \left({G_n \cap E}\right) = \nu \left({G_n \cap E}\right)$ +Now, from [[Set Intersection Preserves Subsets]], $E_n := G_n \cap E$ defines an [[Definition:Increasing Sequence of Sets|increasing sequence of sets]] with [[Definition:Limit of Increasing Sequence of Sets|limit]]: +:$\displaystyle \bigcup_{n \mathop \in \N} \left({G_n \cap E}\right) = \left({\bigcup_{n \mathop \in \N} G_n}\right) \cap E = X \cap E = E$ +from [[Intersection Distributes over Union]] and [[Intersection with Subset is Subset]]. +Thus, for all $E \in \Sigma$: +{{begin-eqn}} +{{eqn|l = \mu \left({E}\right) + |r = \lim_{n \to \infty} \mu \left({G_n \cap E}\right) + |c = [[Characterization of Measures|Characterization of Measures: $(3)$]] +}} +{{eqn|r = \lim_{n \to \infty} \nu \left({G_n \cap E}\right) + |c = $\forall n \in \N: E \in \mathcal{D}_n$ +}} +{{eqn|r = \nu \left({E}\right) + |c = [[Characterization of Measures|Characterization of Measures: $(3)$]] +}} +{{end-eqn}} +That is to say, $\mu = \nu$. 
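The way a $\cap$-stable generator pins down a finite measure can be illustrated on a two-point space. The sketch below (ours, purely illustrative) extends values given on the generator $\left\{{\left\{{0}\right\}, X}\right\}$ to the whole power set by additivity, and confirms that two measures agreeing on the generator agree everywhere.

```python
# Illustrative aside: on X = {0, 1} with Sigma = powerset(X), a measure
# is pinned down by its values on the ∩-stable generator G = {{0}, X};
# the constant sequence X, X, ... is an exhausting sequence lying in G.

def extend(m0, mX):
    """Extend the generator values mu({0}) = m0 and mu(X) = mX to Sigma.

    Additivity forces mu({1}) = mu(X) - mu({0}) and mu({}) = 0."""
    return {frozenset(): 0.0,
            frozenset({0}): m0,
            frozenset({1}): mX - m0,
            frozenset({0, 1}): mX}

# Two measures that agree on the generator ...
mu = extend(0.3, 1.0)
nu = extend(0.3, 1.0)
# ... necessarily agree on every set of Sigma:
assert mu == nu
print(mu[frozenset({1})])  # the forced value mu(X) - mu({0})
```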
+{{qed}} +\end{proof}<|endoftext|> +\section{Lebesgue Measure Invariant under Translations} +Tags: Lebesgue Measure + +\begin{theorem} +Let $\lambda^n$ be the $n$-dimensional [[Definition:Lebesgue Measure|Lebesgue measure]] on $\R^n$ equipped with the [[Definition:Borel Sigma-Algebra|Borel $\sigma$-algebra]] $\mathcal B \left({\R^n}\right)$. +Let $\mathbf x \in \R^n$. +Then $\lambda^n$ is [[Definition:Translation-Invariant Measure|translation-invariant]]; i.e., for all $B \in \mathcal B \left({\R^n}\right)$, have: +:$\lambda^n \left({\mathbf x + B}\right) = \lambda^n \left({B}\right)$ +where $\mathbf x + B$ is the set $\left\{{\mathbf x + \mathbf b: \mathbf b \in B}\right\}$. +\end{theorem} + +\begin{proof} +Denote with $\tau_{\mathbf x}: \R^n \to \R^n$ the [[Definition:Translation Mapping|translation by $\mathbf x$]]. +From [[Translation in Euclidean Space is Measurable Mapping]], $\tau_{\mathbf x}$ is [[Definition:Measurable Mapping|$\mathcal B \left({\R^n}\right) \, / \, \mathcal B \left({\R^n}\right)$-measurable]]. +Consider the [[Definition:Pushforward Measure|pushforward measure]] $\lambda^n_{\mathbf x} := \left({\tau_{\mathbf x}}\right)_* \lambda^n$ on $\mathcal B \left({\R^n}\right)$. +By [[Characterization of Euclidean Borel Sigma-Algebra]], it follows that: +:$\mathcal B \left({\R^n}\right) = \sigma \left({\mathcal{J}^n_{ho}}\right)$ +where $\sigma$ denotes [[Definition:Sigma-Algebra Generated by Collection of Subsets|generated $\sigma$-algebra]], and $\mathcal{J}^n_{ho}$ is the set of [[Definition:Half-Open Rectangle|half-open $n$-rectangles]]. +Let us verify the four conditions for [[Uniqueness of Measures]], applied to $\lambda^n$ and $\lambda^n_{\mathbf x}$. +Condition $(1)$ follows from [[Half-Open Rectangles Closed under Intersection]]. 
+Condition $(2)$ is achieved by the sequence of [[Definition:Half-Open Rectangle|half-open $n$-rectangles]] given by: +:$J_k := \left[{-k \,.\,.\, k}\right)^n$ +For condition $(3)$, let $\left[[{\mathbf a \,.\,.\, \mathbf b}\right)) \in \mathcal{J}^n_{ho}$ be a [[Definition:Half-Open Rectangle|half-open $n$-rectangle]]. +Since: +:$\tau_{\mathbf x}^{-1} \left({\left[[{\mathbf a \,.\,.\, \mathbf b}\right))}\right) = \mathbf x + \left[[{\mathbf a \,.\,.\, \mathbf b}\right)) = \left[[{\mathbf {a + x} \,.\,.\, \mathbf {b + x}}\right))$ +we have: +{{begin-eqn}} +{{eqn|l = \lambda^n_{\mathbf x} \left({\left[\left[{\mathbf a \,.\,.\, \mathbf b}\right)\right)}\right) + |r = \lambda^n \left({\left[\left[{\mathbf {a + x} \,.\,.\, \mathbf {b + x} }\right)\right)}\right) + |c = Definition of [[Definition:Pushforward Measure|pushforward measure]] +}} +{{eqn|r = \prod_{i \mathop = 1}^n \left({\left({b_i + x_i}\right) - \left({a_i + x_i}\right)}\right) + |c = Definition of [[Definition:Lebesgue Measure|Lebesgue measure]] +}} +{{eqn|r = \prod_{i \mathop = 1}^n \left({b_i - a_i}\right) +}} +{{eqn|r = \lambda^n \left({\left[\left[{\mathbf a \,.\,.\, \mathbf b}\right)\right)}\right) + |c = Definition of [[Definition:Lebesgue Measure|Lebesgue measure]] +}} +{{end-eqn}} +Finally, since: +:$\displaystyle \lambda^n \left({J_k}\right) = \prod_{i \mathop = 1}^n \left({k - \left({-k}\right)}\right) = \left({2 k}\right)^n$ +the last condition, $(4)$, is also satisfied. 
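The computation verifying condition $(3)$ can be sanity-checked numerically: translating a half-open rectangle leaves the product of its side lengths unchanged. This is an illustrative aside, not part of the proof; the sample rectangle and translation vector are ours.

```python
# Illustrative aside: translating a half-open n-rectangle [[a .. b)) by x
# does not change the product of its side lengths, mirroring the
# computation for condition (3).

from math import prod, isclose

def rect_volume(a, b):
    """Volume of [[a .. b)): the product of the side lengths b_i - a_i."""
    return prod(bi - ai for ai, bi in zip(a, b))

a, b = (0.0, -1.0, 2.5), (1.5, 2.0, 3.0)
x = (10.0, -3.25, 0.5)

a_shifted = tuple(ai + xi for ai, xi in zip(a, x))
b_shifted = tuple(bi + xi for bi, xi in zip(b, x))

assert isclose(rect_volume(a_shifted, b_shifted), rect_volume(a, b))
print(rect_volume(a, b))  # volume is preserved under translation
```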
+Whence [[Uniqueness of Measures]] implies that: +:$\lambda^n_{\mathbf x} = \lambda^n$ +and since for all $B \in \mathcal B \left({\R^n}\right)$ we have: +:$\mathbf x + B = \tau_{\mathbf x}^{-1} \left({B}\right)$ +this precisely boils down to: +:$\lambda^n \left({\mathbf x + B}\right) = \lambda^n \left({B}\right)$ +{{qed}} +\end{proof}<|endoftext|> +\section{Translation-Invariant Measure on Euclidean Space is Multiple of Lebesgue Measure} +Tags: Measure Theory + +\begin{theorem} +Let $\mu$ be a [[Definition:Measure (Measure Theory)|measure]] on $\R^n$ equipped with the [[Definition:Borel Sigma-Algebra|Borel $\sigma$-algebra]] $\mathcal B \left({\R^n}\right)$. +Suppose that $\mu$ is [[Definition:Translation-Invariant Measure|translation-invariant]]. +Also, suppose that $\kappa := \mu \left({\left[{0 \,.\,.\, 1}\right)^n }\right) < +\infty$. +Then $\mu = \kappa \lambda^n$, where $\lambda^n$ is the $n$-dimensional [[Definition:Lebesgue Measure|Lebesgue measure]]. +\end{theorem} + +\begin{proof} +From [[Characterization of Euclidean Borel Sigma-Algebra]], we have: +:$\mathcal B \left({\R^n}\right) = \sigma \left({\mathcal{J}^n_{ho,\text{rat}}}\right)$ +where $\mathcal{J}^n_{ho,\text{rat}}$ denotes the collection of [[Definition:Half-Open Rectangle|half-open $n$-rectangles]] with [[Definition:Rational Number|rational]] endpoints. +So let $J = \left[[{\mathbf a \,.\,.\, \mathbf b}\right)) \in \mathcal{J}^n_{ho,\text{rat}}$. +Let $M \in \N$ be a common denominator of the $a_i, b_i$ (which are [[Definition:Rational Number|rational]] by assumption). 
+We may then cover $J$ by finitely many [[Definition:Pairwise Disjoint|pairwise disjoint]] [[Definition:Half-Open Rectangle|half-open $n$-rectangles]] $\left[{0 \,.\,.\, \dfrac 1 M}\right)^n$, in that: +:$\displaystyle J = \bigcup_{i \mathop = 1}^{k \left({J}\right)} \mathbf x_i + \left[{0 \,.\,.\, \frac 1 M}\right)^n$ +for some $k \left({J}\right) \in \N$ and suitable $\mathbf x_i$, where: +:$\mathbf x_i + \left[{0 \,.\,.\, \dfrac 1 M}\right)^n := \left[\left[{\mathbf x_i \,.\,.\, \mathbf x_i + \dfrac 1 M}\right)\right)$ +Using that $\mu$ is [[Definition:Translation-Invariant Measure|translation-invariant]], this means: +:$\mu \left({J}\right) = k \left({J}\right) \, \mu \left({\left[{0 \,.\,.\, \dfrac 1 M}\right)^n}\right)$ +Also, by [[Lebesgue Measure is Translation-Invariant]]: +:$\lambda^n \left({J}\right) = k \left({J}\right) \, \lambda^n \left({\left[{0 \,.\,.\, \dfrac 1 M}\right)^n}\right)$ +A moment's thought shows us that the [[Definition:Half-Open Rectangle|half-open $n$-rectangle]] $I = \left[{0 \,.\,.\, 1}\right)^n$ may be covered by $M^n$ copies of $\left[{0 \,.\,.\, \dfrac 1 M}\right)^n$, so that: +:$k \left({I}\right) = M^n$ +For brevity, write $I/M$ for $\left[{0 \,.\,.\, \dfrac 1 M}\right)^n$. 
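The counting argument above can be checked on a concrete rational rectangle: with $M$ a common denominator of the endpoints, $J$ decomposes into $k \left({J}\right) = M^n \, \lambda^n \left({J}\right)$ cells of side $\dfrac 1 M$. The helper below is ours and purely illustrative.

```python
# Illustrative aside: for a rational half-open rectangle J = [[a .. b))
# whose endpoints have common denominator M, the number of disjoint
# translates of [0 .. 1/M)^n tiling J is k(J) = M^n * lambda^n(J).

from fractions import Fraction
from math import prod

def cell_count(a, b, M):
    """Number of side-1/M cells tiling [[a .. b)) (endpoints Fractions)."""
    sides = [(bi - ai) * M for ai, bi in zip(a, b)]
    assert all(s.denominator == 1 for s in sides), "M must be a common denominator"
    return prod(int(s) for s in sides)

M = 6
a = (Fraction(1, 3), Fraction(-1, 2))
b = (Fraction(5, 6), Fraction(1, 2))

k = cell_count(a, b, M)
volume = prod(bi - ai for ai, bi in zip(a, b))
assert Fraction(k, M ** len(a)) == volume   # k(J) / M^n == lambda^n(J)
print(k)  # 3 * 6 = 18 cells
```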
+Now, using that: +:$\displaystyle \lambda^n \left({I/M}\right) = \prod_{i \mathop = 1}^n \frac 1 M = \frac 1 {M^n}$ +and $\mu \left({I}\right) = \kappa$, compute: +{{begin-eqn}} +{{eqn|l = \mu \left({J}\right) + |r = k \left({J}\right) \mu \left({I/M}\right) + |c = Definition of $k \left({J}\right)$ +}} +{{eqn|r = \frac {k \left({J}\right)} {M^n} \left({M^n \mu \left({I/M}\right)}\right) + |c = $M^n / M^n = 1$ +}} +{{eqn|r = \frac {k \left({J}\right)} {M^n} \mu \left({I}\right) + |c = $M^n = k \left({I}\right)$ +}} +{{eqn|r = \frac {\kappa \, k \left({J}\right)} {M^n} + |c = Definition of $\kappa$ +}} +{{eqn|r = \kappa \, k \left({J}\right) \, \lambda^n \left({I/M}\right) +}} +{{eqn|r = \kappa \, \lambda^n \left({J}\right) + |c = Definition of $k \left({J}\right)$ +}} +{{end-eqn}} +Therefore, $\mu \left({J}\right) = \kappa \, \lambda^n \left({J}\right)$ for all $J \in \mathcal{J}^n_{ho,\text{rat}}$. +Let us quickly verify the other conditions for [[Uniqueness of Measures]]. +For $(1)$, we have [[Half-Open Rectangles Closed under Intersection]]. +For $(2)$, observe the [[Definition:Exhausting Sequence of Sets|exhausting sequence]] $\left[{-k \,.\,.\, k}\right)^n \mathop \uparrow \R^n$ +Finally, for $(4)$, we recall that $\kappa < +\infty$. +Thus, by [[Uniqueness of Measures]], $\mu = \kappa \lambda^n$. +{{qed}} +\end{proof}<|endoftext|> +\section{Dynkin System Closed under Set Difference with Subset} +Tags: Dynkin Systems + +\begin{theorem} +Let $X$ be a [[Definition:Set|set]], and let $\mathcal D$ be a [[Definition:Dynkin System|Dynkin system]] on $X$. +Let $D, E \in \mathcal D$ and suppose that $E \subseteq D$. +Then the [[Definition:Set Difference|set difference]] $D \setminus E$ is also an element of $\mathcal D$. +\end{theorem} + +\begin{proof} +For brevity, write for example $E^c$ for $\complement_X \left({E}\right) = X \setminus E$. 
+We reason as follows: +{{begin-eqn}} +{{eqn | l = D \setminus E + | r = D \cap E^c + | c = [[Set Difference as Intersection with Relative Complement]] +}} +{{eqn | r = \left({D^c \cup E}\right)^c + | c = [[De Morgan's Laws (Set Theory)/Relative Complement/Complement of Union|De Morgan's Laws: Complement of Union]], [[Relative Complement of Relative Complement]] +}} +{{end-eqn}} +Now this implies that $D \setminus E \in \mathcal D$ {{iff}} $D^c \cup E \in \mathcal D$. +It is already known that $D^c$ and $E$ are in $\mathcal D$ by axiom $(2)$ for a [[Definition:Dynkin System|Dynkin system]]. +Since $E \subseteq D$, it follows that $D^c \cap E = \varnothing$, and thus [[Dynkin System Closed under Disjoint Union]] applies to give: +:$D^c \cup E \in \mathcal D$ +which, combined with above reasoning, yields $D \setminus E \in \mathcal D$. +{{qed}} +\end{proof}<|endoftext|> +\section{Open Rectangles Closed under Intersection} +Tags: Analysis + +\begin{theorem} +Let $\left(({\mathbf a \,.\,.\, \mathbf b}\right))$ and $\left(({\mathbf c \,.\,.\, \mathbf d}\right))$ be [[Definition:Open Rectangle|open $n$-rectangles]]. +Then $\left(({\mathbf a \,.\,.\, \mathbf b}\right)) \cap \left(({\mathbf c \,.\,.\, \mathbf d}\right))$ is also an [[Definition:Open Rectangle|open $n$-rectangle]]. +\end{theorem} + +\begin{proof} +From [[Cartesian Product of Intersections/General Case|Cartesian Product of Intersections: General Case]], we have: +:$\displaystyle \left(({\mathbf a \,.\,.\, \mathbf b}\right)) \cap \left(({\mathbf c \,.\,.\, \mathbf d}\right)) = \prod_{i \mathop = 1}^n \left({a_i \,.\,.\, b_i}\right) \cap \left({c_i \,.\,.\, d_i}\right)$ +Therefore, it suffices to show that the [[Definition:Set Intersection|intersection]] of two [[Definition:Open Real Interval|open intervals]] is again an [[Definition:Open Real Interval|open interval]]. +Now let $x \in \left({a_i \,.\,.\, b_i}\right) \cap \left({c_i \,.\,.\, d_i}\right)$. 
+Then $x$ is subject to:
+* $x > a_i$ and $x > c_i$, i.e., $x > \max \left\{{a_i, c_i}\right\}$
+* $x < b_i$ and $x < d_i$, i.e., $x < \min \left\{{b_i, d_i}\right\}$
+and we see that these conditions are satisfied precisely when:
+:$x \in \left({\max \left\{{a_i, c_i}\right\} \,.\,.\, \min \left\{{b_i, d_i}\right\}}\right)$
+Thus, we conclude:
+:$\left({a_i \,.\,.\, b_i}\right) \cap \left({c_i \,.\,.\, d_i}\right) = \left({\max \left\{{a_i, c_i}\right\} \,.\,.\, \min \left\{{b_i, d_i}\right\}}\right)$
+showing that indeed the intersection is an [[Definition:Open Real Interval|open interval]].
+Combining this with the above reasoning, it follows that indeed:
+:$\left(({\mathbf a \,.\,.\, \mathbf b}\right)) \cap \left(({\mathbf c \,.\,.\, \mathbf d}\right))$
+is an [[Definition:Open Rectangle|open $n$-rectangle]].
+{{qed}}
+[[Category:Analysis]]
+\end{proof}<|endoftext|>
+\section{Euclidean Borel Sigma-Algebra Closed under Scalar Multiplication}
+Tags: Measure Theory
+
+\begin{theorem}
+Let $\mathcal B \left({\R^n}\right)$ be the [[Definition:Borel Sigma-Algebra|Borel $\sigma$-algebra]] on $\R^n$.
+Let $B \in \mathcal B$, and let $t \in \R_{>0}$.
+Then also $t \cdot B := \left\{{t \mathbf b: \mathbf b \in B}\right\} \in \mathcal B$.
+\end{theorem}
+
+\begin{proof}
+Define $f: \R^n \to \R^n$ by $f \left({\mathbf x}\right) := \dfrac 1 t \mathbf x$.
+Then for all $\mathbf x \in \R^n$, $f^{-1} \left({\mathbf x}\right) = \left\{{t \mathbf x}\right\}$, where $f^{-1}$ denotes the [[Definition:Preimage of Mapping|preimage]] of $f$.
+Thus $t \cdot B = \displaystyle \bigcup_{\mathbf b \mathop \in B} f^{-1} \left({\mathbf b}\right) = f^{-1} \left({B}\right)$, where the last equality holds by definition of [[Definition:Preimage of Subset under Mapping|preimage]].
+It follows that the statement of the theorem comes down to showing that $f$ is [[Definition:Measurable Mapping|$\mathcal B \, / \, \mathcal B$-measurable]].
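As an illustrative aside (not part of the proof), the identity $t \cdot B = f^{-1} \left({B}\right)$ for $f \left({\mathbf x}\right) = \dfrac 1 t \mathbf x$ can be sanity-checked on a finite grid standing in for $\R^2$ and a finite point set standing in for $B$; the grid, sample points and value of $t$ below are ours.

```python
# Illustrative aside: t . B = f^{-1}(B) for f(x) = x / t, checked on a
# finite dyadic grid standing in for R^2 (exact float arithmetic here).

t = 2.0
B = {(0.0, 0.5), (1.0, -0.25), (-0.5, 0.75)}   # finite stand-in for B

# A fine grid of dyadic points covering [-3, 3]^2:
universe = [(i / 4, j / 4) for i in range(-12, 13) for j in range(-12, 13)]

scaled = {(t * p, t * q) for (p, q) in B}                          # t . B
preimage = {(p, q) for (p, q) in universe if (p / t, q / t) in B}  # f^{-1}(B)

assert preimage == scaled
print(len(preimage))  # exactly the scaled copies of the points of B
```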
+By [[Characterization of Euclidean Borel Sigma-Algebra]] the [[Definition:Half-Open Rectangle|half-open $n$-rectangles]] $\mathcal{J}_{ho}^n$ [[Definition:Generator for Sigma-Algebra|generate]] $\mathcal B$. +Applying [[Mapping Measurable iff Measurable on Generator]], it suffices to demonstrate: +:$\forall J \in \mathcal{J}_{ho}^n: f^{-1} \left({J}\right) \in \mathcal B$ +Now for a [[Definition:Half-Open Rectangle|half-open $n$-rectangle]] $J = \left[[{\mathbf a \,.\,.\, \mathbf b}\right))$, it follows from the consideration on $f^{-1}$ above, that: +:$f^{-1} \left({J}\right) = \left[[{t \mathbf a \,.\,.\, t \mathbf b}\right))$ +which is again in $\mathcal{J}_{ho}^n$ and so, in particular, in $\mathcal B$. +Thus $f$ is [[Definition:Measurable Mapping|$\mathcal B \, / \, \mathcal B$-measurable]], i.e.: +:$\forall B \in \mathcal B: t \cdot B = f^{-1} \left({B}\right) \in \mathcal B$ +which was to be demonstrated. +{{qed}} +\end{proof}<|endoftext|> +\section{Lebesgue Measure of Scalar Multiple} +Tags: Measure Theory + +\begin{theorem} +Let $\lambda^n$ be the $n$-dimensional [[Definition:Lebesgue Measure|Lebesgue measure]] on $\R^n$ equipped with the [[Definition:Borel Sigma-Algebra|Borel $\sigma$-algebra]] $\mathcal B \left({\R^n}\right)$. +Let $B \in \mathcal B$, and let $t \in \R_{>0}$. +Then $\lambda^n \left({t \cdot B}\right) = t^n \lambda^n \left({B}\right)$, where $t \cdot B$ is the set $\left\{{t \mathbf b: \mathbf b \in B}\right\}$. +\end{theorem} + +\begin{proof} +It follows from [[Rescaling is Linear Transformation]] that the [[Definition:Mapping|mapping]] $\mathbf x \mapsto t \mathbf x$ is a [[Definition:Linear Transformation|linear transformation]]. +Denote $t \, \mathbf I_n$ for the [[Definition:Matrix|matrix]] associated to this [[Definition:Linear Transformation|linear transformation]] by [[Linear Transformation as Matrix Product]]. 
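As an aside, the scaling behaviour claimed by the theorem can be verified directly for a half-open rectangle, using that $t \cdot \left[\left[{\mathbf a \,.\,.\, \mathbf b}\right)\right) = \left[\left[{t \mathbf a \,.\,.\, t \mathbf b}\right)\right)$ for $t > 0$; the check below is ours and not part of the proof.

```python
# Illustrative aside: lambda^n(t . [[a .. b))) = t^n * lambda^n([[a .. b)))
# checked directly, with volumes computed as products of side lengths.

from math import prod, isclose

def volume(a, b):
    """Product of the side lengths of the rectangle [[a .. b))."""
    return prod(bi - ai for ai, bi in zip(a, b))

t = 3.0
a, b = (0.0, -1.0), (2.0, 1.5)
ta = tuple(t * ai for ai in a)
tb = tuple(t * bi for bi in b)

n = len(a)
assert isclose(volume(ta, tb), t ** n * volume(a, b))
print(volume(ta, tb))  # t^n times the original volume
```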
+From [[Determinant of Rescaling Matrix]], it follows that: +:$\det \left({t \, \mathbf I_n}\right) = t^n$ +From [[Inverse of Rescaling Matrix]], $t \, \mathbf I_n$ is the [[Definition:Inverse Matrix|inverse]] of $t^{-1} \mathbf I_n$. +Thus, it follows that: +{{begin-eqn}} +{{eqn|l = \lambda^n \left({t \cdot B}\right) + |r = \lambda^n \left({\left({t \, \mathbf I_n }\right) \left({B}\right)}\right) +}} +{{eqn|r = \lambda^n \left({\left({t^{-1} \, \mathbf I_n }\right)^{-1} \left({B}\right)}\right) + |c = [[Inverse of Group Inverse]] +}} +{{eqn|r = \left({t^{-1} \, \mathbf I_n }\right)_* \lambda^n \left({B}\right) + |c = Definition of [[Definition:Pushforward Measure|pushforward measure]] +}} +{{eqn|r = \left\vert{\det \, \left({ \left({t^{-1} \, \mathbf I_n }\right)^{-1} }\right) }\right\vert \cdot \lambda^n \left({B}\right) + |c = [[Pushforward of Lebesgue Measure under General Linear Group]] +}} +{{end-eqn}} +{{explain|For [[Inverse of Group Inverse]] to be used, it has to be established that it is indeed a group.}} +Now recall $\det \, \left({\left({t^{-1} \, \mathbf I_n }\right)^{-1}}\right) = \det \, \left({t \, \mathbf I_n}\right) = t^n$. +Since $t > 0$, $\left\vert{t^n}\right\vert = t^n$, and the result follows. +{{qed}} +\end{proof}<|endoftext|> +\section{Measure Invariant on Generator is Invariant} +Tags: Measure Theory + +\begin{theorem} +Let $\left({X, \Sigma, \mu}\right)$ be a [[Definition:Sigma-Finite Measure Space|$\sigma$-finite measure space]]. +Let $\theta: X \to X$ be an [[Definition:Measurable Mapping|$\Sigma / \Sigma$-measurable mapping]]. +Suppose that $\Sigma$ is [[Definition:Sigma-Algebra Generated by Collection of Subsets|generated]] by $\mathcal G \subseteq \mathcal P \left({X}\right)$. 
+Also, let $\mathcal G$ satisfy the following: +:$(1):\quad \forall G, H \in \mathcal G: G \cap H \in \mathcal G$ +:$(2):\quad$ There exists an [[Definition:Exhausting Sequence of Sets|exhausting sequence]] $\left({G_n}\right)_{n \in \N} \uparrow X$ in $\mathcal G$ such that: +:::$\quad \forall n \in \N: \mu \left({G_n}\right) < +\infty$ +Suppose furthermore that, for all $G \in \mathcal G$, $\mu$ satisfies: +:$(3):\quad \mu \left({\theta^{-1} \left({G}\right) }\right) = \mu \left({G}\right)$ +Then $\mu$ is a [[Definition:Invariant Measure|$\theta$-invariant measure]]. +\end{theorem} + +\begin{proof} +Consider the [[Definition:Pushforward Measure|pushforward measure]] $\theta_* \mu$ on $\left({X, \Sigma}\right)$. +By definition, this makes equation $(3)$ come down to: +:$\theta_* \mu \restriction_{\mathcal G} = \mu \restriction_{\mathcal G}$ +where $\restriction$ denotes [[Definition:Restriction of Mapping|restriction]]. +The suppositions $(1)$, $(2)$ and $(3)$ together constitute precisely the prerequisites to [[Uniqueness of Measures]]. +Hence $\theta_* \mu = \mu$, i.e., $\mu$ is [[Definition:Invariant Measure|$\theta$-invariant]]. +{{qed}} +\end{proof}<|endoftext|> +\section{Sigma-Algebras with Independent Generators are Independent} +Tags: Measure Theory + +\begin{theorem} +Let $\struct {\Omega, \EE, \Pr}$ be a [[Definition:Probability Space|probability space]]. +Let $\Sigma, \Sigma'$ be [[Definition:Sub-Sigma-Algebra|sub-$\sigma$-algebras]] of $\EE$. +Suppose that $\GG, \HH$ are [[Definition:Stable under Intersection|$\cap$-stable]] [[Definition:Sigma-Algebra Generated by Collection of Subsets/Generator|generators]] for $\Sigma, \Sigma'$, respectively. +Suppose that, for all $G \in \GG, H \in \HH$: +:$(1): \quad \map \Pr {G \cap H} = \map \Pr G \map \Pr H$ +Then $\Sigma$ and $\Sigma'$ are [[Definition:Independent Sigma-Algebras|$\Pr$-independent]]. +\end{theorem} + +\begin{proof} +Fix $H \in \HH$. 
+Define, for $E \in \Sigma$:
+:$\map \mu E := \map \Pr {E \cap H}$
+:$\map \nu E := \map \Pr E \map \Pr H$
+Then by [[Intersection Measure is Measure]] and [[Restricted Measure is Measure]], $\mu$ is a [[Definition:Measure (Measure Theory)|measure]] on $\Sigma$.
+Namely, it is the [[Definition:Intersection Measure|intersection measure]] $\Pr_H$ [[Definition:Restricted Measure|restricted]] to $\Sigma$, that is $\Pr_H \restriction_\Sigma$.
+Next, by [[Linear Combination of Measures]] and [[Restricted Measure is Measure]], $\nu$ is also a [[Definition:Measure (Measure Theory)|measure]] on $\Sigma$.
+Namely, it is the [[Definition:Restricted Measure|restricted measure]] $\map \Pr H \Pr \restriction_\Sigma$.
+Let $\GG' := \GG \cup \set \Omega$.
+It is immediate that $\GG'$ is also a [[Definition:Stable under Intersection|$\cap$-stable]] [[Definition:Sigma-Algebra Generated by Collection of Subsets/Generator|generator]] for $\Sigma$.
+By assumption $(1)$, $\mu$ and $\nu$ coincide on $\GG$; they also coincide on $\Omega$ itself, since $\map \Pr \Omega = 1$ gives $\map \mu \Omega = \map \Pr H = \map \nu \Omega$.
+Hence $\mu$ and $\nu$ coincide on $\GG'$.
+From [[Restricting Measure Preserves Finiteness]], $\mu$ and $\nu$ are also [[Definition:Finite Measure|finite measures]].
+Further, $\GG'$ contains the [[Definition:Exhausting Sequence of Sets|exhausting sequence]] of which every term equals $\Omega$.
+Having verified all conditions, [[Uniqueness of Measures]] applies to yield $\mu = \nu$.
+Now fix $E \in \Sigma$ and define, for $E' \in \Sigma'$:
+:$\map {\mu'_E} {E'} := \map \Pr {E \cap E'}$
+:$\map {\nu'_E} {E'} := \map \Pr E \map \Pr {E'}$
+[[Definition:Mutatis Mutandis|Mutatis mutandis]], the above consideration applies again, and we conclude by [[Uniqueness of Measures]]:
+:$\mu'_E = \nu'_E$
+for all $E \in \Sigma$.
+That is, expanding the definition of the [[Definition:Measure (Measure Theory)|measures]] $\mu'_E$ and $\nu'_E$: +:$\forall E \in \Sigma: \forall E' \in \Sigma': \map \Pr {E \cap E'} = \map \Pr E \map \Pr {E'}$ +This is precisely the statement that $\Sigma$ and $\Sigma'$ are [[Definition:Independent Sigma-Algebras|$\Pr$-independent $\sigma$-algebras]]. +{{qed}} +\end{proof}<|endoftext|> +\section{Coset Product of Normal Subgroup is Consistent with Subset Product Definition} +Tags: Coset Product + +\begin{theorem} +Let $\struct {G, \circ}$ be a [[Definition:Group|group]]. +Let $N$ be a [[Definition:Normal Subgroup|normal subgroup]] of $G$. +Let $a, b \in G$. +Let $a \circ N$ and $b \circ N$ be the [[Definition:Left Coset|left cosets]] of $a$ and $b$ by $N$. +Then the [[Definition:Coset Product|coset product]]: +:$\paren {a \circ N} \circ \paren {b \circ N} = \paren {a \circ b} \circ N$ +is consistent with the definition of the coset product as the [[Definition:Subset Product|subset product]] of $a \circ N$ and $b \circ N$: +:$\paren {a \circ N} \paren {b \circ N} = \set {x \circ y: x \in a \circ N, y \in b \circ N}$ +\end{theorem} + +\begin{proof} +Consider the [[Definition:Set|set]]: +:$\paren {a \circ N} \circ \paren {b \circ N} = \set {x \circ y: x \in a \circ N, y \in b \circ N}$ +As $e \in N$, have: +:$\paren {a \circ b} \circ N = \paren {a \circ e} \circ \paren {b \circ N} \subseteq \paren {a \circ N} \circ \paren {b \circ N}$ +by {{GroupAxiom|1}} of $\circ$. +Hence $\paren {a \circ b} \circ N \subseteq \paren {a \circ N} \circ \paren {b \circ N}$. +Now let $x \in a \circ N$ and $y \in b \circ N$. 
+Then by the definition of [[Definition:Subset Product|subset product]]:
+:$\exists n_1 \in N: x = a \circ n_1$
+:$\exists n_2 \in N: y = b \circ n_2$
+It follows that:
+{{begin-eqn}}
+{{eqn | l = x \circ y
+ | r = \paren {a \circ n_1} \circ \paren {b \circ n_2}
+ | c = 
+}}
+{{eqn | r = \paren {a \circ n_1} \circ \paren {n_3 \circ b}
+ | c = for some $n_3 \in N$: {{Defof|Normal Subgroup}}
+}}
+{{eqn | r = a \circ \paren {\paren {n_1 \circ n_3} \circ b}
+ | c = {{GroupAxiom|1}}
+}}
+{{eqn | r = a \circ \paren {n_4 \circ b}
+ | c = for some $n_4 \in N$: {{Defof|Subgroup}}
+}}
+{{eqn | r = a \circ \paren {b \circ n_5}
+ | c = for some $n_5 \in N$: {{Defof|Normal Subgroup}}
+}}
+{{eqn | r = \paren {a \circ b} \circ n_5
+ | c = {{GroupAxiom|1}}
+}}
+{{eqn | o = \in
+ | r = \paren {a \circ b} \circ N
+ | c = {{Defof|Subset Product}}
+}}
+{{end-eqn}}
+So the definition by [[Definition:Subset Product|subset product]]:
+:$\paren {a \circ N} \circ \paren {b \circ N} = \set {x \circ y: x \in a \circ N, y \in b \circ N}$
+leads to the definition of [[Definition:Coset Product|coset product]] as:
+:$\paren {a \circ N} \circ \paren {b \circ N} = \paren {a \circ b} \circ N$
+{{qed}}
+[[Category:Coset Product]]
+\end{proof}<|endoftext|>
+\section{Congruence Relation iff Compatible with Operation}
+Tags: Congruence Relations, Compatible Relations, Congruence Relation iff Compatible with Operation
+
+\begin{theorem}
+Let $\struct {S, \circ}$ be an [[Definition:Algebraic Structure|algebraic structure]].
+Let $\RR$ be an [[Definition:Equivalence Relation|equivalence relation]] on $S$.
+Then $\RR$ is a [[Definition:Congruence Relation|congruence relation]] for $\circ$ {{iff}}:
+{{begin-eqn}}
+{{eqn | ll= \forall x, y, z \in S:
+ | l = x \mathrel \RR y
+ | o = \implies
+ | r = \paren {x \circ z} \mathrel \RR \paren {y \circ z}
+}}
+{{eqn | l = x \mathrel \RR y
+ | o = \implies
+ | r = \paren {z \circ x} \mathrel \RR \paren {z \circ y}
+}}
+{{end-eqn}}
+That is, {{iff}} $\RR$ is [[Definition:Relation Compatible with Operation|compatible with $\circ$]].
+\end{theorem}

+\begin{proof}
+=== Necessary Condition ===
+Let $\RR$ be a [[Definition:Congruence Relation|congruence relation]] for $\circ$.
+That is:
+:$\forall x_1, x_2, y_1, y_2 \in S: x_1 \mathrel \RR x_2 \land y_1 \mathrel \RR y_2 \implies \paren {x_1 \circ y_1} \mathrel \RR \paren {x_2 \circ y_2}$
+As $\RR$ is an [[Definition:Equivalence Relation|equivalence relation]] it is by definition [[Definition:Reflexive Relation|reflexive]].
+That is:
+:$\forall z \in S: z \mathrel \RR z$
+Make the substitutions:
+{{begin-eqn}}
+{{eqn | l = x_1
+ | o = \to
+ | r = x
+}}
+{{eqn | l = x_2
+ | o = \to
+ | r = y
+}}
+{{eqn | l = y_1
+ | o = \to
+ | r = z
+}}
+{{eqn | l = y_2
+ | o = \to
+ | r = z
+}}
+{{end-eqn}}
+It follows that:
+:$\forall x, y, z \in S: x \mathrel \RR y \implies \paren {x \circ z} \mathrel \RR \paren {y \circ z}$
+Similarly, make the substitutions:
+{{begin-eqn}}
+{{eqn | l = x_1
+ | o = \to
+ | r = z
+}}
+{{eqn | l = x_2
+ | o = \to
+ | r = z
+}}
+{{eqn | l = y_1
+ | o = \to
+ | r = x
+}}
+{{eqn | l = y_2
+ | o = \to
+ | r = y
+}}
+{{end-eqn}}
+It follows that:
+:$\forall x, y, z \in S: x \mathrel \RR y \implies \paren {z \circ x} \mathrel \RR \paren {z \circ y}$
+{{qed|lemma}}
+=== Sufficient Condition ===
+Now suppose that $\RR$ is such that:
+{{begin-eqn}}
+{{eqn | ll= \forall x, y, z \in S:
+ | l = x \mathrel \RR y
+ | o = \implies
+ | r = \paren {x \circ z} \mathrel \RR \paren {y \circ z}
+}}
+{{eqn | l = x \mathrel \RR y
+ | o = \implies
+ | r = \paren {z \circ x} \mathrel \RR 
\paren {z \circ y} +}} +{{end-eqn}} +Then we have: +{{begin-eqn}} +{{eqn | ll= \forall x_1, x_2, y_1, y_2 \in S: + | l = x_1 \mathrel \RR y_1 + | o = \implies + | r = \paren {x_1 \circ x_2} \mathrel \RR \paren {y_1 \circ x_2} +}} +{{eqn | l = x_2 \mathrel \RR y_2 + | o = \implies + | r = \paren {y_1 \circ x_2} \mathrel \RR \paren {y_1 \circ y_2} +}} +{{end-eqn}} +As $\RR$ is an [[Definition:Equivalence Relation|equivalence relation]] it is by definition [[Definition:Transitive Relation|transitive]]. +Thus it follows that: +:$\paren {x_1 \circ x_2} \mathrel \RR \paren {y_1 \circ y_2}$ +{{qed|lemma}} +The result follows. +{{qed}} +\end{proof} + +\begin{proof} +We have that an [[Symmetric Preordering is Equivalence Relation|equivalence relation is a (symmetric) preordering]]. +Thus the result [[Preordering of Products under Operation Compatible with Preordering]] can be applied directly. +{{qed}} +\end{proof}<|endoftext|> +\section{Outer Measure of Limit of Increasing Sequence of Sets} +Tags: Measure Theory + +\begin{theorem} +Let $\mu^*$ be an [[Definition:Outer Measure|outer measure]] on a [[Definition:Set|set]] $X$. +Let $\left\langle{S_n}\right\rangle$ be an [[Definition:Increasing Sequence of Sets|increasing sequence]] of [[Definition:Measurable Set#Measurable Sets of an Arbitrary Outer Measure|$\mu^*$-measurable sets]], and let $S_n \uparrow S$ (as $n \to \infty$). +Then for any [[Definition:Subset|subset]] $A \subseteq X$: +: $\displaystyle \mu^* \left({A \cap S}\right) = \lim_{n \to \infty} \mu^* \left({A \cap S_n}\right)$ +\end{theorem} + +\begin{proof} +By the [[Definition:Monotone (Measure Theory)|monotonicity]] of $\mu^*$, it suffices to prove that: +:$\ds \map {\mu^*} {A \cap S} \le \lim_{n \mathop \to \infty} \map {\mu^*} {A \cap S_n}$ +Assume that $\map {\mu^*} {A \cap S_n}$ is [[Definition:Finite|finite]] for all $n \in \N$, otherwise the statement is trivial by the [[Definition:Monotone (Measure Theory)|monotonicity]] of $\mu^*$. +Let $S_0 = \O$. 
+Then $x \in S$ {{iff}} there exists an [[Definition:Integer|integer]] $n \ge 0$ such that $x \in S_{n + 1}$. +Taking the least possible $n$, it follows that $x \notin S_n$, and so: +:$x \in S_{n + 1} \setminus S_n$ +Therefore: +:$\ds S = \bigcup_{n \mathop = 0}^\infty \paren {S_{n + 1} \setminus S_n}$ +From [[Intersection Distributes over Union]]: +:$\ds A \cap S = A \cap \bigcup_{n \mathop = 0}^\infty \paren {S_{n + 1} \setminus S_n} = \bigcup_{n \mathop = 0}^\infty \paren {A \cap \paren {S_{n + 1} \setminus S_n} }$ +Therefore: +{{begin-eqn}} +{{eqn | l = \map {\mu^*} {A \cap S} + | o = \le + | r = \sum_{n \mathop = 0}^\infty \map {\mu^*} {A \cap \paren {S_{n + 1} \setminus S_n} } + | c = {{Defof|Countably Subadditive Function}} +}} +{{eqn | r = \sum_{n \mathop = 0}^\infty \paren {\map {\mu^*} {A \cap S_{n + 1} } - \map {\mu^*} {A \cap S_{n + 1} \cap S_n} } + | c = {{Defof|Measurable Set of Arbitrary Outer Measure}} +}} +{{eqn | r = \sum_{n \mathop = 0}^\infty \paren {\map {\mu^*} {A \cap S_{n + 1} } - \map {\mu^*} {A \cap S_n} } + | c = [[Intersection with Subset is Subset]] +}} +{{eqn | r = \lim_{n \mathop \to \infty} \map {\mu^*} {A \cap S_n} - \map {\mu^*} {A \cap \O} + | c = [[Telescoping Series/Example 2|Telescoping Series]] +}} +{{eqn | r = \lim_{n \mathop \to \infty} \map {\mu^*} {A \cap S_n} + | c = [[Intersection with Empty Set]] and {{Defof|Outer Measure}} +}} +{{end-eqn}} +{{qed}} +\end{proof}<|endoftext|> +\section{Half-Open Rectangles form Semiring of Sets} +Tags: Semirings of Sets + +\begin{theorem} +The [[Definition:Half-Open Rectangle|half-open $n$-rectangles]] $\JJ_{ho}^n$ form a [[Definition:Semiring of Sets|semiring of sets]]. +\end{theorem} + +\begin{proof} +By definition, $\O$ is considered to be a [[Definition:Half-Open Rectangle|half-open $n$-rectangle]]. +That $\JJ_{ho}^n$ is [[Definition:Stable under Intersection|$\cap$-stable]] follows from [[Half-Open Rectangles Closed under Intersection]]. 
+Thus, it remains to show condition $(3')$ for a [[Definition:Semiring of Sets|semiring of sets]]: +:$(3'):\quad$ If $A, B \in \JJ_{ho}^n$, then there exists a [[Definition:Finite Sequence|finite sequence]] of [[Definition:Pairwise Disjoint|pairwise disjoint]] [[Definition:Half-Open Rectangle|half-open $n$-rectangles]] $A_1, A_2, \ldots, A_m \in \JJ_{ho}^n$ such that $\displaystyle A \setminus B = \bigcup_{k \mathop = 1}^m A_k$. +To prove this, proceed by [[Principle of Mathematical Induction|induction]] on $n$: +=== Basis for the Induction === +Suppose that $n = 1$, and let $I := \hointr a b$ and $J := \hointr c d$ be [[Definition:Half-Open Real Interval|half-open intervals]]. +It is to be demonstrated that $I \setminus J$ is a [[Definition:Finite Union|finite union]] of [[Definition:Pairwise Disjoint|pairwise disjoint]] [[Definition:Half-Open Real Interval|half-open intervals]]. +Equivalently, by the above verification of the other axioms, that $\JJ_{ho}^1$ is a [[Definition:Semiring of Sets|semiring of sets]]. +By swapping the roles of $I$ and $J$ if necessary, it may be arranged that $a \le c$. +Now if $b \le c$ as well, then $I \cap J = \O$. +Subsequently, this implies $I \setminus J = I$, and the statement trivially holds. +Next, consider the other case, i.e., $c < b$. +If also $d < b$, then we may write $I$ as the following [[Definition:Disjoint Union (Probability Theory)|disjoint union]]: +:$I = \hointr a b = \hointr a c \cup \hointr c d \cup \hointr d b$ +The middle term equals $J$, and we immediately obtain: +:$I \setminus J = \hointr a c \cup \hointr d b$ +verifying the statement in this case. +Lastly, suppose that $b \le d$. +Then $I \cap J = \hointr a b \cap \hointr c d = \hointr c b$. +Therefore, by [[Set Difference with Intersection is Difference]]: +:$I \setminus J = I \setminus \paren {I \cap J} = \hointr a b \setminus \hointr c b = \hointr a c$ +and, having verified this last case, the result follows from [[Proof by Cases]]. 
+This constitutes the [[Definition:Basis for the Induction|induction basis]]. +{{qed|lemma}} +=== Induction Hypothesis === +Now assume the [[Definition:Induction Hypothesis|induction hypothesis]]. +That is, for some fixed $n \ge 1$, assume that $\JJ_{ho}^n$ is a [[Definition:Semiring of Sets|semiring of sets]]. +Next, it is to be shown that $\JJ_{ho}^{n + 1}$ is also a [[Definition:Semiring of Sets|semiring of sets]]. +=== Induction Step === +The [[Definition:Induction Step|induction step]] goes as follows. +By definition of [[Definition:Half-Open Rectangle|half-open rectangle]], it holds that: +:$\JJ_{ho}^{n+1} = \JJ_{ho}^n \times \JJ_{ho}^1$ +Further, we have that $\JJ_{ho}^n$ and $\JJ_{ho}^1$ are [[Definition:Semiring of Sets|semirings of sets]]. +Hence by [[Cartesian Product of Semirings of Sets]], $\JJ_{ho}^{n + 1}$ is a [[Definition:Semiring of Sets|semiring of sets]], too. +{{qed}} +\end{proof}<|endoftext|> +\section{Lebesgue Pre-Measure is Pre-Measure} +Tags: Measure Theory + +\begin{theorem} +The [[Definition:Lebesgue Pre-Measure|Lebesgue pre-measure]] $\lambda^n$ on the [[Definition:Half-Open Rectangle|half-open $n$-rectangles]] $\mathcal{J}_{ho}^n$ is a [[Definition:Pre-Measure|pre-measure]]. +\end{theorem} + +\begin{proof} +We employ [[Characterization of Pre-Measures]]. +It is known that $\lambda^n \left({\varnothing}\right) = 0$ by definition of [[Definition:Lebesgue Pre-Measure|Lebesgue pre-measure]]. 
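As a concrete aside (not part of the proof), the value that $\lambda^n$ assigns to a nonempty half-open $n$-rectangle $\left[[{\mathbf a \,.\,.\, \mathbf b}\right))$ is the product $\prod_{i \mathop = 1}^n \left({b_i - a_i}\right)$ of its side lengths. A minimal Python sketch, with illustrative names:

```python
from math import prod

def lebesgue_pre_measure(rect):
    """lambda^n of [a .. b), given as per-coordinate (a_i, b_i) pairs.

    None encodes the empty rectangle, which has pre-measure 0."""
    if rect is None:
        return 0.0
    return prod(b - a for a, b in rect)

print(lebesgue_pre_measure([(0, 2), (0, 3)]))  # 6 (area of [0, 2) x [0, 3))
print(lebesgue_pre_measure(None))              # 0.0
```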
+The only possibility for two [[Definition:Disjoint Sets|disjoint]] [[Definition:Half-Open Rectangle|half-open $n$-rectangles]] to constitute a single, large [[Definition:Half-Open Rectangle|half-open $n$-rectangle]] is when they are of the form:
+:$\left[[{\mathbf a \,.\,.\, \mathbf b}\right)) \quad \text {and} \quad \left[[{\mathbf a' \,.\,.\, \mathbf b'}\right))$
+such that we have for some $i$ with $1 \le i \le n$:
+:$i \ne j \implies a_j = a'_j$
+:$i \ne j \implies b_j = b'_j$
+:$i = j \implies a'_j = b_j$
+which intuitively can be visualised as two cubes that together form one large bar, namely $\left[[{\mathbf a \,.\,.\, \mathbf b'}\right))$.
+In this situation, we have:
+{{begin-eqn}}
+{{eqn|l = \lambda^n \left({\left[\left[{\mathbf a \,.\,.\, \mathbf b}\right)\right)}\right) + \lambda^n \left({\left[\left[{\mathbf a' \,.\,.\, \mathbf b'}\right)\right)}\right)
+ |r = \prod_{j \mathop = 1}^n \left({b_j - a_j}\right) + \prod_{j \mathop = 1}^n \left({b'_j - a'_j}\right)
+ |c = By definition of [[Definition:Lebesgue Pre-Measure|Lebesgue pre-measure]]
+}}
+{{eqn|r = \left({b_i - a_i + b'_i - a'_i}\right) \prod_{\substack{j \mathop = 1 \\ j \mathop \ne i} }^n \left({b_j - a_j}\right)
+ |c = By the noted properties of $a_j, b_j, a'_j, b'_j$
+}}
+{{eqn|r = \lambda^n \left({\left[\left[{\mathbf a \,.\,.\, \mathbf b'}\right)\right)}\right)
+ |c = By definition of [[Definition:Lebesgue Pre-Measure|Lebesgue pre-measure]]
+}}
+{{end-eqn}}
+Thus it is verified that $\lambda^n$ is [[Definition:Finitely Additive Function|finitely additive]].
+Finally, suppose that $\left[[{\mathbf a_m \,.\,.\, \mathbf b_m}\right)) \downarrow \varnothing$ is a [[Definition:Decreasing Sequence of Sets|decreasing sequence of sets]], with [[Definition:Limit of Decreasing Sequence of Sets|limit]] $\varnothing$.
+Then there exists at least one $j$ with $1 \le j \le n$ such that:
+:$\displaystyle \lim_{m \to \infty} a_{m,j} = \lim_{m \to \infty} b_{m,j}$
+which by [[Combination Theorem for Sequences]] is equivalent to:
+:$\displaystyle \lim_{m \to \infty} \left({b_{m,j} - a_{m,j} }\right) = 0$
+The fact that the [[Definition:Sequence|sequence]] is [[Definition:Decreasing Sequence of Sets|decreasing]] means that, from [[Cartesian Product of Subsets]], for all $m \in \N$, for all $1 \le i \le n$:
+:$\left[{a_{m,i} \,.\,.\, b_{m,i} }\right) \subseteq \left[{a_{1,i} \,.\,.\, b_{1,i} }\right)$
+whence $b_{m,i} - a_{m,i} \le b_{1,i} - a_{1,i}$.
+Hence we have:
+{{begin-eqn}}
+{{eqn|l = \lim_{m \to \infty} \lambda^n \left({\left[\left[{\mathbf a_m \,.\,.\, \mathbf b_m}\right)\right)}\right)
+ |r = \lim_{m \to \infty} \prod_{i \mathop = 1}^n \left({b_{m,i} - a_{m,i} }\right)
+ |c = By definition of [[Definition:Lebesgue Pre-Measure|Lebesgue pre-measure]]
+}}
+{{eqn|o = \le
+ |r = \lim_{m \to \infty} \left({b_{m,j} - a_{m,j} }\right) \prod_{\substack{i \mathop = 1 \\ i \mathop \ne j} }^n \left({b_{1,i} - a_{1,i} }\right)
+ |c = By the above discussion
+}}
+{{eqn|r = 0
+ |c = [[Combination Theorem for Sequences]]
+}}
+{{end-eqn}}
+This verifies the last condition for [[Characterization of Pre-Measures]], since $\lambda^n$ only takes [[Definition:Finite Extended Real Number|finite]] values.
+Hence $\lambda^n$ is a [[Definition:Pre-Measure|pre-measure]].
+{{qed}}
+\end{proof}<|endoftext|>
+\section{Existence and Uniqueness of Lebesgue Measure}
+Tags: Measure Theory
+
+\begin{theorem}
+Let $\lambda^n$ be the [[Definition:Lebesgue Pre-Measure|Lebesgue pre-measure]] on the [[Definition:Half-Open Rectangle|half-open $n$-rectangles]] $\mathcal{J}_{ho}^n$.
+Then [[Definition:Lebesgue Measure|Lebesgue measure]], the [[Definition:Extension (Measure Theory)|extension]] of $\lambda^n$ to the [[Definition:Borel Sigma-Algebra|Borel $\sigma$-algebra]] $\mathcal B \left({\R^n}\right)$, exists and is unique.
+\end{theorem}
+
+\begin{proof}
+From [[Lebesgue Pre-Measure is Pre-Measure]], $\lambda^n$ is a [[Definition:Pre-Measure|pre-measure]] on $\mathcal{J}_{ho}^n$.
+By [[Half-Open Rectangles form Semiring of Sets]], $\mathcal{J}_{ho}^n$ is a [[Definition:Semiring of Sets|semiring of sets]].
+Also, from [[Characterization of Euclidean Borel Sigma-Algebra]], $\mathcal B \left({\R^n}\right) = \sigma \left({\mathcal{J}_{ho}^n}\right)$.
+Now observe that the [[Definition:Half-Open Rectangle|half-open $n$-rectangles]] $\left[[{-\mathbf n \,.\,.\, \mathbf n}\right))$, where $\mathbf n := \left({n, \ldots, n}\right)$, form an [[Definition:Increasing Sequence of Sets|increasing sequence of sets]] with [[Definition:Limit of Increasing Sequence of Sets|limit]] $\R^n$.
+Also, by definition of $\lambda^n$, we have:
+:$\lambda^n \left({\left[[{-\mathbf n \,.\,.\, \mathbf n}\right))}\right) = \displaystyle \prod_{i \mathop = 1}^n \left({n - \left({-n}\right)}\right) = \left({2 n}\right)^n < +\infty$
+Hence, [[Carathéodory's Theorem (Measure Theory)|Carathéodory's Theorem]] and its corollary apply.
+These yield existence and uniqueness of [[Definition:Lebesgue Measure|Lebesgue measure]], the [[Definition:Extension (Measure Theory)|extension]] of $\lambda^n$ to $\mathcal B \left({\R^n}\right)$.
+{{qed}}
+\end{proof}<|endoftext|>
+\section{Lebesgue Measure is Diffuse}
+Tags: Measure Theory
+
+\begin{theorem}
+Let $\lambda^n$ be [[Definition:Lebesgue Measure|Lebesgue measure]] on $\R^n$.
+Then $\lambda^n$ is a [[Definition:Diffuse Measure|diffuse measure]].
+\end{theorem}
+
+\begin{proof}
+Any [[Definition:Singleton|singleton]] $\left\{{\mathbf x}\right\} \subseteq \R^n$ is [[Definition:Closed Set (Topology)|closed]] by combining:
+* [[Euclidean Space is Complete Metric Space]]
+* [[Metric Space is Hausdorff]]
+* [[Compact Subspace of Hausdorff Space is Closed/Corollary|Corollary to Compact Subspace of Hausdorff Space is Closed]]
+Whence by [[Closed Set Measurable in Borel Sigma-Algebra]], $\left\{{\mathbf x}\right\} \in \mathcal B \left({\R^n}\right)$.
+Here, $\mathcal B \left({\R^n}\right)$ is the [[Definition:Borel Sigma-Algebra|Borel $\sigma$-algebra]] on $\R^n$. +Write $\mathbf x + \epsilon = \left({x_1 + \epsilon, \ldots, x_n + \epsilon}\right)$ for $\epsilon > 0$. +Then: +:$\displaystyle \left\{{\mathbf x}\right\} = \bigcap_{m \mathop \in \N} \left[\left[{\mathbf x \,.\,.\, \mathbf x + \frac 1 m}\right)\right)$ +where $\left[\left[{\mathbf x \,.\,.\, \mathbf x + \dfrac 1 m}\right)\right)$ is a [[Definition:Half-Open Rectangle|half-open $n$-rectangle]]. +{{handwaving|justify equality}} +By definition of [[Definition:Lebesgue Measure|Lebesgue measure]], we have (for all $m \in \N$): +:$\displaystyle \lambda^n \left({\left[\left[{\mathbf x \,.\,.\, \mathbf x + \frac 1 m}\right)\right)}\right) = \prod_{i \mathop = 1}^n \frac 1 m = m^{-n}$ +From [[Characterization of Measures]], it follows that: +:$\displaystyle \lambda^n \left({\left\{{\mathbf x}\right\}}\right) = \lim_{m \to \infty} m^{-n}$ +which equals $0$ from [[Sequence of Powers of Reciprocals is Null Sequence]]. +Therefore, for each $\mathbf x \in \R^n$: +:$\lambda^n \left({\left\{{\mathbf x}\right\}}\right) = 0$ +that is, $\lambda^n$ is a [[Definition:Diffuse Measure|diffuse measure]]. +{{qed}} +\end{proof}<|endoftext|> +\section{Diffuse Measure of Countable Set} +Tags: Measure Theory + +\begin{theorem} +Let $\left({X, \Sigma, \mu}\right)$ be a [[Definition:Measure Space|measure space]]. +Suppose that for all $x \in X$, the [[Definition:Singleton|singleton]] $\left\{{x}\right\}$ is in $\Sigma$. +Suppose further that $\mu$ is a [[Definition:Diffuse Measure|diffuse measure]]. +Let $E \in \Sigma$ be a [[Definition:Countable|countable]] [[Definition:Measurable Set|measurable set]]. +Then $\mu \left({E}\right) = 0$. +\end{theorem} + +\begin{proof} +It holds trivially that: +:$\displaystyle E = \bigcup_{e \mathop \in E} \left\{{e}\right\}$ +and in particular, this union is [[Definition:Countable Union|countable]]. 
+Also, $\mu \left({\left\{{e}\right\}}\right) = 0$ for all $e \in E$ as $\mu$ is [[Definition:Diffuse Measure|diffuse]]. +Hence [[Null Sets Closed under Countable Union]] applies to yield: +:$\mu \left({E}\right) = 0$ +{{qed}} +\end{proof}<|endoftext|> +\section{Half-Open Rectangles Closed under Intersection} +Tags: Analysis + +\begin{theorem} +Let $\left[[{\mathbf a \,.\,.\, \mathbf b}\right))$ and $\left[[{\mathbf c \,.\,.\, \mathbf d}\right))$ be [[Definition:Half-Open Rectangle|half-open $n$-rectangles]]. +Then $\left[[{\mathbf a \,.\,.\, \mathbf b}\right)) \cap \left[[{\mathbf c \,.\,.\, \mathbf d}\right))$ is also a [[Definition:Half-Open Rectangle|half-open $n$-rectangle]]. +\end{theorem} + +\begin{proof} +As $\varnothing$ is trivially a [[Definition:Half-Open Rectangle|half-open $n$-rectangle]], let us assume that: +:$\left[[{\mathbf a \,.\,.\, \mathbf b}\right)) \cap \left[[{\mathbf c \,.\,.\, \mathbf d}\right)) \ne \varnothing$ +By [[Cartesian Product of Intersections/General Case|Cartesian Product of Intersections: General Case]], it follows that: +:$\displaystyle \left[[{\mathbf a \,.\,.\, \mathbf b}\right)) \cap \left[[{\mathbf c \,.\,.\, \mathbf d}\right)) = \prod_{i \mathop = 1}^n \left[{a_i \,.\,.\, b_i}\right) \cap \left[{c_i \,.\,.\, d_i}\right)$ +which leaves only to prove that the $\left[{a_i \,.\,.\, b_i}\right) \cap \left[{c_i \,.\,.\, d_i}\right)$ are [[Definition:Half-Open Real Interval|half-open intervals]]. +Now let $x \in \left[{a_i \,.\,.\, b_i}\right) \cap \left[{c_i \,.\,.\, d_i}\right)$. 
+Then $x$ is subject to: +* $x \ge a_i$ and $x \ge c_i$, i.e., $x \ge \max \left\{{a_i, c_i}\right\}$ +* $x < b_i$ and $x < d_i$, i.e., $x < \min \left\{{b_i, d_i}\right\}$ +and we see that these conditions are satisfied precisely when: +:$x \in \left[{\max \left\{{a_i, c_i}\right\} \,.\,.\, \min \left\{{b_i, d_i}\right\}}\right)$ +Thus, we conclude: +:$\left[{a_i \,.\,.\, b_i}\right) \cap \left[{c_i \,.\,.\, d_i}\right) = \left[{\max \left\{{a_i, c_i}\right\} \,.\,.\, \min \left\{{b_i, d_i}\right\}}\right)$ +showing that indeed the intersection is a [[Definition:Half-Open Real Interval|half-open interval]]. +Combining this with the above reasoning, it follows that indeed: +:$\left[[{\mathbf a \,.\,.\, \mathbf b}\right)) \cap \left[[{\mathbf c \,.\,.\, \mathbf d}\right))$ +is again a [[Definition:Half-Open Rectangle|half-open $n$-rectangle]]. +{{qed}} +\end{proof}<|endoftext|> +\section{Decomposition of Probability Measures} +Tags: Measure Theory + +\begin{theorem} +Let $\struct {\Omega, \Sigma, P}$ be a [[Definition:Probability Space|probability space]]. +Suppose that for every $\omega \in \Omega$, it holds that: +:$\set \omega \in \Sigma$ +that is, that $\Sigma$ contains all [[Definition:Singleton|singletons]]. +Then there exist a unique [[Definition:Diffuse Measure|diffuse measure]] $\mu$ and a unique [[Definition:Discrete Measure|discrete measure]] $\nu$ such that: +:$P = \mu + \nu$ +\end{theorem} + +\begin{proof} +=== Existence === +For each $n \in \N$, define $\Omega_n$ by: +:$\Omega_n := \set {\omega \in \Omega: \map P {\set \omega} \ge \dfrac 1 n}$ +Suppose that $\Omega_n$ has more than $n$ elements, and define $\Omega'_n$ to be a [[Definition:Finite Set|finite]] [[Definition:Subset|subset]] of $\Omega_n$ with $n + 1$ elements. 
+From [[Measure is Monotone]] and [[Measure is Finitely Additive Function]], it follows that:
+:$\displaystyle 1 = \map P \Omega \ge \map P {\Omega'_n} = \sum_{\omega \mathop \in \Omega'_n} \map P {\set \omega} \ge \frac {\card {\Omega'_n} } n = \frac {n + 1} n$
+This is an obvious contradiction, whence $\Omega_n$ has at most $n$ elements, and in particular, $\Omega_n$ is [[Definition:Finite Set|finite]].
+It follows from [[Sequence of Reciprocals is Null Sequence]] that:
+:$\forall p > 0: \exists n \in \N: \dfrac 1 n < p$
+Therefore, defining $\Omega_\infty$ by:
+:$\Omega_\infty := \set {\omega \in \Omega: \map P {\set \omega} > 0}$
+we have:
+:$\displaystyle \Omega_\infty = \bigcup_{n \mathop \in \N} \Omega_n$
+whence by [[Countable Union of Countable Sets is Countable]], $\Omega_\infty$ is [[Definition:Countable Set|countable]].
+Also, being the [[Definition:Countable Union|countable union]] of elements in $\Sigma$, $\Omega_\infty \in \Sigma$.
+Thus, let $\paren {\omega_n}_{n \in \N}$ be an [[Definition:Enumeration|enumeration]] of $\Omega_\infty$.
+Now define the [[Definition:Discrete Measure|discrete measure]] $\nu$ by:
+:$\displaystyle \nu := \sum_{n \mathop \in \N} \map P {\set {\omega_n} } \delta_{\omega_n}$
+Next, we define $\mu$ in the only possible way:
+:$\mu: \Sigma \to \overline \R: \map \mu E := \map P E - \map \nu E$
+It remains to verify that $\mu$ is a [[Definition:Measure (Measure Theory)|measure]], and [[Definition:Diffuse Measure|diffuse]].
+So let $E \in \Sigma$, and write, by [[Set Difference and Intersection form Partition]]:
+:$E = \paren {E \cap \Omega_\infty} \sqcup \paren {E \setminus \Omega_\infty}$
+where $\sqcup$ signifies [[Definition:Disjoint Union (Probability Theory)|disjoint union]].
+Then we also have, trivially, the decomposition:
+:$\displaystyle E \cap \Omega_\infty = \bigsqcup_{\omega_n \mathop \in E} \set {\omega_n}$
+Now, we can compute:
+{{begin-eqn}}
+{{eqn | l = \map \nu E
+ | r = \sum_{n \mathop \in \N} \map P {\set {\omega_n} } \map {\delta_{\omega_n} } E
+ | c = Definition of $\nu$
+}}
+{{eqn | r = \sum_{\omega_n \mathop \in E} \map P {\set {\omega_n} }
+ | c = {{Defof|Dirac Measure}}
+}}
+{{eqn | r = \map P {E \cap \Omega_\infty}
+ | c = $P$ is a [[Definition:Measure (Measure Theory)|Measure]]
+}}
+{{eqn | ll= \leadsto
+ | l = \map P E - \map \nu E
+ | r = \map P {E \cap \Omega_\infty} + \map P {E \setminus \Omega_\infty} - \map P {E \cap \Omega_\infty}
+ | c = [[Measure is Finitely Additive Function]]
+}}
+{{eqn | r = \map P {E \setminus \Omega_\infty}
+}}
+{{end-eqn}}
+From [[Set Difference as Intersection with Complement]], we may write:
+:$E \setminus \Omega_\infty = E \cap \paren {\Omega \setminus \Omega_\infty}$
+So for every $E \in \Sigma$, we have:
+:$\map \mu E = \map P {E \cap \paren {\Omega \setminus \Omega_\infty} }$
+whence $\mu$ is an [[Definition:Intersection Measure|intersection measure]], and so, by [[Intersection Measure is Measure]], a [[Definition:Measure (Measure Theory)|measure]].
+To show that $\mu$ is [[Definition:Diffuse Measure|diffuse]], let $\omega \in \Omega$.
+Then:
+:$\map \mu {\set \omega} = \map P {\set \omega \cap \paren {\Omega \setminus \Omega_\infty} }$
+Now, by definition of $\Omega_\infty$, we have:
+:$\set \omega \setminus \Omega_\infty = \begin{cases} \set \omega & \text {if $\map P {\set \omega} = 0$} \\
+\O & \text{otherwise} \end{cases}$
+and so in either case, it follows that:
+:$\map P {\set \omega \cap \paren {\Omega \setminus \Omega_\infty} } = 0$
+that is to say, $\mu$ is [[Definition:Diffuse Measure|diffuse]].
+{{qed|lemma}}
+=== Uniqueness ===
+Let $P = \mu_1 + \nu_1 = \mu_2 + \nu_2$ be two decompositions.
+Since $\mu_1$ and $\mu_2$ are [[Definition:Diffuse Measure|diffuse]], it follows that, for any $\omega \in \Omega$:
+:$\map P {\set \omega} = \map {\nu_1} {\set \omega} = \map {\nu_2} {\set \omega}$
+Suppose that we have the [[Definition:Discrete Measure|discrete measures]] $\nu_1$ and $\nu_2$ defined as:
+:$\displaystyle \nu_1 = \sum_{n \mathop \in \N} \kappa_n \delta_{x_n}$
+:$\displaystyle \nu_2 = \sum_{m \mathop \in \N} \lambda_m \delta_{y_m}$
+where we take $\sequence {x_n}_{n \mathop \in \N}$ and $\sequence {y_n}_{n \mathop \in \N}$ to be [[Definition:Sequence of Distinct Terms|sequences of distinct terms]].
+Then the equality derived for arbitrary $\omega$ above yields, taking $\omega = x_n$:
+:$\displaystyle \kappa_n = \sum_{m \mathop \in \N} \lambda_m \map {\delta_{y_m} } {\set {x_n} }$
+which by definition of [[Definition:Dirac Measure|Dirac measure]] implies the existence of a unique $m \in \N$ such that:
+:$x_n = y_m, \, \kappa_n = \lambda_m$
+and, reversing the argument, for every $y_m$ there is also an $n \in \N$ with this equality.
+That is, it must be that $\nu_1 = \nu_2$.
+Whence $\mu_1 = P - \nu_1 = P - \nu_2 = \mu_2$, and uniqueness follows.
+{{qed}}
+{{ACC|Countable Union of Countable Sets is Countable}}
+\end{proof}<|endoftext|>
+\section{Equivalence of Definitions of Null Set in Euclidean Space}
+Tags: Measure Theory, Definition Equivalences
+
+\begin{theorem}
+Let $\lambda^n$ be [[Definition:Lebesgue Measure|$n$-dimensional Lebesgue measure]] on $\R^n$.
+Let $E \subseteq \R^n$.
+Then the following are equivalent: +:$(1):\quad \exists B \in \mathcal B \left({\R^n}\right): E \subseteq B, \lambda^n \left({B}\right) = 0$ +:$(2):\quad$ For every $\epsilon > 0$, there exists a [[Definition:Countable Cover|countable cover]] $\left({J_i}\right)_{i \mathop \in \N}$ of $E$ by [[Definition:Open Rectangle|open $n$-rectangles]], such that: +::$\displaystyle \sum_{i \mathop = 1}^\infty \operatorname{vol} \left({J_i}\right) \le \epsilon$ +{{explain|link to some definition (might not exist) for $\operatorname{vol}$}} +\end{theorem}<|endoftext|> +\section{Homomorphism to Group Preserves Inverses} +Tags: Homomorphisms, Group Theory + +\begin{theorem} +Let $\struct {S, \circ}$ be an [[Definition:Algebraic Structure|algebraic structure]]. +Let $\struct {T, *}$ be a [[Definition:Group|group]]. +Let $\phi: \struct {S, \circ} \to \struct {T, *}$ be a [[Definition:Homomorphism (Abstract Algebra)|homomorphism]]. +Let $\struct {S, \circ}$ have an [[Definition:Identity Element|identity]] $e_S$. +Let $x^{-1}$ be an [[Definition:Inverse Element|inverse]] of $x$ for $\circ$. +Then $\map \phi {x^{-1} }$ is an [[Definition:Inverse Element|inverse]] of $\map \phi x$ for $*$. +\end{theorem} + +\begin{proof} +By hypothesis, $\struct {T, *}$ is a [[Definition:Group|group]]. +By {{GroupAxiom|2}}, $\struct {T, *}$ has an [[Definition:Identity Element|identity]]. +Thus [[Homomorphism with Identity Preserves Inverses]] can be applied. +{{Qed}} +\end{proof}<|endoftext|> +\section{Equivalence Induced by Epimorphism is Congruence Relation} +Tags: Epimorphisms, Congruence Relations + +\begin{theorem} +Let $\left({S, \circ}\right)$ and $\left({T, *}\right)$ be [[Definition:Algebraic Structure|algebraic structures]]. +Let $\phi: \left({S, \circ}\right) \to \left({T, *}\right)$ be an [[Definition:Epimorphism (Abstract Algebra)|epimorphism]]. +Let $\mathcal R_\phi$ be the [[Definition:Equivalence Relation Induced by Mapping|equivalence induced by $\phi$]]. 
+Then the [[Definition:Equivalence Relation Induced by Mapping|induced equivalence]] $\mathcal R_\phi$ is a [[Definition:Congruence Relation|congruence relation]] for $\circ$. +\end{theorem} + +\begin{proof} +Let $x, x', y, y' \in S$ such that: +:$x \mathop {\mathcal R_\phi} x' \land y \mathop {\mathcal R_\phi} y'$ +By definition of [[Definition:Equivalence Relation Induced by Mapping|induced equivalence]]: +{{begin-eqn}} +{{eqn | l = x \mathop {\mathcal R_\phi} x' + | o = \implies + | r = \map \phi x = \map \phi {x'} + | c = +}} +{{eqn | l = y \mathop {\mathcal R_\phi} y' + | o = \implies + | r = \map \phi y = \map \phi {y'} + | c = +}} +{{end-eqn}} +Then: +{{begin-eqn}} +{{eqn | l = \map \phi {x \circ y} + | r = \map \phi x * \map \phi y + | c = {{Defof|Epimorphism (Abstract Algebra)}} +}} +{{eqn | r = \map \phi {x'} * \map \phi {y'} + | c = equality shown above +}} +{{eqn | r = \map \phi {x' \circ y'} + | c = {{Defof|Epimorphism (Abstract Algebra)}} +}} +{{end-eqn}} +Thus $\paren {x \circ y} \mathop {\mathcal R_\phi} \paren {x' \circ y'}$ by definition of [[Definition:Equivalence Relation Induced by Mapping|induced equivalence]]. +So $\mathcal R_\phi$ is a [[Definition:Congruence Relation|congruence relation]] for $\circ$. +\end{proof}<|endoftext|> +\section{Unique Isomorphism from Quotient Mapping to Epimorphism Domain} +Tags: Epimorphisms, Isomorphisms, Quotient Mappings + +\begin{theorem} +Let $\struct {S, \circ}$ and $\struct {T, *}$ be [[Definition:Algebraic Structure|algebraic structures]]. +Let $\phi: \struct {S, \circ} \to \struct {T, *}$ be an [[Definition:Epimorphism (Abstract Algebra)|epimorphism]]. +Let $\mathcal R_\phi$ be the [[Definition:Equivalence Relation Induced by Mapping|equivalence induced by $\phi$]]. +Let $S / \mathcal R_\phi$ be the [[Definition:Quotient Set|quotient of $S$ by $\mathcal R_\phi$]]. +Let $q_{\mathcal R_\phi}: S \to S / \mathcal R_\phi$ be the [[Definition:Quotient Mapping|quotient mapping induced by $\mathcal R_\phi$]]. 
+Let $\struct {S / \mathcal R_\phi, \circ_{\mathcal R_\phi} }$ be the [[Definition:Quotient Structure|quotient structure defined by $\mathcal R_\phi$]].
+Then there is one and only one [[Definition:Isomorphism (Abstract Algebra)|isomorphism]]:
+:$\psi: \struct {S / \mathcal R_\phi, \circ_{\mathcal R_\phi} } \to \struct {T, *}$
+which satisfies:
+:$\psi \bullet q_{\mathcal R_\phi} = \phi$
+where, in order not to cause notational confusion, $\bullet$ is used as the symbol to denote [[Definition:Composition of Mappings|composition of mappings]].
+\end{theorem}
+
+\begin{proof}
+From the [[Quotient Theorem for Surjections]], there is a unique [[Definition:Bijection|bijection]] $\psi$ from $S / \mathcal R_\phi$ onto $T$ satisfying $\psi \bullet q_{\mathcal R_\phi} = \phi$.
+Also:
+{{begin-eqn}}
+{{eqn | l = \forall x, y \in S: \map \psi {\eqclass x {\mathcal R_\phi} \circ_{\mathcal R_\phi} \eqclass y {\mathcal R_\phi} }
+ | r = \map \psi {\eqclass {x \circ y} {\mathcal R_\phi} }
+ | c = {{Defof|Quotient Structure}}
+}}
+{{eqn | r = \map \phi {x \circ y}
+ | c = {{Defof|Epimorphism (Abstract Algebra)}}
+}}
+{{eqn | r = \map \phi x * \map \phi y
+ | c = {{Defof|Epimorphism (Abstract Algebra)}}
+}}
+{{eqn | r = \map \psi {\eqclass x {\mathcal R_\phi} } * \map \psi {\eqclass y {\mathcal R_\phi} }
+ | c = {{Defof|Quotient Mapping}}
+}}
+{{end-eqn}}
+Therefore $\psi$ is an [[Definition:Isomorphism (Abstract Algebra)|isomorphism]].
+{{Qed}}
+\end{proof}<|endoftext|>
+\section{Identity is in Kernel of Group Homomorphism}
+Tags: Kernels of Group Homomorphisms
+
+\begin{theorem}
+Let $G$ and $H$ be [[Definition:Group|groups]].
+Let $e_G$ and $e_H$ be the [[Definition:Identity Element|identity elements]] of $G$ and $H$ respectively.
+Let $\phi: G \to H$ be a [[Definition:Group Homomorphism|(group) homomorphism]] from $G$ to $H$.
+Then:
+:$e_G \in \map \ker \phi$
+where $\map \ker \phi$ is the [[Definition:Kernel of Group Homomorphism|kernel]] of $\phi$.
+\end{theorem}
+
+\begin{proof}
+From the definition of [[Definition:Kernel of Group Homomorphism|kernel]]:
+:$\map \ker \phi = \set {x \in G: \map \phi x = e_H}$
+From [[Group Homomorphism Preserves Identity]] we have that:
+:$\map \phi {e_G} = e_H$
+Hence the result.
+{{Qed}}
+\end{proof}<|endoftext|>
+\section{Union is Dominated by Disjoint Union}
+Tags: Set Theory
+
+\begin{theorem}
+Let $I$ be an [[Definition:Indexing Set|indexing set]].
+For all $i \in I$, let $S_i$ be a [[Definition:Set|set]].
+Then:
+:$\displaystyle \bigcup_{i \mathop \in I} S_i \preccurlyeq \bigsqcup_{i \mathop \in I} S_i$
+where $\preccurlyeq$ denotes [[Definition:Dominate (Set Theory)|domination]], $\bigcup$ denotes [[Definition:Set Union|union]], and $\bigsqcup$ denotes [[Definition:Disjoint Union (Set Theory)|disjoint union]].
+\end{theorem}
+
+\begin{proof}
+For all $\displaystyle x \in \bigcup_{i \mathop \in I} S_i$, there exists an $i \left({x}\right) \in I$ such that $x \in S_{i \left({x}\right)}$.
+Thus the [[Definition:Mapping|mapping]] $\displaystyle \iota : \bigcup_{i \mathop \in I} S_i \to \bigsqcup_{i \mathop \in I} S_i$ defined by:
+:$\iota \left({x}\right) = \left({x, i \left({x}\right)}\right)$
+is an [[Definition:Injection|injection]].
+{{handwaving}}
+{{qed}}
+[[Category:Set Theory]]
+\end{proof}<|endoftext|>
+\section{Disjoint Union Preserves Domination}
+Tags: Set Theory
+
+\begin{theorem}
+Let $I$ be an [[Definition:Indexing Set|indexing set]].
+For all $i \in I$, let $A_i$ and $B_i$ be [[Definition:Set|sets]] such that $A_i \preccurlyeq B_i$.
+Here, $\preccurlyeq$ denotes [[Definition:Dominate (Set Theory)|domination]].
+Then:
+:$\displaystyle \bigsqcup_{i \mathop \in I} A_i \preccurlyeq \bigsqcup_{i \mathop \in I} B_i$
+where $\bigsqcup$ denotes [[Definition:Disjoint Union (Set Theory)|disjoint union]].
+\end{theorem}
+
+\begin{proof}
+By definition of [[Definition:Dominate (Set Theory)|domination]], for all $i \in I$, there exists an [[Definition:Injection|injection]] $\iota_i: A_i \to B_i$.
+Thus the mapping $\displaystyle \iota : \bigsqcup_{i \mathop \in I} A_i \to \bigsqcup_{i \mathop \in I} B_i$ defined by:
+:$\iota \left({x, i}\right) = \left({\iota_i \left({x}\right), i}\right)$
+is an [[Definition:Injection|injection]].
+{{handwaving|That statement needs proof. Probably best in a separate page.}}
+{{qed}}
+[[Category:Set Theory]]
+\end{proof}<|endoftext|>
+\section{Epimorphism from Real Numbers to Circle Group}
+Tags: Circle Group, Group Epimorphisms
+
+\begin{theorem}
+Let $\struct {K, \times}$ be the [[Definition:Circle Group|circle group]], that is:
+:$K = \set {z \in \C: \cmod z = 1}$
+under [[Definition:Complex Multiplication|complex multiplication]].
+Let $f: \R \to K$ be the [[Definition:Mapping|mapping]] from the [[Definition:Real Number|real numbers]] to $K$ defined as:
+:$\forall x \in \R: \map f x = \cos x + i \sin x$
+Then $f: \struct {\R, +} \to \struct {K, \times}$ is a [[Definition:Group Epimorphism|group epimorphism]].
+Its [[Definition:Kernel of Group Homomorphism|kernel]] is:
+:$\map \ker f = \set {2 \pi n: n \in \Z}$
+\end{theorem}
+
+\begin{proof}
+$f$ is a [[Definition:Surjection|surjection]] from ... 
+{{link wanted|Needs a link to a result specifying that $f$ is surjective (may already exist).}}
+{{qed|lemma}}
+Then:
+{{begin-eqn}}
+{{eqn | l = \map f x \times \map f y
+ | r = \paren {\cos x + i \sin x} \paren {\cos y + i \sin y}
+ | c = 
+}}
+{{eqn | r = \cos x \cos y + i \sin x \cos y + \cos x i \sin y + i \sin x i \sin y
+ | c = 
+}}
+{{eqn | r = \paren {\cos x \cos y - \sin x \sin y} + i \paren {\sin x \cos y + \cos x \sin y}
+ | c = as $i^2 = -1$
+}}
+{{eqn | r = \map \cos {x + y} + i \map \sin {x + y}
+ | c = [[Cosine of Sum]] and [[Sine of Sum]]
+}}
+{{eqn | r = \map f {x + y}
+ | c = 
+}}
+{{end-eqn}}
+So $f$ is a [[Definition:Group Homomorphism|(group) homomorphism]].
+{{qed|lemma}}
+Thus $f$ is seen to be a [[Definition:Surjection|surjective]] [[Definition:Group Homomorphism|homomorphism]].
+Hence, by definition, it is a [[Definition:Group Epimorphism|(group) epimorphism]].
+{{qed|lemma}}
+From [[Cosine of Multiple of Pi]]:
+:$\forall n \in \Z: \cos n \pi = \paren {-1}^n$
+and from [[Sine of Multiple of Pi]]:
+:$\forall n \in \Z: \sin n \pi = 0$
+From [[Sine and Cosine are Periodic on Reals]], it follows that $x = n \pi$, for $n \in \Z$, are the only values of $x$ for which $\sin x = 0$.
+For $\cos x + i \sin x = 1 + 0 i$ it is necessary that:
+:$\cos x = 1$
+:$\sin x = 0$
+and it can be seen that the only values of $x$ for which this happens are:
+:$x \in \set {2 \pi n: n \in \Z}$
+Hence, by definition of [[Definition:Kernel of Group Homomorphism|kernel]]:
+:$\map \ker f = \set {2 \pi n: n \in \Z}$
+{{qed}}
+\end{proof}<|endoftext|>
+\section{Increasing Sequence of Sets induces Partition on Limit}
+Tags: Set Theory
+
+\begin{theorem}
+Let $\sequence {S_n}_{n \mathop \in \N} \uparrow S$ be an [[Definition:Increasing Sequence of Sets|increasing sequence of sets]] with [[Definition:Limit of Increasing Sequence of Sets|limit]] $S$.
+Define $T_1 = S_1$, and, for $n \in \N$, $T_{n + 1} = S_{n + 1} \setminus S_n$, where $\setminus$ denotes [[Definition:Set Difference|set difference]].
+Then $\sequence {T_n}_{n \mathop \in \N}$ is a [[Definition:Countable Set|countable]] [[Definition:Partition (Set Theory)|partition]] of $S$.
+\end{theorem}
+
+\begin{proof}
+That $\sequence {T_n}_{n \mathop \in \N}$ [[Definition:Partition (Set Theory)|partitions]] $S$ means precisely that:
+:$(1):\quad$ The $T_n$ are [[Definition:Pairwise Disjoint|pairwise disjoint]]
+:$(2):\quad \displaystyle \bigcup_{n \mathop \in \N} T_n = S$
+It is more convenient to prove $(1)$ and $(2)$ separately:
+=== Proof of $(1)$ ===
+Let $l, m \in \N$ be such that $l < m$.
+Then by [[Set Difference is Subset]], $T_l \subseteq S_l$.
+As the $S_n$ form an [[Definition:Increasing Sequence of Sets|increasing sequence of sets]], it follows that also $T_l \subseteq S_{m - 1}$ because $m - 1 \ge l$.
+Now compute as follows:
+{{begin-eqn}}
+{{eqn | l = T_m \cap T_l
+ | o = \subseteq
+ | r = T_m \cap S_{m-1}
+ | c = [[Set Intersection Preserves Subsets]]
+}}
+{{eqn | o = \subseteq
+ | r = \paren {S_m \setminus S_{m - 1} } \cap S_{m-1}
+ | c = Definition of $T_m$
+}}
+{{eqn | r = \O
+ | c = [[Set Difference Intersection with Second Set is Empty Set]]
+}}
+{{end-eqn}}
+Hence $T_m \cap T_l = \O$.
+Reversing the roles of $m$ and $l$ leads to the same conclusion if $l > m$.
+Hence, by definition, the $T_n$ are [[Definition:Pairwise Disjoint|pairwise disjoint]].
+{{qed|lemma}}
+=== Proof of $(2)$ ===
+By [[Set Union Preserves Subsets]] and [[Set Difference is Subset]], we have that:
+:$\displaystyle \bigcup_{n \mathop \in \N} T_n \subseteq \bigcup_{n \mathop \in \N} S_n = S$
+To establish $(2)$, by definition of [[Definition:Set Equality/Definition 2|set equality]], it is now only required to show that $S \subseteq \displaystyle \bigcup_{n \mathop \in \N} T_n$.
+So let $s \in S$.
+Then by definition of [[Definition:Set Union|union]], the set:
+:$N_s := \set {n \in \N: s \in S_n}$
+is [[Definition:Non-Empty Set|nonempty]].
+By [[Well-Ordering Principle]], $N_s$ contains a [[Definition:Smallest Element|smallest element]], $n$, say.
+If $n = 1$, then $s \in S_1 = T_1$.
+If $n > 1$, then, by minimality of $n$, $s \notin S_{n - 1}$.
+Hence, by definition of [[Definition:Set Difference|set difference]], $s \in T_n = S_n \setminus S_{n - 1}$.
+By definition of [[Definition:Set Union|set union]], it follows that:
+:$s \in \displaystyle \bigcup_{n \mathop \in \N} T_n$
+That is, by definition of [[Definition:Subset|subset]]:
+:$S \subseteq \displaystyle \bigcup_{n \mathop \in \N} T_n$
+{{qed}}
+[[Category:Set Theory]]
+\end{proof}<|endoftext|>
+\section{Power Function on Complex Numbers is Epimorphism}
+Tags: Complex Numbers, Group Epimorphisms
+
+\begin{theorem}
+Let $n \in \Z_{>0}$ be a [[Definition:Strictly Positive Integer|strictly positive integer]].
+Let $\struct {\C_{\ne 0}, \times}$ be the [[Definition:Multiplicative Group of Complex Numbers|multiplicative group of complex numbers]].
+Let $f_n: \C_{\ne 0} \to \C_{\ne 0}$ be the [[Definition:Mapping|mapping]] from the [[Definition:Complex Number|set of complex numbers less zero]] to itself defined as:
+:$\forall z \in \C_{\ne 0}: \map {f_n} z = z^n$
+Then $f_n: \struct {\C_{\ne 0}, \times} \to \struct {\C_{\ne 0}, \times}$ is a [[Definition:Group Epimorphism|group epimorphism]].
+The [[Definition:Kernel of Group Homomorphism|kernel]] of $f_n$ is the set of [[Definition:Complex Roots of Unity|complex $n$th roots of unity]].
+\end{theorem}
+
+\begin{proof}
+From [[Non-Zero Complex Numbers under Multiplication form Abelian Group]], $\struct {\C_{\ne 0}, \times}$ is a [[Definition:Group|group]].
+Therefore $\struct {\C_{\ne 0}, \times}$ is [[Definition:Closed Algebraic Structure|closed]] by [[Definition:Group Axioms|group axiom $G0$]].
+Let $w, z \in \C_{\ne 0}$.
+{{begin-eqn}} +{{eqn | l = \map {f_n} {w \times z} + | r = \paren {w \times z}^n + | c = +}} +{{eqn | r = w^n \times z^n + | c = [[Power of Product of Commutative Elements in Group]] +}} +{{eqn | r = \map {f_n} w \times \map {f_n} z + | c = +}} +{{end-eqn}} +Thus $f_n$ is a [[Definition:Group Homomorphism|group homomorphism]]. +Now suppose $w = r \paren {\cos \alpha + i \sin \alpha}$, expressing $w$ in [[Definition:Polar Form of Complex Number|polar form]]. +Then $w = \map {f_n} z$ where: +:$z = r^{1/n} \paren {\cos \dfrac \alpha n + i \sin \dfrac \alpha n}$ +and so: +:$\forall w: w \in \map {f_n} {\C_{\ne 0} }$ +That is, $f_n$ is a [[Definition:Surjection|surjection]]. +Being a [[Definition:Group Homomorphism|group homomorphism]] which is also a [[Definition:Surjection|surjection]], by definition $f_n$ is then a [[Definition:Group Epimorphism|group epimorphism]]. +The [[Definition:Kernel of Group Homomorphism|kernel]] of $f_n$ is the set: +:$U_n = \set {e^{2 i k \pi / n}: k \in \N_n}$ +which follows from [[Complex Roots of Unity in Exponential Form]]. +{{qed}} +\end{proof}<|endoftext|> +\section{Real Part as Mapping is Endomorphism for Complex Addition} +Tags: Complex Addition, Group Endomorphisms, Real Parts + +\begin{theorem} +Let $\struct {\C, +}$ be the [[Definition:Additive Group of Complex Numbers|additive group of complex numbers]]. +Let $\struct {\R, +}$ be the [[Definition:Additive Group of Real Numbers|additive group of real numbers]]. +Let $f: \C \to \R$ be the [[Definition:Mapping|mapping]] from the [[Definition:Complex Number|complex numbers]] to the [[Definition:Real Number|real numbers]] defined as: +:$\forall z \in \C: \map f z = \map \Re z$ +where $\map \Re z$ denotes the [[Definition:Real Part|real part]] of $z$. +Then $f: \struct {\C, +} \to \struct {\R, +}$ is a [[Definition:Group Epimorphism|group epimorphism]]. 
+Its [[Definition:Kernel of Group Homomorphism|kernel]] is the set: +:$\map \ker f = \set {i x: x \in \R}$ +of [[Definition:Wholly Imaginary|wholly imaginary numbers]]. +\end{theorem} + +\begin{proof} +From [[Real Part as Mapping is Surjection]], $f$ is a [[Definition:Surjection|surjection]]. +Let $z_1, z_2 \in \C$. +Let $z_1 = x_1 + i y_1, z_2 = x_2 + i y_2$. +Then: +{{begin-eqn}} +{{eqn | l = \map f {z_1 + z_2} + | r = \map \Re {z_1 + z_2} + | c = Definition of $f$ +}} +{{eqn | r = \map \Re {x_1 + i y_1 + x_2 + i y_2} + | c = Definition of $z_1$ and $z_2$ +}} +{{eqn | r = x_1 + x_2 + | c = {{Defof|Real Part}} +}} +{{eqn | r = \map \Re {z_1} + \map \Re {z_2} + | c = {{Defof|Real Part}} +}} +{{eqn | r = \map f {z_1} + \map f {z_2} + | c = Definition of $f$ +}} +{{end-eqn}} +So $f$ is a [[Definition:Group Homomorphism|group homomorphism]]. +Thus $f$ is a [[Definition:Surjection|surjective]] [[Definition:Group Homomorphism|group homomorphism]] and therefore by definition a [[Definition:Group Epimorphism|group epimorphism]]. 
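Before turning to the kernel, the additivity just established can be illustrated numerically; the following sketch (with arbitrary sample values) is not part of the formal proof:

```python
# Informal numeric illustration of f(z) = Re(z) respecting addition.
# The sample values are arbitrary choices, not part of the proof.
def f(z: complex) -> float:
    return z.real

z1, z2 = 3 + 4j, -1.5 + 2j
assert f(z1 + z2) == f(z1) + f(z2)          # additivity on the samples

# Surjectivity: each real r is hit, e.g. by r + 5i.
for r in (-2.0, 0.0, 3.5):
    assert f(complex(r, 5.0)) == r
print("real-part map is additive on the samples")
```
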
+Finally:
+:$\forall y \in \R: \map \Re {0 + i y} = 0$
+It follows from [[Complex Addition Identity is Zero]] that:
+:$\map \ker f = \set {i x: x \in \R}$
+{{qed}}
+\end{proof}<|endoftext|>
+\section{Equivalence of Definitions of Semiring of Sets}
+Tags: Semirings of Sets
+
+\begin{theorem}
+{{TFAE|def = Semiring of Sets}}
+A collection $\SS$ of [[Definition:Subset|subsets]] of a [[Definition:Set|set]] $X$ is a semiring (of sets) {{iff}}:
+:$(1):\quad \O \in \SS$
+:$(2):\quad A, B \in \SS \implies A \cap B \in \SS$; that is, $\SS$ is [[Definition:Stable under Intersection|$\cap$-stable]]
+:$(3):\quad$ If $A, A_1 \in \SS$ such that $A_1 \subseteq A$, then there exists a [[Definition:Finite Sequence|finite sequence]] $A_2, A_3, \ldots, A_n \in \SS$ such that:
+::$(3a):\quad \displaystyle A = \bigcup_{k \mathop = 1}^n A_k$
+::$(3b):\quad$ The $A_k$ are [[Definition:Pairwise Disjoint|pairwise disjoint]]
+We prove that criterion $(3)$ can be replaced by:
+:$(3'):\quad$ If $A, B \in \SS$, then there exists a [[Definition:Finite Sequence|finite sequence]] of [[Definition:Pairwise Disjoint|pairwise disjoint]] [[Definition:Set|sets]] $A_1, A_2, \ldots, A_n \in \SS$ such that $\displaystyle A \setminus B = \bigcup_{k \mathop = 1}^n A_k$.
+\end{theorem}
+
+\begin{proof}
+=== $(3)$ implies $(3')$ ===
+Let $X$ be a [[Definition:Set|set]], and let $\SS$ be a collection of [[Definition:Subset|subsets]] of $X$.
+Suppose that for all $A, A_1 \in \SS$ such that $A_1 \subseteq A$, there exists a [[Definition:Finite Sequence|finite sequence]] of sets $A_2, A_3, \ldots, A_n \in \SS$ such that:
+:$A_1, A_2, \ldots, A_n$ are [[Definition:Pairwise Disjoint|pairwise disjoint]]
+:$\displaystyle A = \bigcup_{k \mathop = 1}^n A_k$
+Let $B \in \SS$, and let $A_1 = A \cap B$.
+It follows that $A_1 \in \SS$, by [[Definition:Semiring of Sets|definition]].
+Also, $A_1 \subseteq A$ by [[Intersection is Subset]].
+Then:
+{{begin-eqn}}
+{{eqn | l = A \setminus B
+ | r = A \setminus \left({A \cap B}\right)
+ | c = [[Set Difference with Intersection is Difference]]
+}}
+{{eqn | r = A \setminus A_1
+}}
+{{eqn | r = \left({\bigcup_{k \mathop = 1}^n A_k}\right) \setminus A_1
+}}
+{{eqn | r = \bigcup_{k \mathop = 1}^n \, \left({A_k \setminus A_1}\right)
+ | c = [[Set Difference is Right Distributive over Union]]
+}}
+{{eqn | r = \bigcup_{k \mathop = 2}^n \, \left({A_k \setminus A_1}\right)
+ | c = [[Set Difference with Self is Empty Set]] and [[Union with Empty Set]]
+}}
+{{eqn | r = \bigcup_{k \mathop = 2}^n A_k
+ | c = [[Set Difference with Disjoint Set]]
+}}
+{{end-eqn}}
+as required.
+{{qed|lemma}}
+=== $(3')$ implies $(3)$ ===
+Now suppose that for all $A, B \in \SS$, there exists a [[Definition:Finite Sequence|finite sequence]] of [[Definition:Pairwise Disjoint|pairwise disjoint sets]] $A_1, A_2, \ldots, A_n \in \SS$ such that $\displaystyle A \setminus B = \bigcup_{k \mathop = 1}^n A_k$.
+Then $B$ is [[Definition:Disjoint Sets|disjoint]] from each of the sets $A_k$.
+Let $B \subseteq A$. Then:
+{{begin-eqn}}
+{{eqn | l = B \cup \bigcup_{k \mathop = 1}^n A_k
+ | r = \left({A \setminus B}\right) \cup B
+}}
+{{eqn | r = A \cup B
+ | c = [[Set Difference Union Second Set is Union]]
+}}
+{{eqn | r = A
+ | c = [[Union with Superset is Superset]]
+}}
+{{end-eqn}}
+as required.
+{{qed}}
+[[Category:Semirings of Sets]]
+\end{proof}<|endoftext|>
+\section{Imaginary Part as Mapping is Endomorphism for Complex Addition}
+Tags: Complex Addition, Group Endomorphisms, Imaginary Parts
+
+\begin{theorem}
+Let $\struct {\C, +}$ be the [[Definition:Additive Group of Complex Numbers|additive group of complex numbers]].
+Let $\struct {\R, +}$ be the [[Definition:Additive Group of Real Numbers|additive group of real numbers]].
+Let $f: \C \to \R$ be the [[Definition:Mapping|mapping]] from the [[Definition:Complex Number|complex numbers]] to the [[Definition:Real Number|real numbers]] defined as:
+:$\forall z \in \C: \map f z = \map \Im z$
+where $\map \Im z$ denotes the [[Definition:Imaginary Part|imaginary part]] of $z$.
+Then $f: \struct {\C, +} \to \struct {\R, +}$ is a [[Definition:Group Epimorphism|group epimorphism]].
+Its [[Definition:Kernel of Group Homomorphism|kernel]] is the set:
+:$\map \ker f = \R$
+of [[Definition:Wholly Real|(wholly) real numbers]].
+\end{theorem}
+
+\begin{proof}
+From [[Imaginary Part as Mapping is Surjection]], $f$ is a [[Definition:Surjection|surjection]].
+Let $z_1, z_2 \in \C$.
+Let $z_1 = x_1 + i y_1, z_2 = x_2 + i y_2$.
+Then:
+{{begin-eqn}}
+{{eqn | l = \map f {z_1 + z_2}
+ | r = \map \Im {z_1 + z_2}
+ | c = Definition of $f$
+}}
+{{eqn | r = \map \Im {x_1 + i y_1 + x_2 + i y_2}
+ | c = Definition of $z_1$ and $z_2$
+}}
+{{eqn | r = y_1 + y_2
+ | c = {{Defof|Imaginary Part}}
+}}
+{{eqn | r = \map \Im {z_1} + \map \Im {z_2}
+ | c = {{Defof|Imaginary Part}}
+}}
+{{eqn | r = \map f {z_1} + \map f {z_2}
+ | c = Definition of $f$
+}}
+{{end-eqn}}
+So $f$ is a [[Definition:Group Homomorphism|group homomorphism]].
+Thus $f$ is a [[Definition:Surjection|surjective]] [[Definition:Group Homomorphism|group homomorphism]] and therefore by definition a [[Definition:Group Epimorphism|group epimorphism]].
+Finally:
+:$\forall x \in \R: \map \Im {x + 0 i} = 0$
+It follows from [[Complex Addition Identity is Zero]] that:
+:$\map \ker f = \set {x: x \in \R} = \R$
+{{qed}}
+\end{proof}<|endoftext|>
+\section{Reduced Echelon Matrix is Unique}
+Tags: Echelon Matrices
+
+\begin{theorem}
+Every [[Definition:Matrix|$m \times n$ matrix]] is [[Definition:Row Equivalence|row equivalent]] to [[Definition:Unique|exactly one]] $m \times n$ [[Definition:Reduced Echelon Matrix|reduced echelon matrix]].
+That is, the [[Definition:Reduced Echelon Form|reduced echelon form]] of a [[Definition:Matrix|matrix]] is [[Definition:Unique|unique]].
+\end{theorem}
+
+\begin{proof}
+=== Proof of Existence ===
+Proved in [[Matrix is Row Equivalent to Reduced Echelon Matrix]].
+{{qed|lemma}}
+=== Proof of Uniqueness ===
+{{ProofWanted}}
+[[Category:Echelon Matrices]]
+\end{proof}<|endoftext|>
+\section{Measure of Set Difference with Subset}
+Tags: Measure Theory
+
+\begin{theorem}
+Let $\struct {X, \Sigma, \mu}$ be a [[Definition:Measure Space|measure space]].
+Let $S, T \in \Sigma$ be such that $S \subseteq T$, and suppose that $\mu \paren S < +\infty$.
+Then:
+:$\mu \paren {T \setminus S} = \mu \paren T - \mu \paren S$
+where $T \setminus S$ denotes [[Definition:Set Difference|set difference]].
+\end{theorem}
+
+\begin{proof}
+{{begin-eqn}}
+{{eqn | l = T
+ | r = \paren {T \setminus S} \cup \paren {T \cap S}
+ | c = [[Set Difference Union Intersection]]
+}}
+{{eqn | r = \paren {T \setminus S} \cup S
+ | c = [[Intersection with Subset is Subset]]
+}}
+{{eqn | ll= \leadsto
+ | l = \mu \paren T
+ | r = \mu \paren {T \setminus S} + \mu \paren S
+ | c = [[Measure is Finitely Additive Function]], [[Set Difference Intersection with Second Set is Empty Set]]
+}}
+{{eqn | ll= \leadsto
+ | l = \mu \paren T - \mu \paren S
+ | r = \mu \paren {T \setminus S}
+ | c = [[Definition:Real Subtraction|Subtraction]] is defined, as $\mu \paren S < +\infty$
+}}
+{{end-eqn}}
+{{qed}}
+[[Category:Measure Theory]]
+\end{proof}<|endoftext|>
+\section{Subset of Codomain is Superset of Image of Preimage}
+Tags: Composite Mappings, Preimages under Mappings, Subset of Codomain is Superset of Image of Preimage
+
+\begin{theorem}
+Let $f: S \to T$ be a [[Definition:Mapping|mapping]].
+Then:
+:$B \subseteq T \implies \paren {f \circ f^{-1} } \sqbrk B \subseteq B$
+where:
+:$f \sqbrk B$ denotes the [[Definition:Image of Subset under Mapping|image of $B$ under $f$]]
+:$f^{-1}$ denotes the [[Definition:Inverse of Mapping|inverse of $f$]]
+:$f \circ f^{-1}$ denotes [[Definition:Composition of Mappings|composition]] of $f$ and $f^{-1}$.
+This can be expressed in the language and notation of [[Definition:Direct Image Mapping|direct image mappings]] and [[Definition:Inverse Image Mapping|inverse image mappings]] as:
+:$\forall B \in \powerset T: \map {\paren {f^\to \circ f^\gets} } B \subseteq B$
+\end{theorem}
+
+\begin{proof}
+From [[Image of Preimage under Mapping]]:
+: $B \subseteq T \implies \left({f \circ f^{-1} }\right) \left[{B}\right] = B \cap f \left[{S}\right]$
+The result follows from [[Intersection is Subset]].
+{{qed}}
+\end{proof}
+
+\begin{proof}
+Let $y \in \left({f \circ f^{-1} }\right) \left[{B}\right]$.
+By definition of [[Definition:Composition of Mappings|composition]] of $f$ with $f^{-1}$:
+:$y \in f \left[{f^{-1} \left[{B}\right]}\right]$
+Therefore by definition of [[Definition:Image of Subset under Mapping|image of subset]]:
+:$\exists x \in f^{-1} \left[{B}\right]: y = f \left({x}\right)$
+By definition of [[Definition:Preimage of Subset under Mapping|preimage of subset]]:
+:$f \left({x}\right) \in B$
+That is, $y \in B$.
+The result follows by definition of [[Definition:Subset|subset]].
+{{qed}}
+\end{proof}
+
+\begin{proof}
+Let $B \subseteq T$.
+Then:
+{{begin-eqn}}
+{{eqn | l = y
+ | o = \in
+ | r = \paren {f \circ f^{-1} } \sqbrk B
+}}
+{{eqn | ll= \leadsto
+ | l = y
+ | o = \in
+ | r = f \sqbrk {f^{-1} \sqbrk B}
+ | c = {{Defof|Composition of Mappings}}
+}}
+{{eqn | ll= \leadsto
+ | lo= \exists x \in f^{-1} \sqbrk B:
+ | l = \map f x
+ | r = y
+ | c = {{Defof|Image of Subset under Mapping}}
+}}
+{{eqn | ll= \leadsto
+ | l = y
+ | o = \in
+ | r = B
+ | c = {{Defof|Preimage of Subset under Mapping}}
+}}
+{{end-eqn}}
+So by definition of [[Definition:Subset|subset]]:
+:$B \subseteq T \implies \paren {f \circ f^{-1} } \sqbrk B \subseteq B$
+{{qed}}
+\end{proof}<|endoftext|>
+\section{Induced Outer Measure Restricted to Semiring is Pre-Measure}
+Tags: Measure Theory
+
+\begin{theorem}
+Let $\mathcal S$ be a [[Definition:Semiring of Sets|semiring]] over a [[Definition:Set|set]] $X$.
+Let $\mu: \mathcal S \to \overline \R_{\ge 0}$ be a [[Definition:Pre-Measure|pre-measure]] on $\mathcal S$, where $\overline \R_{\ge 0}$ denotes the set of [[Definition:Positive Real Number|positive]] [[Definition:Extended Real Number Line|extended real numbers]].
+Let $\mu^*: \mathcal P \left({X}\right) \to \overline \R_{\ge 0}$ be the [[Definition:Outer Measure|outer measure]] [[Definition:Induced Outer Measure|induced]] by $\mu$.
+Then:
+:$\displaystyle \mu^*\restriction_{\mathcal S} \, = \mu$
+where $\restriction$ denotes [[Definition:Restriction of Mapping|restriction]].
+\end{theorem}
+
+\begin{proof}
+Let $S \in \mathcal S$.
+It follows immediately from the [[Definition:Induced Outer Measure|definition of the induced outer measure]] that $\mu^* \left({S}\right) \le \mu \left({S}\right)$.
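The theorem can be checked by brute force in a toy setting; in the sketch below, the semiring, the counting pre-measure and the minimisation over finite covers (padding by $\O$ turns a finite cover into a countable one) are illustrative choices of mine, not part of the proof:

```python
from itertools import combinations

# Toy setting: X = {0, 1}, semiring S = all subsets, counting pre-measure mu.
X = (0, 1)
S = [frozenset(c) for r in range(len(X) + 1) for c in combinations(X, r)]
mu = {A: len(A) for A in S}                  # counting pre-measure on S

def mu_star(A):
    """Induced outer measure: cheapest finite cover of A by members of S."""
    best = None
    for r in range(len(S) + 1):
        for cover in combinations(S, r):
            union = frozenset().union(*cover) if cover else frozenset()
            if A <= union:
                cost = sum(mu[B] for B in cover)
                best = cost if best is None else min(best, cost)
    return best

for A in S:
    assert mu_star(A) == mu[A]               # restriction of mu* to S is mu
print("mu* agrees with mu on the semiring")
```
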
+Therefore, it suffices to show that if $\displaystyle \left({A_n}\right)_{n \mathop = 0}^{\infty}$ is a [[Definition:Countable Cover|countable cover]] for $S$, then:
+:$\displaystyle \mu \left({S}\right) \le \sum_{n \mathop = 0}^\infty \mu \left({A_n}\right)$
+If the above statement is true, then it follows directly from the [[Definition:Infimum of Set|definition of infimum]] that $\mu \left({S}\right) \le \mu^* \left({S}\right)$, thus proving the theorem.
+{{refactor|The structure of this proof needs to be clarified. I recommend that the induction part of it be extracted into a separately-paged lemma, structured as our usual house style.}}
+Define, for all [[Definition:Natural Numbers|natural numbers]] $n \in \N$:
+:$\displaystyle B_n = A_n \setminus A_{n-1} \setminus \cdots \setminus A_0$
+where $\setminus$ denotes [[Definition:Set Difference|set difference]].
+We take $B_0 = A_0$.
+Using [[Principle of Mathematical Induction|mathematical induction]], we will prove that for all [[Definition:Natural Numbers|natural numbers]] $m < n$, $B_{n, m} = A_n \setminus A_{n-1} \setminus \cdots \setminus A_{n-m}$ is the [[Definition:Finite Union|finite union]] of [[Definition:Pairwise Disjoint|pairwise disjoint]] [[Definition:Element|elements]] of $\mathcal S$.
+We take $B_{n, 0} = A_n$.
+The [[Principle of Mathematical Induction#Basis for the Induction|base case]] $m = 0$ is [[Definition:Trivial|trivial]].
+{{Disambiguate|Definition:Trivial}} +Now assume the [[Principle of Mathematical Induction#Induction Hypothesis|induction hypothesis]] that the above statement is true for some [[Definition:Natural Numbers|natural number]] $m < n - 1$, and let $D_1, D_2, \ldots, D_N$ be [[Definition:Pairwise Disjoint|pairwise disjoint]] [[Definition:Element|elements]] of $\mathcal S$ such that: +:$\displaystyle B_{n, m} = \bigcup_{k \mathop = 1}^N D_k$ +Then: +{{begin-eqn}} +{{eqn | l = B_{n, m+1} + | r = B_{n, m} \setminus A_{n-m-1} +}} +{{eqn | r = \left({\bigcup_{k \mathop = 1}^N D_k}\right) \setminus A_{n-m-1} +}} +{{eqn | r = \bigcup_{k \mathop = 1}^N \, \left({D_k \setminus A_{n-m-1} }\right) + | c = by [[Set Difference is Right Distributive over Union]] +}} +{{end-eqn}} +By the definition of a [[Definition:Semiring of Sets|semiring]], for all [[Definition:Natural Numbers|natural numbers]] $k \le N$, $D_k \setminus A_{n-m-1}$ is the [[Definition:Finite Union|finite union]] of [[Definition:Pairwise Disjoint|pairwise disjoint]] [[Definition:Element|elements]] of $\mathcal S$. +Hence $B_{n, m+1}$ is the [[Definition:Finite Union|finite union]] of [[Definition:Pairwise Disjoint|pairwise disjoint]] [[Definition:Element|elements]] of $\mathcal S$, completing the [[Principle of Mathematical Induction#Induction Step|induction step]]. +Therefore, $B_{n, n-1} = B_n$ is the [[Definition:Finite Union|finite union]] of [[Definition:Pairwise Disjoint|pairwise disjoint]] [[Definition:Element|elements]] of $\mathcal S$, as desired. +Using the above result and applying the [[Axiom:Axiom of Countable Choice|axiom of countable choice]], we can, for all $n \in \N$, choose a [[Definition:Finite Set|finite set]] $\mathcal F_n$ of [[Definition:Pairwise Disjoint|pairwise disjoint]] [[Definition:Element|elements]] of $\mathcal S$ for which: +:$\displaystyle B_n = \bigcup \mathcal F_n$ +Now, $x \in S$ if and only if there exists an $n \in \N$ such that $x \in S \cap A_n$. 
+Taking the [[Definition:Smallest Element|smallest]] such $n$, which exists because [[Well-Ordering Principle|$\N$ is well-ordered]], it follows that $x \notin A_0, A_1, \ldots, A_{n-1}$, and so $x \in S \cap B_n$.
+Therefore:
+:$\displaystyle S = \bigcup_{n \mathop = 0}^\infty \, \left({S \cap B_n}\right)$
+Hence:
+{{begin-eqn}}
+{{eqn | l = \mu \left({S}\right)
+ | r = \mu \left({\bigcup_{n \mathop = 0}^\infty \, \left({S \cap B_n}\right)}\right)
+}}
+{{eqn | r = \mu \left({\bigcup_{n \mathop = 0}^\infty \, \left({S \cap \bigcup_{T \mathop \in \mathcal F_n} T}\right)}\right)
+ | c = by definition of $\mathcal F_n$
+}}
+{{eqn | r = \mu \left({\bigcup_{n \mathop = 0}^\infty \, \bigcup_{T \mathop \in \mathcal F_n} \, \left({S \cap T}\right)}\right)
+ | c = by [[Intersection Distributes over Union]]
+}}
+{{eqn | o = \le
+ | r = \sum_{n \mathop = 0}^\infty \, \sum_{T \mathop \in \mathcal F_n} \, \mu \left({S \cap T}\right)
+ | c = because the [[Countable Union of Countable Sets is Countable|countable union of finite sets is countable]], and by the [[Definition:Countably Subadditive Function|countable subadditivity]] of $\mu$
+}}
+{{eqn | o = \le
+ | r = \sum_{n \mathop = 0}^\infty \, \sum_{T \mathop \in \mathcal F_n} \, \mu \left({T}\right)
+ | c = by [[Definition:Monotone (Measure Theory)|monotonicity]]
+}}
+{{eqn | r = \sum_{n \mathop = 0}^\infty \, \mu \left({\bigcup \mathcal F_n}\right)
+ | c = by [[Definition:Additive Function (Measure Theory)|additivity]]
+}}
+{{eqn | r = \sum_{n \mathop = 0}^\infty \, \mu \left({B_n}\right)
+ | c = by definition of $\mathcal F_n$
+}}
+{{eqn | o = \le
+ | r = \sum_{n \mathop = 0}^\infty \, \mu \left({A_n}\right)
+ | c = by [[Set Difference is Subset]] and [[Definition:Monotone (Measure Theory)|monotonicity]]
+}}
+{{end-eqn}}
+{{qed}}
+{{ACC}}
+[[Category:Measure Theory]]
+\end{proof}<|endoftext|>
+\section{Ordering on Extended Real Numbers is Ordering}
+Tags: Usual Ordering on Extended Real
Numbers + +\begin{theorem} +Denote with $\le$ the [[Definition:Ordering on Extended Real Numbers|usual ordering]] on the [[Definition:Extended Real Number Line|extended real numbers]] $\overline \R$. +Then $\le$ is an [[Definition:Ordering|ordering]], and so $\overline \R$ is an [[Definition:Ordered Set|ordered set]]. +\end{theorem} + +\begin{proof} +=== Transitive === +Let $a, b, c \in \overline \R$. +Suppose that $a \le b$ and $b \le c$. +If $c = +\infty$, then by the definition of $\le$, $a \le c$. +If $b = +\infty$, then by [[Positive Infinity is Maximal]], $b = c$, so $a \le c$. +If $a = +\infty$, then applying [[Positive Infinity is Maximal]] twice yields $a \le c$. +The cases for $a$, $b$, or $c$ being $-\infty$ are similar. +The only remaining case is that $a, b, c \in \R$. +Since $\le$ is [[Definition:Transitive Relation|transitive]] on $\R$, $a \le c$. +{{qed|lemma}} +=== Antisymmetric === +Let $a, b \in \overline{\R}$. +Suppose that $a \le b$ and $b \le a$. +If $a \in \R$ and $b \in \R$, then since $\le$ is [[Definition:Antisymmetric Relation|antisymmetric]] on $\R$, $a = b$. +If $a = +\infty$, then since $a \le b$ and [[Positive Infinity is Maximal]], $b = +\infty$. +Thus $a = b$. +If $a = -\infty$, then since $b \le a$ and [[Negative Infinity is Minimal]], $b = -\infty$. +Similarly $b = +\infty \implies a = +\infty$ and $b = -\infty \implies a = -\infty$. +{{qed|lemma}} +=== Reflexive === +Let $a \in \overline \R$. +If $a \in \R$, then since $\le$ is [[Definition:Reflexive Relation|reflexive]] on $\R$, $a \le a$. +If $a = +\infty$, then since $\tuple {+\infty, +\infty} \in \set {\tuple {x, +\infty}: x \in \overline \R}$, $a \le a$. +Similarly, if $a = -\infty$, then $a \le a$. 
+{{qed}}
+[[Category:Usual Ordering on Extended Real Numbers]]
+\end{proof}<|endoftext|>
+\section{Ordering on Extended Real Numbers is Total Ordering}
+Tags: Usual Ordering on Extended Real Numbers
+
+\begin{theorem}
+Let $\le$ denote the [[Definition:Ordering on Extended Real Numbers|ordering]] on the [[Definition:Extended Real Number Line|extended real numbers]] $\overline \R$.
+Then $\le$ is a [[Definition:Total Ordering|total ordering]], and so $\overline \R$ is a [[Definition:Toset|toset]].
+\end{theorem}
+
+\begin{proof}
+{{ProofWanted}}
+[[Category:Usual Ordering on Extended Real Numbers]]
+\end{proof}<|endoftext|>
+\section{Extended Real Number Space is Compact}
+Tags: Extended Real Number Space
+
+\begin{theorem}
+The [[Definition:Extended Real Number Space|extended real number space]] is [[Definition:Compact Topological Space|compact]].
+\end{theorem}
+
+\begin{proof}
+{{ProofWanted|opens at each infinity, and a closed, bounded area remains}}
+[[Category:Extended Real Number Space]]
+\end{proof}<|endoftext|>
+\section{Euclidean Space is Subspace of Extended Real Number Space}
+Tags: Extended Real Number Space, Euclidean Space
+
+\begin{theorem}
+Let $\struct {\overline \R, \tau}$ be the [[Definition:Extended Real Number Space|extended real number space]].
+Then $\tau {\restriction_\R}$, the [[Definition:Subspace Topology|subspace topology]] on $\R$, is the [[Definition:Euclidean Topology on Real Number Line|Euclidean topology]].
+That is, [[Definition:Euclidean Space|Euclidean $1$-space]] is a [[Definition:Topological Subspace|subspace]] of the [[Definition:Extended Real Number Space|extended real number space]].
+\end{theorem}
+
+\begin{proof}
+{{ProofWanted}}
+[[Category:Extended Real Number Space]]
+[[Category:Euclidean Space]]
+\end{proof}<|endoftext|>
+\section{Transcendental Slope}
+Tags: Transcendental Number Theory
+
+\begin{theorem}
+The slope of a line may be transcendental.
+\end{theorem}
+
+\begin{proof}
+For any number $x$ there is a line of slope $x$: a line rising $x$ units for each unit of run has slope:
+{{begin-eqn}}
+{{eqn|l = {\mathrm m}
+ |r = \frac {x} {1}
+}}
+{{eqn|r = {x}
+}}
+{{end-eqn}}
+If $x$ is transcendental, then the slope $\mathrm m$ of such a line is transcendental.
+{{qed}}
+\end{proof}<|endoftext|>
+\section{Lagrange's Formula}
+Tags: Vector Algebra, Dot Product, Vector Cross Product
+
+\begin{theorem}
+Let:
+:$\mathbf a = \begin{bmatrix} a_x \\ a_y \\ a_z \end{bmatrix}$, $\mathbf b = \begin{bmatrix} b_x \\ b_y \\ b_z \end{bmatrix}$, $\mathbf c = \begin{bmatrix} c_x \\ c_y \\ c_z \end{bmatrix}$
+be [[Definition:Vector (Euclidean Space)|vectors]] in a [[Definition:Vector Space|vector space]] of [[Definition:Dimension of Vector Space|$3$ dimensions]].
+Then:
+:$\mathbf a \times \paren {\mathbf b \times \mathbf c} = \paren {\mathbf a \cdot \mathbf c} \mathbf b - \paren {\mathbf a \cdot \mathbf b} \mathbf c$
+\end{theorem}
+
+\begin{proof}
+{{begin-eqn}}
+{{eqn | l = \mathbf b \times \mathbf c
+ | r = \begin {bmatrix} b_x \\ b_y \\ b_z \end {bmatrix} \times \begin {bmatrix} c_x \\ c_y \\ c_z \end {bmatrix}
+}}
+{{eqn | r = \begin {bmatrix} b_y c_z - b_z c_y \\ b_z c_x - b_x c_z \\ b_x c_y - b_y c_x \end {bmatrix}
+ | c = {{Defof|Vector Cross Product}}
+}}
+{{eqn | l = \mathbf a \times \paren {\mathbf b \times \mathbf c}
+ | r = \begin {bmatrix} a_x \\ a_y \\ a_z \end {bmatrix} \times \begin {bmatrix} b_y c_z - b_z c_y \\ b_z c_x - b_x c_z \\ b_x c_y - b_y c_x \end {bmatrix}
+}}
+{{eqn | r = \begin {bmatrix} a_y b_x c_y - a_y b_y c_x - a_z b_z c_x + a_z b_x c_z \\ a_z b_y c_z - a_z b_z c_y - a_x b_x c_y + a_x b_y c_x \\ a_x b_z c_x - a_x b_x c_z - a_y b_y c_z + a_y b_z c_y \end {bmatrix}
+ | c = {{Defof|Vector Cross Product}}
+}}
+{{eqn | r = \begin {bmatrix} a_y b_x c_y - a_y b_y c_x - a_z b_z c_x + a_z b_x c_z + a_x b_x c_x - a_x b_x c_x \\ a_z b_y c_z - a_z b_z c_y - a_x b_x c_y + a_x b_y c_x + a_y b_y c_y - a_y b_y c_y \\ a_x b_z c_x - a_x b_x c_z - a_y b_y c_z + a_y b_z c_y + a_z b_z c_z - a_z b_z c_z \end {bmatrix}
+ | c = adding $0 = a_i b_i c_i - a_i b_i c_i$ to each entry
+}}
+{{eqn | r = \begin {bmatrix} b_x \paren {a_y c_y + a_z c_z + a_x c_x} - c_x \paren {a_y b_y + a_z b_z + a_x b_x} \\ b_y \paren {a_z c_z + a_x c_x + a_y c_y} - c_y \paren {a_z b_z + a_x b_x + a_y b_y} \\ b_z \paren {a_x c_x + a_y c_y + a_z c_z} - c_z \paren {a_x b_x + a_y b_y + a_z b_z} \end {bmatrix}
+}}
+{{eqn | r = \begin {bmatrix} b_x \paren {\mathbf a \cdot \mathbf c} - c_x \paren {\mathbf a \cdot \mathbf b} \\ b_y \paren {\mathbf a \cdot \mathbf c} - c_y \paren {\mathbf a \cdot \mathbf b} \\ b_z \paren {\mathbf a \cdot \mathbf c} - c_z \paren {\mathbf a \cdot \mathbf b} \end {bmatrix}
+ | c = {{Defof|Dot Product}}
+}}
+{{eqn | r
= \paren {\mathbf a \cdot \mathbf c} \begin {bmatrix} b_x \\ b_y \\ b_z \end {bmatrix} - \paren {\mathbf a \cdot \mathbf b} \begin {bmatrix} c_x \\ c_y \\ c_z \end {bmatrix} +}} +{{eqn | r = \paren {\mathbf a \cdot \mathbf c} \mathbf b - \paren {\mathbf a \cdot \mathbf b} \mathbf c +}} +{{end-eqn}} +{{qed}} +{{Namedfor|Joseph Louis Lagrange|cat = Lagrange}} +\end{proof}<|endoftext|> +\section{Isomorphism of External Direct Products/General Result} +Tags: Isomorphisms, External Direct Products + +\begin{theorem} +Let: +: $(1): \quad \displaystyle \left({S, \circ}\right) = \prod_{k \mathop = 1}^n S_k = \left({S_1, \circ_1}\right) \times \left({S_2, \circ_2}\right) \times \cdots \times \left({S_n, \circ_n}\right)$ +: $(2): \quad \displaystyle \left({T, \ast}\right) = \prod_{k \mathop = 1}^n T_k = \left({T_1, \ast_1}\right) \times \left({T_2, \ast_2}\right) \times \cdots \times \left({T_n, \ast_n}\right)$ +be [[Definition:External Direct Product/General Definition|external direct products]] of [[Definition:Algebraic Structure|algebraic structures]]. +Let $\phi_k: \left({S_k, \circ_k}\right) \to \left({T_k, \ast_k}\right)$ be an [[Definition:Isomorphism (Abstract Algebra)|isomorphism]] for each $k \in \left[{1 \,.\,.\, n}\right]$. +Then: +:$\phi: \left({s_1, \ldots, s_n}\right) \to \left({\phi_1 \left({s_1}\right), \ldots, \phi_n \left({s_n}\right)}\right)$ +is an [[Definition:Isomorphism (Abstract Algebra)|isomorphism]] from $\left({S, \circ}\right)$ to $\left({T, \ast}\right)$. +\end{theorem} + +\begin{proof} +By definition of [[Definition:Isomorphism (Abstract Algebra)|isomorphism]], each $\phi_k$ is a [[Definition:Homomorphism (Abstract Algebra)|homomorphism]] which is a [[Definition:Bijection|bijection]]. +From [[Cartesian Product of Bijections is Bijection/General Result|Cartesian Product of Bijections is Bijection: General Result]], $\phi$ is a [[Definition:Bijection|bijection]]. 
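A concrete finite instance may help here; in the sketch below, the component groups $\Z_2$ and $\Z_3$ (additive) and the component isomorphisms are my own example choices, not taken from the theorem:

```python
from itertools import product

# Example: phi_1 the identity on (Z_2, +), phi_2 : x -> 2x on (Z_3, +),
# combined componentwise on the direct product Z_2 x Z_3.
def phi1(x): return x
def phi2(x): return (2 * x) % 3

def op(s, t):                                # componentwise addition
    return ((s[0] + t[0]) % 2, (s[1] + t[1]) % 3)

def phi(s):
    return (phi1(s[0]), phi2(s[1]))

G = list(product(range(2), range(3)))
assert len({phi(s) for s in G}) == len(G)    # phi is a bijection
for s, t in product(G, repeat=2):
    assert phi(op(s, t)) == op(phi(s), phi(t))   # phi is a homomorphism
print("componentwise map is an isomorphism on the example")
```
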
+From [[Homomorphism of External Direct Products/General Result|Homomorphism of External Direct Products: General Result]], $\phi$ is a [[Definition:Homomorphism (Abstract Algebra)|homomorphism]]. +Hence the result. +{{qed}} +\end{proof}<|endoftext|> +\section{External Direct Product of Groups is Group/Finite Product} +Tags: Group Theory, Group Direct Products + +\begin{theorem} +The [[Definition:Group Direct Product|external direct product]] of a [[Definition:Finite Sequence|finite sequence]] of [[Definition:Group|groups]] is itself a [[Definition:Group|group]]. +\end{theorem} + +\begin{proof} +Let $\left({G_1, \circ_1}\right), \left({G_2, \circ_2}\right), \ldots, \left({G_n, \circ_n}\right)$ be [[Definition:Group|groups]]. +Let $\displaystyle \left({G, \circ}\right) = \prod_{k \mathop = 1}^n G_k$ be the [[Definition:External Direct Product|external direct product]] of $\left({G_1, \circ_1}\right), \left({G_2, \circ_2}\right), \ldots, \left({G_n, \circ_n}\right)$. +Taking the [[Definition:Group Axioms|group axioms]] in turn: +=== G0: Closure === +From [[External Direct Product Closure/General Result|External Direct Product Closure: General Result]] it follows that $\left({G, \circ}\right)$ is [[Definition:Closed Algebraic Structure|closed]]. +{{qed|lemma}} +=== G1: Associativity === +From [[External Direct Product Associativity/General Result|External Direct Product Associativity: General Result]] it follows that $\left({G, \circ}\right)$ is [[Definition:Associative|associative]]. +{{qed|lemma}} +=== G2: Identity === +Let $e_1, e_2, \ldots, e_n$ be the [[Definition:Identity Element|identity elements]] of $\left({G_1, \circ_1}\right), \left({G_2, \circ_2}\right), \ldots, \left({G_n, \circ_n}\right)$ respectively. +From [[External Direct Product Identity/General Result|External Direct Product Identity: General Result]] it follows that $\left({e_1, e_2, \ldots, e_n}\right)$ is the [[Definition:Identity Element|identity element]] of $\left({G, \circ}\right)$. 
+{{qed|lemma}}
+=== G3: Inverses ===
+Let $g_1, g_2, \ldots, g_n$ be arbitrary [[Definition:Element|elements]] of $G_1, G_2, \ldots, G_n$ respectively.
+Let $g_1^{-1}, g_2^{-1}, \ldots, g_n^{-1}$ be the [[Definition:Inverse Element|inverse elements]] of $g_1, g_2, \ldots, g_n$ in $\left({G_1, \circ_1}\right), \left({G_2, \circ_2}\right), \ldots, \left({G_n, \circ_n}\right)$ respectively.
+From [[External Direct Product Inverses/General Result|External Direct Product Inverses: General Result]] it follows that $\left({g_1^{-1}, g_2^{-1}, \ldots, g_n^{-1}}\right)$ is the [[Definition:Inverse Element|inverse element]] of $\left({g_1, g_2, \ldots, g_n}\right)$ in $\left({G, \circ}\right)$.
+{{qed|lemma}}
+All [[Definition:Group Axioms|group axioms]] are fulfilled, hence $\left({G, \circ}\right)$ is a [[Definition:Group|group]].
+{{qed}}
+\end{proof}<|endoftext|>
+\section{External Direct Product Inverses/General Result}
+Tags: External Direct Products
+
+\begin{theorem}
+Let $\displaystyle \left({S, \circ}\right) = \prod_{k \mathop = 1}^n S_k$ be the [[Definition:External Direct Product/General Definition|external direct product]] of the [[Definition:Algebraic Structure|algebraic structures]] $\left({S_1, \circ_1}\right), \left({S_2, \circ_2}\right), \ldots, \left({S_n, \circ_n}\right)$.
+Let $\left({x_1, x_2, \ldots, x_n}\right) \in S$.
+Let $y_k$ be an [[Definition:Inverse Element|inverse]] of $x_k$ in $\left({S_k, \circ_k}\right)$ for each $k \in \N^*_n$.
+Then $\left({y_1, y_2, \ldots, y_n}\right)$ is the [[Definition:Inverse Element|inverse]] of $\left({x_1, x_2, \ldots, x_n}\right) \in S$ in $\left({S, \circ}\right)$.
+\end{theorem}
+
+\begin{proof}
+Let $e_1, e_2, \ldots, e_n$ be the [[Definition:Identity Element|identity elements]] of $\left({S_1, \circ_1}\right), \left({S_2, \circ_2}\right), \ldots, \left({S_n, \circ_n}\right)$ respectively.
+Let $x := \left({x_1, x_2, \ldots, x_n}\right)$.
+Let $y := \left({y_1, y_2, \ldots, y_n}\right)$.
+From [[External Direct Product Identity/General Result|External Direct Product Identity]], $e := \left({e_1, e_2, \ldots, e_n}\right)$ is the [[Definition:Identity Element|identity element]] of $\left({S, \circ}\right)$.
+Then:
+{{begin-eqn}}
+{{eqn | l = x \circ y
+ | r = \left({x_1, x_2, \ldots, x_n}\right) \circ \left({y_1, y_2, \ldots, y_n}\right)
+ | c =
+}}
+{{eqn | r = \left({x_1 \circ_1 y_1, x_2 \circ_2 y_2, \ldots, x_n \circ_n y_n}\right)
+ | c = Definition of [[Definition:External Direct Product/General Definition|External Direct Product]]
+}}
+{{eqn | r = \left({e_1, e_2, \ldots, e_n}\right)
+ | c = {{Defof|Inverse Element}}
+}}
+{{eqn | r = e
+ | c = [[External Direct Product Identity/General Result|External Direct Product Identity]]
+}}
+{{end-eqn}}
+and:
+{{begin-eqn}}
+{{eqn | l = y \circ x
+ | r = \left({y_1, y_2, \ldots, y_n}\right) \circ \left({x_1, x_2, \ldots, x_n}\right)
+ | c =
+}}
+{{eqn | r = \left({y_1 \circ_1 x_1, y_2 \circ_2 x_2, \ldots, y_n \circ_n x_n}\right)
+ | c = Definition of [[Definition:External Direct Product/General Definition|External Direct Product]]
+}}
+{{eqn | r = \left({e_1, e_2, \ldots, e_n}\right)
+ | c = {{Defof|Inverse Element}}
+}}
+{{eqn | r = e
+ | c = [[External Direct Product Identity/General Result|External Direct Product Identity]]
+}}
+{{end-eqn}}
+{{qed}}
+\end{proof}<|endoftext|>
+\section{External Direct Product Identity/General Result}
+Tags: External Direct Products
+
+\begin{theorem}
+Let $\displaystyle \left({S, \circ}\right) = \prod_{k \mathop = 1}^n S_k$ be the [[Definition:External Direct Product/General Definition|external direct product]] of the [[Definition:Algebraic Structure|algebraic structures]] $\left({S_1, \circ_1}\right), \left({S_2, \circ_2}\right), \ldots, \left({S_n, \circ_n}\right)$.
+Let $e_1, e_2, \ldots, e_n$ be the [[Definition:Identity Element|identity elements]] of $\left({S_1, \circ_1}\right), \left({S_2, \circ_2}\right), \ldots, \left({S_n, \circ_n}\right)$ respectively.
+Then $\left({e_1, e_2, \ldots, e_n}\right)$ is the [[Definition:Identity Element|identity element]] of $\left({S, \circ}\right)$. +\end{theorem} + +\begin{proof} +Let $s := \left({s_1, s_2, \ldots, s_n}\right)$ be an arbitrary element of $\left({S_1, \circ_1}\right) \times \left({S_2, \circ_2}\right) \times \cdots \times \left({S_n, \circ_n}\right)$. +Let $e := \left({e_1, e_2, \ldots, e_n}\right)$. +Then: +{{begin-eqn}} +{{eqn | l = s \circ e + | r = \left({s_1, s_2, \ldots, s_n}\right) \circ \left({e_1, e_2, \ldots, e_n}\right) + | c = +}} +{{eqn | r = \left({s_1 \circ_1 e_1, s_2 \circ_2 e_2, \ldots, s_n \circ_n e_n}\right) + | c = Definition of [[Definition:External Direct Product/General Definition|External Direct Product]] +}} +{{eqn | r = \left({s_1, s_2, \ldots, s_n}\right) + | c = {{Defof|Identity Element}} +}} +{{eqn | r = s + | c = Definition of $s$ +}} +{{end-eqn}} +and: +{{begin-eqn}} +{{eqn | l = e \circ s + | r = \left({e_1, e_2, \ldots, e_n}\right) \circ \left({s_1, s_2, \ldots, s_n}\right) + | c = +}} +{{eqn | r = \left({e_1 \circ_1 s_1, e_2 \circ_2 s_2, \ldots, e_n \circ_n s_n}\right) + | c = Definition of [[Definition:External Direct Product/General Definition|External Direct Product]] +}} +{{eqn | r = \left({s_1, s_2, \ldots, s_n}\right) + | c = {{Defof|Identity Element}} +}} +{{eqn | r = s + | c = Definition of $s$ +}} +{{end-eqn}} +{{qed}} +\end{proof}<|endoftext|> +\section{External Direct Product Commutativity/General Result} +Tags: External Direct Products + +\begin{theorem} +Let $\displaystyle \left({S, \circ}\right) = \prod_{k \mathop = 1}^n S_k$ be the [[Definition:External Direct Product/General Definition|external direct product]] of the [[Definition:Algebraic Structure|algebraic structures]] $\left({S_1, \circ_1}\right), \left({S_2, \circ_2}\right), \ldots, \left({S_n, \circ_n}\right)$. +If $\circ_1, \ldots, \circ_n$ are all [[Definition:Commutative Operation|commutative]], then so is $\circ$. 
+\end{theorem}
+
+\begin{proof}
+Suppose that, for all $k \in \N^*_n$, $\circ_k$ is [[Definition:Commutative Operation|commutative]].
+Let $\left({s_1, s_2, \ldots, s_n}\right)$ and $\left({t_1, t_2, \ldots, t_n}\right)$ be elements of $\left({S_1, \circ_1}\right) \times \left({S_2, \circ_2}\right) \times \cdots \times \left({S_n, \circ_n}\right)$.
+{{begin-eqn}}
+{{eqn | l = \left({s_1, s_2, \ldots, s_n}\right) \circ \left({t_1, t_2, \ldots, t_n}\right)
+ | r = \left({s_1 \circ_1 t_1, s_2 \circ_2 t_2, \ldots, s_n \circ_n t_n}\right)
+ | c = 
+}}
+{{eqn | r = \left({t_1 \circ_1 s_1, t_2 \circ_2 s_2, \ldots, t_n \circ_n s_n}\right)
+ | c = 
+}}
+{{eqn | r = \left({t_1, t_2, \ldots, t_n}\right) \circ \left({s_1, s_2, \ldots, s_n}\right)
+ | c = 
+}}
+{{end-eqn}}
+Hence the result.
+{{qed}}
+\end{proof}<|endoftext|>
+\section{External Direct Product Associativity/General Result}
+Tags: External Direct Products
+
+\begin{theorem}
+Let $\displaystyle \left({S, \circ}\right) = \prod_{k \mathop = 1}^n S_k$ be the [[Definition:External Direct Product/General Definition|external direct product]] of the [[Definition:Algebraic Structure|algebraic structures]] $\left({S_1, \circ_1}\right), \left({S_2, \circ_2}\right), \ldots, \left({S_n, \circ_n}\right)$.
+If $\circ_1, \ldots, \circ_n$ are all [[Definition:Associative Operation|associative]], then so is $\circ$.
+\end{theorem}
+
+\begin{proof}
+Proof by [[Principle of Mathematical Induction|induction]]:
+For all $n \in \N_{> 0}$, let $P \left({n}\right)$ be the [[Definition:Proposition|proposition]]:
+:If $\circ_1, \ldots, \circ_n$ are all [[Definition:Associative Operation|associative]], then so is the [[Definition:External Direct Product/General Definition|external direct product]] $\circ$ of $\circ_1, \ldots, \circ_n$.
+=== Basis for the Induction ===
+$P \left({1}\right)$ is true, as this just says:
+:$\circ_1$ is [[Definition:Associative Operation|associative]].
+$P \left({2}\right)$ is the case: +:If $\circ_1$ and $\circ_2$ are both [[Definition:Associative Operation|associative]], then so is the [[Definition:External Direct Product/General Definition|external direct product]] $\circ$ of $\circ_1$ and $\circ_2$. +This has been proved in [[External Direct Product Associativity]]. +This is our [[Principle of Mathematical Induction#Basis for the Induction|basis for the induction]]. +=== Induction Hypothesis === +Now we need to show that, if $P \left({k}\right)$ is true, where $k \ge 2$, then it logically follows that $P \left({k+1}\right)$ is true. +So this is our [[Principle of Mathematical Induction#Induction Hypothesis|induction hypothesis]]: +:If $\circ_1, \ldots, \circ_k$ are all [[Definition:Associative Operation|associative]], then so is the [[Definition:External Direct Product/General Definition|external direct product]] $\circ$ of $\circ_1, \ldots, \circ_k$. +Then we need to show: +:If $\circ_1, \ldots, \circ_{k+1}$ are all [[Definition:Associative Operation|associative]], then so is the [[Definition:External Direct Product/General Definition|external direct product]] $\circ$ of $\circ_1, \ldots, \circ_{k+1}$. 
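As an aside, the claim driving this induction lends itself to a quick computational illustration. The following Python sketch (illustrative only; the helper name `direct_product` is hypothetical and not part of the proof) builds the operation induced componentwise by a family of associative operations and checks associativity on sample tuples:

```python
# Sanity check (not part of the proof): the componentwise operation on
# n-tuples inherits associativity from its factor operations.
# Factors here: integer addition and string concatenation, both associative.

def direct_product(ops):
    """Return the operation induced componentwise on tuples by ops."""
    def op(xs, ys):
        return tuple(o(x, y) for o, x, y in zip(ops, xs, ys))
    return op

circ = direct_product([lambda a, b: a + b, lambda a, b: a + b])

a, b, c = (1, "u"), (2, "v"), (3, "w")
assert circ(a, circ(b, c)) == circ(circ(a, b), c) == (6, "uvw")
```

This checks a single instance only; the induction below is what establishes the result for all $n$ and all associative factor operations.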
+=== Induction Step === +This is our [[Principle of Mathematical Induction#Induction Step|induction step]]: +Let $a, b, c \in S^{k+1}$: +:$a = \left({a_1, a_2, \ldots, a_k, a_{k+1} }\right)$ +:$b = \left({b_1, b_2, \ldots, b_k, b_{k+1} }\right)$ +:$c = \left({c_1, c_2, \ldots, c_k, c_{k+1} }\right)$ +Note that in the below, by [[Definition:Abuse of Notation|abuse of notation]], $\circ$ is to be used for two separate operations: +:$\left({a_1, a_2, \ldots, a_k, a_{k+1} }\right) \circ \left({b_1, b_2, \ldots, b_k, b_{k+1} }\right)$ +and: +:$\left({a_1, a_2, \ldots, a_k}\right) \circ \left({b_1, b_2, \ldots, b_k}\right)$ +Thus: +{{begin-eqn}} +{{eqn | l = a \circ \left({b \circ c}\right) + | r = \left({a_1, a_2, \ldots, a_k, a_{k+1} }\right) \circ \left({\left({b_1, b_2, \ldots, b_k, b_{k+1} }\right) \circ \left({c_1, c_2, \ldots, c_k, c_{k+1} }\right)}\right) + | c = +}} +{{eqn | r = \left({\left({a_1, a_2, \ldots, a_k}\right), a_{k+1} }\right) \circ \left({\left({\left({b_1, b_2, \ldots, b_k}\right), b_{k+1} }\right) \circ \left({\left({c_1, c_2, \ldots, c_k}\right), c_{k+1} }\right)}\right) + | c = Definition of [[Definition:Ordered Tuple|Ordered Tuple]] +}} +{{eqn | r = \left({\left({a_1, a_2, \ldots, a_k}\right) \circ \left({\left({b_1, b_2, \ldots, b_k}\right) \circ \left({c_1, c_2, \ldots, c_k}\right)}\right), a_{k+1} \circ_{k+1} \left({b_{k+1} \circ_{k+1} c_{k+1} }\right)}\right) + | c = Definition of [[Definition:Operation Induced by Direct Product|Induced Operation]] +}} +{{eqn | r = \left({\left({\left({a_1, a_2, \ldots, a_k}\right) \circ \left({b_1, b_2, \ldots, b_k}\right)}\right) \circ \left({c_1, c_2, \ldots, c_k}\right), a_{k+1} \circ_{k+1} \left({b_{k+1} \circ_{k+1} c_{k+1} }\right)}\right) + | c = [[External Direct Product Associativity/General Result#Induction Hypothesis|Induction Hypothesis]] +}} +{{eqn | r = \left({\left({\left({a_1, a_2, \ldots, a_k}\right) \circ \left({b_1, b_2, \ldots, b_k}\right)}\right) \circ \left({c_1, c_2, \ldots, 
c_k}\right), \left({a_{k+1} \circ_{k+1} b_{k+1} }\right) \circ_{k+1} c_{k+1} }\right) + | c = [[External Direct Product Associativity/General Result#Basis for the Induction|Basis for the Induction]] +}} +{{eqn | r = \left({\left({\left({a_1, a_2, \ldots, a_k}\right), a_{k+1} }\right) \circ \left({\left({b_1, b_2, \ldots, b_k}\right), b_{k+1} }\right)}\right) \circ \left({\left({c_1, c_2, \ldots, c_k}\right), c_{k+1} }\right) + | c = Definition of [[Definition:Operation Induced by Direct Product|Induced Operation]] +}} +{{eqn | r = \left({\left({a_1, a_2, \ldots, a_k, a_{k+1} }\right) \circ \left({b_1, b_2, \ldots, b_k, b_{k+1} }\right)}\right) \circ \left({c_1, c_2, \ldots, c_k, c_{k+1} }\right) + | c = Definition of [[Definition:Ordered Tuple|Ordered Tuple]] +}} +{{eqn | r = \left({a \circ b}\right) \circ c + | c = [[Definition:By Hypothesis|By Hypothesis]] +}} +{{end-eqn}} +So $P \left({k}\right) \implies P \left({k+1}\right)$ and the result follows by the [[Principle of Mathematical Induction]]. +Therefore: +For all $n \in \N_{> 0}$: +:If $\circ_1, \ldots, \circ_n$ are all [[Definition:Associative Operation|associative]], then so is the [[Definition:External Direct Product/General Definition|external direct product]] $\circ$ of $\circ_1, \ldots, \circ_n$. +{{qed}} +\end{proof}<|endoftext|> +\section{Vector Cross Product is not Associative} +Tags: Vector Cross Product + +\begin{theorem} +The [[Definition:Vector Cross Product|vector cross product]] is ''not'' [[Definition:Associative|associative]]. +That is, in general: +:$\mathbf a \times \left({\mathbf b \times \mathbf c}\right) \ne \left({\mathbf a \times \mathbf b}\right) \times \mathbf c$ +for $\mathbf {a}, \mathbf {b}, \mathbf {c} \in \R^3$. 
+\end{theorem}
+
+\begin{proof}
+[[Proof by Counterexample]]:
+Let $\mathbf a = \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}$, $\mathbf b = \begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix}$, $\mathbf c = \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix}$
+be [[Definition:Vector (Euclidean Space)|vectors in $\R^3$]].
+{{begin-eqn}}
+{{eqn | l = \mathbf a \times \left({\mathbf b \times \mathbf c}\right)
+ | r = \mathbf a \times \left({\begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix} \times \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix} }\right)
+}}
+{{eqn | r = \mathbf a \times \begin{bmatrix} 1 \\ -1 \\ 0 \end{bmatrix}
+}}
+{{eqn | r = \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix} \times \begin{bmatrix} 1 \\ -1 \\ 0 \end{bmatrix}
+}}
+{{eqn | r = \begin{bmatrix} 0 \\ 0 \\ -1 \end{bmatrix}
+}}
+{{eqn | l = \left({\mathbf a \times \mathbf b}\right) \times \mathbf c
+ | r = \left({\begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix} \times \begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix} }\right) \times \mathbf c
+}}
+{{eqn | r = \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix} \times \mathbf c
+}}
+{{eqn | r = \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix} \times \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix}
+}}
+{{eqn | r = \begin{bmatrix} -1 \\ 1 \\ 0 \end{bmatrix}
+}}
+{{end-eqn}}
+{{qed}}
+[[Category:Vector Cross Product]]
+\end{proof}<|endoftext|>
+\section{Vector Cross Product is Anticommutative}
+Tags: Vector Cross Product, Vector Cross Product is Anticommutative
+
+\begin{theorem}
+The [[Definition:Vector Cross Product|vector cross product]] is [[Definition:Anticommutative|anticommutative]]:
+:$\forall \mathbf a, \mathbf b \in \R^3: \mathbf a \times \mathbf b = -\left({\mathbf b \times \mathbf a}\right)$
+\end{theorem}
+
+\begin{proof}
+{{begin-eqn}}
+{{eqn | l = \mathbf b \times \mathbf a
+ | r = \begin {bmatrix} b_i \\ b_j \\ b_k \end {bmatrix} \times \begin {bmatrix} a_i \\ a_j \\ a_k \end {bmatrix}
+}}
+{{eqn | r = \begin {bmatrix} b_j a_k - a_j b_k \\ b_k a_i - b_i a_k \\ b_i a_j - a_i b_j \end {bmatrix}
+}}
+{{eqn | l = \mathbf a \times \mathbf b
+ | r = \begin {bmatrix} a_i \\ a_j \\ a_k \end {bmatrix} \times \begin {bmatrix} b_i \\ b_j \\ b_k \end {bmatrix}
+}}
+{{eqn | r = \begin {bmatrix} a_j b_k - a_k b_j \\ a_k b_i - a_i b_k \\ a_i b_j - a_j b_i \end {bmatrix}
+}}
+{{eqn | r = \begin {bmatrix} -\paren {a_k b_j - a_j b_k} \\ -\paren {a_i b_k - a_k b_i} \\ -\paren {a_j b_i - a_i b_j} \end {bmatrix}
+}}
+{{eqn | r = -1 \begin {bmatrix} b_j a_k - a_j b_k \\ b_k a_i - b_i a_k \\ b_i a_j - a_i b_j \end {bmatrix}
+}}
+{{eqn | r = -\paren {\mathbf b \times \mathbf a}
+}}
+{{end-eqn}}
+{{qed}}
+\end{proof}
+
+\begin{proof}
+{{begin-eqn}}
+{{eqn | l = \paren {\mathbf a + \mathbf b} \times \paren {\mathbf a + \mathbf b}
+ | r = \mathbf a \times \mathbf a + \mathbf a \times \mathbf b + \mathbf b \times \mathbf a + \mathbf b \times \mathbf b
+ | c = [[Vector Cross Product Operator is Bilinear]]
+}}
+{{eqn | l = 0
+ | r = 0 + \mathbf a \times \mathbf b + \mathbf b \times \mathbf a + 0
+ | c = [[Cross Product of Vector with Itself is Zero]]
+}}
+{{eqn | l = \mathbf a \times \mathbf b
+ | r = -\paren {\mathbf b \times \mathbf a}
+ | c = simplifying
+}}
+{{end-eqn}}
+{{qed}}
+\end{proof}
+
+\begin{proof}
+{{begin-eqn}}
+{{eqn | l = \mathbf a \times \mathbf b
+ | r = \begin{vmatrix}
+\mathbf i & \mathbf j & \mathbf k \\
+a_i & a_j & a_k \\
+b_i & b_j & b_k \\
+\end{vmatrix}
+ | c = {{Defof|Vector Cross Product}}
+}}
+{{eqn | r = -\begin{vmatrix}
+\mathbf i & \mathbf j & \mathbf k \\
+b_i & b_j & b_k \\
+a_i & a_j & a_k \\
+\end{vmatrix}
+ | c = [[Determinant with Rows Transposed]]
+}}
+{{eqn | r = -\left({\mathbf b \times \mathbf a}\right)
+ | c = {{Defof|Vector Cross Product}}
+}}
+{{end-eqn}}
+{{qed}}
+\end{proof}<|endoftext|>
+\section{Projection is Surjection/General Version}
+Tags: Surjections, Projections
+
+\begin{theorem}
+For all non-[[Definition:Empty Set|empty sets]] $S_1, S_2, \ldots, S_j, \ldots, S_n$, the [[Definition:Projection (Mapping Theory)|$j$th projection]] $\operatorname{pr}_j$ on $\displaystyle \prod_{i \mathop = 1}^n S_i$ is a [[Definition:Surjection|surjection]].
+\end{theorem}
+
+\begin{proof}
+Consider the $j$th projection.
+As long as none of $S_1, S_2, \ldots, S_n$ is the [[Definition:Empty Set|empty set]], then:
+:$\displaystyle \forall x \in S_j: \exists \left({s_1, s_2, \ldots, s_{j-1}, x, s_{j+1}, \ldots, s_n}\right) \in \prod_{k \mathop = 1}^n S_k: \operatorname{pr}_j \left({\left({s_1, s_2, \ldots, s_{j-1}, x, s_{j+1}, \ldots, s_n}\right)}\right) = x$
+Hence the result.
+{{qed}}
+[[Category:Surjections]]
+[[Category:Projections]]
+\end{proof}<|endoftext|>
+\section{Canonical Injection is Monomorphism/General Result}
+Tags: Monomorphisms
+
+\begin{theorem}
+Let $\struct {S_1, \circ_1}, \struct {S_2, \circ_2}, \dotsc, \struct {S_j, \circ_j}, \dotsc, \struct {S_n, \circ_n}$ be [[Definition:Algebraic Structure|algebraic structures]] with [[Definition:Identity Element|identities]] $e_1, e_2, \dotsc, e_j, \dotsc, e_n$ respectively.
+Then the [[Definition:Canonical Injection (Abstract Algebra)/General Definition|canonical injection]]:
+:$\displaystyle \inj_j: \struct {S_j, \circ_j} \to \prod_{i \mathop = 1}^n \struct {S_i, \circ_i}$
+defined as:
+:$\map {\inj_j} x = \tuple {e_1, e_2, \dotsc, e_{j - 1}, x, e_{j + 1}, \dotsc, e_n}$
+is a [[Definition:Monomorphism (Abstract Algebra)|monomorphism]].
+\end{theorem}
+
+\begin{proof}
+From [[Canonical Injection is Injection/General Result|Canonical Injection is Injection]] we have that the [[Definition:Canonical Injection (Abstract Algebra)/General Definition|canonical injections]] are in fact [[Definition:Injection|injective]].
+It remains to prove the [[Definition:Morphism Property|morphism property]].
+Let $x, y \in \struct {S_j, \circ_j}$.
+Then: +{{begin-eqn}} +{{eqn | l = \map {\inj_j} {x \circ_j y} + | r = \tuple {e_1, e_2, \dotsc, e_{j - 1}, x \circ_j y, e_{j + 1}, \dotsc, e_n} +}} +{{eqn | r = \tuple {e_1 \circ_1 e_1, e_2 \circ_2 e_2, \dotsc, e_{j - 1} \circ_{j - 1} e_{j - 1}, x \circ_j y, e_{j + 1} \circ_{j + 1} e_{j + 1}, \dotsc, e_n \circ_n e_n} +}} +{{eqn | r = \tuple {e_1, e_2, \dotsc, e_{j - 1}, x, e_{j + 1}, \dotsc, e_n} \circ \tuple {e_1, e_2, \dotsc, e_{j - 1}, y, e_{j + 1}, \dotsc, e_n} +}} +{{eqn | r = \map {\inj_j} x \circ \map {\inj_j} y +}} +{{end-eqn}} +and the [[Definition:Morphism Property|morphism property]] has been demonstrated to hold. +Thus $\displaystyle \inj_j: \struct {S_j, \circ_j} \to \prod_{i \mathop = 1}^n \struct {S_i, \circ_i}$ has been shown to be an [[Definition:Injection|injective]] [[Definition:Homomorphism (Abstract Algebra)|homomorphism]] and therefore a [[Definition:Monomorphism (Abstract Algebra)|monomorphism]]. +{{Qed}} +\end{proof}<|endoftext|> +\section{Orthogonality of Solutions to the Sturm-Liouville Equation with Distinct Eigenvalues} +Tags: Sturm-Liouville Theory + +\begin{theorem} +Let $f \left({x}\right)$ and $g \left({x}\right)$ be solutions of the [[Definition:Sturm-Liouville Equation|Sturm-Liouville equation]]: +:$(1): \quad -\dfrac {\mathrm d} {\mathrm d x} \left({p \left({x}\right) \dfrac {\mathrm d y} {\mathrm d x}}\right) + q \left({x}\right) y = \lambda w \left({x}\right) y$ +where $y$ is a [[Definition:Real Function|function]] of the [[Definition:Free Variable|free variable]] $x$. +The [[Definition:Real Function|functions]] $p \left({x}\right)$, $q \left({x}\right)$ and $w \left({x}\right)$ are specified. +In the simplest cases they are [[Definition:Continuous Real Function|continuous]] on the [[Definition:Closed Real Interval|closed interval]] $\left[{a \,.\,.\, b}\right]$. 
+In addition:
+:$(1a): \quad p \left({x}\right) > 0$ has a [[Definition:Continuous Real Function|continuous]] [[Definition:Derivative|derivative]]
+:$(1b): \quad w \left({x}\right) > 0$
+:$(1c): \quad y$ is typically required to satisfy some [[Definition:Boundary Condition|boundary conditions]] at $a$ and $b$.
+Assume that the Sturm-Liouville problem is regular, that is, $p \left({x}\right)^{-1} > 0$, $q \left({x}\right)$, and $w \left({x}\right) > 0$ are real-valued [[Definition:Lebesgue Integrable|integrable]] functions over the [[Definition:Closed Real Interval|closed interval]] $\left[{a \,.\,.\, b}\right]$, with ''separated boundary conditions'' of the form:
+:$(2): \quad y \left({a}\right) \cos \alpha - p \left({a}\right) y' \left({a}\right) \sin \alpha = 0$
+:$(3): \quad y \left({b}\right) \cos \beta - p \left({b}\right) y' \left({b}\right) \sin \beta = 0$
+where $\alpha, \beta \in \left[{0 \,.\,.\, \pi}\right)$.
+Then:
+:$\displaystyle \left\langle{f, g}\right\rangle = \int_a^b \overline {f \left({x}\right)} g \left({x}\right) w \left({x}\right) \, \mathrm d x = 0$
+where $f \left({x}\right)$ and $g \left({x}\right)$ are solutions to the [[Definition:Sturm-Liouville Equation|Sturm-Liouville equation]] corresponding to distinct eigenvalues and $w \left({x}\right)$ is the "weight" or "density" function.
+\end{theorem}
+
+\begin{proof}
+Multiply the equation for $g \left({x}\right)$ by $\overline {f \left({x}\right)}$ (the complex conjugate of $f \left({x}\right)$) to get:
+:$-\overline {f \left({x}\right)} \dfrac {\mathrm d} {\mathrm d x} \left({p \left({x}\right) \dfrac {\mathrm d g} {\mathrm d x} \left({x}\right)}\right) + \overline {f \left({x}\right)} q \left({x}\right) g \left({x}\right) = \mu \overline {f \left({x}\right)} w \left({x}\right) g \left({x}\right)$
+Only $f \left({x}\right)$, $g \left({x}\right)$, $\lambda$ and $\mu$ may be complex.
+All other quantities are real.
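As an aside, the orthogonality relation being established can be checked numerically on a concrete regular Sturm-Liouville problem. The following Python sketch (illustrative only, not part of the proof) takes $-y'' = \lambda y$ on $\left[{0 \,.\,.\, \pi}\right]$ with $y \left({0}\right) = y \left({\pi}\right) = 0$, that is $p = w = 1$ and $q = 0$, whose eigenfunctions $\sin n x$ have distinct eigenvalues $n^2$:

```python
import math

# Regular Sturm-Liouville problem: -y'' = lambda y on [0, pi] with
# y(0) = y(pi) = 0, i.e. p = w = 1, q = 0.
# Eigenfunctions: sin(n x), with eigenvalues n^2.
N = 20000
h = math.pi / N
xs = [i * h for i in range(N + 1)]

def inner(f, g):
    """Trapezoidal approximation of <f, g> = int_0^pi f(x) g(x) w(x) dx, w = 1."""
    vals = [f(x) * g(x) for x in xs]
    return h * (sum(vals) - 0.5 * (vals[0] + vals[-1]))

f = lambda x: math.sin(2 * x)  # eigenvalue 4
g = lambda x: math.sin(3 * x)  # eigenvalue 9

print(abs(inner(f, g)))  # distinct eigenvalues: numerically close to 0
print(inner(f, f))       # equal eigenvalues: close to pi / 2, not 0
```

(The eigenfunctions here are real, so the complex conjugate in the inner product is omitted.)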
+Complex conjugate this equation, exchange $f \left({x}\right)$ and $g \left({x}\right)$, and subtract the new equation from the original: +{{begin-eqn}} +{{eqn | l = -\overline {f \left({x}\right)} \frac {\mathrm d \left({p \left({x}\right) \frac {\mathrm d g} {\mathrm d x} \left({x}\right) }\right) } {\mathrm d x} + g \left({x}\right) \frac {\mathrm d \left({p \left({x}\right) \frac {\mathrm d \bar f} {\mathrm d x} \left({x}\right) }\right) } {\mathrm d x} + | r = \frac {\mathrm d \left({p \left({x}\right) \left({g \left({x}\right) \frac {\mathrm d \bar f} {\mathrm d x} \left({x}\right) - \overline {f \left({x}\right)} \frac {\mathrm d g} {\mathrm d x} \left({x}\right) }\right) }\right) }{\mathrm d x} +}} +{{eqn | r = \left({\mu - \bar \lambda}\right) \overline {f \left({x}\right)} g \left({x}\right) w \left({x}\right) +}} +{{end-eqn}} +Integrate this between the limits $x = a$ and $x = b$: +:$\displaystyle \left({\mu - \bar \lambda}\right) \int_a^b \overline {f \left({x}\right)} g \left({x}\right) w \left({x}\right) \, \mathrm d x = p \left({b}\right) \left({g \left({b}\right) \frac {\mathrm d \bar f} {\mathrm d x} \left({b}\right) - \overline {f \left({b}\right)} \frac {\mathrm d g} {\mathrm d x} \left({b}\right)} \right) - p \left({a}\right) \left({g \left({a}\right) \frac {\mathrm d \bar f} {\mathrm d x} \left({a}\right) - \overline {f \left({a}\right)} \frac {\mathrm d g} {\mathrm d x} \left({a}\right)}\right)$ +The right side of this equation vanishes because of the boundary conditions, which are either: +: periodic boundary conditions, i.e., that $f \left({x}\right)$, $g \left({x}\right)$, and their first derivatives (as well as $p \left({x}\right)$) have the same values at $x = b$ as at $x = a$ +or: +: that independently at $x = a$ and at $x = b$ either: +:: the condition cited in equation $(2)$ or $(3)$ holds +:or: +::$p \left({x}\right) = 0$. 
+So: +:$\displaystyle \left({\mu - \bar \lambda}\right) \int_a^b \overline {f \left({x}\right)} g \left({x}\right) w \left({x}\right) \, \mathrm d x = 0$ +If we set $f = g$, so that the integral surely is non-zero, then it follows that $\bar \lambda = \lambda$. +That is, the eigenvalues are real, making the differential operator in the Sturm-Liouville equation self-adjoint (hermitian). +So: +:$\displaystyle \left({\mu - \lambda}\right) \int_a^b \overline {f \left({x}\right)} g \left({x}\right) w \left({x}\right) \, \mathrm d x = 0$ +It follows that, if $f$ and $g$ have distinct eigenvalues, then they are orthogonal. +{{qed}} +\end{proof}<|endoftext|> +\section{Row Equivalent Matrix for Homogeneous System has same Solutions} +Tags: Linear Algebra + +\begin{theorem} +Let $\mathbf A$ be a [[Definition:Matrix|matrix]] in the [[Definition:Matrix Space|matrix space]] $\map {\MM_\R} {m, n}$ such that: +:$\mathbf A \mathbf x = \mathbf 0$ +represents a [[Definition:Homogeneous Linear Equations|homogeneous system of linear equations]]. +Let $\mathbf H$ be [[Definition:Row Equivalence|row equivalent]] to $\mathbf A$. +Then the [[Definition:Solution Set to System of Simultaneous Equations|solution set]] of $\mathbf H \mathbf x = \mathbf 0$ equals the [[Definition:Solution Set to System of Simultaneous Equations|solution set]] of $\mathbf A \mathbf x = \mathbf 0$. +That is: +:$\mathbf A \sim \mathbf H \implies \set {\mathbf x: \mathbf A \mathbf x = \mathbf 0} = \set {\mathbf x: \mathbf H \mathbf x = \mathbf 0}$ +where $\sim$ represents [[Definition:Row Equivalence|row equivalence]]. 
+\end{theorem}
+
+\begin{proof}
+Let:
+{{begin-eqn}}
+{{eqn | l = \alpha_{1 1} x_1 + \alpha_{1 2} x_2 + \ldots + \alpha_{1 n} x_n
+ | r = 0
+ | c = 
+}}
+{{eqn | l = \alpha_{2 1} x_1 + \alpha_{2 2} x_2 + \ldots + \alpha_{2 n} x_n
+ | r = 0
+ | c = 
+}}
+{{eqn | o = \vdots
+}}
+{{eqn | l = \alpha_{m 1} x_1 + \alpha_{m 2} x_2 + \ldots + \alpha_{m n} x_n
+ | r = 0
+ | c = 
+}}
+{{end-eqn}}
+be the system of equations to be solved.
+Suppose the [[Definition:Elementary Row Operation|elementary row operation]] of multiplying one row $i$ by a non-zero scalar $\lambda$ is performed.
+Recall that the $i$th row of the matrix [[Definition:Matrix Representation of Simultaneous Linear Equations|represents]] the $i$th equation of the system to be solved.
+Then this is [[Definition:Logically Equivalent|logically equivalent]] to multiplying the $i$th equation on both sides by the scalar $\lambda$:
+{{begin-eqn}}
+{{eqn | l = \alpha_{i 1} x_1 + \alpha_{i 2} x_2 + \ldots + \alpha_{i n} x_n
+ | r = 0
+}}
+{{eqn | ll= \to
+ | l = \lambda \alpha_{i 1} x_1 + \lambda \alpha_{i 2} x_2 + \ldots + \lambda \alpha_{i n} x_n
+ | r = 0
+ | c = $r_i \to \lambda r_i$
+}}
+{{end-eqn}}
+which clearly has the same solutions as the original equation.
+Suppose the [[Definition:Elementary Row Operation|elementary row operation]] of adding a scalar multiple of row $i$ to another row $j$ is performed.
+Recall that the $i$th and $j$th row of the matrix [[Definition:Matrix Representation of Simultaneous Linear Equations|represent]] the $i$th and $j$th equation in the system to be solved.
+{{explain|Woolly. The matrix (by which I presume you mean $\mathbf A$) contains the coefficients and so no part of it "represents" an equation. The act of multiplying $\mathbf x$ by it to obtain $\mathbf b$ represents the equation.}}
+Thus this is [[Definition:Logically Equivalent|logically equivalent]] to manipulating the $i$th and $j$th equations as such:
+{{begin-eqn}}
+{{eqn | l = \alpha_{i 1} x_1 + \alpha_{i 2} x_2 + \ldots + \alpha_{i n} x_n
+ | r = 0
+ | c = 
+}}
+{{eqn | l = \alpha_{j 1} x_1 + \alpha_{j 2} x_2 + \ldots + \alpha_{j n} x_n
+ | r = 0
+ | c = 
+}}
+{{eqn | ll= \to
+ | l = \alpha_{j 1} x_1 + \alpha_{j 2} x_2 + \ldots + \alpha_{j n} x_n + \lambda \paren {\alpha_{i 1} x_1 + \alpha_{i 2} x_2 + \ldots + \alpha_{i n} x_n}
+ | r = 0
+ | c = $r_j \to r_j + \lambda r_i$
+}}
+{{end-eqn}}
+As both sides of equation $i$ are equal to each other, this operation is simply performing the same act on both sides of equation $j$.
+This clearly will have no effect on the solution set of the system of equations.
+Suppose the [[Definition:Elementary Row Operation|elementary row operation]] of interchanging row $i$ and row $j$ is performed.
+Recall that the $i$th and $j$th row of the matrix [[Definition:Matrix Representation of Simultaneous Linear Equations|represent]] the $i$th and $j$th equation in the system to be solved.
+Then, interchanging row $i$ and row $j$ is [[Definition:Logically Equivalent|logically equivalent]] to switching the $i$th equation and the $j$th equation of the system to be solved.
+But clearly the system containing the following two equations:
+{{begin-eqn}}
+{{eqn | l = \alpha_{i 1} x_1 + \alpha_{i 2} x_2 + \cdots + \alpha_{i n} x_n
+ | r = 0
+ | c = 
+}}
+{{eqn | l = \alpha_{j 1} x_1 + \alpha_{j 2} x_2 + \cdots + \alpha_{j n} x_n
+ | r = 0
+ | c = 
+}}
+{{end-eqn}}
+has the same [[Definition:Solution Set to System of Simultaneous Equations|solution set]] as a system instead containing the following two equations:
+{{begin-eqn}}
+{{eqn | l = \alpha_{j 1} x_1 + \alpha_{j 2} x_2 + \cdots + \alpha_{j n} x_n
+ | r = 0
+ | c = 
+}}
+{{eqn | l = \alpha_{i 1} x_1 + \alpha_{i 2} x_2 + \cdots + \alpha_{i n} x_n
+ | r = 0
+ | c = $r_i \leftrightarrow r_j$
+}}
+{{end-eqn}}
+Hence the result, by the definition of [[Definition:Row Equivalence|row equivalence]].
+{{qed}}
+{{proofread}}
+[[Category:Linear Algebra]]
+\end{proof}<|endoftext|>
+\section{Null Space of Reduced Echelon Form}
+Tags: Linear Algebra, Null Spaces, Echelon Matrices
+
+\begin{theorem}
+Let $\mathbf A$ be a [[Definition:Matrix|matrix]] in the [[Definition:Matrix Space|matrix space]] $\map {\MM_\R} {m, n}$ such that:
+:$\mathbf A \mathbf x = \mathbf 0$
+represents a [[Definition:Homogeneous Linear Equations|homogeneous system of linear equations]].
+The [[Definition:Null Space|null space]] of $\mathbf A$ is the same as the null space of the [[Definition:Reduced Echelon Matrix|reduced row echelon form]] of $\mathbf A$:
+:$\map {\mathrm N} {\mathbf A} = \map {\mathrm N} {\map {\mathrm {rref} } {\mathbf A} }$
+\end{theorem}
+
+\begin{proof}
+By the definition of [[Definition:Null Space|null space]]:
+:$\mathbf x \in \map {\mathrm N} {\mathbf A} \iff \mathbf A \mathbf x = \mathbf 0$
+From the [[Row Equivalent Matrix for Homogeneous System has same Solutions/Corollary|corollary to Row Equivalent Matrix for Homogeneous System has same Solutions]]:
+:$\mathbf A \mathbf x = \mathbf 0 \iff \map {\mathrm {rref} } {\mathbf A} \mathbf x = \mathbf 0$
+Hence the result, by the definition of [[Definition:Set Equality|set equality]].
+{{qed}}
+\end{proof}<|endoftext|>
+\section{Null Space Contains Only Zero Vector iff Columns are Independent}
+Tags: Linear Algebra, Null Spaces
+
+\begin{theorem}
+Let:
+{{begin-eqn}}
+{{eqn | l = \mathbf A_{m \times n}
+ | r = \begin{bmatrix} \mathbf a_1 & \mathbf a_2 & \cdots & \mathbf a_n \end{bmatrix}
+}}
+{{end-eqn}}
+be a [[Definition:Matrix|matrix]] where:
+:$\forall i: 1 \le i \le n: \mathbf a_i = \begin{bmatrix} a_{1i} \\ a_{2i} \\ \vdots \\ a_{mi} \end{bmatrix} \in \R^m$
+are [[Definition:Vector (Euclidean Space)|vectors]].
+Then:
+:$\set {\mathbf a_1, \mathbf a_2, \cdots, \mathbf a_n}$ is a [[Definition:Linearly Independent Set|linearly independent set]]
+{{iff}}:
+:$\map {\mathrm N} {\mathbf A} = \set {\mathbf 0_{n \times 1} }$
+where $\map {\mathrm N} {\mathbf A}$ is the [[Definition:Null Space|null space of $\mathbf A$]].
+\end{theorem}
+
+\begin{proof}
+Let $\mathbf x = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix} \in \R^n$.
+We have that:
+{{begin-eqn}}
+{{eqn | l = \mathbf x
+ | o = \in
+ | r = \map {\mathrm N} {\mathbf A}
+}}
+{{eqn | ll= \leadstoandfrom
+ | l = \mathbf A \mathbf x_{n \times 1}
+ | r = \mathbf 0_{m \times 1}
+}}
+{{eqn | ll= \leadstoandfrom
+ | l = \begin{bmatrix} \mathbf a_1 & \mathbf a_2 & \cdots & \mathbf a_n \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix}
+ | r = \begin{bmatrix} 0 \\ 0 \\ \vdots \\ 0 \end{bmatrix}
+}}
+{{eqn | ll= \leadstoandfrom
+ | l = \sum_{k \mathop = 1}^n x_k \mathbf a_k
+ | r = \mathbf 0
+}}
+{{end-eqn}}
+=== Sufficient Condition ===
+Let $\set {\mathbf a_1, \mathbf a_2, \cdots, \mathbf a_n}$ be [[Definition:Linearly Independent Set|linearly independent]].
+Then by definition:
+:$\displaystyle \sum_{k \mathop = 1}^n x_k \mathbf a_k = \mathbf 0 \implies \forall k: 1 \le k \le n: x_k = 0$
+That is, by the equivalence above, $\mathbf A \mathbf x = \mathbf 0_{m \times 1}$ only for $\mathbf x = \mathbf 0_{n \times 1}$.
+By the definition of [[Definition:Null Space|null space]]:
+:$\map {\mathrm N} {\mathbf A} = \set {\mathbf 0_{n \times 1} }$
+{{qed|lemma}}
+=== Necessary Condition ===
+Let $\map {\mathrm N} {\mathbf A} = \set {\mathbf 0_{n \times 1} }$.
+Then by the equivalence above:
+:$\displaystyle \sum_{k \mathop = 1}^n x_k \mathbf a_k = \mathbf 0 \implies \mathbf x \in \map {\mathrm N} {\mathbf A} \implies \mathbf x = \mathbf 0_{n \times 1}$
+This means that:
+:$\forall k: 1 \le k \le n: x_k = 0$
+from which it follows that $\set {\mathbf a_1, \mathbf a_2, \cdots, \mathbf a_n}$ is [[Definition:Linearly Independent Set|linearly independent]].
+{{qed}}
+\end{proof}<|endoftext|>
+\section{Axiom of Dependent Choice Implies Axiom of Countable Choice}
+Tags: Set Theory
+
+\begin{theorem}
+The [[Axiom:Axiom of Dependent Choice|axiom of dependent choice]] implies the [[Axiom:Axiom of Countable Choice|axiom of countable choice]].
+\end{theorem}
+
+\begin{proof}
+Let $\left\langle{S_n}\right\rangle_{n \in \N}$ be a [[Definition:Sequence|sequence]] of [[Definition:Non-Empty Set|non-empty]] [[Definition:Set|sets]].
+Define:
+:$\displaystyle S = \bigsqcup_{n \mathop \in \N} S_n$
+where $\bigsqcup$ denotes [[Definition:Disjoint Union (Set Theory)|disjoint union]].
+Let $\mathcal R$ be the [[Definition:Endorelation|binary endorelation]] on $S$ defined by: +:$\left({x, m}\right) \ \mathcal R \ \left({y, n}\right) \iff n = m + 1$ +Note that $\mathcal R$ satisfies: +:$\forall a \in S : \exists b \in S : a \ \mathcal R \ b$ +Using the [[Axiom:Axiom of Dependent Choice|axiom of dependent choice]], there exists a [[Definition:Sequence|sequence]] $\left\langle{y_n}\right\rangle_{n \in \N}$ in $S$ such that $y_n \ \mathcal R \ y_{n + 1}$ for all $n \in \N$. +Letting $y_n = \left({s_n, N_n}\right)$ for all $n \in \N$, it follows by the definition of $\mathcal R$ that $N_{n + 1} = N_n + 1$. +A straightforward application of [[Principle of Mathematical Induction|mathematical induction]] shows that $N_n = n + N$ for some $N \in \N$. +So $s_n \in S_{n + N}$ for all $n \in \N$. +The [[Definition:Cartesian Product|cartesian product]] $S_0 \times S_1 \times \cdots \times S_{N-1}$ [[Finite Cartesian Product of Non-Empty Sets is Non-Empty|is non-empty]]. +So there exists a [[Definition:Finite Sequence|finite sequence]] $x_0, x_1, \ldots, x_{N-1}$ with $x_n \in S_n$ for all [[Definition:Natural Number|natural numbers]] $n < N$. +Now, define $x_n = s_{n - N}$ for all natural numbers $n \ge N$. +Then $x_n \in S_n$ for all $n \in \N$. +{{qed}} +{{ADC||3}} +\end{proof}<|endoftext|> +\section{Axiom of Choice Implies Axiom of Dependent Choice} +Tags: Set Theory + +\begin{theorem} +The [[Axiom:Axiom of Choice|axiom of choice]] implies the [[Axiom:Axiom of Dependent Choice|axiom of dependent choice]]. 
+\end{theorem} + +\begin{proof} +Let $\mathcal R$ be a [[Definition:Endorelation|binary endorelation]] on a [[Definition:Non-Empty Set|non-empty]] [[Definition:Set|set]] $S$ such that: +:$\forall a \in S: \exists b \in S: a \mathrel{\mathcal R} b$ +For an [[Definition:Element|element]] $x \in S$, define: +:$R \left({x}\right) = \left\{{y \in S: x \mathrel{\mathcal R} y}\right\}$ +By assumption, $R \left({x}\right)$ is [[Definition:Non-Empty Set|non-empty]] for all $x \in S$. +Now, consider the [[Definition:Indexed Family of Sets|indexed family of sets]]: +:$\left\langle{R \left({x}\right)}\right\rangle_{x \mathop \in S}$ +Using the [[Axiom:Axiom of Choice|axiom of choice]], there exists a [[Definition:Mapping|mapping]] $f: S \to S$ such that: +:$\forall x \in S: f \left({x}\right) \in R \left({x}\right)$ +That is: +:$x \mathrel{\mathcal R} f \left({x}\right)$ +So, for any $x \in S$, the [[Definition:Sequence|sequence]]: +:$\left\langle{x_n}\right\rangle_{n \mathop \in \N} = \left\langle{f^n \left({x}\right)}\right\rangle_{n \mathop \in \N}$ +where $f^n$ denotes the [[Definition:Composition of Mappings|composition]] of $f$ with itself $n$ times, is a [[Definition:Sequence|sequence]] such that: +:$x_n \mathrel{\mathcal R} x_{n+1}$ +for all $n \in \N$, as desired. +{{qed}} +\end{proof}<|endoftext|> +\section{Infimum of Empty Set is Greatest Element} +Tags: Empty Set, Infima + +\begin{theorem} +Let $\struct {S, \preceq}$ be an [[Definition:Ordered Set|ordered set]]. +Suppose that $\map \inf \O$, the [[Definition:Infimum of Set|infimum]] of the [[Definition:Empty Set|empty set]], exists in $S$. +Then $\forall s \in S: s \preceq \map \inf \O$. +That is, $\map \inf \O$ is the [[Definition:Greatest Element|greatest element]] of $S$. +\end{theorem} + +\begin{proof} +Observe that, [[Definition:Vacuous Truth|vacuously]], any $s \in S$ is a [[Definition:Lower Bound of Set|lower bound]] for $\O$. 
+But for any [[Definition:Lower Bound of Set|lower bound]] $s$ of $\O$, $s \preceq \map \inf \O$ by definition of [[Definition:Infimum of Set|infimum]].
+Hence:
+:$\forall s \in S: s \preceq \map \inf \O$
+{{qed}}
+\end{proof}<|endoftext|>
+\section{Supremum of Empty Set is Smallest Element}
+Tags: Suprema, Empty Set
+
+\begin{theorem}
+Let $\struct {S, \preceq}$ be an [[Definition:Ordered Set|ordered set]].
+Then:
+:the [[Definition:Supremum of Set|supremum]] of the [[Definition:Empty Set|empty set]] exists {{iff}} the [[Definition:Smallest Element|smallest element]] of $S$ exists
+in which case:
+:$\map \sup \O$ is the [[Definition:Smallest Element|smallest element]] of $S$
+\end{theorem}
+
+\begin{proof}
+Observe that, [[Definition:Vacuous Truth|vacuously]], any $s \in S$ is an [[Definition:Upper Bound of Set|upper bound]] for $\O$.
+=== Necessary Condition ===
+Let $\map \sup \O$ exist.
+For any [[Definition:Upper Bound of Set|upper bound]] $s$ of $\O$, $\map \sup \O \preceq s$ by definition of [[Definition:Supremum of Set|supremum]].
+Hence:
+:$\forall s \in S: \map \sup \O \preceq s$
+That is, $\map \sup \O$ is the [[Definition:Smallest Element|smallest element]] of $S$.
+{{qed|lemma}}
+=== Sufficient Condition ===
+Let $t$ be the [[Definition:Smallest Element|smallest element]] of $S$.
+Then $t$ is an [[Definition:Upper Bound of Set|upper bound]] of $\O$.
+For any [[Definition:Upper Bound of Set|upper bound]] $s$ of $\O$, $t \preceq s$ by definition of the [[Definition:Smallest Element|smallest element]].
+By definition of the [[Definition:Supremum of Set|supremum]]:
+:$t = \map \sup \O$
+{{qed}}
+\end{proof}<|endoftext|>
+\section{Countable Union of Countable Sets is Countable}
+Tags: Set Union, Countable Sets, Countable Union of Countable Sets is Countable
+
+\begin{theorem}
+Let the [[Axiom:Axiom of Countable Choice|axiom of countable choice]] be accepted.
+Then it can be proved that a [[Definition:Countable Union|countable union]] of [[Definition:Countable Set|countable sets]] is [[Definition:Countable|countable]].
+\end{theorem} + +\begin{proof} +Let $\left\langle{S_n}\right\rangle_{n \in \N}$ be a [[Definition:Sequence|sequence]] of [[Definition:Countable Set|countable sets]]. +Define: +:$\displaystyle S = \bigcup_{n \mathop \in \N} S_n$. +For all $n \in \N$, let $\mathcal F_n$ be the set of all [[Definition:Surjection|surjections]] from $\N$ to $S_n$. +Since $S_n$ is [[Definition:Countable Set|countable]], it follows by [[Surjection from Natural Numbers iff Countable]] that $\mathcal F_n$ is [[Definition:Non-Empty Set|non-empty]]. +Using the [[Axiom:Axiom of Countable Choice|axiom of countable choice]], [[Definition:Existential Quantifier|there exists]] a [[Definition:Sequence|sequence]] $\left\langle{f_n}\right\rangle_{n \in \N}$ such that $f_n \in \mathcal F_n$ [[Definition:Universal Quantifier|for all]] $n \in \N$. +Let $\phi: \N \times \N \to S$, where $\times$ denotes the [[Definition:Cartesian Product|cartesian product]], be the [[Definition:Surjection|surjection]] defined by: +:$\phi \left({m, n}\right) = f_m \left({n}\right)$ +Since [[Cartesian Product of Countable Sets is Countable|$\N \times \N$ is countable]], it follows by [[Surjection from Natural Numbers iff Countable]] that [[Definition:Existential Quantifier|there exists]] a [[Definition:Surjection|surjection]] $\alpha: \N \to \N \times \N$. +Since the [[Composite of Surjections is Surjection|composition of surjections is a surjection]], the [[Definition:Mapping|mapping]] $\phi \circ \alpha: \N \to S$ is a [[Definition:Surjection|surjection]]. +By [[Surjection from Natural Numbers iff Countable]], it follows that $S$ is [[Definition:Countable Set|countable]]. +{{qed}} +\end{proof} + +\begin{proof} +Let $\left\langle{S_n}\right\rangle_{n \in \N}$ be a [[Definition:Sequence|sequence]] of [[Definition:Countable|countable]] [[Definition:Set|sets]]. 
+Define: +:$\displaystyle S = \bigcup_{n \mathop \in \N} S_n$ +For all $n \in \N$, let $\mathcal F_n$ denote the set of all [[Definition:Injection|injections]] from $S_n$ to $\N$. +Since $S_n$ is [[Definition:Countable|countable]], $\mathcal F_n$ is [[Definition:Non-Empty Set|non-empty]]. +Using the [[Axiom:Axiom of Countable Choice|axiom of countable choice]], there exists a [[Definition:Sequence|sequence]] $\left\langle{f_n}\right\rangle_{n \in \N}$ such that $f_n \in \mathcal F_n$ for all $n \in \N$. +Let $\phi: S \to \N \times \N$, where $\times$ denotes the [[Definition:Cartesian Product|cartesian product]], be the [[Definition:Mapping|mapping]] defined by: +:$\phi \left({x}\right) = \left({n, f_n \left({x}\right)}\right)$ +where $n$ is the ([[Smallest Element is Unique|unique]]) [[Definition:Smallest/Ordered Set|smallest]] [[Definition:Natural Numbers|natural number]] such that $x \in S_n$. +From the [[Well-Ordering Principle]], such an $n$ exists; hence, the [[Definition:Mapping|mapping]] $\phi$ exists. +Since each $f_n$ is an [[Definition:Injection|injection]], it (trivially) follows that $\phi$ is an [[Definition:Injection|injection]]. +Since [[Cartesian Product of Countable Sets is Countable|$\N \times \N$ is countable]], there exists an [[Definition:Injection|injection]] $\alpha: \N \times \N \to \N$. +From [[Composite of Injections is Injection]], the [[Definition:Mapping|mapping]] $\alpha \circ \phi: S \to \N$ is an [[Definition:Injection|injection]]. +Hence, $S$ is [[Definition:Countable|countable]]. +{{qed}} +\end{proof} + +\begin{proof} +Consider the [[Definition:Countable|countable sets]] $S_0, S_1, S_2, \ldots$ where $\displaystyle S = \bigcup_{i \mathop \in \N} {S_i}$. +Assume that none of these sets have any elements in common. 
+Otherwise, we can consider the sets $S_0' = S_0, S_1' = S_1 \setminus S_0, S_2' = S_2 \setminus \paren {S_0 \cup S_1}, \ldots$ +All of these are [[Definition:Countable|countable]] by [[Subset of Countable Set is Countable]], and they have the same [[Definition:Set Union|union]] $\displaystyle S = \bigcup_{i \mathop \in \N} {S_i'}$. +Now we write the elements of $S_0', S_1', S_2', \ldots$ in the form of a (possibly [[Definition:Infinite|infinite]]) table: +:$\begin{array} {*{4}c} + {a_{00}} & {a_{01}} & {a_{02}} & \cdots \\ + {a_{10}} & {a_{11}} & {a_{12}} & \cdots \\ + {a_{20}} & {a_{21}} & {a_{22}} & \cdots \\ + \vdots & \vdots & \vdots & \ddots \\ +\end{array}$ +where $a_{ij}$ is the $j$th element of set $S_i$. +This table clearly contains all the elements of $\displaystyle S = \bigcup_{i \mathop \in \N} {S_i}$. +Furthermore, we have that $\phi: S \to \N \times \N, a_{ij} \mapsto \tuple {i, j}$ is an [[Definition:Injection|injection]]. +By [[Cartesian Product of Countable Sets is Countable]], there exists an [[Definition:Injection|injection]] $\psi: \N \times \N \to \N$. +Then $\psi \circ \phi: S \to \N$ is also an [[Definition:Injection|injection]] by [[Composite of Injections is Injection]]. +That is to say, $S$ is [[Definition:Countable|countable]]. +{{qed}} +\end{proof}<|endoftext|> +\section{Projection is Epimorphism/General Result} +Tags: Epimorphisms, Projections + +\begin{theorem} +Let $\left({S, \circ}\right)$ be the [[Definition:External Direct Product|external direct product]] of the [[Definition:Algebraic Structure|algebraic structures]] $\left({S_1, \circ_1}\right), \left({S_2, \circ_2}\right), \ldots, \left({S_k, \circ_k}\right), \ldots, \left({S_n, \circ_n}\right)$. 
+Then:
+:for each $j \in \left[{1 \,.\,.\, n}\right]$, $\operatorname{pr}_j$ is an [[Definition:Epimorphism (Abstract Algebra)|epimorphism]] from $\left({S, \circ}\right)$ to $\left({S_j, \circ_j}\right)$
+where $\operatorname{pr}_j: \left({S, \circ}\right) \to \left({S_j, \circ_j}\right)$ is the [[Definition:Projection (Mapping Theory)|$j$th projection]] from $\left({S, \circ}\right)$ to $\left({S_j, \circ_j}\right)$.
+\end{theorem}
+
+\begin{proof}
+From [[Projection is Surjection]], $\operatorname{pr}_j$ is a [[Definition:Surjection|surjection]] for all $j$.
+We now need to show it is a [[Definition:Homomorphism (Abstract Algebra)|homomorphism]].
+Let $s, t \in S$ where $s = \left({s_1, s_2, \ldots, s_j, \ldots, s_n}\right)$ and $t = \left({t_1, t_2, \ldots, t_j, \ldots, t_n}\right)$.
+Then:
+{{begin-eqn}}
+{{eqn | l = \operatorname{pr}_j \left({s \circ t}\right)
+ | r = \operatorname{pr}_j \left({\left({s_1, s_2, \ldots, s_j, \ldots, s_n}\right) \circ \left({t_1, t_2, \ldots, t_j, \ldots, t_n}\right)}\right)
+ | c = 
+}}
+{{eqn | r = \operatorname{pr}_j \left({\left({s_1 \circ_1 t_1, s_2 \circ_2 t_2, \ldots, s_j \circ_j t_j, \ldots, s_n \circ_n t_n}\right) }\right)
+ | c = 
+}}
+{{eqn | r = s_j \circ_j t_j
+ | c = 
+}}
+{{eqn | r = \operatorname{pr}_j \left({s}\right) \circ_j \operatorname{pr}_j \left({t}\right)
+ | c = 
+}}
+{{end-eqn}}
+Thus the [[Definition:Morphism Property|morphism property]] is demonstrated.
+{{Qed}}
+\end{proof}<|endoftext|>
+\section{External Direct Product of Projection with Canonical Injection/General Result}
+Tags: Abstract Algebra, Cartesian Product, Projections
+
+\begin{theorem}
+Let $\struct {S_1, \circ_1}, \struct {S_2, \circ_2}, \dotsc, \struct {S_j, \circ_j}, \dotsc, \struct {S_n, \circ_n}$ be [[Definition:Algebraic Structure|algebraic structures]] with [[Definition:Identity Element|identities]] $e_1, e_2, \dotsc, e_j, \dotsc, e_n$ respectively.
+Let $\displaystyle \struct {S, \circ} = \prod_{i \mathop = 1}^n \struct {S_i, \circ_i}$ be the [[Definition:External Direct Product/General Definition|external direct product]] of $\struct {S_1, \circ_1}, \struct {S_2, \circ_2}, \dotsc, \struct {S_j, \circ_j}, \dotsc, \struct {S_n, \circ_n}$. +Let $\pr_j: \struct {S, \circ} \to \struct {S_j, \circ_j}$ be the [[Definition:Projection (Mapping Theory)|$j$th projection]] from $\struct {S, \circ}$ to $\struct {S_j, \circ_j}$. +Let $\inj_j: \struct {S_j, \circ_j} \to \struct {S, \circ}$ be the [[Definition:Canonical Injection (Abstract Algebra)/General Definition|canonical injection]] from $\struct {S_j, \circ_j}$ to $\struct {S, \circ}$. +Then: +:$\pr_j \circ \inj_j = I_{S_j}$ +where $I_{S_j}$ is the [[Definition:Identity Mapping|identity mapping]] from $S_j$ to $S_j$. +\end{theorem} + +\begin{proof} +Let $\displaystyle \tuple {s_1, s_2, \dotsc, s_{j - 1}, s_j, s_{j + 1}, \dotsc, s_n} \in \prod_{i \mathop = 1}^n \struct {S_i, \circ_i}$. +So: +:$s_j \in S_j$ +From the definition of the [[Definition:Canonical Injection (Abstract Algebra)/General Definition|canonical injection]]: +:$\map {\inj_j} {s_j} = \tuple {e_1, e_2, \dotsc, e_{j - 1}, s_j, e_{j + 1}, \dotsc, e_n}$ +So from the definition of the [[Definition:Projection (Mapping Theory)|$j$th projection]]: +:$\map {\pr_j} {e_1, e_2, \dotsc, e_{j - 1}, s_j, e_{j + 1}, \dotsc, e_n} = s_j$ +Thus: +:$\map {\pr_j \circ \inj_j} {s_j} = s_j$ +and the result follows from the definition of the [[Definition:Identity Mapping|identity mapping]]. +{{qed}} +\end{proof}<|endoftext|> +\section{Extended Real Addition is Commutative} +Tags: Extended Real Numbers + +\begin{theorem} +[[Definition:Extended Real Addition|Extended real addition]] $+_{\overline{\R}}$ is [[Definition:Commutative Operation|commutative]]. +That is, for all $x, y \in \overline{\R}$: +:$x +_{\overline{\R}} y = y +_{\overline{\R}} x$ +whenever at least one of the sides is defined. 
+\end{theorem}
+
+\begin{proof}
+When $x, y \in \R$, $x +_{\overline{\R}} y = y +_{\overline{\R}} x$ follows from [[Real Addition is Commutative]].
+The remaining cases where the expressions are defined are already imposed in the [[Definition:Extended Real Addition|definition of $+_{\overline{\R}}$]].
+{{qed}}
+[[Category:Extended Real Numbers]]
+\end{proof}<|endoftext|>
+\section{Extended Real Addition is Associative}
+Tags: Extended Real Numbers
+
+\begin{theorem}
+[[Definition:Extended Real Addition|Extended real addition]] $+_{\overline{\R}}$ is [[Definition:Associative Operation|associative]].
+That is, for all $x, y, z \in \overline{\R}$:
+:$(1):\quad x +_{\overline{\R}} \left({y +_{\overline{\R}} z}\right) = \left({x +_{\overline{\R}} y}\right) +_{\overline{\R}} z$
+whenever at least one of the sides is defined.
+\end{theorem}
+
+\begin{proof}
+When $x, y, z \in \R$, $(1)$ follows from [[Real Addition is Associative]].
+From the [[Definition:Extended Real Addition|definition of $+_{\overline{\R}}$]], it follows that either expression in $(1)$ is defined precisely when at most one of $+\infty$ and $-\infty$ occurs.
+The case where neither occurs is already covered above; now assume that $+\infty$ occurs (the case with $-\infty$ is similar).
+By [[Extended Real Addition is Commutative]], it suffices to consider the cases $x = +\infty$ and $y = +\infty$.
+Suppose $x = +\infty$.
+Then as $y \ne -\infty$ and $z \ne -\infty$, it follows that $\left({+\infty}\right) +_{\overline{\R}} y = +\infty$, and $\left({+\infty}\right) +_{\overline{\R}} z = +\infty$.
+Hence the right-hand side in $(1)$ equals $+\infty$.
+But $y +_{\overline{\R}} z \ne -\infty$ as well, and so the left-hand side also equals $+\infty$.
+Now suppose $y = +\infty$.
+Then, again, $x +_{\overline{\R}} \left({+\infty}\right) = +\infty$ and $\left({+\infty}\right) +_{\overline{\R}} z = +\infty$.
+The result follows by applying these two equalities in the appropriate order on both the left- and the right-hand side of $(1)$.
+Hence the result, from [[Proof by Cases]].
+{{qed}}
+[[Category:Extended Real Numbers]]
+\end{proof}<|endoftext|>
+\section{Extended Real Multiplication is Commutative}
+Tags: Extended Real Numbers
+
+\begin{theorem}
+[[Definition:Extended Real Multiplication|Extended real multiplication]] $\times_{\overline \R}$ is [[Definition:Commutative Operation|commutative]].
+That is, for all $x, y \in \overline \R$:
+:$x \times_{\overline \R} y = y \times_{\overline \R} x$
+\end{theorem}
+
+\begin{proof}
+Let $x, y \in \R$.
+Then from [[Real Multiplication is Commutative]]:
+:$x \times_{\overline \R} y = y \times_{\overline \R} x$
+The remaining cases are explicitly imposed in the [[Definition:Extended Real Multiplication|definition of $\times_{\overline \R}$]].
+{{qed}}
+[[Category:Extended Real Numbers]]
+\end{proof}<|endoftext|>
+\section{Extended Real Multiplication is Associative}
+Tags: Extended Real Numbers
+
+\begin{theorem}
+[[Definition:Extended Real Multiplication|Extended real multiplication]] $\cdot_{\overline \R}$ is [[Definition:Associative Operation|associative]].
+That is, for all $x, y, z \in \overline \R$:
+:$(1): \quad x \cdot_{\overline \R} \left({y \cdot_{\overline \R} z}\right) = \left({x \cdot_{\overline \R} y}\right) \cdot_{\overline \R} z$
+\end{theorem}
+
+\begin{proof}
+When $x, y, z \in \R$, $(1)$ follows from [[Real Multiplication is Associative]].
+Next, the cases where at least one of $+\infty$ and $-\infty$ occurs need to be dealt with.
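+One way to organise the remaining cases is sketched below. This is only a sketch, and it rests on an assumption not stated on this page: that a product of $0$ with $\pm\infty$ is defined to be $0$ (the usual convention in measure theory). If such products are instead left undefined, the middle case is vacuous and the sketch simplifies accordingly.
+
+```latex
+% Sketch (assumption: 0 times +-infinity is defined to be 0).
+% Case 1: all three factors finite -- Real Multiplication is Associative.
+% Case 2: some factor is 0 -- under the assumed convention,
+%   0 . w = w . 0 = 0 for every w, so every bracketing of a
+%   product containing a factor 0 collapses to 0.
+% Case 3: no factor is 0 and some factor is infinite -- every
+%   bracketing yields an infinite result whose sign is the
+%   product of the signs of the factors.
+x \cdot_{\overline \R} \paren {y \cdot_{\overline \R} z}
+  = \paren {x \cdot_{\overline \R} y} \cdot_{\overline \R} z
+  = \begin{cases}
+      x y z & : x, y, z \in \R \\
+      0 & : 0 \in \set {x, y, z} \\
+      \paren {\operatorname{sgn} x \, \operatorname{sgn} y \, \operatorname{sgn} z} \cdot \paren {+\infty} & : \text {otherwise}
+    \end{cases}
+```
+
+In each case the two bracketings agree wherever both are defined, which is the content of $(1)$.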
+{{ProofWanted|when someone thinks of a nice way to deal with the case distinctions, go ahead}}
+[[Category:Extended Real Numbers]]
+\end{proof}<|endoftext|>
+\section{Extended Real Numbers under Multiplication form Monoid}
+Tags: Extended Real Numbers, Examples of Monoids
+
+\begin{theorem}
+Denote with $\overline \R$ the [[Definition:Extended Real Number Line|extended real numbers]].
+Denote with $\cdot_{\overline \R}$ the [[Definition:Extended Real Multiplication|extended real multiplication]].
+The [[Definition:Algebraic Structure|algebraic structure]] $\struct {\overline \R, \cdot_{\overline \R} }$ is a [[Definition:Monoid|monoid]].
+\end{theorem}
+
+\begin{proof}
+Checking the axioms for a [[Definition:Monoid|monoid]] in turn:
+=== Closure ===
+Immediate, as $\cdot_{\overline \R}: \overline \R \times \overline \R \to \overline \R$ is a [[Definition:Mapping|mapping]].
+{{qed|lemma}}
+=== Associativity ===
+Proved in [[Extended Real Multiplication is Associative]].
+{{qed|lemma}}
+=== Identity ===
+For all $x \in \R$, it holds that $1 \cdot_{\overline \R} x = x \cdot_{\overline \R} 1 = x$ by [[Definition:Extended Real Multiplication|definition of $\cdot_{\overline \R}$]] on $\R$.
+Furthermore, by definition, $1 \cdot_{\overline \R} \paren {+\infty} = \paren {+\infty} \cdot_{\overline \R} 1 = \paren {+\infty}$.
+Lastly, $1 \cdot_{\overline \R} \paren {-\infty} = \paren {-\infty} \cdot_{\overline \R} 1 = \paren {-\infty}$.
+That is, $1 \in \overline \R$ is an [[Definition:Identity Element|identity]] for $\cdot_{\overline \R}$.
+{{qed|lemma}}
+Hence, satisfying all the axioms, $\paren {\overline \R, \cdot_{\overline \R} }$ is a [[Definition:Monoid|monoid]].
+{{qed}}
+[[Category:Extended Real Numbers]]
+[[Category:Examples of Monoids]]
+\end{proof}<|endoftext|>
+\section{Extended Real Numbers under Multiplication form Commutative Monoid}
+Tags: Extended Real Numbers, Examples of Monoids
+
+\begin{theorem}
+Denote with $\overline \R$ the [[Definition:Extended Real Number Line|extended real numbers]].
+Denote with $\cdot_{\overline \R}$ the [[Definition:Extended Real Multiplication|extended real multiplication]].
+The [[Definition:Algebraic Structure|algebraic structure]] $\left({\overline \R, \cdot_{\overline \R}}\right)$ is a [[Definition:Commutative Monoid|commutative monoid]].
+\end{theorem}
+
+\begin{proof}
+By [[Extended Real Numbers under Multiplication form Monoid]], $\left({\overline \R, \cdot_{\overline \R}}\right)$ is a [[Definition:Monoid|monoid]].
+By [[Extended Real Multiplication is Commutative]], $\cdot_{\overline \R}$ is [[Definition:Commutative Operation|commutative]].
+Hence $\left({\overline \R, \cdot_{\overline \R}}\right)$ is a [[Definition:Commutative Monoid|commutative monoid]].
+{{qed}}
+[[Category:Extended Real Numbers]]
+[[Category:Examples of Monoids]]
+\end{proof}<|endoftext|>
+\section{Translation in Euclidean Space is Measurable Mapping}
+Tags: Measure Theory
+
+\begin{theorem}
+Let $\mathcal B$ be the [[Definition:Borel Sigma-Algebra|Borel $\sigma$-algebra]] on $\R^n$.
+Let $\mathbf x \in \R^n$, and denote with $\tau_{\mathbf x}: \R^n \to \R^n$ [[Definition:Translation Mapping|translation by $\mathbf x$]].
+Then $\tau_{\mathbf x}$ is [[Definition:Measurable Mapping|$\mathcal B \, / \, \mathcal B$-measurable]].
+\end{theorem}
+
+\begin{proof}
+By [[Characterization of Euclidean Borel Sigma-Algebra]], $\mathcal B = \sigma \left({\mathcal{J}_{ho}^n}\right)$.
+Here, $\mathcal{J}_{ho}^n$ is the set of [[Definition:Half-Open Rectangle|half-open $n$-rectangles]], and $\sigma$ denotes [[Definition:Generated Sigma-Algebra|generated $\sigma$-algebra]]. +Now, for any [[Definition:Half-Open Rectangle|half-open $n$-rectangle]] $\left[[{\mathbf a \,.\,.\, \mathbf b}\right))$, it is trivial that: +:$\tau_{\mathbf x}^{-1} \left({\left[[{\mathbf a \,.\,.\, \mathbf b}\right))}\right) = \left[[{\mathbf a + \mathbf x \,.\,.\, \mathbf b + \mathbf x}\right))$ +That is, the [[Definition:Preimage of Subset under Mapping|preimage]] of a [[Definition:Half-Open Rectangle|half-open $n$-rectangle]] under $\tau_{\mathbf x}$ is again a [[Definition:Half-Open Rectangle|half-open $n$-rectangle]]. +In particular, since $\mathcal{J}_{ho}^n \subseteq \sigma \left({\mathcal{J}_{ho}^n}\right) = \mathcal B$, [[Mapping Measurable iff Measurable on Generator]] applies. +Thus it follows that $\tau_{\mathbf x}$ is [[Definition:Measurable Mapping|$\mathcal B \, / \, \mathcal B$-measurable]]. +{{qed}} +\end{proof}<|endoftext|> +\section{Conditions for Internal Group Direct Product} +Tags: Internal Group Direct Products + +\begin{theorem} +Let $\struct {G, \circ}$ be a [[Definition:Group|group]] whose [[Definition:Identity Element|identity]] is $e$. +Let $H_1, H_2 \le G$. +Then $G$ is the [[Definition:Internal Group Direct Product|internal group direct product]] of $H_1$ and $H_2$ {{iff}}: +:$(1): \quad \forall h_1 \in H_1, h_2 \in H_2: h_1 \circ h_2 = h_2 \circ h_1$ +:$(2): \quad G = H_1 \circ H_2$ +:$(3): \quad H_1 \cap H_2 = \set e$ +Condition $(1)$ can also be stated as: +:$(1): \quad$ Either $H_1$ or $H_2$ is [[Definition:Normal Subgroup|normal]] in $G$ +\end{theorem} + +\begin{proof} +=== Necessary Condition === +Let $G$ be the [[Definition:Internal Group Direct Product|internal group direct product]] of $H_1$ and $H_2$. 
+Then by definition the [[Definition:Mapping|mapping]]:
+:$C: H_1 \times H_2 \to G: \map C {h_1, h_2} = h_1 \circ h_2$
+is a [[Definition:Group Isomorphism|(group) isomorphism]] from the [[Definition:Cartesian Product|cartesian product]] $\struct {H_1, \circ {\restriction_{H_1} } } \times \struct {H_2, \circ {\restriction_{H_2} } }$ onto $\struct {G, \circ}$.
+Let the symbol $\circ$ also be used for the [[Definition:Operation Induced by Direct Product|operation induced]] on $H_1 \times H_2$ by $\circ {\restriction_{H_1} }$ and $\circ {\restriction_{H_2} }$.
+$(1): \quad \forall h_1 \in H_1, h_2 \in H_2: h_1 \circ h_2 = h_2 \circ h_1$:
+This follows directly from [[Internal Group Direct Product Commutativity]].
+{{wtd|Did the following get deleted by accident after a redirect, or was it just a result that nobody posted up?}}
+From [[Factor of Group Inner Direct Product is Normal]] it follows that the other condition:
+:$(1): \quad$ Either $H_1$ or $H_2$ is [[Definition:Normal Subgroup|normal]] in $G$
+is equivalent to this.
+{{qed|lemma}}
+$(2): \quad G = H_1 \circ H_2$
+This follows directly from [[Subgroup Product is Internal Group Direct Product iff Surjective]].
+{{qed|lemma}}
+$(3): \quad H_1 \cap H_2 = \set e$
+Let $z \in H_1 \cap H_2$.
+From [[Intersection of Subgroups is Subgroup]], $z^{-1} \in H_1 \cap H_2$.
+So $\tuple {z, z^{-1} } \in H_1 \times H_2$ and so:
+:$\map C {z, z^{-1} } = z \circ z^{-1} = e = \map C {e, e}$
+We have by definition that $C$ is a [[Definition:Group Isomorphism|(group) isomorphism]], therefore a [[Definition:Bijection|bijection]] and so an [[Definition:Injection|injection]].
+So, as $C$ is an [[Definition:Injection|injection]], we have that:
+:$\tuple {z, z^{-1} } = \tuple {e, e}$
+and therefore $z = e$.
+{{qed|lemma}}
+=== Sufficient Condition ===
+Suppose $H_1, H_2 \le G$ such that:
+:$(1): \quad \forall h_1 \in H_1, h_2 \in H_2: h_1 \circ h_2 = h_2 \circ h_1$
+:$(2): \quad G = H_1 \circ H_2$
+:$(3): \quad H_1 \cap H_2 = \set e$
+all apply.
+Let $C: H_1 \times H_2 \to G$ be the [[Definition:Mapping|mapping]] defined as:
+:$\forall \tuple {h_1, h_2} \in H_1 \times H_2: \map C {h_1, h_2} = h_1 \circ h_2$
+Let $\tuple {x_1, x_2}, \tuple {y_1, y_2} \in H_1 \times H_2$.
+Then:
+{{begin-eqn}}
+{{eqn | l = \map C {\tuple {x_1, x_2} \circ \tuple {y_1, y_2} }
+ | r = \map C {\paren {x_1 \circ y_1}, \paren {x_2 \circ y_2} }
+ | c = {{Defof|Operation Induced by Direct Product}}
+}}
+{{eqn | r = \paren {x_1 \circ y_1} \circ \paren {x_2 \circ y_2}
+ | c = Definition of $C$
+}}
+{{eqn | r = \paren {x_1 \circ \paren {y_1 \circ x_2} } \circ y_2
+ | c = [[Definition:Associative|Associativity]] of $\circ$
+}}
+{{eqn | r = \paren {x_1 \circ \paren {x_2 \circ y_1} } \circ y_2
+ | c = $(1)$: $y_1$ [[Definition:Commuting Elements|commutes]] with $x_2$
+}}
+{{eqn | r = \paren {x_1 \circ x_2} \circ \paren {y_1 \circ y_2}
+ | c = [[Definition:Associative|Associativity]] of $\circ$
+}}
+{{eqn | r = \map C {x_1, x_2} \circ \map C {y_1, y_2}
+ | c = Definition of $C$
+}}
+{{end-eqn}}
+So $C$ is a [[Definition:Group Homomorphism|(group) homomorphism]].
+It follows from $(2)$ that $C$ is a [[Definition:Surjection|surjection]] and so, by definition, an [[Definition:Group Epimorphism|epimorphism]].
+As $H_1$ and $H_2$ are [[Definition:Subgroup|subgroups]] of $G$, they are by definition [[Definition:Group|groups]].
+Now let $h_1 \in H_1, h_2 \in H_2$ such that $h_1 \circ h_2 = e$.
+That is, $h_2 = h_1^{-1}$.
+By the [[Two-Step Subgroup Test]] it follows that $h_2 \in H_1$.
+By a similar argument, $h_1 \in H_2$.
+Thus by definition of [[Definition:Set Intersection|set intersection]], $h_1, h_2 \in H_1 \cap H_2$ and so $h_1 = e = h_2$.
+By definition of $C$, that means:
+:$\map C {h_1, h_2} = e \implies \tuple {h_1, h_2} = \tuple {e, e}$
+That is:
+:$\map \ker C = \set {\tuple {e, e} }$
+From the [[Quotient Theorem for Group Epimorphisms]] it follows that $C$ is a [[Definition:Group Monomorphism|monomorphism]].
+So $C$ is both an [[Definition:Group Epimorphism|epimorphism]] and a [[Definition:Group Monomorphism|monomorphism]], and so by definition an [[Definition:Group Isomorphism|isomorphism]].
+Thus, by definition, $G$ is the [[Definition:Internal Group Direct Product|internal group direct product]] of $H_1$ and $H_2$.
+{{qed}}
+\end{proof}<|endoftext|>
+\section{Internal Group Direct Product Commutativity/General Result}
+Tags: Internal Group Direct Products
+
+\begin{theorem}
+Let $\struct {G, \circ}$ be the [[Definition:Internal Group Direct Product/General Definition|internal group direct product]] of $H_1, H_2, \ldots, H_n$.
+Let $h_i$ and $h_j$ be [[Definition:Element|elements]] of $H_i$ and $H_j$ respectively, $i \ne j$.
+Then $h_i \circ h_j = h_j \circ h_i$.
+\end{theorem}
+
+\begin{proof}
+Let $g = h_i \circ h_j \circ h_i^{-1} \circ h_j^{-1}$.
+From the [[Internal Direct Product Theorem/General Result|Internal Direct Product Theorem: General Result]], $H_i$ and $H_j$ are [[Definition:Normal Subgroup|normal in $G$]].
+Hence $h_i \circ h_j \circ h_i^{-1} \in H_j$ and thus $g \in H_j$.
+Similarly, $g \in H_i$ and thus $g \in H_i \cap H_j$.
+But:
+{{begin-eqn}}
+{{eqn | l = H_i \cap H_j
+ | r = \set e
+ | c = 
+}}
+{{eqn | ll= \leadsto
+ | l = g
+ | r = h_i \circ h_j \circ h_i^{-1} \circ h_j^{-1}
+ | c = 
+}}
+{{eqn | r = e
+ | c = 
+}}
+{{eqn | ll= \leadsto
+ | l = h_i \circ h_j \circ h_i^{-1}
+ | r = h_j
+ | c = 
+}}
+{{eqn | ll= \leadsto
+ | l = h_i \circ h_j
+ | r = h_j \circ h_i
+ | c = 
+}}
+{{end-eqn}}
+{{qed}}
+\end{proof}<|endoftext|>
+\section{Pre-Image Sigma-Algebra on Codomain is Sigma-Algebra}
+Tags: Sigma-Algebras
+
+\begin{theorem}
+Let $X, X'$ be [[Definition:Set|sets]], and let $f: X \to X'$ be a [[Definition:Mapping|mapping]].
+Let $\Sigma$ be a [[Definition:Sigma-Algebra|$\sigma$-algebra]] on $X$.
+Denote with $\Sigma'$ the [[Definition:Pre-Image Sigma-Algebra on Codomain|pre-image $\sigma$-algebra]] on the [[Definition:Codomain of Mapping|codomain]] of $f$.
+Then $\Sigma'$ is a [[Definition:Sigma-Algebra|$\sigma$-algebra]] on $X'$.
+\end{theorem}
+
+\begin{proof}
+Verify the axioms for a [[Definition:Sigma-Algebra|$\sigma$-algebra]] in turn:
+=== Axiom $(1)$ ===
+As $f$ is a [[Definition:Mapping|mapping]], it is immediate that $f^{-1} \left({X'}\right) = X$.
+Also $X \in \Sigma$ as $\Sigma$ is a [[Definition:Sigma-Algebra|$\sigma$-algebra]].
+Hence $X' \in \Sigma'$.
+{{qed|lemma}}
+=== Axiom $(2)$ ===
+Let $S' \in \Sigma'$.
+By [[Preimage of Set Difference under Mapping]] and $f^{-1} \left({X'}\right) = X$:
+:$f^{-1} \left({X' \setminus S'}\right) = f^{-1} \left({X'}\right) \setminus f^{-1} \left({S'}\right)$
+Now $f^{-1} \left({X'}\right) = X \in \Sigma$, and $f^{-1} \left({S'}\right) \in \Sigma$ by definition of $\Sigma'$.
+As $\Sigma$ is a [[Definition:Sigma-Algebra|$\sigma$-algebra]], it follows that $f^{-1} \left({X' \setminus S'}\right) \in \Sigma$ as well.
+Hence $X' \setminus S' \in \Sigma'$.
+{{qed|lemma}}
+=== Axiom $(3)$ ===
+Let $\left({S'_i}\right)_{i \mathop \in \N}$ be a [[Definition:Sequence|sequence]] in $\Sigma'$.
+By [[Preimage of Union under Mapping/General Result|Preimage of Union under Mapping: General Result]], we have:
+:$\displaystyle f^{-1} \left({\bigcup_{i \mathop \in \N} S'_i}\right) = \bigcup_{i \mathop \in \N} f^{-1} \left({S'_i}\right)$
+By assumption, $S'_i \in \Sigma'$ for all $i \in \N$; hence $f^{-1} \left({S'_i}\right) \in \Sigma$ for all $i \in \N$.
+Now $\displaystyle \bigcup_{i \mathop \in \N} f^{-1} \left({S'_i}\right) \in \Sigma$ as $\Sigma$ is a [[Definition:Sigma-Algebra|$\sigma$-algebra]].
+Hence $\displaystyle \bigcup_{i \mathop \in \N} S'_i \in \Sigma'$.
+{{qed}}
+\end{proof}<|endoftext|>
+\section{Mapping Measurable iff Measurable on Generator}
+Tags: Measure Theory
+
+\begin{theorem}
+Let $\left({X, \Sigma}\right)$ and $\left({X', \Sigma'}\right)$ be [[Definition:Measurable Space|measurable spaces]].
+Suppose that $\Sigma'$ is [[Definition:Sigma-Algebra Generated by Collection of Subsets|generated by]] $\mathcal{G}'$.
+Then a [[Definition:Mapping|mapping]] $f: X \to X'$ is [[Definition:Measurable Mapping|$\Sigma \, / \, \Sigma'$-measurable]] {{iff}}:
+:$\forall G' \in \mathcal{G}': f^{-1} \left({G'}\right) \in \Sigma$
+That is, {{iff}} the [[Definition:Preimage of Subset under Mapping|preimage]] of every [[Definition:Sigma-Algebra Generated by Collection of Subsets#Generator|generator]] under $f$ is a [[Definition:Measurable Set|measurable set]].
+\end{theorem}
+
+\begin{proof}
+=== Necessary Condition ===
+Let $f$ be [[Definition:Measurable Mapping|$\Sigma \, / \, \Sigma'$-measurable]].
+By definition of [[Definition:Sigma-Algebra Generated by Collection of Subsets|generated $\sigma$-algebra]], $\mathcal{G}' \subseteq \Sigma'$.
+Hence, in particular, $f$ satisfies: +:$\forall G' \in \mathcal{G}': f^{-1} \left({G'}\right) \in \Sigma$ +{{qed|lemma}} +=== Sufficient Condition === +Suppose that: +:$\forall G' \in \mathcal{G}': f^{-1} \left({G'}\right) \in \Sigma$ +Consider the [[Definition:Pre-Image Sigma-Algebra on Codomain|pre-image $\sigma$-algebra]] $\Sigma''$ on $X'$. +The supposition precisely states that $\mathcal{G}' \subseteq \Sigma''$. +By definition of [[Definition:Generated Sigma-Algebra|generated $\sigma$-algebra]], $\sigma \left({\mathcal{G}'}\right) \subseteq \Sigma''$. +By [[Definition:Pre-Image Sigma-Algebra on Codomain|definition of $\Sigma''$]], this precisely means: +:$\forall E' \in \Sigma': f^{-1} \left({E'}\right) \in \Sigma$ +That is, $f$ is [[Definition:Measurable Mapping|$\Sigma \, / \, \Sigma'$-measurable]]. +{{qed}} +\end{proof}<|endoftext|> +\section{Internal Direct Product Theorem/General Result} +Tags: Internal Group Direct Products, Normal Subgroups, Internal Direct Product Theorem + +\begin{theorem} +Let $G$ be a [[Definition:Group|group]] whose [[Definition:Identity Element|identity]] is $e$. +Let $\sequence {H_k}_{1 \mathop \le k \mathop \le n}$ be a [[Definition:Sequence|sequence]] of [[Definition:Subgroup|subgroups]] of $G$. +Then $G$ is the [[Definition:Internal Group Direct Product/General Definition|internal group direct product]] of $\sequence {H_k}_{1 \mathop \le k \mathop \le n}$ {{iff}}: +:$(1): \quad G = H_1 H_2 \cdots H_n$ +:$(2): \quad \sequence {H_k}_{1 \mathop \le k \mathop \le n}$ is a [[Definition:Sequence|sequence]] of [[Definition:Independent Subgroups|independent subgroups]] +:$(3): \quad \forall k \in \set {1, 2, \ldots, n}: H_k \lhd G$ +where $H_k \lhd G$ denotes that $H_k$ is a [[Definition:Normal Subgroup|normal subgroup]] of $G$. 
+\end{theorem}<|endoftext|>
+\section{Structure Induced by Group Operation is Group}
+Tags: Group Theory, Mapping Theory
+
+\begin{theorem}
+Let $\struct {G, \circ}$ be a [[Definition:Group|group]] whose [[Definition:Identity Element|identity]] is $e$.
+Let $S$ be a [[Definition:Set|set]].
+Let $\struct {G^S, \oplus}$ be the [[Definition:Induced Structure|structure on $G^S$ induced]] by $\circ$.
+Then $\struct {G^S, \oplus}$ is a [[Definition:Group|group]].
+\end{theorem}
+
+\begin{proof}
+Taking the [[Definition:Group Axioms|group axioms]] in turn:
+=== $\text G 0$: Closure ===
+Let $f, g \in G^S$.
+As $\struct {G, \circ}$ is a [[Definition:Group|group]], it is [[Definition:Closed Algebraic Structure|closed]] by [[Definition:Group Axioms|group axiom $\text G 0$]].
+From [[Closure of Pointwise Operation on Algebraic Structure]] it follows that $\struct {G^S, \oplus}$ is likewise [[Definition:Closed Algebraic Structure|closed]].
+{{qed|lemma}}
+=== $\text G 1$: Associativity ===
+As $\struct {G, \circ}$ is a [[Definition:Group|group]], $\circ$ is [[Definition:Associative Operation|associative]].
+So from [[Structure Induced by Associative Operation is Associative]], $\struct {G^S, \oplus}$ is also [[Definition:Associative Algebraic Structure|associative]].
+{{qed|lemma}}
+=== $\text G 2$: Identity ===
+We have from [[Induced Structure Identity]] that the [[Definition:Constant Mapping|constant mapping]] $f_e: S \to G$ defined as:
+:$\forall x \in S: \map {f_e} x = e$
+is the [[Definition:Identity Element|identity]] for $\struct {G^S, \oplus}$.
+{{qed|lemma}}
+=== $\text G 3$: Inverses ===
+Let $f \in G^S$.
+Let $f^* \in G^S$ be defined as follows:
+:$\forall f \in G^S: \forall x \in S: \map {f^*} x = \paren {\map f x}^{-1}$
+From [[Induced Structure Inverse]], $f^*$ is the [[Definition:Inverse Element|inverse]] of $f$ for the [[Definition:Pointwise Operation|pointwise operation $\oplus$ induced]] on $G^S$ by $\circ$.
+{{qed|lemma}} +All the [[Definition:Group Axioms|group axioms]] are thus seen to be fulfilled, and so $\struct {G^S, \oplus}$ is a [[Definition:Group|group]]. +{{qed}} +\end{proof}<|endoftext|> +\section{Continuous Mapping is Measurable} +Tags: Measure Theory, Continuous Mappings + +\begin{theorem} +Let $\left({X, \tau}\right)$ and $\left({X', \tau'}\right)$ be [[Definition:Topological Space|topological spaces]]. +Denote with $\mathcal B \left({X, \tau}\right)$ and $\mathcal B \left({X', \tau'}\right)$ their respective [[Definition:Borel Sigma-Algebra|Borel $\sigma$-algebras]]. +Let $f: X \to X'$ be a [[Definition:Continuous Mapping|continuous mapping]]. +Then $f$ is [[Definition:Measurable Mapping|$\mathcal B \left({X, \tau}\right) \, / \, \mathcal B \left({X', \tau'}\right)$-measurable]]. +\end{theorem} + +\begin{proof} +As $f$ is a [[Definition:Continuous Mapping|continuous mapping]], by definition, it holds that: +:$\forall S' \in \tau': f^{-1} \left({S'}\right) \in \tau$ +Now, [[Definition:Borel Sigma-Algebra|by definition]], $\mathcal B \left({X, \tau}\right) = \sigma \left({\tau}\right)$, and so $\tau \subseteq \mathcal B \left({X, \tau}\right)$. +Also, $\mathcal B \left({X', \tau'}\right) = \sigma \left({\tau'}\right)$. +Hence [[Mapping Measurable iff Measurable on Generator]] applies. +Therefore, $f$ is [[Definition:Measurable Mapping|$\mathcal B \left({X, \tau}\right) \, / \, \mathcal B \left({X', \tau'}\right)$-measurable]]. +{{qed}} +\end{proof}<|endoftext|> +\section{Composition of Measurable Mappings is Measurable} +Tags: Measure Theory, Composite Mappings + +\begin{theorem} +Let $\struct {X_1, \Sigma_1}$, $\struct {X_2, \Sigma_2}$ and $\struct {X_3, \Sigma_3}$ be [[Definition:Measurable Space|measurable spaces]]. +Let $f: X_1 \to X_2$ be a [[Definition:Measurable Mapping|$\Sigma_1 \, / \, \Sigma_2$-measurable mapping]]. +Let $g: X_2 \to X_3$ be a [[Definition:Measurable Mapping|$\Sigma_2 \, / \, \Sigma_3$-measurable mapping]]. 
+Then their [[Definition:Composition of Mappings|composition]] $g \circ f: X_1 \to X_3$ is [[Definition:Measurable Mapping|$\Sigma_1 \, / \, \Sigma_3$-measurable]]. +\end{theorem} + +\begin{proof} +Let $E_3 \in \Sigma_3$. +Then $\map {g^{-1} } {E_3} \in \Sigma_2$, and $\map {f^{-1} } {\map {g^{-1} } {E_3} } \in \Sigma_1$ as $f, g$ are [[Definition:Measurable Mapping|measurable]]. +That is, $\map {\paren {g \circ f}^{-1} } {E_3} \in \Sigma_1$ for all $E_3 \in \Sigma_3$. +Hence, $g \circ f$ is [[Definition:Measurable Mapping|$\Sigma_1 \, / \, \Sigma_3$-measurable]]. +{{qed}} +\end{proof}<|endoftext|> +\section{Restriction of Equivalence Relation is Equivalence} +Tags: Equivalence Relations + +\begin{theorem} +Let $S$ be a [[Definition:Set|set]]. +Let $\mathcal R \subseteq S \times S$ be an [[Definition:Equivalence Relation|equivalence relation]] on $S$. +Let $T \subseteq S$ be a [[Definition:Subset|subset]] of $S$. +Let $\mathcal R {\restriction_T} \subseteq T \times T$ be the [[Definition:Restriction of Relation|restriction]] of $\mathcal R$ to $T$. +Then $\mathcal R {\restriction_T}$ is an [[Definition:Equivalence Relation|equivalence relation]] on $T$. +\end{theorem} + +\begin{proof} +Let $\mathcal R$ be an [[Definition:Equivalence Relation|equivalence relation]] on $S$. +Then by definition: +: $\mathcal R$ is a [[Definition:Reflexive Relation|reflexive relation]] on $S$ +: $\mathcal R$ is a [[Definition:Symmetric Relation|symmetric relation]] on $S$ +: $\mathcal R$ is a [[Definition:Transitive Relation|transitive relation]] on $S$. 
+Then: +: from [[Restriction of Reflexive Relation is Reflexive]], $\mathcal R {\restriction_T}$ is a [[Definition:Reflexive Relation|reflexive relation]] on $T$ +: from [[Restriction of Symmetric Relation is Symmetric]], $\mathcal R {\restriction_T}$ is a [[Definition:Symmetric Relation|symmetric relation]] on $T$ +: from [[Restriction of Transitive Relation is Transitive]], $\mathcal R {\restriction_T}$ is a [[Definition:Transitive Relation|transitive relation]] on $T$ +and so it follows by definition that $\mathcal R {\restriction_T}$ is an [[Definition:Equivalence Relation|equivalence relation]] on $T$. +{{qed}} +\end{proof}<|endoftext|> +\section{Restriction of Reflexive Relation is Reflexive} +Tags: Reflexive Relations, Restrictions + +\begin{theorem} +Let $S$ be a [[Definition:Set|set]]. +Let $\mathcal R \subseteq S \times S$ be a [[Definition:Reflexive Relation|reflexive relation]] on $S$. +Let $T \subseteq S$ be a [[Definition:Subset|subset]] of $S$. +Let $\mathcal R {\restriction_T} \subseteq T \times T$ be the [[Definition:Restriction of Relation|restriction]] of $\mathcal R$ to $T$. +Then $\mathcal R {\restriction_T}$ is a [[Definition:Reflexive Relation|reflexive relation]] on $T$. +\end{theorem} + +\begin{proof} +Suppose $\mathcal R$ is [[Definition:Reflexive Relation|reflexive]] on $S$. +Then: +: $\forall x \in S: \left({x, x}\right) \in \mathcal R$ +So: +: $\forall x \in T: \left({x, x}\right) \in \mathcal R {\restriction_T}$ +Thus $\mathcal R {\restriction_T}$ is [[Definition:Reflexive Relation|reflexive]] on $T$. +{{qed}} +\end{proof}<|endoftext|> +\section{Restriction of Symmetric Relation is Symmetric} +Tags: Symmetric Relations, Restrictions + +\begin{theorem} +Let $S$ be a [[Definition:Set|set]]. +Let $\mathcal R \subseteq S \times S$ be a [[Definition:Symmetric Relation|symmetric relation]] on $S$. +Let $T \subseteq S$ be a [[Definition:Subset|subset]] of $S$. 
+Let $\mathcal R {\restriction_T} \subseteq T \times T$ be the [[Definition:Restriction of Relation|restriction]] of $\mathcal R$ to $T$. +Then $\mathcal R {\restriction_T}$ is a [[Definition:Symmetric Relation|symmetric relation]] on $T$. +\end{theorem} + +\begin{proof} +Suppose $\mathcal R$ is [[Definition:Symmetric Relation|symmetric]] on $S$. +Then: +{{begin-eqn}} +{{eqn | l = \left({x, y}\right) + | o = \in + | r = \mathcal R {\restriction_T} + | c = +}} +{{eqn | ll= \implies + | l = \left({x, y}\right) + | o = \in + | r = \left({T \times T}\right) \cap \mathcal R + | c = Definition of [[Definition:Restriction of Relation|Restriction of Relation]] +}} +{{eqn | ll= \implies + | l = \left({x, y}\right) + | o = \in + | r = T \times T + | c = +}} +{{eqn | lo= \land + | l = \left({x, y}\right) + | o = \in + | r = \mathcal R + | c = Definition of [[Definition:Set Intersection|Set Intersection]] +}} +{{eqn | ll= \implies + | l = \left({y, x}\right) + | o = \in + | r = T \times T + | c = +}} +{{eqn | lo= \land + | l = \left({y, x}\right) + | o = \in + | r = \mathcal R + | c = $\mathcal R$ is [[Definition:Symmetric Relation|symmetric]] on $S$ +}} +{{eqn | ll= \implies + | l = \left({y, x}\right) + | o = \in + | r = \left({T \times T}\right) \cap \mathcal R + | c = Definition of [[Definition:Set Intersection|Set Intersection]] +}} +{{eqn | ll= \implies + | l = \left({y, x}\right) + | o = \in + | r = \mathcal R {\restriction_T} + | c = Definition of [[Definition:Restriction of Relation|Restriction of Relation]] +}} +{{end-eqn}} +and so $\mathcal R {\restriction_T}$ is [[Definition:Symmetric Relation|symmetric]] on $T$. +{{qed}} +\end{proof}<|endoftext|> +\section{Restriction of Transitive Relation is Transitive} +Tags: Transitive Relations, Restrictions + +\begin{theorem} +Let $S$ be a [[Definition:Set|set]]. +Let $\RR \subseteq S \times S$ be a [[Definition:Transitive Relation|transitive relation]] on $S$. +Let $T \subseteq S$ be a [[Definition:Subset|subset]] of $S$. 
+Let $\RR {\restriction_T} \subseteq T \times T$ be the [[Definition:Restriction of Relation|restriction]] of $\RR$ to $T$.
+Then $\RR {\restriction_T}$ is a [[Definition:Transitive Relation|transitive relation]] on $T$.
+\end{theorem}
+
+\begin{proof}
+Suppose $\RR$ is [[Definition:Transitive Relation|transitive]] on $S$.
+Then by definition:
+:$\set {\tuple {x, y}, \tuple {y, z} } \subseteq \RR \implies \tuple {x, z} \in \RR$
+So:
+{{begin-eqn}}
+{{eqn | l = \set {\tuple {x, y}, \tuple {y, z} }
+ | o = \subseteq
+ | r = \RR {\restriction_T}
+ | c = 
+}}
+{{eqn | ll= \leadsto
+ | l = \set {\tuple {x, y}, \tuple {y, z} }
+ | o = \subseteq
+ | r = \paren {T \times T} \cap \RR
+ | c = {{Defof|Restriction of Relation}}
+}}
+{{eqn | ll= \leadsto
+ | l = \tuple {x, z}
+ | o = \in
+ | r = \paren {T \times T} \cap \RR
+ | c = $\RR$ is [[Definition:Transitive Relation|transitive]] on $S$
+}}
+{{eqn | ll= \leadsto
+ | l = \tuple {x, z}
+ | o = \in
+ | r = \RR {\restriction_T}
+ | c = {{Defof|Restriction of Relation}}
+}}
+{{end-eqn}}
+Therefore, if $x, y, z \in T$, it follows that:
+:$\set {\tuple {x, y}, \tuple {y, z} } \subseteq \RR {\restriction_T} \implies \tuple {x, z} \in \RR {\restriction_T}$
+and so by definition $\RR {\restriction_T}$ is a [[Definition:Transitive Relation|transitive relation]] on $T$.
+{{qed}}
+\end{proof}<|endoftext|>
+\section{Restriction of Asymmetric Relation is Asymmetric}
+Tags: Symmetric Relations, Restrictions
+
+\begin{theorem}
+Let $S$ be a [[Definition:Set|set]].
+Let $\mathcal R \subseteq S \times S$ be an [[Definition:Asymmetric Relation|asymmetric relation]] on $S$.
+Let $T \subseteq S$ be a [[Definition:Subset|subset]] of $S$.
+Let $\mathcal R \restriction_T \ \subseteq T \times T$ be the [[Definition:Restriction of Relation|restriction]] of $\mathcal R$ to $T$.
+Then $\mathcal R \restriction_T$ is an [[Definition:Asymmetric Relation|asymmetric relation]] on $T$.
+\end{theorem} + +\begin{proof} +Suppose $\mathcal R$ is [[Definition:Asymmetric Relation|asymmetric]] on $S$. +Then: +{{begin-eqn}} +{{eqn | l = \left({x, y}\right) + | o = \in + | r = \mathcal R \restriction_T + | c = +}} +{{eqn | ll= \implies + | l = \left({x, y}\right) + | o = \in + | r = \left({T \times T}\right) \cap \mathcal R + | c = {{Defof|Restriction of Relation}} +}} +{{eqn | ll= \implies + | l = \left({x, y}\right) + | o = \in + | r = \mathcal R + | c = [[Intersection is Subset]] +}} +{{eqn | ll= \implies + | l = \left({y, x}\right) + | o = \notin + | r = \mathcal R + | c = $\mathcal R$ is [[Definition:Asymmetric Relation|asymmetric]] on $S$ +}} +{{eqn | ll= \implies + | l = \left({y, x}\right) + | o = \notin + | r = \mathcal R \restriction_T + | c = {{Defof|Restriction of Relation}} +}} +{{end-eqn}} +and so $\mathcal R \restriction_T$ is [[Definition:Asymmetric Relation|asymmetric]] on $T$. +{{qed}} +\end{proof}<|endoftext|> +\section{Restriction of Antisymmetric Relation is Antisymmetric} +Tags: Symmetric Relations, Restrictions + +\begin{theorem} +Let $S$ be a [[Definition:Set|set]]. +Let $\RR \subseteq S \times S$ be an [[Definition:Antisymmetric Relation|antisymmetric relation]] on $S$. +Let $T \subseteq S$ be a [[Definition:Subset|subset]] of $S$. +Let $\RR {\restriction_T} \subseteq T \times T$ be the [[Definition:Restriction of Relation|restriction]] of $\RR$ to $T$. +Then $\RR {\restriction_T}$ is an [[Definition:Antisymmetric Relation|antisymmetric relation]] on $T$. +\end{theorem} + +\begin{proof} +Suppose $\RR$ is [[Definition:Antisymmetric Relation|antisymmetric]] on $S$. 
+Then: +{{begin-eqn}} +{{eqn | l = \set {\tuple {x, y}, \tuple {y, x} } + | o = \subseteq + | r = \RR {\restriction_T} + | c = +}} +{{eqn | ll= \leadsto + | l = \set {\tuple {x, y}, \tuple {y, x} } + | o = \subseteq + | r = \paren {T \times T} \cap \RR + | c = {{Defof|Restriction of Relation}} +}} +{{eqn | ll= \leadsto + | l = \set {\tuple {x, y}, \tuple {y, x} } + | o = \subseteq + | r = \RR + | c = [[Intersection is Subset]] +}} +{{eqn | ll= \leadsto + | l = x + | r = y + | c = $\RR$ is [[Definition:Antisymmetric Relation|Antisymmetric]] on $S$ +}} +{{end-eqn}} +Thus $\RR {\restriction_T}$ is [[Definition:Antisymmetric Relation|antisymmetric]] on $T$. +{{qed}} +\end{proof}<|endoftext|> +\section{Restriction of Antireflexive Relation is Antireflexive} +Tags: Reflexive Relations, Restrictions + +\begin{theorem} +Let $S$ be a [[Definition:Set|set]]. +Let $\mathcal R \subseteq S \times S$ be an [[Definition:Antireflexive Relation|antireflexive relation]] on $S$. +Let $T \subseteq S$ be a [[Definition:Subset|subset]] of $S$. +Let $\mathcal R \restriction_T \ \subseteq T \times T$ be the [[Definition:Restriction of Relation|restriction]] of $\mathcal R$ to $T$. +Then $\mathcal R \restriction_T$ is an [[Definition:Antireflexive Relation|antireflexive relation]] on $T$. +\end{theorem} + +\begin{proof} +Suppose $\mathcal R$ is [[Definition:Antireflexive Relation|antireflexive]] on $S$. +Then: +: $\forall x \in S: \left({x, x}\right) \notin \mathcal R$ +So: +: $\forall x \in T: \left({x, x}\right) \notin \mathcal R \restriction_T$ +Thus $\mathcal R \restriction_T$ is [[Definition:Antireflexive Relation|antireflexive]] on $T$. +{{qed}} +\end{proof}<|endoftext|> +\section{Restriction of Antitransitive Relation is Antitransitive} +Tags: Transitive Relations, Restrictions + +\begin{theorem} +Let $S$ be a [[Definition:Set|set]]. +Let $\mathcal R \subseteq S \times S$ be an [[Definition:Antitransitive Relation|antitransitive relation]] on $S$. 
+Let $T \subseteq S$ be a [[Definition:Subset|subset]] of $S$.
+Let $\mathcal R \restriction_T \ \subseteq T \times T$ be the [[Definition:Restriction of Relation|restriction]] of $\mathcal R$ to $T$.
+Then $\mathcal R \restriction_T$ is an [[Definition:Antitransitive Relation|antitransitive relation]] on $T$.
+\end{theorem}
+
+\begin{proof}
+Suppose $\mathcal R$ is [[Definition:Antitransitive Relation|antitransitive]] on $S$.
+Then by definition:
+: $\left\{ {\left({x, y}\right), \left({y, z}\right)}\right\} \subseteq \mathcal R \implies \left({x, z}\right) \notin \mathcal R$
+So:
+{{begin-eqn}}
+{{eqn | l=\left\{ {\left({x, y}\right), \left({y, z}\right)}\right\}
+ | o=\subseteq
+ | r=\mathcal R \restriction_T
+ | c=
+}}
+{{eqn | ll=\implies
+ | l=\left\{ {\left({x, y}\right), \left({y, z}\right)}\right\}
+ | o=\subseteq
+ | r=\left({T \times T}\right) \cap \mathcal R
+ | c=by definition of [[Definition:Restriction of Relation|restriction of relation]]
+}}
+{{eqn | ll=\implies
+ | l=\left({x, z}\right)
+ | o=\notin
+ | r=\left({T \times T}\right) \cap \mathcal R
+ | c=as $\mathcal R$ is [[Definition:Antitransitive Relation|antitransitive]] on $S$
+}}
+{{eqn | ll=\implies
+ | l=\left({x, z}\right)
+ | o=\notin
+ | r=\mathcal R \restriction_T
+ | c=by definition of [[Definition:Restriction of Relation|restriction of relation]]
+}}
+{{end-eqn}}
+Therefore, if $x, y, z \in T$, it follows that:
+: $\left\{ {\left({x, y}\right), \left({y, z}\right)}\right\} \subseteq \mathcal R \restriction_T \implies \left({x, z}\right) \notin \mathcal R \restriction_T$
+and so by definition $\mathcal R \restriction_T$ is an [[Definition:Antitransitive Relation|antitransitive relation]] on $T$.
+{{qed}}
+\end{proof}<|endoftext|>
+\section{Relation is Total iff Union with Inverse is Trivial Relation}
+Tags: Total Relations, Inverse Relations, Trivial Relation

+\begin{theorem}
+Let $\mathcal R$ be a [[Definition:Relation|relation]] on $S$.
+Then $\mathcal R$ is a [[Definition:Total Relation|total relation]] {{iff}}:
+:$\mathcal R \cup \mathcal R^{-1} = S \times S$
+where:
+: $\mathcal R^{-1}$ is the [[Definition:Inverse Relation|inverse]] of $\mathcal R$.
+: $S \times S$ is the [[Definition:Trivial Relation|trivial relation]] on $S$.
+\end{theorem}
+
+\begin{proof}
+=== Necessary Condition ===
+Let $\mathcal R$ be a [[Definition:Total Relation|total relation]].
+By definition of [[Definition:Relation|relation]], both $\mathcal R \subseteq S \times S$ and $\mathcal R^{-1} \subseteq S \times S$.
+So from [[Union is Smallest Superset]] (and indeed, trivially) $\mathcal R \cup \mathcal R^{-1} \subseteq S \times S$.
+Let $\left({a, b}\right) \in S \times S$.
+As $\mathcal R$ is [[Definition:Total Relation|total]], either:
+:$\left({a, b}\right) \in \mathcal R$
+or:
+:$\left({b, a}\right) \in \mathcal R$
+From the definition of [[Definition:Inverse Relation|inverse relation]], this means that either:
+:$\left({a, b}\right) \in \mathcal R$
+or:
+:$\left({a, b}\right) \in \mathcal R^{-1}$
+That is:
+:$\left({a, b}\right) \in \mathcal R \cup \mathcal R^{-1}$
+and so by definition of [[Definition:Subset|subset]]:
+:$S \times S \subseteq \mathcal R \cup \mathcal R^{-1}$
+Hence, by definition of [[Definition:Set Equality/Definition 2|set equality]]:
+:$\mathcal R \cup \mathcal R^{-1} = S \times S$
+{{qed|lemma}}
+=== Sufficient Condition ===
+Let $\mathcal R \cup \mathcal R^{-1} = S \times S$.
+Let $\left({a, b}\right) \in S \times S$.
+Then by definition of [[Definition:Set Union|set union]]:
+:$\left({a, b}\right) \in \mathcal R$
+or:
+:$\left({a, b}\right) \in \mathcal R^{-1}$
+That is, by definition of [[Definition:Inverse Relation|inverse relation]]:
+:$\left({a, b}\right) \in \mathcal R$
+or:
+:$\left({b, a}\right) \in \mathcal R$
+So by definition $\mathcal R$ is [[Definition:Total Relation|total]].
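As an informal, purely illustrative sanity check (not part of the formal proof above), the equivalence can be brute-forced over every relation on a small finite set; all names below are ad hoc:

```python
from itertools import chain, combinations, product

S = {0, 1, 2}
pairs = list(product(S, repeat=2))

def is_total(R):
    # total: every a, b in S (a = b allowed) satisfy (a, b) in R or (b, a) in R
    return all((a, b) in R or (b, a) in R for a in S for b in S)

def powerset(xs):
    return chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))

# check: R is total  <=>  R union R^(-1) = S x S, over all 2^9 relations on S
for R_tuple in powerset(pairs):
    R = set(R_tuple)
    R_inv = {(b, a) for (a, b) in R}
    assert is_total(R) == (R | R_inv == set(pairs))
```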
+{{Qed}}
+\end{proof}<|endoftext|>
+\section{Restriction of Connected Relation is Connected}
+Tags: Connected Relations, Restrictions
+
+\begin{theorem}
+Let $S$ be a [[Definition:Set|set]].
+Let $\mathcal R \subseteq S \times S$ be a [[Definition:Connected Relation|connected relation]] on $S$.
+Let $T \subseteq S$ be a [[Definition:Subset|subset]] of $S$.
+Let $\mathcal R \restriction_T \ \subseteq T \times T$ be the [[Definition:Restriction of Relation|restriction]] of $\mathcal R$ to $T$.
+Then $\mathcal R \restriction_T$ is a [[Definition:Connected Relation|connected relation]] on $T$.
+\end{theorem}
+
+\begin{proof}
+Suppose $\mathcal R$ is [[Definition:Connected Relation|connected]] on $S$.
+That is:
+:$\forall a, b \in S: a \ne b \implies \left({a, b}\right) \in \mathcal R \lor \left({b, a}\right) \in \mathcal R$
+So, supposing $a \ne b$:
+{{begin-eqn}}
+{{eqn | l=a, b
+ | o=\in
+ | r=T
+ | c=
+}}
+{{eqn | ll=\implies
+ | l=\left({a, b}\right)
+ | o=\in
+ | r=T \times T
+ | c=
+}}
+{{eqn | lo=\land
+ | l=\left({b, a}\right)
+ | o=\in
+ | r=T \times T
+ | c=by definition of [[Definition:Ordered Pair|ordered pair]] and [[Definition:Cartesian Product|cartesian product]]
+}}
+{{eqn | ll=\implies
+ | l=\left({a, b}\right)
+ | o=\in
+ | r=\left({T \times T}\right) \cap \mathcal R
+ | c=
+}}
+{{eqn | lo=\lor
+ | l=\left({b, a}\right)
+ | o=\in
+ | r=\left({T \times T}\right) \cap \mathcal R
+ | c=as $\mathcal R$ is [[Definition:Connected Relation|connected]] on $S$
+}}
+{{eqn | ll=\implies
+ | l=\left({a, b}\right)
+ | o=\in
+ | r=\mathcal R \restriction_T
+ | c=
+}}
+{{eqn | lo=\lor
+ | l=\left({b, a}\right)
+ | o=\in
+ | r=\mathcal R \restriction_T
+ | c=by definition of [[Definition:Restriction of Relation|restriction of relation]]
+}}
+{{end-eqn}}
+and so $\mathcal R \restriction_T$ is [[Definition:Connected Relation|connected]] on $T$.
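The theorem just proved can likewise be checked exhaustively on a small finite carrier set. This is only an illustrative sketch with ad hoc names, not a substitute for the proof:

```python
from itertools import chain, combinations, product

S = {0, 1, 2}
pairs = list(product(S, repeat=2))

def is_connected(R, A):
    # connected on A: any two distinct elements are related one way or the other
    return all((a, b) in R or (b, a) in R for a in A for b in A if a != b)

def powerset(xs):
    return chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))

subsets = [set(t) for t in powerset(sorted(S))]
# every connected relation on S restricts to a connected relation on each T subset of S
for R_tuple in powerset(pairs):
    R = set(R_tuple)
    if is_connected(R, S):
        for T in subsets:
            restriction = {(a, b) for (a, b) in R if a in T and b in T}
            assert is_connected(restriction, T)
```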
+{{qed}} +\end{proof}<|endoftext|> +\section{Restriction of Ordering is Ordering} +Tags: Equivalence Relations + +\begin{theorem} +Let $S$ be a [[Definition:Set|set]]. +Let $\preceq$ be an [[Definition:Ordering|ordering]] on $S$. +Let $T \subseteq S$ be a [[Definition:Subset|subset]] of $S$. +Let $\preceq \restriction_T$ be the [[Definition:Restriction of Ordering|restriction]] of $\preceq$ to $T$. +Then $\preceq \restriction_T$ is an [[Definition:Ordering|ordering]] on $T$. +\end{theorem} + +\begin{proof} +Let $\preceq$ be an [[Definition:Ordering|ordering]] on $S$. +Then, by definition: +: $\preceq$ is a [[Definition:Reflexive Relation|reflexive relation]] on $S$ +: $\preceq$ is an [[Definition:Antisymmetric Relation|antisymmetric relation]] on $S$ +: $\preceq$ is a [[Definition:Transitive Relation|transitive relation]] on $S$. +Then: +: from [[Restriction of Reflexive Relation is Reflexive]], $\preceq \restriction_T$ is a [[Definition:Reflexive Relation|reflexive relation]] on $T$ +: from [[Restriction of Antisymmetric Relation is Antisymmetric]], $\preceq \restriction_T$ is an [[Definition:Antisymmetric Relation|antisymmetric relation]] on $T$ +: from [[Restriction of Transitive Relation is Transitive]], $\preceq \restriction_T$ is a [[Definition:Transitive Relation|transitive relation]] on $T$ +and so it follows by definition that $\preceq \restriction_T$ is an [[Definition:Ordering|ordering]] on $T$. 
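As a concrete illustration (not part of the proof), the divisibility ordering on a finite set of integers remains an ordering on any subset; the helper function below is ad hoc:

```python
def is_ordering(rel, A):
    # check the three ordering axioms on the finite set A
    reflexive = all(rel(x, x) for x in A)
    antisymmetric = all(x == y or not (rel(x, y) and rel(y, x))
                        for x in A for y in A)
    transitive = all(rel(x, z) or not (rel(x, y) and rel(y, z))
                     for x in A for y in A for z in A)
    return reflexive and antisymmetric and transitive

divides = lambda x, y: y % x == 0   # the divisibility ordering on positive integers
S = set(range(1, 13))
assert is_ordering(divides, S)
# restricting to any subset T merely shrinks the carrier set: it stays an ordering
for T in [{1, 2, 4, 8}, {3, 5, 7}, {6, 12}, set()]:
    assert is_ordering(divides, T)
```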
+{{qed}} +\end{proof}<|endoftext|> +\section{Properties of Relation Not Preserved by Restriction} +Tags: Restrictions + +\begin{theorem} +If a [[Definition:Relation|relation]] is: +* [[Definition:Serial Relation|serial]], +* [[Definition:Non-Reflexive Relation|non-reflexive]], +* [[Definition:Non-Symmetric Relation|non-symmetric]], +* [[Definition:Non-Transitive Relation|non-transitive]] or +* non-[[Definition:Connected Relation|connected]] +it is impossible to state without further information whether or not any [[Definition:Restriction of Relation|restriction of that relation]] has the same properties. +\end{theorem} + +\begin{proof} +=== [[Restriction of Serial Relation is Not Necessarily Serial]] === +{{:Restriction of Serial Relation is Not Necessarily Serial}} +=== [[Restriction of Non-Reflexive Relation is Not Necessarily Non-Reflexive]] === +{{:Restriction of Non-Reflexive Relation is Not Necessarily Non-Reflexive}} +=== [[Restriction of Non-Symmetric Relation is Not Necessarily Non-Symmetric]] === +{{:Restriction of Non-Symmetric Relation is Not Necessarily Non-Symmetric}} +=== [[Restriction of Non-Transitive Relation is Not Necessarily Non-Transitive]] === +{{:Restriction of Non-Transitive Relation is Not Necessarily Non-Transitive}} +=== [[Restriction of Non-Connected Relation is Not Necessarily Non-Connected]] === +{{:Restriction of Non-Connected Relation is Not Necessarily Non-Connected}} +\end{proof}<|endoftext|> +\section{Restriction of Serial Relation is Not Necessarily Serial} +Tags: Serial Relations, Restrictions + +\begin{theorem} +Let $S$ be a [[Definition:Set|set]]. +Let $\mathcal R \subseteq S \times S$ be a [[Definition:Serial Relation|serial relation]] on $S$. +Let $T \subseteq S$ be a [[Definition:Subset|subset]] of $S$. +Let $\mathcal R \restriction_T \ \subseteq T \times T$ be the [[Definition:Restriction of Relation|restriction]] of $\mathcal R$ to $T$. 
+Then $\mathcal R \restriction_T$ is not necessarily a [[Definition:Serial Relation|serial relation]] on $T$. +\end{theorem} + +\begin{proof} +[[Proof by Counterexample]]: +Let $S = \set {a, b}$. +Let $\mathcal R = \set {\tuple {a, b}, \tuple {b, b} }$. +$\mathcal R$ is a [[Definition:Serial Relation|serial relation]], as can be seen by definition. +Now let $T = \set a$. +Then $\mathcal R {\restriction_T} = \O$. +So $\not \exists y \in T: \tuple {a, y} \in \mathcal R {\restriction_T}$. +That is, $\mathcal R {\restriction_T}$ is not a [[Definition:Serial Relation|serial relation]] on $T$. +{{qed}} +\end{proof}<|endoftext|> +\section{Restriction of Non-Reflexive Relation is Not Necessarily Non-Reflexive} +Tags: Reflexive Relations, Restrictions + +\begin{theorem} +Let $S$ be a [[Definition:Set|set]]. +Let $\RR \subseteq S \times S$ be a [[Definition:Non-Reflexive Relation|non-reflexive relation]] on $S$. +Let $T \subseteq S$ be a [[Definition:Subset|subset]] of $S$. +Let $\RR {\restriction_T} \subseteq T \times T$ be the [[Definition:Restriction of Relation|restriction]] of $\RR$ to $T$. +Then $\RR {\restriction_T}$ is not necessarily a [[Definition:Non-Reflexive Relation|non-reflexive relation]] on $T$. +\end{theorem} + +\begin{proof} +[[Proof by Counterexample]]: +Let $S = \set {a, b}$. +Let $\RR = \set {\tuple {b, b} }$. +$\RR$ is a [[Definition:Non-Reflexive Relation|non-reflexive relation]], as can be seen by definition: +:$\tuple {a, a} \notin \RR$ +:$\tuple {b, b} \in \RR$ +Now let $T = \set a$. +Then $\RR {\restriction_T} = \O$. +So: +:$\forall x \in T: \tuple {x, x} \notin \RR {\restriction_T}$ +That is, $\RR {\restriction_T}$ is an [[Definition:Antireflexive Relation|antireflexive relation]] on $T$. +That is, specifically ''not'' a [[Definition:Non-Reflexive Relation|non-reflexive relation]]. 
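The finite counterexample above is small enough to verify mechanically; the following sketch (illustrative only, with ad hoc names) does so:

```python
S = {"a", "b"}
R = {("b", "b")}

def reflexive(R, A):
    return all((x, x) in R for x in A)

def antireflexive(R, A):
    return all((x, x) not in R for x in A)

# R is non-reflexive on S: neither reflexive nor antireflexive
assert not reflexive(R, S)
assert not antireflexive(R, S)

# its restriction to T = {a} is empty, hence antireflexive on T
T = {"a"}
R_T = {(x, y) for (x, y) in R if x in T and y in T}
assert R_T == set()
assert antireflexive(R_T, T)
```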
+{{qed}} +\end{proof}<|endoftext|> +\section{Restriction of Non-Symmetric Relation is Not Necessarily Non-Symmetric} +Tags: Symmetric Relations, Restrictions + +\begin{theorem} +Let $S$ be a [[Definition:Set|set]]. +Let $\RR \subseteq S \times S$ be a [[Definition:Non-Symmetric Relation|non-symmetric relation]] on $S$. +Let $T \subseteq S$ be a [[Definition:Subset|subset]] of $S$. +Let $\RR {\restriction_T} \ \subseteq T \times T$ be the [[Definition:Restriction of Relation|restriction]] of $\RR$ to $T$. +Then $\RR {\restriction_T}$ is not necessarily a [[Definition:Non-Symmetric Relation|non-symmetric relation]] on $T$. +\end{theorem} + +\begin{proof} +[[Proof by Counterexample]]: +Let $S = \set {a, b}$. +Let $\RR = \set {\tuple {a, b}, \tuple {b, b} }$. +$\RR$ is a [[Definition:Non-Symmetric Relation|non-symmetric relation]], as can be seen by definition. +Now let $T = \set b$. +Then $\RR {\restriction_T} \ = \set {\tuple {b, b} }$. +So: +:$\forall x, y \in T: \tuple {x, y} \in \RR {\restriction_T} \implies \tuple {y, x} \in \RR {\restriction_T}$ +as can be seen by setting $x = y = b$. +So $\RR {\restriction_T}$ is a [[Definition:Symmetric Relation|symmetric relation]] on $T$. +That is, $\RR {\restriction_T}$ is not a [[Definition:Non-Symmetric Relation|non-symmetric relation]] on $T$. +{{qed}} +\end{proof}<|endoftext|> +\section{Restriction of Non-Transitive Relation is Not Necessarily Non-Transitive} +Tags: Transitive Relations, Restrictions + +\begin{theorem} +Let $S$ be a [[Definition:Set|set]]. +Let $\mathcal R \subseteq S \times S$ be a [[Definition:Non-Transitive Relation|non-transitive relation]] on $S$. +Let $T \subseteq S$ be a [[Definition:Subset|subset]] of $S$. +Let $\mathcal R \restriction_T \ \subseteq T \times T$ be the [[Definition:Restriction of Relation|restriction]] of $\mathcal R$ to $T$. +Then $\mathcal R \restriction_T$ is not necessarily a [[Definition:Non-Transitive Relation|non-transitive relation]] on $T$. 
+\end{theorem}
+
+\begin{proof}
+[[Proof by Counterexample]]:
+Let $S = \left\{{a, b}\right\}$.
+Let $\mathcal R = \left\{{\left({a, b}\right), \left({b, a}\right), \left({b, b}\right)}\right\}$.
+$\mathcal R$ is a [[Definition:Non-Transitive Relation|non-transitive relation]], as can be seen by definition.
+Now let $T = \left\{{b}\right\}$.
+Then $\mathcal R \restriction_T \ = \left\{{\left({b, b}\right)}\right\}$.
+So:
+: $\forall x, y, z \in T: \left({x, y}\right) \in \mathcal R \restriction_T \land \left({y, z}\right) \in \mathcal R \restriction_T \implies \left({x, z}\right) \in \mathcal R \restriction_T$
+as can be seen by setting $x = y = z = b$.
+So $\mathcal R \restriction_T$ is a [[Definition:Transitive Relation|transitive relation]] on $T$.
+That is, $\mathcal R \restriction_T$ is not a [[Definition:Non-Transitive Relation|non-transitive relation]] on $T$.
+{{qed}}
+\end{proof}<|endoftext|>
+\section{Existence and Uniqueness of Sigma-Algebra Generated by Collection of Mappings}
+Tags: Sigma-Algebras
+
+\begin{theorem}
+Let $\left({X_i, \Sigma_i}\right)$ be [[Definition:Measurable Space|measurable spaces]], with $i \in I$ for some [[Definition:Index Set|index set]] $I$.
+Let $X$ be a [[Definition:Set|set]], and let, for $i \in I$, $f_i: X \to X_i$ be a [[Definition:Mapping|mapping]].
+Then $\sigma \left({f_i: i \in I}\right)$, the [[Definition:Sigma-Algebra Generated by Collection of Mappings|$\sigma$-algebra generated by $\left({f_i}\right)_{i \in I}$]], exists and is unique.
+\end{theorem}
+
+\begin{proof}
+By [[Characterization of Sigma-Algebra Generated by Collection of Mappings]], we have that:
+:$\sigma \left({f_i: i \in I}\right) = \sigma \left({\displaystyle \bigcup_{i \mathop \in I} f_i^{-1} \left({\Sigma_i}\right)}\right)$
+where the latter is a [[Definition:Sigma-Algebra Generated by Collection of Subsets|$\sigma$-algebra generated by a collection of subsets]].
+The result follows from applying [[Existence and Uniqueness of Sigma-Algebra Generated by Collection of Subsets]].
+{{qed}}
+\end{proof}<|endoftext|>
+\section{Characterization of Sigma-Algebra Generated by Collection of Mappings}
+Tags: Sigma-Algebras
+
+\begin{theorem}
+Let $\left({X_i, \Sigma_i}\right)$ be [[Definition:Measurable Space|measurable spaces]], with $i \in I$ for some [[Definition:Index Set|index set]] $I$.
+Let $X$ be a [[Definition:Set|set]], and let, for $i \in I$, $f_i: X \to X_i$ be a [[Definition:Mapping|mapping]].
+Then:
+:$\sigma \left({f_i: i \in I}\right) = \sigma \left({\displaystyle \bigcup_{i \mathop \in I} f_i^{-1} \left({\Sigma_i}\right)}\right)$
+where:
+:$\sigma \left({f_i: i \in I}\right)$ is the [[Definition:Sigma-Algebra Generated by Collection of Mappings|$\sigma$-algebra generated by $\left({f_i}\right)_{i \in I}$]]
+:$\sigma \left({\displaystyle \bigcup_{i \mathop \in I} f_i^{-1} \left({\Sigma_i}\right)}\right)$ is the [[Definition:Sigma-Algebra Generated by Collection of Subsets|$\sigma$-algebra generated by $\displaystyle \bigcup_{i \mathop \in I} f_i^{-1} \left({\Sigma_i}\right)$]]
+:$f_i^{-1} \left({\Sigma_i}\right)$ denotes the [[Definition:Pre-Image Sigma-Algebra on Domain|pre-image $\sigma$-algebra on $X$ by $f_i$]]
+\end{theorem}
+
+\begin{proof}
+For each $i \in I$, one has by definition of [[Definition:Sigma-Algebra Generated by Collection of Subsets|generated $\sigma$-algebra]]:
+:$\displaystyle f_i^{-1} \left({\Sigma_i}\right) \subseteq \bigcup_{i \mathop \in I} f_i^{-1} \left({\Sigma_i}\right) \subseteq \sigma \left({\bigcup_{i \mathop \in I} f_i^{-1} \left({\Sigma_i}\right)}\right)$
+which shows that each of the $f_i$ is [[Definition:Measurable Mapping|measurable]].
+Next, suppose that $\Sigma$ is a [[Definition:Sigma-Algebra|$\sigma$-algebra]] such that each of the $f_i$ is [[Definition:Measurable Mapping|$\Sigma \,/\, \Sigma_i$-measurable]].
+Then for all $i \in I$, one has: +:$f_i^{-1} \left({\Sigma_i}\right) \subseteq \Sigma$ +and hence by [[Union is Smallest Superset/Family of Sets|Union is Smallest Superset: Family of Sets]]: +:$\displaystyle \bigcup_{i \mathop \in I} f_i^{-1} \left({\Sigma_i}\right) \subseteq \Sigma$ +Finally, by [[Generated Sigma-Algebra Preserves Subset]], it follows that: +:$\displaystyle \sigma \left({\bigcup_{i \mathop \in I} f_i^{-1} \left({\Sigma_i}\right)}\right) \subseteq \Sigma$ +Thus: +:$\displaystyle \sigma \left({\bigcup_{i \mathop \in I} f_i^{-1} \left({\Sigma_i}\right)}\right) = \sigma \left({f_i : i \in I}\right)$ +by [[Definition:Sigma-Algebra Generated by Collection of Mappings|definition]] of the latter. +{{qed}} +\end{proof}<|endoftext|> +\section{Restriction of Non-Connected Relation is Not Necessarily Non-Connected} +Tags: Connected Relations, Restrictions + +\begin{theorem} +Let $S$ be a [[Definition:Set|set]]. +Let $\RR \subseteq S \times S$ be a [[Definition:Connected Relation|non-connected relation]] on $S$. +Let $T \subseteq S$ be a [[Definition:Subset|subset]] of $S$. +Let $\RR {\restriction_T} \subseteq T \times T$ be the [[Definition:Restriction of Relation|restriction]] of $\RR$ to $T$. +Then $\RR {\restriction_T}$ is not necessarily a [[Definition:Connected Relation|non-connected relation]] on $T$. +\end{theorem} + +\begin{proof} +[[Proof by Counterexample]]: +Let $S = \set {a, b}$. +Let $\RR = \set {\tuple {a, a}, \tuple {b, b} }$. +$\RR$ is a [[Definition:Connected Relation|non-connected relation]], as can be seen by definition: neither $a \mathrel \RR b$ nor $b \mathrel \RR a$. +Now let $T = \set a$. +Then $\RR {\restriction_T} = \set {\tuple {a, a} }$. +Then $\RR {\restriction_T}$ is trivially [[Definition:Connected Relation|connected]] on $T$. 
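As with the other counterexamples in this sequence, the non-connected case is finite and can be checked mechanically; this sketch is illustrative only, with ad hoc names:

```python
S = {"a", "b"}
R = {("a", "a"), ("b", "b")}

def connected(R, A):
    # connected on A: any two distinct elements are related one way or the other
    return all((x, y) in R or (y, x) in R for x in A for y in A if x != y)

assert not connected(R, S)   # neither (a, b) nor (b, a) is in R

T = {"a"}
R_T = {(x, y) for (x, y) in R if x in T and y in T}
assert connected(R_T, T)     # vacuously true: T has no pair of distinct elements
```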
+{{qed}}
+\end{proof}<|endoftext|>
+\section{Duals of Isomorphic Ordered Sets are Isomorphic}
+Tags: Order Isomorphisms
+
+\begin{theorem}
+Let $\struct {S, \preccurlyeq_1}$ and $\struct {T, \preccurlyeq_2}$ be [[Definition:Ordered Set|ordered sets]].
+Let $\struct {S, \succcurlyeq_1}$ and $\struct {T, \succcurlyeq_2}$ be the [[Definition:Dual Ordered Set|dual ordered sets]] of $\struct {S, \preccurlyeq_1}$ and $\struct {T, \preccurlyeq_2}$ respectively.
+Let $f: \struct {S, \preccurlyeq_1} \to \struct {T, \preccurlyeq_2}$ be an [[Definition:Order Isomorphism|order isomorphism]].
+Then $f: \struct {S, \succcurlyeq_1} \to \struct {T, \succcurlyeq_2}$ is also an [[Definition:Order Isomorphism|order isomorphism]].
+\end{theorem}
+
+\begin{proof}
+{{begin-eqn}}
+{{eqn | lll=\forall x, y \in S:
+ | l = x
+ | o = \succcurlyeq_1
+ | r = y
+ | c = 
+}}
+{{eqn | ll= \leadstoandfrom
+ | l = y
+ | o = \preccurlyeq_1
+ | r = x
+ | c = {{Defof|Dual Ordering}}
+}}
+{{eqn | ll= \leadstoandfrom
+ | l = \map f y
+ | o = \preccurlyeq_2
+ | r = \map f x
+ | c = {{Defof|Order Isomorphism}}
+}}
+{{eqn | ll= \leadstoandfrom
+ | l = \map f x
+ | o = \succcurlyeq_2
+ | r = \map f y
+ | c = {{Defof|Dual Ordering}}
+}}
+{{end-eqn}}
+{{qed}}
+\end{proof}<|endoftext|>
+\section{Pushforward Measure is Measure}
+Tags: Measure Theory
+
+\begin{theorem}
+Let $\struct {X, \Sigma}$ and $\struct {X', \Sigma'}$ be [[Definition:Measurable Space|measurable spaces]].
+Let $\mu$ be a [[Definition:Measure (Measure Theory)|measure]] on $\struct {X, \Sigma}$.
+Let $f: X \to X'$ be a [[Definition:Measurable Mapping|$\Sigma \, / \, \Sigma'$-measurable mapping]].
+Then the [[Definition:Pushforward Measure|pushforward measure]] $f_* \mu: \Sigma' \to \overline \R$ is a [[Definition:Measure (Measure Theory)|measure]].
+\end{theorem}
+
+\begin{proof}
+To show that $f_* \mu$ is a [[Definition:Measure (Measure Theory)|measure]], it will suffice to check the axioms $(1)$, $(2)$ and $(3')$ for a [[Definition:Measure (Measure Theory)|measure]].
+=== Axiom $(1)$ ===
+The statement of axiom $(1)$ for $f_* \mu$ is:
+:$\forall E' \in \Sigma': \map {f_* \mu} {E'} \ge 0$
+Now observe:
+{{begin-eqn}}
+{{eqn | l = \map {f_* \mu} {E'}
+ | r = \map \mu {\map {f^{-1} } {E'} }
+ | c = {{Defof|Pushforward Measure}}
+}}
+{{eqn | o = \ge
+ | r = 0
+ | c = $\mu$ is a [[Definition:Measure (Measure Theory)|Measure]]
+}}
+{{end-eqn}}
+{{qed|lemma}}
+=== Axiom $(2)$ ===
+Let $\sequence {E'_n}_{n \mathop \in \N}$ be a [[Definition:Sequence|sequence]] of [[Definition:Pairwise Disjoint Family|pairwise disjoint sets]] in $\Sigma'$.
+The statement of axiom $(2)$ for $f_* \mu$ is:
+:$\displaystyle \map {f_* \mu} {\bigcup_{n \mathop \in \N} E'_n} = \sum_{n \mathop \in \N} \map {f_* \mu} {E'_n}$
+Now compute:
+{{begin-eqn}}
+{{eqn | l = \map {f_* \mu} {\bigcup_{n \mathop \in \N} E'_n}
+ | r = \map \mu {\map {f^{-1} } {\bigcup_{n \mathop \in \N} E'_n} }
+ | c = {{Defof|Pushforward Measure}}
+}}
+{{eqn | r = \map \mu {\bigcup_{n \mathop \in \N} \map {f^{-1} } {E'_n} }
+ | c = [[Preimage of Union under Mapping/General Result|Preimage of Union under Mapping: General Result]]
+}}
+{{eqn | r = \sum_{n \mathop \in \N} \map \mu {\map {f^{-1} } {E'_n} }
+ | c = $\mu$ is a [[Definition:Measure (Measure Theory)|Measure]]
+}}
+{{eqn | r = \sum_{n \mathop \in \N} \map {f_* \mu} {E'_n}
+ | c = {{Defof|Pushforward Measure}}
+}}
+{{end-eqn}}
+Note that the second equality uses [[Preimage of Intersection under Mapping]] and [[Preimage of Empty Set is Empty]] to confirm that $\sequence {\map {f^{-1} } {E'_n} }_{n \mathop \in \N}$ is [[Definition:Pairwise Disjoint Family|pairwise disjoint]]:
+:$\map {f^{-1} } {E'_n} \cap \map {f^{-1} } {E'_m} = \map {f^{-1} } {E'_n \cap E'_m} = \map {f^{-1} } \O = \O$
+{{qed|lemma}}
+=== Axiom $(3')$ ===
+The statement of axiom $(3')$ for $f_* \mu$ is:
+:$\map {f_* \mu} \O = 0$
+Now compute:
+{{begin-eqn}}
+{{eqn | l = \map {f_* \mu} \O
+ | r = \map \mu {\map {f^{-1} } \O}
+ | c = {{Defof|Pushforward Measure}}
+}}
+{{eqn | r = \map \mu \O
+ | c = [[Preimage of Empty Set is Empty]]
+}}
+{{eqn | r = 0
+ | c = $\mu$ is a [[Definition:Measure (Measure Theory)|measure]]
+}}
+{{end-eqn}}
+{{qed|lemma}}
+Thus $f_* \mu$, satisfying the axioms, is seen to be a [[Definition:Measure (Measure Theory)|measure]].
+{{qed}}
+\end{proof}<|endoftext|>
+\section{Lebesgue Measure Invariant under Orthogonal Group}
+Tags: Measure Theory, Orthogonal Groups
+
+\begin{theorem}
+Let $M \in \map {\mathrm O} {n, \R}$ be an [[Definition:Orthogonal Matrix|orthogonal matrix]].
+Let $\lambda^n$ be [[Definition:Lebesgue Measure|$n$-dimensional Lebesgue measure]].
+Then the [[Definition:Pushforward Measure|pushforward measure]] $M_* \lambda^n$ equals $\lambda^n$.
+\end{theorem}

+\begin{proof}
+By [[Orthogonal Group is Subgroup of General Linear Group]], $M \in \GL {n, \R}$.
+From [[Pushforward of Lebesgue Measure under General Linear Group]], it follows that:
+:$M_* \lambda^n = \size {\det M^{-1} } \lambda^n$
+Since $M^{-1} \in \map {\mathrm O} {n, \R}$ by [[Orthogonal Group is Group]], [[Determinant of Orthogonal Matrix]] applies to yield:
+:$\size {\det M^{-1} } = 1$
+Hence the result.
+{{qed}}
+{{wtd|In order to avoid circularity (through [[Pushforward of Lebesgue Measure under General Linear Group]] and [[Determinant as Volume of Parallelotope]]) the direct proof Schilling produces also needs to be covered}}
+\end{proof}<|endoftext|>
+\section{Pushforward of Lebesgue Measure under General Linear Group}
+Tags: Measure Theory
+
+\begin{theorem}
+Let $M \in \GL {n, \R}$ be an [[Definition:Invertible Matrix|invertible matrix]].
+Let $\lambda^n$ be [[Definition:Lebesgue Measure|$n$-dimensional Lebesgue measure]].
+Then the [[Definition:Pushforward Measure|pushforward measure]] $M_* \lambda^n$ satisfies:
+:$M_* \lambda^n = \size {\det M^{-1} } \cdot \lambda^n$
+\end{theorem}
+
+\begin{proof}
+From [[Linear Transformation on Euclidean Space is Continuous]], $M^{-1}$ is a [[Definition:Continuous Mapping (Topology)|continuous mapping]].
+Thus from [[Continuous Mapping is Measurable]], it is [[Definition:Measurable Mapping|measurable]], and so $M_* \lambda^n$ is defined.
+Now let $B \in \map {\mathcal B} {\R^n}$ be a [[Definition:Borel Measurable Set|Borel measurable set]], and let $\mathbf x \in \R^n$.
+Then:
+{{begin-eqn}}
+{{eqn | l = \map {M_* \lambda^n} {\mathbf x + B}
+ | r = \map {\lambda^n} {\map {M^{-1} } {\mathbf x + B} }
+ | c = {{Defof|Pushforward Measure}}
+}}
+{{eqn | r = \map {\lambda^n} {\map {M^{-1} } {\mathbf x} + \map {M^{-1} } B}
+ | c = $M^{-1}$ is [[Definition:Linear Transformation|linear]]
+}}
+{{eqn | r = \map {\lambda^n} {\map {M^{-1} } B}
+ | c = [[Lebesgue Measure is Translation-Invariant]]
+}}
+{{eqn | r = \map {M_* \lambda^n} B
+ | c = {{Defof|Pushforward Measure}}
+}}
+{{end-eqn}}
+Thus $M_* \lambda^n$ is a [[Definition:Translation-Invariant Measure|translation-invariant measure]].
+From [[Translation-Invariant Measure on Euclidean Space is Multiple of Lebesgue Measure]], it follows that:
+:$M_* \lambda^n = \map {M_* \lambda^n} {\openint 0 1^n} \cdot \lambda^n$
+Lastly, using [[Determinant as Volume of Parallelotope]] it follows that:
+:$\map {M_* \lambda^n} {\openint 0 1^n} = \map {\lambda^n} {\map {M^{-1} } {\openint 0 1^n} } = \size {\det M^{-1} }$
+Hence the result.
+{{qed}}
+\end{proof}<|endoftext|>
+\section{Pre-Image Sigma-Algebra on Domain is Generated by Mapping}
+Tags: Measure Theory
+
+\begin{theorem}
+Let $X, X'$ be [[Definition:Set|sets]], and let $f: X \to X'$ be a [[Definition:Mapping|mapping]].
+Let $\Sigma'$ be a [[Definition:Sigma-Algebra|$\sigma$-algebra]] on $X'$.
+Then: +:$\sigma \left({f}\right) = f^{-1} \left({\Sigma'}\right)$ +where +:$\sigma \left({f}\right)$ denotes the [[Definition:Sigma-Algebra Generated by Collection of Mappings|$\sigma$-algebra generated by $f$]] +:$f^{-1} \left({\Sigma'}\right)$ denotes the [[Definition:Pre-Image Sigma-Algebra on Domain|pre-image $\sigma$-algebra]] under $f$ +\end{theorem} + +\begin{proof} +By [[Characterization of Sigma-Algebra Generated by Collection of Mappings]]: +:$\sigma \left({f}\right) = \sigma \left({f^{-1} \left({\Sigma'}\right)}\right)$ +where the latter $\sigma$ denotes a [[Definition:Sigma-Algebra Generated by Collection of Subsets|$\sigma$-algebra generated by a collection of subsets]]. +By [[Pre-Image Sigma-Algebra on Domain is Sigma-Algebra]], $f^{-1} \left({\Sigma'}\right)$ is a [[Definition:Sigma-Algebra|$\sigma$-algebra]]. +Hence $\sigma \left({f^{-1} \left({\Sigma'}\right)}\right) = f^{-1} \left({\Sigma'}\right)$. +{{qed}} +\end{proof}<|endoftext|> +\section{Mapping between Euclidean Spaces Measurable iff Components Measurable} +Tags: Measure Theory + +\begin{theorem} +Let $\R^n$ and $\R^m$ be [[Definition:Euclidean Space|Euclidean spaces]]. +Denote by $\mathcal{B}^n$ and $\mathcal{B}^m$ their respective [[Definition:Borel Sigma-Algebra|Borel $\sigma$-algebras]]. +Denote with $\mathcal B$ the [[Definition:Borel Sigma-Algebra|Borel $\sigma$-algebra]] on $\R$. +Let $f: \R^n \to \R^m$ be a [[Definition:Mapping|mapping]], and write: +:$f \left({\mathbf x}\right) = \begin{bmatrix}f_1 \left({\mathbf x}\right) \\ \vdots \\ f_m \left({\mathbf x}\right)\end{bmatrix}$ +with, for $1 \le i \le m$, $f_i: \R^n \to \R$. 
+Then $f$ is [[Definition:Measurable Mapping|$\mathcal{B}^n \, / \, \mathcal{B}^m$-measurable]] [[Definition:Iff|iff]]:
+:$\forall i: f_i: \R^n \to \R$ is [[Definition:Measurable Mapping|$\mathcal{B}^n \, / \, \mathcal B$-measurable]]
+\end{theorem}
+
+\begin{proof}
+=== Necessary Condition ===
+Suppose that $f$ is [[Definition:Measurable Mapping|$\mathcal{B}^n \, / \, \mathcal{B}^m$-measurable]].
+It is to be shown that for $1 \le i \le m$, $f_i: \R^n \to \R$ is [[Definition:Measurable Mapping|$\mathcal{B}^n \, / \, \mathcal B$-measurable]].
+By [[Mapping Measurable iff Measurable on Generator]] and [[Characterization of Euclidean Borel Sigma-Algebra]], it suffices to show that:
+:$f_i^{-1} \left({\left({a \,.\,.\, b}\right)}\right) \in \mathcal{B}^n$
+for every [[Definition:Open Real Interval|open real interval]] $\left({a \,.\,.\, b}\right)$.
+Since $f \left({\mathbf x}\right) = \mathbf y$ [[Definition:Iff|iff]], for all $i$, $f_i \left({\mathbf x}\right) = y_i$, it follows that:
+:$\mathbf x \in f^{-1} \left({\mathbf y}\right) \iff \forall i: \mathbf x \in f_i^{-1} \left({y_i}\right)$
+i.e., [[Definition:Iff|iff]] $\mathbf x \in \displaystyle \bigcap_{i \mathop = 1}^m f_i^{-1} \left({y_i}\right)$.
+By [[Preimage of Union under Mapping]], it follows that for any $B \in \mathcal{B}^m$:
+:$\mathbf x \in f^{-1} \left({B}\right) \iff \exists \mathbf y \in B: \forall i: \mathbf x \in f_i^{-1} \left({y_i}\right)$
+In particular, consider the [[Definition:Open Set (Topology)|open set]] $B = \left\{{\mathbf y \in \R^m: y_i \in \left({a \,.\,.\, b}\right)}\right\}$.
+Then by construction of $B$, $\mathbf x \in f_i^{-1} \left({\left({a \,.\,.\, b}\right)}\right)$ [[Definition:Iff|iff]] $f \left({\mathbf x}\right) \in B$.
+Therefore, $f_i^{-1} \left({\left({a \,.\,.\, b}\right)}\right) = f^{-1} \left({B}\right)$.
+Since $B$ is [[Definition:Open Set (Topology)|open]], it is also [[Definition:Measurable Set|measurable]] by [[Characterization of Euclidean Borel Sigma-Algebra]].
+Thus $f_i^{-1} \left({\left({a \,.\,.\, b}\right)}\right) \in \mathcal{B}^n$ as $f$ was assumed [[Definition:Measurable Mapping|$\mathcal{B}^n \, / \, \mathcal{B}^m$-measurable]]. +{{MissingLinks|To appropriate versions of [[Definition:Preimage]]}} +{{qed|lemma}} +=== Sufficient Condition === +Suppose that all $f_i$ are [[Definition:Measurable Mapping|$\mathcal{B}^n \, / \, \mathcal B$-measurable]]. +It is to be shown that $f$ is [[Definition:Measurable Mapping|$\mathcal{B}^n \, / \, \mathcal{B}^m$-measurable]]. +By [[Mapping Measurable iff Measurable on Generator]] and [[Characterization of Euclidean Borel Sigma-Algebra]], it suffices to show that: +:$f^{-1} \left({\left[[{\mathbf a \,.\,.\, \mathbf b}\right))}\right) \in \mathcal{B}^n$ +for every [[Definition:Half-Open Rectangle|half-open $m$-rectangle]] $\left[[{\mathbf a \,.\,.\, \mathbf b}\right))$. +Now observe, for all $\mathbf x \in \R^n$: +{{begin-eqn}} +{{eqn|l = f \left({\mathbf x}\right) + |o = \in + |r = \left[ [{\mathbf a \,.\,.\, \mathbf b}\right)) +}} +{{eqn|ll= \iff + |l = \forall i: f_i \left({\mathbf x}\right) + |o = \in + |r = \left[{a_i \,.\,.\, b_i}\right) +}} +{{eqn|ll= \iff + |l = \mathbf x + |o = \in + |r = \bigcap_{i \mathop = 1}^m f_i^{-1} \left[{a_i \,.\,.\, b_i}\right) +}} +{{end-eqn}} +Thus: +:$f^{-1} \left({\left[[{\mathbf a \,.\,.\, \mathbf b}\right))}\right) = \displaystyle \bigcap_{i \mathop = 1}^m f_i^{-1} \left[{a_i \,.\,.\, b_i}\right)$ +Now the $f_i^{-1} \left[{a_i \,.\,.\, b_i}\right)$ are [[Definition:Measurable Set|measurable sets]] since the $f_i$ are [[Definition:Measurable Mapping|$\mathcal{B}^n \, / \, \mathcal B$-measurable]]. 
+Since $\mathcal{B}^n$ is a [[Definition:Sigma-Algebra|$\sigma$-algebra]], [[Sigma-Algebra Closed under Countable Intersection]] applies, and it follows that: +:$f^{-1} \left({\left[[{\mathbf a \,.\,.\, \mathbf b}\right))}\right) \in \mathcal{B}^n$ +{{qed}} +\end{proof}<|endoftext|> +\section{Pre-Image Sigma-Algebra of Generated Sigma-Algebra} +Tags: Sigma-Algebras + +\begin{theorem} +Let $f: X \to Y$ be a [[Definition:Mapping|mapping]]. +Let $\mathcal G \subseteq \mathcal P \left({Y}\right)$ be a collection of [[Definition:Subset|subsets]] of $Y$. +Then the following equality of [[Definition:Sigma-Algebra|$\sigma$-algebras]] on $X$ holds: +:$f^{-1} \left({\sigma \left({\mathcal G}\right)}\right) = \sigma \left({f^{-1} \left({\mathcal G}\right)}\right)$ +where +:$\sigma$ denotes a [[Definition:Sigma-Algebra Generated by Collection of Subsets|generated $\sigma$-algebra]] +:$f^{-1} \left({\sigma \left({\mathcal G}\right)}\right)$ denotes the [[Definition:Pre-Image Sigma-Algebra on Domain|pre-image $\sigma$-algebra]] +:$f^{-1} \left({\mathcal G}\right)$ is the [[Definition:Preimage of Subset under Mapping|preimage]] of $\mathcal G$ under $f$ +\end{theorem} + +\begin{proof} +Since $\mathcal G \subseteq \sigma \left({\mathcal G}\right)$, it follows immediately that: +:$f^{-1} \left({\mathcal G}\right) \subseteq f^{-1} \left({\sigma \left({\mathcal G}\right)}\right)$ +By [[Pre-Image Sigma-Algebra on Domain is Sigma-Algebra]], the latter is a [[Definition:Sigma-Algebra|$\sigma$-algebra]], and so by [[Generated Sigma-Algebra Preserves Subset]], it follows that: +:$\sigma \left({f^{-1} \left({\mathcal G}\right)}\right) \subseteq f^{-1} \left({\sigma \left({\mathcal G}\right)}\right)$ +Conversely, by definition of [[Definition:Sigma-Algebra Generated by Collection of Subsets|generated $\sigma$-algebra]], we have: +:$f^{-1} \left({\mathcal G}\right) \subseteq \sigma \left({f^{-1} \left({\mathcal G}\right)}\right)$ +Hence from [[Mapping Measurable iff Measurable on 
Generator]], it follows that $f$ is [[Definition:Measurable Mapping|$\sigma \left({f^{-1} \left({\mathcal G}\right)}\right) \, / \, \sigma \left({\mathcal G}\right)$-measurable]]. +But this by definition of [[Definition:Measurable Mapping|measurable mapping]] means that: +:$f^{-1} \left({\sigma \left({\mathcal G}\right)}\right) \subseteq \sigma \left({f^{-1} \left({\mathcal G}\right)}\right)$ +Hence the result, by definition of [[Definition:Set Equality/Definition 2|set equality]]. +{{qed}} +\end{proof}<|endoftext|> +\section{Stieltjes Function of Measure is Stieltjes Function} +Tags: Measure Theory + +\begin{theorem} +Let $\mu$ be a [[Definition:Measure (Measure Theory)|measure]] on $\R$ with the [[Definition:Borel Sigma-Algebra|Borel $\sigma$-algebra]] $\mathcal B \left({\R}\right)$. +Suppose that for every $n \in \N$: +:$\mu \left[{-n \,.\,.\, n}\right] < +\infty$ +Then $F_\mu: \R \to \overline \R$, the [[Definition:Stieltjes Function of Measure on Real Numbers|Stieltjes function of $\mu$]], is a [[Definition:Stieltjes Function|Stieltjes function]]. +\end{theorem} + +\begin{proof} +By definition, $F_\mu$ is a [[Definition:Stieltjes Function|Stieltjes function]] {{iff}} it is [[Definition:Increasing Real Function|increasing]] and [[Definition:Left-Continuous at Point|left-continuous]]. +=== $F_\mu$ is Increasing === +Let $x, y \in \R$ such that $x \le y$. +It is apparent that for all $z \in \R$: +:$z \le 0 \implies F_\mu \left({z}\right) \le 0$ +:$z \ge 0 \implies F_\mu \left({z}\right) \ge 0$ +Therefore, only the cases $x \le y \le 0$ and $0 \le x \le y$ remain. +In the first of these cases, note that: +:$\left[{y \,.\,.\, 0}\right) \subseteq \left[{x \,.\,. 
0}\right)$ +Hence from [[Measure is Monotone]], it follows that: +:$F_\mu \left({x}\right) = -\mu \left({\left[{x \,.\,.\, 0}\right)}\right) \le -\mu \left({\left[{y \,.\,.\, 0}\right)}\right) = F_\mu \left({y}\right)$ +The remaining case is analogous, using the observation that: +:$\left[{0 \,.\,.\, x}\right) \subseteq \left[{0 \,.\,.\, y}\right)$ +Hence $F_\mu$ is [[Definition:Increasing Real Function|increasing]]. +{{qed|lemma}} +=== $F_\mu$ is Left-Continuous === +Suppose that $x > 0$. +Suppose further that $x_k \uparrow x$ is an [[Definition:Increasing Real Sequence|increasing sequence]] with [[Definition:Limit of a Sequence|limit]] $x$, and that $0 < x_1 < x$. +Then as an [[Definition:Increasing Sequence of Sets|increasing sequence of sets]], we have: +:$\left[{0 \,.\,.\, x_k}\right) \uparrow \left[{0 \,.\,.\, x}\right)$ +Now we compute: +{{begin-eqn}} +{{eqn | l = \lim_{k \to \infty} F_\mu \left({x_k}\right) + | r = \lim_{k \to \infty} \mu \left({\left[{0 \,.\,.\, x_k}\right)}\right) + | c = {{Defof|Stieltjes Function of Measure}} +}} +{{eqn | r = \mu \left({\left[{0 \,.\,.\, x}\right)}\right) + | c = [[Characterization of Measures|Characterization of Measures: $(3)$]] +}} +{{eqn | r = F_\mu \left({x}\right) +}} +{{end-eqn}} +Next, suppose $x = 0$. +Suppose further that $x_k \uparrow 0$ is an [[Definition:Increasing Real Sequence|increasing sequence]] with [[Definition:Limit of Sequence|limit]] $0$. +Now, we have the [[Definition:Decreasing Sequence of Sets|decreasing sequence of sets]] $\left[{x_k \,.\,.\, 0}\right) \downarrow \varnothing$. +Reasoning as above, but applying [[Characterization of Measures|Characterization of Measures: $(3'')$]] instead yields the result in this case. +Finally, suppose $x < 0$ and $x_k \uparrow x$ is an [[Definition:Increasing Real Sequence|increasing sequence]] with [[Definition:Limit of Sequence|limit]] $x$. 
+Observe the [[Definition:Decreasing Sequence of Sets|decreasing sequence of sets]] $\left[{x_k \,.\,.\, 0}\right) \downarrow \left[{x \,.\,.\, 0}\right)$.
+This time, applying [[Characterization of Measures|Characterization of Measures: $(3')$]] yields the result.
+Note the additional assumption on $\mu$ was in fact used, in combination with [[Measure is Monotone]].
+This is necessary to establish the conditions for the last two applications of [[Characterization of Measures]].
+From the [[Trichotomy Law for Real Numbers]] and [[Proof by Cases]], the result now follows.
+{{qed}}
+\end{proof}<|endoftext|>
+\section{Characterization of Differentiability}
+Tags: Differential Calculus
+
+\begin{theorem}
+{{refactor|one-dimensional case deserves its own page, perhaps}}
+Let $\mathbb X$ be an [[Definition:Open Rectangle|open rectangle]] of $\R^n$.
+Let $f: \mathbb X \to \R, \mathbf x \mapsto \map f {\mathbf x}$ be a [[Definition:Real-Valued Function|real-valued function]].
+Let $\mathbf x = \begin {bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end {bmatrix} \in \R^n$.
+Let $\map {\Delta f} {\mathbf x} = \map f {\mathbf x + \Delta \mathbf x} - \map f {\mathbf x}$.
+Let $\dfrac {\partial f} {\partial x_j}$ be the [[Definition:Partial Derivative|partial derivative of $f$]] {{WRT|Differentiation}} $x_j$.
+Then $f$ is [[Definition:Differentiable Real-Valued Function|differentiable]] {{iff}} there exist $\varepsilon_1, \ldots, \varepsilon_n$ such that:
+:$\map {\Delta f} {\mathbf x} = \displaystyle \sum_{i \mathop = 1}^n \frac {\partial \map f {\mathbf x} } {\partial x_i} \Delta x_i + \sum_{i \mathop = 1}^n \varepsilon_i \Delta x_i$
+where $\forall i: 1 \le i \le n: \varepsilon_i \to 0$ as $\Delta x_i \to 0$.
+\end{theorem}
+
+\begin{proof}
+Suppose $f: \R^1 \to \R$.
+Define:
+:$\map f x = y$
+:$\Delta y = \map f {x + \Delta x} - \map f x$
+From the definition of the [[Definition:Derivative of Real Function|derivative of a real function]], we can say that $f$ is differentiable {{iff}}:
+:$\dfrac {\Delta y} {\Delta x} \to \dfrac {\d y} {\d x}$
+as $\Delta x \to 0$.
+Clearly, this is [[Definition:Logically Equivalent|equivalent]] to saying that $f$ is differentiable {{iff}}:
+:$\dfrac {\Delta y} {\Delta x} - \dfrac {\d y} {\d x} = \varepsilon$
+where $\varepsilon \in \R$ is some [[Definition:Real Number|real number]] such that $\varepsilon \to 0$ as $\Delta x \to 0$.
+Solving this equation for $\Delta y$:
+:$\Delta y = \dfrac {\d y} {\d x} \Delta x + \varepsilon \Delta x$
+That is, the [[Definition:Real Function|real function]] is [[Definition:Differentiable Real-Valued Function|differentiable]] {{iff}} $\varepsilon \to 0$ as $\Delta x \to 0$.
+{{qed|lemma}}
+Now consider $f: \R^n \to \R$, $n > 1$.
+From the definition of [[Definition:Differentiable Real-Valued Function|differentiability of a real-valued function]], $f$ is differentiable {{iff}}:
+:$\map {\Delta f} {\mathbf x} = \map {\nabla f} {\mathbf x} \cdot \Delta \mathbf x + \begin {bmatrix} \varepsilon_1 \\ \varepsilon_2 \\ \vdots \\ \varepsilon_n \end {bmatrix} \cdot \Delta \mathbf x$
+such that $\begin {bmatrix} \varepsilon_1 \\ \varepsilon_2 \\ \vdots \\ \varepsilon_n \end {bmatrix} \to \mathbf 0$ as $\Delta \mathbf x \to \mathbf 0$.
+{{wtd|needs to change to accommodate the new (more general) definition of differentiability}}
+Observe that:
+{{begin-eqn}}
+{{eqn | l = \map {\nabla f} {\mathbf x} \cdot \Delta \mathbf x + \begin {bmatrix} \varepsilon_1 \\ \varepsilon_2 \\ \vdots \\ \varepsilon_n \end {bmatrix} \cdot \Delta \mathbf x
+ | r = \begin {bmatrix} \frac {\map {\partial f} {\mathbf x} } {\partial x_1} \\ \frac {\map {\partial f} {\mathbf x} } {\partial x_2} \\ \vdots \\ \frac {\map {\partial f} {\mathbf x} } {\partial x_n} \end {bmatrix} \cdot \begin {bmatrix} \Delta x_1 \\ \Delta x_2 \\ \vdots \\ \Delta x_n \end {bmatrix} + \begin {bmatrix} \varepsilon_1 \\ \varepsilon_2 \\ \vdots \\ \varepsilon_n \end {bmatrix} \cdot \begin {bmatrix} \Delta x_1 \\ \Delta x_2 \\ \vdots \\ \Delta x_n \end {bmatrix}
+}}
+{{eqn | r = \sum_{i \mathop = 1}^n \frac {\map {\partial f} {\mathbf x} } {\partial x_i} \Delta x_i + \sum_{i \mathop = 1}^n \varepsilon_i \Delta x_i
+ | c = {{Defof|Dot Product}}
+}}
+{{end-eqn}}
+where $\begin {bmatrix} \varepsilon_1 \\ \varepsilon_2 \\ \vdots \\ \varepsilon_n \end {bmatrix} \to \begin {bmatrix} 0 \\ 0 \\ \vdots \\ 0 \end {bmatrix}$ as $\begin {bmatrix} \Delta x_1 \\ \Delta x_2 \\ \vdots \\ \Delta x_n \end {bmatrix} \to \begin {bmatrix} 0 \\ 0 \\ \vdots \\ 0 \end{bmatrix}$.
+{{qed}}
+\end{proof}<|endoftext|>
+\section{Chain Rule for Real-Valued Functions}
+Tags: Differential Calculus
+
+\begin{theorem}
+Let $f: \R^n \to \R, \mathbf x \mapsto z$ be a [[Definition:Differentiable Real-Valued Function|differentiable real-valued function]].
+Let $\mathbf x = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix} \in \R^n$.
+Further, let every [[Definition:Element|element]] $x_i: 1 \le i \le n$ represent an [[Definition:Implicit Function|implicitly defined]] [[Definition:Differentiable Real Function|differentiable real function]] of $t$.
+Then $z$ is itself [[Definition:Differentiable Real Function|differentiable]] {{WRT|Differentiation}} $t$ and:
+{{begin-eqn}}
+{{eqn | l = \frac {\d z} {\d t}
+ | r = \sum_{i \mathop = 1}^n \frac {\partial z} {\partial x_i} \frac {\d x_i} {\d t}
+}}
+{{end-eqn}}
+where $\dfrac {\partial z} {\partial x_i}$ is the [[Definition:Partial Derivative|partial derivative]] of $z$ {{WRT|Differentiation}} $x_i$.
+\end{theorem}
+
+\begin{proof}
+[[Definition:By Hypothesis|By hypothesis]], $f$ is [[Definition:Differentiable Real-Valued Function|differentiable]].
+From [[Characterization of Differentiability]]:
+{{begin-eqn}}
+{{eqn | l = \Delta z
+ | r = \sum_{i \mathop = 1}^n \frac {\partial z} {\partial x_i} \Delta x_i + \sum_{i \mathop = 1}^n \epsilon_i \Delta x_i
+ | c = $\forall i: 1 \le i \le n: \epsilon_i \to 0$ as $\Delta x_i \to 0$
+}}
+{{end-eqn}}
+Let $\Delta t \ne 0$ and divide both sides of the equation by $\Delta t$:
+{{begin-eqn}}
+{{eqn | l = \frac {\Delta z} {\Delta t}
+ | r = \sum_{i \mathop = 1}^n \frac {\partial z} {\partial x_i} \frac {\Delta x_i} {\Delta t} + \sum_{i \mathop = 1}^n \epsilon_i \frac {\Delta x_i} {\Delta t}
+ | c = $\forall i: 1 \le i \le n: \epsilon_i \to 0$ as $\Delta x_i \to 0$
+}}
+{{end-eqn}}
+Recall that each $x_i$ was defined to be [[Definition:Differentiable Real Function|differentiable]] {{WRT|Differentiation}} $t$, that is, that each $\dfrac {\d x_i} {\d t}$ exists.
+Then $\Delta x_i \to 0$ as $\Delta t \to 0$.
+Therefore:
+{{begin-eqn}}
+{{eqn | l = \frac {\d z} {\d t}
+ | r = \sum_{i \mathop = 1}^n \frac {\partial z} {\partial x_i} \frac {\d x_i} {\d t} + \sum_{i \mathop = 1}^n 0 \frac {\d x_i} {\d t}
+}}
+{{eqn | r = \sum_{i \mathop = 1}^n \frac {\partial z} {\partial x_i} \frac {\d x_i} {\d t}
+ | c = as $\Delta t \to 0$
+}}
+{{end-eqn}}
+{{qed}}
+{{proofread}}
+\end{proof}<|endoftext|>
+\section{Totally Ordered Set is Well-Ordered iff Subsets Contain Infima}
+Tags: Total Orderings, Well-Orderings
+
+\begin{theorem}
+Let $\left({S, \preccurlyeq}\right)$ be a [[Definition:Totally Ordered Set|totally ordered set]].
+Then $\left({S, \preccurlyeq}\right)$ is a [[Definition:Well-Ordered Set|well-ordered set]] [[Definition:Iff|iff]] every [[Definition:Non-Empty Set|non-empty]] [[Definition:Subset|subset]] $T \subseteq S$ has an [[Definition:Infimum of Set|infimum]] such that $\inf \left({T}\right) \in T$.
+\end{theorem}
+
+\begin{proof}
+=== Sufficient Condition ===
+Let $T \subseteq S$ be [[Definition:Non-Empty Set|non-empty]], so that by hypothesis $m := \inf \left({T}\right) \in T$.
+By definition, $m$ is a [[Definition:Lower Bound of Set|lower bound]] of $T$ and so:
+:$\forall x \in T: m \preccurlyeq x$
+That is:
+:$\neg \exists x \in T: x \prec m$
+Thus by definition $m$ is a [[Definition:Minimal Element|minimal element]] of $T$.
+Thus by definition $S$ is [[Definition:Well-Founded|well-founded]] and so is a [[Definition:Well-Ordered Set|well-ordered set]].
+{{qed|lemma}}
+=== Necessary Condition ===
+Now suppose $\left({S, \preccurlyeq}\right)$ is a [[Definition:Well-Ordered Set|well-ordered set]].
+Then from [[Well-Ordering is Total Ordering]] it follows immediately that $\left({S, \preccurlyeq}\right)$ is a [[Definition:Totally Ordered Set|totally ordered set]].
+Let $T \subseteq S$ be [[Definition:Non-Empty Set|non-empty]].
+By definition of [[Definition:Well-Ordered Set|well-ordered set]], $T$ has a [[Definition:Smallest Element|smallest element]] $m \in T$.
+From [[Smallest Element is Infimum]], it follows that $m = \inf \left({T}\right) \in T$.
+{{qed}}
+\end{proof}<|endoftext|>
+\section{Pre-Measure of Finite Stieltjes Function is Pre-Measure}
+Tags: Measure Theory
+
+\begin{theorem}
+Let $\mathcal J_{ho}$ denote the collection of [[Definition:Half-Open Real Interval|half-open intervals]] in $\R$.
+Let $f: \R \to \R$ be a [[Definition:Finite Stieltjes Function|finite Stieltjes function]]. +Then the [[Definition:Pre-Measure of Finite Stieltjes Function|pre-measure of $f$]], $\mu_f: \mathcal{J}_{ho} \to \overline \R_{\ge 0}$ is a [[Definition:Pre-Measure|pre-measure]]. +Here, $\overline \R_{\ge 0}$ denotes the set of [[Definition:Positive Real Number|positive]] [[Definition:Extended Real Number Line|extended real numbers]]. +\end{theorem} + +\begin{proof} +It is immediate from the [[Definition:Pre-Measure of Finite Stieltjes Function|definition of $\mu_f$]] that $\mu_f \left({\varnothing}\right) = 0$. +Now suppose that for some [[Definition:Half-Open Real Interval|half-open interval]] $\left[{a \,.\,.\, b}\right)$ one has: +:$\left[{a \,.\,.\, b}\right) = \displaystyle \bigcup_{n \mathop \in \N} \left[{b_n \,.\,.\, b_{n+1}}\right)$ +where $b_0 = a$ and $\displaystyle \lim_{n \mathop \to \infty} b_n = b$. +Then we compute: +{{begin-eqn}} +{{eqn | l = \sum_{n \mathop \in \N} \mu_f \left({\left[{b_n \,.\,.\, b_{n + 1} }\right)}\right) + | r = \sum_{n \mathop \in \N} f \left({b_{n + 1} }\right) - f \left({b_n}\right) + | c = [[Definition:Pre-Measure of Finite Stieltjes Function|Definition of $\mu_f$]] +}} +{{eqn | r = \lim_{n \to \infty} f \left({b_{n + 1} }\right) - f \left({b_0}\right) + | c = [[Telescoping Series/Example 2|Telescoping Series]] +}} +{{eqn | r = f \left({b}\right) - f \left({a}\right) + | c = Definition of $\left({b_n}\right)_n$ +}} +{{eqn | r = \mu_f \left({\left[{a \,.\,.\, b}\right)}\right) + | c = [[Definition:Pre-Measure of Finite Stieltjes Function|Definition of $\mu_f$]] +}} +{{end-eqn}} +which verifies the second condition for a [[Definition:Pre-Measure|pre-measure]]. +Hence $\mu_f$ is indeed a [[Definition:Pre-Measure|pre-measure]]. 
+{{qed}}
+[[Category:Measure Theory]]
+\end{proof}<|endoftext|>
+\section{Pre-Measure of Finite Stieltjes Function Extends to Unique Measure}
+Tags: Measure Theory
+
+\begin{theorem}
+Let $\mathcal{J}_{ho}$ denote the collection of [[Definition:Half-Open Real Interval|half-open intervals]] in $\R$.
+Let $f: \R \to \R$ be a [[Definition:Finite Stieltjes Function|finite Stieltjes function]].
+Then the [[Definition:Pre-Measure of Finite Stieltjes Function|pre-measure of $f$]], $\mu_f$, [[Definition:Extension (Measure Theory)|extends]] uniquely to a [[Definition:Measure (Measure Theory)|measure]] $\mu$ on $\mathcal B \left({\R}\right)$, the [[Definition:Borel Sigma-Algebra|Borel $\sigma$-algebra]] on $\R$.
+This unique [[Definition:Measure (Measure Theory)|measure]] $\mu$ is the [[Definition:Measure of Finite Stieltjes Function|measure of $f$]].
+\end{theorem}
+
+\begin{proof}
+We intend to use [[Carathéodory's Theorem (Measure Theory)]].
+To this end, observe that by [[Characterization of Euclidean Borel Sigma-Algebra]], we have:
+:$\mathcal B \left({\R}\right) = \sigma \left({\mathcal{J}_{ho}}\right)$
+From [[Pre-Measure of Finite Stieltjes Function is Pre-Measure]], $\mu_f$ is a [[Definition:Pre-Measure|pre-measure]].
+Note that for all $n \in \N$, we have:
+:$\mu_f \left({\left[{-n \,.\,.\, n}\right)}\right) = f \left({n}\right) - f \left({-n}\right) < +\infty$
+and $\left[{-n \,.\,.\, n}\right) \uparrow \R$ as an [[Definition:Increasing Sequence of Sets|increasing sequence of sets]].
+All these facts combine with [[Carathéodory's Theorem (Measure Theory)]] to establish existence and uniqueness of $\mu$.
+{{qed}}
+\end{proof}<|endoftext|>
+\section{Stieltjes Function of Measure of Finite Stieltjes Function}
+Tags: Measure Theory
+
+\begin{theorem}
+Let $f: \R \to \R$ be a [[Definition:Finite Stieltjes Function|finite Stieltjes function]].
+Let $\mu_f$ be the [[Definition:Measure of Finite Stieltjes Function|measure of $f$]].
+Let $f_{\mu_f}$ be the [[Definition:Stieltjes Function of Measure|Stieltjes function of $\mu_f$]].
+Then $f_{\mu_f} = f$.
+\end{theorem}
+
+\begin{proof}
+{{ProofWanted}}
+[[Category:Measure Theory]]
+\end{proof}<|endoftext|>
+\section{Measure of Stieltjes Function of Measure}
+Tags: Measure Theory
+
+\begin{theorem}
+Let $\mu$ be a [[Definition:Measure (Measure Theory)|measure]] on $\mathcal B \left({\R}\right)$, the [[Definition:Borel Sigma-Algebra|Borel $\sigma$-algebra]] on $\R$.
+Suppose that for all $n \in \N$, $\mu$ satisfies:
+:$\mu \left({\left[{-n \,.\,.\, n}\right)}\right) < +\infty$
+Let $f_\mu$ be the [[Definition:Stieltjes Function of Measure|Stieltjes function of $\mu$]].
+Let $\mu_{f_\mu}$ be the [[Definition:Measure of Finite Stieltjes Function|measure of $f_\mu$]].
+Then $\mu_{f_\mu} = \mu$.
+\end{theorem}
+
+\begin{proof}
+From [[Pre-Measure of Finite Stieltjes Function Extends to Unique Measure]], it suffices to verify that:
+:$\mu_{f_\mu} \left({\left[{a \,.\,.\, b}\right)}\right) = \mu \left({\left[{a \,.\,.\, b}\right)}\right)$
+for all [[Definition:Half-Open Real Interval|half-open intervals]] $\left[{a \,.\,.\, b}\right)$.
+Now we have:
+{{begin-eqn}}
+{{eqn|l = \mu_{f_\mu} \left({\left[{a \,.\,.\, b}\right)}\right)
+ |r = f_\mu \left({b}\right) - f_\mu \left({a}\right)
+ |c = [[Definition:Measure of Finite Stieltjes Function|Definition of $\mu_{f_\mu}$]]
+}}
+{{end-eqn}}
+If either $a = 0$ or $b = 0$, the result follows immediately from the [[Definition:Stieltjes Function of Measure|definition of $f_\mu$]].
+Now suppose that $a < b < 0$.
+Then: +{{begin-eqn}} +{{eqn|l = f_\mu \left({b}\right) - f_\mu \left({a}\right) + |r = \mu \left({\left[{a \,.\,.\, 0}\right)}\right) - \mu \left({\left[{b \,.\,.\, 0}\right)}\right) + |c = [[Definition:Stieltjes Function of Measure|Definition of $f_\mu$]] +}} +{{eqn|r = \mu \left({\left[{a \,.\,.\, b}\right)}\right) + |c = [[Measure of Set Difference with Subset]] +}} +{{end-eqn}} +Finally, let $0 < a < b$. +Then: +{{begin-eqn}} +{{eqn|l = f_\mu \left({b}\right) - f_\mu \left({a}\right) + |r = \mu \left({\left[{0 \,.\,.\, b}\right)}\right) - \mu \left({\left[{0 \,.\,.\, a}\right)}\right) + |c = [[Definition:Stieltjes Function of Measure|Definition of $f_\mu$]] +}} +{{eqn|r = \mu \left({\left[{a \,.\,.\, b}\right)}\right) + |c = [[Measure of Set Difference with Subset]] +}} +{{end-eqn}} +The final case $a < 0 < b$ is a trivial consequence of [[Measure is Finitely Additive Function]]. +Hence it must be that $\mu_{f_\mu} = \mu$. +{{qed}} +\end{proof}<|endoftext|> +\section{Cantor Set has Zero Lebesgue Measure} +Tags: Cantor Set, Measure Theory + +\begin{theorem} +Let $\mathcal C$ be the [[Definition:Cantor Set|Cantor set]]. +Let $\lambda$ be the [[Definition:Lebesgue Measure|Lebesgue measure]] on the [[Definition:Borel Sigma-Algebra|Borel $\sigma$-algebra]] $\mathcal B \left({\R}\right)$ on $\R$. +Then $\mathcal C$ is [[Definition:Measurable Set|$\mathcal B \left({\R}\right)$-measurable]], and $\lambda \left({\mathcal C}\right) = 0$. +That is, $\mathcal C$ is a [[Definition:Null Set|$\lambda$-null set]]. +\end{theorem} + +\begin{proof} +Consider the definition of $\mathcal C$ [[Definition:Cantor Set/Limit of Decreasing Sequence|as a limit of a decreasing sequence]]. +In the notation as introduced there, we see that each $S_n$ is a collection of [[Definition:Disjoint Sets|disjoint]] [[Definition:Closed Real Interval|closed intervals]]. +From [[Closed Set Measurable in Borel Sigma-Algebra]], these are [[Definition:Measurable Set|measurable sets]]. 
+Furthermore, each $S_n$ is [[Definition:Finite Set|finite]].
+Hence by [[Sigma-Algebra Closed under Union]], it follows that $C_n := \displaystyle \bigcup S_n$ is [[Definition:Measurable Set|measurable]] as well.
+Then, as we have:
+:$\mathcal C = \displaystyle \bigcap_{n \mathop \in \N} C_n$
+it follows from [[Sigma-Algebra Closed under Countable Intersection]] that $\mathcal C$ is [[Definition:Measurable Set|measurable]].
+The $C_n$ also form a [[Definition:Decreasing Sequence of Sets|decreasing sequence of sets]] with [[Definition:Limit of Decreasing Sequence of Sets|limit]] $\mathcal C$.
+Thus, from [[Characterization of Measures|Characterization of Measures: $(3')$]], it follows that:
+:$\lambda \left({\mathcal C}\right) = \displaystyle \lim_{n \to \infty} \lambda \left({C_n}\right)$
+Now, for all $n \in \N$, the collection $S_n$ consists of $2^n$ [[Definition:Pairwise Disjoint|pairwise disjoint]] [[Definition:Closed Real Interval|closed intervals]], each of [[Definition:Length of Real Interval|length]] $3^{-n}$.
+As the [[Definition:Lebesgue Measure|Lebesgue measure]] of a [[Definition:Real Interval|real interval]] is its length, additivity of $\lambda$ yields:
+:$\lambda \left({C_n}\right) = 2^n \cdot 3^{-n} = \left({\dfrac 2 3}\right)^n$
+Now we have by [[Sequence of Powers of Number less than One]] that:
+:$\displaystyle \lim_{n \mathop \to \infty} \left({\frac 2 3}\right)^n = 0$
+and the result follows.
+{{qed}}
+\end{proof}<|endoftext|>
+\section{Factorization Lemma/Real-Valued Function}
+Tags: Measure Theory
+
+\begin{theorem}
+Let $X$ be a [[Definition:Set|set]], and let $\struct {Y, \Sigma}$ be a [[Definition:Measurable Space|measurable space]].
+Let $f: X \to Y$ be a [[Definition:Mapping|mapping]].
+Then a [[Definition:Mapping|mapping]] $g: X \to \R$ is [[Definition:Measurable Mapping|$\map \sigma f \, / \, \map {\mathcal B} \R$-measurable]] {{iff}}:
+:There exists a [[Definition:Measurable Mapping|$\Sigma \, / \, \map {\mathcal B} \R$-measurable mapping]] $\tilde g: Y \to \R$ such that $g = \tilde g \circ f$
+where:
+:$\map \sigma f$ denotes the [[Definition:Sigma-Algebra Generated by Collection of Mappings|$\sigma$-algebra generated]] by $f$
+:$\map {\mathcal B} \R$ denotes the [[Definition:Borel Sigma-Algebra|Borel $\sigma$-algebra]] on $\R$
+\end{theorem}

+\begin{proof}
+=== Necessary Condition ===
+Let $g$ be a [[Definition:Measurable Mapping|$\map \sigma f \, / \, \map {\mathcal B} \R$-measurable function]].
+We need to construct a [[Definition:Measurable Mapping|measurable]] $\tilde g$ such that $g = \tilde g \circ f$.
+Let us proceed in the following fashion:
+:Establish the result for $g$ a [[Definition:Characteristic Function of Set|characteristic function]];
+:Establish the result for $g$ a [[Definition:Simple Function|simple function]];
+:Establish the result for all $g$
+So let $g = \chi_E$ be a [[Definition:Characteristic Function of Set|characteristic function]].
+By [[Characteristic Function Measurable iff Set Measurable]], it follows that $E$ is [[Definition:Measurable Set|$\sigma \left({f}\right)$-measurable]].
+Thus, by definition of $\map \sigma f$, there exists some $A \in \Sigma$ such that $E = \map {f^{-1} } A$.
+Again by [[Characteristic Function Measurable iff Set Measurable]], we have $\chi_A: Y \to \R$ is [[Definition:Measurable Mapping|measurable]].
+It follows that $\chi_E = \chi_A \circ f$, and $\tilde g := \chi_A$ works.
+Now let $g = \displaystyle \sum_{i \mathop = 1}^n a_i \chi_{E_i}$ be a [[Definition:Simple Function|simple function]].
+Let $A_i$ be associated to $E_i$ as above. Then we have:
+{{begin-eqn}}
+{{eqn | l = \sum_{i \mathop = 1}^n a_i \chi_{E_i}
+ | r = \sum_{i \mathop = 1}^n a_i \paren {\chi_{A_i} \circ f}
+ | c = by the result for [[Definition:Characteristic Function of Set|characteristic functions]]
+}}
+{{eqn | r = \paren {\sum_{i \mathop = 1}^n a_i \chi_{A_i} } \circ f
+ | c = [[Composition of Mappings is Linear]]
+}}
+{{end-eqn}}
+Now $\displaystyle \sum_{i \mathop = 1}^n a_i \chi_{A_i}$ is a [[Definition:Simple Function|simple function]], hence [[Definition:Measurable Mapping|measurable]] by [[Simple Function is Measurable]].
+Therefore, it is a suitable choice for $\tilde g$.
+Next, let $g \ge 0$ be a [[Definition:Measurable Function|measurable function]].
+By [[Measurable Function Pointwise Limit of Simple Functions]], we find [[Definition:Simple Function|simple functions]] $g_j$ such that:
+:$\displaystyle \lim_{j \mathop \to \infty} g_j = g$
+Applying the previous step to each $g_j$, we find a [[Definition:Sequence|sequence]] of $\tilde g_j$ satisfying:
+:$\displaystyle \lim_{j \mathop \to \infty} \tilde g_j \circ f = g$
+From [[Composition with Pointwise Limit]] it follows that we have, putting $\tilde g := \displaystyle \lim_{j \mathop \to \infty} \tilde g_j$:
+:$\displaystyle \lim_{j \mathop \to \infty} \tilde g_j \circ f = \tilde g \circ f$
+{{explain|Address why (if?!) $\tilde g$ is well defined}}
+An application of [[Pointwise Limit of Measurable Functions is Measurable]] yields $\tilde g$ [[Definition:Measurable Function|measurable]].
+Finally, let $g$ be an arbitrary [[Definition:Measurable Function|measurable function]].
+From [[Difference of Positive and Negative Parts]], $g = g^+ - g^-$, where $g^+, g^- \ge 0$ are [[Definition:Measurable Function|measurable]].
+Applying the previous step to $g^+$ and $g^-$ yields suitable $\widetilde {g^+}$ and $\widetilde {g^-}$, and $\tilde g := \widetilde {g^+} - \widetilde {g^-}$ satisfies:
+:$\tilde g \circ f = \widetilde {g^+} \circ f - \widetilde {g^-} \circ f = g^+ - g^- = g$
+Thus we have provided a suitable $\tilde g$ for every $g$, such that:
+:$g = \tilde g \circ f$
+as desired.
+{{qed}}
+=== Sufficient Condition ===
+Suppose that such a $\tilde g$ exists.
+Note that $f$ is [[Definition:Measurable Mapping|$\map \sigma f \, / \, \Sigma$-measurable]] by [[Definition:Sigma-Algebra Generated by Collection of Mappings|definition of $\map \sigma f$]].
+The result follows immediately from [[Composition of Measurable Mappings is Measurable]].
+{{qed}}
+\end{proof}<|endoftext|>
+\section{Characterization of Measurable Functions}
+Tags: Measure Theory
+
+\begin{theorem}
+Let $\left({X, \Sigma}\right)$ be a [[Definition:Measurable Space|measurable space]].
+Let $f: X \to \overline{\R}$ be an [[Definition:Extended Real-Valued Function|extended real-valued function]].
+Then the following are all equivalent: +{{begin-eqn}} +{{eqn | n=1 + | o= + | r=f\) is [[Definition:Measurable Function|measurable]] \( +}} +{{eqn | n=2 + | o= + | r=\forall \alpha \in \R: \left\{ {x \in X: f \left({x}\right) \le \alpha}\right\} \in \Sigma +}} +{{eqn | n=2' + | o= + | r=\forall \alpha \in \Q: \left\{ {x \in X: f \left({x}\right) \le \alpha}\right\} \in \Sigma +}} +{{eqn | n=3 + | o= + | r=\forall \alpha \in \R: \left\{ {x \in X: f \left({x}\right) < \alpha}\right\} \in \Sigma +}} +{{eqn | n=3' + | o= + | r=\forall \alpha \in \Q: \left\{ {x \in X: f \left({x}\right) < \alpha}\right\} \in \Sigma +}} +{{eqn | n=4 + | o= + | r=\forall \alpha \in \R: \left\{ {x \in X: f \left({x}\right) \ge \alpha}\right\} \in \Sigma +}} +{{eqn | n=4' + | o= + | r=\forall \alpha \in \Q: \left\{ {x \in X: f \left({x}\right) \ge \alpha}\right\} \in \Sigma +}} +{{eqn | n=5 + | o= + | r=\forall \alpha \in \R: \left\{ {x \in X: f \left({x}\right) > \alpha}\right\} \in \Sigma +}} +{{eqn | n=5' + | o= + | r=\forall \alpha \in \Q: \left\{ {x \in X: f \left({x}\right) > \alpha}\right\} \in \Sigma +}} +{{end-eqn}} +\end{theorem} + +\begin{proof} +Each of $(2)$ up to $(5')$ is equivalent to $(1)$ by combining [[Mapping Measurable iff Measurable on Generator]] and [[Generators for Extended Real Sigma-Algebra]]. +{{qed}} +\end{proof}<|endoftext|> +\section{Characterization of Extended Real Sigma-Algebra} +Tags: Extended Real Numbers, Sigma-Algebras + +\begin{theorem} +Let $\mathcal B \left({\R}\right)$ be the [[Definition:Borel Sigma-Algebra|Borel $\sigma$-algebra]] on $\R$. +Let $\overline{\mathcal B}$ be the [[Definition:Extended Real Sigma-Algebra|extended real $\sigma$-algebra]]. +Define $\mathcal S := \mathcal P \left({\left\{{+\infty, -\infty}\right\}}\right)$, where $\mathcal P$ denotes [[Definition:Power Set|power set]]. 
+Then: +:$\overline{\mathcal B} = \left\{{B \cup S: B \in \mathcal B \left({\R}\right), S \in \mathcal S}\right\}$ +\end{theorem} + +\begin{proof} +Let $\overline B \in \overline{\mathcal B}$. +Then by [[Extended Real Sigma-Algebra Induces Borel Sigma-Algebra on Reals]], we have: +:$\overline B \cap \R \in \mathcal B \left({\R}\right)$ +We also have, by definition of the [[Definition:Extended Real Number Line|extended real numbers]] $\overline \R$, that: +:$\overline \R \setminus \R = \left\{{+\infty, -\infty}\right\}$ +and therefore, $\overline B \setminus \R \subseteq \left\{{+\infty, -\infty}\right\}$. +Here, $\setminus$ signifies [[Definition:Set Difference|set difference]]. +By [[Set Difference Union Intersection]]: +:$\overline B = \left({\overline B \setminus \R}\right) \cup \left({\overline B \cap \R}\right)$ +Therefore, any $\overline B \in \overline{\mathcal B}$ is of the purported form $B \cup S$ with $B \in \mathcal B \left({\R}\right)$ and $S \in \mathcal S$. +It remains to show that any such set is in fact an element of $\overline{\mathcal B}$. +Since any $B \in \mathcal B \left({\R}\right)$ is naturally also in $\overline{\mathcal B}$, it suffices to show that: +:$\mathcal S \subseteq \overline{\mathcal B}$ +by applying [[Sigma-Algebra Closed under Union]]. +From [[Closed Set Measurable in Borel Sigma-Algebra]], it will now suffice to show that: +:$\varnothing, \left\{{+\infty}\right\}, \left\{{-\infty}\right\}, \left\{{+\infty, -\infty}\right\}$ +are all [[Definition:Closed Set (Topology)|closed sets]] in $\overline \R$. +That they are follows from [[Extended Real Number Space is Hausdorff]] and [[Finite Subspace of Hausdorff Space is Closed|Finite Subspace of Hausdorff Space is Closed]]. 
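As an informal aside (not part of the formal proof), the decomposition $\overline B = \left({\overline B \setminus \R}\right) \cup \left({\overline B \cap \R}\right)$ can be checked mechanically on a finite model; the sets `R`, `INF` and `EXT` below are made-up stand-ins for $\R$, $\left\{{+\infty, -\infty}\right\}$ and $\overline \R$:

```python
from itertools import combinations

# Made-up finite stand-ins for R, the two infinities, and the extended line
R = {0, 1, 2}
INF = {"+inf", "-inf"}
EXT = R | INF

def powerset(s):
    """All subsets of s, as a list of sets."""
    s = list(s)
    return [set(c) for r in range(len(s) + 1) for c in combinations(s, r)]

# Every subset of EXT splits as B ∪ S with B ⊆ R and S ⊆ {+inf, -inf}
for B_bar in powerset(EXT):
    S = B_bar - R   # B_bar \ R
    B = B_bar & R   # B_bar ∩ R
    assert S <= INF
    assert B_bar == B | S  # Set Difference Union Intersection

# S ranges over the power set of {+inf, -inf}: four possibilities
assert len(powerset(INF)) == 4
```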
+{{qed}} +\end{proof}<|endoftext|> +\section{Extended Real Sigma-Algebra Induces Borel Sigma-Algebra on Reals} +Tags: Extended Real Numbers, Sigma-Algebras + +\begin{theorem} +Let $\overline \BB$ be the [[Definition:Extended Real Sigma-Algebra|extended real $\sigma$-algebra]]. +Let $\map \BB \R$ be the [[Definition:Borel Sigma-Algebra|Borel $\sigma$-algebra]] on $\R$. +Then: +:$\overline \BB_\R = \map \BB \R$ +where $\overline \BB_\R$ denotes a [[Definition:Trace Sigma-Algebra|trace $\sigma$-algebra]]. +\end{theorem} + +\begin{proof} +We have [[Euclidean Space is Subspace of Extended Real Number Space]]. +The result follows from [[Borel Sigma-Algebra of Subset is Trace Sigma-Algebra]]. +{{qed}} +\end{proof}<|endoftext|> +\section{Generators for Extended Real Sigma-Algebra} +Tags: Extended Real Numbers, Sigma-Algebras + +\begin{theorem} +Let $\overline{\mathcal B}$ be the [[Definition:Extended Real Sigma-Algebra|extended real $\sigma$-algebra]]. +Then $\overline{\mathcal B}$ is [[Definition:Sigma-Algebra Generated by Collection of Subsets|generated]] by each of the following collections of [[Definition:Extended Real Interval|extended real intervals]]: +{{begin-eqn}} +{{eqn | n = 1 + | o = + | r = \left\{ {\ \left[{a \,.\,.\, +\infty}\right]: a \in \R}\right\} +}} +{{eqn | n = 1' + | o = + | r = \left\{ {\ \left[{a \,.\,.\, +\infty}\right]: a \in \Q}\right\} +}} +{{eqn | n = 2 + | o = + | r = \left\{ {\ \left({b \,.\,.\, +\infty}\right]: b \in \R}\right\} +}} +{{eqn | n = 2' + | o = + | r = \left\{ {\ \left({b \,.\,.\, +\infty}\right]: b \in \Q}\right\} +}} +{{eqn | n = 3 + | o = + | r = \left\{ {\ \left[{-\infty \,.\,.\, c}\right): c \in \R}\right\} +}} +{{eqn | n = 3' + | o = + | r = \left\{ {\ \left[{-\infty \,.\,.\, c}\right): c \in \Q}\right\} +}} +{{eqn | n = 4 + | o = + | r = \left\{ {\ \left[{-\infty \,.\,.\, d}\right]: d \in \R}\right\} +}} +{{eqn | n = 4' + | o = + | r = \left\{ {\ \left[{-\infty \,.\,.\, d}\right]: d \in \Q}\right\} +}} +{{end-eqn}} 
+\end{theorem} + +\begin{proof} +Let us first establish that $(1)$ up to $(4')$ all [[Definition:Sigma-Algebra Generated by Collection of Subsets|generate]] the same [[Definition:Sigma-Algebra|$\sigma$-algebra]]. +Denote $\mathcal G_i$ for the collection at point $(i)$, and $\mathcal G'_i$ for that at $(i')$, where $i = 1, 2, 3, 4$. +Furthermore, write $\Sigma_i$ for $\sigma \left({\mathcal G_i}\right)$ and $\Sigma'_i$ for $\sigma \left({\mathcal G'_i}\right)$. +Here $\sigma$ denotes [[Definition:Sigma-Algebra Generated by Collection of Subsets|generated $\sigma$-algebra]]. +By [[Generated Sigma-Algebra Preserves Subset]], we have the following [[Definition:Subset|inclusions]]: +:$\Sigma'_i \subseteq \Sigma_i$ +for $i = 1, 2, 3, 4$. +Since we have the following observations about [[Definition:Set Complement|complements]] in $\overline \R$ (for arbitrary $a \in \R$): +:$\complement_{\overline \R} \left({\left[{a \,.\,.\, +\infty}\right]}\right) = \left[{-\infty \,.\,.\, a}\right)$ +:$\complement_{\overline \R} \left({\left[{-\infty \,.\,.\, a}\right]}\right) = \left({a \,.\,.\, +\infty}\right]$ +we deduce that: +:$\mathcal G_3 \subseteq \Sigma_1, \mathcal G'_3 \subseteq \Sigma'_1$ +:$\mathcal G_2 \subseteq \Sigma_4, \mathcal G'_2 \subseteq \Sigma'_4$ +and by definition of [[Definition:Sigma-Algebra Generated by Collection of Subsets|generated $\sigma$-algebra]]: +:$\Sigma_3 \subseteq \Sigma_1, \Sigma'_3 \subseteq \Sigma'_1$ +:$\Sigma_2 \subseteq \Sigma_4, \Sigma'_2 \subseteq \Sigma'_4$ +By [[Complement of Complement]], the converse inclusions: +:$\Sigma_1 \subseteq \Sigma_3, \Sigma'_1 \subseteq \Sigma'_3$ +:$\Sigma_4 \subseteq \Sigma_2, \Sigma'_4 \subseteq \Sigma'_2$ +are derived. 
+Subsequently, remark that, for all $a \in \R$:
+:$\left[{a \,.\,.\, +\infty}\right] = \displaystyle \bigcap_{n \mathop \in \N} \left({a - \frac 1 n \,.\,.\, +\infty}\right]$
+{{MissingLinks|I know, but please find the justification yourself}}
+and by [[Sigma-Algebra Closed under Countable Intersection]], it follows that:
+:$\mathcal G_1 \subseteq \Sigma_2, \mathcal G'_1 \subseteq \Sigma'_2$
+whence by definition of [[Definition:Sigma-Algebra Generated by Collection of Subsets|generated $\sigma$-algebra]]:
+:$\Sigma_1 \subseteq \Sigma_2, \Sigma'_1 \subseteq \Sigma'_2$
+For the converse inclusion, remark that:
+:$\left({a \,.\,.\, +\infty}\right] = \displaystyle \bigcup_{n \mathop \in \N} \left[{a + \frac 1 n \,.\,.\, +\infty}\right]$
+{{MissingLinks|again, justification}}
+and thus immediately establish:
+:$\Sigma_2 \subseteq \Sigma_1, \Sigma'_2 \subseteq \Sigma'_1$
+To summarize, the above arguments establish:
+:$\Sigma'_1 = \Sigma'_2 = \Sigma'_3 = \Sigma'_4 \subseteq \Sigma_4 = \Sigma_3 = \Sigma_2 = \Sigma_1$
+Finally, for all $a \in \R$, we have:
+:$\left({a \,.\,.\, +\infty}\right] = \displaystyle \bigcup_{\substack{q \mathop \in \Q \\ q \mathop > a}} \left({q \,.\,.\, +\infty}\right]$
+{{explain|via [[Between two Real Numbers exists Rational Number]]}}
+whence $\Sigma_2 \subseteq \Sigma'_2$, and all eight [[Definition:Sigma-Algebra|$\sigma$-algebras]] are equal; denote them by $\Sigma$ from now on.
+It remains to establish that in fact they equal $\overline{\mathcal B}$.
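Before doing so, note that the two interval identities used above admit a quick numerical illustration (outside the formal argument). The following sketch checks them on a made-up grid of rationals; it relies on the grid spacing exceeding $1/N$, and all names are illustrative:

```python
from fractions import Fraction as Fr

# Made-up finite grid of rationals standing in for R (spacing 1/4)
grid = [Fr(k, 4) for k in range(-20, 21)]
a = Fr(1, 2)
N = 100  # 1/N is below the grid spacing, so the limits stabilize

ge_a = {x for x in grid if x >= a}  # [a .. +inf] restricted to the grid
gt_a = {x for x in grid if x > a}   # (a .. +inf] restricted to the grid

# [a..+inf] = intersection over n of (a - 1/n .. +inf]
inter = set(grid)
for n in range(1, N + 1):
    inter &= {x for x in grid if x > a - Fr(1, n)}
assert inter == ge_a

# (a..+inf] = union over n of [a + 1/n .. +inf]
union = set()
for n in range(1, N + 1):
    union |= {x for x in grid if x >= a + Fr(1, n)}
assert union == gt_a
```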
+Since all elements of $\mathcal G_2$ are [[Definition:Open Set (Topology)|open sets]] in the [[Definition:Extended Real Number Space|extended real number space]], it follows that:
+:$\Sigma_2 \subseteq \overline{\mathcal B}$
+By [[Intervals with Extended Rational Endpoints form Countable Basis for Extended Real Number Space]], every [[Definition:Open Set (Topology)|open set]] in the [[Definition:Extended Real Number Space|extended real number space]] is of the form:
+:$\displaystyle \bigcup \mathcal Q$
+where $\mathcal Q$ is a collection of [[Definition:Extended Real Interval|extended real intervals]] with [[Definition:Extended Rational Numbers|extended rational]] [[Definition:Endpoint of Real Interval|endpoints]].
+By [[Set of Intervals with Extended Rational Endpoints is Countable]], $\mathcal Q$ is a subset of a [[Definition:Countable Set|countable set]].
+By [[Subset of Countably Infinite Set is Countable]], $\mathcal Q$ is [[Definition:Countable Set|countable]] as well.
+It follows by [[Definition:Sigma-Algebra|$\sigma$-algebra axiom $(3)$]] that it suffices to show that:
+:$\left({a \,.\,.\, b}\right) \in \Sigma$
+:$\left({a \,.\,.\, +\infty}\right] \in \Sigma$
+:$\left[{-\infty \,.\,.\, b}\right) \in \Sigma$
+:$\left[{-\infty \,.\,.\, +\infty}\right) \in \Sigma$
+for all [[Definition:Rational Number|rational numbers]] $a, b \in \Q$.
+The middle two are in $\Sigma'_2$ and $\Sigma'_3$, respectively, hence in $\Sigma$.
+By [[Sigma-Algebra Closed under Intersection]]:
+:$\left({a \,.\,.\, b}\right) = \left({a \,.\,.\, +\infty}\right] \cap \left[{-\infty \,.\,.\, b}\right) \in \Sigma$
+and, since $\left[{-\infty \,.\,.\, n}\right) \in \Sigma$ for all $n \in \N$, by [[Definition:Sigma-Algebra|$\sigma$-algebra axiom $(3)$]]:
+:$\left[{-\infty \,.\,.\, +\infty}\right) = \displaystyle \bigcup_{n \mathop \in \N} \left[{-\infty \,.\,.\, n}\right) \in \Sigma$
+Hence $\Sigma = \overline{\mathcal B}$, as desired.
+{{qed}}
+\end{proof}<|endoftext|>
+\section{Characteristic Function Measurable iff Set Measurable}
+Tags: Measure Theory
+
+\begin{theorem}
+Let $\left({X, \Sigma}\right)$ be a [[Definition:Measurable Space|measurable space]].
+Let $E \subseteq X$.
+Then the following are equivalent:
+:$(1): \quad E \in \Sigma$; i.e., $E$ is a [[Definition:Measurable Set|$\Sigma$-measurable set]]
+:$(2): \quad \chi_E: X \to \left\{{0, 1}\right\}$, the [[Definition:Characteristic Function of Set|characteristic function]] of $E$, is [[Definition:Measurable Function|$\Sigma$-measurable]]
+\end{theorem}
+
+\begin{proof}
+=== $(1)$ implies $(2)$ ===
+Assume that $E \in \Sigma$.
+It is clear that $x \notin \left\{{0, 1}\right\}$ implies $\chi_E^{-1} \left({x}\right) = \varnothing$.
+Hence [[Preimage of Union under Mapping]] and [[Characteristic Function Determined by 1-Fiber]] yield, for any $\alpha \in \R$:
+:$\left\{{x \in X: \chi_E \left({x}\right) \ge \alpha}\right\} = \begin{cases}\varnothing & \text{if $1 < \alpha$}\\
+E & \text{if $0 < \alpha \le 1$}\\
+X & \text{if $\alpha \le 0$}\end{cases}$
+Since $\Sigma$ is a [[Definition:Sigma-Algebra|$\sigma$-algebra]], $X \in \Sigma$.
+By assumption, also $E \in \Sigma$.
+Lastly, [[Sigma-Algebra Contains Empty Set]] ensures $\varnothing \in \Sigma$.
+This establishes $(4)$ of [[Characterization of Measurable Functions]].
+Hence $\chi_E$ is [[Definition:Measurable Function|$\Sigma$-measurable]].
+{{qed|lemma}}
+=== $(2)$ implies $(1)$ ===
+Assume that $\chi_E$ is [[Definition:Measurable Function|$\Sigma$-measurable]].
+Since for all $x \in X$, it holds that:
+:$\chi_E \left({x}\right) > 0 \iff x \in E$
+it follows that:
+:$E = \chi_E^{-1} \left({\left({0 \,.\,.\, +\infty}\right)}\right)$
+and thus $E \in \Sigma$ as $\chi_E$ is [[Definition:Measurable Function|$\Sigma$-measurable]].
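The case split on $\alpha$ used in the first direction can be verified mechanically on a small example (an informal aside, not part of the proof; the sets below are made up):

```python
# Made-up finite ground set and measurable set for illustration
X = {1, 2, 3, 4, 5}
E = {2, 4}

def chi_E(x):
    """Characteristic function of E."""
    return 1 if x in E else 0

def superlevel(alpha):
    """The set {x in X : chi_E(x) >= alpha}."""
    return {x for x in X if chi_E(x) >= alpha}

assert superlevel(1.5) == set()  # 1 < alpha
assert superlevel(0.5) == E      # 0 < alpha <= 1
assert superlevel(1.0) == E      # boundary case alpha = 1
assert superlevel(0.0) == X      # alpha <= 0
assert superlevel(-2.0) == X
```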
+{{qed}}
+\end{proof}<|endoftext|>
+\section{Simple Function is Measurable}
+Tags: Measure Theory
+
+\begin{theorem}
+Let $\left({X, \Sigma}\right)$ be a [[Definition:Measurable Space|measurable space]].
+Let $f: X \to \R$ be a [[Definition:Simple Function|simple function]].
+Then $f$ is [[Definition:Measurable Function|$\Sigma$-measurable]].
+\end{theorem}
+
+\begin{proof}
+Let $f$ be written in the following form:
+:$f = \displaystyle \sum_{i \mathop = 1}^n a_i \chi_{S_i}$
+where $a_i \in \R$ and the $S_i$ are [[Definition:Measurable Set|$\Sigma$-measurable]].
+Next, for each [[Definition:Ordered Tuple|ordered $n$-tuple]] $b$ of zeroes and ones define:
+:$T_b \left({i}\right) := \begin{cases}
+S_i & \text{if $b \left({i}\right) = 1$}\\
+X \setminus S_i & \text{if $b \left({i}\right) = 0$}
+\end{cases}$
+and subsequently:
+:$T_b := \displaystyle \bigcap_{i \mathop = 1}^n T_b \left({i}\right)$
+From [[Sigma-Algebra Closed under Intersection]], $T_b \in \Sigma$ for all $b$.
+Also, the $T_b$ are [[Definition:Pairwise Disjoint|pairwise disjoint]], and furthermore:
+:$f = \displaystyle \sum_b a_b \chi_{T_b}$
+where:
+:$a_b := \displaystyle \sum_{i \mathop = 1}^n b \left({i}\right) a_i$
+{{finish|prove it, it's a messy business}}
+Now we have, for all $\lambda \in \R$:
+:$\left\{{x \in X: f \left({x}\right) > \lambda}\right\} = \displaystyle \bigcup \left\{{T_b: a_b > \lambda}\right\}$
+which by [[Sigma-Algebra Closed under Union]] is a [[Definition:Measurable Set|$\Sigma$-measurable set]].
+From [[Characterization of Measurable Functions|Characterization of Measurable Functions: $(5)$]] it follows that $f$ is [[Definition:Measurable Function|measurable]].
+{{qed}}
+\end{proof}<|endoftext|>
+\section{Identity Mapping is Relation Isomorphism}
+Tags: Relation Isomorphisms, Identity Mappings
+
+\begin{theorem}
+Let $\struct {S, \RR}$ be a [[Definition:Relational Structure|relational structure]].
+Then the [[Definition:Identity Mapping|identity mapping]] $I_S: S \to S$ is a [[Definition:Relation Isomorphism|relation isomorphism]] from $\struct {S, \RR}$ to itself. +\end{theorem} + +\begin{proof} +By definition of [[Definition:Identity Mapping|identity mapping]]: +:$\forall x \in S: \map {I_S} x = x$ +So: +:$x \mathrel \RR y \implies \map {I_S} x \mathrel \RR \map {I_S} y$ +From [[Identity Mapping is Bijection]], $I_S$ is a [[Definition:Bijection|bijection]]. +Hence: +:$\map {I_S^{-1} } x = x$ +So: +:$x \mathrel \RR y \implies \map {I_S^{-1} } x \mathrel \RR \map {I_S^{-1} } y$ +{{qed}} +\end{proof}<|endoftext|> +\section{Inverse of Relation Isomorphism is Relation Isomorphism} +Tags: Relation Isomorphisms + +\begin{theorem} +Let $\struct {S, \RR_1}$ and $\struct {T, \RR_2}$ be [[Definition:Relational Structure|relational structures]]. +Let $\phi: \struct {S, \RR_1} \to \struct {T, \RR_2}$ be a [[Definition:Bijection|bijection]]. +Then: +:$\phi: \struct {S, \RR_1} \to \struct {T, \RR_2}$ +is a [[Definition:Relation Isomorphism|relation isomorphism]] +{{iff}}: +:$\phi^{-1}: \struct {T, \RR_2} \to \struct {S, \RR_1}$ +is also a [[Definition:Relation Isomorphism|relation isomorphism]]. +\end{theorem} + +\begin{proof} +Follows directly from the definition of [[Definition:Relation Isomorphism|relation isomorphism]]. +{{Qed}} +\end{proof}<|endoftext|> +\section{Measurable Function is Simple Function iff Finite Image Set} +Tags: Measure Theory + +\begin{theorem} +Let $\struct {X, \Sigma}$ be a [[Definition:Measurable Space|measurable space]]. +Let $f: X \to \R$ be a [[Definition:Measurable Function|measurable function]]. 
+Then $f$ is a [[Definition:Simple Function|simple function]] {{iff}} its [[Definition:Image of Mapping|image]] is [[Definition:Finite Set|finite]]:
+:$\card {\Img f} < \infty$
+\end{theorem}
+
+\begin{proof}
+=== Necessary Condition ===
+Suppose that $f$ is a [[Definition:Simple Function|simple function]], and that:
+:$\ds \forall x \in X: \map f x = \sum_{i \mathop = 1}^n a_i \map {\chi_{S_i} } x$
+Since each of the $\chi_{S_i}$ is a [[Definition:Characteristic Function of Set|characteristic function]], it can take only the values $0$ and $1$.
+Thus each [[Definition:Summand|summand]] can take two values.
+It follows immediately that $f$ can take at most $2^n$ different values.
+That is, $\Img f$ is [[Definition:Finite Set|finite]].
+{{qed|lemma}}
+=== Sufficient Condition ===
+Suppose that the [[Definition:Image of Mapping|image]] of $f$ is [[Definition:Finite Set|finite]].
+Call the distinct values $f$ attains $y_1, \ldots, y_n$.
+For brevity, denote $\set {f = a}$ to mean $\set {x \in X: \map f x = a}$ (compare [[Definition:Set Definition by Predicate|Set Definition by Predicate]]).
+Define for each $i$ with $1 \le i \le n$:
+:$B_i := \set {f = y_i}$
+From [[Characterization of Measurable Functions]] $(2)$ and $(4)$, and [[Sigma-Algebra Closed under Intersection]] we obtain that:
+:$\set {f = y_i} = \set {f \ge y_i} \cap \set {f \le y_i} \in \Sigma$
+Furthermore, since the $y_i$ are distinct, the $B_i$ are necessarily [[Definition:Disjoint Sets|disjoint]].
+It follows that:
+:$(1): \quad \map f x = \ds \sum_{i \mathop = 1}^n y_i \map {\chi_{B_i} } x$
+As the $B_i$ are [[Definition:Measurable Set|measurable]], $f$ is shown to be a [[Definition:Simple Function|simple function]].
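The construction in the sufficient condition can be illustrated informally (not part of the proof): given a function with finite image, one rebuilds it as a linear combination of characteristic functions of its level sets. The domain and function below are made up for the example:

```python
# Made-up finite domain and a function with finite image
X = [-2, -1, 0, 1, 2]

def f(x):
    return x * x % 3  # image is {0, 1}: finite

values = sorted({f(x) for x in X})                       # the distinct y_i
B = {y: {x for x in X if f(x) == y} for y in values}     # B_i = {f = y_i}

def chi(S, x):
    """Characteristic function of S evaluated at x."""
    return 1 if x in S else 0

# f(x) = sum_i y_i * chi_{B_i}(x) for every x in the domain
assert all(f(x) == sum(y * chi(B[y], x) for y in values) for x in X)
# the level sets B_i are pairwise disjoint and cover X
assert sum(len(B[y]) for y in values) == len(X)
```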
+{{qed}}
+\end{proof}<|endoftext|>
+\section{Composite of Relation Isomorphisms is Relation Isomorphism}
+Tags: Relation Isomorphisms
+
+\begin{theorem}
+Let $\struct {S_1, \RR_1}$, $\struct {S_2, \RR_2}$ and $\struct {S_3, \RR_3}$ be [[Definition:Relational Structure|relational structures]].
+Let:
+:$\phi: \struct {S_1, \RR_1} \to \struct {S_2, \RR_2}$
+and:
+:$\psi: \struct {S_2, \RR_2} \to \struct {S_3, \RR_3}$
+be [[Definition:Relation Isomorphism|relation isomorphisms]].
+Then $\psi \circ \phi: \struct {S_1, \RR_1} \to \struct {S_3, \RR_3}$ is also a [[Definition:Relation Isomorphism|relation isomorphism]].
+\end{theorem}
+
+\begin{proof}
+From [[Composite of Bijections is Bijection]], $\psi \circ \phi$ is a [[Definition:Bijection|bijection]], as, by definition, a [[Definition:Relation Isomorphism|relation isomorphism]] is also a [[Definition:Bijection|bijection]].
+By definition of [[Definition:Composition of Mappings|composition of mappings]]:
+:$\map {\psi \circ \phi} x = \map \psi {\map \phi x}$
+As $\phi$ is a [[Definition:Relation Isomorphism|relation isomorphism]], we have:
+:$\forall x_1, y_1 \in S_1: x_1 \mathrel {\RR_1} y_1 \implies \map \phi {x_1} \mathrel {\RR_2} \map \phi {y_1}$
+As $\psi$ is a [[Definition:Relation Isomorphism|relation isomorphism]], we have:
+:$\forall x_2, y_2 \in S_2: x_2 \mathrel {\RR_2} y_2 \implies \map \psi {x_2} \mathrel {\RR_3} \map \psi {y_2}$
+By setting $x_2 = \map \phi {x_1}, y_2 = \map \phi {y_1}$, it follows that:
+:$\forall x_1, y_1 \in S_1: x_1 \mathrel {\RR_1} y_1 \implies \map \psi {\map \phi {x_1} } \mathrel {\RR_3} \map \psi {\map \phi {y_1} }$
+Similarly we can show that:
+:$\forall x_3, y_3 \in S_3: x_3 \mathrel {\RR_3} y_3 \implies \map {\phi^{-1} } {\map {\psi^{-1} } {x_3} } \mathrel {\RR_1} \map {\phi^{-1} } {\map {\psi^{-1} } {y_3} }$
+Hence the result, by definition of a [[Definition:Relation Isomorphism|relation isomorphism]].
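For concreteness (an informal aside, not part of the proof), here is a check of the composite on made-up finite relational structures; every set, relation and mapping below is invented for the example:

```python
# Made-up finite relational structures (S1, R1), (S2, R2), (S3, R3)
S1 = [0, 1, 2]
S2 = ["a", "b", "c"]
S3 = [10, 20, 30]
R1 = {(0, 1), (1, 2)}
R2 = {("a", "b"), ("b", "c")}
R3 = {(10, 20), (20, 30)}

phi = {0: "a", 1: "b", 2: "c"}   # bijection S1 -> S2
psi = {"a": 10, "b": 20, "c": 30}  # bijection S2 -> S3

# phi and psi preserve the relations for these choices
assert all((phi[x], phi[y]) in R2 for (x, y) in R1)
assert all((psi[x], psi[y]) in R3 for (x, y) in R2)

# the composite psi o phi is a relation-preserving bijection
comp = {x: psi[phi[x]] for x in S1}
assert all((comp[x], comp[y]) in R3 for (x, y) in R1)
assert len(set(comp.values())) == len(S1)
```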
+{{qed}}
+\end{proof}<|endoftext|>
+\section{Pointwise Sum of Simple Functions is Simple Function}
+Tags: Measure Theory, Pointwise Operations
+
+\begin{theorem}
+Let $\struct {X, \Sigma}$ be a [[Definition:Measurable Space|measurable space]].
+Let $f, g : X \to \R$ be [[Definition:Simple Function|simple functions]].
+Then $f + g: X \to \R, \map {\paren {f + g} } x := \map f x + \map g x$ is also a [[Definition:Simple Function|simple function]].
+\end{theorem}
+
+\begin{proof}
+We have $f + g = + \circ \innerprod f g \circ \Delta_X$, where:
+:$\Delta_X: X \to X \times X$ is the [[Definition:Diagonal Mapping|diagonal mapping]] on $X$
+:$\innerprod f g: X \times X \to \R \times \R, \map {\innerprod f g} {x, y} := \tuple {\map f x, \map g y}$
+:$+: \R \times \R \to \R$ is [[Definition:Real Addition|real addition]].
+{{explain|What is the meaning of the notation $\innerprod f g$? Inner product?}}
+From this, we see that $+$ may be [[Definition:Restriction of Mapping|restricted]] to $\Img {\innerprod f g}$, the [[Definition:Image of Mapping|image]] of $\innerprod f g$.
+It is immediate that this [[Definition:Image of Mapping|image]] equals $\Img f \times \Img g$.
+By [[Measurable Function is Simple Function iff Finite Image Set]], $\Img f$ and $\Img g$ are [[Definition:Finite Set|finite]].
+Therefore, by [[Cardinality of Cartesian Product]], $\Img {\innerprod f g}$ is also [[Definition:Finite Set|finite]].
+Next, from [[Image of Composite Mapping/Corollary|Corollary to Image of Composite Mapping]]:
+:$\Img {\innerprod f g \circ \Delta_X} \subseteq \Img {\innerprod f g}$
+whence the former is [[Definition:Finite Set|finite]].
+Now $+$ is a [[Definition:Surjection|surjection]]:
+:$+: \Img {\innerprod f g \circ \Delta_X} \to \Img {f + g}$
+by [[Restriction of Mapping to Image is Surjection]].
+By [[Cardinality of Surjection]], this implies that $\Img {f + g}$ is [[Definition:Finite Set|finite]].
+Whence [[Measurable Function is Simple Function iff Finite Image Set]] grants that $f + g$ is a [[Definition:Simple Function|simple function]].
+{{qed}}
+\end{proof}<|endoftext|>
+\section{Pointwise Product of Simple Functions is Simple Function}
+Tags: Measure Theory, Pointwise Operations
+
+\begin{theorem}
+Let $\struct {X, \Sigma}$ be a [[Definition:Measurable Space|measurable space]].
+Let $f, g : X \to \R$ be [[Definition:Simple Function|simple functions]].
+Then $f \cdot g: X \to \R, \map {\paren {f \cdot g} } x := \map f x \cdot \map g x$ is also a [[Definition:Simple Function|simple function]].
+\end{theorem}
+
+\begin{proof}
+From [[Measurable Function is Simple Function iff Finite Image Set]], it follows that there exist $x_1, \ldots, x_n$ and $y_1, \ldots, y_m$ comprising the [[Definition:Image of Mapping|image]] of $f$ and $g$, respectively.
+But then it immediately follows that any value attained by $f \cdot g$ is of the form $x_i \cdot y_j$.
+Hence, there are at most $n \times m$ distinct values $f \cdot g$ can take.
+From [[Pointwise Product of Measurable Functions is Measurable]], $f \cdot g$ is also [[Definition:Measurable Function|measurable]].
+Hence by [[Measurable Function is Simple Function iff Finite Image Set]], $f \cdot g$ is a [[Definition:Simple Function|simple function]].
+{{qed}}
+\end{proof}<|endoftext|>
+\section{Positive Part of Simple Function is Simple Function}
+Tags: Measure Theory
+
+\begin{theorem}
+Let $\left({X, \Sigma}\right)$ be a [[Definition:Measurable Space|measurable space]].
+Let $f: X \to \R$ be a [[Definition:Simple Function|simple function]].
+Then $f^+: X \to \R$, the [[Definition:Positive Part|positive part]] of $f$, is also a [[Definition:Simple Function|simple function]].
+\end{theorem}
+
+\begin{proof}
+Let $f$ have the following [[Definition:Standard Representation of Simple Function|standard representation]]:
+:$f = \displaystyle \sum_{i \mathop = 0}^n a_i \chi_{E_i}$
+Then we see that $f^+$ must satisfy:
+:$f^+ = \displaystyle \sum_{i \mathop = 0}^n \max \left\{{a_i, 0}\right\} \chi_{E_i}$
+as the $E_i$ are [[Definition:Disjoint Sets|disjoint]], and $\chi_{E_i} \ge 0$ [[Definition:Pointwise Inequality|pointwise]].
+Since all of the $E_i$ are [[Definition:Measurable Set|measurable]], it follows that $f^+$ is a [[Definition:Simple Function|simple function]].
+{{qed}}
+\end{proof}<|endoftext|>
+\section{Negative Part of Simple Function is Simple Function}
+Tags: Measure Theory
+
+\begin{theorem}
+Let $\left({X, \Sigma}\right)$ be a [[Definition:Measurable Space|measurable space]].
+Let $f: X \to \R$ be a [[Definition:Simple Function|simple function]].
+Then $f^-: X \to \R$, the [[Definition:Negative Part|negative part]] of $f$, is also a [[Definition:Simple Function|simple function]].
+\end{theorem}
+
+\begin{proof}
+Let $f$ have the following [[Definition:Standard Representation of Simple Function|standard representation]]:
+:$f = \displaystyle \sum_{i \mathop = 0}^n a_i \chi_{E_i}$
+Then we see that $f^-$ must satisfy:
+:$f^- = \displaystyle \sum_{i \mathop = 0}^n \left({- \min \left\{{a_i, 0}\right\} }\right) \chi_{E_i}$
+as the $E_i$ are [[Definition:Disjoint Sets|disjoint]], and $\chi_{E_i} \ge 0$ [[Definition:Pointwise Inequality|pointwise]].
+Since all of the $E_i$ are [[Definition:Measurable Set|measurable]], it follows that $f^-$ is a [[Definition:Simple Function|simple function]].
+{{qed}}
+\end{proof}<|endoftext|>
+\section{Difference of Positive and Negative Parts}
+Tags: Mapping Theory
+
+\begin{theorem}
+Let $X$ be a [[Definition:Set|set]], and let $f: X \to \overline{\R}$ be an [[Definition:Extended Real-Valued Function|extended real-valued function]].
+Let $f^+$, $f^-: X \to \overline{\R}$ be the [[Definition:Positive Part|positive]] and [[Definition:Negative Part|negative parts]] of $f$, respectively. +Then $f = f^+ - f^-$. +\end{theorem} + +\begin{proof} +Let $x \in X$. +Consider the following four cases for the value of $f(x)$ in $\overline{\R}$: +* $f \left({x}\right) = -\infty$ +: By [[Definition:Ordering on Extended Real Numbers|ordering on extended reals]]: +::$f^+ \left({x}\right) = \max \left\{{0, f \left({x}\right)}\right\} = \max \left\{{0, -\infty}\right\} = 0$ +::$f^- \left({x}\right) = - \min \left\{{0, f \left({x}\right)}\right\} = - \min \left\{{0, -\infty}\right\} = +\infty$ +: By [[Definition:Extended Real Subtraction|extended real subtraction]], it thus follows that: +::$f^+ \left({x}\right) - f^- \left({x}\right) = 0 - \left({+\infty}\right) = - \infty = f \left({x}\right)$ +* $f \left({x}\right) \in \left({-\infty \,.\,.\, 0}\right)$ +: Since $f \left({x}\right) < 0$, it follows that: +::$f^+ \left({x}\right) = \max \left\{{0, f \left({x}\right)}\right\} = 0$ +::$f^- \left({x}\right) = - \min \left\{{0, f \left({x}\right)}\right\}= - f \left({x}\right)$ +: Thence it immediately follows that: +::$f^+ \left({x}\right) - f^- \left({x}\right) = 0 - \left({- f \left({x}\right)}\right) = f \left({x}\right)$ +* $f \left({x}\right) \in \left[{0 \,.\,.\, \infty}\right)$ +: Since $f \left({x}\right) \geq 0$, we obtain: +::$f^+ \left({x}\right) = \max \left\{{0, f \left({x}\right)}\right\} = f \left({x}\right)$ +::$f^- \left({x}\right) = - \min \left\{{0, f \left({x}\right)}\right\} = 0$ +: Then, these immediately imply: +::$f^+ \left({x}\right) - f^- \left({x}\right) = f \left({x}\right) - 0 = f \left({x}\right)$ +* $f \left({x}\right) = +\infty$ +: By [[Definition:Ordering on Extended Real Numbers|ordering on extended reals]]: +::$f^+ \left({x}\right) = \max \left\{{0, f \left({x}\right)}\right\} = \max \left\{{0, +\infty}\right\} = +\infty$ +::$f^- \left({x}\right) = - \min \left\{{0, f 
\left({x}\right)}\right\}= - \min \left\{{0, +\infty}\right\} = 0$ +: By [[Definition:Extended Real Subtraction|extended real subtraction]], it now follows that: +::$f^+ \left({x}\right) - f^- \left({x}\right) = +\infty - 0 = +\infty = f \left({x}\right)$ +In all cases, $f^+ \left({x}\right) - f^- \left({x}\right) = f \left({x}\right)$. +As $x\in X$ was arbitrary, we may conclude that: +:$\forall x \in X: f^+ \left({x}\right) - f^- \left({x}\right) = f \left({x}\right)$ +That is, we have shown: +:$f = f^+ - f^-$ +{{qed}} +\end{proof}<|endoftext|> +\section{Sum of Positive and Negative Parts} +Tags: Mapping Theory + +\begin{theorem} +Let $X$ be a [[Definition:Set|set]], and let $f: X \to \overline{\R}$ be an [[Definition:Extended Real-Valued Function|extended real-valued function]]. +Let $f^+, f^-: X \to \overline{\R}$ be the [[Definition:Positive Part|positive]] and [[Definition:Negative Part|negative parts]] of $f$, respectively. +Then $\left\vert{f}\right\vert = f^+ + f^-$, where $\left\vert{f}\right\vert$ is the [[Definition:Absolute Value of Extended Real-Valued Function|absolute value of $f$]]. +\end{theorem} + +\begin{proof} +Let $x \in X$. +Suppose that $f \left({x}\right) \ge 0$, where $\ge$ signifies the [[Definition:Extended Real Ordering|extended real ordering]]. +Then $\left\vert{f \left({x}\right)}\right\vert = f \left({x}\right)$, and: +:$f^+ \left({x}\right) = \max \, \left\{{f \left({x}\right), 0}\right\} = f \left({x}\right)$ +:$f^- \left({x}\right) = - \min \, \left\{{f \left({x}\right), 0}\right\} = 0$ +Hence $f^+ \left({x}\right) + f^- \left({x}\right) = f \left({x}\right) = \left\vert{f \left({x}\right)}\right\vert$. +Next, suppose that $f \left({x}\right) < 0$, again in the [[Definition:Extended Real Ordering|extended real ordering]]. 
+Then $\left\vert{f \left({x}\right)}\right\vert = - f \left({x}\right)$, and:
+:$f^+ \left({x}\right) = \max \, \left\{{f \left({x}\right), 0}\right\} = 0$
+:$f^- \left({x}\right) = - \min \, \left\{{f \left({x}\right), 0}\right\} = - f \left({x}\right)$
+Hence $f^+ \left({x}\right) + f^- \left({x}\right) = - f \left({x}\right) = \left\vert{f \left({x}\right)}\right\vert$.
+Thus, for all $x \in X$:
+:$f^+ \left({x}\right) + f^- \left({x}\right) = \left\vert{f \left({x}\right)}\right\vert$
+that is, $f^+ + f^- = \left\vert{f}\right\vert$.
+{{qed}}
+\end{proof}<|endoftext|>
+\section{Absolute Value of Simple Function is Simple Function}
+Tags: Measure Theory
+
+\begin{theorem}
+Let $\struct {X, \Sigma}$ be a [[Definition:Measurable Space|measurable space]].
+Let $f: X \to \R$ be a [[Definition:Simple Function|simple function]].
+Then $\size f: X \to \R$, the [[Definition:Absolute Value of Real-Valued Function|absolute value of $f$]], is also a [[Definition:Simple Function|simple function]].
+\end{theorem}
+
+\begin{proof}
+By [[Simple Function has Standard Representation]], $f$ has a [[Definition:Standard Representation of Simple Function|standard representation]], say:
+:$(1): \quad f = \displaystyle \sum_{k \mathop = 0}^n a_k \chi_{E_k}$
+Then, for all $x \in X$:
+:$\left\vert{f}\right\vert \left({x}\right) = \displaystyle \left\vert{ \sum_{k \mathop = 0}^n a_k \chi_{E_k} \left({x}\right) }\right\vert$
+by definition of [[Definition:Pointwise Absolute Value|pointwise absolute value]].
+The fact that $(1)$ forms a [[Definition:Standard Representation of Simple Function|standard representation]] ensures that for every $x \in X$, precisely one $k$ has $x \in E_k$.
+Now suppose that $x \in E_l$.
+Then $\chi_{E_k} \left({x}\right) = 0$ {{iff}} $k \ne l$ by definition of [[Definition:Characteristic Function of Set|characteristic function]].
+It follows that $\left\vert{f}\right\vert \left({x}\right) = \left\vert{a_l \cdot 1}\right\vert = \left\vert{a_l}\right\vert$.
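The case analyses in these proofs all reduce to the identities $f^+ = \max \left\{{f, 0}\right\}$, $f^- = - \min \left\{{f, 0}\right\}$, $f = f^+ - f^-$ and $\left\vert{f}\right\vert = f^+ + f^-$, which can be spot-checked numerically. A minimal Python sketch (the sample values, including the two infinite cases, are illustrative assumptions):

```python
# Spot-check of the positive/negative part identities used in these proofs:
# f^+ = max{f, 0}, f^- = -min{f, 0}, f = f^+ - f^- and |f| = f^+ + f^-.

def f_plus(v):
    """Positive part max{v, 0} of an (extended) real value."""
    return max(v, 0.0)

def f_minus(v):
    """Negative part -min{v, 0}; note that it is nonnegative."""
    return -min(v, 0.0)

# float('inf') stands in for the extended-real cases handled in the proofs.
samples = [-float('inf'), -3.5, -1.0, 0.0, 2.0, float('inf')]
for v in samples:
    assert f_plus(v) >= 0 and f_minus(v) >= 0      # both parts are nonnegative
    assert f_plus(v) - f_minus(v) == v             # f = f^+ - f^-
    assert f_plus(v) + f_minus(v) == abs(v)        # |f| = f^+ + f^-
```

The two infinite samples exercise exactly the extended-real cases distinguished in the proof of Difference of Positive and Negative Parts above.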
+Now define $g: X \to \R$ by:
+:$g \left({x}\right) := \displaystyle \sum_{k \mathop = 0}^n \left\vert{a_k}\right\vert \chi_{E_k} \left({x}\right)$
+By construction, $g$ is a [[Definition:Simple Function|simple function]], and for $x \in E_l$, $g \left({x}\right) = \left\vert{a_l}\right\vert$.
+Thus, since every $x$ is in $E_l$ for precisely one $l$, it follows that $g = \left\vert{f}\right\vert$.
+As $g$ is a [[Definition:Simple Function|simple function]], it follows that $\left\vert{f}\right\vert$ is, too.
+{{qed}}
+\end{proof}
+
+\begin{proof}
+By [[Sum of Positive and Negative Parts]], we have:
+:$\left\vert{f}\right\vert = f^+ + f^-$
+By [[Positive Part of Simple Function is Simple Function]] and [[Negative Part of Simple Function is Simple Function]], both $f^+$ and $f^-$ are [[Definition:Simple Function|simple functions]].
+Hence $\left\vert{f}\right\vert$ is a [[Definition:Pointwise Addition|pointwise sum]] of [[Definition:Simple Function|simple functions]].
+The result follows from [[Pointwise Sum of Simple Functions is Simple Function]].
+{{qed}}
+\end{proof}<|endoftext|>
+\section{Measurable Function Pointwise Limit of Simple Functions}
+Tags: Measure Theory
+
+\begin{theorem}
+Let $\left({X, \Sigma}\right)$ be a [[Definition:Measurable Space|measurable space]].
+Let $f: X \to \overline{\R}$ be a [[Definition:Measurable Function|$\Sigma$-measurable function]].
+Then there exists a [[Definition:Sequence|sequence]] $\left({f_n}\right)_{n \in \N} \in \mathcal E \left({\Sigma}\right)$ of [[Definition:Simple Function|simple functions]], such that:
+:$\forall x \in X: f \left({x}\right) = \displaystyle \lim_{n \to \infty} f_n \left({x}\right)$
+That is, such that $f = \displaystyle \lim_{n \to \infty} f_n$ [[Definition:Pointwise Limit|pointwise]].
+The [[Definition:Sequence|sequence]] $\left({f_n}\right)_{n \in \N}$ may furthermore be taken to satisfy:
+:$\forall n \in \N: \left\vert{f_n}\right\vert \le \left\vert{f}\right\vert$
+where $\left\vert{f}\right\vert$ denotes the [[Definition:Absolute Value of Extended Real-Valued Function|absolute value of $f$]].
+\end{theorem}
+
+\begin{proof}
+First, let us prove the theorem when $f$ is a [[Definition:Positive Measurable Function|positive $\Sigma$-measurable function]].
+Now for any $n \in \N$, define for $0 \le k \le n 2^n$:
+:$A^n_k := \begin{cases}
+ \left\{{ k 2^{-n} \le f < \left({k + 1}\right) 2^{-n} }\right\} & : k \ne n 2^n \\
+ \left\{{f \ge n}\right\} & : k = n 2^n
+\end{cases}$
+where e.g. $\left\{{f \ge n}\right\}$ is short for $\left\{{x \in X: f \left({x}\right) \ge n}\right\}$.
+It is immediate that the $A^n_k$ are [[Definition:Pairwise Disjoint|pairwise disjoint]], and that:
+:$\displaystyle \bigcup_{k \mathop = 0}^{n 2^n} A^n_k = X$
+Subsequently, define $f_n: X \to \overline{\R}$ by:
+:$f_n \left({x}\right) := \displaystyle \sum_{k \mathop = 0}^{n 2^n} k 2^{-n} \chi_{A^n_k} \left({x}\right)$
+where $\chi_{A^n_k}$ is the [[Definition:Characteristic Function of Set|characteristic function]] of $A^n_k$.
+Now if $f \left({x}\right) < n$, then we have for some $k < n 2^n$:
+:$x \in A^n_k$
+so that:
+{{begin-eqn}}
+{{eqn | l = \left\vert{f \left({x}\right) - f_n \left({x}\right)}\right\vert
+ | r = \left\vert{f \left({x}\right) - k 2^{-n} }\right\vert
+}}
+{{eqn | o = <
+ | r = 2^{-n}
+}}
+{{end-eqn}}
+since $x \in A^n_k$ {{iff}} $k 2^{-n} \le f \left({x}\right) < \left({k + 1}\right) 2^{-n}$.
+In particular, since $f_n \left({x}\right) = k 2^{-n} \le f \left({x}\right)$ on each $A^n_k$ with $k < n 2^n$, and $f_n \left({x}\right) = n \le f \left({x}\right)$ on $A^n_{n 2^n}$, we conclude that [[Definition:Pointwise Inequality of Extended Real-Valued Functions|pointwise]], $f_n \le f$, for all $n \in \N$.
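The dyadic construction above is effective: on $\left\{{f < n}\right\}$ it picks $k = \left\lfloor{f \left({x}\right) 2^n}\right\rfloor$, and both $f_n \le f$ and the error bound $\left\vert{f \left({x}\right) - f_n \left({x}\right)}\right\vert < 2^{-n}$ can be checked numerically. A minimal Python sketch (the sample function and sample points are illustrative assumptions):

```python
import math

def dyadic_approximation(f, n):
    """n-th simple-function approximant from the construction above:
    value n on {f >= n}; value k * 2^-n on {k 2^-n <= f < (k+1) 2^-n}
    for k < n * 2^n."""
    def f_n(x):
        v = f(x)
        if v >= n:
            return float(n)
        k = math.floor(v * 2 ** n)   # the unique k with x in A^n_k
        return k * 2.0 ** -n
    return f_n

f = lambda x: x * x                  # a sample positive function
for n in range(1, 12):
    f_n = dyadic_approximation(f, n)
    for x in [0.0, 0.3, 1.0, 2.7]:
        assert f_n(x) <= f(x)                     # f_n <= f pointwise
        if f(x) < n:
            assert f(x) - f_n(x) < 2.0 ** -n      # the error bound above
```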
+By [[Characterization of Measurable Functions]] and [[Sigma-Algebra Closed under Intersection]], it follows that: +:$A^n_{n 2^n} = \left\{{f \ge n}\right\}$ +:$A^n_k = \left\{{f \ge k 2^{-n}}\right\} \cap \left\{{f < \left({k + 1}\right) 2^{-n}}\right\}$ +are all [[Definition:Measurable Set|$\Sigma$-measurable sets]]. +Hence, by definition, all $f_n$ are [[Definition:Simple Function|$\Sigma$-simple functions]]. +It remains to show that $\displaystyle \lim_{n \to \infty} f_n = f$ [[Definition:Pointwise Limit|pointwise]]. +Let $x \in X$ be arbitrary. +If $f \left({x}\right) = +\infty$, then for all $n \in \N$, $x \in A^n_{n 2^n}$, so that: +:$f_n \left({x}\right) = n$ +Now clearly, $\displaystyle \lim_{n \to \infty} n = +\infty$, showing convergence for these $x$. +If $f \left({x}\right) < +\infty$, then for some $n \in \N$, $f \left({x}\right) < n$. +By the reasoning above, we then have for all $m \ge n$: +:$\left\vert{f \left({x}\right) - f_m \left({x}\right)}\right\vert < 2^{-m}$ +which by [[Sequence of Powers of Number less than One]] implies that: +: $\displaystyle \lim_{n \to \infty} f_n \left({x}\right) = f \left({x}\right)$ +Thus $\displaystyle \lim_{n \to \infty} f_n = f$ [[Definition:Pointwise Limit|pointwise]]. +This establishes the result for [[Definition:Positive Measurable Function|positive measurable]] $f$. +For arbitrary $f$, by [[Difference of Positive and Negative Parts]], we have: +:$f = f^+ - f^-$ +where $f^+$ and $f^-$ are the [[Definition:Positive Part|positive]] and [[Definition:Negative Part|negative parts]] of $f$. +By [[Function Measurable iff Positive and Negative Parts Measurable]], $f^+$ and $f^-$ are [[Definition:Positive Measurable Function|positive measurable functions]]. +Thus we find [[Definition:Sequence|sequences]] $f^+_n$ and $f^-_n$ [[Definition:Pointwise Limit|converging pointwise]] to $f^+$ and $f^-$, respectively. 
+The [[Sum Rule for Real Sequences]] implies that for all $x \in X$:
+:$\displaystyle \lim_{n \to \infty} f^+_n \left({x}\right) - f^-_n \left({x}\right) = f^+ \left({x}\right) - f^- \left({x}\right) = f \left({x}\right)$
+Furthermore, we have for all $n \in \N$ and $x \in X$:
+:$\left\vert{f^+_n \left({x}\right) - f^-_n \left({x}\right)}\right\vert = f^+_n \left({x}\right) + f^-_n \left({x}\right) \le f^+ \left({x}\right) + f^- \left({x}\right) = \left\vert{f \left({x}\right)}\right\vert$
+where the last equality follows from [[Sum of Positive and Negative Parts]].
+Hence the result.
+{{qed}}
+\end{proof}<|endoftext|>
+\section{Finite Cartesian Product of Non-Empty Sets is Non-Empty}
+Tags: Cartesian Product
+
+\begin{theorem}
+Let $S_1, S_2, \ldots, S_n$ be [[Definition:Non-Empty Set|non-empty]] [[Definition:Set|sets]].
+Then their [[Definition:Finite Cartesian Product|cartesian product]] $S_1 \times S_2 \times \cdots \times S_n$ is [[Definition:Non-Empty Set|non-empty]].
+\end{theorem}
+
+\begin{proof}
+We use [[Principle of Mathematical Induction|mathematical induction]].
+The [[Definition:Basis for the Induction|base case]] $n = 2$ is proved in [[Kuratowski Formalization of Ordered Pair]], and the [[Definition:Induction Step|induction step]] follows directly from the definition of an [[Definition:Ordered Tuple as Ordered Set|ordered tuple]].
+{{qed}}
+{{finish}}
+[[Category:Cartesian Product]]
+\end{proof}<|endoftext|>
+\section{Equivalent Conditions for Dedekind-Infinite Set}
+Tags: Infinite Sets
+
+\begin{theorem}
+For a [[Definition:Set|set]] $S$, the following conditions are [[Definition:Logical Equivalence|equivalent]]:
+:$(1): \quad$ $S$ is [[Definition:Dedekind-Infinite|Dedekind-infinite]].
+:$(2): \quad$ $S$ has a [[Definition:Countably Infinite|countably infinite]] [[Definition:Subset|subset]].
+The above [[Definition:Logical Equivalence|equivalence]] can be proven in [[Definition:Zermelo-Fraenkel Set Theory|Zermelo-Fraenkel set theory]].
+If the [[Axiom:Axiom of Countable Choice|axiom of countable choice]] is accepted, then it can be proven that the following condition is also [[Definition:Logical Equivalence|equivalent]] to the above two:
+:$(3): \quad$ $S$ is [[Definition:Infinite|infinite]].
+\end{theorem}
+
+\begin{proof}
+{{ProofWanted}}
+[[Category:Infinite Sets]]
+\end{proof}<|endoftext|>
+\section{Relation Isomorphism Preserves Reflexivity}
+Tags: Relation Isomorphisms, Reflexive Relations
+
+\begin{theorem}
+Let $\struct {S, \mathcal R_1}$ and $\struct {T, \mathcal R_2}$ be [[Definition:Relational Structure|relational structures]].
+Let $\struct {S, \mathcal R_1}$ and $\struct {T, \mathcal R_2}$ be [[Definition:Relation Isomorphism|(relationally) isomorphic]].
+Then $\mathcal R_1$ is a [[Definition:Reflexive Relation|reflexive relation]] {{iff}} $\mathcal R_2$ is also a [[Definition:Reflexive Relation|reflexive relation]].
+\end{theorem}
+
+\begin{proof}
+{{WLOG}} it is necessary to prove only that if $\mathcal R_1$ is [[Definition:Reflexive Relation|reflexive]] then $\mathcal R_2$ is [[Definition:Reflexive Relation|reflexive]].
+Let $\phi: S \to T$ be a [[Definition:Relation Isomorphism|relation isomorphism]].
+Let $y \in T$.
+Let $x = \map {\phi^{-1} } y$.
+As $\phi$ is a [[Definition:Bijection|bijection]] it follows from [[Inverse Element of Bijection]] that:
+:$y = \map \phi x$
+As $\mathcal R_1$ is a [[Definition:Reflexive Relation|reflexive relation]] it follows that:
+:$x \mathrel {\mathcal R_1} x$
+As $\phi$ is a [[Definition:Relation Isomorphism|relation isomorphism]] it follows that:
+:$\map \phi x \mathrel {\mathcal R_2} \map \phi x$
+Hence the result.
+{{qed}} +\end{proof}<|endoftext|> +\section{Preimage of Subset under Composite Mapping} +Tags: Composite Mappings, Preimages under Mappings, Preimage of Subset under Composite Mapping + +\begin{theorem} +Let $S_1, S_2, S_3$ be [[Definition:Set|sets]]. +Let $f: S_1 \to S_2$ and $g: S_2 \to S_3$ be [[Definition:Mapping|mappings]]. +Denote with $g \circ f: S_1 \to S_3$ the [[Definition:Composite Mapping|composition]] of $g$ and $f$. +Let $S_3' \subseteq S_3$ be a [[Definition:Subset|subset]] of $S_3$. +Then: +:$\paren {g \circ f}^{-1} \sqbrk {S_3'} = \paren {f^{-1} \circ g^{-1} } \sqbrk {S_3'}$ +where $g^{-1} \sqbrk {S_3'}$ denotes the [[Definition:Preimage of Subset under Mapping|preimage of $S_3'$ under $g$]]. +\end{theorem} + +\begin{proof} +A [[Definition:Mapping|mapping]] is a specific kind of [[Definition:Relation|relation]]. +Hence, [[Inverse of Composite Relation]] applies, and it follows that: +:$\paren {g \circ f}^{-1} \sqbrk {S_3'} = \paren {f^{-1} \circ g^{-1} } \sqbrk {S_3'}$ +{{qed}} +\end{proof} + +\begin{proof} +Let $x \in S_1$. +Then: +{{begin-eqn}} +{{eqn | l = x + | o = \in + | r = \paren {g \circ f}^{-1} \sqbrk {S_3'} + | c = +}} +{{eqn | ll= \leadstoandfrom + | l = \map {\paren {g \circ f} } x + | o = \in + | r = S_3' + | c = {{Defof|Preimage of Subset under Mapping}} +}} +{{eqn | ll= \leadstoandfrom + | l = \map g {\map f x} + | o = \in + | r = S_3' + | c = {{Defof|Composition of Mappings}} +}} +{{eqn | ll= \leadstoandfrom + | l = \map f x + | o = \in + | r = g^{-1} \sqbrk {S_3'} + | c = {{Defof|Preimage of Subset under Mapping}} +}} +{{eqn | ll= \leadstoandfrom + | l = x + | o = \in + | r = f^{-1} \sqbrk {g^{-1} \sqbrk {S_3'} } + | c = {{Defof|Preimage of Subset under Mapping}} +}} +{{eqn | ll= \leadstoandfrom + | l = x + | o = \in + | r = \paren {f^{-1} \circ g^{-1} } \sqbrk {S_3'} + | c = {{Defof|Composition of Mappings}} +}} +{{end-eqn}} +Hence the result. 
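The identity $\paren {g \circ f}^{-1} \sqbrk {S_3'} = f^{-1} \sqbrk {g^{-1} \sqbrk {S_3'} }$ just proved can be checked on a small finite example. A minimal Python sketch (the particular sets and mappings are arbitrary illustrative choices):

```python
def preimage(func, domain, target):
    """Preimage {x in domain : func(x) in target}."""
    return {x for x in domain if func(x) in target}

S1 = {0, 1, 2, 3, 4}
f = lambda x: x % 3              # f: S1 -> S2
g = lambda y: y * y              # g: S2 -> S3
S2 = {f(x) for x in S1}          # image of f, a subset of the domain of g
S3_sub = {0, 1}                  # the subset S3' of the theorem

lhs = preimage(lambda x: g(f(x)), S1, S3_sub)   # (g o f)^{-1}[S3']
rhs = preimage(f, S1, preimage(g, S2, S3_sub))  # f^{-1}[g^{-1}[S3']]
assert lhs == rhs
```

Every biconditional step of the second proof corresponds to one membership test performed by `preimage`.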
+{{qed}} +\end{proof}<|endoftext|> +\section{Infinite Set has Countably Infinite Subset/Proof 4} +Tags: Infinite Set has Countably Infinite Subset + +\begin{theorem} +If the [[Axiom:Axiom of Countable Choice|axiom of countable choice]] is accepted, then it can be proven that every [[Definition:Infinite|infinite set]] has a [[Definition:Countable|countably infinite]] [[Definition:Subset|subset]]. +\end{theorem} + +\begin{proof} +Let $S$ be an [[Definition:Infinite Set|infinite set]]. +For all $n \in \N$, let: +:$\mathcal F_n = \left\{{T \subseteq S : \left\vert{T}\right\vert = n}\right\}$ +where $\left\vert{T}\right\vert$ denotes the [[Definition:Cardinality|cardinality]] of $T$. +From [[Set is Infinite iff exist Subsets of all Finite Cardinalities]]: +:$\mathcal F_n$ is [[Definition:Non-Empty Set|non-empty]]. +Using the [[Axiom:Axiom of Countable Choice|axiom of countable choice]], we can obtain a [[Definition:Sequence|sequence]] $\left\langle{S_n}\right\rangle_{n \in \N}$ such that $S_n \in \mathcal F_n$ for all $n \in \N$. +Define: +:$\displaystyle T = \bigcup_{n \mathop \in \N} S_n \subseteq S$ +For all $n \in \N$, $S_n$ is a [[Definition:Subset|subset]] of $T$ whose [[Definition:Cardinality|cardinality]] is $n$. +From [[Set is Infinite iff exist Subsets of all Finite Cardinalities]]: +:$T$ is infinite. +Because the [[Countable Union of Countable Sets is Countable|countable union of finite sets is countable]], $T$ is [[Definition:Countable Set|countable]]. +{{qed}} +\end{proof}<|endoftext|> +\section{Set is Infinite iff exist Subsets of all Finite Cardinalities} +Tags: Infinite Sets + +\begin{theorem} +A [[Definition:Set|set]] $S$ is [[Definition:Infinite Set|infinite]] {{iff}} for all $n \in \N$, there exists a [[Definition:Subset|subset]] of $S$ whose [[Definition:Cardinality|cardinality]] is $n$. +\end{theorem} + +\begin{proof} +Let the [[Definition:Cardinality|cardinality]] of a [[Definition:Set|set]] $S$ be denoted $\card S$. 
+=== Necessary Condition ===
+Suppose $S$ is [[Definition:Infinite Set|infinite]].
+We use [[Principle of Mathematical Induction|mathematical induction]] on $n$.
+For all $n \in \N$, let $\map P n$ be the [[Definition:Proposition|proposition]]:
+:There exists a [[Definition:Subset|subset]] of $S$ whose [[Definition:Cardinality|cardinality]] is $n$.
+==== Basis for the Induction ====
+From [[Empty Set is Subset of All Sets]]:
+:$\O \subseteq S$
+From [[Cardinality of Empty Set]], its [[Definition:Cardinality|cardinality]] is $0$.
+This is our [[Definition:Basis for the Induction|basis for the induction]].
+==== Induction Hypothesis ====
+Now we need to show that, if $\map P k$ is true, where $k \ge 0$, then it logically follows that $\map P {k + 1}$ is true.
+So this is our [[Definition:Induction Hypothesis|induction hypothesis]]:
+:There exists a [[Definition:Subset|subset]] of $S$ whose [[Definition:Cardinality|cardinality]] is $k$.
+Then we need to show:
+:There exists a [[Definition:Subset|subset]] of $S$ whose [[Definition:Cardinality|cardinality]] is $k + 1$.
+==== Induction Step ====
+This is our [[Definition:Induction Step|induction step]]:
+By the [[Set is Infinite iff exist Subsets of all Finite Cardinalities#Induction Hypothesis|induction hypothesis]], there exists $T \subseteq S$ such that $\card T = k$.
+First, note that $S \ne T$: otherwise $S$ would have [[Definition:Cardinality|cardinality]] $k$ and so be [[Definition:Finite Set|finite]], contradicting the hypothesis that $S$ is [[Definition:Infinite Set|infinite]].
+So $T$ is a [[Definition:Proper Subset|proper subset]] of $S$.
+So:
+{{begin-eqn}}
+{{eqn | l = S \setminus T
+ | o = \ne
+ | r = \O
+ | c = [[Set Difference with Proper Subset]]
+}}
+{{eqn | ll= \leadsto
+ | l = \exists x
+ | o = \in
+ | r = S \setminus T
+ | c = {{Defof|Empty Set}}
+}}
+{{eqn | ll= \leadsto
+ | l = \set x
+ | o = \subseteq
+ | r = S \setminus T
+ | c = {{Defof|Singleton}}
+}}
+{{eqn | ll= \leadsto
+ | l = \set x
+ | o = \subseteq
+ | r = S
+ | c = [[Set Difference is Subset]]: $S \setminus T \subseteq S$
+}}
+{{eqn | ll= \leadsto
+ | l = T \cup \set x
+ | o = \subseteq
+ | r = S
+ | c = $T \subseteq S$ and $\set x \subseteq S$, and [[Union is Smallest Superset]]
+}}
+{{end-eqn}}
+As $x \notin T$ it follows that:
+:$\set x \cap T = \O$
+Thus by [[Cardinality of Set Union]]:
+:$\card {T \cup \set x} = k + 1$
+That is, $T \cup \set x$ is a [[Definition:Subset|subset]] of $S$ whose [[Definition:Cardinality|cardinality]] is $k + 1$.
+This is the set whose existence was to be proved.
+So $\map P k \implies \map P {k + 1}$ and the result follows by the [[Principle of Mathematical Induction]].
+Therefore:
+:For all $n \in \N$, there exists a [[Definition:Subset|subset]] of $S$ whose [[Definition:Cardinality|cardinality]] is $n$.
+{{qed|lemma}}
+=== Sufficient Condition ===
+Suppose that for all $n \in \N$, there exists a [[Definition:Subset|subset]] of $S$ whose [[Definition:Cardinality|cardinality]] is $n$.
+{{AimForCont}} that $S$ is [[Definition:Finite Set|finite]].
+Let $N = \card S$.
+As $N \in \N$ it follows that $N + 1 \in \N$.
+[[Definition:By Hypothesis|By hypothesis]], there exists a [[Definition:Subset|subset]] $T \subseteq S$ whose [[Definition:Cardinality|cardinality]] is $N + 1$.
+From [[Cardinality of Subset of Finite Set]], $\card S \ge \card T$.
+But then $N = \card S \ge \card T = N + 1$, which contradicts the fact that $N < N + 1$.
+From this [[Proof by Contradiction|contradiction]] it follows that $S$ can not be [[Definition:Finite Set|finite]].
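The induction step above is constructive: from a subset $T$ of cardinality $k$ and any element of $S \setminus T$, it produces a subset of cardinality $k + 1$. A minimal Python sketch, simulating the infinite set $S$ by a large finite range (an illustrative assumption):

```python
# Constructive reading of the induction step: from a subset T with |T| = k,
# pick any x in S \ T (nonempty, since T is a proper subset of S) and
# pass to T ∪ {x}, which has cardinality k + 1.
S = set(range(1000))     # large finite range standing in for an infinite set
T = set()                # basis: the empty subset has cardinality 0
for k in range(20):
    assert len(T) == k           # induction hypothesis: |T| = k
    x = next(iter(S - T))        # some x in S \ T
    T = T | {x}                  # T ∪ {x}
assert len(T) == 20 and T <= S
```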
+{{qed}} +\end{proof}<|endoftext|> +\section{Relation Isomorphism Preserves Symmetry} +Tags: Relation Isomorphisms, Symmetric Relations + +\begin{theorem} +Let $\left({S, \mathcal R_1}\right)$ and $\left({T, \mathcal R_2}\right)$ be [[Definition:Relational Structure|relational structures]]. +Let $\left({S, \mathcal R_1}\right)$ and $\left({T, \mathcal R_2}\right)$ be [[Definition:Relation Isomorphism|(relationally) isomorphic]]. +Then $\mathcal R_1$ is a [[Definition:Symmetric Relation|symmetric relation]] {{iff}} $\mathcal R_2$ is also a [[Definition:Symmetric Relation|symmetric relation]]. +\end{theorem} + +\begin{proof} +Let $\phi: S \to T$ be a [[Definition:Relation Isomorphism|relation isomorphism]]. +By [[Inverse of Relation Isomorphism is Relation Isomorphism]] it follows that $\phi^{-1}: T \to S$ is also a [[Definition:Relation Isomorphism|relation isomorphism]]. +Thus [[Definition:WLOG|WLOG]] it suffices to prove only that if $\mathcal R_1$ is [[Definition:Symmetric Relation|symmetric]], then also $\mathcal R_2$ is [[Definition:Symmetric Relation|symmetric]]. +So, suppose $\mathcal R_1$ is a [[Definition:Symmetric Relation|symmetric relation]]. +Let $y_1, y_2 \in T$ such that $y_1 \mathrel {\mathcal R_2} y_2$. +Let $x_1 = \phi^{-1} \left({y_1}\right)$ and $x_2 = \phi^{-1} \left({y_2}\right)$. +As $\phi$ is a [[Definition:Bijection|bijection]] it follows from [[Inverse Element of Bijection]] that $y_1 = \phi \left({x_1}\right)$ and $y_2 = \phi \left({x_2}\right)$. 
+As $\phi^{-1}$ is a [[Definition:Relation Isomorphism|relation isomorphism]] it follows that: +:$x_1 = \phi^{-1} \left({y_1}\right) \mathrel {\mathcal R_1} \phi^{-1} \left({y_2}\right) = x_2$ +As $\mathcal R_1$ is a [[Definition:Symmetric Relation|symmetric relation]] it follows that: +:$x_2 \mathrel {\mathcal R_1} x_1$ +As $\phi$ is a [[Definition:Relation Isomorphism|relation isomorphism]] it follows that: +:$y_2 = \phi \left({x_2}\right) \mathrel {\mathcal R_2} \phi \left({x_1}\right) = y_1$ +So: +:$y_1 \mathrel {\mathcal R_2} y_2 \implies y_2 \mathrel {\mathcal R_2} y_1$ +and so by definition $\mathcal R_2$ is a [[Definition:Symmetric Relation|symmetric relation]]. +Hence the result. +{{qed}} +\end{proof}<|endoftext|> +\section{Set Difference with Proper Subset} +Tags: Set Difference, Subsets + +\begin{theorem} +Let $S$ be a [[Definition:Set|set]]. +Let $T \subsetneq S$ be a [[Definition:Proper Subset|proper subset]] of $S$. + +Let $S \setminus T$ denote the [[Definition:Set Difference|set difference]] between $S$ and $T$. +Then: +:$S \setminus T \ne \O$ +where $\O$ denotes the [[Definition:Empty Set|empty set]]. +\end{theorem} + +\begin{proof} +{{AimForCont}} $S \setminus T = \O$. +Then: +:$\not \exists x \in S: x \notin T$ +By [[De Morgan's Laws (Predicate Logic)|De Morgan's laws]]: +:$\forall x \in S: x \in T$ +By definition of [[Definition:Subset|subset]]: +:$S \subseteq T$ +By definition of [[Definition:Proper Subset|proper subset]], we have that $T \subseteq S$ such that $T \ne S$. +But we have $T \subseteq S$ and $S \subseteq T$. 
+So by definition of [[Definition:Set Equality/Definition 2|set equality]]:
+:$S = T$
+From this [[Proof by Contradiction|contradiction]] it follows that:
+:$S \setminus T \ne \O$
+{{qed}}
+[[Category:Set Difference]]
+[[Category:Subsets]]
+\end{proof}<|endoftext|>
+\section{Order Isomorphism iff Strictly Increasing Surjection}
+Tags: Order Isomorphisms, Surjections
+
+\begin{theorem}
+Let $\left({S, \preceq_1}\right)$ and $\left({T, \preceq_2}\right)$ be [[Definition:Totally Ordered Set|totally ordered sets]].
+A [[Definition:Mapping|mapping]] $\phi: \left({S, \preceq_1}\right) \to \left({T, \preceq_2}\right)$ is an [[Definition:Order Isomorphism|order isomorphism]] iff:
+: $(1): \quad \phi$ is a [[Definition:Surjection|surjection]]
+: $(2): \quad \forall x, y \in S: x \mathop{\prec_1} y \implies \phi \left({x}\right) \mathop{\prec_2} \phi \left({y}\right)$
+\end{theorem}
+
+\begin{proof}
+=== Necessary Condition ===
+Let $\phi: \left({S, \preceq_1}\right) \to \left({T, \preceq_2}\right)$ be an [[Definition:Order Isomorphism|order isomorphism]].
+Then by definition $\phi$ is a [[Definition:Bijection|bijection]] and so a [[Definition:Surjection|surjection]].
+Suppose $x \mathop{\prec_1} y$.
+That is:
+:$x \mathop{\preceq_1} y$
+:$x \ne y$
+Then:
+:$x \mathop{\prec_1} y \implies \phi \left({x}\right) \mathop{\preceq_2} \phi \left({y}\right)$
+as $\phi$ is an [[Definition:Order Isomorphism|order isomorphism]].
+But as $\phi$ is a [[Definition:Bijection|bijection]] it is also an [[Definition:Injection|injection]].
+Thus:
+: $\phi \left({x}\right) = \phi \left({y}\right) \implies x = y$
+and so it follows that:
+:$x \mathop{\prec_1} y \implies \phi \left({x}\right) \mathop{\prec_2} \phi \left({y}\right)$
+{{qed|lemma}}
+=== Sufficient Condition ===
+Suppose $\phi$ is a [[Definition:Mapping|mapping]] which satisfies the conditions:
+: $(1): \quad \phi$ is a [[Definition:Surjection|surjection]]
+: $(2): \quad \forall x, y \in S: x \mathop{\prec_1} y \implies \phi \left({x}\right) \mathop{\prec_2} \phi \left({y}\right)$
+From $(2)$ and [[Strictly Increasing Mapping is Increasing]] we have:
+:$x \mathop{\preceq_1} y \implies \phi \left({x}\right) \mathop{\preceq_2} \phi \left({y}\right)$
+Now suppose $\phi \left({x}\right) \mathop{\preceq_2} \phi \left({y}\right)$.
+{{AimForCont}} $y \mathop{\prec_1} x$.
+Then from $(2)$ it would follow that $\phi \left({y}\right) \mathop{\prec_2} \phi \left({x}\right)$, contradicting $\phi \left({x}\right) \mathop{\preceq_2} \phi \left({y}\right)$.
+So it is not the case that $y \mathop{\prec_1} x$.
+So from the [[Trichotomy Law (Ordering)|Trichotomy Law]] $x \mathop{\preceq_1} y$.
+Thus it follows that:
+: $x \mathop{\preceq_1} y \iff \phi \left({x}\right) \mathop{\preceq_2} \phi \left({y}\right)$
+It follows from [[Order Isomorphism is Surjective Order Embedding]] that $\phi$ is an [[Definition:Order Isomorphism|order isomorphism]].
+{{qed}}
+\end{proof}<|endoftext|>
+\section{Relation Isomorphism Preserves Transitivity}
+Tags: Relation Isomorphisms, Transitive Relations
+
+\begin{theorem}
+Let $\struct {S, \RR_1}$ and $\struct {T, \RR_2}$ be [[Definition:Relational Structure|relational structures]].
+Let $\struct {S, \RR_1}$ and $\struct {T, \RR_2}$ be [[Definition:Relation Isomorphism|(relationally) isomorphic]].
+Then $\RR_1$ is a [[Definition:Transitive Relation|transitive relation]] {{iff}} $\RR_2$ is a [[Definition:Transitive Relation|transitive relation]].
+\end{theorem}
+
+\begin{proof}
+Let $\phi: S \to T$ be a [[Definition:Relation Isomorphism|relation isomorphism]].
+By [[Inverse of Relation Isomorphism is Relation Isomorphism]] it follows that $\phi^{-1}: T \to S$ is also a [[Definition:Relation Isomorphism|relation isomorphism]]. +{{WLOG}}, it is therefore sufficient to prove only that if $\RR_1$ is [[Definition:Transitive Relation|transitive]], then $\RR_2$ is also [[Definition:Transitive Relation|transitive]]. +So, suppose $\RR_1$ is a [[Definition:Transitive Relation|transitive relation]]. +Let $y_1, y_2, y_3 \in T$ such that $y_1 \mathrel {\RR_2} y_2$ and $y_2 \mathrel {\RR_2} y_3$. +Let $x_1 = \map {\phi^{-1} } {y_1}$, $x_2 = \map {\phi^{-1} } {y_2}$ and $x_3 = \map {\phi^{-1} } {y_3}$. +As $\phi$ is a [[Definition:Bijection|bijection]] it follows from [[Inverse Element of Bijection]] that: +:$y_1 = \map \phi {x_1}$ +:$y_2 = \map \phi {x_2}$ +:$y_3 = \map \phi {x_3}$ +As $\phi^{-1}$ is a [[Definition:Relation Isomorphism|relation isomorphism]] it follows that: +:$x_1 = \map {\phi^{-1} } {y_1} \mathrel {\RR_1} \map {\phi^{-1} } {y_2} = x_2$ +:$x_2 = \map {\phi^{-1} } {y_2} \mathrel {\RR_1} \map {\phi^{-1} } {y_3} = x_3$ +As $\RR_1$ is a [[Definition:Transitive Relation|transitive relation]] it follows that: +:$x_1 \mathrel {\RR_1} x_3$ +As $\phi$ is a [[Definition:Relation Isomorphism|relation isomorphism]] it follows that: +:$y_1 = \map \phi {x_1} \mathrel {\RR_2} \map \phi {x_3} = y_3$ +Hence if $y_1 \mathrel {\RR_2} y_2$ and $y_2 \mathrel {\RR_2} y_3$, then also: +:$y_1 \mathrel {\RR_2} y_3$ +Hence, $\RR_2$ is a [[Definition:Transitive Relation|transitive relation]], by definition. +Hence the result. +{{qed}} +\end{proof}<|endoftext|> +\section{Relation Isomorphism Preserves Antisymmetry} +Tags: Relation Isomorphisms, Symmetric Relations + +\begin{theorem} +Let $\left({S, \mathcal R_1}\right)$ and $\left({T, \mathcal R_2}\right)$ be [[Definition:Relational Structure|relational structures]]. 
+Let $\left({S, \mathcal R_1}\right)$ and $\left({T, \mathcal R_2}\right)$ be [[Definition:Relation Isomorphism|(relationally) isomorphic]].
+Then $\mathcal R_1$ is an [[Definition:Antisymmetric Relation|antisymmetric relation]] {{iff}} $\mathcal R_2$ is also an [[Definition:Antisymmetric Relation|antisymmetric relation]].
+\end{theorem}
+
+\begin{proof}
+Let $\phi: S \to T$ be a [[Definition:Relation Isomorphism|relation isomorphism]].
+By [[Inverse of Relation Isomorphism is Relation Isomorphism]] it follows that $\phi^{-1}: T \to S$ is also a [[Definition:Relation Isomorphism|relation isomorphism]].
+{{WLOG}}, it therefore suffices to prove only that if $\mathcal R_1$ is [[Definition:Antisymmetric Relation|antisymmetric]], then also $\mathcal R_2$ is [[Definition:Antisymmetric Relation|antisymmetric]].
+So, suppose $\mathcal R_1$ is an [[Definition:Antisymmetric Relation|antisymmetric relation]].
+Let $y_1, y_2 \in T$ such that both $y_1 \mathrel {\mathcal R_2} y_2$ and $y_2 \mathrel {\mathcal R_2} y_1$.
+Let $x_1 = \phi^{-1} \left({y_1}\right)$ and $x_2 = \phi^{-1} \left({y_2}\right)$.
+As $\phi$ is a [[Definition:Bijection|bijection]] it follows from [[Inverse Element of Bijection]] that $y_1 = \phi \left({x_1}\right)$ and $y_2 = \phi \left({x_2}\right)$.
+As $\phi^{-1}$ is a [[Definition:Relation Isomorphism|relation isomorphism]] it follows that:
+: $x_1 = \phi^{-1} \left({y_1}\right) \mathrel {\mathcal R_1} \phi^{-1} \left({y_2}\right) = x_2$
+: $x_2 = \phi^{-1} \left({y_2}\right) \mathrel {\mathcal R_1} \phi^{-1} \left({y_1}\right) = x_1$
+As $\mathcal R_1$ is an [[Definition:Antisymmetric Relation|antisymmetric relation]] it follows that $x_1 = x_2$.
+As $\phi$ is a [[Definition:Bijection|bijection]] it follows that:
+: $y_1 = \phi \left({x_1}\right) = \phi \left({x_2}\right) = y_2$
+Hence $y_1 = y_2$ and so by definition, $\mathcal R_2$ is an [[Definition:Antisymmetric Relation|antisymmetric relation]].
+Hence the result.
+{{qed}} +\end{proof}<|endoftext|> +\section{Relation Isomorphism Preserves Equivalence Relations} +Tags: Relation Isomorphisms, Equivalence Relations + +\begin{theorem} +Let $\struct {S, \RR_1}$ and $\struct {T, \RR_2}$ be [[Definition:Relational Structure|relational structures]]. +Let $\struct {S, \RR_1}$ and $\struct {T, \RR_2}$ be [[Definition:Relation Isomorphism|(relationally) isomorphic]]. +Then $\RR_1$ is an [[Definition:Equivalence Relation|equivalence relation]] {{iff}} $\RR_2$ is also an [[Definition:Equivalence Relation|equivalence relation]]. +\end{theorem} + +\begin{proof} +Let $\phi: S \to T$ be a [[Definition:Relation Isomorphism|relation isomorphism]]. +By [[Inverse of Relation Isomorphism is Relation Isomorphism]] it follows that $\phi^{-1}: T \to S$ is also a [[Definition:Relation Isomorphism|relation isomorphism]]. +{{WLOG}}, it thus is necessary to prove only that if $\RR_1$ is an [[Definition:Equivalence Relation|equivalence relation]] then $\RR_2$ is an [[Definition:Equivalence Relation|equivalence relation]]. +So, suppose $\RR_1$ is an [[Definition:Equivalence Relation|equivalence relation]]. +By definition: +:$(1): \quad \RR_1$ is [[Definition:Reflexive Relation|reflexive]] +:$(2): \quad \RR_1$ is [[Definition:Symmetric Relation|symmetric]] +:$(3): \quad \RR_1$ is [[Definition:Transitive Relation|transitive]]. +It follows that: +:From [[Relation Isomorphism Preserves Reflexivity]], $\RR_2$ is [[Definition:Reflexive Relation|reflexive]]. +:From [[Relation Isomorphism Preserves Symmetry]], $\RR_2$ is [[Definition:Symmetric Relation|symmetric]]. +:From [[Relation Isomorphism Preserves Transitivity]], $\RR_2$ is [[Definition:Transitive Relation|transitive]]. +So by definition $\RR_2$ is an [[Definition:Equivalence Relation|equivalence relation]]. +Hence the result. 
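The transfer of reflexivity, symmetry and transitivity along a relation isomorphism, combined here into the equivalence-relation result, can be exercised on a small finite example. A minimal Python sketch (the particular sets, bijection and partition are illustrative assumptions):

```python
# Transport a relation along a bijection phi (x R1 y iff phi(x) R2 phi(y))
# and check that the three defining properties survive the transport.
S = {0, 1, 2, 3}
phi = {0: 'a', 1: 'b', 2: 'c', 3: 'd'}     # a bijection from S to T
T = set(phi.values())

# R1: the equivalence relation on S induced by the partition {{0, 1}, {2, 3}}
R1 = {(x, y) for x in S for y in S if (x < 2) == (y < 2)}
# R2: the transported relation on T
R2 = {(phi[x], phi[y]) for (x, y) in R1}

def reflexive(R, A):
    return all((a, a) in R for a in A)

def symmetric(R):
    return all((b, a) in R for (a, b) in R)

def transitive(R):
    return all((a, d) in R for (a, b) in R for (c, d) in R if b == c)

for R, A in ((R1, S), (R2, T)):
    assert reflexive(R, A) and symmetric(R) and transitive(R)
```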
+{{qed}} +\end{proof}<|endoftext|> +\section{Pointwise Supremum of Measurable Functions is Measurable} +Tags: Measure Theory + +\begin{theorem} +Let $\left({X, \Sigma}\right)$ be a [[Definition:Measurable Space|measurable space]], and let $I$ be a [[Definition:Countable Set|countable set]]. +Let $\left({f_i}\right)_{i \in I}$, $f_i: X \to \overline \R$ be an [[Definition:Indexed Set|$I$-indexed collection]] of [[Definition:Measurable Function|$\Sigma$-measurable functions]]. +Then the [[Definition:Pointwise Supremum|pointwise supremum]] $\displaystyle \sup_{i \mathop \in I} f_i: X \to \overline \R$ is also [[Definition:Measurable Function|$\Sigma$-measurable]]. +\end{theorem} + +\begin{proof} +Let $a \in \R$; for all $i \in I$, we have by [[Characterization of Measurable Functions]] that: +:$\left\{{f_i > a}\right\} \in \Sigma$ +and as $\Sigma$ is a [[Definition:Sigma-Algebra|$\sigma$-algebra]] and $I$ is [[Definition:Countable Set|countable]], also: +:$\displaystyle \bigcup_{i \mathop \in I} \left\{{f_i > a}\right\} \in \Sigma$ +We will now show that: +:$\displaystyle \left\{{\sup_{i \mathop \in I} f_i > a}\right\} = \bigcup_{i \mathop \in I} \left\{{f_i > a}\right\}$ +First, observe that for all $i \in I$: +:$f_i \left({x}\right) \le \displaystyle \sup_{i \mathop \in I} f_i \left({x}\right)$ +and hence: +:$\left\{{f_i > a}\right\} \subseteq \displaystyle \left\{{\sup_{i \mathop \in I} f_i > a}\right\}$ +From [[Union is Smallest Superset/Family of Sets|Union is Smallest Superset: Family of Sets]]: +:$\displaystyle \bigcup_{i \mathop \in I} \left\{{f_i > a}\right\} \subseteq \left\{{\sup_{i \mathop \in I} f_i > a}\right\}$ +Next, suppose that: +:$x \notin \displaystyle \bigcup_{i \mathop \in I} \left\{{f_i > a}\right\}$ +Then, by definition of [[Definition:Set Union|union]]: +:$\forall i \in I: f_i \left({x}\right) \le a$ +which is to say that $a$ is an [[Definition:Upper Bound of Mapping|upper bound]] for the $f_i \left({x}\right)$. 
+Hence, by definition of [[Definition:Supremum of Set|supremum]], it follows that: +:$\displaystyle \sup_{i \mathop \in I} f_i \left({x}\right) \le a$ +and therefore: +:$x \notin \displaystyle \left\{{\sup_{i \mathop \in I} f_i > a}\right\}$ +Thus, we have shown: +:$\displaystyle \left\{{\sup_{i \mathop \in I} f_i > a}\right\} = \bigcup_{i \mathop \in I} \left\{{f_i > a}\right\} \in \Sigma$ +{{explain|There must be a result covering this type of set equality}} +and by [[Characterization of Measurable Functions]], it follows that $\displaystyle \sup_{i \mathop \in I} f_i$ is [[Definition:Measurable Function|measurable]]. +{{qed}} +\end{proof}<|endoftext|> +\section{Pointwise Infimum of Measurable Functions is Measurable} +Tags: Measure Theory + +\begin{theorem} +Let $\left({X, \Sigma}\right)$ be a [[Definition:Measurable Space|measurable space]], and let $I$ be a [[Definition:Countable Set|countable set]]. +Let $\left({f_i}\right)_{i \in I}$, $f_i: X \to \overline{\R}$ be an [[Definition:Indexed Set|$I$-indexed collection]] of [[Definition:Measurable Function|$\Sigma$-measurable functions]]. +Then the [[Definition:Pointwise Infimum|pointwise infimum]] $\displaystyle \inf_{i \mathop \in I} f_i: X \to \overline{\R}$ is also [[Definition:Measurable Function|$\Sigma$-measurable]]. +\end{theorem} + +\begin{proof} +From [[Infimum as Supremum]], we have the [[Equality of Mappings]]: +:$\displaystyle \inf_{i \mathop \in I} f_i = - \left({\sup_{i \mathop \in I} \, \left({- f_i}\right)}\right)$ +Now, from [[Negative of Measurable Function is Measurable]] and [[Pointwise Supremum of Measurable Functions is Measurable]], it follows that: +:$\displaystyle - \left({\sup_{i \mathop \in I} \, \left({- f_i}\right)}\right)$ +is a [[Definition:Measurable Function|measurable function]]. +Hence the result. 
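Both of the preceding proofs rest on elementary identities: the level-set identity $\left\{{\sup_i f_i > a}\right\} = \bigcup_i \left\{{f_i > a}\right\}$ and the reduction $\inf_i f_i = -\sup_i \left({-f_i}\right)$. As an illustrative aside (not part of the proofs), these can be checked mechanically on a finite ground set; the sample functions in this Python sketch are arbitrary:

```python
# Illustration only: finite check of
#   {sup_i f_i > a} = union_i {f_i > a}   and   inf_i f_i = -sup_i (-f_i)

X = range(6)
fs = [
    lambda x: x - 2,        # f_0
    lambda x: 3 - x,        # f_1
    lambda x: (x % 3) - 1,  # f_2
]
a = 0

sup_f = {x: max(f(x) for f in fs) for x in X}
inf_f = {x: min(f(x) for f in fs) for x in X}

lhs = {x for x in X if sup_f[x] > a}
rhs = set().union(*({x for x in X if f(x) > a} for f in fs))
assert lhs == rhs  # the level-set identity

# infimum as negated supremum of negations
assert all(inf_f[x] == -max(-f(x) for f in fs) for x in X)
```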
+{{qed}} +\end{proof}<|endoftext|> +\section{Pointwise Upper Limit of Measurable Functions is Measurable} +Tags: Measure Theory + +\begin{theorem} +Let $\left({X, \Sigma}\right)$ be a [[Definition:Measurable Space|measurable space]]. +Let $\left({f_n}\right)_{n \in \N}$, $f_n: X \to \overline{\R}$ be a [[Definition:Sequence|sequence]] of [[Definition:Measurable Function|$\Sigma$-measurable functions]]. +Then the [[Definition:Pointwise Upper Limit|pointwise upper limit]] $\displaystyle \limsup_{n \to \infty} f_n: X \to \overline{\R}$ is also [[Definition:Measurable Function|$\Sigma$-measurable]]. +\end{theorem} + +\begin{proof} +By definition of [[Definition:Upper Limit|upper limit]], we have: +:$\displaystyle \limsup_{n \to \infty} f_n = \inf_{m \mathop \in \N} \ \sup_{n \ge m} f_n$ +The result follows from combining: +:[[Pointwise Supremum of Measurable Functions is Measurable]] +:[[Pointwise Infimum of Measurable Functions is Measurable]] +{{qed}} +\end{proof}<|endoftext|> +\section{Pointwise Lower Limit of Measurable Functions is Measurable} +Tags: Measure Theory + +\begin{theorem} +Let $\left({X, \Sigma}\right)$ be a [[Definition:Measurable Space|measurable space]]. +Let $\left({f_n}\right)_{n \mathop \in \N}$, $f_n: X \to \overline \R$ be a [[Definition:Sequence|sequence]] of [[Definition:Measurable Function|$\Sigma$-measurable functions]]. +Then the [[Definition:Pointwise Lower Limit|pointwise lower limit]]: +:$\displaystyle \liminf_{n \mathop \to \infty} f_n: X \to \overline \R$ +is also [[Definition:Measurable Function|$\Sigma$-measurable]]. 
+\end{theorem} + +\begin{proof} +By definition of [[Definition:Limit Inferior|limit inferior]], we have: +:$\displaystyle \liminf_{n \mathop \to \infty} f_n = \sup_{m \mathop \in \N} \ \inf_{n \mathop \ge m} f_n$ +The result follows from combining: +:[[Pointwise Infimum of Measurable Functions is Measurable]] +:[[Pointwise Supremum of Measurable Functions is Measurable]] +{{qed}} +\end{proof}<|endoftext|> +\section{Pointwise Limit of Measurable Functions is Measurable} +Tags: Measure Theory + +\begin{theorem} +Let $\left({X, \Sigma}\right)$ be a [[Definition:Measurable Space|measurable space]]. +Let $\left({f_n}\right)_{n \in \N}$, $f_n: X \to \overline{\R}$ be a [[Definition:Sequence|sequence]] of [[Definition:Measurable Function|$\Sigma$-measurable functions]]. +Then the [[Definition:Pointwise Limit|pointwise limit]] $\displaystyle \lim_{n \to \infty} f_n: X \to \overline{\R}$ is also [[Definition:Measurable Function|$\Sigma$-measurable]]. +\end{theorem} + +\begin{proof} +From [[Convergence of Limsup and Liminf]], it follows that: +:$\displaystyle \lim_{n \to \infty} f_n = \limsup_{n \to \infty} f_n$ +We have [[Pointwise Upper Limit of Measurable Functions is Measurable]]. +Hence the result. +{{qed}} +\end{proof}<|endoftext|> +\section{Pointwise Sum of Measurable Functions is Measurable} +Tags: Measure Theory + +\begin{theorem} +Let $\left({X, \Sigma}\right)$ be a [[Definition:Measurable Space|measurable space]]. +Let $f, g: X \to \overline{\R}$ be [[Definition:Measurable Function|$\Sigma$-measurable functions]]. +Assume that the [[Definition:Pointwise Addition of Extended Real-Valued Functions|pointwise sum]] $f + g: X \to \overline{\R}$ is well-defined. +Then $f + g$ is a [[Definition:Measurable Function|$\Sigma$-measurable function]]. 
+\end{theorem}
+
+\begin{proof}
+By [[Measurable Function Pointwise Limit of Simple Functions]], we find [[Definition:Sequence|sequences]] $\left({f_n}\right)_{n \in \N}, \left({g_n}\right)_{n \in \N}$ of [[Definition:Simple Function|simple functions]] such that:
+:$\displaystyle f = \lim_{n \to \infty} f_n$
+:$\displaystyle g = \lim_{n \to \infty} g_n$
+where the limits are [[Definition:Pointwise Limit|pointwise]].
+It follows that for all $x \in X$:
+:$f \left({x}\right) + g \left({x}\right) = \displaystyle \lim_{n \to \infty} \left({f_n \left({x}\right) + g_n \left({x}\right)}\right)$
+{{explain|There needs to be a bunch of results establishing such equalities of convergence in function spaces}}
+so that we have the [[Definition:Pointwise Limit|pointwise limit]]:
+:$\displaystyle f + g = \lim_{n \to \infty} \left({f_n + g_n}\right)$
+By [[Pointwise Sum of Simple Functions is Simple Function]], $f + g$ is a [[Definition:Pointwise Limit|pointwise limit]] of [[Definition:Simple Function|simple functions]].
+By [[Simple Function is Measurable]], $f + g$ is a [[Definition:Pointwise Limit|pointwise limit]] of [[Definition:Measurable Function|measurable functions]].
+Hence $f + g$ is [[Definition:Measurable Function|measurable]], by [[Pointwise Limit of Measurable Functions is Measurable]].
+{{qed}}
+\end{proof}<|endoftext|>
+\section{Pointwise Difference of Measurable Functions is Measurable}
+Tags: Measure Theory
+
+\begin{theorem}
+Let $\struct {X, \Sigma}$ be a [[Definition:Measurable Space|measurable space]].
+Let $f, g: X \to \overline \R$ be [[Definition:Measurable Function|$\Sigma$-measurable functions]].
+Assume that the [[Definition:Pointwise Subtraction of Extended Real-Valued Functions|pointwise difference]] $f - g: X \to \overline \R$ is well-defined.
+Then $f - g$ is a [[Definition:Measurable Function|$\Sigma$-measurable function]].
+\end{theorem}
+
+\begin{proof}
+We have the evident identity:
+:$f - g = f + \paren {-g}$
+By [[Negative of Measurable Function is Measurable]], $-g$ is a [[Definition:Measurable Function|measurable function]].
+Hence so is $f - g$, by [[Pointwise Sum of Measurable Functions is Measurable]].
+{{qed}}
+\end{proof}<|endoftext|>
+\section{Pointwise Product of Measurable Functions is Measurable}
+Tags: Measure Theory
+
+\begin{theorem}
+Let $\left({X, \Sigma}\right)$ be a [[Definition:Measurable Space|measurable space]].
+Let $f, g: X \to \overline{\R}$ be [[Definition:Measurable Function|$\Sigma$-measurable functions]].
+Then the [[Definition:Pointwise Product of Extended Real-Valued Functions|pointwise product]] $f \cdot g: X \to \overline{\R}$ is also [[Definition:Measurable Function|$\Sigma$-measurable]].
+\end{theorem}
+
+\begin{proof}
+By [[Measurable Function Pointwise Limit of Simple Functions]], we find [[Definition:Sequence|sequences]] $\left({f_n}\right)_{n \in \N}, \left({g_n}\right)_{n \in \N}$ of [[Definition:Simple Function|simple functions]] such that:
+:$\displaystyle f = \lim_{n \to \infty} f_n$
+:$\displaystyle g = \lim_{n \to \infty} g_n$
+where the limits are [[Definition:Pointwise Limit|pointwise]].
+It follows that for all $x \in X$:
+:$f \left({x}\right) \cdot g \left({x}\right) = \displaystyle \lim_{n \to \infty} \left({f_n \left({x}\right) \cdot g_n \left({x}\right)}\right)$
+{{explain|There needs to be a bunch of results establishing such equalities of convergence in function spaces}}
+so that we have the [[Definition:Pointwise Limit|pointwise limit]]:
+:$\displaystyle f \cdot g = \lim_{n \to \infty} \left({f_n \cdot g_n}\right)$
+By [[Pointwise Product of Simple Functions is Simple Function]], $f \cdot g$ is a [[Definition:Pointwise Limit|pointwise limit]] of [[Definition:Simple Function|simple functions]].
+By [[Simple Function is Measurable]], $f \cdot g$ is a [[Definition:Pointwise Limit|pointwise limit]] of [[Definition:Measurable Function|measurable functions]].
+Hence $f \cdot g$ is [[Definition:Measurable Function|measurable]], by [[Pointwise Limit of Measurable Functions is Measurable]]. +{{qed}} +\end{proof}<|endoftext|> +\section{Pointwise Maximum of Measurable Functions is Measurable} +Tags: Measure Theory + +\begin{theorem} +Let $\left({X, \Sigma}\right)$ be a [[Definition:Measurable Space|measurable space]]. +Let $f, g: X \to \overline{\R}$ be [[Definition:Measurable Function|$\Sigma$-measurable functions]]. +Then the [[Definition:Pointwise Maximum of Extended Real-Valued Functions|pointwise maximum]] $\max \left({f, g}\right): X \to \overline{\R}$ is also [[Definition:Measurable Function|$\Sigma$-measurable]]. +\end{theorem} + +\begin{proof} +For all $x \in X$ and $a \in \R$, we have by [[Max yields Supremum of Parameters]] that: +:$\max \left({f \left({x}\right), g \left({x}\right)}\right) \le a$ +[[Definition:Iff|iff]] both $f \left({x}\right) \le a$ and $g \left({x}\right) \le a$. +That is, for all $a \in \R$: +:$\left\{{x \in X: \max \left({f \left({x}\right), g \left({x}\right)}\right) \le a}\right\} = \left\{{x \in X: f \left({x}\right) \le a}\right\} \cap \left\{{x \in X: g \left({x}\right) \le a}\right\}$ +By [[Characterization of Measurable Functions|Characterization of Measurable Functions: $(1) \implies (2)$]], the two sets on the [[Definition:RHS|RHS]] are elements of $\Sigma$, i.e. [[Definition:Measurable Set|measurable]]. +By [[Sigma-Algebra Closed under Intersection]], it follows that: +:$\left\{{x \in X: \max \left({f \left({x}\right), g \left({x}\right)}\right) \le a}\right\} \in \Sigma$ +Hence $\max \left({f, g}\right)$ is [[Definition:Measurable Function|measurable]], by [[Characterization of Measurable Functions|Characterization of Measurable Functions: $(2) \implies (1)$]]. 
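As an illustrative aside (not part of the proof), the level-set identity used here, and its dual for the minimum used in the next proof, can be checked on a finite example; the sample functions in this Python sketch are arbitrary:

```python
# Illustration only: finite check of
#   {max(f, g) <= a} = {f <= a} ∩ {g <= a}
# and the dual identity
#   {min(f, g) >= b} = {f >= b} ∩ {g >= b}

X = range(8)
f = lambda x: x - 3
g = lambda x: 4 - x
a = 2

lhs = {x for x in X if max(f(x), g(x)) <= a}
rhs = {x for x in X if f(x) <= a} & {x for x in X if g(x) <= a}
assert lhs == rhs

b = -1
lhs_min = {x for x in X if min(f(x), g(x)) >= b}
rhs_min = {x for x in X if f(x) >= b} & {x for x in X if g(x) >= b}
assert lhs_min == rhs_min
```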
+{{qed}} +\end{proof}<|endoftext|> +\section{Pointwise Minimum of Measurable Functions is Measurable} +Tags: Measure Theory + +\begin{theorem} +Let $\left({X, \Sigma}\right)$ be a [[Definition:Measurable Space|measurable space]]. +Let $f, g: X \to \overline{\R}$ be [[Definition:Measurable Function|$\Sigma$-measurable functions]]. +Then the [[Definition:Pointwise Minimum of Extended Real-Valued Functions|pointwise minimum]] $\min \left({f, g}\right): X \to \overline{\R}$ is also [[Definition:Measurable Function|$\Sigma$-measurable]]. +\end{theorem} + +\begin{proof} +For all $x \in X$ and $a \in \R$, we have by [[Min yields Infimum of Parameters]] that: +:$a \le \min \left({f \left({x}\right), g \left({x}\right)}\right)$ +[[Definition:Iff|iff]] both $a \le f \left({x}\right)$ and $a \le g \left({x}\right)$. +That is, for all $a \in \R$: +:$\left\{{x \in X: \min \left({f \left({x}\right), g \left({x}\right)}\right) \ge a}\right\} = \left\{{x \in X: f \left({x}\right) \ge a}\right\} \cap \left\{{x \in X: g \left({x}\right) \ge a}\right\}$ +By [[Characterization of Measurable Functions|Characterization of Measurable Functions: $(1) \implies (4)$]], the two sets on the [[Definition:RHS|RHS]] are elements of $\Sigma$, i.e. [[Definition:Measurable Set|measurable]]. +By [[Sigma-Algebra Closed under Intersection]], it follows that: +:$\left\{{x \in X: \min \left({f \left({x}\right), g \left({x}\right)}\right) \ge a}\right\} \in \Sigma$ +Hence $\min \left({f, g}\right)$ is [[Definition:Measurable Function|measurable]], by [[Characterization of Measurable Functions|Characterization of Measurable Functions: $(4) \implies (1)$]]. +{{qed}} +\end{proof}<|endoftext|> +\section{Function Measurable iff Positive and Negative Parts Measurable} +Tags: Measure Theory + +\begin{theorem} +Let $\left({X, \Sigma}\right)$ be a [[Definition:Measurable Space|measurable space]]. +Let $f: X \to \overline{\R}$ be an [[Definition:Extended Real-Valued Function|extended real-valued function]]. 
+Let $f^+, f^-: X \to \overline{\R}$ be the [[Definition:Positive Part|positive]] and [[Definition:Negative Part|negative parts]] of $f$.
+Then $f$ is [[Definition:Measurable Function|$\Sigma$-measurable]] [[Definition:Iff|iff]] both $f^+$ and $f^-$ are [[Definition:Measurable Function|$\Sigma$-measurable]].
+\end{theorem}
+
+\begin{proof}
+=== Necessary Condition ===
+Suppose $f$ is [[Definition:Measurable Function|measurable]].
+By definition, its [[Definition:Positive Part|positive part]] $f^+$ equals the [[Definition:Pointwise Maximum of Extended Real-Valued Functions|pointwise maximum]]:
+:$f^+ = \max \left\{{f, 0}\right\}$
+where $0$ denotes the [[Definition:Zero Function|zero function]].
+By [[Constant Function is Measurable]], $0$ is a [[Definition:Measurable Function|measurable function]].
+Thus, by [[Pointwise Maximum of Measurable Functions is Measurable]], $f^+$ is [[Definition:Measurable Function|measurable]].
+Subsequently, the [[Definition:Negative Part|negative part]] $f^-$ of $f$ is defined by means of a [[Definition:Pointwise Minimum of Extended Real-Valued Functions|pointwise minimum]]:
+:$f^- = - \min \left\{{f, 0}\right\}$
+By [[Pointwise Minimum of Measurable Functions is Measurable]] and [[Negative of Measurable Function is Measurable]], $f^-$ is [[Definition:Measurable Function|measurable]] as well.
+{{qed|lemma}}
+=== Sufficient Condition ===
+Suppose $f^+$ and $f^-$ are both [[Definition:Measurable Function|measurable]].
+By [[Difference of Positive and Negative Parts]]:
+:$f = f^+ - f^-$
+and hence, by [[Pointwise Difference of Measurable Functions is Measurable]], $f$ is also [[Definition:Measurable Function|measurable]].
+{{qed}}
+\end{proof}<|endoftext|>
+\section{Measurable Functions Determine Measurable Sets}
+Tags: Measure Theory
+
+\begin{theorem}
+Let $\struct {X, \Sigma}$ be a [[Definition:Measurable Space|measurable space]].
+Let $f, g: X \to \overline \R$ be [[Definition:Measurable Function|$\Sigma$-measurable functions]].
+Then the following sets are [[Definition:Measurable Set|measurable]]:
+:$\set {f < g}$
+:$\set {f \le g}$
+:$\set {f = g}$
+:$\set {f \ne g}$
+where, for example, $\set {f < g}$ is short for $\set {x \in X: \map f x < \map g x}$.
+\end{theorem}
+
+\begin{proof}
+From [[Pointwise Difference of Measurable Functions is Measurable]], $f - g: X \to \overline \R$ is [[Definition:Measurable Function|$\Sigma$-measurable]].
+Now we have the following evident identities:
+:$\set {f < g} = \set {f - g < 0}$
+:$\set {f \le g} = \set {f - g \le 0}$
+:$\set {f = g} = \set {f - g = 0}$
+:$\set {f \ne g} = \set {f - g \ne 0}$
+Subsequently, using the [[Definition:Preimage of Subset under Mapping|preimage]] under $f - g$, the latter may respectively be expressed as:
+:$\set {f - g < 0} = \paren {f - g}^{-1} \sqbrk {\openint {-\infty} 0}$
+:$\set {f - g \le 0} = \paren {f - g}^{-1} \sqbrk {\hointl {-\infty} 0}$
+:$\set {f - g = 0} = \paren {f - g}^{-1} \sqbrk {\set 0}$
+:$\set {f - g \ne 0} = \paren {f - g}^{-1} \sqbrk {\R \setminus \set 0}$
+Now the sets:
+:$\openint {-\infty} 0$
+:$\hointl {-\infty} 0$
+:$\set 0$
+:$\R \setminus \set 0$
+are all [[Definition:Open Set (Topology)|open]] or [[Definition:Closed Set (Topology)|closed]] in the [[Definition:Euclidean Topology on Real Number Line|Euclidean topology]].
+Hence by definition of [[Definition:Borel Sigma-Algebra|Borel $\sigma$-algebra]], and by [[Closed Set Measurable in Borel Sigma-Algebra]], they are [[Definition:Measurable Set|$\map \BB \R$-measurable]].
+Since $f - g$ is [[Definition:Measurable Function|$\Sigma$-measurable]], it follows that:
+:$\set {f < g}$
+:$\set {f \le g}$
+:$\set {f = g}$
+:$\set {f \ne g}$
+are all [[Definition:Measurable Set|$\Sigma$-measurable sets]].
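As an illustrative aside (not part of the proof), the reduction of these comparison sets to preimages under the difference $f - g$ can be verified mechanically when $X$ is finite; the functions in this Python sketch are arbitrary samples:

```python
# Illustration only: on a finite set, {f < g}, {f <= g}, {f = g}, {f != g}
# coincide with preimages of the corresponding subsets of R under f - g.

X = range(7)
f = {x: x % 4 for x in X}          # a sample function
g = {x: 2 for x in X}              # a sample (constant) function
diff = {x: f[x] - g[x] for x in X}

def preimage(h, pred):
    """Preimage of {y : pred(y)} under the function h, given as a dict."""
    return {x for x, y in h.items() if pred(y)}

assert {x for x in X if f[x] < g[x]} == preimage(diff, lambda y: y < 0)
assert {x for x in X if f[x] <= g[x]} == preimage(diff, lambda y: y <= 0)
assert {x for x in X if f[x] == g[x]} == preimage(diff, lambda y: y == 0)
assert {x for x in X if f[x] != g[x]} == preimage(diff, lambda y: y != 0)
```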
+{{qed}} +\end{proof}<|endoftext|> +\section{Factorization Lemma/Extended Real-Valued Function} +Tags: Measure Theory + +\begin{theorem} +Then an [[Definition:Extended Real-Valued Function|extended real-valued function]] $g: X \to \overline{\R}$ is [[Definition:Measurable Function|$\sigma \left({f}\right)$-measurable]] {{iff}}: +:There exists a [[Definition:Measurable Function|$\Sigma$-measurable mapping]] $\tilde g: Y \to \overline{\R}$ such that $g = \tilde g \circ f$ +where: +:$\sigma \left({f}\right)$ denotes the [[Definition:Sigma-Algebra Generated by Collection of Mappings|$\sigma$-algebra generated]] by $f$ +\end{theorem} + +\begin{proof} +=== Necessary Condition === +Let $g$ be a [[Definition:Measurable Mapping|$\sigma \left({f}\right) \, / \, \overline{\mathcal B}$-measurable function]]. +We need to construct a [[Definition:Measurable Mapping|measurable]] $\tilde g$ such that $g = \tilde g \circ f$. +Let us proceed in the following fashion: +* Establish the result for $g$ a [[Definition:Characteristic Function of Set|characteristic function]]; +* Establish the result for $g$ a [[Definition:Simple Function|simple function]]; +* Establish the result for all $g$ +So let $g = \chi_E$ be a [[Definition:Characteristic Function of Set|characteristic function]]. +By [[Characteristic Function Measurable iff Set Measurable]], it follows that $E$ is [[Definition:Measurable Set|$\sigma \left({f}\right)$-measurable]]. +Thus there exists some $A \in \Sigma$ such that $E = f^{-1} \left({A}\right)$. +Again by [[Characteristic Function Measurable iff Set Measurable]], we have $\chi_A: Y \to \overline \R$ is [[Definition:Measurable Mapping|measurable]]. +It follows that $\chi_E = \chi_A \circ f$, and $\tilde g := \chi_A$ works. +Now let $g = \displaystyle \sum_{i \mathop = 1}^n a_i \chi_{E_i}$ be a [[Definition:Simple Function|simple function]]. +Let $A_i$ be associated to $E_i$ as above. 
Then we have:
+{{begin-eqn}}
+{{eqn | l = \sum_{i \mathop = 1}^n a_i \chi_{E_i}
+ | r = \sum_{i \mathop = 1}^n a_i \left({\chi_{A_i} \circ f}\right)
+ | c = by the result for [[Definition:Characteristic Function of Set|characteristic functions]]
+}}
+{{eqn | r = \left({\sum_{i \mathop = 1}^n a_i \chi_{A_i} }\right) \circ f
+ | c = [[Composition of Mappings is Linear]]
+}}
+{{end-eqn}}
+Now $\displaystyle \sum_{i \mathop = 1}^n a_i \chi_{A_i}$ is a [[Definition:Simple Function|simple function]], hence [[Definition:Measurable Mapping|measurable]] by [[Simple Function is Measurable]].
+Therefore, it is a suitable choice for $\tilde g$.
+Next, let $g \ge 0$ be a [[Definition:Measurable Function|measurable function]].
+By [[Measurable Function Pointwise Limit of Simple Functions]], we find [[Definition:Simple Function|simple functions]] $g_j$ such that:
+:$\displaystyle \lim_{j \to \infty} g_j = g$
+Applying the previous step to each $g_j$, we find a [[Definition:Sequence|sequence]] of $\tilde g_j$ satisfying:
+:$\displaystyle \lim_{j \to \infty} \tilde g_j \circ f = g$
+From [[Composition with Pointwise Limit]] it follows that we have, putting $\tilde g := \displaystyle \lim_{j \to \infty} \tilde g_j$:
+:$\displaystyle \lim_{j \to \infty} \tilde g_j \circ f = \tilde g \circ f$
+{{explain|Address why (if?!) $\tilde g$ is well defined}}
+An application of [[Pointwise Limit of Measurable Functions is Measurable]] yields $\tilde g$ [[Definition:Measurable Function|measurable]].
+Finally, for arbitrary measurable $g$, apply the above to the [[Definition:Positive Part|positive part]] $g^+$ and the [[Definition:Negative Part|negative part]] $g^-$ of $g$, and combine the resulting mappings by [[Difference of Positive and Negative Parts]].
+Thus we have provided a suitable $\tilde g$ for every $g$, such that:
+:$g = \tilde g \circ f$
+as desired.
+{{qed}}
+=== Sufficient Condition ===
+Suppose that such a $\tilde g$ exists.
+Note that $f$ is [[Definition:Measurable Mapping|$\sigma \left({f}\right) \,/\, \Sigma$-measurable]] by [[Definition:Sigma-Algebra Generated by Collection of Mappings|definition of $\sigma \left({f}\right)$]].
+The result follows immediately from [[Composition of Measurable Mappings is Measurable]].
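As an illustrative aside, the factorization through $f$ can be made concrete on finite sets, where $\sigma \left({f}\right)$-measurability of $g$ amounts to $g$ being constant on the fibres of $f$; the sets and mappings in this Python sketch are hypothetical samples:

```python
# Illustration only: when g is constant on the fibres of f, a mediating
# mapping g_tilde with g = g_tilde ∘ f can be read off directly.

X = ["a", "b", "c", "d"]
f = {"a": 1, "b": 1, "c": 2, "d": 3}      # a sample mapping f: X -> Y
g = {"a": 10, "b": 10, "c": 20, "d": 30}  # constant on each fibre of f

# g constant on fibres of f (the finite analogue of sigma(f)-measurability)
assert all(g[x1] == g[x2] for x1 in X for x2 in X if f[x1] == f[x2])

# g_tilde is well-defined precisely because of the check above
g_tilde = {f[x]: g[x] for x in X}

# the factorization g = g_tilde ∘ f
assert all(g[x] == g_tilde[f[x]] for x in X)
```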
+{{qed}} +\end{proof}<|endoftext|> +\section{Factorization Lemma} +Tags: Measure Theory, Named Theorems + +\begin{theorem} +Let $X$ be a [[Definition:Set|set]], and $\left({Y, \Sigma}\right)$ be a [[Definition:Measurable Space|measurable space]]. +Let $f: X \to Y$ be a [[Definition:Mapping|mapping]]. +\end{theorem}<|endoftext|> +\section{Piecewise Combination of Measurable Mappings is Measurable/Binary Case} +Tags: Measure Theory + +\begin{theorem} +Let $f, g: X \to X'$ be [[Definition:Measurable Mapping|$\Sigma \, / \, \Sigma'$-measurable mappings]]. +Let $E \in \Sigma$ be a [[Definition:Measurable Set|measurable set]]. +Define $h: X \to X'$ by: +:$\forall x \in X: \map h x := \begin{cases} +\map f x & : \text {if $x \in E$} \\ +\map g x & : \text {if $x \notin E$} +\end{cases}$ +Then $h$ is also a [[Definition:Measurable Mapping|$\Sigma \, / \, \Sigma'$-measurable mapping]]. +\end{theorem} + +\begin{proof} +Let $E' \in \Sigma'$ be a [[Definition:Measurable Set|$\Sigma'$-measurable set]]. +Then by definition of [[Definition:Preimage of Subset under Mapping|preimage]]: +:$\map {h^{-1} } {E'} = \set {x \in X: \map h x \in E'}$ +Expanding the definition of $h$, this translates into: +:$\map {h^{-1} } {E'} = \set {x \in E: \map f x \in E'} \cup \set {x \in \relcomp X E: \map g x \in E'}$ +where $\complement$ denotes [[Definition:Set Complement|set complement]]. +That is, we have: +:$\map {h^{-1} } {E'} = \paren {E \cap \map {f^{-1} } {E'} } \cup \paren {\relcomp X E \cap \map {g^{-1} } {E'} }$ +All sets on the {{RHS}} are [[Definition:Measurable Set|$\Sigma$-measurable]]. +By [[Sigma-Algebra Closed under Intersection]] and [[Sigma-Algebra Closed under Union]], so is $\map {h^{-1} } {E'}$. +Since $E' \in \Sigma'$ was arbitrary, $h$ is a [[Definition:Measurable Mapping|$\Sigma \, / \, \Sigma'$-measurable mapping]]. 
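As an illustrative aside (not part of the proof), the preimage identity just derived can be checked mechanically on a finite example; the sets and mappings in this Python sketch are arbitrary samples:

```python
# Illustration only: finite check of the identity
#   h^{-1}(E') = (E ∩ f^{-1}(E')) ∪ (E^c ∩ g^{-1}(E'))
# for h defined piecewise as f on E and g off E.

X = set(range(10))
E = {0, 2, 4, 6, 8}
f = {x: x + 1 for x in X}
g = {x: 2 * x for x in X}
h = {x: (f[x] if x in E else g[x]) for x in X}

E_prime = {2, 3, 5, 8}
pre = lambda func: {x for x in X if func[x] in E_prime}  # preimage of E'

assert pre(h) == (E & pre(f)) | ((X - E) & pre(g))
```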
+{{qed}} +\end{proof}<|endoftext|> +\section{Piecewise Combination of Measurable Mappings is Measurable/General Case} +Tags: Measure Theory + +\begin{theorem} +Let $\left({E_n}\right)_{n \in \N} \in \Sigma, \displaystyle \bigcup_{n \mathop \in \N} E_n = X$ be a [[Definition:Countable Cover|countable cover]] of $X$ by [[Definition:Measurable Set|$\Sigma$-measurable sets]]. +For each $n \in \N$, let $f_n: E_n \to X'$ be a [[Definition:Measurable Mapping|$\Sigma_{E_n} \, / \, \Sigma'$-measurable mapping]]. +Here, $\Sigma_{E_n}$ is the [[Definition:Trace Sigma-Algebra|trace $\sigma$-algebra]] of $E_n$ in $\Sigma$. +Suppose that for every $m, n \in \N$, $f_m$ and $f_n$ satisfy: +:$(1): \quad f_m \restriction_{E_m \cap E_n} = f_n \restriction_{E_m \cap E_n}$ +that is, $f_m$ and $f_n$ coincide whenever both are defined; here $\restriction$ denotes [[Definition:Restriction of Mapping|restriction]]. +Define $f: X \to X'$ by: +:$\displaystyle \forall n \in \N, x \in E_n: f \left({x}\right) := f_n \left({x}\right)$ +Then $f$ is a [[Definition:Measurable Mapping|$\Sigma \, / \, \Sigma'$-measurable mapping]]. +\end{theorem} + +\begin{proof} +First, note that $f$ is [[Definition:Well-Defined Mapping|well-defined]], since if $x \in E_n$ and $x \in E_m$, we have that: +:$f_n \left({x}\right) = f \left({x}\right) = f_m \left({x}\right)$ +by $(1)$, since $x \in E_n \cap E_m$. +Let $E' \in \Sigma'$. 
+Then by definition of [[Definition:Preimage of Subset under Mapping|preimage]], $f^{-1} \left({E'}\right) \subseteq X$, and hence:
+{{begin-eqn}}
+{{eqn | l = f^{-1} \left({E'}\right)
+ | r = X \cap f^{-1} \left({E'}\right)
+ | c = [[Intersection with Subset is Subset]]
+}}
+{{eqn | r = \left({\bigcup_{n \mathop \in \N} E_n}\right) \cap f^{-1} \left({E'}\right)
+}}
+{{eqn | r = \bigcup_{n \mathop \in \N} \left({E_n \cap f^{-1} \left({E'}\right)}\right)
+ | c = [[Intersection Distributes over Union/General Result|Intersection Distributes over Union: General Result]]
+}}
+{{eqn | r = \bigcup_{n \mathop \in \N} f_n^{-1} \left({E'}\right)
+}}
+{{end-eqn}}
+We derive the last step as follows.
+Whenever $x \in E_n$, we have by definition of $f$ that $f \left({x}\right) = f_n \left({x}\right)$.
+Hence, $x \in E_n \cap f^{-1} \left({E'}\right)$ implies $x \in f_n^{-1} \left({E'}\right)$.
+Conversely, if $x \in f_n^{-1} \left({E'}\right)$, i.e. $f_n \left({x}\right) \in E'$, then $x \in E_n$ since $E_n$ is the [[Definition:Domain of Mapping|domain]] of $f_n$.
+By definition of $f$ then also $f \left({x}\right) = f_n \left({x}\right) \in E'$.
+Hence we have deduced that $E_n \cap f^{-1} \left({E'}\right) = f_n^{-1} \left({E'}\right)$.
+Since all $f_n$ are [[Definition:Measurable Mapping|$\Sigma_{E_n} \, / \, \Sigma'$-measurable]], it follows that for all $n \in \N$:
+:$f_n^{-1} \left({E'}\right) \in \Sigma$
+By [[Definition:Sigma-Algebra|$\sigma$-algebra axiom $(3)$]], it follows that:
+:$f^{-1} \left({E'}\right) = \displaystyle \bigcup_{n \mathop \in \N} f_n^{-1} \left({E'}\right) \in \Sigma$
+Since $E' \in \Sigma'$ was arbitrary, it follows that $f$ is [[Definition:Measurable Mapping|$\Sigma \, / \, \Sigma'$-measurable]].
+{{qed}}
+\end{proof}<|endoftext|>
+\section{Piecewise Combination of Measurable Mappings is Measurable}
+Tags: Measure Theory
+
+\begin{theorem}
+Let $\struct {X, \Sigma}$ and $\struct {X', \Sigma'}$ be [[Definition:Measurable Space|measurable spaces]].
+\end{theorem}<|endoftext|> +\section{Function Simple iff Positive and Negative Parts Simple} +Tags: Simple Functions + +\begin{theorem} +Let $\left({X, \Sigma}\right)$ be a [[Definition:Measurable Space|measurable space]]. +Let $g: X \to \overline{\R}$ be an [[Definition:Extended Real-Valued Function|extended real-valued function]]. +Then $g$ is a [[Definition:Simple Function|simple function]] {{iff}} its [[Definition:Positive Part|positive part]] $g^+$ and [[Definition:Negative Part|negative part]] $g^-$ are [[Definition:Simple Function|simple functions]]. +\end{theorem} + +\begin{proof} +=== Necessary Condition === +Suppose $g$ is a [[Definition:Simple Function|simple function]]. +By [[Positive Part of Simple Function is Simple Function]], so is $g^+$. +By [[Negative Part of Simple Function is Simple Function]], so is $g^-$. +{{qed|lemma}} +=== Sufficient Condition === +Suppose $g^+$ and $g^-$ are [[Definition:Simple Function|simple functions]]. +From [[Difference of Positive and Negative Parts]]: +:$g = g^+ - g^-$ +Hence $g$ is [[Definition:Simple Function|simple]], by [[Pointwise Difference of Simple Functions is Simple Function]]. +{{finish}} +\end{proof}<|endoftext|> +\section{Bounded Measurable Function Uniform Limit of Simple Functions} +Tags: Measure Theory + +\begin{theorem} +Let $\left({X, \Sigma}\right)$ be a [[Definition:Measurable Space|measurable space]]. +Let $f: X \to \overline{\R}$ be a [[Definition:Bounded Mapping|bounded]] [[Definition:Measurable Function|$\Sigma$-measurable function]]. +Then there exists a [[Definition:Sequence|sequence]] $\left({f_n}\right)_{n \in \N} \in \mathcal E \left({\Sigma}\right)$ of [[Definition:Simple Function|simple functions]], such that: +:$\forall \epsilon > 0: \exists n \in \N: \forall x \in X: \left\vert{f \left({x}\right) - f_n \left({x}\right)}\right\vert < \epsilon$ +That is, such that $f = \displaystyle \lim_{n \to \infty} f_n$ [[Definition:Uniform Limit|uniformly]]. 
+The [[Definition:Sequence|sequence]] $\left({f_n}\right)_{n \in \N}$ may furthermore be taken to satisfy:
+:$\forall n \in \N: \left\vert{f_n}\right\vert \le \left\vert{f}\right\vert$
+where $\left\vert{f}\right\vert$ denotes the [[Definition:Absolute Value of Extended Real-Valued Function|absolute value of $f$]].
+\end{theorem}
+
+\begin{proof}
+First, let us prove the theorem when $f$ is a [[Definition:Positive Measurable Function|positive $\Sigma$-measurable function]].
+Now for any $n \in \N$, define for $0 \le k \le n 2^n$:
+:$A^n_k := \begin{cases}
+ \left\{{ k 2^{-n} \le f < \left({k + 1}\right) 2^{-n} }\right\} & : k \ne n 2^n \\
+ \left\{{f \ge n}\right\} & : k = n 2^n
+\end{cases}$
+where e.g. $\left\{{f \ge n}\right\}$ is short for $\left\{{x \in X: f \left({x}\right) \ge n}\right\}$.
+It is immediate that the $A^n_k$ are [[Definition:Pairwise Disjoint|pairwise disjoint]], and that:
+:$\displaystyle \bigcup_{k \mathop = 0}^{n 2^n} A^n_k = X$
+Subsequently, define $f_n: X \to \overline{\R}$ by:
+:$f_n \left({x}\right) := \displaystyle \sum_{k \mathop = 0}^{n 2^n} k 2^{-n} \chi_{A^n_k} \left({x}\right)$
+where $\chi_{A^n_k}$ is the [[Definition:Characteristic Function of Set|characteristic function]] of $A^n_k$.
+Now if $f \left({x}\right) < n$, then we have for some $k < n 2^n$:
+:$x \in A^n_k$
+so that:
+{{begin-eqn}}
+{{eqn | l = \left\vert{f \left({x}\right) - f_n \left({x}\right)}\right\vert
+ | r = \left\vert{f \left({x}\right) - k 2^{-n} }\right\vert
+}}
+{{eqn | o = <
+ | r = 2^{-n}
+}}
+{{end-eqn}}
+since $x \in A^n_k$ {{iff}} $k 2^{-n} \le f \left({x}\right) < \left({k + 1}\right) 2^{-n}$.
+In particular, since $f_n = n \le f$ on $\left\{{f \ge n}\right\}$ and $f_n = k 2^{-n} \le f$ on each other $A^n_k$, we conclude that [[Definition:Pointwise Inequality of Extended Real-Valued Functions|pointwise]], $f_n \le f$, for all $n \in \N$.
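As an illustrative aside, for positive $f$ the dyadic construction just given admits the closed form $f_n \left({x}\right) = \min \left({\left\lfloor{2^n f \left({x}\right)}\right\rfloor 2^{-n}, n}\right)$. The following Python sketch (with an arbitrary sample function, not part of the proof) checks the two properties derived above, namely $f_n \le f$ pointwise and the error bound $2^{-n}$ on $\left\{{f < n}\right\}$:

```python
# Illustration only: the n-th dyadic approximant of a positive function,
#   f_n = k 2^{-n} on {k 2^{-n} <= f < (k+1) 2^{-n}},  f_n = n on {f >= n},
# written in closed form as min(floor(2^n f) / 2^n, n).

import math

def f_n(fx, n):
    """Value of the n-th dyadic approximant at a point where f takes value fx >= 0."""
    return min(math.floor(fx * 2 ** n) / 2 ** n, n)

f = lambda x: x * x                 # a sample positive function on [0, 2]
xs = [i / 10 for i in range(21)]

for n in (1, 3, 6):
    for x in xs:
        assert f_n(f(x), n) <= f(x)                 # f_n <= f pointwise
        if f(x) < n:
            assert f(x) - f_n(f(x), n) < 2 ** (-n)  # error below 2^{-n}
```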
+By [[Characterization of Measurable Functions]] and [[Sigma-Algebra Closed under Intersection]], it follows that: +:$A^n_{n 2^n} = \left\{{f \ge n}\right\}$ +:$A^n_k = \left\{{f \ge k 2^{-n}}\right\} \cap \left\{{f < \left({k + 1}\right) 2^{-n}}\right\}$ +are all [[Definition:Measurable Set|$\Sigma$-measurable sets]]. +Hence, by definition, all $f_n$ are [[Definition:Simple Function|$\Sigma$-simple functions]]. +It remains to show that $\displaystyle \lim_{n \to \infty} f_n = f$ [[Definition:Uniform Limit|uniformly]]. +Let $\epsilon > 0$ be arbitrary. +Let $n \in \N$ be a [[Definition:Bound of Real-Valued Function|bound]] for $f$ satisfying $2^{-n} < \epsilon$. +By the reasoning above, we then have for all $m \ge n$, and all $x \in X$: +:$\left\vert{f \left({x}\right) - f_m \left({x}\right)}\right\vert < 2^{-n}$ +This establishes the result for [[Definition:Positive Measurable Function|positive measurable]] $f$. +For arbitrary $f$, by [[Difference of Positive and Negative Parts]], we have: +:$f = f^+ - f^-$ +where $f^+$ and $f^-$ are the [[Definition:Positive Part|positive]] and [[Definition:Negative Part|negative parts]] of $f$. +By [[Function Measurable iff Positive and Negative Parts Measurable]], $f^+$ and $f^-$ are [[Definition:Positive Measurable Function|positive measurable functions]]. +Thus we find [[Definition:Sequence|sequences]] $f^+_n$ and $f^-_n$ [[Definition:Uniform Limit|converging uniformly]] to $f^+$ and $f^-$, respectively. 
+That is, given $\epsilon > 0$, there exist $n, n'$ such that:
+:$\forall m \ge n: \forall x \in X: \left\vert{f^+ \left({x}\right) - f^+_m \left({x}\right)}\right\vert < \epsilon$
+:$\forall m \ge n': \forall x \in X: \left\vert{f^- \left({x}\right) - f^-_m \left({x}\right)}\right\vert < \epsilon$
+By the [[Triangle Inequality]], for all $m \in \N$:
+:$\left\vert{\left({f^+ \left({x}\right) - f^- \left({x}\right)}\right) - \left({f^+_m \left({x}\right) - f^-_m \left({x}\right)}\right)}\right\vert \le \left\vert{f^+ \left({x}\right) - f^+_m \left({x}\right)}\right\vert + \left\vert{f^- \left({x}\right) - f^-_m \left({x}\right)}\right\vert$
+Hence, given $\epsilon > 0$, if $n, n'$ are chosen as above with $\epsilon / 2$ in place of $\epsilon$, for $f^+_m$ and $f^-_m$ respectively, it follows that for $m \ge \max \left\{{n, n'}\right\}$:
+:$\left\vert{f \left({x}\right) - f_m \left({x}\right)}\right\vert = \left\vert{\left({f^+ \left({x}\right) - f^- \left({x}\right)}\right) - \left({f^+_m \left({x}\right) - f^-_m \left({x}\right)}\right)}\right\vert < \epsilon$
+where $f_m \left({x}\right) = f^+_m \left({x}\right) - f^-_m \left({x}\right)$.
+Thus $f_m$ converges to $f$ [[Definition:Uniform Limit|uniformly]].
+Furthermore, we have for all $n \in \N$ and $x \in X$:
+:$\left\vert{f^+_n \left({x}\right) - f^-_n \left({x}\right)}\right\vert = f^+_n \left({x}\right) + f^-_n \left({x}\right) \le f^+ \left({x}\right) + f^- \left({x}\right) = \left\vert{f \left({x}\right)}\right\vert$
+Here the first equality holds because $f^+$ and $f^-$ are never both nonzero, while $f^+_n \le f^+$ and $f^-_n \le f^-$ by the first part of the proof, so that at each $x$ at most one of $f^+_n \left({x}\right)$ and $f^-_n \left({x}\right)$ is nonzero.
+The last equality follows from [[Sum of Positive and Negative Parts]].
+Hence the result.
+{{qed}}
+\end{proof}<|endoftext|>
+\section{Integral of Positive Simple Function Well-Defined}
+Tags: Measure Theory
+
+\begin{theorem}
+Let $\left({X, \Sigma, \mu}\right)$ be a [[Definition:Measure Space|measure space]].
+Let $f: X \to \R, f \in \mathcal{E}^+$ be a [[Definition:Positive Simple Function|positive simple function]].
+Then the [[Definition:Integral of Positive Simple Function|$\mu$-integral of $f$]], $I_\mu \left({f}\right)$, is well-defined.
+That is, for any two [[Definition:Standard Representation of Simple Function|standard representations]] for $f$, say:
+:$\displaystyle f = \sum_{i \mathop = 0}^n a_i \chi_{E_i} = \sum_{j \mathop = 0}^m b_j \chi_{F_j}$
+it holds that:
+:$\displaystyle \sum_{i \mathop = 0}^n a_i \mu \left({E_i}\right) = \sum_{j \mathop = 0}^m b_j \mu \left({F_j}\right)$
+\end{theorem}
+
+\begin{proof}
+The sets $F_0, \ldots, F_m$ are [[Definition:Pairwise Disjoint|pairwise disjoint]], and:
+:$X = \displaystyle \bigcup_{j \mathop = 0}^m F_j$
+From [[Characteristic Function of Disjoint Union]], we have:
+:$\chi_X = \displaystyle \sum_{j \mathop = 0}^m \chi_{F_j}$
+Remark that $\chi_X \left({x}\right) = 1$ for all $x \in X$, so that we have:
+{{begin-eqn}}
+{{eqn|l = f
+ |r = \sum_{i \mathop = 0}^n a_i \chi_{E_i} \cdot 1
+}}
+{{eqn|r = \sum_{i \mathop = 0}^n a_i \chi_{E_i} \left({\sum_{j \mathop = 0}^m \chi_{F_j} }\right)
+}}
+{{eqn|r = \sum_{i \mathop = 0}^n \sum_{j \mathop = 0}^m a_i \chi_{E_i} \chi_{F_j}
+}}
+{{eqn|r = \sum_{i \mathop = 0}^n \sum_{j \mathop = 0}^m a_i \chi_{E_i \cap F_j}
+ |c = [[Characteristic Function of Intersection/Variant 1|Characteristic Function of Intersection: Variant 1]]
+}}
+{{end-eqn}}
+Repeating the argument with the $E_i$ and $F_j$ interchanged also yields:
+:$f = \displaystyle \sum_{j \mathop = 0}^m \sum_{i \mathop = 0}^n b_j \chi_{F_j \cap E_i}$
+Now suppose $x \in E_i \cap F_j$ for some $i, j$.
+Since the $E_i$ are [[Definition:Pairwise Disjoint|pairwise disjoint]], as are the $F_j$:
+:$x \in E_{i'} \cap F_{j'}$ implies $i = i'$ and $j = j'$
+Thus, evaluating both expressions for $f \left({x}\right)$ above, we find:
+:$a_i = f \left({x}\right) = b_j$
+In conclusion, we have:
+:$a_i = b_j$
+whenever $E_i \cap F_j \ne \varnothing$.
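The identity being proved can be checked on a small finite example. The following hypothetical Python sketch (the set $X$, the discrete measure $\mu$ and both representations are illustrative assumptions, not part of the proof) evaluates $\sum_i a_i \mu \left({E_i}\right)$ for two representations of the same simple function over different partitions:

```python
# Hypothetical finite check -- all names and values are illustrative.
# Two partition-based representations of the same simple function
# yield the same value of sum a_i * mu(E_i) for a discrete measure on X.

X = range(6)
mu = {x: 0.5 * (x + 1) for x in X}          # an arbitrary discrete measure

def integral(rep):
    """Sum of a_i * mu(E_i) over a representation [(a_i, E_i), ...]."""
    return sum(a * sum(mu[x] for x in E) for a, E in rep)

# f = 2 on {0, 1, 2} and f = 7 on {3, 4, 5}
rep_coarse = [(2, {0, 1, 2}), (7, {3, 4, 5})]
# the same f, written over a finer partition of X
rep_fine = [(2, {0}), (2, {1, 2}), (7, {3, 4}), (7, {5})]

assert integral(rep_coarse) == integral(rep_fine)
```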
+Furthermore, we have for all $i$ that: +:$\displaystyle E_i = E_i \cap X = E_i \cap \left({\bigcup_{j \mathop = 0}^m F_j}\right) = \bigcup_{j \mathop = 0}^m \left({E_i \cap F_j}\right)$ +by [[Intersection Distributes over Union/General Result|Intersection Distributes over Union: General Result]]. +Similarly, we obtain for all $j$: +:$\displaystyle F_j = F_j \cap X = F_j \cap \left({\bigcup_{i \mathop = 0}^n E_i}\right) = \bigcup_{i \mathop = 0}^n \left({F_j \cap E_i}\right)$ +With this knowledge, we compute: +{{begin-eqn}} +{{eqn|l = \sum_{i \mathop = 0}^n a_i \mu \left({E_i}\right) + |r = \sum_{i \mathop = 0}^n a_i \mu \left({\bigcup_{j \mathop = 0}^m \left({E_i \cap F_j}\right)}\right) +}} +{{eqn|r = \sum_{i \mathop = 0}^n a_i \sum_{j \mathop = 0}^m \mu \left({E_i \cap F_j}\right) + |c = [[Measure is Finitely Additive Function]] +}} +{{eqn|r = \sum_{i \mathop = 0}^n \sum_{j \mathop = 0}^m a_i \mu \left({E_i \cap F_j}\right) + |c = [[Summation is Linear]] +}} +{{eqn|r = \sum_{j \mathop = 0}^m \sum_{i \mathop = 0}^n b_j \mu \left({E_i \cap F_j}\right) + |c = If $a_i \ne b_j$ then $E_i \cap F_j = \varnothing$; $a_i \cdot 0 = b_j \cdot 0$ +}} +{{eqn|r = \sum_{j \mathop = 0}^m b_j \sum_{i \mathop = 0}^n \mu \left({E_i \cap F_j}\right) + |c = [[Summation is Linear]] +}} +{{eqn|r = \sum_{j \mathop = 0}^m b_j \mu \left({\bigcup_{i \mathop = 0}^n \left({E_i \cap F_j}\right)}\right) + |c = [[Measure is Finitely Additive Function]] +}} +{{eqn|r = \sum_{j \mathop = 0}^m b_j \mu \left({F_j}\right) +}} +{{end-eqn}} +Hence the result. +{{qed}} +\end{proof}<|endoftext|> +\section{Integral of Characteristic Function} +Tags: Measure Theory + +\begin{theorem} +Let $\left({X, \Sigma, \mu}\right)$ be a [[Definition:Measure Space|measure space]]. +Let $E \in \Sigma$ be a [[Definition:Measurable Set|measurable set]], and let $\chi_E: X \to \R$ be its [[Definition:Characteristic Function of Set|characteristic function]]. 
+Then $I_\mu \left({\chi_E}\right) = \mu \left({E}\right)$, where $I_\mu \left({\chi_E}\right)$ is the [[Definition:Integral of Positive Simple Function|$\mu$-integral of $\chi_E$]]. +\end{theorem} + +\begin{proof} +Let $a_1 = 1$ and $E_1 = E$. +As in the definition of [[Definition:Standard Representation of Simple Function|standard representation]], denote $a_0 = 0$ and $E_0 = X \setminus E_1$. +Then for $x \in X$, we have: +:$\chi_E \left({x}\right) = 0 \cdot \chi_{E_0} \left({x}\right) + 1 \cdot \chi_{E_1} \left({x}\right)$ +since $E_1 = E$. +Hence $\chi_E = a_0 \chi_{E_0} + a_1 \chi_{E_1}$ is a [[Definition:Standard Representation of Simple Function|standard representation]] for $\chi_E$. +Thus, by definition of [[Definition:Integral of Positive Simple Function|$\mu$-integral]]: +:$\displaystyle \int \chi_E \, \mathrm d \mu = a_0 \mu \left({E_0}\right) + a_1 \mu \left({E_1}\right) = \mu \left({E}\right)$ +as desired. +{{qed}} +\end{proof}<|endoftext|> +\section{Integral of Positive Simple Function is Positive Homogeneous} +Tags: Measure Theory + +\begin{theorem} +Let $\left({X, \Sigma, \mu}\right)$ be a [[Definition:Measure Space|measure space]]. +Let $f: X \to \R, f \in \mathcal E^+$ be a [[Definition:Positive Simple Function|positive simple function]]. +Let $\lambda \in \R_{\ge 0}$ be a [[Definition:Positive Real Number|positive real number]]. +Then $I_\mu \left({\lambda \cdot f}\right) = \lambda \cdot I_\mu \left({f}\right)$, where: +:$\lambda \cdot f$ is the [[Definition:Pointwise Scalar Multiplication of Real-Valued Functions|pointwise $\lambda$-multiple]] of $f$ +:$I_\mu$ denotes [[Definition:Integral of Positive Simple Function|$\mu$-integration]] +This can be summarized by saying that $I_\mu$ is [[Definition:Positive Homogeneous|positive homogeneous]]. +\end{theorem} + +\begin{proof} +Remark that $\lambda \cdot f$ is a [[Definition:Positive Simple Function|positive simple function]] by [[Scalar Multiple of Simple Function is Simple Function]]. 
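Before the formal computation, the claimed identity $I_\mu \left({\lambda \cdot f}\right) = \lambda \cdot I_\mu \left({f}\right)$ can be illustrated on a hypothetical finite example (the measure space and the representation below are illustrative assumptions, not part of the proof):

```python
# Hypothetical finite sketch of I_mu(lambda . f) = lambda . I_mu(f).
# X = {0,...,4} with an illustrative discrete measure mu.

X = range(5)
mu = {x: float(x + 1) for x in X}

def I(rep):
    """I_mu of a simple function given as [(a_i, E_i), ...]."""
    return sum(a * sum(mu[x] for x in E) for a, E in rep)

rep = [(0, {0}), (3, {1, 2}), (5, {3, 4})]   # a standard-style representation of f
lam = 2.5
rep_scaled = [(lam * a, E) for a, E in rep]  # represents lambda . f

assert I(rep_scaled) == lam * I(rep)
```

Scaling every coefficient $a_i$ by $\lambda$ while keeping the sets $E_i$ fixed is exactly the manipulation carried out in the proof below.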
+Let: +:$f = \displaystyle \sum_{i \mathop = 0}^n a_i \chi_{E_i}$ +be a [[Definition:Standard Representation of Simple Function|standard representation]] for $f$. +Then we also have, for all $x \in X$: +{{begin-eqn}} +{{eqn|l = \lambda \cdot f \left({x}\right) + |r = \lambda \sum_{i \mathop = 0}^n a_i \chi_{E_i} \left({x}\right) +}} +{{eqn|r = \sum_{i \mathop = 0}^n \left({\lambda a_i}\right) \chi_{E_i} \left({x}\right) + |c = [[Summation is Linear]] +}} +{{end-eqn}} +and it is immediate from the definition that this yields a [[Definition:Standard Representation of Simple Function|standard representation]] for $\lambda \cdot f$. +Therefore, we have: +{{begin-eqn}} +{{eqn | l = \lambda \cdot I_\mu \left({f}\right) + | r = \lambda \sum_{i \mathop = 0}^n a_i \mu \left({E_i}\right) + | c = Definition of [[Definition:Integral of Positive Simple Function|$\mu$-integration]] +}} +{{eqn | r = \sum_{i \mathop = 0}^n \left({\lambda a_i}\right) \mu \left({E_i}\right) + | c = [[Summation is Linear]] +}} +{{eqn | r = I_\mu \left({\lambda \cdot f}\right) + | c = Definition of [[Definition:Integral of Positive Simple Function|$\mu$-integration]] +}} +{{end-eqn}} +as desired. +{{qed}} +\end{proof}<|endoftext|> +\section{Integral of Positive Simple Function is Additive} +Tags: Measure Theory + +\begin{theorem} +Let $\left({X, \Sigma, \mu}\right)$ be a [[Definition:Measure Space|measure space]]. +Let $f,g: X \to \R$, $f,g \in \mathcal{E}^+$ be [[Definition:Positive Simple Function|positive simple functions]]. +Then $I_\mu \left({f + g}\right) = I_\mu \left({f}\right) + I_\mu \left({g}\right)$, where: +:$f + g$ is the [[Definition:Pointwise Addition|pointwise sum]] of $f$ and $g$ +:$I_\mu$ denotes [[Definition:Integral of Positive Simple Function|$\mu$-integration]] +This can be summarized by saying that $I_\mu$ is [[Definition:Additive Function (Conventional)|additive]]. 
+\end{theorem}<|endoftext|>
+\section{Strict Ordering Preserved under Product with Invertible Element}
+Tags: Ordered Semigroups
+
+\begin{theorem}
+Let $\left({S, \circ, \preceq}\right)$ be an [[Definition:Ordered Semigroup|ordered semigroup]].
+Let $z \in S$ be [[Definition:Invertible Element|invertible]].
+Suppose that either $x \circ z \prec y \circ z$ or $z \circ x \prec z \circ y$.
+Then $x \prec y$.
+\end{theorem}
+
+\begin{proof}
+Suppose $x \circ z \prec y \circ z$.
+By [[Invertible Element of Monoid is Cancellable]], $z^{-1}$ is [[Definition:Cancellable Element|cancellable]].
+Then from [[Strict Ordering Preserved under Product with Cancellable Element]]:
+:$x = \left({x \circ z}\right) \circ z^{-1} \prec \left({y \circ z}\right) \circ z^{-1} = y$
+Likewise, if $z \circ x \prec z \circ y$:
+:$x = z^{-1} \circ \left({z \circ x}\right) \prec z^{-1} \circ \left({z \circ y}\right) = y$
+{{qed}}
+\end{proof}<|endoftext|>
+\section{Strict Ordering Preserved under Cancellability in Totally Ordered Semigroup}
+Tags: Ordered Semigroups
+
+\begin{theorem}
+Let $\left({S, \circ, \preceq}\right)$ be a [[Definition:Totally Ordered Semigroup|totally ordered semigroup]].
+If either:
+: $x \circ z \prec y \circ z$
+or
+: $z \circ x \prec z \circ y$
+then $x \prec y$.
+\end{theorem}
+
+\begin{proof}
+Let $x \circ z \prec y \circ z$.
+Aiming for a contradiction, suppose that $x \prec y$ does not hold.
+As $\preceq$ is a [[Definition:Total Ordering|total ordering]], it follows that $y \preceq x$.
+By the definition of [[Definition:Ordered Semigroup|ordered semigroup]], $\preceq$ is compatible with $\circ$, so:
+:$y \circ z \preceq x \circ z$
+which contradicts $x \circ z \prec y \circ z$.
+So $x \prec y$.
+Similarly for $z \circ x \prec z \circ y$.
+{{qed}}
+\end{proof}<|endoftext|>
+\section{Relation Compatibility in Totally Ordered Semigroup}
+Tags: Ordered Semigroups
+
+\begin{theorem}
+Let $\left({S, \circ, \preceq}\right)$ be an [[Definition:Ordered Semigroup|ordered semigroup]] such that:
+:$(1): \quad$ All the elements of $\left({S, \circ, \preceq}\right)$ are [[Definition:Cancellable Element|cancellable]] for $\circ$
+:$(2): \quad \preceq$ is a [[Definition:Total Ordering|total ordering]].
+Then:
+:$\forall x, y, z \in S: x \circ z \preceq y \circ z \iff x \preceq y$
+\end{theorem}
+
+\begin{proof}
+From [[Strict Ordering Preserved under Cancellability in Totally Ordered Semigroup]]:
+: $x \circ z \prec y \circ z \implies x \prec y$
+From the definition of [[Definition:Cancellable Element|cancellable element]]:
+: $x \circ z = y \circ z \implies x = y$
+Together these give $x \circ z \preceq y \circ z \implies x \preceq y$.
+The converse implication holds by the definition of [[Definition:Ordered Semigroup|ordered semigroup]], as $\preceq$ is compatible with $\circ$.
+{{qed}}
+[[Category:Ordered Semigroups]]
+\end{proof}<|endoftext|>
+\section{Integral of Positive Measurable Function Extends Integral of Positive Simple Function}
+Tags: Measure Theory
+
+\begin{theorem}
+Let $\left({X, \Sigma, \mu}\right)$ be a [[Definition:Measure Space|measure space]].
+Let $f: X \to \R, f \in \mathcal{E}^+$ be a [[Definition:Positive Simple Function|positive simple function]].
+Then $\displaystyle \int f \, \mathrm d\mu = I_\mu \left({f}\right)$, where:
+:$\displaystyle \int \cdot \, \mathrm d\mu$ denotes the [[Definition:Integral of Positive Measurable Function|$\mu$-integral of positive measurable functions]]
+:$I_\mu$ denotes the [[Definition:Integral of Positive Simple Function|$\mu$-integral of positive simple functions]]
+That is, $\displaystyle \int \cdot \, \mathrm d\mu \restriction_{\mathcal{E}^+} = I_\mu$, using the notion of [[Definition:Restriction of Mapping|restriction]], $\restriction$.
+\end{theorem}<|endoftext|>
+\section{Beppo Levi's Theorem}
+Tags: Measure Theory
+
+\begin{theorem}
+Let $\struct {X, \Sigma, \mu}$ be a [[Definition:Measure Space|measure space]].
+Let $\sequence {f_n}_{n \mathop \in \N} \in \MM_{\overline \R}^+$ be an [[Definition:Increasing Sequence of Extended Real-Valued Functions|increasing sequence]] of [[Definition:Positive Measurable Function|positive $\Sigma$-measurable functions]]. +Let $\displaystyle \sup_{n \mathop \in \N} f_n: X \to \overline \R$ be the [[Definition:Pointwise Supremum of Extended Real-Valued Functions|pointwise supremum]] of $\sequence {f_n}_{n \mathop \in \N}$, where $\overline \R$ denotes the [[Definition:Extended Real Number Line|extended real numbers]]. +Then: +:$\displaystyle \int \sup_{n \mathop \in \N} f_n \rd \mu = \sup_{n \mathop \in \N} \int f_n \rd \mu$ +where the [[Definition:Supremum of Set|supremum]] on the {{RHS}} is in the [[Definition:Ordering on Extended Real Numbers|ordering on $\overline \R$]]. +\end{theorem} + +\begin{proof} +{{tidy}} +{{Proofread}} +{{MissingLinks}} +Since by definition $\displaystyle \sup _{n \mathop \in \N} f_n \ge f_m$ for all $m$, we have: +:$\displaystyle \int \sup_{n \mathop \in \N} f_n \rd \mu \ge \int f_m \rd \mu$ +and hence the inequality holds for the supremum as well: +:$\displaystyle \int \sup_{n \mathop \in \N} f_n \rd \mu \ge \sup_{m \mathop \in \N} \int f_m \rd \mu$ +To show the reverse inequality, since the integral of $\sup\limits_{n \mathop \in \N}f_n$ is defined as the supremum of the integrals of positive simple functions $s \le \sup\limits_{n \mathop \in \N}f_n$, we show that given a simple function $\displaystyle s = \sum_{i \mathop = 1}^k \lambda_i \chi_{E_i} \le \sup_{n \mathop \in \N} f_n$ (where $\lambda_i \in \closedint 0 {+\infty}$ and $E_i \in \Sigma$) we have: +:$\displaystyle \sup_{m \mathop \in \N} \int f_m \rd \mu \ge \int s \rd \mu$ +To show this, we use the fact that $\nu_s: \Sigma \to \closedint 0 {+\infty}$ defined by $\map {\nu_s} E = \displaystyle \int \chi_Es \rd \mu$ clearly defines a measure over $X$, because it is simply a linear combination (with positive coefficients) of the measures $\bigvalueat 
\mu {E_i}$. +Now, if we fix $1 > \epsilon > 0$, we have that the sets: +:$A_m = \set {x \in X: \map {f_m} x \ge \paren {1 - \epsilon} \map s x}$ +form a cover of $X$ ($X = \displaystyle \bigcup_{m \mathop \in \N} A_m$) by definition of the supremum, because $\displaystyle s \le \sup_{n \mathop \in \N} f_n$. +Furthermore, $A_m \uparrow X$ ($m \to \infty$). +{{explain|the notation of the above statement: link to a page defining $A_m \uparrow X$ or replace with the house notational style.}} +By definition of the $A_m$ we have that: +:$\displaystyle \int f_m \rd \mu \ge \int \chi_{A_m} f_m \rd \mu \ge \paren {1 - \epsilon} \int \chi_{A_m} s \rd \mu = \paren {1 - \epsilon} \map {\nu_s} {A_m}$ +where the first inequality follows from the fact that the $f_m$ are positive. +Now, since the sequence $\sequence {f_n}_{n \mathop \in \N}$ increases monotonically, by the [[Outer Measure of Limit of Increasing Sequence of Sets| increasing sequence of sets property]] given in any measure space, we have that taking the supremum of both sides yields: +{{begin-eqn}} +{{eqn | l = \sup_{m \mathop \in \N} \int f_m \rd \mu + | r = \lim_{m \mathop \to +\infty} \int f_m \rd \mu + | c = +}} +{{eqn | o = \ge + | r = \lim_{m \mathop \to +\infty} \paren {1 - \epsilon} \map {\nu_s} {A_m} + | c = +}} +{{eqn | r = \paren {1 - \epsilon} \map {\nu_s} X + | c = +}} +{{eqn | r = \paren {1 - \epsilon} \int s \rd \mu + | c = +}} +{{end-eqn}} +Since $\epsilon$ was selected arbitrarily, we have that the desired inequality holds: +:$\displaystyle \sup_{m \mathop \in \N} \int f_m \rd \mu \ge \int s \rd \mu$ +{{qed}} +\end{proof}<|endoftext|> +\section{Ordered Semigroup Isomorphism is Surjective Monomorphism} +Tags: Ordered Semigroups, Surjections, Monomorphisms, Isomorphisms + +\begin{theorem} +Let $\left({S, \circ, \preceq}\right)$ and $\left({T, *, \preccurlyeq}\right)$ be [[Definition:Ordered Semigroup|ordered semigroups]]. 
+Let $\phi: \left({S, \circ, \preceq}\right) \to \left({T, *, \preccurlyeq}\right)$ be a [[Definition:Mapping|mapping]]. +Then $\phi$ is an [[Definition:Ordered Semigroup Isomorphism|ordered semigroup isomorphism]] iff: +:$(1): \quad \phi$ is an [[Definition:Ordered Semigroup Monomorphism|ordered semigroup monomorphism]] +:$(2): \quad \phi$ is a [[Definition:Surjection|surjection]]. +\end{theorem} + +\begin{proof} +=== Necessary Condition === +Let $\phi: \left({S, \circ, \preceq}\right) \to \left({T, *, \preccurlyeq}\right)$ be an [[Definition:Ordered Semigroup Isomorphism|ordered semigroup isomorphism]]. +Then by definition: +: $\phi$ is a [[Definition:Semigroup Isomorphism|semigroup isomorphism]] from the [[Definition:Semigroup|semigroup]] $\left({S, \circ}\right)$ to the [[Definition:Semigroup|semigroup]] $\left({T, *}\right)$ +: $\phi$ is an [[Definition:Order Isomorphism|order isomorphism]] from the [[Definition:Ordered Set|ordered set]] $\left({S, \preceq}\right)$ to the [[Definition:Ordered Set|ordered set]] $\left({T, \preccurlyeq}\right)$. +A [[Definition:Semigroup Isomorphism|semigroup isomorphism]] is by definition: +: A [[Definition:Semigroup Homomorphism|semigroup homomorphism]] +which is: +: A [[Definition:Semigroup Monomorphism|monomorphism]] and an [[Definition:Semigroup Epimorphism|epimorphism]]. +From [[Order Isomorphism is Surjective Order Embedding]], an [[Definition:Order Isomorphism|order isomorphism]] is an [[Definition:Order Embedding|order embedding]] which is also a [[Definition:Surjection|surjection]]. +Putting this all together, we see that an [[Definition:Ordered Semigroup Isomorphism|ordered semigroup isomorphism]] is: +: A [[Definition:Semigroup Monomorphism|monomorphism]] +: An [[Definition:Order Embedding|order embedding]] +: A [[Definition:Surjection|surjection]]. 
+An [[Definition:Ordered Semigroup Monomorphism|ordered semigroup monomorphism]] is by definition:
+: A [[Definition:Semigroup Monomorphism|monomorphism]]
+which is also
+: An [[Definition:Order Embedding|order embedding]]
+Hence $\phi$ is:
+: An [[Definition:Ordered Semigroup Monomorphism|ordered semigroup monomorphism]]
+: A [[Definition:Surjection|surjection]].
+{{qed|lemma}}
+=== Sufficient Condition ===
+Let $\phi$ be:
+: An [[Definition:Ordered Semigroup Monomorphism|ordered semigroup monomorphism]]
+: A [[Definition:Surjection|surjection]].
+By definition, this means that $\phi$ is:
+: A [[Definition:Semigroup Monomorphism|monomorphism]]
+: An [[Definition:Order Embedding|order embedding]]
+: A [[Definition:Surjection|surjection]].
+From [[Order Isomorphism is Surjective Order Embedding]], an [[Definition:Order Isomorphism|order isomorphism]] is an [[Definition:Order Embedding|order embedding]] which is also a [[Definition:Surjection|surjection]].
+A [[Definition:Semigroup Isomorphism|semigroup isomorphism]] is by definition:
+: A [[Definition:Semigroup Homomorphism|semigroup homomorphism]]
+which is:
+: A [[Definition:Semigroup Monomorphism|semigroup monomorphism]] and a [[Definition:Semigroup Epimorphism|semigroup epimorphism]].
+Thus a [[Definition:Semigroup Monomorphism|semigroup monomorphism]] which is also a [[Definition:Surjection|surjection]] is a [[Definition:Semigroup Isomorphism|semigroup isomorphism]].
+So $\phi$ is:
+: A [[Definition:Semigroup Isomorphism|semigroup isomorphism]] from the [[Definition:Semigroup|semigroup]] $\left({S, \circ}\right)$ to the [[Definition:Semigroup|semigroup]] $\left({T, *}\right)$
+: An [[Definition:Order Isomorphism|order isomorphism]] from the [[Definition:Ordered Set|ordered set]] $\left({S, \preceq}\right)$ to the [[Definition:Ordered Set|ordered set]] $\left({T, \preccurlyeq}\right)$.
+{{qed}} +\end{proof}<|endoftext|> +\section{Ordered Semigroup Monomorphism into Image is Isomorphism} +Tags: Semigroup Homomorphisms, Order Isomorphisms + +\begin{theorem} +Let $\struct {S, \circ, \preceq}$ and $\struct {T, *, \preccurlyeq}$ be [[Definition:Ordered Semigroup|ordered semigroups]]. +Let $\phi: \struct {S, \circ, \preceq} \to \struct {T, *, \preccurlyeq}$ be an [[Definition:Ordered Semigroup Monomorphism|ordered semigroup monomorphism]]. +Let $S'$ be the [[Definition:Image of Mapping|image]] of $\phi$. +Then $\phi$ is an [[Definition:Ordered Semigroup Isomorphism|ordered semigroup isomorphism]] from $\struct {S, \circ, \preceq}$ into $\struct {S', * {\restriction_{S'} }, \preccurlyeq \restriction_{S'} }$. +Here: +:$* {\restriction_{S'}}$ denotes the [[Definition:Restriction of Operation|restriction]] of $*$ to $S' \times S'$ +:$\preccurlyeq \restriction_{S'}$ denotes the [[Definition:Restriction of Relation|restriction]] of $\preccurlyeq$ to $S' \times S'$. +\end{theorem} + +\begin{proof} +Let $\phi: \struct {S, \circ, \preceq} \to \struct {T, *, \preccurlyeq}$ be an [[Definition:Ordered Semigroup Monomorphism|ordered semigroup monomorphism]]. +Then $\phi$ is an [[Definition:Injection|injection]] into $\struct {T, *, \preccurlyeq}$ by definition. +From [[Restriction of Mapping to Image is Surjection]], a [[Definition:Mapping|mapping]] from a [[Definition:Set|set]] to the [[Definition:Image of Mapping|image]] of that [[Definition:Mapping|mapping]] is a [[Definition:Surjection|surjection]]. +Thus the [[Definition:Surjective Restriction|surjective restriction]] of $\phi$ onto $S'$ is an [[Definition:Ordered Semigroup Monomorphism|ordered semigroup monomorphism]] which is also a [[Definition:Surjection|surjection]]. +Hence the result from [[Ordered Semigroup Isomorphism is Surjective Monomorphism]]. 
+{{qed}}
+\end{proof}<|endoftext|>
+\section{Order Completion Unique up to Isomorphism}
+Tags: Order Theory
+
+\begin{theorem}
+Let $\left({S, \preceq_S}\right)$ be an [[Definition:Ordered Set|ordered set]].
+Suppose that both $\left({T, \preceq_T}\right)$ and $\left({T', \preceq_{T'}}\right)$ are [[Definition:Order Completion|order completions]] for $\left({S, \preceq_S}\right)$.
+Then there exists a unique [[Definition:Order Isomorphism|order isomorphism]] $\psi: T \to T'$.
+In particular, $\left({T, \preceq_T}\right)$ and $\left({T', \preceq_{T'}}\right)$ are isomorphic.
+\end{theorem}
+
+\begin{proof}
+Both $\left({T, \preceq_T}\right)$ and $\left({T', \preceq_{T'}}\right)$ are [[Definition:Order Completion|order completions]] for $\left({S, \preceq_S}\right)$.
+Hence they both satisfy condition $(4)$ (and also $(1)$, $(2)$ and $(3)$).
+Thus, applying condition $(4)$ to $\left({T, \preceq_T}\right)$ (with respect to $\left({T', \preceq_{T'}}\right)$), we obtain a unique [[Definition:Order-Preserving Mapping|order-preserving mapping]] $\phi: T' \to T$.
+Applying $(4)$ to $\left({T', \preceq_{T'}}\right)$ (with respect to $\left({T, \preceq_T}\right)$) also gives a unique [[Definition:Order-Preserving Mapping|order-preserving mapping]] $\psi: T \to T'$.
+By [[Composite of Order-Preserving Mappings is Order-Preserving]], their [[Definition:Composite Mapping|composites]] $\psi \circ \phi: T' \to T'$ and $\phi \circ \psi: T \to T$ are also [[Definition:Order-Preserving Mapping|order-preserving]].
+Now applying $(4)$ to $\left({T, \preceq_T}\right)$ (with respect to itself), it follows that $\phi \circ \psi$ is unique.
+Now from [[Identity Mapping is Order Isomorphism]], the [[Definition:Identity Mapping|identity mapping]] $\operatorname{id}_T: T \to T$ is also [[Definition:Order-Preserving Mapping|order-preserving]].
+Thus, uniqueness of $\phi \circ \psi$ implies that $\phi \circ \psi = \operatorname{id}_T$.
+Similarly, it follows that $\psi \circ \phi = \operatorname{id}_{T'}$.
+It follows that $\psi: T \to T'$ is a [[Definition:Bijection|bijection]] from [[Bijection iff Left and Right Inverse]].
+Thus, $\psi$ is an [[Definition:Order-Preserving Mapping|order-preserving]] [[Definition:Bijection|bijection]] whose [[Definition:Inverse Mapping|inverse]] is also [[Definition:Order-Preserving Mapping|order-preserving]].
+That is, $\psi$ is an [[Definition:Order Isomorphism|order isomorphism]].
+Its uniqueness was already remarked above.
+{{qed}}
+[[Category:Order Theory]]
+\end{proof}<|endoftext|>
+\section{Intersection of Strict Upper Closures in Toset}
+Tags: Total Orderings
+
+\begin{theorem}
+Let $\left({S, \preceq}\right)$ be a [[Definition:Totally Ordered Set|totally ordered set]].
+Let $a, b \in S$.
+Then:
+:$a^\succ \cap b^\succ = \left({\max \left({a, b}\right)}\right)^\succ$
+where:
+:$a^\succ$ denotes [[Definition:Strict Upper Closure of Element|strict upper closure]] of $a$
+:$\max$ denotes the [[Definition:Max Operation|max operation]].
+\end{theorem}
+
+\begin{proof}
+As $\left({S, \preceq}\right)$ is a [[Definition:Totally Ordered Set|totally ordered set]], we have either $a \preceq b$ or $b \preceq a$.
+Since both sides are seen to be invariant upon interchanging $a$ and $b$, [[Definition:WLOG|WLOG]] let $a \preceq b$.
+Then it follows by [[Definition:Max Operation|definition of $\max$]] that:
+: $\max \left({a, b}\right) = b$
+Thus, from [[Intersection with Subset is Subset]], it suffices to show that $b^\succ \subseteq a^\succ$.
+By definition of [[Definition:Strict Upper Closure of Element|strict upper closure]], this comes down to showing that:
+:$\forall c \in S: b \prec c \implies a \prec c$
+So let $c \in S$ with $b \prec c$, and recall that $a \preceq b$.
+By [[Strictly Precedes is Strict Ordering]], $a \prec c$.
+{{qed}} +\end{proof}<|endoftext|> +\section{Intersection of Weak Upper Closures in Toset} +Tags: Total Orderings + +\begin{theorem} +Let $\left({S, \preccurlyeq}\right)$ be a [[Definition:Totally Ordered Set|totally ordered set]]. +Let $a, b \in S$. +Then: +:$a^\succcurlyeq \cap b^\succcurlyeq = \left({\max \left({a, b}\right)}\right)^\succcurlyeq$ +where: +: $a^\succcurlyeq$ denotes [[Definition:Weak Upper Closure of Element|weak upper closure]] of $a$ +: $\max$ denotes the [[Definition:Max Operation|max operation]]. +\end{theorem} + +\begin{proof} +As $\left({S, \preccurlyeq}\right)$ is a [[Definition:Totally Ordered Set|totally ordered set]], either $a \preccurlyeq b$ or $b \preccurlyeq a$. +Since both sides are seen to be invariant upon interchanging $a$ and $b$, [[Definition:WLOG|WLOG]] let $a \preccurlyeq b$. +Then it follows by [[Definition:Max Operation|definition of $\max$]] that: +:$\max \left({a, b}\right) = b$ +Thus, from [[Intersection with Subset is Subset]], it suffices to show that: +:$b^\succcurlyeq \subseteq a^\succcurlyeq$ +By definition of [[Definition:Weak Upper Closure of Element|weak upper closure]], this comes down to showing that: +:$\forall c \in S: b \preccurlyeq c \implies a \preccurlyeq c$ +So let $c \in S$ with $b \preccurlyeq c$. +Recall that $a \preccurlyeq b$. +Now as $\preccurlyeq$ is a [[Definition:Total Ordering|total ordering]], it is in particular [[Definition:Transitive Relation|transitive]]. +Hence $a \preccurlyeq c$. +{{qed}} +\end{proof}<|endoftext|> +\section{Intersection of Weak Lower Closures in Toset} +Tags: Total Orderings + +\begin{theorem} +Let $\struct {S, \preccurlyeq}$ be a [[Definition:Totally Ordered Set|totally ordered set]]. +Let $a, b \in S$. +Then: +:$a^\preccurlyeq \cap b^\preccurlyeq = \paren {\min \set {a, b} }^\preccurlyeq$ +where: +:$a^\preccurlyeq$ denotes the [[Definition:Weak Lower Closure of Element|weak lower closure of $a$]] +:$\min$ denotes the [[Definition:Min Operation|min operation]]. 
+\end{theorem}
+
+\begin{proof}
+As $\struct {S, \preccurlyeq}$ is a [[Definition:Totally Ordered Set|totally ordered set]], either $a \preccurlyeq b$ or $b \preccurlyeq a$.
+Both sides are seen to be invariant upon interchanging $a$ and $b$.
+{{WLOG}}, let $b \preccurlyeq a$.
+Then it follows by [[Definition:Min Operation|definition of $\min$]] that $\min \set {a, b} = b$.
+Thus, from [[Intersection with Subset is Subset]], it suffices to show that:
+:$b^\preccurlyeq \subseteq a^\preccurlyeq$
+By definition of [[Definition:Weak Lower Closure of Element|weak lower closure]], this comes down to showing that:
+:$\forall c \in S: c \preccurlyeq b \implies c \preccurlyeq a$
+So let $c \in S$ with $c \preccurlyeq b$.
+Recall that $b \preccurlyeq a$.
+Now as $\preccurlyeq$ is a [[Definition:Total Ordering|total ordering]], it is in particular [[Definition:Transitive Relation|transitive]].
+Hence $c \preccurlyeq a$.
+{{qed}}
+\end{proof}<|endoftext|>
+\section{Intersection of Strict Lower Closures in Toset}
+Tags: Total Orderings
+
+\begin{theorem}
+Let $\left({S, \preceq}\right)$ be a [[Definition:Totally Ordered Set|totally ordered set]].
+Let $a,b \in S$.
+Then:
+:$a^\prec \cap b^\prec = \left({\min \left({a, b}\right)}\right)^\prec$
+where:
+: $a^\prec$ denotes [[Definition:Strict Lower Closure of Element|strict lower closure]] of $a$
+: $\min$ denotes the [[Definition:Min Operation|min operation]].
+\end{theorem}
+
+\begin{proof}
+As $\left({S, \preceq}\right)$ is a [[Definition:Totally Ordered Set|totally ordered set]], we have either $a \preceq b$ or $b \preceq a$.
+Since both sides are seen to be invariant upon interchanging $a$ and $b$, [[Definition:WLOG|WLOG]] let $b \preceq a$.
+Then it follows by [[Definition:Min Operation|definition of $\min$]] that $\min \left({a, b}\right) = b$.
+Thus, from [[Intersection with Subset is Subset]], it suffices to show that $b^\prec \subseteq a^\prec$.
+By the definition of [[Definition:Strict Lower Closure of Element|strict lower closure]], this comes down to showing that: +:$\forall c \in S: c \prec b \implies c \prec a$ +So let $c \in S$ with $c \prec b$, and recall that $b \preceq a$. +By [[Strictly Precedes is Strict Ordering]], $c \prec a$. +{{qed}} +\end{proof}<|endoftext|> +\section{Naturally Ordered Semigroup Exists} +Tags: Naturally Ordered Semigroup + +\begin{theorem} +There exists a [[Definition:Naturally Ordered Semigroup|Naturally Ordered Semigroup]]. +\end{theorem} + +\begin{proof} +{{questionable}} +We take as [[Definition:Axiom|axiomatic]] the [[Definition:Zermelo-Fraenkel Axioms|Zermelo-Fraenkel axioms]]. +From these, [[Existence of Minimal Infinite Successor Set]] is demonstrated. +This proves the existence of a [[Definition:Minimal Infinite Successor Set|minimal infinite successor set]]. +Then we have that the [[Minimal Infinite Successor Set forms Peano Structure]]. +It follows that the existence of a [[Definition:Peano Structure|Peano structure]] depends upon the existence of such a [[Definition:Minimal Infinite Successor Set|minimal infinite successor set]]. +Then we have that a [[Naturally Ordered Semigroup forms Peano Structure]]. +Hence the result. +{{qed}} +\end{proof}<|endoftext|> +\section{Strict Upper Closure in Restricted Ordering} +Tags: Upper Closures + +\begin{theorem} +Let $\left({S, \preceq}\right)$ be an [[Definition:Ordered Set|ordered set]]. +Let $T \subseteq S$ be a [[Definition:Subset|subset]] of $S$, and let $\preceq \restriction_T$ be the [[Definition:Restricted Ordering|restricted ordering]] on $T$. +Then for all $t \in T$: +:$t^{\succ T} = T \cap t^{\succ S}$ +where: +: $t^{\succ T}$ is the [[Definition:Strict Upper Closure of Element|strict upper closure]] of $t$ in $\left({T, \preceq \restriction_T}\right)$ +: $t^{\succ S}$ is the [[Definition:Strict Upper Closure of Element|strict upper closure]] of $t$ in $\left({S, \preceq}\right)$. 
+\end{theorem}
+
+\begin{proof}
+Let $t \in T$, and suppose that $t' \in t^{\succ T}$.
+By definition of [[Definition:Strict Upper Closure of Element|strict upper closure]], this is equivalent to:
+:$t \preceq \restriction_T t' \land t \ne t'$
+By [[Definition:Restricted Ordering|definition of $\preceq \restriction_T$]], the first condition comes down to:
+:$t \preceq t' \land t' \in T$
+as it is assumed that $t \in T$.
+In conclusion, $t' \in t^{\succ T}$ is equivalent to:
+:$t' \in T \land t \preceq t' \land t \ne t'$
+These last two conjuncts precisely express that $t' \in t^{\succ S}$.
+By definition of [[Definition:Set Intersection|set intersection]], it also holds that:
+:$t' \in T \cap t^{\succ S}$
+precisely when $t' \in T$ and $t' \in t^{\succ S}$.
+Thus, it follows that the following are equivalent:
+:$t' \in t^{\succ T}$
+:$t' \in T \cap t^{\succ S}$
+and hence the result follows, by definition of [[Definition:Set Equality|set equality]].
+{{qed}}
+[[Category:Upper Closures]]
+\end{proof}<|endoftext|>
+\section{Weak Upper Closure in Restricted Ordering}
+Tags: Upper Closures
+
+\begin{theorem}
+Let $\left({S, \preccurlyeq}\right)$ be an [[Definition:Ordered Set|ordered set]].
+Let $T \subseteq S$ be a [[Definition:Subset|subset]] of $S$.
+Let $\preccurlyeq \restriction_T$ be the [[Definition:Restricted Ordering|restricted ordering]] on $T$.
+Then for all $t \in T$:
+:$t^{\succcurlyeq T} = T \cap t^{\succcurlyeq S}$
+where:
+: $t^{\succcurlyeq T}$ is the [[Definition:Weak Upper Closure of Element|weak upper closure]] of $t$ in $\left({T, \preccurlyeq \restriction_T}\right)$
+: $t^{\succcurlyeq S}$ is the [[Definition:Weak Upper Closure of Element|weak upper closure]] of $t$ in $\left({S, \preccurlyeq}\right)$.
+\end{theorem}
+
+\begin{proof}
+Let $t \in T$.
+Suppose that:
+:$t' \in t^{\succcurlyeq T}$
+By definition of [[Definition:Weak Upper Closure of Element|weak upper closure $t^{\succcurlyeq T}$]], this is equivalent to:
+:$t \preccurlyeq \restriction_T t'$
+By [[Definition:Restricted Ordering|definition of $\preccurlyeq \restriction_T$]], this comes down to:
+:$t \preccurlyeq t' \land t' \in T$
+as it is assumed that $t \in T$.
+The first conjunct precisely expresses that $t' \in t^{\succcurlyeq S}$.
+By definition of [[Definition:Set Intersection|set intersection]], it also holds that:
+:$t' \in T \cap t^{\succcurlyeq S}$
+precisely when $t' \in T$ and $t' \in t^{\succcurlyeq S}$.
+Thus it follows that the following are equivalent:
+:$t' \in t^{\succcurlyeq T}$
+:$t' \in T \cap t^{\succcurlyeq S}$
+and hence the result follows, by definition of [[Definition:Set Equality|set equality]].
+{{qed}}
+[[Category:Upper Closures]]
+\end{proof}<|endoftext|>
+\section{Strict Lower Closure in Restricted Ordering}
+Tags: Lower Closures
+
+\begin{theorem}
+Let $\left({S, \preceq}\right)$ be an [[Definition:Ordered Set|ordered set]].
+Let $T \subseteq S$ be a [[Definition:Subset|subset]] of $S$, and let $\preceq \restriction_T$ be the [[Definition:Restricted Ordering|restricted ordering]] on $T$.
+Then for all $t \in T$:
+:$t^{\prec T} = T \cap t^{\prec S}$
+where:
+: $t^{\prec T}$ is the [[Definition:Strict Lower Closure of Element|strict lower closure]] of $t$ in $\left({T, \preceq \restriction_T}\right)$
+: $t^{\prec S}$ is the [[Definition:Strict Lower Closure of Element|strict lower closure]] of $t$ in $\left({S, \preceq}\right)$.
+\end{theorem}
+
+\begin{proof}
+Let $t \in T$, and suppose that $t' \in t^{\prec T}$.
+By definition of [[Definition:Strict Lower Closure of Element|strict lower closure]], this is equivalent to:
+:$t' \preceq \restriction_T t \land t \ne t'$
+By [[Definition:Restricted Ordering|definition of $\preceq \restriction_T$]], the first condition comes down to:
+:$t' \preceq t \land t' \in T$
+as it is assumed that $t \in T$.
+In conclusion, $t' \in t^{\prec T}$ is equivalent to:
+:$t' \in T \land t' \preceq t \land t \ne t'$
+These last two conjuncts precisely express that $t' \in t^{\prec S}$.
+By definition of [[Definition:Set Intersection|set intersection]], it also holds that:
+:$t' \in T \cap t^{\prec S}$
+precisely when $t' \in T$ and $t' \in t^{\prec S}$.
+Thus, it follows that the following are equivalent:
+:$t' \in t^{\prec T}$
+:$t' \in T \cap t^{\prec S}$
+and hence the result follows, by definition of [[Definition:Set Equality|set equality]].
+{{qed}}
+[[Category:Lower Closures]]
+\end{proof}<|endoftext|>
+\section{Weak Lower Closure in Restricted Ordering}
+Tags: Lower Closures
+
+\begin{theorem}
+Let $\left({S, \preccurlyeq}\right)$ be an [[Definition:Ordered Set|ordered set]].
+Let $T \subseteq S$ be a [[Definition:Subset|subset]] of $S$.
+Let $\preccurlyeq \restriction_T$ be the [[Definition:Restricted Ordering|restricted ordering]] on $T$.
+Then for all $t \in T$:
+:$t^{\preccurlyeq T} = T \cap t^{\preccurlyeq S}$
+where:
+:$t^{\preccurlyeq T}$ is the [[Definition:Weak Lower Closure of Element|weak lower closure]] of $t$ in $\left({T, \preccurlyeq \restriction_T}\right)$
+:$t^{\preccurlyeq S}$ is the [[Definition:Weak Lower Closure of Element|weak lower closure]] of $t$ in $\left({S, \preccurlyeq}\right)$.
+\end{theorem}
+
+\begin{proof}
+Let $t \in T$, and suppose that $t' \in t^{\preccurlyeq T}$.
+By definition of [[Definition:Weak Lower Closure of Element|weak lower closure $t^{\preccurlyeq T}$]], this is equivalent to:
+:$t' \preccurlyeq \restriction_T t$
+By [[Definition:Restricted Ordering|definition of $\preccurlyeq \restriction_T$]], this comes down to:
+:$t' \preccurlyeq t \land t' \in T$
+as it is assumed that $t \in T$.
+The first conjunct precisely expresses that $t' \in t^{\preccurlyeq S}$.
+By definition of [[Definition:Set Intersection|set intersection]], it also holds that:
+:$t' \in T \cap t^{\preccurlyeq S}$
+precisely when $t' \in T$ and $t' \in t^{\preccurlyeq S}$.
+Thus, it follows that the following are equivalent:
+:$t' \in t^{\preccurlyeq T}$
+:$t' \in T \cap t^{\preccurlyeq S}$
+and hence the result follows, by definition of [[Definition:Set Equality|set equality]].
+{{qed}}
+[[Category:Lower Closures]]
+\end{proof}<|endoftext|>
+\section{Order Topology on Natural Numbers is Discrete Topology}
+Tags: Natural Numbers, Order Topology, Discrete Topology
+
+\begin{theorem}
+Let $\le$ be the standard ordering on the [[Definition:Natural Number|natural numbers]] $\N$.
+Then the [[Definition:Order Topology|order topology]] $\tau$ on $\N$ is the [[Definition:Discrete Topology|discrete topology]].
+\end{theorem}
+
+\begin{proof}
+By [[Topology Discrete iff All Singletons Open]], it suffices to show that for all $n \in \N$, the [[Definition:Singleton|singleton]] $\left\{{n}\right\}$ is an [[Definition:Open Set (Topology)|open set]] of $\tau$.
+Now observe that $\mathop{\downarrow} \left({1}\right) = \left\{{0}\right\}$, since for all $n \in \N$, $n < 1 \implies n = 0$.
+It follows that $\left\{{0}\right\}$ is an [[Definition:Open Set (Topology)|open set]] of $\tau$.
+Suppose now that $n \in \N$ and $n \ne 0$.
+Then it is known that for all $m \in \N$, $n - 1 < m < n + 1$ implies $m = n$.
+Thus, $\mathop{\uparrow} \left({n - 1}\right) \cap \mathop{\downarrow} \left({n + 1}\right) = \left\{{n}\right\}$.
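As an aside not found in the ProofWiki source, the two identities above can be spot-checked computationally on a finite truncation of $\N$; the helpers `up` and `down` below are hypothetical names standing in for the strict upper and lower sets $\mathop{\uparrow}$ and $\mathop{\downarrow}$:

```python
# Spot-check on {0, ..., N-1}: down(1) = {0}, and for n >= 1 the
# "open interval" up(n-1) ∩ down(n+1) is exactly the singleton {n}.
N = 50
nats = range(N)

def up(m):
    """Strict upper set of m within the truncation: all k > m."""
    return {k for k in nats if k > m}

def down(m):
    """Strict lower set of m within the truncation: all k < m."""
    return {k for k in nats if k < m}

zero_ok = down(1) == {0}
singletons_ok = all(up(n - 1) & down(n + 1) == {n} for n in range(1, N - 1))
print(zero_ok, singletons_ok)  # → True True
```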
+It follows that $\left\{{n}\right\}$ is an [[Definition:Open Set (Topology)|open set]] of $\tau$.
+{{explain|arrows, and why this inference holds (using the def'n of order top as gen. top)}}
+Hence the result, from [[Proof by Cases]].
+{{qed}}
+[[Category:Natural Numbers]]
+[[Category:Order Topology]]
+[[Category:Discrete Topology]]
+\end{proof}<|endoftext|>
+\section{Natural Numbers under Multiplication form Ordered Commutative Semigroup}
+Tags: Natural Numbers
+
+\begin{theorem}
+Let $\N$ be the [[Definition:Natural Numbers|natural numbers]].
+Let $\times$ be [[Definition:Natural Number Multiplication|multiplication]].
+Let $\le$ be the [[Definition:Ordering on Natural Numbers|ordering on $\N$]].
+Then $\left({\N, \times, \le}\right)$ is an [[Definition:Ordered Commutative Semigroup|ordered commutative semigroup]].
+\end{theorem}
+
+\begin{proof}
+By [[Natural Numbers under Multiplication form Semigroup]], $\left({\N, \times}\right)$ is a [[Definition:Semigroup|semigroup]].
+By [[Natural Number Multiplication is Commutative]], $\times$ is [[Definition:Commutative Operation|commutative]].
+By [[Ordering on Natural Numbers is Compatible with Multiplication]], $\le$ is [[Definition:Relation Compatible with Operation|compatible]] with $\times$.
+The result follows.
+{{qed}}
+\end{proof}<|endoftext|>
+\section{Invertible Elements under Natural Number Multiplication}
+Tags: Natural Numbers
+
+\begin{theorem}
+Let $\N$ be the [[Definition:Natural Numbers|natural numbers]].
+Let $\times$ denote [[Definition:Natural Number Multiplication|multiplication]].
+Then the only [[Definition:Invertible Element|invertible element]] of $\N$ for $\times$ is $1$.
+\end{theorem}
+
+\begin{proof}
+Suppose $m \in \N$ is [[Definition:Invertible Element|invertible]] for $\times$.
+Let $n \in \N$ be such that $m \times n = 1$.
+Then from [[Natural Numbers have No Proper Zero Divisors]]:
+:$m \ne 0$ and $n \ne 0$
+Thus, $1 \le m$ and $1 \le n$.
+If $1 < m$, then from [[Ordering on Natural Numbers is Compatible with Multiplication]]:
+:$n = 1 \times n < m \times n$
+This contradicts $m \times n = 1$.
+Hence $m = 1$, and the result follows.
+{{qed}}
+\end{proof}<|endoftext|>
+\section{Graph containing Closed Walk of Odd Length also contains Odd Cycle}
+Tags: Graph Theory
+
+\begin{theorem}
+Let $G$ be a [[Definition:Graph (Graph Theory)|graph]].
+{{explain|This proof works for a [[Definition:Simple Graph|simple graph]], but the theorem may hold for loop graphs and/or multigraphs. Clarification needed as to what applies.}}
+Let $G$ have a [[Definition:Closed Walk|closed walk]] of [[Definition:Odd Integer|odd]] [[Definition:Length of Walk|length]].
+Then $G$ has an [[Definition:Odd Cycle (Graph Theory)|odd cycle]].
+\end{theorem}
+
+\begin{proof}
+Let $G = \left({V, E}\right)$ be a [[Definition:Graph (Graph Theory)|graph]] with a [[Definition:Closed Walk|closed walk]] whose [[Definition:Length of Walk|length]] is [[Definition:Odd Integer|odd]].
+From [[Closed Walk of Odd Length contains Odd Circuit]], such a walk contains a [[Definition:Circuit|circuit]] whose [[Definition:Length of Walk|length]] is [[Definition:Odd Integer|odd]].
+Let $C_1 = \left({v_1, \ldots, v_{2n+1} = v_1}\right)$ be such a [[Definition:Circuit|circuit]].
+Aiming for a [[Proof by Contradiction|contradiction]], suppose $G$ has no [[Definition:Odd Cycle (Graph Theory)|odd cycles]].
+Then $C_1$ is not a [[Definition:Cycle (Graph Theory)|cycle]].
+Hence, there exists a [[Definition:Vertex of Graph|vertex]] $v_i$ where $2 \le i \le 2n-1$ and an [[Definition:Integer|integer]] $k$ such that $i+1 \le k \le 2n$ and $v_i = v_k$.
+If $k-i$ is [[Definition:Odd Integer|odd]], then we have an [[Definition:Odd Integer|odd]] [[Definition:Circuit|circuit]] $\left({v_i, \ldots, v_k = v_i}\right)$ smaller in length than $C_1$.
+If $k-i$ is [[Definition:Even Integer|even]], then $\left({v_1, \ldots, v_i, v_{k+1}, \ldots, v_{2n+1}}\right)$ is a [[Definition:Circuit|circuit]] whose [[Definition:Length of Walk|length]] is [[Definition:Odd Integer|odd]], smaller in [[Definition:Length of Walk|length]] than $C_1$.
+In either case, this new [[Definition:Odd Integer|odd]] [[Definition:Length of Walk|length]] [[Definition:Circuit|circuit]] is named $C_2$, and the same argument is applied to it as was applied to $C_1$.
+Thus at each step the [[Definition:Length of Walk|length]] of the [[Definition:Odd Integer|odd]] [[Definition:Circuit|circuit]] is strictly reduced.
+At the $n$th step for some $n \in \N$, either:
+:$(1): \quad C_n$ is a [[Definition:Cycle (Graph Theory)|cycle]], which [[Proof by Contradiction|contradicts]] the supposition that $G$ has no [[Definition:Odd Cycle (Graph Theory)|odd cycles]]
+or:
+:$(2): \quad C_n$ is a [[Definition:Circuit|circuit]] whose [[Definition:Length of Walk|length]] is $3$.
+But from [[Circuit of Length 3 is Cycle]], $C_n$ is then a [[Definition:Cycle (Graph Theory)|cycle]], which by definition has [[Definition:Odd Integer|odd]] [[Definition:Length of Walk|length]].
+From this [[Definition:Contradiction|contradiction]] it follows that $G$ has at least one [[Definition:Odd Cycle (Graph Theory)|odd cycle]].
+{{qed}}
+[[Category:Graph Theory]]
+\end{proof}<|endoftext|>
+\section{Homomorphism of Powers/Naturally Ordered Semigroup}
+Tags: Homomorphisms, Naturally Ordered Semigroup
+
+\begin{theorem}
+Let $\struct {S, \circ, \preceq}$ be a [[Definition:Naturally Ordered Semigroup|naturally ordered semigroup]].
+Let $\struct {T_1, \odot}$ and $\struct {T_2, \oplus}$ be [[Definition:Semigroup|semigroups]].
+Let $\phi: \struct {T_1, \odot} \to \struct {T_2, \oplus}$ be a [[Definition:Semigroup Homomorphism|(semigroup) homomorphism]].
+For a given $a \in T_1$, let $\map {\odot^n} a$ be the [[Definition:Power of Element of Magma|$n$th power of $a$]] in $T_1$.
+For a given $a \in T_2$, let $\map {\oplus^n} a$ be the [[Definition:Power of Element of Magma|$n$th power of $a$]] in $T_2$.
+Then:
+:$\forall a \in T_1: \forall n \in \struct {S^*, \circ, \preceq}: \map \phi {\map {\odot^n} a} = \map {\oplus^n} {\map \phi a}$
+where $S^* = S \setminus \set 0$.
+\end{theorem}
+
+\begin{proof}
+The proof proceeds by the [[Principle of Mathematical Induction for Naturally Ordered Semigroup|Principle of Mathematical Induction for a Naturally Ordered Semigroup]].
+Let $A := \set {n \in S^*: \forall a \in T_1: \map \phi {\map {\odot^n} a} = \map {\oplus^n} {\map \phi a} }$.
+That is, $A$ is defined as the set of all $n$ such that:
+:$\forall a \in T_1: \map \phi {\map {\odot^n} a} = \map {\oplus^n} {\map \phi a}$
+=== Basis for the Induction ===
+We have that:
+{{begin-eqn}}
+{{eqn | l = \map \phi {\map {\odot^1} a}
+ | r = \map \phi a
+ | c = {{Defof|Power of Element of Magma}}
+}}
+{{eqn | r = \map {\oplus^1} {\map \phi a}
+ | c = {{Defof|Power of Element of Magma}}
+}}
+{{end-eqn}}
+So $1 \in A$.
+This is our [[Definition:Basis for the Induction|basis for the induction]].
+=== Induction Hypothesis ===
+Now we need to show that, if $k \in A$ where $k \ge 1$, then it logically follows that $k \circ 1 \in A$.
+So this is our [[Definition:Induction Hypothesis|induction hypothesis]]:
+:$\forall a \in T_1: \map \phi {\map {\odot^k} a} = \map {\oplus^k} {\map \phi a}$
+Then we need to show:
+:$\forall a \in T_1: \map \phi {\map {\odot^{k \circ 1} } a} = \map {\oplus^{k \circ 1} } {\map \phi a}$
+=== Induction Step ===
+This is our [[Definition:Induction Step|induction step]]:
+{{begin-eqn}}
+{{eqn | l = \map \phi {\map {\odot^{k \circ 1} } a}
+ | r = \map \phi {\paren {\map {\odot^k} a} \odot a}
+ | c = {{Defof|Power of Element of Magma}}
+}}
+{{eqn | r = \paren {\map \phi {\map {\odot^k} a} } \oplus \paren {\map \phi a}
+ | c = {{Defof|Semigroup Homomorphism}}
+}}
+{{eqn | r = \paren {\map {\oplus^k} {\map \phi a} } \oplus \paren {\map \phi a}
+ | c = [[Homomorphism of Powers/Naturally Ordered Semigroup#Induction Hypothesis|Induction Hypothesis]]
+}}
+{{eqn | r = \map {\oplus^{k \circ 1} } {\map \phi a}
+ | c = {{Defof|Power of Element of Magma}}
+}}
+{{end-eqn}}
+So $k \in A \implies k \circ 1 \in A$ and the result follows by the [[Principle of Mathematical Induction for Naturally Ordered Semigroup|Principle of Mathematical Induction]]:
+:$\forall n \in \struct {S^*, \circ, \preceq}: \map \phi {\map {\odot^n} a} = \map {\oplus^n} {\map \phi a}$
+{{qed}}
+\end{proof}<|endoftext|>
+\section{Homomorphism of Powers/Natural Numbers}
+Tags: Homomorphisms, Natural Numbers
+
+\begin{theorem}
+Let $\struct {T_1, \odot}$ and $\struct {T_2, \oplus}$ be [[Definition:Semigroup|semigroups]].
+Let $\phi: \struct {T_1, \odot} \to \struct {T_2, \oplus}$ be a [[Definition:Semigroup Homomorphism|(semigroup) homomorphism]].
+Let $n \in \N$.
+Let $\odot^n$ and $\oplus^n$ be the [[Definition:Power of Element of Semigroup|$n$th powers]] of $\odot$ and $\oplus$, respectively.
+Then:
+:$\forall a \in T_1: \forall n \in \N: \map \phi {\map {\odot^n} a} = \map {\oplus^n} {\map \phi a}$
+\end{theorem}
+
+\begin{proof}
+Consider the [[Definition:Natural Numbers|natural numbers]] $\N$ defined as a [[Definition:Naturally Ordered Semigroup|naturally ordered semigroup]].
+Then the result follows from [[Homomorphism of Powers/Naturally Ordered Semigroup|Homomorphism of Powers: Naturally Ordered Semigroup]].
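As an aside not found in the ProofWiki source, the statement can be illustrated with a concrete homomorphism: $\map \phi a = 2^a$ maps $\struct {\N, +}$ into $\struct {\N_{>0}, \times}$, and the $n$th power of $a$ under $+$ is $n \cdot a$, while the $n$th power of $\map \phi a$ under $\times$ is $\paren {2^a}^n$. A quick finite check in Python (names are illustrative only):

```python
# Check phi(odot^n(a)) = oplus^n(phi(a)) for phi(a) = 2**a, where
# odot^n(a) = n*a in (N, +) and oplus^n(x) = x**n in (N>0, *).
def phi(a):
    return 2 ** a

# phi is a (semigroup) homomorphism: phi(a + b) = phi(a) * phi(b)
homomorphism_ok = all(
    phi(a + b) == phi(a) * phi(b) for a in range(8) for b in range(8)
)

# Homomorphism of Powers: phi(n * a) = phi(a) ** n
powers_ok = all(
    phi(n * a) == phi(a) ** n for a in range(6) for n in range(1, 6)
)
print(homomorphism_ok, powers_ok)  # → True True
```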
+{{qed}}
+[[Category:Homomorphisms]]
+[[Category:Natural Numbers]]
+\end{proof}<|endoftext|>
+\section{Homomorphism of Powers/Integers}
+Tags: Homomorphisms, Integers
+
+\begin{theorem}
+Let $\struct {T_1, \odot}$ and $\struct {T_2, \oplus}$ be [[Definition:Monoid|monoids]].
+Let $\phi: \struct {T_1, \odot} \to \struct {T_2, \oplus}$ be a [[Definition:Semigroup Homomorphism|(semigroup) homomorphism]].
+Let $a$ be an [[Definition:Invertible Element|invertible element]] of $T_1$.
+Let $n \in \Z$.
+Let $\odot^n$ and $\oplus^n$ be as defined in [[Index Laws for Monoids]].
+Then:
+:$\forall n \in \Z: \map \phi {\map {\odot^n} a} = \map {\oplus^n} {\map \phi a}$
+\end{theorem}
+
+\begin{proof}
+By [[Homomorphism of Powers/Natural Numbers|Homomorphism of Powers: Natural Numbers]], we need to show this only for negative $n$, that is:
+:$\forall n \in \N^*: \map \phi {\map {\odot^{-n} } a} = \map {\oplus^{-n} } {\map \phi a}$
+But by [[Homomorphism with Identity Preserves Inverses]]:
+:$\map \phi {a^{-1} } = \paren {\map \phi a}^{-1}$
+Hence by [[Homomorphism of Powers/Natural Numbers|Homomorphism of Powers: Natural Numbers]]:
+:$\map {\oplus^{-n} } {\map \phi a} = \map {\oplus^n} {\map \phi {a^{-1} } } = \map \phi {\map {\odot^n} {a^{-1} } } = \map \phi {\map {\odot^{-n} } a}$
+{{qed}}
+\end{proof}<|endoftext|>
+\section{Right Operation is Left Distributive over All Operations}
+Tags: Left and Right Operations
+
+\begin{theorem}
+Let $\struct {S, \circ, \rightarrow}$ be an [[Definition:Algebraic Structure|algebraic structure]] where:
+:$\rightarrow$ is the [[Definition:Right Operation|right operation]]
+:$\circ$ is any arbitrary [[Definition:Binary Operation|binary operation]].
+Then $\rightarrow$ is [[Definition:Left Distributive Operation|left distributive]] over $\circ$.
+\end{theorem} + +\begin{proof} +By definition of the [[Definition:Right Operation|right operation]]: +{{begin-eqn}} +{{eqn | l = a \rightarrow \paren {b \circ c} + | r = b \circ c + | c = +}} +{{eqn | r = \paren {a \rightarrow b} \circ \paren {a \rightarrow c} + | c = +}} +{{end-eqn}} +The result follows by definition of [[Definition:Left Distributive Operation|left distributivity]]. +{{qed}} +\end{proof}<|endoftext|> +\section{Left Operation is Right Distributive over All Operations} +Tags: Left and Right Operations + +\begin{theorem} +Let $\struct {S, \circ, \leftarrow}$ be an [[Definition:Algebraic Structure|algebraic structure]] where: +:$\leftarrow$ is the [[Definition:Left Operation|left operation]] +:$\circ$ is any arbitrary [[Definition:Binary Operation|binary operation]]. +Then $\leftarrow$ is [[Definition:Right Distributive Operation|right distributive]] over $\circ$. +\end{theorem} + +\begin{proof} +By definition of the [[Definition:Left Operation|left operation]]: +{{begin-eqn}} +{{eqn | l = \paren {a \circ b} \leftarrow c + | r = a \circ b + | c = +}} +{{eqn | r = \paren {a \leftarrow c} \circ \paren {b \leftarrow c} + | c = +}} +{{end-eqn}} +The result follows by definition of [[Definition:Right Distributive Operation|right distributivity]]. +{{qed}} +\end{proof}<|endoftext|> +\section{Right Operation is Distributive over Idempotent Operation} +Tags: Left and Right Operations, Distributive Operations + +\begin{theorem} +Let $\struct {S, \circ, \rightarrow}$ be an [[Definition:Algebraic Structure|algebraic structure]] where: +:$\rightarrow$ is the [[Definition:Right Operation|right operation]] +:$\circ$ is any arbitrary [[Definition:Binary Operation|binary operation]]. +Then: +:$\rightarrow$ is [[Definition:Distributive Operation|distributive]] over $\circ$ +{{iff}} +:$\circ$ is [[Definition:Idempotent Operation|idempotent]]. 
+\end{theorem} + +\begin{proof} +From [[Right Operation is Left Distributive over All Operations]]: +:$\forall a, b, c \in S: a \rightarrow \paren {b \circ c} = \paren {a \rightarrow b} \circ \paren {a \rightarrow c}$ +for all [[Definition:Binary Operation|binary operations]] $\circ$. +It remains to show that $\rightarrow$ is [[Definition:Right Distributive Operation|right distributive]] over $\circ$ {{iff}} $\circ$ is [[Definition:Idempotent Operation|idempotent]]. +=== Necessary Condition === +Let $\circ$ be [[Definition:Idempotent Operation|idempotent]]. +Then: +{{begin-eqn}} +{{eqn | l = \paren {a \circ b} \rightarrow c + | r = c + | c = {{Defof|Right Operation}} +}} +{{eqn | r = c \circ c + | c = {{Defof|Idempotent Operation}} +}} +{{eqn | r = \paren {a \rightarrow c} \circ \paren {b \rightarrow c} + | c = {{Defof|Right Operation}} +}} +{{end-eqn}} +Thus $\rightarrow$ is [[Definition:Right Distributive Operation|right distributive]] over $\circ$. +{{qed|lemma}} +=== Sufficient Condition === +Let $\rightarrow$ be [[Definition:Right Distributive Operation|right distributive]] over $\circ$. +Let $c \in S$ be arbitrary. +Then: +{{begin-eqn}} +{{eqn | l = c + | r = \paren {a \circ b} \rightarrow c + | c = {{Defof|Right Operation}} +}} +{{eqn | r = \paren {a \rightarrow c} \circ \paren {b \rightarrow c} + | c = {{Defof|Right Distributive Operation}} +}} +{{eqn | r = c \circ c + | c = {{Defof|Right Operation}} +}} +{{end-eqn}} +Hence $\circ$ is [[Definition:Idempotent Operation|idempotent]]. +{{qed}} +\end{proof}<|endoftext|> +\section{Left Operation is Distributive over Idempotent Operation} +Tags: Left and Right Operations, Distributive Operations + +\begin{theorem} +Let $\struct {S, \circ, \leftarrow}$ be an [[Definition:Algebraic Structure|algebraic structure]] where: +:$\leftarrow$ is the [[Definition:Left Operation|left operation]] +:$\circ$ is any arbitrary [[Definition:Binary Operation|binary operation]]. 
+Then: +:$\leftarrow$ is [[Definition:Distributive Operation|distributive]] over $\circ$ +{{iff}} +:$\circ$ is [[Definition:Idempotent Operation|idempotent]]. +\end{theorem} + +\begin{proof} +From [[Left Operation is Right Distributive over All Operations]]: +:$\forall a, b, c \in S: \paren {a \circ b} \leftarrow c = \paren {a \leftarrow c} \circ \paren {b \leftarrow c}$ +for all [[Definition:Binary Operation|binary operations]] $\circ$. +It remains to show that $\leftarrow$ is [[Definition:Left Distributive Operation|left distributive]] over $\circ$ {{iff}} $\circ$ is [[Definition:Idempotent Operation|idempotent]]. +=== Necessary Condition === +Let $\circ$ be [[Definition:Idempotent Operation|idempotent]]. +Then: +{{begin-eqn}} +{{eqn | l = a \leftarrow \paren {b \circ c} + | r = a + | c = {{Defof|Left Operation}} +}} +{{eqn | r = a \circ a + | c = {{Defof|Idempotent Operation}} +}} +{{eqn | r = \paren {a \leftarrow b} \circ \paren {a \leftarrow c} + | c = {{Defof|Left Operation}} +}} +{{end-eqn}} +Thus $\leftarrow$ is [[Definition:Left Distributive Operation|left distributive]] over $\circ$. +{{qed|lemma}} +=== Sufficient Condition === +Let $\leftarrow$ be [[Definition:Left Distributive Operation|left distributive]] over $\circ$. +Let $a \in S$ be arbitrary. +Then: +{{begin-eqn}} +{{eqn | l = a + | r = a \leftarrow \paren {b \circ c} + | c = {{Defof|Left Operation}} +}} +{{eqn | r = \paren {a \leftarrow b} \circ \paren {a \leftarrow c} + | c = {{Defof|Left Distributive Operation}} +}} +{{eqn | r = a \circ a + | c = {{Defof|Left Operation}} +}} +{{end-eqn}} +Hence $\circ$ is [[Definition:Idempotent Operation|idempotent]]. +{{qed}} +\end{proof}<|endoftext|> +\section{Integral of Positive Measurable Function as Limit of Integrals of Positive Simple Functions} +Tags: Measure Theory + +\begin{theorem} +Let $\left({X, \Sigma, \mu}\right)$ be a [[Definition:Measure Space|measure space]]. 
+Let $f: X \to \overline{\R}$, $f \in \mathcal{M}_{\overline{\R}}^+$ be a [[Definition:Positive Measurable Function|positive $\Sigma$-measurable function]].
+Let $\left({f_n}\right)_{n \in \N} \in \mathcal{E}^+$, $f_n: X \to \R$ be a [[Definition:Sequence|sequence]] of [[Definition:Positive Simple Function|positive simple functions]] such that:
+:$\displaystyle \lim_{n \to \infty} f_n = f$
+where $\lim$ denotes a [[Definition:Pointwise Limit|pointwise limit]].
+Then:
+:$\displaystyle \int f \, \mathrm d\mu = \lim_{n \to \infty} \int f_n \, \mathrm d\mu$
+where the [[Definition:Integral Sign|integral signs]] denote [[Definition:Integral of Positive Measurable Function|$\mu$-integration]].
+\end{theorem}<|endoftext|>
+\section{Integral of Characteristic Function/Corollary}
+Tags: Measure Theory
+
+\begin{theorem}
+:$\displaystyle \int \chi_E \, \mathrm d\mu = \mu \left({E}\right)$
+where the [[Definition:Integral Sign|integral sign]] denotes the [[Definition:Integral of Positive Measurable Function|$\mu$-integral of $\chi_E$]].
+\end{theorem}
+
+\begin{proof}
+By [[Integral of Characteristic Function]], we have:
+:$I_\mu \left({\chi_E}\right) = \mu \left({E}\right)$
+where $I_\mu \left({\chi_E}\right)$ is the [[Definition:Integral of Positive Simple Function|$\mu$-integral of $\chi_E$]].
+From [[Integral of Positive Measurable Function Extends Integral of Positive Simple Function]], it also holds that:
+:$\displaystyle \int \chi_E \, \mathrm d\mu = I_\mu \left({\chi_E}\right)$
+Combining these equalities gives the result.
+{{qed}}
+\end{proof}<|endoftext|>
+\section{Integral of Positive Measurable Function is Positive Homogeneous}
+Tags: Measure Theory
+
+\begin{theorem}
+Let $\struct {X, \Sigma, \mu}$ be a [[Definition:Measure Space|measure space]].
+Let $f: X \to \overline \R, f \in \mathcal M_{\overline \R}^+$ be a [[Definition:Positive Measurable Function|positive measurable function]].
+Let $\lambda \in \R_{\ge 0}$ be a [[Definition:Positive Real Number|positive real number]].
+Then: +:$\displaystyle \int \lambda f \rd \mu = \lambda \int f \rd \mu$ +where: +:$\lambda f$ is the [[Definition:Pointwise Scalar Multiplication of Real-Valued Functions|pointwise $\lambda$-multiple]] of $f$ +:The [[Definition:Integral Sign|integral sign]] denotes [[Definition:Integral of Positive Measurable Function|$\mu$-integration]] +This can be summarized by saying that $\displaystyle \int \cdot \rd \mu$ is [[Definition:Positive Homogeneous|positive homogeneous]]. +\end{theorem}<|endoftext|> +\section{Linear Transformation as Matrix Product} +Tags: Linear Transformations + +\begin{theorem} +Let $T: \R^n \to \R^m, \mathbf x \mapsto \map T {\mathbf x}$ be a [[Definition:Linear Transformation on Vector Space|linear transformation]]. +Then: +:$\map T {\mathbf x} = \mathbf A_T \mathbf x$ +where $\mathbf A_T$ is the [[Definition:Matrix|$m \times n$ matrix]] defined as: +:$\mathbf A_T = \begin {bmatrix} \map T {\mathbf e_1} & \map T {\mathbf e_2} & \cdots & \map T {\mathbf e_n} \end {bmatrix}$ +where $\tuple {\mathbf e_1, \mathbf e_2, \cdots, \mathbf e_n}$ is the [[Definition:Standard Ordered Basis on Vector Space|standard ordered basis]] of $\R^n$. +\end{theorem} + +\begin{proof} +Let $\mathbf x = \begin {bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end {bmatrix}$. +Let $\mathbf I_n$ be the [[Definition:Unit Matrix|unit matrix of order $n$]]. 
+Then: +{{begin-eqn}} +{{eqn | l = \mathbf x_{n \times 1} + | r = \mathbf I_n \mathbf x_{n \times 1} + | c = {{Defof|Left Identity}} +}} +{{eqn | r = \begin {bmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \\ \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end {bmatrix} + | c = [[Unit Matrix is Unity of Ring of Square Matrices#Lemma: Left Identity|Unit Matrix is Identity:Lemma]] +}} +{{eqn | r = \begin {bmatrix} \mathbf e_1 & \mathbf e_2 & \cdots & \mathbf e_n \end {bmatrix} \begin {bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end {bmatrix} + | c = {{Defof|Standard Ordered Basis on Vector Space}} +}} +{{eqn | r = \sum_{i \mathop = 1}^n \mathbf e_i x_i + | c = {{Defof|Matrix Product (Conventional)}} +}} +{{eqn | l = \map T {\mathbf x} + | r = \map T {\sum_{i \mathop =1}^n \mathbf e_i x_i} +}} +{{eqn | r = \sum_{i \mathop = 1}^n \map T {\mathbf e_i} x_i + | c = {{Defof|Linear Transformation on Vector Space}} +}} +{{eqn | r = \begin {bmatrix} \map T {\mathbf e_1} & \map T {\mathbf e_2} & \cdots & \map T {\mathbf e_n} \end {bmatrix} \begin {bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end {bmatrix} + | c = {{Defof|Matrix Product (Conventional)}} +}} +{{end-eqn}} +That $\mathbf A_T$ is $m \times n$ follows from each $\map T {\mathbf e_i}$ being an [[Definition:Element|element]] of $\R^m$ and thus having $m$ [[Definition:Row of Matrix|rows]]. +{{qed}} +\end{proof}<|endoftext|> +\section{Integral of Positive Measurable Function is Additive} +Tags: Measure Theory + +\begin{theorem} +Let $\left({X, \Sigma, \mu}\right)$ be a [[Definition:Measure Space|measure space]]. +Let $f,g: X \to \overline{\R}$, $f,g \in \mathcal{M}_{\overline{\R}}^+$ be [[Definition:Positive Measurable Function|positive measurable functions]]. 
+Then: +:$\displaystyle \int f + g \, \mathrm d\mu = \displaystyle \int f \, \mathrm d\mu + \displaystyle \int g \, \mathrm d\mu$ +where: +:$f + g$ is the [[Definition:Pointwise Addition|pointwise sum]] of $f$ and $g$ +:The [[Definition:Integral Sign|integral sign]] denotes [[Definition:Integral of Positive Measurable Function|$\mu$-integration]] +This can be summarized by saying that $\displaystyle \int \cdot \, \mathrm d\mu$ is [[Definition:Additive Function (Conventional)|additive]]. +\end{theorem}<|endoftext|> +\section{Integral of Positive Simple Function is Increasing} +Tags: Measure Theory + +\begin{theorem} +Let $\left({X, \Sigma, \mu}\right)$ be a [[Definition:Measure Space|measure space]]. +Let $f, g: X \to \R$, $f, g \in \mathcal{E}^+$ be [[Definition:Positive Simple Function|positive simple functions]]. +Suppose that: +: $f \le g$ +where $\le$ denotes [[Definition:Pointwise Inequality of Real-Valued Functions|pointwise inequality]]. +Then: +:$I_\mu \left({f}\right) \le I_\mu \left({g}\right)$ +where $I_\mu$ denotes [[Definition:Integral of Positive Simple Function|$\mu$-integration]] +This can be summarized by saying that $I_\mu$ is an [[Definition:Increasing Mapping|increasing mapping]]. +\end{theorem}<|endoftext|> +\section{Integral of Positive Measurable Function is Monotone} +Tags: Measure Theory + +\begin{theorem} +Let $\left({X, \Sigma, \mu}\right)$ be a [[Definition:Measure Space|measure space]]. +Let $f,g: X \to \overline{\R}$, $f,g \in \mathcal{M}_{\overline{\R}}^+$ be [[Definition:Positive Measurable Function|positive measurable functions]]. +Suppose that $f \le g$, where $\le$ denotes [[Definition:Pointwise Inequality of Extended Real-Valued Functions|pointwise inequality]]. +Then: +:$\displaystyle \int f \, \mathrm d\mu \le \int g \, \mathrm d\mu$ +where the [[Definition:Integral Sign|integral sign]] denotes [[Definition:Integral of Positive Measurable Function|$\mu$-integration]]. 
+This can be summarized by saying that $\displaystyle \int \cdot \, \mathrm d\mu$ is [[Definition:Monotone Mapping|monotone]].
+\end{theorem}<|endoftext|>
+\section{Matrix Multiplication is Homogeneous of Degree 1}
+Tags: Conventional Matrix Multiplication
+
+\begin{theorem}
+Let $\mathbf A$ be an [[Definition:Matrix|$m \times n$ matrix]] and $\mathbf B$ be an [[Definition:Matrix|$n \times p$ matrix]] such that the [[Definition:Column of Matrix|columns]] of $\mathbf A$ and $\mathbf B$ are members of $\R^m$ and $\R^n$, respectively.
+Let $\lambda \in \mathbb F$ be a [[Definition:Scalar (Matrix Theory)|scalar]], where $\mathbb F \in \set {\R, \C}$.
+Then:
+:$\mathbf A \paren {\lambda \mathbf B} = \lambda \paren {\mathbf A \mathbf B}$
+\end{theorem}
+
+\begin{proof}
+Let $\mathbf A = \sqbrk a_{m n}, \mathbf B = \sqbrk b_{n p}$.
+Then for all $i \in \closedint 1 m$ and $j \in \closedint 1 p$:
+{{begin-eqn}}
+{{eqn | l = \paren {\mathbf A \paren {\lambda \mathbf B} }_{i j}
+ | r = \sum_{k \mathop = 1}^n a_{i k} \paren {\lambda b_{k j} }
+ | c = {{Defof|Matrix Product (Conventional)}} and {{Defof|Matrix Scalar Product}}
+}}
+{{eqn | r = \lambda \sum_{k \mathop = 1}^n a_{i k} b_{k j}
+}}
+{{eqn | r = \paren {\lambda \paren {\mathbf A \mathbf B} }_{i j}
+ | c = {{Defof|Matrix Product (Conventional)}} and {{Defof|Matrix Scalar Product}}
+}}
+{{end-eqn}}
+{{qed}}
+{{proofread}}
+{{expand|proof literally carries over for any [[Definition:Commutative Ring|commutative ring]] in place of $\Bbb F$}}
+[[Category:Conventional Matrix Multiplication]]
+\end{proof}<|endoftext|>
+\section{Series of Positive Measurable Functions is Positive Measurable Function}
+Tags: Measure Theory
+
+\begin{theorem}
+Let $\left({X, \Sigma}\right)$ be a [[Definition:Measurable Space|measurable space]].
+Let $\left({f_n}\right)_{n \in \N} \in \mathcal{M}_{\overline{\R}}^+$, $f_n: X \to \overline{\R}$ be a [[Definition:Sequence|sequence]] of [[Definition:Positive Measurable Function|positive measurable functions]].
+Let $\displaystyle \sum_{n \mathop \in \N} f_n: X \to \overline{\R}$ be the [[Definition:Pointwise Series|pointwise series]] of the $f_n$.
+Then $\displaystyle \sum_{n \mathop \in \N} f_n$ is also a [[Definition:Positive Measurable Function|positive measurable function]].
+\end{theorem}<|endoftext|>
+\section{Integral of Series of Positive Measurable Functions}
+Tags: Measure Theory
+
+\begin{theorem}
+Let $\left({X, \Sigma, \mu}\right)$ be a [[Definition:Measure Space|measure space]].
+Let $\left({f_n}\right)_{n \in \N} \in \mathcal{M}_{\overline{\R}}^+$, $f_n: X \to \overline{\R}$ be a [[Definition:Sequence|sequence]] of [[Definition:Positive Measurable Function|positive measurable functions]].
+Let $\displaystyle \sum_{n \mathop \in \N} f_n: X \to \overline{\R}$ be the [[Definition:Pointwise Series|pointwise series]] of the $f_n$.
+Then:
+:$\displaystyle \int \sum_{n \mathop \in \N} f_n \, \mathrm d \mu = \sum_{n \mathop \in \N} \int f_n \, \mathrm d \mu$
+where the [[Definition:Integral Sign|integral sign]] denotes [[Definition:Integral of Positive Measurable Function|$\mu$-integration]].
+\end{theorem}<|endoftext|>
+\section{Integral with respect to Dirac Measure}
+Tags: Measure Theory
+
+\begin{theorem}
+Let $\left({X, \Sigma}\right)$ be a [[Definition:Measurable Space|measurable space]].
+Let $x \in X$, and let $\delta_x$ be the [[Definition:Dirac Measure|Dirac measure]] at $x$.
+Let $f \in \mathcal M _{\overline \R}, f: X \to \overline \R$ be a [[Definition:Measurable Function|measurable function]].
+Then:
+:$\displaystyle \int f \, \mathrm d \delta_x = f \left({x}\right)$
+where the [[Definition:Integral Sign|integral sign]] denotes the [[Definition:Integral of Measurable Function|$\delta_x$-integral]].
+\end{theorem}
+
+\begin{proof}
+Notice that the [[Definition:Constant Mapping|constant function]] $g$ defined as:
+:$X \ni x' \mapsto g \left({x'}\right) := f \left({x}\right) \in \overline{\R}$
+is [[Definition:Almost Everywhere|$\delta_x$-almost everywhere]] equal to $f$.
+This follows from the fact that the set of elements of $X$ where $f$ and $g$ take different values, namely:
+:$\left\{ {x' \in X : f \left({x'}\right) \ne g \left({x'}\right)}\right\} = \left\{ {x' \in X : f \left({x'}\right) \ne f \left({x}\right)}\right\}$
+does not contain $x$.
+So, by the very definition of $\delta_x$, its [[Definition:Measure|$\delta_x$-measure]] is $0$.
+Therefore:
+{{begin-eqn}}
+{{eqn | l = \int f \, \mathrm d \delta_x
+ | r = \int g \, \mathrm d \delta_x
+ | c = as $f = g$ [[Definition:Almost Everywhere|$\delta_x$-almost everywhere]]
+}}
+{{eqn | r = \delta_x \left({X}\right) f \left({x}\right)
+ | c = $g$ is the [[Definition:Constant Mapping|constant function]] with value $f \left({x}\right)$
+}}
+{{eqn | r = f \left({x}\right)
+ | c = as $\delta_x \left({X}\right) = 1$
+}}
+{{end-eqn}}
+{{qed}}
+\end{proof}<|endoftext|>
+\section{Integral with respect to Discrete Measure}
+Tags: Measure Theory
+
+\begin{theorem}
+Let $\left({X, \Sigma}\right)$ be a [[Definition:Measurable Space|measurable space]].
+Let $\displaystyle \mu = \sum_{n \mathop \in \N} \lambda_n \delta_{x_n}$ be a [[Definition:Discrete Measure|discrete measure]] on $\left({X, \Sigma}\right)$.
+Let $f \in \mathcal{M}_{\overline{\R}}^+, f: X \to \overline{\R}$ be a [[Definition:Positive Measurable Function|positive measurable function]].
+Then:
+:$\displaystyle \int f \, \mathrm d\mu = \sum_{n \mathop \in \N} \lambda_n f \left({x_n}\right)$
+where the [[Definition:Integral Sign|integral sign]] denotes [[Definition:Integral of Positive Measurable Function|$\mu$-integration]].
+\end{theorem}<|endoftext|>
+\section{Matrix Product as Linear Transformation}
+Tags: Linear Transformations
+
+\begin{theorem}
+Let:
+:$ \mathbf A_{m \times n} = \begin{bmatrix}
+a_{11} & a_{12} & \cdots & a_{1n} \\
+a_{21} & a_{22} & \cdots & a_{2n} \\
+\vdots & \vdots & \ddots & \vdots \\
+a_{m1} & a_{m2} & \cdots & a_{mn} \\
+\end{bmatrix}$
+:$\mathbf x_{n \times 1} = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix}$
+:$\mathbf y_{n \times 1} = \begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{bmatrix}$
+be [[Definition:Matrix|matrices]] where each [[Definition:Column of Matrix|column]] is an [[Definition:Element|element]] of a [[Definition:Real Vector Space|real vector space]].
+Let $T$ be the [[Definition:Mapping|mapping]]:
+:$T: \R^n \to \R^m, \mathbf x \mapsto \mathbf A \mathbf x$
+Then $T$ is a [[Definition:Linear Transformation on Vector Space|linear transformation]].
+\end{theorem}
+
+\begin{proof}
+From [[Matrix Multiplication is Homogeneous of Degree 1|Matrix Multiplication is Homogeneous of Degree $1$]]:
+:$\forall \lambda \in \mathbb F \in \set {\R, \C}: \mathbf A \paren {\lambda \mathbf x} = \lambda \paren {\mathbf A \mathbf x}$
+From [[Matrix Multiplication Distributes over Matrix Addition]]:
+:$\forall \mathbf x, \mathbf y \in \R^n: \mathbf A \paren {\mathbf x + \mathbf y} = \mathbf A \mathbf x + \mathbf A \mathbf y$
+Hence the result, from the definition of [[Definition:Linear Transformation on Vector Space|linear transformation]].
+{{qed}}
+\end{proof}<|endoftext|>
+\section{Fatou's Lemma for Integrals}
+Tags: Measure Theory
+
+\begin{theorem}
+Let $\struct {X, \Sigma, \mu}$ be a [[Definition:Measure Space|measure space]].
+\end{theorem}<|endoftext|>
+\section{Linear Transformation Maps Zero Vector to Zero Vector}
+Tags: Linear Transformations, Linear Transformation Maps Zero Vector to Zero Vector
+
+\begin{theorem}
+Let $\mathbf V$ be a [[Definition:Vector Space|vector space]], with [[Definition:Zero Vector|zero]] $\mathbf 0$.
+Likewise let $\mathbf V\,'$ be another vector space, with [[Definition:Zero Vector|zero]] $\mathbf 0'$. +Let $T: \mathbf V \to \mathbf V\,'$ be a [[Definition:Linear Transformation on Vector Space|linear transformation]]. +Then: +:$T: \mathbf 0 \mapsto \mathbf 0'$ +\end{theorem} + +\begin{proof} +From the [[Definition:Vector Space Axioms|vector space axioms]] we have that $\exists \mathbf 0 \in \mathbf V$. +It remains to be proved that $\map T {\mathbf 0} = \mathbf 0'$: +{{begin-eqn}} +{{eqn | l = \map T {\mathbf 0} + | r = \map T {\mathbf 0 + \mathbf 0} +}} +{{eqn | r = \map T {\mathbf 0} + \map T {\mathbf 0} + | c = {{Defof|Linear Transformation on Vector Space}} +}} +{{eqn | ll= \leadsto + | l = \mathbf 0' + | r = \map T {\mathbf 0} + | c = subtracting $\map T {\mathbf 0}$ from both sides +}} +{{end-eqn}} +{{qed}} +\end{proof} + +\begin{proof} +From the [[Definition:Vector Space Axioms|vector space axioms]] we have that $\exists \mathbf 0 \in \mathbf V$. +What remains is to prove that $\map T {\mathbf 0} = \mathbf 0'$: +{{begin-eqn}} +{{eqn | l = \map T {\mathbf 0} + | r = \map T {0 \, \mathbf 0} + | c = [[Zero Vector Scaled is Zero Vector]] +}} +{{eqn | r = 0 \, \map T {\mathbf 0} + | c = {{Defof|Linear Transformation on Vector Space}} +}} +{{eqn | r = \mathbf 0' + | c = [[Vector Scaled by Zero is Zero Vector]] +}} +{{end-eqn}} +{{qed}} +\end{proof}<|endoftext|> +\section{Infimum of Product} +Tags: Max and Min Operations + +\begin{theorem} +Let $\left({G, \circ, \preceq}\right)$ be an [[Definition:Ordered Group|ordered group]]. +Suppose that [[Definition:Subset|subsets]] $A$ and $B$ of $G$ admit [[Definition:Infimum of Set|infima]] in $G$. +Then: +:$\inf \left({A \circ_{\mathcal P} B}\right) = \inf A \circ \inf B$ +where $\circ_{\mathcal P}$ denotes [[Definition:Subset Product|subset product]]. +\end{theorem} + +\begin{proof} +This follows from [[Supremum of Product]] and the [[Duality Principle (Order Theory)/Global Duality|Duality Principle]]. 
+{{qed}} +\end{proof}<|endoftext|> +\section{Supremum of Product} +Tags: Max and Min Operations + +\begin{theorem} +Let $\struct {G, \circ, \preceq}$ be an [[Definition:Ordered Group|ordered group]]. +Suppose that [[Definition:Subset|subsets]] $A$ and $B$ of $G$ admit [[Definition:Supremum of Set|suprema]] in $G$. +Then: +:$\sup \paren {A \circ_{\PP} B} = \sup A \circ \sup B$ +where $\circ_{\PP}$ denotes [[Definition:Subset Product|subset product]]. +\end{theorem} + +\begin{proof} +Let $a \in A$, $b \in B$. +Then: +{{begin-eqn}} +{{eqn | l = a \circ b + | o = \preceq + | r = \sup A \circ b + | c = {{Defof|Supremum of Set}} +}} +{{eqn | o = \preceq + | r = \sup A \circ \sup B + | c = {{Defof|Supremum of Set}} +}} +{{end-eqn}} +Hence $\sup A \circ \sup B$ is an [[Definition:Upper Bound of Set|upper bound]] for $A \circ_{\PP} B$. +Suppose that $u$ is an [[Definition:Upper Bound of Set|upper bound]] for $A \circ_{\PP} B$. +Then: +{{begin-eqn}} +{{eqn | lo= \forall b \in B: \forall a \in A: + | l = a \circ b + | o = \preceq + | r = u +}} +{{eqn | ll= \leadsto + | lo= \forall b \in B: \forall a \in A: + | l = a + | o = \preceq + | r = u \circ b^{-1} +}} +{{eqn | ll= \leadsto + | lo= \forall b \in B: + | l = \sup A + | o = \preceq + | r = u \circ b^{-1} + | c = {{Defof|Supremum of Set}} +}} +{{eqn | ll= \leadsto + | lo= \forall b \in B: + | l = b + | o = \preceq + | r = \paren {\sup A}^{-1} \circ u +}} +{{eqn | ll= \leadsto + | l = \sup B + | o = \preceq + | r = \paren {\sup A}^{-1} \circ u + | c = {{Defof|Supremum of Set}} +}} +{{eqn | ll= \leadsto + | l = \sup A \circ \sup B + | o = \preceq + | r = u +}} +{{end-eqn}} +Therefore: +:$\sup \paren {A \circ_{\PP} B} = \sup A \circ \sup B$ +{{qed}} +\end{proof}<|endoftext|> +\section{Integral with respect to Series of Measures} +Tags: Measure Theory + +\begin{theorem} +Let $\left({X, \Sigma}\right)$ be a [[Definition:Measurable Space|measurable space]]. 
+Let $\displaystyle \mu := \sum_{n \mathop \in \N} \lambda_n \mu_n$ be a [[Definition:Series of Measures|series of measures]] on $\left({X, \Sigma}\right)$. +Then for all [[Definition:Positive Measurable Function|positive measurable functions]] $f: X \to \overline \R, f \in \mathcal M_{\overline{\R}}^+$: +:$\displaystyle \int f \, \mathrm d \mu = \sum_{n \mathop \in \N} \int f \, \mathrm d \mu_n$ +where the [[Definition:Integral Sign|integral signs]] denote [[Definition:Integral of Positive Measurable Function|integration with respect to a measure]]. +\end{theorem}<|endoftext|> +\section{Reverse Fatou's Lemma} +Tags: Measure Theory + +\begin{theorem} +Let $\struct {X, \Sigma, \mu}$ be a [[Definition:Measure Space|measure space]]. +\end{theorem}<|endoftext|> +\section{Characteristic Function of Limit Inferior of Sequence of Sets} +Tags: Characteristic Functions + +\begin{theorem} +Let $\left({E_n}\right)_{n \in \N}$ be a [[Definition:Sequence|sequence]] of [[Definition:Set|sets]]. +Let $E := \displaystyle \liminf_{n \mathop \to \infty} \, E_n$ be the [[Definition:Limit Inferior of Sequence of Sets|limit inferior]] of the $E_n$. +Then: +:$\displaystyle \chi_E = \liminf_{n \mathop \to \infty} \, \chi_{E_n}$ +where: +:$\chi$ denotes [[Definition:Characteristic Function of Set|characteristic function]] +:$\displaystyle \liminf_{n \to \infty} \, \chi_{E_n}$ is the [[Definition:Pointwise Limit Inferior|pointwise limit inferior]] of the $\chi_{E_n}$ +\end{theorem}<|endoftext|> +\section{Characteristic Function of Limit Superior of Sequence of Sets} +Tags: Characteristic Functions + +\begin{theorem} +Let $\left({E_n}\right)_{n \in \N}$ be a [[Definition:Sequence|sequence]] of [[Definition:Set|sets]]. +Let $E := \displaystyle \limsup_{n \mathop \to \infty} \, E_n$ be the [[Definition:Limit Superior of Sequence of Sets|limit superior]] of the $E_n$. 
+Then:
+:$\displaystyle \chi_E = \limsup_{n \to \infty} \, \chi_{E_n}$
+where:
+:$\chi$ denotes [[Definition:Characteristic Function of Set|characteristic function]]
+:$\displaystyle \limsup_{n \to \infty} \, \chi_{E_n}$ is the [[Definition:Pointwise Limit Superior|pointwise limit superior]] of the $\chi_{E_n}$
+\end{theorem}<|endoftext|>
+\section{Fatou's Lemma for Measures}
+Tags: Measure Theory, Fatou's Lemma for Measures
+
+\begin{theorem}
+Let $\struct {X, \Sigma, \mu}$ be a [[Definition:Measure Space|measure space]].
+Let $\sequence {E_n}_{n \mathop \in \N} \in \Sigma$ be a [[Definition:Sequence|sequence]] of [[Definition:Measurable Set|$\Sigma$-measurable sets]].
+Then:
+:$\displaystyle \map \mu {\liminf_{n \mathop \to \infty} E_n} \le \liminf_{n \mathop \to \infty} \map \mu {E_n}$
+where:
+:$\displaystyle \liminf_{n \mathop \to \infty} E_n$ is the [[Definition:Limit Inferior of Sequence of Sets|limit inferior]] of the $E_n$
+:the {{RHS}} [[Definition:Limit Inferior|limit inferior]] is taken in the [[Definition:Extended Real Number Line|extended real numbers]] $\overline \R$.
+\end{theorem}
+
+\begin{proof}
+{{proof wanted}}
+{{Namedfor|Pierre Joseph Louis Fatou|cat = Fatou}}
+\end{proof}<|endoftext|>
+\section{Kernel Transformation of Measure is Measure}
+Tags: Measure Theory
+
+\begin{theorem}
+Let $\left({X, \Sigma, \mu}\right)$ be a [[Definition:Measure Space|measure space]].
+Let $N: X \times \Sigma \to \overline{\R}_{\ge0}$ be a [[Definition:Kernel (Measure Theory)|kernel]].
+Then $\mu N: \Sigma \to \overline{\R}$, the [[Definition:Kernel Transformation of Measure|kernel transformation of $\mu$]], is a [[Definition:Measure (Measure Theory)|measure]].
+\end{theorem}<|endoftext|>
+\section{Kernel Transformation of Positive Measurable Function is Positive Measurable Function}
+Tags: Measure Theory
+
+\begin{theorem}
+Let $\left({X, \Sigma, \mu}\right)$ be a [[Definition:Measure Space|measure space]].
+Let $N: X \times \Sigma \to \overline{\R}_{\ge0}$ be a [[Definition:Kernel (Measure Theory)|kernel]]. +Let $f: X \to \overline{\R}$ be a [[Definition:Positive Measurable Function|positive measurable function]]. +Then $N f: X \to \overline{\R}$, the [[Definition:Kernel Transformation of Positive Measurable Function|transformation of $f$ by $N$]], is also a [[Definition:Positive Measurable Function|positive measurable function]]. +\end{theorem}<|endoftext|> +\section{Integral with respect to Kernel Transformation of Measure} +Tags: Measure Theory + +\begin{theorem} +Let $\struct {X, \Sigma, \mu}$ be a [[Definition:Measure Space|measure space]]. +Let $N: X \times \Sigma \to \overline \R_{\ge 0}$ be a [[Definition:Kernel (Measure Theory)|kernel]]. +Let $f: X \to \overline \R$ be a [[Definition:Positive Measurable Function|positive measurable function]]. +Then: +:$\displaystyle \int f \map \rd {\mu N} = \int N f \rd \mu$ +where: +:The [[Definition:Integral Sign|integral sign]] denotes [[Definition:Integral of Positive Measurable Function|integration with respect to a measure]] +:$\mu N$ is the [[Definition:Kernel Transformation of Measure|transformation of $\mu$ by $N$]] +:$N f$ is the [[Definition:Kernel Transformation of Positive Measurable Function|transformation of $f$ by $N$]] +Writing $\map \mu f$ in place of $\displaystyle \int f \rd \mu$, the theorem statement can be conveniently expressed as: +:$\map {\mu N} f = \map \mu {N f}$ +\end{theorem}<|endoftext|> +\section{Canonical Injection is Injection} +Tags: Injections + +\begin{theorem} +Let $\struct {S_1, \circ_1}$ and $\struct {S_2, \circ_2}$ be [[Definition:Algebraic Structure|algebraic structures]] with [[Definition:Identity Element|identities]] $e_1, e_2$ respectively. 
+The [[Definition:Canonical Injection (Abstract Algebra)|canonical injections]]: +:$\inj_1: \struct {S_1, \circ_1} \to \struct {S_1, \circ_1} \times \struct {S_2, \circ_2}: \forall x \in S_1: \map {\inj_1} x = \tuple {x, e_2}$ +:$\inj_2: \struct {S_2, \circ_2} \to \struct {S_1, \circ_1} \times \struct {S_2, \circ_2}: \forall x \in S_2: \map {\inj_2} x = \tuple {e_1, x}$ +are [[Definition:Injection|injections]]. +\end{theorem} + +\begin{proof} +Let $x, x' \in S_1$. +Suppose that: +:$\map {\inj_1} x = \map {\inj_1} {x'}$ +Then by definition of [[Definition:Canonical Injection (Abstract Algebra)|canonical injection]]: +:$\tuple {x, e_2} = \tuple {x', e_2}$ +By [[Equality of Ordered Pairs]]: +:$x = x'$ +That is, $\inj_1$ is an [[Definition:Injection|injection]]. +{{qed|lemma}} +Similarly, let $x, x' \in S_2$. +Suppose that: +:$\map {\inj_2} x = \map {\inj_2} {x'}$ +Then by definition of [[Definition:Canonical Injection (Abstract Algebra)|canonical injection]]: +:$\tuple {e_1, x} = \tuple {e_1, x'}$ +Again by [[Equality of Ordered Pairs]]: +:$x = x'$ +That is, $\inj_2$ is an [[Definition:Injection|injection]]. +{{qed|lemma}} +So $\inj_1$ and $\inj_2$ are [[Definition:Injection|injections]]. +{{qed}} +\end{proof}<|endoftext|> +\section{Canonical Injection is Injection/General Result} +Tags: Monomorphisms + +\begin{theorem} +Let $\struct {S_1, \circ_1}, \struct {S_2, \circ_2}, \dotsc, \struct {S_j, \circ_j}, \dotsc, \struct {S_n, \circ_n}$ be [[Definition:Algebraic Structure|algebraic structures]] with [[Definition:Identity Element|identities]] $e_1, e_2, \ldots, e_j, \ldots, e_n$ respectively. +The [[Definition:Canonical Injection (Abstract Algebra)/General Definition|canonical injection]]: +:$\displaystyle \inj_j: \struct {S_j, \circ_j} \to \prod_{i \mathop = 1}^n \struct {S_i, \circ_i}$ +defined as: +:$\map {\inj_j} x = \tuple {e_1, e_2, \dotsc, e_{j - 1}, x, e_{j + 1}, \dotsc, e_n}$ +is an [[Definition:Injection|injection]]. 
+\end{theorem}
+
+\begin{proof}
+Let:
+:$x, y \in S_j: \map {\inj_j} x = \map {\inj_j} y$
+Then:
+:$\tuple {e_1, e_2, \dotsc, e_{j - 1}, x, e_{j + 1}, \dotsc, e_n} = \tuple {e_1, e_2, \dotsc, e_{j - 1}, y, e_{j + 1}, \dotsc, e_n}$
+By [[Equality of Ordered Tuples]], it follows directly that:
+:$x = y$
+Thus the [[Definition:Canonical Injection (Abstract Algebra)/General Definition|canonical injections]] are [[Definition:Injection|injective]].
+{{Qed}}
+\end{proof}<|endoftext|>
+\section{Intermediate Value Theorem (Topology)}
+Tags: Connected Spaces, Continuous Mappings, Order Topology
+
+\begin{theorem}
+Let $X$ be a [[Definition:Connected (Topology)|connected]] [[Definition:Topological Space|topological space]].
+Let $\struct {Y, \preceq, \tau}$ be a [[Definition:Totally Ordered Set|totally ordered set]] equipped with the [[Definition:Order Topology|order topology]].
+Let $f: X \to Y$ be a [[Definition:Continuous Mapping (Topology)|continuous mapping]].
+Let $a, b \in X$ be two points such that:
+:$\map f a \prec \map f b$
+Let:
+:$r \in Y: \map f a \prec r \prec \map f b$
+Then there exists a point $c$ of $X$ such that:
+:$\map f c = r$
+\end{theorem}
+
+\begin{proof}
+Let $a, b \in X$, and let $r \in Y$ lie between $\map f a$ and $\map f b$.
+Define the [[Definition:Set|sets]]:
+:$A = f \sqbrk X \cap r^\prec$ and $B = f \sqbrk X \cap r^\succ$
+where $r^\prec$ and $r^\succ$ denote the [[Definition:Strict Lower Closure of Element|strict lower closure]] and [[Definition:Strict Upper Closure of Element|strict upper closure]] respectively of $r$ in $Y$.
+$A$ and $B$ are [[Definition:Disjoint Sets|disjoint]] by construction.
+$A$ and $B$ are also [[Definition:Non-Empty Set|non-empty]], since one contains $\map f a$ and the other contains $\map f b$.
+$A$ and $B$ are also both [[Definition:Open Set (Topology)|open]] in the [[Definition:Topological Subspace|subspace]] $f \sqbrk X$, being the [[Definition:Set Intersection|intersections]] of $f \sqbrk X$ with the [[Definition:Open Set (Topology)|open sets]] $r^\prec$ and $r^\succ$ of $Y$.
+Suppose there is no point $c$ such that $\map f c = r$.
+Then:
+:$f \sqbrk X = A \cup B$
+so $A$ and $B$ constitute a [[Definition:Separation (Topology)|separation]] of $f \sqbrk X$.
+But this [[Definition:Contradiction|contradicts]] [[Continuous Image of Connected Space is Connected]].
+Hence by [[Proof by Contradiction]]:
+:$\exists c \in X: \map f c = r$
+which is what was to be proved.
+{{qed}}
+\end{proof}<|endoftext|>
+\section{Finite Union of Countable Sets is Countable}
+Tags: Set Union, Countable Sets
+
+\begin{theorem}
+The [[Definition:Set Union|union]] of a [[Definition:Finite|finite number]] of [[Definition:Countable Set|countable sets]] is [[Definition:Countable|countable]].
+\end{theorem}
+
+\begin{proof}
+Let $S_0, \ldots, S_{n-1}$ be [[Definition:Countable Set|countable sets]].
+For $i \in \left\{{0, \ldots, n-1}\right\}$, let $f_i: \N \to S_i$ be a [[Definition:Surjection|surjection]].
+These exist by [[Surjection from Natural Numbers iff Countable]].
+Now define $f: \N \to \displaystyle \bigcup_{i \mathop = 0}^{n-1} S_i$ by:
+:$f \left({m}\right) := f_i \left({\left\lfloor{\dfrac m n}\right\rfloor}\right)$
+where $i$ is the [[Definition:Unique|unique]] element of $\left\{{0, \ldots, n-1}\right\}$ such that $m \equiv i \pmod n$, and $\left\lfloor{x}\right\rfloor$ denotes the [[Definition:Floor Function|floor]] of $x$.
+Now let $x \in \displaystyle \bigcup_{i \mathop = 0}^{n-1} S_i$ be arbitrary.
+Let $k$ be the smallest natural number such that $x \in S_k$.
+Let $l$ be the smallest natural number such that $f_k \left({l}\right) = x$.
+These are guaranteed to exist by definition of [[Definition:Set Union|set union]] and [[Definition:Surjection|surjectivity]] of $f_k$, respectively.
+Then we have:
+:$f \left({l n + k}\right) = f_k \left({\left\lfloor{\dfrac {l n + k} n}\right\rfloor}\right) = f_k \left({l}\right) = x$
+since $k < n$.
+Since $x$ was arbitrary, $f$ is a [[Definition:Surjection|surjection]].
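The interleaving construction used in this proof can be sketched concretely. The following is an illustrative sketch only (the two example sets and their enumerations are invented for the illustration; they are not part of the proof):

```python
# Sketch of the surjection f(m) = f_i(floor(m / n)) with i = m mod n,
# which enumerates a finite union of countable sets by interleaving
# the given enumerations f_0, ..., f_{n-1}.

def interleave(enumerations):
    """Build f : N -> S_0 u ... u S_{n-1} from surjections f_i : N -> S_i."""
    n = len(enumerations)

    def f(m):
        i = m % n                       # the unique i with m = i (mod n)
        return enumerations[i](m // n)  # floor(m / n)

    return f

# Invented example: S_0 = even naturals, S_1 = odd naturals
f = interleave([lambda k: 2 * k, lambda k: 2 * k + 1])
print([f(m) for m in range(8)])  # -> [0, 1, 2, 3, 4, 5, 6, 7]
```

As in the proof, $f \left({l n + k}\right) = f_k \left({l}\right)$, so every element of the union is hit.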
+Hence by [[Surjection from Natural Numbers iff Countable]], $\displaystyle \bigcup_{i \mathop = 0}^{n-1} S_i$ is [[Definition:Countable Set|countable]].
+{{qed}}
+\end{proof}<|endoftext|>
+\section{Particular Values of Binomial Coefficients}
+Tags: Binomial Coefficients, Examples of Binomial Coefficients
+
+\begin{theorem}
+=== [[Zero Choose Zero|Binomial Coefficient $\dbinom 0 0$]] ===
+{{:Zero Choose Zero}}
+=== [[Zero Choose n|Binomial Coefficient $\dbinom 0 n$]] ===
+{{:Zero Choose n}}
+=== [[One Choose n|Binomial Coefficient $\dbinom 1 n$]] ===
+{{:One Choose n}}
+=== [[N Choose Negative Number is Zero]] ===
+{{:N Choose Negative Number is Zero}}
+=== [[Binomial Coefficient with Zero]] ===
+{{:Binomial Coefficient with Zero}}
+=== [[Binomial Coefficient with One]] ===
+{{:Binomial Coefficient with One}}
+=== [[Binomial Coefficient with Self]] ===
+{{:Binomial Coefficient with Self}}
+=== [[Binomial Coefficient with Self minus One]] ===
+{{:Binomial Coefficient with Self minus One}}
+=== [[Binomial Coefficient with Two]] ===
+{{:Binomial Coefficient with Two}}
+\end{theorem}<|endoftext|>
+\section{Binomial Coefficient with Zero}
+Tags: Examples of Binomial Coefficients
+
+\begin{theorem}
+:$\forall r \in \R: \dbinom r 0 = 1$
+\end{theorem}
+
+\begin{proof}
+From the [[Definition:Binomial Coefficient/Real Numbers|definition of binomial coefficients]]:
+:$\dbinom r k = \dfrac {r^{\underline k}} {k!}$ for $k \ge 0$
+where $r^{\underline k}$ is the [[Definition:Falling Factorial|falling factorial]].
+In turn:
+:$\displaystyle x^{\underline k} := \prod_{j \mathop = 0}^{k-1} \left({x - j}\right)$
+But when $k = 0$, we have:
+:$\displaystyle \prod_{j \mathop = 0}^{-1} \left({x - j}\right) = 1$
+as $\displaystyle \prod_{j \mathop = 0}^{-1} \left({x - j}\right)$ is a [[Definition:Vacuous Product|vacuous product]].
+From the definition of the [[Definition:Factorial|factorial]] we have that $0! = 1$.
+Thus: +:$\forall r \in \R: \dbinom r 0 = 1$ +{{qed}} +=== [[Binomial Coefficient with Zero/Integer Coefficients|Integer Coefficients]] === +This is completely compatible with the result for [[Definition:Natural Numbers|natural numbers]]: +:$\forall n \in \N: \dbinom n 0 = 1$ +{{:Binomial Coefficient with Zero/Integer Coefficients}} +\end{proof}<|endoftext|> +\section{Binomial Coefficient with One} +Tags: Examples of Binomial Coefficients + +\begin{theorem} +:$\forall r \in \R: \dbinom r 1 = r$ +\end{theorem} + +\begin{proof} +From the [[Definition:Binomial Coefficient/Real Numbers|definition of binomial coefficients]]: +:$\dbinom r k = \dfrac {r^{\underline k} } {k!}$ for $k \ge 0$ +where $r^{\underline k}$ is the [[Definition:Falling Factorial|falling factorial]]. +In turn: +:$\displaystyle x^{\underline k} := \prod_{j \mathop = 0}^{k - 1} \paren {x - j}$ +But when $k = 1$, we have: +:$\displaystyle \prod_{j \mathop = 0}^0 \paren {x - j} = \paren {x - 0} = x$ +So: +:$\forall r \in \R: \dbinom r 1 = r$ +{{qed}} +This is completely compatible with the result for [[Definition:Natural Numbers|natural numbers]]: +:$\forall n \in \N: \dbinom n 1 = n$ +as from the definition: +:$\dbinom n 1 = \dfrac {n!} {1! \ \paren {n - 1}!}$ +the result following directly, again from the definition of the [[Definition:Factorial|factorial]] where $1! = 1$. +{{qed}} +\end{proof}<|endoftext|> +\section{Binomial Coefficient with Self} +Tags: Examples of Binomial Coefficients + +\begin{theorem} +:$\forall n \in \Z: \dbinom n n = \sqbrk {n \ge 0}$ +where $\sqbrk {n \ge 0}$ denotes [[Definition:Iverson's Convention|Iverson's convention]]. +That is: +:$\forall n \in \Z_{\ge 0}: \dbinom n n = 1$ +:$\forall n \in \Z_{< 0}: \dbinom n n = 0$ +\end{theorem} + +\begin{proof} +From the definition of [[Definition:Binomial Coefficient|binomial coefficient]]: +:$\dbinom n n = \dfrac {n!} {n! \ \paren {n - n}!} = \dfrac {n!} {n! 
\ 0!}$
+the result following directly from the definition of the [[Definition:Factorial|factorial]], where $0! = 1$.
+From [[N Choose Negative Number is Zero]]:
+:$\forall k \in \Z_{<0}: \dbinom n k = 0$
+So for $n < 0$:
+:$\dbinom n n = 0$
+{{qed}}
+\end{proof}

+\begin{proof}
+The case where $n = 1$ can be taken separately.
+From [[Binomial Coefficient with Zero]]:
+:$\dbinom 1 0 = 1$
+demonstrating that the result holds for $n = 1$.
+Let $n \in \N: n > 1$.
+From the [[Definition:Binomial Coefficient|definition of binomial coefficients]]:
+:$\dbinom n {n - 1} = \dfrac {n!} {\left({n - 1}\right)! \left({n - \left({n - 1}\right)}\right)!} = \dfrac {n!} {\left({n - 1}\right)! \ 1!}$
+the result following directly from the definition of the [[Definition:Factorial|factorial]].
+{{qed}}
+\end{proof}

+\begin{proof}
+From [[Cardinality of Set of Subsets]], $\dbinom n {n - 1}$ is the number of combinations of $n$ things taken $n - 1$ at a time.
+Choosing $n - 1$ things from $n$ is the same thing as choosing which $1$ of the elements is to be left out.
+There are $n$ different choices for that $1$ element.
+Therefore there are $n$ ways to choose $n - 1$ things from $n$.
+{{qed}}
+\end{proof}<|endoftext|>
+\section{Binomial Coefficient with Two}
+Tags: Examples of Binomial Coefficients
+
+\begin{theorem}
+:$\forall r \in \R: \dbinom r 2 = \dfrac {r \left({r - 1}\right)} 2$
+\end{theorem}
+
+\begin{proof}
+From the [[Definition:Binomial Coefficient/Real Numbers|definition of binomial coefficients]]:
+:$\dbinom r k = \dfrac {r^{\underline k}} {k!}$ for $k \ge 0$
+where $r^{\underline k}$ is the [[Definition:Falling Factorial|falling factorial]].
+In turn:
+:$\displaystyle x^{\underline k} := \prod_{j \mathop = 0}^{k - 1} \left({x - j}\right)$
+When $k = 2$:
+:$\displaystyle \prod_{j \mathop = 0}^1 \left({x - j}\right) = \left({x - 0}\right) \left({x - 1}\right)$
+so that:
+:$\dbinom r 2 = \dfrac {r \left({r - 1}\right)} {2!}$
+where $2! = 1 \times 2 = 2$.
+So:
+:$\forall r \in \R: \dbinom r 2 = \dfrac {r \left({r - 1}\right)} 2$
+{{qed}}
+\end{proof}<|endoftext|>
+\section{Sum of Binomial Coefficients over Lower Index/Corollary}
+Tags: Binomial Coefficients, Sum of Binomial Coefficients over Lower Index
+
+\begin{theorem}
+:$\displaystyle \forall n \in \Z_{\ge 0}: \sum_{i \mathop \in \Z} \binom n i = 2^n$
+\end{theorem}
+
+\begin{proof}
+From the definition of the [[Definition:Binomial Coefficient|binomial coefficient]], when $i < 0$ or $i > n$ we have $\dbinom n i = 0$.
+The result follows directly from [[Sum of Binomial Coefficients over Lower Index]].
+{{qed}}
+[[Category:Binomial Coefficients]]
+[[Category:Sum of Binomial Coefficients over Lower Index]]
+\end{proof}<|endoftext|>
+\section{Binomial Theorem/Integral Index}
+Tags: Binomial Theorem, Proofs by Induction
+
+\begin{theorem}
+Let $X$ be one of the [[Definition:Number|sets of numbers]] $\N, \Z, \Q, \R, \C$.
+Let $x, y \in X$.
+Then:
+{{begin-eqn}}
+{{eqn | ll= \forall n \in \Z_{\ge 0}:
+ | l = \paren {x + y}^n
+ | r = \sum_{k \mathop = 0}^n \binom n k x^{n - k} y^k
+ | c =
+}}
+{{eqn | r = x^n + \binom n 1 x^{n - 1} y + \binom n 2 x^{n - 2} y^2 + \binom n 3 x^{n - 3} y^3 + \cdots
+ | c =
+}}
+{{eqn | r = x^n + n x^{n - 1} y + \frac {n \paren {n - 1} } {2!} x^{n - 2} y^2 + \frac {n \paren {n - 1} \paren {n - 2} } {3!} x^{n - 3} y^3 + \cdots
+ | c =
+}}
+{{end-eqn}}
+where $\dbinom n k$ is [[Definition:N Choose K|$n$ choose $k$]].
+\end{theorem}
+
+\begin{proof}
+=== Basis for the Induction ===
+For $n = 0$ we have:
+:$\displaystyle \paren {x + y}^0 = 1 = \binom 0 0 x^{0 - 0} y^0 = \sum_{k \mathop = 0}^0 \binom 0 k x^{0 - k} y^k$
+This is the [[Definition:Basis for the Induction|basis for the induction]].
+=== Induction Hypothesis === +This is our [[Definition:Induction Hypothesis|induction hypothesis]]: +:$\displaystyle \paren {x + y}^n = \sum_{k \mathop = 0}^n \binom n k x^{n - k} y^k$ +=== Induction Step === +This is our [[Definition:Induction Step|induction step]]: +{{begin-eqn}} +{{eqn | l = \paren {x + y}^{n + 1} + | r = \paren {x + y} \paren {x + y}^n + | c = +}} +{{eqn | r = x \sum_{k \mathop = 0}^n \binom n k x^{n - k}y^k + y \sum_{k \mathop = 0}^n \binom n k x^{n - k} y^k + | c = [[Binomial Theorem/Integral Index#Inductive Hypothesis|Inductive Hypothesis]] +}} +{{eqn | r = \sum_{k \mathop = 0}^n \binom n k x^{n + 1 - k} y^k + \sum_{k \mathop = 0}^n \binom n k x^{n - k} y^{k + 1} + | c = +}} +{{eqn | r = \binom n 0 x^{n + 1} + \sum_{k \mathop = 1}^n \binom n k x^{n + 1 - k} y^k + \binom n n y^{n + 1} + \sum_{k \mathop = 0}^{n - 1} \binom n k x^{n - k} y^{k + 1} + | c = +}} +{{eqn | r = x^{n + 1} + y^{n + 1} + \sum_{k \mathop = 1}^n \binom n k x^{n + 1 - k} y^k + \sum_{k \mathop = 0}^{n - 1} \binom n k x^{n - k} y^{k + 1} + | c = +}} +{{eqn | r = \binom {n + 1} 0 x^{n + 1} + \binom {n + 1} {n + 1} y^{n + 1} + \sum_{k \mathop = 1}^n \binom n k x^{n + 1 - k} y^k + \sum_{k \mathop = 1}^n \binom n {k - 1} x^{n + 1 - k} y^k + | c = +}} +{{eqn | r = \binom {n + 1} 0 x^{n + 1} + \binom {n + 1} {n + 1} y^{n + 1} + \sum_{k \mathop = 1}^n \paren {\binom n k + \binom n {k - 1} } x^{n + 1 - k} y^k + | c = +}} +{{eqn | r = \binom {n + 1} 0 x^{n + 1} + \binom {n + 1} {n + 1} y^{n + 1} + \sum_{k \mathop = 1}^n \binom {n + 1} k x^{n + 1 - k} y^k + | c = [[Pascal's Rule]] +}} +{{eqn | r = \sum_{k \mathop = 0}^{n + 1} \binom {n + 1} k x^{n + 1 - k} y^k + | c = +}} +{{end-eqn}} +The result follows by the [[Principle of Mathematical Induction]]. 
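The identity just proved, together with Pascal's Rule on which the induction step turns, can be spot-checked numerically. A minimal sketch, using Python's built-in `math.comb` as the binomial coefficient (the sampled values are arbitrary choices for the illustration):

```python
from math import comb  # comb(n, k) is the binomial coefficient "n choose k"

# Pascal's Rule, the key step of the induction: C(n, k) + C(n, k-1) = C(n+1, k)
for n in range(1, 10):
    for k in range(1, n + 1):
        assert comb(n, k) + comb(n, k - 1) == comb(n + 1, k)

# The Binomial Theorem itself, for a few sampled integer cases
for n in range(7):
    for x, y in [(2, 3), (-1, 4), (5, 0)]:
        assert (x + y) ** n == sum(
            comb(n, k) * x ** (n - k) * y ** k for k in range(n + 1)
        )
print("Pascal's Rule and the Binomial Theorem hold for the sampled cases")
```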
+{{qed}} +\end{proof}<|endoftext|> +\section{Binomial Theorem/Ring Theory} +Tags: Binomial Theorem, Proofs by Induction, Ring Theory + +\begin{theorem} +Let $\left({R, +, \odot}\right)$ be a [[Definition:Ringoid (Abstract Algebra)|ringoid]] such that $\left({R, \odot}\right)$ is a [[Definition:Commutative Semigroup|commutative semigroup]]. +Let $n \in \Z: n \ge 2$. Then: +:$\displaystyle \forall x, y \in R: \odot^n \left({x + y}\right) = \odot^n x + \sum_{k \mathop = 1}^{n-1} \binom n k \left({\odot^{n-k} x}\right) \odot \left({\odot^k y}\right) + \odot^n y$ +where $\dbinom n k = \dfrac {n!} {k! \ \left({n - k}\right)!}$ (see [[Definition:Binomial Coefficient|Binomial Coefficient]]). +If $\left({R, \odot}\right)$ has an [[Definition:Identity Element|identity element]] $e$, then: +:$\displaystyle \forall x, y \in R: \odot^n \left({x + y}\right) = \sum_{k \mathop = 0}^n \binom n k \left({\odot^{n - k} x}\right) \odot \left({\odot^k y}\right)$ +\end{theorem} + +\begin{proof} +First we establish the result for when $\left({R, \odot}\right)$ has an [[Definition:Identity Element|identity element]] $e$. 
+For $n = 0$ we have:
+:$\displaystyle \odot^0 \left({x + y}\right) = e = {0 \choose 0} \left({\odot^{0 - 0} x}\right) \odot \left({\odot^0 y}\right) = \sum_{k \mathop = 0}^0 {0 \choose k} x^{0 - k} \odot y^k$
+For $n = 1$ we have:
+:$\displaystyle \odot^1 \left({x + y}\right) = \left({x + y}\right) = {1 \choose 0} \left({\odot^{1 - 0} x}\right) \odot \left({\odot^0 y}\right) + {1 \choose 1} \left({\odot^{1 - 1} x}\right) \odot \left({\odot^1 y}\right) = \sum_{k \mathop = 0}^1 {1 \choose k} x^{1 - k} \odot y^k$
+=== Basis for the Induction ===
+For $n = 2$ we have:
+{{begin-eqn}}
+{{eqn | l = \odot^2 \left({x + y}\right)
+ | r = \left({x + y}\right) \odot \left({x + y}\right)
+ | c =
+}}
+{{eqn | r = \left({x \odot x}\right) + \left({x \odot y}\right) + \left({y \odot x}\right) + \left({y \odot y}\right)
+ | c =
+}}
+{{eqn | r = \left({x \odot x}\right) + 2 \left({x \odot y}\right) + \left({y \odot y}\right)
+ | c = $\odot$ is [[Definition:Commutative Operation|commutative]] in $R$
+}}
+{{eqn | r = \odot^2 x + 2 \left({\odot^1 x}\right) \odot \left({\odot^1 y}\right) + \odot^2 y
+ | c =
+}}
+{{eqn | r = \odot^2 x + {2 \choose 1} \left({\odot^{2-1} x}\right) \odot \left({\odot^1 y}\right) + \odot^2 y
+ | c =
+}}
+{{end-eqn}}
+This is the [[Principle of Mathematical Induction#Basis for the Induction|basis for the induction]].
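The basis case can be illustrated in a concrete commutative ring, namely the integers under the usual addition and multiplication. A sketch, not part of the proof (the grid of sample values is an arbitrary choice):

```python
# Basis case n = 2 of the ring-theoretic Binomial Theorem, checked in the
# commutative ring (Z, +, *): (x + y)^2 = x^2 + 2 x y + y^2.
for x in range(-5, 6):
    for y in range(-5, 6):
        assert (x + y) ** 2 == x ** 2 + 2 * (x * y) + y ** 2
print("basis case verified on a grid of integers")
```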
+=== Induction Hypothesis === +This is our [[Principle of Mathematical Induction#Induction Hypothesis|inductive hypothesis]]: suppose that for some $n \ge 2$: +:$\displaystyle \odot^n \left({x + y}\right) = \odot^n x + \sum_{k \mathop = 1}^{n - 1} {n \choose k} \left({\odot^{n - k} x}\right) \odot \left({\odot^k y}\right) + \odot^n y$ +=== Induction Step === +This is the [[Principle of Mathematical Induction#Induction Step|induction step]]: +{{begin-eqn}} +{{eqn | l = \odot^{n + 1} \left({x + y}\right) + | r = \left({x + y}\right) \odot \left({\odot^n \left({x + y}\right)}\right) + | c = +}} +{{eqn | r = x \odot \left({\odot^n x + \sum_{k \mathop = 1}^{n - 1} {n \choose k} \left({\odot^{n - k} x}\right) \odot \left({\odot^k y}\right) + \odot^n y}\right) + | c = +}} +{{eqn | o = + | ro= + + | r = ~ y \odot \left({\odot^n x + \sum_{k \mathop = 1}^{n - 1} {n \choose k} \left({\odot^{n - k} x}\right) \odot \left({\odot^k y}\right) + \odot^n y}\right) + | c = [[Binomial Theorem/Ring Theory#Inductive Hypothesis|Inductive Hypothesis]] +}} +{{eqn | r = \odot^{n + 1} x + \sum_{k \mathop = 1}^{n - 1} {n \choose k} \left({\odot^{n + 1 - k} x}\right) \odot \left({\odot^k y}\right) + x \odot \left({\odot^n y}\right) + | c = +}} +{{eqn | o = + | ro= + + | r = ~ y \odot \left({\odot^n x}\right) + \sum_{k \mathop = 1}^{n - 1} {n \choose k} \left({\odot^{n - k} x}\right) \odot \left({\odot^{k + 1} y}\right) + \odot^{n + 1} y + | c = +}} +{{eqn | r = \odot^{n + 1} x + \sum_{k \mathop = 1}^n {n \choose k} \left({\odot^{n + 1 - k} x}\right) \odot \left({\odot^k y}\right) + \sum_{k \mathop = 0}^{n - 1} {n \choose k} \left({\odot^{n - k} x}\right) \odot \left({\odot^{k + 1} y}\right) + \odot^{n + 1} y + | c = +}} +{{eqn | r = \odot^{n + 1} x + \sum_{k \mathop = 1}^n {n \choose k} \left({\odot^{n + 1 - k} x}\right) \odot \left({\odot^k y}\right) + \sum_{k \mathop = 1}^n {n \choose k - 1} \left({\odot^{n + 1 - k} x}\right) \odot \left({\odot^k y}\right) + \odot^{n + 1} y + | c = +}} +{{eqn | r =
\odot^{n + 1} x + \sum_{k \mathop = 1}^n {n + 1 \choose k} \left({\odot^{n + 1 - k} x}\right) \odot \left({\odot^k y}\right) + \odot^{n + 1} y + | c = [[Pascal's Rule]] +}} +{{end-eqn}} +The result follows by the [[Principle of Mathematical Induction]]. +{{qed}} +\end{proof}<|endoftext|> +\section{Derivative of Absolute Value Function} +Tags: Derivatives, Absolute Value Function + +\begin{theorem} +Let $\size x$ be the [[Definition:Absolute Value|absolute value]] of $x$ for [[Definition:Real Number|real]] $x$. +Then: +:$\dfrac \d {\d x} \size x = \dfrac x {\size x}$ +for $x \ne 0$. +At $x = 0$, $\size x$ is not [[Definition:Differentiable Real Function at Point|differentiable]]. +\end{theorem} + +\begin{proof} +{{begin-eqn}} +{{eqn | l = \frac \d {\d x} \size x + | r = \frac \d {\d x} \sqrt{x^2} + | c = [[Square of Real Number is Non-Negative]] +}} +{{eqn | r = \frac \d {\d x} \paren {x^2}^{\frac 1 2} +}} +{{eqn | r = \frac 1 2 \paren {x^2}^{-\frac 1 2} \cdot 2 x + | c = [[Chain Rule for Derivatives]] +}} +{{eqn | r = \frac x {\sqrt{x^2} } +}} +{{eqn | r = \frac x {\size x} +}} +{{end-eqn}} +{{qed|lemma}} +Now consider $x = 0$. +From the definition of [[Definition:Derivative of Real Function at Point|derivative]]: +{{begin-eqn}} +{{eqn | l = \valueat {\dfrac {\d \size x} {\d x} } {x \mathop = 0} + | r = \lim_{x \mathop \to 0}\frac {\size x - 0} {x - 0} +}} +{{eqn | r = \begin {cases} \lim_{x \mathop \to 0^+} \dfrac x x & : x > 0 \\ \lim_{x \mathop \to 0^-} \dfrac {-x} x & : x < 0 \end {cases} + | c = {{Defof|Absolute Value}} +}} +{{eqn | r = \begin {cases} 1 & : x > 0 \\ -1 & : x < 0 \end{cases} +}} +{{end-eqn}} +From [[Limit iff Limits from Left and Right]], the limit does not exist. 
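As a numerical aside (not part of the formal proof), the behaviour above can be illustrated with difference quotients; the helper function and its name below are invented for this sketch:

```python
# Difference-quotient illustration of d|x|/dx = x/|x| for x != 0, and of the
# disagreeing one-sided quotients at x = 0.  Illustrative only; the helper
# name diff_quotient is invented for this sketch.

def diff_quotient(f, x: float, h: float) -> float:
    """Forward difference quotient (f(x + h) - f(x)) / h."""
    return (f(x + h) - f(x)) / h

# Away from zero the quotient approaches x/|x|, the sign of x:
assert round(diff_quotient(abs, 3.0, 1e-9), 6) == 1.0
assert round(diff_quotient(abs, -3.0, 1e-9), 6) == -1.0
# At zero the quotients from the right and from the left disagree,
# so the two-sided limit (and hence the derivative) does not exist:
assert diff_quotient(abs, 0.0, 1e-9) == 1.0
assert diff_quotient(abs, 0.0, -1e-9) == -1.0
```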
+{{qed}} +[[Category:Derivatives]] +[[Category:Absolute Value Function]] +\end{proof}<|endoftext|> +\section{Inverse Element in Inverse Completion of Commutative Monoid} +Tags: Inverse Completions + +\begin{theorem} +Let $\left({S, \circ}\right)$ be a [[Definition:Commutative Monoid|commutative monoid]]. +Let $\left({C, \circ}\right) \subseteq \left({S, \circ}\right)$ be the [[Definition:Subsemigroup|subsemigroup]] of [[Definition:Cancellable Element|cancellable elements]] of $\left({S, \circ}\right)$. +Let $\left({T, \circ'}\right)$ be an [[Definition:Inverse Completion|inverse completion]] of $\left({S, \circ}\right)$. +Then the [[Definition:Inverse Element|inverse]] of an [[Definition:Element|element]] of $S$ which is [[Definition:Invertible Element|invertible]] for $\circ$ is also its [[Definition:Inverse Element|inverse]] for $\circ'$. +\end{theorem} + +\begin{proof} +Let the [[Definition:Identity Element|identity]] of $\left({S, \circ}\right)$ be $e$. +Let $y \in S$ be [[Definition:Invertible Element|invertible]] for $\circ$. +Let $z$ be the [[Definition:Inverse Element|inverse]] of $y$ for $\circ$: +:$z \circ y = e$ +:$y \circ z = e$ +From [[Identity of Inverse Completion of Commutative Monoid]]: +:$z \circ' y = e$ +:$y \circ' z = e$ +Hence $z$ is the [[Definition:Inverse Element|inverse]] of $y$ for $\circ'$. +{{qed}} +\end{proof}<|endoftext|> +\section{Construction of Inverse Completion/Congruence Relation} +Tags: Semigroups, Cartesian Product, Congruence Relations + +\begin{theorem} +The [[Definition:Cross-Relation|cross-relation]] $\boxtimes$ is a [[Definition:Congruence Relation|congruence relation]] on $\left({S \times C, \oplus}\right)$.
+=== [[Construction of Inverse Completion/Equivalence Relation/Members of Equivalence Classes|Members of Equivalence Classes]] === +{{:Construction of Inverse Completion/Equivalence Relation/Members of Equivalence Classes}} +=== [[Construction of Inverse Completion/Equivalence Relation/Equivalence Class of Equal Elements|Equivalence Class of Equal Elements]] === +{{:Construction of Inverse Completion/Equivalence Relation/Equivalence Class of Equal Elements}} +\end{theorem} + +\begin{proof} +From [[Semigroup is Subsemigroup of Itself]], $\left({S, \circ}\right)$ is a [[Definition:Subsemigroup|subsemigroup]] of $\left({S, \circ}\right)$. +Also from [[Semigroup is Subsemigroup of Itself]], $\left({C, \circ {\restriction_C}}\right)$ is a [[Definition:Subsemigroup|subsemigroup]] of $\left({C, \circ {\restriction_C}}\right)$. +The result follows from [[Cross-Relation is Congruence Relation]]. +{{qed}} +\end{proof}<|endoftext|> +\section{Construction of Inverse Completion/Equivalence Relation/Members of Equivalence Classes} +Tags: Semigroups, Cartesian Product, Equivalence Relations + +\begin{theorem} +$\forall x, y \in S, a, b \in C:$ +:$(1): \quad \tuple {x \circ a, a} \boxtimes \tuple {y \circ b, b} \iff x = y$ +:$(2): \quad \eqclass {\tuple {x \circ a, y \circ a} } \boxtimes = \eqclass {\tuple {x, y} } \boxtimes$ +where $\eqclass {\tuple {x, y} } \boxtimes$ is the [[Definition:Equivalence Class|equivalence class]] of $\tuple {x, y}$ under $\boxtimes$. +\end{theorem} + +\begin{proof} +From [[Cross-Relation is Equivalence Relation]] we have that $\boxtimes$ is an [[Definition:Equivalence Relation|equivalence relation]]. +Hence the [[Definition:Equivalence Class|equivalence class]] of $\tuple {x, y}$ under $\boxtimes$ is defined for all $\tuple {x, y} \in S \times C$. +From [[Semigroup is Subsemigroup of Itself]], $\struct {S, \circ}$ is a [[Definition:Subsemigroup|subsemigroup]] of $\struct {S, \circ}$. 
+Also from [[Semigroup is Subsemigroup of Itself]], $\struct {C, \circ {\restriction_C} }$ is a [[Definition:Subsemigroup|subsemigroup]] of $\struct {C, \circ {\restriction_C} }$. +The result follows from [[Elements of Cross-Relation Equivalence Class]]. +{{qed}} +\end{proof}<|endoftext|> +\section{Construction of Inverse Completion/Equivalence Relation/Equivalence Class of Equal Elements} +Tags: Semigroups, Cartesian Product, Equivalence Relations + +\begin{theorem} +:$\forall c, d \in C: \left({c, c}\right) \boxtimes \left({d, d}\right)$ +\end{theorem} + +\begin{proof} +From [[Semigroup is Subsemigroup of Itself]], $\left({S, \circ}\right)$ is a [[Definition:Subsemigroup|subsemigroup]] of $\left({S, \circ}\right)$. +Also from [[Semigroup is Subsemigroup of Itself]], $\left({C, \circ {\restriction_C}}\right)$ is a [[Definition:Subsemigroup|subsemigroup]] of $\left({C, \circ {\restriction_C}}\right)$. +The result follows from [[Equivalence Class of Equal Elements of Cross-Relation]]. +{{qed}} +\end{proof}<|endoftext|> +\section{Construction of Inverse Completion/Quotient Structure is Commutative Semigroup} +Tags: Semigroups, Cartesian Product, Equivalence Relations + +\begin{theorem} +:$\left({T', \oplus'}\right)$ is a [[Definition:Commutative Semigroup|commutative semigroup]]. 
+\end{theorem} + +\begin{proof} +The [[Quotient Mapping on Structure is Canonical Epimorphism|canonical epimorphism]] from $\left({S \times C, \oplus}\right)$ onto $\left({T', \oplus'}\right)$ is given by: +:$q_\boxtimes: \left({S \times C, \oplus}\right) \to \left({T', \oplus'}\right): q_\boxtimes \left({x, y}\right) = \left[\!\left[{\left({x, y}\right)}\right]\!\right]_\boxtimes$ +where, by definition: +{{begin-eqn}} +{{eqn | l = \forall \left({x_1, y_1}\right), \left({x_2, y_2}\right) \in S \times C: + | o = + | r = q_\boxtimes \left({\left({x_1, y_1}\right) \oplus \left({x_2, y_2}\right)}\right) + | c = +}} +{{eqn | r = q_\boxtimes \left({\left({x_1, y_1}\right)}\right) \oplus' q_\boxtimes \left({\left({x_2, y_2}\right)}\right) + | c = +}} +{{end-eqn}} +By [[Morphism Property Preserves Closure]], as $\oplus$ is [[Definition:Closed Algebraic Structure|closed]], then so is $\oplus'$. +By [[Epimorphism Preserves Associativity]], as $\oplus$ is [[Definition:Associative|associative]], then so is $\oplus'$. +By [[Epimorphism Preserves Commutativity]], as $\oplus$ is [[Definition:Commutative Operation|commutative]], then so is $\oplus'$. +Thus $\left({T', \oplus'}\right)$ is closed, associative and commutative, and therefore a [[Definition:Commutative Semigroup|commutative semigroup]]. +{{Qed}} +\end{proof}<|endoftext|> +\section{Construction of Inverse Completion/Quotient Mapping is Injective} +Tags: Semigroups, Cartesian Product, Equivalence Relations + +\begin{theorem} +Let the [[Definition:Mapping|mapping]] $\psi: S \to T'$ be defined as: +:$\forall x \in S: \psi \left({x}\right) = \left[\!\left[{\left({x \circ a, a}\right)}\right]\!\right]_\boxtimes$ +Then $\psi: S \to T'$ is an [[Definition:Injection|injection]], and does not depend on the particular element $a$ chosen. 
+\end{theorem} + +\begin{proof} +{{begin-eqn}} +{{eqn | l = \psi \left({x}\right) + | r = \psi \left({y}\right) + | c = +}} +{{eqn | ll= \implies + | l = \forall a \in C: \left[\!\left[{\left({x \circ a, a}\right)}\right]\!\right]_\boxtimes + | r = \left[\!\left[{\left({y \circ a, a}\right)}\right]\!\right]_\boxtimes + | c = Definition of $\psi$ +}} +{{eqn | ll= \implies + | l = x + | r = y + | c = [[Construction of Inverse Completion/Equivalence Relation/Members of Equivalence Classes|Members of Equivalence Classes]]: $(1)$ +}} +{{end-eqn}} +The result follows by the definition of [[Definition:Injection|injection]]. +{{Qed}} +\end{proof}<|endoftext|> +\section{Construction of Inverse Completion/Quotient Mapping is Monomorphism} +Tags: Semigroups, Cartesian Product, Monomorphisms + +\begin{theorem} +The [[Definition:Mapping|mapping]] $\psi: S \to T'$ is a [[Definition:Monomorphism (Abstract Algebra)|monomorphism]]. +\end{theorem} + +\begin{proof} +We have that this [[Construction of Inverse Completion/Quotient Mapping is Injective|quotient mapping $\psi: S \to T'$ is an injection]]. +Let $x, y \in S$.
Then: +{{begin-eqn}} +{{eqn | l = \psi \left({x}\right) \oplus' \psi \left({y}\right) + | r = \left[\!\left[{\left({x \circ a, a}\right)}\right]\!\right]_\boxtimes \oplus' \left[\!\left[{\left({y \circ a, a}\right)}\right]\!\right]_\boxtimes + | c = Definition of $\psi$ +}} +{{eqn | r = \left[\!\left[{\left({x \circ a, a}\right) \oplus \left({y \circ a, a}\right)}\right]\!\right]_\boxtimes + | c = Definition of $\oplus'$ +}} +{{eqn | r = \left[\!\left[{\left({x \circ a \circ y \circ a, a \circ a}\right)}\right]\!\right]_\boxtimes + | c = Definition of $\oplus$ +}} +{{eqn | r = \left[\!\left[{\left({\left({x \circ y}\right) \circ \left({a \circ a}\right), a \circ a}\right)}\right]\!\right]_\boxtimes + | c = [[Definition:Commutative Operation|Commutativity]] of $\circ$ +}} +{{eqn | r = \psi \left({x \circ y}\right) + | c = as $a \circ a \in C$ +}} +{{end-eqn}} +So $\psi \left({x \circ y}\right) = \psi \left({x}\right) \oplus' \psi \left({y}\right)$, and the [[Definition:Morphism Property|morphism property]] is proven. +Thus $\psi$ is an [[Definition:Injection|injective]] [[Definition:Homomorphism (Abstract Algebra)|homomorphism]], and so by definition a [[Definition:Monomorphism (Abstract Algebra)|monomorphism]]. +{{Qed}} +\end{proof}<|endoftext|> +\section{Construction of Inverse Completion/Quotient Mapping to Image is Isomorphism} +Tags: Semigroups, Cartesian Product, Isomorphisms + +\begin{theorem} +Let $S'$ be the [[Definition:Image of Subset under Mapping|image]] $\psi \left({S}\right)$ of $S$. +Then $\psi$ is an [[Definition:Isomorphism (Abstract Algebra)|isomorphism]] from $S$ onto $S'$. +\end{theorem} + +\begin{proof} +From [[Construction of Inverse Completion/Quotient Mapping is Monomorphism|Quotient Mapping is Monomorphism]], $\psi: \left({S, \circ}\right) \to \left({S', \oplus'}\right)$ is a [[Definition:Monomorphism (Abstract Algebra)|monomorphism]].
+Therefore by definition: +:$\psi$ is a [[Definition:Homomorphism (Abstract Algebra)|homomorphism]] +:$\psi$ is an [[Definition:Injection|injection]]. +Because $S'$ is the image $\psi \left({S}\right)$ of $S$, by [[Surjection by Restriction of Codomain]] $\psi$ is a [[Definition:Surjection|surjection]]. +Therefore by definition $\psi: S \to S'$ is a [[Definition:Bijection|bijection]]. +A bijective homomorphism is an [[Definition:Isomorphism (Abstract Algebra)|isomorphism]]. +{{Qed}} +\end{proof}<|endoftext|> +\section{Construction of Inverse Completion/Quotient Mapping/Image of Cancellable Elements} +Tags: Semigroups, Cartesian Product, Equivalence Relations + +\begin{theorem} +The [[Definition:Set|set]] $C'$ of [[Definition:Cancellable Element|cancellable elements]] of the [[Definition:Semigroup|semigroup]] $S'$ is $\psi \left[{C}\right]$. +\end{theorem} + +\begin{proof} +We have [[Morphism Property Preserves Cancellability]]. +Thus: +:$c \in C \implies \psi \left({c}\right) \in C'$ +So by [[Image of Subset under Relation is Subset of Image/Corollary 2|Image of Subset under Relation is Subset of Image: Corollary 2]]: +:$\psi \left[{C}\right] \subseteq C'$ +From [[Construction of Inverse Completion/Quotient Mapping to Image is Isomorphism|Quotient Mapping to Image is Isomorphism]], $\psi$ is an [[Definition:Semigroup Isomorphism|isomorphism]] from $S$ onto $S'$. +Hence, also from [[Morphism Property Preserves Cancellability]]: +:$c' \in C' \implies \psi^{-1} \left({c'}\right) \in C$ +So by [[Image of Subset under Relation is Subset of Image/Corollary 3|Image of Subset under Relation is Subset of Image: Corollary 3]]: +: $\psi^{-1} \left[{C'}\right] \subseteq C$ +Hence by definition of [[Definition:Set Equality|set equality]]: +:$\psi \left[{C}\right] = C'$ +{{Qed}} +\end{proof}<|endoftext|> +\section{Construction of Inverse Completion/Image of Quotient Mapping is Subsemigroup} +Tags: Subsemigroups, Cartesian Product, Quotient Mappings + +\begin{theorem} +Let $S'$ be the [[Definition:Image of Subset under Mapping|image]] $\psi \left({S}\right)$ of $S$.
+Then $\left({S', \oplus'}\right)$ is a [[Definition:Subsemigroup|subsemigroup]] of $\left({T', \oplus'}\right)$. +\end{theorem} + +\begin{proof} +We have that $S'$ is the [[Definition:Image of Subset under Mapping|image]] $\psi \left({S}\right)$ of $S$. +For $\left({S', \oplus'}\right)$ to be a [[Definition:Subsemigroup|subsemigroup]] of $\left({T', \oplus'}\right)$, by [[Subsemigroup Closure Test]] we need to show that $\left({S', \oplus'}\right)$ is [[Definition:Closed Algebraic Structure|closed]]. +Let $x, y \in S'$. +Then $x = \psi \left({x'}\right), y = \psi \left({y'}\right)$ for some $x', y' \in S$. +But as $\psi$ is an [[Definition:Isomorphism (Abstract Algebra)|isomorphism]], it obeys the [[Definition:Morphism Property|morphism property]]. +So $x \oplus' y = \psi \left({x'}\right) \oplus' \psi \left({y'}\right) = \psi \left({x' \circ y'}\right)$. +Hence $x \oplus' y$ is the image of $x' \circ y' \in S$ and hence $x \oplus' y \in S'$. +Thus by the [[Subsemigroup Closure Test]], $\left({S', \oplus'}\right)$ is a [[Definition:Subsemigroup|subsemigroup]] of $\left({T', \oplus'}\right)$. +{{Qed}} +\end{proof}<|endoftext|> +\section{Construction of Inverse Completion/Identity of Quotient Structure} +Tags: Semigroups, Cartesian Product, Quotient Mappings + +\begin{theorem} +Let $c \in C$ be arbitrary. +Then: +:$\eqclass {\tuple {c, c} } \boxtimes$ +is the [[Definition:Identity Element|identity]] of $T'$. +\end{theorem} + +\begin{proof} +{{begin-eqn}} +{{eqn | l = \paren {x \circ c} \circ y + | r = x \circ \paren {y \circ c} + | c = +}} +{{eqn | ll= \leadsto + | l = \eqclass {\tuple {x, y} } \boxtimes \oplus' \eqclass {\tuple {c, c} } \boxtimes + | r = \eqclass {\tuple {x \circ c, y \circ c} } \boxtimes + | c = Definition of $\oplus'$ +}} +{{eqn | r = \eqclass {\tuple {x, y} } \boxtimes + | c = [[Definition:Cancellable Element|Cancellability]] of elements of $C$ +}} +{{end-eqn}} +Hence the result, by definition of [[Definition:Identity Element|identity element]].
+{{Qed}} +\end{proof}<|endoftext|> +\section{Construction of Inverse Completion/Invertible Elements in Quotient Structure} +Tags: Semigroups, Cartesian Product, Quotient Mappings + +\begin{theorem} +Every [[Definition:Cancellable Element|cancellable element]] of $S'$ is [[Definition:Invertible Element|invertible]] in $T'$. +\end{theorem} + +\begin{proof} +From [[Construction of Inverse Completion/Identity of Quotient Structure|Identity of Quotient Structure]], $\struct {T', \oplus'}$ has an [[Definition:Identity Element|identity]], and it is $\eqclass {\tuple {c, c} } \boxtimes$ for any $c \in C$. +Call this identity $e_{T'}$. +Let the [[Definition:Mapping|mapping]] $\psi: S \to T'$ be defined as: +:$\forall x \in S: \map \psi x = \eqclass {\tuple {x \circ a, a} } \boxtimes$ +From [[Construction of Inverse Completion/Quotient Mapping/Image of Cancellable Elements|Image of Cancellable Elements in Quotient Mapping]]: +:$C' = \psi \sqbrk C$ +So: +{{begin-eqn}} +{{eqn | l = x' + | o = \in + | r = C' + | c = +}} +{{eqn | ll= \leadsto + | l = \exists x \in C: x' + | r = \map \psi x + | c = as $\psi$ is a [[Definition:Surjection|surjection]] +}} +{{eqn | ll= \leadsto + | l = \forall a \in C: x' + | r = \eqclass {\tuple {x \circ a, a} } \boxtimes + | c = Definition of $\psi$ +}} +{{end-eqn}} +The inverse of $x'$ is $\eqclass {\tuple {a, a \circ x} } \boxtimes$, as follows: +{{begin-eqn}} +{{eqn | l = a \circ x + | o = \in + | r = C + | c = [[Cancellable Elements of Semigroup form Subsemigroup]] +}} +{{eqn | l = a \circ a \circ x + | o = \in + | r = C + | c = [[Cancellable Elements of Semigroup form Subsemigroup]] +}} +{{eqn | ll= \leadsto + | l = \eqclass {\tuple {x \circ a, a} } \boxtimes \oplus' \eqclass {\tuple {a, a \circ x} } \boxtimes + | r = \eqclass {\tuple {x \circ a \circ a, a \circ a \circ x} } \boxtimes + | c = by definition of $\oplus'$ +}} +{{eqn | r = \eqclass {\tuple {a \circ a \circ x, a \circ a \circ x} } \boxtimes + | c = [[Definition:Commutative Operation|Commutativity]] of $\circ$ +}} +{{eqn | r = e_{T'} + | c = [[Construction of Inverse Completion/Identity of Quotient Structure|Identity of Quotient Structure]] +}} +{{end-eqn}} +thus showing that the [[Definition:Inverse Element|inverse]] of $\eqclass {\tuple {x \circ a, a} } \boxtimes$ is $\eqclass {\tuple {a, a \circ x} } \boxtimes$. +{{Qed}} +\end{proof}<|endoftext|> +\section{Construction of Inverse Completion/Generator for Quotient Structure} +Tags: Semigroups, Cartesian Product, Quotient Mappings + +\begin{theorem} +$T' = S' \cup \left({C'}\right)^{-1}$ is a [[Definition:Generator|generator]] for the [[Definition:Semigroup|semigroup]] $T'$. +\end{theorem} + +\begin{proof} +Let $\left({x, y}\right) \in S \times C$. Then: +{{begin-eqn}} +{{eqn | l = \psi \left({x}\right) \oplus' \left({\psi \left({y}\right)}\right)^{-1} + | r = \left[\!\left[{\left({x \circ a, a}\right)}\right]\!\right]_\boxtimes \oplus' \left[\!\left[{\left({a, a \circ y}\right)}\right]\!\right]_\boxtimes + | c = [[Construction of Inverse Completion/Invertible Elements in Quotient Structure|Invertible Elements in Quotient Structure]] above +}} +{{eqn | r = \left[\!\left[{\left({x \circ a \circ a, a \circ a \circ y}\right)}\right]\!\right]_\boxtimes + | c = Definition of $\oplus'$ +}} +{{eqn | r = \left[\!\left[{\left({x \circ a \circ a, y \circ a \circ a}\right)}\right]\!\right]_\boxtimes + | c = [[Definition:Commutative Operation|Commutativity]] of $\circ$ +}} +{{eqn | r = \left[\!\left[{\left({x, y}\right)}\right]\!\right]_\boxtimes + | c = [[Definition:Cancellable Element|Cancellability]] of $a \in C$ +}} +{{end-eqn}} +Thus every element $\left[\!\left[{\left({x, y}\right)}\right]\!\right]_\boxtimes$ of $T'$ is the product of an element of $S'$ with the inverse of an element of $C'$, so $S' \cup \left({C'}\right)^{-1}$ generates $T'$. +{{Qed}} +\end{proof}<|endoftext|> +\section{Construction of Inverse Completion/Quotient Structure is Inverse Completion} +Tags: Semigroups, Cartesian Product, Quotient Mappings + +\begin{theorem} +$T'$ is an [[Definition:Inverse Completion|inverse completion]] of its [[Definition:Subsemigroup|subsemigroup]] $S'$.
+\end{theorem} + +\begin{proof} +Every [[Definition:Cancellable Element|cancellable element]] of $S'$ is [[Definition:Invertible Element|invertible]] in $T'$, from [[Construction of Inverse Completion/Invertible Elements in Quotient Structure|Invertible Elements in Quotient Structure]]. +$T' = S' \cup \left({C'}\right)^{-1}$ is a generator for the [[Definition:Semigroup|semigroup]] $T'$, from [[Construction of Inverse Completion/Generator for Quotient Structure|Generator for Quotient Structure]]. +Hence the result, by definition of [[Definition:Inverse Completion|inverse completion]]. +{{Qed}} +\end{proof}<|endoftext|> +\section{Ring of Integers Modulo Composite is not Integral Domain} +Tags: Ring of Integers Modulo m, Integral Domains + +\begin{theorem} +Let $m \in \Z: m \ge 2$. +Let $\struct {\Z_m, +, \times}$ be the [[Definition:Ring of Integers Modulo m|ring of integers modulo $m$]]. +Let $m$ be a [[Definition:Composite Number|composite number]]. +Then $\struct {\Z_m, +, \times}$ is not an [[Definition:Integral Domain|integral domain]]. +\end{theorem} + +\begin{proof} +Suppose that $m \in \Z: m \ge 2$ is [[Definition:Composite Number|composite]]. +Then: +: $\exists k, l \in \N_{> 0}: 1 < k < m, 1 < l < m: m = k \times l$ +Thus: +{{begin-eqn}} +{{eqn | l = \eqclass 0 m + | r = \eqclass m m + | c = +}} +{{eqn | r = \eqclass {k l} m + | c = +}} +{{eqn | r = \eqclass k m \times \eqclass l m + | c = +}} +{{end-eqn}} +As $1 < k < m$ and $1 < l < m$, neither $\eqclass k m$ nor $\eqclass l m$ equals $\eqclass 0 m$. +So $\struct {\Z_m, +, \times}$ is a [[Definition:Ring (Abstract Algebra)|ring]] with [[Definition:Zero Divisor of Ring|zero divisors]]. +So by definition $\struct {\Z_m, +, \times}$ is not an [[Definition:Integral Domain|integral domain]]. +{{qed}} +\end{proof}<|endoftext|> +\section{Ring Zero is not Cancellable} +Tags: Ring Theory + +\begin{theorem} +Let $\struct {R, +, \circ}$ be a [[Definition:Ring (Abstract Algebra)|ring]] which is [[Definition:Non-Null Ring|not null]]. +Let $0$ be the [[Definition:Ring Zero|ring zero]] of $R$.
+Then $0$ is not a [[Definition:Cancellable Element|cancellable element]] for the [[Definition:Ring Product|ring product]] $\circ$. +\end{theorem} + +\begin{proof} +{{AimForCont}} $0$ is [[Definition:Cancellable Element|cancellable]]. +Let $a, b \in R$ such that $a \ne b$. +By definition of [[Definition:Ring Zero|ring zero]]: +:$0 \circ a = 0 = 0 \circ b$ +By our supposition that $0$ is [[Definition:Cancellable Element|cancellable]]: +:$a = b$ +The result follows by [[Proof by Contradiction]]. +{{qed}} +\end{proof}<|endoftext|> +\section{Congruence Relation on Ring induces Ideal} +Tags: Congruence Relations, Ideal Theory, Quotient Rings + +\begin{theorem} +Let $\left({R, +, \circ}\right)$ be a [[Definition:Ring (Abstract Algebra)|ring]]. +Let $\mathcal E$ be a [[Definition:Congruence Relation|congruence relation]] on $R$. +Let $J = \left[\!\left[{0_R}\right]\!\right]_\mathcal E$ be the [[Definition:Equivalence Class|equivalence class]] of $0_R$ under $\mathcal E$. +Then $J$ is an [[Definition:Ideal of Ring|ideal]] of $R$. +\end{theorem} + +\begin{proof} +Let $J = \left[\!\left[{0_R}\right]\!\right]_\mathcal E$. +By [[Congruence Relation induces Normal Subgroup]], $\left({J, +}\right)$ is a [[Definition:Normal Subgroup|normal subgroup]] of $\left({R, +}\right)$. +Thus the elements of $\left({R, +}\right) / \left({J, +}\right)$ are the [[Definition:Coset|cosets]] of $\left[\!\left[{0_R}\right]\!\right]_\mathcal E$ by $+$. +We have that $\mathcal E$ is also [[Definition:Relation Compatible with Operation|compatible]] with $\circ$. 
+Thus from [[Quotient Structure is Well-Defined]]: +:$\left[\!\left[{x_1}\right]\!\right]_\mathcal E = \left[\!\left[{x_2}\right]\!\right]_\mathcal E \land \left[\!\left[{y_1}\right]\!\right]_\mathcal E = \left[\!\left[{y_2}\right]\!\right]_\mathcal E \implies \left[\!\left[{x_1 \circ y_1}\right]\!\right]_\mathcal E = \left[\!\left[{x_2 \circ y_2}\right]\!\right]_\mathcal E$ +Putting $x_1 = 0_R, x_2 = x, y_1 = y_2 = y$, so that $\left[\!\left[{y_1}\right]\!\right]_\mathcal E = \left[\!\left[{y_2}\right]\!\right]_\mathcal E$ by definition: +:$\left[\!\left[{0_R}\right]\!\right]_\mathcal E = \left[\!\left[{x}\right]\!\right]_\mathcal E \implies \left[\!\left[{0_R \circ y}\right]\!\right]_\mathcal E = \left[\!\left[{x \circ y}\right]\!\right]_\mathcal E$ +Hence: +: $\forall y \in R: \left[\!\left[{y}\right]\!\right]_\mathcal E \circ \left[\!\left[{0_R}\right]\!\right]_\mathcal E = \left[\!\left[{0_R}\right]\!\right]_\mathcal E = \left[\!\left[{0_R}\right]\!\right]_\mathcal E \circ \left[\!\left[{y}\right]\!\right]_\mathcal E$ +That is: +: $\forall x \in J, y \in R: y \circ x \in J, x \circ y \in J$ +demonstrating that $J$ is an [[Definition:Ideal of Ring|ideal]] of $R$. +{{qed}} +\end{proof}<|endoftext|> +\section{Ideal induces Congruence Relation on Ring} +Tags: Congruence Relations, Ideal Theory, Quotient Rings + +\begin{theorem} +Let $\struct {R, +, \circ}$ be a [[Definition:Ring (Abstract Algebra)|ring]]. +Let $J$ be an [[Definition:Ideal of Ring|ideal]] of $R$. +Then $J$ induces a [[Definition:Congruence Relation|congruence relation]] $\EE_J$ on $R$ such that $\struct {R / J, +, \circ}$ is a [[Definition:Quotient Ring|quotient ring]]. +\end{theorem} + +\begin{proof} +From [[Ideal is Additive Normal Subgroup]], we have that $\struct {J, +}$ is a [[Definition:Normal Subgroup|normal subgroup]] of $\struct {R, +}$.
+Let $x \mathop {\EE_J} y$ denote that $x$ and $y$ are in the same [[Definition:Coset|coset]], that is: +:$x \mathop {\EE_J} y \iff x + J = y + J$ +From [[Congruence Modulo Normal Subgroup is Congruence Relation]], $\EE_J$ is a [[Definition:Congruence Relation|congruence relation]] for $+$. +Now let $x \mathop {\EE_J} x', y \mathop {\EE_J} y'$. +By definition of [[Definition:Congruence Modulo Subgroup|congruence modulo $J$]]: +:$x + \paren {-x'} \in J$ +:$y + \paren {-y'} \in J$ +Then: +:$x \circ y + \paren {-x' \circ y'} = \paren {x + \paren {-x'} } \circ y + x' \circ \paren {y + \paren {-y'} } \in J$ +demonstrating that $\EE_J$ is a [[Definition:Congruence Relation|congruence relation]] for $\circ$. +Hence the result by definition of [[Definition:Quotient Ring|quotient ring]]. +{{qed}} +\end{proof}<|endoftext|> +\section{Vector Space has Basis between Linearly Independent Set and Spanning Set} +Tags: Generators of Vector Spaces, Bases of Vector Spaces + +\begin{theorem} +Let $V$ be a [[Definition:Vector Space|vector space]] over a [[Definition:Field (Abstract Algebra)|field]] $F$. +Let $L$ be a [[Definition:Linearly Independent Set|linearly independent]] [[Definition:Subset|subset]] of $V$. +Let $S$ be a [[Definition:Spanning Set|set that spans $V$]]. +Suppose that $L \subseteq S$. +Then $V$ has a [[Definition:Basis of Vector Space|basis]] $B$ such that $L \subseteq B \subseteq S$. +\end{theorem} + +\begin{proof} +Let $\mathscr I$ be the set of [[Definition:Linearly Independent Set|linearly independent]] [[Definition:Subset|subsets]] of $S$ that contain $L$, [[Subset Relation is Ordering|ordered by inclusion]]. +Note that $L \in \mathscr I$, so $\mathscr I \ne \varnothing$. +Let $\mathscr C$ be a [[Definition:Nest|nest]] in $\mathscr I$. +Let $C = \bigcup \mathscr C$. +{{AimForCont}} that $C$ is [[Definition:Linearly Dependent Set|linearly dependent]].
+Then there exist $v_1, v_2, \ldots, v_n \in C$ and $r_1, r_2, \ldots, r_n \in F$, not all of which are zero, such that: +:$\displaystyle \sum_{k \mathop = 1}^n r_k v_k = 0$ +Then there are $C_1, C_2, \ldots, C_n \in \mathscr C$ such that $v_k \in C_k$ for each $k \in \set {1, 2, \ldots, n}$. +Since $\mathscr C$ is a [[Definition:Nest|nest]], $C_1 \cup C_2 \cup \cdots \cup C_n$ must equal $C_k$ for some $k \in \set {1, 2, \ldots, n}$. +But then $C_k \in \mathscr C$ and $C_k$ is [[Definition:Linearly Dependent Set|linearly dependent]], which is a [[Definition:Contradiction|contradiction]]. +Thus $C$ is [[Definition:Linearly Independent Set|linearly independent]]. +Further, $L \subseteq C \subseteq S$, so $C \in \mathscr I$ and $C$ is an upper bound for $\mathscr C$. +By [[Zorn's Lemma]], $\mathscr I$ has a [[Definition:Maximal Element|maximal element]] $M$ (one that is not a [[Definition:Proper Subset|proper subset]] of any other element). +Since $M \in \mathscr I$, $M$ is [[Definition:Linearly Independent Set|linearly independent]]. +All that remains is to show that $M$ [[Definition:Spanning Set|spans]] $V$. +Suppose, to the contrary, that there exists a $v \in V \setminus \map {\operatorname {span} } M$. +Then, since $S$ spans $V$, there must be an element $s$ of $S$ such that $s \notin \map {\operatorname {span} } M$: otherwise $\map {\operatorname {span} } M$ would contain $\map {\operatorname {span} } S = V$. +Then $M \cup \set s$ is [[Definition:Linearly Independent Set|linearly independent]], and $L \subseteq M \cup \set s \subseteq S$, so $M \cup \set s \in \mathscr I$. +Thus $M \cup \set s \supsetneq M$, contradicting the [[Definition:Maximal Element|maximality]] of $M$. +Thus $M$ is a [[Definition:Linearly Independent Set|linearly independent]] subset of $V$ that [[Definition:Spanning Set|spans]] $V$. +Therefore, by definition, $M$ is a [[Definition:Basis of Vector Space|basis]] for $V$. +{{qed}} +\end{proof}<|endoftext|> +\section{Null Ring is Trivial Ring} +Tags: Trivial Rings, Null Ring + +\begin{theorem} +Let $R$ be the [[Definition:Null Ring|null ring]]. +Then $R$ is a [[Definition:Trivial Ring|trivial ring]] and therefore a [[Definition:Commutative Ring|commutative ring]]. +\end{theorem} + +\begin{proof} +We have that $R$ is the [[Definition:Null Ring|null ring]].
+That is, by definition it has a single [[Definition:Element|element]], which can be denoted $0_R$, such that: +:$R := \struct {\set {0_R}, +, \circ}$ +where [[Definition:Ring Addition|ring addition]] and the [[Definition:Ring Product|ring product]] are defined as: +:$0_R + 0_R = 0_R$ +:$0_R \circ 0_R = 0_R$ +Consider the [[Definition:Binary Operation|operation]] $+$. +By definition, the [[Definition:Algebraic Structure|algebraic structure]] $\struct {\set {0_R}, +}$ is a [[Definition:Trivial Group|trivial group]]. +Then: +:$\forall a, b \in R: a \circ b = 0_R$ +Thus by definition, $R$ is a [[Definition:Trivial Ring|trivial ring]]. +The fact that $R$ is a [[Definition:Commutative Ring|commutative ring]] follows from [[Trivial Ring is Commutative Ring]]. +{{qed}} +[[Category:Trivial Rings]] +[[Category:Null Ring]] +\end{proof}<|endoftext|> +\section{Null Ring is Ring} +Tags: Examples of Rings + +\begin{theorem} +Let $R$ be the [[Definition:Null Ring|null ring]]. +That is, let: +: $R := \left({\left\{{0_R}\right\}, +, \circ}\right)$ +where ring addition and the ring product are defined as: +* $0_R + 0_R = 0_R$ +* $0_R \circ 0_R = 0_R$ +Then $R$ is a [[Definition:Ring (Abstract Algebra)|ring]]. +\end{theorem} + +\begin{proof} +A [[Null Ring is Trivial Ring|null ring is a trivial ring]]. +So, by [[Trivial Ring is Commutative Ring]], the result follows. +{{qed}} +[[Category:Examples of Rings]] +\end{proof}<|endoftext|> +\section{Trivial Group is Cyclic Group} +Tags: Trivial Group, Cyclic Groups + +\begin{theorem} +The [[Definition:Trivial Group|trivial group]] is a [[Definition:Cyclic Group|cyclic group]] and therefore [[Definition:Abelian Group|abelian]]. +\end{theorem} + +\begin{proof} +In [[Trivial Group is Group]] it is shown that the [[Definition:Algebraic Structure|algebraic structure]] $\struct {\set e, \circ}$ such that $e \circ e = e$ is in fact a [[Definition:Group|group]].
+It remains to be shown that it is [[Definition:Cyclic Group|cyclic]].
+In order for $G$ to be a [[Definition:Cyclic Group|cyclic group]], there has to exist a $g \in G$ such that every [[Definition:Element|element]] $x$ of $G$ is expressible in the form $x = g^n$ for some $n \in \Z$.
+In this case, take $g = e$: the only [[Definition:Element|element]] of $G$ is $e$ itself, which can be expressed in the form $e^n$ for every [[Definition:Integer|integer]] $n$.
+Thus $G$ is trivially a [[Definition:Cyclic Group|cyclic group]].
+Hence $G$ is [[Definition:Abelian Group|abelian]] from [[Cyclic Group is Abelian]].
+{{qed}}
+\end{proof}<|endoftext|>
+\section{Trivial Ring is Commutative Ring}
+Tags: Ring Theory, Commutative Rings, Trivial Rings
+
+\begin{theorem}
+Let $\struct {R, +, \circ}$ be a [[Definition:Trivial Ring|trivial ring]].
+Then $\struct {R, +, \circ}$ is a [[Definition:Commutative Ring|commutative ring]].
+\end{theorem}
+
+\begin{proof}
+First we need to show that a [[Definition:Trivial Ring|trivial ring]] is actually a [[Definition:Ring (Abstract Algebra)|ring]] in the first place.
+Taking the [[Definition:Ring Axioms|ring axioms]] in turn:
+=== $(\text A)$: Ring Addition forms [[Definition:Group|Group]] ===
+$\struct {R, +}$ is a [[Definition:Group|group]]:
+This follows from the definition.
+{{qed|lemma}}
+=== $(\text M 0)$: Closure of Ring Product ===
+$\struct {R, \circ}$ is closed:
+By definition of [[Definition:Trivial Ring|trivial ring]], we have $x \circ y = 0_R \in R$.
+{{qed|lemma}}
+=== $(\text M 1)$: Associativity of Ring Product ===
+$\circ$ is [[Definition:Associative|associative]] on $\struct {R, +, \circ}$:
+:$x \circ \paren {y \circ z} = 0_R = \paren {x \circ y} \circ z$
+{{qed|lemma}}
+=== $(\text D)$: Distributivity of Ring Product over Addition ===
+$\circ$ [[Definition:Distributive Operation|distributes]] over $+$ in $\struct {R, +, \circ}$:
+:$x \circ \paren {y + z} = 0_R$ by definition.
+Then: +{{begin-eqn}} +{{eqn | l = x \circ y + x \circ z + | r = 0_R + 0_R + | c = +}} +{{eqn | r = 0_R + | c = {{Defof|Ring Zero}} +}} +{{end-eqn}} +and the same for $\paren {y + z} \circ x$. +{{qed|lemma}} +=== Commutative === +From the definition of [[Definition:Trivial Ring|trivial ring]]: +:$\forall x, y \in R: x \circ y = 0_R = y \circ x$ +Hence its [[Definition:Commutative Ring|commutativity]]. +{{qed}} +\end{proof}<|endoftext|> +\section{Congruence Relation and Ideal are Equivalent} +Tags: Ideal Theory, Congruence Relations + +\begin{theorem} +Let $\left({R, +, \circ}\right)$ be a [[Definition:Ring (Abstract Algebra)|ring]]. +Let $\mathcal E$ be an [[Definition:Equivalence Relation|equivalence relation]] on $R$ [[Definition:Relation Compatible with Operation|compatible]] with both $\circ$ and $+$, i.e. a [[Definition:Congruence Relation|congruence relation]] on $R$. +Let $J = \left[\!\left[{0_R}\right]\!\right]_\mathcal E$ be the [[Definition:Equivalence Class|equivalence class]] of $0_R$ under $\mathcal E$. +Then: +: $(1a): \quad J = \left[\!\left[{0_R}\right]\!\right]_\mathcal E$ is an [[Definition:Ideal of Ring|ideal]] of $R$ +: $(2a): \quad$ The [[Definition:Equivalence Relation|equivalence]] defined by the [[Definition:Quotient Ring|quotient ring]] $R / J$ is $\mathcal E$ itself. +Similarly, let $J$ be an [[Definition:Ideal of Ring|ideal]] of $R$. +Then: +: $(1b): \quad J$ induces a [[Definition:Congruence Relation|congruence relation]] $\mathcal E_J$ on $R$ +: $(2b): \quad$ The [[Definition:Ideal of Ring|ideal]] of $R$ defined by $\mathcal E_J$ is $J$ itself. +\end{theorem} + +\begin{proof} +=== Part $(1a)$ === +This is shown on [[Congruence Relation on Ring induces Ideal]]. +{{qed|lemma}} +=== Part $(2a)$ === +This is shown on [[Ideal induced by Congruence Relation defines that Congruence]]. +{{qed|lemma}} +=== Part $(1b)$ === +This is shown on [[Ideal induces Congruence Relation on Ring]]. 
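+The correspondence in this theorem can be exercised informally on the integers, where congruence modulo $n$ and the ideal $n \Z$ determine each other. The following Python sketch is not part of the ProofWiki argument; the window bounds and the choice $n = 6$ are arbitrary illustrative assumptions. It checks that the class of $0$ under congruence mod $6$ behaves as an ideal on a finite window:
+
+```python
+# Informal illustration on Z: the congruence "a ~ b iff a ≡ b (mod 6)"
+# and the ideal J = 6Z determine each other.
+n = 6
+J = {x for x in range(-60, 61) if x % n == 0}   # finite window of the ideal 6Z
+
+def congruent(a, b):
+    """The congruence relation induced by n on Z."""
+    return (a - b) % n == 0
+
+# J is exactly the equivalence class of 0 under the congruence (on this window).
+assert all(congruent(x, 0) for x in J)
+
+# J is closed under negation and addition, and absorbs ring products.
+small_J = {x for x in J if -30 <= x <= 30}
+for j in small_J:
+    assert congruent(-j, 0)
+    for r in range(-30, 31):
+        assert congruent(r * j, 0)               # absorption: r * j lies in 6Z
+    for j2 in small_J:
+        assert congruent(j + j2, 0)
+```
+
+The same loop structure fails immediately if $J$ is replaced by a subring that is not an ideal, which is the content of [[Subring is not necessarily Ideal]] below.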
+{{qed|lemma}}
+=== Part $(2b)$ ===
+{{finish|Argument previously here didn't prove anything. This part is genuinely different from the other three}}
+[[Category:Ideal Theory]]
+[[Category:Congruence Relations]]
+\end{proof}<|endoftext|>
+\section{Quotient Ring is Ring}
+Tags: Ideal Theory, Quotient Rings
+
+\begin{theorem}
+Let $\struct {R, +, \circ}$ be a [[Definition:Ring (Abstract Algebra)|ring]].
+Let $J$ be an [[Definition:Ideal of Ring|ideal]] of $R$.
+Let $\struct {R / J, +, \circ}$ be the [[Definition:Quotient Ring|quotient ring]] of $R$ by $J$.
+Then $R / J$ is also a [[Definition:Ring (Abstract Algebra)|ring]].
+\end{theorem}
+
+\begin{proof}
+First, it is to be shown that $+$ and $\circ$ are in fact [[Definition:Well-Defined Operation|well-defined operations]] on $R / J$.
+=== [[Quotient Ring is Ring/Quotient Ring Addition is Well-Defined|Well-definition of $+$]] ===
+{{:Quotient Ring is Ring/Quotient Ring Addition is Well-Defined}}
+=== [[Quotient Ring is Ring/Quotient Ring Product is Well-Defined|Well-definition of $\circ$]] ===
+{{:Quotient Ring is Ring/Quotient Ring Product is Well-Defined}}
+Now to prove that $\struct {R / J, +, \circ}$ is a [[Definition:Ring (Abstract Algebra)|ring]], proceed by verifying the [[Definition:Ring Axioms|ring axioms]] in turn:
+=== $\text A$: Addition forms a Group ===
+From:
+:[[Ideal is Additive Normal Subgroup]]
+:The definition of a [[Definition:Quotient Group|quotient group]]
+:[[Quotient Group is Group]]
+it follows that $\struct {R / J, +}$ is a [[Definition:Group|group]].
+{{qed|lemma}}
+=== $\text M 0$: Closure of Ring Product ===
+By [[Definition:Quotient Ring|definition of $\circ$]] in $R / J$, it follows that $\struct {R / J, \circ}$ is [[Definition:Closed Algebraic Structure|closed]].
+{{qed|lemma}} +=== $\text M 1$: Associativity of Ring Product === +Associativity can be deduced from the fact that $\circ$ is [[Definition:Associative|associative]] on $R$: +{{begin-eqn}} +{{eqn | lo= \forall x, y, z \in R: + | o = + | r = \paren {x + J} \circ \paren {\paren {y + J} \circ \paren {z + J} } +}} +{{eqn | r = \paren {x + J} \circ \paren {y \circ z + J} +}} +{{eqn | r = x \circ y \circ z + J +}} +{{eqn | r = \paren {x \circ y + J} \circ \paren {z + J} +}} +{{eqn | r = \paren {\paren {x + J} \circ \paren {y + J} } \circ \paren {z + J} +}} +{{end-eqn}} +{{qed|lemma}} +=== $\text D$: Distributivity of Ring Product over Addition === +Distributivity can be deduced from the fact that $\circ$ is [[Definition:Distributive Operation|distributive]] on $R$: +{{begin-eqn}} +{{eqn | lo= \forall x, y, z \in R: + | o = + | r = \paren {\paren {x + J} + \paren {y + J} } \circ \paren {z + J} +}} +{{eqn | r = \paren {x + y + J} \circ \paren {z + J} +}} +{{eqn | r = \paren {\paren {x + y} \circ z} + J +}} +{{eqn | r = \paren {\paren {x \circ z} + \paren {y \circ z} } + J +}} +{{eqn | r = \paren {\paren {x \circ z} + J} + \paren {\paren {y \circ z} + J} +}} +{{eqn | r = \paren {\paren {x + J} \circ \paren {z + J} } + \paren {\paren {y + J} \circ \paren {z + J} } +}} +{{end-eqn}} +{{qed|lemma}} +Having verified all of the [[Definition:Ring Axioms|ring axioms]], it follows that $\struct {R / J, +, \circ}$ is a [[Definition:Ring (Abstract Algebra)|ring]]. +{{qed}} +\end{proof}<|endoftext|> +\section{Ideal is Additive Normal Subgroup} +Tags: Ideal Theory + +\begin{theorem} +Let $\left({R, +, \circ}\right)$ be a [[Definition:Ring (Abstract Algebra)|ring]]. +Let $J$ be an [[Definition:Ideal of Ring|ideal]] of $R$. +Then $\left({J, +}\right)$ is a [[Definition:Normal Subgroup|normal subgroup]] of $\left({R, +}\right)$. 
+\end{theorem}
+
+\begin{proof}
+As $J$ is an [[Definition:Ideal of Ring|ideal]], $\left({J, +}\right)$ is a [[Definition:Subgroup|subgroup]] of $\left({R, +}\right)$.
+By definition of a [[Definition:Ring (Abstract Algebra)|ring]], $\left({R, +}\right)$ is [[Definition:Abelian Group|abelian]].
+The result follows from [[Subgroup of Abelian Group is Normal]].
+{{qed}}
+[[Category:Ideal Theory]]
+\end{proof}<|endoftext|>
+\section{Congruence Relation on Ring induces Ring}
+Tags: Congruence Relations, Ring Theory
+
+\begin{theorem}
+Let $\struct {R, +, \circ}$ be a [[Definition:Ring (Abstract Algebra)|ring]].
+Let $\EE$ be a [[Definition:Congruence Relation|congruence relation]] on $R$ for both $+$ and $\circ$.
+Let $R / \EE$ be the [[Definition:Quotient Set|quotient set of $R$ by $\EE$]].
+Let $+_\EE$ and $\circ_\EE$ be the [[Definition:Operation Induced on Quotient Set|operations induced on $R / \EE$]] by $+$ and $\circ$ respectively.
+Then $\struct {R / \EE, +_\EE, \circ_\EE}$ is a [[Definition:Ring (Abstract Algebra)|ring]].
+\end{theorem}
+
+\begin{proof}
+Let $q_\EE$ be the [[Definition:Quotient Mapping|quotient mapping]] from $\struct {R, +, \circ}$ to $\struct {R / \EE, +_\EE, \circ_\EE}$.
+From [[Quotient Mapping on Structure is Canonical Epimorphism]]:
+:$q_\EE: \struct {R, +} \to \struct {R / \EE, +_\EE}$ is an [[Definition:Epimorphism (Abstract Algebra)|epimorphism]]
+:$q_\EE: \struct {R, \circ} \to \struct {R / \EE, \circ _\EE}$ is an [[Definition:Epimorphism (Abstract Algebra)|epimorphism]].
+As the [[Definition:Morphism Property|morphism property]] holds for both $+$ and $\circ$, it follows that $q_\EE: \struct {R, +, \circ} \to \struct {R / \EE, +_\EE, \circ_\EE}$ is also an [[Definition:Epimorphism (Abstract Algebra)|epimorphism]].
+From [[Epimorphism Preserves Rings]], it follows that $\struct {R / \EE, +_\EE, \circ_\EE}$ is a [[Definition:Ring (Abstract Algebra)|ring]].
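+As an informal sanity check of this construction (a Python sketch over $\Z$ with congruence modulo $5$, not part of the formal proof; the modulus and the ranges checked are arbitrary choices), one can verify that the congruence is compatible with $+$ and $\times$, so the induced operations on the quotient set do not depend on the choice of representatives, and that the ring axioms hold on representatives:
+
+```python
+# Informal check over Z: congruence mod n is compatible with + and *,
+# so the induced operations on equivalence classes are well-defined.
+n = 5
+
+def cls(x):
+    """Canonical representative of the equivalence class of x mod n."""
+    return x % n
+
+# Compatibility: a ~ a' and b ~ b' imply a + b ~ a' + b' and a * b ~ a' * b'.
+for a in range(-10, 10):
+    for b in range(-10, 10):
+        a2, b2 = a + 3 * n, b - 7 * n      # other representatives of the same classes
+        assert cls(a + b) == cls(a2 + b2)
+        assert cls(a * b) == cls(a2 * b2)
+
+# Spot-check ring axioms for the induced operations on Z/nZ.
+for x in range(n):
+    for y in range(n):
+        for z in range(n):
+            assert cls(cls(x + y) + z) == cls(x + cls(y + z))            # + associative
+            assert cls(cls(x * y) * z) == cls(x * cls(y * z))            # * associative
+            assert cls(x * cls(y + z)) == cls(cls(x * y) + cls(x * z))   # distributive
+```
+
+This is exactly the situation of the theorem with $R = \Z$ and $\EE$ the congruence modulo $n$; the quotient ring is $\Z / n \Z$.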
+{{qed}}
+\end{proof}<|endoftext|>
+\section{Subring is not necessarily Ideal}
+Tags: Ideal Theory
+
+\begin{theorem}
+Let $\left({R, +, \circ}\right)$ be a [[Definition:Ring (Abstract Algebra)|ring]].
+Let $\left({S, +_S, \circ_S}\right)$ be a [[Definition:Subring|subring]] of $R$.
+Then it is not necessarily the case that $S$ is also an [[Definition:Ideal of Ring|ideal]] of $R$.
+\end{theorem}
+
+\begin{proof}
+Consider the [[Definition:Field of Real Numbers|field of real numbers]] $\left({\R, +, \times}\right)$.
+We have that a [[Definition:Field (Abstract Algebra)|field]] is by definition a [[Definition:Ring (Abstract Algebra)|ring]], hence so is $\left({\R, +, \times}\right)$.
+From [[Rational Numbers form Subfield of Real Numbers]] and [[Integers form Subdomain of Rationals]], it follows that the [[Definition:Integer|integers]] $\left({\Z, +, \times}\right)$ are a [[Definition:Subring|subring]] of $\left({\R, +, \times}\right)$.
+Consider $1 \in \Z$, and consider $\dfrac 1 2 \in \R$.
+We have that $1 \times \dfrac 1 2 = \dfrac 1 2 \notin \Z$.
+From this [[Proof by Counterexample|counterexample]] it is seen that $\Z$ is not an [[Definition:Ideal of Ring|ideal]] of $\R$.
+Hence the result.
+{{qed}}
+\end{proof}<|endoftext|>
+\section{Characterization of Integrable Functions}
+Tags: Measure Theory
+
+\begin{theorem}
+Let $\struct {X, \Sigma, \mu}$ be a [[Definition:Measure Space|measure space]].
+Let $f: X \to \overline \R, f \in \MM_{\overline \R}$ be a [[Definition:Measurable Function|$\Sigma$-measurable function]].
+Then the following are equivalent:
+:$(1): \quad f \in \map {\LL_{\overline \R} } \mu$, that is, $f$ is [[Definition:Integrable Function on Measure Space|$\mu$-integrable]].
+:$(2): \quad$ The [[Definition:Positive Part|positive]] and [[Definition:Negative Part|negative parts]] $f^+$ and $f^-$ are [[Definition:Integrable Function on Measure Space|$\mu$-integrable]].
+:$(3): \quad$ The [[Definition:Absolute Value of Extended Real-Valued Function|absolute value]] $\size f$ of $f$ is [[Definition:Integrable Function on Measure Space|$\mu$-integrable]].
+:$(4): \quad$ There exists a [[Definition:Integrable Function on Measure Space|$\mu$-integrable function]] $g: X \to \overline \R$ such that $\size f \le g$ [[Definition:Pointwise Inequality|pointwise]].
+\end{theorem}
+
+\begin{proof}
+We prove the whole cycle of implications:
+:$(1) \implies (2) \quad$ by definition of $(1)$
+:$(2) \implies (3) \quad$ because $\size f = f^+ + f^-$ and [[Integral of Positive Measurable Function is Additive]]
+:$(3) \implies (4) \quad$ because one can take $g := \size f$
+It remains to demonstrate $(4) \implies (1)$.
+Let $f \in \MM_{\overline \R}$, and let $g$ be as in $(4)$.
+Then:
+:$f = f^+ - f^-$
+where $f^+$ is the [[Definition:Positive Part|positive]] and $f^-$ is the [[Definition:Negative Part|negative part]] of $f$.
+We have that $f^+$ and $f^-$ are positive and measurable.
+Let $f^0$ stand for either $f^+$ or $f^-$.
+We have that:
+:$\size f = f^+ + f^-$
+Therefore:
+:$f^0 \le \size f \le g$
+It is to be shown that the [[Definition:Integral of Positive Measurable Function|integral of the positive measurable function]] $f^0$ exists and is finite.
+Let $\EE^+$ and $\map {I_\mu} h$ be defined as in [[Definition:Integral of Positive Measurable Function|Integral of Positive Measurable Function]].
+Then:
+:$\forall h \in \EE^+$: $h \le f^0 \implies h \le g$
+Hence:
+:$\set {h: h \le f^0, h \in \EE^+} \subseteq \set {h: h \le g, h \in \EE^+}$
+:$\displaystyle \int f^0 \rd \mu := \sup \set {\map {I_\mu} h: h \le f^0, h \in \EE^+} \le \sup \set {\map {I_\mu} h: h \le g, h \in \EE^+} < \infty$
+where the final inequality holds because $g$ is [[Definition:Integrable Function on Measure Space|$\mu$-integrable]].
+Thus the integrals of both $f^+$ and $f^-$ are finite.
+Therefore $f$ is [[Definition:Integrable Function on Measure Space|$\mu$-integrable]] according to definition.
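+The equivalences can be illustrated on a toy measure space: a four-point set with a weighted counting measure, where every integral is a finite sum. The following Python sketch is an informal illustration only; the weights and function values are arbitrary assumptions. It checks $\size f = f^+ + f^-$, the splitting of the integral, and the domination property of $(4)$:
+
+```python
+from fractions import Fraction as F
+
+# Toy measure space: X finite, mu a weighted counting measure.
+X = ['a', 'b', 'c', 'd']
+mu = {'a': F(1), 'b': F(1, 2), 'c': F(2), 'd': F(1, 3)}
+f  = {'a': F(3), 'b': F(-2), 'c': F(0), 'd': F(5, 2)}
+
+def integral(g):
+    """Integral with respect to mu: a finite weighted sum."""
+    return sum(g[x] * mu[x] for x in X)
+
+f_pos = {x: max(f[x], 0) for x in X}   # positive part f^+
+f_neg = {x: max(-f[x], 0) for x in X}  # negative part f^-
+f_abs = {x: abs(f[x]) for x in X}
+
+# |f| = f^+ + f^-, and the integral splits accordingly.
+assert all(f_abs[x] == f_pos[x] + f_neg[x] for x in X)
+assert integral(f_abs) == integral(f_pos) + integral(f_neg)
+# f = f^+ - f^-, so the integral of f is the difference of two finite integrals.
+assert integral(f) == integral(f_pos) - integral(f_neg)
+# Domination as in (4): |f| <= g pointwise forces integral(|f|) <= integral(g).
+g = {x: f_abs[x] + F(1, 10) for x in X}
+assert integral(f_abs) <= integral(g)
+```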
+{{qed}}
+{{MissingLinks|The steps in the above need to be demonstrated and explained by linking to appropriate results on {{ProofWiki}}. As it stands, none of the above can be followed without knowing all about this subject.}}
+\end{proof}<|endoftext|>
+\section{Quotient Ring is Ring/Quotient Ring Addition is Well-Defined}
+Tags: Quotient Rings
+
+\begin{theorem}
+Let $\struct {R, +, \circ}$ be a [[Definition:Ring (Abstract Algebra)|ring]] whose [[Definition:Ring Zero|zero]] is $0_R$ and whose [[Definition:Unity of Ring|unity]] is $1_R$.
+Let $J$ be an [[Definition:Ideal of Ring|ideal]] of $R$.
+Let $\struct {R / J, +, \circ}$ be the [[Definition:Quotient Ring|quotient ring]] of $R$ by $J$.
+Then $+$ is [[Definition:Well-Defined Operation|well-defined]] on $R / J$, that is:
+:$x_1 + J = x_2 + J, y_1 + J = y_2 + J \implies \paren {x_1 + y_1} + J = \paren {x_2 + y_2} + J$
+\end{theorem}
+
+\begin{proof}
+From [[Ideal is Additive Normal Subgroup]], $J$ is a [[Definition:Normal Subgroup|normal subgroup]] of $R$ under $+$.
+Thus, the [[Definition:Quotient Group|quotient group]] $\struct {R / J, +}$ is defined, and from [[Quotient Group is Group]], $+$ is [[Definition:Well-Defined Operation|well-defined]].
+{{qed|lemma}}
+\end{proof}<|endoftext|>
+\section{Implicitly Defined Real-Valued Function}
+Tags: Calculus
+
+\begin{theorem}
+Let $F: \struct {\mathbf X' \subseteq \R^{n + 1} } \to \struct {\mathbb I' \subseteq \R}$ have [[Definition:Continuous Mapping (Metric Spaces)|continuous]] [[Definition:Partial Derivative|partial derivatives]].
+{{explain|Can the language of this be brought into line with existing definitions of implicit functions?}}
+Let $\tuple {\mathbf x, z}$ denote an [[Definition:Element|element]] of $\R^{n + 1}$, where $\mathbf x \in \R^n$ and $z \in \R$.
+Suppose $\exists \tuple {\mathbf x_0, z_0} \in \mathbf X'$ such that: +:$\map F {\mathbf x_0, z_0} = 0$ +:$\dfrac \partial {\partial z} \map F {\mathbf x_0, z_0} \ne 0$ +Then there exists a [[Definition:Unique|unique]] [[Definition:Mapping|mapping]] of the form: +:$g: \mathbf X \to \mathbb I$ +where $\mathbf X \subseteq \R^n$ contains $\mathbf x_0$ and $\mathbb I$ is an [[Definition:Open Real Interval|open real interval]] containing $z_0$, such that: +:$\forall \mathbf x \in \mathbf X, z \in \mathbb I: \map F {\mathbf x, z} = 0 \iff z = \map g {\mathbf x}$ +and $g$ itself has [[Definition:Continuous Mapping (Metric Spaces)|continuous]] [[Definition:Partial Derivative|partial derivatives]]. +{{proofread}} +\end{theorem} + +\begin{proof} +This is a special case of the [[Implicit Function Theorem]]. +{{qed}} +\end{proof}<|endoftext|> +\section{Characteristic Function of Subset} +Tags: Characteristic Functions + +\begin{theorem} +Let $A \subseteq B \subseteq S$. +Then: +:$\forall s \in S: \chi_A \left({s}\right) \le \chi_B \left({s}\right)$ +where $\chi$ denotes [[Definition:Characteristic Function of Set|characteristic function]]. +\end{theorem} + +\begin{proof} +Both $\chi_A$ and $\chi_B$ take values in $\left\{{0, 1}\right\}$ as they are [[Definition:Characteristic Function of Set|characteristic functions]]. +So if $\chi_A \left({s}\right) = 0$, then the statement of the theorem is automatically satisfied (since both $0 \le 0$ and $0 \le 1$). +Now assume $\chi_A \left({s}\right) = 1$. +By [[Definition:Characteristic Function of Set|definition of $\chi_A$]], this happens {{iff}} $s \in A$. +But as $A \subseteq B$, this implies $s \in B$ as well. +Hence, $\chi_B \left({s}\right) = 1$, and so $\chi_A \left({s}\right) \le \chi_B \left({s}\right)$. +The theorem follows from [[Proof by Cases]]. 
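+A quick computational illustration of this monotonicity (an informal Python sketch, not part of the proof; the particular sets $S$, $A$ and $B$ are arbitrary choices satisfying $A \subseteq B \subseteq S$):
+
+```python
+# chi_A <= chi_B pointwise whenever A ⊆ B ⊆ S.
+S = set(range(10))
+A = {1, 2, 3}
+B = {1, 2, 3, 5, 7}
+assert A <= B <= S          # the hypothesis A ⊆ B ⊆ S
+
+def chi(subset):
+    """Characteristic function of subset, as a map S -> {0, 1}."""
+    return lambda s: 1 if s in subset else 0
+
+chi_A, chi_B = chi(A), chi(B)
+assert all(chi_A(s) <= chi_B(s) for s in S)
+```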
+{{qed}}
+[[Category:Characteristic Functions]]
+\end{proof}<|endoftext|>
+\section{Characteristic Function Determined by 0-Fiber}
+Tags: Characteristic Functions
+
+\begin{theorem}
+Let $A \subseteq S$.
+Let $f: S \to \set {0, 1}$ be a [[Definition:Mapping|mapping]].
+Denote by $\chi_A$ the [[Definition:Characteristic Function of Set|characteristic function]] on $A$.
+Then the following are equivalent:
+:$(1): \quad f = \chi_A$
+:$(2): \quad \forall s \in S: \map f s = 0 \iff s \notin A$
+Using the notion of a [[Definition:Fiber (Mapping)|fiber]], $(2)$ may also be expressed as:
+:$(2'):\quad \map {f^{-1} } 0 = S \setminus A$
+\end{theorem}
+
+\begin{proof}
+=== $(1)$ implies $(2)$ ===
+Follows directly from the definition of [[Definition:Characteristic Function of Set|characteristic function]].
+{{qed|lemma}}
+=== $(2)$ implies $(1)$ ===
+Let $s \in S$.
+Suppose that $s \notin A$.
+Then by assumption, $\map f s = 0$.
+Also, by definition of [[Definition:Characteristic Function of Set|characteristic function]], $\map {\chi_A} s = 0$.
+Next, suppose that $s \in A$.
+Then $\map f s \ne 0$ by assumption.
+As $\map f s \in \set {0, 1}$, it follows that $\map f s = 1$.
+Again, by definition of [[Definition:Characteristic Function of Set|characteristic function]], we also have $\map {\chi_A} s = 1$.
+Hence, for all $s \in S$, we have $\map f s = \map {\chi_A} s$.
+By [[Equality of Mappings]], it follows that $f = \chi_A$.
+{{qed}}
+[[Category:Characteristic Functions]]
+\end{proof}<|endoftext|>
+\section{Ideal induced by Congruence Relation defines that Congruence}
+Tags: Congruence Relations, Ideal Theory, Quotient Rings
+
+\begin{theorem}
+Let $\left({R, +, \circ}\right)$ be a [[Definition:Ring (Abstract Algebra)|ring]].
+Let $\mathcal E$ be a [[Definition:Congruence Relation|congruence relation]] on $R$.
+Let $J = \left[\!\left[{0_R}\right]\!\right]_\mathcal E$ be the [[Congruence Relation on Ring induces Ideal|ideal induced by $\mathcal E$]]. +Then the [[Definition:Equivalence Relation|equivalence]] defined by the [[Definition:Coset Space|coset space]] $\left({R, +}\right) / \left({J, +}\right)$ is $\mathcal E$ itself. +\end{theorem} + +\begin{proof} +Let $J = \left[\!\left[{0_R}\right]\!\right]_\mathcal E$. +From [[Congruence Relation on Ring induces Ideal]], we have that $J$ is an [[Definition:Ideal of Ring|ideal]] of $R$. +From [[Ideal is Additive Normal Subgroup]], we have that $\left({J, +}\right)$ is a [[Definition:Normal Subgroup|normal subgroup]] of $\left({R, +}\right)$. +From [[Normal Subgroup induced by Congruence Relation defines that Congruence]], the [[Definition:Equivalence Relation|equivalence]] defined by the [[Definition:Partition (Set Theory)|partition]] $\left({R, +}\right) / \left({J, +}\right)$ is $\mathcal E$. +As $\mathcal E$ was the [[Definition:Congruence Relation|congruence relation]] on $R$ that was originally posited, we already know that it is [[Definition:Relation Compatible with Operation|compatible]] with $\circ$. +Thus the [[Definition:Equivalence Relation|equivalence]] defined by $J$ is the same [[Definition:Congruence Relation|congruence relation]] on $R$ that gave rise to $J$ to start with. +Hence the result. +{{qed}} +\end{proof}<|endoftext|> +\section{Rational Numbers Null Set under Lebesgue Measure} +Tags: Lebesgue Measure + +\begin{theorem} +Let $\lambda$ be [[Definition:Lebesgue Measure|$1$-dimensional Lebesgue measure]] on $\R$. +Let $\Q$ be the set of [[Definition:Rational Number|rational numbers]]. +Then $\lambda \left({\Q}\right) = 0$, i.e. $\Q$ is a [[Definition:Null Set|$\lambda$-null set]]. +\end{theorem} + +\begin{proof} +We have that the [[Rational Numbers are Countably Infinite]]. +The result follows from [[Countable Set is Null Set under Lebesgue Measure]]. 
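+The underlying covering argument, that a countable set can be covered by intervals of arbitrarily small total length, can be sketched numerically in Python (an informal illustration only; the finite enumeration of rationals and the choice of $\epsilon$ are arbitrary assumptions):
+
+```python
+from fractions import Fraction as F
+
+# Enumerate some rationals in (0, 1) in lowest terms and cover the i-th one
+# by an open interval of length eps / 2**(i+1); the total length is then
+# strictly less than eps, illustrating why a countable set is Lebesgue-null.
+eps = F(1, 100)
+rationals = [F(p, q) for q in range(2, 20) for p in range(1, q)
+             if F(p, q).denominator == q]       # keep each rational once
+
+total_length = sum(eps / F(2 ** (i + 1)) for i in range(len(rationals)))
+assert total_length < eps                       # geometric series sums below eps
+
+# Each rational sits inside its own covering interval by construction.
+for i, r in enumerate(rationals):
+    half = eps / F(2 ** (i + 2))
+    assert r - half < r < r + half
+```
+
+Letting $\epsilon \to 0$ over the full (countably infinite) enumeration gives $\map \lambda \Q = 0$, as the theorem asserts.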
+{{qed}}
+\end{proof}<|endoftext|>
+\section{Cartesian Product of Intersections/General Case}
+Tags: Cartesian Product of Intersections
+
+\begin{theorem}
+Let $I$ be an [[Definition:Indexing Set|indexing set]].
+Let $\family {S_i}_{i \mathop \in I}$ and $\family {T_i}_{i \mathop \in I}$ be [[Definition:Indexed Family of Sets|indexed families of sets]].
+Then:
+:$\displaystyle \paren{ \prod_{i \mathop \in I} S_i } \cap \paren{ \prod_{i \mathop \in I} T_i } = \prod_{i \mathop \in I} \paren{S_i \cap T_i}$
+\end{theorem}
+
+\begin{proof}
+Let $\family {x_i}_{i \mathop \in I} \in \displaystyle \paren{ \prod_{i \mathop \in I} S_i } \cap \paren{ \prod_{i \mathop \in I} T_i }$.
+By definition of [[Definition:Set Intersection|intersection]], this is equivalent to the [[Definition:Conjunction|conjunction]] of:
+:$\family {x_i}_{i \mathop \in I} \in \displaystyle \prod_{i \mathop \in I} S_i$
+:$\family {x_i}_{i \mathop \in I} \in \displaystyle \prod_{i \mathop \in I} T_i$
+By definition of [[Definition:Finite Cartesian Product|Cartesian product]], this means, for all $i \in I$:
+:$x_i \in S_i$ and $x_i \in T_i$
+Again by definition of [[Definition:Set Intersection|intersection]], this is equivalent to, for all $i \in I$:
+:$x_i \in S_i \cap T_i$
+Finally, by definition of [[Definition:Finite Cartesian Product|Cartesian product]], this is equivalent to:
+:$\family {x_i}_{i \mathop \in I} \in \displaystyle \prod_{i \mathop \in I} \paren{S_i \cap T_i}$
+The result follows by definition of [[Definition:Set Equality|set equality]].
+{{qed}}
+[[Category:Cartesian Product of Intersections]]
+\end{proof}<|endoftext|>
+\section{Cartesian Product of Semirings of Sets}
+Tags: Cartesian Product, Semirings of Sets
+
+\begin{theorem}
+Let $\SS$ and $\TT$ be [[Definition:Semiring of Sets|semirings of sets]].
+Then $\SS \times \TT$ is also a [[Definition:Semiring of Sets|semiring of sets]].
+Here, $\times$ denotes [[Definition:Cartesian Product|Cartesian product]].
+\end{theorem} + +\begin{proof} +Recall the conditions for $\SS \times \TT$ to be a [[Definition:Semiring of Sets|semiring of sets]]: +:$(1): \quad \O \in \SS \times \TT$ +:$(2): \quad \SS \times \TT$ is [[Definition:Stable under Intersection|$\cap$-stable]] +:$(3'):\quad$ If $A, B \in \SS \times \TT$, then there exists a [[Definition:Finite Sequence|finite sequence]] of [[Definition:Pairwise Disjoint|pairwise disjoint]] [[Definition:Set|sets]] $A_1, A_2, \ldots, A_n \in \SS \times \TT$ such that $\displaystyle A \setminus B = \bigcup_{k \mathop = 1}^n A_k$. +=== Proof of $(1)$ === +From [[Empty Set is Subset of All Sets]]: +:$\O \in \SS$ +and: +:$\O \in \TT$ +So: +:$\O \times \O \in \SS \times \TT$ +From [[Cartesian Product is Empty iff Factor is Empty]]: +:$\O \times \O = \O$ +{{qed|lemma}} +=== Proof of $(2)$ === +Let $S_1 \times T_1$ and $S_2 \times T_2$ be in $\SS \times \TT$. +Then from [[Cartesian Product of Intersections]]: +:$\paren {S_1 \times T_1} \cap \paren {S_2 \times T_2} = \paren {S_1 \cap S_2} \times \paren {T_1 \cap T_2}$ +Since $\SS$ and $\TT$ are [[Definition:Stable under Intersection|$\cap$-stable]], $S_1 \cap S_2 \in \SS$ and $T_1 \cap T_2 \in \TT$. +Hence $\paren {S_1 \times T_1} \cap \paren {S_2 \times T_2} \in \SS \times \TT$. +{{qed|lemma}} +=== Proof of $(3')$ === +Let $S_1 \times T_1$ and $S_2 \times T_2$ be in $\SS \times \TT$. +Let $\sqcup$ signify [[Definition:Disjoint Union (Probability Theory)|union of disjoint sets]]. 
+Then:
+{{begin-eqn}}
+{{eqn | l = \paren {S_1 \times T_1} \setminus \paren {S_2 \times T_2}
+ | r = \paren {S_1 \times \paren {T_1 \setminus T_2} } \cup \paren {\paren {S_1 \setminus S_2} \times T_1}
+ | c = [[Set Difference of Cartesian Products]]
+}}
+{{eqn | o = }}
+{{eqn | l = S_1
+ | r = \paren {S_1 \setminus S_2} \sqcup \paren {S_1 \cap S_2}
+ | c = [[Set Difference Union Intersection]]
+}}
+{{eqn | ll= \leadsto
+ | l = S_1 \times \paren {T_1 \setminus T_2}
+ | r = \paren {S_1 \setminus S_2} \times \paren {T_1 \setminus T_2} \ \sqcup \ \paren {S_1 \cap S_2} \times \paren {T_1 \setminus T_2}
+ | c = [[Cartesian Product Distributes over Union]]
+}}
+{{eqn | o = }}
+{{eqn | l = T_1
+ | r = \paren {T_1 \cap T_2} \sqcup \paren {T_1 \setminus T_2}
+ | c = [[Set Difference Union Intersection]]
+}}
+{{eqn | ll= \leadsto
+ | l = \paren {S_1 \setminus S_2} \times T_1
+ | r = \paren {S_1 \setminus S_2} \times \paren {T_1 \cap T_2} \ \sqcup \ \paren {S_1 \setminus S_2} \times \paren {T_1 \setminus T_2}
+ | c = [[Cartesian Product Distributes over Union]]
+}}
+{{eqn | o = }}
+{{eqn | ll= \leadsto
+ | l = \paren {S_1 \times T_1} \setminus \paren {S_2 \times T_2}
+ | r = \paren {S_1 \setminus S_2} \times \paren {T_1 \setminus T_2} \ \sqcup \ \paren {S_1 \cap S_2} \times \paren {T_1 \setminus T_2} \ \sqcup \ \paren {S_1 \setminus S_2} \times \paren {T_1 \cap T_2}
+}}
+{{end-eqn}}
+Now recall that $\SS$ and $\TT$ are [[Definition:Semiring of Sets|semirings of sets]].
+Thence the expressions $S_1 \setminus S_2$ and $T_1 \setminus T_2$ may be written as [[Definition:Finite|finite]] [[Definition:Disjoint Union (Probability Theory)|disjoint unions]].
+Applying [[Cartesian Product Distributes over Union]] again, each of the three [[Definition:Operand|$\sqcup$-operands]] in the expression above may thus be written as a [[Definition:Finite|finite]] [[Definition:Disjoint Union (Probability Theory)|disjoint union]].
+This yields the same fact for $\paren {S_1 \times T_1} \setminus \paren {S_2 \times T_2}$ as well, completing the proof. +{{qed}} +\end{proof}<|endoftext|> +\section{Congruence Relation on Group induces Normal Subgroup} +Tags: Normal Subgroups, Congruence Relations + +\begin{theorem} +Let $\left({G, \circ}\right)$ be a [[Definition:Group|group]] whose [[Definition:Identity Element|identity]] is $e$. +Let $\mathcal R$ be a [[Definition:Congruence Relation|congruence relation]] for $\circ$. +Let $H = \left[\!\left[{e}\right]\!\right]_\mathcal R$, where $\left[\!\left[{e}\right]\!\right]_\mathcal R$ is the [[Definition:Equivalence Class|equivalence class]] of $e$ under $\mathcal R$. +Then: +: $\left({H, \circ \restriction_H}\right)$ is a [[Definition:Normal Subgroup|normal subgroup]] of $G$ +where $\circ \restriction_H$ denotes the [[Definition:Restriction of Operation|restriction of $\circ$ to $H$]]. +\end{theorem} + +\begin{proof} +We are given that $\mathcal R$ is a [[Definition:Congruence Relation|congruence relation]] for $\circ$. +From [[Congruence Relation iff Compatible with Operation]], we have: +: $\forall u \in G: x \mathop {\mathcal R} y \implies \left({x \circ u}\right) \mathop {\mathcal R} \left({y \circ u}\right), \left({u \circ x}\right)\mathop {\mathcal R} \left({u \circ y}\right)$ +==== Proof of being a Subgroup ==== +We show that $H$ is a [[Definition:Subgroup|subgroup]] of $G$. 
+First we note that $H$ is not [[Definition:Empty Set|empty]]: +:$e \in H \implies H \ne \varnothing$ +Then we show $H$ is closed: +{{begin-eqn}} +{{eqn | l = x, y + | o = \in + | r = H + | c = +}} +{{eqn | ll= \implies + | l = e + | o = \mathcal R + | r = x + | c = +}} +{{eqn | lo= \land + | l = e + | o = \mathcal R + | r = y + | c = by definition of $H$ +}} +{{eqn | ll= \implies + | l = \left({e \circ e}\right) + | o = \mathcal R + | r = \left({x \circ y}\right) + | c = $\mathcal R$ is [[Definition:Relation Compatible with Operation|compatible]] with $\circ$ +}} +{{eqn | ll= \implies + | l = x \circ y + | o = \in + | r = H + | c = by definition of $H$ +}} +{{end-eqn}} +Next we show that $x \in H \implies x^{-1} \in H$: +{{begin-eqn}} +{{eqn | l = x + | o = \in + | r = H + | c = +}} +{{eqn | ll= \implies + | l = e + | o = \mathcal R + | r = x + | c = by definition of $H$ +}} +{{eqn | ll= \implies + | l = \left({x^{-1} \circ e}\right) + | o = \mathcal R + | r = \left({x^{-1} \circ x}\right) + | c = $\mathcal R$ is [[Definition:Relation Compatible with Operation|compatible]] with $\circ$ +}} +{{eqn | ll= \implies + | l = x^{-1} + | o = \mathcal R + | r = e + | c = [[Definition:Group|Group properties]] +}} +{{eqn | ll= \implies + | l = x^{-1} + | o = \in + | r = H + | c = by definition of $H$ +}} +{{end-eqn}} +Thus by the [[Two-Step Subgroup Test]], $H$ is a [[Definition:Subgroup|subgroup]] of $G$. +{{qed|lemma}} +==== Proof of Normality ==== +Next we show that $H$ is [[Definition:Normal Subgroup|normal]] in $G$. 
+Let $x \in G$ be arbitrary, and let $h \in H$.
+Then:
+{{begin-eqn}}
+{{eqn | l = h
+ | o = \in
+ | r = H
+ | c = 
+}}
+{{eqn | ll= \implies
+ | l = e
+ | o = \mathcal R
+ | r = h
+ | c = by definition of $H$
+}}
+{{eqn | ll= \implies
+ | l = \left({x \circ e}\right)
+ | o = \mathcal R
+ | r = \left({x \circ h}\right)
+ | c = $\mathcal R$ is [[Definition:Relation Compatible with Operation|compatible]] with $\circ$
+}}
+{{eqn | ll= \implies
+ | l = \left({x \circ e \circ x^{-1} }\right)
+ | o = \mathcal R
+ | r = \left({x \circ h \circ x^{-1} }\right)
+ | c = $\mathcal R$ is [[Definition:Relation Compatible with Operation|compatible]] with $\circ$
+}}
+{{eqn | ll= \implies
+ | l = e
+ | o = \mathcal R
+ | r = \left({x \circ h \circ x^{-1} }\right)
+ | c = [[Definition:Group|Group properties]]
+}}
+{{eqn | ll= \implies
+ | l = x \circ h \circ x^{-1}
+ | o = \in
+ | r = H
+ | c = by definition of $H$
+}}
+{{end-eqn}}
+Thus from [[Subgroup is Normal iff Contains Conjugate Elements]], we have that $H$ is [[Definition:Normal Subgroup|normal]].
+{{qed}}
+\end{proof}<|endoftext|>
+\section{Normal Subgroup induced by Congruence Relation defines that Congruence}
+Tags: Normal Subgroups, Congruence Relations
+
+\begin{theorem}
+Let $\struct {G, \circ}$ be a [[Definition:Group|group]] whose [[Definition:Identity Element|identity]] is $e$.
+Let $\mathcal R$ be a [[Definition:Congruence Relation|congruence relation]] for $\circ$.
+Let $\eqclass e {\mathcal R}$ be the [[Definition:Equivalence Class|equivalence class]] of $e$ under $\mathcal R$.
+Let $N = \eqclass e {\mathcal R}$ be the [[Congruence Relation on Group induces Normal Subgroup|normal subgroup induced by $\mathcal R$]].
+{{explain|Check the above because it does not look right}}
+Then $\mathcal R$ is the [[Congruence Modulo Subgroup is Equivalence Relation|equivalence relation $\mathcal R_N$ defined by $N$]].
+\end{theorem}
+
+\begin{proof}
+Let $\mathcal R_N$ be the [[Congruence Modulo Subgroup is Equivalence Relation|equivalence defined by $N$]].
+Then:
+{{begin-eqn}}
+{{eqn | l = x
+ | o = \mathcal R
+ | r = y
+ | c = 
+}}
+{{eqn | ll= \leadsto
+ | l = \paren {x^{-1} \circ x}
+ | o = \mathcal R
+ | r = \paren {x^{-1} \circ y}
+ | c = $\mathcal R$ is [[Definition:Relation Compatible with Operation|compatible]] with $\circ$
+}}
+{{eqn | ll= \leadsto
+ | l = e
+ | o = \mathcal R
+ | r = \paren {x^{-1} \circ y}
+ | c = [[Definition:Group|Group properties]]
+}}
+{{eqn | ll= \leadsto
+ | l = x^{-1} \circ y
+ | o = \in
+ | r = N
+ | c = Definition of $N$
+}}
+{{end-eqn}}
+Conversely, if $x^{-1} \circ y \in N$, then $e \mathop {\mathcal R} \paren {x^{-1} \circ y}$, and as $\mathcal R$ is [[Definition:Relation Compatible with Operation|compatible]] with $\circ$, composing with $x$ on the left yields $x \mathop {\mathcal R} y$.
+But from [[Congruence Class Modulo Subgroup is Coset]]:
+:$x \mathop {\mathcal R_N} y \iff x^{-1} \circ y \in N$
+Thus:
+:$\mathcal R = \mathcal R_N$
+{{Qed}}
+\end{proof}<|endoftext|>
+\section{Quotient Structure on Group defined by Congruence equals Quotient Group}
+Tags: Normal Subgroups, Congruence Relations
+
+\begin{theorem}
+Let $\struct {G, \circ}$ be a [[Definition:Group|group]] whose [[Definition:Identity Element|identity]] is $e$.
+Let $\mathcal R$ be a [[Definition:Congruence Relation|congruence relation]] for $\circ$.
+Let $\struct {G / \mathcal R, \circ_\mathcal R}$ be the [[Definition:Quotient Structure|quotient structure defined by $\mathcal R$]].
+Let $N = \eqclass e {\mathcal R}$ be the [[Congruence Relation on Group induces Normal Subgroup|normal subgroup induced by $\mathcal R$]].
+Let $\struct {G / N, \circ_N}$ be the [[Definition:Quotient Group|quotient group]] of $G$ by $N$.
+Then $\struct {G / \mathcal R, \circ_\mathcal R}$ is the [[Definition:Subgroup|subgroup]] $\struct {G / N, \circ_N}$ of the [[Definition:Semigroup|semigroup]] $\struct {\powerset G, \circ_\mathcal P}$.
+\end{theorem}
+
+\begin{proof}
+Let $\eqclass x {\mathcal R} \in G / \mathcal R$.
+By [[Congruence Relation on Group induces Normal Subgroup]]:
+:$\eqclass x {\mathcal R} = x N$
+where $x N$ is the [[Definition:Left Coset|(left) coset]] of $N$ in $G$.
+Similarly, let $y N \in G / N$.
+Then from [[Normal Subgroup induced by Congruence Relation defines that Congruence]]:
+:$y N = \eqclass y {\mathcal R}$
+where:
+:$\eqclass y {\mathcal R}$ is the [[Definition:Equivalence Class|equivalence class]] of $y$ under $\mathcal R$
+:$\mathcal R$ is the [[Left Congruence Modulo Subgroup is Equivalence Relation|equivalence relation defined by $N$]].
+Hence the result.
+{{qed}}
+\end{proof}<|endoftext|>
+\section{Vector Scaled by Zero is Zero Vector}
+Tags: Vector Algebra
+
+\begin{theorem}
+Let $F$ be a [[Definition:Field (Abstract Algebra)|field]] whose [[Definition:Field Zero|zero]] is $0_F$.
+Let $\struct {\mathbf V, +, \circ}_F$ be a [[Definition:Vector Space|vector space]] over $F$, as defined by the [[Definition:Vector Space Axioms|vector space axioms]].
+Then:
+:$\forall \mathbf v \in \mathbf V: 0_F \circ \mathbf v = \bszero$
+\end{theorem}
+
+\begin{proof}
+{{begin-eqn}}
+{{eqn | l = 0_F \circ \mathbf v
+ | r = \paren {0_F + 0_F} \circ \mathbf v
+ | c = {{Field-axiom|A3}}
+}}
+{{eqn | r = 0_F \circ \mathbf v + 0_F \circ \mathbf v
+ | c = {{Vector-space-axiom|5}}
+}}
+{{eqn | ll= \leadsto
+ | l = 0_F \circ \mathbf v + \paren {-0_F \circ \mathbf v}
+ | r = \paren {0_F \circ \mathbf v + 0_F \circ \mathbf v} + \paren {-0_F \circ \mathbf v}
+ | c = adding $-0_F \circ \mathbf v$ to both sides
+}}
+{{eqn | r = 0_F \circ \mathbf v + \paren {0_F \circ \mathbf v + \paren {-0_F \circ \mathbf v} }
+ | c = {{Vector-space-axiom|2}}
+}}
+{{eqn | ll= \leadsto
+ | l = \bszero
+ | r = 0_F \circ \mathbf v + \bszero
+ | c = {{Vector-space-axiom|4}}
+}}
+{{eqn | r = 0_F \circ \mathbf v
+ | c = {{Vector-space-axiom|3}}
+}}
+{{end-eqn}}
+{{qed}}
+\end{proof}<|endoftext|>
+\section{Zero Vector Scaled is Zero Vector}
+Tags: Zero Vectors
+
+\begin{theorem}
+Let $\struct 
{\mathbf V, +, \circ}_F$ be a [[Definition:Vector Space|vector space]] over a [[Definition:Field (Abstract Algebra)|field]] $F$, as defined by the [[Definition:Vector Space Axioms|vector space axioms]].
+Then:
+:$\forall \lambda \in F: \lambda \circ \bszero = \bszero$
+where $\bszero \in \mathbf V$ is the [[Definition:Zero Vector|zero vector]].
+\end{theorem}
+
+\begin{proof}
+{{begin-eqn}}
+{{eqn | l = \lambda \circ \bszero
+ | r = \lambda \circ \paren {\bszero + \bszero}
+ | c = {{Vector-space-axiom|3}}
+}}
+{{eqn | r = \lambda \circ \bszero + \lambda \circ \bszero
+ | c = {{Vector-space-axiom|6}}
+}}
+{{eqn | ll= \leadsto
+ | l = \lambda \circ \bszero + \paren {-\lambda \circ \bszero}
+ | r = \paren {\lambda \circ \bszero + \lambda \circ \bszero} + \paren {-\lambda \circ \bszero}
+ | c = adding $-\lambda \circ \bszero$ to both sides
+}}
+{{eqn | r = \lambda \circ \bszero + \paren {\lambda \circ \bszero + \paren {-\lambda \circ \bszero} }
+ | c = {{Vector-space-axiom|2}}
+}}
+{{eqn | ll= \leadsto
+ | l = \bszero
+ | r = \lambda \circ \bszero + \bszero
+ | c = {{Vector-space-axiom|4}}
+}}
+{{eqn | r = \lambda \circ \bszero
+ | c = {{Vector-space-axiom|3}}
+}}
+{{end-eqn}}
+{{qed}}
+\end{proof}<|endoftext|>
+\section{Vector Product is Zero only if Factor is Zero}
+Tags: Vector Algebra
+
+\begin{theorem}
+Let $F$ be a [[Definition:Field (Abstract Algebra)|field]] whose [[Definition:Field Zero|zero]] is $0_F$ and whose [[Definition:Unity of Field|unity]] is $1_F$.
+Let $\struct {\mathbf V, +, \circ}_F$ be a [[Definition:Vector Space|vector space]] over $F$, as defined by the [[Definition:Vector Space Axioms|vector space axioms]].
+Then:
+:$\forall \lambda \in F: \forall \mathbf v \in \mathbf V: \lambda \circ \mathbf v = \bszero \implies \paren {\lambda = 0_F \lor \mathbf v = \bszero}$
+where $\bszero \in \mathbf V$ is the [[Definition:Zero Vector|zero vector]].
+\end{theorem}
+
+\begin{proof}
+{{AimForCont}} that:
+:$\exists \lambda \in F: \exists \mathbf v \in \mathbf V: \lambda \circ \mathbf v = \bszero \land \lambda \ne 0_F \land \mathbf v \ne \bszero$
+which is the [[Definition:Negation|negation]] of the statement of the theorem.
+Utilizing the [[Definition:Vector Space Axioms|vector space axioms]]:
+{{begin-eqn}}
+{{eqn | l = \lambda \circ \mathbf v
+ | r = \bszero
+}}
+{{eqn | ll= \leadsto
+ | l = \lambda^{-1} \circ \paren {\lambda \circ \mathbf v}
+ | r = \lambda^{-1} \circ \bszero
+ | c = multiplying both sides by $\lambda^{-1}$, which exists as $\lambda \ne 0_F$
+}}
+{{eqn | ll= \leadsto
+ | l = \bszero
+ | r = \lambda^{-1} \circ \paren {\lambda \circ \mathbf v}
+ | c = [[Zero Vector Scaled is Zero Vector]]
+}}
+{{eqn | r = \paren {\lambda^{-1} \cdot \lambda} \circ \mathbf v
+}}
+{{eqn | r = 1_F \circ \mathbf v
+}}
+{{eqn | r = \mathbf v
+}}
+{{end-eqn}}
+which [[Proof by Contradiction|contradicts]] the assumption that $\mathbf v \ne \bszero$.
+{{qed}}
+\end{proof}<|endoftext|>
+\section{Baire-Osgood Theorem}
+Tags: Topology, Meager Spaces, Baire Spaces
+
+\begin{theorem}
+Let $X$ be a [[Definition:Baire Space (Topology)|Baire space]].
+Let $Y$ be a [[Definition:Metrizable Topology|metrizable topological space]].
+Let $f: X \to Y$ be a [[Definition:Mapping|mapping]] which is the [[Definition:Pointwise Limit|pointwise limit]] of a [[Definition:Sequence|sequence]] $\left \langle{f_n}\right\rangle$ in $C \left({X, Y}\right)$.
+{{explain|$C \left({X, Y}\right)$}}
+Let $D \left({f}\right)$ be the set of points where $f$ is [[Definition:Discontinuous (Topology)|discontinuous]].
+Then $D \left({f}\right)$ is a [[Definition:Meager Space|meager]] subset of $X$.
+\end{theorem}
+
+\begin{proof}
+Let $d$ be a [[Definition:Metric|metric]] on $Y$ generating its [[Definition:Topology|topology]].
+Using the [[Definition:Oscillation|oscillation]] we have, following the convention that we omit the metric when writing the oscillation, that:
+:$\displaystyle D \left({f}\right) = \bigcup_{n \mathop = 1}^\infty \left\{{x \in X: \omega_f \left({x}\right) \ge \frac 1 n}\right\}$
+which is a [[Definition:Countable Union|countable union]] of [[Definition:Closed Set (Metric Space)|closed sets]].
+Since we have this expression for $D \left({f}\right)$, the claim follows if we can prove that for all $\epsilon \in \R_{>0}$ the [[Definition:Closed Set (Metric Space)|closed set]]:
+: $F_\epsilon = \left\{{x \in X : \omega_f \left({x}\right) \ge 5 \epsilon}\right\}$
+is [[Definition:Nowhere Dense|nowhere dense]].
+Let $\epsilon \in \R_{>0}$ be given and consider the sets:
+:$\displaystyle A_n = \bigcap_{i, j \mathop \ge n} \left\{{x \in X: d \left({f_i \left({x}\right), f_j \left({x}\right)}\right) \le \epsilon}\right\}$
+which are [[Definition:Closed Set (Metric Space)|closed]] because $d$ and the $f_i$ are continuous.
+Because $\left \langle{f_n}\right\rangle$ is pointwise convergent, it is pointwise Cauchy with respect to any metric generating the topology on $Y$, so $\displaystyle \bigcup_{n \mathop = 1}^\infty A_n = X$.
+Given a nonempty open $U \subseteq X$ we wish to show that $U \nsubseteq F_\epsilon$.
+Consider the sequence $\left\langle{A_n \cap U}\right\rangle$ of closed subsets of $U$.
+The union of these is all of $U$.
+As $U$ is an open subspace of a Baire space, it is a Baire space.
+So one of the elements of $\left\langle{A_n \cap U}\right\rangle$, say $A_k \cap U$, must have an interior point, so there is a nonempty open $V \subseteq A_k \cap U$.
+Because $U$ is open in $X$, $V$ is open in $X$ as well.
+We will show that: +:$V \subseteq F_\epsilon^c = \left\{{x \in X: \omega_f \left({x}\right) < 5 \epsilon}\right\}$ +This will show that: +:$V \nsubseteq F_\epsilon$ and thus that $U \nsubseteq F_\epsilon$ +Since $V \subseteq A_k$: +:$d \left({f_i \left({x}\right), f_j \left({x}\right)}\right) \le \epsilon$ +for all $x \in V$ and all $i, j \ge k$. +Pointwise convergence of $\left \langle{f_n}\right\rangle$ gives that: +: $d \left({f \left({x}\right), f_k \left({x}\right)}\right) \le \epsilon$ +for all $x \in V$. +By continuity of $f_k$ we have for every $x_0 \in V$ an open $V_{x_0} \subseteq V$ such that: +: $d \left({f_k \left({x}\right), f_k \left({x_0}\right)}\right) \le \epsilon$ +for all $x \in V_{x_0}$. +By the triangle inequality: +:$d \left({f \left({x}\right), f_k \left({x_0}\right)}\right) \le 2 \epsilon$ +for all $x \in V_{x_0}$. +Applying the triangle inequality again: +:$d \left({f \left({x}\right), f \left({y}\right)}\right) \le 4 \epsilon$ +for all $x, y \in V_{x_0}$. +Thus we have the bound: +:$\omega_f \left({x_0}\right) \le \omega_f \left({V_{x_0}}\right) \le 4 \epsilon$ +showing that $x_0 \notin F_\epsilon$ as desired. +{{qed}} +{{Namedfor|René-Louis Baire|name2 = William Fogg Osgood|cat = Baire|cat2 = Osgood}} +\end{proof}<|endoftext|> +\section{Null Ring is Ideal} +Tags: Ideal Theory + +\begin{theorem} +Let $\struct {R, +, \circ}$ be a [[Definition:Ring (Abstract Algebra)|ring]] whose [[Definition:Ring Zero|zero]] is $0_R$. +Then the [[Definition:Null Ring|null ring]] $\struct {\set {0_R}, +, \circ}$ is an [[Definition:Ideal of Ring|ideal]] of $R$. +\end{theorem} + +\begin{proof} +From [[Null Ring and Ring Itself Subrings]], $\set {0_R}$ is a [[Definition:Subring|subring]] of $\struct {R, +, \circ}$. +Also: +:$\forall x \in \struct {R, +, \circ}: x \circ 0_R = 0_R = 0_R \circ x \in \set {0_R}$ +thus fulfilling the [[Definition:Ideal of Ring|condition]] for $\set {0_R}$ to be an [[Definition:Ideal of Ring|ideal]] of $R$. 
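As an informal finite check of the absorption condition above (not part of the ProofWiki proof), the following sketch verifies it in the concrete ring $\Z_6$ of integers modulo $6$, which stands in for a general ring:

```python
# The zero ideal {0} of Z/6Z absorbs multiplication:
# x * 0 = 0 = 0 * x for every x in the ring.
N = 6  # modulus; Z/6Z is just an illustrative choice
zero_ideal = {0}

assert all((x * 0) % N in zero_ideal for x in range(N))
assert all((0 * x) % N in zero_ideal for x in range(N))
```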
+{{qed}} +\end{proof}<|endoftext|> +\section{Ring Epimorphism Preserves Subrings} +Tags: Ring Epimorphisms, Subrings + +\begin{theorem} +Let $\phi: \struct {R_1, +_1, \circ_1} \to \struct {R_2, +_2, \circ_2}$ be a [[Definition:Ring Epimorphism|ring epimorphism]]. +Let $S$ be a [[Definition:Subring|subring]] of $R_1$. +Then $\phi \sqbrk S$ is a [[Definition:Subring|subring]] of $R_2$. +\end{theorem} + +\begin{proof} +A direct application of [[Ring Homomorphism Preserves Subrings]]. +{{qed}} +\end{proof}<|endoftext|> +\section{Preimage of Image of Ideal under Ring Homomorphism} +Tags: Ring Homomorphisms, Ideal Theory + +\begin{theorem} +Let $\phi: \left({R_1, +_1, \circ_1}\right) \to \left({R_2, +_2, \circ_2}\right)$ be a [[Definition:Ring Homomorphism|ring homomorphism]]. +Let $K = \ker \left({\phi}\right)$, where $\ker \left({\phi}\right)$ is the [[Definition:Kernel of Ring Homomorphism|kernel]] of $\phi$. +Let $J$ be an [[Definition:Ideal of Ring|ideal]] of $R_1$. +Then: +:$\phi^{-1} \left({\phi \left({J}\right)}\right) = J + K$ +\end{theorem} + +\begin{proof} +As an [[Ideal is Subring|ideal is a subring]], the result [[Preimage of Image of Subring under Ring Homomorphism]] applies directly. +{{Qed}} +\end{proof}<|endoftext|> +\section{Image of Preimage of Ideal under Ring Epimorphism} +Tags: Ring Epimorphisms, Subrings + +\begin{theorem} +Let $\phi: \struct {R_1, +_1, \circ_1} \to \struct {R_2, +_2, \circ_2}$ be a [[Definition:Ring Epimorphism|ring epimorphism]]. +Let $S_2$ be an [[Definition:Ideal of Ring|ideal]] of $R_2$. +Then: +:$\phi \sqbrk {\phi^{-1} \sqbrk {S_2} } = S_2$ +\end{theorem} + +\begin{proof} +As an [[Ideal is Subring|ideal is a subring]], the result [[Image of Preimage of Subring under Ring Epimorphism]] applies directly. 
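As an informal finite illustration of the statement above (not part of the proof), take the ring epimorphism $\phi: \Z_{12} \to \Z_6$ given by $\map \phi x = x \bmod 6$ and the ideal $S_2 = \set {0, 2, 4}$ of $\Z_6$; the names below are chosen for this sketch only:

```python
# phi: Z/12Z -> Z/6Z, phi(x) = x mod 6, is a ring epimorphism.
R1 = range(12)

def phi(x):
    return x % 6

S2 = {0, 2, 4}  # an ideal of Z/6Z

preimage = {x for x in R1 if phi(x) in S2}      # phi^{-1}[S2]
image_of_preimage = {phi(x) for x in preimage}  # phi[phi^{-1}[S2]]

# The theorem asserts phi[phi^{-1}[S2]] = S2 when phi is surjective.
assert image_of_preimage == S2
```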
+{{Qed}} +\end{proof}<|endoftext|> +\section{Vectors are Right Cancellable} +Tags: Vector Algebra + +\begin{theorem} +Let $\struct {\mathbf V, +, \circ}_{\mathbb F}$ be a [[Definition:Vector Space|vector space]] over $\mathbb F$, as defined by the [[Definition:Vector Space Axioms|vector space axioms]]. +Then every $\mathbf v \in \struct {\mathbf V, +}$ is [[Definition:Right Cancellable Element|right cancellable]]: +:$\forall \mathbf a, \mathbf b, \mathbf c \in \mathbf V: \mathbf a + \mathbf c = \mathbf b + \mathbf c \implies \mathbf a = \mathbf b$ +\end{theorem} + +\begin{proof} +Utilizing the [[Definition:Vector Space Axioms|vector space axioms]]: +{{begin-eqn}} +{{eqn | l = \mathbf a + \mathbf c + | r = \mathbf b + \mathbf c +}} +{{eqn | ll= \leadsto + | l = \paren {\mathbf a + \mathbf c} - \mathbf c + | r = \paren {\mathbf b + \mathbf c} - \mathbf c +}} +{{eqn | ll= \leadsto + | l = \mathbf a + \paren {\mathbf c - \mathbf c} + | r = \mathbf b + \paren {\mathbf c - \mathbf c} +}} +{{eqn | ll= \leadsto + | l = \mathbf a + \mathbf 0 + | r = \mathbf b + \mathbf 0 +}} +{{eqn | ll= \leadsto + | l = \mathbf a + | r = \mathbf b +}} +{{end-eqn}} +{{qed}} +\end{proof}<|endoftext|> +\section{Vectors are Left Cancellable} +Tags: Vector Algebra + +\begin{theorem} +Let $\left({\mathbf V, +, \circ}\right)_{\mathbb F}$ be a [[Definition:Vector Space|vector space]] over $\mathbb F$, as defined by the [[Definition:Vector Space Axioms|vector space axioms]]. 
+Then every $\mathbf v \in \left({\mathbf V, +}\right)$ is [[Definition:Left Cancellable Element|left cancellable]]:
+:$\forall \mathbf a, \mathbf b, \mathbf c \in \mathbf V: \mathbf c + \mathbf a = \mathbf c + \mathbf b \implies \mathbf a = \mathbf b$
+\end{theorem}
+
+\begin{proof}
+Utilizing the [[Definition:Vector Space Axioms|vector space axioms]]:
+{{begin-eqn}}
+{{eqn | l = \mathbf c + \mathbf a
+ | r = \mathbf c + \mathbf b
+}}
+{{eqn | ll = \implies
+ | l = \mathbf a + \mathbf c
+ | r = \mathbf b + \mathbf c
+ | c = as $+$ is [[Definition:Commutative Operation|commutative]]
+}}
+{{eqn | ll = \implies
+ | l = \mathbf a
+ | r = \mathbf b
+ | c = [[Vectors are Right Cancellable]]
+}}
+{{end-eqn}}
+{{qed}}
+[[Category:Vector Algebra]]
+\end{proof}<|endoftext|>
+\section{Zero Vector is Unique}
+Tags: Zero Vectors
+
+\begin{theorem}
+Let $\struct {\mathbf V, +, \circ}_{\mathbb F}$ be a [[Definition:Vector Space|vector space]] over $\mathbb F$, as defined by the [[Definition:Vector Space Axioms|vector space axioms]].
+Then the [[Definition:Zero Vector|zero vector]] in $\mathbf V$ is [[Definition:Unique|unique]]:
+:$\exists! \mathbf 0 \in \mathbf V: \forall \mathbf x \in \mathbf V: \mathbf x + \mathbf 0 = \mathbf x$
+\end{theorem}
+
+\begin{proof}
+=== Proof of Existence ===
+Follows from the [[Definition:Vector Space Axioms|vector space axioms]].
+{{qed|lemma}}
+=== Proof of Uniqueness ===
+Let $\mathbf 0$, $\mathbf 0'$ be [[Definition:Zero Vector|zero vectors]].
+Utilizing the [[Definition:Vector Space Axioms|vector space axioms]]: +{{begin-eqn}} +{{eqn | l = \mathbf 0 + | r = \mathbf 0 + \mathbf 0' +}} +{{eqn | r = \mathbf 0' + \mathbf 0 +}} +{{eqn | r = \mathbf 0' +}} +{{end-eqn}} +{{qed}} +\end{proof}<|endoftext|> +\section{Additive Inverse in Vector Space is Unique} +Tags: Vector Algebra, Linear Algebra, Vector Spaces + +\begin{theorem} +Let $\struct {\mathbf V, +, \circ}_F$ be a [[Definition:Vector Space|vector space]] over a [[Definition:Field (Abstract Algebra)|field]] $F$, as defined by the [[Definition:Vector Space Axioms|vector space axioms]]. +Then for every $\mathbf v \in \mathbf V$, the [[Definition:Inverse Element|additive inverse]] of $\mathbf v$ is [[Definition:Unique|unique]]: +:$\forall \mathbf v \in \mathbf V: \exists! \paren {-\mathbf v} \in \mathbf V: \mathbf v + \paren {-\mathbf v} = \mathbf 0$ +\end{theorem} + +\begin{proof} +=== Proof of Existence === +Follows from the [[Definition:Vector Space Axioms|vector space axioms]]. +{{qed|lemma}} +=== Proof of Uniqueness === +Let $\mathbf v$ have inverses $\mathbf x$ and $\mathbf y$. +Then: +{{begin-eqn}} +{{eqn | l = \mathbf v + \mathbf x + | r = \mathbf 0 +}} +{{eqn | lo= \land + | l = \mathbf v + \mathbf y + | r = \mathbf 0 +}} +{{eqn | ll= \leadsto + | l = \mathbf v + \mathbf x + | r = \mathbf v + \mathbf y +}} +{{eqn | ll= \leadsto + | l = \mathbf x + | r = \mathbf y + | c = [[Vectors are Left Cancellable]] +}} +{{end-eqn}} +{{qed}} +\end{proof}<|endoftext|> +\section{Vector Inverse is Negative Vector} +Tags: Vector Algebra + +\begin{theorem} +Let $F$ be a [[Definition:Field (Abstract Algebra)|field]] whose [[Definition:Field Zero|zero]] is $0_F$ and whose [[Definition:Unity of Field|unity]] is $1_F$. +Let $\struct {\mathbf V, +, \circ}_F$ be a [[Definition:Vector Space|vector space]] over $F$, as defined by the [[Definition:Vector Space Axioms|vector space axioms]]. 
+Then: +:$\forall \mathbf v \in \mathbf V: -\mathbf v = -1_F \circ \mathbf v$ +\end{theorem} + +\begin{proof} +{{begin-eqn}} +{{eqn | l = \mathbf v + \paren {-1_F \circ \mathbf v} + | r = \paren {1_F \circ \mathbf v} + \paren {-1_F \circ \mathbf v} + | c = {{Field-axiom|M3}} +}} +{{eqn | r = \paren {1_F + \paren {- 1_F} } \circ \mathbf v + | c = {{Vector-space-axiom|5}} +}} +{{eqn | r = 0_F \circ \mathbf v + | c = {{Field-axiom|A4}} +}} +{{eqn | r = \mathbf 0 + | c = [[Vector Scaled by Zero is Zero Vector]] +}} +{{end-eqn}} +so $-1_F \circ \mathbf v$ is an [[Definition:Inverse Element|additive inverse]] of $\mathbf v$. +From [[Additive Inverse in Vector Space is Unique]]: +:$-1_F \circ \mathbf v = -\mathbf v$ +{{qed}} +\end{proof}<|endoftext|> +\section{Non-Trivial Commutative Division Ring is Field} +Tags: Field Theory, Division Rings + +\begin{theorem} +Let $\struct {R, +, \circ}$ be a [[Definition:Non-Trivial Ring|non-trivial]] [[Definition:Division Ring|division ring]] such that $\circ$ is [[Definition:Commutative Operation|commutative]]. +Then $\struct {R, +, \circ}$ is a [[Definition:Field (Abstract Algebra)|field]]. +Similarly, let $\struct {F, +, \circ}$ be a [[Definition:Field (Abstract Algebra)|field]]. +Then $\struct {F, +, \circ}$ is a [[Definition:Non-Trivial Ring|non-trivial]] [[Definition:Division Ring|division ring]] such that $\circ$ is [[Definition:Commutative Operation|commutative]]. +\end{theorem} + +\begin{proof} +Suppose $\struct {R, +, \circ}$ is a [[Definition:Non-Trivial Ring|non-trivial]] [[Definition:Division Ring|division ring]] such that $\circ$ is [[Definition:Commutative Operation|commutative]]. +By definition $\struct {R, +}$ is an [[Definition:Abelian Group|abelian group]]. +Thus [[Definition:Field Axioms|field axioms]] $(\text A 0)$ to $(\text A 4)$ are satisfied. +Also by definition, $\struct {R, \circ}$ is a [[Definition:Semigroup|semigroup]] such that $\circ$ is [[Definition:Commutative Operation|commutative]]. 
+Thus [[Definition:Field Axioms|field axioms]] $(\text M 0)$ to $(\text M 2)$ are satisfied.
+As $\struct {R, +, \circ}$ is a [[Definition:Ring with Unity|ring with unity]], $(\text M 3)$ is satisfied.
+[[Definition:Field Axioms|Field axiom]] $(\text D)$ is satisfied by dint of $\struct {R, +, \circ}$ being a [[Definition:Ring (Abstract Algebra)|ring]].
+Finally note that by definition of [[Definition:Division Ring|division ring]], [[Definition:Field Axioms|field axiom]] $(\text M 4)$ is satisfied.
+Thus all the [[Definition:Field Axioms|field axioms]] are satisfied, and so $\struct {R, +, \circ}$ is a [[Definition:Field (Abstract Algebra)|field]].
+{{qed|lemma}}
+Suppose $\struct {F, +, \circ}$ is a [[Definition:Field (Abstract Algebra)|field]].
+Then by definition $\struct {F, +, \circ}$ is a [[Definition:Non-Trivial Ring|non-trivial]] [[Definition:Division Ring|division ring]] such that $\circ$ is [[Definition:Commutative Operation|commutative]].
+{{qed}}
+\end{proof}<|endoftext|>
+\section{Division Ring has No Proper Zero Divisors}
+Tags: Division Rings
+
+\begin{theorem}
+Let $\left({R, +, \circ}\right)$ be a [[Definition:Division Ring|division ring]].
+Then $\left({R, +, \circ}\right)$ has no [[Definition:Proper Zero Divisor|proper zero divisors]].
+\end{theorem}
+
+\begin{proof}
+Let $\left({R, +, \circ}\right)$ be a [[Definition:Division Ring|division ring]] whose [[Definition:Ring Zero|zero]] is $0_R$ and whose [[Definition:Unity of Ring|unity]] is $1_R$.
+By definition of [[Definition:Division Ring|division ring]], every element $x$ of $R^* = R \setminus \left\{{0_R}\right\}$ has an [[Definition:Inverse Element|inverse element]] $y$ such that:
+:$y \circ x = x \circ y = 1_R$
+That is, by definition, every element of $R^*$ is a [[Definition:Unit of Ring|unit]] of $R$.
+The result follows from [[Unit Not Zero Divisor]].
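As an informal finite check of the statement above (not part of the ProofWiki proof), the field $\Z_5$ stands in for a general division ring: every nonzero element is a unit, and no product of nonzero elements is zero.

```python
# In the field Z/5Z every nonzero element is a unit, and there
# are no proper zero divisors: x * y = 0 forces x = 0 or y = 0.
P = 5  # a prime, so Z/PZ is a field
nonzero = list(range(1, P))

units = {x for x in nonzero if any((x * y) % P == 1 for y in nonzero)}
assert units == set(nonzero)  # every nonzero element is a unit

proper_zero_divisors = [
    (x, y) for x in nonzero for y in nonzero if (x * y) % P == 0
]
assert proper_zero_divisors == []  # no proper zero divisors
```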
+{{qed}}
+\end{proof}<|endoftext|>
+\section{Maximal Element need not be Greatest Element}
+Tags: Order Theory
+
+\begin{theorem}
+Let $\struct {S, \preccurlyeq}$ be an [[Definition:Ordered Set|ordered set]].
+Let $M \in S$ be a [[Definition:Maximal Element|maximal element]] of $S$.
+Then $M$ is not necessarily the [[Definition:Greatest Element|greatest element]] of $S$.
+\end{theorem}
+
+\begin{proof}
+[[Proof by Counterexample]]:
+Let $S = \set {a, b, c}$.
+Let $\preccurlyeq$ be defined as:
+:$x \preccurlyeq y \iff \tuple {x, y} \in \set {\tuple {a, a}, \tuple {b, b}, \tuple {c, c}, \tuple {a, b}, \tuple {a, c} }$
+A straightforward but laborious process determines that $\preccurlyeq$ is a [[Definition:Partial Ordering|partial ordering]] on $S$.
+We have that:
+:$c \preccurlyeq x \implies c = x$
+and:
+:$b \preccurlyeq x \implies b = x$
+and so by definition, both $b$ and $c$ are [[Definition:Maximal Element|maximal elements]] of $S$.
+Suppose $b$ is the [[Definition:Greatest Element|greatest element]] of $S$.
+Then from [[Greatest Element is Unique]] it follows that $c$ cannot be the [[Definition:Greatest Element|greatest element]] of $S$.
+Hence the result.
+In fact, from the definition of the [[Definition:Greatest Element|greatest element]] of $S$:
+:$x$ is the [[Definition:Greatest Element|greatest element]] of $S$ {{iff}} $\forall y \in S: y \preccurlyeq x$
+it can be seen directly that neither $b$ nor $c$ is the [[Definition:Greatest Element|greatest element]] of $S$.
+{{qed}}
+\end{proof}<|endoftext|>
+\section{Maximal Ideal of Division Ring}
+Tags: Ideal Theory, Division Rings
+
+\begin{theorem}
+Let $\left({D, +, \circ}\right)$ be a [[Definition:Division Ring|division ring]] whose [[Definition:Ring Zero|zero]] is $0$.
+Let $\left({J, +, \circ}\right)$ be a [[Definition:Maximal Ideal of Ring|maximal ideal]] of $D$.
+Then $J = \left\{{0}\right\}$.
+\end{theorem} + +\begin{proof} +From [[Ideals of Division Ring]], the only [[Definition:Ideal of Ring|ideals]] of a [[Definition:Division Ring|Division Ring]] $\left({D, +, \circ}\right)$ are $\left({D, +, \circ}\right)$ and $\left({\left\{{0}\right\}, +, \circ}\right)$. +Hence the result by definition of [[Definition:Maximal Ideal of Ring|maximal ideal]]. +{{qed}} +\end{proof}<|endoftext|> +\section{Linear Transformation is Injective iff Kernel Contains Only Zero} +Tags: Linear Transformations + +\begin{theorem} +Let $\mathbf V, \mathbf V'$ be [[Definition:Vector Space|vector spaces]], with respective [[Definition:Zero Vector|zeroes]] $\mathbf 0, \mathbf 0'$. +Let $T: \mathbf V \to \mathbf V'$ be a [[Definition:Linear Transformation on Vector Space|linear transformation]]. +Then: +:$T$ is [[Definition:Injection|injective]] {{iff}} $\map \ker T = \set {\mathbf 0}$ +where: +:$\mathbf 0$ is the [[Definition:Zero Vector|zero]] of the [[Definition:Domain of Mapping|domain]] of $T$ +:$\map \ker T$ is the [[Definition:Kernel of Linear Transformation|kernel of $T$]]. +\end{theorem} + +\begin{proof} +=== Sufficient Condition === +That $\mathbf 0 \in \map \ker T$ follows from [[Kernel of Linear Transformation contains Zero Vector]]. +That $\map \ker T$ is a [[Definition:Singleton|singleton]] follows from the definition of [[Definition:Injection|injection]]. +{{qed|lemma}} +=== Necessary Condition === +Let $\map \ker T = \set {\mathbf 0}$. +Consider: +:$\map T {\mathbf x} = \mathbf b$ +where $\mathbf b$ is in the [[Definition:Codomain of Mapping|codomain]] of $T$. +Let this equation have a solution: +:$\mathbf x = \mathbf x_1 \in \mathbf V$ +Suppose $\mathbf x = \mathbf x_2 \in \mathbf V$ is also a solution. 
+Clearly:
+:$\map T {\mathbf x_1} = \map T {\mathbf x_2}$
+Observe that:
+{{begin-eqn}}
+{{eqn | l = \map T {\mathbf x_1}
+ | r = \mathbf b
+}}
+{{eqn | ll= \land
+ | l = \map T {\mathbf x_2}
+ | r = \mathbf b
+}}
+{{eqn | ll= \leadsto
+ | l = \map T {\mathbf x_1} - \map T {\mathbf x_2}
+ | r = \mathbf b - \mathbf b
+}}
+{{eqn | r = \mathbf 0'
+}}
+{{eqn | ll= \leadsto
+ | l = \map T {\mathbf x_1 - \mathbf x_2}
+ | r = \mathbf 0'
+ | c = {{Defof|Linear Transformation on Vector Space}}
+}}
+{{eqn | ll= \leadsto
+ | l = \paren {\mathbf x_1 - \mathbf x_2}
+ | o = \in
+ | r = \map \ker T
+ | c = {{Defof|Kernel of Linear Transformation}}
+}}
+{{eqn | ll= \leadsto
+ | l = \mathbf x_1 - \mathbf x_2
+ | r = \mathbf 0
+ | c = {{Defof|Set Equality}}: recall $\map \ker T = \set {\mathbf 0}$
+}}
+{{eqn | ll= \leadsto
+ | l = \mathbf x_1
+ | r = \mathbf x_2
+}}
+{{end-eqn}}
+As $\mathbf x_1, \mathbf x_2$ were arbitrary:
+:$\forall \mathbf x_1, \mathbf x_2 \in \mathbf V: \map T {\mathbf x_1} = \map T {\mathbf x_2} \implies \mathbf x_1 = \mathbf x_2$
+and the result follows from the definition of [[Definition:Injection|injectivity]].
+{{qed}}
+[[Category:Linear Transformations]]
+\end{proof}<|endoftext|>
+\section{Diagonal Relation on Ring is Ordering Compatible with Ring Structure}
+Tags: Ring Theory, Ordered Rings
+
+\begin{theorem}
+Let $\struct {R, +, \circ}$ be a [[Definition:Ring (Abstract Algebra)|ring]] whose [[Definition:Ring Zero|zero]] is $0_R$.
+Then the [[Definition:Diagonal Relation|diagonal relation]] $\Delta_R$ on $R$ is an [[Definition:Ordering Compatible with Ring Structure|ordering compatible with the ring structure]] of $R$.
+\end{theorem}
+
+\begin{proof}
+From [[Diagonal Relation is Ordering and Equivalence]], we have that $\Delta_R$ is actually an [[Definition:Ordering|ordering]] on $R$.
+From the definition of the [[Definition:Diagonal Relation|diagonal relation]]: +:$\tuple {x, y} \in \Delta_R \iff x = y$ +Thus: +{{begin-eqn}} +{{eqn | l = \tuple {x, y} + | o = \in + | r = \Delta_R + | c = +}} +{{eqn | ll= \leadsto + | l = x + | r = y + | c = +}} +{{eqn | ll= \leadsto + | l = x + z + | r = y + z + | c = +}} +{{eqn | ll= \leadsto + | l = \tuple {x + z, y + z} + | o = \in + | r = \Delta_R + | c = +}} +{{end-eqn}} +Similarly: +{{begin-eqn}} +{{eqn | l = \tuple {x, y} + | o = \in + | r = \Delta_R + | c = +}} +{{eqn | ll= \leadsto + | l = x + | r = y + | c = +}} +{{eqn | ll= \leadsto + | l = z + x + | r = z + y + | c = +}} +{{eqn | ll= \leadsto + | l = \tuple {z + x, z + y} + | o = \in + | r = \Delta_R + | c = +}} +{{end-eqn}} +So $\Delta_R$ is [[Definition:Relation Compatible with Operation|compatible]] with $+$. +Then note that: +{{begin-eqn}} +{{eqn | l = \tuple {0_R, x} + | o = \in + | r = \Delta_R + | c = +}} +{{eqn | lo= \land + | l = \tuple {0_R, y} + | o = \in + | r = \Delta_R + | c = +}} +{{eqn | ll= \leadsto + | l = 0_R + | r = x + | c = +}} +{{eqn | lo= \land + | l = 0_R + | r = y + | c = +}} +{{eqn | ll= \leadsto + | l = 0_R + | r = x \circ y + | c = +}} +{{eqn | ll= \leadsto + | l = \tuple {0_R, x \circ y} + | o = \in + | r = \Delta_R + | c = +}} +{{end-eqn}} +Hence the result, from the definition of an [[Definition:Ordering Compatible with Ring Structure|ordering compatible with the ring structure]] of $R$. +{{qed}} +\end{proof}<|endoftext|> +\section{Nagata-Smirnov Metrization Theorem} +Tags: Regular Spaces, Metrizable Topologies + +\begin{theorem} +Let $T = \left({S, \tau}\right)$ be a [[Definition:Topological Space|topological space]]. +Then $T$ is [[Definition:Metrizable|metrizable]] {{iff}} $T$ is [[Definition:Regular Space|regular]] and has a [[Definition:Basis (Topology)|basis]] that is [[Definition:Countably Locally Finite|countably locally finite]]. 
+\end{theorem} + +\begin{proof} +{{proof wanted}} +{{namedfor|Jun-iti Nagata|name2 = Yurii Mikhailovich Smirnov|cat = Nagata J|cat2 = Smirnov Y M}} +\end{proof}<|endoftext|> +\section{Smirnov Metrization Theorem} +Tags: Metrizable Topologies + +\begin{theorem} +Let $T = \left({S, \tau}\right)$ be a [[Definition:Topological Space|topological space]]. +Then $T$ is [[Definition:Metrizable Space|metrizable]] {{iff}} it is [[Definition:Paracompact Space|paracompact]] and [[Definition:Locally Metrizable Space|locally metrizable]]. +\end{theorem} + +\begin{proof} +{{proof wanted}} +{{Namedfor|Yurii Mikhailovich Smirnov|cat = Smirnov Y M}} +\end{proof}<|endoftext|> +\section{Identity Mapping is Automorphism/Semigroups} +Tags: Semigroup Automorphisms, Identity Mappings + +\begin{theorem} +Let $\left({S, \circ}\right)$ be a [[Definition:Semigroup|semigroup]]. +Then $I_S: \left({S, \circ}\right) \to \left({S, \circ}\right)$ is a [[Definition:Semigroup Automorphism|semigroup automorphism]]. +\end{theorem} + +\begin{proof} +The main result [[Identity Mapping is Automorphism]] holds directly. +{{qed}} +[[Category:Semigroup Automorphisms]] +[[Category:Identity Mappings]] +ndb9aunvdsojqh52tr61ybp4zijnvjv +\end{proof}<|endoftext|> +\section{Identity Mapping is Ordered Ring Automorphism} +Tags: Ordered Rings, Identity Mappings + +\begin{theorem} +Let $\struct {S, +, \circ, \preceq}$ be an [[Definition:Ordered Ring|ordered ring]]. +Then the [[Definition:Identity Mapping|identity mapping]] $I_S: S \to S$ is an [[Definition:Ordered Ring Automorphism|ordered ring automorphism]]. 
+\end{theorem} + +\begin{proof} +We have that: +:an [[Identity Mapping is Order Isomorphism|identity mapping is an order isomorphism]] +:an [[Identity Mapping is Group Automorphism|identity mapping is a group automorphism]] +:an [[Identity Mapping is Semigroup Automorphism|identity mapping is a semigroup automorphism]] +Hence the result by definition of [[Definition:Ordered Ring Automorphism|ordered ring automorphism]]. +{{qed}} +\end{proof}<|endoftext|> +\section{Inverse of Reflexive Relation is Reflexive} +Tags: Reflexive Relations, Inverse Relations + +\begin{theorem} +Let $\mathcal R$ be a [[Definition:Relation|relation]] on a set $S$. +If $\mathcal R$ is [[Definition:Reflexive Relation|reflexive]], then so is $\mathcal R^{-1}$. +\end{theorem} + +\begin{proof} +{{begin-eqn}} +{{eqn | l = x + | o = \in + | r = S +}} +{{eqn | ll= \implies + | l = \left({x, x}\right) + | o = \in + | r = \mathcal R + | c = {{Defof|Reflexive Relation}} +}} +{{eqn | ll= \implies + | l = \left({x, x}\right) + | o = \in + | r = \mathcal R^{-1} + | c = {{Defof|Inverse Relation}} +}} +{{end-eqn}} +Hence the result by definition of [[Definition:Reflexive Relation|reflexive relation]]. +{{qed}} +\end{proof}<|endoftext|> +\section{Inverse of Antireflexive Relation is Antireflexive} +Tags: Reflexive Relations, Inverse Relations + +\begin{theorem} +Let $\mathcal R$ be a [[Definition:Relation|relation]] on a set $S$. +If $\mathcal R$ is [[Definition:Antireflexive Relation|antireflexive]], then so is $\mathcal R^{-1}$. 
+\end{theorem} + +\begin{proof} +{{begin-eqn}} +{{eqn | l=x + | o=\in + | r=S + | c= +}} +{{eqn | ll=\implies + | l=\left({x, x}\right) + | o=\notin + | r=\mathcal R + | c=by definition of [[Definition:Antireflexive Relation|antireflexive relation]] +}} +{{eqn | ll=\implies + | l=\left({x, x}\right) + | o=\notin + | r=\mathcal R^{-1} + | c=by definition of [[Definition:Inverse Relation|inverse relation]] +}} +{{end-eqn}} +Hence the result by definition of [[Definition:Antireflexive Relation|antireflexive relation]]. +{{qed}} +\end{proof}<|endoftext|> +\section{Inverse of Non-Reflexive Relation is Non-Reflexive} +Tags: Reflexive Relations, Inverse Relations + +\begin{theorem} +Let $\mathcal R$ be a [[Definition:Relation|relation]] on a set $S$. +If $\mathcal R$ is [[Definition:Non-Reflexive Relation|non-reflexive]], then so is $\mathcal R^{-1}$. +\end{theorem} + +\begin{proof} +Let $\mathcal R$ be [[Definition:Non-Reflexive Relation|non-reflexive]]. +Then: +{{begin-eqn}} +{{eqn | l=\exists x \in S: \left({x, x}\right) + | o=\in + | r=\mathcal R + | c=as $\mathcal R$ is not [[Definition:Antireflexive Relation|antireflexive]] +}} +{{eqn | ll=\implies + | l=\exists x \in S: \left({x, x}\right) + | o=\in + | r=\mathcal R^{-1} + | c=by definition of [[Definition:Inverse Relation|inverse relation]] +}} +{{end-eqn}} +Thus $\mathcal R^{-1}$ is not [[Definition:Antireflexive Relation|antireflexive]]. +Also: +{{begin-eqn}} +{{eqn | l=\exists x \in S: \left({x, x}\right) + | o=\notin + | r=\mathcal R + | c=as $\mathcal R$ is not [[Definition:Reflexive Relation|reflexive]] +}} +{{eqn | ll=\implies + | l=\exists x \in S: \left({x, x}\right) + | o=\notin + | r=\mathcal R^{-1} + | c=by definition of [[Definition:Inverse Relation|inverse relation]] +}} +{{end-eqn}} +Thus $\mathcal R^{-1}$ is not [[Definition:Reflexive Relation|reflexive]]. +Hence the result, by definition of [[Definition:Non-Reflexive Relation|non-reflexive relation]]. 
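As an informal check of the argument above (not part of the ProofWiki proof), the following sketch tests a small concrete non-reflexive relation and its inverse:

```python
# R on S = {1, 2, 3} contains (1, 1) but not (2, 2), so it is
# neither reflexive nor antireflexive: non-reflexive.
S = {1, 2, 3}
R = {(1, 1), (1, 2), (2, 3)}

def inverse(rel):
    # Inverse relation: swap the components of each ordered pair.
    return {(y, x) for (x, y) in rel}

def is_reflexive(rel, dom):
    return all((x, x) in rel for x in dom)

def is_antireflexive(rel, dom):
    return all((x, x) not in rel for x in dom)

def is_non_reflexive(rel, dom):
    return not is_reflexive(rel, dom) and not is_antireflexive(rel, dom)

assert is_non_reflexive(R, S)
assert is_non_reflexive(inverse(R), S)
```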
+{{qed}}
+\end{proof}<|endoftext|>
+\section{Inverse of Symmetric Relation is Symmetric}
+Tags: Symmetric Relations, Inverse Relations
+
+\begin{theorem}
+Let $\mathcal R$ be a [[Definition:Relation|relation]] on a set $S$.
+If $\mathcal R$ is [[Definition:Symmetric Relation|symmetric]], then so is $\mathcal R^{-1}$.
+\end{theorem}
+
+\begin{proof}
+Let $\mathcal R$ be [[Definition:Symmetric Relation|symmetric]].
+Then from [[Relation equals Inverse iff Symmetric]] it follows that $\mathcal R^{-1}$ is also symmetric.
+{{qed}}
+\end{proof}<|endoftext|>
+\section{Inverse of Asymmetric Relation is Asymmetric}
+Tags: Symmetric Relations, Inverse Relations
+
+\begin{theorem}
+Let $\mathcal R$ be a [[Definition:Relation|relation]] on a set $S$.
+If $\mathcal R$ is [[Definition:Asymmetric Relation|asymmetric]], then so is $\mathcal R^{-1}$.
+\end{theorem}
+
+\begin{proof}
+Let $\mathcal R$ be [[Definition:Asymmetric Relation|asymmetric]].
+Then:
+: $\left({x, y}\right) \in \mathcal R \implies \left({y, x}\right) \notin \mathcal R$
+Thus for any $\left({x, y}\right) \in \mathcal R$:
+: $\left({y, x}\right) \in \mathcal R^{-1}$ and $\left({x, y}\right) \notin \mathcal R^{-1}$
+Thus it follows that $\mathcal R^{-1}$ is also [[Definition:Asymmetric Relation|asymmetric]].
+{{qed}}
+\end{proof}<|endoftext|>
+\section{Inverse of Antisymmetric Relation is Antisymmetric}
+Tags: Symmetric Relations, Inverse Relations
+
+\begin{theorem}
+Let $\RR$ be a [[Definition:Relation|relation]] on a set $S$.
+If $\RR$ is [[Definition:Antisymmetric Relation|antisymmetric]], then so is $\RR^{-1}$.
+\end{theorem}
+
+\begin{proof}
+Let $\RR$ be [[Definition:Antisymmetric Relation|antisymmetric]].
+Then:
+:$\tuple {x, y} \in \RR \land \tuple {y, x} \in \RR \implies x = y$
+It follows that:
+:$\tuple {y, x} \in \RR^{-1} \land \tuple {x, y} \in \RR^{-1} \implies x = y$
+Thus it follows that $\RR^{-1}$ is also [[Definition:Antisymmetric Relation|antisymmetric]].
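As with the reflexivity results, these closure properties under inversion can be confirmed by exhaustive checking on small examples. A minimal Python sketch (illustrative names, not part of the proof):

```python
# Brute-force check that inversion preserves symmetry, asymmetry
# and antisymmetry of a relation given as a set of ordered pairs.

def inverse(R):
    return {(y, x) for (x, y) in R}

def is_symmetric(R):
    return all((y, x) in R for (x, y) in R)

def is_asymmetric(R):
    return all((y, x) not in R for (x, y) in R)

def is_antisymmetric(R):
    # Whenever both (x, y) and (y, x) are present, x must equal y.
    return all(x == y for (x, y) in R if (y, x) in R)

symmetric = {(0, 1), (1, 0), (2, 2)}
asymmetric = {(0, 1), (1, 2)}
antisymmetric = {(0, 1), (1, 2), (0, 0)}

assert is_symmetric(inverse(symmetric))
assert is_asymmetric(inverse(asymmetric))
assert is_antisymmetric(inverse(antisymmetric))
```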
+{{qed}}
+\end{proof}<|endoftext|>
+\section{Inverse of Non-Symmetric Relation is Non-Symmetric}
+Tags: Symmetric Relations, Inverse Relations
+
+\begin{theorem}
+Let $\mathcal R$ be a [[Definition:Relation|relation]] on a set $S$.
+If $\mathcal R$ is [[Definition:Non-Symmetric Relation|non-symmetric]], then so is $\mathcal R^{-1}$.
+\end{theorem}
+
+\begin{proof}
+Let $\mathcal R$ be [[Definition:Non-Symmetric Relation|non-symmetric]].
+Then:
+: $\exists \left({x_1, y_1}\right) \in \mathcal R: \left({y_1, x_1}\right) \in \mathcal R$
+and also:
+: $\exists \left({x_2, y_2}\right) \in \mathcal R: \left({y_2, x_2}\right) \notin \mathcal R$
+Thus:
+: $\exists \left({y_1, x_1}\right) \in \mathcal R^{-1}: \left({x_1, y_1}\right) \in \mathcal R^{-1}$
+and also:
+: $\exists \left({y_2, x_2}\right) \in \mathcal R^{-1}: \left({x_2, y_2}\right) \notin \mathcal R^{-1}$
+and so $\mathcal R^{-1}$ is [[Definition:Non-Symmetric Relation|non-symmetric]].
+{{qed}}
+\end{proof}<|endoftext|>
+\section{Inverse of Transitive Relation is Transitive}
+Tags: Transitive Relations, Inverse Relations, Inverse of Transitive Relation is Transitive
+
+\begin{theorem}
+Let $\mathcal R$ be a [[Definition:Relation|relation]] on a [[Definition:Set|set]] $S$.
+Let $\mathcal R$ be [[Definition:Transitive Relation|transitive]].
+Then its [[Definition:Inverse Relation|inverse]] $\mathcal R^{-1}$ is also [[Definition:Transitive Relation|transitive]].
+\end{theorem}
+
+\begin{proof}
+Let $\RR$ be [[Definition:Transitive Relation|transitive]].
+Then:
+:$\tuple {x, y}, \tuple {y, z} \in \RR \implies \tuple {x, z} \in \RR$
+Thus:
+:$\tuple {y, x}, \tuple {z, y} \in \RR^{-1} \implies \tuple {z, x} \in \RR^{-1}$
+and so $\RR^{-1}$ is [[Definition:Transitive Relation|transitive]].
+{{qed}}
+\end{proof}
+
+\begin{proof}
+Let $\RR$ be [[Definition:Transitive Relation|transitive]].
+Thus by definition:
+:$\RR \circ \RR \subseteq \RR$
+Thus:
+{{begin-eqn}}
+{{eqn | l = \RR^{-1} \circ \RR^{-1}
+ | r = \paren {\RR \circ \RR}^{-1}
+ | c = [[Inverse of Composite Relation]]
+}}
+{{eqn | o = \subseteq
+ | r = \RR^{-1}
+ | c = [[Inverse of Subset of Relation is Subset of Inverse]]
+}}
+{{end-eqn}}
+Hence the result by definition of [[Definition:Transitive Relation|transitive relation]].
+{{qed}}
+\end{proof}<|endoftext|>
+\section{Inverse of Antitransitive Relation is Antitransitive}
+Tags: Transitive Relations, Inverse Relations
+
+\begin{theorem}
+Let $\RR$ be a [[Definition:Relation|relation]] on a set $S$.
+If $\RR$ is [[Definition:Antitransitive Relation|antitransitive]], then so is $\RR^{-1}$.
+\end{theorem}
+
+\begin{proof}
+Let $\RR$ be [[Definition:Antitransitive Relation|antitransitive]].
+Then:
+:$\tuple {x, y}, \tuple {y, z} \in \RR \implies \tuple {x, z} \notin \RR$
+Thus:
+:$\tuple {y, x}, \tuple {z, y} \in \RR^{-1} \implies \tuple {z, x} \notin \RR^{-1}$
+and so $\RR^{-1}$ is [[Definition:Antitransitive Relation|antitransitive]].
+{{qed}}
+\end{proof}<|endoftext|>
+\section{Inverse of Non-Transitive Relation is Non-Transitive}
+Tags: Transitive Relations, Inverse Relations
+
+\begin{theorem}
+Let $\mathcal R$ be a [[Definition:Relation|relation]] on a set $S$.
+If $\mathcal R$ is [[Definition:Non-Transitive Relation|non-transitive]], then so is $\mathcal R^{-1}$.
+\end{theorem}
+
+\begin{proof}
+Let $\mathcal R$ be [[Definition:Non-Transitive Relation|non-transitive]].
+Then: +: $\exists x_1, y_1, z_1 \in S: \left({x_1, y_1}\right), \left({y_1, z_1}\right) \in \mathcal R, \left({x_1, z_1}\right) \in \mathcal R$ +: $\exists x_2, y_2, z_2 \in S: \left({x_2, y_2}\right), \left({y_2, z_2}\right) \in \mathcal R, \left({x_2, z_2}\right) \notin \mathcal R$ +So: +: $\exists x_1, y_1, z_1 \in S: \left({y_1, x_1}\right), \left({z_1, y_1}\right) \in \mathcal R^{-1}, \left({z_1, x_1}\right) \in \mathcal R^{-1}$ +: $\exists x_2, y_2, z_2 \in S: \left({y_2, x_2}\right), \left({z_2, y_2}\right) \in \mathcal R^{-1}, \left({z_2, x_2}\right) \notin \mathcal R^{-1}$ +So $\mathcal R^{-1}$ is [[Definition:Non-Transitive Relation|non-transitive]]. +{{qed}} +\end{proof}<|endoftext|> +\section{Inverse of Ordered Ring Isomorphism is Ordered Ring Isomorphism} +Tags: Ordered Rings + +\begin{theorem} +Let $\left({S, +, \circ, \preceq}\right)$ and $\left({T, \oplus, *, \preccurlyeq}\right)$ be [[Definition:Ordered Ring|ordered rings]]. +Let $\phi: S \to T$ be an [[Definition:Ordered Ring Isomorphism|ordered ring isomorphism]]. +Then $\phi^{-1}: T \to S$ is also an [[Definition:Ordered Ring Isomorphism|ordered ring isomorphism]]. +\end{theorem} + +\begin{proof} +By definition, $\phi$ is a [[Definition:Bijection|bijection]]. +By [[Bijection iff Inverse is Bijection]], $\phi^{-1}$ is also a [[Definition:Bijection|bijection]]. 
+By definition, an [[Definition:Ordered Ring Isomorphism|ordered ring isomorphism]] $\phi: \left({S, +, \circ, \preceq}\right) \to \left({T, \oplus, *, \preccurlyeq}\right)$ is:
+:an [[Definition:Order Isomorphism|order isomorphism]] from the [[Definition:Ordered Set|ordered set]] $\left({S, \preceq}\right)$ to the [[Definition:Ordered Set|ordered set]] $\left({T, \preccurlyeq}\right)$
+:a [[Definition:Group Isomorphism|group isomorphism]] from the [[Definition:Group|group]] $\left({S, +}\right)$ to the [[Definition:Group|group]] $\left({T, \oplus}\right)$
+:a [[Definition:Semigroup Isomorphism|semigroup isomorphism]] from the [[Definition:Semigroup|semigroup]] $\left({S, \circ}\right)$ to the [[Definition:Semigroup|semigroup]] $\left({T, *}\right)$.
+From [[Inverse of Order Isomorphism is Order Isomorphism]], $\phi^{-1}: \left({T, \preccurlyeq}\right) \to \left({S, \preceq}\right)$ is an [[Definition:Order Isomorphism|order isomorphism]].
+From [[Inverse of Algebraic Structure Isomorphism is Isomorphism]]:
+:$\phi^{-1}: \left({T, \oplus}\right) \to \left({S, +}\right)$ is a [[Definition:Group Isomorphism|group isomorphism]]
+:$\phi^{-1}: \left({T, *}\right) \to \left({S, \circ}\right)$ is a [[Definition:Semigroup Isomorphism|semigroup isomorphism]].
+From [[Isomorphism of Abelian Groups]], $\phi^{-1}: \left({T, \oplus}\right) \to \left({S, +}\right)$ preserves the [[Definition:Commutative Algebraic Structure|commutativity]] of $S$ and $T$.
+Hence the result.
+{{qed}}
+\end{proof}<|endoftext|>
+\section{Feit-Thompson Theorem}
+Tags: Group Theory, Solvable Groups
+
+\begin{theorem}
+All [[Definition:Finite Group|finite groups]] of [[Definition:Odd Integer|odd]] [[Definition:Order of Group|order]] are [[Definition:Solvable Group|solvable]].
+That is, every non-[[Definition:Abelian Group|abelian]] [[Definition:Finite Group|finite]] [[Definition:Simple Group|simple group]] is of [[Definition:Even Integer|even]] [[Definition:Order of Group|order]].
+\end{theorem}
+
+\begin{proof}
+{{ProofWanted}}
+{{Namedfor|Walter Feit|name2 = John Griggs Thompson|cat = Feit|cat2 = Thompson}}
+\end{proof}<|endoftext|>
+\section{Orthogonal Group is Subgroup of General Linear Group}
+Tags: Orthogonal Groups, General Linear Group
+
+\begin{theorem}
+Let $k$ be a [[Definition:Field (Abstract Algebra)|field]].
+Let $\operatorname O \left({n, k}\right)$ be the $n$th [[Definition:Orthogonal Group|orthogonal group]] on $k$.
+Let $\operatorname{GL} \left({n, k}\right)$ be the $n$th [[Definition:General Linear Group|general linear group]] on $k$.
+Then $\operatorname O \left({n, k}\right)$ is a [[Definition:Subgroup|subgroup]] of $\operatorname{GL} \left({n, k}\right)$.
+\end{theorem}
+
+\begin{proof}
+From [[Unit Matrix is Orthogonal]], the [[Definition:Unit Matrix|unit matrix $\mathbf I_n$]] is [[Definition:Orthogonal Matrix|orthogonal]].
+Let $\mathbf A, \mathbf B \in \operatorname O \left({n, k}\right)$.
+Then, by definition, $\mathbf A$ and $\mathbf B$ are [[Definition:Orthogonal Matrix|orthogonal]].
+Then by [[Inverse of Orthogonal Matrix is Orthogonal]]:
+:$\mathbf B^{-1}$ is an [[Definition:Orthogonal Matrix|orthogonal matrix]].
+By [[Product of Orthogonal Matrices is Orthogonal Matrix]]:
+:$\mathbf A \mathbf B^{-1}$ is an [[Definition:Orthogonal Matrix|orthogonal matrix]].
+Thus by definition of [[Definition:Orthogonal Group|orthogonal group]]:
+:$\mathbf A \mathbf B^{-1} \in \operatorname O \left({n, k}\right)$
+Hence the result by [[One-Step Subgroup Test]].
+{{qed}}
+[[Category:Orthogonal Groups]]
+[[Category:General Linear Group]]
+\end{proof}<|endoftext|>
+\section{Orthogonal Group is Group}
+Tags: Orthogonal Groups
+
+\begin{theorem}
+Let $k$ be a [[Definition:Field (Abstract Algebra)|field]].
+The $n$th [[Definition:Orthogonal Group|orthogonal group]] on $k$ is a [[Definition:Group|group]].
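As an informal numerical illustration of the [[One-Step Subgroup Test]] argument above (not part of the formal proof): for orthogonal matrices $\mathbf A, \mathbf B$, the product $\mathbf A \mathbf B^{-1}$ is again orthogonal. The Python sketch below uses integer signed-permutation matrices so that all arithmetic is exact; the helper names are ad hoc:

```python
# Check that A B^{-1} is orthogonal when A and B are, using exact
# integer matrices over the rationals.

def transpose(M):
    return [list(row) for row in zip(*M)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def is_orthogonal(M):
    # M is orthogonal iff M^T M is the unit matrix.
    n = len(M)
    identity = [[1 if i == j else 0 for j in range(n)] for i in range(n)]
    return matmul(transpose(M), M) == identity

A = [[0, 1, 0], [0, 0, 1], [1, 0, 0]]    # cyclic permutation matrix
B = [[0, -1, 0], [1, 0, 0], [0, 0, 1]]   # rotation by a quarter turn

assert is_orthogonal(A) and is_orthogonal(B)
# For an orthogonal matrix, the inverse is the transpose.
B_inv = transpose(B)
assert is_orthogonal(matmul(A, B_inv))
```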
+\end{theorem} + +\begin{proof} +A direct corollary of [[Orthogonal Group is Subgroup of General Linear Group]]. +{{qed}} +\end{proof}<|endoftext|> +\section{Composite of Ordered Ring Isomorphisms is Ordered Ring Isomorphism} +Tags: Ring Isomorphisms, Order Isomorphisms + +\begin{theorem} +Let $\left({S_1, +_1, \circ_1, \preccurlyeq_1}\right), \left({S_2, +_2, \circ_2, \preccurlyeq_2}\right), \left({S_3, +_3, \circ_3, \preccurlyeq_3}\right)$ be [[Definition:Ordered Ring|ordered rings]]. +Let $\phi: S_1 \to S_2$ and $\psi: S_2 \to S_3$ be [[Definition:Ordered Ring Isomorphism|ordered ring isomorphisms]]. +Then the [[Definition:Composition of Mappings|composite mapping]] $\psi \circ \phi: S_1 \to S_3$ is also an [[Definition:Ordered Ring Isomorphism|ordered ring isomorphism]]. +\end{theorem} + +\begin{proof} +From [[Composite of Order Isomorphisms is Order Isomorphism]], $\psi \circ \phi: \left({S_1, \preccurlyeq_1}\right) \to \left({S_3, \preccurlyeq_3}\right)$ is an [[Definition:Order Isomorphism|order isomorphism]]. +From [[Composite of Isomorphisms in Algebraic Structure is Isomorphism]], $\psi \circ \phi$ is an [[Definition:Isomorphism (Abstract Algebra)|algebraic structure isomorphism]]. +From [[Isomorphism Preserves Groups]], it follows that $\psi \circ \phi$ is a [[Definition:Group Isomorphism|group isomorphism]] from $\left({S_1, +_1}\right)$ to $\left({S_3, +_3}\right)$. +From [[Isomorphism Preserves Semigroups]], it follows that $\psi \circ \phi$ is a [[Definition:Semigroup Isomorphism|semigroup isomorphism]] from $\left({S_1, \circ_1}\right)$ to $\left({S_3, \circ_3}\right)$. +Hence the result. 
+{{qed}} +\end{proof}<|endoftext|> +\section{Composite of Ordered Ring Monomorphisms is Ordered Ring Monomorphism} +Tags: Ring Monomorphisms, Order Embeddings, Ordered Rings + +\begin{theorem} +Let $\left({S_1, +_1, \circ_1, \preccurlyeq_1}\right), \left({S_2, +_2, \circ_2, \preccurlyeq_2}\right), \left({S_3, +_3, \circ_3, \preccurlyeq_3}\right)$ be [[Definition:Ordered Ring|ordered rings]]. +Let $\phi: S_1 \to S_2$ and $\psi: S_2 \to S_3$ be [[Definition:Ordered Ring Monomorphism|ordered ring monomorphisms]]. +Then the [[Definition:Composition of Mappings|composite mapping]] $\psi \circ \phi: S_1 \to S_3$ is also an [[Definition:Ordered Ring Monomorphism|ordered ring monomorphism]]. +\end{theorem} + +\begin{proof} +From [[Composite of Order Embeddings is Order Embedding]], $\psi \circ \phi: \left({S_1, \preceq_1}\right) \to \left({S_3, \preceq_3}\right)$ is an [[Definition:Order Embedding|order embedding]]. +From [[Composite of Monomorphisms is Monomorphism]], $\psi \circ \phi$ is a [[Definition:Monomorphism (Abstract Algebra)|monomorphism]]. +From [[Group Monomorphism preserves Groups]], it follows that $\psi \circ \phi$ is a [[Definition:Group Monomorphism|group monomorphism]] from $\left({S_1, +_1}\right)$ to $\left({S_3, +_3}\right)$. +From [[Semigroup Monomorphism preserves Semigroups]], it follows that $\psi \circ \phi$ is a [[Definition:Semigroup Monomorphism|semigroup monomorphism]] from $\left({S_1, \circ_1}\right)$ to $\left({S_3, \circ_3}\right)$. +Hence the result. +{{qed}} +\end{proof}<|endoftext|> +\section{Rescaling is Linear Transformation} +Tags: Linear Transformations + +\begin{theorem} +Let $\left({R, +, \cdot}\right)$ be a [[Definition:Commutative Ring|commutative ring]]. +Let $\left({V, +, \circ}\right)_R$ be an [[Definition:Module|$R$-module]]. +Then for any $r \in R$, the [[Definition:Rescaling|rescaling]]: +:$m_r: V \to V, v \mapsto r \circ v$ +is a [[Definition:Linear Transformation|linear transformation]]. 
+\end{theorem}
+
+\begin{proof}
+Let $v \in V$ and $s \in R$.
+Then:
+{{begin-eqn}}
+{{eqn|l = m_r \left({s \circ v}\right)
+ |r = r \circ \left({s \circ v}\right)
+ |c = Definition of [[Definition:Rescaling|rescaling]]
+}}
+{{eqn|r = \left({r \cdot s}\right) \circ v
+ |c = $V$ is an [[Definition:Module|$R$-module]]
+}}
+{{eqn|r = \left({s \cdot r}\right) \circ v
+ |c = $R$ is a [[Definition:Commutative Ring|commutative ring]]
+}}
+{{eqn|r = s \circ \left({r \circ v}\right)
+ |c = $V$ is an [[Definition:Module|$R$-module]]
+}}
+{{eqn|r = s \circ m_r \left({v}\right)
+ |c = Definition of [[Definition:Rescaling|rescaling]]
+}}
+{{end-eqn}}
+Next, for $v,w \in V$:
+{{begin-eqn}}
+{{eqn|l = m_r \left({v + w}\right)
+ |r = r \circ \left({v + w}\right)
+ |c = Definition of [[Definition:Rescaling|rescaling]]
+}}
+{{eqn|r = r \circ v + r \circ w
+ |c = $V$ is an [[Definition:Module|$R$-module]]
+}}
+{{eqn|r = m_r \left({v}\right) + m_r \left({w}\right)
+ |c = Definition of [[Definition:Rescaling|rescaling]]
+}}
+{{end-eqn}}
+It follows that $m_r$ is a [[Definition:Linear Transformation|linear transformation]].
+{{qed}}
+[[Category:Linear Transformations]]
+\end{proof}<|endoftext|>
+\section{Determinant of Rescaling Matrix}
+Tags: Determinants
+
+\begin{theorem}
+Let $R$ be a [[Definition:Commutative Ring|commutative ring]].
+Let $r \in R$.
+Let $r \, \mathbf I_n$ be the [[Definition:Square Matrix|square matrix]] of [[Definition:Order of Square Matrix|order $n$]] defined by:
+:$\sqbrk {r \, \mathbf I_n}_{i j} = \begin{cases} r & : i = j \\ 0 & : i \ne j \end{cases}$
+Then:
+:$\map \det {r \, \mathbf I_n} = r^n$
+where $\det$ denotes [[Definition:Determinant of Matrix|determinant]].
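As an informal check of the identity $\map \det {r \, \mathbf I_n} = r^n$ (not part of the formal proof), the following Python sketch computes the determinant by Laplace expansion along the first row, in exact integer arithmetic:

```python
# Verify det(r I_n) = r^n for a few small cases.

def det(M):
    """Determinant via Laplace expansion along the first row."""
    if len(M) == 1:
        return M[0][0]
    total = 0
    for j, entry in enumerate(M[0]):
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * entry * det(minor)
    return total

def rescaling_matrix(r, n):
    return [[r if i == j else 0 for j in range(n)] for i in range(n)]

for r in (2, -3, 5):
    for n in (1, 2, 3, 4):
        assert det(rescaling_matrix(r, n)) == r ** n
```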
+\end{theorem}
+
+\begin{proof}
+From [[Determinant of Diagonal Matrix]], it follows directly that:
+:$\map \det {r \, \mathbf I_n} = \displaystyle \prod_{i \mathop = 1}^n r = r^n$
+{{qed}}
+[[Category:Determinants]]
+\end{proof}<|endoftext|>
+\section{Inverse of Rescaling Matrix}
+Tags: Matrix Algebra
+
+\begin{theorem}
+Let $R$ be a [[Definition:Commutative and Unitary Ring|commutative ring with unity]].
+Let $r \in R$ be a [[Definition:Unit of Ring|unit]] in $R$.
+Let $r \, \mathbf I_n$ be the $n \times n$ [[Definition:Rescaling Matrix|rescaling matrix]] of $r$.
+Then $\left({r \, \mathbf I_n}\right)^{-1} = r^{-1} \, \mathbf I_n$.
+\end{theorem}
+
+\begin{proof}
+By definition, a [[Definition:Rescaling Matrix|rescaling matrix]] is also a [[Definition:Diagonal Matrix|diagonal matrix]].
+Hence [[Inverse of Diagonal Matrix]] applies, and since $r$ is a [[Definition:Unit of Ring|unit]], it gives the desired result.
+{{qed}}
+[[Category:Matrix Algebra]]
+\end{proof}<|endoftext|>
+\section{Powers of Ring Elements/General Result}
+Tags: Ring Theory, Proofs by Induction
+
+\begin{theorem}
+:$\forall m, n \in \Z: \forall x \in R: \paren {m \cdot x} \circ \paren {n \cdot x} = \paren {m n} \cdot \paren {x \circ x}$.
+\end{theorem}
+
+\begin{proof}
+Proof by [[Principle of Mathematical Induction|induction]]:
+For all $n \in \N$, let $\map P n$ be the [[Definition:Proposition|proposition]]:
+:$\paren {m \cdot x} \circ \paren {n \cdot x} = \paren {m n} \cdot \paren {x \circ x}$
+In what follows, we make extensive use of [[Powers of Ring Elements]]:
+:$\forall m \in \Z: \forall x \in R: \paren {m \cdot x} \circ x = m \cdot \paren {x \circ x} = x \circ \paren {m \cdot x}$
+First we verify $\map P 0$.
+When $n = 0$, we have:
+{{begin-eqn}}
+{{eqn | l = \paren {m \cdot x} \circ \paren {0 \cdot x}
+ | r = \paren {m \cdot x} \circ 0_R
+ | c =
+}}
+{{eqn | r = 0_R
+ | c =
+}}
+{{eqn | r = 0 \cdot \paren {x \circ x}
+ | c =
+}}
+{{eqn | r = \paren {m 0} \cdot \paren {x \circ x}
+ | c =
+}}
+{{end-eqn}}
+So $\map P 0$ holds.
+=== Basis for the Induction ===
+Next we verify $\map P 1$.
+When $n = 1$, we have:
+{{begin-eqn}}
+{{eqn | l = \paren {m \cdot x} \circ \paren {1 \cdot x}
+ | r = \paren {m \cdot x} \circ x
+ | c =
+}}
+{{eqn | r = m \cdot \paren {x \circ x}
+ | c =
+}}
+{{eqn | r = \paren {m 1} \cdot \paren {x \circ x}
+ | c =
+}}
+{{end-eqn}}
+So $\map P 1$ holds.
+This is our [[Definition:Basis for the Induction|basis for the induction]].
+=== Induction Hypothesis ===
+Now we need to show that, if $\map P k$ is true, where $k \ge 1$, then it logically follows that $\map P {k + 1}$ is true.
+So this is our [[Definition:Induction Hypothesis|induction hypothesis]]:
+:$\paren {m \cdot x} \circ \paren {k \cdot x} = \paren {m k} \cdot \paren {x \circ x}$
+Then we need to show:
+:$\paren {m \cdot x} \circ \paren {\paren {k + 1} \cdot x} = \paren {m \paren {k + 1} } \cdot \paren {x \circ x}$
+=== Induction Step ===
+This is our [[Definition:Induction Step|induction step]]:
+{{begin-eqn}}
+{{eqn | l = \paren {m \cdot x} \circ \paren {\paren {k + 1} \cdot x}
+ | r = \paren {m \cdot x} \circ \paren {k \cdot x + x}
+ | c =
+}}
+{{eqn | r = \paren {m \cdot x} \circ \paren {k \cdot x} + \paren {m \cdot x} \circ x
+ | c = {{Ring-axiom|D}}
+}}
+{{eqn | r = \paren {m k} \cdot \paren {x \circ x} + m \cdot \paren {x \circ x}
+ | c = [[Powers of Ring Elements/General Result#Induction Hypothesis|Induction Hypothesis]]
+}}
+{{eqn | r = \paren {m k + m} \cdot \paren {x \circ x}
+ | c = {{Ring-axiom|D}}
+}}
+{{eqn | r = \paren {m \paren {k + 1} } \cdot \paren {x \circ x}
+ | c =
+}}
+{{end-eqn}}
+So $\map P k \implies \map P {k + 1}$ and the result follows by the [[Principle of
Mathematical Induction]].
+Therefore:
+:$\forall m \in \Z: \forall n \in \N: \paren {m \cdot x} \circ \paren {n \cdot x} = \paren {m n} \cdot \paren {x \circ x}$
+{{qed|lemma}}
+The result for $n < 0$ follows directly from [[Powers of Group Elements]].
+{{qed}}
+[[Category:Ring Theory]]
+[[Category:Proofs by Induction]]
+\end{proof}<|endoftext|>
+\section{Characteristic of Division Ring is Zero or Prime}
+Tags: Division Rings
+
+\begin{theorem}
+Let $\struct {D, +, \circ}$ be a [[Definition:Division Ring|division ring]].
+Let $\map {\operatorname {Char} } D$ be the [[Definition:Characteristic of Ring|characteristic]] of $D$.
+Then $\map {\operatorname {Char} } D$ is either $0$ or a [[Definition:Prime Number|prime number]].
+\end{theorem}

+\begin{proof}
+By definition, a [[Definition:Division Ring|division ring]] has no [[Definition:Proper Zero Divisor|proper zero divisors]].
+Let $n = \map {\operatorname {Char} } D$.
+If $n = 0$ there is nothing further to prove, so suppose $n > 0$.
+Since $1 \cdot 1_D = 1_D \ne 0_D$, it follows that $n \ne 1$.
+Aiming for a contradiction, suppose $n = a b$ where $1 < a, b < n$.
+Then by [[Powers of Ring Elements/General Result]]:
+:$\paren {a \cdot 1_D} \circ \paren {b \cdot 1_D} = \paren {a b} \cdot \paren {1_D \circ 1_D} = n \cdot 1_D = 0_D$
+As $D$ has no [[Definition:Proper Zero Divisor|proper zero divisors]], either $a \cdot 1_D = 0_D$ or $b \cdot 1_D = 0_D$.
+Either case contradicts the fact that $n$ is the smallest [[Definition:Strictly Positive Integer|(strictly) positive integer]] such that $n \cdot 1_D = 0_D$.
+Hence $n$ admits no such factorization, and so $n$ is [[Definition:Prime Number|prime]].
+{{qed}}
+\end{proof}<|endoftext|>
+\section{Characteristic of Integral Domain is Zero or Prime}
+Tags: Integral Domains
+
+\begin{theorem}
+Let $\struct {D, +, \circ}$ be an [[Definition:Integral Domain|integral domain]].
+Let $\operatorname{Char} \left({D}\right)$ be the [[Definition:Characteristic of Ring|characteristic]] of $D$.
+Then $\operatorname{Char} \left({D}\right)$ is either $0$ or a [[Definition:Prime Number|prime number]].
+\end{theorem}
+
+\begin{proof}
+By definition, an [[Definition:Integral Domain|integral domain]] has no [[Definition:Proper Zero Divisor|proper zero divisors]].
+Let $n = \operatorname{Char} \left({D}\right)$.
+If $n = 0$ there is nothing further to prove, so suppose $n > 0$.
+Since $1 \cdot 1_D = 1_D \ne 0_D$, it follows that $n \ne 1$.
+Aiming for a contradiction, suppose $n = a b$ where $1 < a, b < n$.
+Then by [[Powers of Ring Elements/General Result]]:
+:$\left({a \cdot 1_D}\right) \circ \left({b \cdot 1_D}\right) = \left({a b}\right) \cdot \left({1_D \circ 1_D}\right) = n \cdot 1_D = 0_D$
+As $D$ has no [[Definition:Proper Zero Divisor|proper zero divisors]], either $a \cdot 1_D = 0_D$ or $b \cdot 1_D = 0_D$.
+Either case contradicts the fact that $n$ is the smallest [[Definition:Strictly Positive Integer|(strictly) positive integer]] such that $n \cdot 1_D = 0_D$.
+Hence $n$ admits no such factorization, and so $n$ is [[Definition:Prime Number|prime]].
+{{qed}}
+\end{proof}<|endoftext|>
+\section{Magdy's Exchanger}
+Tags:
+
+\begin{theorem}
+'''Magdy's Exchanger''' is a simple method to obtain a semi-algebraic formula for the inverse trigonometric functions with the help of the inverse hyperbolic functions, which have real logarithmic formulas. As an example we derive such a formula for the inverse sine function.
+First:
+:$\displaystyle \arcsin x = \int \frac 1 {\sqrt{1 - x^2}} \, \mathrm d x$
+Using [[Integration by Parts]]:
+:$\displaystyle \int \frac 1 {\sqrt{1-x^2}} \,\mathrm d x = -\frac {\sqrt {1-x^2}} x - \int \frac{\sqrt{1-x^2}} {x^2} \, \mathrm d x$
+The problem is that the integral on the {{RHS}} will turn back into the original integral if we solve it by parts.
+Then the result will be:
+:$0 = 0$
+Magdy's method will come into effect now.
+As we know, the [[Definition:Inverse Hyperbolic Function|inverse hyperbolic function]] most similar to the [[Definition:Inverse Sine|inverse sine function]] is the [[Definition:Inverse Hyperbolic Sine|inverse sinh function]].
+The idea is to get the last integral on the {{RHS}} from the [[Definition:Inverse Hyperbolic Sine|inverse sinh function]].
+So we use parts again on the [[Definition:Derivative|derivative]] of [[Definition:Inverse Hyperbolic Sine|inverse sinh]] as follows:
+:$\displaystyle \operatorname{arcsinh} x = \int \frac 1 {\sqrt{1 + x^2}} \, \mathrm d x$
+{{begin-eqn}}
+{{eqn | l = \operatorname{arcsinh} x
+ | r = \int \frac 1 {\sqrt{1 + x^2} } \, \mathrm d x
+ | c = [[Primitive of Reciprocal of Root of x squared plus a squared/Inverse Hyperbolic Sine Form|Primitive of $\dfrac 1 {\sqrt {x^2 + a^2} }$]]
+}}
+{{eqn | r = \frac {\sqrt {1 + x^2} } x + \int \frac {\sqrt {1 + x^2} } {x^2} \,\mathrm d x
+ | c = [[Integration by Parts]]
+}}
+{{eqn | r = \ln \left({x + \sqrt{x^2 + 1} }\right)
+ | c = Definition of [[Definition:Inverse Hyperbolic Sine/Real/Definition 2|Inverse Sinh Function]]
+}}
+{{end-eqn}}
+By [[Primitive of Root of x squared plus a squared over x squared|Primitive of $\dfrac {\sqrt {x^2 + a^2} } {x^2}$]]:
+:$\displaystyle \int \frac {\sqrt {1 + x^2} } {x^2} \, \mathrm d x = \ln \left({x + \sqrt{x^2 + 1} }\right) - \frac {\sqrt {1 + x^2} } x$
+Taking the [[Definition:Derivative|derivative]] of both sides:
+:$\displaystyle \frac{\sqrt{1 + x^2} } {x^2} = \frac {1 + x^2 + x \sqrt{x^2 + 1} } {x^3 + x^2 \sqrt {x^2 + 1} } = A$
+Squaring both sides:
+:$\dfrac {1 + x^2} {x^4} = A^2$
+Now comes the most important step.
+As we want to have $1 - x^2$ in the numerator, we write the fraction on the {{LHS}} as:
+:$\dfrac {1 - x^2 + 2 x^2} {x^4}$
+which does not affect the value of the fraction.
+Now the {{LHS}} is: +:$\dfrac {1 - x^2} {x^4} + \dfrac 2 {x^2}$ +then the wanted fraction will be alone on the {{LHS}}: +:$\dfrac {1 - x^2} {x^4} = A^2 - \dfrac 2 {x^2}$ +Taking the root of both sides: +:$\dfrac {\sqrt{1 - x^2} } {x^2} = \sqrt {A^2 - \dfrac 2 {x^2} }$ +then integrating both sides: +:$\displaystyle \int \frac {\sqrt{1 - x^2} } {x^2} \, \mathrm d x = \int \sqrt{A^2 - \frac 2 {x^2}} \, \mathrm d x$ +Now the {{RHS}} is complicated but evaluates as follows: +:$\displaystyle \int \sqrt{A^2 - \frac 2 {x^2} } \, \mathrm d x = \frac {x^2 \sqrt {\frac 1 {x^4} - \frac 1 {x^2} } \ln \left({2 \sqrt {x^2 - 1} + 2 x}\right)} {\sqrt {x^2 - 1} } - x \sqrt {\frac 1 {x^4} - \frac 1 {x^2} } + C$ +Substituting the last result in the first equation of the inverse sine we get: +:$\displaystyle \arcsin x = - \frac {\sqrt {1 - x^2} } x - \frac {x^2 \sqrt {\frac 1 {x^4} - \frac 1 {x^2} } \ln \left({2 \sqrt {x^2 - 1} + 2 x}\right)} {\sqrt {x^2 - 1}} + x \sqrt {\frac 1 {x^4} - \frac 1 {x^2} } + C$ +Hence we get a semi-algebraic formula for the [[Definition:Inverse Sine|inverse sine function]]. +{{qed}} +\end{theorem}<|endoftext|> +\section{Sigma-Algebra Generated by Complements of Generators} +Tags: Sigma-Algebras + +\begin{theorem} +Let $\Sigma$ be a [[Definition:Sigma-Algebra|$\sigma$-algebra]] on a set $X$. +Let $\mathcal G$ be a [[Definition:Sigma-Algebra Generated by Collection of Subsets|generator]] for $\Sigma$. +Then: +:$\mathcal{G}' := \left\{{X \setminus G: G \in \mathcal G}\right\}$ +the set of [[Definition:Relative Complement|relative complements]] of $\mathcal G$, is also a [[Definition:Sigma-Algebra Generated by Collection of Subsets|generator]] for $\Sigma$. 
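As an informal aside (not part of the formal argument), the claim that complementing every generator leaves the generated $\sigma$-algebra unchanged can be confirmed by brute force on a small finite set, where a $\sigma$-algebra is just the closure under complement and finite union. A Python sketch with ad hoc names:

```python
# Brute-force check on a small set X that the complements of the
# generators generate the same sigma-algebra.

X = frozenset({0, 1, 2, 3})

def generated_sigma_algebra(generators):
    # On a finite set, close under complement and pairwise union
    # until a fixed point is reached.
    sigma = {frozenset(), X} | {frozenset(G) for G in generators}
    while True:
        new = {X - A for A in sigma} | {A | B for A in sigma for B in sigma}
        if new <= sigma:
            return sigma
        sigma |= new

G = [{0}, {0, 1}]
G_complements = [X - frozenset(g) for g in G]
assert generated_sigma_algebra(G) == generated_sigma_algebra(G_complements)
```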
+\end{theorem}
+
+\begin{proof}
+By definition of [[Definition:Sigma-Algebra Generated by Collection of Subsets|generator]]:
+:$\forall G \in \mathcal G: G \in \Sigma$
+The third axiom for a [[Definition:Sigma-Algebra|$\sigma$-algebra]] thus ensures that:
+:$\forall G \in \mathcal G: X \setminus G \in \Sigma$
+Therefore, by definition of [[Definition:Sigma-Algebra Generated by Collection of Subsets|generated $\sigma$-algebra]], $\sigma \left({\mathcal{G}'}\right) \subseteq \Sigma$.
+However, by the same argument (now applied to $\sigma \left({\mathcal{G}'}\right)$), also:
+:$\forall G \in \mathcal G: X \setminus \left({X \setminus G}\right) \in \sigma \left({\mathcal{G}'}\right)$
+and by [[Set Difference with Set Difference]] and [[Intersection with Subset is Subset]]:
+:$X \setminus \left({X \setminus G}\right) = G \cap X = G$
+Thus, it follows that $\Sigma = \sigma \left({\mathcal G}\right) \subseteq \sigma \left({\mathcal{G}'}\right)$.
+Hence the result, by definition of [[Definition:Set Equality/Definition 2|set equality]].
+{{qed}}
+[[Category:Sigma-Algebras]]
+\end{proof}<|endoftext|>
+\section{Primary Decomposition Theorem}
+Tags: Polynomial Theory, Named Theorems
+
+\begin{theorem}
+Let $K$ be a [[Definition:Field (Abstract Algebra)|field]].
+Let $V$ be a [[Definition:Vector Space|vector space]] over $K$.
+Let $T: V \to V$ be a [[Definition:Linear Transformation|linear operator]] on $V$.
+Let $\map p x \in K \sqbrk x$ be a [[Definition:Polynomial|polynomial]] such that:
+:$\map \deg p \ge 1$
+:$\map p T = 0$
+where $0$ is the zero operator on $V$.
+{{explain|Link to definition of $K \sqbrk x$}}
+Let $\map {p_1} x, \map {p_2} x, \ldots, \map {p_r} x$ be [[Definition:Distinct|distinct]] [[Definition:Irreducible Polynomial|irreducible]] [[Definition:Monic Polynomial|monic polynomials]].
+Let $c \in K \setminus \set 0$ and $a_1, a_2, \ldots, a_r \in \Z_{\ge 1}$ be constants.
+We have that: +:$\map p x = c \map {p_1} x^{a_1} \map {p_2} x^{a_2} \dotsm \map {p_r} x^{a_r}$ +The primary decomposition theorem then states the following : +:$(1): \quad \map \ker {\map {p_i} T^{a_i} }$ is a [[Definition:Invariant Subspace|$T$-invariant subspace]] of $V$ for all $i = 1, 2, \dotsc, r$ +:$(2): \quad \displaystyle V = \bigoplus_{i \mathop = 1}^r \map \ker {\map {p_i} T^{a_i} }$ +{{explain|Link to definition of $\bigoplus$ in this context.}} +{{improve|Rather than dragging the unwieldy $\map \ker {\map {p_i} T^{a_i} }$ all around the page, suggest that a symbol e.g. $\kappa_i$ be used for it instead. There are many more places where the exposition is repetitive and could benefit from being broken into more modular units. And there are still places where the logical flow is compromised by being wrapped backwards upon itself with "because" and "indeed" and nested "if-thens", although I have done my best to clean out most of these.}} +\end{theorem}<|endoftext|> +\section{Sigma-Algebra Extended by Single Set} +Tags: Sigma-Algebras + +\begin{theorem} +Let $\Sigma$ be a [[Definition:Sigma-Algebra|$\sigma$-algebra]] on a set $X$. +Let $S \subseteq X$ be a [[Definition:Subset|subset]] of $X$. +For [[Definition:Subset|subsets]] $T \subseteq X$ of $X$, denote $T^c$ for the [[Definition:Set Difference|set difference]] $X \setminus T$. +Then: +:$\sigma \left({\Sigma \cup \left\{{S}\right\}}\right) = \left\{{\left({E_1 \cap S}\right) \cup \left({E_2 \cap S^c}\right): E_1, E_2 \in \Sigma}\right\}$ +where $\sigma$ denotes [[Definition:Sigma-Algebra Generated by Collection of Subsets|generated $\sigma$-algebra]]. +\end{theorem} + +\begin{proof} +Define $\Sigma'$ as follows: +:$\Sigma' := \left\{{\left({E_1 \cap S}\right) \cup \left({E_2 \cap S^c}\right): E_1, E_2 \in \Sigma}\right\}$ +Picking $E_1 = X$ and $E_2 = \varnothing$ (allowed by [[Sigma-Algebra Contains Empty Set]]), it follows that $S \in \Sigma'$. 
+On the other hand, for any $E_1 \in \Sigma$, we have by [[Intersection Distributes over Union]] and [[Union with Relative Complement]]:
+:$\left({E_1 \cap S}\right) \cup \left({E_1 \cap S^c}\right) = E_1 \cap \left({S \cup S^c}\right) = E_1 \cap X = E_1$
+Hence $E_1 \in \Sigma'$ for all $E_1 \in \Sigma$, and so $\Sigma \subseteq \Sigma'$.
+Therefore, $\Sigma \cup \left\{{S}\right\} \subseteq \Sigma'$.
+Moreover, from [[Sigma-Algebra Closed under Union]], [[Sigma-Algebra Closed under Intersection]] and axiom $(2)$ for a [[Definition:Sigma-Algebra|$\sigma$-algebra]], it is necessarily the case that:
+:$\Sigma' \subseteq \sigma \left({\Sigma \cup \left\{{S}\right\}}\right)$
+It will thence suffice to demonstrate that $\Sigma'$ is a [[Definition:Sigma-Algebra|$\sigma$-algebra]].
+Since $X \in \Sigma$, also $X \in \Sigma'$.
+Next, for any $E_1, E_2 \in \Sigma$, observe:
+{{begin-eqn}}
+{{eqn | l = \left({\left({E_1 \cap S}\right) \cup \left({E_2 \cap S^c}\right)}\right)^c
+ | r = \left({E_1 \cap S}\right)^c \cap \left({E_2 \cap S^c}\right)^c
+ | c = [[De Morgan's Laws (Set Theory)/Set Difference/Difference with Union|De Morgan's Laws: Difference with Union]]
+}}
+{{eqn | r = \left({E_1^c \cup S^c}\right) \cap \left({E_2^c \cup S}\right)
+ | c = [[De Morgan's Laws (Set Theory)/Set Difference/Difference with Intersection|De Morgan's Laws: Difference with Intersection]], [[Set Difference with Set Difference]]
+}}
+{{eqn | r = \left({ \left({E_1^c \cup S^c}\right) \cap E_2^c }\right) \cup \left({ \left({E_1^c \cup S^c}\right) \cap S }\right)
+ | c = [[Intersection Distributes over Union]]
+}}
+{{eqn | r = \left({E_1^c \cap E_2^c}\right) \cup \left({E_2^c \cap S^c}\right) \cup \left({E_1^c \cap S}\right) \cup \left({S^c \cap S}\right)
+ | c = [[Union Distributes over Intersection]]
+}}
+{{eqn | r = \left({ \left({E_1^c \cap E_2^c}\right) \cap \left({S^c \cup S}\right) }\right) \cup \left({E_2^c \cap S^c}\right) \cup \left({E_1^c \cap S}\right)
+ | c = [[Union with Relative Complement]],
[[Set Difference Intersection with Second Set is Empty Set]] +}} +{{eqn | r = \left({E_1^c \cap E_2^c \cap S}\right) \cup \left({E_1^c \cap S}\right) \cup \left({E_1^c \cap E_2^c \cap S^c}\right) \cup \left({E_2^c \cap S^c}\right) + | c = [[Intersection Distributes over Union]] +}} +{{eqn | r = \left({\left({\left({E_1^c \cap E_2^c}\right) \cup E_1^c}\right) \cap S}\right) \cup \left({\left({\left({E_1^c \cap E_2^c}\right) \cup E_2^c}\right) \cap S^c}\right) + | c = [[Intersection Distributes over Union]] +}} +{{eqn | r = \left({E_1^c \cap S}\right) \cup \left({E_2^c \cap S^c}\right) + | c = [[Intersection is Subset]], [[Union with Superset is Superset]] +}} +{{end-eqn}} +As $\Sigma$ is a [[Definition:Sigma-Algebra|$\sigma$-algebra]], $E_1^c, E_2^c \in \Sigma$ and so indeed: +:$\left({\left({E_1 \cap S}\right) \cup \left({E_2 \cap S^c}\right)}\right)^c \in \Sigma'$ +Finally, let $\left({E_{1, n}}\right)_{n \in \N}$ and $\left({E_{2, n}}\right)_{n \in \N}$ be [[Definition:Sequence|sequences]] in $\Sigma$. +Then: +{{begin-eqn}} +{{eqn | l = \bigcup_{n \mathop \in \N} \left({E_{1, n} \cap S}\right) \cup \left({E_{2, n} \cap S^c}\right) + | r = \left({\bigcup_{n \mathop \in \N} \left({E_{1, n} \cap S}\right)}\right) \cup \left({\bigcup_{n \mathop \in \N} \left({E_{2, n} \cap S^c}\right)}\right) + | c = [[Union Distributes over Union/Families of Sets]] +}} +{{eqn | r = \left({\left({\bigcup_{n \mathop \in \N} E_{1, n} }\right) \cap S}\right) \cup \left({\left({\bigcup_{n \mathop \in \N} E_{2, n} }\right) \cap S^c}\right) + | c = [[Union Distributes over Intersection]] +}} +{{end-eqn}} +Since $\displaystyle \bigcup_{n \mathop \in \N} E_{1, n}, \bigcup_{n \mathop \in \N} E_{2, n} \in \Sigma$, it follows that: +:$\displaystyle \bigcup_{n \mathop \in \N} \left({E_{1, n} \cap S}\right) \cup \left({E_{2, n} \cap S^c}\right) \in \Sigma'$ +Hence it is established that $\Sigma'$ is a [[Definition:Sigma-Algebra|$\sigma$-algebra]]. 
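+As an aside, the closure properties just established, and the resulting identity $\sigma \left({\Sigma \cup \left\{{S}\right\}}\right) = \Sigma'$, lend themselves to a brute-force sanity check on a small finite example. The following sketch is not part of the formal proof; the choices of $X$, $\Sigma$ and $S$ are arbitrary, and finite unions stand in for countable ones:

```python
from itertools import product

X = frozenset({0, 1, 2, 3})
# A small sigma-algebra on X, namely the one generated by {0, 1}:
Sigma = {frozenset(), frozenset({0, 1}), frozenset({2, 3}), X}
S = frozenset({1, 2})    # the single set adjoined to Sigma
Sc = X - S               # its complement in X

# Sigma' = {(E1 ∩ S) ∪ (E2 ∩ S^c) : E1, E2 ∈ Sigma}
SigmaP = {(E1 & S) | (E2 & Sc) for E1, E2 in product(Sigma, repeat=2)}

assert X in SigmaP and S in SigmaP
assert Sigma <= SigmaP                                        # Sigma ∪ {S} ⊆ Sigma'
assert all(X - E in SigmaP for E in SigmaP)                   # closed under complement
assert all(A | B in SigmaP for A in SigmaP for B in SigmaP)   # closed under union
```

In this particular example $\Sigma'$ comes out as the whole power set of $X$, since the generators together separate all four points.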
+It follows that:
+:$\displaystyle \sigma \left({\Sigma \cup \left\{{S}\right\}}\right) = \Sigma'$
+{{qed}}
+[[Category:Sigma-Algebras]]
+\end{proof}<|endoftext|>
+\section{Scalar Product with Inverse Unity}
+Tags: Unitary Modules
+
+\begin{theorem}
+:$\paren {-1_R} \circ x = - x$
+\end{theorem}
+
+\begin{proof}
+Follows directly from [[Scalar Product with Inverse]].
+{{qed}}
+\end{proof}<|endoftext|>
+\section{Scalar Product with Multiple of Unity}
+Tags: Unitary Modules
+
+\begin{theorem}
+:$\paren {n \cdot 1_R} \circ x = n \cdot x$
+that is:
+:$\paren {\map {\paren {+_R}^n} {1_R} } \circ x = \map {\paren {+_G}^n} x$
+\end{theorem}
+
+\begin{proof}
+Follows directly from [[Scalar Product with Product]].
+{{qed}}
+\end{proof}<|endoftext|>
+\section{Subring Module/Special Case}
+Tags: Subrings, Module Theory
+
+\begin{theorem}
+Let $S$ be a [[Definition:Subring|subring]] of the [[Definition:Ring (Abstract Algebra)|ring]] $\struct {R, +, \circ}$.
+Let $\circ_S$ be the [[Definition:Restriction of Operation|restriction]] of $\circ$ to $S \times R$.
+Then $\struct {R, +, \circ_S}_S$ is an [[Definition:Module|$S$-module]].
+If $\struct {R, +, \circ}$ has a [[Definition:Unity of Ring|unity]], $1_R$, and $1_R \in S$, then $\struct {R, +, \circ_S}_S$ is a [[Definition:Unitary Module|unitary $S$-module]].
+\end{theorem}
+
+\begin{proof}
+From [[Ring is Module over Itself]], it follows that:
+:$\struct {R, +, \circ}_R$ is an [[Definition:Module|$R$-module]].
+:If $\struct {R, +, \circ}$ has a [[Definition:Unity of Ring|unity]], then $\struct {R, +, \circ}_R$ is [[Definition:Unitary Module|unitary]].
+Now the theorem follows directly from [[Subring Module]].
+{{qed}}
+\end{proof}<|endoftext|>
+\section{Trivial Module is Module}
+Tags: Module Theory
+
+\begin{theorem}
+Let $\struct {G, +_G}$ be an [[Definition:Abelian Group|abelian group]] whose [[Definition:Identity Element|identity]] is $e_G$. 
+Let $\struct {R, +_R, \circ_R}$ be a [[Definition:Ring (Abstract Algebra)|ring]]. +Let $\struct {G, +_G, \circ}_R$ be the [[Definition:Trivial Module|trivial $R$-module]], such that: +:$\forall \lambda \in R: \forall x \in G: \lambda \circ x = e_G$ +Then $\struct {G, +_G, \circ}_R$ is a [[Definition:Module|module]]. +\end{theorem} + +\begin{proof} +Checking the [[Definition:Module Axioms|module axioms]] in turn: +:$(\text M 1): \quad \lambda \circ \paren {x +_G y} = e_G = e_G +_G e_G = \paren {\lambda \circ x} +_G \paren {\lambda \circ y}$ +:$(\text M 2): \quad \paren {\lambda +_R \mu} \circ x = e_G = e_G +_G e_G = \paren {\lambda \circ x} +_G \paren {\mu \circ x}$ +:$(\text M 3): \quad \paren {\lambda \times_R \mu} \circ x = e_G = \lambda \circ e_G = \lambda \circ \paren {\mu \circ x}$ +Thus the [[Definition:Trivial Module|trivial module]] is indeed a [[Definition:Module|module]]. +{{qed}} +\end{proof}<|endoftext|> +\section{Trivial Module is Not Unitary} +Tags: Module Theory + +\begin{theorem} +Let $\struct {G, +_G}$ be an [[Definition:Abelian Group|abelian group]] whose [[Definition:Identity Element|identity]] is $e_G$. +Let $\struct {R, +_R, \circ_R}$ be a [[Definition:Ring (Abstract Algebra)|ring]]. +Let $\struct {G, +_G, \circ}_R$ be the [[Definition:Trivial Module|trivial $R$-module]], such that: +:$\forall \lambda \in R: \forall x \in G: \lambda \circ x = e_G$ +Then unless $R$ is a [[Definition:Ring with Unity|ring with unity]] and $G$ contains only one element, this is ''not'' a [[Definition:Unitary Module|unitary module]]. +\end{theorem} + +\begin{proof} +By definition, for a [[Definition:Trivial Module|trivial module]] to be [[Definition:Unitary Module|unitary]], $R$ needs to be a [[Definition:Ring with Unity|ring with unity]]. 
+For {{Module-axiom|4}} to apply, we require that:
+:$\forall x \in G: 1_R \circ x = x$
+But for the trivial module:
+:$\forall x \in G: 1_R \circ x = e_G$
+So {{Module-axiom|4}} can apply only when:
+:$\forall x \in G: x = e_G$
+Thus for the trivial module to be [[Definition:Unitary Module|unitary]], it is necessary that $G$ be the [[Definition:Trivial Group|trivial group]], and thus to contain one element.
+{{qed}}
+\end{proof}<|endoftext|>
+\section{Null Module is Module}
+Tags: Module Theory
+
+\begin{theorem}
+Let $\left({R, +_R, \circ_R}\right)$ be a [[Definition:Ring (Abstract Algebra)|ring]].
+Let $G$ be the [[Definition:Trivial Group|trivial group]].
+Let $\left({G, +_G, \circ}\right)_R$ be the [[Definition:Null Module|null module]].
+Then $\left({G, +_G, \circ}\right)_R$ is a [[Definition:Module|module]].
+\end{theorem}
+
+\begin{proof}
+Follows from the fact that $\left({G, +_G, \circ}\right)_R$ has to be, by definition, a [[Definition:Trivial Module|trivial module]]:
+$\circ$ can only be defined as:
+:$\forall \lambda \in R: \forall x \in G: \lambda \circ x = e_G$
+{{qed}}
+[[Category:Module Theory]]
+\end{proof}<|endoftext|>
+\section{Submodule of Module of Polynomial Functions}
+Tags: Module Theory, Polynomial Theory
+
+\begin{theorem}
+Let $K$ be a [[Definition:Commutative and Unitary Ring|commutative ring with unity]].
+Let $P \left({K}\right)$ be the set of all [[Definition:Polynomial Function (Abstract Algebra)|polynomial functions]] on $K$.
+Consider the set $P_m \left({K}\right)$ of all the [[Definition:Polynomial Function (Abstract Algebra)|polynomial functions]]:
+:$\displaystyle \sum_{k \mathop = 0}^{m-1} \alpha_k {I_K}^k$
+for some $m \in \N^*$ where:
+:$\left \langle {\alpha_k} \right \rangle_{k \in \left[{0 \,.\,.\, m-1}\right]}$
+is any sequence of $m$ terms of $K$.
+Then $P_m \left({K}\right)$ is a [[Definition:Submodule|submodule]] of $P \left({K}\right)$. 
+\end{theorem}<|endoftext|>
+\section{Condition on Equality of Generated Sigma-Algebras}
+Tags: Sigma-Algebras
+
+\begin{theorem}
+Let $X$ be a [[Definition:Set|set]], and let $\mathcal G$, $\mathcal H$ be [[Definition:Set|sets]] of [[Definition:Subset|subsets]] of $X$.
+Suppose that:
+:$\mathcal G \subseteq \mathcal H \subseteq \sigma \left({\mathcal G}\right)$
+where $\sigma$ denotes [[Definition:Sigma-Algebra Generated by Collection of Subsets|generated $\sigma$-algebra]].
+Then:
+:$\sigma \left({\mathcal G}\right) = \sigma \left({\mathcal H}\right)$
+\end{theorem}
+
+\begin{proof}
+From [[Generated Sigma-Algebra Preserves Subset]], it follows that:
+:$\sigma \left({\mathcal G}\right) \subseteq \sigma \left({\mathcal H}\right)$
+Since $\sigma \left({\mathcal G}\right)$ is a [[Definition:Sigma-Algebra|$\sigma$-algebra]] containing $\mathcal H$:
+:$\sigma \left({\mathcal H}\right) \subseteq \sigma \left({\mathcal G}\right)$
+from the definition of [[Definition:Sigma-Algebra Generated by Collection of Subsets|generated $\sigma$-algebra]].
+Hence the result, from the definition of [[Definition:Set Equality|set equality]].
+{{qed}}
+[[Category:Sigma-Algebras]]
+\end{proof}<|endoftext|>
+\section{Sigma-Algebra is Monotone Class}
+Tags: Sigma-Algebras, Monotone Classes
+
+\begin{theorem}
+Let $\Sigma$ be a [[Definition:Sigma-Algebra|$\sigma$-algebra]] on a [[Definition:Set|set]] $X$.
+Then $\Sigma$ is also a [[Definition:Monotone Class|monotone class]].
+\end{theorem}
+
+\begin{proof}
+By definition, $\Sigma$, being a [[Definition:Sigma-Algebra|$\sigma$-algebra]], is [[Definition:Closed Algebraic Structure|closed]] under [[Definition:Countable Union|countable unions]].
+From [[Sigma-Algebra Closed under Countable Intersection]], it is also [[Definition:Closed Algebraic Structure|closed]] under [[Definition:Countable Intersection|countable intersections]]. 
+Thence, by definition, $\Sigma$ is a [[Definition:Monotone Class|monotone class]].
+{{qed}}
+[[Category:Sigma-Algebras]]
+[[Category:Monotone Classes]]
+\end{proof}<|endoftext|>
+\section{Linear Combination of Sequence is Linear Combination of Set}
+Tags: Linear Algebra
+
+\begin{theorem}
+Let $G$ be an [[Definition:Module|$R$-module]].
+Let $\sequence {a_k}_{1 \mathop \le k \mathop \le n}$ be a [[Definition:Sequence|sequence of elements]] of $G$.
+Let $b$ be an [[Definition:Element|element]] of $G$.
+Then:
+:$b$ is a [[Definition:Linear Combination of Sequence|linear combination]] of the [[Definition:Finite Sequence|sequence]] $\sequence {a_k}_{1 \mathop \le k \mathop \le n}$
+{{iff}}:
+:$b$ is a [[Definition:Linear Combination of Subset|linear combination]] of the [[Definition:Set|set]] $\set {a_k: 1 \mathop \le k \mathop \le n}$
+\end{theorem}
+
+\begin{proof}
+=== Necessary Condition ===
+By definition of [[Definition:Linear Combination of Subset|linear combination of subset]]:
+:Every [[Definition:Linear Combination of Sequence|linear combination]] of $\sequence {a_k}_{1 \mathop \le k \mathop \le n}$ is a [[Definition:Linear Combination of Subset|linear combination]] of $\set {a_k: 1 \mathop \le k \mathop \le n}$.
+{{qed|lemma}}
+=== Sufficient Condition ===
+Let $b$ be a [[Definition:Linear Combination of Subset|linear combination]] of $\set {a_k: 1 \mathop \le k \mathop \le n} = \set {a_1, a_2, \ldots, a_n}$.
+Then there exists:
+:a [[Definition:Finite Sequence|sequence]] $\sequence {c_j}_{1 \mathop \le j \mathop \le m}$ of elements of $\set {a_1, a_2, \ldots, a_n}$
+and:
+:a [[Definition:Finite Sequence|sequence]] $\sequence {\mu_j}_{1 \mathop \le j \mathop \le m}$ of [[Definition:Scalar (Module)|scalars]] such that:
+::$\displaystyle b = \sum_{j \mathop = 1}^m \mu_j c_j$
+For each $k \in \closedint 1 n$, let $\lambda_k$ be defined as follows. 
+If:
+:$a_k \in \set {c_1, c_2, \ldots, c_m}$
+and:
+:$a_i \ne a_k$ for all indices $i$ such that $1 \le i < k$
+let $\lambda_k$ be the sum of all [[Definition:Scalar (Module)|scalars]] $\mu_j$ such that $c_j = a_k$.
+If:
+:$a_k \notin \set {c_1, c_2, \ldots, c_m}$
+or:
+:$a_i = a_k$ for some index $i$ such that $1 \le i < k$
+let $\lambda_k = 0$.
+It follows that:
+:$\displaystyle b = \sum_{j \mathop = 1}^m \mu_j c_j = \sum_{k \mathop = 1}^n \lambda_k a_k$
+Let $\sequence {a_k}_{1 \mathop \le k \mathop \le n}$ and $\sequence {b_j}_{1 \mathop \le j \mathop \le m}$ be [[Definition:Finite Sequence|sequences]] of elements of $G$ such that $\set {a_1, a_2, \ldots, a_n}$ and $\set {b_1, b_2, \ldots, b_m}$ are identical.
+Then as a consequence of the above:
+:an element is a [[Definition:Linear Combination of Sequence|linear combination]] of $\sequence {a_k}_{1 \mathop \le k \mathop \le n}$
+{{iff}}:
+:it is a [[Definition:Linear Combination of Subset|linear combination]] of $\set {a_k: 1 \mathop \le k \mathop \le n}$
+{{qed}}
+{{explain|The above lacks coherence. Source is to be revisited and reinterpreted.}}
+\end{proof}<|endoftext|>
+\section{Trace Sigma-Algebra of Generated Sigma-Algebra}
+Tags: Sigma-Algebras
+
+\begin{theorem}
+Let $X$ be a [[Definition:Set|set]], and let $\GG \subseteq \powerset X$ be a collection of [[Definition:Subset|subsets]] of $X$.
+Let $A \subseteq X$ be a [[Definition:Subset|subset]] of $X$. 
+Then the following equality holds:
+:$A \cap \map \sigma \GG = \map \sigma {A \cap \GG}$
+where
+:$\map \sigma \GG$ denotes the [[Definition:Sigma-Algebra Generated by Collection of Subsets|smallest $\sigma$-algebra on $X$ that contains $\GG$]]
+:$\map \sigma {A \cap \GG}$ denotes the [[Definition:Sigma-Algebra Generated by Collection of Subsets|smallest $\sigma$-algebra on $A$ that contains ${A \cap \GG}$]]
+:$A \cap \map \sigma \GG$ denotes the [[Definition:Trace Sigma-Algebra|trace $\sigma$-algebra]] on $A$
+:$A \cap \GG$ is a shorthand for $\set {A \cap G: G \in \GG}$
+\end{theorem}
+
+\begin{proof}
+By definition of [[Definition:Sigma-Algebra Generated by Collection of Subsets|generated $\sigma$-algebra]]:
+:$\GG \subseteq \map \sigma \GG$
+whence from [[Set Intersection Preserves Subsets]]:
+:$A \cap \GG \subseteq A \cap \map \sigma \GG$
+and therefore, by definition of [[Definition:Sigma-Algebra Generated by Collection of Subsets|generated $\sigma$-algebra]]:
+:$\map \sigma {A \cap \GG} \subseteq A \cap \map \sigma \GG$
+For the reverse inclusion, define $\Sigma$ by:
+:$\Sigma := \set {E \subseteq X: A \cap E \in \map \sigma {A \cap \GG} }$
+We will show that $\Sigma$ is a [[Definition:Sigma-Algebra|$\sigma$-algebra]] on $X$.
+Since $A \in \map \sigma {A \cap \GG}$:
+:$A \cap X = A \in \map \sigma {A \cap \GG}$
+and therefore $X \in \Sigma$.
+Suppose that $E \in \Sigma$.
+Then by [[Set Intersection Distributes over Set Difference]] and [[Intersection with Subset is Subset]]:
+:$\paren {X \setminus E} \cap A = \paren {X \cap A} \setminus \paren {E \cap A} = A \setminus \paren {E \cap A}$
+Since $E \cap A \in \map \sigma {A \cap \GG}$ and this is a [[Definition:Sigma-Algebra|$\sigma$-algebra]] on $A$:
+:$A \setminus \paren {E \cap A} \in \map \sigma {A \cap \GG}$
+That is, $\paren {X \setminus E} \cap A \in \map \sigma {A \cap \GG}$, and so $X \setminus E \in \Sigma$.
+Finally, let $\sequence {E_n}_{n \mathop \in \N}$ be a [[Definition:Sequence|sequence]] in $\Sigma$. 
+Then by [[Intersection Distributes over Union]]:
+:$\displaystyle \paren {\bigcup_{n \mathop \in \N} E_n} \cap A = \bigcup_{n \mathop \in \N} \paren {E_n \cap A}$
+The latter expression is a [[Definition:Countable Union|countable union]] of elements of $\map \sigma {A \cap \GG}$, hence again in $\map \sigma {A \cap \GG}$.
+Thus $\displaystyle \bigcup_{n \mathop \in \N} E_n \in \Sigma$.
+Therefore, $\Sigma$ is a [[Definition:Sigma-Algebra|$\sigma$-algebra]].
+It is also apparent that $\GG \subseteq \Sigma$ since:
+:$A \cap \GG \subseteq \map \sigma {A \cap \GG}$
+by definition of [[Definition:Sigma-Algebra Generated by Collection of Subsets|generated $\sigma$-algebra]].
+Thus, as $\Sigma$ is a [[Definition:Sigma-Algebra|$\sigma$-algebra]]:
+:$\map \sigma \GG \subseteq \Sigma$
+and therefore:
+:$A \cap \map \sigma \GG \subseteq \map \sigma {A \cap \GG}$
+Hence the result, by definition of [[Definition:Set Equality|set equality]].
+{{qed}}
+[[Category:Sigma-Algebras]]
+\end{proof}<|endoftext|>
+\section{Intersection is Subset/General Result}
+Tags: Set Intersection, Subsets
+
+\begin{theorem}
+Let $S$ be a [[Definition:Set|set]].
+Let $\mathcal P \left({S}\right)$ be the [[Definition:Power Set|power set]] of $S$.
+Let $\mathbb S \subseteq \mathcal P \left({S}\right)$. 
+Then:
+: $\displaystyle \forall T \in \mathbb S: \bigcap \mathbb S \subseteq T$
+=== [[Intersection is Subset/Family of Sets|Family of Sets]] ===
+In the context of a [[Definition:Indexed Family of Subsets|family of sets]], the result can be presented as follows:
+{{:Intersection is Subset/Family of Sets}}
+\end{theorem}
+
+\begin{proof}
+{{begin-eqn}}
+{{eqn | l = x
+ | o = \in
+ | r = \bigcap \mathbb S
+ | c = 
+}}
+{{eqn | ll= \implies
+ | lo= \forall T \in \mathbb S: 
+ | l = x
+ | o = \in
+ | r = T
+ | c = {{Defof|Set Intersection}}
+}}
+{{eqn | ll= \implies
+ | lo= \forall T \in \mathbb S: 
+ | l = \bigcap \mathbb S
+ | o = \subseteq
+ | r = T
+ | c = {{Defof|Subset}}
+}}
+{{end-eqn}}
+{{qed}}
+[[Category:Set Intersection]]
+[[Category:Subsets]]
+\end{proof}<|endoftext|>
+\section{Union of Relative Complements of Nested Subsets}
+Tags: Union, Set Union, Relative Complement
+
+\begin{theorem}
+Let $R \subseteq S \subseteq T$ be [[Definition:Set|sets]] with the indicated [[Definition:Subset|inclusions]].
+Then:
+:$\complement_T \left({S}\right) \cup \complement_S \left({R}\right) = \complement_T \left({R}\right)$
+where $\complement$ denotes [[Definition:Relative Complement|relative complement]].
+Phrased via [[Set Difference as Intersection with Relative Complement]]:
+:$\left({T \setminus S}\right) \cup \left({S \setminus R}\right) = T \setminus R$
+where $\setminus$ denotes [[Definition:Set Difference|set difference]]. 
+\end{theorem}
+
+\begin{proof}
+From [[Union with Set Difference]]:
+:$T = \left({T \setminus S}\right) \cup S$
+and therefore by [[Set Difference is Right Distributive over Union]]:
+:$T \setminus R = \left({\left({T \setminus S}\right) \setminus R}\right) \cup \left({S \setminus R}\right)$
+Now, by [[Set Difference with Union]] and [[Union with Superset is Superset]]:
+:$\left({T \setminus S}\right) \setminus R = T \setminus \left({S \cup R}\right) = T \setminus S$
+Combining the above yields:
+:$T \setminus R = \left({T \setminus S}\right) \cup \left({S \setminus R}\right)$
+{{qed}}
+\end{proof}<|endoftext|>
+\section{Homomorphic Image of R-Module is R-Module}
+Tags: Module Theory
+
+\begin{theorem}
+Let $\left({R, +_R, \times_R}\right)$ be a [[Definition:Ring (Abstract Algebra)|ring]].
+Let $\left({G, +_G, \circ_G}\right)_R$ be an [[Definition:Module|$R$-module]].
+Let $\left({H, +_H, \circ_H}\right)_R$ be an [[Definition:R-Algebraic Structure|$R$-algebraic structure]].
+Let $\phi: G \to H$ be a [[Definition:R-Algebraic Structure Homomorphism|homomorphism]].
+Then the [[Definition:Homomorphic Image|homomorphic image]] of $\phi$ is an [[Definition:Module|$R$-module]].
+\end{theorem}
+
+\begin{proof}
+Let us write $\phi \left({G}\right)$ to denote the [[Definition:Homomorphic Image|homomorphic image]] of $\phi$.
+From [[Image of Group Homomorphism is Subgroup]], $\phi \left({G}\right)$ is a [[Definition:Subgroup|subgroup]] of $\left({H, +_H}\right)$. 
+For any $\phi \left({g}\right)$ and $\phi \left({g'}\right)$ in $\phi \left({G}\right)$, we have:
+{{begin-eqn}}
+{{eqn|l = \phi \left({g}\right) +_H \phi \left({g'}\right)
+ |r = \phi \left({g +_G g'}\right)
+ |c = $\phi$ is a [[Definition:R-Algebraic Structure Homomorphism|homomorphism]]
+}}
+{{eqn|r = \phi \left({g' +_G g}\right)
+ |c = $\left({G, +_G}\right)$ is an [[Definition:Abelian Group|abelian group]]
+}}
+{{eqn|r = \phi \left({g'}\right) +_H \phi \left({g}\right)
+ |c = $\phi$ is a [[Definition:R-Algebraic Structure Homomorphism|homomorphism]]
+}}
+{{end-eqn}}
+hence $\phi \left({G}\right)$ is an [[Definition:Abelian Group|abelian group]].
+Now we can turn to showing that $\phi \left({G}\right)$ is an [[Definition:Module|$R$-module]].
+To do this, we take the [[Definition:Module|$R$-module]] axioms in turn.
+=== Proof of $(1)$ ===
+It is to be shown that for all $\lambda \in R$ and $\phi \left({g}\right), \phi \left({g'}\right) \in \phi \left({G}\right)$:
+:$\lambda \circ_H \left({\phi \left({g}\right) +_H \phi \left({g'}\right)}\right) = \left({\lambda \circ_H \phi \left({g}\right)}\right) +_H \left({\lambda \circ_H \phi \left({g'}\right)}\right)$
+Compute, using that $\phi$ is a [[Definition:R-Algebraic Structure Homomorphism|homomorphism]] repeatedly:
+{{begin-eqn}}
+{{eqn|l = \lambda \circ_H \left({\phi \left({g}\right) +_H \phi \left({g'}\right)}\right)
+ |r = \lambda \circ_H \phi \left({g +_G g'}\right)
+}}
+{{eqn|r = \phi \left({\lambda \circ_G \left({g +_G g'}\right)}\right)
+}}
+{{eqn|r = \phi \left({\left({\lambda \circ_G g}\right) +_G \left({\lambda \circ_G g'}\right)}\right)
+ |c = $G$ is an [[Definition:Module|$R$-module]]
+}}
+{{eqn|r = \phi \left({\lambda \circ_G g}\right) +_H \phi \left({\lambda \circ_G g'}\right)
+}}
+{{eqn|r = \left({\lambda \circ_H \phi \left({g}\right)}\right) +_H \left({\lambda \circ_H \phi \left({g'}\right)}\right)
+}}
+{{end-eqn}}
+{{qed|lemma}}
+=== Proof of $(2)$ ===
+It is to be shown that for all $\lambda, \mu 
\in R$ and $\phi \left({g}\right) \in \phi \left({G}\right)$:
+:$\left({\lambda +_R \mu}\right) \circ_H \phi \left({g}\right) = \left({\lambda \circ_H \phi \left({g}\right)}\right) +_H \left({\mu \circ_H \phi \left({g}\right)}\right)$
+Compute, using that $\phi$ is a [[Definition:R-Algebraic Structure Homomorphism|homomorphism]] repeatedly:
+{{begin-eqn}}
+{{eqn|l = \left({\lambda +_R \mu}\right) \circ_H \phi \left({g}\right)
+ |r = \phi \left({\left({\lambda +_R \mu}\right) \circ_G g}\right)
+}}
+{{eqn|r = \phi \left({\left({\lambda \circ_G g}\right) +_G \left({\mu \circ_G g}\right)}\right)
+ |c = $G$ is an [[Definition:Module|$R$-module]]
+}}
+{{eqn|r = \phi \left({\lambda \circ_G g}\right) +_H \phi \left({\mu \circ_G g}\right)
+}}
+{{eqn|r = \left({\lambda \circ_H \phi \left({g}\right)}\right) +_H \left({\mu \circ_H \phi \left({g}\right)}\right)
+}}
+{{end-eqn}}
+{{qed|lemma}}
+=== Proof of $(3)$ ===
+It is to be shown that for all $\lambda, \mu \in R$ and $\phi \left({g}\right) \in \phi \left({G}\right)$:
+:$\left({\lambda \times_R \mu}\right) \circ_H \phi \left({g}\right) = \lambda \circ_H \left({\mu \circ_H \phi \left({g}\right)}\right)$
+Compute, using that $\phi$ is a [[Definition:R-Algebraic Structure Homomorphism|homomorphism]] repeatedly:
+{{begin-eqn}}
+{{eqn|l = \left({\lambda \times_R \mu}\right) \circ_H \phi \left({g}\right)
+ |r = \phi \left({\left({\lambda \times_R \mu}\right) \circ_G g}\right)
+}}
+{{eqn|r = \phi \left({\lambda \circ_G \left({\mu \circ_G g}\right)}\right)
+ |c = $G$ is an [[Definition:Module|$R$-module]]
+}}
+{{eqn|r = \lambda \circ_H \phi \left({\mu \circ_G g}\right)
+}}
+{{eqn|r = \lambda \circ_H \left({\mu \circ_H \phi \left({g}\right)}\right)
+}}
+{{end-eqn}}
+{{qed|lemma}}
+Having verified that $\phi \left({G}\right)$ satisfies the three axioms, we conclude it is an [[Definition:Module|$R$-module]]. 
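+As an informal illustration of the axiom-checking above (not part of the proof), take $R = \Z$, $G = \Z$ regarded as a module over itself, $H = \Z / 6 \Z$, and $\phi$ the reduction map $g \mapsto g \bmod 6$, which is a homomorphism of these structures. The three axioms can then be spot-checked numerically on finite ranges; the modulus and window bounds below are arbitrary choices:

```python
n = 6                       # modulus for the image Z/6Z; arbitrary choice
phi = lambda g: g % n       # reduction homomorphism Z -> Z/6Z

scalars = range(-5, 6)      # finite window of the scalar ring Z
elements = range(-20, 21)   # finite window of the module Z

for lam in scalars:
    for mu in scalars:
        for g in elements:
            # (2): (λ +_R μ) ∘ x = (λ ∘ x) +_H (μ ∘ x)
            assert ((lam + mu) * phi(g)) % n == (lam * phi(g) + mu * phi(g)) % n
            # (3): (λ ×_R μ) ∘ x = λ ∘ (μ ∘ x)
            assert ((lam * mu) * phi(g)) % n == (lam * ((mu * phi(g)) % n)) % n
            for h in elements:
                # (1): λ ∘ (x +_H y) = (λ ∘ x) +_H (λ ∘ y)
                assert (lam * ((phi(g) + phi(h)) % n)) % n == (lam * phi(g) + lam * phi(h)) % n
```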
+{{qed}}
+\end{proof}<|endoftext|>
+\section{Evaluation Linear Transformation is Linear Transformation}
+Tags: Linear Transformations
+
+\begin{theorem}
+Let $R$ be a [[Definition:Commutative Ring|commutative ring]].
+Let $G$ be an [[Definition:Module|$R$-module]].
+Let $G^*$ be the [[Definition:Algebraic Dual|algebraic dual]] of $G$.
+Let $G^{**}$ be the [[Definition:Algebraic Dual|algebraic dual]] of $G^*$.
+Let the [[Definition:Mapping|mapping]] $J: G \to G^{**}$ be the [[Definition:Evaluation Linear Transformation|evaluation linear transformation]] from $G$ into $G^{**}$ defined as:
+:$\forall x \in G: \map J x = x^\wedge$
+where for each $x \in G$, $x^\wedge: G^* \to R$ is defined as:
+:$\forall t' \in G^*: \map {x^\wedge} {t'} = \map {t'} x$
+
+Then:
+:$(1): \quad x^\wedge \in G^{**}$
+:$(2): \quad J$ is a [[Definition:Linear Transformation|linear transformation]].
+\end{theorem}
+
+\begin{proof}
+$(1):$ First we show that $x^\wedge \in G^{**}$:
+{{ProofWanted}}
+$(2):$ Then we show that $J: G \to G^{**}$ is a [[Definition:Linear Transformation|linear transformation]]:
+{{ProofWanted}}
+\end{proof}<|endoftext|>
+\section{Evaluation Isomorphism is Isomorphism}
+Tags: Linear Transformations
+
+\begin{theorem}
+Let $R$ be a [[Definition:Commutative Ring|commutative ring]].
+Let $G$ be a [[Definition:Unitary Module|unitary $R$-module]] whose [[Definition:Dimension of Module|dimension]] is [[Definition:Finite|finite]].
+Then the [[Definition:Evaluation Linear Transformation|evaluation linear transformation]] $J: G \to G^{**}$ is an [[Definition:Module Isomorphism|isomorphism]]. 
+\end{theorem}
+
+\begin{proof}
+Let $\left \langle {a_n} \right \rangle$ be an [[Definition:Ordered Basis|ordered basis]] of $G$.
+Then $\left \langle {J \left({a_n}\right)} \right \rangle$ is the [[Definition:Ordered Dual Basis|ordered basis of $G^{**}$ dual]] to the [[Definition:Ordered Basis|ordered basis of $G^*$ dual]] to $\left \langle {a_n} \right \rangle$.
+{{ProofWanted}}
+From this it follows that $J$ is an [[Definition:Module Isomorphism|isomorphism]].
+\end{proof}<|endoftext|>
+\section{Conditions for Homogeneity/Straight Line}
+Tags: Linear Algebra, Analytic Geometry
+
+\begin{theorem}
+The [[Equation of Straight Line in Plane|line]] $L = \alpha_1 x_1 + \alpha_2 x_2 = \beta$ is [[Definition:Homogeneous (Analytic Geometry)|homogeneous]] {{iff}} $\beta = 0$.
+\end{theorem}
+
+\begin{proof}
+Let the line $L = \alpha_1 x_1 + \alpha_2 x_2 = \beta$ be homogeneous.
+Then the [[Definition:Origin|origin]] $\tuple {0, 0}$ lies on the line $L$.
+That is, $\alpha_1 0 + \alpha_2 0 = \beta \implies \beta = 0$.
+Conversely, let the [[Equation of Straight Line in Plane|equation]] of $L$ be $L = \alpha_1 x_1 + \alpha_2 x_2 = 0$.
+Then $\alpha_1 0 + \alpha_2 0 = 0$ and so $\tuple {0, 0}$ lies on the line $L$.
+Hence $L$ is [[Definition:Homogeneous (Analytic Geometry)|homogeneous]].
+{{qed}}
+\end{proof}<|endoftext|>
+\section{Conditions for Homogeneity/Plane}
+Tags: Linear Algebra, Solid Analytic Geometry
+
+\begin{theorem}
+The [[Equation of Plane|plane]] $P = \alpha_1 x_1 + \alpha_2 x_2 + \alpha_3 x_3 = \gamma$ is [[Definition:Homogeneous (Analytic Geometry)|homogeneous]] {{iff}} $\gamma = 0$.
+\end{theorem}
+
+\begin{proof}
+Let the plane $P = \alpha_1 x_1 + \alpha_2 x_2 + \alpha_3 x_3 = \gamma$ be homogeneous.
+Then the [[Definition:Origin|origin]] $\left({0, 0, 0}\right)$ lies on the plane $P$.
+That is, $\alpha_1 0 + \alpha_2 0 + \alpha_3 0 = \gamma \implies \gamma = 0$.
+Conversely, let the [[Equation of Plane|equation]] of $P$ be $P = \alpha_1 x_1 + \alpha_2 x_2 + \alpha_3 x_3 = 0$. 
+Then $\alpha_1 0 + \alpha_2 0 + \alpha_3 0 = 0$ and so $\left({0, 0, 0}\right)$ lies on the plane $P$.
+Hence $P$ is [[Definition:Homogeneous (Analytic Geometry)|homogeneous]].
+{{qed}}
+\end{proof}<|endoftext|>
+\section{Zero Matrix is Identity for Hadamard Product}
+Tags: Hadamard Product, Zero Matrix
+
+\begin{theorem}
+Let $\struct {S, \cdot}$ be a [[Definition:Monoid|monoid]] whose [[Definition:Identity Element|identity]] is $e$.
+Let $\map {\MM_S} {m, n}$ be an [[Definition:Matrix Space|$m \times n$ matrix space]] over $S$.
+Let $\mathbf e = \sqbrk e_{m n}$ be the [[Definition:Zero Matrix over General Monoid|zero matrix]] of $\map {\MM_S} {m, n}$.
+Then $\mathbf e$ is the [[Definition:Identity Element|identity element]] for [[Definition:Hadamard Product|Hadamard product]].
+\end{theorem}
+
+\begin{proof}
+Let $\mathbf A = \sqbrk a_{m n} \in \map {\MM_S} {m, n}$.
+Then:
+{{begin-eqn}}
+{{eqn | l = \mathbf A \circ \mathbf e
+ | r = \sqbrk a_{m n} \circ \sqbrk e_{m n}
+ | c = Definition of $\mathbf A$ and $\mathbf e$
+}}
+{{eqn | r = \sqbrk {a \cdot e}_{m n}
+ | c = {{Defof|Hadamard Product}}
+}}
+{{eqn | r = \sqbrk a_{m n}
+ | c = {{Defof|Identity Element}}
+}}
+{{eqn | ll= \leadsto
+ | l = \mathbf A \circ \mathbf e
+ | r = \mathbf A
+ | c = {{Defof|Zero Matrix over General Monoid}}
+}}
+{{end-eqn}}
+Similarly:
+{{begin-eqn}}
+{{eqn | l = \mathbf e \circ \mathbf A
+ | r = \sqbrk e_{m n} \circ \sqbrk a_{m n}
+ | c = Definition of $\mathbf A$ and $\mathbf e$
+}}
+{{eqn | r = \sqbrk {e \cdot a}_{m n}
+ | c = {{Defof|Hadamard Product}}
+}}
+{{eqn | r = \sqbrk a_{m n}
+ | c = {{Defof|Identity Element}}
+}}
+{{eqn | ll= \leadsto
+ | l = \mathbf e \circ \mathbf A
+ | r = \mathbf A
+ | c = {{Defof|Zero Matrix over General Monoid}}
+}}
+{{end-eqn}}
+{{qed}}
+\end{proof}<|endoftext|>
+\section{Sum of Ideals is Ideal/General Result}
+Tags: Ideal Theory
+
+\begin{theorem}
+Let $J_1, J_2, \ldots, J_n$ be [[Definition:Ideal of Ring|ideals]] of a [[Definition:Ring 
(Abstract Algebra)|ring]] $\struct {R, +, \circ}$.
+Then:
+: $J = J_1 + J_2 + \cdots + J_n$ is an [[Definition:Ideal of Ring|ideal]] of $R$
+where $J_1 + J_2 + \cdots + J_n$ is as defined in [[Definition:Subset Product|subset product]].
+\end{theorem}
+
+\begin{proof}
+Let $J_1, J_2, \ldots, J_n$ be [[Definition:Ideal of Ring|ideals]] of a [[Definition:Ring (Abstract Algebra)|ring]] $\struct {R, +, \circ}$.
+Proof by [[Principle of Mathematical Induction|induction]]:
+For all $n \in \N^*$, let $\map P n$ be the [[Definition:Proposition|proposition]]:
+:$J_1 + J_2 + \cdots + J_n$ is an [[Definition:Ideal of Ring|ideal]] of $R$.
+$\map P 1$ is true, as this just says $J_1$ is an [[Definition:Ideal of Ring|ideal]] of $R$.
+=== Basis for the Induction ===
+$\map P 2$ is the case:
+:$J_1 + J_2$ is an [[Definition:Ideal of Ring|ideal]] of $R$
+which is proved in [[Sum of Ideals is Ideal]].
+This is our [[Principle of Mathematical Induction#Basis for the Induction|basis for the induction]].
+=== Induction Hypothesis ===
+Now we need to show that, if $\map P k$ is true, where $k \ge 2$, then it logically follows that $\map P {k + 1}$ is true.
+So this is our [[Principle of Mathematical Induction#Induction Hypothesis|induction hypothesis]]:
+:$J_1 + J_2 + \cdots + J_k$ is an [[Definition:Ideal of Ring|ideal]] of $R$.
+Then we need to show:
+:$J_1 + J_2 + \cdots + J_k + J_{k + 1}$ is an [[Definition:Ideal of Ring|ideal]] of $R$.
+=== Induction Step ===
+This is our [[Principle of Mathematical Induction#Induction Step|induction step]]:
+Let $J = J_1 + J_2 + \cdots + J_k$.
+From the [[Sum of Ideals is Ideal/General Result#Induction Hypothesis|induction hypothesis]], $J$ is an [[Definition:Ideal of Ring|ideal]].
+From the [[Sum of Ideals is Ideal/General Result#Basis for the Induction|base case]], $J + J_{k + 1}$ is an [[Definition:Ideal of Ring|ideal]].
+That is:
+:$J_1 + J_2 + \cdots + J_k + J_{k + 1}$ is an [[Definition:Ideal of Ring|ideal]] of $R$. 
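+As a purely illustrative aside, the statement can be spot-checked in the concrete ring $\Z$, where every ideal has the form $n \Z$ and the sum $a \Z + b \Z + c \Z$ is the ideal generated by $\gcd \left({a, b, c}\right)$. The sketch below samples finite windows of three such ideals; the moduli and window bounds are arbitrary choices:

```python
from math import gcd
from itertools import product

def ideal_window(n, bound):
    """A finite window of the ideal nZ of Z."""
    return {n * k for k in range(-bound, bound + 1)}

a, b, c = 12, 18, 20      # arbitrary example moduli
J = {x + y + z for x, y, z in product(ideal_window(a, 5),
                                      ideal_window(b, 5),
                                      ideal_window(c, 5))}

d = gcd(gcd(a, b), c)     # gcd(12, 18, 20) = 2
assert all(x % d == 0 for x in J)   # every sampled sum lies in dZ
assert d in J                       # d is attained: 0*12 + (-1)*18 + 1*20 = 2
```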
+So $\map P k \implies \map P {k + 1}$ and the result follows by the [[Principle of Mathematical Induction]].
+Therefore:
+:$\forall n \in \N: J_1 + J_2 + \cdots + J_n$ is an [[Definition:Ideal of Ring|ideal]] of $R$.
+{{qed}}
+\end{proof}<|endoftext|>
+\section{Integer is Expressible as Product of Primes}
+Tags: Prime Numbers, Factorization, Integer is Expressible as Product of Primes, Prime Decompositions
+
+\begin{theorem}
+Let $n$ be an [[Definition:Integer|integer]] such that $n > 1$.
+Then $n$ can be expressed as the [[Definition:Integer Multiplication|product]] of one or more [[Definition:Prime Number|primes]].
+\end{theorem}
+
+\begin{proof}
+{{AimForCont}} the result is false.
+Let $m$ be the smallest [[Definition:Integer|integer]] which cannot be expressed as the [[Definition:Integer Multiplication|product]] of [[Definition:Prime Number|primes]].
+As a [[Definition:Prime Number|prime number]] is trivially a [[Definition:Integer Multiplication|product]] of [[Definition:Prime Number|primes]], $m$ cannot itself be [[Definition:Prime Number|prime]].
+Hence:
+:$\exists r, s \in \Z: 1 < r < m, 1 < s < m: m = r s$
+As $m$ is our [[Principle of Least Counterexample|least counterexample]], both $r$ and $s$ can be expressed as the [[Definition:Integer Multiplication|product]] of [[Definition:Prime Number|primes]].
+Say $r = p_1 p_2 \cdots p_k$ and $s = q_1 q_2 \cdots q_l$, where all of $p_1, \ldots, p_k, q_1, \ldots, q_l$ are [[Definition:Prime Number|prime]].
+Hence $m = r s = p_1 p_2 \cdots p_k q_1 q_2 \cdots q_l$, which is a [[Definition:Integer Multiplication|product]] of [[Definition:Prime Number|primes]].
+This [[Definition:Contradiction|contradicts]] the choice of $m$, so there is no such counterexample.
+{{qed}}
+\end{proof}
+
+\begin{proof}
+If $n$ is [[Definition:Prime Number|prime]], the result is immediate.
+Let $n$ be [[Definition:Composite Number|composite]]. 
+Then by [[Composite Number has Two Divisors Less Than It]]: +:$\exists r, s \in \Z: n = r s, 1 < r < n, 1 < s < n$ +This being the case, the set $S_1 = \set {d: d \divides n, 1 < d < n}$ is [[Definition:Non-Empty Set|nonempty]], and [[Definition:Bounded Below Set|bounded below]] by $1$. +By [[Set of Integers Bounded Below by Integer has Smallest Element]], $S_1$ has a [[Definition:Smallest Element|smallest element]], which we will call $p_1$. +{{AimForCont}} $p_1$ is [[Definition:Composite Number|composite]]. +By [[Composite Number has Two Divisors Less Than It]], there exist $a, b$ such that $a, b \divides p_1$ and $1 < a < p_1, 1 < b < p_1$. +But by [[Divisor Relation on Positive Integers is Partial Ordering]], it follows that $a, b \divides n$ and hence $a, b \in S_1$. +This [[Definition:Contradiction|contradicts]] the assertion that $p_1$ is the [[Definition:Smallest Element|smallest element]] of $S_1$. +Thus, $p_1$ is necessarily [[Definition:Prime Number|prime]]. +We may now write $n = p_1 n_1$, where $n > n_1 > 1$. +If $n_1$ is [[Definition:Prime Number|prime]], the proof is complete. +Otherwise, the set $S_2 = \set {d: d \divides n_1, 1 < d < n_1}$ is [[Definition:Non-Empty Set|nonempty]], and [[Definition:Bounded Below Set|bounded below]] by $1$. +By the above argument, the [[Definition:Smallest Element|smallest element]] $p_2$ of $S_2$ is [[Definition:Prime Number|prime]]. +Thus we may write $n_1 = p_2 n_2$, where $1 < n_2 < n_1$. +This gives us $n = p_1 p_2 n_2$. +If $n_2$ is [[Definition:Prime Number|prime]], we are done. +Otherwise, we continue this process. +Since $n > n_1 > n_2 > \cdots > 1$ is a [[Definition:Strictly Decreasing Sequence|(strictly) decreasing sequence]] of [[Definition:Positive Integer|positive integers]], there must be a [[Definition:Finite Set|finite number]] of $n_i$'s. +That is, we will arrive at some [[Definition:Prime Number|prime number]] $n_{k - 1}$, which we will call $p_k$.
+This results in the [[Definition:Prime Decomposition|prime decomposition]] $n = p_1 p_2 \cdots p_k$. +{{qed}} +\end{proof} + +\begin{proof} +The proof proceeds by [[Second Principle of Mathematical Induction|strong induction]]. +For all $n \in \N_{> 1}$, let $\map P n$ be the [[Definition:Proposition|proposition]]: +:$n$ can be expressed as a product of [[Definition:Prime Number|prime numbers]]. +First note that if $n$ is [[Definition:Prime Number|prime]], the result is immediate. +=== Basis for the Induction === +$\map P 2$ is the case: +:$2$ can be expressed as a product of [[Definition:Prime Number|prime numbers]]. +As $2$ is itself a [[Definition:Prime Number|prime number]], the result is immediate. +This is the [[Second Principle of Mathematical Induction#Basis for the Induction|basis for the induction]]. +=== Induction Hypothesis === +Now it needs to be shown that, if $\map P j$ is true, for all $j$ such that $2 \le j \le k$, then it logically follows that $\map P {k + 1}$ is true. +So this is the [[Second Principle of Mathematical Induction#Induction Hypothesis|induction hypothesis]]: +:For all $j \in \N$ such that $2 \le j \le k$, $j$ can be expressed as a product of [[Definition:Prime Number|prime numbers]]. +from which it is to be shown that: +:$k + 1$ can be expressed as a product of [[Definition:Prime Number|prime numbers]]. +=== Induction Step === +This is the [[Second Principle of Mathematical Induction#Induction Step|induction step]]: +If $k + 1$ is [[Definition:Prime Number|prime]], then the result is immediate. +Otherwise, $k + 1$ is [[Definition:Composite Number|composite]] and can be expressed as: +:$k + 1 = r s$ +where $2 \le r < k + 1$ and $2 \le s < k + 1$. +That is, $2 \le r \le k$ and $2 \le s \le k$. +Thus by the [[Integer is Expressible as Product of Primes/Proof 3#Induction Hypothesis|induction hypothesis]], both $r$ and $s$ can be expressed as a product of [[Definition:Prime Number|primes]].
+So $k + 1 = r s$ can also be expressed as a product of [[Definition:Prime Number|primes]]. +So $\map P k \implies \map P {k + 1}$ and the result follows by the [[Second Principle of Mathematical Induction]]. +Therefore, for all $n \in \N_{> 1}$: +:$n$ can be expressed as a product of [[Definition:Prime Number|prime numbers]]. +{{qed}} +\end{proof}<|endoftext|> +\section{Expression for Integer as Product of Primes is Unique} +Tags: Prime Numbers, Factorization, Expression for Integer as Product of Primes is Unique, Prime Decompositions + +\begin{theorem} +Let $n$ be an [[Definition:Integer|integer]] such that $n > 1$. +Then the expression for $n$ as the [[Definition:Integer Multiplication|product]] of one or more [[Definition:Prime Number|primes]] is [[Definition:Unique|unique]] up to the order in which they appear. +\end{theorem} + +\begin{proof} +{{AimForCont}} the result is false. +That is, suppose there is at least one [[Definition:Positive Integer|positive integer]] that can be expressed in more than one way as a product of [[Definition:Prime Number|primes]]. +Let the smallest of these be $m$. +Thus: +:$m = p_1 p_2 \cdots p_r = q_1 q_2 \cdots q_s$ +where all of $p_1, \ldots, p_r, q_1, \ldots, q_s$ are [[Definition:Prime Number|prime]]. +By definition, $m$ is not itself [[Definition:Prime Number|prime]]. +Therefore: +:$r, s \ge 2$ +Let us arrange that the [[Definition:Prime Number|primes]] which compose $m$ are in order of size: +:$p_1 \le p_2 \le \dots \le p_r$ +and: +:$q_1 \le q_2 \le \dots \le q_s$ +Without loss of generality, suppose that $p_1 \le q_1$. +Suppose $p_1 = q_1$. +Then: +:$\dfrac m {p_1} = p_2 p_3 \cdots p_r = q_2 q_3 \cdots q_s = \dfrac m {q_1}$ +But then the [[Definition:Positive Integer|positive integer]] $\dfrac m {p_1} < m$ is expressible in two different ways as a product of [[Definition:Prime Number|primes]]. +This contradicts the fact that $m$ is the smallest [[Definition:Positive Integer|positive integer]] that can be so expressed.
+Therefore: +:$p_1 \ne q_1 \implies p_1 < q_1 \implies p_1 < q_2, q_3, \ldots, q_s$ +as we arranged them in order. +From [[Prime not Divisor implies Coprime]]: +:$1 < p_1 < q_j: 1 \le j \le s \implies p_1 \nmid q_j$ + +But: +:$p_1 \divides m \implies p_1 \divides q_1 q_2 \ldots q_s$ +where $\divides$ denotes [[Definition:Divisor of Integer|divisibility]]. +Thus from [[Euclid's Lemma for Prime Divisors]]: +:$\exists j: 1 \le j \le s: p_1 \divides q_j$ +But it was shown above that $p_1 \nmid q_j$ for all such $j$. +This is a [[Definition:Contradiction|contradiction]]. +Hence, by [[Proof by Contradiction]], the supposition was false. +{{Qed}} +\end{proof} + +\begin{proof} +{{AimForCont}} $n$ has two [[Definition:Prime Decomposition|prime factorizations]]: +:$n = p_1 p_2 \dots p_r = q_1 q_2 \dots q_s$ +where $r \le s$ and each $p_i$ and $q_j$ is prime with $p_1 \le p_2 \le \dots \le p_r$ and $q_1 \le q_2 \le \dots \le q_s$. +Since $p_1 \divides q_1 q_2 \dots q_s$, it follows from [[Euclid's Lemma for Prime Divisors]] that $p_1 = q_j$ for some $1 \le j \le s$. +Thus: +:$p_1 \ge q_1$ +Similarly, since $q_1 \divides p_1 p_2 \dots p_r$, from [[Euclid's Lemma for Prime Divisors]]: +:$q_1 \ge p_1$ +Thus, $p_1 = q_1$, so we may cancel these [[Definition:Common Divisor of Integers|common factors]], which gives: +:$p_2 p_3 \dots p_r = q_2 q_3 \dots q_s$ +This process is repeated to show that: +:$p_2 = q_2, p_3 = q_3, \ldots, p_r = q_r$ +If $r < s$, we arrive at $1 = q_{r + 1} q_{r + 2} \cdots q_s$ after canceling all [[Definition:Common Divisor of Integers|common factors]]. +But by [[Divisors of One]], the only [[Definition:Divisor of Integer|divisors]] of $1$ are $1$ and $-1$. +Hence $q_{r + 1}, q_{r + 2}, \ldots, q_s$ cannot be [[Definition:Prime Number|prime numbers]]. +From that [[Proof by Contradiction|contradiction]] it follows that $r = s$.
+Thus: +:$p_1 = q_1, p_2 = q_2, \ldots, p_r = q_r$ +which means the two [[Definition:Prime Decomposition|factorizations]] are identical. +Therefore, the [[Definition:Prime Decomposition|prime factorization]] of $n$ is [[Definition:Unique|unique]]. +{{Qed}} +\end{proof} + +\begin{proof} +The proof proceeds by [[Second Principle of Mathematical Induction|strong induction]]. +For all $n \in \Z_{\ge 2}$, let $\map P n$ be the [[Definition:Proposition|proposition]]: +:the [[Definition:Prime Decomposition|prime decomposition]] for $n$ is [[Definition:Unique|unique]] up to order of presentation. +Note that it has been established in [[Integer is Expressible as Product of Primes]] that $n$ does in fact have at least $1$ [[Definition:Prime Decomposition|prime decomposition]]. +=== Basis for the Induction === +$\map P 2$ is the case: +:$n = 2$ +which is trivially unique. +Thus $\map P 2$ is seen to hold. +This is the [[Second Principle of Mathematical Induction#Basis for the Induction|basis for the induction]]. +=== Induction Hypothesis === +Now it needs to be shown that, if $\map P j$ is true, for all $j$ such that $2 \le j \le k$, then it logically follows that $\map P {k + 1}$ is true. +This is the [[Second Principle of Mathematical Induction#Induction Hypothesis|induction hypothesis]]: +:the [[Definition:Prime Decomposition|prime decomposition]] for all $j$ such that $2 \le j \le k$ is [[Definition:Unique|unique]] up to order of presentation. +from which it is to be shown that: +:the [[Definition:Prime Decomposition|prime decomposition]] for $k + 1$ is [[Definition:Unique|unique]] up to order of presentation. +=== Induction Step === +Either $k + 1$ is [[Definition:Prime Number|prime]] or it is [[Definition:Composite Number|composite]]. +If $k + 1$ is [[Definition:Prime Number|prime]], there is only one way to express it, that is, as the [[Definition:Prime Number|prime]] $k + 1$ itself. +So suppose $k + 1$ is [[Definition:Composite Number|composite]].
+{{AimForCont}} $k + 1$ has the two [[Definition:Prime Decomposition|prime decompositions]]: +:$k + 1 = p_1 p_2 \dotsm p_r = q_1 q_2 \dotsm q_s$ +where: +:$p_1 \le p_2 \le \dotsb \le p_r$ +and: +:$q_1 \le q_2 \le \dotsb \le q_s$ +Because $q_1$ is a [[Definition:Divisor of Integer|divisor]] of $k + 1$: +:$q_1 \divides p_1 p_2 \dotsm p_r$ +where $\divides$ indicates [[Definition:Divisor of Integer|divisibility]]. +Thus by [[Euclid's Lemma for Prime Divisors/General Result|Euclid's Lemma for Prime Divisors]]: +:$\exists p_i \in \set {p_1, p_2, \ldots, p_r}: q_1 \divides p_i$ +But as $q_1$ and $p_i$ are both [[Definition:Prime Number|primes]], it follows by definition that $q_1 = p_i$. +In a similar way it can be shown that: +:$\exists q_j \in \set {q_1, q_2, \ldots, q_s}: p_1 = q_j$ +So we have: +:$p_1 = q_j \ge q_1$ +and: +:$q_1 = p_i \ge p_1$ +Thus: +:$p_1 \ge q_1 \ge p_1$ +and so: +:$p_1 = q_1$ +Thus $\dfrac {k + 1} {p_1}$ is an [[Definition:Integer|integer]] such that: +:$\dfrac {k + 1} {p_1} \le k$ +and so: +:$\dfrac {k + 1} {p_1} = p_2 p_3 \dotsm p_r = q_2 q_3 \dotsm q_s$ +But by the [[Expression for Integer as Product of Primes is Unique/Proof 3#Induction Hypothesis|induction hypothesis]]: +:$p_2 = q_2, p_3 = q_3, \dotsc, p_r = q_s$ +where furthermore $r = s$. +Therefore the [[Definition:Prime Decomposition|prime decomposition]] for $k + 1$ is [[Definition:Unique|unique]]. +So $\map P k \implies \map P {k + 1}$ and the result follows by the [[Second Principle of Mathematical Induction]]: +:$\forall n \in \Z_{\ge 2}$, the [[Definition:Prime Decomposition|prime decomposition]] for $n$ is [[Definition:Unique|unique]] up to order of presentation. +{{qed}} +\end{proof} \ No newline at end of file
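The constructive argument in the second proof of [[Integer is Expressible as Product of Primes]] (split off the smallest divisor exceeding $1$, which is necessarily prime, and repeat on the quotient) is directly executable. The following Python sketch is an illustration added alongside the proofs, not part of them; the name `prime_decomposition` and the trial-division search for the smallest divisor are choices of this sketch.

```python
def prime_decomposition(n):
    """Sketch of the constructive proof: peel off the smallest
    divisor d with 1 < d <= n at each step; by the argument in the
    proof, that d is necessarily prime."""
    assert n > 1, "the theorem concerns integers n > 1"
    factors = []
    while n > 1:
        # smallest element of {d : d | n, d > 1}; prime per the proof
        d = next(d for d in range(2, n + 1) if n % d == 0)
        factors.append(d)
        n //= d
    return factors

# Factors emerge in nondecreasing order, so by uniqueness up to order
# any valid prime factorization of 360, once sorted, equals this list.
print(prime_decomposition(360))  # -> [2, 2, 2, 3, 3, 5]
```

Because the smallest divisor is extracted first, the returned list is already sorted; the uniqueness theorem says sorting any other prime factorization of the same $n$ yields the identical list.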