Embedding a manifold in the disk  The proof of this has two steps.
Step 1: Denote the map $M \to S^{n+k}$ by $f$. Denote the inclusion $S^{n+k} \to D^{n+k+1}$ by $i$. Then the map $i \circ f : M \to D^{n+k+1}$ extends to a smooth function $g : W \to D^{n+k+1}$. You can define this extension in a variety of ways. A natural extension would be to take a collar neighbourhood of $M$ in $W$, $\epsilon : M \times [0,1] \to W$, and then define $g$ inside the collar neighbourhood by $g(\epsilon(m,t)) = t^2m$ and outside the collar neighbourhood, define $g$ to be the zero vector. Technically, you'll need to replace that $t^2$ by some $C^\infty$ increasing homeomorphism $[0,1]\to[0,1]$ all of whose derivatives vanish at zero. With $t^2$ all you get is a $C^1$ embedding; if instead you use $t^3$ you have a $C^2$ embedding. So the function you need is $e^{1-\frac{1}{t^2}}$, with it defined to be zero at zero. I need my collar neighbourhood to be of the form $\epsilon(M \times \{1\}) = M = \partial W$.
Step 2: Smooth approximation theory says you can approximate $g$ by an embedding (and since the map is already neat on the boundary, it will be a neat approximation). The approximation theory only works if the ambient space has dimension strictly larger than twice the dimension of the domain, so you'll need $2(n+1)+1 \leq n+k+1$, equivalently $2n+3 \leq n+k+1$ or $n+2 \leq k$.
And this is Hirsch's statement. 
Polar equation of an ellipse in polar axis with pole not in origin  The parametric equations are
$$x=4\cos (t ) $$
$$y=2+3\sin(t) $$ 
$f(x) = 1 / \lvert x \rvert^2$, $x\in \mathbb{R}^3$, for the Fourier transform $F$, prove by scaling: $ F(f) (y) = C \frac{1}{\lvert y\rvert}. $  So first $\lvert x\rvert^{-a}$ is in $L^1_{\mathrm{loc}}(\mathbb{R}^d)$ as soon as $a < d$ since by a radial change of variable
$$
\int_{\lvert x\rvert<1} \frac{\mathrm{d}x}{\lvert x\rvert^a} = \omega_d\int_0^1 r^{d-1-a}\, \mathrm{d}r = \frac{\omega_d}{d-a} < \infty
$$
where $\omega_d = \frac{2\pi^{d/2}}{\Gamma(d/2)}$ is the size of the unit sphere in $\mathbb{R}^d$. In particular, if $d=3$, $\lvert x\rvert^{-a}$ is a tempered distribution as soon as $a<3$.
Now, to get the form of the Fourier transform, just remark that since $f(x) = \frac{1}{\lvert x\rvert}$ is radial, its Fourier transform $\mathcal{F}(f)=\hat{f}$ is also radial. Moreover, for any $\lambda > 0$ and $y\in\mathbb{R}^3$
$$
\hat{f}(\lambda\,y) = \frac{1}{\lambda^d}\mathcal{F}_x\left(f(x/\lambda)\right)(y) = \frac{1}{\lambda^d}\mathcal{F}_x\left(\frac{\lambda}{\lvert x\rvert}\right)(y) = \frac{1}{\lambda^{d-1}}\mathcal{F}_x\left(\frac{1}{\lvert x\rvert}\right)(y) = \frac{1}{\lambda^{d-1}}\hat{f}(y)
$$
In particular, taking $\lambda = \lvert z\rvert$ and $y = \frac{z}{\lvert z\rvert}$, one gets for $z\neq 0$
$$
\hat{f}(z)= \frac{1}{\lvert z\rvert^{d-1}}\hat{f}\left(\tfrac{z}{\lvert z\rvert}\right),
$$
and actually, the equality also holds as tempered distributions since this is the unique tempered distribution with this homogeneity. Since $\hat{f}$ is radial, $\hat{f}(\tfrac{z}{\lvert z\rvert}) = \hat{f}(e_1) = C$ is a constant. Therefore
$$
\hat{f}(z)= \frac{C}{\lvert z\rvert^{d-1}}
$$
Remark: one can get the constant by expressing $\lvert x\rvert^{-a}$ as an integral of Gaussian functions and using the known expression of the Fourier transform of a Gaussian. One can find this constant in the book Analysis by Lieb and Loss, for example.
With the convention $\mathcal{F}(f)(y) = \int_{\mathbb{R}^d} e^{-2i\pi\, x\cdot y}\,f(x)\,\mathrm{d}x$, one gets for $a\in(0,d)$
$$
\mathcal{F}\left(\frac{1}{\omega_a\lvert x\rvert^a}\right) = \frac{1}{\omega_{d-a}\lvert x\rvert^{d-a}}
$$
In the case when $a=d$, one gets $\mathcal{F}\left(\frac{1}{\omega_d\lvert x\rvert^d}\right) = \frac{\psi(d/2)-\gamma}{2} - \ln(\pi\lvert x\rvert)$ as proved here: The Fourier transform of $1/p^3$. 
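The normalized identity above can be sanity-checked numerically. This sketch (not part of the original answer) assumes the classical Riesz formula $\mathcal F(\lvert x\rvert^{-a})(y)=\pi^{a-d/2}\frac{\Gamma((d-a)/2)}{\Gamma(a/2)}\lvert y\rvert^{-(d-a)}$ under the same $e^{-2i\pi x\cdot y}$ convention; the function names are mine:

```python
from math import pi, gamma

def omega(s):
    # "size of the unit sphere" in R^s: omega_s = 2 * pi^(s/2) / Gamma(s/2)
    return 2 * pi ** (s / 2) / gamma(s / 2)

def riesz_const(a, d):
    # constant c(a, d) in F(|x|^-a)(y) = c(a, d) * |y|^-(d-a)
    return pi ** (a - d / 2) * gamma((d - a) / 2) / gamma(a / 2)

# The identity F(1/(omega_a |x|^a)) = 1/(omega_{d-a} |x|^{d-a}) amounts to
# riesz_const(a, d) / omega(a) == 1 / omega(d - a), which the test verifies.
```

For the question's case $a=2$, $d=3$, `riesz_const(2, 3)` evaluates to $\pi$, i.e. $\mathcal F(1/\lvert x\rvert^2) = \pi/\lvert y\rvert$ in $\mathbb R^3$.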
Hundred-Digit Challenge - problem 2 - math's idea of solution  A complete, purely mathematical approach would involve a high number of tedious passages and calculations (the solution to this problem is known to require $14$ reflections before reaching $t=10$). However, if I interpret the question correctly, it asks for a simple general idea of a solution.
If this is the case, we could choose different methods (e.g. using vectors, analytic geometry, trigonometry). The analytic geometry approach might be rather intuitive. We have to consider a sequence of successive segments on a Cartesian plane, each characterized by a starting point, a running line (the line actually followed by the photon in its path), a collision point (identified by determining the first crossing point between the running line and a circle), and a reflection line (whose angle is determined by the slope of the tangent to the circle in the collision point). Once a step is completed, the collision point and the reflection line become the starting point and the running line of the successive step. Since for each segment of the path we can calculate the distance covered by the photon (or equivalently the time needed to cover this distance), we can continue our step-by-step calculations until we reach $t=10$ and look at the point where we have arrived. In this way, the problem reduces to finding a way of determining, from the starting point and the running line of a generic step, the corresponding collision point and reflection line, so that by iterating our calculations we can arrive at the solution.
The general approach to each step may be as follows. If $(x_n,y_n)$ is the starting point at the $n^{th}$ step and $y=a_n x+b_n$ is the equation of the corresponding running line, we have to begin by finding the collision point, i.e. the first crossing point with a circle. The circle can be identified first by visual examination of the graph (although we then have to confirm that the line actually crosses the circle and that the identified circle provides the minimal distance between the starting and collision points). Because all circles correspond to a translation of the circle $x^2 + y^2=\frac {1}{9}$ over an integer lattice, we can consider the equation $(x-j_n)^2 + (y-k_n)^2=\frac {1}{9}$, where $(j_n,k_n)$ are the integers expressing the coordinates of the centre of the circle on which the $n^{th}$ reflection occurs. Solving the system
$$\begin {cases}
y=a_n x+b_n \\
(x-j_n)^2 + (y-k_n)^2=\frac {1}{9} \end {cases} $$
to find the crossing point between the circle and the running line gives the coordinates of the collision point:
$$ x_{n+1}=\frac{m \pm \sqrt {m^2-\left(a_n^2+1\right)\left((b_n-k_n)^2+j_n^2-1/9\right) }}{ a_n^2+1 }$$
and
$$y_{n+1}= a_n x_{n+1} +b_n$$
where $$m=a_n k_n +j_n-a_n b_n $$ and where we have to choose, among the two solutions, the one that corresponds to the lower distance between the starting and the collision point. This distance $d_n$ can be obtained using the standard formula, knowing that it is calculated between the points $(x_n,y_n)$ and $(x_{n+1},y_{n+1})$.
Now we have to find the slope of the tangent to the circle at $(x_{n+1},y_{n+1})$ to calculate the reflection line. Using the derivative of the circle equation, we obtain that the slope $t_n$ of the tangent at the $n^{th}$ reflection is
$$t_n=\pm \frac {(j_n - x_{n+1})}{\sqrt{1/9 - (j_n - x_{n+1})^2}}$$
where again we have to choose, among the two solutions, the appropriate one, looking at our graph to check the sign of the sought tangent. To calculate the slope of the reflection line, we can recall that if a line with angle $\alpha$ with respect to the $x$-axis is reflected on a line with angle $\beta$, the angle of the resulting reflected line is given by $2\beta-\alpha$. In this case we have $$\alpha=\arctan {a_n} \\ \beta=\arctan {t_n} $$ so the slope of our reflection line is $$\tan ( 2 \arctan {t_n} - \arctan {a_n} )$$ Because it passes through $(x_{n+1},y_{n+1})$, its equation is
$$\small {y= \tan ( 2 \arctan {t_n} - \arctan {a_n} ) x \\ + y_{n+1} - \tan ( 2 \arctan {t_n} - \arctan {a_n} ) x_{n+1} }$$
In this way, based on a starting point with coordinates $(x_n,y_n)$ and a running line $y=a_n x+b_n$ that characterize the $n^{th}$ step, we have determined the collision point $(x_{n+1},y_{n+1})$, the distance covered between these two points, and the reflection line. Setting the collision point and the reflection line as the starting point and the running line of the successive step, we can repeat the whole procedure several times, until the sum of the distances covered in each step reaches $10$.
To make a practical example, we can apply these calculations to the first step of the problem. We start from the point $(x_1,y_1)=(0.5,0.1)$ and the running line is $y=0.1$. The first circle to be crossed is the one with centre in $(1,0)$, which has equation $(x-1)^2 +y^2=\frac {1}{9}$. Applying the equations above with $a_1=0$, $b_1=0.1$, $j_1=1$, and $k_1=0$, we get $m=1$, so that the coordinates of the collision point $(x_2,y_2)$ (knowing that we must choose, among the two solutions, the one nearer to the starting point) reduce to
$$ x_{2}=1 - \sqrt {1-0.1^2-1 +1/9 } = 1 - \frac {\sqrt {0.91}}{3} \approx 0.682 \\ y_{2}=0 \cdot x_2+0.1=0.1
$$
Using the derivative of the circle equation, we obtain that the slope $t_1 $ of the tangent at the first reflection is
$$t_1= \frac{1 - ( 1 - \sqrt {0.91}/3) }{\sqrt{1/9 - [1-(1 - \sqrt {0.91}/3)]^2 }} \\ = \frac {\sqrt{0.91}/3 }{\sqrt{1/9 - 0.91/9 }} = \sqrt {91}/3 \approx 3.1798 $$
Lastly, the equation of the reflected line is
$${y=\tan \left ( 2 \arctan { \sqrt {91}/3 } \right) x \\ + 0.1 - \tan \left ( 2 \arctan { \sqrt {91}/3 } \right) \cdot (1 - \sqrt {0.91}/3 )}$$
$$y= -\frac {3 \sqrt {91}}{41} x + 0.1 + \frac {3 \sqrt {91}}{41} - \frac {91}{410} $$
which in numbers is approximately
$$y=-0.6980 x +0.5761$$
So we have determined the collision point and the reflection line of the first step. As explained above, these become the starting point and the running line of the second step, allowing us to repeat the procedure. 
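The step described above can be sketched in code; this follows the formulas of the answer (function and variable names are mine, and the vertical-tangent case $y_{n+1}=k_n$ is not handled):

```python
import math

def step(x0, y0, a, b, j, k, r=1/3):
    """One reflection: the ray y = a*x + b starting at (x0, y0) hits the
    circle centred at (j, k) of radius r; return the collision point, the
    reflected line, and the distance covered (assumes the line hits the circle)."""
    # intersect the line with (x - j)^2 + (y - k)^2 = r^2
    m = a * k + j - a * b
    disc = m * m - (a * a + 1) * ((b - k) ** 2 + j * j - r * r)
    cands = [(m + s * math.sqrt(disc)) / (a * a + 1) for s in (1.0, -1.0)]
    # among the two intersections, keep the one nearer to the starting point
    x1 = min(cands, key=lambda x: (x - x0) ** 2 + (a * x + b - y0) ** 2)
    y1 = a * x1 + b
    d = math.hypot(x1 - x0, y1 - y0)
    # tangent slope at the collision point, then reflected slope tan(2*beta - alpha)
    t = (j - x1) / (y1 - k)
    a_new = math.tan(2 * math.atan(t) - math.atan(a))
    b_new = y1 - a_new * x1
    return x1, y1, a_new, b_new, d
```

Applied to the first step, `step(0.5, 0.1, 0.0, 0.1, 1, 0)` reproduces the numbers of the worked example above ($x_2\approx0.682$, reflected line $y\approx-0.6980x+0.5761$).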
If $z_1$ and $z_2$ are complex numbers, find minimum value of $\lvert z_1-z_2\rvert$  Following Fabian's hint, show that
$\lvert z_1\rvert=2$ implies $z_1$ is on the circle of radius $2$ centered at the origin, and
if you rewrite $(1-i)z_2 + (1+i) \bar{z}_2 = 8\sqrt{2}$ using $z_2=x+iy$, the equation becomes $x+y=4\sqrt{2}$, which is a line in the complex plane with intercepts $4\sqrt{2}$ and $4\sqrt{2}i$.
If you draw a picture of the circle and the line, you can see that the minimizing $z_1$ and $z_2$ lie on the "$45^\circ$" line from the origin, with $\lvert z_1\rvert=2$ and $\lvert z_2\rvert=4$. 
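The picture can be confirmed numerically by sampling $z_1$ on the circle and measuring its distance to the line (an illustrative sketch, not part of the argument):

```python
import math

c = 4 * math.sqrt(2)  # the line x + y = 4*sqrt(2)
# distance from the point 2*(cos t, sin t) on the circle |z| = 2 to the line,
# minimized over a fine sampling of t
best = min(
    abs(2 * math.cos(t) + 2 * math.sin(t) - c) / math.sqrt(2)
    for t in (2 * math.pi * k / 100_000 for k in range(100_000))
)
```

The minimum distance comes out as $2$, attained at $z_1 = \sqrt2(1+i)$, $z_2 = 2\sqrt2(1+i)$.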
Non unique factorization of integer valued polynomials  This can even be done with one variable:
$$
2\cdot \left(\frac{x(x+1)}{2}\right)=\big(x\big)\cdot\big(x+1\big).
$$
If you prefer to avoid irreducibles that become units in $\mathbb{Q}$:
$$
\left(\frac{x(x+1)}{2}\right)\cdot\left(\frac{(x+2)(x+3)}{2}\right) = \left(\frac{x(x+3)}{2}\right)\cdot\left(\frac{(x+1)(x+2)}{2}\right).
$$ 
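The second factorization can be sanity-checked with exact arithmetic: each factor is integer-valued on $\mathbb Z$, and the two products agree at more than five points, hence as degree-$4$ polynomials. A sketch (helper names are mine):

```python
from fractions import Fraction

# the four integer-valued factors from the display above, as exact rationals
f1 = lambda x: Fraction(x * (x + 1), 2)
f2 = lambda x: Fraction((x + 2) * (x + 3), 2)
g1 = lambda x: Fraction(x * (x + 3), 2)
g2 = lambda x: Fraction((x + 1) * (x + 2), 2)
```

Since two degree-$4$ polynomials agreeing on $101$ integers are equal, the pointwise check below proves the identity.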
Angle between 2 faces of tetrahedron  Put $C$ at the origin, $B$ on the $x$-axis, and $A$ on the $y$-axis; $D$ is in the $yz$-plane, $\overline{DA}$ is perpendicular to the $xy$-plane, $\overline{DB}$ makes a $30^\circ$ angle with the $xy$-plane (so that $\angle DBA = 30^\circ$), $\angle CBD = 45^\circ$, and what’s wanted is $\angle DCA$.
$\triangle DCB$ is an isosceles right triangle; there is no harm in assuming that $CB = 1$, in which case $CD = 1$, and $DB = \sqrt 2$. $\angle BAD$ is a right angle, since $\overline{DA}$ is perpendicular to the $xy$-plane, and $\angle DBA = 30^\circ$, so $\triangle DBA$ is a $30$-$60$-$90$ right triangle, and $DA = \frac{DB}{2} = \frac{\sqrt 2}{2}$. Finally, $\triangle DCA$ is a right triangle with hypotenuse $DC = 1$ and leg $DA = \frac{\sqrt 2}{2}$, so the remaining side is $\sqrt{1^2 - \left( \frac{\sqrt 2}{2} \right)^2}= \frac{\sqrt 2}{2}$, $\triangle DCA$ is an isosceles right triangle, and $\angle DCA$ is indeed $45^\circ$.
This is a little more than I should probably say for homework, but I’ve deliberately been a bit concise, and you’re still going to have to visualize it properly to follow the calculations. 
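The coordinate placement can be verified numerically; this sketch fills in explicit coordinates consistent with the side lengths computed above (my own choice of realization):

```python
import math

s = math.sqrt(2) / 2
C = (0.0, 0.0, 0.0)   # origin
B = (1.0, 0.0, 0.0)   # on the x-axis, CB = 1
A = (0.0, s, 0.0)     # on the y-axis
D = (0.0, s, s)       # in the yz-plane, DA vertical with DA = sqrt(2)/2

def angle(p, q, r):
    """Angle at vertex p between the rays p->q and p->r, in degrees."""
    u = [q[i] - p[i] for i in range(3)]
    v = [r[i] - p[i] for i in range(3)]
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return math.degrees(math.acos(dot / (nu * nv)))
```

With these points, $\angle DBA = 30^\circ$, $\angle CBD = 45^\circ$, and the sought $\angle DCA = 45^\circ$.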
Dimension of the span  Take the null $n\times n$ matrix. It has $n$ columns, but the dimension of the span of the column vectors is $0$. 
Finding a Topology from a Subbase  You seem to have forgotten the 1-fold intersections: the sets of $\cal S$ itself.
Without them, the number $5$ would not be covered, and $\{3,4,5\}$ would not be open.
The base is $\mathcal B=\{ \emptyset, \{1\}, \{3\}, \{4\}, \{2,3\}, \{3,4,5\} \}$. With this you can compute the interior of $B=\{1,5\}$. Just find all points $x\in B$ such that one of these base sets contains $x$ and is a subset of $B$. Also note that $B$ is closed, and that each point of $B$ is isolated. 
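With this base, the generated topology and the interior of $B$ can be computed by brute force; a small sketch assuming the ground set $X=\{1,2,3,4,5\}$:

```python
from itertools import combinations

X = frozenset({1, 2, 3, 4, 5})
base = [frozenset(s) for s in ({1}, {3}, {4}, {2, 3}, {3, 4, 5})]

# the generated topology: all unions of subfamilies of the base
# (the empty subfamily contributes the empty set)
topology = set()
for r in range(len(base) + 1):
    for fam in combinations(base, r):
        topology.add(frozenset().union(*fam))

B = frozenset({1, 5})
# interior of B: union of all open sets contained in B
interior = frozenset().union(*(u for u in topology if u <= B))
# B is closed iff its complement is open
is_closed = (X - B) in topology
```

This confirms the interior of $\{1,5\}$ is $\{1\}$ and that $\{1,5\}$ is closed (its complement $\{2,3,4\}=\{2,3\}\cup\{4\}$ is open).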
Symmetric, commuting matrices in $\mathrm{SL}(3,\mathbb{Z})$  No. We can take
$$
M_1=1, \quad M_2=\begin{pmatrix} 1 & 0 & 0\\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} , \quad M_3=\begin{pmatrix} 1 & 0 \\ 0 & A \end{pmatrix} ,
$$
with $A\in SL(2,\mathbb Z)$ symmetric, but not a multiple of the identity. 
On if $X \sim N(0,1)$ Then $ \frac{1}{\sqrt{2 \pi}} \int_{-\infty}^X e^{-x^2/2} \, dx \sim U[0,1] $  There is a somewhat more general proposition regarding this problem.
If $X$ is a random variable with a continuous and strictly increasing distribution function $F$, then $F(X) \sim U(0, 1)$.
Proof: For any real $0 < x < 1$,$$
P(F(X) \leqslant x) = P(X \leqslant F^{-1}(x)) = F(F^{-1}(x)) = x,
$$
therefore $F(X) \sim U(0, 1)$.
In your question, $X \sim N(0, 1)$, thus $\displaystyle F(x) = \frac{1}{\sqrt{2\mathrm{\pi}}}\int_{-\infty}^x \mathrm{e}^{-t^2 / 2} \,\mathrm{d}t$. The problem is solved by applying the general proposition. 
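The proposition is easy to check empirically in the normal case, using the standard normal CDF built from `math.erf` (a simulation sketch, not part of the proof):

```python
import math
import random

def normal_cdf(x):
    # standard normal distribution function via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

random.seed(0)
# transform standard normal samples through their own CDF
u = [normal_cdf(random.gauss(0.0, 1.0)) for _ in range(100_000)]
mean = sum(u) / len(u)
var = sum((t - mean) ** 2 for t in u) / len(u)
```

The sample mean and variance should be close to $1/2$ and $1/12$, the moments of $U(0,1)$.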
Orthogonality and projection of a vector  I ran the calculation as well...yes your numbers check out OK...
... and the scalar projection you need to do, that is simply the dot product...take the dot product of AC with the unit vector of AB... 
Inscribed angle is always the same and twice the central angle - is this absolute?  The measure of an inscribed angle in the hyperbolic plane is always less than half the measure of the central angle. Here is a picture using the Poincaré disk model:
As you can see, the angle $\alpha$ is always less than half of the angle $\beta$. 
Calculating the volume of a surfboard  If you use any modeling software, it probably has a volume function in it. Otherwise, one approach which I have used before is to approximate the surface with a triangulation, then calculate the volume from the tetrahedra formed by each face and the origin (some tetrahedra have to be summed in, while others have to be subtracted out, but if you set it up right, that happens automatically). I presented a paper on the method (which I also used to calculate all mass properties) to SAWE, the Society of Allied Weight Engineers. I could send you a copy if you want. The volume you get this way will only be an approximation, with how close depending on how fine a triangulation you use. One other issue is that I got the triangulation from an existing modeling package, so I am not versed in techniques for triangulation.
You can calculate the volume directly from the Bezier curves, but that is definitely messier. How to do it would depend very much on what modeling techniques you use. 
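The tetrahedron idea in the first paragraph is the standard signed-volume computation for a closed triangulated surface; here is a minimal sketch (tested on a unit cube rather than an actual surfboard mesh):

```python
def mesh_volume(vertices, triangles):
    """Sum the signed volumes of the tetrahedra (origin, v0, v1, v2).
    With a consistently outward-oriented closed surface, the additions
    and subtractions happen automatically."""
    vol = 0.0
    for i, j, k in triangles:
        (x0, y0, z0) = vertices[i]
        (x1, y1, z1) = vertices[j]
        (x2, y2, z2) = vertices[k]
        # scalar triple product v0 . (v1 x v2)
        vol += (x0 * (y1 * z2 - z1 * y2)
                - y0 * (x1 * z2 - z1 * x2)
                + z0 * (x1 * y2 - y1 * x2))
    return vol / 6.0

# unit cube, each face triangulated with outward orientation
V = [(0,0,0), (1,0,0), (1,1,0), (0,1,0), (0,0,1), (1,0,1), (1,1,1), (0,1,1)]
quads = [(0,3,2,1), (4,5,6,7), (0,1,5,4), (2,3,7,6), (0,4,7,3), (1,2,6,5)]
T = [t for (a, b, c, d) in quads for t in ((a, b, c), (a, c, d))]
```

For the cube, `mesh_volume(V, T)` returns the exact volume $1$; for a surfboard the accuracy depends on the fineness of the triangulation, as the answer notes.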
Laplace Transform of $f(x)=\frac{\sqrt{x}}{1+x}$  We want $\displaystyle F(t)=\int^{\infty}_0 \frac{\sqrt{x}}{1+x} e^{-t x}\, dx$
$F(t)\, e^{-t}=\int^{\infty}_0 \frac{\sqrt{x}}{1+x} e^{-t(x+1)}\, dx$
$\frac{d}{dt}\left(F(t)\, e^{-t}\right)=-\int^{\infty}_0 \sqrt{x}\,e^{-t(x+1)}\, dx$
$-e^t\frac{d}{dt}\left(F(t)\, e^{-t}\right)=\int^{\infty}_0 \sqrt{x}\,e^{-tx}\, dx=t^{-1/2-1}\Gamma(1/2+1)=\sqrt{\pi}/(2t^{\frac32})$
because you got a specific case of the integral defining the Gamma function.
After that you'll have to revert the operations :
$F(t)= \frac{\sqrt{\pi}}2 e^t \left(C-\int \frac{e^{-t}}{t\sqrt{t}}\, dt\right)$
$F(t)= \frac{\sqrt{\pi}}2 e^t \left(C+\frac{2e^{-t}}{\sqrt{t}}+\int \frac{2e^{-t}}{\sqrt{t}}\, dt\right)$
$F(t)= \frac{\sqrt{\pi}}2 e^t \left(C+\frac{2e^{-t}}{\sqrt{t}}+4\int e^{-u^2}\, du\right)\qquad (u=\sqrt{t})$
where we recognize the integral expression of the Error function multiplied by $\sqrt{\pi}$.
I'll let you reverify all this and determine the constant $C$.
The answer should be $\sqrt{\frac{\pi}{t}} - \pi e^t \operatorname{erfc}(\sqrt{t})$ for $\Re(t)\gt0$ (with $\operatorname{erfc}(x)=1-\operatorname{erf}(x)$).
(short way : Wolfram Alpha). 
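The closed form can also be checked against direct numerical quadrature; substituting $x=u^2$ removes the square-root singularity, after which Simpson's rule converges quickly (a sketch; function names are mine):

```python
import math

def F_closed(t):
    # the claimed closed form sqrt(pi/t) - pi * e^t * erfc(sqrt(t))
    return math.sqrt(math.pi / t) - math.pi * math.exp(t) * math.erfc(math.sqrt(t))

def F_quad(t, umax=12.0, n=20000):
    # after x = u^2: integral of 2 u^2 exp(-t u^2) / (1 + u^2) du over [0, inf),
    # truncated at umax and evaluated with composite Simpson's rule (n even)
    h = umax / n
    g = lambda u: 2 * u * u * math.exp(-t * u * u) / (1 + u * u)
    s = g(0.0) + g(umax)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * g(k * h)
    return s * h / 3
```

For moderate $t$ the two agree to many digits, e.g. $F(1)\approx 0.4292$.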
If $f$ is a cut edge in $G-e$, is $e$ a cut edge in $G-f$?  For a 2-edge-connected graph $G$, if $f$ is a cut-edge of $G-e$, that would mean that $G-e-f$ is disconnected. Removal of edges is commutative, so $G-f-e$ is also disconnected. Finally, $G-f$ would have $e$ as a cut-edge since $G-f-e$ is disconnected.
As for the general statement: if $G$ were only 1-edge-connected, there could be a cut-edge $f$. For example, suppose that you have a graph which is a union of two vertex-disjoint $K^{50}$'s bridged by a single edge $f$. Then $f$ would be a cut-edge in $G-e$ for any $e$, however $G-f$ wouldn't even be connected, so the definition of "cut-edge" wouldn't even make sense to use in this situation.
However, if $G$ were 3-edge-connected or more, there would be no edges $e,f$ satisfying the hypothesis, so it is trivially true. 
Inverse function theorem for manifolds  Assume $\text{dim}\ M=\text{dim}\ N=n.$ In what follows, we use the Einstein convention for all sums.
The point is that if $(U_p, \phi_p)$ is a chart about $p$ in $M$ and $(V_{f(p)}, \psi_{f(p)})$ is a chart about $f(p)$ in $N$ then $(f_*)_p: T_pM \to T_{f(p)}N$ is a linear transformation, so it has a matrix representation in the coordinates defined by $\phi$ and $\psi$. If we can show that this matrix is the Jacobian of $\hat f:=\psi_{f(p)} \circ f \circ \phi_p^{-1}$, which is $\left(\frac{\partial \hat f^j}{\partial r^i}\right)_{ij},$ then $\hat f$ will be a local diffeomorphism, which is what we want.
But, $\textit{by definition},\ \frac{\partial }{\partial x^i}=(\phi_*)^{-1}\frac{\partial }{\partial r^i}$, where $(r^i)$ are the usual Euclidean coordinates. Similarly, $\frac{\partial }{\partial y^i}=(\psi_*)^{-1}\frac{\partial }{\partial s^i}$ where we use $(s^i)$ to represent the Euclidean coordinates in the range of $\hat f$ just to make the calculations easier to follow. For the same reason, we drop the subscripts $p$ and $f(p).$ Finally, we note that by the chain rule in $\mathbb R^n,\ \hat f_*\frac{\partial }{\partial r^i}=\frac{\partial \hat f^j}{\partial r^i}\frac{\partial}{\partial s^j},$ where the $(\hat f^j)$ are the components of $\hat f$. Then, we calculate
$f_*\frac{\partial }{\partial x^i}=f_*\circ (\phi_*)^{-1}\frac{\partial }{\partial r^i}=(f\circ \phi^{-1})_*\frac{\partial }{\partial r^i}=$
$(\psi^{-1}\circ \hat f)_*\frac{\partial }{\partial r^i}=(\psi_*)^{-1}\circ \hat f_*\frac{\partial }{\partial r^i}=$
$(\psi^{-1})_*\frac{\partial \hat f^j}{\partial r^i}\frac{\partial}{\partial s^j}=\frac{\partial \hat f^j}{\partial r^i}(\psi^{-1})_*\frac{\partial}{\partial s^j}=\frac{\partial \hat f^j}{\partial r^i}\frac{\partial}{\partial y^j}$.
It follows that the matrix of $f_*$ is the Jacobian of $\hat f,$ as desired. 
A Binomial Coefficient Sum: $\sum_{m = 0}^{n} (-1)^{n-m} \binom{n}{m} \binom{m-1}{l}$  This is a special case of the identity $$\sum_k \binom{l}{m+k} \binom{s+k}{n} (-1)^k = (-1)^{l+m} \binom{s-m}{n-l},$$ which is identity 5.24 on p. 169 of Concrete Mathematics, 2nd edition. With $l = n$, $m = 0$, $s = -1$, $k = m$, and $n = l$, we see that the OP's sum is $$(-1)^{2n} \binom{-1}{l-n} = \binom{-1}{l-n}.$$
This is $(-1)^{l-n}$ when $l \geq n$ and $0$ when $l < n$, as in Fabian's comment to Plop's answer. 
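Both the special case and the two-case evaluation of $\binom{-1}{l-n}$ can be verified by brute force using a generalized binomial coefficient (helper names are mine):

```python
from math import comb, factorial

def gbinom(r, k):
    """Binomial coefficient with an arbitrary (possibly negative) integer
    upper index r and integer lower index k; zero for k < 0."""
    if k < 0:
        return 0
    num = 1
    for i in range(k):
        num *= r - i
    return num // factorial(k)

def op_sum(n, l):
    # the OP's sum: sum_{m=0}^{n} (-1)^{n-m} C(n, m) C(m-1, l)
    return sum((-1) ** (n - m) * comb(n, m) * gbinom(m - 1, l)
               for m in range(n + 1))
```

Note that `gbinom(-1, j)` is $(-1)^j$ for $j\ge0$, which is exactly the claimed two-case value.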
Linearity in first argument of $\langle X,Y\rangle =X^*MY $  Generally one requires that an inner product on a complex vector space be linear in one argument and conjugate linear in the other. What you have proven is precisely that this inner product is conjugate linear in the first argument.
Note that linearity in the second argument and the conjugate symmetry together imply conjugate linearity in the first argument. 
Linear function between normed spaces is continuous.  Since $X$ is a finite-dimensional space, all norms on $X$ are equivalent, so let $(e_i)_i$ be a finite algebraic basis of $X$ and equip $X$ with the norm:
$$
\|x\|:=\|x_1e_1+\dots+x_ne_n\|=\sum_i |x_i|
$$
so
$$
\|Ax\|=\Big\|\sum_i x_i\, Ae_i\Big\|\leq\sum_i \|x_i\, Ae_i\| \leq \Big(\max_i \|Ae_i\|\Big)\sum_i |x_i|=\alpha \|x\|
$$
where $\alpha=\max_i \|Ae_i\|<\infty$.
So $A$ is continuous 
For what values of $k$ is $p(x) = k(1-r^2)^x$ a valid probability mass function  Summing over $x$, $$1=\sum_{x=0}^\infty P(x) = k\sum_{x=0}^\infty (1-r^2)^x = \frac{k}{1-(1-r^2)}$$
by the geometric series formula. So $k=r^2$. 
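A quick truncated-series check (the value $r^2=0.25$ is my choice for illustration):

```python
# with k = r^2 the pmf k * (1 - r^2)^x should sum to 1 over x = 0, 1, 2, ...
r2 = 0.25
k = r2
total = sum(k * (1 - r2) ** x for x in range(2000))  # tail beyond 2000 is negligible
```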
Help with limit $\lim_{h\to 0}\frac{1}{h}\int_{x}^{x+h}P(t, y)\ dt$..  Note first that
$$
\frac{1}{h}\int_x^{x+h}P(t,y)\,dt-P(x,y)=\frac{1}{h}\int_x^{x+h}(P(t,y)-P(x,y))\,dt
$$
for all $h$ small enough such that $[x,x+h]\times\{y\}$ or $[x+h,x]\times\{y\}$ is contained in $D$.
Fix $\epsilon>0$.
Now by continuity of $t\longmapsto P(t,y)$ at $x$, there exists $\delta>0$ such that $|P(t,y)-P(x,y)|\leq \epsilon$ whenever $|t-x|\leq \delta$.
Then
$$
\left|\frac{1}{h}\int_x^{x+h}P(t,y)\,dt-P(x,y)\right|\leq \frac{1}{|h|}\left|\int_x^{x+h}|P(t,y)-P(x,y)|\,dt\right|\leq \frac{1}{|h|}\,|h|\,\epsilon=\epsilon
$$
for all non-zero $|h|\leq \delta$.
So
$$
\lim_{h\rightarrow 0}\frac{1}{h}\int_x^{x+h}P(t,y)dt=P(x,y).
$$ 
Basis for nullspace - Free variables and basis for $N(A)$  Columns 1 and 3 have the pivots. So the other two columns (2 and 4) correspond to the free variables.
Then call $x_4=t$. Then the last equation says $$x_3 - x_4 =0 \leftrightarrow x_3 = x_4= t$$
Call the other free variable $x_2=s$.
Then the first equation becomes, after substituting what we know so far:
$$x_1 - s + 2t = 0$$ from which
$$x_1 = s - 2t$$ follows. 
Length of chord on ellipse  If your chord makes an angle $\phi$ with the $x$-axis, then its length is
$$
\frac{2ab}{\sqrt{ {b^2}\cos^2{\phi} + {a^2}\sin^2{\phi} }}
$$
See this answer for a bit more detail.
The angle $\theta$ used in apurv's answer is not the angle between the chord and the $x$-axis. That's probably the reason for his question, where he asked you what you mean by "theta". 
How can I compute the 'average rank' of an infinite set?  There are several things here.
There's no reasonable way to take an infinite sum here. For one, cardinals do not admit inverses, and since ordinals measure order and cardinals measure cardinality, it is unclear if you mean that the sum itself is an ordinal or a cardinal summation, and if it is an ordinal sum, what is the order on $X$, etc. You could argue that we want to talk about this in the surreal numbers. But there's no reason to think that the sum is convergent.
There is a notion of rank in set theory. And it is a global one, rather than a notion particular to each specific set. $\varnothing$ has rank $0$, and a set $A$ has rank $\alpha$ if $\alpha$ is the smallest ordinal such that no $a\in A$ has rank $\geq\alpha$. You can find out more when reading about the von Neumann hierarchy.
The question when two sets are "isomorphic" is not a trivial one to define. But I think that what you want is to consider the transitive closures of two sets and ask if those are somehow isomorphic modulo some identification of some objects. Where the transitive closure of a set $A$ can be defined as $\{a\mid\exists n\in\Bbb N: a\in^n A\}$, using your notation, or define $A_0=A$ and $A_{n+1}=\bigcup A_n$, then the transitive closure is $\bigcup\{A_n\mid n\in\Bbb N\}$ (remember: in set theory, everything is a set, and if you fancy urelements, define $\bigcup$ in the obvious way that ignores the urelements).
I hope those insights are helpful. 
Counterexample to Cauchy product theorem  No, a counterexample satisfying the property you seek cannot exist, which we can see by Cesàro's theorem.
For a series $\sum d_n$, the Cesàro sum is the limit of the average of the partial sums:
$$\lim_{N\to\infty}\frac{1}{N}\sum_{n=1}^N\sum_{i=1}^nd_i$$
If a sum is classically summable, then it is also Cesàro summable, and its Cesàro sum matches its classical sum.
Now Cesàro's theorem says that in general if $\sum a_n$ and $\sum b_n$ are conditionally convergent, then the Cauchy product will be Cesàro summable, and its Cesàro sum will be $(\sum a_n)(\sum b_n),$ even though it need not be classically convergent to $(\sum a_n)(\sum b_n).$
Hence what you are asking for cannot happen. With $c_n=\sum_{p+q=n} a_pb_q$, either $\sum c_n$ is not classically convergent, in which case it is still Cesàro summable and its Cesàro sum is $(\sum a_n)(\sum b_n).$
Or else it is classically convergent, in which case it must still converge to its Cesàro sum $(\sum a_n)(\sum b_n).$ 
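A numerical illustration (my choice of example): take $a_n=b_n=(-1)^n/\sqrt{n+1}$, the classical pair of conditionally convergent series whose Cauchy product does not converge; the Cesàro means of the product's partial sums still drift toward $(\sum a_n)^2$. Tolerances below are deliberately loose since the convergence is slow:

```python
import math

N = 3000
a = [(-1) ** n / math.sqrt(n + 1) for n in range(N + 2)]

# Cauchy product c_n = sum_{p+q=n} a_p a_q; its terms do not tend to 0,
# so sum c_n is not classically convergent
c = [sum(a[p] * a[n - p] for p in range(n + 1)) for n in range(N)]

# Cesaro (C,1) mean of the partial sums of c
partials, s = [], 0.0
for cn in c:
    s += cn
    partials.append(s)
cesaro = sum(partials) / len(partials)

# target (sum a_n)^2, estimated by averaging two consecutive partial sums
target = ((sum(a[:N]) + sum(a[:N + 1])) / 2) ** 2
```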
Why does the double integral for the area of a circle of radius $1$ equal $\pi/8$ instead of $\pi/4$?  It's just not true that this integral is "the representation of a quarter circle of radius 1 in polar coordinates." The domain over which you are integrating is a quarter disk, but you're integrating a function $f(r,\theta) = 1-r^2$ over that domain.
There is a danger in calculus (especially in the transition from single to multivariable) to think of integrals as area. In fact, integrals can be used to accumulate any function. For instance, you can think of the function as measuring charge density over a metal plate in the shape of the domain. Then the integral measures total charge.
If you want an integral over a plane region that computes the area of the region, use $f(r,\theta)=1$. Then
$$
\int_0^{\pi/2} \int_0^1 r\,dr\,d\theta = \left[\frac{r^2}{2}\right]^1_0
\left[\theta\right]^{\pi/2}_0 = \frac{1}{2} \cdot \frac{\pi}{2} = \frac{\pi}{4}
$$
However, this doesn't prove the area formula. The polar coordinates integration formula is derived from the area formula for this disk. So this is—wait for it—circular logic. 
Conceptual question on differentiation in calculus?  We have $y'=\frac{dy}{dx}=y$ where $y$ is a function of $x$. Therefore, by rearranging the equation,
\begin{align}
\frac{dy}{y}=dx\,.
\end{align}
Integrating with respect to $x$ on each side,
\begin{align}
\int\frac{dy}{y}=\int dx\implies \ln(y)=x+C\,,
\end{align}
where $C$ is an integration constant. Exponentiating both sides,
\begin{align}
e^{\ln(y)}=e^{x+C}=e^Ce^x\,.
\end{align}
As $e^C$ is just a constant, we will relabel it $c$. Also note $e^{\ln(y)}=y$. Thus $y=ce^x$.
There's probably a more formal way to see why $y=ce^x$, but I hope this explanation helps. 
How Can you factor $1/2$ out of integral of $\cos2x$?  When you do a $u$-substitution, you have to substitute for every $x$ in the integral, including the $dx$. Starting from $u=2x$, you can differentiate both sides and get $\frac{du}{dx}=2$. We rewrite this as $\frac12 du=dx$.
Thus, when you substitute in the integral, you replace the $2x$ with a simple $u$, and you replace the $dx$ with $\frac12 du$. We write the $\frac12$ out in front of the integral, and then we proceed with our simpler integral in terms of $u$. 
Objects whose morphisms are all injective  One way to define fields is as precisely the commutative rings which have no nontrivial ideals; in other words, as precisely the commutative rings which have no nontrivial quotients. You can play this game in other categories too. For example, the analogous subcategory for groups is simple groups (the trivial group is not simple), and the analogous subcategory for rings is simple rings.
Replacing "injective" with "monic" it is of course straightforward to start with a category and restrict to the subcategory of monomorphisms, but presumably this isn't in the spirit of the question.
An example which may or may not be in the spirit of the question is the category of metric spaces and isometries. More generally, a major lesson of category theory is that changing what morphisms you're willing to consider effectively changes what mathematical objects you're studying; even if they look the same, they really aren't. 
Any simple function which behaves like this?  Looks somewhat like a rescaled version of $y = x e^{-x}$. Try $y=cx e^{1-cx}$ with various choices of $c>1$. For example, try entering (5x)e^(1-5x) from 0 to 1 into Wolfram Alpha. 
Show that the multiplicative group $(\mathbb{Z}/p\mathbb{Z})^*$ is cyclic  If you know the structure theorem for finite Abelian groups, you will
be able to prove that either a finite Abelian group $G$ is cyclic,
or that its exponent is $<|G|$, that is, there is $m<|G|$
with $g^m=e$ for all $g\in G$.
An alternative attack: show that for $d\mid (p-1)$ the group
$G=(\Bbb Z/p\Bbb Z)^*$ has exactly $d$ solutions of $x^d\equiv1$.
If you compare $G$ to $H=\Bbb Z/(p-1)\Bbb Z$, you find that
the number of elements of order dividing $d$ is the same in each,
therefore the number of elements of order exactly $d$ is the
same in each, and now take $d=p-1$. 
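The counting argument is easy to check directly for a small prime; a sketch with $p=13$ (the choice of prime is mine):

```python
p = 13
units = range(1, p)

def order(g, p):
    # multiplicative order of the unit g modulo p
    x, k = g % p, 1
    while x != 1:
        x = x * g % p
        k += 1
    return k

# number of solutions of x^d = 1 in (Z/pZ)^* for each divisor d of p - 1
sols = {d: sum(1 for x in units if pow(x, d, p) == 1)
        for d in range(1, p) if (p - 1) % d == 0}
```

The test confirms each count equals $d$, and that an element of order $p-1$ (a generator) exists.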
Does the congruence $x^2 - 3x - 1 \equiv 0$ (mod 31957) have any solutions?  It's true, you can use the quadratic formula in this context. You obtain:
$x \equiv 2^{-1}(3\pm\sqrt{9+4}) \equiv 15979(3\pm\sqrt{13}) \pmod{31957}$
Note that the multiplicative inverse of $2$ is the number whose product with $2$ is congruent to $1$, modulo $31957$. This is going to work if and only if $13$ is a perfect square modulo $31957$.
This is generally done by evaluating the Legendre symbol $\left(\frac{13}{31957}\right)$ Do you know how to do that? (Note that $31957$ is prime and congruent to $1$ modulo $4$.) 
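Both steps can be carried out in a few lines: Euler's criterion to evaluate the Legendre symbol, then a brute-force square root (fine at this modest size) and the quadratic formula mod $p$. A sketch:

```python
p = 31957  # prime, p ≡ 1 (mod 4)

# Euler's criterion: 13 is a quadratic residue mod p iff 13^((p-1)/2) ≡ 1 (mod p)
is_residue = pow(13, (p - 1) // 2, p) == 1

# brute-force square root of 13 mod p, then x = 2^{-1} (3 ± sqrt(13))
s = next(x for x in range(p) if x * x % p == 13)
inv2 = pow(2, -1, p)  # modular inverse of 2, which is 15979
roots = [(inv2 * (3 + e * s)) % p for e in (1, -1)]
```

The test verifies that the Legendre symbol is $+1$ and that both roots satisfy the original congruence.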
Finding coefficient of $x^8$  The coefficient of $x^8$ of $\prod_{k=1}^{10}(x-k)$ is given by
$$\sum_{1\leq j<k\leq 10}jk=\frac{1}{2}\left(\left(\sum_{k=1}^{10}k\right)^2-\sum_{k=1}^{10}k^2\right)=
\frac{1}{2}\left(55^2-385\right)=1320.$$
See also Vieta's formulas.
In other words, here the coefficient of $x^8$ is given by the sum of all products of two distinct numbers in $\{1,2,..,10\}$. This sum can be obtained by squaring $(1+2+3+\dots+10)$ and then by throwing away all the squares $1,4,\dots, 100$. The result should be divided by 2 (they are double products). 
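The Vieta computation can be cross-checked by expanding $\prod_{k=1}^{10}(x-k)$ directly (coefficients stored lowest degree first; a sketch):

```python
poly = [1]  # the constant polynomial 1
for k in range(1, 11):
    # multiply by (x - k): x * p(x) - k * p(x)
    shifted = [0] + poly
    scaled = [-k * c for c in poly] + [0]
    poly = [u + v for u, v in zip(shifted, scaled)]

coeff_x8 = poly[8]
# the formula from the answer: ((1+...+10)^2 - (1^2+...+10^2)) / 2
vieta = (sum(range(1, 11)) ** 2 - sum(k * k for k in range(1, 11))) // 2
```

Both routes give $1320$ (and, as further sanity checks, the leading coefficient is $1$ and the $x^9$ coefficient is $-55$).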
Statistics and the addition rule.  $$
\frac 2 3 \times \frac 2 3 = \frac {2\times2}{3\times3} = \frac 4 9.
$$
The probability of $3$ or $5$ on the first trial is $1/3.$
The probability that that happens on the first toss or the second is not $\frac 1 3 + \frac 1 3$ because those two events are not mutually exclusive: they could both happen.
One way to find the probability that one of these or the other (or both) occurs is to find the probability that they both fail to occur. The probability that the first one fails is $2/3$ and the probability that the second one fails is $2/3.$ The probability that the first one fails and the second one fails is $\frac 2 3 \times \frac 2 3 = \frac 4 9.$ Therefore the probability that you don't get two failures, i.e. that you do get at least one success, is $1 - \frac 4 9 = \frac 5 9.$ 
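The complement argument can be confirmed by enumerating all $36$ equally likely outcomes of the two rolls (a sketch):

```python
from fractions import Fraction
from itertools import product

outcomes = list(product(range(1, 7), repeat=2))      # two fair six-sided rolls
success = [o for o in outcomes if 3 in o or 5 in o]  # a 3 or 5 on either roll
prob = Fraction(len(success), len(outcomes))
```

The count is $36 - 4^2 = 20$ favourable outcomes, i.e. probability $5/9$.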
Proving the equivalence of the properties of onto functions  Take $y\in Y$. Our goal is to prove that $f^{-1}(\{y\})$ is nonempty. If it is empty, then we see that
$$f^{-1}(\{y\}) = \varnothing = f^{-1}(\varnothing)$$
so $(3)$ would imply that $\{y\} = \varnothing$, a contradiction. Therefore, $f^{-1}(\{y\}) \neq \varnothing$ and hence there exists some $x\in f^{-1}(\{y\})$, that is, there exists some $x$ such that $f(x) = y$. 
An inequality involving the AM-GM inequality: $\left| x + \frac1x \right| \ge 2 $ (for $x<0$).  You can prove the result at once by writing
$$\left(x + \frac{1}{x}\right)^2 = x^2 + \frac{1}{x^2} + 2 \ge 2\sqrt{x^2\cdot \frac{1}{x^2}} + 2 = 4,$$
then taking square roots. 
Every $n\times n$ matrix is the sum of a diagonalizable matrix and a nilpotent matrix.  You have $A = PJP^{1}$ where $J$ is in Jordan form. Write $J = D + N$ where $D$ is the diagonal and $N$ is the rest, which is strictly upper triangular and thus nilpotent. Then $A = PDP^{1} + PNP^{1}$. The former is clearly diagonalizable, while the latter is nilpotent; just note that $(PNP^{1})(PNP^{1}) = PN(P^{1}P)NP^{1} = PN^2P^{1}$ and so on. 
In a ring where $(a-b)^2 = a - b$ for fixed $a,b$, then $(a-b)(a+b) = 1 \iff a^2 - b^2 = 1$.  Idempotents are always a joy to work with. For the converse implication denote $a-b=e$ and notice that $e$ is an idempotent by hypothesis; substituting $a=b+e$ in the relation $a^2=b^2+1_A$ leads (after a few simple calculations and cancellations) to $e+be+eb=1_A$ and hence to
$$eb+be=1_A-e \tag{1}$$
Multiplying relation (1) by $e$ on the left as well as on the right yields
$$eb+ebe=ebe+be=0_A \tag{2}$$
and hence to $eb=be=-ebe$.
Since $b$ and $e$ commute, $b$ and $a=b+e$ must also commute and thus one can factor the difference of squares
$$1_A=a^2-b^2=(a-b)(a+b)$$ 
Rudin Theorem 1:11: understanding why $L \subset S$  Simply the sentence 'Let $L$ be the set of all lower bounds of $B$.' should be implicitly understood as
Let $L$ be the set of all lower bounds $s\in S$ of $B$. 
Solving a first order linear system matrix  Method 1: Notice that the third equation is decoupled from the other two.
We have $$z' = -z \implies z(t) = c e^{-t}$$
Substituting $z(t)$ into the second equation, we have
$$y' = -y + 2 z = -y + 2 c e^{-t} \implies y(t) = (b + 2ct)e^{-t}$$
Substituting $y(t)$ and $z(t)$ into the first equation, we have
$$x' = -x + 2 y - z = -x + 2(b + 2ct)e^{-t} - c e^{-t}$$
Solving we get
$$x(t) = (a + (2b - c)t + 2ct^2)e^{-t}$$
Method 2: Eigenvalues / Eigenvectors
This is a deficient matrix (as you discovered). Are you familiar with generalized eigenvectors and the Jordan form?
The eigenvalue (triple) is $\lambda = -1$ and the RREF of $[A + I]v _1 = 0$ gives
$$\begin{bmatrix}
0 & 1 & 0 \\
0 & 0 & 1 \\
0 & 0 & 0 \\
\end{bmatrix}v_1 = 0 \implies v_1 = (1, 0, 0)$$
Unfortunately, we cannot get any more linearly independent eigenvectors, so we need to find generalized ones. Following these Jordan Matrix Notes, we solve $[A + I]v_2 = v_1$, whose RREF (shown as an augmented matrix) is
$$\begin{bmatrix}
0 & 1 & 0 & \frac{1}{2} \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 0 \\
\end{bmatrix}$$
We can choose
$$v_2 = \left(0, \dfrac{1}{2}, 0 \right)$$
Repeating for the RREF of $[A + I]v_3 = v_2$, we get
$$\begin{bmatrix}
0 & 1 & 0 & \frac{1}{8} \\
0 & 0 & 1 & \frac{1}{4} \\
0 & 0 & 0 & 0 \\
\end{bmatrix}$$
We can choose
$$v_3 = \left(0, \dfrac{1}{8}, \dfrac{1}{4} \right)$$
Referring to the notes, we can now write
$$X(t) = e^{-t}\left[c_1 v_1 + c_2 (v_2 + t v_1) + c_3 \left(v_3 + t v_2 + \dfrac{t^2}{2!} v_1\right)\right]$$ 
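As a sketch of how to verify all of this symbolically: the matrix $A$ below is reconstructed from the three equations in Method 1 (an assumption on our part), and $v_3=(0,\tfrac18,\tfrac14)$ is read directly off the final RREF:

```python
import sympy as sp

t = sp.symbols('t')
c1, c2, c3 = sp.symbols('c1 c2 c3')

# Matrix reconstructed from Method 1 (assumption):
#   x' = -x + 2y - z,  y' = -y + 2z,  z' = -z
A = sp.Matrix([[-1, 2, -1],
               [0, -1, 2],
               [0, 0, -1]])

v1 = sp.Matrix([1, 0, 0])
v2 = sp.Matrix([0, sp.Rational(1, 2), 0])
v3 = sp.Matrix([0, sp.Rational(1, 8), sp.Rational(1, 4)])

# Check the Jordan chain: (A + I)v1 = 0, (A + I)v2 = v1, (A + I)v3 = v2.
I3 = sp.eye(3)
assert (A + I3) * v1 == sp.zeros(3, 1)
assert (A + I3) * v2 == v1
assert (A + I3) * v3 == v2

# General solution built from the chain.
X = sp.exp(-t) * (c1*v1 + c2*(v2 + t*v1) + c3*(v3 + t*v2 + t**2/2*v1))

# Verify X' = A X identically in t and the constants.
assert list(sp.simplify(X.diff(t) - A * X)) == [0, 0, 0]
```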
Languages in Discrete Mathematical Structures II  Note that in order to get rid of the initial symbol $v_0$, you must at some point apply the production $v_0\to yv_1$, at which point you have the nonterminal symbol $v_1$ in your string. In order to get rid of $v_1$, you must at some point apply $v_1\to z$. Thus, every derivable terminal string contains the symbol $z$, and it follows that $xy^2$ is not derivable.
In fact it’s not hard to see exactly which strings are derivable. To begin a derivation we can apply $v_0\to xv_0$ any number $n\ge 0$ of times, but eventually we must apply $v_0\to yv_1$ in order to get rid of $v_0$; at that point we have $x^nyv_1$. We can now apply $v_1\to yv_1$ any number $m\ge 0$ of times, but eventually we must apply $v_1\to z$. At that point we’re done, and we have $x^nyy^mz$, or simply $x^ny^{m+1}z$, where $m$ and $n$ are any nonnegative integers. Clearly $x^2y^2z$ is in this language, with $n=2$ and $m=1$. That should tell you exactly how to derive $x^2y^2z$, and once you have the derivation, converting it to tree form is straightforward. The first step of the derivation is evidently $v_0\Rightarrow xv_0$, so your derivation tree will begin like this:
$$\begin{array}{ccc}
&&v_0\\
&\diagup&&\diagdown\\
x&&&&v_0
\end{array}$$ 
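For a mechanical confirmation, a short breadth-first enumeration of the derivations (productions as listed above; `"0"` and `"1"` stand in for $v_0$ and $v_1$) reproduces exactly the language $x^ny^{m+1}z$ up to a length bound:

```python
from collections import deque

# Productions from the answer: v0 -> x v0 | y v1,  v1 -> y v1 | z.
prods = {"0": ["x0", "y1"], "1": ["y1", "z"]}
MAX = 7   # length bound; productions never shrink a string, so this is safe

terminal = set()
queue = deque(["0"])
seen = {"0"}
while queue:
    s = queue.popleft()
    if len(s) > MAX:
        continue
    nts = [i for i, ch in enumerate(s) if ch in prods]
    if not nts:
        terminal.add(s)           # no nonterminals left: a terminal string
        continue
    i = nts[0]
    for rhs in prods[s[i]]:       # expand the leftmost nonterminal
        u = s[:i] + rhs + s[i+1:]
        if u not in seen:
            seen.add(u)
            queue.append(u)

# Claimed language: x^n y^(m+1) z for n, m >= 0.
claimed = {"x"*n + "y"*(m+1) + "z"
           for n in range(MAX) for m in range(MAX)
           if n + m + 2 <= MAX}
assert terminal == claimed
assert "xyy" not in terminal      # xy^2 is not derivable
assert "xxyyz" in terminal        # x^2 y^2 z is derivable
```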
Prove $\limsup\limits_{n \rightarrow \infty} b_n \leq \limsup\limits_{n \rightarrow \infty} a_n$, given $b_n = \frac{a_1+ \cdots +a_n}{n}$.  Fix $m$ and pick $n\geqslant m$ large. Then $$\begin{align}
\frac{{{a_1} + {a_2} + \cdots + {a_m} + {a_{m + 1}} + \cdots + {a_n}}}{n} &\leqslant \frac{{{a_1} + {a_2} + \cdots + {a_m} + \left( {n - m} \right)\mathop {\sup }\limits_{k > m} {a_k}}}{n} \cr
&\leqslant \frac{{{a_1} + {a_2} + \cdots + {a_m} }}{n} + \frac{{n - m}}{n}\mathop {\sup }\limits_{k > m} {a_k} \end{align} $$
Now hold $m$ fixed and take $\limsup\limits_{n\to\infty}$. You get $$\limsup_{n\to\infty} \frac{a_1+\cdots+a_n}n\leqslant \sup_{k>m}a_k$$
By the same token you can show that $$\liminf_{n\to\infty} \frac{a_1+\cdots+a_n}n\geqslant \liminf_{n\to\infty }a_n$$ 
Convergence in a normed space  What you did is fine. This proves that$$\|x-y\|_{\mathbb{X}}\leqslant\varepsilon+\sup_{n\geqslant N}\|x_n-y\|_{\mathbb{X}}.$$But note that $\sup_{n\geqslant N}\|x_n-y\|_{\mathbb{X}}\leqslant\sup_{n\in\mathbb N}\|x_n-y\|_{\mathbb{X}}$. Therefore, you proved that$$\|x-y\|_{\mathbb{X}}\leqslant\varepsilon+\sup_{n\in\mathbb N}\|x_n-y\|_{\mathbb{X}}.$$Since this takes place for each $\varepsilon>0$,$$\|x-y\|_{\mathbb{X}}\leqslant\sup_{n\in\mathbb N}\|x_n-y\|_{\mathbb{X}}.$$ 
Algebraic independence and $\overline{\mathbb{Q}}$ linear independence  If the set $\{\alpha_1,\dots,\alpha_n\}$ is algebraically independent over $\mathbb{Q}$ then it is also algebraically independent over $\bar{\mathbb{Q}}$.
Otherwise we can assume, without loss of generality, that $\alpha_n$ is algebraic over $\bar{\mathbb{Q}}(\alpha_1,\dots,\alpha_{n-1})$.
Let $f$ be the minimal polynomial, $f=p_0+p_1X+\dots+p_kX^k$, where $p_i\in\bar{\mathbb{Q}}(\alpha_1,\dots,\alpha_{n-1})$. Let $S$ be the set of the coefficients in $p_i$, for $i=1,2,\dots,k$, so $\alpha_n$ is algebraic over $\mathbb{Q}(S)(\alpha_1,\dots,\alpha_{n-1})=\mathbb{Q}(\alpha_1,\dots,\alpha_{n-1})(S)$, which is finite dimensional over $\mathbb{Q}(\alpha_1,\dots,\alpha_{n-1})$. Hence $\alpha_n$ is algebraic over $\mathbb{Q}(\alpha_1,\dots,\alpha_{n-1})$, a contradiction.
Algebraic independence implies linear independence. The converse is not true: $\pi$ and $\pi^2$ are linearly independent over $\mathbb{Q}$ (or any algebraic extension of $\mathbb{Q}$), but not algebraically independent.
This generalizes verbatim to the case where we have fields $F\subseteq K\subseteq L$, with $K$ algebraic over $F$. A subset $\{\alpha_1,\dots,\alpha_n\}$ of $L$ is algebraically independent over $K$ if and only if it is algebraically independent over $F$. 
Generating recursive equation for urn question  First, compute $P(W_{n+1}=w)$. Notice that the total number of balls in the urn always stays unchanged, at $a+b$.
We have
$$
P(W_{n+1}=w)=P(W_n=w-1)\left(1-\frac{w-1}{a+b}\right)+P(W_n=w)\frac{w}{a+b}
$$
(you could either have $w$ white balls and draw a white ball, or $w-1$ white balls and draw a black one, so $1$ gets added)
multiply both sides by $w-1$, we have
$$
wP(W_{n+1}=w)-P(W_{n+1}=w)=(w-1)P(W_n=w-1)-\frac{(w-1)^2}{a+b}P(W_n=w-1)+\frac{w^2-w}{a+b}P(W_n=w)
$$
Now sum over all possible $w$ (from $0$ to $\infty$)
$$
\mathbb{E}W_{n+1} - 1=\mathbb{E}W_n - \frac{1}{a+b}\mathbb{E}W_n^2 + \frac{1}{a+b}\mathbb{E}W_n^2 - \frac{\mathbb{E}W_n}{a+b}
$$
So you get
$$\mathbb{E}W_{n+1}=\left(1-\frac{1}{a+b}\right)\mathbb{E}W_n +1
$$
(of course both expectation and second moment exist in this case, since sample space is finite) 
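The recursion can be verified exactly by evolving the full distribution of $W_n$; this is a minimal sketch (the starting counts $a=3$, $b=4$ are an arbitrary choice, and the model is the one implicit in the recursion above: a drawn black ball is replaced by a white one):

```python
from fractions import Fraction

# Exact check of E[W_{n+1}] = (1 - 1/(a+b)) E[W_n] + 1 by evolving the
# full distribution of W_n.
a, b = 3, 4          # initial white/black counts (arbitrary choice)
T = a + b            # total number of balls, constant

# dist[w] = P(W_n = w); start with W_0 = a white balls.
dist = {a: Fraction(1)}

def step(dist):
    new = {}
    for w, p in dist.items():
        new[w] = new.get(w, 0) + p * Fraction(w, T)                 # drew white
        if w + 1 <= T:
            new[w + 1] = new.get(w + 1, 0) + p * (1 - Fraction(w, T))  # drew black
    return new

def mean(dist):
    return sum(w * p for w, p in dist.items())

E = mean(dist)
for n in range(10):
    dist = step(dist)
    E_rec = (1 - Fraction(1, T)) * E + 1   # the recursion from the answer
    E = mean(dist)
    assert E == E_rec                      # exact agreement at every step
```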
How to Show that $1+\left(\left\lceil\dfrac{x}{n}\right\rceil - 1\right)n\leq x$?  By the property of the ceiling function we have:
$$
\left\lceil\dfrac{x}{n}\right\rceil <\dfrac{x}{n}+1,
$$
which gives me:
$$
1+\left(\left\lceil\dfrac{x}{n}\right\rceil-1\right)n <x+1.
$$
What I missed, which seems kind of obvious, is that $\left(1+\left(\left\lceil\dfrac{x}{n}\right\rceil-1\right)n\right)$ is a positive integer and since it is less than $x+1$, it must be less than or equal to $x$. Thus,
$$
1+\left(\left\lceil\dfrac{x}{n}\right\rceil-1\right)n \le x.
$$ 
Trouble computing an index  $NC/N$ is a nontrivial subgroup of the infinite cyclic group $G/N$, so $|G:NC| = \left|\frac{G}{N}:\frac{NC}{N}\right|$ is finite.
Now $|NC:N_2C| \le |N:N_2| \le 2$ is finite, so $|G:N_2C| = |G:NC| \times |NC:N_2C|$ is finite. 
Find the coordinates of the inflexion points of $A(\beta)=8\pi-16\sin(2\beta)$ in $\mathbb{R}$  $A''(\beta) = 64\sin(2\beta) = 0$
Let $\theta = 2\beta$, then where is $\sin\theta = 0$?
$\theta = k\pi$, where $k \in \mathbb{Z}.$ Thus $\beta = k\frac{\pi}{2}$ 
Graded modules over $k[t,t^{-1}]$  A low-tech way:
Let $M$ be a graded $k[t^{\pm1}]$-module, so that in particular $M=\bigoplus_{n\in\mathbb Z}M_n$ as a vector space. For each $n\in\mathbb Z$ the map $M_n\to M_{n+1}$ given by multiplication by $t$ is a linear bijection, with inverse given by multiplication by $t^{-1}$, of course. It follows that for all $n\in\mathbb Z$ we have $M_n=t^nM_0$. Moreover, it is easy to see that the map $k[t^{\pm1}]\otimes_k M_0\to M$ obtained by restricting the multiplication map $k[t^{\pm1}]\otimes_k M\to M$ to the subspace $k[t^{\pm1}]\otimes_k M_0$ is in fact an isomorphism of $k[t^{\pm1}]$-modules, if we see its domain as a $k[t^{\pm1}]$-module in the obvious way. But this obvious module is free: any basis of $M_0$ gives a basis.
N.B.: This is, I guess, what the high-tech proof amounts to in this concrete situation... 
Proof of Banach-Alaoglu theorem by Douglas  If two linear functionals $f$ and $g$ are equal on the closed unit ball they are equal everywhere: For any $x$ with $\|x\| >1$ we have $f(x)=\|x\|f\left(\frac x {\|x\|}\right)=\|x\|g\left(\frac x {\|x\|}\right)=g(x)$ by linearity of $f$ and $g$.
The product topology is the topology of convergence of each coordinate and weak* convergence is the topology of convergence at each point $x$. 
Prove an equation with summation and binomial coefficients  We write $n$ instead of $\gamma$, focus on the essentials and skip the constant $(-1)^{a-b}2^{2(a-b)}$.
We obtain for integers $a\geq b>0$
\begin{align*}
\color{blue}{\sum_{n=0}^{a-b}}&\color{blue}{(-1)^n\frac{b+n}{b}\binom{2a}{a-b-n}\binom{2b-1+n}{n}}\tag{1}\\
&=\sum_{n=0}^{a-b}\frac{b+n}{b}\binom{2a}{a-b-n}\binom{-2b}{n}\tag{2}\\
&=\sum_{n=0}^{a-b}\binom{2a}{a-b-n}\binom{-2b}{n}
-2\sum_{n=1}^{a-b}\binom{2a}{a-b-n}\binom{-2b-1}{n-1}\tag{3}\\
&=\binom{2a-2b}{a-b}-2\sum_{n=0}^{a-b-1}\binom{2a}{a-b-n-1}\binom{-2b-1}{n}\tag{4}\\
&=\binom{2a-2b}{a-b}-2\binom{2a-2b-1}{a-b-1}[[a>b]]\tag{5}\\
&\color{blue}{=[[a=b]]}
\end{align*}
Comment:
In (1) we use the binomial identity $\binom{p}{q}=\binom{p}{p-q}$.
In (2) we use the binomial identity $\binom{-p}{q}=\binom{p+q-1}{q}(-1)^q$.
In (3) we split the sum and apply $\binom{p}{q}=\frac{p}{q}\binom{p-1}{q-1}$ to the right-hand sum.
In (4) we apply ChuVandermonde's identity
to the left sum and shift the index of the right sum by one to start with $n=0$.
In (5) we use Iverson brackets in the right expression, since the corresponding sum in the line before is zero if $a=b$. We also use the binomial identity $\binom{2p}{p}=2\binom{2p-1}{p-1}$. 
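Since every step is an elementary binomial manipulation, the whole chain is easy to check numerically; here is a small exact verification that the sum equals $[[a=b]]$ for small parameters:

```python
from fractions import Fraction
from math import comb

# Exact evaluation of the left-hand side (1) for small a >= b > 0;
# the claim is that it equals 1 when a = b and 0 otherwise.
def S(a, b):
    return sum(Fraction((-1)**n * (b + n), b)
               * comb(2*a, a - b - n) * comb(2*b - 1 + n, n)
               for n in range(a - b + 1))

for a in range(1, 8):
    for b in range(1, a + 1):
        assert S(a, b) == (1 if a == b else 0)
```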
In how many orders can perfumes and colognes be sprayed?  Visualize placing the perfumes and colognes in a sequence, and spraying them in that sequence.
Part (i): Out of 7 possible places, we must choose 4 for the perfumes. Thus, the answer is:
$\binom{7}{4} = \boxed{35}$
Part (ii): Since both end places must be taken up by perfumes, we eliminate them from the count. Now, we need to choose 2 places out of the center 5 to put the remaining perfumes. Thus, the answer is:
$\binom{5}{2} = \boxed{10}$ 
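Both counts are small enough to confirm by brute force; a sketch using `itertools`:

```python
from itertools import permutations
from math import comb

# Arrange 4 identical perfumes (P) and 3 identical colognes (C) in a
# row of 7 sprays; the set() removes duplicate orderings.
arrangements = set(permutations("PPPPCCC"))

# Part (i): any order.
assert len(arrangements) == comb(7, 4) == 35

# Part (ii): both end places must be perfumes.
both_ends = [s for s in arrangements if s[0] == "P" and s[-1] == "P"]
assert len(both_ends) == comb(5, 2) == 10
```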
Metrizability of quotient spaces of metric spaces  A very simple way in which the quotient can fail to be metrizable is if there are equivalence classes that are not closed. Take your original space to be $\mathbb{R}$ and let $\sim$ have as its equivalence classes $(0,1)$ and all singletons $\{x\}$ with $x\notin(0,1)$. In the quotient topology, you can not separate $(0,1)$ and $1$ by open sets and the quotient space fails to be Hausdorff. 
Solutions to $\frac{1}{n} = \frac{1}{a} + \frac{1}{b}$  Okay so, let's solve $a+b\mid ab$. First, set $d=\gcd(a,b)$ and set $a=da'$ and $b=db'$. Also see that $a+b\mid ab-a(a+b)$ so that $a+b\mid a^2$, and similarly $a+b\mid b^2$.
Now, set $k$ and $l$ so that $d(a'+b')k=(a+b)k=a^2=d^2a'^2$ and $d(a'+b')l=d^2b'^2$. This means $(a'+b')k=da'^2$ and $(a'+b')l=db'^2$. Since $a'^2$ is coprime to $a'+b'$, it divides $k$; set $k=k'a'^2$, and similarly, set $l=l'b'^2$. This transforms both equations to
$(a'+b')k'=d$ and $(a'+b')l'=d$, hence, $k'=l'$; so (let's set $k'=n=l'$ to avoid confusion) we have $(a'+b')n=d$. So, $a'+b'$ divides $d$. Now if we pick $a'$, $b'$ and $n$, we obtain $d=(a'+b')n$, hence $a=da'=(a'+b')na'$ and $b=db'=(a'+b')nb'$. This means all solutions are given by
$$(a,b)=(\alpha(\alpha+\beta)\gamma,\beta(\alpha+\beta)\gamma)$$
for any $\alpha,\beta,\gamma\in\mathbb{Z}$. Now plugging that into the equation gives
\begin{align}
\frac1a+\frac1b&=\frac1{\alpha(\alpha+\beta)\gamma}+\frac1{\beta(\alpha+\beta)\gamma}\\
&=\frac{\beta}{\alpha\beta(\alpha+\beta)\gamma}+\frac{\alpha}{\alpha\beta(\alpha+\beta)\gamma}\\
&=\frac{\alpha+\beta}{\alpha\beta(\alpha+\beta)\gamma}\\
&=\frac{1}{\alpha\beta\gamma}\\
\end{align}
so that we have $n=\alpha\beta\gamma$. In the end, if we're given an $n$, we simply have to write it as a product of three other numbers (note that they may also be negative) to find solutions to $a$ and $b$. 
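A quick exact check of the parametrization, together with a small helper (our own, for illustration) that enumerates pairs for a given $n$ by factoring $n=\alpha\beta\gamma$:

```python
from fractions import Fraction

# Every pair (a, b) = (α(α+β)γ, β(α+β)γ) satisfies 1/a + 1/b = 1/(αβγ).
for alpha in range(1, 8):
    for beta in range(1, 8):
        for gamma in range(1, 8):
            a = alpha * (alpha + beta) * gamma
            b = beta * (alpha + beta) * gamma
            n = alpha * beta * gamma
            assert Fraction(1, a) + Fraction(1, b) == Fraction(1, n)

# Conversely, factoring n = α·β·γ enumerates solutions for a given n
# (positive factorizations only, for simplicity).
def solutions(n):
    sols = set()
    for alpha in range(1, n + 1):
        for beta in range(1, n + 1):
            if n % (alpha * beta) == 0:
                gamma = n // (alpha * beta)
                sols.add((alpha*(alpha+beta)*gamma, beta*(alpha+beta)*gamma))
    return sols

# e.g. n = 2 yields (3, 6) and (4, 4) among the positive solutions.
assert (3, 6) in solutions(2) and (4, 4) in solutions(2)
```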
Choose a basis of $\mathbb{F}_q/\mathbb{Z}_p$ to do inverse quickly.  Well, I'm sure you know this, and probably it is not what you're looking for, but if you represent $\mathbf F_q$ as a quotient $\mathbf F_p[x]/(f)$ for some irreducible polynomial $f$ of degree $n$, then you can simply take
$$
\varepsilon_k=\varepsilon_k'=x^k
$$
for $k=0,\ldots,n-1$,
and determine $i_0',\ldots,i_{n-1}'$ by applying the extended Euclidean algorithm to the polynomial $g=i_0+i_1x+\cdots+i_{n-1}x^{n-1}$ and $f$ in $\mathbf F_p[x]$. You find $a,b\in\mathbf F_p[x]$ such that
$$
af+bg=1
$$
in $\mathbf F_p[x]$. In $\mathbf F_q$ this gives $bg=1$, i.e.
$$
g^{-1}=i_0'+i_1'x+\cdots+i_{n-1}'x^{n-1}=b
$$
in $\mathbf F_q$. This method is very fast, and you only need the Euclidean algorithm. 
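Here is a minimal sketch of that computation in Python (all helper names are our own; polynomials are coefficient lists mod $p$, lowest degree first, and the example uses $f = 1 + x + x^3$ over $\mathbf F_2$):

```python
# Invert g in F_p[x]/(f) with the extended Euclidean algorithm.
# Polynomials are coefficient lists mod p, lowest degree first.

p = 2
f = [1, 1, 0, 1]                     # f = 1 + x + x^3, irreducible over F_2

def polydivmod(a, b):
    """Division with remainder in F_p[x]."""
    a = a[:]
    inv_lead = pow(b[-1], -1, p)
    q = [0] * max(len(a) - len(b) + 1, 1)
    while len(a) >= len(b) and any(a):
        d = len(a) - len(b)
        c = a[-1] * inv_lead % p
        q[d] = c
        for i, bc in enumerate(b):
            a[i + d] = (a[i + d] - c * bc) % p
        while len(a) > 1 and a[-1] == 0:
            a.pop()
    return q, a

def polymul(a, b):
    r = [0] * (len(a) + len(b) - 1)
    for i, ac in enumerate(a):
        for j, bc in enumerate(b):
            r[i + j] = (r[i + j] + ac * bc) % p
    return r

def polysub(a, b):
    n = max(len(a), len(b))
    a, b = a + [0] * (n - len(a)), b + [0] * (n - len(b))
    return [(x - y) % p for x, y in zip(a, b)]

def inverse_mod(g, f):
    """Extended Euclid: find b with a*f + b*g = 1, i.e. b = g^{-1} mod f."""
    r0, r1 = f, g
    t0, t1 = [0], [1]
    while len(r1) > 1 or r1[0] != 0:
        q, rem = polydivmod(r0, r1)
        r0, r1 = r1, rem
        t0, t1 = t1, polysub(t0, polymul(q, t1))
    c_inv = pow(r0[0], -1, p)        # scale so the gcd constant becomes 1
    _, rem = polydivmod([c * c_inv % p for c in t0], f)
    return rem

g = [0, 1]                           # g = x
ginv = inverse_mod(g, f)             # 1 + x^2
_, check = polydivmod(polymul(g, ginv), f)
assert check == [1]                  # g * g^{-1} = 1 in F_q
```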
Reasoning that $ \sin2x=2 \sin x \cos x$  $$\color{red}{\sin 2x}=\color{blue}{2\sin x}\cos x$$ 
Computing the residues of a function with a single pole.  Let $$f(z) = \sum_{m=-\infty}^{+\infty} a_m z^m$$ on $\mathbb{C}\backslash \{0\}.$ You want to compute $a_{-1}$. Since your function is even, you get $$\sum_{m=-\infty}^{+\infty} a_m z^m = \sum_{m=-\infty}^{+\infty} (-1)^ma_m z^m$$ and hence $$a_{-1} = (-1)^{-1} a_{-1}.$$ You can conclude easily that $a_{-1}=0.$ 
The alternating Fourier series associated with the fourth Bernoulli polynomial  Let $$f_4(t):=\sum_{n>0} 2\frac{\cos(2\pi n t)}{n^4}$$
and
$$g_4(t):=\sum_{n>0} (1)^n 2\frac{\cos(2\pi n t)}{n^4}$$
You are right: the expression of $g_4(t)$ on $[0,1]$ is the piecewise polynomial
$$\begin{cases}-\frac{2^4}{4!}\pi^4B_4(t+\tfrac12)&\text{for}&0 \le t \le \tfrac12\\
-\frac{2^4}{4!}\pi^4B_4(t-\tfrac12)&\text{for}&\tfrac12 \le t \le 1\end{cases}$$
This is because $$g_4(t)=f_4(t+\tfrac12)\tag{1}$$
due to the fact that:
$$(-1)^n \cos (2 \pi n t) = \cos(2 \pi n t +n \pi)=\cos(2 \pi n( t + \tfrac12))$$
In fact, (1) is not restricted to $t \in [0,1]$. It is valid for any $t$. Here is a graphical verification that can be followed on the Matlab program below:
Fig.1: Top graphics: the initial Fourier series $f$ (in blue) in perfect coincidence with the graphical representation of $B_4(t)$ (red curve) for $t \in [0,1]$ and $B_4(t1)$ (magenta curve) for $t \in [1,2]$. Bottom graphics: The same for Fourier series $g$ with the $\tfrac12$ shift given by (1) .
t=0:0.01:4;
p=4;
K=((2*pi)^4)/factorial(4);
B4=@(t)(-K*(t.^4-2*t.^3+t.^2-1/30));
%
subplot(2,1,1);% top graphics begins here
hold on;grid on
axis([0,4,-2.5,2.5]);
f=0;
for k=1:10;
f=f+2*cos(2*pi*k*t)/k^p;
end;
plot(t,f,'b')
t1=0:0.01:1;plot(t1,B4(t1),'r');% red curve on [0,1]
t2=1:0.01:2;plot(t2,B4(t2-1),'m');% magenta curve on [1,2]
plot([0,5],[0,0],'k');
%
subplot(2,1,2);% bottom graphics begins here
hold on;grid on
axis([0,4,-2.5,2.5]);
g=0;
for k=1:10;
g=g+2*((-1)^k)*cos(2*pi*k*t)/k^p;
end;
plot(t,g,'linesmoothing','on')
t1=0.5:0.01:1.5;plot(t1,B4(t1-1/2),'r');% red curve on [1/2,3/2]
t2=1.5:0.01:2.5;plot(t2,B4(t2-3/2),'m');% magenta curve on [3/2,5/2]
plot([0,4],[0,0],'k');
Remarks:
This could be done for any $B_p$, not uniquely $B_4$, by comparison with
$$f_p(t):=\sum_{n>0} 2\frac{\cos(2\pi n t-p \tfrac{\pi}{2})}{n^p}$$
Look how close the graphical representation of $f$ is to that of the function $h$ defined by $h(t)=2 \cos(\pi t)$. 
How does one show $\tan(nz)$ converges uniformly to $-i$ in the upper half plane?  Hmm... I'm not convinced the statement is true. Perhaps you mean
$$\lim_{n\rightarrow\infty} \tan(nz) = i?$$
Note that the problem is ripe for experimentation, so use your favorite numerical tool to check the value of $\tan(nz)$ for some fixed $z$ and some increasing sequence of $n$s, then you'll see why I'm guessing the limit should be $i$, rather than $-i$.
Is this really the limit for all $z$ in the upper half plane? Well, plug the following into WolframAlpha:
complex expand(tan(n (a + b*i)))
I get
$$\frac{\sin (2 a n)}{\cos (2 a n)+\cosh (2 b n)}+i\frac{
\sinh (2 b n)}{\cos (2 a n)+\cosh (2 b n)},$$
and from this form it's not hard to see why the limit is what it is. This expansion for the tangent follows from similar expansions for sine and cosine, which both follow from Euler's formula and are really not so hard to work out.
Edit
I had asserted that the convergence could not be uniform on the upper half plane. While true, this misses the fact that the problem deals with compact subsets of the upper half plane and I'm quite certain that the convergence is uniform on any compact subset of the upper half plane. One way to prove this is to show that
$$\left|\tan(n(a+bi)) - i\right|^2 = \frac{2e^{-2bn}}{\cos(2an) + \cosh(2bn)},$$
a formula I produced with Mathematica. Now, if $K$ is a compact subset of the upper half plane, then $\min\{\Im(z):z\in K\}$ exists and is positive. ($\Im(z)$ denotes the imaginary part of $z$.) This minimum may be taken as a lower bound on $b$, so it's now not too hard to show that $\left|\tan(n(a+bi)) - i\right|$ can be made small independent of $a$ and $b$. 
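The suggested experiment is easy to run; the exact error formula checked below, $|\tan(n(a+bi))-i|^2 = \frac{2e^{-2bn}}{\cos 2an+\cosh 2bn}$, is our own derivation from $\tan x - i = -2i/(1+e^{-2ix})$ (the test point $z$ is arbitrary):

```python
import cmath
import math

# Evaluate tan(nz) at a fixed point of the upper half plane for increasing n;
# the values approach i, with error decaying like e^{-2bn}.
z = complex(0.7, 0.3)                # a = 0.7, b = 0.3 (arbitrary test point)
a, b = z.real, z.imag

for n in (1, 5, 10, 20):
    t = cmath.tan(n * z)
    # closed-form squared error (our derivation):
    # |tan(nz) - i|^2 = 2 e^{-2bn} / (cos 2an + cosh 2bn)
    err2 = 2 * math.exp(-2*b*n) / (math.cos(2*a*n) + math.cosh(2*b*n))
    assert abs(abs(t - 1j)**2 - err2) < 1e-12
    print(n, t)
```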
How to find a limit in implicit function  I will show that
$A > a > A-\frac1{N}\int_0^1 p(x)\,dx$,
so
$\lim_{N \to \infty} a = A$.
First of all,
$\int_0^1 \frac{1}{1-(p(x)/(p(x)+a))^N}dx > 1$,
so
$A > a$.
Then,
using Bernoulli's inequality,
$\begin{array}\\
(p(x)+a)/p(x)
&=1+(a/p(x))\\
\text{so}\\
((p(x)+a)/p(x))^N
&=(1+(a/p(x)))^N\\
&>1 + Na/p(x)\\
\text{or}\\
(p(x)/(p(x)+a))^N
&<1/(1 + Na/p(x))\\
&=p(x)/(p(x) + Na)\\
\text{Therefore}\\
1-(p(x)/(p(x)+a))^N
&>1-p(x)/(p(x)+Na)\\
&=Na/(p(x) + Na)\\
\text{so that}\\
A
&=a \int_0^1 \frac{1}{1-(p(x)/(p(x)+a))^N}dx\\
&<a\int_0^1 \frac{dx}{(Na)/(p(x) + Na)}\\
&=\int_0^1 \frac{(p(x) + Na)dx}{N}\\
&=\frac1{N}\int_0^1 p(x)dx+a\\
\end{array}
$
Therefore
$A > a > A-\frac1{N}\int_0^1 p(x)dx$,
so
$\lim_{N \to \infty} a = A$. 
Proof that Effros Borel space is standard  The citations below refer, as in the OP, to Kechris' book.
This is what Theorem $4.14$ does:
Every separable metrizable space is homeomorphic to a subset of the Hilbert cube.
This gives the desired embedding result. Note that Kechris uses a nonstandard (in my experience) notion of "compactification," in Definition $4.15$: a compactification of a separable metrizable space $X$ is a compact metrizable space $Y$ such that $X$ is homeomorphic to a dense subset of $Y$. So the "pick a compactification" language in the proof in question is unproblematic ... if we grant Kechris' use of the term. 
uniform boundedness principle for $L^{1}$  I wouldn't know about the proof in the book, but here's a proof. It could probably be streamlined some; you should see what it looked like a few days ago. Going to change some of the notation; this is going to be enough typing as it is.
Going to assume we're talking about real-valued functions, so that for every $f$ there exists $E$ with $\left|\int_E f\right|\ge\frac12\|f\|_1$.
Theorem Suppose $\mu$ is a measure on (some $\sigma$-algebra on) $X$, $S\subset L^1(\mu)$, and $\sup_{f\in S}\|f\|_1=\infty$. Then there exists a measurable set $E$ with $$\sup_{f\in S}\left|\int_Ef\right|=\infty.$$
Notation: The letter $f$ will always refer to an element of $S$; $E$ and $F$ will always be measurable sets (or equivalence classes of measurable sets modulo null sets).
Proof: First we lop a big chunk off the top: Wlog $S$ is countable; hence wlog $\mu$ is $\sigma$finite. Now we nibble away at the bottom:
Case 1 $\mu$ is finite and nonatomic.
This is the meat of it. It's also the cool part: We imitate the standard proof of the standard uniform boundedness principle, with measurable sets instead of elements of some vector space.
Let $\mathcal A$ be the measure algebra; that is, the algebra of measurable sets modulo null sets. For $E,F\in\mathcal A$ define $$d(E,F)=\mu(E\triangle F)=\|\chi_E-\chi_F\|_1.$$Now $\mathcal A$ is a complete metric space (complete because it's isometric with a closed subset of $L^1$). Define $$A_n=\{E\in\mathcal A:\sup_f\left|\int_Ef\right|>n\}.$$
$A_n$ is open in $\mathcal A$: Say $E\in A_n$ and choose $f$ so $$\left|\int_Ef\right|-n=\epsilon>0.$$ There exists $\delta>0$ so the integral of $|f|$ over any set of measure less than $\delta$ is less than $\epsilon$. Hence if $d(E,F)<\delta$ we have $$\left|\int_{F} f\right|\ge \left|\int_{E} f\right|-\int_{E\triangle F}|f|>n,$$so $F\in A_n$.
$A_n$ is dense in $\mathcal A$: Say $E\in\mathcal A$ and let $\epsilon>0$. Write $$X=\bigcup_{j=1}^NE_j,$$where $E_j\cap E_k=\emptyset$ and $$\mu(E_j)<\epsilon.$$ Choose $f$ with $\|f\|_1>4Nn$. Choose $F$ so $$\left|\int_{F} f\right|>2Nn.$$Now there exists $j$ with$$\left|\int_{F\cap E_j} f\right|>2n.$$We want to show there exists $E'\in A_n$ with $d(E,E')<\epsilon$. If $\left|\int_{E\setminus(F\cap E_j)} f\right|>n$ then $E'=E\setminus(F\cap E_j)$ works; if not the triangle inequality shows that $E'=E\cup(F\cap E_j)$ works.
So the Baire category theorem shows that $\bigcap A_n\ne\emptyset$.
Case 2 $\mu$ is nonatomic. If there exists $E$ with $\mu(E)<\infty$ and $\sup_f\left|\int_Ef\right|=\infty$ we're done by Case 1. Suppose not.
Suppose we've chosen $f_1,\dots, f_n$ and $E_n$ so that $\mu(E_n)<\infty$ and $$\left|\int_{E_n} f_j\right|>j\quad(1\le j\le n).$$There exists $F$ with $\mu(F)<\infty$, $E_n\subset F$, and such that if $E_{n+1}$ is any set with $$E_{n+1}\cap F=E_n$$then we will still have $$\left|\int_{E_{n+1}} f_j\right|>j\quad(1\le j\le n).$$Choose $c$ so $\int_F|f|\le c$ for all $f$. Choose $f_{n+1}$ with $\|f_{n+1}\|_1>3c+2(n+1)$. Then $\int_{X\setminus F}|f_{n+1}|>2c+2(n+1)$, so there exists $F_n\subset X\setminus F$ with $\left|\int_{F_n} f_{n+1}\right|>c+(n+1).$ If we let $E_{n+1}=E_n\cup F_n$ then we have $$\left|\int_{E_{n+1}} f_j\right|>j\quad(1\le j\le n+1).$$(For $1\le j\le n$ this follows by the comments above and for $j=n+1$ it uses the triangle inequality.)
So if $E=\bigcup E_n$ then $$\left|\int_{E} f_j\right|\ge j$$for all $j$.
Case 3 $X$ is a countable union of atoms. We may as well assume $\mu$ is a measure on $\Bbb N$. The argument in this case is really just like the argument in Case 2; details available on request.
Case 4 $\mu$ is $\sigma$-finite. Write $X=A_2\cup A_3$ where $\mu$ is nonatomic on $A_2$ and $A_3$ is a countable union of atoms. There exists $j=2,3$ such that $\int_{A_j}|f|$ is unbounded; we are done by Case $j$ above. 
difference between normed linear space and inner product space  If you have an inner product space $\left(E, \varphi\right)$, it has a natural structure as a normed vector space: $\left(E,x\mapsto \sqrt{\varphi(x,x)}\right)$ but the other way around isn't true. There are norms that do not come from inner products.
An example with $E=\Bbb R^2$:
If you take $\varphi:\left(\left(x_1,y_1\right),\left(x_2,y_2\right)\right)\mapsto x_1x_2+y_1y_2$ you have an inner product.
And if you let $N_2:\left(x,y\right) \mapsto \sqrt{\varphi\left(\left(x,y\right),\left(x,y\right)\right)}=\sqrt{x^2+y^2}$, you get the norm you know.
But there are other norms such as $N_\infty:(x,y)\mapsto \max(|x|,|y|)$ that can't be built from an inner product.
By the way, if your norm $N$ does come from an inner product, you can get the inner product back by letting $\psi:(x,y)\mapsto \cfrac{N(x+y)^2-N(x-y)^2}{4}$ 
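A small numeric illustration of both points (the parallelogram-law test used below is the standard criterion for a norm to come from an inner product; the vectors are arbitrary):

```python
import math

# The Euclidean norm and the max-norm on R^2.
def N2(x, y):
    return math.sqrt(x*x + y*y)

def Ninf(x, y):
    return max(abs(x), abs(y))

def polar(N, u, v):
    # psi(u, v) = (N(u+v)^2 - N(u-v)^2) / 4, the polarization formula
    return (N(u[0]+v[0], u[1]+v[1])**2 - N(u[0]-v[0], u[1]-v[1])**2) / 4

u, v = (1.0, 2.0), (3.0, -1.0)
dot = u[0]*v[0] + u[1]*v[1]               # ordinary dot product
assert abs(polar(N2, u, v) - dot) < 1e-12  # polarization recovers it

# Parallelogram law N(u+v)^2 + N(u-v)^2 = 2N(u)^2 + 2N(v)^2 holds for N2 ...
lhs = N2(u[0]+v[0], u[1]+v[1])**2 + N2(u[0]-v[0], u[1]-v[1])**2
rhs = 2*N2(*u)**2 + 2*N2(*v)**2
assert abs(lhs - rhs) < 1e-12

# ... but fails for the max-norm, so Ninf comes from no inner product.
lhs = Ninf(u[0]+v[0], u[1]+v[1])**2 + Ninf(u[0]-v[0], u[1]-v[1])**2
rhs = 2*Ninf(*u)**2 + 2*Ninf(*v)**2
assert abs(lhs - rhs) > 1e-9
```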
How to write $A D A^T x$ as $\sum_{j=1}^p A_j D_{jj} A_j^T x$?  Check the fact that $ADA^Tx=(AD)A^Tx$. Then,
$y=\sum_{j=1}^p (A_j D_{j,j})A^T_jx$
This is a consequence of matrix multiplication being associative; $AD$ is the matrix whose columns are the columns of $A$ scaled by the corresponding diagonal entries of $D$.
Hope this hint helps. 
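A quick numerical check of the column-by-column identity (the sizes and random data are arbitrary):

```python
import numpy as np

# A D A^T x equals the sum over j of A_j D_jj A_j^T x,
# where A_j is the j-th column of A.
rng = np.random.default_rng(0)
n, p = 4, 3
A = rng.standard_normal((n, p))
D = np.diag(rng.standard_normal(p))
x = rng.standard_normal(n)

direct = A @ D @ A.T @ x
summed = sum(A[:, j] * D[j, j] * (A[:, j] @ x) for j in range(p))

assert np.allclose(direct, summed)
```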
Is it acceptable to say that if $a^b ≡ a^c\;(mod\;p)$, then $b≡c\;(mod\;p)\;?$  No, it's not. For example,
$$1^2\equiv 1^3\mod 5$$
but obviously $2\not\equiv3\mod 5$.
If you'd tried this out with any number $a\not\equiv0\mod p$, you'd have found that $a^b\equiv a^c\mod p$ pretty much never means $b\equiv c\mod p$.
I cannot stress this enough, but try numerical examples against your theorem before you try to prove it. It gives you a good idea of how problems work, how you could prove it, and in this case, whether the statement is even true.
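The numeric exploration the answer recommends takes one line of Python; it also shows that powers of $a$ repeat with period dividing $p-1$ (Fermat's little theorem), not $p$:

```python
# The counterexample above, checked directly, plus a scan of powers of 2
# mod 5 showing that exponents matter modulo p - 1 = 4, not modulo p = 5.
p = 5
assert pow(1, 2, p) == pow(1, 3, p) and 2 % p != 3 % p

vals = [pow(2, e, p) for e in range(1, 9)]   # 2, 4, 3, 1, 2, 4, 3, 1
assert vals[0] == vals[4]                    # 2^1 = 2^5 (mod 5), yet 1 != 5 (mod 5)
```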
Determining if the set is a basis for the vector space  Let V be a vector space, in your case $\mathbb{R}^4$. A linearly independent spanning set for V is called a basis. You have to show that $S$ is linearly independent and that it spans $\mathbb{R}^4$. 
Show that a specific $w$ cannot be the root of a quadratic with integer coefficients.  Note that, by Euclidean division
$$x^3-x-1=Q(x)(ax^2+bx+c)+x \left(\frac{b^2}{a^2}-\frac{c}{a}-1\right)+\frac{b c}{a^2}-1$$
So, if $w$ is a root of both $x^3-x-1=0$ and $ax^2+bx+c=0~$, then we would have
$$
w \left(\frac{b^2}{a^2}-\frac{c}{a}-1\right)+\frac{b c}{a^2}-1=0
$$
So if $a^2+a c-b^2\ne0~$ then
$$
w=\frac{b c-a^2}{a^2+a c-b^2}\in\mathbb{Q}
$$
which is absurd since $x^3-x-1=0~$ has no rational solutions.
Now what happens if $a^2+a c-b^2 =0~$?
In this case we must also have
$a^2=bc$. Without loss of generality we may suppose that $\gcd(a,b,c)=1$. From
$a^2=bc$ we conclude that $\gcd(b,c)=1$, because otherwise any common prime divisor would also divide $a$.
From $\gcd(b,c)=1$ and $a^2=bc$ we conclude that
$$b=\epsilon\beta^2, c=\epsilon\gamma^2, a=\epsilon'\beta\gamma,\qquad\hbox{with $\gcd(\beta,\gamma)=1,~(\epsilon,\epsilon')\in\{+1,-1\}^2$}$$
Replacing in $a^2+a c-b^2 =0$ we get $\gamma^2(\beta+\epsilon\epsilon'\gamma)=\beta^3$, so $\gamma\mid\beta^3$, but $\gcd(\beta,\gamma)=1$ so,
$\gamma=\pm1$ and consequently $\beta\pm\epsilon\epsilon'=\beta^3$. This implies that $\beta $ is an integer solution of $x^3-x-1=0$ or $x^3-x+1=0$ which is clearly absurd. So, this case cannot happen. 
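Both computational claims (the remainder of the division and the absence of rational roots) can be checked with SymPy; the quotient $Q(x)=x/a-b/a^2$, left implicit in the answer, is written out explicitly here:

```python
import sympy as sp

x, a, b, c = sp.symbols('x a b c')

# Redo the Euclidean division by hand: with quotient Q(x) = x/a - b/a^2,
# the remainder of x^3 - x - 1 by a x^2 + b x + c is the stated linear
# polynomial.
Q = x/a - b/a**2
r = sp.expand(x**3 - x - 1 - (a*x**2 + b*x + c)*Q)

claimed = x*(b**2/a**2 - c/a - 1) + b*c/a**2 - 1
assert sp.simplify(r - claimed) == 0

# Rational root test: a rational root of x^3 - x - 1 could only be +-1,
# and neither is a root, so the cubic has no rational solutions.
poly = sp.Lambda(x, x**3 - x - 1)
assert poly(1) != 0 and poly(-1) != 0
```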
Prove that ray CE intersects $\triangle ABC$ at a point $D$ on $AB$ and that $D$ must lie strictly between $A$ and $B$.  In most situations, the given statement would be accepted as obvious. I'm not sure whether this is what Allison is looking for, but here is a proof based on the late 19th and early 20th century axiomatization of Euclidean geometry, especially the fact (equivalent to Pasch's Axiom) that every line divides the plane into two half-planes and whether a segment meets the line is determined by whether the endpoints of the segment are in the same or opposite half-planes.
The fact that $E$ is an interior point of $ABC$ means that:
(1) $E$ is on the same side of $BC$ as $A$;
(2) $E$ is on the same side of $AC$ as $B$; and
(3) $E$ is on the same side of $AB$ as $C$ (although we don't need this).
First we prove that $CE$ cannot be parallel to $AB$. Assume the contrary.
Note that $A$ and $B$ are on the same side of line $m = CE$. Denote by $H$ the halfplane to which they belong. Select a point $F$ on line $CE$ but on the other side of $C$ from $E$. By (1), $F$ and $A$ are on opposite sides of $BC$ (since $E$ and $F$ are). Thus the segment $AF$ intersects line $CB$. But since segment $AF$ is situated in $H$, the point of intersection $G$ must in fact be on the ray $CB$. Since $G$ is between $A$ and $F$, points $F$ and $G$ are on the same side of line $AC$. But since $G$ is on ray $CB$, points $G$ and $B$ are also on the same side of $AC$. By transitivity, points $B$ and $F$ are on the same side of $AC$. By the definition of $F$, this shows that $E$ and $B$ are on opposite sides of $AC$, contradicting (2). Therefore $CE$ and $AB$ are not parallel.
It follows that line $CE$ intersects line $AB$ at some point $D$. We wish to prove that $D$ is between $A$ and $B$. We do this by eliminating the remaining possibilities. $D$ cannot coincide with $A$, because then $E$ would lie on the line $CA$, contradicting (2). Similarly, $D$ cannot be $B$. We will now show that $A$ cannot be between $B$ and $D$. Since it can be shown in the same way $B$ cannot be between $A$ and $D$, this is the last case to be addressed.
Assume therefore that $A$ is between $B$ and $D$. Thus $B$ and $D$ are on opposite sides of $CA$. By (2) therefore, $E$ and $D$ are on opposite sides of $CA$, hence of $C$. On the other hand, since $A$ and $D$ are on the same side of $B$, they're on the same side of $CB$. By (1) therefore $D$ and $E$ are on the same side of $CB$, hence of $C$. This is a contradiction.
From the foregoing we conclude that $D$ is between $A$ and $B$. It remains only to show that $D$ is on the ray $CE$. Since $D$ and $B$ are on the same side of $A$, they are on the same side of $CA$. Therefore, by (2), $E$ and $D$ are on the same side of $CA$, hence of $C$. Thus $D$ is on the ray $CE$. 
Do $A$ and $A^{2}$ share eigenvectors if both are real and symmetric?  No, not necessarily. For instance, suppose $A=\begin{pmatrix} 1 & 0 \\ 0 & -1\end{pmatrix}$. Then $A^2=I$, so every vector is an eigenvector of $A^2$. But, for instance, $(1,1)$ is not an eigenvector of $A$.
More generally, if there is a number $c\neq0$ such that both $c$ and $-c$ are eigenvalues of $A$ (with eigenvectors $v$ and $w$, say), then any linear combination of $v$ and $w$ will be an eigenvector of $A^2$ (with eigenvalue $c^2$), but a linear combination $av+bw$ will only be an eigenvector of $A$ if $a=0$ or $b=0$. 
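The counterexample from the first paragraph, checked numerically:

```python
import numpy as np

# A = diag(1, -1): then A^2 = I, so every vector is an eigenvector of A^2.
A = np.array([[1.0, 0.0],
              [0.0, -1.0]])
assert np.allclose(A @ A, np.eye(2))

v = np.array([1.0, 1.0])
assert np.allclose(A @ A @ v, v)             # eigenvector of A^2 (eigenvalue 1)

Av = A @ v                                   # = (1, -1)
# Av is not a scalar multiple of v (2x2 determinant test), so v is not an
# eigenvector of A itself.
assert abs(Av[0]*v[1] - Av[1]*v[0]) > 1e-12
```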
How to find this recurrence relation,  Suppose $a_{n+1}=xa_n+ya_{n-1}$
Then $4=2x+y$ and $9=3x+y$ $\implies$ $x=5,y=-6$
Hence $a_{n+1}=5a_n-6a_{n-1}$, whose general term is $a_n=p2^n+q3^n$
When $a_1=a_2=1$, $2p+3q=1$ and $4p+9q=1$. Hence $p=1, q=-{1\over 3}$ 
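A short exact check of both the recurrence and the closed form:

```python
from fractions import Fraction

# Closed form a_n = p*2^n + q*3^n with p = 1, q = -1/3, against the
# recurrence a_{n+1} = 5 a_n - 6 a_{n-1} with a_1 = a_2 = 1.
p, q = Fraction(1), Fraction(-1, 3)

def closed(n):
    return p * 2**n + q * 3**n

assert closed(1) == 1 and closed(2) == 1
a_prev, a_cur = closed(1), closed(2)
for n in range(2, 12):
    a_next = 5*a_cur - 6*a_prev
    assert a_next == closed(n + 1)
    a_prev, a_cur = a_cur, a_next
```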
Examine function differentiability  Your function is not differentiable at $(0,0)$. If it was, the function $x\mapsto0+\cos\left(\sqrt[3]{x^2}\right)$ would be differentiable at $0$. But it is not. 
axioms of real numbers without multiplication  Yes, this has been done by Tarski.
https://en.wikipedia.org/wiki/Tarski%27s_axiomatization_of_the_reals 
What is the probability that the 3 remaining cards of the suit are in one player's hand?  When you condition, you get that you have 26 cards left and 3 of them are of the particular suit. There are $26 \choose 13$ ways of assigning these remaining 26 cards among E and W (because once you assign 13 cards to E, the remaining 13 cards automatically go to W). You get that one player has all 3 cards if either E has 3 or E has 0. The number of ways E could have all 3 is $23 \choose 10$. The number of ways E could have 0 is $23 \choose 13$ which is also $23 \choose 10$. So $2 {23 \choose 10}/{26 \choose 13}$ is the answer. 
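The count is easy to confirm exactly (and to reduce to lowest terms):

```python
from fractions import Fraction
from math import comb

# E's 13 cards are chosen from the 26 unseen ones; one player holds all 3
# remaining cards of the suit if E has all 3 or none of them.
total = comb(26, 13)
e_has_all_3 = comb(23, 10)     # E takes all 3 plus 10 of the other 23
e_has_none = comb(23, 13)      # E takes 13 of the other 23

assert comb(23, 10) == comb(23, 13)          # symmetry of the two cases

prob = Fraction(e_has_all_3 + e_has_none, total)
assert prob == Fraction(2 * comb(23, 10), comb(26, 13))
print(prob)                    # 11/50
```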
Complex analysis proof about $f(z)$  Michael M gave a wonderful hint above.
Define the function $g:U\to \mathbb{C}$, where $U$ consists of the $z$ with $|z| < \frac{1}{R}$, and $g(z) = f(z^{-1})$, with $g(0) = 0$. Check that this function is holomorphic. Here is a hint for this step. Prove that $g$ on the open set $U\setminus \{0\}$ is holomorphic and the limit of $g(z)$ as $z\to 0$ exists (so $0$ is a removable singularity).
Since $0$ is a zero of $g$, it follows that $g(z) = z^mg_1(z)$ where $g_1(0)\not = 0$ (assuming the function we are initially starting with, $f$, is not identically zero; otherwise the problem is trivial). Can you take it from here? 
Should I be worried that I am doing well in analysis and not well in algebra?  I believe that I may be of some consolation.
I had a very similar experience to you. I started doing "serious" math when I was a senior in high school. I thought I was very smart because I was studying what I thought was advanced analysis: baby Rudin. My ego took a hit when I reached college and realized that while I had a knack for analysis and point-set topology, I could not get this algebra thing down! I just didn't understand what all these sets and maps had to do with anything. I didn't understand why they were useful, and even when I finally did grasp a concept I was entirely impotent when it came to those numbered terrors at the end of chapters.
I held the same fear that you do. I convinced myself that I was destined to be an analyst; I even went as far as to say that I "hated" algebra (obnoxious, I know). After about a year or so, with the osmotic effect of being in algebra-related classes, and studying tangentially related subjects, I started to understand, and really pick up on, algebra. Two years after that (now) I would firmly place myself on the algebraic side of the bridge (if there is such a thing), even though I still enjoy me some analysis!
I think the key for me was picking up the goals and methods of algebra. It is much easier for a gifted math student to "get" analysis straight out of high school: you have been secretly doing it for years. For the first half of Rudin, while I "got it", this was largely thanks to the ability to rely on my calculus background to see why and how we roughly approached things. There was no such helpful intuition for algebra. It was the first type of math I seriously attempted to learn that was "structural", qualitative rather than quantitative. My analytic (read calculus) mind was not able to understand why it would ever be obvious to pass from a ring $X$ to its quotient, nor why we care that every finitely generated abelian group is a finite product of cyclic groups. I just didn't understand.
But, as I said, as I progressed through more and more courses, learned more and more algebra and related subjects, things just started to click. I not only was able to understand the technical reasons why an exact sequence split, but I understood what this really means intuitively. I started forcing myself to start phrasing other parts of mathematics algebraically, to help my understanding.
The last thing I will say to you is this: I can't tell you how many times in my mathematical schooling I was terrified of a subject. I always thought that I would never understand Subject X or that Concept Y was just beyond me. I can tell you, with the utmost sincerity, that those subjects I was once mortified by are the subjects I know best. The key is to take your fear that you can't do it, that algebra is just "not your thing", and own it. Be intrigued by this subject you can't understand, read everything you can about it, talk to those who are now good at the subject (even though many of them may have had similar issues), and sooner than you know, by sheer force of will, you will find yourself studying topics whose name would make you, right now, die of fright. Stay strong friend, you can do it. 
How many different (circular) garlands can be made using $3$ white flowers and $6m$ red flowers?  My answer would be $\frac{1}{3}\left(\binom{6m+2}{2}-1\right)+1$.
$\binom{6m+2}{2}$ is the number of ways of writing $6m$ as the sum of three nonnegative integers.
We count the one case where all the values are the same separately. That yields one garland.
The other cases, the equations:
$$6m=a+b+c=b+c+a=c+a+b$$ are all the same garland, so we have to divide those cases by $3$.
Simplified:
$$\begin{align}
\frac{1}{3}\left(\binom{6m+2}{2}-1\right)+1& = \frac{1}{3}\left((3m+1)(6m+1)-1\right)+1\\
&=6m^2+3m+1
\end{align}$$
In general, if there were $p$ white flowers and $pn$ red, with $p$ prime, then the number of garlands would be:
$$\frac{1}{p}\left(\binom{np+p-1}{p-1}-1\right)+1$$
In this case, $n=2m$ and $p=3$. 
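The count is easy to confirm by brute force (a sketch; `garlands` is my own helper, and it assumes garlands are counted up to rotation only, which is what the division by $3$ suggests):

```python
from itertools import combinations

def garlands(m):
    """Count arrangements of 3 white and 6m red flowers in a circle, up to rotation."""
    n = 6 * m + 3
    reps = set()
    for whites in combinations(range(n), 3):
        # canonical representative: lexicographically least among all rotations
        canon = min(tuple(sorted((w - r) % n for w in whites)) for r in range(n))
        reps.add(canon)
    return len(reps)

for m in (1, 2):
    assert garlands(m) == 6 * m * m + 3 * m + 1
```

For $m=1$ this gives $10$ and for $m=2$ it gives $31$, matching $6m^2+3m+1$.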
Every rational function of $f \in k(x_1,x_2,\dots,x_n)$ is transcendental over $k$.  Since I've been a little rough in my comment, here is my trivial answer to make up for it.
You can consider $f$ as a rational function in $\bar k(x_1,\dotsc,x_n)$, where $\bar k$ is an algebraic closure, right ? And $f$ is a constant if and only if it is a constant in this new field. But if $f$ is algebraic over $k$, it is certainly in $\bar k$, i.e. it is a constant. 
Integrals of functions with compact support.  The standard mollifier $f\ast\varphi_{\epsilon}$ of $f$ is such that $f\ast\varphi_{\epsilon}\rightarrow f$ in $L^{1}$; now pass to an a.e. convergent subsequence and you are done. 
The principal form, up to equivalence, is the only integral form of discriminant $\mathbf{D}$ which represents one.  Yes, you can.
Since a common factor of $u$ and $v$ would end up as a squared factor of any number of the form $g(u,v),$ we know that $u$ and $v$ are relatively prime. Thus, we may find integers $\alpha$ and $\beta$ with $\beta u - \alpha v = 1$. The coefficient of $x^2$ in the form
$$ f(x,y) = g(ux + \alpha y, vx + \beta y) $$
is then $g(u,v)$. Thus, we have replaced $g$ by the equivalent form $f$, which has first coefficient $1$, just like the principal form.
Now we adjust the middle coefficient of $f$ so that it matches the middle coefficient of the principal form. Observe that the middle coefficient of a form is congruent to its discriminant modulo $2$. It follows that the middle coefficients of $f$ and the principal form have the same parity.
Suppose $f = ax^2 + bxy + cy^2$. For an arbitrary integer $n$, consider the form $f(x+ny,y)$. This will still have first coefficient $1$, and one checks that its middle coefficient is $b+2n$. By choosing $n$ appropriately, we can make this coefficient be any integer we wish with the same parity as $b$. In particular, we can choose $n$ so that $f(x+ny,y)$ has the same first and middle coefficients as the principal form. But both forms will also have the same discriminant, and these conditions ensure that they will have the same third coefficient as well. 
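The two reduction steps can be played out numerically (a sketch; the helper `transform`, the example form $g = 2x^2+3xy+2y^2$ of discriminant $-7$, and the chosen $(u,v)=(1,-1)$, $(\alpha,\beta)=(0,1)$ are my own illustrations, not from the original question):

```python
def transform(form, M):
    """Apply (x, y) -> (p x + q y, r x + s y) to the form a x^2 + b x y + c y^2."""
    a, b, c = form
    p, q, r, s = M
    return (a*p*p + b*p*r + c*r*r,
            2*a*p*q + b*(p*s + q*r) + 2*c*r*s,
            a*q*q + b*q*s + c*s*s)

def disc(form):
    a, b, c = form
    return b*b - 4*a*c

g = (2, 3, 2)            # g(1, -1) = 1, discriminant -7
u, v = 1, -1
alpha, beta = 0, 1       # beta*u - alpha*v = 1

# Step 1: the first coefficient becomes g(u, v) = 1
f = transform(g, (u, alpha, v, beta))
assert f[0] == 1 and disc(f) == disc(g)

# Step 2: shift the middle coefficient to 1 via (x, y) -> (x + n y, y)
n = (1 - f[1]) // 2
f2 = transform(f, (1, n, 0, 1))
assert f2 == (1, 1, 2)   # the principal form of discriminant -7
```

Here step 1 produces $x^2 - xy + 2y^2$, and step 2 with $n = 1$ lands exactly on the principal form $x^2 + xy + 2y^2$.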
How to inter change of norm and limit in the Banach algebra?  In every Banach space absolutely convergent series converge and you have
$$\left\|\sum_{n=0}^\infty x_n\right\|\le \sum_{n=0}^\infty \|x_n\|.$$ The proof is exactly as in the case of real or complex numbers (the partial sums form a Cauchy sequence and the bound for the norms of the partial sums carries over to the limit). Therefore, your attempt is perfectly okay. 
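A quick numerical illustration of the norm inequality in $\mathbb R^2$ with the Euclidean norm (my own example series; any absolutely convergent series would do):

```python
from math import hypot

# terms of an absolutely convergent series in R^2
terms = [(1 / 2**n, (-1)**n / 3**n) for n in range(60)]

# norm of the sum vs. sum of the norms
lhs = hypot(sum(x for x, _ in terms), sum(y for _, y in terms))
rhs = sum(hypot(x, y) for x, y in terms)
assert lhs <= rhs
```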
Every continuous open mapping $\mathbb{R} \to \mathbb{R}$ is monotonic  Hint: Suppose that $f$ is not monotonic; then there exist an interval $[x,y]$ and a point $t \in (x,y)$ such that $f(t)= \max_{s \in [x,y]}{f(s)}$ or $f(t)= \min_{s \in [x,y]}{f(s)}$. Once you have that, study the set $f((x,y))$. 
Number of Points on the Jacobian of a Hyperelliptic Curve  In fact, you do not have to assume anything about the genus, nor is it relevant that the curve is hyperelliptic, nor does the cardinality of the finite field matter:
Theorem. Let $C$ be a smooth projective curve of genus $g$ over a finite field $\mathbb F_q$ and let
$$ Z(C;t)=\frac{L(t)}{(1-t)(1-qt)} $$
denote the zeta function of $C$, where $L(t)\in\mathbb Z[t]$. Then the number of $\mathbb F_q$-rational points on the Jacobian $J$ of $C$ equals $L(1)$.
Proof. Let $\ell$ denote a prime distinct from $\operatorname{char}\mathbb F_q$. The $q$-power Frobenius endomorphism of $C$ induces a purely inseparable isogeny $\varphi$ of degree $q$ of $J$, and thereby an endomorphism $T_\ell(\varphi)$ of the $\ell$-adic Tate module $T_\ell(J)$. The number of $\mathbb F_q$-rational points of $J$ is the number of fixed points of $\varphi$, and since $1-\varphi$ is a separable isogeny, we have
$$ \#J(\mathbb F_q) = \#\ker(1-\varphi) = \deg(1-\varphi) = \det(1-T_\ell(\varphi)) \text. $$
By definition, this equals $\chi_\varphi(1)$, where $\chi_\varphi(t)$ is the characteristic polynomial of $T_\ell(\varphi)$.
Now note that the Tate module is a special case of $\ell$-adic cohomology: There is a natural isomorphism
$$ T_\ell(J)\otimes_{\mathbb Z_\ell}\mathbb Q_\ell \cong H^1(C,\mathbb Q_\ell) \text, $$
hence we may apply the Lefschetz trace formula for $\ell$-adic cohomology (and some linear algebra) to deduce
$$ L(t) = \det(1-t\varphi^\ast\mid H^1(C,\mathbb Q_\ell)) \text. $$
Since $H^1(C,\mathbb Q_\ell)$ is $2g$-dimensional, this implies
$$ L(t) = t^{2g}\det(1/t-\varphi^\ast\mid H^1(C,\mathbb Q_\ell)) = t^{2g}\chi_\varphi(1/t) \text. $$
Note this is the "reverse polynomial" of $\chi_\varphi(t)$, i.e., it has the same coefficients in reversed order.
Together with the above, evaluating this at $1$ shows the claim
$$ \#J(\mathbb F_q) = L(1) \text. \tag*{$\square$}$$
A reference for this is Sections 5.2.2 and 8.1.1 of Cohen and Frey's Handbook of Elliptic and Hyperelliptic Curve Cryptography, 1st ed. 
Determine the value of $ \frac{1}{\log_m (mn)}+\frac {1}{\log_n (mn)}$  Let $x = \log_m n$. Then $\log_m mn = \log_m m + \log_m n = 1 + x$ and $\log_n mn = \log_n n + \log_n m = 1 + \frac1x$. Therefore
$$\frac{1}{\log_m mn} + \frac{1}{\log_n mn} = \frac{1}{1 + x} + \frac{1}{1 + \frac1x} = \frac{1}{1 + x} + \frac{x}{x + 1} = 1.$$ 
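The identity is easy to spot-check numerically (a sketch using `math.log` with its base argument):

```python
from math import log, isclose

# 1/log_m(mn) + 1/log_n(mn) should equal 1 for any valid bases m, n
for m, n in [(2, 8), (3, 5), (10, 7)]:
    total = 1 / log(m * n, m) + 1 / log(m * n, n)
    assert isclose(total, 1.0)
```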
Need help with simple system of differential equations  OK, I'll pitch two solution methods at y'all, one based on linear algebra and one, surprisingly enough, somewhat akin to our OP newuser's exploratory attempt centered around the derived equation
$\dfrac{\dot x_1}{\dot x_2} = -\dfrac{x_2}{x_1}. \tag{1}$
Note that I prefer the use of the "$\dot y$" notation over the "$y'$" notation for derivatives whenever possible, as I shall continue to do throughout this little exposition. In any event, the given system
$\dot x_1 = -x_2, \tag{2}$
$\dot x_2 = x_1, \tag{3}$
does indeed give rise to (1), at least in regions where $\dot x_2 \ne 0 \ne x_1$; I shall return to this topic momentarily, but first let me address things from the "linear algebra" point of view. Setting
$\vec r(t) = \begin{pmatrix} x_1(t) \\ x_2(t) \end{pmatrix}, \tag{4}$
and
$J = \begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix}, \tag{5}$
we see that
$J^2 = -I \tag{6}$
and that (2)-(3) may be written
$\dot{\vec r}(t) = J\vec r(t). \tag{7}$
It follows from (7) that, if the initial condition at time $t_0$ is
$\vec r(t_0) = \begin{pmatrix} x_1(t_0) \\ x_2(t_0) \end{pmatrix}, \tag{8}$
then the solution may be written as
$\vec r(t) = e^{J(t - t_0)}\vec r(t_0); \tag{9}$
here we have that
$e^{J(t - t_0)} = I + (t - t_0)J + \dfrac{1}{2}(t - t_0)^2J^2 + \ldots + \dfrac{1}{n!}(t - t_0)^nJ^n + \ldots$ $= \sum_0^\infty \dfrac{1}{n!}(t - t_0)^n J^n, \tag{10}$
just as for scalars $a$ we have
$e^{at} = 1 + at + \dfrac{1}{2}a^2 t^2 + \ldots + \dfrac{1}{n!}a^n t^n + \ldots = \sum_0^\infty \dfrac{1}{n!} a^n t^n. \tag{11}$
Just as it follows by term-by-term differentiation of (11) that
$\dfrac{d}{dt} e^{at} = a e^{at}, \tag{12}$
so we see by term-by-term differentiation of (10) that
$\dfrac{d}{dt}e^{J(t - t_0)} = Je^{J(t - t_0)}, \tag{13}$
which is sufficient to prove that (9) solves (7), since we have
$\dot{\vec r}(t) = \dfrac{d}{dt}(e^{J(t - t_0)})\vec r(t_0) = Je^{J(t - t_0)}\vec r(t_0) = J\vec r(t). \tag{14}$
We next examine the specific form of the matrix exponential (10). Since $J^2 = -I$, just as $i^2 = -1$, a term-by-term comparison of (10) and (11), taking $a = i$, reveals that just as the terms of (11) containing $i$ group to $i\sin(t - t_0)$, so the terms of (10) containing $J$ group to $(\sin(t - t_0))J$; and just as the terms of (11) which don't contain $i$ group to $\cos(t - t_0)$, so the terms of (10) which don't contain $J$ group to $(\cos(t - t_0))I$, so we may conclude that just as
$e^{i(t - t_0)} = \cos(t - t_0) + i\sin(t - t_0), \tag{15}$
we also must have
$e^{J(t - t_0)} = (\cos(t - t_0))I + (\sin(t - t_0))J; \tag{16}$
a more complete exposition of (16) and related equations may be found here.
When the matrix equation (16) is written out explicitly we see that
$e^{J(t - t_0)} = \begin{bmatrix} \cos(t - t_0) & -\sin(t - t_0) \\ \sin(t - t_0) & \cos(t - t_0) \end{bmatrix}, \tag{17}$
and it thus follows from (4), (8)-(9) and (17) that
$x_1(t) = x_1(t_0) \cos(t - t_0) - x_2(t_0) \sin(t - t_0), \tag{18}$
$x_2(t) = x_1(t_0) \sin(t - t_0) + x_2(t_0) \cos(t - t_0). \tag{19}$
It should perhaps be observed, in the light of the above comments by newuser and Sam, that in general the formulas (18), (19) will together contain both $\cos$ and $\sin$ terms. However, with
$r = \sqrt{x_1^2(t_0) + x_2^2(t_0)} \tag{20}$
we may also write
$x_1(t) = r(\dfrac{x_1(t_0)}{r} \cos(t - t_0) - \dfrac{x_2(t_0)}{r} \sin(t - t_0)) \tag{21}$
$x_2(t) = r(\dfrac{x_1(t_0)}{r} \sin(t - t_0) + \dfrac{x_2(t_0)}{r} \cos(t - t_0)); \tag{22}$
furthermore, since
$(\dfrac{x_1(t_0)}{r})^2 + (\dfrac{x_2(t_0)}{r})^2 = 1 \tag{23}$
there exists a constant $\phi \in [0, 2\pi)$ with
$\cos \phi = \dfrac{x_1(t_0)}{r}, \; \sin \phi = \dfrac{x_2(t_0)}{r}; \tag{24}$
then (21), (22) may be written
$x_1(t) = r \cos((t - t_0) + \phi) \tag{25}$
$x_2(t) = r \sin((t - t_0) + \phi). \tag{26}$
We thus see that, with appropriate choice of the phase angle $\phi$, both $x_1(t)$ and $x_2(t)$ may be written as pure $\cos$ and $\sin$ functions with no admixture of the two. We also note that the matrix $e^{J(t - t_0)}$ appearing in (9) is orthogonal, that is
$(e^{J(t - t_0)})^T = \begin{bmatrix} \cos(t - t_0) & -\sin(t - t_0) \\ \sin(t - t_0) & \cos(t - t_0) \end{bmatrix}^T$ $= \begin{bmatrix} \cos(t - t_0) & \sin(t - t_0) \\ -\sin(t - t_0) & \cos(t - t_0) \end{bmatrix} = e^{-J(t - t_0)} = (e^{J(t - t_0)})^{-1}, \tag{27}$
as may readily be verified by direct evaluation of the matrix product $(e^{J(t - t_0)})^T(e^{J(t - t_0)}) = I$. This in turn implies, as is well-known, that the magnitude of $\vec r(t)$ is constant, as may also be easily seen by computing $\Vert \vec r(t) \Vert^2 = x_1^2(t) + x_2^2(t)$; the calculations are simple, if a tad long-winded. Thus the motion of $\vec r(t)$ is circular.
Having solved (2)-(3) with the aid of matrix exponentials, what I have termed the "linear algebra" approach, I now turn to the second method of analyzing this system which I mentioned in the beginning of this post. This second treatment is in many ways similar in spirit to the attempt our OP newuser presented in his question.
First of all I think it worthwhile to point out that one can get "rid of $dt$" through perfectly classical means that in no way refer to infinitesimals. Turning once again to equation (1) and the conditions $\dot x_2 \ne 0 \ne x_1$, we note that as long as $\dot x_2 \ne 0$, we may infer from the inverse function theorem that we may express $t$ as a function $t(x_2)$ of $x_2$ and that furthermore
$\dfrac{1}{\dot x_2(t)} = \dfrac{dt(x_2)}{dx_2}. \tag{28}$
We conclude from (28) via the chain rule that, writing $x_1(t) = x_1(t(x_2))$,
$\dfrac{dx_1(t(x_2))}{dx_2} = \dot x_1(t) \dfrac{dt(x_2)}{dx_2} = \dfrac{\dot x_1(t)}{\dot x_2(t)} = -\dfrac{x_2}{x_1}, \tag{29}$
which of course leads directly to
$x_1 \dfrac{dx_1}{dx_2} = -x_2, \tag{30}$
a form of (2)-(3) in which $t$ does not directly appear; we have rid ourselves of $t$ without introducing the concept of infinitesimals.
Having said these things, we further observe that (2), (3) imply
$x_1 \dot x_1 = -x_1 x_2 \tag{31}$
$x_2 \dot x_2 = x_1 x_2; \tag{32}$
adding these equations we see, after some minor algebraic mechanics, that
$\dfrac{d(x_1^2 + x_2^2)}{dt} = 2(x_1 \dot x_1 + x_2 \dot x_2) = 0, \tag{33}$
implying that $x_1^2 + x_2^2$ is conserved along the trajectories of (2), (3); hence, such integral curves, if nontrivial, must be contained in the circles $x_1^2 + x_2^2 = C^2$, with $C^2 > 0$ a constant. Then
$\dfrac{x_1^2(t)}{C^2} + \dfrac{x_2^2(t)}{C^2} =1, \tag{34}$
from which we may conclude that
$x_1(t) = C \cos \theta(t), \tag{35}$
$x_2(t) = C\sin \theta(t) \tag{36}$
for some function $\theta(t)$ of $t$. The implicit function theorem may now be invoked to demonstrate that $\theta(t)$ is differentiable: setting $g(t, \theta) = x_1(t) - C\cos \theta$, we see that $\partial g / \partial \theta = C\sin \theta \ne 0$ provided $\theta \ne n\pi$, $n \in \Bbb Z$; thus the equation $0 = g(t, \theta) = x_1(t) - C\cos \theta$ defines a differentiable function $\theta(t)$ with $0 = g(t, \theta(t)) = x_1(t) - C\cos \theta(t)$; in the vicinity of $n\pi$, we may use (36) to establish the differentiability of $\theta(t)$ in a similar fashion. Once we rest assured that $\theta(t)$ is differentiable, we may write
$C \dot \theta(t) \cos \theta(t) = \dot x_2(t) = x_1(t) = C \cos \theta(t), \tag{37}$
which implies
$\dot \theta (t) = 1, \tag{38}$
immediately yielding the solution
$\theta(t) - \theta(t_0) = t - t_0 \tag{39}$
or
$\theta(t) = t - t_0 + \theta(t_0), \tag{40}$
so that
$x_1(t) = C\cos((t - t_0) + \theta(t_0)), \tag{41}$
$x_2(t) = C\sin((t - t_0) + \theta(t_0)); \tag{42}$
we see that (41), (42) agree with (25), (26) via a renaming of constants $C = r$, $\theta(t_0) = \phi$. For more information on a similar technique applied in a slightly different context, see my answer to this question.
One equation, two solutions; would that things were always this easy! I'm more used to two equations with no solutions!
Hope this helps. Cheerio,
and as always,
Fiat Lux!!! 
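The closed-form solution (18)-(19) can be checked against a direct numerical integration of the system $\dot x_1 = -x_2$, $\dot x_2 = x_1$ (a sketch; the RK4 stepper and the chosen initial data are my own):

```python
from math import cos, sin, isclose

def f(x1, x2):
    # the system: x1' = -x2, x2' = x1
    return (-x2, x1)

def rk4(x1, x2, t_end, h=1e-3):
    """Integrate from t = 0 to t_end with the classical Runge-Kutta method."""
    for _ in range(round(t_end / h)):
        k1 = f(x1, x2)
        k2 = f(x1 + h/2*k1[0], x2 + h/2*k1[1])
        k3 = f(x1 + h/2*k2[0], x2 + h/2*k2[1])
        k4 = f(x1 + h*k3[0], x2 + h*k3[1])
        x1 += h/6 * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
        x2 += h/6 * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
    return x1, x2

a, b, T = 1.0, 0.5, 1.0          # x1(0), x2(0), final time
x1, x2 = rk4(a, b, T)
# closed form (18)-(19) with t0 = 0
assert isclose(x1, a*cos(T) - b*sin(T), abs_tol=1e-8)
assert isclose(x2, a*sin(T) + b*cos(T), abs_tol=1e-8)
# the radius x1^2 + x2^2 is conserved, cf. (33)
assert isclose(x1**2 + x2**2, a**2 + b**2, abs_tol=1e-8)
```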
The union of two closures  Your second part is OK, but the first is incomplete. You actually demonstrated that $Cl(A_1\cup A_2)\subset Cl(A_1)\cup Cl(A_2)$, but this doesn't mean they are equal. You should show also that $Cl(A_1)\cup Cl(A_2)\subset Cl(A_1\cup A_2)$. 
Dirichlet's Test Remark in Apostol  Note that the partial sums $A_k$ of $\{a_n\}$ being bounded means that $\lvert A_k \rvert \leq M$ for all $k$ and some $M > 0$. Hence, we have that
\begin{align}
\left \lvert \sum_{k \leq n} A_k(b_k - b_{k+1}) \right \rvert & \leq \sum_{k \leq n} \left(\left \lvert A_k(b_k - b_{k+1}) \right \rvert \right) & (\because \text{By triangle inequality})\\
&= \sum_{k \leq n} \left \lvert A_k \right \rvert \left \lvert (b_k - b_{k+1}) \right \rvert & \because \lvert z_1 z_2 \rvert = \lvert z_1 \rvert \lvert z_2 \rvert\\
& \leq \sum_{k \leq n} M \lvert(b_k - b_{k+1}) \rvert & (\because A_k \text{ is bounded by }M)\\
& = M \sum_{k \leq n} (b_k - b_{k+1}) & (\because \{b_n\}\text{ form a decreasing sequence})\\
& = M (b_1 - b_{n+1}) & (\because \text{By telescoping})\\
& \leq Mb_1 & (\because b_n \downarrow 0 \implies b_{n+1} \geq 0)
\end{align}
Hence, $\displaystyle \sum_{k \leq n} A_k(b_k  b_{k+1})$ converges absolutely. 
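The chain of bounds can be watched in action (a sketch with my own choice $a_k = (-1)^{k+1}$, so $M = 1$, and $b_k = 1/k$):

```python
# a_k = (-1)^(k+1): partial sums A_k alternate between 1 and 0, so M = 1.
# b_k = 1/k decreases to 0.
N = 10000
A = 0
total = 0.0
for k in range(1, N + 1):
    A += (-1) ** (k + 1)
    total += A * (1/k - 1/(k + 1))

# |sum_{k<=N} A_k (b_k - b_{k+1})| <= M * b_1 = 1
assert abs(total) <= 1.0
```

Here the partial sums of $\sum A_k(b_k - b_{k+1})$ stay comfortably inside the bound $Mb_1$.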
Linear algebra elementary row operation  If you interchange the rows and then perform the operation you mentioned, the matrix will be in upper triangular form. I guess that's what the book is referring to. 
Is the source (and/or target) a group, or just its underlying set?  $G$ and $H$ certainly can be groups, but the function will not necessarily preserve any structure, so it will just be a map between sets.
If $f$ is a homomorphism, this preserves structure by definition, so it will be a map between groups by necessity. 
How do I evaluate this limit: $\displaystyle \lim_{x \to \infty} ({x\sin \frac{1}{x} })^{1-x}$?  I see you're a high school teacher, so you're familiar with the following concepts:
$\bullet$ $\sin(\frac{1}{x}) \simeq \frac{1}{x} - \frac{1}{6x^3} \text{ } [\text{as } x \rightarrow \infty]$
$\bullet$ $\lim_{x \to \infty} (1-\frac{k}{x})^x = e^{-k}$
Compile these facts to get:
$$\underset{x \to \infty}{\lim} \bigg(1 - \frac{1}{6x^2} \bigg)^{1-x} = 1$$ 
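Numerically (a sketch), the expression indeed approaches $1$ for large $x$:

```python
from math import sin, isclose

x = 1e4
val = (x * sin(1/x)) ** (1 - x)
assert isclose(val, 1.0, rel_tol=1e-4)
```

For $x = 10^4$ the value is already within about $2\times10^{-5}$ of $1$.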
Geometrical Interpretation of the Cauchy-Goursat Theorem?  It sounds like you want a kind of "visual" proof, or at least intuition. The go-to source for that is Needham's Visual Complex Analysis. Check out page 435 (of the pdf) of the linked book, which offers a few different explanations. Personally, I find the geometric intuition to be the following: if you can shrink the contour to a single point without crossing a singularity of the function, then the integral is 0. Then the idea follows immediately because you have a function $f$ bounded by some $M$ (it has no singularities inside the contour!), and you're integrating it on an arbitrarily short loop around a point, of length $\epsilon$, so your integral is bounded above by $M\epsilon$, and $\epsilon$ can be made arbitrarily small.
So in some sense you can think of a contour as a stretched rubber band, and each singularity as a peg (an imperfect analogy). In other words, it really helps to accept the fact that the choice of contour for contour integration is quite arbitrary (as long as you respect singularities). The precise formulation of this analogy is that what really matters is the winding number of the contour around each singularity. If the winding is 0, then the singularity does not contribute. 
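The theorem itself is easy to verify numerically for an entire function: integrating around the unit circle with the trapezoid rule (a sketch; the choice $f(z) = z^2 e^z$ is mine) gives essentially zero:

```python
import cmath

# integrate f(z) = z^2 * exp(z), which has no singularities,
# around the unit circle z = exp(2*pi*i*t), t in [0, 1)
N = 20000
total = 0j
for k in range(N):
    z = cmath.exp(2j * cmath.pi * k / N)
    dz = 2j * cmath.pi * z / N     # z'(t) dt for this parametrization
    total += z**2 * cmath.exp(z) * dz

assert abs(total) < 1e-10
```

The same code with $f(z) = 1/z$ instead would return approximately $2\pi i$, since the contour winds once around the singularity at $0$.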
Continuous realvalued functions on the first uncountable ordinal  Yes, every continuous $f \colon X \to \mathbb{R}$ is eventually constant.
There is a sequence $(\alpha_n)$ in $X$ such that for all $n$
$$\sup_{\beta > \alpha_n} \lvert f(\beta) - f(\alpha_n)\rvert \leqslant 2^{-n}.$$
For otherwise, there would be a $k\in \mathbb{N}$ such that for every $\alpha \in X$ there is a $\beta > \alpha$ with $\lvert f(\beta) - f(\alpha)\rvert > 2^{-k}$. Then we could construct a sequence $(\gamma_n)$ with $\gamma_n < \gamma_{n+1}$ and $\lvert f(\gamma_{n+1}) - f(\gamma_n)\rvert > 2^{-k}$. But the sequence $(\gamma_n)$ converges to its supremum $\gamma \in X$, and hence
$$\lvert f(\gamma_{n+1}) - f(\gamma_n)\rvert \leqslant \lvert f(\gamma_{n+1}) - f(\gamma)\rvert + \lvert f(\gamma) - f(\gamma_n)\rvert < 2^{-(k+1)}$$
for $n$ so large that $\lvert f(\gamma_m) - f(\gamma)\rvert < 2^{-(k+2)}$ for all $m \geqslant n$.
The existence of the sequence $(\alpha_n)$ established, let $\alpha = \sup\limits_{n\in\mathbb{N}} \alpha_n$.
Then $f$ is constant on $[\alpha,\Omega)$. 
every projective module has a free complement.  If $P$ is projective and $Q$ is any complement of $P$ in a free module, then $P\oplus Q\oplus P\oplus Q\oplus P\oplus Q\oplus\cdots$ is a free complement.
In your example, no finitely generated free module is a complement to $P$: indeed, every f.g. free module has $6^n$ elements for some $n$ and $P$ has $2$, so that the direct sum of $P$ and a free module has $2\times 6^n$ elements, and it is therefore never free. In particular, your argument is not correct. 
Orthogonality and cross product  The statement is trivial if $x$ and $y$ are linearly dependent, because then $x\times y=0$ and so $v=0\cdot(x\times y)$.
Otherwise, $\dim\operatorname{span}(\{x,y\})=2$, and therefore $\dim\operatorname{span}(\{x,y\})^\perp=1$. So, since $v,x\times y\in\operatorname{span}(\{x,y\})^\perp$, and since $x\times y\ne0$, $v$ is a scalar multiple of $x\times y$. 
Question on proving quotient space is homeomorphic to circle  Label each point on the circle by the angle $\theta\in [0,2\pi)$. Let $f:S/{\sim}\rightarrow S$ be defined by $f(\theta)=2\theta$. You should be able to prove this is welldefined on $S/{\sim}$, and is in fact a bijection between the two spaces. Then to prove it's a homeomorphism, you just need to show that the image/preimage of any open set is open. 
Rate of change in length of hypotenuse  Check your question. Are you sure it's a rate of change you're asked to find for $x$, rather than simply a change in $x$?
Assuming what they're asking for is the estimate of a small change of $x$ for a small change of $\theta$ of $0.05$ radian, you would work it out like so:
The initial conditions help you verify that the triangle is right (you can't assume from a drawing).
$x = \frac{10}{\sin\theta}$
$\frac{dx}{d\theta} = -\frac{10\cos\theta}{\sin^2\theta}$
For the approximation of small changes, you can rearrange to:
$\delta x \approx -\frac{10\cos\theta}{\sin^2\theta} \delta \theta$
and if you're given $\delta\theta = 0.05$ radian, you get:
$\delta x \approx -\frac{10\frac{\sqrt 3}{2}}{(\frac 12)^2} \delta \theta = -\frac{10\frac{\sqrt 3}{2}}{(\frac 12)^2} (0.05) = -\sqrt 3$
The answer would be the same numerically if you had been given the rate of change of $\theta$ in terms of radian per time, e.g. $\frac{d\theta}{dt} = 0.05 rad/sec$ but there, the chain rule based equation you'd be using would be $\frac{dx}{dt} = \frac{dx}{d\theta} \cdot \frac{d\theta}{dt}$ which would give the same numerical value (except the units would be in terms of length over time and this would be exact, not an approximation). 
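The arithmetic can be double-checked in a few lines (a sketch; taking $\theta = \pi/6$ is my reading of the values $\sin\theta = \frac12$, $\cos\theta = \frac{\sqrt3}{2}$ used above):

```python
from math import sin, cos, sqrt, pi, isclose

theta = pi / 6                    # sin = 1/2, cos = sqrt(3)/2
dtheta = 0.05

# linear approximation: delta x ~ (dx/d(theta)) * delta theta, with x = 10/sin(theta)
dx_approx = -10 * cos(theta) / sin(theta) ** 2 * dtheta
assert isclose(dx_approx, -sqrt(3))

# compare against the exact change in x
dx_exact = 10 / sin(theta + dtheta) - 10 / sin(theta)
assert abs(dx_exact - dx_approx) < 0.2   # first-order error for a finite delta theta
```

The exact change is about $-1.57$, while the linear estimate is $-\sqrt3 \approx -1.73$, which shows the size of the first-order error for $\delta\theta = 0.05$.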
if $ L = \lim_{x\to1^-} \sum_{n=1}^\infty a_nx^n$ then $\sum_{n=1}^\infty a_n = L$  The statement in the title is false. But it's true under the additional assumption that $a_n\ge0$. And it's actually quite easy.
Since $a_n\ge0$ there exists $S\in[0,\infty]$ such that $$\sum_{n=0}^\infty a_n=S.$$(Note we allowed the possibility $S=\infty$.) Now for a given $N$,
$$\sum_{n=0}^Na_n=\lim_{x\to1^-}\sum_{n=0}^N a_nx^n\le L.$$So $$S=\lim_{N\to\infty}\sum_{n=0}^N a_n\le L.$$
For the other direction, say $\epsilon>0$. There exists $x\in(0,1)$ with $$\sum_{n=0}^\infty a_nx^n>L-\epsilon.$$But $$S\ge\sum_{n=0}^\infty a_nx^n.$$So $S>L-\epsilon$ for every $\epsilon>0$, hence $S\ge L$. 
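The counterexample behind "the statement in the title is false" can be seen numerically with $a_n = (-1)^n$ (my own standard choice): the Abel limit exists, yet the series itself diverges.

```python
# a_n = (-1)^n: sum a_n x^n = 1/(1+x) -> 1/2 as x -> 1-,
# but the partial sums of sum a_n oscillate between 1 and 0 forever.
def abel_sum(x, N=200000):
    return sum((-1) ** n * x ** n for n in range(N))

x = 0.9999
assert abs(abel_sum(x) - 1 / (1 + x)) < 1e-6
```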
$\Bbb Z[\sqrt{-5}]$ is not a PID  Hint: use the fact that PIDs are unique factorization domains.
Let $N$ be the norm, $N(1+\sqrt{-5})=6$. Suppose that $1 + \sqrt{-5}=ab$, so $N(ab)=N(a)N(b)=6$, and set $a=u+v\sqrt{-5}$. $N(a)=1$ implies $u^2+5v^2=1$, which forces $v=0$, $u^2=1$, so $a=1$ or $a=-1$, a unit. $N(a)=2$ would force $v=0$, $u^2=2$, which is impossible, and you cannot have $N(a)=3$ by a similar argument. Finally, if $N(a)=6=u^2+5v^2$, then $u^2=1$ and $v^2=1$, so $a=\pm1\pm\sqrt{-5}$ and $b$ is a unit. Thus $1+\sqrt{-5}$ is irreducible.
Show that $2$ and $3$ are irreducible by using the norm; then since $6=2\cdot3=(1+\sqrt{-5})(1-\sqrt{-5})$, $\Bbb Z[\sqrt{-5}]$ is not a unique factorization domain, so it is not principal. 
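The norm argument is easy to confirm by exhaustion (a sketch): no element of $\Bbb Z[\sqrt{-5}]$ has norm $2$ or $3$, so the factors in $6 = 2\cdot 3 = (1+\sqrt{-5})(1-\sqrt{-5})$ admit no nontrivial divisors.

```python
# N(u + v*sqrt(-5)) = u^2 + 5 v^2; the norm is nonnegative, so a small
# search range suffices (u^2 <= norm and 5 v^2 <= norm).
norms = {u*u + 5*v*v for u in range(-3, 4) for v in range(-2, 3)}
assert 2 not in norms and 3 not in norms
assert 6 in norms        # realized by 1 +/- sqrt(-5)
```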
Integral calculus, find actual volume of cone  The diameters of the frustums (frusta) are decreasing linearly, hence the volumes quadratically.
$$v_k=\frac Vn\left(\frac{n-k}n\right)^2,$$ where $\dfrac{V}{n}$ denotes the volume of the corresponding cylindrical slices.
Then the total volume
$$V'=\frac Vn\sum_{k=0}^{n-1}\left(\frac {n-k}n\right)^2=\frac Vn\sum_{k=1}^{n}\frac{k^2}{n^2}=\frac V{n^3}\frac{n(n+1)(2n+1)}6=V\frac{(n+1)(2n+1)}{6n^2}.$$
The ratio tends to $\dfrac13.$ 
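The Riemann sum converges to $V/3$ as claimed; numerically (a sketch, with $V = 1$):

```python
from math import isclose

# V'/V = (1/n) * sum_{k=0}^{n-1} ((n-k)/n)^2, which tends to 1/3
def ratio(n):
    return sum(((n - k) / n) ** 2 for k in range(n)) / n

assert isclose(ratio(10), 11 * 21 / 600)     # (n+1)(2n+1)/(6 n^2) with n = 10
assert abs(ratio(10**5) - 1/3) < 1e-4
```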
Weak topology and subspaces  I don’t have at hand the definition of the limit map between direct the limits, but I guess the question can have a negative answer. Let $X=\Bbb R^\omega$ be a subspace of a Tychonoff product $\Bbb R^\omega$ consisting of all sequences $x=(x_i)$ such that all but finitely many $x_n$ are zeroes. Then the space $X$ is a direct limit of a sequence of spaces $X_i$, where each $X_i=X$ and the inclusion maps are the identity maps. For each natural $i$ let $Y_i=\{(x_n)\in\Bbb R^\omega: x_n=0 \mbox{ for all }n>i \}$ and the inclusion maps are the embeddings. Endow the set $Y=\bigcup Y_i=\Bbb R^\omega$ with the topology consisting of all subsets $U$ of $\Bbb R^\omega$ such that $U\cap Y_i$ is open in $Y_i$ for each $i$. That is, $Y$ is a direct limit of the sequence $\{Y_i\}$. Now let $a^n=(a^n_i)$ be a sequence of elements of $\Bbb R^\omega$ such that for each $n$ and $i$ we have $a^n_i$ equals $1$, if $n=i$, and equals $0$, otherwise. Then the sequence $(a^n)$ converges to $0$ in $X$, but not in $Y$. 
Dataset Card Creation Guide
Dataset Summary
We automatically extracted question and answer (Q&A) pairs from the Stack Exchange network. Stack Exchange gathers many Q&A communities across 50 online platforms, including the well-known Stack Overflow and other technical sites. 100 million developers consult Stack Exchange every month. The dataset is a parallel corpus with each question mapped to the top rated answer. The dataset is split by community, covering a variety of domains from 3D printing, economics, and Raspberry Pi to Emacs. An exhaustive list of all communities is available here.
Languages
Stack Exchange content is mainly in English (en).
Dataset Structure
Data Instances
Each data sample is presented as follows:
{'title_body': 'How to determine if 3 points on a 3D graph are collinear? Let the points $A, B$ and $C$ be $(x_1, y_1, z_1), (x_2, y_2, z_2)$ and $(x_3, y_3, z_3)$ respectively. How do I prove that the 3 points are collinear? What is the formula?',
'upvoted_answer': 'From $A(x_1,y_1,z_1),B(x_2,y_2,z_2),C(x_3,y_3,z_3)$ we can get their position vectors.\n\n$\\vec{AB}=(x_2-x_1,y_2-y_1,z_2-z_1)$ and $\\vec{AC}=(x_3-x_1,y_3-y_1,z_3-z_1)$.\n\nThen $\\vec{AB}\\times\\vec{AC}=0\\implies A,B,C$ collinear.',
'downvoted_answer': 'If the distance between AB+BC=AC then A,B,C are collinear.'}
This particular example corresponds to the following page.
Data Fields
The fields present in the dataset contain the following information:
title_body: the concatenation of the title and body from the question
upvoted_answer: the body from the most upvoted answer
downvoted_answer: the body from the most downvoted answer
title: the title from the question
Data Splits
We provide three splits for this dataset, which differ only in the structure of the fields that are retrieved:
titlebody_upvoted_downvoted_answer: includes the title and body from the question as well as the most upvoted and most downvoted answers.
title_answer: includes the title from the question as well as the most upvoted answer.
titlebody_answer: includes the title and body from the question as well as the most upvoted answer.
Number of pairs per split:
titlebody_upvoted_downvoted_answer: 17,083
title_answer: 1,100,953
titlebody_answer: 1,100,953
Dataset Creation
Curation Rationale
We primarily designed this dataset for sentence embeddings training. Indeed, sentence embeddings may be trained using a contrastive learning setup in which the model is trained to associate each sentence with its corresponding pair out of multiple propositions. Such models require many examples to be efficient, and thus dataset creation may be tedious. Community networks such as Stack Exchange allow us to build many examples semi-automatically.
Source Data
The source data are dumps from Stack Exchange
Initial Data Collection and Normalization
We collected the data from the math community.
We filtered out questions whose title or body is below 20 characters and questions whose body is above 4096 characters. When extracting the most upvoted answer, we filtered to pairs for which there is at least a 100-vote gap between the most upvoted and most downvoted answers.
Who are the source language producers?
Questions and answers are written by the developers of the Stack Exchange community.
Additional Information
Licensing Information
Please see the license information at: https://archive.org/details/stackexchange
Citation Information
@misc{StackExchangeDataset,
author = {Flax Sentence Embeddings Team},
title = {Stack Exchange question pairs},
year = {2021},
howpublished = {https://huggingface.co/datasets/flaxsentenceembeddings/},
}
Contributions
Thanks to the Flax Sentence Embeddings team for adding this dataset.