What is the difference between cardinals and alephs?
The issue is the axiom of choice. The $\aleph$ numbers are the cardinalities of well-ordered sets: if $A$ can be well-ordered, then there is some least ordinal $\alpha$ which can be bijected with $A$, and this $\alpha$ is the cardinality of $A$ (such ordinals in general are called "initial ordinals"). In case $A$ is infinite, we get an $\aleph$ number (and there's really nothing interesting to say about the cardinalities of finite sets). But if the axiom of choice fails, not every set can be well-ordered! And so if we want to speak of the cardinality of a non-well-orderable set, we need to use something other than $\aleph$s.

At this point it's worth saying a few words about what cardinality is. First up, we have the "equinumerosity" relation $\equiv$: we write "$A\equiv B$" if there is a bijection between $A$ and $B$. This is easy to define, and there's no problem with it if the axiom of choice fails. Now what's the cardinality of a set $A$? Well, here's the idea: we want to associate some object $\vert A\vert$ to every set $A$, such that $\vert A\vert=\vert B\vert$ iff $A\equiv B$ (that is, $\vert A\vert$ is an $\equiv$-invariant: if you know what $\vert A\vert$ is, then you know what $A$ is equinumerous with). One natural choice (this one is due to Frege) is to look at the entire $\equiv$-class itself - e.g. the cardinal "$2$" is just the collection of all $2$-element sets. Unfortunately, this is a proper class, so this doesn't work well with ZFC. Instead, we have to be a little ad hoc. The natural way to fix Frege's idea is via Scott's trick: we let $\vert A\vert$ be the set of all sets equinumerous with $A$ and of minimal rank, and this is indeed a set (and we can think of it as an "initial segment" of the class Frege cares about). This definition, again, works independently of the axiom of choice (although it breaks down if we drop both choice and foundation, and in fact I think there's no good way to define cardinality in the absence of both axioms - instead, you have to work with the relation "$\equiv$" alone).

Now if $A$ is well-ordered, we can do better: as observed above, we can pick out a specific set which is equinumerous with $A$! And that's the $\aleph$ number of $A$. In the presence of choice, there's no reason to use the Frege/Scott-style definition above, and we simply equate "cardinality" with "$\aleph$-number". But if choice fails, we can't find canonical representatives to measure the size of some sets, so we have to do something more involved, like Scott's trick (and note that at the linked question there is some argument for the Scott approach actually being more natural, which I have some sympathy with).
For what value of C is P(X=n) = C/n! for all n in the non-negative integers a probability density function?
By definition: $$\sum\limits_{n=0}^\infty \frac{x^n}{n!}=e^x$$ So, by plugging in $1$ in place of $x$ we have: $$e = \sum\limits_{n=0}^\infty \frac{1}{n!}$$ Dividing both sides by $e$ gives us $$1 = \sum\limits_{n=0}^\infty \frac{e^{-1}}{n!}$$ Comparing this with $\sum_{n=0}^\infty \frac{C}{n!}=1$, we get $C=e^{-1}=\frac{1}{e}$.
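A quick numerical sanity check (not part of the original answer; plain Python for illustration): with $C=e^{-1}$ the probabilities $C/n!$ should sum to $1$.

```python
import math

# Partial sums of C/n! with C = 1/e should approach 1 (the tail decays factorially fast).
C = math.exp(-1)
print(sum(C / math.factorial(n) for n in range(30)))  # ~1.0
```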
Lebesgue measures defined on subspaces of $\Bbb R^n$
One difficulty is that if $n\geq 3$ then the first integral makes sense as a nonzero value, but the second is an integral over a 2-d plane, which has measure $0$ in more than 2 dimensions. I see that this is why you mention "$k$-dimensional Lebesgue measure" if $\dim(V) = k$. In that case, perhaps it is useful to use the standard change-of-variables formula. Let $h_1$ and $h_2$ be an orthonormal basis for $V$, so any vector $v \in V$ can be written $v = ah_1 + bh_2$. Define $M$ as the (invertible) matrix that maps $(t,u)$ to the $(a,b)$ coefficients of $v = tv_1 + uv_2$. So: \begin{align} [a;b] &= M[t;u]\\ da\,db &= |\det(M)|\,dt\,du \end{align} Thus: \begin{align} &\int_{\mathbb{R}}\int_{\mathbb{R}}F(tv_1 +uv_2)g_1(t)g_2(u)\,du\,dt \\ &=\int_{\mathbb{R}}\int_{\mathbb{R}}F\left([h_1 \: h_2] M[t;u]\right)g_1(t)g_2(u)\,du\,dt \\ &=\int_\mathbb{R}\int_{\mathbb{R}} F([h_1 \: h_2][a;b])g_1([1 \: 0]M^{-1}[a;b])g_2([0 \: 1]M^{-1}[a;b])\frac{1}{|\det(M)|}\,da\,db\\ &=\int_V F(v)\tilde{g}_1(v)\tilde{g}_2(v)\,d\lambda_v \end{align} where the second equality is the standard change-of-variables formula, and the last equality defines: \begin{align} \tilde{g}_1(v) &= \frac{g_1([1 \: 0]M^{-1}[a(v);b(v)])}{\sqrt{|\det(M)|}}\\ \tilde{g}_2(v) &= \frac{g_2([0 \: 1]M^{-1}[a(v);b(v)])}{\sqrt{|\det(M)|}} \end{align} (the tildes avoid a clash with the basis vectors $h_1, h_2$), where $a(v)$ and $b(v)$ are the $(a,b)$ coefficients associated with the orthonormal representation $v = ah_1 + bh_2$. Here I am defining the measure of any (measurable) subset $C \subseteq V$ as: $$ \lambda(C) = \int\int_{\{(a,b): ah_1 + bh_2 \in C\}} da\,db $$
Explanation for curved space
Einstein's theory of relativity uses Riemannian geometry, a non-Euclidean geometry where Euclid's fifth postulate and/or the usual metric is replaced by another axiom.

Postulate. If a straight line crossing two straight lines makes the interior angles on the same side less than two right angles, the two straight lines, if extended indefinitely, meet on that side on which are the angles less than the two right angles. Meaning: if two lines are 'pointed' at each other, they will intersect.

Explanation of a metric space. A set for which distances between all members of the set are defined. Those distances, taken together, are called a metric on the set. The Euclidean plane is a two-dimensional real vector space that has an inner product. This can be extended to $n$ dimensions.

Now, with Riemannian geometries we have manifolds, which are topological spaces that resemble Euclidean space. Intuitively this means a space which, when you are close enough to it, looks like flat Euclidean space. Earth may appear flat, but it's not. But we must be more specific. Riemannian geometry studies smooth/differentiable manifolds so that calculus can work. In calculus, we care a lot about tangent lines, and when a curve is not smooth enough, i.e. it contains spikes, then the tangent line will not vary smoothly from point to point. It is always good practice to see where these ideas came from. Think of geodesics -- straight lines adapted to curved spaces, like great circles on the Earth (lines of longitude, for instance). With a Riemannian metric, the geodesics are locally, within their respective neighborhoods, the shortest paths between points.
Where am I mistaken in my approach through PIE?
Since the man only invites one friend to dinner each night, it is not possible for him to invite two different friends more than three times each since $2 \cdot 4 = 8 > 6$. Hence, the events we need to exclude are inviting the same friend to dinner on four, five, or six days. If there were no such restriction, the man would have three options on each of the six days, so there would be $3^6$ possible dinner schedules. The same friend is invited to dinner exactly four times: There are $3$ ways to choose the friend that is invited to dinner four times, $\binom{6}{4}$ ways to choose the days that friend is invited, and $2^2$ ways to invite one of the other two friends to dinner on the remaining two days. Hence, there are $$\binom{3}{1}\binom{6}{4}2^2$$ such dinner schedules. The same friend is invited to dinner exactly five times: There are $3$ ways to choose the friend that is invited to dinner five times, $\binom{6}{5}$ ways to choose the days that friend is invited, and $2$ ways to choose which of the other friends is invited on the other day. Hence, there are $$\binom{3}{1}\binom{6}{5}2$$ such dinner schedules. The same friend is invited to dinner on all six days: There are three ways to choose the friend who is invited to dinner every day. Hence, the number of ways the man can invite one of his three friends to dinner each day for six days without inviting the same friend to dinner on more than three days is $$3^6 - \binom{3}{1}\binom{6}{4}2^2 - \binom{3}{1}\binom{6}{5}2 - \binom{3}{1}$$
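A brute-force enumeration (an illustrative check, not part of the original answer) confirms the final count of $510$ schedules.

```python
from itertools import product

# Enumerate all 3^6 dinner schedules and keep those where no friend appears more than 3 times.
count = sum(
    1
    for schedule in product(range(3), repeat=6)        # which friend is invited each night
    if all(schedule.count(friend) <= 3 for friend in range(3))
)
print(count)  # 510 = 3^6 - 3*C(6,4)*4 - 3*C(6,5)*2 - 3
```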
Let $u(x)=(x^{\alpha}+x^{\beta})^{-1}$ $x,\alpha,\beta>0$. Find $p\geq 1$, $u\in\mathcal L^p(\lambda^1,(0,\infty))$
HINT: I didn't spot your possible mistake (maybe one of the chosen bounds is too weak?), but you can assume that $\alpha \leqslant \beta $; then you can take out a common factor and the integrand becomes $x^{-p\alpha }(1+x^{\beta -\alpha })^{-p}$. This simplifies the analysis slightly and you can see that $$ \int_{(0,\infty )}\frac1{x^{\alpha p}(1+x^{\beta -\alpha } )^p}\,\mathrm d x<\infty \\ \iff \int_{(0,1)}\frac1{x^{\alpha p}}\,\mathrm d x<\infty \quad \text{ and }\quad \int_{[1,\infty )}\frac1{x^{\beta p }}\,\mathrm d x<\infty $$ which gives you $p\in(\tfrac{1}{\beta },\tfrac{1}{\alpha })\cap [1,\infty )$.
Interpolating a linear transformation
If the matrix $A$ is $n\times n$, you need at least $n$ pairs $(\vec u, \vec v)$ with linearly independent vectors $\vec u$ to reconstruct $A$. I hope this replies to both of your questions.
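A minimal numerical sketch of the reconstruction (illustrative, not part of the original answer): if the $\vec u_i$ are linearly independent and $\vec v_i = A\vec u_i$, then stacking them as columns gives $V = AU$, so $A = VU^{-1}$.

```python
import numpy as np

# Reconstruct an unknown n x n matrix A from n input/output pairs (u_i, v_i = A u_i).
rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n))        # the "unknown" transformation
U = rng.standard_normal((n, n))        # columns u_1,...,u_n (generically independent)
V = A @ U                              # observed images
A_rec = V @ np.linalg.inv(U)
print(np.allclose(A, A_rec))           # True
```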
Prove the statement $\lim\limits_{h\to0}\frac{b^h-1}{h}=1 \iff b=e$.
Your first approach is simpler and the preferred one. It is based on the following assumption: There exists a function $\log:(0, \infty) \to\mathbb{R} $ such that $\log 1=0$ and $$\dfrac{d} {dx} \log x=\dfrac{1}{x},\,\forall x \in(0,\infty) $$ Further the symbol $e^x$ is defined by $$y=e^ x\iff x=\log y$$ The above assumption is easily justified by using the definition $$\log x=\int_{1}^{x}\frac{dt}{t}$$ The second approach you have chosen is difficult. It involves defining the symbol $a^b, a>0,b\in\mathbb {R} $ without the use of logarithm. And then one analyzes the limit $$f(a) =\lim_{h\to 0}\frac{a^h-1}{h}$$ and shows that it exists for every $a>0$ and hence defines a function $f:(0, \infty) \to\mathbb {R} $. Further one establishes that $f$ defined above is strictly increasing, continuous and the range of $f$ is $\mathbb {R} $ and $$f(1)=0,f(xy) =f(x) +f(y), f'(x) =1/x$$ Hence there is a unique number $e>1$ such that $f(e) =1$. Once you have reached this point, it is easy to show that $$e=\lim_{n\to \infty} \left(1+\frac{1}{n}\right) ^n$$ We have $$f((1+(1/n))^n)=nf(1+(1/n))=\dfrac{f(1+(1/n))-f(1)}{1/n}\to f'(1)=1$$ as $n\to\infty $. Let $g$ be the inverse of $f$ so that $g$ is also continuous and $g(1)=e$. Clearly we have $$g(f((1+(1/n))^n))\to g(1)=e$$ or $$(1+(1/n))^n\to e$$ and we are done.
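A small numerical illustration (not part of the original answer) of the final limit $(1+1/n)^n \to e$:

```python
import math

# (1 + 1/n)^n approaches e as n grows.
for n in (10, 1_000, 100_000, 10_000_000):
    print(n, (1 + 1 / n) ** n)
print("e =", math.e)
```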
Stirling Numbers of the Second Kind and Finding a General Formula
I'll sketch out the solution. Some of this stuff is in Concrete Mathematics; you can look up stuff that isn't familiar there, or try to establish things on your own. Here we use $\left\{n\atop k\right\}$ for the Stirling subset number (second kind) and $x^{(j)}$ for the falling factorial. $$\begin{align*}\sum_{k=0}^n k^p&=\sum_{k=0}^n \sum_{j=0}^p \left\{p\atop j\right\}k^{(j)}\\&=\sum_{k=0}^n \sum_{j=0}^p j!\left\{p\atop j\right\}\binom{k}{j}\\&=\sum_{j=0}^p j!\left\{p\atop j\right\}\sum_{k=0}^n \binom{k}{j}\\&=\sum_{j=0}^p j!\left\{p\atop j\right\}\binom{n+1}{j+1}\\&=\sum_{j=0}^p \frac{n+1}{j+1}j!\left\{p\atop j\right\}\binom{n}{j}\\&=(n+1)\sum_{j=0}^p \left\{p\atop j\right\}\frac{n^{(j)}}{j+1}\end{align*}$$ Actually, any of the last three expressions could be the answer...
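An illustrative check (not from the original answer) of the fourth expression, $\sum_{k=0}^n k^p=\sum_{j=0}^p j!\left\{p\atop j\right\}\binom{n+1}{j+1}$, using the inclusion-exclusion formula for Stirling subset numbers:

```python
from math import comb, factorial

def stirling2(p, j):
    # Stirling subset number via inclusion-exclusion; the division is exact.
    return sum((-1) ** i * comb(j, i) * (j - i) ** p for i in range(j + 1)) // factorial(j)

n, p = 10, 4
lhs = sum(k ** p for k in range(n + 1))
rhs = sum(factorial(j) * stirling2(p, j) * comb(n + 1, j + 1) for j in range(p + 1))
print(lhs, rhs)  # both 25333
```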
using Negative values for a box plot (box whisker)
Never extend a box-whisker plot beyond the minimum and maximum of the data. The upper and lower bounds (fences) are only used to identify outliers and extreme outliers. In this case you have to take the lower whisker as 172. You can use the MATLAB command boxplot(x) to check the same rule.
Points and dual $\mathbb{R}$-algebra to $\mathbb{R}[V]$ on real affine variety $V$
Suppose $V \subset \mathbb{A}_{\mathbb{R}}^n$, and consider the ideal $I(V) \subset \mathbb{R}[x_1, \cdots, x_n]$. Then $\mathbb{R}[V] \cong \mathbb{R}[x_1, \cdots, x_n]/I(V)$. Let $\overline{f}$ denote the image of $f\in \mathbb{R}[x_1, \cdots, x_n]$ in $\mathbb{R}[V]$. Since $\mathbb{R}[V]$ is generated by $\overline{x_k}$ for $1\le k \le n$, any $\mathbb{R}$-algebra homomorphism $\phi: \mathbb{R}[V]\to \mathbb{R}$ is entirely determined by the images of the $\overline{x_k}$. If $\phi(\overline{x_k}) = c_k$, then for any regular function $\overline{f}(x_1, \cdots, x_n)\in \mathbb{R}[V]$, we have $$\phi(\overline{f}(x_1, \cdots, x_n)) = f(\phi(\overline{x_1}), \cdots, \phi(\overline{x_n})) = f(c_1, \cdots, c_n)$$ Set $x := (c_1, \cdots, c_n) \in \mathbb{A}_{\mathbb{R}}^n$. Notice that for any $g\in I(V)$, we have $g(x) = \phi(\overline{g}) = \phi(0) = 0$. Thus, in fact $x \in V$. We conclude $\phi = \text{ev}_x$ for $x\in V$.
Is it possible to find an uncountable number of disjoint open intervals in $R$?
No: in a disjoint family of open intervals $(I_j)_{j\in J}$, each interval $I_j$ contains a rational number $q_j$, which enables us to define an injection $J\rightarrow \mathbb Q$ sending $j$ to $q_j$. Since $\mathbb Q$ is countable, $J$ must be countable as well.
A question regarding moment generating function
Yes. $E[XY]=E[X]E[Y]$ means that $X$ and $Y$ are uncorrelated. Independence of $X$ and $Y$ is a sufficient (but not necessary) condition for this equality to be true. Yes, if the expectation is with respect to $X$.
Sigma-algebra inclusion and mixing processes
I think that your argument is correct. For part (2), it would be better to keep $X_{l,T}$ instead of $X$.
Distance of closest neighbor points in a vector space ${\mathbb R}^n$ (infinitesimal or zero)?
Your way of thinking suggests that you consider $\Bbb R$ to be well ordered. It is not. In $\Bbb Z$, the number just after $1$ is $2$. In $\Bbb R$, or even in $\Bbb Q$, the number just after $1$ simply does not exist. Asking what is the nearest number to $1$ is as absurd as asking what is the greatest integer. Think of a number $1+x$ just a bit greater than $1$. Then the arithmetic mean of this number and $1$ (that is, $1+\frac x2$) is between $1$ and $1+x$. Just like no greatest integer exists because if $n$ is a very big integer, $n+1$ is even bigger.
In using $\int_C(z+\frac{1}{z})^{2n}\frac{1}{z}dz=\binom{2n}{n}2\pi i$, how is it possible to compute $\int^{\pi}_{-\pi} \cos^{2n} t dt$?
Sub $z=e^{i t}$, $t \in [-\pi,\pi]$. Then $$\oint_C \frac{dz}{z} \left (z+\frac1{z} \right )^{2 n} = i 2^{2 n} \int_{-\pi}^{\pi} dt \, \cos^{2 n}{t} $$ By the residue theorem, the contour integral is equal to $i 2 \pi$ times the coefficient of $z^0$ in the expansion of $\left (z+\frac1{z} \right )^{2 n}$, or $\binom{2 n}{n}$, i.e., $$i 2^{2 n} \int_{-\pi}^{\pi} dt \, \cos^{2 n}{t} = i 2 \pi \binom{2 n}{n}$$
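An illustrative numerical check (not part of the original answer), using scipy, of the resulting formula $\int_{-\pi}^{\pi}\cos^{2n}t\,dt = 2\pi\binom{2n}{n}/4^{n}$:

```python
import math
from scipy.integrate import quad

n = 5
numeric, _ = quad(lambda t: math.cos(t) ** (2 * n), -math.pi, math.pi)
exact = 2 * math.pi * math.comb(2 * n, n) / 4 ** n
print(numeric, exact)  # both ~1.546
```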
Show that $2(a-b)$ is a period of $f$.
Using the symmetry of $f$ about $a$ and then about $b$: $$f(x) = f(a + (x - a)) = f(a - (x-a)) = f(2a - x) = f(b + (2a-b-x)) = f(b - (2a-b-x)) = f(2b-2a+x)$$ So $2b-2a$ is a period, and hence so is $2(a-b)$ (if $P$ is a period, so is $-P$).
Logical Equivalences - (P and not Q) or((P and(not R)) and Q)
Here is a step-by-step way without using a truth table: $$P \ \wedge (R \to \neg\left(Q \ \wedge P\right) ) $$ $$P \ \wedge (R \to (\neg Q \ \vee \neg P) ) $$ $$P \ \wedge (\neg R \vee(\neg Q \ \vee \neg P) ) $$ $$P \ \wedge (\neg R \vee \neg Q \ \vee \neg P ) $$ $$P \ \wedge (\neg R \vee \neg Q \ ) $$ $$P \ \wedge ((\neg R \vee \neg Q ) \wedge (\neg Q \vee Q) ) $$ $$P \ \wedge (\neg Q \vee (\neg R \wedge Q) ) $$ $$(P \ \wedge\neg Q) \vee (P \wedge (\neg R \wedge Q) ) $$ In the third line, we use the fact that $p \to q \equiv \neg p \vee q$. In the fifth line, the $\neg P$ is removed because it is unnecessary; $p \wedge \neg p$ is a contradiction and will never be true. In the sixth line, we can then add in $\neg Q \vee Q$, as in general, $p \equiv p \wedge (\neg q \vee q)$.
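An exhaustive truth-table check (illustrative, not part of the original answer) that the first and last formulas agree:

```python
from itertools import product

def start(P, Q, R):
    return P and ((not R) or not (Q and P))          # P ∧ (R → ¬(Q ∧ P))

def end(P, Q, R):
    return (P and not Q) or (P and (not R) and Q)    # (P ∧ ¬Q) ∨ (P ∧ ¬R ∧ Q)

print(all(start(*v) == end(*v) for v in product([False, True], repeat=3)))  # True
```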
Integral $\int_{-\infty}^0 e^{(-3i+\omega)t} $
The second case also converges to zero; check your signs. $$ \int \exp(-(3i-\omega)t)\, \mathrm{d}t= - \frac{\exp(-(3i-\omega)t)}{3i-\omega} = - \frac{\exp((\omega-3i)t)}{3i-\omega}$$ That should convince you.
find correlation coefficient of $f(x,y)=2$ for $0<x \leq y<1$
Your integrals look right. Note that $E(X)E(Y)=\frac{2}{9}$, so the covariance calculation is not right. The covariance is $\frac{1}{4}-\frac{2}{9}=\frac{1}{36}$.
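An illustrative numerical check of these moments with scipy (not part of the original answer); the region $0<x\le y<1$ is integrated with $x\in[0,1]$ outer and $y\in[x,1]$ inner:

```python
from scipy.integrate import dblquad

# dblquad(func, a, b, gfun, hfun) integrates func(y, x) with x in [a, b] (outer)
# and y in [gfun(x), hfun(x)] (inner).
E = lambda g: dblquad(lambda y, x: 2 * g(x, y), 0, 1, lambda x: x, lambda x: 1)[0]
EX, EY, EXY = E(lambda x, y: x), E(lambda x, y: y), E(lambda x, y: x * y)
print(EX * EY, EXY - EX * EY)  # ~0.2222 (=2/9) and ~0.02778 (=1/36)
```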
$ I(r) = \int_0^{2\pi}\frac{\cos(t) - r}{1 - 2r\cos t + r^2}\,dt$ is always zero for $r\in[0,1)$. Why?
An alternative proof using complex methods: For $0 < r < 1$ let $$ f_r \colon B_\frac{1}{r} (0) \to \mathbb{C} \, , \, f_r(z) = \frac{- \ln(1-rz)}{z} \, , $$ where $f_r(0) = r$. Then $f_r$ is holomorphic, so $$ I(r) \equiv - \int \limits_0^{2\pi} \ln(1-r \mathrm{e}^{\mathrm{i}t}) \, \mathrm{d} t = - \mathrm{i} \int \limits_{S^1} f_r(z) \, \mathrm{d} z = 0 $$ holds by Cauchy's theorem. If you are not familiar with complex analysis, you can also show this using the Taylor series of the logarithm: $$ I(r) = \sum \limits_{n=1}^\infty \frac{r^n}{n} \int \limits_0^{2\pi} \mathrm{e}^{\mathrm{i} n t} \, \mathrm{d} t = 0 \, . $$ This implies \begin{align} \int \limits_0^{2\pi} \frac{\cos(t) - r}{1 - 2 r \cos(t) + r^2} \, \mathrm{d} t &= - \frac{1}{2} \frac{\mathrm{d}}{\mathrm{d} r} \int \limits_0^{2\pi} \ln(1 - 2 r \cos(t) + r^2) \, \mathrm{d} t \\ &= - \frac{1}{2} \frac{\mathrm{d}}{\mathrm{d} r} \int \limits_0^{2\pi} \ln[(1 -r \mathrm{e}^{\mathrm{i} t})(1 -r \mathrm{e}^{-\mathrm{i} t})] \, \mathrm{d} t \\ &= \frac{\mathrm{d}}{\mathrm{d} r} I(r) = \frac{\mathrm{d}}{\mathrm{d} r} 0 = 0 \end{align} as desired.
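A quick numerical check (illustrative, not part of the original answer) that the integral indeed vanishes for a sample value of $r$:

```python
import math
from scipy.integrate import quad

r = 0.7
val, _ = quad(lambda t: (math.cos(t) - r) / (1 - 2 * r * math.cos(t) + r ** 2), 0, 2 * math.pi)
print(val)  # ~0 up to quadrature error
```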
Mistake in a textbook? - Taylor expansion
Let $$f(t)=\frac{(x+t)^{1-\gamma}}{1-\gamma}$$ then its Taylor expansion at $0$ is $$f(t)=f(0)+f'(0)t+o(t).$$ Are you able to obtain the textbook's formula?
Is this a logical fallacy or is this valid argumentation
The function $f(t) = t$ satisfies $|f(t)|\le 2t$. Take $C=2$. It also satisfies $|f(t)| < 1$ for all $t \in (0,\frac{1}{100})$. Take $t_0 = \frac{1}{100}$. However we may not conclude $\frac{1}{100} > \frac{1}{2}$. You did not say that $t_0$ was chosen as large as possible.
A ring homomorphism over rational numbers is the identity
Another way to see the answers above (which are all fine) is the following: Let $f$ be the ring morphism in question. For your question to be true, it has to be that you only allow $f(1)=1$ in your definition of ring morphism. Therefore that's what I will assume. So $f$ is also a field morphism. Consider $R=\{x\in \Bbb{Q}\mid f(x) = x\}$. Then quite obviously, $R$ is not empty ($1,0\in R$) and it is closed under addition, subtraction, multiplication and taking inverses of non-zero elements. Therefore $R$ is a subfield of $\Bbb{Q}$. But $\Bbb{Q}$ is a prime field and thus its only subfield is itself. Therefore $R=\Bbb{Q}$, and $f$ is the identity. There are two advantages of this answer over the others: 1- it can be applied to any prime field, that is, the $\Bbb{F}_p$ for prime $p$, and $\Bbb{Q}$; 2- if you have already proved that $\Bbb{Q}$ is a prime field, then it can allow you not to redo the calculations that are done in other answers (but at some point you will have to do them)
Cycling through powers of a generator of finite field. Something similar to modpow for Z/nZ
Pick a primitive polynomial and use binary exponentiation together with repeated use of the Euclidean algorithm to reduce modulo your primitive polynomial. Montgomery multiplication should generalize just fine.
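A minimal sketch of what this looks like in $\mathrm{GF}(2^8)$ (illustrative only, using the assumed primitive polynomial $x^8+x^4+x^3+x^2+1$, i.e. 0x11D, with $x$ as generator): multiplication reduces modulo the polynomial as it goes, and exponentiation is binary (square-and-multiply).

```python
MOD = 0x11D  # x^8 + x^4 + x^3 + x^2 + 1, assumed primitive

def gf_mul(a, b):
    """Carry-less multiplication in GF(2^8), reducing modulo MOD as we go."""
    result = 0
    while b:
        if b & 1:
            result ^= a
        b >>= 1
        a <<= 1
        if a & 0x100:      # degree reached 8: subtract (xor) the modulus
            a ^= MOD
    return result

def gf_pow(g, e):
    """Binary exponentiation g^e in GF(2^8)."""
    acc = 1
    while e:
        if e & 1:
            acc = gf_mul(acc, g)
        g = gf_mul(g, g)
        e >>= 1
    return acc

print(gf_pow(0x02, 255))  # 1 (x^255 = 1; for a primitive modulus x has full order 255)
```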
How many different ways can 64 players be paired?
Your answer is correct, and it is not hard to see what reasoning led to it. Ideally, the expression should be accompanied by a justification. There are other expressions for the answer. For example, let us line up the people in alphabetical order, or in order of height. Alicia can choose her partner in $63$ ways. For each such way, the first person not yet partnered can choose her partner in $61$ ways. And for each such way, the first person not yet partnered can choose her partner in $59$ ways. And so on. This gives a count of $(63)(61)(59)\cdots(3)(1)$. If one prefers (I don't) one can write this as $$\prod_{n=1}^{32} (2n-1).$$ Or else line up the $64$ people. This can be done in $64!$ ways. Now partner the first person with the second, the third with the fourth, and so on. Unfortunately this overcounts the number of pairings. There are two reasons for this: (i) Any choice of interchanges of $2k-1$ with $2k$, $k=1$ to $32$, gives the same partnerings. To adjust, divide by the number of such possible interchanges, which is $2^{32}$. (ii) Any permutation of the $32$ couples gives the same partnerings. So we must further divide by $32!$. This yields the expression $$\frac{64!}{2^{32}\cdot 32!}.$$
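An illustrative arithmetic check (not part of the original answer) that the two expressions agree:

```python
import math

odd_product = math.prod(range(1, 64, 2))                        # 63 * 61 * ... * 3 * 1
formula = math.factorial(64) // (2 ** 32 * math.factorial(32))  # 64! / (2^32 * 32!)
print(odd_product == formula, formula)
```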
Ordered Basis for $M_{2\times2}(\mathbb R)$ such that $T(A) = A^T$ is a diagonal matrix
You need to find eigenvectors of the $T$ operation, that is matrices $A$ with $T(A)=\lambda A$. In that case $T^2(A)=\lambda^2A$ but $T^2(A)=A$ so $\lambda^2=1$. For each solution of $\lambda^2=1$ you need to find a basis for the set of matrices satisfying $T(A)=A^T=\lambda A$.
$\int f(x) dx $ is appearing as $\int dx f(x)$. Why?
The second form sometimes makes it easier for the reader to match variables of integrations with their limits. Compare $$ \int_0^1\int_{-\infty}^{\infty}\int_{-\eta}^{\eta}\int_{0}^{|t|} \Big\{\text{some long and complicated formula here}\Big\}\,ds\,dt\,d\zeta\,d\eta $$ and $$ \int_0^1 d\eta\int_{-\infty}^{\infty}d\zeta\int_{-\eta}^{\eta}dt\int_{0}^{|t|} ds\,\Big\{\text{some long and complicated formula here}\Big\} $$
what is the meaning of "sequence which has no convergent subsequence"?
It means that if you create a new sequence out of your initial sequence such that you pick only some terms of the original sequence and leave the others out, the remaining sequence does not converge. In more mathematical terms, if $\{a_n\}$ is a sequence and $\{b_k\}$ is a sequence defined by $b_k=a_{n_k}$ such that $n_1 < n_2 < \cdots < n_k < \cdots$, then $b_k$ is not convergent. One may also reword the same statement like this: Suppose that $f: \mathbb{N} \to \mathbb{N}$ is a strictly increasing function and define $b_n = a_{f(n)}$. Then $b_n$ is called a subsequence of $a_n$. Regarding the proof, the author is trying to show that the closed ball in $\ell^1$ is not sequentially compact. Sequential compactness and compactness defined using open covers are equivalent in metric spaces. Consider $e_1 = (1,0,0,0,0,0,\cdots)$ and $e_2=(0,1,0,0,0,0,\cdots)$. Then, by definition of the $\ell^1$ norm, we have: $$\|e_1-e_2\|_1=\sum_n|(e_1)_n-(e_2)_n|=|1-0|+|0-1|=2$$ If instead of $e_1$ and $e_2$ we had $e_i$ and $e_j$, the idea would still be exactly the same.
In a metric space, if $A$ is open and $B$ is closed, is $A + B$ open or closed?
Try simple examples. The most basic closed set, for instance, is a point. What happens in this case?
How to show that the geodesics of a metric are the solutions to a second-order differential equation?
In $\mathbb{R}^n$ we have some coordinates, so let $x^i (t)$ be the coordinate presentation of the curve. A shorthand notation $\dot{x}^i (t_0) = \frac{d}{dt}x^i |_{t=t_0}$ simplifies the calculations, as usual. The geodesic equation then says that $$ (\nabla_{\dot{x}}\dot{x})^k = \frac{d^2 x^k}{dt^2} + \dot{x}^i \dot{x}^j \Gamma^k_{ij} = 0 \tag{1} $$ where $x(t)$ has a unit speed parametrization. Recall that the Christoffel symbols are given by $$ \Gamma^k_{ij} = \frac{1}{2} g^{kl} (g_{il,j} + g_{lj,i} - g_{ij,l}) \tag{2} $$ Remark. In the standard Euclidean metric $g_{ij} = \delta_{ij}$ in $\mathbb{R}^n$ the Christoffel symbols vanish and equation (1) gives straight lines as its solutions. Let's now look at what happens with the Christoffel symbols if we replace $g$ with $\hat{g} = e^{2 \rho} g$. Observe that $$ \hat{g}_{ij,l} = 2 \rho_l e^{2 \rho} g_{ij} + e^{2 \rho} g_{ij,l} $$ and substitute this into (2) to get $$ \begin{align} \hat{\Gamma}^k_{ij} &= \frac{1}{2} e^{-2 \rho} g^{kl} ( 2 \rho_j e^{2 \rho} g_{il} + e^{2 \rho} g_{il,j} + 2 \rho_i e^{2 \rho} g_{lj} + e^{2 \rho} g_{lj,i} - 2 \rho_l e^{2 \rho} g_{ij} - e^{2 \rho} g_{ij,l}) \\ &= \Gamma^k_{ij} + \rho_j \delta^k{}_i + \rho_i \delta^k{}_j - \rho^k g_{ij} \tag{3} \end{align} $$ where we have used the metric $g$ to raise the index $k$ in $\rho_k := \partial_k \rho$. This transformation affects equation (1) in the following way: $$ \begin{align} (\hat{\nabla}_{\dot{x}}\dot{x})^k &= \frac{d^2 x^k}{dt^2} + \dot{x}^i \dot{x}^j \hat{\Gamma}^k_{ij} \\ &= \frac{d^2 x^k}{dt^2} + \dot{x}^i \dot{x}^j (\Gamma^k_{ij} + \rho_j \delta^k{}_i + \rho_i \delta^k{}_j - \rho^k g_{ij}) = 0 \end{align} $$ Your equation is now obtained immediately from the above calculations since you start with the Euclidean metric $g_{ij} = \delta_{ij}$ for which the Christoffels vanish, as has been already noted, so you only need to take into account that $\operatorname{grad}(\rho)\cdot \dot{x} = \rho_i \dot{x}^i$ and $\dot{x} \cdot \dot{x} = \dot{x}^i \dot{x}_i$. Ah, well, $\operatorname{grad}(\rho)^i = \rho^i = g^{ij} \rho_j$, of course. Indeed, the above considerations are very formal and straightforward. In order to get a deeper insight one needs to contemplate the conformal transformations of $\mathbb{R}^n$. In dimension $n=2$ the situation is controlled by complex analysis, while in dimensions 3 and higher Liouville's theorem states that all such transformations are compositions of translations, dilations, and inversions (so-called Moebius transformations). (The Einstein summation convention is used throughout this post)
Intuition behind Contraction Mapping Theorem
The fixed point is unique because if $f(u)=u$ and $f(v)=v$ then $$ ||u-v||=||f(u)-f(v)||\leq K||u-v|| $$ and since $K<1$ this is only possible if $u=v$. If $K=1$ then either uniqueness or existence can fail. For instance, the map $f(u)=u$ satisfies the hypotheses with $K=1$, but every point is a fixed point. And if $V=\mathbb{R}$ and $f(x)=x+\frac{1}{1+e^x}$, then $0\leq f^{\prime}(x)<1$ for all $x$ (so the contraction property holds with $K=1$ by the mean value theorem), but $f$ has no fixed point. The most important application I'm aware of is the Picard-Lindelöf theorem, which establishes the existence and uniqueness of solutions to a class of ordinary differential equations.
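A tiny illustration (not part of the original answer) of the theorem in action: $f(x)=\cos x$ maps $\mathbb{R}$ into $[-1,1]$ and is a contraction there (with $K=\sin 1<1$), so iterating it converges to the unique fixed point.

```python
import math

x = 0.0
for _ in range(100):           # fixed-point (Picard) iteration
    x = math.cos(x)
print(x, math.cos(x))          # both ~0.7390851, the unique fixed point of cos
```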
Evaluate $\underset{x\to 0+}{\mathop{\lim }}\,{{\left( {{x}^{x}} \right)}^{x}}$
$$\lim_{x\to 0^{+}}(x^{x})^{x} = \lim_{x\to 0^{+}}x^{x^{2}}=\lim_{x\to 0^{+}}(x^{2})^{\frac{x^{2}}{2}}\ \stackrel{u=\frac{x^{2}}{2}}{=} \lim_{u\to 0^{+}}(2u)^{u}=\lim_{u\to 0^{+}}2^{u}\lim_{u\to 0^{+}}u^{u}=2^{0}\cdot 1 = \boxed{1}$$
Linear mapping from $P_3 (\mathbb{R})$ to $M_{2,2}(\mathbb{R})$
In order to write the linear map as a matrix you need to fix bases of the two vector spaces, the domain and the target. As you rightly said, the monomials $\{1,x,x^2,x^3\}$ make up a basis of the space $P_3(\Bbb R)$, but you also need to fix a basis for the space $M_{2,2}(\Bbb R)$. A standard choice for that is to take the $4$ matrices $$ e_{i,j}=(a_{k,\ell})\quad\text{where $a_{k,\ell}=1$ if $k=i$, $\ell=j$ and $a_{k,\ell}=0$ otherwise}. $$ With this choice you have $$ \phi(1)=1e_{1,1}+2e_{1,2}+0e_{2,1}-1e_{2,2} $$ and so on. You can now write the matrix associated to $\phi$ in the usual way, i.e. inserting in the $i$-th column the coefficients of $\phi(x^{i-1})$ in its expansion in terms of the basis $\{e_{i,j}\}$. So the entries in the first column are $1$, $2$, $0$, $-1$.
Is this : $\sqrt{3+\sqrt{2+\sqrt{3+\sqrt{2+\sqrt{\cdots}}}}}$ irrational number?
As soon as you are sure that such an expression makes sense, it is a root of $(x^2-3)^2-2-x=x^4-6x^2-x+7$, which does not have any rational root (by the rational root theorem the only candidates are $\pm 1$ and $\pm 7$, and none of them works). A rational number that is a root of this integer polynomial would have to be one of those candidates, so the nested radical is irrational.
can we use separation of variables to $v_{\xi\xi}+v_{\eta\eta}-4\tan{\xi}.v_{\xi}=0$?
$$u_{xx}+u_{yy}-4\tan(x)u_x=0$$ $$u(x,y)=X(x)Y(y)\quad\to\quad X''Y+XY''-4\tan(x)X'Y=0$$ $$\frac{X''}{X}+\frac{Y''}{Y}-4\tan(x)\frac{X'}{X}=0$$ $$\frac{Y''}{Y}=-\frac{X''}{X}+4\tan(x)\frac{X'}{X}=\lambda=\text{constant} $$ $$\begin{cases}Y''(y)-\lambda Y(y)=0 \\X''(x)-4\tan(x)X'(x)+\lambda X(x)=0\end{cases}$$ The first ODE is easy. The second involves hypergeometric functions.
An algebraic way to do this question? Find minimum and maximum values of $|z_1+iz|$ where $|z-i|\leq5,\:\:z_1=5+3i$.
For a purely algebraic solution: Write $z = a + bi$. We have $f(z) = |z_1+iz|^2 = (5-b)^2 + (3+a)^2 = 34 + a^2 + b^2 + 6a - 10b$. Since $f$ is a convex function of $(a,b)$, its maximum subject to the constraint $a^2 + (b-1)^2 \le 25$ is attained on the boundary circle $a^2 + (b-1)^2 = 25$; there $a^2 + b^2 = 24 + 2b$, so $f(z) = 58 + 6a - 8b = 50 + 6a - 8(b-1)$. So let $a=5 \sin \theta$ and $b-1 = 5 \cos \theta$. Then $f(z) = 50 + 30 \sin \theta - 40 \cos \theta \\ \Rightarrow \frac {df}{d \theta} = 30 \cos \theta + 40 \sin \theta$ So $f$ has maximum and minimum values when $30 \cos \theta + 40 \sin \theta = 0 \\ \Rightarrow \tan \theta = -\frac{3}{4} \\ \Rightarrow (\sin \theta, \cos \theta) = (\frac 3 5, - \frac 4 5) \text{ or } (- \frac 3 5, \frac 4 5)$ To maximise $f(z)$ we take the first pair of values, so $f(z)_{max} = 50 + \frac {90} 5 + \frac {160} 5 = 100 \\ \Rightarrow |z_1+iz| = 10$
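A quick numerical confirmation (illustrative, not part of the original answer), sampling the boundary circle $|z-i|=5$:

```python
import numpy as np

theta = np.linspace(0, 2 * np.pi, 200001)
z = 1j + 5 * np.exp(1j * theta)          # boundary of the disc |z - i| <= 5
values = np.abs(5 + 3j + 1j * z)
print(values.max(), values.min())        # ~10 and ~0
```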
What is the probability of winning a best of 5 series with varying probability in each game?
If $A$ wins the match, that will be because they win either after game 3, after game 4, or after game 5. The probability that they win after game 3 is $p(1)p(2)p(3)$ To win after game 4, $A$ must lose exactly one of the first three games, then win game 4. This has probability $$ \Big(p(1)p(2)(1-p(3))+p(1)(1-p(2))p(3)+(1-p(1))p(2)p(3)\Big)p(4) $$ Finally, if $A$ is to win the match only after game five, they must lose two of the first four games, and then win the fifth: $$ \Big(p(1)p(2)(1-p(3))(1-p(4))+p(1)(1-p(2))p(3)(1-p(4))+(1-p(1))p(2)p(3)(1-p(4))+p(1)(1-p(2))(1-p(3))p(4)+(1-p(1))p(2)(1-p(3))p(4)+(1-p(1))(1-p(2))p(3)p(4)\Big)p(5) $$
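An illustrative brute-force check of this formula with made-up per-game probabilities (not part of the original answer): letting all five games be played, $A$ wins the series exactly when $A$ reaches three wins before the opponent does, so summing full-outcome probabilities gives the same number.

```python
import math
from itertools import product

p = [0.6, 0.5, 0.55, 0.45, 0.7]          # hypothetical win probabilities for games 1..5
q = [1 - x for x in p]

def a_wins(outcome):
    a = b = 0
    for w in outcome:
        a, b = a + w, b + (1 - w)
        if a == 3:
            return True
        if b == 3:
            return False
    return False

brute = sum(
    math.prod(p[i] if w else q[i] for i, w in enumerate(o))
    for o in product([0, 1], repeat=5)
    if a_wins(o)
)

after3 = p[0] * p[1] * p[2]
after4 = (p[0] * p[1] * q[2] + p[0] * q[1] * p[2] + q[0] * p[1] * p[2]) * p[3]
after5 = (p[0] * p[1] * q[2] * q[3] + p[0] * q[1] * p[2] * q[3] + q[0] * p[1] * p[2] * q[3]
          + p[0] * q[1] * q[2] * p[3] + q[0] * p[1] * q[2] * p[3] + q[0] * q[1] * p[2] * p[3]) * p[4]
print(brute, after3 + after4 + after5)   # the two values agree
```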
Every ideal $J \supsetneq \sqrt{0}$ contains a non-zero-divisor
Such a ring doesn't exist because the second and last conditions are incompatible. Let $x$ be a non-nilpotent element. By the last condition, the ideal $\big( x,\sqrt{0} \big) \supsetneq \sqrt{0}$ contains a non-zero-divisor, i.e. there exist elements $a, b$ with $ax + b$ a non-zero-divisor, and $b^n = 0$ for some positive integer $n$. But then $(ax + b)^n$ is a non-zero-divisor: if $(ax+b)^n k = 0$, then $(ax+b)\big((ax+b)^{n-1} k\big) = 0$, and since $ax+b$ is not a zero-divisor this forces $(ax+b)^{n-1}k = 0$; repeating the argument gives $k = 0$. Furthermore, expanding $(ax + b)^n$ and using $b^n = 0$ shows it is of the form $cx$ for some element $c$, so $x$ is also a non-zero-divisor.
How to solve this seemingly quadratic inequality?
$$\frac{2x-4}{x}\le 7\iff\frac{2x-4}{x}-7\le0\iff\frac{-5x-4}{x}\le0\stackrel{\cdot x^2}\iff-5x\left(x+\frac45\right)\le 0\iff\ldots$$ (keeping in mind that $x\neq0$). Try now to end the argument and find the solution. The above is the general way to handle these inequalities: make one side zero as shown above, etc.
How do you simplify $ \frac{\tan \theta \cos \theta}{\sec \theta} $?
Remember that $\tan(\theta) = \frac{\sin(\theta)}{\cos(\theta)}$ and $\sec(\theta) = \frac{1}{\cos(\theta)}$. Substitute these into your original expression, cancel terms where appropriate, and simplify. It is also entirely possible that your answer and your teacher's answer are equivalent, which is something you'll have to look out for more and more as you progress in math. :)
Find $\lim_{x\to0}(1+x^{2})^{\cot^2{x}}$ without lhopital
No, you can't say that. $\cos^2 x$ and $x^2$ are not very much alike close to $0$. The limit is actually $e$. But your tag says "limits without L'hospital", and I don't see a way to get $e$ without it.
8 bit linear feedback shift register Non-zero coefficients
Determine the key bits first. Then write down the equations on $a, \ldots h$. Solve.
Finding the partial derivatives of this function
Call $h(s) = g(w^2-\sqrt{s})$. When you compute the partial derivatives with respect to $u,v$, the variable $w$ is fixed, hence you can think of $h$ as a function $\mathbb{R} \longrightarrow \mathbb{R}$. So $$\frac{\partial}{\partial u} \int_u^v h(s) ds = - \frac{\partial}{\partial u} \int_v^u h(s) ds =- h(u)$$ since $v$ is considered a constant. In the same way you get $$ \frac{\partial}{\partial v} \int_u^v h(s) ds = h(v)$$ While for the third partial derivative you need to exchange the derivative with the integral sign, so you get $$\frac{\partial}{\partial w} \int_u^v g(w^2-\sqrt{s}) ds = \int_u^v \frac{\partial}{\partial w} g(w^2-\sqrt{s}) ds = \int_u^v g'(w^2-\sqrt{s}) 2w \ ds$$
Which of the following sets necessarily contain a multiple of 3
If $n^{19} $ is divisible by $3$, we are done. Otherwise, $n^{19} $ is congruent to $1$ or $-1$ modulo $3$. Then $n^{38} = (n^{19})^2 \equiv (\pm 1)^2 = 1 \pmod{3} $, so $n^{38}-1 \equiv 0 \pmod {3} $, so it is divisible by $3$. Alternatively, a non-modular solution: If $n^{19}$ is not a multiple of $3$, we have $n^{19}= 3m \pm 1$ for some integer $m $. Then $$ n^{38} -1=(n^{19})^2-1=(3m \pm 1)^2 -1=(3m)^2 + 2 (\pm 1)(3m) +(\pm1)^2 -1 = 9m^2 \pm 6m +1-1=3 (3m^2 \pm 2m), $$ which is divisible by $3$. The idea behind modular arithmetic is that if we only care about divisibility by three, we only need to keep track of the remainders when we perform such calculations, which greatly simplifies matters. You'll love it once you've learnt it!
Solving a ODE system with constant coefficients
Try solutions proportional to $\exp{(2t)}$, i.e.: $$x_1=c_1\exp{(2t)}$$ $$x_2=c_2\exp{(2t)}$$ to arrive at the system $$\left\{2\begin{bmatrix}1&0\\0&1\end{bmatrix}+\begin{bmatrix}-2&1\\1&-2\end{bmatrix}\right\}\begin{bmatrix}c_1\\c_2\end{bmatrix}=\begin{bmatrix}1\\2\end{bmatrix}$$ Solve for $c_1,c_2$.
Counterexample to show that the set of global minima of a function $f$ is a strict subset of the set of minima of the convex envelope of $f$
Take the function $f(x)=x^2 (x-1)^2$ on $I=[-2,2]$; then the global minima of $f$ are $0$ and $1$. If $f_C$ is its convex envelope, then you showed that $0,1$ are global minima for $f_C$, but then all the points in $[0,1]$ must be global minima of $f_C$. (In the general case, the set of global minima of $f_C$ will contain the convex hull of the set of global minima of $f$.)
On the clarification of Manin's remark about Gödel’s incompleteness theorems
According to the simplest interpretation of Manin's comment, the algebraic structures here are theories (really, just sets of sentences), with the operations corresponding to proofs. This fits into an existing tradition ("algebraic logic") of trying to give algebraic interpretations of logical systems. Historical note: The original success of algebraic logic was the relationship between Boolean algebras and propositional logic, and this was further augmented on the topological side via Stone duality. The picture with stronger logics gets much more complicated, unfortunately; see e.g. the notion of cylindric algebras, which arise when we "algebraify" first-order logic. Let's look at a weak incompleteness principle first: $(*)\quad$ The set $Th(\mathbb{N})$ of true sentences in the language of arithmetic is not finitely axiomatizable. This is a corollary of the full first incompleteness theorem ("No complete consistent extension of PA (or even much less!) is recursively axiomatizable"). Now let's "algebraify" the principle $(*)$, to say that a certain algebraic structure $\mathcal{A}$ isn't finitely generated: The elements of $\mathcal{A}$ are exactly the sentences in $Th(\mathbb{N})$. The operations of $\mathcal{A}$ are just the inference rules of first-order logic. (We have to be a bit careful here, and cook up a set of "unique application" inference rules since otherwise our "operations" are multi-valued. Alternatively, we could take as our operations individual proofs: if $p$ is a proof of $\varphi$ from $\psi_1,...,\psi_n$, then the operation $f_p$ associated to $p$ is the $n$-ary operation defined by $f_p(x_1,...,x_n)=\varphi$ if $x_i=\psi_i$ for $1\le i\le n$ and $f_p(x_1,...,x_n)=x_1$ otherwise.) Before going forward, let's talk about where $\mathcal{A}$ "lives" (especially since the definition of $\mathcal{A}$ itself already referred to $Th(\mathbb{N})$, so it seems kind of ad hoc). $\mathcal{A}$ is a substructure of the larger structure $\mathcal{B}$ with the same operations but domain consisting of all sentences. $\mathcal{B}$ is computably presentable: its domain can be thought of as the set of valid Godel numbers of sentences (which is computable), and each operation of the algebra is computable when so interpreted (exercise). So it makes sense to consider $\mathcal{B}$ as "given at the outset," and all of our work being aimed at understanding complicated substructures of $\mathcal{B}$. The principle $(*)$ is then exactly equivalent to "$\mathcal{A}$ is not finitely generated." Because if it were generated by $\{a_1,...,a_n\}$, the theory $\{a_1,..., a_n\}$ would be complete and prove exactly $Th(\mathbb{N})$, and this contradicts $(*)$. A stronger form of the first incompleteness theorem says: $(**)\quad$ No complete consistent theory in the language of arithmetic extending $PA$ (or again, much less) is finitely axiomatizable. Now the algebraic situation is the following: I have a distinguished substructure $\mathcal{S}$ of $\mathcal{B}$, consisting of the theorems of PA; and I have a distinguished class $\mathfrak{C}$ of substructures of $\mathcal{B}$, namely those corresponding to complete consistent theories. Then $(**)$ is equivalent to the statement "If $\mathcal{A}\in\mathfrak{C}$ and $\mathcal{S}\subseteq\mathcal{A}$ then $\mathcal{A}$ is not finitely generated." 
Finally, the full first incompleteness theorem, $({*}{*}{*})\quad$ No complete consistent extension of PA (or indeed much less) is recursively axiomatizable, is then equivalent to the statement that no $\mathcal{A}\in\mathfrak{C}$ containing $\mathcal{S}$ is recursively generated. This is a bit of an odd situation, since (unlike the finite generation situation) whether or not a structure is recursively generated is not isomorphism-invariant (finite sets remain finite no matter how you name them, but recursiveness isn't so invariant). This explains (I believe) why Manin talks about finite generability, even though we get a stronger result from the incompleteness theorem: the stronger result, being non-isomorphism-invariant on the face of things, is not "algebraically natural."
Solve using Bessel function properties
The question should have given options, or should have stated that it wants the answer in terms of Bessel functions. Using the identity $\frac{d}{dx}\left(x^{-n}J_n(x)\right)=-x^{-n}J_{n+1}(x)$ together with integration by parts: $$\int{x^3J_3(x)\,dx} = \int{x^5\cdot x^{-2}J_3\,dx}$$ $$ =x^5(-x^{-2}J_2) - \int{5x^4\cdot(-x^{-2}J_2)\,dx}$$ $$ =-x^3J_2 + 5\int{x^2J_2\,dx}$$ $$ =-x^3J_2 + 5\int{x^3\cdot(x^{-1}J_2)\,dx}$$ $$ =-x^3J_2 +5x^3\cdot(-x^{-1}J_1) -5\int{3x^2\cdot(-x^{-1}J_1)\,dx}$$ $$ =-x^3J_2 -5x^2J_1 +15\int{xJ_1\,dx}$$ $$ = -x^3J_2 -5x^2J_1 +15x\cdot(-J_0) -15\int{(-J_0)\,dx}$$ $$ = -x^3J_2 -5x^2J_1 -15xJ_0 +15\int{J_0\,dx}+C$$
Why is Wiener measure a Gaussian measure?
Take a sequence of partitions of $[0,T]$ with mesh tending to $0$. If $\mu \in (C[0,T])^{*}$ then, by uniform continuity of Brownian paths, we have $\int_0^{T}B(t)\,d\mu(t)=\lim \sum_i B(t_i)\,\mu\big((t_i,t_{i+1}]\big)$ almost surely, and almost sure limits of Gaussian random variables are Gaussian.
Triangular Summation $\displaystyle\sum_{i=0}^n\sum_{j=0}^i (i+j)=3\sum_{i=0}^n\sum_{j=0}^i j$
Another way, maybe more beautiful than my previous answer: \begin{align*}\sum_{i=0}^n \sum_{j=0}^i (i+j)&=\sum_{i=0}^n \sum_{j=0}^i ((i-j)+j+j)\\&= \sum_{i=0}^n \left[\left(\sum_{j=0}^i (i-j)\right)+2\left(\sum_{j=0}^i j\right)\right]\\&=3\sum_{i=0}^n\sum_{j=0}^ij\end{align*} where $\sum_{j=0}^i (i-j)=\sum_{j=0}^i j$ by the change of indices $j'=i-j$.
Cardinality of cartesian product of an infinite set with N
Your attempt to apply Zorn’s lemma doesn’t actually work, because you’re applying it to the wrong partial order. You’re taking as your partial order the family of subsets $S$ of $A$ such that there is some bijection from $S\times\Bbb N$ to $S$. If $\mathscr{C}$ is a chain in this partial order, $A$ is not known to be an upper bound for $\mathscr{C}$ in the partial order, because you don’t know that there is a bijection between $A\times\Bbb N$ and $A$: that, in fact, is what you’re trying to prove. In order to follow the hint, you should consider the partial order $\langle\mathscr{F},\subseteq\rangle$. Let $\mathscr{C}$ be a chain in this partial order, and let $h=\bigcup\mathscr{C}$. For each $f\in\mathscr{C}$ there is an $S_f\subseteq A$ such that $f$ is a bijection from $S_f\times\Bbb N$ to $S_f$; let $S=\bigcup_{f\in\mathscr{C}}S_f$. Show that $h$ is a bijection from $S\times\Bbb N$ to $S$. Conclude that $h\in\mathscr{F}$ is an upper bound for $\mathscr{C}$. Now apply Zorn’s lemma to get a maximal $h\in\mathscr{F}$, and let $B\subseteq A$ be such that $h$ is a bijection from $B\times\Bbb N$ to $B$. Once you have this, the argument at your first bullet point works fine. I see no need to make two cases out of this depending on whether $A\setminus B$ is finite; that’s an unnecessary complication.
Casimir element of a representation of a semi-simple Lie algebra
If $L$ is semisimple, then the Killing form $\beta$ is non-degenerate and thus induces an isomorphism $L \to L^*$ via $y \mapsto \beta(\cdot, y)$. The thing is that calling $\{y_1,\ldots,y_n\}$ a "dual basis" is an abuse of language, since $y_j \in L$ for all $j$. But the word "dual" is justified by the isomorphism $L\to L^*$, since given the basis $\{x_1,\ldots,x_n\}\subseteq L$, there is the dual basis $\{x_1^*,\ldots,x_n^*\}\subseteq L^*$ satisfying $x_j^*(x_i)=\delta_{ij}$. And the map $L\to L^*$ induced by $\beta$ is bijective, so for each $j$ there is a unique $y_j\in L$ such that $\beta(\cdot,y_j)=x_j^*$. Evaluate both sides at $x_i$ to get $\beta(x_i,y_j)=\delta_{ij}$, as wanted.
Why is variance defined this way?
The mean absolute deviation (with respect to the mean), $E(\vert X - \mu \vert)$, is an alternative index of variability. A variant is to look at the mean absolute deviation with respect to the median $m$, because it can be shown that the mean absolute deviation $E(\vert X - a \vert)$ with respect to a value $a$ is minimised when $a=m$. The variance is usually preferred to the mean absolute deviation for a few reasons. One modelling reason is that the quadratic term penalises large deviations more than small deviations (presumably, large deviations are worse). Another one is that the square deviation is differentiable and hence easier to handle. You can learn more about this at https://stats.stackexchange.com/questions/118/why-square-the-difference-instead-of-taking-the-absolute-value-in-standard-devia
Abelian group of order $p^2q^2$ ($p$,$q$ distinct primes) determine number of elements of order $pq$ and $pq^2$
Note that you can write, in each case, the group as $G = G_p \oplus G_q$ where $G_p$ is a group of order $p^2$ and $G_q$ a group of order $q^2$. In each case, the number of elements of order $pq$ in $G$ is $n_pn_q$ where $n_p$ is the number of elements of order $p$ in $G_p$ and $n_q$ the number of elements of order $q$ in $G_q$. Likewise, in each case, the number of elements of order $pq^2$ in $G$ is $n_pn_q'$ where $n_p$ is the number of elements of order $p$ in $G_p$ and $n_q'$ the number of elements of order $q^2$ in $G_q$. The number of elements of order $q^2$ in $Z_{q^2}$ is $\varphi(q^2) = q^2 -q$, where $\varphi$ is Euler's totient. The remaining elements, except for the $0$ element, have order $q$, so there are $q-1$ of them. For $Z_q \oplus Z_q$ the number of elements of order $q$ is $q^2-1$, as there are (of course) none of order $q^2$. The remaining calculation should not pose a problem.
Semidirect product: general automorphism always results in a conjugation
Use the same idea. Observe $$\begin{array}{ll} (n_1,h_1)(n_2,h_2) & =(n_1,e_H)(e_N,h_1)(n_2,e_H)(e_N,h_2) \\ & = (n_1,e_H)\color{Blue}{(e_N,h_1)(n_2,e_H)(e_N,h_1^{-1})}(e_N,h_1)(e_N,h_2) \end{array} $$ and $$\begin{array}{ll} (n_1,h_1)(n_2,h_2) & =(n_1\phi_{h_1}(n_2),h_1h_2) \\ & = (n_1,e_H)\color{Blue}{(\phi_{h_1}(n_2),e_H)}(e_N,h_1)(e_N,h_2). \end{array} $$ This means when we conjugate elements of $N\times\{e_H\}$ by elements of $\{e_N\}\times H$, we get the same thing as if we apply the elements of $H$ as automorphisms to $N$, then put it in $N\times\{e_H\}$. Tuples are annoying and obfuscate the algebra in my opinion though. We should think of the semidirect product $N\rtimes H$ as the free product $N*H$ (whose elements are words formed from using the elements of $N$ and $H$ as letters) modulo the relation that conjugating elements of $N$ by elements of $H$ yields the same thing as if we applied the corresponding automorphism. That is, elements of $N\rtimes H$ look like words formed from elements of $N$ and $H$. Their identity elements are identified as the same group element in $N\rtimes H$. Elements of $N$ multiply among themselves as usual, and same for elements of $H$ multiplying among themselves. But every instance of the word $hnh^{-1}$ ($h\in H,n\in N$) may be simplified to $\phi_h(n)$, and that is the only relation imposed on multiplication between elements of the two subgroups $N$ and $H$. Using this definition, it's easy to see that $hn=(hnh^{-1})h=\phi_h(n)h$ so $HN=NH$ within $N\rtimes H$, and every element of $H$ can be "slid past" an element of $N$ to the right (although it changes the element of $N$ along the way). As a result, every word $\cdots h_{-1}n_{-1}h_0n_0h_1n_1\cdots$ (finitely many letters of course) can be simplified via this sliding rule to the canonical form $nh$. Writing $n_1h_1=n_2h_2$ yields $h_1h_2^{-1}=n_1^{-1}n_2$, but the only element in $N\cap H$ (when we treat $N,H$ as subgroups of $N\rtimes H$) is the identity, so $h_1=h_2$ and $n_1=n_2$. Thus $N\rtimes H$ can be bijected with $N\times H$ set-theoretically. In order to transport the multiplication over, it remains to see how $(n_1h_1)(n_2h_2)$ simplifies to $n_3h_3$, which is something you've essentially already done.
Proving that induced homomorphism is an isomorphism
Maybe this solution clears it up for you: We have that $f:X \to A$ is a retraction, so the restriction $f|_{A}$ is the identity map on $A$ ($\subset X$). As $j: A \to X$, the composition $f \circ j$ is the identity map on $A$. It follows that this composition induces a homomorphism $f_{*}\circ j_{*}$ which is the identity map on $\pi_{1}(A,a)$. In particular, for any loops $[k],[l] \in \pi_{1}(A,a)$ such that $j_{*}[k]=j_{*}[l]$, we have $(f_{*}\circ j_{*})[k]=(f_{*} \circ j_{*})[l]$, which implies $[k]=[l]$, as $f_{*}\circ j_{*}$ is the identity map on $\pi_{1}(A,a)$. It follows at once that $j_{*}$ is injective. Note that it is not necessarily true that $j_{*}$ is a surjection; thus, it's not necessarily an isomorphism.
Meromorphic function tending to infinity cannot have poles at all integer points
Let $R>0$ be such that $|f(z)|\ge1$ if $|z|>R$. Then $g(z)=1/f(1/z)$ is meromorphic and bounded on the punctured disk $\{0<|z|<1/R\}$. It can be extended to the disk as an analytic function. Since $f(z)\to\infty$, we must have $g(0)=0$. If all integers were poles of $f$, then $g(1/n)=0$ for all $n\in\mathbf{Z}$, $n\ne0$. This implies that $g$ is identically equal to $0$.
Finding a basis for the nullspace
If the matrix is $$ \begin{bmatrix}-2 & 5 & 3 & -1\\ 0 & 1 & -4 & 2\\ 6 & -14 & -13 & 1\\ 0 & 0 &0 &0\end{bmatrix} $$ the row reduced form of the matrix is $$ \begin{bmatrix}1 & 0 & -\frac{23}{2} & 0\\ 0 & 1 & -4 & 0\\ 0 & 0 & 0 & 1\\ 0 & 0 &0 &0\end{bmatrix} $$ so a basis is $$ \begin{bmatrix}23 \\ 8 \\ 2 \\ 0\end{bmatrix} $$ If the matrix is $$ \begin{bmatrix}-2 & 5 & 3 & -1\\ 0 & 1 & -4 & -2\\ 6 & -14 & -13 & 1\\ 0 & 0 &0 &0\end{bmatrix} $$ the row reduced form of the matrix is $$ \begin{bmatrix}1 & 0 & -\frac{23}{2} & -\frac{9}{2}\\ 0 & 1 & -4 & -2\\ 0 & 0 & 0 & 0\\ 0 & 0 &0 &0\end{bmatrix} $$ so a basis is $$ \begin{bmatrix}23 \\ 8 \\ 2 \\ 0\end{bmatrix}, \begin{bmatrix}9 \\ 4 \\ 0 \\ 2\end{bmatrix} $$
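An illustrative check of the second case with sympy (not part of the original answer); scaling each basis vector by $2$ clears the denominators:

```python
import sympy as sp

A = sp.Matrix([[-2, 5, 3, -1],
               [0, 1, -4, -2],
               [6, -14, -13, 1],
               [0, 0, 0, 0]])
for v in A.nullspace():
    print((2 * v).T)   # [23, 8, 2, 0] and [9, 4, 0, 2]
```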
Is convergence related to absolute convergence?
Since you can't take sums in metric spaces, I think that the most natural space in which to ask this question is a Banach space. If I understood you correctly, you want to answer the following: Question: Suppose $\sum_{n=1}^\infty a_n$ is a series in a Banach space $X$ such that every subseries $\sum_{k=1}^\infty a_{n(k)}$ converges in $X$. Does $\sum_{n=1}^\infty \Vert a_n\Vert$ converge? In that case, the answer is no. Let $X=\ell^\infty=\ell^\infty(\mathbb{N})$, the set of all sequences $x=(x_1,x_2,\ldots)$ of scalars (in $\mathbb{C}$, say) such that $\sup_{n\in\mathbb{N}}| x_n|<\infty$. Equipped with the norm $\Vert x\Vert=\sup_{n\in\mathbb{N}}|x_n|$, it is a Banach space. Consider, for each $n\in\mathbb{N}$, $a_n=(0,\ldots,0,\underbrace{1/n}_{n^{th}\text{ position}},0,\ldots)$. Then $\sum_{n=1}^\infty a_n$ is a series in $X$ for which the statement is not true.
finding the area of triangle
Since the sides are in the ratio $13:14:15$, let the three sides be $13k$, $14k$ and $15k$ in ascending order. Now the 3 sides add up to a perimeter length of $84$ m, so $$\begin{align*} 13k + 14k + 15k =& 84 \text{ m}\\ k =& 2 \text{ m} \end{align*}$$ And so the three sides are $26$ m, $28$ m and $30$ m respectively. Now use Heron's formula to find the area. (Hint: you should confirm that $s$ is half of the perimeter.)
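Completing the hint numerically (an illustrative sketch, not part of the original answer):

```python
import math

a, b, c = 26, 28, 30
s = (a + b + c) / 2                                   # semi-perimeter = 42 = 84/2
area = math.sqrt(s * (s - a) * (s - b) * (s - c))     # Heron's formula
print(area)  # 336.0 square metres
```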
Is $\int_{0}^{\pi} f^{2}(x) d x$ divergent or convergent?
The functions $\sin nx$ form an orthogonal system in $L^2(0,\pi)$ and $||\sin nx||^2=\pi/2$. If we write $S(x)=\sum a_n \sin nx$ then $\sum |a_n|^2=\sum n^{-4/3}<\infty$, hence the infinite sum converges in $L^2$. In fact we have $\int S^2 dx=||S||^2=(\pi/2) \sum_1^\infty {1\over {n^{4/3}}}$ by Parseval.
When are two morphisms of sheaves the same?
$F$ may have literally no global sections at all. For example, if $F$ is (the sheaf of sections of) any connected non-trivial cover of the circle, then $F$ has no global sections but nonetheless has non-trivial automorphisms. However, it is true that two morphisms of sheaves (on a topological space) are equal if and only if the induced morphisms of stalks are equal.
Cesaro summability of this sequence
Strategy: If you add up the first $2^n$ terms, then: If $n$ is even then over $\frac{1}{2}+\frac{1}{8}$ of the terms will be $1$, so the sum is over $\frac{5}{8}2^n$. If $n$ is odd then over $\frac{1}{2}+\frac{1}{8}$ of the terms will be $0$, so the sum is under $\frac{3}{8}2^n$.
How many choices can be made from the types of coffee beans: 4 Latte, 3 Americano, 2 Expresso, and 1 Arabica...
In the general case, I pick $(0/1/2/3/4)$ Latte and $(0/1/2/3)$ Americano and $(0/1/2)$ Expresso and $(0/1)$ Arabica - with the added constraint that all cannot be $0$ Using the and-or rule for counting, we can express the above as follows $$N = \left({4 \choose 0} + {4 \choose 1} + {4 \choose 2} + {4 \choose 3} + {4 \choose 4}\right)\times \left({3 \choose 0} + {3 \choose 1} + {3 \choose 2} + {3 \choose 3} \right)\times \left({2 \choose 0} + {2 \choose 1} + {2 \choose 2}\right)\times \left({1 \choose 0} + {1 \choose 1} \right) - 1$$ The one subtracted at the end is to remove the case that all $0$
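An illustrative evaluation of this expression (not part of the original answer):

```python
from math import comb

# Sum over which of the (distinct) beans of each type we take, then subtract the empty choice.
N = (sum(comb(4, k) for k in range(5))
     * sum(comb(3, k) for k in range(4))
     * sum(comb(2, k) for k in range(3))
     * sum(comb(1, k) for k in range(2))) - 1
print(N)  # 16 * 8 * 4 * 2 - 1 = 1023
```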
ExPreQ1: If $f>0$ is integrable on $[a,b]$, then $\sqrt{f}$ is integrable.
Hint: Use that $$\sqrt{x}-\sqrt{y}= \frac{x-y}{\sqrt{x}+\sqrt{y}}$$ Alternatively, there is the following theorem: THM Let $f:[a,b]\to \Bbb R$ be Riemann integrable, and suppose $\phi(x):[m,M]\to\Bbb R$ is continuous. Assume $f([a,b])\subset [m,M]$. Then $g=\phi\circ f$ is Riemann integrable. P Take $\epsilon >0$. Since $\phi$ is continuous over the compact $[m,M]$, it is uniformly continuous there, so there exists $\delta >0$ such that for each $x,y\in [m,M]$, $|x-y|<\delta\implies |\phi(x)-\phi(y)|<\epsilon$. Since $f$ is Riemann integrable on $[a,b]$ there exists a partition $P_\epsilon$ such that for any refinement $P$ of $P_\epsilon$ we have $$\tag 1 U(f,P)-L(f,P)<\delta^2$$ Assuming $P=\{x_0,x_1,\dots,x_n\}$, let $$M_i=\sup\{f(x):x\in[x_{i-1},x_i]\}$$ $$m_i=\inf\{f(x):x\in[x_{i-1},x_i] \}$$ $$M_i^*=\sup\{g(x):x\in[x_{i-1},x_i]\}$$ $$m_i^*=\inf\{g(x):x\in[x_{i-1},x_i] \}$$ Divide now the numbers $1,\dots,n$ into two classes: $i\in A$ if $M_i-m_i<\delta$, and $i\in B$ if $M_i-m_i\geq \delta$. If $i\in A$, the way we chose $\delta$ gives that $$M_i^*-m_i^*\leq \epsilon$$ For $i\in B$, we have that $$M_i^*-m_i^*\leq 2K$$ where $K=\sup\{|\phi(x)|:x\in[m,M]\}$ We have by $(1)$ that $$\delta\sum_{i\in B}\Delta x_i\leq \sum_{i\in B}(M_i-m_i)\Delta x_i<\delta^2$$ since $B$ is a subset of $\{1,2,\dots,n\}$ and all is positive. It follows that $$U(g,P)-L(g,P)=\sum_{i\in A}(M_i^*-m_i^*)\Delta x_i+\sum_{i\in B}(M_i^*-m_i^*)\Delta x_i\\ \leq \epsilon(b-a)+2K\delta <\epsilon(b-a+2K)$$ for we may assume $\delta <\epsilon$. Since $\epsilon >0$ was arbitrary, the theorem follows. $\blacktriangle$.
tiling floors with 1x1, 1x2, 2x2 - how many floor tile patterns
Let $T$ be any floor shape, and consider the upper-leftmost square of the floor. There are at least one, and at most four, ways of covering that square: The $2\times2$ tile The $1\times1$ tile The $1\times2$ tile oriented vertically The $1\times2$ tile oriented horizontally Convince yourself these are the only possibilities. For each of these possibilities, placing the tile gives you a new, smaller floor shape $T'$. In this way you can recursively build a quaternary tree that will systematically enumerate all possible tilings. Except for very small floors doing so will be very tedious to do by hand, but it's a nice dynamic programming exercise, if you have computer science experience. There are sometimes surprising combinatorial arguments that lead to simple formulas for these kinds of tiling problems, but I don't see an obvious one here, and I wouldn't hold my breath.
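A minimal sketch of that recursion (illustrative only, and assuming a rectangular rows × cols floor for concreteness): find the upper-leftmost uncovered square and try each of the four placements that could cover it.

```python
from functools import lru_cache

def count_tilings(rows, cols):
    full = (1 << (rows * cols)) - 1

    def bit(r, c):
        return 1 << (r * cols + c)

    @lru_cache(maxsize=None)
    def go(filled):
        if filled == full:
            return 1
        empty = ~filled & full
        idx = (empty & -empty).bit_length() - 1      # upper-leftmost uncovered square
        r, c = divmod(idx, cols)
        placements = [
            [(r, c)],                                          # 1x1
            [(r, c), (r, c + 1)],                              # 1x2 horizontal
            [(r, c), (r + 1, c)],                              # 1x2 vertical
            [(r, c), (r, c + 1), (r + 1, c), (r + 1, c + 1)],  # 2x2
        ]
        total = 0
        for cells in placements:
            if all(rr < rows and cc < cols and not filled & bit(rr, cc) for rr, cc in cells):
                total += go(filled | sum(bit(rr, cc) for rr, cc in cells))
        return total

    return go(0)

print(count_tilings(2, 2))  # 8 tilings of a 2x2 floor
```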
Show that $\langle u_1, u_2, u_3\rangle \subsetneq \langle v_1,v_2,v_3\rangle$ for the given vectors
The elements of the set $\{u_1,u_2,u_3\}$ aren't linearly independent, but those of $\{v_1,v_2,v_3\}$ are. Since this last set generates $\mathbb R^3$ and the first set generates a proper subspace of $\mathbb R^3$, the claim of the title is true.
relation between singular values and eigenvalue
The point is that if $T=(t_{ij})\in\mathbb{C}^{n\times n}$ is triangular, then $$\tag{$❀$} \sigma_{\min}(T)\leq\min_i|t_{ii}|\leq\max_i|t_{ii}|\leq\sigma_{\max}(T). $$ In other words, $$ \sigma_{\min}(T)\leq|t_{ii}|\leq\sigma_{\max}(T), \quad i=1,\ldots,n. $$ To see this, consider the vector $e_i$ (the $i$th column of the identity matrix). Then $$ |t_{ii}|\leq\sqrt{\sum_{k=1}^i|t_{ki}|^2}=\|Te_i\|_2\leq\max_{\|x\|_2=1}\|Tx\|_2=\sigma_{\max}(T). $$ The other direction can be show similarly using the inverse of $T$, the fact that $\sigma_{\min}(T)=1/\sigma_{\max}(T^{-1})$, and that the diagonal of $T^{-1}$ is equal to the inverse of the diagonal of $T$. Note that if the matrix $T$ is not invertible, then the lower bound on $|t_{ii}|$ is trivial as $\sigma_{\min}(T)=0$. Now if $T=Q^*AQ$ is the Schur form of $A$, $T$ has the same singular values as $A$, and $t_{ii}$ are the eigenvalues of $T$ (and $A$). Then just use ($❀$).
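An illustrative numerical spot-check of the conclusion (not part of the original answer): every eigenvalue modulus of a random matrix lies between its smallest and largest singular values.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((6, 6))
eig_moduli = np.abs(np.linalg.eigvals(A))
sigma = np.linalg.svd(A, compute_uv=False)
print(sigma.min() <= eig_moduli.min() and eig_moduli.max() <= sigma.max())  # True
```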
Is Maxima/Minima of Lagrange function same as Maxima/Minima of function under consideration?
This is a good question that has a lot of depth to the answer, but the essence is no, this is not true in general for even what you're asking, which is a pretty specific case of the KKT theorem. Essentially this answer gets more complicated when dealing with the KKT theorem, which is a more generalized form of Lagrange multipliers. But the method of Lagrange multipliers is only a necessary condition, meaning that not all the points are guaranteed to be optimal. In fact, part of the exercise of working with Lagrange multipliers in problems is checking the solutions from what you get from the Lagrangian and seeing which is the optimal solution. However, in the case you're specifying, we do have that the Lagrangian will find any point that is optimal in our original problem. Edit: for further reading, page 7 of this lecture may help solidify some of this.
the determinant function is an open function?
It is an open function! First of all, we will state a standard result about open functions. Suppose $f:X\to Y$. Then $f$ is open if and only if for every point $x$, and every neighborhood $U$ of $x$, there is a neighborhood $V$ of $f(x)$ so that $V \subset f(U)$. Furthermore, it suffices to assume $U$ is basic open. I don't know how to prove this simultaneously for invertible and non-invertible matrices, so we'll have to do cases. Sorry it's so long... I'll keep thinking about a simpler solution. Let's start with the invertible case. Let $\varepsilon >0$. Assume $M$ is an invertible $n\times n$ matrix over $\mathbb{F}$ ($\mathbb{F}$ can be complex or real, it doesn't matter). Then fix a small $x$. (We will figure out how small later.) In the real setting, let $z$ be the $n^\text{th}$ root of $1+x$. In the complex setting we need the root with the smallest argument, i.e. $$z = \sqrt[n]{|1+x|}e^{i\arg(1+x)/n}$$ Now, assume $x$ is small enough so that $$ |1-z|<\frac{\varepsilon}{\|M\|} $$ So we have $$ \|M-zM\|\leq |1-z|\;\|M\|<\varepsilon $$ Lastly, $$ \det(zM)=z^n\det(M)=(1+x)\det M $$ Now, because $\det M \neq 0$, we can choose $x$ to give us any value nearby $\det M$. So there is some neighborhood $V$ around $\det M$ with $V\subset f[B(M,\varepsilon)]$. We can now get to work on the non-invertible case. Let $\varepsilon>0$. Suppose $M$ is a non-invertible $n\times n$ matrix. Let $M=SJS^{-1}$ be the Jordan decomposition of $M$. Then $J$ is a Jordan form matrix, and we know that at least $1$ of its diagonal entries is $0$. Let $\lambda_1,...,\lambda_i$ be the non-zero eigenvalues. (There may not be any; just let $i=0$ in that case.) Then there are $j:= n-i$ many $0$ diagonal entries. Let $J'$ be the matrix where the first $0$ diagonal is replaced by $x$, and the rest by $|x|$ for some small $x$. ($M$ is singular so there is at least one such entry). Observe $$ \|M-SJ'S^{-1}\|=\|S(J-J')S^{-1}\|\leq \|S\|\,\|S^{-1}\|\,|x| $$ Also, $$ \det(SJ'S^{-1})=\det J'= x|x|^{j-1}\prod_{k=1}^i \lambda_k $$ So we want $\|S\|\,\|S^{-1}\|\,|x|<\varepsilon$. So by choosing $x$ appropriately, we can get a small neighborhood around $0$, call it $V$, where $V\subset f[B(M,\varepsilon)]$. Then we're done! Yay.
Is a definite integral just a summation?
It's certainly helpful to think of integrals this way, though not strictly correct. An integral takes all the little slivers of area beneath a curve and sums them up into a bigger area - though there's, of course, a lot of technicality needed to think about it this way.

There is an idea being obscured by your idea of an integral as a sum, however. In truth, a sum is a kind of integral and not vice versa - this is somewhat counterintuitive given that sums are much more familiar objects than integrals, but there's an elegant theory known as Lebesgue integration which basically makes the integral a tool which eats two pieces of information: an indexing set together with some way to "weigh" pieces of that set, and a function on that set. Then, the Lebesgue integral spits out the weighted "sum" of that function. An integral in the most common sense arises when you say, "I have a function on the real numbers, and I want an interval to have weight equal to its length." A finite sum arises when you say "I have a function taking values $x_1,\ldots,x_n$ on the index set $\{1,\ldots,n\}$. Each index gets weight $1$" - and, of course, you can change those weights to get a weighted sum or you can extend the indexing set to every natural number to get an infinite sum. But, the basic thing to note is that there's a more general idea of integral that places summation as part of the theory of integration.
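If it helps to see the "sum of slivers" picture numerically, here is a minimal sketch (my own, assuming NumPy): left-endpoint Riemann sums of $x^2$ on $[0,1]$ approach the integral $1/3$ as the rectangles get thinner:

```python
import numpy as np

f = lambda x: x**2
for n in (10, 100, 10_000):
    x = np.linspace(0, 1, n, endpoint=False)   # left endpoints of n slivers
    print(n, np.sum(f(x)) / n)                 # approaches 1/3
```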
Measures induced by integral and by measurable mapping
As stated, (2) is not true. Take $\mu$ to be a measure so that $\mu(X) = 1$. Let $f$ be the function $f\equiv 2$. Then $v_f = 2\mu$, and so $v_f(X) = 2$. By the definition of $w_g$, for any self-map $g:X\to X$, $w_g(X) = \mu(g^{-1}(X)) \leq \mu(X) = 1$. Hence $V$ is not a subset of $W$.

The reverse is also not true. Take $\mu$ to be an arbitrary atomless measure. Let $X_0\in \mathcal{F}$ be a set with positive measure, and let $\{x_0\}\in \mathcal{F}$ be a point. Then the map $g:X\to X$ such that $g|_{X\setminus X_0} = Id$, and that $g(y) = x_0$ for any $y\in X_0$, is measurable. But $\mu(\{x_0\}) = 0$, while $w_g(\{x_0\}) = \mu(X_0) > 0$, so $w_g$ is not absolutely continuous w.r.t. $\mu$.

In general I don't think there can be any relationship between those two sets you defined: one is the pushforward of $\mu$ under measurable self-maps of $X$, the other is essentially the class of all absolutely continuous measures w.r.t. $\mu$; it is not clear to me why one may expect the two to be connected, unless perhaps very stringent requirements are placed on which $\mathcal{F}$ and $\mu$ one is allowed to take.
Understanding Indexed Family of Sets using Real Numbers
You are right: $\bigcup_{n\in\Bbb N}A_n=(1,\infty)$ and $\bigcap_{n\in\Bbb N}A_n=\emptyset$. In this case, $\bigcup_{n\in\Bbb N}A_n=\left(0,1+\sqrt2\right)$ and $\bigcap_{n\in\Bbb N}A_n=\left(1,\sqrt2\,\right]$. Can you justify it? In this case, $\bigcup_{n\in\Bbb N}A_n=\left(-\infty,1\right)$ and $\bigcap_{n\in\Bbb N}A_n=(-3,0]$. Again, can you justify it?
How can this function be differentiable at its endpoints?
When you want to check if a function is differentiable at its endpoints you only need to check whether the one-sided limit exists and is a real number. That's because the two-sided limit isn't even defined at the endpoints. When you want to check if a function is differentiable at an interior point, you need to check the two-sided limit (which needs to exist and be a real number in order for your function to be differentiable at this point). This answer will help you too. I found the link after writing the first two mini-paragraphs of my answer and I think that this is the most common definition given (and the one given to me at school by my teachers), so I will leave them (the two paragraphs) here.
Problem - Sum of digits of $n$ and its square
Prove by induction: for any positive integer $m$ there is a number $n$ with all digits $0$ and $1$, $z(n)=m$ and $z(n^2) = m^2$. For the induction step, note that if $n$ works for $m$, then $10^k + n$ works for $m+1$ if $k$ is sufficiently large.
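Assuming $z(\cdot)$ denotes the digit sum, here is a quick check of the construction (my own sketch): taking exponents $4^i$ spaces the $1$s far enough apart that squaring produces no carries, so $z(n)=m$ and $z(n^2)=m^2$:

```python
def digit_sum(n):
    return sum(int(d) for d in str(n))

for m in range(1, 6):
    # 1s in positions 4^0, 4^1, ..., 4^(m-1): all pairwise exponent sums are
    # distinct, so no carries occur when squaring.
    n = sum(10**(4**i) for i in range(m))
    assert digit_sum(n) == m and digit_sum(n * n) == m * m
    print(m, digit_sum(n), digit_sum(n * n))
```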
How to study the monotonicity of $f(x) = \sin 2x$, without using differentiation and continuity.
It is: $-1 \leq \sin x \leq 1$, $\forall x\in\mathbb R$. $(1)$ You have: $\sin(-\pi)=\sin(\pi)=\sin(0) = 0$ and also $\sin(-\pi/2) = -1,\sin(\pi/2) = 1$. $(2)$ Now, you know that $f(x) = \sin x$ is a continuous function $\forall x \in \mathbb R$. Taking this and $(1),(2)$ into account, you can conclude that $f$ is decreasing on the intervals $[-\pi,-\pi/2],[\pi/2,\pi]$ and increasing on the interval $[-\pi/2,\pi/2].$
Proof of the "Radius of Convergence Theorem"
It is a bit confusing the way he writes it, but what he actually shows in Theorem 3 (in the case $a=0$) is that if $\sum a_n x^n$ converges, then $\sum a_n y^n$ converges absolutely for any $|y|<|x|$. The invocation of $ACT$ is confusing since it speaks about a notion (radius of convergence) whose existence is proved in Theorem 1. However, in the proof of Theorem 3, $R$ is used only to take an $|x|<R$, so that we know $\sum a_n x^n$ converges. What he should have said is "from the proof of Theorem 3, etc...".

More details: In the proof of Theorem 3 (in the book) he picks an arbitrary $|x|<R$ for which he has to show absolute convergence. Then he says there exists $X$ such that $|x|<|X|<R$. For this $X$ you have convergence, and from this convergence alone (no $R$ needed after this point) he deduces absolute convergence for $x$. Notice that at no point does he assume or use that the series converges for $x$. He only uses the radius of convergence to say that the series converges for $X$. In other words he proves the following: if $\sum a_n x^n$ is convergent, then for any $|y|<|x|$, $\sum a_n y^n$ is absolutely convergent.

For Theorem 1, if a) and b) do not hold, there must be an $X$ for which the series diverges. He concludes that for any $|x|>|X|$, the series still diverges. Indeed, if it converged for some $|x|>|X|$, it would converge absolutely for ANY $|y|<|x|$ (by the previous argument), in particular for $X$. Contradiction.
Set of slopes of tangents and secants of a continuous function
I'll assume $f: \mathbb R \to \mathbb R$ is continuous, and that "differentiable" means differentiable at each point of $\mathbb R.$

(i) As mentioned by several others, the statement here is true by the MVT.

(ii) The statement is false: For the function $f(x)=x^3,$ $T=[0,\infty)$ while $S= (0,\infty).$ (The reason $0\notin S$ is because $f$ is strictly increasing, implying all slopes of secants are positive.)

(iii) False: Define $$f(x)=\begin{cases} \sqrt {1-x^2},& -1\le x \le 1\\\,\,\,\, 0, & |x|>1\end{cases}.$$ The graph of $f$ is the upper half of the unit circle together with the rays $(-\infty,-1]\times \{0\}, [1,\infty)\times \{0\}.$ Thus $f$ is continuous on $\mathbb R.$ Because $f'(x) = -x/\sqrt {1-x^2}$ for $x\in (-1,1),$ we have $f'((-1,1))=\mathbb R.$ Thus $T=\mathbb R.$ Considering slopes of secants through $(-1,0)$ and $(x,\sqrt {1-x^2})$ for $x\in (-1,1],$ we see $S$ contains $[0,\infty).$ Similarly, the secants through $(1,0)$ and $(x,\sqrt {1-x^2})$ for $x\in [-1,1)$ show $S$ contains $(-\infty,0].$ So we have $S = T = \mathbb R,$ while $f$ is not differentiable at $-1$ or $1.$

(iv) True: Let $U=\{(x,y)\in \mathbb R^2: y>x\}.$ Then $U$ is connected (in fact it's convex). Define $F:U\to \mathbb R$ by $$F(x,y) = \frac{f(y)-f(x)}{y-x}.$$ Then $F$ is continuous on $U$ by the continuity of $f.$ Note that $S = F(U).$ Since $U$ is connected, $F(U)=S$ is connected by continuity. Hence $S$ is an interval. Since we are given $0,1\in S$ we must have $[0,1]\subset S.$
How to Derive Population Variance of AR(1) Process
Provided that $|\phi|<1$, you can do backwards substitution to arrive at $$ Y_t=\mu(1+\phi+\phi^2+\cdots)+\epsilon_t+\phi\epsilon_{t-1}+\cdots=\frac{\mu}{1-\phi}+\sum_{j=0}^\infty\phi^j\epsilon_{t-j}. $$ From here, you can compute $$ \text{Var}(Y_t)=E\left(\sum_{j=0}^\infty\phi^j\epsilon_{t-j}\sum_{j=0}^\infty\phi^j\epsilon_{t-j}\right)=\sum_{j=0}^\infty\phi^{2j}E(\epsilon_{t-j}^2)=\frac{\text{Var}(\epsilon)}{1-\phi^2}=\frac{2}{1-\phi^2}. $$ The second equality uses the fact that when the indices don't match, the expectation vanishes. Finally, if your process doesn't start at infinity in the past, you need to define $\text{Var}(Y_0)$ as $\frac{\text{Var}(\epsilon)}{1-\phi^2}$.
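A quick simulation check of the formula (my own sketch, assuming NumPy). I drop the intercept $\mu$ since it does not affect the variance, take $\text{Var}(\epsilon)=2$ as in the last equality above, and pick $\phi=0.7$ purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
phi, var_eps, T = 0.7, 2.0, 500_000

eps = rng.normal(0.0, np.sqrt(var_eps), T)
y = np.empty(T)
y[0] = rng.normal(0.0, np.sqrt(var_eps / (1 - phi**2)))  # stationary start
for t in range(1, T):
    y[t] = phi * y[t - 1] + eps[t]

print(y.var())                    # empirical variance
print(var_eps / (1 - phi**2))     # theoretical: 2 / (1 - 0.49) ~ 3.92
```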
Optimize triangle on ellipse
HINT: Let the third point be $C\left(5\cos t,\dfrac54\sin t\right)$. The equation of $AB$ is $$\dfrac{y-0}{x-5}=\dfrac{0-1}{5-3}\iff x+2y-5=0$$ The length of $AB$, with $A(3,1)$ and $B(5,0)$, is constant, so we need to maximize the perpendicular distance of $C$ from $AB$, which is $$\dfrac{|1\cdot5\cos t+2\cdot\dfrac54\sin t-5|}{\sqrt5}= \dfrac{\sqrt5}2|2\cos t+\sin t-2|$$ Now $2\cos t+\sin t=\sqrt5\cos\left(t-\arccos\dfrac2{\sqrt5}\right)$ and $-1\le\cos\left(t-\arccos\dfrac2{\sqrt5}\right)\le1$ $\iff-\sqrt5-2\le2\cos t+\sin t-2\le\sqrt5-2$. Hence $|2\cos t+\sin t-2|\le\sqrt5+2$, with equality when $\cos\left(t-\arccos\dfrac2{\sqrt5}\right)=-1$.
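Assuming the goal is the triangle of maximal area with $A(3,1)$, $B(5,0)$ fixed and $C$ on the ellipse (as the hint's parametrisation suggests), a brute-force numerical check (my own, assuming NumPy) agrees with the bound above, giving a maximal area of $\frac54(\sqrt5+2)$:

```python
import numpy as np

A, B = np.array([3.0, 1.0]), np.array([5.0, 0.0])
t = np.linspace(0, 2 * np.pi, 100_001)
C = np.stack([5 * np.cos(t), 1.25 * np.sin(t)], axis=1)

# Shoelace formula for the area of triangle ABC.
area = 0.5 * np.abs((B[0] - A[0]) * (C[:, 1] - A[1])
                    - (C[:, 0] - A[0]) * (B[1] - A[1]))
print(area.max())                    # numerical maximum
print(1.25 * (np.sqrt(5) + 2))       # (5/4)(sqrt(5) + 2), from the hint
```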
Partial likelihood in Cox's proportional hazards model
I asked a similar question here. Regarding the equation you posted: this is indeed the probability that a certain individual fails given that there is to be at least one failure from the risk set. This is a conditional probability. Therefore, the numerator is the probability of individual $i$ failing, with covariates $z_i$. The denominator reflects the probability of an individual in the risk set, with covariates $z_l$, failing. However, we sum over the entire risk set in the denominator because these events are mutually exclusive, and added together, represent the probability that there is at least one failure from the risk set.
Multiple examination of a result (probability)
Let $S$ be the event A performed the task correctly, and let $T$ be the event B says she did and C says she did not. We want the conditional probability $\Pr(S|T)$. By definition this is equal to $\Pr(S\cap T)/\Pr(T)$. The calculation of $\Pr(S\cap T)$ is easy. Assuming independence (unlikely, but we are implicitly expected to assume it), we find that it is $(0.7)(0.8)(0.1)$. The event $T$ can happen in $2$ ways: either (i) A was right, B was right, and C was wrong, or (ii) A was wrong, B was wrong, and C was right. We have already calculated the probability of (i). The probability of (ii) is $(0.3)(0.2)(0.9)$. So our conditional probability is $\frac{(0.7)(0.8)(0.1)}{(0.7)(0.8)(0.1)+(0.3)(0.2)(0.9)}.$ Remark: The "simpler" case mentioned in the OP is not done correctly. Again, a conditional probability is needed.
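A quick Monte Carlo check (my own sketch), reading off the reliabilities $0.7$, $0.8$, $0.9$ implicit in the numbers above and assuming independence as in the answer:

```python
import random

random.seed(0)
hits = total = 0
for _ in range(200_000):
    a = random.random() < 0.7                     # A performs the task correctly
    b_says_yes = (random.random() < 0.8) == a     # B reports correctly w.p. 0.8
    c_says_yes = (random.random() < 0.9) == a     # C reports correctly w.p. 0.9
    if b_says_yes and not c_says_yes:             # the observed event T
        total += 1
        hits += a
print(hits / total)                               # ~0.509
print(0.7*0.8*0.1 / (0.7*0.8*0.1 + 0.3*0.2*0.9))  # exact value from the answer
```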
How is the set of condensation points of an uncountable subset of $\Bbb{R}^k$ perfect?
The set of condensation points of $E = (0,1)$ is not $\{0,1\}$, but the entire closed interval $[0,1]$. $x$ is a "condensation point" if every open ball around $x$ contains uncountably many points in $E$--this can be the case whether or not $x$ itself is a member of $E$. Re edit: $\{1\}$ is not a perfect set in $\mathbb{R}^1$. $1$ is not a limit point of $\{1\}$, so the set is not perfect.
Proof that a normal subgroup $N$ of $G$ is the identity coset in the group of cosets of $N$
The correct formulation is the following. Suppose $N$ is a subgroup of the group $G$. Let $g \in G$. Then $g N = N$ if and only if $g \in N$. You should know that for two cosets one has $$ \text{$a N = b N$ if and only if $a^{-1} b \in N$.}\tag{fact}$$ So $N = 1 N = g N$ if and only if $1^{-1} g = g \in N$. To prove (fact), you may start from the relation $R$ on $G$ defined by $a R b$ if and only if $a^{-1} b \in N$. You can prove this is an equivalence relation, and the class of an element $g$ is $g N$.
steps by Euclidean algorithm back tracing
Exactly what are these steps? They are: $$\begin{align} 2689 &= 7*369 &+&106 \\ 369 &= 3*106 &+&51 \\ 106 &= 2*51 &+&4 \\ 51 &= 12*4 &+&3 \\ 4 &=1*3&+&1 \end{align} $$ So, reading these backwards, we have $$\begin{align} 1&=4-1*3 \\ 3 &=51-12*4 \\ 4&=106-2*51\\ 51&=369-3*106\\ 106&=2689-7*369 \end{align} $$ So that, substituting: $1=(106-2*51)-(51-12*4)$; then go on and substitute the next numbers using the above equations to get rid of $4$, then $51$, and finally of $106$, leading to an expression of the form $x*2689-y*369$.
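The back-substitution can also be automated; here is a minimal sketch of the extended Euclidean algorithm (my own code, not part of the answer), which returns the coefficients directly:

```python
def extended_gcd(a, b):
    """Return (g, x, y) with a*x + b*y == g == gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = extended_gcd(b, a % b)
    return g, y, x - (a // b) * y

g, x, y = extended_gcd(2689, 369)
print(g, x, y)              # g = 1, and 2689*x + 369*y == 1
print(2689 * x + 369 * y)   # 1
```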
Prove that $f(x)/x$ is increasing
$(\frac {f(x)} x)'=\frac {xf'(x)-f(x)} {x^{2}}$ so it is enough to show that $xf'(x)-f(x) > 0$. Now $(xf'(x)-f(x))'=xf''(x) > 0$ so $xf'(x)-f(x)$ is increasing and it is enough to observe that $0f'(0)-f(0)>0$.
Generalization of $K$ being a field, any finite subgroup $G$ of $K^\star$ is cyclic.
A version of the decomposition theorem says that a finite abelian group can be decomposed as a direct product $$ G=C(m_1)\times C(m_2)\times\dots\times C(m_k) $$ (where $C(a)$ is the cyclic group of order $a$) with $$ m_1\mid m_2 \mid \dots \mid m_k $$ It is clear that, in this case, $x^{m_k}=1$ for all $x\in G$. In the case $G$ is a subgroup of $K^{\star}$, we cannot have $m_k<|G|=n$, because $x^{m_k}=1$ has at most $m_k$ solutions in $K$. Therefore $m_k=n$ and $G=C(n)$. Note that this cannot be invoked if $G$ is a finite subgroup of the group of units of a ring that is not a domain (or, equivalently, a field). Indeed, this fails for $R=\mathbb{Z}/8\mathbb{Z}$, where the group of units is the Klein $4$-group that's not cyclic.
What is the difference between property of Baire and Second Category in $\mathbb{R}$
Every first category set has the property of Baire, because the empty set is open. When he says "a set of second category having the property of Baire", that means "a set which differs from a nonempty open set by a set of first category."
If $V=U\oplus U'=W\oplus W'$, then is $V=(U\cap W)\oplus (U'+W')$?
Oh wow I guess this is one of those moments where you realise the answer to your own question as soon as you ask it. If $U'$ and $W'$ were orthogonal complements I think it might be true, but they're not, so there's more freedom. A counterexample is $V=\mathbb{R}^2$, $U=\operatorname{span}\{(1,0)\}, W=\operatorname{span}\{(0,1)\}$ and $U'=W'=\operatorname{span}\{(1,1)\}$. Then $U\cap W=\{(0,0)\}$ and $U'+W'=\operatorname{span}\{(1,1)\}$, hence $(U\cap W)+(U'+W')$ is a proper subspace of $V$. What seems like it should be true is that $(U\cap W)\cap(U'+W')=\{0\}$, or at least I haven't thought of a counterexample to that yet.
Probability that a number of selected batteries will last longer than some years
Your result should be $\sum_{k=4}^{10} \binom{10}{k} 0.4^k0.6^{10-k} = \frac{6032416}{9765625} \simeq 0.6177$, according to https://www.wolframalpha.com/input/?i=sum+k%3D4+to+10+nchoosek%2810%2Ck%29%280.4%29%5Ek%280.6%29%5E%2810-k%29. Or, if you think it is easier, it is also $1-\sum_{k=0}^{3} \binom{10}{k} 0.4^k0.6^{10-k} \simeq 0.6177$. Both will require a calculator, so I think they are of the same "difficulty."
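For what it's worth, both expressions are a one-liner in Python (standard library only), which avoids the by-hand arithmetic:

```python
from math import comb

p = sum(comb(10, k) * 0.4**k * 0.6**(10 - k) for k in range(4, 11))
print(p)                                                                # ~0.6177
print(1 - sum(comb(10, k) * 0.4**k * 0.6**(10 - k) for k in range(4)))  # same value
```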
Chain rule and inverse in matrix calculus
I think this follows more quickly from the product rule. The derivative of $t \mapsto X + tY$ is $Y$. You have $$0 = \frac{d}{dt}I = \frac{d}{dt} [(X+tY)^{-1}(X+tY)] = \frac{d}{dt}(X+tY)^{-1} * (X+tY) + (X+tY)^{-1}Y$$ and so $$\frac{d}{dt} (X+tY)^{-1} = -(X+tY)^{-1} Y (X+tY)^{-1}.$$ Multiplying on the left and right by $B^T$ and $A$ won't change much.
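A finite-difference check of the identity $\frac{d}{dt}(X+tY)^{-1}\big|_{t=0}=-X^{-1}YX^{-1}$ (my own sketch, assuming NumPy; the matrices are random and chosen to be safely invertible):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
X = rng.normal(size=(n, n)) + n * np.eye(n)   # comfortably invertible
Y = rng.normal(size=(n, n))
h = 1e-6

inv = np.linalg.inv
numeric = (inv(X + h * Y) - inv(X - h * Y)) / (2 * h)   # central difference at t = 0
analytic = -inv(X) @ Y @ inv(X)
print(np.max(np.abs(numeric - analytic)))               # tiny, e.g. ~1e-9
```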
Finding the original number
We are looking for integers $n_{1}, n_{2}$ between $0$ and $9$ such that: $10n_{2} + n_{1} - (10n_{1} + n_{2}) = 10n_{1} + n_{2} - 1$ and $3n_{1} + 4n_{2} = 10n_{1} + n_{2}$. The first equation reduces to $19n_{1} - 8n_{2} = 1$ and the second equation reduces to $7n_{1} - 3n_{2} = 0$. You can now solve this system of equations $\begin{cases} 19n_{1} - 8n_{2} = 1 \\ 7n_{1} - 3n_{2} = 0 \end{cases} $ to get your answer.
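Since the digits are bounded, the system can also be checked by brute force (a small sketch, using the two reduced equations above):

```python
# n1 = tens digit, n2 = units digit of the original number 10*n1 + n2.
for n1 in range(1, 10):
    for n2 in range(10):
        if 19 * n1 - 8 * n2 == 1 and 7 * n1 - 3 * n2 == 0:
            print(10 * n1 + n2)   # prints 37
```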
Find equation of a plane that passes through point and contains the intersection line of 2 other planes
Put the two given equations together in a system, set $x=0$ (say) and solve for $y,z$ to get an actual point on the intersection line. Accompanied by its direction vector and the outlying point $P$, we can determine three points from the desired plane and subsequently determine an equation for it.

Dostre's edit (see comments): $x=0:\begin{cases} y-2z-3=0 \\ -y+z-2=0 \end{cases}$; add them up and you get $\begin{cases} -z-5=0,\;\;z=-5\\ -y+z-2=0,\;\;y=-7 \end{cases}\Rightarrow$ the point $(0,-7,-5)$, call it $Q$, is on the line of intersection.

So now we have two points $P(-1,4,2)$ and $Q(0,-7,-5)$ on our desired plane and a vector $w$ that is parallel to the desired plane. In order to find the equation of the desired plane we need a vector that is normal to it. We can find that normal vector by taking the cross product of two vectors that are parallel to the desired plane. We already have $w$, so the other vector will be $PQ=\langle 0-(-1),-7-4,-5-2\rangle=\langle 1,-11,-7\rangle$.

Now the normal vector to the desired plane will be the cross product of $PQ$ and $w$: $$PQ \times w=\begin{vmatrix} i & j & k \\ 1 & -11 & -7 \\ 1 & 10 & 6 \end{vmatrix}=i\begin{vmatrix} -11 & -7 \\ 10 & 6 \end{vmatrix}-j\begin{vmatrix} 1 & -7 \\ 1 & 6 \end{vmatrix}+k\begin{vmatrix} 1 & -11\\ 1 & 10 \end{vmatrix}=4i-13j+21k$$ $4i-13j+21k=\langle 4,-13,21\rangle$ is the vector normal to the desired plane.
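A quick NumPy check of the final result (my own, not part of the answer): with $P(-1,4,2)$, $Q(0,-7,-5)$ and $w=\langle1,10,6\rangle$, the normal comes out as $\langle4,-13,21\rangle$, and the resulting plane $4x-13y+21z+14=0$ indeed contains $Q$ and is parallel to $w$:

```python
import numpy as np

P = np.array([-1.0, 4.0, 2.0])
Q = np.array([0.0, -7.0, -5.0])   # point found on the intersection line
w = np.array([1.0, 10.0, 6.0])    # direction of the intersection line

n = np.cross(Q - P, w)            # normal to the desired plane
d = -n @ P
print(n, d)                       # [  4. -13.  21.] 14.0  ->  4x - 13y + 21z + 14 = 0
print(n @ Q + d, n @ w)           # both 0: Q lies on the plane, w is parallel to it
```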
Showing whether an $L^\infty$ function is in $L^2$
For any $f \in L^2(X)$, $\int |fg|\, d\mu = \int|\bar{f}\bar{g}|\, d\mu < \infty$ (using the fact that $\forall f \in L^2(X)$ we have that $f\bar{g} \in L^1(X)$). Hence, $\forall f \in L^2(X)$, $|\int fg\, d\mu| \le \int|fg|\, d\mu < \infty$; in other words, the map $f \mapsto \int fg\, d\mu$ defines a bounded linear functional on $L^2(X)$. Since $L^2(X)^* \cong L^2(X)$, it follows that $g \in L^2(X)$.
explain derivative using dot product
Note that the gradient, $\nabla f$, points in the direction of maximum change of $f$ and is, therefore, normal to level curves. The directional derivative is $\frac{df}{du}=\hat u\cdot \nabla f=|\nabla f|\cos(\theta)$, where $\hat u$ is a unit vector along $\vec u$, and $\theta$ is the angle between $\nabla f$ and $\hat u$. As one example, at the point $Q$, the angle between $\nabla f$ and $\hat u$ appears to be less than $\pi/2$ and hence the directional derivative should be positive.
Find whether roots of an equation are real or not.
Note that the discriminant, $$m^2 + \frac 1{m^2} -1 = \left(m - \frac 1m\right)^2+1>0$$ So, the roots are real.
Estimating or Solving Two-Variable Infinite Sum
$$ S_k: \sum_{k=0}^\infty x^{k}(k+1) $$ $$ S_k: \sum_{k=0}^\infty kx^{k} +\sum_{k=0}^\infty x^{k} $$ $$ \sum_{k=0}^\infty x^{k}=\frac{1}{1-x} $$ $$ \sum_{k=0}^\infty kx^{k} = \frac{x}{(1-x)^2} $$ Adding these, $\sum_{k=0}^\infty (k+1)x^{k}=\frac{x}{(1-x)^2}+\frac{1}{1-x}=\frac{1}{(1-x)^2}$ for $|x|<1$. You can do the same thing for the second case, just with $k+1$; I mean the one below: $$ -\sum_{k=0}^\infty x^{k+1}(k+1) $$
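A quick numerical check of the combined closed form (my own sketch): truncating the series at a modest number of terms already matches $1/(1-x)^2$:

```python
x = 0.3
print(sum((k + 1) * x**k for k in range(200)))   # partial sum of the series
print(1 / (1 - x)**2)                            # closed form, ~2.0408
```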
How to prove (P→¬P)→¬P when ¬ is primitive
The easiest way that I consider here is that $A \rightarrow B$ is true when $A$ is false or $B$ is true. So, $p \rightarrow ¬p$ is equivalent to $¬p \lor ¬p \Leftrightarrow ¬p$. Hence $(p \rightarrow ¬p) \rightarrow ¬p$ is equivalent to $¬p \rightarrow ¬p$, which is always true.
Notation for functional derivative of two variables
This is just $\frac{\delta F_\varepsilon}{\delta\rho}$ with action $F_\varepsilon=\int_0^1Ldx$. It's the usual functional derivative because $\frac{\partial L}{\partial\dot{\rho}}=0$.