Posterior Odds of 99:13.7 Stated As A Probability
If the prior odds ratio is 99:1 against (with a corresponding prior probability given by $(1-x)/x = 99$, i.e. $x = 1\%$ chance that the aunt has the ability), then multiplication by the likelihood ratio results in a posterior odds ratio of $99:13.7 = 7.226$. Odds are ratios of probabilities that add to unity, so $(1-x)/x = 7.226$, or $x = 1/8.226 = 12.16\%$ probability that $p = 0.75$, versus a probability of $87.8\%$ for the alternative hypothesis $p = 0.5$, i.e. of the aunt not having the ability. If, instead, the prior odds ratio were 1:1 (a prior probability of 50%), then after multiplying by the likelihood you would end up with a posterior odds ratio of $1/13.7 = 0.072993$. Solving $(1-x)/x = 0.072993$, we arrive at $x = 93.2\%$ probability that $p = 0.75$ versus the alternative hypothesis $p = 0.5$; note that $100\% - 93.2\% = 6.8\%$. So in the first case a prior probability of 1% that the aunt has the ability is increased to 12.2% by the data likelihood; in the second case, the prior probability of 50% is increased to 93.2%.
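The bookkeeping can be scripted; a minimal Python sketch (only the odds quoted above go in, and the odds-to-probability conversion is the one just stated):

    # odds = (1 - x) / x against the ability  =>  x = 1 / (1 + odds)
    def prob_from_odds_against(odds):
        return 1.0 / (1.0 + odds)

    prior_odds_against = 99.0   # 99:1 against the aunt having the ability
    bayes_factor = 13.7         # the data favor p = 0.75 by 13.7 : 1
    print(prob_from_odds_against(prior_odds_against / bayes_factor))  # ~0.1216
    print(prob_from_odds_against(1.0 / bayes_factor))                 # ~0.9320 (flat 1:1 prior)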
Show that the posterior probability of $H_0$ equals $\Phi\left(\sqrt{n}\frac{\theta_o -\bar{X}_n}{\sigma}\right)$
With the improper prior $\pi(\theta) = 1$, the posterior distribution of $\theta$ (with $\sigma$ fixed) is proportional to the likelihood, which is given by $$\mathcal L(\theta \mid \boldsymbol x) \propto \exp \left( - \frac{1}{2\sigma^2} \sum_{i=1}^n (x_i - \theta)^2 \right).$$ We can ignore all factors that do not depend on $\theta$. Now if we let $\bar x = n^{-1} \sum_{i=1}^n x_i$ be the sample mean, we have $$\begin{align*} \sum_{i=1}^n (x_i - \theta)^2 &= \sum_{i=1}^n (x_i - \bar x + \bar x - \theta)^2 \\ &= \sum_{i=1}^n \left[ (x_i - \bar x)^2 + 2(x_i - \bar x)(\bar x - \theta) + (\bar x - \theta)^2 \right] \\ &= n(\bar x - \theta)^2 + 2(\bar x - \theta) \sum_{i=1}^n (x_i - \bar x) + \sum_{i=1}^n (x_i - \bar x)^2 \\ &= n(\bar x - \theta)^2 + 0 + \sum_{i=1}^n (x_i - \bar x)^2, \end{align*}$$ where the second term is zero because the sample total equals $n$ times the sample mean. Hence $$f(\theta \mid \boldsymbol x) \propto \exp\left( - \frac{n(\bar x - \theta)^2}{2\sigma^2}\right)\exp\left(-\frac{1}{2\sigma^2} \sum_{i=1}^n (x_i - \bar x)^2\right),$$ and as this second exponential factor is independent of $\theta$, it too is just a constant of proportionality with respect to the posterior distribution of $\theta$. We conclude that the posterior density of $\theta$ is normal, with mean $\bar x$ and variance $\sigma^2/n$; thus the posterior probability of $H_0$ is $$\Pr[H_0 \mid \boldsymbol x] = \Pr[\theta \le \theta_0 \mid \boldsymbol x] = \Pr\left[\frac{\theta - \bar x}{\sigma/\sqrt{n}} \le \frac{\theta_0 - \bar x}{\sigma/\sqrt{n}} \mid \boldsymbol x \right] = \Pr\left[Z \le \frac{\theta_0 - \bar x}{\sigma/\sqrt{n}}\right] = \Phi\left(\sqrt{n}\frac{\theta_0 - \bar x}{\sigma}\right).$$
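A Monte-Carlo sanity check of the closed form (a minimal Python sketch; the data, $\sigma$ and $\theta_0$ are arbitrary illustrative choices):

    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(0)
    sigma, theta0 = 2.0, 0.5
    x = rng.normal(0.3, sigma, size=50)    # some data
    n, xbar = len(x), x.mean()

    # posterior is N(xbar, sigma^2 / n); estimate Pr[theta <= theta0 | x]
    draws = rng.normal(xbar, sigma / np.sqrt(n), size=200_000)
    print(np.mean(draws <= theta0))                        # simulation
    print(norm.cdf(np.sqrt(n) * (theta0 - xbar) / sigma))  # closed form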
Construction of parity check matrix for Hamming code
All vectors are assumed to be column vectors. Discard the zero vector $\mathbf 0$ from $\mathbb F_q^r$, leaving a set $\mathcal A_1$ of $q^r-1$ vectors. Pick an $\mathbf x_1$ in $\mathcal A_1 = \mathbb F_q^r-\{\mathbf 0\}$ as the first column of the parity check matrix, and then remove the $q-1$ nonzero multiples of $\mathbf x_1$ from $\mathcal A_1$, leaving a set $\mathcal A_2$ of $(q^r-1)-(q-1)$ vectors for use in the next step. Pick an $\mathbf x_2$ in $\mathcal A_2$ as the second column of the parity check matrix, and then remove the $q-1$ nonzero multiples of $\mathbf x_2$ from $\mathcal A_2$, leaving a set $\mathcal A_3$ of $(q^r-1)-2(q-1)$ vectors for use in the next step. Lather, rinse, and repeat the basic step: choose one vector from $\mathcal A_i$ as the $i$-th column of the parity check matrix, and remove its $q-1$ nonzero multiples from $\mathcal A_i$ to leave the set $\mathcal A_{i+1}$ for future use. Since we are discarding $q-1$ vectors at each step, the process can continue until we have discarded all $$q^r-1 = (q^{r-1} + q^{r-2} + \cdots + q^2 + q + 1)(q-1)$$ vectors in $$n = q^{r-1} + q^{r-2} + \cdots + q^2 + q + 1 = \frac{q^r-1}{q-1} ~\text{steps}.$$ We have thus created a parity-check matrix with $r$ rows and $n=\frac{q^r-1}{q-1}$ nonzero columns, which defines a $\left[\frac{q^r-1}{q-1}, \frac{q^r-1}{q-1}-r\right]$ code over $\mathbb F_q$. By construction, this code has the property that no nonzero linear combination $a\mathbf x_i+b\mathbf x_j$ of two columns can equal $\mathbf 0$, and so the minimum Hamming weight of the code is at least $3$. I will leave it to you to prove that the minimum distance is in fact exactly equal to $3$ and that the code we have thus constructed is indeed a (single-error-correcting) Hamming code over $\mathbb F_q$.
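A minimal Python sketch of the construction for prime $q$ (so that arithmetic mod $q$ gives the field); rather than deleting multiples step by step, it keeps one canonical representative from each class of nonzero scalar multiples, namely the vector whose first nonzero entry is $1$:

    from itertools import product

    def hamming_parity_check(q, r):
        # one column per class of nonzero scalar multiples: keep the
        # vectors whose leading (first nonzero) entry equals 1
        cols = []
        for v in product(range(q), repeat=r):
            lead = next((c for c in v if c != 0), None)
            if lead == 1:
                cols.append(v)
        return [[col[i] for col in cols] for i in range(r)]  # r rows, n columns

    H = hamming_parity_check(2, 3)
    print(len(H[0]))       # n = (2^3 - 1)/(2 - 1) = 7 columns
    for row in H:
        print(row)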
Show that $-\frac{2yx^3}{(x^2+y^2)^2}$ is bounded.
By AM-GM, $$ |yx^3| = 27\left|y\cdot\frac{x}{3}\cdot\frac{x}{3}\cdot\frac{x}{3}\right| \leq 27\left(\frac{|x|+|y|}{4}\right)^4 = \frac{27}{256}(|x|+|y|)^4$$ while by AM-QM: $$ \frac{x^2+y^2}{2} \geq \left(\frac{|x|+|y|}{2}\right)^2 $$ hence: $$\left|\frac{2yx^3}{(x^2+y^2)^2}\right|\leq \frac{\frac{27}{128}}{\frac{1}{4}}=\frac{27}{32}.$$ Another simple bound (thanks to Git Gud) is: $$\left|\frac{2yx^3}{(x^2+y^2)^2}\right|\leq \left|\frac{2xy}{x^2+y^2}\right|\cdot \left|\frac{x^2}{x^2+y^2}\right|\leq 1,$$ but the sharpest possible bound is given by $\frac{3\sqrt{3}}{8}$, as shown in the comments.
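The sharp constant is easy to confirm numerically: the expression is homogeneous of degree $0$, so it suffices to scan the unit circle, where it equals $2\sin t\cos^3 t$. A throwaway Python sketch:

    import numpy as np

    t = np.linspace(0, 2 * np.pi, 1_000_001)
    print(np.abs(2 * np.sin(t) * np.cos(t) ** 3).max())  # ~0.6495
    print(3 * np.sqrt(3) / 8)                            # 0.6495...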
How can I tell if a function has an inverse if it involves inverse trig functions?
Trying algebraic methods seems quite difficult in this case. We can instead check whether the function is monotonic; since $$ f'(x)=3x^2+\frac{1}{1+(x+1)^2}>0 \quad\text{for all }x, $$ you should easily be able to conclude. If $f$ is strictly increasing, $x<y$ implies $f(x)<f(y)$, so in particular $x\ne y$ implies $f(x)\ne f(y)$; that is, $f$ is injective and hence invertible onto its range.
If $G$ is a tree with $k$ leaves, then $G$ is the union of $k/2$ pairwise intersecting paths.
You can run into trouble when you merge paths. Suppose the leaves are $w_1, w_2, \dots, w_k$, so you start with $k$ paths: $(v, \dots, w_i)$ for $i = 1,2, \dots, k$. Merging these paths can be problematic for two reasons:

- It may be the case that if we try to merge $(v, \dots, w_1)$ and $(v, \dots, w_2)$ into one path that goes $(w_1, \dots, v, \dots, w_2)$, we don't actually get a simple path as a result. This happens whenever we merge two paths that take the same edge out of $v$, and is sometimes guaranteed to happen no matter how we pair up the paths to merge. (For example, if the same edge out of $v$ leads to more than half the leaves.)

- We can fix this by "simplifying" the paths after we merge them, deleting redundant edges. But then you can't be sure that the paths remain pairwise intersecting: originally, your only guarantee of that was that they all pass through $v$, and after simplifying, the paths might no longer pass through $v$.

It might help you understand the original proof better to realize that if the leaves of a tree $T$ are $w_1, w_2, \dots, w_k$, we can specify a collection of $\left\lceil \frac k2\right\rceil$ paths that cover $T$ by pairing up the leaves and connecting each pair by a path. (If $k$ is odd, one leaf gets used twice.) There is no further freedom in choosing the paths, and each leaf must be the endpoint of a path or else it remains uncovered. So all that remains is to choose how to pair the leaves. And the claim that drives the proof is this: if the paths $(w_1, \dots, w_2)$ and $(w_3, \dots, w_4)$ are disjoint, then the paths $(w_1, \dots, w_3)$ and $(w_2, \dots, w_4)$ are not disjoint and have a greater total length.
Induction proof for $3^{n}+1 \mid 3^{3n}+1$
I agree with you that induction is unnecessary here. However, if you want to construct this in the form of an induction proof, it follows very similar lines. The base case is easy, as you say. Now suppose that the result holds for $k$. Then for $k + 1$, we wish to prove that $$ 3^{k+1} + 1 \mid 3^{3(k+1)} + 1$$ Using the identity $$ x^3 + y^3 = (x + y)(x^2 - xy + y^2)$$ we can write $$ 3^{3(k+1)} + 1 = (3^{k+1} + 1)(3^{2(k+1)} - 3^{k+1} + 1)$$ Therefore, we see that $3^{k+1} + 1 \mid 3^{3(k+1)} + 1$, as desired, proving our inductive step and our result. It is true that the inductive step relies in no way on any of the previous steps, making induction unnecessary, so I would stick with your method, if possible. I hope this helps!
Proof of $\vdash P \rightarrow (Q\lor P)$
Your derivation is absolutely correct! There is no need for EFQ, the rules $\lor_I$ ($\lor$-introduction) and $\to_I$ ($\to$-introduction) that you have used in your derivation are sufficient to prove $P \to (Q \lor P)$. Remark: This means that $P \to (Q \lor P)$ is provable not only in classical and intuitionistic logic, but also in minimal logic.
Use Euler's method with step size $10^{-n}$ to estimate $x(1)$, where $x$ is the solution of the initial-value problem $x'=-x$, $x(0)=1$
Euler's method reads: $$ \begin{align*} x_0 &= x(0), \\ x_k &= x_{k-1}+hf(x_{k-1}), \end{align*}$$ where here $f(x)=-x$. With step size $h = 10^{-n}$ you need $10^n$ steps to go from $t=0$ to $t=1$, so $x_{10^n} \approx x(1)$ (the exact value is $x(1)=e^{-1}$, since the solution is $x(t)=e^{-t}$).
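A minimal Python sketch of the iteration for this problem, compared against the exact value $e^{-1}$:

    import math

    def euler(n):
        h, x = 10.0 ** (-n), 1.0
        for _ in range(10 ** n):
            x = x + h * (-x)        # f(x) = -x
        return x

    for n in (1, 2, 3, 4):
        print(n, euler(n))
    print("exact:", math.exp(-1))   # 0.36787944...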
How to average the function below with a Gaussian distribution
Rewrite $f$ in the following way: $$f(x) = \frac{J^2+2x^2+2x^2\cos[\sqrt{J^2+4x^2}]}{J^2+4x^2} = 1 + \frac{2x^2(\cos[\sqrt{J^2+4x^2}]-1)}{J^2+4x^2}$$ The integral now becomes: $$1 + \frac{1}{\sqrt{2\pi\sigma^2}}\int_{-\infty}^{\infty} \frac{2x^2(\cos[\sqrt{J^2+4x^2}]-1)}{J^2+4x^2} e^{-\frac{x^2}{2\sigma^2}} dx$$ since the integral of the full Gaussian is $1$. Now focusing on the integral left over, rewrite it as a series. $$ = \sum_{n=1}^\infty \frac{(-1)^n}{(2n)!}\frac{1}{\sqrt{2\pi\sigma^2}}\int_{-\infty}^{\infty} 2x^2(J^2+4x^2)^{n-1} e^{-\frac{x^2}{2\sigma^2}} dx$$ $$ = \sum_{n=1}^\infty \sum_{k=0}^{n-1}\frac{(-1)^n}{(2n)!}\frac{1}{2\sqrt{2\pi\sigma^2}}{{n-1}\choose k}\int_{-\infty}^{\infty}J^{2n-2k-2}4^{k+1}x^{2k+2} e^{-\frac{x^2}{2\sigma^2}} dx$$ $$ = \sum_{n=1}^\infty \sum_{k=0}^{n-1}\frac{(-1)^n}{(2n)!}{{n-1}\choose k}J^{2n-2k-2}2^{2k+1}\sigma^{2k+2}(2k+1)!!$$ by the moment formulas for the normal distribution. Then swapping the order of the summations: $$= 2\sum_{k=0}^\infty \sum_{n=k+1}^\infty\frac{(-1)^n}{(2n)!}{{n-1}\choose k}J^{2n-2k-2}\sigma^{2k+2}\frac{(2k+1)!}{k!}$$ $$= \sum_{k=0}^\infty \frac{(-1)^{k+1}{}_1F_2\left(k+1;k+\frac{3}{2},k+2;-\frac{J^2}{4}\right)}{(k+1)!}\sigma^{2k+2}$$ which gives the final answer (after adding back the leftover term from earlier) $$= \sum_{k=0}^\infty {}_1F_2\left(k;k+\frac{1}{2},k+1;-\frac{J^2}{4}\right)\frac{(-\sigma^2)^k}{k!}$$ which is as far as I could get. $\mathbf{\text{EDIT}}$: Suppose $\frac{J^2}{4} \ll 1$. Then the hypergeometric goes to $1$ and the summation becomes approximately $e^{-\sigma^2}$. $\mathbf{\text{EDIT}}$: Alternatively, Wolfram tells me I could have done the unswapped summation, leading to the following alternate answer: $$ 1 + \frac{J}{4\sigma}\sum_{n=1}^\infty U\left(\frac{3}{2},n+\frac{3}{2},\frac{J^2}{4\sigma^2}\right)\frac{(-J^2)^n}{(2n)!}$$
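Whatever closed form one derives, it can be sanity-checked by direct quadrature; a minimal Python sketch (the values of $J$ and $\sigma$ are arbitrary illustrative choices):

    import numpy as np
    from scipy.integrate import quad

    def f(x, J):
        return (J**2 + 2*x**2 + 2*x**2 * np.cos(np.sqrt(J**2 + 4*x**2))) / (J**2 + 4*x**2)

    def gaussian_average(J, sigma):
        w = lambda x: f(x, J) * np.exp(-x**2 / (2*sigma**2)) / np.sqrt(2*np.pi*sigma**2)
        return quad(w, -np.inf, np.inf)[0]

    print(gaussian_average(0.1, 0.8))   # compare against any of the series above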
Using Binomial Distribution for analysis
One approximation which works reasonably well for large $n$ and reasonable $k$ is a Gaussian approximation with a continuity correction, in effect an application of the Central Limit Theorem. This gives reasonable absolute errors, but poor relative errors in the tails (Chernoff bounds work better in the tails). This would suggest $$\displaystyle \sum_{t=k}^n {{n}\choose{t}} p^t (1-p)^{(n-t)} \approx \Phi\left(\dfrac{np-k+\tfrac12}{\sqrt{np(1-p)}} \right)$$ where $\Phi(x)$ is the cumulative distribution function of a standard normal distribution. As an example, let $p=\frac23$, $n=50$ and $k=35$. Using R, the exact sum is

    > sum( choose(50, 35:50) * (2/3)^(35:50) * (1/3)^(15:0) )
    [1] 0.3689669

while the approximation would give

    > pnorm( (50*2/3 - 35 + 1/2) / sqrt(50*2/3*(1-2/3)) )
    [1] 0.3631693

though if you have R you can just use

    pbinom(35-1, size=50, prob=2/3, lower.tail = FALSE)

as a clearer way of getting to the exact result.
Convert string to another string over a smaller alphabet, and vice versa.
I'm not sure if this counts, but it's an idea you might be interested in. In real world applications, you can identify a string in the $\Omega$ alphabet that would never ever ever arise in a real message. Neither in the $\Omega$ alphabet, nor in the $\Sigma$ alphabet after the simple embedding from $\Omega$ to $\Sigma$. Then you can use that string to represent one letter from $\Sigma$. Let's say $\omega$ is such a string, something like "fhqwhgads" in English, although you could get away with something shorter. For notation, let $$\Sigma=\{s_0,s_1,\ldots, s_{n-1}\}\qquad\Omega=\{w_1,w_2,\ldots,w_{n-1}\}$$ Then map $$\DeclareMathOperator{\strings}{strings}f:\strings(\Sigma)\to\strings(\Omega)\quad f(s_i)=w_i\text{ when }i>0\quad f(s_0)=\omega$$ The inverse map would first examine a string $\beta$ for instances of $\omega$ and convert them to $s_0$, and then convert remaining $w_i$ to $s_i$. This relies on you knowing that $\omega$ would never practically arise in a real message from either alphabet.
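A toy Python sketch of the idea (the alphabets, the sentinel and the message are all made-up illustrative choices; the scheme relies on the stated assumption that the sentinel never occurs in a real message):

    SIGMA = ["#", "a", "b", "c"]   # s0 = "#", then s1..s3
    OMEGA = ["a", "b", "c"]        # w1..w3
    SENTINEL = "abcba"             # the string omega over Omega, assumed never to arise

    def f(s):                      # strings(Sigma) -> strings(Omega)
        return "".join(SENTINEL if ch == "#" else ch for ch in s)

    def f_inv(t):                  # first restore s0, then the remaining letters map back
        return t.replace(SENTINEL, "#")

    msg = "ab#c"
    print(f(msg))                  # ababcbac
    print(f_inv(f(msg)) == msg)    # True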
When does the stationary distribution exist for a Markov chain?
There is always a stationary distribution for any finite state (time-homogeneous) Markov chain. We normally assume irreducibility to ensure uniqueness, not existence. See Finite State Markov Chain Stationary Distribution.
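For a concrete finite chain, a stationary distribution can be read off as a left eigenvector for eigenvalue $1$; a minimal numpy sketch (the transition matrix is an arbitrary illustrative choice):

    import numpy as np

    P = np.array([[0.9, 0.1, 0.0],
                  [0.2, 0.7, 0.1],
                  [0.0, 0.3, 0.7]])

    vals, vecs = np.linalg.eig(P.T)               # left eigenvectors of P
    pi = np.real(vecs[:, np.argmin(np.abs(vals - 1))])
    pi = pi / pi.sum()                            # normalize to a probability vector
    print(pi, pi @ P)                             # pi and pi P agree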
Is there a good way to compute Christoffel Symbols
I'd like to expand on Hans Lundmark's answer because the question keeps recurring. Let $q^1,\ldots,q^n$ denote the generalized coordinates. Introduce additional velocity variables $\dot{q}^1,\ldots,\dot{q}^n$. If the dot accent is already reserved for other things in your theory, use another accent to avoid confusion. First, treat all $q^i$ and $\dot{q}^i$ as pairwise independent variables and define, using Einstein summation convention, $$L(q^1,\ldots,q^n;\dot{q}^1,\ldots,\dot{q}^n) = \frac{1}{2} g_{ij}(q^1,\ldots,q^n)\,\dot{q}^i \dot{q}^j\tag{1}$$ To avoid confusion later on, I have explicitly indicated the formal dependencies of the expressions for $g_{ij}$ and $L$. Note that $L$ can immediately be written down when given the first fundamental form. Now consider a twice differentiable curve, which makes the $q^i$ functions of some independent new parameter $\tau$, and set $\dot{q}^i = \frac{\mathrm{d}q^i}{\mathrm{d}\tau}$. Proposition. On the curve parameterized with $\tau$ we have $$g^{kh}\left(\frac{\mathrm{d}}{\mathrm{d}\tau} \left(\frac{\partial L}{\partial\dot{q}^h}\right) - \frac{\partial L}{\partial q^h}\right) = \ddot{q}^k + \Gamma^k_{\ ij} \dot{q}^i \dot{q}^j \tag{2}$$ You might recognize the right-hand side as the expression that the geodesic equation sets to zero, and you might recognize the expression wrapped around $L$ in the left-hand side as the form of Euler-Lagrange differential equations, which correspond to variational problems with the Lagrangian $L$. Therefore $(2)$ hints at a variational foundation for the geodesics equation. Such interpretations make $(2)$ easier to memorize or cross-link with other knowledge, but all we actually need is the identity $(2)$. The following observations should provide enough details to prove $(2)$. We can rewrite $(2)$ to $$\frac{\mathrm{d}}{\mathrm{d}\tau} \left(\frac{\partial L}{\partial\dot{q}^h}\right) - \frac{\partial L}{\partial q^h} = g_{hk} \ddot{q}^k + \Gamma_{hij} \dot{q}^i \dot{q}^j \tag{3}$$ The idea now is, for every $h\in\{1,\ldots,n\}$, to take the left-hand side of $(3)$, plug in expressions for the metric in $L$, and rewrite the thing so that it matches the format of the right-hand side, where dotted variables occur only in the places shown. Then you can read off the Christoffel symbols of the first kind, $\Gamma_{h**}$, from the coefficients of the velocity products. To obtain $(2)$ and $\Gamma^k_{\ **}$ is then just a matter of multiplication with the inverse metric coefficients matrix $((g^{kh}))$, or equivalently, taking the right-hand side expressions of $(3)$ obtained for $h\in\{1,\ldots,n\}$ and then, for each $k\in\{1,\ldots,n\}$, finding a linear combination whose only second derivative with respect to $\tau$ is $\ddot{q}^k$, with coefficient $1$. This is the form given by $(2)$, so the coefficients of the velocity products are then the Christoffel symbols of the second kind, $\Gamma^k_{\ **}$. Remember, while doing the partial derivatives of $L$, treat the $q^i$ and the $\dot{q}^i$ as independent formal variables.
You will do that with more concrete symbol meanings and metric expressions, but in this moderately abstract setting, you can already refine $$\begin{align} \frac{\partial L}{\partial\dot{q}^h} &= g_{hj}\dot{q}^j\tag{4} \\\frac{\partial L}{\partial q^h} &= \frac{1}{2}\frac{\partial g_{ij}}{\partial q^h}\dot{q}^i \dot{q}^j\tag{5} \end{align}$$ However, when doing the $\frac{\mathrm{d}}{\mathrm{d}\tau}$ outside of $L$, stick to the curve and apply the chain rule accordingly: $$\frac{\mathrm{d}g_{hj}}{\mathrm{d}\tau} = \frac{\partial g_{hj}}{\partial q^i}\,\dot{q}^i\tag{6}$$ Now $(4)$, $(5)$, $(6)$ and the Levi-Civita formula $$\Gamma_{hij} = \frac{1}{2}\left( \frac{\partial g_{hj}}{\partial q^i} + \frac{\partial g_{ih}}{\partial q^j} - \frac{\partial g_{ij}}{\partial q^h} \right)$$ can be used to prove $(3)$ and thereby $(2)$. But I will focus on how to apply that proposition. Example: Spherical coordinates with radius $r$, longitude $\phi$, latitude $\theta$, with $\theta=\frac{\pi}{2}$ at the equator. At index positions, I will write coordinate names instead of digits. The first fundamental form is $$\mathrm{d}s^2 = \mathrm{d}r^2 + (r^2\sin^2\theta)\,\mathrm{d}\phi^2 + r^2\,\mathrm{d}\theta^2$$ Accordingly, the Lagrangian $L$ is $$L = \frac{1}{2}\left(\dot{r}^2 + (r^2\sin^2\theta)\,\dot{\phi}^2 + r^2\,\dot{\theta}^2\right)$$ We now treat $r,\phi,\theta,\dot{r},\dot{\phi},\dot{\theta}$ as independent variables and get $$\begin{align} \frac{\partial L}{\partial\dot{r}} &= \dot{r} &\frac{\partial L}{\partial r} &= (r\sin^2\theta)\,\dot{\phi}^2 + r\,\dot{\theta}^2 \\\frac{\partial L}{\partial\dot{\phi}} &= (r^2\sin^2\theta)\,\dot{\phi} &\frac{\partial L}{\partial\phi} &= 0 \\\frac{\partial L}{\partial\dot{\theta}} &= r^2\,\dot{\theta} &\frac{\partial L}{\partial\theta} &= (r^2\sin\theta\cos\theta)\,\dot{\phi}^2 \end{align}$$ Now we give up the independence, consider some curve parameterized by $\tau$ and obtain $$\begin{align} \frac{\mathrm{d}}{\mathrm{d}\tau} \frac{\partial L}{\partial\dot{r}} &= \ddot{r} \\\frac{\mathrm{d}}{\mathrm{d}\tau} \frac{\partial L}{\partial\dot{\phi}} &= (r^2\sin^2\theta)\,\ddot{\phi} + 2\,(r\sin^2\theta)\,\dot{r}\,\dot{\phi} + 2\,(r^2\sin\theta\cos\theta)\,\dot{\phi}\,\dot{\theta} \\\frac{\mathrm{d}}{\mathrm{d}\tau} \frac{\partial L}{\partial\dot{\theta}} &= r^2\,\ddot{\theta} + 2\,r\,\dot{r}\,\dot{\theta} \end{align}$$ And so $$\begin{align} \frac{\mathrm{d}}{\mathrm{d}\tau} \left(\frac{\partial L}{\partial\dot{r}}\right) - \frac{\partial L}{\partial r} &= \underbrace{1}_{g_{rr}}\,\ddot{r} + \underbrace{(-r\sin^2\theta)}_{\Gamma_{r\phi\phi}}\,\dot{\phi}^2 + \underbrace{(-r)}_{\Gamma_{r\theta\theta}}\,\dot{\theta}^2 \\\frac{\mathrm{d}}{\mathrm{d}\tau} \left(\frac{\partial L}{\partial\dot{\phi}}\right) - \frac{\partial L}{\partial\phi} &= \underbrace{(r^2\sin^2\theta)}_{g_{\phi\phi}}\,\ddot{\phi} + 2\,\underbrace{(r\sin^2\theta)}_{\Gamma_{\phi r\phi} = \Gamma_{\phi\phi r}}\,\dot{r}\,\dot{\phi} + 2\,\underbrace{(r^2\sin\theta\cos\theta)}_{\Gamma_{\phi\phi\theta} = \Gamma_{\phi\theta\phi}}\,\dot{\phi}\,\dot{\theta} \\\frac{\mathrm{d}}{\mathrm{d}\tau} \left(\frac{\partial L}{\partial\dot{\theta}}\right) - \frac{\partial L}{\partial\theta} &= \underbrace{r^2}_{g_{\theta\theta}}\,\ddot{\theta} + 2\,\underbrace{r}_{\Gamma_{\theta r\theta} = \Gamma_{\theta\theta r}}\,\dot{r}\,\dot{\theta} + \underbrace{(-r^2\sin\theta\cos\theta)}_{\Gamma_{\theta\phi\phi}} \,\dot{\phi}^2 \end{align}$$ All other Christoffel symbols of the first kind are zero.
If we had a non-diagonal metric, some right-hand side expressions would have several second derivatives, each accompanied by a corresponding metric coefficient. To obtain the Christoffel symbols of the second kind, find linear combinations of the above right-hand side expressions that leave only one second derivative, with coefficient $1$. Here this is easy because the metric is already in diagonal form. Therefore $$\begin{align} g^{rh}\left(\frac{\mathrm{d}}{\mathrm{d}\tau} \left(\frac{\partial L}{\partial\dot{q}^h}\right) - \frac{\partial L}{\partial q^h}\right) &= \ddot{r} + \underbrace{(-r\sin^2\theta)}_{\Gamma^r_{\ \phi\phi}}\,\dot{\phi}^2 + \underbrace{(-r)}_{\Gamma^r_{\ \theta\theta}}\,\dot{\theta}^2 \\g^{\phi h}\left(\frac{\mathrm{d}}{\mathrm{d}\tau} \left(\frac{\partial L}{\partial\dot{q}^h}\right) - \frac{\partial L}{\partial q^h}\right) &= \ddot{\phi} + 2\,\underbrace{\left(\frac{1}{r}\right)}_{\Gamma^\phi_{\ r\phi} = \Gamma^\phi_{\ \phi r}}\,\dot{r}\,\dot{\phi} + 2\,\underbrace{(\cot\theta)}_{\Gamma^\phi_{\ \phi\theta} = \Gamma^\phi_{\ \theta\phi}}\,\dot{\phi}\,\dot{\theta} \\g^{\theta h}\left(\frac{\mathrm{d}}{\mathrm{d}\tau} \left(\frac{\partial L}{\partial\dot{q}^h}\right) - \frac{\partial L}{\partial q^h}\right) &= \ddot{\theta} + 2\,\underbrace{\left(\frac{1}{r}\right)}_{\Gamma^\theta_{\ r\theta} = \Gamma^\theta_{\ \theta r}}\,\dot{r}\,\dot{\theta} + \underbrace{(-\sin\theta\cos\theta)}_{\Gamma^\theta_{\ \phi\phi}} \, \dot{\phi}^2 \end{align}$$ All other Christoffel symbols of the second kind are zero.
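The worked example can be cross-checked mechanically from the Levi-Civita formula; a short sympy sketch (index order $r,\phi,\theta$):

    import sympy as sp

    r, phi, theta = sp.symbols('r phi theta', positive=True)
    q = [r, phi, theta]
    g = sp.diag(1, r**2 * sp.sin(theta)**2, r**2)   # metric of the example
    ginv = g.inv()

    def Gamma(k, i, j):  # Christoffel symbols of the second kind
        return sp.simplify(sum(ginv[k, l] * (sp.diff(g[l, j], q[i])
                                             + sp.diff(g[l, i], q[j])
                                             - sp.diff(g[i, j], q[l])) / 2
                               for l in range(3)))

    print(Gamma(0, 1, 1))   # -r*sin(theta)**2       = Gamma^r_phiphi
    print(Gamma(1, 0, 1))   # 1/r                    = Gamma^phi_rphi
    print(Gamma(2, 1, 1))   # -sin(theta)*cos(theta) = Gamma^theta_phiphi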
If $p_1 = 0.3$ and $p_2 = 0.4$, what is the probability that it will take Jay more than 12 hours to be successful on both jobs?
To have a clear answer we need the strategy that Jay uses in choosing a job to try. Let's say he tries job $1$ until he succeeds, then tries job $2$. The chance he never gets job $1$ done is $0.7^{12}$. The chance that he fails job $1$ the first $i$ times, succeeds on try $i+1$, and never gets job $2$ done in the remaining $11-i$ hours is $0.7^i\cdot 0.3 \cdot 0.6^{11-i}$. The total chance of failure is then $$0.7^{12}+\sum_{i=0}^{11}0.7^i\cdot 0.3 \cdot 0.6^{11-i}$$
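Evaluating this numerically, a one-line Python check:

    p_fail = 0.7**12 + sum(0.7**i * 0.3 * 0.6**(11 - i) for i in range(12))
    print(p_fail, 1 - p_fail)   # chance of failure, chance of succeeding on both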
Euler's identity to find an integrating factor for a homogeneous 1-form
(From Serret J.A. Cours de Calcul Differentiel Et Integral, Volume 1.) Consider the homogeneous 1-form: $$ M(x,y)dx+N(x,y)dy=0 $$ Since $M(x,y)$ and $N(x,y)$ are both homogeneous functions of the same degree, we can find a one-variable function $f$ such that: $$ \frac{M(x,y)}{N(x,y)}=f\left(\frac{y}{x}\right), $$ so our original form becomes: $$ N(x,y)\bigg(\frac{M(x,y)}{N(x,y)}dx+dy\bigg)=0 $$ $$ N(x,y)\bigg(f\left(\frac{y}{x}\right)dx+dy\bigg)=0 $$ We know that a homogeneous differential 1-form can be turned into a separable differential equation using the change of variables $$z=\frac{y}{x},$$ and this gives us: $$ N(x,zx)\bigg(f(z)dx+d(zx)\bigg)=0 $$ then $$ N(x,zx)\bigg(f(z)dx+xdz+zdx\bigg)=0 $$ $$ N(x,zx)\bigg(\left(f(z)+z\right)dx+xdz\bigg)=0 $$ $$ xN(x,zx)\big(f(z)+z\big)\left(\frac{dx}{x}+\frac{dz}{f(z)+z}\right)=0$$ From here we see that multiplying the last equation by $\displaystyle{\frac{1}{xN(x,zx)\big(f(z)+z\big)}=\frac{1}{xM(x,y)+yN(x,y)}}$ gives us the following equation: $$ \frac{dx}{x}+\frac{dz}{f(z)+z}=0 $$ which is exact because: $$ \frac{\partial}{\partial z}\left(\frac{1}{x}\right)=0=\frac{\partial}{\partial x}\left(\frac{1}{f(z)+z}\right)$$ Remark: $f(z)+z\neq 0$ because $\displaystyle{\frac{\partial M}{\partial y}\neq\frac{\partial N}{\partial x}}.$
Are the graphs in the picture isomorphic?
Let $G=(V,E)$ and $G'=(V',E')$ be two graphs. They are isomorphic if there is a bijection $\varphi: V \to V'$ such that $(v_x, v_y) \in E \iff (\varphi(v_x), \varphi(v_y)) \in E'$. This means that, if we can find a bijective vertex mapping between $V(G)$ and $V(G')$ such that adjacency is preserved, then the given graphs are isomorphic. Can you go on from here for the provided graphs? Edit: The number of vertices, the number of edges, the degree sequence, and many other properties must be the same in $G$ and $G'$ in order for them to be isomorphic. These are necessary conditions. Take the number of vertices and edges: this follows directly from the defined isomorphism map. But these conditions are not sufficient, meaning you can have graphs having the same number of vertices and edges without them being isomorphic. We need a so-called certificate in order to prove isomorphism (have a look at canonical labelling). However, the complexity of the graph isomorphism problem is high (actually a precise classification is still missing, but we do not have efficient algorithms for general graphs). Meaning, finding certificates such as canonical labellings is not faster for a small number of graphs than finding the isomorphism map directly. Since your provided problem is very easy, you should construct the isomorphism map directly. For example: let $\varphi(v_5)=u_5$. Define the other mappings such that the definition provided is satisfied. (And of course, if some necessary condition fails, you can already state that the two graphs are not isomorphic.)
Is Ornstein-Uhlenbeck process differentiable?
It isn't -- the OU process is as rough as standard Brownian motion. In fact, the defining SDE of the OU process shows that the variance of the difference quotient at any given $t$ diverges as $\Delta t \to 0$.
Clearest definition of a limit
It is just the usual definition of a finite limit $l$ as $x$ tends to a finite cluster point $a$. Note that, as an alternative, some authors additionally require $x\neq a$; in that case it suffices to ask that $\lvert x-a\rvert<\delta$ (instead of $0<\lvert x-a\rvert<\delta$).
A subgroup of $\langle\mathbb{Z}, +\rangle$ containing two relatively prime integers
Yes. It also falls under the case of $\left< n\mathbb{Z} , + \right>$, when $n=1$.
Find if relation is reflexive, symmetric or transitive
Write $f\sim g$ if $(f,g)\in R$. For reflexivity of $R$, you need to check whether $f\sim f$ for all $f\in F$. For symmetry of $R$, you need to check whether, for all $f,g\in F$, it holds that $f\sim g$ implies $g\sim f$. For transitivity of $R$, you need to check whether $f\sim g$ and $g\sim h$ implies $f\sim h$ for arbitrary $f,g,h\in F$. Since this looks like home work, I will refrain from posting the concrete answer.
How to prove that the number of ordered $n$-tuples $(\varepsilon_{1},\cdots,\varepsilon_{n})$ satisfying the following inequality is at least $2^{n-100}$
Partition $\{1,\dots,n\}$ into thirty sets $I_1,\dots,I_{30}$ such that $\sum_{i\in I_j}|z_i|^2<\tfrac1{30}+\tfrac1{100}<\tfrac4{90}$ for each $1\leq j\leq 30.$ This can be achieved by starting with empty sets and adding greedily: the only impediment to adding a value to some $I_j$ is that $\sum_{i\in I_j}|z_i|^2\geq \tfrac1{30},$ but if that holds for all the $j$ then all the values must already be assigned. Assume these lemmas for now: Lemma 1. For any positive integer $N$ and any values $z_1,\dots,z_N$ with $\sum_{i=1}^N |z_i|^2\leq 4/90,$ there exist at least $2^N/10$ tuples $(\epsilon_1,\dots,\epsilon_N)\in\{-1,1\}^{N}$ such that $|\sum_{i=1}^{N}\epsilon_iz_i|^2\leq 5/90$ and $\epsilon_1=1.$ Lemma 2. Given values $z_1,\dots,z_N$ such that $|z_i|^2\leq 5/90$ for $1\leq i\leq N,$ there exist $\epsilon_1,\dots,\epsilon_N\in\{-1,1\}$ such that $|\sum_{i=1}^N\epsilon_iz_i|\leq 1/3.$ (Actually we only need the case $N=30.$) Combining the tuples given by applying Lemma 1 to each $I_j,$ there are at least $2^n10^{-30}=2^n1000^{-10}>2^n1024^{-10}=2^{n-100}$ tuples $(\epsilon_1,\dots,\epsilon_n)\in\{-1,1\}^n$ such that $|\sum_{i\in I_j} \epsilon_iz_i|^2\leq 5/90$ and $\epsilon_{\min(I_j)}=1$ for each $j.$ For each of these tuples, by Lemma 2, we can find $(\theta_1,\dots,\theta_{30})\in\{-1,1\}^{30}$ such that $|\sum_{j=1}^{30}\theta_j(\sum_{i\in I_j} \epsilon_iz_i)|\leq 1/3.$ This gives a new tuple defined by $\epsilon'_i=\epsilon_i\theta_j$ for each $i\in I_j$ and $1\leq j\leq 30.$ The resulting $2^{n-100}$ tuples $\epsilon'$ are distinct and satisfy $|\sum_{i=1}^{n}\epsilon'_iz_i|\leq 1/3$ as required. Proof of Lemma 1: $$\sum_{\epsilon\in\{-1,1\}^N}\left|\sum_{i=1}^{N}\epsilon_iz_i\right|^2 =\sum_{\epsilon\in\{-1,1\}^N}\sum_{i=1}^{N}\epsilon_iz_i\overline{\sum_{j=1}^{N}\epsilon_jz_j} =2^N\sum_{i=1}^{N}|z_i|^2\leq\tfrac{4}{90}2^N$$ because the $z_iz_j$ terms cancel. So it is not possible for more than $\tfrac45 2^{N}$ tuples $\epsilon$ to satisfy $|\sum_{i=1}^{N}\epsilon_iz_i|^2>5/90.$ (This is a form of Markov's inequality.) So at least $\tfrac15 2^{N}$ must have $|\sum_{i=1}^{N}\epsilon_iz_i|^2\leq 5/90.$ Requiring $\epsilon_1=1$ halves this to $\tfrac1{10} 2^{N}.$ Proof of Lemma 2: Given any three complex values I claim there are always two $z,w$ such that $|z+w|$ or $|z-w|$ is at most $\max(|z|,|w|).$ This follows from the fact that some two make an angle of at most $\pi/3$ after possibly negating some values: assume one value $z$ lies on the real axis, then $z$ or $-z$ makes an angle of less than $\pi/3$ with anything not in the region $\arg w\in(\pi/3,2\pi/3)\cup (4\pi/3,5\pi/3)$; but any two points in this region make an angle of at most $\pi/3$ with each other after possibly negating one. In algebraic terms this means $2\mathrm{Re}(z\overline w)\geq |z||w|,$ giving $|z-w|^2=|z|^2+|w|^2-2\mathrm{Re}(z\overline w)\leq\max(|z|,|w|)^2.$ This lets us reduce to the case $N=2.$ Without loss of generality $z_1$ is a positive real, and negating $z_2$ if necessary we can assume $\mathrm{Re}(z_2)\leq 0.$ This gives $|z_1+z_2|^2\leq|z_1|^2+|z_2|^2\leq 10/90$ as required.
Need help with optimization concepts.
Both ways are equivalent when the non-negativity constraints are slack, but if in the solution either $x=0$ or $y=0$, then, as Rahul pointed out, you need to include the multipliers for them. But you can also use your method: if you don't find a solution, just try with $x=0$ and/or $y=0$. A critical point of a concave function is always a global maximum (assuming a convex domain), as the function must lie below the tangent plane.
What do we mean by a variational model?
A variational model is one where we optimize over functions as opposed to over values. Let's take the classic case from your question $$\min_{D,C}\|Y-DC\|_F^2 $$ in which we optimize over $D$ and $C$, which are (probably) meant to be functions. I say probably, because you did not supply the article you are reading. You can look into the Euler-Lagrange equations as an introduction to variational calculus, which deals with such questions. The classic case is finding the brachistochrone curve. Another example of this is the expectation-maximization algorithm. In contemporary deep learning, variational autoencoders work with this idea, where a neural network is used to approximate the function (as opposed to minimizing over the entire function space).
Find $y$ and $z$ in the differential equation $\frac{dy}{dx}=Ae^{-i\alpha x}z$
$z$ can be an arbitrary continuous function, and $y$ is then an antiderivative of $A e^{-i\alpha x} z(x)$. You can't "perform the integration" unless you have more information, e.g. another differential equation giving you $dz/dx$.
Cube equation and bitwise operators
The idea behind the bitwise AND trick is that the bit patterns of the integers from $0$ to $7$ cover all possible combinations of three bits. By taking advantage of the fact that $(-1)^0=1$ and $(-1)^1=-1$, you can use them to generate the eight combinations of $\pm r$ that you need to produce all of the cube’s vertices. That is, the ones bit of $n$ tells you whether to take $+r$ or $-r$ along the $x$-axis, the twos bit controls the $y$-axis and the fours bit controls the $z$-axis. Unfortunately, as presented here, this trick doesn’t quite work: n&1 is fine, since that produces either 000 or 001, but the other two bitwise AND expressions result in a power of two, and $-1$ raised to any power of two is $1$. Adding a right shift so that the bit that’s being tested ends up in the ones place will fix this bug. Using n>>m to represent a right-shift of $n$ by $m$ bits, we should really have something like $$V_n=(x_0+(-1)^{(n>>0)\&1}r,y_0+(-1)^{(n>>1)\&1}r,z_0+(-1)^{(n>>2)\&1}r).$$ On the other hand, right-shifting amounts to dividing by a power of two, and we just need an even or odd exponent for $-1$, so masking off the other bits is unnecessary. This leads to another way of expressing the same thing without using any bitwise operations at all: $$V_n=(x_0+(-1)^{\lfloor n/1\rfloor}r,y_0+(-1)^{\lfloor n/2\rfloor}r,z_0+(-1)^{\lfloor n/4\rfloor}r).$$
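As a sanity check, a small Python sketch that generates the eight vertices with the corrected indexing (the center and radius are arbitrary):

    x0, y0, z0, r = 0.0, 0.0, 0.0, 1.0
    for n in range(8):
        v = (x0 + (-1) ** ((n >> 0) & 1) * r,
             y0 + (-1) ** ((n >> 1) & 1) * r,
             z0 + (-1) ** ((n >> 2) & 1) * r)
        print(n, v)   # all eight sign combinations appear exactly once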
Proof of $\lim_{h\to 0}\frac{f(a+ h)-f(a)}{h}=\ell$ when $\lim_{x\to a}f'(x)=\ell$
Yes. Formally, you have to write the "other half" of the proof, the case $h<0$, but the argument is exactly the same.
Symmetric matrices property
By Sylvester's criterion, a real symmetric $2\times2$ matrix $M=\pmatrix{A&B\\ B&C}$ is positive definite if and only if $A>0$ and $\det M=AC-B^2>0$. So, if you want $\det M=1$, you need $C=\frac{1+B^2}A$.
Volume of a Triangle Rotated about Different Lines
Writing in Wolfram Mathematica 12.0:

    cone1 = ParametricPlot3D[{u Cos[v], u Sin[v], 1 - u}, {u, 0, 1}, {v, 0, 2π},
       Mesh -> None, PlotStyle -> Directive[Green, Opacity[0.2]]];
    cone2 = ParametricPlot3D[{1 + u Cos[v], u Sin[v], u}, {u, 0, 1}, {v, 0, 2π},
       Mesh -> None, PlotStyle -> Directive[Green, Opacity[0.2]]];
    cylinder = ParametricPlot3D[{1 + Cos[v], Sin[v], u}, {u, 0, 1}, {v, 0, 2π},
       Mesh -> None, PlotStyle -> Directive[Green, Opacity[0.2]]];
    frames = Table[
       triangle1 = ParametricPlot3D[{u Cos[t], u Sin[t], v}, {u, 0, 1}, {v, 0, 1 - u},
          Mesh -> None, PlotStyle -> Directive[Green, Opacity[0.8]]];
       triangle2 = ParametricPlot3D[{1 + u Cos[t], u Sin[t], v}, {u, 0, 1}, {v, 0, u},
          Mesh -> None, PlotStyle -> Directive[Green, Opacity[0.8]]];
       binder1 = Show[{cone1, triangle1}, AxesLabel -> {x, y, z}, AxesOrigin -> {0, 0, 0},
          ImageSize -> {750, 750}, Boxed -> False, Method -> {"ShrinkWrap" -> True},
          PlotRange -> {{-2, 2}, {-2, 2}, {-2, 2}}, ViewPoint -> {2.4, 2.2, 1.1}];
       binder2 = Show[{cone2, cylinder, triangle2}, AxesLabel -> {x, y, z}, AxesOrigin -> {0, 0, 0},
          ImageSize -> {750, 750}, Boxed -> False, Method -> {"ShrinkWrap" -> True},
          PlotRange -> {{-2, 2}, {-2, 2}, {-2, 2}}, ViewPoint -> {2.4, 2.2, 1.1}];
       GraphicsGrid[{{binder1, binder2}}], {t, 0, 2π, π/10}];
    Export["rotation_solids.gif", frames, "AnimationRepetitions" -> ∞, "DisplayDuration" -> 1];

you can get an animated gif of the two solids of revolution, from which it should be clear that: in the first case the volume of a cone ($V' = \pi/3$) is being brushed; in the second case the complementary volume of a cylinder ($V''=2\pi/3$) is being brushed. In fact, $V'+V''= \pi\,$ which is the volume of a cylinder of unit radius and height.
Can the property of non-increasing slope be generalized to concave functions of multiple variables?
A common characterization requires you take two derivatives and work with the Hessian. I'm going to phrase this in the convex case (but the concave case is analogous, just multiply everything by $-1$). By calculus, "$f'$ is always increasing" is the same as saying "$f''$ is always positive" (provided $f''$ exists). You have observed that this property holds for convex functions. More generally, the following is a common result: If a real-valued function defined on a real Hilbert space (e.g. $\mathbb{R}^N$) has a well-defined Hessian, $H$ then $$f \;\;\text{is convex}\;\;\Leftrightarrow\;\;H\;\;\text{is positive semidefinite}.$$ This result appears here and in most convex analysis text books (Rockafellar is a classic). The analogue for your concave case is that $H$ must be negative semidefinite. To your second question, it still holds that $x$ (globally) minimizes a convex function $f$ if and only if $\nabla f(x)=0$. Equivalently, $x$ (globally) maximizes a concave function $f$ if and only if $\nabla f(x)=0$. This is known as Fermat's rule, or a first-order condition.
Prove that if $G$ is a group of order $39$ then $G$ has a subgroup of order $3$
If there is no subgroup of order 3, then there are no elements of order 3. The only other possible orders are 1, 13, and 39, and there can be no elements of order 39 (otherwise there is an element of order 3). Thus, every element has order 1 or 13. Since 13 is prime, the distinct subgroups generated by the elements of the group intersect trivially, and these subgroups cover the group. Thus, there are $12n+1$ elements, for some $n$. Since 39 is not of this form, this is a contradiction.
How do I prove that an anti-symmetric matrix $A$ is not invertible?
Since $A$ is antisymmetric, $A^T=-A$, so $$\det(A)=\det(A^T)=\det(-A)=(-1)^n\det(A)=-\det(A)$$ since $n$ is odd. Hence $$\det(A)=0$$
Restoring a point after transformation
(Edited) The values of $u$ and $v$, and therefore $u^2+v^2$, are given to you, and you want to find $x$, $y$ such that $$(u,v)=\Bigl({x\over f(r)}, {y\over f(r)}\Bigr)\ ,\qquad r:=\sqrt{x^2+y^2}\ .\qquad(2)$$ It follows that we necessarily have $$x=f(r)u\ , \quad y=f(r)v\qquad(3)$$ and therefore $$r^2=(u^2+v^2)f^2(r)\ .\qquad(4)$$ This equation only involves given data and the unknown $r$. Solving it produces a list of values $r_k>0$ (and maybe some other solutions), and to each of these $r_k$ by $(3)$ correspond values $$x_k= u f(r_k)\ ,\quad y_k=v f(r_k)\ .\qquad(5)$$ Now $(5)$ is just a necessary condition that solutions of the original equation $(2)$ would have to fulfill, and we have to prove that such pairs $(x_k,y_k)$ are in fact solutions of $(2)$, i.e., satisfy $$(u,v)=\Bigl({x_k\over f\bigl(\sqrt{x_k^2+y_k^2}\bigr)},{y_k\over f\bigl(\sqrt{x_k^2+y_k^2}\bigr)}\Bigr)\ .$$ To this end we argue as follows: If $x_k$ and $y_k$ are given by $(5)$, where $r_k>0$ is a solution of $(4)$, then $$x_k^2+y_k^2=(u^2+v^2)f^2(r_k)=r_k^2\ .$$ As $r_k>0$ it follows that $\sqrt{x_k^2+y_k^2}=r_k$ and therefore $${x_k\over f\bigl(\sqrt{x_k^2+y_k^2}\bigr)}={x_k\over f(r_k)}=u\ ,$$ and similarly for $y$ resp. $v$.
Gradient and Laplacian in $S^1$
This answer addresses the issue of the Laplacian on $S^1$ and not the issue of whether you are solving the differential equation correctly. Circles are completely classified by their radius $r$, or if you prefer, by their circumference $C$, where of course $C = 2 \pi r$. What I mean by this is that whether or not your circle is originally given to you as being embedded in $\mathbb{R}^2$ in the usual way, you can always view it as being embedded in $\mathbb{R}^2$, in the sense that there is an isometry (a length-preserving bijective smooth map) between the original circle and the usual one in $\mathbb{R}^2$. For your purposes, I would guess that the distinction between the original circle and the one in $\mathbb{R}^2$ is not too important. This means that your circle can always be parametrized by the coordinate $\theta$, where $0 \leq \theta < 2\pi$. If you are familiar with tangent vectors in an abstract setting, the tangent vector $\frac{\partial}{\partial \theta}$ has constant length $r$ (which, remember, is a fixed constant). You can also parametrize the circle with respect to arc length, say, using the coordinate $t$, where $0 \leq t < 2\pi r$, and the tangent vector $\frac{\partial}{\partial t}$ has constant length $1$. The Laplacian in $\mathbb{R}^n$ is $\Delta u = \sum_{k=1}^n \frac{\partial^2 u}{\partial x_k^2}$. This formula relies on the fact that the coordinates $x_1, \dots, x_n$ are the usual Euclidean coordinates, or, in other words, that the tangent vectors $\frac{\partial}{\partial x_1}, \dots, \frac{\partial}{\partial x_n}$ are orthonormal. On a more general Riemannian manifold, the key thing to remember is that the same formula for $\Delta$ applies, but only as long as one is working in a coordinate system that is orthonormal (to second order, meaning the first derivatives of the metric tensor vanish). In a general coordinate system, the Laplacian is given by $$ \Delta u = \sum_{j,k} \frac{1}{\sqrt{|g|}} \frac{\partial}{\partial x_j} \left( \sqrt{|g|} g^{jk} \frac{\partial u}{\partial x_k} \right), $$ where $|g|$ is the determinant of the metric tensor $(g_{jk})$ and the $g^{jk}$'s are the components of the inverse of the metric tensor. If the coordinate system is orthonormal to second order at a point, then this formula (at that point) reduces to the familiar formula from $\mathbb{R}^n$. (Note it is not generally possible to choose such a coordinate system globally, or even in a small open set, but we can always choose one so that those properties hold at a single point at the center of the coordinate system.) You can read more about the Laplacian on Riemannian manifolds on Wikipedia, for example. That article also contains links to articles about the gradient and divergence, including discussion of those operators on manifolds. I suppose this may not be too useful if you don't know any differential geometry, but you still might want to take a look. Now, back to the circle: Recall the two coordinates $\theta$ and $t$ that we could use to parametrize the circle. Since the circle is one-dimensional, there is no sum on $j$ and $k$ in the formula for $\Delta$. If we use $t$ (an orthonormal coordinate system), $g_{tt} = g^{tt} = |g| = 1$, so we obtain simply $\Delta u = \frac{\partial^2 u}{\partial t^2}$. If we use $\theta$ (not orthonormal but still quite simple), $g_{\theta \theta} = |g|= r^2$ and $g^{\theta \theta} = \frac{1}{r^2}$ (remember, $r$ is a constant), so we obtain $\Delta u = \frac{1}{r^2} \frac{\partial^2 u}{\partial \theta^2}$, as you claimed. 
To summarize, you have to divide by $r^2$ in the $\theta$ formula for $\Delta$ to account for the fact that $\frac{\partial}{\partial \theta}$ has length $r$, and that you are taking two derivatives with respect to $\theta$. This correction is not necessary when you use the coordinate $t$, since $\frac{\partial}{\partial t}$ has length $1$.
Does the transfer principle really work in both directions?
There are statements in nonstandard-land which don't transfer. But such statements can't be first-order expressible, for instance. There's no way to express the property "$n$ is infinite" in a statement to which the transfer principle applies; similarly, the statement "$x$ is not a standard real" doesn't transfer. All statements about internal sets do transfer, if I recall correctly; but you need to be careful to justify that the sets under consideration are internal. $\{1\}$ is internal because $1$ can be defined in a first-order way (it's the unique real such that $1x = x$ for all $x$); the set of all standard reals is not internal. You should be careful to find an exact statement of the transfer principle so that you know what restrictions need to be placed on the statements you're considering.
Showing uniqueness of Riemann's Integral
Assume that $L_1$ and $L_2$ are both Riemann integrals of $f$ over $[a,b]$. We want to show that $L_1=L_2$. Let $\epsilon >0$. Then for each $i=1,2$, there exists $\delta_i>0$ such that $$\|P \|<\delta_i \quad \Rightarrow \quad |\sigma-L_i|<\frac{\epsilon}{2}$$ whenever $P$ is a partition of $[a,b]$ and $\sigma$ is any Riemann sum of $f$ associated with $P$. Take $\delta=\min \{\delta_1,\delta_2\}$. Fix a partition $P$ of $[a,b]$ with $\|P\|<\delta$ and let $\sigma$ be a Riemann sum of $f$ for $P$. Note that $\delta\le \delta_i$ for $i=1,2$, hence $$0\le|L_1-L_2|\le|\sigma-L_1|+|\sigma-L_2|<\epsilon.$$ Since $\epsilon>0$ was arbitrary, $$0\le|L_1-L_2|<\epsilon$$ holds for all $\epsilon >0$. This forces us to conclude that $|L_1-L_2|=0$. Hence, $L_1=L_2$.
Is $R$ a ring or a unit ring in a random piece of literature?
It's probably the most likely assumption you can make, but there are likely to be exceptions (besides the cases where $R$ denotes the set of real numbers). What you should do in such a scenario, where it's not explicitly mentioned, is to observe how the symbol is used. If, for example, you see that the book refers to the multiplicative identity without explicitly assuming its presence, then the book probably means that its existence is implicit. Conversely, if the book explicitly requires the existence of an identity in some of its theorems, then you should assume that it isn't implied otherwise.
How many invertible 3x3 matrices?
$2^3-2^0=7$ choices for the first row, as a nonzero vector. Then $2^3-2^1=6$ choices for the second row, a vector not in the span of the first. Then $2^3-2^2=4$ choices for the third row, not in the span of the first two. In total: $7\cdot 6\cdot 4=168$ invertible matrices over $\mathbb F_2$.
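The count is small enough to confirm by brute force over $\mathbb F_2$; a throwaway Python sketch:

    from itertools import product

    def det_mod2(m):
        a, b, c, d, e, f, g, h, i = m
        # cofactor expansion; minus equals plus mod 2
        return (a*(e*i + f*h) + b*(d*i + f*g) + c*(d*h + e*g)) % 2

    print(sum(det_mod2(m) for m in product((0, 1), repeat=9)))  # 168 = 7*6*4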
What is the semantics of "type"? Do the "types" of type theory semantically differ from the "sets" of set theory?
Yes, there is a rich field of mathematical treatments of types, in the sense of programming languages. However, New Foundations and Russell’s earlier theories of types are very atypical of what is now generally known as Type Theory. A good place to start is with something like the typed λ-calculi of Church and others. These may exactly be seen as minimal programming languages, and serious modern functional programming languages like Haskell and OCaml (and relatives) are based very closely on more elaborate versions of these, with lots of syntactic sugar and clever type extensions added. On the other hand, one may elaborate on these type systems in different directions to analyse how datatypes work in more pragmatically designed languages (C, Java, etc.). In all of these, though, the difference from set theory is not so much in what types themselves are seen as — in most type systems I know, they are still viewed essentially as just abstract collections. There are two main differences with set theory:

- The operations provided for constructing types and elements of types are typically very concrete and constructive, and mirror familiar constructs available in programming languages. For instance, a type theory may well have a basic construct which, for a type A, provides a type List A, of lists of elements of A.

- In (most) set theories, sets are collections from an ambient universe of sets; everything is a set, any set may be an element of any other set, and “being-an-element-of” is a relation, a property with a truth-value. In most type theories, types are not subcollections of some ambient collection — they are independent collections of elements. So in set theory, a statement “for every prime number $p$, …” officially means “for every $p$, if $p$ is a number, and $p$ is prime, then…” — so the quantifier allows e.g. $\mathbb{R}$ as a valid value for $p$! In type theory, it would formally become “for every number $p$, if $p$ is prime then …” — primeness may be a property, but being a number is the basic type of thing $p$ is declared to be as soon as it is considered. Every object you ever talk about has some type. You can’t (within the language of the theory) take the number 100, and then ask “is 100 a string?” — being of a type isn’t a property, it’s a declaration that’s made when a variable is introduced, or that’s deduced for a derived term like (n+5). In practical programming terms, this roughly says that the implementations of objects are well-sealed abstractions: an implementation may use the same underlying representation for some integer and some string, but the language can only access them as an integer and as a string. (Of course, type systems that are specifically designed to closely model existing languages may throw out such abstractions.)

There are lots of good introductions to type theory out there. The Wikipedia page gives a good start; for a serious book on type theory from a programming languages point of view, that’ll take you as far as you need to go, I recommend Bob Harper’s Practical Foundations of Programming Languages, available as a pdf from his webpage. He’s highly opinionated and his more polemical statements must be taken with a large grain of salt, but he’s a fantastic writer, with a beautiful viewpoint on the field.
If $0 < \frac{a}{b} < 1$, does subtracting the next largest number $\frac{1}{n}$ always make the resulting fraction's numerator less than $a$?
$$\frac{1}{n} \leq \frac{a}{b} < \frac{1}{n-1} \Rightarrow b \leq na < a+b \Rightarrow 0 \leq na-b < a $$ As $$ \frac{na-b}{b}=\frac{a'}{b'} $$ and the second fraction is reduced we have $$a' \leq na-b < a$$ Also, since $ \frac{na-b}{b}\geq 0 $ it follows that $a' \geq 0$
The limit point of a singleton in a topology $\tau$
$c$ is not a limit point of $\{a\}$ because its open neighbourhood $\{c,d\}$ is disjoint from it. The same holds for $d$. $e$ is the only limit point of $\{b\}$. It has only one nontrivial neighbourhood, and that intersects $\{b\}$ (and $b \neq e$).
Counting the number of permutations of $n$ numbers that have $k$ places of decrease
With the OP asking for a hint we can provide the following recursion. Ask how we can obtain a permutation with $k$ decreases by inserting the value $n$ into a permutation of the values from $1$ to $n-1.$ We could insert $n$ between one of the $k$ decreases of a permutation on $n-1$ with $k$ decreases, which keeps the number of decreases constant. Or we could add it at the end of a permutation on $n-1$ with $k$ decreases. Lastly we could insert it in one of the $(n-1)-(k-1)$ locations where there is no decrease of a permutation on $n-1$ with $k-1$ decreases, thereby increasing the count of decreases by one. This gives the recurrence $$X_{n,k} = (k+1) X_{n-1, k} + (n-k) X_{n-1,k-1}.$$ The base cases here are $X_{1,0} = 1, X_{1,k} = 0$ and $X_{n,0} = 1,$ for the sorted permutation. Implementing this in Maple we find (there is an enumeration routine as well to check the values for small $n$):

    with(combinat);
    ENUM := proc(n)
    option remember;
    local gf, perm, pos, decr;
        gf := 0;
        perm := firstperm(n);
        while type(perm, `list`) do
            decr := 0;
            for pos to n-1 do
                if perm[pos] > perm[pos+1] then decr := decr + 1; fi;
            od;
            gf := gf + u^decr;
            perm := nextperm(perm);
        od;
        gf;
    end;

    A := (n, k) -> coeff(ENUM(n), u, k);

    X := proc(n, k)
    option remember;
        if n=1 then
            if k=0 then return 1 fi;
            return 0;
        fi;
        if k=0 then return 1 fi;
        k*X(n-1, k) + X(n-1,k) + (n-k)*X(n-1, k-1)
    end;

We thus obtain e.g. for $n=7$ the values $$1, 120, 1191, 2416, 1191, 120, 1.$$ We look these up in the OEIS and find that we are dealing with Eulerian numbers, OEIS A008292. Presumably many readers could have recognized the problem statement without doing the computation. Anyway the OEIS entry lists a considerable number of references and should suffice to start the reader on whatever type of investigation they plan to do.
Can this condition be proved in a sequence?
Yes, there is always a place in the sequence such that, if you start your summation there, you never get a negative sum. To find it, first, as a preliminary step, do the summation starting anywhere you like. Of course, this might go negative as in your example. If it doesn't go negative, you've got what you want. If it does go negative, find the most negative of these partial sums. (In your example, that would be the $-3$.) The next summand after that partial sum (in your example it's $5$) is the place you want to start. The reason this works is that, by starting after the most negative partial sum, say $-m$, you'll get new partial sums that begin with $0$ where you previously had $-m$. So as you go around the cycle, all your new partial sums will be larger by at least $m$ than the partial sum that you originally had up to the same point. Since $-m$ was the most negative original partial sum, all the others are $\geq-m$, and so the new partial sums, being bigger by at least $m$, will all be $\geq0$.
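The argument is effectively an algorithm; a minimal Python sketch (the sequence is a made-up example consistent with the $-3$ and the $5$ mentioned above):

    def good_start(a):
        # return an index such that all partial sums around the cycle are >= 0,
        # assuming the total sum of a is >= 0
        prefix, worst, start = 0, 0, 0
        for i, x in enumerate(a):
            prefix += x
            if prefix < worst:
                worst, start = prefix, i + 1   # begin just after the minimum
        return start % len(a)

    a = [2, -3, 1, -3, 5, 1]
    s = good_start(a)                          # 4: start at the 5
    total, sums = 0, []
    for k in range(len(a)):
        total += a[(s + k) % len(a)]
        sums.append(total)
    print(s, sums, all(v >= 0 for v in sums))  # 4 [5, 6, 8, 5, 6, 3] True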
What is the point of algebraic logic?
One advantage of algebraic logic is that the distinctions and relations between the meta levels become very clear. However, as long as algebraic logic stays on the level of propositional logic, and doesn't try to capture predicate logic (or at least equational reasoning from universal algebra or syllogistic reasoning), this advantage risks becoming trivial by not reaching situations which would benefit from this sort of clarification. Now cylindric algebras and polyadic algebras seem to capture predicate logic, but they only capture classical predicate logic, while Heyting algebras only capture intuitionistic propositional logic. So let me instead try to explain why algebraic logic is useful for me: Using algebraic logic allows one to leave the strictly logical context during the study of logical systems. This allows one to continue investigations even if some things don't fit together. It also allows one to study duality, even though dual logics in general fail to be logical systems in any reasonable sense. And you can really investigate many aspects of logical systems in an algebraic way, independent of whether they need investigation or not. Let's try to illustrate this with an example which feels natural from an algebraic point of view, but dubious from a logical perspective. Start with intuitionistic logic, or rather a Heyting algebra, and define the ternary operation $t(x,y,z):=((z\to y)\to x) \land ((x\to y)\to z)$. Notice that $t(x,x,x)=x$ and $t(x,y,1)= (y\to x)$, i.e. $t$ is idempotent and allows one to define implication. So Heyting algebras can be specified by purely idempotent operations, which is important in universal algebra in the context of Mal'cev conditions. Actually $t$ is a Mal'cev operation, i.e. $t(x,x,z)=z$ and $t(x,z,z)=x$. Now look at the dual Heyting algebra, where implication turns into minus (or non-implication, if you really prefer). Then $t'(x,y,z):=(z-(y-x)) \lor (x-(y-z))$ and classically (i.e. in a Boolean algebra) we have $t=t'$, but effectively the dual Heyting algebra is no longer able to talk about implication in any meaningful way. (I suspect the same is true for equivalence, but I haven't checked it yet.) But isn't the ability to talk about implications or at least equivalences the core of any logic? Maybe, but if you do algebraic logic, you don't need to worry about such questions as long as the resulting math is interesting and still sufficiently closely related to logic. To make the example more extreme, let's remove truth and falsehood (i.e. the requirement that $0$ and $1$ exist) from the Heyting algebra. We can get back truth from $1=(a\to a)$, but if we use the ternary operation $t$ instead of implication, then there is no way to get back truth (or implication). I hope even Doug Spoonwood agrees that a logic without truth is dubious from a logical perspective! Edit 20.08.16 I just noticed that the example even allows one to illustrate a situation where things don't fit together. For a partial function $p:X\to Y$, we have $p^{-1}(A\cap B)=p^{-1}(A)\cap p^{-1}(B)$, $p^{-1}(A\cup B)=p^{-1}(A)\cup p^{-1}(B)$, and $p^{-1}(A-B)=p^{-1}(A)-p^{-1}(B)$. But $p^{-1}(Y)=X$ is only true if $p$ is a total function. If we interpret falsehood $0$ as the empty set, truth $1$ as the entire space $X$ (or $Y$), $\land$ as intersection $\cap$, $\lor$ as union $\cup$, and minus as minus, then nearly all operations of classical logic (including the ternary operation $t=t'$) are preserved under inverse partial functions, except for truth (and implication/negation).
Categorical logic would be another approach which might seem to offer even more freedom than the algebraic approach. However, it is much more difficult to find your own way there. How long would you take on your own to realize that often the categorical product must be ignored in favor of the bifunctor of a monoidal category? Or you have a nice correspondence between topological spaces and intuitionistic logic, but somehow the category of topological spaces has too many deficiencies, and you don't really know how to best fix those! Add to that the general burden of becoming sufficiently familiar with category theory in the first place.
Finding an eigenvalue
Consider the map $f$ defined by $x \mapsto \frac{Ax}{\sum_i (Ax)_i}$ defined on the (topological) disk $D$ that consists of vectors $x$ satisfying $x_1 \ge 0, x_2 \ge 0, \ldots, x_n \ge 0, x_1 + x_2 + \ldots + x_n = 1$ (i.e., $D$ is the standard simplex in the positive orthant). Then $$ f : D \to D $$ is a continuous map of a closed disk to itself (this requires a sentence or two of proof... how do we know all entries of $Ax$ are positive? How do we know they're not all zero so that the division makes sense?), and hence has a fixed point, by the Brouwer theorem. This fixed point is a positive eigenvector for $A$.
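Numerically, iterating $f$ is just power iteration, and it converges to the fixed point; a minimal numpy sketch with an arbitrary positive matrix:

    import numpy as np

    A = np.array([[2.0, 1.0, 1.0],
                  [1.0, 3.0, 1.0],
                  [1.0, 1.0, 4.0]])
    x = np.ones(3) / 3             # start inside the simplex D
    for _ in range(100):
        y = A @ x
        x = y / y.sum()            # the map f
    print(x)                       # the fixed point: a positive eigenvector
    print((A @ x) / x)             # all ratios equal the eigenvalue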
Taylor's Remainder $x-\frac{x^2}{2}+\frac{x^3}{3(1+x)}<\log(1+x) <x-\frac{x^2}{2}+\frac{x^3}{3}$
It is much easier to solve this problem via integration. If $t > 0$ then we can see that $$1 < 1 + t^{3}$$ and on dividing by $(1 + t) > 0$ we can see that $$\frac{1}{1 + t} < 1 - t + t^{2}$$ and integrating this equation in the interval $[0, x]$ (and noting that $x > 0$) we get $$\log(1 + x) < x - \frac{x^{2}}{2} + \frac{x^{3}}{3}\tag{1}$$ Further note that $$\frac{1 - t^{2}}{1 + t} = 1 - t$$ so that $$\frac{1}{1 + t} = 1 - t + \frac{t^{2}}{1 + t}$$ and integrating this equation on interval $[0, x]$ we get $$\log(1 + x) = x - \frac{x^{2}}{2} + \int_{0}^{x}\frac{t^{2}}{1 + t}\,dt\tag{2}$$ and clearly we can see that for $0 < t < x$ we have $$\frac{t^{2}}{1 + t} > \frac{t^{2}}{1 + x}$$ and hence $$\int_{0}^{x}\frac{t^{2}}{1 + t}\,dt > \int_{0}^{x}\frac{t^{2}}{1 + x}\,dt = \frac{x^{3}}{3(1 + x)}$$ and then from equation $(2)$ we get $$\log(1 + x) > x - \frac{x^{2}}{2} + \frac{x^{3}}{3(1 + x)}\tag{3}$$ Combining equations $(1)$ and $(3)$ we get the desired result.
How to compute $\frac{\partial f}{\partial \overline{z}}$ to show holomorphicity
By definition, we have that \begin{align} \frac{\partial}{\partial \bar z} = \frac{1}{2} \left(\frac{\partial}{\partial x}+i\frac{\partial}{\partial y} \right) \end{align} and by a simple calculation we also have that \begin{align} \frac{\partial}{\partial \bar z} \bar z= \frac{1}{2} \left(\frac{\partial}{\partial x}+i\frac{\partial}{\partial y} \right)(x-iy) = 1 \end{align} and \begin{align} \frac{\partial}{\partial \bar z} z = \frac{1}{2} \left(\frac{\partial}{\partial x}+i\frac{\partial}{\partial y} \right)(x+iy) =0. \end{align} Then we have \begin{align} \frac{\partial}{\partial \bar z} (z^2-z) = \frac{\partial}{\partial \bar z}z^2 - \frac{\partial}{\partial \bar z}z = \frac{\partial}{\partial \bar z}(z)\, z+ z\,\frac{\partial}{\partial \bar z}(z) -\frac{\partial}{\partial \bar z}z = 0\cdot z+z\cdot 0 -0 = 0 \end{align} where the product rule and linearity come from the fact that $\partial_x$ and $\partial_y$ are linear and satisfy the product rule.
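If you want to verify the computation symbolically, here is a small SymPy sketch that expands $\partial_{\bar z} = \tfrac12(\partial_x + i\partial_y)$ in real coordinates (the variable names are mine):

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
z = x + sp.I*y

f = z**2 - z
# Apply d/d(zbar) = (d/dx + i d/dy) / 2, written out in real coordinates.
dzbar = (sp.diff(f, x) + sp.I*sp.diff(f, y)) / 2
print(sp.simplify(sp.expand(dzbar)))   # prints 0, confirming holomorphicity
```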
Relation between incenter, circumcenter and orthocenter of a triangle
(Figure: triangle $ABC$.) Draw perpendiculars from vertices $B$ and $C$ on sides $CA$ and $AB$, meeting them at points $E$ and $F$ respectively. They intersect at $H$. Draw $BO$, $OH$, $OC$ and $BI$. Extend $BI$ to meet $OH$ at $L$. Also, drop a perpendicular from the circumcentre on side $BC$, meeting it at point $M$. Observe that, in $\triangle BOM$ and $\triangle BHF$, $\left(i\right)$ $\angle BMO=\angle BFH=90^\circ$ $\left(ii\right)$ $\angle OBM=90^\circ-\frac{\angle BOC}{2}=90^\circ-\angle BAC=\angle ABE=\angle HBF$ $\left(iii\right)$ Since $\triangle BFC$ is a $30$-$60$-$90$ triangle, $BF=\frac{1}{2}BC=BM$ Hence, $\triangle BHF\cong \triangle BOM$ by the $A$-$S$-$A$ criterion of congruence. Thus, $BO=BH$. Since $\angle HBF=\angle OBM$ and $BI$ bisects $\angle ABC$, $BI$ must also bisect $\angle OBH$. In $\triangle BOH$, $BI$ bisects $\angle OBH$ and $BO=BH$; hence, $BI$ is the perpendicular bisector of $OH$. Since $I$ lies on this perpendicular bisector, $\boxed{OI=IH}$.
Existence of complementary subspaces for $\dim(V) = 2k$
Choose a basis $v_1,\dotsc,v_{2k}$ of $V$ and define $$\begin{align*}S_1:=&\operatorname{span}(v_1,v_2,\dotsc,v_k),\\ S_2:=&\operatorname{span}(v_{k+1},v_{k+2},\dotsc,v_{2k}),\\ S_3:=&\operatorname{span}(v_1+v_{k+1},v_2+v_{k+2},\dotsc,v_k+v_{2k}).\end{align*}$$ Each of these subspaces has dimension $k$ and any two of them intersect trivially, so any two of them are complementary in $V$.
Length of a Coastline
Maybe the best-looking example of this is the Koch snowflake: The iteration does indeed go on forever, but there is no limit to the length of the curve! If you look carefully, the snowflake's perimeter increases by a factor of $\frac{4}{3}$ each iteration, so it tends to infinity. Don't think of the size of the measuring stick. Think instead of errors in measurement of the length; at each size scale, you have some "imprecision" in your measurement of the curve. As you increase the precision of your measurement, "zooming in," you more accurately approximate the length of the curve, and the length of this rectification may tend to $\infty$. Here's a picture of a precision-increasing iteration: For other examples of this, go to Google Maps, start in orbit and slowly zoom into some nice piece of coastline like the northwestern coast of Norway. In practice, of course, you find that when you zoom in sufficiently far, objects like coastlines cease to display fractal behavior, but fractals are still beautiful math.
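The perimeter growth is easy to tabulate; a minimal Python sketch, assuming a starting triangle of perimeter $3$:

```python
# Perimeter of the Koch snowflake after n iterations, starting from a unit triangle:
# each step replaces every edge by 4 edges of 1/3 the length, so length *= 4/3.
perimeter = 3.0
for n in range(10):
    print(n, perimeter)
    perimeter *= 4/3   # grows without bound as n -> infinity
```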
Prove that the set $\mathbb{Z}\times\mathbb{Q}$ is countably infinite by constructing a bijection from that set to the natural numbers
Take the map $\left(m,\dfrac pq\right)\mapsto (m,p,q)$ with $p,q,m\in\mathbb{Z}$, writing each rational in lowest terms so that the map is well defined and injective, i.e. mapping $\mathbb{Z}\times\mathbb{Q} \rightarrow \mathbb{Z}\times\mathbb{Z}\times\mathbb{Z}$. Assuming you know that a finite (indeed countable) union of countable sets is countable, the result follows.
Express $\tan(x)$ as a power series using Maclaurin's theorem.
The right name of the theorem -> Taylor–Maclaurin How to format equations on M.SE -> MathJax And an answer to your question (given by googling "maclaurin tan") -> here (where $B_n$ is the $n$-th Bernoulli number)
Confusion in joint occurrence probability?
We have $P(\text{male}) = \frac{100}{150}$ and $P(\text{smoker})=\frac{110}{150}$. If the variables were really independent then $P(\text{male} \cap \text{smoker}) = P(\text{smoker}) \cdot P(\text{male}) = \frac{110}{150} \cdot \frac{100}{150} $. However, I think the variables are not independent. In the case of independence, you would expect the value $\frac{100 \cdot 110}{150}$ in the upper left corner of the matrix, i.e. about 73 instead of 70. If they were independent, both of your suggested methods would give the same result.
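For comparison, the expected cell count under independence is a one-line computation (numbers taken straight from the table above):

```python
# Expected count in the (male, smoker) cell if the variables were independent:
expected = 150 * (100/150) * (110/150)
print(round(expected, 1))   # about 73.3, versus the observed 70
```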
Question on a tricky Arithmetico-Geometric Progression
$$\begin{align} S&=\qquad \frac 14+\frac 28+\frac 3{16}+\frac 4{32}+\frac 5{64}+\cdots\tag{1}\\ 2S&=\frac 12+\frac 24+\frac 38+\frac 4{16}+\frac 5{32}+\cdots\tag{2}\\ (2)-(1):\qquad\\ S&=\frac 12+\frac 14+\frac 18+\frac 1{16}+\frac 1{32}+\cdots\\ &=\frac {\frac 12}{1-\frac 12}\\ &=\color{red}1 \end{align}$$
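A quick Python check of the closed form (truncating the series at 60 terms, an arbitrary cutoff):

```python
# Partial sum of S = 1/4 + 2/8 + 3/16 + ... = sum_{n>=1} n / 2^(n+1)
s = sum(n / 2**(n + 1) for n in range(1, 60))
print(s)   # approaches 1, matching the closed form
```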
Let $W = \{(x,y,z) : y=0,\ z=0\}$ be a subset of $\mathbb R^3$. Is $W$ a subspace of $\mathbb R^3$?
Yes, this is a subspace of $\mathbb{R}^3$ as an $\mathbb{R}$-vector space. It contains the zero vector. It is closed under vector addition, since the sum of any two vectors with $y=0$ and $z=0$ is another vector with $y=0$ and $z=0$. It is closed under scalar multiplication, since multiplying such a vector by any scalar yields a vector with $y=0$ and $z=0$.
If $f(x^3 + x) = x^3 + x^2 + 1$, then what is $f'(2)$?
Use the chain rule: $$g(x)=x^3+x^2+1=f(x^3+x)$$ so: $$3x^2+2x=f'(x^3+x)(3x^2+1)$$ Now just let $x=1$ to get $$5=4f'(2)$$ $$f'(2)=\frac{5}{4}$$
Isomorphisms in the localization of a category
Let $\mathcal{C}$ be a category, let $S \subseteq \operatorname{mor} \mathcal{C}$, and let $\bar{S}$ be the class of all morphisms in $\mathcal{C}$ that become invertible in $S^{-1} \mathcal{C}$. As t.b. pointed out in the comments, the 2-out-of-6 property + a three-arrow calculus is enough to guarantee $S = \bar{S}$, i.e. that the morphisms in $\mathcal{C}$ that become invertible in $S^{-1} \mathcal{C}$ are precisely the ones in $S$. (Note that the 2-out-of-6 property is a necessary condition.) If you are willing to take the fundamental theorem of three-arrow calculi (i.e. the one that gives necessary and sufficient conditions for two three-arrow zigzags (i.e. $\bullet \leftarrow \bullet \rightarrow \bullet \leftarrow \bullet$) to represent the same morphism) for granted, this is actually quite straightforward: see e.g. Proposition 36.4 in [Dwyer, Hirschhorn, Kan, and Smith] or proposition 3.5.10 in my notes. The difficulty in the general case seems to boil down to the fact that the zigzag representing an inverse in $S^{-1} \mathcal{C}$ for a morphism in $\bar{S}$ may not consist of only morphisms in $\bar{S}$ (let alone $S$). A trivial example of this is the case where $S = \emptyset$ or $S = \{ \text{identities} \}$. However, observe that a three-arrow zigzag that represents an isomorphism in $S^{-1} \mathcal{C}$ necessarily consists of only morphisms in $\bar{S}$. This, I suppose, is the significance of 3. Rather curiously, the fact that $\bar{S}$ is closed under retracts seems to play no role. For the record, let me point out that the 2-out-of-6 property does not imply closure under retracts. Consider the following category $\mathcal{C}$, $$\begin{array}{ccccc} X' & \to & X & \to & X' \\ \downarrow & & \downarrow & & \downarrow \\ Y' & \to & Y & \to & Y' \end{array}$$ where the composite across the top row is $\mathrm{id}_{X'}$ and the composite across the bottom row is $\mathrm{id}_{Y'}$, but $X' \to X$ and $Y' \to Y$ are not isomorphisms. Let $S$ be the set of all identity morphisms in $\mathcal{C}$, plus the morphism $X \to Y$. Then $S$ has the 2-out-of-6 property (because none of the morphisms in $S$ admit any non-trivial factorisation) but is not closed under retracts. There is a small sliver of hope, though. Observe that the class of pairs $(\mathcal{C}, S)$ where $S = \bar{S}$ is closed under arbitrary products. (See lemma 3.1.11 in my notes.) Let us say that $(\mathcal{C}, S)$ is saturated if $S = \bar{S}$. The functor $(\mathcal{C}, S) \mapsto S^{-1} \mathcal{C}$ is a left adjoint, so it preserves colimits. In particular, it preserves filtered colimits. Moreover, given a small filtered diagram $\mathcal{A}_\bullet : \mathcal{J} \to \mathbf{Cat}$, a morphism in ${\varinjlim}_\mathcal{J} \mathcal{A}_\bullet$ is an isomorphism if and only if it is the image of an isomorphism in some $\mathcal{A}_j$; thus, filtered colimits preserve the property of being saturated. Hence, the class of pairs $(\mathcal{C}, S)$ where $S = \bar{S}$ is closed under ultraproducts. It is not hard to see that $(\mathcal{C}, S)$ is saturated if and only if some ultrapower is saturated, so the Keisler–Shelah theorem implies the class of saturated $(\mathcal{C}, S)$ is closed under elementary equivalence. It is therefore an elementary class, i.e. axiomatisable by a theory in the first-order language of categories with an extra unary predicate. Perhaps someone clever will be able to find an explicit description of this theory.
Linear dependence of set of linear combinations of linearly independent vectors
Suppose $$ \DeclareMathOperator{Null}{Null} \DeclareMathOperator{Span}{Span} \lambda_1\cdot(u-v-w)+\lambda_2\cdot(2u+w)+\lambda_3\cdot(3u+v+3w)=\mathbf 0\tag{1} $$ Then $$ (\lambda_1+2\lambda_2+3\lambda_3)\cdot u + (-\lambda_1+\lambda_3)\cdot v + (-\lambda_1+\lambda_2+3\lambda_3)\cdot w = \mathbf0\tag{2} $$ Since $\{u,v,w\}$ is linearly independent, (2) implies \begin{align*} \lambda_1+2\lambda_2+3\lambda_3 &= 0 \\ -\lambda_1+\lambda_3&= 0 \\ -\lambda_1+\lambda_2+3\lambda_3 &= 0 \end{align*} which is equivalent to the equation $A\vec\lambda=\mathbf 0$ where $$ A= \begin{bmatrix} 1 & 2 & 3 \\ -1 & 0 & 1 \\ -1 & 1 & 3 \end{bmatrix} $$ But $\DeclareMathOperator{Rank}{Rank}\Rank(A)=2$ and we see that $$ \Null(A)= \Span \left\{ \begin{bmatrix} 1\\-2\\ 1 \end{bmatrix} \right\} $$ Hence the equation (1) is solved by $\lambda_1=1$, $\lambda_2=-2$, and $\lambda_3=1$ and we see that $\{u-v-w, 2u+w, 3u+v+3w\}$ is linearly dependent.
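The rank and null space computation can be verified with SymPy; this sketch just re-enters the matrix $A$ from above:

```python
import sympy as sp

A = sp.Matrix([[1, 2, 3],
               [-1, 0, 1],
               [-1, 1, 3]])
print(A.rank())        # 2
print(A.nullspace())   # spanned by (1, -2, 1)^T
```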
About the inductive definition of Binary trees
Well, the empty tree is a tree (with two equal subtrees). If $L$ is a tree with two equal subtrees, then $L-a-L$ (the tree with root $a$ whose left and right subtrees both equal $L$) is again a tree with two equal subtrees.
Why is a Vitali set non-measurable even though it is a member of the power set of $[0,1]$?
A measurable space is a pair $(X,\mathcal A)$ where $X$ is a set and $\mathcal A\subseteq\wp(X)$ is a $\sigma$-algebra. In that context a subset of $X$ is measurable iff it is an element of $\mathcal A$. Looking at the Vitali set $V$ as a subset of $[0,1]$, it is a measurable subset with respect to the measurable space $([0,1],\mathcal A)$ iff $V\in\mathcal A$. So it is actually not measures that "decide" whether $V$ is a measurable set or not. You can start with $([0,1],\mathcal A)$ where $V\in\mathcal A$ (making $V$ measurable in advance) and then go looking for measures on that measurable space. Examples are: $\mathcal A=\wp([0,1])$ and $\mu$ is the counting measure. $\mathcal A=\wp([0,1])$ and $\mu$ is the zero measure (your suggestion). $\mathcal A=\{\varnothing,V,V^{\complement},[0,1]\}$ and $\mu$ is the measure determined by e.g. $\mu(V)=4$ and $\mu(V^{\complement})=\pi$. For completeness let me mention that a triple $(X,\mathcal A,\mu)$, where $(X,\mathcal A)$ is a measurable space and $\mu$ is a measure on it (i.e. a function $\mathcal A\to[0,\infty]$ that has certain properties), is a measure space. On the other hand you can start with a set $X$ and some function on its subsets that "has the looks" of a measure and then go looking for a $\sigma$-algebra such that the function restricted to it is indeed a measure.
Is it correct to say that points of inflection of $f$ are local minima/maxima of $f'$?
I think that once you read my answer to your second question, you may begin to doubt your definition of inflection point, or at least realize that you need to make very clear the notion of "changes sign". For the second question, you're looking for a function $f$ where both $f''$ and $f'''$ change sign at some point $a$, which we might as well assume (by shifting coordinates) is zero. Now look at $g = f''$. That's a continuous function such that $g$ and $g'$ both change sign at zero and are continuous. Now we have to make sense of "change sign"; I'm going to assume that "$h$ changes sign at $0$" means that for some small interval around $0$, we have $h(x) < 0$ for $x < 0$, and $h(x) > 0$ for $x > 0$, or vice versa. An alternative notion would be "$h$ changes sign at $0$ if, for every number $c>0$, there are numbers $p$ and $q$ with $-c < p < 0 < q < c$ and $h(p) \cdot h(q) < 0$." I'll discuss that second case in a moment. For the first definition: By continuity, $g(0) = 0$. By negating, if necessary, we can assume that for $0 < x < c$, for some small $c$, we have $g'(x) > 0$, and for $-c < x < 0$, we have $g'(x) < 0$. Using these facts and the fundamental theorem of calculus, we can conclude that for $-c < x < c$, we have $g(x) \ge 0$, hence $g$ does not change sign. For the second definition: Let $$ h(x) = \begin{cases} \exp(\frac{-1}{x^2}) & x \ne 0 \\ 0 & x = 0 \end{cases} $$ Then $h$ is infinitely differentiable, and all derivatives of $h$ at $0$ are $0$. It's "extremely flat" at zero. It's also even (i.e., $h(-x) = h(x)$), so its derivative is odd, i.e., $h'(-x) = -h'(x)$. Let's define $$ k(x) = \begin{cases} h(x) \sin\frac{1}{x} & x \ne 0 \\ 0 & x = 0 \end{cases} $$ Clearly $k$ is nice away from $0$. What about derivatives? Well, \begin{align} k'(0) &= \lim_{s \to 0} \frac{k(s) - k(0)}{s} \\ &= \lim_{s \to 0} \frac{h(s) \sin\frac{1}{s} - 0}{s} \\ &= \lim_{s \to 0} \frac{h(s) \sin\frac{1}{s}}{s} \end{align} Now $$ -\left|\frac{h(s)}{s}\right| \le \frac{h(s) \sin\frac{1}{s}}{s} \le \left|\frac{h(s)}{s}\right| $$ so by the squeeze lemma, the limit is squeezed between two quantities whose limits are $-|h'(0)|$ and $|h'(0)|$, which are both $0$, so the limit exists and is zero. I believe (but have not checked!) that a similar computation shows that all other derivatives also exist at zero. But clearly both $k$ and its derivative oscillate wildly (but with very tiny amplitude!) near $0$, so they change sign infinitely often near zero, hence satisfy the second definition of "change sign at zero".
Solution of a system of ordinary differential equations
Yes, you can solve this system using the Runge–Kutta method, but this problem has singularities that you have to watch out for. Here is the solution using Mathematica's built-in numerical solver. system = {x'[t] == y[t], y'[t] == x[t] + y[t]*z[t], z'[t] == x[t] + y[t]^2 + x[t]*z[t], x[0] == 1, z[0] == 1, y[0] == 1} NDSolve[system, {x[t], y[t], z[t]}, {t, 0, 0.5}] Plot[Evaluate[{x[t], y[t], z[t]} /. First[%]], {t, 0, 1/2}] Here it is using Runge–Kutta: mma = NDSolve[system, {x[t], y[t], z[t]}, {t, 0, 0.5}, Method -> "ExplicitRungeKutta", "StartingStepSize" -> 1/5] Plot[{{x[t], y[t], z[t]} /. mma}, {t, 0, 0.5}] In both cases, I just selected a random IC of $(x(0), y(0), z(0)) = (1,1,1)$, but I'd certainly play around with different values.
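For readers without Mathematica, here is a rough Python equivalent using SciPy's `solve_ivp`; the right-hand side, the initial condition $(1,1,1)$, and the window $[0,0.5]$ mirror the code above, but treat this as a sketch rather than the answer's own method:

```python
from scipy.integrate import solve_ivp

# The same system: x' = y, y' = x + y z, z' = x + y^2 + x z, with (1,1,1) at t=0.
def rhs(t, u):
    x, y, z = u
    return [y, x + y*z, x + y**2 + x*z]

sol = solve_ivp(rhs, (0.0, 0.5), [1.0, 1.0, 1.0], method='RK45')
print(sol.t[-1], sol.y[:, -1])   # solution values near t = 0.5
```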
Set up and solve a math scheduling problem
I presume you mean that you need 26 employees at work, not in training, while the training is going on. In that case, your answer is no. Think of it like man-hours, but instead you have man-sessions. By your requirements, you need $22 \times 2 + 33 \times 3 = 143$ man-sessions of training. But during each training session, only $29$ employees can attend, so at most you can get only $4\times 29 = 116$ man-sessions. That is not enough. You might be able to solve it with more employees, but I think the easier solution is to add at least two more training sessions. (Yes, numerically one more session is enough, but the reality is you will never be able to make it work.)
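The counting argument is a two-line computation; here it is in Python, using only the figures stated above (29 free employees per session, 4 sessions):

```python
# Man-sessions of training required versus available capacity.
needed = 22*2 + 33*3        # 22 employees need 2 sessions, 33 need 3
capacity = 4 * 29           # 4 sessions, at most 29 attendees each
print(needed, capacity, needed <= capacity)   # 143 116 False -> infeasible
```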
For how many ordered pairs of positive integers $(x,y)$ is $x+2y = 100$?
We have $x=100-2y=2(50-y)$ and so $x>0$ iff $y<50$. Therefore, every value of $y=1,\dots,49$ works.
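A brute-force check in Python, just to confirm the count of $49$:

```python
# Count positive integer pairs (x, y) with x + 2y = 100.
print(sum(1 for y in range(1, 100) if 100 - 2*y > 0))   # 49
```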
Why there is no biholomorphism between complex plane and unit disk?
Entire non-constant functions are unbounded, by Liouville's theorem. But holomorphic maps $f : \mathbb{C} \to D$ are necessarily bounded, thus constant, and therefore not bijective, of course.
What's the difference between 'any', 'all' and 'some'?
The term "any" is troublesome, because in natural usage it could mean "all" or "at least one", depending on the context. Here are examples to consider. (1) For any $a &gt; 0$ there is an $x &gt; 0$ such that $x^2 = a$. (2) Does the equation $x^3 + y^3 + z^3 = 33$ have any integral solution? (3) Have you solved any of those problems? (4) Using this new technique, I can solve any of the problems from that list. In the first example, "any" = "all". In the second one, "have any" is asking about existence. In the third, "any" means "at least one" (existence). In the fourth, "any" means "all". I have known weak math students who are native English speakers and think (1) is proved by showing it works when $a = 1$, even though that way of interpreting (1) makes it into a trivial statement. In other words, they interpret "For any" in (1) as meaning "For some", and hence turn (1) into an existence claim instead of a universal claim. Such usage of "any" is present in non-mathematical English (see the third example), and I think this is the basis for the student's misunderstanding (comparable to having to learn the different meaning of "or" in mathematical English compared to non-technical English). I don't think any native English speaker would misunderstand the different senses of "any" in (3) and (4). I would advise someone who is not a native English speaker to avoid using "any" in mathematical statements. You can convey what you need with other choices of words.
$A\cong \Bbb Z_{(p)}\otimes A$ if multiplication by $n$ is iso
Try showing $A\to\mathbb{Z}_{(p)}\otimes_{\mathbb{Z}}A$ is injective and surjective. That is, compute its kernel, and show that any tensor $x/y\otimes a$ may be rewritten as $1\otimes b$ with $b\in A$. You can obviously slide $x$ past the $\otimes$ symbol to get $1/y\otimes xa$, but do you have any idea how to get rid of the $y$? You will have to use the fact that the element $xa\in A$ equals $y$ times some other element of $A$.
Prove that the series $a_{n+1}=\frac{\alpha}{2}-\frac{a_n^2}{2}$ converges
We'll try to prove inductively that the sequence is monotone. $$a_{n+1} - a_n = \frac{a_{n-1}^2 - a_n^2}{2}$$ Suppose $a_1 \leq a_2$. Then inductively $a_{n-1} \leq a_n$, so $a_{n-1}^2 \leq a_n^2$ and hence $a_{n+1} - a_n < 0$; so $a_n \geq a_{n+1}$. Oh no! We've actually proved inductively that the sequence is alternating between being increasing and decreasing! (You can flesh out the above idea into an actual proof, rather than a failed proof of something else.) OK, but what we do have is that inductively $$|a_{n+1}| = \left|\frac{\alpha}{2} - \frac{a_n^2}{2}\right| \leq \frac{\alpha}{2} + \frac{1}{2} a_n^2 \leq \frac{1}{2} + \frac{1}{2} = 1$$ where the last inequality is by the inductive hypothesis. Therefore the sequence is bounded by $[-1, 1]$. In fact, we can tighten this to being bounded by $[-\alpha, \alpha]$ by noting that $$\frac{\alpha}{2} + \frac{1}{2} a_n^2 \leq \frac{\alpha}{2} + \frac{1}{2} \alpha^2 \leq \frac{1}{2} \alpha + \frac{1}{2} \alpha = \alpha$$ Notice that $f(x) = \frac{\alpha}{2} - \frac{x^2}{2}$ is a contraction mapping on $[-r,r]$ whenever $r < 1$: $$d(f(x), f(y)) = \frac{1}{2} d(x^2, y^2) = \frac{1}{2} |x+y| |x-y| \leq r |x-y| = r \times d(x, y)$$ and $[-r, r]$ is a complete nonempty metric space (if $r > 0$), so by the contraction mapping theorem, iterating $f$ from any starting point will produce a convergent sequence. Therefore $f$ is a contraction on $[-\alpha, \alpha]$, which is enough as long as $\alpha$ is not $0$ or $1$. If $\alpha = 0$, the sequence is constant $0$. I'm still working on the case $\alpha = 1$. (EDIT: which is taken care of nicely by Hua in another answer, who points out that we can prove $|a_n| \leq \frac{\alpha}{2}$ in all cases, not just $|a_n| \leq \alpha$.)
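A short Python experiment illustrating the contraction-mapping conclusion: iterating $a_{n+1} = \alpha/2 - a_n^2/2$ settles at the fixed point $-1+\sqrt{1+\alpha}$ of $f$ (the sample values of $\alpha$ and the starting points are arbitrary choices of mine):

```python
# Iterate a_{n+1} = alpha/2 - a_n^2 / 2 and compare against the fixed point,
# which solves a = alpha/2 - a^2/2, i.e. a = -1 + sqrt(1 + alpha).
for alpha in [0.0, 0.3, 0.7, 1.0]:
    a = alpha                # any starting point in [-alpha, alpha] will do
    for _ in range(200):
        a = alpha/2 - a*a/2
    print(alpha, a, -1 + (1 + alpha)**0.5)
```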
Basic equation solving $t/(1+t)=1-1/(1+t)$
\begin{align} \frac{t}{1+t} &= 1 - \frac{1}{1+t}\\\\ &= \frac{1+t}{1+t} - \frac{1}{1+t}\\\\ &= \frac{1+t-1}{1+t}\\\\ &= \frac{t}{1+t} \end{align} Moral of the story is: make a nice choice for the number $1$ and things often look like they should.
Let $f$ be a nonnegative bounded measurable function on a set of finite measure $E.$ Assume $\int_E f = 0.$ Show that $f=0$ a.e. on $E.$
Yes, it is correct, and it's the crucial part of this exercise. To flesh it out, instead of just considering $B = \{x: f(x)>0\},$ consider the countable partition $$B_1 = \{x:f(x)\ge 1\} \\B_2 = \{x: 1/2 \le f(x) < 1\}\\ B_3 = \{x: 1/3\le f(x) < 1/2\} $$ etc. Then we have $$ \int_{B}f(x)\, d\mu = \sum_{n=1}^\infty \int_{B_n} f(x)\,d\mu \ge \sum_{n=1}^\infty \mu(B_n) \frac{1}{n}.$$ The only way we can have $\int_B f(x)\, d\mu = 0$ is to have $\mu(B_n) = 0$ for all $n,$ which implies $\mu(B) = 0.$
A point $P(a,b)$ is equidistant from the y-axis and from the point $(4,0)$. Find a relationship between $a$ and $b$.
The horizontal segment from the point $P(a,b)$ to the y-axis has length $\sqrt {(a-0)^2+(b-b)^2}$, as the nearest point on the y-axis has coordinates $(0,b)$. $\sqrt {(a-0)^2+(b-b)^2} = \sqrt {a^2}$ If the point $P(a,b)$ is equidistant from the y-axis and the point $(4,0)$, we can write: $\sqrt {a^2}=\sqrt {(a-4)^2+b^2} \Rightarrow a^2=(a-4)^2+b^2 \Rightarrow a^2=a^2-8a+16+b^2 \Rightarrow b^2=8a-16$
if skolem($\alpha$) is valid then $\alpha$ is valid
No; saying that $\nvDash \alpha$ means that there is at least one structure $\mathcal M$ such that $\mathcal M \nvDash \alpha$. Hint: Assume that $\alpha$ is $\forall x \exists y \phi(x,y)$; thus its "skolemization" must be: $sk(\alpha) := \forall x \phi(x,f(x))$. From $\nvDash \alpha$ we have that, for some $\mathcal M$ with domain $M=|\mathcal M|$, there is some $a \in M$ such that $\mathcal M \nvDash \exists y \phi(x,y)[a]$ [intuitively, if $\forall x\psi(x)$ is not true in $\mathcal M$, then for some "value" $a$ of $x$, $\psi$ does not hold of $a$]. But $\mathcal M \nvDash \exists y \phi(x,y)[a]$ means that for every $b \in M$: $\mathcal M \nvDash \phi(x,y)[a,b]$. This means that we have "no way" to define a function $f^M : M \to M$ such that $\phi(x,y)[a,f^M(a)]$ holds in $\mathcal M$, and this implies that $\mathcal M \nvDash \forall x \phi(x,f(x))$. Thus, having found a structure $\mathcal M$ such that $\mathcal M \nvDash \forall x \phi(x,f(x))$, we conclude that $\nvDash sk(\alpha)$.
Definition of interior
If we have a metric space $(M,d)$, then an open ball with centre $x$ and radius $\varepsilon$ is the set $$B_\varepsilon(x):=\{y\in M\mid d(x,y)<\varepsilon\}.\tag{1}$$ Each time you are dealing with some particular metric space $(M,d)$, you should start over and see what $B_\varepsilon(x)$ actually represents, by just writing out the definition. In the OP the metric space is $(\mathbb R,\vert\cdot\vert)$. The line segment $[0,1]$ is now a subspace of $\mathbb R$. In the comments the OP asks about a line segment in $\mathbb R^2$. Here we live in the metric space $(\mathbb R^2,\Vert\cdot\Vert)$. Though in both cases we consider line segments, they are considered to be subsets of different spaces. N.B.: Don't get confused by the word ball. The open ball $B_\varepsilon(x)$ is just the set defined in $(1)$. It is very common that it is not actually something round. See for instance the picture below (open balls in different metric spaces).
Prove that if $n^2$ is odd then $n$ is odd?
Not a correct proof because if $n^2$ is odd, then it doesn't necessarily take the form $(2k - 1)^2$. In fact, that's what you are required to prove. Your assumption should be $\exists$ $k \in \mathbb N$, such that $n^2 = 2k - 1$. However, this isn't a very fruitful approach. The classical solution to this is to work by contraposition. Suppose that $n$ is even, then we can write $n = 2k$. Then, $n^2 = 4k^2 = 2(2k^2)$, so it is even. This gives that if $n^2$ is odd, then $n$ is odd.
Are there PL-exotic $\mathbb{R}^4$s?
In this survey article on differential topology, Milnor outlines a proof that every PL manifold of dimension $n \leq 7$ possesses a compatible differential structure, and whenever $n<7$ this structure is unique up to isomorphism. He includes references for the various facts he uses.
Finding the stationary point of a type of hyperbola?
Hmm. \begin{align*} y&=\frac{1}{x}+\frac{1}{x^2}-\frac{1}{x^3} \\ &=\frac{x^2+x-1}{x^3} \\ y'&=\frac{x^3(2x+1)-3x^2(x^2+x-1)}{x^6} \\ &=\frac{x(2x+1)-3(x^2+x-1)}{x^4} \\ &=\frac{2x^2+x-3x^2-3x+3}{x^4} \\ &=\frac{-x^2-2x+3}{x^4}. \end{align*} Setting this equal to zero is tantamount to solving $x^2+2x-3=0,$ with solutions $$x=\frac{-2\pm\sqrt{4+4(3)}}{2}=\frac{-2\pm 4}{2}=\{1, -3\}. $$ You can probably use the Second Derivative Test to show which of these is a local min, and which a local max.
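The same critical points fall out of a few lines of SymPy, including the Second Derivative Test values:

```python
import sympy as sp

x = sp.symbols('x')
y = 1/x + 1/x**2 - 1/x**3
crit = sp.solve(sp.diff(y, x), x)
print(crit)                                  # [-3, 1] (order may vary)
for c in crit:
    # Negative second derivative -> local max; positive -> local min.
    print(c, sp.diff(y, x, 2).subs(x, c))
```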
Prove $\forall a\in\mathbb R,\ \forall b\in\mathbb R,\ [(a \le b)\Rightarrow(n^a \in O(n^b))]$
Are you familiar with the limit test? Take $L = \lim_{n \to \infty} \frac{n^{a}}{n^{b}}$. If $L = 0$, then $n^{a} \in o(n^{b})$. If $0 < L < \infty$, then $n^{a} \in \Theta(n^{b})$. Otherwise ($L = \infty$), $n^{a} \in \omega(n^{b})$. Here $a \le b$ gives $L = \lim_{n \to \infty} n^{a-b}$, which is $0$ when $a < b$ and $1$ when $a = b$; in either case $n^{a} \in O(n^{b})$.
Does this sequence always terminate or enter a cycle?
I think there can be cycles of any odd length. Take $2n+1$ odd primes $p_k$, for which any $n$ of them have a sum less than the sum of the other $n+1$. Let $a_i = p_i-p_{i+1}+p_{i+2}-\cdots+p_{i-1}$, where the index is taken cyclically. Then $a_i+a_{i+1}=2p_i$, and all the $a_i$ are odd. Let $N$ be an odd number for which $Na_ia_{i+1}\equiv a_{i+2}\pmod{a_i+a_{i+1}}$. That is possible by the Chinese Remainder Theorem. Then take the numbers $\{Na_i\}$ as the cycle.
What does it mean to say that a Hilbert space $E$ is dense in $F'$, where $F$ is another Hilbert space?
First we have to clarify what norm we put on $F'$. In general the dual of a normed vector space $X$ is endowed with the norm $$ \Vert f \Vert_{X'} = \sup_{\Vert x \Vert_X =1} f(x). $$ Using the map you describe above, for each $y \in E$ you define $f_y \in F'$ via $f_y(x) = \langle y,x\rangle$. Then $$ \Vert f_y \Vert_{F'} = \sup_{\Vert x \Vert_F =1} f_y(x)= \sup_{\Vert x \Vert_F =1} \langle y,x\rangle. $$ Now to prove what you're after you want to show that for any $f \in F'$ and $\epsilon >0$ there exists $y \in E$ such that $\Vert f - f_y \Vert_{F'} < \epsilon$, i.e. $$ \Vert f- f_y \Vert_{F'} = \sup_{\Vert x \Vert_F =1}( f(x)- \langle y,x\rangle) < \epsilon. $$
Free groups: I'm trying to understand the proof of $F(X)$ being a group. How do I prove the map $\phi : F(X) \to F_0$ is a homomorphism?
Take $w_1=x_1^{\delta_1}...x_n^{\delta_n}$ and $w_2=y_1^{\gamma_1}...y_m^{\gamma_m}$. What you need to do is to check that: $$\varphi(w_1)\varphi(w_2):=|x_1^{\delta_1}|...|x_n^{\delta_n}||y_1^{\gamma_1}|...|y_m^{\gamma_m}| $$ is the same as $\varphi(w_1w_2)$. But this is clear since: $$w_1w_2=x_1^{\delta_1}...x_n^{\delta_n}y_1^{\gamma_1}...y_m^{\gamma_m}\text{ whence } \varphi(w_1w_2):=|x_1^{\delta_1}|...|x_n^{\delta_n}||y_1^{\gamma_1}|...|y_m^{\gamma_m}|=:\varphi(w_1)\varphi(w_2) $$ So, if $\varphi$ is well defined then the "morphism" property is obvious. Beware, there is a little something here. One should demonstrate that $\varphi$ sending $x_1^{\delta_1}...x_n^{\delta_n}$ to $|x_1^{\delta_1}|...|x_n^{\delta_n}|$ is a well-defined function. Basically, it boils down to showing that two words having the same reduction will have the same image under $\varphi$. I don't think there is anything other than induction plus a tedious examination of cases to show that this is the case.
Cauchy's Integral Formula and Green's Theorem
We may safely assume $a=0$, since translation will not influence the integral in any way. We want to prove that $\oint_{C}\frac{f(z)}{z}dz=2\pi if(0)$. Writing $z=re^{i\theta}$, so that $dz=e^{i\theta}\,dr+ire^{i\theta}\,d\theta$, we have $$\oint_{|z|=R}\frac{f(re^{i\theta})}{re^{i\theta}}\left(e^{i\theta}\,dr+ire^{i\theta}\,d\theta\right)=\oint_{|z|=R}\frac{f(re^{i\theta})}{r}dr+if(re^{i\theta})d\theta$$ We cannot apply Green's theorem directly because it assumes the region is simply connected. But we can calculate the integral directly by taking the limit $R\rightarrow 0$. This can be done by noticing that $r=R$ is constant on the contour, so $dr=0$ and the first term vanishes; thus we are left with $$\oint_{|z|=R}if(Re^{i\theta})d\theta=i\int^{2\pi}_{\theta=0}f(Re^{i\theta})d\theta$$ Taking the limit of this integral as $R\rightarrow 0$ should give you $2\pi if(0)$. This small calculation is associated with Poincaré's lemma and de Rham cohomology. You may venture to read some reference books if you are interested.
How to prove that there exists $b\in \mathbb R$ such that $S=\{(b,b,\cdots,b)\}$, if $S$ is the set of minimum and maximum points of $f(x_{1},x_{2},\cdots,x_{n})$
Any real symmetric polynomial $f$ of degree $\leq2$ in the variables $x_1$, $\ldots$, $x_n$ can be written in the form $$f(x_1,\ldots,x_n)=a_0+ a_1 \sigma_1+a_2\sigma_1^2 + a_3\sigma_2\ ,\tag{1}$$ where $\sigma_1$ and $\sigma_2$ denote the elementary symmetric polynomials of degree $1$ and $2$ in the $x_i$. Since $\sigma_1^2=(x_1^2+\ldots+x_n^2)+2\sigma_2$ we can replace $(1)$ by the more convenient form $$f(x_1,\ldots,x_n)=c_0+ c_1 \sigma_1+c_2\sigma_1^2 + c (x_1^2+\ldots+x_n^2)\ .\tag{2}$$ When $c=c_2=0$ then $f$ is constant or linear, and $S={\mathbb R}^n$ or $=\emptyset$, in accordance with the claim. When $c=0$ and $c_2\ne 0$ then $f$ depends only on $\sigma_1$, and assumes a minimum or a maximum on some hyperplane $\sigma_1={\rm const.}$. This hyperplane certainly contains a point of the form $(b,b,\ldots,b)$. Finally assume $c\ne0$, and let $p\in S$. Then necessarily $${\partial f\over\partial x_i}=c_1+2c_2\sigma_1+2c x_i=0\qquad(1\leq i\leq n)$$ at $p$, which implies that all $p_i$ are equal.
Show that H is a subset of the normalizer
Let $x \in H$. We need to show $x^{-1}Hx = H$. Suppose $h\in H$, and consider $x^{-1}hx \in x^{-1}Hx$. Since $x \in H$, we have $x^{-1} \in H$, and so $x^{-1}hx \in H$. Hence, $x^{-1}Hx\subseteq H$ for $x\in H$. Now suppose $h \in H$ and consider $k := xhx^{-1}\in H$. Then, we have $x^{-1}kx = h \in x^{-1}Hx$. Hence, $H\subseteq x^{-1}Hx$, and so $x^{-1}Hx = H$. Hence, $x \in N(H)$.
Anti-diagonal matrix symmetric bilinear form
Let $x,y$ be as you defined above. Let $k$ be the underlying field. Define $f_1:V\rightarrow k$, $f_2:V \rightarrow k$ by $f_1(z)=\varphi(x,z)$ and $f_2(z)=\varphi(y,z)$. Define $V'=\ker(f_1)\cap\ker(f_2)$. Define $F(z)=(f_1(z),f_2(z))$. Notice that $F$ has rank 2, since $F(x)=(0,1), F(y)=(1,0)$, and $\ker(F)=V'$. Thus, $\dim(V')=\dim(V)-2$. Use induction on the dimension of the space in order to find a basis for $V'$ such that $\varphi$ is anti-diagonal w.r.t. this basis. Let $\{e_2,\ldots,e_{\dim(V)-1}\}$ be this basis. Let $e_1=x$ and $e_{\dim(V)}=y$. Finally, notice that $\varphi$ w.r.t. $\{e_1,\ldots,e_{\dim(V)}\}$ is anti-diagonal.
Is this triangle possible to draw?
We cannot draw this triangle since $$ DE+EF=7<8=DF $$ In any triangle, the sum of the lengths of any two sides must be greater than the length of the remaining side.
Is $V$ under ZFC really a proper class?
Asaf has already given a good answer to your question but I'll add a second perspective. You don't need to assume that every set belongs to the von Neumann hierarchy, since you can prove it. Suppose there were some set that didn't belong to this hierarchy. Then by Foundation, there is an $\in$-minimal such set, let's call it $x$. Every element of $x$ belongs to the von Neumann hierarchy, but $x$ itself (supposedly) does not. But if we let $\alpha = \sup \{ \mathrm{rank}(y)+1 : y \in x \}$, it's clear that $x \subset V_{\alpha}$ and so $x \in V_{\alpha + 1}$, meaning it does belong to the hierarchy after all. In fact, if $M$ is any transitive model of $ZFC$, it has what it thinks are the operations of power set and union, and what it thinks is the class of all ordinals, and so $M$ can construct what it thinks is the von Neumann hierarchy, and this resulting hierarchy will equal $M$. So in particular, it'll think that its version of the von Neumann hierarchy forms a proper class. But if $M$ were, say, the $\kappa ^{\mathrm{th}}$ level in the von Neumann hierarchy of some bigger model $N$ of set theory, where $\kappa$ is inaccessible in $N$, then $N$ will think of $M$ as a set whereas $M$ thinks of itself as a proper class. Along these same lines, $N$ will think of $\kappa$ as some ordinal whereas $M$ will think of $\kappa$ as the class of all ordinals. More interestingly, there will be subcollections of $M$ belonging to $N$ which aren't definable over $M$ (by a counting argument), thus they will be subcollections of $M$ which are neither sets in $M$ nor are they what $M$ would consider proper classes. The moral is that, generally speaking, what gets considered a proper class and what doesn't depends on the context. Given a transitive model $M$ of ZFC, a proper class in $M$ is technically a formula $\phi (x, p)$ with parameters $p$ from $M$ such that there is no member of $M$ consisting of precisely all those members of $M$ which satisfy the given formula (according to $M$), i.e. $$M \not \vDash \exists y \forall x\, (x \in y \leftrightarrow \phi (x, p))$$ and informally it's the collection of those things in $M$ satisfying that formula, i.e. $$\{x \in M | M \vDash \phi (x,p)\}$$ but this collection may exist as a member of some larger model, or it may not.
Probability on randomly selecting 3 balls from a bowl of 6 white and 5 black balls
It is okay to label and list all possible outcomes, but it becomes extremely tedious and even near impossible when you have large sets. When you label the balls you actually change the nature of the variable and thus requires a different solution route. Yet, you seem to understand this as both of your approaches were sufficient for this problem. So to directly answer your question, there is no need to label the balls in these kinds of problems. In fact, I recommend using the approach you did for $P(B)$ as this will be the most useful in difficult counting/probability problems and in this particular case order does not matter.
Computing this limit: $ \lim_{y\to0} \frac{f(x,y) - f(x-y,y)}{y} = g(x)$
No, that's not quite valid, though it's getting you to the right answer. The difference quotient inside the limit is not equal to the partial $\frac{\partial f}{\partial x}$, because it's just a difference quotient; it doesn't become a derivative until you take a limit. And even after you do, that quotient is not simply a partial in the $x$ direction, because both terms are changing. $\begin{align*} \lim_{y \to 0} \frac{f(x,y) - f(x-y,y)}{y} &= \lim_{y \to 0} \frac{f(x,y) - f(x,0)}{y} - \lim_{y \to 0} \frac{f(x-y,y) - f(x,0)}{y} \\ &= \frac{\partial f}{\partial y}(x,0) - \left(-\frac{\partial f}{\partial x}(x,0) + \frac{\partial f}{\partial y}(x,0)\right) \\ &= \frac{\partial f}{\partial x}(x,0) \end{align*}$ For the second equality above I recognized those difference quotients as directional derivatives.
Sigma finite measure positive on uncountable subset of the reals
Not possible. Suppose $A$ is uncountable and $\mu(\{x\}) > 0$ for every $x \in A$. If $\mu$ is $\sigma$-finite then we can write $A = \bigcup_{n=1}^\infty A_n$ where $\mu(A_n) < \infty$. Since $A$ is uncountable, one of the $A_n$ must be uncountable (otherwise $A$ would be a countable union of countable sets, which must be countable). So let's say $A_1$ is uncountable. Now for every $x \in A_1$, we have $\mu(\{x\}) > 0$, and therefore there is an integer $k$ (depending on $x$) such that $\mu(\{x\}) > 1/k$. So if we let $B_k = \{x \in A_1 : \mu(\{x\}) > 1/k\}$, then $A_1 = \bigcup_{k=1}^\infty B_k$. Then one of the $B_k$ must be infinite (else $A_1$ would be countable). But since every element of $B_k$ has measure at least $1/k$, this implies $\mu(B_k) = \infty$ which is a contradiction.
A conjecture on closed discrete subset
Not necessarily. Let $X$ be the square of the Sorgenfrey line and let $S=\{(x,-x):x\in\mathbb R\}$ be its antidiagonal; $S$ is an uncountable closed discrete subspace of $X$, yet $c(X)\le\omega$.
How to understand the convention on describing the "position" of mathematical objects
Although each preposition has a core meaning or set of (generally closely related) core meanings, prepositional usage is highly idiomatic, and not just in English: this is true in general. Once you get away from those core meanings, choice of preposition is largely a matter of idiom. For instance, there is no outstandingly obvious choice of preposition to express the relationship between a group and its operation; the use of under, as in a group under multiplication, is simply idiomatic. It makes some intuitive sense if we think of the operation as something imposed on the underlying set, but one could make cases for other ways of thinking about the relationship. Some of these idioms are already present in non-mathematical language. For instance, it is entirely idiomatic to speak of operating on or performing operations on things in general; to speak of addition, say, as an operation on the real numbers is just an instance of this existing idiom. In other cases a mathematical prepositional usage may not have clear non-mathematical parallels, and there may be no way to tell how it got started; presumably someone started using it, others picked it up, and it eventually became a standard idiom.
Linear independence is preserved under linear transformations with trivial kernel
Assume that $$b_1T(v_1)+\cdots+b_kT(v_k)=0$$ holds for some constants $b_1,\dots,b_k$. You can rewrite this as $$T(b_1v_1+\cdots+b_kv_k)=0$$ Can you now use your assumption on the kernel of $T$ and finish the proof?
Why my answer for that math brain teaser is wrong
Your first statement, that the bigger meadow was reaped by $n+\frac{n}{2}$ workers in one day, is not correct. That would only be true if $\frac{3}{2}n$ workers worked a full day to reap the larger field. But $n$ workers work $\frac{1}{2}$ day and $\frac{n}{2}$ workers work for $\frac{1}{2}$ day to reap the larger field. Therefore $n\cdot\frac{1}{2}+\frac{n}{2}\cdot\frac{1}{2}=\frac{3}{4}n$ worker-days are spent reaping the larger field. The same error is made later when you say that it takes $\frac{n}{2}+1$ workers to reap the smaller field. In fact it takes $\frac{n}{2}$ workers $\frac{1}{2}$ day and one worker one day to clear the smaller field, or $\frac{n}{2}\cdot\frac{1}{2}+1\cdot1=\frac{n}{4}+1$ worker-days. Taking these two errors into account, your equation should be $$ \frac{\frac{3}{4}n}{2}=\frac{n}{4}+1 $$ giving the correct solution $n=8$.
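A one-line symbolic check of the corrected equation (SymPy, with `n` as the unknown crew size):

```python
import sympy as sp

# Solve (3/4 n)/2 = n/4 + 1, i.e. 3n/8 = n/4 + 1, for n.
n = sp.symbols('n', positive=True)
print(sp.solve(sp.Eq(sp.Rational(3, 8)*n, n/4 + 1), n))   # [8]
```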
Showing $\sum_{b=0}^{N-1}\left(\frac{b}{p}\right)\zeta_M^{-kb} = 0$
Think I've got this now, was being slow! $$\sum_{b=0}^{N-1}\left(\frac{b}{p}\right)\zeta_M^{-kb} = \sum_{b=0}^{p-1}\left(\frac{b}{p}\right)\sum_{r=0}^{\frac{N}{p}-1}\zeta_M^{-k(b+rp)} = \sum_{b=0}^{p-1}\left(\frac{b}{p}\right)\zeta_M^{-kb}\sum_{r=0}^{\frac{N}{p}-1}\zeta_M^{-krp}$$ If $kp\equiv 0 \pmod M$ (therefore $p\equiv 0 \pmod M$, as $(M,k)=1$) then this simplifies to $$\frac{N}{p}\sum_{b=0}^{p-1}\left(\frac{b}{p}\right)\zeta_M^{-kb}$$ If $kp\not\equiv 0 \pmod M$ then let $N=aM$: $$\sum_{r=0}^{\frac{N}{p}-1}\zeta_M^{-krp} = \sum_{r=0}^{\frac{N}{p}-1}\zeta_{\frac{N}{p}}^{-kra} = 0$$ as $ka \not\equiv 0 \pmod{\frac{N}{p}}$ (because $p\not\equiv 0 \pmod M$).
How to prove $\frac{1}{n} > \frac{1}{n+1}$?
You could also subtract. $$\frac1n - \frac{1}{n+1} = \frac{1}{n(n+1)} > 0.$$ Since the difference is positive, $\frac1n$ must be the larger one.
Generalized class group of $\mathbb Q(\sqrt{-5})$
In Gras's book, $\Delta_{\infty}$ is always a set of real Archimedean places (that is, your definition should have read $\Delta_{\infty} = \mathrm{Pl}_{\infty}^r \setminus S_{\infty}$ -- you're missing the superscript $r$). So in your example, $\Delta_{\infty}$ is empty and we can ignore it: $$\mathrm{Cl}_{\mathfrak{m}}^S = I_T/P_{T,\mathfrak{m}}.$$ To answer your question about whether it makes a difference if $S$ is empty or not: in your example, no. Including the Archimedean place in $S$ has no effect on the definition of the generalized class group, and it does not affect the definition of $S$-units since the restriction added by Archimedean places is vacuous when those places are complex. In your example, $\mathrm{Cl}^S = \mathrm{Cl}$, $\mathrm{Cl}_{\mathfrak{m}}^S = \mathrm{Cl}_{\mathfrak{m}}$, and your computations of $|\mathrm{Cl}|$, $\phi(\mathfrak{m})$ and $(E^S \colon E_{\mathfrak{m}}^S)$ are correct. So everything comes down to seeing why $|\mathrm{Cl}_{\mathfrak{m}}| = |\mathrm{Cl}|$. The key fact that your post indicates you are missing is that if $T$ is any finite set of prime ideals in $K$ and $P_T$ is the group of principal ideals coprime to the ideals in $T$, then $$I_T/P_T \cong I/P.$$ In particular, this proves that it is not true that every ideal coprime to $\mathfrak{m}$ is principal. To get a map $I_T/P_T \rightarrow I/P$, we take an ideal coprime to $T$ and form a class in $I/P$. If we change our ideal by a principal ideal in $P_T$, we still get the same class in $I/P$, so the map is well-defined. It is immediate that the kernel is trivial. But the map is also surjective. A simple reason (which is much more high-powered than it needs to be) is that there are infinitely many prime ideals in any ideal class of any number field, so given a class $\mathfrak{c}$ in $I/P$, pick a prime ideal in the class outside of the (finite!) list of primes in $T$ and its class in $I_T/P_T$ will map to $\mathfrak{c}$. Since $P_{T, \mathfrak{m}} \subseteq P_T$, there is a surjection $I_{T}/P_{T, \mathfrak{m}} \rightarrow I_T/P_T$. Composing with an isomorphism $I_T/P_T \cong \mathrm{Cl}$ gives us a surjection $\pi \colon \mathrm{Cl}_{\mathfrak{m}} \rightarrow \mathrm{Cl}$. Thus, $|\mathrm{Cl}_{\mathfrak{m}}| \geq |\mathrm{Cl}|$ will always hold. In your example, we have equality because $P_{T, \mathfrak{m}} = P_T$ (if a principal ideal is not divisible by $\mathfrak{m}$, then there exists a generator that is congruent to $1$ modulo $\mathfrak{m}$ -- in fact, every generator is).
Proving $\sum_{cyc}\sqrt[3]{\frac{1}{a}+\frac{2}{bc}+a+2b+c}\leq\frac{6}{abc}$ for positive values such that $ab+bc+ca=3$
By Hölder: $$\sum_{cyc}\sqrt[3]{\frac{1}{a}+\frac{2}{bc}+a+3b+c}\le \sqrt[3]{\sum_{cyc}\left({\frac{1}{a}+\frac{2}{bc}+a+3b+c}\right)(1+1+1) \cdot (1+1+1)}$$ It hence suffices to prove $$\sqrt[3]{\sum_{cyc}\left({\frac{1}{a}+\frac{2}{bc}+a+3b+c}\right)(1+1+1)(1+1+1)}\le \frac{6}{abc}$$ $$\iff \sum_{cyc}\left({\frac{1}{a}+\frac{2}{bc}+a+3b+c}\right)\le \frac{24}{a^3b^3c^3}\tag 1$$ Now let $p=a+b+c$, $q=ab+bc+ca=3$, $r=abc$; then we rewrite (1) as $$\frac{q}{r}+\frac{2p}{r}+5p\le \frac{24}{r^3}$$ $$\iff qr^2+2pr^2+5pr^3\le 24$$ Now as $q^2\ge 3pr \iff p\le \frac{3}{r}$ and $q=3$, it suffices to prove $$3r^2+6r+15r^2\le 24$$ which is true as $r\le 1$ (by AM-GM). Note in your original question you had $\dots a+2b+c$; I think it's a typo and should be $a+3b+c$. All the same, since $$\sum_{cyc}\sqrt[3]{{\frac{1}{a}+\frac{2}{bc}+a+2b+c}}\le \sum_{cyc}\sqrt[3]{{\frac{1}{a}+\frac{2}{bc}+a+3b+c}}$$ the same proof applies, so we are done in either case.
Convergence of a sequence involving an integral
Hint: First, try to show that $\{x_n\}$ is non-increasing and bounded below by $0$, i.e. $x_n \ge 0$ and $x_n \le x_{n-1}$ for all integers $n \ge 1$. Once you do that, you know that $L := \displaystyle\lim_{n \to \infty}x_n$ exists, and consequently, $L$ satisfies $L = \dfrac{3}{4}L^2+\dfrac{1}{4}\displaystyle\int_0^Lf$ and $L \ge 0$. Can you then figure out what $L$ must be?
Evaluate integral $\int\int xe^{xy} dx dy$, strange result after rearranging
$$ \int_{-1}^0 \int_0^1 x e^{xy}\, dx\, dy=\int_0^1\int_{-1}^0 x e^{xy}\, dy\, dx=\int_0^1\left[e^{xy}\right]_{y=-1}^{0}dx=\int_0^1\left(1-e^{-x}\right)dx=1+(e^{-1}-1)=e^{-1}\ . $$
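SymPy confirms the value in one line:

```python
import sympy as sp

x, y = sp.symbols('x y')
# Integrate over y in [-1, 0] first, then x in [0, 1].
val = sp.integrate(x * sp.exp(x*y), (y, -1, 0), (x, 0, 1))
print(sp.simplify(val))   # exp(-1)
```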
Characterization of delta Distribution
You are looking for Hadamard's lemma. http://en.wikipedia.org/wiki/Hadamard%27s_lemma