On uniform convergence in normal families of functions
(A): $U=\{z: |z|<1\}, f_n(z)=z^{n}$. This sequence is normal but no subsequence converges uniformly on $U$. [This sequence converges uniformly on compact subsets of $U$ to $0$ but not on $U$: Suppose $|z^{n_k}| <\frac 1 2$ for all $z \in U$, for all $k$ sufficiently large. Let $z \to 1$ to get a contradiction]. (B) : The link you have given works.
Calculating Variance to the power of a variable
Your approach is fine but you have to modify $\mathbb E(X^p)$ which is calculated as follows: $$ \mathbb E(X^p)=\int_0^1 x^pdx=\frac{1}{p+1} $$ Therefore: $$ \mathbb E(Y^p)=\frac{1}{2}+\frac{1}{2}\frac{1}{p+1}\\ \mathbb E(Y^{2p})=\frac{1}{2}+\frac{1}{2}\frac{1}{2p+1} $$ and finally $$ \mathbb{Var}(Y^p)=\frac{1}{2}+\frac{1}{2}\frac{1}{2p+1}-\left(\frac{1}{2}+\frac{1}{2}\frac{1}{p+1}\right)^2 $$ and we can say: $$ \lim_{p\to\infty}\mathbb{Var}(Y^p)=\frac{1}{4}. $$
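As a quick numeric sanity check (my own verification, not part of the derivation), the closed form can be evaluated for a small and a very large $p$:

```python
def var_Yp(p):
    # Var(Y^p) = E[Y^(2p)] - (E[Y^p])^2, with the moments derived above
    EYp = 0.5 + 0.5 / (p + 1)
    EY2p = 0.5 + 0.5 / (2 * p + 1)
    return EY2p - EYp ** 2

print(var_Yp(1))      # Var(Y) = 2/3 - (3/4)^2 = 5/48
print(var_Yp(10**6))  # approaches 1/4
```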
error bound for polynomial interpolation with derivative matching
You are asking about Hermite interpolation, which is a generalization of the Newton interpolation polynomial. The error in your case is bounded by $$E(x)=\frac{f^{(4)}(c)}{4!} (x-a)^2(x-b)^2, $$ where $c\in [a,b]$. In the general case the error formula is given by $$E(x) =\frac{f^{(m)}(c)}{m!} \prod_{i=1}^{n}(x-x_i)^{m_i}, $$ where $c\in[x_0,x_n]$, $n$ is the number of points, $m$ is the total number of data values, and $m_i$ is the number of known values at $x_i$.
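As a sketch of the two-point case (my own example, with $f=\sin$ on $[0,1]$, so $|f^{(4)}|\le 1$): a cubic Hermite interpolant matching $f$ and $f'$ at both endpoints should satisfy the bound above pointwise.

```python
import math

def hermite_cubic(fa, dfa, fb, dfb, a, b, x):
    # standard cubic Hermite basis on [a, b]
    t = (x - a) / (b - a)
    h00 = 2*t**3 - 3*t**2 + 1
    h10 = t**3 - 2*t**2 + t
    h01 = -2*t**3 + 3*t**2
    h11 = t**3 - t**2
    return h00*fa + h10*(b - a)*dfa + h01*fb + h11*(b - a)*dfb

a, b = 0.0, 1.0
f, df = math.sin, math.cos
within_bound = []
for i in range(1, 200):
    x = a + (b - a) * i / 200
    err = abs(f(x) - hermite_cubic(f(a), df(a), f(b), df(b), a, b, x))
    bound = 1.0 / 24 * (x - a)**2 * (x - b)**2   # |f''''| <= 1 for sin
    within_bound.append(err <= bound + 1e-12)
print(all(within_bound))
```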
How is $K \subset G_{\alpha_1} \cup \cdots \cup G_{\alpha_n} $, where $G_\alpha$'s are the open subsets of X corresponding to open subsets of $Y$?
It's the definition of compactness relative to $X$: whenever $K$ is covered by a collection of open sets (relative to $X$), i.e. $K \subset \cup_\alpha G_\alpha$ (see Def. 2.31), it is covered by finitely many of them, so as per 2.32 we have finitely many indices $\alpha_1,\ldots,\alpha_n$ such that $K \subset G_{\alpha_1} \cup \ldots \cup G_{\alpha_n}$. So that step is just applying the definitions that were just given; it's the same formula as in 2.32.

Because we want to show that $K$ is compact relative to $Y$, we start with an open cover of $K$ by sets open relative to $Y$ (and we need a finite subcover of those open sets); these are the $V_\alpha$. Because the assumption on $K$ is compactness relative to $X$, we need to pass from the $V_\alpha$ to the corresponding $G_\alpha$ (which are open relative to $X$). Since the open sets are related by $G_\alpha \cap Y = V_\alpha$, we can use the finite cover by the $G_\alpha$ to go back to the original cover by the $V_\alpha$; if we used other open sets we would lose this connection.

We know that $K$ is covered by the finitely many $V_{\alpha_i}$. So if $x \in K$, we know that $x \in V_{\alpha_{i(x)}}$ for some $i(x) \in \{1,\ldots,n\}$. Since $V_{\alpha_{i(x)}} = G_{\alpha_{i(x)}} \cap Y \subseteq G_{\alpha_{i(x)}}$, we get $x \in G_{\alpha_{i(x)}}$. As this works for any $x$, $K$ is covered by $G_{\alpha_1},\ldots,G_{\alpha_n}$ for the same indices that work for the $V_\alpha$.
Determine second degree polynomial by least squares method
Assume your polynomial to be $$P(x) = ax^2 + bx + c$$ Now, the error in approximation is defined as $$E(P, f) = \left(\frac{1}{b-a}\int_a^b(P - f)^2dx\right)^\frac{1}{2}$$ Hence, your task is to find $a,b,c$ to minimise $$I = \int_{0.5}^{1.5}\left(ax^2+bx+c-\frac{3}{\sqrt{x}}\right)^2dx$$ Now, to solve for $a,b,c$, you will need to solve the system $$\frac{\partial I}{\partial a} = \frac{\partial I}{\partial b} = \frac{\partial I}{\partial c} = 0$$
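As a numeric sketch of this procedure (my own code, assuming the target $f(x)=3/\sqrt{x}$ on $[0.5,1.5]$ from the question): setting the three partial derivatives to zero gives the normal equations $\sum_j M_{i+j}\,c_j = \int x^i f(x)\,dx$ with moments $M_k=\int_{0.5}^{1.5}x^k\,dx$, and both sides integrate in closed form.

```python
def moment(k, lo=0.5, hi=1.5):
    # integral of x^k over [lo, hi]
    return (hi**(k + 1) - lo**(k + 1)) / (k + 1)

def rhs(i, lo=0.5, hi=1.5):
    # integral of x^i * 3/sqrt(x) = 3*x^(i - 1/2) over [lo, hi]
    p = i + 0.5
    return 3 * (hi**p - lo**p) / p

# normal equations: rows for dI/da, dI/db, dI/dc
A = [[moment(i + j) for j in (2, 1, 0)] for i in (2, 1, 0)]
y = [rhs(i) for i in (2, 1, 0)]

# tiny Gaussian elimination with back substitution (3x3 system)
for i in range(3):
    for r in range(i + 1, 3):
        m = A[r][i] / A[i][i]
        A[r] = [ar - m * ai for ar, ai in zip(A[r], A[i])]
        y[r] -= m * y[i]
coef = [0.0, 0.0, 0.0]
for i in (2, 1, 0):
    coef[i] = (y[i] - sum(A[i][j] * coef[j] for j in range(i + 1, 3))) / A[i][i]
a, b, c = coef
print("P(x) = %.4f x^2 + %.4f x + %.4f" % (a, b, c))
```

The test below confirms the gradient conditions hold at the computed coefficients.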
Find $n$ if the area between the curve of $y=x^n$ and the $y$ axis is $3$ times the area between the curve and the $x$-axis
The blue area is $$\int_a^b x^n dx = \frac{b^{n+1}-a^{n+1}}{n+1}.$$ The red (orange?) area is the area of the big rectangle with sides $b$ and $b^n$, minus the small (white) rectangle with sides $a$ and $a^n$, minus the above integral. That is $$b^{n+1}-a^{n+1}-\frac{b^{n+1}-a^{n+1}}{n+1}.$$ Hence, we need $$ b^{n+1}-a^{n+1}-\frac{b^{n+1}-a^{n+1}}{n+1} = 3 \cdot \frac{b^{n+1}-a^{n+1}}{n+1}. $$ Simplifying a bit gives $$ (n+1)(b^{n+1} - a^{n+1}) = 4 (b^{n+1} - a^{n+1}). $$ So we have $$(n - 3)(b^{n+1}-a^{n+1})=0.$$ Since $a$ and $b$ are arbitrary, we require $n=3$.
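A quick numeric check with $n=3$ and arbitrary sample values of $a$ and $b$ (my own choice) confirms the $3:1$ ratio:

```python
def riemann(f, a, b, steps=100000):
    # midpoint rule
    h = (b - a) / steps
    return sum(f(a + (i + 0.5) * h) for i in range(steps)) * h

n, a, b = 3, 0.7, 2.3             # arbitrary 0 < a < b
blue = riemann(lambda x: x**n, a, b)
red = b * b**n - a * a**n - blue  # big rectangle minus white rectangle minus blue
print(red / blue)                 # should be 3 for n = 3
```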
Two approaches to duality in TVS: dual pairing versus "canonical" dual. Are they equivalent?
Given a dual pair $\langle E,F\rangle$ you can endow $E$ with the weak topology $\sigma(E,F)$ defined by the seminorms $$p_J(x)=\max\{|\langle x,y\rangle|: y\in J\}$$ for finite subsets $J$ of $F$. Then the linear map $\Phi:F\to (E,\sigma(E,F))'$, $y\mapsto \langle \cdot,y\rangle$ is a bijection (injective because the duality is separating, for the surjectivity you need a standard lemma: For all linear functionals $\phi,\phi_1,\ldots,\phi_n$ on $E$ such that the intersection of the kernels of $\phi_i$ is contained in the kernel of $\phi$ you get $\phi$ as a linear combination of the $\phi_i$). The advantage of general dual pairs is the symmetry. For example, if you have proved the theorem of bipolars that $B^{\circ\bullet}$ is the closed absolutely convex hull of $B\subseteq E$ (where $B^\circ$ is the polar in $E'$ and $M^\bullet$ is the polar in $E$) you get without further ado also the dual version that $M^{\bullet\circ}$ is the $\sigma(E',E)$-closed absolutely convex hull of $M\subseteq E'$.
Show that $x^4-2$ is the minimal polynomial for $\sqrt[4]{2}$ over $\mathbb{Q}$
Our problem is equivalent to showing that $1,\sqrt[4]{2}, \sqrt{2}$ and $\sqrt[4]8$ are linearly independent over $\mathbb{Q}$. Assume the contrary; then we have rationals $a,b,c$ for which: $$\sqrt[4] 8 = a\sqrt2+b\sqrt[4]2+c$$ Raising the whole equation to the power of $4$ gives a value in $\mathbb{Q}[\sqrt[4]2,\sqrt2]$, so we would see that $\sqrt[4]2, \sqrt2$ and $1$ are not linearly independent either. Putting $\sqrt[4]2$ on the LHS and repeating the process again implies that $\sqrt2$ and $1$ are not linearly independent, i.e. $\sqrt 2$ is rational, which is a contradiction. Now, since $1,\sqrt[4]2,\sqrt2,\sqrt[4]8$ are linearly independent, there does not exist a polynomial of degree less than or equal to $3$ with $\sqrt[4]2$ as a root. Hence the minimal polynomial of $\sqrt[4]2$ has degree $4$. Moreover, the polynomial $x^4-2$ is unique up to scaling (otherwise you could take two minimal polynomials of degree $4$, scale them appropriately, subtract, and get a polynomial of smaller degree with $\sqrt[4]2$ as a root).
Is it impossible to grasp Multivariable Calculus with poor prerequisite from Single variable calculus?
The phrase "multivariable calculus" might be somewhat misleading. It might suggest that it is different from single variable calculus, i.e. first you learned SVC, and now you can leave that behind and learn MVC. This is not the case.

A great example of this is integrating with respect to two variables. It is a relatively simple and important concept in MVC, but it does not rely on some brand new ideas developed only for this field of mathematics that introduce a completely different way of thinking: integrating with respect to two variables is just integrating with respect to one variable twice. Unsurprisingly, this is no easier than integrating only once.

This brings me to my main point. In MVC, single variable concepts and their applications do not just pop up occasionally; they are important in every chapter and every class. They laid the groundwork for all future concepts, and most problems in MVC are solved by reducing them to SVC problems.

In conclusion: as was already mentioned in some of the comments, I don't want to give specific advice for your situation. A friend or teacher who understands you better will be in a better position to do that. However, I did want to make you aware of exactly what an MVC course is and what will be required of you in terms of single variable knowledge.
Markov models: Proving that an occupation law is a stationary law
This is really not particularly tied to Markov chains. It is simply the following calculation that makes sense for matrices $A$ with complex entries: if $L:=\lim_{n \to \infty} \frac{\sum_{k=0}^n A^k}{n}$ exists, then $$L=\lim_{n \to \infty} \frac{I}{n} + \lim_{n \to \infty} \frac{\sum_{k=1}^n A^k}{n} = 0 + \left ( \lim_{n \to \infty} \frac{\sum_{k=0}^{n-1} A^k}{n} \right ) A.$$ This follows by linearity of the limit operation (first step) and continuity of linear transformations (second step). Now that inner limit is again $L$, because $\frac{\sum_{k=0}^{n-1} A^k}{n} = \frac{\sum_{k=0}^{n-1} A^k}{n-1} \frac{n-1}{n}$ and $\frac{n-1}{n} \to 1$. So $L=LA$. You could also have obtained $L=AL$, but that is not particularly useful in the Markov chain context, in which you want the rows of $L$ to be invariant under $p \mapsto pA$.
Stable convergence equivalence
This is not true. In fact, stable convergence is weaker than the a.s. convergence of conditional distributions (see e.g. this paper). The flaw in the argument is that $\lim_n\mathsf{E}[\exp(itX_n)\mid \mathcal{G}_n]$ may not exist.
Find an injective group homomorphism $\varphi: D_4 → Sym_4$ with $D_4$ being the dihedral group
Label the vertices of the square with the numbers $1,2,3,4$, and then corresponding to each element of $D_4$ you get a map from $\{\,1,2,3,4\,\}$ to itself, that is, a member of $S_4$. This gives you your injection (not an isomorphism, as $D_4$ has eight elements; $S_4$, $24$).
Generalized Chu-Vandermonde identity
Let us look at the coefficient in curly brackets on the right hand side. The question is: is it possible to do the sum over the indices $\tilde{j}$ and $\eta$? We have found the following closed form for the sum over $\eta$. We have: \begin{equation} {\mathcal S}_{\tilde{j}}(l,m) := \sum\limits_{\eta=0}^l (-1)^\eta \binom{l}{\eta} \binom{(l-\eta) m}{\tilde{j} + l} = \int\limits_0^{2 \pi} \left(\left(1+e^{i \phi }\right)^m-e^{i m \phi }\right)^l e^{i \phi (\tilde{j}-l (m-1))} \frac{d \phi}{2 \pi} \end{equation} In particular we have: \begin{eqnarray} {\mathcal S}_{0}(l,m) &= & m^l \\ {\mathcal S}_{1}(l,m) &= & m^{l-1} \cdot \binom{m}{2} \cdot l \\ {\mathcal S}_{2}(l,m) &= & m^{l-2} \cdot (\binom{m}{2})^2 \cdot \binom{l}{2} + m^{l-1} \cdot \binom{m}{3} \cdot l \\ {\mathcal S}_{3}(l,m) &= & m^{l-3} \cdot (\binom{m}{2})^3 \cdot \binom{l}{3} + m^{l-2} \cdot \binom{m}{3} \binom{m}{2} \cdot 2 \binom{l}{2} + m^{l-1} \cdot \binom{m}{4} \cdot l \\ \vdots \end{eqnarray} Now, inserting the integral representation of our sum into the curly brackets, we can formally do the sum over $\tilde{j}$ by using the definition of the hypergeometric function. Here we exploit the fact that the integral vanishes whenever $\tilde{j} > l (m-1)$, and therefore we can extend the summation to infinity. Therefore we have: \begin{eqnarray} \left\{ \cdots \right\} = \int\limits_0^{2 \pi} F_{2,1} \left[ \begin{array}{ll} 1 && - \delta+j_2+l \\ -\delta+j_1+l+\xi m+1\end{array}; -e^{\imath \phi} \right] \left(\left(1+e^{i \phi }\right)^m-e^{i m \phi }\right)^l e^{i \phi (-l (m-1))} \frac{d \phi}{2 \pi} \end{eqnarray} For this integral to exist we must have $j_2 < j_1+\xi m + 1$.
If $\mu (x,y) = e^{\int h(xy) d(xy) }$ then is $\mu$ a function of $xy$?
Yes, this is technically abuse of notation: another way to write it would be $\mu(x,y)=(\phi\circ\mathop{\mathrm{mult}})(x,y)$, with $\mathop{\mathrm{mult}}$ being multiplication (so $\mathop{\mathrm{mult}}(x,y)=xy$) and $\phi$ being a function of a single variable (namely $\phi(t)=e^{\int h(t)\,\text dt}$), followed by renaming $\phi$ to $\mu$ again. So while not totally technically correct, this last step is a common thing to do within the mathematical community.
Reducing Subset-Sum to Scheduling with start time.
Maybe this answer is very late, but since I was studying this topic I thought I'd give it a shot. To reduce Subset Sum to Scheduling (with release time and deadline restrictions) you have to do the following:

1. Specify a function $f: \pi_1 \rightarrow \pi_2$ that is computable in polynomial time (where $\pi_1$ is decision problem 1, Subset Sum, and $\pi_2$ is decision problem 2, Scheduling).
2. Show that for every instance $I \in \pi_1$, $I \in Y_{\pi_1}$ if and only if $f(I) \in Y_{\pi_2}$, where $Y_{\pi_j}$ is, roughly speaking, the set of instances of decision problem $\pi_j$ that have a solution.

The transformation is done in the following way. Given $S = \{s_1,\ldots,s_n\}$, a set of positive integers with total sum $x$, and a target $k$, define jobs $\{p_1,\ldots,p_n\}$ with $r_i = 0$, $t_i = s_i$ and $d_i = x + 1$ for every $p_i$ (release time, duration of the job and deadline respectively), together with one extra job $p_{n+1}$ with $r_{n+1} = k$, $t_{n+1} = 1$ and $d_{n+1} = k+1$.

If there is a subset $S'$ whose elements add up to $k$, schedule the jobs corresponding to $S'$ (in any order) in $[0,k]$, job $p_{n+1}$ in $[k,k+1]$, and the remaining jobs after it; every job meets its deadline, so the Scheduling instance is solvable. Conversely, suppose the Scheduling instance is solvable. The extra job has to be done in $[k,k+1]$, and the remaining $x$ units of processing have to fit into the remaining $x$ units of time of the interval $[0,x+1]$, so there is no idle time. Hence the jobs $\{p_{i_1},\ldots,p_{i_m}\}$ scheduled before $p_{n+1}$ have durations adding up to exactly $k$, and the corresponding set $S'$ solves your Subset Sum instance.
Remember the release time $r_i$ indicates only at what time the job becomes available (it does not need to be started at that moment, but it does have to be finished before its deadline $d_i$). TL;DR: I believe you did not perform the transformation correctly. Please correct me if you think I made a mistake at some point; I'm trying to understand this topic as well. Cheers!
How to prove the Closed map lemma
The theorem is true in topological spaces in general, not just spaces in which sequences determine the topology (like metric spaces, for instance), so it can be proved without any use of sequences. In the general setting it's nets and filters that generalize sequences, but you don't need them here. If you want to learn more about them, a very good starting point is these notes by Pete L. Clark, who also posts here. Suppose that $H\subseteq X$ is closed, and let $K=f[H]\subseteq Y$. Let $\mathscr{U}$ be a family of open subsets of $Y$ such that $K\subseteq\bigcup\mathscr{U}$; that is, $\mathscr{U}$ is an open cover of $K$ in $Y$. Let $\mathscr{V}=\{f^{-1}[U]:U\in\mathscr{U}\}$; since $f$ is continuous, $\mathscr{V}$ is an open cover of $H$ in $X$. But $H$, being a closed subset of the compact space $X$, is compact, so some finite subset $\{V_1,\dots,V_n\}$ of $\mathscr{V}$ covers $H$. For $k=1,\dots,n$ there is $U_k\in\mathscr{U}$ such that $V_k=f^{-1}[U_k]$, and it's clear that $\{U_1,\dots,U_n\}$ covers $K$. Thus, every open cover of $K$ has a finite subcover, and $K$ is compact. Now just use (or prove) the result that every compact subset of a Hausdorff space is closed.
Condition for a set to be min-homogeneous
The desired conclusion, that $H$ is min-homogeneous, means that $c(x)=c(y)$ whenever $x,y\in[H]^n$ have the same smallest element. The given assumption says the same thing when $x\cup y$ has only $n+1$ elements, i.e., when $x$ and $y$ differ only in that a single element of $x$ has been removed and replaced by a different element to form $y$. To get the desired conclusion from this hypothesis, consider any $x$ and $y$ as in the conclusion and try to join them by a sequence of "one-step" moves as in the hypothesis. That is, we want a sequence $ x=z_0, z_1, z_2, \dots, z_{k-1}, z_k=y $ for some (finite) $k$ and some sets $z_i\in[H]^n$ such that each $z_i$ and $z_{i+1}$ differ by only a single element (and all the sets have the same minimum element). That is, we want to get from $x$ to $y$ by changing one element at a time. That's not hard to do; just consider each element of $x-y$ in turn and replace it with an element of $y-x$.
Rabin Cryptosystem Proof
Note that $m_p^2 \equiv c \pmod{p}$ by construction, and also $m_q^2 \equiv c \pmod{q}$. Also, by the EEA we chose $x_p,x_q$ so that $x_p p + x_q q=1$, so it follows that $$x_p \cdot p \equiv 1 \pmod{q} \text{ and } x_q \cdot q \equiv 1 \pmod{p}$$ So, using the CRT, look at what $r^2 \pmod{p}$ and $r^2 \pmod{q}$ equal: $$r^2 \equiv m_p^2 \equiv c \pmod{p}\tag{1}$$ and $$r^2 \equiv m_q^2 \equiv c \pmod{q}\tag{2}$$ so the uniqueness clause of the CRT says that $r^2 \equiv c \pmod{pq}$. If $r$ has this property, then so does $-r \equiv n-r \pmod{n}$, of course, as in any ring. The case for $s$ is similar; the cross term cancels out mod $n$ anyway. Note that the construction of $r$ and $s$ closely follows the construction in the proof of the CRT for two moduli.
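A small numeric sketch of this construction (toy primes $p=7$, $q=11$ and plaintext $m=20$ chosen here for illustration; both primes are $\equiv 3 \pmod 4$, so the square roots mod $p$ and $q$ can be computed by a single exponentiation):

```python
# toy parameters (assumed for illustration): p, q = 3 (mod 4)
p, q = 7, 11
n = p * q
m = 20                      # plaintext
c = m * m % n               # ciphertext

# square roots of c mod p and mod q (valid because p, q = 3 mod 4)
m_p = pow(c, (p + 1) // 4, p)
m_q = pow(c, (q + 1) // 4, q)

# EEA coefficients: x_p * p = 1 (mod q) and x_q * q = 1 (mod p)
x_p = pow(p, -1, q)
x_q = pow(q, -1, p)

# CRT combinations, exactly as in the proof
r = (x_p * p * m_q + x_q * q * m_p) % n
s = (x_p * p * m_q - x_q * q * m_p) % n
roots = sorted({r, n - r, s, n - s})
print(roots)
```

All four values square to $c$ modulo $n$, and the original plaintext is among them.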
How to prove that $H(X, Y) + H(Y, Z) \geq H(X,Z) + H(Y)$
\begin{align*} H(X,Y) + H(Y,Z) &= \overbrace{H(Y) + H(X|Y)} + H(Y,Z) \\ &\geq H(Y) + \underbrace{H(X|Y,Z) + H(Y,Z)} \\ &= H(Y) + H(X,Y,Z) \\ &\geq H(Y) + H(X,Z) \end{align*}
find ${dy}/{dx}$ if $x^y + y^x = 1$
This may seem tricky, but recall the derivatives of $2^x$ and $x^2$: $$\frac{d}{dx}(2^x)=2^x\ln(2) \quad \frac{d}{dx}(x^2)=2x$$ Now, let's try approaching the problem: $$\frac{d}{dx}(x^y)=yx^{y-1}+x^y\ln(x)\cdot\frac{dy}{dx},\qquad \frac{d}{dx}(y^x)=y^x\ln(y)+xy^{x-1}\cdot\frac{dy}{dx}$$ This may be hard to see, so let's look at it another "way" (more of a physicist's point of view): multiply both sides by $dx$, $$d(x^y)=yx^{y-1}dx+x^y\ln(x)dy, \qquad d(y^x)=y^x\ln(y)dx + xy^{x-1}dy$$ As you can see, the differential of each of $x^y$ and $y^x$ is the partial derivative with respect to $x$ times $dx$ plus the partial derivative with respect to $y$ times $dy$. Since $x^y+y^x=1$ gives $d(x^y)+d(y^x)=0$, $$\frac{dy}{dx}\left(x^y\ln(x)+xy^{x-1}\right)=-(y^x\ln(y)+yx^{y-1})$$ $$\frac{dy}{dx}=-\frac{y^x\ln(y)+yx^{y-1}}{x^y\ln(x)+xy^{x-1}}$$
Complex Triangle inequality
Notice that $|a|=|-a|$ and that $|b|+|a| \geq |b+a|$ so $$|z_2+1|+|z_1z_2+1|\geq|(z_2+1)-(z_1z_2+1)|$$ And for the second $$|z_2-z_1z_2| = |z_2(1-z_1)|= |z_2||1-z_1| = 1\cdot |1-z_1|$$
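A quick numeric check of both facts (assuming, as in the original problem, that $z_1$ and $z_2$ lie on the unit circle; the sampling is my own):

```python
import cmath, random

random.seed(0)
for _ in range(1000):
    z1 = cmath.exp(1j * random.uniform(0, 2 * cmath.pi))
    z2 = cmath.exp(1j * random.uniform(0, 2 * cmath.pi))
    # triangle inequality |a| + |b| >= |a - b| with a = z2 + 1, b = z1*z2 + 1
    assert abs(z2 + 1) + abs(z1 * z2 + 1) >= abs((z2 + 1) - (z1 * z2 + 1)) - 1e-12
    # |z2 - z1*z2| = |z2| * |1 - z1| = |1 - z1| because |z2| = 1
    assert abs(abs(z2 - z1 * z2) - abs(1 - z1)) < 1e-12
print("both checks pass")
```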
By using d'alembert's formula to deduce $k_1(z)$ and $k_2(z)$, show that they are periodic with period $2$.
Since $k_1(z)=-k_2(-z)$, it suffices to show that $k_1$ is periodic with period $2$. With this in mind, let's go at it directly: $$ k_1(2+z) = k_1(1+(1+z)) = -k_2(1-(1+z)) = -k_2(-z) = k_1(z), $$ where we have used $k_1(1+w)=-k_2(1-w)$ and then $k_1(z)=-k_2(-z)$. So the period is at most $2$, but may be $2/n$ for some integer $n$. In fact the period can't be less than $2$, but we need to lean on the initial conditions to do this: together they imply that $k_1(z)=k_2(z)=f(z)/2$ (see here), and so $$ k_1(1+z)=-k_2(1-z) = -f(1-z)/2 \neq f(z)/2 = k_1(z) $$ in general; moreover, the period can't be less than one unless $f$ is periodic.
Relation between general solutions and singular solution of Clairaut’s equation.
The envelope of the linear solution family is the curve close to which the lines for $C$ and $C+\Delta C$ intersect (or better the limit of these intersection loci for $ΔC\to 0$). The intersection can be computed as $$ Cx+f(C)=y=(C+ΔC)x+f(C+ΔC) $$ which then implies $$ 0=x+\frac{f(C+ΔC)-f(C)}{ΔC}. $$ In the limit $ΔC\to0$ this is exactly the equation $x(C)+f'(C)=0$. Then $$ y(C)=-f'(C)C+f(C) \implies \frac{dy}{dx}=\frac{y'(C)}{x'(C)} =\frac{-f''(C)C-f'(C)+f'(C)}{-f''(C)}=C $$ as long as $f''(C)\ne 0$. This can be described geometrically as the line with parameter $C$ having a "focal point" at $(x(C),y(C))$, and the curve of these focal points also having this line as its tangent at this point.
Proof of existence for a specific integral
Substitution $y = \sqrt{x}u$ gives $$ \int_{0}^{4} dx \int_{\sqrt{x}}^{2\wedge 2\sqrt{x}} \frac{dy}{\sqrt{x+y^2}} = \int_{0}^{4} dx \int_{1}^{\frac{2}{\sqrt{x}} \wedge 2} \frac{du}{\sqrt{1+u^2}}. $$ Now from the simple estimate $$ \int_{1}^{\frac{2}{\sqrt{x}} \wedge 2} \frac{du}{\sqrt{1+u^2}} \leq \int_{1}^{2} \frac{du}{\sqrt{1+u^2}} < \int_{1}^{2} \frac{du}{u} = \log 2, $$ we find that the integral is bounded above by $4\log 2 < \infty$.
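As a numeric sanity check of the bound (a crude midpoint rule in the original $(x,y)$ coordinates; step counts are arbitrary):

```python
import math

def inner(x, steps=400):
    # midpoint rule for the y-integral from sqrt(x) to min(2, 2*sqrt(x))
    lo, hi = math.sqrt(x), min(2.0, 2.0 * math.sqrt(x))
    if hi <= lo:
        return 0.0
    h = (hi - lo) / steps
    return sum(1.0 / math.sqrt(x + (lo + (i + 0.5) * h) ** 2)
               for i in range(steps)) * h

def outer(steps=400):
    h = 4.0 / steps
    return sum(inner(h * (i + 0.5)) for i in range(steps)) * h

val = outer()
print(val, 4 * math.log(2))  # the value stays below the bound 4 log 2
```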
Missing Exercises in Elementary Number Theory by Underwood Dudley.
[My comment has been well-received, so I'll risk elevating it to an answer] Numerical questions will have a unique correct answer, but "prove" questions will have so many correct answers that providing one would be pretty useless. If your answer differed from the one provided, you would have no idea whether your answer was wrong, or just different.
If $\alpha$, $\beta$ and $\gamma$ are the roots of an equation, then find the value of .
Hint: If $\alpha,\beta,\gamma$ are the roots of $(x-1)^3+8=0$, what are $\alpha-1,\beta-1$ and $\gamma-1$ the roots of?
Damped pendulum equation
We have $$ \ddot{\theta} =\frac{1}{2}\frac{d}{d\theta}\dot{\theta}^2; $$ then, substituting $p =\dot{\theta}^2$, you should be able to derive the formula you desire, with suitable initial conditions.
Solve this Diophantine equation: $4xyz=x+2y+4z$
We know that $x,y,z\geq 1$. The first step is to notice that $x$ is even. Now if $x \geq 3$, then: $$4xyz=xyz+3xyz \geq x+9yz=x+2yz+7yz\geq x+2y+7z > x+2y+4z$$ That means $x=2$. And the equation becomes: $$8yz=2+2y+4z\Rightarrow 4yz=1+y+2z$$ Now, similarly, if $y\geq 2$, we have: $$4yz=yz+yz+2yz\geq 1+y+4z>1+y+2z$$ That means $y=1$. And solving for $z$, we get $z=1$. This gives the unique solution $(x,y,z)=(2,1,1)$.
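A brute-force search over a small range (the bounds in the argument show nothing larger can work; the range is my own generous choice) confirms the unique solution:

```python
# exhaustive search; the inequalities above rule out any larger x, y, z
solutions = [(x, y, z)
             for x in range(1, 30)
             for y in range(1, 30)
             for z in range(1, 30)
             if 4 * x * y * z == x + 2 * y + 4 * z]
print(solutions)
```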
One More Ordinary Differential Equation
To be more puristic, write $$f(x) = a\int_0^x f(t) dt + b (3 \cos(x) - 2) + c$$ Differentiate both sides using the fundamental theorem of calculus; this gives $$f'(x)=a f(x)-3b \sin(x)$$ Now, for the homogeneous part $$f'(x)=a f(x) \implies f(x)=C(x) e^{ax}$$ Now, variation of parameters leads to $$C'(x)=-3b \sin(x)e^{-ax}$$ which does not seem to be very difficult.
Proof that the function $\cot(\pi z)$ is uniformly bounded on the sides of the square with vertices $\pm(N+1/2)\pm i(N+1/2)$, $N\in\mathbb{N}$.
Uniformly bounded just means that the bound doesn't depend on $N$. First note that $$ |\cot \pi z|^{2} = \Big|\frac{\cos \pi z}{\sin \pi z} \Big|^{2}$$ $$ = \Big| \frac{\cos \pi x \cosh \pi y -i \sin \pi x \sinh \pi y}{\sin \pi x \cosh \pi y + i \cos \pi x \sinh \pi y }\Big|^{2}$$ $$ =\frac{\cos^{2} \pi x \cosh^{2} \pi y +\sin^{2} \pi x \sinh^{2} \pi y}{\sin^{2} \pi x \cosh^{2} \pi y + \cos^{2} \pi x \sinh^{2} \pi y } $$ $$ = \frac{\cos^{2} \pi x(1+ \sinh^{2} \pi y) + \sin^{2} \pi x \sinh^{2} \pi y}{\sin^{2} \pi x(1+\sinh^{2} \pi y) + \cos^{2} \pi x \sinh^{2} \pi y} $$ $$ = \frac{\cos^{2} \pi x + \sinh^{2} \pi y}{\sin^{2} \pi x +\sinh^{2} \pi y}$$ Next note that $\cos^{2}\Big( \pi (N+\frac{1}{2}) \Big) = 0$ and $\sin^{2} \Big( \pi (N+ \frac{1}{2}) \Big) =1$. So on the vertical sides of the square (where $x = N + \frac{1}{2}$ or $x = - N -\frac{1}{2}$), $$ |\cot \pi z|^{2} =\frac{\sinh^{2} \pi y}{1+ \sinh^{2} \pi y} = \frac{\sinh^{2} \pi y}{\cosh^{2} \pi y} = \tanh^{2} \pi y $$ $$ \implies |\cot \pi z| = |\tanh \pi y| \le 1. $$ And on the horizontal sides of the square (where $y = N + \frac{1}{2}$ or $y = -N - \frac{1}{2}$), $$ |\cot \pi z|^{2} \le \frac{1 + \sinh^{2} \pi y}{\sin^{2} \pi x + \sinh^{2}\pi y} $$ $$ = \frac{\cosh^{2} \pi y}{\sin^{2} \pi x + \sinh^{2} \pi y} \le \frac{\cosh^{2}\pi y}{\sinh^{2} \pi y}= \coth^{2} \pi y$$ $$ \implies |\cot \pi z| \le |\coth \pi y| \le \coth \frac{\pi}{2} \approx 1.09, $$ since $\coth$ is decreasing on $(0,\infty)$ and $|y| = N + \frac{1}{2} \ge \frac{1}{2}$. Thus $|\cot \pi z|$ is bounded by $\coth \frac{\pi}{2}$ on the sides of the square, which implies that $\cot \pi z$ is uniformly bounded on the sides of the square.
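A quick numeric check, sampling points on the sides of the squares for the first few values of $N$ (the sample density is arbitrary):

```python
import cmath, math

bound = 1 / math.tanh(math.pi / 2)   # coth(pi/2), roughly 1.0903

def cot(z):
    return cmath.cos(z) / cmath.sin(z)

worst = 0.0
for N in range(5):
    s = N + 0.5
    for k in range(-100, 101):
        t = s * k / 100              # parameter running along each side
        for z in (complex(s, t), complex(-s, t), complex(t, s), complex(t, -s)):
            worst = max(worst, abs(cot(math.pi * z)))
print(worst, bound)
```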
Show that $f$ is Riemann integrable on $[0,1]$ when $f(x)= \operatorname{sgn}(\sin(\frac{\pi}{x}))$ for all $x \in (0,1]$ and $f(0)=1$
Firstly, for the sake of simplifying the argument, I'm going to assume that the $\operatorname{sgn}$ function is undefined at $0$. Normally it's defined to be $0$ at $0$, but this would spoil the simplicity of the argument. There is a way around it, but it makes things much more complicated, and I doubt your professor took this into account when they gave you this exercise.

So, let's take the partition $$P_n = \left\{\left[0, \frac{1}{n}\right], \left[\frac{1}{n}, \frac{1}{n-1}\right], \left[\frac{1}{n-1}, \frac{1}{n-2}\right] \ldots, \left[\frac{1}{3}, \frac{1}{2}\right], \left[\frac{1}{2}, 1\right]\right\}.$$

The first thing to note is that $f(x)$ is constant on each interval except the first. For $x \in \left[\frac{1}{2}, 1\right]$, we have $\pi/x \in [\pi, 2\pi]$, hence $\sin(\pi/x) \le 0$, and thus $f(x) = -1$. For $x \in \left[\frac{1}{3}, \frac{1}{2}\right]$, we have $\pi/x \in [2\pi, 3\pi]$, so $\sin(\pi/x) \ge 0$, and $f(x) = 1$. Each successive interval, until $\left[0, \frac{1}{n}\right]$, alternates between $-1$ and $+1$ in this way. On the interval $\left[0, \frac{1}{n}\right]$, on the other hand, $f$ achieves both $1$ and $-1$, and alternates between them.

Now, we calculate $U(f, P_n)$ by computing the maximum value of $f$ on each of the intervals, multiplying the result by the length of the interval, and summing these signed areas. We compute $L(f, P_n)$ similarly, taking the minimum value of $f$ on each interval instead. But note that $f$ is constant on each interval except the first! That is, the upper sum and lower sum agree on every interval except the first. This means that, if we subtract $L(f, P_n)$ from $U(f, P_n)$, all but one term will cancel: the term corresponding to the first interval $\left[0, \frac{1}{n}\right]$.
So, \begin{align*} U(f, P_n) - L(f, P_n) &= \left(\max_{x \in \left[0, \frac{1}{n}\right]} f(x)\right) \times \frac{1}{n} - \left(\min_{x \in \left[0, \frac{1}{n}\right]} f(x)\right) \times \frac{1}{n} \\ &= 1 \times \frac{1}{n} - (-1) \times \frac{1}{n} = \frac{2}{n}. \end{align*} Because $\frac{2}{n} \to 0$ as $n \to \infty$, for any $\varepsilon > 0$ there is some $n$ such that $\frac{2}{n} < \varepsilon$. This tells us that $P_n$ is the partition you want.
Let $f:ℝ→ℕ$ be onto. Does there exist a $g:ℕ→ℝ$ such that $f(g(b))=b$ for all $b∈ℕ$?
No. This is equivalent to the statement "given any countable family of pairwise disjoint sets of real numbers, there is a choice function". Such a principle, for example, would imply that every subset of the real numbers is finite or has a countably infinite subset. This was shown to be unprovable from $\sf ZF$ by Cohen; the model he constructed is now one of the most important models of $\sf ZF+\lnot AC$, known as "Cohen's first model". There are other constructions that would make it even more explicit where and how a counterexample could consistently exist, but to truly appreciate them you'd first need to understand set theory in general, forcing, and symmetric extensions.
How to divide a triangle in 3 equal parts from altitude to hypotenuse?
Note that when you divide your triangle using lines parallel to the base, you get three similar triangles. Use the fact that for two similar triangles whose sides are in the ratio $\frac{p}{q}$, the areas are in the ratio $\frac{p^2}{q^2}$. Here, since the areas $a$ and $a+b$ are in the ratio $\frac{1}{2}$, the corresponding sides are in the ratio $\frac{1}{\sqrt{2}}$; similarly, the sides of $a$ and $a+b+c$ are in the ratio $\frac{1}{\sqrt{3}}$. This means you need to divide your altitude into three segments in the ratio $1:\sqrt{2}-1:\sqrt{3}-\sqrt{2}$.
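A quick numeric check of the resulting cut heights (taking the full altitude to be $1$):

```python
import math

h = 1.0                          # altitude of the full triangle (scale is irrelevant)
h1 = h / math.sqrt(3)            # cut heights measured from the apex,
h2 = h * math.sqrt(2) / math.sqrt(3)  # so segments are 1 : sqrt(2)-1 : sqrt(3)-sqrt(2)

# areas of similar triangles scale with the square of the height
a = h1 ** 2
ab = h2 ** 2
abc = h ** 2
parts = [a, ab - a, abc - ab]
print(parts)                     # three equal parts
```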
Software to express a mathematical expression in terms of three other functions?
https://www.desmos.com/ You can define functions of multiple variables and you can define functions in terms of other functions you have defined.
Continuous functions in a metric space using the discrete metric
HINT: Is there any function from $X$ to $X$ that is not continuous? Your comment about what happens when $f(x)\ne f(a)$ suggests that you have some misunderstanding of continuity, because the identity function from $X$ to $X$ is always continuous, no matter what metric you’re using, and it’s never constant unless $X$ has only one point.
How to divide a positive integer $x$ in $n$ pieces so that every possible division is equally likely?
You can adapt the stars and bars argument:

1. Choose $n-1$ integers randomly from $\{1,2,3,\ldots, x+n-1\}$ without replacement.
2. Include $0$ and $x+n$ in your sample set, so it now has $n+1$ elements.
3. Sort your sample set, so it has $n+1$ ordered elements starting at $0$ and ending at $x+n$.
4. Take the differences, so you have $n$ differences, each at least $1$, adding up to $x+n$.
5. Subtract $1$ from each of the differences, so you have $n$ non-negative integers adding up to $x$.

There are $\displaystyle {x+n-1 \choose n-1}$ equally likely possibilities in the first step, as you would expect. If instead you wanted all your values to be positive integers, you would change $x+n$ to $x$ in the first four steps and ignore the final step.
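The steps above translate directly into a short sampler (a sketch; the function name is my own):

```python
import random

def random_composition(x, n, rng=random):
    """Return n non-negative integers summing to x, uniform over all
    such tuples, via the stars and bars construction above."""
    cuts = sorted(rng.sample(range(1, x + n), n - 1))  # n-1 values from {1,...,x+n-1}
    points = [0] + cuts + [x + n]                      # add the endpoints 0 and x+n
    return [points[i + 1] - points[i] - 1 for i in range(n)]

random.seed(1)
sample = random_composition(10, 4)
print(sample, sum(sample))
```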
What does the statement "Optimality condition for convex problem" mean? KKT or other condition?
You can, and must, assume that $f$ is differentiable. Otherwise a trivial counterexample is to pick some $y$ strictly in the interior of $X$ and let $f(x) = \|x - y\|$. It is convex, but at the minimum (which is achieved at $x^* = y$) no partial derivatives exist. On the other hand, without the smoothness condition, a convex function is automatically locally Lipschitz, and so we can consider its subdifferentials; a similar statement can be made regarding subdifferentials for your problem. Perhaps the "special" thing that the instructor is looking at is that $X$ is a box, and so the form of the KKT multipliers simplifies?
Nonlinear DE with initial conditions and substitution method
Plugging in your initial conditions, you get that $$-\frac{1}{2} + C = -1 \implies C = -\frac{1}{2}$$ From here just use separation of variables again $$\int -dx = \int \frac{2\:dy}{1+y^2} \implies y = -\tan\left(\frac{x}{2}+C\right)$$ with another $+C$. Again plugging in initial conditions gives us $C= -\frac{\pi}{4}$. $y$ is only defined when $x-\frac{\pi}{2}$ is between odd multiples of $\pi$ (why?). Can you figure out which interval that would have to be for this problem?
Proof that $f(x)=4x^4-2x+1$ has no real roots.
$f'(x)=2(8x^3-1)$, so there's a single critical point: $\; x=\frac12$. $f''(x)=48x^2\ge 0$, so by the second derivative test, this critical point is a minimum, and this minimum is an absolute minimum. $f(\frac12)=\frac14>0$.
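As a quick numeric illustration (the grid range and spacing are my own arbitrary choices), the global minimum value $f(\frac12)=\frac14$ is indeed positive:

```python
def f(x):
    return 4 * x**4 - 2 * x + 1

# sample f on a wide grid; the smallest sampled value is f(1/2) = 1/4 > 0
vals = [f(-10 + i / 100) for i in range(2001)]
print(min(vals))
```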
Proof of Wald's Identity
By the definition of a stopping time, the event $\{N ≤ n-1\}$ is completely determined by $X_1, \cdots , X_{n-1}$; that is, its indicator can be written as $g(X_1, \cdots , X_{n-1})$. And for conditional expectation, $E[f(X)g\mid X]=f(X)E[g\mid X]$, so you get the next step.
Simple integration for absolute energy formulation
Let $v=v(t)$. Then $dv=\dot v\,dt$ and we can write $$\begin{align} \int m\frac{d\dot v}{dt}dv&=m\int \frac{d\dot v}{dt}\dot v\,dt\\\\ &=\frac12 m\int \frac{d(\dot v)^2}{dt}\,dt\\\\ &=\frac{m(\dot v)^2}{2} \end{align}$$
Convergence of series in p-adic field or non-archimedian field
In a non-archimedean space, we have the wonderful property that $\sum_{n \geq 0} a_n$ converges as soon as $a_n \to 0$. Since $a_n \to 0$ implies that $a_n^2 \to 0$, you can conclude that $\sum_{n \geq 0} a_n^2$ converges.
Is the homomorphic image of a PID a PID?
This is not the case in general. Note for example that $\Bbb Z$ is a PID. Consider quotient rings of $\Bbb Z$ to find homomorphic images of $\Bbb Z$ that are not PIDs (in fact, not even integral domains). However, if we know that a ring homomorphism $f:R\to S$ is one-to-one, then of course it is true, since $R$ is then isomorphic to $f(R)$.
Prove geometric sequence question
$$\sum_{k=1}^n kx^k = n\sum_{k=1}^n x^k$$ is not true, as we cannot pull the summation index out of the sum. Let $$S=x+2x^2+3x^3+4x^4+...+nx^n \ \ \ \ (1)$$ The given series is basically an arithmetico-geometric series, see here or here. So, $$x\cdot S=x^2+2x^3+3x^4+4x^5+...+nx^{n+1}\ \ \ \ (2)$$ $$(1)-(2)\implies S(1-x)=x+x^2+\cdots+x^n-nx^{n+1}$$ Now, $$x+x^2+\cdots+x^n=x\frac{x^n-1}{x-1}$$
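A quick numeric check of the identity $S(1-x)=x+x^2+\cdots+x^n-nx^{n+1}$ (sample values of $x$ and $n$ chosen arbitrarily):

```python
x, n = 0.3, 12
S = sum(k * x**k for k in range(1, n + 1))         # x + 2x^2 + ... + n x^n
geom = sum(x**k for k in range(1, n + 1))          # x + x^2 + ... + x^n
lhs = S * (1 - x)
rhs = geom - n * x**(n + 1)
print(lhs, rhs)
```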
Measure of an ellipsoid
Any positive definite symmetric matrix $M$ can be written as $M = A^tA$ with some invertible matrix $A$ (for instance the positive definite square root $A=M^{1/2}$), and then the inequality becomes $\| Ax \| < 1$, and the set of solutions is the image of the unit ball $\| y \|<1$ under the inverse map $x = A^{-1} y$. The change of volume under linear maps is given by the determinant, so the volume of the ellipsoid is $c_n|\det A^{-1}| = \frac{c_n}{|\det A|} = \frac{c_n}{\sqrt{\det M}}$, where $c_n$ is the volume of the $n$-dimensional unit ball. Oops, you don't assume that the matrix is symmetric. In that case, the ellipsoid is described by the inequality $x^t N x < 1$ with $N = \frac12 (M+M^t)$ symmetric and positive definite, so you get $ \frac{c_n}{\sqrt{\det N}}$
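For a concrete check in dimension $n=2$ (where $c_2=\pi$), here is a small deterministic sketch: it counts grid cells inside the ellipse $x^tMx<1$ for one example symmetric positive definite $M$ (my choice, purely for illustration) and compares with $\pi/\sqrt{\det M}$.

```python
import math

M = [[2.0, 0.5], [0.5, 1.0]]                      # example SPD matrix
det_M = M[0][0] * M[1][1] - M[0][1] * M[1][0]     # = 1.75
predicted = math.pi / math.sqrt(det_M)            # c_2 / sqrt(det M)

def inside(x, y):
    # x^t M x < 1
    return M[0][0]*x*x + 2*M[0][1]*x*y + M[1][1]*y*y < 1

# Count grid-cell centers inside, over a box that contains the ellipse
n, half = 800, 1.3
h = 2 * half / n
count = sum(inside(-half + (i + 0.5) * h, -half + (j + 0.5) * h)
            for i in range(n) for j in range(n))
area = count * h * h
```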
Why is $ f:x \mapsto bx $homomorphic?
Note that the groups are written with addition. Therefore to get a homomorphism it has to be $f(x+y)=f(x)+f(y)$. Then $f(x+y)=b(x+y)=bx+by=f(x)+f(y)$.
Using L'Hopital's rule to find a pole
Yes, this is correct and it tells you that $$ \operatorname*{Res}_{z=\pi/6}\left(\frac1{1-2\sin(z)}\right)=-\frac1{\sqrt3}. $$ L'Hospital's rule also works for holomorphic functions on $\mathbb{C}$.
Strategies for evaluating sums $\sum_{n=1}^\infty \frac{H_n^{(m)}z^n}{n}$
About question 3, we have $$\sum_{n\geq1}\frac{H_{n}^{\left(2\right)}}{n2^{n}}=\int_{0}^{1/2}\frac{\textrm{Li}_{2}\left(x\right)}{x\left(1-x\right)}dx=\int_{0}^{1/2}\frac{\textrm{Li}_{2}\left(x\right)}{x}dx+$$ $$+\int_{0}^{1/2}\frac{\textrm{Li}_{2}\left(x\right)}{1-x}dx=\textrm{Li}_{3}\left(\frac{1}{2}\right)+\int_{0}^{1/2}\frac{\textrm{Li}_{2}\left(x\right)}{1-x}dx $$ now note that, using integration by parts, $$\int_{0}^{1/2}\frac{\textrm{Li}_{2}\left(x\right)}{1-x}dx=-\log\left(\frac{1}{2}\right)\textrm{Li}_{2}\left(\frac{1}{2}\right)-\int_{0}^{1/2}\frac{\log^{2}\left(1-x\right)}{x}dx.$$ So let us analyze $$J=\int_{0}^{1/2}\frac{\log^{2}\left(1-x\right)}{x}dx $$ using integration by parts a few times: $$J=\log^{2}\left(\frac{1}{2}\right)\log\left(\frac{1}{2}\right)+2\int_{0}^{1/2}\frac{\log\left(1-x\right)\log\left(x\right)}{1-x}dx= $$ $$=\log^{2}\left(\frac{1}{2}\right)\log\left(\frac{1}{2}\right)+2\textrm{Li}_{2}\left(\frac{1}{2}\right)\log\left(\frac{1}{2}\right)+2\int_{1/2}^{1}\frac{\textrm{Li}_{2}\left(u\right)}{u}du $$ $$=\log^{2}\left(\frac{1}{2}\right)\log\left(\frac{1}{2}\right)+2\textrm{Li}_{2}\left(\frac{1}{2}\right)\log\left(\frac{1}{2}\right)+2\zeta\left(3\right)-2\textrm{Li}_{3}\left(\frac{1}{2}\right) $$ so we have $$\sum_{n\geq1}\frac{H_{n}^{\left(2\right)}}{n2^{n}}=3\textrm{Li}_{3}\left(\frac{1}{2}\right)-3\log\left(\frac{1}{2}\right)\textrm{Li}_{2}\left(\frac{1}{2}\right)-\log^{2}\left(\frac{1}{2}\right)\log\left(\frac{1}{2}\right)-2\zeta\left(3\right)= $$ $$=\frac{5\zeta\left(3\right)}{8}. $$
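The closed form is easy to confirm numerically, since the $2^{-n}$ factor makes the series converge extremely fast (a Python sketch; $\zeta(3)$ is approximated by direct summation):

```python
# Partial sum of  sum_{n>=1} H_n^{(2)} / (n 2^n); the tail beyond n = 200
# is far below double precision because of the 2^{-n} factor
s, h2 = 0.0, 0.0
for n in range(1, 201):
    h2 += 1.0 / n**2              # harmonic number H_n^{(2)}
    s += h2 / (n * 2.0**n)

# zeta(3) by direct summation; the tail after N terms is about 1/(2 N^2)
zeta3 = sum(1.0 / n**3 for n in range(1, 100_001))
```

The partial sum matches $\frac{5\zeta(3)}{8}\approx0.75129$ to many digits.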
Stuck at an assignment for conditional probability
Here is my understanding of the scheme. If the student can answer neither of the two questions on his paper (case 0), he fails. If he can't answer 1 question but can answer the other (case 1), he will be given another paper. If he can answer neither question on the additional paper (case 2), he fails. He wins otherwise. $$ P(\text{case 0}) = \frac{C_{44}^0 C_6^2}{C_{50}^2}\\ P(\text{case 1}) = \frac{C_{44}^1 C_6^1}{C_{50}^2}\\ P(\text{case 2}) = \frac{C_{43}^0 C_5^2}{C_{48}^2}\\ $$ $P(win) = 1 - (P_0 + P_1 P_2) \approx 0.985845$ (that high; maybe I am wrong about the exam scheme? UPD1: But if the student can take 2 papers regardless of his ability to answer both questions on the 1st paper, the chances are even higher, so every student should take 2 papers and select 2 questions out of 4.) *$C_n^k \equiv$ Binomial[n,k] UPD2: After posting this answer I noticed (look at the right side menu more often!) that this is a duplicate ☹
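For reference, the same arithmetic in Python (`math.comb` in place of Binomial[n,k]):

```python
from math import comb

p0 = comb(44, 0) * comb(6, 2) / comb(50, 2)   # case 0: both questions bad
p1 = comb(44, 1) * comb(6, 1) / comb(50, 2)   # case 1: exactly one question bad
p2 = comb(43, 0) * comb(5, 2) / comb(48, 2)   # case 2: both bad on the 2nd paper
p_win = 1 - (p0 + p1 * p2)
```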
Projection of vector onto span
To find the projection of $y$ onto $\operatorname{span}(S)$, write $S = \{v_1, v_2, v_3\}$, where these are the vectors given above. Assuming the $v_i$ are orthogonal, find the component of $y$ along each $v_i$; call these coefficients $a_1, a_2, a_3$ (so $a_i = \frac{\langle y, v_i\rangle}{\langle v_i, v_i\rangle}$). Then you can write $P_S(y) = a_1v_1 + a_2v_2 + a_3v_3$, and thus you've written the projection of $y$ onto $\operatorname{span}(S)$. (If the $v_i$ are not orthogonal, orthogonalize them first, e.g. by Gram-Schmidt.)
Do you know of any instance when a continued fraction is really of use?
Continued fractions provide another representation of real numbers, offering insights that are not revealed by the decimal representation. For example, the golden ratio has the continued fraction [1; 1, 1, ...], and e = [2; 1, 2, 1, 1, 4, 1, 1, 6, 1, 1, 8, 1, 1, 10, 1, 1, 12, 1, 1, ...]. Continued fractions can be used to: approximate real numbers and functions, compute square roots, solve Pell's equation, break RSA encryption, prove the sum-of-two-squares theorem, recognize rational numbers, find lattice points close to curves (which is useful in the study of dynamical systems), and estimate eigenvalues and eigenvectors.
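The expansion itself is just the Euclidean algorithm. A minimal Python sketch for rational inputs (the function name is mine):

```python
from fractions import Fraction

def continued_fraction(q):
    """Continued-fraction expansion [a0; a1, a2, ...] of a rational q,
    computed by repeated division (the Euclidean algorithm)."""
    terms = []
    while True:
        a, rem = divmod(q.numerator, q.denominator)
        terms.append(a)
        if rem == 0:
            return terms
        q = Fraction(q.denominator, rem)

# 355/113, the classical approximation of pi, has a very short expansion,
# which is exactly why it approximates pi so well
print(continued_fraction(Fraction(355, 113)))   # [3, 7, 16]
```

Ratios of consecutive Fibonacci numbers, the convergents of the golden ratio, come out as runs of 1s, matching [1; 1, 1, ...].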
Inner product vs scalar rpoduct
The quotation above holds in $\mathbb{C}$ as well. It might not be clear what "projection" means in the context of complex vectors, but algebraically it is exactly as with real vectors (remember the conjugate, though). In $\mathbb{C}$, the "inner" product of $a+ib$ and $c+id$ is: $$ ac+ bd +i (ad-bc). $$ In a way, its real part is the $\mathbb{R}^2$ scalar product, and its imaginary part is some sort of "cross product" (can you see why?). What matters, anyway, is that this way squared moduli behave exactly like in $\mathbb{R}^2$: $$ \langle(a+ib) | (a+ib)\rangle = a^2 + b^2. $$ Passing from $\mathbb{C}$ to $\mathbb{C}^n$, or any other complex vector space, pretty much the same holds: squared moduli of vectors are exactly the same as the squared moduli of $\mathbb{R}^{2n}$ vectors, while general inner products acquire a skew-symmetric imaginary part. (This skew-symmetric part should not disturb you, because it is imaginary: think of Hermitian matrices, which generalize real symmetric matrices by adding an imaginary skew-symmetric part.) In linear algebra, you can think of the inner/scalar product as something that can "give you the coordinates". (Geometers please don't be mad at me for saying this - I agree with you but this is what they do in physics!) For example, to get the second component of the vector $v=(x,y,z)$ we calculate its inner product with the basis element $(0,1,0)$, getting $v_2=y$. Just as well, a function on the real line $f:x\mapsto f(x)$ is a vector in a vector space (they can be summed and multiplied by scalars). If the functions are suitably integrable (say, $L^1$ or $L^2$), we can define the inner product in the usual way. Now, a function has as its infinite components all its values $f(x)$ for all the $x$! So to know the "$x_0$-th" component, that is $f(x_0)$, we simply take the scalar product $\langle f | x_0\rangle$, where $| x_0\rangle$ is the continuous analogue of $(0,1,0)$, our "basis element".
You can think of it as zero everywhere, and a sharp bump in $x_0$ (look at Dirac Delta if you don't know what it is). Strictly speaking, the basis elements $|x\rangle$ form improper vectors, but this is a subtle, entirely different story (look at the spectral theorem, and at Schwarz distributions to know more about this).
Limit point of sin function in real analysis
Your answer is right. Consider $a_n=\sin\left(\frac{n \pi}{2}\right)$ for $n=4k$, $n=4k+1$, $n=4k+2$ and $n=4k+3$. This splits your sequence into four subsequences, converging to $0$, $1$, $0$ and $-1$ respectively, so the limit points are $-1$, $0$ and $1$. In this kind of problem you need to take the arguments modulo $2\pi$ in order to find all limit points. For example, if you have $b_n=\cos(n \pi / 6)$ you have to consider 12 subsequences in order to cover all the cases. On the other hand, if you have $c_n=\sin(n)$, $n\in \mathbb{N}$, then this is a real problem. One needs special techniques to show that every point of $[-1,1]$ is a limit point of $c_n$.
Lowest and Highest point on a curve of intersection using lagrange multipliers
You should optimize $$ \min / \max\ x_3$$ subject to $$2x_1+4x_3=5$$ $$x_1^2+x_2^2=2x_2$$ Edit: $$L(x, \lambda) = x_3 + \lambda_1(2x_1+4x_3-5)+\lambda_2(x_1^2+x_2^2-2x_2)$$ Setting the partial derivatives to zero: $$2\lambda_1+2\lambda_2x_1=0\tag{1}$$ $$2\lambda_2x_2-2\lambda_2=0\tag{2}$$ $$1+4\lambda_1=0\tag{3}$$ $$2x_1+4x_3=5\tag{4}$$ $$x_1^2+x_2^2=2x_2\tag{5}$$ From equation $(3)$, $\lambda_1=-\frac14$. Equation $(1)$ then gives $\lambda_2 x_1=\frac14$, so $\lambda_2\neq0$, and equation $(2)$ forces $x_2=1$. Substituting into equation $(5)$ gives $x_1^2=1$, i.e. $x_1=\pm1$ (with $\lambda_2=\pm\frac14$). Equation $(4)$ then gives $x_3=\frac{5-2x_1}{4}$, so the lowest point is $\left(1,1,\frac34\right)$ and the highest point is $\left(-1,1,\frac74\right)$.
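You can double-check by eliminating the constraints: the second constraint is the circle $x_1^2+(x_2-1)^2=1$, so $x_1=\cos t$, $x_2=1+\sin t$ is always feasible, and the plane gives $x_3=(5-2x_1)/4$. A brute-force scan (Python sketch) then recovers the same extremes:

```python
import math

# Scan the feasible circle and track the extremes of x3
best_lo, best_hi = float('inf'), float('-inf')
N = 100_000
for i in range(N):
    t = 2 * math.pi * i / N
    x1 = math.cos(t)              # x2 = 1 + sin(t) satisfies the circle constraint
    x3 = (5 - 2 * x1) / 4         # from the plane constraint 2*x1 + 4*x3 = 5
    best_lo = min(best_lo, x3)
    best_hi = max(best_hi, x3)
```

The scan gives a lowest value of $\frac34$ (at $x_1=1$) and a highest value of $\frac74$ (at $x_1=-1$).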
Convex function inequality for Euclidean norm: $\|(f(x_1),\cdots,f(x_n))\|_2\leq f(\|x\|_2)$
Let $g$ be the linear function that agrees with $f$ at $0$ and $\|x\|_2$: specifically, $$g(t) = \frac{f(\|x\|_2)}{\|x\|_2} t$$ By convexity, $f(t)\le g(t)$ for $0\le t\le \|x\|_2$. Assuming $f$ is nondecreasing, we also have $|f(t)|=f(t)\le g(t)$. Hence, $$\|(f(x_1),\cdots,f(x_n))\|_2\leq \|(g(x_1),\cdots,g(x_n))\|_2 = g(\|x\|_2) = f(\|x\|_2)$$ Without the assumption that $f$ is nondecreasing the inequality fails: nothing prevents $f$ from becoming very negative between $0$ and $\|x\|_2$, making $|f(x_k)|$ huge.
Induction principle (theorem meaning)
There is no restriction in the theorem. The theorem holds quite regardless of what the sentence symbols are; the problem you point to isn't one, really. Enderton probably (I don't have the book) defines the set of sentence symbols as the set containing exactly $A_1$, $A_2$ and so on (or $p_0$ and $p_1$ and so on; how he writes them doesn't matter much). So he has some set -- one single set, throughout the book -- of sentence symbols. Let's call that set $\mathrm{Sym}$. And let's call the set of all wffs $\mathrm{Wff}$. Now what the theorem says is: if you have a set $S \supset \mathrm{Sym}$ which is closed under formula-building operations, then $S \supset \mathrm{Wff}$. If you decide you don't like Enderton's $\mathrm{Sym}$, you can define your own. So let's replace $\mathrm{Sym}$ by the set $\mathrm{Sym'} := \{(A_1 \lor A_2)\}$, and not change any other definition. Then we have defined a new language. $\mathrm{Wff}$ depends on $\mathrm{Sym}$: it contains exactly the things we can build up from the things in $\mathrm{Sym}$. When we change $\mathrm{Sym}$, we change $\mathrm{Wff}$. Our new set of formulas $\mathrm{Wff'}$ will contain $(A_1 \lor A_2)$ and $\neg (A_1 \lor A_2)$ (and things like $\neg (A_1 \lor A_2) \lor ((A_1 \lor A_2) \lor \neg (A_1 \lor A_2))$), but it won't contain $\neg A_1$ nor, for instance, $A_3$, because we can't build those from (the only thing in) $\mathrm{Sym'}$. Now when we do this, the theorem will still hold: it does not really depend on $\mathrm{Sym}$. When we define our new $\mathrm{Sym'}$, we will be able to prove, exactly in the way Enderton proves his theorem, that whenever $P \supset \mathrm{Sym'}$ is closed under formula-building operations, $P \supset \mathrm{Wff'}$. $P$ might not contain $\neg A_1$ -- but, again, that is no longer a formula.
How do I find the critical points of this function involving e?
The only $x$ that makes that zero is $x=-2$. Divide both sides by the $e^{\dots}$ part which you know is never zero.
Graphing functions and identifying their Laplace transforms
(a) First let's choose an arbitrary $a>0$. For every $x<a$ you have $x-a<0$, and for every $x\ge a$ you have $x-a\ge 0$. Now you need to plot $u(x-a)$. This is a straight line along the horizontal axis up until $x=a$, then another horizontal line at $y=1$ starting at $a$ and going towards $\infty$. (b) $[x]$ is $0$ on the interval $[0,1)$, then $1$ on the interval $[1,2)$, and so on. This looks like a staircase increasing towards the right. Do not plot the vertical pieces, like, for example, the line between $(1,0)$ and $(1,1)$. (c) Repeat the same procedure as in part (b); you should get a saw-tooth pattern. (d) Start plotting $\sin(x)$, starting from $0$. When your $y$ coordinate is $0$ again, you have reached $x=\pi$. From there it's a horizontal line.
Find a function $f$ with given criterion
The typical example is to let $f$ be mostly $0$, but have narrower and narrower, evenly spaced spikes from $0$ to $1$ and back to $0$ again as $x\to \infty$. The narrowness of the spikes lets the integral converge, but stops $f$ from having a limit.
Game on the tree, cutting branch
The technique is well described in Winning Ways or (in less detail) under Sprague-Grundy theory. Each tree is a nim-heap. To get its value, look at the values of all positions that you can move to; the value is the minimum value not in that set (the mex). For example, consider a tree with a single stick from the root and three branches at the top. You can move to zero or to a similar tree with two branches at the top. To get the value of the one with two branches: you can move to zero or to *2 (a bare stick of two edges), so its value is *1. Then the one with three can move to 0 or *1, so it has value *2. The value of the forest is the bitwise XOR of the binary values of all the trees. The position is losing if the value is zero. A single tree will never be zero as you can always chop it down at the root.
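The mex computation above can be automated; equivalently, the "colon principle" from Winning Ways gives a shortcut: the value at a node is the XOR over its children of (1 + the value of the child's subtree). A Python sketch, with a tree represented as a nested list of children (a leaf is `[]`; representation and names are mine):

```python
def grundy(tree):
    # Colon principle: XOR over children of (1 + child's Grundy value)
    value = 0
    for child in tree:
        value ^= 1 + grundy(child)
    return value

leaf = []
two_branch = [[leaf, leaf]]           # one stick, two branches at the top
three_branch = [[leaf, leaf, leaf]]   # one stick, three branches at the top

def forest_value(trees):
    # The forest is losing exactly when this XOR is zero
    v = 0
    for t in trees:
        v ^= grundy(t)
    return v
```

This reproduces the values in the example: *1 for the two-branch tree, *2 for the three-branch tree, and a stalk of $n$ edges comes out as *$n$.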
How do you solve the following non linear system?
Substitute $y = \frac{\lambda x}{2}$ (eqn 1) into $x + 2 = 2 \lambda y$ (eqn 2) to obtain $x+2=\lambda^2 x$, i.e. $$x = \frac{2}{\lambda^{2}-1} \ (\text{eqn } 1').$$ Eqn 1 then yields $$y = \frac{\lambda}{\lambda^{2}-1} \ (\text{eqn } 2').$$ Plug these values of $x$ and $y$ into eqn 3, $\frac{x^{2}}{4} + y^{2} = 1$: $$\frac{1+\lambda^2}{(\lambda^2-1)^2}=1 \implies \lambda^2(\lambda^2-3)=0,$$ so $\lambda=0$ or $\lambda = \pm\sqrt{3}$. Plugging $\lambda=\pm\sqrt3$ into eqn 1' and eqn 2' gives the solutions $$\left ( 1,\tfrac{\sqrt{3}}{2} \right ), \left ( 1,-\tfrac{\sqrt{3}}{2} \right ),$$ and $\lambda=0$ gives the additional solution $(-2,0)$. You can check the solutions by plugging the values of $x$ and $y$ into eqn 3 and confirming the left-hand side equals $1$.
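A quick numerical check that the candidate points satisfy the system; note in particular that $\lambda=0$ also solves it, giving the point $(-2,0)$ (Python sketch):

```python
import math

def residuals(x, y, lam):
    # The three equations of the system, rearranged to equal zero
    return (y - lam * x / 2,
            x + 2 - 2 * lam * y,
            x**2 / 4 + y**2 - 1)

solutions = [(1.0,  math.sqrt(3) / 2,  math.sqrt(3)),
             (1.0, -math.sqrt(3) / 2, -math.sqrt(3)),
             (-2.0, 0.0, 0.0)]                    # the lambda = 0 solution
```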
How are definition 1 and definition 2 equivalent?
They are not. For example $\|x\|=0$ for $x=0$ and $\|x\|=1$ otherwise obeys definition 2, but not definition 1.
What is the distribution of a nested Laplace?
$$\begin{align*} f_{X_3 \mid X_1}(x_3 \mid x_1) &= \int_{x_2=-\infty}^\infty f_{X_3 \mid X_2}(x_3 \mid x_2) f_{X_2 \mid X_1}(x_2 \mid x_1) \, dx_2 \\ &= \frac{1}{4b^2} \int_{x_2 = -\infty}^\infty e^{-|x_3 - x_2|/b} e^{-|x_2 - x_1|/b} \, dx_2 \\ &= \frac{1}{4b^2} \int_{x_2 = -\infty}^\infty e^{-(|x_3 - x_2|+|x_2 - x_1|)/b} \, dx_2. \end{align*}$$ At this point, it becomes clear that the integration will depend on whether $x_3 > x_1$ or $x_3 < x_1$. Without loss of generality, suppose the former. Then note $$|x_3 - x_2| + |x_2 - x_1| = \begin{cases} x_1 + x_3 - 2x_2, & x_2 \in (-\infty, x_1) \\ x_3 - x_1, & x_2 \in [x_1, x_3] \\ 2x_2 - x_1 - x_3, & x_2 \in (x_3, \infty). \end{cases}$$ So when $x_3 > x_1$, $$\begin{align*} f_{X_3 \mid X_1}(x_3 \mid x_1) &= \frac{1}{4b^2} \left( \int_{x_2 = -\infty}^{x_1} \!\!\!\! e^{-(x_1 + x_3)/b} e^{2x_2/b} \, dx_2 + \int_{x_2 = x_1}^{x_3} \!\!\!\! e^{(x_1 - x_3)/b} \, dx_2 + \int_{x_2 = x_3}^\infty \!\!\!\! e^{(x_1 + x_3)/b} e^{-2x_2/b} \, dx_2 \right) \\ &= \frac{1}{4b^2} \left( \frac{b}{2}e^{-(x_1 + x_3)/b} e^{2x_1/b} + (x_3 - x_1) e^{(x_1 - x_3)/b} + \frac{b}{2} e^{(x_1 + x_3)/b} e^{-2x_3/b} \right) \\ &= \frac{1}{8b} \left( e^{-(x_3 - x_1)/b} + \frac{2}{b} (x_3 - x_1)e^{-(x_3 - x_1)/b} + e^{-(x_3 - x_1)/b} \right) \\ &= \frac{1}{4b} \left( 1 + \frac{x_3 - x_1}{b} \right)e^{-(x_3 - x_1)/b}. \end{align*}$$ When $x_3 < x_1$, then the roles of $x_1$ and $x_3$ are switched, thus we can write the full conditional density as $$f_{X_3 \mid X_1}(x_3 \mid x_1) = \frac{1}{4b} \left( 1 + \frac{|x_3 - x_1|}{b}\right) e^{-|x_3 - x_1|/b}, \quad x_3 \in (-\infty, \infty).$$ Clearly this is not Laplace but it is in a sense analogous to how a gamma distribution is a generalization of the exponential distribution. With this expression, I think now you should be able to use simulation to confirm it is correct.
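Following the suggestion at the end, here is a minimal seeded simulation (Python sketch; the sampling trick and names are mine). Under the derived density, a short calculation gives $E|X_3-x_1|=\frac{3b}{2}$, which the sample mean should reproduce:

```python
import random

random.seed(0)
b, x1, N = 1.0, 0.0, 100_000

def laplace(mu, scale):
    # The difference of two independent Exp(1/scale) variables is Laplace(0, scale)
    return mu + random.expovariate(1 / scale) - random.expovariate(1 / scale)

# Simulate X2 ~ Laplace(x1, b), then X3 ~ Laplace(X2, b)
total = 0.0
for _ in range(N):
    x2 = laplace(x1, b)
    x3 = laplace(x2, b)
    total += abs(x3 - x1)

mean_abs = total / N          # should be close to 3b/2 = 1.5
```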
Algebra and guesswork
Actually, these equations have no solution in the real numbers. Note that the arithmetic mean of the five numbers is $\frac{200}{5} = 40$ while the quadratic mean is $\sqrt{\frac{1870}{5}} = 19.339\ldots$, contradicting the $QM-AM$ inequality.
Question to be solved using matrices
You can set up your problem as a pair of simultaneous equations, one for total wheat bought and one for total rice bought. $$5N+2S=34\\2N+5S=1$$ Solving this gives a solution with a negative number though, so I am not too sure how this result should be interpreted.
Particle acceleration position problem (Calculus)
Consider that $\cos(6) = \cos(2\pi-6)$ and that $\cos^2(t)\approx \left(1-\frac{t^2}{2}\right)^2\approx 1-t^2$ for small $t$ using the Taylor series approximation of $\cos$. $2\pi-6\approx 0.3$, so $\cos^2(6)\approx 1-0.3^2\approx 0.9$. Therefore, $x(6)\approx 6.9$.
If two biased coins show the same side, what is the probability that the two coins were taken from the same box?
This is how I would have done the question, to find $P(F|E)$ rather than $P(E|F)$: The probability both coins come from the same box and both show the same side $P(E,F)$ is $$\left(\frac14\right)^2\left(\frac12\right)^2 +\left(\frac34\right)^2\left(\frac12\right)^2 + \left(\frac13\right)^2 \left(\frac12\right)^2 +\left(\frac23\right)^2\left(\frac12\right)^2 = \frac{85}{288}.$$ The probability both coins come from the different boxes and both show the same side $P(E,F^C)$ is $$2\left(\left(\frac14\right)\left(\frac12\right)\left(\frac13\right)\left(\frac12\right)+ \left(\frac34\right)\left(\frac12\right)\left(\frac23\right)\left(\frac12\right)\right) = \frac{84}{288}.$$ So you are correct when you say $P(E)=\frac{169}{288}$ as it is the sum of those two terms. Given both coins show the same side, the probability both coins come from the same box $P(F|E)$ is $$\frac{\frac{85}{288}}{\frac{85}{288}+\frac{84}{288}}=\frac{85}{85+84}=\frac{85}{169}.$$ If you really wanted to calculate $P(E|F)$ then, using $P(F) = \frac12$, it would be ${\frac{85}{288}}/{\frac12} = \frac{85}{144}$ which as leonboy points out is half of your result.
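Exact rational arithmetic confirms all three fractions (Python sketch with `fractions`):

```python
from fractions import Fraction as F

# P(E, F): same box and same side; each coin's box is chosen with prob 1/2
p_same = (F(1, 4)**2 + F(3, 4)**2 + F(1, 3)**2 + F(2, 3)**2) * F(1, 2)**2

# P(E, F^C): different boxes and same side
p_diff = 2 * (F(1, 4) * F(1, 3) + F(3, 4) * F(2, 3)) * F(1, 2)**2

answer = p_same / (p_same + p_diff)   # P(F | E)
```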
Rearrangement of harmonic oscillation formulae
Set $A\cos(\omega t) = \operatorname{Re} (Ae^{i\omega t})$ and, analogously, $B\sin(\omega t) = \operatorname{Re} (-iBe^{i\omega t})$. Then we can add the two together. $$\begin{align*} \operatorname{Re}(Ae^{i\omega t})+\operatorname{Re} (-iBe^{i\omega t})&=\operatorname{Re} (Ae^{i\omega t}-iBe^{i\omega t})\\ &=\operatorname{Re} (e^{i\omega t}(A-iB))\\ &=\operatorname{Re} (e^{i\omega t}Ce^{i\phi}),\,\,\,\phi=-\arctan\left(\frac{B}{A}\right),\,\, C=\sqrt{A^2+B^2}\\ &=\operatorname{Re} (Ce^{i(\omega t + \phi)})\\ &=C\cos(\omega t + \phi) \end{align*}$$ So in conclusion, you were essentially right, but off by a minus sign.
Question about $2 \times 2$, determinant one matrices over a Euclidean domain
Hint $\ $ Using row/col operations a la Euclid's algorithm you can reduce to a matrix where $\,a\mid b,c.\,$ Therefore $\,a\mid ad-bc = 1.\,$ Hence wlog $\,a = 1,\,$ so, we can reduce $\,b,c\,$ to $\,0,\,$ so $\,d = 1.$
Parametrization for the following surface
A possible parametrization is the following one : \begin{align} X\left(t,\theta\right)&=\left(x(t,\theta),y(t,\theta),z(t,\theta)\right)\\ &=\left(\cos\theta,t,\sin\theta\right) \\ &=u_{\theta}+tN \end{align} where $u_\theta:=(\cos\theta,0,\sin\theta)$ and $N:=(0,1,0)$. At any point $X(t,\theta)$ of the surface, a basis of the tangent space (plane here) is then constituted by the tangent vectors \begin{align} X_t(t,\theta)=N, \end{align} \begin{align} X_\theta(t,\theta)=u'_\theta:=(-\sin\theta,0,\cos\theta). \end{align} The surface element is then \begin{align} \mathrm{d}S&=|X_t\wedge X_\theta |\mathrm{d}t\mathrm{d}\theta\\ &=|N\wedge u'_\theta |\mathrm{d}t\mathrm{d}\theta\\ &=|-u_\theta| \mathrm{d}t\mathrm{d}\theta\\ &= \mathrm{d}t\mathrm{d}\theta \end{align} so between the planes $\{y=0\}$ and $\{y=3-x\}$, the integral of $y^2$ is \begin{align} \int_{0\leq y\leq3-x}y^2\mathrm{d}S&=\int_{0\leq t\leq3-\cos\theta}\int_{0<\theta<2\pi}t^2\mathrm{d}t\mathrm{d}\theta\\ &=\int_{0<\theta<2\pi}\frac{\left(3-\cos\theta\right)^3}{3}\mathrm{d}\theta\\ &=\frac{1}{3}\int_{0<\theta<2\pi}\left(27-27\cos\theta+9\cos^2\theta-\cos^3\theta\right)\mathrm{d}\theta\\ &=21\pi. \end{align}
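A numerical check of the last integral (midpoint rule in Python; the integrand is smooth and $2\pi$-periodic, so the rule converges essentially to machine precision):

```python
import math

# Midpoint rule for (1/3) * integral_0^{2pi} (3 - cos t)^3 dt
N = 100_000
h = 2 * math.pi / N
total = sum((3 - math.cos((i + 0.5) * h))**3 for i in range(N)) * h / 3
```

The result agrees with $21\pi\approx65.97$.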
Homology of the loop space
General idea for computation of $H(\Omega X)$ (due to Serre, AFAIK) is to consider a (Serre) fibration $\Omega X\to PX\cong pt\to X$ and use Leray-Serre spectral sequence (it allows, in particular, to compute easily (at least, in simply-connected case) $H(\Omega X;\mathbb Q)$; cohomology with integer coefficients are, indeed, more complicated). It's discussed, I believe, in any textbook covering LSSS — e.g. in Hatcher's.
Question about "equivalent" definitions for small inductive dimension of topological spaces
Yes, you are right. As far as I know, small inductive dimension is usually defined only for regular spaces. Similarly, large inductive dimension is usually defined only for normal spaces, and covering dimension only for completely regular spaces. Also note that for being zero-dimensional the conditions are always equivalent, and it implies regularity.
isomorphism in rest classes
Consider the map $\varphi\colon\mathbb{Z}/mn\mathbb{Z}\rightarrow\mathbb{Z}/m\mathbb{Z}\times\mathbb{Z}/n\mathbb{Z}$ defined by: $$\varphi(x\bmod mn)=(x\bmod m,x\bmod n).$$ Show that $\varphi$ is well-defined, namely that the image of $x\bmod mn$ does not depend on the lift in $\mathbb{Z}$; you will use that $m\vert mn$ and $n\vert mn$. Then, it is easily checked that $\varphi$ is a ring homomorphism; therefore, by cardinality, it is an isomorphism if and only if it is injective, which follows from the following fact: If $m|x$ and $n|x$, then $mn|x$, since $m$ and $n$ are coprime. This is derived from: If $a\vert bc$ and $a$ and $b$ are coprime, then $a\vert c$. Actually, one can explicitly build the inverse of $\varphi$ using Bezout's theorem. There exists $(u,v)\in\mathbb{Z}^2$ such that $um+vn=1$; now define $\Psi\colon\mathbb{Z}/m\mathbb{Z}\times\mathbb{Z}/n\mathbb{Z}\rightarrow\mathbb{Z}/mn\mathbb{Z}$ by: $$\Psi(x\bmod m,y\bmod n)=xvn+yum\bmod mn.$$ Show that $\Psi$ is well-defined and that $\varphi\circ\Psi$ and $\Psi\circ\varphi$ are the identity maps. There is no chance for $\mathbb{Z}/8\mathbb{Z}$ and $\mathbb{Z}/2\mathbb{Z}$ to be isomorphic; they don't even have the same cardinality. In $\mathbb{Z}/8\mathbb{Z}$ there is an element of order $8$; what about $(\mathbb{Z}/2\mathbb{Z})^3$?
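A small sketch of the explicit inverse (Python; `egcd` computes the Bezout coefficients, names are mine):

```python
def egcd(a, b):
    """Extended Euclid: returns (g, u, v) with u*a + v*b = g = gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, u, v = egcd(b, a % b)
    return g, v, u - (a // b) * v

def crt_inverse(m, n):
    """Build Psi for coprime m, n from Bezout: u*m + v*n = 1."""
    g, u, v = egcd(m, n)
    assert g == 1, "m and n must be coprime"
    return lambda x, y: (x * v * n + y * u * m) % (m * n)

psi = crt_inverse(3, 5)
```

One can check that `psi(x, y)` reduces to `x` mod 3 and `y` mod 5 for every residue pair, which is exactly the statement that $\Psi$ inverts $\varphi$.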
Most Efficient Method of Solving 3 Non-Linear (Quadratic) Equations
You don't need iterative methods here; you can determine $P_0=(x_0,y_0)$ immediately, recalling a few simple facts from elementary geometry. So if you call your given points $P_1,P_2,P_3$, you're looking for a $P_0$ with $\sphericalangle P_1P_0P_2=\theta_{12}$ and $\sphericalangle P_2P_0P_3=\theta_{23}$. But those conditions say that $P_0$ lies on two circles, one through $P_1,P_2$, the other through $P_2,P_3$, and you can determine their centers $M_1$ and $M_2$ from the inscribed angle theorem, giving you the angles $\sphericalangle P_1M_1P_2$ and $\sphericalangle P_2M_2P_3$. Those two circles intersect in two points, one being $P_2$, of course. So $P_0$ is just the point symmetric to $P_2$ on the other side of $\overline{M_1M_2}$. The figure below shows the two circles.
Is this a new twin prime sieve?
This works as a python program. You've tested it, you've run it, I recognize the math behind it and I know that it gives you the twin primes. I've been working through two definitions in particular -- compound arithmetic progressions and composite topologies -- that are drawn directly from those exact observations. Sieve theory, itself, is quite abstract and the cost of that versatility is a certain measure of ambiguity that (intentionally?) obscures solutions to problems that are notoriously easy to describe. Notice that in your program you define four (4) arithmetic progressions. The difference between an arithmetic progression and a residue class is that an arithmetic progression has an infimum, whereas a residue class does not. The residue class $\{a \pmod{m} \}$ represents the set of integers that when divided by the modulus $m$ produce a remainder (also residue) of $a$. Using the notation $[a]_m \in \mathbb{Z}/m\mathbb{Z}$ to represent the set $\{a \pmod{m} \}$, we can describe an arithmetic progression in the residue class by endowing the set with an infimum, or minimum value, $n$, and that would look like $[a]_m \cap [n,\infty)$ where we form the intersection with the coset of positive integers $\mathbb{Z}^+ + n - 1$, if we take $\mathbb{Z}^+$ to be defined as $[1, \infty) \cap \mathbb{Z}$ as in the ISO standard. We can also say that the sets of residue classes formed by $[\pm k]_{6k \pm 1}$ have residues and moduli that are all arranged as linear forms sharing a common dependent variable $k$, in that they are all of the form $mk + r$. But in carrying out the sieve process to find numbers $\{n-1, n+1\}$ that are both prime, as in the Sieve of Eratosthenes, it makes no sense to exclude certain residue classes beyond a certain function of $k$ as you did in the above program.
In fact, it follows from the Sieve of Eratosthenes that the infimum of each residue class in the exclusion covering (the set of residue classes $[\pm k]_{6k \pm 1}$ for some positive integer $k > 0$) is specifically the sum of the residue and the modulus, $a + m$. I refer to this as the summand infimum and use the notation $[a]^+_m = \{ a \pmod{m} \} \cap [a+m,\infty)$ to indicate the intersection between the residue class and the coset of the positive integers. Indeed, the set of indices $n$ for which $(6n-1, 6n+1)$ is a twin-prime pair is represented by the following expression, where the union runs over the positive integers $k > 0$: $$ \mathbb{Z}^+ \setminus \bigcup\{ [k]^+_{6k+1}, [-k]^+_{6k+1}, [k]^+_{6k-1}, [-k]^+_{6k-1} \} $$ It's easier to call the terms $[\pm k]^+_{6k \pm 1}$ above compound arithmetic progressions. The definition for compound arithmetic progressions that I give in Prime Gaps in Residue Classes is rock solid. The definition I give for composite topology, however, has the caveat that one cannot assume that the translation scalar for the compound arithmetic progressions is always zero, but it works out just fine for 6, which is the principal modulus that I use for the De Polignac sequence in my answer to this post about Large Gaps. Why Compound Arithmetic Progressions and Composite Topologies? Because with few restrictions they both have a provable, but elusive, decomposition property whereby the residue classes contained by each can be decomposed into the intersection of residue classes with prime moduli and congruent residues through the reverse application of the ancient Chinese Remainder Theorem by Sun Tsu.
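The displayed expression is easy to test directly: enumerate the four progressions $[\pm k]^+_{6k\pm1}$ from their summand infima and check that the surviving $n$ are exactly those with $6n-1$ and $6n+1$ both prime (a Python sketch; names are mine):

```python
def is_prime(n):
    # Trial division is plenty for this range
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

N = 300
excluded = set()
k = 1
while 5 * k - 1 <= N:               # 5k - 1 is the smallest of the four infima
    for a, m in ((k, 6*k + 1), (-k, 6*k + 1), (k, 6*k - 1), (-k, 6*k - 1)):
        n = a + m                   # summand infimum of [a]^+_m
        while n <= N:
            excluded.add(n)
            n += m
    k += 1

survivors = [n for n in range(1, N + 1) if n not in excluded]
twins = [n for n in range(1, N + 1)
         if is_prime(6*n - 1) and is_prime(6*n + 1)]
```

For $N=300$ the two lists coincide, beginning $1, 2, 3, 5, 7, \ldots$ (i.e. the pairs $(5,7), (11,13), (17,19), (29,31), (41,43), \ldots$).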
And further, even if I'm not a respected analytical number theorist, we can note that a general estimate for any CAP with a constant principal modulus and relatively prime commutative residues is going to depend on a sum: $$ \sum_{k=2}^{\infty}\frac{\pi_k(n) \sum_{j=1}^{k}(\tau_j(a) + \tau_{k-j}(d)) }{\phi(c)^k}$$ where $k$ is the number of prime factors, $\pi_k(n)$ is the almost-prime counting function, $\phi(c)$ is the result of Euler's totient function for the principal modulus and $\tau_k$ is the $k$-fold divisor function for the multiplicative group $(\mathbb{Z}/c\mathbb{Z})^*$. There is a heuristic that makes this extensible to intersections, and through inclusion-exclusion to unions as well, but we're only looking at CAPs with commutative residues that are relatively prime to the principal modulus. In order to lift the restriction on the commutative residues, we would need to know more about the homomorphic subgroups. The estimate for a composite topology reduces to the estimate for primes in an arithmetic progression, $$ n - \pi(n, q, a)$$ where $q$ is the principal modulus and $a$ is the determinant residue. You might be able to "prove" the admissibility of a k-tuple without using a similar method but I don't think it's likely, and my response to Asymptotic expressions... implies the utility of this method in studying the k-Tuple Conjecture.
On the parity of the coefficients of $(x+y)^n$.
$\binom{n}{k}$ is odd for every $k=0,1,\ldots,n$ if and only if $n=2^m-1$ (i.e. $n$ has only 1's in its base 2 expansion). This is proved using Lucas' theorem.
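With `math.comb` this is easy to check for small $n$ (Python sketch):

```python
from math import comb

def all_odd(n):
    # True when every entry of row n of Pascal's triangle is odd
    return all(comb(n, k) % 2 == 1 for k in range(n + 1))

# The all-odd rows below 64 should be exactly n = 2^m - 1
rows = [n for n in range(64) if all_odd(n)]
```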
Definite integral of an absolute value with "undefined result"
The numerator of $f(x)$ is clearly positive on the interval $[0,2]$, so $f < 0$ on $[0,1)$ and $f > 0$ on $(1,2]$. $f$ is undefined at $x = 1$. The given integral therefore is improper, and converges in the Riemann sense if and only if each of $$\lim_{\epsilon \to 0^+} \int_{x=0}^{1-\epsilon} f(x) \, dx$$ and $$\lim_{\epsilon \to 0^+} \int_{x=1+\epsilon}^2 f(x) \, dx$$ exist and are finite. Since they are not, the integral does not converge.
What are the conditions for zero to belong to a convex hull of a set of points
An algorithm to determine if 0 belongs to the interior of the convex hull defined by $n$ points $v_1,\ldots, v_n$ in $\mathbb{R}^3$. Without loss of generality, assume no two of the vectors are linearly dependent and not all of the points lie on the same plane. For distinct $i<j$, define: $$A_{ij}=\{v_k \cdot (v_i \times v_j): k\neq i,j \}.$$ Claim 1. If for all $i<j$, the set $A_{ij}$ contains both positive and negative numbers, then 0 belongs to the interior of the convex hull of $v_1,\ldots v_n$. The converse also holds. Proof. First suppose 0 belongs to the interior of the convex hull so that $0=\sum_{i=1}^n \lambda_i v_i$ and all $\lambda_i >0$ with $\sum_i \lambda_i=1$. For a fixed $i<j$, take the dot product of the two sides of the equation with $v_i \times v_j$ to get $0=\sum_{k\neq i,j} \lambda_k v_k \cdot (v_i \times v_j)$. Since $\lambda_k>0$ for all $k$ and the numbers $\lambda_k v_k \cdot (v_i \times v_j)$ add up to zero, at least one must be positive and at least one must be negative. Note that if they are all zero, then all of $v_k$s must lie on the plane spanned by $v_i$ and $v_j$ in which case the convex hull has empty interior. Conversely, suppose 0 is not in the convex hull of $v_1,\ldots, v_n$. Let $P$ be a plane passing through the origin so that the entire convex hull lies on one side of the plane. Rotate the plane around any axis until it touches one of the points say $v_1$. All other points still lie on one side of the plane. Now rotate the plane around the line passing through the origin and $v_1$ until it touches another point say $v_2$. We have found a plane through the origin and $v_1$, $v_2$ such that one side of the plane contains no points from $v_1,\ldots, v_n$. Clearly all of the $v_i$s either make an acute or right angle with $v_1\times v_2$ or an obtuse or right angle with $v_1 \times v_2$. Therefore $A_{ij}$ cannot contain both positive and negative numbers. Claim 2. 
If for all $i<j$, the set $A_{ij}$ is not a set of all positive or all negative numbers, then 0 belongs to the closed convex hull of the points. Proof is similar to the proof of Claim 1. If 0 is not in the interior but in the closed convex hull, then clearly it is on the boundary.
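Claim 1 translates directly into code. A sketch (Python, plain lists; the points are assumed to be in "general position" as in the claim, and the two test sets are my own examples):

```python
def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def origin_in_interior(points):
    """Claim 1 as an algorithm: 0 is interior to the hull iff every
    A_ij = { v_k . (v_i x v_j) : k != i, j } contains both signs."""
    n = len(points)
    for i in range(n):
        for j in range(i + 1, n):
            a = [dot(points[k], cross(points[i], points[j]))
                 for k in range(n) if k not in (i, j)]
            if not (min(a) < 0 < max(a)):
                return False
    return True

tetra = [(1, 1, 1), (1, -1, -1), (-1, 1, -1), (-1, -1, 1)]   # centroid is 0
shifted = [(1, 0, 0), (1, 1, 0), (1, 0, 1), (2, 1, 1)]       # hull lies in x >= 1
```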
On Arthur Engel Book, First Chapter : Invariance
First let's visualize the AM-GM inequality with an example. Take points $P=2$ and $Q=8$ on the number line. Their AM is the point $A=5$. Note that the arithmetic mean always lies at the midpoint of segment $PQ$. Their GM is the point $G=4$. The geometric mean also lies between $P$ and $Q$, and precedes $A$ (this is the very statement of the AM-GM inequality) except for the case when $Q$ coincides with $P$, when $A$ and $G$ also coincide with $P$ and $Q$. So $x_{n+1}$ lies exactly midway between $x_n$ and $y_n$ on the number line. $y_{n+1}$, being the GM of $x_{n+1}$ and $y_n$, will always lie ahead of $x_{n+1}$, i.e., $x_{n+1} \lt y_{n+1}$. To prove your inequality, $$ y_{n+1} = \sqrt{x_{n+1}y_n} \lt \frac{x_{n+1}+y_{n}}{2} = \frac{(x_{n}+y_{n})/2+y_{n}}{2}$$ This gives $$y_{n+1}-x_{n+1} \lt \frac{(x_{n}+y_{n})/2+y_{n}}{2} - \frac{x_{n}+y_{n}}{2} = \frac{y_{n}-x_{n}}{4}$$
Show the ideal $(x-2,x+3)\subset\mathbb{Z}[x]$ is prime but not principal
The quotient ring is indeed a field: since $(x+3)-(x-2)=5$, the ideal contains $5$, and modulo the ideal we have $x\equiv 2$, so $\mathbb{Z}[x]/(x-2,x+3)\cong\mathbb{Z}/5\mathbb{Z}=\mathbb{F}_{5}$. On the other hand, if you take the quotient of $\mathbb{Z}[x]$ by a principal ideal you can never obtain a field, so the ideal is not principal.
When is an infinite set larger than another infinite set?
It means that there is an injection from $A$ into $B$, but there is no bijection between $A$ and $B$. Assuming the axiom of choice, this is the same as saying that there is no surjection from $A$ onto $B$ (without the axiom of choice, it is possible that $A$ maps injectively (with one function) and surjectively (with another) onto $B$, but not bijectively (with any function)). As is often the case with infinite sets, it might not be possible to actually prove that one cardinal is strictly larger than another. For example, $2^{\aleph_0}$, the cardinality of $\Bbb R$, is not necessarily strictly larger than $\aleph_1$, the cardinality of the first uncountable ordinal $\omega_1$: it is consistent that the two cardinals are equal, and it is consistent that they are not equal (in which case $\aleph_1$ is strictly smaller).
Finding number of digits of decimal number
Hint: The number of digits of $x$ after the decimal point is the least integer $n\geq 0$ such that $10^n x$ is an integer.
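The hint translates directly into a computation with exact rationals. A small sketch (`decimal_digits` is a name I made up; the loop terminates only for numbers whose decimal expansion terminates, i.e. denominators of the form $2^a 5^b$):

```python
from fractions import Fraction

# Least n >= 0 with 10^n * x an integer, per the hint.
def decimal_digits(x: Fraction) -> int:
    n = 0
    while (x * 10**n).denominator != 1:
        n += 1
    return n

print(decimal_digits(Fraction("3.14")))  # 2
print(decimal_digits(Fraction(1, 8)))    # 3
print(decimal_digits(Fraction(5)))       # 0
```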
How many sequences of n letters chosen from { A,B, ..., Z } are in non-increasing, or non-decreasing order
Let me see if I understand the question. So AAABNN counts (nondecreasing), and NNBAAA counts (nonincreasing), but NNAAAB and BANANA don't count. Have I got that right? Looks like a simple in-and-out ("inclusion-exclusion") problem to me: Answer = #(nonincreasing sequences) + #(nondecreasing sequences) - #(constant sequences), since the constant sequences are the only ones that are both nonincreasing and nondecreasing. The number of constant sequences is exactly $26$ (assuming $n\gt0$). To specify a nondecreasing sequence of length $n$, since the order is determined, all you need to know is how many of each letter, i.e., $26$ nonnegative integers adding up to $n$; that's $\binom{n+25}{25}$. So your final answer is $$2\binom{n+25}{25}-26.$$ P.S. The binomial coefficient $\displaystyle\binom{n+25}{25}$ comes from setting $k=26$ in $\displaystyle\binom{n+k-1}{k-1}$, which is the formula for the number of ordered $k$-tuples $(x_1,x_2,\dots,x_k)$ of nonnegative integers such that $x_1+x_2+\dots+x_k=n$. This is the so-called "stars-and-bars" theorem, which has probably been covered in class and you will need to know it for the exam. You can find this theorem (and a proof) on this Wikipedia page; it's Theorem Two. Watch out, I think they switched the letters around.
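The count $2\binom{n+k-1}{k-1}-k$ is easy to sanity-check by brute force on a small alphabet ($k=26$ in the question; `count_monotone` is a hypothetical helper name):

```python
from itertools import product
from math import comb

# Brute-force count of sequences over a k-letter alphabet that are
# nondecreasing or nonincreasing, compared with 2*C(n+k-1, k-1) - k.
def count_monotone(n, k):
    total = 0
    for seq in product(range(k), repeat=n):
        if seq == tuple(sorted(seq)) or seq == tuple(sorted(seq, reverse=True)):
            total += 1
    return total

for n, k in [(3, 4), (5, 3), (4, 5)]:
    assert count_monotone(n, k) == 2 * comb(n + k - 1, k - 1) - k
print("formula verified")
```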
How to solve this nonstandard system of equations?
Solution.

First way. From the first equation, we have $$\begin{cases} 2x^2\leqslant 1,\\ y^2 \leqslant 1 \end{cases} \Leftrightarrow \begin{cases} -\dfrac{1}{\sqrt{2}} \leqslant x \leqslant \dfrac{1}{\sqrt{2}},\\ - 1 \leqslant y \leqslant 1. \end{cases}$$ Since the second equation contains $\sqrt{x}$, we also need $x \geqslant 0$, so the conditions on $x$ and $y$ are $$\begin{cases} 0 \leqslant x \leqslant \dfrac{1}{\sqrt{2}},\\ - 1 \leqslant y \leqslant 1. \end{cases}$$ From the first equation, $x^2 + y^2 = 1-x^2$, therefore $x^2 + y^2 \leqslant 1$. On the other hand, rearranging the second equation and using $y \leqslant 1$, $$1-x^2 = y \sqrt{1-x^2} -(1-y)\sqrt{x} \leqslant y \sqrt{1-x^2},$$ and since $$ y \sqrt{1-x^2} \leqslant \dfrac{y^2 + 1 - x^2}{2},$$ this implies $$1-x^2 \leqslant \dfrac{y^2 + 1 - x^2}{2} \Leftrightarrow x^2 + y^2 \geqslant 1 .$$ From $x^2 + y^2 \leqslant 1$ and $x^2 + y^2 \geqslant 1$, we have $x^2 + y^2 = 1.$ Solving $$\begin{cases} x^2 + y^2 = 1,\\ 2x^2 + y^2 = 1,\\ 0 \leqslant x \leqslant \dfrac{1}{\sqrt{2}},\\ - 1 \leqslant y \leqslant 1 \end{cases} \Leftrightarrow \begin{cases} x = 0,\\ y = 1.\end{cases}$$

Second way. We have $2x^2 + y^2 = 1$, therefore $y=\sqrt{1 - 2x^2}$ or $y=-\sqrt{1 - 2x^2}.$

First case: $y=\sqrt{1 - 2x^2}$. Substituting into the second equation, we get $$x^2 + \sqrt{1 - 2x^2}\cdot\sqrt{1-x^2}=1+(1-\sqrt{1 - 2x^2})\sqrt{x},$$ equivalent to $$1 - x^2 - \sqrt{1 - 2x^2}\cdot\sqrt{1-x^2} + (1-\sqrt{1 - 2x^2})\sqrt{x}=0$$ or $$\sqrt{1-x^2} (\sqrt{1-x^2} - \sqrt{1 - 2x^2}) + (1-\sqrt{1 - 2x^2})\sqrt{x}=0.$$ Multiplying each difference by its conjugate, this is equivalent to $$\dfrac{\sqrt{1-x^2}\cdot(1-x^2-1+2x^2)}{\sqrt{1-x^2}+\sqrt{1 - 2x^2}}+\dfrac{(1-1+2x^2)\sqrt{x}}{1+\sqrt{1 - 2x^2}} = 0$$ or $$x^2\left (\dfrac{\sqrt{1-x^2}}{\sqrt{1-x^2}+\sqrt{1 - 2x^2}} + \dfrac{2\sqrt{x}}{1+\sqrt{1 - 2x^2}}\right )=0.$$ It is easy to see that $$\dfrac{\sqrt{1-x^2}}{\sqrt{1-x^2}+\sqrt{1 - 2x^2}} + \dfrac{2\sqrt{x}}{1+\sqrt{1 - 2x^2}}> 0,$$ thus $x = 0$, and hence $y = 1$.

Second case: $y=-\sqrt{1 - 2x^2}.$ Here $y \leqslant 0$, so we can check that $$x^2 + y \sqrt{1-x^2} \leqslant \dfrac{1}{2}$$ while $$1+(1-y)\sqrt{x} \geqslant 1.$$ In this case, the given system of equations has no solution. Hence the only solution is $(x,y)=(0,1)$.
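A quick numerical sanity check (not a proof) that $(x,y)=(0,1)$ satisfies both equations, together with a coarse scan of the branch $y=\pm\sqrt{1-2x^2}$ showing no other root (`eq2_residual` is a name I made up):

```python
import math

# Residual of the second equation x^2 + y*sqrt(1-x^2) = 1 + (1-y)*sqrt(x).
def eq2_residual(x, y):
    return x*x + y*math.sqrt(1 - x*x) - 1 - (1 - y)*math.sqrt(x)

# (0, 1) satisfies both equations exactly.
assert abs(2*0**2 + 1**2 - 1) < 1e-12
assert abs(eq2_residual(0.0, 1.0)) < 1e-12

# Scan both branches y = +-sqrt(1 - 2x^2) over 0 <= x < 1/sqrt(2).
best = min(
    (abs(eq2_residual(x, s * math.sqrt(1 - 2*x*x))), x, s)
    for s in (1, -1)
    for x in [i / 1000 * (1 / math.sqrt(2)) for i in range(1000)]
)
print(best)  # smallest residual is at x = 0 on the branch y = +sqrt(1 - 2x^2)
```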
List of Common or Useful Limits of Sequences and Series
This sheet by Dave Renfro that I found online was beyond helpful! http://mathforum.org/kb/servlet/JiveServlet/download/206-1874348-6544585-538002/seq3.pdf
Problem from complex analysis regarding series representation
Hint. Your answer for the Maclaurin series can be expanded: $$\eqalign{-(z+1)\sum_{n=0}^\infty z^n &=-(z+1)(1+z+z^2+\cdots)\cr &=-z-1-z^2-z-z^3-z^2-\cdots\ .\cr}$$ As you can see, there is often more than one $z$ term with the same exponent. If you collect terms you will get the simplified series.
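Collecting the terms can be sketched concretely by convolving truncated coefficient lists; the pattern that emerges, $-1, -2, -2, -2, \dots$, is the simplified series:

```python
# Collecting terms of -(z+1) * sum_{n>=0} z^n by convolving coefficient lists
# (a truncated check of the expansion shown above).
N = 8
a = [-1, -1]            # coefficients of -(z + 1)
b = [1] * (N + len(a))  # coefficients of 1 + z + z^2 + ...
c = [0] * N
for i, ai in enumerate(a):
    for j, bj in enumerate(b):
        if i + j < N:
            c[i + j] += ai * bj
print(c)  # [-1, -2, -2, -2, -2, -2, -2, -2]
```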
Cauchy Principal Value, some sufficient condition.
This limit won't exist in general; a counterexample is $f(x) = \frac{1}{1+|\ln x|}$ for $x > 0$ and $f(x)=0$ for $x\le0$, in which case the limit is $\infty$.
Change of variable for products
Continue: that's $$15\prod_k\frac{k+1}{k+3}=15\cdot\frac{n!}{(n+2)!/2}=\frac{30}{(n+1)(n+2)}.$$
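The telescoping identity can be checked with exact rationals. Here I assume the product runs over $k=0,\dots,n-1$, which is the range consistent with the factorial form $n!/((n+2)!/2)$ shown above:

```python
from fractions import Fraction

# Checking 15 * prod_{k=0}^{n-1} (k+1)/(k+3) == 30 / ((n+1)(n+2)),
# assuming the product range k = 0, ..., n-1 (inferred from the factorials).
for n in range(1, 10):
    prod = Fraction(15)
    for k in range(n):
        prod *= Fraction(k + 1, k + 3)
    assert prod == Fraction(30, (n + 1) * (n + 2))
print("identity verified")
```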
$\lim_{n \to \infty }\int_{0}^{n}\frac{n \cdot e^{\frac{x}{n}}}{x^4+n^2}dx=$?
This is my try. Substituting $x = ny$ turns the integral into $\int_0^1 \frac{e^y}{n^2y^4+1}\,dy$, so let $f_n (y) = \frac{e^y}{n^2y^4+1} \mathbb I_{[0,1]}$, where $\mathbb I$ is the indicator function. We have $f_n(y) \to 0$ as $n \to \infty$ for every $y \in (0,1]$, i.e. almost everywhere. For the integrability and domination conditions: for every $n \in \mathbb N$, $|f_n(y)| \leq \frac{e^y}{y^4+1} \mathbb I_{[0,1]} \leq e^y \mathbb I_{[0,1]}$. The dominating function is integrable on $\mathbb R$, hence each $f_n$ is integrable on $\mathbb R$ as well. Now all the hypotheses of the dominated convergence theorem are satisfied, so $$\lim\limits_{n \to \infty} \displaystyle \int_{\mathbb R} f_n (y)\,dy = \displaystyle \int_{\mathbb R} \lim\limits_{n\to\infty} f_n(y)\,dy = 0.$$
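A numerical illustration of the convergence, evaluating the substituted integral $\int_0^1 e^y/(n^2y^4+1)\,dy$ with composite Simpson's rule (a sketch; the decay is roughly like $1/\sqrt{n}$):

```python
import math

# Composite Simpson's rule for the substituted integral on [0, 1].
def integral(n, m=20000):
    h = 1.0 / m
    total = 0.0
    for i in range(m + 1):
        y = i * h
        w = 1 if i in (0, m) else (4 if i % 2 else 2)
        total += w * math.exp(y) / (n * n * y**4 + 1)
    return total * h / 3

vals = [integral(n) for n in (1, 10, 1000)]
print(vals)  # strictly decreasing toward 0
assert vals[0] > vals[1] > vals[2] > 0
assert vals[2] < 0.1
```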
Which of these sentences are propositions? What are the truth values of those that are propositions?
The propositions are a, b, c, d, i, and j. Their truth values are T, F, T, F, F, F, respectively.
Find all $a$ such that $\lim_{x\to\infty}\left( \frac{x+a}{x-a} \right)^x = e$.
Note that $$\frac{x+a}{x-a} = \frac{1+\frac{a}x}{1-\frac{a}x}.$$ So if you can show (or simply recognize from a standard theorem) that $$\lim_{x \to \infty}\left(1+\frac{a}x\right)^x = e^a,\qquad\lim_{x \to \infty}\left(1-\frac{a}x\right)^x = e^{-a},$$ then the limit equals $e^a/e^{-a}=e^{2a}$, so $2a = 1$ and $a = 1/2$.
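A quick numerical check of the conclusion: $a=1/2$ gives the limit $e$, while e.g. $a=1$ gives $e^2$ instead.

```python
import math

# Evaluate ((x + a)/(x - a))^x for large x.
def h(a, x):
    return ((x + a) / (x - a)) ** x

assert abs(h(0.5, 1e6) - math.e) < 1e-6       # a = 1/2 -> e
assert abs(h(1.0, 1e6) - math.e**2) < 1e-5    # a = 1   -> e^2, not e
print(h(0.5, 1e6))
```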
A series with the non square free numbers $q_n:\:$ $\sum_{k=1}^\infty\frac{(-1)^{1+\Omega(q_k)}}{q_k}=0$
This must be due to the fact that the reciprocals of all squarefree numbers, with appropriate signs, appear in the product $\prod\left(1\pm{1\over p}\right)$, and the reciprocals of all natural numbers appear in another product, $\prod{1\over1\pm{1/p}}$, and both products are easy to evaluate. Indeed, $$\sum_{n=1}^\infty{(-1)^{\Omega(n)}\over n}=\prod_{\text{prime}\;p}\left(1-{1\over p}+{1\over p^2}-{1\over p^3}+\dots\right)={1\over\prod\left(1+{1\over p}\right)}=0.$$ On the other hand, look at the similar sum over squarefree numbers: $$\sum_{\text{squarefree}\;s}{(-1)^{\Omega(s)}\over s}=\prod_{\text{prime}\;p}\left(1-{1\over p}\right)=0.$$ Your sum over the non-squarefree numbers is just the difference of the two.
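Partial products over the primes up to $10^5$ illustrate (though of course do not prove) that both Euler products drift to $0$:

```python
# Partial Euler products over primes up to N; both slowly tend to 0.
def primes_up_to(n):
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, int(n**0.5) + 1):
        if sieve[p]:
            sieve[p*p::p] = bytearray(len(sieve[p*p::p]))
    return [p for p in range(2, n + 1) if sieve[p]]

prod_minus = 1.0  # partial product of (1 - 1/p)
prod_plus = 1.0   # partial product of 1 / (1 + 1/p)
for p in primes_up_to(100000):
    prod_minus *= 1 - 1/p
    prod_plus *= 1 / (1 + 1/p)
print(prod_minus, prod_plus)  # both already small, decreasing like 1/log N
```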
Probability of drawing three different colours.
By your procedure any combination of 10 picks is just as likely as any other. How many ways are there of making the 10 picks? Obviously $10^{10}$. How many different sets of 3 colors can be chosen from the $10$? That is $10 \choose 3$ (note that to have the 1st, 4th, and 9th balls of different colors requires the 3 colors to be distinct). How many ways are there of making the 1st, 4th, and 9th picks to be of different colors? $3! = 6$. How many ways of making the remaining 7 picks from those same 3 colors? $3^7$ So there are ${10 \choose 3}(3!)(3^7)$ possible picks matching your conditions, which gives a probability of $$\frac{{10 \choose 3}(3!)(3^7)}{10^{10}}= \frac{10!\times3^7}{7!\times10^{10}} = \frac {3^9}{2^6 \times 5^9} \approx 0.000157464$$
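The arithmetic above can be double-checked with exact rationals:

```python
from fractions import Fraction
from math import comb, factorial

# C(10,3) * 3! * 3^7 / 10^10, reduced exactly.
p = Fraction(comb(10, 3) * factorial(3) * 3**7, 10**10)
assert p == Fraction(factorial(10) * 3**7, factorial(7) * 10**10)
assert p == Fraction(3**9, 2**6 * 5**9)
print(float(p))  # 0.000157464
```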
Describing $\Bbb{Z}$ in set notation
No, this suggestion is not good. $\Bbb Z$ is a superset of $\Bbb N$. This means that it has more elements. Whenever you define a set as $\{x\in\Bbb N\mid \ldots\}$ you effectively require the defined set to be a subset of $\Bbb N$. Moreover, if $x\leq 0$ and $x\geq 0$, then it is necessarily the case that $x=0$. Therefore you have defined $\{0\}$. HINT: Recall that $x\in\Bbb Z$ if and only if $|x|\in\Bbb N$.
What would a Tutte Polynomial =0 represent?
There are no graphs whose Tutte polynomial is identically $0$. One thing that would go wrong if there were such a graph: the chromatic polynomial can be recovered from the Tutte polynomial, so if the Tutte polynomial were $0$, then the graph would not be $k$-colourable for any $k \geq 0$. But this is impossible, since e.g. we can colour each vertex a different colour.
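The colouring argument is easy to illustrate by brute force. Here is a small sketch (the triangle is just an example graph I chose; the point is that $k=|V|$ colours always work, so the chromatic polynomial, and hence the Tutte polynomial, is not identically zero):

```python
from itertools import product

# Count proper colourings of a graph by brute force.
edges = [(0, 1), (1, 2), (0, 2)]  # a triangle

def proper_colourings(n_vertices, edges, k):
    return sum(
        all(c[u] != c[v] for u, v in edges)
        for c in product(range(k), repeat=n_vertices)
    )

assert proper_colourings(3, edges, 3) == 3 * 2 * 1  # k(k-1)(k-2) at k = 3
assert proper_colourings(3, edges, 4) == 4 * 3 * 2
print(proper_colourings(3, edges, 3))  # 6
```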
Are the letters $O$ and $\infty $ homeomorphic?
You have correctly observed that your feeling is wrong: since removing a point from a figure-eight may result in a disconnected space, but removing a point from a circle never will, the two spaces are not homeomorphic. But this isn't quite the end of the story. Why did you feel that they should be homeomorphic? And what do the facts leading to that feeling actually imply? I can think of two ways one might be led to believe that they are "essentially the same shape." 1. Artistic license. The process of drawing a figure-eight seems the same as the process of drawing a circle (you just "slightly change" the details of how the pencil moves around). What's going on here is that this process of drawing is really a continuous function (your pencil doesn't "jump") from TIME - which we can think of as $[0,1]$, in the sense that there's a start time $0$, a stop time $1$, and time is "line-like" - to the plane $\mathbb{R}^2$, and the resulting curve is the image of this function. That is: both the circle and the figure-eight are the continuous image of the same space. Moreover, the continuous functions "drawing" each shape - taking the codomains as the curves being drawn, not $\mathbb{R}^2$ - are "almost bijective": each is trivially surjective, and going from $[0,1]$ to the circle we only have one non-injectivity (we wind up where we start, so $0$ and $1$ get "glued together"), while going from $[0,1]$ to the figure-eight we only have two non-injectivities (we wind up where we start, and along the way we cross our path). But this kind of "almost homeomorphism" is quite deceptive: e.g. there is no almost-injective continuous surjection from the figure-eight to the circle, or from the circle to the line (but there is one from the circle to the figure-eight)! Incidentally, drawing pictures in spaces is really important, but that's a more advanced topic. 
2. Accidentally using scissors. We often think of topology as "ignoring small distortions," and so one might feel that we should be able to "unpinch" the x-point in the figure-eight to get a circle(-ish shape). However, this is not in fact a "topologically innocent" transformation - there is a sense in which it is "small," but it does change the homeomorphism type of the space. None of the ideas above are silly - "small deviations" from nice behavior are often interesting, and later on you may see e.g. the notion of an immersion - but they do not correspond to genuine homeomorphicitude.
Integral of directional derivative.
By the chain rule, the derivative of $t \mapsto f(\gamma(t))$ is $$f^\prime(\gamma(t))\,\gamma^\prime(t)=\langle \nabla f (\gamma(t)), \gamma^{\prime}(t)\rangle,$$ which leads to the required result by applying the Fundamental Theorem of Calculus.
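A numerical sanity check of the resulting identity $\int_0^1 \langle \nabla f(\gamma(t)), \gamma'(t)\rangle\,dt = f(\gamma(1)) - f(\gamma(0))$, with an example $f$ and $\gamma$ chosen purely for illustration:

```python
import math

# Example (my choice, not from the question): f(x, y) = x^2 + y,
# gamma(t) = (cos t, t), so grad f = (2x, 1) and gamma'(t) = (-sin t, 1).
def integrand(t):
    return 2 * math.cos(t) * (-math.sin(t)) + 1

def f(x, y):
    return x * x + y

# Composite Simpson's rule for the line integral over [0, 1].
m = 10000
h = 1.0 / m
simpson = sum(
    (1 if i in (0, m) else 4 if i % 2 else 2) * integrand(i * h)
    for i in range(m + 1)
) * h / 3

exact = f(math.cos(1), 1) - f(math.cos(0), 0)  # endpoint difference
assert abs(simpson - exact) < 1e-6
print(simpson, exact)
```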
Missing term in series expansion
Using Mellin transforms we can find the asymptotics in an intuitive and straightforward manner. We have that $$ \mathfrak{M}(K_2(x)/x^2; s) = 2^{s-4} \Gamma(s/2) \Gamma(s/2-2),$$ so that viewing $f(x)$ as a harmonic sum, we get $$ \mathfrak{M}(f(x); s) = f^*(s) = 2^{s-4} \Gamma(s/2) \Gamma(s/2-2) \zeta(s).$$ Now invert to get the expansion of $f(x).$ I will give a table of the contributions from the poles down to the pole at $s=-6$; here $\zeta \left( 1,-n \right)$ denotes the derivative $\zeta'(-n)$. $$ \begin{array}{lcl} \operatorname{Res}(f^*(s) \, x^{-s}; s=4) & = & 1/45\,{\frac {{\pi }^{4}}{{x}^{4}}} \\ \operatorname{Res}(f^*(s) \, x^{-s}; s=2) & = & -1/12\,{\frac {{\pi }^{2}}{{x}^{2}}} \\ \operatorname{Res}(f^*(s) \, x^{-s}; s=1) & = & 1/6\,{\frac {\pi }{x}} \\ \operatorname{Res}(f^*(s) \, x^{-s}; s=0) & = & -1/16\,\ln \left( 4\,\pi \right) +1/16\,\gamma+1/16\, \ln \left( x \right) -{\frac {3}{64}} \\ \operatorname{Res}(f^*(s) \, x^{-s}; s=-2) & = & {\frac {1}{96}}\,\zeta \left( 1,-2 \right) {x}^{2} \\ \operatorname{Res}(f^*(s) \, x^{-s}; s=-4) & = & {\frac {1}{3072}}\,\zeta \left( 1,-4 \right) {x}^{4} \\ \operatorname{Res}(f^*(s) \, x^{-s}; s=-6) & = & {\frac {1}{184320}}\,\zeta \left( 1,-6 \right) {x}^{6}. \end{array}$$ As to how the Mellin transform of $K_2(x)$ is calculated, I can offer some ideas. 
Start with the known integral representation $$K_\alpha(x) = \int_0^\infty e^{-x \cosh t} \cosh (\alpha t) \; dt,$$ so that $$\mathfrak{M}(K_2(x); s) = \int_0^\infty \int_0^\infty e^{-x \cosh t} \cosh (2t) \; dt \; x^{s-1} \; dx.$$ This becomes $$ \int_0^\infty \cosh (2t) \int_0^\infty e^{-x \cosh t} x^{s-1} \; dx \; dt = \Gamma(s) \int_0^\infty \frac{\cosh (2t)}{(\cosh t)^s} dt.$$ Now using $$\cosh(2t) = 2\cosh(t)^2 - 1,$$ we obtain $$ \Gamma(s) \left(2 \int_0^\infty \frac{1}{(\cosh t)^{s-2}} dt - \int_0^\infty \frac{1}{(\cosh t)^s} dt \right).$$ Furthermore, $$ \int_0^\infty \frac{1}{(\cosh t)^s} dt = 2^s \int_0^\infty \frac{1}{(e^t + e^{-t})^s} dt = 2^s \int_0^\infty \frac{1}{e^{ts}} \frac{1}{(1 + e^{-2t})^s} dt$$ which is $$ 2^s \int_0^\infty \frac{1}{e^{ts}} \sum_{q\ge 0} (-1)^q \binom{q+s-1}{q} e^{-2qt} dt = 2^s \int_0^\infty \sum_{q\ge 0} (-1)^q \binom{q+s-1}{q} e^{-(2q+s)t} dt $$ or $$ 2^s \sum_{q\ge 0} (-1)^q \binom{q+s-1}{q} \int_0^\infty e^{-(2q+s)t} dt = 2^s \sum_{q\ge 0} (-1)^q \binom{q+s-1}{q} \frac{1}{2q+s}.$$ Now this last term is $$ 2^s \frac{1}{s} \; _2F_1(s/2, s; 1+s/2; -1). $$ But we have $$ _2F_1(a, 2a; a+1; -1) = \frac{1}{2a} \frac{\Gamma(a+1)^2}{\Gamma(2a)}$$ as can be seen e.g. here. Hence $$\int_0^\infty \frac{1}{(\cosh t)^s} dt = 2^s \frac{1}{s^2} \frac{\Gamma(s/2+1)^2}{\Gamma(s)}.$$ Concluding, we have shown that $$\mathfrak{M}(K_2(x);s) = 2 \Gamma(s) 2^{s-2} \frac{1}{(s-2)^2} \frac{\Gamma(s/2)^2}{\Gamma(s-2)} - \Gamma(s) 2^s \frac{1}{s^2} \frac{\Gamma(s/2+1)^2}{\Gamma(s)} \\ = 2^{s-1} \frac{s-1}{s-2}\Gamma(s/2)^2 - 2^s \frac{1}{s^2} \Gamma(s/2+1)^2 = 2^{s-1} \frac{s-1}{s-2}\Gamma(s/2)^2 - 2^{s-2} \Gamma(s/2)^2 = 2^{s-2} \Gamma(s/2)^2 \left( 2 \frac{s-1}{s-2} - 1\right) = 2^{s-2} \Gamma(s/2)^2 \frac{s}{s-2} = 2^{s-2} \Gamma(s/2)^2 \frac{s/2}{s/2-1} = 2^{s-2} \Gamma(s/2+1) \Gamma(s/2-1).$$
Given a regular language $L$ and Turing machine $T$, is it decidable that $\mathcal{L}_{acc}(T) \subseteq L$?
It seems irrelevant that $L$ is regular; any $L$ other than the set of all strings will do what you want. Thanks to Rice's theorem, it's enough to construct (1) a Turing machine that accepts no strings at all (so $\mathcal L_{acc}=\varnothing\subseteq L$) and (2) a Turing machine that accepts exactly one string $s$, chosen from the complement of $L$ (so $\mathcal L_{acc}=\{s\}\not\subseteq L$). Both constructions are easy.
Is it necessary that $a\ge b$ to $h(n)=\sum\limits_{d^a|n}f(\frac n{d^a})g(\frac n{d^b})$ be multiplicative
This ensures that $d^b$ divides $n$ in the definition of $h$ (since $a\ge b$, $d^a\mid n$ implies $d^b\mid n$), so that $g$ is evaluated only where it is defined, i.e. at positive integers.