Find derivative of gas pressure formula
Maybe expressing it like this will help: $$P=nRT(V-nb)^{-1}-an^2V^{-2}$$ Then you can apply the power rule and chain rule, holding everything on the RHS constant except $V$: $$\frac{\partial P}{\partial V} = -nRT(V-nb)^{-2}\left(\frac{\partial}{\partial V}(V-nb)\right)+2an^2V^{-3}\left(\frac{\partial V}{\partial V}\right),$$ where the derivatives in parentheses evaluate to $1$. So: $$\frac{\partial P}{\partial V} = -\frac{nRT}{(V-nb)^2}+\frac{2an^2}{V^3}$$
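If you want to double-check the algebra, here is a quick sympy sketch (the symbol names are mine):

```python
# Verify the partial derivative of the gas pressure formula above with sympy.
import sympy as sp

V, n, R, T, a, b = sp.symbols('V n R T a b', positive=True)
P = n*R*T/(V - n*b) - a*n**2/V**2
dPdV = sp.diff(P, V)
expected = -n*R*T/(V - n*b)**2 + 2*a*n**2/V**3
print(sp.simplify(dPdV - expected))  # prints 0
```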
How to find the equivalence classes of a formal language complement?
It is easy to show that the Nerode equivalence classes are the same for a language $L$ and its complement $L'$. $$ xR_Ly \iff \left(\forall z\in \Sigma^*:xz\in L \iff yz \in L\right) \\ xR_{L'}y \iff \left(\forall z\in \Sigma^*:xz\in L' \iff yz \in L'\right) $$ Since the strings in $L'$ are exactly the strings not in $L$, then $$ xz\in L \iff xz\not\in L' \\ \left(xz\in L \iff yz \in L\right) \iff \left(xz\not\in L' \iff yz \not\in L'\right) \iff \left(xz\in L' \iff yz \in L'\right) $$ Therefore, $$ \left(\forall z\in \Sigma^*:xz\in L \iff yz \in L\right) \iff \left(\forall z\in \Sigma^*:xz\in L' \iff yz \in L'\right) \\ xR_Ly \iff xR_{L'}y $$ You seem to have forgotten the equivalence class that contains all strings that are not accepted. However, I would strongly caution you not to try to find the equivalence classes the way that you have. One reason is that there may be multiple equivalence classes of strings that are not accepted (for example, this question). These will not always be obvious and you may not be able to guess them from the definition. Another reason is that the accepting equivalence classes that you read off may not be separate equivalence classes. For instance, what if we changed your language just a bit: the language generated by $\{\epsilon,a,b,bba^*,aba^*\}$. You might similarly claim that the accepting equivalence classes are $$ S_1=\{\epsilon\} \\ S_2=\{a\} \\ S_3=\{b\} \\ S_4=\{bba^*\} \\ S_5=\{aba^*\} \\ $$ However, this is not true. The strings $a$ and $b$ must be in the same equivalence class, since they cannot be separated by any $z\in\Sigma^*$. For these reasons, I would highly recommend that you find the minimal DFA for a language to find the equivalence classes. Your way happened to work (although you missed the last equivalence class), but for many other languages it will give you a very wrong answer.
$|x|=7$,$|y|=3$ then $|\langle x,y \rangle |=21$
Write $yx=x^2y$, so that the group $H=\langle x,y\rangle$ can be written as $$ H=\{x^iy^j\mid 0\le i\le 6,\ 0\le j\le 2\}, $$ which gives $|H|\le 21$. On the other hand, $|x|=7$ and $|y|=3$ both divide $|H|$, so $21\mid |H|$. Hence $|H|=21$.
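To see this group concretely, here is a small sympy sketch (my own realization: $x$ acts on $\mathbb Z/7$ as $i\mapsto i+1$ and $y$ as $i\mapsto 2i$, which one checks satisfies the relation above):

```python
# Realize x, y as permutations of {0,...,6} and check |<x,y>| = 21.
from sympy.combinatorics import Permutation, PermutationGroup

x = Permutation([(i + 1) % 7 for i in range(7)])  # i -> i+1 mod 7, order 7
y = Permutation([(2 * i) % 7 for i in range(7)])  # i -> 2i  mod 7, order 3
print(x.order(), y.order())                       # 7 3
print(PermutationGroup(x, y).order())             # 21
```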
Algebraic structure of "maze navigation" operations
This is most likely homotopy theory. You have (continuous) paths on a two-cell complex with non-trivial topology (basically your maze is the two-complex with quad cells; the walls and the solid vertices are edges and vertices removed from the complex, so its topology is non-trivial) and when you go from one square (say C5) to another (say A3) along two different paths (say $C5,B5,B4,B3,A3$ and say $C5,C6,C5,B5,B4,B3,B2,B1,B2,B3,A3$) that are not separated by any obstructions, then you can "homotope" one path into the other. Going around in loops then gives rise to the fundamental group of the cell complex, which in this case is a free group because the complex deformation retracts onto a one-complex (a graph). I suggest you look into cell complexes, fundamental groups and homotopy theory. Just Wiki it at first to get an idea. Edit. In this line of thought, check out Cayley graphs. Just Wiki it and you will see the graph (one dimensional cell complex) related to a finitely generated group (more like finitely generated, finitely presented group). You assign an element of the group to each vertex of the graph and you connect two vertices by an edge if you can get from one element to another by applying a generator of the group. If there are (identity) relations, like the product of some elements being identity, then it corresponds to a closed cycle in the Cayley graph. You can even attach faces (two dimensional cells, i.e. topological discs) to each relation, obtaining a two dimensional complex. If your group is a free group (only generators, no relations) you end up with a graph that is a regular tree. A group given by generators and relations is universally covered by a free group. In general, when you have a space with complicated topology, sometimes moving around that space can be done through unfolding the space into the so called universal covering space (Wiki that too after the Cayley graphs) which has trivial, elementary topology, but you keep the information of the underlying folded complicated space in a group of deck symmetries acting on the universal covering space. I have used these techniques on one or two occasions in the development of algorithms.
How to evaluate this series using fourier series?
We begin with the expression $$S=\sum_{n=1}^\infty \frac{1}{n}\int_{2\pi n}^\infty \frac{\sin (x)}{x}\,dx \tag 1$$ Enforcing the substitution $x \to 2\pi n x$ in $(1)$ yields $$\begin{align} \sum_{n=1}^\infty \frac{1}{n}\int_{2\pi n}^\infty \frac{\sin (x)}{x}\,dx&=\sum_{n=1}^\infty \frac{1}{n} \int_{1}^\infty \frac{\sin (2\pi nx)}{x}\,dx\\\\ &=\int_{1}^\infty \frac 1x\sum_{n=1}^\infty \frac{\sin (2\pi nx)}{n}\,dx \tag 2\\\\ &=\sum_{k=1}^\infty \int_{k}^{k+1}\frac1x \left(\frac{\pi}{2}(1-2(x-k))\right)\,dx \tag 3\\\\ &=\frac{\pi}{2} \sum_{k=1}^\infty \left((2k+1)\log\left(\frac{k+1}{k}\right)-2\right) \tag 4\\\\ \end{align}$$ where we recognized the Fourier sine series for $\frac{\pi}{2}(1-2x)$ for $x\in(0,1)$ to go from $(2)$ to $(3)$. To evaluate the sum in $(4)$, we write the partial sum $S_K$ $$\begin{align} S_K&=\sum_{k=1}^K \left((2k+1)\log\left(\frac{k+1}{k}\right)-2\right)\\\\ &=-2K+\sum_{k=1}^K \left((2k+1)\log(k+1)-(2k-1)\log(k)\right)-2\sum_{k=1}^K\log(k) \\\\ &=-2K+(2K+1)\log(K+1)-2\log(K!) \\\\ &=-2K+(2K+1)\log(K)+(2K+1)\log\left(1+\frac1K\right)-2\log\left(K!\right) \tag 5\\\\ &=-2K+(2K+1)\log(K)+2-2\log\left(\sqrt{2\pi K}\left(\frac{K}{e}\right)^K\right)+O\left(\frac1K\right) \tag 6\\\\ &=2-\log(2\pi)+O\left(\frac1K\right) \end{align}$$ In going from $(5)$ to $(6)$, we used the asymptotic expansion for the logarithm function, $\log(1+x)=x+O(x^2)$ and Stirling's Formula $$K!=\left(\sqrt{2\pi K}\left(\frac{K}{e}\right)^K\right)\left(1+O\left(\frac1K\right)\right)$$ Finally, putting it all together, we find that $$\begin{align} S&=\sum_{n=1}^\infty \frac{1}{n}\int_{2\pi n}^\infty \frac{\sin (x)}{x}\,dx\\\\ &=\frac{\pi}{2}\lim_{K\to \infty}S_K\\\\ &=\pi-\frac{\pi}{2}\log(2\pi) \end{align}$$ and therefore $$\bbox[5px,border:2px solid #C0A000]{\sum_{n=1}^\infty \frac{1}{n}\int_{2\pi n}^\infty \frac{\sin (x)}{x}\,dx=\pi-\frac{\pi}{2}\log(2\pi)}$$ as was to be shown!
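As a numerical sanity check (a sketch using scipy, and the identity $\int_{2\pi n}^\infty \frac{\sin x}{x}\,dx=\frac{\pi}{2}-\operatorname{Si}(2\pi n)$):

```python
# Compare a large partial sum of S against the closed form π - (π/2)log(2π).
import numpy as np
from scipy.special import sici  # sici(x) returns (Si(x), Ci(x))

n = np.arange(1, 200001)
S = np.sum((np.pi/2 - sici(2*np.pi*n)[0]) / n)
print(S, np.pi - np.pi/2*np.log(2*np.pi))  # both ≈ 0.2548
```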
Finite intersection property for a subset
Yes. Suppose, towards a contradiction, that $\bigcap\limits_{F\in\mathcal{F}}\bar{F}^w =\emptyset$. For each $F\in\mathcal F$ let $U = (\bar{F}^w)^c$ be the complement of $\bar{F}^w$ in $B(\theta,1)$ and $\mathcal{U} = \{U=(\bar{F}^w)^c:F\in\mathcal{F}\}$; then $\bigcup\limits_{U\in\mathcal{U}}U$ is a weak open cover for $B(\theta,1)$, which is compact, and thus there exists a finite weak open subcover $\{U_1,\ldots,U_n\}$. But then $\bigcap\limits_{i=1}^n \bar{F_i}^w=\emptyset$, which contradicts that $\mathcal{F}$ has the FIP.
Summation of $c^i, i>1$.
It is easier to proceed as follows: $$\begin{align}\sum_{i=7}^{N}4^i&=4^7+4^8+\dots+4^N\\&=4^7\left(1+4+4^2+\dots+4^{N-7}\right)\\&=4^7\cdot\frac{1-4^{N-7+1}}{1-4}\\&=4^7\cdot\frac{4^{N-6}-1}{3}\\&=\frac{4^{N-6+7}-4^7}{3}\\&=\frac{4^{N+1}-4^7}{3}\end{align}$$
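A one-line check of the closed form:

```python
# The closed form matches the direct sum for several values of N.
for N in range(7, 12):
    assert sum(4**i for i in range(7, N + 1)) == (4**(N + 1) - 4**7) // 3
```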
Solution to $\frac{d^n y}{dx^n} = y$ for integer $n$
Recall the definition of $e^z$ where $z \in \Bbb{C}$. When $x, y \in \Bbb{R}$, we have $$e^{x + iy} = e^x \operatorname{cis}(y) = e^x(\cos(y) + i \sin(y)).$$ The $n$th roots of unity can therefore be expressed by $$\lambda_k = e^{2\pi i k / n} = \cos\left(\frac{2\pi k}{n}\right) + i \sin\left(\frac{2\pi k}{n}\right),$$ and thus your fundamental solutions are of the form \begin{align*} f_k(x) &= \exp(\lambda_k x) \\ &= \exp\left(\cos\left(\frac{2\pi k}{n}\right)x + i \sin\left(\frac{2\pi k}{n}\right)x\right) \\ &= \exp\left(\cos\left(\frac{2\pi k}{n}\right)x\right) \cdot\left(\cos\left(\sin\left(\frac{2 \pi k}{n}\right) x\right) + i \sin\left(\sin\left(\frac{2 \pi k}{n}\right)x\right)\right). \end{align*} All your solutions to this ODE are linear combinations of the above functions. Note that these are complex-valued functions, even with real inputs, but every real-valued function will have to be a complex linear combination of these functions. In particular, using the evenness of $\cos$ and the oddness of $\sin$, it's not hard to verify that $$\frac{f_k(x) + f_{-k}(x)}{2} = \exp\left(\cos\left(\frac{2\pi k}{n}\right)x\right) \cdot \cos\left(\sin\left(\frac{2 \pi k}{n}\right) x\right),$$ and that $$\frac{f_k(x) - f_{-k}(x)}{2i} = \exp\left(\cos\left(\frac{2\pi k}{n}\right)x\right) \cdot \sin\left(\sin\left(\frac{2 \pi k}{n}\right) x\right).$$ Provided $\sin(2 \pi k / n) \neq 0$ (i.e. the root of unity is not real), we can replace $f_k$ and $f_{-k}$ by the above functions, and get another set of fundamental solutions to the ODE. These are the fundamental solutions that you've run across.
How the $l_1$, $l_2$ and $l_\infty$ norms are they linked together?
So, is that true to say that the vector of $\mathcal{D}$ with maximum $l_2$ norm is such that it has all components, but a single one equal to $C$, equal to $0$ ? Yes. If I take two vectors $x^{(0)}$ and $x^{(1)}$ belonging to $\mathcal{D}$ such that $||x^{(0)}||_\infty \leq ||x^{(1)}||_\infty$, does it imply $||x^{(0)}||_2 \leq ||x^{(1)}||_2$? The answer to this is yes, and the following two facts will suffice. Lemma 1: Let $x = (x_1,\dots,x_n) \in \mathcal D$ with $x_1 \geq x_2 \geq \cdots \geq x_n$. For notational convenience define $x_{n+1} = 0$. Fix an index $1 < j \leq n$. For any $t \in [0,\max\{x_j-x_{j+1},x_{j-1}-x_j\}]$, define $y = x + t(e_{j-1} - e_j)$ where $e_i$ denotes the $i$th standard basis vector. Then $y$ is an element of $\mathcal D$ with entries in descending order, and $$ \|x\|_2 \leq \|y\|_2. $$ Proof: Note that $$ \|y\|_2^2 - \|x\|_2^2 = \\ (x_{j-1} + t)^2 + (x_j - t)^2 - x_{j-1}^2 - x_j^2 =\\ 2x_{j-1}t + t^2 - 2x_j t + t^2 =\\ 2(x_{j-1} - x_j + t)t \geq 0. $$ Lemma 2: Suppose that $x,y \in \mathcal D$ are such that $x_1 \geq \cdots \geq x_n$ and $y_1\geq \cdots \geq y_n$ with $y_1 \geq x_1$. Then there exists a sequence of vectors $y_0,y_1,\dots,y_k \in \mathcal D$ with $y_0 = x, y_k = y$, and each $y_j$ can be attained from $y_{j-1}$ as in Lemma 1. This is a tricky proof to write out, but the intuition is clear if you try it out for a low-dimensional example. Induction on the number of entries in $x$ and $y$ that fail to agree (with a base case of two such entries) will make the proof go nicely. Now, the desired result: Proposition: If $x,y \in \mathcal D$ are such that $\|x\|_\infty \leq \|y\|_\infty$, then $\|x\|_2 \leq \|y\|_2$. Proof: Let $\tilde x$ be $x$ rearranged so that the entries of $x$ are in descending order, and let $\tilde y$ be similarly defined. There exists a sequence of vectors $y_0,\dots,y_k$ with $y_0 = \tilde x$ and $y_k = \tilde y$ satisfying the construction in Lemma 2. By Lemma 1, it follows that $$ \|x\|_2 = \|\tilde x\|_2 = \|y_0\|_2 \leq \|y_1\|_2 \leq \cdots \leq \|y_k\|_2 = \|\tilde y\|_2 = \|y\|_2 $$ as desired.
Determine if $u(t) \in \mathcal{L}^1(\lambda)$
You don't need DCT for this. Use the fact that $|\frac {\sin t} {1+t^{2}}| \leq \frac 1 {t^{2}}I_{|t|>1}+I_{|t|\leq 1}$.
Finding a basis for $V, W, V+W$ and $V \cap W$
Your reasoning for calculating the bases for $V$ and $W$ is correct. For $V+W$, as qaphla pointed out in the comment, the two $x$'s need not be the same. You can calculate a basis for $V+W$ in the following manner. Let $\mathcal{B}_1=\{(1,0,0,0),(0,1,0,-1),(0,0,1,-1)\}$ and $\mathcal{B}_2=\{(1,-1,0,0),(0,0,2,1)\}$ be bases for $V$ and $W$, respectively. Then the space spanned by $\mathcal{B}_1\cup\mathcal{B}_2$ is contained in $V+W$. It is easy to verify that the subspace spanned by $\mathcal{B}_1\cup\mathcal{B}_2$ is $\mathbb{R}^4$. Hence $V+W=\mathbb{R}^4$. For $V\cap W$, remember what the $\cap$ symbol means: the vectors in $V\cap W$ satisfy the properties of both $V$ and $W$. So, $$V\cap W=\{(x,y,z,u)\in\mathbb{R}^4\;|\;y+z+u=0,\;x+y=0\;\text{and}\;z=2u\}$$ Can you see that a basis for this space is $\mathcal{B}_{V\cap W}=\{(3,-3,2,1)\}$? Now compare this result with the equation: $$\text{dim}(V+W)=\text{dim}(V)+\text{dim}(W)-\text{dim}(V\cap W)$$
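If you want to verify the intersection mechanically, here is a sympy sketch (the rows encode the three constraints):

```python
# The nullspace of the constraint matrix gives a basis of V ∩ W.
from sympy import Matrix

A = Matrix([[0, 1, 1, 1],    # y + z + u = 0
            [1, 1, 0, 0],    # x + y = 0
            [0, 0, 1, -2]])  # z - 2u = 0
print(A.nullspace())         # one vector, proportional to (3, -3, 2, 1)
```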
show that $K/k$ is galois
Here is another approach: The Galois group is a subgroup of $S_p$ and contains an element of order $p$ by Cauchy's Theorem; let us call it $\sigma$. This is a $p$-cycle. After possibly replacing $\sigma$ by a suitable power, we may assume $\sigma(\theta)=\theta_2$; in particular the restriction of $\sigma$ to $K$ is a map $K \to K$, hence so are all powers of $\sigma$. We conclude by noting that for each $i$ there is some power of $\sigma$ mapping $\theta$ to $\theta_i$, in particular $\theta_i \in K$.
Outer automorphism of SLn
One way to describe this automorphism is as $X\mapsto f^{-1}(X^T)^{-1}f$, where $f=\operatorname{antidiag}(1,\ldots,1,-1,\ldots,-1)$. The matrix $f$ is exactly the Gram matrix of the form to be preserved by the elements of the symplectic group. Note that the automorphisms are uniquely determined by their action on the generators, and in case of Chevalley groups one has to check if a bijection preserves a very few relations, see Section 12.2 of "Simple groups of Lie type" by R. W. Carter. So it is often easier to look at the generators and relations rather than the matrices.
Deterministic finite machine recognizing language $((00)^*|1(11)^*)^*$
Your language is equivalent to $(00|1)^*$. It's much clearer now that our machine should reject exactly those strings containing a maximal run of zeroes of odd length. From this, we can define a simpler machine. Let $ M = (Q, \Sigma, \delta, q_0, F) $ where $Q = \{q^0, q^1, q^\Delta\}$ is the set of states, $\Sigma = \{0, 1\}$ is the alphabet, $ \delta : Q \times \Sigma \to Q $ is the transition function defined by $$ \begin{array}{c|cc} q\backslash i&0&1\\ \hline q^0&q^1&q^0\\ q^1&q^0&q^\Delta\\ q^\Delta&q^\Delta&q^\Delta \\ \end{array}, $$ $q_0 = q^0$ is the start state, and $F = \{q_0\}$ is the set of accept states.
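Here is a quick simulation of this machine (the state names are mine, with 'D' standing in for the dead state $q^\Delta$):

```python
# Table-driven simulation of the DFA above.
delta = {('q0', '0'): 'q1', ('q0', '1'): 'q0',
         ('q1', '0'): 'q0', ('q1', '1'): 'D',
         ('D',  '0'): 'D',  ('D',  '1'): 'D'}

def accepts(w: str) -> bool:
    state = 'q0'
    for ch in w:
        state = delta[(state, ch)]
    return state == 'q0'

print(accepts('1001'))  # True: blocks "1", "00", "1"
print(accepts('010'))   # False: a lone zero
```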
Multiplicative inverse of the power series $e^x - c$ for $c \neq 1$.
Wolfram Alpha can find the inverse in terms of $c$. The denominators of each term are easy to express in closed form. The polynomials in the numerators seem to follow Euler's triangle.
What is physical interpretation of dot product?
Temporarily imagine that $V_2$ is of unit length. Then, $V_1 \cdot V_2$ is the projection of the vector $V_1$ onto the vector $V_2$. Now we let $V_2$ have its original length and to do so we multiply the result of the dot product by the new length of $V_2$. (This has the effect of making it not matter which one you pretend has unit length initially.) You do this sort of thing when you write a vector as a sum of multiples of the standard unit coordinate vectors (sometimes written $\hat{x}, \hat{y}$, and $\hat{z}$). Use the dot product to project your vector onto $\hat{x}$, getting the multiple of $\hat{x}$ that, when assembled with the other components, will sum to your vector. The dot product is a (poor) measure of the degree of parallelism of two vectors. If they point in the same (or opposite) directions, then the projection of one onto the other is not just a component of the length of the projected vector, but is the entire projected vector. It is a poor measure because it is scaled by the lengths of the two vectors -- so one has to know not only their dot product, but also their lengths, to determine how parallel or perpendicular they really are. In physics, the dot product is frequently used to determine how parallel some vector quantity is to some geometric construct, for instance the direction of motion (displacement) versus a partially opposing force (to find out how much work must be expended to overcome the force). Another example is the direction of the electric field compared to a small patch of surface (which is represented by a vector "normal" to its surface and of length proportional to its area).
Can i find such a polynomial?
The constant function $f(x,y)=1$ is a polynomial that seems to satisfy your needs. I don't understand what you mean by the sentence "Fixing the first input..."
A number related to the roots of a quartic polynomial is a root of a cubic polynomial
We have $$a^{4}-6a+3=0$$ and $$b^{4}-6b+3=0.$$ Subtracting the second from the first and cancelling the factor $a-b$, we obtain, $$(a+b)\{(a+b)^{2}-2ab\}=6.$$ Squaring this, and denoting $a+b$ and $ab$ by $t$ and $x$ respectively, we have, $$t^{6}-4t^{2}(xt^{2}-x^{2})-36=0.$$ So we must show that $xt^{2}-x^{2}=3$. Now, for that we denote by $c$ and $d$, the other roots of ${\rm f}(x)$. We have $$abc+abd+acd+bcd=6.$$ Also, using $a+b+c+d=0$ and $abcd=3$, we obtain $$t(\frac {3}{x}-x)=6.$$ But, we had found before that $t(t^{2}-2x)=6$ and hence, we have(since $t \neq 0$), $$t^{2}-2x=\frac {6}{t}=\frac{3}{x}-x.$$ Thus, $$xt^{2}-x^{2}=3.$$
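A quick numeric check that $t=a+b$ satisfies the resulting cubic in $t^2$, namely $t^{6}-12t^{2}-36=0$ (obtained by substituting $xt^{2}-x^{2}=3$ above):

```python
# For every pair of roots a, b of x^4 - 6x + 3, t = a + b satisfies t^6 - 12 t^2 - 36 = 0.
import numpy as np
from itertools import combinations

roots = np.roots([1, 0, 0, -6, 3])
for a, b in combinations(roots, 2):
    t = a + b
    print(abs(t**6 - 12*t**2 - 36))  # all ≈ 0
```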
Prove that this graph with 20 edges on 11 vertices is not 3-colorable.
As I suggested in the comments; this graph is the Grötzsch graph. It is triangle free and its independence number is $5$. Here's a nicely symmetric drawing of the graph, with vertex $i$ labeled $v_i$: Suppose the graph is $3$-colorable, with colors $1$, $2$ and $3$. Let $c_i$ denote the color of $v_i$. First note that the five vertices of the inner star cannot all be the same color, so without loss of generality $c_2\neq c_5$. Relabeling the colors and reflecting we may assume that $c_0=1$, $c_6=2$, $c_5=1$ and $c_2=3$. Then necessarily $c_8=2$, and continuing along the outer star we find that $c_4=3$ and then $c_{10}=2$, and now $v_1$ cannot be colored.
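If you want a machine check rather than the case analysis, brute force is feasible here ($3^{11}$ colorings); this sketch assumes networkx, whose mycielski_graph(4) is the Grötzsch graph:

```python
# Exhaustively verify that no proper 3-coloring of the Grötzsch graph exists.
import networkx as nx
from itertools import product

G = nx.mycielski_graph(4)  # 11 vertices, 20 edges
three_colorable = any(all(c[u] != c[v] for u, v in G.edges())
                      for c in product(range(3), repeat=G.number_of_nodes()))
print(three_colorable)  # False
```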
How to prove tr($Z^TZ$) is the sum of singular values
That should be interpreted as $$\text{tr}\left((Z^T Z)^{1/2}\right) = \ldots$$ not $$\left(\text{tr}(Z^T Z)\right)^{1/2} = \ldots$$ The singular values are the nonzero eigenvalues of $(Z^T Z)^{1/2}$, i.e. the positive semidefinite square root of $Z^T Z$. The trace of a matrix is the sum of its eigenvalues.
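A quick numeric illustration (a sketch with a random $Z$):

```python
# trace((Z^T Z)^{1/2}) equals the sum of the singular values of Z.
import numpy as np
from scipy.linalg import sqrtm

Z = np.random.randn(5, 4)
print(np.trace(sqrtm(Z.T @ Z)).real)
print(np.linalg.svd(Z, compute_uv=False).sum())  # same value
```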
how to prove a corollary of multivariate normal??
An idempotent matrix is diagonalizable. I think you also need $A$ to be symmetric. If this is the case, the spectral theorem implies we can write $A$ as $BDB^\top$, where $B$ is orthogonal, and where $D$ is diagonal with diagonal entries either $1$ or $0$ (since $A$ is idempotent). Since $A$ has rank $m$, there are $m$ ones. By rotational invariance of the Gaussian distribution, $z:=B^\top y \sim N(0,I)$ as well. So $y^\top A y = z^\top D z$ is the sum of $m$ squares of standard Gaussian random variables, which is $\chi^2(m)$. I don't think the result holds when $A$ is not symmetric. Consider $A=\begin{bmatrix}1&-1\\0&0\end{bmatrix}$. It is idempotent with rank $m=1$. However, $y^\top A y = y_1^2 - y_1 y_2$ which is not $\chi^2(1)$.
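A Monte Carlo sanity check of the symmetric case (my own construction of a rank-3 symmetric idempotent):

```python
# y^T A y for a symmetric idempotent A of rank 3 should be χ²(3)-distributed.
import numpy as np
from scipy import stats

B, _ = np.linalg.qr(np.random.randn(5, 5))   # random orthogonal B
A = B @ np.diag([1., 1., 1., 0., 0.]) @ B.T  # symmetric, idempotent, rank 3
y = np.random.randn(100000, 5)
q = np.einsum('ij,jk,ik->i', y, A, y)        # the quadratic forms y^T A y
print(stats.kstest(q, 'chi2', args=(3,)))    # large p-value expected
```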
Any necessary and sufficient condition(s) for closure of an open ball to be the corresponding closed ball?
This is a global condition - that is, it is both necessary and sufficient to have your condition be true for all $x,\delta$. You need: (Condition 1): Given any $x\neq y$ and any $\epsilon>0$, there is some $z$ so that $d(y,z)<\epsilon$ and $d(x,z)<d(x,y)$. That is, every neighborhood of $y\neq x$ has a point closer to $x$ than $y$ is. For example, the discrete space fails because for some $\epsilon>0$ there is no $z\neq y$ with $d(y,z)<\epsilon$. It's necessary because if you have a counter-example to my condition, with $x\neq y\in X$, define $r=d(x,y)$. Then $y$ is on the sphere of radius $r$ but, for some $\epsilon>0$, $B_{\epsilon}(y)\cap B_r(x)=\emptyset$, so $y$ is not in the closure of $B_r(x)$. It is sufficient because if $y$ is on the sphere of radius $r$ around $x$, then $d(x,y)=r$. Now, for each $\epsilon_k=\frac{1}{k}$, find $z_k\in B_{\epsilon_k}(y)$ with $d(x,z_k)<d(x,y)=r$. Then $z_k$ is a sequence in $B_r(x)$ which converges to $y$, so $y$ is in the closure of $B_r(x)$. This can be rewritten as: (Condition 2): If $x\in X$ and $U$ is an open set not containing $x$, then the function $U\to\mathbb R^{+}$ defined as $u\mapsto d(x,u)$ does not achieve its infimum in $U$. The relationship to condition (1) is more obvious, I suppose, if you rewrite condition 2 as: (Condition 1.5): Given open $U$ and $x\notin U$, then for any $y\in U$, there is a $z\in U$ so that $d(x,z)<d(x,y)$. That's therefore clearly an extended version of Condition (1), applied to all open sets containing $y$, rather than just open balls around $y$. Proof that Condition 1 and Condition 2 are equivalent Assume Condition (1). Let $U\subseteq X$ be open and $x\notin U$. For any $y\in U$, pick $\epsilon>0$ so that $B_{\epsilon}(y)\subseteq U$. This can be done because $U$ is open. But condition (1) means that there must be a $z\in B_{\epsilon}(y)\subseteq U$ so that $d(x,z)<d(x,y)$. So $d(x,y)$ is not a lower bound for $\{d(x,u)\mid u\in U\}$, for any $y\in U$, proving condition $(2)$. Assuming Condition (2): Given $x\neq y\in X$. If $\epsilon>0$ is chosen, define $U=B_{\epsilon}(y)$. If $x\in U$, then $d(x,x)=0<d(x,y)$, so we can just choose $z=x$. If $x\notin U$, then, since $U$ is open, we know by condition (2) that $d(x,y) \neq \inf_{u\in U} d(x,u)$, so there must be a $z\in U=B_{\epsilon}(y)$ with $d(x,z)<d(x,y)$. Thus we have Condition (1).
Find the intersection of two lines passing through given points
Denote the lines as $\mathcal{l}_1$ and $\mathcal{l}_2$. The equations of the lines, using the two-point form, are $$\begin{cases}\mathcal{l}_1: \dfrac{y+1}{x+2}=\dfrac{5+1}{4+2}=1\implies x-y+1=0\\ \mathcal{l}_2: \dfrac{y-3}{x-3}=\dfrac{1-3}{6-3}=\dfrac{-2}{3}\implies 2x+3y-15=0\end{cases}$$ Solving the equations of the two lines simultaneously (preferably using cross multiplication) will yield the point of intersection. I hope you can do the rest by yourself.
Find the limit of the following sequence or determine that the limit does not exist?
The limit of a sequence means the limit as $n \to +\infty$. That being the case, to figure out the limit of a sequence, you only need to consider $n$ sufficiently large. The domain of a sequence is a subset of the natural numbers, so you shouldn't be dealing with negative $n$ at all. Consider the question as asking $\lim_{n \to +\infty}\langle \frac {n+4} {n}\rangle$ and ignore everything about what happens for small $n$.
are there closed form solution to $n \cdot y + \log(y) = x$?
\begin{align*} n y + \log y = x &\implies e^{n y + \log y} = e^x \\ &\implies y e^{ny} = e^x \\ &\implies n y e^{ny} = n e^x \end{align*} Now, the inverse of the function $xe^x$ is the Lambert W-function. So applying $W$, we get $$ ny = W(n e^x) \implies \boxed{y = \tfrac{1}{n} W(n e^x).} $$ (Does the Lambert W function count as a closed form? Well, other standard functions you are used to--such as $\log, \exp,$ and $\sin$--might not have seemed like closed forms when people first used them. But what "closed form" really means is that we express the result in terms of previously-defined functions, and the Lambert W function was defined exactly for this purpose.)
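A quick check with scipy's implementation (the principal branch, which is real here since $ne^x>0$):

```python
# Verify that y = W(n e^x)/n solves n y + log(y) = x.
import numpy as np
from scipy.special import lambertw

n, x = 3.0, 2.0
y = lambertw(n * np.exp(x)).real / n
print(n * y + np.log(y), x)  # both 2.0
```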
"Abstract nonsense" proof of the splitting lemma
Notations: The short exact sequence splits on the left $$ 0 \longrightarrow A \underset{\underset{\phi}{\longleftarrow}}{\overset{f}{\longrightarrow}} B \overset{g}{\longrightarrow} C \longrightarrow 0 \quad, \quad \phi\circ f =\mathrm{id}_A$$ Let's show that $B \cong A\oplus C$. $\underline{1)}\enspace \phi\circ f = \mathrm{id}_A\ \Rightarrow\ P:= f\circ \phi $ is an idempotent ($P\circ P=P$). Any arrow in an abelian category has an image, so that $P$ splits into $ B \overset{j_P}{\longrightarrow} \mathrm{Im}(P) \overset{i_P}{\longrightarrow} B $ with $j_P$ epi and $i_P$ mono (cf. the equivalent def. of image and the property that $j_P$ is epi in categories with finite limits and colimits). $j_P$ is actually a left inverse of $i_P$: $$i_P\, j_P\,i_P\, j_P= P^2 = P = i_P\, j_P\quad\underset{i_P\ \text{mono}}{\overset{j_P\ \text{epi}}{\Longrightarrow}}\quad j_P\,i_P= \mathrm{id}_{\mathrm{Im}(P)}$$ $j_P\circ f$ is actually an isomorphism $A \cong \mathrm{Im}(P)\ $: $f\circ \phi$ is a factorization of $P$ so that by the universal property of the image (in the sense of "initial factorization") there exists a unique morphism $v: \mathrm{Im}(P)\to A$ such that $i_P =f\, v$ and $\phi =v\, j_P$ whence $$(j_P\, f)\, v = j_P\, i_P = \mathrm{id}_{\mathrm{Im}(P)}\quad ,\quad v\,(j_P\, f)= \phi \, f = \mathrm{id}_A$$ (This is Proposition 10.2 p.12 from "Theory of Categories", Barry Mitchell: In a balanced category, if a morphism has an image and an (Epi-Mono) factorization then the mono is the image) $\underline{2)} $ Hom sets of an abelian category are abelian groups so that it makes sense to define $$Q := \mathrm{id}_B -P\quad \in\big(\mathrm{Hom}(B,B),+\big)$$ As previously, $Q$ factorizes as $Q=i_Q \circ j_Q$ with $i_P, i_Q$ mono and $j_P, j_Q$ epi $$ PQ = QP = 0\quad \Longrightarrow\quad j_P\circ i_Q = 0 \enspace ,\enspace j_Q \circ i_P = 0 \qquad (Eq)$$ $\underline{3)} $ The two maps $\ i_P: \mathrm{Im}(P)\rightarrow B\ ,\ i_Q: \mathrm{Im}(Q)\rightarrow B $ define by the universal property of the coproduct a map $\Phi: \mathrm{Im}(P)\coprod \mathrm{Im}(Q) \rightarrow B$ (this could also be written $i_P+i_Q$, with $+$ taken symbolically if we were not in an abelian category), and the two maps $j_P : B \rightarrow \mathrm{Im}(P)\ ,\ j_Q : B \rightarrow \mathrm{Im}(Q) $ define a map $\Psi: B\rightarrow \mathrm{Im}(P)\prod \mathrm{Im}(Q)$. Using $j_P\circ i_{P} = \mathrm{id}_{\mathrm{Im}(P)},\ j_Q\circ i_Q = \mathrm{id}_{\mathrm{Im}(Q)},\ (Eq)$ and the definition of biproduct, one checks the isomorphism $B\cong \mathrm{Im}(P)\oplus \mathrm{Im}(Q)$, i.e. $$ \Phi\circ \Psi =P+Q = \mathrm{id}_B\quad \text{and} \quad \Psi\circ \Phi =\mathrm{id}_{\mathrm{Im}(P)\oplus\mathrm{Im}(Q)}$$
Showing a polynomial is irreducible in $\mathbb{Q}[x]$
Gauss' lemma states that if a polynomial is irreducible over the integers, it is also irreducible over the rationals. Let's show the polynomial $$P_n(x)=a_0x^n+a_1x^{n−1}+...+a_n$$ such that $$a_n > a_{n-1}+a_{n-2}+...+a_0 \tag{1}$$ with $a_k \in \mathbb{N}^{*}$ for all $k$ and $a_n$ prime, is irreducible over the integers. Proposition 1 If $\alpha \in \mathbb{C}$ is a root of $P_n(x)$, i.e. $P_n(\alpha)=0$, then $|\alpha|>1$. Let's suppose $|\alpha| \leq 1$, then $$a_0\alpha^n+a_1\alpha^{n−1}+...+a_n=0 \Rightarrow |a_n|=|a_0\alpha^n+a_1\alpha^{n−1}+...+a_{n-1}\alpha| \Rightarrow \\ a_n\leq a_0|\alpha^n|+a_1|\alpha^{n−1}|+...+a_{n-1}|\alpha| \leq a_0+a_1+...+a_{n-1}$$ which contradicts $(1)$. Proposition 2 $P_n(x)$ is irreducible over the integers. Let's suppose it's reducible, i.e. $P_n(x)=G(x)\cdot F(x)$, where $G, F$ are non-constant polynomials with integer coefficients. Then $P_n(0)=G(0)\cdot F(0)=a_n$ which is prime. This means that the absolute value of the last coefficient of $G$ is 1 and of the last coefficient of $F$ is $a_n$ (or vice-versa, but WLOG let's assume so). Let's write $$G(x)=b_kx^k+...+b_0, b_i\in \mathbb{Z}, b_k \ne 0, |b_0|=1$$ From Vieta's formulas we have that the absolute value of the product of $G(x)$'s roots ($G(\gamma_i)=0, \gamma_i \in \mathbb{C}$) is $$\left|\prod_{i} \gamma_i \right|= \left|\frac{b_0}{b_k}\right|\leq 1$$ But every single root of $G(x)$ is also a root of $P_n(x)$ and, according to Proposition 1, we must have $$\left|\prod_{i} \gamma_i \right|=\prod_{i} \left|\gamma_i \right|>1$$ So we have a contradiction.
Splitting a double summation
First, let's prove that for any sequences $(a_{k})$ and $(b_{k})$ and a positive integer $n$, $$ \sum_{k=1}^{n}\left[a_{k}+b_{k}\right]=\sum_{k=1}^{n}a_{k}+\sum_{k=1}^{n}b_{k}. $$ Under the convention $\sum_{k=1}^{0}f_{k}=0$, the statement above is trivially true for $n=0$. Now, suppose the statement holds for some particular $n=n_{0}-1$ with $n_{0}\geq1$. Then, $$ \sum_{k=1}^{n_{0}}\left[a_{k}+b_{k}\right]=a_{n_{0}}+b_{n_{0}}+\sum_{k=1}^{n_{0}-1}\left[a_{k}+b_{k}\right]=a_{n_{0}}+b_{n_{0}}+\sum_{k=1}^{n_{0}-1}a_{k}+\sum_{k=1}^{n_{0}-1}b_{k}=\sum_{k=1}^{n_{0}}a_{k}+\sum_{k=1}^{n_{0}}b_{k}. $$ The desired result follows by induction. Now, you should be able to use the above result to prove that for any sequences $(a_{k\ell})$ and $(b_{k\ell})$ and positive integers $n$ and $m$, $$ \sum_{k=1}^{n}\sum_{\ell=1}^{m}\left[a_{k\ell}+b_{k\ell}\right]=\sum_{k=1}^{n}\sum_{\ell=1}^{m}a_{k\ell}+\sum_{k=1}^{n}\sum_{\ell=1}^{m}b_{k\ell}. $$
Volume of the solid in the first octant bounded by the cylinder $z=9-y^2$
To evaluate the volume of this solid, a triple iterated integral is fine. Note that the solid is given by $$\{(x,y,z)\in\mathbb{R}^3\;:\; 0\leq x\leq 2,\; 0\leq y,\; 0\leq z\leq 9-y^2\}.$$ So your upper integration limit for $y$ is not correct (and according to the order of integration, in any case it should not depend on $z$). What is the right upper limit for $y$?
Proof for the universal property of the tensor product of modules
$H \subseteq \textrm{ker } \Psi$ is all you need for $\Psi$ to factor through the quotient $F/H$. In general, let $\delta: A \rightarrow B$ be a homomorphism of abelian groups ($\mathbf Z$-modules, same thing), and let $A_0$ be a subgroup of $A$. To say that $\delta$ factors through $A_0$ (or $A/A_0$) is to say that there exists an abelian group homomorphism $\bar{\delta}: A/A_0 \rightarrow B$ such that $\bar{\delta}(a + A_0) = \delta(a)$ for all $a \in A$. If such a homomorphism exists, it is clearly unique. The only issue barring the existence of such a homomorphism is the possibility that it is not well defined. And you can check that it is well defined if and only if $A_0$ is contained in the kernel of $\delta$.
Classical solution of $u_t + (u^4)_x = 0$ (characteristics)
a) Rewriting the PDE in quasi-linear form, $u_t + 4u^3 u_x = 0$, the solution obtained by the method of characteristics reads implicitly as $$ u = g(x - 4 u^3 t)\, . $$ The derivation in OP looks correct, since it leads to the same implicit formula. Following one of the methods in this post, we can show that the solution becomes singular unless $s\mapsto 4g(s)^3$ is a non-decreasing function of $s$. This property is true, as a consequence of the properties of $g$ and of the cubic polynomial function. Thus, the classical solution is defined for all $t\geq 0$. b) Taking the limit as $t\to+\infty$ in the implicit formula yields $$\lim_{t\to+\infty} u(t,x) = 0\qquad\text{at}\qquad x =0 \, . $$ For other abscissas, I have no clue how to prove it, but I'd say $u\to 0$ too. c) If $g$ is bounded, taking the limit in the implicit formula yields the result, and the values of $u$ at $x\to\pm\infty$ are the limits of $g$. If $g$ is unbounded, I've no clue how to prove the result.
Calculating number of ratings needed to reach 4 stars?
An equation would look something like... $$n = \frac{t(m_2 - m_1)}{c - m_2}$$ where

- $m_1$ is the old average among the original $t$ ratings
- $m_2$ is the desired new average after the additional ratings are included
- $t$ is the number of ratings used in the first average
- $c$ is the score ($0-5$ stars) of the additional ratings used in the new average
- $n$ is the number of the additional ratings used in the new average

In your specific application you have... $$n = \frac{500(4 - 3.2)}{5-4}$$ You can see the math as follows: let $s_i$ be the $i$th score, or rating, for $i = 1,2,3,...,500$. We know that... $$\frac{\sum_{i=1}^{500} s_i}{500} = 3.2$$ $$\sum_{i=1}^{500} s_i = 1600$$ We want to solve for $n$ in the following $$\frac{5n+ \sum_{i=1}^{500} s_i}{500 +n} = 4$$ $$\frac{5n+ 1600}{500 +n} = 4$$ $$5n+ 1600 = 4(500 +n)$$ $$5n+ 1600 = 2000 + 4n$$ $$5n - 4n = 2000 - 1600$$ $$n = 400$$
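As a small helper function (variable names as above):

```python
# Number of additional ratings of score c needed to move the average from m1 to m2.
def additional_ratings(t, m1, m2, c):
    return t * (m2 - m1) / (c - m2)

print(additional_ratings(500, 3.2, 4, 5))  # 400.0
```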
Show that $(\mathbb Z/20\mathbb Z)^*\cong\mathbb Z/2\mathbb Z\times\mathbb Z/4\mathbb Z$
This is a morphism because $$\phi((a,b)+(a',b')) = 7^{a+a'}11^{b+b'} = (7^a 11^b) (7^{a'} 11^{b'}) = \phi(a,b) \phi(a',b')$$ By the first isomorphism theorem, $(\Bbb Z \times \Bbb Z)/\ker \phi = \operatorname{im} \phi = (\Bbb Z/20\Bbb Z)^*$. One then verifies that $(\Bbb Z \times \Bbb Z)/\ker \phi = (\Bbb Z \times \Bbb Z)/(4 \Bbb Z \times 2 \Bbb Z) = (\Bbb Z/4\Bbb Z) \times (\Bbb Z/2\Bbb Z)$. Alternatively, $(\Bbb Z/20\Bbb Z)^* \cong ((\Bbb Z/4\Bbb Z) \times (\Bbb Z/5\Bbb Z))^* \cong (\Bbb Z/4\Bbb Z)^* \times (\Bbb Z/5\Bbb Z)^* \cong (\Bbb Z/2\Bbb Z) \times (\Bbb Z/4\Bbb Z)$.
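A quick computational check that $\phi$ is surjective onto the $8$ units mod $20$:

```python
# The image of (a, b) -> 7^a 11^b mod 20 is exactly the unit group mod 20.
from math import gcd

units = {u for u in range(20) if gcd(u, 20) == 1}
image = {pow(7, a, 20) * pow(11, b, 20) % 20 for a in range(4) for b in range(2)}
print(image == units, len(units))  # True 8
```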
Regarding the limit of the ratio of flux to volume enclosed when volume tends to zero.
The numerator in the limit may be rewritten as $\iint_{\partial S_i}\mathbf F\cdot\mathbf n\,dS_i$, which by the divergence theorem is equivalent to $\iiint_{V_i}(\nabla\cdot\mathbf F)\,dV_i$. Thus the limit gives the mean value of $\nabla\cdot\mathbf F$ in $V_i$, and the limit as $V_i\to0$ is just $(\nabla\cdot\mathbf F)(i)$, which is independent of $S_i$.
Proof that $\operatorname{rank}(dT)=1$ implies the image is a curve
Around each point $u$ you have that $G$ and $H$ are compositions of $C^1$ functions and thus $C^1$, hence your image is a $C^1$ curve. Note that your $K$ changes as we move along the curve but different $K$s will agree on intersection (implicit function theorem). If the function is constant in a neighbourhood around $p_0$ then the rank at $p_0$ would be 0.
Show there doesn't exist a 4-regular graph with 4 vertices.
If we pick a vertex in a $4$-regular simple graph, by definition it has four neighbors. This alone requires $5$ vertices, which is enough to imply it's impossible. Except for the empty graph (which has no vertices or edges), a $k$-regular graph must have $k+1$ or more vertices. We can get it to work if we allow parallel edges (for example, take a cycle on four vertices and double every edge), but not in a simple graph.
The out-flux of the vector field $F(x,y,z)=(-\sin(2x)+y\,e^{3z},\,-(y+1)^2,\,2z(y+\cos(2x)+3))$
Given the vector field $\mathbf{F} : \mathbb{R}^3 \to \mathbb{R}^3$ of law: $$ \mathbf{F}(x,\,y,\,z) := \left(-\sin(2x) + y\,e^{3z}, \; -(y+1)^2, \; 2z\left(y+\cos(2x)+3\right)\right) $$ and the domain: $$ \Omega := \left\{ (x,\,y,\,z) \in \mathbb{R}^3 : x^2+y^2+z^2\le2^2, \; x \le 0, \; y \le 0, \; z \ge 0 \right\}, $$ by the divergence theorem, the outgoing flow of $\mathbf{F}$ from the boundary $\partial\Omega$ of $\Omega$ is calculated as: $$ \Phi_{\partial\Omega}(\mathbf{F}) = \iiint\limits_{\Omega} \nabla \cdot \mathbf{F}\,\text{d}\omega = \iiint\limits_{\Omega} 4\,\text{d}x\,\text{d}y\,\text{d}z = 4 \cdot ||\Omega|| = 4 \cdot \frac{\frac{4}{3}\,\pi\,2^3}{8} = \frac{16}{3}\pi\,, $$ i.e. it's equal to four times the volume of an eighth of a ball of radius two.
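The divergence computation can be checked symbolically (a sympy sketch):

```python
# ∇·F simplifies to the constant 4.
import sympy as sp

x, y, z = sp.symbols('x y z')
F = (-sp.sin(2*x) + y*sp.exp(3*z), -(y + 1)**2, 2*z*(y + sp.cos(2*x) + 3))
div = sp.diff(F[0], x) + sp.diff(F[1], y) + sp.diff(F[2], z)
print(sp.simplify(div))  # 4
```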
Find the equation of the circle which cuts the circle $x^2+y^2+2x+4y-4=0\;$ and the lines $xy-2x-y+2=0\;$ orthogonally
For two figures to be orthogonal vis-à-vis each other means that at each of their points of intersection their slopes are perpendicular. In terms of analytic geometry, this would mean that the slope of one is the negative reciprocal of the slope of the other -- i.e., the product of the two slopes at the intersection equals $-1\;$. $xy-2x-y+2=0\;$ is really the two lines $x=1$ and $y=2\;$. Any circle orthogonal to the two lines must therefore be centered at $A(1\mid 2)\;$ . The two tangents from $A\;$ to the circle $B\equiv[x^2+y^2+2x+4y-4=0]\;$ can be constructed by drawing the circle whose diameter is the line joining the center of the given circle $C(-1\mid -2)\;$ to $A\;$. The equation of this new circle is $D\equiv[x^2+y^2=5]\;$. The two intersections $E_1\left(\frac{-1-6\sqrt{11}}{10}\mid\frac{-2+3\sqrt{11}}{10}\right)\;$ and $E_2\left(\frac{-1+6\sqrt{11}}{10}\mid\frac{-2-3\sqrt{11}}{10}\right)\;$ of the two circles $B\;$ and $D\;$ are the points of tangency. Your required circle $[(x-1)^2+(y-2)^2=11]\;$, centered, as required, at $A\;$, passes through $E_1\;$ and $E_2\;$. Oh, by the way, from the menu of choices you provided, the proper choice is $a.$
Solving first order ordinary differential equation
Substitute $y'=\frac{1}{x'}$, \begin{align*} \frac{x(2x^2y \log(y)+1)}{x'}=2y \\ \Rightarrow \frac{x(2x^2y \log(y)+1)}{2y}=x' \\ \Rightarrow x^3 \log(y) + \frac{x}{2y}=x' \\ \end{align*} and rearranging gets you to your equation.
Uniform convergence on a compact
Recall $\lim_{n\to \infty} a^n = 0$ for $a \in (0,1)$. This should help for uniform convergence on $(0,1)$. Uniform convergence $\Rightarrow$ pointwise convergence, hence if $f_n$ does not converge pointwise for some $x$ it does not converge uniformly. For $[0,1]$, where does $f_n$ go if $x=1$? As a side note, it's worth mentioning that we had to restrict to $[0,a]$ (i.e. closed intervals) as $x^n$ may not converge uniformly on $[0,1)$. Check why this is true yourself using the definition of uniform convergence.
rational function limit involving factorials
Hint. You may recall the Stirling formula $$ n!\sim\sqrt{2\pi n}\left(\dfrac{n}{e}\right)^n $$ for large $n$, giving $$ \frac{(2n-1)!}{(2n)^n}\sim\sqrt{2\pi (2n-1)}\left(\dfrac{2n-1}{e}\right)^{2n-1}\frac{1}{(2n)^n} \sim\frac{\sqrt{2\pi}}{e}\left(\dfrac{2n}{e^2}\right)^{n-1/2} $$ then your limit is $+\infty$ since $\displaystyle \dfrac{2n}{e^2} \longrightarrow +\infty$.
Why is absolute value function not a polynomial?
All polynomials are differentiable, but the absolute value function $|x|$ is not (at $x=0).$
Projection of a lattice onto a subspace
I disagree with observation 2. It gives a sufficient condition that is not necessary unless you only consider projections onto a 1-dimensional subspace. A weaker sufficient condition, where I am not sure whether it's also necessary, is the following: if there is a decomposition $P_A G = BC$, where $B$ is any regular matrix and $C$ is a matrix with only integer entries, then $\Lambda_U$ also must be a lattice. This must be because $C$ represents a map $\mathbb{Z}^n \rightarrow \mathbb{Z}^n$, and $B$ represents a vector space isomorphism. (Note that it does not matter whether $C$ is integer or rational, for any common denominator can be moved into $B$.) It seems clear to me for geometric reasons that for any regular $G$ a suitable 1-dimensional $U$ can be found. Consider the basis vectors of the lattice, i.e. the columns of $G$. There must be a hyperplane through them and a line through the origin perpendicular to that hyperplane. An orthogonal projection onto that line will move points parallel to the hyperplane. In particular, the basis vectors will all be projected to the intersection of the line with the hyperplane. It follows that integer combinations of the basis vectors will be mapped to integer multiples of that. I am not sure whether there is a simple way to find a basis for this $U$, but it should always be possible to take differences of basis vectors as a basis for the hyperplane and use Gram-Schmidt orthogonalization to find a normal vector. If you are interested, it can be proved easily, at least in $\mathbb{R}^2$, that there is in fact a (countable) infinity of suitable $U$s. This can be done by setting up a sine/cosine parametrization of the unit vectors and showing that the ratio of the projection of the basis vectors of the lattice is a continuous non-constant function of the angle. Addendum: Actually, it is algebraically not as complicated as I imagined. Supposing that $G$ is invertible, let $H = G^{-1}$ and take for $D$ an $n \times 1$ matrix of all ones. Now if we let $A = H^TD$ then $A^TA$ becomes scalar and we can simplify $$ P_A = \frac{H^TDD^TH}{D^THH^TD} $$ This means that $$ P_AG = \frac{H^TD}{D^THH^TD} D^T $$ which means that the columns are all equal. Alternatively, you could of course replace the ones in $D$ with any rationals.
Slice of a coordinate system in a manifold
As for the first question, since a permutation of the coordinates is a diffeomorphism, any set of coordinate functions is admitted. It's just easier to write it down that way. As for the second question, note that the $y$-axis is divided in two parts which meet at the origin (and the origin is excluded from the set; note the tip of the arrows).
If $\sqrt {x+iy}=a+bi$ then prove that
The two square roots of $\,-1\,$ are indistinguishable, so any result that uses $\,i\,$ will also be true with $\,-i\,$ substituted for $\,i\,$ throughout. Technically, conjugation is an automorphism. You might think the same holds for the square roots of $\,2\,$, for example, but one root is positive and the other negative, although algebraically they are indistinguishable.
Prove that f(x) vanishes in whole of its domain.
As you started: $[0,1]$ is compact and $|f|$ is continuous, so let $M=\max_{x\in[0,1]}|f(x)|$ and $x_0\in[0,1]$ with $|f(x_0)|=M$. If $x_0=0$ then $M=|f(0)|=0$ and we are done, so assume $x_0>0$. By the Mean Value Theorem, there exists $x_1\in(0,x_0)$ such that $f(x_0)-f(0)=(x_0-0)f'(x_1)$. As $f(0)=0$, we get $$ M=|f(x_0)-f(0)|=x_0\cdot |f'(x_1)|\le x_0\cdot c\cdot |f(x_1)|\le x_0\cdot c\cdot M.$$ If $M>0$, this implies $x_0\cdot c\ge 1$, which is impossible since $0<x_0\le 1$ and we are given that $0<c<1$. We conclude that $M=0$.
Show that the following sets are equal:
Hint: Try to convince yourself that $B = \{ (x^2,y) \mid y^2 = 4 x^2 \} $
What is the expected number of rounds needed to finish the experiment?
The state of the game is completely determined by the number of red balls in the first urn. Denote by $E_r$ the expected number of additional rounds when there are $r\in\{4,3,2\}$ red balls in the first urn. Then $$E_4=1+E_3,\quad E_3=1+{1\over4}\cdot{1\over4}\>E_4+\left({1\over4}\cdot{3\over4}+{3\over4}\cdot{1\over4}\right)E_3+{3\over4}\cdot{3\over4}E_2,\quad E_2=0\ .$$ Solving this system gives $E_4={26\over9}$.
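Solving the system mechanically (a sympy sketch) confirms the value:

```python
# The two unknown expectations satisfy a linear system; E2 = 0 is substituted in.
from sympy import symbols, Rational as R, solve

E4, E3 = symbols('E4 E3')
sol = solve([E4 - (1 + E3),
             E3 - (1 + R(1, 16)*E4 + R(6, 16)*E3)], [E4, E3])
print(sol)  # E4 = 26/9, E3 = 17/9
```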
Solving the inequality $\frac{x}{\sqrt{x+12}} - \frac{x-2}{\sqrt{x}} > 0$
Note that our expression is only defined when $x>0$. Bring to a common denominator $\sqrt{x+12}\sqrt{x}$. We get $$\frac{x\sqrt{x}-(x-2)\sqrt{x+12}}{\sqrt{x+12}\sqrt{x}}.$$ The bottom is safely positive, so we want to find out where $$x\sqrt{x}-(x-2)\sqrt{x+12}>0.$$ This expression can only change sign when we travel across points where the expression is $0$. So we solve $$x\sqrt{x}-(x-2)\sqrt{x+12}=0.$$ To find out where this could happen, we bring the negative stuff to the other side, and then square both sides. We are looking at $$x^3-(x-2)^2(x+12)=0.$$ Expand. The $x^3$ terms cancel, and we get a quadratic. Solve. The solutions should turn out to be $x=3/2$ and $x=4$. This divides the region we are interested in into parts $(0,3/2)$, $(3/2,4)$ and $(4,\infty)$. We also need to worry a tiny bit about $3/2$ and $4$. Now look at either our original function, or $g(x)=x\sqrt{x}-(x-2)\sqrt{x+12}$. Evaluate it at convenient "test points" in our intervals. For example, to deal with $(0,3/2)$, we can use the test point $x=1$. It is easy to see that $g(1)$ is positive. Now look at a convenient test point in $(3/2,4)$, like $x=2$. Clearly, $g(2)$ is positive. Finally, deal with $(4,\infty)$. We may need a calculator. Let $x=9$. We find that $g(9)$ is negative. So there is a change of sign only at $x=4$. For $0<x<4$, our expression is $>0$. For $x\ge 4$, our expression is $\le 0$. (It is exactly $0$ at $x=4$.)
Which of the following are correct definitions of functions?
(b) is not a valid function since $f\left(0\right)=0$ by the first specification and also $f\left(0\right)=1$ by the second specification (contradiction, for each $x$, $f\left(x\right)$ should be unique) you could argue that (c) is not a valid function if you are assuming $f$ is a function from the real numbers (e.g. $3$, $1/2$, $\pi$, etc.) to the real numbers as $1/\left(1-1^2\right)$ is undefined in that case (division by zero). I think from the context, this is what the author of the question wants you to assume.
Prove the Identity For Fringe Patterns
Hint: $$\sum_{n=0}^\infty \left(re^{\theta i}\right)^n=\frac1{1-re^{i\theta }}\;,\;\;\text{as long as}\;\;|r|<1$$ But $$\frac1{1-re^{i\theta}}=\frac{1-re^{-i\theta}}{|1-re^{i\theta}|^2}$$ and now just take the real and the imaginary parts...and remember, of course, that a complex sequence converges iff its real and imaginary parts each converge (and what's the relation of this with series convergence?) Added on request: By definition, $\;e^{i\theta}=\cos\theta+i\sin\theta\;,\;\;\theta\in\Bbb R\;$ . Sometimes, in particular at high school level, this is denoted by cis$\,\theta\;$ . Now, it's an easy exercise, using the polar form for complex numbers, to show that if $\;z\in\Bbb C\;$ , then $$z^{-1}=\overline z\iff |z|=1\iff z=e^{i\theta}$$ and from here, doing the usual and multiplying a complex fraction by the denominator's conjugate in the form of $\;1\;$, we get: $$\frac1{1-re^{i\theta}}=\frac1{1-re^{i\theta}}\frac{1-re^{-i\theta}}{1-re^{-i\theta}}=\frac{1-re^{-i\theta}}{|1-re^{i\theta}|^2}=\frac{1-re^{-i\theta}}{(1-r\cos\theta)^2+r^2\sin^2\theta}=$$ $$=\frac{1-r\cos\theta+ir\sin\theta}{1-2r\cos\theta+r^2}$$ because $\;\forall\,z\in\Bbb C\;,\;\;z\overline z=|z|^2=\text{Re}\,(z)^2+\text{Im}\,(z)^2\;$ Finally, if we have an infinite geometric series (real or complex) $\;a,ar,ar^2,ar^3,...,ar^n,...\;$ , with $\;|r|<1\;$ , then $$\sum_{n=0}^\infty ar^n=\frac{a}{1-r}$$ since $$\sum_{k=0}^n ar^k=a\frac{1-r^{n+1}}{1-r}\;,\;\;\text{and}\;\;r^n\xrightarrow[n\to\infty]{}0\;\;\text{for}\;\;|r|<1$$ I'm assuming the OP knows the basics of complex numbers, e.g. de Moivre's formula: $$(\cos\theta+i\sin\theta)^n=\cos n\theta+i\sin n\theta$$ which becomes pretty trivial if we use the exponential form $\;\left(e^{i\theta}\right)^n=e^{in\theta}\;$ Hope this helps. Any other doubt, write back.
A question about cardinal number.
Fix a set $A$ of $n-1$ elements of $X$; the map $X\setminus A\to\mathfrak{F}:x\mapsto A\cup\{x\}$ is an injection. Now use (or prove) the fact that $|X|=|X\setminus A|$.
What is the splitting field of $X^{20}-1$ over $\Bbb F_3$. And how to factor $X^{20}-1$ in $\Bbb F_3[X]$
I suppose you are looking for a general method of computing splitting fields. However, there is something special about the case you mentioned: splitting fields of polynomials of the form $x^n-1$ are called cyclotomic fields. There is a theorem for cyclotomic fields over finite fields, stating that: The $n$-th cyclotomic field $K^{n}$ over $\Bbb F_p$ with $p,n$ coprime has degree $$[K^{n}:\Bbb F_p] = ord_n(p)$$ where $ord_n(p)$ is the lowest $k$ such that $n \mid p^k-1$. See Thm 8.12 in http://www-groups.mcs.st-and.ac.uk/~neunhoef/Teaching/ff/ffchap4.pdf
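For the concrete question: $\mathrm{ord}_{20}(3)=4$ (since $3^4=81\equiv1\pmod{20}$), so the splitting field is $\Bbb F_{3^4}$. sympy can produce the factorization (every irreducible factor should have degree dividing $4$):

```python
# Factor x^20 - 1 over F_3.
from sympy import symbols, factor

x = symbols('x')
print(factor(x**20 - 1, modulus=3))
```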
proving inequalities with 3 terms
By a special case of Hölder's inequality we have $$\sum_{k=1}^n |x_ky_k| \leq \left(\sum_{k=1}^n |x_k|^p\right)^\frac{1}{p} \left(\sum_{k=1}^n |y_k|^q\right)^\frac{1}{q}$$ where $1/p + 1/q = 1$. Let $n=3$, $x_1 = a$, $x_2 = b$ and $x_3 = c$ and $y_1 = y_2 = y_3 = 1$. Let also $p = 3$ and $q = 3/2$. Then: $$\frac{|a| + |b| + |c|}{3} \leq \sqrt[3]{\frac{|a|^3 + |b|^3 + |c|^3}{3}} $$ Equivalently, $$(|a|+|b|+|c|)^3 \leq 9(|a|^3 + |b|^3 + |c|^3)$$ And then, for non-negative $a$, $b$, $c$, it follows that $$(a+b+c)^3 \leq 9(a^3 + b^3 + c^3)$$
Find angle in a figure involving a scalene triangle
Don't let the diagram fool you. It does not follow from the given information that the circle is tangent to side BC at M. Here is another way to draw the diagram which should give you a hint about how to find the angle: I have created a Geogebra web page where you may experiment with the triangle. Move the vertices of the triangle here.
Finding the bounded area of two curves & first moment of area using integration
Here's the figure, including the shaded area underneath both curves: Integrate to get the area: $\int\limits_{x=0}^4 \min[x^2 + 2 x, -x + 4]\ dx$ or $\int\limits_{x=0}^1 x^2 + 2 x\ dx + \int\limits_{x=1}^4 -x + 4\ dx = {35 \over 6}$, where the transition point $x=1$ was found by solving $x^2 + 2 x = -x + 4$ for $x$. The first moment is: $\int\limits_{x=0}^4 x \min [x^2 + 2 x, -x + 4]\ dx = {119 \over 12}$. For those interested, all the above was done in Mathematica: Plot[{x^2 + 2 x, -x + 4, Min[x^2 + 2 x, -x + 4]}, {x, 0, 4}, Filling -> {3 -> Axis}] Integrate[x Min[x^2 + 2 x, -x + 4], {x, 0, 4}]
Unable to represent the function as a power series
$$\frac{x-1}{x^2+1}=\frac{x}{x^2+1}-\frac{1}{x^2+1}$$ Integrate $$\int\left(\frac{x}{x^2+1}-\frac{1}{x^2+1}\right)\,dx=\frac{1}{2}\log(1+x^2)-\arctan x+C$$ Series for these functions are known; we get $$\frac{1}{2}\log(1+x^2)-\arctan x+C=\frac{x^{10}}{10}-\frac{x^9}{9}-\frac{x^8}{8}+\frac{x^7}{7}+\frac{x^6}{6}-\frac{x^5}{5}-\frac{x^4}{4}+\frac{x^3}{3}+\frac{x^2}{2}-x+O(x^{11})+C$$ Now differentiate all terms $$\frac{x-1}{x^2+1}=-1 + x + x^2 - x^3 - x^4 + x^5 + x^6 - x^7 - x^8 + x^9+O(x^{10})$$ Bonus: the coefficients repeat with period four, and one closed form is $$\frac{x-1}{x^2+1}=\sum _{n=0}^{\infty } a_n x^n;\quad a_n=\sqrt{2}\,\sin\!\left(\frac{(2n-1)\pi}{4}\right)$$
The integral : $\frac{1}{2}\int_0^\infty x^n \operatorname{sech}(x)\mathrm dx$
\begin{align}\mathcal{I}&=\frac{1}{2}\int_0^\infty \frac{x^n}{\cosh(x)}\mathrm dx\\&=\int_0^\infty \frac{x^n}{e^x+e^{-x}}\mathrm dx\\&=\int_0^\infty \frac{x^ne^{-x}}{1+e^{-2x}}\mathrm dx.\end{align} Proposition: $$\int_0^\infty \frac{x^{s-1}e^{-x}}{1+e^{-2x}}\mathrm dx=\beta(s)\Gamma(s),$$ where $\beta(s)$ is the Dirichlet beta function defined as $\beta(s)=\sum_{n=0}^\infty \frac{(-1)^n}{(2n+1)^s}$. Proof:\begin{align}\int_0^\infty x^{s-1}\left(\frac{e^{-x}}{1-(-e^{-2x})}\right)\mathrm dx&=\int_0^\infty x^{s-1}\left(\sum_{k=0}^\infty (-1)^ke^{-(2k+1)x}\right)\mathrm dx \text{, using geometric series}\\&=\sum_{k=0}^\infty (-1)^k\int_0^\infty x^{s-1} e^{-(2k+1)x}\mathrm dx\\&=\sum_{k=0}^\infty \frac{(-1)^k}{(2k+1)^s}\displaystyle\int_0^\infty x^{s-1}e^{-x}\mathrm dx\text{, substituting $(2k+1)x\mapsto x$}\\&=\beta(s)\Gamma(s). \end{align} Therefore, $$\mathcal{I}=\beta(n+1)\Gamma(n+1).$$ We can evaluate for different values of $n\ge 0$. For $n=1$, $\frac{1}{2}\int_0^\infty x\operatorname{sech}(x)=\beta(2)\Gamma(2)=G$, where $G$ is Catalan's constant. For $n=2$, $\int_0^\infty x^2\operatorname{sech}(x)=2\beta(3)\Gamma(3)=\frac{\pi^3}{8}$
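A numeric check of the $n=2$ case (a scipy sketch):

```python
# ∫_0^∞ x² sech(x) dx should equal π³/8 ≈ 3.8758.
import numpy as np
from scipy.integrate import quad

val, _ = quad(lambda x: x**2 / np.cosh(x), 0, np.inf)
print(val, np.pi**3 / 8)
```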
Find for which $\alpha$ the integral $\int_{0}^{1} \frac{1-x^{\alpha}}{1-x}dx$ converges
For any $\alpha$ , we have $\lim_{x \to 1^-} \frac{1-x^\alpha}{1-x}= \alpha$ (derivative of $x^\alpha$ type difference quotient). Hence we always have boundedness near $1$ for any $\alpha$. It is then $x=0$ which is likely to create a problem, if any. We will now observe the function near $x=0$ and make conclusions. Note that if $\alpha \geq 0$ then the limit exists at $0$ anyway by the quotient rule. In fact, we have for $\alpha < 0$ : $$ \lim_{x \to 0^+} \frac{x^{-\alpha}(x^{\alpha}-1)}{1-x} = \lim_{x \to 0^+} \frac{1-x^{-\alpha}}{1-x} = 1 \tag{*} $$ So near $0$, $\frac{x^{\alpha}-1}{1-x}$ looks like $x^{\alpha}$, and we know that the integral $\int_0^\delta x^\alpha\,dx$ diverges for $\alpha \le -1$ and converges for $-1<\alpha<0$. Now the answer. Get rid of $\alpha \ge 0$, for which continuity holds on the interval, hence boundedness and integral convergence. For $\alpha < 0$, we first do: $$ \int_{0}^1 \frac{1-x^{\alpha}}{1-x} dx = \int_{0}^\delta \frac{1-x^{\alpha}}{1-x} dx + \int_{\delta} ^1 \frac{1-x^{\alpha}}{1-x} dx $$ provided LHS and RHS exist (for any $0<\delta < 1$). It is clear that the second integral on the RHS exists (for all $\alpha$) due to the continuity of the integrand on the interval. It follows that the existence of the LHS integral is down to the existence of $\int_{0}^{\delta} \frac{1-x^{\alpha}}{1-x}dx$. Now, take say $\epsilon = 0.1$. Then, by $(*)$, there exists a $\delta > 0$ such that $0.9 < \frac{x^{-\alpha}(x^{\alpha}-1)}{1-x}<1.1 $, in other words, $$ 0.9x^{\alpha} < \frac{x^{\alpha}-1}{1-x} < 1.1x^{\alpha} $$ for $0<x<\delta$. If $-1<\alpha<0$ then the desired integral is bounded in absolute value between $0.9\int_{0}^\delta x^{\alpha}dx$ and $1.1\int_{0}^\delta x^{\alpha}dx$, hence it converges. If $\alpha\le-1$ the integral is unbounded, as the absolute value of the integrand dominates $0.9x^{\alpha}$, whose integral is unbounded. Hence the integral converges exactly for $\alpha>-1$, and we complete the question. The technique to keep in mind for this problem is that of finding, asymptotically, the rate of growth of the function near the points of explosion. If the growth is too fast, then the integral will not converge, and vice versa, so we can use results about the growth rate to conclude.
Is this minimization problem NP-hard?
This problem is not NP-hard. Actually it can be reformulated as a linear problem and can be easily solved by the simplex algorithm. Let me show you how the reformulation works. It is clear that the equality constraints are convex and linear. The difficulty of this problem lies in the target function part \begin{equation} \max_{0<i \leq m}\{A_{ij}B_{ij}\}. \end{equation} This part can be avoided by replacing it by the slack variable $ c_j $ which is an upper bound to $\{A_{ij}B_{ij}\} $ and inserting additional constraints \begin{equation} A_{ij}B_{ij} \leq c_j \rm{ ~~ for ~~all~~ }i \in \{1,\dots,m\}.\end{equation} Then, the original optimization problem becomes \begin{equation} \text{minimize} ~~\sum_{0<j \leq n} c_j \end{equation} subject to \begin{equation} A_{ij}B_{ij} \leq c_j \rm{ ~~ for ~~all~~ }i \in \{1,\dots,m\}, j \in \{1,\dots,n\}, \end{equation} \begin{equation} (\sum_{0<j\leq n}{B_{ij}}) = K~~ \text{for each} ~~i\in \{1, 2, \ldots, m\}. \end{equation} As this reformulated problem is a minimization problem, $c_j$ will be chosen as small as possible such that \begin{equation} c_j = \max_{0<i \leq m}\{A_{ij}B_{ij}\}. \end{equation} (If \begin{equation} c_j \neq \max_{0<i \leq m}\{A_{ij}B_{ij}\}, \end{equation} it follows that $c_j > A_{ij}B_{ij}$ for all $i \in \{1,\dots,m\}$, and $c_j$ can be decreased.) The reformulated problem has a linear target function and linear equality and inequality constraints. Therefore, this is a linear problem.
Book on Applications of Diophantine Equations in Science
"OP" Can look up book by 'Stephen Wolfram' called 'A new kind of science'. It has stuff on Diophantine equations related to different branches of science in the notes section. The book can be availed of at the local library or purchased as hardcover or E-book. Also a summary is available on line. The links are given below: https://www.wolframscience.com http://store.wolfram.com/view/book/ISBN1579550088.str?Qualifier=COMM https://itunes.apple.com/us/app/stephen-wolfram-a-new-kind-of-science/id390711826
Basic Eigenvalue Question
There are two possible interpretations here. Either you mean that this represents a linear map $\mathbb{C} \to \mathbb{C}$ by $z \mapsto e^{-i\theta}z$, in which case the problem is quite simple: without even invoking the algebraic completeness of $\mathbb{C}$, for any field $F$, a "linear map" $T:F \to F$ can be completely described by the data $T(1) = \lambda \in F$, as we have that for any $z \in F$ $$ T(z) = zT(1) = \lambda z$$ where we just use the fact that in one dimension we can just identify the vector space $F$ with the scalars $F$ and pull anything out. To get to the punchline, any linear map $F \to F$ is all eigenvectors, as it is completely described as just multiplication by $T(1)$. So in our example every nonzero vector is an eigenvector, and the eigenvalue is $e^{-i\theta}$. Alternatively, the matrix $$T = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}$$ can be viewed as a $2 \times 2$ complex matrix specifying a linear map $\mathbb{C}^2 \to \mathbb{C}^2$. To detect eigenvalues compute the characteristic polynomial $$ \det(\lambda I - T ) = \det \begin{pmatrix} \lambda - \cos\theta & \sin\theta \\ -\sin\theta & \lambda - \cos\theta \end{pmatrix} = \lambda^2 - 2\lambda\cos\theta + 1 $$ So just use the quadratic formula to find eigenvalues.
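A quick numeric confirmation of the second interpretation:

```python
# The rotation matrix has eigenvalues cos θ ± i sin θ = e^{±iθ}.
import numpy as np

theta = 0.7
T = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
print(np.linalg.eigvals(T))
print(np.exp(1j*theta), np.exp(-1j*theta))
```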
$a_n$ is the number of sequences with length $n$ over $\{1,2,3\}$
Let's call a sequence of $1$'s, $2$'s, and $3$'s "valid" if it doesn't have two consecutive $1$'s. Every valid sequence of length-$n$ fits one of these cases: A) [valid sequence of length $n-1$], $2$ B) [valid sequence of length $n-1$], $3$ C) [valid sequence of length $n-2$], $2$, $1$ D) [valid sequence of length $n-2$], $3$, $1$ By considering these cases, can you find the number of valid sequences of length-$n$ in terms of the number of valid sequences of lengths $n-1$ and $n-2$? i.e. find $a_n$ in terms of $a_{n-1}$ and $a_{n-2}$. After doing that, you'll have a recursive formula for $a_n$. From there, you just need to come up with some initial values and solve the 2nd order linear recursion.
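For instance, with the base cases $a_1=3$ and $a_2=8$ (all nine length-$2$ strings except "11"), the recursion $a_n=2a_{n-1}+2a_{n-2}$ that the cases above suggest gives:

```python
# Iterate the recursion a_n = 2 a_{n-1} + 2 a_{n-2}.
def a(n):
    prev, cur = 3, 8  # a_1, a_2
    for _ in range(n - 2):
        prev, cur = cur, 2*cur + 2*prev
    return cur if n >= 2 else prev

print([a(n) for n in range(1, 7)])  # [3, 8, 22, 60, 164, 448]
```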
Optimization Problem [CALC I]
$\frac12\pi r^2$ isn't a function of $\theta$, so it comes out that the area is constant no matter what $\theta$ is -- which can't be right. (You're close though!) As for converting a two-variable optimization problem into a one-variable optimization problem: you are optimizing perimeter, so you'll be taking the derivative of the perimeter function. To convert the perimeter function of $r$ and $\theta$ into a function of one variable only, use the relationship between $r$ and $\theta$ determined by the area function (the function which is being held constant).
How do I calculate the derivative of $x|x|$?
For $x \ge 0$ you have $f(x)=x \times |x| = x \times x = x^2$. For $x \le 0$ you have $f(x)=x \times |x| = x \times (-x) = -x^2$. So you can calculate the derivative when $x \gt 0$ and when $x \lt 0$ in the usual way, getting $f'(x)=2x$ and $f'(x)=-2x$ respectively. You are wrong when you say there is no derivative at $x=0$: from the definition, $$f'(0)=\lim_{h\to 0}\frac{h|h|-0}{h}=\lim_{h\to 0}|h|=0,$$ which agrees with both one-sided formulas. Altogether, $f'(x)=2|x|$.
Does full rank matrix have a null space?
Every matrix has a null space; the real question is whether it is trivial. An $m\times n$ matrix has full rank if it has the maximum rank possible, namely $\min(m,n)$. When $m=n$, of course, this means the matrix is invertible and the null space is $\{0\}$. When $m>n$, the matrix has rank $n$ and the null space again consists just of $0$. However, when $n>m$, the matrix has rank $m$ and the null space has dimension $n-m>0$.
The extremities of a diameter of a circle..
Continue with your solution: the lengths of the intercepts made by the circle $$x^2 + y^2 + 2gx + 2fy + c = 0$$ on the X and Y axes are $2\sqrt{g^2-c}$ and $2\sqrt{f^2-c}$ respectively. Alternatively: the midpoint of $(6,-2)$ and $(6,8)$ is $(6,3)$, and the radius is $r = \frac 12 \sqrt{(6-6)^2 + (8+2)^2} = \frac 12 \sqrt {100} = 5$. The equation of the circle is $$(x - 6)^2 + (y - 3)^2 = 5^2.$$ Put $y = 0$; you will get two values of $x$. Subtract them to get the length of the intercepted chord.
How to solve recursive higher-order linear ODE
The characteristic equation is $$ \bigl(\lambda^n+1\bigr)^2=0. $$ It has $n$ double roots, the $n$-th roots of $-1=e^{\pi i}$, which are $$ e^{\bigl(\tfrac{\pi}{n}+\tfrac{2k\pi}{n}\bigr)i}=\cos\Bigl(\frac{(2\,k+1)\pi}{n}\Bigr)+i\sin\Bigl(\frac{(2\,k+1)\pi}{n}\Bigr),\quad 0\le k\le n-1. $$ This gives $2\,n$ independent solutions for $0\le k\le n-1$: $$ e^{\cos\bigl(\tfrac{(2\,k+1)\pi}{n}\bigr)t}\cos\Bigl(\sin\Bigl(\frac{(2\,k+1)\pi}{n}\Bigr)\,t\Bigr),\quad e^{\cos\bigl(\tfrac{(2\,k+1)\pi}{n}\bigr)t}\sin\Bigl(\sin\Bigl(\frac{(2\,k+1)\pi}{n}\Bigr)\,t\Bigr),\\ t\,e^{\cos\bigl(\tfrac{(2\,k+1)\pi}{n}\bigr)t}\cos\Bigl(\sin\Bigl(\frac{(2\,k+1)\pi}{n}\Bigr)\,t\Bigr),\quad t\,e^{\cos\bigl(\tfrac{(2\,k+1)\pi}{n}\bigr)t}\sin\Bigl(\sin\Bigl(\frac{(2\,k+1)\pi}{n}\Bigr)\,t\Bigr). $$ The general solution is a linear combination of them.
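If you want to sanity-check the root formula numerically, here is a sketch (the choice $n=5$ is arbitrary):

```python
import numpy as np

n = 5
# Coefficients of lambda^n + 1, highest degree first.
roots = np.roots([1] + [0] * (n - 1) + [1])
expected = np.exp(1j * (2 * np.arange(n) + 1) * np.pi / n)

print(np.allclose(np.sort_complex(roots), np.sort_complex(expected)))  # True
```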
How many solutions does the equation have? (Inclusion-exclusion principle)
This answer will be easier to understand if you first read this. First, you biject the problem to: $x_1 + x_2 + x_3 + x_4 = 13$, where each $x_i \in \{0,1,2,3,4,5,6\}.$ Then, you use inclusion-exclusion. $T_k$ will represent the following computation:

- Each of the $\binom{4}{k}$ possible combinations of $k$ of the variables will be examined.
- For each such combination, it will be assumed that the minimum value of the pertinent variables is $7$ rather than $0$. That is, the $k$ variables will all be presumed to be out of bounds.
- The $4 - k$ remaining variables will be left unconstrained (i.e. minimum value $= 0$; they may or may not be out of bounds).
- The number of solutions possible for each such combination will be computed by bijecting the existing equation to one that adjusts for the pertinent variables being forced to be out of bounds.
- $T_k$ will represent the sum of the number of solutions possible over all of the $\binom{4}{k}$ combinations.

Consistent with inclusion-exclusion, the answer will be $T_0 - T_1 + T_2 - T_3 + \cdots$

$T_0 = \binom{16}{3}.$ By symmetry $T_1 = 4 \times$ the number of solutions where $x_1 \geq 7.$ Using the analysis in the cited article, $T_1 = 4 \times \binom{9}{3}.$ At this point, the analysis stops, because it is impossible to have $2$ or more variables out of bounds and still have the sum be $\leq 13.$ Final answer: $$\binom{16}{3} - 4 \times \binom{9}{3} = 560 - 336 = 224.$$
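A brute-force check of this count (a sketch enumerating all $7^4$ tuples):

```python
from itertools import product
from math import comb

# Solutions of x1 + x2 + x3 + x4 = 13 with each xi in {0,...,6}.
brute = sum(1 for xs in product(range(7), repeat=4) if sum(xs) == 13)
formula = comb(16, 3) - 4 * comb(9, 3)

print(brute, formula)  # 224 224
```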
Unique Ergodic Measure
I'm not sure what your first question means, but regarding the second: if $\{1,2,3,...,n\}^{\mathbb{Z}}$ is given the infinite product measure coming from a probability vector $(a_1,...,a_n)$ (with $\sum_i a_i=1$), then the Bernoulli shift (just shift every entry to the left one space) is always ergodic. Since there are different choices for the probability vector, there are different ergodic measures for the Bernoulli shift. See chapter two of Einsiedler and Ward's book "Ergodic Theory with a view towards Number Theory".
Topological "closure" of a binary relation
$E \mapsto ( \operatorname{id}_U \cup f \cup f^2 \cup f^3 \cup \ldots ) [E]$ maps open sets to open sets. So it can be the closure operator only if all open sets are closed. For a counterexample to the conjecture take $f = \{ (0, 0), (1, 1), (0, 1) \}$ on the space whose open sets are $\{ \}$, $\{ 1 \}$, $\{ 0, 1 \}$. Here $\{ 1 \}$ is open but not closed.
Taking the compositions of two constant functions
Suppose $f,g : \mathbb{N} \rightarrow \mathbb{N}$. Then $(f \circ g)(x) =f(g(x))$ and $(g \circ f)(x) = g(f(x))$. Hence if $f(x) = 2$ and $g(x) = 4$, then $f(g(x)) = f(4) =2$ and $g(f(x)) = g(2) = 4$. Hence $f \circ g$ is the constant function taking value $2$ and $g \circ f$ is the constant function taking value $4$. They are not equal.
Projective Geometry: Combinatorially, but not projectively equivalent polytopes
Projective equivalence means that there exists a projective transformation transforming one into the other. A projective transformation of $d$-dimensional projective space is uniquely determined by $d+2$ points in general position together with their images (also in general position). So you can choose any polytope with more than $d+2$ vertices; if you move one vertex a little (without changing the combinatorial type), the two polytopes are combinatorially equivalent but projectively non-equivalent: any projective equivalence would have to fix the $d+2$ or more unmodified vertices and hence be the identity transformation, yet the polytopes are not identical.
Split a triangle into two right triangles
Hint: Write $q-p_0=\lambda(p_1-p_0)=(q-p_2)+(p_2-p_0)$ and use the dot product to find $\lambda$.
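In case it is useful, here is a small numpy sketch of where the hint leads; note it spells out the resulting formula for $\lambda$, so skip it if you want to derive it yourself (the sample points are arbitrary):

```python
import numpy as np

def foot_of_altitude(p0, p1, p2):
    """Foot q of the perpendicular from p2 onto the line through p0 and p1.

    Dotting q - p0 = lambda*(p1 - p0) with p1 - p0 and using
    (q - p2).(p1 - p0) = 0 gives lambda = (p2-p0).(p1-p0) / |p1-p0|^2.
    """
    d = p1 - p0
    lam = np.dot(p2 - p0, d) / np.dot(d, d)
    return p0 + lam * d

p0, p1, p2 = np.array([0.0, 0.0]), np.array([4.0, 0.0]), np.array([1.0, 2.0])
print(foot_of_altitude(p0, p1, p2))  # [1. 0.] splits the triangle into two right triangles
```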
Torsion on $\pi_1(X)$, $X$ connected and open in $\mathbb{R}^n$
The real projective plane $\Bbb R P^2$ embeds into $\Bbb R^4$. Taking a tubular neighborhood $N$ of this embedding, we get for all $n \geq 4$ an open subset $N \subset \Bbb R^n$ such that $\pi_1(N) \cong \Bbb Z/2$, so the fundamental group of this open subset has torsion. For $n = 2$, there is no such subset; see the following article: Eda, K. Free $\sigma$-products and fundamental groups of subspaces of the plane. Topology and its Applications Volume 84, Issues 1-3, 24 April 1998, pp. 283-306. This is actually an open problem for $n = 3$.
Finding range of $\frac{5}{(x+3)(x-4)}$
Set $$y=\frac{5}{(x+3)(x-4)}.$$ So we have that $$y(x+3)(x-4)=5,$$ i.e. $yx^2-yx-(12y+5)=0$, which is a genuine quadratic in $x$ when $y\neq 0$. A value $y$ is in the range exactly when this quadratic has a real root, i.e. when its discriminant $$49y^2+20y$$ is greater than or equal to zero. This holds for $y\leq -\frac{20}{49}$ or $y\geq 0$; since $y=0$ is clearly never attained, the range is $y\leq -\frac{20}{49}$ together with $y>0$.
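The same computation can be checked symbolically with sympy (a sketch; note that $y=0$ must be excluded by hand, since the equation is only a genuine quadratic in $x$ for $y\neq 0$):

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)

# View y*(x+3)*(x-4) - 5 = 0 as a quadratic in x and take its discriminant.
quad = sp.expand(y * (x + 3) * (x - 4) - 5)
disc = sp.discriminant(quad, x)

print(sp.factor(disc))                               # y*(49*y + 20)
print(sp.solve_univariate_inequality(disc >= 0, y))  # y <= -20/49  or  y >= 0
```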
How to write $5 \over 2-i$ in standard form
Take the denominator and flip the sign of the $i$. $$2-i\implies2+i$$ Now, multiply by $1$. $$\frac5{2-i}=\frac5{2-i}\color{#034da3}{\underline{\times1}}$$ Any number divided by itself? $$\color{#034da3}{1=\frac{2+i}{2+i}}$$ $$\frac5{2-i}=\frac5{2-i}\color{#034da3}{\underline{\times\frac{2+i}{2+i}}}$$ $$\text{Numerator}=5\times(2+i)\\\ \\\text{Denominator}=\underbrace{(2-i)(2+i)=4-i^2=5}_{\large\text{Foil, then remember that $i^2=-1$}}$$ $$\frac5{2-i}=\frac{\text{Numerator}}{\text{Denominator}}=\frac{5\times(2+i)}5=2+i$$
EX of area of a circle
The area of a triangle with sides $a,b$ enclosing the angle $C$ is $\displaystyle \frac{1}{2}ab\sin{C}$. Consider the scenario where $O$ is the origin, and $A$ is fixed to the positive $x$ axis, from $0$ to $r_1$. Then, in polar coordinates, let $B$ be $(r_2,\theta)$, where $\theta$ ranges from $-\pi$ to $\pi$. The area of this triangle is $\displaystyle \frac{1}{2}r_1r_2\sin|\theta|$. So the average area is $\displaystyle \frac{2}{2\pi r^2}\int_0^{\pi}\int_0^r\int_0^r\frac{1}{2}r_1 r_2\sin(\theta)\,dr_1\,dr_2\,d\theta$. Notice that I made the integral symmetric by noting the area is the same for $\theta$ and $-\theta$, hence the last bound only ranging from $0$ to $\pi$, and a corresponding $2$ in the numerator. In your case, which I just realized, $a$ is fixed, so $r_1$ is a constant rather than an integration variable; eliminating it from the integral, you just have $\displaystyle \frac{2}{2\pi r}\int_0^{\pi}\int_0^r\frac{1}{2}r_1 r_2\sin(\theta)\,dr_2\,d\theta$.
Show sums of complex $\sin$ and $\cos$ series
\begin{align*} \sum_{n=0}^{+\infty}r^ne^{in\theta} &=\sum_{n=0}^{+\infty}(re^{i\theta})^n\\ &=\frac{1}{1-re^{i\theta}}\\ &=\frac1{(1-r\cos\theta)-ir\sin\theta}\\ &=\frac{1-r\cos\theta+ir\sin\theta}{1-2r\cos\theta+r^2}\\ &=\frac{1-r\cos\theta}{1-2r\cos\theta+r^2} +i\frac{r\sin\theta}{1-2r\cos\theta+r^2}\\ \end{align*} Then observe that $$ \sum_{n=0}^{+\infty}r^ne^{in\theta} =\sum_{n=0}^{+\infty}r^n\cos n\theta+i\sum_{n=0}^{+\infty}r^n\sin n\theta. $$ Now the real and imaginary parts must be equal, so you get the conclusion. Note that we were allowed to use the geometric series $\sum_{n=0}^{+\infty}q^n=\frac1{1-q}$ since $|re^{i\theta}|=r\in\,]0,1[$.
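A quick numerical confirmation of the two closed forms (a sketch; $r=0.6$ and $\theta=1.1$ are arbitrary values with $0<r<1$):

```python
import numpy as np

r, theta = 0.6, 1.1
n = np.arange(200)  # a partial sum is enough: r**200 is negligible

s = np.sum(r**n * np.exp(1j * n * theta))
denom = 1 - 2 * r * np.cos(theta) + r**2
closed = (1 - r * np.cos(theta)) / denom + 1j * r * np.sin(theta) / denom

print(np.allclose(s, closed))  # True
```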
A polycyclic group is Solvable and Noetherian
More is true: A solvable group is Noetherian if and only if it is polycyclic, see Lemma 2 of the paper On linear Noetherian groups by Zassenhaus.
Prove that if $G$ is a graph of order $n \geq 3$ such that $deg$ $v \geq \frac{n}{2}$ for every vertex $v$ of $G$, then $G$ is nonseparable
Suppose $v$ is a cut vertex, and consider the smallest component $A$ of $G-v$. Since $G-v$ has at least two components, $|A| \le \frac{n-1}{2}$. Each $a \in A$ can only be adjacent to other vertices of $A$ and to $v$, so it has at most $(|A|-1)+1=|A| \le \frac{n-1}{2} < \frac{n}{2}$ neighbours in $G$, contradiction.
Prove or disprove inequality $a^2+b^2+c^2\ge a^rb^{2-r}+b^rc^{2-r}+c^ra^{2-r}$.
Yup: since $(2,0)$ majorizes $(r, 2-r)$ for $0\le r\le 2$, weighted AM-GM gives $$\frac{r}{2}a^2+\frac{2-r}{2}b^2 \ge a^rb^{2-r},$$ and summing this cyclically over $(a,b)$, $(b,c)$, $(c,a)$ yields the result. (This is the same majorization idea that underlies Muirhead's inequality, http://en.wikipedia.org/wiki/Muirhead's_inequality, though note that Muirhead itself is stated for symmetric sums, while the right-hand side here is only cyclic.)
$x^{p-1} + ... + x^2 + x + 1$ is irreducible using Eisenstein's criterion?
You can't directly apply Eisenstein to $f$. But you can combine it with the substitution criterion: look at the substituted polynomial $g(x) = f(x+1)$. Since $f(x)=\frac{x^p-1}{x-1}$, we get $$g(x)=\frac{(x+1)^p-1}{x}=x^{p-1}+\binom{p}{p-1}x^{p-2}+\cdots+\binom{p}{2}x+\binom{p}{1},$$ whose non-leading coefficients $\binom{p}{k}$, $1\le k\le p-1$, are all divisible by $p$, and whose constant term $\binom{p}{1}=p$ is not divisible by $p^2$. So Eisenstein at $p$ applies to $g$, and irreducibility of $g$ gives irreducibility of $f$.
Approximate the value using Taylor series
You have an error in your Taylor series: as the Newton expansion of $(1+x)^\alpha$ is $$(1+x)^\alpha=1+\alpha x+\alpha(\alpha -1)\,\frac{x^2}{2!}+\alpha(\alpha -1)(\alpha-2)\,\frac{x^3}{3!}+\dotsm,$$ if $\,0<\alpha<1$, this is an alternating series (from the second term on) if $x>0$, so you can apply Leibniz' rule for the error, namely it is bounded by the first omitted term, and it has the same sign. Therefore, if you expand at order $3$, you know the error will be negative and, in absolute value, less than $$\left|\frac{\alpha(\alpha -1)(\alpha-2)(\alpha-3)}{4!}\right|\,x^4.$$ Edit: Unfortunately, this doesn't work here since we have a negative value for $x$, so the series is no longer alternating.
Is the connection between $e$ and $\pi$ "arbitrary" or "natural"?
The Taylor series of $e^x,\,\cos x,\,\sin x$ for real $x$ provide a natural identification of $e^{ix}=\cos x+i\sin x$, i.e. the unit complex number of argument $x$. So $e^{\pi i}=-1+0i$ is just the statement that $\pi$ is our name for half the number of radians in a revolution. Well, $\pi$ is also our name for the circumference-diameter ratio, i.e. half the circumference-radius ratio, so the claimed result is trivial. This relation of $e$ to $\pi$ is very natural: it's really just saying that rotations in the plane are exponentiations in complex numbers. Which makes sense, because complex numbers admit the matrix representation $x+yi=\left(\begin{array}{cc} x & -y\\ y & x \end{array}\right)$, making $\cos x+i\sin x$ the $x$-anticlockwise rotation. This gives us $e^{ix}e^{iy}=e^{i(x+y)},\,(e^{ix})^n=e^{inx}$ etc. for free. You might like to mull why the even weirder result $\int_{-\infty}^\infty e^{-x^2}dx=\sqrt{\pi}$ is also natural instead of arbitrary.
Inequality $\tan(a)^b+\tan(b)^c+\tan(c)^a\geq \tan(a)^c+\tan(c)^b+\tan(b)^a$
$a=b=c$ obviously works. From this position, you are allowed to increase $a$ and decrease $b$ and $c$ (keeping $a+b+c=1$). Suppose $a>\frac{1}{3}$ with $b=c < \frac{1}{3}$. Then the inequality is satisfied (you can try this for yourself). Now let's consider the case $a\geq b \geq c$. We start from the $a>b=c$ inequality and reduce $c$, fixing $b$ and increasing $a$; this covers every single case of $a \geq b \geq c$. Using the constraint, we may rewrite by replacing $a$ with $1-b-c$: $f(c) = \tan(1-b-c)^b +\tan(b)^c +\tan(c)^{1-b-c} - \tan(1-b-c)^c - \tan(c)^{b} -\tan(b)^{1-b-c}$. Now we compute the derivative with respect to $c$, keeping $b$ as a constant. We note that $f'(c) \leq 0$ here, thus the inequality still holds when $c$ is reduced. Hence we have our result. Below is the proof that $f'(c) \leq 0$. $f'(c)$ has $8$ terms, labelled $t_1, \dots, t_8$:

- $t_1+t_6 \leq 0$
- $t_2+t_8 \leq 0$
- $t_3+t_5 \leq 0$ (here we use that $x\tan^x(k)$ is an increasing function for $\tan(k)<1$)
- $t_4+t_7 \leq 0$
Formality of commutative differential graded algebras
You need to distinguish between maps of complexes and maps of cdgas. I'll work over a field $k$. If $(C,d)$ is a complex of $k$-vector spaces, then it's true that $C$ is quasi-isomorphic to $(H(C),0)$, via taking sections (which we can do because every vector space is a projective module; over more general rings it is not true that every complex is quasi-isomorphic to its cohomology). If $(C,d)$ is in addition a cdga, then $(H(C),0)$ is a cdga as well. These two cdgas are quasi-isomorphic as complexes of vector spaces, but there may be no quasi-isomorphism between them that preserves the multiplicative structure. You can think of this as a derived version of the fact that a given vector space can admit many different algebra structures. My go-to concrete example of a non-formal dga $A$ appears on page 13 of Hess's Rational Homotopy Theory: A Brief Introduction [https://arxiv.org/pdf/math/0604626.pdf ] as the exterior algebra on three elements $u,v,w$ of degrees $3,3,5$ respectively, and with $d(w)=uv$. You can check that there is no cdga quasi-isomorphism $A \to H(A)$.
If T ◦ S = S ◦ T and v is an eigenvector of T, then v is an eigenvector of S
Take $T$ as the identity map and $S$ as any linear map. Then clearly $TS=ST$. If the proposition were true, then every eigenvector of $T$ would be an eigenvector of $S$. But since $T$ is the identity, every nonzero vector of $V$ is an eigenvector of $T$. So as soon as one finds an $S$ for which some nonzero vector is not an eigenvector, we have disproved the proposition. Of course, there are many such examples of $S$: for instance, on $V=\mathbb{R}^2$ take $S(x,y)=(y,x)$, whose eigenvectors lie only along $(1,1)$ and $(1,-1)$, so e.g. $(1,0)$ is an eigenvector of $T$ but not of $S$.
Negative of elliptic curve points
I know this is old, but it has not been properly answered. Since this curve is defined over a binary field $\mathrm{GF}(2^m)$ (so it does not have the odd-characteristic Weierstrass symmetry $-P=(x_P,-y_P)$), negative points are found using the following rule: if $P = (x_P, y_P)$, then $P + (x_P, x_P + y_P) = O$. The point $(x_P, x_P + y_P)$ is the negative of $P$, which is denoted as $-P$. My source: page 312, Cryptography and Network Security: Principles and Practice, 7th edition, by William Stallings, Prentice Hall.
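Assuming field elements of $\mathrm{GF}(2^m)$ are stored as bit vectors (integers) in a polynomial basis, so that field addition is XOR, the rule becomes a one-liner (a sketch, not tied to any particular curve or library):

```python
def neg_point(P):
    """Negative of P = (x, y) on a curve over GF(2^m): -P = (x, x + y),
    where '+' is addition in GF(2^m), i.e. XOR of the bit representations."""
    x, y = P
    return (x, x ^ y)

print(neg_point((0b1101, 0b0110)))  # (13, 11): second coordinate is x XOR y
```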
Finding all critical points of $g(x,y) = (x^2 + y^2)e^{-x}$
Putting all the comments in an answer: the condition $g_y = 2y\,e^{-x} = 0$ forces $y=0$, and then the equation $(-x^2 + 2x - y^2) e^{-x} = 0$ becomes $$ (-x^2 + 2x ) e^{-x} = 0 \quad \Rightarrow\quad (-x +2)xe^{-x}=0 \quad \Rightarrow \quad (-x +2)x=0, $$ so we have the two critical points $(0,0)$ and $(2,0)$.
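The same system can be solved symbolically (a sketch using sympy):

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
g = (x**2 + y**2) * sp.exp(-x)

# Set both partial derivatives to zero and solve.
crit = sp.solve([sp.diff(g, x), sp.diff(g, y)], [x, y], dict=True)
print(crit)  # [{x: 0, y: 0}, {x: 2, y: 0}]
```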
Disproving a relation between function and derivative concerning Big-O-Notation
I found a solution. Take the function $$ f(x) = \begin{cases} x^2\sin({\frac{1}{\sqrt{x}}}), & \text{for $x \neq 0$,} \\ 0, & \text{for $x = 0$,} \end{cases} $$ working on $x\ge 0$. The function is continuous, because $ -1 \leq\sin(x) \leq 1$, so $f(x)\to 0$ as $x \to 0$. It is also differentiable everywhere, with $f'(0) = 0$ coming from the difference quotient $\frac{f(h)-f(0)}{h}=h\sin(\frac{1}{\sqrt{h}})\to 0$. Moreover $f$ is in $\mathcal{O}(x^2)$, because $$\left|\frac{x^2\sin(\frac{1}{\sqrt{x}})}{x^2}\right| =\left|\sin\left(\frac{1}{\sqrt{x}}\right)\right| \leq 1,$$ and computing $f'$ gives $$f'(x)= 2x\sin\left(\frac{1}{\sqrt{x}}\right) - \frac{1}{2}\sqrt{x}\cos\left(\frac{1}{\sqrt{x}}\right),$$ with $f'(0)= 0$ as above. The derivative is continuous, as both $\sin$ and $\cos$ are bounded by $-1$ and $1$, and the factors $2x$ and $\sqrt{x}$ go to $0$. Finally we check whether $f'(x) = \mathcal{O}(x)$: $$\left|\frac{f'(x)}{x}\right| = \left|2\sin\left(\frac{1}{\sqrt{x}}\right) -\frac{1}{2}\cos\left(\frac{1}{\sqrt{x}}\right)\cdot\frac{1}{\sqrt{x}}\right| =(*),$$ and since both $\sin$ and $\cos$ are bounded, the $\frac{1}{\sqrt{x}}$ factor decides the behavior as $x\to 0^+$. Therefore $(*)$ is unbounded as $x \to 0^+$ (e.g. along points where $\cos(\frac{1}{\sqrt{x}})=\pm 1$), and with that we get $f'(x) \neq \mathcal{O}(x)$.
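For what it's worth, the unboundedness can also be seen numerically (a sketch; the quotient oscillates, but its magnitude blows up like ${1\over2\sqrt{x}}$ along suitable sequences):

```python
import numpy as np

x = 10.0 ** -np.arange(2, 12, 2)  # x -> 0+ : 1e-2, 1e-4, ..., 1e-10
u = 1 / np.sqrt(x)
fp = 2 * x * np.sin(u) - 0.5 * np.sqrt(x) * np.cos(u)

print(np.abs(fp / x))  # |f'(x)/x| grows without bound (modulo oscillation)
```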
Sheafification of coker-presheaf of exp-map
Q1: Yes, you are correct. Q2: Calculate on the level of stalks. If we can show that $\mathcal{O}_X\to\mathcal{O}_X^*$ is surjective on stalks, this would show that the stalks of $\operatorname{coker(exp)}$ are all zero, which imply that it is the zero sheaf. Since the stalk is the colimit over all open neighborhoods and the simply-connected open neighborhoods define a cofinal system of these for any manifold, we may calculate using the simply-connected open neighborhoods. But on any simply-connected open set $U$, we can define a logarithm, which shows that $\mathcal{O}_X(U)\to \mathcal{O}_X^*(U)$ is surjective, and therefore the map is surjective on stalks.
maximum number of collinear points?
By point-line duality, this is equivalent to the question 'given a configuration of $n$ lines in the plane, what is the maximum number of them which intersect at one point?'; since there are $O(n^2)$ pairwise intersection checks, it becomes a question of whether there's some bucketing/perfect hashing scheme that allows for finding a point in a dynamically-built list in $O(1)$ time. While I don't know of any specific bucketing schemes applicable to this problem, AFAIK it's very common to have $O(1)$ or at least $O(\alpha(n))$ approaches for this sort of thing ($\alpha(n)$ being the inverse-Ackermann function).
Dimension of commuting null space of $(T-c_i)^{d_i}$
Yes. If $d_i > r_i$, then $\ker(T - c_i)^{r_i} = \ker(T - c_i)^{d_i}$. To show this, you can argue as follows. Assume $T \colon V \rightarrow V$ is nilpotent of index $r \le d = \dim V$. The characteristic polynomial of $T$ is $x^d$, the minimal polynomial of $T$ is $x^r$, and since $T^r = 0$ we have $V = \ker(T^r) = \ker(T^{r+1}) = \dots = \ker(T^{d}) = \ker(T^{r + k})$ for all $k \geq 0$. This handles the case where $k = 1$, $c_1 = 0$. For the general case, split $V$ into a direct sum of $T$-invariant subspaces $V = \oplus_{i=1}^k W_i$ (with $W_i$ the generalized eigenspace of $c_i$) and apply the previous item to each nilpotent operator $(T - c_i I)|_{W_i}$. Then note that $\ker\bigl((T - c_i I)^m\bigr) = \ker\bigl((T - c_i I)^m|_{W_i}\bigr)$, because $T - c_i I$ is invertible on $\oplus_{j \neq i} W_j$.
Method for finding basis of an image
Yes, this always works. If you column-reduce a matrix, the non-zero columns that remain form a basis for the image. The reason is that when you column-reduce a matrix $M$, you end up with a matrix $C=MT$, where $T$ is an invertible matrix. Then $\operatorname{im}C=\operatorname{im}M$ and by construction the $\operatorname{rank}M$ non-zero columns of $C$ are linearly independent. You might also be interested in having a look at this question, in which bases for both the image and kernel are found simultaneously by column reduction.
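For instance, with sympy (a sketch; the matrix is just an arbitrary rank-$2$ example):

```python
import sympy as sp

M = sp.Matrix([[1, 2, 3],
               [2, 4, 6],
               [1, 0, 1]])

# Column-reducing M is the same as row-reducing M.T.
R, _ = M.T.rref()
basis = [R.row(i).T for i in range(R.rows) if any(R.row(i))]
print(basis)            # two independent columns spanning im(M)
print(M.columnspace())  # sympy's built-in (a possibly different basis of the same space)
```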
Evaluate the closed form $\int_{0}^{\pi/2}\sin^k(x)\ln\left[\sin(x)\sin^2\left({x\over 2}\right)\right]\mathrm dx=F(k)$
Let us try to achieve the required result using elementary approaches. At first, $$F(k) = \int\limits_{0}^{\pi/2}\sin^kx\ln\left({1\over2}\sin x(1-\cos x)\right)\,\mathrm dx = \int\limits_{0}^{\pi/2}\sin^kx\ln\left({\sin x\over2} - {\sin 2x\over 4}\right)\,\mathrm dx.$$ $\mathbf{k=1}$ $$F(1) = \int\limits_{0}^{\pi/2}\sin x\ln\left({1\over2}\sin x(1-\cos x)\right)\,\mathrm dx = \int\limits_{0}^{\pi/2}\ln\left({\sin x\over2} - {\sin 2x\over 4}\right)\,\mathrm d(1-\cos x).$$ By parts: $$ F(1) = (1-\cos x)\left.\ln\left({1\over2}\sin x(1-\cos x)\right)\right|_{0}^{\pi/2} - \int\limits_{0}^{\pi/2}(1-\cos x)\,{{1\over2}(\cos x - \cos2x)\over{{1\over2}\sin x(1-\cos x)}}\,\mathrm d x$$ $$ = -\ln2 - \int\limits_{0}^{\pi/2}{\cos x - 1 + 2\sin^2 x\over{\sin x}}\mathrm dx = -\ln2 - 2\int\limits_{0}^{\pi/2}\sin x\,\mathrm dx + \int\limits_{0}^{\pi/2}{1-\cos x\over1-\cos^2x}\sin x\,\mathrm dx$$ $$ = -\ln2 + 2\cos x\biggr|_0^{\pi/2} - \int\limits_{0}^{\pi/2}{\mathrm d(1+\cos x) \over1+\cos x} = -\ln2 - 2 - \ln(1+\cos x)\biggr|_0^{\pi/2} = -2,$$ $$\boxed{F(1) = -2}.$$ $\mathbf{k=3}$ $$\int\limits_0^x\sin^3y\,\mathrm dy = \int\limits_0^x(1-\cos^2y)\sin y\,\mathrm dy$$ $$ = \int\limits_0^x\left(2(1-\cos y)-(1-\cos y)^2\right)\,\mathrm d(1-\cos y) = \int\limits_0^{1-\cos x}(2t-t^2)\,\mathrm dt = J_3(1-\cos x),$$ where $$J_3(t) = t^2-{1\over3}t^3.$$ Then $$F(3) = \int\limits_{0}^{\pi/2}\sin^3x\ln\left({1\over2}\sin x(1-\cos x)\right)\,\mathrm dx = \int\limits_{0}^{\pi/2}\ln\left({\sin x\over2} - {\sin 2x\over 4}\right)\,\mathrm dJ_3(1-\cos x).$$ By parts: $$ F(3) = J_3(1-\cos x)\left.\ln\left({1\over2}\sin x(1-\cos x)\right)\right|_{0}^{\pi/2} - \int\limits_{0}^{\pi/2}J_3(1-\cos x)\,{{1\over2}(\cos x - \cos2x)\over{{1\over2}\sin x(1-\cos x)}}\,\mathrm dx$$ $$ = -J_3(1)\ln2 - \int\limits_{0}^{\pi/2}J_3(1-\cos x){\cos x - 1 + 2\sin^2 x\over{\sin x(1-\cos x)}}\mathrm dx$$ $$ = -J_3(1)\ln2 + \int\limits_{0}^{\pi/2}{J_3(1-\cos x)\over\sin x}\mathrm dx - 2\int\limits_{0}^{\pi/2}{J_3(1-\cos x)\over1-\cos x}\sin x\,\mathrm dx$$ $$ = -J_3(1)\ln2 + \int\limits_{0}^{\pi/2}{J_3(1-\cos x)\over(1-\cos x)(1+\cos x)}\mathrm d(1-\cos x) - 2\int\limits_{0}^{\pi/2}{J_3(1-\cos x)\over1-\cos x}\,\mathrm d(1-\cos x)$$ $$ = -J_3(1)\ln2 + \int\limits_{0}^{1}{J_3(t)\over t(2-t)}\mathrm dt - 2\int\limits_{0}^{1}{J_3(t)\over t}\,\mathrm dt$$ $$ = -J_3(1)\ln2 + {1\over2}\int\limits_{0}^{1}{J_3(t)\over t}\mathrm dt + {1\over2}\int\limits_{0}^{1}{J_3(t)\over2-t}\mathrm dt - 2\int\limits_{0}^{1}{J_3(t)\over t}\,\mathrm dt$$ $$ = -J_3(1)\ln2 + {1\over2}J_3(2)\int\limits_{0}^{1}{\mathrm dt\over2-t} - {1\over2}\int\limits_{0}^{1}{J_3(t)-J_3(2)\over t-2}\mathrm dt - {3\over2}\int\limits_{0}^{1}{J_3(t)\over t}\,\mathrm dt$$ $$ = -J_3(1)\ln2 - {1\over2}J_3(2)\ln(2-t)\biggr|_{0}^{1} - {1\over2}\int\limits_{0}^{1}{J_3(t)-J_3(2)\over t-2}\mathrm dt - {3\over2}\int\limits_{0}^{1}{J_3(t)\over t}\,\mathrm dt$$ $$ = \left(-J_3(1)+{1\over2}J_3(2)\right)\ln2 - {1\over2}\int\limits_{0}^{1}{J_3(t)-J_3(2)\over t-2}\mathrm dt - {3\over2}\int\limits_{0}^{1}{J_3(t)\over t}\,\mathrm dt$$ $$ = \left(-{2\over3}+{1\over2}\cdot{4\over3}\right)\ln2 - {17\over18} = -{17\over18}$$ (see also Wolfram Alpha), $$\boxed{F(3) = -{17\over18}}.$$ $\mathbf{k=2n+1}$ Similarly, $$\int\limits_0^x\sin^{2n+1}y\,\mathrm dy = \int\limits_0^x(1-\cos^2y)^n\sin y\,\mathrm dy = \int\limits_0^{1-\cos x}(2z-z^2)^n\,\mathrm dz = J_{2n+1}(1-\cos x),$$ where $$J_{2n+1}(t) = \int\limits_0^{t}(2z-z^2)^n\,\mathrm dz.$$ Then
$$F(2n+1) = \left(-J_{2n+1}(1)+{1\over2}J_{2n+1}(2)\right)\ln2 $$ $$- {1\over2}\int\limits_0^1{J_{2n+1}(t)-J_{2n+1}(2)\over t-2}\,\mathrm dt- {3\over2}\int\limits_0^1{J_{2n+1}(t)\over t}\,\mathrm dt.$$ Note that $$J_{2n+1}(1) = \int\limits_0^1(2t-t^2)^n\,\mathrm dt = \int\limits_0^1(1-(1-t)^2)^n\,\mathrm dt = \int\limits_0^1(1-t^2)^n\,\mathrm dt = {\sqrt{\pi}\,\Gamma(n + 1)\over2\Gamma(n + 3/2)}$$ (see also Wolfram Alpha), $$J_{2n+1}(2) = \int\limits_0^2(2t-t^2)^n\,\mathrm dt = {\sqrt{\pi}\,\Gamma(n + 1)\over\Gamma(n + 3/2)} = {n!\cdot2^{n+1}\over(2n+1)!!}$$ (see also Wolfram Alpha), therefore $$F(2n+1) = - {1\over2}\int\limits_0^1{J_{2n+1}(t)-J_{2n+1}(2)\over t-2}\,\mathrm dt- {3\over2}\int\limits_0^1{J_{2n+1}(t)\over t}\,\mathrm dt,$$ and the integrands are polynomials with rational coefficients. Besides, $$J_{2n+1}(t) - J_{2n+1}(2) = \int\limits_2^t(2y-y^2)^n\,\mathrm dy = -\int\limits_2^t\bigl((2-y)(2-(2-y))\bigr)^n\,\mathrm d(2-y) = -\int\limits_0^{2-t}(z(2-z))^n\,\mathrm dz = -J_{2n+1}(2-t).$$ That gives $$\boxed{F(2n+1) = - {1\over2}\int\limits_0^1\left(R_n(2-t)+3R_n(t)\right)\,\mathrm dt},$$ where $$\boxed{R_n(t) = {1\over t}\int\limits_0^t(2z-z^2)^n\,\mathrm dz}.$$ This allows one to obtain a closed form for the required integrals (as shown below) and explains the rational form of the results. The general formula Let us use the Newton binomial formula $$(2z-z^2)^n = z^n(2-z)^n = z^n\sum\limits_{i=0}^n\genfrac{(}{)}{0}{0}{n}{i}(-1)^i\cdot2^{n-i}z^i,$$ then $$R_n(t) = {1\over t}\int\limits_0^t(2z-z^2)^n\,\mathrm dz$$ $$ = {1\over t}\sum\limits_{i=0}^n\genfrac{(}{)}{0}{0}{n}{i}(-1)^i\cdot2^{n-i}\int\limits_0^t z^{n+i}\,\mathrm dz = \sum\limits_{i=0}^n{(-1)^i\over n+i+1}\genfrac{(}{)}{0}{0}{n}{i}\cdot2^{n-i}t^{n+i},$$ $$F(2n+1) = - {1\over2}\int\limits_0^1\left(R_n(2-t)+3R_n(t)\right)\,\mathrm dt$$ $$ = - {1\over2}\sum\limits_{i=0}^n{(-1)^i\over n+i+1}\genfrac{(}{)}{0}{0}{n}{i}\cdot2^{n-i}\int\limits_0^1\left((2-t)^{n+i}+3t^{n+i}\right)\,\mathrm dt $$ $$ = - {1\over2}\sum\limits_{i=0}^n{(-1)^i\over (n+i+1)^2}\genfrac{(}{)}{0}{0}{n}{i}\cdot2^{n-i}\left(-(2-t)^{n+i+1}+3t^{n+i+1}\right)\biggr|_0^1 $$ $$ = - {1\over2}\sum\limits_{i=0}^n{(-1)^i\over (n+i+1)^2}\genfrac{(}{)}{0}{0}{n}{i}\cdot2^{n-i}\left(2+2^{n+i+1}\right),$$ $$\boxed{\boxed{F(2n+1) = -4^n\sum\limits_{i=0}^n{(-1)^i\over (n+i+1)^2}\genfrac{(}{)}{0}{0}{n}{i}(1+2^{-n-i})}}.\tag1$$ Results Calculation results from $(1)$, checked with Wolfram Alpha: $\mathbf{n=0}\quad F(1) = -2$ $\mathbf{n=1}\quad F(3) = -\dfrac{17}{18}$ $\mathbf{n=2}\quad F(5) = -\dfrac{587}{900}$ $\mathbf{n=3}\quad F(7) = -\dfrac{629}{1225}$ $\mathbf{n=4}\quad F(9) = -\dfrac{342319}{793800}$ $\mathbf{n=5}\quad F(11) = -\dfrac{3613679}{9604980}$ Done!
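For an independent numerical check of $(1)$, here is a small sketch comparing the closed form against direct quadrature of the original integral (quadrature tolerances are scipy defaults; nothing here is part of the derivation above):

```python
import numpy as np
from math import comb
from scipy.integrate import quad

def F_closed(n):
    """Formula (1) for F(2n+1)."""
    return -4.0**n * sum(
        (-1)**i / (n + i + 1)**2 * comb(n, i) * (1 + 2.0**(-n - i))
        for i in range(n + 1)
    )

def F_numeric(k):
    """Direct quadrature of the original integral for exponent k."""
    f = lambda x: np.sin(x)**k * np.log(np.sin(x) * np.sin(x / 2)**2)
    return quad(f, 0, np.pi / 2)[0]

for n in range(4):
    print(F_closed(n), F_numeric(2 * n + 1))
# prints -2, -17/18, -587/900, -629/1225 twice each (up to quadrature error)
```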
Finding block diagonal matrix
It might help to notice that $A=u v^T$, where $u = (4, 1, -1)^T$, $v=(1, -1, 3)^T$. Notice that $u \bot v$. This simplifies matters a little. $\ker A = \operatorname{sp}\{ u, w \}$, where $w=u \times v = (2, -13, -5)^T$, and $u,v,w$ are mutually orthogonal. Then $Au = 0$, $Aw = 0$ and $Av = \|v\|^2 u = 11 u$. In the basis $\{ u, {1 \over 11} v, w \}$, the matrix $A$ has the form $\begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}$.
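A quick numerical check of this change of basis, following the decomposition above (a sketch):

```python
import numpy as np

u = np.array([4.0, 1.0, -1.0])
v = np.array([1.0, -1.0, 3.0])
A = np.outer(u, v)   # A = u v^T, as in the answer
w = np.cross(u, v)   # (2, -13, -5), orthogonal to u and v

P = np.column_stack([u, v / 11, w])   # basis {u, v/11, w}
print(np.round(np.linalg.inv(P) @ A @ P, 10))
# [[0. 1. 0.]
#  [0. 0. 0.]
#  [0. 0. 0.]]
```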
Nasty double integral with lots of exponentials
I'm gonna take a wild stab at this: does the integrand get any simpler if we convert to polar coordinates, like we do in the old "integrate $e^{-x^2}$ as a dummy-variable double integral" trick? Except hopefully in this case it's even easier, because it actually IS a double integral. Consider $\gamma_1 = r \cos\theta_1$ and $\gamma_2 = r \cos\theta_2$. Now let's substitute and see if it simplifies the integrand. $$F(r \cos\theta_1)= A \,\exp\bigl(-a \, (r \cos\theta_1)^{1/2}\bigr)+B \, (r \cos\theta_1)^{-1/2}\left(1-\exp\bigl(-b(r \cos\theta_1)^{1/4}\bigr)\right)$$ $$F(r \cos\theta_2)= A \,\exp\bigl(-a \, (r \cos\theta_2)^{1/2}\bigr)+B \, (r \cos\theta_2)^{-1/2}\left(1-\exp\bigl(-b(r \cos\theta_2)^{1/4}\bigr)\right)$$ Boy, that looks even uglier. The idea is that when things are multiplied, hopefully, if the Gods are kind, terms will drop out. Make the appropriate polar substitutions and then plug the integrand into MATLAB. See if the resulting integral is simpler. Wish I had time to take a crack at it.
Finding values of quotient and remainders for Gaussian integers.
Let's say you want to check with dividend $5+i$ (norm $26$) and divisor $1+2i$ (norm $5$). First, we carry out the division this way: $$\frac{5+i}{1+2i}=\frac{5+i}{1+2i}\cdot\frac{1-2i}{1-2i}=\frac{7-9i}{5}=\frac75-\frac95i$$ Now, we have $\frac75$ between $1$ and $2$, and we have $-\frac95$ between $-2$ and $-1$. Thus, in principle, we have four options for $q$: $1-2i, 1-i, 2-2i, 2-i$. Usually, we round to the closest integer quotient, because this guarantees that the remainder will be small enough. It will always be true, though, that at least two of these work, assuming there is a non-zero remainder. In some cases, all four will work. With this example, the closest quotient is $1-2i$. Since $|\frac75-1|=\frac25$, and $|\frac95-(-2)|=\frac15$, we can calculate $(\frac25)^2+(\frac15)^2=\frac15$, so the remainder will have norm $\frac15$ that of the divisor. Indeed, $5+i=(1-2i)(1+2i)+i$, so the norm of the remainder is $1$. If we choose the "worst case scenario", and go with $2-i$ as our quotient, then the same calculation leads to $(\frac35)^2+(\frac45)^2=1$, so the remainder in that case will be no smaller than the divisor, so that doesn't work for the division algorithm. The other three options, though, are all good. Does this help?
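Here is a small Python sketch of this rounding procedure (the coordinatewise rounding guarantees $N(r)\le N(b)/2$, so it always works for the division algorithm):

```python
def gauss_divmod(a, b):
    """Divide in Z[i]: return (q, r) with a = q*b + r and N(r) <= N(b)/2,
    by rounding the exact quotient to the nearest Gaussian integer."""
    q_exact = a / b
    q = complex(round(q_exact.real), round(q_exact.imag))
    return q, a - q * b

q, r = gauss_divmod(5 + 1j, 1 + 2j)
print(q, r)  # (1-2j) 1j  -- i.e. 5 + i = (1 - 2i)(1 + 2i) + i, as in the example
```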
To show that $\operatorname{Rank}(\mathbf{A}-\mathbf{I})=\operatorname{Nullity}(\mathbf{A})$
Hint: $$ \mathbf{A}^{2} = \mathbf{A} \quad \Leftrightarrow \quad \mathbf{A}^{2} - \mathbf{A} = \mathbf{0}_{n\times n} \quad \Leftrightarrow \quad \mathbf{A} \left(\mathbf{A}-\mathbf{I}\right) = \mathbf{0}_{n\times n}. $$ Let $\mathbf{B} = (\mathbf{A} - \mathbf{I})$. The above implies that every vector in the range of $\mathbf{B}$ lies in the nullspace of $\mathbf{A}$. Can you take it from there? Edit: I am expanding the answer because it seems we need to clarify a few things. Why is it that the range of $\mathbf{B}$ lies in the nullspace of $\mathbf{A}$? Let $\mathbf{w}$ be any vector in the range of $\mathbf{B}$, $\mathcal{R}(\mathbf{B})$. This means that we can write $\mathbf{w}$ as a linear combination of the columns of $\mathbf{B}$, i.e., there exists a $\mathbf{c}$ such that $\mathbf{w}=\mathbf{B}\mathbf{c}$. But $$ \mathbf{A}\mathbf{w} =\mathbf{A}\mathbf{B}\mathbf{c} = \mathbf{0}_{n\times n} \mathbf{c} = \mathbf{0}, $$ which implies that $\mathbf{w}$ belongs to the nullspace of $\mathbf{A}$, $\mathcal{N}(\mathbf{A})$. Hence, we have shown that $$ \mathbf{w} \in \mathcal{R}(\mathbf{B}) \quad \Rightarrow \quad \mathbf{w} \in \mathcal{N}(\mathbf{A}), $$ or equivalently, $$ \mathcal{R}(\mathbf{B}) \subseteq \mathcal{N}(\mathbf{A}). $$ The above implies that $$ \text{nullity}(\mathbf{A}) \ge \text{rank}(\mathbf{B})=\text{rank}(\mathbf{A}-\mathbf{I}). $$ But we are not quite done yet, since we want to show that the two sides are actually equal. I am assuming that you are familiar with the Rank-Nullity Theorem. How can you use that to show the other side of the inequality? Edit 2: (This is the point where hints start looking a lot like an answer!) The Rank-Nullity Theorem says that $$ n = \text{rank}(\mathbf{A}) + \text{nullity}(\mathbf{A}) \quad \Rightarrow \quad \text{nullity}(\mathbf{A}) = n - \text{rank}(\mathbf{A}). $$ What can you say about $\text{rank}(\mathbf{A} - \mathbf{I})$ with respect to $\text{rank}(\mathbf{A})$? How can you get it into the picture? (Remember! We are trying to show that $\text{nullity}(\mathbf{A}) \le \text{rank}(\mathbf{A} - \mathbf{I})$ because we have already shown the reverse inequality.) Edit 3: We have already shown that $\text{nullity}(\mathbf{A}) \ge \text{rank}(\mathbf{A}-\mathbf{I})$. To show that the relation holds with equality, it suffices to show that $$ \text{nullity}(\mathbf{A}) \le \text{rank}(\mathbf{A}-\mathbf{I}). $$ We have already seen that $$ \text{nullity}(\mathbf{A}) = n - \text{rank}(\mathbf{A}). $$ Hence, to show the desired result, it suffices to show that $$ n - \text{rank}(\mathbf{A}) \le \text{rank}(\mathbf{A}-\mathbf{I}). $$ One way to argue about this, possibly not the best, is the following. Let $\mathbf{B} = \mathbf{A} - \mathbf{I}$. By the rank-nullity theorem, we know that $$ \text{rank}(\mathbf{B}) + \text{nullity}(\mathbf{B}) = n. $$ Now, the nullspace of $\mathbf{B}$, $\mathcal{N}(\mathbf{B})$, consists of all vectors $\mathbf{w}$ such that $\mathbf{B}\mathbf{w}=\mathbf{0}$, or equivalently: $$ \mathbf{w} \in \mathcal{N}(\mathbf{B}) \quad \Leftrightarrow \quad (\mathbf{A}-\mathbf{I})\mathbf{w} = \mathbf{0} \quad \Leftrightarrow \quad \mathbf{A}\mathbf{w} = \mathbf{w}. $$ These vectors $\mathbf{w}$ clearly lie in the range of $\mathbf{A}$ (each such $\mathbf{w}$ is the image of itself). In other words, $$ \mathbf{w} \in \mathcal{N}(\mathbf{B}) \quad \Rightarrow \quad \mathbf{w} \in \mathcal{R}(\mathbf{A}), $$ and in turn, $$ \mathcal{N}(\mathbf{B}) \subseteq \mathcal{R}(\mathbf{A}). $$ We conclude that $\text{nullity}(\mathbf{B}) \le \text{rank}(\mathbf{A})$ and finally, $$ \text{rank}(\mathbf{B}) = n-\text{nullity}(\mathbf{B}) \ge n-\text{rank}(\mathbf{A}), $$ which is the desired inequality $$ \text{rank}(\mathbf{A}-\mathbf{I}) \ge n-\text{rank}(\mathbf{A}). $$
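A quick numerical illustration of the final identity (a sketch; the idempotent matrix is built from an arbitrary invertible $P$ of my choosing):

```python
import numpy as np

# Check rank(A - I) == nullity(A) for a random idempotent A = P diag(1,1,0,0) P^{-1}.
rng = np.random.default_rng(1)
P = rng.standard_normal((4, 4))
A = P @ np.diag([1.0, 1.0, 0.0, 0.0]) @ np.linalg.inv(P)   # A^2 = A

n = A.shape[0]
rank_B = np.linalg.matrix_rank(A - np.eye(n))
nullity_A = n - np.linalg.matrix_rank(A)
print(rank_B, nullity_A)  # 2 2
```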