Is it true that $(a,b)=(c,b)=1\iff (ac,b)=1$?
Yes, it's true, and you can prove it easily. For the forward implication, suppose that $(ac,b)\neq 1$. Then there is $d>1$ such that $d\mid ac$ and $d\mid b$. Since $d\mid b$ and $(a,b)=1$, we have $(d,a)=1$, so by Gauss's lemma $d\mid c$, which contradicts $(c,b)=1$. The converse can be proved with the same technique.
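This equivalence is easy to spot-check numerically; the sketch below (my own addition, not part of the proof) tests every triple in a small range using Python's `math.gcd`:

```python
import math

# Numeric spot-check of gcd(a,b) = gcd(c,b) = 1  <=>  gcd(a*c, b) = 1
# over all small triples (a, b, c). Purely illustrative.
for a in range(1, 30):
    for b in range(1, 30):
        for c in range(1, 30):
            lhs = math.gcd(a, b) == 1 and math.gcd(c, b) == 1
            rhs = math.gcd(a * c, b) == 1
            assert lhs == rhs
print("equivalence holds on all tested triples")
```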
Grouping Property in Entropy
Note that (show it) $$ H(X)=H(I)+H(X\mid I)-H(I\mid X). $$ Also, $H(X\mid I=k)=H(X_k)$; therefore, $$ H(X\mid I)=\sum_k \mathbb{P}(I=k) H(X_k). $$ Finally, if $\mathcal{X}_i \cap \mathcal{X}_j=\emptyset$ for all $i\neq j$, it follows that knowledge of $X$ implies perfect knowledge of $I$; therefore $H(I\mid X)=0$ in this case. If the sets overlap, then you cannot always be certain about $I$ given the value of $X$, as you may observe a value that is common to two or more variables (and therefore corresponds to two or more values of $I$). In that case, $H(I\mid X)>0$.
Cardinal number for a family of subsets to be a topology
If you look closely, your proof of the 2nd condition shows that it is always true, because taking a union of sets with "small complement" can only decrease the size of the complement (of the union). The third condition is where the cake hides. It suffices to show that the intersection of two sets still has a small complement, and as we know, $X\setminus(A\cap B)=(X\setminus A)\cup(X\setminus B)$. Therefore you should ask yourself: for which $\kappa$ does $\kappa=\kappa+\kappa$ hold?
Two squares are inscribed in a circle as shown in the figure; if the area of the larger is 144 cm², find the area of the smaller.
Call the center of the semicircle $O$; the picture shows $O$ as the midpoint of one of the sides of the larger square. First, we know the side length of the larger square is $12$ because $12^2 = 144$. Let $r$ be the radius of the semicircle. By the Pythagorean theorem, $r$ is the distance between $O$ and the top right corner of the square: $r = \sqrt{6^2 + 12^2} = \sqrt{180} = 6\sqrt{5}.$ We can also find $r$ in another way, as the distance between $O$ and the top left corner of the smaller square. Letting the side length of the smaller square be $x$, the Pythagorean theorem gives $(6 + x)^2 + x^2 = r^2 = 180.$ Expanding, $2x^2 + 12x + 36 = 180,$ which simplifies to $x^2 + 6x - 72 = 0$, so $(x - 6)(x + 12) = 0$, giving $x = 6$ or $x = -12$. A length cannot be negative, so $x = 6$, and the area of the smaller square is $x^2 = 6^2 = \boxed{36}.$
Determinant of block matrix with off-diagonal blocks conjugate of each other.
Hint: Suppose $C$ is invertible [otherwise use the matrix $C - \lambda I$ in place of $C$ for that will certainly be invertible for infinitely many $\lambda \in \mathbb{C}$]. Write $\begin{pmatrix} C & D \\ D^{*} & C \end{pmatrix} = \begin{pmatrix} I & 0 \\ D^{*}C^{-1} & I \end{pmatrix} \begin{pmatrix} C & D \\ 0 & C - D^{*}C^{-1}D \end{pmatrix}$
Proof of $\left| x\right| <1$, then $\lim_{n\to \infty } \, x^n=0$.
$-|x|^n$ is the "lower" function and $|x|^n$ is the "upper": $-|x|^n \le x^n \le |x|^n$, and both have zero as their limit. PS: there is no $\log^n(|x|)$ in the proof, but rather $\ln(|x|^n)$. It is not the "lower" function (nor the "upper"); it is used as a lemma for finding the limit of $|x|^n$.
How many digits are there in the number $200^{2010}$?
$\log 200^{2010}=2010\times\log 200=2010\times(2+0.3010)=4625.01$ $\therefore$ $4625\leq \log 200^{2010} < 4626$ $\therefore$ $10^{4625}\leq 200^{2010} < 10^{4626}$ $\therefore$ $4626$ digits
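The digit count can be cross-checked directly with Python's big integers (a sketch of my own, not part of the original answer), using the fact that $N$ has $\lfloor\log_{10}N\rfloor+1$ digits:

```python
from math import log10, floor

# Number of digits of 200**2010, computed two ways.
digits_via_log = floor(2010 * log10(200)) + 1
digits_direct = len(str(200 ** 2010))
assert digits_via_log == digits_direct == 4626
print(digits_direct)  # 4626
```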
Intersection of all sets in finite sigma algebra
Regardless of whether the sigma-algebra is finite or infinite, the intersection of all of its elements will certainly be $\varnothing$, simply because $\varnothing$ is one of them. In response to your edit: as long as the s.a. is infinite (countably or uncountably), the intersection of a particular infinite collection of its elements need not be empty, because the collection need neither contain the empty set nor a set together with its complement.
Programming Bifurcation analysis
I would write everything in Mathematica, since it has great symbolic manipulation and superb flow-visualization functions.
Transition matrix and coordinate matrix from basis B
The transition matrix we are looking for is the inverse of $[T]_B$. Indeed we have that for $v_S$ represented in the standard basis $$v_S=[T]_B\cdot v_B \implies v_B=[T]_B^{-1}\cdot v_S$$ where $v_B$ is the representation in the $B$ basis.
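A tiny worked example may help (my own sketch, with a hypothetical basis $B=\{(1,1),(1,-1)\}$): the columns of $[T]_B$ are the $B$-vectors written in the standard basis, so multiplying by $[T]_B$ converts $B$-coordinates to standard coordinates, and its inverse converts back.

```python
# 2x2 change-of-basis illustration with plain Python lists.

def mat_vec(M, v):
    return [M[0][0]*v[0] + M[0][1]*v[1], M[1][0]*v[0] + M[1][1]*v[1]]

def inv2(M):
    a, b, c, d = M[0][0], M[0][1], M[1][0], M[1][1]
    det = a*d - b*c
    return [[d/det, -b/det], [-c/det, a/det]]

T = [[1, 1], [1, -1]]     # [T]_B: basis vectors of B as columns
v_B = [2.0, 3.0]          # coordinates relative to B
v_S = mat_vec(T, v_B)     # standard coordinates
assert mat_vec(inv2(T), v_S) == v_B
print(v_S)  # [5.0, -1.0]
```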
$R \circ R$ of algebraically defined relation $R$
Take for example $(x,z) \in R^2$; then by the definition of composing relations, there exists a $y \in \mathbb{R}$ such that $(x,y) \in R$ and $(y,z) \in R$. So for example if you take $(9,1) \in R^2$, what $y \in \mathbb{R}$ ensures that $(9,y) \in R$ and $(y,1) \in R$? The answer would be $y=3$, as $(9,3) \in R$ and $(3,1) \in R$. Now, can you find the condition on the values of $x$ and $z$ so that such a $y$ always exists?
Morphisms between locally ringed spaces and affine schemes
$Y$ is an affine scheme, so $f(x)$ is a prime ideal of $\mathscr{O}_Y(Y)$ and $\mathscr{O}_{Y,f(x)}$ is the localization of $\mathscr{O}_Y(Y)$ at $f(x)$. A basic fact about localization is that the inverse image in $\mathscr{O}_Y(Y)$ of the unique maximal ideal $\mathfrak{m}_{f(x)}$ of $\mathscr{O}_{Y,f(x)}$ is $f(x)$. Now it follows from the commutativity of the diagram, together with the fact you mentioned, which amounts to the map on stalks being local, that $f(x)$ is the inverse image in $\mathscr{O}_Y(Y)$ of $\mathfrak{m}_x$ under the map $\mathscr{O}_Y(Y)\rightarrow\mathscr{O}_X(X)\rightarrow\mathscr{O}_{X,x}$.
Gödel's incompleteness 1: construction of the formula relating Gödel number of proof to Gödel number of proven statement
The key point to be aware of is that the formula that defines an "arithmetical relation" can use quantifiers. So $x+y=6$ does not really represent the complexity that's possible. For example we could have $$ \exists a(2\cdot a=x) $$ for "$x$ is even", or $$ x\ne 1 \land \forall a\forall b(a\cdot b \ne x \lor a=1 \lor b = 1) $$ for "$x$ is prime". The general form is then something like $\mathit{PF}(x,y) \equiv{}$ there exists $n<x$ such that the $n$th element of $x$ (viewed as a Gödel number for a sequence) equals $y$, and for each $m\le n$ [the $m$th element of $x$ is an axiom, or there exist $m_1,m_2 < m$ such that the $m$th element of $x$ follows from the $m_1$th and $m_2$th elements by a single inference rule]. Of course, this needs a lot of further unfolding to argue that there are arithmetical formulas for picking Gödel numbers apart, checking inference rules, etc. With modern concepts that were not available to Gödel in 1931, the entire construction of $\mathit{PF}$ can be described as:

1. Define a restricted programming language of "first-order function definitions with primitive recursion".
2. Argue that this language is powerful enough to write a program that recognizes valid proofs. In Gödel's 1931 paper, this argument is partially informal: there are 4½ pages full of terse formal descriptions of more and more complex functions that Gödel asserts can be created using primitive recursion, ending with $\mathit{PF}$, but it is up to the reader to figure out the details of how to create each of them.
3. Argue that any program in the restricted programming language can be systematically converted into an arithmetical definition.
Gödel originally used a higher-order logic inspired by Principia Mathematica where this representation is almost trivial, but later in the 1931 paper he presented an alternative translation into what we now recognize as the first-order language of arithmetic, comprising addition, multiplication, symbols for $0$ and $1$ (or successor), equality, logical operators, and quantifiers. A key idea in the latter translation is to use the $\beta$ function to express quantification over finite sequences of numbers rather than just one number at a time. The result of these multiple steps is a somewhat unwieldy formula that I daresay nobody wrote down explicitly in the 1930s. Today it would be a more-or-less routine (if long and tedious) exercise for CS undergraduates to program a computer to create it, but seeing the result is not particularly enlightening.
How exactly does the constant $C$ in the Sobolev inequality depend on the domain?
Let $T:\mathbb R^n\to\mathbb R^n$ be a rigid motion: a translation, rotation, reflection, or composition of such maps. Then $|D^k u\circ T|=|D^k u|\circ T$ for derivatives of all orders. Using this and the change of variables formula (in which the absolute value of the Jacobian is $1$), you will find that neither side of $\Vert u \Vert_{L^q(U)} \leq C \Vert u \Vert_{W^{k,p}(U)}$ is affected by the transformation $T$. Thus, the best Sobolev constant is preserved under rigid motions. Scaling is trickier, because it affects the derivatives of different orders differently. I think the behavior of the optimal $C$ in $\Vert u \Vert_{L^q(U)} \leq C \Vert u \Vert_{W^{k,p}(U)}$ under scaling can be pretty complicated. What is true is that one can give an upper bound on $C$ that depends (besides the indices $k,p,q$) only on the parameters of the cone condition satisfied by $U$. See Sobolev spaces by Adams for details. Scaling is more tractable for the related inequality $\Vert u \Vert_{L^q(U)} \leq C \Vert D^ku \Vert_{L^{p}(U)}$, which holds for $u\in W^{k,p}_0(U)$. Indeed, here replacing $u(x)$ with $u(\lambda^{-1} x)$ contributes a factor of $\lambda^{n/q}$ on the left and $\lambda^{-k+n/p}$ on the right. So, if the domain is scaled by $\lambda$, the constant $C$ gets multiplied by $\lambda^{k-n/p+n/q}$. Observe that $k-n/p+n/q\ge 0$ here, with equality for the borderline Sobolev exponent. That larger domains have larger constants for $W^{k,p}_0(U)$ inequalities is obvious, because $W^{k,p}_0(U)\subset W^{k,p}_0(V)$ when $U\subset V$. Such monotonicity does not hold for the $W^{k,p}(U)$ inequality, which is sensitive to the shape of the domain in a subtle and mostly intractable way.
Let $z=18+26i$ where $z_0=x_0+iy_0, \ x_0, y_0\in \Bbb R$ is the cube root of $z$ having least positive argument. Find the value of $x_0y_0(x_0+y_0)$.
I recommend another approach. Let's use polar form to find $z_0$. We see that $|z|=\sqrt{18^2+26^2}=\sqrt{1000}$. Thus $|z_0|$ is the cube root of that, namely $|z_0|=\sqrt{10}$. To find the argument of $z_0$ (let's call it $\theta$), note that the argument of $z_0^3=z=18+26i$ is $3\theta$ and is given by $\tan3\theta=\frac{26}{18}=\frac{13}9$. Expanding, $$\begin{align} \frac{13}9 &= \tan 3\theta \\[2ex] &= \tan(2\theta+\theta) \\[2ex] &= \frac{\tan 2\theta+\tan\theta}{1-(\tan 2\theta)(\tan\theta)} \\[2ex] &= \frac{\frac{2\tan\theta}{1-\tan^2\theta}+\tan\theta} {1-\left(\frac{2\tan\theta}{1-\tan^2\theta}\right)(\tan\theta)} \\[2ex] &=\frac{3\tan\theta-\tan^3\theta}{1-3\tan^2\theta} \end{align}$$ Letting $t=\tan\theta$ and rearranging that equation, we get $$9t^3-39t^2-27t+13=0$$ Using the Rational Root Theorem we can factor that into $$(3t-1)(3t^2-12t-13)=0$$ which gives us the solutions $$\tan\theta=t=\frac 13, \ \frac{6\pm5\sqrt 3}{3}$$ The smallest positive argument is given by the first solution, $\tan\theta=\frac 13$. We can quickly combine that with $|z_0|=\sqrt{10}$ to get the answer $$z_0=3+i$$ This could have been seen by inspection much earlier, but I wanted to show a fuller exposition of getting that answer. Checking quickly shows this is the desired cube root of $z$. In any case, we get $$x_0=3, \ y_0=1$$ and the final answer $$x_0y_0(x_0+y_0)=3\cdot 1(3+1)=12$$
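The final answer is easy to verify by machine (a check of my own, not part of the derivation):

```python
# Confirm that z0 = 3 + i is a cube root of 18 + 26i and that the
# requested expression equals 12.
z0 = 3 + 1j
assert z0 ** 3 == 18 + 26j
x0, y0 = 3, 1
assert x0 * y0 * (x0 + y0) == 12
print("z0**3 =", z0 ** 3)
```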
Count the A's in a sequence
Let $A_n$ be the number of $a$'s, $B_n$ be the number of $b$'s and $C_n$ be the number of $c$'s in the word $S_n$. Then by the rules of constructing the words we get \begin{align*} A_n & = A_{n-1}+B_{n-1}+C_{n-1}\\ B_n & = A_{n-1}\\ C_n & = B_{n-1} \end{align*} with $A_0=1, B_0=0$ and $C_0=0$. You can think of this as $$\mathbf{v}_{n}=\begin{bmatrix}1&1&1\\1&0&0\\0&1&0\end{bmatrix}\mathbf{v}_{n-1},$$ where $$\mathbf{v}_{n}=\begin{bmatrix}A_n\\B_n\\C_n\end{bmatrix}$$ Now if you can diagonalize this matrix then you can compute higher powers of it and make use of that to get $A_n,B_n,C_n$.
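For a numeric check (my own sketch; no diagonalization needed), one can simply iterate the matrix recurrence $\mathbf{v}_n=M\mathbf{v}_{n-1}$ starting from $\mathbf{v}_0=(1,0,0)$:

```python
# Iterate v_n = M v_{n-1} and record A_n, the first component.
M = [[1, 1, 1],
     [1, 0, 0],
     [0, 1, 0]]

def step(v):
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

v = [1, 0, 0]
counts = [v[0]]
for _ in range(6):
    v = step(v)
    counts.append(v[0])
print(counts)  # A_0..A_6: [1, 1, 2, 4, 7, 13, 24]
```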
How does one prove that the number $111\ldots 1$ (formed by $3^{n}$ digits all equal to $1$) is divisible by $3^{n}?$
I think you meant: For $n\geq 1$, show that the number $111 \cdots 1$ (formed by $3^n$ ones) is divisible by $3^n$. The base case is clear. Let $A_m:= \underbrace{111 \cdots 1}_{3^m}$. Suppose the result is true for $n=k$; then $$A_{k+1}= \frac{10^{3^{k+1}}-1}{9}=\frac{(10^{3^k})^3-1}{9}=\frac{10^{3^k}-1}{9}\left(10^{2\cdot3^k}+10^{3^k}+1\right)=A_k\times S$$ where $S=10^{2\cdot3^k}+10^{3^k}+1$. Since the digit sum of $S$ is $3$, $S$ is a multiple of $3$, and the conclusion follows.
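The statement is cheap to verify directly for small $n$ with Python's big integers (an illustrative check of mine, not a replacement for the induction):

```python
# The repunit with 3**n ones is (10**(3**n) - 1) // 9; check divisibility
# by 3**n for the first few n.
for n in range(1, 6):
    repunit = (10 ** (3 ** n) - 1) // 9
    assert repunit % (3 ** n) == 0
print("divisibility verified for n = 1..5")
```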
Rational exponents: prove some states
Since for real numbers $a$ and $b$ we want to define $a^b$ by $$ a^b := \exp(b \log a), $$ we see that we must restrict $a$ to be a positive real number, since $\log : \mathbb{R}_{>0} \to \mathbb{R}$ is undefined on $\mathbb{R}_{\leq 0}$. Note that this definition of $a^b$ for $a>0$ contains the following restricted definition: $$ a^n := \underbrace{a \cdot a \cdot \cdots \cdot a}_{n \text{ times}} \quad \forall a>0, \forall n \in \mathbb{N}, $$ as well as this one: $$ a^{\frac{1}{n}} := \sqrt[n]{a}, \quad \forall n \in \mathbb{N}. $$ The benefits of defining powers in terms of the inverse functions $\exp$ and $\log$ are that we can extend the last two definitions to irrational exponents, and that $\exp$ and $\log$ can be rigorously defined in terms of power series and integrals, for example. However, if $a<0$, we can't use the definition in terms of $\exp$ and $\log$, but we can still try to mimic the restricted definitions given above. For example, we naturally define: $$ a^n := \underbrace{a \cdot a \cdot \cdots \cdot a}_{n \text{ times}} \quad \forall a \leq 0, \forall n \in \mathbb{N}, $$ which is indeed well-defined. However, when it comes to rational exponents, we can't make the general definition: $$ a^{\frac{1}{n}} := \sqrt[n]{a} \quad \forall a \leq 0, \forall n \in \mathbb{N}, $$ since it is well known that for particular choices of $a$ and $n$ there would be no element $a^{\frac{1}{n}}$ in the set of real numbers satisfying $$ (a^{\frac{1}{n}})^n = a. $$ For example, take $a = -2$ and $n = 2$. This contrasts with the case where $a$ is restricted to $\mathbb{R}_{>0}$: in the latter case, when we actually construct the set of real numbers, it is possible to show that there will always be such a real number $a^{\frac{1}{n}}$. For particular choices of $a \leq 0$ and $n$, it might happen that such a real number $a^{\frac{1}{n}}$ exists: for example, take $a = -8$ and $n = 3$ and you find that $-2 \in \mathbb{R}$ can be taken.
To conclude, if we go back to the case $a = -2$ and $n = 2$, we find that the difficulty lies in the fact that the relation on $\mathbb{R}$ $$ x^2 = -2 $$ is empty. Complex numbers have been introduced to provide solutions to such equations.
How to solve the inequality with logarithm?
Hint: Use $$\log_a b = \frac{\ln b}{\ln a}$$ which yields $$\frac{1}{\frac{\ln \frac{x}{20}}{\ln (x-1)}} \geq -1$$ and the inequality can be solved simply from here onward.
Modulo congruences and remainder
We know that $M=kN$ with $k\in\mathbb{Z}$. Let $r=x\text{ mod }M$ and $s=r\text{ mod }N$. We have $x=aM+r$ with $0\le r<M$, and $r=bN+s$ with $0\le s<N$. Hence: $x=aM+bN+s=(ak+b)N+s$. This last formula shows that $x\text{ mod }N=s=r\text{ mod }N=(x\text{ mod }M)\text{ mod }N$.
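The identity is easy to spot-check exhaustively on small cases (an illustrative sketch, not part of the proof):

```python
# Check (x mod M) mod N == x mod N whenever N divides M.
for N in range(1, 12):
    for k in range(1, 12):
        M = k * N
        for x in range(200):
            assert (x % M) % N == x % N
print("identity holds on all tested cases")
```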
What is a mathematically rigorous treatment of "sentences", "words" and "language"?
To my knowledge there is no deep study of such 'language systems', but the study of logic has made quite some progress in that regard. There are multiple ways of formalizing the concept of language systems, such as plain sequences, trees, or others, but all these formulations are equivalent. Model theory may be what you have in mind: it studies theories and their structures, but to write down a theory's axioms a language is needed. I will give a brief definition of a language and give the example of a language for the theory of natural numbers. A language has: $n$-placed functions, e.g. the 2-ary $+$ function, the multiplication $\times$ and the successor $S$; $n$-placed relations, e.g. the 2-ary $<$ (less than) relation, or the 1-ary $Odd$ relation; and constants, e.g. $0, 1, 2,\dots$ (as you may have noticed, you only need the constant $0$; the rest can be written down using it and the successor function). Similarly, logic has a language ($\wedge, \forall, \lnot$, etc.; though this language is usually taken as primitive). Now every sequence of these elements is a valid expression, but we only care about well-formed formulas, which are those that satisfy some requirements; e.g. a 2-ary relation may require a term on each side of it, as in $1<2$. Common-sense restrictions like that can be used to generate, by recursion, the set of all well-formed formulas. The word "sentence" usually just means a well-formed formula with no unquantified variables. Some nice things about this construction (assuming a lot of stuff I haven't mentioned explicitly): the unique readability theorem (there is only one way to interpret a sentence — no ambiguities in math), and an effective computation to check whether an expression is a well-formed formula. There's a lot more to say, but that would make me write a book, so I'd just better recommend you a good book on first-order logic: I personally like Enderton's A mathematical introduction to logic.
Binomial coefficient paths?
Your solution seems to be correct. I don't know how to derive your solution, but here's my insight. Note that to find the total number of paths passing through the blue segment between the $k$-th column and $i$-th row, we can split the path into two smaller pieces. First you have to land at the intersection of the $k$-th vertical line and the $i$-th horizontal line (the bottom one and the left-most one are numbered $0$). The total number of ways is $$Y = \binom{k+i}{k}$$ Then we have to go one step right and count all the paths from the intersection of the $(k+1)$-th line and $i$-th line. The total number of ways is: $$Z = \binom{m-(k+1) + (n-i)}{n-i}$$ Now just multiply the two results.
Looking for series that suits 1,1,-1,-1,1,1,-1,-1 ..
$$u_n=(-1)^{\lfloor \frac {n}{2}\rfloor} $$ or $$u_{2n}=u_{2n+1}=(-1)^n $$
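As a quick check (my own addition), the first formula reproduces the requested pattern:

```python
# u_n = (-1)**floor(n/2) gives 1, 1, -1, -1, 1, 1, -1, -1, ...
seq = [(-1) ** (n // 2) for n in range(8)]
assert seq == [1, 1, -1, -1, 1, 1, -1, -1]
print(seq)
```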
$y' - ay = \delta(t - T) , t > 0$
For $0<t<T$, $y'-ay=0$, so $y(t)=y_0e^{at}$. For $t>T$, $y'-ay=0$, so $y(t)=Ae^{at}$. Applying the discontinuity at $T$, we see that $y(T^+)-y(T^-)=1$, so that $$(A-y_0)e^{aT}=1$$ Solving for $A$ we find that $A=y_0+e^{-aT}$. Putting it all together, $$y(t)=y_0e^{at}+\begin{cases}0&,\,0<t<T\\ e^{a(t-T)}&,\,t>T\end{cases}$$ This can be written more succinctly as $$y(t)=y_0e^{at}+e^{a(t-T)}H(t-T)$$ where $H(t)$ is the Heaviside (unit step) function.
Deformation theory formalism
We have a morphism $f: X'\rightarrow \operatorname{Spec} D$ such that the fibre over the closed point $\operatorname{Spec} k \hookrightarrow \operatorname{Spec} D$ is isomorphic to $X$. Note that the exact sequence $0\rightarrow k \rightarrow k[t]/t^2 \rightarrow k \rightarrow 0$ of $k[t]/t^2$-modules is the same as an exact sequence $0\rightarrow \tilde{k} \rightarrow \widetilde{k[t]/t^2} \rightarrow \tilde{k} \rightarrow 0$ of quasi-coherent modules on $\operatorname{Spec} k[t]/t^2$. Then what is meant by tensoring this with $\mathcal{F}'$ is to apply $f^*$ to this exact sequence and then tensor by $\mathcal{F}'$. You can think of $\mathcal{F}' \otimes k$ as restricting the coherent sheaf $\mathcal{F}'$ to the closed subscheme $X \hookrightarrow X'$.
Proving that $\lim_{n \to \infty} (1+ \frac{i}{n})^n = e^i$
Without too many details: $|(1+i/n)^n| = (1 + \frac{1}{n^2})^{n/2}$ $\arg(1 + i/n)^n = n\tan^{-1} 1/n$ as $n \to \infty$, the modulus goes to $1$ and the argument goes to $1$. This is exactly $e^i$.
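A numeric illustration (mine, not part of the argument) shows the convergence directly:

```python
import cmath

# (1 + i/n)^n approaches e^i as n grows; the error shrinks roughly like 1/(2n).
for n in (10, 1000, 100000):
    approx = (1 + 1j / n) ** n
    print(n, abs(approx - cmath.exp(1j)))
```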
Infinitely nested radicals
...canonical way to transform an infinitely nested radical into a series? There is no such way in general. In fact, general non-periodic nested radicals allow no alternative ways to express them (except for trivial identity transformations, which still lead to a nested radical expression). If we are talking about periodic nested radicals, connected to polynomial equations, such as in the OP, it might indeed be possible to express their limit as a series (if they converge). But as far as I know, there is no better way to do that than to consider the original algebraic equation. The theory of nested radicals still has a long way to grow. There is not much known about them, except for the convergence theorem. I suggest reading this article, which gives a very good review of the topic of nested radicals and other such expressions. Also, there are two papers by Dixon Jones investigating a more general case of continued powers (which also incorporates infinite series and products, continued fractions, etc.): his papers can be found here: first and second. The Mathworld entry here provides other references and examples.
Maximum length sub-sequence of probability reciprocals
I found an answer that very closely addresses the problem. Based on that, it is easy to see that observation 2 actually gives the maximum for $R$.
Eigenvalues and power of a operator
Let $\mu_1,\dots,\mu_k$ be the $k$-th roots of $\lambda$. Clearly $\mu_1,\dots,\mu_k$ are the solutions of the polynomial equation $x^k-\lambda=0$. In particular we have $x^k-\lambda = \prod_{l=1}^k (x-\mu_l)$. We conclude that $A^k-\lambda\cdot I = \prod_{l=1}^{k} (A-\mu_l\cdot I)$. Can you finish now? (If $\ker(S)=\ker(T)=\{0\}$, what can you say about $\ker(ST)$?)
Applying monotone convergence theorem (Beppo-Levi) to a sequence of indicator functions
You cannot use the monotone convergence theorem, since the monotonicity of the indicator functions is equivalent to the assumption $$ A_1 \subseteq A_2 \subseteq \cdots $$ On the other hand, if $P$ is a probability measure (or, in general, if $P(X) < \infty$), then you can use the dominated convergence theorem, since the constant function $1 \in L^1(P)$.
Probability that at least 2 people chose the same number
For $k\ge n$, it is just $1$ minus the probability that everyone chooses different numbers: $$1-\displaystyle \frac{k(k-1)\cdots(k-n+1)}{k^n}$$
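This formula can be wrapped in a short function (my own sketch); plugging in the classic birthday-problem numbers gives the familiar answer:

```python
from math import prod

# Probability that at least two of n people collide when each picks
# uniformly from k numbers (k >= n).
def collision_probability(n, k):
    return 1 - prod(k - i for i in range(n)) / k ** n

# Birthday problem: 23 people, 365 days.
p = collision_probability(23, 365)
print(round(p, 4))  # ~0.5073
```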
Find the area of the triangle AHC
The areas of triangles $AHC$ and $ABC$ are in the ratio $HD/BD$, so I think the easiest way to find the required area is that of computing $HD/BD$. From $K$ draw a line parallel to $AC$, intersecting $BD$ and $BC$ at $G$ and $F$ respectively (see picture below). Triangles $KFI$ and $ECI$ are similar, which entails $FI={1\over3}KF={1\over3}BK$ and $$ BF=BI-FI={2\over3}BC-{1\over3}BK. $$ By the similarity of $ABC$ and $KBF$ we have $BG=(BF/BC)BD$. Inserting here the above equality yields: $$ BG=\left({2\over3}-{1\over3}{BK\over BC}\right)BD. $$ By the similarity of $KGH$ and $CDH$ we also have $$GH={HK\over HC}HD={BK\over BC}HD,$$ where we have used that ${HK/HC}={BK/BC}$ by the theorem of triangle bisector. In the end we thus get: $$ BD-HD=BH=BG+GH=\left({2\over3}-{1\over3}{BK\over BC}\right)BD+ {BK\over BC}HD, $$ whence $HD={1\over3}BD$.
What is known about the functional square root of the Riemann Zeta function?
One common method is to develop a series expansion about the fixed point, that is, around where $s_\star=\zeta(s_\star)$, which occurs at $s_\star\simeq1.8338$. Now suppose that we have $s_\star=f(s_\star)$. This then lets us derive $$\zeta'(s_\star)=f'(f(s_\star))f'(s_\star)=[f'(s_\star)]^2\\\implies f'(s_\star)=\pm\sqrt{\zeta'(s_\star)}$$ $$\zeta''(s_\star)=f''(f(s_\star))[f'(s_\star)]^2+f'(f(s_\star))f''(s_\star)=\left([f'(s_\star)]^2+f'(s_\star)\right)f''(s_\star)\\\implies f''(s_\star)=\frac{\zeta''(s_\star)}{\zeta'(s_\star)+f'(s_\star)}=\frac{\zeta''(s_\star)}{\zeta'(s_\star)\pm\sqrt{\zeta'(s_\star)}}$$ and so on. Since $\zeta'(s_\star)\simeq-1.374$ is negative, this gives us a non-real functional square root. This is somewhat to be expected, because $\zeta(s)$ behaves similarly to $s^{-1}$, which has a simple functional square root of $s^{\pm i}$. Another simple approach is to look at rates of convergence to fixed points. Since $\zeta$ is invertible on $(1,\infty)$, we may consider how fast $\zeta^{-n}(s)$ converges to $s_\star$. In particular, we have $$q=\lim_{n\to\infty}\frac{\zeta^{-(n+1)}(s)-s_\star}{\zeta^{-n}(s)-s_\star}=\frac1{\zeta'(s_\star)}$$ From this, we may attempt to have $$q^{-1/2}=\lim_{n\to\infty}\frac{\zeta^{-(n-\frac12)}(s)-s_\star}{\zeta^{-n}(s)-s_\star}=\pm\sqrt{\zeta'(s_\star)}$$ and define $$f(s)=\lim_{n\to\infty}\zeta^n\left(s_\star+(\zeta^{-n}(s)-s_\star)q^{-1/2}\right)$$
How many injective functions are there from $A$ to $B$ with $|A|=3$, $|B|=4$, and how many surjective?
Hint: $4>3$. If $X\to Y$ is surjective, then clearly $|Y|\le|X|$.
How does $E(|X|)=\int_0^{\infty}P[|X|\ge x]dx$?
This is, at its heart, a consequence of Tonelli's Theorem, which is a lot like Fubini's Theorem. For any non-negative random variable $Y$ with finite expectation, you can write $$ \begin{align*} \int_0^{\infty}P(Y\geq y)\,d\mu(y)&amp;=\int_0^{\infty}\int_{\Omega}1_{\{Y(\omega)\geq y\}}\,dP(\omega)\,d\mu(y)\\ &amp;=\int_{\Omega}\int_0^{\infty}1_{\{Y(\omega)\geq y\}}\,d\mu(y)\,dP(\omega)\\ &amp;=\int_{\Omega}Y(\omega)\,dP(\omega)\\ &amp;=\mathbb{E}[Y], \end{align*} $$ where $(\Omega,\mathcal{F},P)$ is our probability space and $\mu$ is Lebesgue measure on $\mathbb{R}$. (Note that we can definitely apply Tonelli's Theorem here, as $P$ and $\mu$ are both $\sigma$-finite and $1_{\{Y(\omega)\geq y\}}$ is a non-negative function.)
Solving Recurrence with Generating Function
One way to proceed is by the characteristic polynomial to yield a general solution: $r_n=4r_{n-1}+6r_{n-2}$ is analogous to solving $r^n=4r^{n-1}+6r^{n-2}$, or better yet $r^2=4r+6$, etc. But explicitly for your purposes: $r_0=1$ and $r_1=3$. So we take the linear recurrence and consider the formal power series: $$r(x)=\sum_{n=0}^{\infty} r_nx^n=1+\sum_{n=1}^{\infty}r_nx^n=1+3x+\sum_{n=2}^{\infty}r_nx^n$$ But we have a particular recurrence to satisfy! So: $$1+3x+\sum_{n=2}^{\infty}r_nx^n=1+3x+\sum_{n=2}^{\infty}(4r_{n-1}+6r_{n-2})x^n=1+3x+\sum_{n=2}^{\infty}4r_{n-1}x^n+\sum_{n=2}^{\infty}6r_{n-2}x^n$$ However, we have that $\sum_{n=2}^{\infty}4r_{n-1}x^n=4x\sum_{n=2}^{\infty}r_{n-1}x^{n-1}=4x\,(r(x)-r_0)=4x\,(r(x)-1)$. Can you express the following in terms of $r(x)$? $$\sum_{n=2}^{\infty}6r_{n-2}x^n$$ Once you have this, it will require only elementary algebra to solve for $r(x)$.
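As a sanity check (my own addition, which does give away the punchline): carrying the algebra through forces $r(x)\,(1-4x-6x^2)=1-x$. The sketch below generates coefficients from the recurrence and verifies this identity on a truncated series:

```python
# Coefficients r_n from the recurrence r_n = 4 r_{n-1} + 6 r_{n-2}.
N = 12
r = [1, 3]
for n in range(2, N):
    r.append(4 * r[n - 1] + 6 * r[n - 2])

# Multiply the truncated series by the polynomial 1 - 4x - 6x^2.
poly = [1, -4, -6]
product = [0] * N
for i, ri in enumerate(r):
    for j, pj in enumerate(poly):
        if i + j < N:
            product[i + j] += ri * pj

# All coefficients beyond x^1 vanish, so r(x) = (1 - x)/(1 - 4x - 6x^2).
assert product == [1, -1] + [0] * (N - 2)
print("r(x)(1 - 4x - 6x^2) = 1 - x")
```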
Finding the rank and nullity of $T$ with $T(f)(x) = \int_a^b f(t) \sin(x-t) \,\mathrm{d}t$
Your range, while technically correct, is probably not the answer they're looking for. Your null space is wrong. Hint: $T(f)$ is a function of $x$; which functions can we get? Hint: The domain of $T$ is the space of all continuous real functions on $[a,b]$, and the null space is a subset of this domain. Which functions $f$ satisfy $T(f) = 0$? You shouldn't be using the rank-nullity theorem. Instead, once you have a suitable description of the null space and range, it should be easy to compute the dimension of the null space (the nullity) and the dimension of the range (the rank).
How many vectors can be in a spanning set?
Did you mean that you have a vector space, and that for a subspace of dimension $n$, any spanning set of that subspace has at least $n$ vectors in it? This is true, and yes, there can easily be more than $n$ vectors in the spanning set. For instance, the set $\{(1,0),(0,1)\}$ spans $\mathbb{R}^2$, but the set $\{(1,0),(0,1),(1,1)\}$ also spans $\mathbb{R}^2$.
relation between measurable function and continuous functions
You cannot decide if a function is continuous or measurable just by a rule like $\sin x$ or $\cos x$. Besides the fact that you need to specify a domain and range for the functions, even that isn't enough specificity to talk about continuity and measurability. To talk about continuity, the domain and range would have to be made into topological spaces by specifying the 'open' sets in each. To talk about measurability, the domain and range would have to be made into measurable spaces by specifying the 'measurable' sets in each. In principle, you could construct these to be independent of each other, in which case a function could be continuous and non-measurable or measurable and discontinuous. In reality, we would usually specify a topology on the domain and would then make sure that the open sets are all measurable (the smallest such family of measurable sets being called the Borel $\sigma-$algebra of the topological space). This practice ensures that all continuous functions are measurable, which is probably the answer you're looking for.
Proof of triangular inequality for $|z_1-z_2|$
We can write $$|z_1-z_2|^2=(z_1-z_2)\overline{(z_1-z_2)}=(z_1-z_2)(\overline{z_1}-\overline{z_2})$$$$=z_1\overline{z_1}-z_1\overline{z_2}-z_2\overline{z_1}+z_2\overline{z_2}$$ $$ = |z_1|^2-2\operatorname{Re}(z_1\overline{z_2})+|z_2|^2$$ $$\geq |z_1|^2-2|z_1\overline{z_2}|+|z_2|^2$$ $$=|z_1|^2-2|z_1||z_2|+|z_2|^2$$ (using $\operatorname{Re}(w)\le|w|$, hence $-2\operatorname{Re}(w)\ge-2|w|$). Therefore, $$|z_1-z_2|^2\geq(|z_1|-|z_2|)^2$$ and the (reverse) triangle inequality $|z_1-z_2|\geq\big||z_1|-|z_2|\big|$ follows.
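A quick numeric sanity check of the inequality direction (my own addition):

```python
import random

# Sample random complex pairs and check |z1 - z2| >= | |z1| - |z2| |.
random.seed(0)
for _ in range(1000):
    z1 = complex(random.uniform(-5, 5), random.uniform(-5, 5))
    z2 = complex(random.uniform(-5, 5), random.uniform(-5, 5))
    assert abs(z1 - z2) >= abs(abs(z1) - abs(z2)) - 1e-12
print("reverse triangle inequality holds on all samples")
```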
What is the moment generating function given a density of a continuous random variable?
The moment generating function is $$M_X(t) = E[e^{tX}] = \int_1^\infty e^{tx} e^{-(x-1)}\ dx$$ Hint: in order for this improper integral to be defined, you want to check what happens as $x \to \infty$.
Show that the following function is Lebesgue integrable.
Near $x=0$, the function may be extended by continuity, since $$ \frac{x}{e^x-1} \sim 1 $$ On the other hand we have, as $x \to +\infty$, $$ \frac{x^3}{e^x-1} \to 0, $$ so there exists an $M>0$ such that $$ \left|\frac{x^3}{e^x-1} \right|<1, \quad x>M, $$ or equivalently $$ \left|\frac{x}{e^x-1} \right|<\frac1{x^2}, \quad x>M, $$ giving the convergence near $+\infty$, and your initial integral is convergent.
Given two angle and a segment, can you find $h$?
It is not possible to determine $h$ using only $\alpha, \beta, x$. For example, if we take $\alpha = \alpha_1, \beta= \beta_1$ (i.e. making a right triangle), we get $$h = \frac{x \tan \alpha \tan \beta}{\tan \beta - \tan \alpha}$$ On the other hand, if we take $\frac{\alpha}{2} = \alpha_1 = \alpha_2, \frac{\beta}{2} = \beta_1 = \beta_2$ (i.e. making an isosceles triangle), then we can cut it in two to get right triangles, so we get $$h = 2\frac{x \tan(\frac{\alpha}{2})\tan(\frac{\beta}{2})}{\tan(\frac{\beta}{2}) - \tan(\frac{\alpha}{2})}$$ Trying a few simple values for $x, \alpha, \beta$ (I used $1, 30^\circ, 45^\circ$) shows that $h$ takes different values.
Uniform convergence in an open interval of a power series
Theorem: If a sequence $f_k$ of continuous functions converges uniformly on $]a,\infty[$, and all functions are defined and continuous on $[a,\infty[$, then the sequence converges uniformly on $[a,\infty[$. For by continuity, we have $$\sup_{x \in ]a,\infty[} \lvert f_k(x) - f_m(x)\rvert = \sup_{x\in [a,\infty[} \lvert f_k(x) - f_m(x)\rvert.$$ Apply the theorem to the sequence of partial sums, and deduce that if the convergence were uniform on $]e,\infty[$, then the series would in particular also converge for $x = e$.
Given $I_n = \int_0^1 \frac{x^n}{\sqrt{x^3+1}} dx$, show $(2n-1)I_n+2(n-2)I_{n-3}=2\sqrt{2}$
HINT: Integrate by parts $u = x^{n-2}$ and $dv = \frac{x^2}{\sqrt{x^3 + 1}} \ dx$.
Find the Vectorial Equation of the intersection between surfaces $f(x,y) = x^2 + y^2$ and $g(x,y) = xy + 10$
We look for the intersection of the two surfaces (which are a paraboloid and a hyperbolic paraboloid): $$\tag{0}\cases{z=x^2+y^2 & (a)\\z=xy+10 & (b)}$$ Let $(r,\theta)$ be the polar coordinates of $(x,y)$; i.e., $$\tag{1}x=r \cos(\theta), \ \ y=r \sin(\theta).$$ Plugging these expressions into (0)(a) gives $$\tag{2}r=\sqrt{z}.$$ Using $(0)(b)$, $(1)$ and $(2)$: $z=\sqrt{z} \cos(\theta) \sqrt{z}\sin(\theta)+10$, yielding: $$z(1-\tfrac12 \sin(2 \theta))=10 \ \ \iff z=f(\theta) \ \ \text{with} \ \ f(\theta):=\dfrac{20}{2-\sin(2 \theta)}$$ Plugging this expression for $z$ into $(1)$ gives the final description of the intersection curve as a vector function of one variable, the polar angle: $$\cases{x=\sqrt{f(\theta)}\cos(\theta)\\y=\sqrt{f(\theta)}\sin(\theta)\\z=f(\theta)}$$ (valid for any value of $\theta$, because $f(\theta)>0$).
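One can verify numerically (an illustrative check of mine) that the parametrized curve lies on both surfaces:

```python
from math import sin, cos, sqrt, pi

# z = f(theta) along the intersection curve.
def f(theta):
    return 20 / (2 - sin(2 * theta))

# Sample a few angles and confirm both surface equations hold.
for theta in [0, 0.5, 1.7, pi, 5.0]:
    z = f(theta)
    x = sqrt(z) * cos(theta)
    y = sqrt(z) * sin(theta)
    assert abs(z - (x * x + y * y)) < 1e-9   # z = x^2 + y^2
    assert abs(z - (x * y + 10)) < 1e-9      # z = x*y + 10
print("curve satisfies both surface equations")
```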
Values of Constants for an Exponentially Decaying General Solution
The general solution to your differential equation is $$y= C_1e^{\lambda_1x}+ C_2e^{\lambda_2x}$$ where $\lambda _1$ and $\lambda_2$ are roots of $$\lambda ^2 +b\lambda +c =0.$$ Note that the sum of your eigenvalues is $\lambda _1+ \lambda _2=-b$ and the product is $\lambda _1\lambda _2=c$. For both eigenvalues to be negative you need $b>0$ and $c>0$, that is, case (a).
Computing the kernel of a linear operator defined on a space of polynomials
The general solution of the differential equation $$3p'''+2p''=0$$ is $$p=Ae^{-2t/3}+Bt+C\ .$$ This is a polynomial (in particular, one of degree at most $4$) if and only if $A=0$. So the kernel is $$\{Bt+C\mid B,C\in{\Bbb F}\}\ .$$ Alternative solution. Take a general polynomial $$p=at^4+bt^3+ct^2+dt+e\ ,$$ substitute into the differential equation, and see what this tells you about $a,b,c,d,e$.
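The alternative solution can be carried out mechanically. Here is a small plain-Python check (polynomials as coefficient lists, lowest degree first; the helper names are mine) that the operator $p\mapsto 3p'''+2p''$ kills exactly the monomials $1$ and $t$ among $1,t,\dots,t^4$:

```python
def deriv(p):
    # Derivative of a polynomial given as [c0, c1, c2, ...] (lowest degree first).
    return [i * c for i, c in enumerate(p)][1:]

def T(p):
    # The operator p -> 3p''' + 2p''.
    p2 = deriv(deriv(p))
    p3 = deriv(p2)
    n = max(len(p2), len(p3))
    p2 = p2 + [0] * (n - len(p2))
    p3 = p3 + [0] * (n - len(p3))
    return [3 * a + 2 * b for a, b in zip(p3, p2)]

# Apply to the basis monomials 1, t, t^2, t^3, t^4 of the domain.
# t^0 and t^1 map to the zero polynomial; t^2, t^3, t^4 do not.
for k in range(5):
    mono = [0] * k + [1]
    print("t^%d -> %s" % (k, T(mono)))
```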
Analytic map zero on open subset set is zero
Given that you already know 1., I think you are on the right track: indeed an open subset of $\mathbb C$ is connected if and only if it is path-connected. Now to show $f\equiv 0$, fix some $p\in S$ and let $q\in U$ be an arbitrary point. Since $U$ is path-connected there exists a continuous curve $\gamma:[0,1]\to U$ with $\gamma(0)=p$ and $\gamma(1)=q$. The set $$\{t\in[0,1]:f(\gamma(t))=0\}$$ is closed and hence contains a maximal element $s$. Then $\gamma(s)$ is an accumulation point of the set of all points where $f$ equals $0$, so by 1. $f=0$ in a neighbourhood of $\gamma(s)$, which implies $f\circ\gamma=0$ in a neighbourhood of $s$. Since $s$ was maximal, $s=1$, so $f(q)=f(\gamma(s))=0$.
How to prove that two groups $G$ and $H$ are isomorphic?
Since$$\left(a+b\sqrt2\right)\left(c+d\sqrt2\right)=\color{red}{ac+2bd}+(\color{blue}{ad+bc})\sqrt2$$and since$$\begin{bmatrix}a&2b\\b&a\end{bmatrix}.\begin{bmatrix}c&2d\\d&c\end{bmatrix}=\begin{bmatrix}\color{red}{ac+2bd}&2(\color{blue}{ad+bc})\\\color{blue}{ad+bc}&\color{red}{ac+2bd}\end{bmatrix},$$simply take$$\psi\left(a+b\sqrt2\right)=\begin{bmatrix}a&2b\\b&a\end{bmatrix}.$$
How can I prove that $g(\zeta)\in\mathbb R\implies g(\zeta)=h(\zeta+\bar\zeta)$
The claim seems false as currently stated. Consider for example $g(X)=X^2, \zeta=i\sqrt[4]{7}$. Then $g(\zeta)=-\sqrt{7}\in{\mathbb R}$, but $g(\zeta)$ can never equal $h(\zeta+\bar{\zeta})$ for any $h\in{\mathbb Q}[X]$ (because $g(\zeta)$ is irrational and $h(\zeta+\bar{\zeta})$ is rational, indeed $\zeta+\bar{\zeta}=0$).
Proving pairwise independence is equivalent to uncorrelated
Hint: This holds when $X_1, \dots, X_n$ are jointly Gaussian (in general, uncorrelated does not imply independent). Write the joint probability density function for $X_1, \dots, X_n$ given that their covariance matrix is diagonal (they are pairwise uncorrelated). Then try to write the joint density as a product of the probability density function of $X_1$, that of $X_2$, and so on. If you can factor the joint probability density function, then they are independent.
If $A^2=\mathbb{I} (2\times 2$ identity) then $\mathbb{I} + A$ is invertible only if $A=\mathbb{I}$
Hint: If $A^2=I$, then $$A(I+A)=A+A^2=A+I=I+A.$$ Now multiply both sides on the right by $(I+A)^{-1}$.
Show that if $R$ is reflexive, then $S$ is a subset of $R$ composition $S$
By definition, showing that $S \subseteq R \circ S$ amounts to showing that if $(x, y) \in S$ then $(x,y) \in R \circ S$. How can we show it, under the hypothesis that $R$ is reflexive? Let $(x,y ) \in S$. As $R$ is reflexive, $(x,x) \in R$. By definition of composition, from $(x,x) \in R$ and $(x,y ) \in S$ it follows that $(x,y) \in R \circ S$. Since we are talking about a generic $(x,y ) \in S$, we have shown that $S \subseteq R \circ S$. The crucial point in the proof above is the definition of composition for relations: $(x,y) \in R \circ S$ means that there exists $z \in X$ such that $(x,z) \in R$ and $(z,y) \in S$. In the case above, since $R$ is reflexive, it is easy to find the "intermediate term" $z$ for the composition: just take $x$.
Convergence of sequence of a function
Some people take this as the definition of a limit of a function: $$f(x) \to L \text{ as } x \to l,$$ if and only if $$f(x_n) \to L \text{ as } n \to \infty$$ for all sequences $x_n$ such that $x_n \to l$ with $x_n \neq l$ for all $n$ (see e.g. my book Proof Patterns). However, it has to be ALL sequences. If it is just one sequence it doesn't work. E.g. take $f(x) =1$ for $x > 0$ and $f(x)=-1$ for $x \leq 0.$ Then a positive sequence of $x_n$ going to zero gives $1$ and a negative one gives $-1.$
Once Continuously Differentiable?
Once continuously differentiable is indeed equivalent to continuously differentiable, but it emphasizes the point that the function may not be more than once continuously differentiable. For example: $$x\mapsto \cases{0 & if $x = 0$\\x^3\sin\left(\frac{1}{x}\right) & otherwise} $$ is exactly once continuously differentiable.
Is $x=0$, in $x^2(x^2 + 4) = 0$, unique root or repeated?
The multiplicity of the solution $x=0$ is two: since the solutions $x=-2i$ and $x=2i$ each have multiplicity $1$ and the polynomial has degree $4$, that leaves $x=0$ with multiplicity $2$ (by the fundamental theorem of algebra). You can see that the multiplicity is even in a graph because the graph "bounces back" at the point $x=0$; in other words, it doesn't cross the $x$-axis there.
Evaluate $\int_{0}^{\infty} \frac{\sin x-x\cos x}{x^2+\sin^2x } dx$
$$ I=\int_{0}^{\infty} \frac{\sin x-x \cos x}{x^2+\sin^2 x} dx= - \int_{0}^{\infty}\frac{\frac {x\cos x -\sin x}{x^2}}{1+(\frac{\sin x}{x})^2}dx= -\int_{1}^{0} \frac{dt}{1+t^2}=\frac{\pi}{4}.$$
If $N$ is the least normal subgroup of $A*B$ containing $A$, then $(A*B)/N \cong B$.
Your kernel contains $A$, and therefore must contain $N$. Hence there is a surjective map $A*B/N \to B$. Moreover, $BN = A*B$ (as the lhs is a subgroup and contains $A$ and $B$), and by the second isomorphism theorem, $A*B/N = B/(N\cap B)$. Composing with our first map, we obtain a surjective map $B/(N\cap B) \to B$ which factors through $A*B/N$. But looking at the definitions of our maps, this map is just $b \mapsto b$, whence $N\cap B\cong \langle e\rangle$ and everything is an isomorphism. In particular $A*B/N \cong B$
Proving a bijection with the use of residue classes
As you figured out when prompted in comments, $f([0])=[0]$, $f([1])=[2]$ , and $f([2])=[1]$ in $\mathbb Z/3\mathbb Z$, so $f$ is a bijection for $\mathbb Z/3\mathbb Z$. However, in $\mathbb Z/6\mathbb Z$, $f([0])=[0]$ and $f([3])=[0]$, so $f$ is not a bijection for $\mathbb Z/6\mathbb Z$. (Multiplication by $2$ is not invertible with even moduli such as $6$.)
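Assuming, as the listed values indicate, that $f$ is the doubling map $[x]\mapsto[2x]$, both cases can be confirmed by brute force:

```python
def image(m):
    # Image of the doubling map x -> 2x mod m on Z/mZ.
    return {(2 * x) % m for x in range(m)}

print(sorted(image(3)))  # [0, 1, 2]: doubling is a bijection mod 3
print(sorted(image(6)))  # [0, 2, 4]: doubling is not surjective mod 6
```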
Seifert-van Kampen understanding
Can I suggest you view your topological space in a slightly different way? Take a unit circle, $$S^1 = \{ w \in \mathbb C : |w| = 1 \}.$$ Also take a closed unit disk, $$D^2 = \{ z \in \mathbb C : |z| \leq 1 \}.$$ Glue the boundary circle $\partial D^2$ of the disk $D^2$ to the circle $S^1$ by identifying each $z \in \partial D^2$ with the point $w = z^4 \in S^1$. Thus $$ X = (S^1 \sqcup D^2)/\sim$$ where $$w \in S^1 \ \sim \ z \in D^2 \ \ \iff \ \ |z| = 1 { \rm \ and \ } w = z^4.$$ Can you convince yourself that this is the same space as what you described? First, I recommend that we try to guess the fundamental group of this space using only our visual intuition! The idea is very simple: If we have a loop that wraps once, twice or three times around the $S^1$, then the loop is non-trivial. But if we have a loop that wraps four times around the $S^1$, then the loop can "slip off" into the interior of the disk $D^2$, and contract. So the fundamental group should be $\mathbb Z_4$. Now, let's prove this intuition is correct using Van Kampen. Take $U$ to be an open neighbourhood of the $S^1$, $$ U = \{ z \in \mathbb C : 1-\epsilon_1 < |z| \leq 1 \}.$$ Take $V$ to be the interior of the $D^2$, $$ V = \{ z \in \mathbb C : |z| < 1-\epsilon_2 \},$$ where $\epsilon_2 < \epsilon_1$. So what does the overlap region $U \cap V$ look like? In $z$ coordinates, we can write this overlap region as $$ U \cap V = \{ z \in \mathbb C : 1-\epsilon_1 < |z| < 1-\epsilon_2 \}.$$ This is an annulus. It shouldn't be hard to see that the fundamental group of $U \cap V$ is generated by the loop $\gamma_{U \cap V}$, defined by $$ z = r e^{2\pi i t}, \ \ \ \ \ t \in [0,1],$$ where $r$ is any radius between $1-\epsilon_1$ and $ 1-\epsilon_2$. Now let us think about $\gamma_{U \cap V}$ as a loop in $U$ (rather than only as a loop in $U \cap V$). By expanding it to radius $1$, you can see that $\gamma_{U \cap V}$ is homotopic to the loop $$ z = e^{2\pi i t}, \ \ \ \ \ t \in [0,1]. 
$$ Since we identify $w \in S^1$ with $z \in \partial D^2$ when $w = z^4$, this loop can also be written as $$ w = e^{8\pi i t}, \ \ \ \ \ t \in [0,1]. $$ Since $U$ deformation retracts onto the circle $S^1$, the fundamental group of $U$ is generated by the loop $\gamma_U$, defined by $$ w = e^{2\pi i t}, \ \ \ \ \ t \in [0,1].$$ Thus, $\gamma_{U \cap V} \sim 4 \gamma_{U}$ in $\pi_1(U)$. Meanwhile $V$ is an open disk, which has trivial fundamental group. So $\gamma_{U \cap V} \sim 0$ in $\pi_1(V)$. Finally, Van Kampen tells you that $\pi_1(X)$ is generated by $\gamma_U$, except that the element $4\gamma_U$ should be identified with the element $0$. This group is precisely $\mathbb Z_4$.
Union of continuous function on a topological space
Write $X'=A\cup B$. Then from $\overline A\cap B=\emptyset=A\cap \overline B$ it follows that both $$A=X'\backslash \overline B=X'\backslash(\overline B\cap X')\text{ and }B=X'\backslash \overline A=X'\backslash(\overline A\cap X')$$ are open in $X'$, since $\overline A\cap X'\subseteq_{\text{closed}}X'$ and $\overline B\cap X'\subseteq_{\text{closed}}X'$. Now, for any open subset $V$ of $Y$ we have $$h^{-1}(V)=f^{-1}(V)\cup g^{-1}(V).$$ Since $f,g$ are continuous, we have $f^{-1}(V)\subseteq_{\text{open}}A\subseteq_{\text{open}}X'$ and $g^{-1}(V)\subseteq_{\text{open}}B\subseteq_{\text{open}}X'$, so $h^{-1}(V)$ is open in $X'$.
Iterated Elimination of Weakly Dominated Strategies with Unknown Parameters
Check the definition of what it means for a strategy to be weakly dominated by another strategy. Say the strategies for Player 1 and 2 are $A=\{a_1,a_2,a_3\}$ and $B=\{b_1,b_2\}$ respectively. You'll find that $b_1$ weakly dominates $b_2$, so we may eliminate $b_2$. Then $a_1$ strictly (and therefore weakly) dominates $a_2$ and $a_3$, so we may eliminate those two. Then $(a_1,b_1)$ is the only remaining strategy, which we obtain without knowledge of $\delta$.
How many ways are there to choose 5 ice cream cones if there are 10 flavors?
The combination $_{10}C_5$ counts the ways to choose $5$ distinct flavors out of the $10$. But here we may instead have five ice cream cones all of the same flavor, or any number of cones sharing a flavor. Assuming the ice cream cones are not distinct (it doesn't matter whether cone $1$ gets vanilla or cone $3$ gets vanilla), the problem is equivalent to finding the number of integer solutions to $$x_1+x_2+\cdots+x_{10}=5$$ where each $x_i\geq 0$ is the number of cones dedicated to flavor $i$. Since $$(x_1+1)+(x_2+1)+\cdots+(x_{10}+1)=5+10,$$ we can let $x_i+1=y_i$ and we have $$y_1+y_2+\cdots+y_{10}=15$$ where each $y_i\geq1$. The number of positive solutions is the number of ways we can draw $10-1=9$ lines, from the $14$ possible gaps between adjacent stars, to divide the $15$ elements into $10$ groups, all of which are nonempty and hence $\geq 1$. For example, $$\star|\star\star|\star|\star\star\star|\star\star\star|\star|\star|\star|\star|\star$$ The number of ways to do this is $\binom{14}{9}$. The number of nonnegative solutions is thus $$\binom{10+5-1}{9}=\binom{14}{9}=\binom{14}{5}$$
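The stars-and-bars count agrees with a direct enumeration of multisets (a quick check, not part of the argument):

```python
import math
from itertools import combinations_with_replacement

# Multisets of size 5 drawn from 10 flavors, enumerated directly:
count = sum(1 for _ in combinations_with_replacement(range(10), 5))
print(count, math.comb(14, 5))  # both 2002
```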
Two tangent closed discs connected
Connectedness can be a bit of an abstruse concept to work with. It's often easier to work with the stronger concept of path-connectedness (a space is path-connected if any two points can be joined by a continuous path in the space). Not every connected space is path-connected, but for those that are, this is generally the easiest way to prove connectedness. In this case, for example, it's almost trivial to see that $X$ is path-connected: two points in the same disc can be joined by a straight-line path, while two points in different discs can be joined by a composite path formed of two line segments meeting at the tangency point.
Find a basis for the set of invertible $3 \times 3$ matrices over an arbitrary field $\mathbb{F}$
The set of invertible matrices is not a vector space over the field. The easiest way to see this is that it is not even closed under addition: If $A$ is invertible, then so is $-A$, and $A+(-A)=0$ is not invertible.
Give all odd convex functions
Here are a few questions which hopefully lead in the right direction: What is the value of $f(0)$? Suppose that, for some $x\in I$, $f(x)\neq 0$. Does this imply something about $f(-x)$? (note that $I$ is symmetric in all cases, so there are no issues concerning $f(-x)$ being ill-defined). What does the convexity inequality tell us about the triple of points $-x$, $0$, and $x$? Extra hint: $0\in (-x,x)$. Hope this helps :-)
Maclaurin series for ln
The Taylor series for $\log\sqrt{\frac{1+x}{1-x}}$ at $0$ is $\sum_{n=0}^\infty\frac{x^{2n+1}}{2n+1}$, because$$\begin{align}\log\sqrt{\frac{1+x}{1-x}}&=\frac12\log\left(\frac{1+x}{1-x}\right)\\&=\frac12\left(\log(1+x)-\log(1-x)\right)\\&=\frac12\left(x-\frac{x^2}2+\frac{x^3}3-\frac{x^4}4+\cdots-\left(-x-\frac{x^2}2-\frac{x^3}3-\frac{x^4}4-\cdots\right)\right)\\&=x+\frac{x^3}3+\frac{x^5}5+\cdots\end{align}$$And the Taylor series for $\arctan(x)$ at $0$ is quite similar: $\sum_{n=0}^\infty(-1)^n\frac{x^{2n+1}}{2n+1}$. That's because$$\arctan'(x)=\frac1{1+x^2}=1-x^2+x^4-x^6+\cdots$$
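A quick numerical check of both series at a sample point $x=0.1$, using $20$-term partial sums:

```python
import math

x = 0.1
exact_log = math.log(math.sqrt((1 + x) / (1 - x)))
exact_atan = math.atan(x)

series_log = sum(x**(2*n + 1) / (2*n + 1) for n in range(20))
series_atan = sum((-1)**n * x**(2*n + 1) / (2*n + 1) for n in range(20))

print(exact_log, series_log)    # agree to machine precision
print(exact_atan, series_atan)  # agree to machine precision
```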
Is the twist of the direct sum of simple objects the direct sum of the twists?
$\DeclareMathOperator{\id}{id} \DeclareMathOperator{\tr}{tr}$Well, I once again forgot to make sure that I was aware of all the things I'm working with. This time, I forgot that the twist is a natural isomorphism $$ \theta: 1_{\mathcal C} \Rightarrow 1_{\mathcal C}.$$ Then $$ \theta_{V_k\oplus V_k} \circ i_j = i_j \circ \theta_{V_k}$$ is just the naturality of $\theta$. With this we conclude \begin{align*} \pi^i\circ \theta_{V_k\oplus V_k} \circ i_j &amp;= \pi^i\circ i_j \circ \theta_{V_k} = \delta^{i}_j\ \theta_{V_k}, \end{align*} so indeed $\theta_{V_k\oplus V_k} = \theta_k\cdot\id_{V_k\oplus V_k}$. Then \begin{align*} \tr \theta_{V_i\otimes V_j} &amp;= \tr\theta_{V_k\oplus V_k} \\ &amp;= \theta_k\tr\id_{V_k\oplus V_k} \\ &amp;= 2 \theta_k d_k\\ &amp;= \sum_k N^k_{ij}\ \theta_k\ d_k\ . \end{align*}
Proofs for $0^0 =1$?
Perhaps the strongest reason why some people insist that $\;0^0=1\;$ is that $$\text{For}\;\;0<x\in\Bbb R\implies x^x=e^{x\log x}\xrightarrow [x\to 0^+]{}e^0=1$$ Yet $\;x^x\;$ is undefined for lots of negative values in any neighborhood of zero...
inequality proof of $x^{y-1} \ge xy$
$$x^{y-2} \geq 3^{y-2} \geq y,\ \forall x,y \geq 3$$ where the last inequality can be proven by induction in a very simple way for integers $y \geq 3$. For real $y \geq 3$ you can use the function $f(y)=3^{y-2}-y$ and show that it is increasing on $[3,\infty)$.
Word for cyclic, non-periodic function
Like the 11-year sunspot solar cycle? Maybe stochastic: with a random probability distribution or pattern that may be analyzed statistically but may not be predicted precisely. But it would be understood that you are in this situation if you use the word cyclic.
Why doesn't $\frac{d(e^x)}{dx}=\frac{0}{0}$?
The derivative of $f(x)$ is defined as $$\lim_{h\to0}\frac{f(x+h)-f(x)}{h}.$$You can rewrite a limit of a ratio as a ratio of limits, if the denominator limit is nonzero, but of course in this case it isn't. When we write $$L=\lim_{h\to0}\frac{f(x+h)-f(x)}{h}$$we mean only that $\frac{f(x+h)-f(x)}{h}$ is as close as you want to $L$, for sufficiently small $h\ne0$. The behaviour "at" $0$ isn't covered in this definition of a limit.
Measurability from taking out property of conditional expectation
No. Counterexample: $$ X=0; \quad Y = \text{anything not }G\text{-measurable}. $$ But if it is true for every bounded random variable, then the conclusion is correct (this is the definition of the conditional expectation). Yes, look at the definition: $E[X|G]$ is the unique $G$-measurable random variable such that for every $G$-measurable bounded random variable $Y$, $$ E[XY] = E[E[X|G]Y]. $$
modular arithmetic : $((a-b)/c)\bmod m = {}$?
Assuming the division is without remainder and that $c$ is relatively prime to $m$, you can just evaluate $(a-b)c^{-1}$ in $\mathbf Z/m\mathbf Z$. If $c$ is not relatively prime with $m$, you cannot deduce the value of $(a-b)/c$ modulo $m$ from the images of $a,b,c$ in $\mathbf Z/m\mathbf Z$ alone.
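In Python, for instance, the three-argument `pow` computes the modular inverse directly (Python 3.8+); the numbers below are just an illustrative example with exact division:

```python
# Illustrative numbers (mine): a - b = 12 is exactly divisible by c = 3,
# and gcd(3, 7) = 1, so 3 is invertible mod 7.
a, b, c, m = 17, 5, 3, 7
result = (a - b) * pow(c, -1, m) % m  # pow(c, -1, m) is the inverse of c mod m
print(result)  # equals ((a - b) // c) % m = 4
```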
how to find every n>2 where 8 + 5^(n-3) + 3^(n-3) that are divisible by 8 (using induction)
Hint $\ {\rm mod}\ 8\!:\ x\equiv\, (-3)^{n-3}\!+3^{n-3}\equiv 3^{n-3}((-1)^{n-3}+1)\equiv 0\iff (-1)^{n-3}\equiv -1\ $ But it is quite straightforward to prove by induction that $\,(-1)^k\equiv -1\iff k\,$ is odd.
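The conclusion ($8$ divides the expression exactly when $n-3$ is odd, i.e. when $n$ is even) is easy to confirm by brute force:

```python
# 8 + 5^(n-3) + 3^(n-3) is divisible by 8 exactly when n - 3 is odd,
# i.e. when n is even:
for n in range(3, 30):
    divisible = (8 + 5**(n - 3) + 3**(n - 3)) % 8 == 0
    assert divisible == (n % 2 == 0)
print("checked n = 3..29")
```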
What is the weakest sufficient condition for $\chi_{A_n}\xrightarrow{a.e.} 0$?
Notice that $\lim_{n\to\infty}\chi_{A_n}(\omega)=0$ is equivalent to $\omega\notin A_n$ for $n$ large enough, i.e., $\omega\in\liminf_{n\to \infty}\Omega\setminus A_n$. Hence $$\left\{\omega\mid \lim_{n\to\infty}\chi_{A_n}(\omega)=0\right\}=\liminf_{n\to \infty}\Omega\setminus A_n,$$ and we deduce that the condition $\displaystyle\mathbb P\left(\limsup_{n\to \infty} A_n\right)=0$ is a necessary and sufficient condition for having $\chi_{A_n}\to 0$ almost everywhere.
Autonomous Differential Equation $y' = (1 - \frac{A}{y})^{-1}$
$$\frac{dy}{dt} = (1 - \frac{A}{y})^{-1}$$ $$(1 - \frac{A}{y})dy=dt$$ $$y-A\ln|y|=t+c$$ This is the solution in the form of an implicit equation. Solving for $y$ cannot be done with a finite number of elementary functions. It requires a special function, namely the Lambert W function. https://mathworld.wolfram.com/LambertW-Function.html Details of the calculus: $$e^y y^{-A}=e^{t+c}$$ $$y^{A}e^{-y}=e^{-(t+c)}$$ $$ye^{-\frac{y}{A}}=e^{-\frac{t+c}{A}}$$ $$-\frac{y}{A}e^{-\frac{y}{A}}=-\frac{1}{A}e^{-\frac{t+c}{A}}$$ Let $X=-\frac{y}{A}$ and $Y=-\frac{1}{A}e^{-\frac{t+c}{A}}$ $$Xe^X=Y\quad\to\quad X=W(Y)$$ $W(X)$ is the Lambert W function. $$-\frac{y}{A}=W\left(-\frac{1}{A}e^{-\frac{t+c}{A}}\right)$$ $$y=-A\:W\left(-\frac{1}{A}e^{-\frac{t+c}{A}}\right)$$
What is the asymptotic growth rate of divisor function with k=-1?
Well, $$\sigma_{-1}(n) = \sum_{d|n} \frac{1}{d} = \sum_{d|n} \frac{d}{n} =\frac{\sigma(n)}{n}$$ so you can just study the asymptotics of $\sigma(n)$ and divide by $n$.
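The identity is easy to confirm for small $n$ with exact rational arithmetic:

```python
from fractions import Fraction

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

# Check sigma_{-1}(n) = sigma(n)/n exactly for n = 1..199.
for n in range(1, 200):
    sigma_minus1 = sum(Fraction(1, d) for d in divisors(n))
    sigma = sum(divisors(n))
    assert sigma_minus1 == Fraction(sigma, n)
print("identity verified for n = 1..199")
```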
Is the localization at a prime ideal of a non-zero ring non-zero again?
We have $R_p / pR_p \cong Frac(R/p)$ by universal properties. Since $p\neq R$, we have $pR_p \neq R_p$. Concretely, the element $1/1 \in R_p$ equals $0/1$ precisely when there is $s\notin p$ such that $0=s(1\cdot 1 - 1\cdot 0) = s$, but $0$ is in every prime ideal.
Pontryagin duality for finite groups
Every element of $\mathbb{Q}/\mathbb{Z}$ can be written uniquely in the form $q+\mathbb{Z}$ for $q\in [0,1)$ (if this isn't obvious, you should take a moment to prove it). So, let $G=\{0,g_1,g_2,\cdots,g_n\}$ be a finite group, and denote $n_i:=\vert g_i \vert$. Then, for all $f\in G^*$, $n_i f(g_i)=0+\mathbb{Z}$. Write $f(g_i)=\frac{a_i}{b_i} + \mathbb{Z}$ with $a_i,b_i\in \mathbb{N}$, $a_i<b_i$, and $\gcd(a_i,b_i)=1$. Then, in $\mathbb{Z}$, we have $b_i \mid a_i n_i$ which implies $b_i \mid n_i$. Since there are only finitely many positive integer divisors of $n_i$ and only finitely many corresponding choices for $a_i$, there are a finite number of ways to choose $f$. That takes care of the first question. For the second, I'll bet you can show that the map sending $1$ to $\frac{1}{n}+\mathbb{Z}$ is a generator for $G^*$ by using an argument similar to the one above.
Solving a trigonometric limit $\lim_{x\to\pi/6}\frac{1 - 2\sin{x}}{2\sqrt{3}\cos{x} - 3}$
Hint: $(2\sqrt{3}\cos{x} - 3)(2\sqrt{3}\cos{x} + 3)=12\cos^2x-9=3-12\sin^2x$ Now, do you see a way you can use difference of squares to simplify the following expression?$$\lim_{x\to\pi/6}\frac{(1 - 2\sin{x})(2\sqrt{3}\cos{x} + 3)}{3-12\sin^2x}$$
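Carrying the hint through gives the value $1$ for the limit, which a quick numerical check from both sides supports:

```python
import math

def f(x):
    return (1 - 2 * math.sin(x)) / (2 * math.sqrt(3) * math.cos(x) - 3)

# Approach pi/6 from both sides; the simplified form predicts the limit 1.
left = f(math.pi / 6 - 1e-7)
right = f(math.pi / 6 + 1e-7)
print(left, right)  # both close to 1
```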
Is this matrix function bounded from above by a norm
From the answer of this question, and in view of the fact that $$d(A, B) \leq \|A^{1/2} - B^{1/2}\|_F^2$$ by the Araki-Lieb-Thirring inequality, the answer to my question is yes. Quoting from the linked page: In fact, we can say much more: every $\alpha$-Hölder continuous function $F$ is operator Hölder continuous ($0<\alpha<1$) on the space of self-adjoint matrices.
How to do logarithmic differentiation
Full solution, mostly because I need LaTeX speed practice :P $$ y= \sqrt \frac {x-1}{x^8+1} \implies \ln y = \frac 12 \ln \frac {x-1}{x^8+1}$$ $$ 2 \ln y = \ln (x-1) - \ln (x^8+1) $$ Differentiate: $$ \frac {2y'}{y} = \frac {1}{x-1} - \frac {8x^7}{x^8+1}$$ $$y' = \frac 12 \left(\frac {1}{x-1} - \frac {8x^7}{x^8+1}\right)\sqrt \frac {x-1}{x^8+1} \quad \text{for } x > 1$$
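A quick numerical cross-check of the result against a central difference (my own check, not part of the solution):

```python
import math

def y(x):
    return math.sqrt((x - 1) / (x**8 + 1))

def yprime(x):
    # The formula obtained by logarithmic differentiation.
    return 0.5 * (1 / (x - 1) - 8 * x**7 / (x**8 + 1)) * y(x)

x0, h = 2.0, 1e-6
numeric = (y(x0 + h) - y(x0 - h)) / (2 * h)
print(numeric, yprime(x0))  # the two values should agree closely
```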
Prove the following language is not regular using the Pumping Lemma for Regular Languages
Suppose for sake of contradiction that $L$ is regular, and let $p$ be the pumping length. Consider the word $a^p bc d^p$. If we write this word in the form $xyz$ such that $|xy| \le p$ and $|y| &gt; 0$, then $y=a^k$ for some $0 &lt; k \le p$. But then $xy^2z = a^{p+k} bc d^p \notin L$.
Definite integral, quotient of logarithm and polynomial: $I(\lambda)=\int_0^{\infty}\frac{\ln ^2x}{x^2+\lambda x+\lambda ^2}\text{d}x$
First, let me say that calculating this integral is not so easy; according to my notes it evaluates to $\frac{16\pi^{3}}{81\sqrt{3}}.$ Hint: To evaluate $$\int_0^\infty \frac{(\log x)^2}{x^2+x+1}dx,$$ we take advantage of the fact that $$(a+1)^3-(a-1)^3 = 6a^2+2$$ and let $$f(x)=\frac{\left(\log x\right)^{3}}{x^{2}-x+1},$$ with a negative sign in the denominator. Consider the integral $\oint_{\mathcal{C}}f(x)dx$ where $\mathcal{C}$ is the counterclockwise oriented keyhole contour consisting of a circle of radius $R$, a circle of radius $\epsilon,$ and the two branches which extend to negative infinity in the upper and lower half plane. It is easy to see that as $R\rightarrow\infty$ and as $\epsilon\rightarrow0,$ the contributions of the two circles go to zero. In the limit, the integral on the branches becomes $$\int_{0}^{\infty}\frac{\left(\log x+\pi i\right)^{3}}{x^{2}+x+1}dx-\int_{0}^{\infty}\frac{\left(\log x-\pi i\right)^{3}}{x^{2}+x+1}dx.$$ I think you can solve it from here.
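The quoted value can be checked numerically without special libraries: the substitution $x\mapsto 1/x$ folds the integral to $2\int_0^1\frac{(\log x)^2}{x^2+x+1}dx$, and $x=e^{-t}$ then removes the logarithmic singularity, leaving a smooth, rapidly decaying integrand (the Simpson helper below is ad hoc):

```python
import math

def simpson(f, a, b, n=20000):
    # Composite Simpson's rule with n (even) subintervals.
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

# After x -> 1/x and x = exp(-t), the integral is 2 * int_0^inf g(t) dt.
g = lambda t: t**2 * math.exp(-t) / (math.exp(-2*t) + math.exp(-t) + 1)
integral = 2 * simpson(g, 0.0, 60.0)
print(integral, 16 * math.pi**3 / (81 * math.sqrt(3)))  # ~3.5361 each
```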
General real solution of a system of differential equations
As far as I can see, your calculations do not contain an error. However, the checked answer "None of the other answers are correct" is wrong, as the third variant is indeed correct. At the stage $$ A-\lambda I=\pmatrix{1-3i&-2\\5&-1-3i} $$ you decided to form the kernel vector from the components of the second row and got $\pmatrix{1+3i\\5}$. You could equally form it from the first row to find that the kernel is also generated by $\pmatrix{2\\1-3i}$, which, using the same procedure for the remaining steps, gives the third answer variant. You get indeed the first kernel vector from the second by multiplication with the complex scalar $\dfrac{1+3i}2$.
Question about proof of FTA
The absolute value of a polynomial tends to infinity as $\left|z\right|\to\infty$. That is, for each $M>0$, there exists $R>0$ such that for $\left|z\right|>R$ we have $\left|p(z)\right|>M$. Take a sufficiently large closed disk, so that $\left|p(z)\right|>1$ for $z$ outside the disk. The disk is compact, so its image under $\left|p(z)\right|$ is compact, hence closed. It does not contain $0$, so it is bounded away from $0$, say, by $a>0$. Thus, $$ \left|1/p(z)\right|<1/a $$ inside the disk, and $\left|1/p(z)\right|<1$ outside.
A question about intersecting straight lines
Yes, it's possible. In fact, there's more than one way to do it. Here's one way ... Let $\;a = 2,\;\; b = 4,\;\; c = 77,\;\; w = 17$. Note that $a + b + c + w = 100$. Create sets $A,B,C,W$ of lines, with cardinalities $a,b,c,w$, respectively, such that The lines of $A$ are parallel to each other. The lines of $B$ are parallel to each other. The lines of $C$ are parallel to each other. All other pairs of lines intersect in exactly one point. Then the number of intersection points is $$(ab + bc + ca) + w(a + b + c) + {w \choose 2}$$ $$ = (470) + (1411) + (136)$$ $$ = 2017$$
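The arithmetic at the end is easy to verify:

```python
from math import comb

a, b, c, w = 2, 4, 77, 17
assert a + b + c + w == 100

# Pairwise intersections between families, plus the w generic lines.
points = (a*b + b*c + c*a) + w * (a + b + c) + comb(w, 2)
print(points)  # 2017
```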
Parseval Theorem (Rayleigh)
The idea is to use that $$ \frac{1}{\sqrt{2\pi}}\int_{-1}^{1} \frac{1}{2}e^{ix t} \, dx = \frac{1}{\sqrt{2\pi}}\frac{e^{it}-e^{-it}}{2it} = \frac{1}{\sqrt{2\pi}}\frac{\sin{t}}{t}, $$ so $(\sin{t})/t$ is the Fourier transform of the function $B(x)$ that is $\sqrt{\pi/2}$ on $[-1,1]$ and zero elsewhere. Then Parseval gives $$ \int_{-\infty}^{\infty} \left( \frac{\sin{t}}{t} \right)^2 \, dt = \int_{-1}^{1} \left( \sqrt{\frac{\pi}{2}} \right)^2 \, dx = 2 \cdot \frac{\pi}{2} = \pi. $$
Does the identity matrix adapt to any other matrix?
When you "rearrange" you should get: $$ (I-A)X = B $$ $$ X = (I-A)^{-1} B $$ assuming that in the second step $I-A$ is actually invertible. Notice that we have both sides expressing $3\times 1$ columns, rather than the "1 by 3" column mentioned. Column vectors are matrices with a single column (and likely multiple rows). Matrix multiplication $AX$ is defined when $A$ has size $m\times n$ and $X$ has size $n\times k$. In this case we have $m=n=3$ and $k=1$, so matrix multiplication is properly defined. An identity matrix $I$ needs to be square (equal number of rows as columns), so if more generally $X$ was $n\times 1$, we'd want the identity to have $n$ rows and $n$ columns as well.
Trotter decomposition from Baker-Hausdorff
You don't have to use the Baker-Hausdorff formula; just use the definition of an exponentiated operator $e^H = \lim_{n \rightarrow \infty} \left( 1+\frac{H}{n} \right)^n $ \begin{equation} e^{(\hat{A} +\hat{B})t} = \lim_{n \rightarrow \infty} \left( 1+\frac{(\hat{A}+\hat{B})t}{n} \right)^n \end{equation} \begin{equation} = \lim_{n \rightarrow \infty} \left( \left(1+\frac{\hat{A}t}{n}\right) \left(1+\frac{\hat{B}t}{n}\right)\right)^n \end{equation} \begin{equation} = \lim_{n \rightarrow \infty} \left( e^{\frac{\hat{A}t}{n}} e^{\frac{\hat{B}t}{n}} \right)^n \end{equation} The second step uses that the two expressions differ only by the cross term $\frac{\hat{A}\hat{B}t^2}{n^2}$, which does not contribute in the limit.
Basic question from vector analysis - Louis Brand Ch1, problem 2
The posted proof is correct. For an alternative, express everything in terms of position vectors relative to an arbitrary origin $O$. With the notation $\,x=\vec{OX}\,$, and using that $\,m = (a+c)/2\,$ and $\,n = (b+d)/2\,$: $$ \begin{align} \vec{AB}+\vec{CD} &amp;= (b-a)+(d-c) = (b+d)-(a+c) = 2n -2m =2(n-m)=2\vec{MN} \end{align} $$
Semidirect product operation
The semi direct product tells you how to multiply $(x_1, y_1)$ and $(x_2,y_2)$. If we had a direct product $X\times Y$, we would just set $$ (x_1,y_1)(x_2,y_2) = (x_1x_2,y_1y_2). $$ By removing the parentheses, and thinking of the $X$ and $Y$ as subgroups of some larger group that intersect only in the identity, we can imagine that we're really just specifying how $Y$ conjugates $X$: $$ x_1y_1x_2y_2 = x_1(y_1x_2y_1^{-1})y_1y_2. $$ For a semi direct product, we therefore need $X$ to be normal, and we can think of the conjugation of $Y$ acting on $X$ as specifying a homomorphism $Y\to Aut(X)$. For example, in the direct product case, we would get $y_1x_2y_1^{-1} = x_2$, so the map above sends every element of $Y$ to the identity. In your case, you're looking at the semi direct product $\mathbb Z_8 \rtimes \mathbb Z_2$, so we need to specify a homomorphism $\mathbb Z_2 \to Aut(\mathbb Z_8)$. Since $0\in \mathbb Z_2$ is the identity, the corresponding automorphism of $\mathbb Z_8$ must be the identity map. For the unique nontrivial element $y\in \mathbb Z_2$, you have specified the map $y(x) = x^3$. Thus, we can intuitively (note this is only intuition) think of this as $y$ conjugating $x$ by $yxy^{-1} = x^3$. Written in the proper form, $$ (x_1,y)(x_2,y) = (x_1x_2^3,y^2) = (x_1x_2^3,0). $$
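As a sanity check (not part of the original answer), the multiplication rule can be verified to define a group by brute force. Writing $\mathbb Z_8$ additively, the automorphism $x\mapsto x^3$ becomes $x\mapsto 3x \bmod 8$ (note $3^2=9\equiv 1 \pmod 8$, so it has order $2$, as a homomorphism from $\mathbb Z_2$ requires):

```python
from itertools import product

def op(p, q):
    (x1, y1), (x2, y2) = p, q
    # y1 acts on x2 by the automorphism x -> 3x (mod 8), applied y1 times.
    return ((x1 + pow(3, y1, 8) * x2) % 8, (y1 + y2) % 2)

G = list(product(range(8), range(2)))

# Associativity, identity, inverses:
assert all(op(op(a, b), c) == op(a, op(b, c)) for a, b, c in product(G, repeat=3))
e = (0, 0)
assert all(op(e, g) == g == op(g, e) for g in G)
assert all(any(op(g, h) == e for h in G) for g in G)
print("semidirect product of Z8 by Z2 via x -> 3x: group of order", len(G))
```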
Why does WolframAlpha give a strange solution set to $\lfloor a\rfloor=0$?
I suppose the answer might look a little silly but note that floor(1/2+sqrt(3)*i/2) returns zero, so the result is indisputably correct. Typically, WolframAlpha will try to classify the results to some extent. If you just enter x^3-x-1=0, for example, you'll see separate pods containing a real solution and two complex solutions. Now in your case, Mathematica is not able to solve floor(a)=0 over the complexes using analytic tools so it falls back to a numerical search and finds the specific examples you mentioned. Similar behavior is exhibited when we enter round(x)=i: Again, Round[-1/2+Sqrt[3]I/2] yields I in Mathematica, so the result is correct. Perhaps a bit silly, but not totally unreasonable either.
Proof by induction and number sets (very simple question)
The author could have also written $(A + B) \in \mathbb{Z}^{+}$. You are correct that $A, B \in \mathbb{Z}^{+}$ implies $(A + B) \in \mathbb{Z}^{+}$ because the set of positive integers is closed under addition. However, since $\mathbb{Z}^{+}$ is a subset of $\mathbb{Z}$, it does not make a difference here. Yes. If you're trying to prove a property for all $n$ in some set, then the value you "suppose" it's true for should belong to the set as well. From there, you should try to construct a subsequent value that belongs to that set. It wouldn't make sense to take $k$ in a set that $n$ is not in because you would have a false premise.
Why can't you solve $0 = -5\sqrt{x+2}-2$
Remember that $\sqrt{x+2}\geq 0$; multiplying each side by $(-1)$, you get $$-5\sqrt{x+2}-2=0 \Longleftrightarrow 5\sqrt{x+2}+2=0,$$ which is impossible: the sum of a non-negative number and a positive number cannot equal zero. EDIT: Your argument is not generally correct, because $$a=b \not \Longleftrightarrow a^2=b^2.$$ For example, $$-1=1 \\ (-1)^2=1^2 \\ 1=1 $$ where the starting equality is false even though the squared equality is true.
Finding the zeroes using Chebyshev polynomials
The zeroes of $T_3(x)=4x^3-3x$ are $0,\pm\frac{\sqrt 3}{2}$. The first idea is to apply the mapping from the interval $[-1,1]$ (the natural domain for the Chebyshev polynomials) into your desired domain $[1,3]$. It can be easily seen that the required mapping is $f(x)=x+2$, and thus you are to interpolate $\frac1{x}$ over the points $2,2\pm\frac{\sqrt 3}{2}$. You can now apply any of a number of methods (Lagrange, Newton, Neville-Aitken) for constructing the quadratic polynomial interpolant. That part I leave to you.
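For the interpolation step left to the reader, here is a minimal Lagrange-form sketch in plain Python (the helper name is mine):

```python
import math

nodes = [2 - math.sqrt(3) / 2, 2.0, 2 + math.sqrt(3) / 2]
vals = [1 / x for x in nodes]

def lagrange(x):
    # Quadratic Lagrange interpolant of 1/x at the shifted Chebyshev nodes.
    total = 0.0
    for i, (xi, yi) in enumerate(zip(nodes, vals)):
        w = 1.0
        for j, xj in enumerate(nodes):
            if j != i:
                w *= (x - xj) / (xi - xj)
        total += yi * w
    return total

# The interpolant reproduces 1/x exactly at the nodes:
for xi, yi in zip(nodes, vals):
    assert abs(lagrange(xi) - yi) < 1e-12
print(lagrange(1.5), 1 / 1.5)  # interpolant vs. true value inside [1, 3]
```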
Triple Sum construction
Yes, that's correct. You can verify by running through the indices and calculating the values on the subscripts. The limits on your indices result in the correct number of terms, and there is an exact match with the calculated subscripts in the expanded sum. m k j 3-m-k j k-j --------------------------- 0 0 0 3 0 0 1 0 0 4 0 0 1 1 0 3 0 1 1 1 1 3 1 0 2 0 0 5 0 0 2 1 0 4 0 1 2 1 1 4 1 0 2 2 0 3 0 2 2 2 1 3 1 1 2 2 2 3 2 0
diagonal and codiagonal morphism in additive category
All of this follows directly from the definition of an additive category, together with the definition of all of the terms in that definition. The most complicated part of that definition is understanding in detail what it means for a category to admit biproducts; for that see this blog post.