Asymptotic expansion for $\frac{1}{2\zeta(3)}\int_x^\infty \frac{u^2}{e^u - 1} du$? Is there an asymptotic expansion for the function: \begin{equation} g(x)=\frac{1}{2\zeta(3)}\int_x^\infty \frac{u^2}{e^u - 1} du, \end{equation} over the domain $x\in [0,\infty)$ in terms of elementary functions? Here $\zeta$ is the Riemann zeta function and $1/(2\zeta(3))$ is a normalization factor included to ensure $g(0)=1$.
We have $$g(x) = \dfrac1{2 \zeta(3)}\underbrace{\int_x^{\infty} \dfrac{u^2}{e^u-1} du}_{I(x)}$$ We will now obtain a series expansion for $I(x)$. We have $$I(x) = \int_x^{\infty} \dfrac{u^2}{e^u-1} du = \int_x^{\infty} \dfrac{u^2 e^{-u}}{1-e^{-u}} du = \int_x^{\infty} \sum_{k=0}^{\infty} u^2 e^{-(k+1)u} du = \sum_{k=1}^{\infty} \int_x^{\infty} u^2 e^{-ku} du$$ We now have that $$\int_x^{\infty} u^2 e^{-ku} du = \dfrac{e^{-kx} \left(k^2 x^2 + 2kx + 2\right)}{k^3}$$ Hence $$I(x) = \sum_{k=1}^{\infty} \dfrac{e^{-kx} \left(k^2 x^2 + 2kx + 2\right)}{k^3}\tag{$\star$}$$ For a given $x$, truncating $(\star)$ will give you an exponentially converging approximation. Note that $$I(0) = \displaystyle \sum_{k=1}^{\infty} \dfrac2{k^3} = 2 \zeta(3)$$
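For readers who want to sanity-check the truncated series $(\star)$ numerically, here is a minimal Python sketch (assuming NumPy and SciPy are available; `quad`, `expm1` and `zeta` are standard NumPy/SciPy calls):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import zeta

def I_series(x, terms=200):
    """Truncation of (*): sum over k of e^{-kx} (k^2 x^2 + 2 k x + 2) / k^3."""
    k = np.arange(1, terms + 1)
    return float(np.sum(np.exp(-k * x) * (k**2 * x**2 + 2 * k * x + 2) / k**3))

def I_quad(x):
    """Direct numerical integration of u^2 / (e^u - 1) from x to infinity."""
    val, _ = quad(lambda u: u**2 / np.expm1(u), x, np.inf)
    return val

for x in (0.5, 1.0, 5.0):
    print(x, I_series(x), I_quad(x))   # the two columns should agree
print("2*zeta(3) =", 2 * zeta(3))      # normalization: I(0) = 2*zeta(3)
```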
Yes, it is equal to the incomplete gamma function (http://mathworld.wolfram.com/IncompleteGammaFunction.html): $$ \Gamma (3,x) $$
Physics Vector algebra question If $\vec A=2\vec i+4\vec j-\vec k$, $\vec B=2\vec i-3\vec j+\vec k$ and $\vec C=- \vec i+3\vec j$, find the unit vector in the direction of $\vec A+\vec B+\vec C$. I tried to find the unit vector of each vector and to add them all, but I didn't get the correct answer. It's a competitive-exam question. The unit vectors I added were: $$\frac1{\sqrt{21}}\vec A=\frac1{\sqrt{21}}(2\vec i+4\vec j-\vec k)$$ $$\frac1{\sqrt{14}}\vec B=\frac1{\sqrt{14}}(2\vec i-3\vec j+\vec k)$$ $$\frac1{\sqrt{10}}\vec C=\frac1{\sqrt{10}}(-\vec i+3\vec j)$$ Now what should I do to get the correct answer?
You should first add the vectors $\vec A, \vec B$ and $\vec C$, and then calculate the unit vector. $$\vec{A} + \vec B + \vec C = 3\hat\imath + 4\hat\jmath$$ $$\text{Unit vector }= (3/5)\hat\imath + (4/5)\hat\jmath$$
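As a quick cross-check of the arithmetic, a few lines of Python (standard NumPy calls only):

```python
import numpy as np

A = np.array([2, 4, -1])
B = np.array([2, -3, 1])
C = np.array([-1, 3, 0])

S = A + B + C                    # add first...
print(S, S / np.linalg.norm(S))  # ...then normalize: [3 4 0] [0.6 0.8 0. ]
```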
Proving $<$ is transitive on $\mathbb{Q}$. I feel a little bit stupid asking this; I am asked to prove that, for all rational numbers, if $x < y$ and $y < z$ then $x < z$. I have said this: $ x + 0 < y $, $ x - z + z < y$, $ x - z < y- z $; but $ y - z < 0$, so $ x - z < y- z $ implies $ x - z < 0 $. I had a little search before posting this just to make sure it's not a duplicate; if it is I will delete it right away. Sorry, and thanks in advance. ORDERING OF THE RATIONALS: Let $x$ and $y$ be rational numbers. We say that $x > y$ iff $x - y$ is a positive rational number, and $x < y$ iff $x - y$ is a negative rational number. I think this is the information that was missing.
We have $$z-x=(z-y)+(y-x)$$ We know $z-y$ is a positive rational number, since $y<z$. We also know that $y-x$ is a positive rational number, since $x<y$. We now need some sort of property that the sum of two positive rational numbers is again a positive rational number. With that property we know that $z-x$ is a positive rational number, and hence $x<z$.
As Peter Franek pointed out in his comment, your proof is flawed. You used: if $x-z<y-z$ and $y-z<0$ then $x-z<0$. This is the same as saying: if $x<y$ and $y<z$ then $x<z$. All you have to do to see that is set $x'=x-z,y'=y-z,z'=0$. As for how to prove this, use the axioms of the real number system. By the axiom: $x\leq y$ and $y\leq z$ imply $x\leq z$ for any real numbers $x,y,z$. Now, say $x<y$ and $y<z$; it's the same as $x\leq y$ and $y\leq z$ and $x\neq y$ and $y\neq z$. This implies that $x\leq z$. What we have left to prove is that $x\neq z$. Assume $x=z$; then $x\leq y$ and $y\leq x$ and $x\neq y$, that is $x=y$ and $x\neq y$, which leads to a contradiction. Thus, $x\neq z$. This ends the proof.
Rationals in [0,1] are $F_{\sigma}$? I have the following problem that I think I have worked out. Let $A$ be the set of rational numbers in $[0,1]$. Is $A$ an $F_{\sigma}$ set? My Attempt: Yes, $A$ is an $F_{\sigma}$ set. We recall that an $F_{\sigma}$ set is one which may be expressed as the countable union of closed sets. Since the set of rationals $\mathbb{Q}$ is countable, so too is the set $A$. Hence, we may enumerate it as $A=\{q_{k}\}_{k=1}^{\infty}$. Now, each singleton set $\{q_{k}\}$ is closed in $\mathbb{R}$ (with the standard topology), since its complement $\{q_{k}\}^{c}=(-\infty,q_{k})\cup(q_{k},\infty)$ is open. Thus, we have $A=\bigcup_{k=1}^{\infty}\{q_{k}\}$, so $A$ is indeed an $F_{\sigma}$ set. My concerns: I think my above argument is okay, but I am shaky on one part. Does it matter if I consider each $\{q_{k}\}$ as closed in $\mathbb{R}$, or do I need to show explicitly that they are closed in $[0,1]$? Thanks in advance for any help!
If $S$ is closed in $\Bbb R$, then $S\cap X$ is closed in $X$ for all $X\subseteq \Bbb R$, so showing closedness in $\Bbb R$ is sufficient.
Yes, it is an $F_{\sigma}$ set. What you did is sufficient. Good job!
Finding equilibrium in a predator-prey system Using the predator-prey system $$\frac{dR}{dt}=6R-2RW, \qquad \frac{dW}{dt}=-4W+5RW:$$ when the system is in equilibrium with $W\ne 0$, $R \ne 0$, what is $RW$? Apologies for the formatting.
You are looking for the nontrivial special solutions where $w = \text{constant},\ r = \text{constant}$, that is $$r \neq 0,\ w \neq 0,\ 2r(3 - w) = 0 \text{ and } w(-4 + 5r)=0,$$ that is $$ w = 3,\ r = \frac 45,$$ so that $rw = \frac{12}{5}$.
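If you want to verify this symbolically, here is a small SymPy sketch (standard SymPy API; the `positive=True` assumption is what discards the trivial equilibria):

```python
import sympy as sp

R, W = sp.symbols('R W', positive=True)   # rules out R = 0, W = 0
eqs = [6*R - 2*R*W, -4*W + 5*R*W]         # dR/dt = 0, dW/dt = 0
sols = sp.solve(eqs, [R, W], dict=True)
print(sols)                               # [{R: 4/5, W: 3}]
print([s[R] * s[W] for s in sols])        # [12/5], the requested RW
```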
"When the derivatives are 0 and it's an equation, use Cramer and some determination!" (It's a linear equation, solve for R and W and multiply them together) "I like the rhyme because it's sublime"
Bounding ${(2d-1)n-1\choose n-1}$ Claim: ${3n-1\choose n-1}\le 6.25^n$. Why? Can the proof be extended to obtain a bound on ${(2d-1)n-1\choose n-1}$, with the bound being $f(d)^n$ for some function $f$? (These numbers describe the number of some $d$-dimensional combinatorial objects; the claim above is the case $d=2$, and is not my claim.)
First, let's bound things as easily as possible. Consider the inequality $$\binom{n}{k}=\frac{n!}{(n-k)!\,k!}\leq\frac{n^{k}}{k!}\leq e^{k}\left(\frac{n}{k}\right)^{k}.$$ The $n^k$ comes from the fact that $n$ is bigger than each factor of the product in the numerator. Also, we know that $k!e^k>k^k$ by looking at the $k^{th}$ term in the Taylor series, as $e^k=1+k+\cdots +\frac{k^k}{k!}+\cdots $. Now, let's look at the similar $3n$ and $n$ instead of $3n-1$ and $n-1$. Then we see that $$\binom{3n}{n}\leq e^{n}\left(3\right)^{n}\leq\left(8.16\right)^{n}$$ and then for any $k$ we would have $$\binom{kn}{n}\leq\left(ke\right)^{n}.$$ We could use Stirling's formula, and improve this more. What is the most that this can be improved? Apparently, according to Wolfram the best possible is $$\binom{(k+1)n}{n}\leq \left(\frac{(k+1)^{k+1}}{k^k}\right)^n.$$ (Notice that when $k=2$ we have $27/4$, which is $6.75$.) Hope that helps.
Just use the Stirling formula $$n! \sim \sqrt{2\pi n} (n/e)^n$$ for large $n$ and neglect the $\sqrt{2\pi n}$ factor for a while. That gives us a good estimate for your expression $$(3n-1)! / [ (n-1)!(2n)! ] \sim (2n)^{-2n}(n-1)^{1-n} (3n-1)^{3n-1} $$ Note that the powers of $e$ cancel. For large $n$, you may also approximate the bases of the powers simply by $2n,n,3n$, respectively. Then you get $$\sim 2^{-2n} 3^{3n} = (27/4)^{n} $$ because the powers of $n$ cancel, too. Note that $27/4 = 6.75$. I have only calculated the estimate, which is actually a better result because it shows that the number $6.75$ is exact in the large $n$ limit. To prove the inequality, you have to carefully watch whether the Stirling formula underestimates or overestimates it. At any rate, it's straightforward to check that the inequality holds for any positive $n$. It's enough to check it explicitly for the first few small values of $n$, and for larger ones, it can be shown that the $(27/4)^n$ Ansatz is being approached from the right side by computing the sign of the first subleading correction to this approximation. If you enumerate the first few binomial coefficients over the approximation (which should be smaller than one) by Mathematica [Table[Binomial[3 n - 1, n - 1]/(27/4)^n, {n, 1, 20}]] you will get {0.14814815, 0.10973937, 0.091043032, 0.079482012, 0.071435685, 0.06542258, 0.060709598, 0.056887142, 0.053706089, 0.051005081, 0.048674402, 0.046636504, 0.04483482, 0.043226987, 0.041780567, 0.040470244, 0.039275935, 0.038181474, 0.037173681, 0.036241691} The numbers are clearly smaller than one, and from the beginning, the decrease is uniform. For your case of $d$ dimensions, the relevant surviving terms are replaced by $$ \sim (2d-2)^{(2-2d)(n-1)} (2d-1)^{(2d-1)(n-1)} $$ so $27/4$ gets replaced by $(2d-1)^{2d-1}/(2d-2)^{2d-2}$. For $d=2$, you get $3^3/2^2 = 27/4$. For $d=3$, you would get $5^5/4^4$, and so on. Note that the proof only works for 27/4 = 6.75. This number couldn't be reduced further (6.25 is a typo) and any proof that replaces 6.75 by a larger number fails to prove the original assertion.
Mathematician vs. Computer: A Game A mathematician and a computer are playing a game: First, the mathematician chooses an integer from the range $2,...,1000$. Then, the computer chooses an integer uniformly at random from the same range. If the numbers chosen share a prime factor, the larger number wins. If they do not, the smaller number wins. (If the two numbers are the same, the game is a draw.) Which number should the mathematician choose in order to maximize his chances of winning?
For fixed range: range = 16; a = Table[Table[FactorInteger[y][[n, 1]], {n, 1, PrimeNu[y]}], {y, 1, range}]; b = Table[Sort@DeleteDuplicates@ Flatten@Table[ Table[Position[a, a[[y, m]]][[n, 1]], {n, 1, Length@Position[a, a[[y, m]]]}], {m, 1, PrimeNu[y]}], {y, 1, range}]; c = Table[Complement[Range[range], b[[n]]], {n, 1, range}]; d = Table[Range[n, range], {n, 1, range}]; e = Table[Range[1, n], {n, 1, range}]; w = Table[DeleteCases[DeleteCases[Join[Intersection[c[[n]], e[[n]]], Intersection[b[[n]], d[[n]]]], 1], n], {n, 1, range}]; l = Table[DeleteCases[DeleteCases[Complement[Range[range], w[[n]]], 1], n], {n, 1, range}]; results = Table[Length@l[[n]], {n, 1, range}]; cf = Grid[{{Join[{"n"}, Rest@(r = Range[range])] // ColumnForm, Join[{"win against n"}, Rest@w] // ColumnForm, Join[{"lose against n"}, Rest@l] // ColumnForm, Join[{"probability win for n"}, (p = Drop[Table[ results[[n]]/Total@Drop[results, 1] // N,{n, 1, range}], 1])] // ColumnForm}}] Flatten[Position[p, Max@p] + 1] isn't great code, but it is fun to play with for small ranges; it gives [table of wins, losses and win probabilities for each n]. And perhaps more illuminating, rr = 20; Grid[{{Join[{"range"}, Rest@(r = Range[rr])] // ColumnForm, Join[{"best n"}, (t = Rest@Table[ a = Table[Table[FactorInteger[y][[n, 1]], {n, 1, PrimeNu[y]}], {y, 1, range}]; b = Table[Sort@DeleteDuplicates@Flatten@Table[Table[ Position[a, a[[y, m]]][[n, 1]], {n, 1,Length@Position[a, a[[y, m]]]}], {m, 1,PrimeNu[y]}], {y, 1, range}]; c = Table[Complement[Range[range], b[[n]]], {n, 1, range}]; d = Table[Range[n, range], {n, 1, range}]; e = Table[Range[1, n], {n, 1, range}]; w = Table[DeleteCases[DeleteCases[Join[Intersection[c[[n]], e[[n]]], Intersection[b[[n]], d[[n]]]], 1], n], {n, 1, range}]; l = Table[DeleteCases[DeleteCases[Complement[Range[range], w[[n]]], 1], n], {n,1, range}]; results = Table[Length@l[[n]], {n, 1, range}]; p = Drop[Table[results[[n]]/Total@Drop[results, 1] // N, {n, 1, range}], 1]; {Flatten[Position[p, Max@p] + 1], Max@p}, {range, 1, rr}]/.Indeterminate -> draw); Table[t[[n, 1]], {n, 1, rr - 1}]] // ColumnForm, Join[{"probability for win"}, Table[t[[n, 2]], {n, 1, rr - 1}]] // ColumnForm}}] compares ranges: [table of best n and win probability for each range]. Plotting mean "best $n$" against $\sqrt{\text{range}}$ gives [plot]. For range $=1000$, "best $n$" are $29$ and $31$, which can be seen as maxima in this plot: [plot of win probability against $n$]. Update In light of DanielV's comment that a "primes vs winchance" graph would probably be enlightening, I did a little bit of digging, and it turns out that it is.
Looking at the "winchance" (just a weighting for $n$) of the primes in the range only, it is possible to give a fairly accurate prediction using range = 1000; a = Table[Table[FactorInteger[y][[n, 1]], {n, 1, PrimeNu[y]}], {y, 1, range}]; b = Table[Sort@DeleteDuplicates@Flatten@Table[ Table[Position[a, a[[y, m]]][[n, 1]], {n, 1, Length@Position[a, a[[y, m]]]}], {m, 1, PrimeNu[y]}], {y, 1, range}]; c = Table[Complement[Range[range], b[[n]]], {n, 1, range}]; d = Table[Range[n, range], {n, 1, range}]; e = Table[Range[1, n], {n, 1, range}]; w = Table[ DeleteCases[ DeleteCases[ Join[Intersection[c[[n]], e[[n]]], Intersection[b[[n]], d[[n]]]], 1], n], {n, 1, range}]; l = Table[ DeleteCases[DeleteCases[Complement[Range[range], w[[n]]], 1], n], {n, 1, range}]; results = Table[Length@l[[n]], {n, 1, range}]; p = Drop[Table[ results[[n]]/Total@Drop[results, 1] // N, {n, 1, range}], 1]; {Flatten[Position[p, Max@p] + 1], Max@p}; qq = Prime[Range[PrimePi[2], PrimePi[range]]] - 1; Show[ListLinePlot[Table[p[[t]] range, {t, qq}], DataRange -> {1, Length@qq}], ListLinePlot[ Table[2 - 2/Prime[x] - 2/range (-E + Prime[x]), {x, 1, Length@qq + 0}], PlotStyle -> Red], PlotRange -> All] The plot above (there are $2$ plots here) shows the values of "winchance" for primes against a plot of $$2+\frac{2 (e-p_n)}{\text{range}}-\frac{2}{p_n}$$ where $p_n$ is the $n$th prime, and "winchance" is the number of possible wins for $n$ divided by the total number of possible wins in the range, i.e. $$\dfrac{\text{range}}{2}\left(\text{range}-1\right),$$ e.g. $499500$ for range $1000$. Show[p // ListLinePlot, ListPlot[N[ Transpose@{Prime[Range[PrimePi[2] PrimePi[range]]], Table[(2 + (2*(E - Prime[x]))/range - 2/Prime[x])/range, {x, 1, Length@qq}]}], PlotStyle -> {Thick, Red, PointSize[Medium]}, DataRange -> {1, range}]] Added: A bit of fun with game simulation: games = 100; range = 30; table = Prime[Range[PrimePi[range]]]; choice = Nearest[table, Round[Sqrt[range]]][[1]]; y = RandomChoice[Range[2, range], games]; z = Table[ Table[FactorInteger[y[[m]]][[n, 1]], {n, 1, PrimeNu[y[[m]]]}], {m, 1, games}]; Count[Table[If[Count[z, choice] == 0 && y[[m]] < choice \[Or] Count[z, choice] > 0 && y[[m]] < choice, "lose", "win"], {m, 1, games}], "win"] & simulated wins against the computer over a variety of ranges with Clear[range] highestRange = 1000; ListLinePlot[Table[games = 100; table = Prime[Range[PrimePi[range]]]; choice = Nearest[table, Round[Sqrt[range]]][[1]]; y = RandomChoice[Range[2, range], games]; z = Table[Table[FactorInteger[y[[m]]][[n, 1]], {n, 1, PrimeNu[y[[m]]]}], {m, 1, games}]; Count[Table[ If[Count[z, choice] == 0 && y[[m]] < choice \[Or] Count[z, choice] > 0 && y[[m]] < choice, "lose", "win"], {m, 1, games}], "win"], {range,2, highestRange}], Filling -> Axis, PlotRange -> All] Added 2: [Plot of mean "best $n$" up to range $=1000$] with a tentative conjectured error bound of $\pm\dfrac{\sqrt{\text{range}}}{\log(\text{range})}$ for range $>30$. I could well be wrong here though. - In fact, on reflection, I think I am (related).
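The same search can be reproduced with a short brute-force script; here is a minimal Python version (standard library only, written directly from the problem statement rather than from the Mathematica code above):

```python
from math import gcd

N = 1000
others = range(2, N + 1)

def beats(m, c):
    """Does the mathematician's m beat the computer's random pick c?"""
    if m == c:
        return False            # draw
    if gcd(m, c) > 1:           # shared prime factor <=> gcd > 1
        return m > c            # larger number wins
    return m < c                # otherwise smaller number wins

scores = {m: sum(beats(m, c) for c in others) for m in others}
best = max(scores, key=scores.get)
print(best, scores[best])       # expect a prime near sqrt(1000), cf. 29/31 above
```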
The mathematician should choose 990. It has the most prime factors of 2, 3, 5, 11 at the high end of the numbers up to 1000. Since the computer is going to pick a random number, there is a high probability (a large set) 1. that the number picked will be less than 990, and 2. that it will be an easily factored number containing 2, 3, 5 or 11 (there are many). The mathematician does not want to be in this large set, and should win most games.
Show that the random variables $X$ and $Y$ are uncorrelated but not independent Show that the random variables $X$ and $Y$ are uncorrelated but not independent The given joint density is $f(x,y)=1\;\; \text{for } \; -y<x<y \; \text{and } 0<y<1$, otherwise $0$. My main concern here is how we should calculate $f_1(x)$. $f_1(x)=\int dy = \int_{-x}^{1}dy + \int_{x}^{1}dy = 1+x+1-x=2\; \; \forall\, -1 <x<1$ OR should we do this? $f_1(x)$=$$ \begin{cases} \int_{-x}^{1}dy = 1+x && -1<x<0 \\ \int_{x}^{1}dy = 1-x & & 0\leq x <1 \\ \end{cases} $$ In the second case, how do I show they are not independent? I can directly say that the joint distribution does not have a product space, but I want to show that $f(x,y)\neq f_1(x)f_2(y)$. Also, for anyone requiring further calculations, $f_2(y) = \int dx = \int_{-y}^{y}dx = 2y$, $\mu_2= \int y f_2(y)dy = \int_{0}^{1}2y^2\,dy = \frac23$, $\sigma_2 ^2 = \int y^2f_2(y)dy - \left(\frac23\right)^2 = \frac12 - \frac49 = \frac1{18}$, $E(XY)= \int_{y=0}^{y=1}\int_{x=-y}^{x=y} xy f(x,y)dxdy =\int_{y=0}^{y=1}\int_{x=-y}^{x=y} xy \,dxdy$, which seems to be $0$? I am not sure about this also.
$f_1(x)=1+x$ if $-1<x<0$ and $1-x$ if $0<x<1$. (In other words, $f_1(x)=1-|x|$ for $|x|<1$.) As you have observed, $f_2(y)=2y$ for $0<y<1$. Now it is a basic fact that if the random variables are independent then we must have $f(x,y)=f_1(x)f_2(y)$ (almost everywhere). Since the equation $(1-|x|)(2y)=f(x,y)$ is not true, we can conclude that $X$ and $Y$ are not independent. $EXY=0$ is correct. Also $EX=\int_{-1}^{1}x(1-|x|)dx=0$, so $X$ and $Y$ are uncorrelated.
A slightly different approach: Joint density of $(X,Y)$ is \begin{align} f(x,y)&=1_{-y<x<y\,,\,0<y<1} \\&=\underbrace{\frac{1_{-y<x<y}}{2y}}_{f_{X\mid Y=y}(x)}\cdot\underbrace{2y\,1_{0<y<1}}_{f_Y(y)} \end{align} Since the (conditional) distribution of $X$ 'given $Y$' depends on $Y$, clearly $X$ and $Y$ are not independent. In fact, $X$ conditioned on $Y=y$ has a uniform distribution on $(-y,y)$, which gives $E\,[X\mid Y]=0$. Therefore, by the law of total expectation, \begin{align} E\,[XY]&=E\left[E\left[XY\mid Y\right]\right] \\&=E\left[YE\left[X\mid Y\right]\right] \\&=0 \end{align} Similarly $E\,[X]=E\left[E\,[X\mid Y]\right]=0$, so that $\operatorname{Cov}(X,Y)=E\,[XY]-E\,[X]E\,[Y]=0$. A more intuitive way to see that two jointly distributed variables $X,Y$ are not independent is to verify that the joint support of $(X,Y)$ cannot be written as a Cartesian product of the marginal supports of $X$ and $Y$. For this, all we need to do is sketch the support of $(X,Y)$ given by $$S=\{(x,y)\in\mathbb R^2: |x|<y<1 \}$$ In fact $(X,Y)$ is uniformly distributed over $S$. [sketch of the triangular region $S$] So the support of $X$ is $S_1=(-1,1)$ and that of $Y$ is $S_2=(0,1)$. But since $S\ne S_1 \times S_2$, the random variables $X$ and $Y$ are not independent. Related: Uncorrelated, Non Independent Random variables The mutual density of $X,Y$ in $\{|t|+|s|<1\}$ is constant, are $X,Y$ independent?
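A Monte Carlo illustration of both facts, as a minimal NumPy sketch (sampling $Y$ by inverse transform, since $F_Y(y)=y^2$, and then $X\mid Y=y$ uniform on $(-y,y)$):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10**6
Y = np.sqrt(rng.uniform(size=n))   # F_Y(y) = y^2  =>  Y = sqrt(U)
X = rng.uniform(-Y, Y)             # X | Y = y  ~  Uniform(-y, y)

print(np.corrcoef(X, Y)[0, 1])     # ~ 0: uncorrelated
# Not independent: |X| > 1/2 forces Y > 1/2, so the joint event below is impossible
print(np.mean(np.abs(X) > 0.5), np.mean((np.abs(X) > 0.5) & (Y < 0.5)))
```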
Prove that the equation $ e^{x}+x^{3}=10+x $ has a unique solution on the open interval $(-\infty,\infty)$. Prove that the equation $$ e^{x}+x^{3}=10+x $$ has a unique solution on the open interval $(-\infty,\infty)$. Letting $f(x)=e^{x}+x^{3}-10-x$, $f(0)<0$ and $f(10)>0$, so by the IVT there must be a solution in the interval $(0,10)$. Now I must show there cannot be more than one solution. Generally for these proofs you assume that there are at least 2 solutions and show a contradiction. Letting the solutions be $a$ and $b$ ($a<b$ WLOG), I want to use Rolle's theorem to show that $f'(x)=e^{x}+3x^{2}-1$ cannot be zero (giving the contradiction). The problem is $f'(0)=0$. So how do I proceed?
You can observe that if $x <-1$ then $x^3-x < 0$, thus $$f(x) <e^{-1}-10 <-9.$$ Also if $x \in [-1, 1]$ we have $x^3 \leq 1$ and $-x \leq 1$, thus $$f(x) \leq e+2-10 < -5.$$ Therefore there is no solution $x \leq 1$. Now apply your technique on $(1, \infty)$. It is easy to show that on this interval $f'(x) >0$.
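Once the discussion is reduced to $(1,\infty)$, a quick numerical check of the root does no harm; a bisection sketch in plain Python (standard library only):

```python
import math

f = lambda x: math.exp(x) + x**3 - 10 - x   # f(1) < 0, f(10) > 0

lo, hi = 1.0, 10.0
for _ in range(60):                          # plain bisection on (1, 10)
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
print(lo)                                    # ~ 1.79, the unique root
```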
Prove that $f(x)$ is injective over the positive real numbers. $$f'(x) > 0\ \forall x\geq0 \text{ (apart from } 0\text{) and } f\in C^{\infty} (\mathbb{R}, \mathbb{R}) \implies f \text{ injective. } $$ Then $\forall(a, b) \in \mathbb{R^+}^2, f(a) = f(b) \iff a=b$. And $\forall x<0,\ f(x) < 0$, as $x^3-x\leq1$ and $e^x<1$.
I'm trying to describe Godel's Incompletemeness Theorem in 1 short sentence... We at the Unemployed Philosophers Guild are adding Kurt Godel to our line of illustrious finger puppets. On the puppet tag we always have a short biography. Below is what we have written. Is our description of his Theorem acceptable to a mathematician/logician? Austrian-born philosopher and logician Kurt Friedrich Gödel studied physics before publishing his famous (two) Incompleteness Theorem(s). According to Gödel, a mathematical system can’t prove or disprove every proposition within itself (it’s “incomplete”) and can’t prove itself both complete and consistent. Gödel fled Nazi Germany, renewed his friendship with fellow émigré Albert Einstein, and became a U.S. citizen. As “the most important logician since Aristotle,” Gödel influenced computer science, artificial intelligence, and philosophy of mathematics. He was devoted to operetta. Here is an alternate description of his work: ...According to Gödel, if a mathematical system can prove every statement that can be constructed in the system, then there must be some contradictory statements in the system: and if there are no contradictory statements, then there are statements that cannot be proved... Thank you!
It depends on the level of precision you want to reach. My first remark is that "According to Gödel" feels too much like "He thought so and he said so", but maybe I'm just misunderstanding this by adding a connotation that wasn't there. More to the point, Gödel's theorem is about formal theories that can be computed. If you want to be precise this is a detail that cannot be missed because there are many theories that are complete and prove or refute every sentence. A quick summary would be "Gödel proved that no effective [here the word "effective" allows you to vaguely state that it should be computable], expressive enough, consistent mathematical system can prove or refute every sentence: it will always be incomplete. In particular, he proved that such a system cannot prove its own consistency." This allows you to be precise, while still being accessible (the words "effective", "expressive enough" are not detailed, so as to be understood by the non-logician), and is a good reflection of what Gödel actually did. My reformulation to "Gödel proved" (rather than "According to Gödel") is simply because I feel like this emphasizes more that this isn't just speculation or an idea he had but rather an actual theorem.
Austrian-born philosopher and logician Kurt Friedrich Gödel studied physics before publishing his famous (two) Incompleteness Theorem(s). Physics has nothing to do with the incompleteness theorems. I doubt he spent as much time or effort in physics as logic, although Wikipedia suggests he did spend a lot more time on physics later in life. According to Gödel, a mathematical system can’t prove or disprove every proposition within itself (it’s “incomplete”) No, there are logics, even rather complex ones, that are complete and consistent. Propositional logic and "the first-order theory of Euclidean geometry is complete and decidable" (from the link below). and can’t prove itself both complete and consistent. No. Sufficiently strong logics cannot be both complete and consistent. But any inconsistent logic can prove either of those. Gödel fled Nazi Germany, renewed his friendship with fellow émigré Albert Einstein, and became a U.S. citizen. Wikipedia agrees with this. Why don't you just say "immigrant"? As “the most important logician since Aristotle,” I would say Hilbert, but some logicians would agree with you. Gödel influenced computer science, artificial intelligence, and philosophy of mathematics. He was devoted to operetta. CS and philosophy are a given; the other two, no idea. Here is an alternate description of his work: ...According to Gödel, if a mathematical system can prove every statement that can be constructed in the system, then there must be some contradictory statements in the system: and if there are no contradictory statements, then there are statements that cannot be proved... That doesn't even make sense to me. Here is the first incompleteness theorem: If a logic can be verified by a Turing Machine, is capable of proving all theorems of Robinson Arithmetic, and is 1-consistent; then there is a sentence $G$ such that neither $G$ is provable nor is $\lnot G$ provable in the logic. (Small detail, but the strengthening of the statement from 1-consistent to consistent is actually due to Rosser.) If I could say something a bit harsh, the Internet is rife with people attempting to summarize or explain Godel's Incompleteness Theorems who never bothered to really learn them. Please don't be another. Silence is better. I'm not even confident about them, so I hesitated to write this. If you wish to learn them, I find this link does a really thorough and detailed job without containing extraneous pontifications: https://plato.stanford.edu/entries/goedel-incompleteness/ On the puppet tag we always have a short biography. As far as logicians with interesting lives, I find Moses Schönfinkel really compelling. Born a poor boy in Ukraine, talented enough to eventually study even under David Hilbert, he invented one of the first completely formal logics, in some sense the grandfather of all constructive logics. He was committed to an asylum before he was 40 and spent his later life in poverty, much as he had begun it. His personal papers/work were burned by his neighbors trying to stay warm because of wartime conditions.
Infinite closed subset of $S^1$ such that the squaring map is a bijection? Is there an infinite closed subset $X$ of the unit circle in $\mathbb C$ such that the squaring map induces a bijection from $X$ to itself?
Let us think of $S^1$ as $\mathbb{R}/\mathbb{Z}$, so we want an infinite closed subset $X$ on which multiplication by $2$ is a bijection. Suppose you have such an $X$; write $T:X\to X$ for the multiplication by $2$ map. Each element $x\in X$ determines a biinfinite binary expansion $f_x:\mathbb{Z}\to\{0,1\}$, such that $T^n(x)=\sum_{k=1}^\infty f_x(k+n)2^{-k}$ for each $n\in \mathbb{Z}$. Say that a finite string of $0$s and $1$s is admissible if it appears as a sequence of consecutive values of some $f_x$ (i.e., if it appears as a sequence of consecutive digits in the binary expansion of some element of $X$). For each $n$, let $A_n\subseteq\{0,1\}^n$ consist of those sequences $s$ such that both $0^\frown s$ and $1^\frown s$ are admissible (where $^\frown$ is string concatenation). If $A_n$ is empty for some $n$, that means that given a sequence of $n$ consecutive digits in any $f_x$, all of the preceding digits are uniquely determined. By pigeonhole, for each $x$, some sequence of $n$ digits must appear infinitely often in the restriction of $f_x$ to $\mathbb{N}$, and it follows that every $f_x$ is periodic. Furthermore, there is a uniform bound on the periods of all the $f_x$ (because if some particular $s\in\{0,1\}^n$ appears infinitely often in $f_x|_\mathbb{N}$, that determines the period of $f_x$, and there are only finitely many different such $s$). So there are only finitely many different $f_x$, so $X$ is finite. This is a contradiction. Thus each $A_n$ is nonempty. By König's lemma, it follows that there exists an infinite string $s:\mathbb{N}\to\{0,1\}$ such that every initial segment of $s$ is in the appropriate $A_n$. But then since $X$ is closed, the numbers $y_0$ and $y_1$ whose binary expansions are $0^\frown s$ and $1^\frown s$, respectively, are in $X$. Since $T(y_0)=T(y_1)$, this is a contradiction. Thus no such $X$ exists.
For a small $\theta$, close the set $\{e^{ix} : x \in (-\theta, \theta)\}$ under square roots to get an open set $U_{\theta}$. If $\theta$ is small, the measure of $U_{\theta}$ is small. So we can take $A$ to be the complement of $U_{\theta}$. Sorry, this only gives that $A$ is closed under squaring. Anyway, if $A$ is also required to be closed under square roots, then there is no such proper subset. This is because $(1, 0)$ is not an accumulation point of $A$, and therefore for some $\theta >0$, $\{e^{ix} : x \in (-\theta, \theta)\}$ is disjoint with $A$, and closing this under squaring gives the whole circle.
What is the modern axiomatization of (Euclidean) plane geometry? I have heard anecdotally that Euclid's Elements was an unsatisfactory development of geometry, because it was not rigorous, and that this spurred other people (including Hilbert) to create their own sets of axioms. I have two related questions: 1) What is the modern axiomatization of plane geometry? For example, when mathematicians speak of a point, a line, or a triangle, what does this mean formally? My guess would be that one could simply put everything in terms of coordinates in R^2, but then it seems to be hard to carry out usual similarity and congruence arguments. For example, the proof of SAS congruence would be quite messy. Euclid's arguments are all "synthetic", and it seems hard to carry such arguments out in an analytic framework. 2) What problems exist with Euclid's elements? Why are the axioms unsatisfactory? Where does Euclid commit errors in his reasoning? I've read that the logical gaps in the Elements are so large one could drive a truck through them, but I cannot see such gaps myself.
I can recommend an article Old and New Results in the Foundations of Elementary Plane Euclidean and Non-Euclidean Geometries by Marvin Jay Greenberg, The American Mathematical Monthly, Volume 117, Number 3, March 2010, pages 198-219. One of the great strengths of the article is that I am in it. Marvin promotes what he calls Aristotle's axiom, which rules out planes over arbitrary non-Archimedean fields without leaving the synthetic framework. If you email me I can send you a pdf. EDIT: Alright, Marvin won an award for the article, which can be downloaded from the award announcement page GREENBERG. The award page, by itself, gives a pretty good response to the original question about the status of Euclid in the modern world. As far as book length, there are the fourth edition of Marvin's book, Euclidean and Non-Euclidean Geometries, also Geometry: Euclid and Beyond by Robin Hartshorne. Hartshorne, in particular, takes a synthetic approach throughout, has a separate index showing where each proposition of Euclid appears, and so on. Hilbert's book is available in English, Foundations of Geometry. He laid out a system but left it to others to fill in the details, notably Bachmann and Pejas. The high point of Hilbert is the "field of ends" in non-Euclidean geometry, wherein a hyperbolic plane gives rise to an ordered field $F$ defined purely by the axioms, and in turn the plane is isomorphic to, say, a Poincare disk model or upper half plane model in $F^{\; 2}.$ Perhaps this will be persuasive: from Hartshorne, Recall that an end is an equivalence class of limiting parallel rays Addition and multiplication of ends are defined entirely by geometric constructions; no animals are harmed and no numbers are used. In what amounts to an upper half plane model, what becomes the horizontal axis is isomorphic to the field of ends. This accords with our experience in the ordinary upper half plane, where geodesics are either vertical lines or semicircles with center on the horizontal axis. In particular, infinitely many geodesics "meet" at any given point on the horizontal axis.
Any axiomatization of 2-dimensional Euclidean geometry is unjustifiable. In such a system, you just assume without proof that the undefined concept of distance satisfies certain intuitive properties and prove other properties from it, like the Pythagorean theorem. If you invent a space and explicitly define a lot of the relations on it, like distance, and show that those definitions satisfy the intuitive properties of 2-dimensional Euclidean geometry, it follows that that space can be represented by R^2 and that for any points (x1, y1) and (x2, y2), the distance from (x1, y1) to (x2, y2) is sqrt((x2 - x1)^2 + (y2 - y1)^2). We can define the distance formula to be sqrt((x2 - x1)^2 + (y2 - y1)^2) without giving a reason and then show that it satisfies the intuitive properties of distance. After we show that it satisfies the intuitive properties, we can derive from that fact that the formula is sqrt((x2 - x1)^2 + (y2 - y1)^2), but there's no need, because that can be proven directly from the definition of distance. Some people might want to know why it was defined that way, so for them it might suffice to show that that formula is the unique formula that satisfies the intuitive properties of distance. Those people might also be satisfied with seeing the distance formula proven by defining it, showing that it satisfies those intuitive properties, and then reproving it from those properties, because from that they can figure out how to prove that that formula is the unique one satisfying the intuitive properties of distance. The article http://speedydeletion.wikia.com/wiki/Distance that I wrote actually gives an incomplete proof that that formula is the unique formula that satisfies the intuitive properties of distance. A complete proof probably includes constructing the set of all real numbers with operations from the power set of the set of all natural numbers in Zermelo-Fraenkel set theory, then showing that that set is a totally ordered field, similar to what I described in my answer at What is a natural number?.
Which integers have order 6 (mod 37)? Which integers have order $6\ (\text {mod}\ 37)$? I have tried working with the equation $x^6-1\equiv 0\ (\text {mod}\ 37)$; I tried factoring it as $x^3-1$ and $x^3+1$ and expanding further, and I tried using the fact that adding $37$ to $x^6-1$ does not alter anything $\text {mod}\ 37$, but I could not get further.
HINT: Try to use Fermat's little theorem. EDIT: By the above-named theorem, $x^{36}=(x^6)^6\equiv 1 \ \ \bmod 37$. Not surprisingly, $x\equiv_{37}1$ is your first solution. $2^6=64\equiv_{37} 27$, so $27$ goes on the list. $3^6=3^4\cdot 3^2=81\cdot 3^2\equiv_{37}7\cdot 3^2\equiv_{37}26$, so $26$ also goes on the list. $4^6=(2^6)^2\equiv_{37}27^2\equiv_{37}26$. And so on... You will be able to use already-known information whenever $x$ is composite.
Look for a generator, which is any number $g$ such that $g^{12}\not\equiv 1$ and $g^{18}\not\equiv 1$. Here you get lucky, because the first number you try, $2$, is a generator. So now it's just $2^6$ and $2^{30}$.
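A direct search confirms the two answers; a short Python check (built-in `pow` with a modulus):

```python
# order of x is exactly 6 iff x^6 = 1 while x^2 != 1 and x^3 != 1 (mod 37)
order6 = [x for x in range(1, 37)
          if pow(x, 6, 37) == 1 and pow(x, 2, 37) != 1 and pow(x, 3, 37) != 1]
print(order6)                          # [11, 27]
print(pow(2, 6, 37), pow(2, 30, 37))   # 27 11, matching 2^6 and 2^30
```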
Finding $n$ such that $\frac{3^n}{n!} \leq 10^{-6}$ This question actually came out of another question. In some other post, I saw a reference, and going through it, I found this ($n>0$): Solve for $n$ explicitly without a calculator: $$\frac{3^n}{n!}\le10^{-6}$$ I would appreciate a hint rather than an explicit solution. Thank you.
Note that, for $n=3m$, $$3^{-3m}(3m)!=\left[m\left(m-\frac{1}{3}\right)\left(m-\frac{2}{3}\right)\right]\cdots\left[1\cdot\frac{2}{3}\cdot\frac{1}{3}\right] <\frac{2}{9}\left(m!\right)^3.$$ So you have to go at least far enough so that $$ \frac{2}{9}\left(m!\right)^3>10^{6}, $$ or $m! > \sqrt[3]{4500000} > 150$. So $m=5$ (corresponding to $n=15$) isn't far enough; the smallest $n$ satisfying your inequality will be at least $16$. Similarly, for $n=3m+1$, $$ 3^{-3m-1}(3m+1)!=\left[\left(m+\frac{1}{3}\right)m\left(m-\frac{1}{3}\right)\right]\cdots \left[\frac{4}{3}\cdot1\cdot\frac{2}{3}\right]\cdot\frac{1}{3} < \frac{1}{3}(m!)^3,$$ so you need $m!>\sqrt[3]{3000000}> 140$, and $m=5$ (that is, $n=16$) is still too small. Finally, for $n=3m+2$, $$ 3^{-3m-2}(3m+2)!=\left[\left(m+\frac{2}{3}\right)\left(m+\frac{1}{3}\right)m\right]\cdots \left[\frac{5}{3}\cdot\frac{4}{3}\cdot1\right]\cdot\frac{2}{3}\cdot\frac{1}{3} > \frac{560}{729}(m!)^3, $$ where the coefficient comes from the last eight terms, so it is sufficient that $m! > 100\cdot\sqrt[3]{729/560}.$ To show that $m=5$ is large enough, we need to verify that $(12/10)^3=216/125 > 729/560$. Carrying out the cross-multiplication, you can check without a calculator that $216\cdot 560 =120960$ is larger than $729\cdot 125=91125$, and conclude that $m=5$ (that is, $n=17$) is large enough. The inequality therefore holds for exactly all $n\ge 17$.
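The conclusion $n\ge 17$ is easy to confirm by exact arithmetic; a short Python check (standard library, using `Fraction` to avoid floating point):

```python
from fractions import Fraction
from math import factorial

n = 1
while Fraction(3**n, factorial(n)) > Fraction(1, 10**6):
    n += 1
print(n)   # 17: the smallest n with 3^n / n! <= 10^(-6)
```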
How about making a function? $f(n)=\frac{3^n}{n!}-10^{-6}$, or maybe $f(n)=\frac{3^n}{n!\,10^{-6}}$.
If a continuous function is strictly decreasing before a point and strictly increasing afterwards, is the point a global minimum? I'm in the middle of a proof that a point on a function is a global minimum. Usually I'd just solve an inequality to prove by contradiction that there are no points less than the minimum. But I can't in this case, since it's a transcendental equation (can't rearrange to make $x$ subject in terms of the elementary functions.) So I'm back to the drawing board. My question is: If a continuous everywhere function is strictly decreasing before a point and strictly increasing afterwards, is the point a global minimum? How can this be proved?
Let us assume for contradiction that the given point is $x_0$, and that it is not a global minimum. Hence there is another point $x \neq x_0$ such that $f(x)<f(x_0)$. But if $x \neq x_0$ then either $x< x_0$ or $x>x_0$; since $f$ is strictly decreasing before $x_0$, in the first case $f(x) > f(x_0)$, and in the second case similarly $f(x)>f(x_0)$. We get a contradiction, and hence such an $x$ does not exist. The continuity of $f$ is not necessary.
If you have been going downhill and at one stage things improved and never went down afterwards, then we say we turned around, and that turning point was the worst. This is well known to non-mathematicians too; it does not require calculus techniques to be able to see this.
Finding all the rational roots of $25x^3+25x^2-x-1$ Finding all the rational roots of $25x^3+25x^2-x-1$. So, I saw right away that $-1$ was a root. I then used synthetic division to factor this as: $(25x^2-1)(x+1)$. Then I found the roots to $25x^2-1$ as $\frac{1}{5},\frac{-1}{5}$. I've been telling my precalculus class that once you know the roots you know the factors, and if you know the factors you know the roots. However, in this case the original polynomial does not equal $(x+1)(x-\frac{1}{5})(x+\frac{1}{5})$. Can somebody tell me exactly what went wrong here? Thanks!!
The thing is, the fundamental theorem of algebra admits a leading coefficient. Let's be more explicit. Say a polynomial $f(x) = a_n x^n + a_{n-1} x^{n-1} + \cdots a_1 x + a_0$ has the roots $r_1, \cdots, r_n$, which may be complex numbers and not necessarily distinct. The leading coefficient, $a_n$, is assumed nonzero. Then we can write $f$ as a product of linear factors in terms of its roots and its leading coefficient: $$f(x) = a_n (x-r_1)(x-r_2)\cdots (x-r_n)$$ This is what you forgot, the leading coefficient. The roots are not sufficient to uniquely determine the polynomial on their own - you can easily see that, however you vary it, the roots would remain the same since $a_n$ is just a constant factor. With this in mind, augmenting your procedure in this example is quite simple. Find the roots of $f(x)$ as you have, and note that the leading coefficient is $25$. Then $$f(x) = 25(x + 1)\left( x - \frac 1 5 \right)\left( x + \frac 1 5 \right)$$ Of course, that $25$ can be distributed into the rightmost two factors for a nicer presentation if you so choose. $$f(x) = (x + 1) \cdot 5 \cdot \left( x - \frac 1 5 \right) \cdot 5 \cdot \left( x + \frac 1 5 \right) = (x+1)(5x-1)(5x+1)$$ However this is purely cosmetic and not at all necessary.
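SymPy reproduces both views of the same factorization, which makes a handy cross-check (standard SymPy API):

```python
import sympy as sp

x = sp.symbols('x')
f = 25*x**3 + 25*x**2 - x - 1
print(sp.roots(f))    # {-1: 1, 1/5: 1, -1/5: 1}: the roots with multiplicity
print(sp.factor(f))   # (x + 1)*(5*x - 1)*(5*x + 1): the leading 25 distributed
```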
Simple factorization.... $$ 25x^3+25x^2-x-1=25x^2(x+1)-1(x+1)=(x+1)(25x^2-1) $$ $$ =(x+1)(5x-1)(5x+1) $$
When can you take the derivative of both sides of an equation? I know in general you cannot take the derivative of an equation to solve it because the derivative at a point depends on neighboring points of a function. However, lots of the proofs done in my probability course, for example finding the variance of a geometric random variable is done by differentiating both sides. Why is this allowed?
Consider the equation $$x^3 = x + 6.$$ Here $x=2$ is a solution of the equation, but if we differentiate both sides we get $$3x^2 = 1,$$ and in this case $x=2$ gives $12=1$, which is not satisfied. This means the two curves merely intersect; they are not tangent to each other. If the two functions are actually equal (as functions), then we can differentiate both sides. For a more geometric perspective: the arithmetic operations act pointwise, but the derivative requires knowledge of a neighborhood. This means the derivative is not an operation on function values alone. If you sketch $x^3$ and $x+6$, you will get the answer to each of your questions.
I can't understand logical implication I just started studying logic (high school). Anyway, for the truth table of logical implication: if sentence $A$ is true and $B$ is true, then $A\implies B$ is true. Does that mean that if $A$ and $B$ are both true, then there is always a way to prove $B$ from $A$? And similarly, if $A$ is false, can you get anything, either true or false, proved from this $A$?
As a logical proposition, the material conditional $A \implies B$ is a very weak one: as you've noticed, it's very easy to satisfy it just by accident. In fact, this happens whenever $A$ is false, or whenever $B$ is true. Thus, merely observing that $A \implies B$, for some specific $A$ and $B$, says very little. Instead, the usefulness of implication lies in the fact that, precisely because of its weakness, it is often possible to assert $A \implies B$ as a universal statement (either an axiom or a provable theorem) that holds for any valuation of any free variables mentioned in the propositions $A$ and $B$. For example, consider the statement: $$x > 2 \;\land\; x \text{ is prime} \implies x \text{ is odd}.$$ Merely observing that this statement holds for some $x$ says very little: there are plenty of numbers for which it is trivially true, either because they are odd, or because they are not primes greater than 2. What makes this statement useful is that we can prove that it holds for all $x$: there isn't a single number which would be greater than 2 and prime, but not odd.
This illustration of logical implication might help: a) When you truly understand logical implication, b) then you’ll be a happy person. You may be a happy person for other reasons. But if you’re unhappy, then certainly you don’t truly understand logical implication. :-) Truth table:

a b a⇒b
F F T
F T T
T F F
T T T

I hope this somewhat intuitive example helps.
functions with floors If $$z = \frac{ \left\{ \sqrt{3} \right\}^2 - 2 \left\{ \sqrt{2} \right\}^2 }{ \left\{ \sqrt{3} \right\} - 2 \left\{ \sqrt{2} \right\} }$$ find $\lfloor z \rfloor$. I don't really know how to do this, but I was thinking about multiplying the denominator by its conjugate.
Assuming $\{\cdot \}$ is fractional part, $\{\sqrt{3}\} = \sqrt{3}-1$ and $\{\sqrt{2}\} = \sqrt{2}-1$, so $$ z = \frac{(\sqrt{3}-1)^2 - 2 (\sqrt{2}-1)^2}{\sqrt{3} + 1 - 2 \sqrt{2}}$$ Expand out the numerator and you should recognize that it is a certain integer times the denominator.
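To see the integer in question, one can let SymPy do the algebra exactly (a small sketch; `simplify` is the standard call):

```python
import sympy as sp

fs3 = sp.sqrt(3) - 1                      # {sqrt(3)}, since 1 < sqrt(3) < 2
fs2 = sp.sqrt(2) - 1                      # {sqrt(2)}, since 1 < sqrt(2) < 2
z = (fs3**2 - 2*fs2**2) / (fs3 - 2*fs2)
print(sp.simplify(z))                     # -2, so floor(z) = -2
```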
I think $\{ \cdot \}$ is the fractional part function. So we have $\{ x \} = x- \lfloor x \rfloor$ for $x \ge 0$, and $\{ x \} = x- \lceil x \rceil$ for $x \le 0$.
Function for which it is unknown whether it is continuous Is there any function $f:\mathbb R\rightarrow \mathbb R$ for which at least some values are known but it is unknown whether $f$ is continuous or not? Edit: I am looking for examples from actual research, not functions explicitly constructed for that purpose.
Let $f(x)=0$ for $x\neq0$ and $f(0)=$ the first even number that is not the sum of two primes.
find expectation of non-negative integer valued RV from generating function How can we find $E\left(X\right)$ and $E\left(X^{2}\right)$ if all we have is that $G\left(s\right)$ is the generating function for X, which takes non-negative integer values. I know $E\left(X\right)$ = $G'\left(1\right)$ and $G\left(s\right)=$$\sum_{i=0}^{\infty}s^{i}P\left(X=i\right)=\sum_{i=0}^{\infty}s^{i}f(i)$ but how can I take the derivative?
$E\left(X\right)$ = $G'\left(1\right)$ and $G\left(s\right)=$$\sum_{i=0}^{\infty}s^{i}P\left(X=i\right)=\sum_{i=0}^{\infty}s^{i}f(i)$ $G'(s) = \sum_{i=0}^{\infty}is^{i-1}f\left(i\right)$ $G''(s) = \sum_{i=0}^{\infty}i\left(i-1\right)s^{i-2}f\left(i\right)$ $G''(1) = \mathrm{E}(X^{2}) - \mathrm{E}(X)$ Just take the derivative with respect to s and you can play with my expression for $G''(s)$ (by splitting it into two summations) if you want to see the algebra.
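A concrete instance may help; here is a SymPy sketch using the Poisson pgf $G(s)=e^{\lambda(s-1)}$ as a stand-in for an abstract $G$ (the choice of distribution is purely for illustration):

```python
import sympy as sp

s, lam = sp.symbols('s lambda', positive=True)
G = sp.exp(lam * (s - 1))                 # Poisson(lam) pgf, for illustration

EX = sp.diff(G, s).subs(s, 1)             # G'(1)            -> lam
EX2 = sp.diff(G, s, 2).subs(s, 1) + EX    # G''(1) + G'(1)   -> lam**2 + lam
print(EX, sp.expand(EX2))
```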
$G''(1)=E(X^2)-E(X)$, so $E(X^2)=G''(1)+G'(1)$.
$1=2$ | Continued fraction fallacy It's easy to check that for any natural $n$ $$\frac{n+1}{n}=\cfrac{1}{2-\cfrac{n+2}{n+1}}.$$ Now, $$1=\frac{1}{2-1}=\frac{1}{2-\cfrac{1}{2-1}}=\frac{1}{2-\cfrac{1}{2-\cfrac{1}{2-1}}}=\cfrac{1}{2-\cfrac{1}{2-\frac{1}{2-\frac{1}{2-1}}}}=\ldots =\cfrac{1}{2-\cfrac{1}{2-\frac{1}{2-\frac{1}{2-\frac{1}{2-\dots}}}}},$$ $$2=\cfrac{1}{2-\cfrac{3}{2}}=\cfrac{1}{2-\cfrac{1}{2-\cfrac{4}{3}}}=\cfrac{1}{2-\cfrac{1}{2-\frac{1}{2-\frac{5}{4}}}}=\cfrac{1}{2-\cfrac{1}{2-\cfrac{1}{2-\frac{1}{2-\frac{6}{5}}}}}=\ldots =\cfrac{1}{2-\frac{1}{2-\frac{1}{2-\frac{1}{2-\frac{1}{2-\ldots}}}}}.$$ Since the right hand sides are the same, hence $1=2$.
A variant: note that $$\color{red}{\mathbf 1}=0+\color{red}{\mathbf 1}=0+0+\color{red}{\mathbf 1}=0+0+\cdots+0+\color{red}{\mathbf 1}=0+0+0+\cdots$$ and $$\color{green}{\mathbf 2}=0+\color{green}{\mathbf 2}=0+0+\color{green}{\mathbf 2}=0+0+\cdots+0+\color{green}{\mathbf 2}=0+0+0+\cdots$$ "Since the right hand sides are the same", this proves that $\color{red}{\mathbf 1}=\color{green}{\mathbf 2}$.
Incidentally, nobody appeared to have resolved the fallacy of the question, so I have provided an answer. For all $a\in \mathbf N$, it follows $$\cfrac{1}{1+a}=1-\cfrac{1}{2-\cfrac{1}{2-\cfrac{1}{2-\ddots - \cfrac 12}}}$$ such that the number of times the reciprocal in the continued fraction appears is $a$. Proof. Note the identity $$\cfrac{1}{1+a}=1-\cfrac{1}{1+\color{red}{\cfrac 1a}}.$$ By letting $a=b-1$, it follows $$\cfrac 1b = 1-\cfrac{1}{1+\cfrac{1}{b-1}}.$$ From this we can substitute for $\color{red}{\cfrac 1a}$. $$\therefore \cfrac{1}{1+a}=1-\cfrac{1}{2-\cfrac{1}{1+\cfrac{1}{a-1}}}.$$ Clearly we can now likewise substitute for $1/(a-1)$, and the pattern will continue until for some $k\in\mathbf N$, the denominator of $1/(a-k)$ reaches $a-k=1$ since it cannot pass $0$. In consequence, we deduce as desired. (And, of course, when $a=0$, we have $1/1 = 1-0$.) This completes the proof. $\;\bigcirc$ And now, since $$\lim_{a\to\infty}\frac{1}{1+a}=0$$ then $$\boxed{\cfrac{1}{2-\cfrac{1}{2-\cfrac{1}{2-\ddots}}}=1}$$
Given two distinct primes $p,q$ such that $(p-1)(q-1)=A$, are $p$ and $q$ uniquely determined by $A$? I believe another way I could phrase the question is: Given $f(m,n)\equiv(m-1)(n-1)$ where $m$ and $n$ are two primes: Is $f$ one-to-one for pairings of $(m,n)$? Or: Are there only two distinct primes $(p,q)$ which make $f=A$? I conjecture the answer is yes, there is only one pairing which exists for each output, but I can still somewhat envision there being some way to get non-distinct mappings from $A\to(m,n)$
No, for example for $A=72$, the equation $(p-1)(q-1)=A$ has $4$ prime solutions: $$(p,q)\in\{ (2, 73),(3, 37), (5, 19), (7,13)\}.$$ For $A=1080$ there are $6$ prime solutions: $$(p,q)\in\{ (3, 541), (5, 271), (7,181), (11,109), (19,61), (31,37)\}.$$
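Counterexamples like these are easy to find by brute force; a standard-library Python sketch for $A=72$:

```python
def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

A = 72
pairs = [(d + 1, A // d + 1)
         for d in range(1, int(A**0.5) + 1)
         if A % d == 0 and is_prime(d + 1) and is_prime(A // d + 1)]
print(pairs)   # [(2, 73), (3, 37), (5, 19), (7, 13)]
```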
It's quite easy to hunt for counterexamples. For example, consider the factors of $36$. $$\begin{array}{rr|rr} p-1 & q-1 & p & q \\ \hline 1 & 36 & 2 & 37 \\ 2 & 18 & 3 & 19 \\ 3 & 12 \\ 4 & 9 \\ 6 & 6 & 7 & 7 \\ \hline \end{array}$$ So $f(2,37)=f(3,19)=36$. Consider the factors of $60$. $$\begin{array}{rr|rr} p-1 & q-1 & p & q \\ \hline 1 & 60 & 2 & 61 \\ 2 & 30 & 3 & 31 \\ 3 & 20 \\ 4 & 12 & 5 & 13\\ 6 & 10 & 7 & 11 \\ \hline \end{array}$$ So $f(2,61)=f(3,31)=f(5,13)=f(7,11)=60$.
Lunch meeting probability: two people meet in a given 1-hour slot, and neither will wait more than 15 minutes. Two friends who have unpredictable lunch hours agree to meet for lunch at their favorite restaurant whenever possible. Neither wishes to eat alone and each dislikes waiting for the other, so they agree that each will arrive at a random time between noon and 1 pm, and each will wait for the other for 15 minutes or until 1:00. What is the probability that the friends will meet for lunch on a given day?
Per @AppDeveloper's request, changing it from a comment to an answer. Just an idea: consider $0$ to $30$ min; the other half is the same by symmetry. If A arrives at $0$ min, B has to arrive between $0$ and $15$ min, i.e., $p(B\leq 15|A=0)=\frac{1}{4}$. If A arrives after $15$ min, $p(B|A)=\frac{1}{2}$. Applying conditional probabilities and integrating, we get for $t \geq 15$ min $p(B|A)p(A)=\frac{1}{8}$, and for $0 \leq t \leq 15$, $p(B|A)p(A)=\frac{3}{32}$. Adding together and multiplying by $2$, we get $\frac{7}{16}$.
It does not matter who arrives first. Assume $A$ arrives first. Two cases are then to be considered. Case 1: $A$ arrives between 12:00-12:45. Then the chance that both will meet equals $3/4 \cdot 1/4 = 3/16$. Case 2: $A$ arrives between 12:45-13:00. Then the chance that both will meet equals $1/4\cdot 1= 1/4$. The chance that both will meet then equals $3/16 + 1/4 = 7/16$.
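The value $7/16 = 0.4375$ is also easy to confirm by simulation; a minimal NumPy sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10**6
a = rng.uniform(0, 60, n)             # arrival times in minutes
b = rng.uniform(0, 60, n)
print(np.mean(np.abs(a - b) <= 15))   # ~ 0.4375 = 7/16
```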
To discuss differentiability of a function at the origin, and my attempt P1: $F(x,y) = |x| + |y|$ when $(x,y)$ is not equal to $(0,0)$, and $F = 0$ when $x = y = 0$. P2: Discuss the differentiability at the origin of $F(x,y) = y \sin(1/x)$ (for $x \neq 0$).
Nearly good, but only for $|x|+|y|$. Do not use the argument that the partial derivatives are not continuous. The theorem says "if the partial derivatives... are continuous... then the derivative exists", but not the reverse. However, the argument with one-sided partial derivatives is enough. For the corrected version of P2 ($f(x,y)=y\sin(1/x)$ for $x\neq0$ and $0$ for $x=0$): $$ f'_y(0,0)=\lim_{y\to0}\frac{0-0}{y}=0,\quad f'_x(0,0)=\lim_{x\to0}\frac{0\sin(1/x)-0}{x}=0, $$ hence (in case P2) the partial derivatives exist. A function $f$ is differentiable at $(x_0,y_0)$ if $$ \lim_{(h,k)\to(0,0)}\frac{f(x_0+h,y_0+k)-f(x_0,y_0)-f'_x(x_0,y_0)h-f'_y(x_0,y_0)k}{\sqrt{h^2+k^2}}=0. $$ In our case we obtain $$ \lim_{(h,k)\to(0,0)}\frac{k\sin(1/h)-0-0-0}{\sqrt{h^2+k^2}}. $$ Let $k=h$. Then $$ \lim_{h\to0}\frac{h\sin(1/h)}{\sqrt{2h^2}}, $$ which doesn't exist. Edit: I can see that there is yet another definition of $f$ ($f(0,y) = y$). Then $f'_y(0,0)=1$ and the modifications are the following: $$ \lim_{(h,k)\to(0,0)}\frac{k\sin(1/h)-0-0-k}{\sqrt{h^2+k^2}}. $$ Let $k=h$. Then $$ \lim_{h\to0}\frac{h(\sin(1/h)-1)}{\sqrt{2h^2}}, $$ which again doesn't exist. Phew!
For the first question, your answer is wrong. The limits you've given do exist at the origin, the problem is that they come to different numbers depending on the direction. Thus the derivative is not well-defined.
Distance from Ellipsoid to Plane - Lagrange Multiplier Find the distance from the ellipsoid $x^2 + y^2 + 4z^2 = 4$ to the plane $x + y + z = 6$. I'm trying to do it using Lagrange multipliers over the distance equation, but then it just gets overwhelming and I have no idea how to go on? Can someone walk me through the computation?
This is easy if you don't insist on using Lagrange multipliers. The normal to the ellipsoid at the point $(x,y,z)$ is $\nabla(x^2+y^2+4z^2) = (2x, 2y, 8z)$. At minimum or maximum distance to the plane, this must be parallel to the normal to the plane, which is $(1,1,1).$ So $x = y = 4z$. Plug this back into the equation for the ellipsoid to get $36z^2 = 4$, or $z = \pm \frac13$. So the nearest and farthest points are $\pm(\frac43,\frac43,\frac13)$.
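If you do want to see the multiplier route carried out, SymPy will solve the system directly (a sketch; the equations just say $\nabla g$ is parallel to $(1,1,1)$, with the constraint appended):

```python
import sympy as sp

x, y, z, lam = sp.symbols('x y z lambda', real=True)
eqs = [2*x - lam, 2*y - lam, 8*z - lam,   # grad g = lam * (1, 1, 1)
       x**2 + y**2 + 4*z**2 - 4]          # constraint g = 0
for s in sp.solve(eqs, [x, y, z, lam], dict=True):
    d = sp.Abs(s[x] + s[y] + s[z] - 6) / sp.sqrt(3)
    print((s[x], s[y], s[z]), sp.simplify(d))
# (4/3, 4/3, 1/3) at distance sqrt(3); the opposite point is farthest
```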
I don't know how to use that maths language so please bear with me. (I am editing his answer as this seems to be the most relevant one apart from the LaTeX. In some countries, the undergrads are unaware of this language, but it shouldn't be the barrier.) Ellipsoid: $x^2 + y^2 + 4(z^2) = 4$; plane: $x + y + z = 6$. Now the distance of a point $(p,q,r)$ from a plane $ax+by+cz+d=0$ is given by $|\frac{ap+bq+cr+d}{\sqrt{a^2 + b^2 + c^2}}|.$ Now let the point on the ellipsoid at minimum distance from the plane be $(x,y,z)$. Therefore the minimum distance $= |\frac{x+y+z-6}{\sqrt3}|$. Now define a function $f(x,y,z)=(x+y+z-6)/\sqrt3$ and $g(x,y,z)=x^2 + y^2 + 4(z^2) - 4$. The minimum absolute value of $f$ is our answer, subject to the constraint $g(x,y,z)=0.$ Now, $\nabla(f) = (1/\sqrt3)i +(1/\sqrt3)j +(1/\sqrt3)k$ and $\nabla(g) = (2x)i + (2y)j + (8z)k$. Applying Lagrange's method: at the minimum value of $f$ subject to the constraint $g=0$, $\nabla(f) = m(\nabla(g))$ ...I've used $m$ instead of $\lambda$. Therefore $1/\sqrt3 = 2mx$, $1/\sqrt3 = 2my$, $1/\sqrt3 = 8mz$; therefore $x=y=4z$ and $x^2 + y^2 + 4(z^2) - 4 = 0$. Solving the above two equations we get two sets of points, $(4/3, 4/3, 1/3)$ and $(-4/3, -4/3, -1/3)$. The minimum and maximum distances occur at these points on the ellipsoid. The minimum distance is given by $|f|$; therefore the point of minimum distance is $(4/3, 4/3, 1/3)$ and the distance is $|f(4/3,4/3,1/3)|=\sqrt3$. You can also solve this problem by using six variables if you don't want to use the point-plane distance formula. It looks overwhelming using six variables, but it's easy to do; still, this method is better. For the six-variable method you have to solve for the points on the plane also.
Prove if $\lim_{x\to +\infty}(f(x)+f'(x))=0$ then $\lim_{x\to +\infty} f(x)=0$ Let $f$ be a real function, continuously differentiable on $\Bbb R$, such that $$\lim_{x\to +\infty}(f(x)+f'(x))=0;$$ prove that $$\lim_{x\to +\infty} f(x)=0.$$ I tried to use the exponential function, knowing that $$\frac{d}{dx}f(x)e^x=(f(x)+f'(x))e^x,$$ but I got nothing. Thanks in advance for an answer or an idea.
Could you prove it by contradiction? If $\lim_{x\rightarrow \infty} f'(x) \neq 0$ then $\lim_{x\rightarrow \infty} f(x) = -\lim_{x\rightarrow \infty} f'(x)$ but if $f'>0$ then $f$ is increasing, and if $f'<0$ then $f$ is decreasing - that seems like a contradiction.
If $f(x) = \int_{\cos x}^0 \tan(t)\mathrm dt$, what is $f'(x)$? If $f(x) = \int_{\cos x}^0 \tan(t)\mathrm dt$, what is $f'(x)$?
This is basically the First Fundamental Theorem of Calculus. \begin{align} f(x)&amp;=\int^0_{\cos(x)}\tan(t)dt\\ &amp;=-\int_0^{\cos(x)}\tan(t)dt\\ f'(x) &amp; = -\dfrac{d}{d\cos(x)}\int_0^{\cos(x)}\tan(t)dt\ \cdot\dfrac{d\cos(x)}{dx}\\ &amp;= \sin(x)\cdot\tan(\cos(x)) \end{align}
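SymPy applies the same chain-rule reasoning automatically, which makes a handy check (standard API; `Integral` stays unevaluated until differentiated):

```python
import sympy as sp

x, t = sp.symbols('x t')
f = sp.Integral(sp.tan(t), (t, sp.cos(x), 0))   # unevaluated integral
print(sp.simplify(sp.diff(f, x)))               # sin(x)*tan(cos(x))
```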
Use the rule $$\frac{d}{dx}\left(\int_a^x{f(t)dt}\right)=f(x)$$ and the composition rule. Let $f(x)=\int_{\cos x}^0{\tan t \, dt}=-h(\cos x)$ where $h(x)=\int_0^{x}{\tan t \, dt}$. So, $f'(x)=\frac{d}{dx}f(x)=-h'(\cos x)\cdot(\cos x)'=\sin x\cdot\tan(\cos x)$
Two variables limit question I proved that $f(x,y)= \dfrac{xy^2}{x^2 + y^3}$ does not have a limit at the origin. I used the two-paths test: first I followed the $x$ axis, then I followed $x = \frac{1}{2}(y^2 + (y^4 - 4y^3)^{1/2})$ for $y<0$. However, I am STILL looking for other solutions and other ideas. Any kind of answer, help or hint is appreciated.
Think of the path $(x,-x^{2/3})$ in the open fourth quadrant as $x\to 0^+.$ The denominator of your expression equals $0$ at every point of this path. Meanwhile the numerator is never $0$ in that open quadrant. That should give you pause. Even if you exclude this path from the domain of $f$ (which you should have done, otherwise you are dividing by $0$), you have insurmountable problems. There is no way around it: $f$ is unbounded in every neighborhood of $(0,0),$ so there is no hope of a limit.
Use polar coordinates, so your limit becomes $$\lim_{r\to0}\frac{r^3\cos\theta\sin^2\theta}{r^2\cos^2\theta+r^3\sin^3\theta}=\lim_{r\to0}\;r\frac{\cos\theta\sin^2\theta}{\cos^2\theta+r\sin^3\theta}=0$$
Testing Pythagorean triples: $333,444,555$ In this page there is a necessary and sufficient test given for testing Pythagorean triples: A simpler, more powerful test is, (by naming the even leg a): $(c − a)$ and $\large\frac{(c − b)}{2}$ are both perfect squares. This is both necessary and sufficient for the triple to be a PT. Using this here, we can write $a = 444$, $b=333$, $c=555$, which means $111$ and $\frac{222}{2}=111$ must be perfect squares, but they are not. Hence that test does not work here. Is there any necessary and sufficient condition that will work for every triple (other than summing up the squares and checking for a perfect square)? NOTE: $333,444,555$ is a Pythagorean triple, being $3\times111,4\times111,5\times111$, where $3,4,5$ is a Pythagorean triple.
Yes, there is a mistake in the phrasing of the condition. Every Pythagorean triple is of the form $$ a = 2k mn \qquad b = k(m^2 - n^2) \qquad c = k(m^2 + n^2) $$ which means that $$ c - a = k(m^2 + n^2 - 2mn) = k (m-n)^2 \qquad \frac{c-b}{2} = k n^2 $$ So the correct statement is that Let $d$ be the greatest common divisor of $c-a$ and $(c-b)/2$. Then a necessary and sufficient condition for $(a,b,c)$ to be a Pythagorean triple is that $(c-a)/d$ and $(c-b)/(2d)$ are both perfect squares.
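Here is the corrected criterion transcribed into code (my own addition; the helper names are mine, and the comments only use the necessity direction to rule candidates out):

from math import gcd, isqrt

def is_square(m):
    return m >= 0 and isqrt(m) ** 2 == m

def corrected_test(a, b, c):
    # a = even leg, b = odd leg, c = hypotenuse, as in the statement above
    d = gcd(c - a, (c - b) // 2)
    return d > 0 and is_square((c - a) // d) and is_square((c - b) // (2 * d))

print(corrected_test(444, 333, 555))  # True:  c-a = 111, (c-b)/2 = 111, d = 111
print(corrected_test(4, 3, 5))        # True:  c-a = 1,   (c-b)/2 = 1,   d = 1
print(corrected_test(6, 5, 8))        # False: fails, so (6, 5, 8) cannot be a Pythagorean triple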
Let $333$, $444$, and $555$ be the $x,y,z$ of a Pythagorean triple to be tested. Rule to remember: "a given triple is Pythagorean iff there are two integers ($j,k$; $k>j$) such that the square root of their product is an integer (obviously $x$), where $k = z+y$ and $j=z-y$." In this example: $k=555+444=999$, $j=555-444=111$; $jk = 110889$; $(jk)^{0.5} = 333$, which is an integer and hence $x$. The test succeeds.
Can I prove Pythagoras' Theorem using that $\sin^2(\theta)+\cos^2(\theta)=1$? In any right-angled triangle, the area of the square whose side is the hypotenuse (the side opposite the right angle) is equal to the sum of the areas of the squares whose sides are the two legs (the two sides that meet at a right angle). The theorem can be written as an equation relating the lengths of the sides $a$, $b$, and $c$, often called the Pythagorean equation: $$a^2 + b^2 = c^2$$ Can I prove Pythagoras' Theorem in the following way? Actually, my question is: does it violate any rules of mathematics, or is it alright? Sorry, it may not be a valid question for this site. But I want to know. Thanks.
The usual proof of the identity $\cos^2 t+\sin^2 t=1$ uses the Pythagorean Theorem. So a proof of the Pythagorean Theorem by using the identity is not correct. True, we can define cosine and sine purely "analytically," by power series, or as the solutions of a certain differential equation. Then we can prove $\cos^2 t+\sin^2 t=1$ without any appeal to geometry. But we still need geometry to link these "analytically" defined functions to sides of right-angled triangles. Remark: The question is very reasonable. The logical interdependencies between various branches of mathematics are usually not clearly described. This is not necessarily always a bad thing. The underlying ideas of the calculus were for a while quite fuzzy, but calculus was still used effectively to solve problems. Similar remarks can be made about complex numbers.
It is not true that you must use the Pythagorean theorem to prove that $\sin^2(x)+ \cos^2(x)= 1$. It depends upon how you have defined sine and cosine. It is, for example, perfectly proper to define $\sin(x)= \sum_{n= 0}^\infty \frac{(-1)^n}{(2n+ 1)!}x^{2n+ 1}$ and $\cos(x)=\sum_{n=0}^\infty \frac{(-1)^n}{(2n)!}x^{2n}$ or, equivalently, to define $\sin(x)$ as the function $y$ satisfying the differential equation $y''+ y= 0$ with initial conditions $y(0)= 0$, $y'(0)= 1$ and $\cos(x)$ as the function $y$ satisfying the differential equation $y''+ y= 0$ with initial conditions $y(0)= 1$, $y'(0)= 0$. In either case, you can then prove that $\sin^2(x)+ \cos^2(x)= 1$ without reference to the Pythagorean theorem.
3-D geometry: line intersecting 2 lines and parallel to plane Find the surface generated by a line intersecting the lines $$y=a=z$$ and $$x+3z=a=y+z$$ and parallel to the plane $$x+y=0$$ I tried to form the equation of a line which intersects the given two lines, i.e. $(y-a)+k_1(z-a)=0$ and $(x+3z-a)+k_2(y+z-a)=0$. But I don't know how to use the other (plane) condition.
Equation of the line L1 in symmetric form is $$\frac{x}{1} = \frac{y - a}{0} = \frac{z - a}{0} \tag1$$ Equation of the Line L2 in symmetric form is $$\frac{x -a}{-3} = \frac{y - a}{-1} = \frac{z}{1} \tag2$$ Let the required line 'L' intersect L1 at P and L2 at Q. Coordinates of P are $$(r,a,a) \tag3$$ Coordinates of Q are $$(-3s+a, -s+a, s) \tag4$$ Equation of L is $$\frac{x -r}{-3s-r+a} = \frac{y - a}{-s} = \frac{z-a}{s-a} \tag5$$ Given that this line L is parallel to the plane $x+y=0 \tag6$ Hence, its normal will be perpendicular to L So $$(-3s-r+a)+(-s)=0 \tag7$$ or $$r=a-4s \tag8$$ From equation 5 we have $$\frac{y-a}{z-a} = \frac{-s}{s-a}$$ Solving this for s, we will have $$s=\frac{a(y-a)}{y+z-2a} \tag9$$ From equation 5, we also have $$\frac{x -r}{-3s-r+a} = \frac{y - a}{-s} \tag10$$ Substituting the values of r and s obtained from equations 8 and 9, we get the required surface as $$y^2 + xy + yz + zx - 2ax -2az = 0$$
Reordering the equations of the line: $$\alpha: y+k_1 z-a k_1-a=0;\;\beta:x+k_2 y+(k_2+3) z -ak_2-a=0$$ The normal vector of plane $\alpha$ is $n_1=(0,1,k_1)$ and the normal vector of $\beta$ is $n_2=(1,k_2,k_2+3)$; the direction vector of the line is the cross product $m=n_1\times n_2=(3+k_2-k_1 k_2,k_1,-1)$. The line $(y-a)+k_1(z-a)=0;\;(x+3z-a)+k_2(y+z-a)=0$ is parallel to the plane $x+y=0$ if its direction vector is perpendicular to the normal vector of the plane $x+y=0$, which is $n=(1,1,0)$: $m\cdot n=0\to (3+k_2-k_1 k_2,k_1,-1)\cdot (1,1,0)=3 + k_1 + k_2 - k_1 k_2$. The condition is that $3 + k_1 + k_2 - k_1 k_2=0$, i.e. $k_2=\dfrac{k_1+3}{k_1-1}$, so $\alpha:y+k_1 z-a-a k_1=0$ and $\beta:(-1+k_1) x+(3+k_1) y+(3+3 (-1+k_1)+k_1) z-a (-1+k_1)-a (3+k_1)=0$. The direction of the line is $m=n_1\times n_2=(3+k_2-k_1 k_2,k_1,-1)$ where $k_2=\dfrac{k_1+3}{k_1-1}$, that is $m=(k_1^2-k_1,k_1-k_1^2,k_1-1)$. A point on the line is $P\left(-6 a,0,\dfrac{3 a}{2}\right)$, so a parametric equation of the line is $(x,y,z)=P+tm$, that is $x=-k_1 t+k_1^2 t-6a,y=k_1 t-k_1^2 t,z=\dfrac{3 a}{2}-t+k_1 t$. Note that from the first two equations $x=-y-6a$. Furthermore from the three equations we get $k_1= -\dfrac{4 y}{x+y+4 z},\;t= -\dfrac{(x+y+4 z)^2}{4 (x+5 y+4 z)}$. Substitute in the third equation $z=\dfrac{3 a}{2}-t+k_1 t$: $z=\dfrac{3 a}{2}+\dfrac{(x+y+4 z)^2}{4 (x+5 y+4 z)}+\dfrac{y (x+y+4 z)}{x+5 y+4 z}$. Simplify and get the result. The requested surface is the plane $x + y+6a=0$
Suppose A and B are sets. Prove that $A\subseteq B$ if and only if $A \cap B = A$. Suppose $A$ and $B$ are sets. Prove that $A \subseteq B$ if and only if $A \cap B = A$. Here's how I see it being proved. If $A$ and $B$ are sets, and the intersection of $A$ and $B$ is equal to $A$, then the elements in $A$ are in both the set $A$ and the set $B$. Therefore, the set $A$ is a subset of $B$, since all the elements contained in the intersection of the sets $A$ and $B$ are exactly the elements of $A$. Can I prove it that way?
Your proof is almost perfect; let me rectify it a bit: Let $A$ and $B$ be two sets. "The intersection of $A$ and $B$ is equal to $A$" is equivalent to "the elements in $A$ are in both the set $A$ and the set $B$", which is also equivalent to "the set $A$ is a subset of $B$", since all the elements of $A$ are contained in the intersection of the sets $A$ and $B$, which equals $A$.
Suppose $A$ is a subset of $B$. Let $x$ belong to $A$; then by hypothesis $x$ belongs to $B$. Hence $x \in A$ and $x \in B$, which implies that $x$ belongs to $A \cap B$. Accordingly $A$ is a subset of $A \cap B$. But we know that $A \cap B$ is always a subset of $A$. Hence $A \cap B$ is equal to $A$. On the other hand, suppose $A \cap B$ is equal to $A$. Then in particular $A$ is a subset of $A \cap B$. We know that $A \cap B$ is a subset of $B$. Hence $A$ is a subset of $B$.
How to solve this ODE: $(1-x^2)y''+2xy'-2y=-2$ Question: Solve the ODE $$(1-x^2)y''+2xy'-2y=-2$$ I think we can first solve $$(1-x^2)y''+2xy'-2y=0$$ I have found one solution, $y=x$, but I can't find all solutions. Thank you
By setting $y=1+f(x)$, we have that $f$ satisfies: $$(1-x^2) f'' + 2x f' - 2f = 0, $$ that can be solved using Frobenius' method. By setting: $$ f(z) = \sum_{j=0}^{+\infty} a_j z^j $$ we have: $$ z\,f'(z) = \sum_{j=1}^{+\infty} j a_j\, z^j,\qquad f''(z)=\sum_{j=2}^{+\infty}j(j-1)a_j z^{j-2} $$ $$(1-z^2)f''(z) = \sum_{j=0}^{+\infty}\left((j+2)(j+1)a_{j+2}-j(j-1)a_j\right)z^j$$ so: $$(j+2)(j+1)a_{j+2}-j(j-1)a_j+2ja_j-2a_j = 0 $$ or: $$(j+2)(j+1)a_{j+2} = (j-1)(j-2) a_{j}\tag{1} $$ so the relation $a_{j+2}=\frac{j-1}{j+1}\cdot\frac{j-2}{j+2}a_j$ gives that the solution associated with the initial conditions $a_0=0,a_1=1$ is given by $f(x)=x$ while the solution associated with the initial conditions $a_0=1,a_1=0$ is $f(x)=x^2+1$.
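As a quick check (my own addition, using sympy), one can verify that the particular solution $y=1$ combined with the two homogeneous solutions found above satisfies the original equation:

import sympy as sp

x, c1, c2 = sp.symbols('x c1 c2')
y = 1 + c1*x + c2*(x**2 + 1)   # particular solution plus the two series solutions
lhs = (1 - x**2)*sp.diff(y, x, 2) + 2*x*sp.diff(y, x) - 2*y
print(sp.simplify(lhs))        # prints -2, matching the right-hand side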
That is a Legendre equation. Deriving a solution on your own will be very difficult. Here is some information about it http://en.wikipedia.org/wiki/Legendre_polynomials
The 3rd raw moment of a binomial distribution What is the 3rd raw moment (that is, $ E\{X^3\} $) of a Binomial distribution with parameters $n$ and $p$? I am getting $n(n-1)(n-2)p^3 + 3n(n-1)p^2 + np$. Is it correct?
For such a distribution, it is best to compute the probability generating function (PGF) rather than MGF. That is to say, $$P_X(t) = \operatorname{E}[t^X] = \sum_{x=0}^n t^x \binom{n}{x} p^x (1-p)^{n-x} = \sum_{x=0}^n \binom{n}{x} (pt)^x (1-p)^{n-x} = (pt+1-p)^n,$$ the last equality being a consequence of the binomial theorem. From here, we observe/recall that $$\left[\frac{dP}{dt}\right]_{t=1} = \operatorname{E}\left[X t^{X-1}\right]_{t=1} = \operatorname{E}[X],$$ and similarly $$\left[\frac{d^2P}{dt^2}\right]_{t=1} = \operatorname{E}[X(X-1)],$$ and in general, $$\left[\frac{d^k P}{dt^k}\right]_{t=1} = \operatorname{E}[X(X-1)\cdots(X-k+1)].$$ Therefore, for $k = 1, 2, 3$, we get $$\operatorname{E}[X] = \frac{d}{dt}\left[(pt+1-p)^n\right]_{t=1} = np,$$ $$\operatorname{E}[X(X-1)] = \frac{d^2}{dt^2}\left[(pt+1-p)^n\right]_{t=1} = n(n-1)p^2,$$ $$\operatorname{E}[X(X-1)(X-2)] = n(n-1)(n-2)p^3,$$ and indeed in the general case, $$\operatorname{E}[X(X-1)\ldots(X-k+1)] = n(n-1)\ldots(n-k+1)p^k.$$ Therefore, $$\operatorname{E}[X^3] = (n(n-1)(n-2)p^3) + 3(n(n-1)p^2) + np.$$
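A numerical cross-check of the final formula (my own addition, plain Python):

from math import comb

def moment3_formula(n, p):
    return n*(n-1)*(n-2)*p**3 + 3*n*(n-1)*p**2 + n*p

def moment3_direct(n, p):
    # E[X^3] summed directly from the binomial pmf
    return sum(x**3 * comb(n, x) * p**x * (1-p)**(n-x) for x in range(n + 1))

n, p = 10, 0.3
print(moment3_formula(n, p), moment3_direct(n, p))  # both print 46.74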
Yes, it is correct. Use the expectation values of the first and second powers of the binomial random variable to derive the third moment. It then reduces to a matter of relating some finite telescoping sums.
Solution to $x^\alpha + p x = q$? I was wondering if there were any tricks, similar in spirit to Vieta's substitution, that would apply to the equation $$ x^\alpha + p x = q, $$ where $p,q$ and $\alpha$ are real constants. In particular $\alpha$ is not necessarily an integer. The goal is to solve for $x$. Thanks for your help!
There is no known closed-form general solution for arbitrary $\alpha$. However, rewriting the equation as $\color{blue}x=\sqrt[\large\alpha]{q-p\color{blue}x}~$ yields the following formula: $x=\sqrt[\large\alpha]{q-p~\sqrt[\large\alpha]{q-p~\sqrt[\large\alpha]{q-\ldots}}}$
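Numerically, the nested radical is just the fixed-point iteration $x \mapsto \sqrt[\alpha]{q-px}$. A minimal sketch (my own addition; convergence is not guaranteed for every $\alpha, p, q$, and the iterate must stay inside the domain of the root):

def nested_radical_root(alpha, p, q, x0=1.0, iters=200):
    # iterate x -> (q - p*x)**(1/alpha); no convergence checks, just a sketch
    x = x0
    for _ in range(iters):
        x = (q - p * x) ** (1.0 / alpha)
    return x

x = nested_radical_root(3, 2, 10)   # x^3 + 2x = 10
print(x, x**3 + 2*x)                # x ~ 1.8474, residual ~ 10.0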
Let me choose $a$ for $\alpha$ to make the notation easier: $$x^a = -px+q,$$ $$a\ln(x) = \ln(-px+q),$$ $$x = e^{\ln(-px+q)/a}.$$ Hope that helps.
How to show that $A(A+B)^{-1}B = B(A+B)^{-1}A$ I am trying this problem but could not proceed. Please give a hint. The exercise, if the above image is not showing clearly, is the following: Let $A$ and $B$ be two square $m \times m$ matrices for some integer $m \ge 2$ such that $A+B$ is invertible. Then show that $A(A+B)^{-1}B = B(A+B)^{-1}A$. [Note that $A$, $B$, or both may fail to be invertible, and that $A$ and $B$ may not commute.]
First note the following equation: $$A(A+B)^{-1}(A+B) = (A+B)(A+B)^{-1}A.$$ [Indeed, both sides of this equation just above are $A$.] That hint may or may not be enough. If you want more help, see below: Distributing on each side gives..... $$A(A+B)^{-1}A +A(A+B)^{-1}B = A(A+B)^{-1}A + B(A+B)^{-1}A.$$ But then subtracting both sides by $A(A+B)^{-1}A$ gives $$A(A+B)^{-1}B = B(A+B)^{-1}A,$$ which is what you want.
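A random-matrix sanity check of the identity (my own addition, using numpy):

import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))            # A + B is invertible almost surely
M = np.linalg.inv(A + B)
print(np.allclose(A @ M @ B, B @ M @ A))   # True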
You can try to prove: $$ A(A+B)^{-1} B \cdot \bigl(B(A+B)^{-1}A\bigr)^{-1}=I $$ (assuming the relevant inverses exist).
Real-world applications of prime numbers? I am going through the problems from Project Euler and I notice a strong insistence on Primes and efficient algorithms to compute large primes efficiently. The problems are interesting per se, but I am still wondering what the real-world applications of primes would be. What real tasks require the use of prime numbers? Edit: A bit more context to the question: I am trying to improve myself as a programmer, and having learned a few good algorithms for calculating primes, I am trying to figure out where I could apply them. The explanations concerning cryptography are great, but is there nothing else that primes can be used for?
Here is a hypothesized real-world application, but it's not by humans...it's by cicadas. Cicadas are insects which hibernate underground and emerge every 13 or 17 years to mate and die (while the newborn cicadas head underground to repeat the process). Some people have speculated that the 13/17-year hibernation is the result of evolutionary pressures. If cicadas hibernated for X years and had a predator which underwent similar multi-year hibernations, say for Y years, then the cicadas would get eaten if Y divided X. So by "choosing" prime numbers, they made their predators much less likely to wake up at the right time. (It doesn't matter much anyway, because as I understand it, all of the local bug-eating animals absolutely gorge themselves whenever the cicadas come out!) EDIT: I should have refreshed my memory before posting. I just re-read the article, and the cicadas do not hibernate underground. They apparently "suckle on tree roots". The article has a few other mild corrections to my answer, as well.
Prime numbers are used in public-key cryptography. They are used because multiplying two really big primes together is easy while factoring the product back into those primes is believed to be computationally hard, so primes are great for codes and for keeping things safe.
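To make this concrete, here is a deliberately tiny and insecure RSA-style sketch (my own illustration, not a real implementation; production systems use primes hundreds of digits long, fast primality tests, and proper padding):

from math import gcd

p, q = 61, 53               # two toy primes
n, phi = p * q, (p - 1) * (q - 1)
e = 17                      # public exponent, coprime to phi
assert gcd(e, phi) == 1
d = pow(e, -1, phi)         # private exponent: modular inverse (Python 3.8+)

message = 42
cipher = pow(message, e, n)   # anyone can encrypt with the public pair (e, n)
plain = pow(cipher, d, n)     # only the holder of d can decrypt
print(cipher, plain)          # plain == 42

The security rests on the belief that recovering $d$ from $(e, n)$ requires factoring $n$ into its prime factors $p$ and $q$.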
The diameter of Voronoi cells in Euclidean spaces Let $A \subset \mathbb{R}^d$ and let $(x_n)_{n \in \mathbb{N}} \subset \mathbb{R}^d$ be a sequence dense in $A$. For each $n \in \mathbb{N}$, let $V_{1,n},\dots,V_{n,n}$ be the sequence of Voronoi cells associated to the points $x_1, \dots, x_n$, where the ties are broken lexicographically, i.e.: $$V_{1,n} =\{x \in \mathbb{R}^d \mid \forall k\in\{2,\dots,n\}, |x-x_1|\le |x-x_k|\} \\ V_{2,n} =\{x \in \mathbb{R}^d \mid |x-x_2|<|x-x_1| \land \forall k\in\{3,\dots,n\}, |x-x_2|\le |x-x_k|\} \\ \vdots \\ V_{n,n} =\{x \in \mathbb{R}^d \mid \forall k\in\{1,\dots,n-1\}, |x-x_n| < |x-x_k|\} $$ If $x \in \mathbb{R}^d$ and $n \in \mathbb{N}$, define $V_n(x)$ as the unique element in $V_{1,n},\dots, V_{n,n}$ that contains $x$. Is it true that $$\forall x \in A, \operatorname{diam}\big(A \cap V_n(x)\big) \to 0, n \to \infty?$$ I suspect that this result should hold due to the finite dimensionality of $\mathbb{R}^d$, since at least in this case we can obtain bounded sets using a finite number of intersections of half-spaces. However, it seems quite involved from a geometric point of view to obtain this claim. Has anyone any idea? EDIT: note that we can WLOG assume that $A$ is the closure of the set whose points are those of the sequence $(x_n)_{n \in \mathbb{N}}$. Some context: I'm trying to prove the aforementioned result to obtain that if $g \colon A \to \mathbb{R}$ is a continuous function, then $$\forall x \in A, \sup_{y \in A \cap V_n(x)} |g(x) - g(y)| \to 0, n \to \infty.$$
For additional insight, one key intuition might be that, by definition of density, any part of the set will eventually be dotted by an arbitrarily fine cloud of points. Note that the definition of $ V_n $ is to choose among $ V_{1,n} \cdots V_{n,n} $ with $ n $ points present from the start. Sufficiently far down the sequence, there should be points arbitrarily close to $ x $ “all around” in order to “force” $ V_n $ into an arbitrarily small corner. Otherwise, some angle would remain open, in which an open ball could exist, the interior of which must intersect a region of $ A $ where the sequence could then not be dense. Edit: I see now that this is already plain for the particular case $ A = \mathbb{R}^d $, but does not help understanding what happens when the cells are unbounded. Would it be fair to say you are asking why it is that the unboundedness must resolve when capping by $ A $? It should be in principle easier to show that the diameter at least does not diverge to infinity. Any point in the sequence is eventually “surrounded” by other points “in the local directions of $ A $” as you so evocatively put it. Consider any $ a \in A $ and its path connected component of $ A $, then any path emanating from it must encounter points of the sequence within an arbitrarily thin thickness and arbitrarily close to it. Therefore no “local direction” is ever free to go on forever. This is far from rigorous, I hope it nevertheless contributes positively to the discussion.
Every metric space with a countable base is separable I know that every separable metric space has a countable base. I was wondering if we can get a countable dense subset from every metric space that has a countable base. Thank you very much!!
You don't even need a metric space for that direction, it holds in general. If your topology has a countable base, just pick one element out of every base set and you will get a countable dense subset.
In general, in any topological space with a countable base $B=\{O_i : i \in\mathbb N\}$, let $A=\{a_i : a_i \in O_i\}$ (one point chosen from each basic open set). You can prove that $A$ is a countable dense subset.
When does linearity of definite Riemann integrals hold? My Calculus textbook says if functions $f$ and $g$ are continuous on a closed interval $[a,b]$, then $$ \int_a^b (f(x)+g(x)) \, dx=\int_a^b f(x) \, dx+\int_a^b g(x) \, dx $$ where the integrals are in the Riemann sense. However, there are many important applications for functions with discontinuities. Does this identity also apply in all cases where all three integrals exist? If not, what other constraints are needed to include functions with discontinuities? *** Update *** I think I have an example where excluding discontinuities is relevant. $ \int_0^{\infty } \left(\sin (x) \cos \left(\frac{1}{x}\right)-\sin (x)\right) \, dx $ is well defined but $ \int_0^{\infty } \sin (x) \cos \left(\frac{1}{x}\right) \, dx-\int_0^{\infty } \sin (x) \, dx $ involves two integrals that don't exist. It seems the sufficient conditions used in my textbook are not met in this example because $\sin(x)$ is not continuous at $\infty$.
The correct theorem in the setting of Riemann integration is the following: If $f,g:[a,b]\to\Bbb{R}$ are Riemann-integrable over $[a,b]$, then $f+g$ is also Riemann-integrable over $[a,b]$, and in this case we have \begin{align} \int_a^b(f+g)&= \int_a^bf+\int_a^bg. \end{align} A proof should be available in any good textbook (for example it's in Spivak's Calculus book). Of course, continuous functions are Riemann-integrable so you can apply this result to continuous functions. There are of course also many Riemann-integrable functions which are not continuous; the theorem holds for these functions as well. As you can see from the theorem statement, there is no mention of continuity at all!
If we integrate via Lebesgue, then the same equality holds for discontinuous functions, provided $f$ and $g$ are discontinuous only on a dense countable subset of their domain. To be fair, the condition of being integrable doesn't depend on continuity: there exist non-continuous functions which are integrable. As an example you can take the floor function, which is clearly integrable but has discontinuities at every number in $\mathbb{Z}$, which is not dense in $\mathbb{R}$.
What is the mark of the winning team? $10$ teams, each with five contestants, have participated in a competition. According to the results, we give each person a grade. The grades are between $1$ and $50$ and we cannot use a grade twice. The winner is the team that gets the minimum score. What scores can the winning team get? My attempt: There is a lower bound $1+2+3+4+5=15$ and also an upper bound $\big\lfloor{\frac{1+2+\dots +50}{10}}\big\rfloor =127$, but I am not sure if we can reach the numbers between them.
Proposition: If the winning team can get a score of $n$, and $n>15$, then they can get a score of $n-1$. Proof: If the score $n$ of the winning team isn't the minimum of $15$, then there must be some player on the winning team who got exactly one point more than some player not on the winning team. Swap the grades of the two, and the winning team gets one point less, while some other team gets one point more, and the remaining teams are unchanged. Thus the winning team is still the winning team, with $n-1$ points. This proves that the winning team can "reach the numbers between them". What's left for you is finding the actual maximal score they can get. In other words, can they get $127$ and still be the winning team?
Every number from $15$ up to $127$ (in fact, up to $240$) can be written as a sum of five distinct numbers $\in\{1,2,\ldots,50\}$. Indeed, this is evidently possible for $15=1+2+3+4+5$. Assume it is possible for $n$ with $15\le n<240$, say $n=a_1+a_2+a_3+a_4+a_5$ with $1\le a_1<a_2<a_3<a_4<a_5\le 50$. If $a_i+1<a_{i+1}$ for some $i$, $1\le i<5$, then we are allowed to replace $a_i$ with $a_i+1$ and obtain a representation for $n+1$. The same holds if $a_5<50$. Thus we are left only with the case that $a_{i+1}=a_i+1$ for $1\le i<5$ and $a_5=50$, but then $n=46+47+48+49+50=240$, contradicting our assumption. We conclude that $n+1$ can also be written that way.
If the rank of $A$ is equal to the number of non-zero eigenvalues, do $A$ and $A^2$ have the same rank? Let $A$ be an $n$-by-$n$ matrix over some field. If it happens that $\operatorname{rank}(A)=$ number of non-zero eigenvalues of $A$, can we say that $\operatorname{rank}(A^2)=\operatorname{rank}(A)$? I believe we can say this (thinking about idempotent matrices) but I am not sure about the proof. Please give some hints to get started and the main idea.
Let $J= \operatorname{diag}(J_1,...,J_k)$ be the Jordan normal form. Note that $J^2= \operatorname{diag}(J_1^2,...,J_k^2)$ The rank of $J$ is given by the sum of the ranks of the blocks that is, $\operatorname{rk} J = \sum_k \operatorname{rk} J_k$. Similarly, $\operatorname{rk} J^2 = \sum_k \operatorname{rk} J_k^2$. It follows from the hypothesis that the rank of the Jordan block corresponding to the zero eigenvalue is zero. That is, the Jordan block is identically zero (and so is the square of the Jordan block). For the blocks $J_k$ corresponding to non zero eigenvalues, we have $\operatorname{rk} J_k = \operatorname{rk} J_k^2$. It follows that $\operatorname{rk} J = \operatorname{rk} J^2$. Alternative: Let $N = \ker A$. We see that $z=\dim N$ is the number of zero eigenvalues of $A$. Let $b_1,..,b_z$ be a basis for $N$ and complete the basis with $b_{z+1},...,b_n$. Note that $N$ is $A$ invariant and so in this basis, $A$ has the form $\begin{bmatrix} 0 &amp; A_{12} \\ 0 &amp; A_{22}\end{bmatrix}$. Furthermore, we must have $\det A_{22} \neq 0$ otherwise $A$ would have more that $ z$ zero eigenvalues. Then we want to show that $\ker A^2 = N$. Note that $A^2$ has the form $\begin{bmatrix} 0 &amp; A_{12}A_{22} \\ 0 &amp; A_{22}^2\end{bmatrix}$ and it follows from this that if $x \in \ker A^2$ then $x \in \ker A$.
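A numerical illustration of both the statement and why the hypothesis matters (my own addition):

import numpy as np

rng = np.random.default_rng(1)
P = rng.standard_normal((3, 3))                      # invertible almost surely
A = P @ np.diag([2.0, -1.0, 0.0]) @ np.linalg.inv(P)
# rank(A) = 2 = number of nonzero eigenvalues, so the hypothesis holds
print(np.linalg.matrix_rank(A), np.linalg.matrix_rank(A @ A))   # 2 2

N = np.array([[0.0, 1.0], [0.0, 0.0]])   # nilpotent: rank 1, no nonzero eigenvalues
print(np.linalg.matrix_rank(N), np.linalg.matrix_rank(N @ N))   # 1 0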
No. Let, for example, $A$ be the nilpotent matrix $$A=\begin{pmatrix}0&1\\0&0\end{pmatrix}$$ then we have $\operatorname{rank}(A)=1$ and $\operatorname{rank}(A^2)=\operatorname{rank}(0)=0$. Edit: I used in my answer above the classic definition of the rank of a matrix, which is the dimension of the image of the matrix. For the definition of the rank given by the OP, the answer is yes over the field $\Bbb C$, and to see this we use that every matrix is similar to a triangular matrix.
Number of squares in a rectangle. Given a rectangle of length $a$ and width $b$ (as shown in the figure), how many different squares of edge greater than $1$ can be formed using the cells inside? For example if $a=2$, $b=2$, then the number of such squares is just $1$.
In general, given an $n \times k$ grid of squares, to find the number of rectangles you can form, you would turn your grid into an $n \times k$ multiplication table, put the values into each square (i.e., the $i$th row and $j$th column would contain $i \cdot j$), and then sum them all up. Proving that this holds is a nice exercise. For your particular question, you are asked to find the number of squares in a $3 \times 3$ grid where each square has its sides greater than $1$. This is straightforward to figure out directly from your picture (how many $3 \times 3$ squares are there? how many $2 \times 2$ squares are there?) but the solution also becomes clear to anyone who proves the statement of the previous paragraph. The number of $3 \times 3$ squares is $1$, which is $1 \cdot 1$; the number of $2 \times 2$ squares is $4$, which is $2 \cdot 2$. Thus, the total is $1 + 4 = 5$. Incidentally, the number of $1 \times 1$ squares is $9$, which is $3 \cdot 3$. Note that $1, 4, 9$ are the entries of the diagonal in a $3 \times 3$ multiplication table. This is no coincidence! Given an $n \times n$ multiplication table, to find the number of squares, just add up all the elements of the diagonal. The formula for the sum of the first $n$ squares, by the way, is $n(n+1)(2n+1)/6$, which you could look up online or prove by induction. Since you want to exclude $1 \times 1$ squares, you would subtract $n^2$ from this sum, giving a final answer of $$S(n) = \frac{n(n+1)(2n+1)}{6} - n^2 = \frac{(n-1)n(2n-1)}{6}$$ Indeed, for $n = 3$, we find $S(3) = 5$ as desired.
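A brute-force check of the closed form (my own addition):

def count_squares(n):
    # squares of side s, 2 <= s <= n, in an n x n grid: (n + 1 - s)^2 positions each
    return sum((n + 1 - s) ** 2 for s in range(2, n + 1))

def closed_form(n):
    return (n - 1) * n * (2 * n - 1) // 6

assert all(count_squares(n) == closed_form(n) for n in range(1, 20))
print(count_squares(3))   # 5, as computed above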
In an $n\times p$ rectangle, the number of rectangles that can be formed is $\frac{np(n+1)(p+1)}{4}$ and the number of squares that can be formed is $\sum_{r=1}^n (n+1-r)(p+1-r)$.
A question about a canonical form of a quadratic form using Gauss's theorem Given the following quadratic form: $$Q(x)=x_1^{2}+x_3^2+4x_1x_2-4x_1x_3 $$ To obtain the canonical form I tried the following: $$Q(x)=x_1^{2}+x_3^{2}+4x_1x_2-4x_1x_3=4(x_1^{2})+(x_3^{2})-4x_1x_3-3x_1x_1+4x_1x_2-(4/3)x_2x_2+(4/3)x_2x_2=(2x_1-x_3)^2-... $$ There I stopped because I remembered that I shouldn't change the coefficient of $x_1^2$. From here I don't know how to continue. Thank you in advance for your understanding, and I look forward to your answer!
Since $x_1^2$ appears in $Q(x)$, you should start by writing $$ Q(x) = (ax_1 + bx_2 + cx_3)^2 + \star $$ for $a, b, c \in \mathbb{R}$ in such a way that $\star$ won't involve $x_1$ at all. Since the terms $$ x_1^2, 4x_1x_2, -4x_1x_3 $$ appear in $Q$, we choose $a = 1, b = 2, c = -2$ and get $$ (x_1 + 2x_2 - 2x_3)^2 = (x_1 + 2x_2)^2 - 4(x_1 + 2x_2)x_3 + 4x_3^2 \\ = \color{blue}{x_1^2} + \color{blue}{4x_1x_2} + 4x_2^2 - \color{blue}{4x_1x_3} - 8x_2x_3 + 4x_3^2 $$ and so $$ Q(x) = (x_1 + 2x_2 - 2x_3)^2 - 4x_2^2 + 8x_2x_3 - 4x_3^2 + x_3^2 \\ = (x_1 + 2x_2 - 2x_3)^2 -4 (\color{green}{x_2^2} - \color{green}{2x_2x_3}) - 3x_3^2. $$ Now we can repeat the process for the $x_2$ term. We have $$ (x_2 - x_3)^2 = \color{green}{x_2^2} - \color{green}{2x_2x_3} + x_3^2 $$ and so $$ Q(x) = (x_1 + 2x_2 - 2x_3)^2 - 4(x_2 - x_3)^2 + 4x_3^2 - 3x_3^2 \\ = (x_1 + 2x_2 - 2x_3)^2 - 4(x_2 - x_3)^2 + x_3^2 $$ and we're done.
There is a method I asked about at reference for linear algebra books that teach reverse Hermite method for symmetric matrices The main advantage is that it is a recipe with matrices, no need to carry variable names. The main disadvantage is the need to invert one matrix at the end; however, the matrix has determinant $\pm 1$ and may very well be upper triangular (it is this time). It leads to $$ x^2 + z^2 + 4 xy - 4 zx = (x +2y-2z)^2 - 4 (y-z)^2 + z^2 $$ and comes from this matrix stuff; I did it first by hand, it did work, but I thought i would check everything. The Pari code is not quite as readable as Latex but is not too bad. parisize = 4000000, primelimit = 500509 ? m = [ 1,2,-2; 2,0,0; -2,0,1] %2 = [1 2 -2] [2 0 0] [-2 0 1] ? m - mattranspose(m) %3 = [0 0 0] [0 0 0] [0 0 0] ? p1 = [1,-2,2; 0,1,0; 0,0,1] %4 = [1 -2 2] [0 1 0] [0 0 1] ? m1 = mattranspose(p1) * m * p1 %5 = [1 0 0] [0 -4 4] [0 4 -3] ? p2 = [ 1,0,0; 0,1,1; 0,0,1] %6 = [1 0 0] [0 1 1] [0 0 1] ? d = mattranspose(p2) * m1 * p2 %7 = [1 0 0] [0 -4 0] [0 0 1] ? p = p1 * p2 %8 = [1 -2 0] [0 1 1] [0 0 1] ? matdet(p) %9 = 1 ? q = matadjoint(p) %10 = [1 2 -2] [0 1 -1] [0 0 1] ? confirm = mattranspose(q) * d * q %12 = [1 2 -2] [2 0 0] [-2 0 1] ? m %13 = [1 2 -2] [2 0 0] [-2 0 1] ? m - confirm %14 = [0 0 0] [0 0 0] [0 0 0] ? ? ( x + 2 * y - 2 * z)^2 - 4 * (y - z)^2 + z^2 %1 = x^2 + (4*y - 4*z)*x + z^2 ========================================================= Places on this site I put this, several typeset: reference for linear algebra books that teach reverse Hermite method for symmetric matrices Bilinear Form Diagonalisation Given a $4\times 4$ symmetric matrix, is there an efficient way to find its eigenvalues and diagonalize it? Find the transitional matrix that would transform this form to a diagonal form. Writing an expression as a sum of squares Determining matrix $A$ and $B$, rectangular matrix Method of completing squares with 3 variables
Proof of triangle inequality I understand intuitively that this is true, but I'm embarrassed to say I'm having a hard time constructing a rigorous proof that $|a+b| \leq |a|+|b|$. Any help would be appreciated :)
From your definition of the absolute value, establish first $|x| = \max\{x,-x\}$ and $\pm x ≤ |x|$. Then you can use \begin{align*} a + b &amp;≤ |a| + b ≤ |a| + |b|,\quad\text{and}\\ -a - b &amp;≤ |a| -b ≤ |a| + |b|. \end{align*}
$|x+y|^2=(x+y)\cdot(x+y) =(x\cdot x)+2(x\cdot y)+(y\cdot y) =|x|^2+2(x\cdot y)+|y|^2$. From the Cauchy–Schwarz inequality, $|x\cdot y|\le|x||y|$, so $|x+y|^2 \le|x|^2+2|x||y|+|y|^2$, i.e. $|x+y|^2 \le(|x|+|y|)^2$. Taking the square root on both sides, $|x+y|\le|x|+|y|$.
Is this conception of countable vs. uncountable infinity adequate? I am not a mathematician, so please point out any mistakes I am making here - I am trying to grasp the concept of countable vs. uncountable infinity in a somewhat informal way and would like to know whether that conception makes sense. We can imagine the set of natural numbers as an axis that goes from some fixed point ($0$, or $1$ if you want) to infinity: Clearly, the points on this axis are countable, because we know exactly how the axis goes on and can therefore make a precise calculation about how many points the rest of the axis will contain. For the integers, we no longer have a fixed starting point, but the axis grows infinitely in two directions: However, we still have only one axis of fixed points and can make a precise calculation of how many points the axis as a whole will contain. The first thing that bugs me: Since the axis is calculably exactly twice as long, this should be a "larger infinity" than the for the natural numbers, right? But still, we would say that $\mathbb{Z}$ has the same cardinality as $\mathbb{N}$? For the rational numbers, things get a little more difficult, but we can still handle it: Any rational number can be displayed as the fraction between two integers - if I understood it correctly, this is what the Cantor pairing function does? - so we can just add a second axis to account for the combinatoric possibilities yielding $\mathbb{Q}$: The amount of points now doesn't simply add up, i.e. the axis doesn't just get longer (as in the step from $\mathbb{N}$ to $\mathbb{Z}$), but it multiplies, i.e. there are more axes now, so that's even a "larger increase of infinity". Is this correct? But we still have finitely many axes with countably many points, so the whole amount of points is countable too. Now for the case of real numbers, things look a bit differently. Clearly, a one-dimensional system doesn't suffice because we need to account for the digits behind the comma, so we need at least two axes, in order to create $0.0, 0.1, 0.2, ..., 1.0, 1.1, ...$: Now that doesn't suffice either, because from $1.1$, we can decide to either stay at $1.1$, which would be $1.10$ (Is it true that $1.1$ is in fact $1.10$ which is in fact $1.1000000...$, so that rational numbers are actually never really finite, or is this idea false and $1.1$ is really just $1.1$?) or go further to $1.11$, so we need another axis: We are now three-dimensional and can thus account for all the numbers with two digits behind the comma, but that still doesn't suffice, because between $1.10$ and $1.11$, we also have the numbers $1.101, 1.102, 1.103, ... $, and from any point we are, we are recursively stepping one dimension deeper, because for any digit we add, we again have all the possibilities to go on from that point, so we never reach a point where we can stop adding axes: (At this point I'm running out of imagination on how to draw a 7D diagram, sorry) Now we are at the point where we run into an infinite number of axes - and this corresponds to the set of real numbers $\mathbb{R}$ no longer being countably infinite. My question is: Is it adequate to say that countably infinite corresponds to finitely many dimensions, while uncountably infinite corresponds to infinitely many dimensions, or did I go anywhere wrong in my conception?
The notion of "countably infinite" is well named. Another word you can use is "enumerable," which is even more descriptive in my opinion. I understand your intuition on the subject and I see where you're coming from. Let me try to give you some insight into the agreements about infinity that have been reached over the years by the mathematicians who've worked on this problem. (This is what I wish someone had explained to me.) "Countably infinite" just means that you can define a sequence (an order) in which the elements can be listed. (Such that every element is listed exactly once.) The natural numbers are the most naturally "countable"—they're even called "counting numbers"—because they are the most basic, natural sequence. The word "sequence" itself is wrapped up in what we mean by natural numbers, which is just one thing following another, and the next one coming after that, and the next one after that, and so on in sequence. But the integers are countable as well. In other words, they are enumerable. You can specify an order which lists every integer exactly once and doesn't miss any of them. (Actually it doesn't matter, for the meaning of "countable," if a particular element gets listed more than once, because you could always just skip it any time it appears after the first time.) Here is an example of how you can enumerate (count, list out) every single integer without missing any: $0, 1, -1, 2, -2, 3, -3...$ The rational numbers are also countable, again because you can define a sequence which lists every rational number. The fact that they are listed means (at the same time) that they are listed in a sequence, which means that you can assign a counting number to each one. The real numbers are not countable. This is because it is impossible to define a list or method or sequence that will list every single real number. It's not just difficult; it's actually impossible. See "Cantor's diagonal argument." This will hopefully give you a solid starting point to understanding anything else about infinite sets which you care to examine. :)
Your understanding of the whole thing is a little off the road of regular mathematical sense. I am not a mathematician or even a math major, but I know some math, so let me tell you some of my understanding. The major difference between the sets $\mathbb R$ and $\mathbb N$ is that $\mathbb N$ is listable but $\mathbb R$ is not, and listable is just another way of saying countable. $\mathbb N$ is somewhat discrete while $\mathbb R$ is "continuous". Between every 2 reals there will always be a real, no matter rational or irrational, while between 2 consecutive integers no integer exists. There is no "next" for a real. As to the dimension thing, you have to know about linear combinations: dimension is defined as the minimal number of elements such that each element of the space can be represented as a linear combination of those elements. Those elements must also be linearly independent. In finite-dimensional spaces the situation is always simpler than in infinite-dimensional spaces. In infinite dimensions, people tend to find elements whose span is dense instead of being the whole space. As for $\mathbb R$, it is one-dimensional because a single real can represent all others by linear combination.
Finding the total number of proper subfields of $F$? I was thinking about the following problem: Let $F$ be a field with $5^{12}$ elements.Then how can I find the total number of proper subfields of $F$? Can someone point me in the right direction? Thanks in advance for your time.
I take it you mean, proper subfield. Can you show that any subfield of $F$ contains the field of $5$ elements? Can you show that any subfield must contain $5^r$ elements, for some $r$? Can you show that the degree of such a subfield (over the field of $5$ elements) must be $r$? and must be a divisor of the degree of the field of $5^{12}$ elements? Can you show that a finite field has at most one subfield of any given number of elements? If you can do all those, you have your answer.
The divisors of $12$ are $1,2,3,4,6,12$. Therefore $6$ subfields exist. "Proper subfield" means we exclude the one corresponding to $12$ itself, so the number of proper subfields of the field with $5^{12}$ elements is $5$.
Group of order $|G|=6545$ has either a normal Sylow 5-subgroup or a normal Sylow 17-subgroup. Let $G$ be a group of order $|G|=6545$. Show that $G$ has either a normal Sylow 5-subgroup or a normal Sylow 17-subgroup. My attempt (supposedly wrong): Given $|G|=6545=5\cdot7\cdot11\cdot17$, by Sylow's theorem, we have: $n_{5}=1\text{ mod }5\text{ and }n_{5}\mid7\cdot11\cdot17=1309\Rightarrow n_{5}=1\text{ or }11\\n_{7}=1\text{ mod }7\text{ and }n_{7}\mid5\cdot11\cdot17=935\Rightarrow n_{7}=1\text{ or }85\\n_{11}=1\text{ mod }11\text{ and }n_{11}\mid5\cdot7\cdot17=595\Rightarrow n_{11}=1\text{ or }595\\n_{17}=1\text{ mod }17\text{ and }n_{17}\mid5\cdot7\cdot11=385\Rightarrow n_{17}=1\text{ or }35$ Suppose $G$ has no normal Sylow 5-subgroup or normal Sylow 17-subgroup. Then we have $n_{5}=11$ and $n_{17}=35$, and there exist $11\cdot(5-1)+35\cdot(17-1)=604$ non-identity elements of order $5$ and $17$. Consequently we have $6545-604=5941$ elements of order 7 or 11. But this is impossible because the only possible outcomes of $n_{7}$ and $n_{11}$ are: $1\cdot(7-1)+1\cdot(11-1)+1=17\neq5941\\1\cdot(7-1)+595\cdot(11-1)+1=5956\neq5941\\85\cdot(7-1)+1\cdot(11-1)+1=521\neq5941\\85(7-1)+595\cdot(11-1)+1=6461\neq5941$ Thus, we must have $n_{5}=1$ or $n_{17}=1$. In other words, $G$ has either a normal Sylow 5-subgroup or a normal Sylow 17-subgroup. I know my solution is completely wrong and I'm not supposed to solve it this way. Can someone help me?
Here is one way to proceed. First case: $n_{5} = 1$. Then $G$ has a normal Sylow $5$-subgroup. Second case: $n_{5} = 11$. Let's consider the action of $G$ on the set of its Sylow $5$-subgroups by conjugation i.e the morphism $$\pi : \begin{array}[t]{rcl} G &amp; \longrightarrow &amp; \mathfrak{S}_{\left\{ \text{Sylow } 5\text{-subgroups of } G \right\}}\\ g &amp; \longmapsto &amp; \left(S \mapsto g S g^{-1}\right) \end{array}$$ Then $\frac{|G|}{|\text{Ker}(\pi)|} = |\text{Im}(\pi)| \geq 11$ since this action is transitive - all the Sylow $5$-subgroups are conjugate to each other - and divides both $|G| = 5 \cdot 7 \cdot 11 \cdot 17$ and $|\mathfrak{S}_{\text{Sylow } 5\text{-subgroups}}| = 11!$ Therefore, $$\frac{|G|}{|\text{Ker}(\pi)|} = 11, \ 5 \cdot 7, \ 5 \cdot 11, \ 7 \cdot 11 \text{ or } 5 \cdot 7 \cdot 11$$ and hence, $$|\text{Ker}(\pi)| = 17, \ 5 \cdot 17, \ 7 \cdot 17, \ 11 \cdot 17 \text{ or } 5 \cdot 7 \cdot 17$$ Furthermore, $|\text{Ker}(\pi)| \ne 5 \cdot 17 \text{ and } 5 \cdot 7 \cdot 17$. Indeed, let's suppose that $|\text{Ker}(\pi)| = 5 \cdot 17 \text{ or } 5 \cdot 7 \cdot 17$. Then, since $v_{5}(|G|) = v_{5}(|\text{Ker}(\pi)|)$ and $\text{Ker}(\pi) \vartriangleleft G$, the Sylow $5$-subgroups of $G$ are exactly the Sylow $5$-subgroups of $\text{Ker}(\pi)$. Hence, $n_{5} = 11 | 7 \cdot 17$. Contradiction. Thus, $$|\text{Ker}(\pi)| = 17, \ 7 \cdot 17 \text{ or } \ 11 \cdot 17 $$ Now, since $v_{17}(|G|) = v_{17}(|\text{Ker}(\pi)|)$ and $\text{Ker}(\pi) \vartriangleleft G$, the Sylow $17$-subgroups of $G$ are exactly the Sylow $17$-subgroups of $\text{Ker}(\pi)$. Hence, $n_{17} \equiv 1 \pmod{17}$ and $n_{17} | 7 \cdot 11$ which implies that $n_{17} = 1$. Thus, $G$ has a normal Sylow $17$-subgroup.
The 17-Sylow subgroup of a group of this order is ALWAYS normal. Simple solution: First, a group of this order is solvable. (You don't need the odd order theorem... groups of square-free order can be shown to be solvable by the Burnside transfer theorem, and by other means... there is a non-transfer proof in Marshall Hall's Theory of Groups that a square-free order group is metacyclic... i.e., it has a normal cyclic subgroup with cyclic factor group.) For solvable groups, Philip Hall's theorem (see, e.g., Th 9.3.1 in Marshall Hall's book) says that the number of Sylow-17 subgroups is a product of factors each of which is congruent to $1\pmod{17}$ and divides a chief factor. Since the chief factors of a (necessarily solvable) square-free group are the primes dividing it, the number of 17-Sylow subgroups must be a product of ones... i.e., 1. Thus, in fact a much more general result holds: ANY group of square-free order has a normal Sylow subgroup corresponding to its largest prime. In fact, a group of order 6545, by the above result, must have normal subgroups of orders 7, 11, and 17, and thus has a cyclic normal subgroup of index 5, which has automorphism group a direct product of cyclic groups of orders 6, 10, and 16, and that has a unique subgroup of order 5. Thus, a Sylow-5 element has two ways to act on the index 5 subgroup... trivially or faithfully. This means that, up to isomorphism, there are only two groups of order 6545--the cyclic group, and a semidirect product with a 5-element acting non-trivially on an index 5 normal cyclic subgroup. Note that another way to characterize the non-abelian group of order 6545 is as the direct product of a cyclic group of order 119 with a Frobenius group of order 55. I should add that if you have a reference that does not contain the counting statement of Philip Hall's theorem but only the other parts of it (e.g., Isaacs's book), I posted a proof of the counting statement as it applies to Sylow subgroups here: Sylow Counting Generalization of Hall Theorem. Note that even though this statement for Sylow subgroups applies to all finite groups you still need to show independently that the group is solvable, since that's how you know that the chief factors are all 17 or less.
Is $\mathbb Q^n$ disconnected? There is a problem in topology. Let $n > 1$ and let $X = \{(p_1,p_2, \ldots , p_n)\mid p_i\text{ is rational}\}$. Show that $X$ is disconnected. How do I solve this problem? I am completely stuck.
HINT: Consider the set $\{\langle p_1,p_2,\dots,p_n\rangle\in X:p_1&lt;\sqrt2\}$. Is it open? Closed?
If we take the line $Y=(\sqrt 2)Z + (\sqrt 3,\sqrt 3,\ldots,\sqrt 3)$ ($n$ times), where $Y,Z\in \mathbb R^n$, then the line cuts the space into two disjoint open sets; hence it is disconnected.
What is the probability that when you place 8 towers on a chess-board, none of them can beat the other? What is the probability that when you place 8 towers on a chess-board, none of them can beat the other? Attempt: ${64 \choose 8}^{-1} \approx 1$ in $4\ 400\ 000\ 000$ Correct answer: ${64 \choose 8}^{-1} \cdot 8! \approx 1$ in $110\ 000$. I disagree with the $8!$. If there are combinations (a binomial coefficient) in the denominator, why would there be permutations, i.e. where the order counts, in the numerator?
There are $\binom{64}{8}$ ways to place the eight rooks on the board. Out of these, there are $8!$ ways for the rooks not to be able to beat each other. Why? There must be one rook at each row, one at each column. So the placement will define a one-to-one map from the eight columns to the eight rows. There are $8!$ such maps. (The “towers” are called rooks in English.)
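Computed exactly (my own addition):

from math import comb, factorial

print(factorial(8) / comb(64, 8))   # ~9.11e-06, i.e. about 1 in 110,000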
That there is a combination but not a permutation in the denominator is supported by the fact that we need to choose 8 positions out of the total 64 to place the towers, and since the towers are identical there is no need to consider the 8! orderings of each chosen set of 8 positions. Now, to understand why there is a permutation in the numerator, let's look from a different point of view. Whenever you place a tower, you cannot place another in its own row or column. Try visualizing the row and column corresponding to the tower getting shaded as you move the tower across the board. After having placed all the 8 towers, if you look from the side of the chess board, they all appear in a line. There are 8! combinations possible in this line, and here we do not avoid the 8! permutations because though we are visualizing all 8 in a line, on every next combination they will not exchange places. Rather they will exchange their row number but remain in the same column. This way we will cover every row position in all 8 columns, thus covering the entire chess board. This gives us 8! combinations for the towers to be placed such that none of them can beat the other.
Prove by induction that $3^{4n+2}+1$ is divisible by $5$ when $n \ge 0.$ Prove by induction that $3^{4n+2}+1$ is divisible by $5$ when $n \ge 0.$ (1) When $n=0$ we have that $3^2+1 = 10$ which is divisible by $5$ clearly. (2) Assuming that the condition holds for $n=k.$ (3) Proving that it holds for $n=k+1$ $$3^{4(k+1) + 2} + 1 = 3^{4k + 6} + 1 = 3^4 \cdot 3^{4k+2} + 1$$ Since we assumed that $5 \mid 3^{4k+2} + 1$ we have that $$3^4 \cdot 3^{4k+2} + 1 = 3^4 \cdot 5t, \text{ where $t \in \Bbb Z$}.$$ Thus $5 \mid 3^{4(k+1) + 2} + 1$. Is this a valid proof? I'm not entirely sure I'm correct with this...
Hint: $3^4 \cdot 3^{4k+2} + 1= 3^4(3^{4k+2}+1)+1-3^4=3^4(3^{4k+2}+1)-80.$
An alternative strategy for this kind of problem is to consider the difference between consecutive terms. Let $f(n)=3^{4n+2}+1$. Then $f(n+1)-f(n)=80 \cdot 3^{4 n + 2}$ is a multiple of $5$. The claim follows by induction since $f(0)=10$ is a multiple of $5$. (Actually, this proves that $f(n)$ is always a multiple of $10$.)
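A quick empirical check for small $n$ (my own addition; evidence, not a proof):

print(all((3 ** (4 * n + 2) + 1) % 10 == 0 for n in range(50)))   # True: divisible by 10, hence by 5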
Changing streams in PhD I've a masters degree from a reputed Indian university in pure mathematics, with a specialization in Algebraic Number Theory. However, I'd like to apply for a PhD in computational math/theoretical computer science in US universities next year. How should I go about it? Considering that I've little or no formal background in the applied branches or computer science, do I stand a chance of getting accepted in a good PhD program? Also, what should I tell them when I write SOPs? How should I make myself appear to be a good candidate for a CS PhD without having any formal background in the area?
Speaking as someone who has a masters degree in Mathematics from India and is now doing a Ph.D. in CS in Europe: I would expect you to just be honest in SOPs about: Why do you want to do a Ph.D.? Maybe because you read about some topic and became interested, or you tried a machine learning model and realized that you would like to study more on this. Why should the university choose you? Because you have already shown the grit to understand highly rigorous and technical mathematical texts. A lot of theoretical computer science is based on pure mathematics. Regarding showing your commitment, you could do online courses to get a deeper understanding of which field to explore and which sub-field to specialize in. This would even help your CV.
You're way ahead of most U. S.-born college students. One time I asked a chemical engineering student what $\sqrt{-25}$ is. Wanna know what his answer was? 5. Let's just hope he doesn't decide to switch to electrical engineering. So if I was you, I'd be more worried about having too many programs to choose from. If you've got some family already in America, you'll probably be able to choose a program that is prestigious and close to your family.
Determine whether the set $H$ of all matrices forms a subspace of $M_{2 \times 2}$ Determine if the set $H$ of all matrices of the form $ \left[ \begin{array}{cc} a & b \\ 0 & d \end{array} \right] $ is a subspace of $M_{2 \times 2}$ (the set of all $2 \times 2$ matrices). This is something I came up with. Can someone look at it and let me know any useful corrections/suggestions to the question please. Answer: Without specification as to the nature of $a$, $b$ and $d$, it is assumed that $a,b,d \in \mathbb{R}$. Hence, $H$ is determined to be a subspace of $M_{2 \times 2}$ because it is closed under scalar addition and scalar multiplication and contains the zero vector when $a=b=d=0$.
What you have done looks reasonable except for one tiny point. When you say "because it is closed under scalar addition and scalar multiplication", you should delete the first (but not the second) scalar, as you are trying to show that adding two elements give a third member of the subspace. So you should end with Hence, $H$ is determined to be a subspace of $M_{2 \times 2}$ because it is closed under addition and scalar multiplication and contains the zero vector when $a=b=d=0$.
Let $A= \left[ \begin{array}{cc} a & b \\ 0 & d \end{array} \right]$ and $B= \left[ \begin{array}{cc} x & y \\ 0 & z \end{array} \right]$. Then $(AB)_{2,1}=dz$ and not $0$. It doesn't seem to be closed under multiplication on its own set.
Proving that an equation doesn't have integer solutions I need to prove that there are no integer solutions for a bunch of equations like the following: $$15x^2 - 7y^2 = 9$$ I was able to solve some simpler ones by picking a dividend and looking into its remainder table. But it's not working for the others. How should I start thinking about this kind of problem? It's from my algebra class and we are looking into divisibility and congruence. Thanks!
Hint: Maybe note first that $3$ must divide $y$. Let $y=3t$. Then we are looking at $15x^2-(7)(9t^2)=9$. So $3$ must divide $x$. Let $x=3s$. We end up with $$15s^2-7t^2=1.$$ Now work modulo $3$.
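For intuition, a finite search together with the mod-3 observation (my own addition):

sols = [(x, y) for x in range(-100, 101) for y in range(-100, 101)
        if 15 * x * x - 7 * y * y == 9]
print(sols)                           # [] -- consistent with the proof above
print({t * t % 3 for t in range(3)})  # {0, 1}: squares mod 3 are never 2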
Call $x^2=a$ and $y^2=b$; then we have $15a-7b=9$, which is a Diophantine equation. Its solution can be found by Euclid's algorithm, but also there is a theorem that no solutions exist iff $9$ isn't a multiple of the $\gcd$ of $15$ and $7$. In this case there are solutions, so you can find them, then show whether they can or can't be two squares.
Expectation of norm of a random variable Let $x_k$ be a random vector such that its expectation $$ E[\Vert x_k \Vert]&lt;a $$ for some $a&gt;0$. Then can we say that $$ E[\Vert x_k \Vert^2]&lt;a^2 ? $$
$\mathbb{E}[\lVert X\rVert^2]$ need not even be defined in general. Consider the random variable on $\mathbb{N}^\ast$ (vector of dimension 1) with probability mass function $$ \mathbb{P}\{ X = n\} = \frac{1}{\zeta(3)}\cdot\frac{1}{n^3}. $$ (for which you do have $\mathbb{E}[X] = \sum_{n=1}^\infty \frac{1}{\zeta(3)}\cdot\frac{1}{n^2} = \frac{\pi^2}{6\zeta(3)}$)
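Partial sums make the point visible numerically (my own addition):

from math import pi

N = 10**6
zeta3 = sum(1.0 / n**3 for n in range(1, N))
mean = sum(n / (zeta3 * n**3) for n in range(1, N))
print(mean, pi**2 / (6 * zeta3))   # both ~ 1.3684: E[X] is finite

# partial sums of E[X^2] grow like the harmonic series and never settle
for M in (10**2, 10**4, 10**6):
    print(M, sum(n**2 / (zeta3 * n**3) for n in range(1, M)))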
No, not in general, but you can say that $\left[\mathbb{E}\left(\lVert\vec{X_k}\rVert\right)\right]^2 < a^2$.
Reversing the usual inequality involving the determinant of the sum of positive definite matrices Given positive definite matrices $A$ and $B$, of dimension $n$, is it possible to derive an inequality of the form $$\det(A+B)\le f(\det(A),\det(B)),$$ where $f$ is some linear function (perhaps involving n)?. The Minkowski inequality goes in the other direction, with $f(X,Y)=X+Y$. How about this one, though? EDIT: I'm also open to allowing $f$ to contain information about the spectral norms of $A$ or $B$, or information of this kind.
This is false even for $2\times2$ diagonal matrices. For these, what you want reduces to \begin{eqnarray*} \left(a_{1}+b_{1}\right)\left(a_{2}+b_{2}\right) & = & \det\left(\left(\begin{array}{cc} a_{1}\\ & a_{2} \end{array}\right)+\left(\begin{array}{cc} b_{1}\\ & b_{2} \end{array}\right)\right)\\ & \overset{!}{\leq} & \alpha\cdot\det\left(\begin{array}{cc} a_{1}\\ & a_{2} \end{array}\right)+\beta\cdot\det\left(\begin{array}{cc} b_{1}\\ & b_{2} \end{array}\right)\\ & = & \alpha a_{1}a_{2}+\beta b_{1}b_{2} \end{eqnarray*} with suitable $\alpha,\beta \in \Bbb{R}$ and all $a_1,a_2,b_1,b_2 >0$. Now consider $a_{1}=b_{2}=n$ and $a_{2}=b_{1}=\frac{1}{n}$. Then the desired inequality becomes $$ n^{2}\leq\left(n+\frac{1}{n}\right)^{2}\leq\alpha+\beta $$ for all $n\in\mathbb{N}$, which is absurd.
You cannot. Basically, you want some constants $a,b,c$, which are possibly dependent on $n$, such that $$\det(A+B)\leq a\det (A) + b\det (B) + c .$$ However, take $A=\begin{bmatrix}1 & 0 &\dots &0\\ 0&0&\dots &0\\ \vdots &\vdots &\ddots &\vdots\\ 0&0&\dots&0\end{bmatrix}$ and $B=I_n - A$. Then, $\det(\alpha (A+B)) = \alpha^n$, and $\det(\alpha A)=\det(\alpha B)=0$, which means that for every real value $\alpha$, you have $\alpha^n \leq c$. Obviously, no such $c$ exists.
A Problem on Time Complexity of Algorithms I want to know if the following problem is solved or not, or how can I solve it? Problem: For every integer $t$, is there any problem that can be verified in $O(n^{s})$ but whose solution can be found in $T(n)=\omega(n^{st})$, i.e., its solution cannot be found in $O(n^{st})$? By verifying, I mean that given a candidate solution $y$, we can judge whether $y$ is correct or not in time $O(n^s)$.
I think what you want to know is covered by the answers to this question and this question on cstheory stackexchange. It seems that the answer is likely to be yes, since it follows from $\mathbf{P} \neq \mathbf{NP}$ by a padding argument, but it isn't currently proven. To elaborate, if I am understanding things correctly, what you are asking about is this statement: For all $t$ and all $s$, there is a problem in $\mathbf{NTIME}(n^s)$ that is not in $\mathbf{TIME}(n^{s t})$. There is a slight subtlety that you may be asking about search problems, but this is about decision problems, but I think that shouldn't matter. According to the links, we don't even know that $\mathbf{NTIME}(n^s) \neq \mathbf{TIME}(n^s)$ so we couldn't even prove the weaker statement of finding a lower bound of, say, $n^{s + 1}$, let alone a lower bound of $n ^{st}$ (for $t>1$). In other words, if we can't even manage to prove the much weaker statement that $\mathbf{NTIME}(n^s) \neq \mathbf{TIME}(n^s)$, we don't have much hope of proving the much harder statement that $\mathbf{NTIME}(n^s) \not \subseteq \mathbf{TIME}(n^{st})$. You can be fairly sure therefore that it is an open problem. I also claimed that the statement follows from $\mathbf{P} \neq \mathbf{NP}$. To prove this, we shall assume that the statement fails and use it to prove $\mathbf{P} = \mathbf{NP}$. If the statement is false, then in particular there is some $t$ and $s$ such that $\mathbf{NTIME}(n^s) \subseteq \mathbf{TIME}(n^{st})$. Let $f$ be a function that has a nondeterministic algorithm that runs in polynomial time. Let $p$ be a padding function that concatenates the string $11\ldots110$ to its input so that $f(n)$ terminates in less than $|p(n)|$ steps. Then there is a linear time nondeterministic algorithm for $f'$ such that $f'(p(n)) = f(n)$ and $f'(m) = 0$ if there is no $n$ such that $m = p(n)$ (essentially the same algorithm as for $f$ but it is now linear time because the input is longer). Since $f'(m) \in \mathbf{NTIME}(m)$ we know by assumption that $f'(m) \in \mathbf{TIME}(m^{st})$. So $f'$ has a deterministic algorithm that runs in polynomial time. By composing $f'$ with $p$, we also get a deterministic polynomial time algorithm for $f$, showing that $\mathbf{NP} \subseteq \mathbf{P}$.
You have an array of $n^{st}$ integers and want to check if there is a zero. Given an index, you can verify it in $O(1)$; to find it, you just use brute-force search, which takes $O(n^{st})$.
Normal subgroup of Normal subgroup If $H$ is a normal subgroup of $K$ and $K$ is a normal subgroup of $G$, can we say that $H$ is a normal subgroup of $G$? I could not prove it and cannot find a suitable counterexample. Will the result hold for $G$ abelian? If not, what would a counterexample be?
It is not true that $H \lhd K \lhd G$ implies $H \lhd G$ (although the stronger condition that $ H \text{ char } K\ \lhd G$ will force $H \lhd G$). A good tactic would be to choose a group $G$ with an abelian normal subgroup $K$. Any subgroup of $K$ must be abelian, hence normal in $K$. Your job is to find a subgroup of $K$ that's not normal in $G$. The alternating group on $4$ letters, $A_4$ is a good choice for $G$. There, you can find such a chain $H \lhd K \lhd G$ with $H \not\lhd G$.
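For readers who want to check a candidate chain by machine, here is a minimal, self-contained Python sketch (an editorial addition, not part of the original answer). It verifies one concrete instance of the hint above: inside $A_4$, take $K = V = \{e,(12)(34),(13)(24),(14)(23)\}$ and $H = \{e,(12)(34)\}$; then $H \lhd V$ and $V \lhd A_4$ but $H \not\lhd A_4$. Permutations are represented as tuples on $\{0,1,2,3\}$.

from itertools import permutations

def compose(p, q):
    # (p o q)(i) = p(q(i)); permutations are tuples over {0,1,2,3}
    return tuple(p[q[i]] for i in range(len(q)))

def inverse(p):
    inv = [0] * len(p)
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

def is_even(p):
    # parity of a permutation equals the parity of its inversion count
    inv = sum(1 for i in range(len(p)) for j in range(i + 1, len(p)) if p[i] > p[j])
    return inv % 2 == 0

A4 = [p for p in permutations(range(4)) if is_even(p)]

e = (0, 1, 2, 3)
a = (1, 0, 3, 2)              # (12)(34) in 1-based cycle notation
b = (2, 3, 0, 1)              # (13)(24)
V = {e, a, b, compose(a, b)}  # Klein four-group
H = {e, a}                    # subgroup of order 2

def is_normal(sub, grp):
    return all(compose(compose(g, h), inverse(g)) in sub for g in grp for h in sub)

print(is_normal(H, V))   # True:  H is normal in V (V is abelian)
print(is_normal(V, A4))  # True:  V is normal in A_4
print(is_normal(H, A4))  # False: H is not normal in A_4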
If $G$ is abelian then every one of its subgroups is normal: $gK = Kg$. Because $H$ is also a subgroup of $G$, the answer is yes and the proof is straightforward.
Prove ${\forall x \; \forall y \; (x + y = y + x)}$ Question: Determine the truth value of the statement if the universe of each variable consists of all the integers. Give a reason for your answer if the statement is true and provide a counterexample if the statement is false. ${\forall x \; \forall y \; (x + y = y + x)}$ $$\tag*{$(2\;marks)$}$$ Answer: True. Suppose $x = m$, $y = n$, with $m, n \in \mathbb Z$. By definition of commutativity, $m + n = n + m$. Then $x + y = y + x$, i.e. $m + n = n + m$. Can I prove it like this?
If you are given commutativity, the result follows immediately. If you are given the Peano axioms and the definition of addition in terms of the successor function, you should look at Landau's "Foundations of Analysis" (do a Google search).
Maybe you can use the successor function for the integers, where $s$ is the successor function: $x = s(x-1)$ and $y = s(y-1)$. Then $x + y = s(x-1) + y = s(x + y - 1)$ and $y + x = s(y-1) + x = s(y + x - 1)$. Now it remains to know whether $s$ behaves commutatively here. I think you cannot prove it this way, because this is only a particular case such as $x = y$.
How to find the solution of a quadratic equation with complex coefficients? I know how to find the solution of a quadratic equation with real coefficients. But if the coefficients change to complex numbers, what is the change in the solution? I would like an example of such an equation together with its solution.
It's no different. The quadratic formula works regardless of whether the coefficients are real or complex. Consider the example $$(3+i)x^2 + (2-i)x + (5+2i) = 0$$ The quadratic formula gives $$x = \frac{-(2-i) \pm \sqrt{ (2-i)^2-4(3+i)(5+2i) } }{2(3+i)}$$ Simplifying this is kind of a pain, of course. Under the radical you have to multiply everything out and combine terms. Eventually you get the radical into the form $\sqrt{M+Ni}$ where $M$ and $N$ are some constants -- in this example, they will be integers. Then the question is, how do you simplify the square root of a complex number? (For that, see How do I get the square root of a complex number?).
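To make the "kind of a pain" simplification concrete, here is a short Python sketch (an editorial addition, not part of the original answer) that evaluates the quadratic formula for the example above. Python's standard cmath.sqrt computes the principal complex square root, so no manual simplification of $\sqrt{M+Ni}$ is needed:

import cmath

def solve_quadratic(a, b, c):
    # roots of a*x^2 + b*x + c = 0 with complex coefficients
    disc = cmath.sqrt(b * b - 4 * a * c)  # principal complex square root
    return (-b + disc) / (2 * a), (-b - disc) / (2 * a)

# the example equation from above: (3+i)x^2 + (2-i)x + (5+2i) = 0
a, b, c = 3 + 1j, 2 - 1j, 5 + 2j
for r in solve_quadratic(a, b, c):
    print(r, "residual:", a * r * r + b * r + c)  # residuals should be ~0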
Are the following graphs isomorphic? I have to prove or disprove that the following graphs are isomorphic. If I give the following isomorphism, \begin{align*} a & \mapsto 1\\ b & \mapsto 2\\ e & \mapsto 5\\ d & \mapsto 8 \end{align*} then we see that $h$ has only one vertex not in common with $a$, which is $g$, but the vertex $8$ has two vertices which are not directly connected to the vertex $1$. Hence the two graphs are not isomorphic. Am I correct?
You have a few problems. First, you haven't given the full map between the vertex sets. Second, you've only argued that this map isn't an isomorphism. You haven't clearly ruled out the possibility that some other map is an isomorphism. I think you're trying to argue that the partial map you've shown is perfectly general, but you haven't stated anything to that effect. Typically, showing that two graphs are not isomorphic involves showing some invariant differs between them. Common invariants are degree sequence, diameter, connectivity, chromatic number, and Hamiltonicity. As a hint, notice that every edge in the left graph is contained in two otherwise disjoint 4-cycles.
The given graphs are not isomorphic. Let me call the left graph $G$ and the right graph $H$. We can see that graph $G$ has no odd-length cycle, so $G$ is a bipartite graph (actually $G$ is the hypercube graph $Q_3$). Graph $H$ has cycles of length $5$ (for example, take $1-2-3-4-5-1$), so $H$ is not a bipartite graph. So $G$ and $H$ are not isomorphic.
Let $f:[0,2]\to \mathbb{R}$ be a continuous function with no roots. Prove that the function is not surjective and $f(0)\cdot f(2)>0$ Let $f:[0,2]\to \mathbb{R}$ be a continuous function with no roots i) Prove that the function is not surjective ii) Show that $f(0)\cdot f(2)>0$ Got no ideas, maybe $y=f(x)$? Doesn't seem helpful though. Also, is it possible to change the interval from $[0, 2]$ to $[a, b]$? I remember asking about a problem here for a particular 'case' and it ended up being true for every 'case': How to prove the following integral equation? $\int_{0}^{c}x^2f(x)=0$ (hope it makes sense what I'm saying) Thanks in advance!
Since $f$ has no roots, there is no $x \in [0,2]$ such that $f(x) = 0$; hence the value $0 \in \mathbb R$ is never attained, so $f$ is not surjective. (Updated) Assume $f(0)\cdot f(2) \le 0$. This implies that either at least one of $f(0)$ or $f(2)$ is $0$ (contradiction), or they have different signs, which by the intermediate value theorem means there is a zero between $0$ and $2$ (contradiction). Therefore $f(0)\cdot f(2) > 0$.
i) $f^{-1}(0) = \emptyset$. ii) Is not true, consider $f \equiv 1$.
Dual of $l^\infty$ is not $l^1$ I know that the dual space of $l^\infty$ is not $l^1$, but I didn't understand the reason. Could you give me an example of an $x \in l^1$ such that if $y \in l^\infty$, then $ f_x(y) = \sum_{k=1}^{\infty} x_ky_k$ is not a bounded linear functional on $l^\infty$, or maybe an example of an $x \notin l^1$ such that if $y \in l^\infty$, then $ f_x(y) = \sum_{k=1}^{\infty} x_ky_k$ is a bounded linear functional on $l^\infty$?
The point is the following: There are bounded functionals on $\ell^\infty$ which are not of the form $$ f(y) = \sum_k x_k y_k $$ for some $x$. I do not know if such a functional can be given explicitly, but they do exist. Let $f \colon c \to \mathbb R$ (where $c \subseteq \ell^\infty$ denotes the set of convergent sequences) be given by $f(x) = \lim_n x_n$. Then $f$ is bounded, as $|\lim_n x_n| \le \sup_n |x_n| = \|x\|$. Let $g \colon \ell^\infty \to \mathbb R$ be a Hahn-Banach extension. If $g$ were of the above-mentioned form, we would have (with $e_n$ the $n$-th unit sequence) $$ x_n = g(e_n) = f(e_n) = 0 $$ hence $g = 0$. But $g \ne 0$, as for example $g(1,1,\ldots) = 1$.
Counterexample: Consider the linear functional $\phi$ given by $\phi(x) = \lim_{N\rightarrow\infty} \frac{1}{N}\sum_{n=1}^N x_n$ (strictly, $\phi$ is defined on the subspace of $l_\infty$ where this limit exists, and extended to all of $l_\infty$ by Hahn-Banach). Now, $\phi$ (which you might call the average functional) is bounded since $|\phi(x)|\le \lim_{N\rightarrow\infty}\frac{1}{N}\sum_{n=1}^N ||x||_\infty = ||x||_\infty$. Assume $\phi(x) = \sum_{n=1}^\infty x_n y_n$ for some $y\in l_1$. Then, for $\delta = \{1, 0, 0, ... \}\in l_\infty$, you have $\phi(\delta)=0=y_1$. Similarly, for each $e^n = \{0, ... 0, 1, 0, ...\}$ with a "single $1$" at position $n$ (thus $\delta=e^1$), you have that $\phi(e^n)=0=y_n$. Then $y_n = 0$ is the zero sequence, so $\phi$ would be the zero functional. But, since, e.g., for $x_n = \frac{1+(-1)^n}{2}$ (the sequence $0,1,0,1,\ldots$) you have $\phi(x)=0.5\ne 0$, you have a contradiction.
If the roots of the equation $p(q-r)x^2+q(r-p)x+r(p-q)=0$ are equal, show that $\dfrac {1}{p}+\dfrac {1}{r}=\dfrac {2}{q}$. If the roots of the equation $p(q-r)x^2+q(r-p)x+r(p-q)=0$ are equal, show that $$\dfrac {1}{p}+\dfrac {1}{r}=\dfrac {2}{q}$$ My Attempt: $$p(q-r)x^2+q(r-p)x+r(p-q)=0$$ Comparing with $ax^2+bx+c=0$, we get: $$a=p(q-r)$$ $$b=q(r-p)$$ $$c=r(p-q)$$ Since the roots are equal: $$b^2-4ac=0$$ $$(q(r-p))^2-4p(q-r)r(p-q)=0$$ Multiplying and simplifying a bit: $$q^2r^2-2pq^2r+p^2q^2-4p^2qr+4pq^2r+4p^2r^2-4pqr^2=0$$
Dividing by $p^2q^2r^2$ gives $$\frac{1}{p^2} +\frac{2}{pr}+\frac{1}{r^2}-\frac{4}{qr}+\frac{4}{q^2}-\frac{4}{pq}=0,$$ i.e. $$\Bigl(\frac{1}{p}\Bigr)^2+\Bigl(\frac{-2}{q}\Bigr)^2+\Bigl(\frac{1}{r}\Bigr)^2+2\Bigl(\frac{1}{p}\Bigr)\Bigl(\frac{1}{r}\Bigr)+2\Bigl(\frac{1}{p}\Bigr)\Bigl(\frac{-2}{q}\Bigr)+2\Bigl(\frac{1}{r}\Bigr)\Bigl(\frac{-2}{q}\Bigr)=0.$$ Using $$(a+b+c)^2=a^2+b^2+c^2+2ab+2bc+2ac$$ this is $$\Bigl(\frac{1}{p}+\frac{1}{r}-\frac{2}{q}\Bigr)^2=0.$$ You can carry on: a real square vanishes only if the expression itself is zero, so $\frac{1}{p}+\frac{1}{r}=\frac{2}{q}$.
Let $A=p(q-r)$, $B=q(r-p)$ and $C=r(p-q)$. Then $A+B+C=0$, so $B=-(A+C)$. Since the roots are equal, $B^2-4AC=0$, so $(A+C)^2-4AC=0$, so $(A-C)^2=0$, so $A=C$, i.e. $p(q-r)=r(p-q)$. Dividing throughout by $pqr$ gives $\frac{q-r}{qr}=\frac{p-q}{pq}$, i.e. $\frac{1}{r}-\frac{1}{q}=\frac{1}{q}-\frac{1}{p}$, hence $\frac{2}{q}=\frac{1}{p}+\frac{1}{r}$. QED
Number of possible pairs I have a problem counting all the possible ways of "pairing" people into groups of 2 in a group of N people (let's assume N is even). Example: The professor wants the students to work in pairs (groups of two). In how many ways could the students pair up? I have seen that the answer is $$\frac{N!}{2^{N/2}\cdot\frac{N}{2}!} $$ So, the way I understand it: it is the number of all possible orderings ($N!$) divided by $$2^{N/2}$$ because ... I don't know (that is the reason for this question) ... and divided by $$(N/2)!$$ because of the permutations. So, could you explain to me why this division by $$2^{N/2}$$? Also, what is the probability for 2 people (let's say student 1 and student 2) to be in the same group?
Since $N$, the number of students, is even, let's call it $2M$. The formula you quote can be obtained using the following reasoning. First let us divide the set of people not into pairs, but into pairs with designated leader, who gets to wear a special leader shirt. We can choose the set of leaders in $\dbinom{2M}{M}$ ways. Now assign a follower to each leader. This can be done in $M!$ ways. So the number of pairings with designated leader is $\dbinom{2M}{M}M!$, which simplifies to $\dfrac{(2M)!}{M!}$. But if we want to divide into plain groups of two, we have overcounted by a factor of $2^M$. For if $D$ is the number of democratic groups of $2$, then the number of groups-with-leader is $2^MD$. This is because for any division into democratic groups, there are $2^M$ ways to choose the persons in the democratic division who will wear the leader shirts. Thus $$2^M D=\frac{(2M)!}{M!},$$ and now solving for $D$ we have our count. Remark: The following is another way to count the number of divisions into groups, this time directly. Line up the students in order of student number, or weight, or beauty. For concreteness, assume there are $12$ people in the group. The first student in the list can select her partner in $11$ ways. For each such way, the first still unpartnered person in the lineup can select her partner in $9$ ways. Once this has been done, the first unpartnered person in the list can select her partner in $7$ ways. And so on. So the number of divisions into groups is $$(11)(9)(7)(5)(3)(1).$$ To get back to the formula of the OP, multiply top and bottom by the "missing" numbers, so by $(12)(10)(8)(6)(4)(2)$. On top, we get $12!$. At the bottom, we get $(12)(10)(8)(6)(4)(2)$. Taking out the $2$'s, we get $(2^6)(6!)$. The same idea works in general. The number of groups of $2$ when there are $2M$ students is $(2M-1)(2M-3)(2M-5)\cdots(5)(3)(1)$.
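A brute-force check of this count for small $N$ (an editorial addition, not part of the original answer; pairing the first remaining person with each possible partner in turn is one straightforward way to enumerate):

from math import factorial

def pairings(people):
    # enumerate all ways to split an even-sized list into unordered pairs
    if not people:
        yield []
        return
    first, rest = people[0], people[1:]
    for i, partner in enumerate(rest):
        remaining = rest[:i] + rest[i + 1:]
        for sub in pairings(remaining):
            yield [(first, partner)] + sub

for n in (2, 4, 6, 8, 10):
    count = sum(1 for _ in pairings(list(range(n))))
    formula = factorial(n) // (2 ** (n // 2) * factorial(n // 2))
    print(n, count, formula)  # the two counts agree, e.g. 10 -> 945 = 9*7*5*3*1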
If we have a group of $n$ students, the first has $(n-1)$ possible partners, the second has $(n-2)$ because we already counted the possibility of being a partner with the first, and so on, until the last one has no possibilities left, because we have already counted all the possibilities of being a partner with all the group members. So the total number of possibilities is $(n-1)+(n-2)+\dots+1$, or $1+2+3+\dots+(n-1)$, which is a simple sum of terms of an arithmetic series: the first term plus the last term, times the number of terms, divided by two: $\frac{((n-1)+1)(n-1)}{2}=\frac{n(n-1)}{2}$. The general case of choosing $k$ from $n$ is $\frac{n!}{(n-k)!\,k!}$. Note: here $AA$ cannot be a pair, and $AB$ is not counted if $BA$ has already been counted.
Can I define a function as the set of these points? Can I define a function as the set of these points as $k$ goes to infinity? $$ \lim_{k\to \infty}\bigg (\frac{k}{k-n},\frac{k-n}{k}\bigg)$$ Here $n$ ranges over the natural numbers less than $k$, from least to greatest: $n=1,2,3,\dots,k-1$. A point is plotted for each $n$. For example, for $k=10$ it would look like: $$ \bigg (\frac{10}{10-n},\frac{10-n}{10}\bigg) $$ and there would be $9$ points because $n=1,2,3,4,\dots,9$.
A function consists of two sets (the domain and the codomain) and a graph between them. I think you have a certain relation between the points, but the concept of a limit has nothing to do with this. Examine the definition of a function until it is clear to you.
The function $f(x)=\frac{1}{x}$ passes through all the desired points, since each point satisfies $y=1/x$ by construction, and is therefore the function you are after.
Proof of intersection and union of Set A with Empty Set I need to prove the following: Prove that $A\cup \varnothing =A$ and $A\cap \varnothing =\varnothing$ It's my understanding that to prove equality, I must prove that both are subsets of each other. So to prove $A\cup \varnothing =A$, we need to prove that $A\cup \varnothing \subseteq A$ and $A\subseteq A\cup \varnothing$. However, I found an example proof for $A \cup A$ in my book and I adapted it and got this: $A\cup \varnothing =\{x:x\in A \ \text{or} \ x\in \varnothing \} = \{x:x\in A\} = A$ $A\cap \varnothing =\{x:x\in A \ \text{and} \ x\in \varnothing \} = \{x:x\in \varnothing \} = \varnothing$ Do my proofs look ok?
This looks fine, but you could point out a few more details. For instance, $x\in \varnothing$ is always false. Therefore $x \in A \text{ or } x\in \varnothing $ is logically equivalent to $ x \in A $ and therefore the two set descriptions $$ \{x \mid x \in A \text{ or } x \in \varnothing\},\quad \{x\mid x \in A\} $$ must describe the same set, since the conditions are true for exactly the same elements $x$. Similarly, because $x \in \varnothing$ is trivially false, the condition $x \in A \text{ and } x \in \varnothing$ will always be false, so the two set descriptions $$ \{x \mid x \in A \text{ and } x \in \varnothing\},\quad \{x\mid x \in \varnothing \} $$ must describe the same set. Of course, for any set $B$ we have $$ B = \{x \mid x \in B\} $$
We need to prove that $A \cup \varnothing = A$. It can be written as $A \cup \varnothing = \{x : x \in A \text{ or } x \in \varnothing\}$. To do it in the simplest way I will use an example. Write in roster form $A=\{1,2,3\}$ and $\varnothing=\{\}$. When you write the union it comes out to be $\{1,2,3\}$, therefore $A\cup\varnothing=A$.
A Problem on Improper Integrals Let $f(x)$ be continuous except at $x = 0$ and let $a &gt; 0$. Assume that the improper integral $$\int_{0}^{a}f(x)\,dx = \lim_{\epsilon \to 0+}\int_{\epsilon}^{a}f(x)\,dx$$ exists and let $$g(x) = \int_{x}^{a}\frac{f(t)}{t}\,dt$$ Show that $$\int_{0}^{a}g(x)\,dx = \int_{0}^{a}f(x)\,dx$$ I tried integration by parts noting that $g'(x) = -f(x)/x$ and obtained for $0 &lt; \epsilon &lt; a$ the following $$\int_{\epsilon}^{a}g(x)\,dx = [xg(x)]_{x = \epsilon}^{x = a} - \int_{\epsilon}^{a}xg'(x)\,dx$$ or $$\int_{\epsilon}^{a}g(x)\,dx = -\epsilon g(\epsilon) + \int_{\epsilon}^{a}f(x)\,dx$$ The problem is solved if we can somehow show that $\lim_{\epsilon \to 0+}\epsilon g(\epsilon) = 0$. Looking at the definition $g(x)$ we see that we have no information of the behavior of $f(t)$ at $t = 0$ and the $t$ in denominator complicates the analysis of $g(\epsilon)$. Please suggest some hints which can lead to the solution. Note: This problem is taken from G. H. Hardy's "A Course of Pure Mathematics" 10th ed. Page 397.
Hint: Define $$h(x)=\int_x^af(t)dt,\ \forall x\in[0,a].$$ Then for every $x\in(0,a]$, $$g(x)=-\int_x^a\frac{h'(t)}{t}dt=\frac{h(x)}{x}-\int_x^a\frac{h(t)}{t^2}dt=\frac{h(x)}{a}+\int_x^a\frac{h(x)-h(t)}{t^2}dt.$$ Given $\delta\in(0, a]$, denote $$M_\delta=\max_{x,y\in[0,\delta]}|h(x)-h(y)|.$$ When $x\in(0,\delta)$, $$|g(x)-\frac{h(x)}{a}|\le \int_x^\delta\frac{|h(x)-h(t)|}{t^2}dt+\int_\delta^a\frac{|h(x)-h(t)|}{t^2}dt\le \frac{M_\delta}{x}+\frac{M_a}{\delta}.$$
Find $g'(x)=-f(x)/x$, so that $x\,g'(x)=-f(x)$. Integrate both sides from $0$ to $a$ using integration by parts: $$\Big[x\,g(x)\Big]_0^a - \int_0^a g(x)\,dx = -\int_0^a f(x)\,dx.$$ Now use the given definition of $g$ with $x=a$ to see that $g(a)=0$, so the boundary term drops out; then just solve and you will get the desired result.
Sylow Theory: $gHg^{-1} \leq P$ Let $G$ be a finite group and $P$ be a Sylow $p$-subgroup of $G$. If $H$ is an arbitrary $p$-subgroup of $G$, then there exists a $g \in G$ such that $gHg^{-1} \leq P$ Proof: Using the $1$st Sylow theorem, we have that every $p$-subgroup of $G$ is contained in a Sylow $p$-subgroup. Also, from the $2$nd Sylow theorem, for every $P'$ Sylow $p$-subgroup there exists a $g \in G$ such that $gP'g^{-1} = P$. Hence $$ gHg^{-1} \subset gP'g^{-1}=P $$ Any suggestions for showing $gHg^{-1}$ is a subgroup of $P$?
Hint: consider the double coset decomposition of $G$ with respect to $H$ and $P$
Question about the graphical sequences A sequence $( d_1, d_2,...,d_p)$ is said to be graphic if and only if it is the degree sequence of some simple graph with p vertices. Show that the sequences $(7,5,5,5,3,2,1)$ and $(6,6,5,4,2,2,1)$ are not graphic. So for the first one, $7$ can't work because there are only $6$ other vertices to connect to. But what if an edge of multiplicity $2$ were allowed? Then $7$ would work, but I don't think this is what the question is looking for anyway. Using graph theory in discrete math, how would you solve this?
I think your argument for the first one holds. Usually in these kinds of problems graphs don't have two edges connecting the same vertices. For the second, two $6$s mean that every vertex must have at least two neighbors, since both $6$s connect to every other vertex. But then you can't have a $1$ in the sequence.
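For a mechanical cross-check (an editorial addition, not part of either answer), the Havel–Hakimi algorithm decides whether a sequence is graphic, and it rejects both sequences from the question:

def is_graphic(seq):
    # Havel-Hakimi test: is seq the degree sequence of some simple graph?
    seq = sorted(seq, reverse=True)
    while seq and seq[0] > 0:
        d = seq.pop(0)
        if d > len(seq):       # not enough remaining vertices to attach to
            return False
        for i in range(d):     # connect to the d next-highest-degree vertices
            seq[i] -= 1
            if seq[i] < 0:
                return False
        seq.sort(reverse=True)
    return True

print(is_graphic([7, 5, 5, 5, 3, 2, 1]))  # False
print(is_graphic([6, 6, 5, 4, 2, 2, 1]))  # False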
It is impossible to draw this graph. A simple graph has no parallel edges nor any loops. There are only 7 vertices in the first sequence, so each vertex can only be joined to at most six other vertices, so the maximum degree of any vertex would be 6. Hence, you can't have a vertex of degree 7.
Limit of $x^ny^n$ when $n \to \infty$ Please, could someone help me solve the following limit. Let $x \in \mathbb{R}$, $y \in \mathbb{R}$ and $n \in \mathbb{N}$. Also consider that $0<x<1$ and $1<y<\infty$. $$ \lim_{n \to \infty}x^ny^n = L $$ What is the value of $L$?
$$x^ny^n = (xy)^n$$ If $xy<1$, Limit $= 0$. If $xy=1$, Limit $= 1$. If $xy>1$, Limit $\to \infty$.
$L=\infty$ since the function is in form $\frac{\infty}{0}$ where $n\to \infty$
Distance between ellipse and line What is the distance between the ellipse $$\frac{x^2}{9}+\frac{y^2}{16}=1$$ and the line $y=6-x$. I think I need to be using Lagrange Multipliers but don't know how.
Just another approach, for the sake of variety. You can also observe that at the point of closest approach, the ellipse must be parallel to the line $y = 6-x$, which has slope $-1$. Implicit differentiation of the ellipse gives us $$ \frac{2x}{9}+\frac{2y}{16}\frac{dy}{dx} = 0 $$ $$ 16x+9y\frac{dy}{dx} = 0 $$ $$ \frac{dy}{dx} = -\frac{16x}{9y} $$ which must equal $-1$ at the point of closest approach (with $x, y &gt; 0$—a quick sketch will show why), so $$ y = \frac{16x}{9} $$ Plug this back into the equation for the ellipse, solve for the point, and obtain the distance.
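Carrying this out (an editorial addition): substituting $y = 16x/9$ into the ellipse gives $25x^2/81 = 1$, so the closest point is $(9/5, 16/5)$ and the distance is $|9/5 + 16/5 - 6|/\sqrt 2 = 1/\sqrt 2$. A small Python sketch confirming this numerically, including a brute-force pass over the parametrized ellipse $(3\cos t, 4\sin t)$:

import math

# closest point from the tangency condition above
x = 9 / 5
y = 16 * x / 9
assert abs(x**2 / 9 + y**2 / 16 - 1) < 1e-12   # the point lies on the ellipse
print(abs(x + y - 6) / math.sqrt(2))            # 0.70710... = 1/sqrt(2)

# brute-force cross-check over the parametrized ellipse (3 cos t, 4 sin t)
ts = (2 * math.pi * k / 200000 for k in range(200000))
best = min(abs(3 * math.cos(t) + 4 * math.sin(t) - 6) for t in ts) / math.sqrt(2)
print(best)                                     # agrees to high accuracy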
Yes, you can use Lagrange multipliers: minimize $x^2+ y^2$ (which is equivalent to minimizing $\sqrt{x^2+ y^2}$) subject to the constraints $\frac{x^2}{9}+ \frac{y^2}{16}= 1$ and $x+ y= 6$. $\nabla (x^2+ y^2)= \langle 2x, 2y\rangle$, $\nabla \left(\frac{x^2}{9}+ \frac{y^2}{16}\right)= \langle 2x/9, y/8\rangle$, and $\nabla (x+ y)= \langle 1, 1\rangle$. So you have $\langle 2x, 2y\rangle= \lambda\langle 2x/9, y/8\rangle+ \mu\langle 1, 1\rangle$ together with the constraints to solve for $x$ and $y$, $\mu$, and $\lambda$.
How many possible combinations in 8 character password? I need to calculate the possible combinations for an 8 character password. The password must contain at least one character from each of the following classes: lower case letters, upper case letters, digits, punctuation and special characters. Assume I have 95 ascii characters in total. lower case letters = $26$ upper case letters = $26$ digits = $10$ punctuation & special characters = $33$ The general formula for the possible passwords that I can form from these 95 characters is: $95^8$. But, accurately, I feel the above formula is incorrect. Please correct me. The password policy requires at least one character from each of the listed classes. Therefore, the possible password combinations = $(26)\cdot(26)\cdot(10)\cdot(33)\cdot(95)\cdot(95)\cdot(95)\cdot(95)$ Which calculation is correct? EDIT: Please note that I mean an 8 character password, exactly 8. Also, there is no order specified (i.e. it could start with a small letter, symbol, etc.). But it should contain at least one character from each of the specified character sets (upper case, lower case, symbol, number, etc.).
Start with all $8$-character strings: $95^8$ Then remove all passwords with no lowercase ($69^8$), all passwords with no uppercase ($69^8$), all passwords with no digit ($85^8$) and all passwords with no special character ($62^8$). But then you removed some passwords twice. You must add back all passwords with: no lowercase AND no uppercase: $43^8$ no lowercase AND no digit: $59^8$ no lowercase AND no special: $36^8$ no uppercase AND no digit: $59^8$ no uppercase AND no special: $36^8$ no digit AND no special: $52^8$ But then you added back a few passwords too many times. For instance, an all-digit password was remove three times in the first step, then put back three times in the second step, so it must be removed again: only lowercase: $26^8$ only uppercase: $26^8$ only digits: $10^8$ only special: $33^8$ Grand total: $95^8 - 69^8 - 69^8 - 85^8 - 62^8 + 43^8 + 59^8 + 36^8 + 59^8 + 36^8 + 52^8 - 26^8 - 26^8 - 10^8 - 33^8 = 3025989069143040 \approx 3.026\times10^{15}$
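Evaluating this inclusion-exclusion sum directly (an editorial addition) to confirm the arithmetic:

# inclusion-exclusion total from the answer above, evaluated directly
total = (95**8
         - 69**8 - 69**8 - 85**8 - 62**8
         + 43**8 + 59**8 + 36**8 + 59**8 + 36**8 + 52**8
         - 26**8 - 26**8 - 10**8 - 33**8)
print(total)  # 3025989069143040, matching the grand total above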
There's a simple flaw with the original equation. It was stated that you have: 26 lowercase letters (a-z), 26 uppercase letters (A-Z), 10 digits (0-9), 33 punctuation and special characters. How many total choices can each character within the password use? ADD the above numbers to answer that: $26+26+10+33 = 95$. How many characters is the password in question? I believe we identified 8 in this scenario. How many combinations for this password are there? $$\text{(Possible choices)}^\text{(How many characters long)}=\text{(How many combinations)}$$ Or, per this example, $95^8=6634204312890625$. Based on 8 characters of anything you can type, the answer is as simple as above. As stated in a more convoluted, albeit more descriptively accurate, way: the number changes based on password requirements. From a hacker/pentester perspective, entropy is stronger than mental complexity. If people use all lowercase because rules don't force them to use something else, their password is weaker for being all lowercase, because I can probe the password based on just lowercase ($26^8$). Wow, no rule against using humanly recognized words? (That was a common rule in the early-to-mid 2000s.) The password rules themselves actually make a weaker password than the mathematically possible $95^8$. Password entropy makes this even more fun, as we can demonstrate that the entropy of the password A#1WepOjII95&^2! is actually weaker than the password OMGmathMakesMyHeadWant2EXPLODE. If you're looking at this from a security standpoint, use a long run-on phrase for a more challenging time being cracked. Using rainbow tables, it's now possible to crack a 64-character password within 4 minutes on a single computer. No, your $120 Atom laptop isn't likely to meet that kind of hacking efficiency; it simply means you don't need a cluster of computers anymore.
$\int_0^{\pi}{x \over{a^2\cos^2 x+b^2\sin^2 x}}dx$ How to solve $$\int_0^{\pi}{x \over{a^2\cos^2 x+b^2\sin^2 x}}dx$$ The answer is ${\pi}^2\over{2ab}$ but I can't prove it.
$$\text{As }\int_a^bf(x)dx=\int_a^bf(a+b-x)dx$$ $$I=\int_0^\pi{x \over{a^2\cos^2 x+b^2\sin^2 x}}dx=\int_0^\pi\frac{\pi-x}{a^2\cos^2 x+b^2\sin^2 x}dx$$ as $\displaystyle\sin(\pi-x)=\sin x,\cos(\pi-x)=-\cos x$ $$I+I=\pi \int_0^\pi\frac1{a^2\cos^2 x+b^2\sin^2 x}dx$$ $$=\frac\pi{b^2} \int_0^\pi\frac{\sec^2x}{\left(\frac ab\right)^2+\tan^2 x}dx$$ Put $\tan x=u$
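A numerical sanity check of the final value (an editorial addition, assuming SciPy is available): following the symmetry trick above, the whole integral should come out to $\pi^2/(2ab)$.

import math
from scipy.integrate import quad

def check(a, b):
    integrand = lambda x: x / (a**2 * math.cos(x)**2 + b**2 * math.sin(x)**2)
    val, _ = quad(integrand, 0, math.pi)
    return val, math.pi**2 / (2 * a * b)

print(check(2.0, 3.0))  # the two entries agree to quad's default tolerance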
Set $a/b=\tan (\alpha)$ and the integrand simplifies to the product of $x$ and $\sec (x-\alpha)$, which is readily integrable.
The autocovariance function of ARMA(1,1) So I am reading Brockwell and Davis's introduction to time series analysis, page 89, where the ACVF of an $ARMA(1,1)$ is derived, given by: $X_t - \phi X_{t-1}=Z_t+\theta Z_{t-1}$ where ${Z_t}$ is $WN(0,\sigma^2)$ and $\mid \phi \mid < 1$. What is first stated is that, by the causality assumption, the autocovariance at lag $h$ is: $\gamma(h)=\sigma^2\sum_{j=0}^\infty\psi_j \psi_{j+\mid h \mid}$ So at lag $h = 0$ this becomes: $\gamma (0) = \sigma^2 \sum_{j=0}^\infty \psi_j^2$ How can it then be shown that $\sigma^2 \sum_{j=0}^\infty \psi_j^2 = \sigma^2 \Big[ 1 + \frac{(\theta+\phi)^2}{1-\phi^2} \Big]$? And in the same way for $\gamma(1) = \sigma^2 \Big[ \theta + \phi + \frac{(\theta+\phi)^2\phi}{1-\phi^2} \Big]$? I know that there is a definition of the function $\psi (z) = \sum_{j=0}^\infty\psi_j z^j = \frac{\theta(z)}{\phi(z)}$, $\mid z \mid\leq 1$. How can this be applied here?
Even though this question is old, here is the answer. The autocovariance function for a causal time series is: $\gamma(h) = \sigma^2 \sum_{j=0}^{\infty}\psi_j \psi_{j+|h|}$. The MA($\infty$) representation of $X_t$ is: $X_t = Z_t + \sum_{j=1}^{\infty} (\phi+\theta)\phi^{j-1} Z_{t-j}$, where $\psi_0 = 1$ and $\psi_j = (\phi+\theta)\phi^{j-1}$ for $j \ge 1$. Note that I will write the white noise $Z_{t-j}$ and not directly $\sigma^2$, to hopefully generate a better understanding of what is going on. Case $h = 0$: $\gamma(0) = E(X_t X_t) = \sigma^2 \sum_{j=0}^{\infty}\psi_j^2$. $$E(X_t X_t) = E\Big(\big( Z_t + (\phi+\theta)\sum_{j=1}^{\infty} \phi^{j-1} Z_{t-j}\big)^2\Big) = E(Z_t^2) + E\Big( \big((\phi+\theta)\sum_{j=1}^{\infty} \phi^{j-1} Z_{t-j}\big)^2 \Big) + 2(\phi+\theta) \sum_{j=1}^{\infty} \phi^{j-1}E( Z_{t-j}Z_t).$$ Since $Z_t$ is white noise, $E( Z_{t-j}Z_t) = 0$ for $j\ge1$, hence: $$E(X_t X_t) = E(Z_t^2) + (\phi+\theta)^2 \sum_{j=1}^{\infty} \phi^{2j-2} E(Z_{t-j}^2) = \sigma^2 \Big( 1 + (\phi+\theta)^2 \sum_{j=1}^{\infty} \phi^{2j-2}\Big) = \sigma^2 \Big( 1 + \frac{(\phi+\theta)^2}{1-\phi^2}\Big),$$ where $\sum_{j=1}^{\infty} \phi^{2j-2} = \sum_{j=0}^{\infty} (\phi^{j})^2$ is a geometric series converging to $\frac{1}{1-\phi^2}$. Case $h = 1$: $\gamma(1) = E(X_t X_{t-1}) = \sigma^2 \sum_{j=0}^{\infty}\psi_j \psi_{j+1}$. $$E(X_t X_{t-1}) = E\Big( \big(Z_t + (\phi+\theta)Z_{t-1} + (\phi+\theta)\phi\sum_{j=2}^{\infty} \phi^{j-2} Z_{t-j}\big) \cdot \big( Z_{t-1} + (\phi+\theta)\sum_{j=2}^{\infty} \phi^{j-2} Z_{t-j}\big) \Big) = (\phi+\theta)E(Z_{t-1}^2) + (\phi+\theta)^2\phi \sum_{j=2}^{\infty} \phi^{2j-4} E(Z_{t-j}^2) = \sigma^2\Big(\phi+\theta + \frac{(\phi+\theta)^2\phi}{1-\phi^2} \Big).$$ Case $h = 2$: $\gamma(2) = E(X_t X_{t-2}) = \sigma^2 \sum_{j=0}^{\infty}\psi_j \psi_{j+2}$. $$E(X_t X_{t-2}) = E\Big( \big(Z_t + (\phi+\theta)Z_{t-1} + (\phi+\theta)\phi Z_{t-2} + (\phi+\theta)\phi^2\sum_{j=3}^{\infty} \phi^{j-3} Z_{t-j}\big) \cdot \big( Z_{t-2} + (\phi+\theta)\sum_{j=3}^{\infty} \phi^{j-3} Z_{t-j}\big) \Big) = (\phi+\theta)\phi\, E(Z_{t-2}^2) + (\phi+\theta)^2\phi^2\sum_{j=3}^{\infty} \phi^{2j-6} E(Z_{t-j}^2) = \sigma^2\Big( (\phi+\theta)\phi + \frac{(\phi+\theta)^2\phi^2}{1-\phi^2}\Big).$$ We see the pattern and conclude that: $\gamma(h) = \sigma^2 \big( 1 + \frac{(\phi+\theta)^2}{1-\phi^2}\big)$ if $h=0$; $\gamma(h) = \sigma^2\big( (\phi+\theta)\phi^{h-1} + \frac{(\phi+\theta)^2\phi^h}{1-\phi^2}\big)$ if $h>0$.
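A quick simulation cross-check of these closed forms (an editorial addition; the parameter values below are arbitrary illustrative choices):

import numpy as np

rng = np.random.default_rng(0)
phi, theta, sigma = 0.5, 0.3, 1.0
n = 200_000

# simulate X_t = phi X_{t-1} + Z_t + theta Z_{t-1}
z = rng.normal(0.0, sigma, n)
x = np.empty(n)
x[0] = z[0]  # initialization transient is negligible at this length
for t in range(1, n):
    x[t] = phi * x[t - 1] + z[t] + theta * z[t - 1]

def gamma_hat(h):
    # empirical autocovariance at lag h (the process has mean zero)
    return float(np.mean(x[h:] * x[:n - h]))

g0 = sigma**2 * (1 + (phi + theta)**2 / (1 - phi**2))
g1 = sigma**2 * (phi + theta + (phi + theta)**2 * phi / (1 - phi**2))
print(gamma_hat(0), g0)  # empirical vs closed form, lag 0
print(gamma_hat(1), g1)  # empirical vs closed form, lag 1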
To be honest, you can get that variance just from the way they defined the model at the start. If you rearrange to make $X_t$ the subject, and then use the fact that the process has zero mean, you can get to the result in a few lines, using the assumption that future errors do not depend on past outcomes. I'll walk through it in more detail later, as I'm on my phone now.
Coherent configurations (example and explanation) A coherent configuration is a pair $(X, \mathcal S)$ consisting of a finite set $X$ of size $v$ and a set $\mathcal S$ of binary relations on $X$ such that: • $\mathcal S$ is a partition of $X \times X$; • the diagonal relation $\Delta_X$ is the union of some relations in $\mathcal S$; • for each $R \in \mathcal S$ it holds that $R^{T} \in \mathcal S$; • there exist integers $p^{R}_{ST}$ such that $|\{z \in X \mid (x, z) \in S \text{ and } (z, y) \in T \}| = p^{R}_{ST}$ whenever $(x, y) \in R$, for all $R, S, T \in \mathcal S$. I am new to this subject, so it is hard for me to understand. I request an explanation of the above four axioms using an example. I have tried some notes, but most of them do not include any example.
The canonical examples arise from a group $G$ of permutations acting on $X$. Define two ordered pairs $(a,x)$ and $(b,y)$ to be related if there is an element of $G$ that maps $(a,x)$ to $(b,y)$, i.e., some element $g$ in $G$ maps $a$ to $b$ and $x$ to $y$. The set of ordered pairs related to a given pair is an orbit of $G$ acting on $X\times X$, and these orbits form a partition of $X\times X$. Thus the first axiom holds. For the second axiom, note that all pairs in the orbit of a pair $(a,a)$ have the form $(b,b)$, and so the diagonal of $X\times X$ has a partition as required. The orbit of $(a,b)$ is the "transpose" of the orbit of $(b,a)$. Finally, if $x$ is $S$-related to $a$ and $T$-related to $b$ and $g\in G$, then $x^g$ is also $S$-related to $a^g$ and $T$-related to $b^g$. So, in what I hope is an obvious notation, $$ p_{S,T}(a,b) = p_{S,T}(a^g,b^g)$$ for all $g$ in $G$. From this it follows that the intersection numbers are well-defined.
We also know that if each relation $R$ is represented by a matrix $A$ whose rows and columns are indexed by the elements of $X$, with $(x,y)$ entry $1$ if $(x,y)$ belongs to $R$ and $0$ otherwise, then the first three conditions become: 1. The sum of the matrices is the all-$1$ matrix. 2. There is a subset of the matrices which sums to the identity matrix. 3. The set of matrices is closed under transposition. But what will the fourth condition become? Please explain in detail.
Intuition behind the derivative of dirac delta function Let me first begin with what I mean by the intuition behind "$\delta'(x)$". For example, the smooth approximations of the delta function look like the following: (Left: the smooth approximation of $\delta(x)$; Right: the smooth approximation of $\delta'(x)$) And by using my intuition I can understand why $$ \int_{-\infty}^{\infty}f( \bar{x} )\delta(x-\bar{x}) \mathrm{d}\bar{x}=f(x) $$ because I can say that the delta function fires whenever $x=\bar{x}$ and picks up the value of $f(x)$ at that point, and when I integrate over all values of $x$, I get my function $f(x)$ back. In other words it is like building the function $f(x)$ from thin sticks which have the same height as the value of the function. (Although I know that this explanation is nowhere near mathematical, it helps me and others to understand -whatever that means- the concept easier.) When I learned about the derivative of the delta function and its following property I was utterly shocked: $$ \int_{-\infty}^{\infty}f(\bar{x})\delta'(x-\bar{x}) \mathrm{d}\bar{x}=f'(x) $$ Because no matter how long I think about the subject, I am unable to build a correct intuition about this distribution. My question is this: Can you explain to me intuitively why the derivative of the delta function gives rise to a derivative? PS: I know why this is true mathematically (integrating by parts and so on).
Suppose the spikes in the smooth approximation to $\delta'(x)$ are located at $x=-h$ and $x=h$. When $\bar{x} \approx x+h$, the smooth approximation to $\delta'(x-\bar{x})$ will be large and positive, so the integral will roughly pick up "something large" times $f(x+h)$. Similary, for $\bar{x} \approx x-h$, the integral will pick up the same large factor times $f(x-h)$, but with the opposite sign. So if that large factor turns out to be of the magnitude $\frac{1}{2h}$, the integral will be roughly $$ \frac{f(x+h)-f(x-h)}{2h} = \frac{\bigl(f(x) + h \, f'(x) + O(h^2)\bigr) - \bigl(f(x) - h \, f'(x) + O(h^2)\bigr)}{2h} , $$ which tends to $f'(x)$ as $h \to 0$.
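The same picture can be checked numerically (an editorial addition, not part of the original answer): replace $\delta'$ by the derivative of a Gaussian bump of width $\varepsilon$ and watch the integral approach $f'(x)$ as $\varepsilon \to 0$.

import numpy as np

def delta_prime(u, eps):
    # derivative of the Gaussian approximation exp(-u^2/(2 eps^2))/(eps sqrt(2 pi))
    return -u / (eps**3 * np.sqrt(2 * np.pi)) * np.exp(-u**2 / (2 * eps**2))

x = 0.5
xbar = np.linspace(-10.0, 10.0, 400001)
dx = xbar[1] - xbar[0]
f = np.sin(xbar)

for eps in (0.1, 0.05, 0.01):
    val = np.sum(f * delta_prime(x - xbar, eps)) * dx  # Riemann sum of the integral
    print(eps, val, np.cos(x))  # converges to f'(0.5) = cos(0.5) ~ 0.87758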
So you might have heard about the discrete difference. It's basically the discrete counterpart of the derivative. Its definition is $DF[x] = F[x]-F[x-1]$. Now imagine a continuous function $f$. If you look closely, what the integral of $f$ against the delta function's derivative is doing is taking something very similar to a discrete difference of $f$, discretized at a really, really small step size.
Counterexample for Dirichlet product of two completely multiplicative functions. The below text is the proof of why the Dirichlet product of two multiplicative functions is multiplicative: It's obvious how the assumption of $(m,n)=1$ allowed the proof to proceed. However, I am trying to find a counterexample to show that the Dirichlet product of two completely multiplicative functions is not always completely multiplicative, but I couldn't succeed. Considering $f$ and $g$ to be power functions, $n^a$ and $n^b$ respectively, doesn't help, since I can't find a way to decompose the double sum, but that doesn't mean that $h(mn) \ne h(n)h(m)$. Is there any elementary counterexample?
If $p$ is a prime and $h = f * g$, $h(p) = f(1) g(p) + f(p) g(1) = g(p) + f(p)$ while $h(p^2) = f(1) g(p^2) + f(p) g(p) + f(p^2) g(1) = g(p)^2 + f(p) g(p) + f(p)^2$, so for $f * g$ to be completely multiplicative we'd need $f(p) g(p) = 2 f(p) g(p)$, and thus $f(p) = 0$ or $g(p) = 0$.
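A two-line numerical illustration of this criterion (an editorial addition): take $f = g = \mathrm{id}$, both completely multiplicative with $f(p)g(p) \ne 0$, and watch complete multiplicativity fail already at $n = 4$.

def dirichlet(f, g, n):
    # Dirichlet convolution (f * g)(n) = sum over divisors d of f(d) g(n/d)
    return sum(f(d) * g(n // d) for d in range(1, n + 1) if n % d == 0)

f = g = lambda n: n            # completely multiplicative
h = lambda n: dirichlet(f, g, n)

print(h(2) ** 2, h(4))         # 16 vs 12, so h(2)^2 != h(2*2)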
A counterexample is the identity map $I: \mathbb N \to \mathbb N$, $I(n)=n$. You have: $$\forall n\in \mathbb N \quad \quad (I*I)(n)=nd(n) $$ and we know that $n\mapsto nd(n)$ is not completely multiplicative, because $d$, the divisor-counting function, is not completely multiplicative.
How could we show that the set is equal to the empty set? I want to show that the intersection of any inductive set is empty since every inductive set contains the empty set. I thought that we could do it like that: We know that $B$ is an inductive set. So: $$\varnothing \in B \wedge \forall x(x \in B \rightarrow x' \in B)$$ $$y \in \bigcap B \leftrightarrow \forall b \in B: y \in b$$ Since $\varnothing \in B$ we get that $y \in \bigcap B \leftrightarrow y \in \varnothing$. But since there is no $y$ such that $y \in \varnothing$ we conclude that we cannot find a $y$ such that $y \in \bigcap B$. But it isn't right, since we cannot just take one set to get the equivalence, right? How else could we do this?
This looks right, so let me reorder your argument: we know $\emptyset \in B$. So for an arbitrary $y$ we have two directions to prove: if $y\in \bigcap B$, it follows that $y \in \emptyset$ (by your property for $\bigcap$). If we have $y\in \emptyset$, then we have a contradictory assumption (by the definition of the empty set), and so we have $y \in \bigcap B$ in particular. So in total: $\forall y (y \in \bigcap B \leftrightarrow y \in \emptyset)$. And by the axiom of extensionality, we have $\bigcap B = \emptyset$. Alternatively you can prove it as follows: We know that the empty set $\emptyset$ is the unique set with the property $\forall y(y\not\in \emptyset)$. Assume there is an element $y \in \bigcap B$; then we have $y \in \emptyset \in B$ by the property of $\bigcap B$, a contradiction. So $\forall y(y\not\in \bigcap B)$, and $\bigcap B = \emptyset$ by uniqueness. EDIT: And to answer the title question "How could we show that the set is equal to the empty set?": You show that a set $x$ is equal to the empty set by proving that $\forall y. (y\not\in x)$.
Counterexample: Consider $N = 29$ and $M = 20$ with $N \cap M = 20$
differentiation of a matrix X with respect to vec(X) Assume $X$ is a $n\times n$ matrix, I am looking for the solution of $\frac{\partial X}{\partial vec(X)}$ Anyone that can shed light on this?
$\def\bb{\mathbb}\def\v{{\rm vec}}\def\M{{\rm Mat}}\def\d{{\rm diag}}\def\D{{\rm Diag}}\def\e{\varepsilon}\def\o{{\tt1}}\def\vcal#1{\vec{\cal #1}}\def\m#1{\left[\begin{array}{r}#1\end{array}\right]}\def\p#1#2{\frac{\partial #1}{\partial #2}}$The transformation between a matrix and its vectorized form can be indicated by functions $$\eqalign{ x &amp;= \v(X) \quad&amp;\iff\quad &amp;X=\M(x) \\ }$$ or by dot products with special third-order tensors $$\eqalign{ x &amp;= \vec\nu:X \quad&amp;\iff\quad X&amp;=\vec\mu\cdot x \\ x &amp;= X:\vec\mu \quad&amp;\iff\quad X&amp;=x\cdot\vec\nu \\ }$$ The gradient of a matrix with respect to itself yields the fourth-order identity tensor $$\vcal E = \frac{\partial X}{\partial X} \quad\implies\quad \vcal E:dX\;=\;dX\;=\;dX:\vcal E$$ just as the gradient of a vector wrt itself yields the second-order identity matrix $$I = \frac{\partial x}{\partial x} \quad\implies\quad I\cdot dx\;=\;dx\;=\;dx\cdot I\quad$$ Combining the above ideas yields $$\eqalign{ \frac{\partial X}{\partial x} &amp;= \left(\frac{\partial X}{\partial X}\right):\vec\mu &amp;= (\vcal E):\vec\mu = \vec\mu \\ \frac{\partial x}{\partial X} &amp;= \vec\nu:\left(\frac{\partial X}{\partial X}\right) &amp;= \vec\nu:(\vcal E) = \vec\nu \\ }$$ This can also be derived using the identity matrix $$\eqalign{ \frac{\partial X}{\partial x} &amp;= \vec\mu\cdot\left(\frac{\partial x}{\partial x}\right) &amp;= \vec\mu\cdot(I) = \vec\mu \\ \frac{\partial x}{\partial X} &amp;= \left(\frac{\partial x}{\partial x}\right)\cdot\vec\nu &amp;= (I)\cdot\vec\nu = \vec\nu \\ }$$ For $(m\times n)$ matrices the components of these zero-one tensors are $$\eqalign{ {\vec\mu}_{jk\ell} &amp;= {\vec\nu}_{\ell jk} \\ {\vec\nu}_{\ell jk} &amp;= \begin{cases} 1\quad{\rm if}\;\;\ell=j+mk-m \\ 0\quad{\rm otherwise} \\ \end{cases} \\ {\vcal E}_{jkpq} &amp;= \begin{cases} 1\quad{\rm if}\;\;j=p\;\;\&amp;\;\;k=q \\ 0\quad{\rm otherwise} \\ \end{cases} \\ I_{\ell r} &amp;= \begin{cases} 1\quad{\rm if}\;\;\ell=r \\ 0\quad{\rm otherwise} \\ \end{cases} \\ }$$ and the index ranges are $$\eqalign{ 1&amp;\le\; j,p \;&amp;\le m &amp;\qquad\big({\rm row\,index}\big) \\ 1&amp;\le\; k,q \;&amp;\le n &amp;\qquad\big({\rm column\,index}\big) \\ 1&amp;\le\; \ell,r \;&amp;\le mn &amp;\qquad\big({\rm vectorized\,index}\big) \\ }$$ In some sense the third-order tensors are the more fundamental quantities, since given $\big(\vec\mu,\vec\nu\big)$ the identity tensors can be calculated as $$\eqalign{ \vcal E &amp;= \vec\mu\cdot\vec\nu \qquad&amp;\big({\rm contract\,over\,vectorized\,index}\big) \\ I &amp;= \vec\nu:\vec\mu \qquad&amp;\big({\rm contract\,over\,row/col\,indexes}\big) \\ }$$
Convergence in $L^1$ problem. Problem: Let $f \in L^1(\mathbb{R},~\mu)$, where $\mu$ is the Lebesgue measure. For any $h \in \mathbb{R}$, define $f_h : \mathbb{R} \rightarrow \mathbb{R}$ by $f_h(x) = f(x - h)$. Prove that: $$\lim_{h \rightarrow 0} \|f - f_h\|_{L^1} = 0.$$ My attempt: So, I know that given $\epsilon &gt; 0$, we can find a continuous function $g : \mathbb{R} \rightarrow \mathbb{R}$ with compact support such that $$\int_{\mathbb{R}} |f - g|d\mu &lt; \epsilon.$$ We can then use the inequality $|f - f_h| \leq |f - g| + |g - g_h| + |g_h - f_h|$ to reduce the problem to the continuous case, so to speak, since the integral of the first and last terms will be $&lt; \epsilon$. But now I'm stuck trying to show that $$\lim_{h \rightarrow 0} \|g - g_h\|_{L^1} = 0.$$ I tried taking a sequence $(h_n)_{n \in \mathbb{N}}$ converging to $0$ and considering $g_n := g_{h_n}$, but I don't have monotonicity and the convergence doesn't seem to be dominated either, so I don't know what to do. Any help appreciated. Thanks.
By your construction, $g$ is continuous and compactly supported. Let $K$ be the support of $g$, and let $K_h=K\cup (h+K)$. Then we have $$ \int|g(x)-g(x-h)|\,\mathrm{d}x\leq |K_h|\|g-g_h\|_{L^\infty(K_h)}. $$ For all $h>0$ sufficiently small we have $K_h\subset K_1$, and you can invoke uniform continuity of $g$.
I think you can use the fact that $f_h$ is in $L^1$ by translation invariance and the fact that $|f-f_h|\leq|f|+|f_h|$. So now you have a function $g_n=|f-f_h|$ which converges to 0 and is bounded by an integrable function.
How can it be proved that a continuous function is bounded? Say $f:[a,b]\to \mathbb{R}$ is continuous on $[a,b]$, what's the most concise way you know of to show that it's bounded? I was thinking let $A=\{u : f(x) \text{ is bounded on }x&lt;u\}$ Is there a way to show that $\sup(A)=b$?
Here is a "from scratch" attempt. Suppose $f$ were to be unbounded on $[a,b]$ yet continuous there. Then $f$ is unbounded on one of $[a,(a + b)/2]$ or $[(a + b)/2, b]$. Denote this interval by $I_1$. Keep subdividing in this fashion to obtain a sequence of intervals $I_n$ so that $I_{n+1}\subseteq I_n$ for all $n$ and so that the length of $I_n$ is $$(b-a)/2^n.$$ Now write $I_n = [a_n, b_n]$ for each $n$. The sequence $a_n$ is increasing and bounded by $b_1$ so it converges to a limit $l$. Since the lengths of the $I_n$ converge to zero, we have $b_n\rightarrow l$. By continuity, $f(l) = \lim f(a_n) = \lim f(b_n).$ By continuity, we can choose $\delta &gt; 0$ so that $f(x) &lt; f(l) + 1$ for $l - \delta &lt; x &lt; l + \delta$. Pick $n$ so that $I_n \subseteq (l - \delta, l + \delta)$. The function $f$ must be bounded on $I_n$, a contradiction of our construction. The arabesque executed here has a feel very similar to that of the proof of Heine-Borel theorem.
The distance function of points from the set $\{0\}$ is uniformly continuous, so it is bounded on any compact set, so the compact set can be enclosed in a ball with radius that bound. Consider the compact set $f([a,b])$.
Can a regular grammar be ambiguous? An ambiguous grammar is a context-free grammar for which there exists a string that has more than one leftmost derivation, while an unambiguous grammar is a context-free grammar for which every valid string has a unique leftmost derivation. A regular grammar is a mathematical object, $G$, with four components, $G = (N, \Sigma, P, S)$, where $N$ is a nonempty, finite set of nonterminal symbols, $\Sigma$ is a finite set of terminal (alphabet) symbols, $P$ is a set of grammar rules, each one having one of the forms: $A \rightarrow aB$ $A \rightarrow a$ $A \rightarrow \varepsilon$ for $A, B \in N$, $a \in \Sigma$, and $\varepsilon$ the empty string, and $S \in N$ is the start symbol. Now the question is: Can a regular grammar also be ambiguous?
There do indeed exist ambiguous regular grammars. Take for example $S\rightarrow A~|~B$ $A\rightarrow a$ $B\rightarrow a$ Here the string $a$ has two distinct leftmost derivations, $S\Rightarrow A\Rightarrow a$ and $S\Rightarrow B\Rightarrow a$, so the grammar is ambiguous.
We know from Chomsky's hierarchy of languages that every regular language is also a context-free language. We also know every regular language is generated by a regular grammar. Therefore, every regular grammar is also a context-free grammar. Since CFGs can be ambiguous, by this logic some regular grammars can be ambiguous (not all, but there exist some, as in this case).
Why is identity element required for groups? I would like to know the necessity for having an identity element for every group. I know the meaning of an identity element.
Most definitions of general objects in mathematics are a consequence of abstracting the properties of certain key examples that the object was originally developed to study, so it could be extended for use in other fields. For example, the theory of measure and integration developed from precursor theories that were generalizations of ideas that ultimately had their roots in area and volume formulas arising in construction problems in Ancient Greece. Groups originally arose in the study of the symmetry properties of geometric objects under the classical isometry and similarity transformations of Euclidean geometry. It was Felix Klein who recognized that the set of isometries in Euclidean space (rotation, reflection and identity) forms a group under composition of functions, since the operation is associative, yields a unique identity function, and provides an inverse for each mapping which yields the identity map when composed with that mapping. For example, a reflection of an object through a mirror plane is its own inverse, since applying the map twice in succession leaves the object unchanged and is therefore the identity mapping. The generalization of the group of transformations of necessity gave an identity element in the definition of group when Leopold Kronecker proposed the abstract definition at the end of the 19th century.
By requiring a unit $e\in G$ you achieve two things: avoiding dealing with the concept of the empty set as the base of the group, and avoiding dealing with empty operations (as an operation is a subset of $(G\times G)\times G$). Since the empty set and the empty operation satisfy all the other axioms of groups, there could otherwise be an empty group, and the empty group would contradict almost every theorem. Also, requiring a unit element gives a meaning to having an inverse element and to solving equations. From the comments below, I have missed one main thing: a group is a structure that occurs in nature a lot.
Why is a linear transformation of a cauchy sequence in a normed space also cauchy? Suppose we have a cauchy sequence $\{a_n\}$ in a normed vector space $V$. Given a linear transformation $T:V \rightarrow V$, is the sequence $\{T(a_n)\}$ also cauchy? Or is it true only for finite dimensional normed spaces? I'd be much obliged if someone could give a proof for this, preferably an elementary one. Thanks!
Given $\epsilon>0$, there exists $p\in \mathbb N$ such that $\|a_n-a_m\|<\epsilon$ for all $m,n\geq p$. Now, provided $T$ is bounded (which is automatic when $V$ is finite-dimensional), $\|T(a_n)-T(a_m)\|=\|T(a_n-a_m)\|\leq \|T\|\,\|a_n-a_m\|<\|T\|\,\epsilon$, so $\{T(a_n)\}$ is Cauchy. For an unbounded linear $T$ on an infinite-dimensional space this argument (and the conclusion) can fail.
How adding a joker makes events dependent I have a question about probability. Imagine we have a deck of 40 cards with 4 different suits of ten cards. Define two events: $A \equiv $ We take a card and it's an ace $\Rightarrow P(A)=\frac{1}{10}$ $B \equiv $ We take a card and it's a spade $\Rightarrow P(B)=\frac{1}{4}$ $P(A\cap B) = \frac{1}{40}$ We can see that, since we return the card each time, these events are independent: $P(A\cap B) = P(A) \cdot P(B)$ If we now add a joker to the deck, which can take any card value, then: $P(A) = \frac{5}{41}$ $P(B) = \frac{11}{41}$ $P(A\cap B) = \frac{2}{41}$ And we can see that now the events aren't independent! $P(A\cap B) \ne P(A)\cdot P(B)$ My question is, why does adding the joker make the events dependent? How is it different from just having one more ace of spades? I mean, before adding the joker we already had a card that belonged to both events (the ace of spades)... Thanks!
Note: the rules aren't entirely clear. Is the Joker always interpreted as $A\spadesuit$? If not, what rules determine how it is interpreted? To be clear: If you added a second $A\spadesuit$ then the events aren't independent either. You'd get $$P(A)=\frac 5{41}\quad P(B)=\frac {11}{41}\quad P(A\cap B)=\frac 2{41}$$ just as before. Indeed, having doubled the $A\spadesuit$, either directly or by the Joker, you make it so that drawing an Ace is evidence that the card is a spade, and drawing a spade is evidence that it is an Ace. Specifically, before you draw anything, the probability that a random draw will be a spade is $\frac {11}{41}=0.268$. If I tell you that you have drawn an ace, however, the probability that it is a spade is now $\frac 25=.4$ so drawing an ace is strong evidence that you have drawn a spade.
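A quick Monte Carlo confirmation of these numbers (an editorial addition; the card encoding below is an arbitrary illustrative choice, with the joker counting as every rank and suit):

import random

# cards 0..39: rank = c % 10 (0 = ace), suit = c // 10 (0 = spades); 40 = joker
def is_ace(c):   return c == 40 or c % 10 == 0
def is_spade(c): return c == 40 or c // 10 == 0

random.seed(1)
N = 1_000_000
a = b = ab = 0
for _ in range(N):
    c = random.randrange(41)
    a  += is_ace(c)
    b  += is_spade(c)
    ab += is_ace(c) and is_spade(c)

print(a / N, 5 / 41)              # P(A)
print(b / N, 11 / 41)             # P(B)
print(ab / N, (a / N) * (b / N))  # P(A and B) clearly differs from P(A)P(B)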
"We take a card and it's an ace." This is not a description of an event to which you can assign probability. What is the probability I will draw a card today and it will be an ace? Maybe I will not draw any cards today. How do you compute the probability of that? Now introduce a second description, "We take a card and it's a spade," and the situation is even more murky. Did we put the first card back and then draw another? Or are the two sentences talking about things that would occur simultaneously, that is, we draw just one card from the deck one time, and look at that single card, which might be an ace and might be a spade? A better way to describe probabilities is to set up exactly the experiment/trial (or experiments/trials) that we will perform, and then specify the results as events. So either there is a single trial ("shuffle these $40$ cards, draw one, and look at it") or two trials ("shuffle these $40$ cards, draw one, look at it, then shuffle the cards again, draw a card, and look at it"). For a double trial, with one joker in the deck, there are not just $41$ possible equally-likely, distinguishable outcomes; there are $41\times41$ of those outcomes. For each card we could draw the first time there are $41$ possible cards we could draw the second time. The number of ways to draw an ace the first time and a spade the second time is $5\times 11$. So if we actually make two draws then the events are independent.
Area of intersection of a circle with a rectangle I want to find the area of a given circle that falls within the region of a given rectangle. I searched many posts on stackoverflow but they are not satisfying. I followed this post http://www.eex-dev.net/index.php?id=100 but it doesn't seem accurate. I'd appreciate it if someone could provide a good, accurate solution, at least up to $10^{-6}$ precision. Thanks.
Decompose your intersection area into polygons which are completely inside the circle, and circular segments formed by a chord and a part of the arc. Use e.g. the shoelace formula to compute the area of the polygoms, and the segment area formula for the segments. Compute these to whatever accuracy you want. The decomposition will need to make many case distinctions. But choosing integration bounds and integration formulas isn't really any easier, in my opinion. There just is no simple way around all those case distinctions.
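For readers who just need numbers to $10^{-6}$, here is a simpler alternative in Python (an editorial addition, not the decomposition described above): integrate the clipped vertical chord length of the disk over the rectangle's $x$-range with adaptive quadrature. This sidesteps most of the case distinctions at the cost of relying on SciPy.

import math
from scipy.integrate import quad

def circle_rect_area(cx, cy, r, x1, y1, x2, y2):
    # area of the disk (x-cx)^2 + (y-cy)^2 <= r^2 inside [x1,x2] x [y1,y2]
    def chord(x):
        h2 = r * r - (x - cx) ** 2
        if h2 <= 0.0:
            return 0.0
        h = math.sqrt(h2)
        return max(0.0, min(y2, cy + h) - max(y1, cy - h))
    lo, hi = max(x1, cx - r), min(x2, cx + r)
    if lo >= hi:
        return 0.0
    val, err = quad(chord, lo, hi, limit=200)
    return val

# sanity checks: full disk and a half-plane cut through the center
print(circle_rect_area(0, 0, 1, -2, -2, 2, 2), math.pi)      # pi
print(circle_rect_area(0, 0, 1,  0, -2, 2, 2), math.pi / 2)  # pi/2

If the kinks where the clipping switches regimes hurt accuracy, quad's points argument can be given those abscissas to tighten the tolerance further.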
You put the circle at the origin. The idea is to divide the rectangle into four rectangles, and then replace every rectangle that does not fall in the first quadrant by an equivalent rectangle located in the first quadrant. Then you deal with each case according to the number of vertices inside the circle. At the end all areas are given in terms of a single function F(U,V) that arises from the evaluation of the intersection area in the second case, which is the case when only one vertex is inside the ellipse. Then you sum the four areas. The matlab code is:

A=2;      % Circle of radius 2 (or an ellipse of semi-axes A,B)
B=2;
L_x=0.5;  % The width
L_y=2.75; % The height
x_1=0;    % bottom left corner (x_1,y_1)
y_1=-1;
% This call makes the calculation
int_area(x_1,y_1,L_x,L_y,A,B)

% The function definition
function [suma]=int_area(x_1,y_1,L_x,L_y,A,B)
x(1)=x_1; y(1)=y_1;
% each one of the remaining vertices
x(2)=x(1);     y(2)=y(1)+L_y;
x(3)=x(1)+L_x; y(3)=y(1)+L_y;
x(4)=x(1)+L_x; y(4)=y(1);
% the center of the rectangle
x_m=x(1)+L_x/2;
y_m=y(1)+L_y/2;
suma=0;
% The original rectangle is divided into four rectangles, with the new
% coordinates of the vertex closest to the origin given by (a,b),
% according to the article http://www.dtic.mil/dtic/tr/fulltext/u2/410103.pdf
% a (x-coordinate), b (y-coordinate), c (new width), d (new height)
for i=1:4
  a(i)=max([ 0 (-1)^( 1/2*(i^2-i) )*x_m - L_x/2 ]);
  b(i)=max([ 0 (-1)^( 1/2*(i^2+i-2) )*y_m - L_y/2 ]);
  c(i)=max([ 0 (-1)^( 1/2*(i^2-i) )*x_m + L_x/2 - a(i) ]);
  d(i)=max([ 0 (-1)^( 1/2*(i^2+i-2) )*y_m + L_y/2 - b(i) ]);
  % only if the width and the height are nonzero; otherwise the piece contributes zero area
  if (c(i) ~= 0 && d(i) ~= 0)
    % the ellipse equation evaluated at each vertex
    eq_1=(a(i)/A)^2 + (b(i)/B)^2;
    eq_2=(a(i)/A)^2 + ((b(i)+d(i))/B)^2;
    eq_3=((a(i)+c(i))/A)^2 + ((b(i)+d(i))/B)^2;
    eq_4=((a(i)+c(i))/A)^2 + (b(i)/B)^2;
    % Intersection area S for each case, according to the number of
    % vertices inside the circle (or ellipse)
    if( eq_1 >= 1 && eq_2 >= 1 && eq_3 >= 1 && eq_4 >= 1)
      S=0;                        % case I: all vertices outside
    end
    if( eq_1 < 1 && eq_2 >= 1 && eq_3 >= 1 && eq_4 >= 1)
      S=A*B/2*F(a(i)/A,b(i)/B);   % case II: vertex 1 inside
    end
    if( eq_1 < 1 && eq_2 >= 1 && eq_3 >= 1 && eq_4 < 1)
      S=A*B/2*(F(a(i)/A,b(i)/B)-F((a(i)+c(i))/A,b(i)/B));   % case III: vertices 1 and 4 inside
    end
    if( eq_1 < 1 && eq_2 < 1 && eq_3 >= 1 && eq_4 >= 1)
      S=A*B/2*(F(a(i)/A,b(i)/B)-F(a(i)/A,(b(i)+d(i))/B));   % case IV: vertices 1 and 2 inside
    end
    if( eq_1 < 1 && eq_2 < 1 && eq_3 >= 1 && eq_4 < 1)
      S=A*B/2*(F(a(i)/A,b(i)/B)-F((a(i)+c(i))/A,b(i)/B) - F(a(i)/A,(b(i)+d(i))/B)); % case V: only vertex 3 outside
    end
    if( eq_1 < 1 && eq_2 < 1 && eq_3 < 1 && eq_4 < 1)
      S=c(i)*d(i);                % case VI: all vertices inside
    end
  else
    S=0;  % the width or the height of the new rectangle is zero
  end
  suma=suma+S;  % the total area is suma
end
end

function [res] = F (U,V)
res=asin( sqrt(1-U^2)*sqrt(1-V^2) - U*V ) - U*sqrt(1-U^2) - V*sqrt(1-V^2) + 2*U*V;
end

Everything after the sign '%' is a comment. Best regards, Ed.
Let $H$ be a subgroup of $G$. Define $N(H) = \{ a \in G \mid aHa^{-1}=H \}$. Show $H \subset N(H)$. I was able to show $N(H)$ is a subgroup of $G$, but now I am unsure about showing the relation $H \subset N(H)$. My attempt: Let $x \in H$. Let $h_1 \in H$ be some element of $H$. Then $x h_1 x^{-1} = (x h_1) x^{-1} = h_2 x^{-1}$, where $x h_1 = h_2 \in H$. Now $h_2 x^{-1}= h_3 \in H$. So we have shown that for any $h_1 \in H$ there exists $h_3 \in H$ s.t. $x h_1 x^{-1} = h_3$. Hence, $x H x^{-1} = H$ and thus $x \in N(H) \implies H \subset N(H)$. Did I make any mistake or leave out any necessary detail? Edit: as per the comment below, I had incorrectly claimed that $xHx^{-1} = H$ whereas I had only shown $xHx^{-1} \subset H$. So for the other part, let $h \in H$. Consider $x^{-1}hx = (x^{-1}h)x = h_1 x$, where $h_1 = x^{-1}h \in H$. Similarly $h_1x = h_2 \in H$. So $x^{-1}hx = h_2 \implies h = x h_2 x^{-1} \in xHx^{-1} \implies H \subset xHx^{-1}$. Therefore $H = xHx^{-1}$, and the rest of the proof follows from here.
Observe that $a\in H\implies aH=Ha=H$, since multiplying a subgroup by one of its own elements just permutes its elements; hence $aHa^{-1}=Ha^{-1}=H$. So $a\in H\implies a\in N(H)$, and therefore $H\subset N(H)$.
Observe that $a\in H\implies aH=Ha=H$, since multiplying a subgroup by one of its own elements just permutes its elements; hence $aHa^{-1}=Ha^{-1}=H$. So $a\in H\implies a\in N(H)$, and therefore $H\subset N(H)$.
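As a concrete sanity check of the containment, here is a small self-contained Python sketch (the choice of $G=S_3$ and $H=A_3$ is mine, purely for illustration) that computes $N(H)$ by brute force and verifies $H\subseteq N(H)$.

from itertools import permutations

# Elements of S3 as tuples: p[i] is the image of i. Composition (f*g)(x) = f(g(x)).
def compose(f, g):
    return tuple(f[g[x]] for x in range(3))

def inverse(f):
    inv = [0, 0, 0]
    for x in range(3):
        inv[f[x]] = x
    return tuple(inv)

G = list(permutations(range(3)))          # all of S3
H = {(0, 1, 2), (1, 2, 0), (2, 0, 1)}     # the cyclic subgroup A3

def conj(a, S):
    # return the set a S a^{-1}
    ai = inverse(a)
    return {compose(a, compose(h, ai)) for h in S}

N_H = {a for a in G if conj(a, H) == H}   # the normalizer N(H)
assert H <= N_H                           # H is contained in N(H)
print(len(N_H))                           # 6: here N(H) is all of S3, since A3 is normal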
How to prove $ \sqrt{\frac{a}{a+3b+5bc}}+\sqrt{\frac{b}{b+3c+5ca}}+\sqrt{\frac{c}{c+3a+5ab}}\geq 1$. Let $a,b,c$ be nonnegative real numbers such that $a+b+c=3$. Prove that $$ \sqrt{\frac{a}{a+3b+5bc}}+\sqrt{\frac{b}{b+3c+5ca}}+\sqrt{\frac{c}{c+3a+5ab}}\geq 1.$$ This problem is from http://www.artofproblemsolving.com/Forum/viewtopic.php?f=52&t=555716 @Calvin Lin Thank you
Let $\displaystyle A = \sqrt{\frac{a}{a+3b+5bc}}+\sqrt{\frac{b}{b+3c+5ca}}+\sqrt{\frac{c}{c+3a+5ab}}$ and $\displaystyle B = \sum_{cyc}a^2(a+3b+5bc)$. Then by Hölder's inequality we have $A^2B \ge (a+b+c)^3 = 27$, so it is sufficient to prove that $B \le 27$. $$B = \sum_{cyc}a^3 + 3 \sum_{cyc}a^2b+5\sum_{cyc}a^2bc$$ Since $\displaystyle \sum_{cyc}ab^2 \ge 3abc$ by AM-GM, and $\displaystyle 5\sum_{cyc}a^2bc = 5abc\sum_{cyc}a$, we have $$B \le \left(\sum_{cyc}a^3 + 3 \sum_{cyc}a^2b + 3 \sum_{cyc}ab^2 + 6abc\right) - 15abc + 5\sum_{cyc}a^2bc = (a+b+c)^3 - 5abc \left(3-\sum_{cyc}a\right) = 27$$
We know the result that if the sum of two numbers $a$ and $b$ is constant, then the maximum of $ab$ occurs when $a$ and $b$ are equal. E.g. if $a+b=6$, then $\max\{ab\}=9$. Using this result, we get the proof.
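A numeric spot check of the inequality on random points of the simplex $a+b+c=3$ (a sketch, not a proof; note that equality holds at $a=b=c=1$):

import math
import random

def lhs(a, b, c):
    return (math.sqrt(a / (a + 3*b + 5*b*c))
            + math.sqrt(b / (b + 3*c + 5*c*a))
            + math.sqrt(c / (c + 3*a + 5*a*b)))

rng = random.Random(1)
worst = float("inf")
for _ in range(100_000):
    # random interior point of the simplex a + b + c = 3
    u, v, w = rng.random(), rng.random(), rng.random()
    s = u + v + w
    worst = min(worst, lhs(3*u/s, 3*v/s, 3*w/s))

print(worst)   # stays >= 1; the minimum 1 is attained at a = b = c = 1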
Proving that $8^x+4^x\geq 5^x+6^x$ for $x\geq 0$. I want to prove that $$8^x+4^x\geq 6^x+5^x$$ for all $x\geq 0$. How can I do this? My attempt: I tried AM-GM: $$8^x+4^x\geq 2\sqrt{8^x4^x}=2(\sqrt{32})^x.$$ However, $\sqrt{32}\approx 5.66$, so I am not sure whether $$2(\sqrt{32})^x\geq 5^x+6^x$$ is true. I also tried computing derivatives, but this didn't simplify the problem. What can I do?
Hint. Let $f(t)=t^x$. Then by the Mean Value Theorem there is $t_1\in (6,8)$ such that $$f(8)-f(6)=f'(t_1)(8-6)\Leftrightarrow 8^x-6^x=2xt_1^{x-1}.$$ Similarly there is $t_2\in (4,5)$ such that $$f(5)-f(4)=f'(t_2)(5-4)\Leftrightarrow 5^x-4^x=xt_2^{x-1}.$$ It remains to show that for $x\geq 0$ $$2xt_1^{x-1}\geq xt_2^{x-1}.$$
The best and easiest way to prove it is using induction. Base case: for $x=1$, $8 + 4 > 6+5$. Induction step: assume that $8^k +4^k \ge 5^k + 6^k$. This implies that $$8^k \ge (5^k + 6^k) - 4^k$$ Multiplying both sides by $8$, we get $$\color{#f14}{8^{k+1} \ge 8\cdot5^k +8\cdot6^k -8\cdot4^k} \quad \quad \text{(1.)}$$ Now all that is left is to prove that the R.H.S. is at least $5^{k+1}+6^{k+1}-4^{k+1}$. Indeed, $$\begin{align}8\cdot5^k +8\cdot6^k -8\cdot4^k \ge 5^{k+1}+6^{k+1}-4^{k+1} &\iff 5^k(8-5) + 6^k(8-6) +4^k(4-8) \ge 0 \\ &\iff 3\cdot 5^k +2\cdot6^k - 4\cdot4^k \ge 0 \\ &\iff \color{#2c0}{\left(\frac 34\right)\cdot \left(\frac 54\right)^k + \left(\frac 24\right)\cdot \left(\frac 64\right)^k \ge 1} \quad \quad \text{(2.)}\end{align}$$ which is true for every $k \ge 0$: both terms on the left are nondecreasing in $k$, and at $k=0$ the left side already equals $\frac 34 + \frac 12 = \frac 54 \ge 1$. Now combining $(1.)$ and $(2.)$, we get $$\color{#f14}{8^{k+1}} \ge \color{navy}{8\cdot5^k +8\cdot6^k -8\cdot4^k} \ge \color{#2c0}{5^{k+1}+6^{k+1}-4^{k+1}}$$ so $$8^{k+1} \ge 5^{k+1} + 6^{k+1} -4^{k+1} \implies \boxed{8^{k+1} + 4^{k+1} \ge 5^{k+1} + 6^{k+1}}$$ which completes the induction.
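For completeness, here is a quick numeric scan of the claim over real exponents (a sanity check in Python, not a proof):

def gap(x):
    # gap(x) = 8^x + 4^x - 6^x - 5^x, which the claim says is >= 0
    return 8**x + 4**x - 6**x - 5**x

xs = [i / 1000 for i in range(0, 20001)]   # real x in [0, 20], step 0.001
assert all(gap(x) >= 0 for x in xs)
print(min(gap(x) for x in xs))             # 0.0, attained at x = 0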
Cubic polynomial equation $f(x)=0$ has a unique solution. I would appreciate it if somebody could help me with the following problem. Q: Let $f(x)=ax^3+bx^2+cx+d$ ($a,b,c,d\in\mathbb{R}$, $a\neq 0$). Prove that if $f(x)=f'(x)q(x)+r$ ($r\in\mathbb{R}$), where $q(x)$ is a polynomial, then the equation $f(x)=0$ has a unique solution in $\mathbb{R}$.
Since $\deg f = 3$ and $\deg f' = 2$, the polynomial $q$ must be linear. Letting $q(x)=ex+f$ where $e,f\in\mathbb R$ (this $f$ is a coefficient, not the polynomial), we have $$ax^3+bx^2+cx+d=(3ax^2+2bx+c)(ex+f)+r.$$ Comparing coefficients, we get the following: $$a=3ae$$$$b=3af+2be$$$$c=2bf+ce$$$$d=cf+r$$ Hence, we get $$e=\frac 13,\quad f=\frac{b}{9a},\quad c=\frac{b^2}{3a},\quad d=\frac{b^3}{27a^2}+r.$$ Hence, since $$f(x)=ax^3+bx^2+\frac{b^2}{3a}x+d,$$ we have $$f^\prime(x)=3ax^2+2bx+\frac{b^2}{3a}.$$ The discriminant of $f^\prime(x)=0$ is $(2b)^2-4\cdot 3a\cdot \frac{b^2}{3a}=0$, so $f^\prime$ never changes sign; hence $f$ is strictly monotone and, being a cubic, has exactly one real root, which is what we want.
Otherwise $f$ must have exactly two distinct real roots, of multiplicity $1$ and $2$ respectively. Let $x_0$ be the former and $x_1$ the latter. Then $f'(x_1)=0$, and evaluating the equation $$f(x)=f'(x)q(x)+r$$ at $x_1$ we find that $$r=0.$$ Thus $$f(x)=f'(x)q(x).$$ Since $f'$ has degree $2$, it must have another real root different from $x_1$; let's denote it by $x_2$. Then $x_2$ is also a root of $f$ having multiplicity $2$, a contradiction.
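A small numeric sketch (the parameter values are illustrative, not from the problem) of the conclusion both answers reach: once the relation $c=\frac{b^2}{3a}$ forced by $f=f'q+r$ holds, the cubic has exactly one real root for any $d$.

import numpy as np

def count_real_roots(a, b, d, tol=1e-8):
    c = b * b / (3 * a)             # the coefficient relation forced by f = f'(x) q(x) + r
    roots = np.roots([a, b, c, d])  # coefficients from highest to lowest degree
    return sum(1 for z in roots if abs(z.imag) < tol)

# Sample cubic with a=2, b=3; d = b^3/(27 a^2) = 0.25 would give a triple
# (still unique) root, so we test values around it.
for d in (-5.0, -1.0, 0.0, 0.7, 4.0):
    print(d, count_real_roots(2.0, 3.0, d))   # always exactly 1 real root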
How to find the limit $\lim_{x \rightarrow 1^+}\left (1 - \frac{1}{x}\right)^x \left( \log\left(1 - \frac{1}{x}\right) + \frac{1}{x - 1}\right)$. How can I find the limit $\lim_{x \rightarrow 1^+} \left (1 - \frac{1}{x}\right)^x \left( \log\left(1 - \frac{1}{x}\right) + \frac{1}{x - 1}\right)$? I tried turning it into a fraction so that I could apply L'Hopital's rule: $\left(1 - \frac{1}{x}\right)^x \left(\frac{\log\left(1 - \frac{1}{x}\right)(x-1) + 1}{x - 1}\right)$ But that didn't seem to get me anywhere. Thanks.
Change variables to $y=\frac{x}{x-1}$, so that $x=\frac{y}{y-1}$ and $x\to 1^+$ corresponds to $y\to+\infty$. Since $1-\frac1x=\frac1y$ and $\frac{1}{x-1}=y-1$, your limit becomes: $$ \lim_{y\to+\infty}\exp\left[\log (y-1-\log y)-\frac{y}{y-1}\log y\right] $$ Now subtract $\log y$ from each of the two terms (their difference is unchanged) to obtain: $$ \lim_{y\to+\infty}\exp\left[\log \left(1-\frac{1+\log y}{y}\right)-\frac{\log y}{y-1}\right]=\exp(\log (1-0)-0)=\boxed{1}. $$
Let $$I=\lim_{x\to 1^{+}}\left(1-\dfrac{1}{x}\right)^x\left(\ln{\left(1-\dfrac{1}{x}\right)}+\dfrac{1}{x-1}\right)$$ Let $x=t+1$: $$\Longrightarrow I=\lim_{t\to 0}\left(1-\dfrac{1}{t+1}\right)^{t+1}\left(\ln{\dfrac{t}{t+1}}+\dfrac{1}{t}\right)=I_{1}\cdot I_{2}$$ since $$I_{1}=\lim_{t\to 0^{+}}\left(1-\dfrac{1}{t+1}\right)^{t+1}=1$$ because, applying L'Hopital's rule, $$I_{1}=\lim_{t\to 0^{+}}e^{(t+1)[\ln{t}-\ln{(t+1)}]}=1$$ and $$I_{2}=\lim_{t\to 0}\left(\ln{\dfrac{t}{t+1}}+\dfrac{1}{t}\right)=1$$ because, applying L'Hopital's rule, $$\lim_{t\to 0}\dfrac{t\ln{t}-t\ln{(t+1)}+1}{t}=\lim_{t\to0}\left(\ln{\dfrac{t}{t+1}}-\dfrac{t}{t+1}+1\right)=1$$
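Either way, the value of the limit can be checked numerically; here is a short sketch evaluating the original expression just to the right of $x=1$ (the convergence is slow):

import math

def g(x):
    # the original expression (1 - 1/x)^x * (log(1 - 1/x) + 1/(x - 1))
    t = 1 - 1/x
    return t**x * (math.log(t) + 1/(x - 1))

for k in range(1, 7):
    x = 1 + 10**(-k)
    print(x, g(x))   # the values creep up toward 1 as x -> 1+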
Entries of the incidence matrix of an undirected graph. What is the sum of the entries in a row (respectively, column) of the incidence matrix of an undirected graph? I didn't fully understand the question: does it require a number as an answer, or just an explanation? I tried to solve it, but all I got is that the rows are the vertices and the columns are the edges, and I couldn't proceed.
The sum of the entries in each column of the matrix is $2$, because each edge connects two vertices. The sum of the entries in a row is the degree of the corresponding vertex, since each $1$ in that row marks an edge incident to that vertex.
The sum of the entries in each column of the matrix is $2$, because each edge connects two vertices. The sum of the entries in a row is the degree of the corresponding vertex, since each $1$ in that row marks an edge incident to that vertex.
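A small self-contained sketch (the example graph is mine, for illustration) that builds the incidence matrix of an undirected graph and checks both sums:

# Undirected graph on vertices 0..3 with an explicit edge list.
edges = [(0, 1), (0, 2), (1, 2), (2, 3)]
n = 4

# Incidence matrix: rows = vertices, columns = edges; M[v][j] = 1 iff
# vertex v is an endpoint of edge j.
M = [[1 if v in e else 0 for e in edges] for v in range(n)]

col_sums = [sum(M[v][j] for v in range(n)) for j in range(len(edges))]
row_sums = [sum(M[v]) for v in range(n)]
degrees  = [sum(1 for e in edges if v in e) for v in range(n)]

assert all(s == 2 for s in col_sums)   # every edge has exactly two endpoints
assert row_sums == degrees             # each row sums to the degree of its vertex
print(row_sums)                        # [2, 2, 3, 1]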
What is the value of $(-1)^\frac{4}{3}$? I was trying to plot the graph of $y=x^\frac{4}{3}$. However, the two online plotters I tried both gave me curves only on the right side of the y-axis; there is nothing on the left side. Shouldn't it be a curve symmetric about the y-axis? Similarly for $y=x^\frac{5}{3}$, which I thought was a function symmetric about the origin, but it only has values for nonnegative $x$. I tried to use Google to compute $(-1)^\frac{4}{3}$ and it automatically gives me $-0.5 - 0.866025404 i$ instead of $1$. And $(-1)^\frac{5}{3}$ got an answer of $0.5 - 0.866025404 i$ instead of $-1$. Why does the result include an imaginary part?
Fractional powers of negative numbers aren't uniquely defined. There are three cube roots of $-1$: $-1$, $\frac12+\frac{\sqrt{3}}2i$, and $\frac12-\frac{\sqrt3}2i$. The answer given by Google for $(-1)^{4/3}$ was the fourth power of the middle one of those three.
The function is defined for positive and negative values of $x$. The real and imaginary parts of the value are: $${\rm Re}[(-1)^{4/3}] = -\frac{1}{2}$$ $${\rm Im}[(-1)^{4/3}] = - \frac{\sqrt{3}}{2}$$ Given that there is no unique way to compute a fractional power of a negative number, Mathematica seems to assume the most general complex form (the principal branch). There is no reason that the function should be (or is) symmetric with respect to the interchange $x \leftrightarrow -x$.
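To reproduce the principal value reported above, here is a short sketch using Python's cmath module (purely illustrative):

import cmath

# Principal branch: (-1)^(4/3) = exp((4/3) * Log(-1)), with Log(-1) = i*pi.
z = cmath.exp((4/3) * cmath.log(-1))
print(z)             # approximately -0.5 - 0.8660j

# The same value as the fourth power of the "middle" cube root of -1:
w = 0.5 + (cmath.sqrt(3) / 2) * 1j
print(w**4)          # matches z up to rounding

# All three cube roots of -1, for comparison:
print([cmath.exp(1j * cmath.pi * (1 + 2*k) / 3) for k in range(3)])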
Bounds of $\frac{\ln(x+1)}{x}\ \forall x>0$. Let $f:(0,\infty)\to\mathbb{R}$, $f(x)=\frac{\ln(x+1)}{x}$. Prove that $f(x) \in(0,1)$ for all $x>0$. I calculated the derivative of $f(x)$: $f'(x)=\frac{\frac{x}{x+1}-\ln(1+x)}{x^2}$, which I think has the same sign as $\frac{x^3}{x+1}-x^2\ln(1+x)$. I have no idea what to do next: I can't find the roots of this expression, and I don't see any connection as to why $f$ should be bounded by $0$ and $1$. I hope I formatted this well; I don't usually post here, but I am really curious how I could solve this kind of exercise.
For $x \in (0,\infty)$, you have $$\ln (x+1) = \int_0^x \frac{dt}{1+t}$$ Hence $$0 < f(x) = \frac{1}{x}\int_0^x \frac{dt}{1+t} < \frac{1}{x}\int_0^x \ dt =1,$$ since the integrand is positive and strictly less than $1$ on $(0,x)$, and all the maps considered are continuous.
It is much simpler to show that the derivative of $\ln(x+1)$, namely $\frac{1}{1+x}$, is always in $(0,1)$ for $x>0$ (and it is continuous). Then by the Mean Value Theorem $\ln(x+1)$ is strictly between $0$ and $x$; think about it.
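A quick numeric spot check of the claimed bounds (not a proof):

import math

def f(x):
    return math.log(x + 1) / x

xs = [10**(k / 10) for k in range(-60, 61)]   # x from 1e-6 up to 1e6
vals = [f(x) for x in xs]
assert all(0 < v < 1 for v in vals)
print(min(vals), max(vals))   # the values approach 1 as x -> 0+ and 0 as x -> infinity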
Find all triples $(x,y,z)\in \Bbb{R}^3$ that satisfy the following conditions: The question is to find all real solutions to the system of equations: $y=\Large\frac{4x^2}{4x^2+1}$, $z=\Large\frac{4y^2}{4y^2+1}$, $x=\Large\frac{4z^2}{4z^2+1}$. This seems simple enough, so I tried substituting the values of $x$, $y$ and $z$ into the different equations, but I only ended up with a huge degree-8 equation, which clearly doesn't seem like the right approach. I really have no idea how to go about solving this if substitution is not the answer. Any help would be greatly appreciated :)
We can note that $4a^2+1 \ge 4a \ \ \forall a \in \Bbb R$, since $(2a-1)^2\ge 0$. Also, since $\frac{4a^2}{4a^2+1} \ge 0$, we know that $x,y,z \ge 0$. Therefore $y=\frac{4x^2}{4x^2+1} \le \frac{4x^2}{4x}=x$ for nonzero values of $x$. Similarly $z \le y$ and $x \le z$. Therefore $x \le z \le y \le x \implies x=y=z$. Now solving the equation $a=\frac{4a^2}{4a^2+1}$ for $a \neq 0$: $4a^2+1=4a \implies (2a-1)^2=0 \implies a = \frac{1}{2} \implies x=y=z=\frac{1}{2}$. Finally, since we assumed the numbers are non-zero, we should also include the solution $(0,0,0)$.
$\bullet\; $ Clearly $x=y=z=0$ is a solution of the system of equations. $\bullet\; $ If $x,y,z\neq 0$, then $\displaystyle y=\frac{4x^2}{4x^2+1}\Longrightarrow \frac{1}{y}=1+\frac{1}{4x^2}$, $\displaystyle z=\frac{4y^2}{4y^2+1}\Longrightarrow \frac{1}{z}=1+\frac{1}{4y^2}$, $\displaystyle x=\frac{4z^2}{4z^2+1}\Longrightarrow \frac{1}{x}=1+\frac{1}{4z^2}$. Adding all three and rearranging gives $\displaystyle \bigg(1-\frac{1}{2x}\bigg)^2+\bigg(1-\frac{1}{2y}\bigg)^2+\bigg(1-\frac{1}{2z}\bigg)^2=0,$ which is possible only when $\displaystyle 1-\frac{1}{2x}=0,\ 1-\frac{1}{2y}=0,\ 1-\frac{1}{2z}=0.$ So the system of equations also has $\displaystyle (x,y,z)=\bigg(\frac{1}{2},\frac{1}{2},\frac{1}{2}\bigg)$ as a solution.
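Both solutions, and the collapsing behaviour behind the monotonicity argument above, can be checked numerically with a small sketch:

def g(t):
    return 4*t*t / (4*t*t + 1)

def residual(x, y, z):
    # max deviation of (x, y, z) from satisfying y=g(x), z=g(y), x=g(z)
    return max(abs(y - g(x)), abs(z - g(y)), abs(x - g(z)))

print(residual(0.0, 0.0, 0.0))   # 0.0, so (0,0,0) solves the system
print(residual(0.5, 0.5, 0.5))   # 0.0, so (1/2,1/2,1/2) solves the system

# Iterating t -> g(t) decreases t (since 4t^2 + 1 >= 4t): iterates starting
# below 1/2 sink to 0, while iterates starting above 1/2 creep down to 1/2.
t = 0.4
for _ in range(1000):
    t = g(t)
print(t)   # close to 0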
Calculating probability of game ending after $n$ flips. Two players A and B flip a coin sequentially. The game finishes when the sequence TTH is formed, in which case player A wins, or the sequence HTT is formed, in which case player B wins. What is the probability that the game will finish at the $n$-th flip? What I did: A wins at the $n$-th flip iff the sequence is $n-1$ T's followed by a single H, which has probability $\frac{1}{2^n}$. B wins at the $n$-th flip iff the sequence ends with HTT and there are no two consecutive T's in the first $n-3$ flips; this happens (I think) with probability $\frac{F_{n-2}}{2^n}$, where $F_{n}$ is the $n$-th Fibonacci number. I proved this by induction. Thus, the total probability is $\frac{F_{n-2}+1}{2^n}$. Can someone verify that this is correct and/or share how you would solve this problem? If the answer is correct, then by summing over all $n$ we can obtain an interesting identity involving the Fibonacci numbers!
In the first three tosses, the probability that one of the players wins is $\frac{2}{8}$, i.e. HTT and THH are the winning flips out of $2^3$. Thus the probability that the game will end in 3 tosses is $\frac{1}{4}$. With an additional toss, the probability that one of the players wins is $\frac{4}{16}$, i.e. THTT, HHTT, TTHH, HTHH are the winning flips out of $2^4$. Thus the probability that the game ends at the 4th toss equals the probability that the game does not end in the first three tosses and ends at the fourth, which is $\frac{3}{4}\cdot\frac{4}{16} =\left(\frac{3}{4}\right)^{4-3}\cdot\frac{1}{4}$. Extending the logic, the probability that the game will end (in other words, one of the players will win) at the $n$-th flip is $\left(\frac{3}{4}\right)^{n-3}\cdot\frac{1}{4}$. The book is correct in its solution.
In the first three tosses, the probability that one of the players wins is $\frac{2}{8}$, i.e. HTT and THH are the winning flips out of $2^3$. Thus the probability that the game will end in 3 tosses is $\frac{1}{4}$. With an additional toss, the probability that one of the players wins is $\frac{4}{16}$, i.e. THTT, HHTT, TTHH, HTHH are the winning flips out of $2^4$. Thus the probability that the game ends at the 4th toss equals the probability that the game does not end in the first three tosses and ends at the fourth, which is $\frac{3}{4}\cdot\frac{4}{16} =\left(\frac{3}{4}\right)^{4-3}\cdot\frac{1}{4}$. Extending the logic, the probability that the game will end (in other words, one of the players will win) at the $n$-th flip is $\left(\frac{3}{4}\right)^{n-3}\cdot\frac{1}{4}$. The book is correct in its solution.
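The formulas proposed above are easy to test empirically. Here is a Monte Carlo sketch of the game (the helper names are mine) that estimates $P(\text{game ends at flip } n)$ for the patterns TTH and HTT:

import random

def game_length(rng):
    # flip a fair coin until TTH or HTT appears; return the number of flips
    seq = ""
    while True:
        seq += rng.choice("HT")
        if seq.endswith("TTH") or seq.endswith("HTT"):
            return len(seq)

rng = random.Random(42)
trials = 200_000
counts = {}
for _ in range(trials):
    n = game_length(rng)
    counts[n] = counts.get(n, 0) + 1

for n in range(3, 9):
    print(n, counts.get(n, 0) / trials)   # empirical P(game ends at flip n)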
Incorrect proof of the infinities between 0 and 1 and 0 and 2. In reading another question (Explaining Infinite Sets and The Fault in Our Stars), it got me thinking about the way that you can prove that the number of numbers between 0 and 1 and between 0 and 2 are the same (apologies if my terminology is a bit woolly and imprecise; hopefully you catch my drift though). The way it is proved is that you can show that there is a projection of all the numbers on $[0,1]$ to $[0,2]$ and vice versa. I'm good with this. However, I then got to thinking that you can also create a projection that takes all the numbers from $[0,1]$ and maps them to two numbers from $[0,2]$, by saying that a number $x$ can go to $x$ or $x+1$. This is reversible too, so you can say that you can find a pair of numbers in $[0,2]$ such that they differ by one and the lower is a member of $[0,1]$. Why is it that this doesn't prove that there are twice as many numbers in $[0,2]$ as in $[0,1]$? It seems to me that this is the crux of why it runs counter to intuition, but I can't work out the flaw. Or is it just in the nature of infinity that infinity times 2 is still the same infinity, and thus it's just that infinity is "weird"?
The crux is the fact that you don't specify how you measure the "size" of an infinite set. In the case of the real numbers, and even more so when we consider intervals, we can measure their length, in which case $[0,2]$ is twice as long as $[0,1]$ and therefore twice as large. If you want a "raw" measurement of how large a set is, then you reduce to the notion of cardinality, in which case we only care about bijections, and therefore $[0,1]$ and $[0,2]$, and in fact $\Bbb R$ itself, all have the same size. There is still a problem with your argument. The fact that you can map each number to two different numbers (or rather, map exactly two numbers to the same number) is not a good argument for "there are twice as many elements" (which implies a strict inequality, to my ears anyway). For example, consider $\Bbb N$ and map every even element $2k$ to $k$, and every odd element $2k+1$ to $k$ as well; of course $\Bbb N$ does not have strictly more elements than $\Bbb N$. You also have that each natural number has exactly two numbers which map to it, but it still doesn't mean that there are twice as many natural numbers as there are natural numbers. That's just not good mathematics.
For finite sets the simplest way is to count the elements, which gives a proper concept of size. For infinite sets it gets more tricky. In naive set theory one compares sets by trying to establish one-to-one mappings, like you wrote, in which case the sets are considered to be of the same size. Looking at $d = b - a$ to compare intervals $[a,b]$, or $[0,1] \subset [0,2]$, does not help in the context of comparing the number of their elements relative to each other: they all end up as large as $[0,1]$ (for $a\ne b$). Regarding the last line of the question: while $2 \cdot \infty$ might yield just $\infty$ in your case and is counter-intuitive, or $\mbox{card}(\mathbb{N}^n) = \mbox{card}(\mathbb{N})$, which I found remarkable (link), you will find funny results for $\infty - \infty$ (see certain quantum field theoretic calculations), and you can enlarge already infinite sets $A$ with the power set construction $2^A$, getting into different orders of infinity (which, remarkably, is the reason why there are uncomputable functions). Why is it "weird" or counter-intuitive? Personally, I tend to the biological explanation: it is us, not the subject. Our brain works fine for our environment and for us living in it, which is finite, mostly flat, rather slow (compared to the speed of light), has not that much gravity (compared to the conditions on the surface of a neutron star), and is not too small and not too large. So we seem to have more difficulties grasping everything which is not like that, such as infinities, the theory of relativity and quantum mechanics. If we had had to grapple the last billion years with infinite objects in our physical world, I believe we wouldn't be surprised that often.
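To make the two-to-one map on $\Bbb N$ from the first answer concrete, a tiny sketch:

# The 2-to-1 map n -> n // 2 sends both 2k and 2k+1 to k, yet it maps the
# natural numbers onto the natural numbers -- so "two preimages per point"
# cannot by itself mean "the domain is twice as large".
N = 20
for k in range(N // 2):
    preimages = [n for n in range(N) if n // 2 == k]
    assert preimages == [2 * k, 2 * k + 1]
print("each k below", N // 2, "has exactly two preimages")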
If in a semigroup $S$, $\forall x \exists ! y:xyx=x$, then $S$ is a group. If for every $x$ in a semigroup $S$ there exists a unique $y$ such that $x y x=x$, then $S$ is a group. (Not to be confused with an inverse semigroup, where only the $y$ satisfying both $xyx=x$ and $yxy=y$ is unique.) After trying with no result, I used Prover9 to find a proof. I did get one, but it was very hard to understand (possible to go through once, but really hard to remember what the point is). Is there any somewhat comprehensible or conceptual proof of this? Is there a theory underlying this?
Hint: You just need to find unique inverses for every element, and the identity element will arise by forming $xx^{-1}$ with any element. If for every $x$ there is a unique $y$ such that $xyx=x$, then for every $x$ there is a unique $y$ such that $yx$ does nothing to the element $x$... Can you find the identity element and the inverses from here?
Hint: You just need to find unique inverses for every element, and the identity element will arise by forming $xx^{-1}$ with any element. If for every $x$ there is a unique $y$ such that $xyx=x$, then for every $x$ there is a unique $y$ such that $yx$ does nothing to the element $x$... Can you find the identity element and the inverses from here?
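For small carrier sets the statement can also be verified exhaustively by machine, in the same spirit as the Prover9 search mentioned in the question. A brute-force Python sketch (sizes 2 and 3 only, for speed):

from itertools import product

def is_group(op, n):
    R = range(n)
    for e in R:  # look for a two-sided identity...
        if all(op[e, a] == a and op[a, e] == a for a in R):
            # ...and check every element has a two-sided inverse
            return all(any(op[a, b] == e and op[b, a] == e for b in R) for a in R)
    return False

def check(n):
    R = range(n)
    cells = list(product(R, repeat=2))
    for vals in product(R, repeat=n * n):
        op = dict(zip(cells, vals))
        # keep only associative operations (semigroups)
        if not all(op[op[a, b], c] == op[a, op[b, c]]
                   for a in R for b in R for c in R):
            continue
        # the hypothesis: every x has a UNIQUE y with x*y*x = x
        if all(sum(1 for y in R if op[op[x, y], x] == x) == 1 for x in R):
            assert is_group(op, n)

for n in (2, 3):
    check(n)
    print("all size-%d semigroups with the unique-y property are groups" % n)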