Show that $(\mathcal{M},d)$ is a complete metric space Let $(\Bbb{R},\mathcal{M},\mu)$ be the Lebesgue measure space modulo the equivalence relation $A\sim B$ if $\mu(A\bigtriangleup B)=0$. Let $d(A,B)=\mu(A\bigtriangleup B)$. Show that $(\mathcal{M},d)$ is a complete metric space. I could show that $d$ is a metric on the equivalence classes, but how can I show that this metric is complete?
Note that $d(A,B) = \| 1_A-1_B\|_1$, so we can use properties of $L^1(\mathbb{R})$ to show completeness. As Michael noted in the comments above, one needs to be careful with infinities. Suppose we have a $d$-Cauchy sequence of sets $A_n$. Without loss of generality we can presume that $d(A_n,A_1) < 1$ for all $n$. Let $f_n = 1_{A_n} - 1_{A_1}$. We have $f_n \in L^1(\mathbb{R})$ for all $n$, and $(f_n)$ is Cauchy. Since $L^1(\mathbb{R})$ is complete, we know that there is some $f \in L^1(\mathbb{R})$ such that $f_n \to f$ (in the $\|\cdot\|_1$ norm, of course). The only issue remaining is to 'extract' a set $A$ such that $f(x) = 1_A(x)- 1_{A_1}(x)$ a.e. $[\mu]$. A standard result (see https://math.stackexchange.com/a/716328/27978, for example) is that if $f_n \to f$ in $L^1(\mathbb{R})$, then there is a subsequence such that $f_{n_k}(x) \to f(x)$ a.e. $[\mu]$. We note that $f_n(x) \in \{-1,0,+1\}$ for all $n$, hence we have $f(x) \in \{-1,0,+1\}$ a.e. $[\mu]$. Let $N= f^{-1}(\{-1\})$, $Z= f^{-1}(\{0\})$ and $P= f^{-1}(\{+1\})$, and let $A = (Z \cap A_1) \cup P$; then it is easy to check that $f(x) = 1_A(x) - 1_{A_1}(x)$ a.e. $[\mu]$ and $d(A,A_n) \to 0$. (Measurability follows from the fact that $f \in L^1(\mathbb{R})$.)
You need to appeal to the definition of completeness. Suppose you are given a Cauchy sequence of sets $A_n$. That is, for any $\epsilon > 0$ one can find a sufficiently large $N$ for which $m,n \geq N$ implies $\mu(A_{n} \bigtriangleup A_{m}) < \epsilon$. We would like to show that a limit set $A_{\infty}$ exists, and is measurable. Here is a hint: Given a sequence of measurable sets $A_n$, there are a number of different sets one can associate to it, all of which are measurable. One example is the union $\bigcup A_n$, but there are others. Consider these.
Expected value of independent random variables So I have this problem where I need to find $E(4X+3Y-2Z^2-W^2+8)$ where $W,X,Y,Z$ are all standard normal and I'm kind of confused on how to find the expected value here. I thought to do it we just had to add the means together like this: $E(X_1+X_2+X_3)=\mu_1+\mu_2+\mu_3$ as long as they are iid of course. How come the answer is $5$? I tried doing $$\begin{align}E(4X+3Y-2Z^2-W^2+8)\\ =E(4X)+E(3Y)-E(2Z^2)-E(W^2)+E(8)\\ =4E(X)+3E(Y)-2E(Z^2)-E(W^2)+8\\=(4*0)+(3*0)-(2*0)-0+8\\=8\end{align}$$ And quite obviously $8$ doesn't equal $5$. So I'm just not sure where I went wrong.
The variance of $Z$ is $E(Z^2)-(E(Z))^2$. Since the variance is $1$, and $E(Z)=0$, we have $E(Z^2)=1$. Same for $W^2$. That gets us $5$. The error was in thinking that $E(Z^2)=0$ and $E(W^2)=0$.
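A quick Monte Carlo check confirms the corrected expectation; this is a sketch in Python (the sample size and seed are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 10**7  # arbitrary sample size
W, X, Y, Z = rng.standard_normal((4, N))

# E(X) = E(Y) = 0, but E(Z^2) = E(W^2) = Var = 1, so the answer is -2 - 1 + 8 = 5
print(np.mean(4*X + 3*Y - 2*Z**2 - W**2 + 8))  # approximately 5.0
```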
You can't expand $E(X+Y^2) = E(X) + E(Y^2)$. It is not linear at all.
Questions about cosets etc. I don't understand the notes my teacher gave me, so... please answer the questions in bold. From my teacher's notes: "Suppose $H$ is a subgroup of $G$. For any $a\in G$, we define its associated left coset and right coset to be $$aH = \{ab \mid b \in H\}$$ $$Ha = \{ba \mid b\in H\}$$ respectively." What is a coset? What are the differences between left and right? Also from the notes: "Suppose $H$ is a subgroup of $G$. The left cosets $\{aH \mid a\in G\}$ define a partition of $G$, and so do the right cosets." What does "define a partition of $G$" mean? Also: "A function $f:G\rightarrow H$ where $G$, $H$ are groups is called a group homomorphism if $$f(a\cdot b)=f(a)\cdot f(b)$$ for all $a,b \in G$. It's called an isomorphism if it's bijective." Can anyone explain the notation "$f:G\rightarrow H$"? What does bijective mean?
The definitions of left / right coset are right there, and the different definitions make it clear that left and right coset are in general different. A partition of a set $X$ is a collection of (non-empty) subsets of $X$ such that these subsets are pairwise disjoint while the union of all these subsets is $X$. If $X,Y$ are sets, the notation $f\colon X\to Y$ expresses that $f$ is a function from $X$ to $Y$; that is, for each $x\in X$, $f$ defines some $f(x)\in Y$. A function can be injective (or one-to-one) meaning that $f(x_1)=f(x_2)$ holds only when $x_1=x_2$; it can be surjective (or onto) if for every $y\in Y$ there is at least one $x\in X$ with $f(x)=y$; and if a function is both injective and surjective, it is called bijective.
In my opinion, Wikipedia is a good enough source to read and understand these basic concepts in algebra. (Cosets) https://en.wikipedia.org/wiki/Coset (Partitions of a group) https://groupprops.subwiki.org/wiki/Left_cosets_partition_a_group (Group isomorphism) https://en.wikipedia.org/wiki/Group_isomorphism
How to find the last digit of $8^{97}$? What will be the units digit of $8^{97}$? I know I could start computing $8^1$, then $8^2, 8^3$, etc., but is there a shorter method for finding the units digit?
Hint for proof: Notice that when you take a product of two numbers the final digit of the result only depends on the final digit of each original factor. Why should that be the case? Why does this imply that taking powers of 8 will give a pattern in the final digit? In fact this is true for any natural number, not just 8, and this pattern will always be of a specific sort.
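To make the hinted pattern concrete, here is a short Python sketch: modular exponentiation gives the units digit directly, and the first few powers expose the period-4 cycle.

```python
# Units digits of 8^k cycle with period 4: 8, 4, 2, 6, 8, 4, ...
print([pow(8, k, 10) for k in range(1, 9)])  # [8, 4, 2, 6, 8, 4, 2, 6]

# 97 = 4*24 + 1, so 8^97 has the same units digit as 8^1
print(pow(8, 97, 10))  # 8
```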
You can use the geometric progression formula, which has the form a*r^(n-1). "a" represents the first term in the sequence, "r" represents the ratio between every two successive numbers in the sequence, and "n" represents the position of the number you are looking for. So: a = 8^1 = 8, r = 8^2/8^1 = 8^3/8^2 = (well, you get the idea) = 8, and n = 97, giving 8*8^(97-1). Unfortunately I use a 6-digit calculator which will give me the wrong value, so just calculate 8^96 yourself, then multiply by 8.
Longest cylinder of specified radius in a given cuboid Find the maximum height (in exact value) of a cylinder of radius $x$ so that it can be completely placed into a $100 cm \times 60 cm \times 50 cm$ cuboid. This question comes from http://hk.knowledge.yahoo.com/question/question?qid=7012072800395. I know that this question is equivalent to twice the maximum height (in exact value) of a right cone of radius $x$ that can be completely placed into a $50 cm \times 30 cm \times 25 cm$ cuboid with the apex of the right cone at a corner of the cuboid, but so far I have no idea how to proceed.
Here's a possibly-wrong approach ... Center a "short" cylinder at the cuboid's center, oriented along a diagonal. Elongate the cylinder ---in each direction--- until it collides with a face, say the "top" (and, at the other end, the "bottom"). Letting the cylinder scrape against the top (and bottom) face(s), continue elongating the cylinder ---such that the projection of its axis into the top face coincides with the face's diagonal--- until it collides with a second (pair of) face(s). Finally, letting the cylinder scrape those faces, elongate in the only dimension allowable until it collides with the final (pair of) face(s). Symbolically ... Let the cuboid have dimensions $2a$, $2b$, $2c$; center it at the origin, and let its edges be axis-aligned. Let the cylinder have radius $s$. (I use "$r$" below for a different parameter.) Taking $P = p \; ( a, b, c )$ to be the center of one end of the "short" cylinder, we elongate the cylinder (that is, we increase $p$) until it collides with the walls $x=\pm a$; this happens when $$P_x + s \frac{P_x}{|P|} = a \qquad (1)$$ Let $p_\star$ be the value of $p$ solving $(1)$. Now, let $Q := p_\star \; ( a, b, c ) + q \; ( 0, b, c )$ take over as the center of the end of the cylinder; we elongate until the cylinder hits the walls $y=\pm b$: $$Q_y + s \frac{Q_y}{|Q|} = b \qquad (2)$$ Writing $q_\star$ for the appropriate value of $q$ solving $(2)$, we finish with cylindrical endpoint $R := p_\star \; ( a, b, c ) + q_\star\; ( 0, b, c ) + r ( 0, 0, c )$ until $$R_z + s \frac{R_z}{|R|} = c \qquad (3)$$ when $r = r_\star$. Then $$\ell = 2|R| = 2\sqrt{\; p_\star^2 \; a^2 + \left( p_\star + q_\star \right)^2 \; b^2 + \left( p_\star + q_\star + r_\star \right)^2 \; c^2 \; }$$ may (or may not) be the length of the longest cylinder in the cuboid. At least, it should be a "local maximum". As for solving $(1)$, $(2)$, $(3)$ ... For $(1)$, defining $d := \sqrt{a^2+b^2+c^2}$, we have $$p a + s \frac{p a}{p d} = a \qquad \to \qquad p_\star = 1 - \frac{s}{d}$$ For $(2)$, we have $$(p_\star+q) b + \frac{s(p_\star + q )b}{\sqrt{p_\star^2 a^2 + (p_\star + q )^2(b^2+c^2)}} = b$$ $$\to \qquad s(p_\star + q ) = \left(1-p_\star-q\right)\;\sqrt{p_\star^2 a^2 + (p_\star + q )^2(b^2+c^2)}$$ $$\to \qquad s^2 (p_\star + q )^2 = \left(1-p_\star-q\right)^2\;\left( p_\star^2 a^2 + (p_\star + q )^2(b^2+c^2) \right)$$ which makes $q_\star$ the root of a quartic polynomial. The same is true for $r_\star$. I'll leave the sorting-out of those roots to the reader.
Is the $x$ axis-aligned? Would it be possible to have $x=0$? The only thing I want to mention is that the maximum length of a line within the bounded space would be the diagonal, based on the metric properties.
Evaluate $\lim_{x\rightarrow\pi}\frac{\sin(mx)}{\sin(nx)}$ I have to evaluate $\lim_{x\rightarrow\pi}\frac{\sin(mx)}{\sin(nx)}$ where $m,n \in\mathbb{N}^*$. At first I thought I could just use the remarkable limit $\lim_{x\rightarrow0}\frac{\sin(x)}x = 1$ and that the answer could just be $\frac{m}{n}$, but this is not the answer... I mean, it's part of it, but I don't understand why.
Let $y=x-\pi $. Then $$\sin (mx)=\sin (m (y+\pi))$$ $$=(-1)^m\sin (my) \sim (-1)^mmy$$ thus, the limit is $$(-1)^{m-n} \frac {m}{n}$$
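A numeric spot-check of this result (Python sketch; the test values $m=3$, $n=5$ are arbitrary, so the expected limit is $(-1)^{3-5}\cdot\frac35=0.6$):

```python
import math

m, n = 3, 5  # arbitrary test values; expected limit is (-1)**(m-n) * m/n = 0.6
for h in (1e-3, 1e-5, 1e-7):
    x = math.pi + h
    print(math.sin(m * x) / math.sin(n * x))  # -> 0.6
```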
Simply because $mx \to 0$ does not hold when $x\to \pi$, so the standard limit cannot be applied directly.
Sum from infinity to infinity How does one evaluate the following limits? 1) $\lim_{n \rightarrow \infty} \sum_{k=n}^\infty 1$ 2) $\lim_{n \rightarrow \infty} \sum_{k=n}^\infty k^{-1}$ 3) $\lim_{n \rightarrow \infty} \sum_{k=n}^\infty 2^{-k}$ Do all three limits evaluate to $0$? If so, why? Perhaps only the third limit evaluates to $0$, while the first two are undefined. Again, why? @PeterTamaroff pointed out that the first two limits are undefined for fixed $n$, but does that necessarily imply they are undefined in the limit?
Consider the sequence $$\left(\sum_{k=n}^\infty 1\right)_{n \geq 1}.$$ This sequence is $$(\infty,\infty,\infty,\ldots).$$ Hence, if we wish to define a limit of this sequence, it should be $$\lim_{n \rightarrow \infty} \sum_{k=n}^\infty 1=\lim_{n \rightarrow \infty} \infty=\infty.$$ The second example is resolved in the same way (since the series is also divergent). In the third case, the sequence is $$\left(\sum_{k=n}^\infty \frac{1}{2^k}\right)_{n \geq 1}.$$ Since $$\sum_{k \geq 1} \frac{1}{2^k}=1,$$ the sequence is $$(1,\tfrac{1}{2},\tfrac{1}{4},\ldots)$$ so $$\lim_{n \rightarrow \infty} \sum_{k=n}^\infty \frac{1}{2^k}=0.$$
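The geometric tail can be checked numerically; a Python sketch (each infinite tail is truncated at 200 terms, which is far past double precision here):

```python
from math import fsum

# sum_{k=n}^{infinity} 2^{-k} = 2^{1-n}, which tends to 0 as n grows
for n in (1, 5, 10, 20):
    tail = fsum(2.0**-k for k in range(n, n + 200))  # 200 terms is plenty
    print(n, tail)  # 1.0, 0.0625, ..., matching 2**(1 - n)
```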
Although your second sum, 2) $\lim_{n \rightarrow \infty} \sum_{k=n}^\infty k^{-1}$, converges to $0$, it is interesting that $\lim_{n \rightarrow \infty} \sum_{k=n}^{2n} k^{-1} = \ln 2$. Are you looking for other cases where the base index starts from infinity?
The positive integer $k$ has the property that for all $m \in \mathbb{N}$, $k \mid m \implies k \mid m_r$. Show that $k \mid 99$. Question from Engel's book Problem-Solving Strategies. The positive integer $k$ has the property that for all $m \in \mathbb{N}$, $k \mid m \implies k \mid m_r$, where $m_r$ is the reflection of $m$, i.e. if $m=1234$ then $m_r = 4321$. Show that $k \mid 99$. I start with a small case: say $k$ divides some 2-digit number $ab$. Then $k$ divides $ba$ also. Since $ab = 10a+b$ and $ba=10b+a$, I eliminate $a$ to get $k \mid -99b$. $b$ is a 1-digit number, so I think I need to use this fact somehow, but I am stuck here. For the 3-digit case, let the number be $abc$. By similar reasoning, I get that $k$ divides $99a-99c = 99(a-c)$. Again, $a-c$ is small, so maybe I can brute-force this. I have considered trying to show that $k$ must be a palindrome: since $k \mid k$, we have $k \mid k_r$, so maybe I can get something from that? Thanks for the help!
Let's just look at the divisibility rules for 3, 9, and 11. If 3 or 9 divides the sum of a number's digits, then it divides the number. If 11 divides the alternating sum of a number's digits, then 11 divides the number. Let's enumerate the digits of $m$ with subscripts so that $m_1$ is the first digit and $m_n$ is the last, where $m$ is an $n$-digit number. The sum of the digits is $S=\sum_{i=1}^n m_i$, and the alternating sum is $S_a=\sum_{i=1}^n (-1)^{i+1}m_i$. If $\ 9 \ | \ S$, then $\ 9 \ | \ m_r$, simply by commutativity of addition. Similarly, if $\ 11 \ | \ S_a$, then $\ 11 \ | \ m_r$. The reason is that if $n$ is odd, the same digits will be negative in the sum $S_a$ regardless of the order being reversed, and if $n$ is even, then the sum will be opposite in sign but the same magnitude. Ex: odd $n$, the number $94572$: $m_r: 94572 \to 27549$, and $9-4+5-7+2=2-7+5-4+9$. Ex: even $n$, the number $9457$: $m_r: 9457 \to 7549$, and $9-4+5-7 = - (7-5+4-9)$.
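The invariance of these divisibility tests under reversal is easy to spot-check (a Python sketch; the random multiples and seed are arbitrary):

```python
import random

def reflect(m):
    # digit reversal, e.g. 1234 -> 4321
    return int(str(m)[::-1])

random.seed(0)
for _ in range(10**4):
    m = 99 * random.randrange(1, 10**9)  # an arbitrary multiple of 99
    # 9 | m and 11 | m survive reversal, hence 99 | reflect(m)
    assert reflect(m) % 99 == 0
print("reversal preserves divisibility by 99 in all sampled cases")
```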
HINT Write $$k = \sum_{i=0}^{r_k}10^ix_i$$ where $x_i \in \{0,...,9\}$. Since $k | k \implies k | k_r$, we have $$\sum_{i=0}^{r_k}10^ix_i \quad \biggr{|} \quad\sum_{i=0}^{r_k}10^ix_{(r_k-i)},$$ $$\implies \lambda\sum_{i=0}^{r_k}10^ix_i = \sum_{i=0}^{r_k}10^ix_{(r_k-i)},$$ $$\implies 0 = \sum_{i=0}^{r_k}10^i(x_{(r_k-i)}-\lambda x_i),$$ $$\implies 0 = x_{(r_k-i)}-\lambda x_i \quad \forall i.$$ We therefore have that $k$ is a palindrome.
How to add some accents in Microsoft Word equations for geometry I hope this is the right place for my question. I'm working on a geometry book and I have to type some accents, and I cannot find a way to do it. Why Microsoft Word? Because I tried MathType and MathMagic, but they made a huge mess of my files and even corrupted them. Also, I don't have time to learn LaTeX or money to pay another person to do it. I need these to be text, not images. I'm desperate, and will be grateful for any hint. The accents are the following: triangle notation over a letter, arc notation over letters, semiplane symbol.
You could use \above. For $\overset{\Delta}{A}$, use A \above("Delta Symbol"), it'll become A┴Delta and then press Enter for $\overset{\Delta}{A}$. For $\overset{\cap}{AB}$, use AB\above("cap symbol").
The information above was really helpful. To get the complete list of all the shortcuts for accents, please see "Shortcut for putting accents in MS Word equation editor".
Does $n^{n+1}+(n+1)^{n+2}$ have other obvious factors than those I found? Does the number $$f(n):=n^{n+1}+(n+1)^{n+2}$$ have "obvious" factors (algebraic, aurifeuillian or similar kinds) apart from those I mention below? I only managed to find forced factors for odd numbers $n$: if $n$ is of the form $6k+1$, then $f(n)$ is divisible by $3$; if $n$ is of the form $6k+3$, then $f(n)$ is divisible by $n^2+n+1$; and finally, if $n+2$ is prime, then $f(n)$ is divisible by $n+2$. For even $n$, I did not find forced factors. The smallest not completely factored number of this form is $f(62)$. It has the composite cofactor $$29645851324749161395794060252012567916992450650017954$$ $$8416412620499302880901240095492001218810908429181608669479$$ with $111$ digits.
The full factorization of $f(62)$ is $$\begin{eqnarray} f(62) & = & 97\times 503\times 434254837008211200040837849611255155960657\times \\ & & 682683272545530598298287751380982048464896752734487326295497775449847\end{eqnarray}$$ Somewhat disappointingly, it was reached using raw computing power rather than any smart ideas; just by running YAFU, the factoring tool.
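The factorization is easy to verify by multiplying the listed factors back together; a Python sketch:

```python
n = 62
f62 = n**(n + 1) + (n + 1)**(n + 2)

factors = [
    97,
    503,
    434254837008211200040837849611255155960657,
    682683272545530598298287751380982048464896752734487326295497775449847,
]

prod = 1
for p in factors:
    prod *= p
print(prod == f62)  # True: the factors multiply back to f(62)
```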
Possibly: if $n\equiv -1 \pmod 6$ and $n+2=b^k$ is a power of some prime $b$, then $b\mid f(n)$. Checked up to $n=10^5$ without exceptions. GP code:

nbk()=
{
  for(n=1, 10^5,
    f = n^(n+1)+(n+1)^(n+2);
    if(n%6==5,
      k = ispower(n+2, , &b);
      if(k && isprime(b),
        if(f%b==0,
          print("n = "n" f("n")%"b" = "f%b" b = "b" k = "k),
          print("---- "n" f("n")%"b" = "f%b" b = "b" k = "k); break()
        )
      )
    )
  )
};

Output:
n = 23 f(23)%5 = 0 b = 5 k = 2
n = 47 f(47)%7 = 0 b = 7 k = 2
n = 119 f(119)%11 = 0 b = 11 k = 2
n = 167 f(167)%13 = 0 b = 13 k = 2
n = 287 f(287)%17 = 0 b = 17 k = 2
n = 341 f(341)%7 = 0 b = 7 k = 3
n = 359 f(359)%19 = 0 b = 19 k = 2
n = 527 f(527)%23 = 0 b = 23 k = 2
n = 623 f(623)%5 = 0 b = 5 k = 4
n = 839 f(839)%29 = 0 b = 29 k = 2
n = 959 f(959)%31 = 0 b = 31 k = 2
n = 1367 f(1367)%37 = 0 b = 37 k = 2
n = 1679 f(1679)%41 = 0 b = 41 k = 2
n = 1847 f(1847)%43 = 0 b = 43 k = 2
n = 2195 f(2195)%13 = 0 b = 13 k = 3
n = 2207 f(2207)%47 = 0 b = 47 k = 2
n = 2399 f(2399)%7 = 0 b = 7 k = 4
n = 2807 f(2807)%53 = 0 b = 53 k = 2
n = 3479 f(3479)%59 = 0 b = 59 k = 2
n = 3719 f(3719)%61 = 0 b = 61 k = 2
n = 4487 f(4487)%67 = 0 b = 67 k = 2
n = 5039 f(5039)%71 = 0 b = 71 k = 2
n = 5327 f(5327)%73 = 0 b = 73 k = 2
n = 6239 f(6239)%79 = 0 b = 79 k = 2
n = 6857 f(6857)%19 = 0 b = 19 k = 3
n = 6887 f(6887)%83 = 0 b = 83 k = 2
n = 7919 f(7919)%89 = 0 b = 89 k = 2
n = 9407 f(9407)%97 = 0 b = 97 k = 2
n = 10199 f(10199)%101 = 0 b = 101 k = 2
n = 10607 f(10607)%103 = 0 b = 103 k = 2
n = 11447 f(11447)%107 = 0 b = 107 k = 2
n = 11879 f(11879)%109 = 0 b = 109 k = 2
n = 12767 f(12767)%113 = 0 b = 113 k = 2
n = 14639 f(14639)%11 = 0 b = 11 k = 4
n = 15623 f(15623)%5 = 0 b = 5 k = 6
n = 16127 f(16127)%127 = 0 b = 127 k = 2
n = 16805 f(16805)%7 = 0 b = 7 k = 5
n = 17159 f(17159)%131 = 0 b = 131 k = 2
n = 18767 f(18767)%137 = 0 b = 137 k = 2
n = 19319 f(19319)%139 = 0 b = 139 k = 2
n = 22199 f(22199)%149 = 0 b = 149 k = 2
n = 22799 f(22799)%151 = 0 b = 151 k = 2
n = 24647 f(24647)%157 = 0 b = 157 k = 2
n = 26567 f(26567)%163 = 0 b = 163 k = 2
n = 27887 f(27887)%167 = 0 b = 167 k = 2
n = 28559 f(28559)%13 = 0 b = 13 k = 4
n = 29789 f(29789)%31 = 0 b = 31 k = 3
n = 29927 f(29927)%173 = 0 b = 173 k = 2
n = 32039 f(32039)%179 = 0 b = 179 k = 2
n = 32759 f(32759)%181 = 0 b = 181 k = 2
n = 36479 f(36479)%191 = 0 b = 191 k = 2
n = 37247 f(37247)%193 = 0 b = 193 k = 2
n = 38807 f(38807)%197 = 0 b = 197 k = 2
n = 39599 f(39599)%199 = 0 b = 199 k = 2
n = 44519 f(44519)%211 = 0 b = 211 k = 2
n = 49727 f(49727)%223 = 0 b = 223 k = 2
n = 50651 f(50651)%37 = 0 b = 37 k = 3
n = 51527 f(51527)%227 = 0 b = 227 k = 2
n = 52439 f(52439)%229 = 0 b = 229 k = 2
n = 54287 f(54287)%233 = 0 b = 233 k = 2
n = 57119 f(57119)%239 = 0 b = 239 k = 2
n = 58079 f(58079)%241 = 0 b = 241 k = 2
n = 62999 f(62999)%251 = 0 b = 251 k = 2
n = 66047 f(66047)%257 = 0 b = 257 k = 2
n = 69167 f(69167)%263 = 0 b = 263 k = 2
n = 72359 f(72359)%269 = 0 b = 269 k = 2
n = 73439 f(73439)%271 = 0 b = 271 k = 2
n = 76727 f(76727)%277 = 0 b = 277 k = 2
n = 78959 f(78959)%281 = 0 b = 281 k = 2
n = 79505 f(79505)%43 = 0 b = 43 k = 3
n = 80087 f(80087)%283 = 0 b = 283 k = 2
n = 83519 f(83519)%17 = 0 b = 17 k = 4
n = 85847 f(85847)%293 = 0 b = 293 k = 2
n = 94247 f(94247)%307 = 0 b = 307 k = 2
n = 96719 f(96719)%311 = 0 b = 311 k = 2
n = 97967 f(97967)%313 = 0 b = 313 k = 2
Find all integer solutions of the equation $x^2+15^a=2^b$ I found an integer solution of $$x^2+15^a=2^b$$ namely $x=7$, with $a=1$ and $b=6$. In general: $x$ must be odd, $a$ must be odd, and $b$ must be even. The problem is that I don't know whether there are more solutions. Otherwise, how can I prove there are no other solutions?
I'll do the case where $x,a$ and $b$ are positive; the other one is easier but a bit intricate. Working mod $3$, we see that $b$ must be even. Let $b=2v$. We now have: $15^a=2^b-x^2=(2^v-x)(2^v+x)$. Clearly $x$ must be coprime to $15$, and so either $(2^v-x)=3^a$ and $(2^v+x)=5^a$, or $2^v-x=1$ and $2^v+x=15^a$. We deal with the first case first ($(2^v-x)=3^a$ and $(2^v+x)=5^a$): adding gives $3^a+5^a=2(2^v)$. So in fact finding solutions to your equation is equivalent to $3^a+5^a$ being a power of $2$. When $a$ is even it is not possible (work mod $4$). When $a$ is odd it only works with $a=1$; this is easy to see with the lifting-the-exponent lemma. Since we have $2^v-x=3$ and $2^v+x=5$, we get $x=1,v=2,a=1$, i.e. $x=1,b=4,a=1$. We now deal with the second case ($2^v-x=1$ and $2^v+x=15^a$): we get $15^a+1=2(2^v)$. So by Catalan's theorem $a=1$, and so $v=3$. Hence we get $x=7,b=6,a=1$. And this gives all solutions with $x,a,b$ positive. Clearly the sign of $x$ is irrelevant, and simple arguments show there are no solutions with $a$ or $b$ negative. However, there remains one other possibility, $x=0$ (neither positive nor negative). For this case $15^a=2^b$, and as $15$ is not any rational power of $2$, there can be no nonzero solutions for $a$ and $b$; only $a=b=0$ works for the case $x=0$. The complete solution set is then $(x,a,b) \in \{(0,0,0),(1,1,4),(7,1,6)\}$
Another solution is $x=1,a=1,b=4$ $b$ must be even for the right to be $1 \bmod 3$ as $x^2 \equiv 1 \pmod 3$. Write $c=b/2$ and we are looking for $$(2^c-x)(2^c+x)=15^a$$ The two factors on the left are coprime, so we have the possibilities $$2^c-x=3^a\\2^c+x=5^a\\\text {and} \\2^c+x=15^a\\2^c-x=1$$ The solution $x=1,a=1,b=4,c=2$ is the first of these and $x=7,a=1,b=6,c=3$ is the second. We cannot get any more from the second because we would have to have $2^{c+1}=15^a+1$ and the only perfect powers that differ by $1$ are $8,9$. We cannot get any more from the first because we would have to have $2^{c+1}=5^a+3^a$ which can never be true $\bmod 16$ for $a \gt 1$
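A small brute-force search over a bounded range (a Python sketch; the bounds are arbitrary, and $a \ge 1$ is assumed, as in the question) recovers exactly the positive solutions found above:

```python
from math import isqrt

# Search x^2 + 15^a = 2^b for a >= 1 within arbitrary bounds
solutions = []
for b in range(1, 61):
    for a in range(1, 41):
        r = 2**b - 15**a
        if r >= 0 and isqrt(r) ** 2 == r:
            solutions.append((isqrt(r), a, b))
print(solutions)  # [(1, 1, 4), (7, 1, 6)]
```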
On the arithmetic differential equation $n''=n'$ If $n'$ denotes the arithmetic derivative of the non-negative integer $n$, and $n''=(n')'$, then solve the following equation $$n''=n'.$$ What I have found, you can read in one minute! I have tried to explain it in great detail so anyone, even with only a little knowledge of elementary number theory (like me), can follow the steps. $n=0$ and $n=1$ are solutions. It is known that for a natural number $n>1$ with prime factorization $\prod_{i=1}^{k} p_i^{a_i}$, the arithmetic derivative is $$n'=n \sum_{i=1}^{k} \frac{a_i}{p_i}. \tag{1}$$ Let $m=n'$; then the equation becomes $m'=m$. Let the prime factorization of $m$ be $\prod_{j=1}^{l} q_j^{b_j}$. Then from equation $(1)$ we get $$\frac{b_1}{q_1}+ \frac{b_2}{q_2}+... + \frac{b_l}{q_l}=1. \tag{2}$$ This equation implies that $q_j \ge b_j$. Multiply both sides of equation $(2)$ by $q_1 q_2 ... q_{l-1}$. It follows that $q_1 q_2 ... q_{l-1}\frac{b_l}{q_l}$ is an integer. Thus $q_l \mid b_l$. Hence $b_l \ge q_l$ and $b_l=q_l$. Subsequently $b_1=b_2=...=b_{l-1}=0$ and $m=q^q$ for some prime number $q$. Thus we have $n'=m=q^q$ and $n\sum_{i=1}^{k} \frac{a_i}{p_i}=q^q$, or $$\prod_{i=1}^{k} p_i^{a_i-1}\sum_{i=1}^{k} \left( p_1 p_2 ... p_k \frac{a_i}{p_i} \right)=q^q. \tag{3}$$ Notice that if $p_i \neq q$ is a prime divisor of $n$, then $a_i=1$. We claim that if $q$ is a prime divisor of $n$, then it is the only one. If $q \mid n$, then $n$ is of the form $$n=p_1p_2...p_kq^a,$$ where $\gcd(q, p_i)=1$. Now it is easy to see from equation $(3)$ that $a \le q$, and dividing both sides of it by $q^{a-1}$ gives $$q\sum_{i=1}^{k} \left( \frac{p_1 p_2 ... p_k}{p_i} \right)+p_1 p_2 ... p_k a=q^{q-a+1}.$$ Therefore $q\mid a$, which leads to $a \ge q$ and $q=a$. Thus $$\sum_{i=1}^{k} \left( \frac{p_1 p_2 ... p_k}{p_i} \right)+p_1 p_2 ... p_k=1,$$ which is a contradiction, and so $n=q^q$. Thus $n=q^q$ is a solution to the original equation, where $q$ is a prime number. If $q \nmid n$, then equation $(3)$ gives $$\sum_{i=1}^{k} \left( \frac{p_1 p_2 ... p_k}{p_i} \right)=q^q,$$ which is where I am stuck. Edit: According to @user49640's comment, there are some solutions of the form $n=2p$, where $p=q^q-2$ is a prime, for example for $q=7$ and $q=19$. See also @Thomas Andrews' answer for another solution not of the form $n=p^p$. Look at this solution I found: $$(2\times17431\times147288828839626635378984008187404125879)'=29^{29}$$
We have that $$(3\cdot 29\cdot 25733)'=3\cdot 29 + 3\cdot 25733 + 29\cdot 25733=7^7$$ So you are going to get non-trivial solutions. It's probably a difficult problem to come up with all solutions. I was looking for "3 prime" solutions. So, if $n=abc$ then $n'=ab+ac+bc=(a+c)(b+c)-c^2$. Trying to solve with $q=5$ gives: $$(a+c)(b+c)=5^5+c^2$$ But $5^5\equiv 1\pmod{4}$, and thus $5^5+c^2$ cannot be divisible by $4$. So $a+c$ and $b+c$ cannot both be even, and hence one of $a,b,c$ must be $2$. We can assume $c=2$. Then we want $(a+2)(b+2)=3129=3\cdot 7\cdot 149$. There is no way to factor this as $mn$ with $m-2$ and $n-2$ prime. So there is no $3$-prime counter-example with $q=5$. So I tried $q=7$ and found the above solution. It helped that $7^7+9$ is divisible by $256$, which gave me a lot of possibilities for factorizations. There are two-prime solutions if $q^q-2$ is an odd prime.
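The three-prime example is quick to verify (a Python sketch):

```python
a, b, c = 3, 29, 25733
# arithmetic derivative of a squarefree product of three primes: (abc)' = ab + ac + bc
n_prime = a*b + a*c + b*c
print(n_prime, 7**7, n_prime == 7**7)  # 823543 823543 True
```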
For all $n\geq 2$, with $\{p_k\}$ any set of $n$ distinct primes, and $\{a_k\}$ any set of $n$ positive integers such that no $a_k$ is a multiple of the corresponding $p_k$, $$ \sum \frac{a_k}{p_k} \neq 1 $$ The proof is by induction, starting with the basis $n=2$: $1-\frac{a_1}{p_1}$ is a fraction with denominator $p_1$, say $\frac{r_1}{p_1}$ with $0<r_1<p_1$. So for $\sum_1^2\frac{a_k}{p_k} = 1$ to hold, you must have $$ \frac{a_2}{p_2}=\frac{r_1}{p_1}\\ a_2 p_1 = r_1 p_2 $$ Since $p_2$ is coprime with $p_1$, for this to hold we would need $r_1 = mp_1$. But $r_1 < p_1$, so this is a contradiction, and there cannot be such a combination. The proof of the induction step is very similar, relying on the facts that a product of several primes, none of which equals $p_{n+1}$, cannot be divisible by $p_{n+1}$, and that the "deficiency" of $\sum_1^n\frac{a_k}{p_k}$ must have as denominator the product of the $p_k$. Therefore, the only solutions to $$ n'=n $$ are of the form $n=p^p$, where $p$ is prime. So for your equation, we must look for numbers whose arithmetic derivatives are of the form $p^p$. We see immediately that for such a number $w$, if $w=p^t\prod p_k^{a_k}$ with all the $p_k$ distinct from $p$, then all the $a_k = 1$ (otherwise, a prime other than $p$ will creep into $w'$). Thus the problem of finding a solution of a form other than $p^p$ comes down to finding a collection of primes $\{p_k\}, 1\leq k \leq s$, a distinct prime $p$, and an integer exponent $t>0$ such that $$ \sum_{k=1}^s\frac1{p_k}+\frac1t = p^{p-t} $$ Clearly $p^{p-t}< p^p$, otherwise we get back our solution $p^p$. Let's see how this works by trying to do this for $p=3$. If $t=2$, then we want $$ \sum_{k=1}^s\frac1{p_k}+\frac12 = 3^{3-2}=3 $$ It is easy to find a set of primes whose reciprocals add to more than $\frac52$, but then the denominator of that sum of reciprocals is a large number having all those primes as factors. In order to get the full sum to be the simple fraction $\frac52$, $t$ would have to be at least half as big as that large number, and thus it would need to be more than $3$. Since $t$ must be less than $p$, this won't work. So for this to work, $t$ must be a large number, forcing $p$ to be a large prime, thus requiring very many terms in the sum of reciprocal primes, in turn requiring $t$ to be a much larger number. This argument can be formalized, to show that $$ n=p^p$$ is the only way to have $$ n''=n' $$
Calculus - inequality problem. I have this inequality: $$|g(x)-B|<\frac{|B|}{2}$$ $$-\frac{|B|}{2}<g(x)-B<\frac{|B|}{2}$$ $$B-\frac{|B|}{2}<g(x)<B+\frac{|B|}{2}$$ I don't understand how it is possible to conclude from this inequality that $$\frac{|B|}{2}<|g(x)|$$ Thanks.
Start from $$B-\frac{|B|}{2}<g(x)<B+\frac{|B|}{2}$$ Case 1: $$B\geq0$$ $$g(x)>B-\frac{|B|}{2}=\frac{B}{2}=\frac{|B|}{2}\geq 0 \implies \frac{|B|}{2}<|g(x)|$$ Case 2: $$B<0$$ $$g(x)<B+\frac{|B|}{2}=\frac{B}{2}<0 \implies |g(x)|>\left|\frac{B}{2}\right|=\frac{|B|}{2}$$
If $B>0$, then the first inequality implies $B/2 < g(x)< 3B/2$. When $B<0$, the first inequality implies $B/2>g(x)>3B/2$.
What is the unit of measurement of $z$ as a solution to the equation $\sin z=2$? Recently, I uploaded a video on my channel solving the equation $\sin z=2$. Now, one question that I was asked is: "What is the unit of this $z$ as a solution to the equation $\sin z=2$? Is it in radians? Can I change it to degrees?... And most importantly, what is the physical significance of the solution?" Though I answered the question based on my limited understanding and knowledge, I seek a concrete answer. I will include my answer for your perusal: "In complex trigonometric functions, $\sin z$ is expressed as a power series and not as a measure of angles on a circle as in classical trigonometry. So neither unit suffices in this case, which is also evident from the answer, which is of the form $x+iy$. Another representation of the same could be in terms of hyperbolic sines and cosines, where the unit of measurement is hyperbolic rather than circular."
The question is exactly the same as asking, "what is the unit of $x$ in $x^{2}=2$?" The student might be tempted to argue something like "the unit is obviously meters, and the result is an area". But this is not so obvious at all: in fact, people often speak of "seconds-squared", $s^2,$ or "per second", $s^{-1};$ so we see that the $x$ in $x^{2}$ need not be a length, it could be a duration. What the unit is in a particular mathematical model depends only on what $x$ is being used to represent in that model. If $x$ represents a distance, then the unit should be a unit of distance, like meters; but there is no fixed choice of unit, so the viewer's question is missing the context necessary to give it an answer. In practice, complex numbers are often useful merely as computational aids, or as a framework to make nice mathematical models, and then the real and imaginary parts are associated with units of their own. One place where complex numbers themselves famously take centre-stage in a physical model is in quantum mechanics: the wave function outputs complex numbers, and the square magnitude of these complex numbers represents probability density. But this is a pretty esoteric example, to be honest; and even in this example, the "unit" of the complex number (if that's really something you want to discuss) depends on how many dimensions your model is meant to account for.
From $e^{iz} = \cos(z)+i\sin(z)$ we get $\sin(z) = (e^{iz} -e^{-iz})/(2i)$. Set $x=e^{iz}$, solve $(x-1/x)/(2i) =2$, and then solve $e^{iz} = x$. All well known, of course.
Is my proof that the set of all finite subsets of a countable set is countable correct? Q. Let $X$ be a countable, infinite set. Prove that the set of all finite subsets of $X$ is countable. So I say: let a countable $X$ be given. Let $F$ be the set of all finite subsets of $X$. Let $F_i$ be the set of subsets of $X$ containing $i$ elements, where $i$ is a natural number. Then $F$ is the union of all the $F_i$. This union is countable as the natural numbers are countable. A countable union of finite sets is countable, hence $F$ is countable. (I have already proved that a countable union of finite sets is countable in an earlier stage of the question, which is why I am stating this as fact.) Does this proof hold? I have seen a few other proofs that seem more complicated than mine, and so I worry that I am missing something simple.
You say that $F$ is countable because it is a countable union of finite sets, which is false. $F_1$ is not a finite set if $X$ is infinite, and neither is $F_i$ for any other $i$!
There are numerous methods for this kind of question. This is a nice one, in my opinion at least. The first step: consider the empty set. We don't want to ruin our proof by only considering everything except the empty set. This is easy, however: if we can show that the set of all non-empty finite subsets is countable, we can show that adding the empty set keeps it countable. Step 2: consider $n > 0$ to be fixed. Let $X$ be a subset of the natural numbers (for the sake of simplicity) that is of size $n$, that is, of the form $\{x_1 , x_2 , ... , x_{n-1} , x_n\}$. Can you associate this with an $n$-tuple of natural numbers? Hint: that is, $(x_1 , x_2 , ... , x_{n-1} , x_n)$. Now we have an injection from the finite subsets of size $n$ to the set of $n$-tuples of natural numbers, i.e. $\mathbb{N}^n$. Now, can you prove that the (countable) union of countable sets is countable? Hint: arrange the naturals in some kind of grid, where every row is infinite and not equal to any row above or below it. Now, prove via induction that for a countable set $\mathbb{B}$ we also have that $\mathbb{B}^k$ is countable for any natural number $k$. Hint: consider your notes on proving that the rational numbers are countable. Now we finally see that there is an injection from the set of all finite subsets of the natural numbers to the set $\bigcup_{n \in \mathbb{N}} \mathbb{N}^n$. Thus, the set of all finite subsets of $\mathbb{N}$ is countable (*). (*) This assumes the following: that any subset of a countable set is countable, and that if there is an injection from an infinite set to a countable set, that set must be countable. Can you prove these and state at which points they should be used (try to explicitly construct a bijection)?
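As an aside, one especially concrete encoding in the same spirit (not the injection used above, but an alternative) identifies a finite subset of $\mathbb{N}$ with the natural number whose binary digits mark membership; a Python sketch:

```python
def encode(subset):
    """Finite subset of {0, 1, 2, ...} -> natural number via binary digits."""
    return sum(1 << i for i in subset)

def decode(n):
    """Natural number -> the finite subset it encodes."""
    return {i for i in range(n.bit_length()) if (n >> i) & 1}

print(encode({0, 2, 5}))  # 37 = 2^0 + 2^2 + 2^5
print(decode(37))         # {0, 2, 5}
```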
How to solve this functional equation: $2f(x) = f(x-1)+f(x+1)$? After some calculations, I came up with this functional equation: $f(x-1)+f(x+1)=2f(x)$. I found that a linear function is one possible answer, but I don't know how to derive it. I don't know much about the techniques for solving this kind of equation. Can anyone help? Edited: $f$ is a probability function, that is, $0 \leq f \leq 1$, and $f$ is continuous.
You have $f(x+1)-f(x)=f(x)-f(x-1)$ and so $f(x+2)-f(x+1)$ is equal to the same value, and similarly $$f(x+n) - f(x+n-1)=f(x)-f(x-1)$$ for all integer $n$. This gives $$f(x+n) = (n+1)f(x)-nf(x-1)=n(f(x)-f(x-1)) + f(x)$$ which is linear in integer $n$, though not necessarily linear in real $x$. If you fix the values of $f(y)$ on $[-1,1)$ then you can give a general solution using rounding and $$f(y) = \lfloor y \rfloor\left(f(y-\lfloor y \rfloor)- f(y-\lfloor y \rfloor-1)\right) + f(y-\lfloor y \rfloor)$$
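A quick check that any affine sequence solves the recurrence (a Python sketch; the coefficient values are arbitrary, and exact rationals avoid rounding noise):

```python
from fractions import Fraction

# Any affine sequence f(n) = C*n + D satisfies 2 f(n) = f(n-1) + f(n+1);
# the values of C and D below are arbitrary.
C, D = Fraction(2, 5), Fraction(-13, 10)
f = lambda n: C * n + D
assert all(2 * f(n) == f(n - 1) + f(n + 1) for n in range(-50, 50))
print("2 f(n) = f(n-1) + f(n+1) holds for the affine solution")
```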
For integer $x=n$, this is an ordinary linear recurrence, $$f_{n+1}=2f_n-f_{n-1},$$ with characteristic equation $(r-1)^2=0$, so that the general solution is $$f_n=Cn+C'.$$ (Or simply, you have a second-order finite difference which is zero, so that the sequence must be linear.) But as $f$ is bounded, the coefficient $C$ must vanish and the solution is constant. This conclusion is valid for all sets of points one unit apart. For the whole real domain, we can conclude that the solution is any continuous and periodic function of period $1$ with range in $[0,1]$. If $f$ is a CDF, then the function must also be non-decreasing, so that the only possibility is a constant, $$\color{green}{f(x)=p}$$ with $p\in[0,1]$.
Diagonal morphism of regular variety is a regular embedding Let $X$ be a regular $k-$variety (i.e. all of its local rings are regular) of pure dimension $d$. Then I would like to show that the diagonal morphism $X\rightarrow X\times_k X$ is a regular embedding of codimension $d$, to be able to prove exercise $12.2.M$ in Ravi Vakil's "Foundations of Algebraic Geometry". The preceding exercise shows that any closed embedding $Y \rightarrow Z$ with $Y$ regular and $Z$ regular at all points in the image of $Y$ is a regular embedding. I'd like to use this (particularly because Vakil lists $12.2.M$ as a consequence of the preceding exercise), only I'm having trouble proving that $X \times_k X$ is regular at points on the diagonal. Thus what I would really like to know is: Let $X$ be a regular $k-$variety. Then $X \times_k X$ is regular at points on the diagonal. Since $X$ is finite type, I can explicitly describe the local rings of $X$ as quotients of local rings of affine space, and use the kernels involved to explicitly describe the local rings of $X \times_k X$ as quotients of the local rings of $X$, but I can't then do very much with this, and I'm not sure that this is even the right approach. Edit: Some of the answers suggest that regularity is not enough to show that the diagonal is a regular embedding. One provides an example of a regular scheme $W$ where $W \times W$ isn't regular at the diagonal, so the approach taken in this question is certainly flawed.
Without some hypothesis on $k$ I don't think it's true that the diagonal embedding $X \to X \times_k X$ is regular. Take everybody's favorite regular but non-smooth scheme over the non-perfect field $k:= \Bbb{F}_p(t)$, namely the scheme $\operatorname{Spec} \Bbb{F}_p(t)[x]/(x^p - t)$. We first compute the tensor product $$\begin{eqnarray*} \Bbb{F}_p(t)[x]/(x^p - t) \otimes_{\Bbb{F}_p(t)} \Bbb{F}_p(t)[y]/(y^p - t) &\cong& \Bbb{F}_p(t)[x,y]/(x^p - t, y^p - t) \\ &\cong& L[y]/(y-\alpha)^p \end{eqnarray*}$$ where $L := \Bbb{F}_p(t)[x]/(x^p-t)$ and $\alpha$ is the image of $x$ in $L$. We see that the tensor product is a local non-reduced ring of dimension $0$. Hence the diagonal $X \to X \times_k X$ is a map from a point to a fat point, and so is not regular. Edit: Ravi told me via email that there is indeed an error in the notes with regards to this question.
Sadly I cannot comment. In any case: (1) Make clear what $k$ is (any field?). It depends greatly on the properties of $k$, as was already mentioned by A.G. (2) Unlike what some people think, there is no standard definition of "variety". E.g. everyone assumes "of finite type" (I think), but some assume irreducible, geometrically integral, whatever. For that reason you should always state what you mean by it. (3) Over any field, "smoothness" is equivalent to geometric regularity (see EGA IV), and thus "smoothness" is equivalent to regularity over a perfect field. (4) The notion of smoothness is superior to regularity, exactly because it behaves very well with respect to base change. Over non-perfect fields I think regularity is less interesting.
weak convergence of product of weakly and strongly convergent $L^{2}$ sequences in $L^{2}$ There is one question that has been bothering me for quite a while now. Let $a_{n},b_{n}\in L^{2}$ with $a_{n}\stackrel{L^{2}}{\rightharpoonup} a\in L^{2}$ weakly, $b_{n}\stackrel{L^{2}}{\rightarrow} b \in L^{2}$ strongly, and $a_{n}\cdot b_n\in L^{2}$, and let all the sequences, also the product sequence, be bounded in $L^{2}$, so that w.l.o.g. $a_{n}\cdot b_n\stackrel{L^{2}}{\rightharpoonup}c\in L^{2}$ weakly. I'd like to prove that $a_{n}\cdot b_{n}\stackrel{L^{2}}{\rightharpoonup} a\cdot b$, that is, $a\cdot b=c$. My comments: It would be enough to show that the Nemytskii operator of $\mathbb{R}\times\mathbb{R}\times\Omega\ni\left(u_{1},u_{2},x\right)\mapsto u_{1}\cdot u_{2}\in\mathbb{R}$ is weakly closed, EDIT: but this is wrong in general. It would STILL BE ENOUGH to prove that this Nemytskii operator is weakly$\times$strongly-closed (the second function can be considered as the derivative of the first, so by Sobolev embedding it converges strongly). Does anyone have an idea or a hint for literature?
In fact $a_nb_n\in L^1(\Omega)$, not $L^2(\Omega)$, and $$ \int_\Omega a_nb_n\,dx \to \int_\Omega ab\,dx. $$ Indeed, $$ a_nb_n-ab=a_n(b_n-b)+(a_n-a)b $$ Then $$ \Big|\int_\Omega a_n(b_n-b)\,dx\,\Big|\le \|a_n\|_{L^2}\|b_n-b\|_{L^2}\le M\|b_n-b\|_{L^2}\to 0, $$ and $$ \int_\Omega (a_n-a)b \to 0, $$ as $a_n-a\rightharpoonup 0$. EDIT by OP: This argumentation gives $a_{n}\cdot b_{n}\stackrel{L^{1}}{\rightarrow}a\cdot b$. Together with $a_{n}\cdot b_{n}\stackrel{L^{2}}{\rightharpoonup}c \Rightarrow a_{n}\cdot b_{n}\stackrel{L^{1}}{\rightharpoonup}c$, it follows that $a\cdot b = c$.
There is a disproof of this statement in general. Take $ L^2([-1,1],\mathbb{C})$, and $a_n(x)= \exp(2\pi i n x)$ and $b_n (x) = \exp(-2\pi i n x)$. In this case, both $a_n \rightharpoonup a = 0$ and $b_n\rightharpoonup b=0$, but $a_n (x)\cdot b_n(x) =1$ for all $x$. Clearly $c_n=c =1$. Then $$\lim\limits_{n \to \infty} \int_{-1}^1 a_n(x)\cdot b_n(x)\,dx=2$$ but $$\int_{-1}^1 a(x)\cdot b(x)\,dx =0$$
Calculating sheet size required for creating a cubic corrugated box Basically I am developing software for the corrugated box manufacturing industry, but I got stuck on the calculations for the initial sheet size and weight required to create a box of particular specifications. Let's say I want to create a box with the following specifications: total ply: 3 (means single wall); flute in middle paper: 50%; length = 10 inch; width = 10 inch; depth = 10 inch. I have a single paper roll of 100 GSM and appropriate width. I'll be really glad if anyone can suggest appropriate formulas for calculating the specifications of the sheet required to create a box from paper of the above specifications.
Calculation for corrugated boxes involves a few of the following:
1. Box size in mm.
2. Thickness of the box in terms of 3-ply, 5-ply, 7-ply, i.e. the number of layers of paper in the box.
3. Thickness of the paper in each layer in GSM (grams per square meter), e.g. 100 gsm, 120 gsm, 150 gsm etc.
4. The weight of the box is then calculated.
5. The weight of the box is then multiplied by the rate of the paper plus the conversion cost.
For example, for a 254mm x 225mm x 150mm, 3-ply, 100 gsm box, the weight is calculated as follows. Length of the board = 2 x length of box + 2 x width of box + joint allowance + trimming allowance. To calculate the length of the board, allow 254mm + 3mm (for the thickness of 3-ply): length = (254+3) + (225+3) + (254+3) + (225+3) + 25mm (for joint) + 20mm (for trimming) = 257+228+257+228+25+20 = 1015mm. The length of the board is 1015mm. To calculate the width of the board: 1/2 width of box + height of box + 1/2 width of box + trimming allowance: (112.5mm + 3mm) + (150mm + 6mm) + (112.5mm + 3mm) + 20mm = 407mm. The width of the board is 407mm. To calculate the weight of the box: length of the board x width of the board x GSM of the paper x ply factor. For 3-ply the factor is taken as 3.5, as the fluted layer in the middle is 1.5 times more than a plain sheet. The calculation is done in meters to get the weight in kilograms: convert mm into meters by dividing by 1000, and since the paper weight is in grams, divide it by 1000 to get kilograms: (1015/1000) x (407/1000) x (100/1000) x 3.5 = 0.144587, approximately 0.145 kg. Multiply the weight by the rate: if the rate of the paper is 20 and your conversion charges (expenses per kg) are 15, then 0.145*35 = 5.075, or about 5.08, is the cost of the box. I hope this information is detailed enough. For 5-ply multiply by 6 (1+1.5+1+1.5+1); similarly, for 7-ply multiply by 8.5. You can also calculate for different combinations of grammage of the paper, i.e. GSM of 100, 120, 150, or a combination of the same. When you do that, you must multiply the product of the length and width of the board by the substance of the board. The substance is the total grammage of the board: e.g. for 3-ply 100 gsm the substance is 350; for 3-ply with a 120 gsm top and the rest 100 gsm the substance will be (120 + (100*1.5) + 100) = 370, and similarly for different combinations. My email is a.rajgopal@rediffmail.com; you can contact me for any clarification of the above for your calculation purposes. If useful, please acknowledge.
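The recipe above translates directly into a short function; a Python sketch following the worked example (the function name and default arguments are illustrative, with the ply factor, joint, and trim allowances taken from the answer):

```python
def box_sheet(length, width, height, gsm, ply_factor=3.5,
              joint=25, trim=20, rate_plus_conversion=35):
    """Box dimensions in mm, paper weight in gsm (g/m^2).

    ply_factor: 3.5 for 3-ply, 6 for 5-ply, 8.5 for 7-ply
    (each fluted middle layer counts 1.5x a plain sheet).
    """
    t = 3  # per-panel allowance for board thickness, mm
    board_len = 2 * (length + t) + 2 * (width + t) + joint + trim
    board_wid = (width / 2 + t) + (height + 2 * t) + (width / 2 + t) + trim
    # convert mm -> m and g -> kg to get the weight in kg
    weight = (board_len / 1000) * (board_wid / 1000) * (gsm / 1000) * ply_factor
    return board_len, board_wid, weight, weight * rate_plus_conversion

print(box_sheet(254, 225, 150, 100))
# (1015, 407.0, 0.14458..., 5.060...) -- matching the worked example
```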
Is "This sentence is true" true or false (or both); is it a proposition? From what I understand, a proposition is either true or false, but not both. "This sentence is false" can be neither true nor false and is thus not a proposition. However, is "This sentence is true" true or false (or both)? And hence, is "This sentence is true" a proposition?
The dichotomy sentence/proposition is quite complex to manage, due to its philosophical implications. See e.g. Nik Weaver,Truth and Assertibility, World Scientific PC (2015), page 4: Many philosophers consider truth to be fundamentally an attribute not of sentences but of some more abstract correlate of sentences called “propositions”. The idea is that sentences function by referring to or expressing abstract propositions, and it is these propositions which are the “primary bearers of truth”. This seems to be a common opinion, but it is controversial, with some dissenters denying that there even are such things as propositions. Thus, if we want to stay in the realm of propositional logic, we can say that the basic entities are sentences, i.e. linguistic entities, that have a definite truth value. If so, a sentence like: "This sentence is false", that can be neither true nor false, is not a meaningful sentence to be used in the context of propositional logic. What about: "This sentence is true" ? Is it paradoxical ? I think so. Assume that the sentence is true; then its negation: "This sentence is not true" must be false. But the negated sentence is equivalent to "This sentence is false". But if "This sentence is false" is false, then the sentence (asserting something about a sentence, i.e. a linguistic entity) "agrees" with the way the things are, and this means that it is true. Again, we have reached a contradiction.
"This sentence is true" - call this sentence TT - is not paradoxical. It can be assigned either of the truthvalues T or F. The negation of TT is not the paradoxical Liar sentence "This sentence is false", since the 'this' in these two sentences refer to different sentences. The negation of TT might be rendered as "The sentence 'This sentence is true' is false" - call this sentence NT - which is awkward but quite self-consistent, since TT can indeed consistently be false; in fact, NT can also be true or false: as one would expect, its truthvalue must be the opposite of the truthvalue of TT. It might be worth emphasizing that being a sentence which can be true or false is nothing particularly remarkable. Most sentences of logic are in this category. None of this has anything at all to do with the double-slit experiment, Quantum theory or the nature of physical reality.
How to derive the formula for $\arctan(x) + \arctan(y)$ depending on $x,y$? I was trying to derive the following formula $$\arctan(x)+\arctan(y) = \begin{cases}\arctan\left(\dfrac{x+y}{1-xy}\right), &xy < 1 \\[1.5ex] \pi + \arctan\left(\dfrac{x+y}{1-xy}\right), &x>0,\; y>0,\; xy>1 \\[1.5ex] -\pi + \arctan\left(\dfrac{x+y}{1-xy}\right), &x<0,\; y<0,\; xy > 1\end{cases}$$ I proceeded this way: $$\tan{(A+B)}= \frac{\tan{A}+\tan{B}}{1-\tan{A}\tan{B}}$$ $$\arctan(\tan{(A+B)})=\arctan\bigg(\frac{\tan{A}+\tan{B}}{1-\tan{A}\tan{B}}\bigg)$$ $$A+B=\arctan\bigg(\frac{\tan{A}+\tan{B}}{1-\tan{A}\tan{B}}\bigg)$$ $$\tag*{$\frac{-\pi}{2}<A+B<\frac{\pi}{2}$}$$ $$A=\arctan(\tan A)$$ $$\tag*{$\frac{-\pi}{2}<A<\frac{\pi}{2}$}$$ $$B=\arctan(\tan B)$$ $$\tag*{$\frac{-\pi}{2}<B<\frac{\pi}{2}$}$$ $$\arctan(\tan A) + \arctan(\tan B)=\arctan\bigg(\frac{\tan{A}+\tan{B}}{1-\tan{A}\tan{B}}\bigg)$$ $$\tan A=x$$ $$\tan B=y$$ $$\arctan(x) + \arctan(y)=\arctan\bigg(\frac{x+y}{1-xy}\bigg)$$ Now from here onwards I don't know how it gets converted into the 3 different cases. Your help will be highly appreciated.
Let $y\in\Bbb R$ and let $f(x)=\arctan\left(\frac{x+y}{1-xy}\right)$. Then, by the chain rule,\begin{align}f'(x)&=\frac{\frac{y^2+1}{(1-xy)^2}}{1+\left(\frac{x+y}{1-xy}\right)^2}\\&=\frac{y^2+1}{(1-xy)^2+(x+y)^2}\\&=\frac{y^2+1}{1+x^2y^2+x^2+y^2}\\&=\frac{y^2+1}{(1+y^2)(1+x^2)}\\&=\frac1{1+x^2}\\&=\arctan'(x).\end{align}So, $f-\arctan$ is constant. But $f(0)-\arctan(0)=\arctan(y)$.
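The derivative computation can be confirmed symbolically (a sympy sketch):

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
f = sp.atan((x + y) / (1 - x*y))
# f'(x) - 1/(1 + x^2) should simplify to 0
print(sp.simplify(sp.diff(f, x) - 1/(1 + x**2)))  # 0
```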
Hint: Apply $\tan$ to both sides of the equality and use the fact that $\tan$ is injective. $\tan(\arctan(x)+\arctan(y))={{x+y}\over{1-xy}}$.
Integral of a function in the exponential I would like to calculate the following integral: $$\int \exp\left[a \frac{1-e^{-\kappa_1 s}}{\kappa_1}+b\frac{1-e^{-\kappa_2 s}}{\kappa_2}+c\times s\right]ds$$ where $a, b, c, \kappa_1$ and $\kappa_2$ are constants. This is how I proceeded: define $u\triangleq e^{c\times s}$. Then we have $du=c\times u\times ds$. Thus the original integral is: $\int u \exp\left[\frac{a}{\kappa_1}+\frac{b}{\kappa_2}\right]\exp\left[-\frac{a}{\kappa_1}e^{-\kappa_1 s}-\frac{b}{\kappa_2}e^{-\kappa_2 s}\right]ds \\ =\frac{1}{c}\exp\left[\frac{a}{\kappa_1}+\frac{b}{\kappa_2}\right]\int \exp\left[-\frac{a}{\kappa_1}u^{-\frac{\kappa_1}{c}}-\frac{b}{\kappa_2}u^{-\frac{\kappa_2}{c}}\right]du$ But from here I could not go any further. Any hints and help would be greatly appreciated!
$$ \begin{align} \int e^{a\frac{1-e^{-\kappa_1s}}{\kappa_1}+b\frac{1-e^{-\kappa_2s}}{\kappa_2}+cs}\ \text{d}s & = e^{\frac{a}{\kappa_1}+\frac{b}{\kappa_2}}\int e^{-\frac{ae^{-\kappa_1s}}{\kappa_1}-\frac{be^{-\kappa_2s}}{\kappa_2}}e^{cs}\ \text{d}s \\ & = e^{\frac{a}{\kappa_1}+\frac{b}{\kappa_2}}\int\sum\limits_{n=0}^\infty\dfrac{(-1)^n\left(\dfrac{ae^{-\kappa_1s}}{\kappa_1}+\dfrac{be^{-\kappa_2s}}{\kappa_2}\right)^ne^{cs}}{n!}\ \text{d}s \\ & = e^{\frac{a}{\kappa_1}+\frac{b}{\kappa_2}}\int\sum\limits_{n=0}^\infty\sum\limits_{k=0}^n\dfrac{(-1)^nC_k^n\dfrac{a^ke^{-\kappa_1ks}b^{n-k}e^{-\kappa_2(n-k)s}}{\kappa_1^k\kappa_2^{n-k}}e^{cs}}{n!}\ \text{d}s \\ & = e^{\frac{a}{\kappa_1}+\frac{b}{\kappa_2}}\int\sum\limits_{n=0}^\infty\sum\limits_{k=0}^n\dfrac{(-1)^na^kb^{n-k}e^{(c-\kappa_1k-\kappa_2(n-k))s}}{k!(n-k)!\kappa_1^k\kappa_2^{n-k}}\ \text{d}s \\ & = \sum\limits_{n=0}^\infty\sum\limits_{k=0}^n\dfrac{(-1)^na^kb^{n-k}e^{(c-\kappa_1k-\kappa_2(n-k))s+\frac{a}{\kappa_1}+\frac{b}{\kappa_2}}}{k!(n-k)!\kappa_1^k\kappa_2^{n-k}(c-\kappa_1k-\kappa_2(n-k))}+C \\ & = \sum\limits_{k=0}^\infty\sum\limits_{n=k}^\infty\dfrac{(-1)^na^kb^{n-k}e^{(c-\kappa_1k-\kappa_2(n-k))s+\frac{a}{\kappa_1}+\frac{b}{\kappa_2}}}{k!(n-k)!\kappa_1^k\kappa_2^{n-k}(c-\kappa_1k-\kappa_2(n-k))}+C \\ & = \sum\limits_{k=0}^\infty\sum\limits_{n=0}^\infty\dfrac{(-1)^{n+k}a^kb^ne^{(c-\kappa_1k-\kappa_2n)s+\frac{a}{\kappa_1}+\frac{b}{\kappa_2}}}{k!n!\kappa_1^k\kappa_2^n(c-\kappa_1k-\kappa_2n)}+C \end{align} $$ Which relates to Srivastava-Daoust Function
How to factorize this cubic equation? In one of my mathematics books, the author factorized the equation $$x^3 - 6x + 4 = 0$$ as $$( x - 2) ( x^2 + 2x -2 ) = 0.$$ How did he do it?
There is a neat trick called the rational roots theorem. All we have to do is factor the first and last numbers, put them over a fraction, and take $\pm$. This gives us the following possible rational roots: $$x\stackrel?=\pm1,\pm2,\pm4$$ due to the factorization of $4$. Checking these, it is clear $x=2$ is the only rational root, since $$\begin{align}0&\ne(+1)^3-6(+1)+4\\0&\ne(-1)^3-6(-1)+4\\\color{#4488dd}0&=\color{#4488dd}{(+2)^3-6(+2)+4}\\0&\ne(-2)^3-6(-2)+4\\0&\ne(+4)^3-6(+4)+4\\0&\ne(-4)^3-6(-4)+4\end{align}$$ leaving us with $$x^3-6x+4=(x-2)(\dots)$$ We can find the remainder through synthetic division: $$\begin{array}{c|c c}2&1&0&-6&4\\&\downarrow&2&4&-4\\&\hline1&2&-2&0\end{array}$$ which gives us our factorization: $$x^3-6x+4=(x-2)(x^2+2x-2)$$
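Both the root test and the division can be checked mechanically (a sympy sketch):

```python
import sympy as sp

x = sp.symbols('x')
p = x**3 - 6*x + 4
print(p.subs(x, 2))         # 0, so (x - 2) is a factor
print(sp.div(p, x - 2, x))  # (x**2 + 2*x - 2, 0): quotient and zero remainder
print(sp.factor(p))         # (x - 2)*(x**2 + 2*x - 2)
```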
If $P$ is a polynomial with real coefficients and $a\in\mathbb{R}$ is a root, which means that $P(a)=0$, then there exists a real polynomial $Q$ such that $\forall x\in\mathbb{R},\quad P(x)=(x-a)\,Q(x)$. In this case, you can see by inspection that $P(2)=0$. It remains to find real constants $A,B,C$ such that: $$\forall x\in\mathbb{R},\quad x^3-6x+4=(x-2)(Ax^2+Bx+C)$$ Identification of coefficients leads to $A=1$, $-2C=4$ and, for example, $B-2A=0$ (equating the coefficients of $x^2$ on both sides).
Verification of a proof from the book Analysis (by Terence Tao) What a day trying to understand the proof of the least upper bound theorem in Analysis by Terence Tao. Well, one exercise which is necessary to complete the proof says the following: Let $E$ be a non-empty subset of $R$, let $n \ge 1$ be an integer, and let $L < K$ be integers. Suppose that $K/n$ is an upper bound for $E$, but that $L/n$ is not an upper bound for $E$. Show that there exists an integer $L < m_n \leq K$ such that $m_n/n$ is an upper bound for $E$, but that $(m_n-1)/n$ is not an upper bound for $E$. (Hint: prove by contradiction, and use induction. It may also help to draw a picture of the situation.) My silly attempt is as follows: Suppose for the sake of contradiction that there is no integer $m$ between $L$ and $K$ such that $m/n$ is an upper bound of $E$ while $(m-1)/n$ is not. This means that whenever $(L+m-1)/n$ is not an upper bound, $(L+m)/n$ is not an upper bound either. Since $(L+0)/n$ is not an upper bound, $(L+1)/n$ is not one either. Then it is easy to use induction to show that $(L+m)/n$ is not an upper bound for every natural number $m$ $^{(1)}$. But $0\le K-L\in \mathbb{N}$, and then $(L+(K-L))/n = K/n$ is an upper bound (by hypothesis), contradicting the claim that $(L+m)/n$ is not an upper bound for every natural number $m$. This contradiction gives the proof. $^{(1)}$ We may use induction to show that $(L+m)/n$ is not an upper bound for every natural number $m$. The claim clearly holds for $m=0$, as we have shown above. Now we assume that it holds for $m$. Thus, $(L+m)/n$ is not an upper bound for $E$, so $(L+1+m)/n$ is not an upper bound either, which closes the induction. I'd like to know if my attempt is correct; I worry it is kinda silly. So, do you think the proof is correct? Thanks in advance.
Despite the complicated language, the question is just to prove that if $L<K$ are integers and $P$ is some property such that $P(K)$ holds but $P(L)$ does not hold, there exists an integer $m$ with $L<m\leq K$ such that $P(m)$ holds but not $P(m-1)$. The details of the property $P$ do not matter at all. This should be intuitively obvious: $m=\min\{\, i\in\Bbb Z\cap(L,K] \mid P(i) \,\}$ is well defined (the set contains $K$ so it is nonempty, and it is finite) and works. If you really find this too informal to be convincing, do a proof by contradiction and induction as the book suggests. You did this, and your proof is correct.
Let $E$ be a non-empty subset of real numbers. Let $n \geq 1$ be an integer, and let $L < K$ be integers. Suppose that $\frac{L}{n}$ is not an upper bound and that $\frac{K}{n}$ is an upper bound for $E$. We need to show that there exists an integer $L < m \leq K$, such that $\frac{m - 1}{n}$ is not an upper bound and $\frac{m}{n}$ is an upper bound for the set $E$. Suppose for the sake of contradiction, that for all integers $m$, $L < m \leq K$, the rational number $\frac{m - 1}{n}$ is an upper bound or $\frac{m}{n}$ is not an upper bound. Let $m_{k}$ denote the integer $L + k$. Then for the integer $m_{1} = L + 1$ we must have $L < m_{1} \leq K$, so that $\frac{m_{1} - 1}{n}$ is an upper bound or $\frac{m_{1}}{n}$ is not an upper bound. If $\frac{m_{1} - 1}{n}$ is an upper bound, then $\frac{L}{n} = \frac{m_{1} - 1}{n}$ is an upper bound, which contradicts the assumption that $\frac{L}{n}$ is not an upper bound. So we must have that $\frac{m_{1}}{n}$ is not an upper bound. Now we show that $\frac{m_{k}}{n}$ is not an upper bound for all natural numbers $k \geq 1$ for which $L < m_{k} \leq K$. The previous argument shows that the base case $k = 1$ holds. Now suppose inductively that $\frac{m_{k}}{n}$ is not an upper bound for some natural number $k$, such that $L < m_{k} \leq K$. We need to show that $\frac{m_{k + 1}}{n}$ is not an upper bound. By the induction hypothesis $\frac{m_{k}}{n}$ is not an upper bound. If $m_{k + 1} > K$, then $m_{k} \geq K$, i.e. $\frac{m_{k}}{n} \geq \frac{K}{n}$, so $\frac{m_{k}}{n}$ would be an upper bound, which contradicts the assumption that $\frac{m_{k}}{n}$ is not an upper bound. So we must have that $L < m_{k + 1} \leq K$. Then either $\frac{m_{k}}{n}$ is an upper bound or $\frac{m_{k + 1}}{n}$ is not an upper bound. The first alternative leads to a contradiction with the assumption that $\frac{m_{k}}{n}$ is not an upper bound, so we must have that $\frac{m_{k + 1}}{n}$ is not an upper bound. This proves the claim for all $k$ such that $L < m_{k} \leq K$. In particular, this implies that $\frac{m_{k}}{n}$ is not an upper bound for $k$ such that $m_{k} = K$. But this contradicts the fact that $\frac{K}{n}$ is an upper bound and finishes the proof.
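To make the exercise concrete, here is a small sketch; the choices $E=[0,\sqrt2)$, $n=10$, $L=0$, $K=30$ are purely illustrative assumptions, and the scan finds the first $m$ for which $m/n$ is an upper bound:

```python
from math import sqrt

# Illustrative instance: E = [0, sqrt(2)), so sup E = sqrt(2).
# L/n = 0 is not an upper bound; K/n = 3 is one.
sup_E = sqrt(2)
n, L, K = 10, 0, 30

def is_upper_bound(q):
    # q bounds E from above exactly when q >= sup E (the sup is not attained)
    return q >= sup_E

m = next(m for m in range(L + 1, K + 1) if is_upper_bound(m / n))
assert is_upper_bound(m / n) and not is_upper_bound((m - 1) / n)
print(m, m / n)   # 15 and 1.5, while 14/10 = 1.4 < sqrt(2)
```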
Proof of vector subspace on R functions Let $F (\Bbb R,\Bbb R)$ be the vector space of all the functions from $\Bbb R$ to $\Bbb R$. For what values of $k\in\Bbb R$ is the set $W = \{f\in F (\Bbb R,\Bbb R)\mid f (1) = k\}$ a subspace of $F$, i.e. $W\leqslant F$? Any help?
As you say, the conditions for $W$ to be a linear subspace are that it should be closed under arbitrary linear combinations. That is for $f,g\in W$ and any $\alpha, \beta$ $$ \alpha f+ \beta g \in W $$ In particular this should be true for $\alpha=\beta =0$ meaning that the null function should also be in $W$. Can you take it from here?
If $W$ is a subspace, then the null function belongs to $W$. Can you proceed?
Example of PA model in which Prov(1=0) and 1 != 0? $PA$ cannot prove $Prov(\phi) \to \phi$; and in particular $PA$ cannot prove "$Prov( 0 = 1 ) \to 0 = 1$". This can be viewed as a simple consequence of Löb's theorem; but we can also interpret it as "there are (non-standard) models of $PA$ in which both $Prov(0 = 1)$ and $0 \neq 1$ are true". Is there an intuitive way to build an example of such a non-standard model of $PA$ in which $Prov( 0 = 1 )$ and $0 \neq 1$ both hold? Is $\neg Con(PA)$ mandatory in such a non-standard model?
Briefly: "$0\not=1$" holds in every model of PA. This is because, well, PA proves "$0\not=1$." There's no way to get around this: even if PA were to also prove "$0=1$," all that would tell us is that PA has no models at all (because in every model we would have to have both $0\not=1$ and $0=1$). Yes, "$Prov(0=1)$" and "$\neg Con($PA$)$" are equivalent, provably in PA (and indeed much less). Indeed, $Con($PA$)$ is often an abbreviation for "$\neg Prov(0=1)$," which makes the equivalence trivial. No concrete nonstandard models of PA are really known at all. In particular, Tennenbaum's theorem prevents us from having too snappy a description of such a model. That said, the proof of the completeness theorem is rather intuitive after a while: we pass to a complete theory $T$ in a larger language which contains PA and has the "witness property" - if it proves "$\exists x\varphi(x)$," then it proves "$\varphi(t)$" for some term $t$ (generating these extra terms is why we need to expand the language) - and then the set of terms in this larger language, modulo $T$-provable equivalence, forms a model of $T$ (and hence has a reduct which is a model of PA). The reason this doesn't describe a single construction is that we have lots of freedom in building $T$, but if you blackbox that step the construction of the term model of $T$ is (in my opinion) very intuitive. Incidentally, by the MRDP theorem we can think of a model of PA satisfying "$Prov(0=1)$" as simply a model of PA where a certain polynomial has a root which didn't before. You may also find this question interesting.
Hint: Use the axioms at First-order theory of arithmetic. If we start with Peano's Axioms for $(N,S,0)$ with $1$ not yet defined (as at the link), then from FOL we have $0=0$ and $\exists x: x=0$.
Prove by mathematical induction $n < 2n$ Prove the following by mathematical induction: $n < 2n$, for all positive integers $n$. This is what I have done: Step 1: $n=1$: $1 < 2$ Step 2: Assume $k < 2k$. For $n=k+1$ we want $(k+1) < 2(k+1)$: $k + 1 < 2k + 1 < 2k + 2 = 2(k+1)$ Hence $P(k+1)$ is true whenever $P(k)$ is true, and $P(1)$ is true. I didn't write all necessary assumptions but can anyone help me to check if my method is correct or if it needs improvements. Thank you.
We have to prove that $$n+1<2(n+1)$$ , adding $1$ on both sides of $$n<2n$$ we get $$n+1<2n+1$$ and $$2n+1<2(n+1)$$ so the proof is finished.
Prove that $4$ is the only solution to $2+2$. This question was featured on Saturday Morning Breakfast Cereal and I haven't been able to find a proof. Can anyone help?
When proving theorems in mathematics, one starts from a set of axioms, statements that are accepted as true without argument. You might ask, "but what if an axiom isn't true?", and the answer is that we would be dealing with different mathematics. For example, Euclid included the parallel postulate as an axiom in his Elements. For years mathematicians tried to prove that the parallel postulate could be derived from the other axioms. It turns out that if you don't accept the parallel postulate, you end up with different types of geometry that we now call non-Euclidean. Einstein's theory of general relativity depends on these geometries. To come up with a proof of such a seemingly simple fact as $2 + 2 = 4$, we need a set of axioms to start with, and we need precise definitions of all the terms we are using. Depending on what set of axioms you start with, proving that $2 + 2 = 4$, and that no other natural number can equal $2+2$, may be either very simple or surprisingly difficult. For example in Russell and Whitehead's Principia, it famously took over 300 pages of work before they could prove that $1+1=2$. They started with a very sparse set of axioms though. The most common set of axioms for the natural numbers are the Peano Axioms. They are $0$ is a natural number. For every natural number $x$, $x=x$. For all natural numbers $x$ and $y$, if $x = y$, then $y = x$. For all natural numbers $x$, $y$, and $z$, if $x = y$ and $y = z$, then $x = z$. For all $a$ and $b$, if $a$ is a natural number, and $a = b$, then $b$ is a natural number. For every natural number $n$, $S(n)$ is a natural number. For every natural number $n$, $S(n) = 0$ is false. For all natural numbers $m$ and $n$, if $S(m) = S(n)$ then $m = n$. If $K$ is a set such that $0 \in K$, and for every natural number $n$, $n \in K$ implies that $S(n) \in K$, then $K$ contains all natural numbers. Here $S$ is the successor function, it takes each natural number to its successor. This might seem like a complicated mess compared to the simplicity of natural numbers, but we need to be precise. We need to carefully construct the axioms so that no contradiction can be derived from them, and so they encapsulate what we understand to be the natural numbers. We want to be able to prove interesting statements about the natural numbers from them. Note that the axioms contain undefined terms. The axioms don't need to state what the terms mean, only what they do. The following definitions are commonly used within this axiomatization. They are the definitions from Peano's original paper (an English translation is available in the book From Frege to Gödel), modified to start at $0$ instead of $1$. $1$ is defined as $S(0)$, $2$ is defined as $S(1)$, $3$ is defined as $S(2)$, and $4$ is defined as $S(3)$. Addition is defined recursively as follows: $$a + 0 = a$$ $$a + S(b) = S(a + b)$$ Thus $$2 + 2 = 2 + S(1) = S(2 + 1) = S(2 + S(0)) = S(S(2 + 0)) = S(S(2)) = S(3) = 4$$ proving that $2+2 = 4$. This is the unique value of $2+2$ by axiom 4: if $x = 2+2$ and $2+2 = 4$, then $x = 4$.
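Since the recursive definition above is essentially a tiny program, here is a minimal sketch in Python (the encoding of numerals as nested tuples is just an illustrative choice) that unfolds $2+2$ exactly the way the proof does:

```python
# Peano-style naturals: Z is zero, S(n) wraps a numeral in one successor.
Z = ('Z',)
def S(n):
    return ('S', n)

def add(a, b):
    # a + 0 = a ;  a + S(b) = S(a + b)
    if b == Z:
        return a
    return S(add(a, b[1]))

def to_int(n):
    return 0 if n == Z else 1 + to_int(n[1])

two  = S(S(Z))
four = S(S(S(S(Z))))
assert add(two, two) == four   # 2 + 2 = 4 by unfolding the definition
print(to_int(add(two, two)))   # 4
```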
Prove it geometrically... First define your "anchor/base" unit -> a line segment with a defined length is #1. Next define #2... just join 2 of your #1 units... and postulate that it's #2, the double of the "base"... you can measure it... then you can prove it... draw 2 of your #2 units, parallel in the geometric space, like in a vector space... Draw the next sequence -> move the beginning of the 2nd #2 to the end of the 1st #2 and voilà... you have #4 *you can prove all the relations -> #1 = 1x #1 | #2 = 2x #1 | #4 = (4x #1 or 2x #2) *if at a superior academic level you have to define and prove also the operations '+' and 'x'... at least the "neutral element" of the operation 'x'... Good luck...
Prove or disprove that AB=AC $\implies$ B=C I proved it as follows but I'm not so sure about it. A, B and C are square matrices of the same order. Assume $ B \neq C $ $$ AB \neq AC$$ $$ B \neq C \implies AB \neq AC$$ $$ \neg ( AB \neq AC) \implies \neg ( B \neq C ) $$ $$AB =AC \implies B=C $$
What if $A$ is zero?
If $A$ is invertible, then it is true.
Finding $\min \frac{\left[\int_{0}^{1}f(x)\mathrm dx\right]^2}{\int_{0}^{1}(f(x))^3\mathrm dx}$ given that $f(0)=0$ and $0\lt f'(x)\le 1$ Let $f$ be a function having continuous derivative on $[0,1]$ such that $0 < f' \le 1$ and $f(0)=0$. Having defined $$I := \frac{\left(\int_0^1 f(x)dx\right)^2}{\int_0^1 f^3(x)dx}$$ we must have that: $$(A) \quad I \ge \frac{2}{3}\qquad (B) \quad I \ge \frac{1}{2}\qquad (C) \quad I \ge \frac{1}{3}\qquad (D) \quad I \ge 1$$ I have been stuck on this for quite some time; here is what I have tried. Since $0\lt f'(x)\le 1$, taking the definite integral on both sides of the inequality we get that $0\lt f(1)\le 1$. Let $f(1)=k\in (0,1]$. Then the curve $y=f(x)$ passes through the points $(0,0)$ and $(1,k)$, the join of which lies on the line $y=kx$. I am unable to proceed from this step. Another alternate approach I tried was the Cauchy-Schwarz inequality for integrals, but to no avail. Perhaps the AM-GM inequality might be useful, since the function is non-negative in $[0,1]$, but I am not exactly sure how. Any hints or ideas are appreciated. Thanks.
See, we have $0 < f'(x) \le 1$ and $f(0)=0$ $$\implies \int_0^x 0\,dx<\int_0^x f'(x)dx\le \int_0^x 1\,dx$$ $$\implies 0<f(x)\le x, \quad x\in [0,1].$$ We also know $x \le x^n$ for $0<n<1$ when $x\in [0,1]$. Let $f(x) =x$ in the numerator, since this is a potential function (it satisfies the constraints given). If we make the denominator larger, we know $I$ for this trial function must be greater than the following: $$I >\frac{\left(\int_0^1 xdx \right)^2}{\int_0^1 (x^n)^{3}dx},\ 0<n<1 \implies I > \dfrac{3n+1}{4},\ 0<n<1$$ If $n= 2/3$ for example we have that $I>0.75$. And we see as $n$ approaches $1$ we have $I$ approaching $1$. Hence all 4 options are correct.
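As a purely numerical sanity check (a sketch, not a proof; the trial functions below are just illustrative choices satisfying $f(0)=0$ and $0<f'\le 1$), the ratio never drops below $1$:

```python
import numpy as np

x = np.linspace(0, 1, 100001)

trials = {
    "x":            x,
    "x/2":          x / 2,
    "sin(x)":       np.sin(x),       # f' = cos(x) lies in (0, 1] on [0, 1]
    "1 - exp(-x)":  1 - np.exp(-x),  # f' = exp(-x) lies in (0, 1]
}

for name, f in trials.items():
    I = np.trapz(f, x) ** 2 / np.trapz(f ** 3, x)
    print(f"f(x) = {name:12s} I ≈ {I:.4f}")
# Every ratio is >= 1, consistent with option (D).
```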
The simplest trial function I thought of is $f(x) = kx$, with $0 \leq k \leq 1$. The numerator integral becomes $\frac{1}{2}kx^2$ which evaluates to $\frac{k}{2}$, so the numerator is $\frac{k^2}{4}$. Meanwhile the denominator integral gives $\frac{1}{4}k^3x^4$ which evaluates to $\frac{1}{4}k^3$. Dividing, I get the expression equals $k^{-1}$. Since $k < 1$, $1/k > 1$, which does not rule out any of the options. Next I try the trial function $f(x) = kx^p$. For this we must have $kpx^{p-1} \leq 1$ for $0 \leq x \leq 1$. Since $f(x)$ must be increasing, $p > 0$ and $k > 0$. Also, if $p < 1$ then the derivative at $0$ would be infinite, so we have $p \geq 1$, and we already tried $p = 1$. So for given $k$ and $p$, $kpx^{p-1} \leq kp \leq 1$. So $k \leq 1$. The numerator integral is $\frac{k}{p+1}$ so the numerator is $\frac{k^2}{(p+1)^2}$, while the denominator integral is $\frac{1}{3p+1}k^3x^{3p+1}$ which evaluates to $\frac{k^3}{3p+1}$. The expression becomes $$\frac{k^2(3p+1)}{(p+1)^2k^3} = \frac{3p+1}{k{(p+1)}^2}$$ Given that $k \leq 1$, we examine $\frac{3p+1}{{(p+1)}^2}$ which has an extremum at $p = \frac{1}{3}$. That looks promising, but plugging in, I get the expression is greater than $9/8$. I begin to suspect the answer is $1$, but I am still trying to think of other functions that could somehow evaluate to less. The target now becomes, prove $$\big[\int_0^1 f(x) dx \big]^2 \geq \int_0^1 (f(x))^3 dx$$
Evaluate the integral $\int_{0}^{\frac{\pi}{3}}\sin(x)\ln(\cos(x))\,dx$ $$\int_0^{\frac{\pi}{3}}\sin(x)\ln(\cos(x))\,dx $$ $$ \begin{align} u &= \ln(\cos(x)) & dv &= \sin(x)\,dx \\ du &= \frac{-\sin(x)}{\cos(x)}\,dx & v &= -\cos(x) \end{align} $$ $$ \begin{align} \int_0^{\frac{\pi}{3}}\sin(x)\ln(\cos(x))\,dx &= -\cos(x)\ln(\cos(x)) - \int (-\cos(x))\cdot\frac{-\sin(x)}{\cos(x)}\,dx \\\\ &= -\cos(x)\ln(\cos(x)) - \int \sin(x)\,dx \\\\ &= -\cos(x)\ln(\cos(x)) + \cos(x) \end{align} $$ Evaluating from $0$ to $\frac{\pi}{3}$: $$ \begin{align} &\ -\cos(\pi/3)\ln(\cos(\pi/3)) + \cos(\pi/3) + \cos(0)\ln(\cos(0)) - \cos(0) \\\\ &= -\frac{1}{2}\ln\left(\frac{1}{2}\right) - \frac{1}{2} \end{align} $$ However, my textbook says that the answer is actually $$\frac{1}{2}\ln(2) - \frac{1}{2}$$ Where does the $\ln(2)$ come from in the answer?
Well, we have: $$\mathcal{I}_\text{n}:=\int_0^\text{n}\sin\left(x\right)\cdot\ln\left(\cos\left(x\right)\right)\space\text{d}x\tag1$$ Substitute $\text{u}:=\cos\left(x\right)$ so we get: $$\mathcal{I}_\text{n}=-\int_1^{\cos\left(\text{n}\right)}\ln\left(\text{u}\right)\space\text{d}\text{u}\tag2$$ Using IBP, we get: $$\mathcal{I}_\text{n}=\left[-\text{u}\cdot\ln\left(\text{u}\right)\right]_1^{\cos\left(\text{n}\right)}+\int_1^{\cos\left(\text{n}\right)}1\space\text{d}\text{u}=\left[-\text{u}\cdot\ln\left(\text{u}\right)\right]_1^{\cos\left(\text{n}\right)}+\left[\text{u}\right]_1^{\cos\left(\text{n}\right)}=$$ $$-\cos\left(\text{n}\right)\cdot\ln\left(\cos\left(\text{n}\right)\right)+1\cdot\ln\left(1\right)+\cos\left(\text{n}\right)-1=\cos\left(\text{n}\right)\cdot\left(1-\ln\left(\cos\left(\text{n}\right)\right)\right)-1\tag3$$ So, when $\text{n}=\frac{\pi}{3}$ we get: $$\mathcal{I}_{\frac{\pi}{3}}:=\int_0^\frac{\pi}{3}\sin\left(x\right)\cdot\ln\left(\cos\left(x\right)\right)\space\text{d}x=\cos\left(\frac{\pi}{3}\right)\cdot\left(1-\ln\left(\cos\left(\frac{\pi}{3}\right)\right)\right)-1=\frac{\ln\left(2\right)-1}{2}\tag4$$
$$-\ln(\frac{1}{2}) = \ln(2)$$ $$\ln(x^a) = a \ln(x)$$
Why not both true and false? Why can't some mathematical statement (or whatever is the correct term) be both true and false? For example we can prove (e.g. by induction) that $1+2+3+\cdots+n=\frac{n(n+1)}{2}$ for all positive integers $n$. But how can we be sure that no one will ever find a counter example? What if someone claims that $1+2+3+\cdots+1000$ equals (e.g.) 500567 and not 500500, which is what the above formula claims. Another example: Why is it impossible for someone to come up with three integer $a$, $b$ and $c$, for which $a^3+b^3=c^3$ (contradicting Fermat's Last Theorem)? This bothers me even in the simple intuitive level. Then I have heard about Gödel's incompleteness theorems, second of which says (at least this is how I have interpreted it) that an axiomatic system cannot prove its own consistency. So doesn't Gödel's second incompleteness theorem say basically that "anything is possible"? ...that there can be an integer $n$ for which $1+2+3+\cdots+n \neq \frac{n(n+1)}{2}$ or that there can be integers $a$, $b$ and $c$ for which $a^3+b^3=c^3$?
Gödel's theorem could be more accurately interpreted as saying that we can never be sure of the consistency of a sufficiently complex system. We can't be sure, for instance, that the Peano Axioms don't prove $1+1=3$. We sure hope this isn't the case, but no proof would convince us otherwise (and it's probably not, since the Peano Axioms have an intuitive model as being the natural numbers with addition and multiplication). However, it's still true that $1+1=2$ even if the Peano Axioms say otherwise (indeed, if they proved $1+1=3$, they would also have to prove $1+1\neq 3$, and also every other statement you could possibly make within that system). In fact, we can say that, if a (suitably complex) system is inconsistent, then it admits both a proof and a disproof of every statement - this is the principle of explosion. The difference is that there is an intended model of the Peano Axioms - the natural numbers with addition and multiplication. This is clearly well-defined and certain things are undeniably true of them. We would therefore expect that the Peano Axioms are, in fact, consistent (though we can't prove it) - and, if they are consistent, everything they prove is true and undeniably so. Even if PA were inconsistent, we would still expect proofs like $1+2+\ldots+n=\frac{n(n+1)}2$ to work, since they leverage such simple properties of the structure of the natural numbers. The point here is that "truth" and "proof" are distinct notions - but we tend to identify them because we assume our logical systems are consistent, or at least assume that the bits of them we actually use are consistent.
We would have hoped that in mathematics every reasonable statement is either true or false. As Gödel showed, that is not the case: In any mathematical system, there are two possibilities: Either most statements are either true or false but some are neither true nor false. Or the system is contradictory, which means every statement is both true and false at the same time, which means the whole system is useless. Now you asked "Is it possible that a simple statement is both true and false"? We don't know which of the two kinds of systems we are using. We very very much hope that it is not contradictory. In that case, no, a simple statement can be either true, or false, or neither at all. If our system is contradictory, then yes, that simple statement will be both true and false. Actually, every statement will be both true and false. In your example, suppose we have a proof that a certain sum is 500,500 but we also calculated correctly that the same sum is 500,567. Then every mathematical statement is both true and false: We have just shown that 500,500 = 500,567, therefore $0 = 67$, therefore $0 = 1$. Then as an example, for every $a, b, c > 0$ and $n \ge 3$, $a^n + b^n = 1 \cdot (a^n + b^n - c^n) + c^n = 0 \cdot (a^n + b^n - c^n) + c^n = c^n$ So every possible tuple $(a, b, c, n)$ is a counterexample to Fermat's Last Theorem!
limit $ \lim_{x\rightarrow3}\left(\frac{x+1 - \sqrt{5x+1}}{\ln(\frac{x}{3})}\right) $ I am trying to find the limit of $$ \lim_{x\rightarrow3}\left(\frac{x+1 - \sqrt{5x+1}}{\ln(\frac{x}{3})}\right) $$ The function is defined and continuous for all $ x \in \mathbb{R^+} $ except for $ x = 3 $, since there the denominator is 0: $ \ln(3/3) = \ln(1) = 0 $. I am trying to get rid of the term $ \ln(\frac{x}{3}) $ so that the denominator will no longer be 0 when $x$ is 3, and so that I can extend the function continuously to the point $ x = 3 $. But so far I haven't been able to find a solution. Any suggestions would be much appreciated. How do I go about finding the limit here? Thanks.
In your revised expression, both numerator and denominator approach zero as $x$ approaches $3$. L'Hôpital's rule may therefore be applied, and we get an equal expression for your limit: $$\begin{align} \lim_{x\to 3}\left(\frac{x+1 - \sqrt{5x+1}}{\ln(\frac{x}{3})}\right) &= \lim_{x\to 3}\left(\dfrac{1 - \dfrac 1{2\sqrt{5x+1}}\cdot 5} {\dfrac{1}{\dfrac{x}{3}}\cdot\dfrac 13}\right) \\[2 ex] &= \lim_{x\to 3}\left(1 - \frac 5{2\sqrt{5x+1}}\right)\cdot x \\[2 ex] &= \left(1 - \frac 5{2\sqrt{5\cdot 3+1}}\right)\cdot 3 \\[2 ex] &= \left(1 - \frac 58\right)\cdot 3 \\[2 ex] &= \frac 98 \end{align}$$
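A quick numerical check (just an illustration) approaching $x=3$ from both sides agrees with $\frac98$:

```python
import math

def f(x):
    return (x + 1 - math.sqrt(5*x + 1)) / math.log(x / 3)

for h in (1e-2, 1e-4, 1e-6):
    print(f(3 + h), f(3 - h))   # both columns tend to 9/8 = 1.125
```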
What you have is called an indeterminate form: both the numerator and the denominator go to zero at the limit point. To solve this kind of problem, you differentiate the numerator and the denominator separately (this is L'Hôpital's rule) and evaluate the limit of the new quotient, repeating the process as long as the form remains indeterminate; then you substitute $3$ for $x$ to find the value of the limit.
If $f$ is uniformly differentiable then $f '$ is uniformly continuous? Is the following theorem true? Theorem. Let $U\subset \mathbb{R}^m$ (open set) and $f:U\longrightarrow \mathbb{R}^n$ a differentiable function. If $f$ is uniformly differentiable $ \Longrightarrow$ $f':U\longrightarrow \mathcal{L}(\mathbb{R}^m,\mathbb{R}^n)$ is uniformly continuous. Note that $f$ is uniformly differentiable if $\forall \epsilon>0\,,\exists \delta>0:|\!|h|\!|<\delta,\color{blue}{[x,x+h]\subset U} \Longrightarrow |\!|f(x+h)-f(x)-f'(x)(h)|\!|<\epsilon |\!|h|\!| $ (edited) $\forall \epsilon>0\,,\exists \delta>0:|\!|h|\!|<\delta,\color{blue}{x,x+h\in U} \Longrightarrow |\!|f(x+h)-f(x)-f'(x)(h)|\!|<\epsilon |\!|h|\!|\qquad \checkmark$ Any hints would be appreciated.
Let's build off of Tomas' last remark, slightly modified: Let $t>0$ be small. Then \begin{eqnarray} \|f'(x)-f'(y)\| &=& \frac{1}{t}\sup_{\|w\|=1}\|\langle f'(x)-f'(y),tw\rangle\| \nonumber \\ &\leq& \frac{1}{t}\sup_{\|w\|=1}\|f(x+tw)-f(x)-[f(y+tw)-f(y)]\| + 2\epsilon \nonumber \end{eqnarray} It suffices to show that this weighted combination of four close points on a parallelogram can be bounded by $C\epsilon t$. Let us bound $\|f(x+h) - f(x) + f(x+k) - f(x+h+k)\|_2 \leq C\epsilon(\|h\|+\|k\|)$, and then in this case $\|h\|=t$ and $\|k\|\leq \delta$, so if $t=\delta$ the whole expression is bounded by a constant times $\epsilon$. Note applying uniform differentiability three times in directions $h,k,$ and $h+k$, for small $\|h\|,\|k\|$ we have \begin{eqnarray*} \|f(x+h) - f(x) + f(x+k) - f(x+h+k)\| &\leq& \|f'(x)h + f'(x)k - f'(x)(h+k)\|_2 + 3\epsilon(\|h\|+\|k\|)\\ &=& 3\epsilon(\|h\|+\|k\|) \end{eqnarray*}
Let us start with the simplest case $f:R^1 \rightarrow R^1$. The discussion below will generalize nicely to more dimensions. Then $f$ is differentiable at $x_0$ iff $ \lim_{x \rightarrow x_0} {\frac{f(x)-f(x_0)}{x-x_0}}$ exists and is finite. We call that limit $f'(x_0)$. So if $f$ is differentiable at $x_0$ then we may write $f(x) = f(x_0) + f'(x_0)(x-x_0) + R(x)$ where $\lim_{x \rightarrow x_0} R(x)/(x-x_0)$ = 0.  (1) If $f$ is uniformly differentiable that means $\lim_{x \rightarrow x_0} R(x)/(x-x_0)$ converges uniformly to zero.  (2) From (1) we have $f'(x)$ = $f'(x_0)$ + $R'(x)$. [We know $R$ is differentiable because it is the difference between $f$, which is differentiable, and a linear polynomial.] So $\lim_{x \rightarrow x_0} (f'(x)-f'(x_0))$ = $\lim_{x \rightarrow x_0} R'(x)$ = $\lim_{x \rightarrow x_0} R(x)/(x-x_0)$ = 0. And this convergence is uniform as per our hypothesis as stated in (2). Moving on to the case where $f:R^n \rightarrow R^m$. For this part of the discussion capital letters will be used to represent vectors (in whatever dimension). According to many sources (and I will state specifically Robert Sealey "Calculus of Several Variables") $f$ is differentiable at $X_0$ iff there exists a linear function $g(X) = f(X_0) + (M,X-X_0)$ [where $M$ is a constant vector and $(a,b)$ means the inner product], such that $ \lim_{X \rightarrow X_0} {\frac{f(X)-g(X)}{|X-X_0|}}$ = 0.  (3) If $f$ is uniformly differentiable that means simply that this limit converges uniformly. We could have stated differentiability this way in the $R^1 \rightarrow R^1$ case, but I stated it as I did in that case to throw some light on what $g$ actually is. Let $R(X) = f(X) - g(X)$ $\Rightarrow$ $f(X) = g(X) + R(X)$. Since $R = f - g$ then $R/|X-X_0| \rightarrow 0$ uniformly. Notice also that $R(X_0)$ = 0, since it is $f(X_0) - g(X_0)$ and by (3) this must be 0. Since $g$ is linear and $f$ is differentiable, $R$ is also differentiable and from the above $R'(X_0)$ = 0. Looking at $f'$ we have $f'(X) = g'(X) + R'(X) = M + R'(X)$; $f'(X_0) = M + R'(X_0)$ and thus $f'(X_0) = M$. Finally $\lim_{X \rightarrow X_0} (f'(X)-f'(X_0))$ = $\lim_{X \rightarrow X_0} R'(X)$ = $\lim_{X \rightarrow X_0}{\frac{R(X)-R(X_0)}{X-X_0}}$ = $\lim_{X \rightarrow X_0}{\frac{R(X)}{X-X_0}}$ and this goes to 0 uniformly because $f$ is uniformly differentiable, $R = f - g$, and $R(X_0)$ is 0.
How can one prove that $e<\pi$? This question is inspired by another one, asking to prove that something approximately equal to $1.2$ is bigger than something approximately equal to $0.9$. The numerical answer to this question was (expectedly) downvoted, though in my opinion it is the most reasonable approach to this kind of problems (${\tiny \text{which I personally find completely useless}}$). My question will consist of 2 parts: Prove (without calculator) that $e<\pi$; Explain what do we learn from the proof/what makes this problem interesting. Edit: Existing answers only confirm my point of view about various weird inequalities. Fortunately there is $3$ between $e$ and $\pi$, otherwise things would be very boring.
Inscribe a regular hexagon in a circle of radius $1$. Since a straight line is the shortest distance between two points the circumference of the circle is longer than the circumference of the hexagon. We take the definition of $\pi$ as half the circumference of the unit circle. Putting all this together we obtain $2\pi \gt 6$ or $\pi \gt 3$ We take $e$ as the sum $1+1+\frac 12+\frac 1{3!}+\cdots$ which converges absolutely and which, after the first three terms, is term by term less than the sum $1+1+\frac 12+\frac 1{2^2}+\cdots$ since the later terms in the second sum are obtained by dividing the previous term by $2$, and in the first sum by $n\gt 2$ (crudely for $n\ge 3$ we have $n!\gt 2^{n-1}$). Summing the geometric series we have $e\lt 3 \lt\pi$. What do we learn - well how easy it is to make an estimate depends on the definition. The geometric definition of $\pi$ lends itself to a good enough estimate. There are different ways of defining $e$ too, but the sum offers a range of possibilities for estimating, particularly as the terms decrease very quickly. But the geometric definition for $\pi$ requires assumed knowledge about a straight line as the shortest distance between two points, which seems obvious - yet conceals the trickiness of defining the length of a curve - so this looks simpler than it is.
Use the Leibniz formula for $\pi$: $$ \frac{\pi}{4} = \sum_{j=0}^\infty \frac{(-1)^j}{2j+1} \text{.} $$ This is an alternating sum where each partial sum is alternately an upper and lower bound. The first few terms constrain $\pi$ to these ranges: [0,4] [2.666..., 4] [2.666..., 3.4666...] [2.895..., 3.4666...] [2.895..., 3.33968...] [2.9760..., 3.33968...] [2.9760..., 3.28374...] [3.01707..., 3.28374...] at which point we know $\pi$ is greater than $3$. A similar method is this descending series for $\mathrm{e}$: $$ \mathrm{e} = 3 + \sum_{k=2}^\infty \frac{-1}{k!(k-1)k} \text{.} $$ Now after one term, we know $\mathrm{e} < 3-\frac{1}{4} < 3$ and the desired inequality follows. For determining which of two unknown numbers given only by series is larger, it is helpful to have a series whose partial sums are bounds of some sort. Since we know we want $\mathrm{e} < \pi$ it is helpful to have a decreasing series for $\mathrm{e}$, an increasing series for $\pi$, and/or alternating series for either or both. Further lesson: It is very nice to have intervals (or at least bounds) on unfamiliar numbers. Alternating series are quite handy for this. Other series for this method: \begin{align} \mathrm{e}^{-1} &= \sum_{k=0}^\infty \frac{(-1)^k}{k!} &&\text{gives $\mathrm{e}<3$ after $5$ terms} \\ \frac{\pi^2}{8} &= \sum_{k=1}^\infty \frac{1}{(2k-1)^2} &&\text{gives $3 < \pi$ after 4 terms} \end{align} Further further lesson: the more representations of a thing you know, the more you are able to show about it.
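For anyone who wants to watch the brackets close in, here is a tiny sketch computing the partial sums quoted above:

```python
from math import factorial

# Leibniz partial sums bracket pi alternately from above and below.
s = 0.0
for j in range(8):
    s += (-1)**j / (2*j + 1)
    print(4 * s)                 # the 8th value is 3.017..., already > 3

# The descending series for e: every partial sum is an upper bound.
e_upper = 3.0
for k in range(2, 8):
    e_upper -= 1 / (factorial(k) * (k - 1) * k)
    print(e_upper)               # drops below 3 at the very first step
```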
Is there a name for the theorem that $\lim\limits_{n \to \infty} (1+\frac{1}{n})^n < \infty$? Is there a name for the theorem that $\displaystyle \lim_{n \to \infty} (1+\frac{1}{n})^n < \infty$ ? Wikipedia has a List of things named after Leonhard Euler which mentions Euler's number but not the theorem. I wanted to write something like: Carleman's inequality can be proved using a weak form of Stirling's inequality, which follows from $\displaystyle \lim_{n \to \infty} (1+\frac{1}{n})^n < \infty$, which can be proved using Bernstein's inequality. but that seems awkward.
I don't think there's a specific theorem for this. However, noticing that $\lim_{n \to \infty}(1+\frac{1}{n})^n=\lim_{n \to \infty}e^{n\ln(1+\frac{1}{n})}$=$e^{\lim_{n \to \infty}\frac{\ln(1+n^{-1})}{n^{-1}}}$ (because $(1+\frac{1}{n})^n>0$ and $y=e^x$ is continuous for all $x$), it's clear that the expression above could be simplified with L'Hôpital's rule to get $e$. (However, arguing this way is somewhat circular: we have to first define $e$ as a finite constant before the functions $e^x$ and $\ln x$ are available.)
The result $\lim\limits_{n \to \infty} (1+\frac{1}{n})^n < \infty$ follows from the Monotone Convergence Theorem for sequences: the sequence is increasing and bounded above (e.g. by $3$), hence convergent.
In $Z_{12}$, find non-zero elements $a,b,c$ with $ab = ac$ but $b$ does not equal $c$ In $\mathbb{Z}_{12}$, find non-zero elements $a$, $b$, $c$ with $ab = ac$ but $b \neq c$ $\mathbb{Z}_{12}\setminus \{0\} = \{1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11\}$ I have at the moment that $a = 2, b = 6$ and $c = 4$ $2 \times 6 \equiv 0 \pmod {12}$ $4 \times 6 \equiv 0 \pmod{12}$ It would be great if someone could verify this/correct me if I am wrong. Also is there a shorter way to do this than guess and check?
If you meant $a=6, b=2, $ and $c=4$, that is indeed an example where $ab\equiv ac$ but $b\not\equiv c\bmod 12$. To find such examples, note that we need $a(b-c)\equiv 0$ with $a\not\equiv0$ and $b-c\not\equiv0\bmod12$. Since $12=2^23=2\times6=6\times2=3\times4=4\times3$, this could be accomplished with $a\in\{2,10\}$ and $b-c=6$, or with $a=6$ and $b-c\in\{2,10\}$, or with $a\in\{3,9\}$ and $b-c\in\{4,8\}$, or with $a\in\{4,8\}$ and $b-c\in\{3,9\}.$
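If you would rather let a machine do the guess-and-check, a brute-force sweep over $\mathbb{Z}_{12}$ (purely illustrative) lists every such triple:

```python
# Find nonzero a, b, c in Z_12 with a*b ≡ a*c (mod 12) and b ≠ c.
n = 12
examples = [(a, b, c)
            for a in range(1, n)
            for b in range(1, n)
            for c in range(1, b)        # c < b avoids listing pairs twice
            if (a * b) % n == (a * c) % n]

print(len(examples))
print(examples[:5])   # e.g. (2, 7, 1): 2*7 ≡ 2*1 ≡ 2 (mod 12)
```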
$$\text{2*2 = 4 = 2*8}$$ $$\text{2*3 = 6 = 2*9}$$ $$\text{2*4 = 8 = 2*10}$$ $$\text{2*5 = 10 = 2*11}$$ $$\text{3*1 = 3*5 = 3*9 = 3}$$ $$\text{3*2 = 3*6= 3*10 = 6}$$ $$\text{3*3 = 3*7 = 3*11 = 9}$$ $$\text{3*4 = 3*8 = 0}$$ $$\text{4*1 = 4*4 = 4*7 = 4*10 = 4}$$ $$\text{4*2 = 4*5 = 4*8 = 4*11 = 8}$$ $$\text{4*3 = 4*6 = 4*9 = 0}$$ $$\text{6*1 = 6*3 = 6*5 = 6*7 = 6*9 = 6*11 = 6}$$ $$\text{6*2 = 6*4 = 6*6 = 6*8 = 6*10 = 0}$$ $$\text{8*1 = 8*4 = 8*7 = 8*10 = 8}$$ $$\text{8*2 = 8*5 = 8*8 = 8*11 = 4}$$ $$\text{8*3 = 8*6 = 8*9 = 0}$$ $$\text{9*1 = 9*5 = 9*9 = 9}$$ $$\text{9*2 = 9*6 = 9*10 = 6}$$ $$\text{9*3 = 9*7 = 9*11 = 3}$$ $$\text{9*4 = 9*8 = 0}$$ $$\text{10*1 = 10*7 = 10}$$ $$\text{10*2 = 10*8 = 8}$$ $$\text{10*3 = 10*9 = 6}$$ $$\text{10*4 = 10*10 = 4}$$ $$\text{10*5 = 10*11 = 2}$$ where $*$ is multiplication modulo 12. sorry for the bad formatting, I haven't used MathJax in a while.
Proof of $\frac{1}{e^{\pi}+1}+\frac{3}{e^{3\pi}+1}+\frac{5}{e^{5\pi}+1}+\ldots=\frac{1}{24}$ I would like to prove that $\displaystyle\sum_{\substack{n=1\\n\text{ odd}}}^{\infty}\frac{n}{e^{n\pi}+1}=\frac1{24}$. I found a solution by myself 10 hours after I posted it; here it is: $$f(x)=\sum_{\substack{n=1\\n\text{ odd}}}^{\infty}\frac{nx^n}{1+x^n},\quad\quad g(x)=\displaystyle\sum_{n=1}^{\infty}\frac{nx^n}{1-x^n},$$ then I must prove that $f(e^{-\pi})=\frac1{24}$. It was not hard to find the relation between $f(x)$ and $g(x)$, namely $f(x)=g(x)-4g(x^2)+4g(x^4)$. Note that $g(x)$ is a Lambert series, so by expanding the Taylor series for the denominators and reversing the two sums, I get $$g(x)=\sum_{n=1}^{\infty}\sigma(n)x^n$$ where $\sigma$ is the divisor function $\sigma(n)=\sum_{d\mid n}d$. I then define for complex $\tau$ the function $$G_2(\tau)=\frac{\pi^2}3\Bigl(1-24\sum_{n=1}^{\infty}\sigma(n)e^{2\pi in\tau}\Bigr)$$ so that $$f(e^{-\pi})=g(e^{-\pi})-4g(e^{-2\pi})+4g(e^{-4\pi})=\frac1{24}+\frac{-G_2(\frac i2)+4G_2(i)-4G_2(2i)}{8\pi^2}.$$ But it is proven in Apostol "Modular forms and Dirichlet Series", page 69-71 that $G_2\bigl(-\frac1{\tau}\bigr)=\tau^2G_2(\tau)-2\pi i\tau$, which gives $\begin{cases}G_2(i)=-G_2(i)+2\pi\\ G_2(\frac i2)=-4G_2(2i)+4\pi\end{cases}\quad$. This is exactly what was needed to get the desired result. One job finished! I find that sum fascinating. $e,\pi$ all together to finally get a rational. This is why mathematics is beautiful! Thanks to everyone who contributed.
We will use the Mellin transform technique. Recalling the Mellin transform and its inverse $$ F(s) =\int_0^{\infty} x^{s-1} f(x)dx, \quad\quad f(x)=\frac{1}{2 \pi i} \int_{c-i \infty}^{c+i \infty} x^{-s} F(s)\, ds. $$ Now, let's consider the function $$ f(x)= \frac{x}{e^{\pi x}+1}. $$ Taking the Mellin transform of $f(x)$, we get $$ F(s)={\pi }^{-s-1}\Gamma \left( s+1 \right) \left(1- {2}^{-s} \right) \zeta \left( s+1 \right),$$ where $\zeta(s)$ is the zeta function . Representing the function in terms of the inverse Mellin Transform, we have $$ \frac{x}{e^{\pi x}+1}=\frac{1}{2\pi i}\int_{C}{\pi }^{-s-1}\Gamma \left( s+1 \right) \left( 1-{2}^{-s} \right) \zeta \left( s+1 \right) x^{-s}ds. $$ Substituting $x=2n+1$ and summing yields $$\sum_{n=0}^{\infty}\frac{2n+1}{e^{\pi (2n+1)}+1}=\frac{1}{2\pi i}\int_{C}{\pi}^{-s-1}\Gamma \left( s+1 \right)\left(1-{2}^{-s} \right) \zeta\left( s+1 \right) \sum_{n=0}^{\infty}(2n+1)^{-s}ds$$ $$ = \frac{1}{2\pi i}\int_{C}{\pi }^{-s-1}\Gamma \left( s+1 \right) \left(1-{2}^{-s} \right)^2\zeta\left( s+1 \right) \zeta(s)ds.$$ Now, the only contribution of the poles comes from the simple pole $s=1$ of $\zeta(s)$ and the residue equals to $\frac{1}{24}$. So, the sum is given by $$ \sum_{n=0}^{\infty}\frac{2n+1}{e^{\pi (2n+1)}+1}=\frac{1}{24} $$ Notes: 1) $$ \sum_{n=0}^{\infty}(2n+1)^{-s}= \left(1- {2}^{-s} \right) \zeta \left( s \right). $$ 2) The residue of the simple pole $s=1$, which is the pole of the zeta function, can be calculated as $$ r = \lim_{s=1}(s-1)({\pi }^{-s-1}\Gamma \left( s+1 \right) \left({2}^{-s}-1 \right)^2\zeta\left( s+1 \right) \zeta(s))$$ $$ = \lim_{s\to 1}(s-1)\zeta(s)\lim_{s\to 1} {\pi }^{-s-1}\Gamma \left( s+1 \right) \left({2}^{-s}-1 \right)^2\zeta\left( s+1 \right) = \frac{1}{24}. $$ For calculating the above limit, we used the facts $$ \lim_{s\to 1}(s-1)\zeta(s)=1, \quad \zeta(2)=\frac{\pi^2}{6}. $$ 3) Here is the technique for computing the Mellin transform of $f(x)$.
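As a numerical cross-check (assuming mpmath is installed), truncating the sum after a few dozen odd terms already matches $\frac1{24}$ to many digits, since the terms decay like $n\,e^{-\pi n}$:

```python
from mpmath import mp, mpf, exp, pi

mp.dps = 30
s = sum(n / (exp(pi * n) + 1) for n in range(1, 61, 2))
print(s)              # 0.041666666666666666666666666666...
print(mpf(1) / 24)    # same digits
```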
$$S=\sum_{n=1}^{\infty }\frac{2n-1}{1+e^{\pi (2n-1)}}.$$ First we know that $$\frac{1}{1+e^x}=\sum_{k=1}^{\infty }(-1)^{k-1}e^{-kx},$$ therefore $$S=\sum_{n=1}^{\infty }(2n-1)\sum_{k=1}^{\infty }(-1)^{k-1}e^{-k(2n-1)\pi }=\sum_{k=1}^{\infty }(-1)^{k-1}\sum_{n=1}^{\infty }(2n-1)e^{-k(2n-1)\pi }.$$ But we know $$ \sum_{n=1}^{\infty }e^{-(2n-1)\pi x}=\frac{1}{2\sinh(\pi x)}\quad\text{for all }x>0,$$ and $$\frac{1}{2\sinh(\pi x)}=\frac{i}{2\pi }\cdot\frac{\pi }{\sin(i\pi x)},$$ while $$\int_{0}^{\infty }\frac{t^{-x}}{1+t}\,dt=\frac{\pi }{\sin(\pi x)}.$$ Therefore $$\int_{0}^{\infty }\frac{t^{-x}}{1+t}\,dt=\int_{0}^{1}\frac{t^{-x}}{1+t}\,dt+\int_{0}^{1}\frac{t^{x-1}}{1+t}\,dt=\int_{0}^{1}\frac{t^{-x}+t^{x-1}}{1+t}\,dt,$$ so $$\frac{\pi }{\sin(\pi x)}=\int_{0}^{1}\frac{t^{-x}+t^{x-1}}{1+t}\,dt=\sum_{k=1}^{\infty }(-1)^{k-1}\left(\frac{1}{k-x}+\frac{1}{k-1+x}\right).$$ Replacing $x$ by $ix$: $$\frac{\pi }{\sin(i\pi x)}=\int_{0}^{1}\frac{t^{-ix}+t^{ix-1}}{1+t}\,dt=\sum_{k=1}^{\infty }(-1)^{k-1}\left(\frac{1}{k-ix}+\frac{1}{k-1+ix}\right)$$ $$=\sum_{k=1}^{\infty }(-1)^{k-1}\left(\frac{k+ix}{k^2+x^2}+\frac{k-1-ix}{(k-1)^2+x^2}\right)=\sum_{k=1}^{\infty }(-1)^{k-1}\left(\frac{k}{k^2+x^2}+\frac{k-1}{(k-1)^2+x^2}\right)$$ $$\quad+\,i\sum_{k=1}^{\infty }(-1)^{k-1}\left(\frac{x}{x^2+k^2}-\frac{x}{x^2+(k-1)^2}\right).$$ Shifting the index in the second sum (its $k=1$ term contributes $-\frac{1}{x}$), the imaginary part becomes $$i\left[-\frac{1}{x}+\sum_{k=1}^{\infty }(-1)^{k-1}\frac{x}{x^2+k^2}+\sum_{k=1}^{\infty }(-1)^{k-1}\frac{x}{x^2+k^2}\right]=i\left[-\frac{1}{x}+2\sum_{k=1}^{\infty }(-1)^{k-1}\frac{x}{x^2+k^2}\right].$$ $$\therefore \quad \frac{\pi }{\sinh(\pi x)}=\frac{1}{x}-2\sum_{k=1}^{\infty }(-1)^{k-1}\frac{x}{k^2+x^2}.$$
Given any $10$ consecutive positive integers , does there exist one integer which is relatively prime to the product of the rest ? Given any $10$ consecutive positive integers , does there exist one integer which is relatively prime to the product of the rest ?
Notice that one integer $x$ being coprime to the product of the rest is equivalent to it being coprime to each other $y$ in the set, which is equivalent to it sharing no prime divisors. This gives immediately that: $x$ cannot be divisible by $2$, $3$, or $5$, since for those primes, either $x+p$ or $x-p$ would have to be in the interval, given the interval has length $10$. Clearly, we also have, for any prime $p$ equal to at least $11$ that if $x$ is in the interval $x+p$ and $x-p$ are not (and nor is any other multiple of $p$). Thus, if $x$ is also not divisible by $7$, it must be coprime to all the other integers in the desired interval. However, any interval of length $10$ has at least one integer in it which is not divisible by $2$, $3$, $5$, or $7$. In particular, notice that there will be: $5$ integers divisible by $2$. At most $2$ odd integers divisible $3$. At most $1$ odd integer divisible by $5$. At most $1$ odd integer divisible by $7$. Implying at least one integer is not divisible by any of those primes and hence satisfies the desired condition.
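A short brute-force sweep (illustrative only) confirms the claim for the first couple of thousand windows:

```python
from math import gcd, prod

def window_has_coprime_element(start, length=10):
    nums = range(start, start + length)
    total = prod(nums)
    # x is coprime to the product of the rest iff gcd(x, total // x) == 1
    return any(gcd(x, total // x) == 1 for x in nums)

print(all(window_has_coprime_element(s) for s in range(1, 2001)))  # True
```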
I assume the integer has to be one of the 10. In which case I can think of cases in which there definitely is one - if there is any prime in the sequence greater than or equal to 11, for example, or any integer which is a product of such primes (e.g. $143=13\cdot 11$). Whether this is true for all such sets of 10 consecutive numbers, I don't know.
Example: satisfying $E(X_{n+1}\mid X_n)=X_n$ but not a martingale I am wondering if there is such a sequence of random variables $(X_n)_{n=0}^\infty$ such that $\mathbb{E}(X_{n+1}\mid X_n)=X_n$ for all $n\geq0$ but which is not a martingale with respect to the filtration $\mathcal{F_n}=\sigma(X_0, \dots, X_n)$. I would really appreciate if you could show me such an example.
Let $(Y_j)_{j \in \mathbb{N}}$ be a sequence of identically distributed independent random variables such that $\mathbb{E}Y_j=0$. For some fixed $N \in \mathbb{N}$ we define $$\begin{align*} X_n &:= \sum_{j=1}^n Y_j \qquad \text{for all} \, \, n \leq N \\ X_{n} &:= \sum_{j=1}^N Y_j + Y_1 - Y_2 = X_N+ Y_1-Y_2 \qquad \text{for all} \, \, n >N. \end{align*}$$ For $n \leq N$ and $n>N+1$, the condition $$\mathbb{E}(X_{n} \mid X_{n-1}) = X_{n-1}$$ is obviously satisfied. For $n=N+1$, we have $$\mathbb{E}(X_{N+1} \mid X_N) = X_N + \mathbb{E}(Y_1 \mid X_N) - \mathbb{E}(Y_2 \mid X_N). $$ Since $(Y_j)_{j \in \mathbb{N}}$ is identically distributed and independent, we have $$\mathbb{E}(Y_1 \mid X_N) = \mathbb{E}(Y_2 \mid X_N)$$ and therefore $$\mathbb{E}(X_{N+1} \mid X_N) = X_N.$$ On the other hand, $$\begin{align*} \mathbb{E}(X_{N+1} \mid \mathcal{F}_N) &=X_N + 2\underbrace{\mathbb{E}(Y_1 \mid \mathcal{F}_N)}_{\mathbb{E}(X_1 \mid \mathcal{F}_N)=X_1} - \underbrace{\mathbb{E}(Y_1+Y_2 \mid \mathcal{F}_N)}_{\mathbb{E}(X_2 \mid \mathcal{F}_N) = X_2} \\ &= X_{N+1} \neq X_N. \end{align*}$$ Intuition: It is widely known that the process $$S_n := \sum_{j=1}^n Y_j$$ can be used to model a fair game; the outcome of the $j$-th round is given by $Y_j$. Now we change the rules of our (fair) game: After $N$ rounds the game is stopped; in the final round the player gains the outcome of the first round, but loses the outcome of the second round. There are two cases: The player is very drunk and has already forgotten the outcomes of the first two rounds. In this case, from the point of view of our drunken player, the (changed) game is still fair - it looks like another two rounds of our (original) game. If the gambler is still sober, then he remembers the outcome of the first two rounds and can calculate the outcome of the final round explicitly (there is no randomness, given the information up to time $N$!), i.e. $$\mathbb{E}(X_{N+1} \mid \mathcal{F}_N) = X_{N+1}.$$
Remember that in the definition of a martingale you should have $E|X_n| < \infty$. So, it is enough to take a sequence which doesn't have a first moment. Put $X_n = X$ where $X$ follows a Cauchy distribution. It satisfies your condition, trivially, but it is not a martingale because the variables don't have a first moment.
How was the vector magnitude derived? The magnitude of an $n$-vector is defined as: $$ \sqrt{a_1^2+a_2^2+...+a_n^2} $$ or for those that prefer sigma notation: $$ \sqrt{\sum_{i=1}^n a_i^2} $$ How would this have been derived? Or was this one of those cases where mathematicians went by trial and error to find a formula that seemed to work and then proved it later?
Note that for a vector $[a_1]\in\mathbb{R}$, the magnitude is obviously $\sqrt{a_1^2}=|a_1|$. For a vector $\begin{bmatrix}a_1\\a_2\end{bmatrix}\in\mathbb{R}^2$, using Pythagoras, the magnitude is $\sqrt{a_1^2+a_2^2}$. Then for a vector $\begin{bmatrix}a_1\\a_2\\a_3\end{bmatrix}\in\mathbb{R}^3$, consider the image: In this case, $x=a_1$, $y=a_2$ and $z=a_3$. By Pythagoras, the length of $AC$ is $\sqrt{a_1^2+a_2^2}$. Using Pythagoras again to find $AB$, the magnitude of our vector, we get: $$AB=\sqrt{AC^2+a_3^2}=\sqrt{\sqrt{a_1^2+a_2^2}^2+a_3^2}=\sqrt{a_1^2+a_2^2+a_3^2}$$ Similarly for $\begin{bmatrix}a_1\\a_2\\a_3\\a_4\end{bmatrix}\in\mathbb{R}^4$, we get: $$\sqrt{\sqrt{\sqrt{a_1^2+a_2^2}^2+a_3^2}^2+a_4^2}=\sqrt{\sqrt{a_1^2+a_2^2+a_3^2}^2+a_4^2}=\sqrt{a_1^2+a_2^2+a_3^2+a_4^2}$$ While this doesn't provide a rigorous proof for an n-dimensional formula in the euclidean metric, it provides some intuition into how you generalise the Pythagorean formula to dimensions higher than 2. You can prove the general case with induction, which I will leave to you if you want. I hope this helps!
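The same one-coordinate-at-a-time folding is easy to express in code; this little sketch checks that it agrees with the direct formula:

```python
import math

def magnitude(v):
    # Fold in one coordinate per step, exactly as in the nested-Pythagoras
    # argument above: acc <- sqrt(acc**2 + a**2).
    acc = 0.0
    for a in v:
        acc = math.hypot(acc, a)
    return acc

v = [1.0, 2.0, 2.0]
print(magnitude(v))                          # 3.0
print(math.sqrt(sum(a * a for a in v)))      # 3.0 as well
```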
Just try labelling the lengths of each of the vectors here and you'll answer the question for yourself:
To prove the inequality:- $\frac{4^m}{2\sqrt{m}}\le\binom{2m}{m}\le\frac{4^m}{\sqrt{2m+1}}$ Problem Statement:- Prove:-$$\dfrac{4^m}{2\sqrt{m}}\le\binom{2m}{m}\le\dfrac{4^m}{\sqrt{2m+1}}$$ My Attempt:- We start with $\binom{2m}{m}$ (well that was obvious), to get $$\binom{2m}{m}=\dfrac{2^m(2m-1)!!}{m!}$$ Now, since $2^m\cdot(2m-1)!!\lt2^m\cdot2^m\cdot m!\implies \dfrac{2^m\cdot(2m-1)!!}{m!}\lt 4^m$ $$\therefore \binom{2m}{m}=\dfrac{2^m(2m-1)!!}{m!}\lt4^m$$ Also, $$2^m\cdot(2m-1)!!\gt2^m\cdot(2m-2)!!\implies 2^m(2m-1)!!\gt2^m\cdot2^{m-1}\cdot(m-1)!\\ \implies \dfrac{2^m\cdot(2m-1)!!}{m!}\gt\dfrac{4^m}{2m}$$ So, all I got to was $$\dfrac{4^m}{2m}\lt\binom{2m}{m}\lt4^m$$ So, if anyone can suggest some modifications to my proof to arrive at the final result, or post a whole different non-induction-based proof, that would be great.
Taking the product of the ratios of the terms gives $$ \binom{2n}{n}=\prod_{k=1}^n4\frac{k-1/2}{k}\tag{1} $$ Bernoulli's Inequality says $$ \sqrt{\frac{k-1}k}\le\frac{k-1/2}{k}\le\sqrt{\frac{k-1/2}{k+1/2}}\tag{2} $$ Applying $(2)$ to $(1)$, we get $$ \frac{4^n}{2\sqrt{n}}\le\binom{2n}{n}\le\frac{4^n}{\sqrt{2n+1}}\tag{3} $$ In this answer, it is shown that $$ \frac{4^n}{\sqrt{\pi\left(n+\frac13\right)}}\le\binom{2n}{n}\le\frac{4^n}{\sqrt{\pi\left(n+\frac14\right)}}\tag{4} $$ which is a much tighter estimate.
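A quick numerical sanity check of $(3)$ (illustrative only) over a few values of $n$:

```python
from math import comb, sqrt

for n in (1, 2, 5, 10, 50, 200):
    c = comb(2 * n, n)
    lower = 4**n / (2 * sqrt(n))
    upper = 4**n / sqrt(2 * n + 1)
    print(n, lower <= c <= upper)   # True for each n
```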
You can use the Stirling formula for the lower bound: $$ \binom{2m}{m}=\frac{(2m)!}{(m!)^2}= \frac{\left(\frac{2m}{e}\right)^{2m}\sqrt{2\pi2m}}{\left(\left(\frac{m}{e}\right)^{m}\sqrt{2\pi m}\right)^2}(1+o(1))\ge 2^{2m}\frac{\sqrt{4\pi m}}{2\pi m}=\frac{4^{m}}{\sqrt{\pi m}}\ge\frac{4^{m}}{2\sqrt{m}}. $$
What space curves can this theorem describe? We were given the following theorem in our Vector Calculus class: THM: For a space curve $R$ which does not pass through the origin, and which has a second derivative, the following are equivalent: 1) $R^{\prime\prime} \parallel R$ at all points 2) $R \times R^\prime = C$ where $C$ is a constant vector 3) There is a constant vector $C$ such that either: a) $C \ne 0$, the curve is in the plane through 0 which is perpendicular to $C$, and the position vector $R(t)$ sweeps out area at the constant rate $\frac{\left | C \right |}{2}$ or b) $C = 0$, and the curve is confined to a line through 0 My question is this: what curves fulfill any of these conditions? Obviously, any curve confined to a line works, as does any curve confined to an ellipse or hyperbola. Are there any other functions it could apply to, or just these conic sections?
Might as well have the curve in the $xy$ plane. Kepler, and for that matter classical electromagnetism, has acceleration $r''$ parallel to position $r,$ with magnitude proportional to $1/|r|^2.$ Gravity gives ellipses; repulsion of like electric charges gives, I suppose, hyperbolas. Let position be $(x(t), y(t))$ and, as usual, take $r^2 = x^2 + y^2.$ The Law of Gravity is $$ (x'', y'') = - (x,y)/(x^2 + y^2). $$ How about The Law of Jagy, $$ (x'', y'') = - (x,y)/(x^2 + y^2)^2. $$ It will not give ellipses in general. On the other hand, there is one circle, $(\cos t, \sin t).$
What's the correct way to factorize $x \sin(\frac{\pi}{x})$? I am inspired by Euler's factorization of $\sin(x)$ as an infinite product. I was trying to apply it to the expression $x \sin(\frac{\pi}{x})$. Let $f(x) = x \sin{\frac{\pi}{x}}$. My findings were as follows: $f(x) = 0$ at infinitely many points in the closed interval $[-1,1]$, namely at $x = 0$ and $x = \dfrac{1}{n}$, $n \in \mathbb{Z}$, $n \neq 0$. $\therefore$ we can write $f(x) = A (x - 0)(x-1)(x+1)(x-\dfrac{1}{2})(x+\dfrac{1}{2})...$ $$ \implies f(x) = A x \prod_{n=1}^{\infty}\left(x^2-\dfrac{1}{n^2} \right)$$ We can solve for $A$ by putting $x=2$; we get $A = \frac{1}{\prod\left(4 - \dfrac{1}{n^2}\right)}$ So we get $$f(x) = x\prod\left(\dfrac{x^2 - \frac{1}{n^2}}{4 - \frac{1}{n^2}}\right)$$ Now if we expand $f(x)$ in terms of its Taylor series, we get a polynomial in powers of $\frac{1}{x}$, whereas on the LHS we get powers of $x$. I can't equate coefficients as a result of this approach. Where did I go wrong?
I guess it is non-factorisable. We can factorize the RHS, but we cannot proceed to the step of equating coefficients of terms of similar powers. Findings: the function isn't differentiable at $x = 0$; because of the above fact, a Taylor series of the function is impossible.
Dirac delta convolution with function I've run into a bit of a snag, and thought some more talented mathematicians could maybe help. I am trying to do the following integral: $$S(x,t) = \int I(z)\delta(x-G(z,t)) \mathrm{d}z,$$ where $G(z,t)$ is a function which 'pushes' the original function $I(z)$ into $S(x,t)$ at some later time. I've tried using some Dirac delta identities but have not had much success. Any help would be very much appreciated. Thank you.
Have you tried to use the decomposition of the "composite" delta function $\delta(f(x))$, http://en.wikipedia.org/wiki/Dirac_delta_function#Composition_with_a_function In your case you have $$\delta(x - G(z,t)) = \sum_{i} \frac{\delta(z-z_i)}{\left|\partial_z G(z,t)\right|_{z=z_i}}$$ where the sum goes over the solutions $z_i(x,t)$ of the equation $G(z,t) = x$, so that $$S(x,t) = \sum_{i} \frac{I(z_i)}{\left|\partial_z G(z,t)\right|_{z=z_i}}$$ Potential problems might arise at points where $\partial_z G(z,t)|_{z=z_i} = 0$.
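One can check the formula numerically by smearing the delta into a narrow Gaussian; the choices $G(z)=z^2-1$, $I(z)=\cos z$ and $x=0$ below are just an illustrative test case (roots $z=\pm1$, with $|\partial_z G|=2$ there):

```python
import numpy as np

I = lambda z: np.cos(z)
z = np.linspace(-3, 3, 2_000_001)
eps = 1e-3

# Narrow Gaussian standing in for delta(x - G(z)) at x = 0, G(z) = z**2 - 1.
delta = np.exp(-(1 - z**2)**2 / (2 * eps**2)) / (eps * np.sqrt(2 * np.pi))

lhs = np.trapz(I(z) * delta, z)
rhs = I(1.0) / 2 + I(-1.0) / 2     # sum of I(z_i)/|G'(z_i)| over both roots
print(lhs, rhs)                     # both ≈ cos(1) ≈ 0.5403
```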
You'd think of $\int f(x) \delta(x) \; \mathrm{d} x = f(0)$ as picking just the value of $f$ where $\delta$'s argument is zero. I.e., in this case the result is $I(z)$ wherever $G(z, t) = x$.
Finding a closed form to a minimum of a function This is an attempt to find a closed form for the minimum of the function: let $0<x<1$ and define $$g(x)=x^{2(1-x)}+(1-x)^{2x}$$ Denote by $x_0$ the abscissa of the minimum. Miraculously, using Slater's inequality for convex functions I have found that: define $f(x)=x^{2(1-x)}$; then $$\lim_{x\to x_0}\Bigg(0.5+\frac{(x-1)f'(x)-xf'(1-x)}{f'(x)+f'(1-x)}\Bigg)=0$$ And by the definition of the derivative: $$\lim_{x\to x_0}g'(x)=0$$ For the first limit see here; to compare with the second limit see here. My question: with these two equations can we hope to find a nice closed form? Any help is greatly appreciated. Thanks in advance, because it's a hard nut. Little update: well, I have got a down-vote; there is no mistake if we use the natural logarithm for the first limit.
Partial results since I am still working the problem. Using the Inverse Symbolic Calculator, the point $x_*$ where the derivative cancels seems to be very close to $$x_*=10 \,(\gamma \, K_1(1)){}^{\Gamma \left(\frac{1}{4}\right)}$$ which numerically is $$x_*=0.216453828\qquad \implies g(x_*)=0.99066450008687554$$ while the exact results are $$x_*=0.216453839\qquad \implies g(x_*)=0.99066450008687550$$ A much better approximation seems to be $$x_*=\frac{97+800 e-307 e^2}{-263-205 e+113 e^2}$$ which is in an absolute error of $2.87 \times 10^{-19}$; for this value, $g(x_*)$ is in an absolute error of $2.76 \times 10^{-28}$.
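For reproducibility, here is a little mpmath sketch (assuming mpmath is available) that pins down $x_*$ and $g(x_*)$ to high precision by solving $g'(x)=0$ numerically:

```python
from mpmath import mp, mpf, diff, findroot

mp.dps = 25
g = lambda x: x**(2*(1 - x)) + (1 - x)**(2*x)

# Root of the numerical derivative near the minimum.
x_star = findroot(lambda x: diff(g, x), mpf('0.216'))
print(x_star)      # 0.2164538...
print(g(x_star))   # 0.9906645000868755...
```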
Because $x_0$ is the abscissa of the minimum we know that $g'(x_0)=0$, so let us calculate the function $g'(x)$: $$g'(x) = \bigl(x^{2(1−x)} \bigr)'+\bigl((1−x)^{2x}\bigr)' = \bigl(e^{2(1−x) \ln{(x)}} \bigr)'+\bigl(e^{2x \ln{(1−x)}}\bigr)' = \Bigl[x^{2(1−x)}\Bigl(\frac{2(1-x)}{x}-2\ln(x) \Bigr)\Bigr]+\Bigl[(1-x)^{2x}\Bigl(2\ln(1-x)-\frac{2x}{1-x}\Bigr)\Bigr] = 2\Bigl[x^{2(1−x)}\Bigl(\frac{1}{x}-1-\ln(x)\Bigr)+(1-x)^{2x}\Bigl(\ln(1-x)-\frac{x}{1-x}\Bigr)\Bigr]$$ Now because $g'(x_0)=0$: $$ 2\Bigl[x_0^{2(1−x_0)}\Bigl(\frac{1}{x_0}-1-\ln(x_0)\Bigr)+(1-x_0)^{2x_0}\Bigl(\ln(1-x_0)-\frac{x_0}{1-x_0}\Bigr)\Bigr]=0$$ so we can get: $$x_0^{2(1−x_0)}\Bigl(\frac{1}{x_0}-1-\ln(x_0)\Bigr)=(1-x_0)^{2x_0}\Bigl(\frac{x_0}{1-x_0}-\ln(1-x_0)\Bigr)$$ $$$$ The solutions are: $x_0 = 0.216... \ , \ 0.783... \ \ and \ \ 0.5$, but you can easily find $x$ such that $g(x)<g(0.5)$, so $0.5$ is the max point and $x_0 = 0.216... \ , \ 0.783...$ are the minimum points. You can see the solution here: https://www.wolframalpha.com/input/?i=-2+x%5E%281+-+2+x%29+%28x+%2B+x+log%28x%29+-+1%29+-+2+%281+-+x%29%5E%282+x+-+1%29+%28x+%2B+%28x+-+1%29+log%281+-+x%29%29+%3D0.
Show there exist integers $a$ and $b$ such that $a^2+b^2\equiv -1\mod p$ I'm asked to show there exist integers $a$ and $b$ such that $a^2+b^2\equiv -1\mod p$ for any odd prime $p$. The solution starts by saying that the subsets $\{a^2\}$ and $\{-1-b^2\}$ of $\mathbb Z/p\mathbb Z$ both contain $\frac{p-1}{2}$ elements. Why is this? What even is the subset $\{a^2\}$? Is this related to how there are $\frac{p-1}{2}$ quadratic residues and $\frac{p-1}{2}$ quadratic non-residues $\mod p?$
Let $A=\{a^2 : a\in \mathbb{Z}/p\mathbb{Z}\}$ and $B=\{-1-b^2:b\in\mathbb{Z}/p\mathbb{Z}\}=-1-A$. Then $A$ and $B$ have the same number of elements, namely $\frac{p-1}{2}+1=\frac{p+1}{2}$ (the $\frac{p-1}{2}$ nonzero quadratic residues, together with $0 \in A$). Since $|A|+|B|=p+1>p$, the sets $A$ and $B$ cannot be disjoint; any common element gives $a^2 \equiv -1-b^2$, i.e. $a^2+b^2\equiv -1 \pmod p$.
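For small primes, a brute-force search (purely illustrative) finds explicit witnesses:

```python
def find_ab(p):
    # Return (a, b) with a^2 + b^2 ≡ -1 (mod p).
    squares = {a * a % p: a for a in range(p)}
    for b in range(p):
        target = (-1 - b * b) % p
        if target in squares:
            return squares[target], b

for p in (3, 5, 7, 11, 13, 101):
    a, b = find_ab(p)
    print(p, (a, b), (a * a + b * b) % p)   # last value is p - 1 ≡ -1
```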
This is not necessarily true. First, Fermat's two-square theorem states that if $a^2 + b^2 = p$ where $p$ is prime, then $p = 1\pmod 4$. In particular if $p$ is not prime, then it is a product of prime powers congruent to $\{0,1,2\} \pmod 4$. Consider the solutions of $a^2 + b^2 \bmod 8$. It is trivial that $\{0, 1, 4, 5\} \bmod 8$ are solutions. $\{2,6\} \bmod 8$ are possible solutions. Any integer $p=1\pmod 4$ gives $2p=2\pmod 8$, and $p=3\pmod 4$ gives $2p=6\pmod 8$. The latter, however, cannot be a solution since there are factors $p=3\pmod 4$ dividing $a^2+b^2$, a contradiction. The solutions are therefore $a^2 + b^2 = \{0, 1, 2, 4, 5\} \bmod 8$ and $a^2 + b^2 + 1 = \{1, 2, 3, 5, 6\} \bmod 8$. Which does not represent all numbers.
Maximum value of $f(x, y, z) = e^{xyz}$ in the domain x+y+z = 3. I need the maximum value of $f(x, y, z) = e^{xyz}$ on the domain $x+y+z = 3$. Please help me on this, as I could not find how to solve this one. Thanks in advance.
Assuming $x,y,z \ge 0$ $e^{xyz}$ is maximum if $xyz$ is maximum $AM \ge GM\\ \implies \dfrac{x+y+z}{3} \ge\sqrt[3]{xyz}\\ \implies \sqrt[3]{xyz} \le 1\\ \implies xyz \le 1$ Maximum value of $e^{xyz}=e^{1}=e$
For $e^{xyz}$ the domain is all $(x,y,z)$, so the domain is the whole space.
If $x+y+z+w=29$ where x, y and z are real numbers greater than 2, then find the maximum possible value of $(x-1)(y+3)(z-1)(w-2)$ If $x+y+z+w=29$ where $x$, $y$ and $z$ are real numbers greater than 2, then find the maximum possible value of $(x-1)(y+3)(z-1)(w-2)$. $(x-1)+(y+3)+(z-1)+(w-2)=x+y+z+w-1=28$ Now $x-1=y+3=z-1=w-2=7$ since the product is maximum when the numbers are equal. My answer came out as $6*10*6*5=1800$ but the answer is $2401$. What am I doing wrong? And also, how will we get the answer $2401$?
Let $$f(x,y,z,w) = (x-1)(y+3)(z-1)(w-2)$$ and $$g(x,y,z,w) = x + y + z + w - 29$$ We want to $$\max\{f(x,y,z,w)\}$$ subject to: $$g(x,y,z,w) = 0, \ \ \ x,y,z > 2$$ Let \begin{align*} \mathcal{L}(x,y,z,w,\lambda) &= f(x,y,z,w) + \lambda g(x,y,z,w)\\ &= (x-1)(y+3)(z-1)(w-2) + \lambda(x + y + z + w - 29) \end{align*} Then $$\nabla \mathcal{L}(x,y,z,w,\lambda) = 0$$ yields $4$ equations: \begin{equation}{\tag{1}} (y+3)(z-1)(w-2) + \lambda = 0 \end{equation} \begin{equation}{\tag{2}} (x-1)(z-1)(w-2) + \lambda = 0 \end{equation} \begin{equation}{\tag{3}} (x-1)(y+3)(w-2) + \lambda = 0 \end{equation} \begin{equation}{\tag{4}} (x-1)(y+3)(z-1) + \lambda = 0 \end{equation} Now when you set these equal to each other pairwise, you find another four equations expressing $y$ in terms of $x$, $z$, $w$: \begin{equation}{\tag{5}} y = x - 4 \end{equation} \begin{equation}{\tag{6}} y = z - 4 \end{equation} \begin{equation}{\tag{7}} y = y \end{equation} \begin{equation}{\tag{8}} y = w - 5 \end{equation} Now, we see that $$(y+4) + y + (y + 4) + (y+5) = 29 \Rightarrow y = 4$$ Then it is trivial to find $x,z,w$ by plugging $y=4$ into the equations above. Hope that helps!
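One can verify the resulting candidate symbolically; the sketch below (with sympy, assuming it is available) checks that $(x,y,z,w)=(8,4,8,9)$ with $\lambda=-343$ makes all four gradient equations vanish and gives $f=2401$:

```python
import sympy as sp

x, y, z, w, lam = sp.symbols('x y z w lam')
f = (x - 1) * (y + 3) * (z - 1) * (w - 2)
L = f + lam * (x + y + z + w - 29)

point = {x: 8, y: 4, z: 8, w: 9, lam: -343}   # lambda = -7**3
print([sp.diff(L, v).subs(point) for v in (x, y, z, w)])  # [0, 0, 0, 0]
print(f.subs(point))                                      # 2401
```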
$x=8, y=4, z=8, w=9$ is the solution. Edit: your solution is also correct; I don't know why you are multiplying $6*10*6*5$. $x−1=y+3=z−1=w−2=7$ means $7*7*7*7=2401$.
If the order in a set doesn’t matter, can we change order of, say, $\Bbb{N}$? I’m given to understand that the order of the elements of a set doesn’t matter. So can I change the order of the set of natural numbers or any set of numbers ( $\mathbb{W,Z,Q,R}$ for that matter) as follows? $$ \mathbb{N} = \{ 1,2,5,4,3,\cdots \} $$ $$ \mathbb{N}= \{3,5,\cdots,78,1,9\} $$ If yes, is this is really problematic? If not, then why not? PS: I know this is a silly question, but this has been in the back of my mind for a really long time.
It's true that sets are not ordered. As to whether you can 'change' the order, you cannot change something that is not there. However you can define any ordering on them you want. For instance, we can order the naturals the usual way $$0,1,2,3,\ldots$$ or we can define an ordering where all the even numbers come first in their usual order, then the odd numbers $$ 0,2,4,6,\ldots, 1,3,5,7,\ldots.$$ There are many, many possibilities. Also, the ordering needs to be defined unambiguously so we know exactly the order relationship between any two elements. For instance, I don't know what you mean when you write $$ \{3,5,...,78,1,9\}.$$ It's clear that you mean $9$ comes last (it is not a problem for an ordering to have a greatest element, even though in the two orderings I gave above, there was no greatest element), but I have no idea where $2$ goes in this ordering. If you just wrote this out of the blue, I wouldn't even be able to tell it was an ordering of the whole set of natural numbers and not just a subset. Edit Henning mentions an example in the comments that I think deserves mention in the answer, to reinforce the fact that there are many possibilities. Any enumeration of a countably infinite ordered set induces an order on the natural numbers. So, from the usual ordering of the rationals and an enumeration of the rationals, we get an ordering on the natural numbers that is dense, i.e. between any two numbers lies infinitely many others. We can’t even try to communicate this ordering as a list with some ellipses.
A set does not have order, but an enumeration of a set may have, and even non-enumerable sets like the real numbers may still be orderable. You can reorder any listing of a finite set in any sense of the word. However, the definition of an infinite set is that it can be placed into bijection with a proper subset of itself. An enumeration is a bijection with natural numbers. So if your enumeration "starts" with a bijection to a proper subset, you'll be running into logical problems. For example, ordering a listing of the naturals by starting with all odd numbers first is not going to work: $$\{1, 3, 5, \dots, 2, 4, 6, \dots\}$$ does not work because you'll never actually get to 2: your "enumeration" will run out of steam doing just the odd numbers.
Math constants generalization Does any generalization of "famous" math constants (like $\pi$ or $e$) exist? I know how those constants are useful, but I do not know what property makes them useful. Is there any definition of those numbers which can include some additional numbers? (Those additional numbers can be part of any super-set of $\mathbb R$ or $\mathbb C$.) What property of those constants is making them "special"?
Perhaps you may be interested in ring of periods, recently developed by Kontsevich and Zagier, generalizing the constants you mentioned. More details in the Wikipedia article and its references. According to Kontsevich and Zagier, "all classical constants are periods in the appropriate sense".
Well, we can calculate $\pi$ by $$\pi = \lim_\limits{n\to \infty} \frac{1}{2} * n * \sin{(\frac{n}{360})}$$ And $e$ by using $$e = \sum_{n=0}^\infty (\frac{1}{n!})$$ Those two definitions are somewhat the proof that they can't actually be calculated exactly. However, since they do occur in nature, they are very special. Also, the beautiful equation $$e^{i * \pi} + 1 = 0$$ states that they are related to each other and to $\sqrt{-1}$ (an imaginary number), which, once again, makes both of these very special.
Combination of quadratic and cubic series I'm an eighth-grader and I need help to answer this math problem (homework). Problem: Calculate $$\frac{1^2+2^2+3^2+4^2+...+1000^2}{1^3+2^3+3^3+4^3+...+1000^3}$$ Attempt: I know how to calculate the quadratic sum using the formula from here: Combination of quadratic and arithmetic series but how do I calculate the cubic sum? How do I calculate the series without using a calculator? Is there any intuitive way like in the previous answer? Please help me. Grazie!
We have identities for sums of powers like these. In particular: $$1^2 + 2^2 + \dots + n^2 = \frac{n(n+1)(2n+1)}{6}$$ $$1^3 + 2^3 + \dots + n^3 = \frac{n^2(n+1)^2}{4}$$ The rest is just a bit of arithmetic.
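To spell out the closing bit of arithmetic: with $n=1000$, $$\frac{\;n(n+1)(2n+1)/6\;}{\;n^2(n+1)^2/4\;}=\frac{2(2n+1)}{3n(n+1)}=\frac{2\cdot 2001}{3\cdot 1000\cdot 1001}=\frac{4002}{3003000}=\frac{667}{500500}.$$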
It is a quadratic sequence: $1,4,9,16,25,36,\dots$ Its first term is $a=1$, the first difference is $d=4-1=3$, and the second (constant) difference is $c=2$. There is a sum formula for any quadratic sequence: $$S_n=\frac{n}{6}\left[(n-1)3d+(n-1)(n-2)c\right]+an.$$ This works for any quadratic sequence, including the sum of squares. I also have a sum formula for a cubic sequence. Let $b$ be the first difference, $c$ the second difference and $d$ the third difference; $a$ is the first term and $n$ is the number of terms. Put all values into the formula below to get the sum of any cubic sequence: $$S_n=\frac{n}{24}\left[(n-1)12b+(n-1)(n-2)4c+(n-1)(n-2)(n-3)d\right]+an.$$ Both formulas are 100% working.
Centralizer of element in $SO(3)$ I'm asked to find an element $g$ in $SO(3)$ such that the centralizer of $g$, $Z(g)$, is a disconnected subgroup of $SO(3)$ whose identity component is isomorphic to $SO(2)$ (maximal torus). I attempted to turn this into a geometry question. The fact that $g$ commutes with $SO(2)$ means $g$ must be a 2D rotation around a certain axis, but then the centralizer will just be $SO(2)$, which is connected. So I actually think such an element does not exist. Am I missing something?
Define: $$g := \begin{pmatrix} 1 & 0 & 0\\ 0 & -1 & 0\\ 0 & 0 & -1 \end{pmatrix}\in SO(3)$$ The eigenspace of $1$ is one-dimensional and generated by $e_1$. If $h\in SO(3)$ commutes with $g$, then: $$ghe_1=hge_1=he_1$$ Since $\|he_1\|=\|e_1\|=1$, this implies $he_1=\pm e_1$. Therefore, $h$ is of one of the following forms for an appropriate $\phi\in\mathbb R$: $$h = \begin{pmatrix} 1 & 0 & 0\\ 0 & \cos\phi & -\sin\phi\\ 0 & \sin\phi & \cos\phi \end{pmatrix}\text{ or } h = \begin{pmatrix} -1 & 0 & 0\\ 0 & \cos\phi & \sin\phi\\ 0 & \sin\phi & -\cos\phi \end{pmatrix}$$ Clearly every matrix of one of those forms is an element of $SO(3)$ and commutes with $g$. The connected components of the centraliser of $g$ correspond to the cases $he_1=+e_1$ and $he_1=-e_1$ respectively.
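For a quick numerical sanity check, a minimal sketch assuming numpy (the helper names h1 and h2 for the two components are mine):

    import numpy as np

    def h1(phi):  # component with h e1 = +e1, a copy of SO(2)
        c, s = np.cos(phi), np.sin(phi)
        return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

    def h2(phi):  # component with h e1 = -e1
        c, s = np.cos(phi), np.sin(phi)
        return np.array([[-1, 0, 0], [0, c, s], [0, s, -c]])

    g = np.diag([1.0, -1.0, -1.0])
    for h in (h1(0.7), h2(0.7)):
        assert np.allclose(g @ h, h @ g)           # h commutes with g
        assert np.allclose(h @ h.T, np.eye(3))     # h is orthogonal
        assert np.isclose(np.linalg.det(h), 1.0)   # det = 1, so h is in SO(3)
    print("both families lie in the centraliser of g")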
You are right. Moreover, in any connected compact Lie group $G$ the following statement is true. Let $S$ be a connected Abelian subgroup of $G$, and let $g\in G$ commute with all elements of $S$. Then there is a torus $T$ containing $g$ and $S$. This is Proposition 4.25 from Adams' book 'Lectures on Lie groups'. This is in case you are familiar with Lie group theory.
Regarding the radius of convergence and its equality to a certain limit Let $f$ be a holomorphic function on the open unit disk $\mathbb{D}$, and suppose that $f$ cannot be extended holomorphically to any open set $\Omega$ containing $\overline{\mathbb{D}}$. Let $f(z) = \sum_{n=0}^\infty a_n z^n$ be the power series expansion of $f$ around $0$, and assume that $a_n \ne 0$ for all but finitely many $n$. Show that if the limit $\lim_{n \to \infty} |a_n / a_{n+1}|$ exists, then it is equal to $1$. Is this supposed to be obvious? It pretty much says that the radius of convergence around $0$ is $R=1$. So of course within the disc the series converges and outside the disc it diverges; whatever happens on the boundary depends on the function, hence convergence there is not guaranteed. Also, what is up with the assumption that $a_n \ne 0$ for all but finitely many $n$? Why doesn't it ensure that the limit in question exists? Edit. Alright, I looked at some literature (including proofs) and figured out that the limit $\lim_{n \to \infty} |a_n / a_{n+1}|$ is indeed nothing more than the radius of convergence itself. However, I am still lacking an elegant argument... Ideally I wanted to get from the general case $\lim_{n \to \infty} |a_n / a_{n+1}|=L$ to $L \ge 1$ and $L \le 1$ by using the definition $R = \sup \{ r \ge 0 \mid |a_n| r^n \text{ is bounded} \}$. It would involve $|a_n| L^n$ and whatnot. I am stuck with this at the moment.
Due to holomorphicity the power series converges within the unit disk $\mathbb{D}$; however, it is impossible to find any disk of bigger radius (and centered at the origin) with the same property of convergence. (In conclusion, the radius of convergence of the power series in question is exactly $R = 1$.) Now let us assume the existence of the limit $\lim_{n \to \infty} |a_n / a_{n+1}| = L$. If $r<L$, then $|a_k / a_{k+1}| > r$ eventually. Consequently $|a_k| > r|a_{k+1}|$ for all $k \ge N$ and \begin{equation*} |a_N| r^N \ge |a_{N+1}| r^{N+1} \ge |a_{N+2}| r^{N+2} \ge |a_{N+3}| r^{N+3} \ge \ldots. \end{equation*} Hence the sequence $|a_k|r^k$ is bounded and $r \le 1$, because $1=R$ is the supremum of all such $r$. Since $r<L$ is arbitrary we also have $L \le 1$. If $s>L$, then $|a_k / a_{k+1}| < s$ eventually. Consequently $|a_k| < s|a_{k+1}|$ for all $k \ge N$ and \begin{equation*} |a_N| s^N \le |a_{N+1}| s^{N+1} \le |a_{N+2}| s^{N+2} \le |a_{N+3}| s^{N+3} \le \ldots. \end{equation*} Hence the terms $a_k z^k$ do not converge to $0$ for $|z| \ge s$ (and the whole power series does not converge there), so $s \ge 1$. Since $s>L$ is arbitrary we also have $L \ge 1$. It is then the obvious inequality $1 \le L \le 1$ that gives us \begin{equation*} \lim_{n \to \infty} \left| \frac{a_n }{ a_{n+1}} \right| = 1. \end{equation*}
The quotient criterion says: $$\limsup_{n\to\infty}\frac{|a_{n+1}z^{n+1}|}{|a_{n}z^{n}|}<1\implies\sum_n^\infty a_n z^n\text{ converges}$$ $$\liminf_{n\to\infty}\frac{|a_{n+1}z^{n+1}|}{|a_{n}z^{n}|}>1\implies\sum_n^\infty a_n z^n\text{ diverges}$$ (Note the differences in $\limsup$ and $\liminf$ due to the different approaches...) As the limit exists, that boils down to: $$|z|<\lim\frac{|a_n|}{|a_{n+1}|}\iff\sum_n^\infty a_n z^n\text{ converges}$$ That is, the radius of convergence is given precisely by: $$R=\lim\frac{|a_n|}{|a_{n+1}|}$$
Why is $\int_{0}^{\infty} \frac {\ln x}{1+x^2} \mathrm{d}x =0$? We had our final exam yesterday and one of the questions was to find out the value of: $$\int_{0}^{\infty} \frac {\ln x}{1+x^2} \mathrm{d}x $$ Interestingly enough, using the substitution $x=\frac{1}{t}$ we get $$-\int_{0}^{1} \frac {\ln x}{1+x^2} \mathrm{d}x = \int_{1}^{\infty} \frac {\ln x}{1+x^2} \mathrm{d}x $$ and therefore $\int_{0}^{\infty} \frac {\ln x}{1+x^2} \mathrm{d}x = 0 $. I was curious to know about the theory behind this interesting (surprising, even!) example. Thank you.
When I see an $1 + x^2$ in the denominator it's tempting to let $\theta = \arctan(x)$ and $d\theta = {1 \over 1 + x^2} dx$. When you do that here the integral becomes $$\int_0^{\pi \over 2} \ln(\tan(\theta))\,d\theta$$ $$= \int_0^{\pi \over 2} \ln(\sin(\theta))\,d\theta - \int_0^{\pi \over 2} \ln(\cos(\theta))\,d\theta$$ The two terms cancel because $\cos(\theta) = \sin({\pi \over 2} - \theta)$. Also, if you do enough of these, you learn that doing the change of variables from $x$ to ${1 \over x}$ converts a ${dx \over 1 + x^2}$ into $-{dx \over 1 + x^2}$, so it becomes one of the "tricks of the trade" for integrals with $1 + x^2$ in the denominator. An example: show this trick can be used to show that the following integral is independent of $r$: $$\int_0^{\infty} {dx \over (1 + x^2)(1 + x^r)}$$
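As a numerical cross-check of the cancellation, a minimal sketch assuming numpy and scipy: splitting the integral at $1$, the two halves should come out equal and opposite.

    import numpy as np
    from scipy.integrate import quad

    f = lambda x: np.log(x) / (1 + x * x)
    left, _ = quad(f, 0, 1)        # negative half, equal to -G (Catalan's constant)
    right, _ = quad(f, 1, np.inf)  # positive half, equal to +G
    print(left, right, left + right)  # the sum is ~ 0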
You demonstrated yourself why the result is 0 (by making the change $u = \frac{1}{x}$). I think you can view it as the same as this integral: $\displaystyle\int_{-\infty}^{\infty}x\, dx = \displaystyle\lim_{X\rightarrow +\infty} \int_{-X}^X x\,dx=0 $. Note that I am not sure that $\int_{-\infty}^{\infty}x\, dx$ is actually defined, but this also applies to your integral $\displaystyle\int_0^{\infty}\displaystyle\frac{\ln x}{1+x^2}dx$.
How do you define functions for non-mathematicians? I'm teaching a College Algebra class in the upcoming semester, and only a small portion of the students will be moving on to further mathematics. The class is built around functions, so I need to start with the definition of one, yet many "official" definitions I have found are too convoluted (or poorly written) for general use. Here's one of the better "light" definitions I've found: "A function is a relationship which assigns to each input (or domain) value a unique output (or range) value." This sounds simple enough on the surface, but putting myself "in the head" of a student makes me pause. It's almost too compact, with potentially ambiguous words for the student (relationship? assigns? unique?). Here's my personal best attempt, in 3 parts. Each part of the definition would include a discussion and examples before moving to the next part. A relation is a set of links between two sets. Each link of a relation has an input (in the starting set) and an output (in the ending set). A function is a relation where every input has one and only one possible output. I'm somewhat happier here: starting with a relation gives some natural examples and makes it easier to impart the special importance of a function (which is "better behaved" than a relation in practical circumstances). But I'm also still uneasy ("links"? A set between sets?) and I wanted to see if anyone had a better solution.
For fun, I like to liven-up the "black box"/machine view of a function by putting a monkey into the box. (I got pretty good at chalkboard-sketching a monkey that looked a little bit like Curious George, but with a tail.) Give the Function Monkey an input and he'll cheerfully give you an output. The Function Monkey is smart enough to read and follow rules, and make computations, but he's not qualified to make decisions: his rules must provide for exactly one output for a given input. (Never let a Monkey choose!) You can continue the metaphor by discussing the monkey's "domain" as the inputs he understands (what he can control); giving him an input outside his domain just confuses and frightens him ... or, depending upon the nature of the audience, kills him. (What? You gave the Reciprocal Monkey a Zero? You killed the Function Monkey!) Of course, it's probably more appropriate to say that the Function Monkey simply ignores such inputs, but students seem to like the drama. (As warnings go, "Don't kill the Function Monkey!" gets more attention than "Don't bore the Function Monkey!") The Function Monkey comes in handy later when you start graphing functions: imagine that the x-axis is covered with coconuts (one coconut per "x" value). The Function Monkey strolls along the axis, picks up a "x" coconut, computes the associated "y" value (because that's what he does), and then throws the coconut up (or down) the appropriate height above (or below) the axis, where it magically sticks (or hovers or whatever). So, if you ever want to plot a function, just "Be a Function Monkey and throw some coconuts around". (Warning: Students may insist that that's not a coconut the Monkey is throwing.) Further on, you can make the case that we're smarter than monkeys (at least, we should strive to be): We don't always have to mindlessly plot points to know what the graph of an equation looks like; we can sometimes anticipate the outcome by studying the equation. This motivates manipulating an equation to tease out clues about the shape of its graph, explaining, for instance, our interest in the slope-intercept form of a line equation (and the almost-never-taught intercept-intercept form, which I personally like a lot), the special forms of conic section equations (which aren't all functions, of course), and all that stuff related to translations and scaling. Parametric equations can be presented as a way to let the Function Monkey plot elaborate curves ... both in the plane and in space (and beyond). All in all, I find that the Function Monkey can make the course material more engaging without dumbing it down; he provides a fun way to interpret the definitions and behaviors of functions, not a way to avoid them. Now, is the Function Monkey too cutesy for a College Algebra class? My high school students loved him, even at the Calculus level. One former student told me that he would often invoke the Function Monkey when tutoring his college peers. If it's clear to the students that the instructor isn't trying to patronize them, the Function Monkey may prove quite helpful.
One way I heard a lecturer describe functions recently was that of the CD player analogy.
Don't understand this question [table of ordered pairs, find missing values] I am very confused by this question that I have encountered while practicing for my GED. Over the last 6 months or so I've taken 3 official practice tests, and every time I took a test I encountered at least one problem similar to this one below. Picture of the problem in question which I printed out from the GED test http://i.imgur.com/ADjaT1Z.jpg "Add one number to each column of the table so that it shows a function. Do not repeat an ordered pair that is in the table." _________ | x | y | --------- | 6 | 6 | | 3 | 8 | | 9 | 12| | 7 | 8 | | ? | ? | --------- Options: [ 3] [ 6] [ 7] [ 8] [ 9] [12] I'm very confused by this. I even went as far as asking a teacher at a community education class in town and she couldn't figure it out either. And since I haven't encountered problems like these before, I don't even know what to search for on the internet, which makes it all the more frustrating. I suppose I need to somehow create a function out of this table and probably just plug in each number until it works, though that doesn't seem entirely proper to me. I also noticed a pattern in the Y column, though I'm not sure how to use this. Any information or explanation would be greatly appreciated...
Forgetting for a time the possible options: just be lazy if they really want an answer. You have a small number of data points $n$, so fit a polynomial of degree $(n-1)$ which will go through all $n$ points. In the case you gave, $$y=-\frac{x^3}{9}+\frac{22 x^2}{9}-\frac{47 x}{3}+36$$ So, for any new value of $x$, you will get a $y$. For sure, the problem could be more complicated if $y$ must be an integer. So $(0,36)$, $(12,8)$, $(15,-24)$, $(16,-44)$, $(18,-102)$. If they expect $x$ and $y$ to be positive, plot the function: it is always negative if $x>14$. Since you do not repeat an ordered pair that is in the table, this leaves you a very limited choice. Check your list of options.
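The lazy fit itself is easy to reproduce; a minimal sketch assuming numpy:

    import numpy as np

    x = [6, 3, 9, 7]
    y = [6, 8, 12, 8]
    coeffs = np.polyfit(x, y, 3)     # degree n-1 = 3 polynomial through all 4 points
    print(coeffs)                    # approximately [-1/9, 22/9, -47/3, 36]
    print(np.polyval(coeffs, 12))    # approximately 8, so (12, 8) fits the pattern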
Update to a previous post: A function can have only one output value for every input value; therefore, 3, 6, 7, and 9 cannot be used as input, so the input must be 8 or 12. If you graph the 4 ordered pairs and connect the points you can see the shape. A parabola (I think). Now you must come up with a new ordered pair. I choose (8, 9); it fits on my curve. You can't use 12 as x because the y value would be more than 12, and 12 is the largest value you can use; y would probably be 15 or more. (8, 9) is the answer. Good luck!
Is the symmetric group on the natural numbers countable? I guess it is too difficult a question to ask about the cardinality of $S_{\mathbb{N}}$, so I would like to ask whether it is countable or not. I tried to prove it is uncountable, somewhat mimicking Cantor's diagonal argument, but failed.
Here's a very silly argument to show $|S_\mathbb{N}| \geq 2^{\aleph_0}$. A fact from calculus (the Riemann rearrangement theorem) tells us that a non-absolutely convergent series whose terms converge to zero can be reordered to take the value of any real. So, for each real $\alpha$, there is a permutation $(n_i)$ such that $$ \sum_i\frac{(-1)^{n_i}}{n_i}= \alpha $$ So there must be at least as many permutations as reals!
Here is a slightly analytic approach, that I think isn't too similar to what anyone else has written. Since $\mathbb Q$ is countably infinite, it suffices to show that there are uncountably many bijections $\mathbb Q \to \mathbb Q$. We exhibit an injection $\mathbb R_{>0} \to S_{\mathbb Q}$. Given a positive real $\alpha > 0$, define \begin{align*} f_\alpha: \mathbb Q &\to \mathbb Q \\ q &\mapsto \begin{cases} q & \text{if $\lvert q \rvert \le \alpha$} \\ -q & \text{if $\lvert q \rvert > \alpha$} \end{cases} \end{align*} Then $f_\alpha$ is an involution, so certainly a bijection. Moreover, it is clear that $\sup \{q \in \mathbb Q \mid f_\alpha(q) = q\} = \alpha$. This shows that $\alpha \mapsto f_\alpha$ is an injection. So indeed $S_{\mathbb Q}$ is uncountable. I personally like this proof a lot. I find it much more intuitive when things end up being uncountable "because of $\mathbb Q$", and particularly when it's because of a Dedekind-cut-like density-related construction like this, than when it's because of a diagonal argument. (Fun fact: the set of all bijections in $S_{\mathbb N}$ having no fixed points is also uncountable!)
How to find $x$ in some trigonometric equations How to solve these trigonometric equations? $$\tan2x-\sin4x = 0$$ and $$\tan2x = \sin x$$ I can't do this, please help me! I did this: \begin{align} \tan2x &= 2\sin x\\ \\ \frac{\sin2x}{\cos2x} &= \tan x \end{align}
$\sin {2x} = \cfrac {2\tan x} {1+\tan^2 x}$ Just to be clear on how to use this substitution: the first equation $\tan2x=\sin4x$ becomes $$\tan {2x} = \sin {4x} = \cfrac {2\tan 2x} {1+\tan^2 2x}$$ Then we have $\tan {2x} = 0$, or $1+\tan^2 {2x} = 2$, in which case $\tan {2x} = \pm 1$. ... from which point we are reduced to considering simple cases.
Equate with $\tan2x$; then you will get $\sin x=\sin 4x$. Now $x=n\pi+(-1)^n\cdot 4x$, where $n$ is a natural number. For any $\sin x=\sin y$ we have $x=n\pi + (-1)^n y$. You can prove this yourself very easily.
Proof: $\lim_{n\to\infty} (1-\frac{1}{n})^{-n} = e$ If $e:=\lim_{n\to\infty} (1 + \frac{1}{n})^n$, prove that $$ \lim_{n\to\infty} \left(1-\frac{1}{n}\right)^{-n} = e $$ without using the property that says: if $\lim_{n\to\infty} a_n = \infty$ and $\lim_{n\to\infty} x_n = x$, then $\lim_{n\to\infty}(1+\frac{x_n}{a_n})^{a_n} = e^x$... I've tried rewriting $1-\frac{1}{n}$ and computing $\lim_{n\to\infty}(1-\frac{1}{n})^n = 1 / \lim_{n\to\infty}(1+\frac{1}{n})^n $, but I couldn't prove it. Any hint? Thanks in advance.
\begin{gather} \lim\limits_{n\to\infty}\left(1-\dfrac{1}{n}\right)^{-n}=\lim\limits_{n\to\infty}\left(\dfrac{n-1}{n}\right)^{-n}= \\ =\lim\limits_{n\to\infty}\left(\dfrac{n}{n-1}\right)^{n}=\lim\limits_{n\to\infty}\left[\left(1+\dfrac{1}{n-1}\right)^{n-1}\left(1+\dfrac{1}{n-1}\right)\right]=e \end{gather}
To prove that $(1-\frac{1}{x})^{-x} \to e$ as $x$ goes to infinity, you would have to prove that $\ln\left((1-\frac{1}{x})^{-x}\right)\to 1$ under the same conditions. I have taken the natural log of both sides. I will now use $y=1/x$ for convenience. In other words, I'd have to prove that, with $x=1/y$ and $y$ going to zero, $$\frac{1}{-y}\ln(1-y) =\frac{\ln(1-y)}{-y}\to 1.$$ We have an indeterminate form here ($0/0$), so we can use L'Hôpital's rule. We take the derivatives of both numerator and denominator: $\frac{-1/(1-y)}{-1}$, and this gives $1$ when $y$ goes to zero. In other words, when $x$, being $1/y$, goes to infinity, $\ln\left((1-\frac{1}{x})^{-x}\right)\to 1$ in the limit, which had to be proven.
Checking Understanding of DFA Regular Operations - Intersection and Star I'm currently taking a Logics course, and trying to understand the regular operations, intersection and star. I have a question regarding the work I have done so far. Given the following information: $L_A$ and $L_B$ are regular languages defined by DFAs $A$ and $B$. $N_A$ and $N_B$ are the number of states in DFAs $A$ and $B$. What are the HIGHEST number of states one would need in DFAs for the languages $L_A \cap L_B$ ${L_A}^*$ (star) INTERSECTION operation I have understood it so that the state of the new automaton A∩B will be the pairs of states in A and B. That is: Na∩b = Na x Nb. That is, the HIGHEST number of states needed in the DFA for language A∩B is the product of the number of states in A (Na) and B (Nb). Is this correct? Another question: how does this number differ from the highest number of states in the CONCATENATION operation? STAR operation: For the star operation, I'm at a loss. Any advice on how I can approach this?
Your question is formally incorrect because you are using the same notation for a language and an automaton, and also because your notation $Na \cap b$ is not clear at all. However, the meaning of your question is clear and the answer is known, but not easy. So let me first reformulate your question in a more rigorous way and then give the answer. The (state) complexity of a regular language $L$ is the number of states $c(L)$ of its minimal DFA. Question 1. Estimate $\max\{ c(L_1 \cap L_2) \mid c(L_1) = n_1 \text{ and } c(L_2) = n_2 \}$ Question 2. Estimate $\max\{ c(L_1L_2) \mid c(L_1) = n_1 \text{ and } c(L_2) = n_2 \}$ Question 3. Estimate $\max\{ c(L^*) \mid c(L) = n \}$ Answers. \begin{align*} \max\{ c(L_1 \cap L_2) \mid c(L_1) = n_1 \text{ and } c(L_2) = n_2 \} &= n_1n_2\\ \max\{ c(L_1L_2) \mid c(L_1) = n_1 \text{ and } c(L_2) = n_2 \} &= n_12^{n_2} - 2^{n_2-1}\\ \max\{ c(L^*) \mid c(L) = n \} &= 2^{n-1}+2^{n-2} \end{align*} These results hold for languages over an alphabet containing at least two letters (and even 3 letters for the product). For a one-letter alphabet, the values are also known, but are different. References S. Yu, Regular languages, in Handbook of language theory, G. Rozenberg and A. Salomaa (eds.), vol. 1, ch. 2, pp. 679–746, Springer, 1997. For related questions, I recommend reading the articles of H. Gruber, M. Holzer, M. Kutrib and H. Petersen, where you will find plenty of further references.
My naive estimates for the number of states needed: Concatenation $L_AL_B$ can be detected by "chaining" the $A$ and $B$ automata, thus not more than $N_A+N_B$ states. Intersection needs running $A$ and $B$ "simultaneously", thus using the product automaton with not more than $N_A N_B$ states (see the sketch below). The Kleene hull ${L_A}^* = L^0 \cup L \cup L^2 \cup \ldots$ would need finite repetition of $A$. We could extend it into an $\epsilon$-NFA with extra $\epsilon$-transitions from the final states to the initial state. The number of states would not increase. Turning this into a DFA would result in a DFA with up to $2^{N_A}$ states. All those automata could be minimized, so lower upper bounds are possible.
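To make the intersection construction concrete, here is a minimal Python sketch of the product automaton (the encoding of a DFA as a transition dict keyed by (state, symbol) is an illustrative choice, not from either answer):

    def product_dfa(states_a, delta_a, start_a, accept_a,
                    states_b, delta_b, start_b, accept_b, alphabet):
        """DFA for L(A) ∩ L(B): run A and B in lockstep on pair states."""
        states = [(p, q) for p in states_a for q in states_b]   # at most N_A * N_B
        delta = {((p, q), c): (delta_a[(p, c)], delta_b[(q, c)])
                 for (p, q) in states for c in alphabet}
        accept = {(p, q) for p in accept_a for q in accept_b}
        return states, delta, (start_a, start_b), accept

    def accepts(delta, start, accept, word):
        state = start
        for c in word:
            state = delta[(state, c)]
        return state in accept

    # example: A accepts words with an even number of a's, B accepts words ending in b
    da = {(0, 'a'): 1, (0, 'b'): 0, (1, 'a'): 0, (1, 'b'): 1}
    db = {(0, 'a'): 0, (0, 'b'): 1, (1, 'a'): 0, (1, 'b'): 1}
    S, D, s0, F = product_dfa([0, 1], da, 0, {0}, [0, 1], db, 0, {1}, 'ab')
    print(accepts(D, s0, F, 'aab'))  # True: even number of a's and ends in b
    print(accepts(D, s0, F, 'ab'))   # False: odd number of a's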
If $abc=1$, then $\frac{a^{n+2}}{a^n+(n-1)b^n}+\frac{b^{n+2}}{b^n+(n-1)c^n}+\frac{c^{n+2}}{c^n+(n-1)a^n} \geq \frac{3}{n} $ Let $a$, $b$ and $c$ be positive real numbers with $abc=1$. Prove that $$ \frac{a^{n+2}}{a^n+(n-1)b^n}+\frac{b^{n+2}}{b^n+(n-1)c^n}+\frac{c^{n+2}}{c^n+(n-1)a^n} \geq \frac{3}{n} $$ for each integer $n$. I have used the Cauchy-Schwarz and Jensen inequalities, but I am stuck. I need some ideas and advice on this problem. Induction would be cruel.
Using AM-GM we get (hereon $\sum$ denotes cyclic sums; the key estimate is $a^n+(n-1)b^n\ge n\sqrt[n]{a^nb^{n(n-1)}}=nab^{n-1}$): $$\sum \frac{a^{n+2}}{a^n+(n-1)b^n} =\sum \left( a^2- (n-1)\frac{a^2b^n}{a^n+(n-1)b^n}\right) \\ \ge \sum\left( a^2- (n-1)\frac{a^2b^n}{n \cdot a\cdot b^{n-1}}\right)= \sum a^2-\frac{n-1}n\sum ab$$ So it is enough to show that $$n \sum a^2 \ge (n-1)\sum ab+3$$ which follows from $\sum a^2 \ge \sum ab$ and $\sum a^2 \ge 3$ by AM-GM.
You can see that every term has the value $\frac{1}{n}$ in the case $a = b = c = 1$. Try to prove that changing one of the variables (let $a > 1$, $b < 1$, $c < 1$ be the first case, $a > 1$, $b > 1$, $c < 1$ the second one, and $a > 1$, $b < 1$, $c > 1$ the third one) increases the total sum.
Searching for a function starting exactly constant and approaching another constant For the default of an R API parameter I seek a function that has the property of yielding a good guess. I want the function to be defined on $\mathbb Z^+$ (but there is no reason not to define it on $\mathbb R^+$, I guess). I want it to be smooth. It should satisfy both $$\mathrm f(x_{small}) = 1 \ \forall x_{small} \in \left(0, k\right]$$ $$\lim_{x \rightarrow \infty} \mathrm f(x) = l$$ Edit, to be clear: $k$ and $l$ should be constants appearing in the function definition. E.g. with $k = 1000$ and $l = 0$, $f(x) = 1$ for all $x \in (0, 1000]$; then it should gradually decline and approach $0$. To simplify, the function can be written as: $$ \mathrm f(x) = \begin{cases} 1 & \text{if } x \le k \\ [\dots] & \text{otherwise} \end{cases} $$
One example is: $$f(x)=\frac{\sqrt{x-1}}{\sqrt{x-1}+1}$$
How was the quadratic formula created? I have tested the quadratic formula and I have found that it works, yet I am curious as to how it was created. Can anybody please tell me one of the ways that it was created?
We begin with the equation $ax^2 + bx + c = 0$, for which we want to find $x$. We divide through by $a$ first, and then bring the constant term to the other side: $$x^2 + \frac b a x = - \frac c a $$ Next, we complete the square on the left-hand side. Remember, to complete the square, you take half of the coefficient of the linear term, square it, and add it to both sides. This means we add $(b/2a)^2 = b^2/4a^2$ to both sides: $$x^2 + \frac b a x + \frac{b^2}{4a^2} = \frac{b^2}{4a^2} - \frac c a$$ The left-hand side factors as a result, and we combine the terms on the right-hand side by getting a common denominator: $$\left(x + \frac b {2a} \right)^2 = \frac{b^2 - 4ac}{4a^2}$$ We now take the square root of both sides: $$x + \frac b {2a} = \pm \sqrt{ \frac{b^2 - 4ac}{4a^2}}$$ Solve for $x$: $$x =- \frac b {2a} \pm \sqrt{ \frac{b^2 - 4ac}{4a^2}}$$ Recall that $\sqrt{a/b} = \sqrt a / \sqrt b ^{ \; \text{(note 1)}}$. Using this property, the denominator of our root becomes $2a$, giving a common denominator with $-b/2a$. Thus, $$x = \frac {-b \pm \sqrt{b^2 - 4ac}}{2a}$$ yielding the quadratic formula we all know and love. $^{ \; \text{(note 2)}}$ Footnotes: Note $(1)$ - The usual properties for roots, and exponents in turn (since $\sqrt[n] a = a^{1/n}$) that most people are familiar with, do not always hold. In particular, for example $$\sqrt{\frac a b} = \frac{\sqrt a}{\sqrt b} \;\;\;\;\; \sqrt{ab} = \sqrt{a} \cdot \sqrt{b}$$ do not hold if $a,b$ are complex numbers. They hold if $a,b$ are nonnegative real numbers (and, in the first property shown, $b \ne 0$). A well-known example of why the second does not hold involves $i$, the complex number such that $i^2 = -1$. If this property held, $$-1 = i^2 = \sqrt{-1} \sqrt{-1} = \sqrt{(-1)\cdot (-1)} = \sqrt{1} = 1$$ but $-1 \ne 1$. In light of this note, note that we do not necessarily have any problems splitting up the root as we do in the proof. Even if $a<0$, $a^2 > 0$ as a result (of course, we also assume $a,b,c$ are real numbers in this derivation). Note $(2)$ - As a note of interest, you are only guaranteed that $x$ is a real solution whenever the discriminant of the quadratic is nonnegative. The discriminant is the expression under the root; thus $$ax^2+bx+c = 0 \; \text{has real solutions if and only if} \; b^2-4ac \ge 0$$ In particular, if $b^2 - 4ac = 0$, then the solutions $x$ gives are the same (what is called a "double root" or a "root of multiplicity $2$"). For $b^2 - 4ac >0$, you are ensured two distinct real values for your solutions. If $b^2 - 4ac < 0$, then your solutions will instead be complex numbers. You will still have two distinct solutions, however.
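As a small illustration of the final formula in code, a sketch in Python (cmath is used so the $b^2 - 4ac < 0$ case from note (2) also works):

    import cmath

    def solve_quadratic(a, b, c):
        """Both roots of a*x**2 + b*x + c = 0 (requires a != 0)."""
        root = cmath.sqrt(b * b - 4 * a * c)   # complex sqrt handles disc < 0
        return (-b + root) / (2 * a), (-b - root) / (2 * a)

    print(solve_quadratic(1, -3, 2))   # ((2+0j), (1+0j)): two real roots
    print(solve_quadratic(1, -2, 1))   # double root at 1, discriminant 0
    print(solve_quadratic(1, 0, 1))    # (1j, -1j): complex pair, disc < 0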
The way to get it is actually a beautiful one. The original proof is different since algebra was still in an early stage of development, but here's how I would do it today: Let's consider the equation $ax^2+bx+c=0$. If $a=0$, we have a first-degree equation. Otherwise, our equation can be written as $x^2+\frac{b}{a}x +\frac{c}{a}=0$ On the other hand, $(x+\frac{b}{2a})^2=x^2+\frac{b}{a}x+\frac{b^2}{4a^2}$. This looks like an idea completely out of the air, but you can get it by trying different ones and checking which one works. As you can see, the first two terms look like those from the equation, so we can say that $x^2+\frac{b}{a}x = (x+\frac{b}{2a})^2 - \frac{b^2}{4a^2}$ Now go back to the equation and replace $x^2+\frac{b}{a}x$. We get that $(x+\frac{b}{2a})^2 - \frac{b^2}{4a^2} + \frac{c}{a} = 0$ From this equation, it's just basic algebra. $(x+\frac{b}{2a})^2 = \frac{b^2}{4a^2} - \frac{c}{a}$ Take the square root on both sides and enjoy your formula (unless I made a mistake, which I probably have).
Do we necessarily have that $W^{2, p}(I) \subset C^1(\overline{I})$ with compact injection? Let $I = (0, 1)$ and $p > 1$. Do we necessarily have that $$W^{2, p}(I) \subset C^1(\overline{I})$$ with compact injection?
Yes. In fact, we have the following more general result. Theorem (Adams, page 168). Let $\Omega$ be a domain in $\mathbb R^n$ and $\Omega_0$ be a bounded subdomain of $\Omega$. Let $j \geq 0$ and $m \geq 1$ be integers, and let $ 1 \leq p < \infty$. If $\Omega$ satisfies the strong local Lipschitz condition, then the following embedding is compact: $$W^{j+m,p}(\Omega)\subset C^j(\overline{\Omega_0}),\qquad \text{if}\quad mp>n.$$ Notes: If $\Omega$ is bounded, we may have $\Omega_0=\Omega$ in the statement of the theorem. The bounded subdomain $\Omega_0$ of $\Omega$ may be assumed to satisfy the cone condition if $\Omega$ does. The proof of your particular case can be done as follows. Fact 1. $W^{2,p}(I)\subset C^1(\overline{I})$ Proof: Take $u\in W^{2,p}(0,1)$. Define $$h(x)=\int_0^xu''(t)\;dt,\qquad g(x)=\int_0^xu'(t)\;dt.$$ Then, $$\int_0^1(u'-h)\varphi'=\int_0^1u'\varphi'-\int_0^1h\varphi'=-\int_0^1u''\varphi+\int_0^1u''\varphi=0,\quad\forall\ \varphi\in C^1_c(0,1)$$ and thus $u'-h=c$ for some constant $c$. This implies $u'=h+c\in C(\overline{I})$. Analogously, there is a constant $c_2$ such that $u=g+c_2\in C(\overline{I})$. So, $W^{2,p}(I)\subset C^1(\overline{I})$. Fact 2. The inclusion $W^{2,p}(I)\subset C^1(\overline{I})$ is continuous. Proof: Take $u\in W^{2,p}(I)$ and $x\in I$. Then, $$u'(x)=u'(0)+\int_0^xu''(t)\;dt$$ and thus $$|u'(x)|\leq |u'(0)|+\|u''\|_{L^1},\quad |u'(0)|\leq |u'(x)|+\|u''\|_{L^1}.\tag{1}$$ From the second inequality in $(1)$, $$|u'(0)|=\int_0^1|u'(0)|\;dt\leq \|u'\|_{L^1}+\|u''\|_{L^1}$$ So, from the first inequality in $(1)$, $$|u'(x)|\leq \|u'\|_{L^1}+2\|u''\|_{L^1}\leq c_0\|u\|_{W^{2,p}}$$ for some constant $c_0$. A similar argument shows that $$|u(x)|\leq \|u\|_{L^1}+2\|u'\|_{L^1}\leq c_0\|u\|_{W^{2,p}}.$$ As $x\in I$ is arbitrary, we get $$\|u\|_{C^1}=\sup_{x\in\overline{I}} |u(x)|+\sup_{x\in\overline{I}} |u'(x)|\leq 2 c_0\|u\|_{W^{2,p}}.$$ Fact 3. The inclusion $W^{2,p}(I)\subset C^1(\overline{I})$ is compact. Proof: Let $(u_n)$ be a bounded sequence in $W^{2,p}(0,1)$, say by a constant $M$. The sequence $(u'_n)$ is uniformly bounded because, from the proof of Fact 2, $|u'_n(x)|\leq c_0 M$ for all $n\in\mathbb{N}$ and all $x\in \overline{I}$. Also, $(u'_n)$ is equicontinuous because, from Hölder's inequality (with exponents $p$ and $p'=\frac{p}{p-1}$), $$|u'_n(x)-u'_n(y)|=\left|\int_y^xu_n''(t)\;dt\right|\leq \|u_n''\|_{L^p}\,|x-y|^{1-\frac{1}{p}}\leq M|x-y|^{1-\frac{1}{p}}$$ for all $n\in\mathbb{N}$ and all $x,y\in \overline{I}$ (note $1-\frac1p>0$ since $p>1$). So, the Arzelà–Ascoli Theorem implies that $(u'_n)$ has a subsequence $(u'_{n_k})$ that converges uniformly to some $g\in C(\overline{I})$. A similar argument shows that $(u_{n_k})$ has a subsequence $(u_{n_{k_m}})$ that converges uniformly to some $u\in C(\overline{I})$. It follows from Theorem 7.17 in Rudin's book that $u'=g$ and thus $(u_n)$ has a subsequence $(u_{n_{k_m}})$ that converges in $C^1(\overline{I})$.
$f: \mathbb{R^2} \to \mathbb{R}$ a $C^\infty$ function such that $f(x,0)=f(0,y)=0$ then exists $g$ such that $f(x,y)=xy\, g(x,y)$ I'm trying to solve the following exercise Let $f: \mathbb{R^2} \to \mathbb{R}$ be a $C^\infty$ function such that $f(x,0)=f(0,y)=0$ $\forall x,y \in \mathbb{R}$. Then there exists a $C^\infty$ function $g:\mathbb{R^2} \to \mathbb{R}$ satisfying $f(x,y)=xy\, g(x,y)$ $\forall x,y \in \mathbb{R}$ I thought first about expanding $f$ using Taylor's theorem; we would have $$f(x,y)=f(0,0)+f'(0,0)(x,y)+\frac{f''(0,0)(x,y)^2}{2!}+r(x,y),$$ but $f(0,0)=0$ and $f_x(0,0)=f_y(0,0)=0$ since $f$ is zero on the $x$ and $y$ axes. $f''(0,0)(x,y)^2=f_{xx}(0,0)x^2+2f_{xy}(0,0)xy+f_{yy}(0,0)y^2$; now we have a term that has a product of $x$ and $y$. But I don't know how (and if it is possible) to proceed from here.
We are expanding the hint by zhw. \begin{align} f(x, y) &= f(x, y) - f(0,0) \\ &= \int_0^1 \frac{d}{dt} f(tx, ty)\, \mathrm dt \\ &= \int_0^1 \left( x f_x (tx, ty) + y f_y (tx, ty) \right)\, \mathrm dt \\ &= x\int_0^1 f_x (tx, ty) \, \mathrm dt + y \int_0^1 f_y (tx, ty) \, \mathrm dt \end{align} Since $f(x, 0) = 0$, $f_x(tx, 0) = 0$. Thus \begin{align} \int_0^1 f_x (tx, ty)\, \mathrm dt &= \int_0^1 (f_x (tx, ty) - f_x(tx,0)) \,\mathrm dt \\ &= \int_0^1 \int_0^t \frac{d}{ds} f_x (tx , sy)\, \mathrm ds\, \mathrm dt \\ &= y \int_0^1 \int_0^t f_{xy} (tx, sy) \, \mathrm ds\, \mathrm dt \\ &:= y g_1(x, y) \end{align} Similarly, using $f(0, y) = 0$, $f_y(0, ty) = 0$. Thus $$ \int_0^1 f_y (tx, ty)\, \mathrm dt = x \int_0^1 \int_0^t f_{yx} (sx, ty) \, \mathrm ds\, \mathrm dt := x g_2(x, y).$$ Since $f$ is smooth, it is clear that $g_1, g_2$ are smooth. Thus $$ f(x, y) = xy g(x, y)$$ with $g = g_1 + g_2$.
Define $g(x,y)$ like this: for $x\neq 0 \land y \neq 0$, $g(x,y)=\frac{f(x,y)}{xy}$; $g(0,y_0)=\lim_{(x,y) \to (0,y_0)} \frac{f(x,y)}{xy}$; $g(x_0,0)=\lim_{(x,y) \to (x_0,0)} \frac{f(x,y)}{xy}$; $g(0,0)=\lim_{(x,y) \to (0,0)} \frac{f(x,y)}{xy}$. The above limits must exist. To show this we can use the fact that $f(x,y)$ is continuously differentiable. This means: $\frac{\partial}{\partial x}f(x,y)=\lim_{h \to 0} \frac{f(x+h,y)-f(x,y)}{h}=\lim_{(x',y') \to (x,y)} \frac{f(x',y')-f(x,y')}{x'-x}$, the last limit being independent of the chosen path, making $\frac{\partial}{\partial x}f(x,y)$ a continuous function. So for example let $y_0 \neq 0$: $$ g(0,y_0) = \lim_{(x,y) \to (0,y_0)} \frac{f(x,y)}{xy}= \lim_{(x,y) \to (0,y_0)} \frac{f(0+x,y)-f(0,y)}{xy}= \lim_{(x,y) \to (0,y_0)} \frac{1}{y}\left[\frac{\partial}{\partial x}f(x,y)\Big|_{(0,y_0)}\right]=\frac{1}{y_0} \frac{\partial}{\partial x}f(x,y)\Big|_{(0,y_0)} $$ Or: $$ g(0,0) = \lim_{(x,y) \to (0,0)} \frac{f(x,y)}{xy}= \lim_{(x,y) \to (0,0)} \frac{f(0+x,y)-f(0,y) -xf_x(x,0)}{xy}= \lim_{(x,y) \to (0,0)} \frac{\frac{f(0+x,y)-f(0,y)}{x} -f_x(x,0)}{y} = \lim_{(x,y) \to (0,0)} \frac{f_x(x,y) -f_x(x,0)}{y} =\frac{\partial}{\partial y}\frac{\partial}{\partial x}f(x,y)\Big|_{(0,0)} $$ And we know that $f(x,y)$ is $C^\infty$, so the partial derivatives exist at $x,y=0$. So the above definition gives a continuous function $g(x,y)$. To prove $g(x,y)$ is $C^\infty$ in the limit $x,y \to 0$ we can use a similar argument as above, knowing that $f(x,y)$ is $C^\infty$. Example showing $g(x,y)$ is continuously differentiable, using the fact that $f(x,y)$ is $C^\infty$: \begin{align*} g_x(0,y)=\frac{\partial}{\partial x'}g(x',y')\Big|_{(0,y)} &= \lim_{(x',y') \to (0,y)} \frac{g(x',y')- g(0,y')}{x'} \\ &= \lim_{(x',y') \to (0,y)} \frac{g(x',y')- \frac{1}{y'}f_x(0,y')}{x'} \\ &= \lim_{(x',y') \to (0,y)} \frac{\frac{f(x',y')}{x'y'}- \frac{1}{y'}f_x(0,y')}{x'} \\ &= \lim_{(x',y') \to (0,y)} \frac{\frac{f_x(x',y')\cdot x'}{x'y'}- \frac{1}{y'}f_x(0,y')}{x'} \\ &= \lim_{(x',y') \to (0,y)} \frac{1}{y'}\,\frac{ f_x(x',y') - f_x(0,y')}{x'}=\frac{1}{y}\,\frac{\partial^2}{\partial x'^2}f(x',y')\Big|_{(0,y)} \end{align*}
Two weighted coins, determining which has a higher probability of landing heads A friend of mine asked me the following question, and I am not sure how to solve it: You are given two weighted coins, $C_1$ and $C_2$. Coin $C_1$ has probability $p_1$ of landing heads and $C_2$ has probability $p_2$ of landing heads. The following experiment is performed: Coin $C_1$ is flipped 3 times, and lands heads 3 times. Coin $C_2$ is flipped 10 times, and lands heads 7 times. Based on this experiment, choose the coin which is more likely to have a higher probability of being heads. In other words, which is more likely: $p_1>p_2$ or $p_2>p_1$? Intuition tells me coin $C_1$ is the better choice, but this could be wrong, and I am wondering how you solve this in general. Consider the experiment: $C_1$ is flipped $n_1$ times and lands heads $m_1$ times, $C_2$ is flipped $n_2$ times and lands heads $m_2$ times. Thanks for the help. Edit: I think this might answer some questions: Suppose that the probabilities of the coins, $p_1$ and $p_2$, are chosen uniformly from $[0,1]$.
As Henry mentions, I think one needs some information about the prior distributions of the weights. Denote by $r$ the weight of a coin. Suppose that the weight has some prior distribution $g(r)$. Let $f(r|H=h, T=t)$ be the posterior probability density function of $r$ having observed $h$ heads and $t$ tails tossed. Bayes' Theorem tells us that: $$ f(r|H=h, T=t) = \frac{Pr(H=h|r, N = h+t)g(r)}{\int_0^1 Pr(H=h|r, N = h+t)g(r)\ dr}. $$ This should allow you to answer your question. Once you have the posterior pdf for each coin, just find their respective expected weights.
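Under the uniform prior suggested in the question's edit, the posterior for each coin's weight is a Beta distribution: $\mathrm{Beta}(4,1)$ for coin 1 (3 heads, 0 tails) and $\mathrm{Beta}(8,4)$ for coin 2 (7 heads, 3 tails). $\Pr(p_1>p_2)$ is then easy to estimate by simulation; a minimal sketch assuming numpy:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 1_000_000
    p1 = rng.beta(1 + 3, 1 + 0, n)   # coin 1 posterior: Beta(4, 1)
    p2 = rng.beta(1 + 7, 1 + 3, n)   # coin 2 posterior: Beta(8, 4)
    print((p1 > p2).mean())          # Monte Carlo estimate of Pr(p1 > p2)

Since the $\mathrm{Beta}(4,1)$ cdf is $t^4$, the exact value is $1-E[p_2^4]=1-\frac{8\cdot9\cdot10\cdot11}{12\cdot13\cdot14\cdot15}\approx 0.758$, so coin 1 is indeed the better guess under this prior.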
I agree with the beta approach, but given the question, I think it makes more sense to plot out the results and compare visually:

    x <- seq(0, 1, length = 1000)  # set x from 0 to 1 since we're looking at probabilities
    y1 <- dbeta(x, 4, 1)           # density based on a prior of [1,1] and a posterior of [3,0]
    y2 <- dbeta(x, 8, 4)           # density based on a prior of [1,1] and a posterior of [7,3]
    plot(x, y1, type = "l")        # plot density of coin 1 in black
    lines(x, y2, col = 2)          # plot density of coin 2 in red

This yields a plot of the two posterior densities. Think of the densities as representing the "probability of probability". Coin 1 (represented by the black line) has a higher probability of having a higher probability of showing heads. Coin 2 (represented by the red line) has a lower probability of having a higher probability of showing heads (or a higher probability of having a lower probability of showing heads...).
A very different, difficult geometry problem that needs help The problem: In $\triangle PQR$, $ST$ is the perpendicular bisector of $PR$, $PS$ is the bisector of $\angle QPR$, and the point $S$ lies on $QR$. If $|QS|=9$ and $|SR|=7$ and $|PR|=x/y$, where $x$ and $y$ are co-prime, then what is $x+y$? I can come up with these steps only: $\angle STR$ is equal to $90^\circ$, so $\triangle STR$ is a right triangle. The same holds for $\triangle STP$. In $\triangle PST$ and $\triangle STR$, $ST\cong ST$, $PT\cong TR$, and $\angle PTS= \angle STR$, so that $\triangle PST\cong\triangle STR$. Therefore, $|PS|=|SR|=7$, and $\angle SPT\cong\angle SRT$. That's the end. I have no clue how to solve the next steps, or how to solve the problem. I don't even know if the above steps lead toward an answer to the problem! So please help me see how I can solve this step. I am an 8th grader, so please be careful, so that I can understand the steps. Sorry, I can't add the figure.
$\angle SRP=\angle SPR = \angle SPQ=\theta,$ say. Then adding up the angles in $\triangle PQR$ gives $\angle Q = 180^{\circ}-3\theta,$ and adding up the angles in $\triangle PSQ$ gives $\angle PSQ = 2\theta=\angle QPR.$ That is, $\triangle PQR$ and $\triangle PSQ$ are similar. Corresponding sides of similar triangles are proportional, so we have $$ \frac{|QP|}{9}=\frac{16}{|QP|}$$ (On the left-hand side we have two sides of $\triangle PQS,$ and on the right-hand side two we have two sides of $\triangle PQR.)$ We conclude that $|QP|=12.$ Now using similarity once again, we have $$\frac{|PR|}{|PQ|}=\frac{7}{9}\implies |PR|=\frac{28}{3}$$ That is, $\boxed{x=28,\space y=3,\space x+y=31}$.
I think this question is contradictory. If you draw a line segment $PR$ and perpendicularly bisect it, and now draw $\angle QPR$ with $Q$ as an unfixed point, you will see that the values of $PT$, $PS$ and $\angle QPR$ remain the same but the size of $QS$ will differ, as $Q$ is an unfixed point along ray $PQ$. So the length of $QS$ is unwanted information. Hence, it is impossible to find $\frac{x}{y}$.
How many pairs of primes are there such that $pq < 10^6$? I'm trying to implement this function in my program, so I need to find the number of pairs $$(p, q)$$ such that both $p$, $q$ are prime numbers and: $$p\cdot q < 10^6$$ I'm not really good at number theory. I know that there are $78498$ primes under one million, but only some of those primes form products that are less than one million. Thanks in advance.
There are $$\sum_{\substack{p\text{ prim}\\p\,\le\, 10^6}}\pi\left(\frac{10^6}p\right)=419902$$ pairs, where $\pi(x)$ is the prime counting function. $10^6$ is not the product of two primes. So it suffices to count, for every prime $p\le 10^6$, the number of primes $q$ less than or equal to $10^6/p$, because then $pq<10^6$. Actually, we only have to check $p\leq 10^6/2$, as the smallest other prime to multiply with is $2$. I chose this form above because this was actually feasible to calculate on my computer using Mathematica. Mathematica certainly implements an efficient version of $\pi(x)$. Here is an even faster version to execute (motivated by Roddy): $$2\cdot\sum_{\substack{p\text{ prim}\\p\,\le\, 1000}}\pi\left(\frac{10^6}p\right)-\pi(1000)^2=419902.$$ It uses that at least one factor has to be at most $1000=\sqrt{10^6}$. So we sum only over primes $p\le 1000$. The factor $2$ then accounts for the flipped version $(p,q)\to(q,p)$. As this counts everything twice when $p,q\le 1000$, we have to subtract this in the end.
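If you want to compute the count directly in a program, here is a straightforward Python sketch mirroring the first sum above (a sieve plus prefix sums for $\pi$):

    def count_prime_pairs(limit=10**6):
        """Ordered pairs (p, q) of primes with p*q < limit."""
        n = limit // 2                            # limit // p never exceeds limit // 2
        is_prime = bytearray([1]) * (n + 1)
        is_prime[0] = is_prime[1] = 0
        for i in range(2, int(n ** 0.5) + 1):
            if is_prime[i]:
                is_prime[i * i::i] = bytearray(len(range(i * i, n + 1, i)))
        pi = [0] * (n + 1)                        # pi[k] = number of primes <= k
        for k in range(1, n + 1):
            pi[k] = pi[k - 1] + is_prime[k]
        # 10^6 is not a product of two primes, so pq <= limit and pq < limit agree
        return sum(pi[limit // p] for p in range(2, n + 1) if is_prime[p])

    print(count_prime_pairs())  # should reproduce the value 419902 from the answer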
Just make a list of prime numbers up to $5\times10^5$; call it $l_1$. Create a list to store $p$ and $q$; call it $l_2$. Iterate over the list of primes, and let the element you get at each iteration be $p$. Pass $l_1$ (with all elements less than $p$ removed) to a function along with $l_2$; the work of the function is as follows: iterate over the list starting from $p$, and let the element you get each time be $q$. Check at every step whether $pq$ is less than a million; if it is, store $p$ and $q$ in the list, and if it is not, break.
Find the derivatives of $f(x)=4/x^2$ and $g(t)=(t-5)/(1+\sqrt{t}\,)$ I have these two assignments: Find the derivatives of (a) $f(x)=4/x^2$ and (b) $g(t)=(t-5)/(1+\sqrt{t}\,)$ by using the definition $$\lim_{h \to 0} \frac{f(x+h)-f(x)}{h}=f'(x)$$ a) This is my attempt at (a); am I correct? $$\lim_{h \to 0} \frac{\left(\displaystyle\frac{4}{(x+h)^2}-\frac{4}{x^2}\right)}{h}=\frac{1}{h}\left(\displaystyle\frac{4}{(x+h)^2}-\frac{4}{x^2}\right)$$ Then I found the common denominator $$\begin{align*} \frac{4}{(x+h)^2}-\frac{4}{x^2} &=\frac{4x^2}{(x+h)^2x^2}-\frac{4(x+h)^2}{(x+h)^2x^2} \\[6pt] &=\frac{4x^2-4x^2-4h^2-8xh}{(x+h)^2x^2} \\[6pt] &=\frac{-4h^2-8xh}{(x+h)^2x^2} \end{align*}$$ Then I expand the denominator to $$\frac{-4h^2-8xh}{x^4+2x^3h+h^2x^2} =\frac{\left(-4h^2-8h\right)}{x^3+2x^3h+h^2x^2}\frac{1}{h} =\frac{\left(-4h^2-8h\right)}{hx^3+2x^3h+h^2x^2}$$ And $h$ goes out with: $$\frac{-4h^2-8}{x^3+2x^3h+h^2x^2} = \lim_{h \to 0}\frac{-4\cdot0^2-8}{x^3+2x^3\cdot 0+0^2\cdot x^2} = -\frac{8}{x^3}$$ I don't know how to do part (b).
First function $$f'(x)=\lim_{h \to 0} \frac{1}{h}\left(\frac{4}{(x+h)^2}-\frac{4}{x^2}\right)=\lim_{h \to 0}\frac{1}{h}\frac{-4h^2-8xh}{(x+h)^2x^2}=\lim_{h \to 0}\frac{1}{h}\frac{-4h(h+2x)}{(x+h)^2x^2}= \lim_{h \to 0}\frac{-4(h+2x)}{(x+h)^2x^2}= \frac{-8x}{x^4}= -\frac{8}{x^3}, $$ for all $x\neq 0$. Second function (in what follows we consider $t > 0$) $$f'(t)=\lim_{h \to 0} \frac{1}{h}\left(\frac{t+h-5}{1+\sqrt{t+h}}-\frac{t-5}{1+\sqrt{t}} \right) = \lim_{h \to 0} \frac{1}{h}\left(\frac{(t+h-5)(1+\sqrt{t})-(t-5)(1+\sqrt{t+h})}{(1+\sqrt{t+h})(1+\sqrt{t})}\right);$$ Now we simplify the numerator, arriving at $$f'(t)= \lim_{h \to 0} \frac{1}{h}\left(\frac{-(t-5)(\sqrt{t+h}-\sqrt{t})+h(1+\sqrt{t})}{(1+\sqrt{t+h})(1+\sqrt{t})}\right)= \lim_{h \to 0} \frac{1}{h}\frac{-(t-5)(\sqrt{t+h}-\sqrt{t})}{(1+\sqrt{t+h})(1+\sqrt{t})}+\frac{1}{h}\frac{h(1+\sqrt{t})}{(1+\sqrt{t+h})(1+\sqrt{t})}= \lim_{h \to 0} \frac{1}{h}\frac{-(t-5)(\sqrt{t+h}-\sqrt{t})}{(1+\sqrt{t+h})(1+\sqrt{t})}+\frac{1}{(1+\sqrt{t+h})}; $$ All we need is to use the "trick": $$\frac{1}{h}\frac{-(t-5)(\sqrt{t+h}-\sqrt{t})}{(1+\sqrt{t+h})(1+\sqrt{t})}= \frac{1}{h}\frac{-(t-5)(\sqrt{t+h}-\sqrt{t})}{(1+\sqrt{t+h})(1+\sqrt{t})}\frac{(\sqrt{t+h}+\sqrt{t})}{(\sqrt{t+h}+\sqrt{t})}= \frac{1}{h}\frac{-(t-5)h}{(1+\sqrt{t+h})(1+\sqrt{t})}\frac{1}{(\sqrt{t}+\sqrt{t+h})}= -\frac{(t-5)}{(1+\sqrt{t+h})(1+\sqrt{t})(\sqrt{t}+\sqrt{t+h})}. $$ In summary: $$f'(t)= \lim_{h \to 0} -\frac{(t-5)}{(1+\sqrt{t+h})(1+\sqrt{t})(\sqrt{t}+\sqrt{t+h})}+\frac{1}{(1+\sqrt{t+h})}= \\ -\frac{(t-5)}{2(1+\sqrt{t})^2\sqrt{t}}+\frac{1}{(1+\sqrt{t})}. $$
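Both limits can be double-checked with a computer algebra system; a sketch assuming sympy (the printed forms may look different but should simplify to the derivatives above):

    import sympy as sp

    x, t, h = sp.symbols('x t h', positive=True)
    print(sp.limit((4 / (x + h)**2 - 4 / x**2) / h, h, 0))        # -8/x**3
    g = (t - 5) / (1 + sp.sqrt(t))
    print(sp.simplify(sp.limit((g.subs(t, t + h) - g) / h, h, 0)))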
Let us cheat a little and first establish the general formula for the division: $$\frac{\left(\frac fg\right)(x+h)-\left(\frac fg\right)(x)}h=\frac{\frac{f(x+h)}{g(x+h)}-\frac{f(x)}{g(x)}}h\\=\frac{f(x+h)g(x)-f(x)g(x+h)}{g(x+h)g(x)h}\\=\frac{f(x+h)g(x)-f(x)g(x)+f(x)g(x)-f(x)g(x+h)}{g(x+h)g(x)h}\\ =\frac{(f(x+h)-f(x))g(x)-f(x)(g(x+h)-g(x))}{g(x+h)g(x)h}$$ By taking the limit $h\to0$, we have $$\left(\frac fg\right)'(x)=\frac{f'(x)g(x)-f(x)g'(x)}{g^2(x)}.$$ Now let's work on the numerators and denominators alone: $$(4)':\frac{4-4}h=0\to 0,$$ $$(x^2)':\frac{(x+h)^2-x^2}h=\frac{2xh+h^2}h=2x+h\to 2x,$$ $$(t-5)':\frac{(t+h-5)-(t-5)}h=1\to1,$$ $$(1+\sqrt t)':\frac{1+\sqrt{t+h}-1-\sqrt t}h=\frac{\sqrt{t+h}-\sqrt t}h=\frac{t+h-t}{(\sqrt{t+h}+\sqrt t)h}\to\frac1{2\sqrt t}.$$ Putting all together, $$\left(\frac4{x^2}\right)'=\frac{0.x^2-4.2x}{(x^2)^2}=-\frac8{x^3}.$$ $$\left(\frac{t-5}{1+\sqrt t}\right)'=\frac{1.(1+\sqrt t)-(t-5)\frac1{2\sqrt t}}{(1+\sqrt t)^2}=\frac{2\sqrt t+t+5}{2\sqrt t (1+\sqrt t)^2}.$$
How many possible orders are there? A tapas bar serves 15 dishes, of which 7 are vegetarian, 4 are fish and 4 are meat. A table of customers decides to order 8 dishes, possibly including repetitions. a) Calculate the number of possible dish combinations. b) The customers decide to order 3 vegetarian dishes, 3 fish and 2 meat. Calculate the number of possible orders. Progress. For a) I think that the answer would be $15^8$ as this would be the number of different ordered sequences of 8 elements from the 15 possible dishes.
a) First, we note that repetition is allowed, and the order in which we order the dishes is unimportant. Therefore, we use the formula ${k+n-1 \choose k}$. In this case, $n=15$ and $k=8$. So, the number of possible combinations of dishes is $${8+15-1 \choose 8} = {22 \choose 8} = 319770.$$ b) We will need to apply the above formula three times. For the vegetarian dishes, we have $n_V = 7$ and $k_V=3$. For the fish dishes, we have $n_F = 4$ and $k_F=3$. For the meat dishes, we have $n_M = 4$ and $k_M=2$. So, the number of possible combinations of dishes under these conditions is $${k_V+n_V-1 \choose k_V}{k_F+n_F-1 \choose k_F}{k_M+n_M-1 \choose k_M} = {9 \choose 3}{6 \choose 3}{5 \choose 2}=16800.$$
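Both counts are easy to verify in Python; a small sketch using only the standard library:

    from math import comb

    print(comb(8 + 15 - 1, 8))                    # a) 319770 multisets of 8 dishes
    print(comb(9, 3) * comb(6, 3) * comb(5, 2))   # b) 16800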
a) Given that all the dishes are different, then $$ N_{orders}=15^8 $$ b) In this case, we can calculate the number of different orders for each type and then combine them: $$ N_{orders} = 7^3 \times 4^3 \times 4^2 $$
Multivariable function: $x^3 + y^3 + e^{zw}$ - what to do after taking partial derivatives? I'm working on this problem. I'm kinda stuck and would be very grateful for any help. I have to compute the gradient of this function $$f(x,y,z,w) = x^3 + y^3 + e^{zw}$$ So, from what I know I should take the partial derivative with respect to each variable: $$\begin{align} \frac{\mathrm df}{\mathrm dx} &= 3x \\ \frac{\mathrm df}{\mathrm dy} &= 3y \\ \frac{\mathrm df}{\mathrm dz} &= we^z \\ \frac{\mathrm df}{\mathrm dw} &= ze^w \end{align}$$ My problem is: is that the right approach? What do I do from here?
You are right that you should take the partial derivative with respect to each variable, but all your derivatives are wrong. The correct derivatives are: $$\frac{\partial f}{\partial x} = 3x^2$$ $$\frac{\partial f}{\partial y} = 3y^2$$ $$\frac{\partial f}{\partial z} = we^{wz}$$ $$\frac{\partial f}{\partial w} = ze^{wz}$$ The gradient is then the vector valued function $$\nabla f(x,y,z,w) = \begin{bmatrix} \frac{\partial f}{\partial x} \\ \frac{\partial f}{\partial y} \\ \frac{\partial f}{\partial z} \\ \frac{\partial f}{\partial w} \end{bmatrix} = \begin{bmatrix} 3x^2 \\ 3y^2 \\ we^{wz} \\ ze^{wz} \end{bmatrix}$$
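A finite-difference sanity check of this gradient; a minimal sketch assuming numpy (the test point is an arbitrary choice):

    import numpy as np

    def f(v):
        x, y, z, w = v
        return x**3 + y**3 + np.exp(z * w)

    def grad_f(v):
        x, y, z, w = v
        e = np.exp(z * w)
        return np.array([3 * x**2, 3 * y**2, w * e, z * e])

    v = np.array([1.0, 2.0, 0.5, -0.3])
    eps = 1e-6
    numeric = np.array([(f(v + eps * e_i) - f(v - eps * e_i)) / (2 * eps)
                        for e_i in np.eye(4)])     # central differences
    print(np.allclose(numeric, grad_f(v)))         # True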
$$ \nabla f \stackrel{\text{def}}{=} \left( \frac{\partial f}{\partial x}, \frac{\partial f}{\partial y}, \frac{\partial f}{\partial z}, \frac{\partial f}{\partial w} \right).$$
find the derivative of an integral Find $f'(x)$ where $f(x)$ is the integral from ${\sqrt{x}}$ to $x$ of $e^x-e^{t^2}\, dt$. Is there an easy way to do this using the fundamental theorem of calculus? If I try to integrate w.r.t. $t$, then $e^{t^2}$ is a bit of a problem.
Apply differentiation under the integral sign, which is a formula derived from the FTC: \begin{align*} \frac{d}{dx}\int_{\sqrt x}^x(e^x-e^{t^2})dt&=(e^x-e^{x^2})-(e^x-e^x)(\sqrt x)^\prime+\int_{\sqrt x}^x e^x dt \\&= (e^x-e^{x^2})+e^x(x-\sqrt x) \end{align*}
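The result can be sanity-checked numerically without ever finding an antiderivative of $e^{t^2}$; a sketch assuming scipy (the test point $x_0=2$ is arbitrary):

    import numpy as np
    from scipy.integrate import quad

    def F(x):
        val, _ = quad(lambda t: np.exp(x) - np.exp(t**2), np.sqrt(x), x)
        return val

    x0, h = 2.0, 1e-5
    numeric = (F(x0 + h) - F(x0 - h)) / (2 * h)     # central difference
    closed = (np.exp(x0) - np.exp(x0**2)) + np.exp(x0) * (x0 - np.sqrt(x0))
    print(numeric, closed)  # the two values should agree to several digits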
Yes there is. Assuming no pathological situations, the derivative of a definite integral is the difference of the integrand evaluated at the two endpoints. So $$\frac{d}{dx} \int_{\sqrt{x}}^x \left( e^x - e^{t^2} \right) dt = \left. \left( e^x - e^{t^2} \right) \right|_{t=\sqrt{x}}^{t=x} = e^x - e^{x^2} $$
Pigeon Hole Principle Algorithm The “pigeonhole principle” states that if n+1 objects (e.g., pigeons) are to be distributed into n holes then some hole must contain at least two objects. This observation is obvious but useful. Employ the pigeonhole principle to prove the following: Claim: Let G be an undirected graph with at least two vertices. Then there exist distinct vertices v and w in G that have the same degree. Thank you so much for all your help; a couple of problems on this homework were unlike anything we've done in class thus far. I think they're supposed to be easy concept questions, but I'm struggling.
It is tacitly assumed that the graph $G$ is finite and contains no loops or double edges. Otherwise it is easy to give counterexamples. Let $v_0$ be the number of isolated vertices, and denote by $v'$ the number of vertices of degree $\geq1$. When $v_0\geq2$ we are done. Otherwise $v'\geq1$, and $d_\max$, the maximal occurring degree, is $\geq1$. Since a vertex of maximal degree is connected to $d_\max$ other nonisolated vertices we have $$v'\geq d_\max+1\ .$$ The $v'$ vertices of degree $\geq1$ can have degrees from $1$ to $d_\max$ inclusive. As $v'>d_\max$ we can conclude by the pigeonhole principle that two of them have to have the same degree.
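The claim is also easy to test empirically, independently of the proof; a minimal Python sketch on random simple graphs:

    import random

    def has_repeated_degree(n, p=0.5):
        """Random simple graph on n >= 2 vertices; check for a repeated degree."""
        adj = [[False] * n for _ in range(n)]
        for i in range(n):
            for j in range(i + 1, n):
                if random.random() < p:
                    adj[i][j] = adj[j][i] = True
        degrees = [sum(row) for row in adj]
        return len(set(degrees)) < n          # some degree value occurs twice

    print(all(has_repeated_degree(random.randint(2, 40)) for _ in range(1000)))  # True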
You have two pigeons (vertices), so each pigeon can only have one pigeon friend (an edge to another vertex). But you also have two pigeons, so two pigeons and one pigeon friend imply they are friends of each other. Inductively, you have $n$ pigeons with $n-1$ possible pigeon friends each.
Convergence of the integral $\int _0^\infty \ln^2x\sin(x^2)\,dx$ $$\int _0^\infty (\ln^2x)\sin(x^2)dx$$ Does this converge? In the answers in our textbook, we've been told that since the integrand's limit at infinity is not $0$, the integral doesn't converge. I couldn't find a theorem that states this, though. Are there any other good ways to prove that this integral diverges?
There is no real difficulty at $0$, since near $0$ the function $\sin(x^2)$ behaves like $x^2$, so $\lim_{x\to 0^+}\ln^2 x\sin(x^2)=0$. So we examine $$\int_1^B (\ln^2 x)( \sin(x^2))\,dx.\tag{1}$$ Rewrite as $$\int_1^B \frac{\ln^2 x}{2x} 2x \sin(x^2)\,dx,$$ and use integration by parts, letting $u=\frac{\ln^2 x}{2x}$ and $dv=2x\sin(x^2)\,dx$. Then $du=\frac{2\ln x-\ln^2 x}{2x^2}\,dx$ and we can take $v=-\cos(x^2)$. Thus our integral (1) is $$\left.\left(-\frac{\ln^2 x}{2x}\cos(x^2)\right)\right|_1^B +\int_1^B \frac{2\ln x-\ln^2 x}{2x^2}\cos(x^2)\,dx.$$ The first part gives no problem; indeed it vanishes as $B\to\infty$. The remaining integral has a (finite) limit as $B\to\infty$, because $\cos(x^2)$ is bounded and the $2x^2$ in the denominator crushes the $\ln$ terms in the numerator. It follows that our original integral converges.
$\sin(x^2)>.5$ on $(\sqrt{\pi/6+2k\pi},\sqrt{5\pi/6+2k\pi}),k\in\mathbb{N}$. When $x>e$, $\log^2(x)>1$, so we can see that the positive part of the integral is greater than $.5\sum_{k=2}^K \left(\sqrt{5\pi/6+2k\pi}-\sqrt{\pi/6+2k\pi}\right)$ for any $K\in\mathbb{N}\setminus\{1\}$. Now see that $$ \sum_{k=2}^K \left(\sqrt{5\pi/6+2k\pi}-\sqrt{\pi/6+2k\pi}\right)=\sum_{k=2}^K \left(\frac{2\pi/3}{\sqrt{5\pi/6+2k\pi}+\sqrt{\pi/6+2k\pi}}\right)\geq\sum_{k=2}^K \left(\frac{2\pi/3}{\sqrt{5\pi/6+2k\pi}}\right) $$ which we see is unbounded. We conclude that the positive part of the integral is unbounded. Similarly, we can conclude that the negative part of the integral is unbounded. Therefore it is not integrable.
Elliptic Curve and Differential Form Determine Weierstrass Equation I am reading Fermat's Last Theorem by Diamond, Darmon and Taylor and they state: "An elliptic curve E over a field F is a proper smooth curve over F of genus one with a distinguished F-rational point. If $E/F$ is an elliptic curve and if $\omega$ is a non-zero holomorphic differential on E/F then E can be realised in the projective plane by an equation (called a Weierstrass equation) of the form $$Y^2Z + a_1XYZ + a_3Y Z^2 = X^3 + a_2X^2Z + a_4XZ^2 + a_6Z^3$$ such that the distinguished point is (0 : 1 : 0) (sometimes denoted $\infty$ because it corresponds to the “point at infinity” in the affine model obtained by setting $Z=1$) and $\omega =\frac{dx}{2y+a_1x+a_3}$." My question is how does the choice of $\omega$ determine the Weierstrass equation for $E$? Why state this in terms of differential forms instead of the usual projective embedding?
To ease notation, the Weierstrass equation is generally written using non-homogeneous coordinates $x = X/Z$ and $y = Y/Z$, then $$E:y^2 + a_1xy+a_3y=x^3+a_2x^2+a_4x+a_6$$ If $char(\mathbb{K}) \neq 2$, then we can simplify the equation by completing the square. Thus the substitution $$y \to\frac{1}{2}(y-a_1x-a_3)$$ gives an equation of the form $$E:y^2 = 4x^3 + b_2x^2+2b_4x + b_6$$ $$b_2 = a_1^2 + 4a_2, \quad\quad b_4 =2a_4+a_1a_3, \quad\quad b_6=a_3^2+4a_6$$ We also define quantities \begin{align} b_8 &= a_1^2a_6+4a_2a_6-a_1a_3a_4+a_2a_3^2-a_4^2\\ c_4 &= b_2^2 - 24b_4\\ c_6 &= -b_2^3 +36b_2b_4-216b_6\\ \Delta &= -b_2^2b_8 - 8b_4^3 - 27b_6^2+9b_2b_4b_6\\ j &=c_4^3/\Delta\\ \omega &=\frac{dx}{2y+a_1x+a_3} = \frac{dy}{3x^2 + 2a_2x+a_4-a_1y} \end{align} Where the quantity $\Delta$ is the discriminant of the Weierstrass equation, the quantity $j$ is the $j$-invariant of the elliptic curve, and $\omega$ is the invariant differential associated to the Weierstrass equation. References: J. Silverman - The Arithmetic of Elliptic Curves 2-ed. p. $42$ (pdf) Silverman and Tate's - Rational Points on Elliptic Curves Ludwig Bauer - Weierstrass Equations (pdf) PS: Maybe this is not a clear answer to your questions, but surely the references will be very helpful.
Proving that $\left(\frac{\pi}{2}\right)^{2}=1+\sum_{k=1}^{\infty}\frac{(2k-1)\zeta(2k)}{2^{2k-1}}$. Wolfram$\alpha$ says that we have the following identity $$ \left(\frac{\pi}{2}\right)^{2}=1+2\sum_{k=1}^{\infty}\frac{(2k-1)\zeta(2k)}{2^{2k}} $$ but how does one prove such an identity?
It's well known that $$ \pi\cot(\pi x)=\frac 1x-2\sum_{n=1}^\infty\zeta(2n)x^{2n-1}; $$ taking derivatives we get $$ \frac{\pi^2}{\sin^2\pi x}=x^{-2}+2\sum_{n=1}^\infty(2n-1)\zeta(2n)x^{2n-2}; $$ in particular for $x=\frac12$ $$ \pi^2=4+2\sum_{n=1}^\infty(2n-1)\zeta(2n)\frac1{2^{2n-2}}; $$ this is your formula (multiplied by 4).
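A quick numerical confirmation of the identity (a Python sketch using SciPy's Riemann zeta; the terms decay like $k/4^k$, so few are needed):

```python
import math
from scipy.special import zeta

s = sum((2*k - 1) * zeta(2*k) / 2**(2*k - 1) for k in range(1, 60))
print(1 + s)             # 2.4674011002723395
print((math.pi / 2)**2)  # 2.4674011002723395
```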
Do not worry about the downvote. It is just an attack on my answers. The series can have the following integral representation $$ \sum_{k=1}^{\infty}\frac{(2k-1)\zeta(2k)}{2^{2k-1}} = \int_{0}^{1}\frac{(t^2+1)\ln t}{t^2-1} dt = \frac{\pi^2}{4}-1.$$
Show that the ideal $ (2, 1 + \sqrt{-7} ) $ in $ \mathbb{Z} [\sqrt{-7} ] $ is not principal Show that the ideal $ I = (2, 1 + \sqrt{-7} ) $ in $ \mathbb{Z} [\sqrt{-7} ] $ is not principal. My thoughts so far: Work by contradiction. Assume that $ I $ is principal, i.e. that it is generated by some element $ z = a + b\sqrt{-7} \in \mathbb{Z}[\sqrt{-7}] $. I'm really not sure what to consider though - I can't really 'see' what $ I $ looks like. Any help would be greatly appreciated. Thanks
Put $\rm\, w = 1\!+\!\sqrt{-7}.\,$ By norms $2\,$ is irreducible so, if principal $\rm\,\color{#c00}{ (2,w) = (1)}\ $ [not $(2)$ by $\rm\,2\nmid w$] so $\rm\ 2\mid 2w,\,ww'\Rightarrow\ 2\mid (2w',ww') = \color{#c00}{(2,w)}(w') = (w'),\ $ so $\rm\ 2\mid w',\ $ contradiction. $\ \, $ QED This is a special case of the fact that the failure of an irreducible element to be prime (or a failure of Euclid's Lemma) immediately yields a nonexistent gcd and nonprincipal ideal - see this answer.
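The computational fact behind "by norms $2$ is irreducible" is that no element of $\mathbb{Z}[\sqrt{-7}]$ has norm $2$: a proper factorization $2=\alpha\beta$ would force $N(\alpha)=2$, i.e. $a^2+7b^2=2$, which is impossible. A tiny exhaustive check (a Python sketch):

```python
# a^2 + 7b^2 = 2 forces |a| <= 1 and b = 0, so a small search suffices
sols = [(a, b) for a in range(-2, 3) for b in range(-1, 2) if a*a + 7*b*b == 2]
print(sols)  # [] -- no element of Z[sqrt(-7)] has norm 2
```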
To prove this note that $1 \not \in I$ so $I \not = (1)$. Then suppose $I = (\alpha)$, that implies that $\alpha = 2$ or $\alpha = 1 + \sqrt{-7}$ since those are both irreducibles, but neither of them can hold since one irreducible is not a multiple of another.
Uniform continuity of a bounded function on an unbounded interval Are all bounded continuous functions on an unbounded interval uniformly continuous?
A counterexample is given by $f(x)=\sin(x^2)$. Let $x_n=\sqrt{2\pi n}$ and $y_n=\sqrt{2\pi n+\pi/2}$. Then $$|f(x_n)-f(y_n)|=1.$$ But $|x_n-y_n|\to0$. So $f$ is not uniformly continuous. (One way to see that $|x_n-y_n|\to0$ is to apply the Mean Value Theorem to the function $\phi(t)=\sqrt t$.)
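Tabulating these two sequences makes the failure visible (a small Python sketch):

```python
import math

for n in [10, 100, 1000, 10000]:
    x = math.sqrt(2 * math.pi * n)
    y = math.sqrt(2 * math.pi * n + math.pi / 2)
    # the gap y - x shrinks, but |f(y) - f(x)| stays at 1
    print(n, y - x, abs(math.sin(y**2) - math.sin(x**2)))
```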
The answer to your question is not necessarily. The counter example provided by @DavidC.Ullrich demonstrates this. A common reason a function is not uniformly continuous is that its derivative is actually unbounded (but a uniformly continuous function can actually have an unbounded derivative). For instance, if $g(x)=\sin(x^2)$ then $g'(x) = 2x \cos(x^2)$ which can take arbitrarily large positive and negative values. If a differentiable function has a derivative that is bounded over the whole real line, then it would be uniformly continuous. This follows from the mean value theorem. For instance suppose that $f:\mathbb{R} \to \mathbb{R}$ is differentiable and that $|f'(x)| < M$ for all $x \in \mathbb{R}$. When we consider $x < y$ and $$|f(x) - f(y)| = |f'(\xi)||x-y|$$ where $\xi \in (x,y)$, then we can see that $$|f(x) - f(y)| \le M |x-y|.$$ Now suppose we let $\epsilon > 0$. If we fix $x_0$, and consider all $y \in \mathbb{R}$ such that $|x_0 - y| < \epsilon/M$ we see that $$|f(x_0) - f(y)| < \epsilon.$$ Finally, since $\epsilon/M$ is independent of $x_0$, $f$ is uniformly continuous.
Calculating Expectation and Covariance $X$ is a random variable which is uniformly distributed on the interval $[1,4]$. Define the random variable $Y = g(X)$ with $g(x) = x^2$. How can I calculate $E(g(X))$, $g(E(X))$ and the covariance Cov$[X,Y]$? I would really appreciate it if someone can show me how to solve this!
The probability density function of $X$ is defined as $$f_X(x) = \begin{cases}\frac13 & 1\le x\le 4\\0&\text{otherwise}\end{cases}$$ The expectation of $Y$, $E[Y] = E[g(X)]$ can be obtained by $$\begin{align*}E[g(X)] &= \int_{-\infty}^\infty g(x)\ f_X(x)\ dx\\ &= \int_1^4x^2\cdot\frac13\ dx\\ &= \frac13\int_1^4x^2\ dx\\ &= \frac13\left[\frac{x^3}3\right]_1^4\\ &= 7 \end{align*}$$ By symmetry of uniform distribution, $E[X]$ can be quickly calculated as $$E[X] = \frac{4+1}2 = 2.5,$$ or not as quickly calculated by $$E[X] = \int_1^4x\cdot\frac13\ dx = \frac13\left[\frac{x^2}2\right]_1^4 = 2.5.$$ Then $g(E[X]) = 2.5^2$. The covariance $COV(X,Y) = E[(X-E[X])(Y-E[Y])]$, $$\begin{align*}COV(X,Y) &= E[(X-E[X])(Y-E[Y])]\\ &= E[XY] - E[X]\cdot E[Y]\\ &= \frac13\int_1^4 x^3\ dx - 2.5\cdot 7\\ &= \frac13\left[\frac{x^4}4\right]_1^4 - 2.5\cdot 7\\ &= 3.75 \end{align*}$$
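These three values are easy to cross-check by Monte Carlo simulation (a sketch in Python, assuming NumPy):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(1, 4, size=10**6)  # X ~ Uniform[1, 4]
y = x**2                           # Y = g(X)

print(y.mean())            # E[g(X)]  ~ 7
print(x.mean()**2)         # g(E[X])  ~ 6.25
print(np.cov(x, y)[0, 1])  # Cov(X,Y) ~ 3.75
```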
Hints You need $$\mathbb{E}[g(X)] = \mathbb{E}[X^2]\\ g(\mathbb{E}[X]) = \mathbb{E}[X] \times \mathbb{E}[g(X)] \\ \textrm{Cov}(X,Y) $$ can you get the first two?
Relation between Hilbert's hotel and Cantor's proof of the uncountability of the continuum I am reading the wikipedia page about Hilbert's Grand hotel. There it says: Hilbert's paradox is a veridical paradox: it leads to a counter-intuitive result that is provably true. The statements "there is a guest to every room" and "no more guests can be accommodated" are not equivalent when there are infinitely many rooms. An analogous situation is presented in Cantor's diagonal proof. Now I wonder what Hilbert's Grand hotel has to do with Cantor's diagonal proof, since Cantor's diagonal proof is concerned with showing that the continuum has bigger cardinality than the natural numbers, but Hilbert's hotel seems to be about showing that certain countable sets are equinumerous. Could you clarify?
Cantor's diagonal proof can be imagined as a game: Player 1 writes a sequence of Xs and Os, and then Player 2 writes either an X or an O: Player 1: XOOXOX Player 2: X Player 1 wins if one or more of his sequences matches the one Player 2 writes. Player 2 wins if Player 1 doesn't win. Cantor's diagonal proof basically says that if Player 2 wants to always win, they can easily do it by writing the opposite of what Player 1 wrote in the same position: Player 1: XOOXOX OXOXXX OOOXXX OOXOXO OOXXOO OOXXXX Player 2: OOXXXO You can scale this 'game' as large as you want, but using Cantor's diagonal proof Player 2 will still always win. For number of turns equals infinity ($T_{n}=\infty$): Player 1: XOOXOX... OXOXXX... OOOXXX... OOXOXO... OOXXOO... OOXXXX... ... Player 2: OOXXXO... Since $T_{n}=\infty$, and $\left | T_{n} \right |=\left | P_{1} \right |$, $\left | P_{1} \right |=\infty$ (please note that the lines are the notation for cardinality, not absolute value). However, by using the tactic described earlier, Player 2 ensures that $P_{2}\notin P_{1}$. We can extend this to Hilbert's paradox by assigning a company to the hotel to clean the rooms. This company has an infinite number of workers, but the way it trains them is strange. If rooms 2, 57, and 2,246 needed cleaning, the company would send the employee whose job it is to clean rooms 2, 57, and 2,246. Each cleaner would have a different set of rooms to be cleaned, including the worker who gets called when none of the rooms need cleaning. Assume now that hotel wishes to throw a party for its cleaners and give them all a free room on the same night. Since you have already read the Wikipedia article, you know that you can add an infinite group to an already full hotel without kicking anybody out. Since the cleaning schedules of the workers are infinite strings, we can try to use Cantor's diagonal proof for $T_{n}=\infty$. Replacing Xs with no and Os with yes (for whether or not a worker cleans a room), we get this: Rooms: Room 1: No, Yes, Yes, No, Yes, No, ... Room 2: Yes, No, Yes, No, No, No, ... Room 3: Yes, Yes, Yes, No, No, No, ... Room 4: Yes, Yes, No, Yes, No, Yes, ... Room 5: Yes, Yes, No, No, Yes, Yes, ... Room 6: Yes, Yes, No, No, No, No, ... ... Excluded Worker: Yes, Yes, No, No, No, Yes, ... So, the relationship between Hilbert's paradox and Cantor's diagonal proof is that Cantor's diagonal proof is an exception to the rule of Hilbert's paradox that $\infty+\infty=\infty$, and it establishes that there are different, unequal versions of infinity; the transfinite numbers.
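The diagonal trick in the answer is mechanical enough to run on finite lists (a Python sketch; the rows here are hypothetical examples):

```python
def diagonalize(rows):
    # flip the i-th entry of the i-th row, so the result differs from every row
    return [1 - rows[i][i] for i in range(len(rows))]

rows = [[0, 1, 1, 0],
        [1, 0, 1, 0],
        [1, 1, 1, 0],
        [0, 0, 1, 1]]
d = diagonalize(rows)
print(d, all(d != r for r in rows))  # True: d is not in the list
```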
Equivalence Relations on Set of Ordered Pairs Let $R$ be the relation on $\mathbb{Z} \times \mathbb{Z}$ (that is, elements of this relation are pairs of pairs of integers) such that $((a,b),(c,d)) \in R$ if and only if $a-d = c-b$. Can anyone give me a start on how to show that it is transitive, reflexive and symmetric?
Reflexive: $\forall (a,b):\Bigl[(a,b)\in \mathbb{Z\times Z} \to \bigl((a,b),(a,b)\bigr)\in R\Bigr]$ Symmetric: $\forall (a,b,c,d): \Bigl[\bigl((a,b),(c,d)\bigr)\in R \leftrightarrow \bigl((c,d),(a,b)\bigr)\in R\Bigr]$ Transitive: $\forall (a,b),(c,d),(e,f): \Bigl[\bigl((a,b),(e,f)\bigr)\in R\land \bigl((e,f),(c,d)\bigr)\in R \to \bigl((a,b),(c,d)\bigr)\in R\Bigr]$ Show that these properties hold (or not) when $\Bigl[\bigl((a,b),(c,d)\bigr)\in R \Bigr]\iff\Bigl[ a-d=c-b\Bigr]$ Hint: $[a-d=c-b] \iff [a+b=c+d]$
Reflexive: $((a,b),(a,b))\in R$ since $a-b=a-b$. Symmetric: Suppose that $((a,b),(c,d))\in R$ since $a-d=c-b$, hence $c-b=a-d$. So $((c,d),(a,b))\in R$. Transitive: Suppose that $((a,b),(c,d))\in R$ and $((c,d),(e,f))\in R$. Then $a-d=c-b$ and $c-f=e-d$, and hence $a-d+c-f=c-b+e-d$ which implies that $a-f=e-b$. So we have that $((a,b),(e,f))\in R$
Solving Differential Equations without separation of variables How can one proceed when solving differential equations in physics without separation of variables? For example, take the Laplace equation in spherical coordinates: we always assume solutions of the form R(r)Θ(θ)Φ(φ) and then we resolve the differential equation into three differential equations of a single variable each. Doesn't that restrict the type of solutions? What if other solutions cannot be written in R(r)Θ(θ)Φ(φ) form? How do we find such solutions then? I have the same doubt for other differential equations of central importance in physics: the Schrodinger equation for the hydrogen atom, the wave equation, the diffusion equation, and many more.
Take Laplace equation in spherical coordinates, we always assume the solutions of form $R(r)\Theta(\theta)\Phi(\phi)$ and then we resolve the differential equation in three differential equations of single variable. Doesn't it restrict the type of solutions. We choose to use spherical coordinates (in your example) precisely because we want to examine situations which have spherical geometries. We could equally choose e.g. cylindrical coordinates to study systems with cylindrical geometries. If you search for this you'll see cylindrical coordinates crop up quite often in physical models. Also, there are very few equations (or forms of equation) which admit simple closed form solutions or even the ability to decouple using separation of variables. When we get a chance to use the technique, we grab it. :-) Often a good choice of coordinate system will allow useful exploration of a problem even if an explicit solution remains out of reach.
Proof by contradiction proving both numbers are not odd. I have to do a proof by contradiction: Suppose $a,b\in\mathbb{Z}$. If $4| (a^2 + b^2)$ then a and b are not both odd. So far I know that I need to assume that $4|(a^2+b^2)$ and that a and b are both odd, and derive a contradiction. I would use the definition of an odd number $(2k+1), k\in\mathbb{Z}$. I'm a little bit unsure of where to go from here. Any help would be appreciated.
If both are odd then $a=2n+1$, $b=2m+1$ and $$a^2+b^2=4(n^2+n+m^2+m)+2$$ which is never divisible by $4$.
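A one-line exhaustive check of the residue computation (a Python sketch):

```python
# the square of any odd number is 1 mod 4, so odd^2 + odd^2 is 2 mod 4
assert all((a*a + b*b) % 4 == 2
           for a in range(1, 100, 2) for b in range(1, 100, 2))
print("a^2 + b^2 is never divisible by 4 when a and b are both odd")
```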
Let $a = (2k+1)$ and let $b=2k'+1$ for some k in the naturals. If you assume both are odd, and you don't get a factor of 4, then clearly 4 doesn't divide $(a^2 + b^2)$ when both $a,b$ are odd.
find interval of convergence for series Is it right that the interval of convergence here is $1 < x < 3$: $$\sum_{n= 1}^\infty \frac{e^n + e^{-n}}{n^2} (x-2)^n$$ just like with the geometric series? Or what is the radius of convergence? Thanks! update: I got this far: $$\frac{\frac{(e^{n+1}+e^{-(n+1)})(x-2)^{n+1}}{(n+1)^2}}{\frac{(e^n+e^{-n})(x-2)^n}{n^2}}$$ (the middle fraction bar should be the main one) and this should be less than 1, right?
Using the Ratio Test, we have $\displaystyle\lim_{n\to\infty}\frac{e^{n+1}+e^{-(n+1)}}{(n+1)^2}\lvert x-2\rvert^{n+1}\cdot\frac{n^2}{(e^n+e^{-n})\lvert x-2\rvert^n}=\lim_{n\to\infty}\frac{e^{n+1}+e^{-(n+1)}}{e^n+e^{-n}}\cdot\frac{n^2}{(n+1)^2}\cdot\lvert x-2\rvert$ $\displaystyle=\lim_{n\to\infty}\frac{e+e^{-(2n+1)}}{1+e^{-2n}}\cdot\left(\frac{n}{n+1}\right)^2\cdot\lvert x-2\rvert=e\cdot 1\cdot\lvert x-2\rvert=e\lvert x-2\rvert$, and $e\lvert x-2\rvert<1 \iff \lvert x-2\rvert<\frac{1}{e}$. To test convergence at the endpoints of the interval, A) $\;\displaystyle x=2+\frac{1}{e}$ gives the series $\displaystyle\sum_{n=1}^{\infty}\frac{1+e^{-2n}}{n^2}$, $\;\;\;$which converges by comparing to $\displaystyle \sum_{n=1}^{\infty}\frac{1}{n^2}$ using the Limit Comparison Test. B) $\;\displaystyle x=2-\frac{1}{e}$ gives the series $\displaystyle\sum_{n=1}^{\infty}(-1)^n\frac{1+e^{-2n}}{n^2}$, $\;\;\;$which converges since its absolute value converges. Therefore the series converges for x in $\displaystyle\left[2-\frac{1}{e},2+\frac{1}{e}\right]$.
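The radius $1/e$ can also be seen numerically from the root test (a Python sketch; logarithms avoid overflowing $e^n$):

```python
import math

def log_a(n):
    # log a_n where a_n = (e^n + e^-n)/n^2; note log(e^n + e^-n) = n + log(1 + e^{-2n})
    return n + math.log1p(math.exp(-2 * n)) - 2 * math.log(n)

for n in [10, 100, 1000]:
    print(n, math.exp(log_a(n) / n))  # a_n^(1/n) -> e ~ 2.71828, so the radius is 1/e
```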
According to WolframAlpha, the interval of convergence is as follows [WolframAlpha output and plot not reproduced here]. Unfortunately I'm not sure how one would go about finding it, but for the bottom part you can use the p-series test: because it is $n^2$, $p=2>1$, so it is convergent everywhere; now you just have to decide what test to use for the top part.
Arithmetic error in Feller's Introduction to Probability? In my copy of An Introduction to Probability by William Feller (3rd ed, v.1), section I.2(b) begins as follows: (b) Random placement of r balls in n cells. The more general case of [counting the number of ways to put] $r$ balls in $n$ cells can be studied in the same manner, except that the number of possible arrangements increases rapidly with $r$ and $n$. For $r=4$ balls in $n=3$ cells, the sample space contains already 64 points ... This statement seems incorrect to me. I think there are $3^4 = 81$ ways to put 4 balls in 3 cells; you have to choose one of the three cells for each of the four balls. Feller's answer of 64 seems to come from $4^3$. It's clear that one of us has made a very simple mistake. Who's right, me or Feller? I find it hard to believe the third edition of a universally-respected textbook contains such a simple mistake, on page 10 no less. Other possible explanations include: (1) My copy, a cheap-o international student edition, is prone to such errors and the domestic printings don't contain this mistake. (2) I'm misunderstanding the problem Feller was examining.
It is an error. The answer should be 81. I was really troubled that this book could have gone through 3 editions and a revision reprint and still have this elementary error. It is easy to see why 81 is the correct number: The first ball can be placed in 3 ways. The second in three ways. By the rs principle, the first two balls can be placed in 3x3 ways, i.e. 9 ways. The third ball can be placed in 3 ways. By the rs principle, the number of ways of placing the first 3 balls is 27. The 4th ball can be placed in any one of three ways, hence the total number of ways the balls can be placed is 81. Repeated application of the rs principle gets you to 81, not 64. 64 would be the right answer if we had 4 cells and 3 balls, since 4x4x4 = 64. What bothers me is that anyone studying on his own will run into these problems with errata everywhere. The first reaction is always: I must be wrong, because these people with their ivy league degrees cannot possibly be wrong! The guilty secret is that almost all mathematics books have errors of this type. Before you buy any book, first get hold of the errata list. If the error count is too dense, buy a different book. I hope this helps. JLL
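Exhaustive enumeration settles it instantly (a Python sketch):

```python
from itertools import product

# each of the 4 balls independently chooses one of 3 cells
print(len(list(product(range(3), repeat=4))))  # 81, not 64
```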
I have to go with Feller: each of the 3 balls has one of 4 cell numbers associated with it. That is 4^3. Here are the 64 possibilities 111, 112, 113, 114, 121, 122, 123, 124, 131, 132, 133, 134, 141, 142, 143, 144, 211, 212, 213, 214, 221, 222, 223, 224, 231, 232, 233, 234, 241, 242, 243, 244, 311, 312, 313, 314, 321, 322, 323, 324, 331, 332, 333, 334, 341, 342, 343, 344, 411, 412, 413, 414, 421, 422, 423, 424, 431, 432, 433, 434, 441, 442, 443, 444
Prove that no such $f$ exists I have come upon this question and could not find a solution for it. Prove that there is no entire function $f$ such that $\forall z\in \mathbb{C}$ $\quad|f(z)|&gt;|z|$. Any ideas on how to solve this question?
Clearly there is no $z$ for which $f(z) = 0$. Therefore, $g(z) := \frac{z}{f(z)}$ is entire. Since $|z| &lt; |f(z)|$, $|g(z)| &lt; 1$ for all $z \in \mathbb{C}$. Liouville implies $g(z) \equiv c$ is constant, i.e. $z \equiv cf(z)$. Clearly $c \not = 0$, so $f(0) = 0$, contradiction.
Let $g(z)=\frac z {f(z)}$ for $z \neq 0$. (Note that $f(z) \neq 0$ for $z \neq 0$). Then $g$ is analytic in $\mathbb C \setminus \{0\}$. Since it is bounded it extends to an entire function $G$. This entire function is bounded. By Liouville's Theorem it must be a constant. This means $f(z)=cz$ for some constant, but then the given inequality does not hold at $0$.
Does $y=\frac{x^2}{x^3+3}$ have a horizontal asymptote? Does $y=\frac{x^2}{x^3+3}$ have a horizontal asymptote? Since the graph passes through $(0,0)$, I assumed $y=0$ is not a horizontal asymptote. Can a rational function have no horizontal asymptote?
Hints: An asymptote is a straight line to which the curve approaches while it moves away from the origin. The curve can approach the line from one side, or it can intersect it again and again. Not every curve which goes infinitely far from the origin (infinite branch of the curve) has an asymptote. For functions given in explicit form $y=f(x)$, we know: the vertical asymptotes are at points of discontinuity where the function $f(x)$ has an infinite jump; the horizontal and oblique asymptotes have the equation: $$y=kx+b, \space \text{ with } \space \space k=\lim_{x \to \infty}\frac{f(x)}{x}, \space \space b=\lim_{x \to \infty}\left[f(x)-kx \right].$$ Figure: [image not reproduced here]
Yes, of course: $$\lim_{x\rightarrow+\infty}\frac{x^2}{x^3+3}=\lim_{x\rightarrow+\infty}\frac{\frac{1}{x}}{1+\frac{3}{x^3}}=0,$$ $$\lim_{x\rightarrow-\infty}\frac{x^2}{x^3+3}=\lim_{x\rightarrow-\infty}\frac{\frac{1}{x}}{1+\frac{3}{x^3}}=0.$$ Id est, $y=0$ is a horizontal asymptote. The function $f(x)=\frac{x^3}{x^2+3}$ has no horizontal asymptote.
Prove the inequality between integral and summation of multiplicative inverse I want to prove the following inequality: $$ \ln(n) = \int\limits_1^n{ \frac{1}{x} dx } \geq \sum_{x = 1}^{n-1}{\frac{1}{x + 1}} = \sum_{x = 1}^{n}{\frac{1}{x}} - 1 $$ I ask this question as I'm trying to calculate an upper bound for the harmonic series: $$ \sum_{x = 1}^{n}{\frac{1}{x}} \leq \ln(n) + 1 $$
Using the mL (minimum times Length) estimate $$\int_a^b |f(x)|dx\geq (b-a)\min_{x\in[a,b]}|f(x)|$$ One has $$\int_1^n\frac{1}{x}dx=\sum_{k=1}^{n-1}\int_{k}^{k+1}\frac{1}{x}dx\geq\sum_{k=1}^{n-1}(k+1-k)\frac{1}{k+1}=\sum_{k=1}^{n-1}\frac{1}{k+1}=\sum_{k=1}^n\frac{1}{k}-1,$$ where I have used that $\frac{1}{x}$ is positive on the positive axis (so the absolute value is irrelevant) and that $\frac{1}{x}$ is decreasing (again, for positive $x$), so that $$\min_{x\in[a,b]}\frac{1}{x}=\frac{1}{b}$$ PS: my suggestion to you, is to draw the situation: first draw the $\frac{1}{x}$ function, then subdivide the real axis in intervals of length $1$, and then build up the rectangles which fit under the curve, for each of the subintervals (bases). The sum of their areas will be given by the above formula.
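The resulting harmonic-sum bound is easy to test numerically (a Python sketch):

```python
import math

for n in [10, 100, 1000, 10**6]:
    H = sum(1/k for k in range(1, n + 1))
    print(n, H, math.log(n) + 1, H <= math.log(n) + 1)  # always True
```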
The complete solution is provided here.
Does this integral expression makes sense? I'm wondering whether this expression has some significance: $$\int_{-1}^1 \frac{dx}{\sqrt{|x|}}$$ And, in general, if expressions in the following form make sense: $$\int_a^b f(x) dx$$ Where the set $(a, b)$ contains points out of the domain of $f(x)$. According to the definitions I know, these are neither definite nor improper integrals, so the expression shouldn't make any sense. But in case the integral exists, could you please tell me: Which kind is it? Is the following equation true? $\displaystyle \int_{-1}^1 \frac{dx}{\sqrt{|x|}} = \int_{-1}^0 \frac{dx}{\sqrt{|x|}} + \int_0^1 \frac{dx}{\sqrt{|x|}}$ Is it an area? If not, what is the area of $\frac1{\sqrt{|x|}}$? UPDATE 1: I'm saying that $\displaystyle \int_{-1}^1 \frac{dx}{\sqrt{|x|}}$ is not an improper integral is because of the following definition (taken from Wikipedia): [...] an improper integral is the limit of a definite integral as an endpoint of the interval(s) of integration approaches either a specified real number or ∞ or −∞ [...] -1 and 1 are the endpoints of my integral, but here the problem is 0. Perhaps, is that definition wrong or incomplete? UPDATE 2: I've replaced $\frac1x$ with $\frac1{\sqrt{|x|}}$, so that my questions about the area make more sense. UPDATE 3: I'm not particularly interested in calculating the value of that integral, what I really want to know is what is that stuff. An ideal answer would include a valid and coherent mathematical definition of integrals of that kind.
Consider the Cauchy principal value $$ \lim_{\epsilon \to 0^+}\left(\int_{-1}^{-\epsilon}\frac{dx}{x}+ \int_{\epsilon}^{1}\frac{dx}{x}\right)=0 $$
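Numerically, the symmetric cancellation is visible for any cutoff, and the updated integrand $1/\sqrt{|x|}$ converges outright (a Python sketch with SciPy):

```python
from scipy.integrate import quad

for eps in [1e-1, 1e-3, 1e-6]:
    left, _ = quad(lambda x: 1/x, -1, -eps, limit=200)
    right, _ = quad(lambda x: 1/x, eps, 1, limit=200)
    print(eps, left + right)  # ~0: the two pieces cancel exactly

# 1/sqrt(|x|) has an integrable singularity, so this improper integral converges:
val, _ = quad(lambda x: 1/abs(x)**0.5, -1, 1, points=[0])
print(val)  # 4.0
```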
Edit: what follows is partly incorrect. The only way to give a meaning to the integral is to use the concept of Cauchy principal value. Your specific example. First, I don't see any reason for not calling it an improper integral; second, surely the decomposition you write in 2. is perfectly valid: $$ \int_{-1}^1 \frac1x dx= \lim_{\epsilon\rightarrow 0}\Bigg(\int_{-1}^{-\epsilon}\frac1x dx + \int_\epsilon^1 \frac1x dx\Bigg) $$ Changing variable from $x$ to $-x$ in the first integral you see that the parenthesis is $0$. Due to the symmetry of the integrand, we can say that the result is exactly $0$. (maybe I'm being not rigorous here?) Accordingly, the area is $0$ as well, in fact it is a signed area. On the contrary, $\displaystyle\int_{-1}^1\left|\frac1x\right|\; dx$ is a divergent integral. In general, it still makes sense to define integrals of functions which are singular on the path of integration. Then it may well be that the integral is divergent and the best you can do is to define its principal value, taking a limit analogue to the one which I wrote above.
Fractional Exponents powers I am having problems understanding how to answer questions containing fractional exponents raised to a given power, e.g. $(2x^{1/2})^6$; I do not understand how to go about answering the question. I know this is an easy topic, but I would really appreciate the help.
how to answer questions containing fractional exponents In the exact same manner in which you answer questions containing non-fractional exponents, since the base is obviously a positive number, given the fact that both $2$ and $\sqrt x$ are always $\ge0$. $(ab)^c=a^cb^c$ for all exponents, fractional or not, it doesn't matter. Likewise, $(a^b)^c=a^{bc}$, again, for all exponents. By combining the two, we have $(ab^c)^d=a^db^{cd}$. In this case, $a=2,b=x,c=$ $=\dfrac12$ and $d=6$. And $\dfrac12\cdot6=3$.
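A symbolic check of the worked example (a Python sketch using SymPy):

```python
import sympy as sp

x = sp.symbols('x', positive=True)
print(sp.simplify((2 * x**sp.Rational(1, 2))**6))  # 64*x**3
```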
The general rule for exponents is $x^{a/b} = \sqrt[b]{x^a} \forall x \in \mathbb{N}$. For example, $4^{2/3} = \sqrt[3]{4^2} = \sqrt[3]{16}$. You can apply this to your problem to see that $(2x^{1/2})^6 = 64x^3.$
Relatively prime numbers formula Here is a problem I cannot manage with: Find two relatively prime positive numbers $p$, $q$ that satisfy: Sequence $ \{pn + q\}_{n=0,1,2,\ldots}$ does not contain any Fibonacci number. Any ideas how to approach it?
Write down the Fibonacci sequence modulo the first few primes, and note the periods (A060305): modulo 2 -- period 3 modulo 3 -- period 8 modulo 5 -- period 20 modulo 7 -- period 16 modulo 11 -- period 10 So the Fibonacci sequence modulo 55 has period 20. That is not long enough to hit all of the 40 numbers between 0 and 55 that are relatively prime to 55. (But in fact already the sequence modulo 11 itself is too short to hit everything, because there are necessarily repetitions in it. Indeed, neither of $4$, $6$, $7$, nor $9$ are in the sequence, so there is no Fibonacci number of the forms $11n+4$, $11n+6$, $11n+7$, or $11n+9$).
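The missing residues mod 11 can be verified directly (a Python sketch):

```python
def fib_residues(m, terms=300):
    seen, a, b = set(), 0, 1
    for _ in range(terms):  # far more terms than the period mod m
        seen.add(a % m)
        a, b = b, a + b
    return seen

print(sorted(set(range(11)) - fib_residues(11)))  # [4, 6, 7, 9]
```

So, for example, $p=11$, $q=4$ is a valid coprime pair.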
P=2-2(N+1)+3-(3(2N+1))+(6N±1)-{(6N±1)•(6n±1)}, where P does not equal 1, and where P equals all prime numbers, and where N equals all natural whole numbers from 1, 2, 3... Etc. To think that something deemed so complex could be reduced to such a simple and complete formula. My, my.
How to find the value of this limit? $\lim\limits_{n\to\infty}n\int_0^1 nx^{n-1}\left(\frac{1}{1+x}-\frac{1}{2}\right)\mathrm dx.$ How to calculate the following limit: $$\lim_{n\to\infty}n\int_0^1 nx^{n-1}\left(\frac{1}{1+x}-\frac{1}{2}\right)\mathrm dx.$$
Use the substitution $x^n=t$. $$n\int_0^1 nx^{n-1}\left(\frac{1}{1+x}-\frac{1}{2}\right)\,dx=n\int_0^1 \left(\frac{1}{1+t^{1/n}}-\frac{1}{2}\right)\,dt$$ $$\Rightarrow \lim_{n\rightarrow \infty} n\int_0^1 \left(\frac{1}{1+t^{1/n}}-\frac{1}{2}\right)\,dt=\lim_{h\rightarrow 0}\dfrac{\displaystyle \int_0^1 \left(\dfrac{1}{1+t^{h}}-\frac{1}{2}\right)\,dt}{h}$$ Use L'Hopital's rule and Leibniz rule to get: $$\lim_{h\rightarrow 0}\int_0^1 \frac{-t^h\ln t}{(1+t^h)^2}\,dt=-\frac{1}{4}\int_0^1\ln t \,dt=\boxed{\dfrac{1}{4}}$$
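The convergence to $1/4$ is easy to observe numerically after the substitution (a Python sketch with SciPy):

```python
from scipy.integrate import quad

def nI(n):
    # n * integral_0^1 (1/(1 + t^(1/n)) - 1/2) dt, the substituted form
    val, _ = quad(lambda t: 1/(1 + t**(1/n)) - 0.5, 0, 1)
    return n * val

for n in [10, 100, 1000]:
    print(n, nI(n))  # tends to 0.25
```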
The integral is equal to (via Mathematica): $$I(n)=\int_0^1 nx^{n-1}\left(\frac{1}{1+x}-\frac{1}{2}\right)\mathrm dx.=(1/2)\left(-1 - n\psi( n/2) + n\psi((1 + n)/2)\right)$$ Where $\psi(z):=\frac{\Gamma'(z)}{\Gamma(z)}$ is the PolyGamma function. The limit is equal to (via Mathematica): $$\lim_{n\to\infty} n I(n)=\frac{1}{4}$$ -mike
how to prove the chain rule? I have just learnt about the chain rule but my book doesn't give a proof of it. I tried to write a proof myself but couldn't. So can someone please tell me about the proof for the chain rule in elementary terms, because I have just started learning calculus.
Assuming everything behaves nicely ($f$ and $g$ can be differentiated, and $g(x)$ is different from $g(a)$ when $x$ and $a$ are close), the derivative of $f(g(x))$ at the point $x = a$ is given by $$ \lim_{x \to a}\frac{f(g(x)) - f(g(a))}{x-a}\\ = \lim_{x\to a}\frac{f(g(x)) - f(g(a))}{g(x) - g(a)}\cdot \frac{g(x) - g(a)}{x-a} $$ where the second line becomes $f'(g(a))\cdot g'(a)$, by definition of derivative.
If I understand the notation correctly, this should be very simple to prove: Assume $f(x) = g(h(x))$ Then: $$f'(x) = \frac{df(x)}{dx}$$ This can be expanded to: $$\frac{df(x)}{dx} = \frac{df(x)}{dg(h(x))} \frac{dg(h(x))}{dh(x)} \frac{dh(x)}{dx}$$ When you cancel out the $dg(h(x))$ and $dh(x)$ terms, you can see that the terms are equal. Since $f(x) = g(h(x))$, the first fraction equals 1. ($$\frac{df(x)}{dg(h(x))} = 1$$) If we substitute $h(x)$ with $y$, then the second fraction simplifies as follows: $$\frac{dg(y)}{dy} = g'(y)$$ Substituting $y = h(x)$ back in, we get the following equation: $$\frac{dg(h(x))}{dh(x)} = g'(h(x))$$ The third fraction simplifies to the derivative of $h(x)$ with respect to $x$. This can be written as $$\frac{dh(x)}{dx} = h'(x)$$ Substituting these three simplifications back into the original function, we receive the equation $$\frac{df(x)}{dx} = 1g'(h(x))h'(x) = g'(h(x))h'(x)$$ This is the chain rule.
The distance moved by the tip of the hand in clock. The minute hand of a clock is $15$ cm long. The distance moved by the tip of the hand in $35$ minutes is $a.)\ 35\pi \\ \color{green}{b.)\ \dfrac{35\pi}{2}} \\ c.)\ \dfrac{5\pi}{4} \\ d.)\ \dfrac{5\pi}{2} $ For minute hand, $12\ hrs =360^{\circ} \\ 35\ min=\left(\dfrac{35}{2}\right)^{\circ} $ Distance$=\dfrac{2\pi 15\times 35}{360\times 2}=\dfrac{35\pi}{24}\ cm $ But that is not in options I look for a short and simple way. I have studied maths upto $12th$ grade.
It has moved $\frac{35}{60}$ of a full circle (a full circle consists of $60$ minutes, and it has moved $35$ of those). A full circle is $2\pi\cdot15cm=30\pi cm$. It has therefore moved a total of $$ \frac{35}{60}\cdot30\pi cm=\frac{35\pi}{2}cm $$
Your mistake? You are missing one fact: the minute hand has already gone through $(360\cdot 12)^{\circ}$ every 12 hours.
Intuition for sum of triangular numbers and significance for $3\choose{k}$ Specific question: according to my calculation based on Timbuc's answer to this question, $$\sum_{k=0}^n\frac{k(k+1)}{2}=\frac{n(n+1)(2n+4)}{12} \\ \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; =\frac{n(n+1)(n+2)}{6}$$ [Edit: RHS simplified based on suggestion from herbsteinberg.] If this is right, is there an intuitive or geometric proof of this? Background and motivation: I'm trying to relate the concepts of integration and summation using reasoning which I find simple and intuitive. As part of that, I'm trying to understand the process of summing sequences in this rotated section of Pascal's triangle: $(0)\;\;01\;\;01\;\;01\;\;01\;\;01\;\;01\;\;...\frac{k^0}{0!} \\ (1)\;\;01\;\;02\;\;03\;\;04\;\;05\;\;06\;\;...\frac{k+0}{1!} \\ (2)\;\;01\;\;03\;\;06\;\;10\;\;15\;\;21\;\;...\frac{k(k+1)}{2!} \\ (3)\;\;01\;\;04\;\;10\;\;20\;\;35\;\;56\;\;...\frac{k(k+1)(k+2)}{3!}$ I see that summing each line seems to increase the degree of the expression by $1$ and I can imagine that rearrangement, simplification and/or a limit process could later reduce these terms to $\frac{k^2}{2}$, $\frac{k^3}{3}$ etc., but for now I'm interested in how/why each line has the exact expression it does. Line $(1)$ is just counting. Line $(2)$ I can picture and understand as in Fig. 1 below. Fig. 1 Line $(3)$ [Edited to add the following, which may start to answer my question] I can picture and understand as a stepped version of the right-angled tetraga in Fig. 2 below. Fig. 2
Intuitively, you are walking Pascal's triangle in a zigzag way. $$\binom{n}{k} + \binom{n}{k-1} = \binom{n+1}{k}$$ For your example, $\large \sum_{k=0}^{n}{k(k+1)\over 2} = \sum_{k=1}^{n}\binom{k+1}{2} = \binom{n+2}{3}$ $$\large\begin{matrix} \binom{2}{2} = \binom{3}{3} = 1 \cr \binom{3}{3} + \binom{3}{2} = \binom{4}{3} = 4 \cr \binom{4}{3} + \binom{4}{2} = \binom{5}{3} = 10 \cr \binom{5}{3} + \binom{5}{2} = \binom{6}{3} = 20 \cr \cdots \cr \Large\binom{n+1}{3} + \binom{n+1}{2} = \binom{n+2}{3} \end{matrix}$$
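Checking the closed form against the hockey-stick sum (a Python sketch):

```python
from math import comb

n = 20
lhs = sum(k * (k + 1) // 2 for k in range(n + 1))
print(lhs, comb(n + 2, 3), n * (n + 1) * (n + 2) // 6)  # all three agree
```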
Each row can be written as a sum of combinatorial numbers. This is what is known as the Hockey Stick Identity. A nice way of imagining the intuition behind these identities is understanding a combinatorial proof for the hockey stick identity.
Theorem on continuous function "If f(x) is continuous and f(a) and f(b) are of opposite signs then there exist at least one or an odd number of roots between a and b." Is it true for polynomial equations only or any continuous function?
If $f$ is a continuous function, then there exists at least one root between $a$ and $b$. This is true by the intermediate value theorem. However, for a general continuous function, it is possible that the zero set is uncountable. For example let $$f(x) = \begin{cases} x-1 &, x \ge 1 \\ 0 & , x \in (-1,1) \\ x+1 & , x \le -1\\ \end{cases}$$
At least one: yes. An odd number: This is not so clear. Of course for polynomials this is true, but we have to count roots "with multiplicity". And as far as I can see if $f$ is just continuous and $f(c)=0$ there's no reasonable definition of the multiplicity. Example. Let $f(x)=|x|(x-1)$, $a=-1$, $b=1$. Then $f$ is continuous, $f(a)$ and $f(b)$ have different signs, but $f$ has exactly $2$ zeroes between $a$ and $b$. Similarly for $f(x)=|x|^\alpha(x-1)$ for any $\alpha>0$; if we want to say there are an odd number of zeroes between $-1$ and $2$ we have to somehow define the multiplicity of a zero in such a way that for every $\alpha>0$ the function $|x|^\alpha$ has a zero of even order at the origin. In fact, come to think of it, not only is there no standard definition of the order of a zero of a continuous function, it's easy to see that it's impossible to give such a definition, assuming two natural conditions: Define $\Bbb N = \{0,1,2\dots\}$. Prop There does not exist a function $O:C(\Bbb R)\to\Bbb N$ such that (i) $O(f)>0$ if $f(0)=0$ and (ii) $O(fg)=O(f)+O(g)$. Proof. If $O$ is such a function then for every positive integer $n$ we have $$O(|x|)=nO(|x|^{1/n})\ge n.$$
Axiom of Regularity allows for this set be an element of itself I'm new to set theory, and the axiom of regularity has been giving me some trouble. It states that every non-empty set A has an element B such that A and B are disjoint sets. Apparently, this axiom implies that a set can't be an element of itself. I've heard (and agree with) arguments like this: Let A be a set, and apply the axiom of regularity to {A}, which is a set by the axiom of pairing. We see that there must be an element of {A} which is disjoint from {A}. Since the only element of {A} is A, it must be that A is disjoint from {A}. So, since A ∈ {A}, we cannot have A ∈ A (by the definition of disjoint). However, let's say there's a set A = {{1, 2}, A}. It seems like this set obeys the axiom of regularity, because {1, 2} is an element of A, and {1,2} and A are disjoint sets. However, the axiom of regularity shouldn't allow A to be an element of itself. I suppose there's a flaw in my logic somewhere, and I'm hoping someone knowledgeable can show me why this isn't allowed.
Just because a set appears to obey the axiom of regularity doesn't mean it actually is a set! The axiom of regularity restricts what sets exist: if a non-empty set exists, then it has an element that is disjoint from it. The axiom of regularity doesn't say that any putative collection which follows this rule has to actually exist as a set. So, all you have observed is that if a set $A=\{\{1,2\},A\}$ existed, then $A$ would not be a counterexample to the axiom of regularity. This in no way proves that such a set actually exists! And in fact, if such a set did exist, then $\{A\}$ would be a counterexample to the axiom of regularity. This is a contradiction, and therefore no such set $A$ exists.
I'm actually not very familiar with set theory, but I think I see the issue here: when working with set theory, you adopt a certain definition of "set", which entails some mathematical rules (including the axiom of regularity) that any object has to satisfy in order to be a set. An object $A$ which satisfies $A = \{\{1,2\}, A\}$ may "exist" (whatever that means), but it does not satisfy the rules of "set-hood", and therefore is not a set. Just because the notation you use to define it looks set-like, that doesn't make it a set. I could probably make a rough analogy to the "number" $0.00\ldots1$, which is written using basically the same notation as many things which are numbers, but does not actually satisfy the rules of "number-hood". You might also find it useful to read Define $A = \{ 1,2,A \}$, $A$ can not be a set (Axiom of regularity). Can $A$ be a "class" or a "collection" of elements., e.g. Asaf Karagila's answer which points out that the definition $A = \{\{1,2\}, A\}$ is circular. Depending on your frame of mind that might make more sense than what I've said above.
Is $7$ the only prime followed by a cube? I discovered this site which claims that "$7$ is the only prime followed by a cube". I find this statement rather surprising. Is this true? Where might I find a proof that shows this? In my searching, I found this question, which is similar but the answers seem focused on squares next to cubes. Any ideas?
This is certainly true. Suppose $n^3 - 1$ is prime for some $n$. We get that $n^3-1 = (n-1)(n^2 + n + 1)$, so $n-1$ divides $n^3 - 1$. If $n-1>1$ then we have a contradiction to $n^3 - 1$ being prime, so $n=2$ is forced, and $n=2$ gives exactly the prime $7$.
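A quick search confirms that $7$ is the only such prime in a large range (a Python sketch using SymPy's primality test):

```python
from sympy import isprime

hits = [n**3 - 1 for n in range(2, 10**4) if isprime(n**3 - 1)]
print(hits)  # [7]
```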
Wouldn't $-2$ also be a prime followed by $-1$ which is a cube of $-1$. As $x^2 + x +1$ will also equal one for $x=-2$.
Number of positive integral solution of product $x_{1} \cdot x_{2} \cdot x_{3}\cdot x_{4}\cdot x_{5}=1050$ is The number of positive integral solutions of the product $x_{1} \cdot x_{2} \cdot x_{3}\cdot x_{4}\cdot x_{5}=1050$ is $\bf{My\; Try}::$ Given $x_{1}\cdot x_{2}\cdot x_{3}\cdot x_{4}\cdot x_{5} = 2 \times 3 \times 5^2 \times 7$. Now let us have $5$ different boxes $x_{1}\;,x_{2}\;,x_{3}\;,x_{4}\;,x_{5}$ and $5$ balls on which the numbers $2\;,3\;,5\;,5\;,7$ are printed. Now we have to put the balls into the boxes, so we will form different cases. $\bullet \;$ If all the balls are in the same boxes (a box may contain all balls or no balls), then the no. of ways $ = 5\times 5 \times 5\times 5 \times 5 = 5^5$ $\bullet \; $ If all balls are in different boxes, then the no. of ways $\displaystyle = $ Now I did not understand how I can count case $\bf{(II)}$. Help me. Thanks
You can put $2,3,7$ in the boxes in five ways each. The fives can be placed in $5+\binom 52$ ways. $5$ represents both of the $5$s in the same box. $\binom 52$ represents them in different boxes. So you should get $5\times 5 \times 5 \times 15$ ways. It appears that the first case in the question, where the $5$s are in the same box should give the result $5^4$ rather than $5^5$ because the pair of $5$s acts as a single unit rather than two separate units. If the $5$s are in separate boxes, they are indistinguishable, and can be swapped without changing the factorisation. The first $5$ can be placed in any of $5$ boxes, and the second in any of the remaining $4$ boxes, but this counts every possibility twice. So the number of ways of placing the $5$s is $\cfrac {5\times 4}2=10$. This second case counts for $5^3\times 10$ possibilities. The number of ways of choosing $r$ places out of a possible $n$ is so commonly encountered that it is given the (modern) symbol $\binom nr=\frac {n!}{r!(n-r)!}$. Apologies if this confused you, but it is useful to know, as you are likely to encounter it frequently, especially if you ask similar questions on this site.
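A brute-force count over ordered factorizations confirms the total (a Python sketch):

```python
def count_tuples(n, k):
    # ordered k-tuples of positive integers whose product is n
    if k == 1:
        return 1
    return sum(count_tuples(n // d, k - 1) for d in range(1, n + 1) if n % d == 0)

print(count_tuples(1050, 5))  # 1875 = 5 * 5 * 5 * 15
```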
Each box can contain $5$ elements so it will be total $5^5$ cases I think.