Finding $\int\sqrt{\sec(x)}\ln(\sec(x))\tan(x)\,\mathrm dx$ by substitution $$\int\sqrt{\smash[b]{\sec(x)}}\ln(\sec(x))\tan(x)\,\mathrm{d}x$$ I started by making u-substitution $u = \sec(x)$: $$\int\sqrt u\ln(u)\tan(x) \left(\frac{\mathrm{d}u}{\sec(x)\tan(x)}\right)$$ Now, does the $\tan(x)$ cancel? Then is integration by parts the appropriate method to use?
Yes! The $\tan(x)$ cancels. I'm sure you know what to do with the remaining $\sec(x)$, given that $u=\sec(x)$. Integration by parts is next.
Given $$\int\sqrt{\sec x}\ln(\sec x)\tan x\ dx$$ Apply integration by parts $u=\ln(\sec x),v^{\prime}=\sqrt{\sec x}\tan x$ $$=\ln(\sec x)\cdot2\sqrt{\sec x}-\int\tan x\cdot2\sqrt{\sec x} dx$$ Since $\int\tan x\cdot2\sqrt{\sec x}\ dx=4\sqrt{\sec x}$ $$=\ln(\sec x)\cdot2\sqrt{\sec x}-4\sqrt{\sec x}+C$$
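As a quick sanity check of the antiderivative above (my own addition, not part of either answer), one can differentiate $2\sqrt{\sec x}\,\ln(\sec x)-4\sqrt{\sec x}$ with SymPy and compare against the integrand; this is only a verification sketch, not a derivation.

    import sympy as sp

    x = sp.symbols('x')
    F = 2*sp.sqrt(sp.sec(x))*sp.log(sp.sec(x)) - 4*sp.sqrt(sp.sec(x))
    integrand = sp.sqrt(sp.sec(x))*sp.log(sp.sec(x))*sp.tan(x)
    print(sp.simplify(sp.diff(F, x) - integrand))   # SymPy should reduce this difference to 0
    # numerical spot check at an arbitrary point, essentially 0 up to rounding
    print(sp.N(sp.diff(F, x).subs(x, 0.7) - integrand.subs(x, 0.7)))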
How to determine if 3 points on a 3-D graph are collinear? Let the points $A, B$ and $C$ be $(x_1, y_1, z_1), (x_2, y_2, z_2)$ and $(x_3, y_3, z_3)$ respectively. How do I prove that the 3 points are collinear? What is the formula?
From $A(x_1,y_1,z_1),B(x_2,y_2,z_2),C(x_3,y_3,z_3)$ we can get their position vectors. $\vec{AB}=(x_2-x_1,y_2-y_1,z_2-z_1)$ and $\vec{AC}=(x_3-x_1,y_3-y_1,z_3-z_1)$. Then $||\vec{AB}\times\vec{AC}||=0\implies A,B,C$ collinear.
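For readers who want to try this numerically, here is a small sketch (my own illustration, not part of the answer) applying the cross-product test with NumPy; the points are made-up examples.

    import numpy as np

    def collinear(A, B, C, tol=1e-12):
        AB = np.asarray(B, dtype=float) - np.asarray(A, dtype=float)
        AC = np.asarray(C, dtype=float) - np.asarray(A, dtype=float)
        return np.linalg.norm(np.cross(AB, AC)) < tol

    print(collinear((0, 0, 0), (1, 2, 3), (2, 4, 6)))   # True: the third point is 2*(second point)
    print(collinear((0, 0, 0), (1, 2, 3), (2, 4, 7)))   # False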
If $|AB|+|BC|=|AC|$, then $A$, $B$, $C$ are collinear.
Random Walk on Clock Hands We do a random walk on a clock. Each step the hour hand moves clockwise or counterclockwise, each with probability 1/2, independently of previous steps. If you start at 1, what is the expected number of steps before you hit 12? Well, the answer is 11 according to the simulations I ran. I know that to return to 1, or from any number back to itself, the expected number of steps is 12 because of the properties of the invariant distribution; so if you start at a certain position, in the long run you can expect to come back once every 12 steps. The invariant distribution is uniform across the 12 states. I can't explain it beyond that case though. I got ~20 steps on average to get the hand to move from 1 to 11.
These are not problems on a circle but on the linear interval of integers $$L=\{0,1,2,\ldots,12\},$$ where the hand $12$ is represented by both states $0$ and $12$, and every other hand by the state with its number. Starting from $k$ in $L$, the mean number of steps $t_k$ that the symmetric simple random walk needs to hit $0$ or $12$ is $$t_k=k(12-k),$$ hence your simulations giving $t_1=11$ and $t_2=20$ (since hitting $11$ starting from $1$ on the clock is equivalent to hitting $12$ starting from $2$) are accurate. To show this, the standard method is to consider the whole collection $(t_k)_{0\leqslant k\leqslant12}$ simultaneously. Then, $$t_0=t_{12}=0,$$ and the (simple) Markov property of the random walk after one step yields $$t_k=1+\tfrac12(t_{k-1}+t_{k+1}),$$ for every $1\leqslant k\leqslant11$. Solving this $13\times13$ affine system yields the desired formula.
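To complement the derivation, here is a small simulation sketch (my own addition, with an arbitrary number of trials) that estimates the mean hitting time of state $0$/$12$ and reproduces $t_1\approx 11$ and $t_2\approx 20$.

    import random

    def mean_hitting_time(start, size=12, trials=20000):
        total = 0
        for _ in range(trials):
            pos, steps = start, 0
            while pos % size != 0:              # states 0 and 12 both represent the hand at 12
                pos += random.choice((-1, 1))
                steps += 1
            total += steps
        return total / trials

    print(mean_hitting_time(1))   # about 11 = 1*(12-1)
    print(mean_hitting_time(2))   # about 20 = 2*(12-2)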
    from __future__ import division   # only needed on Python 2; must come first
    from random import random
    import matplotlib.pyplot as plt
    import numpy as np
    # %matplotlib inline              # uncomment when running in a Jupyter notebook

    r = 12                  # number of positions on the clock
    lt = [0] * r            # lt[i] == 1 once position i has been visited
    count, pos = 0, 0
    flag = False
    n = 10000               # number of repetitions of the experiment
    lc = []                 # steps needed to visit every position, one entry per repetition
    for i in range(n):
        while not flag:
            count += 1
            if random() < .5:
                pos += 1
            else:
                pos += -1
            pos %= r
            lt[pos] = 1
            if sum(lt) == 12:
                flag = True
        lc.append(count)
        count = 0
        flag = False
        lt = [0] * r
        pos = 0
    plt.plot(lc)
    print(np.average(lc))

~67.04525 as far as I found. How: I just built an array of 12 zeros and a uniformly distributed random variable, and I defined that when this variable is less than .5 the walk steps clockwise one position, and otherwise the other way around (turning the 0 in that position into a 1). Counting the number of steps it took for all the 0's to become 1's, and repeating this 10000 times, I arrived at that result. Feel free to run this code in a Jupyter notebook online.
Showing that any of n balls drawn without replacement has the same probability of being a particular colour Suppose that $X\sim \text{HGeom}(w,b,n)$ represents the distribution of $w$ white and $b$ blue balls (where $w+b=n$) in an urn. Let $X_j$ represent the indicator random variable of the $j$-th ball being white if they are drawn without replacement. My question is how you can show that $E(X_j) = w/(w+b)$ for any $j$ by symmetry. Clearly, $$\begin{align} E(X_1) &=\frac{w}{w+b}\\[3ex] E(X_2) &= \frac{w}{w+b}\left(\frac{w-1}{w+b-1}\right) +\frac{b}{w+b} \left(\frac{w}{w+b-1}\right) =\frac{w}{w+b} \end{align} $$ Does this pattern continue for all $j$ up to $n$? Could it be extended if there were balls of $k$ different colours?
I believe there may be a number of elegant proofs, perhaps relying on the linearity of expectation of random variables and indicator variables (expectation = probability). Joseph K. Blitzstein has a similar problem explained here, which could be paraphrased as follows with regard to the symmetry insight: This is true by symmetry. The first ball is equally likely to be any of the $b + w$ balls, so the probability of it being white is $\frac{w}{w +b}.$ But the second ball is also equally likely to be any of the $b + w$ balls (there aren’t certain balls that enjoy being chosen second and others that have an aversion to being chosen second); once we know whether the first ball is $W$ we have information that affects our uncertainty about the second ball, but before we have this information, the second ball is equally likely to be any of the balls. Alternatively, intuitively it shouldn’t matter if we pick one ball at a time, or take one ball with the left hand and one with the right hand at the same time. By symmetry, the probabilities for the ball drawn with the left hand should be the same as those for the ball drawn with the right hand. Every possible result can be seen as a permutation of the $w+b$ balls, which are otherwise distinguishable by their order of extraction; for any single result, each ball could equally well have been extracted one position later than the position it occupies. The actual result can therefore be viewed as one of $w+b$ cyclic shifts, obtained by sliding every ball one position forward with period $w+b$, so that every single result is matched by $w+b$ results in which the relative positions don't change and each ball occupies every possible extraction point. In your calculation you get to $E(X_1)=\frac{w}{w+b}$ and $E(X_2)=\frac{w}{w+b}\left(\large \Box \right),$ where happily, $\large \Box =1,$ and hence, $E(X_1)=E(X_2).$ So what we want is that this pattern holds for all $X_i,$ such that $E(X_i)=\frac{w}{w+b}\left(\color{red}{\large \Box} \right)$ with $\color{red}{\large \Box}=1$ for all $i$'s. And this pattern can possibly be teased out by just seeing what happens next, in the case of $X_3:$ $$ E(X_3) =\Tiny \left(\frac{w}{w+b}\right) \left(\frac{w-1}{w+b-1}\right)\left(\frac{w-2}{w+b-2}\right) +2\left(\frac{b}{w+b}\right) \left(\frac{w}{w+b-1}\right)\left(\frac{w-1}{w+b-2}\right) + \left(\frac{b}{w+b}\right) \left(\frac{b-1}{w+b-1}\right)\left(\frac{w}{w+b-2}\right)$$ Clearly we'll always be able to extract $E(X_1)=\frac{w}{w+b}$ as a factor in front of the sum, since $w$ and $w+b$ appear in each term in the numerator and denominator, respectively. What remains to be proven is that the factor multiplying $E(X_1)$ is always equal to $1:$ $$\begin{align} 1=\Tiny{ \left(\frac{w-1}{w+b-1}\right)\left(\frac{w-2}{w+b-2}\right) +2\left(\frac{b}{1}\right) \left(\frac{1}{w+b-1}\right)\left(\frac{w-1}{w+b-2}\right) + \left(\frac{b}{1}\right) \left(\frac{b-1}{w+b-1}\right)\left(\frac{1}{w+b-2}\right)}\implies \\[3ex] \small{(w+b-1)(w+b-2)=(w-1)\;(w-2) + 2\;b\;(w-1) + b\;(b-1)\\ ={2 \choose 0}(w-1)\;(w-2) +{2 \choose 1} b\; (w-1) + {2\choose 2} b\;(b-1)} \end{align}$$ But the LHS is the number of 2-permutations of $(w - 1) + b$ elements, while the RHS is the binomial-type expansion, with $w$ and $b$ denoting the number of elements in the classes $\text W$hite and $\text{B}$lack, respectively.
This pattern will hold for any $X_i$; the identity needed is $$\begin{align} \left((w-1) + b\right)\left((w-2) + b\right)\cdots\left((w-i+1) +b\right)&\\[3ex] =\small{ {i-1 \choose 0} (w-1)(w-2)\cdots (w-i+1)\\+\cdots + {i-1 \choose j} (w-1)\cdots (w-i+j+1)\,b\,(b-1)\cdots(b-j+1)\\+\cdots+{i-1 \choose i-1} b\,(b-1)\cdots (b-i+2)} \end{align}$$ where each side is a product of $i-1$ factors and the term with ${i-1\choose j}$ pairs $i-1-j$ falling factors starting at $w-1$ with $j$ falling factors starting at $b$. This is the falling-factorial form of Vandermonde's identity, of which the displayed case above is the instance $i=3$.
There are several perspectives from which to recognize the fact without the tedious calculation. Here is some of my understanding, which I can share with you. The usual "fallacy" is to think that when $j > 1$ we have already obtained the information from $X_1, X_2, \ldots, X_{j-1}$, so the probability changes from $w / (w + b)$. Although the $X_j$ are dependent, the question asks about the marginal distribution of $X_j$, not the conditional distribution of $X_j \mid X_1, X_2, \ldots, X_{j-1}$. The latter conditional probability corresponds to the experiment in which we draw the first $j-1$ balls and put them on the table, observing their colors, so that the probability for the next draw depends on the colors of the balls already drawn. In our case, however, the marginal distribution is as if we do not observe the colors of the first $j-1$ balls and just put them into another bag immediately after drawing them. So the marginal distribution of each ball is the same, assuming they are all equally likely to be drawn. Assigning an order to the balls is like assigning a permutation. Imagine we label the $n$ balls $1, 2, \ldots, n$ and assume every permutation is equally likely; then the probability that the $i$-th ball is put in the $j$-th position equals the number of favorable permutations divided by the total number of permutations. For each favorable permutation, we can cyclically shift the balls so that the $i$-th ball lands in any other $k$-th position. (E.g. if the original permutation is $(1, 3, 2, 4)$, we can shift it to $(4, 1, 3, 2), (2, 4, 1, 3), (3, 2, 4, 1)$.) By grouping the permutations in this way, we can claim that the $i$-th ball is equally likely to be in any one of the positions. Now paint the first $w$ balls white, and sum the probabilities to get $w/n$.
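As an illustration of the symmetry claim (my own addition, with arbitrarily chosen $w=3$, $b=5$), a quick simulation shows that every draw position is white with frequency close to $w/(w+b)$:

    import random

    w, b, trials = 3, 5, 50000
    counts = [0] * (w + b)                 # counts[j]: times the (j+1)-th draw is white
    for _ in range(trials):
        balls = ['W'] * w + ['B'] * b
        random.shuffle(balls)
        for j, ball in enumerate(balls):
            if ball == 'W':
                counts[j] += 1
    print([round(c / trials, 3) for c in counts])   # every entry close to w/(w+b) = 0.375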
How many three digit numbers exist such that the third digit is the geo mean How many three digit numbers exist such that one of the digits is the geometric mean of the other two? A 12, B 18, C 24, D other So, $N = 100a + 10b + c$ let $c =\sqrt{ab}$. $ab$ must be a perfect square, so $ab = 1, 4, 9, 16, 25, 36, 49, 64, 81$ Case: $ab = 1 \implies a=1, b=1$. Case: $ab = 4 \implies a = 2, b=2$ Case: $ab = 9 \implies a=3, b=3$. Case: $ab = 16 \implies (a=8, b=2), (a=4, b=4)$ Case: $ab = 25 \implies a=5, b=5$ Case: $ab = 36 \implies (a=9, b=4), (a=6, b=6)$ Case: $ab = 49 \implies (a=7, b=7)$ Case: $ab = 64 \implies (a=8, b=8)$ Case: $ab=81 \implies (a=9, b=9)$. $(a=8, b=2, c = 4)$ also means that we could replace this with, $ (a, b, c) = (2, 4, 8) = (4, 2, 8) = (4, 8, 2)$ $(a=9, b=4, c=6)$ also means $(a, b, c) = (9, 4, 6) = (4, 6, 9) = (9, 6, 4) = (4, 9, 6) = (6, 9, 4) = (6, 4, 9)$. So to me, the answer is B $18$
You asked two different questions. The question in the title appears to be: "How many three digit numbers exist such that the third digit is the geometric mean of the other two?" I will answer the question in the statement of the problem: "How many three digit numbers exist such that one of the digits is the geometric mean of the other two?" Let $x, y \geq 0$. The geometric mean of $x$ and $y$ is $\sqrt{xy}$. A three digit positive integer is a number of the form $100a + 10b + c$, where $a \in \{1, 2, 3, 4, 5, 6, 7, 8, 9\}$ and $b, c \in \{0, 1, 2, 3, 4, 5, 6, 7, 8, 9\}$. We consider cases: The geometric mean of the digits is $0 \implies 0 = \sqrt{a \cdot 0}$, where $a \in \{1, 2, 3, 4, 5, 6, 7, 8, 9\}$. There are nine possibilities: $100, 200, 300, 400, 500, 600, 700, 800, 900$. If $n \in \{1, 2, 3, 4, 5, 6, 7, 8, 9\}$, $n = \sqrt{n \cdot n}$. There are nine such numbers. They are $111, 222, 333, 444, 555, 666, 777, 888, 999$. $2 = \sqrt{1 \cdot 4}$. Then the number is one of the $3! = 6$ permutations of the digits $1$, $2$, and $4$. They are $124, 142, 214, 241, 412, 421$. $3 = \sqrt{1 \cdot 9}$. Then the number is one of the $3! = 6$ permutations of the digits $1$, $3$, and $9$. They are $139, 193, 319, 391, 913, 931$. $4 = \sqrt{2 \cdot 8}$. The number is one of the $3! = 6$ permutations of the digits $2$, $4$, and $8$. They are $248, 284, 428, 482, 824, 842$. $6 = \sqrt{4 \cdot 9}$. The number is one of the $3! = 6$ permutations of the digits $4$, $6$ and $9$. They are $469, 496, 649, 694, 946, 964$. Hence, there are $2 \cdot 9 + 4 \cdot 6 = 42$ three digit numbers such that one of the digits is the geometric mean of the other two.
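For what it's worth, a short brute-force check (my own sketch, not part of the answer) confirms the count of $42$ for the question as stated in the problem body:

    count = 0
    for n in range(100, 1000):
        a, b, c = n // 100, (n // 10) % 10, n % 10
        d = (a, b, c)
        # one digit is the geometric mean of the other two  <=>  its square equals their product
        if any(d[i] ** 2 == d[(i + 1) % 3] * d[(i + 2) % 3] for i in range(3)):
            count += 1
    print(count)   # 42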
The third digit is $\sqrt{ab}$, so the square root must yield a single digit, i.e. it lies in $[0,9]$. For third digit $0$ we need digits whose product is $0$, giving $100,\ldots,900$; for $1$ the only pair is $1,1$, giving $111$; for $2$ the product must be $4$, giving $222,142,412$; for $3$: $333,193,913$; for $4$: $284,824,444$; for $5$: $555$; for $6$: $496,946,666$; for $7$: $777$; for $8$: $888$; and finally for $9$: $999$. The total is $9+1+3+3+3+1+3+1+1+1=26$, so the answer is $D$.
If $B$ is a finite Boolean algebra, then it contains an atom Let $B=\{a_1,\dots,a_n,0,1\}$ be a finite Boolean algebra. I want to show that there exists an atom $x\in B$. So I want to show that there exists $x\in B$, such that for each $a\in B$ for which $a<x$, we have $a=0$. I was thinking of maybe looking at $a=\bigwedge\{a_1,\dots,a_n\}$. If $a\neq 0$, then it seems to me that $a$ is an atom. So assume $a=0$. Is it possible to shrink the size of $\{a_1,\dots,a_n\}$ so that we get $a\neq 0$? And for the biggest set for which $a> 0$ we could argue that $a$ is an atom? (Short remark: I do know that every finite Boolean algebra equals $\mathcal P(\operatorname{At}(B))$, but to me it seems that we use the fact that $\operatorname{At}(B)$ is nonempty, or at least, that seems to be assumed. So I want to prove this from scratch.)
Suppose there is no atom. Since $a_1\neq 0$ is then not an atom, there exists $i_1$ such that $0<a_{i_1}<a_1$; since $a_{i_1}$ is not an atom, there exists $i_2$ such that $0<a_{i_2}<a_{i_1}$, etc. Since there are only finitely many $a_i$'s, $i_k=i_{k'}$ for some $k\ne k'$. Thus, $a_{i_k}<a_{i_k}$ by transitivity, which is impossible.
Assume that $x\in B$ is not an atom. Then there are $x_a,x_b\in B$ such that $x_a\vee x_b=x$, $x_a,x_b\neq x$, and $x_a,x_b\neq 0$. If $x_a$ is not an atom, then we can find $x_{aa},x_{ab}$ such that $x_{aa}\vee x_{ab}=x_a$, $x_{aa},x_{ab}\neq x_a$, and $x_{aa},x_{ab}\neq 0$. Continuing in this way we find $$x=x_1\vee x_2\vee ...\vee x_n$$ such that $x_i\neq0$ and, because $B$ is finite, if $x_i=x_a\vee x_b$ with $x_a,x_b\neq 0,x_i$, then $x_a,x_b\in\{x_1,...,x_n\}$. When this happens $n$ can be decreased maintaining the same property. Take $n$ minimum with the property above. Each of the $x_i$ must be atoms, otherwise one can split them and reduce $n$ further.
Is it possible in a group of seven people for each to be friends with exactly 3 others? Is it possible in a group of seven people for each to be friends with exactly 3 others? I know that the sum of degrees of vertices in a graph must be even.
Let $G(V,E)$ represent our graph. Because we know every person must be friends with exactly three other people, we know: $\forall v \in V$, $\deg(v) = 3$. By the hand-shaking lemma: \begin{align} 2|E| = \sum_{v \in V} \deg(v) \Rightarrow 2|E| = \sum_{n=1}^7 3 = 21. \end{align} However $|E|$ is an integer, so we reach a contradiction, since we get $|E| = \frac{21}{2}.$
So you consider the people as vertices and friendships as edges between them. Then the sum of all degrees is $21$, which is odd and therefore a contradiction, since the sum of the degrees must be even.
Find E(Y). Conditional Expectations Let $X$ be an exponential random variable with $\lambda =5$ and $Y$ a uniformly distributed random variable on $(-3,X)$. Find $\mathbb E(Y)$. My attempt: $$\mathbb E(Y)= \mathbb E(\mathbb E(Y|X))$$ $$\mathbb E(Y|X) = \int^{x}_{-3} y \frac{1}{x+3} dy = \frac{x^2+9}{2(x+3)}$$ $$ \mathbb E(\mathbb E(Y|X))= \int^{\infty}_{0} \frac{x^2+9}{2(x+3)} 5 e^{-5x} \, dx$$
Hint: $$\int^{x}_{-3} y \frac{1}{x+3} dy=\frac{\frac12\cdot y^2}{x+3}\bigg|_{-3}^x=\frac12\cdot \frac{x^2-(-3)^2}{x+3}=\frac12\cdot \frac{x^2-9}{x+3}$$ At the numerator you can use the second binomial formula.
You made a mistake in your calculation of $\mathbb E[Y|X]$. The correct calculation is $$ E[Y|X=x] = \frac{1}{x+3}\int_{-3}^x{y dy} = \frac{1}{x+3} \left(\frac{1}{2}y^2\right)\bigg|_{-3}^x = \frac{x^2-9}{2(x+3)} = \frac{x-3}{2}. $$ We then have that $$ \mathbb E[Y] = \int_0^{\infty}\frac{x-3}{2}\cdot 5e^{-5x}dx = \left(\frac{1}{10}e^{-5x}(14-5x)\right)\bigg|_0^{\infty} = -\frac{7}{5}. $$
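A quick Monte Carlo check of this value (my own sketch; the seed and sample size are arbitrary) agrees with $\mathbb E[Y]=-7/5$:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 1_000_000
    X = rng.exponential(scale=1/5, size=n)   # exponential with rate lambda = 5
    Y = rng.uniform(low=-3, high=X)          # Y | X ~ Uniform(-3, X)
    print(Y.mean())                          # close to -7/5 = -1.4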
Is the affine cone of a flat projective scheme again flat? I'm trying to solve Hartshorne Chap.III Ex.9.5.(c): The biggest problem is to show the existence of a closed subscheme $\tilde{X}\subset \mathbb{P}_{T}^{n+1}$ such that $\tilde{X}_{t} = \operatorname{C}(X_{t})$. I guess the projective cone $\operatorname{C}(X) \subset \mathbb{P}_{T}^{n+1}$ of $X$ works. Note that an inclusion $\operatorname{C}(X_{t}) \subset \operatorname{C}(X)_{t}$ always holds but an equality doesn't necessarily hold in a general case. However, one can easily see that $\operatorname{C}(X_{\eta}) = \operatorname{C}(X)_{\eta}$, where $\eta$ is the generic point of $T$. Moreover, it is clear that the Hilbert polynomial of $\operatorname{C}(X_{t})$ is independent of $t \in T$. Hence it suffices to show that the Hilbert polynomial of $\operatorname{C}(X)_{t}$ is independent of $t$, i.e., $\operatorname{C}(X)$ is flat over $T$. (Then from the inclusion $\operatorname{C}(X_{t}) \subset \operatorname{C}(X)_{t}$ and the agreement of their Hilbert polynomials we conclude that $\operatorname{C}(X_{t}) = \operatorname{C}(X)_{t}$.) Note that $\operatorname{C}(X)$ is almost everywhere an $\mathbb{A}^{1}-$bundle over $X$, so almost everywhere flat. Therefore we only have to check the flatness of the affine cone $\operatorname{C}_{\operatorname{aff}}(X)$ of $X$. So we have reduced the original problem to the following: Let $A$ be a noetherian domain and write $T:= \operatorname{Spec}A$. Let $X \subset \mathbb{P}_{T}^{n}$ be a closed subscheme which is flat over $T$. Then, is the affine cone $\operatorname{Spec} A[x_0,\cdots x_n] / \operatorname{I}(X)$ flat over $T$? Thank you!
WARNING: The below is not completely correct and I am in the process of attempting to fix it. $\renewcommand{\Proj}{\operatorname{Proj}} \renewcommand{\Spec}{\operatorname{Spec}}$ Actually, $C(X_t)\cong C(X)_t$, always. It suffices to work affine-locally on $T$, so suppose $T= \Spec R$ and write $X=\Proj(S_X)$ where $S_X=R[x_0,\cdots,x_n]/\Gamma_*(\mathcal{I}_X)$. By definition, $C(X) = \Proj(S_X[x_{n+1}])$, $X_t=\Proj(S_X)\times_T \Spec k(t)$, and $C(X)_t = \Proj(S_X[x_{n+1}])\times_T \Spec k(t)$. Since Proj is compatible with base extensions along the degree-zero part of our algebra, we have $X_t\cong \Proj(S_X\otimes_R k(t))$, which is Proj of $$k(t)[x_0,\cdots,x_n]/Im(I_X\otimes_R k(t)\to k(t)[x_0,\cdots,x_n]).$$ Thus $C(X_t)$ is Proj of $$k(t)[x_0,\cdots,x_{n+1}]/(Im(I_X\otimes_R k(t)\to k(t)[x_0,\cdots,x_n])),$$ which can be written as $$\left(k(t)[x_0,\cdots,x_n]/Im(I_X\otimes_R k(t)\to k(t)[x_0,\cdots,x_n])\right)[x_{n+1}],$$ or $(S_X\otimes_R k(t))[x_{n+1}]$, so $C(X_t)\cong \Proj((S_X\otimes_R k(t))[x_{n+1}])$. On the other hand, $C(X)_t\cong \Proj((S_X[x_{n+1}])\otimes_R k(t))$. As $(S_X\otimes_R k(t))[x_{n+1}]\cong (S_X[x_{n+1}])\otimes_R k(t)$, we see that $C(X_t)\cong C(X)_t$, resolving your difficulty.
Number of ways to place $k$ non-attacking rooks on an $m\times n$ chessboard I need to calculate the number of ways to place $k$ non-attacking rooks on an $m \times n$ table where $k \leq n$ and $k \leq m$. ("Non-attacking" means that no two rooks may share a row or column.) My attempt: Calculate the number of ways to place $k$ rooks on a $k \times k$ board ($k!$), then multiply by the number of ways to select a $k \times k$ board from an $m \times n$ board. (This is the part I can't calculate, if it is correct at all.) My question: Is my approach good and if so, how to calculate the second part?
It is a reasonable approach. The columns can be chosen in $\binom{m}{k}$ ways and for each way of selecting columns the rows can be chosen in $\binom{n}{k}$ ways, so altogether there are $\binom{m}{k}\binom{n}{k}\,k!$ placements.
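The count $k!\binom{m}{k}\binom{n}{k}$ can be checked by brute force for small boards; the sketch below (my own addition, with arbitrary test sizes) enumerates all $k$-subsets of cells and keeps those with distinct rows and columns.

    from itertools import combinations
    from math import comb, factorial

    def brute_force(m, n, k):
        cells = [(r, c) for r in range(m) for c in range(n)]
        count = 0
        for placement in combinations(cells, k):
            rows = {r for r, _ in placement}
            cols = {c for _, c in placement}
            if len(rows) == k and len(cols) == k:   # no two rooks share a row or column
                count += 1
        return count

    def formula(m, n, k):
        return comb(m, k) * comb(n, k) * factorial(k)

    for m, n, k in [(3, 4, 2), (4, 4, 3), (5, 3, 3)]:
        print(brute_force(m, n, k), formula(m, n, k))   # the two numbers agree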
I think you can choose $k$ squares out of $nm$ in $\pmatrix{ nm\\k}$ different ways, and for each one of these choices there are $k!$ different ways to set the rooks, so the result is $k!\pmatrix{ nm\\k}$.
Is $\bigl(X(X-a)(X-b)\bigr)^{2^n} +1$ an irreducible polynomial over $\mathbb{Q}[X]$? Let $a, b \in \mathbb{Q}$, with $a\neq b$ and $ab\neq 0$, and $n$ a positive integer. Is the polynomial $\bigl(X(X-a)(X-b)\bigr)^{2^n} +1$ irreducible over $\mathbb{Q}[X]$? I know that $\bigl(X(X-a)\bigr)^{2^n} +1$ is irreducible over $\mathbb{Q}[X]$, but I have a hard time generalizing my proof with three factors. PS: This is not homework (and may even be open).
Call $f$ your polynomial. Then $f$ is not irreducible if and only if there exists a root $x\in\mathbb{Q}$ of $f$. If such a root exists then, for $n\geq 1$ we have $$0\leq(x(x-a)(x-b))^{2^n}=-1.$$ I leave the case $n=0$ to you.
Finding maximum value of $|f(z)|$ using Maximum modulus theorem? In the question asked here → Maximum Modulus Exercise, I want to know: if we just want to find the maximum value of $|f(z)|$, why has 'Marlu' Sir in his answer (here https://math.stackexchange.com/a/325832/168676) done calculations to show that there are no other maxima? My attempt: as $f(z)=z^2-3z+2$ is analytic inside and on $|z|=1$, by the maximum modulus theorem the maximum value of $|f(z)|$ occurs on the boundary, and by the triangle inequality, $|f(z)|=|z^2-3z+2|\le|z^2|+3|z|+2$ $$\le 6$$ (since on the boundary, $|z|=1$). So from here we know the maximum value of $|f(z)|$ cannot exceed $6$, and the fact that at the point $z=-1$, which is on the boundary, $f(-1)=6$, confirms that the maximum value of $|f(z)|$ is $6$. Am I correct? Please help me...
No, 6 is just an upper bound but not the maximum modulus; check out Zill's complex analysis book. Check out my annotated image if you still have issues.
MENSA IQ Test and rules of maths In a Mensa calendar, "A daily challenge - your daily brain workout", I got this and put the challenge up at work. The challenge starts with: "Assume you are using a basic calculator and press the numbers in the order shown, replacing each question mark..." and continues: "What is the highest number you can possibly score?" Basically, only using $+,-,\times,\div$, once each in place of a question mark: $5 ? 4 ? 7 ? 3 ? 2 =$ We all worked out the operators to be $5 + 4 \times 7 - 3/2 =$, except that I calculated the answer to be $31.5$ and the others argued $30$. The answer sheet from MENSA says the calculated total is 30. Nobody actually understood the first part about using a basic calculator. I initially thought the challenge was all about the rules of maths, and when I asked why nobody else applied the rules of maths, it turned out they had all forgotten about them, not that they had followed the instruction to use some "basic calculation". I emailed MENSA and queried them about the challenge and they replied: "Thank you for your email. On a basic calculator it will be: 5 + 4 = 9, 9 x 7 = 63, 63 – 3 = 60, 60 ÷ 2 = 30. Kind regards, Puzzle Team" My reply: "Thank you for your reply. Could you please define what a basic calculator is? I tried 4 pocket, £1 calculators, and all gave me 31.5." And finally their definition: "I guess what the question really means, whether you do the sum manually or on a calculator, is don’t change the order of the numbers. The Casio calculators we have in the office allow you to do the sum as it appears: 5 + 4 = 9, 9 x 7 = 63, 63 – 3 = 60, 60 ÷ 2 = 30. Kind regards, Puzzle Team" So they guess the challenge meant to do it that way. Why not just say to ignore the rules of maths? What is the point of this anyway? My original question on Maths Stack (this one) was why MENSA used 30 instead of 31.5, and initially I did not understand that using a basic calculator meant calculating left to right by pressing equals after each operation. So what is going on here? If they wanted us to ignore the rules of maths they should have said that, because my basic calculator gives me 31.5 and not 30.0 (I don't have a special Casio MENSA calculator though). The Windows standard calculator gives me 30. Why? None of my pocket, office, el cheapo calculators do this. Google and the Windows scientific calculator give me 31.5, as do all my electronic calculators.
In response to the edit of the initial post, the answer is clearly $30$. Basic calculators are assumed to evaluate in order from left to right. Original post I responded to: "In a Mensa calendar, IQ daily challenge, I got this and put a challenge up at work. Using $+,-,\times$ and $\div$ only once each. Use each operator only once to get the highest answer. 5 ? 4 ? 7 ? 3 ? 2 = We all worked out 5 + 4 x 7 - 3 / 2 = 30, except that my resulting answer was 31.5 and not 30, like in the answers of the MENSA calendar. Why was I the only one that applied the rules of maths on this? And when I asked why nobody else applied the rules of maths, I got the weirdest looks. Nobody knew about multiplication before division, subtraction before adding? I thought that was why the question was marked as the most difficult, to test if you knew this." Response to original post: Sadly, many people forget the basic rules of arithmetic as they (a) don't view them as affecting their lives, (b) didn't like maths, and/or (c) know technology can handle the problem for them. The issue with the last point is that different technologies handle things differently. The Google calculator (much like most graphing calculators) will handle order of operations for you correctly. The standard Windows calculator appears to be operating like an old 4-function calculator, which evaluates after every operation is completed, as opposed to using the correct order of operations. This can also happen when users hit enter after every operation is finished as opposed to when the whole expression is finished. (I don't have access to a Windows calculator right now so I can't tell which is the reason for the wrong answer.)
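To make the two conventions concrete, here is a tiny sketch (my own addition) that evaluates the expression both with standard operator precedence and strictly left to right, reproducing $31.5$ and $30$:

    # standard precedence, as a scientific calculator (or Python) applies it
    print(5 + 4 * 7 - 3 / 2)                     # 31.5

    # left-to-right evaluation, as a basic calculator that applies each
    # operation as soon as the next key is pressed
    result = 5
    for op, val in [('+', 4), ('*', 7), ('-', 3), ('/', 2)]:
        if op == '+':
            result += val
        elif op == '*':
            result *= val
        elif op == '-':
            result -= val
        else:
            result /= val
    print(result)                                # 30.0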
The answer is $31.5$. You know it, but you messed up the math. With subtraction and addition it shouldn't matter which is carried out first; they can both be considered identical operations (addition of a positive and a negative number, or vice versa). The answer is therefore $28+5-1.5$, whichever way you look at it.
Why do complex roots come in pairs? First time using this website so excuse me for using the body like this.
Complex roots of polynomials with real coefficients come in conjugate pairs because otherwise they wouldn't have real coefficients! Suppose a polynomial $p(z)=a_{0}+a_{1}z+ \ldots + a_{n}z^{n}$ has a complex root $\alpha$. Then $$\overline{p(\alpha)}=\overline{a_{0}+a_{1}\alpha+ \ldots + a_{n}\alpha^{n}}=\bar{a_{0}}+\bar{a_{1}}\bar{\alpha}+ \ldots + \bar{a_{n}}\bar{\alpha}^{n}\\=a_{0}+a_{1}\bar{\alpha}+ \ldots + a_{n}\bar{\alpha}^{n}=\bar{0}=0$$ So $\bar{\alpha}$ is also a root.
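A quick numerical illustration (my own addition, using a made-up cubic with real coefficients) shows the non-real roots appearing as a conjugate pair:

    import numpy as np

    # x^3 - 2x^2 + x - 2 = (x - 2)(x^2 + 1) has real coefficients
    print(np.roots([1, -2, 1, -2]))   # roots: 2, i and -i (a conjugate pair)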
Because, in the complex plane, a root is also a rotation. In general, think of a complex number in polar representation: $$z=r\cdot e^{i\phi}$$ Then taking the square root gives you: $$\sqrt{z} = z^{1/2} = \sqrt r\cdot e^{i\phi/2}$$ But the angle $(\phi/2+\pi)$ also satisfies the demands for being half the angle $\phi$, because: $$2\cdot(\phi/2+\pi) = \phi+2\pi = \phi \pmod{2\pi}$$ It all stems from the fact that rotations live on a circle, which is a cyclic space (obviously). When you do arithmetic on angles you must remember that a $full\ rotation = no\ rotation$, and from that it follows that when solving simple equations like $2x=y$ you end up with more than a single solution.
Five tyres over $40000$ kilometres The five tyres of a car (four road tyres and one spare) were used equally in a journey of $40,000~\text{km}$. The number of kilometres of use of each tyre was a. $40000$ b. $10000$ c. $32000$ d. $8000$ This is what I tried. I just want to know whether my approach is correct or not, or whether there is any better method to solve this type of question. Total kilometres travelled by $4$ tyres $= 40000 \cdot 4 = 160,000$. This has to be shared by $5$ tyres. So each tyre's share $= \frac{160,000}{5} = 32,000$. After we travel $32,000~\text{km}$, we are left with $4$ worn tyres and one new tyre. But if the tyres are rotated properly after each $8000~\text{km}$, every tyre is used for exactly $32,000~\text{km}$.
You have to consider that every tire, in order to have equal use, had to have been in the spare tire compartment at least once. This means that there were 5 journey segments in between the tire changes. Each segment was 8000 km long (40,000/5). So, because each tire was on the road in every segment except for 1, each tire was used for a total of 32,000 km (8000 x 4).
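A tiny bookkeeping sketch (my own addition) makes the rotation argument explicit: in each of the five 8000 km segments a different tyre rests, so every tyre accumulates 32,000 km.

    usage = [0] * 5                      # kilometres accumulated by each of the 5 tyres
    for segment in range(5):             # five segments of 8000 km each
        resting = segment                # the tyre that sits in the boot this segment
        for tyre in range(5):
            if tyre != resting:
                usage[tyre] += 8000
    print(usage)                         # [32000, 32000, 32000, 32000, 32000]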
You will switch tires $4$ times so every tire has a chance to run. So you will be making $5$ legs, and each leg will be $8000$. So each of the $5$ tires rests during one leg of $8000$. Or, each tire runs a total of $32000$.
Selecting 3 non-consecutive days of the week We need to find how many groups of $3$ days of the week there can be, given that no two days should be consecutive. The answer should be $7$, but I do not know how to get to the answer. Thanks in advance.
Use 1010100 to represent that you 'pick' the first, third, and fifth day of the week. What ways are there to pick three non-consecutive days? $1010100$ $1010010$ $1001010$ $0101001$ $0100101$ $0010101$ $0101010$ And that's it. It turns out to be equal to the number of days in the week since as you can see there is one time where you have two days that you don't pick (the two 0's), and that 00 bit can start at any of the 7 days, fixing the rest of the sequence.
The general formula for the number of circular arrangements of $k$ elements over $n$ spots with no consecutive elements is $$\binom{n-k}{k}+\binom{n-k-1}{k-1}$$ See Selection in Circular Table for a proof.
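Both the enumeration and the formula can be verified with a short brute-force sketch (my own addition); it treats the week cyclically, i.e. the last day is adjacent to the first, which is the convention under which the answer is $7$.

    from itertools import combinations
    from math import comb

    n, k = 7, 3

    def non_consecutive(sel):
        # no two selected days adjacent on the circular week
        return all((b - a) % n > 1 and (a - b) % n > 1 for a, b in combinations(sel, 2))

    print(sum(1 for sel in combinations(range(n), k) if non_consecutive(sel)))   # 7
    print(comb(n - k, k) + comb(n - k - 1, k - 1))                               # 7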
Why are sine and cosine vectors? I understand that sine and cosine functions are vectors, and I understand that what defines a vector is how it transforms under change of basis. Am I correct in understanding that means that for some angle $\alpha$ in the $(x,y)$ coordinate system and some $\alpha'$ in the rotated coordinate system $(x',y')$ $$\sin(\alpha')=\frac{\mathrm{d}\alpha}{\mathrm{d}\alpha'}\sin(\alpha)?$$ I've tried working with this definition for the transformation, which I've been led to believe is correct, but I can't seem to show that it holds. Is this even a correct starting point? If not, what is?
The word "vector" means slightly different things in various contexts. In linear algebra, a vector space is a structure satisfying a certain list of axioms, and a "vector" is just an element of whatever vector space you're talking about. In that sense $\sin$ and $\cos$ are vectors; they're elements of for example $C(\Bbb R)$, the space of all continuous functions on the line. But this use of the word "vector" has more or less nothing to do with the thing you understand about how a "vector" is defined by how it changes under coordinate transformations. That second sense of the word comes up in differential geometry, where for example it distinguishes vectors from "co-vectors". I may be oversimplifying, but I believe that the word is being used in two different ways in "$\sin$ is a vector" and "what defines a vector is how it transforms under change of basis".
Prove/Disprove: if $x^2 = a^2$, then $x = a$ From Prof. Charles Pinter's "A Book of Abstract Algebra"'s Chapter 4 exercises: For each of the following rules, either prove that it is true in every group $G$, or give a counter-example. $$ \text{if } x^{2}=a^{2}, \text{then } x=a$$ I believe that this is true by: $$xx = aa$$ by cancellation, $$x=a$$ Is that right? Also, when the word "prove" is used, does that mean to use theorems to prove?
What is it that you are canceling there? It looks to me like you used that $x=a$ to show that $x=a$, which is not a good plan. Think: is this true even in the real numbers?
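A couple of concrete checks in this spirit (my own addition; the specific elements are arbitrary choices): one in the real numbers under multiplication, and one in the additive group $\mathbb Z/8\mathbb Z$, where "$x^2$" means $x+x$.

    # in the real numbers: x = -3 and a = 3
    print((-3) ** 2 == 3 ** 2, -3 == 3)          # True False

    # in the additive group Z/8Z: x = 2 and a = 6 both "square" to 4
    x, a, n = 2, 6, 8
    print((x + x) % n == (a + a) % n, x == a)    # True False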
Cancellation is not possible, you must take the root of both sides. Square roots have the property $q = \pm\sqrt{q^2}$, therefore $a = \pm x$, which is not equivalent to $a = x$. The original statement has thus been disproved.
Compute $m_Z (t)$. Verify that $m'_Z (0)$ = $E(Z)$ and $m''_Z(0) = E(Z^2)$ Let $Z$ be a discrete random variable with $P(Z = z)$ = $1/2^z$ for $z = 1, 2, 3,...$ (b) Compute $m_Z (t)$. Verify that $m'_Z (0)$ = $E(Z)$ and $m''_Z(0) = E(Z^2)$ $E(Z) = \sum_{z=1}^{\infty} z P(Z = z) =\sum_{z=1}^{\infty} \frac{z}{2^z}$ Not sure how to proceed. $z^2$ for $E(Z^2)$ $m_z(t) = E[e^{tz}] = \sum_{z=1}^{\infty} \frac{e^{tz}}{2^z} = \sum_{z=1}^{\infty} (2e)^{tz - z} = \sum_{z=1}^{\infty} (2e)^{z(t-1)} = \frac{(2e)^{t-1}}{1-2e}$ Not sure if above is right. I know afterward, I would just take the derivative of $m_z(t)$ once and twice and set $t = 0$ to verify but having trouble simplifying.
It is not true that $$\frac{\exp(tz)}{2^z}=(2\text{e})^{tz-z}\,.$$ You can easily check that your version of $m_Z$ does not satisfy $m_Z(0)=1$. The correct moment generating function should be $$m_Z(t)=\sum_{z=1}^\infty\,\frac{\exp({tz})}{2^z}=\sum_{z=1}^\infty\,\left(\frac{\exp(t)}{2}\right)^z=\frac{\left(\frac{\exp(t)}{2}\right)}{1-\left(\frac{\exp(t)}{2}\right)}=\frac{\exp(t)}{2-\exp(t)}\,.$$ It should be easy to find $m_Z'$ and $m_Z''$ now. As for the calculations of $\mathbb{E}[Z]$ and $\mathbb{E}[Z^2]$, this link should be very useful: Proof of the equality $\sum\limits_{k=1}^{\infty} \frac{k^2}{2^k} = 6$.
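If you want to check the verification step symbolically, a short SymPy sketch (my own addition) differentiates the moment generating function above and recovers $E(Z)=2$ and $E(Z^2)=6$:

    import sympy as sp

    t = sp.symbols('t')
    M = sp.exp(t) / (2 - sp.exp(t))
    print(sp.simplify(sp.diff(M, t).subs(t, 0)))      # 2 = E[Z]
    print(sp.simplify(sp.diff(M, t, 2).subs(t, 0)))   # 6 = E[Z^2]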
The mistake is $$\frac{e^{tz}}{2^z} = (2e)^{tz - z}$$ Actually $$\frac{e^{tz}}{2^z} = (\frac {e^t} 2)^z \tag{*}$$ Anyhoo, right off the bat, you might recall the form $\frac 1 {2^z}$. It's a term of a geometric sequence. Do you know of the geometric distribution? Anyhoo, let's compute moments with mgf: $\forall |k| < 1$, we have $$\frac 1 {1-k} = \sum_{z=0}^{\infty} k^z$$ $$\implies \frac{d}{dk} \frac 1 {1-k} = \frac{d}{dk} \sum_{z=0}^{\infty} k^z$$ $$\implies \frac{d}{dk} \frac 1 {1-k} = \sum_{z=0}^{\infty} \frac{d}{dk} k^z$$ $$\implies \frac 1 {(1-k)^2} = \sum_{z=0}^{\infty} zk^{z-1}$$ $$\implies \frac 1 {(1-k)^2} = \sum_{z=1}^{\infty} zk^{z-1}$$ $$\implies \frac k {(1-k)^2} = \sum_{z=1}^{\infty} zk^{z}$$ $$\implies \frac {\frac 1 2} {(1-\frac 1 2)^2} = \sum_{z=1}^{\infty} z(\frac 1 2)^{z} = E[Z]$$ What's the moral lesson? We recognise $$\sum_{z=1}^{\infty} z(\frac 1 2)^{z} = E[Z]$$ as having to do with the derivative or integral of a geometric series. Similarly for the second moment we have, $$\sum_{z=1}^{\infty} z^2(\frac 1 2)^{z} = E[Z^2],$$ where we compute as follows: $$\frac{d}{dk} \frac k {(1-k)^2} = \frac{d}{dk} \sum_{z=1}^{\infty} zk^{z}$$ $$\implies \frac{d}{dk} \frac k {(1-k)^2} = \sum_{z=1}^{\infty} \frac{d}{dk} zk^{z} = \sum_{z=1}^{\infty} z^2k^{z-1}$$ $$\therefore, ([k][\frac{d}{dk} \frac k {(1-k)^2}])|_{k=\frac 1 2} = \sum_{z=1}^{\infty} z^2k^{z}|_{k=\frac 1 2} = E[Z^2]$$ Now, how do we do this the mgf way? Let's go back to $(*)$ $$M_Z(t) = \sum_{z=1}^{\infty} \frac{e^{tz}}{2^z} = \sum_{z=1}^{\infty} (\frac {e^t} 2)^z = \frac{\frac {e^t} 2}{1 - \frac {e^t} 2}$$ This agrees with the mgf for geometric on Wiki with $p=\frac 1 2$. So just differentiate and plug in: $$M_Z'(t) = \frac{d}{dt} \frac{\frac {e^t} 2}{1 - \frac {e^t} 2}$$ $$\therefore, M_Z'(0) = [\frac{d}{dt} \frac{\frac {e^t} 2}{1 - \frac {e^t} 2}]|_{t=0}$$ $$M_Z''(t) = \frac{d}{dt} \frac{d}{dt} \frac{\frac {e^t} 2}{1 - \frac {e^t} 2}$$ $$\therefore, M_Z''(0) = [\frac{d}{dt} \frac{d}{dt} \frac{\frac {e^t} 2}{1 - \frac {e^t} 2}]|_{t=0}$$
Order of finite fields is $p^n$ Let $F$ be a finite field. How do I prove that the order of $F$ is always of order $p^n$ where $p$ is prime?
Let $p$ be the characteristic of a finite field $F$.${}^{\text{Note 1}}$ Then since $1$ has order $p$ in $(F,+)$, we know that $p$ divides $|F|$. Now let $q\neq p$ be any other prime dividing $|F|$. Then by Cauchy's Theorem, there is an element $x\in F$ whose order in $(F,+)$ is $q$. Then $q\cdot x=0$. But we also have $p\cdot x=0$. Now since $p$ and $q$ are relatively prime, we can find integers $a$ and $b$ such that $ap+bq=1$. Thus $(ap+bq)\cdot x=x$. But $(ap+bq)\cdot x=a\cdot(p\cdot x)+b\cdot(q\cdot x)=0$, giving $x=0$, which is not possible since $x$ has order at least $2$ in $(F,+)$. So there is no prime other than $p$ which divides $|F|$. Note 1: Every finite field has a characteristic $p\in\mathbb N$ since, by the pigeonhole principle, there must exist distinct $n_1< n_2$ both in the set $\{1, 2, \dots, \lvert F\rvert +1\}$ such that $$\underbrace{1+1+\dots+1}_{n_1}=\underbrace{1+1+\dots+1}_{n_2},$$ so that $\underbrace{1+1+\dots+1}_{n_2-n_1}=0$. In fact, this argument also implies $p\le \lvert F\rvert$.
A slight variation on caffeinmachine's answer that I prefer, because I think it shows more of the structure of what's going on: Let $F$ be a finite field (and thus has characteristic $p$, a prime). Every element of $F$ has order $p$ in the additive group $(F,+)$. So $(F,+)$ is a $p$-group. A group is a $p$-group iff it has order $p^n$ for some positive integer $n$. The first claim is immediate, by the distributive property of the field. Let $x \in F, \ x \neq 0_F$. We have \begin{align} p \cdot x &= p \cdot (1_{F} x) = (p \cdot 1_{F}) \ x \\ & = 0 \end{align} This is the smallest positive integer for which this occurs, by the definition of the characteristic of a field. So $x$ has order $p$. The part that we need of the second claim is a well-known corollary of Cauchy's theorem (the reverse direction is just an application of Lagrange's theorem).
Placing 5 people on chairs so that 2 people never sit together We have a family composed of 2 parents and 3 children that have to sit on 5 chairs (in a line); two of the children don't want to sit together. In how many ways can they sit? I was thinking about letting one of the 2 children sit first; now the second one can sit only on 3 chairs, and the remaining parents and child can then sit on the other 3 chairs. So I came to the expression 3*3! = 18, but I saw that this wasn't the correct answer...
If the chairs are in a line, then the calculation must take into account where the first child sits. There are two ways for him to take a seat on the end, then three ways for the other child to take a seat. There are three ways for the first child to take a seat in the middle of the row, and then there are only two ways for the second child to take a seat. So there are in all $$2\cdot3+3\cdot2=12$$ ways for the two of them to be seated. There are then $3!$ ways to seat the rest of the family, so $72$ ways in all.
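A brute-force enumeration (my own sketch, with made-up labels for the family members) confirms the count of $72$:

    from itertools import permutations

    people = ['parentA', 'parentB', 'child1', 'child2', 'child3']
    # child1 and child2 refuse to sit next to each other in the row
    count = sum(1 for seating in permutations(people)
                if abs(seating.index('child1') - seating.index('child2')) != 1)
    print(count)   # 72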
I think your idea is correct, you just missed some of the calculation. I will assume here that the 5 chairs are put in a circle. So yes, first choose the chair of one of the difficult children (5 choices, 4 chairs left). Then the second difficult child chooses from the 2 chairs which are not adjacent to the first (that chair has 2 neighbours, and one chair is already taken), so 2 choices, and 3 chairs left. At last let the 3 others sit (3! choices). So to sum up, we have $5\times2\times 3! =60$ possibilities. Is this your expected answer, or did I miss something in the initial statement?
Given a list of points on a rectilinear path, identify the corners You are provided a list of co-ordinates, in the form of pairs, as shown below: $$li=[(576, 64), (576, 192), (448, 192), (320, 192), (320, 320), (192, 320), (192, 448), (192, 576), (320, 576), (448, 576), (576, 576), (704, 576), (832, 576), (960, 576)]$$ The goal is to plot all the points in the co-ordinate plane, join them, and then carefully find all the corner points; after doing so the list should reduce to: $$li=[(576, 64), (576, 192),(320, 192), (320, 320), (192, 320), (192, 576), (960, 576)]$$ Please refer to the image: I need the points which are encircled by a blue rectangle, whereas the red-circled points are the points that are provided in the problem. I tried a brute-force approach and it worked, but I need something better than that.
Converting a comment (with some adjustments) to an answer, as OP is satisfied with its validity ... For each index $i$ (starting with the second, and ending with the next-to-last), compare $p_{i-1}$ to $p_{i+1}$: if (and only if) their $x$-coordinates differ and their $y$-coordinates differ, then (and only then) $p_i$ is a corner. This algorithm ignores the first and last points, but those are already "corners" by declaration. As OP has noted, the algorithm has complexity $O(n)$, as it simply steps through the list once.
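Here is one possible Python sketch of that algorithm (my own implementation, not the answerer's code); it keeps the two endpoints and every interior point whose neighbours differ in both coordinates, and it reproduces the expected output for the list in the question.

    def corners(points):
        """Return the endpoints plus every point where the rectilinear path turns."""
        result = [points[0]]
        for prev, cur, nxt in zip(points, points[1:], points[2:]):
            if prev[0] != nxt[0] and prev[1] != nxt[1]:   # x and y both change across cur
                result.append(cur)
        result.append(points[-1])
        return result

    li = [(576, 64), (576, 192), (448, 192), (320, 192), (320, 320), (192, 320),
          (192, 448), (192, 576), (320, 576), (448, 576), (576, 576), (704, 576),
          (832, 576), (960, 576)]
    print(corners(li))
    # [(576, 64), (576, 192), (320, 192), (320, 320), (192, 320), (192, 576), (960, 576)]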
Something like

    while i < N
        { Pi is a corner }
        while i < N and X[i] == X[i+1]: i += 1
        { Pi is a corner }
        while i < N and Y[i] == Y[i+1]: i += 1

Details need to be worked out.
What are natural boundary conditions in the calculus of variations? What do people mean, when they speak of natural boundary conditions in the calculus of variations? How do natural boundary conditions relate to the Euler-Lagrange equations? An example would be fantastic!
Basically two types of boundary conditions are used: essential or geometric boundary conditions, which are imposed on the primary variable (like displacements), and natural or force boundary conditions, which are imposed on the secondary variable (like forces and tractions). Essential boundary conditions are imposed explicitly on the solution, but natural boundary conditions are automatically satisfied after solution of the problem. Natural boundary conditions of the simplest kind: Let $J : C^2[x_0, x_1] \to \mathbb R$ be a functional of the form $$J(y) = \int^{x_1}_{x_0}f(x, y, y')\, dx$$ and assume that no boundary conditions have been imposed on $y$. If $J$ has an extremum at $y$, then the following necessary conditions are satisfied: $(i)~~$ the ordinate of the extremal satisfies the Euler-Lagrange equation $$f_y - \frac{d}{dx} f_{y'} = 0;$$ $(ii)~~$ at $x = x_0$, $$\left.\frac{\partial f}{\partial y'}\right|_{x_0}=0;$$ $(iii)~~$ at $x = x_1$, $$\left.\frac{\partial f}{\partial y'}\right|_{x_1}=0.$$
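As a concrete illustration (a standard textbook-style example, not taken from the original answer): take $J(y)=\int_0^1 \tfrac12\bigl(y'\bigr)^2\,dx$ with no conditions imposed on $y(0)$ or $y(1)$. Here $f=\tfrac12(y')^2$, so $\dfrac{\partial f}{\partial y'}=y'$; the Euler-Lagrange equation gives $y''=0$, and conditions $(ii)$ and $(iii)$ become the natural boundary conditions $y'(0)=0$ and $y'(1)=0$. They are not imposed in advance; they come out of requiring the first variation to vanish when the endpoint values are left free.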
From the perspective of computational math: solving a differential equation using e.g. finite elements, you usually build a linear system $Ax = b$ where $x$ is a discrete approximation of the solution. You set up $A$ and $b$ depending on different aspects of your problem, such as what your domain looks like and also what kind of boundary conditions there are. We speak of natural boundary conditions if, during assembly of the problem, this step of adding the boundary-condition information is effectively skipped. They are the boundary conditions that "naturally" arise when we don't pay attention to them. See also here: a Neumann $=0$ boundary condition drops the boundary term completely, so we don't have to worry about it.
Find the value of $R$ such that $\sum_{n=0}^{\infty} \frac{x^n}{n! \cdot (\ln 3)^n} = 3^x, \forall x \in (-R,R)$ I am not quite sure how to finish this exercise: Find the value of $R$ such that $$\sum_{n=0}^{\infty} \frac{x^n}{n! \cdot (\ln 3)^n} = 3^x, \forall x \in (-R,R)$$ I am honestly lost after I verify that the power series converges: $$\begin{align*} a_n &= \frac{x^n}{n! \cdot (\ln 3)^n}\\ a_{n+1} &= \frac{x^{n+1}}{(n+1)! \cdot (\ln 3)^{n+1}} \end{align*} $$ The ratio between the two and the limit are shown below: $$\begin{align*} \frac{a_{n+1}}{a_n} &= \frac{x}{(n+1) \cdot \ln 3}\\ \\ \lim_{n \to \infty} \left\vert \frac{a_{n+1}}{a_n} \right\vert &= \frac{\vert x \vert}{\ln 3} \cdot \lim_{n \to \infty} \frac{1}{n+1} = 0 \end{align*} $$ So as I mentioned, I am not sure what to do next. Any guidance is highly appreciated. Thank you.
Hint: It's possible the problem is misstated as @MathLover suggested in a comment. If it's not misstated, recall that $\sum_{n=0}^{\infty}\frac{u^n}{n!}= e^u$ and let $u=x/\ln 3.$
The $\ln 3$ does not matter. The series converges for all $x$. As you have shown, $|a_{n+1}/a_n| \to 0$. The $\ln 3$ does not affect this, so it still holds with any constant. This means that the series converges.
Is a strictly decreasing, invertible function necessarily continuous? We know that the inverse exists (therefore injective and surjective), and it is strictly decreasing. I'm not sure if continuity follows.
I don't think it necessarily follows. For example, take the split function $f: [-3;3] \rightarrow [-3;-1]\cup(0;4]$, $$f(x)= \begin{cases} -x+1, & x < 1,\\ -x, & x \ge 1.\end{cases}$$ It is discontinuous at $x=1$, but it is still invertible and strictly decreasing.
$\infty\pm\infty$ on a Riemann sphere Here I use "$\infty$" to represent the infinite point on a Riemann sphere. I read that (1) $z\cdot\infty = \infty$ for any complex non-zero $z$. (2) $\infty+\infty=\infty$. (3) $\infty-\infty$ is indeterminate. (1) makes sense to me. I would expect that (2) and (3) are indeterminate. Although if (2) is Inf, I expect that would mean that (3) is also Inf due to (1) and (2). If (1), (2), and (3) are correct as written above, why does it work that way? ----EDIT---- I take back (3) above, since I can't find a source that says that. However, (1) and (2) can be found in Mathematical Analysis, 2nd edition, by Tom M. Apostol. He includes them in his section on the extended complex plane. One response to my initial question and Wikipedia said $\infty+\infty$ is undefined. Does that mean not all mathematicians agree on this?
To say that $\infty-\infty$ is "indeterminate" means that for every point $c$ in the Riemann sphere, there exist functions $f$ and $g$ and a point $a$ such that \begin{align} \lim_{z\to a} f(z) & =\infty, \\[6pt] \lim_{z\to a} g(z) & = \infty, \\[6pt] \lim_{z\to a} f(z)-g(z) & = c. \end{align} The point $a$ can be chosen to be anything you like, making trivial modifications in $f$ and $g$ so that this all works for that value of $a$. When working with real numbers rather than on the Riemann sphere, this does not work for $\infty+\infty$ nor for $c\cdot\infty$: if $f(z)\to\infty$ and $g(z)\to\infty$ then $f(z)+g(z)\to\infty$. You cannot choose $f$ and $g$ both approaching $\infty$ in such a way that $f(z)+g(z)$ approaches $19$, the way you can with $f(z)-g(z)$. However, with the Riemann sphere, as some have noted in comments below, similar things happen.
This is not a mathematical issue, it's a memory trick or something when you want to find limits for instance.
what is the value of $\int \sin(x)\cos(x)dx$? $\frac{\sin^2(x)}{2}$ or $\frac{-\cos^2(x)}{2}$ or $\frac{-\cos(2x)}{4}$? $\int \sin(x)\cos(x)dx = \frac{\sin^2(x)}{2}$ because $$\frac{d}{dx}\frac{\sin^2(x)}{2}=\sin(x)\frac{d\sin(x)}{dx}=\sin(x)\cos(x)$$ but also $$\frac{d}{dx}\frac{-\cos^2(x)}{2}=-\cos(x)\frac{d\cos(x)}{dx}=\sin(x)\cos(x)$$ Moreover $$\int \sin(x)\cos(x)dx = \int \frac{\sin(2x)}{2}dx = \frac{-\cos(2x)}{4} = \frac14 - \frac{\cos^2(x)}{2} = \frac{\sin^2(x)}{2} - \frac14 $$ It's true that $\frac{\sin^2(x)}{2}$ and $\frac{-\cos^2(x)}{2}$ are not equal, so where is the mistake I made here?
You are forgetting the constant of integration. The constant of integration will change depending on whether you chose to integrate the sine or the cosine. The bottom line is that the integrals are the same.
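A small SymPy sketch (my own addition) makes this explicit: all three candidates differentiate to $\sin x\cos x$, and they differ only by constants.

    import sympy as sp

    x = sp.symbols('x')
    candidates = [sp.sin(x)**2 / 2, -sp.cos(x)**2 / 2, -sp.cos(2*x) / 4]
    for F in candidates:
        print(sp.simplify(sp.diff(F, x) - sp.sin(x)*sp.cos(x)))   # 0 for each candidate
    print(sp.simplify(candidates[0] - candidates[1]))             # 1/2, a constant
    print(sp.simplify(candidates[0] - candidates[2]))             # 1/4, a constant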
$\displaystyle \int \sin x\cos xdx = \displaystyle \int \dfrac{\sin 2x}{2}dx = -\dfrac{\cos 2x}{4}+C$
Show that a polynomial p can't have both roots 1 and i I have to show that a polynomial $p\in P_3(\mathbb{R})$ where $p\neq0$, can't have both roots, $1$ and $i$, where $i^2=-1$. The polynomial has degree 2 or less. $p(\alpha)=c_0+c_1\alpha+c_2\alpha^2$ I know that, when we want to find the roots of a function, we have to solve the function for $x$ so that $f(x)=0$ I don't know how to solve this kind of question, when $p$ cant be equal to $0$ and I have to show that the polynomial $p\in P_3(\mathbb{R})$ can't have both roots 1 and $i$. Thanks in advance
If we have a polynomial with real coefficients, with $z$ as a root, one can check that $\overline{z}$ must also be a root of the polynomial. Thus, if $i$ is a root of a polynomial $p$, then so is $-i$. If we also require that $1$ be a root of $p$, then $p$ has at least $3$ roots. But how many roots can $p$ have, given that it's of degree at most $2$?
I don't understand the fuss... Just write $p(x)=(x-1)(x-i)$ and expand it to get $p(x)=x^2-(1+i)x+i$, which clearly does not belong to $P_{3}(\mathbb{R})$. Or is it?
How to prove that geometric distributions converge to an exponential distribution? How to prove that geometric distributions converge to an exponential distribution? To solve this, I am trying to define an indexing $n$/$m$ and to send $m$ to infinity, but I get zero, not some relevant distribution. What is the technique or approach one must use here?
Recall pmf of the geometric distribution: $\mathbb{P}(X = k) = p (1-p)^k$ for $k \geq 0$. The geometric distribution has the interpretation of the number of failures in a sequence of Bernoulli trials until the first success. Consider a regime when the probability of success is very small, such that $n p = \lambda$, and consider $x = \frac{k}{n}$. Then, in the large $n$ limit: $$ 1 = \sum_{k=0}^\infty \mathbb{P}(X = k) = \sum_{k=0}^\infty \lambda \left(\left(1 - \frac{\lambda}{n} \right)^{n \cdot k/n} \frac{1}{n} \right) \stackrel{n \to \infty}\rightarrow \int_0^\infty \lambda \mathrm{e}^{-\lambda x} \mathrm{d} x $$ Alternatively, you could look at the moment generating function for the geometric distribution: $$ \mathcal{M}(p, t) = \frac{p}{1-\mathrm{e}^t (1-p)} $$ To recover the mgf of the exponential distribution consider the limit: $$ \lim_{n \to \infty} \mathcal{M}\left( \frac{\lambda}{n}, \frac{t}{n} \right) = \lim_{n \to \infty} \frac{\lambda}{n - \mathrm{e}^{t/n}\left(n - \lambda\right)} = \lim_{n \to \infty} \frac{\lambda}{n \left(1 - \mathrm{e}^{t/n}\right) + \lambda \mathrm{e}^{t/n} } = \frac{\lambda}{\lambda-t} $$ which is the mgf of the exponential distribution with rate $\lambda$.
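A quick numerical illustration of the first argument (my own sketch; $\lambda$, $n$ and the sample size are arbitrary choices): rescaled geometric samples with $p=\lambda/n$ behave like $\operatorname{Exp}(\lambda)$ samples. Note that NumPy's geometric counts trials rather than failures, which only shifts the result by $1/n$.

    import numpy as np

    rng = np.random.default_rng(0)
    lam, n, size = 2.0, 10_000, 200_000
    geom_scaled = rng.geometric(p=lam / n, size=size) / n          # number of trials, rescaled by 1/n
    expo = rng.exponential(scale=1 / lam, size=size)
    print(geom_scaled.mean(), expo.mean())                         # both close to 1/lam = 0.5
    print(np.quantile(geom_scaled, 0.9), np.quantile(expo, 0.9))   # both close to ln(10)/2 = 1.15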
I think we can just expand $1-p$ in a Taylor series: for small $p$ we get the exponential function. :) $1-p \approx \exp(-p)$ for very small $p$ (around $p=0$), with $np$ held constant. So we find that $p(x)=(1-p)^x\cdot p$ becomes $p\exp(-px)$ for a continuous random variable $x$, which in turn is the continuous exponential distribution function. :)
Does $\lim \frac{xy}{x+y}$ exist at $(0,0)$? Given the function $f(x,y) = \frac{xy}{x+y}$, after my analysis I concluded that the limit at $(0,0)$ does not exist. In short, if we approach $(0,0)$ through the parabolas $y = -x^2 -x$ and $y = x^2 - x$ we find that $f(x,y)$ approaches $1$ and $-1$ respectively. Therefore the limit does not exist. I think my rationale is right. What do you think? Alternatively, is there another approach to this problem?
I'll explain here how to approach limits of functions in two variables, with the example the OP proposed in mind. If the limit $$\lim_{(x,y)\to (0,0)} \frac{xy}{x+y}$$ exists and equals $L$, then it also follows that if $\{(x_n,y_n)\}$ is a sequence of points with limit $(0,0)$, then $$\lim_{n\to\infty} \frac{x_ny_n}{x_n+y_n}=L.$$ Now we can choose a number of easy sequences $\{(x_n,y_n)\}$ with limit $(0,0)$, and calculate the limit. For instance, we can pick points in a line $y=\lambda x$, with slope $\lambda$, i.e., $(x_n,y_n) = (\frac{1}{n}, \frac{\lambda}{n})$. In this case: $$\lim_{n\to\infty} \frac{x_ny_n}{x_n+y_n}=\lim_{n\to\infty} \frac{\frac{\lambda}{n^2}}{\frac{1}{n}+\frac{\lambda}{n}}=\lim_{n\to\infty} \frac{\lambda}{(1+\lambda)n}$$ and the limit is $0$ as long as $\lambda\neq -1$. Hence, if the limit exists, it must be $0$. But the problem with $\lambda=-1$ tells us that there may be a problem if we approach $(0,0)$ with a path that ends tangent to $y=-x$ (notice that the function is not defined at points with $y=-x$). Thus, next we look at a sequence following a path on a curve with tangent line $y=-x$ at $(0,0)$. Examples of such curves include $y=x^2-x$, $y=-x^2-x$ or $y=e^{-x}-1$. Thus, we may consider sequences $(x_n,y_n)$ given by: $$\left(\frac{1}{n},\frac{1}{n^2}-\frac{1}{n}\right),\quad \text{or} \quad \left(\frac{1}{n},-\frac{1}{n^2}-\frac{1}{n}\right), \quad \text{or} \quad \left(\frac{1}{n},e^{-1/n}-1\right).$$ For the first sequence we obtain: $$\lim_{n\to\infty} \frac{x_ny_n}{x_n+y_n}=\lim_{n\to\infty} \frac{\frac{1}{n^3}-\frac{1}{n^2}}{\frac{1}{n}+\frac{1}{n^2}-\frac{1}{n}}=\lim_{n\to\infty} \frac{\frac{1}{n^3}-\frac{1}{n^2}}{\frac{1}{n^2}}=\lim_{n\to\infty} \frac{1-n}{n}= -1.$$ But the limit was supposed to be $L=0$. Hence the limit cannot exist. Similarly, if we try the other two sequences listed above: $$\lim_{n\to\infty} \frac{x_ny_n}{x_n+y_n}=\lim_{n\to\infty} \frac{-\frac{1}{n^3}-\frac{1}{n^2}}{\frac{1}{n}-\frac{1}{n^2}-\frac{1}{n}}=\lim_{n\to\infty} \frac{-\frac{1}{n^3}-\frac{1}{n^2}}{-\frac{1}{n^2}}=\lim_{n\to\infty} \frac{1+n}{n}= 1,$$ and $$\lim_{n\to\infty} \frac{x_ny_n}{x_n+y_n}=\lim_{n\to\infty} \frac{\frac{1}{n}(e^{-1/n}-1)}{\frac{1}{n}+e^{-1/n}-1}=\lim_{n\to\infty} \frac{e^{-1/n}-1}{1+ne^{-1/n}-n}=0.$$ These results are inconsistent, and therefore the limit cannot exist. Even more dramatic: let $\{x_n,y_n\}$ be a sequence following the curve $y=x^3-x$ towards the origin, for instance put $(x_n,y_n)=(\frac{1}{n},\frac{1}{n^3}-\frac{1}{n})$. Then: $$\lim_{n\to\infty} \frac{x_ny_n}{x_n+y_n}=\lim_{n\to\infty} \frac{\frac{1}{n^4}-\frac{1}{n^2}}{\frac{1}{n}+\frac{1}{n^3}-\frac{1}{n}}=\lim_{n\to\infty} \frac{\frac{1}{n^4}-\frac{1}{n^2}}{\frac{1}{n^3}}=\lim_{n\to\infty} \frac{1-n^2}{n}= -\infty.$$
Maybe I am missing something, but isn't it simply this: for $x=x,\ y=\dfrac{-x}{x+1}$ we get $\dfrac{xy}{x+y}=\dfrac{\dfrac{-x^2}{1+x}}{x-\dfrac{x}{1+x}}=\dfrac{\dfrac{-x^2}{x+1}}{\dfrac{x^2}{x+1}}=-1$ as $(x,y)\to(0,0)$, while for $x\not=0,\ y=0$ we get $\dfrac{xy}{x+y}=0\to0$, so the two paths give different values.
Show that matrix $A$ is NOT diagonalizable. Let $A$ be a square matrix $A^2=0$ and $A\neq0$ and show that it is not diagonalizable. I decided to use the sample matrix of $$A = \begin{bmatrix}0 & 1\\0 & 0 \end{bmatrix}$$ which satisfies the conditions above. So my question is: how would I prove this is not diagonalizable. The matrix leads to a eigenvalue of $\lambda=0$ with an algebraic multiplicity of $2$. I know that if the algebraic multiplicity and geometric multiplicity are equal, then it is diagonalizable. But I am kind of stuck from here since when I use $\det(A-0I)=0$, it just leads to get $x_2=0$ but then also that $x_2=t$ so I don't really know what to do. Any help would be appreciated.
It isn’t enough to prove that your particular sample matrix isn’t diagonalizable: you must show that every non-zero square matrix $A$ such that $A^2=0$ is non-diagonalizable. HINT: Suppose that $A^2=0$ and $A$ is diagonalizable. Then there are an invertible matrix $P$ and a diagonal matrix $D$ such that $D=P^{-1}AP$. What is $D^2$? What does this tell you about $A$? How does this prove the desired result?
You have $\det(A^2)=0$, hence $\det(A)=0$. This implies $A$ has a zero eigenvalue, which gives you that $A$ is not diagonalizable.
Let $p_1 , p_2$ be prime. Then prove that the only divisors of $p_1 p_2$ are $1 , p_1 , p_2 , p_1 p_2 $. Let $p_1 , p_2$ be prime. Then prove that the only divisors of $p_1 p_2$ are $1 , p_1 , p_2 , p_1 p_2 $. How do I prove it? I don't even intuitively get this... after so long time of trying to prove it.
For the intuition: take the primes $p_1=5$ and $p_2=11$. Then $p_1\cdot p_2=55$ and indeed the only divisors (among the natural numbers) of the product are $1,5,11$, and $55$. Now take not both numbers prime, say $p_1=24$ and $p_2=11$. Then $p_1\cdot p_2=264$ and the divisors of the product this time certainly include $1,24,11$, and $264$, but also $2,3,6,12$. For the proof: suppose $p_1,p_2$ are prime and that $n\mid p_1p_2$, where $n\in \mathbb N$. If $n\ne 1$ then it has a prime divisor. Let $q$ be an arbitrary prime divisor of $n$. Then in particular, $q$ divides $p_1p_2$. Now, according to Euclid's Lemma, if a prime divides a product, then it divides one of the factors. Thus, either $q\mid p_1$ or $q\mid p_2$. Assume, without loss of generality, that $q\mid p_1$. But since $p_1$ is prime, it follows that $q$ is either $1$ or $q=p_1$. Since $q$ is prime, the possibility of $q=1$ is (explicitly in the definition of prime number) ruled out. We conclude that $q=p_1$. Now consider $m=n/q=n/p_1$. It follows that $m\mid p_2$. Thus, either $m=1$ or $m=p_2$. In the case $m=1$, it follows that $n=p_1$. If $m=p_2$ it follows that $n=p_1p_2$. To conclude, we just showed that the only possible divisors of the product $p_1p_2$ are $1,p_1,p_2$, and $p_1p_2$.
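To make the intuition concrete, a tiny Python check (standard library only) listing the divisors of $55=5\cdot11$ and of $264=24\cdot11$:

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

print(divisors(55))   # [1, 5, 11, 55] -- both factors prime
print(divisors(264))  # [1, 2, 3, 4, 6, 8, 11, 12, ...] -- 24 is not prime, so extra divisors appear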
Let there be a divisor $q$ of $p_1p_2$ which is not in the list. This means that $q$ should be a divisor of either $p_1$ or $p_2$, which means that $p_1$ or $p_2$ is not a prime.
Find by integrating the area of the triangle with vertices $(5,1), (1,3)\;\text{and}\;(-1,-2)$ Find by integrating the area of the triangle with vertices $$(5,1), (1,3)\;\text{and}\;(-1,-2)$$ I tried to write the equations of the lines and integrate, but it is very complicated; is there some better way?
It is just tedious. Let $f_1(x) = \frac{5x+1}{2}$, $f_2(x) = \frac{7-x}{2}$, $f_3(x)= \frac{x-3}{2}$. $(-1,f_1(-1)) = (-1,f_3(-1)) = (-1,-2)$, $(1,f_1(1)) = (1,f_2(1)) = (1,3)$, $(5,f_2(5)) = (5, f_3(5)) = (5, 1)$. $A = \int_{-1}^1 (f_1(x)-f_3(x) ) dx + \int_{1}^5 (f_2(x)-f_3(x) ) dx = 4 +8 = 12$.
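If you want to check the integrals without doing them by hand, a short SymPy sketch (assuming SymPy is available) reproduces the value $12$:

from sympy import symbols, integrate

x = symbols('x')
f1 = (5*x + 1) / 2   # line through (-1,-2) and (1,3)
f2 = (7 - x) / 2     # line through (1,3) and (5,1)
f3 = (x - 3) / 2     # line through (-1,-2) and (5,1)

area = integrate(f1 - f3, (x, -1, 1)) + integrate(f2 - f3, (x, 1, 5))
print(area)  # 12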
You can try to simplify the calculations by using 1. Shift of origin ( say to (-1,-2) ), and 2. Rotation of axis. Then do the integrations.
Is there a general expression for the adjoint representation of $U(N)$ or $u(N)$? At least for low values of $N$ like $2$ or $3$ and such I would like to know if there are explicit matrices known giving the representation of $u(N)$ or $U(N)$ in the adjoint? (..a related query: Is it for the Lie group or the Lie algebra of U(N) that it is true that the weight vectors in the fundamental/vector representation can be taken to be N N-vectors such that all have weight/eigenvalue 1 under its Cartan and the ith of them has 1 in the ith place and 0 elsewhere and for the conjugate of the above representation its the same but now with (-1)?..I guess its for the u(N) since they are skew-Hermitian but would still like to know of a precise answer/proof..)
I've only recently started learning about group theory so it would be worth double-checking the following. As I understand it, when we say something (for example a field, $X$) transforms under the adjoint rep of a group (say $U(N)$), the transformation can involve any matrix, $g$, belonging to the group $U(N)$. The adjoint rep doesn't put any constraints on the matrices we choose from $U(N)$, rather it tells us how to use those matrices in a transformation. When we say $X$ transforms in the adjoint rep of $U(N)$ it simply means it transforms as $X \rightarrow X' = gXg^{-1}$. The matrix $g$ can be any matrix belonging to $U(N)$. Note: $g^{-1}$ is the inverse of the matrix $g$. So when we say 'an object is in the adjoint representation of $U(N)$' this is a statement about how to act the matrices of $U(N)$ on that object. It doesn't say anything about which matrices we choose from $U(N)$.
I don't know if it helps, but I think you can write $$U(N)=SU(N)\times U(1)$$ usually (in physics) people consider semisimple groups, i.e. groups without U(1) factors (=tori). To give an explicit example, consider $U(2)=SU(2)\times U(1)$. The adjoint representation of the algebra can be defined by $$(T_a)_{bc}=-if_{abc}$$ where $f_{abc}$ are the structure constants of the algebra. Perfoming an explicit calculation for $SU(2)\times U(1)$ and keeping in mind that the generator of $U(1)$ commutes with all other elements of the algebra we get the following generators for the adjoint representation: $$ T_1=\begin{pmatrix}0&0&0&0\\0&0&-i&0\\0&i&0&0\\0&0&0&0\end{pmatrix} T_2=\begin{pmatrix}0&0&i&0\\0&0&0&0\\-i&0&0&0\\0&0&0&0\end{pmatrix} T_3=\begin{pmatrix}0&-i&0&0\\i&0&0&0\\0&0&0&0\\0&0&0&0\end{pmatrix} T_4=0 $$ So you see, that with $T_1, T_2, T_3$ you can span a 3 dimensional real vector space, but with all four you can't since $T_4$ vanishes. This means, as I think, that while you can define an adjoint representation for $SU(2)$, you can't do so for $U(2)$ or $U(1)$.
Finding sum of factors of a number using prime factorization Given a number, there is an algorithm described here to find it's sum and number of factors. For example, let us take the number $1225$ : It's factors are $1, 5, 7, 25, 35, 49, 175, 245, 1225 $ and the sum of factors are $1767$. A simple algorithm that is described to find the sum of the factors is using prime factorization. $1225 = 5^2 \cdot 7^2$, therefore the sum of factors is $ (1+5+25)(1+7+49) = 1767$ But this logic does not work for the number $2450$. Please check if it's working for $2450$ Edit : Sorry it works for $2450$. I made some mistake in calculation.
Your approach works fine: $2450=2\cdot 5^2\cdot 7^2$, therefore the sum of divisors is $$(1+2)(1+5+25)(1+7+49)=5301=3\cdot 1767.$$ You are looking for the Formula For Sum Of Divisors; from there: Each of these sums is a geometric series; hence we may use the formula for the sum of a geometric series to conclude $$ \sum_{d|n}d = \prod_{i=1}^k \frac{p_i^{m_i+1}-1}{p_i-1} $$
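Here is a small Python sketch of the formula (it uses SymPy's factorint for the prime factorization; the last line is a brute-force cross-check):

from sympy import factorint

def sigma(n):
    total = 1
    for p, m in factorint(n).items():
        total *= (p**(m + 1) - 1) // (p - 1)   # geometric series 1 + p + ... + p^m
    return total

print(sigma(1225), sigma(2450))                          # 1767 5301
print(sum(d for d in range(1, 2451) if 2450 % d == 0))   # 5301, brute force agrees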
Sum of factors of $2450$ The Factors are $2, 5^2, 7^2$. Sum of the factors $= ( 2^0 + 2^1 ) × ( 5^0 + 5^1+ 5^2)×( 7^0 + 7^1+ 7^2) = 5301$
Proof: $2\sqrt{m}-2 < \sum\limits_{n=1}^m\frac{1}{\sqrt{n}}< 2\sqrt{m}-1$ I know that problems similar to this one, involving either one of the two bounds, have been posted before, but I would like just a hint in the last part of the proof involving the upper bound, with which I have some trouble being ''convincing'' enough. Prove that $2(\sqrt{n+1} - \sqrt{n}) < \frac{1}{\sqrt{n}} < 2(\sqrt{n}-\sqrt{n-1})$ if $n\geq 1$. Then use this to prove that $$2\sqrt{m}-2 < \sum_{n=1}^m\frac{1}{\sqrt{n}}< 2\sqrt{m}-1 \qquad \text{if }\ m \geq 2$$ With some help I received here I got the following proof for the first set of inequalities: Proof (direct): Since $$\sqrt{n+1} - \sqrt{n} = \frac{1}{\sqrt{n+1} + \sqrt{n}} < \frac{1}{\sqrt{n} + \sqrt{n}} = \frac{1}{2\sqrt{n}}$$ then we have $$2(\sqrt{n+1} - \sqrt{n}) < \frac{1}{\sqrt{n}}\qquad (1)$$ Likewise, since $$\frac{1}{2\sqrt{n}} = \frac{1}{\sqrt{n} + \sqrt{n}} < \frac{1}{\sqrt{n} + \sqrt{n-1}} = \sqrt{n} - \sqrt{n-1}$$ then $$\frac{1}{\sqrt{n}} < 2(\sqrt{n} - \sqrt{n-1})\qquad (2)$$ as asserted. For the second part I tried to prove the lower bound first: Proof (direct): From $(1)$ we have $$\sqrt{n+1} - \sqrt{n} < \frac{1}{2\sqrt{n}}$$ By summing both sides we get $$\sum_{n=1}^{m}(\sqrt{n+1} - \sqrt{n}) < \sum_{n=1}^{m}\frac{1}{2\sqrt{n}}$$ And after applying the telescoping property on the LHS, this becomes $$\sqrt{m+1} - 1 < \sum_{n=1}^{m}\frac{1}{2\sqrt{n}}$$ But since $\sqrt{m} - 1 < \sqrt{m+1} - 1$, by transitivity we have $$\begin{align*}\sqrt{m} - 1 &< \sum_{n=1}^{m}\frac{1}{2\sqrt{n}}\\ 2\sqrt{m} - 2 &< \sum_{n=1}^{m}\frac{1}{\sqrt{n}}\end{align*}$$ Then the upper bound: Proof (direct): From $(2)$ we have $$\frac{1}{2\sqrt{n}}< \sqrt{n} - \sqrt{n-1}$$ By taking the sum of both sides we get $$\sum_{n=1}^{m}\frac{1}{2\sqrt{n}}< \sum_{n=1}^{m}(\sqrt{n} - \sqrt{n-1})$$ If we apply the telescoping property on the RHS, the inequality becomes $$\begin{align*}\sum_{n=1}^{m}\frac{1}{2\sqrt{n}} &< \sqrt{m}\\ \sum_{n=1}^{m}\frac{1}{\sqrt{n}} &< 2\sqrt{m}\end{align*}$$ Here I'm not sure how to get the $-1$ on the RHS that I'm lacking. I could have started the summation at $n=2$ I guess, but don't think it's correct and it doesn't seem to be a convincing way to complete the proof.
Well you tagged induction so why not use it: $$ \sum_{n=1}^{m+1} \frac{1}{\sqrt{n}} \lt 2\sqrt{m}-1 + \frac{1}{\sqrt{m+1}} \lt 2\sqrt{m}-1 + 2(\sqrt{m+1}-\sqrt{m}) = 2\sqrt{m+1}-1 $$
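A quick numerical check of both bounds for a few values of $m$ (plain Python, standard library only):

from math import sqrt

for m in [2, 10, 100, 10000]:
    s = sum(1 / sqrt(n) for n in range(1, m + 1))
    print(m, 2 * sqrt(m) - 2 < s < 2 * sqrt(m) - 1)   # True for every m >= 2 tested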
For the lower bound, proof (direct): from $(1)$: $$2(\sqrt{n+1} - \sqrt{n}) < \frac{1}{\sqrt{n}}$$ By summing both sides: $$2\sum_{n=1}^{m}(\sqrt{n+1} - \sqrt{n}) < \sum_{n=1}^{m}\frac{1}{\sqrt{n}}$$ After applying the telescoping property on the LHS, the inequality becomes $$2\sqrt{m+1} - 2 < \sum_{n=1}^{m}\frac{1}{\sqrt{n}}$$ For the upper bound, proof (direct): from $(2)$: $$\frac{1}{\sqrt{n}}< 2(\sqrt{n} - \sqrt{n-1})$$ By taking the sum of both sides from $n=2$: $$\sum_{n=2}^{m}\frac{1}{\sqrt{n}}< \sum_{n=2}^{m}2(\sqrt{n} - \sqrt{n-1})$$ Add 1 on both sides: $$1+\sum_{n=2}^{m}\frac{1}{\sqrt{n}}< 1+2(\sum_{n=2}^{m}(\sqrt{n} - \sqrt{n-1}))$$ $$\sum_{n=1}^{m}\frac{1}{\sqrt{n}}< 1+2(\sum_{n=2}^{m}(\sqrt{n} - \sqrt{n-1}))$$ Apply the telescoping property on the RHS, and the inequality becomes $$\begin{align*}\sum_{n=1}^{m}\frac{1}{\sqrt{n}} &< 1+2(\sqrt{m}-1)\\ \sum_{n=1}^{m}\frac{1}{\sqrt{n}} &< 2\sqrt{m}-1\end{align*}$$ So $$2\sqrt{m+1} - 2 < \sum_{n=1}^{m}\frac{1}{\sqrt{n}} < 2\sqrt{m} - 1$$ This can be relaxed to: $$2\sqrt{m}-2<2\sqrt{m+1}-2<\sum_{n=1}^{m}\frac{1}{\sqrt{n}}<2\sqrt{m}-1< 2\sqrt{m}$$
Can one describe conic sections using synthetic geometry? I was curious to know whether synthetic geometry is more powerful than analytical geometry. Take as an example, the conic sections. Can one describe conic sections without the co-ordinate system, without the equations of conic sections? How difficult would it be?
I haven't read it but I think that this book might be what you are looking for: https://archive.org/stream/conicsectionstre00besarich#page/n12/mode/1up In the introduction we have The object of the following pages is to discuss the general forms and characteristics of these (conic sections) curves and to determine their most important properties by help of the methods and relations developed in the first six books, and in the eleventh book of Euclid's Elements and it will be found that for this purpose a knowledge of Euclid's Geometry is all that is necessary. Also, I am also looking for books of this type so if you have found some yourself please post them as an answer or as a comment.
Yes, synthetic geometry works well for plane sections of cones, in finding the focus points, the axes, and the positions of the directrix lines. Danut Dragoi, PhD
Probability of ball hitting the fan Case 1: A person throws a ball upwards to hit a fan which is not rotating. Let the probability of the event of ball hitting the fan be $p_1$. Case 2: Same person now does the same experiment, this time with a rotating fan. Let the probability of the event this time be $p_2$. Then what is the relationship between $p_1$ and $p_2$?
$$P(2)>P(1)$$ In order to exist, the ball must have some radius $r$. Now, as the ball passes through the plane on which the fan rotates, time passes (the amount depending on both the radius of the ball and its initial velocity). Given that any amount of time passes while the ball is passing through the plane, even a terribly small amount of time, the fan rotates at least a very small distance, meaning it covers greater area than if angular velocity $\omega=0$. Thus, the probability $P(2)$ must exceed that of a stationary fan $P(1)$ since having a nonzero radius is a necessary condition for the ball to exist.
$p_1=p_2$: the same percentage of the ceiling is always covered by the fan at any time.
Evaluate $\lim_{x\to0}\frac{e-(1+x)^\frac1x}{x}$ Somebody asked this and I think it's quite interesting as I couldn't figure out how to evaluate this but the Wolfram Alpha says its limit is $\frac e2$. $$\lim_{x\to0}\frac{e-(1+x)^\frac1x}{x}$$ Could someone help here?
This limit can be evaluated by applying l'Hospital's rule twice. For the first time we differentiate $$(1+x)^{\frac{1}{x}}=e^{\frac{\ln (1+x)}{x}}$$ to get $$\frac{\frac{x}{1+x}-\ln (1+x)}{x^2} e^{\frac{\ln (1+x)}{x}}.$$ Now $$e^{\frac{\ln (1+x)}{x}}\rightarrow e$$ so we need the limit of $$\frac{\frac{x}{1+x}-\ln (1+x)}{x^2}$$ another application of l'Hospital gives $$\frac{\frac{1}{(1+x)^2}-\frac{1}{1+x}}{2x}=-\frac{1}{2}\frac{1}{(1+x)^2}\rightarrow -\frac{1}{2}$$ So the derivative of $(1+x)^{\frac1x}$ tends to $-\frac{e}{2}$. Since the numerator of the original quotient is $e-(1+x)^{\frac1x}$, whose derivative is the negative of this, the limit is $\frac{e}{2}$.
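For a quick independent check, SymPy (if available) evaluates the limit directly and agrees with the value quoted in the question:

from sympy import symbols, limit, E

x = symbols('x')
print(limit((E - (1 + x)**(1/x)) / x, x, 0))   # E/2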
You can think of the limit $$\lim_{x\to0}\frac{e-(1+x)^\frac1x}{x}$$ as the derivative of the function $f(x)=(1+x)^\frac1x$ at the point $x=0$. Did that idea give you any help? Edit: You should define the value of $f(x)$ at $x=0$, namely $f(0)=e$.
Find minimum value of the function If $g(x) = \max|y^2 - xy|$ for $0\leq y\leq 1$, then what is the minimum value of $g(x)$? I am not able to proceed. I tried drawing the graph.
Computing the function $g(x) = \max{|y^2 - xy|}$ is equivalent to solving for the maximum value of the expression $|y^2 - xy| = f(y)$ with a fixed parameter $x$. Let's then imagine that $x$ is a constant. In general, the maximum value of a function (of one variable) can be found at one of the following places: where the derivative is zero; where the derivative is not defined; or at the boundaries of the domain. Let's go through these cases one by one. Firstly, we want to find $\frac{d f(y)}{dy}=0$. For $y^2 - xy < 0$ the result is $y = \frac{x}{2}$, and for $y^2 -xy > 0$ there is no solution. The derivative is not defined when $y^2 - xy = 0 \Rightarrow y= 0$ or $y = x$. The boundaries of the domain were defined to be $y = 0$ and $y = 1$. Inserting these results into $f(y)$ gives us $f(\frac{x}{2}) = \frac{x^2}{4}$; $f(0)=f(x)=0$; $f(1)=|1-x|$. We need the largest of these, which is $g(x) = \begin{cases} \left| 1 - x \right| , \text{if } -2(1+\sqrt{2}) \leq x \leq 2(\sqrt{2}-1)\\ x^2/4 , \text{otherwise} \end{cases}$ This was found out by solving the inequality $|1-x|>\frac{x^2}{4}$. The minimum of $g(x)$ can be found at $x= 2(\sqrt{2}-1)$, which is $g(2(\sqrt{2}-1)) = 1- 2(\sqrt{2}-1) = 3-2\sqrt{2}$.
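A brute-force numerical check (a NumPy sketch; the grid sizes are arbitrary) confirms the minimum $3-2\sqrt2\approx0.172$ attained near $x=2(\sqrt2-1)\approx0.828$:

import numpy as np

ys = np.linspace(0, 1, 2001)
xs = np.linspace(-1, 3, 4001)
g = np.array([np.abs(ys**2 - x * ys).max() for x in xs])
print(g.min(), 3 - 2 * np.sqrt(2))           # both ~0.1716
print(xs[g.argmin()], 2 * (np.sqrt(2) - 1))  # both ~0.828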
The maximum of the square of a non-negative function is the square of the maximum, so find the maximum ($x$ fixed) of $z= |y^2-xy|^2=( y^2-xy)^2 = y^2(y-x)^2$; this gets rid of the troublesome absolute value. Now study $z$ as a function of $y$ for fixed $x$. Determine where $dz/dy$ is positive, negative, or zero and sketch the graph.
Calculating $\int e^{-|x|} \, dx$. I have been trying to calculate $\int e^{|x|} \, dx$, but merely splitting up in two cases for $x<0$ and $x>0$ does not give the desired result, which I got by calculating it on a CAS. Suggestions would be very welcome. Edit. I made a mistake. It's $\int e^{-|x|} \, dx$.
Using $$\exp(|x|) =\begin{cases} \exp(x) & x \geqslant 0 \\ \exp(-x) & x < 0 \end{cases} $$ we can integrate branch-wise: $$ \int \exp(|x|) \mathrm{d}x = \begin{cases} \int \exp(x) \mathrm{d}x & x \geqslant 0 \\ \int \exp(-x) \mathrm{d}x & x < 0 \end{cases} = \begin{cases} \exp(x) + C_1 & x \geqslant 0 \\ -\exp(-x) + C_2 & x < 0 \end{cases} $$
we can also write $$\int e^{|x|}\, dx= \operatorname{sign}(x)\, e^{|x|}+C$$
How to solve this pair of differential equations? The equations are $$y' = y + 3z$$ $$z' = -3y + z$$ I vaguely remember I need to differentiate one to get the form of the other, but by differentiating the first and subbing in the second I get $$y'' = y' + 3z' \Rightarrow y''- y' + 9y = 3z$$ I don't know what to do from here. Edit: From here I got to $y''-2y'+10y = 0$, so from $m^2-2m+10=0$ we get $$y=e^x(A\cos(3x) + B\sin(3x))$$ Where would I go from here?
From the first equation, $z = \frac{1}{3}(y'-y)$ and $y'' = y' + 3z'$. From the second equation, $z' = -3y+z = -3y+\frac{1}{3}(y'-y)$. So, $$y'' = y'+ 3(-3y+\frac{1}{3}(y'-y)) \\ \Rightarrow y'' - 2y' + 10y = 0 \\ \Rightarrow y = (A\cos(3x)+B\sin(3x))\exp(x) $$ Recalling that $z = \frac{1}{3}(y'-y)$, we first calculate $$y' = y + 3(B\cos(3x)-A\sin(3x))\exp(x) $$ and so $$ z = \left(B\cos(3x)-A\sin(3x)\right)\exp(x) $$
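The solution can be cross-checked with SymPy's ODE solver (a sketch, assuming a reasonably recent SymPy); it should return the same combinations of $e^x\cos(3x)$ and $e^x\sin(3x)$ for both $y$ and $z$:

from sympy import symbols, Function, Eq, dsolve

x = symbols('x')
y, z = Function('y'), Function('z')
eqs = [Eq(y(x).diff(x), y(x) + 3*z(x)),
       Eq(z(x).diff(x), -3*y(x) + z(x))]
print(dsolve(eqs))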
$(y,z)'=A\cdot(y,z)$ where $A=\begin{pmatrix}1&3\\-3&1\end{pmatrix}$. Hence $$ (y,z)(t) = \exp(tA)\cdot(y,z)(0) $$
Cards in a 5x5 grid- Probability of a diagonal of all hearts All the aces, 2’s, 3’s, 4’s, 5’s and 6’s, as well as the jack of diamonds are taken from a regular deck of 52 playing cards, and then placed face up on a table in a 5 × 5 square grid configuration randomly. What is the probability that at least one of the diagonals in the array is all hearts? I know that the probability of having all 5 hearts in a single row is 4!20!/24! but I am having trouble with this specific problem. Any help would be appreciated.
Since both diagonals can't be all hearts at the same time, the desired probability is simply twice the probability that one diagonal is all hearts, and thus $$ 2\cdot\frac{\binom65}{\binom{25}5}=2\cdot\frac{6!\cdot20!}{25!}=\frac2{8855}\;. $$
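The value $2/8855\approx2.26\cdot10^{-4}$ is easy to confirm with a Monte Carlo simulation; this is a sketch using only the standard library, and the number of trials is an arbitrary choice:

import random

deck = [1] * 6 + [0] * 19      # 1 = heart (6 of them among the 25 cards), 0 = other
diag1 = [0, 6, 12, 18, 24]     # main diagonal of the 5x5 grid, row-major indexing
diag2 = [4, 8, 12, 16, 20]     # anti-diagonal
trials, hits = 2_000_000, 0
for _ in range(trials):
    random.shuffle(deck)
    if all(deck[i] for i in diag1) or all(deck[i] for i in diag2):
        hits += 1
print(hits / trials, 2 / 8855)  # both around 0.000226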
There are two branches that must be considered. First, let's start with the center. It has a 6/25 chance of being a heart. Next, we'll traverse the diagonals. The chance that the next spot we uncover is a heart is 5/24. If it is a heart then we keep uncovering spots on that diagonal. The chance that the next card is a heart is 4/23. If it is not a heart we still have a chance that the other diagonal is all hearts. So, there are two sub-branches here. If it is a heart then we have probabilities 3/22, and 2/21 until we uncover the entire diagonal. There is a 19/23 chance that the third card uncovered is not a heart. If that occurs we switch to the other diagonal and require that all turns reveal a heart. We have probabilities 4/22, 3/21, 2/20, and 1/19 for this sub-branch. After the center, there is a 19/24 chance that card we turned over is not a heart. If that is the case then we switch to the other diagonal. The probabilities there are 5/23, 4/22, 3/21, and 2/20 until the entire diagonal is exposed. So we have P(total) = 6/25*(5/24*(4/23*3/22*2/21 + 19/23*4/22*3/21*2/20*1/19) + 19/24*5/23*4/22*3/21*2/20) Recombining, we have P(total) = 6/25 * (5!*20!/24! + 5!*19!/24! + 19*5!*19!/24!) = 6!/25! * (20! + 19! + 19 * 19!) = 6!/25! * (20! + 20!/20 + (19 * 20!)/20) = 6!*20!/25! * (1 * 1/20 + 19/20) = (2 * 6! * 20!)/25! This is the same value as answer generated by my original solution. The original solution made some assumptions not made here.
Trouble understanding stated consequence of the division theorem Question: I'm having some trouble understanding the following Corollary (and construction of its proof) given in the book &quot;An Introduction to Mathematical Reasoning&quot;. The main reason being the condition $0 \le q \lt r$, which seems to me non-sense because it would imply that $0 \lt r$, so that $r$ couldn't be zero (contradicting the corollary itself!). Also, regarding the second sentence of the construction of the proof, wouldn't $gcd(b, r)$ and $gcd(b, 0)$ be two other possible solutions? Corollary: Let $a$ and $b$ be integers with $b \gt 0$ and suppose that $a = bq +r$ for integers $q$ and $r$ with $0 \le q \lt r$. Then $b$ divides $a$ if and only if $r = 0$. Construction of the proof: The point here is that if $b$ divides $a$ then, for some integer $q_1$, $a=bq_1=bq_1+r_1$ with $r_1 = 0$. So if $a=bq+r$ with $0 \lt r \lt q$ then $gcd(q, r)$ and $gcd(q_1, 0)$ would be two distinct solutions to the division problem, contradicting the uniqueness part of the division theorem. For completeness, I leave the statement of the division theorem as given in the book. Division theorem: Let $a$ and $b$ be integers with $b \gt 0$. Then there are unique integers $q$ and $r$ such that $$a = bq + r \quad and \quad 0 \le r \lt b$$ Thank you.
For your first question, yes, that's surely a typo and should be $0\le r&lt;q$. As for the second question: I'm pretty sure those gcds shouldn't be there—it seems that they want to say that the ordered pairs $(q,r)$ and $(q_1,0)$ are two distinct solutions to the division problem.
Max. distance of Normal to ellipse from origin How Can I calculate Maximum Distance of Center of the ellipse $\displaystyle \frac{x^2}{a^2}+\frac{y^2}{b^2} = 1$ from the Normal. My Try :: Let $P(a\cos \theta,b\sin \theta)$ be any point on the ellipse. Then equation of Normal at that point is $ax\sec \theta-by\csc \theta = a^2-b^2$. Then How can I find Max. distance of Center of the ellipse from the Normal
So, the distance of the normal from the origin $(0,0)$ is $$\left| \frac{a^2-b^2}{\sqrt{(a\sec\theta)^2+(-b\csc\theta)^2}} \right|$$ So, we need to minimize $(a\sec\theta)^2+(-b\csc\theta)^2=a^2\sec^2\theta+b^2\csc^2\theta=f(\theta)$ (say). So, $\frac{df}{d\theta}=a^2\cdot2\sec\theta\sec\theta\tan\theta+b^2\cdot2\csc\theta(-\csc\theta\cot\theta)=2a^2\frac{\sin\theta}{\cos^3\theta}-2b^2\frac{\cos\theta}{\sin^3\theta}$ For the extreme value of $f(\theta),\frac{df}{d\theta}=0$ $\implies 2a^2\frac{\sin\theta}{\cos^3\theta}-2b^2\frac{\cos\theta}{\sin^3\theta}=0$ or $\tan^4\theta=\frac{b^2}{a^2}$ Assuming $a>0,b>0$, $\tan^2\theta=\frac ba$ Now, $\frac{d^2f}{d\theta^2}=2a^2\left(\frac1{\cos^2\theta}+\frac{3\sin^2\theta}{\cos^4\theta}\right)+2b^2\left(\frac1{\sin^2\theta}+\frac{3\cos^2\theta}{\sin^4\theta}\right)>0$ for real $\theta$ So, $f(\theta)$ will attain the minimum value at $\tan^2\theta=\frac ba$ So, $f(\theta)_\text{min}=a^2\sec^2\theta+b^2\csc^2\theta_{\text{at }\tan^2\theta=\frac ba}=a^2\left(1+\frac ba\right)+b^2\left(1+\frac ab\right)=(a+b)^2$ So, the minimum value of $\sqrt{(a\sec\theta)^2+(-b\csc\theta)^2}$ is $a+b$, and hence the maximum distance of the normal from the centre is $\frac{a^2-b^2}{a+b}=a-b$. If $\tan\theta=\sqrt \frac ba$, then $\frac{\sin\theta}{\sqrt b}=\frac{\cos\theta}{\sqrt a}=\pm\frac1{\sqrt{a+b}}$. If $\sin\theta=\sqrt{\frac{b}{a+b}}\implies \csc\theta=\sqrt{\frac{a+b}{b}},\cos\theta=\sqrt{\frac{a}{a+b}}\implies \sec\theta=\sqrt{\frac{a+b}{a}}$ There will be another set $(\csc\theta=-\sqrt{\frac{a+b}{b}},\sec\theta=-\sqrt{\frac{a+b}{a}})$ There will be two more sets of values of $(\csc\theta,\sec\theta)$ for $\tan\theta=-\sqrt\frac ba$ So, we shall have four normals having the maximum distance from the origin.
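A quick numerical check of the minimum $(a+b)^2$ and of the resulting maximum distance $a-b$ (a NumPy sketch; the sample values $a=3$, $b=2$ are arbitrary):

import numpy as np

a, b = 3.0, 2.0
t = np.linspace(0.001, np.pi / 2 - 0.001, 200000)
f = a**2 / np.cos(t)**2 + b**2 / np.sin(t)**2
print(f.min(), (a + b)**2)                       # both ~25
print((a**2 - b**2) / np.sqrt(f.min()), a - b)   # maximum distance, both ~1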
Let a point $P(a\cos t, b\sin t)$ lie on the ellipse $\frac{x^2}{a^2}+\frac{y^2}{b^2}=1$. Then $\frac{dy}{dx}=-\frac{b^2x}{a^2y}$, so the slope of the normal at $(a\cos t, b\sin t)$ is $\frac{a^2y}{b^2x}=\frac ab\tan t$. The equation of the normal is $y-b\sin t=\frac ab\tan t\,(x-a\cos t)$, i.e. $ax\sec t-by\csc t-(a^2-b^2)=0$. Now the length (distance) of the normal from the origin $(0,0)$ is $$l=\left|\frac{-(a^2-b^2)}{\sqrt{a^2\sec^2 t+b^2\csc^2 t}}\right|=\frac{a^2-b^2}{\sqrt{a^2\sec^2 t+b^2\csc^2 t}}.$$ Differentiating with respect to $t$ and setting $\frac{dl}{dt}=0$ for the maximum length gives $$b^2\cos^4 t-a^2\sin^4 t=0,\qquad\text{i.e.}\qquad \tan t=\sqrt{\frac ba}.$$ The second derivative is negative there (since $a>b$), so the length of the normal is maximal at $t=\tan^{-1}\sqrt{\frac ba}$. With $\tan t=\sqrt{\frac ba}$ we have $\sin t=\sqrt{\frac{b}{a+b}}$, $\cos t=\sqrt{\frac{a}{a+b}}$, hence $\sec^2 t=\frac{a+b}{a}$, $\csc^2 t=\frac{a+b}{b}$, and therefore $$l_{\max}=\frac{a^2-b^2}{\sqrt{a^2\cdot\frac{a+b}{a}+b^2\cdot\frac{a+b}{b}}}=\frac{a^2-b^2}{\sqrt{(a+b)^2}}=\frac{a^2-b^2}{a+b}=a-b.$$
How do I integrate the integral for the independent Poisson random variable? I don't know how to derive this (my red pen mark)
Possible explanation: Since $\left| -\frac{1}{\sqrt n} \right | \approx 0,$ the interval of integration is very close to zero, and so $x^2 \approx 0 \implies e^{-x^2/2} \approx 1$.
When $x \approx 0$, we have $e^{-x^2/2} \approx 1$. Since the text states that we are doing a slightly less than rigorous proof, replace $e^{x^2/2}$ with $1$ and the answer follows immediately.
Finding a Lyapunov function for the differential system $x_1'=-8x_1^3-x_2$, $x_2'=-4x_2-4x_1^3$ I've got the following system of equations: $$ x_1'=-8x_1^3-x_2 \qquad x_2'=-4x_2-4x_1^3 $$ I'm trying to check if the equilibrium point in $(0,0)$ is stable or not. I am supposed to find a so-called Lyapunov function $L$, i.e. a function which satisfies the three following conditions: 1) $L(x_1,x_2)=0$ iff $(x_1,x_2)=(0,0)$ 2) $\forall_{(x_1,x_2)\neq(0,0)}L(x_1,x_2)>0$ 3) $\forall_{(x_1,x_2)}:\frac{dL}{dt}(x_1,x_2)\leqslant 0$ or $\frac{dL}{dt}(x_1,x_2)\geqslant 0$ The sign in 3) gives us the information about stability in the equilibrium point. I was trying to find a polynomial in $\mathbb{R}[x,y]$ with only even exponents, but I was unable to find such a function. Is there any way to do it without just "guessing"? What is the Lyapunov function for this system of ODEs?
Hint: If $L(x_1,x_2)$ has a part $x_1^{2n}$, the stability condition involves some multiples of $x_1^{2n+2}$ and $x_1^{2n-1}x_2$. If $L(x_1,x_2)$ has a part $x_2^{2m}$, the stability condition involves some multiples of $x_2^{2m}$ and $x_2^{2m-1}x_1^3$. The mixed terms $x_1^{2n-1}x_2$ and $x_2^{2m-1}x_1^3$ are the same if $(n,m)=(2,1)$. This suggest to check whether $L(x_1,x_2)=x_1^4+x_2^2$ is a Lyapunov function. If it was not (it is), one could have tried $L(x_1,x_2)=ax_1^4+x_2^2$ for some positive $a$, or even $L(x_1,x_2)=ax_1^4+bx_1^2x_2+x_2^2$ for some $(a,b)$ such that $b^2\lt4a$.
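A short SymPy check (assuming SymPy is available) of the suggested candidate: its derivative along trajectories factors into $-4(8x_1^6+3x_1^3x_2+2x_2^2)$, and the quadratic form $8u^2+3uv+2v^2$ in $u=x_1^3$, $v=x_2$ has negative discriminant, so the derivative is $\le 0$ and $L=x_1^4+x_2^2$ indeed works.

from sympy import symbols, factor

x1, x2 = symbols('x1 x2', real=True)
L = x1**4 + x2**2
x1dot = -8*x1**3 - x2
x2dot = -4*x2 - 4*x1**3
Ldot = L.diff(x1) * x1dot + L.diff(x2) * x2dot
print(factor(Ldot))   # -4*(8*x1**6 + 3*x1**3*x2 + 2*x2**2)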
But differentiating your $L$, I get: $$\frac{dL}{dt}(x_1,x_2)=(x_1^4+x_2^2)'=4x_1^3 \cdot x_1'+2x_2\cdot x_2'=4x_1^3\cdot(-8x_1^3-x_2)+2x_2\cdot(-4x_2-4x_1^3)=-32x_1^6-4x_1^3x_2-8x_2^2-8x_1^3x_2=-4\cdot(8x_1^6+3x_1^3x_2+2x_2^2)$$ which doesn't have a constant sign. Differentiating $L=ax_1^4+x_2^2$ works only for $a=-2$, but it is not positive. I am going to try the remaining option.
Proof by contrapositive Prove that if the product $ab$ is irrational, then either $a$ or $b$ (or both) must be irrational. How do I prove this by contrapositive? What is contrapositive?
The statement you want to prove is: If $ab$ is irrational, then $a$ is irrational or $b$ is irrational. The contrapositive is: If not($a$ is irrational or $b$ is irrational), then not($ab$ is irrational). A more natural way to state this (using DeMorgan's Law) is: If both $a$ and $b$ are rational, then $ab$ is rational. This last statement is indeed true. Since the truth of a statement and the truth of its contrapositive always agree, one can conclude the original statement is true, as well.
If $a,b \in \Bbb{Z}^+$ and $a\neq b$, then $ax^2+bx+(b-a) = 0$ has no positive integer root (solution).
Determining stability of equilibrium point I want to determine the stability property of the equilibrium point (0,0) for the system $$x'=-xy^4-y\cos(x^2y) \\ y'=3x^5\cos(x^2y)-\sin(y) $$ I get the eigenvalues $-1$ and $0$ for the Jacobian evaluated at $(0,0)$. We have theorems regarding the case when either one eigenvalue has a real part, or when all the eigenvalues are negative. But since these eigenvalues ($0$ and $-1$) do not satisfy this, I feel lost. I know that one could show it by constructing a Lyapanov function, but I wasnt able to do so. Could anyone help me out? How do I determine the stability property of origin?
The Lyapunov function is $$ V(x,y)= \frac12 x^6+\frac12 y^2. $$ Its derivative along the trajectories $$ \dot V= 3x^5\dot x+y\dot y=3x^5(-xy^4-y\cos x^2y)+y(3x^5\cos x^2y-\sin y) =-3x^6y^4-y\sin y $$ is non-positive in some neighborhood of the origin, thus, the origin is stable. In order to prove that the origin is asymptotically stable, we should show that the set $$ S=\{ (x,y): \dot V(x,y)=0 \}= \{ (x,y): y=0 \} $$ does not contain whole trajectories of the system except for the origin. This follows from the fact that $$ \dot y|_{(x,y)\in S}= 3x^5 $$ is nonzero for any $x\ne 0$.
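The cancellation of the cosine terms in $\dot V$ can be confirmed symbolically with a short SymPy sketch (assuming SymPy is available):

from sympy import symbols, sin, cos, simplify

x, y = symbols('x y', real=True)
V = x**6 / 2 + y**2 / 2
xdot = -x*y**4 - y*cos(x**2 * y)
ydot = 3*x**5*cos(x**2 * y) - sin(y)
print(simplify(V.diff(x) * xdot + V.diff(y) * ydot))   # -3*x**6*y**4 - y*sin(y)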
Assuming that near $(0,0)$ the dynamical system behaves as $$ \cases{ \dot x = -x y^4-y\\ \dot y = 3 x^5-y } $$ and assuming for the center manifold $h(x) = \sum_{k=0}^n a_k x^{k+1}$ we have $$ \dot h(x) = h'(x)(-x h^4(x)-h(x)) = \dot y =3x^5-h(x) $$ and equating coefficients we have for $n=4$ $$ \left\{ \begin{array}{rcl} \left(a_0-1\right) a_0&=&0 \\ \left(1-3 a_0\right) a_1&=&0 \\ \left(1-4 a_0\right) a_2-2 a_1^2&=&0 \\ \left(1-5 a_0\right) a_3-5 a_1 a_2&=&0 \\ a_4-a_0^5-6 a_4 a_0-3 \left(a_2^2+2 a_1 a_3+1\right)&=&0 \\\end{array} \right. $$ with solution $h(x)=3 x^5$ for $n=4$. For $n = 2,3, h(x)=0$ and for $n = 8$ we have $h(x) = 45 x^9+3 x^5$ The flow along the central manifold in both cases is stable. Regarding the case $n = 4$ we have a flow given by $$\dot x = -3 x^5 - 81 x^{21} $$ which is stable. Attached is a stream plot showing part of the center manifold in red. NOTE The $h(x)$ coefficients determination involves two solutions. The plot shown is for $n = 10$
A question about polynomials as vectors The space $P_n(R)$ is very similar to the vector space $R^{n+1}$; indeed one can match one to the other by the pairing $a_nx^n + a_{n−1}x^{n−1} + \ldots + a_1x + a_0$ is equivalent to $(a_n, a_{n−1}, \ldots, a_1, a_0)$, thus for instance in $P_3(R)$, the polynomial $x^3 + 2x^2 + 4$ would be associated with the 4-tuple $(1, 2, 0, 4)$. I am not sure I understand the equivalence. The left side of it looks like it could be summed up to a scalar - single number, while the right side is a vector in $R^n$. The same goes for the cubic polynomial which could be summed up to be a scalar for some $x$ whereas the 4-tuple is a vector in $R^4$. Please, explain all this. Thanks.
In an arbitrary $n$-dimensional vector space with basis $\{\mathbf{f}_1,\ldots,\mathbf{f}_n\}$, it is usual to denote by the $n$-tuple $(x_1,\ldots,x_n)$ the vector $$\sum_{i=1}^n x_i\mathbf{f}_i$$ If one fixes the following basis $\{1,x,\ldots,x^n\}$ of $P_n(\mathbb R)$, then the polynomial $x^3+2x^2+4$ can be written as $$\mathbf{1}\cdot x^3 + \mathbf 2\cdot x^2 + \mathbf 0\cdot x +\mathbf 4 \cdot 1$$ Which (by the above) is represented by the tuple $(1,2,0,4)$, which we can identify with an element of $\mathbb R^{3+1} = \mathbb R^4$. The explicit isomorphism is given by mapping the basis vector $x^i \in P_n(\mathbb R)$ to the $(n-i+1)$-th standard basis vector $e_{n-i+1}\in \mathbb R^{n+1}$. The key thing here is that the powers of $x$ are not scalars, they are indeterminates in a polynomial ring. You should check that adding polynomials and multiplying them my scalars under this identification with an $(n+1)$-tuple works the same as in $\mathbb R^{n+1}$ if you're not yet convinced, but explicitly constructing (and checking) the map I defined above is an isomorphism of vector spaces should convince you.
As you already stated, this is an equivalence, no equation. So you actually can't set both sides as equal. If you define a variable $x$ in $\mathbb{R}$, you can identify it for example as the tuple $(x,0)$ in $\mathbb{R}^2$. You have an equivalence between both elements, but they are not equal, as one lies in $\mathbb{R}$ and the other in $\mathbb{R}^2$. You can define a function $$ f: \mathbb{R} -&gt; \mathbb{R}^2 \\ f(x) = (x,0) $$ that identifies one element with the other. Can you define a function that identifies the polynomial elements with the vectors in your equivalence? Or even an inverse function?
Representation of polynomials Let $A$ be a ring and $A[x_1, x_2, ... ,x_n] = A[x_1][x_2]...[x_n]$ be the ring of the polynomials of $n$ independent variables over $A$, i.e. every element $f \in A[x_1, x_2, ... x_n]$ is $$f = f_m x_n^m + f_{m-1}x_n^{m-1} + ... + f_0$$ for some $m \in \mathbb Z^{\geq 0}$ and $f_i \in A[x_1, x_2, ..., x_{n-1}]$ for $0 \leq i \leq m$. I'm trying to prove that every such polynomial $f$ is uniquely represented as a finite sum of monomials $x_1^{\alpha_1} x_2^{\alpha_2} ... x_n^{\alpha_n}$. What I have done so far is this: First of all let $S = (\mathbb Z ^{\geq 0})^n$ and $$G = \bigoplus_{(\alpha_1, \alpha_2, ..., \alpha_n) \in S} A$$ Then $\psi : G \longrightarrow A[x_1, x_2, ... , x_n]$ defined by $$\psi (\{ a_{(\alpha_1, \alpha_2, ..., \alpha_n)}\} _ {(\alpha_1, \alpha_2, ..., \alpha_n) \in S}) = \sum _{(\alpha_1, \alpha_2, ..., \alpha_n) \in S} a_{(\alpha_1, \alpha_2, ..., \alpha_n)} x_1^{\alpha_1}x_2^{\alpha_2}...x_n^{\alpha_n}$$ is a surjective group homomorphism and all I have to show is that it's kernel is $0$. My idea is to do this by induction over the number of variables $n$. The case when $n = 1$ is easy, but then I don't know how to use the inductive hypothesis. Please help me! Thanks in advance!
Suppose you have $f=f_m x_n^m + f_{m-1}x_n^{m-1} + ... + f_0 \in A[x_1, x_2, ... x_n]$. By induction on $n$, each $f_i$ can be written uniquely as a sum of monomials in $x_1,\dots,x_{n-1}$. Define an element $a\in G$ by saying that $a_{(\alpha_1,\dots,\alpha_n)}$ is defined to be $0$ if $\alpha_n>m$ and the coefficient of $x_1^{\alpha_1}\dots x_{n-1}^{\alpha_{n-1}}$ in the unique representation of $f_{\alpha_n}$ as a sum of monomials if $\alpha_n\leq m$. Then $$\psi(a)=\sum_{(\alpha_1,\dots,\alpha_n)}a_{(\alpha_1,\dots, \alpha_n)} x_1^{\alpha_1}\dots x_n^{\alpha_n}=\sum_{\alpha_n}x_n^{\alpha_n}\sum_{(\alpha_1,\dots,\alpha_{n-1})}a_{(\alpha_1,\dots, \alpha_n)}x_1^{\alpha_1}\dots x_{n-1}^{\alpha_{n-1}}=\sum_{\alpha_n} x_n^{\alpha_n}f_{\alpha_n}=f.$$ This shows $\psi$ is surjective. For injectivity, suppose that $a\in G$ is such that $\psi(a)=\sum_{(\alpha_1,\dots,\alpha_n)}a_{(\alpha_1,\dots, \alpha_n)} x_1^{\alpha_1}\dots x_n^{\alpha_n}=0$. For each $i$, define $$f_i=\sum_{(\alpha_1,\dots,\alpha_{n-1})}a_{(\alpha_1,\dots, \alpha_{n-1},i)}x_1^{\alpha_1}\dots x_{n-1}^{\alpha_{n-1}}.$$ As above, we can write $\psi(a)=\sum_i x_n^i f_i$. Since $f_i\in A[x_1,\dots,x_{n-1}]$, $\psi(a)=0$ implies each $f_i$ must be $0$. By induction, this means that for each $i$, $a_{(\alpha_1,\dots,\alpha_{n-1},i)}=0$ for all $(\alpha_1,\dots,\alpha_{n-1})$. But this just means that $a=0$.
Not much to prove, really, this is a non-problem. Assume the result true for $n$ and study it for $n+1$. Note that $A[x_1, \dots, x_n, x_{n+1}] = A[x_1, \dots, x_n] [x_{n+1}]$, as you have noted yourself. To assume that $f$ has two distinct representations $S_1$ and $S_2$ as a sum of monomials is equivalent to showing that $S_1 - S_2$ is a representation not identically zero of the $0$ polynomial. Assume that $0 \in A[x_1, \dots, x_n] [x_{n+1}]$ can be written as $f_m x_{n+1} ^m + \dots + f_0$ with the coefficients $f_0, \dots, f_m \in A[x_1, \dots, x_n]$. Then, equality of two polynomials in any ring means that they must be equal component-wisely, so $f_i = 0 \ \forall i = 0, \dots, m$, so every representation of $0$ as a sum of monomials must be identically zero. Therefore, your conclusion follows.
What common definition of norm on the space of analytic functions makes the basis $e_n=\frac{x^n}{n!}$ orthonormal? I mean, this basis, with factorials is very useful as it is the basis of Taylor expansion. But it is not orthonormal under usual definitions of norm ($\int_a^b \sqrt{f(x)^2}dx$). I wonder, how it should be modified so to make the basis normal.
You can extend it to the complex domain and then you will have an upper triangular form for the matrix $a_{ij}=\langle z^i,z^j\rangle $, since for $i>j$ we get $a_{ij}=\langle z^i,z^j\rangle =0$ by Cauchy's integral theorem, where the inner product is defined as $\langle z^i,z^j\rangle =\int_C z^{i-j}\,\mathrm{d}z$ for a closed curve $C$. If you define the same inner product for a closed curve not including the origin, it might work as a diagonal matrix also. You can convert your intuition into an equation. Another definition of an inner product, for real analytic functions only, would be $\langle f,g\rangle=\sum_i D^i(f)|_{x=a} D^i(g)|_{x=a}$ where $D$ is the derivative operator. In this inner product $(x-a)^i$ and $(x-a)^j$ are orthogonal.
Calculating permutations for a specific password policy The security researcher Troy Hunt posted an example of an obscure password policy and I've been trying to figure out how to calculate the possible permutations (Source: https://twitter.com/troyhunt/status/885243624933400577) The rules are: The password must contain $9$ numbers (and only $9$ numbers) It must include at least $4$ different numbers It cannot include the same number more than three times I understand the basic permutations will be $10^9$ ($0-9$ nine times) $= 1,000,000,000$ What I don't understand is how you factor in the reduction in permutations by enforcing $4$ different numbers and limiting repeats to $3$.
We calculate the number of valid passwords with the help of exponential generating functions. At first we are looking for strings of length $9$ built from the alphabet $V=\{0,1,\ldots,9\}$ which contain a digit from $V$ no more than three times. The number of occurrences of a digit can be encoded as \begin{align*} 1+x+\frac{x^2}{2!}+\frac{x^3}{3!} \end{align*} In the following we denote with $[x^n]$ the coefficient of $x^n$ in a series. Since we have to consider ten digits building strings of length $9$ we calculate with some help of Wolfram Alpha \begin{align*} 9![x^{9}]\left(1+x+\frac{x^2}{2!}+\frac{x^3}{3!}\right)^{10}=916\,927\,200\tag{1} \end{align*} The second condition requires at least four different digits for valid passwords. We respect the second condition by subtracting invalid words from (1). We observe there are no words of length $9$ which consist of one or two different digits whereby each digit does not occur more than three times. We conclude the only type of invalid strings of length $9$ counted in (1) is built of words with three different digits each digit occurring exactly three times. There are $\binom{10}{3}$ possibilities to choose three digits out of $V$. The first digit can be placed in $\binom{9}{3}$ different ways, leaving $\binom{6}{3}$ different possibilities for the second digit and $\binom{3}{3}$ for the third digit. We finally conclude the number of valid passwords is \begin{align*} 9![x^{9}]&amp;\left(1+x+\frac{x^2}{2!}+\frac{x^3}{3!}\right)^{10}-\binom{10}{3}\binom{9}{3}\binom{6}{3}\binom{3}{3}\\ &amp;=916\,927\,200-120\cdot 84\cdot 20\cdot 1\\ &amp;=916\,927\,200-201\,600\\ &amp;=\color{blue}{916\,725\,600} \end{align*}
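The coefficient extraction above can be reproduced in a few lines of SymPy (a sketch, assuming SymPy is available; the binomial product for the invalid words is written out exactly as in the computation above):

from sympy import symbols, expand, factorial, binomial

x = symbols('x')
egf = (1 + x + x**2/2 + x**3/6)**10
valid_by_multiplicity = factorial(9) * expand(egf).coeff(x, 9)
invalid = binomial(10, 3) * binomial(9, 3) * binomial(6, 3) * binomial(3, 3)
print(valid_by_multiplicity, valid_by_multiplicity - invalid)   # 916927200 916725600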
First, each number can appear at most 3 times. So we are choosing 9 elements from 1, 1, 1, 2, 2, 2, ..., 9, 9, 9. Then, there must be at least 4 kinds of numbers. It's impossible to have 1, 2 kinds of numbers. The number of 3 kinds is 9 choose 3. So 27 choose 9 - 9 choose 3 = 4686741 is the final answer.
When can I simplify an equation? Suppose I have a few equations: $$\cos^2(x) = \sin(x)\cos(x) \Rightarrow \cos(x) = \sin(x) $$ $$ x^2 + 3x \ge 2x \Rightarrow x(x+3) \ge 2x \Rightarrow x+3 \ge 2$$ Which of them are true and why? Basically, when can one simplify an equation or an inequality with common terms that contain $x$ (a variable) that we need to solve for? We may lose solutions.
For equations one can always apply a function on both sides, this includes dividing (unless of course you divide by $0$), squaring, subtracting etc. This is because of the obvious but nevertheless useful to note fact that $x = y$ implies $f(x) = f(y)$. For inequalities this is not the case, because essentially $x \leq y$ does not imply $f(x) \leq f(y)$ for all functions, but there are a lot of functions for which it is known whether the function is decreasing or increasing. For example, multiplying by some $c>0$ is an increasing function, in other words: $x \leq y$ implies $cx \leq cy$. Now we can apply the preceding discussion to your examples. If $\cos^2(x) = \sin(x) \cos(x)$, we can divide both sides by $\cos(x)$, obtaining $\cos(x) = \sin(x)$, if $\cos(x) \neq 0$. In the case $\cos(x) = 0$, the equation does not imply anything new. Now for your inequality, only the last step is dubious, because in the case $x<0$, $1/x< 0$, so multiplying both sides by $1/x$ turns around the sign and we get $x+3 \leq 2$. If $x>0$ the last step holds, but then this implies $x \geq -1$ which does not give us any new information, since $x> 0$. However, adding and subtracting is increasing, so $x^2 + 3x \geq 2x$ if and only if $x^2+x \geq 0$. We see this equation is satisfied if $x \geq 0$. If $x \leq 0$, we have $x \leq -1$. Conversely if $x \leq -1$ we have $x+3 \leq 2$, and multiplying both sides by $x$ and noticing $x<0$ we have $x(x+3) \geq 2x$. So the solutions of the inequality are all $x$ for which $x \geq 0$ or $x \leq -1$. Finally a general tip for this kind of confusion: always remind yourself of what it is you want to do. Do you want to find $x$ such that $f(x) = g(x)$? Do you want to validate whether $f(x) = g(x)$ for all $x$? Do you want to see whether $f(x) = g(x)$ implies some contradiction? Keep using logic, if you take a step, think about whether it is reversible (in other words, whether it is 'if and only if'), and always remember that an equation without an explanation means nothing.
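If you want to double-check the solution set of the example inequality, here is a short SymPy sketch (assuming SymPy is available); it returns a set equivalent to $x\le-1$ or $x\ge0$:

from sympy import symbols
from sympy.solvers.inequalities import solve_univariate_inequality

x = symbols('x', real=True)
print(solve_univariate_inequality(x**2 + 3*x >= 2*x, x))
# prints a union of intervals equivalent to x <= -1 or x >= 0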
If $f(x)$ is monotonic then you can simplify as if the equation is an equality. Monotonic means the first derivative never changes sign. Broadly speaking this means the function always goes up with $x$ or down with $x$ and never changes. In your first example, $cos^2(x)$ isn't strictly $1:1$ because in some places it goes up with $x$ and in other paces it goes down. In your 2nd, the quadratic form will again go both up and down due to positive and negative square roots. As a rule, find the source of the duotonicity and this will lead you to find the multiple solutions. In the case of the quadratic you will need to allow for the positive and negative square roots. In the case of the trigonometric identities you need to account for the cyclicity of the functions with period $2\pi$.
Calculating the missing dimension? 1) A right cone has a surface area of 12 m$^2$ and radius 1.3 m. Here is my answer: 1) $s$ = 1.28 (photo of how I got my answer) textbook answer: 1.64 m How did they get that??
Your error is in thinking that they want the vertical height of the cone. For a cone, $s$ usually represents the slant height. So the first thing you want to do is to express the total surface area as $$S=\pi r s+\pi r^2$$ where $\pi r s$ is the lateral area. Then $$ s=\frac{S-\pi r^2}{\pi r}\approx1.638 $$ as in your text!
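For reference, the arithmetic in plain Python that recovers the textbook's value:

from math import pi

S, r = 12, 1.3
s = (S - pi * r**2) / (pi * r)   # from S = pi*r*s + pi*r**2, solve for the slant height s
print(round(s, 3))               # 1.638, i.e. the textbook's 1.64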
The formula to calculate the surface area of a cone is: $$SA=\pi r(r+\sqrt{h^2+r^2})$$ We know the $SA$ and $r$. Now solve for $h$. $$12=\pi(1.3)(1.3+\sqrt{h^2+(1.3)^2})$$ $$\frac{12}{1.3\pi}-1.3=\sqrt{h^2+(1.3)^2}\text{ moved stuff we already know to left side}$$ $$(\frac{12}{1.3\pi}-1.3)^2=h^2\text{ squared both sides}$$ $$\sqrt{\frac{12}{1.3\pi}-1.3}=h\text{ take square root to solve for $h$}$$ $h\approx 1.28$ I'm guessing they made a mistake.
Probability that $n$ random walks (1D) intersect a single point I am struggling finding a relation to determine the probability $n$ random walks (1D) intersect in a single point at step $s$. In the method below my attempts. My method is somewhat intuitive based. I am looking for more rigorous proof. note: This question arises from someone who claims that a matching cumulative digit sum of: $\pi$, $e$ and $\varphi$ (golden ratio) is unique and &quot;cosmological&quot; [1]. I tend to disprove it. This digit sum can be seen as a random walk if the constants are normal (every digit occurs with same frequency). Method: For every step $s$ on the random walk we can determine the probability density function if we know the standard deviation on every step $s$. The standard deviation of a single step can be calculated it's a discrete uniform distribution &quot;equally likely outcomes&quot;, where $q$ is the number of outcomes e.g. the number of digits $[0,1,2,3,4,5,6,7,8,9]$, $q=10$: $$\sigma=\sqrt{\frac{q^{2}-1}{12}} $$ All the (1D) random walks start in the origin for this example. The standard deviation will grow with every step $s$, the variance is proportional to the number of steps [2]. $$Var(s)=s \cdot \sigma^{2}$$ $$\sigma(s)=\sqrt{s} \cdot \sigma$$ While the bins grow rapid I assume a normal approximation of the Binomial distribution. $$f(x)=\int_{-\infty}^{\infty} {\frac {1}{\sigma {\sqrt {2\pi }}}}e^{-{\frac {1}{2}}\left({\frac {x}{\sigma }}\right)^{2}}dx $$ The probability that $n$ random walks intercept in a single point is (not sure): $$p(s)=\int_{-\infty}^{\infty} \left[ {\frac {1}{\sigma {\sqrt {2\pi }}}}e^{-{\frac {1}{2}}\left({\frac {x}{\sigma }}\right)^{2}} \right]^n dx $$ With help of Wolfram Alpha [3] the solution is found for $n=3$ meaning the probability of $3$ point intersecting random walks. $$p(s)=\frac{1}{2 \sqrt{3} \ \pi \ \sigma^{2}} \cdot \frac{1}{s}$$ The total probability $p(n)$ is proportional to the reciprocal sum of $s$. So the total probability is proportional to the harmonic series: $$p(n)=\frac{1}{2 \sqrt{3} \ \pi \ \sigma^{2}} \cdot \sum_{s=1}^{\infty}\frac{1}{s}$$ This series diverges, meaning there are infinate point intersections of $n=3$ random walks. So a matching cumulative digit sum of $\pi$, $e$ and $\varphi$ is not unique, probability $\sim 8 \%$ for the first 1200 digits (see graph). Question Does anyone know the general formula for the probability $p(n)$ that $n$ random walks (1D) intersect a single point? import numpy as np #Elements of digits [0,1,2,3,4,5,6,7,8,9] rescaled to fit random walk array=[-9,-7,-5,-3,-1,1,3,5,7,9] #steps, in single random walk, x walks to intercept, number of trial to find intercept steps=2500 xwalks=3 trials=1500 #Set output array to zero count=np.full([steps],0) for n in range(trials): #Identify initial array, set total array to zero w0=np.random.choice(array,steps) w0=np.cumsum(w0) total=np.full([steps],0) #Select x random walk check for intercept for m in range(xwalks-1): #Next current random walk w=np.random.choice(array,steps) w=np.cumsum(w) #Compare previous and current random walk eq=np.equal(w0,w) eq=eq.astype(int) #Count intercepts total=total+eq #Set current walk to previous w0=w #Sum all interceptions for all trials count=count+np.where(total==(xwalks-1),1,0) #Print output print(count) print(np.sum(count)) print(np.sum(count)/trials)
The probability that $n$ random walks (1D) starting at the origin intersect in a single point can be calculated with: $$p(s)=\int_{-\infty}^{\infty} \left[ {\frac {1}{\sigma {\sqrt {2\pi }}}}e^{-{\frac {1}{2}}\left({\frac {x}{\sigma }}\right)^{2}} \right]^n dx$$ Solutions have been found with Wolfram Alpha 1. This solution can be written as a function of the number of steps $s$. So the probability $p$ at $s$ steps can be calculated with an empirically found formula: $$p(s)= \frac{1}{g(n) \cdot \sigma ^{(n-1)} } \cdot s^{-\frac{1}{2}(n-1)} $$ where the function $g(n)$ is found as: $$g(n)=\sqrt { n} \cdot (2 \pi)^{\frac{1}{2}(n-1)}$$ The total probability over all steps $s$ is only dependent upon the $p$-series formed by $s$. This summation can be calculated with the Riemann zeta function: $$\zeta(\small{\frac{1}{2}(n-1)} \normalsize) =\sum\limits_{s=1}^{\infty}s^{- \frac{1}{2}(n-1)}$$ $$p(n)=\frac{1}{g(n) \cdot \sigma^{(n-1)}} \cdot \sum\limits_{s=1}^{\infty}s^{- \frac{1}{2}(n-1)} $$ Graph information: below are a table and a plot for the probabilities $p(n)$ of $n$ point-coinciding random walks (1D). Details: The standard deviation is chosen for $10$ equally likely outcomes per step: $$\sigma=\sqrt{\frac{99}{12}}$$ For infinitely many random walks $n\rightarrow \infty$ the $p$-series converges to $1$. This probability is plotted as $p_{\infty}(n)$. $p(n)$ can be graphed as a continuous function. Values $n<3$ have not been graphed because the Riemann zeta values $\zeta(<1)$ would result in negative probabilities. Observations: $n=3$ random walks have infinitely many single point intersections. $p$-series: $\sum\limits_{s=1}^{\infty}s^{-1}=\infty$, Riemann zeta: $\zeta(1)=\infty$; both give infinite probability $p(n)\rightarrow \infty$. $n=2$ random walks have infinitely many single point intersections for the $p$-series: $\sum\limits_{s=1}^{\infty} s^{-\frac{1}{2}}=\infty$; the interpretation via the Riemann zeta function is unclear: $\zeta(\frac{1}{2})=-1.46035...$ resulting in a negative probability $p(2)$. It is possible that $n\geq4$ random walks never point intersect. For $n=4$ only $\sim 0.35 \%$ of them will point intersect (with 10 equally likely outcomes per step). The probability $p(n)$ plot of $n$ random walks (1D) intersecting a single point is directly related to the $p$-series. A relation with the Riemann zeta function is unclear, since the number of walks $n=a+ib$ would then be a complex number. I learned this has to do with: recurrence and transience 2. Any more rigorous proofs or information is welcome please.
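The closed form behind the $g(n)$ factor above (the integral of the $n$-th power of a Gaussian density) can be checked numerically; this is a sketch assuming NumPy and SciPy are available, with $\sigma=\sqrt{99/12}$ as chosen above:

import numpy as np
from scipy.integrate import quad

sigma0 = np.sqrt(99 / 12)          # 10 equally likely digit steps
for n in (2, 3, 4):
    for s in (1, 10, 100):
        sd = np.sqrt(s) * sigma0   # standard deviation after s steps
        val, _ = quad(lambda x: (np.exp(-x**2 / (2 * sd**2)) / (sd * np.sqrt(2 * np.pi)))**n,
                      -np.inf, np.inf)
        closed = 1 / (np.sqrt(n) * (2 * np.pi)**((n - 1) / 2) * sd**(n - 1))
        print(n, s, val, closed)   # the two columns agree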
I now realise that your rebuttal of my analysis in your comment above at Probability that $n$ random walks (1D) intersect a single point was in fact valid, as I had made a mistake in my analysis. I assumed incorrectly that I could simply add together the probabilities of collision after one step, two steps etc. to get the total probability of collision. However, these events are not mutually exclusive, since random walks can collide at multiple steps, so if the probability of collision at step 1 was 0.6 and the probability of collision at step 2 was 0.5, I would already have a probability exceeding 1. Hence, the correct process is to work out the probability of non-collision before step m multiplied by the probability of collision at step m, and only then add together the probabilities. Alternatively, I could just calculate the probability of non-collision at or before step m by multiplying together probabilities, then subtract it from one, and take the limit as m tends to infinity. Edit: This method describes collision random walk with probability per step is: [-1,1] 50%. s: step number in random walk, n: number of intersecting random walks. Probability intersecting at single step position: $$p(s)= \sum_{k=0}^{s} \left(\frac{1}{2^{s}} \binom{s}{k}\right)^{n}$$ Total probability over all steps: $$p(n)=\sum_{s=1}^{\infty}\sum_{k=0}^{s} \left(\frac{1}{2^{s}} \binom{s}{k} \right)^{n}$$ I have now created a spreadsheet to calculate the answer to your question, which is: n p(n) 1 1 2 1 3 0.907472937 4 0.382922701 5 0.147129979 6 0.062758395 7 0.028516044 8 0.013419184 9 0.006446336 10 0.003136593 11 0.001538874 12 0.000759198 I am not sure how accurate my results are, but if you need better accuracy, you could extend the spreadsheet tables as required. My spreadsheet uses data tables, and it appears that Excel cannot handle nested data tables, so the table above was produced by manually typing in the first twelve input values for n and copying the result. For a larger data set, a quick script could be written to automate the process. The spreadsheet is available at https://groups.io/g/UnsolvedProblems/message/12251. I can now see that your question is related to Pólya's Random Walk constants, but because you are essentially asking a different question, I don't think the answer to your question should directly equate to Pólya's Random Walk constants. In terms of the normal approximation you mentioned in your analysis, the conditions of the Central Limit Theorem state under which circumstances a normal approximation applies, which can be found at https://en.wikipedia.org/wiki/Central_limit_theorem.
How to prove this inequality $\sum\limits_{cyc}\frac{1}{a+3}-\sum\limits_{cyc}\frac{1}{a+b+c+1}\ge 0$ Show that: $$\dfrac{1}{a+3}+\dfrac{1}{b+3}+\dfrac{1}{c+3}+\dfrac{1}{d+3}-\left(\dfrac{1}{a+b+c+1}+\dfrac{1}{b+c+d+1}+\dfrac{1}{c+d+a+1}+\dfrac{1}{d+a+b+1}\right)\ge 0$$ where $abcd=1,a,b,c,d>0$ I have shown the three-variable inequality: Let $ a$, $ b$, $ c$ be positive real numbers such that $ abc=1$. Prove that $$\frac{1}{1+b+c}+\frac{1}{1+c+a}+\frac{1}{1+a+b}\leq\frac{1}{2+a}+\frac{1}{2+b}+\frac{1}{2+c}$$ Also see: http://www.artofproblemsolving.com/Forum/viewtopic.php?f=51&t=243 From this inequality I have seen a nice method. I think this four-variable inequality is also true. First, thank you Aditya for your answer, but I have read your solution and it's not true.
We have $$\begin{align} \frac3{a+b+c+1} &- \frac1{a+3} - \frac1{b+3} - \frac1{c+3}\\ &= \sum_{cyc}^{a, b, c} \left(\frac1{a+b+c+1}-\frac1{a+3} \right) \\ &= \frac1{a+b+c+1} \sum_{cyc}^{a, b, c}\frac{2-b-c}{a+3} \\ &= \frac1{(a+b+c+1)\prod_{cyc}^{a, b, c}(a+3)} \sum_{cyc}^{a, b, c} (18-6a-4ab-a^2b-ab^2-6a^2) \\ &\le \frac1{1\times3^3} \sum_{cyc}^{a, b, c} (18-6a-4ab-a^2b-ab^2-6a^2) \end{align}$$ using $a, b, c > 0$. Summing over four such inequalities, we get $$3\sum_{cyc}\frac1{a+b+c+1} - 3\sum_{cyc} \frac1{a+3} \\ \le \frac2{27}\left( 108-\sum_{cyc}\left(9(a+a^2)+4(2ab+bd)+(a^2b+ab^2+a^2c+ac^2+b^2d+bd^2)\right) \right)$$ Now by AM-GM and the constraint, we have that $\sum_{cyc} a^mb^n \ge 4\sqrt[4]{(abcd)^{m+n}}=4$ for all $m, n \ge 0$, so RHS $\le 0$ and we are done. P.S. the method looks general, though I wouldn't want to write down the cyclic sums for more variables!
Partial Proof: For general case of n variables, the inequality converts to: $$\sum_i^n \frac1{1+a_1+a_2+\cdots+a_n-a_i}\le \sum_i^n\frac1{n-1+a_i}$$ Similiar to the given proof we can convert $\frac {a_1}{a_1+(n-1)}$ like this: $$\frac{a_1}{a_1+(n-1)}=\frac{a_1}{a_1+(n-1)(a_1a_2\cdots a_n)^{1/n}}$$ Dividing numerator and denominator by $a_1^{1/n}$: $$=\frac{a_1^{(n-1)/n}}{a_1^{(n-1)/n}+(n-1)\left(\frac{a_1a_2\cdots a_n}{a_1}\right)^{1/n}} $$ Finally using AM-GM gives: $$\frac{a_2+a_3+\cdots+a_n}{(n-1)}\ge\left(a_2a_3\cdots a_n\right)^{1/(n-1)}$$ Or: $$(n-1)\left(a_2a_3\cdots a_n\right)\le (a_2+a_3+\cdots+a_n)^{n-1}$$ $$=\frac{a_1^{(n-1)/n}}{a_1^{(n-1)/n}+(n-1)\left(\frac{a_1a_2\cdots a_n}{a_1}\right)^{1/n}} \ge\frac{a_1^{(n-1)/n}}{a_1^{(n-1)/n}+a_2^{(n-1)/n}+\cdots+a_n^{(n-1)/n}}$$ So, $$\sum_i^n\frac{a_i}{a_i+(n-1)}\ge\sum_i^n\frac{a_1^{(n-1)/n}}{a_1^{(n-1)/n}+a_2^{(n-1)/n}+\cdots+a_n^{(n-1)/n}}=1\tag{i}$$ We have proved what we need for the general case of n-variables, try putting $n=3$. Since product of all numbers is 1, we can define new fractions as: Let $$\displaystyle a_1:=\frac{x_1}{x_2},a_2:=\frac{x_2}{x_3},\cdots,a_n:=\frac{x_n}{x_1}$$ Notice that in the given proof: $$\frac{b}{ab+b+1}=\frac{x_2/x_3}{x_1/x_2.x_2/x_3+x_2/x_3+1}=\frac{x_2}{x_1+x_2+x_3 }$$ Now similiar to the given proof we can show that(step unproven): $$\frac{2}{a_1+(n-1)}-\frac{1}{1+a_2+a_3+\cdots+a_n}-\frac{x_2}{x_!+x_2+\cdots+x_n }\ge0$$ $$\frac{2}{a_1+(n-1)}\ge \frac{1}{1+a_2+a_3+\cdots+a_n}+\frac{x_2}{x_1+x_2+\cdots+x_n }$$ $$\frac1{a_1+(n-1)}+\frac1{a_1+(n-1)}\ge\frac{1}{1+a_2+a_3+\cdots+a_n}+\frac{x_2}{x_1+x_2+\cdots+x_n }$$ Since $\displaystyle \sum_i^n\frac{x_2}{x_1+x_2+\cdots+x_n }=1$ $$\frac1{a_1+(n-1)}+\frac1{a_1+(n-1)}\ge\frac{1}{1+a_2+a_3+\cdots+a_n}+1$$ $$\sum_{cyc}\frac{1}{a_1+(n-1)}-\sum_{cyc,i}\frac1{1+a_2+a_3+\cdots+a_n }\ge1-\sum_{cyc}\frac{1}{a_1+(n-1)}\ge0 $$ The$\ge0$ part, we have proved in (i). So, $${\large \sum_{cyc}\frac{1}{a_1+(n-1)}\ge\sum_{cyc,i}\frac1{1+\sum_{cyc,j\ne i}a_j}}\Box$$
Where can one post proofs of unsolved maths problems? With certificates etc. There is a 0.1% chance I have solved an unimportant yet unsolved maths problem. I live in the UK - can get to London. Where can I get a certificate etc. if I do solve it? I'm going to talk to my maths teacher (PhD). And if, on the extremely rare chance, I solve it, where do I go?
When you are acquainted enough with mathematics, you should be able to write a clean proof. Learn LaTeX, write your proof in LaTeX, publish it on arXiv. That way, you don't have to go by some mathematics professor who may be in dire need of an important publication, and it costs no money. If however arXiv doesn't accept your proof, it will be either because your proof is obviously not based on real mathematics or obviously wrong (or you are unable to use LaTeX properly). If arXiv accepts, soon you will see a thread here or on MathOverflow on your approach if the problem is really important and your approach seems reasonable enough. Of course, this isn't THE way to go, just the way I would do it. Technically I think, as Mathmo123 mentioned, this question would be better handled in Academia.
$\mathbb{R}$ with the finite complement topology Let $X=\mathbb{R}$ be given with the collection $\tau$ where $$ \tau = \{U\subset X: |X\setminus U|<\aleph_0\}\cup \{\emptyset\} $$ I was sitting at my computer, when I suddenly asked myself: "How is $\tau$ actually a topology on $\mathbb{R}$?". It is supposed to satisfy the arbitrary union condition, but it doesn't. Indeed $$ \bigcup_{i\in \mathbb{N}} \mathbb{R}\setminus \{i\} \notin \tau $$ but each of $\mathbb{R}\setminus \{i\} \in \tau$. Can someone explain to me how $\tau$ is a topology? edit as Thomas Andrews points out, I was confusing unions with intersections and, in general, just being an idiot.
To get a topology you need to include $\emptyset$ in $\tau$. With that fixed the proof is easy. $\tau$ is closed under intersection of two sets (and hence finitely many). If $A$ and $B$ are both in $\tau$ and neither is empty, then there are only finitely many elements missing from $A$ and finitely many missing from $B$. For an element $x$ to not be in $A\cap B$ we must have either $x\notin A$ or $x\notin B$, which can happen only for finitely many $x$. Of course, if one of $A,B$ is empty, so is the intersection so $A\cap B$ is still in $\tau$ Closure under unions is even easier. If $U_i$ are elements of $\tau$ for some indices $i\in I$, then if at least one of the sets, say $U_{i_1}$, is non-empty then there are only finitely many elements of $X$ missing from $U_{i_1}$. For an element $x\in X$ to not be in $\bigcup_{i\in I}U_i$ we must have $x\notin U_{i_1}$, so this can only happen for finitely many $x\in X$. Again, if all the $U_i$:s are empty their union is still in $\tau$.
The union you have written is all of $\mathbb{R}$ which is in the topology.
Amount of elements in a finite vector space. I'm trying to solve two exercises from Kostrikin's book. Ex. 1 How many elements does the vector space $\mathbb{F}_p^n$ (vectors $(x_1,x_2,\dots,x_n)$ of length $n$) over a field $\mathbb{F}_p$ with $p$ elements contain? How many solutions does the equation $a_1x_1+a_2x_2+\dots+a_nx_n=0$ have (not all $a_i=0$)? Ex. 2 How many $k$-dimensional subspaces ($1\le k\le n$) does an $n$-dimensional vector space $V$ over a field $F_q$ with $q$ elements contain? I think I solved the first exercise: $$\left|\mathbb{F}_p^n\right|=p^n$$ because I need to create a vector of length $n$. I can choose the first element in $p$ ways, the second too.... so $p\cdot p\cdot p\cdots p$. The number of solutions is the number of linearly dependent vectors in this space. The dimension of this space equals $n$, so a maximal set of independent vectors contains $n$ vectors. Is the number of solutions equal to $p^n-n$? In the case of the second exercise I have two solutions. First: there is only one $k$-dimensional subspace? Because all spaces with dimension $k$ are isomorphic. Second: a basis of the space $V$ contains $n$ vectors. So I can choose ${n \choose k}$ vectors from the basis of $V$ and create a $k$-dimensional subspace. So there are ${n \choose k}$ subspaces. Are any of my solutions correct? Thanks.
Your answer to the first exercise is correct. For the second exercise, first note that they ask for the number of $k$-dimensional subspaces, not the number of isomorphism classes of $k$-dimensional subspaces. If they were asking for isomorphism classes, then yes you're right there would only be 1. Also, you make the incorrect assumption that every $k$-dim subspace can be obtained as the span of a subset of your basis. This is not true. For example, in $F^2$ (let $F := \mathbb{F}_p$), you can take the basis $\{(1,0),(0,1)\}$, but here subsets of the basis yield only two distinct 1-dim subspaces, even though there are many more. For example, what about the subspace generated by $(1,1)$? In this example, $F^2$ should be thought of as a plane, and 1-dim subspaces are just lines in the plane going through the origin, which are classified by their slope (which may be infinity for the vertical line). To correctly count the number of $k$-dim subspaces, note that any such subspace has a basis of $k$ vectors, and thus we can begin by counting the number of sets of $k$ linearly independent vectors in $V$, which we will identify with $F^n$. For the first vector you have $p^n-1$ choices (anything but $0$). For the second, you have $p^n-p$ choices (anything but a vector in the span of the first), and so on. Thus, you have $$\prod_{j=0}^{k-1}(p^n-p^j)$$ possible ordered sets of $k$ linearly independent vectors in $V$. Thus we have a map (of sets): $$f : \{\text{ordered lists of $k$ linearly independent vectors in $V$}\}\longrightarrow \{\text{$k$-dim subspaces of $V$}\}$$ (where $f(v_1,\ldots,v_k) = \text{Span}\{v_1,\ldots,v_k\}$) which is surjective, but not injective. We would like to count the size of $f^{-1}(U)$, where $U\le V$ is a $k$-dim subspace. Since $f^{-1}(U)$ is just the set of ordered bases of $U$, we find that $$|f^{-1}(U)| = \text{the number of ordered bases of $U$}$$ Identifying $U$ with $F^k$, we find that the number of ordered bases of $U$ are in bijection with matrices in $GL_k(F)$ (this group acts freely and transitively on the set of such ordered bases, or alternatively, any such ordered basis gives you a matrix in $GL_k(F)$ with elements of the basis as columns). Thus, $|f^{-1}(U)| = |GL_k(F)|$. Since this doesn't depend on $U$, we find that the number of $k$-dim subspaces of $V$ is precisely: $$\frac{\prod_{j=0}^{k-1}(p^n-p^j)}{|GL_k(F)|}$$ By the same reasoning as above, the size of $GL_k(F)$ is $$|GL_k(F)| = \prod_{j=0}^{k-1}(p^k-p^j)$$ so: $$\frac{\prod_{j=0}^{k-1}(p^n-p^j)}{|GL_k(F)|} = \frac{\prod_{j=0}^{k-1}(p^n-p^j)}{\prod_{j=0}^{k-1}(p^k-p^j)} = \prod_{j=0}^{k-1}\frac{p^{n-j}-1}{p^{k-j}-1}$$
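A small numeric check of the counting argument above (this snippet is an addition, not from the original answer; brute force is only feasible for tiny cases, here $\mathbb{F}_2^3$):

from itertools import product

p, n = 2, 3   # work over F_2 in dimension 3

def count_by_formula(p, n, k):
    # prod_{j<k} (p^n - p^j) / prod_{j<k} (p^k - p^j), as derived above
    num = den = 1
    for j in range(k):
        num *= p**n - p**j
        den *= p**k - p**j
    return num // den

def span(vectors):
    # the set of all F_p-linear combinations of the given vectors
    combos = set()
    for coeffs in product(range(p), repeat=len(vectors)):
        v = tuple(sum(c * vec[i] for c, vec in zip(coeffs, vectors)) % p for i in range(n))
        combos.add(v)
    return frozenset(combos)

nonzero = [v for v in product(range(p), repeat=n) if any(v)]
for k in (1, 2):
    spans = {span(vs) for vs in product(nonzero, repeat=k)}
    brute = sum(1 for s in spans if len(s) == p**k)   # keep only genuinely k-dimensional spans
    print(k, brute, count_by_formula(p, n, k))        # brute force and formula agree (both give 7)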
After 4 years I faced this exercise again and I found that the first exercise was not solved properly. The first part is OK and $\left|\mathbb{F}_p^n\right|=p^n$. The second part is wrong. The number of solutions is not $p^n-n$. My new solution: Let's begin by counting how many solutions this equation has: $$ a_1x_1+a_2x_2=0\;\;\;\;\;\; a_i,x_i\in\mathbb{F}_p $$ Let's assume that $a_1\not =0$ (for $a_2$ the result will be the same). Let's assign any value to $x_2$ (we have $p$ possibilities). Then $a_2x_2'=r$ ($x_2'$ is the assigned value). $\mathbb{F}_p$ is a field, so $r$ has an additive inverse $-r$. $$a_1x_1=-r$$ For the same reason (and $a_1\not=0$) $a_1$ has a multiplicative inverse $a_1^{-1}$, so we have a solution. $$ x_1=-ra_1^{-1}$$ We have $p$ solutions for this equation. Now let's turn to the general equation. $$ a_1x_1+a_2x_2+a_3x_3+\cdots+a_nx_n=0$$ Let's assume that $a_1\not =0$ (we could do this for any $a_i$) and let's assign any values to $x_2,x_3,x_4,\cdots x_n$ (we have $p^{n-1}$ possibilities). Then let's evaluate this expression. Result: $$ a_1x_1+k=0\;\;\;\;\;\;k=a_2x_2'+a_3x_3'+\cdots+a_nx_n' $$ where $x_i'$ is the assigned value. This equation has only one solution, so the general equation has $p^{n-1}$ solutions, not $p^n-n$. Is there any other way to find this value? Or is my solution OK? Thanks. Thanks @ancientmathematician, I tried the rank-nullity theorem. Here is my result: $A$ is the linear transformation $A:\mathbb{F}_p^n\rightarrow\mathbb{F}_p$, $$ A(x)=\sum_{i=1}^na_ix_i $$ The domain of $A$ is all of $\mathbb{F}_p^n$, so $$ \mathrm{dom}{A}=\mathbb{F}_p^n\;\Rightarrow\; \mathrm{dim}\,\mathrm{dom}{A}=n $$ The image of $A$ is all of $\mathbb{F}_p$, so $$ \mathrm{im}{A}=\mathbb{F}_p\;\Rightarrow\; \mathrm{dim}\,\mathrm{im}{A}=1 $$ From the rank-nullity theorem $$ \mathrm{dim}\,\mathrm{dom}{A}=\mathrm{dim}\,\mathrm{ker}{A}+\mathrm{dim}\,\mathrm{im}{A} $$ I can figure out the dimension of the kernel: $$ n=\mathrm{dim}\,\mathrm{ker}{A}+1\Rightarrow\mathrm{dim}\,\mathrm{ker}{A}=n-1 $$ The kernel of the transformation is the set of vectors mapped to 0. So I understand I'm looking for $|\mathrm{ker}{A}|$. I'm not sure how to find the number of elements in the kernel. Here is my try: I know the dimension of the kernel and I know that the kernel is a linear space, so a basis of this set has $n-1$ elements. I can take any basis, so I'll take the 'default' one $$ (1,0,0,\cdots,0)\;(0,1,0,\cdots,0)\;(0,0,1,\cdots,0)\cdots(0,0,0,\cdots,1) $$ All vectors have $n-1$ coordinates. I can generate $p^{n-1}$ vectors from this set, so the equation from the beginning of the exercise has $p^{n-1}$ solutions. Is that correct?
Magma function for modulo irreducible polynomial So, I am trying to make a program in Magma which returns the value table of a given function F over a field $GF(2^n)$. To do so I need an irreducible polynomial. For example, I've considered $GF(2^3)$ and the irreducible polynomial $p(x)=x^3+x+1$. My program started like this: F<a>:=GF(2^3); for i in F do i mod a^3+a+1; end for; The 'mod' apparently only works with integers; is there a polynomial version for this?
The mod function works for polynomials, provided they are recognized by Magma as being elements in a polynomial ring. For example, put F := GF(2); and P<a> := PolynomialRing(F); and you will get the results you want if you ask for a^i mod a^3+a+1;. Alternately, specific for your finite field example, you can put F<a> := GF(2^3); which you can verify is constructed with $x^3+x+1$ as the minimal polynomial for a (ask for DefiningPolynomial(F);). By default, Magma will print elements of F as powers of a, but if you put in the command SetPowerPrinting(F,false); it will give you a reduced polynomial in a instead. So then you can just type a^i; and it will return this field element as the remainder when divided by $a^3+a+1$. (Note that if you type both F<a> := GF(2^3); and P<a> := PolynomialRing(F); then you have introduced some confusion as to whether a is a finite field element, or an indeterminate in your polynomial ring. You should really avoid doing this.)
I managed to make the program.
F<a>:=GF(2^3);
P<a>:=PolynomialRing(F);
function f(x) return x^5; end function;
A:=[];
for i in [1..6] do Append(~A, (a^i mod (a^3+a+1))); end for;
A:=Reverse(A); A:=Append(A,1); A:=Append(A,0); A:=Reverse(A);
F:=[];
for i in [1..6] do Append(~F,((f(a))^i mod (a^3+a+1))); end for;
F:=Reverse(F); F:=Append(F,1); F:=Append(F,0); F:=Reverse(F);
BinA:=[];
for i in A do for j in [a^2,a,1] do if j in Terms(i) then Append(~BinA,1); else Append(~BinA,0); end if; end for; end for;
BinA:=Matrix(3,BinA);
BinF:=[];
for i in F do for j in [a^2,a,1] do if j in Terms(i) then Append(~BinF,1); else Append(~BinF,0); end if; end for; end for;
print BinF;
conditional probability problem A prerequisite for students to take a probability class is to pass calculus. A study of correlation of grades for students taking calculus and probability was conducted. The study shows that 25% of all calculus students get an A; and that students who had an A in calculus are 50% more likely to get an A in probability as those who had a lower grade in calculus. If a student who received an A in probability is chosen at random, what is the probability that he/she also received an A in calculus? My Attempt: I know $\Pr(A\mid B)$ with $A$ being event that the student gets an $A$ in calculus and $B$ being the event that the student gets an $A$ in probability is $\Pr(A \mid B)=\frac{Pr(A, B)}{\Pr(B)}$ but, I can't seem to put the givens into that form.
You know that $$ P(A|B)=\frac{P(A\cap B)}{P(B)} $$ and $$ P(B|A)=\frac{P(A\cap B)}{P(A)} $$ thus $$ P(A|B)=\frac{P(A\cap B)}{P(B)}=\frac{P(B|A)P(A)}{P(B)}=\frac{P(A)}{P(B)}P(B|A) $$
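To make the computation concrete for the original problem, here is a small worked example (an addition, not part of the answer; it assumes the intended reading that an A-in-calculus student is $1.5$ times as likely to get an A in probability as everyone else, and the unknown base rate cancels out):

# P(A) = 0.25, P(B|A) = 1.5 * P(B|not A); the unknown base rate q cancels.
pA = 0.25
q = 0.1                                # P(B | not A): any value in (0, 2/3] gives the same posterior
pB_given_A = 1.5 * q
pB = pB_given_A * pA + q * (1 - pA)    # law of total probability
pA_given_B = pB_given_A * pA / pB      # Bayes' theorem
print(pA_given_B)                      # 0.3333... = 1/3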
That is a formula for finding P(A|B), but not the only formula. You should consider using Bayes' theorem, which would allow you to turn probabilities like P(B|A) into your desired P(A|B).
Dense set in the unit circle- reference needed For $x \notin \pi\mathbb Q$, that is, a real $x$ that is not a rational multiple of $\pi$, consider the set $$\{(\cos nx,\sin nx):n = 0,1,2,...\}.$$ It is known that this set is dense in the unit circle $B(0,1)$ of $\mathbb R^2$. Could someone please give me a proof or reference for a proof?
From $x\notin \pi{\mathbb Q}$ it follows that $e^{ikx}\ne1$ for all $k\in{\mathbb Z}\setminus\{0\}$, and this implies that the numbers $e^{inx}\in S^1$ $(n\geq0)$ are all different. Assume that a point $\zeta\in S^1$ and an $\epsilon>0$ is given. Since $S^1$ has finite length we then can find two numbers $n_1<n_2$ with $|e^{in_2x}-e^{in_1 x}|< \epsilon$. Put $n':=n_2-n_1$. Then $$\bigl|e^{in'x} -1\bigr|=|e^{in_2x}-e^{in_1 x}|<\epsilon\ .$$ Put $n_k:=k\,n'$ $(k\geq0)$. Then the successive points $e^{i n_k x}\in S^1$ $(k\geq0)$ are a distance $<\epsilon$ apart. It follows that there is a $k\geq0$ with $|e^{in_k x}-\zeta|<\epsilon$.
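A quick numerical illustration of this density (my own addition, taking $x=1$, which is not a rational multiple of $\pi$, and an arbitrary target point): the loop stops once some $e^{inx}$ lands within the prescribed $\epsilon$ of the target.

import cmath

x = 1.0                               # x = 1 is not a rational multiple of pi
target = cmath.exp(2.5j)              # an arbitrary point zeta on the unit circle
eps = 1e-3

n, z = 0, 1 + 0j
while abs(z - target) >= eps:
    n += 1
    z = cmath.exp(1j * n * x)
print(n, abs(z - target))             # first n with |e^{inx} - zeta| < eps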
You can prove that every rational number other than 1/2 comes from an irrational angle. $a = \cos(\pi d/z) + i \sin(\pi d/z)$ solves some equation of the form $a^z + 1 = 0$. Prove that $a$ can never be rational: writing $$a^n = \sum^{z-1}_{e=0} x_i a^e,$$ if $a$ were a fraction $p/q$, multiply through by $p^n$; then the LHS is not a multiple of $q$, and the RHS is, unless $q=1$. Therefore $a$ can never be rational unless it is a half-integer, which means that the rational numbers arise from irrational angles. Since the rational numbers are dense on the circle, the irrational angles are too.
Show that $a_n^{1/n}$ converges as $n\to\infty$ Suppose that a sequence $\{a_n : n = 1,2,\cdots\}$ of real numbers is such that $a_n\geq 1$ for all $n \geq 1$ and $$ a_{n+m}\leq a_na_m\quad \rm{for~ all}~n\geq1, m\geq1. $$ Show that $a_n^{1/n}$ converges as $n\to\infty$ My solution: By taking log, we have $$ \log{a_{n+m}}\leq \log{a_na_m}=\log{a_n}+\log{a_m}\quad \rm{for~ all}~n\geq1, m\geq1. $$ So we have \begin{align} \log{a_2}&\leq \log{a_1}+\log{a_1}=2\log{a_1} \\ \log{a_3}&\leq \log{a_1}+\log{a_2}\leq3\log{a_1} \\ \cdots \\ \log{a_n}&\leq n\log{a_1} \end{align} So, $$ \log{a_n}^{1/n}=\frac{1}{n}\log{a_n} $$ So I can prove $\log a_n$ is bounded, but I cannot prove it is monotonic, which would be sufficient for $\log a_n$ to converge. How do I deal with this? Or if we cannot prove monotonicity, how do we prove the limit exists?
we have $$a_{n+m}\leq a_na_m\quad $$ taking $m=n$, we get $$a_{2n}\leq a_n^2\quad $$ taking square roots on both sides, we get $$ a_{2n}^{1/2}\leq a_n\quad$$ taking $n^{th}$ roots on both sides, we get $$ a_{2n}^{1/2n}\leq a_n^{1/n}\quad$$ let $$f(n)=a_n^{1/n}\quad$$ then, we have $$f(2n)\leq f(n)\quad$$ which is a monotonic decreasing sequence and hence converges
expected value for max of a set of random variables we randomly pick $M$ elements from a set of $N$ real numbers, $A=\{ a_1,a_2,\ldots,a_N \}$. Then we sort these $M$ numbers, what is the expected value for the largest element? (let say $N=1000000$, $M=1000$), I am interested in the solution for general $A$, and/or the case that elements of $A$ are taken from a normal distribution.
Note: Since you're concerned with subsets, I'm going to assume that you're choosing your elements without repetition. As per your question, I'll also assume that you're picking your elements with uniform probability (ie completely at random). Adding a bit of notation, suppose that our chosen subset is $B=\{b_1,b_2,\cdots,b_M\}$ with $b_1<b_2<\cdots <b_M$. If we assume also that $a_1<a_2<\cdots<a_N$, then clearly $P(b_M<a_M)=0$, since the smallest $M$ elements that we can choose from $A$ are $\{a_1,a_2,\cdots,a_M\}$. Now, $P(b_M=a_M)=\frac{1}{{N \choose M}}$, since the only possible such subset is $\{a_1,a_2,\cdots, a_M\}$. To compute $P(b_M=a_{M+1})$, the only way this can happen is if $\{b_1,\cdots, b_{M-1}\}\subset \{a_1,a_2,\cdots, a_M\}$, hence $P(b_M=a_{M+1})=\frac{{M \choose M-1}}{{N \choose M}}$. And by a similar argument, for $1\leq i \leq N-M$, $P(b_M=a_{M+i})=\frac{{M +(i-1) \choose {M-1}}}{{N \choose M}}$. So, from there, the expected value is $$\frac{1}{{N \choose M}}\sum\limits_{i=0}^{N-M}a_{M+i} {{M+(i-1)} \choose {M-1}}$$
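A small check of the final formula against simulation (my own addition, not part of the answer; it uses the summand $a_{M+i}$ and a small case so brute force is cheap):

import random
from math import comb

random.seed(1)
N, M = 12, 4
a = sorted(random.uniform(0, 100) for _ in range(N))   # a_1 < ... < a_N (0-indexed in code)

# exact expectation: (1 / C(N,M)) * sum_{i=0}^{N-M} a_{M+i} * C(M+i-1, M-1)
exact = sum(a[M + i - 1] * comb(M + i - 1, M - 1) for i in range(N - M + 1)) / comb(N, M)

# Monte Carlo: draw M distinct elements uniformly, take the largest
trials = 100000
mc = sum(max(random.sample(a, M)) for _ in range(trials)) / trials

print(exact, mc)    # the two values should be close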
Hint 1: For every nonnegative random variable $X$, $\mathrm E(X)=\int\limits_0^{+\infty}\mathrm P(X\geqslant x)\mathrm dx$. Hint 2: If $M=\max\{\xi_1,\xi_2,\ldots,\xi_n\}$, then, for every $x$, $[M\leqslant x]=\bigcap\limits_{k=1}^n[\xi_k\leqslant x]$.
Solve $a^2 - 2b^2 - 3 c^2 + 6 d^2 =1 $ over integers $a,b,c,d \in \mathbb{Z}$ Are we able to completely solve this variant of Pell equation? $$ x_1^2 - 2x_2^2 - 3x_3^2 + 6x_4^2 = 1 $$ This has an interpretation related to the fundamental unit equation of $\mathbb{Q}(\sqrt{2}, \sqrt{3}) = \mathbb{Q}[x,y]/(x^2 - 2, y^2 - 3)$ as well as various irreducible quartics. Is this the same as solving three separate Pell equations? \begin{eqnarray*} x^2 - 2y^2 &=& 1 \\ x^2 - 3y^2 &=& 1 \\ x^2 + 6y^2 &=& 1 \tag{$\ast$} \end{eqnarray*} Our instinct suggests there should be three degrees of freedom here, and setting different variables to zero we could find two of these (the third equation has no solutions over $\mathbb{R}$). Does that generate all the solutions? Perhaps I should remark this quadratic form is also a determinant $$ a^2 - 2b^2 - 3c^2 + 6d^2 = \det \left[ \begin{array}{cc} a + b \sqrt{2} & c - d \sqrt{2}\\ 3(c + d \sqrt{2}) & a - b \sqrt{2} \end{array} \right]$$ This might not even contain Oscar's solution. Extending Keith's solution, we could have: $$ \left[ \begin{array}{cc} a + b \sqrt{2} & c - d \sqrt{2}\\ 3(c + d \sqrt{2}) & a - b \sqrt{2} \end{array} \right] = \left[ \begin{array}{cc} 3 + 1 \sqrt{2} & 2 - 1 \sqrt{2}\\ 3(2 + 1 \sqrt{2}) & 3 - 1 \sqrt{2} \end{array} \right]^n $$
Equation given above is shown below: $a^2-2b^2-3c^2+6d^2=1$ As shown by "Individ", what the OP can do is to take $(a,b,c,d)$ as shown below: $a=p(6w^2+4w+3)$, $b=p(2w^2+6w+1)$, $c=p(4w^2+4w+2)$, $d=p(2w^2+4w+1)$, where $p= 1/(2w^2-1)$. For suitable values of $w$ we get the numerical solutions below: $w=1$: $(a,b,c,d)=(13, 9, 10, 7)$; $w=3/4$: $(a,b,c,d)= (75, 53, 58, 41)$; $w=5/7$: $(a,b,c,d)= (437, 309, 338, 239)$
$$x^2-2y^2-3z^2+6q^2=1$$ Take any solution of $a^2-2b^2=1$ and any solution of $c^2-2d^2-3k^2+6t^2=1$. Then $$x=ac\pm{2bd}$$ $$y=ad\pm{bc}$$ $$z=ak\pm{2bt}$$ $$q=at\pm{bk}$$
Prove without Liouville's theorem: $f$ is entire, $\forall z \in \mathbb C: |f(z)| \leq |z|$, then $f=a \cdot z$, $a \in \mathbb C, |a| \leq 1$ Prove without Liouville's theorem: $f$ is entire, $\forall z \in \mathbb C: |f(z)| \leq |z|$, then $f=a \cdot z$, $a \in \mathbb C, |a| \leq 1$ What I tried so far: $f$ is entire, so $f(z)= \Sigma _{n=0}^\infty a_nz^n$, and then $|\Sigma _{n=0}^\infty a_nz^n| \leq |z| \Rightarrow |\frac {\Sigma _{n=0}^\infty a_nz^n}{z}| \leq 1 \Rightarrow |\Sigma _{n=0}^\infty a_nz^{n-1}| \leq 1$ How can I continue from here? Thank you in advance for your assistance!
Note that $|f(z)| \leq |z|$ implies $|f(0)| \leq |0|$ and thus $f(0) = 0$. If $f(z) = \sum \limits _{n=0} ^\infty a_n z^n$, this means that $a_0 = 0$, so $f(z) = z g(z)$, where $g(z) = \sum \limits _{n=1} ^\infty a_n z^{n-1}$. By hypothesis, you get $|g(z)| \leq 1$. Now, you don't want to use Liouville's theorem, so we turn to Cauchy's inequality: for any $r>0$ and $n\geq1$ we have $|a_{n+1}| = \frac{|g ^{(n)} (0)|}{n!} \leq {\sup \limits _{|z| = r} |g(z)| \over r^n} = {1 \over r^n}$. Since $r>0$ may grow arbitrarily large, this shows that $|a_{n+1}|$ is smaller than every positive number when $n\geq1$, and thus $a_{n+1} = 0$ for $n\geq1$. So, $f(z) = a_0 + a_1 z$. But we have proved right at the beginning that $a_0 = 0$, so $f(z) = a_1 z$.
Possible answer: Set $g(z)=\frac {f(z)}{z}$ for $z \neq 0$ and $g(z)=A, A \in \mathbb C$ for $z=0$. So we get that $g(z)$ is entire. Let there be some $z_0 \in \mathbb C$, and circle with radius $R$ centered in $z_0$, $C_R$. So by Cauchy Integral Formula: $g'(z_0)= \frac {1}{2 \pi i} \int_{C_R} \frac{g(z)}{(z-z_0)^2}$ $$|g'(z_0)|= |\frac {1}{2 \pi i} \int_{C_R} \frac{g(z)}{(z-z_0)^2}| = |\frac {1}{2 \pi i} \int_{C_R} \frac{ \frac {f(z)}{z}}{(z-z_0)^2}|= |\frac {1}{2 \pi i} \int_{C_R} \frac{f(z)}{z(z-z_0)^2}| \leq \frac {1}{2 \pi } \cdot 2 \pi R \cdot \frac {1}{R^2} \cdot max_{z \in C_R}|\frac {f(z)}{z}|= \frac {1}{R}\cdot 1= \frac {1}{R} $$ So we'll take circle $C_R$ with infinite radius, and we get that $g'(z_0)=0$ for every $z_0$, so $g(z)$ is constant. So $g(z)=a$ for some $a \in \mathbb C$, so $f(z)=a \cdot z$, and $|a| \leq 1$.
Prove that if $ p | x^p + y^p $ where $p$ is a prime number greater than $2$, then $p^2 | x^p + y^p$ I was trying to solve the following problem recently: Prove that if $ p | x^p + y^p $ where $p$ is a prime number greater than $2$, then $p^2 | x^p + y^p$. Here $x$ and $y$ are both integers. $a|b$ reads $a$ divides $b$ or $b$ is divisible by $a$. I was able to solve the problem, but through arguably tedious means, my solutions is here. I feel like there are more elegant solutions available, but I cannot think of any at present, which is why I decided to post this as a question here. Thanks
Using Fermat's Little Theorem, $$a^p\equiv a\pmod p$$ for any integer $a$ $$p|(x^p+y^p)\iff p\mid(x+y)$$ Let $x+y=kp$ For odd prime $p,$ $$x^p+y^p=x^p+(kp-x)^p=x^p-(x-kp)^p=\binom p1x^{p-1}\cdot kp+\text{ terms divisible by }p^2$$ $$\equiv0\pmod{p^2}$$
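A brute-force confirmation of the statement for small values (an addition, purely as a sanity check of the claim, not part of the proof):

# For every odd prime p and small x, y: if p | x^p + y^p then p^2 | x^p + y^p.
for p in (3, 5, 7, 11):
    for x in range(1, 60):
        for y in range(1, 60):
            s = x**p + y**p
            if s % p == 0:
                assert s % (p * p) == 0, (p, x, y)
print("verified for p in {3, 5, 7, 11} and 1 <= x, y < 60")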
$p|x^p+y^p$ given since $p$ is a prime number $> 2$, it is an odd prime number For any odd power $p$, $x^p+y^p$ has a factor $(x+y)$ Using Fermat's little theorem $$x^p \equiv x \mod p\\ y^p \equiv y\mod p\\ x^p+y^p \equiv (x+y) \equiv 0\mod p$$ since $p|x^p+y^p$ Therefore $p|(x+y)$, so $(x+y) = m*p$ where $m$ is an integer $p|x^p+y^p$ and $(x+y)|x^p+y^p$ we can say $x^p+y^p = k*p*(x+y)$ where k is an integer $=k*p*m*p = kmp^2$ So $x^p+y^p= kmp^2$ where $k$ and $m$ are integers. so $p^2|x^p+y^p$ Proved
Do I treat $y$ as a variable or as a constant when differentiating $dy/dx$? I want to find $\dfrac{dy}{dx}$ for: $x^3 +y^3=6xy$. I don't understand: when differentiating $y^3$, do I treat it like a constant or do I treat it like a term?
It helps to remember here that you're ASSUMING that there's a function, which I'll call $Y$ to keep things distinct, such that $$ x^3 + Y(x)^3 = 6 \cdot x \cdot Y(x) $$ and since the expressions on the left and right side of the equality both are functions of $x$, you can differentiate them. For the right hand side, for instance, you get $$ deriv = 6 Y(x) + 6x Y'(x) $$ But this is almost always written $$ deriv = 6y + 6x \frac{dy}{dx} $$ completely hiding the dependence of $Y$ on $x$.
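If it helps to check the computation by machine, here is a small sympy sketch (my own addition, not from the answer): it finds $dy/dx$ for $x^3+y^3=6xy$ via $-F_x/F_y$ with $F(x,y)=x^3+y^3-6xy$, which is equivalent to solving the differentiated equation above for $dy/dx$.

import sympy as sp

x, y = sp.symbols('x y')
F = x**3 + y**3 - 6*x*y          # the curve F(x, y) = 0 (folium of Descartes)

# implicit differentiation: dy/dx = -F_x / F_y
dydx = sp.simplify(-sp.diff(F, x) / sp.diff(F, y))
print(dydx)                      # equivalent to (2*y - x**2)/(y**2 - 2*x)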
Think of it as separating all the elements dependent on $x$ into functions (i.e. assuming those functions are differentiable at $x$). Then take the limit of each change in a function ($\Delta f(x)$) as $\Delta x \to 0$, which then will just work as simple fractions instead of Lagrange's notation. $$\lim\limits_{\Delta x \to 0} \frac{\Delta f(x)}{\Delta x}$$ So for $g(x) = x^3$ and $h(y) = y^3$ it becomes $\frac{\Delta g(x)}{\Delta x}$ + $\frac{\Delta h(y)}{\Delta y} * \frac{\Delta y}{\Delta x}$ and taking the limit will give you $\frac{dy}{dx}$
Show $\frac{1}{x^{2}}$ is not uniformly continuous on (0,1.5]. Show g(x) = $\frac{1}{x^{2}}$ is not uniformly continuous on (0,1.5]. attempt g(x) is not uniformly continuous if $$\exists \epsilon > 0 : \forall m > 0 \ \exists x,y \in (0,1.5], |x-y| < m, |g(x)-g(y)| \geq \epsilon$$ $ |g(x)-g(y)| = |\frac{1}{x^{2}} - \frac{1}{y^{2}}| = |\frac{y^{2}-x^{2}}{x^{2}y^{2}}|$ $$$$ $=\frac{|x^{2}-y^{2}|}{x^{2}y^{2}} = \frac{|x-y||x+y|}{x^{2}y^{2}} = \frac{|x-y||x-y+2y|}{x^{2}y^{2}} \geq \frac{|x-y|(|x-y|-2|y|)}{x^{2}y^{2}} = \frac{|x-y|(1-2|y|)}{x^{2}y^{2}}$ if m = 1 $$$$ and if $\frac{|x-y|(1-2|y|)}{x^{2}y^{2}}$ is to be made less than $\epsilon$ then $|x-y| < \frac{x^{2}y^{2}\epsilon}{1-2|y|}$ and since $x^{2}y^{2}$ is smaller and smaller for x and y close to 0, no single number m works for all x and y and so g(x) = $\frac{1}{x^{2}}$ is not uniformly continuous on (0,1.5].
Let $\epsilon =1$, assume to get contradiction that $g(x)=\frac{1}{x^2}$ is uniformly continuous on $(0,1.5]$. By the assumption exists $\delta > 0$ s.t. $\forall x,y\in (0,1.5],\mid x-y\mid<\delta \Rightarrow \mid g(x)-g(y) \mid <\epsilon$. Let us observe the following sequences, $x_n=\frac{1}{n}$ and $y_n=\frac{1}{2n}$. It is easy to see that $\mid x_n-y_n\mid=\mid \frac{1}{n}-\frac{1}{2n}\mid=\mid \frac{1}{2n}\mid \xrightarrow{n\rightarrow \infty} 0$ thus for a large enough $n$ we have $\mid x_n-y_n\mid<\delta$. But we also have $\mid g(x_n)-g(y_n)\mid=\mid \frac{1}{(\frac{1}{n})^2}-\frac{1}{(\frac{1}{2n})^2}\mid=\mid n^2-4n^2\mid=3n^2\xrightarrow{n\rightarrow \infty}\infty$. For large enough $n$ we have $\mid x_n-y_n\mid<\delta$ and $\mid g(x_n)-g(y_n)\mid >1=\epsilon$
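A numerical illustration of the two sequences used above (my addition): the inputs get arbitrarily close while the outputs drift apart.

def g(x):
    return 1 / x**2

for n in (10, 100, 1000, 10000):
    xn, yn = 1 / n, 1 / (2 * n)
    # |x_n - y_n| -> 0 while |g(x_n) - g(y_n)| = 3 n^2 -> infinity
    print(n, abs(xn - yn), abs(g(xn) - g(yn)))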
Perhaps I'm greatly mistaken, but I believe that it IS uniformly continuous on (0,1.5], Since f(x) is (in the usual sense) continuous on (0,1.5], and (0,1.5] is compact, then f(x) is uniformly continuous on (0,1.5]. (This is a basic theorem regarding uniform continuity and compactness)
Finding Inverse Laplace Transform using Taylor Series Find the inverse Laplace transform $F(t)=\mathcal{L}^{-1}(s^{-\frac{1}{2}}e^{-\frac{1}{s}})$ using each of the following techniques: Expand the exponential in a Taylor series about s=∞, and take inverse Laplace transforms term by term (this is allowable since the series is uniformly convergent.). Sum the resultant series in terms of elementary functions.
We know that $$ \displaystyle \mathcal{L} \{t^{n-1}\} = \frac{\Gamma(n)}{s^{n}}, n>0, s>0 $$ $$ \frac{1}{s^n} = \frac{\displaystyle \mathcal{L} \{t^{n-1}\}}{\Gamma(n)} $$ $$ \displaystyle \mathcal{L^{-1}} \{\frac{1}{s^n}\} = \frac{t^{n-1}}{\Gamma(n)} $$ Therefore $$ \displaystyle \mathcal{L^{-1}} \{ s^{-1/2} e^{-1/s} \} = \displaystyle \mathcal{L^{-1}} \{ \frac{1}{s^{1/2}} - \frac{1}{s^{3/2}} + \frac{1}{(2!s^{5/2})} + ... + (-1)^{n}\frac{1}{n! s^{n+\frac{1}{2}}} \} $$ $$ = \sum_{n=0}^{\infty} \frac{(-1)^{n}}{n!} \displaystyle \mathcal{L^{-1}} \{\frac{1}{s^{n+\frac{1}{2}}}\} $$ $$ = \sum_{n=0}^{\infty} \frac{(-1)^{n}}{n!} \frac{t^{n-\frac{1}{2}}}{\Gamma(n+\frac{1}{2})} $$ But after this I don't know how to simplify
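A possible way to finish the simplification (this last step is an addition, not part of the original answer; it uses the standard value $\Gamma\!\left(n+\tfrac12\right)=\frac{(2n)!\sqrt{\pi}}{4^n\, n!}$): $$ \sum_{n=0}^{\infty} \frac{(-1)^{n}}{n!} \frac{t^{n-\frac{1}{2}}}{\Gamma(n+\frac{1}{2})} = \frac{1}{\sqrt{\pi t}}\sum_{n=0}^{\infty} \frac{(-1)^{n}\left(2\sqrt{t}\right)^{2n}}{(2n)!} = \frac{\cos(2\sqrt{t})}{\sqrt{\pi t}}, $$ which agrees with the closed form quoted in the other answer.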
$\frac{\cos(2\sqrt{t})}{\sqrt{\pi t}}$
Greatest Lower Bound in $\mathbb{R}$ as a corollary of the LUB? I can assume as fact that $\mathbb{R}$ is an ordered field in which every non-empty subset that is bounded above has a least upper bound. My question is whether I can also assume as fact that every non-empty subset that is bounded below also has a greatest lower bound. I'm trying to show that if $A,B \subseteq \mathbb{R}$, and $A,B \neq \emptyset$, then there exists a least upper bound for $A$ (I've proven this), and a greatest lower bound for $B$ (this is what I'm currently concerned with). Any tips, or helpful definitions?
HINT: If $A\subseteq\Bbb R$ is bounded below, then $-A=\{-a:a\in A\}$ is bounded above. Alternative HINT: Let $B$ be the set of lower bounds for $A$. $B$ is bounded above, and its supremum is ... ?
Yes, if every nonempty set that is bounded above has a least upper bound, then every nonempty set that is bounded below has a greatest lower bound. I know of two different ways to prove this. Suppose $B$ is nonempty and bounded below. Then the set $-B=\{-x:x\in B\}$ is nonempty and bounded above, so it has a least upper bound, call it $u$. Then $-u$ is the greatest lower bound of $B$. Never mind, just read Brian's answer!
2 seemingly isomorphic groups Take the following two groups: $G_1$ $$\begin{array}{c|c|c|c|c} \cdot & e & a & b& c\\\hline e & e & a & b & c \\\hline a &a & e & c& b\\\hline b & b & c & e & a \\\hline c & c & b & a & e \end{array}$$ $G_2$ $$\begin{array}{c|c|c|c|c} \cdot & e & a & b& c\\\hline e & e & a & b & c \\\hline a &a & e & c& b\\\hline b & b & c & a& e \\\hline c & c & b & e&a \end{array}$$ In $G_1$ there are 3 normal subgroups, $\{e,a\},\{e,b\},\{e,c\}$, each one leading to isomorphically equivalent factor groups. $$G_1 \cong \mathbb{Z} / 2\mathbb{Z} \times \mathbb{Z} / 2\mathbb{Z} $$ $G_2$ has one normal subgroup $\{e,a\}$ which leads to $$G_2 \cong \mathbb{Z} / 2\mathbb{Z} \times \mathbb{Z} / 2\mathbb{Z} $$ Which seems to imply there is an isomorphism between them, but there clearly isn’t. Where am I wrong?
In $G_2$ we have $c=b^{-1}$ and $c^2=a$ so that $b^{-2}=a=b^2$, and hence $b^4=e$, but $b^2=a\neq e$. Hence $G_2$ has an element of order $4$, so that $G_2\cong C_4$, and hence is not isomorphic to $C_2\times C_2\cong G_1$.
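A quick computational check of this (my own addition): encode the $G_2$ table from the question and compute element orders; $b$ and $c$ come out with order $4$, so $G_2\cong C_4$.

# Cayley table of G_2, read off from the question: table[x][y] = x*y
table = {
    'e': {'e': 'e', 'a': 'a', 'b': 'b', 'c': 'c'},
    'a': {'e': 'a', 'a': 'e', 'b': 'c', 'c': 'b'},
    'b': {'e': 'b', 'a': 'c', 'b': 'a', 'c': 'e'},
    'c': {'e': 'c', 'a': 'b', 'b': 'e', 'c': 'a'},
}

def order(x):
    # smallest k >= 1 with x^k = e
    k, acc = 1, x
    while acc != 'e':
        acc = table[acc][x]
        k += 1
    return k

print({x: order(x) for x in 'eabc'})   # expected: e -> 1, a -> 2, b -> 4, c -> 4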
No reason to think the groups are isomorphic. In fact, they have different &quot;identity-skeletons&quot;, so can't be.
How many ways in which distinct people can get off a train I've been learning probability recently but I'm having trouble solving this question: Suppose you have 50 people on a train and you have 4 stations you can get off at (call them Stations 1,2,3,4). If no one boards the train at any of these stations, in how many ways can the 50 people get off the train, assuming the people are distinguishable (I care who gets off where)? If I didn't care who gets off where, then using stars and bars it would simply be 54 choose 3. But the problem for me arises in thinking when they're distinguishable. Right now my logic was for each of the 54 choose 3 ways, you can permute the groups among themselves in 4! ways. I'm pretty sure that's not right, but I don't know where to go.
We are counting the functions $f$ from a set of size $50$ to $\{1,2,3,4\}$. To see this, for any person $p$ let $f(p)$ be the station she gets off at. There are $4^{50}$ such functions.
This is a problem of the form $x_1 + \dots + x_k = n$, where $k=4$ and $n=54$. A typical ball-picking problem, which is unordered and with replacement. The answer is therefore $$\binom{n+k-1}{k}.$$ More information here: http://mathworld.wolfram.com/BallPicking.html .
Simple Combinations Binomial There is a little sum I am stuck with. Find the value of $${1 \choose 0}+{4 \choose 1}+{7 \choose 2} +\ldots+{3n+1 \choose n}$$ where ${n \choose r}$ is the usual combination. A little hint will be fine.
Find the coefficient of $x^{0}$ in $\displaystyle (1+x)+\frac{(1+x)^{4}}{x}+\frac{(1+x)^{7}}{x^{2}}+\ldots$ It's forming a geometric series.
Scalar Multiplication, why do I keep getting this bizarre result? I have two vectors, u and v: u = [2/5,-1,1/5], v = [-1,5/2, -1/2]. I need to prove that one is a scalar multiple of the other, in such a way that u = kv, k being the scalar variable. The intuitive method would be to look at it and see what happens when you multiply 2/5 by v and suddenly it equals u. However, I want to do things by the book and FIND the scalar multiple using math. So, I do the standard formula (u*v)/|v|. However, I keep getting -3/(sqrt(30)/2), which doesn't line up with the actual result. What I have tried: u*v is just a dot product, right? So that ends up giving you -(2/5) + -(5/2) + -(1/10), which ends up being -30/10 = -3. Now the denominator, which is the magnitude of v, or |v|, is sqrt((-1)^2 + (5/2)^2 + (-1/2)^2), which netted me sqrt(30)/2. This isn't the result that I need or that MATLAB says is correct; a little help would go a long way here.
You say that you want to "find the scalar using math." $u = (u_1,u_2, u_3) , v = (v_1,v_2, v_3)$ It is given that one vector is a scalar multiple of the other. $u = \lambda v\\ \lambda = \frac {u_1}{v_1} = \frac {u_2}{v_2} = \frac {u_3}{v_3}$ Is a perfectly valid approach "using math" If "by the book," means using the dot product... $u = \lambda v\\ u\cdot v = \lambda v\cdot v = \lambda \|v\|^2\\ \lambda = \frac {u\cdot v}{\|v\|^2}$ $\lambda = \frac {-\frac {2}{5} - \frac {5}{2} - \frac {1}{10}}{1 + \frac {25}{4} + \frac 14} = -\frac {2}{5}$
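A small numerical check of both routes, the componentwise ratio and the dot-product formula (my addition, not part of the answer):

import numpy as np

u = np.array([2/5, -1, 1/5])
v = np.array([-1, 5/2, -1/2])

lam = u @ v / (v @ v)             # lambda = (u . v) / ||v||^2
print(lam)                        # -0.4 = -2/5
print(np.allclose(u, lam * v))    # True: u really is lambda * v
print(u / v)                      # componentwise ratios, all equal to -2/5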
Multiplying the first vector by the scalar $-\frac{5}{2}$ yields the second vector. $u = kv$ So $$k = \frac{-1}{(\frac{2}{5})} = \frac{(\frac{5}{2})}{-1} = \frac{-(\frac{1}{2})}{(\frac{1}{5})} = -\frac{5}{2}$$
About Exact Differentials This is an excerpt from my textbook: Consider the general differential containing two variables, where $f = f(x,y)$, $$ d f=A(x, y) d x+B(x, y) d y $$ We see that $$ \frac{\partial f}{\partial x}=A(x, y), \quad \frac{\partial f}{\partial y}=B(x, y) $$ and, using the property $f_{x y}=f_{y x},$ we therefore require $$ \frac{\partial A}{\partial y}=\frac{\partial B}{\partial x} $$ This is in fact both a necessary and a sufficient condition for the differential to be exact. I see why this is a necessary condition, but why is it a sufficient condition?
Assume that $A,B$ and their first order partial derivatives are continuous on a simply connected open set $D$. Given $\frac{\partial A}{\partial y}=\frac{\partial B}{\partial x}$, if there exists a function $h(x,y)$ such that $d(h(x,y))=Adx+Bdy\tag{A}$, then we are done. Let's denote $h(x,y)$ by $h$. Consider $\frac{\partial h }{\partial x}=A$ and $\frac{\partial h }{\partial y}=B$. Let $(a,b)$ and $(x,y)\in D$. From $\frac{\partial h }{\partial x}=A$, we have: $h=\int_{x=a}^{x} A\partial x+g(y)\tag{1}$ Therefore, by $\frac{\partial h }{\partial y}=B$, we get $\frac{\partial }{\partial y}(\int_{x=a}^{x} A\partial x)+g'(y)=B\implies \int_{x=a}^{x}\frac{\partial }{\partial y}A \partial x+g'(y)=B\implies g'(y)=B-\int_{x=a}^{x}\frac{\partial B}{\partial x}\partial x=B-B(x,y)+B(a,y)=B(a,y)\tag{2}$ So, we have now shown that $g'(y)$ is free of $x$, that is we can find $g(y)$ from $(2)$ using FTC. $g(y)=g(b) +\int_{y=b}^{y} g'(y) dy=g(b) +\int_{y=b}^{y} B(a,y) dy$. So now $g(y)$ is known. We'll put this $g(y)$ into $(1)$ and we'll have $h$. And clearly the way $h$ was constructed implies that $(A)$ is satisfied by $h$.
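To see the construction in $(1)$ and $(2)$ at work on a concrete example, here is a sympy sketch (my own illustration, not part of the answer; it picks $A=2xy$, $B=x^2+3y^2$, which satisfy $A_y=B_x$):

import sympy as sp

x, y, a, b, t, s = sp.symbols('x y a b t s')
A = 2*x*y
B = x**2 + 3*y**2
assert sp.simplify(sp.diff(A, y) - sp.diff(B, x)) == 0   # exactness condition A_y = B_x

# step (1): h(x, y) = int_{t=a}^{x} A(t, y) dt + g(y)
h_part = sp.integrate(A.subs(x, t), (t, a, x))
# step (2): g'(y) = B minus the y-derivative of that partial integral; the x-dependence cancels
g_prime = sp.simplify(B - sp.diff(h_part, y))
g = sp.integrate(g_prime.subs(y, s), (s, b, y))
h = h_part + g

# check that dh = A dx + B dy
print(sp.simplify(sp.diff(h, x) - A), sp.simplify(sp.diff(h, y) - B))   # prints: 0 0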
I think the misunderstanding here comes from the language used, and not so much the differential calculus (might want to tag this as &quot;logic&quot; or something). This is a good resource to understand what necessary and sufficient mean in logic, separately and together. The most useful thing you can do to better understand these is think of counter examples: what is something that is necessary but not sufficient? A good example from that page is the following. P: Oxygen exists in the atmosphere. Q: Humans exist. Clearly P is necessary for Q. Having oxygen in the earth's atmosphere is a necessary condition for human life. Crucially, though, having oxygen will not guarantee human life – there are many other conditions needed for human life other than oxygen in the atmosphere. In this way, P is necessary but not sufficient for Q. Now consider this example. P: All men are mortal. Q: Socrates was mortal. In this case, P being true always means Q is true (we all know Socrates was a man). There is no possible way P could be true without Q being true. Equally, Q couldn't be true without P being true. P is necessary and sufficient for Q. TLDR: P necessary and sufficient for Q = P $\Leftrightarrow$ Q, P necessary but not sufficient for Q = Q $\Rightarrow$ P, P not necessary but sufficient = logically invalid. What your textbook is saying, then, is that that condition implies necessarily the exactness of the differential.
What does $\text{rank}(AB) = \text{rank}(A)$ imply? Suppose now I have two matrices $A$ and $B$ which are of size $m\times n$ and $n\times l$ respectively. For simplicity, assume $n<m<l$. Assume that I have $\text{rank}(AB) = \text{rank}(A) \ne \text{rank}(B)$. Then can I conclude that the matrix $B$ will have full row rank $n$? I know there is a special case that when $\text{rank}(A) = \text{rank}(B)$, for example the simplest case that the matrices are zero matrices, then the conclusion is not correct. So I would exclude this case. I am wondering that if I exclude the special case, is the conclusion above correct?
It implies $rank(B)$ is at least $rank(A)$. Since the rank of the matrix is the dimension of its image, then $\dim A(B(\mathbb{R}^\ell)) \leq \dim B(\mathbb{R}^\ell)$, hence $rank(AB) \leq rank(B)$. As noted in the comments, you cannot deduce that $B$ is full rank. Consider $$A = \begin{pmatrix} 1 & 0 & 0\end{pmatrix}, B = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{pmatrix}$$ Then $$AB = \begin{pmatrix} 1 & 0 & 0 \end{pmatrix}.$$ Hence $AB = A$, in particular $Rank(AB) = Rank(A)$. But $Rank(B) = 2 <3$. Edit, another example: For some reason, we want to consider $A$ $m \times n$ and $B$ $n \times \ell$ with $n < m < \ell$. Let $$A = \begin{pmatrix} 0 & 0 \\ 0 & 0 \\ 0& 0 \end{pmatrix}, B = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 0 &0 \end{pmatrix}$$ Then $A$ is rank 0, $AB$ rank 0, $B$ rank 1 < 2.
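These examples are easy to confirm numerically (my addition):

import numpy as np

A = np.array([[1, 0, 0]])
B = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 0]])
print(np.linalg.matrix_rank(A), np.linalg.matrix_rank(B), np.linalg.matrix_rank(A @ B))      # 1 2 1

A2 = np.zeros((3, 2))
B2 = np.array([[1, 0, 0, 0], [0, 0, 0, 0]])
print(np.linalg.matrix_rank(A2), np.linalg.matrix_rank(B2), np.linalg.matrix_rank(A2 @ B2))  # 0 1 0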
Measurable set in real numbers with arbitrary lebesgue density at some point I'm not sure if this is easy or not, but i can't see the solution (or that it is wrong!) Suppose that $\alpha \in (0,1)$ is given, Can you find a Lebesgue measurable set in $\mathbb{R}$, such that at point $0$, it has Lebesgue density $\alpha$?
Let $\alpha _n =\alpha \{\frac 1 n -\frac 1 {n+1}\}$ and choose a subset $E_n$ of $ (\frac 1 {n+1},\frac 1 n) \cup (-\frac 1 n,-\frac 1 {n+1})$ whose measure is $\alpha _n$. Let E be the union of the sets $E_n$. Then E has density $\alpha$.
Why study Algebraic Geometry? I'm going to start self-studying algebraic geometry very soon. So, my question is why do mathematicians study algebraic geometry? What are the types of problems in which algebraic geometers are interested? And what are some of the most beautiful theorems in algebraic geometry?
NEW ADDITION: a big list of freely available online courses on algebraic geometry, from introduction to advanced topics, has been compiled in this other answer. And a digression on motivation for studying the subject along with a self-learning guide of books is in this new answer. There are other similar questions, above all asking for references for self-studying, whose answers may be helpful: (Undergraduate) Algebraic Geometry Textbook Recomendations. Reference for Algebraic Geometry. Best Algebraic Geometry text book? (other than Hartshorne). My personal recommendation is that you start and get your motivation in the following freely available notes. They are extremely instructive, from the very basics of complex algebraic curves up to schemes and intersection theory with Grothendieck-Riemann-Roch, and prove of some of the theorems I mention below. They are excellent for self-study mixing rigor with many pictures! (sadly, something quite unusual among AG references): Matt Kerr - Lecture Notes Algebraic Geometry III/IV, Washington University in St. Louis. Andreas Gathmann - Class Notes: Algebraic Geometry, University of Kaiserslautern. For a powerful, long and abstract course, suitable for self-study, these notes have become famous: Ravi Vakil - Foundations of Algebraic Geometry, Stanford University. Also, there are many wonderful lecture videos for complete courses on elementary algebraic geometry, algebraic surfaces and beyond, by the one old master: Miles Reid - Lecture Courses on Video (WCU project at Sogang University), where you can really start at a slow pace (following his undergraduate textbook) to get up to the surface classification theorem. Now, Algebraic Geometry is one of the oldest, deepest, broadest and most active subjects in Mathematics with connections to almost all other branches in either a very direct or subtle way. The main motivation started with Pierre de Fermat and René Descartes who realized that to study geometry one could work with algebraic equations instead of drawings and pictures (which is now fundamental to work with higher dimensional objects, since intuition fails there). The most basic equations one could imagine to start studying were polynomials on the coordinates of your plane or space, or in a number field in general, as they are the most basic constructions from the elementary arithmetic operations. Equations of first order, i.e. linear polynomials, are the straight lines, planes, linear subspaces and hyperplanes. Equations of second order turned out to comprise all the classical conic sections; in fact the conics classification in the affine, Euclidean and projective cases (over the real and complex numbers) is the first actual algebraic geometry problem that every student is introduced to: the classification of all possible canonical forms of polynomials of degree 2 (either under affine transformations or isometries in variables $(x,y)$, or projective transformations in homogeneous variables $[x:y:z]$). Thus the basic plane curves over the real numbers can be studied by the algebraic properties of polynomials. Working over the complex numbers is actually more natural, as it is the algebraic closure of the reals and so it simplifies a lot the study tying together the whole subject, thanks to elementary things like the fundamental theorem of algebra and the Hilbert Nullstellensatz. 
Besides, working within projective varieties, enlarging our ambient space with the points at infinity, also helps since then we are dealing with topologically compact objects and pathological results disappear, e.g. all curves intersect at least at a point, giving the beautiful Bézout's theorem. From a purely practical point of view, one has to realize that all other analytic non-polynomial functions can be approximated by polynomials (e.g. by truncating the series), which is actually what calculators and computers do when computing trigonometric functions for example. So when any software plots a transcendental surface (or manifold), it is actually displaying a polynomial approximation (an algebraic variety). So the study of algebraic geometry in the applied and computational sense is fundamental for the rest of geometry. From a pure mathematics perspective, the case of projective complex algebraic geometry is of central importance. This is because of several results, like Lefschetz's principle by which doing (algebraic) geometry over an algebraically closed field of characteristic $0$ is essentially equivalent to doing it over the complex numbers; furthermore, Chow's theorem guarantees that all projective complex manifolds are actually algebraic, meaning that differential geometry deals with the same objects as algebraic geometry in that case, i.e. complex projective manifolds are given by the zero locus of a finite number of homogeneous polynomials! This was strengthened by Jean-Pierre Serre's GAGA theorems, which unified and equated the study of analytic geometry with algebraic geometry in a very general setting. Besides, in the case of projective complex algebraic curves one is actually working with compact orientable real surfaces (since these always admit a holomorphic structure), therefore unifying the theory of compact Riemann surfaces of complex analysis with the differential geometry of real surfaces, the algebraic topology of 2-manifolds and the algebraic geometry of algebraic curves! Here one finds wonderful relations and deep results like all the consequences of the concept of degree, index and curvature, linking together the milestone theorems of Gauß-Bonnet, Poincaré-Hopf and Riemann-Roch theorem! In fact the principal classification of algebraic curves is given in terms of their genus which is an invariant proved to be the same in the different perspectives: the topological genus of number of doughnut holes, the arithmetic genus of the Hilbert polynomial of the algebraic curve and the geometric genus as the number of independent holomorphic differential 2-forms over the Riemann surface. Analogously, the study of real 4-manifolds in differential geometry and differential topology is of central importance in mathematics per se but also in theoretical and mathematical physics, for example in gauge theory, so the study of complex algebraic surfaces gives results and provides tools. The full birational classification of algebraic surfaces was worked out decades ago in the Kodaira-Enriques theorem and served as a starting point to Mori's minimal model program to birationally classify all higher-dimensional (projective) complex algebraic varieties. A fundamental difference with other types of geometry is the presence of singularities, which play a very important role in algebraic geometry as many of the obstacles are due to them, but the fundamental Hironaka's resolution theorem guarantees that, at least in characteristic zero, varieties always have a smooth birational model. 
Also the construction and study of moduli spaces of types of geometric objects is a very important topic (e.g. Deligne-Mumford construction), since the space of all such objects is often an algebraic-geometric object itself. There are also many interesting problems and results in enumerative geometry and intersection theory, starting from the classic and amazing Cayley-Salmon theorem that all smooth cubic surfaces defined over an algebraic closed field contain exactly 27 straight lines, the Thom-Porteus formula for degeneracy loci, Schubert calculus up to modern quantum cohomology with Kontsevich's and ELSV formulas; Torelli's theorem on the reconstruction of algebraic curves from their Jacobian variety, and finally the cornerstone (Grothendieck)-Hirzebruch-Riemann-Roch theorem computing the number of independent global sections of vector bundles, actually their Euler-Poincaré characteristics, by the intersection numbers of generic zero loci of characteristic classes over the variety. Besides all this, since the foundational immense work of Alexandre Grothendieck, the subject has got very solid and abstract foundations so powerful to fuse algebraic geometry with number theory, as many were hoping before. Thus, the abstract algebraic geometry of sheaves and schemes plays nowadays a fundamental role in algebraic number theory disguised as arithmetic geometry. Wondeful results in Diophantine geometry like Faltings theorem and Mordell-Weil theorem made use of all these advances, along with the famous proof of Wiles of Fermat's last theorem. The development of abstract algebraic geometry was more or less motivated to solve the remarkable Weil conjectures relating the number of solutions of polynomials over finite number fields to the geometry of the complex variety defined by the same polynomials. For this, tremendous machinery was worked out, like étale cohomology. Also, trying to apply complex geometry constructions to arithmetic has led to Arakelov geometry and the arithmetic Grothendieck-Riemann-Roch among other results. Related to arithmetic geometry, thanks to schemes, there has emerged a new subject of arithmetic topology, where properties of the prime numbers and algebraic number theory have relationships and dualities with the theory of knots, links and 3-dimensional manifolds! This is a very mysterious and interesting new topic, since knots and links also appear in theoretical physics (e.g. topological quantum field theories). Also, anabelian geometry interestingly has led the way to studies on the relationships between the topological fundamental group of algebraic varieties and the Galois groups of arithmetic number field extensions. So, mathematicians study algebraic geometry because it is at the core of many subjects, serving as a bridge between seemingly different disciplines: from geometry and topology to complex analysis and number theory. Since in the end, any mathematical subject works within specified algebras, studying the geometry those algebras define is a useful tool and interesting endeavor in itself. In fact, the requirement of being commutative algebras has been dropped since the work of Alain Connes and the whole 'new' subject of noncommmutative geometry has flourished, in analytic and algebraic styles, to try to complete the geometrization of mathematics. 
On the other hand it attempts to give a quantum counterpart to classical geometries, something of extreme interest in fundamental physics (complex algebraic geometry and noncommutative geometry appear almost necessarily in one way or another in any attempt to unify the fundamental forces with gravity, i.e. quantum field theory with general relativity; even abstract and categorical algebraic geometry play a role in topics like homological mirror symmetry and quantum cohomology, which originated in physics). Therefore, the kind of problems mathematicians try to solve in algebraic geometry are related to much of everything else, mostly: anything related to the classification (as fine as possible) of algebraic varieties (and schemes, maybe someday), their invariants, singularities, deformations and moduli spaces, intersections, their topology and differential geometry, and framing arithmetic problems in terms of geometry. There are many interesting open problems: Birational minimal model program for all varieties, Hodge conjecture, Jacobian conjecture, Hartshorne's conjecture, General Griffiths conjecture, Fujita's conjecture, Linearization and cancelation conjectures, Coolidge-Nagata conjecture, Resolution of singularities in nonzero characteristic, Grothendieck's standard conjectures on algebraic cycles, Grothendieck's anabelian section conjecture, Classification of vector bundles over projective spaces, Unirationality of moduli spaces of curves, Unirationality of rationally connected varieties, Full rigorous formalization of mirror symmetry and quantum cohomology, Full theory of a universal cohomology and mixed motives (e.g. Voevodsky vanishing conjecture). In my personal case, I started as a theoretical physicists but switched completely to pure mathematics because of algebraic geometry, and I also began by self-learning. It is a very deep subject with connections to almost everything else, once one has learned enough to realize that. It is also a very demanding field because of the tremendous background one has to master, in commutative and homological algebra for example, before being able to get to the most modern and interesting results. The effort nevertheless pays off! In fact, the route through commutative algebra actually paves the way not only to algebraic geometry but to algebraic number theory and arithmetic geometry. I had a strong background in differential geometry so I arrived at algebraic geometry through complex (Kähler) geometry, and ended up fascinated by even the most abstract incarnations of it. "Algebraic geometry seems to have acquired the reputation of being esoteric, exclusive, and very abstract, with adherents who are secretly plotting to take over all the rest of mathematics. In one respect this last point is accurate..." - David Mumford. So the question could be instead "why not study algebraic geometry!?" I hope this answer motivates you enough to dive into this deep ocean of the mathematical world and to corroborate it yourself. Best luck!
The first chapter of Justin Smith's Introduction to Algebraic Geometry has a nice discussion of Bezout's Theorem and how algebraic geometry approaches geometric problems in general. I think you can download the PDF from the author's web site for free.
Showing that $1 - (1 - b_n)^{a_n} \sim a_n b_n$ as $n \to \infty$. Let $a_n \in \mathbb{N}_{> 0}$ be an increasing sequence and $b_n \in [0,1]$ a decreasing sequence. I am interested in the behavior of the sequence $c_n \triangleq 1 - (1-b_n)^{a_n}$. Intuitively, I know that \begin{equation*} 1 - (1 - b_n)^{a_n} \sim a_n b_n \quad \text{as $n \to \infty$}. \end{equation*} However, I cannot find a rigorous proof for that. It is easy to upper-bound \begin{equation*} 1 - (1 - b_n)^{a_n} \le a_n b_n, \end{equation*} using the fact that $(1-x)^k \ge 1 - k x$ for $k \ge 1$ and $x$ in vicinity of $0$. However, I am unable to find a (similar) lower-bound to conclude the claim. I realized that I should be more clear about what I would like to conclude. In fact, I would like to prove \begin{equation*} \lim_{n \to \infty} \frac1n \ln(c_n) = \lim_{n \to \infty} \frac1n \ln(a_n b_n). \end{equation*} As pointed out in the answer of Kim Jong Un, we can also lower-bound \begin{equation*} 1-(1-b_n)^{a_n} \ge a_n b_n (1-b_n)^{a_n-1}, \end{equation*} which shows, \begin{equation*} \frac1n \ln (c_n) \ge \frac1n \ln (a_n b_n) + \frac1n (a_n-1) \ln(1-b_n) \end{equation*} Now, if $\limsup_{n\to\infty} a_n b_n < \infty$, \begin{equation*} \lim_{n\to\infty} \frac1n (a_n-1) \ln(1-b_n) = 0 \end{equation*} and we can conclude the claim. However, if $a_n b_n \to \infty$ the lower-bound is not useful.
Let $d=(1-b_n)\in[0,1]$, then \begin{align*} 1-(1-b_n)^{a_n}=1-d^{a_n}&=(1-d)(d^{a_n-1}+\cdots+d+1)\\ &\geq(1-d)(d^{a_n-1}+\cdots+d^{a_n-1}+d^{a_n-1})\\ &=(1-d)(a_nd^{a_n-1})=b_na_n(1-b_n)^{a_n-1}. \end{align*}
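A quick numerical check of the resulting sandwich $a_nb_n(1-b_n)^{a_n-1}\le 1-(1-b_n)^{a_n}\le a_nb_n$ (my addition, not part of the answer):

import random

random.seed(0)
for _ in range(100000):
    a = random.randint(1, 1000)          # a_n: a positive integer
    b = random.random()                  # b_n in [0, 1]
    mid = 1 - (1 - b)**a
    lo = a * b * (1 - b)**(a - 1)
    hi = a * b
    assert lo - 1e-12 <= mid <= hi + 1e-12, (a, b)
print("bounds hold on all sampled pairs")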
Maybe logging? $(1-b_n)^{a_n} = e^{a_n \log (1-b_n)} \sim e^{-a_n b_n}$ using Maclaurin series, terms of the form $e^{O(\frac{1}{n})}$ converge to 1.
How to factor numbers that are the product of two primes What are techniques to factor numbers that are the product of two prime numbers? For example, how would we factor $262417$ to get $397\cdot 661$? Would we have to guess that factorization or is there an easier way?
The problem of factorization is the main property behind some cryptographic systems such as RSA. This fact has been studied for years and nowadays we don't know an algorithm to factorize a big arbitrary number efficiently. However, if $p*q$ satisfies some properties (e.g. $p-1$ or $q-1$ has a smooth factorization, meaning the number factorizes into primes $p$ such that $p \leq \sqrt{n}$), you can factorize the number in a computational time of $O(\log(n))$ (or another low computational time). If you are interested in it, you can check this pdf with some famous attacks on the security of RSA related to the factorization of large numbers. http://www.nku.edu/~christensen/Mathematical%20attack%20on%20RSA.pdf
There is an easier way. The length of prime numbers is not a problem - it is the randomness that lies within. Let $n=262417$ $397 + 661 = 1058$ $1058 / 2 = 529 = d$ $d^2 = 279841 = a$ Let $s = a - n$ $s = 279841 - 262417 = 17424$ (this square defines distance between $p$ and $q$) $\sqrt{s} = 132 = t$ $n = p \cdot q$ $n = (d + t)(d - t)$ $n = (529 + 132)(529 - 132) = 661 \cdot 397 = 262417$ To find $s$ you need to know pattern in prime numbers, which will also help you structure semiprime $n$ - "an almost beautiful" arithmetic progression.
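The arithmetic above is, in effect, Fermat's factorization method: search for $d$ with $d^2-n$ a perfect square $t^2$, so that $n=(d-t)(d+t)$. A minimal sketch (my own addition, assuming $n$ is odd and composite; it is fast only when the two factors are close together):

from math import isqrt

def fermat_factor(n):
    # find the smallest d >= ceil(sqrt(n)) with d^2 - n a perfect square t^2
    d = isqrt(n)
    if d * d < n:
        d += 1
    while True:
        s = d * d - n
        t = isqrt(s)
        if t * t == s:
            return d - t, d + t      # n = (d - t)(d + t)
        d += 1

print(fermat_factor(262417))   # (397, 661), with d = 529 and t = 132 as in the answer above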
Is this olympiad-like question about remainders an open problem? Suppose that we are given two positive integers $x$ and $y$ such that $$x \mod p \leqslant y \mod p$$ for each prime number $p$. (Here, $x \mod p,\; y \mod p$ stand for the least non-negative residua.) Does it follow that $x = y$? The problem is seemingly easy as we have to test $x,y$ against finitely many primes only. However, after several attempts I begin to wonder whether this is an open problem... Note that it seems to be an open problem whether there is a prime number between a pair of squares (a reference would be appreciated), so the case where $x$ and $y$ are squares themselves is hard enough. However, it may well happen that this doesn't require such an argument.
This is a known theorem, proved in the following paper: P. Erdos, P. P. Palfy and M. Szegedy, "a (mod p) ≤ b (mod p) for all Primes p Implies a = b", The American Mathematical Monthly, Vol. 94, No. 2 (Feb. 1987), pp. 169-170. Enjoy it!!!
Taking any prime number greater than $x$ and $y$ leads to $x\leq y$. Now, every $p$ dividing $y$ also divides $x$, because then $x \mod p \leq y \mod p = 0$; thus $y=kx$. For any prime number $p$ such that $x(k-1)<p<kx=y$ we obtain $y\mod p <x$, with the condition that the primes dividing $k$ also divide $x$. If we can prove there are always primes in such intervals then we have $x=y$.
What am I misunderstanding here? Dummit and Foote p.161 Let $G$ be an abelian group of order $n>1$. Let $n={p_1}^{a_1}\cdots {p_k}^{a_k}$ be the prime factorization. Then, $G\cong \mathbb{Z}_{{p_1}^{a_1}}\times\cdots\times\mathbb{Z}_{{p_k}^{a_k}}$ Doesn't this mean that there is a unique abelian group of order $n$ up to isomorphism? What am I misunderstanding here?
The statement in Dummit and Foote says $$G \cong A_1 \times A_2 \times \dots \times A_k,\text{ where } |A_i| = p_i^{\alpha_i}.$$ It does not say $A_i \cong \mathbb{Z}/p_i^{\alpha_i}\mathbb{Z}$. Indeed, there can be many abelian groups of the same prime-powered order, for example $\mathbb{Z}/p^2\mathbb{Z}$ and $\mathbb{Z}/p\mathbb{Z} \times \mathbb{Z}/p\mathbb{Z}$.
The prime factors do not have to be unique.
Proving Prime Integers are countably infinite I've been doing some work with cardinality of sets, and ran into an example I thought was interesting. In proving that the set of prime numbers is a countably infinite set, I've started that showing that the set of prime numbers (integers) $\mathbb{P}$ is a subset of $\mathbb{N}$. Obviously the natural numbers $\mathbb{N}$ can be mapped one-to-one to itself ($1$ to $1$, $2$ to $2$, etc.), so it is a countably infinite set. Following from this, since $\mathbb{P}$ is a subset of a countably infinite set $\mathbb{N}$, then $\mathbb{P}$ must be a countably infinite set as well. Is this enough information to show $\mathbb{P}$ is a countably infinite set, or must I show a concrete mapping for $\mathbb{P}$?
One way of proving that a subset $A \subset \Bbb N$ is countably infinite is by demonstrating that it satisfies the following property, $\tag 1 (\forall F \subset A) \, \bigl[ \, \text{IF } F \text{ is finite THEN } (\exists x \in \Bbb N) \, [x \notin F \text{ and } x \in A] \, \bigr ]$ See also $\quad$ Subset of a countable set is itself countable Now let $\mathbb{P}$ denote the set of prime numbers and let $F$ be any finite subset of $\mathbb{P}$. If $|F| = 0$ then $2 \notin F$ but $2 \in \Bbb P$. If $|F| = n \gt 0$ then $F = \{p_1,p_2,\dots,p_n\}$. Set $\quad \displaystyle {a = (\prod_{1 \le i \le n} p_i) + 1} $ Using Euclidean division one readily shows that $p_i \nmid a$ for any subscript $i$. So if $q$ is any prime factor of $a$ then $q \ne p_i$ for any subscript $i$. So $q \notin F$ but $q \in \Bbb P$. The above proof 'gears' Euclid's (direct) proof to the OP's requested set theoretic setting.
Limit $\lim\limits_{n \to \infty} n\left(\frac{1}{(n+1)^2} + \frac{1}{(n+2)^2} + \cdots + \frac{1}{(2n)^2}\right)$ Without using integrals, how to find this limit: $$\lim_{n \to \infty} a_n = n\cdot\left(\frac{1}{(n+1)^2} + \frac{1}{(n+2)^2} + \cdots + \frac{1}{(2n)^2}\right)$$ I tried squeezing the sequence but it didn't work out. What should I do next?
$$\sum_{i=n+1}^{2n}{\frac{1}{i^2}} \leq \sum_{i=n+1}^{2n}{\frac{1}{i(i-1)}}=\sum_{i=n+1}^{2n}{\left(\frac{1}{i-1}-\frac{1}{i}\right)}=\frac{1}{n}-\frac{1}{2n}=\frac{1}{2n}$$ $$\sum_{i=n+1}^{2n}{\frac{1}{i^2}} \geq \sum_{i=n+1}^{2n}{\frac{1}{i(i+1)}}=\sum_{i=n+1}^{2n}{\left(\frac{1}{i}-\frac{1}{i+1}\right)}=\frac{1}{n+1}-\frac{1}{2n+1}=\frac{n}{(n+1)(2n+1)}$$ Now use squeeze theorem to find $\lim_{n \to \infty}{a_n}$.
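A quick numerical check that $a_n = n\sum_{i=n+1}^{2n} 1/i^2$ is indeed squeezed towards $\frac12$; a minimal sketch in Python:

# a_n should approach 1/2, consistent with the two telescoping bounds above
for n in [10, 100, 1000, 10000]:
    a_n = n * sum(1.0 / i**2 for i in range(n + 1, 2 * n + 1))
    print(n, a_n)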
Its value is equal to $\dfrac{1}{2}$, using the definite integral as a limit of a sum with $f(x)=\frac{1}{x^2}$.
Why do statisticians like "$n-1$" instead of "$n$"? Does anyone have an intuitive explanation (no formulas, just words! :D) about the "$n-1$" instead of "$n$" in the unbiased variance estimator $$S_n^2 = \dfrac{\sum\limits_{i = 1}^n \left(X_i-\bar{X}\right)^2}{n-1}?$$
(Too long for a comment:) I can offer an explanation showing that dividing by $n$ would give an underestimation of the variance. The sum of squares $\sum (X_i - \overline{X})^2$, where $\overline{X}$ is the sample mean, is smaller than the sum $\sum (X_i - \mu)^2$ where $\mu$ is the true mean. This is the case since $\overline{X}$ is expected to be ''closer'' to the data points than the true mean since $\overline{X}$ is calculated based on the data. In fact, $\overline{X}$ is the value of $t$ such that the sum $\sum (X_i - t)^2$ is minimized. This shows that we underestimate the variance, so we should divide by something smaller than $n$. To put it even less formal, you try to determine how much your data is spread by comparing the deviations to the sample mean, which is always an underestimation. The sample mean is as close to the data as possible, whereas the true mean will differ more. The reason that we divide by precisely $n-1$ is that the estimator becomes unbiased (as pointed out in the comments).
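A small simulation illustrating the underestimation: averaging $\sum(X_i-\overline X)^2/n$ over many samples falls below the true variance, while dividing by $n-1$ does not. A minimal sketch in Python (the choice of Uniform(0,1) samples, with true variance $1/12$, is illustrative only):

import random

random.seed(0)
n, trials = 5, 100000
biased, unbiased = 0.0, 0.0
for _ in range(trials):
    xs = [random.random() for _ in range(n)]
    m = sum(xs) / n
    ss = sum((x - m) ** 2 for x in xs)
    biased += ss / n          # divide by n
    unbiased += ss / (n - 1)  # divide by n-1
print(biased / trials, unbiased / trials)  # roughly 0.067 vs 0.083 (= 1/12)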
Using samples we try to estimate the population mean, but since it is a sample, the sample mean will be somewhat above or below the population mean. The sum of squared distances of the readings from the sample mean is therefore smaller than the sum of squared distances from the population mean. We try to compensate for this effect by dividing by the smaller denominator $(N-1)$ in the case of the sample variance.
calculating previous k-average Given a number flow $a_1, a_2, ... a_N$, $N$ is a large number, for example $N = 1000$, the goal is to calculate the average value of the previous $n$. For example, if we are now at state $k$ and $n = 20$, then $Avg(k) = (a_k + a_{k-1} + ... + a_{k-19})/20$, $k \ge 20$. 1) Give an equation to calculate this $Avg(k)$ My answer: $Avg(k) = Avg(k-1) + (a_k - a_{k-n})/n$ 2) Calculate $Avg(k)$ without using subtraction My answer: build an array that holds $Avg(1), Avg(2), ... , Avg(k)$ and update each one at each step, but this has complexity $O(Nn)$. Does anybody know an $O(N)$ algorithm for this?
You can keep track of the previous 19 numbers (in a 20 element array), sum them with the current number, and then replace the oldest number in the array with the current one and move on. To compute the size-$n$ averages for all $N-n$ reasonable slots then takes time $O(Nn)$ as in your solution. But suppose that instead you keep track of the last 19 numbers, but also keep the sum of the last 1, the next 2, the next 4, the next 8, ... up to the largest power of 2 less than N. You move forward, and now you have the sum of the last 2, the next 2, the next 4, ..., which you can update by keeping the last 2, but replacing the next 2 with the sum of your first two elements, so that you have last 2, next 4, next 4, next 8, ... and you can cascade this. And you can tack on the last 1 to the first of the list. In log n time, you've updated this list. In log n time, you can compute the sum of more than half of the previous elements... but computing the sum for the remaining few still takes time O(n/2). But what if you kept a SECOND logarithmic list to help you do that bunch fast as well? Then you'd need to do at most $c * n/4$ work at the end. And so on.... In this manner, you keep $\log n$ lists, each of size $\log n$, and compute the sum in time that looks like (I think) $\log^2 n$, so your overall runtime ends up being $O(N \log^2 n)$. That's not $O(N)$, but it's a good deal better than $O(Nn)$.
The solution for your first question is to keep track of the cumulative sum of the elements in a new array. Now when you are asked to find the average of the elements between, say, indices 'i' and 'j', you can get the sum of the elements between these indices directly by subtracting the cumulative sum up to 'i' from the cumulative sum up to 'j', and divide it by n to get the average. In your case 'i' and 'j' are 'k-n' and 'k' respectively. The time complexity of the above algorithm is O(N), because you have to traverse the entire array once to build the cumulative sum array, and the space complexity is O(N), because you keep the cumulative sum of all the elements. Your second question can be solved by performing subtraction using bitwise operators. Here (https://www.geeksforgeeks.org/subtract-two-numbers-without-using-arithmetic-operators/) is a link showing how to perform subtraction using bitwise operators. Either way, the time complexity and space complexity will not differ.
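A minimal Python sketch of the cumulative-sum idea for part 1 (the function name is mine; note it still uses subtraction, so it does not address part 2 as stated):

def window_averages(a, n):
    # prefix[i] = a[0] + ... + a[i-1]
    prefix = [0] * (len(a) + 1)
    for i, v in enumerate(a):
        prefix[i + 1] = prefix[i] + v
    # average of a[k-n], ..., a[k-1] for every k >= n (0-based windows)
    return [(prefix[k] - prefix[k - n]) / n for k in range(n, len(a) + 1)]

print(window_averages(list(range(1, 11)), 3))  # averages of consecutive triples: 2.0, 3.0, ..., 9.0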
Why does integration by long division give a different answer than using u-substitution? So, after solving this question, I got two different answers, and I don't think it's supposed to be this way. Evaluate the integral $$\int \frac{x^2+2}{x+2} dx$$ Using polynomial long division, I get $$\frac{x^2}2-2x+6\ln|x+2|+C,$$ but using substitution, I get $$\frac{(x+2)^2}2-4(x+2)+6\ln|x+2|+C.$$
Note that $$\frac{(x+2)^2}2-4(x+2)+6\ln|x+2|+C=\frac{x^2}2-2x-6+6\ln|x+2|+C=$$ $$=\frac{x^2}2-2x+6\ln|x+2|+(C-6)=\frac{x^2}2-2x+6\ln|x+2|+C_1$$ therefore the two results are the same up to a constant which is not essential, indeed in both cases $$\frac{d}{dx}\left(\frac{x^2}2-2x+6\ln|x+2|+C\right)=\frac{x^2+2}{x+2}$$
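A quick symbolic check of both facts; a minimal sketch assuming sympy is available, and dropping the absolute value for simplicity (i.e. working on $x > -2$):

import sympy as sp

x = sp.symbols('x')
g1 = x**2 / 2 - 2 * x + 6 * sp.log(x + 2)
g2 = (x + 2)**2 / 2 - 4 * (x + 2) + 6 * sp.log(x + 2)
print(sp.simplify(g1 - g2))                           # 6: the two answers differ by a constant
print(sp.simplify(sp.diff(g1, x) - sp.diff(g2, x)))   # 0: both have the same derivative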
Your answers differ by a constant. Therefore, if one of them is correct, the other one is correct too.
Prove/refute property about a constrained sequence of numbers I need to prove or refute a property about a sequence of numbers. Here is what is given to me: Sequence ($a_1,a_2,...,a_k,a_{k+1},a_{k+2}$) containing $k+2$ numbers. Every number $0 < a_i \leq M, i=1,...,k+2$ for the same given constant $M$. Moreover, $\sum_{i=1}^{k+2} a_i = N$, for another given constant $N > 0$. The constants $N$ and $M$ are related by $kM \geq N, k > 1$. Then, I need to prove or refute the following property: Can all consecutive pairs of numbers $a_i, a_{i+1}, i=1,...,k+1$ be defined such that $a_i + a_{i+1} > M$ ? Or does it lead to a violation of one of the given constraints? I tried searching for something similar but in truth, I barely know what to search for. My tentative proofs do not really go anywhere meaningful, so I had to resort to more experienced math guys to help me with this. It has been a long time since I had to prove some property like this.
Assuming that given $k$, $N$ is selected such that $$\sum_{i=1}^{k+2} a_i = N,$$ I claim it is possible $\iff k\ge 3$. First of all we see the cases $k=1,2$ to check it is impossible: (i) if $k=1$ then $$M< a_1+a_2 < N \le kM = M;$$ (ii) if $k=2$ then we only have $(a_1,a_2,a_3,a_4)$, and $$a_1+a_2>M \mbox{ and } a_3+a_4 >M ,$$ so $$N = \sum_{i=1}^{k+2} a_i >M + M =2M = kM .$$ Now let $k\ge 3$: let $\epsilon > 0$ (arbitrarily small) and such that $\epsilon < M$, then I pick: $$a_i= M \mbox{ if } i \mbox{ is even; } a_i = M -\epsilon \mbox{ if } i \mbox{ is odd.}$$ Now we observe that $a_i + a_{i+1} = 2M - \epsilon > M$ and that $0 < a_i \le M$, so they are good, then: (i) if $k$ is even $$\sum_{i=1}^{k+2} a_i = N = (2M -\epsilon)\Big(\frac{k}{2} + 1\Big) = (k+2)M - \epsilon\Big(\frac{k}{2}+1\Big)$$ which is possible if I pick $\epsilon$ such that $$kM \ge (k+2)M - \epsilon\Big(\frac{k}{2}+1\Big)$$ which implies that $$\epsilon \ge \frac{2M}{\frac{k}{2}+1} .$$ Also we have that $M> \epsilon$, so we can pick such an $\epsilon$ only if $$M> \frac{2M}{\frac{k}{2}+1},$$ which means that $k > 2$. (ii) if $k$ is odd $$\sum_{i=1}^{k+2} a_i = N = (2M-\epsilon)\Big(\frac{k+1}{2}\Big) + a_{k+2} = (k+1)M - \epsilon \Big(\frac{k+1}{2}\Big) + a_{k+2} =$$ $$= (k+2)M - \epsilon \Big(\frac{k+1}{2} +1 \Big)$$ and here too we see it is possible if $\epsilon$ is such that $$\epsilon \ge \frac{2M}{\frac{k+1}{2}+1} .$$ This implies that it is true if $$M > \frac{2M}{\frac{k+1}{2}+1},$$ which means if $k>1$. This concludes my claim.
If $a_i+a_{i+1}>M$ for all applicable $i$, then $$ N=\sum_{i=1}^{k+2}a_i=\frac{a_1}2+\sum_{i=1}^{k+1}\frac{a_i+a_{i+1}}2+\frac{a_{k+2}}2>\frac{k+1}2M$$ and in fact $$ N=\sum_{i=1}^{k+2}a_i=\sum_{j=1}^{\frac{k+2}2}(a_{2j-1}+a_{2j})>\frac{k+2}2M\qquad\text{if $k$ is even}.$$ Hence we certainly need the additional condition that $$\tag1 \left\lceil\frac{k+1}2\right\rceil M<N.$$ In particular, $kM\ge N$ contradicts $(1)$ when $k=1$ or $k=2$. Hence we incidentally also need $$\tag2 k\ge3 $$ (but as said, $(1)$ implies $(2)$ in the given context). Finally, we also need $$ \tag3 M>0$$ to allow for $0<a_i\le M$ in the first place. On the other hand, assume we have $k\in\Bbb N$, $N,M\in \Bbb R$ such that $(1)$ and $(3)$ and $kM\ge N$. Then we can let $$a_i=\frac N{k+2}. $$ By $(1)$ and $(3)$ and $N\le kM$, this makes $$ 0<a_i<M.$$ We clearly have $$ \sum a_i = (k+2)\cdot \frac N{k+2}=N$$ and by $(1)$, $$ a_i+a_{i+1}=\frac{2N}{k+2}\ge\frac N{\left\lceil \frac{k+1}2\right\rceil}>M,$$ as desired.
Factorizing a matrix into a matrix and its transpose Let $W\in \mathbb{R}^{n \times n}$ be a positive semi-definite matrix. Then, what are some well-known factorization methods that guarantee $W=A^T A$, with the conditions being that \begin{align} 1.& \ \ \ \ \ A \in \mathbb{R}^{n \times n}, \\ 2.& \ \ \ \ \ \text{$A$ has the same rank as $W$}? \end{align}
It's not possible to factor a general positive semidefinite matrix that way. $A^TA$ is automatically symmetric. But there are positive semidefinite matrices that are not symmetric. Like $W=\begin{bmatrix}1&3\\-1&1\end{bmatrix}$, which has $\vec{v}^tW\vec{v}=(v_1+v_2)^2\geq0$ for all $\vec{v}$. Did you intend to ask about factoring a symmetric matrix? Or a Hermitian positive semidefinite matrix?
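A quick numerical illustration of that example; a minimal sketch assuming numpy is available:

import numpy as np

W = np.array([[1.0, 3.0], [-1.0, 1.0]])
rng = np.random.default_rng(0)
vs = rng.normal(size=(1000, 2))
quad = np.einsum('ij,jk,ik->i', vs, W, vs)     # v^T W v for each sampled v
print(np.all(quad >= -1e-12))                  # True: v^T W v = (v1+v2)^2 >= 0 up to rounding
print(np.allclose(W, W.T))                     # False: W is not symmetric, but A^T A always is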
I don't know such a decomposition for all p.s.d. matrices $W$. But take a look at Matrix decomposition especially Cholesky decomposition.
Why should I believe that the real numbers model distances along a line? Taking the real numbers to be a complete ordered field, why do we believe that they model distances along a line? How do we know (or why do we believe) that any length that can be drawn is a real number multiple of some unit length?
Completeness often feels a bit technical at first: we show that there is exactly one complete ordered field up to isomorphism, but why should that confluence of properties correspond to line-ness? I think it's more intuitive to focus instead on connectedness. This is really the same thing in our context, but is a priori phrased in more convenient language: the idea is that if I "cut" the line into a "lower piece" and an "upper piece," then there is some point which captures this cut. Specifically: The line is connected, in the sense that it cannot be written as the disjoint union of two nonempty sets $A$ and $B$ where $A$ is downwards closed, $B$ is upwards closed, $A$ has no greatest element, and $B$ has no least element. This is - for me at least - a pretty fundamental piece of the intuition I have about the line. (Connectedness is equivalent to completeness in our context, but is in my opinion more obviously a fundamental line property.) The fact that $\mathbb{R}$ is the unique connected ordered field then tells me that $\mathbb{R}$ is the only possible way we could faithfully "model" the line by a field, that is, the only way we could sensibly add/subtract/multiply/divide lengths. This whole line of attack was developed by Dedekind in his Essays on the Theory of Numbers. That said, we can critique this idea. The line has three fundamental properties (to me anyways): it is a linear order without endpoints (duh), it is connected, and it is dense (= between any two elements is a third). These three properties are not enough to pin down a single linear order up to isomorphism: there are lots of connected dense linear orders without endpoints much larger than $(\mathbb{R};<)$. This does not contradict the above, since these "unreal" linear orders do not support a field structure. So one way I could argue that the ordered field $(\mathbb{R};+,\times)$ does not faithfully model the intuitive line is if I gave up the assumption that lengths do in fact form a field! This raises a natural question: What sort of algebraic structure can a connected dense linear order without endpoints other than $(\mathbb{R};<)$ support? It turns out that the answer is very little: any two (nontrivial) connected ordered groups are in fact isomorphic,$^*$ and in particular up to isomorphism the only (nontrivial) connected ordered group is $(\mathbb{R};+)$. So assuming we want to be able to add and subtract lengths in a reasonable way, we're stuck with $\mathbb{R}$. This is pretty much case-closed for me: the idea of "non-additive intervals" is sort of a non-starter as far as my own intuition is concerned. $^*$Here's a proof sketch: Suppose $G$ is a nontrivial connected ordered group. First, note that by connectedness $G$ must be divisible: e.g. for each $x$ there must be some $y$ such that $y+y=x$, since otherwise we could partition $G$ into $\{y: y+y<x\}$ and $\{y: y+y>x\}$ contradicting connectedness. Now - by nontriviality - fix some positive group element $g$. By divisibility we get an embedding $\eta$ of $(\mathbb{Q};+)$ into $G$ generated by sending $1$ to $g$. By connectedness, $ran(\eta)$ is cofinal in $G$ (otherwise look at the downwards closure of $ran(\eta)$); this gives us a homomorphism $h:G\rightarrow \mathbb{R}$ (think about sending $x\in G$ to the least upper bound in the sense of $\mathbb{R}$ of $\eta^{-1}(\{y: y<x\})$). And we can show that $h$ is a bijection - hence an isomorphism - by applying connectedness again.
I think steven gregory's answer is probably the best so far: we simply choose to believe that real numbers describe distances - it has turned out to be a useful notion, but it is simply a choice we made, it isn't absolute truth. This is really about the fundamental nature of mathematics: there are certain things we choose to accept as true without proof (called axioms), and when we say that maths represents absolute truth, it means simply that the logical statements of the form "if the axioms are true, then ..." are true.
Find the number of natural numbers that can be written as $x^2$, $x^3$ and $x^5$ that are smaller than or equal to $2^{30}$ Since I have not found any formula for this, I've written a quick Python script to calculate the number of numbers that can be expressed as $x^2$ that are $\le2^{30}$, just to see the result. It took a little while to compute, and it returned $32769$. Wolframalpha says that $32769$ can be represented as $2^{15} + 1$, but I am still not seeing any pattern here. EDIT: The script started from $0$, which explains the extra $+1$. The actual number of perfect squares that are $\le2^{30}$ is $2^{15} = 32768$. Also, thanks to Eevee Trainer, I've been able to solve this more efficiently for $x^2$, $x^3$ and $x^5$ using their formula: $\text{# of positive perfect k-th powers less than or equal to n} = \lfloor \sqrt[k] n \rfloor$ Therefore, these are the counts of numbers less than or equal to $2^{30}$ for each of the following types: perfect squares: $\sqrt[2] {2^{30}} = 2^{15}$ cubes: $\sqrt[3] {2^{30}} = 2^{10}$ fifth powers: $\sqrt[5] {2^{30}} = 2^{6}$
I assume you want positive integers $x$; if it's just any kind of integer (positive or negative or 0), the below can be modified to apply. If it's just any real number, then the number is clearly infinite, but I imagine that's not at all the scope. So going forward, we'll be considering positive perfect squares less than some other number. First, let's establish the underlying pattern. This will explain why the number of squares is coincidentally equal to $2^{15}+1 = \sqrt{2^{30}} + 1$. This might be one of those kinds of cases where it's logical to try some small values first. For example, let's find the number of positive perfect squares $s$ less than or equal to $n$. Suppose $n=2^2$. Well, we have $s=1,4$. Suppose $n = 3^2$. Then $s=1,4,9$. Keep trying further numbers, and it becomes clear: if $n$ is a perfect square, then $$\text{# of positive perfect squares less than or equal to n} = \sqrt n$$ It should be easy to deduce that if $n$ is not a perfect square, it falls between two perfect squares, $\lfloor \sqrt n \rfloor ^2$ and $(\lfloor \sqrt n \rfloor + 1)^2$. But of course, there aren't going to be more perfect squares between the former and $n$, so we can just treat $n$ as the former. Then it can be deduced: for positive integers $n$, $$\text{# of positive perfect squares less than or equal to n} = \lfloor \sqrt n \rfloor$$ Similar logic follows for the number of perfect cubes or perfect fifth powers or whatever: $$\text{# of positive perfect k-th powers less than or equal to n} = \lfloor \sqrt[k] n \rfloor$$ (Note: This is by no means a formal argument, nor is it meant to be. This is more a heuristic idea to show where the results you need come from.) Take $n=2^{30} = (2^{15})^2$ to begin to get your solutions. So far, this only gets you $2^{15}$ solutions with respect to the number of squares (i.e. one off). This comes about on the assumption we have positive integer solutions (i.e. $x > 0$) and include the number we're searching at if it's a perfect square (i.e. $"..." \leq 2^{30}$). The only conclusion I can think of is that $0^2$ is being counted as a further solution. It depends on the exact framing of the question whether that counts - whether you wanted natural number solutions, whether you wanted nonnegative integer solutions, positive integer solutions, etc., and of course to touch on the first whether the problem comes with the implicit assumption that $0$ is a natural number (this is a contentious issue in mathematics). So whether this solution is valid needs to be addressed to whomever gave you the problem. As for why it might have popped up in your solution and why Wolfram gave the same answer, it depends on the code you used. If you started checking squares at $0$ and not $1$, then that would explain it, but it depends on your specific implementation. Per a comment from you, it seems that you indeed included $0$ in your search so I figure that's why.
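A quick brute-force confirmation of the three counts for $n = 2^{30}$, counting positive $x$ only; a minimal sketch in Python:

n = 2**30
for k in (2, 3, 5):
    count = 0
    x = 1
    while x**k <= n:
        count += 1
        x += 1
    print(k, count)   # k=2: 32768, k=3: 1024, k=5: 64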
Only the squares up to $x=2^{15}$ satisfy the condition. Obviously there are $2^{15}+1$ of them.
Eigenvectors of a normal matrix According to the spectral theorem every normal matrix can be represented as the product of a unitary matrix $\mathbb{U}$ and a diagonal matrix $\mathbb{D}$ $$\mathbb{A} = \mathbb{U}^H\mathbb{D}\mathbb{U}$$ meaning that every normal matrix is diagonalizable. Does it necessarily mean that the unitary matrix has to be composed from the eigenvectors of $\mathbb{A}$ ? I presume that not, because then the eigenvectors of every normal matrix would form an orthonormal set (rows and columns of a unitary matrix are orthonormal in $\mathbb{C}^n$). So am I right that only the set of eigenvectors of a hermitian (or symmetric while in $\mathbb{R}^n$) matrix is orthonormal?
An $n\times n$ matrix $A$ over the field $\mathbf{F}$ is diagonalizable if and only there is a basis of $\mathbf{F}^n$ of eigenvectors of $A$. This occurs if and only if there exists an invertible $n\times n$ matrix $Q$ such that $Q^{-1}AQ$ is a diagonal matrix; the column of $Q$ form the basis made up of eigenvectors, and conversely, if you take a basis made up of eigenvectors and arrange them as columns of a matrix, then the matrix is invertible and conjugating $A$ by that matrix will yield a diagonal matrix. An $n\times n$ matrix $A$ with coefficients in $\mathbb{R}$ or $\mathbb{C}$ is orthogonally diagonalizable if and only if there is an orthonormal basis of eigenvectors of $A$ for $\mathbb{R}^n$ (or $\mathbb{C}^n$, respectively). An $n\times n$ matrix with coefficients in $\mathbb{C}$ is orthogonally diagonalizable over $\mathbb{C}$ if and only if it is normal; a square matrix is orthogonally diagonalizable over $\mathbb{R}$ if and only if it is Hermitian. "Unitary" is usually reserved for complex matrices, with "orthogonal" being the corresponding term for real matrices.
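A small numerical illustration with a matrix that is normal but not Hermitian (a plane rotation): its eigenvalues are complex, yet its eigenvector matrix is unitary and diagonalizes it. A minimal sketch assuming numpy is available (the angle $0.7$ is arbitrary; the eigenvalues are distinct here, so eig's normalized eigenvectors are automatically orthogonal):

import numpy as np

theta = 0.7
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])      # not symmetric
print(np.allclose(A @ A.T, A.T @ A))                 # True: A is normal
w, V = np.linalg.eig(A)                              # columns of V are eigenvectors
print(np.allclose(V.conj().T @ V, np.eye(2)))        # True: eigenvectors are orthonormal
print(np.allclose(V @ np.diag(w) @ V.conj().T, A))   # True: A = V D V^H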
Here is what I think is correct: Normal matrices are matrices that have orthogonal eigenvectors. Hermitian matrices are normal matrices that have real eigenvalues. So this answers your first question in the affirmative: Yes, the unitary matrix in your decomposition is composed of the eigenvectors of your original matrix.
Finding the antiderivative of $f(x)=\sqrt[3]{x^2+2x}$ $$f(x)=\sqrt[3]{x^2+2x}.$$ Let $g(x)$ be an antiderivative of $f(x)$. If $g(5)=7$, then what is the value of $g(1)$? I tried integrating by parts repeatedly, but with no success. Wolfram Alpha also gives something in terms of a function I don't know. Please help?
If you are content with a numerical solution, here it is: Because of $$g(5)-g(1)=\int_1^5 (x^2+2x)^\frac{1}{3}dx=9.729162187801335050406060297$$ we get $$g(1)=g(5)-9.729162187801335050406060297=-2.729162187801335050406060298$$
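A quick way to reproduce that value numerically; a minimal sketch assuming scipy is available:

from scipy.integrate import quad

val, err = quad(lambda x: (x**2 + 2 * x) ** (1.0 / 3.0), 1, 5)
print(val)        # about 9.7291621878...
print(7 - val)    # g(1) is about -2.7291621878...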
Which is the exact phrasing of the question? I mean, maybe you are not asked to compute but just to express the value in terms of $g$, i.e.: $$g(1)=\int_5^1 \sqrt[3]{x^2+2 x} \, dx +7$$ because, at the end of the day, the definition of $g$ is simply: $$g(x)=\int_5^x \sqrt[3]{t^2+2 t} \, dt +7$$
Definition of the nth derivative? [First post] If the definition of the derivative is $$ f^\prime(x) = \lim_{\Delta x \to 0} \dfrac{f(x+\Delta x) - f(x)}{\Delta x} $$ Would it make sense that the nth derivative would be (I know that the 'n' in delta x to the nth power is useless) $$ f^{(n)}(x)=\lim_{\Delta x \to 0} \sum_{k=0}^{n}(-1)^k{n \choose k}\dfrac{f(x+\Delta x(n-k))}{\Delta x^n} $$ I came to this conclusion using this method $$ f^\prime(x) = \lim_{\Delta x \to 0} \dfrac{f(x+\Delta x) - f(x)}{\Delta x} $$ (this is correct, right?) $$ f^{\prime\prime}(x) = \lim_{\Delta x \to 0} \dfrac{f^\prime(x+\Delta x) - f^\prime(x)}{\Delta x}=$$$$\lim_{\Delta x \to 0}\dfrac{\dfrac{f((x+\Delta x)+\Delta x)-f(x+\Delta x)}{\Delta x}-\dfrac{f(x+\Delta x)-f(x)}{\Delta x}}{\Delta x}=$$$$\lim_{\Delta x \to 0}\dfrac{f(x+2\Delta x)-2f(x+\Delta x)+f(x)}{\Delta x^2} $$ After following this method a couple of times (I think I used it to the 5th derivative) I noticed the pattern of $$(a-b)^n$$ And that is how I arrived at $$ f^{(n)}(x)=\lim_{\Delta x \to 0} \sum_{k=0}^{n}(-1)^k{n \choose k}\dfrac{f(x+\Delta x(n-k))}{\Delta x^n} $$ Have I made a fatal error somewhere or does this definition actually follow through? Thanks for your time, I really appreciate it. P.S. Any input on using tags will be appreciated.
This is probably not a good definition of the $n$th derivative. To see this, consider the case $n = 2$: $$ f''(x) = \lim_{h \to 0} \frac{f(x + 2h) - 2f(x + h) + f(x)}{h^2} $$ Define $f: \mathbb{R} \to \mathbb{R}$ as follows. First, define $f(0) = 0$. Now define $f$ on the intervals $\left[-1, -\tfrac12\right)$ and $\left(\tfrac12, 1\right]$ to be your favorite unbounded function, for instance $\frac{1}{x^2 - 1/4}$ is a good choice. Now, for any $x$, let $k$ be the unique integer such that $2^k x$ is contained in one of these intervals, and define $f(x) = 2^{-k} f(2^k x)$. This construction satisfies $f(2h) = 2f(h)$ for all $h \in \mathbb{R}$, so the derivative formula above gives $$ f''(0) = \lim_{h \to 0} \frac{f(2h) - 2f(h) + f(0)}{h^2} = \lim_{h \to 0} \frac{0}{h^2} = 0 $$ However, $f$ is wildly discontinuous at $0$, and is in fact unbounded in any neighborhood containing $0$.
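For smooth functions the forward-difference formula does converge to the usual derivative; a quick numerical check for $f=\sin$ at $x=1$, where $f''(1) = -\sin(1)$. A minimal sketch in Python (the helper name is mine):

import math

def nth_forward_diff(f, x, n, h):
    # sum_{k=0}^{n} (-1)^k C(n,k) f(x + (n-k)h) / h^n
    return sum((-1) ** k * math.comb(n, k) * f(x + (n - k) * h)
               for k in range(n + 1)) / h ** n

x = 1.0
for h in (1e-2, 1e-3, 1e-4):
    print(h, nth_forward_diff(math.sin, x, 2, h), -math.sin(x))  # converges to -sin(1)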
When you did it for $f''(x)$ you took both limits as a single limit in the same $\Delta x$. What I did is take $h_1$ for the first limit and $h_2$ for the second, so that the nth derivative is the limit of some function as $(h_1, h_2, h_3, \ldots, h_n) \to (0,0,0,\ldots,0)$.
Determine which Fibonacci numbers are even (a) Determine which Fibonacci numbers are even. Use a form of mathematical induction to prove your conjecture. (b) Determine which Fibonacci numbers are divisible by 3. Use a form of mathematical induction to prove your conjecture I understand that for part (a), $F(n)$ is even exactly when $n$ is a multiple of 3, so $F(0), F(3), F(6), \ldots$ I just don't understand how to prove it. For part (b) it is the same thing, except with multiples of 4. Please help, thank you!
Hint: You can look at the first few terms of the sequence modulo $n$, and then conclude a pattern (because each term is based upon the previous two). The first few terms of the Fibonacci sequence modulo $2$ are $1,1,0,1,1,0,1,1,0,1,1,0,\ldots$ The first few terms of the Fibonacci sequence modulo $3$ are $1,1,2,0,2,2,1,0,1,1,2,0,\ldots$ Now how can you formalize this argument using induction? Another hint: you may want multiple base cases.
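A quick way to see the repeating residues, starting the sequence at $F_0 = 0$; a minimal sketch in Python (the helper name is mine):

def fib_mod(m, count=15):
    a, b, out = 0, 1, []
    for _ in range(count):
        out.append(a % m)
        a, b = b, a + b
    return out

print(fib_mod(2))  # zeros at indices 0, 3, 6, ...: F_n is even iff 3 divides n
print(fib_mod(3))  # zeros at indices 0, 4, 8, ...: F_n is divisible by 3 iff 4 divides n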
1, 1, 2, 3, 5, 8, ... An even number and an odd number added together is odd; an odd number plus an odd number is even.
Find all $2 \times 2$ matrices $A$ such that $AB = BA$ for every $2 \times 2$ matrix $B$ Find all possible $2 \times 2$ matrices A that for any $2 \times 2$ matrix B, AB = BA. Hint: AB = BA must hold for all B. Try matrices B that have lots of zero entries. I'm clueless as to how to solve this problem. How should I start it? I tried plugging in values for B that "have lots of zero entries" but didn't seem to see anything that could help.
Consider the following four matrices: $$\left(\begin{array}{cc} 1 & 0 \\ 0 & 0\end{array}\right), \quad \left(\begin{array}{cc} 0 & 1 \\ 0 & 0\end{array}\right), \quad \left(\begin{array}{cc} 0 & 0 \\ 1 & 0\end{array}\right), \quad \left(\begin{array}{cc} 0 & 0 \\ 0 & 1\end{array}\right).$$ See what happens when you solve the equation $AB = BA$ for each of those four (let $B$ be each one of those four). To facilitate it, write $A = \left(\begin{array}{cc} a & b \\ c & d\end{array}\right)$. You will get a set of equations for the entries of $A$ which are easily solved. This trick is quite general.
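A symbolic check that those four test matrices already force $A$ to be a scalar multiple of the identity; a minimal sketch assuming sympy is available:

import sympy as sp

a, b, c, d = sp.symbols('a b c d')
A = sp.Matrix([[a, b], [c, d]])
tests = [sp.Matrix(2, 2, lambda i, j: 1 if (i, j) == pos else 0)
         for pos in [(0, 0), (0, 1), (1, 0), (1, 1)]]
eqs = []
for B in tests:
    eqs.extend(list(A * B - B * A))   # entries of AB - BA must all vanish
print(sp.solve(eqs, [a, b, c, d], dict=True))  # forces b = c = 0 and a = d, i.e. A = a*I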
Given a matrix $B$, consider any matrix that is a polynomial in $B$ with coefficients $\{c_k\}$ in the field you are working with, i.e., take all $A$ with $$A=c_0I+c_1B+c_2B^2+...+c_nB^n.$$ I actually think this gives you all such matrices $A$, but I am not sure.
finding recursive formula and show it converges to a limit Suppose we are playing cards and we start with $1000$ dollars. Every hour we lose $\frac{1}{2}$ of our money and then we buy another $100$ dollars. I am trying to find $x_n$ for the amount of money the player has after $n$ hours. I think we can just take $x_n = \frac{x_{n-1}}{2} + 100 $ And so, let $L = \lim x_n$. Then $L = \frac{L}{2} + 100 $ and so $L = 200$ Is this correct?
What you did is not wrong, but it's not complete. What you did prove: If the sequence $x_n$ has a limit, then the limit is equal to $200$. What you did not prove: The sequence $x_n$ has a limit. Also, that's not what the question is asking you. The question says you need to find a formula for $x_n$, not the limit of $x_n$.
$x_n = \frac{x_{n-1}}{2} + 100 = \left(\frac{\frac{x_{n-2}}{2} + 100}{2}\right) + 100 =\frac{x_{n-2}}{4} + 100\cdot\left(\frac{1}{2} + 1\right)= \frac{\frac{x_{n-3}}{2} + 100}{4} + 100\cdot\left(\frac{1}{2} + 1\right)=\cdots$ $\implies x_n = \frac{x_0}{2^n} + 100\cdot\left(\frac{1}{2^{n-1}} + \frac{1}{2^{n-2}} + \cdots + \frac{1}{2} + 1 \right) $ where $x_0 = 1000$.
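A quick iteration confirming both the closed form above (which simplifies to $x_n = 200 + 800/2^n$) and the limit $200$; a minimal sketch in Python:

x = 1000.0
for n in range(1, 21):
    x = x / 2 + 100
    closed_form = 1000 / 2**n + 200 * (1 - 1 / 2**n)   # = 200 + 800/2^n
    print(n, x, closed_form)
# both columns agree and approach 200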
Example that $\lim \sup (x_n\cdot y_n)<\lim \sup (x_n)\cdot \lim \sup (y_n)$ This is a short question, I already managed to prove using definitions that $$\lim \sup (x_n\cdot y_n)\le \lim \sup (x_n)\cdot \lim \sup (y_n)$$ But I'm having trouble coming up with an example such that $$\lim \sup (x_n\cdot y_n)<\lim \sup (x_n)\cdot \lim \sup (y_n)$$ I tried to consider alternating sequences but I'm not sure if I'm doing it right. I'm considering the following right now. $$x_n=(1,0,1,0,...)$$ $$y_n=(0,1,0,1,...)$$ $$x_n\cdot y_n=(0,0,0,0,...)$$ $\lim \sup x_n \cdot y_n=0$ as the sequence is convergent. But $\lim \sup x_n = 1$ and $\lim \sup y_n =1$ So it appears the inequality holds. I just need a confirmation that what I'm doing is right. Sorry if this is a redundant question, I'm just learning this concept so it's a little fuzzy for me. Note that $(x_n)$ and $(y_n)$ are non-negative.
Even values of x are zero, odd values are one. y is the opposite.