Convergence of rationals to irrationals and the corresponding denominators going to infinity If $(\frac{p_k}{q_k})$ is a sequence of rationals that converges to an irrational $y$, how do you prove that $(q_k)$ must go to $\infty$? I thought of some argument along the lines of "breaking up the interval $(0,p_k)$ into $q_k$ parts", but I'm not sure how to put it all together. Perhaps for every $q_k$, there is a bound on how close $\frac{p_k}{q_k}$ can get to the irrational $y$?
Hint: For every positive integer $n$, consider the set $R(n)$ of rational numbers $p/q$ such that $1\leqslant q\leqslant n$. Show that for every $n$ the distance $\delta(n)$ of $y$ to $R(n)$, defined as $\delta(n)=\inf\{|y-r|\mid r\in R(n)\}$, is positive. Apply this to any sequence $(p_k/q_k)$ converging to $y$, showing that for every positive $n$, there exists $k(n)$ such that for every $k\geqslant k(n)$, $p_k/q_k$ is not in $R(n)$, hence $q_k\geqslant n+1$.
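To see the hint in action numerically (a quick Python sketch; the choice of $y=\sqrt2$ and of continued-fraction convergents as the approximating rationals is mine, not part of the question): the convergents' denominators grow geometrically as the approximation error shrinks, consistent with $\delta(n)>0$ for every $n$.

# Convergents of sqrt(2) = [1; 2, 2, 2, ...] satisfy
# p_k = 2 p_{k-1} + p_{k-2},  q_k = 2 q_{k-1} + q_{k-2}.
p_prev, q_prev, p, q = 1, 1, 3, 2      # the convergents 1/1 and 3/2
for _ in range(8):
    print(f"{p}/{q}: error = {abs(p / q - 2 ** 0.5):.2e}")
    p, p_prev = 2 * p + p_prev, p
    q, q_prev = 2 * q + q_prev, q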
Continuity at accumulation point The following was a homework question for my analysis class. Given any sequence $x_n$ in a metric space $(X; d)$, and $x \in X$, consider the function $f : \mathbb N^\ast = \mathbb N \cup \{\infty\} \to X$ defined by $f(n) = x_n$, for all $n\in \mathbb N$, and $f(\infty) = x$. Prove that there exists a metric on $\mathbb N^\ast$ such that the sequence $x_n$ converges to $x$ in $(X; d)$ if and only if the function $f$ is continuous. Our teacher said that since $\infty$ is the only accumulation point of $\mathbb N^\ast$, if $f$ is continuous at $\infty$, it is continuous on $\mathbb N^\ast$. So, it is enough to show it is continuous at $\infty$. But I did not understand why this is true. It is also possible that I misunderstood what my teacher meant. If you can tell me whether the statement is correct and why, I would be grateful. Thanks in advance.
It's not clear to me what your teacher meant by $\infty$ being the only accumulation point of $\mathbb N^*$, since $\mathbb N^*$ is yet to be equipped with a metric and it makes no sense to speak of accumulation points before that. A metric on $\mathbb N^*$ satisfying the requirement is induced by mapping $\mathbb N^*$ to $[0,1]$ using $n\mapsto1/n$, with $1/\infty:=0$, and using the canonical metric on $[0,1]$. Then all natural numbers are isolated points, and the open neighbourhoods of $\infty$ are the cofinite sets. The function $f$ is continuous if and only if the preimages of all open sets are open. The preimages of open sets not containing $x$ are open because they consist of a finite number of isolated points. The preimages of open sets containing $x$ are open because $x_n$ eventually ends up in every such open set, and thus the preimage is a cofinite set.
Zero divisor in $R[x]$ Let $R$ be commutative ring with no (nonzero) nilpotents. If $f(x) = a_0+a_1x+\cdots+a_nx^n$ is a zero divisor in $R[x]$, how do I show there's an element $b \ne 0$ in $R$ such that $ba_0=ba_1=\cdots=ba_n=0$?
This is the case of Armendariz rings, which I studied briefly last summer. It is an interesting topic. A ring $R$ is called Armendariz if whenever $f(x)=\sum_{i=0}^{m}a_ix^i,\ g(x)=\sum_{j=0}^{n}b_jx^j \in R[x]$ satisfy $f(x)g(x)=0$, then $a_ib_j=0\ \forall\ i,j$. In his 1973 paper "A note on extensions of Baer and p.p.-rings", Armendariz proved that reduced rings are Armendariz, which is a nice generalization of your result. Proof: Let $fg=0$; we may assume $m=n$. We then have $$a_0b_0=0,\\ a_1b_0+a_0b_1=0 ,\\ \vdots \\a_nb_0+\dots +a_0b_n=0$$ Since $R$ is reduced, $a_0b_0=0\implies (b_0a_0)^2=b_0(a_0b_0)a_0=0 \implies b_0a_0=0$. Now left-multiplying $a_1b_0+a_0b_1=0$ by $b_0$ we get $b_0a_1b_0=-b_0a_0b_1=0 \implies (a_1b_0)^2=a_1(b_0a_1b_0)=0 \implies a_1b_0=0$. Similarly we get $a_ib_0=0\ \forall\ 1\leq i \leq n$. Now the original equations reduce to $$a_0b_1=0\\ a_1b_1+a_0b_2=0\\ \vdots\\ a_{n-1}b_1+\dots +a_0b_n=0$$ and by the same process, first $a_0b_1=0$ gives $b_1a_0=0$, and then multiplying $a_1b_1+a_0b_2=0$ on the left by $b_1$ we get $a_1b_1=0$; continuing, we get $a_ib_1=0\ \forall\ 1\le i \le n$. Repeating this we get $a_ib_j=0\ \forall\ 0 \leq i,j \leq n$. $\hspace{5.5cm}\blacksquare$ Some other examples of Armendariz rings are:

*$\Bbb{Z}/n\Bbb{Z}\ \forall\ n$.
*All domains and fields (which are reduced).
*If $A$ and $B$ are Armendariz, then $A(+)B$, in which multiplication is defined by $(a,b)(a',b')=(aa',ab'+a'b)$, is Armendariz.
*Direct products of Armendariz rings.
Fourier cosine series and sum help I have been having some problems with the following problem: Find the Fourier cosine series of the function $\vert\sin x\vert$ in the interval $(-\pi, \pi)$. Use it to find the sums $$ \sum_{n\: =\: 1}^{\infty}\:\ \frac{1}{4n^2-1}$$ and $$ \sum_{n\: =\: 1}^{\infty}\:\ \frac{(-1)^n}{4n^2-1}$$ Any help is appreciated, thank you. Edit: I have gotten as far as working out the Fourier cosine series using the equations for cosine series $$\phi (X) = 1/2 A_0 + \sum_{n\: =\: 1}^{\infty}\:\ A_n \cos\left(\frac{n\pi x}{l}\right)$$ and $$A_m = \frac{2}{l} \int_{0}^{l} \phi (X) \cos\left(\frac{m\pi x}{l}\right) dx $$ I have found $$A_0 = \frac{4}{l}$$ but the rest of the question is a mess on my end, and I don't know how to relate the rest of it back to those sums.
$f(x) = |\sin(x)| \quad \Rightarrow\quad f(x) = \left\{ \begin{array}{l l} -\sin(x) & \quad \forall x \in [-\pi, 0]\\ \sin(x) & \quad \forall x \in [0,\pi]\\ \end{array} \right.$ The associated Fourier coefficients are $$a_n= \frac{1}{\pi}\int_{-\pi}^\pi f(x) \cos(nx)\, dx = \frac{1}{\pi} \left[\int_{-\pi}^0 -\sin (x) \cos(nx)\, dx + \int_{0}^\pi \sin(x) \cos(nx)\, dx\right], \quad n \ge 0$$ $$b_n= \frac{1}{\pi}\int_{-\pi}^\pi f(x) \sin(nx)\, dx = \frac{1}{\pi} \left[\int_{-\pi}^0 -\sin (x) \sin(nx)\, dx + \int_{0}^\pi \sin(x) \sin(nx)\, dx\right], \quad n \ge 1$$ All functions are integrable so we can go on and compute the expressions for $a_n$ and $b_n$. $$a_n = \cfrac{2 (\cos(\pi n)+1)}{\pi(1-n^2)}$$ $$b_n = 0$$ That $b_n = 0$ can be deemed obvious, since the function $f(x) = |\sin(x)|$ is an even function; and $a_n$ could have been calculated as $\displaystyle a_n= \frac{2}{\pi}\int_{0}^\pi f(x) \cos(nx)\, dx $ only because the function is even. The Fourier series is $$\cfrac {a_0}{2} + \sum^{\infty}_{n=1}\left [ a_n \cos(nx) + b_n \sin (nx) \right ]$$ $$= \cfrac {2}{\pi}\left ( 1 + \sum^{\infty}_{n=1} \cfrac{(\cos(\pi n)+1)}{(1-n^2)}\cos(nx)\right )$$ $$= \cfrac {2}{\pi}\left ( 1 + \sum^{\infty}_{n=1} \cfrac{((-1)^n+1)}{(1-n^2)}\cos(nx)\right )$$ $$= \cfrac {2}{\pi}\left ( 1 + \sum^{\infty}_{n=1} \cfrac{2}{(1-4n^2)}\cos(2nx)\right )$$ since for odd $n$, $((-1)^n+1) = 0$, and for even $n$, $((-1)^n+1) = 2$. At this point we can't just assume the function is equal to its Fourier series; it has to satisfy certain conditions. See Convergence of Fourier series. Without wasting time (you still have to prove that it satisfies those conditions), we assume the Fourier series converges to our function, i.e. $$f(x) = |\sin(x)| = \cfrac {2}{\pi}\left ( 1 + \sum^{\infty}_{n=1} \cfrac{2}{(1-4n^2)}\cos(2nx)\right )$$ Note that $x=0$ gives $\cos(2nx) = 1$, so $$f(0) = |\sin(0)| = \cfrac {2}{\pi}\left ( 1 + 2\sum^{\infty}_{n=1} \cfrac{1}{(1-4n^2)}\right ) =0$$ which implies that $$\sum^{\infty}_{n=1} \cfrac{1}{(1-4n^2)} = \cfrac {-1}{2}$$ and $$\boxed {\displaystyle\sum^{\infty}_{n=1} \cfrac{1}{(4n^2 -1)}= -\sum^{\infty}_{n=1} \cfrac{1}{(1-4n^2)} = \cfrac {1}{2}}$$ Observe again that when $x = \cfrac \pi 2$, $\cos (2nx) = \cos(n \pi) = (-1)^n$, thus $$f \left (\cfrac \pi 2 \right) = \left |\sin \left (\cfrac \pi 2\right )\right | = \cfrac {2}{\pi}\left ( 1 + 2\sum^{\infty}_{n=1} \cfrac{(-1)^n}{(1-4n^2)}\right ) =1$$ which implies that $$\sum^{\infty}_{n=1} \cfrac{(-1)^n}{(1-4n^2)} = \cfrac {1}{4}(\pi -2)$$ and $$\boxed {\displaystyle\sum^{\infty}_{n=1} \cfrac{(-1)^n}{(4n^2 -1)}= -\sum^{\infty}_{n=1} \cfrac{(-1)^n}{(1-4n^2)} = \cfrac {1}{4}(2-\pi)}$$
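A quick numerical check of the two boxed sums (a Python sketch, not part of the original answer):

import math

s1 = sum(1.0 / (4 * n * n - 1) for n in range(1, 200001))
s2 = sum((-1) ** n / (4 * n * n - 1) for n in range(1, 200001))
print(s1, "should approach", 0.5)
print(s2, "should approach", (2 - math.pi) / 4)   # about -0.285398

The first sum also telescopes, since $\frac{1}{4n^2-1} = \frac12\left(\frac{1}{2n-1}-\frac{1}{2n+1}\right)$, which independently confirms the value $\frac12$.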
Integral of product of two functions in terms of the integral of the other function Problem: Let $f$ and $g$ be two continuous functions on $[ a,b ]$ and assume $g$ is positive. Prove that $$\int_{a}^{b}f(x)g(x)dx=f(\xi )\int_{a}^{b}g(x)dx$$ for some $\xi$ in $[ a,b ]$. Here is my solution: Since $f(x)$ and $g(x)$ are continuous, then $f(x) g(x)$ is continuous. Using the Mean value theorem, there exists a $\xi$ in $[ a,b ]$ such that $\int_{a}^{b}f(x)g(x)dx= f(\xi)g(\xi) (b-a) $ and using the Mean value theorem again, we can get $g(\xi) (b-a)=\int_{a}^{b}g(x)dx$ which yields the required equality. Is my proof correct? If not, please let me know how to correct it.
The integrals on both sides of the problem are well defined since $f$ and $g$ are continuous, and $g$ is positive so $ \displaystyle \int^b_a g(x) dx > 0.$ Thus there exists some constant $K$ such that $$ \int^b_a f(x) g(x) dx = K\int^b_a g(x) dx . $$ If $\displaystyle K > \max_{x\in [a,b]} f(x) $ then the left side is smaller than the right. If $\displaystyle K < \min_{x\in [a,b]} f(x) $ then the left side is larger than the right. Thus $\min f \leq K \leq \max f$, and since $f$ is continuous, the Intermediate Value Theorem gives $K \in f( [a,b] )$, i.e. $K = f(\xi)$ for some $\xi \in [a,b]$.
Characteristic polynomial equals minimal polynomial iff $x, Mx, \ldots, M^{n-1} x$ are linearly independent I have been trying to compile conditions for when characteristic polynomials equal minimal polynomials and I have found a result that I think is fairly standard but I have not been able to come up with a proof for. Any references to a proof would be greatly appreciated. Let $M\in M_n(\Bbb F)$ and let $c_M$ be the characteristic polynomial of $M$ and $p_M$ be the minimal polynomial of $M$. How do we show that $p_M = c_M$ if and only if there exists a column vector $x$ such that $x, Mx, \ldots M^{n-1} x$ are linearly independent ?
Here is a simple proof for the "only if" part, using rational canonical form. For clarity's sake, I'll assume that $M$ is a linear map. If $M$ is such that $p_M=c_M$, then it is similar to $F$, the companion matrix of $p_M$; i.e. there is a basis $\beta = (v_1, \dots ,v_n)$ for $V$ under which the matrix of $M$ is $F$. Let $e_i$ ($i \in \{1,\dots,n\}$) be the vector with $1$ as its $i$-th component and $0$ anywhere else; then $F^{i} e_{1} = e_{i+1}$ for $i \in \{1, \dots , n-1\}$. But $e_1, \dots , e_n$ are just the coordinates of the vectors $v_1, \dots , v_n$ under the basis $\beta$ itself. So this means that $M^i v_1 = v_{i+1}$ for $i \in \{1, \dots , n-1\}$. In other words, $(v_1, M v_1, \dots , M^{n-1} v_1) = (v_1, v_2, \dots , v_n)$ is a basis for $V$, which is of course linearly independent.
What exactly happens, when I do a bit rotation? I'm sitting in my office and having difficulties to get to know that exactly happens, when I do a bit rotation of a binary number. An example: I have the binary number 1110. By doing bit rotation, I get 1101, 1011, 0111 ... so I have 14 and get 7, 11, 13 and 14 again. I can't get a rule out of it... can somebody help me? Excuse my bad English, I'm just so excited about this problem.
Interpret the bits as representing a number in standard binary representation (as you are doing). Then, bit rotation to the right is division by $2$ modulo $15$, or more generally, modulo $2^n-1$ where $n$ is the number of bits. Put more simply, if the number is even, divide it by $2$, while if it is odd, add $15$ and then divide by $2$. So from $14$ you get $7$. Since $7$ is odd, add $15$ to get $22$ and divide by $2$ to get $11$. Add $15$ and divide by $2$ to get $13$ and so on. The calculations are somewhat different if the bits are interpreted in $2$s-complement representation.
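A small Python sketch of the rule (the helper names are mine):

N_BITS = 4
MOD = (1 << N_BITS) - 1          # 2^n - 1 = 15 for 4 bits

def rotate_right(x):
    # move the lowest bit to the top position
    return (x >> 1) | ((x & 1) << (N_BITS - 1))

def halve_mod(x):
    # division by 2 modulo 2^n - 1, as described above
    return x // 2 if x % 2 == 0 else (x + MOD) // 2

x = 0b1110                        # 14
for _ in range(4):
    assert rotate_right(x) == halve_mod(x)
    x = rotate_right(x)
    print(x)                      # 7, 11, 13, 14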
Compact Sets in Projective Space Consider the projective space ${\mathbb P}^{n}_{k}$ with field $k$. We can naturally give this the Zariski topology. Question: What are the (proper) compact sets in this space? Motivation: I wanted nice examples of spaces and their corresponding compact sets; usually my spaces are Hausdorff and my go-to topology for non-Hausdorff-ness is the Zariski topology. I wasn't really able to find any proper compact sets which makes me think I'm doing something wrong here.
You are in for a big surprise, james: every subset of $\mathbb P^n_k$ is quasi-compact. This is true more generally for any noetherian space, a space in which every decreasing sequence of closed sets is stationary. However: the compact subsets of $\mathbb P^n_k$ are the finite sets of points such that no point is in the closure of another.

Reminder: A topological space $X$ is quasi-compact if from every open cover of $X$ a finite cover can be extracted. A compact space is a Hausdorff quasi-compact space.

Bibliography: Bourbaki, Commutative Algebra, Chapter II, §4.2.
A fair coin is tossed $n$ times by two people. What is the probability that they get the same number of heads? Say we have Tom and John, each tosses a fair coin $n$ times. What is the probability that they get the same number of heads? I tried to do it this way: individually, the probability of getting $k$ heads for each is equal to $$\binom{n}{k} \Big(\frac12\Big)^n.$$ So, we can do $$\sum^{n}_{k=0} \left( \binom{n}{k} \Big(\frac12\Big)^n \cdot \binom{n}{k}\Big(\frac12\Big)^n \right)$$ which results in something very ugly. This ugly thing is equal to the 'simple' answer in the back of the book, $\binom{2n}{n}\left(\frac12\right)^{2n}$ -- the equality was verified by WolframAlpha, but it's not obvious when you look at it. So I think there's a much easier way to solve this; can someone point it out? Thanks.
As you have noted, the probability is $$ p_n = \frac{1}{4^n} \sum_{k=0}^n \binom{n}{k} \binom{n}{k} = \frac{1}{4^n} \sum_{k=0}^n \binom{n}{k} \binom{n}{n-k} = \frac{1}{4^n} \binom{2n}{n} $$ The middle equality uses symmetry of binomials, and last used Vandermonde's convolution identity.
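A quick check of the identity with Python's exact integer arithmetic (sketch):

from math import comb

for n in (1, 2, 5, 10, 20):
    ugly = sum(comb(n, k) ** 2 for k in range(n + 1))
    simple = comb(2 * n, n)
    print(n, ugly == simple)      # True every time: sum C(n,k)^2 = C(2n,n)

(math.comb requires Python 3.8+.)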
Whether $f(x)$ is reducible in $ \mathbb Z[x] $? Suppose that $f(x) \in \mathbb Z[x] $ has an integer root. Does it mean $f(x)$ is reducible in $\mathbb Z[x]$?
No. $x-2$ is irreducible but has an integer root $2$. If the degree of $f$ is greater than one, then yes. If $a$ is a root of $f(x)$, carry out synthetic division by $x-a$. You will get $f(x) = (x-a)g(x) + r$, and since $f(a) = 0$, $r=0$.
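A sketch of the synthetic division step in Python (the function name is mine):

def synthetic_division(coeffs, a):
    # divide f(x) (coefficients listed from the highest degree down)
    # by (x - a); the remainder equals f(a)
    out = [coeffs[0]]
    for c in coeffs[1:]:
        out.append(c + a * out[-1])
    return out[:-1], out[-1]      # (quotient g(x), remainder r)

# f(x) = x^3 - 7x + 6 has the integer root 2:
g, r = synthetic_division([1, 0, -7, 6], 2)
print(g, r)   # [1, 2, -3] 0, so f(x) = (x - 2)(x^2 + 2x - 3)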
Lambert series expansion identity I have a question which goes like this: How can I show that $$\sum_{n=1}^{\infty} \frac{z^n}{\left(1-z^n\right)^2} =\sum_{n=1}^\infty \frac{nz^n}{1-z^n}$$ for $|z|<1$?
Hint: Try using the expansions $$ \frac{1}{1-x}=1+x+x^2+x^3+x^4+x^5+\dots $$ and $$ \frac{1}{(1-x)^2}=1+2x+3x^2+4x^3+5x^4+\dots $$ Expansion: $$ \begin{align} \sum_{n=1}^\infty\frac{z^n}{(1-z^n)^2} &=\sum_{n=1}^\infty\sum_{k=0}^\infty(k+1)z^{kn+n}\\ &=\sum_{n=1}^\infty\sum_{k=1}^\infty kz^{kn}\\ &=\sum_{k=1}^\infty\sum_{n=1}^\infty kz^{kn}\\ &=\sum_{k=1}^\infty\sum_{n=0}^\infty kz^{kn+k}\\ &=\sum_{k=1}^\infty\frac{kz^k}{1-z^k} \end{align} $$
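Both sides are easy to compare numerically for a sample point inside the disc (Python sketch):

z, N = 0.3, 200   # truncation at 200 terms is far beyond double precision here

lhs = sum(z ** n / (1 - z ** n) ** 2 for n in range(1, N))
rhs = sum(n * z ** n / (1 - z ** n) for n in range(1, N))
print(lhs, rhs)   # both about 0.761164

Both sums also equal $\sum_{m\ge1}\sigma(m)z^m$, where $\sigma(m)$ is the sum of divisors of $m$, which is exactly what the double-sum manipulation above shows.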
Can't write context-free grammar for language $L=\{a^n\#a^{n+2m}, n,m \geq 1\}$ I've spent some time on this problem and it seems I am not able to write a context-free grammar for the language $L=\{a^n\#a^{n+2m}, n\geq 1\wedge m \geq 1, n\in \mathbb{N} \wedge m\in\mathbb{N}\}$. I am sure I am missing something obvious, but I can't figure out what. I understand which strings are in $L$: for odd $n$:

1 # 3, 5, 7, 9, ...
3 # 5, 7, 9, 11, ...
5 # 7, 9, 11, 13, ...

i.e. one a followed by # followed by 3 or 5 or ... a's; three a's followed by 5 or 7 or 9 a's, ... For even $n$:

2 # 4, 6, 8, 10, ...
4 # 6, 8, 10, 12, ...
6 # 8, 10, 12, 14, ...

i.e. two a's followed by # followed by 4 or 6 or 8 ... a's, and so on. I am struggling to generate the rules. I understand that I can start with one or two a's, then #, then at least 3 or 4 a's. What I can't figure out is how to recursively manage the increment of $n$, i.e. to make sure, for example, that there are at least 7 a's after 5 a's followed by #. I also tried to check whether $L$ is a CFL, but with no success :( Any hints in the right direction, ideas and solutions are welcomed!
I think this is the solution: $S \rightarrow aLaT$ $L \rightarrow aLa \mid \#$ $T \rightarrow Taa \mid aa$ This language is actually just $\{ a^n\#a^n \mid n \geq 1\} \circ (aa)^+$, where $\circ$ is the concatenation operator. Which is why this CFG is so easy to construct, as it is an easily expressible language followed by an arbitrary even number of a's.
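A brute-force sanity check of the grammar against the language definition (Python sketch; the length bound of 14 is arbitrary):

def in_language(s):
    # direct definition: a^n # a^(n+2m) with n >= 1, m >= 1
    left, _, right = s.partition('#')
    n, j = len(left), len(right)
    return n >= 1 and j - n >= 2 and (j - n) % 2 == 0

def generated(max_len):
    # S => a L a T => a (a^k # a^k) a (aa)^m = a^(k+1) # a^(k+1+2m)
    out = set()
    for k in range(max_len):
        for m in range(1, max_len):
            s = 'a' * (k + 1) + '#' + 'a' * (k + 1 + 2 * m)
            if len(s) <= max_len:
                out.add(s)
    return out

candidates = {'a' * i + '#' + 'a' * j for i in range(14) for j in range(14 - i)}
print({s for s in candidates if in_language(s)} == generated(14))   # True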
Dense subset of given space If $E$ is a Banach space and $A$ is a subset such that $$A^{\perp}:= \{T \in E^{\ast}: T(A)=0\}=0,$$ then $$\overline{A} = E.$$ I don't see why this is true. Does $E$ have to be Banach? Thanks
Did you mean to say that $A$ is a vector subspace, or does $\overline{A}$ mean the closed subspace of $E$ generated by $A$? If $A$ were only assumed to be a subset and $\overline{A}$ means the closure, then this is false. E.g., let $A$ be the unit ball of $E$. Suppose that the closed subspace of $E$ generated by $A$, $\overline{\mathrm{span}}(A)$, is not $E$. Let $x\in E\setminus \overline{\mathrm{span}}(A)$. Using Hahn-Banach you can show that there is an element $T$ of $E^*$ such that $T(A)=\{0\}$ and $T(x)=1$. (Start by defining $T$ on the subspace $\overline{\mathrm{span}}(A)+\mathbb C x$.) No, this does not depend on $E$ being complete.
What is a real world application of polynomial factoring? The wife and I are sitting here on a Saturday night doing some algebra homework. We're factoring polynomials and had the same thought at the same time: when will we use this? I feel a bit silly because it always bugged me when people asked that in grade school. However, we're both working professionals (I'm a programmer, she's a photographer) and I can't recall ever considering polynomial factoring as a solution to the problem I was solving. Are there real world applications where factoring polynomials leads to solutions? or is it a stepping-stone math that will open my mind to more elaborate solutions that I actually will use? Thanks for taking the time!
You need polynomial factoring (or what's the same, root finding) for higher mathematics. For example, when you are looking for the eigenvalues of a matrix, they appear as the roots of a polynomial, the "characteristic equation". I suspect that none of this will be of any use to someone unless they continue their mathematical education at least to the junior classes like linear algebra (which deals with matrices) and differential equations (where polynomials also appear). And I would also bet that the majority of people who take these classes never end up using them in "real life".
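For instance (a small numpy sketch, not tied to any particular application):

import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# det(xI - A) = x^2 - 4x + 3 = (x - 1)(x - 3)
coeffs = np.poly(A)                # [1., -4., 3.]
print(np.roots(coeffs))            # [3., 1.]  -- factoring = root finding
print(np.linalg.eigvals(A))        # the same numbers, as eigenvalues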
A metric space in which every infinite set has a limit point is separable I am struggling with one problem. I need to show that if $X$ is a metric space in which every infinite subset has a limit point, then $X$ is separable (has a countable dense subset, in other words). I am trying to use the result I proved prior to this problem, namely that every separable metric space has a countable base (i.e. any open subset of the metric space can be expressed as a union of members of a countable collection of open sets). I am not sure this is the right way; can anyone outline the proof? Thanks a lot in advance!
Let $\langle X,d\rangle$ be a metric space in which each infinite subset has a limit point. For any $\epsilon>0$ an $\epsilon$-mesh in $X$ is a set $M\subseteq X$ such that $d(x,y)\ge\epsilon$ whenever $x$ and $y$ are distinct points of $M$. Every $\epsilon$-mesh in $X$ is finite, since an infinite $\epsilon$-mesh would be an infinite set with no limit point. Let $\mathscr{M}(\epsilon)$ be the family of all $\epsilon$-meshes in $X$, and consider the partial order $\langle \mathscr{M}(\epsilon),\subseteq\rangle$. This partial order must have a maximal element: if it did not have one, there would be an infinite ascending chain of $\epsilon$-meshes $M_0\subsetneq M_1\subsetneq M_2\subsetneq\dots$, and $\bigcup_n M_n$ would then be an infinite $\epsilon$-mesh. Let $M_\epsilon$ be a maximal $\epsilon$-mesh; I claim that $$X=\bigcup_{x\in M_\epsilon}B(x,\epsilon)\;,$$ where as usual $B(x,\epsilon)$ is the open ball of radius $\epsilon$ centred at $x$. That is, each point of $X$ is within $\epsilon$ of some point of $M_\epsilon$. To see this, suppose that $y\in X\setminus \bigcup\limits_{x\in M_\epsilon}B(x,\epsilon)$. Then $d(y,x)\ge\epsilon$ for every $x\in M_\epsilon$, and $M_\epsilon \cup \{y\}$ is therefore an $\epsilon$-mesh strictly containing $M_\epsilon$, contradicting the maximality of $M_\epsilon$. Now for each $n\in\mathbb{N}$ let $M_n$ be a maximal $2^{-n}$-mesh, and let $$D=\bigcup_{n\in\mathbb{N}}M_n\;.$$ Each $M_n$ is finite, so $D$ is countable, and you should have no trouble showing that $D$ is dense in $X$.
Convergence of $\lim_{n \to \infty} \frac{5 n^2 +\sin n}{3 (n+2)^2 \cos(\frac{n \pi}{5})},$ I'm in trouble with this limit. The numerator diverges positively, but I do not understand how to operate on the denominator. $$\lim_{n \to \infty} \frac{5 n^2 +\sin n}{3 (n+2)^2 \cos(\frac{n \pi}{5})},$$ $$\lim_{n \to \infty} \frac{5 n^2 +\sin n}{3 (n+2)^2 \cos(\frac{n \pi}{5})}= \lim_{x\to\infty}\frac {n^2(5 +\frac{\sin n}{n^2})}{3 (n+2)^2 \cos(\frac{n \pi}{5})} \cdots$$
Let's make a few comments.

1. Note that the terms of the sequence are always defined: for $n\geq 0$, $3(n+2)^2$ is greater than $0$; and $\cos(n\pi/5)$ can never be equal to zero (you would need $n\pi/5$ to be an odd multiple of $\pi/2$, and this is impossible).
2. If $a_n$ and $b_n$ both have limits as $n\to\infty$, then so does $a_nb_n$, and the limit of $a_nb_n$ is the product of the limits of $a_n$ and of $b_n$, $$\lim_{n\to\infty}a_nb_n = \left(\lim_{n\to\infty}a_n\right)\left(\lim_{n\to\infty}b_n\right).$$
3. If $b_n$ has a limit as $n\to\infty$, and the limit is not zero, then $\frac{1}{b_n}$ has a limit as $n\to\infty$, and the limit is the reciprocal of the limit of $b_n$: $$\lim_{n\to\infty}\frac{1}{b_n} = \frac{1}{\lim\limits_{n\to\infty}b_n},\qquad \text{if }\lim_{n\to\infty}b_n\neq 0.$$

As a consequence of 2 and 3, we have:

* If $\lim\limits_{n\to\infty}a_nb_n$ exists, and $\lim\limits_{n\to\infty}a_n$ exists and is not equal to $0$, then $\lim\limits_{n\to\infty}b_n$ exists: just write $\displaystyle b_n = \left(a_nb_n\right)\frac{1}{a_n}$.
* Equivalently, if $\lim\limits_{n\to\infty}a_n$ exists and is not zero, and $\lim\limits_{n\to\infty}b_n$ does not exist, then $\lim\limits_{n\to\infty}a_nb_n$ does not exist either.

So, consider $$a_n = \frac{5n^2 + \sin n}{3(n+2)^2},\qquad b_n =\frac{1}{\cos(n\pi/5)}.$$ We have, as you did: $$\begin{align*} \lim_{n\to\infty}a_n &= \lim_{n\to\infty}\frac{5n^2 + \sin n}{3(n+2)^2}\\ &= \lim_{n\to\infty}\frac{n^2\left(5 + \frac{\sin n}{n^2}\right)}{3n^2(1 + \frac{2}{n})^2}\\ &=\lim_{n\to\infty}\frac{5 + \frac{\sin n}{n^2}}{3(1+\frac{2}{n})^2}\\ &= \frac{5 + 0}{3(1+0)^2} = \frac{5}{3}\neq 0. \end{align*}$$ What about the sequence $(b_n)$? If $n=(2k+1)5$ is an odd multiple of $5$, then $$b_n = b_{(2k+1)5} = \frac{1}{\cos\frac{n\pi}{5}} = \frac{1}{\cos((2k+1)\pi)} = -1;$$ so the subsequence $b_{(2k+1)5}$ is constant, and converges to $-1$. On the other hand, if $n=10k$ is an even multiple of $5$, then $$b_n = \frac{1}{\cos\frac{n\pi}{5}} = \frac{1}{\cos(2k\pi)} = 1,$$ so the subsequence $b_{10k}$ is constant and converges to $1$. Since a sequence converges if and only if every subsequence converges and converges to the same thing, but $(b_n)$ has two subsequences that converge to different things, it follows that $(b_n)$ does not converge. (It also does not diverge to $\infty$ or to $-\infty$, since there are subsequences that are constant.) And so, what can we conclude, given our observations above about products of sequences?
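Numerically (a Python sketch): $a_n$ settles near $5/3$ while $b_n$ keeps hitting both $+1$ and $-1$, so the product keeps oscillating between values near $\pm 5/3$.

import math

def a(n): return (5 * n ** 2 + math.sin(n)) / (3 * (n + 2) ** 2)
def b(n): return 1 / math.cos(n * math.pi / 5)

for n in (1000, 1005, 10 ** 6, 10 ** 6 + 5):
    print(n, round(a(n), 4), round(b(n), 4), round(a(n) * b(n), 4))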
commuting matrices & polynomials 1 I need help on this problem: Problem: Find two 3x3 matrices $A$ and $B$ that commute with each other, such that neither $A$ is a polynomial in $B$ nor $B$ is a polynomial in $A$.
$A=\operatorname{diag}(1,1,2)$ and $B=\begin{pmatrix}1&1&0\\0&1&0\\0&0&1\end{pmatrix}$. Then $AB=BA$; $B$ is not a polynomial in $A$ ($B$ is not a diagonal matrix, while every polynomial in $A$ is); and $A$ is not a polynomial in $B$: any polynomial $p$ with $p(B)=A$ would have to satisfy $p\left(\begin{smallmatrix}1&1\\0&1\end{smallmatrix}\right)=\operatorname{diag}(1,1)$, which forces $p(1)=1$ (for instance $p(x)=1+(x-1)^2$ meets this), while the last diagonal entry forces $p(1)=2$, a contradiction. J. Vermeer
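A quick numpy verification of the commuting part (sketch):

import numpy as np

A = np.diag([1, 1, 2])
B = np.array([[1, 1, 0],
              [0, 1, 0],
              [0, 0, 1]])

print(np.array_equal(A @ B, B @ A))      # True: A and B commute

# Any polynomial in the diagonal matrix A stays diagonal, so it can
# never equal the non-diagonal B:
print(A @ A, 3 * A - 2 * np.eye(3, dtype=int), sep="\n")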
Is $O(\frac{1}{n}) = o(1)$? Sorry about yet another big-Oh notation question, I just found it very confusing. If $T(n)=\frac{5}{n}$, is it true that $T(n)=O(\frac{1}{n})$ and $T(n) = o(1)$? I think so because (if $h(n)=\frac{1}{n}$) $$ \lim_{n \to \infty} \frac{T(n)}{h(n)}=\lim_{n \to \infty} \frac{\frac{5}{n}}{\frac{1}{n}}=5>0 , $$ therefore $T(n)=O(h(n))$. At the same time (if $h(n)=1$) $$ \lim_{n \to \infty} \frac{T(n)}{h(n)}=\frac{(\frac{5}{n})}{1}=0, $$ therefore $T(n)=o(h(n))$. Thanks!
If $x_n = O(1/n)$, this means there exist $N$ and $C$ such that for all $n > N$, $|x_n| \le C|1/n|$. Hence $$ \lim_{n \to \infty} \frac{|x_n|}{1} \le \lim_{n \to \infty} \frac{C|1/n|}{1} = 0. $$ This means if $x_n = O(1/n)$ then $x_n = o(1)$. The converse is not true, though. Saying that $x_n = o(1)$ only means $x_n \to 0$, but there are sequences that go to zero and are not $O(1/n)$ (think of $1/\sqrt{n}$ for instance). Hope that helps,
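A numeric illustration of the one-way implication (Python sketch):

import math

# 1/sqrt(n) -> 0, so it is o(1); but (1/sqrt(n)) / (1/n) = sqrt(n) -> infinity,
# so 1/sqrt(n) is NOT O(1/n).
for n in (10 ** 2, 10 ** 4, 10 ** 6):
    print(n, 1 / math.sqrt(n), (1 / math.sqrt(n)) / (1 / n))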
On the Origin and Precise Definition of the Term 'Surd' So, in the course of last week's class work, I ran across the Maple function surd() that takes the real part of an nth root. However, conversation with my professor and my own research have failed to produce even an adequate definition of the term, much less a good reason for why it is used in that context in Maple. Various dictionaries indicate that it refers to certain subsets (perhaps all of?) the irrationals, while the Wikipedia reference link uses it interchangeably with radical. However, neither of those jive with the Maple interpretation as $\mbox{Surd}(3,x) \neq\sqrt[3]{x}\;\;\;\;\;\;\;x<0$. So, the question is: what is a good definition for "surd"? For bonus points, I would be fascinated to see an origin/etymology of the word as used in mathematical context.
An irrational number that is a root of a rational number is called a surd. An example is $\sqrt{2}$, a square root of $2$.
Can we say a Markov chain with only isolated states is time reversible? By "isolated", I mean that each state of this Markov chain has 0 probability to move to another state, i.e. transition probability $p_{ij} = 0$ for $ i \ne j$. Thus, there isn't a unique stationary distribution. But by definition, since for any stationary distribution $\pi$, we have $$ \pi_{i}p_{ij} = 0 = \pi_{j}p_{ji}, $$ it seems that we can still call this Markov chain time reversible. Does the concept "time reversible" still make sense in this situation? A bit of background: I was asked to find a Markov chain, with certain restrictions, that is NOT time reversible. But I found that if the stationary distribution exists, the chain always comes out reversible. So I guess my best chance is a chain that doesn't have a unique stationary distribution. Maybe in this situation we can't call the chain reversible.
I do not think that constructing a Markov chain with isolated states will give you a time-irreversible Markov chain. Consider the case when you have one isolated state. Since an isolated state can never be reached from any other state, your chain is actually a union of two different Markov chains:

*A Markov chain that always stays in the isolated state (which is time reversible by definition), and
*A Markov chain on the non-isolated states, which may or may not be time reversible.

Thus, I do not think the above strategy will work.
How to determine or visualize level curves Let $f:\mathbb{C}\to\mathbb{C}$ be given by $f(z)=\int_0^z \frac{1-e^t}{t} dt-\ln z$ and put $g(x,y)=\text{Re}(f(z))$. Using a computer, how can one determine the curve $g(x,y)=0$? Thanks for the help.
Using Mathematica (note that $\int_0^z \frac{1-e^t}{t}\, dt = \gamma + \log z - \operatorname{Ei}(z)$, so $f(z) = \gamma - \operatorname{Ei}(z)$): ContourPlot[With[{z = x + I y}, Re[EulerGamma - ExpIntegralEi[z]]] == 0, {x, -20, 20}, {y, -20, 20}]
Finding a correspondence between $\{0,1\}^A$ and $\mathcal P(A)$ I got this question in homework: Let $\{0,1\}^A$ be the set of all functions from $A$ (not necessarily a finite set) to $\{0,1\}$. Find a correspondence (function) between $\{0,1\}^A$ and $\mathcal P(A)$ (the power set of $A$). Prove that this correspondence is one-to-one and onto. I don't know where to start, so I need a hint. What does it mean to find a correspondence? I'm not really supposed to define a function, right? I guess once I have the correspondence defined somehow, the proof will be easier. Any ideas? Thanks!
I'll try to say this without all the technicalities that accompany some of the earlier answers. Let $B$ be a member of $\mathcal{P}(A).$ That means $B\subseteq A$. You want to define a function $f$ corresponding to the set $B$. If $x\in A$, then what is $f(x)$? It is: $f(x)=1$ if $x\in B$ and $f(x) = 0$ if $x\not\in B$. After that, you need to show that this correspondence between $B$ and $f$ is really a one-to-one correspondence between the set of all subsets of $A$ and the set of all functions from $A$ into $\{0,1\}$. It has to be "one-to-one in both directions"; i.e. you need to check both, and you need to check that the word "all" is correct in both cases.
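In Python the two directions of the correspondence look like this (a sketch; dicts play the role of functions $A\to\{0,1\}$):

A = {'p', 'q', 'r'}

def subset_to_function(B):
    # B in P(A)  ->  its indicator function
    return {x: 1 if x in B else 0 for x in A}

def function_to_subset(f):
    # f: A -> {0,1}  ->  {x in A : f(x) = 1}
    return {x for x in A if f[x] == 1}

B = {'p', 'r'}
f = subset_to_function(B)
print(f)                                 # {'p': 1, 'q': 0, 'r': 1} (some order)
print(function_to_subset(f) == B)        # True: the maps invert each other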
Check if point on circle is in between two other points (Java) I am struggling with the following question. I'd like to check whether a point on a circle lies between two other points, i.e. whether the point falls inside the boundary. It is easy to calculate when the boundary doesn't go over 360 degrees. But when the boundary goes over 360 degrees (e.g. 270° - 180°), the second point is smaller than the first point of the boundary. And then I don't know how to check if my point on the circle is between the boundary points, because I cannot check "first boundary point" < "my point" < "second boundary point". Is there an easy way to check this? Either a mathematical function or an algorithm would be good.
From the question comments with added symbols I have a circle with a certain sector blocked. Say for example the sector between $a = 90°$ and $b = 180°$ is blocked. I now want to check if a point $P = (x,y)$ in the circle of center $C = (x_0,y_0)$ of radius $r$ is in this sector or not to see if it is a valid point or not. In other words what you need is the angle the $PC$ line forms with the $x$ axis of your system of reference. And that's already been answered here: $$v = \arccos\left(\frac{xx_0 + yy_0}{\sqrt{(x^2+y^2) \cdot (x_0^2+y_0^2)}}\right)$$ Notice that you still need to calculate the distance $\bar{PC}$ to make sure your point is in the circle to begin with.
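A sketch of the full check in Python (the same logic ports directly to Java with Math.atan2; function and parameter names are mine). Using atan2 sidesteps quadrant issues, and taking angles modulo 360 handles sectors that wrap past 360°:

import math

def in_sector(px, py, cx, cy, r, start_deg, end_deg):
    # inside the circle, and inside the sector swept counterclockwise
    # from start_deg to end_deg (which may wrap past 360)?
    dx, dy = px - cx, py - cy
    if dx * dx + dy * dy > r * r:
        return False
    theta = math.degrees(math.atan2(dy, dx)) % 360.0
    width = (end_deg - start_deg) % 360.0
    offset = (theta - start_deg) % 360.0
    return offset <= width

print(in_sector(1, 1, 0, 0, 2, 270, 180))    # True: 45 deg lies on the arc 270->180
print(in_sector(-1, -1, 0, 0, 2, 270, 180))  # False: 225 deg does not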
calculus textbook avoiding "nice" numbers: all numbers are decimals with 2 or 3 sig figs Many years ago, my father had a large number of older used textbooks. I seem to remember a calculus textbook with a somewhat unusual feature, and I am wondering if the description rings a bell with anyone here. Basically, this was a calculus textbook that took the slightly unusual route of avoiding "nice" numbers in all examples. The reader was supposed to always have a calculator at their side, and evaluate everything as a decimal, and only use 2 or 3 significant figures. So for instance, rather than asking for the $\int_1^{\sqrt3} \frac{1}{1+x^2}$, it might be from $x=1.2$ to $x=2.6$, say. The author had done this as a deliberate choice, since most "real-life" math problems involve random-looking decimal numbers, and not very many significant digits. Does this sound familiar to anybody? Any ideas what this textbook might have been?
Though this is probably not the book you are thinking of, Calculus for the Practical Man by Thompson does this. It is, most famously, the book that Richard Feynman learned calculus from, and was part of a whole series of math books "for the practical man". The reason I do not think it is the particular book you are thinking of is that the most recent edition was published in 1946, so there would be no mention of a calculator.
If a topological space has $\aleph_1$-calibre and cardinality at most $2^{\aleph_0}$ must it be star-countable? If a topological space $X$ has $\aleph_1$-calibre and the cardinality of $X$ is $\le 2^{\aleph_0}$, must it be star-countable? A topological space $X$ is said to be star-countable if whenever $\mathscr{U}$ is an open cover of $X$, there is a countable subspace $A$ of $X$ such that $X = \operatorname{St}(A,\mathscr{U})$.
Under CH the space is separable (hence star countable). Proof (Ofelia): On the contrary, suppose that $X$ is not separable. Under CH write $X = \{ x_\alpha : \alpha \in \omega_1 \}$ and for each $\alpha$ in $\omega_1$ define $U_\alpha$ = the complement of $\operatorname{cl}(\{ x_\beta : \beta \le \alpha \})$. The family of the $U_\alpha$ is a decreasing family of non-empty open sets; since $\aleph_1$ is a calibre of $X$, the intersection of all $U_\alpha$ must be non-empty (contradiction!)
Computational complexity of least square regression operation In a least squares regression algorithm, I have to do the following operations to compute the regression coefficients:

*Matrix multiplication, complexity: $O(C^2N)$
*Matrix inversion, complexity: $O(C^3)$
*Matrix multiplication, complexity: $O(C^2N)$
*Matrix multiplication, complexity: $O(CN)$

where $N$ is the number of training examples and $C$ is the total number of features/variables. How can I determine the overall computational complexity of this algorithm? EDIT: I studied least squares regression from the book Introduction to Data Mining by Pang-Ning Tan. The explanation about linear least squares regression is available in the appendix, where a solution by the use of the normal equation is provided (something of the form $a=(X^TX)^{-1}X^Ty$, which involves 3 matrix multiplications and 1 matrix inversion). My goal is to determine the overall computational complexity of the algorithm. Above, I have listed the 4 operations needed to compute the regression coefficients with their own complexity. Based on this information, can we determine the overall complexity of the algorithm? Thanks!
In this work https://www.research-collection.ethz.ch/bitstream/handle/20.500.11850/153646/eth-6011-01.pdf?sequence=1&isAllowed=y two implementation possibilities (the Gaussian elimination alternative vs. using the QR decomposition) are discussed on pages 32 and 33, if you are interested in the actual cost FLOP-wise.
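Putting the pieces together: the overall cost is just the sum of the four terms, $O(C^2N) + O(C^3) + O(C^2N) + O(CN) = O(C^2N + C^3)$, which is $O(C^2N)$ in the usual regime $N \ge C$. A numpy sketch of the normal-equation solve with the cost of each step marked (dimensions are arbitrary):

import numpy as np

N, C = 10_000, 20
X = np.random.randn(N, C)
y = X @ np.random.randn(C) + 0.1 * np.random.randn(N)

G = X.T @ X                 # O(C^2 N)
Xty = X.T @ y               # O(C N)
a = np.linalg.inv(G) @ Xty  # O(C^3) inversion + O(C^2) multiply
print(a[:5])

(In practice one solves $Ga = X^Ty$ with np.linalg.solve, or uses a QR factorization as the linked thesis discusses, rather than forming the inverse explicitly.)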
proof of the Cauchy integral formula $ D=D_{r}(c), r > 0 .$ Show that if $f$ is continuous in $\overline{D}$ and holomorphic in $D$, then for all $z\in D$: $$f(z)=\frac{1}{2\pi i} \int_{\partial D}\frac{f(\zeta)}{\zeta - z} d\zeta$$ I don't understand this question because I don't see how it is different to the special case of the Cauchy Integral formula. I would be very glad if somebody could tell me what the difference is and how to show that it is true.
If I understand correctly, $D_r(c)$ is the open ball centered at $c$ with radius $r$? If this is the case, the difference between the two is that here the centre $c$ is fixed, while in the special case the centre "moves" with the point. Fix $c$ then; we want to show that for every $z\in D_r(c)$ we have that $$f(z)=\frac{1}{2\pi i}\int_{\partial D_r(c)}\frac{f(\zeta)}{\zeta-z}d\zeta$$ (we assume that $f$ is holomorphic in the ball). Well, if we choose $s>0$ such that $D_s(z)\subseteq D_r(c)$, then by the special case we have that $$f(z)=\frac{1}{2\pi i}\int_{\partial D_s(z)}\frac{f(\zeta)}{\zeta-z}d\zeta,$$ and we see that $\partial D_s(z)$ is homologous to $\partial D_r(c)$ in the region where the integrand is holomorphic. We then have that $$f(z)=\frac{1}{2\pi i}\int_{\partial D_s(z)}\frac{f(\zeta)}{\zeta-z}d\zeta=\frac{1}{2\pi i}\int_{\partial D_r(c)}\frac{f(\zeta)}{\zeta-z}d\zeta.$$ Does that answer your question? Is that what your question was? The nice thing about the "general" Cauchy formula is that the curve that we're integrating over no longer depends on the point at which you want to evaluate.
$\limsup $ and $\liminf$ of a sequence of subsets relative to a topology From Wikipedia, if $\{A_n\}$ is a sequence of subsets of a topological space $X$, then: $\limsup A_n$, which is also called the outer limit, consists of those elements which are limits of points in $A_n$ taken from (countably) infinitely many $n$. That is, $x \in \limsup A_n$ if and only if there exists a sequence of points $\{x_k\}$ and a subsequence $\{A_{n_k}\}$ of $\{A_n\}$ such that $x_k \in A_{n_k}$ and $x_k \rightarrow x$ as $k \rightarrow \infty$. $\liminf A_n$, which is also called the inner limit, consists of those elements which are limits of points in $A_n$ for all but finitely many $n$ (i.e., cofinitely many $n$). That is, $x \in \liminf A_n$ if and only if there exists a sequence of points $\{x_k\}$ such that $x_k \in A_k$ and $x_k \rightarrow x$ as $k \rightarrow \infty$. According to the above definitions (or what you think is right), my questions are:

1. Is $\liminf_{n} A_n \subseteq \limsup_{n} A_n$?
2. Is $(\liminf_{n} A_n)^c = \limsup_{n} A_n^c$?
3. Is $\liminf_{n} A_n = \bigcup_{n=1}^\infty\overline{\bigcap_{m=n}^\infty A_m}$? This is based on the comment by Pantelis Sopasakis following my previous question.
4. Is $\limsup_{n} A_n = \bigcap_{n=1}^\infty\overline{\bigcup_{m=n}^\infty A_m}$, or $\limsup_{n} A_n = \bigcap_{n=1}^\infty\operatorname{interior}(\bigcup_{m=n}^\infty A_m)$, or ...?

Thanks and regards!
I must admit that I did not know these definitions, either.

1. Yes, because if $x \in \liminf A_n$ you have a sequence $\{x_k\}$ with $x_k \in A_k$ and $x_k \rightarrow x$, and you can choose your subsequence $\{A_{n_k}\}$ to be the whole sequence $\{A_n\}$.
2. For $X = \mathbb{R}$ take $A_n = \{0\}$ for all $n$. Then you have $(\liminf A_n)^c = \{0\}^c = \mathbb{R}\setminus{\{0\}}$, but $\limsup A_n^c = \mathbb{R}$. So in general you do not have equality.
3. Of course you have "$\supseteq$". But if you take $A_n := [-1,1- \frac{1}{n}) \subseteq \mathbb{R}$ you have the sequence $\{x_k\}$ defined by $x_k := 1 - \frac{2}{k}$ which converges to $1$, which is not in $\overline{\bigcap_{n = m}^{\infty} A_n} = [-1,1-1/m]$ for any $m \in \mathbb{N}$. But it works, of course, if $\{A_k\}$ is decreasing.
4. In your first suggestion you have "$\subseteq$". A sequence $\{x_k\}$ with $x_k \in A_{n_k}$, with $\{A_{n_k}\}$ some subsequence of $\{A_n\}$, lies eventually in $\bigcup_{n = m}^\infty A_n$ for all $m$ by the definition of subsequence. Now for metric spaces you have "$\supseteq$", too, as you can start by choosing $x_1 \in A_{n_1}$ with $d(x_1,x) < 1$ and proceed by induction, choosing $x_{n_k} \in A_{n_k}$ with $n_k > n_{k-1}$ and $d(x_{n_k},x) < \frac{1}{k}$. I doubt that this holds in general (at least this way of proving it does not).
Solve $f(f(n))=n!$ What am I doing wrong here: ( n!=factorial ) Find $f(n)$ such that $f(f(n))=n!$ $$f(f(f(n)))=f(n)!=f(n!).$$ So $f(n)=n!$ is a solution, but it does not satisfy the original equation except for $n=1$, why? How to solve $f(f(n))=n!$?
The hypothesis is $f(f(n))=n!$. This implies that $f(n)!=f(n!)$ like you say, but unfortunately the converse is not true; you can't reverse the direction and say that a function satisfying the latter equation also satisfies $f(f(n))=n!$. For a similar situation, suppose we have $x=1$ and square it to obtain $x^2=1$; now $x=-1$ is a solution to the latter equation but clearly $-1\ne+1$ so we've "lost information" by squaring.
Algebraic Structures Question I am having problems understanding what this question is asking. Any help would be appreciated. Thanks. The dihedral group $D_8$ is an 8-element subgroup of the 24-element symmetric group $S_4$. Write down all left and right cosets of $D_8$ in $S_4$ and draw conclusions regarding normality of $D_8$ in $S_4$. According to your result, determine $N_{S_4}(D_8)$.
HINT: Represent $D_8$ with your preferred notation. Perhaps it is the group generated by $(1234)$ and $(13)$. That's ok. Then write down the 8 elements. Then multiply each on the right and on the left by elements of $S_4$, i.e. write down the right and left cosets. You can just sort of do it, and I recommend it in order to get a feel for the group. Patterns will quickly emerge, and it's not that much work. By the way, I want to note that you should pay attention to what elements you multiply by each time. To see if it's normal, you're going to want to see if every left coset is a right coset.
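A brute-force enumeration along the lines of the hint (Python sketch; permutations are written 0-indexed, so $(1234)$ becomes the tuple (1, 2, 3, 0) and $(13)$ becomes (2, 1, 0, 3)):

from itertools import permutations

def compose(p, q):                     # (p o q)(i) = p[q[i]]
    return tuple(p[q[i]] for i in range(4))

S4 = set(permutations(range(4)))
r, s = (1, 2, 3, 0), (2, 1, 0, 3)      # generators of D8

D8, frontier = set(), {(0, 1, 2, 3)}   # close {r, s} under composition
while frontier:
    g = frontier.pop()
    D8.add(g)
    for h in (compose(g, r), compose(g, s)):
        if h not in D8:
            frontier.add(h)

left = {frozenset(compose(g, h) for h in D8) for g in S4}
right = {frozenset(compose(h, g) for h in D8) for g in S4}
print(len(D8), len(left), len(right), left == right)   # 8 3 3 False

The left and right coset partitions do not coincide, so $D_8$ is not normal in $S_4$; since $D_8$ is a Sylow 2-subgroup with 3 conjugates, $N_{S_4}(D_8) = D_8$ itself.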
Complex equation solution. How can I solve it? I have this complex equation $|z+2i|=|z-2|$. How can I solve it? Please help me.
The geometric way The points $z$ that satisfy the equation are at the same distance from the points $2$ and $-2\,i$, that is, they are on the perpendicular bisector of the segment joining $2$ and $-2\,i$. This is a line, whose equation you should be able to find. The algebraic way When dealing with equations with $|w|$, it is usually convenient to consider $|w|^2=w\,\bar w$. In your equation, if $z=x+y\,i$, this leads to $x^2+(y+2)^2=(x-2)^2+y^2$, whose solution I'll leave to you.
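Once you have your candidate line, a quick numeric spot-check is easy (Python sketch; spoiler: it samples the line $y=-x$ that the algebra yields):

import random

for _ in range(3):
    x = random.uniform(-10, 10)
    z = complex(x, -x)               # a point on the line y = -x
    print(abs(z + 2j), abs(z - 2))   # the two distances agree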
estimate the perimeter of the island I'm assigned a task involving solving a problem that can be described as follows: Suppose I'm driving a car around a lake. In the lake there is an island of irregular shape. I have a GPS with me in the car so I know how far I've driven and every turn I've made. Now suppose I also have a camera that takes a picture of the island 30 times a second, so I know how long sidewise the island appears to me at all times. Also assume I know the straight-line distance between me and the island at all times. Given these conditions, if I drive around the lake for one full circle, will I be able to estimate the perimeter of the island? If yes, how? Thanks.
I am going to take a stab at this, although I would like to hear any flaw in my argument. If you have a reference point on the island where your camera is always pointed, then, since you know your exact travel path and distance from your path to the edge of the island, you can plot the shape of the island; then finding the perimeter can be done in multiple ways (integrating arc length along the plotted boundary being one of them). If you do not have a reference point that the camera (your eyes) can always point to, then I think the problem is tricky, and I believe there is no solution.
$f\colon\mathbb{R}\to\mathbb{R}$ such that $f(x)+f(f(x))=x^2$ for all $x$? A friend came up with this problem, and we and a few others tried to solve it. It turned out to be really hard, so one of us asked his professor. I came with him, and it took me, him and the professor about an hour to say anything interesting about it. We figured out that for positive $x$, assuming $f$ exists and is differentiable, $f$ is monotonically increasing. (Differentiating both sides gives $f'(x)\cdot[\text{positive stuff}]=2x$.) So $f$ is invertible there. We also figured out that $f$ becomes arbitrarily large, and we guessed that it grows faster than any linear function. Plugging in $f^{-1}(x)$ for $x$ gives $x+f(x)=[f^{-1}(x)]^2$. Since $f(x)$ grows faster than $x$, $f^{-1}$ grows slower and therefore $f(x)=[f^{-1}(x)]^2-x\le x^2$. Unfortunately, that's about all we know... No one knew how to deal with the $f(f(x))$ term. We don't even know if the equation has a solution. How can you solve this problem, and how do you deal with repeated function applications in general?
UPDATE: This answer now makes significantly weaker claims than it used to. Define the sequence of functions $f_n$ recursively by $$f_1(t)=t,\ f_2(t) = 3.8 + 1.75(t-3),\ f_k(t) = f_{k-2}(t)^2 - f_{k-1}(t)$$ The definition of $f_2$ is rigged so that $f_2(3) = 3.8$ and $f_2(3.8) = 3^2- 3.8 = 5.2$. Set $g_k=f_k(3)=f_{k-1}(3.8)$. So the first few $g$'s are $3$, $3.8$, $5.2$, $9.24$, etc. Numerical data suggests that the $g$'s are increasing (checked for $k$ up to $40$). More specifically, it appears that $$g_n \approx e^{c 2^{n/2}} \quad \mbox{for}\ n \ \mbox {odd, where}\ c \approx 0.7397$$ $$g_n \approx e^{d 2^{n/2}} \quad \mbox{for}\ n \ \mbox {even, where}\ d \approx 0.6851$$ We have constructed the $f$'s so that $f_k(3)=g_k$ and $f_k(3.8) = g_{k+1}$. Numerical data suggests also that $f_k$ is increasing on $[3,3.8]$ (checked for $k$ up to $20$). Assuming this is so, define $$f(x) = f_{k+1} (f_{k}^{-1}(x)) \ \mathrm{where}\ x \in [g_k, g_{k+1}].$$ Note that we need the above numerical patterns to continue for this definition to make sense. This gives a function with the desired properties on $[3,\infty)$. Moreover, we can extend the definition downwards to $[2, \infty)$ by running the recursion backwards; setting $f_{-k}(t) = \sqrt{f_{-k+1}(t) + f_{-k+2}(t)}$. Note that, if $f_{-k+1}$ and $f_{-k+2}$ are increasing then this equation makes it clear that $f_{-k}$ is increasing. Also, if $g_{-k+1} < g_{-k+2}$ and $g_{-k+2} < g_{-k+3}$ then $g_{-k} = \sqrt{g_{-k+1} + g_{-k+2}} < \sqrt{g_{-k+2} + g_{-k+3}} = g_{-k+1}$, so the $g$'s remain monotone for $k$ negative. So this definition will extend our function successfully to the union of all the $[g_k, g_{k+1}]$'s, for $k$ negative and positive. This union is $[2, \infty)$. There is nothing magic about the number $3.8$; numerical experimentation suggests that $g_1$ must be chosen in something like the interval $(3.6, 3.9)$ in order for the hypothesis to hold. I tried to make a similar argument to construct $f:[0,2] \to [0,2]$, finding some $u$ such that the recursively defined sequence $1$, $u$, $1^2-u$, $u^2-(1^2-u)$ etcetera would be decreasing. This rapidly exceeded my computational ability. I can say that, if there is such a $u$, then $$0.66316953423239333 < u < 0.66316953423239335.$$ If you want to play with this, I would be delighted to hear of any results, but let me warn you to be very careful about round off errors!
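The recursion is easy to reproduce (a Python sketch of the numerical experiment described above; Decimal is used because, as the answer warns, roundoff is the main danger):

from decimal import Decimal, getcontext
getcontext().prec = 60

def g_sequence(g1, g2, steps):
    # g_k = g_{k-2}^2 - g_{k-1}, matching f_k = f_{k-2}^2 - f_{k-1} at t = 3
    gs = [Decimal(g1), Decimal(g2)]
    for _ in range(steps):
        gs.append(gs[-2] ** 2 - gs[-1])
    return gs

for g in g_sequence(3, '3.8', 10):
    print(g)   # 3, 3.8, 5.2, 9.24, 17.8, ... increasing, as claimed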
Vector Mid Point vs Mid Point Formula Given $OA=(2,9,-6)$ and $OB=(6,-3,-6)$. If $D$ is the midpoint, is it $OD=((2+6)/2, (9-3)/2, (-6-6)/2)$? The correct answer is $OD=\frac{1}{2}AB=(2,-6,0)$
Your first answer is the midpoint of the line segment that joins the tip of the vector $OA$ to the tip of the vector $OB$. The one that you call the correct answer is gotten by putting the vector $AB$ into standard position, with its initial end at the origin, and then finding the midpoint.
Can every group be represented by a group of matrices? Can every group be represented by a group of matrices? Or are there any counterexamples? Is it possible to prove this from the group axioms?
Every finite group is isomorphic to a matrix group. This is a consequence of Cayley's theorem: every group is isomorphic to a subgroup of the symmetric group on its underlying set. Since the symmetric group $S_n$ has a natural faithful permutation representation as the group of $n\times n$ 0-1 matrices with exactly one 1 in each row and column, it follows that every finite group is a matrix group. However, there are infinite groups which are not matrix groups, for example, the symmetric group on an infinite set or the metaplectic group. Note that every group can be represented non-faithfully by a group of matrices: just take the trivial representation. My answer above is for the question of whether every group has a faithful matrix representation.
$f:[0,1] \rightarrow \mathbb{R}$ is absolutely continuous and $f' \in \mathcal{L}_{2}$ I am studying for an exam and am stuck on this practice problem. Suppose $f:[0,1] \rightarrow \mathbb{R}$ is absolutely continuous and $f' \in \mathcal{L}_{2}$. If $f(0)=0$ does it follow that $\lim_{x\rightarrow 0} f(x)x^{-1/2}=0$?
Yes. The Cauchy-Schwarz inequality gives $$f(x)^2=\left(\int^x_0 f^\prime(y)\ dy\right)^2\leq \left(\int^x_0 1\ dy\right) \left(\int^x_0 f^\prime(y)^2\ dy\right)=x\left(\int^x_0 f^\prime(y)^2\ dy\right).$$ Dividing by $x$ and taking square roots we get $$|f(x)/\sqrt{x}|\leq \left(\int^x_0 f^\prime(y)^2\ dy\right)^{1/2}$$ and since $f^\prime$ is square integrable, the right hand side goes to zero as $x\downarrow 0$. This gives the result.
Solving $A(x) = 2A(x/2) + x^2$ Using Generating Functions Suppose I have the recurrence: $$A(x) = 2A(x/2) + x^2$$ with $A(1) = 1$. Is it possible to derive a function using generating functions? I know Generatingfunctionology shows how to solve recurrences like $A(x) = 2A(x-1) + x$. But is it possible to solve the above recurrence as well?
I am a little confused by the way you worded this question (it seems that you have a functional equation rather than a recurrence relation), so I interpreted it in the only way that I could make sense of it. If this is not what you are looking for, then please clarify in your original question or in a comment. Let's assume that $A(x)$ is a formal power (or possibly Laurent) series, $A(x) = \sum_n a_n x^n$. Plugging this into your equation, we get $$ \sum_n a_n x^n = 2 \sum_n a_n \frac{x^n}{2^n} + x^2 $$ For $n\neq 2$, we get $a_n = 2^{1-n} a_n$, so if $n \neq 1,2$ we get $a_n = 0$. For $n=2$, we get $a_2 = a_2/2 + 1$, so $a_2 = 2$. Finally, the condition $A(1) = 1$ gives $a_1 = -1$, so we have $$ A(x) = -x + 2x^2 $$ Check: $$ 2 A(x/2) + x^2 = 2( -x/2 + x^2/2) + x^2 = -x + 2x^2 = A(x) $$
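A sympy spot-check of the result (sketch):

import sympy as sp

x = sp.symbols('x')
A = -x + 2 * x ** 2
print(sp.expand(2 * A.subs(x, x / 2) + x ** 2))   # 2*x**2 - x, i.e. A(x)
print(A.subs(x, 1))                               # 1, matching A(1) = 1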
Secret santa problem We decided to do secret Santa in our office. And this brought up a whole heap of problems that nobody could think of solutions for - bear with me here.. this is an important problem. We have 4 people in our office - each with a partner that will be at our Christmas meal. Steve, Christine, Mark, Mary, Ken, Ann, Paul(me), Vicki Desired outcome Nobody can know who is buying a present for anybody else. But we each want to know who we are buying our present for before going to the Christmas party. And we don't want to be buying presents for our partners. Partners are not in the office. Obvious solution is to put all the names in the hat - go around the office and draw two cards. And yes - sure enough I drew myself and Mark drew his partner. (we swapped) With that information I could work out that Steve had a 1/3 chance of having Vicki(he didn't have himself or Christine - nor the two cards I had acquired Ann or Mary) and I knew that Mark was buying my present. Unacceptable result. Ken asked the question: "What are the chances that we will pick ourselves or our partner?" So I had a stab at working that out. First card drawn -> 2/8 Second card drawn -> 12/56 Adding them together makes 28/56 i.e. 1/2. i.e. This method won't ever work... half chances of drawing somebody you know means we'll be drawing all year before we get a solution that works. My first thought was that we attach two cards to our backs... put on blindfolds and stumble around in the dark grabbing the first cards we came across... However this is a little unpractical and I'm pretty certain we'd end up knowing who grabbed what anyway. Does anybody have a solution for distributing cards that results in our desired outcome? I'd prefer a solution without a third party..
See this algorithm here: http://weaving-stories.blogspot.co.uk/2013/08/how-to-do-secret-santa-so-that-no-one.html. It's a little too long to include in a Stack Exchange answer. Essentially, we fix the topology to be a simple cycle, and then once we have a random order of participants we can also determine who to get a gift for.
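The fixed-cycle idea is easy to sketch in code (Python; the rejection loop for the "no partners" rule is my addition, since the office also forbids drawing one's own partner):

import random

PAIRS = [("Steve", "Christine"), ("Mark", "Mary"),
         ("Ken", "Ann"), ("Paul", "Vicki")]
partner = {}
for a, b in PAIRS:
    partner[a], partner[b] = b, a
people = list(partner)

def draw():
    # one random cycle over all 8 people: nobody can draw themselves,
    # and we redraw until nobody draws their partner either
    while True:
        order = people[:]
        random.shuffle(order)
        gives_to = {order[i]: order[(i + 1) % len(order)]
                    for i in range(len(order))}
        if all(gives_to[p] != partner[p] for p in people):
            return gives_to

print(draw())

Of course, printing the whole assignment defeats the secrecy; in practice each person would be shown only their own entry (or you use the pencil-and-paper protocol from the link, which needs no trusted computer).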
Limits Involving Trigonometric Functions I should prove using the limit definition that $$\lim_{x \rightarrow 0} \, x^{1/3}\cos(1/x) = 0.$$ I have a problem because the second function is much too complex, so I think I need a transformation. What form could this function have if I transform it?
You can solve this problem with the Squeeze Theorem. First, notice that $|\cos(1/x)| \leq 1$ (the cosine graph never goes beyond these bounds, no matter what you put inside as the argument). Multiplying through by $|x|^{1/3}$, we get $$ -|x|^{1/3} \leq x^{1/3}\cos(1/x) \leq |x|^{1/3}. $$ (Using $|x|^{1/3}$ rather than $x^{1/3}$ keeps the inequality valid for negative $x$ as well.) Now, the Squeeze Theorem says $$ \lim_{x \rightarrow 0} (-|x|^{1/3}) \leq \lim_{x \rightarrow 0} \, x^{1/3}\cos(1/x) \leq \lim_{x \rightarrow 0} |x|^{1/3}, $$ so we investigate the left- and right-most limits. Since $|x|^{1/3}$ is continuous and vanishes at $x=0$, $$ \lim_{x \rightarrow 0} |x|^{1/3} = \lim_{x \rightarrow 0} (-|x|^{1/3}) = 0. $$ Finally, we have $$0 \leq \lim_{x \rightarrow 0} \, x^{1/3}\cos(1/x) \leq 0,$$ which forces us to conclude $$ \lim_{x \rightarrow 0} \, x^{1/3}\cos(1/x) = 0. $$
Zero Dim Topological group I have this assertion which looks rather easy (or as always I am missing something): We have $G$ topological group which is zero dimensional, i.e it admits a basis for a topology which consists of clopen sets, then every open nbhd that contains the identity element of G also contains a clopen subgroup. I naively thought that if I take $\{e\}$, i.e the trivial subgroup, it's obviously closed, so it's also open in this topology, i.e clopen, and it's contained in every nbhd that contains $e$, but isn't it then too trivial. Missing something right? :-)
This is certainly false. Take $G=\mathbb{Q}$ with the standard topology and additive group structure. The topology is zero-dimensional since intersecting with $\mathbb{Q}$ the countably many open intervals whose endpoints are rational translates of $\sqrt 2$ gives a clopen basis for the topology on $G$. The trivial subgroup $\{0\}$ is certainly not open since the standard topology on $\mathbb{Q}$ is not discrete. So any clopen subgroup $H \subset G$ contains a nonzero rational number $q$ and so also $2q,3q,4q$ and so on. This shows $H$ is unbounded so, for example, the open neighbourhood $(-1,1) \cap \mathbb{Q}$ of $0 \in G$ contains no nontrival subgroup.
The history of set-theoretic definitions of $\mathbb N$ What representations of the natural numbers have been used, historically, and who invented them? Are there any notable advantages or disadvantages? I read about Frege's definition not long ago, which involves iterating over all other members of the universe; clearly not possible in set theories without a universal set. The one that is commonly used today to construct natural numbers from the empty set is

*$0 = \{\}$
*$S(n)=n\cup\{n\}$

but I know that another early definition was

*$0=\{\}$
*$S(n)=\{n\}$

Unfortunately I don't know who first used or popularized these last two, nor whether there were other early contenders.
Maybe this book might be useful for you, too. I'll include a short quote from §1.2 Natural numbers. Ebbinghaus et al.: Numbers, p.14 Counting with the help of number symbols marks the beginning of arithmetic. Computation counting. Until well into the nineteenth century, efforts were made to trace the idea of number back to its origins in the psychological process of counting. The psychological and philosophical terminology used for this purpose met with criticism, however, after FREGE's logic and CANTOR'S set theory had provided the logicomathematical foundations for a critical assessment of the number concept. DEDEKIND, who had been in correspondence with CANTOR since the early 1870's, proposed in his book Was sind und was sollen die Zahlen? [9] (published in 1888, but for the most part written in the years 1872—1878) a "set-theoretical" definition of the natural numbers, which other proposed definitions by FREGE and CANTOR and finally PEANO'S axiomatization were to follow. That the numbers, axiomatized in this way, are uniquely defined, (up to isomorphism) follows from DEDEKIND'S recursion theorem. Dedekind's book Was sind und was sollen die Zahlen? seems to be available online: Wikipedia article on Richard Dedekind gives two links, one at ECHO and one at GDZ.
Geometry problem: Line intersecting a semicircle Suppose we have a semicircle that rests on the negative x-axis and is tangent to the y-axis. A line intersects both axes and the semicircle. Suppose that the points of intersection create three segments of equal length. What is the slope of the line? I have tried numerous tricks, none of which work, sadly.
In this kind of problem, it is inevitable that plain old analytic geometry will work. A precise version of this assertion is an important theorem, due to Tarski. If "elementary geometry" is suitably defined, then there is an algorithm that will determine, given any sentence of elementary geometry, whether that sentence is true in $\mathbb{R}^n$. So we might as well see what routine computation buys us. We can take the equation of the circle to be $(x+1)^2+y^2=1$, and the equation of the line to be (what else?) $y=mx+b$. Let our semicircle be the upper half of the circle. Substitute $mx+b$ for $y$ in the equation of the circle. We get $$(1+m^2)x^2+2(1+mb)x +b^2=0. \qquad\qquad(\ast)$$ Let the root nearest the origin be $r_1$, and the next one $r_2$. Note that the line meets the $x$-axis at $x=-b/m$. From the geometry we can deduce that $-r_2=-2r_1$ and $b/m=-3r_1$, and therefore $$r_1=-\frac{b}{3m} \qquad\text{and} \qquad r_2=-\frac{2b}{3m}.$$ By looking at $(\ast)$ we conclude that $$-\frac{b}{m}=-\frac{2(1+mb)}{1+m^2} \qquad\text{and} \qquad \frac{2b^2}{9m^2}=\frac{b^2}{1+m^2}.$$ Thus the algebra gives us the candidates $m=\pm\sqrt{\frac{2}{7}}$. (Of course, the first equation was not needed.) Sadly, we should not always believe what algebraic manipulation seems to tell us. I have checked out the details for the positive candidate for the slope, and everything is fine. Our line has equation $y=\sqrt{\frac{2}{7}}x+ \frac{2\sqrt{14}}{5}$. Pleasantly, the points $r_1$ and $r_2$ turn out to have rational coordinates. However, the negative candidate is not fine. That can be checked by looking at the geometry. But it is also clear from the algebra, which has been symmetrical about the $x$-axis. The algebra was not told that we are dealing with a semicircle, not a circle. So naturally it offered us a mirror symmetric list of configurations. We conclude that the slope is $\sqrt{\dfrac{2}{7}}$.
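A numeric confirmation of the computation (Python sketch): with $m=\sqrt{2/7}$ and $b=\frac{2\sqrt{14}}{5}$ the quadratic $(\ast)$ has roots $r_1=-\frac{14}{15}$, $r_2=-\frac{28}{15}$, the $x$-axis crossing is at $-\frac{14}{5}=3r_1$, and the three segments come out the same length.

import math

m = math.sqrt(2 / 7)
b = 2 * math.sqrt(14) / 5

# roots of (1 + m^2) x^2 + 2 (1 + m b) x + b^2 = 0
A_, B_, C_ = 1 + m * m, 2 * (1 + m * b), b * b
disc = math.sqrt(B_ * B_ - 4 * A_ * C_)
r1, r2 = (-B_ + disc) / (2 * A_), (-B_ - disc) / (2 * A_)

def length(x1, x2):
    return abs(x2 - x1) * math.sqrt(1 + m * m)   # along the line y = m x + b

x0 = -b / m                                       # x-axis crossing
print(length(x0, r2), length(r2, r1), length(r1, 0.0))   # all ~1.0583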
How to use the Extended Euclidean Algorithm manually? I've only found a recursive algorithm of the extended Euclidean algorithm. I'd like to know how to use it by hand. Any idea?
The way to do this is due to Blankinship "A New Version of the Euclidean Algorithm", AMM 70:7 (Sep 1963), 742-745. Say we want $a x + b y = \gcd(a, b)$, for simplicity with positive $a$, $b$ with $a > b$. Set up auxiliary vectors $(x_1, x_2, x_3)$, $(y_1, y_2, y_3)$ and $(t_1, t_2, t_3)$ and keep them such that we always have $x_1 a + x_2 b = x_3$, $y_1 a + y_2 b = y_3$, $t_1 a + t_2 b = t_3$ throughout. The algorithm itself is:

    (x1, x2, x3) := (1, 0, a)
    (y1, y2, y3) := (0, 1, b)
    while y3 <> 0 do
        q := floor(x3 / y3)
        (t1, t2, t3) := (x1, x2, x3) - q * (y1, y2, y3)
        (x1, x2, x3) := (y1, y2, y3)
        (y1, y2, y3) := (t1, t2, t3)

At the end, $x_1 a + x_2 b = x_3 = \gcd(a, b)$. It is seen that $x_3$, $y_3$ do as the classic Euclidean algorithm, and easily checked that the invariant mentioned is kept all the time. One can do away with $x_2$, $y_2$, $t_2$ and recover $x_2$ at the end as $(x_3 - x_1 a) / b$.
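For convenience, here is a direct, runnable transcription of Blankinship's scheme in Python (a sketch; the function name is mine):

    def extended_gcd(a, b):
        """Return (x1, x2, g) with x1*a + x2*b = g = gcd(a, b)."""
        x1, x2, x3 = 1, 0, a   # invariant: x1*a + x2*b == x3
        y1, y2, y3 = 0, 1, b   # invariant: y1*a + y2*b == y3
        while y3 != 0:
            q = x3 // y3
            x1, x2, x3, y1, y2, y3 = y1, y2, y3, x1 - q*y1, x2 - q*y2, x3 - q*y3
        return x1, x2, x3

    print(extended_gcd(240, 46))   # (-9, 47, 2), and indeed -9*240 + 47*46 = 2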
Proof by Induction: Alternating Sum of Fibonacci Numbers Possible Duplicate: Show that $f_0 - f_1 + f_2 - \cdots - f_{2n-1} + f_{2n} = f_{2n-1} - 1$ when $n$ is a positive integer This is a homework question so I'm looking to just be nudged in the right direction, I'm not asking for my work to be done for me. The Fibonacci numbers are defined as follows: $f_0 = 0$, $f_1 = 1$, and $f_{n+2} = f_n + f_{n+1}$ whenever $n \geq 0$. Prove that when $n$ is a positive integer: \begin{equation*} f_0 - f_1 + f_2 + \ldots - f_{2n-1} + f_{2n} = f_{2n-1} - 1 \end{equation*} So as I understand it, this is an induction problem. I've done the basis step using $n = 1$: \begin{align*} - f_{2(1)-1} + f_{2(1)} &= f_{2(1)-1} - 1\newline - f_1 + f_2 &= f_1 - 1\newline - 1 + 1 &= 1 - 1\newline 0 &= 0 \end{align*} I've concluded that the inductive hypothesis is that $- f_{2n-1} + f_{2n} = f_{2n-1} - 1$ is true for some $n \geq 1$. From what I can gather, the inductive step is: \begin{equation*} f_0 - f_1 + f_2 + \ldots - f_{2n-1} + f_{2n} - f_{2n+1} = f_{2n} - 1 \end{equation*} However, what I find when I try to prove it using that equation is that it is incorrect. For example, when I take $n = 1$ \begin{align*} - f_{2(1)-1} + f_{2(1)} + f_{2(1)+1} &\neq f_{2(1)} - 1\newline - f_1 + f_2 - f_3 &\neq f_2 - 1\newline - 1 + 1 - 2 &\neq 1 - 1\newline -2 &\neq 0 \end{align*} I suppose that my inductive step is wrong but I'm not sure where I went wrong. Maybe I went wrong elsewhere. Any hints?
HINT $\ $ LHS,RHS both satisfy $\rm\ g(n+1) - g(n)\: =\: f_{2\:n},\ \ g(1) = 0\:.\:$ But it is both short and easy to prove by induction that the solutions $\rm\:g\:$ of this recurrence are unique. Therefore LHS = RHS. Note that abstracting away from the specifics of the problem makes the proof both much more obvious and, additionally, yields a much more powerful result - one that applies to any functions satisfying a recurrence of this form. Generally uniqueness theorems provide very powerful tools for proving equalities. For much further elaboration and many examples see my prior posts on telescopy and the fundamental theorem of difference calculus, esp. this one.
The construction of a function If $X$ is a paracompact Hausdorff space and $\{X_i\}_{i \ge 0}$ are open subsets of $X$ such that $X_i \subset X_{i+1}$ and $\bigcup\nolimits_{i \ge 0} X_i = X$, can we find a continuous function $f$ such that $f(x) \ge i + 1$ when $x \notin X_i$?
Yes, your function exists. As $X$ is paracompact and Hausdorff you get a partition of unity subordinate to your cover $\{X_i\}$, that is, a family $\{f_i : X \rightarrow [0,1]\}_{i \geq 0}$ with $\text{supp}(f_i) \subset X_i$ and every $x \in X$ has a neighborhood such that $f_i(x) = 0$ for all but finitely many $i \geq 0$ and $\sum_i f_i(x) = 1$. That implies that $g = \sum_i i \cdot f_i$ is a well defined, continuous map. For $x \notin X_k$ you have $f_i(x) = 0$ for all $i \leq k$ and that implies $$ \sum_{i \geq 0} i \cdot f_i(x) = \sum_{i \geq k+1} i \cdot f_i(x) \geq (k+1) \sum_{i \geq k+1} f_i(x) = k+1, $$ that is, $g(x) \geq k+1$ for all $x \notin X_k$.
Reciprocal of a continued fraction I have to prove the following: Let $\alpha=[a_0;a_1,a_2,...,a_n]$ and $\alpha>0$, then $\dfrac1{\alpha}=[0;a_0,a_1,...,a_n]$ I started with $$\alpha=[a_0;a_1,a_2,...,a_n]=a_0+\cfrac1{a_1+\cfrac1{a_2+\cfrac1{a_3+\cfrac1{a_4+\cdots}}}}$$ and $$\frac1{\alpha}=\frac1{[a_0;a_1,a_2,...,a_n]}=\cfrac1{a_0+\cfrac1{a_1+\cfrac1{a_2+\cfrac1{a_3+\cfrac1{a_4+\cdots}}}}}$$ But now I don't know how to go on. In someway I have to show, that $a_0$ is replaced by $0$, $a_1$ by $a_0$ and so on. Any help is appreciated.
Coming in late: there's a similar approach that will let you take the reciprocal of nonsimple continued fractions as well.

* change the denominator sequence from $[b_0;a_0,a_1,a_2...]$ to $[0; b_0,a_0,a_1,a_2...]$
* change the numerator sequence from $[c_1,c_2,c_3,...]$ to $[1,c_1,c_2,c_3...]$
An unintuitive probability question Suppose you meet a stranger in the street walking with a boy. He tells you that the boy is his son, and that he has another child. Assuming equal probability for boy and girl, and equal probability for the stranger (call him Monty) to walk with either child, what is the probability that the second child is a male? The way I see it, we get to know that the stranger has 2 children, meaning each of these choices is of equal probability: (M,M) (M,F) (F,M) (F,F) And we are given the additional information that one of his children is a boy, thus removing option 4 (F,F). Which means that the probability of the other child to be a boy is 1/3. Am I correct?
I don't think so. The probability of this is $\frac 1 2$ which seems clear since all it depends on is the probability that the child not walking with Monty is a boy. The probability is not $\frac 1 3$! The fact that he is actually walking with a son is different than the fact that he has at least one son! Let $A$ be the event "Monty has $2$ sons" and $B$ be the event "Monty is walking with a son." Then the probability of $A$ given $B$ is $P(A | B) = \frac {P(A)} {P(B)}$ since $B$ happens whenever $A$ happens. $P(A) = \frac 1 4$ assuming the genders of the children are equally likely and the genders of the two children are independent. On the other hand, $P(B) = \frac 1 2$ by symmetry; if the genders are equally likely then Monty is just as likely to be walking with a boy as a girl. This particular problem isn't even counterintuitive; most people would guess $\frac 1 2$. Now, on the other hand, suppose $C$ is the event that Monty has at least one son. This event has probability $\frac 3 4$ and so if you happen to run into Monty on the street without a child, but you ask him if he has at least one son and he says "yes," it turns out that Monty has two sons $\frac 1 3$ of the time using the same calculation. This seems a little paradoxical, but you actually get more information from event $B$ than you do from event $C$; you know more than just that he has a son, but you can pin down which one it is: the one walking with Monty. The juxtaposition of these two results seems paradoxical. Really, I think the moral of the story is that human beings are bad at probability, since just about everyone gets this and the original Monty Hall problem wrong; in fact, people can be quite stubborn about accepting the fact that they were wrong! The common thread between the two problems is the concept of conditional probability, and the fact that humans are terrible at applying it.
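A quick simulation makes the $\frac12$ versus $\frac13$ distinction vivid (a sketch; the sampling model follows the answer's assumptions, with the walking child chosen uniformly):

    import random

    trials = 200_000
    walk_with_son = two_sons_given_walk = 0
    has_son = two_sons_given_has_son = 0

    for _ in range(trials):
        kids = [random.choice('MF'), random.choice('MF')]
        walking = random.choice(kids)          # Monty walks with a uniformly chosen child
        if walking == 'M':                     # event B: seen walking with a son
            walk_with_son += 1
            two_sons_given_walk += kids == ['M', 'M']
        if 'M' in kids:                        # event C: has at least one son
            has_son += 1
            two_sons_given_has_son += kids == ['M', 'M']

    print(two_sons_given_walk / walk_with_son)    # ~0.5
    print(two_sons_given_has_son / has_son)       # ~0.333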
How to find irreducible representation of a group from reducible one? I was reading this document to answer my question. But after teaching me hell lot of jargon like subgroup, normal subgroup, cosets, factor group, direct sums, modules and all that the document says this, You likely realize immediately that this is not a particularly easy thing to do by inspection. It turns out that there is a very straightforward and systematic way of taking a given representation and determining whether or not it is reducible, and if so, what the irreducible representations are. However, the details of how this can be done, while very interesting, are not necessary for the agenda of these notes. Therefore, for the sake of brevity, we will not pursue them. (>_<) I want to learn to do this by hand and then write a program. Please don't ask me to learn GAP or any other software instead. How to find irreducible representation of a group from reducible one? What is that straightforward and systematic way?
I do not believe that there is a straightforward way of doing what you want for complex representations. Probably the best way is to first compute the character table of the group. There are algorithms for that, such as Dixon-Schneider, but it is not something you can just sit down and program in an afternoon. Then you can use the orthogonality relations to find the irreducible constituents of your representation. There are then algorithms you could use to construct the matrices of a representation from its character - there is one due to Dixon, for example. This method is indirect in that you are not computing the irreducible constituents directly, but are using the full character table, but I don't know any better way of doing it. Strangely, this problem is a little easier for representations over finite fields, where there is a comparatively simple algorithm known as the "MeatAxe" for finding the irreducible constituents directly. (But programming it efficiently would still take a lot of effort.)
How to find the GCD of two polynomials How do we find the GCD $G$ of two polynomials, $P_1$ and $P_2$ in a given ring (for example $\mathbf{F}_5[x]$)? Then how do we find polynomials $a,b\in \mathbf{F}_5[x]$ so that $P_1a+ P_2b=G$? An example would be great.
If you have the factorization of each polynomial, then you know what the divisors look like, so you know what the common divisors look like, so you just pick out one of highest degree. If you don't have the factorization, the Euclidean algorithm works for polynomials in ${\bf F}_5[x]$ just as it does in the integers, which answers the first question; so does the extended Euclidean algorithm, which answers the second question. If you are unfamiliar with these algorithms, they are all over the web, and in pretty much every textbook that does field theory.
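Since the question asked for an example, here is a self-contained sketch of the extended Euclidean algorithm in $\mathbf{F}_5[x]$ (representation and names are mine): a polynomial is a list of coefficients, lowest degree first, and `ext_gcd` returns $a, b, G$ with $aP_1 + bP_2 = G$ monic.

    P = 5  # the field F_5

    def trim(f):
        """Drop trailing zero coefficients."""
        while f and f[-1] % P == 0:
            f = f[:-1]
        return f

    def add(f, g):
        n = max(len(f), len(g))
        f = f + [0] * (n - len(f))
        g = g + [0] * (n - len(g))
        return trim([(u + v) % P for u, v in zip(f, g)])

    def mul(f, g):
        out = [0] * (len(f) + len(g) - 1) if f and g else []
        for i, u in enumerate(f):
            for j, v in enumerate(g):
                out[i + j] = (out[i + j] + u * v) % P
        return trim(out)

    def divmod_poly(f, g):
        """f = q*g + r with deg r < deg g (g must be nonzero)."""
        q, r = [], trim(f[:])
        inv = pow(g[-1], P - 2, P)                # inverse of g's leading coefficient
        while len(r) >= len(g):
            t = [0] * (len(r) - len(g)) + [(r[-1] * inv) % P]
            q = add(q, t)
            r = add(r, mul([P - 1], mul(t, g)))   # r -= t*g  (P-1 acts as -1)
        return q, r

    def ext_gcd(f, g):
        """Return (a, b, d) with a*f + b*g = d = gcd(f, g), d monic."""
        r0, r1 = trim(f[:]), trim(g[:])
        a0, a1, b0, b1 = [1], [], [], [1]
        while r1:
            q, r = divmod_poly(r0, r1)
            r0, r1 = r1, r
            a0, a1 = a1, add(a0, mul([P - 1], mul(q, a1)))
            b0, b1 = b1, add(b0, mul([P - 1], mul(q, b1)))
        inv = pow(r0[-1], P - 2, P)               # normalise the gcd to be monic
        return mul([inv], a0), mul([inv], b0), mul([inv], r0)

    # P1 = x^2 + 3x + 2 = (x+1)(x+2),  P2 = x^2 + 4x + 3 = (x+1)(x+3)  in F_5[x]
    a, b, d = ext_gcd([2, 3, 1], [3, 4, 1])
    print(a, b, d)   # [4] [1] [1, 1]  ->  4*P1 + 1*P2 = x + 1

Running it on $P_1 = x^2+3x+2 = (x+1)(x+2)$ and $P_2 = x^2+4x+3 = (x+1)(x+3)$ gives $4P_1 + P_2 = x + 1$ in $\mathbf{F}_5[x]$, which you can verify by hand.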
Infinite product of connected spaces may not be connected? Let $X$ be a connected topological space. Is it true that the countable product $X^\omega$ of $X$ with itself (under the product topology) need not be connected? I have heard that setting $X = \mathbb R$ gives an example of this phenomenon. If so, how can I prove that $\mathbb R^\omega$ is not connected? Do we get different results if $X^\omega$ instead has the box topology?
Maybe this should have been a comment, but since I don't have enough reputation points, here it is. On this webpage, you will find a proof that the product of connected spaces is connected (using the product topology). In case of another broken link in the future, the following summary (copied from here) could be useful: The key fact that we use in the proof is that for fixed values of all the other coordinates, the inclusion of any one factor in the product is a continuous map. Hence, every slice is a connected subset. Now any partition of the whole space into disjoint open subsets must partition each slice into disjoint open subsets; but since each slice is connected, each slice must lie in one of the parts. This allows us to show that if two points differ in only finitely many coordinates, then they must lie in the same open subset of the partition. Finally, we use the fact that any open set must contain a basis open set; the basis open set allows us to alter the remaining cofinitely many coordinates. Note that it is this part that crucially uses the definition of product topology, and it is the analogous step to this that would fail for the box topology.
Form of rational solutions to $a^2+b^2=1$? Is there a way to determine the form of all rational solutions to the equation $a^2+b^2=1$?
I up-voted yunone's answer, but I notice that it begins by saying "if you know some field theory", and then gets into $N_{\mathbb{Q}(i)/\mathbb{Q}}(a_bi)$, and then talks about Galois groups, and then Hilbert's Theorem 90, and tau functions (where "$\tau$ is just the complex conjugation map in this case" (emphasis mine)). Sigh. I would have no hesitation about telling a class of secondary-school pupils that all Pythagorean triples are of the form $$ (a,b,c) = (m^2-n^2,2mn,m^2+n^2) $$ and that they are primitive precisely if $m$ and $n$ are coprime and not both odd. That means rational points on the circle are $$ \left(\frac a c, \frac b c\right) = \left( \frac{m^2-n^2}{m^2+n^2}, \frac{2mn}{m^2+n^2} \right) $$ and they're in lowest terms precisely under those circumstances. But how much of what appears in my first paragraph above would I tell secondary-school pupils? Maybe it's better not to lose the audience before answering the question.
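For completeness, a tiny sketch that enumerates a few of these rational points from the $(m,n)$ parametrization (the loop bounds are arbitrary):

    from fractions import Fraction
    from math import gcd

    for m in range(2, 5):
        for n in range(1, m):
            if gcd(m, n) == 1 and (m - n) % 2 == 1:   # primitive triples only
                x = Fraction(m*m - n*n, m*m + n*n)
                y = Fraction(2*m*n, m*m + n*n)
                assert x*x + y*y == 1
                print(x, y)   # (3/5, 4/5), (5/13, 12/13), (15/17, 8/17), (7/25, 24/25)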
Reference for "It is enough to specify a sheaf on a basis"? The wikipedia article on sheaves says: It can be shown that to specify a sheaf, it is enough to specify its restriction to the open sets of a basis for the topology of the underlying space. Moreover, it can also be shown that it is enough to verify the sheaf axioms above relative to the open sets of a covering. Thus a sheaf can often be defined by giving its values on the open sets of a basis, and verifying the sheaf axioms relative to the basis. However, it does not cite a specific reference for this statement. Does there exist a rigorous proof for this statement in the literature?
It is given in Daniel Perrin's Algebraic Geometry, Chapter 3, Section 2. And by the way, it is a nice introductory text for algebraic geometry, which does not cover much scheme theory, but gives a definition of an abstract variety (using sheaves, like in Mumford's Red book). Added: I just saw that Perrin leaves most of the details to the reader. For another proof, see Remark 2.6/Lemma 2.7 in Qing Liu's Algebraic Geometry and Arithmetic curves.
Proving an exponential bound for a recursively defined function I am working on a function that is defined by $$a_1=1, a_2=2, a_3=3, a_n=a_{n-2}+a_{n-3}$$ Here are the first few values: $$\{1,1,2,3,3,5,6,8,11,\ldots\}$$ I am trying to find a good approximation for $a_n$. Therefore I let Mathematica diagonalize the problem; it seems to have a closed form, but Mathematica doesn't like it, and every time I simplify it gives:

    a_n = Root[-1 - #1 + #1^3 &, 1]^n Root[-5 + 27 #1 - 46 #1^2 + 23 #1^3 &, 1] +
          Root[-1 - #1 + #1^3 &, 3]^n Root[-5 + 27 #1 - 46 #1^2 + 23 #1^3 &, 2] +
          Root[-1 - #1 + #1^3 &, 2]^n Root[-5 + 27 #1 - 46 #1^2 + 23 #1^3 &, 3]

I used this to get a numerical approximation of the biggest root: $$\text{Root}\left[x^3-x-1,1\right]=\frac{1}{3} \sqrt[3]{\frac{27}{2}-\frac{3 \sqrt{69}}{2}}+\frac{\sqrt[3]{\frac{1}{2} \left(9+\sqrt{69}\right)}}{3^{2/3}}\approx1.325$$ Looking at the function I set $$g(n)=1.325^n$$ and plotted the first 100 values of $\ln(g),\ln(a)$ ($a_n=:a(n)$) in a graph (blue = $a$, red = $g$); it seems to fit quite nicely. But now my question: how can I show that $a \in \mathcal{O}(g)$, if possible without using the closed form but just the recursion? If there were some bound for $a$ that's slightly worse than my $g$ but easier to show to be correct, I would be fine with that too.
Your function $a_n$ is a classical recurrence relation. It is well-known that $$a_n = \sum_{i=1}^3 A_i \alpha_i^n,$$ where $\alpha_i$ are the roots of the equation $x^3 = x + 1$. You can find the coefficients $A_i$ by solving a system of linear equations. In your case, one of the roots is real, and the other two are complex conjugates whose norm is less than $1$, so their contribution to $a_n$ tends to $0$. So actually $$a_n = A_1 \alpha_1^n + o(1).$$ Mathematica diligently found for you the value of $A_1$, and this way you can obtain your estimate (including the leading constant).
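A numerical sanity check of this (a sketch with NumPy; the initial values follow the question's $a_1=1, a_2=2, a_3=3$):

    import numpy as np

    # Roots of x^3 - x - 1 = 0, the characteristic polynomial of a_n = a_{n-2} + a_{n-3}
    roots = np.roots([1, 0, -1, -1])
    alpha = max(roots, key=abs).real      # the real root, ~1.3247

    a = [1, 2, 3]                         # a_1, a_2, a_3
    for _ in range(40):
        a.append(a[-2] + a[-3])

    # a_n / alpha^n should converge to the constant A_1
    for n in (10, 20, 30, 40):
        print(n, a[n - 1] / alpha ** n)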
Numerically solve second-order ODE I want to solve a second-order ODE of the form $$ y^{''} = \frac{a (y^{'})^2}{b y^{'}+cy+d} $$ by a numerical method (e.g., the solver ODE45), given initial conditions $y(0)$ and $y'(0)$. The results are weird and the numbers overflow machine bounds. I guess the catch is that the denominator becomes highly unstable when it converges to zero. I tried bounding it away from zero, to no avail. Could anyone provide insights on how to proceed with the numerical procedure? Thanks in advance...
Discretizing the ODE by finite differences gives $$\frac{y_2-2y_1 + y_0}{h^2} = \frac{a\left(\frac{y_1-y_0}{h}\right)^2}{b\left(\frac{y_1-y_0}{h}\right) + cy_1 + d},$$ or $$y_2 = 2y_1 - y_0 + \frac{ah(y_1-y_0)^2}{b(y_1-y_0)+chy_1+dh}.$$ Here's C++ code I wrote which has no trouble integrating this ODE for $a=-10,b=c=d=1$, initial conditions $y(0)=0,y'(0)=10$ and time step $h=0.01$. I'm sure you can adapt it to whatever language you prefer:

    #include <iostream>
    using namespace std;

    double y0, y1;

    void step(double dt)
    {
        double y2 = 2*y1 - y0 - 10*dt*(y1-y0)*(y1-y0)/(y1-y0 + dt*y1 + dt);
        y0 = y1;
        y1 = y2;
    }

    int main()
    {
        y0 = 0;
        y1 = .1;
        for(int i = 0; i < 1000; i++)
        {
            step(0.01);
            cout << y1 << endl;
        }
    }
Homogeneous Fredholm Equation of Second Kind I'm trying to show that the eigenvalues of the following integral equation \begin{align*} \lambda \phi(t) = \int_{-T/2}^{T/2} dx \phi(x)e^{-\Gamma|t-x|} \end{align*} are given by \begin{align*} \Gamma \lambda_k = \frac{2}{1+u_k^2} \end{align*} where $u_k$ are the solutions to the transcendental equation \begin{align*} \tan(\Gamma T u_k) = \frac{2u_k}{u_k^2-1}. \end{align*} My approach was to separate this into two integrals: \begin{align*} \int_{-T/2}^{T/2} dx \phi(x)e^{-\Gamma|t-x|} = e^{-\Gamma t}\int_{-T/2}^t dx \phi(x)e^{\Gamma x} + e^{\Gamma t} \int_t^{T/2} dx \phi(x) e^{-\Gamma x}. \end{align*} Then I differentiated the eigenvalue equation twice with this modification to find that \begin{align*} \ddot{\phi} = \left(\Gamma^2-\frac{2\Gamma}{\lambda}\right)\phi \end{align*} indicating that $\phi(t)$ is a sum of exponentials $Ae^{\kappa t} + Be^{-\kappa t}$, where $\kappa^2$ is the coefficient in the previous equation. Can anyone confirm this is the correct approach? I don't think there's any way to factor the original kernel and invert the result. And I'm having trouble determining the initial conditions of the equation to set $A$ and $B$. Assuming I can find these, my instinct would be to plug $\phi$ back into the original equation, explicitly integrate, and solve for $\lambda$.
This problem was solved in Mark Kac, Random Walk in the presence of absorbing barriers, Ann. Math. Statistics 16, 62-67 (1945) https://projecteuclid.org/euclid.aoms/1177731171
Complexity of substitution of variables in multivariate polynomials I want to substitute a variable with a number in multivariate polynomials. For example, for the polynomial $$ P = (z^2+yz^3)x^2 + zx $$ I want to substitute $z$ with $3$. I have an intuition for how to do that algorithmically: I have to regard the coefficients in $F[y,z]$ and make a recursive call of that method to obtain even more coefficients in $F[z]$, substitute them, and return the result to the "lower" recursive calls. Is that a good idea? I'm not really interested in formulating a real algorithm but more in the complexity of the substitution operation. I think that the above sketched algorithm can be bounded by $\mathcal{O}(\deg_x(P)\deg_y(P)\deg_z(P))$. Is that correct? The bound is not really tight. Any ideas for a tighter bound? Or is there a faster algorithm which is used in practice? And what is its complexity?
There are classical results (Ostrowski) on optimality of Horner's method and related evaluation schemes. These have been improved by Pan, Strassen and others. Searching on "algebraic complexity" with said authors should quickly locate pertinent literature.
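To illustrate the substitution operation itself, here is a small sketch (the sparse exponent-tuple representation is my own choice, not from the literature cited above):

    def substitute(poly, var_index, value):
        """poly maps exponent tuples to coefficients; substitute `value`
        for the variable at position var_index, returning a smaller poly."""
        out = {}
        for exps, c in poly.items():
            rest = exps[:var_index] + exps[var_index + 1:]
            out[rest] = out.get(rest, 0) + c * value ** exps[var_index]
        return out

    # P = (z^2 + y z^3) x^2 + z x, with variables ordered (x, y, z); set z = 3
    P = {(2, 0, 2): 1, (2, 1, 3): 1, (1, 0, 1): 1}
    print(substitute(P, 2, 3))   # {(2, 0): 9, (2, 1): 27, (1, 0): 3}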
Subspaces in Linear Algebra Find $\operatorname{Proj}_W v$ for the given vector $v$ and subspace $W$. Let $V$ be the Euclidean space $\mathbb{R}^4$, and $W$ the subspace with basis $[1, 1, 0, 1], [0, 1, 1, 0], [-1, 0, 0, 1]$. (a) $v = [2,1,3,0]$; the answer should be $[7/5,11/5,9/5,-3/5]$. My attempt at the solution: we can find the vector perpendicular to $W$ as $[1,-2,2,1]$; then $[2, 1, 3, 0] = a[1, 1, 0, 1] + b[0, -1, 1, 0] + c[0, 2, 0, 3] + d[1,-2,2,1]$. We solve for $a,b,c,d$ and get $a = 16/3, b=29/3, c=-2/3, d=-10/3$. Now the problem is: what do I do from here?
You can do it that way (though you must have an arithmetical error somewhere; the denominator of $3$ cannot be right), and the remaining piece is then simply to take $a[1, 1, 0, 1] + b[0, -1, 1, 0] + c[0 ,2, 0,3]$, forgetting the part perpendicular to $W$. However, it is much easier to normalize your $[1,-2,2,1]$ to $n=\frac{1}{\sqrt{10}}[1,-2,2,1]$ -- then the projection map is simply $v\mapsto v - (v\cdot n)n$. (If you write that out fully, the square root even disappears).
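A quick numerical check of this projection for part (a) (a sketch with NumPy):

    import numpy as np

    v = np.array([2, 1, 3, 0], dtype=float)
    n = np.array([1, -2, 2, 1]) / np.sqrt(10)   # unit vector spanning the complement of W

    print(v - (v @ n) * n)   # [ 1.4  2.2  1.8 -0.6] = [7/5, 11/5, 9/5, -3/5]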
One-to-one mapping from $\mathbb{R}^4$ to $\mathbb{R}^3$ I'm trying to define a mapping from $\mathbb{R}^4$ into $\mathbb{R}^3$ that takes the flat torus to a torus of revolution. Where the flat torus is defined by $x(u,v) = (\cos u, \sin u, \cos v, \sin v)$. And the torus of revolution by $x(u,v) = ( (R + r \cos u)\cos v, (R + r \cos u)\sin v, r \sin u)$. I think an appropriate map would be: $f(x,y,z,w) = ((R + r x)z, (R + r x)w, r y)$ where $R$, $r$ are constants greater than $0$. But now I'm having trouble showing this is one-to-one.
Shaun's answer is insufficient since there are immersions which are not 1-1. For example, the figure 8 is an immersed circle. Also, the torus covers itself and all covering maps are immersions. http://en.wikipedia.org/wiki/Immersion_(mathematics) Your parametrization of the torus of revolution is the same as in http://en.wikipedia.org/wiki/Torus You just have to notice that the minimal period in both coordinates of the $uv$ plane is the same $2\pi$ in the case of both the flat torus and the torus of revolution.
Calculating a Taylor Polynomial of a mystery function I need to calculate a taylor polynomial for a function $f:\mathbb{R} \to \mathbb{R}$ where we know the following $$f\text{ }''(x)+f(x)=e^{-x} \text{ } \forall x$$ $$f(0)=0$$ $$f\text{ }'(0)=2$$ How would I even start?
We have the following $$f''(x) + f(x) = e^{-x}$$ and $f(0) = 0$, $f'(0) = 2$, and thus we need to find $f^{(n)}(0)$ to construct the Taylor series. Note that we already have two values and can find $f''(0)$, since $$f''(0) + f(0) = e^{-0}$$ $$f''(0) + 0 = 1$$ $$f''(0) = 1$$ So now we differentiate the original equation and get: $$f'''(x) + f'(x) = -e^{-x}$$ But since we know $f'(0) = 2$, then $$f'''(0) + f'(0) = -e^{-0}$$ $$f'''(0) + 2 = -1$$ $$f'''(0) = -3$$ And we have our third value. Differentiating one more time gives $$f^{IV}(x) + f''(x) = e^{-x}$$ So again we have $$f^{IV}(0) + f''(0) =1$$ $$f^{IV}(0) + 1 =1$$ $$f^{IV}(0) =0$$ Repeating this three more times you'll get $$f^{V}(0) =2$$ $$f^{VI}(0) =1$$ $$f^{VII}(0) =-3$$ In general the equation is saying that $$f^{(2n+2)}(0) + f^{(2n)}(0) = 1$$ $$f^{(2n+1)}(0) + f^{(2n-1)}(0) = -1$$ which will allow you to get all values. A little summary of the already known values:

* $f(0) = 0$
* $f'(0) = 2$
* $f''(0) = 1$
* $f'''(0) = -3$
* $f^{IV}(0) = 0$
* $f^{V}(0) = 2$
* $f^{VI}(0) = 1$
* $f^{VII}(0) = -3$

Do you see a pattern?
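The pattern has period four: $0, 2, 1, -3$. Here is a small sketch that confirms it by iterating $f^{(n+2)}(0) = (-1)^n - f^{(n)}(0)$, which is the general relation above evaluated at $0$:

    # f'' + f = e^{-x}; differentiating n times: f^(n+2) + f^(n) = (-1)^n e^{-x},
    # so at x = 0:  f^(n+2)(0) = (-1)^n - f^(n)(0).
    d = [0, 2]                     # f(0), f'(0)
    for n in range(10):
        d.append((-1) ** n - d[n])
    print(d)                       # [0, 2, 1, -3, 0, 2, 1, -3, ...]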
Example of a real-life graph with a "hole"? Anyone ever come across a real non-textbook example of a graph with a hole in it? In Precalc, you get into graphing rational expressions, some of which reduce to a non-rational. The cancelled factors in the denominator still identify discontinuity, yet can't result in vertical asymptotes, but holes. Thanks!
A car goes 60 miles in 2 hours. So 60 miles/2 hours = 30 miles per hour. But how fast is the car going at a particular instant? It goes 0 miles in 0 hours. There you have a hole! It is for the purpose of removing that hole that limits are introduced in calculus. Then you can talk about instantaneous rates of change (such as the speed of a car at an instant), which is the topic of differential calculus.
Is it possible that in a metric space $(X, d)$ with more than one point, the only open sets are $X$ and $\emptyset$? I don't think this is possible in $\mathbb{R}$, but are there any possible metric spaces where that would be true?
One of the axioms is that for $x, y \in X$ we have $d(x, y) = 0$ if and only if $x = y$. So if you have two distinct points, you should be able to find an open ball around one of them that does not contain the other.
Maximizing symmetric matrices vs. non-symmetric matrices Quick clarification on the following will be appreciated. I know that for a real symmetric matrix $M$, the maximum of $x^TMx$ over all unit vectors $x$ gives the largest eigenvalue of $M$. Why is the "symmetry" condition necessary? What if my matrix is not symmetric? Isn't the maximum of $x^TMx$ still the largest eigenvalue of $M$? Thanks.
You can decompose any asymmetric matrix $M$ into its symmetric and antisymmetric parts, $M=M_S+M_A$, where $$\begin{align} M_S&=\frac12(M+M^T),\\ M_A&=\frac12(M-M^T). \end{align}$$ Observe that $x^TM_Ax=0$ because $M_A=-M_A^T$. Then $$x^TMx=x^T(M_S+M_A)x=x^TM_Sx+x^TM_Ax=x^TM_Sx.$$ Therefore, when dealing with something of the form $x^TMx$, we may as well assume $M$ to be symmetric; if it wasn't, we could replace it with its symmetric part $M_S$ and nothing would change.
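A small numerical illustration (a sketch with NumPy; the point is that the antisymmetric part is invisible in the quadratic form):

    import numpy as np

    rng = np.random.default_rng(0)
    M = rng.normal(size=(4, 4))      # a generic asymmetric matrix
    MS = (M + M.T) / 2               # its symmetric part
    x = rng.normal(size=4)

    print(x @ M @ x, x @ MS @ x)     # the two values coincide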
Common internal tangent of two circles PA is the radius of a circle with center P, and QB is the radius of a circle with center Q, so that AB is a common internal tangent of the two circles. Let M be the midpoint of AB and N the point on line PQ such that line MN is perpendicular to PQ. Z is the point where AB and PQ intersect. If PA=5, QB=10, and PQ=17, compute PN. So I tried to work the problem above and I found the ratio between triangles ZMN:PAN:BQZ to be 1:2:4. After finding that I discovered that the distance between the two circles is 2, so after some work I found MN to be 2.5 and MZ to be 17/6, but when I used the Pythagorean theorem to find ZN I got a weird answer (8/6). Ultimately my answer for PN was incorrect and I don't know how to solve this problem. Please help me.
Since $BQ=10$, $AP=5$ and triangles $BQZ$ and $APZ$ are similar, we get $QZ=2PZ$. Because $PQ=17$, we get $PZ=17/3$ and $QZ=34/3$. Using the Pythagorean theorem, we get $BZ=16/3$ and $AZ=8/3$, and thus $AB=8$. Since $MZ=AB/6$, we get $MZ=8/6$ (and not 17/6 as you computed). Could you do the rest of the computation?
Proving Integral Inequality I am working on proving the below inequality, but I am stuck. Let $g$ be a differentiable function such that $g(0)=0$ and $0<g'(x)\leq 1$ for all $x$. For all $x\geq 0$, prove that $$\int_{0}^{x}(g(t))^{3}dt\leq \left (\int_{0}^{x}g(t)dt \right )^{2}$$
Since $0<g'(x)$ for all $x$, we have $g(x)\geq g(0)=0$. Now let $F(x)=\left (\int_{0}^{x}g(t)dt \right )^{2}-\int_{0}^{x}(g(t))^{3}dt$. Then $$F'(x)=2g(x)\left (\int_{0}^{x}g(t)dt \right )-g(x)^3=g(x)G(x),$$ where $$G(x)=2\int_{0}^{x}g(t)dt-g(x)^2.$$ We claim that $G(x)\geq 0$. Assuming the claim, we have $F'(x)\geq 0$ from the above equality, which implies that $F(x)\geq F(0)=0$, which proves the required statement. To prove the claim, we have $$G'(x)=2g(x)-2g(x)g'(x),$$ which is nonnegative since $g'(x)\leq 1$ and $g(x)\geq 0$ for all $x$. Therefore, $G(x)\geq G(0)=0$ as required.
More Theoretical and Less Computational Linear Algebra Textbook I found what seems to be a good linear algebra book. However, I want a more theoretical as opposed to computational linear algebra book. The book is Linear Algebra with Applications 7th edition by Gareth Williams. How high quality is this? Will it provide me with a good background in linear algebra?
I may be a little late responding to this, but I really enjoyed teaching from the book Visual Linear Algebra. It included labs that used Maple that I had students complete in pairs. We then were able to discuss their findings in the context of the theorems and concepts presented in the rest of the text. I think for many of them it helped make abstract concepts like eigenvectors more concrete.
Continued fraction: Show $\sqrt{n^2+2n}=[n; \overline{1,2n}]$ I have to show the following identity ($n \in \mathbb{N}$): $$\sqrt{n^2+2n}=[n; \overline{1,2n}]$$ I had a look at the procedure for $\sqrt{n}$ on Wiki, but I don't know how to adapt it to $\sqrt{n^2+2n}$. Any help is appreciated. EDIT: I tried the following: $\sqrt{n^2+2n}>n$, so we get $\sqrt{n^2+2n}=n+\frac{n}{x}$, and $\sqrt{n^2+2n}-n=\frac{n}{x}$ and further $x=\frac{n}{\sqrt{n^2+2n}-n}$. So we get $x=\frac{n}{\sqrt{n^2+2n}-n}=\frac{n(\sqrt{n^2+2n}+n)}{(\sqrt{n^2+2n}-n)(\sqrt{n^2+2n}+n)}$ I don't know if it's right and how to go on.
HINT $\rm\ x = [\overline{1,2n}]\ \Rightarrow\ x\ = \cfrac{1}{1+\cfrac{1}{2\:n+x}}\ \iff\ x^2 + 2\:n\ x - 2\:n = 0\ \iff\ x = -n \pm \sqrt{n^2 + 2\:n} $
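One can sanity-check the hint numerically (a sketch; it evaluates the periodic tail from the bottom up):

    from math import sqrt

    def periodic_cf(n, depth=30):
        """Approximate [n; 1, 2n, 1, 2n, ...] by iterating one period `depth` times."""
        x = 0.0
        for _ in range(depth):
            x = 1.0 / (1.0 + 1.0 / (2 * n + x))
        return n + x

    for n in (1, 2, 3):
        print(periodic_cf(n), sqrt(n * n + 2 * n))   # the two columns agree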
Locally exact form $P\;dx+Q\;dy$, and the property $\frac{\partial P}{\partial y} = \frac{\partial Q}{\partial x}$ This is a very well-known result, but I don't have a proof of it. Does anyone know of, or have, a proof? Let $\omega = P\;dx + Q\;dy$ be a $C^1$ differential form on a domain $D$. If $$\frac{\partial P}{\partial y} = \frac{\partial Q}{\partial x} ,$$ then $\omega$ is locally exact.
If you have a curl-free field $W = (W_1, W_2, W_3)$ in a neighborhood of the origin, it is the gradient of a function $f$ given by $$ f(x,y,z) = \int_0^1 \; \left( \; x W_1(tx, ty,tz) + y W_2(tx, ty,tz) + z W_3(tx, ty,tz) \; \right) dt.$$ In your case, take $W_3 = 0$ and drop the dependence on $z$ from $f, \; W_1$ and $W_2.$ Note how this is set up so that $f=0$ at the origin. There is more information at Anti-curl operator
Proving that crossing number for a graph is the lowest possible How would one go about proving that the crossing number for a graph is the lowest possible? To be more specific, given a specific representation of a particular cubic graph $G$, how do I prove that the crossing number cannot be lowered any further? This $G$ has $|V|<20$, and $\operatorname{cr}{(G)}\geq 2$. Furthermore, various online sources say the $\operatorname{cr}{(G)}$ I have found is correct, but offer no proof of why. Computing the crossing number in general is NP-hard, and no general solution is known, but given that the number of vertices is small enough and the graph is 3-regular, there must be a way.
If the graph is small enough and you were willing to prove it by hand, you could do a case analysis similar to how students show a graph is non-planar ($cr(G) \geq 1$) by hand. Take a long cycle (hopefully Hamiltonian), and place it evenly spaced on a circle, then start adding in edges. You're done if you can show by case analysis that no matter how the edges are added, more than one crossing is created.
Checking for Meeting Clashes I've been sent here from StackOverflow with my mathematical / algorithm question. I am currently working with an organisation developing a web system, one area of which attempts to resolve in-house training clashes. An example (as best as I can describe it) is: what the company is attempting to do is prevent major clashes (>10 people affected) when planning training course times.

* 100 people are attending training course A.
* 75 people are attending training course B.
* 25 people are attending training course C.
* 5 people are attending training course D.

If the 75 people attending B are all also attending course A, and A and B were to run at the same time, there would be a total of 75 clashes. If all 25 people from course C are attending courses A and B, running any of these courses at the same time would result in a minimum of 25 clashes. If 3 people were attending A and D, and they were to run at the same time, only 3 would have an issue, and therefore it would not be a major problem. The system they are attempting to develop does not necessarily need to resolve the clash itself, just highlight whether a certain number of clashes is likely to occur when arranging a new time. I hope that explains the situation - I am a programmer by profession so this sort of thing is new to me; any pointers in the right direction would be fantastic!
If you intend to estimate the expected number of clashes (not necessarily the unique or best measure, but perhaps the more easy to compute) you need a probabilistic model: in particular, you need to know the size of the total population ($N$) and if there is some dependency among courses attendance (i.e. if given that a particular person attends course A then it's more or less probable that he attends course B). Assumming the most simple scenario -no dependencies-, a simple computation (see this related question) shows that if $N_A$ people from a total population of $N$ attends course A, and $N_B$ attends course B, then the expected number of classhes is: $$E(N_{AB}) = \frac{N_A N_B} {N}$$
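As a concrete illustration (a sketch; the total population $N=200$ is a made-up number for the example):

    def expected_clashes(n_a, n_b, n_total):
        """Expected overlap of two independent random subsets of sizes n_a and n_b."""
        return n_a * n_b / n_total

    # Courses A (100 people) and B (75 people) out of a staff of 200
    print(expected_clashes(100, 75, 200))   # 37.5 expected clashes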
What can be said given that $\Psi$ is a Homomorphism? Let $G$ and $H$ be nontrivial finite groups with relatively prime orders. If $\Psi: G\to H$ is a homomorphism, what can be said about $\Psi$?
If $\Psi : G \to H$ is a homomorphism then by the (first) isomorphism theorem you know $G / \ker \Psi \cong \Psi (G)$. This means that $\frac{|G|}{|\ker \Psi|} = |\Psi(G)|$, so you know that $|\Psi(G)|$ divides $|G|$. Next, you know that $\Psi (G)$ is a subgroup of $H$ and hence by Lagrange's theorem its order divides $|H|$. Putting these two things together you now know that $|\Psi (G)|$ divides $|H|$ and $|\Psi(G)|$ divides $|G|$. But $\gcd(|G|, |H|) = 1$, i.e. their only common divisor is $1$, and so $|\Psi(G)| = 1$; hence $\Psi$ is the trivial map.
The letters ABCDEFGH are to be used to form strings of length four The letters ABCDEFGH are to be used to form strings of length four. How many strings contain the letter A if repetitions are not allowed? The answer that I have is : $$ \frac{n!}{(n-r)!} - \frac{(n-1)!}{(n-r)!} = \frac{8!}{4!} - \frac{7!}{4!} = 8 \times 7 \times 6 \times 5 - (7 \times 6 \times 5) = 1470 $$ strings. If you could confirm this for me or kindly guide in me the right direction, please do let me know.
I believe this is correct; here is a detailed proof. First exclude 'A' and permute three of the remaining seven letters (7P3), which can be done in $\frac{7!}{4!} = 210$ ways. Then include 'A' back into those permuted cases: $|X_1|X_2|X_3|$, and as indicated by the vertical lines it can go in 4 locations. So the answer is $$\frac{7!}{4!} \times 4 = 840$$ (Your $1470$ comes from a slip in the subtraction approach: the count of strings avoiding 'A' is $\frac{7!}{(7-4)!} = \frac{7!}{3!} = 840$, not $\frac{7!}{4!} = 210$, and indeed $1680 - 840 = 840$ agrees.)
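A brute-force confirmation (a sketch; it simply counts the strings directly):

    from itertools import permutations

    strings = [''.join(p) for p in permutations('ABCDEFGH', 4)]
    print(len(strings))                     # 1680 strings of length 4 in total
    print(sum('A' in s for s in strings))   # 840 of them contain 'A'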
reference for "compactness" coming from topology of convergence in measure I have found this sentence in a paper of F. Delbaen and W. Schachermayer with the title: A compactness principle for bounded sequences of martingales with applications. (can be found here) On page 2, I quote: "If one passes to the case of non-reflexive Banach spaces there is—in general—no analogue to theorem 1.2 pertaining to any bounded sequence $(x_n )_{n\ge 1} $ , the main obstacle being that the unit ball fails to be weakly compact. But sometimes there are Hausdorff topologies on the unit ball of a (non-reflexive) Banach space which have some kind of compactness properties. A noteworthy example is the Banach space $ L^1 (Ω, F, P) $ and the topology of convergence in measure." So I'm looking for a good reference for topology of convergence in measure and this property of "compactness" for $ L^1 $ in probability spaces. Thx math
So that this question has an answer: t.b.'s comment suggests that the quotes passage relates to the paper's Theorem 1.3, which states: Theorem. Given a bounded sequence $(f_n)_{n \ge 1} \in L^1(\Omega, \mathcal{F}, \mathbb{P})$ then there are convex combinations $$g_n \in \operatorname{conv}(f_n, f_{n+1}, \dots)$$ such that $(g_n)_{n \ge 1}$ converges in measure to some $g_0 \in L^1(\Omega, \mathcal{F}, \mathbb{P})$. This is indeed "some kind of compactness property" as it guarantees convergence after passing to convex combinations.
Does $\mathbb{R}^\mathbb{R}$ have a basis? I'm studying linear algebra on my own and have seen bases for $\mathbb{R}, \mathbb{R}[X], \ldots$ but there is no example of $\mathbb{R}^\mathbb{R}$ (even though it is used in many examples). What is a basis for it? Thank you
The dimension of $\mathbb R^\mathbb R$ over $\mathbb R$ is $2^{\frak c}$, which is strictly larger than the size of the continuum. As Jeroen says, this space is not finitely generated; it is not even finitely generated as an algebra. What does that mean? An algebra is a vector space which has a multiplication operation. In $\mathbb R[x]$, the vector space of polynomials, we can write any polynomial as a finite sum of scalar multiples of the $x^n$'s. This is an example of a vector space which is not finitely generated, but which as an algebra is finitely generated. In introductory courses it is customary to deal with well-understood spaces. In the early beginning it is even better to use only finitely generated spaces, which are even better understood. The axiom of choice is an axiom which allows us to "control" infinitary processes. Assuming this axiom we can prove that every vector space has a basis, but we cannot necessarily construct such a basis.
What are some "natural" interpolations of the sequence $\small 0,1,1+2a,1+2a+3a^2,1+2a+3a^2+4a^3,\ldots $? (This is a spin-off of a recent question here) In fiddling with the answer to that question I came to the set of sequences $\qquad \small \begin{array} {llll} A(1)=1,A(2)=1+2a,A(3)=1+2a+3a^2,A(4)=1+2a+3a^2+4a^3, \ldots \\ B(1)=1,B(2)=1+3a,B(3)=1+3a+6a^2,B(4)=1+3a+6a^2+10a^3, \ldots \\ C(1)=1,C(2)=1+4a,C(3)=1+4a+10a^2,C(4)=1+4a+10a^2+20a^3, \ldots \\ \ldots \\ \end{array} $ with some indeterminate a . We had the discussion often here in MSE, that interpolation to fractional indexes, say A(1.5)=?? is arbitrary, considering, that an initial solution composed with any 1 -periodic function satisfies the condition. But here the embedding in a set of sequences, which are constructed from binomial-coefficients might suggest some "natural" interpolation, such as for $\qquad \small K(1)=1, K(2)=1+a, K(3)=1+a+a^2, \ldots $ the interpolation $\small K(r) = {a^r-1 \over a-1}$ seems the most "natural" which even can smoothly be defined for a=1. This observation made me to refer to "q-analogues" $\small [r]_a $ in my answer in the initiating MSE-question, but it's not obvious how to interpolate the shown sequences of higher orders A , B , C (I think they're not related to the "q-binomial-analogues" , for instance ). Q: So what would be some "natural" interpolation to fractional indexes for the sequences A, B, C, and possibly in general for sequences generated in the obvious generalized manner? Agreeing mostly with Henning's ansatz I got now the general form as $$ A_m(n) = {1 \over (1-a)^m} - \sum_{k=0}^{m-1} \binom{n+m}{k}{a^{n+m-k} \over (1-a)^{m-k} } $$ I do not yet see, whether some examples of fractional indexes agree with the solutions of all three given answers so far, for instance: given a=2.0 what is A(1.5), B(4/3), C(7/5)? With my programmed version I get now $\qquad \small A(1.5)\sim 9.48528137424 $ $\qquad \small B(4/3) \sim 11.8791929545 $ $\qquad \small C(7/5) \sim 18.4386165488 $ (No interpolation for fractional m yet) [update 2] the derivative-versions of Sivaram and Michael arrive at the same values so I think, all versions can be translated into each other and mutually support each other to express a "natural" interpolation. [update 3] I had an index-error in my computation call. Corrected the numerical results.
I'll build from Michael's work (thanks for doing the heavy lifting!) and start with $$A_n(r)=\frac1{n!}\frac{\mathrm d^n}{\mathrm da^n} \frac{a^{r+n}-1}{a-1}$$ Let's switch back to the series representation, and swap summation and differentiation: $$A_n(r)=\frac1{n!}\sum_{k=0}^{r+n-1} \frac{\mathrm d^na^k}{\mathrm d a^n}$$ and make a suitable replacement: $$A_n(r)=\frac1{n!}\sum_{k=0}^{r+n-1} \frac{k!a^{k-n}}{(k-n)!}=\frac1{n!}\sum_{k=-n}^{r-1} \frac{(k+n)!a^k}{k!}=\frac1{n!}\sum_{k=0}^{r-1} \frac{(k+n)!a^k}{k!}$$ where a reindexing and removal of extraneous zero terms was done in the last two steps. The result can then be expressed as: $$A_n(r)=\frac{\mathrm{I}_{1-a}(n+1,r)}{(1-a)^{n+1}}=\frac{1-\mathrm{I}_a(r,n+1)}{(1-a)^{n+1}}$$ where $\mathrm{I}_z(r,n)$ is a regularized incomplete beta function. In terms of the Gaussian hypergeometric function, we have $$\begin{align*} A_n(r)&=\frac{\mathrm{I}_{1-a}(n+1,r)}{(1-a)^{n+1}}\\ &=\frac{a^r}{(n+1)\mathrm{B}(n+1,r)}{}_2 F_1\left({{n+r+1, 1}\atop{n+2}}\mid1-a\right)\\ &=\binom{n+r}{n+1} {}_2 F_1\left({{1-r,n+1}\atop{n+2}}\mid1-a\right) \end{align*}$$ where the third one is derived from the second one through the Pfaff transformation. In particular, the third one gives the representation $$A_n(r)=r\binom{n+r}{r}\sum_{k=0}^{r-1}\binom{r-1}{k}\frac{(a-1)^k}{n+k+1}$$ which is valid for arbitrary $n$ and nonnegative integer $r$. Other hypergeometric representations can be derived.
Continuous functions are Riemann-Stieltjes integrable with respect to a monotone function Let $g:[a,b] \to \mathbb{R}$ be a monotone function. Could you help me prove that $\mathcal{C}([a,b])\subseteq\mathcal{R}([a,b],g)$? (Here $\mathcal{R}([a,b],g)$ is the set of all functions that are Riemann-Stieltjes integrable with respect to $g$.) Definition of the Riemann-Stieltjes integral. Suppose $f,g$ are bounded on $[a,b]$. If there is an $A \in \mathbb{R}$ such that for every $\varepsilon >0$, there exists a partition $\mathcal{P}$ of $[a,b]$ such that for every refinement $\mathcal{Q}$ of $\mathcal{P}$ we have $|I(f,\mathcal{Q},X,g)-A|<\varepsilon$ (where if $\mathcal{P}=\{a=x_0<\ldots<x_n=b\}$ and $X$ is an evaluation sequence $X=\{x_1^\prime,\ldots,x_n^\prime\}$ and $I(f,\mathcal{Q},X,g)=\sum_{j=1}^n f(x_j^\prime)(g(x_j)-g(x_{j-1}))$), then $f$ is R-S integrable with respect to $g$, and the integral is $A$.
Assume that $g$ is increasing. I suppose that you know that $f\in\mathcal{R}([a,b],g)$ iff $f$ satisfies the Riemann condition. The Riemann condition says: $f$ satisfies the Riemann condition with respect to $g$ on $[a,b]$ if for every $\epsilon\gt 0$ there exists a partition $P_\epsilon$ of $[a,b]$ such that if $P$ is a refinement of $P_\epsilon$ then $$0\leq U(P,f,g)-L(P,f,g)\lt \epsilon,$$ where $U(P,f,g)$ and $L(P,f,g)$ are the upper and lower Riemann-Stieltjes sums respectively. With this and the hint in the comments the result holds. Perhaps Chapter 7 of Tom M. Apostol's Mathematical Analysis can be useful to you.
How do I show that $\mathbb{Q}(\sqrt[4]{7},\sqrt{-1})$ is Galois? At first I thought it was the splitting field of $x^4-7$, but I was only able to prove that it was a subfield of the splitting field. Any ideas? I'm trying to find all the intermediate fields in terms of their generators, but I don't understand how. I am trying to imitate Dummit and Foote on this. I am looking at the subgroups of the Galois group in terms of $\tau$ and $\sigma$, where $\tau$ is the automorphism that takes $\sqrt[4]{7}$ to itself and $i$ to $-i$, and $\sigma$ the one that takes $\sqrt[4]{7}$ to $i\sqrt[4]{7}$ and $i$ to itself. How do I, for example, find the subfield corresponding to $\{1, \tau\sigma^3\}$? I know I am supposed to find four elements of the field that $\tau\sigma^3$ fixes, but so far I can only find $-\sqrt[4]{7}^3$.
The polynomial $f(x)=x^4-7$ factors as $(x-7^{1/4})(x+7^{1/4})(x-i7^{1/4})(x+i7^{1/4})$, and all these irreducible factors are distinct. Hence, $x^4-7$ is separable. Moreover, the field $L= \mathbb{Q}(7^{1/4},i)$ contains all its roots and is the smallest field in which $x^4-7$ factors completely. Hence, it is the splitting field of $f(x)$. So the extension $K \subset L$ is both normal and separable and hence is Galois.
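A quick computer check that the extension has degree $8$ (a sketch; it assumes SymPy's minimal_polynomial can handle this algebraic number, and uses the primitive element $\sqrt[4]{7}+i$):

    from sympy import minimal_polynomial, Rational, I, Symbol, degree

    x = Symbol('x')
    alpha = 7 ** Rational(1, 4) + I      # a primitive element of Q(7^(1/4), i)
    p = minimal_polynomial(alpha, x)
    print(p)                             # an irreducible degree-8 polynomial over Q
    print(degree(p, x))                  # 8 = [Q(7^(1/4), i) : Q]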
Chaos and ergodicity in hamiltonian systems EDIT : I formerly claimed something incorrect in my question. The Liouville measure needs NOT be ergodic on hypersurfaces of constant energy. Also, I found out that NO hamiltonian system can be globally ergodic. So the new formulation of my question is now this : Do we call chaotic any hamiltonian system that exhibits the usual chaotic properties on each hypersurface of constant energy (e.g. ergodicity, mixing, positive entropy, positive Lyapunov exponent, etc.) or do we require a "complicated" geometry of those hypersurfaces ? For example, imagine a system whose hypersurfaces of constant energy are very simple, like planes $z$=constant, but with complicated, chaotic behaviour on each of those planes. Would you call that system chaotic ? Thank you for your thoughts !
* Yep: Alfredo M. Ozorio de Almeida wrote about this: http://books.google.co.uk/books?id=nNeNSEJUEHUC&pg=PA60&lpg=PA60&dq=hamiltonian+chaos+liouville+measure&source=bl&ots=63Wnmn-xvT&sig=Z0eRtIQxmdQvgWUcLBab7ZJ9y-U&hl=en&ei=0EXfTuvzJcOG8gP5mZjaBQ&sa=X&oi=book_result&ct=result&resnum=4&ved=0CDcQ6AEwAw#v=onepage&q=hamiltonian%20chaos%20liouville%20measure&f=false
* What is meant by chaos: Laplace said, standing on Newton's shoulders, "Tell me the force and where we are, and I will predict the future!" An elusive claim, which assumes the absence of deterministic chaos: deterministic time evolution does not guarantee predictability. This is particularly relevant for mechanical systems whose equations are non-integrable, which is common in systems that have nonlinear differential equations with three or more variables. Knowing this, we enter the realm of physics: in the Hamiltonian formulation of classical dynamics, a system is described by a pair of first-order ordinary equations for each degree of freedom. So in addition we re-impose the conditions from deterministic time evolution (the nonlinear differential equations), and given that the space is constrained: ergodicity! Hamiltonian chaos: http://www.phys.uri.edu/~gerhard/hamchaos.html
Proving the existence of point $z$ s.t. $f^{(n)}(z) = 0$ Suppose that $f$ is $n$ times differentiable on an interval $I$ and there are $n + 1$ points $x_0, x_1, \ldots, x_n \in I, x_0 < x_1 < \cdots < x_n$, such that $f(x_0) = f(x_1) = \cdots = f(x_n) = 0$. Prove that there exists a point $z \in I$ such that $f^{(n)}(z) = 0$. I am trying to solve this, but other than using then using Rolle's Theorem, I am not sure how to proceed.
Here’s a fairly broad hint: You know from Rolle’s theorem that it’s true when $n=1$. Try it for $n=2$; Rolle’s theorem gives you points $y_0$ and $y_1$ such that $x_0<y_0<x_1<y_1<x_2$ and $f\;'(y_0)=f\;'(y_1)=0$. Can you now apply Rolle’s theorem to $f\;'$ on some interval to get something useful? In order to generalize the hint from $2$ to $n$, you’ll need to use mathematical induction.
Encyclopedic dictionary of Mathematics I'm looking for a complete dictionary of Mathematics; after searching a lot I found only this one: http://www.amazon.com/Encyclopedic-Dictionary-Mathematics-Second-VOLUMES/dp/0262090260/ref=sr_1_1?ie=UTF8&qid=1323066833&sr=8-1 . I'm looking for a book that can give me the big picture, well written with a clever idea in mind from the author; a big plus would be the presence of railroad diagrams or diagrams with logical connections between the elements.
You might first try the updated version of the book you mention, the Encyclopedia of Mathematics which is freely available via the quoted link. Then, since this encyclopedia is very weak on applied mathematics you could have a try with Engquist (ed): Encyclopedia of Applied and Computational Mathematics. This will give you a cross-disciplinary overview of mathematics.
Conditional probability with union evidence In the problem of cancer (C) and tests (t1, t2), or any other example, how can I calculate $P(C^+ \mid t1^+ \text{ or } t2^+)$? I think this would be the same as finding: $$\frac{P(t1^+ \text{ or } t2^+ \mid C^+)\, P(C^+)}{P(t1^+ \text{ or } t2^+)}.$$ But is $$P(t1^+ \text{ or } t2^+|C^+) = P(t1^+|C^+)+P(t2^+|C^+)-P(t1^+ \text{ and } t2^+|C^+)?$$ On the other hand, is it true in other problems that $$P(t2^+|t1^+) ={ P(t1^+ \text{ and } t2^+)\over P(t1^+)}?$$ Thanks.
The answer for your third (last) question is "yes"; this is just the definition of conditional probability. (I answer this first, since it is used later, here). Your initial instinct is right. For any two events $A$ and $C$: $$ P(A|C)={P(C\cap A)\over P(C)}={P(A\cap C)\over P(C)}={P(A)P(C| A)\over P(C)}. $$ The answer to your second question is "yes": $$\eqalign{P(t1^+\cup t2^+|C^+)&={P(( t1^+\cup t2^+)\cap C+)\over P(C^+)}\cr & ={P( ( t1^+\cap C^+)\cup (t2^+\cap C^+))\over P(C^+)}\cr &={P( t1^+\cap C^+)+P (t2^+\cap C^+)- P (t2^+\cap t1^+\cap C^+) \over P(C^+) }\cr &={P( t1^+\cap C^+) \over P(C^+) } +{P (t2^+\cap C^+) \over P(C^+) } -{ P (t2^+\cap t1^+\cap C^+) \over P(C^+) }\cr &={P( t1^+| C^+)+P (t2^+| C^+)- P (t2^+\cap t1^+| C^+) . } }$$
Rank of a degenerate conic This question comes from projective geometry. A degenerate conic $C$ is defined as $$C=lm^T+ml^T,$$ where $l$ and $m$ are different lines. It can be easily shown that all points on $l$ and $m$ lie on $C$, because, for example, if $x\in l$, then by definition $l^Tx=0$, and plugging it into the conic equation makes it true. Question: Find the rank of $C$. (We can limit ourselves to 3-dimensional projective space.) P.S. I'm reading a book where it is guessed and checked, but I would like to have a proof without guessing. I do not provide the guess, since it can distract you, but if you really need it just leave a note.
The rank of $lm^T$ is one. The same goes for $ml^T$. In most cases, the rank of the symmetric matrix $C$ as you define it will be 2. This corresponds to a conic degenerating into two distinct lines. If the lines $l$ and $m$ should coincide, though, the rank of $C$ will be 1. If you need a proof, you can show this claim for specific cases without loss of generality. Or you can have a look at the corresponding dual conics and how that relates to adjoint matrices.
Can you give an example of a complex math problem that is easy to solve? I am working on a project presentation and would like to illustrate that it is often difficult or impossible to estimate how long a task will take. I'd like to make the point by presenting three math problems (proofs, probably) that on the surface look equally challenging. But…

* One is simple to solve (or prove)
* One is complex to solve (or prove)
* And one is impossible

So if a mathematician can't simply look at a problem and say, "I can solve that in a day, or a week, or a month," how can anyone else who is truly solving a problem do so? The very nature of problem solving is that we don't know where the solution lies, and therefore we don't know how long it will take to get there. Any input or suggestions would be greatly appreciated.
What positive integers can be written as the sum of two squares? Sum of three squares? Sum of four? For two squares, it's all positive integers of the form $a^2b$, where $b$ isn't divisible by any prime of the form $4k+3$, and the proof is easy. For four squares, it's all positive integers, and the proof is moderately difficult, but covered in any course on number theory. For three squares, it's positive integers not of the form $4^a (8b+7)$ and, while not impossible, the proof is considerably more difficult than the previous two.
Trouble forming a limit equation Here's the question: The immigration rate to the Czech Republic is currently $77000$ people per year. Because of a low fertility rate, the population is shrinking at a continuous rate of $0.1$% per year. The current Czech population is ten million. Assume the immigrants immediately adopt the fertility rate of their new country. If this scenario did not change, would there be a terminal population projected for the Czech Republic? If so, find it. I need to establish a limit as time approaches infinity (the terminal population), but the only way I've been able to set it up is $(x_{n-1} + 77000)(0.999) = x_n$, and I'm not sure exactly how to establish a limit from that (we can't use differential equations; we never learned them this year). Is there another way to solve for the terminal population without using differential equations? I've tried multiple ways to get a better answer, but the prof said we need to use limits and I'm stumped.
We will proceed in two steps: First, assuming that the limit $\lim \limits_{n \to \infty} x_n$ exists, we will find it. Of course, we need to justify our assumption. So we will come back and show the existence of the limit. Finding the limit. Suppose $x = \lim \limits_{n \to \infty} x_n$. Then allowing $n$ to go to infinity in $$ x_n = 0.999(x_{n-1} + 77000), $$ we get $$ x = 0.999 x + 76992.3 \qquad \implies x = \cdots. $$ (Exercise: Fill in the blank!) Showing the existence of the limit. We suspect that for large $n$, we must have $x_n$ approaching $x = \cdots$, so we will think about the "corrected" sequence $x_n - x$ instead. That is, subtracting $x$ from both sides of the equation $$ x_n = 0.999(x_{n-1} + 77000), $$ we get $$ x_n - x = 0.999(x_{n-1} + 77000) - x \stackrel{\color{Red}{(!!)}}{=} 0.999(x_{n-1} - x). $$ Be sure to check the step marked with $\color{Red}{(!!)}$. So we end up with $$ (x_n - x) = 0.999(x_{n-1} - x). $$ Making the substitution $x_n - x = y_n$ (for all $n$), we get $$ y_n = 0.999 y_{n-1}. $$ Can you take it from here?
Function which has no fixed points Problem: Can anyone come up with an explicit function $f \colon \mathbb R \to \mathbb R$ such that $| f(x) - f(y)| < |x-y|$ for all $x,y\in \mathbb R$ and $f$ has no fixed point? I could prove that such a function exists, like a hyperbolic function which lies below the line $y=x$ and doesn't intersect it. But I am looking for an explicit function that satisfies that.
$f(x)=2$ when $x\leq 1$, $f(x)=x+\frac{1}{x}$ when $x\geq 1$. Another example: Let $f(x)=\log(1+e^x)$. Then $f(x)>x$ for all $x$, and since $0<f'(x)<1$ for all $x$, it follows from the mean value theorem that $|f(x)-f(y)|<|x-y|$ for all $x$ and $y$.
Why does $\int \limits_{0}^{1} \frac{1}{e^t}dt $ converge? I'd like your help seeing why $$\int^1_0 \frac{1}{e^t} \; dt $$ converges. As I see it, it is supposed to be: $$\int^1_0 \frac{1}{e^t}\;dt=|_{0}^{1}\frac{e^{-t+1}}{-t+1}=-\frac{e^0}{0}+\frac{e}{1}=-\infty+e=-\infty$$ Thanks a lot!
I assume you are integrating over the $t$ variable. $1/e^t$ is a continuous function, and you are integrating over a bounded interval, so the integral is well defined. An antiderivative of $1/e^t=e^{-t}$ is equal to $-e^{-t}$. So the integral equals $1-1/e$
Correspondences between Borel algebras and topological spaces Though tangentially related to another post on MathOverflow (here), the questions below are mainly out of curiosity. They may be very-well known ones with very well-known answers, but... Suppose $\Sigma$ is a sigma-algebra over a set, $X$. For any given topology, $\tau$, on $X$ denote by $\mathfrak{B}_X(\tau)$ the Borel algebra over $X$ generated by $\tau$. Question 1. Does there exist a topology, $\tau$, on $X$ such that $\Sigma = \mathfrak{B}_X(\tau)$? If the answer to the previous question is affirmative, it makes sense to ask for the following too: Question 2. Denote by ${\frak{T}}_X(\Sigma)$ the family of all topologies $\tau$ on $X$ such that $\Sigma = \mathfrak{B}_X(\tau)$ and let $\tau_X(\Sigma) := \bigcap_{\tau \in {\frak{T}}_X(\Sigma)} \tau$. Is $\Sigma = \mathfrak{B}_X({\frak{T}}_X(\Sigma))$? Updates. Q2 was answered in the negative by Mike (here).
I think that I can answer the second question. For each point $p \in \mathbb{R}$, let $\tau_p$ be the topology on $\mathbb{R}$ consisting of $\varnothing$ together with all the standard open neighbourhoods of $p$. Unless I've made some mistake, the Borel sigma-algebra generated by $\tau_p$ is the standard one. However, $\bigcap_{p \in \mathbb{R}} \tau_p$ is the indiscrete topology on $\mathbb{R}$.
How to prove $k^n \equiv 1 \pmod {k-1}$ (by induction)?
Well, I'll leave the case of $n=1$ to you. So, for a fixed $k$, suppose that $k^n\equiv 1 \mod(k-1)$ for some $n\in \mathbb{N}$. We want to show that $k^{n+1} \equiv 1 \mod(k-1)$. Well, $k^{n+1}=k^n k$, and we know that $k^n\equiv 1 \mod(k-1)$ (since this is the induction hypothesis). So, what is $k^{n+1}$ congruent to mod $k-1$?
Arithmetic Mean Linear? Is a function that finds the arithmetic mean of a set of real numbers a linear function? So is $\left(X_1 + X_2 + \cdots + X_n\right)/n$ linear or not? I'm not sure, because as long as the set stays the same size, $n$ could be defined as a constant.
To talk about linearity, the domain and range of a function must be vector spaces, in this case over the real numbers. So your first question should be, what vector space do you take the arithmetic mean to be defined on? It turns out you must fix the value of $n$ to get any reasonable vector space with the arithmetic mean defined. If you mix various length sequences, they cannot be added (and extending the shorter sequence by zeros in order to perform the addition is not an option, because this changes its mean value). Once you realise $n$ must be fixed, you should have no difficulty seeing that the arithmetic mean is a linear function.
probability a credit-card number has no repeated digits It seems as though every Visa or Mastercard account number I've ever had (in the United States) has had at least two consecutive digits identical. I was wondering what the probability is that a particular account number will have at least two consecutive digits identical. Assume the account number is $16$ digits long, totally ordered, with digits called $(d_i)_{i=1}^{16}$. Let $$a_i=\left\{\begin{array}{rl}d_i&i \textrm{ is even}\\2d_i&i \textrm{ is odd and }2d_i\lt9\\2d_i-9&i \textrm{ is odd and }2d_i\gt9\end{array}\right.$$ Assume $d_1=4$, $d_2$ through $d_{15}$ are chosen arbitrarily, and $d_{16}$ is chosen so that $$\sum_{i=1}^{16}a_i\in10\mathbb Z.$$ (Fwiw, those assumptions are not correct in the real world, but they're based on real-world facts.) My answer so far is this: Consecutive integers are distinct iff all the following hold: * *For $2\le i\le15$, $d_i\ne d_{i-1}$ (probability $\frac9{10}$ each). *Since the ten digits all appear as values of $a_i$ (for fixed $i$, as $d_i$ varies), we might as well assume, in computing $d_{16}$, that the $d_i$ are used in the sum, i.e., that $\sum_id_i\in10\mathbb Z$. But since $d_2,\ldots,d_{15}$ are with equal probability any digit, so is $d_{16}$, so the probability it's distinct from $d_{15}$ is just $\frac9{10}$ again. So we get $1-.9^{15}\approx .79$. However I'm very unsure about that last bullet point. Can someone make it more rigorous or correct it?
It looks fine to me. What you’re using is the fact that if the random variables $X$ and $Y$ are uniformly distributed in $\{0,1,\dots,n-1\}$, then so is the reduced sum $Z=(X+Y)\bmod n$, where $\bmod$ here denotes the binary operation. (In your case $n$ is of course $10$.) To see this, you can observe that $X+Y$ itself takes values in the set $\{0,1,\dots,2n-2\}$, with the following probabilities: $$\mathbb{P}[X+Y=k]=\begin{cases} \frac{k+1}{n^2},&k\le n-1\\\\\\ \frac{2n-1-k}{n^2},&k\ge n-1\;. \end{cases}$$ (Just count the number of ways that each sum can occur.) But $Z=(X+Y)\bmod n=k$ iff $X+Y=k$ or $X+Y=k+n$, so $$\mathbb{P}[Z=k]=\frac1{n^2}\bigg((k+1)+\big(2n-1-(k+n)\big)\bigg)=\frac1{n}\;,$$ i.e., $Z$ is uniformly distributed in $\{0,1,\dots,n-1\}$. By induction the same is true for any finite sum reduced modulo $n$.
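If you want an empirical sanity check of the $\approx .79$ figure, here is a quick Monte Carlo sketch of exactly the model stated in the question (positions are 1-indexed as there; the helper names are mine):

```python
import random

def random_card():
    # d_1 = 4; d_2..d_15 uniform; d_16 chosen so that the sum of a_i is 0 mod 10
    d = [4] + [random.randrange(10) for _ in range(14)]
    def a(i, di):  # i is 1-indexed, as in the question
        return di if i % 2 == 0 else (2 * di if 2 * di < 9 else 2 * di - 9)
    s = sum(a(i, di) for i, di in enumerate(d, start=1))
    d.append((-s) % 10)  # position 16 is even, so a_16 = d_16
    return d

trials = 200_000
hits = sum(any(x == y for x, y in zip(c, c[1:]))
           for c in (random_card() for _ in range(trials)))
print(hits / trials, 1 - 0.9 ** 15)  # both should be near 0.794
```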
Consequence of Cauchy Integral Formula for Several Complex Variables in Gunning's book I am reading Gunning's book Introduction to Holomorphic Functions of Several Variables, Vol. I, and I am stuck in the proof of the Maximum modulus theorem: if $f$ is holomorphic in a connected open subset $D \subset \mathbb{C}^{n}$ and if there is a point $A \in D$ such that $|f(Z)| \leq |f(A)|$ for all points $Z$ in some open neighborhood of $A$, then $f(Z) = f(A)$ for all points $Z \in D$. In the proof Gunning says that for any polydisc $\Delta = \Delta(A; R)$ for which $\overline{\Delta} \subset D$ we have, as a consequence of the Cauchy integral formula, that $$|\Delta| f(A) = \int_{\Delta} f(Z) dV(Z),$$ where $dV(Z)$ is the Euclidean volume element in $\mathbb{C}^{n} = \mathbb{R}^{2n}$ and $|\Delta| = \int_{\Delta} dV(Z) = \pi^{n}r_{1}^{2} \cdots r_{n}^{2}$ is the Euclidean volume of $\Delta$. It looks very easy, but I have been stuck on it for a long time. I cannot see how this is a consequence of the Cauchy integral formula, since the integral is over $\Delta$ and not over the product of the circles $|\zeta_{j} - a_{j}| = r_{j}$. We cannot apply Stokes, because the form is of degree $n$. Maybe the Intermediate Value Theorem for integrals solves it, but how to ensure that the point giving the equality is $A$? Maybe a change of variables? Thanks for the help.
It is an integrated form of the Cauchy formula. The single complex variable case illustrates what's going on. For example, $$ f(0) = {1\over 2\pi i}\int_{|z|=1} {f(z)\over z}\;dz = {1\over 2\pi} \int_0^{2\pi} {f(re^{i\theta})\over r\,e^{i\theta}}\,d(re^{i\theta}) = {1\over 2\pi} \int_0^{2\pi} f(re^{i\theta})\;i\,d\theta $$ can be integrated (for example) $\int_0^1 \ldots r\,dr$: $$ f(0)\cdot \int_0^11\cdot r\,dr \;=\; {1\over 2\pi}\int_0^1 \int_0^{2\pi} f(re^{i\theta})\;r\;d\theta\,dr $$ The obvious absolute-value estimate gives $$ |f(0)|\;\le\; {1\over \pi} \int_{|z|\le 1} |f(z)|\,dV $$
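In $n$ variables the same computation goes through coordinate by coordinate: the iterated Cauchy formula gives the mean value property $$f(A)=\frac{1}{(2\pi)^{n}}\int_{0}^{2\pi}\!\!\cdots\int_{0}^{2\pi}f(a_{1}+\rho_{1}e^{i\theta_{1}},\dots,a_{n}+\rho_{n}e^{i\theta_{n}})\,d\theta_{1}\cdots d\theta_{n}$$ for $0<\rho_{j}<r_{j}$. Multiplying both sides by $\rho_{1}\cdots\rho_{n}$ and integrating each $\rho_{j}$ over $[0,r_{j}]$ turns the left side into $f(A)\prod_{j}r_{j}^{2}/2$ and the right side into $(2\pi)^{-n}\int_{\Delta}f\,dV$, since $dV=\prod_{j}\rho_{j}\,d\rho_{j}\,d\theta_{j}$; rearranging gives exactly $|\Delta|\,f(A)=\int_{\Delta}f(Z)\,dV(Z)$.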
An application of Gronwall's lemma I've come across a creative use of Gronwall's lemma which I would like to submit to the community. I suspect that the argument, while leading to a correct conclusion, is somewhat flawed. We have a continuous mapping $g \colon \mathbb{R}\to \mathbb{R}$ such that $$\tag{1} \forall \varepsilon>0\ \exists \delta(\varepsilon)>0\ \text{s.t.}\ \lvert x \rvert \le \delta(\varepsilon) \Rightarrow \lvert g(x) \rvert \le \varepsilon \lvert x \rvert$$ and a continuous trajectory $x\colon [0, +\infty) \to \mathbb{R}$ such that $$\tag{2} e^{\alpha t}\lvert x(t)\rvert \le \lvert x_0\rvert+\int_0^t e^{\alpha s}\lvert g(x(s))\rvert\, ds. $$ Here $x_0=x(0)$ is the initial datum, which we may choose small as we wish, but $\alpha >0$ is a fixed constant that we cannot alter in any way. Now comes the point. Fix $\varepsilon>0$. The lecturer says: Suppose we can apply (1) for all times $t \ge 0$. Then inserting (1) in (2) we get $$e^{\alpha t}\lvert x(t) \rvert \le \lvert x_0\rvert + \varepsilon \int_0^t e^{\alpha s} \lvert x(s)\rvert \, ds$$ and from Gronwall's lemma we infer $$\tag{3} \lvert x(t)\rvert \le e^{(\varepsilon - \alpha)t}\lvert x_0\rvert.$$ So if $\varepsilon <\alpha$ and $\lvert x_0 \rvert < \delta(\varepsilon)$, $\lvert x(s) \rvert$ is small at all times and our use of (1) is justified. We conclude that inequality (3) holds. Does this argument look correct to you? I believe that the conclusion is correct, but that it requires more careful treatment. Thank you.
As you presented it, this is completely bogus: it is an example of the logical fallacy called "begging the question". The estimate (3) is derived under the assumption that $|x(s)|\le\delta(\varepsilon)$ for all $s\ge 0$, which is what allows (1) to be inserted into (2), and then (3) itself is invoked to justify that very assumption. The conclusion can be rescued by a continuity ("bootstrap") argument: let $T=\sup\{t\ge 0 : |x(s)|\le\delta(\varepsilon)\text{ for all }s\in[0,t]\}$. On $[0,T)$ the Gronwall step is legitimate and yields (3) there; if $T$ were finite, (3) together with $|x_0|<\delta(\varepsilon)$ and $\varepsilon<\alpha$ would give $|x(T)|<\delta(\varepsilon)$, and by continuity the bound would persist beyond $T$, contradicting the definition of $T$.
Prove equations in modular arithmetic Prove or disprove the following statements in modular arithmetic. * *If $a\equiv b \mod m$, then $ a^2\equiv b^2 \mod m$ *If $a\equiv b \mod m$, then $a^2\equiv b^2 \mod m^2$ *If $a^2\equiv b^2\mod m^2$, then $a\equiv b\mod m$ My proofs. * *$$ a\equiv b \mod m \implies (a-b) = mr, r\in\mathbb{Z}$$ $$ a^2-b^2 = (a+b)(a-b) = (a+b)mr = ms \text{ where } s = (a+b)\cdot r$$ So the first statement is true *$$ a\equiv b \mod m \implies (a-b) = mr, r\in\mathbb{Z}$$ $$a^2-b^2 = (a+b)(a-b) = (a+b)mr$$ but $(a+b)r$ need not be a multiple of $m$, so the second one is false in general (for instance $m=3$, $a=4$, $b=1$: then $a\equiv b \mod 3$ but $a^2-b^2=15$ is not divisible by $9$). *$$a^2-b^2 = m^2r, \exists r\in\mathbb{Z}$$ $$a^2-b^2= (a+b)(a-b) $$ Then I kind of got stuck here. I'm not sure how to continue it. Am I missing some properties I don't know? Or is there an algebra trick that could be applied here?
HINT $\: $ for $\rm (3),\ \ m^2\ |\ a^2 - b^2\ \Rightarrow\ m\ |\ a-b\ $ fails if $\rm\: m > 1 = a - b\:.\:$ Then $\rm\:a^2-b^2 = 2\:b+1\:$ so any odd number with a square factor $\rm\:m^2 \ne 1\:$ yields a counterexample.
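For what it's worth, the smallest instance of this construction: $$m=3,\; a=5,\; b=4:\qquad a^{2}-b^{2}=9\equiv 0 \pmod{m^{2}},\quad\text{yet}\quad a-b=1\not\equiv 0\pmod{m}.$$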
Algorithm wanted: Enumerate all subsets of a set in order of increasing sums I'm looking for an algorithm but I don't quite know how to implement it. More importantly, I don't know what to google for. Even worse, I'm not sure it can be done in polynomial time. Given a set of numbers (say, {1, 4, 5, 9}), I want to enumerate all subsets of this set (its power set, really) in a certain order: increasing sum of the elements. For example, given {1, 4, 5, 9}, the subsets should be enumerated in this order, "smaller" sets first: {} = 0 {1} = 1 {4} = 4 {5} = 5 {1, 4} = 5 {1, 5} = 6 {9} = 9 {4, 5} = 9 {1, 9} = 10 {1, 4, 5} = 10 {4, 9} = 13 {5, 9} = 14 {1, 4, 9} = 14 {1, 5, 9} = 15 {4, 5, 9} = 18 {1, 4, 5, 9} = 19 This feels like some unholy mix between a breadth-first search and a depth-first search, but I can't wrap my head around the proper way to mix these two search strategies. My search space is very large ($2^{64}$ elements) so I can't precompute them all up-front and sort them. On that note, I also don't need to enumerate the entire search space -- the smallest 4,096 subsets is fine, for example. Can anyone give any pointers or even any clues to google for? Many thanks.
Here's an algorithm. The basic idea is that each number in the original set iterates through the list of subsets you've already found, trying to see if adding that number to the subset it's currently considering results in the smallest subset sum not yet found. The algorithm uses four arrays (all of which are indexed starting with $0$). * *$N$ consists of the numbers in the original set; i.e., $N = [1, 4, 5, 9]$ in your example. *$L$ is the list of subsets found so far. *$A[i]$ contains the subset that $N[i]$ is currently considering. *$S[i]$ is the sum of the elements of subset $i$ in $L$. Algorithm: * *Initialize $N$ to numbers in the original set, all entries of $A$ to $0$, $L[0] = \{\}$, $S[0] = 0$. Let $j = 1$. *For iteration $j$ find the minimum of $S[A[i]] + N[i]$ over all numbers $N[i]$ in the original set. (This finds the subset with smallest sum not yet in $L$.) Tie-breaking is done by number of elements in the set. Let $i^*$ denote the argmin. *Let $L[j] = L[A[i^*]] \cup \{N[i^*]\}$. Let $S[j] = S[A[i^*]] + N[i^*]$. (This updates $L$ and $S$ with the new subset.) *Increase $A[i^*]$ to the next item in $L$ that has no number larger than $N[i^*]$. If there is none, let $A[i^*] =$ NULL. (This finds the next subset in $L$ to consider for the number $N[i^*]$ just added to an existing subset in $L$ to create the subset just added to $L$.) *If all entries in $A[i]$ are NULL, then stop, else increment $j$ and go to Step 2. For example, here are the iterations for your example set, together with the subset in $L$ currently pointed to by each number. Initialization: {} 1, 4, 5, 9 Iteration 1: {} 4, 5, 9 {1} Iteration 2: {} 5, 9 {1} 4 {4} Iteration 3: {} 9 {1} 4, 5 {4} {5} Iteration 4: {} 9 {1} 5 {4} {5} {1,4} Iteration 5: {} 9 {1} {4} 5 {5} {1,4} {1,5} Iteration 6: {} {1} 9 {4} 5 {5} {1,4} {1,5} {9} Iteration 7: {} {1} 9 {4} {5} {1,4} 5 {1,5} {9} {4,5} Iteration 8: {} {1} {4} 9 {5} {1,4} 5 {1,5} {9} {4,5} {1,9} Iteration 9: {} {1} {4} 9 {5} {1,4} {1,5} {9} {4,5} {1,9} {1,4,5} And the rest of the iterations just involve adding $9$ successively to each subset already constructed that doesn't include $9$.
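If you want something executable, here is a minimal Python sketch of the same lazy idea, written with a min-heap in place of the pointer arrays above (the function and variable names are mine, not part of the algorithm description; it assumes the numbers are nonnegative, so neither child move below can decrease the sum):

```python
import heapq

def subsets_by_sum(nums, limit=None):
    """Lazily yield (sum, subset) in nondecreasing order of subset sum.

    From a subset whose largest chosen index is j, the two children are
    "also take j+1" and "swap j for j+1"; with nums sorted ascending and
    nonnegative, neither move decreases the sum, so a min-heap over this
    binary tree emits all 2^n subsets in order, each exactly once.
    """
    nums = sorted(nums)
    n = len(nums)
    yield 0, ()                            # the empty subset comes first
    count = 1
    heap = [(nums[0], (0,))] if n else []  # entries: (sum, chosen indices)
    while heap and (limit is None or count < limit):
        s, idx = heapq.heappop(heap)
        yield s, tuple(nums[i] for i in idx)
        count += 1
        j = idx[-1]
        if j + 1 < n:
            heapq.heappush(heap, (s + nums[j + 1], idx + (j + 1,)))
            heapq.heappush(heap, (s - nums[j] + nums[j + 1], idx[:-1] + (j + 1,)))

for s, sub in subsets_by_sum([1, 4, 5, 9]):
    print(s, sub)
```

Each pop pushes at most two new entries, so producing the first $k$ subsets costs $O(k\log k)$ time and $O(k)$ heap entries; there is no need to precompute all $2^{64}$ sums. Subsets with equal sums may be emitted in a different order than in the table above.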
Foreign undergraduate study possibilities for a student in Southeastern Europe In the (non-EU) country I live in, the main problem with undergraduate education is that it's awfully constrained. I have only a minimal choice in choosing my courses, I cannot take graduate courses, and I have to take many applied and computational classes. My problem is that * *I have already learned (or I plan to learn in the 9 months until university) a lot of the pure mathematics that I would learn at the undergraduate programme *I am not interested in applied or computational classes I'm interested in: Is there any undergraduate programme, affordable to an average but dedicated student with a modest budget, which either has very pure emphasis or freedom in choosing the classes? I'm ready to learn a new language if it is required. Thank you for any help!
Hungary also has a very strong mathematical tradition, especially in discrete math, and a relatively low cost of living. Many great mathematicians have studied at Eötvös Loránd University (ELTE) in Budapest. You can try looking there as well.
Sparsest cut is solvable on trees The problem is to prove that sparsest cut is solvable on trees in polynomial time. A short review: the sparsest cut problem is the minimization $$\min \frac{c(S,\overline{S})}{D(S,\overline{S})}$$ where $c(S,\overline{S})$ is the sum of edge weights over every edge that crosses the cut $(S,\overline{S})$, and $D(S,\overline{S})$ is the sum of demands $d_{i}$ over every pair $s_{i},t_{i}$ separated by $(S,\overline{S})$. The proof is based on the following claim. There exists a sparsest cut $(S,\overline{S})$ such that the graphs $G[S]$ and $G[\overline{S}]$ are connected. Proof of the claim: Without loss of generality, assume $G[S]$ is not connected. Say it has components $C_{1},\dots,C_{t}$. Let the total capacity of edges from $C_{i}$ to $\overline{C}_{i}$ be $c_{i}$ and the demand be $d_{i}$. The sparsity of cut $S$ is $\frac{c_{1}+\cdots+c_{t}}{d_{1}+\cdots+d_{t}}$. Now since all quantities $c_{i},d_{i}$ are non-negative, by simple arithmetic there exists $i$ such that $\frac{c_{i}}{d_{i}} \leq \frac{c_{1}+\cdots+c_{t}}{d_{1}+\cdots+d_{t}}$. This implies that the cut $C_{i}$ is at least as good as $S$. Using this claim we know that the sparsest cut on a tree cuts exactly one edge. Therefore, the sparsest cut problem on trees becomes easy to solve in polynomial time. The problem is that I don't really understand how the claim was proved. I think it was proved by contradiction: first we assumed that $G[S]$ is not connected, then found a cut, given by a component $C_{i}$, that is at least as good as $(S,\overline{S})$, which is a contradiction, therefore $S$ is connected. Am I wrong? Secondly, how does this claim imply that sparsest cut is solvable on a tree; just because we can take a single edge of the tree as the cut? And how do we show that it's solvable in polynomial time? Thanks!
Actually, you understand more than you think :). The proof indeed goes by contradiction, in that if the optimal cut induced disconnected components, then one of the components would give a better cut value. The rest of the proof follows from the fact that you can now parametrize the set of candidate optimal solutions by edges from the tree (since each edge removal creates two connected subgraphs). There are n-1 edges, and for each edge you can compute the cut cost in poly time.
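A minimal Python sketch of that parametrization (the function name and input conventions are my own assumptions, not from any particular library):

```python
import math
from collections import defaultdict

def sparsest_cut_tree(edges, demands):
    """Try every tree edge as the cut; deleting edge (u, v) leaves exactly
    one crossing edge, so the cut capacity is just that edge's capacity.

    edges:   list of (u, v, capacity) forming a tree
    demands: list of (s, t, demand)
    Runs in O(|edges| * (|edges| + |demands|)) time.
    """
    adj = defaultdict(list)
    for u, v, _ in edges:
        adj[u].append(v)
        adj[v].append(u)

    best_ratio, best_edge = math.inf, None
    for u, v, cap in edges:
        side, stack = {u}, [u]       # component of u after deleting (u, v)
        while stack:
            x = stack.pop()
            for y in adj[x]:
                if y not in side and {x, y} != {u, v}:
                    side.add(y)
                    stack.append(y)
        dem = sum(d for s, t, d in demands if (s in side) != (t in side))
        if dem > 0 and cap / dem < best_ratio:
            best_ratio, best_edge = cap / dem, (u, v)
    return best_ratio, best_edge

# Path 0-1-2 with capacities 2 and 1, one demand of 3 between the endpoints:
print(sparsest_cut_tree([(0, 1, 2), (1, 2, 1)], [(0, 2, 3)]))  # (1/3, (1, 2))
```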
Trick to find multiples mentally We all know how to recognize numbers that are multiples of $2, 3, 4, 5$ (and others). Some other divisors are a bit more difficult to spot. I am thinking about $7$. A few months ago, I heard a simple and elegant way to find multiples of $7$: Cut the digits into pairs from the end, multiply the last group by $1$, the previous by $2$, the previous by $4$, then $8$, $16$ and so on. Add all the parts. If the resulting number is a multiple of $7$, then the first one was too. Example: $21553$ Cut digits into pairs: $2, 15, 53$ Multiply $53$ by $1$, $15$ by $2$, $2$ by $4$: $8, 30, 53$ Add: $8+30+53=91$ As $91$ is a multiple of $7$ ($13 \cdot 7$), $21553$ is too. This works because $100-2$ is a multiple of $7$: with each additional hundred, the last two digits of a multiple of $7$ are $2$ less than a multiple of $7$ ($105 → 05 = 7 - 2$, $112 → 12 = 14 - 2$, $\cdots$). I figured out that if it works like that, maybe it would also work if we consider $7=10-3$ and multiply each digit by a power of $3$ instead of each pair of digits by a power of $2$. Example with $91$: $91$ $9, 1$ $9\cdot3, 1\cdot1$ $27, 1$ $28$ My question is: can you find a rule that works for any divisor? I can find one for divisors from $1$ to $19$ ($10-9$ to $10+9$), but I have problems with bigger numbers. For example, how can we find multiples of $23$?
One needn't memorize motley exotic divisibility tests. There is a universal test that is simpler and much easier recalled, viz. evaluate a radix polynomial in nested Horner form, using modular arithmetic. For example, consider evaluating a $3$ digit radix $10$ number modulo $7$. In Horner form $\rm\ d_2\ d_1\ d_0 \ $ is $\rm\: (d_2\cdot 10 + d_1)\ 10 + d_0\ \equiv\ (d_2\cdot 3 + d_1)\ 3 + d_0\ (mod\ 7)\ $ since $\rm\ 10\equiv 3\ (mod\ 7)\:.\:$ So we compute the remainder $\rm\ (mod\ 7)\ $ as follows. Start with the leading digit then repeatedly apply the operation: multiply by $3$ then add the next digit, doing all of the arithmetic $\rm\:(mod\ 7)\:.\:$ For example, let's use this algorithm to reduce $\rm\ 43211\ \:(mod\ 7)\:.\:$ The algorithm consists of repeatedly replacing the first two leading digits $\rm\ d_n\ d_{n-1}\ $ by $\rm\ d_n\cdot 3 + d_{n-1}\:\ (mod\ 7),\:$ namely $\rm\qquad\phantom{\equiv} \color{red}{4\ 3}\ 2\ 1\ 1$ $\rm\qquad\equiv\phantom{4} \color{green}{1\ 2}\ 1\ 1\quad $ by $\rm\quad \color{red}4\cdot 3 + \color{red}3\ \equiv\ \color{green}1 $ $\rm\qquad\equiv\phantom{4\ 3} \color{royalblue}{5\ 1}\ 1\quad $ by $\rm\quad \color{green}1\cdot 3 + \color{green}2\ \equiv\ \color{royalblue}5 $ $\rm\qquad\equiv\phantom{4\ 3\ 5} \color{brown}{2\ 1}\quad $ by $\rm\quad \color{royalblue}5\cdot 3 + \color{royalblue}1\ \equiv\ \color{brown}2 $ $\rm\qquad\equiv\phantom{4\ 3\ 5\ 2} 0\quad $ by $\rm\quad \color{brown}2\cdot 3 + \color{brown}1\ \equiv\ 0 $ Hence $\rm\ 43211\equiv 0\:\ (mod\ 7)\:,\:$ indeed $\rm\ 43211 = 7\cdot 6173\:.\:$ Generally the modular arithmetic is simpler if one uses a balanced system of representatives, e.g. $\rm\: \pm\{0,1,2,3\}\ \:(mod\ 7)\:.$ Notice that for modulus $11$ or $9\:$ the above method reduces to the well-known divisibility tests by $11$ or $9\:$ (a.k.a. "casting out nines" for modulus $9\:$).
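For what it's worth, the algorithm is a few lines of Python (a sketch; `mod_by_horner` is my own name for it):

```python
def mod_by_horner(n, m):
    """Reduce n mod m by Horner evaluation of its decimal digits,
    replacing the radix 10 by its residue mod m at every step."""
    r, t = 0, 10 % m
    for d in str(n):                   # leading digit first
        r = (r * t + int(d)) % m
    return r

assert mod_by_horner(43211, 7) == 0    # 43211 = 7 * 6173, as computed above
assert mod_by_horner(21553, 7) == 0    # 21553 = 7 * 3079, the earlier example
print(mod_by_horner(10**18 + 1, 23))   # works for any modulus, e.g. 23
```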
Count the number of integer solutions to $x_1+x_2+\cdots+x_5=36$ How to count the number of integer solutions to $x_1+x_2+\cdots+x_5=36$ such that $x_1\ge 4,x_3 = 11,x_4\ge 7$ And how about $x_1\ge 4, x_3=11,x_4\ge 7,x_5\le 5$ In both cases, $x_1,x_2,x_3,x_4,x_5$ must be nonnegative integers. Is there a general formula to calculate things like this?
$\infty$, if you have no constraint on $x_2$ and $x_5$ other than that they are integers: note that you can always add $1$ to one of these and subtract $1$ from the other. Or did you mean nonnegative (or positive) integers?
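If nonnegativity is indeed intended (as the question's last line states), then there is a general formula: shift out the lower bounds and apply stars and bars. Here $x_3=11$ is fixed, and substituting $y_1=x_1-4\ge 0$, $y_4=x_4-7\ge 0$ leaves $$y_1+x_2+y_4+x_5=36-11-4-7=14,$$ which has $\binom{14+3}{3}=\binom{17}{3}=680$ nonnegative solutions. For the second count, subtract the solutions with $x_5\ge 6$: setting $y_5=x_5-6\ge 0$ gives $y_1+x_2+y_4+y_5=8$, i.e. $\binom{8+3}{3}=\binom{11}{3}=165$ of them, so the answer is $680-165=515$.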
Rules for algebraically manipulating pi-notation? I'm a bit of a novice at maths and want to learn more about algebraically manipulating likelihoods in statistics. There are a lot of equations that involve taking the product of a set of values given a model. I know a few rules for manipulating sigma-notation (e.g., here and here). * *What are the basic rules for manipulating sequences of products (i.e, $\prod_{i=1}^{I} ... $)? Is there a web page that you could direct me to? e.g., * *$\prod_{i=1}^{I} x_i$ *$\prod_{i=1}^{I} x_i y_i$ *$\prod_{i=1}^{I} a + b x_i$ *$\prod_{i=1}^{I} \exp x_i$
This might be inappropriate for an answer, but I believe you are trying too hard here. The $\prod$ sign just means multiplying some elements together, with a label at the bottom denoting the first index and a label at the top denoting the last. The pages you linked are very good and should give you a sense of the general situation. You will get better at using the notation by reading articles that prove things with it. But as with any other shorthand, writing a product of elements this way does not really simplify anything: you should be able to write out any such product explicitly whenever you can write it with the $\prod$ sign.
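That said, here are the product analogues of the sigma-rules you linked, specialized to the expressions in your list (all follow directly from commutativity and associativity of multiplication): $$\prod_{i=1}^{I}x_{i}y_{i}=\Big(\prod_{i=1}^{I}x_{i}\Big)\Big(\prod_{i=1}^{I}y_{i}\Big),\qquad \prod_{i=1}^{I}c\,x_{i}=c^{I}\prod_{i=1}^{I}x_{i},\qquad \prod_{i=1}^{I}\exp x_{i}=\exp\Big(\sum_{i=1}^{I}x_{i}\Big),$$ and, the identity used constantly with likelihoods, $$\log\prod_{i=1}^{I}x_{i}=\sum_{i=1}^{I}\log x_{i}\qquad(x_{i}>0).$$ Note that $\prod_{i=1}^{I}(a+bx_{i})$ has no general closed form; in statistics one usually takes logarithms instead.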
Curve defined by 3 equations Suppose that $X$ is a curve in $\mathbb{A}^3$ (in the AG sense, let's say over an algebraically closed field $k$) that contains no lines perpendicular to the $xy$ plane, and that there exist two polynomials $f,g\in k[x,y,z]$ such that $\{f=0\}\cap\{g=0\}=X\cup l_1\cup\cdots\cup l_n$, where $l_i$ are lines perpendicular to the $xy$ plane (and can possibly intersect $X$). Is it possible to find a third polynomial $h$ such that the intersection $\{f=0\}\cap\{g=0\}\cap\{h=0\}=X$? Since $X$ is algebraic, of course given a point that does not lie on $X$, there is a polynomial that is zero on $X$ and not zero on that point. I want to see if I can "cut out" $X$ with one other equation.
Yes. I believe this is from a Shafarevich problem? For instance, suppose $\ell_1$ intersects the $x-y$ plane at $(x,y) = (a,b)$. Consider the homomorphism $k[x,y,z] \rightarrow k[z]$ sending $x\mapsto a$ and $y \mapsto b$. The image of $I(X)$ is some prime ideal of $k[z]$, which is principal. Now look at the pullback of a generator. To treat the general case, use this idea along with the Chinese remainder theorem.
Find all analytic functions such that... Here is the problem: find all functions that are everywhere analytic, have a zero of order two in $z=0$, satisfy the condition $|f'(z)|\leq 6|z|$ and such that $f(i)=-2$. Any hint is welcomed.
Here is a hint: consider $f'(z)/z$. Since $f(z)$ has a zero of order two at $z=0$, the derivative $f'(z)$ is entire as well, and $f'(0)=0$. Thus you may write $f'(z)$ as $z\cdot g(z)$, with $g(z)$ entire. Then the bound in the statement tells you that $|g(z)|\le 6$ everywhere, so Liouville's theorem applies.