Equivalence class on real numbers Call two real numbers equivalent if their binary expansions differ in only finitely many places. If $S$ is a set which contains an element of every equivalence class, must $S$ contain an interval? How does one show that every interval contains an element (in fact, uncountably many elements?) of every equivalence class?
Added part: We produce a bounded set $S$ that contains a member of every equivalence class but does not contain an interval. Every equivalence class meets $[0,1]$, since for any $x$, we can, by making a finite number of changes to the bits of $x$, produce an $x'\in [0,1]$. Use the Axiom of Choice to select $S\subset [0,1]$ such that $S$ contains precisely one member of each equivalence class. Any two dyadic rationals (expressed in ultimately $0$'s form) belong to the same equivalence class, so $S$ contains exactly one dyadic rational. Since the dyadic rationals are dense in the reals, this means that $S$ cannot contain an interval of positive length. (End of added part) Every non-empty interval $I$ contains a member of every equivalence class. For let $I$ be a (finite) interval, and let $a$ be its midpoint. Suppose that $I$ has length $\ge 2\times 2^{-n}$. Let $x$ be any real number. By changing the initial bits of $x$ so that they match the initial bits of $a$, up to $n$ places after the "decimal" point, we can produce an $x'$ equivalent to $x$ which is at distance less than $2^{-n}$ from $a$. Edit: The question has changed to ask whether every interval contains an uncountable number of members from every equivalence class. Minor modification of the first paragraph shows that every interval contains a countably infinite number of members from every equivalence class. As Asaf Karagila points out, one cannot get more, since every equivalence class is itself countable. (The set of places where there is "change" can be identified with a finite subset of the integers, and $\mathbb{N}$ has only countably many finite subsets.)
In a Boolean algebra $B$ with $Y\subseteq B$, if $p$ is an upper bound for $Y$ but not the supremum, must there be an upper bound $q$ with $q<p$? I don't think that this is the case. I am reading over one of my professor's proofs, and he seems to use this fact. Here is the proof: Let $B$ be a Boolean algebra, and suppose that $X$ is a dense subset of $B$ in the sense that every nonzero element of $B$ is above a nonzero element of $X$. Let $p$ be an element in $B$. The proof is to show that $p$ is the supremum of the set of all elements in $X$ that are below $p$. Let $Y$ be the set of elements in $X$ that are below $p$. It is to be shown that $p$ is the supremum of $Y$. Clearly, $p$ is an upper bound of $Y$. If $p$ is not the least upper bound of $Y$, then there must be an element $q\in B$ such that $q<p$ and $q$ is an upper bound for $Y$ ...etc. I do not see how this last sentence follows. I do see that if $p$ is not the least upper bound of $Y$, then there is some upper bound $q$ of $Y$ such that $p$ is NOT less than or equal to $q$. But, since we have only a partial order, and our algebra is not necessarily complete, I do not see how we can show anything else. So, is my professor's proof wrong, or am I just missing something fundamental?
The set of upper bounds of $Y$ is closed under meets, so $p \wedge q$ is an upper bound, and it is strictly less than $p$ because $p \not\le q$.
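In more detail (spelling out the one-liner, in the notation of the question): if $p$ is not the least upper bound, pick an upper bound $q$ of $Y$ with $p\not\le q$. For every $y\in Y$ we have $y\le p$ and $y\le q$, hence
$$y\;\le\;p\wedge q\;\le\;p,$$
so $p\wedge q$ is an upper bound for $Y$; and $p\wedge q\ne p$, since $p\wedge q=p$ would mean $p\le q$. This is exactly the $q<p$ the professor's proof needs.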
Jordan decomposition/Levi decomposition in GL(n) in positive characteristic Let $k$ be a non-archimedean field of positive characteristic. Let's consider a parabolic subgroup $P \subset GL(n, k)$. I am a little bit confused by the following statement in "Laumon - Cohomology of Drinfeld Modular ...": I have an issue with the following two assertions: $P = MN$ has a Levi decomposition over $k$ (pg. 123), and $\gamma \in P$ can be written as $\gamma = \gamma_m \gamma_n$ with $\gamma_m \in M$ and $\gamma_n \in N$ (pg. 124). Now, I have read that the Jordan decomposition and Levi decomposition need not hold in positive characteristic (e.g. in Humphreys, Waterhouse). Do they mean that the decompositions are not functorial with respect to field extensions, and are available for the group, but not the group scheme? Why is this not a contradiction? Remark: I understand that an elliptic element can become unipotent in an algebraic extension, since the minimal polynomial might not be separable in general.
Both those assertions describe a Levi decomposition. In general, groups need not have Levi decompositions, but parabolic subgroups of reductive groups do. This is proven for connected reductive groups (e.g. $GL(n, k)$) in Borel "Linear Algebraic Groups": see 20.5 for the decomposition over k.
Subgroup(s) of a group of order 25 I am working on a problem (self-study) from Artin - 2.8.8 which goes: "Let G be a group of order 25. Prove that G has at least one subgroup of order 5, and that if it contains only one subgroup of order 5, then it is a cyclic group." I can see that there is an element of order 5, and this can generate a cyclic subgroup of order 5. -- So my first question is: what would make one think that there might be more than one subgroup of order 5, and what would they look like. And what would be the criteria for there being only one rather than more than one? For the second part, I am familiar with a proof that a group of order $p^2$ is abelian, by showing the center is all of G. -- My second question is how to show G is cyclic - and how does this use the stipulation that there is only one subgroup of order 5 (in the text there is a statement that if there is only one subgroup of a particular order then it is normal). And how does there being more than one subgroup of order 5 prevent G from being cyclic. Thanks.
It is an easy exercise to show that if $c_d(G)$ denotes the number of cyclic subgroups of $G$ of order $d$ then $\displaystyle \sum_{d\mid |G|}c_d(G)\varphi(d)=|G|$ (just partition your group according to the elements' orders). Now, if $G$ had only one subgroup of order $5$, then via the fact that all groups of order $5$ are cyclic we can conclude that $c_5(G)=1$. If moreover $c_{25}(G)=0$, then we'd see that $$25=|G|=c_1(G)\varphi(1)+c_5(G)\varphi(5)=1+4=5.$$ Of course, this is ridiculous. So, if $G$ contains only one subgroup of order $5$ then $c_{25}(G)\ne0$, and so there exists a cyclic subgroup of $G$ of order $25$ which, obviously, must be $G$ itself. EDIT: Of course, there is nothing special about $5$. The above argument tells you that for a group of order $p^2$, $p$ prime, being cyclic is equivalent to having one subgroup of order $p$. You will eventually learn that, up to isomorphism, the only groups of order $p^2$ are the cyclic one and the $2$-dimensional $\mathbb{F}_p$-space.
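As a quick numeric illustration of the counting identity (a sketch in Python; for the cyclic group $\mathbb{Z}/25\mathbb{Z}$ every subgroup is cyclic and there is exactly one per divisor, so the identity reduces to Gauss's $\sum_{d\mid n}\varphi(d)=n$):

```python
from sympy import totient, divisors

n = 25
# For G = Z/nZ there is exactly one (cyclic) subgroup per divisor d of n,
# so c_d(G) = 1 and the identity becomes the sum of phi(d) over d | n.
assert sum(totient(d) for d in divisors(n)) == n
```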
Chance for picking a series of numbers (with repetition, order doesn't matter) I want to calculate the chance for the following situation: You throw a die 5 times. How big is the chance to get the numbers "1,2,3,3,5" if the order does not matter (i.e. 12335 = 21335 =31235 etc.)? I have 4 different solutions here, so I won't include them to make it less confusing. I'm thankful for suggestions!
There are $5$ options for placing the $1$, then $4$ for placing the $2$, and then $3$ for placing the $5$, for a total of $5\cdot4\cdot3=60$. Alternatively, there are $5!=120$ permutations in all, and pairs of these are identical because you can exchange the $3$s, which also gives $120/2=60$. The total number of combinations you can roll is $6^5=7776$, so the chance is $60/7776=5/648\approx0.77\%$.
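For anyone who wants to confirm the count by brute force, here is a short enumeration over all $6^5$ ordered rolls (Python, written just for this check):

```python
from itertools import product
from fractions import Fraction

target = sorted([1, 2, 3, 3, 5])
# count ordered rolls whose multiset of faces matches {1,2,3,3,5}
hits = sum(1 for roll in product(range(1, 7), repeat=5) if sorted(roll) == target)
print(hits, Fraction(hits, 6**5))  # 60 5/648
```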
Lie Algebra of $SL_n(\mathbb H)$ The Lie algebra of $SL_n(\mathbb C)$ consists of the matrices of trace $0$. But what is the Lie algebra of $SL_n(\mathbb H)$, where $\mathbb H$ is the quaternions?
The obvious candidate for $\mathfrak{sl}_2(\mathbb H)$ is the space of $2\times 2$ matrices $\begin{pmatrix}a&b\\c&d\end{pmatrix}$ with quaternion entries such that $a+d=0$, with bracket the commutator of matrices, but... that is not a Lie algebra. For example, the trace of the commutator of $\begin{pmatrix}i&0\\0&-i\end{pmatrix}$ and $\begin{pmatrix}j&0\\0&-j\end{pmatrix}$ is not zero. The big problem, really, is that you have to decide what you mean by $SL_2(\mathbb H)$. There is no determinant... (There is the Dieudonné determinant, though.)
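To make the failure concrete, the commutator in question is
$$\begin{pmatrix}i&0\\0&-i\end{pmatrix}\begin{pmatrix}j&0\\0&-j\end{pmatrix}-\begin{pmatrix}j&0\\0&-j\end{pmatrix}\begin{pmatrix}i&0\\0&-i\end{pmatrix}=\begin{pmatrix}ij-ji&0\\0&ij-ji\end{pmatrix}=\begin{pmatrix}2k&0\\0&2k\end{pmatrix},$$
using $ij=k=-ji$, and its trace is $4k\neq0$.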
Cauchy Sequence in $X$ on $[0,1]$ with norm $\int_{0}^{1} |x(t)|dt$ In Luenberger's Optimization book pg. 34 an example says "Let $X$ be the space of continuous functions on $[0,1]$ with norm defined as $\|x\| = \int_{0}^{1} |x(t)|dt$". In order to prove $X$ is incomplete, he defines a sequence of elements in $X$ by $$ x_n(t) = \left\{ \begin{array}{ll} 0 & 0 \le t \le \frac{1}{2} - \frac{1}{n} \\ \\ nt-\frac{n}{2} + 1 & \frac{1}{2} - \frac{1}{n} \le t \le \frac{1}{2} \\ \\ 1 & t \ge \frac{1}{2} \end{array} \right. $$ Each member of the sequence is a continuous function and thus a member of the space $X$. Then he says: the sequence is Cauchy since, as is easily verified, $\|x_n - x_m\| = \frac{1}{2}\left|\dfrac1n - \dfrac1m\right| \to 0$ as $n,m \to \infty$. I tried to verify the norm $\|x_n - x_m\|$ by computing the integral for the norm. The piecewise function does not depend on $n,m$ on the last piece (for $t \ge 1/2$), so that piece contributes $0$ to the norm $\|x_n - x_m\|$. For the middle piece I calculated the integral; it comes up zero. That leaves the first piece, and I did not get the result Luenberger has. Is there something wrong in my approach?
It's relatively easy to see that for $m<n$ we have $x_n(t)\le x_m(t)$ for each $t$. Hence $$\|x_m-x_n\|=\int_0^1 x_m(t) \mathrm{d}t-\int_0^1 x_n(t) \mathrm{d}t.$$ We can disregard intervals $\langle 0,1/2-1/m\rangle$, since both functions are zero there. We can also disregard $\langle 1/2,1\rangle$, since $x_m(t)=x_n(t)$ on that interval. Therefore $$\|x_m-x_n\|=\int_{\frac12-\frac1m}^1 x_m(t) \mathrm{d}t-\int_{\frac12-\frac1n}^1 x_n(t) \mathrm{d}t=\frac1{2m}-\frac1{2n}.$$ The last equality can be shown by direct computation. You can also see this geometrically: If you draw the picture, the first integral is area of a triangle with base $\frac1{2m}$ and height $1$. The second is a triangle as well, the base is $\frac1{2n}$. I used metapost to create the picture. In case someone is interested to see it, it is figure 6 in this source code: rapidshare, megaupload, pastebin.
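A quick symbolic check of the last equality (a sketch using sympy; the function and the split of the integral mirror the answer above, with $m<n$):

```python
import sympy as sp

t = sp.symbols('t')

def x(n):
    # the piecewise function from Luenberger's example
    return sp.Piecewise((0, t <= sp.Rational(1, 2) - sp.Rational(1, n)),
                        (n*t - sp.Rational(n, 2) + 1, t <= sp.Rational(1, 2)),
                        (1, True))

m, n = 4, 10  # any m < n
# since x_m >= x_n pointwise, the norm is the difference of the two integrals
norm = sp.integrate(x(m), (t, 0, 1)) - sp.integrate(x(n), (t, 0, 1))
assert norm == sp.Rational(1, 2*m) - sp.Rational(1, 2*n)  # here 3/40
```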
Is this true about integrating composite functions? Let's say that I'm integrating a composite function, say $f(g(x))$, that is in a form to which I can apply the substitution rule. Is it true to say that both $f$ and $g$ must be differentiable? I understand that the substitution rule requires $g$ to be differentiable and that the substitution rule relies on the chain rule, and the chain rule requires both f and g to be differentiable.
If you talk about the Riemann integral: if $f : [a,b] \to [c,d]$ is R-integrable and $g : [c,d] \to \mathbb{R}$ is continuous, then $g \circ f : [a,b] \to \mathbb R$ is R-integrable. Differentiable implies continuous, so if $f,g$ are differentiable they are R-integrable.
Solving $\int\frac{\ln(1+e^x)}{e^x} \space dx$ I'm trying to solve this integral. $$\int\frac{\ln(1+e^x)}{e^x} \space dx$$ I try to solve it using partial integration twice, but then I get to this point (where $t = e^x$ and $dx = \frac{1}{t} dt$): $$\int\frac{\ln(1+t)}{t^2} \space dt = \frac{1}{t} \cdot \ln(1+t) - \frac{1}{t} \cdot \ln(1+t) + \int\frac{\ln(1+t)}{t^2} \space dt$$ $$\cdots$$ $$0 = 0$$ What am I doing wrong?
Edit: Your problem is having integrated by parts twice. Doing it one time looks like this: $$\int \frac{\ln(1+t)}{t^2} \ \mathrm{d}t = -\frac1{t}\ln(1+t) - \int -\frac1{t}\cdot\frac1{1+t} \ \mathrm{d}t = -\frac1{t}\ln(1+t) + \int \frac1{t}\cdot\frac1{1+t} \ \mathrm{d}t$$ That new integral should be evaluated with partial fractions: $$ \int \frac1{t(t+1)} \ \mathrm{d}t = \int \frac1{t}-\frac1{t+1} \ \mathrm{d}t = \ln t - \ln (t+1) + C $$ If you want to check, the result is the following: $$ \int \frac{\ln(1+e^x)}{e^x} \ \mathrm{d}x= x - \ln(1+e^x) - \frac{\ln(1+e^x)}{e^x} + C$$
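To double-check the stated antiderivative, one can differentiate it symbolically (a sketch; sympy should reduce the difference to zero):

```python
import sympy as sp

x = sp.symbols('x')
# the claimed antiderivative from the answer above
F = x - sp.log(1 + sp.exp(x)) - sp.log(1 + sp.exp(x)) / sp.exp(x)
integrand = sp.log(1 + sp.exp(x)) / sp.exp(x)
assert sp.simplify(sp.diff(F, x) - integrand) == 0
```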
Partitioning of a geometric net into equivalence classes Fellow Puny Humans, A geometric net is a system of points and lines that obeys three axioms: * Each line is a set of points. * Distinct lines have at most one point in common. * If $p$ is a point and $L$ is a line with $p \notin L$, then there is exactly one line $M$ such that $p \in M$ and $L \cap M = \emptyset$. And whenever $L \cap M = \emptyset$ we say that $L$ is parallel to $M$, i.e. $L \parallel M$. So far so good. I want to partition the lines of a geometric net into equivalence classes, with two lines in the same class if they are equal or parallel. One can easily show that the relation "equal or parallel" is an equivalence relation. Let's say there are $m$ such classes; then how many points does a line have in each class? For a given line $l$ in any class, if a point $p \in l$, then how many lines pass through $p$? For example, if I partition them into two classes $CL_1$ and $CL_2$ of parallel or equal lines, then the number of points on any line in $CL_1$ is equal to the number of lines in $CL_2$. This implies that each point belongs to two lines. Can this be extended to the case when the number of classes is $m$, i.e. each point belongs to $m$ lines? I am confused because I cannot show it for the case when more than two lines pass through the same point. This problem is from TAOCP 4(a): Combinatorial Searching, Problem 21 (Addison-Wesley).
It is exactly the number of lines you have in a class, simply because equal or parallel is an equivalence relation. Let me clarify: Say $C_1, C_2,\dots, C_m$ are your equivalence classes. Say $L\in C_i$ is a line in $C_i$, for some $i\in\{1,\dots,m\}$. Say $1\leq j \leq m$, $j\neq i$, and $M\in C_j$. If $L\cap M = \emptyset$, then $L$ is parallel to $M$, and so $L$ and $M$ must belong to the same class, forcing $C_i = C_j$ (classes that share an element coincide). But by assumption $i\neq j$, so $C_i\neq C_j$, and so $L$ must intersect $M$. By one of your axioms, it must intersect $M$ in exactly one point. Since $M$ was arbitrary, $L$ meets every line in $C_j$ in exactly one point.
Secret Number Problem Ten students are seated around a (circular) table. Each student selects his or her own secret number and tells it to the person on his or her right side and to the person on his or her left side (without disclosing it to anyone else). Each student, upon hearing two numbers, then calculates the average and announces it aloud. In order, going around the table, the announced averages are 1,2,3,4,5,6,7,8,9 and 10. What was the secret number chosen by the person who announced a 6?
Let us denote the secret numbers by $x_i$, where $i$ is the number announced by that student; then we have the following system of equations: $\begin{cases} x_1+x_3=4 \\ x_2+x_4=6 \\ x_3+x_5=8 \\ x_4+x_6=10 \\ x_5+x_7=12 \\ x_6+x_8=14 \\ x_7+x_9=16 \\ x_8+x_{10}=18 \\ x_9+x_1=20 \\ x_{10}+x_2=2 \end{cases}$ According to Maple, $x_6=1$, so the requested secret number is $1$.
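The same system can be solved with freely available tools; here is a sketch in sympy (indices shifted so that the Python list entries x[0], ..., x[9] stand for $x_1,\dots,x_{10}$):

```python
import sympy as sp

x = sp.symbols('x1:11')  # x1, ..., x10
# the announcer of i hears the secrets of the announcers of i-1 and i+1 (cyclically),
# so x_{i-1} + x_{i+1} = 2i
eqs = [sp.Eq(x[(i - 2) % 10] + x[i % 10], 2 * i) for i in range(1, 11)]
sol = sp.solve(eqs, x)
print(sol[x[5]])  # x6 = 1
```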
Number of subgroups of prime order I've been doing some exercises from my introductory algebra text and came across a problem which I reduced to proving that: The number of distinct subgroups of prime order $p$ of a finite group $G$ is either $0$ or congruent to $1\pmod{p} $. With my little experience I was unable to overcome this (all I was able to conclude is that these groups are disjoint short of the identity), and also did not find any solution with a search on google (except for stronger theorems which I am not interested in because of my novice level). I remember that a similar result is widely known as one of Sylow Theorems. This result was proven by the use of group actions. But can my problem be proved without using the concept of group actions? Can this be proven WITH the use of that concept? EDIT: With help from comments I came up with this: The action Derek proposed is well-defined largely because in a group if $ab = e$ (the identity), then certainly $ba = e$. By Orbit-Stabilizer Theorem we can see that all orbits are either of size 1 or $p$ (here I had most problems, and found out cyclic group of order $p$ acts on the set of solutions in the same way). The orbits of size 1 contain precisely the elements $(x,x,x....,x)$ for some element x in G. In addition, orders of all orbits add up to $|G|^{(p-1)}$ because the orbits are equivalence classes of an equivalence relation. But certainly $(e,e,e....,e)$ is in an orbit of size 1, and that means there has to be more orbits of exactly one element, actually $p-1 + np$ more for some integer $n$. These elements form the disjoint groups I am looking for. if $p-1$ divides $(p-1 + np)$, it's easy to check the result is 1 mod p. Could someone check if I understood this correctly?
Here's another approach. Consider the solutions to the equation $x_1x_2\cdots x_p=1$ in the group $G$ of order divisible by $p$. Since there is a unique solution for any $x_1,\ldots,x_{p-1}$, the total number of solutions is $|G|^{p-1}$, which is divisible by $p$. If $x_1,x_2,\ldots,x_p$ is a solution, then so is $x_2,x_3,\ldots,x_p,x_1$, and so we have an action of the $p$-cycle $(1,2,3,\ldots,p)$ on the solution set. Since $p$ is prime, the orbits of this action have size $p$ if $x_1,x_2,\ldots,x_p$ are not all equal, and size 1 if they are all equal. So the number of solutions of $x^p=1$ is a multiple of $p$. Now use Steve D's hint to complete the proof. Incidentally there is a theorem of Frobenius that says that for any $n>0$ and any finite group of order divisible by $n$, the number of solutions of $x^n=1$ is a multiple of $n$.
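Frobenius' theorem is easy to test numerically; here is a small sanity check with sympy's permutation groups (a sketch, not part of the proof):

```python
from sympy.combinatorics.named_groups import SymmetricGroup

G = SymmetricGroup(4)  # |G| = 24, divisible by 2 and 3
for p in (2, 3):
    count = sum(1 for g in G.elements if g**p == G.identity)
    print(p, count)       # p=2: 10 solutions, p=3: 9 solutions
    assert count % p == 0
```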
Probability that ace of spades is at bottom of deck IF ace of hearts is NOT at top What is the probability that the ace of spades is at the bottom of a standard deck of 52 cards given that the ace of hearts is not at the top? I asked my older brother, and he said it should be $\frac{50}{51} \cdot \frac{1}{51}$ because that's $$\mathbb{P}(A\heartsuit \text{ not at top}) \times \mathbb{P}(A\spadesuit \text{ at bottom}),$$ but I'm not sure if I agree. Shouldn't the $\frac{50}{51}$ be $\frac{50}{52}$? Thank you!
The ace of hearts has 51 positions available (since it's not at the top). Having placed it somewhere, there are 51 positions available for the ace of spades, so Pr = P(ace of hearts not at bottom) × P(ace of spades at bottom, given that) = 50/51 · 1/51 = 50/51².
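Since only the two aces matter, one can also confirm this by exact enumeration over their positions (Python, written just for this check):

```python
from fractions import Fraction

total = hits = 0
for h in range(52):            # position of the ace of hearts (0 = top)
    if h == 0:
        continue               # condition: hearts not at top
    for s in range(52):        # position of the ace of spades
        if s == h:
            continue
        total += 1
        hits += (s == 51)      # spades at the bottom
print(Fraction(hits, total))   # 50/2601 = (50/51) * (1/51)
```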
Solution of a polynomial of degree n with soluble Galois group. Background: By the fundamental theorem of algebra, every polynomial of degree n has n roots. From Galois theory we know that we can only find exact solutions of polynomials if their corresponding Galois group is soluble. I am studying Galois Theory (Ian Stewart) and I am not getting the result out of it that I expected. I expected to learn to determine, for a polynomial of degree n, its corresponding Galois group, and, if that group is soluble, a recipe to find the exact roots of that polynomial. My experience thus far with Galois theory is that it proves that there is no general solution for polynomials of degree 5 and higher. Question: I want to learn to solve polynomials of degree 5 and higher if they have a soluble Galois group. From which book or article can I learn this?
By exact roots you probably mean radical expressions. Even for equations whose Galois group is unsolvable there might be exact trigonometric expressions for the roots. If you know German, the diploma thesis "Ein Algorithmus zum Lösen einer Polynomgleichung durch Radikale" (An algorithm for the solution of a polynomial equation by radicals) by Andreas Distler is exactly what you're looking for. It is available online. It also contains several program codes. On the other hand, today there are many computer algebra systems which can compute the Galois group of a given polynomial or number field (GAP, Sage, ...).
Is $\sin^3 x=\frac{3}{4}\sin x - \frac{1}{4}\sin 3x$? $$\sin^3 x=\frac{3}{4}\sin x - \frac{1}{4}\sin 3x$$ Is there any formula that tells this or why is it like that?
You can use De Moivre's identity. Let
$$z=\cos x+i \sin x,\qquad \frac{1}{z}=\cos x-i \sin x.$$
Subtracting the second equation from the first gives
$$2i\sin x=z-\frac{1}{z},$$
and by De Moivre, $z^n=(\cos x+i\sin x)^n=\cos nx+i\sin nx$. So
$$(2i\sin x)^3=\left(z-\frac{1}{z}\right)^{3},\qquad\text{i.e.}\qquad -8i\sin^3 x=z^3-\frac{1}{z^3}-3\left(z-\frac{1}{z}\right).$$
Since $z^3-\dfrac{1}{z^3}=2i\sin 3x$ and $z-\dfrac{1}{z}=2i\sin x$, this is
$$-8i\sin^3 x=2i\sin 3x-6i\sin x,$$
and dividing by $-8i$,
$$\boxed{\sin^{3} x=\frac{3}{4}\sin x-\frac{1}{4}\sin 3x.}$$
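If you want a machine check of the identity, sympy can verify it (a sketch):

```python
import sympy as sp

x = sp.symbols('x')
lhs = sp.sin(x)**3
rhs = sp.Rational(3, 4)*sp.sin(x) - sp.Rational(1, 4)*sp.sin(3*x)
# expand sin(3x) into sin(x), cos(x) and let simplify apply sin^2 + cos^2 = 1
assert sp.simplify(sp.expand_trig(rhs) - lhs) == 0
```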
Showing $f(x)/x \to 0$ when $\lvert f(x) - f(\lambda x)\rvert/x \to 0$ I would like to solve this problem, but I do not know how ... Let $f:(0;1) \rightarrow \mathbb{R}$ be a function such that: $$\lim_{x \to0^+}f(x)=0$$ and such that there exists $0<\lambda<1$ such that: $$\lim_{x \to0^+} \frac{ \left [ f(x)-f(\lambda x) \right ]}{x}=0$$ prove that $$\lim_{x \to0^+} \frac{f(x)}{x}=0$$
Since $$ \frac{f(x) - f(\lambda x)}{x} \to 0,$$ for any $\epsilon > 0$, we can restrict $x$ near enough $0$ so that we have $\lvert f(x) - f(\lambda x)\rvert \leq \epsilon \lvert x \rvert$. Since $0 < \lambda < 1$, this means that we also have $\lvert f(\lambda^n x) - f(\lambda^{n+1} x) \rvert \leq \epsilon \lvert x \rvert\lambda^n$ for each $n \geq 0$. By using the triangle inequality, we get that $$ \begin{align} \lvert f(x) - f(\lambda^n x) \rvert &= \lvert f(x) - f(\lambda x) + f(\lambda x) + \cdots - f(\lambda^n x)\vert \\ &\leq \epsilon \lvert x \rvert ( 1 + \lambda + \lambda^2 + \cdots + \lambda^{n-1}) \\ &\leq \epsilon \lvert x \rvert \frac{1 - \lambda^n}{1 - \lambda} \\ &\leq \epsilon \lvert x \rvert \frac{1}{1 - \lambda}. \end{align}$$ Notice the final expression on the right is independent of $n$. By letting $n \to \infty$, the right hand side does not change, while the term $f(\lambda^n x) \to 0$ on the left hand side. This leads to an expression of the form $$\lvert f(x) \rvert \leq \epsilon \lvert x \rvert \frac{1}{1 - \lambda},$$ or equivalently $$ \frac{\lvert f(x) \rvert}{\lvert x \rvert} \leq \epsilon \frac{1}{1 - \lambda}$$ for all $\epsilon > 0$. Choosing $\epsilon \to 0$ completes the proof. $\diamondsuit$
Limit of the sequence of regular n-gons. Let $A_n$ be the regular $n$-gon inscribed in the unit circle. It appears intuitively obvious that as $n$ grows, the resulting polygon approximates a circle ever closer. Can it be shown that the limit as $n \rightarrow \infty $ of $A_n$ is a circle?
Given a sequence of sets $(A_n)_{n\geq3}$ there is a natural $\liminf_{n\to\infty} A_n=:\underline{A}$ and a natural $\limsup_{n\to\infty}A_n=:\overline{A}$ of this sequence. In the problem at hand the $A_n$ are closed regular $n$-gons inscribed in the unit circle, all sharing the point $P:=(1,0)$. The set $\underline{A}$ consists of all points that are in all but finitely many of the $A_n$. It is easy to see that all points $z\in D:=\{(x,y)\ |\ x^2+y^2 < 1\}$ satisfy this condition and that in fact $\underline{A}=D\cup\{P\}$. The set $\overline{A}$ consists of all points that are in infinitely many $A_n$. Obviously $\overline{A}\supset\underline{A}\ $, and $\overline{A}$ is contained in $\overline{D}=\{(x,y)\ |\ x^2+y^2 \leq 1\}$. In fact $\overline{A}\cap\partial D$ consists of all points on the unit circle whose argument is a rational multiple of $\pi$. This is how much you can say on the pure set-theoretical level; an actual limit set $A_*$ does not exist.
Changing the argument for a higher order derivative I start with the following: $$\frac{d^n}{dx^n} \left[(1-x^2)^{n+\alpha-1/2}\right]$$ Which is part of the Rodrigues definition of a Gegenbauer polynomial. Gegenbauer polynomials are also useful in terms of trigonometric functions so I want to use the substitution $x = \cos\theta$, which is the usual way of doing it. However, I'm stuck as to how this works for the Rodrigues definition, because it gives me a derivative with respect to $\cos\theta$ instead of a derivative with respect to $\theta$: $$\frac{d^n}{d(\cos\theta)^n} \left[(\sin^2\theta)^{n+\alpha-1/2}\right]$$ QUESTION: Is there a way to write this as $\dfrac{d^n}{d\theta^n}[\text{something}]$? I have read some about Faa di Bruno's formula for the $n$-th order derivative of a composition of functions but it doesn't seem to do what I want to do. Also, for n=1 there is the identity, from the chain rule, $\dfrac{d}{d(\cos\theta)} \left[(\sin^2\theta)^{n+\alpha-1/2}\right]=\frac{\frac{d}{d\theta} \left[(\sin^2\theta)^{n+\alpha-1/2}\right]}{\frac{d}{d\theta} \left[\cos\theta\right]}$, but this doesn't hold for higher order derivatives. Any ideas?
Instead of Faà di Bruno's formula, you can try generalizing the formula for the $n$th derivative of an inverse function. Let $f,g$ be functions of $x$ and inverses of each other. We know that $\displaystyle f'=\frac{1}{g'}$, i.e. $f'g'=1$. Using Leibniz' rule, we get $\displaystyle (f'g')^{(n)}(\theta)=\sum_{k=0}^n \binom{n}{k} f^{(n-k+1)} g^{(k+1)}(\theta)=0 \quad (1)$ Using this equation recursively with $x=\cos^{2}\theta$, so that $f(\theta)=\cos^{2}\theta$ and $f'(\theta)=-2\sin\theta \cos\theta$, one can find the required values. Thus, we have: $\displaystyle\frac{d}{d(\cos^2\theta)}\left[(\sin^2\theta)^{n+\alpha-1/2}\right]=\frac{1}{\frac{d}{d\theta}\left[(\sin^2\theta)^{n+\alpha-1/2}\right]}=\frac{1}{2\sin^{2n+2\alpha-1}\theta\,\cos\theta}$. Then, using $(1)$, we can determine $\displaystyle{\frac{d^n}{d\theta^n}\left[(\sin^2\theta)^{n+\alpha-1/2}\right]}$ for $n>1$ as well.
Derivative of a function is odd prove the function is even. $f:\mathbb{R} \rightarrow \mathbb{R}$ is such that $f'(x)$ exists $\forall x.$ And $f'(-x)=-f'(x)$ I would like to show $f(-x)=f(x)$ In other words a function with odd derivative is even. If I could apply the fundamental theorem of calculus $\int_{-x}^{x}f'(t)dt = f(x)-f(-x)$ but since the integrand is odd we have $f(x)-f(-x)=0 \Rightarrow f(x)=f(-x)$ but unfortunately I don't know that f' is integrable.
* Define functions $f_0(x)=(f(x)+f(-x))/2$ and $f_1(x)=(f(x)-f(-x))/2$. Then $f_0$ and $f_1$ are also differentiable, and $f_0$ is even and $f_1$ is odd. * Show that the derivative of an odd function is even, and that of an even function is odd. * From the equality $f'=f_0'+f_1'$ conclude that $f_1$ is constant and, therefore, zero.
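Spelling out how the three bullets combine (only parities are used): differentiating $f_0(-x)=f_0(x)$ and $f_1(-x)=-f_1(x)$ gives
$$-f_0'(-x)=f_0'(x)\quad(\text{so }f_0'\text{ is odd}),\qquad f_1'(-x)=f_1'(x)\quad(\text{so }f_1'\text{ is even}).$$
Then $f_1'=f'-f_0'$ is a difference of odd functions, hence odd, and also even; a function that is both is identically $0$. So $f_1$ is constant, and $f_1(0)=0$ forces $f_1\equiv0$, i.e. $f=f_0$ is even.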
Inequality for modulus Let $a$ and $b$ be complex numbers with modulus $< 1$. How can I prove that $\left | \frac{a-b}{1-\bar{a}b} \right |<1$ ? Thank you
Here are some hints: Calculate $|a-b|^2$ and $|1-\overline{a}b|^2$ using the formula $|z|^2=z\overline{z}$. To show that $\displaystyle\left | \frac{a-b}{1-\bar{a}b} \right |<1$, it's equivalent to show that $$\tag{1}|1-\overline{a}b|^2-|a-b|^2>0.$$ To show $(1)$, you need to use the fact that $|a|<1$ and $|b|<1$. If you need more help, I can give your more details.
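For completeness, carrying out the hinted computation:
$$|1-\bar ab|^2-|a-b|^2=(1-\bar ab)(1-a\bar b)-(a-b)(\bar a-\bar b)=1-|a|^2-|b|^2+|a|^2|b|^2=(1-|a|^2)(1-|b|^2),$$
which is positive since $|a|<1$ and $|b|<1$, proving $(1)$.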
What is the purpose of Stirling's approximation to a factorial? Stirling approximation to a factorial is $$ n! \sim \sqrt{2 \pi n} \left(\frac{n}{e}\right)^n. $$ I wonder what benefit can be got from it? From computational perspective (I admit I don't know too much about how each arithmetic operation is implemented and which is cheaper than which), a factorial $n!$ contains $n-1$ multiplications. In Stirling's approximation, one also has to compute one division and $n$ multiplications for $\left(\frac{n}{e}\right)^n$, no? Plus two multiplication and one square root for $\sqrt{2 \pi n}$, how does the approximation reduce computation? There may be considerations from other perspectives. I also would like to know. Please point out your perspective if you can. Added: For purpose of simplifying analysis by Stirling's approximation, for example, the reply by user1729, my concern is that it is an approximation after all, and even if the approximating expression converges, don't we need to show that the original expression also converges and converges to the same thing as its approximation converges to? Thanks and regards!
A Stirling-type inequality $$(n!)^{\frac{1}{n}} \ge \frac{n+1}{e},\qquad\text{equivalently}\qquad \frac{1}{(n!)^{1/n}} \le \frac{e}{n+1},$$ can be used to derive Carleman's inequality from the AM-GM inequality.
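A quick numeric sanity check of this bound for the first few $n$ (Python):

```python
import math

# (n!)^(1/n) >= (n+1)/e, checked for n = 1, ..., 29
for n in range(1, 30):
    assert math.factorial(n) ** (1.0 / n) >= (n + 1) / math.e
```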
Is there a simple formula for this simple question about a circle? What is the average distance of the points within a circle of radius $r$ from a point a distance $d$ from the centre of the circle (with $d>r$, though a general solution without this constraint would be nice)? The question arose as an operational research simplification of a real problem in telecoms networks and is easy to approximate for any particular case. But several of my colleagues thought that such a simple problem should have a simple formula as the solution, but our combined brains never found one despite titanic effort. It looks like it might involve calculus. I'm interested in both how to approach the problem, but also a final algebraic solution that could be used in a spreadsheet (that is, I don't want to have to integrate anything).
I guess it involves calculus. Let $(x,y)$ be a point within the circle of radius $R$ and $(d,0)$ the coordinates of the point a distance $d$ away from the origin (because of the symmetry we can choose it to lie on the $x$-axis). Then the distance between the two points is given by $$\ell = \sqrt{(x-d)^2 + y^2}.$$ Averaging over the circle is best done in polar coordinates with $x=r \cos \phi$ and $y=r \sin\phi$. We have $$\begin{align} \langle \ell \rangle &= \frac{1}{\pi R^2} \int_0^R dr \int_0^{2\pi} d\phi\, r \sqrt{(r\cos \phi -d)^2 + r^2\sin^2 \phi}\\ &= \frac{1}{\pi R^2} \int_0^R dr \int_0^{2\pi} d\phi\, r \sqrt{r^2 + d^2 -2 d r\cos\phi}. \end{align}$$ I am not sure if the integral has a simple analytic solution. Thus, I calculate it in three simple limits. (a) $d\gg R$: we can expand the $\sqrt{}$ and have $$\langle \ell \rangle = \frac{1}{\pi R^2} \int_0^R dr \int_0^{2\pi} d\phi\, [r d- r^2 \cos \phi + \frac{r^3}{2d} \sin^2 \phi] = d + \frac{R^2}{8d}$$ (b) for $d \approx R$, we have $$\langle \ell \rangle = \frac{1}{\pi R^2} \int_0^R dr \int_0^{2\pi} d\phi\, r^2 \sqrt{2 (1-\cos \phi)} = \frac{8 R}{3\pi}$$ (c) for $d\ll R$ [joriki's comment] $$\langle \ell \rangle = \frac{1}{\pi R^2} \int_0^R dr \int_0^{2\pi} d\phi\, \left[r^2 -rd\cos\phi + \frac{d^2}{2} \cos^2\phi\right] = \frac{2 R}{3} + \frac{d^2}{2R}$$
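The limits are easy to test against a Monte Carlo estimate, which may also serve the original spreadsheet-style use case (a sketch; the parameters are arbitrary):

```python
import math
import random

def avg_dist(R, d, N=200_000):
    total = 0.0
    for _ in range(N):
        while True:  # uniform point in the disc by rejection sampling
            x, y = random.uniform(-R, R), random.uniform(-R, R)
            if x*x + y*y <= R*R:
                break
        total += math.hypot(x - d, y)
    return total / N

R = 1.0
print(avg_dist(R, 5.0), 5.0 + R**2 / (8 * 5.0))   # limit (a): both ~ 5.025
print(avg_dist(R, R),   8 * R / (3 * math.pi))    # case (b): both ~ 0.849
```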
Easy way to determine the primes for which $3$ is a cube in $\mathbb{Q}_p$? This is a qual problem from Princeton's website and I'm wondering if there's an easy way to solve it: For which $p$ is $3$ a cube root in $\mathbb{Q}_p$? The case $p=3$ for which $X^3-3$ is not separable modulo $p$ can easily be ruled out by checking that $3$ is not a cube modulo $9$. Is there an approach to this that does not use cubic reciprocity? If not, then I'd appreciate it if someone would show how it's done using cubic reciprocity. I haven't seen good concrete examples of it anywhere. EDIT: I should have been more explicit here. What I really meant to ask was how would one find all the primes $p\neq 3$ s.t. $x^3\equiv 3\,(\textrm{mod }p)$ has a solution? I know how to work with the quadratic case using quadratic reciprocity, but I'm not sure what should be done in the cubic case.
For odd primes $q \equiv 2 \pmod 3,$ the cubing map is a bijection, 3 is always a cube $\pmod q.$ For odd primes $p \equiv 1 \pmod 3,$ by cubic reciprocity, 3 is a cube $\pmod p$ if and only if there is an integer representation $$ p = x^2 + x y + 61 y^2, $$ or $4p=u^2 + 243 v^2.$ In this form this is Exercise 4.15(d) on page 91 of Cox. Also Exercise 23 on page 135 of Ireland and Rosen. The result is due to Jacobi (1827). For more information when cubic reciprocity is not quite good enough, see Representation of primes by the principal form of discriminant $-D$ when the classnumber $h(-D)$ is 3 by Richard H. Hudson and Kenneth S. Williams, Acta Arithmetica (1991) volume 57 pages 131-153.
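Jacobi's criterion is pleasant to verify by brute force for small primes (a sketch in Python, independent of the cited papers; `isqrt` implements the perfect-square test):

```python
from math import isqrt
from sympy import primerange

for p in primerange(5, 1000):
    if p % 3 == 2:
        continue  # cubing is a bijection mod such p, so 3 is always a cube
    is_cube = any(pow(x, 3, p) == 3 for x in range(p))
    # does 4p = u^2 + 243 v^2 have an integer solution?
    has_rep = any(isqrt(4*p - 243*v*v)**2 == 4*p - 243*v*v
                  for v in range(isqrt(4*p // 243) + 1))
    assert is_cube == has_rep
```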
Show that $D_{12}$ is isomorphic to $D_6\times C_2$ Show that $D_{12}$ is isomorphic to $D_6 \times C_2$, where $D_{2n}$ is the dihedral group of order $2n$ and $C_2$ is the cyclic group of order $2$. I'm somewhat in over my head with my first year groups course. This is a question from an example sheet which I think if someone answered for me could illuminate a few things about isomorphisms to me. In this situation, does one use some lemma (that the direct product of two subgroups being isomorphic to their supergroup(?) if certain conditions are satisfied)? Does $D_{12}$ have to be abelian for this? Do we just go right ahead and search for a fitting bijection? Can we show the isomorphism is there without doing any of the above? If someone could please answer the problem in the title and talk their way through, they would be being very helpful. Thank You.
Assuming $D_{n}$ is the dihedral group of order $n$, I would proceed as follows. Note that $D_{6} \cong S_{3}$, and $S_{3}$ is generated by $(12)$ and $(123)$. Therefore $D_{6} \times C_{2} = \langle ((12),[0]),((123),[1]) \rangle$. Next note that $D_{12} = \langle r,s | \, r^{6}=s^{2}=e , s^{-1}rs=r^{-1} \rangle$, map the generators of $D_{6} \times C_{2}$ to the generators of $D_{12}$, and show this extends to a bijection.
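If you'd like a computational confirmation, recent versions of sympy can test the isomorphism directly (a sketch; note that sympy's `DihedralGroup(n)` has order $2n$, so `DihedralGroup(6)` is the $D_{12}$ of this question):

```python
from sympy.combinatorics.named_groups import DihedralGroup, SymmetricGroup, CyclicGroup
from sympy.combinatorics.group_constructs import DirectProduct
from sympy.combinatorics.homomorphisms import is_isomorphic

D12 = DihedralGroup(6)                                     # order 12
D6xC2 = DirectProduct(SymmetricGroup(3), CyclicGroup(2))   # S_3 x C_2 with S_3 = D_6
print(is_isomorphic(D12, D6xC2))                           # True
```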
Exercise about semidirect product This is exercise 7.12 from Algebra, Isaacs. $ G= N \rtimes H \ $ is a semidirect product; no nonidentity element of H fixes any nonidentity element of N; identify N and H with the corresponding subgroups of G. Show that: a) $ H \bigcap H^{g } = 1 $ for all $ g \in G - H $ b) If G is finite, $ G = N \cup \bigcup_{g \in G} \ H^{g} $
(a) Let $x\in H\cap H^g$, where $g=hn$, $h\in H$, $n\in N$, $n\neq 1$. Then there exists $y\in H$ such that $x=g^{-1}yg = n^{-1}(h^{-1}yh)n$. Since $h^{-1}yh\in H$, it suffices to consider the case of $g\in N$. So we set $g=n$. (The intuition is that we want to go to some expression like $x^{-1}nx = n$, because this will force $x=1$ given our assumption. The rest of the computations are done to try to get that.) Thus, we want to show that if $n\in N$, then $H\cap H^n=\{1\}$. If $ x= n^{-1}yn$, then $nx = yn$, so $x^{-1}nx = (x^{-1}y)n$. However, since $N$ is normal, $x^{-1}nx\in N$, so $x^{-1}y\in N$. Since $x,y\in H$, then $x^{-1}y=1$, hence $x=y$. Thus, $x=n^{-1}xn$. But this in turn means $x^{-1}nx = n$. Since no nonidentity element of $H$ fixes any nonidentity element of $N$, and $n\neq 1$, then $x=1$. Thus, $H\cap H^n=\{1\}$, and by the argument above we conclude that $H\cap H^g=\{1\}$ for any $g\in G-H$, as claimed. (b) Added: The fact that this is restricted to finite groups should suggest that you are looking for a counting argument: that is, that you want to show that the number of elements on the right hand side is equal to the number of elements on the left hand side, rather than trying to prove set equality by double inclusion. Note that $H^g=H^{g'}$ if and only if $gg'^{-1}\in N_G(H)$; by (a), $N_G(H)=H$, so we can take the union over a set of coset representatives of $H$ in $G$; $N$ works nicely as such a set. Again using (a) we have then that: $$\begin{align*} \left| N \cup \bigcup_{g\in G}H^g\right| &= \left| N \cup \bigcup_{n\in N}H^n\right|\\ &= |N| + |N|(|H|-1)\\ &= |N| + |N||H|-|N|\\ &= |N||H|\\ &=|G|, \end{align*}$$ and we are done.
Estimate probabilities from its moments I want to estimate the probability $\Pr(X \leq a)$, where $X$ is a continuous random variable and $a$ is given, based only on some moments of $X$ (e.g., the first four moments, but without knowing its distribution type).
As I had pointed out in my comments, it's hard to answer this question in generality. So, I'll just point you to a resource online. But, that said, the magic words are generating functions: probability generating functions and moment generating functions. The probability generating function $\Phi_X$ exists only for non-negative integer valued random variables. The moment generating function $M_X$ is related to the former [whenever and wherever both exist] by the following: $$M_X(t)=\Phi_X(e^t)$$ Sometimes other inputs are required, sometimes not. So, please go through the material I have pointed you to. EDITED TO ADD: I'll get a little specific now: If the random variable at hand has finite range, and you have all the moments, then the distribution of $X$ can be found out {Theorem 10.2, pp 5, 369 in the typeset}. If you just have the first two moments, you'll get only the mean and variance. I'd love to hear from you in case you have specific queries. [Just add a comment below, I'll be notified!]
Continuous extension of a real function defined on an open interval Let $I\subset\mathbb{R}$ be a compact interval and let $J$ denote its interior. Consider $f:J\to\mathbb{R}$ being continuous. * *Under which conditions does the following statement hold? $$ \text{There exists a continuous extension $g:I\to\mathbb{R}$ of $f$.}\tag{A} $$ *Is boundedness of $f$ sufficient for (A)?
Call $I = [a, b]$ with $-\infty < a < b < \infty$. Such an extension exists if and only if both $\lim_{x\to a^+} f(x)$ and $\lim_{x\to b^-} f(x)$ exist, and in fact these values become the values of the extension. (The proof is left as a simple exercise.) With this in mind, boundedness is not sufficient due to previously mentioned functions, such as $\sin\left(\frac{1}{x}\right)$ on $(0, 1)$.
How to prove that every infinite cardinal is equal to $\omega_\alpha$ for some $\alpha$? How to prove that every infinite cardinal is equal to $\omega_\alpha$ for some $\alpha$ in Kunen's book, I 10.19? I will appreciate any help on this question. Thanks ahead.
I took the trouble to read through Kunen in order to understand the problem, as well as the definitions which you can use for this. * A cardinal is defined to be an ordinal $\kappa$ such that there is no $\beta<\kappa$ with a bijection between $\kappa$ and $\beta$. * The successor cardinal $\kappa^+$ is the least cardinal which is strictly larger than $\kappa$. * $\aleph_\alpha=\omega_\alpha$ is defined recursively, as the usual definitions go: $\aleph_0=\omega$; $\aleph_{\alpha+1}=\omega_{\alpha+1}=\omega_\alpha^+$; at limit points $\aleph_\beta=\omega_\beta=\sup\{\omega_\alpha\mid\alpha<\beta\}$. Now we want to show that: Every cardinal is an $\omega_\alpha$ for some $\alpha$. Your question concentrates on the second part of the lemma. Suppose $\kappa$ is an infinite cardinal. If $\kappa=\omega$ we are done. Otherwise let $\beta=\sup\{\alpha+1\mid\omega_\alpha<\kappa\}$. I claim that $\kappa=\omega_\beta$. Suppose that $\omega_\beta<\kappa$; then we reach a contradiction, since this would put $\beta+1$ in the set above, so that $\beta<\beta+1\le\sup\{\alpha+1\mid\omega_\alpha<\kappa\}=\beta$. If so, $\kappa\le\omega_\beta$. If $\beta=\alpha+1$ then $\omega_\alpha<\kappa\le\omega_\beta$ and by the definition of a successor cardinal we have equality. Otherwise $\beta$ is a limit ordinal and we have that $\omega_\alpha<\kappa$ for every $\alpha<\beta$; then by the definition of a supremum we have that $\omega_\beta\le\kappa$, and again we have equality.
Computing the Taylor series for a trigonometric exponential function How do I compute the Taylor series for $\cos(x)^{\sin(x)}$? I tried using the $e^x$ rule but I still am not getting to the result: $$\cos(x)^{\sin(x)}=1-\frac{x^3}{2}+\frac{x^6}{8}+o(x^6).$$
Your formula ($\cos(x)^{\sin(x)}=1-\frac{x^3}{2}+\frac{x^6}{8}+o(x^6)$) comes from the definition of the Taylor series: $$f(x) = \sum_{n=0}^{\infty}\frac{f^{(n)}(x_0)}{n!}(x-x_0)^n,$$ where $f^{(n)}(x)$ is the $n$th derivative of $f(x)$ with respect to $x$. (Notice that $f\in C^{\infty}$ near $0$.) Put $x_0=0$ and calculate the coefficients of $x^0$, $x^1$, ..., $x^6$.
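For what it's worth, it is easier to bypass the raw derivatives: writing $\cos(x)^{\sin x}=e^{\sin x\,\ln\cos x}$ and composing standard series,
$$\sin x\,\ln\cos x=\Bigl(x-\tfrac{x^3}{6}+\tfrac{x^5}{120}-\cdots\Bigr)\Bigl(-\tfrac{x^2}{2}-\tfrac{x^4}{12}-\cdots\Bigr)=-\tfrac{x^3}{2}+O(x^7)$$
(the $x^5$ terms cancel), hence
$$\cos(x)^{\sin x}=e^{-x^3/2+O(x^7)}=1-\frac{x^3}{2}+\frac{x^6}{8}+O(x^7).$$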
Examples of Galois connections? On TWF week 201, J. Baez explains the basics of Galois theory, and say at the end : But here's the big secret: this has NOTHING TO DO WITH FIELDS! It works for ANY sort of mathematical gadget! If you've got a little gadget k sitting in a big gadget K, you get a "Galois group" Gal(K/k) consisting of symmetries of the big gadget that fix everything in the little one. But now here's the cool part, which is also very general. Any subgroup of Gal(K/k) gives a gadget containing k and contained in K: namely, the gadget consisting of all the elements of K that are fixed by everything in this subgroup. And conversely, any gadget containing k and contained in K gives a subgroup of Gal(K/k): namely, the group consisting of all the symmetries of K that fix every element of this gadget. Apart from fields, what other "big gadgets" can be described in this way ? And what are the corresponding "little gadgets" ?
This wikipage gives a good list of examples of both monotone and antitone Galois connections.
Differentiability of Moreau-Yosida approximation. I want to show that if $X$ is a reflexive Banach space with norm of class $\mathcal{C}^1$ and $f\colon X\to\mathbb{R}\cup \{+\infty\}$ is convex and lower semicontinuous, then $f_{\lambda}$ is differentiable of class $\mathcal{C}^1$. (where $f_{\lambda}:X\to\mathbb{R}\cup \{+\infty\}$ is the Moreau-Yosida approximation: $$f_\lambda(x)=\inf_{y\in X} \left\{ f(y)+\frac{1}{2\lambda}|x-y|^2\right\})$$ Maybe, this result could be useful: If $g\colon X\to\mathbb{R}$ is convex and differentiable in every point then $g\in\mathcal{C}^1(X)$. Many thanks in advance.
Observe that the subdifferential of the function $y\to \frac{\|x-y\|^2}{2\lambda}+f(y)$ is the operator $$y\to F(y-x)+\partial f(y),$$ where $F:X\to X^*$ is a duality mapping ($Fx=\{f^*\in X^*\,|\,\langle f^*,x\rangle=\|x\|^2=\|f^*\|^2\}$). Now, recall that a point $y$ is a minimizer of a convex function $g$ iff $0\in \partial g(y)$. Since $F$ is a duality mapping, $A=\partial f$ is maximal monotone and $X$ is reflexive, invoking Rockafellar's (or Minty's, I don't remember) theorem, we have that the equation $$F(y-x)+Ay\ni0$$ has a unique solution, which gives that the infimum is attained. Contrary to user53153's answer, the argument which realizes the infimum is not only a weak limit of a minimizing sequence, but also a solution to a certain equation. This has a direct impact on Gâteaux differentiability of $f_\lambda$ if we don't assume that the norm is $\mathcal{C}^1$.
Probability of choosing the correct stick out of a hundred. Challenge from reality show. So I was watching the amazing race last night and they had a mission in which the contestants had to eat from a bin with 100 popsicles where only one of those popsicles had a writing on its stick containing the clue. Immediately I thought well of course choosing the correct stick is 1 in a 100. So the probability of taking the correct stick on the first try is $\frac{1}{100}$. Then on the second attempt it should be $\frac{1}{99}$ and so on. Multiplying these results gives an ever smaller number, and so it seems that the more times you try, the smaller the probability of getting the correct stick, while it seems that the more times you try, the more probable it is for you to get the correct stick. So how do you calculate the probability of getting the correct one on the first try? The second? What about the last? I mean the probability of trying 100 times to get the correct stick? Thanks.
The probability of getting the first wrong is $\dfrac{99}{100}$. The probability of getting the second right given the first is wrong is wrong $\dfrac{1}{99}$; the probability of getting the second wrong given that the first is wrong $\dfrac{98}{99}$. And this pattern continues. Let's work out the probability of getting the correct one on the fourth try: it is the probability of getting the first three wrong $\dfrac{99}{100}\times\dfrac{98}{99}\times\dfrac{97}{98}$ times the probability of getting the fourth correct given the first three were wrong $\dfrac{1}{97}$. It should be obvious that the answer is $\dfrac{1}{100}$. It will still be $\dfrac{1}{100}$, no matter which position you are considering. This should not be a surprise as each position of the stick is equally likely.
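The telescoping in that product is worth seeing once with exact arithmetic (a quick check in Python):

```python
from fractions import Fraction

# probability the first three picks are wrong, times the fourth being right
p = Fraction(99, 100) * Fraction(98, 99) * Fraction(97, 98) * Fraction(1, 97)
print(p)  # 1/100
```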
Why does the following set have no subspaces but {0} and itself? Here's the statement: The following set, $V$, only has subspaces $\{0\}$ and $V$. $$V=\{f(t) \colon \mathbb R \to \mathbb R \mid f'(t) = k\cdot f(t) \text{ where } k \text{ is a constant}\}$$ I'm having trouble understanding why there are no other subspaces. Why is this the case here? Examples are welcome. I provided a definition of "subspace" in the comments.
HINT $\rm\displaystyle\ \begin{align} f{\:'} &=\ \rm k\ f \\ \rm \:\ g' &=\ \rm k\ g \end{align}\ \Rightarrow\ \dfrac{f{\:'}}f\: =\: \dfrac{g'}g\: \iff\: \bigg(\!\!\dfrac{g}f\bigg)' =\ 0\ \iff \ g\: =\: c\ f,\ \ \ c'\: =\ 0,\ $ i.e. $\rm\ c\:$ "constant". This is a special case of the the Wronskian test for linear dependence.
Solutions to the matrix equation $\mathbf{AB-BA=I}$ over general fields Some days ago, I was thinking on a problem, which states that $$AB-BA=I$$ does not have a solution in $M_{n\times n}(\mathbb R)$ and $M_{n\times n}(\mathbb C)$. (Here $M_{n\times n}(\mathbb F)$ denotes the set of all $n\times n$ matrices with entries from the field $\mathbb F$ and $I$ is the identity matrix.) Although I couldn't solve the problem, I came up with this problem: Does there exist a field $\mathbb F$ for which that equation $AB-BA=I$ has a solution in $M_{n\times n}(\mathbb F)$? I'd really appreciate your help.
HINT $\ $ Extending a 1936 result of Shoda for characteristic $0,$ Benjamin Muckenhoupt, a 2nd year graduate student of A. Adrian Albert, proved in the mid fifties that in the matrix algebra $\rm\ \mathbb M_n(F)\ $ over a field $\rm\:F\:$, a matrix $\rm\:M\:$ is a commutator $\rm\ M\: = \: A\:B - B\:A\ $ iff $\rm\:M\:$ is traceless, i.e. $\rm\ tr(M) = 0\:.$ From this we infer that $1$ is a commutator in $\rm\mathbb M_n(\mathbb F_p)\ \iff\ n = tr(1) = 0\in \mathbb F_p \iff\ p\ |\ n \:.$ Muckenhoupt and Albert's proof is short and simple and is freely accessible at the link below A. A. Albert, B. Muckenhoupt. On matrices of trace zero, Michigan Math. J. 4, #1 (1957), 1-3. Tracing citations to this paper reveals much literature on representation by (sums of) commutators and tracefree matrices. For example, Rosset proved that in a matrix ring $\rm\:\mathbb M_n(R)\:$ over a commutative ring $\rm\:R\:,\:$ every matrix of trace zero is a sum of two commutators.
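For a concrete illustration of the case $p \mid n$, take $n = p = 2$ over $\mathbb F_2$: with the matrix units $A=E_{12}$ and $B=E_{21}$ one gets $AB-BA=\operatorname{diag}(1,-1)\equiv I \pmod 2$. A two-line check:

```python
import numpy as np

A = np.array([[0, 1], [0, 0]])   # E_12
B = np.array([[0, 0], [1, 0]])   # E_21
print((A @ B - B @ A) % 2)       # [[1 0] [0 1]], the identity over F_2
```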
Integrate $\log(x)$ with Riemann sum In a homework problem I am asked to calculate $\int_1^a \log(x) \mathrm dx$ using a Riemann sum. It also says to use $x_k := a^{k/n}$ as steps for the stair functions. So far I have this: My step size is $x_k - x_{k-1}$ which can be reduced to $a^{\frac{k-1}{n}} (a^{\frac{1}{n}} -1)$. The total sum then is: $$ S_n = \sum_{k=0}^n \frac{k}{n} \log(a) a^{\frac{k-1}{n}} (a^{\frac{1}{n}} -1) $$ $$ S_n = \log(a) \frac{a^{\frac{1}{n}}}{n} (a^{\frac{1}{n}} -1) \sum_{k=0}^n k a^{\frac{k}{n}} $$ When I punch this into Mathematica to compute the limit $n \rightarrow \infty$, it gives me $1-a+a \log(a)$, which seems fine to me. The problem gives a hint that I should find the limit of $n(a^{\frac{1}{n}} - 1)$ by setting it equal to a difference quotient. Mathematica says that the limit is $\log(a)$, but that does not really help me out either. How do I tackle this problem? Thanks!
First, a remark on which integral the sums compute: the partition points $x_k=a^{k/n}$ run over $[1,a]$, so the Riemann sums approximate the integral of the logarithm between $1$ and $a$, and not between $0$ and $a$; accordingly, the limit $1-a+a \ln (a)$ is exactly the antiderivative of $\ln(a)$ that vanishes at $a=1$. 1) The limit of $n (a^{1/n}-1)$ We can assume that $a$ is positive. I will put $f(x) = a^x = e^{x \ln (a)}$ for non-negative $x$. We can see that: $$\lim_{n \to + \infty} n (a^{1/n}-1) = \lim_{n \to + \infty}\frac{f(1/n)-f(0)}{1/n} = \lim_{h \to 0} \frac{f(h)-f(0)}{h} = f' (0).$$ Interpreting a limit as the derivative of some well-chosen function is a useful trick (that is, before you learn more powerful and general methods). Now, find by yourself the result of Mathematica :) 2) Back to your problem As a preliminary remark, I advise you to be careful about the bounds in your sums. A nice Riemann sum is a sum going from $0$ to $n-1$, or from $1$ to $n$, so that it has exactly $n$ terms and does not overflow from the domain of integration. Here, we are looking at: $$ S_n = \sum_{k=1}^n (x_k^{(n)} - x_{k-1}^{(n)}) \ln(x_k^{(n)}) = \sum_{k=1}^n a^{\frac{k-1}{n}} (a^{\frac{1}{n}}-1) \ln(a^{\frac{k}{n}}) = \ln (a) a^{-\frac{1}{n}} n (a^{\frac{1}{n}}-1) \left[ \frac{1}{n} \sum_{k=1}^n \frac{k}{n} a^{\frac{k}{n}} \right]$$ (I prefer sums going from $0$ to $n-1$, but since $\ln (0) = - \infty$ it is a tad easier to use a sum from $1$ to $n$.) As $n$ goes to $+ \infty$, we know that $a^{-1/n}$ converges to $1$ and that $n (a^{1/n}-1)$ converges to $\ln (a)$, so that: $$ \int_1^a \ln (x) dx = \lim_{n \to +\infty} S_n = \ln (a)^2 \lim_{n \to + \infty} \left[ \frac{1}{n} \sum_{k=1}^n \frac{k}{n} a^{\frac{k}{n}} \right].$$ To compute the expression in brackets, look at Joriki's post. As a side note, we can remark that it is a Riemann sum. Hence, with a change of variable ($u = x \ln (a)$): $$ \int_1^a \ln (x) dx = \ln (a)^2 \int_0^1 x a^x dx = \int_0^{\ln (a)} u e^u du,$$ or equivalently: $$ \int_0^a \ln (x) dx = \int_{- \infty}^{\ln (a)} u e^u du.$$ Alas, this integral is usually computed with an integration by parts, in other words by the same trick one usually uses to compute an antiderivative of the logarithm, so that we are back at the beginning (one could have obtained this equality with a mere change of variable).
Spaces with equal homotopy groups but different homology groups? Since it's fairly easy to come up with two spaces that have different homotopy groups but the same homology groups ($S^2\times S^4$ and $\mathbb{C}\textrm{P}^3$), are there any nice examples of spaces going the other way around? Are there any obvious ways to approach a problem like this?
Standard example is $\mathbb RP^2\times S^3$ and $\mathbb RP^3\times S^2$ (they have same homotopy groups since they both have $\pi_1=\mathbb Z/2$ and the universal cover is in both cases $S^2\times S^3$).
Absolute value of Brownian motion I need to show that $$R_t=\frac{1}{|B_t|}$$ is bounded in $\mathcal{L^2}$ for $(t \ge 1)$, where $B_t$ is a 3-dimensional standard Brownian motion. I am trying to find a bound for $\mathbb{E}[\int_{t=1}^{\infty}R^2_t]$. Asymptotically $B_t^i$ is between $\sqrt{t}$ and $t$. I also know that $|B_t| \to \infty$, but the rate is not clear. Hints would be helpful.
Since $B_t$ and $\sqrt{t}B_1$ are identically distributed, $\mathrm E(R_t^2)=t^{-1}\mathrm E(R_1^2)$, hence $\mathrm E(R_t^2)\leqslant\mathrm E(R_1^2)$ for every $t\geqslant1$ and it remains to show that $\mathrm E(R_1^2)$ is finite. Now, the density of the distribution of $B_1$ is proportional to $\mathrm e^{-\|x\|^2/2}$ and $B_1$ has dimension $3$ hence the density of the distribution of $Y=\|B_1\|$ is proportional to $\varphi(y)=y^{3-1}\mathrm e^{-y^2/2}=y^2\mathrm e^{-y^2/2}$ on $y\gt0$. Since the function $y\mapsto y^{-2}\varphi(y)=\mathrm e^{-y^2/2}$ is Lebesgue integrable, the random variable $Y^{-2}=R_1^2$ is integrable. On the other hand, $\mathrm E\left(\int\limits_1^{+\infty}R_t^2\mathrm dt\right)$ is infinite. Edit The distribution of $B_1$ yields the distribution of $Y=\|B_1\|$ by the usual change of variables technique. To see this, note that in dimension $n$ and for every test function $u$, $$ \mathrm E(u(Y))\propto\int_{\mathbb R^n} u(\|x\|)\mathrm e^{-\|x\|^2/2}\mathrm dx\propto\int_0^{+\infty}\int_{S^{n-1}}u(y)\mathrm e^{-y^2/2}y^{n-1}\mathrm d\sigma_{n-1}(\theta)\mathrm dy, $$ where $\sigma_{n-1}$ denotes the uniform distribution on the unit sphere $S^{n-1}$ and $(y,\theta)\mapsto y^{n-1}$ is proportional to the Jacobian of the transformation $x\mapsto(y,\theta)$ from $\mathbb R^n\setminus\{0\}$ to $\mathbb R_+^*\times S^{n-1}$. Hence, $$ \mathrm E(u(Y))\propto\int_0^{+\infty}u(y)y^{n-1}\mathrm e^{-y^2/2}\mathrm dy, $$ which proves by identification that the distribution of $Y$ has a density proportional to $y^{n-1}\mathrm e^{-y^2/2}$ on $y\gt0$.
Mathematics understood through poems? Along with Diophantus mathematics has been represented in form of poems often times. Bhaskara II composes in Lilavati: Whilst making love a necklace broke. A row of pearls mislaid. One sixth fell to the floor. One fifth upon the bed. The young woman saved one third of them. One tenth were caught by her lover. If six pearls remained upon the string How many pearls were there altogether? Or to cite from modern examples: Poetry inspired by mathematics which includes Tom Apostol's Where are the zeros of Zeta of s? to be sung to to the tune of "Sweet Betsy from Pike". Or Tom Lehrer's derivative poem here. Thus my motivation is to compile here a collection of poems that explain relatively obscure concepts. Rap culture welcome but only if it includes homological algebra or similar theory. (Please let us not degenerate it to memes...). Let us restrict it to only one poem by answer so as to others can vote on the richness of the concept.
Prof. Geoffrey K. Pullum's "Scooping the Loop Snooper: A proof that the Halting Problem is undecidable", in the style of Dr. Seuss.
truth table equivalency I am stuck on this question, and attempting to answer it makes me feel that it's equivalent to searching for a needle in a large pond... I need help with this; can someone explain how I even attempt to find the solution? Question: Find a logical statement equivalent to $(A \to B) \& \sim C$; the statement must use only the operators $\sim, |$. I know that I can do $(A \& \sim B) \, | \, C$ which is logically equivalent, but it says not to use anything other than $\sim, |$. The statement I have uses "$\&$".
I will assume that "|" is NAND operator defined as : $A | B \Leftrightarrow \lnot(A \land B)$ If it is so then we can write : $(A \rightarrow B) \land \lnot C \Leftrightarrow (\lnot A \lor B) \land \lnot C \Leftrightarrow (\lnot A \land \lnot C) \lor (B \land \lnot C) \Leftrightarrow$ $\Leftrightarrow \lnot(\lnot A \mid \lnot C) \lor \lnot(B \mid \lnot C) \Leftrightarrow \lnot ((\lnot A \mid \lnot C) \land (B \mid \lnot C)) \Leftrightarrow$ $\Leftrightarrow (\lnot A \mid \lnot C) \mid (B \mid \lnot C)$ On the other hand if " | " is OR operator then we have : $(A \rightarrow B) \land \lnot C \Leftrightarrow (\lnot A \lor B) \land \lnot C \Leftrightarrow \lnot(\lnot(\lnot A \lor B) \lor C)$
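A brute-force truth-table check of the derived NAND form (a quick sketch in Python):

```python
from itertools import product

def nand(p, q):
    return not (p and q)

for A, B, C in product([False, True], repeat=3):
    lhs = ((not A) or B) and (not C)                # (A -> B) & ~C
    rhs = nand(nand(not A, not C), nand(B, not C))  # (~A | ~C) | (B | ~C)
    assert lhs == rhs
print("equivalent on all 8 rows")
```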
Convergence of the following series I'm trying to determine the convergence of this series: $$\sum \limits_{n=1}^\infty\left(\frac12·\frac34·\frac56·...\frac{2n-3}{2n-2}·\frac{2n-1}{2n}\right)^a$$ I've tried using d'Alembert's criterion (the ratio test): $$\lim_{n\to\infty}\frac{(\frac12·\frac34·\frac56·...\frac{2n-3}{2n-2}·\frac{2n-1}{2n}\frac{2n}{2n+1})^a}{(\frac12·\frac34·\frac56·...\frac{2n-3}{2n-2}·\frac{2n-1}{2n})^a} = \lim_{n\to\infty}\left(\frac{(\frac12·\frac34·\frac56·...\frac{2n-3}{2n-2}·\frac{2n-1}{2n}·\frac{2n}{2n+1})}{(\frac12·\frac34·\frac56·...\frac{2n-3}{2n-2}·\frac{2n-1}{2n})}\right)^a$$ Which becomes: $$\lim_{n\to\infty}\left(\frac{2n}{2n+1}\right)^a$$ But that limit is $1$, so the test is inconclusive. Any idea?
Rewrite the summand in terms of factorials like this: $$ \left( \frac{(2n)!}{ 2^{2n} (n!)^2} \right)^a .$$ Applying Stirling's approximation gives $$ \frac{(2n)!}{ 2^{2n} (n!)^2} \sim \frac{1}{\sqrt{\pi n} } $$ so to finish off, apply what you know about the convergence of $ \displaystyle \sum \frac{1}{n^p} $ for various $p.$
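Spelling out the last step: the summand $b_n=\left(\frac{(2n)!}{2^{2n}(n!)^2}\right)^a$ satisfies $b_n\sim(\pi n)^{-a/2}$, so by limit comparison with $\sum n^{-a/2}$ the series converges if and only if $a/2>1$, i.e. $a>2$.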
Calculating $\prod (\omega^j - \omega^k)$ where $\omega^n=1$. Let $1, \omega, \dots, \omega^{n-1}$ be the roots of the equation $z^n-1=0$, so that the roots form a regular $n$-gon in the complex plane. I would like to calculate $$ \prod_{j \ne k} (\omega^j - \omega^k)$$ where the product runs over all $j \ne k$ with $0 \le j,k < n$. My attempt so far Noting that if $k-j = d$ then $\omega^j - \omega^k = \omega^j(1-\omega^d)$, I can re-write the product as $$ \prod_{d=1}^{\lfloor n/2 \rfloor} \omega^{n(n-1)/2}(1-\omega^d)^n$$ I thought this would be useful but it hasn't led me anywhere. Alternatively I could exploit the symmetry $\overline{1-\omega^d} = 1-\omega^{n-d}$ somehow, so that the terms in the product are of the form $|1-\omega^d|^2$. I tried this and ended up with a product which looked like $$\prod_{j=0}^{n-1} |1 - \omega^j|^n $$ (with awkward multiplicative powers of $-1$ left out). This appears to be useful, but calculating it explicitly is proving harder than I'd have thought. The answer I'm expecting to find is something like $n^n$. My motivation for this comes from Galois theory. I'm trying to calculate the discriminant of the polynomial $X^n+pX+q$. I know that it must be of the form $ap^n+bq^{n-1}$ for some $a,b \in \mathbb{Z}$, and putting $p=0,q=-1$, the polynomial becomes $X^n-1$. This has roots $1, \omega, \dots, \omega^{n-1}$, so that $(-1)^{n-1}b$ is (a multiple of) the product you see above. An expression for $a$ can be found similarly by setting $p=-1,q=0$.
First, note that $$\prod_{k=1}^{n-1} (1-w^k) = n$$ The proof is that $\prod_{k=1}^{n-1}(x-w^k) = 1+x+x^2+...+x^{n-1}$, then substitute $x=1$. Now, you can rewrite: $$\prod_{j\neq k} (w^j-w^k) = \prod_{j=0}^{n-1} \prod_{i=1}^{n-1} (w^j-w^{i+j})$$ $$= \prod_{j=0}^{n-1} w^{j(n-1)} n = n^n w^{\frac{n(n-1)^2}2}$$ If $n$ is odd, then $w^{\frac{n(n-1)^2}2}= 1$, otherwise $w^{\frac{n(n-1)^2}2}=-1$. So we can write our formula as $(-1)^{n-1}n^n$ or as $-(-n)^n$.
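A quick numerical sanity check of the closed form $(-1)^{n-1}n^n$, as a hedged Python sketch (the function name is just illustrative):

```python
import cmath

def double_product(n):
    """Product of w^j - w^k over all ordered pairs j != k, w = exp(2*pi*i/n)."""
    w = [cmath.exp(2j * cmath.pi * k / n) for k in range(n)]
    p = 1
    for j in range(n):
        for k in range(n):
            if j != k:
                p *= w[j] - w[k]
    return p

for n in range(2, 8):
    print(n, double_product(n), (-1) ** (n - 1) * n ** n)
```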
Essays on the real line? Are there any essays on real numbers (in general?). Specifically I want to learn more about: * *The history of (the system of) numbers; *their philosophical significance through history; *any good essays on their use in physics and the problems of modeling a 'physical' line. Cheers. I left this vague as google only supplied Dedekind theory of numbers which was quite interesting but not really what I was hoping for.
You might try consulting The World of Mathematics, edited by James R. Newman. This is a four-volume compendium of articles on various topics in mathematics. It was published in 1956 so is not exactly cutting-edge, but then again, neither is our understanding of the construction of the real numbers. It contains an essay by Dedekind himself, which is just part of a section of articles on the number concept. There is also a book simply called Number by Tobias Dantzig that is a classic history of the number system.
The square of an integer is congruent to 0 or 1 mod 4 This is a question from the free Harvard online abstract algebra lectures. I'm posting my solutions here to get some feedback on them. For a fuller explanation, see this post. This problem is from assignment 6. The notes from this lecture can be found here. a) Prove that the square $a^2$ of an integer $a$ is congruent to 0 or 1 modulo 4. b) What are the possible values of $a^2$ modulo 8? a) Let $a$ be an integer. Then $a=4q+r, 0\leq r<4$ with $\bar{a}=\bar{r}$. Then we have $a^2=a\cdot a=(4q+r)^2=16q^2+8qr+r^2=4(4q^2+2qr)+r^2, 0\leq r^2<4$ with $\bar{a^2}=\bar{r^2}$. So then the possible values for $r$ with $r^2<4$ are 0,1. Then $\bar{a^2}=\bar{0}$ or $\bar{1}$. b) Let $a$ be an integer. Then $a=8q+r, 0\leq r<8$ with $\bar{a}=\bar{r}$. Then we have $a^2=a\cdot a=(8q+r)^2=64q^2+16qr+r^2=8(8q^2+2qr)+r^2, 0\leq r^2<8$ with $\bar{a^2}=\bar{r^2}$. So then the possible values for $r$ with $r^2<8$ are 0,1,and 2. Then $\bar{a^2}=\bar{0}$, $\bar{1}$ or $\bar{4}$. Again, I welcome any critique of my reasoning and/or my style as well as alternative solutions to the problem. Thanks.
$$\begin{align} x^2 \mod 4 &\equiv (x \mod 4)(x \mod 4) \pmod 4 \\ &\equiv \begin{cases}0^2 \mod 4 \\ 1^2 \mod 4 \\ 2^2 \mod 4 \\ 3^2 \mod 4 \end{cases} \\ &\equiv \begin{cases}0 \mod 4 \\ 1 \mod 4 \\ 4 \mod 4 \\ 9 \mod 4 \end{cases} \\ &\equiv \begin{cases}0 \mod 4 \\ 1 \mod 4 \\ 0 \mod 4 \\ 1 \mod 4 \end{cases} \end{align} $$
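Since $a^2 \bmod m$ depends only on $a \bmod m$, it suffices to square one full residue system; here is a tiny Python check (illustrative) covering both parts of the question:

```python
residues_mod4 = sorted({(a * a) % 4 for a in range(4)})
residues_mod8 = sorted({(a * a) % 8 for a in range(8)})
print(residues_mod4)  # [0, 1]
print(residues_mod8)  # [0, 1, 4]
```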
Morita equivalence of acyclic categories (Crossposted to MathOverflow.) Call a category acyclic if only the identity morphisms are invertible and the endomorphism monoid of every object is trivial. Let $C, D$ be two finite acyclic categories. Suppose that they are Morita equivalent in the sense that the abelian categories $\text{Fun}(C, \text{Vect})$ and $\text{Fun}(D, \text{Vect})$ are equivalent (where $\text{Vect}$ is the category of vector spaces over a field $k$, say algebraically closed of characteristic zero). Are $C, D$ equivalent? (If so, can we drop the finiteness condition?) Without the acyclic condition this is false; for example, if $G$ is a finite group regarded as a one-object category, $\text{Fun}(G, \text{Vect})$ is completely determined by the number of conjugacy classes of $G$, and it is easy to write down pairs of nonisomorphic finite groups with the same number of conjugacy classes (take, for example, any nonabelian group $G$ with $n < |G|$ conjugacy classes and $\mathbb{Z}/n\mathbb{Z}$). On the other hand, I believe this result is known to be true if $C, D$ are free categories on finite graphs by basic results in the representation theory of quivers, and I believe it's also known to be true if $C, D$ are finite posets.
On MO, Benjamin Steinberg links to a paper of Leroux with a counterexample.
How can I solve the differential equation $y'+y^{2}=f(x)$? $$y'+y^{2}=f(x)$$ I know how to find an endless series solution via endless integrals or endless derivatives, and a power series solution if we know $f(x)$. I also know how to find the general solution if we know one particular solution ($y_0$). I am looking for an exact analytic solution $y= L({f(x)})$ without knowing a particular solution, if it exists. (Here $L$ denotes an operator such as an integral, derivative, radical, or any defined function.) If it does not exist, could you please prove why we cannot find it? Note: This equation is related to a second-order linear differential equation. If we put $y=u'/u$, this equation will turn into $u''(x)-f(x)\,u(x)=0$. If we find the general solution of $y'+y^{2}=f(x)$, it means that $u''(x)-f(x)\,u(x)=0$ will be solved as well. As we know, many functions such as Bessel functions or Hermite polynomials and many other special functions are related to second-order linear differential equations. Thank you for answers. EDIT: I asked the question on MathOverflow too. You can also find the link below for details (1-Endless transform, 2-Endless Integral, 3-Endless Derivatives, 4-Power series) and answers about the subject. https://mathoverflow.net/questions/87041/looking-for-the-solution-of-first-order-non-linear-differential-equation-y-y
Interesting. In Maple I tried $y'+y^2 = \sin(x)$, and the solution involves Mathieu functions $S, C, S', C'$. I tried $y'+y^2=x$, and the solution involves Airy functions Ai, Bi. I tried $y'+y^2=1/x$, and the solution involves Bessel functions $I_0, I_1, K_0, K_1$. This is a Riccati equation. For more info, in particular how its solutions are related to solutions of a second-order linear equation, look that up.
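The reduction mentioned in the question can also be verified symbolically. Here is a sketch using sympy (assuming sympy is available); it checks that the substitution $y=u'/u$ turns $y'+y^2$ into $u''/u$:

```python
import sympy as sp

x = sp.symbols('x')
u = sp.Function('u')
y = u(x).diff(x) / u(x)                          # the substitution y = u'/u
difference = y.diff(x) + y**2 - u(x).diff(x, 2) / u(x)
print(sp.simplify(difference))                   # 0, so y' + y^2 = u''/u
```

So solving the Riccati equation $y'+y^2=f(x)$ is exactly as hard as solving the linear equation $u''=f(x)\,u$, which is why Airy, Mathieu and Bessel functions show up for the right-hand sides above.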
If $xy$ is a unit, are $x$ and $y$ units? I know if $x$ and $y$ are units, in say a commutative ring, then $xy$ is a unit with $(xy)^{-1}=y^{-1}x^{-1}$. But if $xy$ is a unit, does it necessarily follow that $x$ and $y$ are units?
Yes. Let $z=xy$. If $z$ is a unit with inverse $z^{-1}$, then $x$ is a unit with inverse $yz^{-1}$, and $y$ is a unit with inverse $xz^{-1}$, because $$x(yz^{-1})=(xy)z^{-1}=zz^{-1}=1$$ $$y(xz^{-1})=(yx)z^{-1}=(xy)z^{-1}=zz^{-1}=1$$
If $B \ (\supseteq A)$ is a finitely-generated $A$-module, then $B$ is integral over $A$. I'm going through a proof of the statement: Let $A$ and $B$ be commutative rings. If $A \subseteq B$ and $B$ is a finitely generated $A$-module, then all $b \in B$ are integral over $A$. Proof: Let $\{c_1, ... , c_n\} \subseteq B$ be a set of generators for $B$ as an $A$-module, i.e $B = \sum_{i=1}^n Ac_i$. Let $b \in B$ and write $bc_i = \sum_{j=1}^n a_{ij}c_j $ with $a_{ij} \in A$, which says that $(bI_n - (a_{ij}))c_j = 0 $ for $ 1 \leq j \leq n$. Then we must have that $\mathrm{det}(bI_n - (a_{ij})) = 0 $. This is a monic polynomial in $b$ of degree $n$. Why are we not done here? The proof goes on to say: Write $1 = \alpha_1 c_1 + ... + \alpha_n c_n$, with the $\alpha_i \in A$. Then $\mathrm{det}(bI_n - (a_{ij})) = \alpha_1 (\mathrm{det}...) c_1 + \alpha_2 (\mathrm{det}...) c_2 + ... + \alpha_n (\mathrm{det}...) c_n = 0$. Hence every $b \in B$ is integral over $A$. I understand what is being done here on a technical level, but I don't understand why it's being done. I'd appreciate a hint/explanation. Thanks
Another way to phrase it, slightly different to Georges's answer and comments, is as follows: In the first paragraph of the proof, $B$ could be replaced by any f.g. $A$-module $M$, and $b$ could any endomorphism of that $A$-module. What we conclude is that every $\varphi \in End_A(M)$ is integral over $A$. In particular, if $M$ is in fact a $B$-module, then we conclude that the image of $B$ in $End_A(M)$ is integral over $A$. The point of the second paragraph is to observe that (since $B$ is a ring with $1$), the natural map $B \to End_A(B)$ (given by $B$ acting on itself through multiplication) is injective, so that $B$ coincides with its image in $End_A(B)$. Only after making this additional observation can we conclude that $B$ is integral over $A$. Just as something to think about, what you'll see is that the argument proves that if $B$ is an $A$-algebra which admits a faithful module which is f.g. over $A$, then $B$ is integral over $A$. On the other hand, if $B$ just admits a module that is f.g. over $A$, but not necessarily faithful, then we can't conclude that $B$ is integral over $A$. (See if you can find a counterexample.)
Restricted Integer Partitions Two Integer Partition Problems Let $P(n,k,m)$ be the number of partitions of $n$ into $k$ parts with all parts $\leq m$. So $P(10,3,4) = 2$, i.e., (4,4,2); (4,3,3). I need help proving the following: $P(2n,3,n-1) = P(2n-3,3, n-2)$ $P(4n+3, 3, 2n+1) = P(4n,3,2n-1) + n + 1$.
For the first one: For any partition $2n=a+b+c$ where $a,b,c \leq n-1$, we have a partition $2n-3=(a-1)+(b-1)+(c-1)$ where $a-1,b-1,c-1 \leq n-2$, and vice versa. (Note that the new parts are positive: since $b,c\leq n-1$, we get $a\geq 2n-2(n-1)=2$, and similarly for $b$ and $c$.)
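Both identities are easy to test by brute force before proving them; here is a hypothetical Python sketch (the function `P` mirrors the definition in the question):

```python
from itertools import combinations_with_replacement

def P(n, k, m):
    """Number of partitions of n into exactly k parts, each part between 1 and m."""
    return sum(1 for parts in combinations_with_replacement(range(1, m + 1), k)
               if sum(parts) == n)

# First identity: P(2n, 3, n-1) = P(2n-3, 3, n-2)
for n in range(3, 16):
    assert P(2 * n, 3, n - 1) == P(2 * n - 3, 3, n - 2)

# Second identity: P(4n+3, 3, 2n+1) = P(4n, 3, 2n-1) + n + 1
for n in range(1, 16):
    assert P(4 * n + 3, 3, 2 * n + 1) == P(4 * n, 3, 2 * n - 1) + n + 1

print("Both identities hold for the tested range.")
```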
problem with continuous functions $f,g:\mathbb{R}\longrightarrow\mathbb{R}$ Let $f,g$ be continuous functions with $f\left(q\right)\leq g\left(q\right)$ for all $q\in\mathbb{Q}$. I need to prove that $f\left(x\right)\leq g\left(x\right)$ for all $x\in\mathbb{R}$.
Hint: Note that if $(x_n)_{n\in\mathbb{N}}$ is a convergent sequence of real numbers and $f,g$ are continuous functions, $\lim\limits_{n\to\infty} f(x_n)=f(\lim\limits_{n\to\infty}x_n)$ and $\lim\limits_{n\to\infty} g(x_n)=g(\lim\limits_{n\to\infty}x_n)$, and that for any real number $x$ we have some sequence $(x_n)_{n\in\mathbb{N}}$ of rational numbers that converges to it. How can we manipulate limits to show that $f(x)=f(\lim\limits_{n\to\infty}x_n)\leq g(\lim\limits_{n\to\infty}x_n)=g(x)$?
The meaning of Implication in Logic How do I remember Implication Logic $(P \to Q)$ in simple English? I read some sentence like * *If $P$ then $Q$. *$P$ only if $Q$. *$Q$ if $P$. But I am unable to correlate these sentences with the following logic. Even though the truth table is very simple, I don't want to remember it without knowing its actual meaning. $$\begin{array}{ |c | c || c | } \hline P & Q & P\Rightarrow Q \\ \hline \text T & \text T & \text T \\ \text T & \text F & \text F \\ \text F & \text T & \text T \\ \text F & \text F & \text T \\ \hline \end{array}$$
I would like to share my own understanding of this. I like to think of implication as a promise rather than causality, which is the natural tendency when you first come across it. Example: You have a nice kid and you make him the following promise: If you get an A in your exam, then I will buy you a car. In this case $P$ is "the kid gets an A in the exam" and $Q$ is "you buy him a car". Now let's see how this promise holds with various values for $P$ and $Q$. If $P$ is true (the kid gets an A in the exam) and $Q$ is true (you bought him a car), then your promise has held and $P \Rightarrow Q$ is true. If $P$ is true (the kid gets an A in the exam) and $Q$ is false (you didn't buy him a car), then your promise didn't hold, so $P \Rightarrow Q$ is false. If $P$ is false (the kid didn't get an A in the exam) and $Q$ is true (you bought him a car), then your promise still holds and $P \Rightarrow Q$ is true. That's because you only said what would happen if he gets an A; you didn't say what would happen if he doesn't, so anything is allowed. Basically, you didn't break your promise, and this is the weak property that most people find confusing about implication. If $P$ is false (the kid didn't get an A in the exam) and $Q$ is false (you didn't buy him a car), then your promise has also held and $P \Rightarrow Q$ is true, because you only promised and guaranteed a car if he gets an A.
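In code, the material conditional is just $\lnot P \lor Q$; a tiny Python sketch of the truth table (illustrative only, the only broken-promise row is $P$ true, $Q$ false):

```python
for P in (True, False):
    for Q in (True, False):
        promise_kept = (not P) or Q   # the material conditional P -> Q
        print(P, Q, promise_kept)
```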
Is there a formula for solving integrals of this form? I was wondering if there was a substitution formula to solve integrals of this form: $\int f(g(x))g''(x)dx$
No, not a nice one, anyway. It is worthwhile, I think, to point out that integration rules, such as the usual substitution rule, do not always "solve" ("evaluate" is the proper term) the given integral. The usual substitution rule, for instance, only transforms the integral into another integral which may or may not be easily handled. Of course, if the antiderivative of $f$ is known, then the usual substitution rule will allow you to evaluate integrals of the form $\int f\bigl(g(x)\bigr)g'(x)\,dx$. I don't think a formula of the type you seek would be very useful, as it can't handle all cases when an antiderivative of the "outer function" is known: consider $\int \sin(x^2)\cdot 2\,dx$. This can't be expressed in an elementary way.
If $n\ge 3$, $4^n \nmid 9^n-1$ Could anyone give me a hint to prove the following? If $n\ge 3$, $4^n \nmid 9^n-1$
Hint: Try to prove using induction: $1.$ $9^3 \not \equiv 1 \pmod {4^3}$ $2.$ Suppose: $9^k \not \equiv 1 \pmod {4^k}$ $3.$ $9^k \not \equiv 1 \pmod {4^k} \Rightarrow 9^{k+1} \not \equiv 9 \pmod {4^k}$ So you have to prove: $ 9^{k+1} \not \equiv 9 \pmod {4^k} \Rightarrow 9^{k+1} \not \equiv 1 \pmod {4^{k+1}}$
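This is not a proof, of course, but a quick empirical check in Python (using three-argument `pow` for modular exponentiation) is reassuring before attempting the induction:

```python
for n in range(3, 200):
    assert pow(9, n, 4 ** n) != 1, n
print("9^n is not congruent to 1 mod 4^n for 3 <= n < 200")
```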
Every nonzero commutative ring with identity has a minimal prime. Let $A$ be a nonzero commutative ring with identity. Show that the set of prime ideals of $A$ has minimal elements with respect to inclusion. I don't know how to prove that. I can suppose that the ring is not an integral domain, since otherwise the ideal $(0)$ is a prime ideal (and clearly minimal), but I don't know how to proceed. Probably it's a Zorn's lemma application.
Below is a hint, with further remarks on the structure of the set of prime ideals, from Kaplansky's excellent textbook Commutative Rings. For a recent survey on the poset structure of prime ideals in commutative rings see R & S Wiegand, Prime ideals in Noetherian rings: a survey, in T. Albu, Ring and Module Theory, 2010.
Showing that $ \int_{0}^{1} \frac{x-1}{\ln(x)} \mathrm dx=\ln2 $ I would like to show that $$ \int_{0}^{1} \frac{x-1}{\ln(x)} \mathrm dx=\ln2 $$ What annoys me is that $ x-1 $ is the numerator so the geometric power series is useless. Any idea?
This is a classic example of differentiating inside the integral sign. In particular, let $$J(\alpha)=\int_0^1\frac{x^\alpha-1}{\log(x)}\;dx.$$ Then one has that $$\frac{\partial}{\partial\alpha}J(\alpha)=\int_0^1\frac{\partial}{\partial\alpha}\frac{x^\alpha-1}{\log(x)}\;dx=\int_0^1x^\alpha\;dx=\frac{1}{\alpha+1}$$ and so we know that $\displaystyle J(\alpha)=\log(\alpha+1)+C$. Noting that $J(0)=0$ tells us that $C=0$ and so $J(\alpha)=\log(\alpha+1)$. In particular, $J(1)=\log 2$, which is exactly the integral in question.
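A numerical cross-check of the result, as a Python sketch (it assumes scipy is available; the removable singularities at the endpoints are harmless for `quad`):

```python
import math
from scipy.integrate import quad

value, error = quad(lambda x: (x - 1) / math.log(x), 0, 1)
print(value, math.log(2))   # both approximately 0.693147
```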
One vs multiple servers - problem Consider the following problem: We have a simple queueing system with $\lambda$ - probabilistic intensity of queries per some predefined time interval. Now, we can arrange the system as a single high-end server ($M/M/1$, which can handle the queries with the intensity of $2\mu$) or as two low-end servers ($M/M/2$, each server working with intensity of $\mu$). So, the question is - which variant is better in terms of overall performance? I suspect that it's the first one, but, unfortunately, my knowledge of queuing / probability theory isn't enough. Thank you.
You need to specify what you mean by "overall performance", but for most measures the two server system will have better performance. Intuitively, a "complicated" customer (one that has a long service time) will shut down the M/M/1 queue but only cripple the M/M/2 queue. If we let the utilization be $$\rho=\frac{\lambda}{2\mu}$$ then some of the usual performance measures are $L_q$ the average length of the queue, $W_q$ the average waiting time, and $\pi_0$ the probability that the queue is empty. For the M/M/1 queue these measures are $$L_q=\frac{\rho^2}{1-\rho}$$ $$W_q=\frac{\rho^2}{\lambda(1-\rho)}$$ $$\pi_0=1-\rho$$ and for the M/M/2 queue $$L_q=\frac{2\rho^3}{1-\rho^2}$$ $$W_q=\frac{2\rho^3}{\lambda(1-\rho^2)}$$ $$\pi_0=\frac{1-\rho}{1+\rho}$$ So, the system is empty more often in the M/M/1 queue, but the expected wait time and the expected queue length are less for the M/M/2 (as $\frac{2\rho}{1+\rho}<1$).
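Here is a small Python sketch that evaluates these formulas side by side for some illustrative rates (the function names and the sample values $\lambda=1.5$, $\mu=1$ are just for demonstration):

```python
def mm1_fast_server(lam, mu):
    # single server of rate 2*mu
    rho = lam / (2 * mu)
    Lq = rho**2 / (1 - rho)
    Wq = Lq / lam
    p0 = 1 - rho
    return Lq, Wq, p0

def mm2(lam, mu):
    # two servers of rate mu each
    rho = lam / (2 * mu)
    Lq = 2 * rho**3 / (1 - rho**2)
    Wq = Lq / lam
    p0 = (1 - rho) / (1 + rho)
    return Lq, Wq, p0

lam, mu = 1.5, 1.0
print("M/M/1 (rate 2mu):", mm1_fast_server(lam, mu))  # (2.25, 1.5, 0.25)
print("M/M/2 (rate mu): ", mm2(lam, mu))              # shorter queue, less often empty
```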
Linear operator's linearity $$f:R^n \to R^3 \ \ \ \ \ \ \ \ \ f(x,y,z)=(x-z,y,az^2)$$ I have to find $n$ and $a$ such that $f$ is a linear operator. $$x-z=0$$ $$y=0$$ $$az^2=0$$ I found $n$ to be 3. For $az^2$ to be equal to $0$, either $z$ is $0$ or $a$ is $0$, right? The $z^2$ term is confusing me; I don't see how it can come from a linear map $R^n \to R^3$. Any idea please? After finding $a$ and $n$, I have to write the matrix of $f$ and find $\dim(\operatorname{Ker}f)$. Thank you.
The fact that $n=3$ comes from inspection. In order for $f:\mathbb{R}^3\to\mathbb{R}^3:(x,y,z)\mapsto(x-z,y,az^2)$ to be a linear operator you need $$f(\vec{x}+\vec{u})=f(\vec{x})+f(\vec{u}), \quad\text{or}$$ $$\forall \vec{x},\vec{u}\in\mathbb{R}^3:\quad\begin{cases}(x+u)-(z+w)=(x-z)+(u-w) \\ (y+v)=(y)+(v) \\ a(z+w)^2=az^2+aw^2.\end{cases}$$ (Note: $\vec{x}=(x,y,z),\vec{u}=(u,v,w)$ here.) The first two check out but the last one implies $2azw=0$ for all $z,w\in\mathbb{R}$, which is obviously false unless $a=0$. Now we have that the matrix associated to $f$ as a linear map is given by $$f:\begin{pmatrix}x\\y\\z\end{pmatrix}\mapsto x\begin{pmatrix}1\\0\\0\end{pmatrix}+y\begin{pmatrix}0\\1\\0\end{pmatrix}+z\begin{pmatrix}-1\\0\\0\end{pmatrix}\quad\text{hence}\quad f(\vec{x})=\begin{pmatrix}1&0&-1\\0&1&0\\0&0&0\end{pmatrix}\vec{x}.$$ Finally, to find $\mathrm{Ker} f$, solve $x-z=0$ and $y=0$, which is parametrized by $(t,0,t)$ and hence $\mathrm{dim}=1$.
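As a cross-check, the kernel can also be computed numerically; a short sketch assuming numpy and scipy are available:

```python
import numpy as np
from scipy.linalg import null_space

A = np.array([[1.0, 0.0, -1.0],
              [0.0, 1.0,  0.0],
              [0.0, 0.0,  0.0]])
ker = null_space(A)
print(ker.ravel())      # proportional to (1, 0, 1)/sqrt(2)
print(ker.shape[1])     # dim Ker f = 1
```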
Multiplying Infinite Cardinals (by Zero Specifically) On the Wikipedia page on Cardinal Numbers, cardinal arithmetic, including multiplication, is defined. For finite cardinals there is multiplication by zero, but for infinite cardinals multiplication is only defined for nonzero cardinals. Is multiplication of an infinite cardinal by zero undefined? If so, why is it? Also, does $\kappa\cdot\mu= \max\{\kappa,\mu\}$ simply mean that the product of the two is simply the cardinality of the larger cardinal? Why is this?
For any cardinal $\kappa$ whatsoever, $0\cdot\kappa=\kappa\cdot 0=0$. This is an immediate consequence of the definition and the fact that for any set $X$, $\varnothing\times X=\varnothing$. Yes, if one assumes the axiom of choice, the product of two infinite cardinals is simply the larger of them; so is their sum. The product of a non-zero finite cardinal and an infinite cardinal is that infinite cardinal, so it’s also simply the larger of the two. This fails when the finite cardinal is $0$, because then the product is $0$. Even without the axiom of choice it’s true that if $\kappa$ and $\mu$ are well-orderable cardinals, $\kappa\cdot\mu=\max\{\kappa,\mu\}$. This is proved by constructing a bijection between $\kappa\times\mu$ and $\max\{\kappa,\mu\}$.
Local diffeomorphism from $\mathbb R^2$ onto $S^2$ Is there any local diffeomorphism from $\mathbb R^2$ onto $S^2$?
First note that there actually are no covering maps $\mathbb{R}^2 \to S^2$. This is because $S^2$ is simply connected and hence is its own universal cover; if there were a covering map $\mathbb{R}^2 \to S^2$, then by the universal property of the universal cover there would be a covering map $S^2 \to \mathbb{R}^2$. But there can't even be a continuous surjection $S^2 \to \mathbb{R}^2$ because $S^2$ is compact and $\mathbb{R}^2$ is not. Thus any local diffeomorphism $\mathbb{R}^2 \to S^2$ answers your question. For example, the inverse of the stereographic projection map does the job.
Hindu calendar (lunar) to Gregorian calendar We have to convert the Hindu calendar (the lunar one) to the Gregorian calendar. According to the (Dutch) Wikipedia (sorry for that, but it has more information than other websites), it is based on the angle between the sun and moon. Now I have many questions about that, such as: what angle do they mean? If I draw a line between the sun and moon, I am missing a line to calculate the angle. But the more concrete question is: how can we convert from the Hindu calendar to the Gregorian calendar? Also, we have noticed that the conversion is not injective; it can happen that one Hindu date relates to two Gregorian dates. We decide in that case to pick the first Gregorian date, but we are still not able to convert dates. One of the days we know the conversion of is: Basant Pachami (d-m-Y) : 5-11-2068 (Hindu) : 1-28-2012 (Gregorian)
I don't have the entire answer but I hope this will at least help a bit you if not more. Have you looked at http://www.webexhibits.org/calendars/calendar-indian.html Quoting: "All astronomical calculations are performed with respect to a Central Station at longitude 82°30’ East, latitude 23°11’ North." Why do you wish to convert these dates? If this is a one-time thing, you might be able to find something that does it for you. I found http://www.rajan.com/calendar/ but this is Gregorian <> Nepali (I was under the assumption that it is the same, maybe it isn't)
On the set of integer solutions of $x^2+y^2-z^2=-1$. Let $$ \mathcal R=\{x=(x_1,x_2,x_3)\in\mathbb Z^3:x_1^2+x_2^2-x_3^2=-1\}. $$ The group $\Gamma= M_3(\mathbb Z)\cap O(2,1)$ acts on $\mathcal R$ by left multiplication. It's known that there is only one $\Gamma$-orbit in $\mathcal R$, i.e. $\Gamma \cdot e_3=\mathcal R$ where $e_3=(0,0,1)$. Could anybody give me a proof of this fact? Thanks. [Comments: (i) $O(2,1)$ is the subgroup in $GL_3(\mathbb R)$ which preserves the form $x_1^2+x_2^2-x_3^2$, that is $$ O(2,1)=\{g\in GL_3(\mathbb R): g^t I_{2,1} g=I_{2,1}\}\qquad\textrm{where}\quad I_{2,1}= \begin{pmatrix}1&&\\&1&\\&&-1\end{pmatrix}. $$ (ii) $g(\mathcal R)\subset \mathcal R$ for any $g\in \Gamma$ because $g$ has integer coefficients and we can write $$ x_1^2+x_2^2-x_3^2=x^tI_{2,1}x, $$ then $$ (gx)^t I_{2,1} (gx)= x^t (g^tI_{2,1}g)x=x^tI_{2,1}x=-1. $$]
Do you know about Frink's paper? http://www.maa.org/sites/default/files/Orrin_Frink01279.pdf
Compute: $\int_{0}^{1}\frac{x^4+1}{x^6+1} dx$ I'm trying to compute: $$\int_{0}^{1}\frac{x^4+1}{x^6+1}dx.$$ I tried to change $x^4$ into $t^2$ or $t$, but it didn't work for me. Any suggestions? Thanks!
First substitute $x=\tan\theta$. Simplify the integrand, noticing that $\sec^2\theta$ is a factor of the original denominator. Use the identity connecting $\tan^2\theta$ and $\cos2\theta$ to write the integrand in terms of $\cos^22\theta$. Now the substitution $t=\tan2\theta$ reduces the integral to a standard form, which proves $\pi/3$ to be the correct answer. This method seems rather roundabout in retrospect, but it requires only natural substitutions, standard trigonometric identities, and straightforward algebraic simplification.
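A quick numerical confirmation that the value is $\pi/3$, as a Python sketch (assuming scipy is available):

```python
import math
from scipy.integrate import quad

value, error = quad(lambda x: (x**4 + 1) / (x**6 + 1), 0, 1)
print(value, math.pi / 3)   # both approximately 1.047198
```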
Showing $C=\bigg\{ \sum_{n=1}^\infty a_n 3^{-n}: a_n=0,2 \bigg\}$ is uncountable Let us define the set $$C=\bigg\{ \sum_{n=1}^\infty a_n 3^{-n}: a_n=0,2 \bigg\}$$ This is the Cantor set, could anyone help me prove it is uncountable? I've been trying a couple of approaches, for instance assume it is countable, list the elements of $C$ as decimal expansions, then create a number not in the list, I am having trouble justifying this though. Secondly i've been trying to create a function $f$ such that $f(C)=[0,1]$. Many thanks
Note the "obvious" bijection between $C$ and $P(\mathbb N)$ defined as: $$f(A)= 2\sum_{n=1}^\infty\frac{\chi_A(n)}{3^n}$$ Where $\chi_A$ is the characteristics function of $A$ ($1$ for $n\in A$, $0$ otherwise). Suppose that $A\neq B$ and $x=\min (A\Delta B)$, wlog $x\in A$ then $f(A)-f(B)\ge\dfrac2{3^x}>0$. In the other direction, if $x\in C$ then we can write $A=\{n\in\mathbb N\mid a_n=2\}$ and it is quite clear why $f(A)=x$. Now we use the fact that $P(\mathbb N)$ is uncountable (direct result of Cantor's theorem) and we are done.
Show that $u+v$ bisects $u$ and $v$ only if $|u|=|v|$ I want to show that if I have two Euclidean vectors in $\mathbb{R}^n$ then the sum of these two vectors bisects the angle between the two vectors. Said more mathematically: Let $u,v \in \mathbb{R}^n$. Then $\angle(u,v+u) = \angle(u+v,v)$ if and only if $|u|=|v|$ I tried using the fact that $$ \angle(u,v) = \arccos \left( \frac{u \cdot v}{|u| |v|} \right) $$ Alas this attempt was futile. Now for $\mathbb{R}^2$ this is obviously true, as can be seen from my illustration. How do I prove this with some rigor? Thanks for all tips and advice, this is not a homework question.
First proof: The two vectors form a parallelogram $ABCD$, and we know that in a parallelogram the diagonals bisect each other. It follows that the segment from $A$ to the point of intersection of the diagonals is a median of triangle $ABD$. There is a criterion that a median is the angle bisector if and only if the triangle is isosceles. This proves the claim. Second proof: Note that $|u|=|v|$ is equivalent to $u^2=v^2$. If $|u|=|v|$, then $\cos(\phi_1)=\frac{(u+v,u)}{|u+v||u|}=\frac{u^2+(u,v)}{|u+v||u|}=\frac{v^2+(u,v)}{|u+v||v|}=\cos(\phi_2)$. Conversely, if $\cos(\phi_1)=\cos(\phi_2)$ then $\frac{u^2+(u,v)}{|u|}=\frac{v^2+(u,v)}{|v|}$, so $|u|+\frac{(u,v)}{|u|}=|v|+\frac{(u,v)}{|v|}$, which factors as $(|u|-|v|)\left(1-\frac{(u,v)}{|u||v|}\right)=0$. If $u$ and $v$ are not collinear, the second factor is nonzero, so $|u|=|v|$. But if they are collinear, the angle is zero anyway.
Why is a linear equation with three variables a plane? In linear algebra, why is the graph of a three variable equation of the form $ax+by+cz+d=0$ a plane? With two variables, it is easy to convince oneself that the graph is a line (using similar triangles, for example). However with three variables, this same technique does not seem to work: setting one of the variables to be a constant yields a line in 3-D space (I think), and one could repeat this process for each constant value of that variable, but in the end there seems not to be an easy way to check that all these lines are coplanar. I don't remember seeing why this is in a book, and Khan Academy's video, for example, simply states that this is the case.
Look at the equation as a dot product or inner product: $$ \left[ \begin{array}{ccc} a & b & c \end{array} \right] \left[ \begin{array}{c} x \\ y \\ z \end{array} \right] = -d. $$ Then it is clear to see that the point $(x, y, z)$ that satisfies the equation is any point in the plane that is perpendicular to the vector $\left[ \begin{array}{ccc} a & b & c \end{array} \right]$ (this fixes its orientation) and is the right distance from the origin to yield a dot product of $-d$.
terminology: euler form and trigonometric form Am I right, that the following is the so-called trigonometric form of the complex number $c \in \mathbb{C}$? $|c| \cdot (\cos \alpha + \mathbf{i} \sin \alpha)$ And the following is the Euler form of the very same number, right? $|c|\cdot \mathbf{e}^{\mathbf{i}\alpha}$ I think there must be a mistake in one of my tutor's notes..
They are the same, and can also be called "polar coordinates" for the complex number.
Finding how many terms of the harmonic series must be summed to exceed x? The harmonic series is the sum 1 + 1/2 + 1/3 + 1/4 + 1/5 + 1/6 + ... + 1/n + ... It is known that this sum diverges, meaning (informally) that the sum is infinite and (more formally) that for any real number x, there is some number n such that the sum of the first n terms of the harmonic series is greater than x. For example, given x = 3, we have that 1 + 1/2 + 1/3 + ... + 1/11 = 83711/27720 ≈ 3.02. So eleven terms must be summed together to exceed 3. Consider the following question: Given an integer x, find the smallest value of n such that the sum of the first n terms of the harmonic series exceeds x. Clearly we can compute this by just adding in more and more terms of the harmonic series, but this seems like it could be painfully slow. The best bound I'm aware of on the number of terms necessary is $2^{O(x)}$, which uses the fact that 1 + 1/2 + 1/3 + 1/4 + 1/5 + 1/6 + 1/7 + 1/8 + ... is greater than 1 + (1/2) + (1/4 + 1/4) + (1/8 + 1/8 + 1/8 + 1/8) + ... which is in turn 1 + 1/2 + 1/2 + 1/2 + ... where each new 1/2 takes twice as many terms as the previous to accrue. This means that the brute-force solution is likely to be completely infeasible for any reasonably large choice of x. Is there a way to calculate the harmonic series which requires fewer operations than the brute-force solution?
The digamma function (the derivative of the logarithm of the Gamma function) is directly related to the harmonic numbers: $\psi(n) = H_{n-1} - \gamma$, where $\gamma$ is Euler's constant ($0.577\ldots$). You can use one of the approximations in the Wikipedia article to compute an approximate value for $H_n$, and then use that in a standard root finding algorithm like bisection to locate the solution.
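Here is a hypothetical Python sketch of that idea (assuming scipy for `digamma`; the helper names are illustrative). It uses $H_n=\psi(n+1)+\gamma$ and bisects on $n$, so no terms are ever summed:

```python
from scipy.special import digamma

EULER_GAMMA = 0.5772156649015329

def harmonic(n):
    # H_n = psi(n + 1) + gamma, accurate without summing n terms
    return digamma(n + 1) + EULER_GAMMA

def first_n_exceeding(x):
    # exponential search for an upper bound, then bisection (H_n is increasing)
    lo, hi = 1, 1
    while harmonic(hi) <= x:
        hi *= 2
    while lo < hi:
        mid = (lo + hi) // 2
        if harmonic(mid) > x:
            hi = mid
        else:
            lo = mid + 1
    return lo

print(first_n_exceeding(3))   # 11, matching the example in the question
```

For x = 3 this returns 11; the floating-point accuracy of the digamma evaluation is the only practical limit for very large x.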
Calculate the derivative $dx/dy$ using $\int_0^x \sqrt{6+5\cos t} \, dt + \int_0^y \sin t^2 \, dt = 0$ I want to calculate $\frac{dx}{dy}$ using the equation below. $$\int_0^x \sqrt{6+5\cos t}\;dt + \int_0^y \sin t^2\;dt = 0$$ I don't even know where to start. Well, I think that I could first find the integrals and then try to find the derivative. The problem with this approach is that I cannot find the result of the first integral. Can someone give me a hand here?
HINT: You have $$f(x)=\int_0^x\sqrt{6+5\cos t}\,dt=-\int_0^y\sin t^2 \,dt=g(y)\;.$$ What are $\dfrac{df}{dx}$ and $\dfrac{dg}{dy}$ according to the fundamental theorem? And when you have $\dfrac{df}{dx}$, what can you multiply it by to get $\dfrac{df}{dy}$?
Conditional probability. Targeting events Electric motors coming off two assembly lines are pooled for storage in a common stockroom, and the room contains an equal number of motors from each line. Motors are periodically sampled from that room and tested. It is known that 10% of the motors from line I are defective and 15% of the motors from line II are defective. If a motor is randomly selected from the stock-room and found to be defective, find the probability that it came from line I. Here is my way to solve it. First it is a conditional probability. The formula is $$P(A \mid B) = \frac{P (A\cap B) }{ P(B) }.$$ $P(B)$ = probability that it came from line 1 = $2 P_1$. Now here is where it gets interesting. What would be $P(A\cap B)$ in that case? Is $P(A \cap B)$ the probability that the motor came from line 1 and is defective?
P(B) is not P(came from line 1) in this problem. You are being asked to calculate P(came from line 1 | is defective) so B is "is defective" and A is "came from line 1". You're right that P(AB) is "Came from line 1 and is defective", and if you know how to calculate P(B) correctly in this case then you're essentially doing the same thing as you would if you used Bayes' Theorem. To calculate P(B) correctly: P(B) needs to be the probability that any given motor in the entire factory is defective, not just from line I. Use the Law of Total Probability. For P(AB): This is itself a conditional probability problem. Consider P(is defective | came from line 1) = P(defective and from line 1)/P(came from line 1)
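Carrying out the suggested computation with exact fractions (a Python sketch; the variable names are illustrative):

```python
from fractions import Fraction

p_line1 = Fraction(1, 2)            # equal numbers from each line
p_def_given_1 = Fraction(10, 100)   # 10% defective from line I
p_def_given_2 = Fraction(15, 100)   # 15% defective from line II

# law of total probability for P(defective)
p_def = p_line1 * p_def_given_1 + (1 - p_line1) * p_def_given_2
p_1_given_def = p_line1 * p_def_given_1 / p_def
print(p_1_given_def)  # 2/5
```

So the probability that a defective motor came from line I is $2/5$.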
Convergence of $\sum_{n=1}^\infty\frac{1}{2\cdot n}$ Is it possible to deduce the value of the following (in my opinion) converging infinite series? If yes, then what is it? $$\sum_{n=1}^\infty\frac{1}{2\cdot n}$$ where $n$ is an integer. Sorry if the notation is a bit off, I hope you get the idea.
The series is not convergent, since it is half of the harmonic series which is known to be divergent$^1$. $$\sum_{n=1}^{\infty }\frac{1}{2n}=\frac{1}{2}\sum_{n=1}^{\infty }\frac{1}{n}.$$ -- $^1$ The sum of the following $k$ terms is greater or equal to $\frac{1}{2}$ $$\frac{1}{k+1}+\frac{1}{k+2}+\ldots +\frac{1}{2k-1}+\frac{1}{2k}\geq k\times \frac{1}{2k}=\frac{1}{2},$$ because each term is greater or equal to $\frac{1}{2k}$.
Equation of straight line I know, $Ax + By = C$ is the equation of straight line but a different resource says that: $y = mx + b$ is also an equation of straight line? Are they both same?
Yes. That is, they both give the equation of a straight line and the equation of any non-vertical line can be written in either form. If $B\ne 0$. Then you can write $Ax+By=C$ as $$ By=-Ax+C $$ and, since $B\ne0$, the above can be written $$ y=-\textstyle{A\over B}x +{C\over B}. $$ If $B=0$, the equation is $Ax=C$, which is a vertical line when $A\ne0$. In this case you can't write it in the form $y=mx+b$ (which defines a function). On the other hand, given $y=mx+b$, you can rewrite it as $-mx+y=b$. Note that for the equation $Ax+By=C$ with $A$ and $B$ both non-zero: The $y$-intercept of its graph is $C/B$ and is found by taking $x=0$. The $x$-intercept is of its graph is $C/A$ and is found by taking $y=0$. The slope of the line is then $ {C/B-0\over 0-C/A } = -A/B$.
When is $a^k \pmod{m}$ a periodic sequence? Let $a$ and $m$ be a positive integers with $a < m$. Suppose that $p$ and $q$ are prime divisors of $m$. Suppose that $a$ is divisible by $p$ but not $q$. Is there necessarily an integer $k>1$ such that $a^k \equiv a \pmod{m}$? Or is it that the best we can do is say there are $n>0$ and $k>1$ such that $a^{n+k} \equiv a^n \pmod{m}$ What can be said about $n$ and $k$? EDIT: Corrected to have $k>1$ rather than $k>0$. EDIT: The following paper answers my questions about $n$ and $k$ very nicely. A. E. Livingston and M. L. Livingston, The congruence $a^{r+s} \equiv a^r \pmod{m}$, Amer. Math. Monthly $\textbf{85}$ (1978), no.2, 97-100. It is one of the references in the paper Math Gems cited. Arturo seems to say essentially the same thing in his answer.
A nice presentation of such semigroup generalizations of the Euler-Fermat theorem and related number theory is the following freely available paper S. Schwarz, The role of semigroups in the elementary theory of numbers, Math. Slovaca, Vol. 31 (1981) pp. 369–395.
Finding a third point I have learnt that if we are given 3 points in the extended complex plane and their corresponding image points, we have a unique Möbius map that can perform the mapping. Suppose I have 2 orthogonally intersecting circles and I want to map them (individually) to the real and imaginary axes respectively by some Möbius map, is there a systematic way to do so? I have figured that the intersection points will have to be sent to $0$ and $\infty$ respectively but how might I determine a third point and its image so as to define such a map?
Let one of the intersections be $p$. The inversion $z \to 1/(z-p)$ takes $p$ to $\infty$ and takes your two circles to straight lines intersecting orthogonally. Now just translate and rotate.
Numerical analysis textbooks and floating point numbers What are some recommended numerical analysis books on floating point numbers? I'd like the book to have the following * *In depth coverage on the representation of floating point numbers on modern hardware (the IEEE standard). *How to do arbitrary precision floating point calculations with a reasonably fast modern algorithm. *How to compute the closest 32-bit floating point representation of a dot product and cross product. And do this fast, so no relying on generic arbitrary precision calculations to get the bits of the 32-bit floating point number right. From what I can infer from doing some searches most books tend to focus on stuff like the runge kutta and not put much emphasis on how to make floating point calculations that are ultra precise.
You could try the book written by J.M. Muller, N. Brisebarre: * *Handbook of Floating Point Arithmetic (amazon.com) The literature of numerical mathematics concentrates on algorithms for mathematical problems, not on implementation issues of arithmetic operations. How to compute the closest 32-bit floating point representation of a dot product and cross product. Since these are concatenations of addition and multiplication, I expect that you won't find much about dot and cross products themselves.
Easiest way to perform Euclid's division algorithm for polynomials Let's say I have the two polynomials $f(x) = x^3 + x + 1$ and $g(x) = x^2 + x$ over $\operatorname{GF}(2)$ and want to perform a polynomial division in $\operatorname{GF}(2)$. What's the easiest and most bullet proof way to find the quotient $q(x) = x + 1$ and the remainder $r(x)=1$ by hand? The proposal by the german edition of Wikipedia is rather awkward.
Polynomial long division is the way to go. Especially over a finite field, where you don't have to worry about fractional coefficients (working over, for instance, the rational numbers these can get extremely unwieldy surprisingly soon). Over $\mathbb Z/2\mathbb Z$ you don't even have to worry about dividing coefficients at all; the only question to be answered is "to subtract or not to subtract", where as a bonus subtraction is actually the same as addition. Note that the Wikipedia article you refer to does not assume such a simple context, and avoids division by coefficients by doing a pseudo-division instead (for which, instead of an explosion of fractions, you can get enormous coefficients).
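For completeness, here is a hypothetical Python sketch of exactly this procedure, encoding polynomials over $\mathbb Z/2\mathbb Z$ as bitmasks (bit $i$ is the coefficient of $x^i$) so that "subtract" really is XOR:

```python
def gf2_divmod(f, g):
    """Long division of binary polynomials encoded as ints; returns (q, r)."""
    q = 0
    while g and f.bit_length() >= g.bit_length():
        shift = f.bit_length() - g.bit_length()
        q ^= 1 << shift          # record the quotient term x^shift
        f ^= g << shift          # subtract (= XOR) x^shift * g from f
    return q, f

f = 0b1011   # x^3 + x + 1
g = 0b0110   # x^2 + x
q, r = gf2_divmod(f, g)
print(bin(q), bin(r))   # 0b11 (= x + 1) and 0b1 (= 1)
```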
Formulas for counting pairs in a set I have a few questions regarding Cartesian products that will help me optimize a complicated SQL query I'm working on. Suppose I have 52 playing cards, and I want to know how many combinations of pairs (first two cards) a dealer can draw at the beginning. Obviously, this would be less than $52*52$ since the dealer cannot draw the same card twice. So, to me it seems the answer is $(52*52) - 52$, since there's 52 "pairs" of the same card, in other words $52*51$. However, I'd like to better understand the math behind this so I can apply it to any number of cards and any size sets: * *Given n cards, how many ordered sets of y cards can be created? For example, if I had 100,000 cards, how many unique sets of 10 cards could I make? *Given n cards, how many unordered sets of y cards can be created? For example, if I had 100 cards, how many unique unordered sets of 3 could I make? What's the mathematical formula that represents both these answers? Thanks!
The concepts you are looking for are known as "permutations" and "combinations." * *If you have $n$ items, and you want to count how many ordered $r$-tuples you can make without repetitions, the answer is sometimes written $P^n_r$, and: $$P^{n}_{r} = n(n-1)(n-2)\cdots (n-r+1).$$ This follows from the "multiplication rule": if event $A$ can occur in $p$ ways, and event $B$ can occur in $q$ ways, then the number of ways in which both events $A$ and $B$ can occur is $pq$. Your answer of $52\times 51$ for ordered pairs of playing cards is correct if you care about which one is the first card and which one is the second. Another way to see this is that there are 52 possible ways in which the first card is dealt; and there are 51 ways for the second card to be dealt (as there are only 51 cards left). *If you don't care about the order, then you have what are called "combinations" (without repetitions). The common symbol is $$\binom{n}{k}$$ which is pronounced "$n$ choose $k$". The symbol represents the number of ways in which you can select $k$ elements from $n$ possibilities, without repetition. In other words, the number of ways to choose subsets with $k$ elements from a set with $n$ elements. The formula is $$\binom{n}{k}=\frac{n!}{k!(n-k)!},\quad \text{if }0\leq k\leq n$$ where $n! = n\times (n-1)\times\cdots\times 2\times 1$. To see this, note that there are $\frac{n!}{(n-k)!} = n(n-1)\cdots(n-k+1)$ ways of selecting $k$ items if you do care about the order. But since we don't care about the order, how many times did we pick each subset? For instance, a subset consisting of $1$, $2$, and $3$ is selected six times: once as 1-2-3, once as 1-3-2, once as 2-1-3, once as 2-3-1, once as 3-1-2, and once as 3-2-1. Well, there are $k$ items, and so there are $P^k_k$ ways of ordering them; this is exactly $k!$ ways. So we counted each $k$-subset $k!$ times. So the final answer is $\frac{n!}{(n-k)!}$ divided by $k!$, giving the formula above, $$\binom{n}{k}=\frac{n!}{k!(n-k)!}.$$ See also this previous question and answer for the general principles and formulas.
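In recent Python versions both quantities are built in, which makes your concrete numbers easy to get (an illustrative sketch; requires Python 3.8 or later):

```python
import math

print(math.perm(52, 2))        # ordered pairs of cards: 52 * 51 = 2652
print(math.perm(100000, 10))   # ordered 10-card sequences from 100000 cards
print(math.comb(100, 3))       # unordered 3-card sets from 100 cards: 161700
```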
How to prove that for $A\cap B\neq\varnothing$, $(\bigcap A)\cap(\bigcap B)\subseteq\bigcap(A\cap B)$? $A$ and $B$ are non empty sets with non empty intersection. Prove that $(\bigcap A)\cap(\bigcap B) \subseteq \bigcap (A\cap B).$ The definition of intersection of a set is something like this, if $M$ is a nonempty set whose elements are themselves sets, then $x$ is an element of the intersection of $M$ if and only if for every element $A$ of $M$, $x$ is an element of $A$.
For theorems like these, as Asaf wrote, expanding definitions and simplifying is the way to go. However, I do these kind of things more 'calculationally' using the rules of predicate logic. In this case, we can easily calculate the elements $\;x\;$ of the left hand side: \begin{align} & x \in \bigcap A \;\cap\; \bigcap B \\ \equiv & \qquad\text{"definition of $\;\cap\;$; definition of $\;\bigcap\;$, twice"} \\ & \langle \forall V : V \in A : x \in V \rangle \;\land\; \langle \forall V : V \in B : x \in V \rangle \\ \equiv & \qquad\text{"logic: merge ranges of $\;\forall\;$ statements -- to simplify"} \\ & \langle \forall V : V \in A \lor V \in B : x \in V \rangle \\ \end{align} And similarly for the right hand side: \begin{align} & x \in \bigcap (A \cap B) \\ \equiv & \qquad\text{"definition of $\;\bigcap\;$; definition of $\;\cap\;$"} \\ & \langle \forall V : V \in A \cap B : x \in V \rangle \\ \equiv & \qquad\text{"definition of $\;\cup\;$"} \\ & \langle \forall V : V \in A \land V \in B : x \in V \rangle \\ \end{align} These two results look promisingly similar. We see that latter range implies the former, and predicate logic tells us that $$ \langle \forall z : P(z) : R(z) \rangle \;\Rightarrow\; \langle \forall z : Q(z) : R(z) \rangle $$ if $\;Q(z) \Rightarrow P(z)\;$ for all $\;z\;$. In our specific case, that means \begin{align} & \langle \forall V : V \in A \lor V \in B : x \in V \rangle \\ \Rightarrow & \qquad \text{"using the above rule, with $\;P \land Q \;\Rightarrow\; P \lor Q\;$"} \\ & \langle \forall V : V \in A \land V \in B : x \in V \rangle \\ \end{align} Putting this all together, with the definition of $\;\subseteq\;$, tells us that $$ \bigcap A \;\cap\; \bigcap B \;\subseteq\; \bigcap (A \cap B) $$ which is what we set out to prove.
Tensoring with vector bundle is a dense endofunctor of $D^b(\text{coh }X) $? A functor $F:T\to R$ between triangulated categories is dense if every object of $R$ is isomorphic to a direct summand in the image of $F$. Let $R=T=D^b(\text{coh }X)$ for a variety $X$ and consider the functor $-\otimes \mathcal{V}$, $\mathcal{V}$ a vector bundle. I do not understand the following claim: "$-\otimes\mathcal{V}$ is a dense functor, as any object $P\in D^b(\text{coh }X)$ is a summand of $(P\otimes V^\vee)\otimes V$." Can anyone help?
What part of the claim you don't understand? For any vector bundle $V$ the bundle $V\otimes V^\vee$ contains trivial 1-dimensional vector bundle (spanned by the section "Id"$\in V\otimes V^\vee$; the map in the opposite direction is the evaluation map). So any object $P\in D^b$ is a summand of the image of the object $P\otimes V^\vee$.
How can one prove that the cube root of 9 is irrational? Of course, if you plug the cube root of 9 into a calculator, you get an endless stream of digits. However, how does one prove this on paper?
This is essentially the same proof I gave in my answer here. Suppose $9^{\frac{1}{3}}$ is rational. Then $3^2n^3 = m^3$ for some natural numbers $n$ and $m$. On left side of the equation, the power of $3$ is of the form $3k + 2$ and on the right side it is of the form $3l$. This is a contradiction, because each integer greater than one has a unique prime factorization by the fundamental theorem of arithmetic. Thus $9^{\frac{1}{3}}$ is not rational. This same proof also works for a more general case. Let $p$ be prime and $n \geq 2$ an integer. Then $\sqrt[n]{p^k}$ is irrational when $n$ does not divide $k$. Just like before, assuming that $\sqrt[n]{p^k}$ is rational leads to a situation where we have a number with two different prime factorizations. One factorization has $p$ of power divisible by $n$, while the other has $p$ of power not divisible by $n$.
Find $DF$ in a triangle $DEF$ Suppose we have a triangle $ABC$ with three points $D$, $E$ and $F$ such that point $D$ lies on the segment $AE$, point $E$ lies on $BF$, and point $F$ lies on $CD$. We also know that the center of the circumscribed circle of $ABC$ is also the center of the inscribed circle of $DEF$. The angle $DFE$ is $90^\circ$, $DE/EF = 5/3$, the radius of the circle around $ABC$ is $14$, and with $S$ the area of $ABC$ and $K$ the area of $DEF$, we have $S/K=9.8$. I need to find $DF$. Help me please, I'd be very grateful if you could do it as fast as you can. Sorry for the inconvenience.
Here is a diagram. I may or may not post a solution later. Edit: I will not post a solution, since it appears to be quite messy. Please direct votes towards an actual solution.
Simple expressions for $\sum_{k=0}^n\cos(k\theta)$ and $\sum_{k=1}^n\sin(k\theta)$? Possible Duplicate: How can we sum up $\sin$ and $\cos$ series when the angles are in A.P? I'm curious if there is a simple expression for $$ 1+\cos\theta+\cos 2\theta+\cdots+\cos n\theta $$ and $$ \sin\theta+\sin 2\theta+\cdots+\sin n\theta. $$ Using Euler's formula, I write $z=e^{i\theta}$, hence $z^k=e^{ik\theta}=\cos(k\theta)+i\sin(k\theta)$. So it should be that $$ \begin{align*} 1+\cos\theta+\cos 2\theta+\cdots+\cos n\theta &= \Re(1+z+\cdots+z^n)\\ &= \Re\left(\frac{1-z^{n+1}}{1-z}\right). \end{align*} $$ Similarly, $$ \begin{align*} \sin\theta+\sin 2\theta+\cdots+\sin n\theta &= \Im(z+\cdots+z^n)\\ &= \Im\left(\frac{z-z^{n+1}}{1-z}\right). \end{align*} $$ Can you pull out a simple expression from these, and if not, is there a better approach? Thanks!
Take the expression you have and multiply the numerator and denominator by $1-\bar{z}$, and using $z\bar z=1$: $$\frac{1-z^{n+1}}{1-z} = \frac{1-z^{n+1}-\bar{z}+z^n}{2-(z+\bar z)}$$ But $z+\bar{z}=2\cos \theta$, so the real part of this expression is the real part of the numerator divided by $2-2\cos \theta$. But the real part of the numerator is $1-\cos {(n+1)\theta} - \cos \theta + \cos{n\theta}$, so the entire expression is: $$\frac{1-\cos {(n+1)\theta} - \cos \theta + \cos{n\theta}}{2-2\cos\theta}=\frac{1}{2} + \frac{\cos {n\theta} - \cos{(n+1)\theta}}{2-2\cos \theta}$$ for the cosine case. You can do much the same for the case of the sine function.
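A quick numerical check of the resulting closed form (a Python sketch; valid for $\theta$ not a multiple of $2\pi$, where the denominator vanishes):

```python
import math

def direct(n, theta):
    # 1 + cos(theta) + ... + cos(n*theta), summed term by term
    return sum(math.cos(k * theta) for k in range(n + 1))

def closed_form(n, theta):
    return 0.5 + (math.cos(n * theta) - math.cos((n + 1) * theta)) / (2 - 2 * math.cos(theta))

for n, theta in [(5, 0.7), (12, 2.3), (50, 1.1)]:
    print(direct(n, theta), closed_form(n, theta))   # pairs agree
```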
injection from double dual to finite-dimensional vector space (Note: I'm using the word "natural" to mean "without the need to choose a basis." I'm aware that there is a precise category-theoretic meaning of this word, but I don't have great intuition for it yet and am hoping, perhaps naively, it's not necessary to understand the following.) There exists a natural injection $V\rightarrow (V^{*})^{*}$ defined by sending $v\in V$ to a functional $\mu_{v}$ on $V^{*}$ such that $\mu_{v}(f)=f(v)$ for all $f\in V^{*}$. When $V$ is finite dimensional, this map is an isomorphism by comparing dimensions, so there is also an injection $(V^{*})^{*}\rightarrow V$. Is there a "natural" (again, in this context I understand this to mean basis-free) way to write down this injection, other than as simply the reverse of the first one?
For the sake of having an answer: no. Any good definition of "natural" would imply that this map also existed for infinite-dimensional vector spaces, which it doesn't. You shouldn't be able to do any better than "the inverse, when it exists, of the natural map $V \to (V^{\ast})^{\ast}$."
100 Soldiers riddle One of my friends found this riddle. There are 100 soldiers. 85 lose a left leg, 80 lose a right leg, 75 lose a left arm, 70 lose a right arm. What is the minimum number of soldiers losing all 4 limbs? We can't seem to agree on a way to approach this. Right off the bat I said that: 85 lost a left leg, 80 lost a right leg, 75 lost a left arm, 70 lost a right arm. 100 - 85 = 15 100 - 80 = 20 100 - 75 = 25 100 - 70 = 30 15 + 20 + 25 + 30 = 90 100 - 90 = 10 My friend doesn't agree with my answer as he says not all subsets were taken into consideration. I am unable to defend my answer as this was just the first, and most logical, answer that sprang to mind.
You can easily do it visually with a Venn diagram of the four sets of soldiers who lost each limb. For the minimum number of soldiers losing all four limbs, none of the inner sets overlap. So $100 - (15+20+25+30) = 10$.
Help to find the domain to this function $$ \sqrt{\log_\frac{1}{2}\left(\arctan\left(\frac{x-\pi}{x-4}\right)\right)} $$ Please, could someone show me the steps to find the domain of this function? It's the sixth time that I try to solve it, and I'm going to burn everything...
I assume that you are talking about the so-called "natural domain" of a real-valued function of a real variable (a common concept in Calculus, at least in the U.S.): given a formula, such as the above, and no words about its domain, we assume the domain is to be taken to be a subset of the real numbers, and that this subset should be "as large as possible". That is, we want to know all real numbers for which the expression makes sense and yields a real number. So, let's analyze the expression step by step, just as you would if you were trying to evaluate it. * *First, given an $x$, you would compute both $x-\pi$ and $x-4$. No problems there, that can be done with any real number $x$. *Then you would compute $\frac{x-\pi}{x-4}$. In order to be able to do this, you need $x-4\neq 0$. So we are going to have to exclude $x=4$. That is, the domain so far is "all $x\neq 4$". *Then we would compute $\arctan\left(\frac{x-\pi}{x-4}\right)$. Since the domain of the arctangent is "all real numbers", this can be done with any $x$ for which the fraction makes sense. We don't need to exclude any new values of $x$. *Then we would try to compute the logarithm (base $\frac{1}{2}$) of this number. In order to be able to compute the logarithm, we need the argument to be positive. So we are going to need $$\arctan\left(\frac{x-\pi}{x-4}\right)\gt 0.$$ When is the arctangent positive? When the argument is positive. So we need $$\frac{x-\pi}{x-4}\gt 0.$$ When is a fraction positive? When both numerator and denominator are positive, or when they are both negative. So we need either $x-\pi\gt 0$ and $x-4\gt 0$ (this happens when $x\gt 4$); or $x-\pi\lt 0$ and $x-4\lt 0$ (this happens when $x\lt \pi$). So we now need to restrict our $x$s to $(-\infty,\pi)\cup(4,\infty)$. (Note that this also maintains the exclusion of $4$.) *Finally, we need to take the square root of the answer. That means that the logarithm must be nonnegative. When is $\log_{\frac{1}{2}}(a)\geq 0$? When $0\lt a \leq 1$ (taking exponentials with base $\frac{1}{2}$ flips the inequality, because $(\frac{1}{2})^x$ is decreasing). So we actually need $$0\lt \arctan\left(\frac{x-\pi}{x-4}\right)\leq 1.$$ When is $0\lt \arctan(a)\leq 1$? When $\tan(0)\lt a \leq \tan(1)$, that is, when $0\lt a\leq\tan(1)$. So we need $$0 \lt \frac{x-\pi}{x-4}\leq \tan(1).$$ Since $\tan(1)\gt 0$, this happens if either $$0\lt (x-\pi) \leq \tan(1)(x-4)$$ or $$\tan(1)(x-4)\leq (x-\pi)\lt 0.$$ So check the inequalities; then remember that $x$ must be greater than $4$ for both $x-\pi$ and $x-4$ to be positive; or less than $\pi$ for both to be negative.
Why do all circles passing through $a$ and $1/\bar{a}$ meet $|z|=1$ at right angles? In the complex plane, I write the equation for a circle centered at $x$ by $|z-x|=r$, so $(z-x)(\bar{z}-\bar{x})=r^2$. I suppose that both $a$ and $1/\bar{a}$ lie on this circle, so I get the equation $$ (z-a)(\bar{z}-\bar{a})=(z-1/\bar{a})(\bar{z}-1/a). $$ My idea to show that the circles intersect at right angles is to show that the radii at the point of intersection are at right angles, which is the case when the sum of the squares of the lengths of the radii of the circles is the square of the distance between the centers. However, I'm having trouble finding a workable situation, since there is not a unique circle passing through $a$ and $1/\bar{a}$ to give a center to work with. What's the right way to do this?
I have a solution that relies on converting the complex numbers into ordered pairs, although I believe there must be a solution with just the help of complex numbers. Two circles intersect orthogonally if their radii are perpendicular at the point of intersection. So, using this, we can derive a condition for orthogonality. Here's how to get the condition. Let us consider two circles, $$C_1,A:x^2+y^2+2g_1x+2f_1y+c_1=0$$ $$C_2,B:x^2+y^2+2g_2x+2f_2y+c_2=0$$ From your high school course in analytic geometry, it must be clear that the centres $A$ and $B$ are $A(-g_1,-f_1)$ and $B(-g_2,-f_2)$, and the radii are $r_1=\sqrt{g_1^2+f_1^2-c_1}$ and similarly $r_2=\sqrt{g_2^2+f_2^2-c_2}$. Now invoke Pythagoras (I'll leave the actual computation to you); the condition turns out to be $$2g_1g_2+2f_1f_2=c_1+c_2$$ Now, find a parametric equation for a circle passing through the complex numbers $a$ and $\dfrac{1}{\bar a}$. How do you do this? Since the circle always passes through $a\cong(l,m)$ and $\dfrac{1}{\bar a}=\dfrac{a}{|a|^2}\cong\left(\dfrac{l}{l^2+m^2},\dfrac{m}{l^2+m^2}\right)$, the following will be the equation of the circle: $$(x-l)\left(x-\dfrac{l}{l^2+m^2}\right)+(y-m)\left(y-\dfrac{m}{l^2+m^2}\right)+\lambda(ly-mx)=0$$ The second circle is $$x^2+y^2-1=0$$ So, you should now see that $g_2=f_2=0$ and $c_2=-1$. Also, after a little inspection, note that we need not care what those $g_1$ and $f_1$ are. And, thankfully, $c_1=1$. So, you have the required condition for orthogonality. I know this is lengthy and not instructive, but this is all I can recollect from high school geometry. So, I only hope this is of some help!
How can you find the number of sides on this polygon? I'm currently studying for the SAT. I've found a question that I can't seem to figure out. I'm sure there is some logical postulate or assumption that is supposed to be made here. Here is the exact problem: I don't really care for an answer, I would rather know steps on how to solve this. I'm trying to be prepared for all types of questions on the SAT. Thanks! EDIT: Thanks so much for the help guys! I've figured it out and wanted to explain it in detail for anybody who wanted to know: SOLUTION SPOILER: 1. The figure displayed is a non-regular quadrilateral. 2. Because we know that, we know that the interior angles of the shape sum to 360 because of the formula (n-2)180. 3. I then created two statements: x+y=80 and x+y+z=360, where z is the sum of the top two full angles (above both x and y). 4. I then simply solved for z (the total of the two angles above x and y). 5. I found z to equal 280. When split between the two angles it represented, I determined that each angle in the shape was equal to 140 degrees. 6. Because each angle is congruent (the shape is regular) we now know the measurement of every angle. 7. I then plugged this into the interior angle formula: (n-2)180=140n. 8. After solving for n, you learn that the number of sides is 9 :) Hope this helps!
Hints: The sum of the measures of the interior angles of an $n$-sided convex polygon is $(n-2)*180^\circ$. So, if the polygon is regular, each interior angle has measure $180^\circ-{360^\circ\over n}$. (You could also use the fact that the sum of the exterior angles of a convex polygon is $360^\circ$. An "exterior angle" is the angle formed by a line coinciding with a side and the "next" side.)
Is this group a semidirect product? $G=\langle x,y,z:xy=yx,zxz^{-1}=x^{-1},zyz^{-1}=y^{-1}\rangle$, could you help me to understand if this group is a semidirect product of the type $\langle x,y\rangle\rtimes_\varphi\langle z\rangle$. I was trying to prove that $\langle x,y\rangle\triangleleft G$ and $\langle x,y\rangle\cap\langle z\rangle=\{1\}$, but I'm having trouble with the second, and actually I even don't know if this is true, it's possible that this group is not a semidirect product. Could you help me?
Showing that the intersection of two subgroups is trivial in a group described by generators and relations is a little tricky. Clearly, it is enough to show that if $i,j,k$ are integers and $x^i y^j z^k = 1$, then $i=j=k=0$. This is of course equivalent to showing that if $i,j,k$ are not all zero, then $x^i y^j z^k \ne 1$ in $G$. In a group described by generators and relations, in order to show that some word $w \ne 1$, you need to show that there is some group $H$ such that: (1) $H$ contains elements that satisfy the relations; (2) $w \ne 1$ in $H$. (This proves that the relations together with the group axioms do not force $w=1$; hence $w \ne 1$ in $G$.) So we have to show, for each $(i,j,k)$, there is some group $H$ such that (1) $H$ contains three elements $x,y,z$ satisfying the given relations; (2) in $H$, we have $x^i y^j z^k \ne 1$. You need such an $H$ for each nonzero triple $(i,j,k)$, so you'll need to find lots of groups containing 3 elements satisfying the given relations. A good source of such groups are the dihedral groups: $x,y$ can be any 2 rotations, and $z$ any reflection. It is easy to check that $x,y,z$ satisfy the given relations. The dihedral groups should give you enough $H$'s to rule out $x^i y^j z^k = 1$ for any $(i,j,k) \ne (0,0,0)$.
Is there a first-order logic for calculus? I just finished a course in mathematical logic where the main theme was first-order logic, with a little bit of second-order logic. Now my question is: if we define calculus as the theory of the field of the real numbers (is that right?), is there a (second- or) first-order logic for calculus? In essence I am asking whether there is a countable model of calculus. I hope my question is clear; English is my third language.
I take the view that the proper logical framework in which to do model theory for structures in analysis is continuous logic. For more information on the subject, look up the webpage of Ward Henson.
Perfect squares always one root? I had an exam today, and I've been thinking about this problem now, after the exam of course. $f(x)=a(x-b)^2 +c$. The point was to find $c$ so that the function has only one root. Easy enough: I played with the calculator and found the answer. But I hate explanations like that; you get a few points but far from full score. (Overall I should still get an A, I hope.) If $c=0$ then the expression is a perfect square, and a perfect square has only one root? Is that far off? $a(x-b)^2= - c$ $\frac{a(x-b)^2}{a}= - \frac{c}{a}$ $(x-b)^2= - \frac{c}{a}$ This also suggests that $c$ should be $0$ for there to be only one root?
An alternative way to think about it is geometrically. The graph of $y=x^2$ is a parabola that opens up with vertex at the origin. The graph of $$y = (x-b)^2$$ is then a horizontal shift by $b$ units (so $b$ units to the right if $b\geq 0$, and $|b|$ units to the left if $b\lt 0$) of the same graph. There is still only one root: the vertex. If $a\neq 0$, then $$y = a(x-b)^2$$ is a vertical stretch of this graph, possibly with a flip (if $a\lt 0$); it does not change the number of intersections with the $x$-axis. Finally, $$y=a(x-b)^2 + c$$ is a vertical shift by $c$ units (up if $c\gt 0$, down if $c\lt 0$). If $y=a(x-b)^2$ is a parabola that opens "up" (if $a\gt 0$), then shifting it up ($c\gt 0$) will remove all intersections with the $x$-axis; and shifting it down ($c\lt 0$) will create two intersections with the $x$-axis as the vertex moves down. If $y=a(x-b)^2$ is a parabola that opens "down" (if $a\lt 0$), then the situation is reversed: $c\gt 0$ will create two intersections with the $x$-axis, and $c\lt 0$ will remove all intersections with the $x$-axis. Either way, in order to maintain one and only one intersection, you need the vertex of the parabola to stay on the $x$-axis, so you need $c=0$. Conversely, if $c=0$, you have a parabola with vertex on the $x$-axis, hence with a single intersection.
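The algebraic counterpart of this picture can be seen with a quick symbolic computation (a sketch assuming sympy is available and $a\neq 0$):

```python
import sympy as sp

a, b, c, x = sp.symbols('a b c x')
roots = sp.solve(sp.Eq(a * (x - b) ** 2 + c, 0), x)
print(roots)   # expected: [b - sqrt(-c/a), b + sqrt(-c/a)]
# The two roots coincide (a single root at x = b) exactly when c = 0.
```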
How to test whether any 2 line segments (3D) are collinear or not? If we have two line segments in 3D, what is the way to test whether these two lines are collinear or not? (I forgot to mention that my line segments are 3D, so I edited the original post; sorry for the inconvenience.) I want to check the direction of the lines and the perpendicular distance between them. Are these two factors enough to decide whether two line segments are collinear or not? Thank you in advance.
If the two line segments $AB$ and $CD$ are given by 4 distinct points A, B, C and D, it is sufficient that all three of $AB \parallel CD$, $AC \parallel BD$ and $AD\parallel BC$ hold. (The third condition rules out the case where the four points form a parallelogram.) To see whether $A(a_1,a_2)B(b_1,b_2) \parallel C(c_1,c_2)D(d_1,d_2)$, you test whether or not $\vec{BA} = A-B$ and $\vec{DC} = C-D$ are linearly dependent vectors. So the two line segments are contained in the same line if $$ \begin{cases} (a_1-b_1)(c_2-d_2) - (c_1-d_1)(a_2-b_2) = 0 \\ (a_1-c_1)(b_2-d_2) - (b_1-d_1)(a_2-c_2) = 0 \\ (a_1-d_1)(c_2-b_2) - (c_1-b_1)(a_2-d_2) = 0 \end{cases}$$
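Since the original question is about 3D, here is a hedged numpy sketch of the same linear-dependence idea in three dimensions (function name and tolerance are my own choices): four points lie on one common line exactly when the relevant cross products vanish.

```python
import numpy as np

def segments_collinear(A, B, C, D, tol=1e-9):
    """True if segments AB and CD lie on one common line in 3D (assumes A != B)."""
    A, B, C, D = map(np.asarray, (A, B, C, D))
    d = B - A                                   # direction of the first segment
    # C and D lie on line AB iff (C - A) and (D - A) are parallel to d:
    return (np.linalg.norm(np.cross(d, C - A)) < tol and
            np.linalg.norm(np.cross(d, D - A)) < tol)

print(segments_collinear((0, 0, 0), (1, 1, 1), (2, 2, 2), (5, 5, 5)))  # True
print(segments_collinear((0, 0, 0), (1, 1, 1), (2, 2, 3), (5, 5, 5)))  # False
```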
is there a connection between the following? Assume $A$ is $m \times n$ and $B$ is $m \times n$. Is there a connection between the eigenvalues of $AB'$ and the eigenvalues of $B'A$? One is an $m \times m$ and the other is $n \times n$. ($B'$ stands for the transpose of $B$)
It seems easier for me to assume that $B$ is an $n \times m$ matrix. In that case, a classical argument shows that $AB$ and $BA$ have the same nonzero eigenvalues, not counting multiplicity. The case that these eigenvalues are distinct is dense in the general case, so $AB$ and $BA$ have the same nonzero eigenvalues counting multiplicity. Of course one of them has $|n-m|$ more zero eigenvalues than the other.
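A quick numpy experiment illustrating the claim (random matrices, so this is a numerical sanity check rather than a proof):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 3, 5
A = rng.standard_normal((m, n))
B = rng.standard_normal((n, m))

eig_AB = np.linalg.eigvals(A @ B)   # m = 3 eigenvalues
eig_BA = np.linalg.eigvals(B @ A)   # n = 5 eigenvalues

# BA has the same nonzero eigenvalues as AB, plus n - m = 2 extra (numerical) zeros.
print(np.sort_complex(eig_AB))
print(np.sort_complex(eig_BA))
```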
Nomenclature of random variables: $\{X=0, Y=0\}$ same as $\{X=0\}\cap \{Y=0\}$? Just a small doubt. My exercises keep switching notation on this small detail, and I always happen to know the other version. Let $X,Y$ be random variables. Is $\{X=0, Y=0\}$ the same as $\{X=0\}\cap \{Y=0\}$? Another example. Let $N$ be the number of users on a webpage. Two files are available for download, one with 200 kb and another with 400 kb size. $$ \begin{align} X_n(w) := w_n = \{ & 0:=\text{user downloads no file}, \\ & 1:=\text{user downloads the first file (200 kb)}, \\ & 2 :=\text{user downloads the second file (400 kb)}, \\ & 3:=\text{user downloads both files (600 kb)}\} \end{align} $$ I want to express: at least one user downloaded the 200 kb file. Here's how I expressed it: $\{X_1 + X_2 + \cdots + X_n \geq 1\}$. Would this be ok? The book expressed it as $\{X_1=1\}\cup\{X_1=3\}\cup \cdots \cup\{X_n=1\}\cup\{X_n=3\}$. Another thing to express: no user downloaded the 200 kb file. I expressed it as $|\{X_k=1, 1 \leq k \leq N\}|=0$. The book: $\{X_1 \neq 1\}\cap \cdots \cap \{X_n \neq 1\}$. Would my solution be ok? I'm always in doubt about when I'm allowed to use symbols like $+$ and $|\cdot|$ (to get the number of elements). Is this generally always allowed? Many thanks in advance!
$\{X=0,Y=0\}$ and $\{X=0\}\cap\{Y=0\}$ are the same thing. Both notations refer to $$ \{\omega\in\Omega : X(\omega)=0\ \ \&\ \ Y(\omega)=0\} = \{\omega\in\Omega : X(\omega)=0\}\cap\{\omega\in\Omega : Y(\omega)=0\}. $$ Your notation saying $$ \begin{align} X_n(w) := w_n = \{ & 0:=\text{user downloads no file}, \\ & 1:=\text{user downloads the first file (200 kb)}, \\ & 2 :=\text{user downloads the second file (400 kb)}, \\ & 3:=\text{user downloads both files (600 kb)}\} \end{align} $$ seems confused. I suspect maybe you meant $$ \begin{align} \Omega = \{ & 0:=\text{user downloads no file}, \\ & 1:=\text{user downloads the first file (200 kb)}, \\ & 2 :=\text{user downloads the second file (400 kb)}, \\ & 3:=\text{user downloads both files (600 kb)}\}, \end{align} $$ although even that may differ from what's appropriate if you're bringing in $n$ different random variables. Your later notation makes it look as if what the author of the book had in mind is that $X_k$ records what the $k$th user downloaded, for $k=1,\ldots,n$. Just what $w$ is, you're not clear about, and at this point I'm wondering if you're confusing $w$ with $\omega$. Probably what is needed is this: $$ \begin{align} \{ & 0:=\text{user downloads no file}, \\ & 1:=\text{user downloads the first file (200 kb)}, \\ & 2 :=\text{user downloads the second file (400 kb)}, \\ & 3:=\text{user downloads both files (600 kb)}\}^n \end{align} $$ i.e. the $n$th power of that set of four elements. This is the set of all $n$-tuples where each component of an $n$-tuple is one of these four elements. Then, when $\omega$ is any such $n$-tuple, $X_k(\omega)$ is its $k$th component, which is one of those four elements. For example, if $n=3$, so there are three users, then $$ \begin{align} \Omega = \{ & (0,0,0), (0,0,1), (0,0,2), (0,0,3), (0,1,0), (0,1,1), (0,1,2), (0,1,3),\ldots\ldots\ldots \\ \\ & \ldots\ldots\ldots, (3,3,3) \}, \end{align} $$ with $4^3=64$ elements. If, for example, $\omega=(2,3,0)$, then $X_2(\omega)=3$.
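To make the set-theoretic point concrete, here is a small Python illustration (entirely my own construction) of $\Omega$ as the $n$th power of the four-element outcome set, with $X_k$ reading the $k$th coordinate; it checks that $\{X_1=0, X_2=0\}$ and the intersection of the two single events are literally the same set:

```python
from itertools import product

codes = (0, 1, 2, 3)                   # per-user outcome codes from the question
n = 3
omega = list(product(codes, repeat=n))
print(len(omega))                      # 4**3 = 64 sample points

def X(k):
    return lambda w: w[k - 1]          # X_k reads the k-th coordinate of omega

# {X_1 = 0, X_2 = 0} equals the intersection {X_1 = 0} with {X_2 = 0}:
joint = {w for w in omega if X(1)(w) == 0 and X(2)(w) == 0}
inter = {w for w in omega if X(1)(w) == 0} & {w for w in omega if X(2)(w) == 0}
assert joint == inter
```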
Does $\int_0^1 \sum_{n=0}^{\infty}x e^{-nx}\;dx = \sum_{n=0}^{\infty}\int_0^1 x e^{-nx}\;dx$? Does $$\int_0^1 \sum_{n=0}^\infty x e^{-nx}\;dx = \sum_{n=0}^\infty \int_0^1 x e^{-nx}dx$$ ? This exercise leaves me stumped. On the one hand, it seems the series $\sum_{n=0}^\infty xe^{-nx}$ is not uniformly convergent in $[0,1]$ (it equals $\frac{xe^x}{(e^x-1)}$ in $(0,1]$ and 0 in $x_0=0$, so it cannot be uniformly convergent since it is a series of continuous functions that converges to a non-continuous function). On the other hand, if this is the case, how do I deal with that... thing? Perhaps the series is uniformly convergent and I made a mistake? Thanks!
You can use Fubini's theorem, but it seems overkill. Note that for every integer $N$ we have $$\sum_{n=0}^N\int_0^1xe^{-nx}\,dx=\int_0^1\sum_{n=0}^Nxe^{-nx}\,dx\leq \int_0^1\sum_{n=0}^{+\infty}xe^{-nx}\,dx,$$ so $$\sum_{n=0}^{+\infty}\int_0^1xe^{-nx}\,dx\leq \int_0^1\sum_{n=0}^{+\infty}xe^{-nx}\,dx.$$ For the reverse inequality, fix $\varepsilon>0$. Since $\sum_{n=0}^{+\infty}xe^{-nx}$ is integrable on $[0,1]$, we can find a $\delta>0$ such that $\int_0^{\delta}\sum_{n=0}^{+\infty}xe^{-nx}\,dx\leq \varepsilon$. Moreover, the series $\sum_{n=0}^{+\infty}xe^{-nx}$ is normally convergent on $[\delta,1]$ (its terms are dominated there by $e^{-n\delta}$, so it converges uniformly by the Weierstrass M-test), which justifies the termwise integration below. So we have \begin{align*} \int_0^1\sum_{n=0}^{+\infty}xe^{-nx}dx&=\int_0^\delta\sum_{n=0}^{+\infty}xe^{-nx}dx+\int_\delta^1\sum_{n=0}^{+\infty}xe^{-nx}dx\\ &\leq\varepsilon +\int_\delta^1\sum_{n=0}^{+\infty}xe^{-nx}dx\\ &=\varepsilon +\sum_{n=0}^{+\infty}\int_\delta^1xe^{-nx}dx\\ &\leq \varepsilon +\sum_{n=0}^{+\infty}\int_0^1xe^{-nx}dx, \end{align*} and since $\varepsilon$ is arbitrary we can conclude the equality.
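A numerical sanity check of the interchange (my own sketch, assuming scipy is available; the closed form $\sum_{n\ge 0}xe^{-nx}=x/(1-e^{-x})$ for $x>0$ is used on the left side):

```python
from math import exp
from scipy.integrate import quad

# Left side: integrate the summed series x / (1 - e^(-x)), extended by 1 at x = 0.
lhs, _ = quad(lambda x: x / (1 - exp(-x)) if x > 0 else 1.0, 0, 1)

# Right side: sum the termwise integrals of x * e^(-n x) over [0, 1].
rhs = sum(quad(lambda x, n=n: x * exp(-n * x), 0, 1)[0] for n in range(2000))

print(lhs, rhs)   # both are approximately 1.2776 (agreeing to ~3 decimals here)
```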
Conditional Probability Question Bowl A contains 6 red chips and 4 blue chips. Five chips are randomly chosen and transferred without replacement to Bowl B. One chip is drawn at random from Bowl B. Given that this chip is blue, find the conditional probability that 2 red chips and 3 blue chips are transferred from bowl A to bowl B. Attempt: $$P(A|B) = \frac{P(B|A)\cdot P(A)}{P(B)}$$ Let $B$ = chip is blue and $A$ = 2 red and 3 blue are chosen. $$\begin{align} &P(A) = \frac {\binom 6 2 \cdot \binom 4 3}{\binom {10} 5}\\ &P(B|A) = \frac 3 5 \end{align}$$ By Bayes Rule, $P(A|B) = \left(\dfrac 3 5\right)\cdot \dfrac{ \binom 6 2 \binom 4 3}{\binom {10} 5\cdot \dfrac{4}{10}}$. Is this correct?
There are $\frac{10!}{6!4!} = 210$ possible color arrangements for the ten chips, and $\frac{5!}{2!3!} = 10$ arrangements for the chips desired in bowl B (2 red, 3 blue). For each of these, bowl A keeps 4 red chips and 1 blue chip, which can be arranged in $\frac{5!}{4!1!} = 5$ ways. The total number of possibilities with the correct bowl B is therefore $10 \cdot 5 = 50$, so $P(A) = 50/210 = 5/21$ (which agrees with your $\binom{6}{2}\binom{4}{3}/\binom{10}{5} = 60/252$). Substituting into your (correct) Bayes expression gives $$P(A\mid B) = \frac{(3/5)(5/21)}{4/10} = \frac{5}{14} \approx 36\%.$$
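Exact arithmetic confirming the value (standard library only; the symmetry argument for $P(B)=4/10$ is the same one used implicitly in the question):

```python
from fractions import Fraction
from math import comb

P_A = Fraction(comb(6, 2) * comb(4, 3), comb(10, 5))   # transfer 2 red, 3 blue: 5/21
P_B_given_A = Fraction(3, 5)                           # draw blue from a 2R/3B bowl

# P(B): by symmetry, the chip drawn from bowl B is uniform over the original 10 chips.
P_B = Fraction(4, 10)

print(P_B_given_A * P_A / P_B)    # 5/14, i.e. about 35.7%
```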
Is the following derivative defined? I am new to this site so I am not sure if this is the right place to be asking this question but I will try anyway. I am reading an economics paper for my research and the author does the following: $$\frac{\partial}{\partial C_t(j)} \int_0^1 P_t(j) C_t(j) dj = P_t(j)$$ I feel that this derivative is not properly defined but I am probably missing something obvious because the author knew what he was doing. Could someone please tell me if this is a legitimate derivative? Thanks,
That derivative can be properly defined if and only if all three of the following hold:

1. there exists an appropriate pair of values $t$ and $j$ such that $\: C_t(j) = 0 \:$;
2. all appropriate values of $t$ such that [there exists an appropriate value of $j$ such that $\: C_t(j) = 0 \:$] give the same value of the integral;
3. $0$ is a limit point of the set of values taken by $C_t(j)$ for appropriate values of $t$ and $j$.

("Appropriate values" are those associated with the definition of $C_t(j)$, and even if the derivative "can be properly defined", it may still fail to exist, as is the case with non-differentiable functions.) If either of the first two conditions fails, there is not a unique "base value" to plug into the difference quotient. If the last one fails, it is not clear what the definition of "limit" would be.
Circle and line segment intersection. I have a line segment (begin $(x_1,y_1)$, end $(x_2,y_2)$, with length $D=5$, say) and a circle (radius $R$, center $(x_3,y_3)$). How can I check whether my line segment intersects my circle? picture http://kepfeltoltes.hu/120129/inter_www.kepfeltoltes.hu_.png
The points $(x,y)$ on the line segment that joins $(x_1,y_1)$ and $(x_2,y_2)$ can be represented parametrically by $$x=tx_1+(1-t)x_2, \qquad y=ty_1+(1-t)y_2,$$ where $0\le t\le 1$. Substitute in the equation of the circle, solve the resulting quadratic for $t$. If $0\le t\le 1$ we have an intersection point, otherwise we don't. The value(s) of $t$ between $0$ and $1$ (if any) determine the intersection point(s). If we want a simple yes/no answer, we can use the coefficients of the quadratic in $t$ to determine the answer without taking any square roots.
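A hedged Python sketch of exactly this method (the function and variable names are mine): substitute the parametrization into $(x-x_3)^2+(y-y_3)^2=R^2$, solve the quadratic in $t$, and keep roots in $[0,1]$.

```python
from math import sqrt

def segment_circle_intersections(p1, p2, center, R):
    """Intersection points of the segment p1-p2 with the circle (assumes p1 != p2)."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, center
    # Parametrize: x = t*x1 + (1-t)*x2, y = t*y1 + (1-t)*y2 with 0 <= t <= 1.
    dx, dy = x1 - x2, y1 - y2
    ex, ey = x2 - x3, y2 - y3
    a = dx * dx + dy * dy                     # coefficients of the quadratic in t
    b = 2 * (dx * ex + dy * ey)
    c = ex * ex + ey * ey - R * R
    disc = b * b - 4 * a * c
    if disc < 0:
        return []                             # the whole line misses the circle
    roots = {(-b + s * sqrt(disc)) / (2 * a) for s in (1, -1)}
    return [(t * x1 + (1 - t) * x2, t * y1 + (1 - t) * y2)
            for t in sorted(roots) if 0 <= t <= 1]

print(segment_circle_intersections((0, 0), (4, 0), (2, 0), 1))  # [(3.0, 0.0), (1.0, 0.0)]
```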
Counterexample of G-Set If every element of a $G$-set is left fixed by the same element $g$ of $G$, then $g$ must be the identity $e$. I believe this to be true, but the answers say that it's false. Can anyone provide a counter-example? Thanks!
For concreteness, let $G$ be the group of isometries of the plane, and let $g$ be reflection in the $x$-axis. Let $S$ be the $x$-axis. Then $g(v)=v$ for every point $v$ in $S$.
Every manifold is locally compact? Theorem. Every manifold is locally compact. This is a problem in Spivak's Differential Geometry; however, I don't know how to prove it. The book gives no hints, and I don't know whether there is a stupidly easy way or whether it's really complex. A good example is the Heine-Borel theorem: I would have no clue how to prove it if I hadn't seen the proof. So can someone give me hints? I suppose if the property is local, does this imply that each point has a neighborhood homeomorphic to some bounded subset of a Euclidean space?
I do not think the above answers are completely right, since the "Hausdorff" condition in the definition of a topological manifold must be used. (Here $\bar{V}_{U}$ denotes the closure of $V$ in the subspace $U$, and $\bar{V}$ its closure in $X$.) The key is to prove the following: If $V\subset U\subset X$, $X$ is Hausdorff, and $\bar{V}_{U}$ is compact, then $\bar{V}_{U}=\bar{V}$. Proof: By definition, we only need to show that $\bar{V}\subset \bar{V}_U$. Suppose $x\in\bar{V}$ and, for contradiction, $x\notin \bar{V}_U$. Since $X$ is Hausdorff and $\bar{V}_{U}$ is compact, $\bar{V}_{U}$ is closed in $X$. So $(\bar{V}_{U})^{c}$ is an open set containing $x$. Since $x\in \bar{V}$, the set $V\cap (\bar{V}_{U})^{c}$ must be nonempty. This is impossible, since $\bar{V}_{U}$ contains $V$.
Proof that $\mathbb{Q}$ is dense in $\mathbb{R}$ I'm looking at a proof that $\mathbb{Q}$ is dense in $\mathbb{R}$, using only the Archimedean Property of $\mathbb{R}$ and basic properties of ordered fields. One step asserts that for any $n \in \mathbb{N}$, $x \in \mathbb{R}$, there is an integer $m$ such that $m - 1 \leq nx < m$. Why is this true? (Ideally, this fact can be shown using only the Archimedean property of $\mathbb{R}$ and basic properties of ordered fields...)
Assume first that $x>0$, so that $nx>0$. By the Archimedean property there is a $k\in\mathbb{N}$ such that $k>nx$; let $m$ be the least such $k$. Clearly $m-1\le nx<m$. If $x=0$, just take $m=1$. Finally, if $x<0$, then $-nx>0$, so by the first part of the argument there is an integer $k$ such that $k-1\le -nx<k$, and hence $-k<nx\le 1-k$. If $nx\ne 1-k$, you’re done: just take $m=1-k$. If $nx=1-k$, take $m=2-k$.
If a coin is flipped 10 times, what is the probability that it will land heads-up at least 8 times? I absolutely remember learning this in middle school, yet I cannot remember how to solve it for the life of me. Something to do with nCr, maybe? ... Thanks for any help.
What we'd like to do is find a way to set the problem up so that we know how to solve it. $P(\text{at least } 8 \text{ heads}) = P(X \geq 8)$, where $X$ is the random variable counting the number of heads attained. Well, since $X$ can only take the values $0$ through $10$, perhaps we should split $P$ up: $P(X \geq 8) = P(X = 8) + P(X = 9) + P(X = 10)$. We can split them up like this because there is no "overlap" between the events (you can't get exactly 8 heads and also get 9 or 10 heads). Now we just need to apply the definition of probability for equally likely outcomes: $P(E) = n(E)/n(S)$, where $n(E)$ is the number of items in our event set and $n(S)$ is the number of items in our sample space. For each of the probabilities, $n(S) = 2^{10}$ by the multiplication principle. Now, what are each of the $n(E)$? You thought it would have to do with combinations (nCr), and you were right. We use combinations instead of permutations because we really don't care which order we get the heads in, right? So, for $X = 8$: $n(E) = {{10}\choose{8}}$, and so on. Can you take it from here?
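The same computation in code (standard library only):

```python
from math import comb

n = 10
p = sum(comb(n, k) for k in (8, 9, 10)) / 2 ** n
print(p)   # (45 + 10 + 1) / 1024 = 56/1024 = 7/128, about 0.0547
```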
Finding points in a grid with exactly k paths to them? Suppose that we begin at (0, 0) and are allowed to take two types of steps - steps one unit up and steps one unit to the right. For example, a legal path might be (0, 0) → (1, 0) → (2, 0) → (2, 1) → (3, 1). Now, suppose that you are given a number k. Is there an efficient algorithm to list all points (x, y) with exactly k paths to them? For example, given the number 6, we would list (1, 5), (5, 1) and (2, 2), since these points have exactly six paths to them. Thanks!
This sounds to me like a combinatorial problem. Say you start at (x, y) and want to go to (x+3, y+3). If we represent all "up" movements by 'U' and all "right" movements by 'R', such a path could be UUURRR. The total number of possible paths would be all possible permutations of UUURRR, namely 6!/(3!3!) = 20. An algorithm finding all these paths could be to put all 'U's and 'R's in a pool and select one from the pool. This will be your first move. Then find all permutations involving the rest of the pool. Finally, swap your first choice (i.e. a 'U') for the opposite choice (this time an 'R') and do it again. Recursively you'll now have found all possible paths between the two points. Updating the answer to reflect templatetypedef's comments below: If you want the number of paths reachable within $k$ steps, the solution is still feasible and similar. Perhaps even simpler. Choose a 'U', then calculate the number of paths using (k-1) steps from there to the destination. After this is complete, choose an 'R' and calculate the number of paths using (k-1) steps from there to the destination. These two numbers added together will be your answer. Use recursion on the (k-1) subpath-steps. If you want the points with exactly n paths leading to them, it gets trickier. One way could be to go by binomial numbers. Find all $i$ and $j$ such that $\binom{i}{j} = \frac{i!}{(i-j)!\,j!}=n$, where $i$ is the total number of steps and $j$ the number of 'R' moves. This will take O(n) time since $i+j\le n$. Then you can use my proposition above for finding the number of paths reachable within $k$ steps. OleGG's solution below might be cleaner though; I leave it to you to benchmark :)
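Here is a hedged Python sketch of the binomial-number approach for the original question (names are mine): the number of monotone paths to $(x,y)$ is $\binom{x+y}{x}$, so it suffices to scan the first $k$ rows of Pascal's triangle, because $\binom{x+y}{x}\ge x+y$ whenever $\min(x,y)\ge 1$:

```python
from math import comb

def points_with_k_paths(k):
    """All (x, y) with exactly k monotone paths from (0, 0), for k >= 2.

    (k = 1 is special: every point on either coordinate axis has exactly one path.)
    """
    hits = []
    # Paths to (x, y) number C(x+y, x), and C(r, x) >= r for 1 <= x <= r-1,
    # so rows r = x + y > k cannot contain the value k away from the edges.
    for r in range(2, k + 1):
        for x in range(1, r):
            if comb(r, x) == k:
                hits.append((x, r - x))
    return hits

print(points_with_k_paths(6))   # [(2, 2), (1, 5), (5, 1)]
```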