Dataset columns: title_body (string, 61–4.12k chars) · upvoted_answer (string, 20–29.9k chars) · downvoted_answer (string, 19–18.2k chars).
Probability problem. There are 16 disks in a box. Question: There are 16 disks in a box. Five of them are painted red, five of them are painted blue, and six are painted red on one side, and blue on the other side. We are given a disk at random, and see that one of its sides is red. Is the other side of this disk more likely to be red or blue?
There are $5 \times 2 + 6 \times 1 = 16$ red sides, and of the $16$ red sides, $10$ have red on the other side, so the probability the other side is red is $\frac{10}{16}=\frac{5}{8} \gt \frac{1}{2}$, and so the other side of this disk is more likely to be red. Alternatively, if you are given a disk at random, then it is more likely to be the same colour on both sides (and red and blue have equal probabilities), so if you see a red side then the other side is more likely than not red.
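To double-check the counting, one can enumerate all 32 equally likely (disk, visible side) pairs by machine; a minimal sketch (the disk encoding is mine, not the answerer's):

```java
public class DiskCheck {
    public static void main(String[] args) {
        // Encode each disk as a 2-element array of side colours:
        // 'R' = red, 'B' = blue. 5 red-red, 5 blue-blue, 6 red-blue.
        char[][] disks = new char[16][];
        for (int i = 0; i < 5; i++) disks[i] = new char[]{'R', 'R'};
        for (int i = 5; i < 10; i++) disks[i] = new char[]{'B', 'B'};
        for (int i = 10; i < 16; i++) disks[i] = new char[]{'R', 'B'};

        int redShowing = 0, redHidden = 0;
        // Each of the 32 (disk, visible side) pairs is equally likely.
        for (char[] d : disks) {
            for (int side = 0; side < 2; side++) {
                if (d[side] == 'R') {
                    redShowing++;
                    if (d[1 - side] == 'R') redHidden++;
                }
            }
        }
        System.out.println(redHidden + "/" + redShowing); // prints 10/16, i.e. 5/8
    }
}
```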
There are 5 red discs with red on the other side ... and 6 red discs with blue on the other side ... makes sense that you are more likely to find blue on the other side ... unless you're lucky at blackjack.
How to prove that $x \rightarrow e^{1/x}$ is not the restriction to $ \mathbb {R}_+$ of any distribution on $\mathbb R$? This is exercise 2.2 from Hörmander, vol. I: Does there exist a distribution $u$ on $\mathbb{R}$ with the restriction $x \rightarrow e^{1/x}$ to $\mathbb{R}_+$? The answer, provided in the book, is "No". I am trying to "cook up" appropriate test function(s) such that $ \int \phi(x)e^{1/x}\,dx \leq C\sum_{\alpha \leq k} \sup\left|\partial^{\alpha}\phi\right|$ holds for no $k$, and I'm not sure at all what function(s) to take. What is the appropriate function? Is there a general method to come up with just the right test functions?
Following Fedja's hint, let $\phi$ be a nonnegative test function supported in $(1,2)$ such that $\phi=1$ on $(5/4,7/4)$. For $b>0$, let $\phi_{b}(x):=\phi(bx)$. Observe that $$\int_{\mathbb{R}}\phi_{b}(x)e^{1/x}dx=b^{-1}\int_{\mathbb{R}}\phi(x)e^{b/x}dx,\qquad\forall b>0$$ Suppose that there is a distribution $u\in\mathcal{D}'(\mathbb{R})$ of order $k$ whose restriction to $\mathbb{R}^{+}$ is $e^{1/x}$: $$\left|\langle{u,\psi}\rangle\right|\leq C\sum_{\alpha\leq k}\left\|\partial^{\alpha}\psi\right\|_{\infty},\qquad\forall\psi\in C_{c}^{\infty}(\mathbb{R})$$ and in particular, $$\left|\langle{u,\phi_{b}}\rangle\right|\leq C\sum_{\alpha\leq k}\left\|\partial^{\alpha}\phi_{b}\right\|_{\infty}\leq C\sum_{\alpha\leq k}b^{\alpha}\left\|\partial^{\alpha}\phi\right\|_{\infty}\leq Cb^{k}\sum_{\alpha\leq k}\left\|\partial^{\alpha}\phi\right\|_{\infty},\qquad\forall b\geq 1$$ You can check that there is a constant $C'>0$ such that $$\dfrac{e^{b/x}}{b^{k+1}}\geq C'\dfrac{b}{x^{k+2}},\qquad\forall x>0$$ Whence, $$b^{-k}\left|\langle{u,\phi_{b}}\rangle\right|=b^{-k-1}\int_{\mathbb{R}}\phi(x)e^{b/x}dx\geq C' b\int_{1}^{2}\phi(x)x^{-k-2}dx$$ Since the RHS tends to $\infty$ as $b\rightarrow\infty$, we obtain a contradiction.
I think the argument is simpler than you expect, unless I'm making a terrible mistake... Just pick any test function with the property that $\phi(0) >1$ ($\phi(0)\neq 0$ is actually enough). Since $\phi(0)>1$ there exists some $a$ so that $\phi >1$ on $[0,a]$. Then $$\int \phi(x) e^{\frac{1}{x}}\,dx> \int_0^a e^{\frac{1}{x}} dx = \int_\frac{1}{a}^\infty \frac{e^u}{u^2}du = \infty $$
What's the difference between an initial value problem and a boundary value problem? I don't really see the difference, because in both cases we need to determine $y$ and the values of the constants. The only difference is that we are given the values of $y$ and $y'$ at one point in the former, and the values of $y$ (or $y'$) at two different points in the latter. I solve both problems the same way. I don't really understand the theory, I guess.
For a simple example (second order ODE), an initial value problem would say $y(a)=p$, $y'(a)=q$. A boundary value problem would specify $y(a)=p$, $y(b)=q$.
In an initial value problem we are given the values of $f(x)$ and $f'(x)$ at a single initial point (it may be $0$ or something else), for instance $f(1)=3$ and $f'(1)=2$; from these we can determine the constants. But in a boundary value problem the conditions are given at the two ends of an interval, e.g. $f(0)=3$, $f(2)=5$.
How to prove a function from $\mathbb N\times \mathbb N$ to $\mathbb N$ is bijective. I am having trouble with this problem: $f\colon \mathbb N\times \mathbb N \rightarrow \mathbb N$ is defined by $f(i,j)=\dfrac{(i+j-1)(i+j-2)}{2}+i$. How do you prove that $f$ is a bijection from $\mathbb N\times \mathbb N$ to $\mathbb N$? Work: I tried to set $f(i,j)=f(a,b)$. Assuming that $f(i,j)\neq f(a,b)$, then I assume $f(i,j)< f(a,b)$and that $i+j=m$ and $a+b=m+r$ for some remainder $r$. I replaced these into my equation and try to obtain a contradiction thus proving that $i=a, j=b$. However, I get stuck after this part and I do not know of a better way of doing this. Please help.
Instead, let $a=i+m$ and $b=j+n$. Then $(i+j-1)(i+j-2)+2i=(i+j+m+n-1)(i+j+m+n-2)+2i+2m$. Try multiplying this out and collecting terms. You want to conclude that $m=n=0$. You might ask why this is a more natural substitution. When you make a substitution, you usually don't want to substitute for only some instances of a variable. Notice that in your setup you still have an $a$ term at the end, despite substituting for some of the $a$s, and likewise for $i$.
$$T(n)=\frac{n(n+1)}{2}$$ $$f(i,j)-T(i+j-2)=i > 0$$ $$T(i+j-1)-f(i,j)=j-1≥0$$ $$T(i+j-2)<f(i,j)≤T(i+j-1)$$ Suppose $f(i,j)=f(a,b)$. Now, $T(i+j-2)<f(i,j)≤T(i+j-1)$ and $T(a+b-2)<f(a,b)≤T(a+b-1)$. Again, $T(n)$ is a strictly increasing sequence, hence $a+b=i+j$. But then the relation $f(i,j)=f(a,b)$ gives $i=a$ and $j=b$, hence $f$ is one-to-one. I leave the work of proving that $f$ is onto up to you.
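Either argument is easy to sanity-check by machine; a small sketch (the range is an arbitrary choice of mine) verifying that $f$ maps the pairs with $i+j\le 101$ bijectively onto $\{1,\dots,5050\}$:

```java
import java.util.HashSet;
import java.util.Set;

public class PairingCheck {
    static int f(int i, int j) {
        return (i + j - 1) * (i + j - 2) / 2 + i;
    }

    public static void main(String[] args) {
        Set<Integer> seen = new HashSet<>();
        // All (i, j) with i, j >= 1 and i + j <= 101: these should hit
        // each of 1, 2, ..., T(100) = 5050 exactly once.
        for (int i = 1; i <= 100; i++)
            for (int j = 1; j <= 101 - i; j++)
                if (!seen.add(f(i, j)))
                    System.out.println("collision at (" + i + ", " + j + ")");
        boolean onto = seen.size() == 5050;
        for (int v = 1; v <= 5050; v++) if (!seen.contains(v)) onto = false;
        System.out.println("bijection onto {1,...,5050}: " + onto); // true
    }
}
```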
If $\gcd(x,y)=1379,\operatorname{lcm}(x,y)=n!$ then find $n$ If $\gcd(x,y)=1379,\operatorname{lcm}(x,y)=n! $ then find $n$ ($x,y,n$ are positive integers). I tried using the relation: $\gcd(x,y)\cdot\operatorname{lcm}(x,y)=xy$ and also wrote $1379=7\cdot197$ but nothing more.
Since $\gcd(x,y)=7\cdot 197$, we can assume $x=7\cdot 197a$, and $y=7\cdot 197b$, where $\gcd(a,b)=1$. Then $\text{lcm}(x,y)=7\cdot 197\cdot ab=n!$. Thus, $n\geqslant 197$. Actually, we can assign different prime factors of $\frac{n!}{7\cdot 197}$ to $a$ and $b$ respectively to make sure they are coprime.
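A toy analogue of this prime-assignment idea (my own illustration, with smaller numbers): for $\gcd(x,y)=6=2\cdot3$ and $\operatorname{lcm}(x,y)=5!=120$ we need $ab=\frac{120}{6}=20$ with $\gcd(a,b)=1$; assigning the prime powers $4$ and $5$ separately gives $x=6\cdot4=24$ and $y=6\cdot5=30$, and indeed $\gcd(24,30)=6$ and $\operatorname{lcm}(24,30)=120$.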
$n$ can be any integer which is no smaller than $197$.
Is every proper subset of the real numbers bounded by the set of real numbers? Trying to see if I understand the definition of bounded. Are these true: $(-1, 1)$ is bdd above by $\mathbb{R}$ (and below) $(-1, 1)$ is bdd above by $(-1, 1]\, $, and $1$ is the supremum of $(-1,1)$ $[-1, 1]$ is bdd above and below by itself $(-1, 1)$ is not bdd above by $[-1,1)$ $\mathbb{R}^+$ is not bdd above by $\mathbb{R}$ but it is bounded below, with an infimum of $0$. Any mistakes or other elementary cases I should consider?
I have up-voted the answer by Xander Henderson. But since the subject line of the question says "Is every proper subset of the real numbers bounded by the set of real numbers?" it seems like a good idea to add that the set $\mathbb Z$ of all integers is a proper subset of $\mathbb R$ that is not bounded either above or below within $\mathbb R.$ (And I wrote "within $\mathbb R,$" not "by $\mathbb R.$")
That is why the real number system is 'complete'. In the case of an open interval, say $(a,b)$, we can take deleted neighbourhoods of $a$ and $b$. Each member of the neighbourhood system lies in the set $\mathbb R$. But we cannot 'fix' an upper bound or lower bound. Sorry for giving a rather informal answer, but I hope it'll be of use.
Is it possible to eliminate a contradiction without recourse to the principle of explosion? I'd like to derive the following inference rule: $$ \frac{p\lor(q\land\neg q)}{p}\quad\text{[ContradictionElimination]} $$ I assumed that I could do this minimally somehow, however it turns out I need an alternative form of the principle of explosion. My derivation is:

Rule (ContradictionElimination)
Premise: P∨(Q∧⌐Q)
Conclusion: P
Proof:
  Suppose P
    Hence P
  P=>P
  Suppose Q∧⌐Q
    Then Q
    ⌐Q
    Hence P by PrincipleOfExplosionAlternativeForm
  Q∧⌐Q=>P
  P by DisjunctionElimination

My alternative form of the principle of explosion is, by the way: $$ \frac{p\quad\neg p}{q}\quad\text{[PrincipleOfExplosionAlternativeForm]} $$ This is easy enough to derive from the standard principle of explosion and modus ponens. Without a way to eliminate contradictions minimally, so to speak, all my minimal proofs of De Morgan's laws become intuitionistic. This seems wrong to me.
I'm going to try to answer this question although I'm not entirely sure the answer is right. Maybe it will encourage debate, though. I think that the answer is no, you cannot derive this rule without some form of the principle of explosion. My reasoning goes as follows... The premise is a disjunction $p\lor(q\land\neg q)$. I have chosen to arrive at the conclusion by proof by cases, disjunction elimination in other words, which means I have to deal with $q\land\neg q$ somehow. The only other way that I can see to make progress from the premise is to use the rule that distributes disjunction over conjunction. This gives $(p\lor q)\land(p\lor\neg q)$. Now I have two disjunctions to deal with. Taking the disjunction $p\lor q$, I can only proceed by proof by cases again. Assuming $p$, I'm done. Assuming $q$ on the other hand, I must make use of the other side of the conjunction, namely $p\lor\neg q$, and I must then proceed to $q\land(p\lor\neg q)$. Using the rule that conjunction distributes over disjunction I get, unfortunately, $(p\land q)\lor(q\land\neg q)$ and I'm no better off than where I started. My line of reasoning is based on there only really being one way that the derivation can proceed. However it seems exhaustive, so I think there's something in it. It also seems, from an intuitive standpoint, that you can't eliminate $q\land\neg q$ without something like the principle of explosion, although admittedly this intuition is tainted by my findings above. Perhaps some other interpretation would shed more light on this.
In Polish notation you might use the rule of inference CN$\alpha$K$\beta$N$\beta$ $\vdash$ $\alpha$ which I'll call No. I'll also use Kol: K$\alpha$$\beta$ $\vdash$ $\alpha$, Kor: K$\alpha$$\beta$ $\vdash$ $\beta$, Ki: $\alpha$, $\beta$ $\vdash$ K$\alpha$$\beta$, and Ao: A$\alpha$$\beta$, C$\alpha$$\gamma$, C$\beta$$\gamma$ $\vdash$ $\gamma$. The proof then can go:

1   ApKqNq       premise
2   | p          suppose
3   Cpp          Ci 2-2
4   | KqNq       suppose
5   || Np        suppose
6   || q         Kol 4
7   || Nq        Kor 4
8   || KqNq      Ki 6 7
9   | CNpKqNq    Ci 5-8
10  | p          No 9
11  CKqNqp       Ci 4-10
12  p            Ao 1 3 11
Metric over a Lie algebra $\mathfrak{u}(n)$ Let $\mathfrak{u}(n)$ be the Lie algebra of the Lie group $U(n)$. I can define a positive-definite inner product over $\mathfrak{u}(n)$ in this way: if $A,B \in \mathfrak{u}(n)$ I define $\langle A,B \rangle := \Re(\operatorname{Tr}(AB^*))$, where by $\Re$ I denote the real part and by $B^*$ the conjugate transpose of $B$. Why does this inner product over $\mathfrak{u}(n)$ define the unique left-invariant metric on the Lie group $U(n)$?
Let $A$ and $B$ be complex $n \times n$ matrices with respective $(j, k)$ entries $A_{jk}$ and $B_{jk}$, and note that $B^*$ (the conjugate transpose) has $(j, k)$ entry $\bar{B}_{kj}$. By definition, $$ \langle A, B\rangle = \Re\bigl(\text{Tr}(AB^*)\bigr) = \Re \sum_{j,k=1}^n A_{jk} \bar{B}_{jk},$$ which is precisely the Euclidean inner product of $A$ and $B$ if these matrices are identified with complex vectors in $\mathbf{C}^{n^2}$. The resulting pairing on $\mathfrak{u}(n)$ is the restriction of this inner product. Generally, if $G$ is a Lie group and $g \in G$, then the left multiplication map $\ell_g:G \to G$ is a diffeomorphism sending $e$ to $g$, so the push-forward $(\ell_g)_*:\mathfrak{g} \to T_gG$ is an isomorphism of vector spaces. An inner product on $\mathfrak{g}$ thereby determines an inner product on each tangent space $T_gG$, and since multiplication is smooth (as a function of $g$) these inner products constitute a Riemannian metric on $G$. (In case it matters, this left-invariant metric is only "unique" in the sense that it is completely determined by the choice of inner product on $\mathfrak{g}$.)
Because $\operatorname{Tr}(AB^*)$ is invariant under transformations of the orthogonal basis of the algebra $\mathfrak{u}(n)$. That is why it defines an invariant metric on the group $U(n)$, and it is left-invariant because the $B$ are tensors and must respect the order in $\langle A,B \rangle$.
Complex Number solutions for a Circle I have a circle of radius 5 with its center at the origin, represented as $X^2+Y^2=25$. I get that it has a solution for all values ranging from $-5$ to $+5$. My question is: what does it mean when the equation returns a complex number? For example, for $x=6$ I get $y = \pm i\sqrt{11}$. In doing this for all real numbers greater than $+5$ or less than $-5$, what is being returned/plotted, and what plane is this plot on? Is this another circle on the imaginary plane, or all values on the plane beyond the circle?
When you graph the solutions in "the plane", you are restricting yourself to look at solutions to the equation where both $x$ and $y$ are real. You could, for example, restrict further to only allow $x$ and $y$ to be rational numbers, and think about how those points fit in with all the real solutions. To think geometrically about non-real complex solutions, you will need more (real) dimensions! You could restrict yourself, as it sounds like you are doing in the question, to just solutions where $x$ is real and $y$ is allowed to be complex. Then you will need another dimension/direction for the imaginary part of $y$. You could graph this in a "$z$" direction, so that solutions where $x>5$ will not lie in the plane but above/below it. You will find that for $|x|>5$ the solutions will be points where $x^2 - z^2 = 25$ so that if you just look at the $(x,z)$ plane the solution set will look like a hyperbola. Probably the most interesting thing to look at is when you allow both $x$ and $y$ to be complex... but graphing this would require more dimensions.
All the points that lie inside or on the boundary of the circle $$ \left\{ \ (x, y) \in \mathbb{R}^2 \ \colon \ x^2+ y^2 = 5^2 \ \right\} $$ lie to the right of the line $$ \left\{ \ (x, y) \in \mathbb{R}^2 \ \colon \ x = -5 \ \right\} $$ and to the left of the line $$ \left\{ \ (x, y) \in \mathbb{R}^2 \ \colon \ x = 5 \ \right\}. $$ Moreover, the circle intersects the former line at the point $(-5, 0)$ and the latter one at the point $(5, 0)$.
How many integer numbers between 0 and 9999 are there that have exactly one digit 1 and exactly one digit 3? How should I think about this problem? How many integer numbers between 0 and 9999 are there that have exactly one digit 1 and exactly one digit 3? The only thing I know is that the total number of configurations is $10^4$, so if I want to count the numbers which have at least one 3, first I count the numbers which have no 3s, $9^4$, and then subtract; and the same for the other digit. But how can I count with more restrictions?
The position of the digit $1$ can be chosen in $4$ ways, the position of the digit $3$ can be chosen in $3$ ways. The remaining two digits should belong to the set $\{0,2,4,5,6,7,8,9\}$ which has $8$ elements. Hence the number of integers between 0 and 9999 that have exactly one digit 1 and exactly one digit 3 is $$4\cdot 3\cdot 8\cdot 8=768.$$
I get a different answer... while I understand your logic here. Here is mine: Suppose $A = \{1\}$, $B =\{3\}$, $C = \{0,2,4,5,6,7,8,9\}$ and $D = \{0,2,4,5,6,7,8,9\}$. In a 4-digit number, we can place $A-B-C-D$ in $4!$ ways. Now, we know that there is only 1 way to choose $A$, 1 way to choose $B$, but 8 ways to choose $C$ and 8 ways to choose $D$. So, by the product rule, we have: $$4! * 1 * 1 * 8 * 8 = 1536$$ I get $1536 = 2*768$, but I guess it is because of the way I think of it, the order of $C$ and $D$ matters $(=2!)$. As I understand this problem to be referring to k-permutations (the order of each digit matters), shouldn't the answer be $1536$ and not $768$?
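Since the two answers disagree by a factor of $2$, a brute-force count settles the matter; a minimal sketch (my own check, reading "between 0 and 9999" as the four-digit strings 0000–9999):

```java
public class DigitCount {
    public static void main(String[] args) {
        int count = 0;
        // Treat each integer 0..9999 as a 4-digit string with leading zeros.
        for (int n = 0; n <= 9999; n++) {
            String s = String.format("%04d", n);
            int ones = 0, threes = 0;
            for (char c : s.toCharArray()) {
                if (c == '1') ones++;
                if (c == '3') threes++;
            }
            if (ones == 1 && threes == 1) count++;
        }
        System.out.println(count); // prints 768
    }
}
```

It prints $768$: choosing ordered positions for the digits $1$ and $3$ already accounts for their order, while the $4!$ placement treats the two interchangeable positions $C$ and $D$ as distinct, which double-counts.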
The Bent Washer Problem -- divide a shape into 2 pieces of the same volume. Fans of the Ham sandwich theorem know that any set of points can be divided by a plane into two equal halves. Consider instead a 3-D shape that must be divided into 2 equal pieces by a single cut. A sufficiently bent spring washer or keyring cannot be divided into 2 pieces by a plane. But it's possible to make a simpler cut that works -- a partial plane cut. Is that the simplest shape that cannot be split into 2 pieces by a plane? Is there a simple shape that cannot be split into 2 equal pieces by a simple cut?
The Ham Sandwich Theorem says that any three given measurable subsets of $\mathbb{R}^3$ can be cut into two equal (with respect to measure) pieces by a single plane. In particular, we can choose two of our sets to be empty. So any measurable subset of $\mathbb{R}^3$ can be cut in half by a plane. EDIT: I originally misunderstood. You're interested in when the cut pieces are connected.
Find the asymptotes of the function: $f(x)={1+\ln|x|\over x(1-\ln|x|)}$ Find the asymptotes of the function: $f(x)={1+\ln|x|\over x(1-\ln|x|)}$. I have difficulties with this one: when I try to find the limits, I get indeterminate forms that look like (infinity over infinity) over infinity. Can anyone help me with this?
The important moments in your function's life include $x = 0$ and $\ln |x| = 1$ (so $x = \pm e$), and $x \rightarrow \pm \infty$. As we go to $\pm \infty$ we get $1 \pm \ln |x| \approx \ln |x|$ so we reduce the expression to approximately $\ln |x| / (-x \ln |x|)$ which divides to simply $-1/x$, so these asymptotes are the horizontal asymptote $y = 0$. At $x = 0$ the logarithms again overpower the nearby constants, creating something which goes like $-1/x$. But at $0$ this function has a vertical asymptote. At $x = \pm e$ you'll have vertical asymptotes as the numerator does not go to zero while the denominator does. It may be helpful to look at a graph of $x f(x)$ as that has the essential "weird" behavior spelled out for you: WolframAlpha graph. This function has a horizontal asymptote at $y = -1$ for $x \rightarrow \pm \infty$, and also has a value of $y=-1$ at $0$.
What does George E. Martin mean by "The belief that geometries can be classified by their symmetry groups is no longer tenable"? In the preface to George E. Martin's Transformation Geometry: An Introduction to Symmetry, he writes (emphasis mine) Transformation geometry is a relatively recent expression of the successful venture of bringing together geometry and algebra. The name describes an approach as much as the content. Our subject is Euclidean geometry. Essential to the study of the plane or any mathematical system is an understanding of the transformations on that system that preserve designated features of the system. ... The belief that geometries can be classified by their symmetry groups is no longer tenable. However, the correspondence for the classical geometries and their groups remains valid. Undergraduates should not be expected to grasp the idea of Klein's Erlanger program before encountering at least the projective and hyperbolic geometries. Therefore, although the basic spirit of the text is to begin to carry out Klein's program, little mention of the program is made within the text. What does he mean in the bolded statement? From the Erlangen program Wikipedia page, what I understand is that there are geometric objects that have "the same symmetries" (I don't know how else to put it), but are nevertheless distinct. Comments would be appreciated!
I suppose that he means that we now know that the symmetry group is not sufficient to capture the space. The idea of the Erlanger Program is that the group of symmetries determines what are the geometric figures and geometric properties within that space. If you have two subsets of the space that can be transformed into each other with a symmetry, then they are instances of the same type of geometric figure, and if a property is invariant under symmetry transformations, then that is a meaningful property for that given geometry. The Lens example given by Jason De Vito in the comments illustrates how the symmetry group doesn't characterize the space.
Solution to $7^{2x-2} \equiv 4 \mod13$ I need to solve the congruence $$ 7^{2x-2} \equiv 4 \mod13 $$ I think I need to use a primitive root to transform it into $2x-2 \equiv ??? \mod (\phi(13)=12)$ but I'm stumped on how to actually do it.
$$7^{2x-2}\equiv 4 \pmod{13}$$ $\iff$ $$49^{x}\equiv 4(49) \pmod{13}$$ $\iff$ $$10^{x}\equiv 4(-3) \pmod{13}$$ $\iff$ $$10^{x}\equiv 1 \pmod{13}$$ You can now check the powers of ${10} \pmod{13}$. Alternatively, for a slight shortcut: Integers mod ${13}$ form a group under multiplication of order $12$. So now you need to find the order of $10$, and you can do this by checking through the factors of $12$. Then $x\equiv0\pmod{\textrm{order}(10)}$.
$\mod 13$: $$7^2 \equiv 10 \equiv -3 \\ 7^4 \equiv (-3)^2 \equiv 9 \equiv -4\\ 7^8 \equiv (-4)^2 \equiv 16 \equiv 3\\ 7^{10} \equiv 7^2\cdot 7^8 \equiv (-3)\cdot 3\equiv -9 \equiv 4 $$ So $$ 2x-2 = 10 \\ x = 6 $$
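Both answers are easy to verify by machine; a quick sketch (the search range is my own choice) listing every $x$ in $1,\dots,12$ satisfying the congruence:

```java
import java.math.BigInteger;

public class CongruenceCheck {
    public static void main(String[] args) {
        BigInteger m = BigInteger.valueOf(13);
        // Find all x in 1..12 with 7^(2x-2) ≡ 4 (mod 13); since the exponent
        // only matters mod 12, this range covers every residue class of x.
        for (int x = 1; x <= 12; x++) {
            BigInteger pow = BigInteger.valueOf(7)
                    .modPow(BigInteger.valueOf(2L * x - 2), m);
            if (pow.intValue() == 4) System.out.println("x = " + x);
        }
    }
}
```

It prints $x=6$ and $x=12$, consistent with $x\equiv 0\pmod{6}$ from the order argument above (the order of $10$ mod $13$ is $6$).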
Diophantine equation $x^2 + y^2 = z^3$ I have found all solutions to the Diophantine equation $x^2 + y^2 = z^3$ when $z$ is odd. I am having some difficulty finding the solutions when $z$ is even. I am asking for a proof that provides the solutions where $z$ is even. I want the proof to be elementary and use only Number theory and perhaps Calculus or basic ideas about groups and rings.
Unfortunately, there isn't (apparently) one complete polynomial parameterization to $$x^2+y^2 = z^k\tag1$$ when $k>2$. For $k=2$, the complete solution is $$x,\,y,\,z = (a^2-b^2)s,\; (2ab)s,\; (a^2+b^2)s$$ where $s$ is a scaling factor. Using complex numbers $a+b i$, one can generalize the method. For $k=3$, it is $$x,\,y,\,z = (a^3 - 3a b^2)s^3,\; (3a^2 b - b^3)s^3,\; (a^2+b^2)s^2\tag2$$ but you can no longer find rational $a,b,s$ for certain solutions. For example, $9^2+46^2 = 13^3\quad$ Yes; $58^2+145^2=29^3\quad$ No. A related discussion can be found in this post while an alternative method is described here. For the case $k=3$, if $a^2+b^2=c^3$, then infinitely many more can be found as $$(a u^3 + 3 b u^2 v - 3 a u v^2 - b v^3)^2 + (b u^3 - 3 a u^2 v - 3 b u v^2 + a v^3)^2 = c^3(u^2+v^2)^3\tag3$$ which should provide some solutions not covered by $(2)$.
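Identity $(2)$ (with $s=1$) is easy to spot-check numerically; a minimal sketch (test range arbitrary):

```java
public class SumTwoSquaresCube {
    public static void main(String[] args) {
        // Spot-check x^2 + y^2 = z^3 for parameterization (2) with s = 1:
        // x = a^3 - 3ab^2, y = 3a^2 b - b^3, z = a^2 + b^2.
        for (long a = 1; a <= 5; a++) {
            for (long b = 1; b <= 5; b++) {
                long x = a * a * a - 3 * a * b * b;
                long y = 3 * a * a * b - b * b * b;
                long z = a * a + b * b;
                if (x * x + y * y != z * z * z)
                    System.out.println("fails at a=" + a + ", b=" + b);
            }
        }
        System.out.println("checked"); // no failures expected
    }
}
```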
Some related examples: http://www.artofproblemsolving.com/community/c3046h1124891_almost_pythagoras http://www.artofproblemsolving.com/community/c3046h1049554__ You can write the solution in this form: http://www.artofproblemsolving.com/community/c3046h1054060_cubes_with_squares http://www.artofproblemsolving.com/community/c3046h1048815___ http://www.artofproblemsolving.com/community/c3046h1047876____ But usually one uses the standard simple approach. In the equation $$X^2+Y^2=Z^3$$ set $$X=ab+cd$$ $$Y=cb-ad$$ and we obtain the factorization $$(a^2+c^2)(b^2+d^2)=Z\cdot Z^2$$ $$b^2+d^2=Z^2$$ $$Z=a^2+c^2$$ So $$d=a^2-c^2$$ $$b=2ac$$ Then the solution is written as $$X=3ca^2-c^3$$ $$Y=3ac^2-a^3$$ $$Z=a^2+c^2$$
Traveling salesman problem (TSP): what is the relation between the number of vertices and the length of the found route? I know that there are many algorithms (exact or approximate) which solve the traveling salesman problem. I would like to know the relationship between the number of vertices (i.e. the places to visit) and the length of the route found by these algorithms. Intuitively, the fewer the vertices, the shorter the route. But can anyone give me the mathematical relationship between the number of vertices and the length of the route found by at least one of the existing traveling salesman algorithms?
Pick the hexagons that correspond to locations you want to visit. If these hexagons constitute a connected area (there is hexagon-path between any two), then the shortest solution is of length $c_1 \cdot n$ and the solution found by a good approximate algorithm is $c_2 \cdot n$ where $c_1$ and $c_2$ are some constants (i.e. they depend on the size of the hexagon). If these hexagons are not connected, then the length of the best solution can be arbitrarily big: imagine three hexagons separated by $k$ spaces each, then the best path has length of $3\cdot (k+1) \cdot d_\text{hex}$, despite having only $3$ places to visit. I hope this helps $\ddot\smile$
Well, since you're required to visit each city, and to visit it only once, the number of edges in your route exactly equals the number of vertices (assuming that you return to your original location at the end). The total length of the road is the sum over all these edges multiplied by their weights. It totally depends on the weights and has little to do with the number of vertices, if I understood your question correctly.
Let $r \cdot u = s \cdot u$ hold in a vector space $V$. Show that if $r \ne s$, then $u = o$, the zero vector in $V$, and if $u \ne o$, then $r = s$. Let $r \cdot u = s \cdot u$ hold in a vector space $V$. Show that: If $r \ne s$, then $u = o$, the zero vector in $V$. If $u \ne o$, then $r = s$. EDIT: Attempt: I know that this question is a little bit easy because it is a basic concept. But even so, I got a little confused about my answer below. Let $r,s \in \Bbb F$, a field, and $u \in V$. Now, \begin{align*} r \cdot u &= s \cdot u \\ r \cdot u - s \cdot u &= o \\ (r-s) \cdot u &= o \\ \end{align*} What next? Assume that $V$ is over a field, say $F$. Then, if $r \ne s$, \begin{align*} (r-s) \cdot (r-s)^{-1} \cdot u &= o \cdot (r-s)^{-1} \\ u &= o, \end{align*} and if $u \ne o$, then we must have $r-s = 0$, i.e. $r=s$. Is this correct?
EDIT: You are correct so far, but you must explain why $(r - s) \mathbf u = \mathbf 0$ implies that $r - s = 0$ or $\mathbf u = \mathbf 0.$ Equivalently, we can show that if $r - s \neq 0,$ then $\mathbf u = \mathbf 0.$ Given that $r - s \neq 0$ and $r$ and $s$ are elements of a field, what can you say about $r - s?$ Is there a way to "cancel" $r - s$ from the left-hand side? Once you study bases, you may use the below argument. (It can't hurt to practice.) Consider a basis $\mathscr B$ of $V.$ Every vector of $V$ can be written uniquely as a finite $\mathbb F$-linear combination of basis vectors of $V.$ Particularly, there exist unique scalars $c_1, \dots, c_n$ and basis vectors $\mathbf v_1, \dots, \mathbf v_n$ such that $\mathbf u = c_1 \mathbf v_1 + \cdots + c_n \mathbf v_n.$ Given that $r \mathbf u = s \mathbf u,$ we have that $$(r c_1) \mathbf v_1 + \cdots + (r c_n) \mathbf v_n = (s c_1) \mathbf v_1 + \cdots + (s c_n) \mathbf v_n.$$ What can you say about the relationship between $r c_i$ and $s c_i$ for each integer $1 \leq i \leq n?$ From there, I believe that you can conclude the desired results.
I don't believe the question as posted is complete - unlike ordinary numbers, vector Dot Products do not have the cancellation property. From Wikipedia article for Dot Product If $a ⋅ b = a ⋅ c$ and $a ≠ 0 $, then we can write: $ a ⋅ ( b − c ) = 0$ by the Distributive Law; this just means that $a$ is perpendicular to $( b − c )$, which still allows $( b − c ) ≠ 0 $, and therefore allows $b ≠ c $. To put it another way, $r ⋅ u = s ⋅ u$ (with $r ≠ s$) can still be true if $u ⟂ (r - s)$ and $u ≠ 0$. EDIT: Assuming a downvote means someone doesn't believe me, I'll work through a simple example with three perpendicular, non-zero unit vectors. Let $r = (1,0,0)$, $s = (0,1,0)$, and $u = (0,0,1)$. $$r ⋅ u = s ⋅ u$$ $$\left\lVert r \right\rVert \times \left\lVert u \right\rVert \times \cos \theta_{ru} = \left\lVert s \right\rVert \times \left\lVert u \right\rVert \times \cos \theta_{su}$$ $$1 \times 1 \times \cos \theta_{ru} = 1 \times 1 \times \cos \theta_{su}$$ Obviously, being perpendicular unit vectors, $\theta_{ru} = \theta_{su} = 90^\circ$, so we get $$1 \times 1 \times \cos 90 = 1 \times 1 \times \cos 90$$ $$1 \times 1 \times 0 = 1 \times 1 \times 0$$ $$0=0$$
Why is $1/i$ equal to $-i$? When I entered the value $$\frac{1}{i}$$ in my calculator, I received the answer $-i$, whereas I was expecting the answer $i^{-1}$. Even the Google calculator shows the same answer (click here to check it out). Is there a fault in my calculator, or does $\frac{1}{i}$ really equal $-i$? If it does, then how?
$$\frac{1}{i}=\frac{i}{i^2}=\frac{i}{-1}=-i$$
$$\frac{1}{i}=\left|\frac{1}{i}\right|e^{\arg\left(\frac{1}{i}\right)i}=$$ $$1e^{\left(-\frac{1}{2}\pi\right) i}=e^{\left(-\frac{1}{2}\pi\right) i}=$$ $$1\left(\cos\left(-\frac{1}{2}\pi\right)+\sin\left(-\frac{1}{2}\pi\right)i\right)=\cos\left(-\frac{1}{2}\pi\right)+\sin\left(-\frac{1}{2}\pi\right)i=$$ $$0+(-1)i=0-1i=-i$$ So: $$\frac{1}{i}=-i$$ Why is $\left|\frac{1}{i}\right|=1$: $$\left|\frac{1}{i}\right|=\sqrt{\Re\left(\frac{1}{i}\right)^2+\Im\left(\frac{1}{i}\right)^2}=\sqrt{0^2+(-1)^2}=\sqrt{(-1)^2}=\sqrt{1}=1$$ Second way to show $\left|\frac{1}{i}\right|=1$: $$\left|\frac{1}{i}\right|=\frac{|1|}{|i|}=\frac{\sqrt{1^2}}{\sqrt{1^2}}=\frac{\sqrt{1}}{\sqrt{1}}=\sqrt{\frac{1}{1}}=\sqrt{1}=1$$
An explanation on variances I am having trouble understanding what the following represents.
Let $\mathbb X^{p\times 1}$ denote a column vector of random components $X_i,i=1(1)p.$ Then the expectation of $\mathbb X$ is defined as $$E(\mathbb X)=E\begin{pmatrix}X_1 \\ \vdots\\X_p\\\end{pmatrix}=\begin{pmatrix}EX_1 \\ \vdots\\EX_p\\\end{pmatrix}=\begin{pmatrix}\mu_1 \\ \vdots\\\mu_p\\\end{pmatrix}=\mathbb{\mu} $$ Define $\text{Cov}(X_i,X_j)=E[(X_i-EX_i)(X_j-EX_j)]=\sigma_{ij}$. We extend the variance notation to the p-dimensional random vector $\mathbb X$ by the following matrix: $$E[(\mathbb X-E \mathbb X)(\mathbb X-E \mathbb X)^T] \\=E[(\mathbb X- \mathbb \mu)(\mathbb X-\mathbb \mu)^T] \\=E\left[\begin{pmatrix}X_1-\mu_1 \\ \vdots\\X_p-\mu_p\\\end{pmatrix} \begin{pmatrix}X_1-\mu_1& \cdots &X_p-\mu_p\\\end{pmatrix}\right]\\=\begin{pmatrix}\sigma_{11} &\sigma_{12} &\cdots &\sigma_{1p}\\ \vdots & \vdots &\ddots &\vdots \\\sigma_{p1} &\sigma_{p2}&\cdots& \sigma_{pp}\\\end{pmatrix}=\Sigma$$ The covariance between $X$ and $Y$ is defined in a similar way. That's all from me.
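As a small numerical illustration of these definitions (the example distribution is my own, not from the question), one can estimate $\Sigma$ for a two-dimensional random vector by simulation, via the equivalent form $\Sigma = E[\mathbb X \mathbb X^T] - \mu\mu^T$:

```java
import java.util.Random;

public class CovarianceDemo {
    public static void main(String[] args) {
        Random rng = new Random(1);
        int n = 1_000_000;
        // X = (X1, X2) with X1 = G1 and X2 = G1 + G2 for independent standard
        // normals G1, G2, so the true covariance matrix is [[1, 1], [1, 2]].
        double s1 = 0, s2 = 0, s11 = 0, s12 = 0, s22 = 0;
        for (int k = 0; k < n; k++) {
            double g1 = rng.nextGaussian(), g2 = rng.nextGaussian();
            double x1 = g1, x2 = g1 + g2;
            s1 += x1; s2 += x2;
            s11 += x1 * x1; s12 += x1 * x2; s22 += x2 * x2;
        }
        double m1 = s1 / n, m2 = s2 / n;
        // Sigma_ij = E[X_i X_j] - mu_i mu_j, estimated from the sample sums.
        System.out.printf("%.3f %.3f%n", s11 / n - m1 * m1, s12 / n - m1 * m2);
        System.out.printf("%.3f %.3f%n", s12 / n - m1 * m2, s22 / n - m2 * m2);
    }
}
```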
Special case of double integral where the upper bound of the inner integral is the integration variable of the outer integral This is regarding a special type of double integral looking like the following: $$\int_0^T \exp\left(\int_0^\color{red}{t} f(s)\, ds\right) \,\color{red}{dt}. $$ Time being the integration variable here, one can think of $T$ as the entire duration, which sets the boundary of the outer integral, while the inner integral integrates from $0$ to each time point $t$, and we take the exponential of it. Here, the upper bound of the inner integral is the integration variable of the outer integral. I suspect the expression can be simplified to: $$ \exp\left(\int_0^T f(t)\, dt\right). $$ but I'm not sure if this is correct. Can anyone enlighten me with the missing links in between, and correct me if my suspicion is wrong.
You can look at it as $$\int\limits_{0}^{T}e^{\int\limits_{0}^{t}f(s) \mathrm{d}s} \mathrm{d}t$$ For $f(s) = -s$, for example, you obtain the well known $$\int\limits_{0}^{T}e^{-\frac{t^2}{2}} \mathrm{d}t$$
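Regarding the simplification suspected in the question, a quick numerical check shows it does not hold in general; a sketch assuming $f(s)=-s$ (my choice) and a midpoint-rule quadrature:

```java
public class DoubleIntegralCheck {
    // Inner integral for f(s) = -s has the closed form -t^2/2.
    static double inner(double t) {
        return -t * t / 2;
    }

    public static void main(String[] args) {
        double T = 1.0;
        int n = 100000;
        double h = T / n, lhs = 0;
        // Midpoint rule for the outer integral of exp(inner(t)).
        for (int k = 0; k < n; k++) {
            double t = (k + 0.5) * h;
            lhs += Math.exp(inner(t)) * h;
        }
        double rhs = Math.exp(inner(T)); // the suspected simplification
        System.out.println(lhs + " vs " + rhs); // ~0.8556 vs ~0.6065
    }
}
```

The two printed values differ (roughly $0.856$ vs $0.607$), so the outer integral cannot simply be dropped.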
Law of Probability question Question: 40% of Vancouverites are avid skiers, whereas only 20% of people living in Canada are avid skiers. Vancouverites make up about 2% of the population of Canada. You know that Johann lives in Canada, but you're not sure where. Given that Johann is an avid skier, what is the probability that he lives in Vancouver? I need help with this question, and your help will be greatly appreciated if you can show me how to set it up and calculate the probability. So far I set it up as: A → Vancouverites who are avid skiers (4/10); B → people living in Canada who are avid skiers (2/10); C → Vancouverites as a share of Canada's population (0.2/10)
$(0.4)(0.02) = 0.008$ $\frac{0.008}{0.2} = 0.04$ Because 0.8% of Canadians are skiers from Vancouver. If you're a skier in Canada there's a 4% chance that you're a Vancouverite, as 20% of Canadians are skiers.
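The answers to this question differ slightly in the denominator they use, depending on whether $0.2$ is read as the overall skier rate $P(S)$ or as the rate among non-Vancouverites $P(S\mid \lnot V)$; a direct computation of both readings (comparison mine):

```java
public class SkierBayes {
    public static void main(String[] args) {
        double pV = 0.02;        // P(Vancouverite)
        double pSgivenV = 0.4;   // P(skier | Vancouverite)
        double pS = 0.2;         // P(skier) for Canada overall

        // Using the Canada-wide skier rate directly as the denominator:
        System.out.println(pSgivenV * pV / pS);              // 0.04
        // Using P(skier | not Vancouver) ~ 0.2 in the total-probability form:
        double pSgivenNotV = 0.2;
        System.out.println(pSgivenV * pV
                / (pSgivenV * pV + pSgivenNotV * (1 - pV))); // ~0.0392
    }
}
```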
I could be wrong, but $\Pr(S) = \Pr(S\mid V)\Pr(V) + \Pr(S\mid C)\Pr(\lnot V)$, making $$\Pr(V\mid S)= \frac{\Pr(V)\Pr(S\mid V)}{\Pr(S\mid V)\Pr(V) + \Pr(S\mid C)\Pr(\lnot V)} = \frac{(0.02)(0.4)}{(0.02)(0.4)+(0.2)(0.98)} \approx 0.039$$
Coin chosen is two headed coin in this probability question I have a probability question that reads: Question: A box has three coins. One has two heads, another two tails and the last is a fair coin. A coin is chosen at random, and comes up heads. What is the probability that the coin chosen is the two-headed coin? My attempt: $$P(\text{two-headed coin}\mid \text{head}) = \frac{P(\text{two-headed coin} \cap \text{head})}{P(\text{head})} = \frac{1/3}{2/3} = \frac{1}{2}$$ Not sure whether this is correct?
For such a small number of options it's easy to count them. The possible outcomes are:
heads or heads, using the double-head coin
tails or tails, using the double-tail coin
heads or tails, using the fair coin
All these outcomes are equally likely. How many of these are heads, and of those, how many use the double-headed coin? $$Answer = \frac{2}{3}$$
1/2 is the only answer!! 50/50 chance, that's it, period!!! The double tails is out for sure, so discard it. What is left besides the normal coin and the 2-headed coin? Nothing! So it's between two coins only. You can't rip the heads or tails from the coins and shake them up in the box. That would give you 3 heads and only 1 tail. That would be a 3 out of 4 chance! BUT THAT IS OUT! These are coins with no magical way of knowing if the coin chosen has a tail on the hidden side. So it's a 50/50 or 1/2 chance. Both the normal and 2-tail coins are out because it HAS to be a 2-headed coin chosen! So if you randomly draw out a coin showing heads, you have a 50/50 or 1/2 chance. THAT'S IT, 1/2. No formula needed. I'm not great at math myself. This is common comprehension! Thanks, Del
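For readers weighing the competing answers, a Monte Carlo estimate of $P(\text{two-headed}\mid\text{heads showing})$ is quick to run; a sketch (the encoding is mine):

```java
import java.util.Random;

public class CoinSim {
    public static void main(String[] args) {
        // Coins as {side0, side1}: true = heads.
        boolean[][] coins = {{true, true}, {false, false}, {true, false}};
        Random rng = new Random();
        long headsShown = 0, twoHeaded = 0;
        for (long trial = 0; trial < 10_000_000L; trial++) {
            int c = rng.nextInt(3);        // pick a coin uniformly
            int side = rng.nextInt(2);     // pick a visible side uniformly
            if (coins[c][side]) {          // condition on seeing heads
                headsShown++;
                if (c == 0) twoHeaded++;   // coin 0 is the two-headed one
            }
        }
        System.out.println((double) twoHeaded / headsShown); // ~0.6667
    }
}
```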
Proving equation is true Here is a question: Show that if $x>0$, $y>0$ then: $$\frac{1}{x}+\frac{1}{y}\geq\frac{4}{{x+y}}$$ How would you solve/start this?
$\frac{x}{y}+\frac{y}{x} \geq 2$
Hint: One way is simply to show that $$\frac1x+\frac1y-\frac{4}{x+y}\ge 0.$$ The rest is just simple algebra.
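Filling in the "simple algebra" mentioned above (my own completion of the hints): over a common denominator, $$\frac1x+\frac1y-\frac{4}{x+y}=\frac{(x+y)^2-4xy}{xy(x+y)}=\frac{(x-y)^2}{xy(x+y)}\ge 0$$ for $x,y>0$. Equivalently, multiplying the target inequality through by $x+y>0$ reduces it to $2+\frac{x}{y}+\frac{y}{x}\ge 4$, which is exactly the first hint.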
Mathematics of apportionment of representation in a legislative body It seems to be eclipsed by coronavirus, but today is the U.S. Census day for this coming decade. If you are American, where you are living today, where your children are living today, is where they are enumerated in the Census. This will determine, first hand, how many U.S. Representatives your state will be apportioned. It will also affect how redistricting will occur in your state and, perhaps, in your city if it is divided politically into wards or districts. Okay, this is about once the census populations of each state are finalized, about distributing (or "apportioning") a fixed number of representatives among the states. I know about the Huntington-Hill method and this will eventually be about that. But I want to get more fundamental about the problem. This is what is given: $N$: The number of U.S. states. Currently 50. $H$: The number of representatives in the House of Representatives. Currently 435. $P_n$: The census population of state $n$. All of these parameters are positive integers $\in \mathbb{N}$. What we need to determine: $R_n$: The number of representatives apportioned to state $n$. Also all of these $R_n \in \mathbb{N}$. We know that the total population of the 50 states (this leaves out the District of Columbia, Puerto Rico, Guam, American Samoa, and other territories whose residents are American, but they get no voting representation in Congress) is the sum of the populations of all states: $$ P_\mathrm{total} = \sum\limits_{n=1}^{N} P_n $$ And we know also that the total number of representatives is the sum of apportioned representatives of all states: $$ H = \sum\limits_{n=1}^{N} R_n $$ Now, the simplest meaning of the concept of apportionment is that the number of representatives a state has is directly proportional to the population of that state. That would say that there exists a constant (w.r.t. all of the states) of proportionality, $\alpha$, such that: $$ R_n = \alpha \cdot P_n $$ Now, if there were no problems regarding fractional representatives, we know that: $$\begin{align} \alpha \cdot P_\mathrm{total} &= \sum\limits_{n=1}^{N} \alpha \cdot P_n \\ \\ &= \sum\limits_{n=1}^{N} R_n \\ \\ &= H \\ \end{align} $$ So we can solve that for $\alpha$ and have an idea what it might be, but we cannot have a fraction of a representative, $R_n$ must be a positive integer, so then quantization or rounding to an adjacent integer is necessary. We know that rounding down or even rounding to nearest may cause the rounded value of $R_n$ for the least populous states to possibly be zero, and the U.S. Constitution does not allow for that.
It seems to me that the only consistent simple rule of rounding would be to always round up: $$ R_n = \big\lceil \alpha \, P_n \big\rceil $$ where $\lceil x \rceil$ is the ceiling function which is the smallest integer $m$ such that $m-1 < x \le m$, or $$ \lceil x \rceil \in \mathbb{Z} \\ \\ x \le \lceil x \rceil < x+1 $$ So, it seems to me that the simplest consistent rule to guarantee that each state gets a whole number of representatives and at least one representative is to find the constant of proportionality, $\alpha$ such that: $$\begin{align} H &= \sum\limits_{n=1}^{N} R_n \\ \\ &= \sum\limits_{n=1}^{N} \big\lceil \alpha \, P_n \big\rceil \\ \end{align} $$ Now couldn't we simply define an increasing function $h(\alpha)$ as $$ h(\alpha) \triangleq \sum\limits_{n=1}^{N} \big\lceil \alpha \, P_n \big\rceil $$ and, starting at $\alpha=0$ (and we know that $h(0)=0$), then increase that value $\alpha$ until $h(\alpha)=H$? Then we know the number of representatives for all of the states $R_n = \big\lceil \alpha \, P_n \big\rceil$ for all $n$. Is this consistent with the Huntington-Hill method? If needed, I will explain the Huntington-Hill method here, but I need to figure out a good set of symbols that is consistent with the symbols I use above. Give me a couple hours to do that.
Your method of apportionment is the one that John Quincy Adams proposed in 1832. It has never been used to apportion seats in the U.S. House of Representatives. Most methods of apportionment in widespread use can be described in one of two equivalent ways. One way is the one you use. A common divisor is defined for all states (or parties, when apportioning seats after an election), and each state's population (or each party's vote tally, in the case of an election) is divided by this divisor and then rounded in some prescribed manner. The divisor is selected such that the desired total of seats results. The three obvious choices of rounding down, up or to the nearest integer are typically referred to by different names in Europe and the United States: $$\begin{array}{c|c|c|c} \text{rounding}&\text{European name}&\text{United States name}&\text{divisor offset}\\\hline \text{up}&\text{Adams}&\text{Adams}&0\\ \text{nearest}&\text{Sainte-Laguë}&\text{Webster}&\frac12\\ \text{down}&\text{D'Hondt}&\text{Jefferson}&1\\ \end{array}$$ Equivalently, the seats can be apportioned one by one, with each state or party being assigned a divisor of its own that is determined by the seats already allocated to it. An offset $\Delta$ is added to this seat count, that is, the quotients $$ \frac{\text{population or \#votes}}{\text{\#seats}+\Delta} $$ are calculated, and the state or party with the largest current quotient gets the next seat. The divisor offsets corresponding to the three methods above are shown in the table. The equivalence of the two ways of describing these methods can be shown by noting that in the first approach, when the global divisor is adjusted to achieve the desired total, a state gets an additional seat whenever the divisor crosses its rounding threshold, and this is precisely when its population is $\text{\#seats}+\Delta$ times the global divisor. The Huntington–Hill method currently in use in the U.S. House of Representatives is easier to describe in the second way. Here the divisor is the geometric mean of $\text{\#seats}$ and $\text{\#seats}+1$, that is, of the values that correspond to rounding up and down, respectively. Thus, it is somewhat similar to the Sainte-Laguë / Webster method, which uses the arithmetic mean $\text{\#seats}+\frac12$ instead of the geometric mean. For large seat counts, the two are very similar, and even when a single seat has been apportioned, the geometric mean of $1$ and $2$ is $\sqrt2\approx1.414$, not all that different from $\frac32=1.5$; but when zero seats have been apportioned, the divisor is $0$ rather than $\frac12$, thus ensuring that all states get at least one seat. So in a sense Huntington–Hill is Webster with a smooth transition to the guaranteed minimum seat provided by Adams. Here's Java code that implements all four methods of apportionment.
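The Java listing referenced above did not survive extraction; what follows is a minimal reconstruction of the one-seat-at-a-time formulation just described (my sketch, not the author's original code; the populations in `main` are made up):

```java
import java.util.Arrays;

public class Apportion {
    // Divisor for a state currently holding `seats` seats.
    // method: 0 = Adams, 1 = Webster, 2 = Jefferson, 3 = Huntington-Hill.
    static double divisor(int seats, int method) {
        switch (method) {
            case 0:  return seats;                                    // round up
            case 1:  return seats + 0.5;                              // round to nearest
            case 2:  return seats + 1.0;                              // round down
            default: return Math.sqrt((double) seats * (seats + 1));  // geometric mean
        }
    }

    // Hand out `house` seats one at a time to the largest current quotient.
    static int[] apportion(long[] pop, int house, int method) {
        int[] seats = new int[pop.length];
        for (int s = 0; s < house; s++) {
            int best = -1;
            double bestQ = -1;
            for (int i = 0; i < pop.length; i++) {
                double d = divisor(seats[i], method);
                // Divisor 0 (Adams, Huntington-Hill at 0 seats) means an
                // infinite quotient: such a state outranks all others.
                double q = (d == 0) ? Double.POSITIVE_INFINITY : pop[i] / d;
                if (q > bestQ) { bestQ = q; best = i; }
            }
            seats[best]++;
        }
        return seats;
    }

    public static void main(String[] args) {
        long[] pop = {21878, 9713, 4167, 3252, 1065}; // made-up populations
        for (int m = 0; m < 4; m++)
            System.out.println(m + ": " + Arrays.toString(apportion(pop, 40, m)));
    }
}
```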
Applying them to the $2010$ United States census data yields the following differences relative to the Huntington–Hill method (which I'm using as a reference since it's the one actually being used): $$\begin{array}{l|r|c|c|c} \text{state}&\text{rank}&\text{Adams}&\text{Webster}&\text{Jefferson}\\\hline \text{California}&1&-3&&+2\\ \text{Texas}&2&-2&&+1\\ \text{New York}&3&-1&&+1\\ \text{Florida}&4&-1&&+1\\ \text{Illinois}&5&&&+1\\ \text{Pennsylvania}&6&-1&&\\ \text{Ohio}&7&&&+1\\ \text{Georgia}&9&-1&&\\ \text{North Carolina}&10&&+1&+1\\ \text{New Jersey}&11&&&+1\\ \text{Missouri}&18&+1&&\\ \text{Minnesota}&21&&&-1\\ \text{South Carolina}&24&&&-1\\ \text{Louisiana}&25&+1&&\\ \text{Oregon}&27&+1&&\\ \text{Oklahoma}&28&+1&&\\ \text{Iowa}&30&+1&&\\ \text{West Virginia}&37&&&-1\\ \text{Nebraska}&38&&&-1\\ \text{Idaho}&39&+1&&\\ \text{Maine}&41&&&-1\\ \text{New Hampshire}&42&&&-1\\ \text{Rhode Island}&43&&-1&-1\\ \text{Montana}&44&+1&&\\ \text{Delaware}&45&+1&&\\ \text{South Dakota}&46&+1&&\\ \text{Vermont}&49&&&-1\\ \text{Wyoming}&50&&&-1\\ \end{array}$$ The remaining states get the same number of seats under all four methods. The similarity between Huntington–Hill and Webster is evident, with the expected very slight advantage for smaller states under Huntington–Hill. The two "extreme" methods that always round up or down instead of taking one of the two means show a clear relative advantage for smaller and larger states, respectively. There would be quite significant changes if the Adams–Bristow-Johnson method were used instead of Huntington–Hill; so the answer to your question is "no". An interesting statistic that illustrates a sense in which the two methods that take a mean are more fair than the two others is the variance in the weight of voters in the House. Ideally, each voter should have exactly the same weight there, so this variance is a measure of the unfairness of the apportionment. The mean weight is the same for all methods; it's just the number of representatives divided by the total population. In 2010 this was about $1.41\cdot10^{-6}$. The standard deviations under Huntington–Hill and Webster are quite similar; they are $6.54\cdot10^{-8}$ and $6.49\cdot10^{-8}$, respectively. The standard deviations under Adams and Jefferson are about twice as large, $1.14\cdot10^{-7}$ and $1.32\cdot10^{-7}$, respectively. So under these methods, the weight of voters in the House varies by about $±10\%$; whereas under the methods that use a mean, the variation is only about $±5\%$.
Perhaps consult one of the hundred or so books written on the mathematics of the topic. For example El-Helaly, Sherif, The mathematics of voting and apportionment. An introduction. Compact Textbooks in Mathematics. Birkhäuser/Springer, 2019
Are those statements tautologies? a. $$\forall x\forall y \exists z\, (x\neq y)\rightarrow (x\neq z)$$ b. $$\neg\exists x\forall y \forall z\, (x=y)\rightarrow (x=z)$$ To refute a. we need to find a case of $(x\neq z)\land (x=y)$; and for some $x,y$, $(x=y)$ is false, therefore $(x\neq y)\land (x=z)$ is false, and it is always true. b. is $\forall x\exists y \exists z\, (x\neq z)\lor (x=y)$ and it is always true, because for all $x$ there is a $y$ such that $x=y$.
a) This seems to be a tautology. Just take $z=y$. b) I think you have made a mistake in negating the implication.
One has to be careful with the possible domains for formulae like this. For (a): If the domain has two or more distinct elements x, y, then as already posted one can take z=y. If the domain has only one element, then (a) is still true, but now it is true vacuously because the condition (x $\neq$ y) can never be satisfied. If the domain is empty, however, then $\exists$z is always false regardless of the expression that follows. Conclusion: NOT A TAUTOLOGY. [I leave (b) for the student]
How to reconstruct a quadrilateral ABCD using only compass and straight edge? Reconstruct a quadrilateral ABCD given the lengths of its sides and the length of the midline between the first and third sides (namely all the segments drawn in the given figure) using compass and straight edge. The method is parallel translation but I don't know how to do it.
This is really an interesting problem, which appears to be trivial. However, the only solution I can think of is not that trivial: Let $M, N$ be the midpoints of $AC, BD$ respectively. Then $EMFN$ is a parallelogram, with $EM=BC/2, MF= AD/2$. So $EMFN$ is constructible from the given data. Now let $M_1$ be the reflection of $E$ about $M$ and $G$ the common midpoint of $MN$ and $EF$. It is trivial that $M_1$ is constructible, and that $\vec{EM_1} = \vec{BC}$. Thus $EBCM_1$ is a parallelogram and so $CM_1=EB = AB/2$. Therefore, we can construct $C$ (note that there are two possible solutions here). Once we have $C$: $D$ is the reflection of $C$ about $F$; $B$ is the reflection of $D$ about $N$; $A$ is the reflection of $B$ about $E$. And we are done.
During the construction: either BC is fixed by compass and then AD is automatically determined, or AD is fixed and then BC is automatically determined. However, BC and AD cannot both be fixed together. No construction is possible. To visualize, consider EBCF as a 4-bar mechanism of given link lengths made in steel, with EF as the fixed link; with BC also fixed, link AD would break unless made of thin elastic rubber.
Improbable vs Impossible? I was wondering how mathematics in general, or any of its subfields, e.g. statistics or probability, defines the words improbable and impossible. I get their English meaning: that something is impossible means it is never going to happen, and improbable means something is unlikely to happen. Could someone please provide a mathematical description of these two words as used (if they are used) in some field of mathematics? Maybe in terms of the size of the probabilities. Thanks.
improbable = something that can happen, but its probability is comparatively low, but not zero. The distinction between probable and improbable is not, as far as I know, exactly defined. If you roll one hundred dice, it is very improbable that all of them will land as sixes.
almost impossible = something that can happen, but its probability is exactly zero. If you roll a frictionless die, it is almost impossible that it will end up on its edge. If you keep rolling a die until a six lands, it is almost impossible that you will never stop rolling.
impossible = something that cannot happen. If you roll one hundred (standard) dice, it is impossible that at least one of them will land as a seven.
probable = if you do get one hundred sixes in a row, it is probable that the casino will kick you out.
almost definite = if you attempt to balance a frictionless die on its edge, it will almost definitely topple to one of its faces.
definite = I think you will definitely try to balance a die on its edge. If you do succeed - remember it's due to friction.
Mathematically an event E can be called impossible if and only if $Pr(E)=0$. I don't believe we can define improbable statically (i.e. in a way that doesn't change based on context). One attempt, however, might be: given a threshold probability of $p_T$, any event $X$ is considered improbable if $Pr(X) \lt p_T$.
Computing an Integral Using Complex Analysis Does anyone know how to compute $$ \int^{2\pi}_{0} \frac{1}{3 + \cos\theta} \, d\theta $$ using tools from complex analysis? I'm not sure how to get started, or if complex analysis is even needed to do this.
Recall that $\cos\theta = (e^{i\theta}+e^{-i\theta})/2$, and change variables to $z=e^{i\theta}$. Then you are integrating around a circle, and can look for poles.
Let $\cos \theta = \dfrac{z^2+1}{2z}$, $\,d\theta=\dfrac{dz}{iz}$.
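For completeness, carrying either hint through (a standard computation, added here as a worked example): with $z=e^{i\theta}$, $$\int_0^{2\pi}\frac{d\theta}{3+\cos\theta}=\oint_{|z|=1}\frac{1}{3+\frac{1}{2}(z+z^{-1})}\,\frac{dz}{iz}=\oint_{|z|=1}\frac{2\,dz}{i\,(z^2+6z+1)}.$$ The poles are at $z=-3\pm2\sqrt2$; only $z_0=-3+2\sqrt2$ lies inside the unit circle, with residue $\dfrac{2}{i(2z_0+6)}=\dfrac{1}{2\sqrt2\,i}$, so the integral equals $2\pi i\cdot\dfrac{1}{2\sqrt2\,i}=\dfrac{\pi}{\sqrt2}$.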
Minimum difference of angles between points on square lattice I have an integer grid of size $N \times N$. If I calculate the angles between all point triples, is it possible to analytically find the minimal non-zero difference between those angles?
By brute force, for $N = 4, 5, 6$ the smallest non-zero angle is formed between the segments from $(N-1,N-2)$ to $(0,0)$ and from $(N-1,N-2)$ to $(1,1)$, if the coordinates have range $0$ to $N - 1$ inclusive. The first segment makes an angle $\theta_1 = \tan^{-1}\left(\frac{N-2}{N-1}\right)$ with the edge of the lattice, and the second segment makes an angle $\theta_2 = \tan^{-1}\left(\frac{N-3}{N-2}\right)$ with that edge. The magnitude of the resulting angle between these segments is $$| \,\theta_1 - \theta_2| = \tan^{-1}\left(\frac{N-2}{N-1}\right) - \tan^{-1}\left(\frac{N-3}{N-2}\right). $$ (Since $0 < \theta_2 < \theta_1 < \frac{\pi}{4}$, the right-hand side of this equation is positive.) While this exercise suggests that this might be the formula for larger $N$ as well, it is hardly a proof. I too am curious about the general result now.
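A compact version of such a brute force (my re-implementation, not the answerer's code), comparing the smallest non-zero angle found over all ordered triples against the closed form above:

```java
public class LatticeAngles {
    public static void main(String[] args) {
        for (int N = 4; N <= 6; N++) {
            double min = Double.MAX_VALUE;
            int n = N * N;
            // Points (x, y) with 0 <= x, y <= N-1; index p encodes x = p / N, y = p % N.
            for (int a = 0; a < n; a++)
                for (int b = 0; b < n; b++)
                    for (int c = 0; c < n; c++) {
                        if (a == b || b == c || a == c) continue;
                        // Angle at vertex a between the rays a->b and a->c.
                        double ang = Math.abs(
                              Math.atan2(b / N - a / N, b % N - a % N)
                            - Math.atan2(c / N - a / N, c % N - a % N));
                        if (ang > Math.PI) ang = 2 * Math.PI - ang;
                        if (ang > 1e-12 && ang < min) min = ang;
                    }
            double formula = Math.atan2(N - 2, N - 1) - Math.atan2(N - 3, N - 2);
            System.out.println(N + ": " + min + " vs " + formula);
        }
    }
}
```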
Isn't the minimum angle just the one whose arms run along the diagonal from (1,1) to (N,N) then back down to (1,2)? This gives the minimum angle as $\pi/4 - \arctan((N-1)/N)$.
Find a group $G$ and elements $x,y,z ∈ G$ so that $|x|= 5$, $|y| = |z| = 7$, and $|xy| = 35$ but $|xz| ≠ 35$. I am doing an introductory group theory course, however this is one of the more advanced questions I have faced. This is my first question on here, so apologies if it isn't asked as well as it should be! Question: Find a group $G$ and elements $x,y,z ∈ G$ so that $|x|= 5$, $|y| = |z| = 7$, and $|xy| = 35$ but $|xz| ≠ 35$. My attempt: If we take $G$ as finite, $|G|$ must be divisible by both $5$ and $7$, by Lagrange's Theorem. This would mean the order of $G$ must be some multiple of $\operatorname{lcm}(5,7)$, which is $35$. In this case I can't see how $|xz|$ could be anything other than $35$, as I think it can't be of order $1$, $5$, or $7$, the other divisors of $35$. By this I feel I should be looking for an infinite group, however I am very unfamiliar with these and haven't been able to come up with any examples. If anybody could tell me whether my above reasoning is correct, and point me in the right direction, I would really appreciate it!
Orders of products of group elements can be quite unintuitive. For example, a lot of simple groups can be generated by two elements with order $2$ and $3$. If you are familiar with cycle notation in $S_n$, consider $x=(1,2,3,4,5)$, $y=(6,7,8,9,10,11,12)$ and $z=(1,2,3,4,5,6,7)$ (I had to add commas because you get numbers with two digits). Then $xz=(1,3,5,6,7,2,4)$, which has still order $7$. The fact that $7$ divides $35$ is... a coincidence. You can do "worse" things! For example if you take $z=(5,6,7,8,9,10,11)$ you get $xz=(1,2,3,4,5,6,7,8,9,10,11)$ which has order $11$.
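A quick machine check of these orders (my own sketch; the cycles are relabeled to $0$-based indices, and composition applies the right factor first, matching the computation of $xz$ above):

```java
public class CycleOrders {
    // Compose permutations of {0, ..., n-1}: (pq)[i] = p[q[i]], i.e. q acts first.
    static int[] compose(int[] p, int[] q) {
        int[] r = new int[p.length];
        for (int i = 0; i < p.length; i++) r[i] = p[q[i]];
        return r;
    }

    static int order(int[] p) {
        int[] id = new int[p.length], cur = p.clone();
        for (int i = 0; i < p.length; i++) id[i] = i;
        int k = 1;
        while (!java.util.Arrays.equals(cur, id)) { cur = compose(p, cur); k++; }
        return k;
    }

    static int[] cycle(int n, int... c) {  // the cycle (c[0], c[1], ...) in S_n
        int[] p = new int[n];
        for (int i = 0; i < n; i++) p[i] = i;
        for (int i = 0; i < c.length; i++) p[c[i]] = c[(i + 1) % c.length];
        return p;
    }

    public static void main(String[] args) {
        int n = 12; // S_12, with points 1..12 stored as 0..11
        int[] x = cycle(n, 0, 1, 2, 3, 4);
        int[] y = cycle(n, 5, 6, 7, 8, 9, 10, 11);
        int[] z = cycle(n, 0, 1, 2, 3, 4, 5, 6);
        System.out.println(order(x) + " " + order(y) + " " + order(z)); // 5 7 7
        System.out.println(order(compose(x, y))); // 35 (disjoint cycles)
        System.out.println(order(compose(x, z))); // 7
    }
}
```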
Hint: If $x$ and $y$ commute and $\gcd(|x|, |y|)=1$, then $xy$ has order $|x||y|$.
Number of roots of a quadratic equation modulo 325 Any help solving this question? a) Find ONE solution $\overline x\in\Bbb Z/325\Bbb Z$ such that $x^2\equiv-1\pmod{325}$. (Hint: CRT and lifting.) b) How many solutions $\overline x$ to the above equation are there, and why?
$325=5^2\cdot 13$ so let's solve $x^2\equiv -1\pmod{25}$, $x^2\equiv -1\pmod{13}$. Case $1$: $\mod{25}$ If $x^2\equiv -1\pmod{25}$, then $x^2\equiv -1\pmod 5$ so $x\equiv\pm2\pmod 5$ $$(5k+2)^2=25k^2+20k+4\equiv 20k+4\equiv-1\pmod{25}\implies 4k\equiv -1\pmod 5\\\implies k\equiv 1\pmod 5$$ $$(5k-2)^2=25k^2-20k+4\equiv -20k+4\equiv-1\pmod{25}\implies 4k\equiv 1\pmod 5\\\implies k\equiv 4\pmod 5$$ So we have $x^2\equiv-1\pmod{25}\implies x\equiv\pm7\pmod{25}$ Case $2$: $\mod 13$ We see upon inspection that $x^2\equiv -1\pmod{13}\implies x=\pm 5$ See if you can use CRT to find all solutions to $x^2\equiv -1\pmod{325}$
We can prove, using the Discrete Logarithm and the Linear Congruence Theorem, that $x^2\equiv a\pmod m$ has zero or two solutions if $m$ has a primitive root. Now, $\displaystyle x^2\equiv-1\pmod{325}\implies x^2\equiv-1\pmod{25}$ $\displaystyle x^2\equiv-1\equiv49\pmod{25}\equiv7^2\implies x\equiv\pm7\pmod{25}\ \ \ \ (1)$ Again, $x^2\equiv-1\pmod{325}\implies x^2\equiv-1\pmod{13}$ $\displaystyle x^2\equiv-1\pmod{13}\equiv25\equiv5^2\implies x\equiv\pm5\pmod{13}\ \ \ \ (2)$ Now apply CRT to $(1),(2)$ to find the four incongruent solutions.
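The four incongruent solutions can also be listed by brute force; a one-loop sketch (mine):

```java
public class RootsMod325 {
    public static void main(String[] args) {
        // Every x in 0..324 with x^2 ≡ -1 ≡ 324 (mod 325).
        for (int x = 0; x < 325; x++)
            if (x * x % 325 == 324)
                System.out.print(x + " ");
        System.out.println();
    }
}
```

It prints $18$, $57$, $268$ and $307$, which indeed reduce to $\pm7\pmod{25}$ and $\pm5\pmod{13}$.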
What is the differential equation which has the particular solution $y_p=(x^2-1)e^{-x}+x$? What is the lowest-order differential equation which has the particular solution $y_p=(x^2-1)e^{-x}+x$? For $e^{-x}$ we have $(D+1)$, for $xe^{-x}\to(D+1)^2$, and for $x^2e^{-x}\to (D+1)^3$. I don't know how to combine them. The answer is $(D+1)^3D^2y=0$, btw.
Given: $$y_p = (x^2-1)e^{-x}+x$$ Finding the total derivative we obtain: $$dy = [e^{-x}(1+2x-x^2) + 1]dx$$ $$\therefore \frac{dy}{dx} = 1 - e^{-x}(x^2-2x-1)$$ which is the required differential equation.
Nash equilibrium in second-price sealed-bid auction I'm trying to understand the basics of game theory, and the topic of auctions has arisen. I understand the basic concepts of auctions but I'm struggling with second-price sealed-bid auctions. I understand that a weakly dominant strategy in a second-price sealed-bid auction is to always bid the amount the item is worth to you. However, I'm having trouble with some example questions that I've been given. We have $n$ bidders, $n \ge 2$. There is only one object in the auction. Player $i$, $i = 1, \dots, n$, evaluates the object by giving it a valuation $v_i$, where: $$v_1 > v_2 > v_3 > \dots > v_n > 0$$ Each player $i$ submits a sealed bid $b_i$, $i = 1, \dots, n$. So we can describe a bidding profile of all players as $(b_1, b_2, b_3, \dots, b_n)$. Now what I want to understand is: are both of the bidding profiles listed below Nash equilibria? A) bidding profile $(v_1, 0, 0, \dots, 0)$ B) bidding profile $(v_2, v_1, 0, \dots, 0)$ Surely both are Nash equilibria, because every bidder will bid their valuation for the item and thus no one has any incentive to change their bid (i.e. a weakly dominant strategy)? I just wanted some clarification; any help on the matter would be greatly appreciated.
In scenario $A$, only the highest bidder bids their true value and everyone else bids $0$. Bidder 1 wins and pays the second-highest bid of $0$, and no other player can increase their payoff by changing their strategy alone, since winning would require bidding above $v_1$ and paying $v_1$, which exceeds every other valuation. So profile $A$ is a Nash equilibrium (and bidder 1 is in fact playing their weakly dominant strategy of truthful bidding). In scenario $B$, the bidder with valuation $v_1$ bids $v_2$ and the bidder with valuation $v_2$ bids $v_1$, with $v_1 > v_2$. Here Player 2 wins but pays only the second-highest bid, $v_2$, so their payoff is $v_2 - v_2 = 0$, and no unilateral deviation strictly improves any player's payoff; this profile is therefore also a Nash equilibrium. However, it is not an equilibrium in weakly dominant strategies: Player 2 is bidding $v_1 - v_2$ above their true value, and overbidding is weakly dominated, since if some other player bid between $v_2$ and $v_1$, Player 2 would win at a price above their valuation and receive a negative payoff.
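A tiny simulation makes the payoffs concrete. This sketch (the function name and the valuations are made up for illustration; ties are ignored) shows that in profile $B$ the overbidding player still pays only the second-highest bid:

```python
def second_price(bids):
    """Winner's index and the price paid in a sealed-bid second-price auction."""
    ranked = sorted(range(len(bids)), key=lambda i: bids[i], reverse=True)
    return ranked[0], bids[ranked[1]]

v = [10, 8, 3]                            # valuations v1 > v2 > v3

# Profile B: player 1 bids v2, player 2 bids v1, the rest bid 0.
winner, price = second_price([v[1], v[0], 0])
print(winner, price, v[winner] - price)   # 1 8 0: player 2 wins, pays v2, payoff 0
```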
Show minimum distance to a convex set is a convex function. Show that $$ g(x)=\inf_{z \in C}\|x-z\| $$ is a convex function, where $g:\mathbb{R}^n \rightarrow \mathbb{R}$, $C$ is a convex set in $\mathbb{R}^n$ (not necessarily closed or bounded), and $\|\cdot\|$ is a norm on $\mathbb{R}^n$. Let $x,y$ be in $\mathbb{R}^n$. We need to show that $$ g(\lambda x +(1-\lambda)y) \leq \lambda g(x)+ (1-\lambda)g(y) \tag{1} $$ I tried the following: $$ \|\lambda x +(1-\lambda)y-z\| \leq \lambda\| x -z\| + (1-\lambda)\| y-z\| \,\, \forall {z \in C} $$ Since $$ g(\lambda x +(1-\lambda)y)=\inf_{z \in C}\|\lambda x +(1-\lambda)y-z\| \leq \|\lambda x +(1-\lambda)y-z\| \,\, \forall {z \in C} $$ So $$ g(\lambda x +(1-\lambda)y)=\inf_{z \in C}\|\lambda x +(1-\lambda)y-z\| \leq \lambda\| x -z\| + (1-\lambda)\| y-z\| \,\, \forall {z \in C} $$ I do not know how to handle the right-hand side and apply the infimum in the right way, because the following is not correct in general: $$ \inf_{z \in C}\|\lambda x +(1-\lambda)y-z\| \nleq \lambda \inf_{z \in C} \| x -z\| + (1-\lambda) \inf_{z \in C} \| y-z\| $$ Or maybe my initial way of proving the convexity is wrong. Can you complete my proof or show the claim another way?
Suppose $C$ is also closed; then there exist $z_1,z_2\in C$ such that $g(x)=||x-z_1||$, $g(y)=||y-z_2||$. Then $$\lambda g(x)+(1-\lambda)g(y)\ge ||\lambda x + (1-\lambda)y - \lambda z_1-(1-\lambda)z_2||.$$ Using the convexity of $C$, $z=\lambda z_1+ (1-\lambda)z_2\in C$. Hence, $$\lambda g(x)+(1-\lambda)g(y)\ge ||\lambda x + (1-\lambda)y - z||\ge g(\lambda x + (1-\lambda)y).$$ If $C$ is not closed, a limit argument combined with the above reasoning works: for $\varepsilon>0$, pick $z_1,z_2\in C$ with $||x-z_1||\le g(x)+\varepsilon$ and $||y-z_2||\le g(y)+\varepsilon$, run the same chain of inequalities, and let $\varepsilon\to 0$.
Hint: try starting from the opposite direction; consider the subadditivity of the infimum and see if you can show it that way.
If a 3D-cake is cut by $n$ planes yielding the maximum number of pieces, then what is the number of pieces with the cake crust? It is known that a 3D cake can be cut by $n$ plane cuts into at most $N$ pieces, given by the cake number $N=\frac {1}{6}(n^3+5n+6)$. However, some of the pieces have part of the cake's crust as one of their boundaries. How many pieces have crust? (The entire surface of the cake is assumed to be crust.)
To get the maximum number, the cutting planes are in general position. Consider the 2-D surface (crust) of the cake. Let the intersections of the cutting planes with the crust be called cutting circles. Remove one point that is not on any of the cutting circles, and (stereographically) project what remains onto the plane. We now have $n$ circles in general position cutting the plane. We know that the number of regions they divide it into is $$n^2-n+2.$$
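A quick consistency check of the two formulas (standard library only): the difference between total pieces and crust pieces should be the number of interior pieces, and for $n$ planes in general position the number of bounded regions in $\Bbb R^3$ is $\binom{n-1}{3}$, which indeed matches:

```python
from math import comb

for n in range(1, 8):
    total = (n**3 + 5*n + 6) // 6    # cake number: all pieces
    crust = n*n - n + 2              # pieces touching the crust
    # interior pieces total - crust should equal C(n-1, 3)
    print(n, total, crust, total - crust, comb(n - 1, 3))
```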
Number of pieces with crust $=(n^2+n+2)/2$
Why is the polynomial ring $\Bbb R[x]$ a PID but $\Bbb Z[x]$ is not? Why is the polynomial ring $\Bbb R[x]$ a PID but $\Bbb Z[x]$ is not? The question is asking me to prove that $\Bbb R[x]$ is a PID. I'm assuming you would go about this knowing that every field is a PID, but I'm stuck on the differences between $\Bbb R[x]$ and $\Bbb Z[x]$.
There are many ways to answer this question, at different levels, but I would say that I think the right reason is because $\mathbb{Z}$ is not a field. I will give you 3 pieces of supporting evidence for that. The proof that $\mathbb{R}[X]$ is a PID relies on long division of polynomials, and the fact that you can divide a polynomial by any nonzero polynomial and get a quotient and a remainder in $\mathbb{R}[X]$. This relies on the fact that any nonzero real number is invertible. This does not work in $\mathbb{Z}[X]$: for example, if you divide $X$ by $2$, you will get a quotient which does not lie in $\mathbb{Z}[X]$. What goes wrong here is that the invertible elements of $\mathbb{Z}$ are $\pm 1$. OK, this may not be satisfactory, because what this really proves is that $\mathbb{Z}[X]$ is not Euclidean for the degree function. It does not prove that $\mathbb{Z}[X]$ is not Euclidean for another function. Worse, there exist PIDs which are not Euclidean. Another reason (see Bernard's answer): in a PID, any nonzero prime ideal is maximal. This is not the case for $\mathbb{Z}[X]$, since $p\mathbb{Z}[X]$ is a prime ideal but not maximal, for any prime number $p$. Note that this cannot happen in $\mathbb{R}[X]$ ($p\mathbb{R}[X]=\mathbb{R}[X]$, since $p$ is invertible in $\mathbb{R}$). Once again, the main difference is that you have a lot of non-invertible elements in $\mathbb{Z}$. Ok, but this is still too vague. So let's go for: Thm. Let $A$ be a commutative ring with $1$. Then the following properties are equivalent: i) $A$ is a field; ii) $A[X]$ is a Euclidean domain; iii) $A[X]$ is a PID. Only iii)$\Rightarrow$i) needs a proof: in a PID, any nonzero prime ideal is maximal. Since $A[X]$ is a PID, it is an integral domain, and thus so is $A$ (it is a subring of $A[X]$). Since evaluation at $0$ induces a ring isomorphism $A[X]/(X)\simeq A$, $(X)$ is a nonzero prime ideal. Since $A[X]$ is a PID, $(X)$ is thus a maximal ideal. Consequently, $A[X]/(X)$ is a field, meaning that $A$ is a field.
Find an ideal that is not principal in $\mathbb Z[x]$. Hint: $\mathbb Z$ is a PID, so your ideal will need something with $x$ in it. Show that every ideal in $\mathbb R[x]$ is principal. Hint: every nonzero real number is invertible.
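To expand the first hint into a sketch (this is the standard example): consider $I=(2,x)\subseteq\mathbb Z[x]$ and suppose $I=(f)$ for some $f$. Since $2\in(f)$ and degrees add under multiplication, $f$ must be a constant dividing $2$, i.e. $f=\pm1$ or $f=\pm2$. If $f=\pm2$, then $x\notin(f)$, since every polynomial in $(2)$ has all coefficients even. If $f=\pm1$, then $(f)=\mathbb Z[x]$; but every element $2g(x)+xh(x)$ of $I$ has even constant term, so $1\notin I$. Either way we have a contradiction, so $I$ is not principal.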
What is the (mathematical) point of straightedge and compass constructions? The ancient discipline of construction by straightedge and compass is both fascinating and entertaining. But what is its significance in a mathematical sense? It is still taught in high school geometry classes even today. What I'm getting at is this: Are the rules of construction just arbitrarily imposed restrictions, like a form of poetry, or is there a meaningful reason for prohibiting, say, the use of a protractor?
Okay, I seem to be ranting too much in comments, so let me try to put forth my points and opinion here (as a community wiki, since it is opinion). First, to address the question: What is the significance of straightedge and compass constructions within mathematics? Historically, they played an important role and led to a number of interesting material (the three famous impossibilities lead very naturally to transcendental numbers, theories of equations, and the like). They are related to fascinating stuff (numbers constructible by origami, etc). But I would say that their significance parallels a bit the significance of Cayley's Theorem in Group Theory: although important historically, and relevant to understand the development of many areas of mathematics, they are not that particularly important today. As you can see from the responses, many find them "fascinating", many find them "boring", but nobody seems to have come forth with an important application. Now, addressing the issue of teaching it at K-12. Let me preface this by saying that I am a "survivor" of the New Math, which came to Mexico (where I grew up) in the 70s. I would say I became a mathematician in large part despite having been taught with New Math, rather than because of it. Also, I did not attend K-12 education or undergraduate in the U.S.; I am a bit more familiar with undergraduate at the level of precalculus and above thanks to my job, but not very much with the details of curricula in sundry states in the U.S. So I may very well have a wrong impression of details in what follows. Now, one problem, in my view, is that when we talk about "math education", we are really talking about two different things: numeracy and mathematics. This is the same phenomenon we see when we think about "English class". I suspect that English Ph.D.s are nonplussed at people who think they spend their time dealing with grammar, spelling, punctuation, etc, just as mathematicians are nonplussed that people think we spend our time multiplying really big numbers by hand. English education has two distinct components, which we might call "Literacy" and "Literature". Literacy is the component where we try to teach students to read and write effectively, spelling, punctuation, etc. Making themselves understood in written form, and understanding the written word. On the other hand, Literature is the component where they are introduced to Shakespeare, novelists, book reading, short stories, poetry, creative writing, historical and world literature, etc. We consider English education at our schools a failure when it fails in its mission with regards to literacy: we don't consider it a failure if students come out not being particularly enthused with reading classic novels or don't become professional poets, or even if they don't care or like poetry (or Shakespeare). The professional English Ph.D. engages in literature, not in literacy. Literacy is the domain of the grammarian. Mathematics education likewise has two components; numeracy (to borrow the term from John Allen Paulos) and mathematics. Numeracy is the parallel of literacy: we want children to be able to handle and understand numbers and basic algebra, percentages, etc., because they are necessary to function in the world. Mathematics is the stuff that mathematicians do, and which we all find so interesting and beautiful. 
"Mathematics" also includes advanced topics that are necessary for someone who is going to go on to study areas that require mathematics: so the physicists and engineers need to know trigonometry; business and economists need to know advanced statistics; computer science needs to know discrete mathematics; etc. Much like someone going on to college likely needs more than simple command of grammar and spelling. Part of the problem with math education is that it so often conflates numeracy with mathematics; another part of the problem is that mathematics was included in the "classical" curriculum for much the same reason as Latin and Greek were included: historical reasons, because every "gentleman" was expected to know some Latin, some Greek, and some mathematics (and by "mathematics", people meant Euclid). To some extent, we still teach geometry in K-12 because we've always taught geometry. But it is not part of the numeracy curriculum. It would be wonderful if we had the time in K-12 to teach students both numeracy and mathematics; it is a fact that today we are failing at both. We don't perform better in teaching students numeracy by teaching them beautiful mathematics, even if they understand and appreciate them, just like making them really understand and care for Shakespeare's plays (through performance, say) will make them able to understand a set of written instructions, or write a coherent argument. Trying to excite students about mathematics is all well and good; but numeracy should come first. Trying to excite students about the wonderful world of books, plays, and poetry is all well and good, but we need them to be able to read and write first, because that's part of what they will need to function in society. Constructions by straightedge and compass can become an interesting part of mathematics education, just like geometry. I just don't think they have a place in numeracy education. But they are taking the time we need for that numeracy education. Trigonometry used to be part of the numeracy education that people needed; this is no longer the case today. Trigonometry, today, is a foundational science for more advanced studies, not part of numeracy. But we are still teaching trigonometry as a numeracy subject (hence the rules, recipes, mnemonics, and the like). We could try to turn trigonometry into a mathematical subject, sure; or we could postpone it until later and only teach it to those for whom it is an important foundation. Added. To clarify: I don't mean to say that the "solution" is to do less math and more numeracy. I think the solution is likely to be complicated, but the first step is to identify exactly what parts of what is currently branded as "mathematics" are really numeracy, and which parts are mathematics. Trigonometry is taught as part of "advanced mathematics", but it is taught as rote and rules because it was really numeracy. We don't need to teach trigonometry as numeracy any more, so we shouldn't. If it is to be taught, it needs to be taught in the right context. Geometry is similar: geometry used to be taught as basic numeracy because "every educated person should know geometry"; (of course, "educated" at the time meant "rich and land owner, or with aspirations in that direction"). We don't need most of geometry as basic numeracy, we want it now as mathematics. So teaching geometry as numeracy is a waste of time, and it takes up the numeracy time that should be spent in other things. (It can still be taught during the mathematics time). 
I certainly don't say "drop all the math, concentrate on the numeracy". I say, "when dealing with numeracy, concentrate on the numeracy, not the math, and don't confuse the two." Nobody seems to confuse spelling rules with reading novels, because we separate literacy from literature. Too many people confuse arithmetic with mathematics, because we don't separate them. The reason I talk about dropping trigonometry and doing some basic statistics is precisely that: trigonometry is being taught as part of the numeracy curriculum, when it shouldn't. Basic statistics, say at the level of the wonderful How to Lie With Statistics, is not taught as part of numeracy. But in today's world, there is a far better case for statistics being part of the basic numeracy education than trigonometry. Everyone coming out of High School should know that taking a 10% pay-cut and then getting a 5% raise does not mean you are now at 95% of your old salary (go do a spot check, see how many people think you are). They need to know the difference between average and median, so they are not misled by statements about "the average salary of the American worker". They should understand what "false positive" and "false negative" means. They should be able to interpret graphs (even the silly ones on the cover of every USA Today issue) and be able to spot the distortions created by chopping axes, etc. These are numeracy issues. Likewise high school geometry: it is trying to be both numeracy and mathematics, and I think it generally fails at both. There are some components of geometry that are part of numeracy, they ought to be treated that way, but the parts that are mathematics should be separate. One problem I have with Lockhart's lament is that he does not make clear the distinction between numeracy and mathematical education. The nightmare he paints for the musician and the artist is precisely that musical education is being turned into the equivalent of numeracy/literacy education, thus doing a disservice to music-as-an-art. The science of mathematics that we all know and love has some intersection with numeracy and arithmetic, but we all know it is a limited intersection; just as the study of literature has some intersection with the study of grammar an spelling, but the intersection is limited. The main purpose of K-12 education (or at the very least, K-6 or K-8) should be numeracy, with some limited forays into mathematics (just as the main purpose will be literacy with some limited forays into literature). The way to teach numeracy is necessarily different from the way to teach mathematics. Numeracy requires that we memorize multiplication tables, for all the horror this will cause to modern education people; this of course is a far cry from teaching the mathematics of multiplication, which may very well be very interesting and awaken the child's curiosity and wonder at the world. That can be done within the context of mathematical education, but it shouldn't be done in the context, and at the expense of, numeracy. Trigonometry used to be part of numeracy; it no longer is. It is now either mathematical or foundational for advanced studies, so it should be treated as such. Geometry used to be taught for reasons which no longer hold, and to some extent we continue teaching it as a historical legacy; we shouldn't. 
Those components which are numeracy should be taught as that, and we could move the rest (including constructions with compass and straightedge) to the more creative, mathematical education side of the equation. Anyway, I've ranted long enough, and probably made myself a few detractors along the way...
There is a reason why Euclid's Elements are so revered. It builds logic, critical thinking/reasoning, spatial skills, and more. Also, it is beautiful. It has been as influential as any one book in the history of time. I think the undertones of conversations like this usually imply one's (perhaps subconscious) preference for quality or for quantity. Are you a Journey person or a Destination person? So many modern ideas are so obviously sprung from this foundation work. The thinking of so many that shaped our society was very much shaped by Elements (Government, Law, Architecture, Building Construction, City Planning, Science). It deserves more than a cursory glance for ourselves and our kids. Euclid's Elements was such a QUALITY work that it was taught in more or less its original form for two thousand years. Getting to your question, the axiomatic structure used in Euclid's straightedge and compass constructions has helped move the idea of proof along. Without the definitions, common notions, and postulates we are talking about a bastardized version that is less mathematically significant yet still important in the development of spatial skills. Remember, math is not reality and is all about imposing arbitrary restrictions to find the truths they bring about (and often beauty). As far as not using a protractor, this just creates an extra set of constraints to make the subject more challenging and make you work/think a little bit for the answer. Why not use a computer or a geometer for that matter? Why are you not allowed to phone a friend on Jeopardy? Constraints often require you to use more creativity to solve a problem and exist in the real world in spades.
Is any uncountable, scattered subset of [0,1] dense? Just out of curiosity, I am wondering if any uncountable, scattered subset $U\subset[0,1]$ must be dense in $[0,1]$ (endowed with the Euclidean topology). It's not necessarily true if $U$ is countable, since we can take $U=\{\frac{1}{n} : n\in\mathbb{N}\}$, and there are countably many open sets in $[0,1]$ which $U$ does not intersect. But I'm having a hard time figuring out whether it's true for uncountable $U$. Any help would be appreciated. (I added the "scattered" hypothesis, since any interval contained in $[0,1]$ would otherwise suffice as a counterexample.)
There are no uncountable scattered subsets of $[0,1]$. This follows from the theory of Cantor-Bendixson rank, for instance. Given any scattered $A\subset[0,1]$, there must be some ordinal $\alpha$ such that the $\alpha$th Cantor-Bendixson derivative $A^\alpha$ of $A$ is empty. The least such $\alpha$ must be countable, since the Cantor-Bendixson derivatives are a descending chain of closed subsets of $A$ and $A$ is second-countable. Since every subset of $A$ can have only countably many isolated points (again by second-countability), this means $A$ is countable.
An uncountable set does not have to be dense. A simple example would be to take the half interval $[0,\frac{1}{2}]$, which is uncountable but not dense in $[0,1]$. But we can do even better, because we can find a set that is uncountable and not dense in any open interval. The Cantor set is uncountable and nowhere dense. https://en.wikipedia.org/wiki/Cantor_set
What's the difference in the interpretation of $y = x$ and $y(x) = x$? Both $y = x$ and $y(x) = x$ describe a line, but what are some of the differences between their uses, and can they ever be used interchangeably?
$y=x$ simply represents a line passing through the origin, while in $y(x)=x$, $y$ is behaving as a function of $x$; in this case your function is the identity function, and you have to specify a domain and a codomain. Other possible functions are, for example, $y(x)=x^2$, and so on.
The only reason $y(x) = x$ describes a line is that, for any value of $x$, $y(x)$ equals $x$, as per the definition of $y(x)$. As Kushal Bhuyan said, you can define $y(x)$ to be any function you want. Also, when you graph $y = x$, there will be an $x$-axis and a $y$-axis, while if you graph $y(x) = x$, there will be an $x$-axis and a $y(x)$-axis (or an axis for any other function that you choose).
Is there a simpler way to get the doughnut property? The terms and my question come from Halbeisen's book Combinatorial set theory with a gentle introduction to forcing. For subsets $a$, $b$ of $\omega$ such that $b-a$ is infinite, define a doughnut as $$[a,b]^\omega:= \{x\in [\omega]^\omega : a\subseteq x\subseteq b\}.$$ (Imagine the Venn diagram of $a$, $x$ and $b$; then you can guess why this set is called a doughnut.) A collection $\mathcal{A}\subset [\omega]^\omega$ has the doughnut property if there are $a$ and $b$ such that either $[a,b]^\omega\subseteq \mathcal{A}$ or $[a,b]^\omega\cap \mathcal{A} = \varnothing$ holds. If the previous condition holds for $a=\varnothing$, we say $\mathcal{A}$ has the Ramsey property. Under the axiom of choice we can find a collection $\mathcal{A}$ that has the doughnut property but not the Ramsey property. Here is my attempt: Define $\sim$ as $$a\sim b \iff a\triangle b\text{ finite}.$$ Then $\sim$ is an equivalence relation. For each $x\in [\omega]^\omega$ choose a representative $r_x$ (that is, choose $r_x$ such that $[x]=[r_x]$ for each equivalence class). Take a maximal antichain $X\subseteq[\omega]^\omega$ which does not contain $\omega$. Define $\mathcal{A}\subseteq[\omega]^\omega$ as follows: for a given $y\in[\omega]^\omega$, if there is some $a\in X$ such that $a\subseteq y$, then $y\in\mathcal{A}$; if not, $y\in\mathcal{A}$ if and only if $|y\triangle r_y|$ is even. Then for $a\in X$, $[a,\omega]^\omega\subseteq\mathcal{A}$. But for any $y\in [\omega]^\omega$, we can find some $a\in X$ such that $a$ and $y$ are comparable, and if $y\subseteq a$ then we can find some subsets of $y$ contained in $\mathcal{A}$ and some not in $\mathcal{A}$; if $a\subseteq y$, we can consider a set $b\subseteq a$ such that $a-b$ is infinite (since $a$ is infinite), and then $b$ has subsets lying in $\mathcal{A}$ and subsets not in $\mathcal{A}$. Thus $[y]^\omega$ is neither a subset of $\mathcal{A}$ nor disjoint from $\mathcal{A}$. My questions are: Is my construction valid? And is there a simpler construction?
I think your construction works fine, after adding one additional condition on the maximal antichain $X$: it should contain an infinite set $a$ whose complement is also infinite (that such an $X$ exists can be quickly established by Zorn's lemma). This condition is needed for the doughnut $[a, \omega]^{\omega} \subseteq \mathcal{A}$ (the definition of the doughnut requires that $\omega \setminus a$ is infinite).
Quantifying "weight" or "control" of a variable to the value of a function? Say I have an equation $$x = f(x)$$ I know that here, the independent variable $x$ controls 100% of the variability of the value of the function. It is the only "knob" that I need to turn to manipulate the value of the function. That is, compared to another function: $$xy = f(x,y)$$ In this function, $x$ does not control the entire equation. The value of $y$ also controls the equation. I have two "knobs" to control, and each has its independent effect on the function. My question is, how do I measure how much effect a variable has over a function? To clarify: say my equation was $$x + \frac{y}{1000} = f(x,y)$$ Then increasing the value of $x$ by 1 will increase the value of the function by 1. But to get to the same effect using $y$, I have to increase its value by 1000. It seems easy enough to say that $x$ has a weight 1000 times that of $y$. Similarly, in the first equation $x = f(x)$, $x$ has 100% of the control. In the second equation, each variable has 50% of the control. But these values I'm saying - about how much "control" a variable has on the function's value - are only intuitive. I cannot prove their truth. How do I know how much control a variable has over the value of the function, especially in the general case when an equation becomes complex, like when the variable cannot be factored out: $$x + \frac{y}{x} = f(x,y)$$ Or when the value of the variables are "polluted" by constants or itself: $$\frac{\sqrt{x + 1000}}{99} + \frac{y}{y+1} = f(x,y)$$ Or when there are more than two variables: $$x + yz = f(x,y,z)$$
Sounds like the weight is just the partial derivative with respect to the variable you wish to measure. For $$x+\frac{y}{1000}=f(x,y),$$ the partial derivative with respect to $y$ is $\partial f/\partial y = 1/1000$, and the partial derivative with respect to $x$ is $\partial f/\partial x = 1$, so a unit change in $x$ has $1000$ times the effect of a unit change in $y$, matching your intuition. If the equation were $$x^2+1000y^3,$$ then respectively $$\partial f/\partial y = 3000y^2$$ and $$\partial f/\partial x = 2x,$$ and the "weight" depends very much on the point at which the derivative is being evaluated, as it can change. For $$x+y/x=f(x,y),$$ I am hazy on my rules for differentiation, but I looked it up: for $\partial f/\partial x$ we can rewrite the expression as $$x+y\cdot x^{-1},$$ where $y$ is a constant here, so $$\partial f/\partial x = 1 - y\cdot x^{-2}.$$ Always look at how you can rewrite an equation, and also try wolframalpha.com if you don't already know about it.
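If it helps, here is a short sympy sketch of this "local weight" idea applied to the examples from the question:

```python
import sympy as sp

x, y = sp.symbols('x y')

f = x + y / 1000
print(sp.diff(f, x), sp.diff(f, y))          # 1 1/1000: x weighs 1000x more

g = x + y / x                                # weights now depend on the point
grad = [sp.diff(g, v) for v in (x, y)]
print(grad)                                  # [1 - y/x**2, 1/x]
print([d.subs({x: 2, y: 3}) for d in grad])  # evaluated at (2, 3): [1/4, 1/2]
```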
Number of permutations such that $a-b+c-d+e-f+g-h=0$ All possible permutations $\left(a,b,c,d,e,f,g,h\right)$ of the set $A=\left\{1,2,3,4,5,6,7,8\right\}$ are formed. How many of those permutations satisfy $$a-b+c-d+e-f+g-h=0?$$ My try: we have, for example, $$(2-1)+(4-3)+(5-6)+(8-7)=0,$$ and each of the numbers in brackets, if we treat them as four letters, can be arranged in $4!=24$ ways. Now, in all these possible permutations, if we multiply by a negative sign we get a different permutation, so the total is $48$. Similarly, for $$(2-3)+(4-1)+(5-6)+(8-7)=0$$ we get $48$ permutations. But I feel this is an informal approach. Any clue for a better approach?
The first thing to notice is that the sum of all the numbers from $1$ through $8$ is $36$ so the numbers that are added must sum to $18$ as must the numbers that are subtracted. We need to find the number of ways to divide the numbers into two sets of four such that each set adds to $18$. The $8$ has to go in one set, so we look for ways to have three numbers sum to $10$. They are $721,631,541,532$ so there are $4$ partitions of the set. For each partition we have two ways to choose which set is added, $4!$ ways to choose the order of the added set, and $4!$ ways to choose the order of the subtracted set. This gives a total of $4\cdot 2 \cdot 4! \cdot 4!=4608$
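The count is small enough ($8!=40320$ cases) to verify by exhaustive search (standard library only):

```python
from itertools import permutations

count = sum(1 for (a, b, c, d, e, f, g, h) in permutations(range(1, 9))
            if a - b + c - d + e - f + g - h == 0)
print(count)   # 4608
```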
The total of the numbers is $36$; divide by $2$ to get $18$. So you may split the numbers into the following groups: $(\{1,3,6,8\}$ and $\{2,4,5,7\})$, $(\{1,4,6,7\}$ and $\{2,3,5,8\})$, $(\{1,4,5,8\}$ and $\{2,3,6,7\})$, or $(\{1,2,7,8\}$ and $\{3,4,5,6\})$. Each group can be permuted in $4!$ ways such that $a-b+c-d+e-f+g-h=0$ holds, and in each pair the two groups can be swapped between the added and subtracted positions, so the four pairs give $8$ choices in total, for a total of $8\times24\times24 = 4608$.
20 books 5 different shelves So I'm trying to answer this question and am not sure if my answer is correct. In the textbook I'm using, this question is asked before combinations are even introduced (only permutations), so I'm not entirely sure I'm headed in the right direction. Is it possible to answer these using permutations? Consider a bookcase with 6 shelves and suppose that we have 15 different books to place on the shelves. How many different ways are there to place the books on the shelves if the left to right order on each shelf is unimportant? The order being unimportant implies it's a combination question, and having different "containers" led me to use the stars and bars approach. $C(n + k -1, k) = C(5 + 15, 15) = C(20,15)$ ways to put the books on the shelves when order is unimportant. The next part of the question says the order is important; assuming the above answer is right, would the number of ways with order simply be $15! \cdot C(20, 15)$? Furthermore, if each shelf is to get at least one book, is $C(14, 9)$ correct when order is unimportant? Some confirmation or correction would be appreciated, as would alternate methods for solving. Thanks!
If the left-to-right order on each shelf is unimportant, there are $6^{15}$ ways to place the books (as each book should be assigned its shelf number). If the left-to-right order is important, you first order all the books in left-to-right and top-to-bottom order, and then place 5 "container separators" between them (consider the similar task of writing 15 different printable characters in a text file containing no more than 6 lines - i.e. no more than 5 newline characters). The answer would be $15! \times \binom{20}{5}$.
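Both counts are easy to sanity-check by brute force on small cases; this sketch counts ordered placements by summing, over all shelf assignments, the number of orderings within each shelf, and compares against the closed form:

```python
from itertools import product
from math import comb, factorial

def ordered_count(b, s):
    """Brute force: b distinct books on s shelves, left-to-right order mattering."""
    total = 0
    for assign in product(range(s), repeat=b):   # which shelf each book goes on
        ways = 1
        for shelf in range(s):
            ways *= factorial(assign.count(shelf))  # orderings within that shelf
        total += ways
    return total

b, s = 5, 3
print(ordered_count(b, s))                    # 2520
print(factorial(b) * comb(b + s - 1, s - 1))  # 2520 = b! * C(b+s-1, s-1)
```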
combination $nCr = \dfrac{n!}{(n-r)!\,r!}$ Here $n=$ the number of units/items and $r=$ the number chosen. For example, for $20$ similar books to be placed on $5$ different shelves: $\dfrac{20!}{(20-5)!\,5!}$
Proof that degree $n$ polynomial has at most $n$ roots (counting multiplicity) I am trying to prove the following statement: If the polynomial $f(x)$ of degree $n$ has roots $a_1,a_2,...,a_k$ with multiplicities $\alpha_1,\alpha_2,...,\alpha_k$ in a field $F$ (with $\alpha_i\geq 1$), then $f(x)$ has $(x-a_1)^{\alpha_1}\cdots (x-a_k)^{\alpha_k}$ as a factor. In particular, a polynomial of degree $n$ in one variable over a field $F$ has at most $n$ roots in $F$, even counted with multiplicity. This is the proof that I wrote For the first part, we proceed by induction on $k$. If $k=1$, then by definition, we may write $f(x)=(x-a_1)^{\alpha_1}q(x)$. Now, suppose that the statement is true for up to $k$ roots, and consider a collection of $k+1$ roots. Using the inductive hypothesis, we may write $$f(x)=(x-a_1)^{\alpha_1}\cdots (x-a_k)^{\alpha_k}q(x)\quad\text{ and }\quad f(x)=(x-a_{k+1})^{\alpha_{k+1}}q'(x)$$ Equating $$(x-a_1)^{\alpha_1}\cdots (x-a_k)^{\alpha_k}q(x)=(x-a_{k+1})^{\alpha_{k+1}}q'(x)$$ and using the fact that $(x-a_{k+1})$ is an irreducible element that is distinct from any on the left hand side gives us that $(x-a_{k+1})^{\alpha_{k+1}}\mid q(x)$, which proves the first statement. Now, suppose that a polynomial $f(x)$ of degree $n$ has more than $n$ roots in $F$, counted with multiplicity. From what we showed, we must have $(x-a_1)^{\alpha_1}\cdots (x-a_k)^{\alpha_k}\mid f(x)$. But $F[x]$ is an integral domain so this is impossible, as the divisor has a higher degree than $f(x)$. I was first wondering if the proof was correct. Moreover, in proving the first half of the statement I used the fact that an irreducible element is prime, which leverages the fact that $F[x]$ is a U.F.D. Is there a more general method of proof that works on general integral domains?
I'd state the result as follows: If $F$ is a field and $f \in F[x]$ not zero, then $f(x)=(x-a_1)\cdots(x-a_m)g(x)$, where $m\ge 0$, $a_i \in F$, and $g$ has no roots in $F$. In particular, $m \le n$, the degree of $f$. and prove it by induction on the degree of $f$: If $f$ has no roots in $F$, then take $m=0$ and $g=f$. In particular, this holds for $f$ nonzero constant. If $f$ has a root $a_1$ in $F$, then $f(x)=(x-a_1)q(x)$ by polynomial division. The result follows by induction applied to $q$, whose degree is less than that of $f$. Multiplicities need not enter this argument at all; it is then easy to conclude the result about multiplicities by grouping equal factors $x-a_i$. Note that unique factorization is not a part of the argument.
Using variable separation to solve a linear ODE of first order Suppose I want to solve the ODE: $$f'(x) + 2f(x) = 3$$ I want to use variable separation, so I get: $$ f'(x) = 3 - 2f(x)$$ Now I want to divide by $3-2f(x)$, but I am unsure what I need to assume about $f$ in order to do that. That is, does that expression have to be nonzero for all $x$, or only for certain $x$? After that point I know what to do, but I don't understand what I am supposed to do here. Thanks!
The constant function $f(x)=\frac32$ is one solution. All other solutions must satisfy $f(x) \neq \frac32$ for all $x$ since solution curves can't cross (by the uniqueness theorem), so they are given by $f'(x)/(3-2 f(x))=1$ and so on (the usual procedure).
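Carrying out the usual procedure for completeness: for solutions with $f(x)\neq\frac32$ everywhere, $$\int\frac{f'(x)}{3-2f(x)}\,dx=\int 1\,dx \implies -\frac12\ln|3-2f(x)|=x+C \implies f(x)=\frac32+Ke^{-2x}$$ for a constant $K\neq0$; together with the constant solution $f\equiv\frac32$ (corresponding to $K=0$), this is the general solution.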
If you want to solve the ODE, you first have to solve the homogeneous equation, i.e. $f'(x) + 2f(x) = 0$, and then find a particular integral. You can easily rearrange the homogeneous equation as: \begin{equation} \frac{f'(x)}{f(x)}=-2 \end{equation} So that \begin{equation} \frac{df}{f}=-2 dx \end{equation} Upon integrating you get $f(x)=c e^{-2 x}$, where $c$ is the integration constant. Your particular integral is a simple constant, since the right-hand side of your differential equation is a constant. The solution is $f(x)=c e^{-2 x}+ 3/2$
Is there a proof that is true for all cases except for exactly one case? I was curious if there were any such proofs which state that a thing is true always EXCEPT for exactly one instance. As in, for some reason, there is only one instance where the proof is false, but it is true for all other objects. I understand that if it is not true in that one case that it is not necessarily a proof, I was just wondering if there were any "proof-like things" of this form.
Here's a famous one: $\mathbb{R}^n$ has a single differentiable structure (up to diffeo) except for $n=4$, in which case it has uncountably many. These posts may be of interest: https://mathoverflow.net/questions/16035/a-reference-for-smooth-structures-on-rn https://mathoverflow.net/questions/24930/differentiable-structures-on-r3
Each object you can touch in your body is not your tongue, but this fails when the object you touch is your tongue. Each person you look at on this planet is not your mother, but it fails when you look at your mother. Each person in this question is trying to show off with advanced math concepts to answer a trivial question like this and is not called Voyska, but it fails for me.
Why does the life of a light bulb follow an exponential law? Why does the life of a light bulb follow an exponential law? My teacher always said this, but he can't explain why. So why did we decide that a light bulb follows an exponential law? Where does this fact come from?
The exponential distribution comes from the assumption that light bulbs do not age. They just randomly, for each moment in time, decide whether to fail or to keep working, with the same probability regardless of how old they are. This leads to the exponential distribution. Is this the way light bulbs actually work? I have no idea, but since this is such a ubiquitous example I would at the very least find it reasonable that it's close to the truth. Someone probably checked it once a long time ago, and it's just been the established truth since.
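For what it's worth, here is the standard derivation of why "no ageing" forces the exponential law. Write $S(t)=P(T>t)$ for the survival function of the lifetime $T$. "Not ageing" (memorylessness) means $$P(T>s+t \mid T>s)=P(T>t)\quad\text{for all } s,t\ge 0,$$ which is exactly the functional equation $S(s+t)=S(s)S(t)$. Together with $S(0)=1$, monotonicity, and right-continuity, the only solutions are $S(t)=e^{-\lambda t}$ for some $\lambda\ge 0$, i.e. the exponential distribution.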
This was a great question, one I suffered with for a while. If you accept the premise that a lightbulb can be modeled by a memoryless exponential function, then you are forced to accept some rather embarrassing results. For instance, let's say you have $2$ lightbulbs, each modeled by a memoryless exponential with a mean of $30$ minutes. An experiment is done where the first bulb is run until it fails, the $2^{\mathrm{nd}}$ one is immediately put in and left to run until it fails, and you record only the time of the $2^{\mathrm{nd}}$ bulb's failure, let's say $60$ minutes. Using Bayesian inference, you want to use the time the $2^{\mathrm{nd}}$ bulb failed to update your knowledge about when the $1^{\mathrm{st}}$ bulb failed. With Bayes' rule it is straightforward to show that you've learned nothing, that the failure of the $1^{\mathrm{st}}$ bulb is just as likely to have been at one second in, $59$ minutes in, or its mean, $30$ minutes in. This is preposterous, and if you apply this to bulbs or, even worse, bridges, you are making a big mistake. A coin flip is memoryless, and the arrival of a single photon in a very low light environment is memoryless, but not a lightbulb, because a lightbulb failure is not a single event, but rather a series of events where material is slowly lost from the filament, finally leading to failure, and it cannot be modeled with a memoryless distribution. I wish they would stop using lightbulbs as the canonical example of exponential distributions and Bayesian inference.
Reformulating Real Analysis using any basis of $\mathbb{R}^p$ To illustrate the question that I'm asking, consider the theorem: If the partial derivatives of $f$ exist in a neighbourhood of $c$ and are continuous at $c$, then $f$ is differentiable at $c$. However, when we talk about partial derivatives, we talk about the directional derivative in the direction of the standard basis vectors $(0,...,1,...,0)$ of $\mathbb{R}^p$. First of all, why do we choose and formulate the theorems with respect to this basis? I mean, would any orthonormal basis of $\mathbb{R}^p$ work in the same way? For example, if we think geometrically, for $\mathbb{R}^3$ the choice of $x,y,z$ coordinates is arbitrary, hence any orthonormal basis of $\mathbb{R}^3$ should work, but how about an arbitrary basis of $\mathbb{R}^3$? Secondly, if we were to formulate, for example, the above theorem w.r.t. any basis of $\mathbb{R}^p$, how could we do that? Edit: Please provide an argument for your statements. Edit 2: In the book that I'm using (The Elements of Real Analysis by Bartle), for example, the above theorem is given before stating that the derivative of $f$ can be written in terms of the gradient, so to answer this question, I cannot use this fact.
First, your quoted theorem If the partial derivatives of the $f$ exists in a neighbourhood of $c$ and are continuous at $c$, then $f$ is differentiable at $c$. May or may not actually be true depending on how you define differentiable. If you are referring to Gâteaux differentiability then you are correct, but surprisingly not if you mean Fréchet differentiable. See this post for a counterexample. Your question actually touches on something very subtle. Consider the Jacobian of the function $f$--that is, the $p \times p$ matrix $J$ (in the standard basis), defined by $$ J_{ij} = \frac{\partial f_i}{\partial x_j}. $$ Where $f_i$ is the $i$th component of $f$ and $x_j$ the $j$th coordinate in the standard basis. Suppose now we transform from the standard basis to a new basis by the transformation $T : (x_j) \mapsto (x_j')$. We now have a new function $$ \tilde{f} = T\circ f\circ T^{-1}. $$ (That is, $\tilde{f}$ first takes the vector in the new coordinates and converts back to the old, applies $f$, and then converts back to the new coordinates.) Using the chain rule, it can readily be deduced that the new Jacobian $J'$ is related to the old by $$ J' = PJP^{-1}, $$ where $P$ is the change of basis matrix of the transformation $T$. Thus, we are free to work in whichever coordinates we may like, and may readily convert to a different set of linear coordinates by applying the standard change of basis matrix from linear algebra. We are thus at liberty to take the standard basis because, as this argument shows, all the other partial derivatives in any other linear coordinate system exist and are given by a change of basis with respect to the standard coordinates. Addendum: This reasoning also explains a certain puzzle: when we encounter a vector function $\mathbf{u} : \Bbb R^3 \to \Bbb R^3$ in physics, there are nine derivatives $\partial u_i/\partial x_j$ for $i,j = 1,2,3$. Why then do we only ever encounter two particular expressions $\nabla \cdot \mathbf{u}$ and $\nabla \times \mathbf{u}$ in our physics books? It is because these combinations of partial derivatives are (in some sense) the only "derivatives" invariant under orthogonal transformations (and $\nabla \times \mathbf{u}$ is only "kind of" invariant under coordinate transformations, see this.) We can see that $\nabla \cdot \mathbf{u}$ is invariant under coordinate transformations since it is the trace of the Jacobian, and the trace of a matrix is invariant with respect to change of basis.
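A quick numerical check of the change-of-basis rule $J' = PJP^{-1}$ for a linear map, where the Jacobian is just the matrix itself (a numpy sketch; the random matrices are only for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))   # linear map f(x) = A x, so its Jacobian is A
P = rng.standard_normal((3, 3))   # change-of-basis matrix (invertible a.s.)

# The transformed map T o f o T^{-1} is x' -> P A P^{-1} x'; compare its
# finite-difference Jacobian against the predicted P A P^{-1}.
predicted = P @ A @ np.linalg.inv(P)

def f_new(v):
    return P @ (A @ np.linalg.solve(P, v))

h = 1e-6
J = np.column_stack([(f_new(h * e) - f_new(0 * e)) / h for e in np.eye(3)])
print(np.allclose(J, predicted, atol=1e-5))   # True
```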
Any basis would suffice for this theorem. We just use the standard basis because it's the easiest to work with. The best way to look at this: given that the theorem holds for a basis $v_i$ of $\mathbb{R}^p$ and $e_j = \sum a_{i,j}v_i$, then $\frac{\partial f}{\partial x_j} = \sum a_{i,j}\frac{\partial f}{\partial v_i}$.
For which $a$, the DE $f'(x)=f(ax)$ has non- zero solution The question is For which $a$, the differential equation $$f'(x)=f(ax)$$ has non- zero solution(s)? Of course, for $a=1$, we have $f'(x)=f(x)$ which has the solution $f(x)=ce^{x}$. For $a=-1$, the DE is $f'(x)=f(-x)$ which gives $$f''(x)=-f'(-x)=-f(x)$$ and this gives $$f(x)=c_1\cos x+c_2\sin x$$ where from $f'(x)=f(-x)$, we see $f'(0)=f(0)$ and thus for $a=-1$, the solution is $$f(x)=c_1(\cos x+\sin x).$$ For $a\ne\pm1$, I have no idea how to proceed. Thanks for helps.
You may proceed as you did for $f'(x)=f(-x)$ in the general case $f'(x)=f(ax)$. Note that from $f'(x)=f(ax)$ we have $$f''(x)=af'(ax)=af(a^2x),\quad f'''(x)=a^3f'(a^2x)=a^3f(a^3x), \ldots$$ and in general $$f^{(n)}(x)=a^{n(n-1)/2}f(a^nx),$$ which gives $$f^{(n)}(0)=a^{n(n-1)/2}f(0).$$ Therefore $f(x)$ is given by $$f(x)=\sum_{n=0}^\infty \frac{f^{(n)}(0)}{n!}x^n=f(0)\left(1+\sum_{n=1}^\infty \frac{a^{n(n-1)/2}}{n!}x^n\right),$$ and by the ratio test, we see that we need $$|a|\leq 1.$$
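One can check the resulting power series symbolically: truncate it at order $N$ and confirm that $f'(x)-f(ax)$ vanishes below order $N$ (a sympy sketch):

```python
import sympy as sp

x, a = sp.symbols('x a')
N = 8
# Series ansatz with f(0) = 1 and the coefficients derived above:
f = sum(a**(n*(n - 1)//2) * x**n / sp.factorial(n) for n in range(N + 1))

diff = sp.expand(sp.diff(f, x) - f.subs(x, a*x))
# Everything below order x^N cancels; only the truncation term survives.
print(all(diff.coeff(x, k) == 0 for k in range(N)))   # True
```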
Hint. Let $y=f(x)$ and $w=ax$; then $y'_x=y'_w w'_x=a\,y'_w$, while the equation says $y'_x=y(w)$, which gives $$ay'=y,$$ where $y$ is now regarded as a function of $w$. You should now be able to proceed.
Conditional probability that the first card is a spade given that the second and third are spades Attempt: The original sample space is just the 52 cards. Now suppose the events that the second and third cards are spades have occurred. We now have a reduced sample space of $50$ cards which contains only 2 spades, so $$P( \text{first card spade} ) = \frac{2}{50} = \frac{1}{25}, $$ but the correct answer should be 11/50, which I have no idea how they got. What is the correct reasoning for this type of problem? Added: Actually, thanks to the given hint, I think I misunderstood. So basically we $\bf have$ 13 spades. Given the event that reduces our sample space, we now have $50$ cards and $13-2 = 11$ spades. Hence, $$ P = \frac{11}{50} $$
You got the correct answer, though you didn't use conditional probability. Your method works here because, as you noted, knowing that the 2nd and 3rd cards are spades is basically equivalent to removing them from the deck before we picked the 1st card. The way to do this with conditional probability would be to take the probability of both events -- i.e., the probability that the first three cards are all spades -- and divide by the probability of the observed event -- i.e., the probability that the second and third cards are both spades. We get $$\dfrac{\frac{13}{52}\cdot \frac{12}{51}\cdot\frac{11}{50}}{\frac{13}{52}\cdot \frac{12}{51}} = \dfrac{11}{50} $$
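The fraction $11/50$ can also be confirmed by exact enumeration over ordered triples of cards (standard library only):

```python
from fractions import Fraction
from itertools import permutations

deck = ['S'] * 13 + ['O'] * 39               # 13 spades, 39 other cards
both = first_too = 0
for i, j, k in permutations(range(52), 3):   # ordered (1st, 2nd, 3rd) draws
    if deck[j] == 'S' and deck[k] == 'S':
        both += 1
        first_too += (deck[i] == 'S')
print(Fraction(first_too, both))             # 11/50
```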
I think it goes like this: let $A:=$ the 1st card is a spade, and $B:=$ the 2nd & 3rd cards are spades (the first may or may not be a spade). The event $E$ is the one described in the question. $$P(E)=P(A \cap B)/P(B)$$ $$P(A \cap B)=P(\text{all three cards are spades})=\frac{13}{52}\cdot\frac{12}{51}\cdot\frac{11}{50}$$ $$P(B)=\underbrace{\frac{13}{52}\cdot\frac{12}{51}\cdot\frac{11}{50}}_{\text{all three are spades}} + \underbrace{\frac{39}{52}\cdot\frac{13}{51}\cdot\frac{12}{50}}_{\text{first not a spade, rest spades}}$$ Plug into the $P(E)$ equation and you get $11/50$.
Prove that $\lim \limits_{n\to \infty}\frac{n^{\alpha}}{(1+p)^n}=0$ Prove that if $p>0$ and $\alpha\in \mathbb{R}$ then $$\lim \limits_{n\to \infty}\dfrac{n^{\alpha}}{(1+p)^n}=0.$$ I have no idea how to prove it. Can anyone help, please? Prove it without using logarithms.
Consider $\sum_{n=1}^\infty \frac{n^\alpha}{(1+p)^n}$, and perform the Ratio Test for convergence of this series: $$\lim_{n\to \infty} \left| \frac{(n+1)^\alpha}{(1+p)^{n+1}} \cdot\frac{(1+p)^n}{n^\alpha}\right| = \lim_{n\to \infty}\left|\left(1 + \frac{1}{n}\right)^\alpha \frac{1}{1+p} \right| = \frac{1}{1+p} < 1$$ As this limit is less than 1, the series is convergent, which is only possible if the sequence $\frac{n^\alpha}{(1+p)^n} \to 0$ as $n \to \infty$.
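If you want to avoid series machinery altogether, the standard direct estimate (essentially Theorem 3.20(d) in Rudin's Principles of Mathematical Analysis) also works: pick an integer $k>\alpha$. For $n>2k$, the binomial theorem gives $$(1+p)^n>\binom{n}{k}p^k=\frac{n(n-1)\cdots(n-k+1)}{k!}\,p^k>\frac{n^kp^k}{2^kk!},$$ since each factor $n-j$ with $0\le j<k$ exceeds $n/2$. Hence $$0<\frac{n^\alpha}{(1+p)^n}<\frac{2^kk!}{p^k}\,n^{\alpha-k}\longrightarrow 0,$$ because $\alpha-k<0$.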
Take the logarithm and show that it goes to $-\infty$. If the logarithm goes to $-\infty$, then the sequence itself goes to $0$.
Is there a standard category-theoretic way to express a loop or quasigroup? The standard way to encode a group as a category is as a "category with one object and all arrows invertible". All of the arrows are group elements, and composition of arrows is the group operation. A loop obeys similar axioms to a group, but does not impose associativity. Inverses need not exist, but a "cancellation property" exists -- given $xy = z$, and any two of $x$, $y$, and $z$, the third is uniquely determined. Quasigroups need not even have a neutral element. Given the lack of associativity, arrows under composition do not work to encode loop elements. Is there a natural way to do this?
As expressed by Qiaochu Yuan in this and this comment, the way that category theory applies to studying loops and quasigroups is in the form of a category whose objects are loops, resp. quasigroups. Only a few structures can actually be described as categories having certain special properties (among them sets, groups, and partially ordered sets). For a structure to have any chance of being a "special type of category", it is of course necessary that the defining properties of a category are somehow satisfied by the structure in question. For loops and quasigroups, this is not obvious to say the least, due to the lack of associativity (which is all-important in category theory).
I know this is an older thread, but I've been thinking about basically this question for a few months now. Something I've come up with is that a quasigroup requires certain automorphisms to exist on the categorical product of 3 elements. An example of this: in the category of sets the products $A\times B$, $C\times A$ and $B\times C$ are all isomorphic to subsets of $A\times B\times C$, which is isomorphic to $C\times A\times B$ and $B\times C\times A$. These isomorphisms are what allow us to distinguish between a binary function and its left and right inverses, and are generalizable (I believe) within the context of categorical products, as the proper morphisms should exist between categorical products of pairs and products of triples. Once these automorphisms are established, I believe quasigroups arise naturally from the existence of morphisms between categorical products of pairs and objects. This is all really sketchy right now; I'm honestly just getting into category theory, but I believe the above works?
Checking the existence of a solution for a set of linear equality and inequality equations I would like to know if there is a method to check the existence of a solution for a given set of linear equations composed of both equalities and inequalities. I'm not interested in the solution itself, but only in whether there exists at least one solution or not. Edit1: Mixed systems of linear equations and inequalities, $a_{1,1} x_1 + \dots + a_{1,n} x_n = b_1$ $a_{2,1} x_1 + \dots + a_{2,n} x_n = b_2$ $\dots$ $a_{m-1,1} x_1 + \dots + a_{m-1,n} x_n \geq b_{m-1}$ $a_{m,1} x_1 + \dots + a_{m,n} x_n \geq b_m$ The number of equations $m$ might be less than, equal to, or greater than $n$ (the number of unknowns). Edit2: Example1, $ \begin{cases} 2x-y \geq -3 \\ -4x-y \geq -5 \\ -x+y=4 \end{cases} $ There is no solution for the above system. I'm looking for a systematic (algorithmic) way to determine if such systems of equations have any solution or not.
Yes. You can determine the set of solutions of a given system of equations and inequalities. The various cases you can come across are: 1) a unique solution, 2) infinitely many solutions, 3) no solution. A systematic way to decide which case you are in is sketched below.
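One systematic method: treat it as a linear-programming feasibility (phase-1) problem with a zero objective; any LP solver then reports whether the constraint set is empty. A sketch with scipy (note that linprog uses $\le$ constraints and defaults to $x\ge0$ bounds, so the $\ge$ rows are negated and the bounds freed), applied to Example 1 — where substituting $y=x+4$ forces both $x\ge1$ and $x\le\frac15$, so the system is indeed infeasible:

```python
from scipy.optimize import linprog

# Example 1: 2x - y >= -3, -4x - y >= -5, -x + y = 4.
res = linprog(c=[0, 0],                  # zero objective: feasibility only
              A_ub=[[-2, 1], [4, 1]],    # the >= rows, multiplied by -1
              b_ub=[3, 5],
              A_eq=[[-1, 1]], b_eq=[4],
              bounds=[(None, None), (None, None)])
print(res.status)                        # 2 means infeasible
```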
Let $T:\mathbb{P_2}\rightarrow\mathbb{P_2}$ be a linear transformation defined by $T(a+bt+ct^2)=3a+(5a-2b)t+(4b+c)t^2$ Let $T:\mathbb{P_2}\rightarrow\mathbb{P_2}$ be a linear transformation defined by $T(a+bt+ct^2)=3a+(5a-2b)t+(4b+c)t^2$. Find a basis $C$ for $\mathbb{P_2}$ so that the matrix $[T]_C$ is a diagonal matrix. My attempt: Suppose we take the basis $C = \{1, t, t^2\}$. I am a little confused; can anyone help me solve this?
You have to find a basis $C=\{p_1,p_2, p_3\}$ of $\mathbb{P_2}$ and real numbers $t_1,t_2,t_3$ such that $T(p_j)=t_jp_j$ for $j=1,2,3$. Then: $[T]_c=diag(t_1,t_2,t_3)$ FRED
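For this particular $T$, the matrix in the standard basis $\{1,t,t^2\}$ is lower triangular, and a CAS finds the eigenbasis directly (a sympy sketch):

```python
import sympy as sp

# Columns are the images T(1) = 3 + 5t, T(t) = -2t + 4t^2, T(t^2) = t^2.
A = sp.Matrix([[3,  0, 0],
               [5, -2, 0],
               [0,  4, 1]])

P, D = A.diagonalize()   # A = P D P^{-1}
print(D)                 # diagonal matrix with eigenvalues -2, 1, 3 (order may differ)
print(P)                 # columns: coordinates of the eigenbasis in {1, t, t^2}
```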
Do the following operations on the matrix of $T$ w.r.t. the standard basis: $$C'_2 = C_2 - 4C_3, \qquad C'_1 = 2C_1 - 5C'_2$$ to get a diagonal matrix. Do the same operations on the standard basis matrix to obtain $[2,\; 5+t,\; -20-4t+t^2]$ as a basis. This basis will give a diagonal matrix.
$(C_{[0;1]}, \Vert . \Vert_1)$ is not a Banach space I'm going to prove that $C_{[0;1]}$ is not a Banach space with respect to the norm $\Vert x \Vert_1 = \int_{0}^{1} |x(t)| dt$ by considering the series $\sum_{n=1}^{\infty}x_n$ where $x_{n}(t)= t^{n} \cdot \sqrt{1-t}$. By the MCT we can easily check that this series is absolutely convergent. Indeed $$ \begin{array}{ll}\sum_{1}^{\infty}\Vert x_n \Vert_1 & = \sum_{1}^{\infty}\int_{0}^{1}x_{n}(t)dt = \int_{0}^{1} \sum_{1}^{\infty} x_n(t)dt=\int_{0}^{1}\sum_{1}^{\infty}t^n\sqrt{1-t}\, dt \\ &= \int_0^1 \frac{t}{\sqrt{1-t}}dt = \frac{4}{3} \end{array} $$ But I'm stuck on proving that this series is not convergent in $(C_{[0;1]}, \Vert . \Vert_1)$. I think that if this series converges in $(C_{[0;1]}, \Vert . \Vert_1)$ to $x$, it might converge to $x$ pointwise, but I can't prove this. Thank you for your help.
Notice that, for $t\in [0,1)$, the partial sums converge pointwise to $$ g(t)=\sum_{n=1}^\infty x_n(t)=\sqrt{1-t}\sum_{n=1}^\infty t^n=\frac{t\sqrt{1-t}}{1-t}=\frac{t}{\sqrt{1-t}}, $$ by summing the geometric series. Now suppose the series converged in $\Vert\cdot\Vert_1$ to some $x\in C_{[0;1]}$. For each fixed $b<1$, the partial sums converge uniformly to $g$ on $[0,b]$, so $\int_0^b|x-g|\,dt=0$ and hence $x=g$ on $[0,b]$, since both are continuous there. As $b<1$ was arbitrary, $x(t)=t/\sqrt{1-t}$ on $[0,1)$, which is unbounded as $t\to1^-$. This contradicts the continuity of $x$ on $[0,1]$, so the series has no sum in your space.
Your series is indeed absolutely convergent in this norm, as you have already proven. For an example showing non-completeness, consider $f_n(x)$ to be the function that equals $1$ when $x < 1/2$, equals $0$ when $x > 1/2 + 1/n$, and is linear when $1/2 < x < 1/2 + 1/n$. If $f_n$ converged to a function $f$ in the norm above, then viewing the sequence in $L^1$ we would get a subsequence converging a.e. to $f$; thus $f$ must a.e. equal the function that is $1$ for $x < 1/2$ and $0$ for $x > 1/2$, and so $f$ cannot be continuous.
Row swap changing sign of determinant I was wondering if someone could help me clarify something regarding the effect of swapping two rows on the sign of the determinant. I know that if $A$ is an $n\times n$ matrix and $B$ is an $n\times n$ matrix obtained from $A$ by swapping two rows, then $$\det(B)=-\det(A),$$ but I don't know how to prove this. I have been looking for proofs on the internet, and read both in textbooks and lecture notes that are available that this result is very hard to prove and most approaches rely on induction, and so I was wondering if there is something wrong with using that $\det(AB)=\det(A)\det(B)$ and then writing $B=EA$, where $E$ is an elementary matrix swapping two rows, and using this result to get $\det(B)=\det(E)\det(A)=-\det(A)$ (since showing that $\det(E)=-1$ in this case is not that hard).
Yes, your method would work, and it is probably the most elegant possible. We can without loss of generality assume that $E$ interchanges the first two rows. This means that we can write $E$ in block-diagonal form: $$ \begin{pmatrix} 0 & 1 & 0 &\dots & 0 \\ 1 & 0 & 0 &\dots & 0 \\ 0 & 0 & 1 &\dots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \dots & 1 \end{pmatrix} $$ Now if you know how to calculate the determinant by the usual Laplace expansion, starting at the bottom row, you see that the only nonzero terms are... Also, why can we assume it interchanges the first two rows without loss of generality? (Think of what happens if we change a basis...)
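Numerically, the claim $\det(E)=-1$ and its effect on an arbitrary matrix can be spot-checked like this (a numpy sketch):

```python
import numpy as np

E = np.eye(4)
E[[0, 1]] = E[[1, 0]]      # elementary matrix swapping rows 0 and 1
print(np.linalg.det(E))    # -1.0

A = np.random.default_rng(1).standard_normal((4, 4))
print(np.isclose(np.linalg.det(E @ A), -np.linalg.det(A)))   # True
```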
If $~i~$ and $~j~$ are two rows of matrix $~A~$ that are interchanged to give matrix $~A^*~$, apply to $~A~$ and $~A^*~$ the general recursive formula twice (a two-stage recursion) along the two interchanged rows $~i~$ and $~j~$. Using the fact that $~A~$ and $~A^*~$ are identical when rows $~i~$ and $~j~$ are deleted (which is the case for the determinants of the resulting matrices after the two-stage recursion) and rearranging the cofactors in the sums, you will find that $~\det A = -\det A^*$.
Let matrix $A$ be normal. If $A{A^T}$ has $n$ distinct eigenvalues, why is $A$ symmetric? Let matrix $A \in {M_n}(\Bbb R)$ be normal. If $A{A^T}$ has $n$ distinct eigenvalues, why is $A$ symmetric?
Here's a late answer. First, a bit of notation. Since we will work over $\mathbb C,$ we will need the notion of "conjugate-transpose" of a complex matrix. Namely, if $M\in \mathbb C^{n\times n},$ then its "conjugate-transpose" is defined as $$ M^H = (\overline M)^T, $$ where $\overline \cdot$ denotes complex conjugation of the matrix entries and $\cdot^T$ denotes matrix transposition. This is a well known operation. Observe that $\overline \cdot$ and $\cdot^T$ commute. And, of course, for $N\in\mathbb R^{n\times n},$ we have $N^H = N^T.$ In the following, we will use only the operation $\cdot^H.$ On to the actual question. So we are given a matrix $A\in\mathbb R^{n\times n}$ which is normal, i.e. $A^HA = AA^H$ and such that $A^HA$ has $n$ distinct eigenvalues. We want to show that $A$ is symmetric, i.e. $A^H = A.$ Observe that, since $A$ is real, its characteristic polynomial has real coefficients. This means that the eigenvalues of $A,$ i.e. the roots of its characteristic polynomial, are either real or come in complex conjugate pairs. In other words, we can enumerate the eigenvalues of $A$ as a sequence $$ \lambda_1,\ldots,\lambda_r,\mu_1,\overline{\mu_1},\ldots,\mu_s,\overline{\mu_s}, $$ with $r,s\in \mathbb N_0,$ $\lambda_j\in\mathbb R,$ and $\mu_k\in \mathbb C$ with $\Im\mu_k > 0.$ Repetitions are allowed, i.e. we can have $\lambda_j = \lambda_k$ or $\mu_j = \mu_k$ for $j\neq k.$ Now, since $A$ is normal, it can be diagonalized using a unitary transformation. This is a well known fact. In more detail, we have the following. Put $$ D = {\rm diag}(\lambda_1,\ldots,\lambda_r,\mu_1,\overline{\mu_1},\ldots,\mu_s,\overline{\mu_s})\in \mathbb C^{n\times n}, $$ i.e. $D$ is the diagonal matrix with the eigenvalues of $A$ on the diagonal. Then, there is a unitary matrix $U\in \mathbb C^{n\times n}$ such that $$ \tag{1} A = U^HDU. $$ Recall that "unitary" means $$ \tag{2} U^HU = UU^H = I, $$ where $I \in \mathbb C^{n\times n}$ is the identity matrix. From $(1),$ we get $$ \tag{3} A^H = (U^HDU)^H = U^HD^HU. $$ From $(1),$ $(2),$ and $(3),$ we conclude $$ A^HA = \left(U^HD^HU\right)\left(U^HDU\right) = U^HD^HDU. $$ This shows that the eigenvalues of $A^HA$ are the diagonal elements of the diagonal matrix $$ \begin{align} D^HD & = {\rm diag}(\lambda_1^2,\ldots,\lambda_r^2,\overline{\mu_1}\mu_1,\mu_1\overline{\mu_1},\ldots,\overline{\mu_s}\mu_s,\mu_s\overline{\mu_s}) \\ & = {\rm diag}(\lambda_1^2,\ldots,\lambda_r^2,|\mu_1|^2,|\mu_1|^2,\ldots,|\mu_s|^2,|\mu_s|^2). \end{align} $$ Observe that every pair $\mu_j,\overline{\mu_j}$ of complex conjugate eigenvalues of $A$ produces twice the same eigenvalue for $A^HA.$ By assumption, this can't happen. So we must have that $s = 0,$ $A$ has only real eigenvalues, and in particular $D\in \mathbb R^{n\times n},$ i.e. $D$ is real. But then $$ \tag{4} D^H = D. $$ Now we get from $(1),$ $(3),$ and $(4)$ $$ A^H = U^HD^HU = U^HDU = A, $$ as desired.
$A$ is normal and $AA^T$ has $n$ distinct eigenvalues, so $AA^T=X \Lambda^2X^{-1}=A^TA $, where $\Lambda$ is diagonal and has $\lambda$'s on the diagonal. Now $AA^T=(X \Lambda X^{-1})(X \Lambda X^{-1})=A^TA $ so that $A=A^T$.
How to find the limit $\lim_{n\to \infty}\frac{n}{2^n-1}$? I've been trying to find and prove $$\lim_{n\rightarrow\infty}\frac{n}{2^n-1}$$ but I haven't even figured out the limit; could anyone help?
$$\lim_{n\to\infty}\frac{n}{2^n-1}=\lim_{n\to\infty}\frac{1}{2^n\ln2}=\dfrac{1}{\ln{2}}\lim_{n\to\infty}2^{-n}=\dfrac{1}{\ln{2}}\cdot0=0$$
You can use the Stolz theorem: $ \lim \frac{a_n}{b_n} = \lim\frac{a_{n+1}- a_n}{b_{n+1}-b_n} $ provided that $ b_n \rightarrow \infty $ monotonically and the limit on the right exists.
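Carrying this out here, with $a_n=n$ and $b_n=2^n-1$ (which increases monotonically to $\infty$): $$\frac{a_{n+1}-a_n}{b_{n+1}-b_n}=\frac{(n+1)-n}{(2^{n+1}-1)-(2^n-1)}=\frac{1}{2^n}\longrightarrow 0,$$ so the original limit is $0$ as well.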
Subgroups of Symmetric groups isomorphic to dihedral group Is $D_n$, the dihedral group of order $2n$, isomorphic to a subgroup of $S_n$ ( symmetric group of $n$ letters) for all $n>2$?
Let $\left\{a_1,a_2,\ldots,a_n\right\}$ denote the vertices of an $n$-gon. The cyclic permutation $\alpha=(a_1,a_2,\ldots,a_n)$ lies in $S_n$. Choose any vertex, e.g. $a_1$, and consider it the fixed point of a reflection of the plane (i.e. the permutation $\beta=(a_2,a_n)(a_3,a_{n-1})\ldots$ of order $2$). This permutation also lies in $S_n$. Now $D_n$ is generated by $\alpha, \beta \in S_n$, which shows that the $D_n$ thus defined is a subgroup of $S_n$.
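To make this concrete for $n=4$: take $\alpha=(a_1\,a_2\,a_3\,a_4)$ and $\beta=(a_2\,a_4)$, and one checks directly that $\beta\alpha\beta^{-1}=\alpha^{-1}$, which is exactly the dihedral relation; together with $|\alpha|=4$, $|\beta|=2$ and $\beta\notin\langle\alpha\rangle$, this gives a subgroup of order $8=2\cdot 4$ isomorphic to $D_4$.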
If you can build up a faithful action of $D_{2n}$ on a set of $n$ elements, then you are done. I'll adopt this definition of the dihedral group of order $2n$: $$D_{2n}:=\langle r,s\mid r^n=s^2=(sr)^2=1\rangle \tag 1$$ Let's consider the action of $D_{2n}$ by left multiplication on the set of left cosets of some subgroup $H$ of index $n$, i.e. of order $2$; such an action, say $\varphi$, has kernel $\operatorname{ker}\varphi=\bigcap_{g\in D_{2n}}gHg^{-1}$, and hence if there is a $\tilde g\in D_{2n}$ such that $\tilde gH\tilde g^{-1}\cap H=\{1\}$, then the action is faithful. Now, take $H=\{1,s\}$ and $\tilde g=r$; then $\tilde gH\tilde g^{-1}\cap H=\{1,r^2s\}\cap\{1,s\}=\{1\}$ as soon as $n>2$. Therefore, indeed $D_{2n}\hookrightarrow S_n$ for $n>2$.
Product of two non-zero complex numbers equals zero Is it possible for the product of 2 non-zero complex numbers to be 0?
No, it is not possible. For complex numbers $z_1,z_2 \in \mathbb{C}$ we have that $$z_1\cdot z_2 = 0 \quad\implies\quad z_1 = 0\quad\text{ or }\quad z_2 = 0.$$ As a hint: non-zero complex numbers have multiplicative inverses.
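To expand the hint: if $z_1z_2=0$ and $z_1\neq 0$, then $z_1$ has a multiplicative inverse and $$z_2=z_1^{-1}(z_1z_2)=z_1^{-1}\cdot 0=0,$$ so at least one of the factors must vanish.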
It's not possible. The product of two complex numbers can equal $0$ if and only if one of the numbers is zero: i.e., if $a\cdot b = 0$ where $a,b$ are complex numbers, then either $a$ or $b$ is $0$.
Expected value of a single variable Function $\displaystyle{ f_X(x) = 6x^5 \text{ for } 0 < x <1}$ A.) Find $\mathbb{E}\left[X\right]$ B.) Find $P\bigl[1/3 \le X \le 2/3\bigr]$
Basic probability theory. A. $E(X)=\int_0^1xf_X(x)dx=\frac{6}{7}$. B. $P(1/3\le X \le 2/3)=\int_{1/3}^{2/3}f_X(x)dx=(2/3)^6-(1/3)^6$
How do I calculate the number of unique permutations in a list with repeated elements? I know that I can get the number of permutations of items in a list without repetition using $n!$. How would I calculate the number of unique permutations when some of the $n$ elements are repeated? For example, ABCABD: I want the number of unique permutations of those 6 letters (using all 6 letters).
There is a specific formula for such problems: permute all elements, then divide out the rearrangements of identical elements among themselves, viz. $\dfrac{6!}{2!\,2!}$
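For ABCABD (two A's and two B's) this gives $$\frac{6!}{2!\,2!}=\frac{720}{4}=180$$ distinct arrangements.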
Do you want the number of combinations of a fixed size? Or all sizes up to 6? If your problem is not too big, you can compute the number of combinations of each size separately and then add them up. Also, do you care about order? For example, are AAB and ABA considered unique combinations? If these are not considered unique, consider trying stars-and-bars: How to use stars and bars (combinatorics)
how to memorize the sum and product of roots for an $n^{th}$ degree equation For my exams I need to know the following equations by heart: for a polynomial equation $a_nx^n+a_{n-1}x^{n-1}+\dots+a_1x+a_0=0,$ the sum and product of the roots are given by $$\textrm{Sum}=-\frac{a_{n-1}}{a_n}$$ $$\textrm{Product}=(-1)^n\frac{a_0}{a_n}.$$ I have never been able to memorize these, and for some reason they are not in the formula booklet. If anyone has any mnemonic or trick of some sort for memorizing them, it would be very useful to me. Thank you very much in advance!
Write $$a_n x^n + a_{n-1}x^{n-1} + ... + a_1 x + a_0 = a_n(x-r_1)\cdots(x-r_n)$$ and expand. You see immediately what the constant term and the term of degree $n-1$ are.
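For instance, in degree $2$: $$a_2(x-r_1)(x-r_2)=a_2x^2-a_2(r_1+r_2)x+a_2r_1r_2,$$ so matching coefficients gives $r_1+r_2=-a_1/a_2$ and $r_1r_2=a_0/a_2=(-1)^2a_0/a_2$, exactly the two formulas to be memorized.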
Hint: Write the polynomial as $$a_n(x-r_1)(x-r_2)\cdots(x-r_n),$$ where the roots are $r_1, r_2, \ldots, r_n.$
Determine amount multiplied in sum Suppose you have $n$, and $r$ (result) equals some value less than $n$ (ltn) multiplied by $4$, plus what's left of $n$ after subtracting ltn ($n - $ ltn). E.g. $n = 60$, ltn $= 25$: $r = 4\times 25 + (60 - 25) = 100 + 35 = 135$. Now, if I know the values of $n$ ($60$) and $r$ ($135$), is there a formula to determine the value of ltn? I can "bruteforce" it by taking $r$ ($135$), finding the highest number not above it that divides by $4$ without fractions ($132/4 = 33$), deducting that quotient from $n$ ($60 - 33 = 27$) and adding the leftover ($27$) to $132$, giving $159$. Well, that's way higher than $135$, so let's move down to $128$ (the next number divisible by $4$), etc... but is there a simpler way? Thanks...
It gives a linear equation: $$r=4l+n-l$$ $$r=3l+n$$ $$l=\frac {r-n}{3}$$
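Checking with the numbers from the question: $l=\frac{135-60}{3}=\frac{75}{3}=25$, as expected.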
Why are the solutions for $\frac{4}{x(4-x)} \ge 1\;$ and $\;4 \ge x(4-x)\;$ different? Why is $\;\dfrac{4}{x(4-x)} \geq 1\;$ not equivalent to $\;4 \geq x(4-x)\;$? Probably a really dumb question, but I just don't see it :(
Other answers (and comments) have already explained the potential pitfalls of multiplying both sides of an inequality by a quantity whose sign is sometimes negative. Here is an alternative approach that might help: $$\begin{align} {4\over x(4-x)}\ge1 &\iff{4\over x(4-x)}-1\ge0\\ &\iff{4-x(4-x)\over x(4-x)}\ge0\\ &\iff{4-4x+x^2\over x(4-x)}\ge0\\ &\iff{(2-x)^2\over x(4-x)}\ge0\\ &\iff x(4-x)\gt0\quad\text{(since }(2-x)^2\text{ is always non-negative)}\\ &\iff0\lt x\lt4 \end{align}$$ Remark: In general the quadratic in the numerator will not be a perfect square, in which case the final steps are a bit more complicated (you can wind up with more than one interval where the inequality holds). But the problem here was concocted to have a simple answer.
If $\dfrac4{x(4-x)}\ge 1$, then $\:x(4-x)$ is positive, hence we can multiply both sides by $x(4-x)$, and therefore it implies $4\ge x(4-x)$. However, the other way around, $ 4\ge x(4-x)$ holds in particular whenever $x(4-x)<0$, and in that case dividing by $x(4-x)$ flips the inequality, so it is equivalent to $\dfrac4{x(4-x)}\le 1$.
Are the laws of mathematics 'absolute' in this universe? We observe that almost all physical phenomena (which have been explained) can eventually be explained by the laws of mathematics. Mathematics seems ubiquitous: for example, the differential equation that governs the simple harmonic motion of a helical spring has the exact same form as the one describing the current in an RLC circuit. The parallels are virtually endless. But it does not necessarily mean that the next phenomenon to be discovered has to follow the laws of mathematics, unless there is a theorem out there which proves that every physical mechanism has to abide by the laws of maths. Could anyone answer with a philosophical insight on this matter, and does Gödel's incompleteness theorem have anything to say on this?
If you haven't already read Eugene Wigner's essay "On the unreasonable effectiveness of mathematics in the natural sciences". It addresses most of what you mentioned. https://www.dartmouth.edu/~matc/MathDrama/reading/Wigner.html
I think they are not. Consider, for any consistent first-order theory S in any language L, the set S* of consistent theories of greater provability strength. The incompleteness theorems tell us that if S is incomplete, then S* can be arranged in an upside-down pyramid, where each S' in S* can be extended with at least con(S') and not con(S'), giving an upwards branching structure of incompatible extensions. If mathematics was absolute, then there would be a subset T(L) of first-order theories in S* that is the "true" tower without branching structure that traces out this absolute mathematical reality for the language L. Is there such an absolute mathematical reality for any meaningful language L? I don't think we have a universally accepted methodology for getting T(L) for arithmetic or set theory.
Need help with the general formula for the Taylor series of $e^x+\sin(x)$ A while ago, I asked about ways to derive the sigma notation for the infinite series of $e^x+\sin(x)$. $$e^x+\sin(x) = 1+2x+\dfrac{x^2}{2!}+\dfrac{x^4}{4!}+\dfrac{2x^5}{5!}+\dfrac{x^6}{6!}+\dfrac{x^8}{8!}+\dfrac{2x^9}{9!}+...$$ In this thread, Clive Newstead brilliantly gave me two formulas. There, he said that the pattern of the series, regarding the coefficients of the numerators, is as follows: 1, 2, 1, 0, 1, 2, 1, 0. He said that the pattern "is periodic with period 4". This comment still puzzles me. I have tried to validate his formula and it is true, all the terms check out, but I don't know how one comes up with this succinct representation of the pattern: $$\sum_{k=0}^{\infty} \left( \dfrac{x^{4k}}{(4k)!} + \dfrac{2x^{4k+1}}{(4k+1)!} + \dfrac{x^{4k+2}}{(4k+2)!} \right)$$ How does one realize that the power is related to $4k$, $4k+1$, $4k+2$ and $4k+3$? This seems related to finding the general formula for a sequence or series of numbers; for example, trivially, $1, 3, 5, 7, ...$ can be represented as $2k+1$, and $0, 2, 4, 6, ...$ can be represented as $2k$, with $k$ the position of the term in the sequence. So my thought is to count the terms whose numerator coefficient is $1$, $2$ or $0$. The ones that have $1$ are the $1^{st}$, $3^{rd}$, $5^{th}$, $7^{th}$, etc. The ones that have $2$ are the $2^{nd}$, $6^{th}$, $10^{th}$, $14^{th}$, etc. The ones that have $0$ are the $4^{th}$, $8^{th}$, $12^{th}$, $16^{th}$, etc. So the difference between the positions of the terms containing the coefficient $1$ is $2$, while for the terms containing $2$ it is $4$, and for the terms containing $0$ it is $4$. I find that the general term for terms that contain $1$ is $2k+1$, for terms that contain $2$ it is $4k-2$, and for terms that contain $0$ it is $4k$. But I don't know how to turn these into $4k$, $4k+1$, $4k+2$, $4k+3$. I realize that this is similar to finding patterns of a sequence on an IQ test, but I don't know how to derive it. A friend of mine said I should look into Lagrange interpolation, but I don't see what this technique has to do with writing down the general terms of sequences and series. Could you help me with this? Methinks this is a simple question; I am just not good at writing down the general term of a sequence yet.
$e^x = 1 + x + \frac {x^2}{2} + \frac {x^3}{3!} + \cdots$ It seems pretty natural to write this as $e^x = \sum_\limits{n=0}^\infty \frac {x^n}{n!}$ But we could look at pairs of terms. $e^x = \sum_\limits{n=0}^\infty \left(\frac {x^{2n}}{(2n)!}+\frac {x^{2n+1}}{(2n+1)!}\right)$ Or even triples of terms. $e^x = \sum_\limits{n=0}^\infty \left(\frac {x^{3n}}{(3n)!}+\frac {x^{3n+1}}{(3n+1)!}+\frac {x^{3n+2}}{(3n+2)!}\right)$ And do the same thing with $\sin x$: $\sin x = \sum_\limits{n=0}^\infty (-1)^n \frac {x^{2n+1}}{(2n+1)!} = \sum_\limits{n=0}^\infty \left(\frac {x^{4n+1}}{(4n+1)!} - \frac {x^{4n+3}}{(4n+3)!}\right)$ Now find a representation of $e^x$ that plays nicely with our representation of $\sin x$, add them together, and you have what you show above.
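To spell out that last step: grouping $e^x$ in blocks of four and adding the four-periodic form of $\sin x$ term by term gives $$e^x+\sin x=\sum_{n=0}^\infty\left(\frac{x^{4n}}{(4n)!}+\frac{2x^{4n+1}}{(4n+1)!}+\frac{x^{4n+2}}{(4n+2)!}+\frac{(1-1)x^{4n+3}}{(4n+3)!}\right),$$ and the $4n+3$ terms cancel, leaving exactly the coefficient pattern $1,2,1,0$.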
The mod $4$ periodicity can perhaps be most easily understood using the formula $e^{ix}=\cos x+i\sin x$, so that $\sin x={e^{ix}-e^{-ix}\over2i}$, which gives $$e^x+\sin x={1\over2i}\sum_{n=0}^\infty{(2i+i^n-(-i)^n)x^n\over n!}$$ Since $i^4=1$, the coefficients $2i+i^n-(-i)^n$ cycle through the values $$\begin{align} 2i+i^0-(-i)^0&=2i+1-1=2i\\ 2i+i^1-(-i)^1&=2i+i-(-i)=4i\\ 2i+i^2-(-i)^2&=2i-1-(-1)=2i\\ 2i+i^3-(-i)^3&=2i-i-i=0 \end{align}$$
What is the integral of $e^x a^x$ Can you confirm that my answer below is correct? $$\int (a^x e^x)dx $$ My attempt: $$\int a^x e^x \, dx = \int (ae)^x \, dx $$ $$\int a^x e^x \, dx = \frac{(ae)^x}{\ln(ae)} + C $$ $$\int a^x e^x \, dx = \frac{a^xe^x}{\ln(ae)} + C $$
Yes, I can confirm that your answer above is correct.
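One quick way to double-check is to differentiate: assuming $ae\neq 1$ (i.e. $a\neq e^{-1}$, so that $\ln(ae)\neq 0$), $$\frac{d}{dx}\,\frac{(ae)^x}{\ln(ae)}=\frac{(ae)^x\ln(ae)}{\ln(ae)}=(ae)^x=a^xe^x.$$ In the excluded case $a=e^{-1}$ the integrand is constantly $1$ and the integral is just $x+C$.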
What will be the domain of the function $f(x)=\sqrt{2\{x\}^2-3\{x\}+1}$? I need to find the domain of the function $$f(x)=\sqrt{2\{x\}^2-3\{x\}+1}$$ where $x \in [-1,1]$ and $\{.\}$ represents the fractional part of $x$. So here's what I tried: clearly the part inside the square root has to be greater than or equal to zero for it to exist, and by factoring the quadratic in terms of $\{x\}$ I get $$(\{x\}-1)(2\{x\}-1)\geq0$$ So, from here I got $$\{x\} \in \bigg(-\infty,\frac{1}{2}\bigg] \cup \bigg[1,\infty\bigg)$$ But we know that the fractional part of $x$ can only vary between $0$ and $1$, i.e., $\{x\} \in[0,1)$. So finally, I get $\{x\} \in \bigg(-\infty,\frac{1}{2}\bigg]$, which further reduces to the following: $$\{x\} \in \bigg[0,\frac{1}{2}\bigg]$$ And, from the question, $x \in[-1,1]$. But how do I finally get the domain for $x$ here? I'm seemingly stuck at the last step.
Hint: Write $x = \left \lfloor x \right \rfloor + \left \{ x \right \}$, and note that $\left \lfloor x \right \rfloor = 0$ or $-1$ on the given domain.
Is collapsing considered a legitimate proof? For example if I want to prove that $2^n - 1 = 1 + 2 + 4 + 8 +...+ 2^{n-1}$ I can obviously use induction and that is accepted. But I can also collapse it like: To Prove $2^n = S(n)$: $S(n) = (1 + 1) + 2 +...+ 2^{n-1}$ $S(n) = (2 + 2) + 4 + 8 +...+2^{n-1}$ $S(n) = (4 + 4) + 8 +...+2^{n-1}$ and so on until $S(n) = 2^{n-1} + 2^{n-1} = 2^n$ Is this method of collapsing considered a legitimate and presentable proof?
Well, sort of, but in fact, writing proofs like the one you want to write is why induction exists. Whenever people say something like "and so on until", they're expressing your intuition that it's possible to continue the argument by induction. The whole point of the method of induction is to make intuitions like this one precise. Let $S(k)=2^k + 2^k + 2^{k+1} + 2^{k+2} + ... + 2^{n-1}$. Then what we want to show is that $S(0) = 2^n$. Your proof basically amounts to saying $S(0) = S(1) = S(2) = ...$ "and so on", until we get $S(0) = S(n-1)$. Notice that $S(n-1) = 2^{n-1} + 2^{n-1}$, which obviously equals $2^n$. So we get $S(0) = 2^n$. To phrase this as a proof by induction, we're going to prove by induction that $S(0) = S(k)$ for all $k<n$, thus we'll obtain $S(0)=S(n-1)$ at the end. Obviously, $S(0) = S(0)$. Now suppose $S(0) = S(k)$. Then: $$\begin{align}S(0) &= S(k) \\&= 2^k + 2^k + 2^{k+1} + 2^{k+2} + ... + 2^{n-1} \\&= 2\cdot2^k + 2^{k+1} + 2^{k+2} + ... + 2^{n-1} \\&= 2^{k+1} + 2^{k+1} + 2^{k+2} + ... + 2^{n-1}\\&=S(k+1)\end{align}$$
The consensus here seems to be "yeah, the proof is OK but rather informal, since it invokes induction 'coded' in the 'and-so-on' statement". I don't agree. Imho this is no proof at all, and there is no induction argument given. The argument starts with the claim itself (short of subtracting one from both sides of the equation), so why bother to continue? The modifications made in order to get to the next line are completely unclear to me. Are all the elements in the series doubled? Then the result (left side) should be doubled as well, but it isn't?
Finding determinant of following matrix I need to find the determinant of the following matrix. $$\begin{bmatrix}1&0&0&0&0&2\\0&1&0&2&2&0\\0&0&1&2&0&0\\0&0&2&1&0&0\\0&2&0&0&1&0\\2&0&0&0&0&1\end{bmatrix}$$ Original Image I did it by simply doing $R_5 - R_1$ and then evaluating the determinant. It's a lengthy process, but the answer came out. Another thing I have noticed afterwards is that it is a symmetric matrix. So finding the determinant may be easy, but I don't know how that helps, which is my main concern. Thanks
Might be easiest to row-reduce just enough so that the matrix becomes a triangular matrix. Then the determinant is the product of the diagonal entries. In this case, we can row reduce to get $$\begin{bmatrix} 1 & 0 & 0 & 0 &0 & 2\\0& 1&0& 2 & 2 &0\\0 & 0 & 1&2&0&0\\0&0&0&-3&0&0\\0&0&0&0&-3&0\\0&0&0&0&0&-3\end{bmatrix},$$ and so the determinant is $-27$.
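In case it helps to see one explicit route to that triangular form (this is just one possible choice of operations; any equivalent sequence works): $R_6\to R_6-2R_1$, $R_4\to R_4-2R_3$, $R_5\to R_5-2R_2$, and finally $R_5\to R_5-\frac{4}{3}R_4$. None of these change the determinant, and the diagonal product is $1\cdot1\cdot1\cdot(-3)\cdot(-3)\cdot(-3)=-27$.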
Why not $\;R_6-2R_1\;$ (on the row beginning with $2$)? Adding scalar multiples of one row to another doesn't change the determinant. Then you can expand with respect to the first column: $$\begin{vmatrix} 1&0&0&0&0&2\\ 0&1&0&2&2&0\\ 0&0&1&2&0&0\\ 0&0&2&1&0&0\\ 0&2&0&0&1&0\\ 2&0&0&0&0&1\end{vmatrix}\stackrel{R_6-2R_1}\longrightarrow\begin{vmatrix} 1&0&0&0&0&2\\ 0&1&0&2&2&0\\ 0&0&1&2&0&0\\ 0&0&2&1&0&0\\ 0&2&0&0&1&0\\ 0&0&0&0&0&\!\!-3\end{vmatrix}=1\begin{vmatrix} 1&0&2&2&0\\ 0&1&2&0&0\\ 0&2&1&0&0\\ 2&0&0&1&0\\ 0&0&0&0&\!\!-3\end{vmatrix}=$$ $$=-3\begin{vmatrix} 1&0&2&2\\ 0&1&2&0\\ 0&2&1&0\\ 2&0&0&1\\\end{vmatrix}=\ldots etc.$$
Find the value of $(a^3 + b^3 + c^3)/(abc)$ if $a/b + b/c + c/a = 1$. Find the value of $$\frac{a^3+b^3+c^3}{abc}\qquad\text{ if }\quad \frac ab + \frac bc + \frac ca = 1.$$ I tried using Cauchy's inequality but it was of no help. Please guide me. $a, b, c$ are real.
There is not enough information to solve this problem. Clearing out denominators, your hypothesis is $$a^2 c + a b^2 + b c^2 = abc \quad (1)$$ and your desired conclusion is $$a^3+b^3+c^3=kabc \quad (2)$$ for some constant $k$. Suppose, for the sake of contradiction, there were a $k$ such that $(1)$ implied $(2)$. Since the polynomial $a^2 c+a b^2+b c^2-abc$ is irreducible, this would mean that it divides $a^3+b^3+c^3-kabc$. But the two polynomials are both cubics, so the only way for the first to divide the second is for the first to be a scalar multiple of the second, and it isn't. It's also easy to generate points on $(1)$ and see that the ratio $(a^3+b^3+c^3)/(abc)$ isn't constant. Just choose random values for $a$ and $b$ and equation $(1)$ turns into a quadratic; solving that quadratic gives you some points to try. You'll see very quickly that nothing like this is true.
Take $a=1$ and $b=-\epsilon$. Then the constraint forces $c=O(1/\epsilon)$ ($\epsilon\rightarrow 0$). By taking $\epsilon$ suitably small, the expression to be evaluated, which is of order $1/\epsilon^3$, can be made big, or much bigger; in particular the ratio is not constant.
Simultaneous equation $x^2 +y=10050$ and $y^2 + x= 2600$ solution Here's the problem: Jim and Tim are sharing money. If I square Jim's money and add on Tim's, I get £10,050. If I square Tim's money and add on Jim's, I get £2,600. How much do they each have? From this the equations are as follows: $x^2 +y=10050$ and $y^2 + x= 2600$. This is a grade 10 math problem, and I want to solve it without using differentiation, integration, etc.
Hint: assuming whole numbers, $y^2 \le 2600 \implies y \le 50\,$ and $x^2 \le 10050 \implies x \le 100\,$. But then $x^2 = 10050-y \ge 10050-50 = 10000\,$ so $x=100\,$.
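Finishing from the hint: $x=100$ gives $y=10050-100^2=50$, and indeed $50^2+100=2600$, so Jim has £100 and Tim has £50.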
$\text{Jim} = x$, $\text{Tim} = y$ $$\begin{cases} x^2 +y=10050\\ y^2 + x= 2600\end{cases}$$ Subtracting the equations, $$x^2 – y^2 +y – x = 10050-2600= 7450 \\( x-y) (x+y-1)= 7450 = 149 \times 50$$ Since $149$ is prime and $x\ge y$, $$\begin{cases} x- y = 50 \\ x + y -1 = 149\end{cases}$$ After solving $x= 100$ and $y = 50$.
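A word on why $(50,149)$ is the right divisor pair (filling in a step left implicit above): $x^2\le 10050$ and $y^2\le 2600$ force $x\le 100$ and $y\le 50$, so $x+y-1\le 149$; among the divisor pairs of $7450=2\cdot 5^2\cdot 149$, namely $(1,7450),(2,3725),(5,1490),(10,745),(25,298),(50,149)$, only the last has both factors within these bounds.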
How to identify rules of inference that establishes validity? I've been trying to determine an explanation for the falsity of a logical statement for some time now and I've had no luck in figuring out exactly how to go about it. The two part question goes as follows: Consider the arguments below. If the argument is valid, identify the rule of inference that establishes its validity. If not, explain why. a. If Robert understands the concepts correctly, he will be able to finish his assignment in two hours. Robert finished his assignment in more than two hours. Therefore, Robert did not understand the concepts correctly. b. If taxes increase, the housing market will decrease. Taxes are not increasing. Therefore, the housing market will not decrease. Perhaps I'm misunderstanding the way to determine the falsity of a logical statement? Any help is appreciated.
The "logical form" of the premise of a) is : if $p$, then $q$. The following is a valid argument : $p \rightarrow q \vDash \lnot q \rightarrow \lnot p$, thus, from the above premise we can correctly conclude with : if not $q$, then not $p$ which is exactly the conclusion of a). The argument in b) is not valid, because : $p \rightarrow q \nvDash \lnot p \rightarrow \lnot q$; thus, from the premise : if $p$, then $q$, we cannot conclude with : if not $p$, then not $q$.
a) is valid by modus tollens. Recall that modus tollens is $$P \to Q,\ \lnot Q \ \vdash\ \lnot P.$$ The argument has exactly this form.
There exists an integer k such that $n = 3k+1$. Then $n^2 = (3k+1)^2 =9k^2 + 6k + 1 = 3(3k^2 +2k)+1$. Consider the following proof fragment. There exists an integer $k$ such that $n = 3k+1$. Then $n^2 = (3k+1)^2 =9k^2 + 6k + 1 = 3 (3k^2 +2k)+1$. For each of the statements $(a), (b), (c)$ below, answer the following: does the fragment provide a proof of the statement? If yes, explain why. If no, explain why not. The letter $n$ denotes an integer. (a) If $n$ is odd, then $n^2$ is odd. (b) If $n^2$ is divisible by $3$, then $n$ is divisible by $3$. (c) If $n$ leaves remainder $1$ on division by $3$, then so does $n^2$. I am having some trouble understanding the question. I started (a) by assuming $n$ is odd; I took $n=3k+1$ and showed that $n^2=(3k+1)^2=3(3k^2+2k)+1$, and I said that if $n=3k+1$ is odd then $3k$ is even, so $k$ is even and can be written as $k=2m$ where $m$ is an integer. After that I substituted $k=2m$ into $n^2=3(3k^2+2k)+1$ and found that $n^2=2(\text{something})+1$, so it is odd. I have been told by my professor that I should not use the statement given to me to answer (a), (b), and (c). I was wondering if for (a) I am just supposed to take $n=2k+1$ and then show that $n^2$ is odd. Can someone help me out?
Generally, what does it mean for a number to be odd? Mathematically we say: $2k$ is even and $2k+1$ is odd, for $k\in \mathbb{Z}$. Let's consider part (a): Prove that if $n$ is odd, $n^2$ must also be odd. Let $n$ be an odd number, so $n=2k+1$ for some $k\in \mathbb{Z}$. Then $n^2=(2k+1)^2=4k^2+4k+1=2(2k^2+2k)+1\Rightarrow n^2$ is odd. Why? Any integer multiplied by $2$ is even, and adding $1$ to an even number turns it odd. Consider part (b): Prove that if $n^2$ is divisible by $3$, then $n$ is divisible by $3$. If $n^2$ is divisible by $3$, then $3$ divides the product $n\cdot n$; since $3$ is prime, Euclid's lemma says $3$ must divide one of the factors, i.e. $3\mid n$, and that's the proof. (Equivalently, by the fundamental theorem of arithmetic the primes of $n^2$ are exactly the primes of $n$, each with doubled exponent, so $3$ appears in the factorization of $n^2$ only if it already appears in that of $n$.) Consider part (c): Can you express $n^2=3M+R$ with $M\in \mathbb{Z}$ and $R$ your remainder, in this case $R=1$?
Question has been answered in the comments by JMoravitz.
Relationship between the column space of a matrix $A$ and its non-free (pivot) columns Given an $m\times n$ matrix $A$ with $m\leq n$, with the rank of $A$ being less than $n$, is it necessarily true that the columns in $A$ representing the free variables are linear combinations of the pivot columns? If I am to figure out the column space of $A$, without having to calculate which of the columns are redundant (i.e. linear combinations of other columns), can I reliably say that $C(A)$ is the span of all (and only) the pivot columns in $A$? I was watching a video by Khan Academy where it seemed that this was the case, at least for the example given... but I don't know if it generalizes for all matrices $A$ where the null space does not equal $\{\vec{0}\}$ Example: $$A=\left[\begin{array}{rrrr}1 & 1 & 1 & 1\\ 2 & 1 & 4 & 3 \\ 3 & 4 & 1 & 2 \end{array}\right]$$ Its column space is the span of the two vectors $\left[\begin{array}{r}1\\2\\3 \end{array}\right]$ and $\left[\begin{array}{r}1\\1\\4\end{array}\right]$, which just so happen to be the only two pivot columns. The other two are free variable columns.
This is true for all matrices. Elementary row operations preserve linear relationships between the columns of a matrix. Suppose we have a matrix $A$ with columns $\mathbf{a}_i$, along with the reduced row echelon form $R$ with columns $\mathbf{r}_i$. Then for any set of coefficients $c_i$, we have $$\sum_{i=1}^nc_i\mathbf{a}_i = \mathbf{0}\iff \sum_{i=1}^nc_i\mathbf{r}_i = \mathbf{0}$$ Since the pivot columns in $R$ form a basis for the column space of $R$, it follows that the same columns in $A$ form a basis for the column space of $A$.
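As a concrete check on the example from the question: row-reducing $A$ puts pivots in columns $1$ and $2$, and indeed one can verify directly that $$\left[\begin{array}{r}1\\4\\1\end{array}\right]=3\left[\begin{array}{r}1\\2\\3\end{array}\right]-2\left[\begin{array}{r}1\\1\\4\end{array}\right],\qquad \left[\begin{array}{r}1\\3\\2\end{array}\right]=2\left[\begin{array}{r}1\\2\\3\end{array}\right]-\left[\begin{array}{r}1\\1\\4\end{array}\right],$$ so the two pivot columns of $A$ really do span $C(A)$.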
It is not necessarily true that row operations preserve the column space. We have a result which says: If the matrices $A$ and $B$ are related by an elementary row operation, then their row spaces are equal. Hence row-equivalent matrices have the same row space, and hence also the same row rank. But with the column space it is different; another related result is: Row operations do not change the column rank. This says nothing about the column space, only the column rank. For example, consider the effect on the column space of this row reduction $$\left( {\begin{array}{*{20}{c}} 1 & 2 \\ 2 & 4 \\ \end{array}} \right)\mathop \to \limits^{ - 2{\rho _1} + {\rho _2}} \left( {\begin{array}{*{20}{c}} 1 & 2 \\ 0 & 0 \\ \end{array}} \right). $$ The column space of the left-hand matrix contains vectors with nonzero second component. But the column space of the right-hand matrix is different, because it contains only vectors whose second component is zero. The above comments say that we cannot always express the column space of the initial matrix as the span of the pivot columns of the echelon form of that matrix, because, as the example shows, the column space can change under row operations. The only thing that does not change is the column rank.
Conditional probability - sum of dice is even given that at least one is a five Question: Calculate the conditional probability that the sum of two dice tosses is even given that at least one of the tosses gives a five. I'm a bit confused by this. Shouldn't the probability just be 1/2, since we know that at least one of the dice tosses gave us a five, thus the other must give us an odd number?
Let $A$ be the event that at least one of the tosses gives a five (the sample space for the conditional probability), and let $(n_1, n_2)$ represent the outcomes of die 1 and die 2. $$A = \{ (1,5), (2,5), (3,5), (4,5), (5,5), (6,5), (5,1), (5,2), (5,3), (5,4), (5,6) \}$$ (note that $(5,5)$ must be counted only once). Thus $n(A) = 11$. Let $B$ be the event that the sum of the two tosses is even. Then $$B\cap A = \{ (1,5), (3,5), (5,5), (5,1), (5,3)\},$$ so $$P(B\mid A) = \frac{n(B\cap A)}{n(A)} = \frac{5}{11}.$$
Examine this sample space: $\{(1,1),(1,2),(1,3),(1,4),(1,5),(1,6),$ $(2,1),(2,2),(2,3),(2,4),(2,5),(2,6),$ $(3,1),(3,2),(3,3),(3,4),(3,5),(3,6),$ $(4,1),(4,2),(4,3),(4,4),(4,5),(4,6),$ $(5,1),(5,2),(5,3),(5,4),(5,5),(5,6),$ $(6,1),(6,2),(6,3),(6,4),(6,5),(6,6)\}$. If you did not know anything about the two dice, then the probability of an even sum given that you have at least one $5$ is $5/11$. Your intuition for $1/2$, however, requires you to have some information about which die has turned up a five. If you knew for sure everything about column 5 (or row 5) of this matrix, i.e. which die shows the five, then you only have row 5 (or column 5) to deal with, and in that case the probability is $1/2$: your favourable outcomes are $\{(5,1), (5,3), (5,5)\}$ in a sample space of $\{(5,1), (5,2), (5,3), (5,4), (5,5), (5,6)\}$.
Does a closed point of a scheme have an affine open neighbourhood of the same dimension? Consider a scheme $X$ and a closed point $x\in X$. I am wondering whether there is an affine open neighbourhood $x\in U\subseteq X$ such that $$\dim \mathcal O_{X,x}=\dim U.$$ I tried the following approach: let $V=\mathrm{Spec}(R)$ be an affine open neighbourhood of $x$; then $\mathcal O_{X,x}=R_{\mathfrak m_x}$, where $\mathfrak m_x$ is the maximal ideal corresponding to the closed point $x$. Let $f\in R$ with $f\notin \mathfrak m_x$; then $U=\mathrm{Spec}(R_f)$ is an open subset of $V$ that contains $x$. As every chain of prime ideals in $R_{\mathfrak m_x}$ is a chain in $R_f$ as well, we have $\dim\mathcal O_{X,x}\leq \dim U$. Unfortunately, I do not see whether we can choose an $f$ for which we obtain equality. If this is not possible in general, can we prove it for a variety, i.e. an integral, separated scheme of finite type over $\mathbb C$?
The answer to your first question is no. Let $k$ be a field and $m_i$ a sequence of positive integers such that $m_{i+1}-m_i >m_i-m_{i-1}$. For each positive integer $i$, let $\mathfrak{p}_i=(x_{m_i+1},\dots,x_{m_{i+1}})$, which is a prime ideal of $B=k\left[x_1,\dots,x_n,\dots\right]$. Let $S$ be the union of the $\mathfrak{p}_i$'s and $A$ the localization of $B$ at $S$ (this ring was introduced by Nagata). Then, the $S^{-1}\mathfrak{p}_i$'s are maximal ideals of $A$ (see Remark 1 of Tomo's answer in this MSE question). Now, let $X=\text{Spec}(A)$ and choose $x=S^{-1}\mathfrak{p}_i$, a closed point of $X$: note that $\dim\mathscr{O}_{X,x}=\text{ht}(S^{-1}\mathfrak{p}_i)=m_{i+1}-m_i$. Now let $D(f)$ be a principal open set containing $x$ for some $f\in A$. One may assume that $f$ comes from an element $g$ of $B$ (take $g$ to be the numerator of a representative of $f$) and let $l>i+1$ be such that $l$ is greater than all the indices of the indeterminates occurring in $g$: then, for any $j\geq l$, you may check that $S^{-1}\mathfrak{p}_j\in D(f)$ (this comes from the choice of $l$: none of the $x_n$'s occur in $g$ for $n\geq l$). This shows that $\dim D(f)=\infty>\dim\mathscr{O}_{X,x}$. Nonetheless, the answer is positive for schemes locally of finite type over a field. Indeed, if $X$ is locally of finite type over a field $k$, then one may show that for any point $x\in X$ one has that $$\dim_xX=\dim\mathscr{O}_{X,x}+\text{degtr}(\kappa(x)/k)$$ (where $\dim_xX=\min_{U\ni x}\dim U$). When $x$ is closed, one gets that $\dim_xX=\dim\mathscr{O}_{X,x}$, which answers your question.
Fit exponential with constant I have data which should fit an exponential function with a constant: $$y=a\cdot\exp(b\cdot t) + c.$$ Now I can fit an exponential without a constant using least squares by taking the log of $y$ and making the whole equation linear. Is it possible to use least squares to solve it with a constant too (I can't seem to convert the above to linear form; maybe I am missing something here), or do I have to use a nonlinear fitting function like nlm in R?
A direct method of fitting (no guessed initial values required, no iterative process) is shown below. For the theory, see the paper (pp.16-17) : https://fr.scribd.com/doc/14674814/Regressions-et-equations-integrales
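If a concrete sketch helps, here is a minimal Python transcription of that integral-equation idea (my own illustration, with made-up function and variable names, not code from the paper): since $y'=b\,(y-c)$, integrating from $t_0$ gives $y-y_0=b\,S-bc\,(t-t_0)$ with $S(t)=\int_{t_0}^{t}y\,d\tau$, so $b$ comes from one linear least-squares solve, after which $a$ and $c$ follow from an ordinary linear fit.

```python
import numpy as np

def fit_exp_with_constant(t, y):
    # Sketch of the integral-equation method (names are mine, for illustration):
    # y = a*exp(b*t) + c satisfies y' = b*(y - c); integrating from t[0] gives
    #   y - y[0] = b*S - b*c*(t - t[0]),  with S = cumulative integral of y,
    # so b falls out of a linear least-squares solve; a and c then come from
    # an ordinary linear fit of y against exp(b*t) and 1.
    t = np.asarray(t, dtype=float)
    y = np.asarray(y, dtype=float)
    # cumulative trapezoidal approximation of S
    S = np.concatenate(([0.0], np.cumsum(0.5 * (y[1:] + y[:-1]) * np.diff(t))))
    M = np.column_stack([S, t - t[0]])
    (b, _), *_ = np.linalg.lstsq(M, y - y[0], rcond=None)
    E = np.column_stack([np.exp(b * t), np.ones_like(t)])
    (a, c), *_ = np.linalg.lstsq(E, y, rcond=None)
    return a, b, c

# sanity check on noiseless synthetic data
t = np.linspace(0.0, 2.0, 200)
y = 3.0 * np.exp(1.5 * t) + 0.7
print(fit_exp_with_constant(t, y))  # should be close to (3.0, 1.5, 0.7)
```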
I tried to deduce the formula for $b$ in the case of equidistant $t_i$'s, but I got a different one: there is a reciprocal under the logarithm in my result. (Please check it; maybe I am not right.) Sorry, I see it now... the $t_3-t_1$ minus sign makes the content of the log come out right.
Is there a name for this special matrix? $$C_{n\times n}={\begin{bmatrix} 1 & x_{12} & \dots & x_{1n} \\ {x_{12}}^{-1} & 1 & \dots & x_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ {x_{1n}}^{-1} & {x_{2n}}^{-1} & \dots & 1 \end{bmatrix}}$$ Criteria $$x_{ab} = \frac{1}{x_{ba}}, \qquad x_{ab}>0, \qquad x_{aa} = 1$$ Alternatively, entrywise, $$C \circ C^{T} = \mathbf 1$$ (the all-ones matrix, where $\circ$ denotes the Hadamard product). I know that this matrix models currency exchange (without commission or fluctuation). So I'm guessing it's called a currency matrix or a trade matrix. I'm just after a name so I can search its properties. I couldn't spot it in the Matrix Reference Manual. http://www.ee.ic.ac.uk/hp/staff/dmb/matrix/special.html I'm curious about some of its algebraic properties; for example, it has only 1 non-zero eigenvalue. What does this eigenvalue signify? $$\text{Eigenvalues of} \begin{bmatrix} 1 & 6 & 30 & 210 \\ 1/6 & 1 & 5 & 35 \\ 1/30 & 1/5 & 1 & 7 \\ 1/210 & 1/35 & 1/7 & 1 \end{bmatrix} = 0,0,0,4$$ Any help, ideas, advice greatly welcomed. Thanks
$C$ is a positive reciprocal matrix; to be an exchange matrix, the additional condition $a_{ij} \times a_{jk} = a_{ik}$ is required. $C$ is then a positive Saaty-consistent reciprocal (PS-cR) matrix. The eigenvalues of an $n\times n$ PS-cR matrix are $n,0,\ldots,0$. Thank you @Algebraic Pavel for the term reciprocal.
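A quick way to see the eigenvalue claim: Saaty-consistency $a_{ij}a_{jk}=a_{ik}$ forces $a_{ij}=w_i/w_j$ for the positive vector $w$ with $w_i=a_{i1}$, so $C=w\,(1/w_1,\dots,1/w_n)$ is rank one; its only nonzero eigenvalue therefore equals $\operatorname{tr}C=n$, with eigenvector $w$.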
It is the Laplacian matrix, which is normally $I-A$, where $A$ is the adjacency matrix. https://en.m.wikipedia.org/wiki/Laplacian_matrix
Proof or disprove that the following statement is true I have a problem I need help with: Prove or disprove that P(M $\cup$ N) $\neq$ P(M) $\cup$ P(N). What I have tried so far Consider: M = $\{ a,b,c\}$ and N = $\{\text{red},\text{blue},\text{yellow}\}$ $\Rightarrow$ P(M) ${}= 3^3 = 27$ and P(N) ${} = 3^3$ Combine P(M)$\cup$P(N) $\Rightarrow \bigcup \{M,N\} = \{x\mid(x\in M) \text{ and } (x\in N)\}$ $\Rightarrow$ P(M ${}\cup{}$ N) ${}= \{a,b,c,\text{red}, \text{blue}, \text{yellow} \}$ $\Rightarrow$ P(M,N) ${}= 6^6 = 46656$ $\Rightarrow$ P(M $\cup$ N) ${}\neq{}$ P(M) ${}\cup{}$ P(N) Is my answer correct?
Your answer has a good idea but there are some technical mistakes in it. For example, you say $P(M)=3^3$, which is not true. First of all, $P(M)$ is a set, while $3^3$ is a number. The two are not equal. What you probably want to say $|P(M)|=3^3$, i.e. that the size of the set $P(M)$ is $3^3$. But even then, no, that is not true. To try the proof again, I suggest you try to construct a smaller counterexample, and ditch the idea of set sizes. Simply look at the power sets directly. For an added challenge, try proving this statement: $P(A\cup B)=P(A)\cup P(B)$ if and only if $A=\emptyset$ or $B=\emptyset$.
In your example, $\vert A\vert=\vert B\vert = 3$ and $\vert A \cup B \vert=6$, so $\vert P (A)\vert =\vert P (B)\vert =2^3$ and $\vert P (A \cup B) \vert =2^6$. Now $\vert P (A) \cup P (B)\vert= \vert P (A)\vert +\vert P (B)\vert - \vert P (A)\cap P (B)\vert=2^3+2^3-1=2^4-1= 15$, so they are not equal.
If P(n) divides Q(n) for all integers n then does P(x) divides Q(x)? Let $S = \{ f \in \mathbb{Q}[x]\mid f(n) \in \mathbb{Z}, \forall n \in \mathbb{Z} \}$. Let $P(x), Q(x) \in S $ be polynomials such that for every $ n \in \mathbb{Z}$ either $\frac{Q(n)}{P(n)} \in \mathbb{Z}$ or both $P(n), Q(n)$ are $0$ . Can we conclude $\frac{Q(x)}{P(x)} \in S$ ?, or do we have a counter example?. What if we work in $\mathbb{Z}[x]$? [I know that a $\mathbb{Z}$-basis for $S$ is $x \choose i$ where $i \in \mathbb{Z}_{\geq 0}$.]
Let $dQ(x)=P(x)A(x)+r(x)$ for some nonzero integer $d$, with $A$ and $r$ having integer coefficients, and the degree of $r$ less than the degree of $P$. Then $P(n)$ divides $r(n)=dQ(n)-P(n)A(n)$ for all $n$. But this is impossible unless $r$ is identically zero, since $P(n)$ grows faster than $r(n)$. Hence $Q/P=A/d$ as polynomials; since $A(n)/d=Q(n)/P(n)\in\mathbb Z$ for all but the finitely many integers $n$ with $P(n)=0$, and a rational polynomial that takes integer values at all sufficiently large integers takes integer values at every integer (write it in the binomial basis $\binom{x}{i}$ and use finite differences), we conclude $Q/P\in S$.
If there exists some $n\in \mathbb{Z}$ such that $Q(n)$ and $P(n)$ are both zero, then $\frac{Q(n)}{P(n)}$ does not exist, so at such $n$ we cannot directly make sense of $\frac{Q(x)}{P(x)}$ taking an integer value. So unless the case where both vanish is excluded, knowing only that $\frac{Q(n)}{P(n)}\in\mathbb{Z}$ for all the remaining $n$ does not immediately let us conclude that $\frac{Q(x)}{P(x)}\in S$.
The Gamma function and factorial satisfy $\Gamma(n+1) = n!$ This was a question from a mock paper for my upcoming exam; however, my teacher unhelpfully did not post any solutions. Prove that $\Gamma(n+1)=n!$. Can anyone check whether my proof is correct? Thank you for reading. -Alexis
Note that the property $$G(n + 1) = n G(n)$$ you establish also holds for any constant multiple of $\Gamma$, including the zero function. Since the proof you give is basically an inductive argument (it might be useful to say a little more in your solution about how this goes), it suffices to add a base case, that is, to show that the identity holds for the lowest applicable value of $n$. Since $0!$ makes sense (but $(-1)!$ is not defined), one should show that $\Gamma(0 + 1) = 0!$.
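Concretely, the base case is the computation $$\Gamma(1)=\int_0^\infty t^{1-1}e^{-t}\,dt=\int_0^\infty e^{-t}\,dt=\big[-e^{-t}\big]_0^\infty=1=0!.$$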
If $~n~$ is a positive integer, $$\Gamma(n+1)=n!$$ For negative integers it's undefined. So $$\Gamma(n+1)= n \Gamma(n) =n \Gamma(n-1+1)=n (n-1)\Gamma(n-1) =n(n-1)\Gamma(n-2+1) =n(n-1)(n-2)\Gamma(n-2) =\cdots =n(n-1)(n-2)\cdots 3\cdot2\cdot1\cdot\Gamma(1) =n(n-1)(n-2)\cdots 3\cdot2\cdot1 = n!$$ since $\Gamma(1)=1$. It's a simple proof.
Pure mathematics in our society Is there some book or essay which deals with the sociological and economic justification of doing and funding pure mathematics? I'm looking for a modern version of Hardy's A Mathematician's Apology, but if possible from a non-mathematician's point of view. For example, the book should answer the following type of questions: Why is it (and should it be?) possible in our society to earn a living by calculating, say, K-theory groups of spectra? Why does society support a subculture of mathematicians who solve abstract problems which "obviously" don't have any connection to the rest of the world? I'm not asking for personal opinions from math.SE users - this is a reference request.
Not an essay or book, but there is a rather informative and clear video address by Timothy Gowers (a Fields Medalist) in 2000 on the "Importance of Mathematics" for an audience containing non-mathematicians in Paris. He specifically addresses the issue of the (enormous) benefits that mathematics yields on a (very small) investment, its intrinsic cultural value and the interconnectedness of mathematics that make the "useful" areas inseparable from the "useless" ones. It is available for download from the Clay Math Institute's website here, you'll have to scroll down to the bottom of that page for the download link.
There doesn't seem to be very much published: Mathematics in Society and History: Sociological Inquiries, S. Restivo (Springer); On the Sociology of Mathematics, D. J. Struik (Science & Society) (JSTOR); http://www.amazon.com/What-Mathematics-Really-Reuben-Hersh/dp/0195130871/ref=sr_1_1?s=books&ie=UTF8&qid=1404379719&sr=1-1&keywords=mathematics+hersh (and other books by the same author)
Divergence of the sequence $\sin(n!)$ Does the sequence $\sin(n!)$ diverge (or converge)? It seems the sequence diverges. I tried for a contradiction but with no success. Thanks for your cooperation.
It depends on whether the argument of $\sin$ is in radians or degrees. In degrees, $n!$ eventually becomes a multiple of $360$, and from that point on the value is zero for every $n$, so the sequence converges. In radians this does not happen, as $\pi$ is irrational.
Hint: Take a subsequence in which $a_{n_i} \approx \pi ( 4i+1)/2 $ and another where $b_{n_j} \approx \pi ( 4i +3)/2$.
Show that if $\int_0^x f(y)dy \sim Ax^\alpha$ then $f(x)\sim \alpha Ax^{\alpha -1}$ Let $f$ be a real, continuous function defined on $[0,\infty)$ such that $xf(x)$ is increasing for all sufficiently large values of $x$. Show that if $$\int_0^x f(y)\,dy \sim Ax^\alpha \quad \left(\,x\to \infty\right)$$ for some positive constants $A$ and $\alpha$, then $$f(x)\sim \alpha Ax^{\alpha -1} \quad \left(\,x\to \infty\right).$$ Clearly, I have to use differentiation somewhere, but I don't know how to manipulate $\lim_{x\to \infty}\frac{\int_0^x f(y)\,dy}{Ax^\alpha}=1$ to get the desired result. From suggestions given, I know that by L'hospital's rule, it's enough to show that the limit $\lim \frac{f(x)}{\alpha Ax^{\alpha -1}}$ exists, and this limit equals $ \lim \frac{xf(x)}{\alpha Ax^{\alpha}}$. From here, I'll need to use the given assumption that $xf(x)$ is eventually increasing. But how can I show the existence of the limit based on these facts? I would greatly appreciate any solutions, hints or suggestions.
Ooh, a tauberian theorem with a simple elementary proof, cool. Assume $x$ is always so large below that $xf(x)$ is increasing. Fix $\delta>0$. It follows that $$\int_x^{(1+\delta)x}f(y)\,dy\sim A((1+\delta)^\alpha-1)x^\alpha.$$In particular, if $x$ is large enough we have $$\int_x^{(1+\delta)x}f(y)\,dy\le(1+\delta)A((1+\delta)^\alpha-1)x^\alpha.$$But $yf(y)$ increasing shows that $$\int_x^{(1+\delta)x}f(y)\,dy\ge\int_x^{(1+\delta)x}\frac{xf(x)}{y}\,dy=xf(x)\log(1+\delta),$$and combining this with the previous inequality shows that if $x$ is large enough then$$\frac{f(x)}{x^{\alpha-1}} \le\frac{(1+\delta)A((1+\delta)^\alpha-1)}{\log(1+\delta)}.$$Letting $\delta\to0$ now shows that $$\limsup\frac{f(x)}{x^{\alpha-1}}\le\alpha A.$$The inequality $\liminf f(x)/x^{\alpha-1}\ge\alpha A$ is proved similarly, starting with $$\int_{(1-\delta)x}^xf(y)\,dy\sim A(1-(1-\delta)^\alpha)x^\alpha.$$
First, obviously, $Ax^\alpha\to+\infty$. Now, $\lim\frac{\int_0^x f(y)dy}{Ax^\alpha}=1$, which means that $\lim \int_0^x f(y)dy = \lim Ax^\alpha=+\infty$. Now you can use l'Hôpital's rule. Even though we already know the limit is $1$, l'Hôpital's statement gives more than that: it states that $\lim \frac{f}{g}=\lim\frac{f'}{g'}$ provided the latter limit exists, which technically means you get what you want.
How to prove this: $p^4\equiv p\pmod {13}$ Let $p$ be a prime number and $n$ a positive integer such that $$p\mid n^4+n^3+2n^2-4n+3.$$ Show that $$p^4\equiv p\pmod {13}.$$ A friend of mine suggested that I might be able to use the results of this problem.
I highly respect subtle mathematics, but here at MSE I give priority to the elementary; I think mainly of beginners who, for obvious reasons, do not understand anything if the reasoning is pitched at too high a level. We have to prove that the polynomial $f(x)=x^4+x^3+2x^2-4x+3$ (which, let us note, is always divisible by the prime $3$, because $f(n)=n(n-1)(n+1)^2+3(n^2-n+1)$) is such that, putting $$f(n)=\prod_{i=1}^{i=r}p_i^{\alpha_i}$$ where $n$ is an arbitrary natural number, the congruence $$p_i^4\equiv p_i\pmod{13}$$ holds for all $p_i$; so it is clear that the primes $p_i$ belong to a certain class excluding a lot of other primes. We can write $$4f(n)=(2n^2+n+5)^2-13(n+1)^2$$ from which $$4f(n)\equiv(2n^2+n+5)^2\pmod{13}$$ A straightforward calculation gives, for $g(x)=(2x^2+x+5)^2$, $$g(\mathbb F_{13})=\{0,4,10,12\}$$ so we have (adding details) in $\mathbb F_{13}$ (where $0$ means $13k$, $1$ means $13k+1$, etc.) $$\begin{cases}4f(n)=0 \hspace{10mm}\text {for } n=0\space \text {and 3}\\4f(n)=4\hspace{10mm}\text {for } n=2,4,8,11\\4f(n)=10\hspace{8mm}\text {for } n=7,9,10,12\\4f(n)=12\hspace{8mm}\text {for } n=1,5,6\end{cases}$$ On the other hand the inverse of $4$ modulo $13$ is $10$, so we have $$\begin{cases}f(n)= 0\hspace{10mm}\text {for } n=0\space \text {and 3} \\f(n)=1\hspace{10mm}\text {for } n=2,4,8,11\\f(n)=9\hspace{10mm}\text {for } n=7,9,10,12\\f(n)=3\hspace{10mm}\text {for } n=1,5,6\end{cases}$$ Now the primes $p_i$ above can only be $13$ and those of the form $13k+1$, $13k+9$ and $13k+3$. This property is easily verified: $p^4\equiv p\pmod{13}\iff 13\mid p(p^3-1)$, so $p^3-1$ is divisible by $13$ when $p\ne13$. In fact $$1^3-1=0=13\cdot0\\\hspace{5mm}9^3-1=728=13\cdot56\\3^3-1=26=13\cdot2$$ But none of the nine integers below are divisible by $13$ $$2^3-1\\4^3-1\\5^3-1\\6^3-1\\7^3-1\\8^3-1\\10^3-1\\11^3-1\\12^3-1$$
We work modulo $13$ throughout. Notice that $$N:= n^4+n^3+2n^2-4n+3 \equiv (n^2-6n-4)^2 \equiv (n-3)^4$$ Clearly $13\mid N\iff 13\mid n-3$, so we assume $p\ne 13$ and thus $13$ does not divide $n-3$. Say a prime divisor $p$ of $N$ is good if $p^4{\equiv}p$. Suppose the statement is not true, so there exists a prime divisor $q$ which is not good. We can also assume that there is no good prime divisor of $N$: if $p\ne 13$ is good, then we can divide $N$ by $p\equiv p^4$ modulo $13$ and observe $N' = N/p^4$ modulo $13$ instead. Also, we can reduce $N$ by all divisors of $N$ of the form $d^4$. So if we now make a prime factorisation for $N$ modulo $13$ we have $$N\equiv 2^a4^b5^c6^d7^e8^f10^g11^h12^i$$ (clearly no factor is $\equiv 0,1,3$ or $9$), where all the exponents are nonnegative and less than $4$, and at least one is positive. Of course, we can reduce this even more: $$N\equiv 2^{a+2b+d+3f+g+2i} 3^{d+i} 5^{c+g}7^e11^h$$ or $$N\equiv 2^x 3^y 5^z(-6)^e(-2)^h$$ where all exponents are nonnegative and less than $4.$ Again we can assume that $y=0$ (else we divide $N$ by $3$ modulo $13$). So we can write: $$ N\equiv (-1)^{3z+e+h} 2^{x+3z+e+h} = (-1)^t2^{x+t}$$ where $t= 3z+e+h$. So $$N\equiv -1\pm2,\pm4,\pm8$$ Now this cannot be of the form $(...)^4$, and we have a contradiction.
Ordinary generating function of powers of 2 Is there a good closed form expression for the generating function of the formal power series $$ A(z) := \sum_{n=0}^\infty z^{2^n} = z + z^2 + z^4 + z^8 + z^{16} + \cdots. $$ Is there a tractable way to retrieve the coefficient of $z^m$ in powers of $A(z)$, say in $A(z)^k$ for $k \geq 1$? Thanks.
The value $A(1/2)=\kappa$ is known as the Kempner number, and was proven transcendental in 1916. The paper "The Many Faces of the Kempner Number", by Adamczewski, may provide some insight for you.
How about $A(z)=\frac{z^2}{1-z^2}$? This works if $|z|<1$. I got this idea from expanding $\frac{1}{1-z}$.
Functions of the form $\int_a^x f(t) dt$ that are commonly used. I am a graduate student and teaching assistant, and I am teaching Calc 1 for the first time. In a few weeks I will be covering the Fundamental Theorem of Calculus. I'm using James Stewart's Calculus textbook, and I was hoping to give students several "real world" examples of functions of the form $g(x)=\int_a^x f(t) dt$ to make the first part of the FTC more accessible. Stewart gives one example, the Fresnel function $$S(x)=\int_0^x \sin (\pi t^2/2)\,dt$$ which is (apparently) used in the theory of the diffraction of light waves. But I was hoping for more examples from physics, chemistry, etc. Any thoughts or ideas?
$$ \Phi(x) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^x e^{-t^2/2}\,dt $$ This is the cumulative distribution function of the standard normal distribution, seen in every course on statistics. Physicists often talk about the "error function", in which $t^2$ appears instead of $t^2/2$ (and then the normalizing constant is different). That's trivially equivalent to this one, but with this version the standard deviation is $1$. Abraham de Moivre considered this function in the 18th century while thinking about a coin-tossing problem. Suppose you toss a coin 1800 times. What is the probability that the number of heads is in the range (say) $\{886,\ldots, 908\}$? An exact answer is computationally far too expensive, but the integral above does it. The derivative of $\Phi$ is conventionally denoted $\varphi$. The one seen tabulated in the back of every statistics textbook is $\Phi$.
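To make the coin-tossing example concrete (my numbers, using a continuity correction): for $1800$ fair tosses the number of heads $H$ has mean $900$ and standard deviation $\sqrt{1800\cdot\tfrac14}=\sqrt{450}\approx 21.2$, so $$P(886\le H\le 908)\approx\Phi\!\left(\tfrac{908.5-900}{21.2}\right)-\Phi\!\left(\tfrac{885.5-900}{21.2}\right)\approx\Phi(0.40)-\Phi(-0.68)\approx 0.66-0.25\approx 0.41.$$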
Why do you need any examples? In my opinion, the FTC is the most important part of calculus because of how it unites the other two parts (differentiation and integration). The derivative form of the FTC implies that the overall properties of a function (the integral) are determined by its local properties (the derivative). The integral form of the FTC indicates that the local properties are regulated by the overall properties. With the FTC, it's much easier to do integrals via antiderivatives. It's an example to show the students how advancements in mathematics make it easier to solve problems. I don't think it's necessary to make the "first part" of the FTC more accessible. Actually, FTC is FTC, and the two so-called parts are essentially the same; they are just represented in two different forms. Similarly, differentiation and integration are essentially the same, as are the two mean value theorems.
Time Complexity of DFS My understanding is that: 1) given a graph G with n vertices and m edges, DFS is O(n + m) 2) DFS can be used to produce a list of all simple paths between 2 vertices u and v This would mean that DFS can produce a list of all simple paths between u and v in polynomial time. However, if we could could list all simple paths between u and v in polynomial time, we should be able to decide if G has a Hamilton Path between u and v in polynomial time. Since determining if G has a Hamilton Path is NP-Complete, my understanding must be incorrect. I'm hoping someone can clarify what I'm missing?
What you're missing is that there could be an exponential number of paths that stop short of connecting $u$ and $v$. Discovering these dead ends using depth-first search and backtracking is what makes DFS require worst-case exponential time to find a Hamiltonian path.
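To make the blow-up concrete, here is a minimal Python sketch (my own illustration, not from any particular source) of the backtracking enumeration; note how the recursion can revisit a node on many different partial paths:

```python
def simple_paths(adj, u, v, path=None):
    """Yield every simple path from u to v; adj maps node -> iterable of neighbours.

    A single DFS traversal is O(n + m), but this enumeration re-explores
    nodes on different partial paths after backtracking, so its total work
    is proportional to the number of paths explored, which can be
    exponential in n.
    """
    path = (path or []) + [u]
    if u == v:
        yield path
        return
    for w in adj[u]:
        if w not in path:  # keep the path simple
            yield from simple_paths(adj, w, v, path)

# tiny example: the complete graph on 4 nodes has 5 simple paths from 0 to 3
K4 = {i: [j for j in range(4) if j != i] for i in range(4)}
print(sum(1 for _ in simple_paths(K4, 0, 3)))  # prints 5
```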
For traversing all the nodes, you visit each node only once, hence linear time for a traversal. For finding all the paths, you might need to visit each node many times (e.g. you need to backtrack to a node many times), and the number of possible paths is exponential, thus exponential time for finding all paths.
How to show $\lim_{x \to 1} \frac{x + x^2 + \dots + x^n - n}{x - 1} = \frac{n(n + 1)}{2}$? I am able to evaluate the limit $$\lim_{x \to 1} \frac{x + x^2 + \dots + x^n - n}{x - 1} = \frac{n(n + 1)}{2}$$ for a given $n$ using l'Hôpital's (Bernoulli's) rule. The problem is I don't quite like the solution, as it depends on such heavy weaponry. A limit this simple should easily be evaluable using some clever idea. Here is a list of what I tried: Substitute $y = x - 1$. This leads nowhere, I think. Find the Taylor polynomial. Makes no sense, it is a polynomial. Divide by the major term. Dividing by $x$ got me nowhere. Find the value $f(x)$ at $x = 1$ directly. I cannot, as the function is not defined at $x = 1$. Simplify the expression. I do not see how I could. Use l'Hôpital's (Bernoulli's) rule. Works, but I do not quite like it. If somebody sees a simple way, please do let me know. Added later: The approach proposed by Sami Ben Romdhane is universal, as asmeurer pointed out. Examples of other limits that can be easily solved this way: $\lim_{x \to 0} \frac{\sqrt[m]{1 + ax} - \sqrt[n]{1 + bx}}{x}$ where $m, n \in \mathbb{N}$ and $a, b \in \mathbb{R}$ are given, or $\lim_{x \to 0} \frac{\arctan(1 + x) - \arctan(1 - x)}{x}$. It seems that all limits of the form $\lim_{x \to a} \frac{f(x)}{x - a}$ where $a \in \mathbb{R}$, $f(a) = 0$ and for which $\exists f'(a)$, can be evaluated this way, which is as fast as finding $f'$ and calculating $f'(a)$. This adds a very useful tool to my calculus toolbox: some limits can be evaluated easily using derivatives, if one looks for $f(a) = 0$, without l'Hôpital's rule. I have not seen this in widespread use; I propose we call it Sami's rule :).
Let $$f(x)=x+x^2+\cdots+x^n-n$$ then by the definition of the derivative we have $$\lim_{x \to 1} \frac{x + x^2 + \dots + x^n - n}{x - 1}= \lim_{x \to 1}\frac{f(x)-f(1)}{x - 1}=f'(1)\\[10pt] = \left[ \vphantom{\frac11} 1 + 2x + 3x^2 + \cdots + nx^{n-1} \right]_{x=1} = \frac{n(n + 1)}{2}$$
Your objection to using l'Hospital's rule is on the basis that it feels like it's "too powerful" a tool for the problem, right? Let's go all the way to the other end, then, and prove the limit using just the definition. $$\lim_{x\to a} f(x) = L \iff \forall \varepsilon \gt 0 \ \exists \delta \gt 0 \ni 0 \lt \left| x-a \right| \le \delta \Rightarrow \left| f(x)-L \right| \le \varepsilon$$ Now, all we need to do is figure out a way that we can always pick a $\delta$ small enough to keep the function within $\varepsilon$ of $L$. The particular limit in question: $$\lim_{x \to 1} \frac{x + x^2 + \dots + x^n - n}{x - 1} = \frac{n(n + 1)}{2}$$ From that, we can take: $$\begin{align*} a&=1\\ f(x)&=\frac{x + x^2 + \dots + x^n - n}{x - 1}\\ L&=\frac{n(n + 1)}{2} \end{align*}$$ So we need to solve $$\left| \frac{x + x^2 + \dots + x^n - n}{x - 1} - \frac{n(n + 1)}{2}\right| < \varepsilon$$ for $\left|x-1\right|$. I really don't feel like doing any of that crunchwork solving that equation. If someone out there does feel like it, please do so, and edit it into my answer. For now, though, I am going to skip ahead a bunch of steps, assume we solved it, and have our solution $$\left|x-1\right| \le g(\varepsilon)$$ Now, we know from the definition that $\left| x-1 \right| \le \delta$, so we can conclude that if we pick $\delta = g(\varepsilon)$, we ensure that the value of $f(x)$ is within $\varepsilon$ of $L$, satisfying our definition of the limit, and proving that $$\lim_{x \to 1} \frac{x + x^2 + \dots + x^n - n}{x - 1} = \frac{n(n + 1)}{2}$$
Evaluate $3\int_{0}^{2\pi} \sin(t) \cos(t) \,{\rm d}t$ $$3\int_{0}^{2\pi} \sin(t) \cos(t) \,{\rm d}t$$ My calculus is a bit rusty and I cannot find where I went wrong. Setting $u= \sin(t)$, I get ${\rm d} u=\cos(t) \,{\rm d} t$ and, thus, $$3\int_{u=0}^{u=0}u \,{\rm d}u=0$$
The substitution is correct. If $f$ is a function with an antiderivative $F$, one has by the fundamental theorem of calculus \begin{equation} \int_a^b f(u(t))u'(t)dt = \int_a^b(F(u(t)))' dt = F(u(b)) - F(u(a)) = \int_{u(a)}^{u(b)}f(x) d x. \end{equation} A sufficient hypothesis is that $u'$ is continuous on $[a,b]$ and that $f$ is continuous on an interval that contains $u([a,b])$. In your case $u(t) = \sin(t)$ and $f(x)=x$. I'm amazed that so many people are puzzled by this simple application of the fundamental theorem of calculus. Let us take the case of the integral \begin{equation} I = \int_0^\pi \sin(x) d x \end{equation} and let $u(x) = - \cos(x)$ as suggested in the comments, with $f(x)= 1$. The above substitution formula gives \begin{equation} I = \int_{-\cos(0)}^{-\cos(\pi)} dx = \int_{-1}^1 d x = 2, \end{equation} which is the correct result. Nor is there any contradiction with the proposed counterexample: there, the invalid substitution is $u(x) = \sin(x)$, which would force $f(u) = \pm\frac{u}{\sqrt{1-u^2}}$, and the original integral must be split at $\pi/2$ to choose between $+$ and $-$. It does not invalidate the above formula with its continuity condition on $f$.
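As a sanity check (my addition; assumes SymPy is available), direct evaluation agrees with the $u=\sin(t)$ substitution:

```python
# Direct evaluation of the integral from the question; it equals 0,
# matching the substitution u = sin(t), which gives 3 * Integral(u, (u, 0, 0)).
import sympy as sp

t = sp.symbols('t')
print(3 * sp.integrate(sp.sin(t) * sp.cos(t), (t, 0, 2 * sp.pi)))  # 0
```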
The result is correct, because the function $x \mapsto \sin(x) \cos(x)$ is $\pi$-periodic and integrates to zero over each period. However, this was more of a lucky strike here, because the sub is delicate. Another (bad) example would be the integral $\displaystyle\int_0^\pi \sin(x) \mathrm{d}x$. If you set $u = \sin(x)$, then you get $\mathrm{d}x = \dfrac{\mathrm{d}u}{\sqrt{1 - u^2}}$, and after substitution, you end up with $\displaystyle\int_0^0 \dfrac{u\,\mathrm{d}u}{\sqrt{1 - u^2}} = 0$, which is clearly false. So something went wrong somewhere. The main problem, in both cases, is that the inverse sine function only gives outputs in the "rightmost" part of the trigonometric circle (in other words, the image of $\sin^{-1}$ is $[-\pi/2, \pi/2]$). So, since in your example the range is $[0, 2\pi]$, it messes up as soon as $x$ goes beyond $\pi/2$. That's why you need to be careful when making trig subs. To do it properly, you'd have to split your interval into parts where $\sin$ is injective, so that you get a well-defined inverse.
Find the smallest number $\alpha$ such that for all $x,y,z$ $\alpha(x^2-x+1)(y^2-y+1)(z^2-z+1)\ge(xyz)^2-xyz+1$ Find the smallest number $\alpha$ such that for all $x,y,z$ (not all of which are positive) the inequality $$\alpha(x^2-x+1)(y^2-y+1)(z^2-z+1)\ge(xyz)^2-xyz+1$$ holds. My work so far: Let $f(t)=t^2-t+1$. Then $f(t) \ge \frac34$. If $x=0$ and $y=z=\frac12$, then $$\alpha\ge \frac{16}9.$$
In the starting formulation the answer is $\frac{16}{9}$. If $x$, $y$ and $z$ are all non-positive, then after replacing $x\rightarrow-x$... we need to prove that $$\frac{16}{9}(x^2+x+1)(y^2+y+1)(z^2+z+1)\geq x^2y^2z^2+xyz+1$$ for all non-negative $x$, $y$ and $z$, which holds because we'll now prove the even stronger $$\frac{16}{9}(x^2+x+1)(y^2-y+1)(z^2-z+1)\geq x^2y^2z^2+xyz+1.$$ If $x\leq0$, $y\leq0$ and $z\geq0$, we need to prove that $$\frac{16}{9}(x^2+x+1)(y^2+y+1)(z^2-z+1)\geq x^2y^2z^2-xyz+1$$ for non-negative $x$, $y$ and $z$, which again follows from $$\frac{16}{9}(x^2+x+1)(y^2-y+1)(z^2-z+1)\geq x^2y^2z^2+xyz+1.$$ Now we'll prove it. View the difference as a quadratic in $x$ and use $y^2-y+1=\frac{3}{4}y^2+\left(\frac{y}{2}-1\right)^2=\frac{3}{4}+\left(\frac{1}{2}-y\right)^2$ together with $4(y^2-y+1)\geq 3y$ (its discriminant is negative): $$\begin{align} &16(x^2+x+1)(y^2-y+1)(z^2-z+1)-9(x^2y^2z^2+xyz+1) \\&\phantom{aaa}=\left(16(y^2-y+1)(z^2-z+1)-9y^2z^2\right)x^2+\left(16(y^2-y+1)(z^2-z+1)-9yz\right)x \\ &\phantom{aaaaa}+16(y^2-y+1)(z^2-z+1)-9 \\&\phantom{aaa} \geq\left(16\left(\frac{3}{4}y^2+\left(\frac{y}{2}-1\right)^2\right)\left(\frac{3}{4}z^2+\left(\frac{z}{2}-1\right)^2\right)-9y^2z^2\right)x^2+(3y\cdot3z-9yz)x \\&\phantom{aaaaa}+ 16\left(\frac{3}{4}+\left(\frac{1}{2}-y\right)^2\right)\left(\frac{3}{4}+\left(\frac{1}{2}-z\right)^2\right)-9 \\&\phantom{aaa}\geq0, \end{align}$$ since each of the three coefficients is non-negative and $x\geq0$.
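Not part of the proof, but here is a quick random-sampling sanity check (my addition; plain Python, no dependencies):

```python
# Sample triples that are not all positive and confirm the inequality
# (16/9)(x^2-x+1)(y^2-y+1)(z^2-z+1) >= (xyz)^2 - xyz + 1.
import random

f = lambda t: t * t - t + 1
random.seed(0)
worst = float("inf")
for _ in range(200_000):
    x, y, z = (random.uniform(-5.0, 5.0) for _ in range(3))
    if x > 0 and y > 0 and z > 0:   # the problem excludes all-positive triples
        continue
    p = x * y * z
    worst = min(worst, (16 / 9) * f(x) * f(y) * f(z) - (p * p - p + 1))
print(worst)  # stays >= 0; the gap shrinks to 0 near (0, 1/2, 1/2)
```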
From your $a(x^2-x+1)^3\ge x^6-x^3+1$: since $(x+1)(x^2-x+1) =x^3+1$ and $(x^3+1)(x^6-x^3+1) =x^9+1$, we get $a\left(\frac{x^3+1}{x+1}\right)^3\ge \frac{x^9+1}{x^3+1}$, or $a\ge \frac{(x^9+1)(x+1)^3}{(x^3+1)^4}$. According to Wolfy, the right-hand side has a maximum of $2.1547$ at $x \approx 0.435421$ and $x \approx 2.29663$. These are roots of $0 = x^6-2 x^5-x^4+x^2+2 x-1$. The exact roots are $-1$, $1$, $\frac12 (1\pm\sqrt{2}\, 3^{1/4}+\sqrt{3})$, $\frac12 (1\pm i\sqrt{2}\, 3^{1/4}-\sqrt{3})$. So $a$ must be at least $2.1547$.
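For what it's worth, a dense grid scan reproduces Wolfy's numbers (my addition; assumes NumPy is available):

```python
# Scan g(x) = (x^9+1)(x+1)^3 / (x^3+1)^4 over x > 0 on a dense grid.
import numpy as np

x = np.linspace(1e-4, 10.0, 400_001)
g = (x**9 + 1) * (x + 1) ** 3 / (x**3 + 1) ** 4
i = int(np.argmax(g))
print(g[i], x[i])  # ~2.1547 near x ~ 0.4354; g(1/x) = g(x), so also near 2.2966
```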
Prove $f|_{U_0}$ is $m$-to-$1$ except at $z_0$. Let $f$ be analytic on a domain $U$, $z_0\in U$, and $w_0=f(z_0)$. Suppose that $\mbox{ord}_{z_0}(f-w_0)=m\in\mathbb N$. Prove that there is an open set $U_0$ with $z_0\in U_0\subset U$ such that $f^{-1}(w_0)\cap U_0=\{z_0\}$ and $f^{-1}(w)\cap U_0$ contains exactly $m$ elements (without repetition) for $w\in f(U_0)\setminus\{w_0\}$. This means that $f|_{U_0}$ is $m$-to-$1$ except at $z_0$. Proof: First, since $\mbox{ord}_{z_0}(f-w_0)=m$, when we write $f-w_0$ as its Laurent series about $z_0$, we have $a_m \ne 0$ and $a_n=0$ for all $n<m$ (this is the definition of $\mbox{ord}_{z_0}(f-w_0)=m$, which essentially says $m$ is the smallest power appearing in the Laurent series with a nonzero coefficient). Then we can write \begin{align*} f(z)-w_0&=\sum_{n=m}^\infty a_n(z-z_0)^n \\ &=a_m(z-z_0)^m+a_{m+1}(z-z_0)^{m+1}+\cdots \end{align*} This also means $z_0$ is either a removable singularity or a pole of $f-w_0$. Since $m\in \mathbb{N}$, we know $z_0$ is removable, and after removing the singularity we say that $z_0$ is a zero of $f-w_0$ (of order $m$). Also, it means there exists a holomorphic function $g$ on a disk $D(z_0,r)$ with $g(z_0) \ne 0$ such that $f(z)-w_0=(z-z_0)^m g(z)$. Would this be correct so far? We have just finished talking about the Open Mapping Theorem and the Inverse Mapping Theorem, so it could be that this question is an application of one of those.
Actually this claim is included in some proofs of the open mapping theorem; see, for example, W. Rudin's Real and Complex Analysis, Thm 10.23. Your discussion is correct. Let's start from $f(z)-w_0=(z-z_0)^m g(z)$, where $g(z)$ is holomorphic in $D(z_0,r)$ and never vanishes. Claim: There exists a holomorphic function $h(z)$ such that $g(z)=(h(z))^m$ in $D(z_0,r)$. Proof: Consider $g'/g$, which is holomorphic in $D(z_0,r)$, so it has a primitive $G$ there; adding a suitable constant to $G$, we may arrange $\exp G=g$. Thus $h(z)=\exp(G(z)/m)$ is what we want. Now we have $f(z)-w_0=((z-z_0)h(z))^m$. Choose a small neighborhood $V$ of $z_0$ on which $(z-z_0)h(z)$ is one-to-one; this is possible by the inverse function theorem, since its derivative at $z_0$ equals $h(z_0)\ne0$. Now the conclusion follows from the fact that $z\mapsto z^m$ is $m$-to-$1$ on a punctured disk about $0$.
As you said, you can write $f(z)-f(c)=(z-c)^ng(z)$ where $g(c)\neq 0$ and $g$ is holomorphic where $f$ is. Since $f$ is holomorphic, we can restrict ourselves to a neighborhood $N$ where $f$ attains the value $f(c)$ only once, by the identity theorem. Since $g(c)\neq 0$ there is a neighborhood $V\subseteq N$ of $c$ where $g$ is nonzero. In such a neighborhood you can pick an $n$-th root $q$ of $g$; i.e. $q$ holomorphic with $q^n=g$. If you let $h=(z-c)q$ then $f(z)-f(c)=h(z)^n$, and $h'(c)=q(c)$; since $q(c)^n=g(c)\neq 0$, $h'(c)\neq 0$. This means there is a further neighborhood $U$ where $h:U\to h(U)$ is biholomorphic; i.e. we have written $f(z)=f(c)+h(z)^n$ with $h$ biholomorphic, all this in $U$. Since $h(c)=0$, the open mapping theorem guarantees there is an open ball $B=B(0,r)$ inside $h(U)$. Let $W=h^{-1}(B)$, so that $\tilde h=r^{-1}h:W\to B(0,1)$. Note then that we can write $f$ as a composition $t\circ p \circ \tilde h$ where $p(z)=z^n$ and $t(z)=r^nz+f(c)$. The end result is that we have factored $f$ into a biholomorphism $\tilde h:W\simeq B(0,1)$, the usual $n$-fold cover of the unit disk (minus the origin!) by itself, $z\mapsto z^n$, and a linear biholomorphism $t:B(0,1)\to B(f(c),r^n)$. This is known as the local normal form of a holomorphic mapping.
Probability of picking up 3 vowels from the word "MATHEMATICS" Each of the letters in the word "MATHEMATICS" is on a letter tile in a bag. Foool picks three tiles without replacement. What is the probability that he will get all vowels? My approach: the number of ways the letters of the word "MATHEMATICS" can be arranged 3 at a time is the coefficient of $x^3$ in $$ 3! \times (1+x)^5 \times \left(1+x+\frac{x^2}{2}\right)^3,$$ which is $399$. The number of arrangements of 2 A's, 1 I and 1 E taken 3 at a time is the coefficient of $x^3$ in $$ 3! \times (1+x)^2 \times \left(1+x+\frac{x^2}{2}\right),$$ which is $12$. The required probability would then be $\frac {12}{399}$, but apparently this is not the right answer. What exactly am I missing here?
How many ways are there to pick 3 tiles? How many ways are there to pick 3 vowels? By my count, the right answer should be $$\binom{4}{3}\Big/\binom{11}{3}=\frac{4}{165}.$$ The flaw in the arrangement count is that it treats letter sequences, not tiles, as outcomes: the $399$ three-letter arrangements are not equally likely, since a word like MAT can arise from several different tile triples while HES arises from only one. The $\binom{11}{3}=165$ tile triples are the equally likely outcomes.
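Not part of the original answer, but an exhaustive enumeration confirms this (my addition; standard library only):

```python
# Enumerate all 3-tile draws from the 11 physical tiles and count all-vowel draws.
from itertools import combinations
from fractions import Fraction

tiles = list("MATHEMATICS")          # 11 distinct physical tiles
vowels = set("AEIOU")
draws = list(combinations(range(11), 3))
good = sum(all(tiles[i] in vowels for i in c) for c in draws)
print(Fraction(good, len(draws)))    # 4/165
```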
There are $4$ vowel tiles in the word (A, A, E, I) among $11$ tiles in all, so the probability that the first pick is a vowel is $4/11$; continuing without replacement, the probability that all three picks are vowels is $\frac{4}{11}\cdot\frac{3}{10}\cdot\frac{2}{9}=\frac{4}{165}$.
Prove the Inequality: $\sum\frac{x^3}{2x^2+y^2}\ge\frac{x+y+z}{3}$ Let $x, y, z>0$. Prove that: $$\frac{x^3}{2x^2+y^2}+\frac{y^3}{2y^2+z^2}+\frac{z^3}{2z^2+x^2}\ge\frac{x+y+z}{3}$$
See my solution from 2006 here: https://artofproblemsolving.com/community/c6h22937p427220
I have a direction which may be fruitful, but I didn't work out all the cases. Rewrite the inequality as: $$ \frac{x^3}{2x^2+y^2}-\frac{x}{3}+\frac{y^3}{2y^2+z^2}-\frac{y}{3}+\frac{z^3}{2z^2+x^2}-\frac{z}{3}\ge0 $$ Let $f(t)=\frac{1-t^2}{2+t^2}$; then the inequality is equivalent to $xf(a)+yf(b)+zf(c)\ge0$, where $a=y/x$, $b=z/y$ and $c=x/z$ (note that $abc=1$). Case 1. Assume that $a,b,c \in (0,1]$. Since $$f(t)-\frac{1-t}{2}=\frac{t(t-1)(t-2)}{2(2+t^2)}\ge0 \quad\text{for } t\in(0,1],$$ we have $f(t)\ge \frac{1-t}{2}$ there, and then $$xf(a)+yf(b)+zf(c) \ge \frac{x}{2}\left(1-\frac{y}{x}\right)+\frac{y}{2}\left(1-\frac{z}{y}\right)+\frac{z}{2}\left(1-\frac{x}{z}\right)=0.$$ Case 1 is settled. (Beware that the bound $f(t)\ge\frac{1-t}{2}$ fails for $t\in(1,2)$, so this case cannot simply be taken as $a,b,c\in(0,2]$.) The remaining cases, where some of $a,b,c$ lie outside $(0,1]$ (for instance $x\ge y\ge z$ with $x\ge 2z$, or $z\ge y\ge x$ with $z\ge 2y$ or $y\ge 2x$), remain to be settled.
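Since the linked solution above is external and this case analysis is incomplete, here is a quick random-sampling check of the original inequality itself (my addition; plain Python):

```python
# Sample positive triples and confirm
# x^3/(2x^2+y^2) + y^3/(2y^2+z^2) + z^3/(2z^2+x^2) >= (x + y + z)/3.
import random

random.seed(1)
worst = float("inf")
for _ in range(200_000):
    x, y, z = (random.uniform(1e-6, 10.0) for _ in range(3))
    s = x**3 / (2*x*x + y*y) + y**3 / (2*y*y + z*z) + z**3 / (2*z*z + x*x)
    worst = min(worst, s - (x + y + z) / 3)
print(worst)  # stays >= 0; equality when x = y = z
```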
Find $\lim_{n\to\infty}\left(1+\dfrac{1}{n}\right)\left(1+\dfrac{1}{2n}\right)\ldots\left(1+\dfrac{1}{2^{n-1}n}\right)$ Find $\displaystyle\lim_{n\to\infty}\left(1+\dfrac{1}{n}\right)\left(1+\dfrac{1}{2n}\right)\left(1+\dfrac{1}{4n}\right)\ldots\left(1+\dfrac{1}{2^{n-1}n}\right)$. (This is not my homework. One of my friends gave this to me.)
Apply the AM-GM inequality to get: $$\begin{align} \prod_{j=1}^n\left(1+\frac1{2^{j-1}n}\right)&\leq\left(1+\frac1{n^2}\cdot\frac{1+2+\cdots+2^{n-1}}{2^{n-1}}\right)^n\\ &\leq\left(1+\frac2{n^2}\right)^n \end{align}$$ Since the last expression tends to $1$, the limit is $\leq1$. Since each factor is greater than $1$, the limit must be $1$.
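A quick numerical illustration (my addition; standard library only):

```python
# The partial products visibly tend to 1; the deviation behaves roughly like 2/n.
from math import prod

def P(n):
    return prod(1 + 1 / (2**j * n) for j in range(n))

for n in (10, 100, 1_000, 10_000):
    print(n, P(n))
```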
Isn't it $1$? Each fractional expression inside the parentheses tends to zero as $n\to \infty$: $$\lim\limits_{n\to\infty}\frac{1}{n}=0, \qquad \lim\limits_{n\to\infty}\frac{1}{2n}=0, \qquad \dots, \qquad \lim\limits_{n\to\infty}\frac{1}{2^{n-1}n}=0.$$ One must be a little careful, since the number of factors also grows with $n$ (compare $\left(1+\frac1n\right)^n\to e$), but here $$1\le\prod_{j=1}^{n}\left(1+\frac{1}{2^{j-1}n}\right)\le \exp\left(\sum_{j=1}^{n}\frac{1}{2^{j-1}n}\right)\le e^{2/n}\longrightarrow 1.$$ Consequently the answer is $1$.
Fixed point in one-to-one function on $\mathbb{R}$ Given that $f:[a,b]\to[a,b]$ is a real continuous one-to-one function, and that neither $a$ or $b$ are fixed points of $f$, show that there exists a fixed point in $(a,b)$. Proof: Since $f(a)\ne a$ and $f(b)\ne b$, and $f$ is one-to-one on $[a,b]$, by IVT, $\exists c_1, c_2 \in (a,b)$ such that $f(c_1)=a$ and $f(c_2)=b$. Since $a$ and $b$ are boundary points, $\exists d_1, d_2\in (a,b)$, with $d_1\ne d_2$ (WLOG, let $d_1 < d_2$), such that $f'(d_1)=f'(d_2)=0$. Because of this and since $f$ is continuous, $\exists k\in (d_1,d_2)$ such that $\left| f'(k)\right|=1$. This implies that $f(x)$ coincides with $g(x)=x$ at least once, which implies that a fixed point exists in $(a,b)$. I think that rigour suffers someplace in this proof. Would appreciate some feedback and suggestions.
I don't really understand your point about $d_1, d_2$, but I'd say $f(x) = a+b-x$ is a counterexample to it: its derivative is constantly $-1$, so no points with $f'=0$ exist. Plus, you don't even know whether $f$ is differentiable. Consider now $g(x) =f(x) - x$. $f$ is a continuous bijection, so it is monotone; it cannot be increasing, since an increasing bijection of $[a,b]$ onto itself fixes both endpoints, contradicting $f(a)\ne a$. Hence $f$ is decreasing, and you can then easily check that $f(a) =b$ and consequently $f(b) =a$, so $g(a) > 0$, $g(b) < 0$, and you just need to apply Bolzano's theorem.
Solve $\frac{dy}{dx} + \frac{y}{x} = \frac{1}{(1+\log x+\log y)^2}$ Solve $\frac{dy}{dx} + \frac{y}{x} = \frac{1}{(1+\log x+\log y)^2}$. I tried solving this differential equation but got stuck after a few steps: multiplying through by $x$ gave me $(1+\log(xy))^2\,d(xy) = x\,dx$. I don't know what to do after this.
If you write $$\frac{dy}{dx}+\frac{y}{x} = \frac{1}{(1+\log(xy))^2}$$ and substitute $v=xy$, then wonderful things happen. Then $$\frac{dv}{dx} = x\frac{dy}{dx} +y$$ Plug stuff in and the equation becomes $$\frac{1}{x}\frac{dv}{dx}=\frac{1}{(1+\log v)^2},$$ separable.
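Carrying the separation one step further (my addition, for completeness): one antiderivative is $$\int(1+\log v)^2\,dv = v\left((\log v)^2+1\right)+C,$$ as differentiating the right-hand side confirms, so integrating $(1+\log v)^2\,dv = x\,dx$ gives the implicit general solution $$xy\left(1+\left(\log xy\right)^2\right)=\frac{x^2}{2}+C.$$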
With $u = xy$, the differential equation becomes separable.
$1+2+3... =-\frac{1}{12}$ - Question regarding this So, I'm not a big expert in this subject, but I know $1+2+3+\cdots=-\dfrac{1}{12}$ isn't to do with 'real' maths; it's all to do with the zeta function. However, I was watching a maths video, and the expression $$ \frac{x(x+1)}{2} $$ is actually a perfect formula for the partial sums of the series $1+2+3+\cdots$, where $x$ represents $n$ in the series and the value is the sum of the series up to $n$. So, you can conclude that: $$ \sum^{n}_{i=1}i=\frac{n(n+1)}{2} $$ However, this is where it gets weird; as you have probably guessed, the roots of the equation are $x=0,-1$, and if I compute the integral between the roots, from $-1$ to $0$, where the graph lies under the $x$-axis, I get the following: $$ \int_{-1}^{0} \frac{x(x+1)}{2}\:dx=-\frac{1}{12} $$ So, my question is: why is this the case? What connection is there between the value of the integral under the $x$-axis and the summation of the series? Link to Desmos graph for more clarity
Your result for this integral is just a consequence of a general formula which gives the values of the zeta function at negative integers. Integrating Faulhaber's formula (or perhaps one should credit Bernoulli; I'm not sure) between $-1$ and $0$ leads to the formula. Faulhaber's formula is the one that gives the $n$-th partial sum of $k$-th powers of integers in terms of a polynomial of degree $k+1$.
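For anyone who wants to double-check the integral (my addition; assumes SymPy is available):

```python
# Verify the integral between the roots of x(x+1)/2.
import sympy as sp

x = sp.symbols('x')
print(sp.integrate(x * (x + 1) / 2, (x, -1, 0)))  # -1/12
```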