Why is the Penrose triangle "impossible"? I remember seeing this shape as a kid in school, and at that time it was pretty obvious to me that it was "impossible". Now I've looked at it again and I can't see why it is impossible anymore. Why can't an object like the one represented in the following picture be a subset of $\mathbb{R}^3$?
I can't resist posting an answer based on the Mathematics Stack Exchange logo. Let's add some more cubes to the logo to make it clear that it's a subset of the Penrose triangle (or would be, if it were a real 3D object). Now note that the cubes are overlapping, so some must be in front of others. But in fact, each cube is partially obscured by at least one other cube, in such a way that it appears to be some distance behind it. You can go around the hexagon in the original logo, in clockwise order, and see that each cube appears to be located further from the 'camera' than the next one in the cycle - which means that each cube is in front of itself. There's no consistent "z ordering" that you can give to the different parts of the figure, and that's one way to see that it's impossible. In reply to some of the comments, just to be explicit, the point here isn't just that the cubes all overlap each other. If that were the case it would be incorrect, since it's possible to have mutually overlapping arrangements of cubes, as in this image provided by Misha Lavrov. However, if we're assuming that the Stack Exchange logo is a subset of the Penrose triangle then we know the cubes aren't arranged like that. Instead, each cube is positioned so that some of its sides are coplanar with those of the next cube, and each cube is separated from the next by some distance in the z direction, where z is perpendicular to the plane of the image. Therefore the cubes' centres of mass can't be given consistent z coordinates. As an extra bonus point, even if we don't assume that, and instead assume that each cube is as close to the next as it can be (in the z direction) without the surfaces intersecting, the Math.SE logo still can't be made into a consistent 3D shape, as the following animation shows. Note that it doesn't quite form the Math.SE logo, since one cube ends up in front of all the rest. Of the six neighbouring pairs of cubes, three of them can have equal z coordinates, but for the remaining three pairs, one cube unavoidably has to have a greater z coordinate than the next. As another additional bonus point, although it's not possible to embed the Penrose triangle into normal, flat, Euclidean 3D space, it is possible to embed it into curved three-dimensional space. The video below, by @ZenoRogue on Twitter, shows Penrose triangles embedded into something called "nil geometry". I don't pretend to understand the details, but it's a kind of curved space such that Penrose triangles really are possible. Video link: https://www.youtube.com/watch?v=YmFDd49WsrY
Imagine keeping the corners in the same place, but reducing the width of the square cross-section of each side down to zero, until each side is a one-dimensional line segment. You would end up with a triangle with three $90^{\circ}$ angles, which is impossible in Euclidean space $\mathbb{R}^n$.
Macaulay2: How to compute the remainder when dividing a polynomial by a set of polynomials (in some order)? I'm writing Buchberger's criterion in a Macaulay2 program to check whether or not the set of polynomials I have forms a Gröbner basis for the ideal it generates. However, I have not been able to find a method that gives me the remainder when a polynomial, say $f$, is divided by a set of polynomials $G=\{g_1, g_2, ..., g_t\}$ in some order. Would anyone know if such a method exists and, if yes, what its name is? Although that's not all I have, here is the part of the program where I try to implement Buchberger's criterion:

    n = 0;
    for i to #polynomials-2 do (
        for j from i+1 to #polynomials-1 do (
            Spoly := lcm(leadTerm(polynomials#i), leadTerm(polynomials#j))/leadTerm(polynomials#i)*polynomials#i -
                     lcm(leadTerm(polynomials#i), leadTerm(polynomials#j))/leadTerm(polynomials#j)*polynomials#j;
            remainder := Spoly % polynomials;
            if remainder == 0 then n = n+1;
        );
    );
    if n == binomial(#polynomials, 2) then print "The polynomials form a Grobner basis for the ideal it generates."
    else print "The polynomials don't form a Grobner basis for the ideal it generates."

In the code above, I use % as the method I'm looking for - but it doesn't work for a polynomial being divided by a list of polynomials.
It seems this is not built in (at least, not exposed to users), but it is easy enough to achieve. Given a monomial ordering, dividing a polynomial $f$ by an ordered list of polynomials $f_1,\ldots,f_s$ means to express $$f = q_1f_1 + \ldots + q_sf_s + r,$$ where either $r=0$ or none of the monomials in $r$ are divisible by any of $LT(f_1), \ldots, LT(f_s)$. Algorithm: To begin, set $q_1:= 0, \ldots, q_s :=0$ and $r := 0$, and introduce $p := f$. While $p \neq 0$, we remove $LT(p)$ (and possibly more) from $p$ as follows: try to find the smallest index $i$ such that $LT(f_i)$ divides $LT(p)$. If such an $i$ is found (division step): set $q_i := q_i + LT(p)/LT(f_i)$ and $p := p - (LT(p)/LT(f_i))f_i$. If no such $i$ exists (remainder step): set $r := r + LT(p)$ and $p := p - LT(p)$. Finally, return $(q_1,\ldots,q_s)$ and $r$. See Ideals, Varieties, and Algorithms, 4th ed., by Cox, Little and O'Shea, Chapter 2, $\S$3, Theorem 3, pp. 64-66. They give imperative pseudo-code which should be easy to translate into Macaulay2. It is a nice programming exercise.
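For concreteness, here is a minimal sketch of the same division in Python/SymPy rather than Macaulay2 (an assumption of convenience: SymPy's `reduced` performs exactly this kind of division by an ordered list with respect to a monomial order). The input is the textbook example from Cox-Little-O'Shea:

    from sympy import symbols, reduced

    x, y = symbols('x y')
    f = x**2*y + x*y**2 + y**2
    G = [x*y - 1, y**2 - 1]          # the ordered list f_1, f_2

    # quotients q_1, q_2 and remainder r with respect to lex order, x > y
    Q, r = reduced(f, G, x, y, order='lex')
    print(Q)   # expected [x + y, 1]
    print(r)   # expected x + y + 1: no monomial divisible by x*y or y**2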
If you are given two polynomials f and g, the remainder is given by f % g. Suppose now you are given an ideal G = (g1, g2, ..., gn); then the remainder when f is divided by G should be f % G. Hope this helps.
Determine position of ellipse that contacts two fixed ellipses. I'm working in a vector program with three identical ellipses, all of which additionally have the same angle of rotation. The first two are at the same Y coordinate and are tangent. I would like to position the third ellipse such that it is tangent to the first two. How can I calculate the third ellipse's position? One solution good enough for my end goal has been to start with two circles, rotate a copy of one by 90° about the intersection point, and then decrease the eccentricity of all three to the desired form. This gives me the result I'm looking for, but I'd rather know a more elegant and less procedural solution.
If there is no constraint on the position of the third ellipse other than bitangency, there are infinitely many solutions. If one solution is enough for you, stretching three tangent circles is certainly the most elegant approach.
The ellipses are drawn with major axis parallel to the x-axis. The placement of the ellipses results in anti-symmetry about lines drawn parallel to the x-axis as shown, making contact points at $y= \pm b/2$, where $b$ is the minor axis, around the points P and Q. A vertical positioning of the ellipse is attempted next, and a general tilted position later. When the contacting ellipse has its major axis vertical: by a property of the ellipse connecting the slope angle $\phi$ and the angle $\psi$ at the elliptic arc, for eccentricity $e<1$, $$ \frac{\cos \psi_1}{\cos \phi_1}= e\,; \quad \frac{\cos \psi_2}{\cos \phi_2}= e\tag1 $$ $$ \phi_1+\phi_2=\pi/2,\quad\sqrt{\cos^2 \psi_1+\cos^2 \psi_2} =e \tag2 $$ $$ \frac{\cos \psi_1}{\cos \psi_2}= \tan\phi_2, \quad \frac{\cos \psi_1}{\sin \phi_2}=e \tag3 $$ The detail at the contact area is zoomed at right. $F_1F_2$ is of arbitrary length up to a scale factor of the construction. The construction has been made with the numerical values $$ \psi_1=1.2,\ \psi_2=0.47,\ \phi_1=1.1,\ e= 0.798855 $$ in radians for the three ellipses. If the first relation of (2) is chosen different from $\pi/2$, then the resulting intruding contact ellipse can be computed and drawn at any inclination.
What algebraic structure do date, temperature, and similar quantities belong to? I find that some quantities share several characteristics. For dates: "1st July" + "1 day" = "2nd July"; "2nd July" - "1st July" = "1 day". But "20th August" + "29th August" is nonsense. For temperature: an object can be heated up from 25 degrees Celsius by 5 degrees Celsius to 30 degrees Celsius. The temperature of boiling water is 100 degrees Celsius; the temperature of the surroundings is 25 degrees Celsius; the difference is 75 degrees Celsius. But we cannot add the temperature of a cup of tea to the temperature of a cup of coffee. Also, we cannot do multiplication on dates or temperatures. Depending on context, their relative values are useful, but their absolute values are not relevant. E.g., we don't count dates from the Big Bang; weather forecasts do not involve absolute temperature. How does abstract algebra describe these quantities? Is there an algebraic structure that captures their characteristics?
What you are describing is similar to the concept of an affine space. An affine space over a vector space $V$ is a space filled with points. You can add a vector to a point to get another point, and you can subtract one point from another to get a vector. But you cannot add two points together.
Calendar dates and temperatures in °C are sets on which group operations act. You can add and subtract day differences, since differences in days follow the model of the integers ($\mathbb{Z}$). The elements of the group of day differences then operate on the "amorphous" set of calendar dates, which has no algebraic structure of its own (calendar dates are not a vector space or a group; you cannot add 2nd July to 1st July, as you correctly noted). You can add and subtract temperature differences in K (Kelvin), because by convention temperature differences are never measured in °C but in K. Here we see clearly that we don't have an algebraic structure on the temperatures themselves (14°C - 25°C is not a temperature in °C but a temperature difference in Kelvin). The group of temperature differences thus operates on the set of temperatures. Operations like these are quite important in analysing (multidimensional) data: you are not interested in the numbers themselves but in the information they convey. So if this information is not changed by some operation on the data, you have to use an analysis tool that is invariant under this class of operations. There are also examples where a more specific structure than a group operates on the data.
What is the dimension of the eigenspace corresponding to the eigenvalue $\lambda = 9$? Another homework problem is asking me to find the dimension of the eigenspace corresponding to $\lambda = 9$. From what I understand, I need to subtract $9$ times the $4\times4$ identity matrix from the original matrix $A$, but I'm seeking to understand what to do after that, because my matrix has the number 9 in every column: $$ \begin{pmatrix} 3-\lambda & 0 & 0 & 0 \\ 3 & 9-\lambda & 0 & 0 \\ 9 & 0 & 9-\lambda & 0 \\ 3 & 3 & 9 & 3-\lambda \\ \end{pmatrix} $$ What should I do now? Am I even on the right track?
Well...you're sort of on the right track. Plug in $9$ for $\lambda$ in your matrix and you get $$ A - 9I = \begin{bmatrix} -6 & 0 & 0 & 0 \\ 3 & 0 & 0 & 0 \\ 9 & 0 & 0 & 0 \\ 3 & 3 & 9 & -6 \\ \end{bmatrix} $$ And now you have to compute the dimension of the nullspace of that matrix (or you could compute the rank of that matrix and use the rank-nullity theorem).
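A quick numeric cross-check of that plan (a sketch in Python/NumPy; the matrix is the one above):

    import numpy as np

    # B = A - 9I from the answer above
    B = np.array([[-6, 0, 0, 0],
                  [ 3, 0, 0, 0],
                  [ 9, 0, 0, 0],
                  [ 3, 3, 9, -6]], dtype=float)

    # rank-nullity: dim of the eigenspace = 4 - rank(A - 9I)
    print(4 - np.linalg.matrix_rank(B))  # 2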
$$(A-\lambda I)v=0$$ $$\begin{pmatrix} -6& 0 & 0 & 0 \\ 3 & 0 & 0 & 0 \\ 9 & 0 & 0 & 0 \\ 3 & 3 & 9 & -6 \\ \end{pmatrix}\pmatrix {v_1 \\v_2 \\v_3 \\v_4}=\pmatrix {0\\0\\0\\0}$$ You deduce that: $$ \begin{cases} v_1=0 \\ v_2+3v_3-2v_4=0 \end{cases} $$ So that we have: $$v=\left(0,v_2,v_3,\dfrac 12 v_2+\dfrac 32 v_3\right)^t$$ $$v=c_1\left(0,2,0,1\right)^t+c_2\left(0,0,2,3\right)^t$$ You can deduce the dimension of the eigenspace now.
What is a root vector? Are "root vectors" and generalised eigenvectors the same thing? I'm reading a book by Nikolski, "Operators, Functions, and Systems - An Easy Reading: Hardy, Hankel, and Toeplitz", and he uses the term every now and then. A generalised eigenvector is a vector $v$ such that $(A-\lambda)^{n}v=0$ for some $n$ and $(A-\lambda)^{k}v\ne 0$ for $k<n$. Looking for references.
No. Given a Lie algebra $L$ and a Cartan subalgebra $H$, a root is a non-zero element $\alpha\in H^*$ such that the associated root space $L_{\alpha}=\left\{x\in L\mid [h,x]=\alpha(h)x, \forall h\in H\right\}$ is non-zero. A non-zero $x\in L_{\alpha}$ is called a root vector. You could see a root $\alpha$ as a generalisation of an eigenvalue in the sense that it satisfies the equation $[h,x]=\alpha(h)x$. Here $x$ is an eigenvector of $[h,-]$ with eigenvalue $\alpha(h)$, but unlike standard linear algebra, this holds for all $h\in H$, hence $x$ is simultaneously an eigenvector for all $[h,-]$. The importance of roots is that given a semisimple complex Lie algebra $L$, you can decompose $L$ as $$L=L_0\oplus \bigoplus_{\alpha\in \Phi}L_{\alpha},$$ where $\Phi$ is the set of roots. It turns out that $L_0=H$ is a Cartan subalgebra of $L$. The set $\Phi$ is called the root system of $L$. One can associate a Dynkin diagram to the root system, and these diagrams are classified. It follows that the root systems can be classified, which in turn can be used to classify the semisimple Lie algebras. This is perhaps one of the greatest achievements in mathematics and is not very difficult to understand. (In fact, you only need basic linear algebra to understand this.) Generalized eigenvectors, on the other hand, are very different from elements in root spaces. Given a linear transformation $T:V\rightarrow V$ on a finite-dimensional vector space (over an algebraically closed field such as $\mathbb{C}$), one can choose a basis of $V$ consisting of eigenvectors and generalized eigenvectors. Considering the matrix of $T$ w.r.t. this basis yields the Jordan canonical form of $T$. When $T$ is diagonalizable, there are only eigenvectors in this basis and no (strict) generalized eigenvectors.
A root vector in this context is a generalized eigenvector.
Show that the extension is simple. Let $E/k$ be a finite extension such that for any subfields $F_1$ and $F_2$ of $E$ containing $k$, either $F_1 \subset F_2$ or $F_2 \subset F_1$. Then $E=k(a)$ for some $a$. Solution: Assume $E=k(x_i,\ 1 \leq i \leq n)$ and $F_i=k(x_i)$. By the given comparability condition we can compare the $F_i$, and therefore there will be a "largest" $F_i$, which will be equal to $E$. Is this solution correct? Any other solution?
Let $E= k(x_1,\ldots,x_n)$ be a finite extension of $k$. Since the simple extensions $F_i=k(x_i)$ are pairwise comparable, there is a largest element. W.l.o.g. let $F_n=k(x_n)$ be the largest one. Then $k(x_i)\subseteq k(x_n)$ for all $1\leq i\leq n$ and so $x_1,\ldots,x_{n-1},x_n\in F_n$. Hence, $E=F_n$, since both are the smallest extensions of $k$ containing $x_1,\ldots,x_n$.
Why do we require a topological space to be closed under finite intersection? In the definition of a topological space, we require the intersection of a finite number of open sets to be open, while we require an arbitrary union of open sets to be open. Why is this? I'm assuming this has something to do with the following observation: $\cap_{n=1}^{\infty} (-\frac{1}{n},\frac{1}{n}) = \{0\}$, and there is some reason we don't want singletons to be considered open. I am wondering what this reason is. Am I thinking in the right direction here? Thanks :)
You need to think about what the intuition behind open sets is. One way to think about it is through neighborhoods: an open set is a set which is a neighborhood of each of its points. What is a neighborhood of a point? A neighborhood of a point $x$ is a set that contains all points that are "sufficiently close" to $x$ (what does "sufficiently close" mean? It depends on the situation; you can think of different neighborhoods as perhaps specifying different degrees of closeness). In particular, any set that contains a neighborhood of $x$ is itself a neighborhood of $x$. And specifying two degrees of closeness specifies another degree of closeness that makes sense (the smaller of the two at any given place, say). So: if you think about open sets as sets that are neighborhoods of all of the points they contain, then it is natural that the union of any family of open sets will be open: each point in the union is in one of the open sets, that open set is a neighborhood of the point, and the union contains that neighborhood and so is itself a neighborhood. So the arbitrary union of open sets should still be open. What about intersection? Well, if you take two open sets $O_1$ and $O_2$, and you consider a point $x$ in $O_1\cap O_2$, then $O_1$ contains all points that are $1$-sufficiently close to $x$, and $O_2$ contains all points that are $2$-sufficiently close to $x$ (with "$1$-sufficiently" and "$2$-sufficiently" describing the two degrees of closeness required), so $O_1\cap O_2$ will contain all points that are both $1$-sufficiently and $2$-sufficiently close to $x$, and so it contains all points that are "sufficiently close" to $x$ for some meaning of "sufficiently close", so it is also an open set. This gives you, inductively, any finite intersection. But what about arbitrary intersections? Then you run into trouble, because specifying two degrees of "closeness" gives you a degree of closeness (the smaller one), but an infinite number of degrees of closeness may end up excluding everything! (Just as in your example, taking the intersection of all $(-\frac{1}{n},\frac{1}{n})$, which specify all points that are $\frac{1}{n}$-close to $0$, but whose intersection excludes everything except $0$ itself.) So we don't want to require that an arbitrary intersection of neighborhoods be a neighborhood, and so we don't want to require that an arbitrary intersection of open sets be an open set.
I think the restriction to finite intersections is necessary for a topological space because when you allow infinite intersections you end up leaving out almost everything: lots of open sets in the topology will disappear, and that's alarming.
How many 11-digit numbers are there which contain all of the digits 0-9 at least once? How many $11$-digit numbers are there which contain all of the digits $0$-$9$ at least once? Note that in this question a number cannot begin with a digit of $0$. My instructor explained this as a case problem, but rushed through it without pausing for questions. The answer ended up being $99 \times (10!/2!)$, but I have no idea how to get there!
My thinking: When you have 11 items in which 9 are unique and 2 are the same numeral repeated, then that can be arranged in $\frac{11!}{2!}$ ways. Then, there are 10 different choices for which numeral is repeated. So we have $\frac{10\times11!}{2!}$. Finally, one tenth of these arrangements should have 0 as its first digit and those don't count. So we need to multiply by $\frac{9}{10}$ $$\frac{9}{10}\times\frac{10\times11!}{2!}=99\times\frac{10!}{2!}$$
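To convince yourself the argument is sound, here is a minimal brute-force check in Python on a smaller analogue: 4-digit strings over the digits {0,1,2}, no leading zero, each digit used at least once, where the same reasoning predicts $\frac{2}{3}\times\frac{3\times 4!}{2!} = 24$.

    from itertools import product

    # count 4-digit strings over {0,1,2}: no leading zero, all three digits present
    count = sum(1 for s in product('012', repeat=4)
                if s[0] != '0' and set(s) == set('012'))
    print(count)  # 24, the small-case analogue of 99 * 10!/2!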
$10!×10×11-9!\,\cdots$: here $10!$ is the number of permutations of $0,\ldots,9$; $10$ is the number of choices for the digit additionally added; $11$ is the number of places available to insert that digit; and the $9!$ term is to eliminate numbers beginning with $0$.
Prove that $\frac{x}{e^x}$ tends to zero as $x \to \infty $ As the title states, I want to prove $$\lim_{x \to \infty} \frac{x}{e^x} =0$$ Clearly, L'Hopital's rule easily solves this. However, I'm curious to see if there's another way to prove it, without involving some differential or integral calculus (that is, by algebraic means). What I'm really interested about, is to prove that $$\lim_{x \to \infty} \frac{x}{e^{x^2}}=0 $$ I assume that proving the first limit will provide a way to prove the second one, using the squeeze method. If you know a direct way to prove the second limit, it will be more than perfect. Thanks in advance!
For $x>0$, $e^x=1+x+x^2/2+\cdots>x^2/2$ so $$0<\frac{x}{e^x}<\frac2x.$$ Likewise $e^{x^2}=1+x^2+\cdots>x^2$ so $$0<\frac{x}{e^{x^2}}<\frac1x.$$
First show that $$\frac{x}{a^x}\to 0$$ for any $a>1$. This means $$\frac{x}{e^{2x}}\to 0;$$ substituting $x^2$ for $x$, this reads $$\frac{x^2}{e^{2x^2}}\to 0.$$ Now take the square root to get $$\frac{x}{e^{x^2}}\to 0$$
What is the name for an equation that has both an exponential variable and a variable in the base? I would like to know if there is a name given to equations like the following: $$-31=\frac{-39.2}{x}(1-e^{\frac{x}{4}})$$ or more generally: $$a=\frac{b}{x}c^x$$ The formatting doesn't really matter, I'm just talking about an equation where there are variables both in an exponent and in another term in a base. Another example would be this: $$20x^2 = e^x$$ I've searched for things like "composite exponential functions" but to no avail. Also, I guess I could solve my example by rearranging so that there is an expression containing x on the left and an exponential one on the right. Then I could graph each side and see where they equal. However, is there a way to do it analytically? Perhaps I'm not explaining this as clearly as I could be, but I would appreciate any advice you could offer. Thanks!
Welcome to the world of the Lambert function! If you consider the equation $$a=\frac{b \, e^{c x}}{x^d}$$ the solution is $$x=-\frac{d}{c} W\left(-\frac{c }{d}\left(\frac{b}{a}\right)^{1/d}\right)$$ where $W(z)$, the Lambert function, is such that $z=W(z)\,e^{W(z)}$. The Wikipedia page will provide you with many examples of the series of manipulations to be done. In the real domain, $W(z)$ exists if $z \geq -\frac 1 e$ and, for $z<0$, it shows two branches. Considering the example you give ($a=20$, $b=1$, $c=1$, $d=2$), the result will be $$x=-2 W\left(-\frac{1}{4 \sqrt{5}}\right)$$ The Wikipedia page will also provide you with series expansions for computing the value of $W(z)$. Using for example $$W(z)=z-z^2+\frac{3 z^3}{2}-\frac{8 z^4}{3}+\frac{125 z^5}{24}+O\left(z^6\right)$$ makes for your example $$x=\frac{7936+31321 \sqrt{5}}{307200}\approx 0.253815$$ while the exact value is $\approx 0.253871$. In fact, sooner or later, you will learn that any equation which can be written $A+Bx+C\log(D+Ex)=0$ has solution(s) in terms of the Lambert function.
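As a numeric sanity check in Python (SciPy's lambertw with k=0 is the principal branch):

    import numpy as np
    from scipy.special import lambertw

    # root of 20*x**2 = exp(x) from the closed form above
    x = (-2 * lambertw(-1 / (4 * np.sqrt(5)), k=0)).real
    print(x)                      # ~0.253871
    print(20 * x**2 - np.exp(x))  # ~0, so x indeed solves the equation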
Thanks for the detailed answer! I was a bit confused at first because the Wikipedia page mentioned that it is defined on the complex plane. However, I reread your comment and saw what you said about it being defined for real numbers with domain restrictions, and things are making more sense now. Hopefully I will get to study Lambert functions in more depth at some point in the future, but, for now, I'm glad that I understand the basic concept. I think I would still be searching around for an answer had I just continued trying to Google things, so much appreciation for helping out! -I
Problem From Hungerford (Abstract Algebra) (Ch 1 Section 1) The Question: Let $a,b,c,$ and $q$ be as in Exercise 5. Suppose that when $q$ is divided by $c$, the quotient is $k$. Prove that when $a$ is divided by $bc$, the quotient is also $k$. Exercise 5 Question: Let $a$ be any integer and let $b$ and $c$ be positive integers. Suppose that when $a$ is divided by $b$, the quotient is $q$ and the remainder is $r$, such that $$a = bq + r \text{ and } 0 \leq r < b.$$ If $ac$ is divided by $bc$, the quotient is $q$ and the remainder is $rc$. What I have done so far: Let $a=bq+r, 0 \leq r < b$, and $q=ck+z, 0 \leq z < c$, where $a,b,c,q,k,z$ are integers. Then it follows that: $$a = bq + r \implies a = b(ck+z)+r \implies a = bck + bz + r, 0 \leq bz+r < bc.$$ What I am stuck on is how to show the last inequality is true. I have tried manipulating the inequalities given: $$ 0 \leq r < b, 0 \leq z < c \implies 0 \leq rc+bz < bc \implies 0 \leq bz + r < bz + rc < 2bc $$ $$ 0 \leq bz + r < 2bc $$ I think this is to no avail. Can anyone give me tips on how to proceed?
$$ 0\le bz+r \le b(c-1)+(b-1)=bc-1 < bc, $$ using that $z \le c-1$ and $r \le b-1$, since $z$ and $r$ are integers.
I know I'm super late, I'm sorry, but the problem only wants you to show $k$, so set $r$ equal to zero and you are done. You don't need $z$. When you substitute you will get $2r$; then set $r$ to zero.
Solve for $x$ - Logarithm Equation $\ln x+\ln(x+1)=\ln 2$ My attempt: $\ln x(x+1)=\ln 2$ $e^{\ln x(x+1)}=e^{\ln 2}$ $x(x+1)=2$ $x^2+x-2=0$ $(x-1)(x+2)=0$ therefore $x=1, -2$
$$\ln x+\ln\left( x+1 \right) =\ln 2 \\ \ln\big(x\left( x+1 \right)\big) =\ln 2 \\ x\left( x+1 \right) =2\\ x^{ 2 }+x-2=0\\ \left( x-1 \right) \left( x+2 \right) =0\\ { x }_{ 1 }=-2,\quad{ x }_{ 2 }=1 $$ Since we need $x>0$ for the logarithms to be defined, ${ x }_{ 2 }=1$ is the only root.
We have $$\ln x+\ln(x+1)=\ln 2$$ $$\implies \ln(x(x+1))=\ln 2$$ $$\implies \frac{\ln(x(x+1))}{\ln 2}=1$$ $$\implies \log_2(x(x+1))=1$$ $$\implies x(x+1)=2$$ $$\implies x^2+x-2=0$$ Now, solving the above quadratic equation for $x$: $$\implies x=\frac{-1\pm\sqrt{(1)^2-4(1)(-2)}}{2(1)}$$ $$\implies x=\frac{-1\pm\sqrt{9}}{2}$$ $$\implies x=\frac{-1\pm 3}{2}$$ $$\implies x=\frac{-1+3}{2}=1$$ and $$\implies x=\frac{-1-3}{2}=-2$$ Edit: Since the log is defined only for positive numbers, i.e. $x>0$, we have $x=\color{blue}{1}$.
quadratic equation form maximum solutions. My Pearson intermediate algebra book has a "concept check" question in its section on solving equations by using quadratic methods. These questions are supposed to highlight fundamental concepts that indicate full or poor understanding of the subject. The question asks: a. True or false? The maximum number of solutions that a quadratic equation can have is 2. b. True or false? The maximum number of solutions that an equation in quadratic form can have is 2. The answers are listed as a. true and b. false. I'm having difficulty searching for information on this point because search results yield explanations of how to determine the number of solutions based on the discriminant, but don't seem to get into why an equation in quadratic form is not necessarily a quadratic equation, or why it wouldn't have the same maximum number of solutions as a quadratic equation. I'm also not finding an explanation anywhere in the text, which is mostly examples, and the phrase "equation in quadratic form" is nowhere to be found.
I googled "quadratic in form" and found the following explanation from Paul's online notes: http://tutorial.math.lamar.edu/Classes/Alg/ReducibleToQuadratic.aspx For example, an equation like $x^4+12x^2-74=0$ is an equation in quadratic form. Now, you are asked the following questions: 1) How many solutions can a quadratic equation have? The answer is not more than two. To see this, write down the formula for the roots of the quadratic equation $ax^2+bx+c=0$, which is: $$x=\frac{-b \pm \sqrt{b^2-4ac}}{2a}$$ Therefore there are at most two roots, given by $\dfrac{-b + \sqrt{b^2-4ac}}{2a}$ and $\dfrac{-b - \sqrt{b^2-4ac}}{2a}$. Of course, if the discriminant $b^2-4ac$ is zero, there is only one solution. 2) How many solutions can an equation in quadratic form have? The answer is that it can be more than two. To see this, look at the equation $x^4-3x^2+2=0$. This is an equation in quadratic form, which we write as $X^2-3X+2=0$, where $X=x^2$, and get $X = 1,2$; the solutions are given by $1,-1,\sqrt{2},-\sqrt{2}$, which is more than two. I think this answers your question.
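A one-liner check in Python (numpy.roots takes the coefficient list of the quartic):

    import numpy as np

    # x**4 - 3*x**2 + 2 = 0 has coefficients [1, 0, -3, 0, 2]
    print(np.sort(np.roots([1, 0, -3, 0, 2]).real))
    # ~[-1.4142, -1, 1, 1.4142]: four solutions, more than two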
The discriminant determines the number of solutions a quadratic equation can have. Let $D = b^2-4ac$ (i.e. the discriminant). If $D=0$ then one real solution exists. If $D>0$ then two real solutions exist. If $D<0$ then two imaginary solutions exist. Note: the solution $x = 0$ means that the value $0$ satisfies the equation, so there is a solution; "no solution" means that there is no value, not even $0$, which would satisfy the equation. So, to answer your question, the maximum number of solutions is two.
What is the square root of $3 + 2\sqrt{10}i$? I need to compute the square root of $3 + 2\sqrt{10}i$. I know how to solve it, but for some reason I'm not getting the correct answer. I attempted to solve it like this: $$ \sqrt{3 + 2\sqrt{10}i} = x + iy \quad \longrightarrow \quad 3 + 2\sqrt{10}i = x^2 - y^2 +2xyi $$ and so forth, but my answer isn't correct.
As I explained here. there is a very simple formula for denesting such radicals, namely Simple Denesting Rule $\rm\ \ \ \ \color{blue}{subtract\ out}\ \sqrt{norm}\:,\ \ then\ \ \color{brown}{divide\ out}\ \sqrt{trace} $ $\ 3+2\sqrt{-10}\ $ has norm $= 49.\:$ $\rm\ \color{blue}{subtracting\ out}\,\ \sqrt{norm}\ = 7\,\ $ yields $\ {-}4+2\sqrt{-10}\:$ with $\, {\rm\ \sqrt{trace}}\, =\, 2\sqrt{-2}.\ \ \ \rm \color{brown}{Dividing\ this\ out}\ $ of the above we obtain $\,\ \sqrt{-2} + \sqrt 5$ Checking it we find $\,\ (\sqrt{-2} + \sqrt 5)^2 =\, -2+5 + 2\sqrt{-2}\sqrt 5\, =\, 3+ 2\sqrt{-10}$ Remark $\ $ Many more worked examples are in prior posts on this denesting rule.
You want $z=a+bi$ where $a,b\in\mathbb{R}$ such that $$z^2=a^2-b^2+2abi=3+2\sqrt{10}i.$$ Comparing coefficients, you need $a^2-b^2=3$ and $ab=\sqrt{10}$. So $a=\sqrt{5}$ and $b=\sqrt{2}$, or $a=-\sqrt{5}$ and $b=-\sqrt{2}$. Thus the square roots of $3+2\sqrt{10}i$ are: $$\pm(\sqrt{5}+\sqrt{2}i).$$
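A two-line numeric confirmation in Python (cmath.sqrt returns the principal square root):

    import cmath, math

    z = math.sqrt(5) + math.sqrt(2) * 1j
    print(z * z)                                   # (3+6.3245...j), i.e. 3 + 2*sqrt(10)*i
    print(cmath.sqrt(3 + 2 * math.sqrt(10) * 1j))  # (2.2360...+1.4142...j) = sqrt(5) + sqrt(2)*i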
Show that there is a continuous function $g$ over $\mathbb{R}$ such that $0 < g(x) \leq f(x)$. Suppose $f$ is a function over $\mathbb{R}$ such that $f(x) > 0$ for all $x \in \mathbb{R}$ and that $f$ is strictly decreasing. Show that there is a continuous function $g$ over $\mathbb{R}$ such that $0 < g(x) \leq f(x)$ and that $\displaystyle \lim_{x \to \infty} \frac{g(x)}{f(x)} = 0$. Attempt: We have $f(x) > f(y)$ if $x < y$ and $f(x) > 0$ for all $x \in \mathbb{R}$. I am not sure how to take into account the condition that $0 < g(x) \leq f(x)$, but the condition that $\displaystyle \lim_{x \to \infty} \frac{g(x)}{f(x)} = 0$ means that $f(x)$ grows faster than $g(x)$, so that intuitively fits the previous definition. Maybe proof by contradiction will work here?
For $n\in\Bbb N$ let $g(n)=\frac1nf(n+1)$ and linearly interpolate between these points. That is, let $$ g(x)=\begin{cases}f(2)&x\le 1\\ (x-\lfloor x\rfloor)\frac1{\lceil x\rceil}f(\lceil x\rceil+1)+(\lfloor x\rfloor +1-x)\frac1{\lfloor x\rfloor}f(\lfloor x\rfloor +1)&x> 1\end{cases}$$ and verify that it is indeed continuous (the only "problematic" points being $x\in\Bbb N$). Also, $g$ is clearly positive. For $n\le x<n+1$ we have $g(x)\le \frac 1nf(n+1)$ and thus $0<\frac{g(x)}{f(x)}\le \frac 1n$. We conclude that $\lim_{x\to\infty}\frac{g(x)}{f(x)}=0$. Alternatively: The function $h(x)=\frac{f(x)}{\max\{1,x\}}$ is also positive and strictly decreasing. It suffices to find continuous $g$ with $0<g(x)<h(x)$. Such $g$ can be found by "mollification", e.g. simply $$g(x)=\int_0^1 h(x+t)\,\mathrm dt $$ The key point is that the monotonicity of $h$ implies integrability and that the integral is continuous as a function of $x$.
For simplicity, only consider $x \ge 0$. Since $f(x)$ is strictly decreasing, there are countably many jump points $x_0 = 0 < x_1 < x_2 < \ldots < x_n < \ldots$ for $f(x)$. $f(x)$ is continuous on $(x_i, x_{i+1})$, and $f(x)$ has a finite right limit $f(x_i^+)$ at $x_i$ and a finite left limit $f(x_{i+1}^-)$ at $x_{i+1}$. We notice that $f(x_i^-) > f(x_i^+)$. Now we can construct a continuous function $G(x)$ from $f(x)$ by connecting $f(x_i - \epsilon_i)$ to $f(x_i^+)$ with a straight line, and otherwise setting $G(x) = f(x)$, where $0 < \epsilon_i < (x_i - x_{i - 1})$. The final $g(x) = G(x)/(x + 1)$. Clearly, $$0 < g(x) \le f(x),$$ and $$\lim_{x \to \infty} \frac{g(x)}{f(x)} \le \lim_{x\to\infty} \frac{1}{1 + x} = 0 $$
How can we show that $2^3+1^3+3^3+4^3+\cdots+L_n^3={L_nL_{n+1}^2+5(-1)^n[L_{n-1}+2(-1)^n]+9\over 2}$? In my recent post I intended to ask this question. $\ldots,3,-1,2,1,3,4,7,\ldots$ for $n=\ldots,-2,-1,0,1,2,3,4,\ldots$: these are the Lucas numbers. Let the sum of the cubes of the Lucas numbers be $S_L$: $S_L=2^3+1^3+3^3+4^3+\cdots+L_n^3$. Show that it has the closed form $$S_L={L_nL_{n+1}^2+5(-1)^n[L_{n-1}+2(-1)^n]+9\over 2}$$ My try: I can't think of any simple identities to use. These are the only ones that might have some sort of link to it: $L_{n+1}^3+L_{n+2}^3={L_{n+3}\over 2}(L_n^2+L_{n+1}^2+L_{n+2}^2)$ and $2^2+1^2+3^2+\cdots+L_n^2=L_nL_{n+1}+2$. Any further hints?
This answer uses that $$L_n^2-L_{n-1}L_{n+1}=5(-1)^n\tag1$$ whose proof can be seen at the end of this answer. Using $(1)$, we get $$\begin{align}&{L_nL_{n+1}^2+5(-1)^n[L_{n-1}+2(-1)^n]+9\over 2}\\\\&=\frac{L_nL_{n+1}^2+5(-1)^nL_{n-1}+10(-1)^{2n}+9}{2}\\\\&=\frac{L_nL_{n+1}^2+(L_n^2-L_{n-1}L_{n+1})L_{n-1}+10+9}{2}\\\\&=\frac{L_nL_{n+1}^2-L_{n-1}^2L_{n+1}+L_{n-1}L_n^2+19}{2}\end{align}$$ So, this answer proves by induction that $$\sum_{k=0}^{n}L_k^3=\frac{L_nL_{n+1}^2-L_{n-1}^2L_{n+1}+L_{n-1}L_n^2+19}{2}$$ It holds for $n=0$. Supposing that it holds for some $n\ (\ge 0)$ gives that $$\begin{align}\sum_{k=0}^{n+1}L_k^3&=L_{n+1}^3+\sum_{k=0}^{n}L_k^3\\\\&=L_{n+1}^3+\frac{L_nL_{n+1}^2-L_{n-1}^2L_{n+1}+L_{n-1}L_n^2+19}{2}\\\\&=L_{n+1}^3+\frac{L_nL_{n+1}^2-(L_{n+1}-L_n)^2L_{n+1}+(L_{n+1}-L_n)L_n^2+19}{2}\\\\&=\frac{L_{n+1}^3+2L_nL_{n+1}^2+L_n^2L_{n+1}-L_n^3-L_n^2L_{n+1}+L_nL_{n+1}^2+19}{2}\\\\&=\frac{L_{n+1}(L_{n+1}+L_n)^2-L_n^2(L_n+L_{n+1})+L_nL_{n+1}^2+19}{2}\\\\&=\frac{L_{n+1}L_{n+2}^2-L_n^2L_{n+2}+L_nL_{n+1}^2+19}{2}\qquad\blacksquare\end{align}$$ Let us prove $(1)$ by induction. It holds for $n=0$. Supposing that it holds for some $n\ (\ge 0)$ gives that $$\begin{align}5(-1)^{n+1}&=5(-1)^n\cdot (-1)\\\\&=-(L_n^2-L_{n-1}L_{n+1})\\\\&=-L_n^2+L_{n-1}(L_{n-1}+L_n)\\\\&=-L_n^2+L_{n-1}^2-L_nL_{n-1}+2L_nL_{n-1}\\\\&=L_{n-1}^2+2L_nL_{n-1}-L_n(L_n+L_{n-1})\\\\&=L_{n-1}^2+2L_nL_{n-1}-L_nL_{n+1}\\\\&=L_{n-1}^2+2L_nL_{n-1}+L_n^2-L_n^2-L_nL_{n+1}\\\\&=(L_{n-1}+L_n)^2-L_n(L_n+L_{n+1})\\\\&=L_{n+1}^2-L_nL_{n+2}\qquad\blacksquare\end{align}$$
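The identity is also easy to sanity-check numerically; here is a short Python sketch verifying the closed form for $n=0,\dots,20$, with $L_{-1}=-1$ as in the question:

    def lucas(n):
        a, b = 2, 1              # L_0, L_1
        for _ in range(n):
            a, b = b, a + b
        return a

    for n in range(21):
        lhs = sum(lucas(k)**3 for k in range(n + 1))
        L_prev = lucas(n - 1) if n >= 1 else -1      # L_{-1} = -1
        num = lucas(n) * lucas(n + 1)**2 + 5 * (-1)**n * (L_prev + 2 * (-1)**n) + 9
        assert 2 * lhs == num, n                     # twice the closed form
    print("closed form verified for n = 0..20")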
Some relations about Fibonacci and Lucas numbers:
Does $h \circ f = I = f \circ g \implies h = g$? Suppose that the identity function $I$ is defined as $I:X \to X$ such that $\forall x \in X$, $I(x) = x$. I was hoping that $f(g(x)) = h(f(x)) = x \implies g(x) = h(x)$ for a proof I am constructing. I am stuck as to where to begin. Any help would be greatly appreciated. Cheers.
$$ h = h\circ\operatorname{id}_X = h\circ (f\circ g) = (h\circ f)\circ g = \operatorname{id}_X\circ g = g $$
If $h(f(x))=x$ then it is also true that $f(h(x))=x$. Now if $f$ is injective, we are done. If not, there is no such inverse of $f$ on all of the reals, because if $a$ and $b$ have the same image under $f$, then $h$ would have to give different values for that image, which by definition is not a function.
Is this shape definition an ellipse? I want to define an ellipse-like shape with two radii. The radius at 0 degrees is 3. The radius at 90 degrees is 2. But in order to close the loop, we have to specify some smooth transition between them. If the transition is linear (radius at 45 degrees is 2.5), does that make it an ellipse? EDIT: "Linear" means the radius decreases linearly from 3 to 2 as the angle increases linearly from 0 to 90 degrees. So the equation is r = 2 + t/90 for quadrant 1. "The angle" means the angle made by the positive x axis and whatever radius we're looking at, centered at the origin. The loop I'm describing is also centered at the origin.
A common way to define an ellipse centered at the origin is $$\frac{x^2}{a^2} + \frac{y^2}{b^2} = 1$$ where $a$ is the horizontal semi-axis, and $b$ is the vertical semi-axis. You often see this written parametrically as $$\begin{cases} x(t) = a \cos(t) \\ y(t) = b \sin(t) \end{cases}$$ with $0 \le t \le 2 \pi$, but it is important to notice that $t$ is not an angle here. If we use $P = (x(t), y(t))$, $O = (0, 0)$, and $A = (1,0)$, the angle $\angle AOP$ is actually $\theta$, $$\theta = \arctan\left(\frac{y(t)}{x(t)}\right) = \arctan\left(\frac{b \sin(t)}{a \cos(t)}\right) = \arctan\left(\frac{b}{a}\tan(t)\right)$$ i.e. $\theta$ is the angle from the positive $x$-axis counterclockwise to the point. Conversely, $$t = \arctan\left(\frac{a}{b}\tan(\theta)\right)$$ In the circle case, $a = b = r$, and we have $t = \theta$, so for circles, $$\begin{cases} x(\theta) = r \cos(\theta) \\ y(\theta) = r \sin(\theta) \end{cases}$$ is true. If we assume the angle is measured counterclockwise from the positive $x$ axis as usual, then you could use $$\frac{x^2}{3^2} + \frac{y^2}{2^2} = 1 \tag{1}\label{1}$$ In general, we can also describe the same ellipse as a function of $x$, by solving equation $\eqref{1}$ for $y$: $$y(x) = \pm \frac{b}{a}\sqrt{a^2 - x^2} = \pm\frac{2}{3}\sqrt{3^2 - x^2}$$ At 45° from the origin (the center of the ellipse) we have $y = x$, and the distance to the center is $$r(45°) = \sqrt{x^2 + (y(x))^2} = \sqrt{\frac{2 a^2 b^2}{a^2 + b^2}} = \sqrt{\frac{72}{13}} \approx 2.3534$$ So, if your shape has $r(45°) = 2.5$, it is not an ellipse.
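A quick numeric version of that last computation (Python):

    import math

    a, b = 3.0, 2.0
    # on the 45-degree ray y = x, so x**2 * (1/a**2 + 1/b**2) = 1
    x = (1 / a**2 + 1 / b**2) ** -0.5
    print(math.sqrt(2) * x)  # ~2.3534, versus 2.5 for the "linear" transition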
The principal radii of curvature are $$ \frac{a^2}{b}, \quad \frac{b^2}{a}. $$ By looking at the ellipse's evolute, centers and radii of curvature close to those of the evolute can be chosen to make a smooth, tangential junction between two profiled ellipse approximations of any eccentricity. (Sketches of evolute-based ellipses.)
If $f$ is continuously differentiable in $[a,b]$, $f(a)=f(b)$, and $f'(a)=f'(b)$, then there exist $a<x_1<x_2<b$ such that $f'(x_1) = f'(x_2)$. This problem is from Apostol's Mathematical Analysis: Let $f$ be continuously differentiable in $[a, b]$. If $ f(a) = f(b)$ and if $f^{'}(a) = f^{'}(b)$, then prove that there exists $x_1$ and $x_2$ in $(a, b)$ such that $x_1\neq x_2$ but $f^{'}(x_1) = f^{'}(x_2)$. My try: By Rolle's Theorem $\exists x_0\in (a,b) $ such that $f^{'}(x_0)=0$. How to guarantee existence of $x_1,x_2 $ from here? Can it be solved from a geometrical point of view?
As you said, by Rolle's Theorem, we are guaranteed existence of $c \in (a,b)$ such that $f'(c) = 0$. WLOG, let $f'(a) = f'(b) = k > 0$. By the continuity of $f'$ and the intermediate value theorem, we get existence of $c_1 \in (a,c)$ such that $f'(c_1) = k_1$ where $0<k_1<k$. Similarly, we get $c_2 \in (c,b)$ such that $f'(c_2) = k_1$. Now, if $f'(a) = 0$, then let's look at some cases. If $f(c) \neq f(a)$, then the mean value theorem gives existence of $d \in (a,c)$ such that $\alpha = f'(d) = \frac{f(c) - f(a)}{c - a}\neq 0.$ Now, we can apply the above argument again. If $f(c) = f(a) = f(b)$, then if $f'$ is nonzero at some point in the interval, the above argument works. Otherwise, $f'$ is $0$ and the result follows.
(i) Let's suppose first that $f'(a)>0$ (then we'll treat the case $f'(a)=0$). Because of (i), you can find an $\eta > 0$ (as small as you like) such that $f(a + \eta) > f(a)$ and $f(b - \eta) < f(b)$. $f$ is continuous on $[a + \eta, b-\eta]$, therefore there exists $c \in [a + \eta, b - \eta]$ such that $f(c) = f(a) = f(b)$ ($f(c)$ lies in $[f(a+\eta), f(b-\eta)]$). $f$ is continuous on both intervals $[a, c]$ and $[c, b]$ and reaches its maximum at some $x_1$ and $x_2$ ($x_1$ in $(a, c)$ and $x_2$ in $(c, b)$, because $a$, $c$ and $b$ are not the maxima, thanks to $a+\eta$ and $b-\eta$). We have $f'(x_1) = f'(x_2) = 0$. The opposite reasoning for $f'(a)<0$ leads to the same conclusion. (ii) If $f'(a) = 0$, then by Rolle's theorem you know that you have a $c$ in $(a,b)$ with $f'(c) = 0$. Assume that $f(c) = f(a)$ (then we reapply this reasoning on $a$ and $c$ instead of $a$ and $b$ until finding a $c_n$ such that $f(c_n)$ is different from $f(a)$; if we cannot find one then $f$ is constant and any $c_1<c_2$ in $(a, b)$ will do). So suppose $f(a) < f(c)$, with $f'$ continuous on $[a, c]$ and $f'(a) = f'(c) = 0$ but $f'$ not identically $0$: there exists a $c_1$ in $(a, c)$ such that $f'(c_1)>0$, because $f(a) < f(c)$. Because of that, and since $f'$ is continuous on this interval, $f'$ reaches its maximum at some $d$ in $(a,c)$, and by continuity $f'$ takes the value $f'(d)/2$ at two points $x_1,x_2$, where $a<x_1<d<x_2<c<b$.
Differential equation: $y' = (x+1)/(xy+x)$. So, I have the following differential equation to solve: $$y' = \frac{x+1}{xy+x}$$ After several steps, I get here: $t^2 + 2t = 2x + 2\ln(x) + c$. How do I isolate $t$? Thank you! By the way, $t=f(x)=y$.
Start by dividing the numerator and denominator by $x$: $$ y'=\frac{1+\frac{1}{x}}{y+1} \\ y'(y+1)=1+\frac{1}{x} \\ \int y'(y+1)\,dx=\int 1+\frac{1}{x}\,dx \\ \frac{y^2}{2}+y=x+\ln(x)+c_0~\text{for a constant $c_0$} \\ y^2+2y=2x+2\ln(x)+2c_0 \\ y^2+2y+1=2x+2\ln(x)+2c_0+1 \\ (y+1)^2=2x+2\ln(x)+2c_0+1 \\ y+1=\pm \sqrt{2x+2\ln(x)+2c_0+1} \\ y=\pm \sqrt{2x+2\ln(x)+2c_0+1}-1.~_{\square} $$
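If you want to double-check the result, here is a minimal SymPy verification of the '+' branch (writing $C$ for the combined constant $2c_0+1$, and assuming $x>0$ so the logarithm is defined):

    import sympy as sp

    x, C = sp.symbols('x C', positive=True)
    y = sp.sqrt(2*x + 2*sp.log(x) + C) - 1

    # residual of y' = (x + 1)/(x*y + x); should simplify to 0
    residual = sp.diff(y, x) - (x + 1) / (x*y + x)
    print(sp.simplify(residual))  # 0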
Let $W$ denote Lambert's W-function. After taking exponentials of both sides: $$ Ae^{\frac12(t+1)^2} = xe^x = W^{-1}(x) $$ so we have $$ x = W(Ae^{\frac12(t+1)^2}) $$ If this is correct, then we have: $$ t = \sqrt{2\log( A^{-1}W^{-1}(x))}-1 $$
What are the interesting applications of hyperbolic geometry? I am aware that, historically, hyperbolic geometry was useful in showing that there can be consistent geometries that satisfy the first 4 axioms of Euclid's Elements but not the fifth, the infamous parallel postulate, putting an end to centuries of unsuccessful attempts to deduce the last axiom from the first ones. It seems to be, apart from this fact, of genuine interest, since it was part of the usual curriculum of all mathematicians at the beginning of the century and also because there are so many books on the subject. However, I have not found mention of applications of hyperbolic geometry to other branches of mathematics in the few books I have sampled. Do you know any, or where I could find them?
Maybe this isn't the sort of answer you were looking for, but I find it striking how often hyperbolic geometry shows up in nature. For instance, you can see some characteristically hyperbolic "crinkling" on lettuce leaves and jellyfish tentacles. My guess as to why this shows up again and again (and I am certainly not a biologist here, so this is only speculation) is that hyperbolic space manages to pack in more surface area within a given radius than flat or positively curved geometries; perhaps this allows lettuce leaves or jellyfish tentacles to absorb nutrients more effectively or something. EDIT: In response to the OP's comment, I'll say a little bit more about how these relate to hyperbolic geometry. One way to detect the curvature of your surface is to look at what the surface area of a circle of a given radius is. In flat (Euclidean) space, we all know that the formula is given by $A(r) = \pi r^2$, so that there is a quadratic relationship between the radius of your circle and the area enclosed. Off the top of my head, I don't know what the formula is for a circle inscribed on the sphere (a positively-curved surface), but we can get an indication that circles in positive curvature enclose less area than in flat space as follows: the upper hemisphere on a sphere of radius 1 is a spherical circle of radius $\pi/2$, since the distance from the north pole to the equator, walking along the surface of the sphere, is $\pi/2$. In flat space, this circle would enclose an area of $\pi^3/4 \approx 7.75$. But the upper hemisphere has a surface area of $2 \pi \approx 6.28$. By contrast, in hyperbolic space, a circle of a fixed radius packs in more surface area than its flat or positively-curved counterpart; you can see this explicitly, for example, by putting a hyperbolic metric on the unit disk or the upper half-plane, where you will compute that a hyperbolic circle has area that grows exponentially with the radius. So what happens when you have a hyperbolic surface sitting inside three-dimensional space? Well, all that extra surface area has to go somewhere, and things naturally "crinkle up". If you are at all interested, you can crochet hyperbolic planes (see, for instance, this article of David Henderson and Daina Taimina), and you'll see how this happens in practice.
I am not a mathematician, but my Barron's Dictionary of Mathematics Terms (Second Edition, 1995, edited by Douglas Downing) says one example of hyperbolic function use is the "catenary": the curve formed by a flexible rope hanging between two points.
Explain why perpendicular lines have negative reciprocal slopes. I am not sure how to explain this. I just know they have negative reciprocals because one line will have a positive slope while the other has a negative slope.
Translate two lines so that their intersection is the origin and then take two vectors along each line, say $u=(1,k_1), v=(1,k_2)$. The two lines are perpendicular if and only if $u\perp v$, viz $$u\cdot v=1+k_1k_2=0$$ This explains why $k_1$ is the negative reciprocal of $k_2$.
Two 90-degree rotations will snap back to how it was originally: suppose you have line A, line A' which is perpendicular to A, and line A'' which is perpendicular to A' (all on the Cartesian plane). A and A'' are parallel (since A'' is A rotated ±90 degrees, and even the ± is a silly distinction, as rotating a line 90 degrees clockwise is the same as rotating it 90 degrees counterclockwise), which means that if perpendicular slopes have a hard and fast formula, they had better be reciprocals or negative reciprocals (or the same slope, but that's garbage), as otherwise p(p(m)), the perpendicular of the perpendicular of the slope, will not equal the original slope. By inspection, in a pair of perpendicular lines, one has a positive slope and one has a negative slope (I'm not accounting for the student who asks "What about zero?"). Then the only valid option is negative reciprocals. In my opinion, this is the best way to show it to a middle school student that allows them to teach it to other middle school students (but geometry/trigonometry should eventually show it formally). Rotating a piece of paper 90 degrees and describing how it transforms rise and run is my second favorite.
Another method of finding area of hypocycloids. I was finding the area of hypocycloids. Then it struck me that, apart from integration, there could be another method of finding the area of hypocycloids with different numbers of cusps. But the problem is I am not getting my answer right. So could somebody please help me tell if my logic is wrong altogether or if I am making some other mistake? Here is the other method I am talking about: take the case of a deltoid - we can make a deltoid by taking an equilateral triangle and, on all three of its vertices, drawing circles whose radius is half the side of the triangle. Now the figure left in the middle is a deltoid. Similarly, we could use a regular polygon with n sides to draw hypocycloids with n cusps. (See the pictures below - the red coloured drawing in the middle is the hypocycloid.) Is this idea wrong? Thank you so much :)
I think you assume that hypocycloids consist of circle arcs, but this is not the case. Consider e.g. the astroid: it is an algebraic curve of degree 6, defined by $$ (x^2+y^2-a^2)^3+27a^2x^2y^2=0 $$ which is nowhere close to a circle. To come back to your example: the deltoid is defined by this equation of degree 4: $$(x^2+y^2)^2+18a^2(x^2+y^2)-27a^4 = 8a(x^3-3xy^2)$$ which also is certainly not composed of circles.
I used software to produce the hypocycloid and then I tried this with an astroid. Here is what happened (picture: the hypocycloid being drawn): as you can see, the circumference of the circle doesn't go over the shape of the hypocycloid. It doesn't work like that, unfortunately.
Asymptotics of the maximum of quantized standard normals. This is a problem from my measure-theoretic probability class. Problem: Given independent standard normals $Z_1,...,Z_n$, let $X_i$ be the nearest integer to $Z_i$. Let $M_n$ be the maximum of $\{X_1,..., X_n\}$. Show that there exists an integer sequence $\{a_n\}$ and a sequence of probabilities $\{p_n\}$ such that $P(M_n = a_n) \sim p_n$ and $P(M_n = a_n + 1) \sim 1 - p_n$. The symbol '$\sim$' means that the ratio of the left and right sides tends to 1 as $n$ goes to infinity. Show that $p_n$ does not converge as $n$ goes to infinity. Context: The professor assigned this problem after we discussed how the CDF of the maximum of standard normals can asymptotically be written as a double exponential. He took $x = \sqrt{ 2 \log(n) - \log(\log(n)) + c }$ in order to show that the asymptotic distribution of the max of Gaussians is $\exp(- \exp(-c/2) / (2 \sqrt{2 \pi}))$. He then went on to say that for discrete random variables "everything breaks." My ideas: I'm not really sure where to start, especially considering that the examples from class were all for continuous random variables. I think I understand the gist of the proposition: asymptotically, the maximum of the $X_i$ will be in a window of two integers with probability 1. The choice of $x$ for the continuous case is exotic and I think that the discrete case will require similar "magic." Any hints or ideas? Thanks!
One looks for an integer sequence $(a_n)$ such that, when $n\to\infty$, $P[M_n\leqslant a_n-1]\to0$ and $P[M_n\leqslant a_n+1]\to1$. For every $x$, $P[M_n\leqslant x]=\Phi(x+\frac12)^n$. When $x\to+\infty$, $\Phi(x)\to1$ hence $$ \log\Phi(x)\sim-(1-\Phi(x))\sim-1/(\sqrt{2\pi}x\mathrm e^{x^2/2}). $$ Thus, one looks for some integer sequence $(a_n)$ such that $x\mathrm e^{x^2/2}\ll n$ for $x=a_n-\frac12$ and $x\mathrm e^{x^2/2}\gg n$ for $x=a_n+\frac32$. Equivalently, one asks that $$ a_n\mathrm e^{a_n^2/2-a_n/2}\ll n\ll a_n\mathrm e^{a_n^2/2+3a_n/2}, $$ that is, $$ \tfrac12a_n^2+\tfrac12a_n\pm a_n+\log a_n-\log n\to\pm\infty. $$ Assume that $$ a_n=\lfloor\sqrt{2\log n}\rfloor, $$ then $\sqrt{2\log n}-1\leqslant a_n\leqslant\sqrt{2\log n}$ hence $$ \tfrac12a_n^2-\tfrac12a_n\leqslant \log n-\tfrac12\sqrt{2\log n}+O(1), $$ and $$ \tfrac12a_n^2+\tfrac32a_n\geqslant \log n+\tfrac12\sqrt{2\log n}+O(1), $$ hence the sequence $(a_n)$ yields the result. The fact that $a_n$ stays constant during longer and longer intervals, then jumps to $a_n+1$, and similar estimates to those above, probably entail that $\liminf\limits_{n\to\infty}p_n=0$ and $\limsup\limits_{n\to\infty} p_n=1$.
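None of this is needed for the proof, but a small Monte Carlo illustration (Python/NumPy, parameters chosen for speed) shows the concentration on two consecutive integers at a finite $n$; here $a_n=\lfloor\sqrt{2\log n}\rfloor = 4$ for $n = 10^5$:

    import numpy as np

    rng = np.random.default_rng(0)
    n, reps = 10**5, 400
    a_n = int(np.sqrt(2 * np.log(n)))   # floor(sqrt(2 log n)) = 4

    # empirical law of M_n: max of n rounded standard normals, repeated
    maxima = [int(np.rint(rng.standard_normal(n)).max()) for _ in range(reps)]
    vals, counts = np.unique(maxima, return_counts=True)
    print(a_n, dict(zip(vals.tolist(), (counts / reps).tolist())))
    # most of the mass sits on a_n and a_n + 1 (roughly 0.7 / 0.3 at this n)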
Since "everything breaks" for discrete random variables, do this in terms of the continuous random variables $Z_j$. Start by relating $M_n$ to the maximum of the $Z_j$.
There exists strictly increasing $\{x_n\}$ that converges to $\sup E$ I need to prove that, if $E \subseteq \mathbb{R}$ is a non-empty bounded set and $\sup E \not\in E$ then there exists a strictly increasing sequence $\{x_n\}$ that converges to $\sup E$ such that $x_n \in E$ for all $n \in \mathbb{N}$. I've been trying to find a clue in the textbook, but couldn't. I don't even know how to start the proof. Could someone please give a clue?
Hint: Firstly, choose any element $x_1 \in E$. As stated, $\sup E \notin E \implies x_1 < \sup E$. Using the approximation property for suprema, for each $n \geq 2$ there exists $x_n \in E$ such that $\max(\sup E -\frac{1}{n},x_{n-1})<x_n<\sup E$, so that the sequence $x_1<x_2<x_3<...$ is strictly increasing. Then go on to apply the squeeze theorem and you should be home and dry. Edit: To show that the sequence above is increasing, start at $n=1$: there exists an $x \in E$ such that $\sup E - 1 < x$; denote this $x = x_1$. Now, for $n=2$ there exists an $x_2 \in E$ such that $\max(\sup E - \frac{1}{2}, x_1) < x_2$, so clearly $x_2 > x_1$. This works for any $n$, so there exists a sequence $x_n \in E$ with the property that $\sup E - \frac{1}{n}<x_{n}$.
I would phrase a hint as this: If there were no sequence approaching $\sup E$, how could $\sup E$ be the supremum?
Exponential of a Quartic. I know there is a related post regarding this, but does anyone know a 'closed' form solution for the integral \begin{equation}I=\int_{-\infty}^{\infty} dx\,e^{ax-bx^{4}}\end{equation} I know you can do a series expansion of the quartic term, but I'd like to find a way that avoids that. The integral should converge as $e^{-bx^{4}}$ dominates at large $x$ and so the integrand quickly goes to zero (for $b>0$ obviously). Any help/references would be greatly appreciated.
Integrating by parts leads to $$I=\left.x\cdot e^{ax-bx^4}\right\vert_{-\infty}^\infty-\int_{-\infty}^\infty x(a-4bx^3)e^{ax-bx^4}\,dx=-a\frac{\partial I}{\partial a}-4b\frac{\partial I}{\partial b},$$but $\,\partial I/\partial b=-\int_{-\infty}^\infty x^4e^{ax-bx^4}\,dx\;$ also equals $-\partial^4I/\partial a^4$, so $I(a)+aI'(a)=(aI(a))'=4bI^{(4)}(a)$, or $\;aI(a)=C+4bI'''(a)$. Since $I(a)$ is even (and entire, of course), $I'(0)$ and $I'''(0)=0$ and so $C=0$; and using the integral formula for the gamma function, $$I(0)=2\int_0^\infty e^{-bx^4}\,dx=2\int_0^\infty e^{-y}b^{-1/4}\frac{dy}{4y^{3/4}}=2^{-1}b^{-1/4}\Gamma(1/4)$$ and $$I''(0)=2\int_0^\infty x^2e^{-bx^4}\,dx=2\int_0^\infty b^{-1/2}y^{1/2}e^{-y}\frac{b^{-1/4}dy}{4y^{3/4}}=2^{-1}b^{-3/4}\Gamma(3/4)$$I plugged all of this into Wolfram|Alpha and got the same result as @GEdgar (with $x,a$ replacing, respectively, $a,b$).
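A numeric check of the boundary values $I(0)$ and $I''(0)$ used above (Python/SciPy, taking $b = 1$):

    import numpy as np
    from scipy.integrate import quad
    from scipy.special import gamma

    b = 1.0
    I0, _ = quad(lambda x: np.exp(-b * x**4), -np.inf, np.inf)
    I2, _ = quad(lambda x: x**2 * np.exp(-b * x**4), -np.inf, np.inf)
    print(I0, gamma(1/4) / (2 * b**0.25))  # both ~1.8128
    print(I2, gamma(3/4) / (2 * b**0.75))  # both ~0.6127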
This is called a Gaussian-type integral; it is not integrable by elementary means. First you need to make the substitution $a \rightarrow ax$ so that the integral is easier. Now the integral is $$\int _{-\infty} ^{\infty} e^{ax^{2} -b x^{4}} dx$$ Now this is an easier integral if you make the substitution $x \rightarrow \sqrt{x}$, because the integral is now $$\int _{-\infty} ^{\infty} e^{ax - b x^{2}} dx$$ Ok, now we are really on track, because you can complete the square. Rewrite the integral as $$\int _{-\infty} ^{\infty} e^{-b(x-\frac{a}{2b})^{2}}e^{-(ax)^2} dx$$ Now we are almost done, just pull out the last factor to get $$e^{-(ax)^2} \int _{-\infty} ^{\infty} e^{-b(x-\frac{a}{2b})^{2}} dx = e^{-(ax)^2} \sqrt{\pi}$$ Now we have to evaluate somewhere because this is a function of x whereas the original integral is not. I picked 0, but any other value should work. So, the answer is $\sqrt{\pi}$.
Proof that a product of two quasi-compact spaces is quasi-compact without Axiom of Choice A topological space is called quasi-compact if every open cover of it has a finite subcover. Let $X, Y$ be quasi-compact spaces, $Z = X\times Y$. The usual proof that $Z$ is quasi-compact uses a maximal filter, hence Axiom of Choice. Can we prove it without using Axiom of Choice? Edit(Apr. 14, 2014) If I am not mistaken, I have come up with a proof without using Axiom of Choice. I would like to post it as an answer. Edit(Apr. 15, 2014) I was mistaken. As Andres Caicedo pointed out, I used AC without noticing it in my "proof".
Here is a proof which is choice free. I should also mention that the proof using ultrafilters appears in Herrlich's The Axiom of Choice, where later he says that it can be modified to work without the axiom of choice using some ideas from other proofs. I did not check that claim in detail, though. First, let us prove the following lemma: Lemma. Let $X$ be a topological space and $\cal B$ a basis for the topology. $X$ is quasi-compact if and only if every cover with elements of $\cal B$ has a finite subcover. Proof. One direction is trivial, if $X$ is quasi-compact, certainly every cover with elements of $\cal B$ has a finite subcover. In the other direction, suppose that $\mathcal U=\{U_i\mid i\in I\}$ is an open cover, consider $\cal U'$ to be the refined cover, $\{V\in\mathcal B\mid\exists i\in I: V\subseteq U_i\}$. Then $\cal U'$ is an open cover as well, since every point in $U_i$ lies within some element of $\cal B$. Let $V_1,\ldots,V_n\in\cal U'$ be a finite subcover, then we can choose $U_1,\ldots,U_n\in\cal U$ such that $V_i\subseteq U_i$ for all $i\leq n$, and this is a finite subcover as wanted. $\ \square$ Now we can prove our theorem. Let $X,Y$ be two quasi-compact spaces, and let $\cal U$ be an open covering of $X\times Y$ using rectangles (that is, sets of the form $U\times V$ where $U$ is open in $X$ and $V$ open in $Y$). We say that $A\subseteq X$ is adequate [for $\cal U$] if $A\times Y$ has a finite subcover in $\cal U$. Our goal is to show that $X$ is adequate, then $X\times Y$ can be covered by a finite subcover of $\cal U$. First we show that if $x\in X$, then there is some $U\subseteq X$ which is open and $x\in U$ such that $U$ is adequate. Let $U_1\times V_1,\ldots,U_n\times V_n$ be a finite subcover such that $\{x\}\times Y\subseteq U_1\times V_1\cup\ldots\cup U_n\times V_n$ (such a finite cover exists since $\{x\}\times Y$ is quasi-compact). Now consider $U=\bigcap_{i=1}^n U_i$, then $U$ is open as a finite intersection of open sets, and non-empty since $x\in U$, as wanted. $U$ is adequate since given any $(u,y)\in U\times Y$ we have that for some $i\leq n$ it is true that $(x,y)\in U_i\times V_i$, so $(u,y)\in U_i\times V_i$ as well (since $u\in U_i$). Therefore $U_1\times V_1,\ldots,U_n\times V_n$ is a cover of $U\times Y$. Next we note that the finite union of adequate sets is adequate (as it is covered by the [finite] union of the [finite] subcovers of the adequate sets). Now $\{U\subseteq X\mid U\text{ is open and adequate}\}$ is an open cover of $X$, by the above fact that every $x\in X$ has an adequate neighborhood, and by quasi-compactness of $X$ it has a finite subcover. And therefore $X$ is the finite union of adequate sets and it is adequate as well. Finally, since rectangles form a basis for the product topology, from the lemma above we have that $X\times Y$ is indeed quasi-compact, as wanted. $\quad\square$ (I found the proof on ProofWiki sometime in the past.)
Let $X, Y$ be quasi-compact spaces, $Z = X\times Y$. We will prove that $Z$ is quasi-compact without using the Axiom of Choice. Suppose $(W_\alpha)_{\alpha\in A}$ is an open cover of $Z$. Then $W_\alpha = \bigcup_{\beta\in B_\alpha} U_{\alpha\beta} \times V_{\alpha\beta}$, where $U_{\alpha\beta}$ is an open subset of $X$ and $V_{\alpha\beta}$ is an open subset of $Y$. Let $x \in X$. Then $\{x\}\times Y$ is quasi-compact. Since the family of all $U_{\alpha\beta} \times V_{\alpha\beta}$ with $\alpha\in A$, $\beta\in B_\alpha$ is an open cover of $\{x\}\times Y$, there exists a finite subcover $U_i(x) \times V_i(x), i = 1, \cdots , n_x$ of $\{x\}\times Y$, where each $U_i(x) \times V_i(x)$ is of the form $U_{\alpha\beta} \times V_{\alpha\beta}$ for some $\alpha\in A$ and $\beta \in B_\alpha$. Let $U(x) = U_1(x) \cap \cdots \cap U_{n_x}(x)$. Since $(U(x))_{x\in X}$ is an open cover of $X$, there exists a finite subcover $U(x_1),\cdots, U(x_m)$ of $X$. Then $U(x_i) \times V_k(x_i)$, $i = 1, \cdots, m, k = 1,\cdots, n_{x_i}$ is a finite cover of $X\times Y$. Then for every pair $(i, k)$, there exist $\alpha \in A$ and $\beta \in B_\alpha$ such that $U(x_i) \times V_k(x_i) \subset U_{\alpha\beta} \times V_{\alpha\beta} \subset W_\alpha$. Hence there exists a finite subcover of $(W_\alpha)_{\alpha\in A}$. QED
Number of vectors in a set & span of a set I needed clarification on a linear algebra question that I had: Given the vectors $v_1 = \begin{bmatrix} 1 \\ 1 \\ 1 \\ \end{bmatrix}, $ $v_2 = \begin{bmatrix} 1 \\ -1 \\ 1 \\ \end{bmatrix}$ and $v_3 = \begin{bmatrix} 1 \\ 1 \\ -1 \\ \end{bmatrix}$, 1) How many vectors does the set $\{v_1, v_2, v_3\}$ have? 2) How many vectors are in $\operatorname{Span}\{v_1, v_2, v_3\}$? I think the answer to #1 is 3, simply because there are three vectors, and the answer to #2 is infinite, since there are an infinite number of linear combinations that can be made using these vectors. I am uncertain of these answers, though.
Construct the matrix $$ \mathbf{A} = \left[ \begin{array}{ccc} v_{1} & v_{2} & v_{3} \end{array}\right] = \left[ \begin{array}{rrr} \phantom{-}1 & 1 & 1 \\ 1 & -1 & 1 \\ 1 & 1 & -1 \end{array}\right] . $$ Because $\det \mathbf{A} = 4 \ne 0$, the vector set is linearly independent. The span of the vector set is $$ \text{span} \left\{ v_{1}, v_{2}, v_{3} \right\} = \mathbb{R}^{3}. $$ These vectors form a linearly independent spanning set, also called a basis or minimal spanning set, for $\mathbb{R}^{3}$.
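A quick numerical cross-check of this answer (my own sketch, not part of the original; it assumes NumPy is available):

import numpy as np

# Columns are v1, v2, v3.
A = np.array([[1, 1, 1],
              [1, -1, 1],
              [1, 1, -1]])

print(np.linalg.det(A))            # about 4: nonzero, so the columns are independent

# Since A is invertible, any w in R^3 is a combination of v1, v2, v3,
# which is what "span = R^3" means. An arbitrary test vector:
w = np.array([2.0, 0.0, 5.0])
coeffs = np.linalg.solve(A, w)     # coefficients c with c1*v1 + c2*v2 + c3*v3 = w
print(np.allclose(A @ coeffs, w))  # True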
Does there exist an integer $s$ such that every integer $> 1$ can be written as a sum of at most $s$ primes? Does there exist an integer $s$ such that every integer $> 1$ can be written as a sum of at most $s$ primes?
Terence Tao has proven that every odd number greater than one is the sum of at most five primes. Hence, using $3$ as a prime that we can subtract from any even number greater than $5$, and noticing that $2$ is prime and $4=2+2$, every even number is the sum of at most six primes. As a conclusion, every number greater than one is the sum of at most six primes. Of course, this is a very loose approximation. If it is proven, Goldbach's conjecture would allow us to lower the number of needed primes to $3$.
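An empirical check of this bound (my own sketch, not from the answer; the bound $N$ is arbitrary, and this only verifies, it does not prove, anything):

N = 2000

# Sieve of Eratosthenes up to N.
is_prime = [False, False] + [True] * (N - 1)
for p in range(2, int(N ** 0.5) + 1):
    if is_prime[p]:
        for m in range(p * p, N + 1, p):
            is_prime[m] = False

# min_summands[n] = least k such that n is a sum of k primes.
INF = float("inf")
min_summands = [INF] * (N + 1)
for p in range(2, N + 1):
    if is_prime[p]:
        min_summands[p] = 1
for n in range(4, N + 1):
    for p in range(2, n - 1):
        if is_prime[p]:
            min_summands[n] = min(min_summands[n], min_summands[n - p] + 1)

print(max(min_summands[2:]))  # prints 3 in this range, comfortably below 6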
A book has a few pages on which page numbers are written. Someone has torn one page out of it and now the average of all page numbers is $\frac{105}{4}$ I couldn't relate this question to any of the topics specifically; I found this in a miscellaneous math problems book (non-calculus). Here's how it goes: A book has a few pages on which page numbers are written. Someone has torn one page out of it and now the average of all page numbers is $\frac{105}{4}$. Answer the following: (i) If the total number of pages in the book is $n$, then find the value of $$\sum_{r=1}^{10} \biggl\lfloor{\frac{n+r}{r+1}}\biggr\rfloor\,.$$ OPTIONS: (A) 100 (B) 107 (C) 105 (D) 82 (ii) If the line $x+y=\bigl\lfloor\frac{n}{3}\bigr\rfloor$ is drawn, then the total number of points with integral co-ordinates enclosed within the region bounded by $x=0$, $y=0$ and $x+y=\bigl\lfloor\frac{n}{3}\bigr\rfloor$ is -----? (A) 105 (B) 153 (C) 59 (D) 78 STATUS: No clue how to start. Any help is welcome.
OK, so here is a solution to (i), in the case where we know that when a page is torn out, an odd page number and the following even page number are removed. Given the book has $n$ pages, the sum of the page numbers will be $$ T_n=1+2+...+n=\frac{n(n+1)}{2} $$ known as the $n$'th Triangular Number. So assume that page numbers $x$ and $x+1$ have been torn out, where $x$ is odd. Then we can write $$ \frac{105}{4}=\tfrac{1}{n-2}\left(T_n-(2x+1)\right) $$ Plugging in the formula for $T_n$ from above and solving for $x$ then yields $$ x=\frac{1}{4}n^2-\frac{103}{8}n+\frac{103}{4} $$ which is a quadratic expression in $n$ with zeros $n=\frac{103\pm\sqrt{8961}}{4}$, which approximately is $n=2.08$ and $n=49.42$. On the other hand it is obvious that $x<n$, which then in turn yields a quadratic inequality in $n$ that can be solved to see that $n<\frac{111+\sqrt{10673}}{4}\approx 53.58$. Having a slightly closer look at the expression for $x$ one realizes that $n-2$ must be divisible by $8$ for $x$ to be an integer. So unless we take $n=2$ (which actually works) we must have $49.42<n=50<53.58$ for $n$ to be an integer satisfying all requirements in that interval. Plugging $n=50$ into the expression for $x$ then yields $x=7,x+1=8$, and you can check as I did in the comments that the average of the remaining pages is really $\frac{105}{4}$. With $n=50$ one gets $$ \sum_{r=1}^{10}\left\lfloor\frac{50+r}{r+1}\right\rfloor=105 $$ So option (C) answers the first question correctly. Part (ii) Still using $n=50$ we get $\lfloor\frac{50}{3}\rfloor=16$ so that the line is $x+y=16$ or $y=16-x$. Together with the axes $y=0$ and $x=0$ this encloses a triangle with $17$ lattice points on the line $y=16-x$, since you can start from $(0,16)$ and move down right step-by-step to 'visit' 17 lattice points before you hit the x-axis. Similarly you hit $16$ lattice points following the same procedure from $(0,15)$. With this we get the answer to (ii), which is: $$ T_{17}=17+16+...+1=\frac{17\cdot 18}{2}=153 $$ So option (B) answers the second question correctly.
For 1, we have to assume the pages are consecutively numbered from $1$ to $n$ before a single page is removed (which removes two page numbers). The average of all the page numbers is $\frac {n+1}2$. The remaining number of page numbers must be a multiple of $4$ so we can get the denominator in the average. Clearly it will not move the average much, so the average is about $26$ and $n$ is about twice that. So we guess $n=54$ will work. The sum of all the pages is $\frac {54 \cdot 55}2=1485$. The sum of the remaining pages is $52 \cdot \frac {105}4=1365$, so the sum of the removed page numbers is $120$. Unfortunately there is no page with that sum. Trying $n=50$ gives a starting sum of $1275$ and an ending sum of $48 \cdot \frac {105}4=1260$, leading to removal of the page that includes numbers $7$ and $8$. It is then arithmetic to evaluate the sum, getting $105$ An approach without guessing, but with more work, is as follows: If we let $m$ be the sum of the two page numbers removed, we have $$\frac {\frac {n(n+1)}2-m}{n-2}=\frac {105}4\\ \frac {n(n+1)}2-m=\frac {105(n-2)}4\\2n^2+2n-4m=105n-210\\2n^2-103n+210-4m=0\\n=\frac 14\left(103 +\sqrt{103^2+32m-1680}\right)$$ and searching for $m$ that will make the square root an integer will get there (for $m=15$ the root is $\sqrt{9409}=97$, giving $n=50$).
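A brute-force confirmation of $n=50$ and pages $7,8$ (my own sketch; the search bound of $200$ is arbitrary):

from fractions import Fraction

# For each book length n and each torn leaf carrying page numbers x and x+1,
# test whether the remaining n-2 page numbers average exactly 105/4.
target = Fraction(105, 4)
for n in range(3, 200):
    total = n * (n + 1) // 2
    for x in range(1, n):
        if Fraction(total - (2 * x + 1), n - 2) == target:
            print(n, x, x + 1)  # prints: 50 7 8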
How does the classification using the 0-1 loss matrix method work? In this machine learning lecture the professor says: Suppose $\mathbf{X}\in\Bbb R^p$ and $g\in G$ where $G$ is a discrete space. We have a joint probability distribution $\Pr(\mathbf{X},g)$. Our training data has some points like: $(\mathbf{x_1},g_1)$, $(\mathbf{x_2},g_2)$, $(\mathbf{x_3},g_3)$ ... $(\mathbf{x_n},g_n)$ We now define a function $f(\mathbf{X}):\Bbb R^p \to G$. The loss $L$ is defined as a $K\times K$ matrix where $K$ is the cardinality of $G$. It has zeroes along the main diagonal and nonnegative entries elsewhere. $L(k,l)$ is basically the cost of classifying $k$ as $l$. An example of a $0$-$1$ loss function: \begin{bmatrix} 0 & 1 & 1 \\ 1 & 0 & 1 \\ 1 & 1 & 0 \end{bmatrix} $\text{EPE}(\hat{f}) = \text{E} [L(G,\hat{f})]$ (where $\text{EPE = Expected Prediction Error}$) $=E_\mathbf{X} E_{G|\mathbf{X}} \{L[G,\hat{f}]|\mathbf{X}\}$ $\hat{f}(\mathbf{x})=\text{argmin}_g\sum_{k=1}^{K}L(k,g)\text{Pr}(k|\mathbf{X}=\mathbf{x})=\text{argmax}_g\text{Pr}(g|\mathbf{X=x})$ $\hat{f}(\mathbf{x})$ is the Bayesian Optimal Classifier. I couldn't really follow what the professor was trying to say in some of the steps. My questions are: Suppose our loss matrix is indeed: \begin{bmatrix} 0 & 1 & 1 \\ 1 & 0 & 1 \\ 1 & 1 & 0 \end{bmatrix} What is the use of this matrix? What does classifying $k$ as $l$ even mean? Then how do we read off the loss for (say) a certain input $\mathbf{x_i}$ from the matrix? I couldn't understand what $\hat{f}$ and $\text{EPE}(\hat{f}(\mathbf{x}))$ stand for. Could someone please explain it with a simple example?
Consider a random variable $X$ and a random variable $g$. $X$ is uniformly distributed over $[0,1]$ if $g=1$ and is uniformly distributed over $[0.5,1.5]$ if $g=2$. $g$ takes the value $1$ with probability $0.1$. By this the joint distribution of $(X,g)$ is given. Assume that we have to guess $g$ after having observed $X$. If we miss then we have to pay $1$ rupee. If we don't then the penalty is $0$. It is clear that if $X$ is below $0.5$ then our guess for $g$ is $1$. If $X$ is above $1$ then our guess for $g$ is $2$. But what do we do if $X$ falls between $0.5$ and $1$? Now, define $\hat f$ the following way: let it be $1$ if $X < 0.7$ and $2$ otherwise. Could we do any better? Can you identify the current loss matrix? Can you compute the expected loss belonging to the method given above? Can you create a similar problem so that the loss matrix is the one given in the OP?
I think the loss matrix, for example the zero-one loss matrix on page 20 of the ESL book, can be treated as a square look-up table. The number of rows and the number of columns are both the number of all possible classes. Assume it is $K \times K$. In your example, $K = 3$. So there are three possible levels; let's say they are lvl-1, lvl-2, and lvl-3. If you have an observation of which the level is lvl-2 (truth), but your estimator is lvl-1, then we get a penalty since this is a misclassification, and (lvl-2, lvl-1) corresponds to the value that sits at the 2nd row and the 1st column, which is 1 in your table. You asked what "$k$ was classified as $l$" means. Here $k$ is the truth, which is lvl-2, and $l$ is an error, which is lvl-1 in my example. In the ESL book, the EPE (Expected Prediction Error) is $$EPE = E[L(G,\hat{G}(X))]$$ where $G$ and $\hat{G}(X)$ within $L(G, \hat{G}(X))$ can be treated as the row and column indices. Therefore $L(G, \hat{G}(X))$ can be considered a random variable. The sources of its randomness are $G$ and $X$. Let's assume that we observed an $X$ and the corresponding observed class $G$; then an estimator of $G$, that is the $\hat{G}(X)$, can be calculated by a certain technique. Then we go to the 'look-up' table, check the row index by the known $G$, and check the column index by the estimator $\hat{G}(X)$. If $G = \hat{G}(X)$ then we have 0 'penalty', 1 otherwise. This is determined because we have already observed both $X$ and $G$; no randomness remains thereafter. If we only observed an $X$, not $G$, you are still able to find a $\hat{G}(X)$, which is the column index, since you can just apply the technique on any $X$ as long as it is observed. But we don't know the 'penalty' because you need both row and column indices. Therefore we calculate the expectation of this random 'penalty'. The expectation is conditioned on the observed $X$. It is the weighted average of all possible 'penalties' (or ones, which mean misclassifications) that this 'penalty' can be, each 'penalty' being weighted according to the conditional probability of its occurrence. For example, here is the weight for the $k^{th}$ class: $Pr(g_{k}|X)$, where $k \in \{1, 2, ..., K\}$. Now we have: $$\tag{*} E[L(G, \hat{G}(X))|X]=\sum\limits_{k=1}^K [L(g_{k}, \hat{G}(X))Pr(g_{k}|X)]$$ The above "$| X$" means $X$ is known. But $X$ is also random, which means it is unknown until we observe one. And we are trying to find the EPE, not a conditional expectation like (*). Therefore we need to find the expectation of the above conditional expectation, which is: $$EPE = E_{X}\left(\sum\limits_{k=1}^K [L(g_{k}, \hat{G}(X))Pr(g_{k}|X)]\right)$$ So we should pick the estimator $\hat{G}(X)$ which minimizes the above EPE. This is my understanding; I think there must be an answer that is more mathematically solid.
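A small sketch of the look-up-table view (mine, not from either answer; the three classes and the posterior values are made-up illustrations, and only the argmin/argmax logic follows the lecture's formulas):

import numpy as np

K = 3
L = np.ones((K, K)) - np.eye(K)        # 0-1 loss: L[k, l] = cost of classifying true class k as l

# Suppose at some observed x the conditional class probabilities are:
posterior = np.array([0.2, 0.5, 0.3])  # Pr(G = k | X = x) for k = 0, 1, 2

# Expected loss of predicting class g: sum over k of L[k, g] * Pr(k | x)
expected_loss = L.T @ posterior
f_hat = np.argmin(expected_loss)

# For 0-1 loss, expected_loss[g] = 1 - posterior[g], so the argmin of the
# expected loss equals the argmax of the posterior:
assert f_hat == np.argmax(posterior)
print(f_hat)  # 1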
Find the number of connected components of $Y$ Let $Q,Q^c$ denote the sets of all rational and irrational numbers in $\mathbb {R}$, and let $Y=\mathbb {R}^2\setminus(Q\times Q^c)$ with the usual subspace topology of $\mathbb {R}^2$. Find the number of connected components of $Y$. My answer: I think $Y = \mathbb{R} \times \mathbb{R} \cup \mathbb{R} \times \mathbb{R}$. Since $Y$ is covered by lines in both coordinate directions, it is path connected, so $Y$ has one connected component. Is this correct or incorrect? Please tell me.
Notice that $Y =\mathbb{R}^2\backslash(\mathbb{Q}\times\mathbb{Q}^c) = (\mathbb{R}\times (\mathbb{R}\backslash \mathbb{Q}^c))\cup((\mathbb{R}\backslash \mathbb{Q})\times \mathbb{Q}^c) = (\mathbb{R}\times\mathbb{Q})\cup(\mathbb{Q}^c\times\mathbb{Q}^c).$ So, if $y=(a,b)\in Y$ and $b\in \mathbb{Q}^c$, then $a\in\mathbb{Q}^c$ and $\{y\}$ is the connected component of $y$. If $y'= (a,b)\in Y$ and $b\in\mathbb{Q}$, then $a$ can be any real number, so $\{(t,b),\ t\in \mathbb{R}\}$ is the connected component of $y'$.
Do you need real analysis to understand complex analysis? I'm debating whether I should take a course in complex analysis (using Bak as a text). I've already taken Munkres-level topology and "very light" real analysis (proving the basic theorems about calculus) using the text by Wade. The complex analysis course is supposedly difficult and will even cover the Prime Number Theorem in the end. Do you think it's better to take Rudin-level real analysis first?
If the course teaches complex analysis from a geometric perspective, emphasizing the properties of analytic maps of the plane as a "calculus of oriented angles" (as my undergraduate complex analysis course did), then believe it or not, you'll need very little if any real analysis except for certain results (like Cauchy's theorem and convergent series). For example, a good way to think of the derivative in the complex plane is as a sequence of "infinitesimal" rotations of a tangent line to a circle centered at a point in the Argand plane, where the sequence of rotated tangent lines converges to the point by contracting in length along increasingly smaller subcircles. Also, most of the standard transformational geometry of the Euclidean plane has very elegant reformulations in terms of the standard analytic functions of the plane, such as the complex exponential in plane polar coordinates. If the course focuses on these aspects of elementary complex analysis, you'd be better off brushing up on your basic geometry than real analysis! However, if the course develops complex analysis via a rigorous development of the complex plane as a metric or normed space and focuses on infinite series, then that's a different story and you'll need a lot more rigorous real analysis to get comfortable with it.
'Real analysis' may refer to either 'elementary analysis' such as in Trench Real Analysis (RIP) or, well, real analysis such as in Royden Fitzpatrick Real Analysis. The former is necessary, but the latter is not except possibly a few topics such as uniform convergence, but I guess first courses in complex analysis will teach uniform convergence as if for the first time just as in real analysis. A few parts may depend on the school/course/textbook, but overall I believe elementary analysis is sufficient.
Independence in Coupon Collecting Problem If $T_i$ is the number of cards that we draw from a deck before seeing the $(i+1)$th new card (after seeing the $i$th new card), how can I show that $T_i$ and $T_j$ are independent (for $i \neq j$)? I know the definition of independence, but I don't know how to get a workable second definition to compare to $P(T_j)P(T_i)$.
In both cases you have a geometric distribution. If there are $n$ cards in total, the chance of success after having seen $i$ cards is $\frac {n-i}n$. This is clearly independent of $j$.
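A quick simulation consistent with this (my own sketch; $n$, the indices and the trial count are arbitrary, and a near-zero sample covariance only illustrates, rather than proves, independence):

import random

n, i, j, trials = 10, 3, 7, 100_000
random.seed(0)

def waiting_times(n):
    # T_k = number of draws, after the k-th new card, up to and including
    # the draw that reveals the (k+1)-th new card.
    seen, times, draws = set(), [], 0
    while len(seen) < n:
        draws += 1
        c = random.randrange(n)
        if c not in seen:
            seen.add(c)
            times.append(draws)
            draws = 0
    return times

samples = [waiting_times(n) for _ in range(trials)]
ti = [s[i] for s in samples]
tj = [s[j] for s in samples]
mean = lambda xs: sum(xs) / len(xs)

print(mean(ti), n / (n - i))   # sample mean of T_i vs. geometric mean 1/p = n/(n-i)
cov = mean([a * b for a, b in zip(ti, tj)]) - mean(ti) * mean(tj)
print(cov)                     # close to 0, as independence requires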
Geometrical Proof of a Rotation I want to prove geometrically (and not by linear algebra, doing transformations of bases) the result of the rotation of a point. The proof should only include geometrical steps like using similarity between triangles, the Pythagorean theorem and the definition of cos and sin on a triangle, for example. This picture illustrates my question: http://postimg.org/image/z60zv5d83/ Thanks in advance.
Let $(x,y)$ be the original coordinates, and $(x',y')$ the later ones. Then $$x'=ax+by,~y'=cx+dy$$ as they are linear combinations. Notice that $$x^2+y^2=(x')^2+(y')^2$$ Replacing $x',y'$ by their expressions in $x,y$ in the above equation, we get $$a^2+c^2=b^2+d^2=1,~ab+cd=0$$ Letting $a=\cos\theta$ and doing some work gives the result. As required by the OP, I will give an alternative method using vector decomposition: Let $\vec{OA}=(x,y)$ be the original vector, $\vec{OB}=(x',y')$ the vector after rotation by angle $\theta$. Then $$(x',y')=\vec{OB}=\vec{OC}+\vec{OD}=(x,y)\cos\theta+(y,-x)\sin\theta.~~\mbox{(DONE!)}$$
No transformation of bases is needed. Let $R:=R_\theta$ be the rotation about the origin by angle $\theta$. All that is needed is that this is a linear transformation. This can be viewed as a purely geometrical statement: $0$ goes to $0$. Rotation preserves line segments and takes triangles to triangles (corresponding to $R(a+b)=R(a)+R(b)$ for vectors $a,b$). Rotation commutes with all zooms from the origin, i.e. $R(\lambda a)=\lambda R(a)$. Let $e_1=(1,0)$ and $e_2=(0,1)$. Now, by the definition of sine and cosine, we have that $R(e_1)=(\cos\theta,\sin\theta)$, and please check (geometrically, i.e. by drawing) that $R(e_2)=(-\sin\theta,\cos\theta)$. Then, for our arbitrary point $P=(d,f)$, rotating it about $0$ is the same as rotating the $\vec{OP}$ vector, so by linearity, we have $$R(P)=R(d\cdot e_1+f\cdot e_2)=d\cdot R(e_1)+f\cdot R(e_2)\,,$$ and we're there.
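A quick numerical sanity check of this linearity argument (my own sketch, not a geometric proof; $\theta$ and the sample point are arbitrary):

import numpy as np

theta = 0.7
# Columns are R(e1) and R(e2), exactly as derived above.
R = np.column_stack([[np.cos(theta), np.sin(theta)],
                     [-np.sin(theta), np.cos(theta)]])

p = np.array([3.0, -2.0])
q = R @ p   # R(d*e1 + f*e2) = d*R(e1) + f*R(e2)

print(np.isclose(np.linalg.norm(p), np.linalg.norm(q)))  # True: length preserved
angle = np.arctan2(q[1], q[0]) - np.arctan2(p[1], p[0])
print(np.isclose(angle % (2 * np.pi), theta))            # True: turned by theta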
Deleting any digit yields a prime... is there a name for this? My son likes his grilled cheese sandwich cut into various numbers; which number depends on his mood. His mother won't indulge his requests, but I often will. Here is the day he wanted 100: But today he wanted the prime 719, which I obliged. When deciding which digit to eat first, he went through the choices, trying to make a composite with the digits left behind. But he quickly realized that eating any digit would leave a prime: 71, 79, 19 are all prime. Pleased with his discovery of this prime 719, he tried to find a larger one, but couldn't. My questions: Do these primes have a name? Can you think of any more of them (clearly 23 is the smallest)? Are there an infinite number of them? Is there likely to be a way to find them short of using a computer?
Thanks to everyone for the interesting answers. Here's an improved heuristic that makes use of various parts of them -- the basic approach from Matthew, Fixee's observation of the repeated digits, and Numth's observations about restrictions on the digits. It's still a bit off, but I think it may be further improved taking into account Numth's observation 2; I've only taken into account observation 1 (mod $2$ and $5$) so far; I think the mod $3$ part will need to be a bit more subtle; more on that later. Consider numbers with $m$ digits consisting of $k$ strings of repeated digits. For instance, $38800999$ would have $m=8$ and $k=4$. The number of such numbers is $9^k\left(m-1\atop k-1\right)$: We have $9$ choices for the first repeated digit (we can't use $0$) and $9$ choices for the remaining ones (we can't use the previous one), and there are $\left(m-1 \atop k-1\right)$ different ways to divide the repetitions. To check that this is right, we can calculate the total number of numbers with $m$ digits: $$\sum_{k=1}^m 9^k\left({m-1 \atop k-1}\right)=9\sum_{k=0}^{m-1} 9^k\left({m-1 \atop k}\right)=9(1+9)^{m-1}=9\cdot10^{m-1}\;,$$ which is correct. Like Matthew, I'll use $1/\log x$ as the "probability" of a number to be prime. This isn't quite right, since this is the total density below $x$, whereas we need the marginal density, $(x/\log x)'=1/\log x - 1/\log^2 x$. I'll do the analytic calculations using just $1/\log x$, which gives an upper bound and is thus good enough to show that the sum converges, but I'll give some numerical results using the more precise formula in the end. So the probability of one of these numbers being prime is $1/\log x$, which we can bound from above by $1/\log 10^m=1/(m\log 10)$. Now we need the probability that the $k$ different numbers that we can get by deleting one of the digits are also prime. Note that for all but the last digit, the deletion doesn't change the last digit. One of the main reasons why Matthew's heuristic significantly underestimates the abundance of these numbers is that we need to include a factor of $5/2$ for each digit except the last: For each of these digits, given that the original number is prime and the modified number has the same last digit, we already know it isn't divisible by $2$ or $5$, which raises its chances of being prime by a factor of $10/|\{1,3,7,9\}|=5/2$. So multiplying all these probabilities, we have one factor of $1/(m\log 10)$ for the original number to be prime, $k$ factors of $1/(m\log 10)$ for all the modified numbers to be prime, and $k-1$ factors of $5/2$ to account for the last digit. (We could bound the probability for the modified numbers by $1/((m-1)\log 10)$, but I want to keep things simple first to derive the convergence; I'll get back to that in the numerical estimates). Now we have all we need to write an upper bound for the sum of the "probabilities" over all numbers: $$ \begin{eqnarray} && \sum_{m=2}^{\infty}\sum_{k=1}^{m}9^k\left({m-1 \atop k-1}\right)\left(\frac{1}{m\log10}\right)^{k+1}\left(\frac{10}{4}\right)^{k-1} \\ &=& \frac{9}{\log^2 10}\sum_{m=2}^{\infty}\frac{1}{m^2}\sum_{k=0}^{m-1}\left({m-1 \atop k}\right)\left(\frac{45}{2m\log10}\right)^k \\ &=& \frac{9}{\log^2 10}\sum_{m=2}^{\infty}\frac{1}{m^2}\left(1+\frac{45}{2m\log10}\right)^{m-1} \\ &<& \frac{9}{\log^2 10}\sum_{m=2}^{\infty}\frac{1}{m^2}\mathrm{e}^{45/(2\log10)} \\ &=& \frac{9}{\log^2 10}\left(\frac{\pi^2}{6}-1\right)\mathrm{e}^{45/(2\log10)} \\ &\approx& 19191 \;. 
\end{eqnarray} $$ Although this is quite a lot bigger than Matthew's result, it's still finite. The bound is not very tight, for three reasons: I didn't use the fact that the modified numbers have one fewer digit; I dropped the $1/\log^2x$ term; and the exponential bound isn't tight for small $m$. To get a better estimate, let's keep the $1/\log^2x$ term and the power instead of the exponential, and let's roughly estimate the probabilities of the original and modified numbers being prime by using $1/((m-1)\log10)$ for all of them -- that overestimates the probability for the original number and underestimates it for the modified numbers, so it should give a better estimate. Putting this all together yields the following estimate for the number of $m$-digit numbers, Fixee's $c(m)$, with $j:=m-1$: $$9\left(\frac{1}{j\log10}-\frac{1}{(j\log 10)^2}\right)^2\left(1+\frac{45}{2}\left(\frac{1}{j\log10}-\frac{1}{(j\log 10)^2}\right)\right)^j\;.$$ Here's a plot. The numbers are reasonably close to the actual ones, but still too low, even at $m=9$, i.e. $j=8$, where the approximations should be reasonable and the maximum is almost attained, but the value is around 11 whereas the actual values are all around 16. P.S.: I think I figured out how to correctly take into account the restrictions mod $3$; I'll be posting that later in the day. P.P.S.: I just realized that part of the remaining underestimation stems from the fact that the last digit should also get a factor of $5/2$ if it's repeated. I think that together with the mod $3$ part might get the result roughly in line with the actual numbers. [Update:] I've produced some more numerical data and improved the estimate as described, and the two are now in quite satisfactory agreement. For taking into account the restrictions mod $2$, $3$ and $5$, the general approach is to take the original number as given, with the generic probability of being prime, and then to analyze the modified numbers under the condition that the original number is prime. We need to distinguish two cases depending on whether the last digit is repeated. The repeated case is easier to analyze, so I'll treat that first. So assume that the last digit is repeated. That excludes one place at which to divide the repetitions, so there are $9^k\left(m-2 \atop k-1\right)$ such numbers. In this case, each digit, including the last one, gets a factor of $5/2$ to account for the fact that the corresponding modified number is not divisible by $2$ or $5$. To help in getting the more complicated case of a non-repeated last digit right, it's worthwhile stating more explicitly how this factor of $5/2$ arises from a conditional probability. We can consider the probabilities that a number is coprime to all primes in a set $S$ and that it is coprime to all primes not in $S$ as independent, so that the probability of the number being prime is the product of these two probabilities. Then the probability of a number being coprime to $2$ and $5$ is $\lvert\{1,3,7,9\}\rvert/10=2/5$, and the probability of the number being prime is $p=(2/5)q$, where $q$ is the probability of it being coprime to all primes other than $2$ and $5$. If we estimate $p$ using the overall density of primes but we know that the number is coprime to $2$ and $5$, then the conditional probability of it being prime is $q=(5/2)p$. This is relatively obvious in the present case, but this way of looking at it will be helpful in dealing with the case of a non-repeated last digit. Now let's look at the mod $3$ restrictions. 
The last digit is known to be $1$, $3$, $7$ or $9$. If we delete a $3$ or a $9$, the number will still not be divisible by $3$. If we delete a $1$ or a $7$, there is a 50% chance of the number becoming divisible by $3$. Thus, the conditional probability of the modified number being coprime to $3$ is $3/4$, and this has to be divided by the unconditional probability of it being coprime to $3$, which is $2/3$, to obtain the factor $(3/4)/(2/3)=9/8$ by which we need to multiply the unconditional probability estimate. Now consider the remaining digits, starting from the end. Each digit is different from the one after it, and the one after it is known to be one of the $7$ digits allowed mod $3$. That leaves $6$ allowed digits and $3$ forbidden ones, for a conditional probability of $2/3$. This is equal to the unconditional probability, so we don't need to include any factors to account for the mod $3$ restrictions on the remaining digits. That completes the considerations for the case of a repeated last digit. Let's call the estimate for the unconditional probability of a number with $m$ digits to be prime $p_m$ (more on that below); then we can write the sum of the probabilities for all $m$-digit numbers with repeated last digit as $$ \begin{eqnarray} && \frac{9}{8}p_m\sum_{k=1}^{m-1}9^k{\left(m-2 \atop k-1\right)} \left(\frac{5}{2}\right)^kp_{m-1}^k \\ &=& \frac{9}{8}\frac{5}{2}9p_mp_{m-1}\sum_{k=0}^{m-2}9^k{\left(m-2 \atop k\right)} \left(\frac{5}{2}\right)^kp_{m-1}^k \\ &=& \frac{405}{16}p_mp_{m-1}\left(1+\frac{45}{2}p_{m-1}\right)^{m-2}\;, \end{eqnarray} $$ where the original sum now only runs up to $m-1$ since the last digit cannot be repeated for $k=m$. Now let's turn to the slightly more complicated case where the last digit isn't repeated. That reduces both the number of digits and the number of places at which to divide the repetitions, so there are $9^k\left(m-2 \atop k-2\right)$ such numbers. The sum works out as it should, since $\left(m-2 \atop k-1\right)+\left(m-2 \atop k-2\right)=\left(m-1 \atop k-1\right)$. If the last digit doesn't repeat, we don't know whether deleting the last digit makes the number divisible by $2$ or $5$, and there is correlation between the events of the last deletion leaving the number coprime to $2$ and $5$ and the last two deletions leaving it coprime to $3$. Thus we need to determine the conditional probability that all three of these events occur given that the original number is prime. Again, the last digit can be $1$, $3$, $7$ or $9$. But now the penultimate digit also has to be one of these, since otherwise the deletion of the last digit would render the number divisible by $2$. Thus we have $16$ combinations for the last two digits, $4$ of which are excluded because the digits cannot be the same. We have two different cases to consider, depending on whether $1$ and $7$ (which have the same remainder mod $3$) are allowed or not. Each of these cases occurs with probability $1/2$, depending on whether the remainder of the original number mod $3$ is $1$ or $2$. (Strictly speaking, the probabilities for these two cases are also slightly correlated with the probabilities being determined, but I believe this correlation should decay quickly as $m$ increases.) So with probability $1/2$, only $3$ and $9$ are allowed, so the only combinations for the last two digits are $39$ and $93$. In the other case, all $4$ digits, and hence all $12$ combinations, are allowed. 
Thus, on average $7$ combinations of the last two digits lead to the last two deletions leaving the number coprime to $2$, $3$ and $5$, out of the $4\cdot9=36$ that are possible given that the original number is prime. That yields a conditional probability $7/36$, which we need to divide by the unconditional probability $(\lvert\{1,7,11,13,17,19,23,29\}\rvert/30)^2=(4/15)^2$ of two numbers being coprime to $2$, $3$ and $5$ to get the factor $(7/36)(15/4)^2=175/64$ by which we need to multiply the unconditional probability. All remaining digits contribute a factor of $5/2$ because deleting them doesn't make the number divisible by $2$, and we don't need any correcting factors for the mod $3$ restrictions for the remaining digits, for the same reasons as above, so we're done. Putting it all together, we have for the sum of the probabilities for all $m$-digit numbers with non-repeated last digit: $$ \begin{eqnarray} && \frac{175}{64}p_m\sum_{k=2}^{m}9^k{\left(m-2 \atop k-2\right)} \left(\frac{5}{2}\right)^{k-2}p_{m-1}^k \\ &=& 9^2\frac{175}{64}p_mp_{m-1}^2\sum_{k=0}^{m-2}9^k{\left(m-2 \atop k\right)} \left(\frac{5}{2}\right)^kp_{m-1}^k \\ &=& \frac{14175}{64}p_mp_{m-1}^2\left(1+\frac{45}{2}p_{m-1}\right)^{m-2}\;, \end{eqnarray} $$ where the original sum starts at $2$ since the last digit necessarily repeats for $k=1$. The ratio between the two results is $(35/4)p_{m-1}\approx(35/4)/((m-1)\log 10)\approx 3.8/(m-1)$, so for small $m$ there are more numbers with the last digit not repeated and for large $m$ there are more with the last digit repeated, in agreement with the numerical data. We can bound $p_m$ by $1/\log 10^m$ from above as before, and then show as before that for both cases the sum over the probabilities for all $m$ converges. To get a better estimate, we can take $p_m$ to be the average density of primes for all $m$-digit numbers, which is $$p_m=\frac{\frac{10^m}{\log10^m}-\frac{10^{m-1}}{\log10^{m-1}}}{10^m-10^{m-1}}=\frac{1}{9\log 10}\left(\frac{10}{m}-\frac{1}{m-1}\right)\;.$$ Here are plots of the repeated case, the non-repeated case and the total. The maxima are at $m=18$, $7$ and $15$, with maximal values of $21$, $8$ and $26$, respectively. The expected total counts (summed numerically) are $1794$, $209$ and $2003$, respectively. Thus, we can expect there to be around $2000$ of these numbers in total; about half of these have $74$ digits or more. Here's a table comparing the actual counts for $3$ to $11$ digits (using the numerical data I report further down) to the above estimates: $$ \begin{array}{|c|c|c|c|} m&\text{last digit repeated}&\text{last digit not repeated}&\text{total}\\\hline\\ \begin{array}{c} \mathrm{\vphantom{actual}\vphantom{estimate}}\\ \\ 2\\ 3\\ 4\\ 5\\ 6\\ 7\\ 8\\ 9\\ 10\\ 11\\ \end{array} & \begin{array}{c|c} \mathrm{actual}&\mathrm{estimate}\\ \hline\\ 0&\\ 1&4\\ 3&6\\ 7&8\\ 12&11\\ 5&13\\ 8&15\\ 11&16\\ 21&17\\ 16&19\\ \end{array} & \begin{array}{c|c} \mathrm{actual}&\mathrm{estimate}\\ \hline\\ 4&\\ 10&6\\ 11&7\\ 9&8\\ 6&8\\ 8&8\\ 6&8\\ 7&8\\ 1&7\\ 7&7\\ \end{array} & \begin{array}{c|c} \mathrm{actual}&\mathrm{estimate}\\ \hline\\ 4&\\ 11&10\\ 14&13\\ 16&16\\ 18&19\\ 13&21\\ 14&22\\ 18&24\\ 22&25\\ 23&25\\ \end{array} \end{array} $$ The agreement turns out to be quite good, though there seems to be a slight overestimation now. 
I have no explanation for this, since the only systematic effect I can think of that I haven't taken into account is the correlation between the size of the original number and the size of the modified numbers, which should increase rather than decrease the probabilities. (In case you're wondering whether this is a case of changing the theory until it fits the data, it was actually the other way around: I got several incorrect results for the non-repeated case that would have removed the overestimation, but I believe the above analysis is the correct one.) Here are all the numbers with up to $11$ digits (up to the 4,239,555,920th prime); I checked that they coincide with the ones already reported, but I'm putting them all here to have them together in one place: 23, 37, 53, 73, 113, 131, 137, 173, 179, 197, 311, 317, 431, 617, 719, 1013, 1031, 1097, 1499, 1997, 2239, 2293, 3137, 4019, 4919, 6173, 7019, 7433, 9677, 10193, 10613, 11093, 19973, 23833, 26833, 30011, 37019, 40013, 47933, 73331, 74177, 90011, 91733, 93491, 94397, 111731, 166931, 333911, 355933, 477797, 477977, 633317, 633377, 665293, 700199, 719333, 746099, 779699, 901499, 901997, 944777, 962233, 991733, 1367777, 1440731, 1799999, 2668999, 3304331, 3716633, 4437011, 5600239, 6666437, 6913337, 7333331, 7364471, 7391117, 13334117, 22255999, 33771191, 38800999, 40011197, 40097777, 44333339, 49473377, 79994177, 86000899, 93361493, 94400477, 99396617, 99917711, 110499911, 144170699, 199033997, 222559399, 333904433, 461133713, 469946111, 640774499, 679774391, 680006893, 711110111, 716664317, 743444477, 889309999, 900117773, 982669999, 999371099, 999444431, 1113399311, 1133333777, 1176991733, 1466664677, 1667144477, 1716336911, 2350000999, 3336133337, 3355522333, 3443339111, 3973337999, 4111116011, 4900001111, 6446999477, 6666116411, 6689899999, 6914333711, 7463333477, 8555555599, 8888333599, 8936093833, 9746666477, 10000000097, 10666610333, 11100077711, 11793334733, 19000019333, 23525555899, 30001114937, 33008299999, 33110666399, 33399911777, 37796941199, 40470660977, 44434133339, 46661333333, 46666133333, 55888856893, 61077333377, 66664441373, 66700447613, 66993413393, 71111164499, 77443733111, 99444901133 Some interesting specimens are 10000000097 (which is the only one with lots of $0$s after the first digit, showing that there's no significant effect from the overestimation of the size of the modified numbers in such a case) and the pair 46661333333 and 46666133333. Here's the Java program I used to find these (using a bit set as Fixee suggested to cram as many primes as possible into my 8GB). The computation took half an hour on a MacBook Pro i7. 
public class EdiblePrimes {

    static class LongBitSet {
        int [] bits;
        public LongBitSet (long nbits) {
            bits = new int [(int) ((nbits + 0x1f) >> 5)];
        }
        public void clear (long bit) {
            bits [index (bit)] &= ~mask (bit);
        }
        public void set (long bit) {
            bits [index (bit)] |= mask (bit);
        }
        public boolean get (long bit) {
            return (bits [index (bit)] & mask (bit)) != 0;
        }
        private static int mask (long bit) {
            return 1 << (bit & 0x1f);
        }
        private static int index (long bit) {
            return (int) (bit >> 5);
        }
    }

    final static long max = 0x1800000000L;
    final static long maxIndex = max >> 1;

    public static void main (String [] args) {
        LongBitSet composite = new LongBitSet (maxIndex); // bit at index n says whether 2n+1 is composite
        composite.set (0); // 1 isn't prime
        int maxDivisor = (int) (Math.sqrt (max) + 1);
        for (int divisor = 3; divisor < maxDivisor; divisor += 2)
            if (!composite.get (divisor >> 1))
                for (long multiple = (3 * divisor) >> 1; multiple < maxIndex; multiple += divisor)
                    composite.set (multiple);
        outer:
        for (long n = 11; n < max; n += 2) {
            if (!composite.get (n >> 1)) {
                long power = 1;
                do {
                    long nextPower = 10 * power;
                    long modified = n % power + (n / nextPower) * power;
                    if (modified != 2 && ((modified & 1) == 0 || composite.get (modified >> 1)))
                        continue outer;
                    power = nextPower;
                } while (power < n);
                System.out.println (n);
            }
        }
    }
}

[Update:] Since the probability has a simple $k$ dependence, we can calculate the expectation value for $k$, i.e. the expected number of strings of repeated digits. For the shifted sums over $k$, we have $$ \begin{eqnarray} && \frac{ \sum\left(m-2 \atop k\right) k q^k }{ \sum\left(m-2 \atop k\right) q^k } \\ &=& \frac{ q\frac{\partial}{\partial q}\sum\left(m-2 \atop k\right)q^k }{ \sum\left(m-2 \atop k\right) q^k } \\ &=& \frac{ q\frac{\partial}{\partial q}(1+q)^{m-2} }{ (1+q)^{m-2} } \\ &=& \frac{ (m-2)q(1+q)^{m-3} }{ (1+q)^{m-2} } \\ &=& \frac{m-2}{1+q^{-1}}\;. \end{eqnarray} $$ With $q\approx 45/(2(m-1)\log10)$, this becomes $$ \frac{m-2}{1+\frac{2(m-1)\log10}{45}}\to_{m\to\infty}\frac{45}{2\log{10}}\approx 10\;. $$ We shifted $k$ down by $1$ and $2$ for numbers with last digit repeated and not repeated, respectively. Thus, if we don't count a non-repeated last digit as a separate string, we get the same expectation value in both cases, namely $1$ more than the above value. Thus, in the limit of large $m$, there will be about $11$ strings of repeated digits on average (not counting non-repeated last digits), but the values for small $m$ are not near the limit because of the large factor of $45$. Here's a comparison of the actual averages with the estimated expectation values (using the above better estimate for $p_m$): $$ \begin{array}{|c|c|c|} \hline\\ m&\text{actual}&\text{estimated}\\ \hline\\ 2&1.0&\\ 3&1.9&1.8\\ 4&2.8&2.5\\ 5&3.5&3.1\\ 6&3.8&3.6\\ 7&4.1&4.1\\ 8&4.4&4.5\\ 9&5.3&4.8\\ 10&5.4&5.1\\ 11&5.7&5.4 \end{array} $$ Here, too, the agreement is quite good, but there seems to be a slight systematic error left.
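For a quick cross-check of the small entries of this list, a short Python sketch (mine, not part of the original answer; it assumes sympy is available, and the bound $10^6$ is arbitrary and far smaller than the Java search above):

from sympy import isprime

def qualifies(n):
    # n qualifies if n is prime and deleting any single digit leaves a prime
    # (leading zeros are dropped by int(), e.g. 1013 -> "013" -> 13).
    s = str(n)
    return isprime(n) and all(isprime(int(s[:i] + s[i + 1:]))
                              for i in range(len(s)))

print([n for n in range(10, 10**6) if qualifies(n)])
# [23, 37, 53, 73, 113, 131, 137, 173, 179, 197, 311, 317, 431, 617, 719, ...]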
Assuming the relaxation that the original number need not be prime (treated in an answer) digit repetitions give large solutions. Playing with 2 decimal digits gave the $332$ digit composite solution 11111111111111111111111111111111111111111111111111111111999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999
How to prove that $j^\mu=\cos(\frac{\mu\pi}{2})+j\sin(\frac{\mu\pi}{2})$? I am trying to read a journal paper and I need to understand some math operations. At this time, I find it hard to understand this equation: $j^\mu=\cos(\frac{\mu\pi}{2})+j\sin(\frac{\mu\pi}{2}),\quad j=\sqrt{-1},\ \mu\in \mathbb R.$ Is this something from the polar form of complex numbers, or some kind of Euler formula? Last year, I asked my professor to help me go through this part. But I still don't feel comfortable with it. Based on some answers: $j= e^{j \frac{\pi}{2}}$ is the part that I find hard to understand. I know some basic complex number operations and I often use MATLAB to solve math problems. Thanks, Bo
Updating my answer, as the OP has said that $\mu\in \mathbb R$, so the assumption I made in my previous answers below that $\mu \in \mathbb Z$ no longer holds. The first question to ask is, "What does $j^{\mu}$ even mean if $\mu$ is not an integer?" There are three numbers that satisfy the equation $x^3=j$, and while you could argue that one of them should be the value $j^{1/3}$, which of the three values should you take? One way around this is by appealing to the exponential function, $\operatorname{exp}(x)=e^x=\sum \frac{x^n}{n!},$ where the sum is our actual definition, and $e^x$ is suggestive shorthand. One can establish various properties like $e^{a+b}=e^a e^b$, which mean that $e^x$ behaves like normal exponentiation in cases where things are simple, i.e., if $e^x=\alpha$, then $e^{nx}=\alpha^n$, and $\left(e^{x/n}\right)^n=\alpha$. Since $e^x>0$ whenever $x\in \mathbb R$, this allows us to define $x^{1/n}$ as $e^{\log x/n}$ for positive $x$, which always spits out the real $n$th root instead of one of the other $n-1$ complex roots we could have taken. It also allows us to define $x^{\mu}=e^{\mu\log x}$ for positive $x$ even when $\mu$ is not rational. By continuity of $e^x$, we have that if $(r_i)$ is a sequence of rational numbers approaching $\mu$, then, taking the exponential function to define $x^{r_i}$, we have $x^{r_i}\to x^{\mu}$, and so this definition fits in with just saying "define things as roots of polynomials, then extend using continuity." However, we run into problems defining $x^{\mu}$ when $x$ is not a positive real number. I don't want to get into all the ways that things can go wrong, but what I am about to do is not wholly sufficient. By using the sum formula, we can define $e^x$ not just for $x\in \mathbb R$, but actually for $x\in \mathbb C$. By looking at the Taylor series for $\sin$ and $\cos$ (or otherwise), we can establish that $e^{jx}=\cos x + j\sin x$, and we can check that $e^{a+b}=e^a e^b$ still holds when $a$ and $b$ are complex numbers. Now, we can answer your question. Since $e^{j\pi/2}=j$, we can define $j^{\mu}=e^{j\pi \mu/2}=\cos(\pi\mu/2)+j\sin(\pi \mu/2)$, and this will agree with the values you would get for $j^{\mu}$ when $\mu$ is an integer. However, this is not the only way to go... Since $e^{2j\pi}=1$, we also have that $e^{j\pi/2+2jk\pi}=j$ whenever $k$ is an integer, and so we could define $j^{\mu}=e^{j\mu(\pi/2+2k\pi)}$ for some integer $k$. This will give us the same values when $\mu$ is an integer, but will give different values when $\mu$ is not. While one can argue that $k=0$ gives the simplest possibility, there isn't a compelling argument (that I can think of) to say that it is the "right" definition in any real sense. Just to throw something interesting in, the above argument shows that there are an infinite number of possibilities for what the value $j^j$ should be. However, all of them are real numbers! Old answers below. 
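A quick numerical illustration of the principal-branch ($k=0$) definition (my own sketch; Python's complex power also uses the principal branch, so the values agree, and $\mu=0.3$ is an arbitrary non-integer):

import cmath, math

mu = 0.3
lhs = 1j ** mu                                          # Python's principal-branch power
rhs = cmath.exp(1j * math.pi * mu / 2)                  # e^{j*pi*mu/2}
trig = math.cos(math.pi * mu / 2) + 1j * math.sin(math.pi * mu / 2)

print(abs(lhs - rhs) < 1e-12, abs(lhs - trig) < 1e-12)  # True True

# The curiosity at the end: the principal value of j^j is real.
print(1j ** 1j)                                         # (0.20787957635076193+0j)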
You can prove the formula by induction, but for the induction step you are going to need the following result, which is easy enough to prove by expanding things out, regrouping terms, and making an appeal to standard trig identities: $$ \left(\cos(\theta)+j\sin(\theta)\right)\left(\cos(\varphi)+j\sin(\varphi) \right)=\cos(\theta+\varphi)+j\sin(\theta+\varphi)$$ A second approach that you could take (because you are only asking about $j^{\mu}$ and not powers of a different complex number) is to verify that both the left and right hand sides of your equation only depend on the remainder of $\mu$ when divided by $4$, and then break things up into four cases, depending on if that remainder is $0, 1, 2$, or $3$.
$j^\mu=e^{j\pi\mu/2}=\cos(\pi\mu/2)+j\sin(\pi\mu/2)$
What was the notation for functions before Euler? According to the Wikipedia article, [Euler] introduced much of the modern mathematical terminology and notation, particularly for mathematical analysis, such as the notion of a mathematical function. -- Leonhard Euler, Wikipedia What was the notation for functions before him?
I scanned through parts of Newton's Principia found online, and was surprised that a search for the word "function" did not yield any results at all. There do appear to be equations acting as what we would call functions, such as when he describes force; we see things such as $$F=\frac {2h^2}{SP^2}\cdot \frac {QR}{QT^2}$$ and $$R=\frac {\frac 1 2 L}{1+e\cos ASP}$$ (on page 223) but he refers to these as equations, not functions, and admittedly (written the way they are) that is exactly what they are. It seems anything that we would today write as a function, Newton described in words, such as: If a hyperbolic orbit be described under the action of a repulsive force tending from the center, the force varies as the distance and the velocity at any point as the diameter of the conjugate hyperbola parallel to the tangent at the point. Or he used words within his equation: $$\text{Velocity at P}=\frac {h.VA}{SP^2}$$ This last one almost assuredly would be written as a function if presented in a modern textbook. Newton is certainly not the only source one should consider, but it does give an idea of what was going on right before Euler began publishing. The information on this website, which unfortunately does not include specific sources, indicates that Bernoulli proposed that $\phi$ or $\phi x$ be used as the notation for a function, and Euler introduced $f(x)$. Edit: Reference #11 from David Renfro's answer gives references for the statements made about Bernoulli and Euler on the website, as described in the last paragraph above. In my brief skim of Newton's Principia, I also found exactly what was described in reference #11 to be true, specifically that the arguments were motivated almost exclusively from analytic geometry, and that what we consider a "function" was really only considered a variable, as is indicated in the few examples above. I would recommend reading #11; it explains in good detail what you would like to know, I think.
Let's observe an example: $a)$ formal description of a function (two-part notation): $f : \mathbf{N} \rightarrow \mathbf{R}$, $n \mapsto \sqrt{n}$ $b)$ Euler's notation: $f(n)=\sqrt{n}$ I don't know who introduced the two-part notation, but I think that this notation must be older than Euler's notation since it gives more information about the function, and therefore the two-part notation is closer to the correct definition of a function than Euler's notation. There is also a good Wikipedia article about notation for differentiation.
Show that the power series is a solution to $f''(x) + f(x) = 0$ Show that $$f(x)=\sum_{n=0}^\infty \frac{(-1)^nx^{2n}}{(2n)!}$$ is a solution of: $$f''(x)+f(x)=0$$ It seems that $f''(x)$ is just $-f(x)$ $\Rightarrow$ $-f(x)+f(x)=0$, but when I try to get the solution I get a weird power series that is not $-f(x)$. Can anyone tell me what went wrong? See below: $$f'(x)=\sum_{n=1}^\infty \frac{(-1)^nx^{2n-1}\cdot(2n)}{(2n)!}$$ $$f''(x)=\sum_{n=2}^\infty \frac{(-1)^nx^{2n-2}\cdot(2n)\cdot(2n-1)}{(2n)!}$$ $$f''(x)=\sum_{n=0}^\infty \frac{(-1)^{n+2}x^{2n+2}\cdot(2n+4)\cdot(2n+3)}{(2n+4)!}$$ $$=\sum_{n=0}^\infty \frac{(-1)^{n+2}x^{2n+2}}{(2n+2)!}$$ Anyone know what mistake I made? I need $f''(x)$ to be $-f(x)$
You have made mistakes in changing $n$ to $n+2$. In $f''(x)$, $x^{2n-2}$ then becomes $x^{2n+2}$ and $(2n)(2n-1)$ becomes $(2n+4)(2n+3)$. Also $(2n)!$ becomes $(2n+4)!$ Answer for the edited version: To compare your series for $f''(x)$ with the series for $f(x)$, you have to change $n$ to $n-1$ (so that $x^{2n+2}$ becomes $x^{2n}$).
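A symbolic check with a truncated partial sum (my own sketch; SymPy is assumed, and the truncation order $N$ is arbitrary): differentiating the partial sum twice and adding it back leaves only a single top-order term, which is exactly what vanishes as $N\to\infty$.

import sympy as sp

x = sp.symbols('x')
N = 8
f = sum((-1)**n * x**(2*n) / sp.factorial(2*n) for n in range(N + 1))
residual = sp.expand(sp.diff(f, x, 2) + f)
print(residual)   # x**16/20922789888000, i.e. (-1)^N x^(2N)/(2N)!; nothing else survives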
$f''$ has to start with $n=1$, not $n=2$. And be more careful when reindexing to $0$: the power of the alternating sign will shift by $1$, the power of $x$ by two.
$x_1$ such that given sequence $(x_n)$ converges Let $(x_n)_{n \in \mathbb N}$ be a sequence in $\mathbb R$ such that $|4-x_{n+1}| < q|4-x_n|^2$, $\ q > 0$ Under what circumstances for the initial value $x_1$ can we guarantee that $(x_n)_{n\in \mathbb N}$ converges?
One can show by induction that $$ \forall n\in\mathbb{N},\,|4-x_{n+1}|\leqslant \frac{1}{q}\,\bigl(q\,|4-x_1|\bigr)^{2^n}. $$ Hence, if $q\,|4-x_1|<1$, then $\lim\limits_{n\rightarrow +\infty}x_n=4$. In particular, when $q\leqslant 1$ this holds for every $x_1\in\,]3,5[$, since then $|4-x_1|<1$; if $x_1\in\{3,5\}$, then $|4-x_1|=1$ and the sequence converges (to $4$) provided $q<1$. If $q\,|4-x_1|\geqslant 1$, then $(q\,|4-x_1|)^{2^n}$ does not tend to $0$, and since the hypothesis is only an upper bound, we can't say anything.
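A numerical illustration of this bound (my own sketch; I iterate the extreme case with equality, $|4-x_{n+1}| = q\,|4-x_n|^2$, and $q$, $x_1$ are arbitrary choices with $q\,|4-x_1|<1$):

q, x = 0.9, 4.8                     # |4 - x1| = 0.8, and q * 0.8 = 0.72 < 1
e1 = abs(4 - x)
for n in range(1, 7):
    bound = (q * e1) ** (2 ** (n - 1)) / q
    print(n, x, abs(4 - x), bound)  # the error abs(4 - x) stays within the bound
    x = 4 + q * (4 - x) ** 2        # extreme case of the hypothesis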
How can a transformation be a linear transformation without linearity? My teacher at the university gave me a question I could not understand completely. Here is the question: Let $T: \mathbb R^3 \to P[x]$ be a linear transformation with $$T([1, 0, 0])=x+1, \quad T([0, 1, 0])=x^2-x, \quad T([0, 0, 1])=x^2,$$ find $T([a, b, c])$; also find the standard matrix $A$ for the transformation. The part that I did not understand is how $T([0, 1, 0])$ and $T([0, 0, 1])$ can be linear, since they have $x^2$. Also the $T([1, 0, 0])$ term has a constant. Those violate the linear transformation rules. Don't they?
As $$[a,b,c]=a[1,0,0]+b[0,1,0]+c[0,0,1],$$ by linearity $$T([a,b,c])=aT([1,0,0])+bT([0,1,0])+cT([0,0,1])=a(x+1)+b(x^2-x)+cx^2.$$ The linearity is not in $x$, but in $a,b,c$.
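A small numerical companion (my own sketch; the ordering of the basis $\{1, x, x^2\}$ for the coefficient rows is my choice, and $a,b,c$ are arbitrary):

import numpy as np

# Column i of A holds the coefficients of T(e_i) in the basis {1, x, x^2}:
# T(e1) = x + 1, T(e2) = x^2 - x, T(e3) = x^2.
A = np.array([[1,  0, 0],   # constant terms
              [1, -1, 0],   # coefficients of x
              [0,  1, 1]])  # coefficients of x^2

a, b, c = 2, 3, -1
print(A @ np.array([a, b, c]))  # [ 2 -1  2], i.e. T([2,3,-1]) = 2 - x + 2x^2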
Linearity is nothing more or less than requiring $T(u+v)=T(u)+T(v)$, $T(au)=aT(u)$. This does not rule out $T(u)=x+1$. In fact, consider the function $M\colon P[x]\to P[x]$ which has the effect of multiplying anything by $x^2+3x+1$ (or any random, fixed polynomial). $M(f(x)+g(x))= M(f(x)) + M(g(x))$. And $M(af(x)) = aM(f(x))$. We can see $M(1)= x^2+3x+1$, a quadratic polynomial! $M$ satisfies the requirements of a linear transformation. Perfectly alright.
What is the negation of this statement? Let $(K_n)$ be a sequence of sets. What is the negation of the following statement? For all $U$ open containing $x$, $U \cap K_n \neq \emptyset$ for all but finitely many $n$.
Let us write the statement formally: $$\forall U(x\in U\rightarrow\exists n\forall k(k&gt;n\rightarrow U\cap K_k\neq\varnothing))$$ For every $U$ (open of course), if $x\in U$ then there is some $n$ that for all $k&gt;n$ we have $K_k\cap U\neq\varnothing$. Now negation flips quantifiers and $\lnot(\alpha\rightarrow\beta)$ is the same as $\lnot\beta\land\alpha$. So we have: $$\exists U(x\in U\land\forall n\exists k(k&gt;n\land U\cap K_k=\varnothing))$$ Or in words, there exists an open set $U$ such that $x\in U$ but for every $n$ there is some $k&gt;n$ such that $U\cap K_k=\varnothing$. However in the natural numbers to say that something happens unboundedly often is the same as saying it happens infinitely often. So finally we can say: There exists an open set $U$ such that $x\in U$ and for infinitely many $n$ we have $U\cap K_n=\varnothing$.
$\neg$(for all $U$ open containing $x$, $U \cap K_n \ne \emptyset$ for all but finitely many $n$).
$\sum_{p} \chi(p)/p$ is conditionally convergent for a non-principal character Let $\chi$ be a non-principal character. Show that the sum $\sum_{p}\frac{\chi(p)}{p}$ is conditionally convergent. Then show that the product $\prod_{p}(1-\frac{\chi(p)}{p})^{-1}$ is conditionally convergent to $L(1,\chi)$. Using summation by parts, I can show that $\sum_{n}\frac{\chi(n)}{n}$ is conditionally convergent. But it seems that the method cannot be applied to $\sum_{p}\frac{\chi(p)}{p}$, since $\frac{1}{n}1_{\mathcal{P}}(n)$ is not monotone.
Use partial summation to show that \[\sum_{p \leq x} \frac{\chi(p)}{p} = \frac{1}{x \log x} \sum_{p \leq x} \chi(p) \log p + \int_{2}^{x} \frac{\log t + 1}{t^2 (\log t)^2} \sum_{p \leq t} \chi(p) \log p \, dt.\] Now \[\sum_{p \leq x} \chi(p) \log p = \sum_{n \leq x} \chi(n) \Lambda(n) + O\left(\sqrt{x} \log x\right).\] Via the usual methods as for proving the prime number theorem, one can then show that \[\sum_{n \leq x} \chi(n) \Lambda(n) = O\left(x e^{-c\sqrt{\log x}}\right)\] if $\chi$ is nonprincipal. (Note that this is by far the hardest part; it's essentially equivalent to the prime number theorem in arithmetic progressions, which in turn is essentially equivalent to the nonvanishing of $L(s,\chi)$ along the line $\Re(s) = 1$. A standard reference for this estimate is chapter 11 of Montgomery and Vaughan.) This implies the result, because then the first term in the earlier expression for $\sum_{p \leq x} \frac{\chi(p)}{p}$ is $o(1)$ and the second term is \[\int_{2}^{\infty} \frac{\log t + 1}{t^2 (\log t)^2} \sum_{p \leq t} \chi(p) \log p \, dt - \int_{x}^{\infty} \frac{\log t + 1}{t^2 (\log t)^2} \sum_{p \leq t} \chi(p) \log p \, dt,\] and the first integral is some finite number and the second is $o(1)$ due to the bound $\sum_{p \leq x} \chi(p) \log p = O\left(x e^{-c\sqrt{\log x}}\right)$. EDIT: Note that for $\Re(s) > 1$, \[\log L(s,\chi) = \log \prod_p \frac{1}{1 - \chi(p) p^{-s}} = \sum_p \log \frac{1}{1 - \chi(p) p^{-s}} = \sum_p \sum_{k = 1}^{\infty} \frac{\chi(p)^k}{k p^{ks}}\] via the Euler product for $L(s,\chi)$ and the Taylor series expansion for $\log$. All of this is justified because $\Re(s) > 1$, so everything is absolutely convergent. It follows that for $\Re(s) > 1$, \[\sum_p \frac{\chi(p)}{p^s} = \log L(s,\chi) - \sum_p \sum_{k = 2}^{\infty} \frac{\chi(p)^k}{k p^{ks}}.\] The sum on the right-hand side is absolutely convergent for $\Re(s) > 1/2$, and so defines a holomorphic function on this right-half plane. Moreover, by the zero-free region for $L(s,\chi)$, $\log L(s,\chi)$ defines a holomorphic function in the same zero-free region. It follows by analytic continuation that the left-hand side extends to a holomorphic function in this same zero-free region. In spite of all of this, one cannot immediately conclude that this holomorphic function equals $\sum_p \frac{\chi(p)}{p}$ at $s = 1$, nor therefore that this sum converges conditionally. It does show however that the limit $\lim_{s \to 1^+} \sum_p \frac{\chi(p)}{p^s}$ exists.
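A numerical illustration for a concrete nonprincipal character (my own sketch; $\chi$ is the character mod $4$, the bounds are arbitrary, and slow stabilization of the partial sums is only consistent with, not a proof of, conditional convergence):

from sympy import primerange

def chi(n):
    # Nonprincipal character mod 4: +1 at 1 mod 4, -1 at 3 mod 4, 0 at even n.
    return {1: 1, 3: -1}.get(n % 4, 0)

for bound in (10**3, 10**4, 10**5, 10**6):
    s = sum(chi(p) / p for p in primerange(2, bound))
    print(bound, s)   # the partial sums settle down as the bound grows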
It is complicated. There is a proof sketch when $\chi$ is a non-principal, non-quadratic character (i.e. when it is a complex-valued character): the main trick is to use that $\frac{1}{\phi(q)}\sum_{\chi \pmod q} \chi(n) = \delta_{\ n \ \equiv\ 1 \pmod q}$ (the discrete Fourier transform of size $\phi(q)$) and that $L(s,\chi)$ is holomorphic (on $Re(s) > 0$, which is enough and easy to prove) whenever $\chi$ is not the principal character $\chi_0$ (i.e. $\chi_0(n)=1$ whenever $\gcd(n,q)=1$). Hence if $\chi \ne \chi_0$ and $\lim_{s \to 1^+} \sum_p \chi(p)/p^s$ diverges, then $L(s,\chi)$ has a zero at $s=1$; but if it is a non-real character then the character comes with its complex conjugate $\overline{\chi}$, which in that case also has a zero at $s=1$, and hence two terms of the product $$F_q(s) = \prod_{\chi \pmod q} L(s,\chi)$$ would have a zero at $s=1$. But only $L(s,\chi_0)$ has a pole at $s=1$, of order $1$, hence $F_q(s)$ has a zero at $s=1$, a contradiction since: $$\ln F_q(s) = \phi(q) \sum_{p^k \equiv 1 \pmod q} \frac{1}{k p^{k s}}$$ which diverges to $+\infty$ when $s \to 1^+$. Then prove that $L(s,\chi)$ doesn't have any zero on $Re(s) =1$, using the same argument as for $\zeta(s)$: https://en.wikipedia.org/wiki/Prime_number_theorem#Proof_sketch which is based on the same kind of trick: that $\zeta(s)$ is meromorphic, that it doesn't have any pole on $Re(s) = 1, s \ne 1$, and that $\ln \zeta(s)$ is a Dirichlet series with non-negative coefficients, hence using $$|\zeta(x)^3\zeta(x+iy)^4\zeta(x+2iy)|=\exp\sum_{n,p}\frac{3+4\cos(ny\log p) +\cos (2ny\log p)}{np^{nx}}\ge 1$$ if it had a zero at $1+it$ it would have a pole at $1+2it$, a contradiction. And you will be ready for proving the prime number theorem for Dirichlet L-functions when $\chi$ is a complex character, i.e. that $\ln L(s,\chi)$ is holomorphic on $Re(s) \ge 1$, and hence that $$\sum_p \frac{\chi(p)}{p}$$ converges. When $\chi$ is the real non-principal (quadratic) character, I don't remember how we prove that it doesn't have a zero at $s=1$; see for example Terence Tao's blog https://terrytao.wordpress.com/2014/11/23/254a-notes-1-elementary-multiplicative-number-theory/ just after Corollary 77.
Existence of biholomorphic function for two fixed points with $f(z_1) = z_2$ Let $U \subset \mathbb{C}$ be a simply connected open subset and $z_1, z_2 \in U$. I want to show that there then exists a biholomorphic function $f: U \rightarrow U$ with $f(z_1) = z_2$, but I have no idea how to do so.
If $f$ is biholomorphic, $f$ is injective. Hence, from $f(z_1) = f(z_2)$ we get $z_1=z_2$.
How does $x^3 - \sin^3 x$ become $x^3 + \frac{1}{4}\sin{3x}-\frac{3}{4}\sin x$? I was going through answers on this question and came across this answer and I was wondering how the user arrived at the first line where they state: $$f(x) \equiv x^3 - \sin^3 x = x^3 + {1 \over 4} \,\sin {3x} - {3 \over 4}\,\sin x$$ How does $x^3 - \sin^3 x$ become $x^3 + \frac{1}{4}\sin{3x}-\frac{3}{4}\sin x$? Are they using some simple identity or is there some other observation happening? Thanks!
HINT: $$\sin{3x}=3\sin {x} -4\sin^3 x \\ \sin {(A+B)}=\sin A \cos B + \cos A \sin B$$
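Spelling the hint out (a short derivation using only the triple-angle identity quoted above): solve $\sin 3x = 3\sin x - 4\sin^3 x$ for $\sin^3 x$ to get $$\sin^3 x = \frac{3\sin x - \sin 3x}{4},$$ and therefore $$x^3 - \sin^3 x = x^3 - \frac{3}{4}\sin x + \frac{1}{4}\sin 3x,$$ which is exactly the first line quoted in the question.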
Hint: $$\sin x =\frac{\text{opposite}}{\text{hypotenuse}}$$
Prove that $A$ is closed I am asked to prove the following question: Let "If $a \in \mathbb{R}$, $a_n \in A$ for all $n \in \mathbb{N}^+$, and $a_n \to a$, then $a \in A$." be a statement $P$. Show that if $P$ is true then $A$ is closed. I was given a hint that if $a \notin \text{int}(\mathbb{R} \setminus A)$, then for all $n \in \mathbb{N}^+$ there exists $a_n \in (a - 1/n, a+1/n) \cap A$. But I can't get my head around what the hint is trying to tell me, and I don't know how to proceed. Can anyone give a hint? Thanks a lot!
I finally understood the question, so here is an attempt at the proof. Suppose $a \in \textbf{cl}A$; we have that $a \notin \textbf{int}(\mathbb{R} \setminus A)$. Thus, $a$ is not an interior point of $\mathbb{R}\setminus A$, and then we have, for all $n \in \mathbb{N}^+$, $(a - 1/n, a + 1/n) \nsubseteq \mathbb{R}\setminus A$. This is equivalent to: for all $n \in \mathbb{N}^+$, $(a - 1/n, a + 1/n) \cap A \neq \emptyset$. Hence, we let $a_n \in (a - 1/n, a + 1/n) \cap A$. Thus, $a_n \in A$ for all $n \in \mathbb{N}^+$. Moreover, $a_n \to a$ since $\bigcap_{n = 1}^\infty(a- 1/n, a+ 1/n) = \{a\}$. Therefore, $a \in A$ by $P$. Hence, $\textbf{cl}A \subset A$, and $\textbf{cl}A = A$ since $A \subset \textbf{cl}A$. Therefore, $A$ is closed since $\textbf{cl}A$ is closed.
Hint: $a\not\in\operatorname{int}({\Bbb R\setminus A})$ is another way of saying $a\in\operatorname{cl}(A)$. So, the hint is that every element of the closure of $A$ has a sequence in $A$ converging to it. Now use $P$.
Complex power series and radius of convergence Let $c$ be a non-zero complex number, and consider the power series \begin{equation} S(z)=\frac{z-c}{c}-\frac{(z-c)^2}{2c^2}+\frac{(z-c)^3}{3c^3}-\ldots. \end{equation} By using the Ratio Test, or otherwise, show that the series has radius of convergence $|c|$. By differentiating term by term, show that $S'(z)= \frac{1}{z}$. I've never done power series in complex analysis, so this is what I've attempted so far: \begin{equation} S(z)=\frac{z-c}{c}-\frac{(z-c)^2}{2c^2}+\frac{(z-c)^3}{3c^3}-\ldots\\ =\sum_{n=0}^{\infty} \frac{(-1)^n (z-c)^n}{nc}. \end{equation} Let $x_{n}=\frac{(-1)^n}{nc}$. Using the Ratio test, we have: \begin{equation} \lim_{n \rightarrow \infty} \lvert \frac{x_{n+1}(z-c)^{n+1}}{x_{n}(z-c)^n} \rvert = \lvert z-c \rvert \lim_{n \rightarrow \infty} \lvert - \frac{n}{n+1} \rvert = -\lvert z-c \rvert <1. \end{equation} I'm pretty sure this is wrong somewhere, but I have no idea how to continue to show that the radius of convergence is $|c|$.
Notice (hint): First of all, when $n$ 'starts' with $0$ then we've got a problem, because we get (dividing by $0$): $$\frac{(-1)^0(z-c)^0}{0c}$$ Use the ratio test to prove that this series converges when $|c-z|<1$. So: $$\sum_{n=1}^{\infty}\frac{(-1)^n(z-c)^n}{cn}=-\frac{\ln(1+z-c)}{c}\quad\text{when } |c-z|<1$$
Probability of drawing 1000 balls of the same color from 2000 Without replacement. We have 2000 balls: 1000 red and 1000 green. What is the probability that the first 1000 balls drawn are all the same color (red or green)? This is how I worked on it; not sure. My work
Let's begin with a numerically simpler problem. Suppose you have 8 balls in the urn: 4 red and 4 green. You draw 4 balls from the urn without replacement. Then there are ${8 \choose 4} = 70$ possible outcomes. Two of the outcomes have all balls the same color: one all 4 red and the other all 4 green. So the probability of getting 4 balls of the same color is $2/{8 \choose 4} = 2/70 = 0.02857.$ In the same way, the answer to the original question is $2/\binom{2000}{1000},$ which is a very small number. Maybe you are allowed to leave your answer in this 'combinatorial' form. And maybe you are expected to use Stirling's Approximation (as suggested by @ThomasAndrews and @callciulus). As you can see by the Wikipedia reference, there are several forms of the Approximation, all of which give 'about' the same answer. The answer you linked in your Question is correct for the probability of getting 'all red balls'. Multiplying by 2, you'd get the probability for 'all the same color', which matches $2/\binom{2000}{1000}.$ (Your first factor is 2.)
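If you want to sanity-check these numbers, here is a small Python sketch using the standard library's math.comb (assuming Python 3.8+; since $2/\binom{2000}{1000}$ underflows a float, the second print reports its base-10 logarithm instead):

    from math import comb, log10

    # Small urn: 4 red + 4 green, draw 4 without replacement.
    print(2 / comb(8, 4))  # 0.02857... as computed above

    # Original problem: report log10 of 2 / C(2000, 1000).
    print(log10(2) - log10(comb(2000, 1000)))  # roughly -600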
Edited: I removed the second answer. Thanks for letting me know where I went wrong. ;) This answer applies if you assume that the ball is returned to the bag every time you draw. The probability of drawing 1 ball of a certain color is 1/2. Since the first 1000 balls all need to be the same color, you take the probability and raise it to the power of the number of draws: so your answer is $(1/2)^{1000}$.
Question on time, speed and distance A and B, who are separated by a distance of 90 m, are approaching each other. The initial speed of A is 5 m/s and that of B is 10 m/s. If A increases his speed by 5 m/s every second, when will they meet each other?
HINT: Let the distance $d$ between the two be expressed as a function of time $t$. Then you have that the rate of change of the distance is $$d'(t)=-5-10-5t$$ $$d'(t)=-15-5t$$ The two will meet when $d(t)=0$. Can you integrate, find the constant that is added when you integrate, and then find when $d(t)=0$? FULL ANSWER: Integrate both sides to get $$d(t)=-15t-\frac{5}{2}t^2+C$$ And since the starting distance is $90$, then $C=90$, so $$d(t)=-15t-\frac{5}{2}t^2+90$$ And you must find when this is zero: $$-\frac{5}{2}t^2-15t+90=0$$ $$\frac{5}{2}t^2+15t-90=0$$ This can be solved like a quadratic: $$t^2+6t-36=0$$ Can you use the quadratic formula to solve this?
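As a quick numerical check of the final quadratic (a sketch of my own, not part of the original answer), one can evaluate the positive root directly:

    from math import sqrt

    # positive root of t**2 + 6*t - 36 = 0
    t = (-6 + sqrt(36 + 4 * 36)) / 2
    print(t)  # about 3.708 seconds, i.e. 3*(sqrt(5) - 1)

    # sanity check against d(t) = 90 - 15*t - 2.5*t**2
    print(90 - 15 * t - 2.5 * t ** 2)  # essentially 0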
$A$ and $B$ are $90$ m apart. Their relative speed is $(10+5)$ m/s. Say that after $t$ seconds they meet each other at a distance $x$ from the point where $A$ starts. We can say that both of them in combination have covered the $90$ m within $t$ seconds at a speed of $15$ m/s, so $t = \frac{90}{15} = 6$ seconds.
How to find a function using domain and range in math? I want to find a function, given its domain and range. Example:

    Domain    Range
    2         6
    4         8
    6         20
    7         24

This is just an example. I want to know the method of creating functions regardless of the values in the domain and range. How can I find a function using the domain and range? Any help will be appreciated :)
The table can be used to define the function and it is a completely correct definition. If you are looking for a closed form, we can use for example polynomial interpolation. Refer also to the following Explanation of Lagrange Interpolating Polynomial
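As an illustration of the polynomial-interpolation suggestion, here is a small self-contained Python sketch (the data are the question's example table; the helper function is mine, not from any particular library):

    xs = [2, 4, 6, 7]
    ys = [6, 8, 20, 24]

    def lagrange(x, xs, ys):
        """Evaluate the Lagrange interpolating polynomial at x."""
        total = 0.0
        for i, (xi, yi) in enumerate(zip(xs, ys)):
            term = yi
            for j, xj in enumerate(xs):
                if j != i:
                    term *= (x - xj) / (xi - xj)
            total += term
        return total

    # the polynomial reproduces the table exactly
    print([lagrange(xi, xs, ys) for xi in xs])  # [6.0, 8.0, 20.0, 24.0]
    print(lagrange(5.0, xs, ys))  # one possible value between the data points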
You can work with functions you already know, such as exponential functions, logarithmic functions, and power functions, because you know their properties: on which intervals they increase or decrease. Then add suitable constants to fit them. I suggest that you first draw the $x$-$y$ coordinate system, mark the rectangular region determined by the domain and range, and then consider which function to place into this region, so that the function stays inside the rectangle and attains the required maximum and minimum values.
Problems with double integration I'm trying to integrate $\int_{-a}^a\int_b^c y^{2m+1}e^{xy^{2n}}\,dx\,dy$. But I have never seen an integral with so many parts to it, and I am a little overwhelmed. How do I solve this?
$$\newcommand{\i}[2]{\int_{#1}{#2}}$$ I believe you can split it up like $$\int_{-a}^{a}\Biggl(\int_{b}^{c} \cdots \,dx \Biggr)dy$$ To learn more about double integrals, see https://www.khanacademy.org/math/multivariable-calculus/integrating-multivariable-functions/double-integrals-topic/v/double-integral-1 $$\i{-a}{a} y^{2m+1-2n}(\i{b}{c} y^{2n} e^{y^{2n}x}dx)dy$$ $$\i{-a}{a} y^{2m+1-2n} e^{y^{2n}x} dy$$ Putting $e^{y^{2n}x}=t$ $$dt =y^{2n-1} e^{y^{2n}x}dx$$ $$\i{-a}{a} \frac{y^{2m+1-2n-2n+1}}{x} y^{2n-1}x e^{y^{2n}x} dy$$ For further steps go to https://www.integral-calculator.com/#expr=y%5E%7B2%28m-n%29%2B1%7D%20e%5E%7Bxy%5E%282n%29%7D&intvar=y and see the steps there.
If $f(x)<g(x)$ prove that $\lim f(x)<\lim g(x)$ I have this question: Let $f(x)\to A$ and $g(x)\to B$ as $x\to x_0$. Prove that if $f(x) < g(x)$ for all $x\in(x_0-\eta, x_0+\eta)$ (for some $\eta > 0$) then $A\leq B$. In this case is it always true that $A < B$? I've tried playing around with the definition for limits but I'm not getting anywhere. Can someone give me a hint on where to start?
HINT: Suppose that $A>B$, and let $\epsilon=\frac12(A-B)>0$. Show that there is a $\delta>0$ such that $f(x)>A-\epsilon=B+\epsilon>g(x)$ for all $x\in(x_0-\delta,x_0+\delta)$; this contradicts the hypothesis that $f(x)<g(x)$ on an open interval around $x_0$. (Why?) It's easy to find examples in which $A=B$. You can do it with $f(x)=0$, in fact.
Quite obvious from the definition of continuity. By continuity, $\forall \epsilon$ there exists $\delta$ s.t. $x\in B_{\delta}(x_0) \implies |f(x)-A|<\epsilon$ and $|g(x)-B|<\epsilon$, and so we have $B-\epsilon<g(x)$ and $f(x)<\epsilon +A$, so $B<2\epsilon +A \ \forall \epsilon >0$. Since $\epsilon$ is arbitrary, we conclude that $B<A$. But I am not sure what is missing; I am not sure why I couldn't prove that $A=B$.
$X \subset \mathbb{R}^m$. Let $\phi:X \to \mathbb{R}^n$ be bounded. $\phi$ is continuous $\iff$ its graph is closed. $X \subset \mathbb{R}^m$. Let $\phi:X \to \mathbb{R}^n$ be bounded. Then $\phi$ is continuous $\iff$ its graph is closed. I was asked to prove this, but I believe it's false. Let $Gra(\phi) = \{(x, \phi(x))\mid x \in X\}$ be the graph of $\phi$. Suppose $X$ is not closed. Then there exists $a \in \bar{X}\setminus X$ (where $\bar{X}$ is the closure of $X$). Therefore there exists $x_n \in X$ s.t. $x_n \to a$. The sequence $\phi(x_n)$ has a convergent subsequence $\phi(x_{k_n})$ since it's bounded. Let $\lim \phi(x_{k_n})=b$. Then $(a,b) \in \overline{Gra(\phi)}$ (the closure of $Gra(\phi)$) but $(a,b)\not \in Gra(\phi) \implies Gra(\phi)$ is not closed. Is this correct?
Of course, when $X$ itself is not closed in $\mathbb{R}^m$, the graph cannot be closed. We henceforth assume that $X$ is closed. Let $\mathcal{C}(X)$ denote the set of convergent sequences $\mathbf{x}=(x_n)_{n\in\mathbb{N}}\subset X$. For any such sequence we will denote its limit by $x$. Notice that $x\in X$ by the assumption that $X$ is closed. On the one hand, we have: \begin{align} \phi \text{ is continuous}&\iff \phi \text{ is sequentially continuous}\\ &\iff \forall \mathbf{x}\in\mathcal{C}(X), \,\phi(x)=\lim_{n\to\infty}\phi(x_n)\tag{1}\end{align} On the other, we have \begin{align} Gra(\phi) \text{ is closed}&\iff Gra(\phi) \text{ is sequentially closed}\\ &\iff \forall \text{ convergent } \big(x_n,\phi(x_n)\big),\, \lim_{n\to\infty}\big(x_n,\phi(x_n)\big) \in Gra(\phi)\tag{2} \end{align} It is clear that $(1)\implies (2)$. Indeed, if $\big(x_n,\phi(x_n)\big)$ is convergent, then $\mathbf{x}=(x_n)$ is convergent and hence by $(1)$ so too is $\big(\phi(x_n)\big)$, with $\lim_{n\to\infty}\phi(x_n)=\phi(x)$. Hence, $$\lim_{n\to\infty}\big(x_n,\phi(x_n)\big)=\big(x,\phi(x)\big),$$ which belongs to $Gra(\phi)$ by definition. We will show that when $\phi$ is bounded, $(2)\implies(1)$, which concludes the proof. Suppose $\phi$ is bounded and let $\mathbf{x}\in\mathcal{C}(X)$. Since $\phi$ is bounded and $(x_n)$ is convergent (and hence bounded), we have that $\big(\phi(x_n)\big)$ is a bounded sequence. By the Bolzano-Weierstrass Theorem, there is a convergent subsequence $\big(\phi(x_{n_k})\big)$ of $\big(\phi(x_n)\big)$. Of course, $(x_{n_k})$ is itself convergent and converges to $x$, the same limit as $(x_n)$. It follows that $\big(x_{n_k},\phi(x_{n_k})\big)$ is convergent, and by $(2)$ we have that $\lim_{k\to\infty}\big(x_{n_k},\phi(x_{n_k})\big)=\big(x,\phi(x)\big)$. In particular, $\lim_{k\to\infty}\phi(x_{n_k})=\phi(x)$. It then suffices to show that $\big(\phi(x_n)\big)$ is convergent; in this case, its limit must coincide with that of $\big(\phi(x_{n_k})\big)$, that is, must equal $\phi(x)$. We show this by contradiction. Indeed, if that were not the case, then there would be some $\epsilon>0$ and a subsequence $(x_{m_k})$ of $(x_n)$ with $d\big(\phi(x_{m_k}),\phi(x)\big)\geq \epsilon$ for all $k$. Now, $\big(\phi(x_{m_k})\big)$ is of course bounded, so by the Bolzano-Weierstrass Theorem it must have a convergent subsequence, say $\left(\phi\left(x_{m_{k_j}}\right)\right)$. By construction, we have that $$y=\lim_{j\to\infty}\phi\left(x_{m_{k_j}}\right)\neq \phi(x).$$ But $\left(x_{m_{k_j}}\right)$ is a subsequence of $(x_n)$, and hence converges to $x$. It follows that $$\lim_{j\to\infty}\left(x_{m_{k_j}},\phi\left(x_{m_{k_j}}\right)\right)=(x,y),$$ which does not belong to $Gra(\phi)$, in contradiction with $(2)$. $\square$ Notice that in the proof above we didn't actually need the hypothesis that $\phi$ be bounded, but rather that $\phi$ 'preserves boundedness', i.e., $\phi$ takes bounded sets to bounded sets.
This is wrong (for an unbounded function). Take $\phi : [0, \infty) \rightarrow \mathbb{R}$ with $\phi(x) = \frac{1}{x}$ for all $x \in (0, \infty)$ and $\phi (0)=0$. Its graph is closed, but $\phi$ is not continuous! EDIT: The above example is for an unbounded function on a bounded domain (before the original question was edited). If we assume the function is bounded, then the claim is correct. Proof: For left to right: the function $F(x) = (x, \phi(x))$ is continuous (now to show the graph is closed in $X \times \mathbb{R}^n$, use the sequential definition of continuity of $F$). For right to left: similar to your argument, take $x_n \rightarrow x \in X$. We want to show that $\phi(x_n) \rightarrow \phi(x)$. If actually $\phi(x_n) \nrightarrow \phi(x)$, then (since $\phi$ is bounded) $\phi (x_n)$ has a convergent subsequence, say $\phi (x_{n_{k}}) \rightarrow y \neq \phi(x)$, i.e., $$(x_{n_{k}}, \phi (x_{n_{k}}) ) \rightarrow (x,y)\notin \text{Graph},$$ which contradicts the graph being closed.
Question regarding basis and dimension I was reading Linear Algebra: A Geometrical Approach by S. Kumaresan, and there is a problem asking to prove that a vector space with a finite number of elements in a basis is finite dimensional. Though it is easy to prove, in the very next part they remark that, as you will see later, the converse is not necessarily true. And here is my problem. Taking what they say at face value, a finite-dimensional vector space may have an infinite number of elements in a basis. Consider a vector space $V$ which is finite dimensional (say $n$-dimensional) and has an infinite set as a basis. Now there is a definition that states: a vector space is $k$-dimensional if it has a set of $k$ elements as a basis. So the vector space $V$, being $n$-dimensional, has a basis with a finite number ($n$) of elements. But this cannot be true, since in a finite-dimensional vector space any two bases have the same number of elements; hence all of $V$'s bases would have $n$ elements, which contradicts our first assumption. Then we can't say the vector space is finite dimensional. But that seems contradictory too: a vector space can't be finite dimensional and infinite dimensional at the same time, because according to the definition of finite dimensional, a vector space $V$ is finite dimensional if it has a finite set $S$ such that $L(S)=V$, where $L(S)$ is the span of the set $S$. Please tell me where I am wrong in this whole argument. Thanks in advance.
OK. So these are the pictures of the definitions and theorems I have used in the question.
Derivative of an even function is odd and vice versa This is the question: "Show that the derivative of an even function is odd and that the derivative of an odd function is even. (Write the equation that says f is even, and differentiate both sides, using the chain rule.)" I already read numerous solutions online. This was the official solution but I didn't quite understand it (particularly, I'm not convinced why exactly $dz/dx=-1$; even though $z=-x$). Thanks in advance =]
Official, shmofficial: I think the following might prove to be easier to grasp for some: suppose $\,f\,$ is odd, then $$f'(x_0):=\lim_{x\to x_0}\frac{f(x)-f(x_0)}{x-x_0}=\lim_{x\to x_0}\frac{-f(x)+f(x_0)}{-x+x_0}=$$ $$=\lim_{x\to x_0}\frac{f(-x)-f(-x_0)}{(-x)-(-x_0)}\stackrel{-x\to y}=\lim_{y\to -x_0}\frac{f(y)-f(-x_0)}{y-(-x_0)}=:f'(-x_0)$$ The above remains, mutatis mutandis, in case $\,f\,$ is even.
Well, geometrically, an even function means reflection across the $y$-axis, so any direction will reflect; that means the derivative on the right is the same as the derivative on the left, but the direction changes. So the value is the same, but with a different sign. An odd function means rotational symmetry; if you rotate an arrow, i.e. a direction, you change it by 180 degrees, so it is the same slope. Hence the derivative of an odd function is even.
Show that if $G$ is abelian and $|G| \equiv 2 \pmod 4$, then the number of elements of order $2$ in $G$ is $1$. I've tried proving it by contradiction, assuming the number of such elements is different from one, which, by Sylow's third theorem, implies that $|G|=2^x m$ with $m$ odd. With that I managed to show that $m\equiv 1 \pmod 4$, but I kinda got stuck there...
Here's an alternative solution without using Sylow's theorems. Note that $|G|\equiv 2 \pmod 4 \Rightarrow |G|=2(2k+1)$ for some $k\in \mathbb{Z}$. Since $2$ divides $|G|$, by Cauchy's theorem there exists an element $g\in G$ of order $2$. Now $\langle g\rangle$ is a subgroup of order $2$. Since $G$ is abelian, we have that $\langle g\rangle $ is a normal subgroup, and hence $G/\langle g\rangle $ is a group. Now by Lagrange's theorem $|G/\langle g\rangle | = 2k+1$, which is odd. Suppose there were another element $h\in G$ with order $2$ and $h\neq g$. Then we have that $h\langle g \rangle$ is an element of order $2$ in $G/\langle g\rangle$. Thus $\langle h\langle g\rangle \rangle$ is a subgroup of $G/\langle g\rangle$ of order $2$. However, this is a contradiction since $|G/\langle g\rangle|$ is odd.
Hint: $|G|$ is $2(2k+1)$. A finite abelian group is a product of groups isomorphic to $\mathbb{Z}/p^l$ where $p$ is a prime; this implies that the component associated to $2$ is $\mathbb{Z}/2$. https://en.wikipedia.org/wiki/Finitely_generated_abelian_group#Classification
relatively open sets Definition of a relatively open set: $D \subset K^N$ is a set. $U \subseteq D$ is relatively open in $D$ if $$U = \emptyset \quad$$ or $$\forall x \in U \quad \exists\, r > 0 \text{ such that } B(x,r) \cap D \subseteq U.$$ What I want to know is: is there a set $U$ with $x \in U \subseteq D$ such that $B(x,r) \cap D \nsubseteq U$? Example: If $D = (0,2]$ and $U =[1,2]$ and $x = 2$, then $B(2,r) \cap D = (2-r,2] \subseteq U$. In the above example I don't see, for any $x \in U$, where $B(x,r) \cap D \subseteq U$ is not satisfied. Can someone please give examples of $D$ and $U$ where $B(x,r) \cap D \subseteq U$ is not satisfied.
By the definition of a relatively open set, $U$ must be contained in $D$; but with $D=(0,2]$ and $U=[1,2]$, $U$ is not contained in $D$; rather, this is $D$ in $U$. Your example is not a counterexample.
Can I use "$\iff$" symbol when I "transform" an expression to another form? I am writing a solution to prove that $\sqrt5$ is not rational. Here is my first half proof: Assume $\sqrt{5}$ is a rational number. By the definition of rational number, $\sqrt{5} = \frac{p}{q}$, where $p,q\in\mathbb{Z^+}, q\neq0$, and $gcd(p,q)=1$. We have $5=\frac{p^2}{q^2} \iff 5q^2=p^2$. Can I use "$\iff$" symbol like this? How about if a question is asking me to work backward from the desired conclusion and then prove: If x and y are nonnegative integer, then $\frac{x+y}{2}\geq \sqrt{xy}$. Can I do like something like: (first half proof) $\frac{x+y}{2}\geq \sqrt{xy} \iff x+y\geq 2\sqrt{xy} \iff (x+y)^2 \geq 4xy \iff x^2+2xy+y^2 \geq 4xy \iff x^2-2xy+y^2 \geq 0 \iff (x-y)^2 \geq 0$. Should I use $\iff$, $\Leftarrow$ or $\Rightarrow$?
Yes. "$\iff$" is used to denote logical equivalence, or necessary and sufficient conditions. It is read "if and only if," of which "iff" is a common abbreviation. $p \iff q$ is true if $p$ is true whenever $q$ is true and $q$ is true whenever $p$ is. Assume $\sqrt{5}$ is a rational number. By the definition of rational number, $\sqrt{5} = \frac{p}{q}$, where $p,q\in\mathbb{Z^+}, q\neq0$, and $gcd(p,q)=1$. We have $5=\frac{p^2}{q^2} \iff 5q^2=p^2$. This is true, since $5=\frac{p^2}{q^2} \Rightarrow 5q^2=p^2$ and $5q^2=p^2 \Rightarrow 5=\frac{p^2}{q^2}$. If x and y are nonnegative integer, then $\frac{x+y}{2}\geq \sqrt{xy}$. Proof: $\frac{x+y}{2}\geq \sqrt{xy} \iff x+y\geq 2\sqrt{xy} \iff (x+y)^2 \geq 4xy \iff x^2+2xy+y^2 \geq 4xy \iff x^2-2xy+y^2 \geq 0 \iff (x-y)^2 \geq 0.$ This is a fine proof. I would like to see it start where you end, but with $\iff$ used as you do here, it's perfectly valid as is.
You are moving forward in the proof, so you can use the $\implies$ ("implies") sign, which is more appropriate.
A problem on conditional geometric probability The point $(x, y)$ is chosen randomly in the unit square. What is the conditional probability of $x^2+y^2 \leq 1/4$ given that $xy \leq 1/16$? I started solving this, and while calculating I got some very unpleasant numbers, but the problem is not marked as difficult in the book where I found it. So I am starting to suspect that I am making some conceptual mistake. The two curves intersect at $x=\sqrt{2+\sqrt {3}}/4$ and at $x=\sqrt{2-\sqrt {3}}/4$. Now I have to find a few areas via definite integrals. Is that what this problem is about?! The integral of $\sqrt{1/4-x^2}$ is very unpleasant, it seems. So?! Is there some smarter/nicer approach?
$xy \leq \frac{1}{16}$ iff $2xy \leq \frac{1}{8}$. Conditional on this, $x^2 + y^2 \leq \frac{1}{4}$ iff $x^2 + 2xy + y^2 = (x+y)^2 \leq \frac{3}{8}$ iff $x+y \leq \sqrt{\frac{3}{8}}$. Now do you know how to calculate the probability that the sum of two random numbers is less than a given number?
Suppose that $xy\le \frac{1}{16}.$ Then $x^2+y^2\le \frac14\iff (x+y)^2\le \frac38$ This should lead to an easier way to approach it.
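Whatever closed form one derives along these lines, a quick Monte Carlo estimate is a useful sanity check (a sketch; the sample count and seed are arbitrary choices of mine):

    import random

    random.seed(0)
    in_cond = in_both = 0
    for _ in range(2_000_000):
        x, y = random.random(), random.random()
        if x * y <= 1 / 16:
            in_cond += 1
            if x * x + y * y <= 1 / 4:
                in_both += 1
    # estimate of P(x^2 + y^2 <= 1/4 | xy <= 1/16)
    print(in_both / in_cond)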
Find the particular solution of the following equation My task is to find the particular solution of the following equation with $y(0) =1$. The equation is $$\frac{dy}{dx} + 4y = 7.$$ I have made this much progress as far as separating the $y$'s and the $x$'s and taking the antiderivatives (I hope it's correct), but I don't know how to reach an answer: multiplying by $dx$, $$dy + 4y = 7\, dx$$ $$\frac{4 y^2}{2} = 7x +c$$ ..... The correct answer is: $y = \frac74 - \frac34 e^{-4x}$. Update: I managed to get $\frac74$ and $Ce^{-4x}$ but still don't get where $-\frac34$ comes from. Thank you!
We have $$\frac{dy}{dx}=7-4y,$$ and therefore $$\frac{dy}{7-4y}=dx.$$ Continue. Added: Integrate. We get $-\frac{1}{4}\ln(|7-4y|)=x+C$. Put $x=0$. We get $C=-\frac{1}{4}\ln 3$. Thus $$\ln(7-4y)=-4x+\ln 3.$$ Note this is valid only when $7-4y\gt 0$. Exponentiate. We get $7-4y=3e^{-4x}$. Solve for $y$. We get $y=\frac{1}{4}\left(7-3e^{-4x}\right)$.
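For those who want to verify the result symbolically, a one-line check with SymPy's dsolve reproduces it (a sketch, assuming SymPy is installed):

    import sympy as sp

    x = sp.symbols('x')
    y = sp.Function('y')
    sol = sp.dsolve(sp.Eq(y(x).diff(x) + 4 * y(x), 7), y(x), ics={y(0): 1})
    print(sol)  # Eq(y(x), 7/4 - 3*exp(-4*x)/4)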
$\frac{dy}{dx}+4y=7$ can be solved by letting $y=Ae^{bx}$, so $y'=Abe^{bx}$. Substituting into the original equation and solving the homogeneous part gives $(b+4)=0$, hence $b=-4$, and we have a solution $y=Ae^{-4x}$, called the complementary solution. We need to find the particular solution: we check the right side of the equation, and the guess for a constant is $c$: $$y_{\text{particular}}=c$$ $$y'_{\text{particular}}=0$$ Substituting back into $\frac{dy}{dx}+4y=7$ we have $$4c=7$$ $$c=7/4$$ Applying superposition, we have the general solution $$y=Ae^{-4x}+7/4,$$ the sum of the complementary solution and the particular solution. $y(0)=1$: $$1=A+7/4 \quad\text{(any number to the exponent zero is one)}$$ $$A =-3/4$$ $$y=-\frac34 e^{-4x}+\frac74.$$
$\sup \left\| A x + B y + C z \right\|$ subject to $\left\|x\right\| = \left\|y\right\| = \left\|z\right\| = 1$ I'm interested in finding $\sup \left\| A x + B y + C z \right\|$ subject to $\left\|x\right\| = \left\|y\right\| = \left\|z\right\| = 1$, where $A$, $B$, $C$ and $x$, $y$, $z$ are real matrices and vectors, respectively, of compatible sizes, and the norms are Euclidean. What is this problem called in the literature? (If there is no established name, does this problem reduce to a well-known one?) (If there were only one constraint, this would be the induced matrix norm.) Refutation of @zimbra314's answer (cannot fit in the comments): $[x, y, z]^{T}= \alpha_{1} v^{T}_{1} + \alpha_{2} v^{T}_{2}+\alpha_{3} v^{T}_{3}$, where $v_{i}$ is the $i$-th right singular vector of $[A,B,C]$ (i.e. the singular vector corresponding to the $i$-th largest singular value). This assumes that the optimal $[x, y, z]^{T}$ is a linear combination of the first 3 right singular vectors of $[A, B, C]$. However, this assumption is incorrect. As a counterexample, consider $ A = \begin{bmatrix}2&0&0\\0&2&0\\0&0&2\\0&0&0\\0&0&0\end{bmatrix} $ $ B = \begin{bmatrix}0\\0\\0\\1\\0\end{bmatrix} $ $ C = \begin{bmatrix}0\\0\\0\\0\\1\end{bmatrix} $
Hint: $\left \| Ax+By+Cz \right \| = \left \| \left [ A, B, C \right ] \left [ \begin{matrix} x\\ y\\ z \end{matrix} \right ]\right \|$ Detailed answer: $[x, y, z]^{T}= \alpha_{1} v^{T}_{1} + \alpha_{2} v^{T}_{2}+\alpha_{3} v^{T}_{3}$, where $v_{i}$ is the $i$-th right singular vector of $[A,B,C]$ (i.e. the singular vector corresponding to the $i$-th largest singular value). Denote $[v_{ix},v_{iy},v_{iz}]^{T}=v_{i}$ to mark the partition of $v_{i}$ corresponding to $x$, $y$ and $z$. Now solve: $||x||^{2}=1 \rightarrow ||(\alpha_{1} v_{1x} + \alpha_{2} v_{2x}+\alpha_{3} v_{3x})||=1$. Similarly, $||(\alpha_{1} v_{1y} + \alpha_{2} v_{2y}+\alpha_{3} v_{3y})||=1$ and $||(\alpha_{1} v_{1z} + \alpha_{2} v_{2z}+\alpha_{3} v_{3z})||=1$: three equations to solve for three unknowns $\alpha_{1}, \alpha_{2}$, and $\alpha_{3}$. Then your answer is $\sqrt{\alpha^{2}_{1} \sigma^{2}_{1} + \alpha^{2}_{2} \sigma^{2}_{2} + \alpha^{2}_{3} \sigma^{2}_{3}}$, where $\sigma_{i}$ is the singular value corresponding to the $i$-th largest singular vector.
What is the average of no numbers? I have two programs that both behave nearly identically: they both take in any numbers you give them and can tell you the average and how many numbers were given. However, when you don't give them any numbers, one says the average is 0.0, and the other says it's NaN ("Not a Number"). Which of these answers, if any, is more correct, and why? Note: Although I use "programs" as a metaphor here, this isn't a programming question; I could've just as easily said "computers", "machines", "wise men", etc. and my question would be the same
From a statistical point of view, the average of no sample points should not exist. The reason is simple. The average is an indication of the centre of mass of the distribution. Clearly, for no observations there can be no way to prefer one location vs. another as their centre of mass, since the empty set is translation invariant. More mathematically, taking the average is a linear operation, which means if you add a constant $c$ to each observation, then the average $a$ becomes $a+c$. Now if you add $c$ to each observation in the empty set, you get the empty set again, and thus the average will have to satisfy $a+c=a$ for all $c$, clearly nonsense.
You can have an average of whatever is input (assuming we are not looking at pi, which I do not know as having an ending). If you press Enter, or use whatever method causes an input into the program, then the programming language or input design of the program will interpret that information, likely perform some calculation, then give the result of the calculation. What I mean is, you can just press Enter (with no other value entered) and, depending on the programming language or design of the program, the program can interpret your value (if reading integers) as zero. Some languages may interpret the input as a non-value or null and may output something similar to your 'NaN'. However, in answer to your first question, the average of one thing is always that thing; unless, that is, you can only have an average once you have two or more of something. So whatever is input, as long as it was only one input, would be the average. To answer your ending question, I would say the NaN is more correct, because just pressing Enter (assuming that is how the data is input) is done with the intent of not giving a specific value. That is the answer for the person inputting the data. The answer may also depend on the programmer's intent as to what type of input they are looking for: numerical ($0$ may be correct) or non-numerical (NaN is correct).
Find $\iint \frac{1}{\sqrt{x^2+y^2}}\;\mathrm{d}y\;\mathrm{d}x$ Given that $x^2+y^2\leq a^2$, evaluate $$ \iint\limits_D \frac{1}{\sqrt{x^2+y^2}} \;\mathrm{d}A $$ Now my initial idea is to use polar coordinates, but I do not get the right answer. This is what I have done: $$\int\limits_0^{2 \pi }\int\limits_0^a\frac{r}{a}\;\mathrm{d}r\;\mathrm{d}\theta=\pi a$$ Now the answer I got is, I'm pretty sure, right, which makes me conclude that my setup of the initial integral is wrong, since the answer should be $2\pi a$ according to my textbook. Could someone point out what I am missing?
$$\int\limits_0^{2 \pi }\int\limits_0^a\frac{r}{r}\;\mathrm{d}r\;\mathrm{d}\theta=2\pi a$$
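The cancellation can also be checked symbolically (a minimal SymPy sketch, assuming SymPy is available): the Jacobian factor $r$ cancels the integrand $1/r$ exactly.

    import sympy as sp

    r, theta, a = sp.symbols('r theta a', positive=True)
    integrand = (1 / r) * r  # 1/sqrt(x^2 + y^2) times the polar area element
    result = sp.integrate(integrand, (r, 0, a), (theta, 0, 2 * sp.pi))
    print(result)  # 2*pi*a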
The integral is $$\int_0^{2\pi}\int_0^a \frac{1}{r}\, r\;\mathrm{d}r\;\mathrm{d}\theta,$$ with limits from $0$ to $a$ for $r$ and from $0$ to $2\pi$ for $\theta$, giving the expected answer.
What is a link between the topological and order-theoretic completeness? Take $\mathbb{R}$ as an example. Order-theoretically, the set $\mathbb{R}$ of all real numbers can be developed as a complete totally ordered field, where "complete" indicates that the supremum axiom is imposed. On the other hand, the set $\mathbb{R}$ happens to be topologically complete, in the sense that every Cauchy sequence in $\mathbb{R}$ converges in $\mathbb{R}$. Is order-theoretic completeness a notion developed independently of the concept of topological completeness, or is the converse true, or is neither true? In general, what is the link between them?
The two concepts are closely linked through the modern definitions of real number. We have to consider Cantor's definition, based on Cauchy sequences, and Dedekind's, based on Dedekind cuts. Several axioms have been proposed to ensure the so-called "completeness", like: metric completeness: every Cauchy sequence of points in a metric space $M$ has a limit in $M$; order-theoretic completeness. See also Construction of the real numbers, Dedekind's Contributions to the Foundations of Mathematics and The Early Development of Set Theory; both Cantor and Dedekind were at the origins of: foundations of analysis and the definition of the structure of the set $\mathbb R$ of real numbers; point set topology; set theory.
In high school you learn to work with the system ${\mathbb Q}$ of rational numbers, or with the smaller system ${\mathbb D}$ of finite (binary or) decimal fractions. While these systems are unproblematic for daily business they are unsatisfactory from a mathematical standpoint: Many important numbers that we know should be there, are missing; e.g., ${1\over3}\notin{\mathbb D}$, $\sqrt{2}\notin{\mathbb Q}$, etcetera. There are various "bigtime" abstract constructions that fill in these holes, so that a homogeneous continuum of real numbers ${\mathbb R}$ results. One such construction uses "Dedekind cuts" and produces a system ${\mathbb R}_{\rm Ded}$ that is so called order complete. Another construction uses "Cauchy sequences" and produces a system ${\mathbb R}_{\rm Cau}$ that is metrically complete. Whichever approach you take, you can prove at the end that your ${\mathbb R}_{\rm Ded}$ is in fact also metrically complete, resp. that your ${\mathbb R}_{\rm Cau}$ is in fact also order complete – the reason being that ${\mathbb R}_{\rm Ded}$ and ${\mathbb R}_{\rm Cau}$ are "isomorphic".
Bijection between partitions Give a bijective mapping between the set of partitions of $[n]$ with no cyclically consecutive integers in a block and the set of partitions of $[n]$ with no singleton blocks. All the mappings that I come up with are injective. Can somebody please help?
[1,2,3] [1] [2] [3] [1] [2,3] [2] [1,3] [2] [1,3] [3] [1,2] [3] [1,2] [1] [2,3] [1] [2] [3] [1,2,3] [1,2,3,4] [1] [2] [3] [4] [1] [2,3,4] [2] [3] [1,4] [2] [1,3,4] [3] [4] [1,2] [3] [1,2,4] [1] [4] [2,3] [4] [1,2,3] [1] [2] [3,4] [1,2] [3,4] [1] [3] [2,4] [1,3] [2,4] [1,3] [2,4] [1,4] [2,3] [2] [4] [1,3] [1] [2] [3,4] [3] [1,2,4] [1] [3] [2,4] [1,2] [3,4] [1] [4] [2,3] [2] [1,3,4] [2] [3] [1,4] [4] [1,2,3] [2] [4] [1,3] [2,3] [4,1] [3] [4] [1,2] [1] [2,3,4] [1] [2] [3] [4] [1 2 3 4] Above is a grouping for the $n=3$ and $n=4$ case. The way I was approaching it was to think of a way of flagging cyclic consecutive integers in a group with singletons, and the rule I came up with for this purpose was to say that if $[i,i+1]$ appear in any given block, then in the bijective map $[i]$ would be by itself, and $[i+1]$ would be set aside and combined with the others (if $[i+1,i+2]$ is not in the same block). I believe this is a bijection between bad sets to bad sets, but I suppose there is something to be done for the other way :). Looks like it's not clear how to design the map on good sets... The current pairing for $n=4$ above works as follows: If there are no singletons, the map is an identity, otherwise for each singleton $[i]$, put $i+1$ in the same group. If $[i+1]$ is also a singleton, grab $[i+2]$ and place in the same group, etc. As this is homework, I don't want to spoil all the beans, only to give some ideas to push (it could be this doesn't work out, but it's an idea).
I'll try to give a bijection in the $n=2, 3, 4$ cases to see if we can see something. {1 2} {1,2} {1 2 3} {1,2,3} {1 2 3 4} {1,2,3,4} {1 3,2 4} {1 3,2 4} {2 1,3 4} {2,1 3,4} {1 4,2 3} {1,4 2,3} These bijections look "natural", but I can't really think of a way to expand the thinking...
The number of homomorphisms from $S_5$ to $A_6$ I used the fact that $A_5$ is the only nontrivial proper normal subgroup of $S_5$. Then we have two types of homomorphisms, plus the trivial homomorphism: homomorphisms with kernel $\{e\}$; homomorphisms with kernel $A_5$. In the first case the first isomorphism theorem implies that the image of the homomorphism is isomorphic to $S_5$, therefore the number of homomorphisms of that type is the number of injections of $S_5$ into $A_6$. In the second case, because the index of $A_5$ in $S_5$ is 2, the quotient group is isomorphic to $C_2$. Therefore the image of the homomorphism is isomorphic to $C_2$. That's as far as I got: I classified the homomorphisms, but I don't know how to count them.
There is no injective homomorphism from $S_5$ to $A_6$. Its image would have index $3$ in $A_6$. There is no such subgroup of $A_6$. If there were, there would be a homomorphism from $A_6$ onto a group of order $3$ or $6$, and there isn't.
This answer is wrong; I left it undeleted to remind me of my mistake. Please notice this and don't be misled. Another reason that there is no injective homomorphism from $S_5$ to $A_6$ is that, if there were, $S_5$ could be seen as a subgroup of $A_6$; but every element of $A_6$ is an even permutation, while $S_5$ has odd permutations.
Convergence in probability with subsequences After hours of trying, I wonder whether someone could help me by proving this claim :-) Assume $X_n$ is a sequence of random variables which converges in probability to $X$. Now, let $X_{nk}$ be a subsequence of $X_n$. Then there is a subsequence $X_{\overline{nk}}$ of $X_{nk}$ such that $$ P( |X_{\overline{nk}} - X | \geq \frac{1}{k}) \leq \frac{1}{k^2} \quad \forall k \geq 1$$ Unfortunately, I do not know how to prove this statement... The proof given was "It holds if one waits long enough". But obviously, this is not very mathematical... Thanks in advance!
The definition of convergence in probability implies that for any $\epsilon > 0$ and $\delta > 0$, we can always find an $N_{\epsilon}$ such that for all $n \geq N_{\epsilon}$, we have $P(|X_n - X| > \epsilon) < \delta$. Essentially, we are looking at convergence of the sequence of probabilities $P(|X_n - X| > \epsilon)$ to zero. Now, every subsequence of the convergent sequence also converges to the same limit. The sequence of $P(|X_{nk} - X| > \epsilon)$ also converges to zero, which implies that we can pick an index from the sequence $X_{nk}$ which satisfies $P( |X_{\overline{nk}} - X | \geq \frac{1}{k}) \leq \frac{1}{k^2}$ for each $k \geq 1$ and construct a separate sequence $X_{\overline{nk}}$ from these picked indices. Note that for $k_2 > k_1$, $P( |X_{\overline{nk}} - X | \geq \frac{1}{k_2}) \leq \frac{1}{k_2^2}$ implies that $P( |X_{\overline{nk}} - X | \geq \frac{1}{k_1}) \leq \frac{1}{k_1^2}$. Therefore, we can pick an increasing set of indices. Naturally, a sequence $X_{\overline{nk}}$ constructed as above satisfies the property discussed in the question.
Let $Y_k=X_{n_k}$. Then $Y_k \to X$ in probability. Hence $P(|Y_k-X| \geq\frac 1 j) \to 0$ for each $j$. So there exists $k_j$ such that $P(|Y_{k_j}-X| \geq \frac 1 j) < \frac 1{j^{2}}$. We can choose the $k_j$ to be increasing. Now $P(|X_{n_{k_j}} -X| \geq \frac 1j) \leq \frac 1 {j^{2}}$ for all $j$.
Matrix positive definite Let $A$ and $B$ both be symmetric $n \times n$ matrices, and $B \succ 0$; let $U$ be an $n \times q$ column-orthogonal matrix ($n > q$). Assume $$ 0 \preceq U^{T} A U \preceq U^{T} B U;$$ do we have the following inequality: $$UU^{T}AUU^{T} \preceq B?$$
Nope. Let $$ A=\begin{bmatrix}5 & 3 \\ 3 & 2\end{bmatrix}, \quad B=\begin{bmatrix}5 & 1 \\ 1 & 1\end{bmatrix} $$ (both are SPD). Let $U=[1,0]^T$. Then $$ 0\leq 5=U^TAU\leq U^TBU=5. $$ But $$ B-UU^TAUU^T=\begin{bmatrix}0 & 1 \\ 1 & 1\end{bmatrix}, $$ which is indefinite.
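A quick NumPy check of this counterexample (a sketch, assuming NumPy is available):

    import numpy as np

    A = np.array([[5.0, 3.0], [3.0, 2.0]])
    B = np.array([[5.0, 1.0], [1.0, 1.0]])
    U = np.array([[1.0], [0.0]])

    print((U.T @ A @ U).item(), (U.T @ B @ U).item())  # 5.0 5.0
    M = B - U @ U.T @ A @ U @ U.T
    print(np.linalg.eigvalsh(M))  # one negative and one positive eigenvalue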
Yes. Just premultiply by $U^T$ and postmultiply by $U$ after bringing $B$ over to the other side. If $C \leq 0$ then $U^T C U\leq 0$ also.
Limit involving exponentials Being bored, I recently started trying to prove the exponential derivative formula by difference quotient: $\dfrac{d}{dx}n^x=\lim\limits_{\Delta x \to 0}\dfrac{n^{x+\Delta x}-n^x}{\Delta x} = n^x\log n$ Simple algebraic manipulation (exponent rule and factoring) brought me from the difference quotient to: $\lim\limits_{\Delta x \to 0}n^x\dfrac{n^{\Delta x}-1}{\Delta x}$ Limit of a product: $\Bigg(\lim\limits_{\Delta x \to 0}n^x\Bigg)\Bigg(\lim\limits_{\Delta x \to 0}\dfrac{n^{\Delta x}-1}{\Delta x}\Bigg)$ And finally limit of a constant. $n^x\Bigg(\lim\limits_{\Delta x \to 0}\dfrac{n^{\Delta x}-1}{\Delta x}\Bigg)$ This limit is where I got stuck, however. Clearly it equals $\log n$ by the well-known formula, but how can the limit be evaluated? Apologies if this is somewhat basic.
Write $n^{\Delta x}=e^{\Delta x\cdot \log n}$. Then by the definition of $\exp$ by its Taylor development: $$\frac{e^{\Delta x\log n}-1}{\Delta x}=\frac{(\Delta x\log n)+\frac{(\Delta x\log n)^2}{2}+\cdots}{\Delta x}=\log n+\Delta x\cdot \left(\frac{(\log n)^2}{2}+\cdots\right)\overset{\Delta x\rightarrow 0}{\longrightarrow} \log n$$ Alternatively one can use L'Hôpital's rule. But that makes use of knowing the derivative of $e^x$.
We need to show that there exists some value $e$ such that $$\displaystyle\lim_{h\to0}\frac{e^h-1}{h}=1$$ suppose $$\displaystyle f(n)=\lim_{h\to0}\frac{n^h-1}{h}$$ then consider $f(1)$ vs. $f(100)$. Clearly $f(1)=0.$ I'm assuming that the exponential function is an increasing function, and if I can't make that assumption, then I'll define it as such (with $e\gt1$). I would like to find $n$ such that $dn^x/dx=1$ to that end I would like to evaluate (and yes, I'm also assuming that $n^x$ is a convex function) $$\frac{n^{x}-n^{x-\Delta x}}{\Delta x}=n^x\frac{1-n^{-\Delta x}}{\Delta x}=n^x\frac{1}{n^{\Delta x}}\cdot\frac{n^{\Delta x}-1}{\Delta x}\lt n^xf(n)$$ Evaluating for $n=100$ and $\Delta x=1/2$ we have $$\frac{1}{100^{1/2}}\cdot\frac{100^{1/2}-1}{1/2}=\frac{18}{10}\lt f(100)$$ So if $f$ is continuous then by the intermediate value theorem there exists some $e$ such that $$0=f(1)\lt 1=f(e)\lt 1.8\lt f(100)$$ $$\therefore \lim_{\Delta x\to0}\frac{n^{\Delta x}-1}{\Delta x}=\lim_{\Delta x\to0}\frac{{(e^{\ln n})}^{\Delta x}-1}{\Delta x}=\lim_{\Delta x\to0}\frac{{e^{\ln n\cdot\Delta x}}-1}{\Delta x}$$ $$=\lim_{\Delta x\to0}\frac{{e^{\ln n\cdot\Delta x}}-1}{\ln n\cdot\Delta x}\cdot\ln n=\lim_{\ln n\cdot\Delta x\to0}\frac{{e^{\ln n\cdot\Delta x}}-1}{\ln n\cdot\Delta x}\cdot\ln n$$ $$=f(e)\cdot\ln n$$ $$=\ln n$$
A conjecture: every power of $16$ greater than $16^4$ has at least one digit $1, 2, 4$, or $8$ when written in base $10$? Is there a proof for why the following is true? Does every power of $16$ greater than $16^4$ have a digit $1, 2, 4$ or $8$ when you write it in base $10$?
Maple yields that ${\left(16^4\right)}^{20}$ is 2135987035920910082395021706169552114602704522356652769947041607822219725780640550022962086936576 whose digits contain more than one instance of each of $1,2,4$ and $8$, so the answer to your question would most definitely be no.
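The conjecture as stated is also easy to probe by brute force; here is a short Python sketch checking the exponents $5$ through $2000$ (the range is my own choice; no counterexample is expected, since large powers of $16$ have many digits and avoiding four of the ten digits becomes astronomically unlikely):

    # Does 16**k, written in base 10, contain a digit 1, 2, 4 or 8?
    target = {'1', '2', '4', '8'}
    misses = [k for k in range(5, 2001) if not target & set(str(16 ** k))]
    print(misses)  # prints [] if no counterexample is found in this range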
Is there a set for which no defining formula can be found? I saw this question, but since (1) it's a little unclear (in fact, it has even been closed as unclear), (2) I'm not familiar enough with the terminology of the answer to judge whether or not it answers my question, and (3) it's old enough for me to assume the commenters won't respond to it, I thought it would be better to ask. In the book I've been using to study set theory, the author enunciates the Intuitive Principle of Abstraction in the very first chapter: A formula $P(x)$ defines a set $A$ by the convention that the members of $A$ are exactly those objects $a$ such that $P(a)$ is a true statement. Thinking a bit more deeply into it, I noticed the author said nothing of the converse. That is, nothing was said about whether the following statement is true: For every set $A$ there is a formula $P(x)$ which defines it. So, here I ask you all: is that statement true? Are there sets for which we can find no formula?
Fact 1: Since a formula must be of finite length, and our alphabets are finite, there are only countably many formulas. Fact 2: We usually assume (mostly implicitly) that there are uncountably many sets. Assuming these two facts, we can conclude that the answer is yes, there must be sets without a defining formula. But we can't describe a single one of them.
Generalize this notation for an ODE? I have $$ \frac{dN(t)}{dt}=a(t)N(t) \tag 1 $$ where $N(0)=N_0 $ (constant). Question 1: From the information in $(1)$ I assume I have the functions $a, N:\mathbb R\rightarrow \mathbb R$? And $N_0\in \mathbb R$? Is that correct? Suppose I now want to generalize the notation in $(1)$ by replacing the right hand side with a function $f$. Question 2: Does this mean I should write $$ \frac{d N(t)}{dt}=f(t) \tag 2 $$ where $N, f:\mathbb R\rightarrow \mathbb R$ ? Or maybe $$ \frac{dN(t)}{dt}=f(a(t),N(t)) \tag 3 $$ where $f:\mathbb R \times \mathbb R \rightarrow \mathbb R$? Or maybe $$ \frac{dN(t)}{dt}=f(t,a(t),N(t)) \tag 4 $$ where $f:\mathbb R \times \mathbb R \times \mathbb R \rightarrow \mathbb R$ and $t \in \mathbb R$? $\times$ is the Cartesian product. Which one is correct? Thanks!
An explicit first order ODE has the following form (see wiki) $$ \frac{dN(t)}{dt} = f(t, N(t)). $$ The form (2) is too narrow, since it does not allow dependence on $N(t)$ in the right hand side. Form (3) does not allow dependence on raw $t$, and the form (4) has an extra $a(t)$ which may easily be built into the definition of $f(t, N)$ itself. A few examples: $\frac{dN(t)}{dt} = g(t)$ is a special case of $f(t, N) = g(t)$; $\frac{dN(t)}{dt} = a(t) N(t)$ is a special case of $f(t, N) = a(t) N$; $\frac{dN(t)}{dt} = \frac{a(t)}{N(t) + t}$ is a special case of $f(t, N) = \frac{a(t)}{N + t}$.
Doubt in finding the number of non-prime factors of an integer The question is: Find the number of non-prime factors of $4^{10} \times 7^3 \times 5^9$. I represented the number as $2^{20} \times 7^3 \times 5^9$; then the number of factors of this integer is $21 \times 4 \times 10 = 840$. Now I can see only three possible prime factors here: $2, 7$ and $5$. So the number of non-prime factors should be $837$, but my module says the answer is $437$. What exactly am I missing here?
You are missing making the error that your module made. You are right, the module is wrong. One could speculate about why the person hired to solve the problems made the mistake. But note that $$11 \times 4 \times 10=440$$ so the person doing the solutions may not have noticed that $4$ is not prime. You did notice.
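In code, the two counts side by side (a trivial check of the divisor-count formula; the second computation reproduces the module's likely slip of treating $4$ as prime):

    correct = (20 + 1) * (3 + 1) * (9 + 1)   # exponents of 2, 7, 5
    print(correct, correct - 3)              # 840 divisors, 837 non-prime

    slipped = (10 + 1) * (3 + 1) * (9 + 1)   # treating 4^10 as a prime power
    print(slipped, slipped - 3)              # 440 divisors, 437 "non-prime"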
If the question is "Find the number of non-prime factors of $~2^10 * 7^3 * 5^9~$"? The answer will be $~440 - 3 =437~$
Constant function formal definition A generic function $F : X \rightarrow Y$ can be defined as $\forall x \in X\ \exists !y \in Y\ (x, y) \in F$. How about the formal definition of a constant function? The formula $\exists !y \in Y\ \forall x \in X\ (x, y) \in F$ does not seem right to me, since it does not prevent the same element of the set $X$ from being mapped to multiple different elements of the set $Y$. Instead, it only prevents two different constant lines from being mapped from the same set $X$.
How about just combining the two statements together? $$(\forall x\in X\ \exists!y\in Y\ (x,y)\in F)\land(\exists y\in Y\ \forall x\in X\ (x,y)\in F)$$ Note that uniqueness of the $\exists$ in the second statement is not required: suppose two such $y$'s existed, $y_1$ and $y_2$. Then for any element $x\in X$, $(x,y_1)$ and $(x,y_2)$ are in $F$, which is not allowed by the first statement. Thus only one such $y$ may exist.
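One way to convince yourself that the combined formula really characterizes constant functions is to check it exhaustively on a small finite model (a Python sketch; the sets and encodings are my own choices):

    from itertools import product

    X, Y = range(2), range(3)
    pairs = list(product(X, Y))

    def is_function(F):
        # every x has exactly one y with (x, y) in F
        return all(sum((x, y) in F for y in Y) == 1 for x in X)

    def formula(F):
        # (forall x exists! y: (x,y) in F) and (exists y forall x: (x,y) in F)
        return is_function(F) and any(all((x, y) in F for x in X) for y in Y)

    def is_constant(F):
        return is_function(F) and len({y for (_, y) in F}) == 1

    # enumerate all 2**6 relations F subset of X x Y
    for bits in product([False, True], repeat=len(pairs)):
        F = {p for p, b in zip(pairs, bits) if b}
        assert formula(F) == is_constant(F)
    print("formula and 'constant function' agree on all relations")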
Maybe $\exists! y \in Y : ((x,y) \in F , \forall x \in X) \land ((x,y') \notin F , \forall y' \ne y, \forall x \in X)$?
Algebraic condition for a twist in a 2D ball or a hypersphere? We define a twisted ball here as a ball where only one point (the twist) separates the ball's sides. A simple implicit presentation of the 2D ball is $x^2+y^2=r^2$. I am trying to find a general condition for when the 2D ball has a crossing like the one in the last parametric plot. The third one is just a line, so it can also be considered a naive twisted ball: is the twisted ball just a line with certain features? The extra $2$ in the fourth plot results in a more apparent twisted ball: is there some other feature we should require of a twisted ball? Questions How to deduce the algebraic condition for the 2D twisted ball? Is it easier to consider higher dimensional balls, i.e. hyperspheres? Please explain the twist in the ball.
Let us consider firstly the set of parametric plots in which the coordinates $x,y$ are given by $ [\sin(t), \cos(kt)]$. I will restrict my answer to the cases where $k$ is an integer. In this set of parametric plots, the function crosses the $x$-axis for $2k$ different values of $t$, which are those satisfying $t=\pi(j-1/2)/k$, where $j=1,2,3...2k$. However, because of the symmetry of these $t$ values with respect to the $y$-axis, these zeros can be grouped in pairs with the same $x$-value, i.e. corresponding to the same point on the $x$-axis. As a result, the function crosses the $x$-axis in $k$ points when $k$ is even, and $k+1$ points when $k$ is odd. For example, when $k=4$, the function crosses the $x$-axis for eight values of $t$, namely $\frac{\pi}{8}, \frac{3}{8}\pi, \frac{5}{8}\pi... \frac{15}{8}\pi$. However, these eight values correspond to only four points on the $x$-axis, since the $x$ value is equal for the pair $t=\frac{\pi}{8}$ and $t=\frac{7}{8}\pi$, for the pair $t=\frac{3}{8}\pi$ and $t=\frac{5}{8}\pi$, and so on. Accordingly, the parametric plot $ [\sin(t), \cos(4t)]$ is as follows: On the other hand, for example, when $k=3$, the function crosses the $x$-axis for six values of $t$, namely $\frac{\pi}{6}, \frac{3}{6}\pi, \frac{5}{6}\pi... \frac{11}{6}\pi$. These six values correspond to only four points on the $x$-axis, since the $x$ value is equal for the pair $t=\frac{\pi}{6}$ and $t=\frac{5}{6}\pi$, and for the pair $t=\frac{7}{6}\pi$ and $t=\frac{11}{6}\pi$; the other two points are given by $t=\frac{3}{6}\pi$ and $t=\frac{9}{6}\pi$. Accordingly, the parametric plot $ [\sin(t), \cos(3t)]$ is as follows: Note then that, when $k$ is even, all $2k$ zeros of the function can be grouped in pairs, leading to $k$ points on the $x$-axis; conversely, when $k$ is odd, all $2k$ zeros of the function can be grouped in pairs, except two that are "unpaired" (those corresponding to $t=\frac{\pi}{2}$ and $t=\frac{3}{2}\pi$). This leads to $k+1$ points on the $x$-axis. Also note two other things: 1) for $k$ odd, the two "unpaired" zeros given by $t=\frac{\pi}{2}$ and $t=\frac{3}{2}\pi$ correspond to the most lateral points (left and right) of the function; 2) for $k=1$, the function crosses the $x$-axis only in these two unpaired zeros, thus without crossing itself (in fact for $k=1$ it reduces to a circle). Now it is also not difficult to show that, considering the behaviour of the function in these points lying on the $x$-axis, its derivative has a unique value when $k$ is even, which means that the function does not cross itself. On the other hand, when $k$ is odd, the derivative assumes two different values (positive and negative) in each of these points lying on the $x$-axis except for the two "unpaired", most lateral ones. This means that the function crosses itself in $k-1$ points on the $x$-axis. For example, we expect that for $k=14$ the parametric plot $ [\sin(t), \cos(kt)]$ crosses the $x$-axis in $14$ points and does not cross itself. This is demonstrated by the following figure showing $ [\sin(t), \cos(14t)]$: On the other hand, we expect, for example, that for $k=9$ the parametric plot $ [\sin(t), \cos(kt)]$ crosses the $x$-axis in $9+1=10$ points, and that it crosses itself in $9-1=8$ of these points.
This is demonstrated by the following plot of $ [\sin(t), \cos(9t)]$, showing the $8$ points where the function crosses itself and the remaining $2$ most lateral points ("unpaired" zeros) where the function crosses the $x$-axis: As a result, within the set of parametric plots $x,y$ given by $ [\sin(t), \cos(kt)]$, there is no way to find a case that yields a unique crossing point dividing the function into two sides (left and right). For $k$ even, the function does not cross itself; among the odd $k$ values, for $k=1$ the function does not cross itself; for $k=3$, the function crosses the $x$-axis in four points (two of which are the lateral, "unpaired" ones), and therefore crosses itself in two points on the $x$-axis; lastly, as already shown, for higher odd values of $k$, the function crosses itself on the $x$-axis even more times. All these considerations can be repeated for the set of parametric plots in which the coordinates $x,y$ are given by $ [\sin(t), \sin(kt)]$, with the only difference that the behaviour of the function for $k$ odd or even is inverted. More precisely, in this case the function crosses the $x$-axis $k$ times and never crosses itself when $k$ is odd; conversely, the function crosses the $x$-axis in $k+1$ points and crosses itself in $k-1$ of these points when $k$ is even. As a result, within the set of parametric plots $x,y$ given by $ [\sin(t), \sin(kt)]$, the only case in which there is a unique crossing point dividing the function into two sides (left and right) occurs for $k=2$. In fact, in this case, the function crosses the $x$-axis in three points (two of which are the "unpaired" lateral ones) and crosses itself in a single central point, yielding a "twisted" ball according to the definition given in the OP. Accordingly, the plot of $ [\sin(t), \sin(2t)]$ is as follows:
I have some sort of intuitive feel for these twists based on quantum physics: they seem to trigger sudden peaks, which in turn change the architecture of the object through the Gibbs phenomenon. So the algebraic condition would be any sudden peak in a 2D twisted ball.
Number of ways to form three distinct items Given a $6$ by $5$ array, calculate the number of ways to form a set of three distinct items such that no two of the selected items are in the same row or the same column. What I did was $C(30,1) \cdot C(20,1) \cdot C(12,1)$; however, this is not the answer. They get $1200$. How?
$1^{st}$ item: you will have $6\times5=30$ choices. $2^{nd}$ item: you take out the row and column containing the $1^{st}$ chosen item, so you are left with $5\times4=20$ choices. $3^{rd}$ item: you take out the row and column containing the $2^{nd}$ chosen item, so you are left with $4\times3=12$ choices. However, note that the order of items doesn't matter (i.e. choosing $ABC$ is the same as choosing $CBA$). Hence the desired answer is $(30\times20\times12)\div3!=1200$
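If you want to double-check the count, a brute-force enumeration in Python (my own sketch, using the 6-by-5 dimensions from the question) agrees:

```python
from itertools import combinations

cells = [(r, c) for r in range(6) for c in range(5)]  # the 6-by-5 array
count = sum(1 for trio in combinations(cells, 3)
            if len({r for r, _ in trio}) == 3
            and len({c for _, c in trio}) == 3)
print(count)  # 1200
```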
You must give priority to one of the two, either the row or the column. Whichever one you prioritize, one goes with a permutation of 3 and the other with a combination of 3, to form a kind of coordinate. That gives 5C3 * 6P3, or equivalently 6C3 * 5P3. This takes into account that the 3 choices would end up on different lines, and 2 of them would never meet, because one is in the middle. With the permutation, one of the lines is eliminated, and the problem of them meeting is eliminated as well.
Numbering $8$ vertices of a cube from $1$ to $8$ such that no two consecutive numbers are adjacent How many ways are there to number the $8$ vertices of a cube using numbers from $1$-$8$ such that no two consecutive numbers are adjacent on the cube ($1$ and $8$ are considered to be consecutive)? I know that the answer is $480$ (with symmetries/etc.); however, I am looking for a solution. I see that there are $\frac{8!}{8\times 3}=1680$ ways (without symmetry/etc.) to number the cube. I can't use graph theory for the problem. It was originally a probability problem.
Without symmetries, there are only ten solutions. If the cube is $\{0,1\}^3$, put 1 at $(0,0,0)$ and the numbers at $(1,0,0),(0,1,0)$ and $(0,0,1)$ in increasing order. Don't forget reflection symmetry. There is another symmetry by reversing the order of the digits, from $1,2,...8$ to $1,8,7,...,2$. You might assume the number at $(1,1,1)$ is less than 6. The number at $(1,1,1)$ can't be 3 as there would be nowhere to put 2. Now plough through the options. With the extra symmetry of the second paragraph, there are between five and ten solutions to find. If the number at $(1,1,1)$ is 2, then the neighbours of $1$ are $3$ and two out of $4,5,6,7$. If the number at $(1,1,1)$ is 4, then the neighbours of $1$ are $3,5$ and another one. If the number at $(1,1,1)$ is 5, then the neighbours of $1$ are $4,6$ and either 3 or 7. There aren't that many possibilities to check.
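Rather than ploughing through the options by hand, the count can also be verified by brute force. The following Python sketch (names and structure my own) checks all $8!$ labellings of $\{0,1\}^3$ and should report the $480$ labellings stated in the question, i.e. $480/48 = 10$ solutions up to the $48$ symmetries of the cube:

```python
from itertools import permutations

# vertices of {0,1}^3; two vertices are adjacent iff they differ
# in exactly one coordinate
verts = [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]
edges = [(i, j) for i in range(8) for j in range(i + 1, 8)
         if sum(u != v for u, v in zip(verts[i], verts[j])) == 1]

def consecutive(a, b):            # labels 1..8, with 1 and 8 consecutive
    return (a - b) % 8 in (1, 7)

count = sum(1 for p in permutations(range(1, 9))
            if not any(consecutive(p[i], p[j]) for i, j in edges))
print(count)                      # 480
```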
Why do we start losing algebraic properties when dealing with hypercomplex numbers? Every form of hypercomplex number I have seen (including the complex numbers) loses some important algebraic property. Why is that? Is there a pattern to what we lose?
The complex numbers $\mathbb{C}$ do not lose any properties that $\mathbb{R}$ has. They are both fields and hence share all the properties of fields. Now, the simple answer to your question is: because they do! Unlike the real numbers (and arguably the complex numbers), we create these structures (not that we don't create the others, but one can argue that the real numbers rather boldly present themselves in nature in a way other 'number' structures don't). The way that we create the quaternions, $\mathbb{H}$, is such that they lose commutativity of multiplication. It comes directly from how we define multiplication of quaternions. Similarly, how we define the octonions means that they lose both associativity and commutativity of multiplication. For both groups of hypercomplex numbers, we form an operation called multiplication in terms of real numbers and check to see whether it satisfies the properties we normally have when we work with real numbers. The way we have to define the operations to construct these hypercomplex numbers for $\mathbb{H}$ and $\mathbb{O}$ causes these properties to fail. The more complex (and more realistic) reason that they lose the properties 'numbers' normally have is that we are trading the structure they lose to get something more. That is, we want to generalize $\mathbb{R}$ to bigger and bigger sets of 'numbers' that have certain properties. We call these bigger structures algebras (because they resemble the algebra we are used to). For example, we also want to be able to tell the size of these new 'numbers' (we call this the norm of a number). In order to generalize to bigger and bigger 'numbers' with these properties, essentially something has to give, and what is often lost is commutativity and associativity of multiplication. Of course, this is only the general idea. The mechanics of where and why things break down are quite complicated in the general case (involving Lie algebras, Clifford algebras, and general matrix groups). However, this gives a general insight into the question. In fact, there is still much work to be done to understand in full the structures of these objects and why they are the way they are.
If $a \mid c$ and $b \mid c$, but $\gcd(a,b) = 1$, then $ab \mid c$. If $a | c$ and $b | c$ and $a$ and $b$ are relatively prime prove that $ab|c$. What I did was since $(a,b)=1$ then we can find integers $m,n$ such that $ma + nb=1$. Now since $a|c$ then $a = mc$. Similarly, since $b|c$ then $b=nc$. Then I got that: $$ma+nb=1$$ $$c(m+n)=1$$ $$m +n = 1/c$$ $$abm +abn=\frac{ab}{c}.$$ Is this correct?
The key ideas are present in your proof. We know that $a \mid c$ and $b \mid c$. As $a \mid c$, we have that $c = ak$ for some integer $k$. In this form, we see that $b \mid c = ak$, but $\gcd(a,b) = 1$. By the theorem below, we have that $b \mid k$, so that $k = b \ell$. Now $c = ak = ab\ell$, and now it's clear that $ab \mid c$. Theorem If $a \mid bc$ and $\gcd(a,b) = 1$, then $a \mid c$. This can be proved from your application of Bezout's Theorem in your attempted solution. And in fact this is the content of this question and its answers.
If you want to avoid prime factorization, try saying $c = kab + r$ with $0 \leq r < ab$. Then, because $a | c$ and $b|c$, we have that $a | r$ and $b | r$. Since $(a,b) = 1$, what can you say about $r$?
Is $f(-x) = 1/f(x)$ under these conditions? Let $f\colon \mathbb{R}\to \mathbb{R}$ be such that: for $x\in \mathbb{R}$, $f(x)>0$; for $x,y \in \mathbb{R}$, $f(x+y) = f(x)f(y)$. Prove that $f(-x) = 1/f(x)$.
Using your second condition, you get $$f(0)=f(x+(-x))=f(x)f(-x)$$ Dividing both sides by $f(x)$ $$\frac{f(0)}{f(x)}=f(-x)$$ Now, what is $f(0)$? Let's use that condition again: $$f(x)=f(x+0)=f(x)f(0)$$ hence, dividing both sides by $f(x)$, it's evident that $f(0)=1$. Substituting above, we get $$\frac{1}{f(x)}=f(-x)$$
If we want to solve this problem in the straightforward way, start by putting $y = -x$; then $f(0) = f(x)f(-x) \implies f(-x) = f(0)/f(x)$. To find out what $f(0)$ would be, we can set $x = y = 0$ in the original equation and see $f(0) = f(0)^{2} \implies f(0) = 0$ or $1$. So you can see, the only condition we need is $f(0) \neq 0$. Now, we set $y = 0$ in the original equation; hence, $f(x + 0) = f(x)f(0) \implies f(x) = f(x) f(0) \implies f(x)(1 - f(0)) = 0$. Now, according to condition 1, $f(x)$ can't be zero. Hence, $f(0) = 1$. Now we can conclude $f(-x) = f(0)/f(x) = 1/f(x)$. (Q.E.D.) N.B.: As an addition, if the function is instead defined as $f:\mathbb{R} \to \mathbb{R}^{+}$, the problem can be reduced to Cauchy's functional equation. From this pdf: all continuous functions $f : \mathbb{R} \to (0,+\infty)$ satisfying $f(x +y) = f(x)f(y)$ are of the form $f(x) = a^{x}$, namely the function $g(x) = \log(f(x))$ is continuous and satisfies the Cauchy equation. Now, if $g(x) = \log(f(x))$, then $g(x+y) = \log(f(x+y))$. Starting with your equation and taking logs on both sides, $\log(f(x+y)) = \log(f(x)) + \log(f(y)) \implies g(x + y) = g(x) + g(y)$, which is just Cauchy's functional equation.
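As a quick numerical illustration (not a proof), any function of the form $f(x)=a^x$ with $a>0$ satisfies both conditions, and indeed $f(-x)=1/f(x)$; the base below is an arbitrary choice:

```python
import math

a = 2.5                      # any positive base
f = lambda x: a ** x

for x, y in [(0.3, 1.1), (1.7, -0.4), (-2.2, 2.2)]:
    assert math.isclose(f(x + y), f(x) * f(y))   # f(x+y) = f(x)f(y)
    assert math.isclose(f(-x), 1 / f(x))         # f(-x) = 1/f(x)
print("checks passed")
```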
Evaluate the following: $(1-i)^{1+i}$ My progression: $(1-i)^{i+1} = e^{(i+1) * \ln(1-i)}$. I get stuck after this point.
The expression $z^w$ where $z, w \in \mathbb{C}$ is not uniquely determined. In fact, we define $$z^w = e^{w\log z}$$ where $\log z$ is any logarithm of $z$. There are infinitely many choices of $\log z$, and for most values of $z$ and $w$ there will be infinitely many possible values for $z^w$. To get something unique, you will have to specify a particular branch of the complex logarithm, but when you do so, $z^w$ won't be defined for all $z$ (or at the very least $z^w$ won't be continuous in $z$, depending on what your conventions with branches are). In your particular case $\log(1-i) = \ln \sqrt 2 - \dfrac{i\pi}4 + 2\pi i k$ for some arbitrary integer $k$, and \begin{align} (1-i)^{1+i} &= e^{(1+i)\log(1-i)} \\ &= e^{(1+i)(\ln \sqrt 2 - \frac{i\pi}4 + 2\pi i k)} \\ &= e^{ \ln \sqrt 2 + \frac\pi4-2\pi k + i(\ln\sqrt 2 - \frac{\pi}4 + 2\pi k)} \\ &= \sqrt 2\, e^{\frac\pi4-2\pi k}\cdot e^{i(\ln\sqrt2-\frac\pi4)} \\ &= \sqrt 2\, e^{\frac\pi4-2\pi k}\cdot \big( \cos (\ln\sqrt2-\frac\pi4) + i \sin(\ln\sqrt2-\frac\pi4) \big) \end{align}
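If it helps to see the multivaluedness concretely, Python's cmath uses the principal branch, and the other values come from shifting the logarithm by $2\pi i k$; a small sketch:

```python
import cmath

z, w = 1 - 1j, 1 + 1j
print(z ** w)                     # principal value (k = 0)

# other values: exp(w * (Log z + 2*pi*i*k)) for integer k
for k in (-1, 0, 1):
    value = cmath.exp(w * (cmath.log(z) + 2j * cmath.pi * k))
    print(k, value)
```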
$n \in {\mathbb Z}$. \begin{align} \left(1 - i\right)^{1 + i} &= \left[\sqrt{2}\,e^{i\left(-\pi/4 + 2n\pi\right)}\right]^{1 + i} = \left(\sqrt{2}\right)^{1 + i} e^{i\left(-\pi/4 + 2n\pi\right) - \left(-\pi/4 + 2n\pi\right)} \\ &= \sqrt{2}\,e^{\pi/4 - 2n\pi}\,\left(\sqrt{2}\right)^{i} e^{i\left(-\pi/4 + 2n\pi\right)} = \left(1 - i\right)e^{\pi/4 - 2n\pi}\,2^{i/2} \\ &= e^{\pi/4 - 2n\pi}\,e^{i\ln\left(2\right)/2}\left(1 - i\right) = e^{\pi/4 - 2n\pi}\left[\cos\left(\tfrac{1}{2}\ln 2\right) + i\sin\left(\tfrac{1}{2}\ln 2\right)\right]\left(1 - i\right) \end{align} Hence, for $n \in {\mathbb Z}$, \begin{align} \left(1 - i\right)^{1 + i} = e^{\pi/4 - 2n\pi}\left\{\left[\cos\left(\tfrac{1}{2}\ln 2\right) + \sin\left(\tfrac{1}{2}\ln 2\right)\right] + i\left[-\cos\left(\tfrac{1}{2}\ln 2\right) + \sin\left(\tfrac{1}{2}\ln 2\right)\right]\right\} \end{align}
determine whether the vector is in the image of A I am having a really hard time trying to understand what "the image of A" means and how to start such a question. I was thinking of creating a matrix with b1 as the augmented part of the matrix.
The image of a function $f: X \rightarrow Y$ is the set of elements of $Y$ that are the result of applying $f$ to some $x$ in $X$. Any $3\times 3$ matrix $A$ gives rise to a linear transformation $T_A$ given by $T_A(\vec{x})=A \vec{x}$. In other words, a vector $\vec{b}$ is in the image of $A$ if there exists a vector $\vec{x}$ such that $A\vec{x} = \vec{b}$. Hint: What has this got to do with solving a system of linear equations?
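In practice you can test the hint numerically: $\vec{b}$ is in the image of $A$ exactly when appending $\vec{b}$ as an extra column does not increase the rank. A small numpy sketch with a made-up matrix (the actual $A$ and b1 from the exercise are not shown in the question):

```python
import numpy as np

A = np.array([[1., 2., 3.],
              [4., 5., 6.],
              [7., 8., 9.]])     # hypothetical rank-2 matrix
b = np.array([1., 1., 1.])

in_image = (np.linalg.matrix_rank(A)
            == np.linalg.matrix_rank(np.column_stack([A, b])))
print(in_image)                  # True here: b = (column 2) - (column 1)
```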
Row-reduce $A$ to an upper echelon matrix $U$. The columns of $A$ that correspond to the pivot columns of $U$ are the vectors of a basis for the image of $A$.
The ambiguous definition of vacuous truth There is no doubt that vacuous truth is related to the material implication "$P\Rightarrow Q$". We say the material implication statement is true when $P$ is false. However, it seems that this is not the definition of vacuous truth. Do we call it vacuously true only when $P$ can never be true? (It seems that all the examples are of this kind.) More clearly, suppose we have a statement "$P(x)\Rightarrow Q(x)$", and on some domain of $x$ (we may denote the domain by $T_x$), $P(x)$ is true; on some other domain of $x$ (denote the domain by $F_x$), $P(x)$ is false. Can we say the statement is vacuously true on the domain $F_x$? Is there any real example? To make the problem more clear, we may check the statement "For all $x$, $P(x)\Rightarrow Q(x)$". Can we say this statement is vacuously true on the domain $F_x$ ($F_x$ is defined as above)?
I would frame the concept as pertaining to first order sentences of the form $$\forall x(P(x)\to Q(x)).$$ Vacuous truth is the particular situation when there are no $x$ in the domain satisfying $P(x),$ in which case standard semantics says the statement is true. So yes, whether a first order sentence is vacuously true or not depends on the interpretation, and hence the domain, much as regular truth does. (However this doesn't usually matter: when we're talking about math, we generally are making statements relative to some fixed interpretation.)
Do we call it vacuously true only when P can never be true? No. In classical logic, for any logical propositions P and Q (it doesn't matter if they are true or false), we have: $P\implies [\neg P \implies Q]$ This is a tautology. See truth table here.
Find a polynomial of the specified degree that satisfies the given conditions. Find a polynomial of the specified degree that satisfies the given conditions. Degree $4$; zeros $-1$, $0$, $3$, $1/3$ ; coefficient of $x^3$ is $7$ My answer is... $$ P(x)=3x^4 - 7x^3 - 7x^2 + 3x. $$ When I entered this answer into the software (MindTap) for my class it was marked incorrect. mymathportal.com and wolframalpha.com both agree with my answer. Is there another format that could be also correct or something I am missing? Thanks for your time.
Since $\left\{-1,\ 0,\ 3,\ \frac{1}{3}\right\}$ are the polynomial roots, the polynomial we are looking for is proportional to \begin{align} \left[x - \left(-1\right)\right]\left(x - 0\right)\left(x - 3\right)\left(x - \frac{1}{3}\right) = x^{4} - \frac{7}{3}\,x^{3} - \frac{7}{3}\,x^{2} + x \end{align} In order to 'adjust' the $x^{3}$-coefficient, multiply it by $-3 \implies \boxed{-3x^{4} + 7x^{3} + 7x^{2} - 3x}$
The answer was found in the comments to be $$P(x)=-3x^4+7x^3+7x^2-3x$$
Definition of similarity mapping between ordered sets: why is a "strictly precedes" relation required on each side of the biconditional? As a definition of "similarity mapping" I read in Lipschutz, Set Theory: the mapping f from A to B (A and B being ordered sets) is a similarity mapping iff (a) f is a bijection and (b) for any elements a and a' belonging to A: a < a' iff f(a) < f(a'). The definition is expressed in terms of < : "strictly precedes". Would f also be a similarity mapping in case < were replaced by "precedes or is equal to"? What I do not understand is that (1) the author has defined ordered sets in terms of "precedes or is equal to" and (2) defines an "order preserving function" in terms of "strictly precedes".
Yes, the two notations are equivalent because $f$ is also required to be a bijection. This condition ensures that $f(a) = f(a') \iff a = a'$. It follows that the strict and non-strict variants are equivalent. As to the use of "strictly": one could argue it would have been better for the author to phrase the definition of "ordered set" in terms of "strictly precedes or is equal to" (which we would then abbreviate as "precedes").
The given condition implies $a\le a'\implies f(a) \le f(a')$, but not the other way around, as any constant function is a counterexample.
Is the determinant of a covariance matrix always zero? My understanding is that given matrix X, I can find its corresponding covariance matrix by: finding the means of each column, subtracting each mean from each value in its respective column, and multiplying the resulting matrix by its own transpose. Let's call this matrix C. Here is what it would look like in Python: Y = X - numpy.mean(X, axis = 0) C = numpy.dot(Y, Y.T) If I do this, I can prove mathematically (and experimentally using some simple Python code) that det(C) = 0 always. However, a colleague tells me that using the inverse of a covariance matrix is common in his field and he showed me some R code to demonstrate. > det(cov(swiss)) [1] 244394171542 I notice that R has several ways of calculating the covariance matrix that lead to different results. I also notice from Googling that some people say the covariance matrix is always singular (e.g. here) whereas others say it is not. So, my question is: why the differences of opinion, and what's the true answer? EDIT: I discovered that the determinant is only zero if the matrix is square. If anybody knows the proof for this or can throw some further light on the matter, I'd be grateful.
No, the covariance matrix is not always singular. Counterexample: \begin{align} X&=(\mathbf x_1,\mathbf x_2,\mathbf x_3)=\pmatrix{1&2&3\\ 1&1&4},\\ \bar{\mathbf x}&=\frac13(\mathbf x_1+\mathbf x_2+\mathbf x_3)=\pmatrix{2\\ 2},\\ Y&=(\mathbf x_1-\bar{\mathbf x},\ \mathbf x_2-\bar{\mathbf x},\ \mathbf x_3-\bar{\mathbf x})=\pmatrix{-1&0&1\\ -1&-1&2},\\ YY^T&=\pmatrix{2&3\\ 3&6},\text{ which is nonsingular}. \end{align} It is true, however, that when the data matrix $X$ is square or "tall", i.e. when $X$ is $n\times m$ with $n\ge m$ (or equivalently, when the number of data points $m$ does not exceed the number of variables $n$), $YY^T$ is always singular. This is evident because by definition, the sum of all columns of $Y$ is the zero vector. Hence $Y$ has deficient column rank and in turn, $\operatorname{rank}(YY^T)=\operatorname{rank}(Y)<m\le n$.
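The counterexample is easy to reproduce in numpy (columns of $X$ taken as the data points, matching the notation above):

```python
import numpy as np

X = np.array([[1., 2., 3.],
              [1., 1., 4.]])            # columns are the data points
Y = X - X.mean(axis=1, keepdims=True)   # subtract the mean point
C = Y @ Y.T
print(C)                                # [[2. 3.] [3. 6.]]
print(np.linalg.det(C))                 # 3.0 (up to rounding): nonsingular
```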
The result of the matrix multiplication is only an approximation of the true covariance matrix and disregards the distributions of the random variables that are crucial for an accurate calculation. Hope it helps.
Solve the following congruence: 2x ≡ 7 (mod 17) Question Solve the congruence 2x ≡ 7 (mod 17). I have tried working out this problem but I am stuck midway. Could someone help me out by showing me or explaining how to proceed further? Here is the work that I have so far: The inverse of a modulo m is an integer b for which ab ≡ 1 (mod m), a = 2, m = 17: 17 = 8 ⋅ 2 + 1, 2 = 2 ⋅ 1 + 0. The greatest common divisor is the last non-zero remainder value, that is, gcd(a, m) = 1. Expressing the greatest common divisor as a multiple of a and m: gcd(a, m) = 1 = 17 - 8 ⋅ 2 = 1 ⋅ 17 - 8 ⋅ 2. The inverse would then be the coefficient of a = 2, which in this case would be -8. And, since -8 mod 17 = 9, 9 is also the inverse of a modulo m. Solving the congruence 2x ≡ 7 (mod 17) by multiplying each side by the inverse 9: 9⋅2x ≡ 9⋅7 (mod 17), 18x ≡ 63 (mod 17). And this is the part where I am stuck. Could anyone help out? Thank you.
You are basically done: $18x \equiv 63 \pmod{17}$, and since $18x \equiv x \pmod {17}$ and $63\equiv 12 \pmod {17}$, we get $x \equiv 12 \pmod{17}$.
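For what it's worth, recent Python versions can do the whole computation, inverse included, in a couple of lines (pow with exponent -1 returns the modular inverse):

```python
inv = pow(2, -1, 17)       # 9, since 2*9 = 18 ≡ 1 (mod 17)
x = (inv * 7) % 17
print(inv, x)              # 9 12
assert (2 * x) % 17 == 7
```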
$7+17=24=2 \times 12$, so $x \equiv 12 \pmod{17}$.
Check of step in proof that open subsets of the real numbers can be expressed as unions of disjoint open intervals. The following is from "An Introduction to Lebesgue Integration and Fourier Series" by Howard J. Wilcox and David L. Myers: 7.2 Theorem: Every non-empty open set $G \subset \mathbb{R}$ can be expressed uniquely as a finite or countably infinite union of pairwise disjoint open intervals. Proof: Suppose first that $G$ is bounded. Since $G$ is open, for each $x \in G$ there is an open subinterval of $G$ containing $x$. Let $b_{x} = \mathrm{lub} \{y \mid (x,y) \subset G\}$, and $a_{x} = \mathrm{glb} \{z \mid (z,x) \subset G\}$. Let $I_{x} = (a_{x}, b_{x})$, called the component of $x$ in $G$. Clearly $x \in I_{x}$. Now $I_{x} \subset G$, for if $w \in I_{x}$, say $x < w < b_{x}$, then by definition of $b_{x}$, there is a number $y$ such that $w < y$ and $(x,y) \subset G$. Hence $w \in G$. The case where $a_{x} < w < x$ is handled similarly. (What about $w = x$?) Also $a_{x} \notin G$ and $b_{x} \notin G$ (see exercise 9.10). I am attempting Exercise 9.10: Prove that $a_{x} \notin G$, in the proof of Theorem 7.2. This is my attempt: Suppose for contradiction that $a_{x} \in G$. Since $G$ is open, then there exists an open subinterval $(\alpha, \beta) \subset G$ containing $a_{x}$. Since $(a_{x}, x) \subset I_{x}$, and $I_{x} \subset G$, then $(a_{x}, x) \subset G$. Then since $(\alpha, a_{x}) \subset G$, and $a_{x} \in G$ by assumption, then $(\alpha, x) \subset G$. Then $\alpha \in \{z \mid (z,x) \subset G\}$. Then since $\alpha < a_{x}$, and $a_{x}$ is a lower bound of $\{z \mid (z,x) \subset G\}$, this is a contradiction. Is this correct? Is there a simpler or more elegant proof?
FWIW your proof seems fine to me. It's straightforward and it efficiently uses what is necessary for the proof without anything extraneous. Personally I can't think of a significantly better approach.
A simple proof. Notation. For $x,y\in \Bbb R$ let $In[x,y]=[x,y]\cup [y,x]=[\min(x,y),\max (x,y)]$. Let $G$ be an open subset of $\Bbb R.$ For $x,y\in G$ let $x\sim y$ iff $In[x,y]\subset G.$ Obviously $\sim$ is symmetric and reflexive on $G.$ Exercise: Show that $\sim$ is transitive on $G.$ Hint: $In[x,y]\cup In[y,z]=In[\min(x,y,z),\max(x,y,z)]$ for all $x,y,z\in \Bbb R.$ So $\sim$ is an equivalence relation on $G.$ For $x\in G$ let $[x]_{\sim}=\{y\in G:y\sim x\}.$ Exercise: $[x]_{\sim}$ is convex for each $x\in G.$ Now the set $G_{/\sim}=\{[x]_{\sim}:x\in G\}$ of $\sim$-equivalence classes is a partition of $G.$ And each $[x]_{\sim}$ is open, because if $x\in G$ then $(-r+x,r+x)\subset G$ for some $r>0,$ so $y\sim x$ for all $y \in (-r+x,r+x).$ Therefore $G_{/\sim}$ is a partition of $G$ into a family of pair-wise disjoint non-empty convex open sets. $Any$ family $F$ of pair-wise disjoint non-empty open subsets of $\Bbb R$ is countable because $\Bbb R$ has a countable dense subset. For example, for each $f\in F$ let $\psi(f)\in f\cap \Bbb Q.$ Then $\psi:F\to \Bbb Q$ is injective, so $F$ is countable. Therefore $G_{/\sim}$ is countable. Remark. The notation $In[x,y]$ is ad hoc. I just found it convenient, in that you don't need to distinguish the cases $x<y$, $x=y,$ or $x>y$.
Goldbach Conjecture Consequences I have been looking into the Goldbach Conjecture pretty recently and I have often heard that it would have far-reaching consequences. However, I haven't found many of the actual consequences. I was wondering if you all could supply me with some of these consequences (theorems, etc.).
A proof of the Riemann Hypothesis will have security implications if that proof can be used to determine the distribution of primes. It is often claimed that the proof for the RH will verify the Goldbach Conjecture but this has not been proven. I assume that this is where the idea that GC will have security implications comes from, since it is often presented with a confused relationship with the RH, but this doesn't necessarily mean that a proof of the GC will have any real consequences for mathematics. Much like the GC, proving the RH will not have any major impact on mathematics aside from the phrase "assuming the Riemann Hypothesis" being removed from academic literature.
Two minus signs make a plus Why do two minus signs make a plus sign and is there a corresponding rule for division and multiplication signs?
Multiplication is a scaling action on the number line. When you multiply by a negative number, you also reflect the scaling. Reflecting again - multiplying by another negative number - brings the scaling back to the positive direction. Note that you can always start the process with $1$ - so for example $-3 \times -4 = 1 \times -3 \times -4$. There is a kind of corresponding process for division. If you divide $1$ by $4$, you get $\frac{1}{4}$; then if you divide $1$ by $\frac{1}{4}$, you get $4$ again. Not exactly the same, but a similar concept of a double action reverting to the initial state.
One solution is to use complex numbers, using $z=e^{i\theta}$. $\theta$ in this representation represents the angle through which we rotate a unit vector anti-clockwise about the origin. As the total internal rotation angle of a circle is $2\pi$, we arrive at $e^{i\pi}=-1$. Multiplying two complex numbers $u\times v$, where $u=e^{i\alpha}$ and $v=e^{i\beta}$, gives $e^{i\alpha}e^{i\beta}=e^{i(\alpha+\beta)}$. Therefore $-1\times-1=e^{i\pi}\times e^{i\pi}=e^{2i\pi}=e^0=1$.
Number having sexagesimal expansion ending with infinitely many zeros? I am looking for all the real numbers whose sexagesimal expansion (base $60$) ends in an infinite tail of zeros. Do they really exist? It seems absurd to me, or am I thinking about it in the wrong manner?
A number $N$ has a finite sexagesimal expansion if it can be written as $$ N=a_0+\frac{a_1}{60}+\frac{a_2}{60^2}+\dots+\frac{a_n}{60^n} $$ with $a_0$ any integer and $0\le a_i<60$ for $i=1,2,\dots,n$. Then we can write $$ N=\frac{m}{60^n} $$ The converse is obviously also true: take $a_0=\lfloor N\rfloor$ and write $$ N-a_0=\frac{m'}{60^n} $$ which is true for a unique $m'$ with $0\le m'<60^n$. Then write the base $60$ expansion of $m'$ and you're done. You can see that there is nothing special about base $60$. A number $N$ has a finite base $b$ expansion if and only if it has the form $$ N=\frac{m}{b^n} $$ for some integer $m$ and nonnegative integer $n$.
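If you want to compute such expansions explicitly, here is a small Python sketch (function name my own) that returns the digits $a_0; a_1, \dots, a_n$ of $N = m/60^n$:

```python
def base60_digits(m, n):
    """Digits (a_0; a_1, ..., a_n) of N = m / 60**n, assuming m >= 0."""
    a0, rem = divmod(m, 60 ** n)
    digits = []
    for _ in range(n):
        rem *= 60
        d, rem = divmod(rem, 60 ** n)
        digits.append(d)
    return a0, digits

print(base60_digits(7, 1))         # (0, [7]): the number 7/60
print(base60_digits(175371, 2))    # (48, [42, 51])
```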
$$ \forall n,a \in \mathbb{N},\quad 60 \nmid n \lor a = 0 $$ $$ \frac {n} {60^a} $$ Example: $$ \frac {175371} {60^2} $$
Continuity, Differentiability and Analyticity: Relations between $f(z)$ and $\overline{f(\bar{z})}$ I know that if $f(z)$ is analytic in $D$ then $\overline{f(\bar{z})}$ is analytic in $\overline{D}$. If $f(z)$ is continuous at $z_{0}$ then $\overline{f(z)}$ is continuous. Is it true that: $f(z)$ is continuous at $z_{0}$ $\Leftrightarrow$ $\overline{f(\bar{z})}$ is continuous? And $f(z)$ is differentiable at $z_{0}$ $\Leftrightarrow$ $\overline{f(\bar{z})}$ is differentiable? This question was in the final exam: Prove that if $f(z)$ is continuous at a point $z_{0}$, then $g(z)=\overline{f(\bar{z})}$ is also continuous. Is it true that differentiability of $f$ at $z_{0}$ implies differentiability of $g$? Thank you...
If $f(z)$ is continuous at $z_0$, $\overline{f(\overline z)}$ is not necessarily continuous at $z_0$ : Let $z_0 = ib \in \Bbb C, b \in \Bbb R \setminus \{0\}$. Consider $f(z) = \frac{1}{z+ib}$ if $z \neq -ib$ and $f(-ib) = \omega \in \Bbb C$. $f(z)$ is continuous at $z = ib = z_0$ but is not continuous at $z = -ib = \overline {z_0}$ because $\lim \limits_{z \rightarrow -ib} \lvert f(z) \rvert = \lim \limits_{z \rightarrow -ib} \lvert\frac{1}{z + ib}\rvert = + \infty$. Similarly, $\overline{f(z)}$ is not continuous at $\overline {z_0}$ because $\lim \limits_{z \rightarrow -ib} \lvert f(z) \rvert = \lim \limits_{z \rightarrow -ib} \lvert \overline{f(z)} \rvert = + \infty$, i.e. $\overline{f(\overline z)}$ is not continuous at $z_0$ even though $f(z)$ is. If $f(z)$ is complex-differentiable at $z_0$, then $\overline{f(z)}$ is usually not complex-differentiable at $z_0$ ($\iff$ $\overline{f(\overline z)}$ is usually not complex-differentiable at $\overline{z_0}$). A simple example is $f(z) = z$. It is differentiable but $\overline{f(z)} = \overline z$ is not (look at the Cauchy-Riemann equations to see it). Obviously, $f(z)$ is continuous at $z_0$ $\iff$ $\overline{f(z)}$ is continuous at $z_0$ ($\iff$ $\overline{f(\overline z)}$ is continuous at $\overline{z_0}$). Indeed, if you write $f(z) = f(x + iy) = u(x, y) + iv(x, y)$, if $f$ is continuous, both $u$ and $v$ are continuous so the function $\overline{f(z)} = u(x, y) - iv(x, y)$ is also continuous and conversely. Finally, $f(z)$ is complex-differentiable at $z_0$ and on an open neighborhood of $z_0$ $\iff$ $f(z)$ is analytic on a neighborhood of $z_0$ $\iff$ $\overline{f(\overline z)}$ is analytic on a neighborhood of $z_0$ $\iff$ $\overline{f(\overline z)}$ is complex-differentiable at $z_0$ and on an open neighborhood of $z_0$, but you seem to already know that.
Take $f(z)=\frac{1}{z+i}$ which is continuous at $z=i$ but $\overline{f(\bar z)}=\frac{1}{\overline{\bar z+i}}=\frac{1}{z-i}$ is not continuous at $z=i$. Similarly you can check for differentiability of $f(z)$. However, you may like to know the well-known result $f(z)=\overline{f(\bar z)}$ if $f(x)$ is real. (Reflection principle)
Why does the derivative graph of a curve look linear and not curvy? Why is the derivative graph of $y=x^2$ linear and not some sort of curve? I know that $\frac{d}{dx}x^2=2x$, but I am not talking about computing it algebraically; I mean conceptually, thinking in terms of how the slope of the tangent changes. When I look at the graph, why is the slope of $x^2$ changing at a linear rate? Alternatively, why is the derivative of $y=\ln(x)$ some kind of curve and not linear? I assume the rate of "curviness" of $x^2$ is different from that of $\ln(x)$, which gets flat in a hurry. (Do the 2nd derivative and concavity explain this? The rate of change of the rate of change?) My last example in the scan below summarizes my question. Why is it one of those, and not the other 3? All 4 start with a slope of 0, then 1, then back to 0. What is the connection, graphically? Thank you!
It's easy to visually see that the derivative of $x^2$ is $2x$, by not thinking in terms of how the slope changes, but plotting the slope of the tangents at points and seeing. As to 'why' it is $2x$... It is not clear what kind of answer you are looking for. I interpret your wish as wanting a more ontological answer. Think about two consecutive square numbers. Well, $(x+1)^2 =x^2+2x+1$. So when we add one to $x$, we add $2x+1$ to $f(x)$. If we add $2$ to $x$, we add $4x+4$ to $f(x)$. That is, when we add some $c$ to $x$, we add $2cx + c^2$ to $f(x)$. This gives us a slope term of $2x+c$. But for $x$ much larger than $c$ this is approximately $2x$. Now, when we take the tangent on the $x^2$ curve, we are making $c$ as small as possible to get a more accurate rate of change at $x$, and this can (at least in my head) be thought of as making the tangent line intersect the $x^2$ curve as much as possible. And assuming all lines have the same width (a Euclidean assumption), we get the largest area of intersection when $c$ is smallest, because this results in the part of the tangent line leaving the $x^2$ curve having the smallest area possible. That is because a line is defined as having constant width, and a shortest $c$ implies the shortest possible length of the tangent line that has left the $x^2$ curve between $x$ and $x+c$. So the area of the tangent that has left the $x^2$ curve in this zone is smallest, whilst the rest of the tangent line that is not intersecting the $x^2$ curve has infinite area no matter what $c$ is... hence my interpretation of taking a tangent at $x$ on $x^2$ is defined as increasing the intersection between the tangent line and the curve as much as possible about $x$ (of course this wouldn't work for all curves, but it would for $x^2$). Anyway, in pure mathematics we assume that the smallest $c$ is in fact small enough to make $2x+c = 2x$, ignoring other physical variables. In some sense, the slope being $2x$ for $f(x)=x^2$ is a consequence of our chosen deconstructions which give us nice aesthetic answers.
What you're attempting to reason out is why the derivative of a graph $f(x)$ is linear or non-linear. If you do a simple visual test: say the first segment of your reference graph is concave up with positive slope. The positive slope tells you that the graph of the derivative will be in the positive region. The concave-up quality of the initial part of the graph means that the derivative is increasing. However, the question is: at what rate is the rate of change increasing? This gives that the graph of the derivative is concave up because the original function is increasing.
Disproving the proposition that $a_n = (-1)^nn$ has a limit. Is the following proof correct? - Show that the sequence $a_n = (-1)^nn,\forall n\in\{1,2,3,...\}$ does not have a limit. Proof. Assume on the contrary that $\lim a_n = L$; consequently there exists an $N\in\mathbf{R^+}$ such that, given any $n\in\mathbf{Z^+}$, $|(-1)^nn-L|<|L|$ whenever $n>N$. The Archimedean property guarantees that such an $n$ exists; consequently for some $n>N$ we have $|(-1)^nn-L|<|L|$, but $|(-1)^nn-L| = |(-1)^{n+1}(n+L)| = |(-1)^{n+1}|\cdot|n+L|$, implying that $|n+L|<|L|$, resulting in a contradiction. $\blacksquare$
The proof is missing $\varepsilon$ if you intend to use the $\varepsilon$-$N$ definition of the limit for sequences. If $(a_n)$ had a limit $L<+\infty$ then so would $(b_n)$ where $b_n:=|a_n|$. This follows from the triangle inequality $$|b_n-|L||=||a_n|-|L||\leqslant |a_n-L|<\varepsilon$$ whenever $n\geqslant N$ for some large enough $N$. But $\lim_nb_n=\lim_n |a_n|=\lim_nn=+\infty$.
If $n$ is even, we get $$a_n=n$$ and this tends to infinity as $n$ tends to infinity; if $n$ is odd, then we get $$a_n=-n$$ and this tends to $-\infty$ as $n$ tends to infinity, so no limit exists.
Why can we use mathematical induction to prove that $2^n \ge n^2$ for $ n \ge 5$ Why can we use mathematical induction to prove that $2^n \ge n^2$ for $ n \ge 5$ Normally, the proof via mathematical induction starts with $n =1$. Why does it still hold, when we start with other numbers?
If you insist, you can define $$ n = m+4 $$ and begin with $m=1$.
Multiplying $$2^k\geq k^2$$ by $2$, we get $$2^{k+1}\geq 2k^2$$ Now you have to prove that $$2k^2\geq (k+1)^2$$
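To finish that last step for the relevant range (in fact for all $k \ge 3$, hence certainly for $k \ge 5$):
$$2k^2 - (k+1)^2 = k^2 - 2k - 1 = (k-1)^2 - 2 \ge (3-1)^2 - 2 = 2 > 0.$$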
Can this integral be expressed in elementary functions? I have been trying to find the length of an arc of an ellipse and I have been stuck with this integral for an entire day: $$\int_{0}^{x} \sqrt{a^2\cos^2t+b^2\sin^2t}\, dt$$ And my question is: can this integral be expressed in terms of elementary functions? If not, does this integral have a special function or something?
The answer to the question in the title is No, the integral cannot be expressed in terms of elementary functions. Elementary functions are a fairly restricted group (like polynomials, the exponential function and the natural logarithm). But you go on to ask whether it can be expressed in terms of special functions or something. The short answer is that it is easy to get arc lengths along an ellipse in terms of an incomplete elliptic function of the second kind. But you need software like Mathematica (where it is EllipticE[]) to be able to use it freely. The incomplete elliptic function (the complete/incomplete distinction is about whether the function deals with the complete curve or only an arc) is certainly a "special function or something". Whether it is a special function is more debatable. Personally, I don't think I would refer to this function as a special function. Wikipedia disagrees, so does the NIST bible. But the term "special function" is often used for a relatively small group of functions which were mainly important in physics and applied maths and intensively studied in the century or more before WW2 (usually as functions of a complex variable). In any case the term "special function" is starting to get archaic, and I don't think it is a helpful one today. Long ago they were special in the ordinary English language sense, because they had been extensively studied and people knew how to deal with them, although they required more expertise than the "elementary" functions. [Whereas many other functions were fairly intractable except by inequalities, bounds, crude approximations etc.] Today I would rather call something like the functions used for ellipse arc length a "named function". The reason is that it is possible to use it (and indeed many of the old special functions) with much less special expertise. What often matters in practice today is not so much whether a function is an "elementary" or "special" function, but whether it is easy to calculate with the function using software like Mathematica. If you want more information on how to use the complete/incomplete elliptic functions, I suggest googling. You can find plenty of tutorial material about ellipse arc lengths and the functions used to deal with them. Eg here (about halfway through he turns to elliptic integrals). There is also a fair amount of material in Wikipedia scattered over several articles. Added a little later Incidentally, do not think of math software as just a way of getting numerical answers or plotting functions with known parameters (ie $\sin 2x$ rather than $\sin kx$). It is much more powerful than that. For many functions you can easily get indefinite integrals, power series etc. I remember spending a significant chunk of my undergraduate years at Cambridge University many decades ago learning endless tricks for integrating functions. That knowledge is of little use to me now. Mathematica easily outperforms most academics on this site at integration (and certainly me, but notably not Jack d'Aurizio, whose expertise with integrals often delights me). In general, you can cheerfully manipulate its named functions just as easily as if they were elementary functions. I mention Mathematica just because it is widely available and good for integration. But there is a good deal of both commercial and open source software available.
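For completeness, here is one way the arc length in the question can be evaluated with scipy's incomplete elliptic integral of the second kind, checked against direct numerical quadrature (the semi-axes below are an arbitrary choice, with $a \ge b$ assumed):

```python
import numpy as np
from scipy.special import ellipeinc
from scipy.integrate import quad

a, b = 3.0, 2.0              # arbitrary semi-axes, a >= b
m = 1 - (b / a) ** 2         # scipy's parameter m = k**2

def arc_length(x):
    # integral_0^x sqrt(a^2 cos^2 t + b^2 sin^2 t) dt = a * E(x | m)
    return a * ellipeinc(x, m)

x = 1.0
numeric, _ = quad(lambda t: np.hypot(a * np.cos(t), b * np.sin(t)), 0, x)
print(arc_length(x), numeric)    # the two values should agree
```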
How do we know that Cantor's diagonalization isn't creating a different decimal of the same number? Edit: As the comments mention, I misunderstood how to use the diagonalization method. However, the issue I'm trying to understand is a potential problem with diagonalization and it is addressed in the answers so I will not delete the question. Cantor's diagonalization is a way of creating a unique number given a countable list of all reals. I can see how Cantor's method creates a unique decimal string but I'm unsure if this decimal string corresponds to a unique number. Essentially this is because $1 = 0.\overline{999}$. Consider the list which contains all real numbers between $0$ and $1$: $0.5000 \mathord\ldots \\ 0.4586 \mathord\ldots \\ 0.3912 \mathord\ldots \\ 0.3195 \mathord\ldots \\ 0.7719 \mathord\ldots\\ \vdots$ The start of this list produces a new number which to four decimal places is: $0.4999 \mathord\ldots$ But $0.5$ was the first number and $0.4\overline{999} = 0.5$ so this hasn't produced a unique number. Of course my list is very contrived, I admit that it's hard to imagine a list of the reals where numbers would align nicely to give a problem like this (since some numbers have no nines). However, I can't see a good reason why such an enumeration of numbers would be impossible.
This is in fact a potential problem if the proof is carelessly stated, but it’s easily avoided: if the $n$-th decimal digit of the $n$-th number on the list is $7$, we replace it by $6$, and if it’s not $7$, we replace it by $7$. The only numbers in $(0,1)$ with two decimal representations are those with one representation ending in an infinite string of nines and the other in an infinite string of zeroes, and this version of the argument clearly doesn’t produce a number of either of those forms.
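The digit-replacement rule is easy to demonstrate; here is a toy Python sketch (using the list from the question, padded with extra rows; all names are my own) that builds a number differing from every listed number and containing only 6s and 7s:

```python
def diagonal_number(listing, digits=10):
    # the n-th digit of the new number differs from the n-th digit of
    # the n-th listed number; using only 6s and 7s dodges the
    # 0.999... = 1.000... ambiguity
    out = "0."
    for n in range(digits):
        d = listing[n][n + 2]        # skip the "0." prefix
        out += "6" if d == "7" else "7"
    return out

listing = ["0.5000000000", "0.4586000000", "0.3912000000",
           "0.3195000000", "0.7719000000"] + ["0.0000000000"] * 5
print(diagonal_number(listing))      # 0.7777777777 for this list
```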
Cantor's Diagonal proof was not about numbers - in fact, it was specifically designed to prove the proposition "some infinite sets can't be counted" without using numbers as the example set. (It was his second proof of the proposition, and the first was criticized because it did.) What Cantor actually used were infinite-length strings that combine only two different characters. He used "m" and "w." But modern high school and middle school students have experience with infinite decimal expansions, and it is easier to teach new concepts to them in terms of what is already familiar. The point is that there is no ambiguity in saying that two strings are different if they have different characters in at least one position. Another point that is taught poorly is that the proof doesn't have to be a proof by contradiction. Many people have a justifiable objection to assuming statement A, and then proving that not(A) follows from it. All it really proves is that there is something wrong in the circle that leads from A to not(A), not that it has to be the assumption of A. Which is why so many people try to invalidate one of the steps - what you claim here is one of the ways they try (and yes, it can be gotten around easily, as stated in other answers). Cantor only assumes that you have a countably infinite set of some of these strings. Diagonalization proves that there is a string not in your set. Since "If A then B" is logically equivalent to "If not(B) then not(A)," this proves that if you have a set of all such strings, then it can't be counted. Elegant, no?
Continuous without restriction or Continuous with discrete interpretation I have a question on which my teacher and I disagree. Basically, it is question number 7d, and the instructions for the number 7 questions are as follows: Determine whether each of the following function representations is discrete, continuous without restriction, or continuous with discrete interpretation. And the graph for question 7d is: The label on top says Total Public Expenditures since 1960. OK, so my teacher says it's continuous with discrete interpretation, while I say it's continuous. I say it's continuous because dollars can be any value, for example 12.5 billion dollars or 16.21313131 billion dollars; it doesn't matter. I don't understand why the teacher is saying continuous with discrete interpretation. Help would be appreciated! If you don't understand "continuous with discrete interpretation", then my previous question explains it: https://math.stackexchange.com/questions/1929805/what-is-discrete-with-continuous-interpretation If you don't know what "continuous with no restriction" means, it basically just means continuous.
I want to extend my comment: After reading the definitions in the referenced question, I conclude that your teacher wants to say that "continuous with discrete interpretation" means it only makes sense to speak about the public expenditures in 1961, 1962, 1963, .... So $f$ is only meaningful at $1,2,3,\ldots$ (years after 1960), and not e.g. at $1.2345$ or at $\sqrt{2}$. But I am not sure if I am of the same opinion. Even if the data were supplied in a table (year, expenditures), I think one can speak about the expenditures at an arbitrary point in time. A clearer example is something like $$f:A\to \mathbb{R}$$ $$f(x)=y$$ where $x$ is the number of inhabitants of a city and $f(x)$ is the expenditure for public transport. Clearly there is no city with $1234567.8$ inhabitants; nevertheless it makes sense to set $A=\mathbb{R^+}$ in such a model without any problems.
What makes your counterargument wrong in this case is simply the fact that money amounts always have a minimum possible value - in this case, pennies. Any fractional amount of billions-of-dollars which specifies a fraction of a penny is simply not a real amount of money. The curve in the diagram has to be a continuous curve interpolated through a discrete point-set of actual financial data.
How to find the order of the ring I have some solved examples about the order of an element in a ring. In $\mathbb Z_{5}$, find the order of the element $2$ in the ring. The identity is $1$. $2^1 ≠ 1$ $2^2 ≠ 1$ $2^3 ≠ 1$ $2^4 = 16 = 1$ <---------- This is the part that I don't understand
First of all, mention in the title that you would like to calculate the order of an 'element' of a ring. In $Z_5$, multiplication is defined by multiplying two elements as integers and then reducing the result modulo $5$, which means that you divide the result by $5$ and take the remainder. An example is $3\cdot 4=12$, which gives remainder $2$ on dividing by $5$; hence in $Z_5$, $3\cdot 4=2$. Now, in the last line of your question, the correct equation is $2^4=16\equiv 1 \pmod 5$, because $16$ yields remainder $1$ when divided by $5$.
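If it helps, the whole computation can be automated; this small sketch (function name my own, assuming the element is a unit) repeatedly multiplies and reduces modulo $n$:

```python
def mult_order(a, n):
    """Multiplicative order of a modulo n (assumes gcd(a, n) == 1)."""
    k, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        k += 1
    return k

print(mult_order(2, 5))   # 4, since 2^4 = 16 ≡ 1 (mod 5)
```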
I think this is a matter of the rigor of definitions. Strictly speaking, $16$ is NOT an element of $\mathbb{Z}_5$, which as a set is $\{0,1,2,3,4\}$. Strictly speaking, we have a surjective homomorphism of rings $\pi:\mathbb{Z}\rightarrow\mathbb{Z}_5$ sending a number to its remainder modulo $5$, i.e., when dividing by $5$: $\pi(n):=r$ where $n=5q+r$. So, strictly speaking we have $\pi(2^4)=\pi(16)=\pi(5\cdot 3+1)=1$ (in $\mathbb{Z}_5$), and also $2^4=\pi(2)^4=1$ in $\mathbb{Z}_5$ (since $\pi$ is a homomorphism such that $\pi(2)=2$), which we can write also as $2^4\equiv 16\equiv 1\pmod{5}$, where now $\equiv\pmod{5}$ is an equivalence relation for $\mathbb{Z}$. Nevertheless it is usual, but an abuse of notation, to write $16=1$ in $\mathbb{Z}_5$.