Reconstructing a ring from a stack of 2D images (radially aligned) I have a stack of images (about 180 of them) and there are 2 black dots on every single image. Hence, the positions $(x,y)$ of the two dots are provided initially. The dimensions of all these images are fixed and constant. The radial 'distance' between the images is about $1^\circ$, with the origin at the center of every single 2D image. Since the images are radially aligned, the output would be a possible ring shape in 3D. The dotted red circle and dotted purple circle are there to give a stronger sense of the 3D space and the arrangement of the 2D images (like a fan). They also indicate that each slice is about $1^\circ$ apart, and a legend gives you an idea where the z-axis should be. Now my question is: with the provided $(x,y)$ that appear in the 2D image, how do you get the corresponding $(x,y,z)$ in 3D space, knowing that each image is about $1^\circ$ apart? I know that every point on a sphere can be described by the following equations: $x = r \sin\theta \cos\phi$, $y = r \sin\theta \sin\phi$, $z = r \cos\theta$. However, I don't know how to connect those equations to my problem, as I am rather weak in math as you can see by now. :( Thanks!!
If I understand the question correctly, your $180$ images are all taken in planes that contain one common axis and are rotated around that axis in increments of $1^\circ$. Your axis labeling is somewhat confusing because you use $x$ and $y$ both for the 2D coordinates and for the 3D coordinates, even though these stand in different relations to each other depending on the plane of the image. So I'll use a different, consistent labeling of the axes and I hope you can apply the results to your situation. Let's say the image planes all contain the $z$ axis, and let's label the axes within the 2D images with $u$ and $v$, where the $v$ axis coincides with the $z$ axis and the $u$ axis is orthogonal to it. Then the orientation of the image plane can be described by the (signed) angle $\phi$ between the $u$ axis and the $x$ axis (which changes in increments of $1^\circ$ from one plane to the next), and the relationship between the 2D coordinates $u,v$ and the 3D coordinates $x,y,z$ is $$ \begin{eqnarray} x&=&u\cos\phi\\ y&=&u\sin\phi\\ z&=&v\;. \end{eqnarray} $$ This only answers your question (as I understand it) about the relationship between the coordinates. How to reconstruct the ring from the set of points is another question; you could do a least-squares fit for that.
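The three formulas can be wrapped in a small helper; here is a minimal sketch in Python (the function name and the degrees-to-radians convention are my own):

```python
import math

def image_to_3d(u, v, phi_deg):
    """Map 2D slice coordinates (u, v) to 3D coordinates (x, y, z),
    where phi_deg is the rotation of the slice about the z axis.
    The v axis of every slice coincides with the z axis."""
    phi = math.radians(phi_deg)
    return (u * math.cos(phi), u * math.sin(phi), v)

# Slice 0 lies in the x-z plane; slice 17 is rotated 17 degrees from it.
print(image_to_3d(2.0, 5.0, 0))   # (2.0, 0.0, 5.0)
print(image_to_3d(2.0, 5.0, 17))
```

Feeding each dot's $(u,v)$ together with its slice index (times $1^\circ$) through this function gives the 3D point cloud on which the circle fit can then be performed.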
Behaviour of a holomorphic function near a pole Apparently, the following statement is true: "Let $D\subseteq \mathbb{C}$ be open and connected and $f:D\setminus \{a\}\longrightarrow \mathbb{C}$ holomorphic with a pole of arbitrary order at $a\in D$. For any $\epsilon > 0$ with $B_\epsilon(a)\setminus\{a\} \subseteq D$, there exists $r > 0$ so that $\{z \in \mathbb{C}: |z| > r\} \subseteq f(B_\epsilon(a)\setminus\{a\})$." So far, I have been unsuccessful in proving this. I know that $f(B_\epsilon(a)\setminus\{a\})$ must be open and connected (open mapping theorem), as well as that for any $r > 0$ there exists an $x \in B_\epsilon(a)$ so that $|f(x)| > r$ (because $\lim_{z\rightarrow a}|f(z)| = \infty$), but I don't see how this would imply the statement in question. Any help would be appreciated.
Define $g$ on a punctured neighborhood of $a$ by $g(z)=\frac{1}{f(z)}$. Then $\displaystyle{\lim_{z\to a}g(z)=0}$, so the singularity of $g$ at $a$ is removable, and defining $g(a)=0$ gives an analytic function on a neighborhood of $a$. By the open mapping theorem, for each neighborhood $U$ of $a$ in the domain of $g$, there exists $\delta>0$ such that $\{z\in\mathbb{C}:|z|\lt \delta\}\subseteq g(U)$. Now let $r=\frac{1}{\delta}$.
A Universal Property Defining Connected Sums I once read (I believe in Ravi Vakil's notes on Algebraic Geometry) that the connected sum of a pair of surfaces can be defined in terms of a universal property. This gives a slick proof that the connected sum is unique up to homeomorphism. Unfortunately, I am unable to find where exactly I read this or remember what exactly the universal property was; if anyone could help me out in either regard it would be much appreciated.
As already noted in the comments, there is an obvious universal property (since the connected sum is a special pushout) once the embeddings of the discs have been chosen. For different embeddings, there exists some homeomorphism. There are lots of them, but even abstract nonsense cannot replace the nontrivial proof of existence. But since there is no canonical homeomorphism, I strongly doubt that there is any universal property which does not depend on the embeddings of the discs.
Different ways to prove there are infinitely many primes? This is just a curiosity. I have come across multiple proofs of the fact that there are infinitely many primes, some of them were quite trivial, but some others were really, really fancy. I'll show you what proofs I have and I'd like to know more because I think it's cool to see that something can be proved in so many different ways. Proof 1 : Euclid's. If there are finitely many primes then $p_1 p_2 ... p_n + 1$ is coprime to all of these guys. This is the basic idea in most proofs : generate a number coprime to all previous primes. Proof 2 : Consider the sequence $a_n = 2^{2^n} + 1$. We have that $$ 2^{2^n}-1 = (2^{2^1} - 1) \prod_{m=1}^{n-1} (2^{2^m}+1), $$ so that for $m < n$, $(2^{2^m} + 1, 2^{2^n} + 1) \, | \, (2^{2^n}-1, 2^{2^n} +1) = 1$. Since we have an infinite sequence of numbers coprime in pairs, at least one prime number must divide each one of them and they are all distinct primes, thus giving an infinity of them. Proof 3 : (Note : I particularly like this one.) Define a topology on $\mathbb Z$ in the following way : a set $\mathscr N$ of integers is said to be open if for every $n \in \mathscr N$ there is an arithmetic progression $\mathscr A$ such that $n \in \mathscr A \subseteq \mathscr N$. This can easily be proven to define a topology on $\mathbb Z$. Note that under this topology arithmetic progressions are open and closed. Supposing there are finitely many primes, notice that this means that the set $$ \mathscr U \,\,\,\, \overset{def}{=} \,\,\, \bigcup_{p} \,\, p \mathbb Z $$ should be open and closed, but by the fundamental theorem of arithmetic, its complement in $\mathbb Z$ is the set $\{ -1, 1 \}$, which is not open, thus giving a contradiction. Proof 4 : Let $a,b$ be coprime integers and $c > 0$. There exists $x$ such that $(a+bx, c) = 1$. To see this, choose $x$ such that $a+bx \not\equiv 0 \, \mathrm{mod}$ $p_i$ for all primes $p_i$ dividing $c$. 
If $a \equiv 0 \, \mathrm{mod}$ $p_i$, since $a$ and $b$ are coprime, $b$ has an inverse mod $p_i$, call it $\overline{b}$. Choosing $x \equiv \overline{b} \, \mathrm{mod}$ $p_i$, you are done. If $a \not\equiv 0 \, \mathrm{mod}$ $p_i$, then choosing $x \equiv 0 \, \mathrm{mod}$ $p_i$ works fine. Find $x$ using the Chinese Remainder Theorem. Now assuming there are finitely many primes, let $c$ be the product of all of them. Our construction generates an integer coprime to $c$, giving a contradiction to the fundamental theorem of arithmetic. Proof 5 : Dirichlet's theorem on arithmetic progressions (just so that you don't bring it up as an example...) Do you have any other nice proofs?
Maybe you want to use the sum of the reciprocals of the prime numbers. The argument that this series diverges can be found in one of Apostol's exercises.
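Proof 1's construction is easy to check by machine; here is a small sketch (the helper name is mine). Note the witness need not itself be prime, only coprime to every prime on the list:

```python
from math import gcd

def euclid_witness(primes):
    """Given a finite list of primes, return p1*p2*...*pn + 1,
    which is coprime to every prime in the list."""
    prod = 1
    for p in primes:
        prod *= p
    return prod + 1

w = euclid_witness([2, 3, 5, 7, 11, 13])
assert all(gcd(w, p) == 1 for p in [2, 3, 5, 7, 11, 13])
print(w)  # 30031 = 59 * 509: coprime to the list, yet not itself prime
```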
Creating smooth curves with $f(0) = 0$ and $f(1) = 1$ I would like to create smooth curves which have $f(0) = 0$ and $f(1) = 1$. What I would like to create are curves similar to the gamma curves known from CRT monitors. I don't know any better way to describe it; in computer graphics I used them a lot, but in math I don't know what kind of curves they are. They are defined by the two endpoints and a 3rd point. What I am looking for is a similar curve that can be described easily in math, for example with a simple exponential function or power function. Can you tell me what kind of curves these ones are (just by looking at the image below), and how can I create a function which fits a curve using the 2 endpoints and a value in the middle? So what I am looking for is some equation or algorithm that takes a midpoint value $f(0.5) = x$ and returns me $a, b$ and $c$, for example if the curve can be parameterized like this (just ideas): $a \exp (bt) + c$ or $a b^t + c$ Update: yes, $x^t$ works like this, but it gets really sharp when $t < 0.1$. I would prefer something with a smooth derivative at all points. That's why I had exponential functions in mind. (I use smooth here as "not steep".)
It might be worth doing some research into Finite Element shape functions, as the basis of these functions is very similar to the problem you are trying to solve here. My experience with shape functions is that the equations are usually identified through trial and error, although there are approaches that can ease you through the process.
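For what it's worth, the "gamma curves" the question mentions are typically the power curves $f(t)=t^{\gamma}$ the asker's update already considered: they automatically satisfy $f(0)=0$ and $f(1)=1$, and the midpoint value determines $\gamma$. A minimal sketch (function name is mine; this does not address the steepness complaint in the update):

```python
import math

def fit_gamma_curve(mid):
    """Return f(t) = t**g with f(0) = 0, f(1) = 1 and f(0.5) = mid.
    Solving 0.5**g = mid gives g = log(mid) / log(0.5).
    Requires 0 < mid < 1."""
    g = math.log(mid) / math.log(0.5)
    return lambda t: t ** g

f = fit_gamma_curve(0.7)
print(f(0.0), f(1.0))  # 0.0 1.0
print(f(0.5))          # 0.7 (up to rounding)
```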
Number of fields with characteristic of 3 and less than 10000 elements? It's exam time again over here and I'm currently doing some last preparations for our math exam that is up in two weeks. I previously thought that I was prepared quite well since I've gone through a load of old exams and managed to solve them correctly. However, I've just found a strange question and I'm completely clueless on how to solve it: How many finite fields with a characteristic of 3 and less than 10000 elements are there? I can only think of $\mathbb{Z}_3$ (rather trivial), but I'm completely clueless on how to determine the others (that is of course, if this question isn't some kind of joke question and the answer really is "1").
It's not a joke question. Presumably, in the year that exam was given, the class was shown a theorem completely describing all the finite fields. If they didn't do that theorem this year, you don't have to worry about that question (but you'd better make sure!). It's not the kind of thing you'd be expected to answer on the spot if it wasn't covered in class.
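For reference, the theorem in question says there is exactly one finite field of order $p^n$ for every prime power $p^n$, and no others; characteristic $3$ means the order is a power of $3$. Assuming that classification, counting the fields below $10000$ is a quick loop:

```python
orders = []
q = 3
while q < 10000:
    orders.append(q)  # there is exactly one field F_q of this order
    q *= 3

print(orders)       # [3, 9, 27, 81, 243, 729, 2187, 6561]
print(len(orders))  # 8
```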
Does Permuting the Rows of a Matrix $A$ Change the Absolute Row Sum of $A^{-1}$? For $A = (a_{ij})$ an $n \times n$ matrix, the absolute row sum of $A$ is $$ \|A\|_{\infty} = \max_{1 \leq i \leq n} \sum_{j=1}^{n} |a_{ij}|. $$ Let $A$ be a given $n \times n$ matrix and let $A_0$ be a matrix obtained by permuting the rows of $A$. Do we always have $$ \|A^{-1}\|_{\infty} = \|A_{0}^{-1}\|_{\infty}? $$
Exchanging two rows of $A$ amounts to multiplying $A$ by an elementary matrix on the left, $B=EA$; so the inverse of $B$ is $A^{-1}E^{-1}$, and the inverse of the elementary matrix corresponding to exchanging two rows is itself. Multiplying on the right by $E$ corresponds to permuting two columns of $A^{-1}$. Thus, the inverse of the matrix we get from $A$ by exchanging two rows is the inverse of $A$ with two columns exchanged. Exchanging two columns of a matrix $M$ does not change the value of $\lVert M\rVert_{\infty}$; thus, $\lVert (EA)^{-1}\rVert_{\infty} = \lVert A^{-1}\rVert_{\infty}$. Since any permutation of the rows of $A$ can be obtained as a sequence of row exchanges, the conclusion follows.
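A quick exact spot-check of this argument over the rationals, for a $2\times 2$ example (helper names are mine):

```python
from fractions import Fraction as F

def inv2(A):
    """Inverse of a 2x2 matrix over the rationals."""
    (a, b), (c, d) = A
    det = a * d - b * c
    return [[ d / det, -b / det],
            [-c / det,  a / det]]

def norm_inf(A):
    """Maximum absolute row sum."""
    return max(sum(abs(x) for x in row) for row in A)

A  = [[F(2), F(1)], [F(5), F(3)]]
A0 = [A[1], A[0]]  # rows of A swapped
assert norm_inf(inv2(A)) == norm_inf(inv2(A0))
```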
Help with solving an integral I am looking for help with finding the integral in a given equation $$ y_2(t) = (1 - 2t^2)\int \frac{e^{\int -2t\, dt}}{(1-2t^2)^2}\, dt$$ Anyone able to help? Thanks in advance! UPDATE: I got the above from trying to solve the question below. Solve, using reduction of order, the following: $$y'' - 2ty' + 4y = 0,$$ where $f(t) = 1-2t^2$ is a solution.
There is no elementary antiderivative for this function. Neither Maple nor Mathematica can find a formula for it.
Union of two independent probabilistic events I have the following question: Suppose we have two independent events whose probabilities are the following: $P(A)=0.4$ and $P(B)=0.7$. We are asked to find $P(A \cap B)$ from probability theory. I know that $P(A \cup B)=P(A)+P(B)-P(A \cap B)$. But surely the last term is equal to zero, so the result should be $P(A)+P(B)$, but that is more than $1$ (to be exact, it is $1.1$). Please help me see where I am wrong.
If the events $A$ and $B$ are independent, then $P(A \cap B) = P(A) P(B)$ and not necessarily $0$. You are confusing independent with mutually exclusive. For instance, you toss two coins. What is the probability that both show heads? It is $\frac{1}{2} \times \frac{1}{2}$ isn't it? Note that the coin tosses are independent of each other. Now you toss only one coin, what is the probability that it shows both heads and tails?
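A quick simulation makes the distinction concrete (a sketch; the sample size and seed are arbitrary):

```python
import random

random.seed(1)
n = 200_000
hits_a = hits_b = hits_ab = 0
for _ in range(n):
    a = random.random() < 0.4  # event A, independent of B
    b = random.random() < 0.7  # event B
    hits_a += a
    hits_b += b
    hits_ab += a and b

p_ab = hits_ab / n
print(p_ab)                            # close to 0.4 * 0.7 = 0.28, not 0
print(hits_a / n + hits_b / n - p_ab)  # P(A u B), close to 0.82
```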
Mapping of a cube to itself From an exam textbook I am given the following problem to solve: in space, how many lines are there such that rotating the cube by $180^\circ$ about the line maps the cube to itself? I have thought about this problem many times. I thought the relevant lines should be axes of symmetry, for which the answer would be $4$, but $4$ is not among the answers, so I have not found the solution yet. Please help me make this clear.
Ross correctly enumerated the possible lines. You should take into account that when you rotate the cube about a body diagonal, you have to rotate an integer multiple of 120 degrees in order to get the cube to map back to itself. So for the purposes of this question the body diagonals don't count. 9 is the correct answer. Perhaps the easiest way to convince you of this is that there are 3 edges meeting at each corner. Rotation about a 3D-diagonal permutes these 3 edges cyclically, and is therefore of order 3 as a symmetry. Yet another way of seeing this is that if we view the cube as a subset $[0,1]\times[0,1]\times[0,1]\subset\mathbf{R}^3$, then the linear mapping $(x,y,z)\mapsto (y,z,x)$ keeps the opposite corners $(0,0,0)$ and $(1,1,1)$ fixed, obviously maps the cube back to itself, and as this mapping is orientation preserving (in $SO(3)$), it must be a rotation about this diagonal. As it is of order 3, the angle of rotation must be 120 degrees.
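The count of $9$ can also be verified by brute force: the $48$ signed permutation matrices are exactly the symmetries of the cube centered at the origin, the $24$ with determinant $+1$ are the rotations, and the half-turns are the non-identity rotations that square to the identity. A sketch:

```python
from itertools import permutations, product

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def det(A):
    return (A[0][0]*(A[1][1]*A[2][2] - A[1][2]*A[2][1])
          - A[0][1]*(A[1][0]*A[2][2] - A[1][2]*A[2][0])
          + A[0][2]*(A[1][0]*A[2][1] - A[1][1]*A[2][0]))

I = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]

# Build all 48 signed permutation matrices: row i has entry signs[i]
# in column perm[i] and zeros elsewhere.
symmetries = []
for perm in permutations(range(3)):
    for signs in product((1, -1), repeat=3):
        M = [[signs[i] if j == perm[i] else 0 for j in range(3)]
             for i in range(3)]
        symmetries.append(M)

rotations = [M for M in symmetries if det(M) == 1]
half_turns = [M for M in rotations if M != I and matmul(M, M) == I]
print(len(rotations), len(half_turns))  # 24 9
```

The nine half-turn axes come out as the $3$ face axes plus the $6$ edge axes, matching the answer.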
What's the meaning of algebraic data type? I'm reading a book about Haskell, a programming language, and I came across a construct called an "algebraic data type" that looks like data WeekDay = Mon | Tue | Wed | Thu | Fri | Sat | Sun That simply declares what the possible values for the type WeekDay are. My question is: what is the meaning of "algebraic data type" (for a mathematician), and how does that map to the programming language construct?
Think of an algebraic data type as a type composed of simpler types, where the allowable composition operators are AND (written $\cdot$, often referred to as product types) and OR (written $+$, referred to as union types or sum types). We also have the unit type $1$ (representing a null type) and the basic type $X$ (representing a type holding one piece of data - this could be of a primitive type, or another algebraic type). We also tend to use $2X$ to mean $X+X$ and $X^2$ to mean $X\cdot X$, etc. For example, the Haskell type data List a = Nil | Cons a (List a) tells you that the data type List a (a list of elements of type a) is either Nil, or it is the Cons of a basic type and another list. Algebraically, we could write $$L = 1 + X \cdot L$$ This isn't just pretty notation - it encodes useful information. We can rearrange to get $$L \cdot (1 - X) = 1$$ and hence $$L = \frac{1}{1-X} = 1 + X + X^2 + X^3 + \cdots$$ which tells us that a list is either empty ($1$), or it contains 1 element ($X$), or it contains 2 elements ($X^2$), or it contains 3 elements, or... For a more complicated example, consider the binary tree data type: data Tree a = Nil | Branch a (Tree a) (Tree a) Here a tree $T$ is either nil, or it is a Branch consisting of a piece of data and two other trees. Algebraically $$T = 1 + X\cdot T^2$$ which we can rearrange to give $$T = \frac{1}{2X} \left( 1 - \sqrt{1-4X} \right) = 1 + X + 2X^2 + 5X^3 + 14X^4 + 42X^5 + \cdots$$ where I have chosen the negative square root so that the equation makes sense (i.e. so that there are no negative powers of $X$, which are meaningless in this theory). This tells us that a binary tree can be nil ($1$), that there is one binary tree with one datum (i.e. the tree which is a branch containing two empty trees), that there are two binary trees with two datums (the second datum is either in the left or the right branch), that there are 5 trees containing three datums (you might like to draw them all) etc.
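The claim that those coefficients count trees can be checked directly: a recursive count of binary trees reproduces the series $1 + X + 2X^2 + 5X^3 + 14X^4 + 42X^5 + \cdots$ (the Catalan numbers). A small sketch:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def trees(n):
    """Number of binary trees holding n datums: a tree is Nil (n = 0),
    or one datum at the root plus two subtrees sharing the rest."""
    if n == 0:
        return 1
    return sum(trees(k) * trees(n - 1 - k) for k in range(n))

print([trees(n) for n in range(6)])  # [1, 1, 2, 5, 14, 42]
```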
Help me formalize this calculation I needed to find the number of five-digit numbers that are made of digits from $0,1,2,3,4,5$ and are divisible by 3. One of the proper methods can be: since $0+1+2+3+4+5 = 15$, we can pick out either $3$ or $0$ from this set. For picking out $0$ there are $5!$ numbers, and for picking out $3$ there are $5!$ numbers, $4!$ of which are 4-digit numbers, so the total number is $5!+5!-4! =216$. I tried a rough estimate before the above (correct) solution. I need your help as I think it can be formalized and used as a valid argument. There are $^6C_5\times5!=720$ total $5$-digit numbers (including $4$-digit numbers with digits from one to five). Roughly a third of them, i.e. $\approx 240$, should be divisible by three. Of these, roughly a tenth, $\approx 24$, should be $4$-digit, and hence the answer should be close to $\approx 216$. I thought my answer should be close plus or minus some correction as this was very rough. The initial set of numbers has only $2$ of its total $6$ numbers divisible by $3$, it is not uniform, and it does not contain all digits $0$-$9$, yet I get an exact number. How do I state this more formally? I need to know this as I use these rough calculations often. "Formal" would be an argument that would allow me to replace the "approximately equal to" symbols in the third paragraph by equality symbols.
Brian has already explained that an error in your reasoning happened to lead to the right result. Here's an attempt to fix the mistake and give a derivation of the correct result that has the "probabilistic" flavour of your initial estimate -- though the result could be argued to be closer to the correct solution in the first paragraph than to the initial estimate :-). In a sense, you argued probabilistically and disregarded the correlation between the two events of the number being divisible by $3$ and the number starting with $0$. These are correlated, since fewer of the numbers that are divisible by $3$ can start with $0$ (since half of them don't contain the $0$) whereas all of the ones that aren't can. Now what got you the right result was that you estimated, for the wrong reasons, that the probability of the number starting with $0$ was $1$ in $10$. The correct conditional probability, given that the number is divisible by $3$, is indeed $$\frac12\cdot\frac15+\frac12\cdot0=\frac1{10}\;,$$ where the factors $1/2$ are the probabilities of taking out a $0$ or a $3$, respectively, to get a set of digits with sum divisible by $3$, and $1/5$ and $0$ are the probabilities of a zero being the leading digit in those two cases, respectively.
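Both the exact count and the heuristic can be checked against brute force (a sketch; this interprets the problem, as the solution above does, as five distinct digits drawn from $\{0,\ldots,5\}$):

```python
from itertools import permutations

count = 0
for digits in permutations('012345', 5):
    if digits[0] == '0':
        continue                       # would be a 4-digit number
    if int(''.join(digits)) % 3 == 0:
        count += 1
print(count)  # 216
```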
Math without infinity Does math require a concept of infinity? For instance if I wanted to take the limit of $f(x)$ as $x \rightarrow \infty$, I could use the substitution $x=1/y$ and take the limit as $y\rightarrow 0^+$. Is there a statement that can be stated without the use of any concept of infinity but which unavoidably requires it to be proved?
Does math require an $\infty$? This assumes that all of math is somehow governed by a single set of universally agreed upon rules, such as whether infinity is a necessary concept or not. This is not the case. I might claim that math does not require anything, even though a mathematician requires many things (such as coffee and paper to turn into theorems, etc etc). But this is a sharp (like a sharp inequality) concept, and I don't want to run conversation off a valuable road. So instead I will claim the following: there are branches of math that rely on infinity, and other branches that do not. But most branches rely on infinity. So in this sense, I think that most of the mathematics that is practiced each day relies on a system of logic and a set of axioms that include infinities in various ways. Perhaps a different question that is easier to answer is - "Why does math have the concept of infinity?" To this, I have a really quick answer - because $\infty$ is useful. It lets you take more limits, allows more general rules to be set down, and allows greater play for fields like Topology and Analysis. And by the way - in your question you distinguish between $\lim _{x \to \infty} f(x)$ and $\lim _{y \to 0} f(\frac{1}{y})$. Just because we hide behind a thin curtain, i.e. pretending that $\lim_{y \to 0} \frac{1}{y}$ is just another name for infinity, does not mean that we are actually avoiding a conceptual infinity. So to conclude, I say that math does not require $\infty$. If somehow, no one imagined how big things get 'over there' or considered questions like How many functions are there from the integers to such and such set, math would still go on. But it's useful, and there's little reason to ignore its existence.
If a sequence of boundaries converges, do the spectrums of the enclosed regions also converge? A planar region will have associated to it a spectrum consisting of Dirichlet eigenvalues, or parameters $\lambda$ for which it is possible to solve the Dirichlet problem for the Laplacian operator, $$ \begin{cases} \Delta u + \lambda u = 0 \\ u|_{\partial R} = 0 \end{cases}$$ I'm wondering, if we have a sequence of boundaries $\partial R_n$ converging pointwise towards $\partial R$, then will the spectrums also converge? (I make the notion of convergence formal in the following manner: $\cap_{N=1}^\infty l(\cup_{n=N}^\infty\partial R_n)=\partial R$; $\cap_{N=1}^\infty l(\cup_{n=N}^\infty\mathrm{spec}(R_n))=\mathrm{spec}( R)$, where $ l(\cdot)$ denotes the set of accumulation points of a set and $\mathrm{spec}(\cdot)$ denotes the spectrum of a region.) One motivating pathological example is the sequence of boundaries, indexed by $n$, defined by the polar equations $r=1+\frac{1}{n}\sin(n^2\theta)$. The boundaries converge to the unit circle. However, since the gradient of any eigenfunction must be orthogonal to the region boundary (as it is a level set), the eigenfunctions can't possibly converge to anything (under any meaningful notion) and so it makes me question if it's even possible for the eigenvalues to do so. If the answer is "no, the spectrum doesn't necessarily converge," a much broader question arises: what are necessary and sufficient conditions for it to converge? Intuitively, I imagine a necessary condition is that the curvature of the boundaries also converge appropriately, but I have no idea if that's sufficient. EDIT: Another interesting question is if the principal eigenvalue (the smallest nonzero one) can grow arbitrarily large.
There is a domain monotonicity of Dirichlet eigenvalues: if domains $\Omega^1\supset\Omega^2\supset\ldots\supset\Omega^n\supset\ldots\ $ then the corresponding eigenvalues satisfy $\lambda_k^1\le\lambda_k^2\le\ldots\le\lambda_k^n\le\ldots\ $ (eigenvalues increase as the domain shrinks), so convergence of curvatures is not necessary in this case. There are also lots of more general results on spectral stability problems for elliptic differential operators.
On the height of an ideal Which of the following inequalities hold for a ring $R$ and an ideal $I\subset R$? $\operatorname{height}I\leq\dim R-\dim R/I$ $\operatorname{height}I\geq\dim R-\dim R/I$
I think I have it: suppose $\mathrm{height}\;I=n$ and $\mathrm{dim}\;R/I=m$. Then there is a chain of primes $\mathfrak{p}_0\subset\ldots\subset\mathfrak{p}_n$ with $\mathfrak{p}_n\supseteq I$ minimal over $I$, and a chain $\mathfrak{p}_n\subset\mathfrak{p}_{n+1}\subset\ldots\subset\mathfrak{p}_{n+m}$ of primes containing $I$, giving a chain of length $n+m$ in $R$. But in general $\mathrm{dim}\;R$ can be greater, so $\mathrm{height}\;I+\mathrm{dim}\;R/I\leq\mathrm{dim}\;R$ holds.
Is it possible to solve a separable equation a different way and still arrive at the same answer? I have the following equation $$(xy^2 + x)\,dx + (yx^2 + y)\,dy=0$$ and I am told it is separable, but not knowing how that is, I went ahead and solved it using the exact method. Let $M = xy^2 + x$ and $N = yx^2 + y$. Then $$M_y = 2xy \text{ and } N_x = 2xy,$$ so the equation is exact. $$\int M\,dx = \int (xy^2 + x)\,dx = \frac{x^2y^2}{2} + \frac{x^2}{2} + g(y)$$ Taking the partial with respect to $y$, $$\frac{\partial}{\partial y}\left(\frac{x^2y^2}{2} + \frac{x^2}{2} + g(y)\right) = x^2 y + g'(y),$$ and comparing with $N = x^2 y + y$ gives $$g'(y) = y, \qquad g(y) = \frac{y^2}{2}.$$ The general solution then is $$C = \frac{x^2y^2}{2} + \frac{x^2}{2} + \frac{y^2}{2}.$$ Is this solution the same I would get if I had taken the separable-equations route?
We can also try it this way: $$(xy^2 + x)\,dx + (yx^2 + y)\,dy=0$$ $$x\,dx + y\,dy + xy^2\,dx + yx^2\,dy = 0$$ $$\frac{1}{2}(2x\,dx+2y\,dy) + \frac{1}{2}(2xy^2\,dx+2yx^2\,dy)=0$$ $$\frac{1}{2}d(x^2+y^2) + \frac{1}{2}d(x^2y^2) =0$$ $$x^2+y^2+ x^2y^2 = c$$ :)
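A numerical sanity check of the two derivations above: both say that $F(x,y)=\tfrac12(x^2+y^2+x^2y^2)$ satisfies $F_x = M$ and $F_y = N$, so its level sets solve the equation. Central finite differences confirm it (a sketch):

```python
def M(x, y): return x * y**2 + x
def N(x, y): return y * x**2 + y
def F(x, y): return (x**2 + y**2 + x**2 * y**2) / 2

# dF/dx should match M, and dF/dy should match N, at arbitrary points.
h = 1e-6
for (x, y) in [(0.3, 1.2), (-1.1, 0.4), (2.0, -0.7)]:
    dFdx = (F(x + h, y) - F(x - h, y)) / (2 * h)
    dFdy = (F(x, y + h) - F(x, y - h)) / (2 * h)
    assert abs(dFdx - M(x, y)) < 1e-6
    assert abs(dFdy - N(x, y)) < 1e-6
```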
Volume of Region in 5D Space I need to find the volume of the region defined by $$\begin{align*} a^2+b^2+c^2+d^2&\leq1,\\ a^2+b^2+c^2+e^2&\leq1,\\ a^2+b^2+d^2+e^2&\leq1,\\ a^2+c^2+d^2+e^2&\leq1 &\text{ and }\\ b^2+c^2+d^2+e^2&\leq1. \end{align*}$$ I don't necessarily need a full solution but any starting points would be very useful.
There's reflection symmetry in each of the coordinates, so the volume is $2^5$ times the volume for positive coordinates. There's also permutation symmetry among the coordinates, so the volume is $5!$ times the volume with the additional constraint $a\le b\le c\le d\le e$. Then it remains to find the integration boundaries and solve the integrals. The lower bound for $a$ is $0$. The upper bound for $a$, given the above constraints, is attained when $a=b=c=d=e$, and is thus $\sqrt{1/4}=1/2$. The lower bound for $b$ is $a$, and the upper bound for $b$ is again $1/2$. Then it gets slightly more complicated. The lower bound for $c$ is $b$, but for the upper bound for $c$ we have to take $c=d=e$ with $b$ given, which yields $\sqrt{(1-b^2)/3}$. Likewise, the lower bound for $d$ is $c$, and the upper bound for $d$ is attained for $d=e$ with $b$ and $c$ given, which yields $\sqrt{(1-b^2-c^2)/2}$. Finally, the lower bound for $e$ is $d$ and the upper bound for $e$ is $\sqrt{1-b^2-c^2-d^2}$. Putting it all together, the desired volume is $$V_5=2^55!\int_0^{1/2}\int_a^{1/2}\int_b^{\sqrt{(1-b^2)/3}}\int_c^{\sqrt{(1-b^2-c^2)/2}}\int_d^{\sqrt{1-b^2-c^2-d^2}}\mathrm de\mathrm dd\mathrm dc\mathrm db\mathrm da\;.$$ That's a bit of a nightmare to work out; Wolfram Alpha gives up on even small parts of it, so let's do the corresponding thing in $3$ and $4$ dimensions first. In $3$ dimensions, we have $$ \begin{eqnarray} V_3 &=& 2^33!\int_0^{\sqrt{1/2}}\int_a^{\sqrt{1/2}}\int_b^{\sqrt{1-b^2}}\mathrm dc\mathrm db\mathrm da \\ &=& 2^33!\int_0^{\sqrt{1/2}}\int_a^{\sqrt{1/2}}\left(\sqrt{1-b^2}-b\right)\mathrm db\mathrm da \\ &=& 2^33!\int_0^{\sqrt{1/2}}\frac12\left(\arcsin\sqrt{\frac12}-\arcsin a-a\sqrt{1-a^2}+a^2\right)\mathrm da \\ &=& 2^33!\frac16\left(2-\sqrt2\right) \\ &=& 8\left(2-\sqrt2\right)\;. \end{eqnarray}$$ I've worked out part of the answer for $4$ dimensions. 
There are some miraculous cancellations that make me think that a) there must be a better way to do this (perhaps anon's answer, if it can be fixed) and b) this might be workable for $5$ dimensions, too. I have other things to do now, but I'll check back and if there's no correct solution yet I'll try to finish the solution for $4$ dimensions.
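Before fighting the five-dimensional integral, the closed form $V_3=8(2-\sqrt2)$ can be sanity-checked by Monte Carlo over the cube $[-1,1]^3$ (a sketch; in three dimensions the constraints pair up the coordinates, and the sample size and seed are arbitrary):

```python
import random

random.seed(0)
n = 200_000
inside = 0
for _ in range(n):
    a = random.uniform(-1, 1)
    b = random.uniform(-1, 1)
    c = random.uniform(-1, 1)
    # 3D analogue: every pair of coordinates satisfies the ball constraint.
    if a*a + b*b <= 1 and a*a + c*c <= 1 and b*b + c*c <= 1:
        inside += 1

estimate = 8 * inside / n   # the sampling cube has volume 2^3 = 8
exact = 8 * (2 - 2 ** 0.5)  # the closed form derived above, about 4.686
print(estimate, exact)
```

The same sampler, extended to five coordinates and the five constraints, gives a numerical target to check any closed form for $V_5$ against.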
How to show that this series does not converge uniformly on the open unit disc? Given the series $\sum_{k=0}^\infty z^k$, it is easy to see that it converges locally uniformly, but how do I go about showing that it does not also converge uniformly on the open unit disc? I know that for it to converge uniformly on the open disc, $\sup_{|z|<1}|g(z) - g_k(z)|$ must tend to zero as $k$ goes to infinity. However, I am finding it difficult to show that this supremum does not go to zero as $k$ goes to infinity. Edit: Fixed confusing terminology as mentioned in answer.
Confine attention to real $x$ in the interval $0<x<1$. Let $$s_n(x)=1+x+x^2+\cdots +x^{n-1}.$$ If we use $s_n(x)$ to approximate the sum, the truncation error is $>x^n$. Choose a positive $\epsilon$, where for convenience $\epsilon<1$. We want to make the truncation error $<\epsilon$, so we need $$x^n <\epsilon,\qquad \text{or equivalently}\qquad n >\frac{|\ln(\epsilon)|}{|\ln(x)|}.$$ Since $\ln x \to 0$ as $x \to 1^{-}$, the required $n$ grows without bound as $x\to 1^{-}$.
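Numerically, the point is that for any fixed $n$ the tail $x^n/(1-x)$ blows up as $x\to1^-$, so the supremum of the error over the disc is never small. A sketch:

```python
def remainder(x, n):
    """|g(x) - s_n(x)| for the geometric series at real 0 < x < 1:
    the tail x^n + x^(n+1) + ... = x**n / (1 - x)."""
    return x ** n / (1 - x)

n = 100
for x in [0.9, 0.99, 0.999, 0.9999]:
    print(x, remainder(x, n))  # grows without bound as x -> 1
```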
Calculating point on a circle, given an offset? I have what seemed like a very simple issue, but I just cannot figure it out. I have the following circles around a common point: The Green and Blue circles represent circles that orbit the center point. I have been able to calculate the distance/radius from the point to the individual circles, but I am unable to plot the next point on either circle, given an angle from the center point. Presently, my calculation looks like the following: The coordinates of one of my circles is: y1 = 152 x1 = 140.5 And my calculation for the next point, 1 degree from the starting point (140.5,152) is: distance = SQRT((160-x1)^2 + (240-y1)^2) = 90.13 new x = 160 - (distance x COS(1 degree x (PI / 180))) new y = 240 - (distance x SIN(1 degree x (PI / 180))) My new x and y give me crazy results, nothing even close to my circle. I can't figure out how to calculate the new position, given the offset of 160, 240 being my center, and what I want to rotate around. Where am I going wrong? Update: I have implemented what I believe to be the correct formula, but I'm only getting a half circle, e.g. x1 = starting x coordinate, or updated coordinate y1 = starting y coordinate, or updated y coordinate cx = 100 (horizontal center) cy = 100 (vertical center) radius = SQRT((cx - x1)^2 + (cy - y1)^2) arc = ATAN((y1 - cy) / (x1 - cx)) newX = cx + radius * COS(arc - PI - (PI / 180.0)) newY = cy + radius * SIN(arc - PI - (PI / 180.0)) Set the values so next iteration of drawing, x1 and y1 will be the new base for the calculation. x1 = newX y1 = newY The circle begins to draw at the correct coordinates, but once it hits 180 degrees, it jumps back up to zero degrees. The dot represents the starting point. Also, the coordinates are going counterclockwise, when they need to go clockwise. Any ideas?
We can modify 6312's suggestion a bit to reduce the trigonometric effort. The key idea is that the trigonometric functions satisfy a recurrence relation when integer multiples of angles are considered. In particular, we have the relations $$\cos(\phi-\epsilon)=\cos\,\phi-(\mu\cos\,\phi-\nu\sin\,\phi)$$ $$\sin(\phi-\epsilon)=\sin\,\phi-(\mu\sin\,\phi+\nu\cos\,\phi)$$ where $\mu=2\sin^2\frac{\epsilon}{2}$ and $\nu=\sin\,\epsilon$. (These are easily derived through complex exponentials...) In any event, since you're moving by constant increments of $1^\circ$; you merely have to cache the values of $\mu=2\sin^2\frac{\pi}{360}\approx 1.523048436087608\times10^{-4}$ and $\nu=\sin\frac{\pi}{180}\approx 1.745240643728351\times 10^{-2}$ and apply the updating formulae I gave, where your starting point is $\cos\,\phi=\frac{140.5-160}{\sqrt{(140.5-160) ^2+(152-240)^2}}\approx-0.2163430618226664$ and $\sin\,\phi=\frac{152-240}{\sqrt{(140.5-160) ^2+(152-240)^2}}\approx-0.9763174071997252$
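A sketch of the updating scheme in Python, checking that $360$ one-degree steps return to the start (the center, radius, and starting angle are arbitrary; note the recurrence steps $\phi\mapsto\phi-\epsilon$, i.e. clockwise in the usual orientation, which is the direction the question asks for):

```python
import math

eps = math.pi / 180            # one-degree step
mu = 2 * math.sin(eps / 2) ** 2
nu = math.sin(eps)

cx, cy, r = 100.0, 100.0, 40.0  # center and radius
c, s = 1.0, 0.0                 # cos(phi), sin(phi) with phi = 0

for _ in range(360):            # one full clockwise revolution
    # Simultaneous update: cos(phi - eps), sin(phi - eps).
    c, s = c - (mu * c - nu * s), s - (mu * s + nu * c)
    x, y = cx + r * c, cy + r * s   # the point to plot

# After 360 one-degree steps we should be back at the start.
assert abs(c - 1.0) < 1e-9 and abs(s) < 1e-9
```

The tuple assignment is essential: updating `c` before computing the new `s` (as in the question's own code, which reused `x1` mid-update) mixes old and new values and walks off the circle.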
Yet another sum involving binomial coefficients Let $k,p$ be positive integers. Is there a closed form for the sums $$\sum_{i=0}^{p} \binom{k}{i} \binom{k+p-i}{p-i}\text{, or}$$ $$\sum_{i=0}^{p} \binom{k-1}{i} \binom{k+p-i}{p-i}\text{?}$$ (where 'closed form' should be interpreted as a representation which is free of sums, binomial coefficients, or any other hypergeometric functions).
Let's examine the first sum. I can't seem to find a closed form, but there is something very nice with the generating series. They are simple, and symmetrical with respect to the variables $p$ and $k$. Result: Your sum is the $k^{th}$ coefficient of $\frac{(1+x)^{p}}{\left(1-x\right)^{p+1}},$ and also the $p^{th}$ coefficient of $\frac{(1+x)^{k}}{\left(1-x\right)^{k+1}}.$ The Generating Series for the variable $p$: Consider $$F(x)=\sum_{p=0}^{\infty}\sum_{i=0}^{p}\binom{k}{i}\binom{k+p-i}{p-i}x^{p}.$$ Changing the order of summation, this becomes $$F(x)=\sum_{i=0}^{\infty}\binom{k}{i}\sum_{p=i}^{\infty}\binom{k+p-i}{p-i}x^{p},$$ and then shifting the second sum we have $$F(x)=\sum_{i=0}^{\infty}\binom{k}{i}x^{i}\sum_{p=0}^{\infty}\binom{k+p}{p}x^{p}.$$ Since the rightmost sum is $\frac{1}{(1-x)^{k+1}}$ we see that the generating series is $$F(x)=\frac{1}{(1-x)^{k+1}}\sum_{i=0}^{\infty}\binom{k}{i}x^{i}=\frac{\left(1+x\right)^{k}}{(1-x)^{k+1}}$$ by the binomial theorem. The Generating Series for the variable $k$: Let's consider the other generating series with respect to the variable $k$. Let $$G(x)=\sum_{k=0}^{\infty}\sum_{i=0}^{p}\binom{k}{i}\binom{k+p-i}{p-i}x^{k}.$$ Then $$G(x)=\sum_{i=0}^{p}\sum_{k=i}^{\infty}\binom{k}{i}\binom{k+p-i}{p-i}x^{k}=\sum_{i=0}^{p}x^{i}\sum_{k=0}^{\infty}\binom{k+i}{i}\binom{k+p}{p-i}x^{k}.$$ Splitting up the binomial coefficients into factorials, this is $$=\sum_{i=0}^{p}x^{i}\sum_{k=0}^{\infty}\frac{(k+i)!}{k!i!}\frac{(k+p)!}{(k+i)!(p-i)!}x^{k}=\sum_{i=0}^{p}\frac{x^{i}p!}{i!\left(p-i\right)!}\sum_{k=0}^{\infty}\frac{\left(k+p\right)!}{k!p!}x^{k}.$$ Consequently, $$G(x)=\frac{(1+x)^{p}}{\left(1-x\right)^{p+1}}.$$ Comments: I am not sure why the generating series has this symmetry. Perhaps you can use this property to tell you more about the sum/generating series. Hope that helps!
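The symmetry in $p$ and $k$, and the identification of the sum with the coefficients of the generating series, are easy to test numerically; a sketch using the expansion $1/(1-x)^{p+1}=\sum_m\binom{p+m}{m}x^m$:

```python
from math import comb

def S(k, p):
    """The first sum in the question."""
    return sum(comb(k, i) * comb(k + p - i, p - i) for i in range(p + 1))

def coeff(k, p):
    """Coefficient of x^k in (1+x)^p / (1-x)^(p+1):
    convolve (1+x)^p with 1/(1-x)^(p+1)."""
    return sum(comb(p, j) * comb(p + k - j, k - j)
               for j in range(min(p, k) + 1))

for k in range(8):
    for p in range(8):
        assert S(k, p) == S(p, k) == coeff(k, p)
print(S(3, 3))  # 63
```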
Take any number and keep appending 1's to the right of it. Are there an infinite number of primes in this sequence? Ignoring sequences that are always factorable such as starting with 11, Can we take any other number such as 42 and continually append 1s (forming the sequence {42, 421, 4211, ...}) to get a sequence that has an infinite number of primes in it?
Unless prevented by congruence restrictions, a sequence that grows exponentially, such as Mersenne primes or repunits or this variant on repunits, is predicted to have about $c \log(n)$ primes among its first $n$ terms according to "probability" arguments. Proving this prediction for any particular sequence is usually an unsolved problem. There is more literature (and more algebraic structure) available for the Mersenne case but the principle is the same for other sequences. http://primes.utm.edu/mersenne/heuristic.html Bateman, P. T.; Selfridge, J. L.; and Wagstaff, S. S. "The New Mersenne Conjecture." Amer. Math. Monthly 96, 125-128, 1989
How to find the least $N$ such that $N \equiv 7 \mod 180$ or $N \equiv 7 \mod 144$ but $N \equiv 1 \mod 7$? How to approach this problem: $N$ is the least number such that $N \equiv 7 \mod 180$ or $N \equiv 7 \mod 144$ but $N \equiv 1 \mod 7$. Then which of these is true: * *$0 \lt N \lt 1000$ *$1000 \lt N \lt 2000$ *$2000 \lt N \lt 4000$ *$N \gt 4000$ Please explain your idea. ADDED: The actual problem in my paper uses "or"; the "and" was my mistake, but I think I learned something new owing to that. Thanks all for being patient, and apologies for the inconvenience.
(1) For the original version of the question $\rm\:mod\ 180 \ $ and $\rm\: mod\ 144\::$ $\rm\: 144,\:180\ |\ N-7\ \Rightarrow\ 720 = lcm(144,180)\ |\ N-7\:.\:$ So, $\rm\: mod\ 7:\ 1\equiv N = 7 + 720\ k\ \equiv -k\:,\:$ so $\rm\:k\equiv -1\equiv 6\:.$ Thus $\rm\: N = 7 + 720\ (6 + 7\ j) =\: 4327 + 5040\ j\:,\:$ so $\rm\ N\ge0\ \Rightarrow\ N \ge 4327\:.$ (2) For the updated simpler version $\rm\:mod\ 180\ $ or $\rm\ mod\ 144\:,\:$ the same method shows that $\rm\: N = 7 + 180\ (3+ 7\ j)\:$ or $\rm\:N = 7 + 144\ (2 + 7\ j)\:,\:$ so the least$\rm\ N> 0\:$ is $\rm\:7 + 144\cdot 2 = 295\:.$ SIMPLER $\rm\ N = 7+144\ k\equiv 4\ k\ (mod\ 7)\:$ assumes every value $\rm\:mod\ 7\:$ for $\rm\:k = 0,1,2,\:\cdots,6\:,\:$ and all these values satisfy $\rm\:0 < N < 1000\:.\:$ Presumably this is the intended "quick" solution.
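Both answers can be confirmed by brute force. A minimal Python check (not part of the original answer; `smallest_N` is a name chosen here for illustration):

```python
def smallest_N(use_and=False):
    """Brute-force the smallest N > 0 with N % 7 == 1 and
    N = 7 (mod 180) and/or (mod 144), covering both readings of the problem."""
    N = 1
    while True:
        c180 = N % 180 == 7
        c144 = N % 144 == 7
        ok = (c180 and c144) if use_and else (c180 or c144)
        if ok and N % 7 == 1:
            return N
        N += 1

assert smallest_N(use_and=False) == 295    # the "or" version
assert smallest_N(use_and=True) == 4327    # the original "and" version
```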
Are all numbers real numbers? If I go into the woods and pick up two sticks and measure the ratio of their lengths, it is conceivable that I could only get a rational number, namely if the universe were composed of tiny lego bricks. It's also conceivable that I could get any real number. My question is, can there mathematically exist a universe in which these ratios are not real numbers? How do we know that the real numbers are all the numbers, and that they don't have "gaps" like the rationals? I want to know if what I (or most people) intuitively think of as the length of an idealized physical object can be a non-real number. Is it possible to have more than a continuum of distinct ordered points on a line of length 1? Why do mathematicians mostly use only $\mathbb{R}$ for calculus etc., if a number doesn't have to be real? By universe I just mean such a thing as Euclidean geometry, and by exist that it is consistent.
With regard to the OP's question Can there mathematically exist a universe in which these ratios are not real numbers? to provide a meaningful answer, the question needs to be reinterpreted first. Obviously the real numbers, being a mathematical model, do not coincide with anything in "the universe out there". The question is meaningful nonetheless when formulated as follows: What is the most appropriate number system to describe the universe if we want to understand it mathematically? Put this way, one could argue that the hyperreal number system is more appropriate for the task than the real number system, since it contains infinitesimals which are useful in any mathematical modeling of phenomena requiring the tools of the calculus, which certainly includes a large slice of mathematical physics. For a gentle introduction to the hyperreals see Keisler's freshman textbook Elementary Calculus.
Can a prime in a Dedekind domain be contained in the union of the other prime ideals? Suppose $R$ is a Dedekind domain with an infinite number of prime ideals. Let $P$ be one of the nonzero prime ideals, and let $U$ be the union of all the other prime ideals except $P$. Is it possible for $P\subset U$? As a remark, if there were only finitely many prime ideals in $R$, the above situation would not be possible by the "Prime Avoidance Lemma", since $P$ would then have to be contained in one of the other prime ideals, leading to a contradiction. The discussion at the top of pg. 70 in Neukirch's "Algebraic Number Theory" motivates this question. Many thanks, John
If $R$ is the ring of integers $O_K$ of a finite extension $K$ of $\mathbf{Q}$, then I don't think this can happen. The class of the prime ideal $P$ has finite order in the class group, say $n$. This means that the ideal $P^n$ is principal. Let $\alpha$ be a generator of $P^n$. Then $\alpha$ doesn't belong to any prime ideal other than $P$, because at the level of ideals inclusion implies (reverse) divisibility, and the factorization of ideals is unique. This argument works for any ring with a finite class group, but I'm too ignorant to comment on how much ground that covers :-(
How to add compound fractions? How to add two compound fractions with fractions in numerator like this one: $$\frac{\ \frac{1}{x}\ }{2} + \frac{\ \frac{2}{3x}\ }{x}$$ or fractions with fractions in denominator like this one: $$\frac{x}{\ \frac{2}{x}\ } + \frac{\ \frac{1}{x}\ }{x}$$
Yet another strategy: \begin{align} \frac{\frac1x}2+\frac{\frac2{3x}}x&=\frac1{2x}+\frac2{3x^2}\\ &=\frac{3x}{6x^2}+\frac4{6x^2}=\frac{3x+4}{6x^2}\,. \end{align} What did I do? Given is the sum of two fractions, and I multiplied top-and-bottom of the first by $x$, and top-and-bottom of the second by $3x$. Second step, find the minimal common denominator, which is $6x^2$, and on each of your current fractions, multiply top-and-bottom by a suitable quantity to get the denominators equal. Now add.
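As a numeric sanity check of the simplification, one can compare the original compound fraction with the final result at a few sample points (a sketch added here, not part of the original answer):

```python
def lhs(x):
    """The original compound fraction (1/x)/2 + (2/(3x))/x."""
    return (1 / x) / 2 + (2 / (3 * x)) / x

def rhs(x):
    """The simplified form (3x + 4) / (6x^2)."""
    return (3 * x + 4) / (6 * x ** 2)

# agreement at several nonzero sample points:
for x in (0.5, 1.0, 2.0, -3.0):
    assert abs(lhs(x) - rhs(x)) < 1e-12
```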
Neglecting higher order terms in expansion Suppose we have a function $v$ of $x$ with a minimum at $x=0$. We have, for $x$ close to zero, $$v'(x) = v'(0) +xv''(0) +\frac{x^2}{2}v'''(0)+\cdots$$ Then as $v'(0)=0$ $$v'(x)\approx xv''(0)$$ if $$|xv'''(0)|\ll v''(0)$$ Which is fine. I am unable to understand this statement: Typically each extra derivative will bring with it a factor of $1/L $ where $L$ is the distance over which the function changes by a large fraction. So $$x\ll L$$ This is extracted from a physics derivation, and I cannot get how they tacked on a factor of $1/L$
If each derivative contributes a factor of $\frac{1}{L}$, then $|xv'''| \ll v''$ implies $x\left(\frac{1}{L}\right)^3 \ll \left(\frac{1}{L}\right)^2$. Divide both sides by $\left(\frac{1}{L}\right)^3$ and this becomes $x \ll L$. That $\frac{1}{L}$ factor refers to the change in the function according to the difference-quotient definition of the derivative given in Wikipedia: each derivative is roughly (change in the function)/(distance over which it changes), and the relevant distance scale here is $L$. If you calculate the quotient between the second and third derivatives (or the first and second), it should approximate the result above given the context.
A Math function that draws water droplet shape? I just need a quick reference. What is the function for this kind of shape? Thanks.
You may also try Maple to find a kind of water droplet, plotting the surface $x^2+y^2+z^4=z^2$ for $-1 \le z \le 0$ (pointed at the top, rounded at the bottom):

    with(plots):
    implicitplot3d(x^2+y^2+z^4 = z^2, x = -1 .. 1, y = -1 .. 1, z = -1 .. 0,
                   numpoints = 50000, lightmodel = light2, color = blue, axes = boxed);
If $f'$ tends to a positive limit as $x$ approaches infinity, then $f$ approaches infinity Some time ago, I asked this here. A restricted form of the second question could be this: If $f$ is a function with continuous first derivative in $\mathbb{R}$ and such that $$\lim_{x\to \infty} f'(x) =a,$$ with $a\gt 0$, then $$\lim_{x\to\infty}f(x)=\infty.$$ To prove it, I tried this: There exists $x_0\in \mathbb{R}$ such that for $x\geq x_0$, $$f'(x)\gt \frac{a}{2}.$$ There exists $\delta_0\gt 0$ such that for $x_0\lt x\leq x_0+ \delta_0$ $$\begin{align*}\frac{f(x)-f(x_0)}{x-x_0}-f'(x_0)&\gt -\frac{a}{4}\\ \frac{f(x)-f(x_0)}{x-x_0}&\gt f'(x_0)-\frac{a}{4}\\ &\gt \frac{a}{2}-\frac{a}{4}=\frac{a}{4}\\ f(x)-f(x_0)&\gt \frac{a}{4}(x-x_0)\end{align*}.$$ We can assume that $\delta_0\geq 1$. If $\delta_0 \lt 1$, then $x_0+2-\delta_0\gt x_0$ and then $$f'(x_0+2-\delta_0)\gt \frac{a}{2}.$$ Now, there exists $\delta\gt 0$ such that for $x_0+2-\delta_0\lt x\leq x_0+2-\delta_0+\delta$ $$f(x)-f(x_0+2-\delta_0)\gt \frac{a}{4}(x-(x_0+2-\delta_0))= \frac{a}{4}(x-x_0-(2-\delta_0))\gt \frac{a}{4}(x-x_0).$$ It is clear that $x\in (x_0,x_0+2-\delta_0+\delta]$ and $2-\delta_0+\delta\geq 1$. Therefore, we can take $x_1=x_0+1$. Then $f'(x_1)\gt a/2$ and then there exists $\delta_1\geq 1$ such that for $x_1\lt x\leq x_1+\delta_1$ $$f(x)-f(x_1)\gt \frac{a}{4}(x-x_1).$$ Take $x_2=x_1+1$ and so on. If $f$ is bounded, $(f(x_n))_{n\in \mathbb{N}}$ is an increasing bounded sequence and therefore it has a convergent subsequence. Thus, this implies that the sequence $(x_n)$: $$x_{n+1}=x_n+1,$$ has a Cauchy subsequence, and that is a contradiction. Therefore $\lim_{x\to \infty} f(x)=\infty$. I want to know if this is correct, and if there is a simpler way to prove this. Thanks.
I will try to argue this in a different way, which can be much simpler: using visualization. Imagine how a function looks if it has a constant, positive slope: a straight line making a positive angle with the positive $x$ axis. [Plot omitted: a straight line of positive slope, $f(x)$ vs $x$.] As per the situation in the question, the slope of $f$ exists (and is finite) at all points, so the function is continuous. Since the slope also approaches a positive constant as $x$ grows, the graph of $f$ eventually looks like such a straight line (take the value of $x$ to be as large as you can imagine). Hence, putting the above situation mathematically, we have: if $\lim_{x\to \infty} f'(x) = a$ (with $a>0$), then $\lim_{x\to\infty} f(x)=\infty$.
Moving a rectangular box around a $90^\circ$ corner I have seen quite a few problems like this one presented below. The idea is how to determine if it is possible to move a rectangular 3d box through the corner of a hallway knowing the dimensions of all the objects given. Consider a hallway with width $1.5$ and height $2.5$ which has a corner of $90^\circ$. Determine if a rectangular box of dimensions $4.3\times 0.2\times 0.07$ can be taken on the hallway around the corner. I know that intuitively, the principle behind this is similar to the 2 dimensional case (illustrated here), but how can I solve this rigorously?
Here is an attempt based on my experiences with furniture moving. The long dimension a=4.3 will surely be horizontal. One of the short dimensions, call it b, will be vertical; the remaining dimension c will be horizontal. The box must be as "short" as possible during the passage at the corner. So, one end of the box will be lifted: We calculate the projection L = x1 + x2 of the lifted box onto the horizontal plane. Now we move the shortened box around the corner. Here is an algorithm as a Python program (I hope it is readable):

    import math

    # hallway dimensions:
    height = 2.5
    width = 1.5

    def box(a, b, c):
        # a = long dimension of the box = 4.3, horizontal
        # b = short dimension, 0.2 (or 0.07), vertical
        # c = the other short dimension, horizontal
        d = math.sqrt(a*a + b*b)              # diagonal of an a x b rectangle
        alpha = math.atan(b/a)                # angle of the diagonal in the a x b rectangle
        omega = math.asin(height/d) - alpha   # lifting angle
        x1 = b * math.sin(omega)              # projection of b to the floor
        x2 = a * math.cos(omega)              # projection of a to the floor
        L = x1 + x2                           # length of the lifted box projected to the floor
        sin45 = math.sin(math.pi/4.0)
        y1 = c * sin45                        # projection of c to the y axis
        y2 = L / 2 * sin45                    # projection of L/2 to the y axis
        w = y1 + y2                           # box needs this width w
        ok = (w <= width)                     # box passes if its width w is at most
                                              # the available hallway width
        print("w =", w, ", pass =", ok)
        return ok

    def test():
        # 1) try 0.07 as vertical dimension:
        box(4.3, 0.07, 0.2)    # prints w = 1.407..., pass = True
        # 2) try 0.2 as vertical dimension:
        box(4.3, 0.2, 0.07)    # prints w = 1.365..., pass = True

    test()

So, the box can be transported around the corner either way (either 0.2 or 0.07 vertical).
Adding Latex formulae for the pure mathematician: $$ \begin{align*} d= & \sqrt{a^{2}+b^{2}}\\ \alpha= & \arctan(b/a)\\ \omega= & \arcsin(height/d)-\alpha\\ L= & x_{1}+x_{2}=b\sin\omega+a\cos\omega\\ w= & y_{1}+y_{2}=\frac{c}{\sqrt{2}}+\frac{L}{2\sqrt{2}} \end{align*} $$ The box can be transported around the corner if $w \le width$.
Number of ways a natural number can be written as a sum of smaller natural numbers It is easy to see that, given a natural number $N$, the number of doublets that sum to $N$ is $\frac{N-(N \bmod 2)}{2}=\lfloor N/2\rfloor$, so I thought I could reach some recursive formula: having found the number of doublets, I could find the number of triplets, and so on. Example: for $N=3$ the only doublet is $2+1=3$ (here $2+1$ and $1+2$ count as one), and then I could count the number of ways the number $2$ can be expressed as such a sum to get the total number of ways $3$ can be written as a sum. But this seems not very efficient, so I was wondering if there is another way to attack the problem, and if there is some reference for it. Once I read that this quantity has a chaotic behavior, and also that it was used in probability, but I don't remember where I got that information. So if you know something I would be grateful to be notified; thanks in advance.
You are asking about integer partitions. This is a well studied topic and you can look at http://en.wikipedia.org/wiki/Integer_partitions for details.
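The partition numbers $p(n)$ from the linked article can be computed with a short dynamic program, a standard method (not taken from the answer itself). Note that the question's count, sums with at least two parts, is $p(n)-1$, since the one-part "sum" $n$ itself is excluded:

```python
def partitions(n):
    """Number of integer partitions of n, via dynamic programming:
    after processing a part size, p[m] counts partitions of m using
    only the part sizes considered so far."""
    p = [1] + [0] * n
    for part in range(1, n + 1):
        for m in range(part, n + 1):
            p[m] += p[m - part]
    return p[n]

assert partitions(3) == 3       # 3, 2+1, 1+1+1 -> the question counts 3 - 1 = 2 ways
assert partitions(5) == 7
assert partitions(100) == 190569292
```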
A good book on Statistical Inference? Can anyone suggest one or more good books on Statistical Inference (estimators, UMVU estimators, hypothesis testing, UMP tests, interval estimators, ANOVA one-way and two-way...) based on rigorous probability/measure theory? I've checked some classical books on this topic, but apparently they all start from scratch with elementary probability theory.
Dienst's recommendations above are all good, but a classic text you need to check out is S.S. Wilks' Mathematical Statistics. A complete theoretical treatment by one of the subject's founding fathers. It's out of print and quite hard to find, but if you're really interested in this subject, it's well worth hunting down. Be sure you get the hardcover 1963 Wiley edition; there's a preliminary mimeographed set of Princeton lecture notes from 1944 by the same author and with the same title. It's not the same book: it's much less complete and more elementary. Make sure you get the right one!
Ways of building groups besides direct, semidirect products? Let's say we have a group G containing a normal subgroup H. What are the possible relationships we can have between G, H, and G/H? Looking at groups of small order, it seems to always be the case that G = G/H x H or G/H x| H. What, if any, other constructions/relations are possible? And why is it the case that there are or aren't any other possible constructions/relations(if this question admits a relatively elementary answer)?
I don't believe that this question admits an elementary answer. The two ways, direct product and semidirect product, give many groups but not all. In my experience with small groups, the complexity of the constructions lies mainly in $p$-groups. For $p$-groups of order $p^n$, $n>4$ (I think), there are always some groups which cannot be obtained as semidirect products of smaller groups. One method is to use generators and relations. Write generators and relations for the normal subgroup $H$ and for the quotient $G/H$. Choose some elements of $G$ whose images are the generators of $G/H$; make a single choice for each generator. This pullback of generators, together with the generators of $H$, gives generators of $G$. We only have to determine relations. Relations of $H$ are also relations of $G$. Other possible relations are obtained by considering relations of $G/H$ and their pullbacks. Not all pullbacks of relations of $G/H$ give groups of order $|G|$; the order may become smaller. Moreover, different relations may give isomorphic groups. For the best elementary examples of this method (generators and relations), see the constructions of the non-abelian groups of order $8$. (Ref. the excellent book "An Introduction to The Theory of Groups": Joseph Rotman)
Continuity of this function at $x=0$ The following function is not defined at $x=0$: $$f(x) = \frac{\log(1+ax) - \log(1-bx)}{x} .$$ What would be the value of $f(0)$ so that it is continuous at $x=0$?
Do you want to evaluate the limit at $0$? Write $f(x)=\log g(x)$, where $$g(x)=\biggl(\frac{1+ax}{1-bx}\biggr)^{1/x}.$$ Then you can see that \begin{align*} \lim_{x \to 0} g(x) = \lim_{x \to 0} \biggl(\frac{1+ax}{1-bx}\biggr)^{1/x} &=\lim_{x \to 0} \biggl(1+ \frac{(a+b)x}{1-bx}\biggr)^{1/x} \\ &=\lim_{x \to 0} \biggl(1+ \frac{(a+b)x}{1-bx}\biggr)^{\small \frac{1-bx}{(a+b)x} \cdot \frac{(a+b)x}{x \cdot (1-bx)}} \\ &=e^{\small\displaystyle\tiny\lim_{x \to 0} \frac{(a+b)x}{x\cdot (1-bx)}} = e^{a+b} \qquad \qquad \Bigl[ \because \small \lim_{x \to 0} (1+x)^{1/x} =e \Bigr] \end{align*} Therefore, as $x \to 0$, $f(x) = \log g(x) \to a+b$, so setting $f(0)=a+b$ makes $f$ continuous at $x=0$. Please see this post: Solving $\lim\limits_{x \to 0^+} \frac{\ln[\cos(x)]}{x}$ as a similar kind of methodology is used to solve this problem.
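A quick numeric check (not in the original answer) that $f(x)=\frac{\log(1+ax)-\log(1-bx)}{x}$ indeed tends to $a+b$ as $x\to 0$, here with $a=2$, $b=3$:

```python
import math

def f(x, a=2.0, b=3.0):
    """The function from the question, defined away from x = 0."""
    return (math.log(1 + a * x) - math.log(1 - b * x)) / x

# the value should approach a + b = 5, with error shrinking roughly linearly in x:
for x in (1e-2, 1e-4, 1e-6):
    assert abs(f(x) - 5.0) < 100 * x
```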
Do addition and multiplication have arity? Many books classify the standard four arithmetical functions of addition, subtraction, multiplication, and division as binary (in terms of arity). But, "sigma" and "product" notation often writes just one symbol at the front, and indexes those symbols which seemingly makes expressions like $+(2, 3, 4)=9$ meaningful. Of course, we can't do something similar for division and subtraction, since they don't associate, but does the $+$ symbol in the above expression qualify as the same type of expression as when someone writes $2+4=6$? Do addition and multiplication qualify as functions which don't necessarily have a fixed arity, or do they actually have a fixed arity, and thus instances of sigma and product notation should get taken as abbreviation of expressions involving binary functions? Or is the above question merely a matter of perspective? Do we get into any logical difficulties if we regard addition and multiplication as $n$-ary functions, or can we only avoid such difficulties if we regard addition and multiplication as binary?
There are no logical difficulties passing back and forth between binary associative operations and their higher-arity extensions. However, a theorem of Sierpinski (Fund. Math., 33 (1945) 169-73) shows that higher-order operations are not needed: every finitary operation may be expressed as a composition of binary operations. The proof is especially simple for operations on a finite set $\rm\:A\:.\:$ Namely, if $\rm\:|A| = n\:$ then we may encode $\rm\:A\:$ by $\rm\:\mathbb Z/n\:,\:$ the ring of integers $\rm\:mod\ n\:,\:$ allowing us to employ Lagrange interpolation to represent any finitary operation as a finite composition of the binary operations $\rm\: +,\ *\:,\:$ and $\rm\: \delta(a,b) = 1\ if\ a=b\ else\ 0\:,\:$ namely $$\rm f(x_1,\ldots,x_n)\ = \sum_{(a_1,\ldots,a_n)\ \in\ A^n}\ f(a_1,\ldots,a_n)\ \prod_{i\ =\ 1}^n\ \delta(x_i,a_i) $$ When $\rm\:|A|\:$ is infinite one may instead proceed by employing pairing functions $\rm\:A^2\to A\:.$
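The displayed interpolation formula can be checked mechanically on a small finite set. The sketch below (my illustration, not from Sierpinski's paper) rebuilds an arbitrary ternary operation on $\mathbb{Z}/5$ out of the binary operations $+$, $*$ (mod $n$) and $\delta$:

```python
from itertools import product

def delta(a, b):
    """The binary 'equality' operation from the answer."""
    return 1 if a == b else 0

def interpolate(f, n, arity):
    """Rebuild an arity-ary operation on Z/n using only the binary
    operations +, * (mod n) and delta, via the displayed formula."""
    def g(*xs):
        total = 0
        for tup in product(range(n), repeat=arity):
            term = f(*tup)
            for x, a in zip(xs, tup):
                term = (term * delta(x, a)) % n
            total = (total + term) % n
        return total
    return g

# sanity check on an arbitrary ternary operation on Z/5:
n = 5
f = lambda x, y, z: (x * x + 2 * y * z + 3) % n
g = interpolate(f, n, 3)
assert all(f(*t) == g(*t) for t in product(range(n), repeat=3))
```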
Probability of Median from a continuous distribution For a sample of size $n=3$ from a continuous probability distribution, what is $P(X_{(1)}<k<X_{(2)})$ where $k$ is the median of the distribution? What is $P(X_{(1)}<k<X_{(3)})$? $X_{(i)},i=1,2,3$ are the ordered values of the sample. I'm having trouble trying to solve this question since the median is for the distribution and not the sample. The only explicit formulas for the median I know of are the median $k$ of any random variable $X$ satisfies $P(X≤k)≥1/2$ and $P(X≥k)≥1/2$, but I don't see how to apply that here.
I assume $X_1, X_2, X_3$ are taken to be iid. Here's a hint: $$P(X_{(1)} < k < X_{(2)}) = 3P(X_1 < k \cap X_2 > k \cap X_3 > k)$$ by a simple combinatoric argument. Do you see why? Since the distributions are continuous, $$P(X_1 > k) = P(X_1 \ge k) = P(X_1 < k) = P(X_1 \le k) = \frac 1 2.$$ The second part of the question is similar.
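Working the hint through gives $P(X_{(1)}<k<X_{(2)}) = 3\cdot(1/2)^3 = 3/8$, and for the second part $P(X_{(1)}<k<X_{(3)}) = 1 - 2\cdot(1/2)^3 = 3/4$. A Monte Carlo simulation agrees (a sketch added here, not part of the original hint):

```python
import random

random.seed(0)
k = 0.5                      # the median of the Uniform(0, 1) distribution
trials = 200_000
c12 = c13 = 0
for _ in range(trials):
    x = sorted(random.random() for _ in range(3))
    if x[0] < k < x[1]:
        c12 += 1
    if x[0] < k < x[2]:
        c13 += 1

p12 = c12 / trials           # should be near 3 * (1/2)^3 = 3/8
p13 = c13 / trials           # should be near 1 - 2 * (1/2)^3 = 3/4
assert abs(p12 - 3 / 8) < 0.01
assert abs(p13 - 3 / 4) < 0.01
```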
Types of divergence My teacher said there are two main ways a sequence can diverge: it can increase in magnitude without bound, or it can fail to resolve to any one limit. But maybe that second kind of divergence is too diverse? There is a big difference between the divergent sequence 1, -1, 1, -1 . . . and the sequence formed by taking a digit from pi, g, then adding up the next g digits of pi and dividing that by g. (6/3, 25/5, 36/5, 11/2, 18/4, . . . ) Yet both of the above are more orderly than a sequence of random numbers, from what little I understand of randomness. So maybe we should say that we have: * *Sequences that increase in magnitude without bound. *Sequences that can be decomposed into convergent subsequences, or into sequences as in #1. *Sequences based on a rule. *Random sequences. Yet a random sequence with even distribution will have convergent subsequences to every number in its range... suddenly randomness seems orderly. What do professionals say about types of divergence?
Every sequence that doesn't increase in magnitude without bound can be decomposed into convergent subsequences. EDIT: Maybe a useful way of classifying divergent sequences (of real numbers) would be by the set of accumulation points, that is, by the set of limit points of convergent subsequences. One could ask * *Is $+\infty$ an accumulation point? *Is $-\infty$ an accumulation point? *Are there any finite accumulation points? *Is there more than one finite accumulation point? *Are there infinitely many accumulation points? *Are there uncountably many accumulation points? *Is every real number an accumulation point?
Introduction to the mathematical theory of regularization I asked the question "Are there books on Regularization at an Introductory level?" at physics.SE. I was informed that "there is (...) a mathematical theory of regularization (Cesàro, Borel, Ramanujan summations and many others) that is interesting per se". Question: Can someone advise me on how to study one or more of the above topics and provide some reference?
In terms of summations of otherwise divergent series (which is what Borel summation and Cesàro summation are about), a decent reference is G.H. Hardy's Divergent Series. In terms of divergent integrals, you may also be interested in learning about Cauchy principal values, which is related to Hadamard regularisation. (The references in those Wikipedia articles should be good enough; these two concepts are actually quite easily understood.) Zeta function regularisation has its roots in number theory, which unfortunately I don't know enough about to comment. Heat kernel type regularisation techniques are closely related to the study of partial differential equations and harmonic analysis. They are related to Friedrichs mollifiers (an exposition is available in most introductory texts on generalised functions / distribution theory; a slightly more advanced text is volume 1 of Hörmander's Analysis of Linear Partial Differential Operators). They can also be interpreted as a Fourier-space cut-off (which in physics terminology is probably called an "ultraviolet cutoff" and which can be interpreted in physical space as setting a minimal length scale), and so can be described in terms of, say, Littlewood-Paley theory (another advanced text is Stein's Topics in Harmonic Analysis Relating to Littlewood-Paley Theory) or the FBI transform. Unfortunately I don't know many good introductory texts in these areas, but I hope some of these keywords can aid your search.
Maximize and Minimize a 12" piece of wire into a square and circle A wire of length 12" can be bent into a circle, a square, or cut into 2 pieces to make both a circle and a square. How much wire should be used for the circle if the total area enclosed by the figure(s) is to be: a) a Maximum b) a Minimum What I've got so far is that the formula for the square is $A_s=\frac{1}{16}s^2$, the circumference of the circle is $P=12-c$, and its area is $A_c = \pi(\frac{P}{2\pi})^2$, where $c$ is the length of the wire for the circle and $s$ is the length of the wire for the square. Now I know I need to differentiate these formulas to find the max and min they both can be, but what am I differentiating with respect to? The missing variable in each of the formulas? Also, once I find the derivatives, what would my next steps be for minimizing and maximizing these? And did I set the problem up correctly? Thanks for any help
Let $s$ be the circumference of the square. Then the circumference of the circle is $12-s$ (because that's what is left from the wire). Now you already computed the formulas $A_{\mathrm{square}}(s) = \frac{1}{16} s^2$ and $A_{\mathrm{circle}}(s) = \frac{1}{4\pi}(12 - s)^2$. The total area is $A(s) = A_{\mathrm{square}}(s) + A_{\mathrm{circle}}(s)$, where $s \in [0,12]$ is the variable. To find the extrema (maximum/minimum) of this function, a necessary condition is $A'(s) = 0$ (differentiate with respect to $s$) when $0 \lt s \lt 12$ and you need also consider $A(0)$ and $A(12)$. So the task you need to do is to differentiate $A(s)$ with respect to $s$, solve $A'(s) = 0$ for $s$ (there will be only one solution $s_0$). Now the maximum among $A(0)$, $A(12)$ and $A(s_0)$ will be the maximum and the minimum among them will be the minimum of $A(s)$. It may also help if you sketch the graph to convince yourself of the solution. Here's a small sanity check: The circle is the geometric figure that encloses the largest area among all figures with the same circumference, so the maximum should be achieved for $s = 0$. Since enclosing two figures needs more wire than enclosing a single one, the minimum should be achieved at $s_0$. Added: Since the results you mention are a bit off, let me show you what I get: First $$A(s) = \frac{1}{16}s^2 + \frac{1}{4\pi}(12-s)^2.$$ Differentiating this with respect to $s$ I get $$A'(s) = \frac{1}{8}s - \frac{1}{2\pi}(12-s)$$ Now solve $A'(s) = 0$ to find $$s_0 = \frac{12}{1+\frac{\pi}{4}} \approx 6.72$$ Plugging this in gives me $A(s_0) \approx 5.04$. (No warranty, I hope I haven't goofed)
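The numbers in the added computation are easy to confirm numerically (a sketch, not part of the original answer):

```python
import math

def A(s):
    """Total enclosed area when s inches of the 12" wire form the square's perimeter."""
    return s ** 2 / 16 + (12 - s) ** 2 / (4 * math.pi)

def dA(s):
    """Derivative A'(s) as computed in the answer."""
    return s / 8 - (12 - s) / (2 * math.pi)

s0 = 12 / (1 + math.pi / 4)          # the critical point from the answer
assert abs(dA(s0)) < 1e-9            # A'(s0) = 0
assert abs(s0 - 6.72) < 0.01
assert abs(A(s0) - 5.04) < 0.01
assert A(s0) < A(12) < A(0)          # minimum at s0, maximum with all wire in the circle
```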
The leap to infinite dimensions Extending this question, page 447 of Gilbert Strang's Algebra book says What does it mean for a vector to have infinitely many components? There are two different answers, both good: 1) The vector becomes $v = (v_1, v_2, v_3 ... )$ 2) The vector becomes a function $f(x)$. It could be $\sin(x)$. I don't quite see in what sense the function is "infinite dimensional". Is it because a function is continuous, and so represents infinitely many points? The best way I can explain it is: * *1D space has 1 DOF, so each "vector" takes you on "one trip" *2D space has 2 DOF, so by following each component in a 2D (x,y) vector you end up going on "two trips" *... *$\infty$D space has $\infty$ DOF, so each component in an $\infty$D vector takes you on "$\infty$ trips" How does it ever end then? 3d space has 3 components to travel (x,y,z) to reach a destination point. If we have infinite components to travel on, how do we ever reach a destination point? We should be resolving components against infinite axes and so never reach a final destination point.
One thing that might help is thinking about the vector spaces you already know as function spaces instead. Consider $\mathbb{R}^n$. Let $T_{n}=\{1,2,\cdots,n\}$ be a set of size $n$. Then $$\mathbb{R}^{n}\cong\left\{ f:T_{n}\rightarrow\mathbb{R}\right\} $$ where the set on the right hand side is the space of all real valued functions on $T_n$. It has a vector space structure since we can multiply by scalars and add functions. The functions $f_i$ which satisfy $f_i(j)=\delta_{ij}$ will form a basis. So a finite dimensional vector space is just the space of all functions on a finite set. When we look at the space of functions on an infinite set, we get an infinite dimensional vector space.
A simple conditional expectation problem $X, Y$ iid uniform random variables on $[0,1]$ $$Z = \left\{ \begin{aligned} X+Y \quad&\text{ if } X>\frac{1}{2} \\ \frac{1}{2} + Y \quad & \text{ if } X\leq\frac{1}{2} \end{aligned} \right.$$ The question is $E\{Z|Z\leq 1\}= ?$ I tried $\displaystyle \int_0^1 E\{Z|Z = z\} P\{Z = z\}dz$ and got $5/8$, but I am not so sure about the result since I haven't touched probability for years.
Your probability space is the unit square in the $(x,y)$-plane with $dP={\rm d}(x,y)$. The payout $Z$ is ${1\over 2}+y$ in the left half $L$ of the square and $x+y$ in the right half $R$. The region where $Z\leq 1$ consists of the lower half of $L$ and a triangle in the lower left of $R$; it has total area $P(Z\leq 1)={3\over8}$. It follows that the expectation $E:=E[Z\ |\ Z\leq 1]$ is given by $$E=\left(\int_0^{1/2}\int_0^{1/2}\bigl({1\over2}+y\bigr)dy dx + \int_{1/2}^1\int_0^{1-x}(x+y)dy dx\right)\Bigg/{3\over8} ={{3\over16}+{5\over48}\over{3\over8}}={7\over9}\ .$$
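A Monte Carlo simulation of the setup confirms both $P(Z\le 1)=3/8$ and $E[Z\mid Z\le 1]=7/9$ (a sketch added for illustration, not part of the original answer):

```python
import random

random.seed(1)
trials = 400_000
total = count = 0
for _ in range(trials):
    x, y = random.random(), random.random()
    z = x + y if x > 0.5 else 0.5 + y
    if z <= 1:
        total += z
        count += 1

frac = count / trials        # should be near P(Z <= 1) = 3/8
est = total / count          # should be near 7/9 = 0.777...
assert abs(frac - 3 / 8) < 0.01
assert abs(est - 7 / 9) < 0.01
```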
Range of a sum of sine waves Suppose I'm given a function f(x) = sin(Ax + B) + sin(Cx + D) is there a simple (or, perhaps, not-so-simple) way to compute the range of this function? My goal is ultimately to construct a function g(x, S, T) that maps f to the range [S, T]. My strategy is to first compute the range of f, then scale it to the range [0,1], then scale that to the range [S, T]. Ideally I would like to be able to do this for an arbitrary number of waves, although to keep things simple I'm willing to be satisfied with 2 if it's the easiest route. Numerical methods welcome, although an explicit solution would be preferable.
I'll assume that all variables and parameters range over the reals, with $A,C\neq0$. Let's see how we can get a certain combination of phases $\alpha$, $\gamma$: $$Ax+B=2\pi m+\alpha\;,$$ $$Cx+D=2\pi n+\gamma\;.$$ Eliminating $x$ yields $$2\pi(nA-mC)=AD-BC+\alpha C-\gamma A\;.$$ If $A$ and $C$ are incommensurate (i.e. their ratio is irrational), given $\alpha$ we can get arbitrarily close to any value of $\gamma$, so the range in this case is at least $(-2,2)$. If $AD-BC$ happens to be an integer linear combination of $2\pi A$ and $2\pi C$, then we can reach $2$, and the range is $(-2,2]$, whereas if $AD-BC$ happens to be a half-integer linear combination of $2\pi A$ and $2\pi C$ (i.e. an odd-integer linear combination of $\pi A$ and $\pi C$), then we can reach $-2$, and the range is $[-2,2)$. (These cannot both occur if $A$ and $C$ are incommensurate.) On the other hand, if $A$ and $C$ are commensurate (i.e. their ratio is rational), you can transform $f$ to the form $$f(u)=\sin mu+ \sin (nu+\phi)$$ by a suitable linear transformation of the variable, so $f$ is periodic. In this case, there are periodically recurring minima and maxima, and in general you'll need to use numerical methods to find them.
Should I combine the negative part of the spectrum with the positive one? When filtering sound I currently analyse only the positive part of the spectrum. From the mathematical point of view, will discarding the negative half of the spectrum impact significantly on my analysis? Please consider only samples that I will actually encounter, not computer generate signals that are designed to thwart my analysis. I know this question involves physics, biology and even music theory. But I guess the required understanding of mathematics is deeper than of those other fields of study.
Sound processing is achieved through real-valued signal samples. For a real signal, the FFT/DFT spectrum has conjugate symmetry: each negative-frequency coefficient has the same magnitude as the corresponding positive-frequency one and the opposite phase, so the negative half carries no extra information. So, to save us or the machine the burden of saving/analyzing the same information twice, one looks only at the positive side of the FFT/DFT. However, do take notice that when figuring out spectral energy, you must remember to multiply the density by two (accounting for the missing, yet equal-magnitude, negative part).
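The conjugate symmetry of a real signal's spectrum, $X[N-k]=\overline{X[k]}$, can be seen with a naive pure-Python DFT (an illustration added here; a real application would use an FFT library instead):

```python
import cmath

def dft(x):
    """Naive O(N^2) DFT, for illustration only."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

x = [0.3, -1.2, 0.7, 2.5, -0.4, 1.1]      # real samples
X = dft(x)
N = len(x)
for k in range(1, N):
    # conjugate symmetry: equal magnitude, negated phase
    assert abs(X[N - k] - X[k].conjugate()) < 1e-9
assert abs(X[0] - sum(x)) < 1e-9          # DC bin is the (real) sample sum
```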
$T(a_{0}+a_{1}x+a_{2}x^2)=2a_{0}+a_{2}+(a_{0}+a_{1}+a_{2})x+3a_{2}x^2$ - Finding $[T]_{E}^{E}$ Let $T(a_{0}+a_{1}x+a_{2}x^2)=2a_{0}+a_{2}+(a_{0}+a_{1}+a_{2})x+3a_{2}x^2$ be a linear transformation. I need to find the eigenvectors and eigenvalues of $T$. So, I'm trying to find $[T]_{E}^{E}$ with respect to the basis $E=\{1,x,x^2\}$. I don't understand how I should use this transformation to do that. Thanks.
The columns of the matrix you seek are the coordinates of the images under $T$ of the elements of the basis. So you need only compute $T(1)$, $T(x)$, and $T(x^2)$.
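To make this concrete (a NumPy sketch; the helper name `T` is mine), compute the three images and stack them as columns, after which the eigenvalues fall out of the matrix:

```python
import numpy as np

def T(a0, a1, a2):
    """T(a0 + a1*x + a2*x^2), returned as coefficients (b0, b1, b2)."""
    return (2 * a0 + a2, a0 + a1 + a2, 3 * a2)

# Columns of [T]_E^E are the coordinate vectors of T(1), T(x), T(x^2).
M = np.column_stack([T(1, 0, 0), T(0, 1, 0), T(0, 0, 1)])
# M = [[2, 0, 1],
#      [1, 1, 1],
#      [0, 0, 3]]

eigvals = np.sort(np.linalg.eigvals(M).real)
```

Here the characteristic polynomial factors as $(1-\lambda)(2-\lambda)(3-\lambda)$, so the eigenvalues are $1$, $2$, $3$.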
A prize of $27,000 is to be divided among three people in the ratio 3:5:7. What is the largest share? This is not homework; I was just reviewing some old math flash cards and I came across this one I couldn't solve. I'm not interested in the solution so much as the reasoning. Thanks
You can think of splitting the money in the ratio $3:5:7$ as dividing it into $3+5+7=15$ equal parts and giving $3$ of these parts to one person, $5$ to another, and $7$ to the third. One part, then, must amount to $\frac{27000}{15}=1800$ dollars, and the shares must then be $3 \cdot 1800 = 5400$, $5 \cdot 1800 = 9000$, and $7 \cdot 1800 = 12600$ dollars, respectively. (As a quick check, $5400+9000+12600=27000$, as required.)
Problem in skew-symmetric matrix Let $A$ be a real skew-symmetric matrix. Prove that $I+A$ is non-singular, where $I$ is the identity matrix.
As $A$ is skew-symmetric, $x^TAx=(x^TAx)^T=x^TA^Tx=-x^TAx$, so $x^TAx=0$ for every $x$. Hence if $(A+I)x=0$, we have $0=x^T(A+I)x=x^TAx+\|x\|^2=\|x\|^2$, i.e. $x=0$. Hence $(A+I)$ is invertible.
What is the name of the vertical bar in $(x^2+1)\vert_{x = 4}$ or $\left.\left(\frac{x^3}{3}+x+c\right) \right\vert_0^4$? I've always wanted to know what the name of the vertical bar in these examples was: $f(x)=(x^2+1)\vert_{x = 4}$ (I know this means evaluate $x$ at $4$) $\int_0^4 (x^2+1) \,dx = \left.\left(\frac{x^3}{3}+x+c\right) \right\vert_0^4$ (and I know this means that you would then evaluate at $x=0$ and $x=4$, then subtract $F(4)-F(0)$ if finding the net signed area) I know it seems trivial, but it's something I can't really seem to find when I go googling and the question came up in my calc class last night and no one seemed to know. Also, for bonus internets; What is the name of the horizontal bar in $\frac{x^3}{3}$? Is that called an obelus?
Jeff Miller calls it "bar notation" in his Earliest Uses of Symbols of Calculus (quoted below). The bar denotes an evaluation functional, a concept whose importance comes to the fore when one studies duality of vector spaces (e.g. such duality plays a key role in the Umbral Calculus). The bar notation to indicate evaluation of an antiderivative at the two limits of integration was first used by Pierre Frederic Sarrus (1798-1861) in 1823 in Gergonne's Annales, Vol. XIV. The notation was used later by Moigno and Cauchy (Cajori vol. 2, page 250).
Questions about composite numbers Consider the following problem: Prove or disprove that if $n\in \mathbb{N}$, then $n$ is prime iff $$(n-1)!+n$$ is prime. If $n$ is composite and greater than $1$, then $n$ has a divisor less than $n-1$, therefore $(n-1)!$ and $n$ have a common factor. Thus "$\Leftarrow$" is true. To prove the other direction we can consider the more general problem: Let $n\in\mathbb{N}$. Consider the set $$C(n)=\{m\in\mathbb{N}:n+m\text{ is composite}\}.$$ How can we characterize the elements of $C(n)$? The ideal answer would be to describe all elements in $C(n)$ in terms of only $n$. But, is that possible? As a first approximation to solving this, we can start by defining for $n,p\in\mathbb{N}$: $$A(n,p)= \{ m\in\mathbb{N}:n+m\equiv 0\pmod{p} \}.$$ After some observations we can prove that $$A(n,p)=\{(\lceil n/p \rceil + k)p - n:k\in \mathbb{N}\}$$ and then $A(n,p)$ is the range of a function of the form $f_{n,p}:\mathbb{N}\to \mathbb{N}$. From this $$C(n)=\bigcup_{p=2}^\infty A(n,p),$$ but this is still far from a characterization in terms of $n$. What do you think is the best that we can do, or the best we can hope for?
One reason your professors might have smiled at you is that $$ C(n) = C(0) - n, $$ where $C(0) = \{m \in \mathbb N: m \text{ is composite} \}$. So characterizing $C(n)$ reduces to characterizing $C(0)$, which in turn reduces to characterizing the set of primes $\mathbb N \setminus C(0)$. (Well, okay, technically $C(n) = (C(0) - n) \cap \mathbb N$ as you've defined it, but cutting off the negative part of the set doesn't make any fundamental difference.)
On sorting in an array-less language This is partly a programming and partly a combinatorics question. I'm working in a language that unfortunately doesn't support array structures. I've run into a problem where I need to sort my variables in increasing order. Since the language has functions for the minimum and maximum of two inputs (but the language does not allow me to nest them, e.g. min(a, min(b, c)) is disallowed), I thought this might be one way towards my problem. If, for instance, I have two variables $a$ and $b$, I only need one temporary variable so that $a$ ends up being less than or equal to $b$: t = min(a, b); b = max(a, b); a = t; for three variables $a,b,c$, the situation is a little more complicated, but only one temporary variable still suffices so that $a \leq b \leq c$ (note that the maximum must be saved into the temporary before $a$ is overwritten): t = max(a, b); a = min(a, b); c = max(t, c); t = min(t, c); b = max(a, t); a = min(a, t); Not having a strong combinatorics background, however, I don't know how to generalize the above constructions if I have $n$ variables in general. In particular, is there a way to figure out how many temporary variables I would need to sort out $n$ variables, and to figure out what is the minimum number of assignment statements needed for sorting? Thanks in advance!
Many sorting algorithms work by performing a sequence of swaps, so you need only one extra variable to implement them for any fixed $n$. What you're doing is effectively unrolling the entire algorithm loop into a sequence of conditional assignments. The number of assignments will be three times the number of swaps, and I think the exact number may depend on the sorting algorithm. It'll be on the order of $n \log n$, though.
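A sketch in Python of what such an unrolled network looks like (the helper names are mine; in the actual array-less language each `compare_exchange` call would be written out as the three min/max/temp assignments from the question):

```python
def compare_exchange(x, y):
    """One swap step of the unrolled network: in the target language this
    is 3 assignments, t = min(x, y); y = max(x, y); x = t."""
    t = min(x, y)
    y = max(x, y)
    x = t
    return x, y

def sort4(a, b, c, d):
    """Standard 5-comparator sorting network for n = 4, hence
    15 assignments when fully written out with one temporary."""
    a, b = compare_exchange(a, b)
    c, d = compare_exchange(c, d)
    a, c = compare_exchange(a, c)
    b, d = compare_exchange(b, d)
    b, c = compare_exchange(b, c)
    return a, b, c, d
```

This 5-comparator network is optimal for 4 inputs; a bubble-sort-style network generalizes to any $n$ with $\binom{n}{2}$ compare-exchanges, still using a single temporary.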
Integer solutions of $3a^2 - 2a - 1 = n^2$ I've got an equation $3a^2 - 2a - 1 = n^2$, where $a,n \in \mathbb{N}$. I put it in Wolfram Alpha and besides everything else it gives integer solution: see here. For another equation (say, $3a^2 - 2a - 2 = n^2$, where $a,n \in \mathbb{N}$) Wolfram Alpha does not provide integer solutions: here. Could you please tell me: * *How does Wolfram Alpha determine existence of the integer solutions? *How does it find them? *What should I learn to be able to do the same with a pencil and a piece of paper (if possible)? Thanks in advance!
I believe Pell's Equation (and variants) would be useful. The first one can be recast as $$9a^2 - 6a - 3 = 3n^2$$ i.e. $$(3a-1)^2 - 4 = 3n^2$$ You are looking for solutions to $$ x^2 - 3y^2 = 4$$ such that $x = -1 \mod 3$. There are standard techniques to solve Pell's equation and variants (see the wiki page linked above and mathworld page here: http://mathworld.wolfram.com/PellEquation.html) and I am guessing Wolfram Alpha is using one of them. For the second I believe we get $$x^2 - 3y^2 = 7$$ which does not have solutions, considering modulo $4$ (as pointed out by Adrián Barquero).
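I don't know what Alpha actually does internally, but the Pell-generated solutions are easy to confirm by brute force, and the mod-4 obstruction for the second equation can be checked exhaustively (a verification script, names mine):

```python
import math

# Brute-force the first few integer solutions of 3a^2 - 2a - 1 = n^2.
solutions = []
for a in range(1, 1000):
    m = 3 * a * a - 2 * a - 1
    n = math.isqrt(m)
    if n * n == m:
        solutions.append((a, n))
# The Pell recurrence predicts a = 1, 5, 65, 901, ...

# Residue check showing x^2 - 3y^2 = 7 is impossible:
# modulo 4, x^2 - 3y^2 = x^2 + y^2 takes only the values {0, 1, 2},
# but 7 = 3 (mod 4).
residues = {(x * x + y * y) % 4 for x in range(4) for y in range(4)}
assert 7 % 4 not in residues
```
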
Prove that $(a_1a_2\cdots a_n)^{2} = e$ in a finite Abelian group Let $G$ be a finite abelian group, $G = \{e, a_{1}, a_{2}, ..., a_{n} \}$. Prove that $(a_{1}a_{2}\cdot \cdot \cdot a_{n})^{2} = e$. I've been stuck on this problem for quite some time. Could someone give me a hint? Thanks in advance.
The map $\phi:x\in G\mapsto x^{-1}\in G$ is an automorphism of $G$ so, in particular, it induces a bijection $G\setminus\{e\}\to G\setminus\{e\}$. It maps $b=a_1\cdots a_n$ to itself, so that $b=b^{-1}$ and, therefore, $b^2=e$.
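A quick sanity check in the cyclic groups $\mathbb Z_m$ (written additively, so the "product" of all elements is a sum and "squaring" is doubling); this is only a numerical illustration of the statement, not a proof:

```python
# (a_1 a_2 ... a_n)^2 = e, checked in Z_m for m = 3, ..., 49.
for m in range(3, 50):
    s = sum(range(1, m)) % m      # "product" of all non-identity elements
    assert (2 * s) % m == 0       # its "square" is the identity

# The product itself need not be the identity: in Z_8 it is
# 1 + 2 + ... + 7 = 28 = 4 (mod 8), yet doubling gives 8 = 0 (mod 8).
assert sum(range(1, 8)) % 8 == 4
```
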
Proving that $ 30 \mid ab(a^2+b^2)(a^2-b^2)$ How can I prove that $30 \mid ab(a^2+b^2)(a^2-b^2)$ without using $a,b$ congruent modulo $5$ and then $a,b$ congruent modulo $6$ (for example) to show respectively that $5 \mid ab(a^2+b^2)(a^2-b^2)$ and $6 \mid ab(a^2+b^2)(a^2-b^2)$? Indeed this method implies studying numerous congruences and is quite long.
You need to show $ab(a^2 - b^2)(a^2 + b^2)$ is a multiple of 2,3, and 5 for all $a$ and $b$. For 2: If neither $a$ nor $b$ are even, they are both odd and $a^2 \equiv b^2 \equiv 1 \pmod 2$, so that 2 divides $a^2 - b^2$. For 3: If neither $a$ nor $b$ are a multiple of 3, then $a^2 \equiv b^2 \equiv 1 \pmod 3$, so 3 divides $a^2 - b^2$ similar to above. For 5: If neither $a$ nor $b$ are a multiple of 5, then either $a^2 \equiv 1 \pmod 5$ or $a^2 \equiv -1 \pmod 5$. The same holds for $b$. If $a^2 \equiv b^2 \pmod 5$ then 5 divides $a^2 - b^2$, while if $a^2 \equiv -b^2 \pmod 5$ then 5 divides $a^2 + b^2$. This does break into cases, but as you can see it's not too bad to do it systematically like this.
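Since the expression modulo $30$ depends only on $a$ and $b$ modulo $30$, the case analysis above can also be double-checked exhaustively over one period (a verification script, not a replacement for the proof):

```python
# Exhaustive check of 30 | a*b*(a^2 + b^2)*(a^2 - b^2):
# by periodicity mod 30 it suffices to test a, b in 0..29.
for a in range(30):
    for b in range(30):
        assert a * b * (a * a + b * b) * (a * a - b * b) % 30 == 0
```
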
The product of all the elements of a finite abelian group I'm trying to prove the following statements. Let $G$ be a finite abelian group $G = \{a_{1}, a_{2}, ..., a_{n}\}$. * *If there is no element $x \neq e$ in $G$ such that $x = x^{-1}$, then $a_{1}a_{2} \cdot \cdot \cdot a_{n} = e$. Since the only element in $G$ that is an inverse of itself is the identity element $e$, for every other element $k$, it must have an inverse $a_{k}^{-1} = a_{j}$ where $k \neq j$. Thus $a_{1}a_{1}^{-1}a_{2}a_{2}^{-1} \cdot \cdot \cdot a_{n}a_{n}^{-1} = e$. * *If there is exactly one $x \neq e$ in $G$ such that $x = x^{-1}$, then $a_{1}a_{2} \cdot \cdot \cdot a_{n} = x$. This is stating that $x$ is not the identity element but is its own inverse. Then every other element $p$ must also have an inverse $a_{p}^{-1} = a_{w}$ where $p \neq w$. Similarly to the first question, a rearrangement can be done: $a_{1}a_{1}^{-1}a_{2}a_{2}^{-1} \cdot \cdot \cdot xx^{-1} \cdot \cdot \cdot a_{n}a_{n}^{-1} = xx^{-1} = e$. And this is where I am stuck since I proved another statement. Any comments would be appreciated for both problems.
For the first answer, you are almost there: if $a_1 a_1^{-1} \cdots a_n a_n^{-1} = e$, since the elements $a_1, \cdots , a_n$ are all distinct, their inverses are also distinct. Since the product written above involves every element of the group, we have $a_1 a_1^{-1} \cdots a_n a_n^{-1} = (a_1 a_2 \cdots a_n) (a_1^{-1} a_2^{-1} \cdots a_n^{-1}) = (a_1 \cdots a_n)^2 = e$, and since no element is its own inverse (by hypothesis) besides $e$, you have to conclude that $a_1 \cdots a_n = e$. For the second one, when you re-arrange the terms, $x^{-1}$ should not appear in there, since $x = x^{-1}$ and $x$ does not appear twice in the product, so all that's left is $x$.
Factorial decomposition of integers? This question might seem strange, but I had the feeling it's possible to decompose a number in a unique way as follows: if $x < n!$, then there is a unique way to write $x$ as: $$x = a_1\cdot 1! + a_2\cdot 2! + a_3\cdot3! + ... + a_{n-1}\cdot(n-1)!$$ where $a_i \leq i$. I looked at factorial decomposition on google but I cannot find any name for such a decomposition. Example: if I choose $(a_1,a_2) =$ *1,0 -> 1 *0,1 -> 2 *1,1 -> 3 *0,2 -> 4 *1,2 -> 5 I get all numbers from $1$ to $3!-1$. Idea for a proof: the number of elements between $1$ and $N!-1$ is equal to $N!-1$, and I have the feeling they are all different, so this decomposition should be right. But I didn't prove it properly. Are there proofs of this decomposition? Does this decomposition have a name? And above all, is this true? Thanks in advance
Your conjecture is correct. There is a straightforward proof by induction that such a decomposition always exists. Suppose that every positive integer less than $n!$ can be written in the form $\sum_{k=1}^{n-1} a_k k!$, where $0 \le a_k \le k$, and let $m$ be a positive integer such that $n! \le m < (n+1)!$. There are unique integers $a_n$ and $r$ such that $m = a_nn! + r$ and $0 \le r < n!$, and since $m < (n+1)! = (n+1)n!$, it's clear that $a_n \le n$. Since $r < n!$, the induction hypothesis ensures that there are non-negative integers $a_1,\dots,a_{n-1}$ such that $r = \sum_{k=1}^{n-1} a_k k!$, and hence $m = \sum_{k=1}^n a_k k!$. We've now seen that each of the $(n+1)!$ non-negative integers in $\{0,1,\dots,(n+1)!-1\}$ has a representation of the form $\sum_{k=1}^n a_k k!$ with $0 \le a_k \le k$ for each $k$. However, there are only $\prod_{k=1}^n (k+1) = (n+1)!$ distinct representations of that form, so each must represent a different integer, and each integer's representation is therefore unique.
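Incidentally, this representation is usually called the factorial number system (or factorial base). Converting to and from it takes only a few lines (a sketch; the function names are mine):

```python
def to_factorial_base(x, n):
    """Digits a_1..a_{n-1} with x = sum a_k * k!, 0 <= a_k <= k (x < n!)."""
    digits = []
    for k in range(2, n + 1):
        digits.append(x % k)   # a_{k-1}, automatically in 0..k-1
        x //= k
    assert x == 0, "input was >= n!"
    return digits

def from_factorial_base(digits):
    """Inverse: digits (a_1, a_2, ...) back to the integer sum a_k * k!."""
    x, fact = 0, 1
    for k, a in enumerate(digits, start=1):
        fact *= k              # fact == k!
        x += a * fact
    return x
```

For instance, `to_factorial_base(5, 3)` gives the digits `[1, 2]`, matching the table $(a_1,a_2)=(1,2)\to 5$ from the question.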
Real world applications of Pythagoras' Theorem I have a school assignment, and it requires me to list a few of the real world applications of Pythagoras Theorem. However, most of the ones I found are rather generic, and not special at all. What are some of the real world applications of Pythagoras' Theorem?
Here is a true life application of the Pythagorean theorem (the 3-dimensional version, which is a corollary of the 2-dimensional version). My wife and I needed to have a long iron rod manufactured for us, to use as a curtain rod. I measured the length $L$ of the rod we wanted. But we forgot to take into account that we live on the 24th floor of an apartment building and therefore the only way the rod could get into our apartment was by coming up the elevator. Would the rod fit in the elevator? My wife measured the height $H$, the width $W$, and the depth $D$ of the elevator box. She then calculated the diagonal of the elevator box by applying the Pythagorean theorem: $\sqrt{H^2 + W^2 + D^2}$. She compared it to $L$, and thankfully, it was greater than $L$. The rod would fit! I would like to say that we realized this problem BEFORE we asked them to manufacture the rod, but that would be a lie. However, at least my wife realized it before the manufacturers arrived at our apartment building with the completed curtain rod, and she quickly did the measurements, and the Pythagorean Theorem calculation, and the comparison. So PHEW, we were saved.
Find the image of a vector by using the standard matrix (for the linear transformation T) Was wondering if anyone can help out with the following problem: Use the standard matrix for the linear transformation $T$ to find the image of the vector $\mathbf{v}$, where $$T(x,y) = (x+y,x-y, 2x,2y),\qquad \mathbf{v}=(3,-3).$$ I found out the standard matrix for $T$ to be: $$\begin{bmatrix}1&1\\1&-1\\2&0\\0&2\end{bmatrix}$$ From here I honestly don't know how to find the "image of the vector $\mathbf{v}$". Does anyone have any suggestions?
The matrix you've written down is correct. If you have a matrix $M$ and a vector $v$, the image of $v$ means $Mv$. Something is a bit funny with the notation in your question. Your matrix is 4x2, so it operates on column vectors of height two (equivalently, 2x1 matrices). But the vector given is a row vector. Still, it seems clear that what you need to calculate is the product $Mv$ that Theo wrote down in the comment. Do you know how to do that?
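Concretely, the product looks like this (a NumPy sketch of the multiplication Theo described):

```python
import numpy as np

M = np.array([[1,  1],
              [1, -1],
              [2,  0],
              [0,  2]])
v = np.array([3, -3])

image = M @ v     # the image of v under T

# Cross-check against evaluating T directly:
# T(3, -3) = (3 + (-3), 3 - (-3), 2*3, 2*(-3)) = (0, 6, 6, -6)
assert np.array_equal(image, np.array([0, 6, 6, -6]))
```
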
Any idea about N-topological spaces? In Bitopological spaces, Proc. London Math. Soc. (3) 13 (1963) 71–89 MR0143169, J.C. Kelly introduced the idea of bitopological spaces. Is there any paper concerning the generalization of this concept, i.e. a space with any number of topologies?
For $n=3$ Google turns up mention of AL-Fatlawee J.K. On paracompactness in bitopological spaces and tritopological spaces, MSc. Thesis, University of Babylon (2006). Asmahan Flieh Hassan at the University of Kufa, also in Iraq, also seems to be interested in tritopological spaces and has worked with a Luay Al-Sweedy at the Univ. of Babylon. This paper by Philip Kremer makes use of tritopological spaces in a study of bimodal logics, as does this paper by J. van Benthem et al., which Kremer cites. In my admittedly limited experience with the area these are very unusual, in that they make use of a tritopological structure to study something else; virtually every other paper that I’ve seen on bi- or tritopological spaces has studied them for their own sake, usually in an attempt to extend topological notions in some reasonably nice way. I’ve seen nothing more general than this.
Why are cluster co-occurrence matrices positive semidefinite? A cluster (aka a partition) co-occurrence matrix $A$ for $N$ points $\{x_1, \dots x_n\}$ is an $N\times N$ matrix that encodes a partitioning of these points into $k$ separate clusters ($k\ge 1$) as follows: $A(i,j) = 1$ if $x_i$ and $x_j$ belong to the same cluster, otherwise $A(i,j) = 0$ I have seen texts that say that $A$ is positive semidefinite. My intuition tells me that this has something to do with the transitive relation encoded in the matrix, i.e.: If $A(i,j) = 1$, and $A(j,k) = 1$, then $A(i,k) = 1$ $\forall (i,j,k)$ But I don't see how the above can be derived from the definition of positive semidefinite matrices, i.e. $z^T A z \ge 0$ $\forall z\in \mathbb{R}^N$ Any thoughts?
....and yet another way to view it: an $n\times n$ matrix whose every entry is 1 is $n$ times the matrix of the orthogonal projection onto the 1-dimensional subspace spanned by a column vector of 1s. Its eigenvalues are therefore $n$, with multiplicity 1, and 0, with multiplicity $n-1$; in particular, none are negative. Now look at $\mathrm{diag}(A,B,C,\ldots)$, where each of $A,B,C,\ldots$ is such a square matrix with each entry equal to 1 (but $A,B,C,\ldots$ are generally of different sizes). After relabeling the points so that each cluster's members are listed consecutively, the co-occurrence matrix has exactly this block-diagonal form, so its eigenvalues are the cluster sizes together with zeros, and a symmetric matrix with non-negative eigenvalues is positive semidefinite. (Relabeling is conjugation by a permutation matrix, which does not affect semidefiniteness.)
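A numerical illustration of this viewpoint (the partition used here is my own example): the eigenvalues of a co-occurrence matrix are the cluster sizes together with zeros, so none are negative:

```python
import numpy as np

# Co-occurrence matrix of a partition of 8 points into
# clusters {0,1,2}, {3,4}, {5,6,7}.
labels = [0, 0, 0, 1, 1, 2, 2, 2]
A = np.array([[1 if li == lj else 0 for lj in labels] for li in labels])

eigs = np.linalg.eigvalsh(A)
assert eigs.min() > -1e-10    # no negative eigenvalues: PSD

# Eigenvalues are the cluster sizes (3, 2, 3) plus zeros,
# exactly as in the block-of-ones picture.
assert sorted(round(e) for e in eigs) == [0, 0, 0, 0, 0, 2, 3, 3]
```
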
Projection of tetrahedron to complex plane It is widely known that: distinct points $a,b,c$ in the complex plane form an equilateral triangle iff $ (a+b+c)^{2}=3(a^{2}+b^{2}+c^{2}). $ New to me is this fact: let $a,b,c,d$ be the images of the vertices of a regular tetrahedron projected to the complex plane, then $(a+b+c+d)^{2}=4(a^{2}+b^{2}+c^{2}+d^{2}).$ I wonder if somebody would come up with an interesting proof, maybe involving the previous statement. What I tried is some analytic geometry, but things get messy enough for me to quit.
As I mentioned in my comment, the tetrahedral formula is invariant under translations, so let's focus on regular tetrahedra conveniently centered at the origin. Let $T$ be the coordinate matrix of such a tetrahedron; that is, the matrix whose columns are the coordinates in $\mathbb{R}^3$ of the tetrahedron's vertices. The columns of the matrix obviously sum to zero, but there's something less-obvious that we can say about the rows: Fact: The rows of $T$ form an orthogonal set of vectors of equal magnitude, $m$. For example (and proof-of-fact), take the tetrahedron that shares vertices with the double-unit cube, for which $m=2$: $$T = \begin{bmatrix}1&1&-1&-1\\1&-1&1&-1\\1&-1&-1&1\end{bmatrix} \hspace{0.25in}\text{so that}\hspace{0.25in} T T^\top=\begin{bmatrix}4&0&0\\0&4&0\\0&0&4\end{bmatrix}=m^2 I$$ Any other origin-centered regular tetrahedron is similar to this one, so its coordinate matrix has the form $S = k Q T$ for some orthogonal matrix $Q$ and some scale factor $k$. Then $$SS^\top = (kQT)(kQT)^\top = k^2 Q T T^\top Q^\top = k^2 Q (m^2 I) Q^\top = k^2 m^2 (Q Q^\top) = k^2 m^2 I$$ demonstrating that the rows of $S$ are also orthogonal and of equal magnitude. (Fact proven.) For the general case, take $T$ as follows $$T=\begin{bmatrix}a_x&b_x&c_x&d_x\\a_y&b_y&c_y&d_y\\a_z&b_z&c_z&d_z\end{bmatrix}$$ Now, consider the matrix $J := \left[1,i,0\right]$. Left-multiplying $T$ by $J$ gives $P$, the coordinate matrix (in $\mathbb{C}$) of the projection of the tetrahedron into the coordinate plane: $$P := J T = \left[a_x+i a_y, b_x+ib_y, c_x+i c_y, d_x + i d_y\right] = \left[a, b, c, d\right]$$ where $a+b+c+d=0$. Observe that $$P P^\top = a^2 + b^2 + c^2 + d^2$$ On the other hand, $$PP^\top = (JT)(JT)^\top = J T T^\top J^\top = m^2 J J^\top = m^2 (1 + i^2) = 0$$ Therefore, $$(a+b+c+d)^2=0=4(a^2 + b^2 + c^2 + d^2)$$ Note: It turns out that the Fact applies to all the Platonic solids ... and most Archimedeans ... 
and a great many other uniforms, including wildly self-intersecting realizations (even in many-dimensional space). The ones for which the Fact fails have slightly-deformed variants for which the Fact succeeds. (The key is that the coordinate matrices of these figures are (right-)eigenmatrices of the vertex adjacency matrix. That is, $TA=\lambda T$. For the regular tetrahedron, $\lambda=-1$; for the cube, $\lambda = 1$; for the great stellated dodecahedron, $\lambda=-\sqrt{5}$; for the small retrosnub icosicosidodecahedron, $\lambda\approx-2.980$ for a pseudo-classical variant whose pentagrammic faces have non-equilateral triangular neighbors.) The argument of my answer works for all "Fact-compliant" origin-centered polyhedra, so that $(\sum p_i)^2 = 0 = \sum p_i^2$ for projected vertices $p_i$. Throwing in a coefficient --namely $n$, the number of vertices-- that guarantees translation-invariance, and we have $$\left( \sum p_i \right)^2 = n \sum p_i^2$$
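A numerical check of the argument (a NumPy sketch; the rotation, scale, and translation are randomized, which also exercises the translation-invariance remark from the comment):

```python
import numpy as np

rng = np.random.default_rng(1)

# Regular tetrahedron inscribed in the double-unit cube (the matrix T above).
T = np.array([[1,  1, -1, -1],
              [1, -1,  1, -1],
              [1, -1, -1,  1]], dtype=float)
assert np.allclose(T @ T.T, 4 * np.eye(3))   # rows orthogonal, magnitude m = 2

# Random rotation (QR of a random matrix yields an orthogonal Q),
# random scale, random translation.
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
S = 2.7 * Q @ T + rng.standard_normal((3, 1))

# Project to the complex plane: each vertex becomes x + i*y.
p = S[0] + 1j * S[1]

lhs = p.sum() ** 2
rhs = 4 * (p ** 2).sum()
assert abs(lhs - rhs) < 1e-9
```
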
A property of the totient function Let $\ m\ge3$, and let $\ a_i$ be the natural numbers less than or equal to $\ m$ that are coprime to $\ m$ put in the following order: $$\ a_1<a_2<\cdots<a_\frac{\phi(m)}{2}\le \frac{m}{2}\le a_{\frac{\phi(m)}{2}+1}<a_{\frac{\phi(m)}{2}+2}<\cdots<a_{\phi(m)}.$$ If $\ a_{\frac{\phi(m)}{2}}>\frac{m}{2}$ and $\ a_{\frac{\phi(m)}{2}+1}\ge\frac{m}{2}$ then $\ a_{\frac{\phi(m)}{2}}+a_{\frac{\phi(m)}{2}+1}>m$ which is wrong. If $\ a_{\frac{\phi(m)}{2}}\le\frac{m}{2}$ and $\ a_{\frac{\phi(m)}{2}+1}<\frac{m}{2}$ then $\ a_{\frac{\phi(m)}{2}}+a_{\frac{\phi(m)}{2}+1}<m$ which is wrong. If $\ a_{\frac{\phi(m)}{2}}>\frac{m}{2}$ and $\ a_{\frac{\phi(m)}{2}+1}<\frac{m}{2}$ then $\ a_{\frac{\phi(m)}{2}+1}<a_{\frac{\phi(m)}{2}}$ which is wrong. So $\ a_{\frac{\phi(m)}{2}}>\frac{m}{2}$ or $\ a_{\frac{\phi(m)}{2}+1}<\frac{m}{2}$ is wrong, $\ a_{\frac{\phi(m)}{2}}\le\frac{m}{2}$ and $\ a_{\frac{\phi(m)}{2}+1}\ge\frac{m}{2}$ is true, and it gives the result. Does this proof work?
Your proof is correct, but you should clearly indicate where the proof starts and that you are using the result on the sum of two symmetric elements in the proof.
Multiplicative inverses for elements in field How to compute multiplicative inverses for elements in any simple (not extended) finite field? I mean an algorithm which can be implemented in software.
In both cases one may employ the extended Euclidean algorithm to compute inverses. See here for an example. Alternatively, employ repeated squaring to compute $\rm\:a^{-1} = a^{q-2}\:$ for $\rm\:a \in \mathbb F_q^*\:,\:$ which is conveniently recalled by writing the exponent in binary Horner form. A useful reference is Knuth: TAoCP, vol 2: Seminumerical Algorithms.
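Both methods fit in a few lines of Python for a prime field $\mathbb F_p$ (a sketch; the function name is mine):

```python
def inverse_mod(a, p):
    """Inverse of a modulo a prime p via the extended Euclidean algorithm.

    Invariant maintained below: old_s * a == old_r (mod p)."""
    old_r, r = a % p, p
    old_s, s = 1, 0
    while r != 0:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_s, s = s, old_s - q * s
    assert old_r == 1, "a is not invertible mod p"
    return old_s % p
```

The repeated-squaring route $a^{-1}=a^{p-2}$ is what Python's built-in `pow(a, p - 2, p)` computes (and since Python 3.8, `pow(a, -1, p)` uses the Euclidean route directly).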
Connection Between Automorphism Groups of a Graph and its Line Graph First, the specific case I'm trying to handle is this: I have the graph $\Gamma = K_{4,4}$. I understand that its automorphism group is the wreath product $S_4 \wr S_2$ and thus a group of order $24\cdot24\cdot2=1152$. My goal is to find the order of the AUTOMORPHISM GROUP of the line graph $L(\Gamma)$, that is, $|\mathrm{Aut}(L(\Gamma))|$. I used GAP and I already know that the answer is 4608, which just happens to be $4\cdot1152$. I guess this isn't a coincidence. Is there some sort of an argument which can give me this result theoretically? Also, I would use this thread to ask for information on this problem in general (the connection between automorphism groups of a graph and its line graph). I suppose that there is no general-case theorem. I was told by one of the professors in my department that "for a lot of cases, there is a general rule of thumb that works", although no more details were supplied. If anyone has an idea what he was referring to, I'd be happy to know. Thanks in advance, Lost_DM
If $G$ is a graph with minimum valency (degree) 4, then $\mathrm{Aut}(G)$ is isomorphic as an abstract group to $\mathrm{Aut}(L(G))$. See Godsil & Royle, Algebraic Graph Theory, exercise 1.15. The proof is not too hard.
Determine limit of |a+b| This is a simple problem I am having a bit of trouble with. I am not sure where this leads. Given that $\vec a = \begin{pmatrix}4\\-3\end{pmatrix}$ and $|\vec b| = 3$, determine the limits between which $|\vec a + \vec b|$ must lie. Let $\vec b = \begin{pmatrix}\lambda\\\mu\end{pmatrix}$, such that $\lambda^2 + \mu^2 = 9$. Then, $$ \begin{align} \vec a + \vec b &= \begin{pmatrix}4+\lambda\\-3 + \mu\end{pmatrix}\\ |\vec a + \vec b| &= \sqrt{(4+\lambda)^2 + (\mu - 3)^2}\\ &= \sqrt{\lambda^2 + \mu^2 + 8\lambda - 6\mu + 25}\\ &= \sqrt{8\lambda - 6\mu + 34} \end{align} $$ Then I assumed $8\lambda - 6\mu + 34 \ge 0$. This is as far as I have gotten. I tried solving the inequality, but it doesn't have any real roots? Can you guys give me a hint? Thanks.
We know that $\|a\|=5$, $\|b\|=3$, and we have two vector formulas $$ \|a+b\|^2=\|a\|^2+2(a\cdot b)+\|b\|^2,$$ $$ a\cdot b = \|a\| \|b\| \cos\theta.$$ Combining all this, we have $$\|a+b\|^2 = (5^2+3^2)+2(5)(3)\cos\theta.$$ Cosine's maximum and minimum values are $+1$ and$-1$, so we have $$\|a+b\|^2 \in [4,64]$$ $$\|a+b\| \in [2,8].$$
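A quick numerical confirmation (sampling $\vec b$ densely around the circle of radius $3$):

```python
import numpy as np

a = np.array([4.0, -3.0])
theta = np.linspace(0, 2 * np.pi, 100001)
b = 3 * np.vstack([np.cos(theta), np.sin(theta)])   # all b with |b| = 3

lengths = np.linalg.norm(a[:, None] + b, axis=0)
# The sampled min/max approach 2 and 8, attained when b is
# anti-parallel / parallel to a.
```
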
A periodic decimal expansion Let us suppose that $\{\alpha_{n}\}_{n \in \mathbb{N}}$ is a strictly increasing sequence of natural numbers and that the number obtained by concatenating the decimal representations of the elements of $\{\alpha_{n}\}_{n \in \mathbb{N}}$ after the decimal point, i.e., $0.\alpha_{1}\alpha_{2}\alpha_{3}\ldots$ has period $s$ (e.g., $0.12 \mathbf{12} \mathrm{121212}...$ has period 2). If $a_{k}$ denotes the number of elements in $\{\alpha_{n}\}_{n \in \mathbb{N}}$ with exactly $k$ digits in their decimal representation, does the inequality $a_{k} \leq s$ always hold? What would be, in your opinion, the right way to approach this question? I've tried a proof by exhaustion without much success. I'd really appreciate any (self-contained) hints you can provide me with.
If the period is $s$ then there are essentially $s$ starting places in the recurring decimal for a $k$-digit integer - begin at the first digit of the decimal, the second etc - beyond $s$ you get the same numbers coming round again. If you had $a_k > s$ then two of your $\alpha_n$ with $k$ digits would be the same by the pigeonhole principle.
eigenvalues of certain block matrices This question inquired about the determinant of this matrix: $$ \begin{bmatrix} -\lambda &1 &0 &1 &0 &1 \\ 1& -\lambda &1 &0 &1 &0 \\ 0& 1& -\lambda &1 &0 &1 \\ 1& 0& 1& -\lambda &1 &0 \\ 0& 1& 0& 1& -\lambda &1 \\ 1& 0& 1& 0&1 & -\lambda \end{bmatrix} $$ and of other matrices in a sequence to which it belongs. In a comment I mentioned that if we permute the indices 1, 2, 3, 4, 5, 6 to put the odd ones first and then the even ones, thus 1, 3, 5, 2, 4, 6, then we get this: $$ \begin{bmatrix} -\lambda & 0 & 0 & 1 & 1 & 1 \\ 0 & -\lambda & 0 & 1 & 1 & 1 \\ 0 & 0 & -\lambda & 1 & 1 & 1 \\ 1 & 1 & 1 & -\lambda & 0 & 0 \\ 1 & 1 & 1 & 0 & -\lambda & 0 \\ 1 & 1 & 1 & 0 & 0 & -\lambda \end{bmatrix} $$ So this is of the form $$ \begin{bmatrix} A & B \\ B & A \end{bmatrix} $$ where $A$ and $B$ are symmetric matrices whose characteristic polynomials and eigenvalues are easily found, even if we consider not this one case of $6\times 6$ matrices, but arbitrarily large matrices following the same pattern. Are there simple formulas for determinants, characteristic polynomials, and eigenvalues for matrices of this latter kind? I thought of the Haynesworth inertia additivity formula because I only vaguely remembered what it said. But apparently it only counts positive, negative, and zero eigenvalues.
Because the subblocks of the second matrix (let's call it $C$) commute i.e. AB=BA, you can use a lot of small lemmas given, for example here. And also you might consider the following elimination: Let $n$ be the size of $A$ or $B$ and let,(say for $n=4$) $$ T = \left(\begin{array}{cccccccc} 1 &0 &0 &0 &0 &0 &0 &0\\ 0 &0 &0 &0 &1 &0 &0 &0\\ -1 &1 &0 &0 &0 &0 &0 &0\\ -1 &0 &1 &0 &0 &0 &0 &0\\ -1 &0 &0 &1 &0 &0 &0 &0\\ 0 &0 &0 &0 &-1 &1 &0 &0\\ 0 &0 &0 &0 &-1 &0 &1 &0\\ 0 &0 &0 &0 &-1 &0 &0 &1 \end{array} \right) $$ Then , $TCT^{-1}$ gives $$ \hat{C} = \begin{pmatrix}-\lambda &n &\mathbf{0} &\mathbf{1} \\n &-\lambda &\mathbf{1} &\mathbf{0}\\ & &-\lambda I &0\\&&0&-\lambda I \end{pmatrix} $$ from which you can identify the upper triangular block matrix. The bold face numbers indicate the all ones and all zeros rows respectively. $(1,1)$ block is the $2\times 2$ matrix and $(2,2)$ block is simply $-\lambda I$. EDIT: So the eigenvalues are $(-\lambda-n),(-\lambda+n)$ and $-\lambda$ with multiplicity of $2(n-1)$. Thus the determinant is also easy to compute, via their product.
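The prediction is easy to test numerically for, say, $n=3$ (a NumPy check of the eigenvalues and of the resulting determinant formula $\det C = (\lambda^2 - n^2)\,\lambda^{2(n-1)}$; set $\lambda = 0$ in $C$ to get the plain block matrix $M$ below):

```python
import numpy as np

n = 3
J = np.ones((n, n))
M = np.block([[np.zeros((n, n)), J], [J, np.zeros((n, n))]])

# Eigenvalues of M: +n, -n, and 0 with multiplicity 2(n - 1).
eigs = np.sort(np.linalg.eigvalsh(M))
assert np.allclose(eigs, [-3, 0, 0, 0, 0, 3])

# Hence det(M - lam*I) = (lam^2 - n^2) * lam^(2n - 2); spot-check at lam = 2.
lam = 2.0
det = np.linalg.det(M - lam * np.eye(2 * n))
assert np.isclose(det, (lam**2 - n**2) * lam ** (2 * n - 2))
```
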
Is there a geometric meaning of the Frobenius norm? I have a positive definite matrix $A$. I am going to choose its Frobenius norm $\|A\|_F^2$ as a cost function and then minimize $\|A\|_F^2$. But I think I need to find a reason to convince people it is reasonable to choose $\|A\|_F^2$ as a cost function. So I'm wondering if there are some geometric meanings of the Frobenius norm. Thanks. Edit: here $A$ is a 3 by 3 matrix. In the problem I'm working on, people usually choose $\det A$ as a cost function since $\det A$ has an obvious geometric interpretation: the volume of the parallelepiped determined by $A$. Now I want to choose $\|A\|_F^2$ as a cost function because of the good properties of $\|A\|_F^2$. That's why I am interested in the geometric meaning of $\|A\|_F^2$.
In three dimensions (easier to visualize) we know that the scalar triple product of three vectors, say $a, b, c$, is the determinant of a matrix with those vectors as columns and the modulus is the volume of the parallelepiped spanned by $a, b$ and $c$. The squared Frobenius norm is the average squared length of the four space diagonals of the parallelepiped. This can easily be shown. The diagonals are: $d_1 = a + b + c\\ d_2 = a + b - c\\ d_3 = b + c - a\\ d_4 = c + a - b.$ Calculate and sum their squared lengths as $d_1^T d_1 + d_2^T d_2 + d_3^T d_3 + d_4^T d_4.$ Things cancel nicely and one is left with $ 4 ( a^T a + b^T b + c^T c)$ which is exactly four times the square of the Frobenius norm. The proof in more dimensions is along the same lines, just more sides and diagonals. The squared Frobenius norm of the Jacobian of a mapping from $\mathbb{R}^m$ to $\mathbb{R}^n$ is used, when it is desired that reductions in volume under the mapping shall be favoured in a minimization task. Because of its form, it is much easier to differentiate the squared Frobenius norm, than any other measure which quantifies the volume change, such as the modulus of the determinant of the Jacobian (which can only be used if $m=n$).
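The diagonal identity is easy to verify numerically (a NumPy sketch with random side vectors; the variable names are my own):

```python
import numpy as np

rng = np.random.default_rng(0)
a, b, c = rng.standard_normal((3, 3))   # three random 3-vectors, the "sides"

A = np.column_stack([a, b, c])
diagonals = [a + b + c, a + b - c, b + c - a, c + a - b]
avg_sq = np.mean([d @ d for d in diagonals])

# Average squared diagonal length equals the squared Frobenius norm.
assert np.isclose(avg_sq, np.linalg.norm(A, 'fro') ** 2)
```
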
Order of solving definite integrals I've been coming across several definite integrals in my homework where the solving order is flipped, and am unsure why. Currently, I'm working on calculating the area between both intersecting and non-intersecting graphs. According to the book, the formula for finding the area bounded by two graphs is $$A=\int_{a}^{b}f(x)-g(x) \mathrm dx$$ For example, given $f(x)=x^3-3x^2+3x$ and $g(x)=x^2$, you can see that the intersections are $x={0, 1, 3}$ by factoring. So, at first glance, it looks as if the problem is solved via $$\int_0^1f(x)-g(x)\mathrm dx+\int_1^3f(x)-g(x)\mathrm dx$$ However, when I solved using those integrals, the answer didn't match the book answer, so I took another look at the work. According to the book, the actual integral formulas are $$\int_0^1f(x)-g(x)\mathrm dx+\int_1^3g(x)-f(x)\mathrm dx$$ I was a little curious about that, so I put the formulas in a grapher and it turns out that $f(x)$ and $g(x)$ flip values at the intersection $x=1.$ So how can I determine which order to place the $f(x)$ and $g(x)$ integration order without using a graphing utility? Is it dependent on the intersection values?
You are, I hope, not quoting your calculus book correctly. The correct result is: Suppose that $f(x) \ge g(x)$ in the interval from $x=a$ to $x=b$. Then the area of the region between the curves $y=f(x)$ and $y=g(x)$, from the line $x=a$ to the line $x=b$, is equal to $$\int_a^b(f(x)-g(x))\,dx.$$ The condition $f(x)-g(x) \ge 0$ is essential here. In your example, from $x=0$ to $x=1$ we have $f(x) \ge g(x)$, so the area from $0$ to $1$ is indeed $\int_0^1 (f(x)-g(x))\, dx$. However, from $x=1$ to $x=3$, we have $f(x) -g(x) \le 0$, the curve $y=g(x)$ lies above the curve $y=f(x)$. So the area of the region between the two curves, from $x=1$ to $x=3$, is $\int_1^3(g(x)-f(x))\,dx$. To find the full area, add up. Comment: When you calculate $\int_a^b h(x)\,dx$, the integral cheerfully "adds up" and does not worry about whether the things it is adding up are positive or negative. This often gives exactly the answer we need. For example, if $h(t)$ is the velocity at time $t$, then $\int_a^bh(t)\,dt$ gives the net displacement (change of position) as time goes from $a$ to $b$. The integral takes account of the fact that when $h(t)<0$, we are going "backwards." If we wanted the total distance travelled, we would have to treat the parts where $h(t) \le 0$ and the parts where $h(t)\ge 0$ separately, just as we had to in the area case. For determining where $f(x)-g(x)$ is positive, negative, we can let $h(x)=f(x)-g(x)$, and try to find where $h(x)$ is positive, negative. A continuous function $h(x)$ can only change sign where $h(x)=0$. (It need not change sign there. For example, if $h(x)=(x-1)^2$, then $h(1)=0$, but $h(x)$ does not change sign at $x=1$.) If the solutions of $h(x)=0$ are easy to find, we can quickly determine all the places where there might be a change of sign, and the rest is straightforward. Otherwise, a numerical procedure has to be used to approximate the roots.
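The bookkeeping for this example can be done exactly with rational arithmetic (a sketch; `F` is my name for an antiderivative of $f-g$):

```python
from fractions import Fraction

def F(x):
    """Antiderivative of f(x) - g(x) = x^3 - 4x^2 + 3x."""
    x = Fraction(x)
    return x**4 / 4 - 4 * x**3 / 3 + 3 * x**2 / 2

# f >= g on [0,1], g >= f on [1,3]: flip the sign on the second piece.
area = (F(1) - F(0)) + (-(F(3) - F(1)))
assert area == Fraction(37, 12)

# The naive signed integral over [0,3] misses the sign flip at x = 1.
net = F(3) - F(0)
assert net == Fraction(-9, 4)
```

So the total area between the curves is $\frac{37}{12}$, while the signed integral of $f-g$ over $[0,3]$ gives $-\frac94$; the sign flip at $x=1$ matters.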
Combinatorial proof Using the notion of the derivative, it follows from Taylor's formula that $$e^x=\sum_{k=0}^{\infty}\frac{x^k}{k!}$$ Is there any elementary combinatorial proof of this formula? Here is my proof for $x$ a natural number. Denote by $P_k^m$ the number of $k$-permutations with unlimited repetition of elements from an $m$-set; then we can prove that $$P_k^m=\sum_{r_0+r_1+...+r_{m-1}=k}\frac{k!}{r_0 !...r_{m-1}!}$$ It is also valid that $$P_k^m=m^k$$ Based on the first formula we can derive that $$\sum_{k=0}^{\infty}P_k^m\frac{x^k}{k!}=\left(\sum_{k=0}^{\infty}\frac{x^k}{k!}\right)^m$$ and from the second formula $$\sum_{k=0}^{\infty}P_k^m\frac{x^k}{k!}=\sum_{k=0}^{\infty}\frac{(mx)^k}{k!}$$ Now it is clear that $$\sum_{k=0}^{\infty}\frac{(mx)^k}{k!}=\left(\sum_{k=0}^{\infty}\frac{x^k}{k!}\right)^m$$ From the last equation with $x=1$, taking into account that $$\sum_{k=0}^{\infty}\frac{1}{k!}=e=2.71828...$$ we finally have that for a natural number $m$ the formula $$e^m=\sum_{k=0}^{\infty}\frac{m^k}{k!}$$ is valid.
We will handle $x>0$ here. If we define $e=\lim_{n\to\infty}\left(1+\frac{1}{n}\right)^n$, then $e^x=\lim_{n\to\infty}\left(1+\frac{1}{n}\right)^{nx}$. Note that since $0\le nx-\lfloor nx\rfloor<1$, $$ \begin{align} e^x&=\lim_{n\to\infty}\left(1+\frac{1}{n}\right)^{nx}\\ &=\lim_{n\to\infty}\left(1+\frac{1}{n}\right)^{\lfloor nx\rfloor} \left(1+\frac{1}{n}\right)^{nx-\lfloor nx\rfloor}\\ &=\lim_{n\to\infty}\left(1+\frac{1}{n}\right)^{\lfloor nx\rfloor} \lim_{n\to\infty}\left(1+\frac{1}{n}\right)^{nx-\lfloor nx\rfloor}\\ &=\lim_{n\to\infty}\left(1+\frac{1}{n}\right)^{\lfloor nx\rfloor} \end{align} $$ Using the binomial theorem, $$ \begin{align} \left(1+\frac{1}{n}\right)^{\lfloor nx\rfloor} &=\sum_{k=0}^{\lfloor nx\rfloor} \frac{1}{k!}\frac{P({\lfloor nx\rfloor},k)}{n^k}\\ &=\sum_{k=0}^\infty \frac{1}{k!}\frac{P({\lfloor nx\rfloor},k)}{n^k} \end{align} $$ Where $P(n,k)=n(n-1)(n-2)...(n-k+1)$ is the number of permutations of $n$ things taken $k$ at a time. Note that $0\le\frac{P({\lfloor nx\rfloor},k)}{n^k}\le x^k$ and that $\sum_{k=0}^\infty \frac{x^k}{k!}$ converges absolutely. Thus, if we choose an $\epsilon>0$, we can find an $N$ large enough so that, for all $n$, $$ 0\le\sum_{k=N}^\infty \frac{1}{k!}\left(x^k-\frac{P({\lfloor nx\rfloor},k)}{n^k}\right)\le\frac{\epsilon}{2} $$ Furthermore, note that $\lim_{n\to\infty}\frac{P({\lfloor nx\rfloor},k)}{n^k}=x^k$. 
Therefore, we can choose an $n$ large enough so that $$ 0\le\sum_{k=0}^{N-1} \frac{1}{k!}\left(x^k-\frac{P({\lfloor nx\rfloor},k)}{n^k}\right)\le\frac{\epsilon}{2} $$ Thus, for n large enough, $$ 0\le\sum_{k=0}^\infty \frac{1}{k!}\left(x^k-\frac{P({\lfloor nx\rfloor},k)}{n^k}\right)\le\epsilon $$ Therefore, $$ \lim_{n\to\infty}\;\sum_{k=0}^\infty\frac{1}{k!}\frac{P({\lfloor nx\rfloor},k)}{n^k}=\sum_{k=0}^\infty\frac{x^k}{k!} $$ Summarizing, we have $$ \begin{align} e^x&=\lim_{n\to\infty}\left(1+\frac{1}{n}\right)^{nx}\\ &=\lim_{n\to\infty}\left(1+\frac{1}{n}\right)^{\lfloor nx\rfloor}\\ &=\lim_{n\to\infty}\;\sum_{k=0}^\infty \frac{1}{k!}\frac{P({\lfloor nx\rfloor},k)}{n^k}\\ &=\sum_{k=0}^\infty\frac{x^k}{k!} \end{align} $$
Is this Batman equation for real? HardOCP has an image with an equation which apparently draws the Batman logo. Is this for real? Batman Equation in text form: \begin{align} &\left(\left(\frac x7\right)^2\sqrt{\frac{||x|-3|}{|x|-3}}+\left(\frac y3\right)^2\sqrt{\frac{\left|y+\frac{3\sqrt{33}}7\right|}{y+\frac{3\sqrt{33}}7}}-1 \right) \\ &\qquad \qquad \left(\left|\frac x2\right|-\left(\frac{3\sqrt{33}-7}{112}\right)x^2-3+\sqrt{1-(||x|-2|-1)^2}-y \right) \\ &\qquad \qquad \left(3\sqrt{\frac{|(|x|-1)(|x|-.75)|}{(1-|x|)(|x|-.75)}}-8|x|-y\right)\left(3|x|+.75\sqrt{\frac{|(|x|-.75)(|x|-.5)|}{(.75-|x|)(|x|-.5)}}-y \right) \\ &\qquad \qquad \left(2.25\sqrt{\frac{(x-.5)(x+.5)}{(.5-x)(.5+x)}}-y \right) \\ &\qquad \qquad \left(\frac{6\sqrt{10}}7+(1.5-.5|x|)\sqrt{\frac{||x|-1|}{|x|-1}} -\frac{6\sqrt{10}}{14}\sqrt{4-(|x|-1)^2}-y\right)=0 \end{align}
The 'Batman equation' above relies on an artifact of the plotting software used which blithely ignores the fact that the value $\sqrt{\frac{|x|}{x}}$ is undefined when $x=0$. Indeed, since we’re dealing with real numbers, this value is really only defined when $x>0$. It seems a little ‘sneaky’ to rely on the solver to ignore complex values and also to conveniently ignore undefined values. A nicer solution would be one that is unequivocally defined everywhere (in the real, as opposed to complex, world). Furthermore, a nice solution would be ‘robust’ in that small variations (such as those arising from, say, roundoff) would perturb the solution slightly (as opposed to eliminating large chunks). Try the following in Maxima (actually wxmaxima) which is free. The resulting plot is not quite as nice as the plot above (the lines around the head don’t have that nice ‘straight line’ look), but seems more ‘legitimate’ to me (in that any reasonable solver should plot a similar shape). Please excuse the code mess. /* [wxMaxima batch file version 1] [ DO NOT EDIT BY HAND! ]*/ /* [ Created with wxMaxima version 0.8.5 ] */ /* [wxMaxima: input start ] */ load(draw); /* [wxMaxima: input end ] */ /* [wxMaxima: input start ] */ f(a,b,x,y):=a*x^2+b*y^2; /* [wxMaxima: input end ] */ /* [wxMaxima: input start ] */ c1:sqrt(26); /* [wxMaxima: input end ] */ /* [wxMaxima: input start ] */ draw2d(implicit( f(1/36,1/9,x,y) +max(0,2-f(1.5,1,x+3,y+2.7)) +max(0,2-f(1.5,1,x-3,y+2.7)) +max(0,2-f(1.9,1/1.7,(5*(x+1)+(y+3.5))/c1,(-(x+1)+5*(y+3.5))/c1)) +max(0,2-f(1.9,1/1.7,(5*(x-1)-(y+3.5))/c1,((x-1)+5*(y+3.5))/c1)) +max(0,2-((1.1*(x-2))^4-(y-2.1))) +max(0,2-((1.1*(x+2))^4-(y-2.1))) +max(0,2-((1.5*x)^8-(y-3.5))) -1, x,-6,6,y,-4,4)); /* [wxMaxima: input end ] */ /* Maxima can't load/batch files which end with a comment! */ "Created with wxMaxima"$ The resulting plot is: (Note that this is, more or less, a copy of the entry I made on http://blog.makezine.com.)
Integer partition with fixed number of summands but without order For a fixed $n$ and $M$, I am interested in the number of unordered non-negative integer solutions to $$\sum_{i = 1}^n a_i = M$$ Or, in other words, I am interested in the number of solutions with distinct numbers. For $n = 2$ and $M = 5$, I would consider solutions $(1,4)$ and $(4,1)$ equivalent, and choose the solution with $a_1 \ge a_2 \ge ... \ge a_n \ge 0$ as the representative of the class of equivalent solutions. I know how to obtain the number of total, ordered, solutions with the "stars and bars" method. But unfortunately, I cannot just divide the result by $n!$ since that would only work if all the $a_i$ are distinct.
Let the number of partitions be $P_n(M)$. By looking at the smallest number in the partition, $a_n$, we get a recurrence for $P_n(M)$: $$ P_n(M) = P_{n-1}(M) + P_{n-1}(M-n) + P_{n-1}(M-2n) + P_{n-1}(M-3n) + ... $$ Where $P_n(0)=1$ and $P_n(M)=0$ for $M<0$. The first term in the sum above comes from letting $a_n=0$, the second term from $a_n=1$, the third from $a_n=2$, etc. Now let's look at $g_n$, the generating function for $P_n$: $$ g_n(x) = \sum_{M=0}^\infty P_n(M)\;x^M $$ Plugging the recurrence above into this sum yields a recurrence for $g_n$: \begin{align} g_n(x)&=\sum_{M=0}^\infty P_n(M)\;x^M\\ &=\sum_{M=0}^\infty (P_{n-1}(M) + P_{n-1}(M-n) + P_{n-1}(M-2n) + P_{n-1}(M-3n) + ...)\;x^M\\ &=\sum_{M=0}^\infty P_{n-1}(M)\;x^M\;(1+x^n+x^{2n}+x^{3n}+...)\\ &=g_{n-1}(x)/(1-x^n) \end{align} Note that $P_0(0)=1$ and $P_0(M)=0$ for $M>0$. This means that $g_0(x)=1$. Combined with the recurrence for $g_n$, we get that $$ g_n(x)=1/(1-x)/(1-x^2)/(1-x^3)/.../(1-x^n) $$ For example, if $n=1$, we get $$ g_1(x) = 1/(1-x) = 1+x+x^2+x^3+... $$ Thus, $P_1(M) = 1$ for all $M$. If $n=2$, we get \begin{align} g_2(x) &= 1/(1-x)/(1-x^2)\\ &= 1+x+2x^2+2x^3+3x^4+3x^5+4x^6+4x^7+5x^8+5x^9+6x^{10}+... \end{align} Thus, $P_2(7)=4$, $P_2(10)=6$, etc.
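The recurrence is easy to turn into code. Here is a short Python sketch (names are my own) that memoizes $P_n(M)$ exactly as derived: the smallest part $a_n = j$ contributes $P_{n-1}(M - jn)$.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def P(n, M):
    """Unordered non-negative integer solutions of a_1 + ... + a_n = M."""
    if M == 0:
        return 1
    if M < 0 or n == 0:
        return 0
    # smallest part a_n = j: subtract j from all n parts -> P_{n-1}(M - j*n)
    return sum(P(n - 1, M - j * n) for j in range(M // n + 1))

print(P(2, 7), P(2, 10))   # 4 6, matching the generating-function coefficients
```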
Bipartite graph cover problem Let $G$ be bipartite with bipartition $A$, $B$. Suppose that $C$ and $C'$ are both covers of $G$. Prove that $C^{\wedge}$ = $(A \cap C \cap C') \cup (B \cap (C \cup C'))$ is also a cover of $G$. Does anyone know which theorem is useful for proving this statement?
This statement is fairly easy to prove without appealing to any special theorems. It might be useful to rewrite the statement Every edge in $G$ has an endvertex in $C''=(A\cap C\cap C')\cup (B\cap(C\cup C'))$, which you are trying to prove, as the equivalent statement If an edge $e\in E(G)$ has no endvertex in $B\cap (C\cup C')$, then it has an endvertex in $A\cap C\cap C'$. Hint: every edge $e$ has an endvertex in $A$, an endvertex in $B$, and at least one endvertex in each of $C$ and $C'$. Hope this helps!
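The claim can also be sanity-checked by brute force. The following Python sketch (my own, not part of the answer) builds random covers of random bipartite graphs and verifies that the combined set is always a cover:

```python
import random

def make_cover(vertices, edges):
    """Extend a random vertex subset greedily until it covers every edge."""
    C = {v for v in vertices if random.random() < 0.5}
    for u, v in edges:
        if u not in C and v not in C:
            C.add(random.choice((u, v)))
    return C

def is_cover(C, edges):
    return all(u in C or v in C for u, v in edges)

random.seed(0)
for _ in range(200):
    A = {('a', i) for i in range(5)}
    B = {('b', i) for i in range(5)}
    edges = [(u, v) for u in A for v in B if random.random() < 0.4]
    C  = make_cover(A | B, edges)
    Cp = make_cover(A | B, edges)
    C2 = (A & C & Cp) | (B & (C | Cp))
    assert is_cover(C2, edges)
print("no counterexample found")
```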
Notation/name for the number of times a number can be exactly divided by $2$ (or a prime $p$) I am using this simple snippet of code, variants of which I have seen in many places: for(int k = 0 ; n % 2 == 0 ; k++) n = n / 2; This code repeatedly divides n by 2 until it is odd, and on completion k contains the number of divisions performed. I am wondering what the appropriate way to write this using mathematical notation is? Does this correspond to some named concept? Of course, $\lg n$ gives the appropriate $k$ when $n$ is a power of 2, but not for anything else. For example, $k = 1$ when $n = 6$ and $k = 0$ when $n$ is odd. So it looks like it should be specified using a piecewise function, but there may be some mathematical concept or nomenclature here that I am not aware of...
You might call it the "highest power of $2$ dividing $n$," but I'm not aware of any snazzy standalone term for such a thing. However, I have seen it notated succinctly as $2^k\|n$, which means $2^k$ divides into $n$ but $2^{k+1}$ does not.
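(In number-theoretic texts this quantity also appears as the $2$-adic valuation, written $\nu_2(n)$ or $\operatorname{ord}_2(n)$.) In code, the base-$2$ case has a well-known bit trick; here is a small Python sketch (function names are my own):

```python
def v2(n):
    """The k from the loop in the question: exponent of 2 in n, for n >= 1."""
    return (n & -n).bit_length() - 1    # n & -n isolates the lowest set bit

def vp(n, p):
    """Same idea for an arbitrary prime p: largest k with p**k dividing n."""
    k = 0
    while n % p == 0:
        n //= p
        k += 1
    return k

print(v2(6), v2(7), v2(1024))   # 1 0 10
```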
non time constructible functions A function $T: \mathbb N \rightarrow \mathbb N$ is time constructible if $T(n) \geq n$ and there is a $TM$ $M$ that computes the function $x \mapsto \llcorner T(\vert x\vert) \lrcorner$ in time $T(n)$. ($\llcorner T(\vert x\vert) \lrcorner$ denotes the binary representation of the number $T(\vert x\vert)$.) Examples of time-constructible functions are $n$, $n\log n$, $n^2$, $2^n$. Time bounds that are not time constructible can lead to anomalous results. --Arora, Barak. This is the definition of time-constructible functions in Computational Complexity - A Modern Approach by Sanjeev Arora and Boaz Barak. It is hard to find valid examples of non-time-constructible functions. $f(n)=c$ is an example of a non-time-constructible function. What more (sophisticated) examples are out there?
You can use the time hierarchy theorem to create a counter-example. You know by the THT that DTIME($n$)$\subsetneq$DTIME($n^2$), so pick a language which is in DTIME($n^2$)$\backslash$DTIME($n$). For this language you have a Turing machine $A$ which decides if a string is in the language in time $O(n^2)$ and $\omega(n)$. You can now define the following function: $$f(n) = 2\cdot n+A(n)$$ If $f$ were time constructible, it would contradict the THT.
Chord dividing circle, function Two chords $PA$ and $PB$ divide a circle into three parts. The angle $PAB$ is a root of $f(x)=0$. Find $f(x)$. Clearly, "$PA$ and $PB$ divide the circle into three parts" means they divide it into $3$ parts of equal area. How can I find $f(x)$ then? thanks
You may assume your circle to be the unit circle in the $(x,y)$-plane and $P=(1,0)$. If the three parts have to have equal area then $A=\bigl(\cos(2\phi),\sin(2\phi)\bigr)$ and $B=\bigl(\cos(2\phi),-\sin(2\phi)\bigr)$ for some $\phi\in\ ]0,{\pi\over2}[\ $. Calculating the area over the segment $PA$ gives the condition $$2\Bigl({\phi\over2}-{1\over2}\cos\phi\sin\phi\Bigr)={\pi\over3}\ ,$$ or $f(\phi):=\phi-\cos\phi\sin\phi-{\pi\over 3}=0$. This equation has to be solved numerically. One finds $\phi\doteq1.30266$.
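Since $f'(\phi)=1-\cos 2\phi\ge 0$ on $]0,\pi/2[$, the root is easy to find by bisection. A small Python sketch (my own) reproducing the numerical value:

```python
from math import sin, cos, pi

def f(phi):
    # f(phi) = phi - cos(phi)*sin(phi) - pi/3, as derived above
    return phi - cos(phi) * sin(phi) - pi / 3

lo, hi = 0.0, pi / 2          # f(0) = -pi/3 < 0 and f(pi/2) = pi/6 > 0
for _ in range(60):
    mid = (lo + hi) / 2
    if f(mid) < 0:
        lo = mid
    else:
        hi = mid

phi = (lo + hi) / 2
print(phi)   # ~1.30266
```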
(k+1)th, (k+1)st, k-th+1, or k+1? (Inspired by a question already at english.SE) This is more of a terminological question than a purely mathematical one, but can possibly be justified mathematically or simply by what common practice is. The question is: When pronouncing ordinals that involve variables, how does one deal with 'one': is it pronounced 'one-th' or 'first'? For example, how do you pronounce the ordinal corresponding to $k+1$? There is no such term in mathematics as 'infinityeth' (one uses $\omega$, with no affix), but if there were, the successor would be pronounced 'infinity plus oneth'. Which is also 'not a word'. So then how does one pronounce '$\omega + 1$', which is an ordinal? I think it is simply 'omega plus one' (no suffix, and not 'omega plus oneth' nor 'omega plus first'). So how is the ordinal corresponding to $k+1$ pronounced? * *'kay plus oneth' *'kay plus first' *'kay-th plus one' *'kay plus one' or something else?
If you want a whole lot of non-expert opinions, you can read the comments here.
How to solve a symbolic non-linear vector equation? I'm trying to find a solution to this symbolic non-linear vector equation: P = a*(V0*t+P0) + b*(V1*t+P1) + (1-a-b)*(V2*t+P2) for a, b and t where P, V0, V1, V2, P0, P1, P2 are known 3d vectors. The interesting bit is that this equation has a simple geometrical interpretation. If you imagine that points P0-P2 are vertices of a triangle, V0-V2 are roughly vertex normals* and point P lies above the triangle, then the equation is satisfied for a triangle containing point P with its three vertices on the three rays (Vx*t+Px), sharing the same parameter t value. a, b and (1-a-b) become the barycentric coordinates of the point P. In other words, for a given P we want to find t such that P is a linear combination of (Vx*t+Px). So if the case is not degenerate, there should be only one well-defined solution for t. *) For my needs we can assume these are roughly vertex normals of a convex tri mesh and of course lie in the half space above the triangle. I posted this question to stackoverflow, but no one was able to help me there. Both MatLab and Maxima time out while trying to solve the equation.
Let's rewrite: $$a(P_0 - P_2 + t(V_0-V_2)) + b(P_1 - P_2 + t(V_1 - V_2)) = P - P_2 - t V_2$$ which is linear in $a$ and $b$. If we let $A=P_0-P_2$ and $A'=V_0-V_2$ and $B=P_1-P_2$ and $B'=V_1-V_2$ and $C=P-P_2$ and $C'=-V_2$ then you have $$a(A + tA') + b(B + tB') = C + tC'$$ This can be written as a matrix equation: $$ \begin{bmatrix} A_1 + t A'_1 & B_1 + t B'_1 & C_1 + tC'_1 \\ A_2 + t A'_2 & B_2 + t B'_2 & C_2 + tC'_2 \\ A_3 + t A'_3 & B_3 + t B'_3 & C_3 + tC'_3 \end{bmatrix} \begin{bmatrix} a \\ b \\ -1 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0\end{bmatrix}$$ or $Mx=0$, with suitable definitions for the matrix $M$ and the vector $x$, and with both $t$ and $x$ unknown. Now you know that this only has a nonzero solution if $\det M = 0$, which gives you a cubic in $t$. Solving for $t$ using your favorite cubic solver then gives you $Mx=0$ with only $x$ unknown, and $x$ is precisely the zero eigenvector of the matrix $M$. The fact that the third component of $x$ is $-1$ forces the values of $a$ and $b$, and you are done.
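Here is one hypothetical way to carry this out numerically with NumPy: sample the determinant at four values of $t$ to recover the cubic's coefficients, take its real roots, and read $a,b$ off the null vector. The data at the bottom are made up purely to exercise the code.

```python
import numpy as np

def solve(P, P0, P1, P2, V0, V1, V2):
    A, Ap = P0 - P2, V0 - V2
    B, Bp = P1 - P2, V1 - V2
    C, Cp = P - P2, -V2

    def M(t):
        return np.column_stack([A + t * Ap, B + t * Bp, C + t * Cp])

    # det M(t) is a cubic in t: recover its coefficients by interpolation
    ts = np.array([0.0, 1.0, 2.0, 3.0])
    coeffs = np.polyfit(ts, [np.linalg.det(M(t)) for t in ts], 3)

    sols = []
    for t in np.roots(coeffs):
        if abs(t.imag) > 1e-9:
            continue
        t = t.real
        x = np.linalg.svd(M(t))[2][-1]      # null vector of the singular M(t)
        if abs(x[2]) > 1e-12:
            x = x / (-x[2])                 # scale so the third component is -1
            sols.append((t, x[0], x[1]))
    return sols

# made-up test data: reconstruct a known (a, b, t)
P0, P1, P2 = np.array([0., 0, 0]), np.array([1., 0, 0]), np.array([0., 1, 0])
V0, V1, V2 = np.array([0., 0, 1]), np.array([.1, 0, 1]), np.array([0., .1, 1])
a, b, t = 0.3, 0.4, 2.0
P = a * (V0 * t + P0) + b * (V1 * t + P1) + (1 - a - b) * (V2 * t + P2)
print(solve(P, P0, P1, P2, V0, V1, V2))
```

Degenerate configurations can produce extra roots of the cubic; in practice one keeps the root whose $(a,b,t)$ actually reproduces $P$.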
Perfect square sequence In base 10, the sequence 49,4489,444889,... consists of all perfect squares. Is this true for any other bases (greater than 10, of course)?
$\left(\frac{2\cdot 19^5+1}{3}\right)^2=2724919437289,\ \ $ which converted to base $19$ is $88888GGGGH$. It doesn't work in base $13$ or $16$. In base $28$ it gives $CCCCCOOOOP$, where those are capital oh's (worth $24$). This is because if we express $\frac{1}{9}$ in base $9a+1$, it is $0.aaaa\ldots$. So $$\left(\frac{2(9a+1)^5+1}{3}\right)^2=\frac{4(9a+1)^{10}+4(9a+1)^5+1}{9}= (4a)(4a)(4a)(4a)(4a)(8a)(8a)(8a)(8a)(8a+1)_{9a+1}$$ where the parentheses represent a single digit and changing the exponent from $5$ changes the length of the strings in the obvious way.
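The digit pattern is easy to verify by machine. A small Python sketch (mine, just a sanity check) converts $\left((2b^5+1)/3\right)^2$ to base $b=9a+1$ and checks the pattern $(4a)^5(8a)^4(8a{+}1)$:

```python
def digits(n, b):
    """Base-b digits of n, most significant first."""
    out = []
    while n:
        out.append(n % b)
        n //= b
    return out[::-1]

for a in range(1, 8):
    b = 9 * a + 1
    assert (2 * b**5 + 1) % 3 == 0          # so the base is an integer
    n = (2 * b**5 + 1) // 3
    assert digits(n * n, b) == [4*a]*5 + [8*a]*4 + [8*a + 1]

print(digits(66667**2, 10))   # a = 1, base 10: the classic 4444488889
```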
Find the perimeter of any triangle given the three altitude lengths The altitude lengths are 12, 15 and 20. I would like a process rather than just a single solution.
First, $\begin{aligned}\operatorname{Area}(\triangle ABC)=\frac{ah_a}2=\frac{bh_b}2=\frac{ch_c}2\implies \frac1{h_a}&=\frac{a}{2\operatorname{Area}(\triangle ABC)}\\\frac1{h_b}&=\frac{b}{2\operatorname{Area}(\triangle ABC)}\\\frac1{h_c}&=\frac{c}{2\operatorname{Area}(\triangle ABC)} \end{aligned}$ By the already mentioned Heron's formula: $\begin{aligned}\operatorname{Area}(\triangle ABC)&=\sqrt{s(s-a)(s-b)(s-c)}=\sqrt{\frac{a+b+c}2\cdot\frac{b+c-a}2\cdot\frac{a+c-b}2\cdot\frac{a+b-c}2}\Bigg/:\operatorname{Area}^2(\triangle ABC)\\\frac1{\operatorname{Area}(\triangle ABC)}&=\sqrt{\frac{a+b+c}{2\operatorname{Area}(\triangle ABC)}\cdot\frac{b+c-a}{2\operatorname{Area}(\triangle ABC)}\cdot\frac{a+c-b}{2\operatorname{Area}(\triangle ABC)}\cdot\frac{a+b-c}{2\operatorname{Area}(\triangle ABC)}}\\\frac1{\operatorname{Area}(\triangle ABC)}&=\sqrt{\left(\frac1{h_a}+\frac1{h_b}+\frac1{h_c}\right)\left(\frac1{h_b}+\frac1{h_c}-\frac1{h_a}\right)\left(\frac1{h_a}+\frac1{h_c}-\frac1{h_b}\right)\left(\frac1{h_a}+\frac1{h_b}-\frac1{h_c}\right)}\end{aligned}$ $$\implies\operatorname{Area}(\triangle ABC)=\frac1{\sqrt{\left(\frac1{h_a}+\frac1{h_b}+\frac1{h_c}\right)\left(\frac1{h_b}+\frac1{h_c}-\frac1{h_a}\right)\left(\frac1{h_a}+\frac1{h_c}-\frac1{h_b}\right)\left(\frac1{h_a}+\frac1{h_b}-\frac1{h_c}\right)}}$$ Then plug the values $h_a=12,h_b=15,h_c=20$ into the formula.
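In code, the process is: take reciprocals of the altitudes, apply the reciprocal Heron formula for the area, then recover each side as $2\,\operatorname{Area}/h$. A quick Python sketch (my own):

```python
from math import sqrt

def triangle_from_altitudes(ha, hb, hc):
    u, v, w = 1 / ha, 1 / hb, 1 / hc
    area = 1 / sqrt((u + v + w) * (v + w - u) * (u + w - v) * (u + v - w))
    sides = (2 * area / ha, 2 * area / hb, 2 * area / hc)
    return area, sides

area, sides = triangle_from_altitudes(12, 15, 20)
print(area, sorted(sides), sum(sides))   # area ~150, sides ~[15, 20, 25], perimeter ~60
```

For these altitudes the result is the 15-20-25 right triangle, so the perimeter is 60.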
Uniformly continuous $f$ in $L^p([0,\infty))$ Assume that $1\leq p < \infty $, $f \in L^p([0,\infty))$, and $f$ is uniformly continuous. Prove that $\lim_{x \to \infty} f(x) = 0$ .
Hints: Suppose for a contradiction that $f(x) \not \to 0$. Together with the definition of uniform continuity, conclude that there exist constants $\varepsilon > 0$ and $\delta = \delta(\varepsilon) > 0$ such that for any $M > 0$ there exists $x > M$ for which it holds $$ \int_{(x,x + \delta )} {|f(y)|^p \,dy} \ge \bigg(\frac{\varepsilon }{2}\bigg)^p \delta . $$ However, this would imply $\int_{[0,\infty )} {|f(x)|^p \,dx} = \infty$, a contradiction.
Quasi convexity and strong duality Is there a possibility to prove the strong duality (Lagrangian duality) if the primal problem is quasiconvex? Or is there a technique that proves that the primal problem is convex instead of quasiconvex?
I'm not sure I understand the question clearly enough to give a precise answer, but here is a kind of meta-answer about why you should not expect such a thing to be true. Generally speaking duality results come from taking some given objective function and adding variable multiples of other functions (somehow connected to constraints) to it. Probably the simplest case is adding various linear functions. Proving strong duality for a certain type of setup involves understanding the class of functions thus generated, usually using some sort of convexity property. For example, if our original objective function $f$ is convex and we think of adding arbitrary linear functions $L$ to it, the result $f+L$ will always be convex, so we have a good understanding of functions generated in this way, in particular what their optima look like. Quasiconvexity does not behave nearly so well with respect to the operation of adding linear functions. One way to express this is the following. Let $f:\mathbb{R}^n\to\mathbb{R}$ be a function. Then $f+L$ is quasiconvex for all linear maps $L:\mathbb{R}^n\to\mathbb{R}$ if and only if $f$ is convex. Therefore the class of functions for which the benefits of quasiconvexity (local minima are global, etc.) would help prove strong duality is in some sense just the convex functions. This is not to say strong duality will never hold for quasiconvex objectives, but just that some strong additional conditions are required beyond the usual constraint qualifications used for convex functions: quasiconvexity alone does not buy you much duality-wise.
A compactness problem for model theory I'm working on the following problem: Assume that every model of a sentence $\varphi$ satisfies a sentence from $\Sigma$. Show that there is a finite $\Delta \subseteq \Sigma$ such that every model of $\varphi$ satisfies a sentence in $\Delta$. The quantifiers in this problem are throwing me off; besides some kind of compactness application I'm not sure where to go with it (hence the very poor title). Any hint?
Cute, in a twisted sort of way. You are right, the quantifier structure is the main hurdle to solving the problem. We can assume that $\varphi$ has a model, else the result is trivially true. Suppose that there is no finite $\Delta\subseteq \Sigma$ with the desired property. Then for every finite $\Delta \subseteq \Sigma$, the set $\{\varphi\}\cup \Delta'$ has a model. (For any set $\Gamma$ of sentences, $\Gamma'$ will denote the set of negations of sentences in $\Gamma$.) By the Compactness Theorem, we conclude that $\{\varphi\}\cup \Sigma'$ has a model $M$. This model $M$ is a model of $\varphi$ in which no sentence in $\Sigma$ is true, contradicting the fact that every model of $\varphi$ satisfies a sentence from $\Sigma$.
Use of a null option in a combination with repetition problem Ok, so I am working on a combinatorics problem involving combination with repetition. The problem comes off a past test that was put up for study. Here it is: An ice cream parlor sells six flavors of ice cream: vanilla, chocolate, strawberry, cookies and cream, mint chocolate chip, and chocolate chip cookie dough. How many combinations of fewer than 20 scoops are there? (Note: two combinations count as distinct if they differ in the number of scoops of at least one flavor of ice cream.) Now I get the correct answer $\binom{25}{6}$, but the way they arrive at the answer is different and apparently important. I just plug in 20 combinations of 6 flavors into $\binom{n+r-1}{r}=\binom{n+r-1}{n-1}$. The answer given makes use of a "null flavor" to be used in the calculation. I can't figure out for the life of me why, could someone explain this to me? Answer: This is a slight variation on the standard combinations with repetition problem. The difference here is that we are not trying to buy exactly 19 scoops of ice cream, but 19 or fewer scoops. We can solve this problem by introducing a 7th flavor, called “no-flavor” ice cream. Now, imagine trying to buy exactly 19 scoops of ice cream from the 7 possible flavors (the six listed and “no-flavor”). Any combination with only 10 real scoops would be assigned 9 “no-flavor” scoops, for example. There is a one-to-one correspondence between the possible combinations of 19 or fewer scoops from 6 flavors and the combinations of exactly 19 “scoops” from 7 flavors. Thus, using the formula for combination with repetition with 19 items from 7 types, we find the number of ways to buy the scoops is $\binom{19+7-1}{19}=\binom{25}{19}=\binom{25}{6}$. (Grading – 4 pts for mentioning the idea of an extra flavor, 4 pts for attempting to apply the correct formula, 2 pts for getting the correct answer. If a sum is given instead of a closed form, give 6 points out of 10.)
Any assistance would be greatly appreciated!
Suppose that the problem had asked how many combinations there are with exactly $19$ scoops. That would be a bog-standard problem involving combinations with possible repetition (sometimes called a stars-and-bars or marbles-and-boxes problem). Then you’d have been trying to distribute $19$ scoops of ice cream amongst $6$ flavors, and you’d have known that the answer was $\binom{19+6-1}{6-1}=\binom{24}{5}$, either by the usual stars-and-bars analysis or simply by having learnt the formula. Unfortunately, the problem actually asks for the number of combinations with at most $19$ scoops. There are at least two ways to proceed. The straightforward but slightly ugly one is to add up the number of combinations with $19$ scoops, $18$ scoops, $17$ scoops, and so on down to no scoops at all. You know that the number of combinations with $n$ scoops is $\binom{n+6-1}{6-1}=\binom{n+5}{5}$, so the answer to the problem is $\sum\limits_{n=0}^{19}\binom{n+5}{5}$. This can be rewritten as $\sum\limits_{k=5}^{24}\binom{k}{5}$, and you can then use a standard identity (sometimes called a hockey stick identity) to reduce this to $\binom{25}{6}$. The slicker alternative is the one used in the solution that you quoted. Pretend that there are actually seven flavors: let’s imagine that in addition to vanilla, chocolate, strawberry, cookies and cream, mint chocolate chip, and chocolate chip cookie dough there is banana. (The quoted solution uses ‘no-flavor’ instead of banana.) Given a combination of exactly $19$ scoops chosen from these seven flavors, you can throw away all of the banana scoops; what remains will be a combination of at most $19$ scoops from the original six flavors. Conversely, if you start with any combination of at most $19$ scoops of the original six flavors, you can ‘pad it out’ to exactly $19$ scoops by adding enough scoops of banana ice cream to make up the difference. 
(E.g., if you start with $10$ scoops of vanilla and $5$ of strawberry, you’re $4$ scoops short of $19$, so you add $4$ scoops of banana.) This establishes a one-to-one correspondence (bijection) between (1) the set of combinations of exactly $19$ scoops chosen from the seven flavors and (2) the set of combinations of at most $19$ scoops chosen from the original six flavors. Thus, these two sets of combinations are the same size, and counting one is as good as counting the other. But counting the number of combinations of exactly $19$ scoops chosen from seven flavors is easy: that’s just the standard combinations with repetitions problem, and the answer is $\binom{19+7-1}{7-1}=\binom{25}{6}$. Both solutions are perfectly fine; the second is a little slicker and requires less computation, but the first is also pretty straightforward if you know your basic binomial identities.
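Both counts, and the bijection behind them, are cheap to verify in Python (a throwaway check of my own; the brute-force line tests the same identity on a smaller instance, at most 5 scoops of 3 flavors):

```python
from math import comb
from itertools import product

closed = comb(25, 6)                              # 19 scoops over 7 flavors
direct = sum(comb(n + 5, 5) for n in range(20))   # sum over n = 0..19 real scoops

# brute force on a small case: at most 5 scoops of 3 flavors vs. C(8, 3)
brute = sum(1 for t in product(range(6), repeat=3) if sum(t) <= 5)

print(closed, direct, brute, comb(8, 3))   # 177100 177100 56 56
```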
Simple problem with pattern matching in Mathematica I'm having trouble setting a pattern for simplifying a complex expression. I've distilled the question down to the simplest case where Mathematica seems to fail. I set up a simple rule based on a pattern: simpRule = a b v___ - c d v___ -> e v which works on the direct case a b - c d /. simpRule e but fails if I simply add a minus sign. -a b + c d /. simpRule -a b + c d How do I go about writing a more robust rule? Or perhaps there's a better way to go about performing simplifications of this sort? Thanks, Keith
You need to be aware of the FullForm of the expressions you are working with to correctly use replacement rules. Consider the FullForm of the two expressions you use ReplaceAll (/.) on: a b - c d // FullForm -a b + c d // FullForm (* Out *) Plus[Times[a, b], Times[-1, c, d]] (* Out *) Plus[Times[-1, a, b], Times[c, d]] From this you can see how the negation is internally represented as multiplication by -1, and therefore, why your rule matches differently. One way to allow for the alternative pattern is this: rule = (-1 a b v___ | a b v___) + (-1 c d v___ | c d v___) -> e v; This makes use of Alternatives. Another, somewhat more opaque way is: rule = (-1 ...) a b v___ + (-1 ...) c d v___ -> e v; Which makes use of RepeatedNull.
Numerical computation of the Rayleigh-Lamb curves The Rayleigh-Lamb equations: $$\frac{\tan (pd)}{\tan (qd)}=-\left[\frac{4k^2pq}{\left(k^2-q^2\right)^2}\right]^{\pm 1}$$ (two equations, one with the +1 exponent and the other with the -1 exponent) where $$p^2=\frac{\omega ^2}{c_L^2}-k^2$$ and $$q^2=\frac{\omega ^2}{c_T^2}-k^2$$ show up in physical considerations of the elastic oscillations of solid plates. Here, $c_L$, $c_T$ and $d$ are positive constants. These equations determine for each positive value of $\omega$ a discrete set of real "eigenvalues" for $k$. My problem is the numerical computation of these eigenvalues and, in particular, to obtain curves displaying these eigenvalues. What sort of numerical method can I use with this problem? Thanks. Edit: Using the numerical values $d=1$, $c_L=1.98$, $c_T=1$, the plots should look something like this (black curves correspond to the -1 exponent, blue curves to the +1 exponent; the horizontal axis is $\omega$ and the vertical axis is $k$):
The Rayleigh-Lamb equations: $$\frac{\tan (pd)}{\tan (qd)}=-\left[\frac{4k^2pq}{\left(k^2-q^2\right)^2}\right]^{\pm 1}$$ are equivalent to the equations (as Robert Israel pointed out in a comment above) $$\left(k^2-q^2\right)^2\sin pd \cos qd+4k^2pq \cos pd \sin qd=0$$ when the exponent is +1, and $$\left(k^2-q^2\right)^2\cos pd \sin qd+4k^2pq \sin pd \cos qd=0$$ when the exponent is -1. Mathematica had trouble with the plots because $p$ or $q$ became imaginary. The trick is to divide by $p$ or $q$ as convenient. Using the numerical values $d=1$, $c_L=1.98$, $c_T=1$, we divide the equation for the +1 exponent by $p$ and the equation for the -1 exponent by $q$. Supplying these to the ContourPlot command in Mathematica I obtained the curves for the +1 exponent, and for the -1 exponent.
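For readers without Mathematica, here is a rough Python sketch of the same idea for the symmetric (+1) branch: multiply out the equation, divide by $p$ so the expression stays real-valued even where $p$ or $q$ is imaginary, then bracket sign changes in $k$ for a fixed $\omega$ and refine by bisection. (Function names, the scan range, and the step count are my own arbitrary choices.)

```python
import cmath

d, cL, cT = 1.0, 1.98, 1.0

def sym(omega, k):
    """(k^2 - q^2)^2 sin(pd) cos(qd) + 4 k^2 p q cos(pd) sin(qd), divided by p.
    Real-valued even when p or q is imaginary (the imaginary parts cancel)."""
    p = cmath.sqrt((omega / cL) ** 2 - k ** 2)
    q = cmath.sqrt((omega / cT) ** 2 - k ** 2)
    val = ((k**2 - q**2)**2 * cmath.sin(p * d) / p * cmath.cos(q * d)
           + 4 * k**2 * q * cmath.cos(p * d) * cmath.sin(q * d))
    return val.real

def modes(omega, f=sym, n=2000):
    """Real wavenumbers k in (0, omega/cT) with f(omega, k) = 0."""
    ks = [0.013 + (omega / cT - 0.026) * i / n for i in range(n + 1)]
    found = []
    for a, b in zip(ks, ks[1:]):
        if f(omega, a) * f(omega, b) < 0:
            for _ in range(80):                 # bisection refinement
                m = (a + b) / 2
                if f(omega, a) * f(omega, m) <= 0:
                    b = m
                else:
                    a = m
            found.append((a + b) / 2)
    return found

print(modes(10.0))   # wavenumbers of the symmetric modes at omega = 10
```

Sweeping $\omega$ and plotting the resulting $(\omega, k)$ pairs reproduces the black/blue dispersion curves described in the question, at least in the scanned range.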
Showing that $l^p(\mathbb{N})^* \cong l^q(\mathbb{N})$ I'm reading functional analysis in the summer, and have come to this exercise, asking to show that the two spaces $l^p(\mathbb{N})^*,l^q(\mathbb{N})$ are isomorphic, that is, by showing that every $l \in l^p(\mathbb{N})^*$ can be written as $l_y(x)=\sum y_nx_n$ for some $y$ in $l^q(\mathbb N)$. The exercise has a hint. Paraphrased: "To see $y \in l^q(\mathbb N)$ consider $x^N$ defined such that $x_ny_n=|y_n|^q$ for $n \leq N$ and $x_n=0$ for $n > N$. Now look at $|l(x^N)| \leq ||l|| ||x^N||_p$." I can't say I understand the first part of the hint. To prove the statement I need to find a $y$ such that $l=l_y$ for some $y$. How then can I define $x$ in terms of $y$ when it is $y$ I'm supposed to find. Isn't there something circular going on? The exercise is found on page 68 in Gerald Teschl's notes at http://www.mat.univie.ac.at/~gerald/ftp/book-fa/index.html Thanks for all answers.
We know that the sequence $(e_n)$, where $e_n$ has a $1$ at the $n$-th position and $0$s elsewhere, is a Schauder basis for $\ell^p$ (which has some nice alternative equivalent definitions; I recommend Topics in Banach Space Theory by Albiac and Kalton as a nice reference about this). So, every $x \in \ell^p$ has a unique representation $$x = \sum_{k = 1}^\infty y_k e_k.$$ Now consider $l \in (\ell^p)^*$. Because $l$ is bounded we also have that $$l(x) = \sum_{k = 1}^\infty y_k l(e_k).$$ Now set $z_k = l(e_k)$. Consider the following $x_n = (y_k^{(n)})$ where $$y_k^{(n)} = \begin{cases} \frac{|z_k|^q}{z_k} &\text{when $k \leq n$ and $z_k \neq 0$,}\\ 0 &\text{otherwise.} \end{cases}$$ We have that $$\begin{align}l(x_n) &= \sum_{k = 1}^\infty y_k^{(n)} z_k\\ &= \sum_{k = 1}^n |z_k|^q\\ &\leq \|l\|\|x_n\|_p\\ &= \|l\| \left ( \sum_{k=1}^n |y_k^{(n)}|^p \right )^{\frac1p}\\ &= \|l\| \left ( \sum_{k=1}^n |z_k|^{(q-1)p} \right )^{\frac1p}\\ &= \|l\| \left ( \sum_{k=1}^n |z_k|^q \right )^{\frac1p}, \end{align}$$ using $(q-1)p = q$. Hence we have that $$\sum_{k=1}^n |z_k|^q \leq \|l\| \left (\sum_{k=1}^n |z_k|^q \right )^{\frac1p}.$$ Now we divide and get $$\left ( \sum_{k = 1}^n |z_k|^q \right )^{\frac1q} \leq \|l\|.$$ Take the limit to obtain $$\left ( \sum_{k \geq 1} |z_k|^q \right )^{\frac1q} \leq \|l\|.$$ We conclude that $(z_k) \in \ell^q$. So, now you could try doing the same for $L^p(\mathbf R^d)$ with a $\sigma$-finite measure. A small hint: Using the $\sigma$-finiteness you can reduce to the finite measure case.
On the growth order of an entire function $\sum \frac{z^n}{(n!)^a}$ Here $a$ is a real positive number. The result is that $f(z)=\sum_{n=1}^{+\infty} \frac{z^n}{(n!)^a}$ has a growth order $1/a$ (i.e. $\exists A,B\in \mathbb{R}$ such that $|f(z)|\leq A\exp{(B|z|^{1/a})},\forall z\in \mathbb{C}$). It is Problem 3* from Chapter 5 of E.M. Stein's book, Complex Analysis, page 157. Yet I don't know how to get this. Will someone give me some hints on it? Thank you very much.
There is a formula expressing the growth order of an entire function $f(z)=\sum_{n=0}^\infty c_nz^n\ $ in terms of its Taylor coefficients: $$ \rho=\limsup_{n\to\infty}\frac{\log n}{\log\frac{1}{\sqrt[n]{|c_n|}}}. $$
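As a rough numerical illustration (my own check, not part of the answer): for $c_n = 1/(n!)^a$ we have $\log(1/\sqrt[n]{|c_n|}) = a\log(n!)/n$, and the quotient approaches $1/a$, though quite slowly:

```python
from math import lgamma, log

def rho_estimate(a, n):
    # c_n = 1/(n!)**a, so log(1/|c_n|**(1/n)) = a * log(n!) / n
    return log(n) / (a * lgamma(n + 1) / n)

for a in (0.5, 1.0, 2.0):
    print(a, rho_estimate(a, 10**6))   # tends to 1/a as n grows
```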
planar curve if and only if torsion Again I have a question; it's about a proof. If the torsion of a curve is zero, we have that $$ B(s) = v_0,$$ a constant vector (where $B$ is the binormal). The proof ends by concluding that the curve $$ \alpha \left( t \right) $$ is such that $$ \alpha(t)\cdot v_0 = k $$ and then the book says, "then the curve is contained in a plane orthogonal to $v_0$." It's a not-so-important detail, but that angle might not be $0$; the position vector could fail to be perpendicular to $v_0$. Anyway, geometrically I see it as $v_0$ "cutting" that plane at some angle. My stupid question is why this constant $k$ must be $0$. Or can I just choose some $v_0$ to get that "$k$"?
The constant $k$ need not be $0$; that would be the case where $\alpha$ lies in a plane through the origin. You have $k=\alpha(0)\cdot v_0$, so for all $t$, $(\alpha(t)-\alpha(0))\cdot v_0=0$. This means that $\alpha(t)-\alpha(0)$ lies in the plane through the origin perpendicular to $v_0$, so $\alpha(t)$ lies in the plane through $\alpha(0)$ perpendicular to $v_0$. (If $0$ is not in the domain, then $0$ could be replaced with any point in the domain of $\alpha$.)
Regular curve whose tangent lines pass through a fixed point How to prove that if a regular parametrized curve has the property that all its tangent lines pass through a fixed point then its trace is a segment of a straight line? Thanks
In his answer, user14242 used a vector (cross) product of two vectors, which is only defined in the 3-dimensional case. If you are not restricted to the 3-dimensional problem, then just writing the equation of the tangent line explicitly you obtain $$ r(t)+\dot{r}(t)\tau(t) = a $$ where $a$ is the fixed point and $\tau(t)$ denotes the value of the parameter at which the tangent line passes through $a$. Let's assume that $t$ is a natural parametrization of the curve. Taking the derivative w.r.t. $t$ you have $$ \dot{r}(t)(1+\dot{\tau}(t))+n(t)\tau(t) = 0 $$ where $n(t) = \ddot{r}(t)$ is a normal vector. Also $n\cdot\dot{r} = 0$, so dotting the equation with $n(t)$ gives $$ \tau(t)\|n(t)\|^2 = 0. $$ You have $\tau(t) = 0$ iff $r(t) = a$, and for all other points $n(t) = 0$, which gives us $\dot{r}(t) = const$.
Proving $\sum\limits_{k=1}^{n}{\frac{1}{\sqrt{k}}\ge\sqrt{n}}$ with induction I am just starting out learning mathematical induction and I got this homework question to prove with induction but I am not managing. $$\sum\limits_{k=1}^{n}{\frac{1}{\sqrt{k}}\ge\sqrt{n}}$$ Perhaps someone can help me out I don't understand how to move forward from here: $$\sum\limits_{k=1}^{n+1}{\frac{1}{\sqrt{k}}+\frac{1}{\sqrt{n+1}}\ge \sqrt{n+1}}$$ proof and explanation would greatly be appreciated :) Thanks :) EDIT sorry meant GE not = fixed :)
For those who strive for non-induction proofs... Since $\frac 1{\sqrt k} \ge \frac 1{\sqrt n}$ for $1 \le k \le n$, we actually have $$ \sum_{k=1}^n \frac 1{\sqrt k} \ge \sum_{k=1}^n \frac 1{\sqrt n} = \frac n{\sqrt n} = \sqrt n. $$
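For completeness, the induction step the question got stuck on can also be finished directly (a sketch): assuming $\sum_{k=1}^{n}\frac{1}{\sqrt k}\ge\sqrt n$, it suffices to show
$$\sqrt{n}+\frac{1}{\sqrt{n+1}}\ge\sqrt{n+1}.$$
Multiplying through by $\sqrt{n+1}$, this becomes $\sqrt{n(n+1)}+1\ge n+1$, i.e. $\sqrt{n(n+1)}\ge n$, which holds because $n(n+1)\ge n^2$.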
Do finite algebraically closed fields exist? Let $K$ be an algebraically closed field ($\operatorname{char}K=p$). Denote $${\mathbb F}_{p^n}=\{x\in K\mid x^{p^n}-x=0\}.$$ It's easy to prove that ${\mathbb F}_{p^n}$ consists of exactly $p^n$ elements. But if $|K|<p^n$, we have collision with previous statement (because ${\mathbb F}_{p^n}$ is subfield of $K$). So, are there any finite algebraically closed fields? And if they exist, where have I made a mistake? Thanks.
No, there do not exist any finite algebraically closed fields. For suppose $K$ is a finite field; then the polynomial $$f(x)=1+\prod_{\alpha\in K}(x-\alpha)\in K[x]$$ cannot have any roots in $K$ (because $f(\alpha)=1$ for any $\alpha\in K$), so $K$ cannot be algebraically closed. Note that for $K=\mathbb{F}_{p^n}$, the polynomial is $$f(x)=1+\prod_{\alpha\in K}(x-\alpha)=1+(x^{p^n}-x).$$
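A quick computational illustration (my own, not part of the answer): over $\mathbb F_p$ with $p$ prime, the polynomial $f(x)=1+\prod_{\alpha}(x-\alpha)$ really does take the value $1$ at every element, and by Fermat's little theorem it agrees with $1+(x^p-x)$ there.

```python
p = 7  # any prime; F_p = {0, 1, ..., p-1} with arithmetic mod p

def f(x):
    """f(x) = 1 + prod_{a in F_p} (x - a), computed mod p."""
    prod = 1
    for a in range(p):
        prod = (prod * (x - a)) % p
    return (1 + prod) % p

# f has no root in F_p: it is identically 1 there
values = [f(x) for x in range(p)]

# and f agrees with 1 + (x^p - x) on F_p, since x^p = x by Fermat
values2 = [(1 + pow(x, p, p) - x) % p for x in range(p)]
```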
What is the analogue of spherical coordinates in $n$-dimensions? What's the analogue to spherical coordinates in $n$-dimensions? For example, for $n=2$ the analogue are polar coordinates $r,\theta$, which are related to the Cartesian coordinates $x_1,x_2$ by $$x_1=r \cos \theta$$ $$x_2=r \sin \theta$$ For $n=3$, the analogue would be the ordinary spherical coordinates $r,\theta ,\varphi$, related to the Cartesian coordinates $x_1,x_2,x_3$ by $$x_1=r \sin \theta \cos \varphi$$ $$x_2=r \sin \theta \sin \varphi$$ $$x_3=r \cos \theta$$ So these are my questions: Is there an analogue, or several, to spherical coordinates in $n$-dimensions for $n>3$? If there are such analogues, what are they and how are they related to the Cartesian coordinates? Thanks.
I was trying to answer exercise 9 of $ I.5. $ from Einstein Gravity in a Nutshell by A. Zee when I saw this question, so what I am going to say is from that book. The $d$-dimensional unit sphere $S^d$ is embedded into $E^{d+1}$ by the usual Pythagorean relation $$(X^1)^2+(X^2)^2+\cdots+(X^{d+1})^2=1.$$ Thus $S^1$ is the circle and $S^2$ the sphere. A. Zee says we can generalize what we know about polar and spherical coordinates to higher dimensions by defining $X^1=\cos\theta_1,\quad X^2=\sin\theta_1 \cos\theta_2,\ldots $ $X^d=\sin\theta_1 \ldots \sin\theta_{d-1} \cos\theta_d,$ $X^{d+1}=\sin\theta_1 \ldots \sin\theta_{d-1} \sin\theta_d$ where $0\leq\theta_{i}\lt \pi \,$ for $ 1\leq i\lt d $ but $ 0\leq \theta_d \lt 2\pi $. So for $S^1$ we just have ($\theta_1$): $X^1=\cos\theta_1,\quad X^2=\sin\theta_1$ $S^1$ is embedded into $E^2$ and for the metric on $S^1$ we have: $$ds_1^2=\sum_{i=1}^2(dX^i)^2=d\theta_1^2$$ for $S^2$ we have ($\theta_1, \theta_2$), so for Cartesian coordinates we have: $X^1=\cos\theta_1,\quad X^2=\sin\theta_1\cos\theta_2,$ $\quad X^3=\sin\theta_1\sin\theta_2$ and for its metric: $$ds_2^2=\sum_{i=1}^3(dX^i)^2=d\theta_1^2+\sin^2\theta_1 d\theta_2^2$$ for $S^3$ which is embedded into $E^4$ we have ($ \theta_1,\theta_2,\theta_3 $): $X^1=\cos\theta_1,\quad X^2=\sin\theta_1\cos\theta_2,$ $\quad X^3=\sin\theta_1\sin\theta_2\cos\theta_3$ $\quad X^4=\sin\theta_1\sin\theta_2\sin\theta_3 $ $$ds_3^2=\sum_{i=1}^4(dX^i)^2=d\theta_1^2+\sin^2\theta_1 d\theta_2^2+\sin^2\theta_1\sin^2\theta_2\,d\theta_3^2$$ Finally, it is not difficult to show the metric on $S^d$ will be: $$ds_d^2=\sum_{i=1}^{d+1}(dX^i)^2=d\theta_1^2+\sin^2\theta_1 d\theta_2^2+\sin^2\theta_1\sin^2\theta_2\,d\theta_3^2+\cdots+\sin^2\theta_1\cdots\sin^2\theta_{d-1}\,d\theta_d^2$$
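The recursion in these formulas is easy to implement. Here is a small sketch (my own, using the same angle conventions as above) mapping hyperspherical angles $(\theta_1,\dots,\theta_d)$ to a point on $S^d\subset E^{d+1}$; the squared norm of the result is always $1$.

```python
import math

def hyperspherical_to_cartesian(thetas):
    """Map angles (t_1, ..., t_d) to Cartesian coordinates on the unit
    sphere S^d in E^{d+1}:
      X^i     = sin(t_1)...sin(t_{i-1}) cos(t_i)   for i = 1..d,
      X^{d+1} = sin(t_1)...sin(t_d).
    """
    coords = []
    sin_prod = 1.0          # running product sin(t_1)...sin(t_{i-1})
    for t in thetas:
        coords.append(sin_prod * math.cos(t))
        sin_prod *= math.sin(t)
    coords.append(sin_prod)
    return coords

# d = 3 example: a point on S^3 embedded in E^4
pt = hyperspherical_to_cartesian([0.4, 1.1, 2.5])
```

For $d=1$ this reduces to the familiar $(\cos\theta_1,\sin\theta_1)$.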
Asymptotics for prime sums of three consecutive primes We consider the sequence $R_n=p_n+p_{n+1}+p_{n+2}$, where $\{p_i\}$ is the prime number sequence, with $p_0=2$, $p_1=3$, $p_2=5$, etc. The first few values of $R_n$ for $n=0,1,2,\dots $ are: $10, 15, 23, 31, 41, 49, 59, 71, 83, 97, 109, 121, 131, 143, 159, 173, 187, 199, $ $211,223,235,251,269,287,301,311,319,329,349,371,395,407,425,439, 457$ $\dots \dots \dots$ Now, we define $R(n)$ to be the number of prime numbers in the set $\{R_0, R_1 , \dots , R_n\}$. What I have found (without justification) is that $R(n) \approx \frac{2n}{\ln (n)}$ My lack of programming skills, however, prevents me from checking further numerical examples. I was wondering if anyone here had any ideas as to how to prove this assertion. As a parting statement, I bring up a quote from Gauss, which I feel describes many conjectures regarding prime numbers: "I confess that Fermat's Theorem as an isolated proposition has very little interest for me, because I could easily lay down a multitude of such propositions, which one could neither prove nor dispose of."
What this says (to me) is that there are twice as many primes in $R(n)$ as in the natural numbers. But, since each $R_n$ with $n\ge 1$ is the sum of three odd primes, each such $R_n$ is odd (the only exception is $R_0$, whose sum includes $p_0=2$). This eliminates half of the possible values (the evens), all of which are, of course, composite. So, if the $R_n$ values are as random as the primes (which, as Kac famously said, "play a game of chance"), then they should be twice as likely as the primes to be primes since they can never be even. As to proving this, haa!
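Since the question mentions not being able to check this numerically, here is a sketch (my own code, not from the post) that computes $R(n)$ with a simple sieve and compares it with $2n/\ln n$; indexing follows the question, with $p_0=2$.

```python
import math

def sieve(limit):
    """Boolean primality table up to `limit` (sieve of Eratosthenes)."""
    is_p = bytearray([1]) * (limit + 1)
    is_p[0] = is_p[1] = 0
    for i in range(2, int(limit ** 0.5) + 1):
        if is_p[i]:
            is_p[i * i :: i] = bytearray(len(range(i * i, limit + 1, i)))
    return is_p

LIMIT = 200_000
is_p = sieve(LIMIT)
primes = [i for i, b in enumerate(is_p) if b]

# R_n = p_n + p_{n+1} + p_{n+2}, indexing from p_0 = 2 as in the question
N = 5000
R_vals = [primes[n] + primes[n + 1] + primes[n + 2] for n in range(N + 1)]

# R(n) = number of primes among R_0, ..., R_n
R_of, running = [], 0
for r in R_vals:
    running += is_p[r]
    R_of.append(running)

for n in (100, 1000, 5000):
    est = 2 * n / math.log(n)
    print(f"n={n:5d}  R(n)={R_of[n]:4d}  2n/ln(n)={est:7.1f}  ratio={R_of[n]/est:.2f}")
```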
Solving $-u''(x) = \delta(x)$ A question asks us to solve the differential equation $-u''(x) = \delta(x)$ with boundary conditions $u(-2) = 0$ and $u(3) = 0$ where $\delta(x)$ is the Dirac delta function. But inside the same question, the teacher gives the solution in two pieces as $u = A(x+2)$ for $x\le0$ and $u = B(x-3)$ for $x \ge 0$. I understand when we integrate the delta function twice the result is the ramp function $R(x)$. However elsewhere in his lecture the teacher had given the general solution of that DE as $u(x) = -R(x) + C + Dx$ So I don't understand how he was able to jump from this solution to the two pieces. Are these the only two pieces possible, using the boundary conditions given, or can there be other solutions? Full solution is here (section 1.2 answer #2) http://ocw.mit.edu/courses/mathematics/18-085-computational-science-and-engineering-i-fall-2008/assignments/pset1.pdf
Both describe the same type of function. The ramp function is nothing but \begin{align} R(x) = \begin{cases}0 & x \leq 0 \\ x & x \geq 0\end{cases} \end{align} If you use the general solution and plug in the same boundary conditions, \begin{align}u(-2) &= -R(-2) + C -2D = C - 2D = 0 \\ u(3) &= -R(3) + C + 3D = -3 + C +3D = 0\end{align} with the solution $C=6/5$, $D=3/5$, and then split it at $x=0$ to get rid of the ramp function, you obtain \begin{align}u(x) = \begin{cases} \frac65 + \frac35 x & x \leq 0\\ -x + \frac65 + \frac35 x = \frac65 - \frac25 x &x \geq 0\end{cases}\end{align} which is exactly the same expression you got by splitting the function earlier (practically, there is no difference, but it's shorter when written with the ramp function).
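As a sanity check (my own sketch, not from the problem set), the same boundary-value problem can be solved by finite differences, replacing $\delta(x)$ by a unit impulse of weight $1/h$ at the grid node $x=0$. Because the exact solution is piecewise linear with its kink at a node, the discrete solution reproduces the exact nodal values.

```python
# Finite-difference check of -u'' = delta(x), u(-2) = u(3) = 0.
N = 49                    # interior nodes, so h = 5/(N+1) = 0.1
h = 5.0 / (N + 1)
j0 = 20                   # interior node index of x = -2 + 20*h = 0 (1-based)

# tridiagonal system (1/h^2)(-u_{i-1} + 2 u_i - u_{i+1}) = b_i
sub = [-1.0 / h**2] * N   # subdiagonal
diag = [2.0 / h**2] * N
sup = [-1.0 / h**2] * N   # superdiagonal
rhs = [0.0] * N
rhs[j0 - 1] = 1.0 / h     # discrete delta at x = 0

# Thomas algorithm: forward elimination, then back substitution
cp, dp = [0.0] * N, [0.0] * N
cp[0] = sup[0] / diag[0]
dp[0] = rhs[0] / diag[0]
for i in range(1, N):
    m = diag[i] - sub[i] * cp[i - 1]
    cp[i] = sup[i] / m
    dp[i] = (rhs[i] - sub[i] * dp[i - 1]) / m
u = [0.0] * N
u[-1] = dp[-1]
for i in range(N - 2, -1, -1):
    u[i] = dp[i] - cp[i] * u[i + 1]

print("u(0)  =", u[j0 - 1])   # exact: C = 6/5
print("u(-1) =", u[9])        # exact: 6/5 - 3/5 = 3/5
print("u(1)  =", u[29])       # exact: 6/5 - 2/5 = 4/5
```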
Please help me to understand why $\lim\limits_{n\to\infty } {nx(1-x^2)^n}=0$ This is the exercise: $$f_{n}(x) = nx(1-x^2)^n,\qquad n \in {N}, f_{n}:[0,1] \to {R}.$$ Find ${f(x)=\lim\limits_{n\to\infty } {nx(1-x^2)^n}}$. I know that $\forall x\in (0,1]$ $\Rightarrow (1-x^2) \in [0, 1) $ but I still don't know how to calculate the limit. $\lim\limits_{n\to\infty } {(1-x^2)^n}=0$ because $(1-x^2) \in [0, 1) $ and that means I have $\infty\cdot0$. I tried transformation to $\frac{0}{0} $ and here is where I got stuck. I hope someone could help me.
Please see $\textbf{9.10}$ at this given link: The Solution is actually worked out for a more general case and I think that should help. * *http://www.csie.ntu.edu.tw/~b89089/book/Apostol/ch9.pdf
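In case the link changes, here is the standard argument in brief (my own sketch): the case $x=0$ is trivial since $f_n(0)=0$. For fixed $x\in(0,1]$ set $t=1-x^2\in[0,1)$, so that $f_n(x)=x\,nt^n$. The series $\sum_{n\ge1}nt^n$ converges for $0\le t<1$ (ratio test: $\frac{(n+1)t^{n+1}}{nt^n}\to t<1$), hence its general term satisfies $nt^n\to0$, and therefore
$$f(x)=\lim_{n\to\infty}f_n(x)=0\quad\text{for every }x\in[0,1].$$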
Generate a random direction within a cone I have a normalized $3D$ vector giving a direction and an angle that forms a cone around it, something like this: I'd like to generate a random, uniformly distributed normalized vector for a direction within that cone. I would also like to support angles greater than pi (but lower or equal to $2\pi$), at which point the shape becomes more like a sphere from which a cone was removed. How can I proceed? I thought about the following steps, but my implementation did not seem to work: * *Find a vector normal to the cone axis vector (by crossing the cone axis vector with the cardinal axis that corresponds with the cone axis vector component nearest to zero, ex: $[1 0 0]$ for $[-1 5 -10]$) *Find a second normal vector using a cross product *Generate a random angle between $[-\pi, \pi]$ *Rotate use the two normal vectors as a $2D$ coordinate system to create a new vector at the angle previously generated *Generate a random displacement value between $[0, \tan(\theta)]$ and square root it (to normalize distribution like for points in a circle) *Normalize the sum of the cone axis vector with the random normal vector times the displacement value to get the final direction vector [edit] After further thinking, I'm not sure that method would work with theta angles greater or equal to pi. Alternative methods are very much welcome.
So you want to uniformly sample on a spherical cap. With the notations of this link, here is some pseudo-code which performs the sampling on the cap displayed in the above link (then it remains to perform a rotation): stopifnot(h > 0 && h < 2 * R) (x, y) = uniform_point_in_unit_circle() k = h * (x^2 + y^2) s = sqrt(h * (2 * R - k)) return (s * x, s * y, R - k) I have this code in my archives, unfortunately I don't remember how I got it. I put the image of the link here, in case it changes:
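Here is a runnable version of the pseudo-code (my own translation). One can check that the construction is correct: if $(x,y)$ is uniform in the unit disk then $x^2+y^2$ is uniform on $[0,1]$, so $z=R-k$ is uniform on $[R-h,R]$, which is exactly the uniform-cap condition; and the scaling $s$ places the point back on the sphere of radius $R$. Since the cap height relates to the half-angle by $h=R(1-\cos\theta)$, taking $h$ up to $2R$ covers apertures beyond $\pi$, as the question asks. Rotating the result toward the cone axis is still a separate step.

```python
import math
import random

def uniform_point_in_unit_circle(rng):
    """Rejection-sample a point uniformly distributed in the unit disk."""
    while True:
        x, y = rng.uniform(-1.0, 1.0), rng.uniform(-1.0, 1.0)
        if x * x + y * y <= 1.0:
            return x, y

def sample_cap(R, h, rng):
    """Uniform point on the spherical cap {z >= R - h} of the radius-R
    sphere centered at the origin (h = R(1 - cos(theta)) for half-angle
    theta, so h may go all the way up to 2R = the full sphere)."""
    assert 0.0 < h <= 2.0 * R
    x, y = uniform_point_in_unit_circle(rng)
    k = h * (x * x + y * y)          # k/h ~ Uniform(0,1) => z = R - k uniform on [R-h, R]
    s = math.sqrt(h * (2.0 * R - k))
    return (s * x, s * y, R - k)

rng = random.Random(0)
pts = [sample_cap(1.0, 0.3, rng) for _ in range(1000)]
```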
Is the language $L = \{0^m1^n: m \neq n - 1 \}$ context free? Consider the language: $L = \{0^m1^n : m \neq n - 1 \}$ where $m, n \geq 0$ I tried for hours and hours but couldn't find its context free grammar. I was stuck on finding rules that enforce the condition $m \neq n - 1$. Would anyone help me out? Thanks in advance.
The trick is that you need extra "clock" letters $a,b, \dots$ in the language with non-reversible transformations between them, to define the different phases of the algorithm that builds the string. When the first phase is done, transform $a \to b$, then $b \to c$ after phase 2, etc, then finally remove the extra letters to leave a 0-1 string. The natural place for these extra markers is between the 0's and the 1's, or before the 0's, or after the 1's. The string can be built by first deciding whether $m - (n-1)$ will be positive or negative, then building a chain of 0's (resp. 1's) of some length, then inserting 01's in the "middle" of the string several times. Each of these steps can be encoded by production rules using extra letters in addition to 0 and 1.
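Independently of the construction sketched above, there is a direct way to see that $L$ is context-free (this grammar is my own and worth double-checking on small cases): for $m,n\ge0$ the condition $m\neq n-1$ is equivalent to "$m\ge n$ or $m\le n-2$", and each half is generated by a standard grammar, so $L$ is a union of two context-free languages:
$$S \to A \mid B$$
$$A \to 0A1 \mid C,\qquad C \to 0C \mid \varepsilon \qquad (\text{generates } 0^m1^n \text{ with } m\ge n)$$
$$B \to 0B1 \mid D,\qquad D \to D1 \mid 11 \qquad (\text{generates } 0^m1^n \text{ with } m\le n-2)$$
Indeed, $A$ derives $0^k0^j1^k$ (so $m-n=j\ge0$) and $B$ derives $0^k1^j1^k$ with $j\ge2$ (so $n-m=j\ge2$).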
Semi-partition or pre-partition For a given space $X$ a partition is usually defined as a collection of sets $E_i$ such that $E_i\cap E_j = \emptyset$ for $j\neq i$ and $X = \bigcup\limits_i E_i$. Has anybody met a name for a collection of sets $F_i$ such that $F_i\cap F_j = \emptyset$ for $j\neq i$ but * *$X = \overline{\bigcup\limits_i F_i}$ if $X$ is a topological space, or *$\mu\left(X\setminus\bigcup\limits_i F_i\right) = 0$ if $X$ is a measure space. I guess that semipartition or pre-partition should be the right term, but I've never met it in the literature.
Brian M. Scott wrote: In the topological case I'd simply call it a (pairwise) disjoint family whose union is dense in $X$; I've not seen any special term for it. In fact, I can remember seeing it only once: such a family figures in the proof that almost countable paracompactness, a property once studied at some length by M.K. Singal, is a property of every space.
$A\in M_{n}(C)$ and $A^*=-A$ and $A^4=I$ Let $A\in M_{n}(C)$ be a matrix such that $A^*=-A$ and $A^4=I$. I need to prove that the eigenvalues of $A$ are $i$ or $-i$ and that $A^2+I=0$. I didn't get to any smart conclusion. Thanks
Do you recall that hermitian matrices ($A^*=A$) must have real eigenvalues? Similiarly, skew-hermitian matrices, i.e. $A^*=-A$, must have pure imaginary eigenvalues. (see Why are all nonzero eigenvalues of the skew-symmetric matrices pure imaginary?) Also, since $A$ is skew-hermitian, then $A$ is normal too, i.e. $A^*A=AA^*$, so we can apply the spectral theorem: there exists a unitary matrix $U$ such that $A=UDU^{-1}$, where $D$ is a diagonal matrix, whose diagonal entries are the eigenvalues of $A$. Thus we know that $A^4=(UDU^{-1})^4=UD^4U^{-1}=I$, so $D^4=I$, so all the eigenvalues are 4th roots of unity, i.e. $1,-1,i,\text{ or} -i$. But we already know the eigenvalues are pure imaginary, so all the eigenvalues are $i$ or $-i$. So $D^2=-I$. Finally, we have $A^2=(UDU^{-1})^2=UD^2U^{-1}=U(-I)U^{-1}=-I$, i.e. $A^2+I=0$.
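A concrete instance (my own illustration, not from the answer): the real matrix $A=\begin{pmatrix}0&1\\-1&0\end{pmatrix}$ satisfies $A^*=-A$ and $A^4=I$, its characteristic polynomial is $t^2+1$ (eigenvalues $\pm i$), and indeed $A^2+I=0$. The check below uses plain Python complex arithmetic.

```python
def matmul(X, Y):
    """Product of two 2x2 complex matrices given as nested lists."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def conj_transpose(X):
    """Conjugate transpose X* of a 2x2 complex matrix."""
    return [[X[j][i].conjugate() for j in range(2)] for i in range(2)]

A = [[0 + 0j, 1 + 0j], [-1 + 0j, 0 + 0j]]   # skew-hermitian rotation by 90 degrees
I2 = [[1 + 0j, 0 + 0j], [0 + 0j, 1 + 0j]]

A2 = matmul(A, A)        # should be -I
A4 = matmul(A2, A2)      # should be I
neg_A = [[-a for a in row] for row in A]
```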
Set Theory: Proving Statements About Sets Let $A, B,$ and $C$ be arbitrary sets taken from the positive integers. I have to prove or disprove that: $$ \text{If }A ∩ B ∩ C = ∅, \text{then } (A ⊆ \sim B) \text{ or } (A ⊆ \sim C)$$ Here is my disproof using a counterexample: If $A = \{ \}$ the empty set, $B = \{2, 3\}$, $C = \{4, 5\}$. With these sets defined for $A, B,$ and $C$, the intersection includes the disjoint set, and then that would lead to $A$ being a subset of $B$ or $A$ being a subset of $C$ which counteracts that if $A ∩ B ∩ C = ∅$, then $(A ⊆ \sim B)$ or $(A ⊆ \sim C)$. Is this a sufficient proof?
As another counterexample, let $A=\mathbb{N}$, and $B$ and $C$ be disjoint sets with at least one element each. Elaborating: On the one hand, since $B$ and $C$ are disjoint, $$ A \cap B \cap C = \mathbb{N} \cap B \cap C = B \cap C = \emptyset. $$ On the other hand, $A = \mathbb{N}$ is not contained in the complement of $B$, since $B$ is not empty, and the same for $C$.
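The counterexample can be checked mechanically on a finite stand-in for $\mathbb N$ (my own sketch):

```python
N = set(range(1, 101))      # finite stand-in for the positive integers
A, B, C = N, {1}, {2}       # B and C disjoint and nonempty

def complement(S):
    """Complement relative to the ambient set N."""
    return N - S

hypothesis = (A & B & C == set())
conclusion = (A <= complement(B)) or (A <= complement(C))
```

The hypothesis holds while the conclusion fails, so the implication is disproved.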
Convergence of a double sum Let $(a_i)_{i=1}^\infty$ be a sequence of positive numbers such that $\sum_1^\infty a_i < \infty$. What can we say about the double series $$\sum_{i, j=1}^\infty a_{i+ j}^p\ ?$$ Can we find some values of $p$ for which it converges? I'm especially interested in $p=2$. Intuitively I'm inclined to think that the series converges for $p \ge 2$. This intuition comes from the continuum analog $f(x)= x^a, \quad x>1$: if $a<-1$ we have $$\int_1^\infty f(x)\ dx < \infty$$ and $F(x, y)=f(x+y)$ is $p$-integrable on $(1, \infty) \times (1, \infty)$ for $p \ge 2$.
To sum up, the result is false in general (see first part of the post below), trivially false for nonincreasing sequences $(a_n)$ if $p<2$ (consider $a_n=n^{-2/p}$) and true for nonincreasing sequences $(a_n)$ if $p\ge2$ (see second part of the post below). Rearranging terms, one sees that the double series converges if and only if the simple series $\sum\limits_{n=1}^{+\infty}na_n^p$ does. But this does not need to be the case. To be specific, choose a positive real number $r$ and let $a_n=0$ for every $n$ not the $p$th power of an integer (see notes) and $a_{i^p}=i^{-(1+r)}$ for every positive $i$. Then $\sum\limits_{n=1}^{+\infty}a_n$ converges because $\sum\limits_{i=1}^{+\infty}i^{-(1+r)}$ does but $na_{n}^p=i^{-pr}$ for $n=i^p$ hence $\sum\limits_{n=1}^{+\infty}na_n^p$ diverges for small enough $r$. Notes: (1) If $p$ is not an integer, read $\lfloor i^p\rfloor$ instead of $i^p$. (2) If the fact that some $a_n$ are zero is a problem, replace these by positive values which do not change the convergence/divergence of the series considered, for example add $2^{-n}$ to every $a_n$. To deal with the specific case when $(a_n)$ is nonincreasing, assume without loss of generality that $a_n\le1$ for every $n$ and introduce the integer valued sequence $(t_i)$ defined by $$ a_n\ge2^{-i} \iff n\le t_i. $$ In other words, $$ t_i=\sup\{n\mid a_n\ge2^{-i}\}. $$ Then $\sum\limits_{n=1}^{+\infty}a_n\ge u$ and $\sum\limits_{n=1}^{+\infty}na_n^p\le v$ with $$ u=\sum\limits_{i=0}^{+\infty}2^{-i}(t_i-t_{i-1}),\quad v=\sum\limits_{i=0}^{+\infty}2^{-ip-1}(t_{i+1}^2-t_i^2). $$ Now, $u$ is finite if and only if $\sum\limits_{i=0}^{+\infty}2^{-i}t_i$ converges and $v$ is finite if and only if $\sum\limits_{i=0}^{+\infty}2^{-ip}t_i^2$ does. For every $p\ge2$, one sees that $2^{-ip}t_i^2\le(2^{-i}t_i)^2$, and $\ell^1\subset\ell^2$, hence $u$ finite implies $v$ finite.
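For completeness, the rearrangement in the first sentence can be written out: grouping the pairs $(i,j)$ with $i,j\ge1$ by the value of $s=i+j$, there are exactly $s-1$ such pairs for each $s\ge2$, so
$$\sum_{i,j=1}^{\infty}a_{i+j}^p=\sum_{s=2}^{\infty}(s-1)\,a_s^p,$$
and this converges if and only if $\sum_n n\,a_n^p$ does.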
Determining which values to use in place of x in functions When solving partial fractions for integrations, solving x for two terms usually isn't all that difficult, but I've been running into problems with three term integration. For example, given $$\int\frac{x^2+3x-4}{x^3-4x^2+4x}$$ The denominator factored out to $x(x-2)^2$, which resulted in the following formulas $$ \begin{align*} \frac{x^2+3x-4}{x(x-2)^2}=\frac{A}{x}+\frac{B}{x-2}+\frac{C}{(x-2)^2}\\ x^2+3x-4= A(x-2)^2+Bx(x-2)+Cx\\\\ \text{when x=0, }A=-1 \text{ and x=2, }C=3 \end{align*} $$ This is where I get stuck, since nothing immediately pops out at me for values that would solve A and C for zero and leave some value for B. How do I find the x-value for a constant that is not immediately apparent?
To find the constants in the rational fraction $$\frac{x^2+3x-4}{x^3-4x^2+4x}=\frac{A}{x}+\frac{B}{x-2}+\frac{C}{(x-2)^2},$$ you may use any set of 3 values of $x$, provided that the denominator $x^3-4x^2+4x\ne 0$. The "standard" method is to compare the coefficients of $$\frac{x^2+3x-4}{x^3-4x^2+4x}=\frac{A}{x}+\frac{B}{x-2}+\frac{C}{(x-2)^2},$$ after multiplying this rational fraction by the denominator $x^3-4x^2+4x=x(x-2)^2$, and solve the resulting linear system in $A,B,C$. Since $$\begin{eqnarray*} \frac{x^{2}+3x-4}{x\left( x-2\right) ^{2}} &=&\frac{A}{x}+\frac{B}{x-2}+\frac{C}{\left( x-2\right) ^{2}} \\ &=&\frac{A\left( x-2\right) ^{2}}{x\left( x-2\right) ^{2}}+\frac{Bx\left( x-2\right) }{x\left( x-2\right) ^{2}}+\frac{Cx}{x\left( x-2\right) ^{2}} \\ &=&\frac{A\left( x-2\right) ^{2}+Bx\left( x-2\right) +Cx}{x\left( x-2\right) ^{2}} \\ &=&\frac{\left( A+B\right) x^{2}+\left( -4A-2B+C\right) x+4A}{x\left( x-2\right) ^{2}}, \end{eqnarray*}$$ if we equate the coefficients of the polynomials $$x^{2}+3x-4\equiv\left( A+B\right) x^{2}+\left( -4A-2B+C\right) x+4A,$$ we have the system $$\begin{eqnarray*} A+B &=&1 \\ -4A-2B+C &=&3 \\ 4A &=&-4, \end{eqnarray*}$$ whose solution is $$\begin{eqnarray*} B &=&2 \\ C &=&3 \\ A &=&-1. \end{eqnarray*}$$ Alternatively you could use the method indicated in parts A and B, as an example. A. We can multiply $f(x)$ by $x=x-0$ and $\left( x-2\right) ^{2}$ and let $x\rightarrow 0$ and $x\rightarrow 2$.
Since $$\begin{eqnarray*} f(x) &=&\frac{P(x)}{Q(x)}=\frac{x^{2}+3x-4}{x^{3}-4x^{2}+4x} \\ &=&\frac{x^{2}+3x-4}{x\left( x-2\right) ^{2}} \\ &=&\frac{A}{x}+\frac{B}{x-2}+\frac{C}{\left( x-2\right) ^{2}},\qquad (\ast ) \end{eqnarray*}$$ if we multiply $f(x)$ by $x$ and let $x\rightarrow 0$, we find $A$: $$A=\lim_{x\rightarrow 0}xf(x)=\lim_{x\rightarrow 0}\frac{x^{2}+3x-4}{\left( x-2\right) ^{2}}=\frac{-4}{4}=-1.$$ And we find $C$, if we multiply $f(x)$ by $\left( x-2\right) ^{2}$ and let $x\rightarrow 2$: $$C=\lim_{x\rightarrow 2}\left( x-2\right) ^{2}f(x)=\lim_{x\rightarrow 2}\frac{x^{2}+3x-4}{x}=\frac{2^{2}+6-4}{2}=3.$$ B. Now observing that $$P(x)=x^{2}+3x-4=\left( x+4\right) \left( x-1\right)$$ we can find $B$ by making $x=1$ and evaluating $f(1)$ on both sides of $(\ast)$, with $A=-1,C=3$: $$f(1)=0=2-B.$$ So $B=2$. Or we could make $x=-4$ in $(\ast)$ $$f(-4)=0=\frac{1}{3}-\frac{1}{6}B.$$ We again obtain $B=2$. Thus $$\frac{x^{2}+3x-4}{x\left( x-2\right) ^{2}}=-\frac{1}{x}+\frac{2}{x-2}+\frac{3}{\left( x-2\right) ^{2}}\qquad (\ast \ast )$$ Remark: If the denominator has complex roots, then a decomposition into real linear factors as above is not possible. For instance $$\frac{x+2}{x^{3}-1}=\frac{x+2}{(x-1)(x^{2}+x+1)}=\frac{A}{x-1}+\frac{Bx+C}{x^{2}+x+1}.$$ You should find $A=1,B=C=-1$: $$\frac{x+2}{x^{3}-1}=\frac{1}{x-1}-\frac{x+1}{x^{2}+x+1}.$$
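The decomposition $(\ast\ast)$ is easy to verify exactly with rational arithmetic (my own check, comparing both sides at a handful of points away from the poles $x=0$ and $x=2$):

```python
from fractions import Fraction as F

def lhs(x):
    """(x^2 + 3x - 4) / (x (x - 2)^2)."""
    return (x * x + 3 * x - 4) / (x * (x - 2) ** 2)

def rhs(x):
    """-1/x + 2/(x-2) + 3/(x-2)^2, the decomposition (**)."""
    return F(-1) / x + F(2) / (x - 2) + F(3) / (x - 2) ** 2

samples = [F(1), F(-4), F(5), F(1, 3), F(-7, 2)]
checks = [lhs(x) == rhs(x) for x in samples]
```

Agreement at more points than the degree of the denominator pins down the identity, since two rational functions with the same denominator that agree on enough points are equal.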