Optimum solution to a Linear programming problem If we have a feasible space for a given LPP (linear programming problem), how is it that its optimum solution lies on one of the corner points of the graphical solution? (I am here concerned only with those LPP's which have a graphical solution with more than one corner/end point.) I was asked to take this as a lemma in the class, but got curious about the proof. Any help is sincerely appreciated.
It's a special instance of the following general theorem: the maximum of a convex function $f$ on a compact convex set $S$ is attained at an extreme point of $S$. An extreme point of a convex set $S$ is a point in $S$ which does not lie in any open line segment joining two points of $S$. In your case, the "corner points of the graphical solution" are the only extreme points of the feasible region. It's easy to see that the feasible region of an LPP is convex. It's not always compact, and some LPPs indeed have no optimal solution despite having a nonempty feasible region. The linear objective function is clearly convex. If it is minimized instead of maximized, this can be reformulated as maximizing the negative of the objective function. I quite like the cited theorem, because it highlights that optimization can lead to problematic results for a large class of situations (because a solution at the boundary of the feasible region will become infeasible under perturbations, so it is not a robust solution). It's also similar to the bang-bang principle in optimal control.
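For a concrete numerical illustration rather than a proof, here is a small LP solved with scipy; the constraints are made up for the example, and scipy's availability is assumed:

```python
# Solve a toy LP and observe that the optimum lands on a corner (vertex)
# of the feasible region. maximize x + y s.t. x + 2y <= 4, 3x + y <= 6, x,y >= 0.
from scipy.optimize import linprog

# linprog minimizes, so negate the objective to maximize x + y
res = linprog(c=[-1, -1], A_ub=[[1, 2], [3, 1]], b_ub=[4, 6],
              bounds=[(0, None), (0, None)])
print(res.x)  # [1.6, 1.2] -- the vertex where x+2y=4 and 3x+y=6 intersect
```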
Stalk of coherent sheaf vanishing is the following true: if I have a coherent sheaf $F$ on a noetherian scheme $X$ with a point $x$ and the stalk $F_x$ is zero, then there is a neighborhood $U$ of $x$, such that the restriction of $F$ to $U$ is zero? Thank you
Yes. Since the problem is local, you can assume that $X$ is affine: $X = \mathrm{Spec} A$ and $F = \tilde{M}$, where $A$ is a noetherian ring and $M$ is a finite $A$-module. Let $P$ be a prime such that $M_P = 0$, and let $\{ x_1, \dots, x_n \}$ be a set of generators of $M$ as an $A$-module. Then there exist $s_i \in A \setminus P$ such that $s_i x_i = 0$. Pick $s = s_1 \cdots s_n$; then $F \vert_{D(s)} = 0$.
Question about all the homomorphisms from $\mathbb{Z}$ to $\mathbb{Z}$ An exercise in "A first course in Abstract Algebra" asked the following: Describe all ring homomorphisms from the ring $\mathbb{Z},+,\cdot$ to itself. I observed that for any such ring homomorphism the following has to hold: $$\varphi(1) = \varphi(1\cdot 1) = \varphi(1) \cdot \varphi(1)$$ In $\mathbb{Z}$ only two numbers exist whose square equals the number itself: 0 and 1. When $\varphi(1) = 0$ then $\varphi = 0$, since for all $n \in \mathbb{Z}$: $\varphi(n) = \varphi(n \cdot 1) = \varphi(n) \cdot \varphi(1) = \varphi(n) \cdot 0 = 0$. Now, when $\varphi(1) = 1$ I showed that $\varphi(n) = n$ using induction. Base case: $n = 1$, which is true by our assumption. Induction hypothesis: $\varphi(m) = m$ for $m < n$. Induction step: $\varphi(n) = \varphi((n-1) + 1) = \varphi(n-1) + \varphi(1) = n-1 + 1 = n$. Now I wonder whether you could show that $\varphi(n) = n$ when $\varphi(1) = 1$ without using induction, which seems overkill for this exercise. EDIT: Forgot about the negative $n$'s. Since $\varphi$ is also a group homomorphism of $(\mathbb{Z},+)$, we know that $\varphi(-n) = -\varphi(n)$. Thus, $$\varphi(-n) = -\varphi(n) = -n$$
I assume you are talking about Fraleigh's book. If so, he does not require that a ring homomorphism maps the multiplicative identity to itself. Follow his hint by concentrating on the possible values for $f(1)$. If $f$ is a (group) homomorphism for the group $(\mathbb{Z},+)$ and $f(1)=a$, then $f$ will reduce to multiplication by $a$. For what values of $a$ will you get a ring homomorphism? You will need to have $(mn)a=(ma)(na)$ for all pairs $(m,n)$ of integers. What can you conclude about the value of $a$? You still won't have a lot of homomorphisms.
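A brute-force check of the hint's condition on a finite sample of pairs (an illustration, not a proof; the ranges are arbitrary):

```python
# For which a does n -> n*a satisfy (mn)a == (ma)(na) on sample integer pairs?
candidates = [a for a in range(-10, 11)
              if all((m * n) * a == (m * a) * (n * a)
                     for m in range(-5, 6) for n in range(-5, 6))]
print(candidates)  # [0, 1] -- only the zero map and the identity survive
```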
Definition of manifold From Wikipedia: The broadest common definition of manifold is a topological space locally homeomorphic to a topological vector space over the reals. A topological manifold is a topological space locally homeomorphic to a Euclidean space. In both concepts, a topological space is homeomorphic to another topological space with richer structure than just topology. On the other hand, the homeomorphic mapping is only in the sense of topology, without referring to the richer structure. I was wondering what the purpose is of mapping from a set to another with richer structure, when the mapping only preserves the less rich structure shared by both domain and codomain? How is the extra structure on the codomain going to be used? Is it to induce the extra structure from the codomain to the domain via the inverse of the mapping? What does this induction look like for a manifold and for a topological manifold? Thanks!
The reason to use topological vector spaces as model spaces (for differential manifolds, that is) is that you can define the differential of a curve in a topological vector space. And you can use this to define the differential of curves in your manifold, i.e. do differential geometry. For more details see my answer here. Every finite dimensional topological vector space of dimension $n$ is isomorphic to $\mathbb{R}^n$ with its canonical topology, so there is not much choice. But in infinite dimensions things get really interesting :-)
Homology of $\mathbb{R}^3 - S^1$ I've been looking on the internet for a space for which I cannot write down the homology groups off the top of my head, and I came across this: Compute the homology of $X : = \mathbb{R}^3 - S^1$. I thought that if I stretch $S^1$ to make it very large then it looks like a line, so $\mathbb{R}^3 - S^1 \simeq (\mathbb{R}^2 - (0,0)) \times \mathbb{R}$. Then squishing down this space and retracting it a bit will make it look like a circle, so $(\mathbb{R}^2 - (0,0)) \times \mathbb{R} \simeq S^1$. Then I compute $ H_0(X) = \mathbb{Z}$ $ H_1( X) = \mathbb{Z}$ $ H_n(X) = 0 (n > 1)$ Now I suspect something is wrong here because if you follow the link you will see that the OP computes $H_2(X,A) = \mathbb{Z}$. I'm not sure why he computes the relative homologies but if the space is "nice" then the relative homologies should be the same as the absolute ones, so I guess my reasoning above is flawed. Maybe someone can point out to me what went wrong, and then also explain to me when $H(X,A) = H(X)$. Thanks for your help! Edit $\simeq$ here means homotopy equivalent.
Consider $X = \mathbb{R}^3 \setminus (S^1 \times \{ 0 \}) \subseteq \mathbb{R}^3$, $U = X \setminus z \text{-axis}$ and $V = B(0,1/2) \times \mathbb{R}$, where $B(0, 1/2)$ is the open ball with radius $1/2$ centered at the origin of $\mathbb{R}^2$. It is clear that $\{ U, V \}$ is an open cover of $X$. Now let's compute the homotopy type of $U$. Consider a deformation $f$ of $((0, +\infty) \times \mathbb{R}) \setminus \{(1,0) \}$ onto the circle of center $(1,0)$ and radius $1/2$. Revolving $f$ around the $z$-axis, we obtain a deformation of $U$ onto the doughnut ($2$-torus) of radii $R=1$ and $r = 1/2$. Since $V$ is contractible and $U \cap V$ is homotopy equivalent to $S^1$, the Mayer-Vietoris sequence gives the solution: $H_0(X) = H_1(X) = H_2(X) = \mathbb{Z}$ and $H_i(X) = 0$ for $i \geq 3$.
prove cardinality rule $|A-B|=|B-A|\rightarrow|A|=|B|$ I need to prove this: $|A-B|=|B-A|\rightarrow|A|=|B|$. I managed to come up with this: let $f:A-B\to B-A$, where $f$ is bijective. Then define $g\colon A\to B$ as follows: $$g(x)=\begin{cases} f(x)& x\in (A-B) \\ x& \text{otherwise} \\ \end{cases}$$ But I'm not managing to prove this function is surjective. Is it not? Or am I on the right path? If so, how do I prove it? Thanks
Note that $$\begin{align} |A| = |A \cap B| + |A \cap B^c| = |B \cap A| + |B \cap A^c| = |B|. \end{align}$$ Here $E^c$ denotes the complement of the set $E$ in the universal set $X$.
Incorrect manipulation of limits Here's my manipulation of a particular limit: $\displaystyle \lim\limits_{h\rightarrow 0}\Big[\frac{f(x+h)g(x) - f(x)g(x+h)}{h}\Big]$ Using the properties of limits: $\displaystyle \begin{align*} &=\frac{\lim\limits_{h\rightarrow 0}\Big[f(x+h)g(x) - f(x)g(x+h)\Big]}{\lim\limits_{h\rightarrow 0}h}\\ &=\frac{\lim\limits_{h\rightarrow 0}\Big[f(x+h)g(x)\Big] - \lim\limits_{h\rightarrow 0}\Big[f(x)g(x+h)\Big]}{\lim\limits_{h\rightarrow 0}h}\\ &=\frac{\lim\limits_{h\rightarrow 0}\Big[f(x+h)\Big]\lim\limits_{h\rightarrow 0}\Big[g(x)\Big] - \lim\limits_{h\rightarrow 0}\Big[f(x)\Big]\lim\limits_{h\rightarrow 0}\Big[g(x+h)\Big]}{\lim\limits_{h\rightarrow 0}h}\\ &=\frac{f(x)\lim\limits_{h\rightarrow 0}\Big[g(x)\Big] - f(x)\lim\limits_{h\rightarrow 0}\Big[g(x+h)\Big]}{\lim\limits_{h\rightarrow 0}h}\\ &=\frac{f(x)\Big(\lim\limits_{h\rightarrow 0}\Big[g(x)\Big] - \lim\limits_{h\rightarrow 0}\Big[g(x+h)\Big]\Big)}{\lim\limits_{h\rightarrow 0}h}\\ &=\frac{f(x)\Big(\lim\limits_{h\rightarrow 0}\Big[g(x) - g(x+h)\Big]\Big)}{\lim\limits_{h\rightarrow 0}h}\\ &=f(x)\lim\limits_{h\rightarrow 0}\Big(\frac{g(x) - g(x+h)}{h}\Big)\\ &=-f(x)g'(x)\end{align*}$ I'm pretty sure that my end result is incorrect, as I've used arbitrary functions for $f(x)$ and $g(x)$ and it didn't support my conclusion. I think that the factoring of $f(x)$ might be what is incorrect in my manipulation, but I'm not 100% sure. Could someone explain to me what I did wrong and why it is wrong? Which one of the limit "axioms" did I use incorrectly? Thank you.
The answers already posted fully answer your question. So what follows is not an answer to your question, but it may be helpful. Let us assume that $f$ and $g$ are differentiable at $x$. Note that $$f(x+h)g(x) - f(x)g(x+h)= f(x+h)g(x)+(f(x)g(x)-f(x)g(x))-f(x)g(x+h).$$ We have added $0$ in the middle, which is harmless. A trick that looks very similar was undoubtedly used in your book or notes to prove the product rule for differentiation. Rearranging a bit, and with some algebra, we find that $$f(x+h)g(x) - f(x)g(x+h)=(f(x+h)-f(x))g(x)-f(x)(g(x+h) -g(x)),$$ and therefore $$\frac{f(x+h)g(x) - f(x)g(x+h)}{h}=\frac{(f(x+h)-f(x))g(x)}{h}-\frac{f(x)(g(x+h) -g(x))}{h}.$$ The rest is up to you. Added stuff, for the intuition: The following calculation is way too informal, but will tell you more about what's really going on than the mysterious trick. When $h$ is close to $0$, $$f(x+h) \approx f(x)+hf'(x)$$ with the approximation error going to $0$ faster than $h$. Similarly, $$g(x+h) \approx g(x)+hg'(x).$$ Substitute these approximations into the top. Simplify. Something very pretty happens!
Proof for divisibility rule for palindromic integers of even length I am studying for a test and came across this in my practice materials. I can prove it simply for some individual cases, but I don't know where to start to prove the full statement. Prove that every palindromic integer in base $k$ with an even number of digits is divisible by $k+1$.
HINT $\rm\ \ mod\ \ x+1:\ \ f(x) + x^{n+1}\:(x^n\ f(1/x))\ \equiv\ f(-1) - f(-1)\:\equiv\: 0$ Remark $\ \ $ It is simple to verify that $\rm\ x^n\ f(1/x)\ $ is the reversal of a polynomial $\rm\:f\:$ of degree $\rm\:n\:,\:$ therefore the above is the general palindromic polynomial with even number of coefficients. See also the closely related question.
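For readers who want to see the statement in action, here is a quick empirical check (not a proof) over small numbers and bases:

```python
# Check: every base-k palindrome with an even number of digits is divisible by k+1.
def is_even_palindrome(n, k):
    digits = []
    while n:
        digits.append(n % k)
        n //= k
    return len(digits) % 2 == 0 and digits == digits[::-1]

for k in range(2, 11):
    assert all(n % (k + 1) == 0
               for n in range(1, 20000) if is_even_palindrome(n, k))
print("verified for bases 2..10")
```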
Integrate $\int\limits_{0}^{1} \frac{\log(x^{2}-2x \cos a+1)}{x} dx$ How do I solve this: $$\int\limits_{0}^{1} \frac{\log(x^{2}-2x \cos{a}+1)}{x} \ dx$$ Integration by parts is the only thing which I could think of, clearly that seems cumbersome. Substitution also doesn't work.
Please see $\textbf{Problem 4.30}$ in the book The Math Problems Notebook by Valentin Boju and Louis Funar (solutions are also given at the end of the book).
Transitive graph such that the stabilizer of a point has three orbits I am looking for an example of a finite graph such that its automorphism group is transitive on the set of vertices, but the stabilizer of a point has exactly three orbits on the set of vertices. I can't find such an example. Anyone has a suggestion?
Consider the Petersen graph. Its automorphism group is $S_5$, acting on the $2$-subsets of $\{1,\ldots,5\}$ (which can be seen as the vertices of the graph, two vertices being adjacent iff the subsets are disjoint). This action is transitive, and the stabilizer of a vertex $\{a,b\}$ has exactly three orbits on vertices: the vertex itself, the $2$-subsets disjoint from $\{a,b\}$ (its neighbours), and the $2$-subsets meeting $\{a,b\}$ in one point (its non-neighbours).
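Since the Petersen graph is distance-transitive with diameter 2, these three orbits are exactly the distance classes around a vertex. A quick check, assuming networkx is available:

```python
# Orbits of a vertex stabilizer = distance classes {v}, neighbours, non-neighbours.
import networkx as nx

G = nx.petersen_graph()
dist = nx.single_source_shortest_path_length(G, 0)
classes = {}
for v, d in dist.items():
    classes.setdefault(d, []).append(v)
print({d: len(vs) for d, vs in classes.items()})  # {0: 1, 1: 3, 2: 6}
```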
Convergence of sequence points to the point of accumulation Wikipedia says: Every finite interval or bounded interval that contains an infinite number of points must have at least one point of accumulation. If a bounded interval contains an infinite number of points and only one point of accumulation, then the sequence of points converges to the point of accumulation. [1] How could I imagine a bounded interval containing infinitely many points with a single limit point, where the sequence of those points converges to that limit point. Any examples? [1] http://en.wikipedia.org/wiki/Limit_point Thank you.
What about: $$\left \{\frac{1}{n}\Bigg| n\in\mathbb Z^+ \right \}\cup\{0\}$$ This set (or sequence $a_0=0,a_n=\frac{1}{n}$) is bounded in $[0,1]$ and has only one limit point, namely $0$. A slightly more complicated example would be: $$\left \{\frac{(-1)^n}{n}\Bigg| n\in\mathbb Z^+ \right \}\cup\{0\}$$ In the interval $[-1,1]$. Again the only limit point is $0$, but this time we're converging to it from "both sides". I believe that your confusion arises from misreading the text. The claim is that if within a bounded interval (in these examples, we can take $[-1,1]$) our set (or sequence) has infinitely many points and only one accumulation point, then it is the limit of the sequence. There is no claim that the interval itself has only this sequence, or only one limit/accumulation point.
Feller continuity of the stochastic kernel Given a metric space $X$ with a Borel sigma-algebra, the stochastic kernel $K(x,B)$ is such that $x\mapsto K(x,B)$ is a measurable function and $B\mapsto K(x,B)$ is a probability measure on $X$ for each $x$ Let $f:X\to \mathbb R$. We say that $f\in \mathcal C(B)$ if $f$ is continuous and bounded on $B$. Weak Feller continuity of $K$ means that if $f\in\mathcal C(X)$ then $F\in\mathcal C(X)$ where $$ F(x):=\int\limits_X f(y)K(x,dy). $$ I wonder if it implies that if $g\in \mathcal C(B)$ then $$ G(x):=\int\limits_Bg(y)K(x,dy) $$ also belongs to $\mathcal C(B)$?
To make it clearer, you can re-write $ G(x) = \int\limits_X g(y)\mathbf{1}_B(y) K(x,{\rm d} y)$. Now in general $g{\mathbf 1}_B$ is not continuous anymore so as Nate pointed out you should not expect $G$ to be such. However, if you take a continuous $g $ with $\overline{{\rm support}(g)} \subsetneq B$ then (I let you :) show this) $g{\mathbf 1}_B$ is still continuous and so is $G$ then.
Improper Double Integral Counterexample Let $f: \mathbf{R}^2\to \mathbf{R}$. I want to integrate $f$ over the entire first quadrant, call $D$. Then by definition we have $$\int \int_D f(x,y) dA =\lim_{R\to[0, \infty]\times[0, \infty]}\int \int_R f(x,y) dA$$ where $R$ is a rectangle. I remember vaguely that the above is true if $f$ is positive. In other words, if $f$ is positive, then the shape of the rectangle does not matter. So this brings me to my question: give a function $f$ such that the shape of the rectangles DO MATTER when evaluating the improper double integral.
Let $f$ be $1$ below the diagonal and $-1$ above it. Then the integral over every square $[0,R]^2$ vanishes by symmetry, while the integral over the rectangles $[0,2R]\times[0,R]$ equals $R^2$, which blows up as $R\to\infty$; so the value of the limit depends on the shape of the rectangles.
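To see this concretely, a small sketch computing the exact integral over rectangles $[0,a]\times[0,b]$ (the formulas below are elementary area computations):

```python
# f = 1 where y < x, -1 where y > x; integrate exactly over [0,a] x [0,b].
def integral(a, b):
    below = a * b - b * b / 2 if a >= b else a * a / 2  # area where y < x
    return 2 * below - a * b  # (area below diagonal) - (area above)

for R in (1, 10, 100):
    print(integral(R, R), integral(2 * R, R))
# squares give 0 for every R, while 2:1 rectangles give R^2 -> infinity
```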
Determining if a quadratic polynomial is always positive Is there a quick and systematic method to find out if a quadratic polynomial is always positive or may have positive and negative or always negative for all values of its variables? Say, for the quadratic inequality $$3x^{2}+8xy+5xz+2yz+7y^{2}+2z^{2}>0$$ without drawing a graph to look at its shape, how can I find out if this form is always greater than zero or has negative results or it is always negative for all non-zero values of the variables? I tried randomly substituting values into the variables but I could never be sure if I had covered all cases. Thanks for any help.
This is what Sylvester's criterion is for. Write your quadratic as $v^T A v$ where $v$ is a vector of variables $(x_1\ x_2\ \cdots\ x_n)$ and $A$ is a matrix of constants. For example, in your case, you are interested in $$\begin{pmatrix} x & y & z \end{pmatrix} \begin{pmatrix} 3 & 4 & 5/2 \\ 4 & 7 & 1 \\ 5/2 & 1 & 2 \end{pmatrix}\begin{pmatrix} x \\ y \\ z \end{pmatrix}$$ Observe that the off diagonal entries are half the coefficients of the quadratic. The standard terminology is that $A$ is "positive definite" if this quantity is positive for all nonzero $v$. Sylvester's criterion says that $A$ is positive definite if and only if the determinants of the top-left $k \times k$ submatrix are positive for $k=1$, $2$, ..., $n$. In our case, we need to test $$\det \begin{pmatrix} 3 \end{pmatrix} =3 \quad \det \begin{pmatrix}3 & 4 \\ 4 & 7\end{pmatrix} = 5 \quad \det \begin{pmatrix} 3 & 4 & 5/2 \\ 4 & 7 & 1 \\ 5/2 & 1 & 2 \end{pmatrix} = -67/4.$$ Since the last quantity is negative, Sylvester's criterion tells us that this quadratic is NOT positive definite.
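A quick numerical version of this test, assuming numpy is available:

```python
# Leading principal minors of the matrix from the answer above.
import numpy as np

A = np.array([[3.0, 4.0, 2.5],
              [4.0, 7.0, 1.0],
              [2.5, 1.0, 2.0]])
for k in (1, 2, 3):
    print(k, np.linalg.det(A[:k, :k]))
# 3.0, 5.0, -16.75 -- the 3x3 minor is negative, so A is not positive definite
```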
ODE problem shooting Please help me spot my mistake: I have an equation $$(u(x)^{-2} + 4u'(x)^2)^{\frac{1}{2}} - u'(x)\frac{d}{du'}(u(x)^{-2} + 4u'(x)^2)^{\frac{1}{2}} = k$$ where $k$ is a constant. I am quite sure that if I take $u(x) = \sqrt{y(x)}$ I would have the brachistochrone equation, hence I am expecting a cycloid equation if I let $u(x) = \sqrt{y(x)}$ in the result, but I don't seem to get it :( My workings are as follows: $$u(x)^{-2} + 4u'(x)^2- 4u'(x)^2 = k \times (u(x)^{-2} + 4u'(x)^2)^{\frac{1}{2}}$$ $$\implies u(x)^{-4} = k^2 \times (u(x)^{-2} + 4u'(x)^2)$$ $$\implies u'(x)= \frac{1}{2k}\sqrt{u(x)^{-4} - k^2u(x)^{-2}}$$ $$\implies \int \frac{1}{u \sqrt{u^2 - k^2}} du = \int \frac{1}{2k} dx$$ Change variable: let $v = \frac{u}{k}$ $$\implies \int \frac{1}{v \sqrt{v^2 - 1}} dv = \frac{x+a}{2}$$, where $a$ is a constant $$\implies \operatorname{arcsec}(v) = \frac{x+a}{2} $$ $$\implies \operatorname{arcsec}\left(\frac{\sqrt{y}}{k}\right) = \frac{x+a}{2}$$ which does not seem to describe a cycloid... Help would be very much appreciated! Thanks.
In the line after $$ 2k u' = \sqrt{ u^{-4} - k^2 u^{-2}} $$ (the fourth line of your computations), you made an error when dividing. After separating variables, the integral on the LHS should be $$ \int \frac{u^2}{\sqrt{1-k^2 u^2}} \mathrm{d}u $$
Solve this Inequality I am not sure how to solve this equation. Any ideas? $$(1+n) + 1+(n-1) + 1+(n-2) + 1+(n-3) + 1+(n-4) + \cdots + 1+(n-n) \ge 1000$$ Assuming $1+n = a$, the equation can be made to look like $$a+(a-1)+(a-2)+(a-3)+(a-4)+\cdots+(a-n) \ge 1000$$ How do I proceed from here, or is there another approach to solve this?
Here is one way (probably not the most elegant, though) to solve this. Either you write the left side as $(n+1)a-\frac{n(n+1)}{2}$ and replace $a$ by $n+1$, or you just realize, replacing $a$ by $n+1$ in your second inequality, that the left side is $\sum_{i=1}^{n+1}i=\frac{(n+1)(n+2)}{2}$. Your inequality is then $$(n+1)^2+(n+1)\geq 2000$$ You can for example consider the polynomial $P(X)=X^2+X-2000$ and study its sign, which should not be a problem.
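A direct search confirms the threshold (illustration only):

```python
# Smallest n with (n+1)(n+2)/2 >= 1000.
n = 0
while (n + 1) * (n + 2) // 2 < 1000:
    n += 1
print(n)  # 44, since 45*46/2 = 1035 >= 1000 while 44*45/2 = 990 falls short
```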
Normal subgroups vs characteristic subgroups It is well known that characteristic subgroups of a group $G$ are normal. Is the converse true?
I think the simplest example is the Klein four-group, which you can think of as a direct sum of two cyclic groups of order two, say $A\oplus B$. Because it is abelian, all of its subgroups are normal. However, there is an automorphism which interchanges the two direct summands $A$ and $B$, which shows that $A$ (and $B$) are normal, but not characteristic. (In fact, the other non-trivial proper subgroup, generated by the product of the generators of $A$ and $B$ also works.)
How can I use the mean value theorem in this problem? Use the Mean Value Theorem to estimate the value of $\sqrt{80}$. How should we choose $f(x)$? Thanks in advance. Regards
You want to estimate a value of $f(x) = \sqrt{x}$, so that's a decent place to start. The mean value theorem says that there's an $a \in (80, 81)$ such that $$ \frac{f(81) - f(80)}{81 - 80} = f'(a). $$ I don't know what $a$ is, but you know $f(81)$ and you hopefully know how to write down $f'$. How small can $f'(a)$ be?
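Numerically, taking $f'(a)\approx f'(81)=\frac{1}{18}$ gives a very good estimate:

```python
# MVT: sqrt(81) - sqrt(80) = 1/(2*sqrt(a)) for some a in (80, 81),
# so sqrt(80) is approximately 9 - 1/18.
import math

estimate = 9 - 1 / 18
print(estimate, math.sqrt(80))  # 8.9444..., 8.94427...
```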
Some questions on codimension 1 (1) "For an affine variety $V$ of $\mathbb{A}^{n}$ whose coordinate ring is a UFD, a closed subvariety of $V$ which has codimension 1 is cut out by a single equation." I looked at the proof of this statement, but I do not know where the UFD hypothesis is used... (2) I want to see a proof of the following statement: "Any closed subvariety of an affine normal variety with codimension 1 is cut out by a single equation."
As for (1): a closed subvariety of $V$ is defined by a prime ideal $p$ of the coordinate ring $k[V]$ of height $1$. If $k[V]$ is a UFD such a prime ideal is principal. As for (2): the statement is false. The closed subvariety is cut out by a single equation locally but not globally as in the case of a UFD.
Example of infinite field of characteristic $p\neq 0$ Can you give me an example of an infinite field of characteristic $p\neq0$? Thanks.
Another construction, using a tool from formal logic: the ultraproduct. The cartesian product of fields $$P = {\Bbb F}_p\times{\Bbb F}_{p^2}\times{\Bbb F}_{p^3}\times\cdots$$ isn't a field ("isn't a model of..." ) because it has zero divisors: $$(0,1,0,1,\cdots)(1,0,1,0\cdots)=(0,0,0,0,\cdots).$$ The solution is taking a quotient: let $\mathcal U$ be a nonprincipal ultrafilter on $\Bbb N$. Define $$(a_1,a_2,\cdots)\sim(b_1,b_2,\cdots)$$ when $$\{n\in\Bbb N\,\vert\, a_n=b_n\}\in\mathcal U.$$ The quotient $F=P/\sim$ will be an infinite field of characteristic $p$.
Prove convexity/concavity of a complicated function Can anyone help me to prove the convexity/concavity of the following complicated function? I have tried a lot of methods (definition, 1st derivative, etc.), but this function is so complicated that I finally couldn't prove it... However, plotting it with many different parameters, it always looks concave in $\rho$... $$ f\left( \rho \right) = \frac{1}{\lambda }(M\lambda \phi - \rho (\phi - \Phi )\ln (\rho + M\lambda ) + \frac{1}{{{e^{(\rho + M\lambda )t}}\rho + M\lambda }}\cdot( - (\rho + M\lambda )({e^{(\rho + M\lambda )t}}{\rho ^2}t(\phi - \Phi ) ) $$ $$+ M\lambda (\phi + \rho t\phi - \rho t\Phi )) + \rho ({e^{(\rho + M\lambda )t}}\rho + M\rho )(\phi - \Phi )\ln ({e^{(\rho + M\lambda )t}}\rho + M\lambda ))$$ Note that $\rho > 0$ is the variable, and $M>0, \lambda>0, t>0, \phi>0, \Phi>0 $ are constants with any possible positive values...
I am a newcomer so would prefer to just leave a comment, but alas I see no "comment" button, so I will leave my suggestion here in the answer box. I have often used a Nelder-Mead "derivative free" algorithm (fminsearch in matlab) to minimize long and convoluted equations like this one. If you can substitute the constraint equation $g(\rho)$ into $f(\rho)$ somehow, then you can input this as the objective function into the algorithm and get the minimum, or at least a local minimum. You could also try heuristic methods like simulated annealing or great deluge. Heuristic methods would give you a better chance of getting the global minimum if the solution space has multiple local minima. In spite of their scary name, heuristic methods are actually quite simple algorithms. As for proving the concavity I don't see the problem. You mention in your other post that both $g(\rho)$ and $f(\rho)$ have first and second derivatives, so it should be straightforward, right?
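For concreteness, here is a generic sketch of that suggestion in Python, where scipy's Nelder-Mead plays the role of Matlab's fminsearch; the objective below is a hypothetical placeholder, not the actual $f(\rho)$:

```python
# Derivative-free minimization sketch; plug the real f(rho) in here.
from scipy.optimize import minimize

def objective(v):
    rho = v[0]                      # scipy passes a 1-D array
    return (rho - 2.0) ** 2 + 1.0   # hypothetical stand-in for f(rho)

res = minimize(objective, x0=[1.0], method='Nelder-Mead')
print(res.x, res.fun)  # minimizer and minimum value (a local one, in general)
```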
When can a random variable be written as a sum of two iid random variables? Suppose $X$ is a random variable; when do there exist two random variables $X',X''$, independent and identically distributed, such that $$X = X' + X''$$ My natural impulse here is to use Bochner's theorem but that did not seem to lead anywhere. Specifically, the characteristic function of $X$, which I will call $\phi(t)$, must have the property that we can a find a square root of it (e.g., some $\psi(t)$ with $\psi^2=\phi$) which is positive definite. This is as far as I got, and its pretty unenlightening - I am not sure when this can and can't be done. I am hoping there is a better answer that allows one to answer this question merely by glancing at the distribution of $X$.
I think your characteristic function approach is the reasonable one. You take the square root of the characteristic function (the one that is $+1$ at 0), take its Fourier transform, and check that this is a positive measure. In the case where $X$ takes only finitely many values, the characteristic function is a finite sum of exponentials with positive coefficients, and the criterion becomes quite manageable: you need the square root to be a finite sum of exponentials with positive coefficients. More general cases can be more difficult.
Elementary central binomial coefficient estimates (1) How to prove that $\quad\displaystyle\frac{4^{n}}{\sqrt{4n}}<\binom{2n}{n}<\frac{4^{n}}{\sqrt{3n+1}}\quad$ for all $n > 1$? (2) Does anyone know any better elementary estimates? Attempt. We have $$\frac1{2^n}\binom{2n}{n}=\prod_{k=0}^{n-1}\frac{2n-k}{2(n-k)}=\prod_{k=0}^{n-1}\left(1+\frac{k}{2(n-k)}\right).$$ Then we have $$\left(1+\frac{k}{2(n-k)}\right)>\sqrt{1+\frac{k}{n-k}}=\frac{\sqrt{n}}{\sqrt{n-k}}.$$ So maybe, for the lower bound, we have $$\frac{n^{\frac{n}{2}}}{\sqrt{n!}}=\prod_{k=0}^{n-1}\frac{\sqrt{n}}{\sqrt{n-k}}>\frac{2^n}{\sqrt{4n}}.$$ By Stirling, $n!\approx \sqrt{2\pi n}\left(\frac{n}{e}\right)^n$, so the lhs becomes $$\frac{e^{\frac{n}{2}}}{(2\pi n)^{\frac14}},$$ but this isn't $>\frac{2^n}{\sqrt{4n}}$.
Here is a better estimate of the quantity. I am not going to prove it here, but since I know of it I am sharing it along with the reference: $$\frac{1}{2\sqrt{n}} \le {2n \choose n}2^{-2n} \le \frac{1}{\sqrt{2n}}$$ Refer to page 590 of the textbook Boolean Function Complexity by Stasys Jukna.
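Both pairs of bounds can be checked for a range of $n$ with Python's exact integer arithmetic (an empirical check, not a proof):

```python
# Check both pairs of bounds; math.comb is exact, and big-int / big-int
# true division gives a correctly rounded float.
import math

for n in range(2, 1001):
    r = math.comb(2 * n, n) / 4**n
    assert 1 / math.sqrt(4 * n) < r < 1 / math.sqrt(3 * n + 1)  # question's bounds
    assert 1 / (2 * math.sqrt(n)) <= r <= 1 / math.sqrt(2 * n)  # Jukna's bounds
print("both pairs of bounds hold for 2 <= n <= 1000")
```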
The smallest subobject $\sum{A_i}$ containing a family of subobjects {$A_i$} In an Abelian category $\mathcal{A}$, let {$A_i$} be a family of subobjects of an object $A$. How to show that if $\mathcal{A}$ is cocomplete(i.e. the coproduct always exists in $\mathcal{A}$), then there is a smallest subobject $\sum{A_i}$ of $A$ containing all of $A_i$? Surely this $\sum{A_i}$ cannot be the coproduct of {$A_i$}, but I have no clue what it should be.
You are quite right that it can't be the coproduct, since that is in general not a subobject of $A$. Here are two ways of constructing the desired subobject: (1) As Pierre-Yves suggested in the comments, the easiest way is to take the image of the canonical map $\bigoplus_i A_i \to A$. This works in any cocomplete category with unique epi-mono factorisation. (2) Alternatively, the subobject $\sum A_i$ can be constructed by taking the colimit over the semilattice of the $A_i$ and their intersections. This construction can be carried out in any bicomplete category, but is not guaranteed to give a subobject of $A$ unless the category is nice enough.
Prove this number fact Prove that $x \neq 0,y \neq 0 \Rightarrow xy \neq 0$. Suppose $xy = 0$. Then $\frac{xy}{xy} = 1$. Can we say that $\frac{xy}{xy} = 0$ and hence $1 = 0$ which is a contradiction? I thought $\frac{0}{0}$ was undefined.
If $xy=0$, then $x=0$ or $y=0$. So, by contraposition, if it is not the case that "$x=0$ or $y=0$", then it is not the case that $xy=0$. By De Morgan's laws, this becomes: if $x\neq 0$ and $y\neq 0$, then $xy\neq 0$. As for your argument: if $xy=0$, we cannot say that $xy/xy=1$. Division is only defined when the denominator is nonzero, so $xy/xy=1$ holds only on the condition that "/" is defined there. Hence the step "Suppose $xy=0$. Then $xy/xy=1$." isn't correct, and indeed $\frac{0}{0}$ is undefined.
Is $\mathbb{Q}[2^{1/3}]$ a field? Is $\mathbb{Q}[2^{1/3}]=\{a+b2^{1/3}+c2^{2/3};a,b,c \in \mathbb{Q}\}$ a field? I have checked that $b2^{1/3}$ and $c2^{2/3}$ both have inverses, $\frac{2^{2/3}}{2b}$ and $\frac{2^{1/3}}{2c}$, respectively. There are some elements with $a,b,c \neq 0$ that have inverses, as $1+1*2^{1/3}+1*2^{2/3}$, whose inverse is $2^{1/3}-1$. My problem is that is that I can't seem to find a formula for the inverse, but I also can't seem to find someone who doesn't have an inverse. Thanks for your time.
A neat way to confirm that it is a field: Since $\mathbb{Q}$ is a field, $\mathbb{Q}[x]$ is a PID. $\mathbb{Q}[2^{1/3}] \cong \mathbb{Q}[x] / (x^3 - 2)$. Now, $x^3 - 2$ is irreducible over $\mathbb{Q}$, since if it weren't, there would be a rational root to $x^3 - 2$. Because $\mathbb{Q}[x]$ is a PID and the polynomial is irreducible over $\mathbb{Q}$, $(x^3 - 2)$ is a maximal ideal in $\mathbb{Q}[x]$. By the Correspondence Theorem of Ideals, we see that as $(x^3 - 2)$ is maximal, $\mathbb{Q}[x] / (x^3 - 2)$ must be a field.
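The same conclusion can be reached computationally: the extended Euclidean algorithm in $\mathbb{Q}[x]$ produces the inverse explicitly. A sketch assuming sympy is available:

```python
# Invert 1 + 2^(1/3) + 2^(2/3) by working modulo x^3 - 2 in Q[x]:
# gcdex returns (s, t, h) with s*(x^3 - 2) + t*(x^2 + x + 1) = h.
from sympy import symbols, gcdex

x = symbols('x')
s, t, h = gcdex(x**3 - 2, x**2 + x + 1)
print(t, h)  # t = x - 1, h = 1, so 2^(1/3) - 1 is the inverse, as in the question
```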
$f_A(x)=(x+2)^4x^4$, $m_A(x)=(x+2)^2x^2$- What can I know about $A$? $A$ is a matrix over $\mathbb R$ about which I know the following information: $f_A(x)=(x+2)^4x^4$ - characteristic polynomial; $m_A(x)=(x+2)^2x^2$ - minimal polynomial. I'm trying to find out (i) $A$'s rank (ii) $\dim \ker(A+2I)^2$ (iii) $\dim \ker (A+2I)^4$ (iv) the characteristic polynomial of $B=A^2-4A+3I$. I believe that I don't have enough information to determine any of the above. By the power of $x$ in the minimal polynomial I know that the biggest Jordan block of eigenvalue 0 is of size 2, so there can be two options for the Jordan blocks of this eigenvalue: $(J_2(0),J_2(0))$ or $(J_2(0),J_1(0),J_1(0))$, therefore $A$'s rank can be $2$ or $3$. If I'm wrong, please correct me. How can I compute the rest? Thanks for the answers.
If the Jordan form of $A$ is $C$, let $P$ be invertible such that $A=PCP^{-1}$. Then $$(A+2I)^2=P(C+2I)^2P^{-1}\;\rightarrow\; \dim\ker((A+2I)^2)=\dim\ker((C+2I)^2)$$ and you know exactly how $(C+2I)^2$ looks (well, at least the part relevant to the kernel). The same operation should help you solve the rest of the problems.
Radical ideal of $(x,y^2)$ How does one show that the radical of $(x,y^2)$ is $(x,y)$ over $\mathbb{Q}[x,y]$? I have no idea how to do, please help me.
Recall that the radical of an ideal $\mathfrak{a}$ is equal to the intersection of the primes containing $\mathfrak{a}$. Here, let $\mathfrak{p}$ be a prime containing $(x, y^2)$. Then $y^2 \in \mathfrak{p}$ implies $y \in \mathfrak{p}$. Can you finish it off from there?
Ratio of circumference to radius I know that the ratio of the circumference to the diameter is Pi - what about the ratio of the circumference to the radius? Does it have any practical purpose when we have Pi? Is it called something (other than 2 Pi)?
That $\pi$ and $2 \pi$ have a very simple relationship to each other sharply limits the extent to which one can be more useful or more fundamental than the other. However, there are probably more formulas that are simpler when expressed using $2\pi$ instead of $\pi$, than the other way around. For example, there is often an algebraic expression involving something proportional to $(2\pi)^n$ and if expressed using powers of $\pi$ this would introduce factors of $2^n$.
Stalks of the graph of a morphism I am interested in the graph $\Gamma_f$ of a morphism $f: X\rightarrow Y$ between two sufficiently nice schemes $X,Y$. One knows that it is a closed subset of $X\times Y$ (when the schemes are nice, say varieties over a field). I would like to know the following: if you endow it with the reduced structure, what are the stalks of it's structure sheaf in a point $(x,f(x))$ ? Thanks you very much!
Let $f:X\to Y$ be a morphism of $S$-schemes. The graph morphism $\gamma_f:X\to X\times_S Y$ is the pull-back of the diagonal morphism $\delta: Y\to Y\times_S Y$ along $f\times id_Y: X\times_S Y \to Y\times_S Y$. This implies that if $\delta$ is a closed embedding (i.e. $Y$ is separated over $S$) so is $\gamma_f$. So $\gamma_f$ induces an isomorphism between $X$ and its image $\Gamma_f \subset X\times Y$. In particular, when $X$ is reduced (e.g. a variety) the image carries the reduced structure, and the stalk of the structure sheaf of $\Gamma_f$ at $(x, f(x))$ is just $\mathcal{O}_{X,x}$.
$A\in M_n(\mathbb C)$ invertible and non-diagonalizable matrix. Prove $A^{2005}$ is not diagonalizable $A\in M_n(\mathbb C)$ is an invertible, non-diagonalizable matrix. I need to prove that $A^{2005}$ is not diagonalizable as well. I am also asked whether this holds for $A\in M_n(\mathbb R)$. (Clearly a question from 2005.) This is what I did: if $A\in M_n(\mathbb C)$ is invertible then $0$ is not an eigenvalue. We can look at its Jordan form since we are over $\mathbb C$; it is not nilpotent, since $0$ is not an eigenvalue, and it has at least one $1$ on its superdiagonal since $A$ is not diagonalizable. Let $P$ be the matrix of a Jordan basis, so $P^{-1}AP=J$ and $P^{-1}A^{2005}P=J^{2005}$, but this leads me nowhere. I tried to suppose that $A^{2005}$ is diagonalizable, so that $P^{-1}A^{2005}P=D$ where $D$ is diagonal and we can take a 2005th root of each eigenvalue, but how can I show that this is what a diagonalizable $A$ would have to look like, to get a contradiction? Thanks
As luck would have it, the implication: $A^n$ diagonalizable and invertible $\Rightarrow A$ diagonalizable, was discussed in the XKCD forum recently. See my answer there as well as further discussion in another thread.
A sufficient condition for linearity? If $f$ is a linear function (defined on $\mathbb{R}$), then for each $x$, $f(x) – xf’(x) = f(0)$. Is the converse true? That is, is it true that if $f$ is a differentiable function defined on $\mathbb{R}$ such that for each $x$, $f(x) – xf’(x) = f(0)$, then $f$ is linear?
If $f\in C^2$ (it is twice differentiable, and $f''$ is continuous), then the answer is yes; I don't know if it's necessarily true without this hypothesis. If $f(x)-xf'(x)=f(0)$ for all $x$, then $$f(x)=xf'(x)+f(0),$$ so that $$f'(x)=f'(x)+xf''(x)$$ $$0=xf''(x)$$ This shows that $f''(x)=0$ for all $x\neq0$, but because $f''$ is continuous this forces $f''(x)=0$ everywhere. Thus $f'$ must be constant, and thus $f$ must be linear.
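A sketch of checking this with sympy's ODE solver, treating $f(0)$ as a constant $c$ (this assumes sympy's dsolve handles the first-order linear form of the equation, which it should):

```python
# Solve f(x) - x*f'(x) = c; every solution should be linear.
from sympy import Function, Eq, dsolve, symbols

x, c = symbols('x c')
f = Function('f')
print(dsolve(Eq(f(x) - x * f(x).diff(x), c), f(x)))
# Eq(f(x), C1*x + c) -- a linear function, as claimed
```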
Prove that all even integers $n \neq 2^k$ are expressible as a sum of consecutive positive integers How do I prove that any even integer $n \neq 2^k$ is expressible as a sum of consecutive positive integers (more than 2 consecutive positive integers)? For example: 14 = 2 + 3 + 4 + 5, 84 = 9 + 10 + ... + 15, i.e. $n = k + (k+1) + (k+2) + \cdots$ with $n \neq 2^k$.
(The following is merely a simplification of @Arturo Magidin's proof. So, please do not upvote my post.) Suppose $S=k+(k+1)+\ldots+\left(k+(n-1)\right)$ for some $k\ge1$ and $n\ge2$. Then $$ S = nk + \sum_{j=0}^{n-1} j = nk+\frac{n(n-1)}{2} = \frac{n(n+2k-1)}{2}. $$ Hence $S\in\mathbb{N}$ can be written as a consecutive sum of integers if and only if $2S = nN$ with $N\ (=n+2k-1) >n\ge2$ and $N, n$ are of different parities. If $S$ is a power of $2$, so is $2S$. Hence the above factorization cannot be done. If $S$ is not a power of $2$ (whether it is even is immaterial), we may write $2S=ab$, where $a\ge3$ is odd and $b\ge2$ is a power of $2$. Therefore we can put $N=\max(a,b)$ and $n=\min(a,b)$.
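A brute-force confirmation of the criterion for small $S$, using the closed form $nk + \frac{n(n-1)}{2}$ for a run of $n$ terms starting at $k$ (an empirical check, not a proof):

```python
# S is a sum of at least two consecutive positive integers iff S is not a power of 2.
def has_consecutive_sum(S):
    # need k = (S - n*(n-1)/2)/n to be a positive integer for some n >= 2
    return any((S - n * (n - 1) // 2) % n == 0 and (S - n * (n - 1) // 2) // n >= 1
               for n in range(2, S + 1))

for S in range(3, 200):
    is_power_of_2 = S & (S - 1) == 0
    assert has_consecutive_sum(S) == (not is_power_of_2)
print("criterion verified for 3 <= S < 200")
```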
Formula for $1^2+2^2+3^2+...+n^2$ In example to get formula for $1^2+2^2+3^2+...+n^2$ they express $f(n)$ as: $$f(n)=an^3+bn^2+cn+d$$ also known that $f(0)=0$, $f(1)=1$, $f(2)=5$ and $f(3)=14$ Then this values are inserted into function, we get system of equations solve them and get a,b,c,d coefficients and we get that $$f(n)=\frac{n}{6}(2n+1)(n+1)$$ Then it's proven with mathematical induction that it's true for any n. And question is, why they take 4 coefficients at the beginning, why not $f(n)=an^2+bn+c$ or even more? How they know that 4 will be enough to get correct formula?
There are several ways to see this: (1) As Rasmus pointed out in a comment, you can estimate the sum by an integral. (2) Imagine the numbers being added as cross sections of a quadratic pyramid. Its volume is cubic in its linear dimensions. (3) Apply the difference operator $\Delta g(n)=g(n+1)-g(n)$ to $f$ repeatedly. Then apply it to a polynomial and compare the results. [Edit in response to the comment:] An integral can be thought of as a limit of a sum. If you sum over $k^2$, you can look at this as adding up the areas of rectangles with width $1$ and height $k^2$, where each rectangle extends from $k-1$ to $k$ in the $x$ direction. (If that's not clear from the words, try drawing it.) Now if you connect the points $(k,k^2)$ by the continuous graph of the function $f(x)=x^2$, the area under that graph is an approximation of the area of the rectangles (and vice versa). So we have $$1^2+\dotso+n^2\approx\int_0^nk^2\mathrm dk=\frac13n^3\;.$$
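Point 3 is easy to see experimentally: each forward difference lowers the degree by one, the 3rd difference of the sequence is constant, and the 4th vanishes, which is exactly why a cubic with 4 coefficients suffices. A quick illustration:

```python
# Repeated forward differences of f(n) = 1^2 + ... + n^2.
def f(n):
    return sum(k * k for k in range(1, n + 1))

seq = [f(n) for n in range(10)]
for step in range(5):
    print(seq)
    seq = [b - a for a, b in zip(seq, seq[1:])]
# rows: the sums; the squares; 3,5,7,...; constant 2s; then all zeros
```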
Unique ways to assign P boxes to C bags? How many ways can I arrange $C$ unlabeled bags into $P$ labeled boxes such that each box receives at least $S$ bags (where C > S and C > P)? Assume that I can combine bags to fit a box. I have just learnt that there are $\binom{C-1}{P-1}$ unique ways to place $C$ balls into $P$ bags so that no bag is empty, but I am unable to get the answer for the above question from this explanation. For example, if there are 2 Boxes, 3 Bags and each Box should get 1 Bag, then there are two ways: (2,1) and (1,2). Could you please help me to get this? Thank you.
First reduce the number of bags by subtracting the required minimum number in each box. Using your notation: $C' = C-SP$. Now you freely place the remaining $C'$ items into $P$ boxes, which can be done in $(C'+P-1)!/C'!(P-1)!$ ways. Take an example: 4 boxes, 24 bags, and each box should get 6 bags. Then $C'= 24-6\cdot 4 = 0$, and $(0+4-1)!/0!(4-1)!=1$. There is only one way! In the referenced link they use $S=1$, which makes $C'=C-P$, hence the formula $\binom{C-1}{P-1}$. There is a nice visualization for this, so you don't have to remember the formula. Let's solve the case of 3 identical items in 3 bags (after reducing by the required minimum), where items are not distinguishable. Assume we are placing 2 |'s and 3 x's in 5 slots. Some cases are (do all as an exercise): |xxx| x|x|x xxx|| ... Since there are 5 things (|'s and x's) there are $5!$ ways of distributing them. However, we don't differentiate between individual |'s or between individual x's, so each arrangement is counted $2!3!$ times. The total number of "unique ways" is $5!/(2!3!) = 10$. The \$1M visualization trick is thinking of the |'s as the bag (usually stated as box) boundaries, where the leftmost and rightmost bags are one-sided. Note that the number of bags is one more than the number of boundaries. The formula you'll derive is $$\binom{\text{bags}+\text{items}-1}{\text{items}}$$
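A brute-force cross-check of the formula on a small, arbitrary example:

```python
# Count distributions of C identical items into P labelled boxes, >= S each,
# and compare with comb(C' + P - 1, P - 1) where C' = C - S*P.
from itertools import product
from math import comb

def count(C, P, S):
    return sum(1 for way in product(range(C + 1), repeat=P)
               if sum(way) == C and all(w >= S for w in way))

C, P, S = 8, 3, 2
Cp = C - S * P
print(count(C, P, S), comb(Cp + P - 1, P - 1))  # both give 6
```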
Maximizing the sum of two numbers, the sum of whose squares is constant How could we prove that if the sum of the squares of two numbers is a constant, then the sum of the numbers would have its maximum value when the numbers are equal? This result is also true for more than two numbers. I tested the results by taking various values and brute-force-ish approach. However I am interested to know a formal proof of the same. This is probably an mild extention to this problem. I encountered this result while searching for a easy solution for the same.
Here's the pedestrian, Calculus I method: Let $x$ and $y$ be the two numbers. The condition "the sum of the squares is a constant" means that there is a fixed number $c$ such that $x^2+y^2=c$. That means that if you know one of the two numbers, say, $x$, then you can figure out the absolute value of the other one: $y^2 = c-x^2$, so $|y|=\sqrt{c-x^2}$. Now, since you want to find the maximum of the sum, then you can restrict yourself to the positive values of $x$ and $y$ (the condition on the sum of squares doesn't restrict the sign). So we may assume $x\geq 0$, $y\geq 0$, and $y=\sqrt{c-x^2}$. And now you want to find the maximum of $x+y = x+\sqrt{c-x^2}$. Thus, this reduces to finding the maximum value of $S(x) = x+\sqrt{c-x^2}$ on the interval $[0,\sqrt{c}]$. By the Extreme Value Theorem, the maximum will be achieved either at an endpoint or at a critical point of $S(x)$. At $0$ we get $S(0) = \sqrt{c}$; at $\sqrt{c}$ we get $S(\sqrt{c}) = \sqrt{c}$. As for critical points, $$S'(x) = 1 - \frac{x}{\sqrt{c-x^2}}.$$ The critical points are $x=\sqrt{c}$ (where $S'(x)$ is not defined), and the point where $x=\sqrt{c-x^2}$, or $2x^2=c$; hence $x^2=\frac{c}{2}$ (which means $y^2=\frac{c}{2}$ as well). At $x=\sqrt{\frac{c}{2}}$, $S(x) = 2\sqrt{\frac{c}{2}} = \sqrt{2c}$. This is clearly the maximum. For the problem with $k$ variables, $k\gt 2$, the "pedestrian method" involves Multivariable calculus and Lagrange multipliers.
How to calculate number of lumps of a 1D discrete point distribution? I would like to calculate the number of lumps of a given set of points. Defining "number of lumps" as "the number of groups with points at distance 1" Supose we have a discrete 1D space in this segment For example N=15 . . . . . . . . . . . . . . . 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 Then we have a set of M "marks" distributed, For example M=8 Distributed all left: x x x x x x x x . . . . . . . 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 Groups with points at minimal distance = 1 (minimum) Distributed divided by two: x x x x . . . . . . . x x x x 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 Groups with points at minimal distance = 2 Equi-distributed : x . x . x . x . x . x . x . x 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 Groups with points at minimal distance = 8 (maximum) (perhaps other answer here could be "zero lumps" ?) Other distribution, etc: x x . . x x . . x x . . x x . 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 Groups with points at minimal distance = 4 It's quite obvious algorithmically, just walking the segment, and count every "rising edge", number of times it passes from empty to a point. But I would like to solve it more "mathematically", to think the problem in an abstract way, having a 1D math solution perhaps would help to scale the concept to higher dimentions, where distance is complex ("walking the segment" trick won't work anymore), (not to mention a discrete metric space).. How can I put that into an equation, a weighted sum or something like that? Thanks for any help
If you are looking for a general formula that, given the points, returns the number of clusters (I think that "cluster" is a more common name than "lump" in this context), I'm afraid you won't find one. The problem is quite complicated, and there are many algorithms (google for hierarchical clustering, percolation). For your particular case (discrete grid, and a threshold distance to nearest neighbours as the criterion for clusters) the Hoshen-Kopelman algorithm seems appropriate.
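For completeness, the 1D "walk the segment" idea from the question fits in a couple of lines (this is the rising-edge count, not one of the general algorithms mentioned above):

```python
# A cluster starts wherever an occupied cell follows an empty one
# (with an implicit empty cell before the start of the segment).
def count_clusters(cells):
    padded = [0] + list(cells)
    return sum(1 for a, b in zip(padded, padded[1:]) if a == 0 and b == 1)

print(count_clusters([1,1,0,0,1,1,0,0,1,1,0,0,1,1,0]))  # 4, as in the question
print(count_clusters([1,0,1,0,1,0,1,0,1,0,1,0,1,0,1]))  # 8 (equidistributed)
```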
How do Equally Sized Spheres Fit into Space? How much volume do spheres take up when filling a rectangular prism of a shape? I assume it's somewhere in between $\frac{3}{4}\pi r^3$ and $r^3$, but I don't know where. This might be better if I broke it into two questions: First: How many spheres can fit into a given space? Like, packed optimally. Second: Given a random packing of spheres into space, how much volume does each sphere account for? I think that's just about as clear as I can make the question, sorry for anything confusing.
You can read a lot about these questions on Wikipedia. Concerning the random version, there are some links at Density of randomly packing a box. The accepted answer links to a paper that "focuses on spheres".
Throw a die $N$ times, observe results are a monotonic sequence. What is probability that all 6 numbers occur in the sequence? I throw a die $N$ times and the results are observed to be a monotonic sequence. What is probability that all 6 numbers occur in the sequence? I'm having trouble with this. There are two cases: when the first number is 1, and when the first number is 6. By symmetry, we can just consider one of them and double the answer at the end. I've looked at individual cases of $N$, and have that For $ N = 6 $, the probability is $ \left(\frac{1}{6}\right)^2 \frac{1}{5!} $. For $ N = 7 $, the probability is $ \left(\frac{1}{6}\right)^2 \frac{1}{5!}\left(\frac{1}{6} + \frac{1}{5} + \frac{1}{4} + \frac{1}{3} + \frac{1}{2} + 1\right) $. I'm not sure if the above are correct. When it comes to $ N = 8 $, there are many more cases to consider. I'm worried I may be approaching this the wrong way. I've also thought about calculating the probability that a number doesn't occur in the sequence, but that doesn't look to be any easier. Any hints/corrections would be greatly appreciated. Thanks
I have a slightly different answer to the above; comments are very welcome :) The number of monotonic sequences we can observe when we throw a die $N$ times is $2\binom{N+5}{5}-\binom{6}{1}$, since the six sequences which consist of the same number repeated are counted as both increasing and decreasing (i.e. we have counted them twice, so we need to subtract 6 to take account of this). The number of increasing sequences involving all six numbers is $\binom{N-1}{5}$ (as has already been explained). Similarly the number of decreasing sequences involving all six numbers is also $\binom{N-1}{5}$. Therefore I believe that the probability of seeing all six numbers given a monotonic sequence is $2\binom{N-1}{5}$ divided by $2\binom{N+5}{5}-\binom{6}{1}$. This is only slightly different to the above answers, but if anyone has any comments as to whether you agree or disagree with my logic, or if you require further explanation, I'd be interested to hear from you.
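Since every specific outcome sequence has probability $6^{-N}$, conditioning on "monotonic" reduces to a ratio of counts, which can be brute-forced for small $N$:

```python
# Enumerate all 6^N outcome sequences for N = 7 and compare with the formulas.
from itertools import product
from math import comb

N = 7
mono = [s for s in product(range(1, 7), repeat=N)
        if s == tuple(sorted(s)) or s == tuple(sorted(s, reverse=True))]
full = [s for s in mono if len(set(s)) == 6]
print(len(full), len(mono))                        # 12, 1578
print(2 * comb(N - 1, 5), 2 * comb(N + 5, 5) - 6)  # 12, 1578 -- counts agree
```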
Inserting numbers to create a geometric progression Place three numbers in between 15, 31, 104 in such a way that they would be successive members of a geometric progression. PS! I am studying for a test and need help. I would never ask anyone at math.stackexchange.com to do my homework for me. Any help at all would be strongly appreciated. Thank you!
There is no geometric progression that contains all of $15$, $31$, and $104$, let alone also the hypothetical "extra" numbers. For suppose that $15=kr^a$, $31=kr^b$, and $104=kr^c$ where $a$, $b$, $c$ are integers ($r$ need not be an integer, and $a$, $b$, $c$ need not be consecutive). Then $31=kr^ar^{b-a}=15r^{b-a}$. Similarly, $104=31r^{c-b}$. Without loss of generality, we may assume that $r>1$. So $b-a$ and $c-b$ are positive integers. Let $b-a=m$ and $c-b=n$. Then $$\frac{31}{15}=r^m \qquad \text{and}\qquad \frac{104}{31}=r^n.$$ Take the $n$-th power of $31/15$, and the $m$-th power of $104/31$. Each is $r^{mn}$. It follows that $$\left(\frac{31}{15}\right)^n=\left(\frac{104}{31}\right)^m.$$ From this we conclude that $$31^{m+n}=15^n \cdot 104^m.$$ This is impossible, since $5$ divides the right-hand side, but $5$ does not divide the left-hand side. Comment: Have we really answered the question? It asks us to place $3$ numbers between $15$, $31$, $104$ "in such a way that they would be successive members of a geometric progression." Who does "they" refer to? Certainly not all $6$ numbers, since already as we have seen, $15$, $31$, and $104$ cannot all be members of a (single) geometric progression of any kind. But maybe "they" refers to the interpolated numbers! Then there are uncountably many solutions, and even several solutions where the interpolated numbers are all integers. For example, we can use $16$, $32$, $64$. The numbers $15$ and $31$ could be a heavy-handed hint pointing to this answer. Or else we can use $16$, $24$, $36$. Or else $16$, $40$, $100$. Then there is $18$, $24$, $32$, or $18$, $30$, $50$, and so on.
Algebraic proof that collection of all subsets of a set (power set) of $N$ elements has $2^N$ elements In other words, is there an algebraic proof showing that $\sum_{k=0}^{N} {N\choose k} = 2^N$? I've been trying to do it some some time now, but I can't seem to figure it out.
I don't know what you mean by "algebraic". Notice that if $N$ is $0$, we have the empty set, which has exactly one subset, namely itself. That's a basis for a proof by mathematical induction. For the inductive step, suppose a set with $N$ elements has $2^N$ subsets, and consider a set of $N+1$ elements that results from adding one additional element called $x$ to the set. All of the $2^N$ subsets of our original set of $N$ elements are also subsets of our newly enlarged set that contains $x$. In addition, for each such set $S$, the set $S\cup\{x\}$ is a subset of our enlarged set. So we have our original $2^N$ subsets plus $2^N$ new subsets---the ones that contain $x$. The number of subsets of the enlarged set is thus $2^N + 2^N$. Now for an "algebraic" part of the argument: $2^N + 2^N = 2^{N+1}$.
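The count can also be checked directly with itertools (illustration only):

```python
# Sum over subset sizes: sum_k C(N, k) should equal 2^N.
from itertools import combinations

N = 6
num_subsets = sum(1 for k in range(N + 1) for _ in combinations(range(N), k))
print(num_subsets, 2**N)  # 64, 64
```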
Minimal area of a surface that splits a cube into two equal-size pieces I had read the following problem and its solution from one source problem which was the following: You want to cut a unit cube into two pieces each with volume 1/2. What dividing surface, which might be curved, has the smallest surface area? The author gave his first solution by this way: When bisecting the equilateral triangle, an arc of a circle centered at a vertex had the shortest path. Similarly for this problem, the octant (one-eighth) of a sphere should be the bisecting surface with the lowest area. If the cube is a unit cube, then the octant has volume 1/2, so its radius is given by $$\frac{1}{8}(\frac{4}{3} \pi r^3)=\frac{1}{2}$$ So the radius is $\displaystyle \left( \frac{3}{\pi} \right)^{\frac{1}{3}}$ and the surface area of the octant is $$\text{surface area}=\frac{4 \pi r^2}{8}=1.52$$ (approximate) But after this the author said that he made a mistake; the answer was wrong and the correct one is the simplest surface – a horizontal plane through the center of the cube – which has surface area 1, which is less than the surface area of the octant. But he has not given reasons why the horizontal surface area is the best solution and I need a formula or proof of this. Can you help me?
We know from the isoperimetric inequality that locally the surface must be a sphere (where we can include the plane as the limiting case of a sphere with infinite radius). Also, the surface must be orthogonal to the cube where they meet; if they're not, you can deform the surface locally to reduce its area. A sphere orthogonal to a cube face must have its centre on that face. You can easily show that it can't contain half the volume if it intersects only one or two faces. Thus, it must either intersect at least three adjacent faces, in which case its centre has to be at the vertex where they meet, or it has to intersect at least two opposite faces, in which case it has to be a plane.
Equation of the tangents What is the equation of the tangent to $y=x^3-6x^2+12x+2$ that is parallel to the line $y=3x$? I have no idea how to solve this; no example is given in the book! I appreciate your help!
Look at a Taylor series of your function about a point $x_c$, of order 1 (linear). Then find which $x_c$ produces a line parallel to $3x$, i.e. which has a slope of $3$. Hint! There are actually two solutions.
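Carrying the hint out symbolically, assuming sympy is available:

```python
# Slope 3 means y'(x) = 3; solve for the points of tangency.
from sympy import symbols, diff, solve

x = symbols('x')
y = x**3 - 6*x**2 + 12*x + 2
pts = solve(diff(y, x) - 3, x)   # [1, 3] -- the two solutions from the hint
for p in pts:
    print(f"tangent at x={p}: y = 3*(x - {p}) + {y.subs(x, p)}")
# tangent lines: y = 3x + 6 (at x=1) and y = 3x + 2 (at x=3)
```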
Why is the set of commutators not a subgroup? I was surprised to see that one talks about the subgroup generated by the commutators, because I thought the commutators would form a subgroup. Some research told me that it's because commutators are not necessarily closed under product (books by Rotman and Mac Lane popped up in a google search telling me). However, I couldn't find an actual example of this. What is one? The books on google books made it seem like an actual example is hard to explain. Wikipedia did mention that the product $[a,b][c,d]$ on the free group on $a,b,c,d$ is an example. But why? I know this product is $aba^{-1}b^{-1}cdc^{-1}d^{-1}$, but why is that not a commutator in this group?
See Exercise 2.43 in the book An Introduction to the Theory of Groups by Joseph Rotman (4th ed.). He also makes a nice remark: the first finite group in which a product of two commutators is not a commutator has order 96.
QQ plot explanation The figure shows the Q-Q plot of a theoretical and empirical standardized Normal distribution generated through the $qqnorm()$ function of R statistical tool. How can I describe the right tail (top right) that does not follow the red reference line? What does it mean when there is a "trend" that running away from the line? Thank you
It means that in the right tail your data do not fit the normal well; specifically, there are far fewer values there than there would be in a normal sample. If the black curve bent up, there would be more than in a typical normal sample. You can think of the black curve as the graph of a function that, if applied to your data, would make them look like a normal sample. In the following image, the random sample was generated by applying Ilmari Karonen's function to a normal sample.
Finite differences of function composition I'm trying to express the following in finite differences: $$\frac{d}{dx}\left[ A(x)\frac{d\, u(x)}{dx} \right].$$ Let $h$ be the step size and $x_{i-1} = x_i - h$ and $x_{i+ 1} = x_i + h$ If I take centered differences evaluated in $x_i$, I get: $\begin{align*}\left\{\frac{d}{dx}\left[ A(x)\frac{d\, u(x)}{dx}\right]\right\}_i &= \frac{\left[A(x)\frac{d\, u(x)}{dx}\right]_{i+1/2} - \left[A(x)\frac{d\, u(x)}{dx}\right]_{i-1/2}}{h} \\ &= \frac{A_{i+1/2}\left[\frac{u_{i+1}-u_{i}}{h}\right] - A_{i-1/2}\left[\frac{u_{i}-u_{i-1}}{h}\right]}{h} \end{align*}$ So, if I use centered differences I would have to have values for $A$ at $i + \frac 12$ and $A$ at $i - \frac 12$; however those nodes don't exist (in my stencil I only have $i \pm$ integer nodes); is that correct? If I use forward or backward differences I need A values at $i$, $i + 1$, $i + 2$ and at $i$, $i -1$, $i -2$ respectively. Am I on the correct path? I would really appreciate any hint. Thanks in advance, Federico
While the approach of robjohn is certainly possible, it is often better to take the approach suggested by the original poster: $$ \left\{\frac{d}{dx}\left[ A(x)\frac{d\, u(x)}{dx}\right]\right\}_i = \frac{A_{i+1/2}\left[\frac{u_{i+1}-u_{i}}{h}\right] - A_{i-1/2}\left[\frac{u_{i}-u_{i-1}}{h}\right]}{h} $$ As he noted, $A$ is evaluated on half grid point. In many cases, that's not much of a problem. For instance, if you start from the diffusion equation with variable diffusivity, $$ \frac{\partial u}{\partial t} = \frac{\partial}{\partial x} \left( A(x) \frac{\partial u}{\partial x} \right), $$ with $A(x)$ given, then it does not matter whether you sample $A$ on grid points or half grid points. It is quite okay to use a different grid for $A$ and $u$ (that's sometimes called a staggered grid). In other cases, you can approximate the values at half grid points by the average: $A_{i+1/2} \approx \frac12 ( A_{i+1} + A_i )$. That gives you almost the same method as the one derived by robjohn.
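A minimal sketch of this staggered/averaged scheme (a uniform grid, interior points only, and numpy availability are assumed):

```python
# Approximate d/dx( A(x) du/dx ) at interior grid points; A_half[i] holds
# A at the half grid point x_{i+1/2}.
import numpy as np

def apply_operator(u, A_half, h):
    flux = A_half * np.diff(u) / h   # A_{i+1/2} * (u_{i+1} - u_i)/h
    return np.diff(flux) / h         # difference of fluxes

x = np.linspace(0.0, 1.0, 11)
h = x[1] - x[0]
A = 1.0 + x                          # A known only on grid points...
A_half = 0.5 * (A[:-1] + A[1:])      # ...so average to the half grid
u = x**2
print(apply_operator(u, A_half, h))  # matches (A u')' = 2 + 4x at interior points
```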
How to obtain the Standard Deviation of a ratio of independent binomial random variables? X and Y are 2 independent binomial random variables with parameters (n,p) and (m,q) respectively. (trials, probability parameter)
There probably isn't a closed-form formula for this. But $X$ has mean $np$ and standard deviation $\sqrt{np(1-p)}$, and $Y$ has mean $mq$ and standard deviation $\sqrt{mq(1-q)}$. Now you need a simple fact: if $X$ has mean $\mu$ and standard deviation $\sigma$, then $\log X$ has mean approximately $\log \mu$ and standard deviation approximately $\sigma/\mu$. This can be derived by Taylor expansion. Intuitively, $X$ "usually" falls in $[\mu-\sigma, \mu+\sigma]$ and so $\log X$ "usually" falls in $[\log (\mu-\sigma), \log (\mu+\sigma)]$. But we have $$ \log (\mu \pm \sigma) = \log \Big(\mu(1 \pm \sigma/\mu)\Big) = \log \mu + \log(1 \pm \sigma/\mu) \approx \log \mu \pm \sigma/\mu $$ where the approximation is the first-order Taylor expansion of $\log (1+x)$ for $x$ close to zero. Therefore $\log X$ has mean approximately $\log np$ and standard deviation approximately $\sqrt{(1-p)/np}$. Note that for the Taylor expansion above to be sufficient, $\sigma/\mu=\sqrt{(1-p)/np}$ must be close to zero. Similarly $\log Y$ has mean approximately $\log mq$ and standard deviation approximately $\sqrt{(1-q)/mq}$. So $\log X - \log Y = \log X/Y$ has mean approximately $\log(np/mq)$ and standard deviation approximately $$ \sqrt{{(1-p) \over np} + {(1-q) \over mq}}. $$ But you asked about $X/Y$. Inverting the earlier fact, if $Z$ has mean $\mu$ and standard deviation $\sigma$, then $e^Z$ has mean approximately $e^{\mu}$ and standard deviation approximately $\sigma e^\mu$. Therefore $X/Y$ has mean approximately $np/mq$ (not surprising!) and standard deviation approximately $$ \left( \sqrt{{(1-p) \over np} + {(1-q) \over mq}} \right) {np \over mq}. $$ This approximation works well if $p,q$ and/or $m,n$ are not too small (see the Taylor expansion explanation in the middle of this answer).
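A quick Monte Carlo sanity check of the final approximation, with arbitrarily chosen parameters (assumes numpy):

```python
# Simulate X ~ Bin(n, p), Y ~ Bin(m, q) and compare std(X/Y) with the formula.
import numpy as np

rng = np.random.default_rng(0)
n, p, m, q = 500, 0.3, 400, 0.6
X = rng.binomial(n, p, size=200000)
Y = rng.binomial(m, q, size=200000)  # Y stays far from 0 for these parameters
ratio = X / Y

approx_sd = np.sqrt((1 - p) / (n * p) + (1 - q) / (m * q)) * (n * p) / (m * q)
print(ratio.std(), approx_sd)  # both around 0.0497
```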
Checking whether a point lies on a wide line segment I know how to check whether a point lies on a line segment, but here the segment has a width. I have $x_1, y_1$, $x_2, y_2$, the width, and the point $x_3, y_3$ (and possibly another point $x_4, y_4$) that needs to be checked. Perhaps someone can help, ideally with a function in C#.
Trying to understand your question, perhaps this picture might help. You seem to be asking how to find out whether the point $C$ is inside the thick line $AB$. You should drop a perpendicular from $C$ to $AB$, meeting at the point $D$. If the (absolute) length of $CD$ is more than half the width of the thick line then $C$ is outside the thick line (as shown in this particular case). If the thick line is in fact a thick segment, then you also have to consider whether $D$ is between $A$ and $B$ (or perhaps slightly beyond one of them, if the thickness extends further).
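Since a function was requested, here is a hedged sketch in Python (the same geometry ports directly to C# or any similar language). It treats the segment ends with rounded caps, i.e. points within half the width of an endpoint count as inside; if the thickness should end squarely at $A$ and $B$, the test near the endpoints needs adjusting:

```python
import math

def point_in_thick_segment(x1, y1, x2, y2, x3, y3, width):
    """True if (x3, y3) lies within width/2 of the segment A=(x1,y1), B=(x2,y2)."""
    dx, dy = x2 - x1, y2 - y1
    seg_len2 = dx * dx + dy * dy
    if seg_len2 == 0:                      # degenerate segment: a single point
        return math.hypot(x3 - x1, y3 - y1) <= width / 2
    # t locates the foot D of the perpendicular from C along AB (0 at A, 1 at B)
    t = ((x3 - x1) * dx + (y3 - y1) * dy) / seg_len2
    t = max(0.0, min(1.0, t))              # clamp so D stays between A and B
    px, py = x1 + t * dx, y1 + t * dy      # closest point on the segment
    return math.hypot(x3 - px, y3 - py) <= width / 2
```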
How to compute the unit digit of $\frac{(7^{6002} − 1)}{2}$? The mother problem is: Find the unit digit in the LCM of $7^{3001} − 1$ and $7^{3001} + 1$ This problem comes with four options to choose the correct answer from. My approach: as the two numbers are consecutive even numbers, the required LCM is $$\frac{(7^{3001} − 1)(7^{3001} + 1)}{2}$$ Using algebra that expression becomes $\frac{(7^{6002} − 1)}{2}$; now it is not hard to see that the unit digit of $(7^{6002} − 1)$ is $8$. So the possible unit digit is either $4$ or $9$. As there was no $9$ among the options I selected $4$, which is correct, but as this last part is a kind of fluke I am not sure whether my approach is right, and I am unable to figure out the last part: how can one be sure that the unit digit of $\frac{(7^{6002} − 1)}{2}$ is $4$?
We look directly at the mother problem. Exactly as in your approach, we observe that we need to evaluate $$\frac{(7^{3001}-1)(7^{3001}+1)}{2}$$ modulo $10$. Let our expression above be $x$. Then $2x= (7^{3001}-1)(7^{3001}+1)$. We will evaluate $2x$ modulo $20$. Note that $7^{3000}$ is congruent to $1$ modulo $4$ and modulo $5$. Thus $7^{3001} \equiv 7\pmod{20}$, and therefore $$2x\equiv (6)(8)\equiv 8 \pmod{20}.$$ It follows that $x\equiv 4\pmod{10}$.
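Python's exact big-integer arithmetic makes it easy to confirm both the congruence used above and the final digit:

```python
print(pow(7, 3001, 20))          # 7, confirming 7^3001 ≡ 7 (mod 20)
x = (7**6002 - 1) // 2           # exact: Python integers are unbounded
print(x % 10)                    # 4
```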
Determinantal formula for reproducing integral kernel How do I prove the following? $$\int\det\left(K(x_{i},x_{j})\right)_{1\leq i,j\leq n}dx_{1} \cdots dx_{n}=\underset{i=1}{\overset{n}{\prod}}\left(\int K(x_{i},x_{i})\;dx_{i}-(i-1)\right)$$ where $$K(x,y)=\sum_{l=1}^n \psi_l(x)\overline{\psi_l}(y)$$ and $$\{\psi_l(x)\}_{l=1}^n$$ is an ON-sequence in $L^2$. One may note that $$\int K(x_i,x_j)K(x_j,x_i) \; d\mu(x_i)=K(x_j,x_j)$$ and also that $$\int K(x_a,x_b)K(x_b,x_c)d\mu(x_b)=K(x_a,x_c).$$
(This is too long to fit into a comment.) Note that the product in the integrand on the right side expands as $$\sum_{A\subseteq [n]} (-1)^{n-|A|}\left(\prod_{i\not\in A} (i-1)\right)\prod_{j\in A}K(x_j,x_j).$$ (Here $[n]=\{1,2,\dots,n\}$.) On the other hand, we can use the Leibniz formula to expand the determinant on the left hand side to obtain $$\iint\cdots\int \sum_{\sigma\in S_n} (-1)^{\sigma}\left(\prod_{i=1}^nK(x_i,x_{\sigma(i)}) \right)dx_1\cdots dx_n$$ Interchange the order of summation and integration. Observe that $K(x_1,x_{\sigma(1)})\cdots K(x_n,x_{\sigma(n)})$ can be reordered into a product of factors of the form $K(x_a,x_b)K(x_b,x_c)\cdots K(x_d,x_a)$ (this follows from the cycle decomposition of the permutation $\sigma$), and the integration of this over $x_b,x_c,\dots,x_d$ is, by induction on the two properties at the bottom of the OP, simply $K(x_a,x_a)$. From here we can once again switch the order of summation and integration. I think the last ingredient needed is something from the representation theory of the symmetric group. In other words, we need to know that the number of ways a permutation $\sigma$ can be decomposed into cycles with representatives $i_1,i_2,\dots\in A\subseteq [n]$, weighted by sign, is the coefficient of $\prod_{j\in A}K(x_j,x_j)$ in the integrand's polynomial on the right hand side (or something roughly to this effect).
Finding limit of a quotient with two square roots: $\lim_{t\to 0}\frac{\sqrt{1+t}-\sqrt{1-t}}t$ Find $$ \lim_{t\to 0}\frac{\sqrt{1+t}-\sqrt{1-t}}{t}. $$ I can't think of how to start this or what to do at all. Anything I try just doesn't change the function.
HINT $\ $ Use the same method in your prior question, i.e. rationalize the numerator by multiplying both the numerator and denominator by the numerator's conjugate $\rm\:\sqrt{1+t}+\sqrt{1-t}\:.$ Then the numerator becomes $\rm\:(1+t)-(1-t) = 2\:t,\:$ which cancels with the denominator $\rm\:t\:,\:$ so $\rm\:\ldots$ More generally, using the same notation and method as in your prior question, if $\rm\:f_0 = g_0\:$ then $$\rm \lim_{x\:\to\: 0}\ \dfrac{\sqrt{f(x)}-\sqrt{g(x)}}{x}\ = \ \lim_{x\:\to\: 0}\ \dfrac{f(x)-g(x)}{x\ (\sqrt{f(x)}+\sqrt{g(x)})}\ =\ \dfrac{f_1-g_1}{\sqrt{f_0}+\sqrt{g_0}}$$ In your case $\rm\: f_0 = 1 = g_0,\ \ f_1 = 1,\ g_1 = -1\:,\ $ so the limit $\: =\: (1- (-1))/(1+1)\: =\: 1\:.$ Note again, as in your prior questions, rationalizing the numerator permits us to cancel the common factor at the heart of the indeterminacy - thus removing the apparent singularity.
Calculating points on a plane In the example picture below, I know the points $A$, $B$, $C$ & $D$. How would I go about calculating $x$, $y$, $z$ & $w$ and $O$, but as points on the actual plane itself (e.g. treating $D$ as $(0, 0)$, $A$ as $(0, 1)$, $C$ as $(1, 0)$ and $B$ as $(1, 1)$)? Ultimately I need to be able to calculate any arbitrary point on the plane, so I'm unsure as to whether this would be possible through linear interpolation of the results above or whether I would actually just have to do this via some form of matrix calculation. I don't really know matrix math at all! Just looking for something I can implement in JavaScript (in an environment that does support matrices).
This should be done in terms of plane projective geometry. This means you have to introduce homogeneous coordinates. The given points $A=(a_1,a_2)$, $\ldots$, and $D=(d_1,d_2)$ have "old" homogeneous coordinates $(a_1,a_2, 1)$, $\ldots$, and $(d_1,d_2,1)$ and should get "new" homogeneous coordinates $\alpha(0,1,1)$, $\beta(1,1,1)$, $\gamma(1,0,1)$, and $\delta(0,0,1)$. There is a certain $(3\times 3)$-matrix $P:=[p_{ik}]$ (determined up to an overall factor) that transforms the old coordinates into the new ones. To find this matrix you have twelve linear equations in thirteen variables which is just right for our purpose. (The values of $\alpha$, $\ldots$, $\delta$ are not needed in the sequel.) After the matrix $P$ has been determined the new affine coordinates $(\bar x, \bar y)$ of any point $(x,y)$ in the drawing plane are obtained by applying $P$ to the column vector $(x,y,1)$. This results in a triple $(x',y',z')$, whereupon one has $$\bar x={x'\over z'}\ ,\quad \bar y={y'\over z'}\ .$$
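A sketch of this computation with numpy; the corner coordinates below are made up for illustration, and the same linear algebra ports directly to the asker's JavaScript environment. The twelve equations (two per corner, plus the overall scale ambiguity) are solved as a null-space problem via the SVD:

```python
import numpy as np

def projective_matrix(src, dst):
    """Fit the 3x3 matrix P sending each src point (x, y) to dst (u, v)
    in homogeneous coordinates (the standard direct linear transform)."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    _, _, vt = np.linalg.svd(np.array(rows, dtype=float))
    return vt[-1].reshape(3, 3)       # null vector of the 8x9 system

def map_point(P, x, y):
    xp, yp, zp = P @ np.array([x, y, 1.0])
    return xp / zp, yp / zp           # back to affine coordinates

src = [(10, 80), (90, 70), (95, 10), (5, 15)]   # A, B, C, D as drawn (hypothetical)
dst = [(0, 1), (1, 1), (1, 0), (0, 0)]          # their coordinates on the plane
P = projective_matrix(src, dst)
print(map_point(P, *src[0]))                    # approximately (0.0, 1.0)
```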
Fibonacci divisibility properties $ F_n\mid F_{kn},\,$ $\, \gcd(F_n,F_m) = F_{\gcd(n,m)}$ Can anyone give a generalization of the following properties in a single proof? I have checked the results, which I have given below, by trial and error. I am looking for a general proof which will cover all my results below: * *Every third Fibonacci number is even. *3 divides every 4th Fibonacci number. *5 divides every 5th Fibonacci number. *4 divides every 6th Fibonacci number. *13 divides every 7th Fibonacci number. *7 divides every 8th Fibonacci number. *17 divides every 9th Fibonacci number. *11 divides every 10th Fibonacci number. *6, 9, 12 and 16 divide every 12th Fibonacci number. *29 divides every 14th Fibonacci number. *10 and 61 divide every 15th Fibonacci number. *15 divides every 20th Fibonacci number.
The general proof of this is that the Fibonacci numbers arise from the expression $$F_n \sqrt{5} = \left(\frac{1+\sqrt 5}{2}\right)^n - \left(\frac{1-\sqrt 5}{2}\right)^n$$ Since this is an example of the general $a^n-b^n$, which $a^m-b^m$ divides if $m \mid n$, it follows that there is a unique factor, generally coprime with the rest, for each index. Some of the smaller ones will be $1$. The exception is that if $f_n$ is this unique factor, such that $F_n = \prod_{m \mid n} f_m$, then $f_n$ and $f_{np^x}$ share a common divisor $p$, if $p$ divides either. So for example, $f_8=7$ and $f_{56}=7\times 14503$ share a common divisor of $7$. This means that working modulo $49$ must evidently work too. So $f_{12} = 6$, which shares a common divisor with both $f_4=3$ and $f_3 = 2$, is unique in connecting to two different primes. Gauss's law of quadratic reciprocity applies to the Fibonacci numbers, but it's a little more complex than for regular bases. Relative to the Fibonacci series, reduce primes modulo 20, into 'upper' vs 'lower' and 'long' vs 'short'. For this section, 2 behaves as 7, and 5 as 1, modulo 20. Primes that reduce to 3, 7, 13 and 17 are 'upper' primes, which means that their period divides $p+1$. Primes ending in 1, 9, 11, 19 are 'lower' primes, meaning that their periods divide $p-1$. The primes in 1, 9, 13, 17 are 'short', which means that the period divides the maximum allowed an even number of times. For 3, 7, 11, 19, it divides the maximum an odd number of times. This means that all Fibonacci numbers of odd index can be expressed as the sum of two squares, such as $233 = 8^2 + 13^2$, or generally $F_{2n+1} = F^2_n + F^2_{n+1}$. So a prime like $107$, which reduces to $7$, would have an indicated period dividing $108$ an odd number of times. Its actual period is $36$. For a prime like $109$, the period divides $108$ an even number of times, so it is a divisor of $54$; its actual period is $27$. A prime like $113$ is indicated to be upper and short, which means that its period divides $114$ an even number of times. It actually has a period of $19$. Artin's constant applies here as well. This means that these rules correctly find some 3/4 of all of the periods exactly. The next prime in this progression, $127$, actually has the indicated period for an upper long: 128. So does $131$ (lower long), $137$ (upper short, at 69). Likewise $101$ (lower short) and $103$ (upper long) show the maximum periods indicated. No prime under $20\times 120^4$ exists for which, whenever $p$ divides some $F_n$, $p^2$ does as well. This does not preclude the existence of such a number.
Method to reverse a Kronecker product Let's say I have two simple vectors: $[0, 1]$ and $[1, 0]$. Their Kronecker product would be $[0, 0, 1, 0]$. Let's say I have only the Kronecker product. How can I find the two initial vectors back? If my two vectors are written as : $[a, b]$ and $[c, d]$, the (given) Kronecker product is: $$[ac, ad, bc, bd] = [k_0, k_1, k_2, k_3]$$ So I have a system of four non linear equations that I wish to solve: $$\begin{align*} ac &= k_0\\ ad&= k_1\\ bc&= k_2\\ bd &=k_3. \end{align*}$$ I am looking for a general way to solve this problem for any number of initial vectors in $\mathbb{C}^2$ (leading my number of variables to $2n$ and my equations to $2^n$ if I have $n$ vectors). So here are a few specific questions: What is the common name of this problem? If a general solution is known, what is its complexity class? Does the fact that I have more and more equations when $n$ goes up compared to the number of variables help? (Note: I really didn't know what to put as a tag.)
I prefer to say the Kronecker product of two vectors is reversible, but only up to a scale factor. In fact, Niel de Beaudrap has given the right answer. Here I attempt to present it in a concise way. Let $a\in\mathbb{R}^N$ and $b\in\mathbb{R}^M$ be two column vectors. The OP is: given $a\otimes b$, how to determine $a$ and $b$? Note $a\otimes b$ is nothing but $\mathrm{vec} (ba^T)$. The $\mathrm{vec}$ operator reshapes a matrix to a column vector by stacking the column vectors of the matrix. Note $\mathrm{vec}$ is reversible. Therefore, given $c=a\otimes b$, first reshape it to an $M\times N$ matrix $C=ba^T$. $C$ is a rank-one matrix. You can simply decompose $C$ to get $kb$ and $a/k$.
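A numpy sketch of the procedure; note that numpy's row-major `reshape` turns $a\otimes b$ directly into the rank-one matrix $ab^T$ (rather than the column-stacked $ba^T$ above), and the SVD supplies the rank-one factors:

```python
import numpy as np

def unkron(c, N, M):
    """Given c = np.kron(a, b) with len(a) == N and len(b) == M,
    recover a and b up to a scalar factor via a rank-one SVD."""
    C = c.reshape(N, M)              # row-major reshape: C = outer(a, b)
    U, s, Vt = np.linalg.svd(C)
    a = U[:, 0] * np.sqrt(s[0])      # split the singular value between factors
    b = Vt[0] * np.sqrt(s[0])
    return a, b

a, b = np.array([0.0, 1.0]), np.array([1.0, 0.0])
c = np.kron(a, b)                    # [0, 0, 1, 0], as in the question
a2, b2 = unkron(c, 2, 2)
print(np.allclose(np.kron(a2, b2), c))   # True: recovered up to scale
```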
Fourier transform solution of three-dimensional wave equation One of the PDE books I'm studying says that the 3D wave equation can be solved via the Fourier transform, but doesn't give any details. I'd like to try to work the details out for myself, but I'm having trouble getting started - in particular, what variable should I make the transformation with respect to? I have one time variable and three space variables, and I can't use the time variable because the Fourier transform won't damp it out. If I make the transformation with respect to one of the spatial variables, the differentiations with respect to time and the other two spatial variables become parameters and get pulled outside the transform. But it looks like then I'm still left with a PDE, but reduced by one independent variable. Where do I go from here? Thanks.
You use the Fourier transform in all three space variables. The wave equation $\frac{\partial^2 u}{\partial t^2} = c^2 \left( \frac{\partial^2 u}{\partial x_1^2} + \frac{\partial^2 u}{\partial x_2^2} + \frac{\partial^2 u}{\partial x_3^2}\right)$ becomes $ \frac{\partial^2 U}{\partial t^2} = - c^2 (p_1^2 + p_2^2 + p_3^2) U$. For each fixed frequency vector $p=(p_1,p_2,p_3)$ this is an ordinary differential equation in $t$, with solution $U(p,t) = A(p)\cos(c|p|t) + B(p)\sin(c|p|t)$, where $A$ and $B$ are determined by the Fourier transforms of the initial data $u(\cdot,0)$ and $u_t(\cdot,0)$. Inverting the transform then recovers $u$.
What is the term for a factorial type operation, but with summation instead of products? (Pardon if this seems a bit beginner, this is my first post in math - trying to improve my knowledge while tackling Project Euler problems) I'm aware of Sigma notation, but is there a function/name for e.g. $$ 4 + 3 + 2 + 1 \longrightarrow 10 ,$$ similar to $$4! = 4 \cdot 3 \cdot 2 \cdot 1 ,$$ which uses multiplication? Edit: I found what I was looking for, but is there a name for this type of summation?
The name for $$ T_n= \sum_{k=1}^n k = 1+2+3+ \dotsb +(n-1)+n = \frac{n(n+1)}{2} = \frac{n^2+n}{2} = {n+1 \choose 2} $$ is the $n$th triangular number. The name comes from the fact that $T_n$ dots can be arranged in a triangle: $$T_1=1\qquad T_2=3\qquad T_3=6\qquad T_4=10\qquad T_5=15\qquad T_6=21$$
There exists $C\neq0$ with $CA=BC$ iff $A$ and $B$ have a common eigenvalue Question: Suppose $V$ and $W$ are finite dimensional vector spaces over $\mathbb{C}$. $A$ is a linear transformation on $V$, $B$ is a linear transformation on $W$. Then there exists a non-zero linear map $C:V\to W$ s.t. $CA=BC$ iff $A$ and $B$ have a common eigenvalue. ===========This is incorrect========== Clearly, if $CA=BC$, suppose $Ax=\lambda x$, then $B(Cx)=CAx=C(\lambda x)=\lambda (Cx)$, so $A$ and $B$ have the common eigenvalue $\lambda$. On the other hand, if $A$ and $B$ have a common eigenvalue $\lambda$, suppose $Ax=\lambda x, By=\lambda y$. Define $C:V\to W$ s.t. $Cx=y$, then $BCx=By=\lambda y=C\lambda x=CAx$. But I don't know how to make $CA=BC$ hold on all of $V$. ======================================
Here is a simple solution for the "if" direction. Let $\lambda$ be the common eigenvalue. Let $u$ be a right eigenvector for $B$, that is $$Bu= \lambda u$$ and $v$ be a left eigenvector for $A$, that is $$v^TA= \lambda v^T$$ Then $C =uv^T$ is a non-zero matrix which works: $$CA = u v^T A = \lambda u v^T =\lambda C$$ $$BC= B u v^T= \lambda u v^T= \lambda C$$
Can someone explain consensus theorem for boolean algebra In boolean algebra, below is the consensus theorem $$X⋅Y + X'⋅Z + Y⋅Z = X⋅Y + X'⋅Z$$ $$(X+Y)⋅(X'+Z)⋅(Y+Z) = (X+Y)⋅(X'+Z)$$ I don't really understand it. Can I simplify it to $$X'⋅Z + Y⋅Z = X' \cdot Z$$ I don't suppose so. Anyway, why can $Y \cdot Z$ be removed?
Something like the following: $$\begin{align*} X \cdot Y + X' \cdot Z + Y \cdot Z &= X \cdot Y + X' \cdot Z + (X + X') \cdot Y \cdot Z \\ &= X \cdot Y + X \cdot Y \cdot Z + X' \cdot Z + X' \cdot Y \cdot Z \\ &= X \cdot (Y + Y \cdot Z) + X' \cdot (Z + Y \cdot Z) \\ &= X \cdot Y + X' \cdot Z \end{align*}$$ (the last step uses absorption: $Y + Y \cdot Z = Y$ and $Z + Y \cdot Z = Z$).
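Because the identity is over Booleans, an exhaustive check of all $2^3$ assignments settles it; a small Python sketch:

```python
from itertools import product

for x, y, z in product([0, 1], repeat=3):
    lhs = (x and y) or ((not x) and z) or (y and z)
    rhs = (x and y) or ((not x) and z)
    assert bool(lhs) == bool(rhs)
print("consensus theorem holds for all 8 assignments")
```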
Orthogonal mapping $f$ which preserves angle between $x$ and $f(x)$ Let $f: \mathbf{R}^n \rightarrow \mathbf{R}^n$ be a linear orthogonal mapping such that $\displaystyle\frac{\langle f(x), x\rangle}{\|fx\| \|x\|}=\cos \phi$, where $\phi \in [0, 2 \pi)$. Are there such mappings besides $id$, $-id$ in the case when $n$ is odd? Is it true that if $n=2k$ then there exists an orthogonal basis $e_1,\ldots,e_{2k}$ in $\mathbf{R}^{2k}$ such that the matrix $F$ of $f$ in that basis is of the form $$ F=\left [ \begin{array}{rrrrrrrr} A_1 & & &\\ & A_2 & &\\ & & \ddots \\ & & & A_{k}\\ \end{array} \right ], $$ where $$ A_1=A_2=\cdots=A_{k}=\left [ \begin{array}{rr} \cos \phi & -\sin \phi \\ \sin \phi & \cos \phi \\ \end{array} \right ] ? $$ Thanks.
Your orthogonal transformation $f$ can be "complexified" to a unitary transformation $U$ on ${\mathbb C}^n$ such that $U(x + i y) = f(x) + i f(y)$ for $x, y \in {\mathbb R}^n$. Being normal, $U$ can be diagonalized, and its eigenvalues have absolute value 1. The only possible real eigenvalues are $\pm 1$, in which case the eigenvectors can be chosen to be real as well; this corresponds to $\cos \phi = \pm 1$, and your cases $id$ and $-id$. If $v = x + i y$ is an eigenvector for a non-real eigenvalue $\lambda = a + b i$, then $\overline{v} = x - i y$ is an eigenvector for $\overline{\lambda} = a - b i$. Thus the eigenspaces for complex eigenvalues pair up. Now $Uv = \lambda v = (a x - b y) + i (a y + b x)$, so $f(x) = a x - b y$ and $f(y) = a y + b x$. Then $x^T f(x) = a \|x\|^2 - b x^T y = (\cos \phi) \|x\|^2$ and $y^T f(y) = a \|y\|^2 + b x^T y = (\cos \phi) \|y\|^2$. Therefore $b x^T y = (a - \cos \phi) \|x\|^2 = -(a - \cos \phi) \|y\|^2$. Since $b \ne 0$, $\|x\|>0$ and $\|y\|>0$, we must have $x^T y = 0$ and $a = \cos \phi$. Since $|\lambda|=\sqrt{a^2 + b^2} = 1$, $b = \pm \sin \phi$. This gives you your $2 \times 2$ block $\pmatrix{\cos \phi & -\sin \phi \cr \sin \phi & \cos \phi\cr}$ or $\pmatrix{\cos \phi & \sin \phi \cr -\sin \phi & \cos \phi\cr}$ (but you don't need both of them).
$x=\frac{-b\pm \sqrt{b^2-4ac}}{2a}$ show that $x=-c/b$ when $a=0$ OK, this one has me stumped. Given that the solutions of $ax^2+bx+c =0$ are $$x=\frac{-b\pm \sqrt{b^2-4ac}}{2a}\qquad(*)$$ how would you show using $(*)$ that $x=-c/b$ when $a=0$? (Please don't use $a=0$ hence $bx+c=0$.)
One of the two solutions approaches $-\frac{c}{b}$ in the limit as $a\rightarrow 0$. Assuming $b$ is positive, apply L'Hospital's rule to $$x=\frac{-b+\sqrt{b^2-4ac}}{2a}$$ If $b$ is negative, work with the other solution. (And as $a\rightarrow 0$, the second solution approaches $\pm\infty$.)
One last question on the concept of limits I read through this post on the notion of limits Approaching to zero, but not equal to zero, then why do the points get overlapped? But there's one last question I have. [Trust me, it'll be my last ever question on limits!] It clearly says that as you get closer and closer to a certain point you are eventually getting closer to the limiting point. I.e., as $\Delta x$ approaches $0$, you get to the limit. Let me give you an example which supports the above statement. Let's say you want to evaluate the limit $$ \lim_{x \to 2} \frac{x^2 - 4}{x-2} .$$ Sooner or later, you have to plug in $2$ and you get the answer $4$ and you say that the limit of that function as $x \to 2$ is $4$. But why should I plug in $2$ and not a number close to $2$? $x$ is certainly not equal to $2$, right?
An important point to consider is that you cannot actually "plug in" $2$ and get $4$. When $x = 2$, you will have a division by zero, since $x - 2 = 0$. When calculating $\frac{x^2 - 4}{x - 2}$ for $x = 2$ using a computer, you may get $4$ due to rounding errors when you have set $x$ to a value very close to 2. This is due to the very fact that the function approaches $4$ as $x$ approaches $2$.
Canonical to Parametric, Ellipse Equation I've done some algebra tricks in this derivation and I'm not sure if it's okay to do those things. $$\frac{x^2}{a^2} + \frac{y^2}{b^2} = 1$$ $$\frac{x^2}{a^2} + \frac{y^2}{b^2} = \cos^2\theta + \sin^2\theta$$ Can I really do this next step? $$\frac{x^2}{a^2} = \cos^2\theta\quad\text{and}\quad\frac{y^2}{b^2} = \sin^2\theta$$ $$x^2 = a^2\cos^2\theta\quad\text{and}\quad y^2 = b^2\sin^2\theta$$ Ignoring the negative numbers: $$x = a\cos\theta\quad\text{and}\quad y = b\sin\theta$$
The idea behind your argument is absolutely fine. Any two non-negative numbers $u$ and $v$ such that $u+v=1$ can be expressed as $u=\cos^2\theta$, $v=\sin^2\theta$ for some $\theta$. This is so obvious that it probably does not require proof. Set $u=\cos^2\theta$. Then $v=1-\cos^2\theta=\sin^2\theta$. The second displayed formula muddies things somewhat. You intended to say that if $x^2/a^2+y^2/b^2=1$, then there exists a $\theta$ such that $x^2/a^2=\cos^2\theta$ and $y^2/b^2=\sin^2\theta$. You did not mean that for any $\theta$, if $x^2/a^2+y^2/b^2=1$ then $x^2/a^2=\cos^2\theta$! But the transition from the second displayed equation to the third could be interpreted as asserting what you clearly did not intend to say. It would be better to do exactly what you did, but to use more geometric language, as follows. $$\frac{x^2}{a^2}+\frac{y^2}{b^2}=1 \quad\text{iff}\quad \left(\frac{x}{a}\right)^2 + \left(\frac{y}{b}\right)^2=1.$$ But the equation on the right holds iff the point $(x/a, y/b)$ lies on the unit circle. The points on the unit circle are parametrized by $(\cos \theta,\sin\theta)$, with $\theta$ ranging over $[0,2\pi)$, so the points on our ellipse are given by $x=a\cos\theta$, $y=b\sin\theta$.
Series used in proof of irrationality of $ e^y $ for every rational $ y $ Following from the book "An Introduction to the Theory of Numbers" - Hardy & Wright I am having trouble with this proof. The book uses a familiar proof for the irrationality of e and continues into some generalizations that lose me. In the following statement where is the series coming from or how is the statement derived? $ f = f(x) = \frac{x^n(1 - x)^n}{n!} = \frac{1}{n!} \displaystyle\sum\limits_{m=n}^{2n} c_mx^m $ I understand that given $ 0 < x < 1 $ results in $ 0 < f(x) < \frac{1}{n!} $ but I become confused on . . . Again $f(0)=0$ and $f^{(m)}(0)=0$ if $m < n$ or $m > 2n.$ But, if $n \leq m \leq 2n $, $ f^{(m)}(0)=\frac{m!}{n!}c_m $ an integer. Hence $f(x)$ and all its derivatives take integral values at $x=0.$ Since $f(1-x)=f(x),$ the same is true at $x=1.$ All wording kept intact! The proof that follows actually makes sense when I take for granted the above. I can't however take it for granted as these are, for me, the more important details. So . . .
For your first question, this follows from the binomial theorem $$x^n(1-x)^n=x^n\sum_{m=0}^{n}{n\choose m}(-1)^mx^m=\sum_{m=0}^{n}{n\choose m}(-1)^{m}x^{m+n}=\sum_{m=n}^{2n}{n\choose m-n}(-1)^{m-n}x^m$$ where the last equality is from reindexing the sum. Then let $c_m={n\choose m-n}(-1)^{m-n}$, which is notably an integer. I'm not quite clear what your next question is.
Proof for $\max (a_i + b_i) \leq \max a_i + \max b_i$ for $i=1..n$ I know this question is almost trivial because the truth of this statement is completely intuitive, but I'm looking for a nice and as formal as possible proof for $$\max (a_i + b_i) \leq \max a_i + \max b_i$$ with $i=1,\ldots,n$ Thanks in advance, Federico
For any choice of $j$ in $1,2,\ldots,n$, we have that $$a_j\leq\max\{a_i\}\quad\text{and}\quad b_j\leq\max\{b_i\}$$ by the very definition of "$\max$". Then, by additivity of inequalities, we have that for each $j$, $$a_j+b_j\leq\max\{a_i\}+\max\{b_i\}.$$ But because this is true for all $j$, it is true in particular for the largest out of the $a_j+b_j$'s; that is, $$\max\{a_i + b_i\} \leq \max\{a_i\} + \max\{b_i\}$$
standard symbol for "to prove" or "need to show" in proofs? Is there a standard symbol used as shorthand for "to prove" or "need to show" in a proof? I've seen "N.T.S." but was wondering if there is anything more abstract — not bound to English.
I routinely use WTS for "Want to Show" - and most teachers and professors that I have come across immediately understood what it meant. I do not know if this is because they were already familiar with it, or if it was obvious to them. But I still use it all the time. I got this from a few grad students at my undergrad, although a very funny internet commentator (Sean Plott, if you happen to know him) once mentioned that he uses it as well.
Does the Schur complement preserve the partial order? Let $$\begin{bmatrix} A_{1} &B_1 \\ B_1' &C_1 \end{bmatrix} \quad \text{and} \quad \begin{bmatrix} A_2 &B_2 \\ B_2' &C_2 \end{bmatrix}$$ be symmetric positive definite and conformably partitioned matrices. If $$\begin{bmatrix} A_{1} &B_1 \\ B_1' &C_1 \end{bmatrix}-\begin{bmatrix} A_2 &B_2 \\ B_2' &C_2 \end{bmatrix}$$ is positive semidefinite, is it true $$(A_1-B_1C^{-1}_1B_1')-(A_2-B_2C^{-1}_2B_2')$$ also positive semidefinite? Here, $X'$ means the transpose of $X$.
For a general block matrix $X=\begin{pmatrix}A&B\\C&D\end{pmatrix}$, the Schur complement $S$ to the block $D$ satisfies $$ \begin{pmatrix}A&B\\C&D\end{pmatrix} =\begin{pmatrix}I&BD^{-1}\\&I\end{pmatrix} \begin{pmatrix}S\\&D\end{pmatrix} \begin{pmatrix}I\\D^{-1}C&I\end{pmatrix}. $$ So, when $X$ is Hermitian, $$ \begin{pmatrix}A&B\\B^\ast&D\end{pmatrix} =\begin{pmatrix}I&Y^\ast\\&I\end{pmatrix} \begin{pmatrix}S\\&D\end{pmatrix} \begin{pmatrix}I\\Y&I\end{pmatrix}\ \textrm{ for some } Y. $$ Hence $$ \begin{eqnarray} &&\begin{pmatrix}A_1&B_1\\B_1^\ast&D_1\end{pmatrix} \ge\begin{pmatrix}A_2&B_2\\B_2^\ast&D_2\end{pmatrix} \\ &\Rightarrow& \begin{pmatrix}S_1\\&D_1\end{pmatrix} \ge \begin{pmatrix}I&Z^\ast\\&I\end{pmatrix} \begin{pmatrix}S_2\\&D_2\end{pmatrix} \begin{pmatrix}I\\Z&I\end{pmatrix}\ \textrm{ for some } Z\\ &\Rightarrow& (x^\ast,0)\begin{pmatrix}S_1\\&D_1\end{pmatrix}\begin{pmatrix}x\\0\end{pmatrix} \ge (x^\ast,\ x^\ast Z^\ast) \begin{pmatrix}S_2\\&D_2\end{pmatrix} \begin{pmatrix}x\\Zx\end{pmatrix},\ \forall x\\ &\Rightarrow& x^\ast S_1 x \ \ge\ x^\ast S_2 x + (Zx)^\ast D_2 (Zx) \ \ge\ x^\ast S_2 x,\ \forall x\\ &\Rightarrow& S_1\ge S_2. \end{eqnarray} $$ Edit: In hindsight, this is essentially identical to alex's proof.
What's bad about left $\mathbb{H}$-modules? Can you give me non-trivial examples of propositions that can be formulated for every left $k$-module, hold whenever $k$ is a field, but do not hold when $k = \mathbb{H}$ or, more generally, need not hold when $k$ is a division ring (thanks, Bruno Stonek!) which is not a field? I'm asking because in the theory of vector bundles $\mathbb{H}$-bundles are usually considered alongside those over $\mathbb{R}$ and $\mathbb{C}$, and I'd like to know what to watch out for in this particular case.
Linear algebra works pretty much the same over $\mathbb H$ as over any field. Multilinear algebra, on the other hand, breaks down in places. You can see this already in the fact that when $V$ and $W$ are left $\mathbb H$-modules, the set of $\mathbb H$-linear maps $\hom_{\mathbb H}(V,W)$ is no longer, in any natural way, an $\mathbb H$-module. As Bruno notes, tensor products also break (unless you are willing to consider also right modules, or bimodules—and then it is you who broke!)
Motivation of the Gaussian Integral I read on Wikipedia that Laplace was the first to evaluate $$\int\nolimits_{-\infty}^\infty e^{-x^2} \, \mathrm dx$$ Does anybody know what he was doing that lead him to that integral? Even better, can someone pose a natural problem that would lead to this integral? Edit: Many of the answers make a connection to the normal distribution, but then the question now becomes: Where does the density function of the normal distribution come from? Mike Spivey's answer is in the spirit of what I am looking for: an explanation that a calculus student might understand.
The integral you gave, when taken as a definite integral $\int^{x_2}_{x_1} e^{-x^2}\, dx$ and scaled by $\frac{1}{\pi^{0.5}}$, describes the univariate probability density of a normally-distributed random variable $X$ with mean $0$ and standard deviation $\frac{1}{2^{0.5}}$. This means that the numerical value of this integral gives you the probability of the event $x_1 \leq X\leq x_2$. When this integral is scaled by the right factor $K$ it describes a family of normal distributions with mean $\mu$ and standard deviation $\sigma$. You can show it integrates to that constant $K$ (so that when you divide by $K$, the value of the integral is $1$, which is what makes it into a density function) by using this trick (shown for the case mean $=0$): set $I=\int e^{-x^2}\,dx$, consider $\int e^{-y^2}\,dy$ as well, and then compute their product (using the fact that $x^2$ is a constant when considered as a function of $y$, and vice versa for $x$): $$I^2=\iint e^{-(x^2+y^2)}\,dx\,dy,$$ then use a polar change of variable $x^2+y^2=r^2$ (and, of course, a change of the regions of integration). The integral is based on non-mathematical assumptions too: http://www.stat.tamu.edu/~genton/2007.AG.Bernoulli.pdf
How to represent XOR of two decimal Numbers with Arithmetic Operators Is there any way to represent the XOR of two decimal numbers using the arithmetic operators (+,-,*,/,%)?
I think what Sanisetty Pavan means is that he has two non-negative integers $a$ and $b$ which we assume to be in the range $0 \leq a, b < 2^{n+1}$ and thus representable as $(n+1)$-bit vectors $(a_n, \cdots, a_0)$ and $(b_n, \cdots, b_0)$ where $$ a = \sum_{i=0}^n a_i 2^i, ~~ b = \sum_{i=0}^n b_i 2^i. $$ He wants an an arithmetic expression for the integer $c$ where $$c = \sum_{i=0}^n (a_i \oplus b_i) 2^i = \sum_{i=0}^n (a_i + b_i -2 a_ib_i) 2^i = a + b - 2 \sum_{i=0}^n a_ib_i 2^i$$ in terms of $a$ and $b$ and the arithmetic operators $+, -, *, /, \%$. Presumably integer constants are allowed in the expression. The expression for $c$ above shows a little progress but I don't think it is much easier to express $\sum_{i=0}^n a_ib_i 2^i$ than it is to express $\sum_{i=0}^n (a_i \oplus b_i) 2^i$ in terms of $a$ and $b$, but perhaps Listing's gigantic formula might be a tad easier to write out, though Henning Makholm's objections will still apply. Added note: For fixed $n$, we can express $c$ as $c = a + b - 2f(a,b)$ where $f(a, b)$ is specified recursively as $$f(a, b) = (a\%2)*(b\%2) + 2f(a/2, b/2)$$ with $a\%2$ meaning the remainder when integer $a$ is divided by $2$ (that is, $a \bmod 2$) and $a/2$ meaning "integer division" which gives the integer quotient (that is, $a/2 = (a - (a\%2))/2$). Working out the recursion gives a formula with $n+1$ terms for $f(a, b)$.
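A direct Python implementation of that recursion (the loop below is just the recursion unrolled; it uses only the operators the question allows):

```python
def xor_arith(a, b):
    """Bitwise XOR of non-negative integers via a + b - 2 * sum_i a_i b_i 2^i,
    using only +, -, *, // and %."""
    f, bit, x, y = 0, 1, a, b
    while x > 0 and y > 0:
        f += (x % 2) * (y % 2) * bit   # contributes 2^i when both bits are 1
        x, y, bit = x // 2, y // 2, bit * 2
    return a + b - 2 * f

print(xor_arith(12, 10), 12 ^ 10)      # 6 6
```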
Complex functions, integration Let $L_iL_j-L_jL_i = i\hbar\varepsilon_{ijk}L_k$ where $i,j,k\in\{1,2,3\}$ Let $u$ be any eigenstate of $L_3$. How might one show that $\langle L_1^2 \rangle = \int u^*L_1^2u = \int u^*L_2^2u = \langle L_2^2\rangle$ ? I can show that $\langle u|L_3L_1L_2-L_1L_2L_3|u\rangle=\int u^*(L_3L_1L_2-L_1L_2L_3)u =0$. And I know that $L_3 $ can be written as ${1\over C}(L_1L_2-L_2L_1)$. Hence I have $\int u^*L_1L_2^2L_1u=\int u^*L_2L_1^2L_2u$. But I don't seem to be getting the required form... Help will be appreciated. Added: The $L_i$'s are operators that don't necessarily commute.
Okay then. In below, we fix $u$ an eigenfunction of $L_3$, and denote by $\langle T\rangle:= \langle u|T|u\rangle$ for convenience for any operator $T$. Using self-adjointness of $L_3$, we have that $L_3 u = \lambda u$ where $\lambda \in \mathbb{R}$. And furthermore $$ \langle L_3T\rangle = \langle L_3^* T\rangle = \lambda \cdot \langle T\rangle = \langle TL_3\rangle $$ which we can also write as $$ \langle [T,L_3] \rangle = 0 $$ for any operator $T$. This implies $$ \frac{1}{\hbar}\langle L_1^2\rangle = \langle -iL_1L_2L_3 + iL_1L_3L_2 \rangle = \langle -iL_3L_1L_2 +i L_1L_3L_2\rangle = \frac{1}{\hbar}\langle L_2^2\rangle $$ The first and third equalities are via the defining relationship $[L_i,L_j] = i \hbar \epsilon_{ijk} L_k$. The middle equality is the general relationship derived above, applied to the first summand. (And is precisely the identity that you said you could show in the question.) Remark: it is important to note that the expression $\langle u| [T,A] |u\rangle = 0$ holds whenever $A$ is self adjoint and $u$ is an eigenvector for $A$. This does not imply that $[T,A] = 0$. This is already clear in a finite dimensional vector space where we can represent operators by matrices: consider $A = \begin{pmatrix} 1 & 0 \\ 0 & 2\end{pmatrix}$ and $T = \begin{pmatrix} 1 & 1 \\ 0 & 0\end{pmatrix}$. The commutator $[T,A] = \begin{pmatrix} 0 & 1 \\ 0 & 0\end{pmatrix}$, which is zero on the diagonals (as required), but is not the zero operator.
What's the limit of the sequence $\lim\limits_{n \to\infty} \frac{n!}{n^n}$? $$\lim_{n \to\infty} \frac{n!}{n^n}$$ I have a question: is it valid to use Stirling's Formula to prove convergence of the sequence?
We will first show that the sequence $x_n = \frac{n!}{n^n}$ converges. To do this, we will show that the sequence is both monotonic and bounded. Lemma 1: $x_n$ is monotonically decreasing. Proof. We can see this with some simple algebra: $$x_{n+1} = \frac{(n+1)!}{(n+1)^{n+1}} = \frac{n+1}{n+1}\frac{n!}{(n+1)^n} \frac{n^n}{n^n} = \frac{n!}{n^n} \frac{n^n}{(n+1)^n} = x_n \big(\frac{n}{n+1}\big)^n.$$ Since $\big(\frac{n}{n+1}\big)^n < 1$, then $x_{n+1} < x_n$. Lemma 2: $x_n$ is bounded. Proof. Straightforward to see that $n! \leq n^n$ and $n! \geq 0$. We obtain the bounds $0 \leq x_n \leq 1$, demonstrating that $x_n$ is bounded. Together, these two lemmas along with the monotone convergence theorem proves that the sequence converges. Theorem: $x_n \to 0$ as $n \to \infty$. Proof. Since $x_n$ converges, then let $s = \lim_{n \to \infty} x_n$, where $s \in \mathbb{R}$. Recall the relation in Lemma 1: $$x_{n+1} = x_n \big(\frac{n}{n+1}\big)^n = \frac{x_n}{(1+ \frac{1}{n})^n}.$$ Since $x_n \to s$, then so does $x_{n+1}$. Furthermore, a standard result is the limit $(1+ \frac{1}{n})^n \to e$. With these results, we have $\frac{x_n}{(1+ \frac{1}{n})^n} \to \frac{s}{e}$ and consequently $$s = \frac{s}{e} \implies s(1 - e^{-1}) = 0$$ Since $1 \neq e^{-1}$, then this statement is satisfied if and only if $s = 0$ and that concludes the proof.
Is there a way to solve for an unknown in a factorial? I don't want to do this through trial and error, and the best way I have found so far was to start dividing from 1. $n! = \text {a really big number}$ Ex. $n! = 9999999$ Is there a way to approximate n or solve for n through a formula of some sort? Update (Here is my attempt): Stirling's Approximation: $$n! \approx \sqrt{2 \pi n} \left( \dfrac{n}{e} \right ) ^ n$$ So taking the log of $(2 \pi n)^{1/2} \cdot n^n \cdot e^{-n} = (2\pi)^{1/2} \cdot n^{1/2} \cdot n^n \cdot e^{-n}$ gives $$.5\log (2\pi) + .5\log n + n\log n - n \log e = .5\log (2\pi) + (n+.5)\log n - n.$$ Now to solve for $n$ (writing $r$ for the log of the given value of $n!$): $$.5\log (2\pi) + (n+.5)\log n - n = r$$ $$(n+.5)\log n - n = r - .5 \log (2\pi)$$ Now I am a little caught up here.
I believe you can solve Stirling's formula for $n$ by way of the Lambert $W$ function, but that just leads to another expression, which can not be directly evaluated with elementary operations. (Although, I wish scientific calculators included the Lambert $W$, at least the principal branch). I've derived a formula which is moderately accurate. If you are rounding to integer values of $n$, it will provide exact results through $170!$, and perhaps further. The spreadsheet app on my iPad will not compute factorials beyond $170!$. If you are seeking rational solutions for $n$, in other words, trying to calculate values of a sort of inverse Gamma function, it will provide reasonably close approximations for $n$ in $[2, 170]$, and as I remarked above, possibly greater. My Clumsy Little Equation (given as a linked image in the original answer): I also have derived a variant which is much more accurate for $n$ in $[2, 40\text{ish}]$, however it starts to diverge beyond that. I initially derived these approximations for my daughter, who, while working on Taylor/Maclaurin series, wanted a quick way to check for potential factorial simplifications in the denominator of terms. I couldn't find any solutions, except the obvious path through Stirling's formula and the Lambert $W$ function. If more competent mathematicians can find a tweak which can improve accuracy, please share it. I apologize in advance, as I am new here (as a contributor, that is), and I am not yet allowed to directly embed an image. Hopefully, the linked one works.
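If a numerical answer suffices, one can sidestep inverting Stirling's formula and simply bisect $\log\Gamma(n+1) = \log n!$, which is increasing for $n \ge 1$; a sketch using only the Python standard library:

```python
import math

def inverse_factorial(r):
    """Solve n! = r for real n >= 1 by bisection on lgamma(n + 1)."""
    target = math.log(r)
    lo, hi = 1.0, 2.0
    while math.lgamma(hi + 1) < target:   # bracket the root by doubling
        hi *= 2
    for _ in range(80):                   # bisect to high precision
        mid = (lo + hi) / 2
        if math.lgamma(mid + 1) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

print(inverse_factorial(math.factorial(10)))   # ~10.0
print(inverse_factorial(9999999))              # ~10.36, between 10! and 11!
```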
Computer algebra system to simplify huge rational functions (of order 100 Mbytes) I have a huge rational function of three variables (which is of order ~100 Mbytes if dumped to a text file) which I believe to be identically zero. Unfortunately, neither Mathematica nor Maple succeeded in simplifying the expression to zero. I substituted a random set of three integers into the rational function and indeed it evaluated to zero; but just out of curiosity, I would like to use a computer algebra system to simplify it. Which computer algebra system should I use? I've heard of Magma, Macaulay2, Singular, GAP, Sage to list a few. Which is best suited to simplify a huge rational expression? In case you want to try simplifying the expressions yourself, I dumped them in two notations, Mathematica notation and Maple notation. Unzip the file and do <<"big.mathematica" or read("big.maple") from the interactive shell. This loads expressions called gauge and cft, both rational functions of a1, a2 and b. Each is non-zero, but I believe gauge=cft, so you should be able to simplify gauge-cft to zero. The relation comes from a string duality, see e.g. this paper by M. Taki.
Mathematica can actually prove that gauge-cft is exactly zero. To carry out the proof, observe that the expression for gauge is much smaller than the one for cft. Hence we first canonicalize gauge using Together, and then multiply cft by its denominator.
2 solutions for, solve $\cos x = -1/2$? Answer sheet displays only one, does this mean there is only one? $\cos x = -1/2$ can occur in quadrants 2 or 3, which gives it 2 answers; however the answer sheet only shows one. Does this mean I'm doing something completely wrong, or are they just not showing the other one? Thanks :D
It depends upon the range answers are allowed in. If you allow $[0,2\pi)$, there are certainly two answers as you say: $\frac{2\pi}{3}$ and $\frac{4\pi}{3}$. The range of arccos is often restricted to $[0,\pi]$ so there is a unique value. If $x$ can be any real, then you have an infinite number of solutions: $\frac{2\pi}{3}+2k\pi$ or $\frac{-2\pi}{3}+2k\pi$ for any integer $k$.
Expressing sums of products in terms of sums of powers I'm working on building some software that does machine learning. One of the problems I've come up against is that, I have an array of numbers: $[{a, b, c, d}]$ And I want to compute the following efficiently: $ab + ac + ad + bc + bd + cd$ Or: $abc + abd + acd + bcd$ Where the number of variables in each group is specified arbitrarily. I have a method where I use: $f(x) = a^x + b^x + c^x + d^x$ And then compute: $f(1) = a + b + c + d$ $(f(1)^2-f(2))/2 = ab + ac + ad + bc + bd + cd$ $(f(1)^3 - 3f(2)f(1) + 2f(3))/6 = abc + abd + acd + bcd$ $(f(1)^4 - 6f(2)f(1)^2 + 3f(2)^2 + 8f(3)f(1) - 6f(4))/24 = abcd$ But I worked these out manually and I'm struggling to generalize it. The array will typically be much longer and I'll want to compute much higher orders.
See Newton's identities en.wikipedia.org/wiki/Newton's_identities
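For the OP's computational setting, Newton's identities $k\,e_k = \sum_{i=1}^{k} (-1)^{i-1} e_{k-i}\, p_i$ translate into a short routine; a sketch using exact rational arithmetic so the division by $k$ introduces no rounding:

```python
from fractions import Fraction

def elementary_from_powers(values, k_max):
    """Elementary symmetric polynomials e_1..e_k_max of `values`,
    computed from the power sums p_j = sum(v**j) via Newton's identities."""
    p = [None] + [sum(Fraction(v) ** j for v in values) for j in range(1, k_max + 1)]
    e = [Fraction(1)] + [Fraction(0)] * k_max          # e_0 = 1
    for k in range(1, k_max + 1):
        e[k] = sum((-1) ** (i - 1) * e[k - i] * p[i] for i in range(1, k + 1)) / k
    return e[1:]

print(elementary_from_powers([1, 2, 3, 4], 4))
# e1 = 10, e2 = 35 (ab+ac+ad+...), e3 = 50 (abc+abd+...), e4 = 24
```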
Trouble with partial derivatives I've no clue how to get started. I'm unable to even understand what the hint is saying. I need your help, please. Given $$u = f(ax^2 + 2hxy + by^2), \qquad v = \phi (ax^2 + 2hxy + by^2),$$ then prove that $$\frac{\partial }{\partial y} \left ( u\frac{\partial v }{\partial x} \right ) = \frac{\partial }{\partial x}\left ( u \frac{\partial v}{\partial y} \right ).$$ Hint. Given $$u = f(z),\quad v = \phi(z), \quad\text{where }\ z = ax^2 + 2hxy + by^2.$$
I recommend you to use the chain rule, i.e. given functions $f:U\subset\mathbb{R}^n\rightarrow V\subset\mathbb{R}^m$ differentiable at $x$ and $g:V_0\subset V\subset\mathbb{R}^m\rightarrow W\subset\mathbb{R}^p$ differentiable at $f(x)$, we have $$D_x(g\circ f)=D_{f(x)}(g)D_x(f)$$ where $D_x(f)$ represents the Jacobian matrix of $f$ at $x$. In your particular case, when $n=2$ and $m=p=1$, we have for each coordinate that $$\left.\frac{\partial g\circ f}{\partial x_i}\right|_{(x_1,x_2)}=\left.\frac{d\,g}{dx}\right|_{f(x_1,x_2)}\left.\frac{\partial f}{\partial x_i}\right|_{(x_1,x_2)}$$
How to show $a,b$ coprime to $n\Rightarrow ab$ coprime to $n$? Let $a,b,n$ be integers such that $\gcd(a,n)=\gcd(b,n)=1$. How to show that $\gcd(ab,n)=1$? In other words, how to show that if two integers $a$ and $b$ each have no non-trivial common divisor with an integer $n$, then their product does not have a non-trivial common divisor with $n$ either. This is a problem that is an exercise in my course. Intuitively it seems plausible, and it is easy to check in specific cases, but how to give an actual proof is not obvious.
Let $P(x)$ be the set of primes that divide $x$. Then $\gcd(a,n)=1$ iff $P(a)$ and $P(n)$ are disjoint. Since $P(ab)=P(a)\cup P(b)$ (*), $\gcd(a,n)=\gcd(b,n)=1$ implies that $P(ab)$ and $P(n)$ are disjoint, which means that $\gcd(ab,n)=1$. (*) Here we use that if a prime divides $ab$ then it divides $a$ or $b$.
Proof that $6^n$ always has a last digit of $6$ Without being proficient in math at all, I have figured out, by looking at series of numbers, that $6$ in the $n$-th power always seems to end with the digit $6$. Anyone here willing to link me to a proof? I've been searching google, without luck, probably because I used the wrong keywords.
If you multiply any two integers whose last digit is 6, you get an integer whose last digit is 6: $$ \begin{array} {} & {} & {} & \bullet & \bullet & \bullet & \bullet & \bullet & 6 \\ \times & {} & {} &\bullet & \bullet & \bullet & \bullet & \bullet & 6 \\ \hline {} & \bullet & \bullet & \bullet & \bullet & \bullet & \bullet & \bullet & 6 \end{array} $$ (Get 36, and carry the "3", etc.) To put it another way, if the last digit is 6, then the number is $(10\times\text{something}) + 6$. So $$ \begin{align} & \Big((10\times\text{something}) + 6\Big) \times \Big((10\times\text{something}) + 6\Big) \\ = {} & \Big((10\times\text{something})\times (10\times\text{something})\Big) \\ & {} + \Big((10\times\text{something})\times 6\Big) + \Big((10\times\text{something})\times 6\Big) + 36 \\ = {} & \Big(10\times \text{something}\Big) +36 \\ = {} & \Big(10\times \text{something} \Big) + 6. \end{align} $$
Proving $1^3+ 2^3 + \cdots + n^3 = \left(\frac{n(n+1)}{2}\right)^2$ using induction How can I prove that $$1^3+ 2^3 + \cdots + n^3 = \left(\frac{n(n+1)}{2}\right)^2$$ for all $n \in \mathbb{N}$? I am looking for a proof using mathematical induction. Thanks
Let the induction hypothesis be $$ (1^3+2^3+3^3+\cdots+n^3)=(1+2+3+\cdots+n)^2$$ Now consider: $$ (1+2+3+\cdots+n + (n+1))^2 $$ $$\begin{align} & = \color{red}{(1+2+3+\cdots+n)^2} + (n+1)^2 + 2(n+1)\color{blue}{(1+2+3+\cdots+n)}\\ & = \color{red}{(1^3+2^3+3^3+\cdots+n^3)} + (n+1)^2 + 2(n+1)\color{blue}{(n(n+1)/2)}\\ & = (1^3+2^3+3^3+\cdots+n^3) + \color{teal}{(n+1)^2 + n(n+1)^2}\\ & = (1^3+2^3+3^3+\cdots+n^3) + \color{teal}{(n+1)^3} \end {align}$$ QED
Limits of Expectations I've been fighting with this homework problem for a while now, and I can't quite see the light. The problem is as follows, Assume random variable $X \ge 0$, but do NOT assume that $\mathbb{E}\left[\frac1{X}\right] < \infty$. Show that $$\lim_{y \to 0^+}\left(y \, \mathbb{E}\left[\frac{1}{X} ; X > y\right]\right) = 0$$ After some thinking, I've found that I can bound $$ \mathbb{E}[1/X;X>y] = \int_y^{\infty}\frac1{x}\mathrm dP(x) \le \int_y^{\infty}\frac1{y}\mathrm dP(x) $$ since $\frac1{y} = \sup\limits_{x \in (y, \infty)} \frac1{x}$ resulting in $$ \lim_{y \to 0^+} y \mathbb{E}[1/X; X>y] \le \lim_{y \to 0^+} y \int_y^{\infty}\frac1{y}\mathrm dP(x) = P[X>0]\le1 $$ Of course, $1 \not= 0$. I'm not really sure how to proceed... EDIT: $\mathbb{E}[1/X;X>y]$ is defined to be $\int_y^{\infty} \frac{1}{x}\mathrm dP(x)$. This is the notation used in Durret's Probability: Theory and Examples. It is NOT a conditional expectation, but rather a specifier of what set is being integrated over. EDIT: Changed $\lim_{y \rightarrow 0^-}$ to $\lim_{y \rightarrow 0^+}$; this was a typo.
Hint: For any $k > 1$, $\int_y^\infty \frac{y}{x} \ dP(x) \le \int_y^{ky}\ dP(x) + \int_{ky}^\infty \frac{1}{k} \ dP(x) \le \ldots$
Good book for self study of a First Course in Real Analysis Does anyone have a recommendation for a book to use for the self study of real analysis? Several years ago when I completed about half a semester of Real Analysis I, the instructor used "Introduction to Analysis" by Gaughan. While it's a good book, I'm not sure it's suited for self study by itself. I know it's a rigorous subject, but I'd like to try and find something that "dumbs down" the material a bit, then between the two books I might be able to make some headway.
Bartle's book is more systematic: the arguments in all the theorems are very clear, and it has nice examples. A good one to keep at hand while studying analysis.
Solve to find the unknown I have been doing questions from the past year and I come across this question which stumped me: The constant term in the expansion of $\left(\frac1{x^2}+ax\right)^6$ is $1215$; find the value of $a$. (The given answer is: $\pm 3$ ) Should be an easy one, but I don't know how to begin. Some help please?
Sofia, sorry for the delayed response; I was busy with other posts. You have two choices: one is to use Pascal's triangle, and the other is to expand using the binomial theorem. You can compare the expression $$\left ( \frac{1}{x^2} + ax \right )^6$$ with $$(a+x)^6$$ where the roles of $a$ and $x$ are played by $\frac{1}{x^2}$ and $ax$ respectively, with $n=6$. Here's the Pascal's triangle way of expanding the given expression; all you need to do is substitute those values into the last row. $$(a + x)^0 = 1$$ $$(a + x)^1 = a + x$$ $$(a + x)^2 = (a + x)(a + x) = a^2 + 2ax + x^2$$ $$(a + x)^3 = (a + x)^2(a + x) = a^3 + 3a^2x + 3ax^2 + x^3$$ $$(a + x)^4 = (a + x)^3(a + x) = a^4 + 4a^3x + 6a^2x^2 + 4ax^3 + x^4$$ $$(a + x)^5 = (a + x)^4(a + x) = a^5 + 5a^4x + 10a^3x^2 + 10a^2x^3 + 5ax^4 + x^5$$ $$(a + x)^6 = (a + x)^5(a + x) = a^6 + 6a^5x + 15a^4x^2 + 20a^3x^3 + 15a^2x^4 + 6ax^5 + x^6$$ Here's the binomial theorem way of expanding it out: $$(a+x)^n = a^n + na^{n-1}x + \frac{n(n-1)}{2!}a^{n-2}x^2 + \frac{n(n-1)(n-2)}{3!}a^{n-3}x^3 + \dotsb$$ Using the above theorem you should get $$a^6x^6 + 6a^5x^3 + 15a^4 + \frac{20a^3}{x^3} + \frac{15a^2}{x^6}+\frac{6a}{x^9}+\frac{1}{x^{12}}$$ The constant term is $15a^4$; setting $15a^4 = 1215$ gives $a^4 = 81$, and hence $a = \pm 3$.
Real square matrices space as a manifold Let $\mathrm{M}(n,\mathbb{R})$ be the space of $n\times n$ matrices over $\mathbb{R}$. Consider the function $m \in \mathrm{M}(n,\mathbb{R}) \mapsto (a_{11},\dots,a_{1n},a_{21}\dots,a_{nn}) \in \mathbb{R}^{n^2}$. The space $\mathrm{M}(n,\mathbb{R})$ is locally Euclidean at any point and we have a single chart atlas. I read that the function is bicontinuous, but what is the topology on $\mathrm{M}(n,\mathbb{R})$? Second question... in what sense is a $C^{\infty}$ structure defined when there are no (non-trivial) coordinate changes? Do we have to consider just the identity change? Thanks.
The topology on $\mathrm{M}(n, \mathbb{R})$ making the map you give bicontinuous is, well, the topology that makes that map bicontinuous. Less tautologically, it is the topology induced by any norm on $\mathrm{M}(n, \mathbb{R})$. (Recall that all norms on a finite-dimensional real vector space are equivalent.) Because Euclidean space has a canonical smooth structure, the fact that you have a single-chart atlas means that you can give $\mathrm{M}(n, \mathbb{R})$ the smooth structure by pulling it back from $\mathbb{R}^{n^2}$. There are no compatibility conditions to verify: the identity transition map will always be smooth.
Integral around unit sphere of inner product For arbitrary $n\times n$ matrices M, I am trying to solve the integral $$\int_{\|v\| = 1} v^T M v.$$ Solving this integral in a few low dimensions (by passing to spherical coordinates) suggests the answer in general to be $$\frac{A\,\mathrm{tr}(M)}{n}$$ where $A$ is the surface area of the $(n-1)$-dimensional sphere. Is there a nice, coordinate-free approach to proving this formula?
The integral is linear in $M$, so we only have to calculate it for canonical matrices $A_{kl}$ spanning the space of matrices, with $(A_{kl})_{ij}=\delta_{ik}\delta_{jl}$. The integral vanishes by symmetry for $k\neq l$, since for every point on the sphere with coordinates $x_k$ and $x_l$ there's one with $x_k$ and $-x_l$. So we only have to calculate the integral for $k=l$. By symmetry, this is independent of $k$, so it's just $1/n$ of the integral for $M$ the identity. But that's just the integral over $1$, which is the surface area $A$ of the sphere. Then by linearity the integral for arbitrary $M$ is the sum of the diagonal elements, i.e. the trace, times the coefficient $A/n$.
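A quick Monte Carlo check of the claim, sampling the sphere by normalizing Gaussian vectors; equivalently, it verifies that the spherical average of $v^TMv$ is $\operatorname{tr}(M)/n$:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
M = rng.standard_normal((n, n))

v = rng.standard_normal((1_000_000, n))
v /= np.linalg.norm(v, axis=1, keepdims=True)      # uniform points on S^{n-1}
mean_quad = np.einsum('ki,ij,kj->k', v, M, v).mean()

print(mean_quad)          # empirical average of v^T M v over the sphere
print(np.trace(M) / n)    # the predicted value tr(M)/n
```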
A "fast" way to compute number of pairs of positive integers $(a,b)$ with lcm $N$ I am looking for a fast/efficient method to compute the number of pairs of $(a,b)$ so that its LCM is a given integer, say $N$. For the problem I have in hand, $N=2^2 \times 503$ but I am very inquisitive for a general algorithm. Please suggest a method that should be fast when used manually.
If $N$ is a prime power $p^n$, then there are $2n+1$ such (ordered) pairs -- one member must be $p^n$ itself, and the other can be a lower power. If $N=PQ$ for coprime $P$ and $Q$, then each combination of a pair for $P$ and a pair for $Q$ gives a different, valid pair for $N$. (And all pairs arise in this manner). Therefore, the answer should be the product of all $2n+1$ where $n$ ranges over the exponents in the prime factorization of $N$. For the $N=2^2 \times 503$ in the question, this gives $(2\cdot 2+1)(2\cdot 1+1)=15$ ordered pairs.
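For larger $N$ handled by machine rather than by hand, the rule is a few lines; a Python sketch using plain trial division (any factorization method would do):

```python
def lcm_pair_count(N):
    """Number of ordered pairs (a, b) with lcm(a, b) == N:
    the product of (2e + 1) over the prime exponents e of N."""
    count, p = 1, 2
    while p * p <= N:
        e = 0
        while N % p == 0:
            N //= p
            e += 1
        if e:
            count *= 2 * e + 1
        p += 1
    if N > 1:                 # a leftover prime factor with exponent 1
        count *= 3
    return count

print(lcm_pair_count(2**2 * 503))   # (2*2+1) * (2*1+1) = 15
```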
Why is a finite integral domain always field? This is how I'm approaching it: let $R$ be a finite integral domain and I'm trying to show every element in $R$ has an inverse: * *let $R-\{0\}=\{x_1,x_2,\ldots,x_k\}$, *then as $R$ is closed under multiplication $\prod_{n=1}^k\ x_i=x_j$, *therefore by canceling $x_j$ we get $x_1x_2\cdots x_{j-1}x_{j+1}\cdots x_k=1 $, *by commuting any of these elements to the front we find an inverse for first term, e.g. for $x_m$ we have $x_m(x_1\cdots x_{m-1}x_{m+1}\cdots x_{j-1}x_{j+1}\cdots x_k)=1$, where $(x_m)^{-1}=x_1\cdots x_{m-1}x_{m+1}\cdots x_{j-1}x_{j+1}\cdots x_k$. As far as I can see this is correct, so we have found inverses for all $x_i\in R$ apart from $x_j$ if I am right so far. How would we find $(x_{j})^{-1}$?
Simple arguments have already been given. Let us do a technological one. Let $A$ be a finite commutative integral domain. It is artinian, so its radical $\mathrm{rad}(A)$ is nilpotent—in particular, the non-zero elements of $\mathrm{rad}(A)$ are themselves nilpotent: since $A$ is a domain, this means that $\mathrm{rad}(A)=0$. It follows that $A$ is semisimple, so it is a direct product of matrix rings over division rings. It is a domain, so there can only be one factor; it is commutative, so that factor must be a ring of $1\times 1$ matrices over a commutative division ring. In all, $A$ must be a field.
What is your favorite application of the Pigeonhole Principle? The pigeonhole principle states that if $n$ items are put into $m$ "pigeonholes" with $n > m$, then at least one pigeonhole must contain more than one item. I'd like to see your favorite application of the pigeonhole principle, to prove some surprising theorem, or some interesting/amusing result that one can show students in an undergraduate class. Graduate level applications would be fine as well, but I am mostly interested in examples that I can use in my undergrad classes. There are some examples in the Wikipedia page for the pigeonhole principle. The hair-counting example is one that I like... let's see some other good ones! Thanks!
In a finite semigroup $S$, every element has an idempotent power, i.e. for every $s \in S$ there exists some $k$ such that $s^k$ is idempotent, i.e. $(s^{k})^2 = s^k$. For the proof consider the sequence $s, s^2, s^3, \ldots$ which has to repeat somewhere, let $s^n = s^{n+p}$, then inductively $s^{(n + u) + vp} = s^{n + u}$ for all $u,v \in \mathbb N_0$, so in particular for $k = np$ we have $s^{2k} = s^{np + np} = s^{np} = s^k$. This result is used in the algebraic theory of automata in computer science.
On factorizing and solving the polynomial: $x^{101} – 3x^{100} + 2x^{99} + x^{98} – 3x^{97} + 2x^{96} + \cdots + x^2 – 3x + 2 = 0$ The actual problem is to find the product of all the real roots of this equation; I am stuck with this factorization: $$x^{101} – 3x^{100} + 2x^{99} + x^{98} – 3x^{97} + 2x^{96} + \cdots + x^2 – 3x + 2 = 0$$ By just guessing I noticed that $(x^2 – 3x + 2)$ is one factor, and then dividing the whole thing we get $(x^{99}+x^{96}+x^{93} + \cdots + 1)$ as the other factor, but I really don't know how to solve those where wild guessing won't work! Do we have any trick for factorizing this kind of big polynomial? Also I am not sure how to find the roots of $(x^{99}+x^{96}+x^{93} + \cdots + 1)=0$, so any help in this regard will be appreciated.
In regard to the first part of your question ("wild guessing"), the point was to note that the polynomial can be expressed as the sum of three polynomials, grouping same coefficients: $$ P(x)= x^{101} – 3x^{100} + 2x^{99} + x^{98} – 3x^{97} + 2x^{96} + \cdots + x^2 – 3x + 2 = A(x)+B(x)+C(x)$$ with $$\begin{eqnarray} A(x) &= x^{101} + x^{98} + \cdots + x^2 &= x^2 (x^{99} + x^{96} + \cdots + 1) \\ B(x) &= - 3 x^{100} -3 x^{97} - \cdots -3 x &= - 3 x (x^{99} + x^{96} + \cdots + 1)\\ C(x) &= 2 x^{99} + 2 x^{96} + \cdots + 2 &= 2 (x^{99} + x^{96} + \cdots + 1) \\ \end{eqnarray} $$ so $$P(x) = (x^2 - 3x +2) (x^{99} + x^{96} + \cdots + 1) $$ and applying the geometric finite sum formula: $$P(x)=(x^2 - 3x +2) ({(x^{3})}^{33} + {(x^{3})}^{32} + \cdots + 1) = (x^2 - 3x +2) \frac{x^{102}-1}{x^3-1} $$ As Andre notes in the comments, your "guessing" was dictated by the very particular structure of the polynomial, you can't hope for some general guessing recipe...
Is there a solid where all triangles on the surface are isosceles? Are there any solids in $R^{3}$ for which, for any 3 points chosen on the surface, at least two of the lengths of the shortest curves which can be drawn on the surface to connect pairs of them are equal?
There can be no smooth surface with this property, because a smooth surface looks locally like a plane, and the plane allows non-isosceles triangles. As for non-smooth surfaces embedded in $\mathbb R^3$ -- which would need to be everywhere non-smooth for this purpose -- it is not clear to me that there is even a good general definition of curve length that would allow us to speak of "shortest curves".
Prove that meas$(A)\leq 1$ Let $f:\mathbb R\to [0,1]$ be a nondecreasing continuous function and let $$A:=\{x\in\mathbb R : \exists\quad y>x\:\text{ such that }\:f(y)-f(x)>y-x\}.$$ I've already proved that: a) if $(a,b)$ is a bounded open interval contained in $A$, and $a,b\not\in A$, then $f(b)-f(a)=b-a.$ b) $A$ contains no half line. What remains is to prove that the Lebesgue measure of $A$ is less than or equal to $1$. I've tried to get estimates on the integral of $\chi_A$ but I went nowhere further than just writing down the integral. I'm not sure whether points a) and b) are useful, but I've reported them here for the sake of completeness so you can use them if you want. Thanks everybody.
Hint: (c) Show that $A$ is open. (d) Any open subset of $\mathbb{R}$ is the union of countably many disjoint open intervals $\amalg (a_i,b_i)$, and its Lebesgue measure is equal to $\sum (b_i - a_i)$. Now apply item (a) and the fact that $f$ is non-decreasing.
How to show that $\lim\limits_{x \to \infty} f'(x) = 0$ implies $\lim\limits_{x \to \infty} \frac{f(x)}{x} = 0$? I was trying to work out a problem I found online. Here is the problem statement: Let $f(x)$ be continuously differentiable on $(0, \infty)$ and suppose $\lim\limits_{x \to \infty} f'(x) = 0$. Prove that $\lim\limits_{x \to \infty} \frac{f(x)}{x} = 0$. (source: http://www.math.vt.edu/people/plinnell/Vtregional/E79/index.html) The first idea that came to my mind was to show that for all $\epsilon > 0$, we have $|f(x)| < \epsilon|x|$ for sufficiently large $x$. (And I believe I could do this using the fact that $f'(x) \to 0$ as $x \to \infty$.) However, I was wondering if there was a different (and nicer or cleverer) way. Here's an idea I had in mind: If $f$ is bounded, then $\frac{f(x)}{x}$ clearly goes to zero. If $\lim\limits_{x \to \infty} f(x)$ is either $+\infty$ or $-\infty$, then we can apply l'Hôpital's rule (to get $\lim\limits_{x \to \infty} \frac{f(x)}{x} = \lim\limits_{x \to \infty} \frac{f'(x)}{1} = 0$). However, I'm not sure what I could do in the remaining case (when $f$ is unbounded but oscillates like crazy). Is there a way to finish the proof from here? Also, are there other ways of proving the given statement?
You can do a proof by contradiction. If ${\displaystyle \lim_{x \rightarrow \infty} {f(x) \over x}}$ were not zero, then there is an $\epsilon > 0$ such that there are $x_1 < x_2 < ... \rightarrow \infty$ such that ${\displaystyle\bigg|{f(x_n) \over x_n}\bigg| > \epsilon}$. Then for $a \leq x_n$ one has $$|f(x_n) - f(a)| \geq |f(x_n)| - |f(a)|$$ $$\geq \epsilon x_n - |f(a)|$$ By the mean value theorem, there is a $y_n$ between $a$ and $x_n$ such that $$|f'(y_n)| = {|f(x_n) - f(a)| \over x_n - a}$$ $$\geq {\epsilon x_n - |f(a)| \over x_n - a}$$ Letting $n$ go to infinity this means for sufficiently large $n$ you have $$|f'(y_n)| > {\epsilon \over 2}$$ Since each $y_n \geq a$ and $a$ is arbitrary, $f'(y)$ can't go to zero as $y$ goes to infinity.
Samples and random variables Suppose I picked a sample of $n$ 20-year-olds. I measured the height of each to obtain $n$ numbers: $h_1, h_2, \ldots, h_n$. According to the theory of probability/statistics, there are $n$ random variables associated with the sample, say $X_1, X_2, \ldots, X_n$. However, I do not understand the relationship between the $X_i$ and the $h_i$; I have yet to see it explained clearly in any book, and so I have a few questions. * *What is the probability space corresponding to the $X_i$? It seems to me that the way one samples should affect what the probability space will look like. In this case, I am sampling without replacement and the order in which I picked the individuals is irrelevant, so I believe the sample space $\Omega$ should consist of all $n$-tuples of 20-year-olds such that no tuple contains the same individual twice. In this way, $X_i(\omega)$ is the height of the $i$th individual in the $n$-tuple $\omega \in \Omega$. The sample I picked would therefore correspond to one particular point in $\Omega$, call it $\omega_0$, such that $X_i(\omega_0) = h_i$. I surmise that the $\sigma$-algebra will be just the power set $2^\Omega$ of $\Omega$, but I haven't a clue as to what the probability measure would be. *Let $(\Gamma, 2^\Gamma, P)$ be a probability space where $\Gamma$ is the set of all 20-year-olds and let $X$ be a random variable on $\Gamma$ such that $X(\gamma)$ is the height of the individual $\gamma\in\Gamma$. What is the connection between the $X_i$ and $X$ besides that afforded to us by the law of large numbers? In particular, what is the exact relationship between the probability space of $X$ and that of the $X_i$?
Your experiment consists of choosing $n$ people from a certain population of 20-year-olds and measuring their heights. $X_i$ is the height of the $i$'th person chosen. In the particular outcome you obtained when you did this experiment, the value of $X_i$ was $h_i$. The sample space $\Omega$ is all ordered $n$-tuples of distinct individuals from the population. Since the sample space is finite (there being only a finite number of 20-year-olds in the world), the $\sigma$-algebra is indeed $2^\Omega$. If you choose your sample "at random", the probability measure assigns equal probabilities to all $\omega \in \Omega$.
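If it helps to see the formalism in miniature, here is a small Python sketch; the population size and height distribution are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy population Gamma of 20-year-olds; X(gamma) is the height of gamma.
population_heights = rng.normal(175.0, 7.0, size=10_000)  # cm, made-up

# One outcome omega_0 in Omega: an ordered n-tuple of distinct individuals,
# drawn uniformly (sampling without replacement).
n = 50
idx = rng.choice(population_heights.size, size=n, replace=False)

# X_i(omega_0) = height of the i-th chosen individual = h_i.
h = population_heights[idx]
print(h[:5])     # h_1, ..., h_5 for this particular outcome
print(h.mean())  # sample mean; close to E[X] by the law of large numbers
```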
Linear functions I'm taking Algebra II, and I need help on two types of problems. The numbers may not work out, as I am making these up. 1st problem example: Using the following function values, write a linear function: f(2) = 3 and f(5) = 4. 2nd problem example: Write a solution set for the following equation (I thought you couldn't solve a linear equation?): 2x + 4y = 8. Feel free to use different numbers that actually work in your examples; these problems are just making me scratch my head.
First problem. At this level, a "linear function" is one of the form $f(x) = ax+b$ for some $a$ and $b$. If you know that $f(2)=3$ and $f(5)=4$, then by plugging in you get two equations: $$\begin{align*} 3 &= 2a + b\\ 4 &= 5a + b \end{align*}$$ From these two equations, you should be able to solve for $a$ and $b$, thus finding the function. For example, you can solve for $b$ in the first equation, substitute in the second, and solve the resulting equation for $a$; then plug that in to find the value of $b$. Second Problem. An equation like $2x+4y = 8$ does not have a unique solution, but each value of $x$ gives you a corresponding value of $y$ and vice-versa. The "solution set" of this would be a description of all the values that make the equation true. For instance, if you had the problem "Write a solution set for $x-3y=4$", you could do the following: given a value of $y$, the value of $x$ has to be $4+3y$. So one way to write the solutions is: $$\bigl\{ (4+3y, y)\,\bigm|\, y\text{ any real number}\bigr\}.$$ For each real number $y$, you get one solution. Or you could solve for $y$, to get that $y=\frac{x-4}{3}$, and write a solution set as: $$\left\{\left. \left(x, \frac{x-4}{3}\right)\,\right|\, x\text{ any real number}\right\}.$$
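A quick numerical check of the first problem (a sketch; solving the $2\times 2$ system by hand works just as well):

```python
import numpy as np

# Solve  3 = 2a + b  and  4 = 5a + b  for the coefficients of f(x) = a*x + b.
A = np.array([[2.0, 1.0],
              [5.0, 1.0]])
rhs = np.array([3.0, 4.0])
a, b = np.linalg.solve(A, rhs)
print(a, b)        # a = 1/3, b = 7/3

f = lambda x: a * x + b
print(f(2), f(5))  # 3.0 and 4.0, matching the given values
```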
How to find $\gcd(f_{n+1}, f_{n+2})$ by using Euclidean algorithm for the Fibonacci numbers whenever $n>1$? Find $\gcd(f_{n+1}, f_{n+2})$ by using Euclidean algorithm for the Fibonacci numbers whenever $n>1$. How many division algorithms are needed? (Recall that the Fibonacci sequence $(f_n)$ is defined by setting $f_1=f_2=1$ and $f_{n+2}=f_{n+1}+f_n$ for all $n \in \mathbb N^*$, and look here to get information about Euclidean algorithm)
anon's answer: $$ \gcd(F_{n+1},F_{n+2}) = \gcd(F_{n+1},F_{n+2}-F_{n+1}) = \gcd(F_{n+1},F_n). $$ Therefore $$ \gcd(F_{n+1},F_n) = \gcd(F_2,F_1) = \gcd(1,1) = 1. $$ In other words, any two adjacent Fibonacci numbers are relatively prime. Since $$\gcd(F_n,F_{n+2}) = \gcd(F_n,F_{n+1}+F_n) = \gcd(F_n,F_{n+1}), $$ this is also true for any two Fibonacci numbers at distance $2$. Since $\gcd(F_3,F_6) = \gcd(2,8)=2$, the pattern ends here - or so you might think... It is not difficult to prove (by induction on $k$) that $$F_{n+k+1} = F_{k+1}F_{n+1} + F_kF_n. $$ Therefore, since $F_n$ and $F_{n+1}$ are coprime, $$ \gcd(F_{n+k+1},F_{n+1}) = \gcd(F_kF_n,F_{n+1}) = \gcd(F_k,F_{n+1}). $$ This is exactly the Euclidean algorithm running on the indices, so we deduce $$ \gcd(F_a,F_b) = F_{\gcd(a,b)}. $$
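The identity is easy to spot-check in a few lines of Python (a quick verification, not a proof):

```python
from functools import lru_cache
from math import gcd

@lru_cache(maxsize=None)
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

# Spot-check gcd(F_a, F_b) == F_gcd(a, b) for small indices.
for a in range(1, 30):
    for b in range(1, 30):
        assert gcd(fib(a), fib(b)) == fib(gcd(a, b))
print("gcd(F_a, F_b) = F_gcd(a, b) holds for all 1 <= a, b < 30")
```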
Does solving a Rubik's cube imply alignment? Today, I got my hands on a Rubik's cube with text printed on its faces. Now, I would love to know whether solving the cube will also always correctly align the text, or whether it's possible to solve the cube in a way that the colors match correctly but the text is misaligned.
whether it's possible to solve the cube in a way that the colors match correctly but the text is misaligned Yes. But the total amount of misalignment (if I remember correctly; it's been a while since I played with a Rubik's cube) must be a multiple of $\pi$ radians. So if only one face has the center piece misaligned, it must be upside down. On the other hand it is possible to have two center pieces simultaneously off by quarter-turns. (This is similar to the fact that without taking the cube apart, you cannot change the orientation of an edge piece (as opposed to center piece or corner piece) while fixing everything else.) (I don't actually have a group-theoretic proof for the fact though; this is just from experience.) Edit: Henning Makholm provides a proof in the comments Here's the missing group-theoretic argument: Place four marker dots symmetrically on each center, and one marker at the tip of each corner cubie, for a total of 32 markers. A quarter turn permutes the 32 markers in two 4-cycles, which is even. Therefore every possible configuration of the dots is an even permutation away from the solved state. But misaligning one center by 90° while keeping everything else solved would be an odd permutation and is therefore impossible.
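Henning Makholm's counting argument can be verified mechanically; below is a small Python sketch (the marker bookkeeping is reduced to the bare parity computation, and the index assignments are mine):

```python
def parity(perm):
    """Parity of a permutation given as a tuple of images of 0..n-1:
    0 for even, 1 for odd, counted via the cycle structure."""
    seen, odd = set(), 0
    for i in range(len(perm)):
        if i not in seen:
            length, j = 0, i
            while j not in seen:
                seen.add(j)
                j = perm[j]
                length += 1
            odd ^= (length - 1) & 1
    return odd

# A quarter turn moves the 32 markers in two disjoint 4-cycles:
# the 4 dots on the turned center, and the 4 corner tips around that face.
move = {0: 1, 1: 2, 2: 3, 3: 0, 4: 5, 5: 6, 6: 7, 7: 4}
quarter_turn = tuple(move.get(i, i) for i in range(32))
print(parity(quarter_turn))  # 0: even, so every reachable position is even

# Twisting ONE center by 90 degrees while fixing everything else
# would be a single 4-cycle on the markers, an odd permutation:
center_twist = tuple({0: 1, 1: 2, 2: 3, 3: 0}.get(i, i) for i in range(32))
print(parity(center_twist))  # 1: odd, hence unreachable
```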
Jordan Measures, Open sets, Closed sets and Semi-closed sets I cannot understand: $$\bar{\mu} ( \{Q \cap (0,1) \} ) = 1$$ and (cannot understand this one particularly) $$\underline{\mu} ( \{Q \cap (0,1) \} ) = 0$$ where $Q$ is the set of rational numbers. Why? I know that the measure of the closed set is $\mu ([0,1]) = 1$, so I am puzzled by the open-set result. Is $\underline{\mu} ( (0,1) ) = 0$ also? How does the Jordan measure behave on open sets in general? So far the main question; the history contains some helper questions, but I think this is the lion's share of what I cannot understand. More about Jordan measure here. Related * *Jordan measure with semi-closed sets here *Jordan measure and uncountable sets here
The inner measure of $S:=\mathbb Q \cap [0,1]$ is, by definition, the supremum of the measures of the simple sets contained in $S$. But there is no non-trivial, i.e., non-empty, simple set contained in $S$: by density of the irrationals in $\mathbb R$, any interval $[a,a+e)$ with $e>0$ necessarily contains some irrational, so it cannot lie inside $S$. Hence the only simple set contained in $S$ is the empty set, which is defined to have measure $0$, so the inner measure is $0$. For the outer measure, you want to find the "smallest" simple set $T$ containing $S$. By density of $\mathbb Q$ in $\mathbb R$, the closure of any simple set covering $S$ must contain all of $[0,1]$, so its measure is at least $1$. Sets of the type $S'':=[0, 1+\frac{1}{n})$ do cover $S$, with $\mu(S'')=1 + \frac{1}{n}$. The infimum of the measures over all such covers is then $\inf \{ 1+\frac{1}{n} : n \in \mathbb N \}$, which is $1$.
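In symbols, writing $\mathcal E$ for the family of simple (elementary) sets, the two computations above amount to $$\underline{\mu}(S) = \sup\{\mu(E) : E \in \mathcal E,\ E \subseteq S\} = \mu(\varnothing) = 0, \qquad \bar{\mu}(S) = \inf\{\mu(E) : E \in \mathcal E,\ S \subseteq E\} = 1,$$ the infimum being approached by the covers $[0, 1+\frac{1}{n})$.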
Produce output with certain probability using fair coin flips "Introduction to Algorithms" C.2-6 Describe a procedure that takes as input two integers a and b such that $0 < a < b$ and, using fair coin flips, produces as output heads with probability $a / b$ and tails with probability $(b - a) /b$. Give a bound on the expected number of coin flips, which should be O(1). (Hint: Represent a/b in binary.) My guess is that we can use head to represent bit 0 and tail for bit 1. Then by flipping $m = \lceil \log_2 b \rceil$ times, we obtain an $m$-bit binary number $x$. If $ x \ge b $, we discard $x$ and repeat the experiment until we get an $x < b$. This $x$ satisfies $P\{ x < a\} = \frac a b$ and $P\{ a \le x < b\} = \frac {b - a} b$. But I'm not quite sure if my solution is what the question asks. Am I right? Edit: I think Michael and TonyK gave the correct algorithm, and Michael explained the reasoning behind it. The 3 questions he asked: * *show that this process requires c coin flips, in expectation, for some constant c; The expected number of flips, as TonyK pointed out, is 2. * *show that you yell "NO!" with probability 2/3; P(yell "NO!") = P("the coin first comes up tails on an odd toss") = $ \sum_{k=0}^\infty (\frac 1 2)^{2k+1} = \frac 2 3$ * *explain how you'd generalize to other rational numbers, with the same constant c It's the algorithm given by TonyK. We can restate it like this: Represent a/b in binary. Define $f(Head) = 0$ and $f(Tail) = 1$. Keep flipping; at the first $n$ where f($n$th flip's result) $\ne$ "$n$th bit of a/b", terminate. If the last flip is Head, we yell "Yes", otherwise "No". We have $ P \{Yes\} = \sum_{i\in I}(1/2)^i = \frac a b $ where $I$ is the set of indices at which the binary expansion of a/b has a 1.
Here is how I understand Michael's and TonyK's solutions. * *$0 \lt a \lt b$, which means $0 \lt {\frac ab} \lt 1$, so $\frac ab$ can be represented as an infinite binary fraction of the form $0.b_1b_2...b_{j-1}b_jb_{j+1}b_{j+2}...$ (appending trailing $0$s when $\frac ab$ is a finite binary fraction). Flipping a coin infinitely many times likewise produces a binary fraction between $0$ and $1$, namely $0.{flip_1}{flip_2}...{flip_i}...$, bit by bit. Now, the probability that the flipped fraction $0.{flip_1}{flip_2}...{flip_i}...$ is less than $\frac ab$ is $\frac ab$, and the probability that it is greater is $\frac {b-a} {b}$, since the flips give a continuous uniform distribution on $[0,1]$. *However, flipping a coin infinitely many times is impossible, and all we need is whether the infinite string $0.{flip_1}{flip_2}...{flip_i}...$ is less than or greater than $\frac ab$. We can already decide this at the first mismatched throw, because any $0.b_1b_2...b_{j-1}\mathbf0x_{j+1}x_{j+2}...$ is less than $0.b_1b_2...b_{j-1}\mathbf1b_{j+1}b_{j+2}...$ when $b_j = 1$, and any $0.b_1b_2...b_{j-1}\mathbf1x_{j+1}x_{j+2}...$ is greater than $0.b_1b_2...b_{j-1}\mathbf0b_{j+1}b_{j+2}...$ when $b_j = 0$.
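For concreteness, here is a Python sketch of the restated algorithm; the names are mine and `random.getrandbits(1)` stands in for the fair coin:

```python
import random
from fractions import Fraction

def biased_flip(a, b):
    """Return True ('heads') with probability a/b using fair coin flips.

    Compares fair bits against the binary expansion of a/b and stops at
    the first mismatch; the expected number of flips is 2."""
    assert 0 < a < b
    r = Fraction(a, b)
    while True:
        r *= 2
        bit = 1 if r >= 1 else 0      # next binary digit of a/b
        r -= bit
        flip = random.getrandbits(1)  # fair coin: 0 = head, 1 = tail
        if flip != bit:
            # flip < bit  <=>  the flipped fraction falls below a/b
            return flip < bit

# Empirical check: the frequency should be close to 3/7 = 0.4285...
trials = 200_000
print(sum(biased_flip(3, 7) for _ in range(trials)) / trials)
```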
Solving for Inequality $\frac{12}{2x-3}<1+2x$ I am trying to solve for the following inequality: $$\frac{12}{2x-3}<1+2x$$ In the given answer, $$\frac{12}{2x-3}-(1+2x)<0$$ $$\frac{-(2x+3)(2x-5)}{2x-3}<0 \rightarrow \textrm{ How do I get to this step?}$$ $$\frac{(2x+3)(2x-5)}{2x-3}>0$$ $$(2x+3)(2x-5)(2x-3)>0 \textrm{ via multiply both sides by }(2x-3)^2$$
$$ \frac{12}{2x-3} - (1+2x) = \frac{12 - (1+2x)(2x-3) }{2x-3} = \frac{ 12 - (2x-3+4x^2-6x)}{2x-3} $$ $$= \frac{ 12 - (4x^2-4x-3)}{2x-3} = - \frac{4x^2-4x-15}{2x-3} = - \frac{(2x+3)(2x-5)}{2x-3} $$
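One can confirm both the factorization and the final solution set with a computer algebra system; a sketch using sympy:

```python
import sympy as sp

x = sp.symbols('x', real=True)

# The key step: 12 - (1 + 2x)(2x - 3) factors as -(2x + 3)(2x - 5).
print(sp.factor(12 - (1 + 2*x)*(2*x - 3)))

# The original inequality; the solution is (-3/2, 3/2) union (5/2, oo).
print(sp.solve_univariate_inequality(12/(2*x - 3) < 1 + 2*x, x))
```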
How can I prove this random process to be Standard Brownian Motion? $B_t,t\ge 0$ is a standard Brownian motion. Then define $X(t)=e^{t/2}B_{1-e^{-t}}$ and $Y_t=X_t-\frac{1}{2}\int_0^t X_u du$. The question is to show that $Y_t, t\ge 0$ is a standard Brownian motion. I tried to calculate the variance of $Y_t$ for given $t$, but failed to get $t$.
For every nonnegative $t$, let $Z_t=B_{1-\mathrm e^{-t}}=\displaystyle\int_0^{1-\mathrm e^{-t}}\mathrm dB_s$. Then $(Z_t)_{t\geqslant0}$ is a Brownian martingale and $\mathrm d\langle Z\rangle_t=\mathrm e^{-t}\mathrm dt$ hence there exists a Brownian motion $(\beta_t)_{t\geqslant0}$ starting from $\beta_0=0$ such that $Z_t=\displaystyle\int_0^t\mathrm e^{-s/2}\mathrm d\beta_s$ for every nonnegative $t$. In particular, $X_t=\displaystyle\mathrm e^{t/2}\int_0^t\mathrm e^{-s/2}\mathrm d\beta_s$ and $$ \int_0^tX_u\mathrm du=\int_0^t\mathrm e^{u/2}\int\limits_0^u\mathrm e^{-s/2}\mathrm d\beta_s\mathrm du=\int_0^t\mathrm e^{-s/2}\int_s^t\mathrm e^{u/2}\mathrm du\mathrm d\beta_s, $$ hence $$ \int_0^tX_u\mathrm du=\int_0^t\mathrm e^{-s/2}2(\mathrm e^{t/2}-\mathrm e^{s/2})\mathrm d\beta_s=2\mathrm e^{t/2}\int_0^t\mathrm e^{-s/2}\mathrm d\beta_s-2\beta_t=2X_t-2\beta_t. $$ This proves that $Y_t=X_t-\displaystyle\frac12\int\limits_0^tX_u\mathrm du=\beta_t$ and that $(Y_t)_{t\geqslant0}$ is a standard Brownian motion.
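A Monte Carlo sanity check of the conclusion (a sketch, not part of the proof; the horizon, grid, and path count are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
T, steps, paths = 2.0, 2_000, 20_000

t = np.linspace(0.0, T, steps + 1)
s = 1.0 - np.exp(-t)                   # the time change 1 - e^{-t}

# Brownian increments over the non-uniform grid s_0 < s_1 < ... < s_steps.
dB = rng.normal(0.0, np.sqrt(np.diff(s)), size=(paths, steps))
B = np.hstack([np.zeros((paths, 1)), np.cumsum(dB, axis=1)])

X = np.exp(t / 2) * B                  # X_t = e^{t/2} B_{1 - e^{-t}}
integral = np.trapz(X, t, axis=1)      # \int_0^T X_u du (trapezoid rule)
Y_T = X[:, -1] - 0.5 * integral

print(Y_T.mean())  # should be near 0
print(Y_T.var())   # should be near T = 2.0 if (Y_t) is standard BM
```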
2D transformation I have a math problem for some code I am writing. I don't have much experience with 2D transformations, but I am sure there must be a straightforward formula for my problem. My goal is to work out the co-ordinates of (Xp2, Yp2). Shape A is a quadrilateral that can exist anywhere in 2D space. Its four co-ordinates are known. It contains a point (Xp1, Yp1), which is also known. Shape B is a rectangle with one corner at (0,0). The height and width are variable, but known. Shape A needs to be mapped onto Shape B so that the new position of the point inside can be calculated. How do I work out the new co-ordinates of (Xp2, Yp2)? Cheers,
See my answer to "Tranforming 2D outline into 3D plane". The transforms and 4 point to 4 point mapping described there should be just what you need.
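In case the linked answer moves, here is a minimal numpy sketch of the standard approach: fit a projective transform (homography) to the four corner correspondences, then push the interior point through it. All names and the sample coordinates are mine:

```python
import numpy as np

def homography(src, dst):
    """Fit the 3x3 matrix H mapping 4 source points to 4 destination
    points (direct linear transform, normalizing H[2,2] = 1)."""
    A, rhs = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); rhs.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); rhs.append(v)
    h = np.linalg.solve(np.array(A, float), np.array(rhs, float))
    return np.append(h, 1.0).reshape(3, 3)

def apply(H, p):
    x, y, w = H @ np.array([p[0], p[1], 1.0])
    return x / w, y / w

# Corners of quadrilateral A and of the W x Hgt rectangle B, listed in
# corresponding (here counter-clockwise) order; made-up numbers:
quad = [(1.0, 1.5), (6.0, 0.5), (7.0, 5.0), (0.5, 4.5)]
W, Hgt = 4.0, 3.0
rect = [(0.0, 0.0), (W, 0.0), (W, Hgt), (0.0, Hgt)]

H = homography(quad, rect)
print(apply(H, (3.5, 2.5)))  # (Xp2, Yp2) for an interior point (Xp1, Yp1)
```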