For what functions $f(x)$ is $f(x)f(y)$ convex? For which functions $f\colon [0,1] \to [0,1]$ is the function $g(x,y)=f(x)f(y)$ convex over $(x,y) \in [0,1]\times [0,1]$? Is there a nice characterization of such functions $f$? The obvious examples are exponentials of the form $e^{ax+b}$ and their convex combinations. Anything else? EDIT: This is a simple observation summarizing the status of this question so far. The class of such functions $f$ includes all log-convex functions, and is included in the class of convex functions. So now, the question becomes: are there any functions $f$ that are not log-convex yet $g(x,y)=f(x)f(y)$ is convex? EDIT: Jonas Meyer observed that, by setting $x=y$, the determinant of the Hessian of $g(x,y)$ is nonnegative if and only if $f$ is log-convex. This resolves the problem for twice continuously differentiable $f$. Namely: if $f$ is $C^2$, then $g(x,y)$ is convex if and only if $f$ is log-convex.
Suppose $f$ is $C^2$. First of all, because $g$ is convex in each variable, it follows that $f$ is convex, and hence $f''\geq0$. I did not initially have Slowsolver's insight that log convexity would be a criterion to look for, but naïvely checking for positivity of the Hessian of $g$ leads to the inequalities $$f''(x)f(y)+f(x)f''(y)\geq0$$ and $$f'(x)^2f'(y)^2\leq f''(x)f(x)f''(y)f(y)$$ for all $x$ and $y$, coming from the fact that a real symmetric $2$-by-$2$ matrix is positive semidefinite if and only if its trace and determinant are nonnegative. The first inequality follows from nonnegativity of $f$ and $f''$. The second inequality is equivalent to $f'^2\leq f''f$. To see the equivalence in one direction, just set $x=y$ and take square roots; in the other direction, multiply the inequalities at $x$ and $y$. Since $\log(f)''=\frac{f''f-f'^2}{f^2}$, this condition is equivalent to $\log(f)''\geq0$, meaning that $\log(f)$ is convex.
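A quick sanity check of this criterion against the question's "obvious examples" (my own verification, not part of the original answer): for $f(x)=e^{ax+b}$ we get $f'(x)^2 = a^2 e^{2(ax+b)} = f''(x)f(x)$, so $f'^2\leq f''f$ holds with equality and $\log(f)''=0$. Correspondingly $g(x,y)=e^{a(x+y)+2b}$ has Hessian $a^2\, g(x,y)\begin{pmatrix}1 & 1\\ 1 & 1\end{pmatrix}$, which is positive semidefinite.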
{ "language": "en", "url": "https://math.stackexchange.com/questions/15707", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "17", "answer_count": 2, "answer_id": 0 }
Stochastic integral and Stieltjes integral My question is on the convergence of the Riemann sum when the value space consists of square-integrable random variables: the convergence depends on the evaluation point we choose, and I want to understand why this is the case. Here is some background to make this clearer. Suppose $f\colon \mathbb{R} \to \mathbb{R}$ is some continuous function on $[a,b]$; the Stieltjes integral of $f$ with respect to itself is $\int^{b}_{a} f(t)\,df(t)$. If we take a partition $ \Delta_n = \{t_0, t_1, \cdots, t_n \}$ of $[a,b]$, the Riemann sum is $$ L_{n} = \sum^{n}_{i=1} f(t_{i-1})(f(t_{i})-f(t_{i-1})). $$ Now if the limit exists, say $\lim \limits_{n\to\infty} L_{n}= A$, and if we instead choose the evaluation point $t_{i}$, then the sum $$ R_{n} = \sum^{n}_{i=1} f(t_{i})(f(t_{i})-f(t_{i-1})) $$ will also converge to $A$, so $$\lim_{n\to\infty}L_{n} = \lim_{n\to\infty}R_{n}.$$ Now we apply the same idea to a stochastic integral. Here $W(t)$ is a Wiener process and we wish to find $$\int^{b}_{a}W(t)\,dW(t),$$ with $$ L_{n} = \sum^{n}_{i=1} W(t_{i-1})(W(t_{i})-W(t_{i-1})), \qquad R_{n} = \sum^{n}_{i=1} W(t_{i})(W(t_{i})-W(t_{i-1})). $$ In the $L^2$ norm the limits of $L_{n}$ and $R_{n}$ both exist, but they are different: $$\lim_{n\to\infty} \Vert R_{n}-L_{n}\Vert = b-a. $$ Can someone explain why the limits are different, given that both limits exist? I would have expected $\lim_{n\to\infty} \Vert R_{n}-L_{n}\Vert = 0 $ in the $L^2$ norm.
First write $$ R_n - L_n = \sum\limits_{i = 1}^n {[W(t_i ) - W(t_{i - 1} )]^2 }, $$ then consider the quadratic variation of Brownian motion.
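A short expansion of this hint (standard facts about Brownian increments; not part of the original answer): the increments $\Delta W_i = W(t_i)-W(t_{i-1})$ are independent and $N(0,\Delta t_i)$-distributed, so $$\mathbb{E}\Big[\sum_{i=1}^n (\Delta W_i)^2\Big] = \sum_{i=1}^n \Delta t_i = b-a, \qquad \operatorname{Var}\Big[\sum_{i=1}^n (\Delta W_i)^2\Big] = \sum_{i=1}^n 2(\Delta t_i)^2 \le 2\max_i \Delta t_i\,(b-a) \to 0$$ as the mesh shrinks. Hence $R_n - L_n \to b-a$ in $L^2$: unlike a function of bounded variation, Brownian motion has nonvanishing quadratic variation, which is exactly why the two Riemann sums cannot share a limit.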
{ "language": "en", "url": "https://math.stackexchange.com/questions/15749", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 0 }
Simplicity of $A_n$ I have seen two proofs of the simplicity of $A_n,~ n \geq 5$ (Dummit & Foote, Hungerford). But, neither of them are such that they 'stick' to the head (at least my head). In a sense, I still do not have the feeling that I know why they are simple and why should it be 5 and not any other number (perhaps this is only because 3-cycles become conjugate in $A_n$ after $n$ becomes greater than 4). What is the most illuminating proof of the simplicity of $A_n,~ n \geq 5$ that you know?
The one I like most goes roughly as follows (my reference is in French [Daniel Perrin, Cours d'algèbre], but maybe it's the one in Jacobson's Basic Algebra I):

*You prove that $A(5)$ is simple by considering the cardinalities of the conjugacy classes and seeing that nothing can be a nontrivial normal subgroup (because no nontrivial union of conjugacy classes containing $\{id\}$ has cardinality dividing $60$). Actually, you don't have to know precisely the conjugacy classes in $A(5)$.

*Then, you consider a normal subgroup $N$ in $A(n)$, $n > 5$, which is strictly larger than $\{id\}$, and you prove (*) that it contains an element fixing at least $n-5$ points. The fact that $A(5)$ is simple then gives that $N$ contains every even permutation fixing the same $n-5$ points. In particular, it contains a 3-cycle, and therefore contains all of $A(n)$.

To prove (*), you consider a commutator $[x,y]$, where $x$ is nontrivial in your normal subgroup and $y$ is a 3-cycle: by the very definition, it is the product of the 3-cycle and the conjugate of its inverse. So it's the product of two 3-cycles and has at the very least $n-6$ fixed points. But it's easy to see that you can choose the 3-cycle so that the commutator has $n-5$ fixed points (it is enough that the two 3-cycles have overlapping supports). I like this proof because it keeps the "magical computation" part to a minimum; it simply amounts to the fact that you automatically have knowledge about a commutator if you have knowledge about one of its factors.
{ "language": "en", "url": "https://math.stackexchange.com/questions/15773", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "47", "answer_count": 5, "answer_id": 2 }
How does (wikipedia's axiomatization of) intuitionistic logic prove $p \rightarrow p$? I'm looking at Wikipedia's article on Intuitionist logic, and I can't figure out how it would prove $(p \rightarrow p)$. Does it prove $(p \rightarrow p)$? If yes, how? If no, is there a correct (or reasonable, if there is no "correct") axiomatization of intuitionistic logic available anywhere online?
1. $p→(p→p)$ (THEN-1)
2. $p→((p→p)→p)$ (THEN-1)
3. $(p→((p→p)→p))→((p→(p→p))→(p→p))$ (THEN-2)
4. $(p→(p→p))→(p→p)$ (MP from lines 2, 3)
5. $(p→p)$ (MP from lines 1, 4)
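As a sanity check, here is a small Python sketch (my own, not part of the original answer) that replays the derivation mechanically, representing a formula $A \to B$ as the tuple `('->', A, B)` and letting the `assert`s verify each modus ponens step:

```python
# Formulas: a propositional atom is a string; an implication A -> B
# is the tuple ('->', A, B).
def imp(a, b):
    return ('->', a, b)

def then1(a, b):
    # Axiom schema THEN-1: a -> (b -> a)
    return imp(a, imp(b, a))

def then2(a, b, c):
    # Axiom schema THEN-2: (a -> (b -> c)) -> ((a -> b) -> (a -> c))
    return imp(imp(a, imp(b, c)), imp(imp(a, b), imp(a, c)))

def mp(premise, implication):
    # Modus ponens: from A and A -> B, conclude B.
    op, a, b = implication
    assert op == '->' and a == premise, "modus ponens does not apply"
    return b

p = 'p'
l1 = then1(p, p)             # line 1: p -> (p -> p)
l2 = then1(p, imp(p, p))     # line 2: p -> ((p -> p) -> p)
l3 = then2(p, imp(p, p), p)  # line 3: the THEN-2 instance above
l4 = mp(l2, l3)              # line 4: (p -> (p -> p)) -> (p -> p)
l5 = mp(l1, l4)              # line 5: p -> p
assert l5 == imp(p, p)
print("derived:", l5)
```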
{ "language": "en", "url": "https://math.stackexchange.com/questions/15844", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
Is long division the most optimal "effective procedure" for doing division problems? (Motivation: I am going to be working with a high school student next week on long division, which is a subject I strongly dislike.) Consider: $\frac{1110}{56}=19\frac{46}{56}$. This is really a super easy problem, since once you realize $56\cdot 20=1120$ it's trivial to write out $1110=56\cdot 19+46$. You can work out the long division for yourself if you want; needless to say it makes an otherwise trivial problem into a tedious, multi-step process. Long division is an "effective procedure", in the sense that a Turing machine could do any division problem once it's given the instructions for the long division procedure. To put it another way, an effective procedure is one for which, given any problem of a specific type, I can apply this procedure systematically to this type of problem and always arrive at a correct solution. Here are my questions: 1) Are there other distinct effective procedures for doing division problems besides long division? 2) Is there a way to measure how efficient a given effective procedure is for doing division problems? 3) Does there exist an optimal effective procedure for division problems, in the sense that this procedure is the most efficient?
The OP asks Are there other distinct effective procedures for doing division problems besides long division? You can certainly tweak the long division algorithm to make it better. Here, we will describe one such 'tweak improvement', motivated by the OP's sample problem. Problem: Divide $1,110$ by $56$. $560 \times 1 = 560 \le 1,110 \; \checkmark \; \text{OVERSHOOT: } 560 \times 2 = 1,120 \text{, } 1,120 - 1,110 = 10$ The algorithm detects that the overshoot is less than or equal to $56$, so takes this path: $\frac {1,110}{560} = 2 - \frac{10}{560} \text{ iff }$ $\tag ! \frac {1,110}{56} = 20 - \frac{10}{56} = 19 + \frac{(56-10)}{56} = 19 + \frac{46}{56}$ To better understand this approach, please see this link; it describes a way of organizing your $\text{Base-}10$ long division work that is amenable to tweaks. For the purpose of comparison, we solve the problem without the above 'one off' enhancement: $560 \times 1 = 560 \le 1,110 \; \checkmark \; \text{OVERSHOOT: } 560 \times 2 = 1,120 \gt 1,110$ $\frac {1,110}{560} = 1 + \frac{550}{560} \text{ iff }$ $\tag 1 \frac {1,110}{56} = 10 + \frac{550}{56}$ $56 \times 9 = 504 \le 550 \; \checkmark \; \text{OVERSHOOT: } 560$ $\tag 2 \frac {550}{56} = 9 + \frac{46}{56}$ Combining (1) and (2), $\tag 3 \frac {1,110}{56} = 19 + \frac{46}{56}$ Here we would have to build the entire multiplication table $56 \times n \; | \; 1 \le n \le 9$. I'm fairly confident that an algorithm could be created that would be able to divide by $D$ with only a $D \times n \; | \; 1 \le n \le 5$ table. The table could be expanded as necessary, using addition to get us up to, if necessary, $D \times 5$.
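For reference, here is a minimal Python sketch (my own, not from the answer) of the plain digit-by-digit long-division procedure that the enhancement above is being compared against; `dividend` and `divisor` are assumed to be positive integers:

```python
def long_division(dividend, divisor):
    """Schoolbook long division: process the dividend digit by digit,
    keeping a running remainder, exactly as done on paper."""
    quotient_digits = []
    remainder = 0
    for digit in str(dividend):      # bring down one digit at a time
        remainder = remainder * 10 + int(digit)
        q = remainder // divisor     # largest q with q*divisor <= remainder
        quotient_digits.append(str(q))
        remainder -= q * divisor
    return int(''.join(quotient_digits)), remainder

print(long_division(1110, 56))  # (19, 46), i.e. 1110 = 56*19 + 46
```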
{ "language": "en", "url": "https://math.stackexchange.com/questions/15881", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14", "answer_count": 4, "answer_id": 2 }
Does the sum of reciprocals of primes converge? Is this series known to converge, and if so, what does it converge to (if known)? Where $p_n$ is prime number $n$, and $p_1 = 2$, $$\sum\limits_{n=1}^\infty \frac{1}{p_n}$$
No, it does not converge. See this: Proof of divergence of sum of reciprocals of primes. In fact it is known that $$\sum_{p \le x} \frac{1}{p} = \log \log x + A + \mathcal{O}(\frac{1}{\log^2 x})$$ Related: Proving $\sum\limits_{p \leq x} \frac{1}{\sqrt{p}} \geq \frac{1}{2}\log{x} -\log{\log{x}}$
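A quick numerical illustration of this asymptotic (my own sketch; the constant $A \approx 0.2615$ is the Meissel–Mertens constant):

```python
import math

def primes_up_to(n):
    """Simple sieve of Eratosthenes."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b'\x00\x00'
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i:n + 1:i] = bytearray(len(range(i * i, n + 1, i)))
    return [i for i, is_p in enumerate(sieve) if is_p]

x = 10 ** 6
s = sum(1.0 / p for p in primes_up_to(x))
print(s)                                   # ~ 2.887
print(math.log(math.log(x)) + 0.26149721)  # ~ 2.887, matching log log x + A
```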
{ "language": "en", "url": "https://math.stackexchange.com/questions/15946", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "23", "answer_count": 3, "answer_id": 1 }
Path for learning linear algebra and multivariable calculus I'll be finishing Calculus by Spivak somewhat soon, and want to continue into linear algebra and multivariable calculus afterwards. My current plan for learning the two subjects is just to read and work through Apostol volume II; is this a good idea, or would it be better to get a dedicated Linear Algebra book? Are there better books for multivariable calculus? (I don't think I want to jump right into Calculus on manifolds.) EDIT: I'd like to add, as part of my question, something I mentioned in a comment below. Namely, is it useful to learn multivariable calculus without differential forms and the general results on manifolds before reading something like Calculus on Manifolds or Analysis on Manifolds? That is, do I need to learn vector calculus as it is taught in a second semester undergraduate course before approaching differential forms?
I LOVED Advanced Calculus: A Differential Forms Approach by Edwards. Great book. It's great for moving to the next level in geometry.
{ "language": "en", "url": "https://math.stackexchange.com/questions/16044", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14", "answer_count": 4, "answer_id": 3 }
Is $\ p_n^{\pi(n)} < 4^n$ where $p_n$ is the largest prime $\leq n$? Here $\pi(n)$ is the prime counting function. Using the PNT it seems that asymptotically $\ p_n^{\pi(n)} \leq x^n$ for any $x \geq e$.
Yes. Asymptotically you have $$(p_n)^{\pi(n)} \leq n^{\pi(n)} = e^{\pi(n)\log n} = e^{(1+o(1))\,n},$$ since $p_n \leq n$ and, by the prime number theorem, $\pi(n)\log n \sim n$. As $e < 4$, this is eventually less than $4^n$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/16085", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 2, "answer_id": 1 }
How do I write a log likelihood function when I have 2 mean values for my pdf? I have been given the following pdf: $$f_T(t; B, C) = \frac{e^{-t/C} - e^{-t/B}}{C - B}, \qquad t>0,$$ where the overall mean is $B+C$. I am unsure as to how to write the log likelihood function of $B$ and $C$. The next part of the question asks me to derive the equations that would have to be solved in order to find the maximum likelihood estimators of $B$ and $C$. I would be grateful for any help =)
By definition, the log-likelihood is given by $$ \ln \mathcal{L}(B,C|x_1 , \ldots ,x_n ) = \sum\limits_{i = 1}^n {\ln f(x_i|B,C)}. $$ Thus, in our example, $$ \ln \mathcal{L}(B,C|x_1 , \ldots ,x_n ) = \sum\limits_{i = 1}^n {\ln \bigg[\frac{{e^{ - x_i /C} - e^{ - x_i /B} }}{{C - B}}\bigg]} . $$ EDIT: In view of the next part of the question, it may be useful to write $$ \ln \mathcal{L}(B,C|x_1 , \ldots ,x_n ) = \sum\limits_{i = 1}^n {\ln [e^{ - x_i /C} - e^{ - x_i /B} ]} - n\ln (C - B). $$
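To actually find the maximizers one typically solves these equations numerically. Here is a minimal sketch (my own illustration, with simulated data and made-up parameter values, not part of the original answer) using `scipy`; it relies on the fact that this pdf is that of a sum of two independent exponentials with means $B$ and $C$:

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_likelihood(params, x):
    B, C = params
    if B <= 0 or C <= 0 or B == C:
        return np.inf                    # outside the parameter space
    # pdf of the sum of two independent exponentials with means B and C
    dens = (np.exp(-x / C) - np.exp(-x / B)) / (C - B)
    if np.any(dens <= 0):
        return np.inf
    return -np.sum(np.log(dens))

rng = np.random.default_rng(0)
x = rng.exponential(2.0, 500) + rng.exponential(5.0, 500)  # true B=2, C=5

res = minimize(neg_log_likelihood, x0=[1.0, 6.0], args=(x,),
               method='Nelder-Mead')
print(res.x)  # should come out near (2, 5), up to the B <-> C symmetry
```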
{ "language": "en", "url": "https://math.stackexchange.com/questions/16156", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Replacing $\text{Expression}<\epsilon$ by $\text{Expression} \leq \epsilon$ As an exercise I was doing a proof about equicontinuity of a certain function. I noticed that I am always choosing the limits in a way that I finally get: $\text{Expression} < \epsilon$ However it wouldn't hurt showing that $\text{Expression} \leq \epsilon$ would it, since $\epsilon$ is getting infinitesimally small? I have been doing this type of $\epsilon$ proofs quite some time now, but never asked myself that question. Am I allowed to write $\text{Expression} \leq \epsilon$? If so, when?
Suppose for every $\epsilon >0$, there is an $N$ such that $n>N$ implies $x_n\leq\epsilon$. Let $k>0$. Then $\frac{k}{2}>0$, so there is an $M$ such that $n>M$ implies $x_n\leq\frac{k}{2}$ which implies that $x_n<k$. This corresponds to the original definition of continuity, doesn't it?
{ "language": "en", "url": "https://math.stackexchange.com/questions/16301", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 0 }
What is the point of logarithms? How are they used? Why do you need logarithms? In what situations do you use them?
In the geometric view of real numbers there are two basic forms of "movements", namely (a) shifts: each point $x\in{\mathbb R}$ is shifted a given amount $a$ to the right and (b) scalings: all distances between points are enlarged by the same factor $b>0$. In some instances (e.g. sizes of adults) the first notion is appropriate for comparison of different sizes, in other instances (e.g. distances between various celestial objects) the second notion. The logarithm provides a natural means to transform one view into the other: The sum of two shifts corresponds to the composition of two scalings.
{ "language": "en", "url": "https://math.stackexchange.com/questions/16342", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "30", "answer_count": 5, "answer_id": 3 }
Finding Concavity in Curves Suppose $$\frac{d^2y}{dx^2} = \frac{e^{-t}}{1-e^t}.$$ After finding the second derivative, how do I find concavity? It's much easier to solve for $t$ in a problem like $t(2-t) = 0$, but in this case, solving for $t$ seems more difficult. Does it have something to do with $e$? $e$ never becomes $0$, but at what point is the curve upward or downward?
From the $t$ on the RHS I assume that this comes from a pair of parametric equations, where you were given $x(t)$ and $y(t)$ and got the second derivative by calculating $dm/dx = m'(t)/x'(t)$, where $m(t) = dy/dx = y'(t)/x'(t)$. In any case, you are right that the curve will be concave up at points where the second derivative is positive and concave down where it is negative. You are also right that the exponentials are never zero: $e$ itself is just a specific positive constant, and so raising it to any (positive or negative) power always gives a positive result. So the numerator $e^{-t}$ in your expression is never zero, and the only way the whole thing can change sign is when the denominator $1-e^t$ does. This happens when $e^t=1$, which means that $t$ must be zero. Since $e^t$ is an increasing function of $t$, it must be bigger than $1$ for $t>0$, and so at those points the second derivative is negative and the curve is concave down. But for the points with negative $t$, $e^t$ is less than $1$ (e.g. $e^{-2}=1/e^2$), so at these points the second derivative is positive and the curve is concave up.
{ "language": "en", "url": "https://math.stackexchange.com/questions/16389", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Sum of series $2^{2m}$ How to sum $2^{2m}$ where $m$ varies from $0$ to $n$?
Note that $2^{2m}=4^m$, so $$\displaystyle\sum_{m=0}^{n} 2^{2m} = \displaystyle\sum_{m=0}^{n} 4^{m}.$$ The last sum is easy (it is geometric with ratio $r=4$), so $$\sum_{m=0}^{n} 2^{2m} = \sum_{m=0}^{n} 4^{m} = \frac{4-4^{n+1}}{1-4}+1=\frac{4^{n+1}-4}{3}+1=\frac{4^{n+1}-1}3.$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/16452", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Why do books titled "Abstract Algebra" mostly deal with groups/rings/fields? As a computer science graduate who had only a basic course in abstract algebra, I want to study some abstract algebra in my free time. I've been looking through some books on the topic, and most seem to 'only' cover groups, rings and fields. Why is this the case? It seems to me you'd want to study simpler structures like semigroups, too. Especially looking at Wikipedia, there seems to be a huge zoo of different kinds of semigroups.
Historically, the first "modern algebra" textbook was van der Waerden's in 1930, which followed the groups/rings/fields model (in that order). As far as I know, the first paper with nontrivial results on semigroups was published in 1928, and the first textbook on semigroups would have to wait until the 1960s. There is also a slight problem with the notion of "simpler". It is true that semigroups have fewer axioms than groups, and as such should be more "ubiquitous". However, the theory of semigroups is also in some sense "more complex" than the theory of groups, just as the theory of noncommutative rings is harder than that of commutative rings (even though commutative rings are "more complex" than rings because they have an extra axiom) and the structure theory of fields is simpler than that of rings (fewer ideals, for one thing). Groups have the advantage of being a good balance point between simplicity of structure and the ability to obtain a lot of nontrivial and powerful results with relatively few prerequisites: most 1-semester courses, even at the undergraduate level, will likely reach the Sylow theorems, a cornerstone of finite group theory. Semigroups require a lot more machinery even to state the isomorphism theorems (you need the notion of congruences).
{ "language": "en", "url": "https://math.stackexchange.com/questions/16546", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "79", "answer_count": 8, "answer_id": 3 }
Example of a ring with $x^3=x$ for all $x$ A ring $R$ is a Boolean ring if $x^2=x$ for all $x\in R$. By Stone representation theorem a Boolean ring is isomorphic to a subring of the ring of power set of some set. My question is what is an example of a ring $R$ with $x^3=x$ for all $x\in R$ that is not a Boolean ring? (Obviously every Boolean ring satisfies this condition.)
You can always pick a set $X$, consider the free $\mathbb Z$-algebra $A=\mathbb Z\langle X\rangle$, and divide by the ideal generated by all the elements $x^3-x$ for $x\in A$ to get a ring $B_X$, no? (Since the quotient is going to be commutative, you can also start from the polynomial ring...) To show that the resulting quotient is non-trivial, consider the many maps $B_X\to\mathbb Z_3$, which you get from functions $X\to\mathbb Z_3$. Indeed, for every example of a ring $R$ satisfying the condition, you can pick an $X$ and a surjective map $B_X\to R$. So these examples are universally complicated :) Later: Notice that $\mathbb Z_6$ is an example of a ring satisfying the identity, so $B_X$ is not a product of copies of $\mathbb Z_3$.
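To spell out why $\mathbb Z_6$ satisfies the identity (a standard verification, not in the original answer): by the Chinese remainder theorem $\mathbb Z_6 \cong \mathbb Z_2 \times \mathbb Z_3$; in $\mathbb Z_2$ every element satisfies $x^2=x$ and hence $x^3=x$, while in $\mathbb Z_3$ Fermat's little theorem gives $x^3=x$. So $x^3=x$ holds in the product, yet $\mathbb Z_6$ is not Boolean, since $2^2=4\neq 2$.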
{ "language": "en", "url": "https://math.stackexchange.com/questions/16597", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 1 }
A converse of sorts to the intermediate value theorem, with an additional property I need to solve the following problem: Suppose $f$ has the intermediate value property, i.e. if $f(a)<c<f(b)$, then there exists a value $d$ between $a$ and $b$ for which $f(d)=c$, and also has the additional property that $f^{-1}(a)$ is closed for every $a$ in a dense subset of $\mathbb{R}$, then $f$ is continuous. I can see plenty of counterexamples when the second property is not added, but I can't seem to bridge the gap between adding the property and proving $f$ is continuous. I can't get there either directly or by contradiction, because the additional property doesn't seem directly relevant to the property of continuity, so could anyone please tell me how to go about doing this? Thanks!
As noted in the comments, this is a slightly more general version of a problem in Rudin. I assume for simplicity that the dense set is $\mathbb{Q}$. Solution: Fix $x_0\in\mathbb{R}$, and fix a sequence $\{x_n\}$ converging to $x_0$. By the sequential characterization of continuity, it suffices to show that $f(x_n)\rightarrow f(x_0)$. Suppose not. Then an infinite number of the $f(x_n)$ are not equal to $f(x_0)$, and without loss of generality, we can assume there are infinitely many $n$ so that $f(x_n)>f(x_0)$. Passing to a subsequence, we can assume this is true for all $n$. Because the sequence $\{f(x_n)\}$ does not converge to $f(x_0)$, there exists $r\in\mathbb{Q}$ with $f(x_n)>r>f(x_0)$ for all $n$. By the intermediate value property, for every $n$ there exists $t_n$ between $x_n$ and $x_0$ with $f(t_n)=r$. By the squeeze principle, $t_n\rightarrow x_0$. But the set of all $t$ with $f(t)=r$ is closed, so because $x_0$ is a limit point, $f(x_0)=r$, a contradiction. Source: W. Rudin, Principles of Mathematical Analysis, Chapter 4, exercise 19.
{ "language": "en", "url": "https://math.stackexchange.com/questions/16638", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
What is $\frac{d}{dx}\left(\frac{dx}{dt}\right)$? This question was inspired by the Lagrange equation, $\frac{\partial L}{\partial q} - \frac{d}{dt}\frac{\partial L}{\partial \dot{q}} = 0$. What happens if the partial derivatives are replaced by total derivatives, leading to a situation where a function's derivative with respect to one variable is differentiated by the original function?
Write $$\frac{d}{dx}\left(\frac{dx}{dt}\right)=\frac{d}{dx}\left(\frac{1}{\frac{dt}{dx}}\right)$$ and use the chain rule. $$\frac{d}{dx}\left(\frac{dx}{dt}\right)=\left(\frac{-1}{(\frac{dt}{dx})^2}\right)\frac{d^2t}{dx^2}=-\left (\frac{dx}{dt}\right )^2\frac{d^2t}{dx^2}$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/16709", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 3, "answer_id": 0 }
A divisor on a smooth curve such that $\Omega$ minus it has no nonzero sections Let $X$ be a smooth projective curve of genus $g$. Let $\Omega$ be the sheaf of differentials. Mumford (in Abelian Varieties, sec. 2.6, in proving the theorem of the cube) asserts that there is an effective divisor of degree $g$ on $X$ such that $H^0(X, \Omega \otimes L(-D))=0$. This should be easy, but I'm missing the argument. Could someone explain?
Let $\mathcal L$ be an invertible sheaf on the curve $X$, and suppose that $H^0(X,\mathcal L)$ has dimension $d$. (In the case when $\mathcal L = \Omega$, we have $d = g$.) If we choose a closed point $x \in X$, then there is a map $H^0(X,\mathcal L) \to \mathcal L_x/\mathfrak m_x \mathcal L_x$ given by mapping a section to its fibre at $x$. To make my life easier, let's assume that $k$ (the field over which $X$ is defined) is algebraically closed, so that $x$ is defined over $k$, and we may identify $\mathcal L_x/\mathfrak m_x\mathcal L_x$ with $k$ (since it is one-dimensional, being the fibre of an invertible sheaf). Evaluation is then a functional $H^0(X,\mathcal L) \to k.$ Now this functional will be identically zero if and only if every section vanishes at $x$. But a non-zero section has only finitely many zeroes, and $X$ has an infinite number of closed points, so we may certainly choose $x$ so that this functional is not identically zero. The evaluation map sits in an exact sequence $$0 \to H^0(X,L(-x)) \to H^0(X,L) \to k,$$ and so if we choose $x$ such that evaluation is surjective, we find that $H^0(X,L(-x))$ has dimension $d - 1$. Proceeding by induction, we find points $x_1,\ldots,x_d$ such that $H^0(X,L(-x_1-\cdots - x_d)) = 0$. In summary: we have shown that we may find an effective degree $d$ divisor $D$ such that $H^0(X,L(-D))$ vanishes. (And in fact, looking at the proof, we see that this vanishing will hold for a generic choice of $D$.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/16757", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
An approximation of an integral Is there any good way to approximate the following integral? $$\int_0^{0.5}\frac{x^2}{\sqrt{2\pi}\sigma}\cdot \exp\left(-\frac{(x^2-\mu)^2}{2\sigma^2}\right)\mathrm dx$$ $\mu$ is between $0$ and $0.25$; the problem is in $\sigma$, which is always positive but can be arbitrarily small. I was trying to expand it using Taylor series, but the terms look more or less like this: $\pm a_n\cdot\frac{x^{2n+3}}{\sigma^{2n}}$, and that can be arbitrarily large, so the error is significant.
If you write $y=x^2$ and pull the constants out you have $$\frac{1}{2\sqrt{2\pi}\sigma}\int_0^{0.25}\sqrt{y}\cdot \exp\left(-\frac{(y-\mu )^2}{2\sigma ^2}\right)dy.$$ If $\sigma$ is very small, the contribution will all come from a small region in $y$ around $\mu$. So you can set $\sqrt{y}=\sqrt{\mu}$ and use your error function tables for a close approximation. A quick search didn't turn up moments of $\sqrt{y}$ against the normal distribution, but maybe they are out there.
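A quick numerical sanity check of this approximation (my own sketch, using `scipy`; the parameter values are arbitrary choices):

```python
import numpy as np
from scipy import integrate, stats

mu, sigma = 0.1, 0.01

# Exact value of the original integral over x in [0, 0.5]
f = lambda x: x**2 / (np.sqrt(2 * np.pi) * sigma) \
              * np.exp(-(x**2 - mu)**2 / (2 * sigma**2))
exact, _ = integrate.quad(f, 0, 0.5)

# Approximation: freeze sqrt(y) at sqrt(mu), leaving a Gaussian integral in y
approx = 0.5 * np.sqrt(mu) * (stats.norm.cdf((0.25 - mu) / sigma)
                              - stats.norm.cdf((0.0 - mu) / sigma))
print(exact, approx)  # should agree closely for small sigma
```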
{ "language": "en", "url": "https://math.stackexchange.com/questions/16797", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 4, "answer_id": 0 }
Proving $A \subset B \Rightarrow B' \subset A'$ Suppose that A is a subset of B. How can we show that B-complement is a subset of A-complement?
Instead of using a proof by contradiction: $b\in B^c$ means $b \notin B$; since $A \subset B$, this gives $b \notin A$, hence $b \in A^c$, which implies the result. The key here being that for all $x$, either $x\in X$ or $x\in X^c$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/16856", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Got to learn MATLAB I have a circuits and signals course where I have been asked to learn MATLAB all by myself, and it's treated as a basic necessity. I wanted some help as to where I should start, as I have less than a month before my practical sessions begin. Should I go with video lectures or e-books, and which should I prefer over the other?
For the elementary syntax, you may look up introduction videos on YouTube, and start from simple tasks in this MIT introduction - problem set. Visiting the Mathworks website is a good idea; it's got very good explanatory videos. And don't forget the two most important commands in MATLAB: "help" and "doc". MATLAB has very good documentation with examples. EDIT: The MIT link is no longer available; you may want to copy and paste "MIT Introduction - problem set" into YouTube. I believe the intended video series to view is below: MIT 6.00 Introduction to Computer Science and Programming, 2008.
{ "language": "en", "url": "https://math.stackexchange.com/questions/16907", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 1 }
Proving there is no natural number which is both even and odd I've run into a small problem while working through Enderton's Elements of Set Theory. I'm doing the following problem: Call a natural number even if it has the form $2\cdot m$ for some $m$. Call it odd if it has the form $(2\cdot p)+1$ for some $p$. Show that each natural number is either even or odd, but never both. I've shown most of this, and along the way I've derived many of the results found in Arturo Magidin's great post on addition, so any of the theorems there may be used. It is the 'never both' part with which I'm having trouble. This is some of what I have: Let $$ B=\{n\in\omega\ |\neg(\exists m(n=2\cdot m)\wedge\exists p(n=2\cdot p+1))\}, $$ the set of all natural numbers that are not both even and odd. Since $2\cdot 0=0$, $0$ is even. Also $0$ is not odd, for if $0=2\cdot p+1$, then $0=(2\cdot p)^+=\sigma(2\cdot p)$, but then $0\in\text{ran}\ \sigma$, contrary to the first Peano postulate. Hence $0\in B$. Suppose $k\in B$. Suppose $k$ is odd but not even, so $k=2\cdot p+1$ for some $p$. Earlier work of mine shows that $k^+$ is even. However, $k^+$ is not odd, for if $k^+=2\cdot m+1$ for some $m$, then since the successor function $\sigma$ is injective, we have $$ k^+=2\cdot m+1=(2\cdot m)^+\implies k=2\cdot m $$ contrary to the fact that $k$ is not even. Now suppose $k$ is even, but not odd. I have been able to show that $k^+$ is odd, but I can't figure a way to show that $k^+$ is not even. I suppose it must be simple, but I'm just not seeing it. Could someone explain this little part? Thank you.
Here is a complete proof. We assume that $n^+=m^+$ implies $n=m$ (*) and say that $n$ is even if it is $m+m$ for some natural number $m$ and odd if it is $(m+m)^+$ for some natural number $m$. We know that $\phi=\phi+\phi$ so $\phi$ is even and $\phi\neq p^+$ for any $p$ and so is therefore not odd. Now assume that $k\neq\phi$ and $k$ is never both odd and even. Now consider $k^+.$ Suppose $k^+$ is odd. Then $k^+=(n+n)^+$ for some $n$. So by (*), $k=n+n$, so $k$ is even. Suppose $k^+$ is even. Then $k^+=(m+m)$ for some $m$. We know $m\neq\phi$ as otherwise $k^+=\phi$, so $m=p^+$ for some $p$. Hence $k^+=p^++p^+=(p^++p)^+$. So by (*), $k=p+p^+=(p+p)^+$, which is odd. Hence if $k^+$ is both even and odd then $k$ is both odd and even, a contradiction to our induction hypothesis. Hence $k^+$ is never both even and odd, and we have the result by induction.
{ "language": "en", "url": "https://math.stackexchange.com/questions/16947", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 4, "answer_id": 1 }
$A,B \subset (X,d)$ and $A$ is open dense subset, $B$ is dense then is $A \cap B$ dense? I am trying to solve this problem, and I think I made some progress, but I couldn't reach the conclusion. The question is: *Let $(X,d)$ be a metric space and let $A,B \subset X$. If $A$ is an open dense subset, and $B$ is a dense subset, then is $A \cap B$ dense in $X$? Well, I think this is true. We have to show that $\overline{A \cap B}=X$, or in other words, that $B(x,r) \cap (A \cap B) \neq \emptyset$ for any $x \in X$ and $r>0$. Since $A$ is dense in $X$, we have $B(x,r) \cap A \neq\emptyset$. That means there is a $y \in B(x,r) \cap A$. And since $A$ is open we have $B(y,r_{1}) \subseteq A$ for some $r_{1} > 0$. I couldn't get any further; I did try some more from here on, but couldn't get it. Any idea for proving it, or a counterexample?
Let $(X,\tau)$ be a topological space. Let $A$ and $B$ be two dense subsets of $(X,\tau)$ with $A$ an open set. Let $G\in \tau$ be nonempty. Then since $A$ is dense in $(X,\tau)$, $G\cap A\neq\emptyset$. Now since $B$ is dense in $(X,\tau)$ and $G\cap A$ is a nonempty open set, $(G\cap A)\cap B\neq \emptyset$. Hence $A\cap B$ is a dense set in $(X,\tau)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/16975", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Finite Model Property on the First-Order Theory of Two Equivalence Relations I know that there is a first-order sentence $\varphi$ such that * *$\varphi$ is written in the vocabulary given by just two binary relation symbols $E_1$, $E_2$ (and hence, without the equality symbol*), *$\varphi$ is satisfiable in a model where $E_1$ and $E_2$ are equivalence relations, *$\varphi$ is not satisfiable in a finite model where $E_1$ and $E_2$ are equivalence relations. This has to be true because the first-order theory of two equivalence relations is undecidable (see the paper A. Janiczak, Undecidability of some simple formalized theories, Fundamenta Mathematicae, 1953, 40, 131--139). However, I am not able to find any such formula. Does anybody know an example of a formula (or a family) with these three properties? Footnote *: The assumption of not allowing the equality symbol is not important. Indeed, if we know a formula with equality where the last two previous properties hold, then the formula obtained replacing all subformulas $x \approx y$ with the formula $E_1(x,y) \land E_2(x,y)$ also satisfies these two properties.
Let's call two elements $x$ and $y$ $E_1$-neighbors if $(x \neq y \wedge E_1(x, y))$, and $E_2$-neighbors if $(x \neq y \wedge E_2(x, y))$. Then the following assertions should suffice to force an infinite model:

*Every element has exactly one $E_1$-neighbor.

*There exists an element with no $E_2$-neighbor; every other element has exactly one $E_2$-neighbor.

In particular, the single element with no $E_2$-neighbor can be identified with $0$; and the remaining natural numbers are generated by alternating conditions 1) and 2).
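For concreteness, here is one way to transcribe the two assertions as a single first-order sentence (my own transcription, not from the original answer; $N_i(x,y)$ abbreviates $x \neq y \wedge E_i(x,y)$, and $\exists!$ abbreviates the usual expansion with equality, which can be eliminated as in the footnote):
$$\varphi \;=\; \forall x\,\exists! y\, N_1(x,y) \;\wedge\; \exists x\,\forall y\,\neg N_2(x,y) \;\wedge\; \forall x\,\forall x'\bigl(\forall y\,\neg N_2(x,y)\wedge\forall y\,\neg N_2(x',y)\rightarrow x=x'\bigr) \;\wedge\; \forall x\bigl(\exists y\,N_2(x,y)\rightarrow \exists! y\,N_2(x,y)\bigr).$$
The second and third conjuncts say that exactly one element lacks an $E_2$-neighbor, and the last says that every element with an $E_2$-neighbor has exactly one.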
{ "language": "en", "url": "https://math.stackexchange.com/questions/17032", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Different ways to represent functions other than Laurent and Fourier series? In the book "A Course of Modern Analysis", examples of expanding functions in terms of inverse factorials were given; I am not sure what subject that would come under in today's math. But besides the following: power series (Taylor series, Laurent series), expansions in terms of theta functions, expanding a function in terms of another function (powers of, inverse factorials, etc.), Fourier series, infinite products (complex analysis), and partial fractions (Eisenstein series) — what other ways of representing functions have been studied? Is there a comprehensive list of representations of functions and the motivation behind each method? For example, power series are relatively easy to work with and to establish the domain of convergence for, e.g. for $\sin, e^x, \text{etc.}$, but the infinite product representation makes it trivial to see all the zeroes of $\sin, \cos, \text{etc.}$ Also, if anyone can point out the subject that they are studied under, that would be great. Thank you
You can add the following to your list of representations:

*Continued fractions. See Jones & Thron or Lorentzen's books.

*Integral representations (Mellin–Barnes, etc). See the ECM entry.

*Exponentials. See Knoebel's paper Exponentials reiterated.

*Nested radicals. See Schuske & Thron's Infinite radicals in the complex plane.

*Rational or polynomial approximation (e.g. Padé approximants).

I guess the general topic is studied under Complex Analysis, Asymptotic Analysis, Harmonic Analysis. AFAIK, there is no single book which covers all representations.
{ "language": "en", "url": "https://math.stackexchange.com/questions/17089", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 0 }
Torsion module over PID Suppose $p$ is irreducible and $M$ is a torsion module over a PID $R$ that can be written as a direct sum of cyclic submodules with annihilators $(p^{a_1}), \ldots, (p^{a_s})$ where $p^{a_i}\mid p^{a_{i+1}}$. Let now $N$ be a submodule of $M$. How can I prove that $N$ can be written as a direct sum of cyclic modules with annihilators $(p^{b_1}), \ldots, (p^{b_t})$, $t\leq s$, and $p^{b_i}\mid p^{a_{s-t+i}}$? I've already shown that $t\leq s$ by considering the epimorphism from a free module to $M$ and from its submodule to $N$.
Just look at the sizes of the subgroups annihilated by $p^k$ for various $k$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/17132", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 2 }
Apparent inconsistency between integral table and integration using trigonometric identity According to my textbook: $$\int_{-L}^{L} \cos\frac{n \pi x}{L} \cos\frac{m \pi x}{L} dx = \begin{cases} 0 & \mbox{if } n \neq m \\ L & \mbox{if } n = m \neq 0 \\ 2L& \mbox{if } n = m = 0 \end{cases} $$ According to the trig identity given on this cheat sheet: $$ \cos{\alpha}\cos{\beta} = \frac{1}{2}\left [ \cos \left (\alpha -\beta \right ) + \cos \left(\alpha +\beta \right ) \right ] $$ Substituting this trig identity in and integrating from $-L \mbox{ to } L$ gives: $$\int_{-L}^{L} \cos\frac{n \pi x}{L} \cos\frac{m \pi x}{L} dx = \frac{L}{\pi} \left [\frac{\sin \left ( \pi (n - m) \right )}{n - m} + \frac{\sin \left ( \pi (n+m) \right )}{n + m}\right ] $$ Evaluating the right side at $n = m$ gives a zero denominator, making the whole expression undefined. Evaluating the right hand side at $n \neq m$ gives $0$ because the sine function is always $0$ for all integer multiples of $\pi$ as can be clearly seen with the unit circle. None of these results jive with the first equation. Could you explain what mistakes I am making with my thinking?
The integration is wrong if $n=m$ or if $n=-m$, because it is false that $\int \cos(0\pi x)\,dx = \frac{\sin(0\pi x)}{0}+C$. So the very use of the formula assumes that $|n|\neq|m|$. But if $|n|\neq|m|$, then your formula does say that the integral should be $0$, the same thing you get after the substitution. What makes you say that it "does not jive"?
{ "language": "en", "url": "https://math.stackexchange.com/questions/17166", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
How many possible combinations are there in Hua Rong Dao? Hua Rong Dao is a Chinese sliding puzzle game, also called Daughter in a Box in Japan. You can see a picture here and an explanation here. The puzzle is a $4 \times 5$ grid with these pieces:

*$2 \times 2$ square ($1$ piece)

*$1\times 2$ vertical ($4$ pieces)

*$2 \times 1$ horizontal ($1$ piece)

*$1 \times 1$ square ($4$ pieces)

Though traditionally each type of piece will have different pictures, you can treat each of the $1\times 2$'s as identical and each of the $1\times 1$'s as identical. The goal is to slide around the pieces (not removing them) until the $2 \times 2$ "general" goes from the middle top to the middle bottom (where it may slide out of the border). I'm not concerned in this question with the solution, but more curious about the number of combinations. Naively, I can come up with an upper bound like this. Place each piece on the board, ignoring overlaps: the $2\times2$ can go in any of $3 \cdot 4 = 12$ positions; each $1\times2$ vertical can go in any of $4 \cdot 4 = 16$ positions; the $2\times1$ horizontal can go in any of $3 \cdot 5 = 15$ positions; the $1 \times 1$ can go in any of $4\cdot 5 = 20$ squares. If you place the pieces one at a time, subtracting out the used squares: place the $2 \times 2$: $12$ options; place each of the four $1\times2$ verticals: $\dfrac{(16 - 4) (16 - 6) (16 - 8) (16 - 10)}{ 4!}$ options; place the $2\times1$ horizontal: $15 - 6$ options; place the $1 \times 1$'s: ${ {20-14} \choose 4} = 15$ options. Multiplied together this works out to $388,800$. Is there any way I might be able to narrow this down further? The two obvious things not taken into account are blocked pieces (a $1 \times 2$ piece will not fit into two separated squares) and the fact that not all possibilities might be accessible when sliding from the starting position. Update: I realized that the puzzle is bilaterally symmetrical, so if you just care about meaningful differences between positions, you can divide by two.
A straightforward search yields the figure of $4392$ for all but the $1 \times 1$ stones. The former fill $14$ out of $20$ squares, so there are $\binom{6}{4} = 15$ possibilities to place the latter. In total, we get $$4392 \times 15 = 65880.$$ These can all be generated, and one can in principle calculate the number of connected components in the resulting graph, where the edges correspond to movements of pieces. Edit: There are 898 different connected components. There are 25955 configurations reachable from the initial state.
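A minimal Python sketch of the placement count (my own reconstruction of the "straightforward search", not part of the original answer; the board is taken as 4 columns by 5 rows):

```python
from itertools import combinations, product

COLS, ROWS = 4, 5

def cells(c, r, w, h):
    """Set of (col,row) cells covered by a w-wide, h-tall piece at (c,r)."""
    return frozenset((c + dc, r + dr) for dc in range(w) for dr in range(h))

def placements(w, h):
    return [cells(c, r, w, h)
            for c in range(COLS - w + 1) for r in range(ROWS - h + 1)]

big   = placements(2, 2)   # 12 placements of the 2x2 general
horiz = placements(2, 1)   # 15 placements of the single horizontal piece
vert  = placements(1, 2)   # 16 placements of each vertical piece

count = 0
for b, h1 in product(big, horiz):
    if b & h1:
        continue
    used = b | h1
    free_vert = [v for v in vert if not (v & used)]
    # choose 4 pairwise-disjoint vertical pieces (they are identical)
    for quad in combinations(free_vert, 4):
        total, ok = used, True
        for v in quad:
            if v & total:
                ok = False
                break
            total |= v
        if ok:
            count += 1

print(count)  # should reproduce the 4392 configurations quoted above
```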
{ "language": "en", "url": "https://math.stackexchange.com/questions/17236", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
If $4^x + 4^{-x} = 34$, then $2^x + 2^{-x}$ is equal to...? I am having trouble with this: If $4^x + 4^{-x} = 34$, then what is $2^x + 2^{-x}$ equal to? I managed to find $4^x$ and it is: $$4^x = 17 \pm 12\sqrt{2}$$ so that means that $2^x$ is: $$2^x = \pm \sqrt{17 \pm 12\sqrt{2}}.$$ Correct answer is 6 and I am not getting it :(. What am I doing wrong?
You haven't done anything wrong! To complete your answer, one way you can see the answer is $6$ is to guess that $$17 + 12 \sqrt{2} = (a + b\sqrt{2})^2$$ Giving us $$17 = a^2 + 2b^2, \ \ ab = 6$$ Giving us $$a = 3, \ \ b = 2$$ Thus $$ \sqrt{17 + 12 \sqrt{2}} = 3 + 2\sqrt{2}$$ which gives $$2^x + 2^{-x} = 6$$ (And similarly for $17 - 12\sqrt{2}$) A simpler way is to notice that $(2^x + 2^{-x})^2 = 4^x + 4^{-x} + 2$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/17291", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Frequency Swept sine wave -- chirp I am experiencing what I think is really simple confusion. Take $y(t) = \sin(2 \pi t\,\omega(t))$ and $\omega(t) = a t+b$ for $t \in [0,p)$, and let $\omega(t)$ have a periodic extension with period $p$. The values $a,b,p$ are parameters. So $\omega(t)$ is a periodically repeating ramp [the original post showed a plot]. With $\omega$ as such, $y(t)$ should be two chirps -- a sine wave whose frequency sweeps. My intuition is that the two chirps should be identical, but they're not: the second chirp is higher in frequency. The argument to $\sin$, the quantity $2 \pi t\,\omega(t)$, consists of two piecewise quadratics [also plotted in the original post]; the second quadratic should just be a shifted version of the first one. I can't argue with the math, but there is something very simple wrong with my intuition. The function $\omega$ should modulate the "instantaneous frequency" of the sine wave.
It seems my comment has answered the question, so I suppose I should turn it into an actual answer. For a chirp signal of the form $\sin(2\pi\phi(t))$, the instantaneous frequency $\omega(t)$ is the time derivative of the instantaneous phase $\phi(t)$. So for given $\omega(t)$, your signal should be $\sin\left(2\pi\int_0^t\omega(\tau)d\tau\right)$. For what it's worth, the incorrect expression $\sin(2\pi t\omega(t))$ gives a signal with instantaneous frequency $\omega(t) + t\omega'(t)$. In your example, $\omega'(t)$ is positive almost everywhere, which is why the signal seems to be too high in frequency.
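A small numerical illustration of the fix (my own sketch, with assumed sweep parameters): integrate $\omega(t)$ numerically to get the instantaneous phase, rather than multiplying by $t$:

```python
import numpy as np

a, b, p = 4.0, 1.0, 1.0          # sweep parameters (assumed values)
t = np.linspace(0, 2 * p, 4000)  # two periods of the sweep
omega = a * (t % p) + b          # periodically extended instantaneous frequency

dt = t[1] - t[0]
phase = np.cumsum(omega) * dt    # phi(t) ~ integral of omega from 0 to t

wrong = np.sin(2 * np.pi * t * omega)  # the original, incorrect signal
right = np.sin(2 * np.pi * phase)      # two identical chirps, as intended
```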
{ "language": "en", "url": "https://math.stackexchange.com/questions/17349", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Division of Other curves than circles The coordinates of the endpoint of an arc of a circle of length $\frac{2\pi}{p}$ are algebraic numbers, and when $p$ is a Fermat prime you can find them in terms of square roots. Gauss said that the method applied to a lot more curves than the circle. Will you please tell me if you know any worked examples of this (finding the algebraic points on other curves)?
To flesh out the comment I gave: Prasolov and Solovyev mention an example due to Euler and Serret: consider the plane curve with complex parametrization $$z=\frac{(t-a)^{n+2}}{(t-\bar{a})^n (t+i)^2}$$ where $a=\frac{\sqrt{n(n+2)}}{n+1}-\frac{i}{n+1}$ and $n$ is a positive rational number. The arclength function for this curve is $s=\frac{2\sqrt{n(n+2)}}{n+1}\arctan\,t$; since $$\arctan\,u+\arctan\,v=\arctan\frac{u+v}{1-u v},$$ the division of an arc of this curve can be done algebraically (with straightedge and compass for special values). [The original answer displayed plots of these curves for various values of $n$.] Serret also considered curves whose arclengths can be expressed in terms of the incomplete elliptic integral of the first kind $F(\phi|m)$; I'll write about those later once I figure out how to plot these... (but see the Prasolov/Solovyev book for details)
{ "language": "en", "url": "https://math.stackexchange.com/questions/17405", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 2 }
Creating a parametrized ellipse at an angle I'm creating a computer program where I need to calculate the parametrized circumference of an ellipse, like this: x = r1 * cos(t), y = r2 * sin(t) Now, say I want this parametrized ellipse to be tilted at an arbitrary angle. How do I go about this? Any obvious simplifications for i.e 30, 45 or 60 degrees?
If you want to rotate $\theta$ radians, you should use $$t\mapsto \left( \begin{array}{c} a \cos (t) \cos (\theta )-b \sin (t) \sin (\theta ) \\ a \cos (t) \sin (\theta )+b \sin (t) \cos (\theta ) \end{array} \right) $$
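A direct translation into code (my own sketch; `theta` is in radians). For the special angles, $\cos\theta$ and $\sin\theta$ reduce to the familiar constants, e.g. $\cos 45^\circ = \sin 45^\circ = \sqrt2/2$, so those products can be precomputed:

```python
import math

def rotated_ellipse_point(t, a, b, theta):
    """Point at parameter t on an ellipse with semi-axes a, b,
    rotated by theta radians about the origin."""
    ct, st = math.cos(t), math.sin(t)
    cth, sth = math.cos(theta), math.sin(theta)
    x = a * ct * cth - b * st * sth
    y = a * ct * sth + b * st * cth
    return x, y

# e.g. 45 degrees: cos(theta) = sin(theta) = sqrt(2)/2
print(rotated_ellipse_point(0.0, 2.0, 1.0, math.pi / 4))
```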
{ "language": "en", "url": "https://math.stackexchange.com/questions/17465", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
Time until x successes, given p(failure)? I hope this is the right place for help on this question! I expect this should be easy for this audience (and, no, this isn't homework). I have a task that takes $X$ seconds to complete (say, moving a rock up a hill). However, sometimes the task is restarted from the beginning (say, an earthquake rolls the rock back to the bottom of the hill). I know that these restart events happen on average every $Y$ seconds ($Y$ less than $X$). I need to find out how long it will take in total to reach the top of the hill. To be perfectly clear, suppose I start the task at time $t=0$ and, every time a "restart event" happens, I immediately restart. When I complete the task I record $f$, the final value of $t$. What is the expected value of $f$? In my attempt to solve this, I model the restarts as a Poisson process whose inter-arrival times average $Y$. However, I'm not sure how to proceed. P.S. The restart events are random, of course, not on a schedule happening every $Y$ seconds (otherwise I would never be able to reach the top of the hill).
Let $T$ be the requested time, and $R$ a random variable measuring the time until the next restart. We have $$\mathbb{E}[T] = \Pr[R > X] \cdot X + \Pr[R \leq X]( \mathbb{E}[T] + \mathbb{E}[R|R\leq X]).$$ Simplifying, we get $$\mathbb{E}[T] = X + \frac{\Pr[R \leq X]}{\Pr[R > X]} \mathbb{E}[R|R\leq X].$$ Edit: Here's a more explicit formula, assuming that $R$ is a continuous random variable with density $f$: $$ \mathbb{E}[T] = X + \frac{\int_0^X rf(r)\, dr}{1 - \int_0^X f(r)\, dr}. $$ Edit: Here's another formula, for a discrete random variable with integer values (assuming a restart is valid if it happens at time $X$): $$ \mathbb{E}[T] = X + \frac{\sum_{t=0}^X t\Pr[R=t]}{1 - \sum_{t=0}^X \Pr[R=t]}.$$ Edit: plugging a geometric distribution with expectation $\lambda$, Wolfram alpha gets this. Note that a geometric distribution has a non-zero probability to be $0$ (in that case the restart occurs immediately).
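For the asker's Poisson model, $R$ is exponential with mean $Y$, and plugging its density into the formula gives the closed form $\mathbb{E}[T] = Y\,(e^{X/Y}-1)$ (my own simplification of the expression above). A quick Monte Carlo sketch to check it:

```python
import random, math

def simulate(X, Y, trials=100_000):
    """Average completion time with exponential restarts of mean Y."""
    total = 0.0
    for _ in range(trials):
        t = 0.0
        while True:
            r = random.expovariate(1.0 / Y)  # time until next restart
            if r >= X:                       # no restart before finishing
                t += X
                break
            t += r                           # restart: lose all progress
        total += t
    return total / trials

X, Y = 10.0, 4.0
print(simulate(X, Y))           # simulated expectation
print(Y * math.expm1(X / Y))    # closed form Y*(e^(X/Y) - 1), ~ 44.7
```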
{ "language": "en", "url": "https://math.stackexchange.com/questions/17519", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Going back from a correlation matrix to the original matrix I have $N$ sensors which are being sampled $M$ times, so I have an $N \times M$ readout matrix. If I want to know the relations and dependencies of these sensors, the simplest thing is to compute Pearson's correlation, which gives me an $N \times N$ correlation matrix. Now let's say I have the correlation matrix, but I want to explore the possible readout space that can lead to such a correlation matrix. What can I do? So the question is: given an $N \times N$ correlation matrix, how can you get to a matrix that would have such a correlation matrix? Any comment is appreciated.
You could make your "10%" implementation faster by using gradient descent. Here's an example of doing it for a covariance matrix, because it's easier. You have a $k\times k$ covariance matrix $C$ and you want to get a $k \times n$ observation matrix $A$ that produces it. The task is to find $A$ such that $$A A' = n^2 C.$$ You could start with a random guess for $A$ and try to minimize the sum of squared errors. Using the Frobenius norm, we can write this objective as follows: $$J=\|A A'-n^2 C\|^2_F.$$ Let $D$ be our matrix of errors, i.e. $D=A A'-n^2 C$. Differentiating (cf. the Matrix Cookbook), and using that $D$ is symmetric, I get the following for the gradient: $$\frac{\partial J}{\partial a_{iy}} \propto \sum_j D_{ij}\, a_{jy}.$$ In other words, your update to the data matrix for sensor $i$, observation $y$, should be proportional to the errors in the $i$th row of $D$ weighted against the $y$th observation of each sensor. A gradient descent step would be to update all weights at once. It might be more robust to update just the worst rows, i.e. calculate the sum of errors for every row, update entries corresponding to the worst rows, then recalculate errors.
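A minimal sketch of this scheme (my own; it uses the full gradient $\nabla_A J = 4DA$, first solving the better-conditioned problem $BB' \approx C$ and rescaling, with a small fixed step size that may need tuning):

```python
import numpy as np

def factor_covariance(C, n, steps=5000, lr=0.01, seed=0):
    """Find a k x n matrix A with A @ A.T ~ n^2 * C by gradient descent."""
    k = C.shape[0]
    rng = np.random.default_rng(seed)
    B = rng.standard_normal((k, n)) / np.sqrt(n)  # B @ B.T starts near I
    for _ in range(steps):
        D = B @ B.T - C              # matrix of errors
        B -= lr * (4 * D @ B)        # gradient of ||B B' - C||_F^2
    return n * B                     # rescale so A @ A.T ~ n^2 * C

# toy example: recover observations matching a given 3x3 covariance
C = np.array([[1.0, 0.5, 0.2],
              [0.5, 1.0, 0.3],
              [0.2, 0.3, 1.0]])
A = factor_covariance(C, n=50)
print(np.round(A @ A.T / 50**2, 3))  # should be close to C
```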
{ "language": "en", "url": "https://math.stackexchange.com/questions/17575", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 3, "answer_id": 1 }
How do you integrate a Bessel function? I don't want to memorize answers or use a computer, is this possible? I am attempting to integrate a Bessel function of the first kind multiplied by a linear term: $\int xJ_n(x)\mathrm dx$ The textbooks I have open in front of me are not useful (Boas, Arfken, various Schaum's) for this problem. I would like to do this by hand. Is it possible? I have had no luck with expanding out $J_n(x)$ and integrating term by term, as I cannot collect them into something nice at the end. If possible and I just need to try harder (i.e. other methods or leaving it alone for a few days and coming back to it) that is useful information. Thanks to anyone with a clue.
At the very least, $\int u J_{2n}(u)\mathrm du$ for integer $n$ is expressible in terms of Bessel functions with some rational function factors. To integrate $u J_0(u)$ for instance, start with the Maclaurin series: $$u J_0(u)=u\sum_{k=0}^\infty \frac{(-u^2/4)^k}{(k!)^2}$$ and integrate termwise $$\int u J_0(u)\mathrm du=\sum_{k=0}^\infty \frac1{(k!)^2}\int u(-u^2/4)^k\mathrm du$$ to get $$\int u J_0(u)\mathrm du=\frac{u^2}{2}\sum_{k=0}^\infty \frac{(-u^2/4)^k}{k!(k+1)!}$$ thus resulting in the identity $$\int u J_0(u)\mathrm du=u J_1(u)$$ For $\int u J_2(u)\mathrm du$, we exploit the recurrence relation $$u J_2(u)=2 J_1(u)-u J_0(u)$$ and $$\int J_1(u)\mathrm du=-J_0(u)$$ (which can be established through the series definition for Bessel functions) to obtain $$\int u J_2(u)\mathrm du=-u J_1(u)-2J_0(u)$$ and in the general case of $\int u J_{2n}(u)\mathrm du$ for integer $n$, repeated use of the recursion relation $$J_{n-1}(u)+J_{n+1}(u)=\frac{2n}{u}J_n(u)$$ as well as the additional integral identity $$\int J_{2n+1}(u)\mathrm du=-J_0(u)-2\sum_{k=1}^n J_{2k}(u)$$ should give you expressions involving only Bessel functions. On the other hand, $\int u J_{\nu}(u)\mathrm du$ for $\nu$ not an even integer cannot be entirely expressed in terms of Bessel functions; if $\nu$ is an odd integer, Struve functions are needed ($\int J_0(u)\mathrm du$ cannot be expressed solely in terms of Bessel functions, and this is where the Struve functions come in); for $\nu$ half an odd integer, Fresnel integrals are needed, and for general $\nu$, the hypergeometric function ${}_1 F_2\left({{}\atop b}{a \atop{}}{{}\atop c}\mid u\right)$ is required.
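A quick numerical check of the first identity, $\int_0^x u J_0(u)\,\mathrm du = x J_1(x)$ (my own sketch using `scipy`, not part of the original answer):

```python
from scipy.integrate import quad
from scipy.special import j0, j1

x = 5.0
lhs, _ = quad(lambda u: u * j0(u), 0, x)
rhs = x * j1(x)
print(lhs, rhs)  # both ~ -1.638, confirming int u J0(u) du = u J1(u)
```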
{ "language": "en", "url": "https://math.stackexchange.com/questions/17634", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12", "answer_count": 3, "answer_id": 1 }
Integrate Form $du / (a^2 + u^2)^{3/2}$ How does one integrate $$\int \dfrac{du}{(a^2 + u^2)^{3/2}}\ ?$$ The table of integrals here: http://teachers.sduhsd.k12.ca.us/abrown/classes/CalculusC/IntegralTablesStewart.pdf Gives it as: $$\frac{u}{a^2 ( a^2 + u^2)^{1/2}}\ .$$ I'm getting back into calculus and very rusty. I'd like to be comfortable with some of the proofs behind various fundamental "Table of Integrals" integrals. Looking at it, the substitution rule seems like the method of choice. What is the strategy here for choosing a substitution? It has a form similar to many trigonometric integrals, but the final result seems to suggest that they're not necessary in this case.
A trigonometric substitution does indeed work. We want to express $(a^2 + u^2)^{3/2}$ as something without square roots. We want to use some form of the Pythagorean trigonometric identity $\sin^2 x + \cos^2 x = 1$. Multiplying each side by $\frac{a^2}{\cos^2 x}$, we get $a^2 \tan^2 x + a^2 = a^2 \sec^2 x$, which is in the desired form of (sum of two squares) = (something squared). This suggests that we should use the substitution $u^2 = a^2 \tan^2 x$. Equivalently, we substitute $u = a \tan x$ and $du = a \sec^2 x dx$. Then $$ \int \frac{du}{(a^2 + u^2)^{3/2}} = \int \frac{a \sec^2 x \, dx}{(a^2 + a^2 \tan^2 x)^{3/2}}. $$ Applying the trigonometric identity considered above, this becomes $$ \int \frac{a \sec^2 x \, dx}{(a^2 \sec^2 x)^{3/2}} = \int \frac{dx}{a^2 \sec x} = \frac{1}{a^2} \int \cos x \, dx, $$ which can be easily integrated as $$ =\frac{1}{a^2} \sin x. $$ Since we set $u = a \tan x$, we substitute back $x = \tan^{-1} (\frac ua)$ to get that the answer is $$ =\frac{1}{a^2} \sin \tan^{-1} \frac{u}{a}. $$ Since $\sin \tan^{-1} z = \frac{z}{\sqrt{z^2 + 1}}$, this yields the desired result of $$ =\frac{u/a}{a^2 \sqrt{(u/a)^2 + 1}} = \frac{u}{a^2 (a^2 + u^2)^{1/2}}. $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/17666", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "19", "answer_count": 2, "answer_id": 1 }
How does teacher get first step? Below are the steps the teacher took to solve: $y = \sqrt{3}\sin x + \cos x$ find min and max on $[0, 2\pi)$ Step 1: = $2\sin(x + \pi/6))$ How does the teacher get this first step? Note: No calculus please.
picakhu's answer is the simplest way to see how it works having already arrived at $y=2\sin(x+\frac{\pi}{6})$ (use the identity there to expand this form). In general, given $a\sin x+b\cos x$ (let's say for $a,b>0$), it is possible to arrive at a similar equivalent form: $$\begin{align} a\sin x+b\cos x &=a\left(\sin x+\frac{b}{a}\cos x\right) \\ &=a\left(\sin x+\tan\left(\arctan\frac{b}{a}\right)\cos x\right) \\ &=a\left(\sin x+\frac{\sin\left(\arctan\frac{b}{a}\right)}{\cos\left(\arctan\frac{b}{a}\right)}\cos x\right) \\ &=\frac{a}{\cos\left(\arctan\frac{b}{a}\right)}\left(\sin x\cos\left(\arctan\frac{b}{a}\right)+\sin\left(\arctan\frac{b}{a}\right)\cos x\right) \\ &=\sqrt{a^2+b^2}\sin\left(x+\arctan\frac{b}{a}\right). \end{align}$$
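Applying the general formula to the problem at hand, with $a=\sqrt3$ and $b=1$ (a routine substitution, spelled out here for completeness): $$\sqrt{a^2+b^2}=\sqrt{3+1}=2,\qquad \arctan\frac{b}{a}=\arctan\frac{1}{\sqrt3}=\frac{\pi}{6},$$ so $y=2\sin\left(x+\frac{\pi}{6}\right)$, which attains its maximum $2$ at $x=\frac{\pi}{3}$ and its minimum $-2$ at $x=\frac{4\pi}{3}$ on $[0,2\pi)$.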
{ "language": "en", "url": "https://math.stackexchange.com/questions/17716", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Why a complete graph has $\frac{n(n-1)}{2}$ edges? I'm studying graphs in algorithms and complexity (but I'm not very good at math). As in the title: why does a complete graph have $\frac{n(n-1)}{2}$ edges? And how is this related to combinatorics?
A complete graph has an edge between any two vertices. You can get an edge by picking any two vertices. So if there are $n$ vertices, there are $n$ choose $2$ = ${n \choose 2} = n(n-1)/2$ edges. Does that help?
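The combinatorial content is easy to see in code (my addition): enumerating all $2$-element subsets of the vertex set produces exactly $n(n-1)/2$ edges.

```python
from itertools import combinations

for n in range(1, 8):
    edges = list(combinations(range(n), 2))  # one edge per unordered vertex pair
    assert len(edges) == n * (n - 1) // 2
    print(n, len(edges))
```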
{ "language": "en", "url": "https://math.stackexchange.com/questions/17747", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "58", "answer_count": 4, "answer_id": 2 }
Coefficients for Taylor series of given rational function Looking at an earlier post Finding the power series of a rational function, I am trying to get a closed formula for the n'th coefficient in the Taylor series of the rational function (1-x)/(1-2x-x^3). Is it possible to use any of the tricks in that post to not only obtain specific coefficients, but an expression for the n'th coefficient ?If T(x) is the Taylor polynomial I am looking at the equality (1-x) = (1-2x-x^3)*T(x) and differentiating, but I am not able to see a pattern giving me an explicit formula for the coefficients.
Denoting the coefficients of $T(x)$ as $t_i$ (so $T(x) = t_0 + t_1 x + ...$), consider the coefficient of $x^a$ in $(1 - 2x - x^3) T(x)$. It shouldn't be hard to show that it's $t_a - 2t_{a-1} - t_{a-3}$. So we have $t_0 - 2t_{-1} - t_{-3} = 1$, $t_1 - 2t_0 - t_{-2} = -1$, and $a > 1 \Rightarrow t_a - 2t_{a-1} - t_{a-3} = 0$. You can get $t_0$, $t_1$. Do you know how to solve the discrete recurrence $t_{a+3} = 2t_{a+2} + t_{a}$? (If not, here's a hint: suppose you had a number $\alpha$ such that $t_a = \alpha^a$ satisfied the recurrence. What constraint on $\alpha$ can you prove? Now consider a linear combination of the possible such solutions, $t_a = b_0 \alpha_0^a + ... + b_n \alpha_n^a$, and solve $n$ simultaneous equations from your base cases $t_0$, $t_1$, and use $0 = t_{-1} = t_{-2} = ...$ if necessary).
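A short SymPy sketch (my addition) that runs the recurrence and compares it against a direct series expansion; the base cases work out to $t_0=1$, $t_1=1$, $t_2=2$:

```python
import sympy as sp

N = 10
t = [1, 1, 2]                             # base cases from the equations above
for a in range(3, N):
    t.append(2 * t[a - 1] + t[a - 3])     # t_a = 2 t_{a-1} + t_{a-3}

x = sp.symbols('x')
T = sp.series((1 - x) / (1 - 2 * x - x**3), x, 0, N).removeO()
print(t)
print([T.coeff(x, k) for k in range(N)])  # the two lists should agree
```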
{ "language": "en", "url": "https://math.stackexchange.com/questions/17845", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
Unbounded subset of $\mathbb{R}$ with positive Lebesgue outer measure The set of rational numbers $\mathbb{Q}$ is an unbounded subset of $\mathbb{R}$ with Lebesgue outer measure zero. In addition, $\mathbb{R}$ is an unbounded subset of itself with Lebesgue outer measure $+\infty$. Therefore the following question came to my mind: is there an unbounded subset of $\mathbb{R}$ with positive Lebesgue outer measure? If there is, can you give me an example?
I guess you mean with positive and finite outer measure. An easy example would be something like $[0,1]\cup\mathbb{Q}$. But perhaps you also want to have nonzero measure outside of each bounded interval? In that case, consider $[0,1/2]\cup[1,1+1/4]\cup[2,2+1/8]\cup[3,3+1/16]\cup\cdots$. If you want the set to have positive measure in each subinterval of $\mathbb{R}$, you could let $x_1,x_2,x_3,\ldots$ be a dense sequence (like the rationals) and take a union of open intervals $I_n$ such that $I_n$ contains $x_n$ and has length $1/2^n$. On the other hand, it is often useful to keep in mind that every set of finite measure is "nearly bounded". That is, if $m(E)<\infty$ and $\epsilon>0$, then there is an $M\gt0$ such that $m(E\setminus[-M,M])<\epsilon$. One way to see this is by proving that the monotone sequence $(m(E\cap[-n,n]))_{n=1}^\infty$ converges to $m(E)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/17897", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 1, "answer_id": 0 }
Why do we use the smash product in the category of based topological spaces? I was telling someone about the smash product and he asked whether it was the categorical product in the category of based spaces and I immediately said yes, but after a moment we realized that that wasn't right. Rather, the categorical product of $(X,x_0)$ and $(Y,y_0)$ is just $(X\times Y,(x_0,y_0))$. (It seems like in any concrete category $(\mathcal{C},U)$, if we have a product (does a concrete category always have products?) then it must be that $U(X\times Y)=U(X)\times U(Y)$. But I couldn't prove it. I should learn category theory. Maybe functors commute with products or something.) Anyways, here's what I'm wondering: is the main reason that we like the smash product just that it gives the right exponential law? It's easy to see that the product $\times$ I gave above has $F(X\times Y,Z)\not\cong F(X,F(Y,Z))$ just by taking e.g. $X=Y=Z=S^0$.
From nLab: The smash product is the tensor product in the closed monoidal category of pointed sets. That is, $$\operatorname{Fun}_*(A\wedge B,C)\cong \operatorname{Fun}_*(A,\operatorname{Fun}_*(B,C))$$ Here, $\operatorname{Fun}_*(A,B)$ is the set of basepoint-preserving functions from $A$ to $B$, itself made into a pointed set by taking as basepoint the constant function from all of $A$ to the basepoint in $B$. There's more at the link. I must admit that I know nothing about this, but I recommend nLab as a good place to look for the categorical place of mathematical constructions.
{ "language": "en", "url": "https://math.stackexchange.com/questions/17955", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18", "answer_count": 4, "answer_id": 0 }
Prove A = (A\B) ∪ (A ∩ B) I have to demonstrate this formula: Prove $A = (A\setminus B) \cup (A \cap B)$ But it seems to me that it is false. $(A\setminus B) \cup (A \cap B)$ * *$x \in A\setminus B \implies \{ x \in A \text{ and } x \notin B \}$ or * *$x \in A \cap B \implies \{ x \in A \text{ and } x \in B \}$ so: $x \in A \cap B$ so: $A \neq (A\setminus B) \cup (A \cap B)$ Did I solve the problem or am I just blind?
To show that two sets are equal, you show they have the same elements. Suppose first $x\in A$. There are two cases: Either $x\in B$, or $x\notin B$. In the first case, $x\in A$ and $x\in B$, so $x\in A\cap B$ (by definition of intersection). In the second case, $x\in A$ and $x\notin B$, so $x\in A\setminus B$ (again, by definition). This shows that if $x\in A$, then $x\in A\cap B$ or $x\in A\setminus B$, i.e., $x\in (A\setminus B)\cup(A\cap B)$. Now we have to show, conversely, that if $x\in (A\setminus B)\cup(A\cap B)$, then $x\in A$. Note that $x\in(A\setminus B)\cup(A\cap B)$ means that either $x\in A\setminus B$ or $x\in A\cap B$. In the first case, $x\in A$ (and also, $x\notin B$). In the second case, $x\in A$ (and also, $x\in B$). In either case, $x\in A$, but this is what we needed. In summary: We have shown both $A\subseteq (A\setminus B)\cup(A\cap B)$ and $(A\setminus B)\cup(A\cap B)\subseteq A$. But this means the two sets are equal.
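Not a proof, of course, but a brute-force check on random subsets (my addition) is a nice way to convince yourself before writing the argument:

```python
import random

rng = random.Random(1)
for _ in range(1000):
    A = {x for x in range(10) if rng.random() < 0.5}
    B = {x for x in range(10) if rng.random() < 0.5}
    assert A == (A - B) | (A & B)
print("identity verified on 1000 random pairs")
```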
{ "language": "en", "url": "https://math.stackexchange.com/questions/18006", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 7, "answer_id": 0 }
Simpler solution to this geometry/trig problem? I had a geometry/trigonometry problem come up at work today, and I've been out of school too long: I've lost my tools. I'm starting with a rectangle of known width (w) and height (h). For graphical simplification I can convert it into a right-angle triangle: I'm trying to find the coordinates of that point above which is perpendicular to the origin: I've labelled the opposite angle t1 (i.e. theta1, but Microsoft Paint cannot easily do Greek and subscripts), and I deduce that the two triangles are similar (i.e. they have the same shape): Now we come to my problem. Given w and h, find x and y. Now things get very difficult to keep drawing graphically, to explain my attempts so far. But if I call the length of the line segment common to both triangles M: then: M = w∙sin(t1) Now I can focus on the other triangle, which I'll call O-x-M: and use trig to break it down, giving: x = M∙sin(t1) = w∙sin(t1)∙sin(t1) y = M∙cos(t1) = w∙sin(t1)∙cos(t1) with t1 = atan(h/w) Now this all works (I think, I've not actually tested it yet), and I'll be giving it to a computer, so speed isn't horribly important. But my god, there must have been an easier way to get there. I feel like I'm missing something. By the way, what this will be used for is drawing a linear gradient along that perpendicular:
Parametrize the line from $(w,0)$ to $(0,h)$ by $(w,0) + t(-w, h)$. Then you are searching for the point $(x,y)$ on the line such that $(x,y)\cdot (-w,h) = 0$.
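In code (my addition), solving that dot-product equation for $t$ gives $t = w^2/(w^2+h^2)$, and the foot of the perpendicular falls out directly:

```python
def perpendicular_foot(w, h):
    t = w * w / (w * w + h * h)   # from (w - t*w)*(-w) + (t*h)*h = 0
    return w - t * w, t * h

x, y = perpendicular_foot(4.0, 3.0)
print(x, y)                        # (1.44, 1.92)
print(x * (-4.0) + y * 3.0)        # dot with (-w, h): ~0, so it is perpendicular
```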
{ "language": "en", "url": "https://math.stackexchange.com/questions/18057", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 5, "answer_id": 1 }
Diophantine equations solved using algebraic numbers? On MathWorld it says some Diophantine equations can be solved using algebraic numbers. I know one example, which is to factor $x^2 + y^2$ in $\mathbb{Z}[\sqrt{-1}]$ to find the Pythagorean triples. I would be very interested in finding some examples of harder equations (not quadratic) which are also solved easily using algebraic numbers. Thank you!
Fermat's Last Theorem was originally "proved" by using some elementary arguments and the "fact" that the ring of integers of $\mathbb{Q}(\zeta_p)$ is a UFD (which it is not), where $\zeta_p$ is a primitive $p$th root of unity. These arguments do hold when $\mathcal{O}_p$ is a UFD though; for a thorough account of the argument, see Chapter 1 of Marcus's Number Fields.
{ "language": "en", "url": "https://math.stackexchange.com/questions/18154", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 6, "answer_id": 4 }
What are the sample spaces when talking about continuous random variables? When talking about continuous random variables, with a particular probability distribution, what are the underlying sample spaces? Additionally, why are these sample spaces so often omitted, with one simply saying that the r.v. $X$ follows a uniform distribution on the interval $[0,1]$? Isn't the sample space critically important?
The sample space is the set of values that your random variable $X$ can take. If you have a uniform distribution on the interval $(0,1)$, then the value that your r.v. can take is anything from $0$ to $1$. If the r.v. is outside this interval, then your pdf is zero. So that means the universal space is all the real numbers, but for this example we are only concerned with what happens on $[0,1]$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/18198", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14", "answer_count": 4, "answer_id": 3 }
Map of Mathematical Logic My undergraduate university does not offer advanced courses on logic; I know truth tables, Boolean algebra, and propositional calculus. However, I want to pursue mathematical logic in the long term as a mathematician. Can anyone suggest a study map of mathematical logic, such as: (1) Learn the following topics: a, b, c, etc. (2) Once you have learned the topics in (1), advance to these topics. (3) ... (4) etc. Thank you
I recommend Teach Yourself Logic by Peter Smith as an extremely helpful resource. In addition to detailed textbook suggestions, it has descriptions of the basic topics and the key points that you want to learn regardless what book you learn them from.
{ "language": "en", "url": "https://math.stackexchange.com/questions/18233", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 2 }
$n$th derivative of $e^{1/x}$ I am trying to find the $n$'th derivative of $f(x)=e^{1/x}$. When looking at the first few derivatives I noticed a pattern and eventually found the following formula $$\frac{\mathrm d^n}{\mathrm dx^n}f(x)=(-1)^n e^{1/x} \cdot \sum _{k=0}^{n-1} k! \binom{n}{k} \binom{n-1}{k} x^{-2 n+k}$$ I tested it for the first $20$ derivatives and it got them all. Mathematica says that it is some hypergeometric distribution but I don't want to use that. Now I am trying to verify it by induction but my algebra is not good enough to do the induction step. Here is what I tried for the induction (incomplete, maybe incorrect) $\begin{align*} \frac{\mathrm d^{n+1}}{\mathrm dx^{n+1}}f(x)&=\frac{\mathrm d}{\mathrm dx}(-1)^n e^{1/x} \cdot \sum _{k=0}^{n-1} k! \binom{n}{k} \binom{n-1}{k} x^{-2 n+k}\\ &=(-1)^n e^{1/x} \cdot \left(\sum _{k=0}^{n-1} k! \binom{n}{k} \binom{n-1}{k} (-2n+k) x^{-2 n+k-1}\right)-e^{1/x} \cdot \sum _{k=0}^{n-1} k! \binom{n}{k} \binom{n-1}{k} x^{-2 (n+1)+k}\\ &=(-1)^n e^{1/x} \cdot \sum _{k=0}^{n-1} k! \binom{n}{k} \binom{n-1}{k}((-2n+k) x^{-2 n+k-1}-x^{-2 (n+1)+k)})\\ &=(-1)^{n+1} e^{1/x} \cdot \sum _{k=0}^{n-1} k! \binom{n}{k} \binom{n-1}{k}(2n x-k x+1) x^{-2 (n+1)+k} \end{align*}$ I don't know how to get on from here.
We may obtain a recursive formula, as follows: \begin{align} f\left( t \right) &= e^{1/t} \\ f'\left( t \right) &= - \frac{1}{{t^2 }}f\left( t \right) \\ f''\left( t \right) &= - \frac{1}{{t^2 }}f'\left( t \right) + f\left( t \right)\frac{2}{{t^3 }} \\ &= - \frac{1}{{t^2 }}\left\{ { - \frac{1}{{t^2 }}f\left( t \right)} \right\} + f\left( t \right)\frac{2}{{t^3 }} \\ &= \left( {\frac{1}{{t^4 }} + \frac{2}{{t^3 }}} \right)f\left( t \right) \\ \ldots \end{align} Inductively, let us assume $f^{(n-1)}(t)=P_{n-1}(\frac{1}{t})f(t)$ is true, for some polynomial $P_{n-1}$. Now, for $n$, we have \begin{align} f^{\left( n \right)} \left( t \right) &= - \frac{1}{{t^2 }}P'_{n - 1} \left( {\frac{1}{t}} \right)f\left( t \right) + P_{n - 1} \left( {\frac{1}{t}} \right)f'\left( t \right) \\ &= - \frac{1}{{t^2 }}P'_{n - 1} \left( {\frac{1}{t}} \right)f\left( t \right) + P_{n - 1} \left( {\frac{1}{t}} \right)\left\{ { - \frac{1}{{t^2 }}f\left( t \right)} \right\} \\ &= - \frac{1}{{t^2 }}\left\{ {P'_{n - 1} \left( {\frac{1}{t}} \right) + P_{n - 1} \left( {\frac{1}{t}} \right)} \right\}f\left( t \right) \\ &= P_n \left( {\frac{1}{t}} \right)f\left( t \right) \end{align} Thus, \begin{align} f^{\left( n \right)} \left( t \right) = P_n \left( {\frac{1}{t}} \right)f\left( t \right) \end{align} where $P_n \left( x \right) := -x^2 \left[ {P'_{n - 1} \left( x \right) + P_{n - 1} \left( x \right)} \right]$, $P_0=1$.
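A short SymPy loop (my addition) confirms the recursion against direct differentiation for the first few $n$:

```python
import sympy as sp

t, x = sp.symbols('t x')
f = sp.exp(1 / t)
P = sp.Integer(1)                                   # P_0 = 1
for n in range(1, 6):
    P = sp.expand(-x**2 * (sp.diff(P, x) + P))      # the recursion above
    direct = sp.simplify(sp.diff(f, t, n) / f)      # f^(n)(t) / f(t)
    assert sp.simplify(direct - P.subs(x, 1 / t)) == 0
print("recursion matches direct differentiation for n = 1..5")
```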
{ "language": "en", "url": "https://math.stackexchange.com/questions/18284", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "65", "answer_count": 6, "answer_id": 0 }
Integral Representations of Hermite Polynomial? One of my former students asked me how to go from one presentation of the Hermite polynomial to another. And I'm embarrassed to say, I've been trying and failing miserably. (I'm guessing this is a homework problem that he is having trouble with.) http://functions.wolfram.com/Polynomials/HermiteH/07/ShowAll.html So he has to go from the Rodrigues-type formula (written as a contour integral) to an integral on the real axis, which is the 3rd formula in the link provided above. It seems like the hint he was given was to start from the contour integral. Starting with the contour integral, I tried using different semi-circles (assuming that $z$ was real), but this quickly turned into something weird. I also tried to use a circle as the contour, then map it to the real line. That was a failure. I tried working backwards, from the integral on the real axis. Didn't have luck. The last resort was to show that 1) Both expressions are polynomials. 2) The corresponding coefficients are equal. (That is, I took both functions and evaluated them and their derivatives at 0.) Even for 2), I couldn't see a nice way of showing that $\int_C \frac{e^{-z^2}}{z^{n+1}}dz = \int_{-\infty}^{\infty} z^n e^{-z^2} dz$ (up to some missing multiplicative constants). I feel like I'm missing something really easy. If someone could give me some hints without giving away the answer, that would be most appreciated.
\begin{align*} H_n(x) &=(-)^n\mathrm{e}^{x^2}\partial_x^n\mathrm{e}^{-x^2} \\ &=(-1)^n\mathrm{e}^{x^2}\partial_x^n\mathrm{e}^{-x^2}\Big(\frac{1}{\sqrt{\pi}}\int_{-\infty}^{\infty}\mathrm{e}^{-(t\pm \mathrm{i}x)^2}\mathrm{d}t\Big) \\ &=(-1)^n\mathrm{e}^{x^2}\frac{1}{\sqrt{\pi}}\partial_x^n\int_{-\infty}^{\infty}\mathrm{e}^{-t^2\mp 2\mathrm{i}xt}\mathrm{d}t \\ &=(-1)^n\mathrm{e}^{x^2}\frac{1}{\sqrt{\pi}}\int_{-\infty}^{\infty}\partial_x^n\mathrm{e}^{-t^2\mp 2\mathrm{i}xt}\mathrm{d}t \\ &=(-1)^n\mathrm{e}^{x^2}\frac{1}{\sqrt{\pi}}\int_{-\infty}^{\infty}(\mp2\mathrm{i}t)^n\mathrm{e}^{-t^2\mp 2\mathrm{i}xt}\mathrm{d}t \\ &=\frac{(\pm2\mathrm{i})^n}{\sqrt{\pi}}\mathrm{e}^{x^2}\int_{-\infty}^{\infty}t^n\mathrm{e}^{-t^2\mp 2\mathrm{i}xt}\mathrm{d}t \\ &=\frac{(\pm2\mathrm{i})^n}{\sqrt{\pi}}\int_{-\infty}^{\infty}t^n\mathrm{e}^{-(t\pm \mathrm{i}x)^2}\mathrm{d}t \end{align*} Average the $+$ and the $-$ situations, we have $$H_n(x)=\frac{1}{\sqrt{\pi}}\int_{-\infty}^{\infty}(2t)^n\mathrm{e}^{x^2-t^2}\cos(2xt-\frac{n\pi}{2})\,\mathrm{d}t .$$
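A numerical spot-check of the final formula (my addition), using SciPy's `eval_hermite` for the physicists' Hermite polynomials:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import eval_hermite

n, x = 4, 0.8
f = lambda t: (2 * t)**n * np.exp(x**2 - t**2) * np.cos(2 * x * t - n * np.pi / 2)
val, _ = quad(f, -np.inf, np.inf)
print(val / np.sqrt(np.pi), eval_hermite(n, x))   # both ≈ H_4(0.8) = -12.1664
```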
{ "language": "en", "url": "https://math.stackexchange.com/questions/18325", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 3, "answer_id": 2 }
Spread evenly $x$ black balls among a total of $2^n$ balls Suppose you want to line up $2^n$ balls, of which $x$ are black and the rest are white. Find a general method to do this so that the black balls are as dispersed as possible, assuming that the pattern will repeat itself ad infinitum. The solution can be in closed form, iterative, or algorithmic. For example, if $n=3$, where $0$ is a white ball and $1$ is a black ball, a solution is: x=0: 00000000... x=1: 10000000... x=2: 10001000... x=3: 10010010... x=4: 10101010... x=5: 01101101... x=6: 01110111... x=7: 01111111... x=8: 11111111...
Let $\theta = x/2^n$. For each $m$, put a (black) ball at $$\min \{ k \in \mathbb{N} : k\theta \geq m \}.$$ In other words, look at the sequence $\lfloor k \theta \rfloor$, and put a ball in each position where the sequence increases. For example, if $n=3$ and $x = 3$ then $\theta = 3/8$ and the sequence of floors is $$0, 0, 0, 1, 1, 1, 2, 2, 3, 3, 3, 4, 4, 4, 5, 5, \ldots,$$ and so the sequence of balls is $10010010\ldots$ .
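A small script (my addition) implementing the floor-sequence rule; for some $x$ the output is a cyclic rotation of the table in the question, which is an equivalent arrangement once the pattern repeats forever:

```python
import math

N = 8
for x in range(N + 1):
    row = ''.join(
        '1' if x > 0 and (k == 0 or math.floor(k * x / N) > math.floor((k - 1) * x / N))
        else '0'
        for k in range(N)
    )
    print(x, row)
```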
{ "language": "en", "url": "https://math.stackexchange.com/questions/18384", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Balancing an acid-base chemical reaction I tried balancing this chemical equation $$\mathrm{Al(OH)_3 + H_2SO_4 \to Al_2(SO_4)_3 + H_2O}$$ with a system of equations, but the answer doesn't seem to map well. I get a negative coefficient, which is prohibitive in this equation. How do I interpret the answer?
Let $x$ be the number of $\mathrm{Al(OH)_3}$; $y$ the number of $\mathrm{H_2SO_4}$; $z$ the number of $\mathrm{Al_2(SO_4)_3}$, and $w$ the number of $\mathrm{H_2O}$. Looking at the number of $\mathrm{Al}$, you get $x = 2z$. Looking at $\mathrm{O}$, you get $3x + 4y = 12z + w$. Looking at $\mathrm{H}$ you get $3x + 2y = 2w$; and looking at $\mathrm{S}$ you get $y = 3z$. That looks like what you are getting from Wolfram, except you have the wrong signs for $z$ and $w$; unless you are interpreting the first two entries to represent the "unknowns", and the last two to represent the "solutions". I would translate into equations the usual way. What you have is the following system of linear equations: $$\begin{array}{rcrcrcrcl} x & & & -& 2z & & & = & 0\\ 3x & + & 4y & - & 12z & - & w & = & 0\\ 3x & + & 2y & & & - & 2w & = & 0\\ & & y & - & 3z & & & = & 0 \end{array}$$ This leads (after either some back-substitution from the first and last equations into the second and third, or some easy row reduction) to $x=2z$, $y=3z$, and $6z=w$. Since you only want positive integer solutions, setting $z=1$ gives $x=2$, $y=3$, and $w=6$, yielding the smallest solution: $$\mathrm{2 Al(OH)_3 + 3H_2SO_4 \to Al_2(SO_4)_3 + 6H_2O}$$
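The same system in SymPy (my addition): the nullspace of the element-balance matrix, scaled to integers, recovers the coefficients $(2, 3, 1, 6)$.

```python
import sympy as sp
from functools import reduce

M = sp.Matrix([[1, 0,  -2,  0],   # Al
               [3, 4, -12, -1],   # O
               [3, 2,   0, -2],   # H
               [0, 1,  -3,  0]])  # S
v = M.nullspace()[0]
scale = reduce(sp.lcm, [sp.fraction(c)[1] for c in v])  # clear denominators
print((v * scale).T)              # [2, 3, 1, 6], up to overall scaling
```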
{ "language": "en", "url": "https://math.stackexchange.com/questions/18435", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 0 }
How can I solve for a single variable which occurs in multiple trigonometric functions in an equation? This is a pretty dumb question, but it's been a while since I had to do math like this and it's escaping me at the moment (actually, I'm not sure I ever knew how to do this. I remember the basic trigonometric identities, but not anything like this). I have a simple equation of one unknown, but the unknown occurs twice in different trigonometric functions and I'm not sure how to combine the two. I want to simply solve for $\theta$ in the following equation, where $a$ and $b$ are constants. $a=\tan(\theta) - \frac{b}{\cos^2\theta}$ How can I reduce this into a single expression so that I can solve for $\theta$ given any $a$ and $b$? (I'm only interested in real solutions and, in practice (this is used to calculate the incidence angle for a projectile such that it will pass through a certain point), it should always have a real solution, but an elegant method of checking that it doesn't would not go unappreciated.) Based on Braindead's hint I reduced the equation to: $0=(a+b)-\tan(\theta)+b\tan^2(\theta)$ (using $\frac{1}{\cos^2\theta}=1+\tan^2\theta$). I can now solve for $\tan(\theta)$ using the quadratic equation, which gets me what I'm after. Is this the solution others were hinting towards? It seems like there would be a way to do it as a single trigonometric operation, but maybe not.
Hint: Can you solve $$p = \frac{q\sin 2\theta + r}{s\cos 2\theta + t}$$ Ok, more details. $$a = \frac{\sin \theta \cos \theta}{\cos^2 \theta} - \frac{b}{\cos^2 \theta} = \frac{\sin 2 \theta }{2\cos^2 \theta} - \frac{b}{\cos^2 \theta} $$ $$ = \frac{\sin 2\theta - 2b}{2\cos^2 \theta} = \frac{ \sin 2\theta - 2b}{\cos 2\theta + 1}$$ Thus $$a(\cos 2 \theta + 1) = \sin 2 \theta - 2 b$$ Thus $$ \sin 2\theta - a \cos 2\theta = a + 2b$$ The equation $$ p \cos \alpha + q \sin \alpha = r$$ is standard and can be solved by dividing by $\displaystyle \sqrt{p^2 + q^2}$ and noticing that for some $\displaystyle \beta$ we must have that $\displaystyle \sin \beta = \frac{p}{\sqrt{p^2 + q^2}}$ and $\displaystyle \cos \beta = \frac{q}{\sqrt{p^2 + q^2}}$ Giving rise to $$ \sin(\alpha + \beta) = \frac{r}{\sqrt{p^2 +q^2}}$$ I will leave it to you to solve your original equation.
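Turning the hint into a tiny solver (my addition): with $\sin\beta=-a/\sqrt{1+a^2}$ and $\cos\beta=1/\sqrt{1+a^2}$, the reduced equation reads $\sin(2\theta+\beta)=(a+2b)/\sqrt{1+a^2}$; the data below was constructed from $\theta=0.4$, $b=0.3$.

```python
import math

def solve_theta(a, b):
    r = (a + 2 * b) / math.sqrt(1 + a * a)
    if abs(r) > 1:
        return None                     # no real solution on this branch
    beta = math.atan2(-a, 1.0)
    return 0.5 * (math.asin(r) - beta)  # one of several branches

theta = solve_theta(0.06915, 0.3)
print(theta)                                        # ≈ 0.4
print(math.tan(theta) - 0.3 / math.cos(theta)**2)   # ≈ 0.06915, as required
```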
{ "language": "en", "url": "https://math.stackexchange.com/questions/18485", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
decompose some polynomials [First, I want to say "I'm sorry!" because I am not a native English speaker and I don't know the terminology very well.] OK, I have some polynomials (like $a^2 +2ab +b^2$), and I can't factor them (for example $a^2 +2ab +b^2 = (a+b)^2$). Can you help me? (If you can, please write the name or formula of the identity (like $(a+b)^2 = a^2 +2ab +b^2$) for each polynomial.) * *$(a^2-b^2)x^2+2(ad-bc)x+d^2-c^2$ *$2x^2+y^2+2x-2xy-2y+1$ *$2x^2-5xy+2y^2-x-y-1$ *$x^6-14x^4+49x^2-36$ *$(a+b)^4+(a-b)^4+(a^2-b^2)^2$ Thank you very much!
For 1) $(a^2-b^2)x^2+2(ad-bc)x+d^2-c^2$ think about rearranging $$(a^2-b^2)x^2+2(ad-bc)x+d^2-c^2=a^2x^2+2adx+d^2-(b^2x^2+2bcx+c^2)$$ The same idea can be applied to all your questions.
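SymPy (my addition) confirms the regrouping leads to a clean difference-of-squares factorization:

```python
import sympy as sp

a, b, c, d, x = sp.symbols('a b c d x')
expr = (a**2 - b**2) * x**2 + 2 * (a * d - b * c) * x + d**2 - c**2
print(sp.factor(expr))   # ((a-b)*x + d - c)*((a+b)*x + c + d), up to ordering
```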
{ "language": "en", "url": "https://math.stackexchange.com/questions/18537", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Golden Number Theory The Gaussian $\mathbb{Z}[i]$ and Eisenstein $\mathbb{Z}[\omega]$ integers have been used to solve some diophantine equations. I have never seen any examples of the golden integers $\mathbb{Z}[\varphi]$ used in number theory though. If anyone happens to know some equations we can apply this in and how it's done I would greatly appreciate it!
You would probably solve the Mordell equation $y^2=x^3+5$ by working in that field.
{ "language": "en", "url": "https://math.stackexchange.com/questions/18589", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "61", "answer_count": 4, "answer_id": 0 }
Convolution of two Gaussians is a Gaussian I know that the product of two Gaussians is a Gaussian, and I know that the convolution of two Gaussians is also a Gaussian. I guess I was just wondering if there's a proof out there to show that the convolution of two Gaussians is a Gaussian.
* *the Fourier transform (FT) of a Gaussian is also a Gaussian *The convolution in frequency domain (FT domain) transforms into a simple product *then taking the FT of 2 Gaussians individually, then making the product you get a (scaled) Gaussian and finally taking the inverse FT you get the Gaussian
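A numerical rendition of the same argument (my addition): convolving two sampled Gaussians reproduces the Gaussian whose variance is the sum $\sigma_1^2+\sigma_2^2$.

```python
import numpy as np

def g(x, s):
    return np.exp(-x**2 / (2 * s**2)) / (s * np.sqrt(2 * np.pi))

s1, s2 = 1.0, 2.0
x = np.linspace(-30, 30, 6001)
dx = x[1] - x[0]
conv = np.convolve(g(x, s1), g(x, s2), mode='same') * dx  # discrete convolution
predicted = g(x, np.sqrt(s1**2 + s2**2))
print(np.max(np.abs(conv - predicted)))                    # tiny discretization error
```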
{ "language": "en", "url": "https://math.stackexchange.com/questions/18646", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16", "answer_count": 5, "answer_id": 0 }
Proof of a combination identity:$\sum \limits_{j=0}^n{(-1)^j{{n}\choose{j}}\left(1-\frac{j}{n}\right)^n}=\frac{n!}{n^n}$ I want to ask if there is a slick way to prove: $$\sum_{j=0}^n{(-1)^j{{n}\choose{j}}\left(1-\frac{j}{n}\right)^n}=\frac{n!}{n^n}$$ Edit: I know Yuval has given a proof, but that one is not direct. I am requesting for a direct algebraic proof of this identity. Thanks.
This is inclusion-exclusion for counting the number of onto maps from $n$ to $n$.
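For the skeptical, an exact rational-arithmetic check of the identity for small $n$ (my addition):

```python
from fractions import Fraction
from math import comb, factorial

for n in range(1, 10):
    s = sum((-1)**j * comb(n, j) * Fraction(n - j, n)**n for j in range(n + 1))
    assert s == Fraction(factorial(n), n**n)
print("identity verified for n = 1..9")
```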
{ "language": "en", "url": "https://math.stackexchange.com/questions/18688", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 6, "answer_id": 1 }
If $\sum^n_{i=1}{f(a_i)} \leq \sum^n_{i=1}{f(b_i)}$. Then $\sum^n_{i=1}{\int_0^{a_i}{f(t)dt}} \leq \sum^n_{i=1}{\int_0^{b_i}{f(t)dt}}$? I have been puzzling over this for a few days now. (It's not homework.) Suppose $f$ is a positive, non-decreasing, continuous, integrable function. Suppose there are two finite sequences of positive real numbers $\{a_i\}$ and $\{b_i\}$ where $$\sum^n_{i=1}{f(a_i)} \leq \sum^n_{i=1}{f(b_i)}.$$ Is it true that $$\sum^n_{i=1}{\int_0^{a_i}{f(t)dt}} \leq \sum^n_{i=1}{\int_0^{b_i}{f(t)dt}}$$ If the answer is no, does it improve things if $f$ is convex?
The answer is no, even for convex functions. Let $f(t)=(t)^+$ (the positive part of $t$). Then $f$ is convex, the first inequality means that $a_1+\cdots+a_n\le b_1+\cdots+b_n$, the second inequality means that $a_1^2+\cdots+a_n^2\le b_1^2+\cdots+b_n^2$. As soon as $n\ge2$, the former cannot imply the latter: for instance, take $a=(3,1)$ and $b=(2,2)$, so that $3+1\le 2+2$ while $3^2+1^2>2^2+2^2$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/18725", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Help me spot the error? I have a determinant to expand which is $$\triangle = \begin{bmatrix} p& 1 & \frac{-q}{2}{}\\ 1& 2 &-q \\ 2& 2 & 3 \end{bmatrix} = 0 $$ But when I am expanding the determinant along the first row such as $ p(6+2q) - (3+2q) = 0 $ but when I am trying to expand along first column I am getting $p(6+2q) - (3+q) = 0$ but I have been told by my teacher that we can expand the determinant of $3\times 3$ matrix in along any row and any column giving the same result. where lies the error ?
Seems like you forgot the third column ($\frac{-q}{2}$) when expanding using the first row. In the first row expansion, just using the first and second columns gives you $p(6+2q) - (3+2q)$. The third column gives an extra $q$, after adding which it matches the second expression you got, using the first column.
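A SymPy one-liner (my addition) showing what both expansions must agree on:

```python
import sympy as sp

p, q = sp.symbols('p q')
M = sp.Matrix([[p, 1, -q / 2], [1, 2, -q], [2, 2, 3]])
print(sp.expand(M.det()))   # 2*p*q + 6*p - q - 3, i.e. p(6+2q) - (3+q)
```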
{ "language": "en", "url": "https://math.stackexchange.com/questions/18790", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Property of a Cissoid? I didn't think it was possible to have a finite area circumscribing an infinite volume but on page 89 of Nonplussed! by Havil (accessible for me at Google Books) it is claimed that such is the goblet-shaped solid generated by revolving the cissoid y$^2$ = x$^3$/(1-x) about the positive y-axis between this axis and the asymptote x = 1. What do you think?
The interpretation is mangled. The volume is finite but the surface area infinite, much like Gabriel's Horn. The idea of the quote is much as in Gabriel's Horn: since the volume is finite, you can imagine "filling it up" with a finite amount of paint. But the surface area is infinite, which suggests that you are "painting" an infinite surface with a finite amount of paint, a paradox (of course, quantum mechanics gets in your way, even theoretically). So you get an unbounded area (which de Sluze and Huygens called "infinite") that can be "covered" with a finite quantity (the amount of "paint").
{ "language": "en", "url": "https://math.stackexchange.com/questions/18865", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 1 }
Finding the area of a quadrilateral I have a quadrilateral whose four sides are given $2341276$, $34374833$, $18278172$, $17267343$ units. How can I find out its area? What would be the area?
I can't comment (not enough reputation), but since this is sort of a partial answer anyway, I'll just post it as an answer. Rahula Narain's comment is correct that the area depends on more than just the sidelengths. But assuming the sidelengths are listed in the question in order around the polygon as usual (in fact, assuming any particular order on the sidelengths), the polygon is pretty constricted in terms of its shape (note that the first side is very small compared to the others, and the last two sum to about the same as the second). So it should be possible to get bounds on the area. (Even if we don't assume an order on the sidelengths, there are only three possibilities modulo things that don't affect area. And because one side is so small and one so large, the three areas should all have pretty similar bounds.) Any takers?
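One concrete bound (my addition, using the standard fact that the cyclic quadrilateral maximizes area for given side lengths in a given order, via Brahmagupta's formula):

```python
import math

sides = [2341276, 34374833, 18278172, 17267343]
s = sum(sides) / 2
area_max = math.sqrt(math.prod(s - a for a in sides))   # Brahmagupta's formula
print(f"maximum possible area: {area_max:.6e}")
# the actual area depends on the shape and is at most this bound
```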
{ "language": "en", "url": "https://math.stackexchange.com/questions/18906", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 2 }
Primitive integer solutions to $2x^2+xy+3y^2=z^3$? The class number of $\mathbb{Q}(\sqrt{-23})$ is $3$, and the form $$2x^2 + xy + 3y^2 = z^3$$ is one of the two reduced non-principal forms with discriminant $-23$. There are the obvious non-primitive solutions $(a(2a^2+ab+3b^2),b(2a^2+ab+3b^2), (2a^2+ab+3b^2))$. I'm pretty sure there aren't any primitive solutions, but can't seem to find an easy argument. Are there? In general, is it possible for a non-principal form to represent an $h$-th power primitively (where $h$ is the class number of the associated quadratic field)? [EDIT] I think I've solved the first question and have posted an answer below. Since the proof is very technical I don't see how it can generalize to the greater question above ($h$-th powers).
If $4|y$, then $2|z$, which in turn, by reducing mod $4$, $2|x$, contradicting primitivity. Multiply by $2$ and factor the equation over $\mathbb{Q}(\sqrt{-23})$: $$(\frac{4x+y+\sqrt{-23}y}{2})(\frac{4x+y-\sqrt{-23}y}{2}) = 2z^3$$ Note that both fractions are integral. The gcd of the two factors divides $\sqrt{-23}y$ and $4x+y$. If $23|4x+y$ then $23|z$, and reducing the original equation modulo $23^2$ we see that $23|y$, hence also $23|x$, contradicting primitivity. So the gcd divides $y$ and $4x+y$, and by the above argument, it then must be either $1$ or $2$ according to whether $y$ is odd or even, respectively. First, assume that $y$ is odd. So the gcd is $1$. Thus, for some ideal $I$: $$(2x+y\frac{1+\sqrt{-23}}{2})=(2,\frac{1+\sqrt{-23}}{2})I^3$$ Which implies that $(2,\frac{1+\sqrt{-23}}{2})$ is principal, leading to a contradiction. Now assume that $y$ is even, and that $x$ is therefore odd and $z$ is even. Put $y=2u$, $z=2v$, $u$ odd, so that: $$(2x+u+\sqrt{-23}u)(2x+u-\sqrt{-23}u)=16v^3$$ Both factors are divisible by $2$, so that: $$(x+u\frac{1+\sqrt{-23}}{2})(x+u\frac{1-\sqrt{-23}}{2})=4v^3$$ As before, the gcd is 1, and since $x+u\frac{1+\sqrt{-23}}{2}=x+u+u\frac{-1+\sqrt{-23}}{2} \in (2, \frac{-1+\sqrt{-23}}{2})$ we must have for some ideal $I$: $$(x+u\frac{1+\sqrt{-23}}{2}) = (2, \frac{-1+\sqrt{-23}}{2})^2I^3$$ Contradicting that $(2, \frac{-1+\sqrt{-23}}{2})^2$ is non-principal (the ideal above $2$ appears squared since the product of factors is divisible by $4$). We are done! It actually seems that the above can be generalised, but I have to use a major theorem, which seems like it might be a bit of an overkill. Without further ado: Let $aX^2+bXY+cY^2$ be a primitive non-principal quadratic form of discriminant $\Delta=b^2-4ac$, and $h$ be the class number of the associated quadratic field. Assume there is a solution $x,y,z$: $$ax^2+bxy+cy^2=z^h$$ Recalling the Chebotarev Density Theorem (OVERKILL), there is an equivalent form with $a$ an odd prime that doesn't divide $\Delta$, and since the invertible change of variables preserves primitivity, we reduce to this case. Multiplying by $a$ and factoring over $\mathbb{Q}(\sqrt{\Delta})$: $$(ax+\frac{b+\sqrt{\Delta}}{2}y)(ax+\frac{b-\sqrt{\Delta}}{2}y)=az^h$$ The gcd of the factors divides $(2ax+by,\sqrt{\Delta}y)$. Say $\Delta |2ax+by$, then since $$(2ax+by)^2-\Delta y^2=4az^h$$ we must have $\Delta |z$, so $\Delta |y$, and finally $\Delta |x$ (unless $\Delta=\pm 2$, which is impossible), contradicting primitivity. Hence the gcd divides $(2a,y)$. 1) gcd$=1$: $$(ax+\frac{b+\sqrt{\Delta}}{2}y) = (a,\frac{b+\sqrt{\Delta}}{2})I^h$$ contradicting that the form is non-principal. 2) gcd$=2$ or $2a$: then $y$ is even, so $z$ is too, and since $a$ is odd, $x$ must also be even, contradicting primitivity. 3) gcd$=a$: thus $a|z$. If $a^2|y$, then reducing modulo $a$ we see that $a|x$. So $a||y$. Dividing this gcd out of the two factors, we must have: $$(x+\frac{b+\sqrt{\Delta}}{2}\frac{y}{a}) = (a,\frac{b-\sqrt{\Delta}}{2})^{h-1}I^h$$ for some ideal $I$. Note the particular ideal above $a$, which appears since its conjugate cannot appear as $x$ isn't divisible by $a$. Since $(a,\frac{b-\sqrt{\Delta}}{2})^{h-1} \sim (a,\frac{b+\sqrt{\Delta}}{2})$, this contradicts that the form is non-principal. We are done! There must be a way to avoid applying a major theorem such as Chebotarev... Please tell me if you have an idea how :D
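For what it's worth, a brute-force search (my addition) is consistent with the proof: it finds no primitive solutions in its range (the form is positive definite, so its values are positive).

```python
from math import gcd

def is_cube(n):
    r = round(n ** (1 / 3))
    return any((r + d) ** 3 == n for d in (-1, 0, 1))

hits = [(x, y) for x in range(-200, 201) for y in range(-200, 201)
        if gcd(x, y) == 1 and is_cube(2 * x * x + x * y + 3 * y * y)]
print(hits)   # expected: []
```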
{ "language": "en", "url": "https://math.stackexchange.com/questions/18963", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
Sigma algebra/Borel sigma algebra proof problem In my stochastics class we were given the following problem ($\mathcal{B}(\mathbb{R})$ stands for the Borel $\sigma$-algebra on the real line and $\mathbb{R}$ stands for the real numbers): Let $f : \Omega → \mathbb{R}$ be a function. Let $F = \{ A \subset \Omega : \text{ there exists } B \in \mathcal{B}(\mathbb{R}) \text{ with } A = f^{−1}(B)\}$. Show that $F$ is a $\sigma$-algebra on $\Omega$. I'm not sure how I should break this down. Apparently the inverse function $f^{-1}$ has some property where its target space is a $\sigma$-algebra when its starting space is a Borel $\sigma$-algebra... or am I going down the wrong path? I've been going at this for a good while, any help is appreciated.
You have to show that $F$ is (i) non-empty, (ii) closed under complementation, and (iii) closed under countable unions.
{ "language": "en", "url": "https://math.stackexchange.com/questions/19005", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Is there a problem in defining a complex number by $ z = x+iy$? The field $\mathbb{C}$ of complex numbers is well-defined by the Hamilton axioms of addition and product between complex numbers, i.e., a complex number $z$ is an ordered pair of real numbers $(x,y)$, which satisfy the following operations $+$ and $\cdot$: $ (x_1,y_1) + (x_2,y_2) = (x_1+x_2,y_1+y_2) $ $(x_1,y_1)(x_2,y_2) = (x_1x_2-y_1y_2,x_1y_2 + x_2y_1)$ The other field properties follow from them. My question is: Is there a problem in defining a complex number simply by $z = x+iy$, where $i^2 = -1$ and $x$, $y$ are real numbers, importing the operations from $\mathbb{R}$? Or is this just an elegant manner of writing the same thing?
There is no "explicit" problem, but if you are going to define them as formal symbols, then you need to distinguish between the + in the symbol $a$+$bi$, the $+$ operation from $\mathbb{R}$, and the sum operation that you will be defining later until you show that they can be "confused"/identified with one another. That is, you define $\mathbb{C}$ to be the set of all symbols of the form $a$+$bi$ with $a,b\in\mathbb{R}$. Then you define an addition $\oplus$ and a multiplication $\otimes$ by the rules $(a$+$bi)\oplus(c$+$di) = (a+c)$ + $(b+d)i$ $(a$+$bi)\otimes(c$+$di) = (ac - bd)$ + $(ad+bc)i$ where $+$ and $-$ are the real number addition and subtraction, and + is merely a formal symbol. Then you can show that you can identify the real number $a$ with the symbol $a$+$0i$; and that $(0$+$i)\otimes(0$+$i) = (-1)$+$0i$; etc. At that point you can start abusing notation and describing it as you do, using the same symbol for $+$, $\oplus$, and +. So... the method you propose (which was in fact how complex numbers were used at first) is just a bit more notationally abusive, while the method of ordered pairs is much more formal, giving a precise "substance" to complex numbers as "things" (assuming you think the plane is a "thing") and not just as "formal symbols".
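A minimal executable version of the ordered-pair construction (my addition); note that no symbol $i$ appears in the definitions, and the "imaginary unit" is just the pair $(0,1)$:

```python
class Pair:
    def __init__(self, a, b):
        self.a, self.b = a, b
    def __add__(self, o):                     # the formal ⊕
        return Pair(self.a + o.a, self.b + o.b)
    def __mul__(self, o):                     # the formal ⊗
        return Pair(self.a * o.a - self.b * o.b,
                    self.a * o.b + self.b * o.a)
    def __repr__(self):
        return f"({self.a}, {self.b})"

i = Pair(0, 1)
print(i * i)   # (-1, 0), the pair identified with the real number -1
```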
{ "language": "en", "url": "https://math.stackexchange.com/questions/19108", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 3, "answer_id": 2 }
Non-squarefree version of Pell's equation? Suppose I wanted to solve (for x and y) an equation of the form $x^2-dp^2y^2=1$ where d is squarefree and p is a prime. Of course I could simply generate the solutions to the Pell equation $x^2-dy^2=1$ and check if the value of y was divisible by $p,$ but that would be slow. Any better ideas? It would be useful to be able to distinguish cases where the equation is solvable from cases where it is unsolvable, even without finding an explicit solution.
A different and important way to view this equation is still as a norm equation, only in a non-maximal order of the quadratic field - namely, we are looking at the ring $\mathbb{Z}[p\sqrt{d}]$. Since we are still in the realm of algebra, we can prove results algebraically. For example: Proposition. Let $p$ be a prime that doesn't divide $d$, and $e=x+y\sqrt{d}$ be a unit with integer $x,y$. Then $e^{p-(d/p)} \in \mathbb{Z}[p\sqrt{d}]$, where $(d/p)$ is the Legendre symbol. Proof. If the Legendre symbol is $-1$, this means that $(p)$ doesn't split in $O$ (which will denote the maximal order from here on), so that $O/(p)$ is isomorphic to the finite field of size $p^2$. Now every number in $\mathbb{F}_{p^2}$ has order dividing $p^2-1$, so $e^{p+1} \pmod{p}$ has order dividing $p-1$. But the numbers of order dividing $p-1$ are exactly those in the subfield $\mathbb{F}_p$, and that means that $e^{p+1} = z+pw\sqrt{d}$, proving the claim in this case. If $(p)$ splits into $\pi_1 \pi_2$, then $O/(p)$ is isomorphic to $O/\pi_1 \pi_2$ which is in turn, by the Chinese remainder theorem, isomorphic to $\mathbb{F}_p \oplus \mathbb{F}_p$. By inspection we see that the Galois action permutes the two coordinates. So, since under the isomorphism $e^{p-1}\pmod{\pi_1}=e^{p-1}\pmod{\pi_2}=1$, we see that $e^{p-1}\pmod{p}$ is invariant. Hence it is of the form $z+pw\sqrt{d}$, proving the claim in this case as well. $\Box$ The above proposition shows how to get solutions to the non-squarefree Pell equation from solutions of the squarefree one - simply power appropriately.
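A quick empirical check of the proposition (my addition) for $d=2$ and the fundamental unit $e = 3 + 2\sqrt{2}$:

```python
def mul(u, v, d):
    return (u[0] * v[0] + d * u[1] * v[1], u[0] * v[1] + u[1] * v[0])

def power(u, n, d):
    r = (1, 0)
    for _ in range(n):
        r = mul(r, u, d)
    return r

d, e = 2, (3, 2)                          # 3^2 - 2*2^2 = 1
for p in (3, 5, 7, 11, 13, 17, 19, 23):
    legendre = pow(d, (p - 1) // 2, p)    # Euler's criterion: 1 or p - 1
    exponent = p - (1 if legendre == 1 else -1)
    _, y = power(e, exponent, d)
    assert y % p == 0, (p, y)
print("e^(p-(d/p)) lands in Z[p*sqrt(d)] for all tested p")
```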
{ "language": "en", "url": "https://math.stackexchange.com/questions/19177", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12", "answer_count": 2, "answer_id": 0 }
Algorithm to compute Gamma function The question is simple. I would like to implement the Gamma function in my calculator written in C; however, I have not been able to find an easy way to programmatically compute an approximation to arbitrary precision. Is there a good algorithm to compute approximations of the Gamma function? Thanks!
Someone asked a similar question yesterday. I thought of replacing $e^{-t}$ by a series. $$\Gamma (z) = \int_{0}^{\infty} t^{z-1} e^{-t} dt \approx \sum_{j=0}^{a} \frac{(-1)^j b^{j+z}}{(j + z) j !} . \text{Choose } a > b ,$$ but as J. M. points out, I should have checked this a bit better. Take great care in the choice of $a, b$.
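A direct rendering of that truncated series (my addition) to show how it behaves in double precision: with $b=10$ the neglected tail of the integral limits accuracy to a few decimal places, and pushing $b$ much higher runs into cancellation unless extended precision is used.

```python
import math

def gamma_series(z, b=10.0, a=60):
    return sum((-1)**j * b**(j + z) / ((j + z) * math.factorial(j))
               for j in range(a + 1))

for z in (0.5, 2.5, 4.0):
    print(z, gamma_series(z), math.gamma(z))
```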
{ "language": "en", "url": "https://math.stackexchange.com/questions/19236", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "28", "answer_count": 5, "answer_id": 1 }
$ \sum\limits_{i=1}^{p-1} \Bigl( \Bigl\lfloor\frac{2i^{2}}{p}\Bigr\rfloor-2\Bigl\lfloor\frac{i^{2}}{p}\Bigr\rfloor\Bigr)= \frac{p-1}{2}$ I was working out some problems. This is giving me trouble. * *If $p$ is a prime number of the form $4n+1$ then how do I show that: $$ \sum\limits_{i=1}^{p-1} \Biggl( \biggl\lfloor\frac{2i^{2}}{p}\biggr\rfloor-2\biggl\lfloor\frac{i^{2}}{p}\biggr\rfloor\Biggr)= \frac{p-1}{2}$$ Two things which I know are: * *If $p$ is a prime of the form $4n+1$, then $x^{2} \equiv -1 \ (\text{mod} \ p)$ can be solved. *$\lfloor 2x\rfloor-2\lfloor x\rfloor$ is either $0$ or $1$. I think the second one will be of use, but I really can't see how I can apply it here.
Here are some more detailed hints. Consider the value of $\lfloor 2x \rfloor - 2 \lfloor x \rfloor$ where $x=n+ \delta$ for $ n \in \mathbb{Z}$ and $0 \le \delta < 1/2.$ Suppose $p$ is a prime number of the form $4n+1$ and $a$ is a quadratic residue modulo $p$ then why is $(p-a)$ also a quadratic residue? What does this say about the number of quadratic residues $< p/2$ ? All the quadratic residues are congruent to the numbers $$1^2,2^2,\ldots, \left( \frac{p-1}{2} \right)^2,$$ which are themselves all incongruent to each other, so how many times does the set $\lbrace 1^2,2^2,\ldots,(p-1)^2 \rbrace$ run through a complete set of $\it{quadratic}$ residues? Suppose $i^2 \equiv a \textrm{ mod } p$ where $i \in \lbrace 1,2,\ldots,p-1 \rbrace$ and $a$ is a quadratic residue $< p/2$ then what is the value of $$ \left \lfloor \frac{2i^2}{p} \right \rfloor - 2 \left \lfloor \frac{i^2}{p} \right \rfloor \quad \text{?}$$
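The identity itself is easy to test numerically (my addition) before chasing the hints:

```python
from sympy import isprime

for p in (5, 13, 17, 29, 37, 41, 53, 61):
    assert isprime(p) and p % 4 == 1
    total = sum(2 * i * i // p - 2 * (i * i // p) for i in range(1, p))
    assert total == (p - 1) // 2, (p, total)
print("identity holds for all tested primes p = 1 (mod 4)")
```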
{ "language": "en", "url": "https://math.stackexchange.com/questions/19301", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13", "answer_count": 2, "answer_id": 0 }
Suggestions for topics in a public talk about art and mathematics I've been giving a public talk about Art and Mathematics for a few years now as part of my University's outreach program. Audience members are usually well-educated but may not have much knowledge of math. I've been concentrating on explaining Richard Taylor's analysis of Jackson Pollock's work using fractal dimension, but I'm looking to revise the talk, and wonder if anyone has some good ideas about what might go in it. M.C. Escher and Helaman Ferguson's work are some obvious possibilities, but I'd like to hear other ideas. Edit: I'd like to thank the community for their suggestions, and report back that Kaplan and Bosch's TSP art was a real crowd pleaser. The audience was fascinated by the idea that the Mona Lisa picture was traced out by a single intricate line. I also mentioned Tony Robbin and George Hart, which were well-received as well.
I heard Thomas Banchoff give a nice talk about Salvador Dali's work a few years ago. Apparently they were even friends. Here's a link to a lecture by Banchoff on Dali. It looks like Banchoff wrote a paper in Catalan on Dali, too: "La Quarta Dimensio i Salvador Dali," Breu Viatge al mon de la Mathematica, 1 (1984), pp. 19-24. Even with my nonexistent Catalan skills I can translate the title as "The Fourth Dimension and (or "in"?) Salvador Dali." :)
{ "language": "en", "url": "https://math.stackexchange.com/questions/19400", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 6, "answer_id": 4 }
If $A$ is an $n \times n$ matrix such that $A^2=0$, is $A+I_{n}$ invertible? If $A$ is an $n \times n$ matrix such that $A^2=0$, is $A+I_{n}$ invertible? This question yielded two different proofs from my professors, which managed to get conflicting results (true and false). Could you please weigh in and explain what's happening, and offer a working proof? Proof that it is invertible: Consider matrix $A-I_{n}$. Multiplying $(A+I_{n})$ by $(A-I_{n})$ we get $A^2-AI_{n}+AI_{n}-I^2_{n}$. This simplifies to $A^2-I^2_{n}$ which is equal to $-I_{n}$, since $A^2=0$. So, the professor argued, since we have shown that there exists a $B$ such that $(A+I_{n})$ times $B$ is equal to $I$, $(A+I_{n})$ must be invertible. I am afraid, though, that she forgot about the negative sign that was leftover in front of the $I$ -- from what I understand, $(A+I_{n})$*$(A-I_{n})$=$-I$ does not mean that $(A+I_{n})$ is invertible. Proof that it is not invertible: Assume that $A(x)=0$ has a non-trivial solution. Now, given $(A+I_{n})(x)=\vec{0}$, multiply both sides by $A$. We get $A(A+I_{n})(x)=A(\vec{0})$, which can be written as $(A^2+A)(x)=\vec{0}$, which simplifies to $A(x)=0$, as $A^2=0$. Since we assumed that $A(x)=0$ has a non-trivial solution, we just demonstrated that $(A+I_{n})$ has a non-trivial solution, too. Hence, it is not invertible. I am not sure if I reproduced the second proof in complete accuracy (I think I did), but the idea was to show that if $A(x)=\vec{0}$ has a non-trivial solution, $A(A+I_{n})$ does too, rendering $A(A+I_{n})$ non-invertible. But regardless of the proofs, I can think of examples that show that at least in some cases, the statement is true; consider matrices $\begin{bmatrix} 0 & 0\\ 0 & 0 \end{bmatrix}$ and $\begin{bmatrix} 0 & 1\\ 0 & 0 \end{bmatrix}$ which, when added $I_{2}$ to, become invertible. Thanks a lot!
I suggest thinking of the problem in terms of eigenvalues. Try proving the following: If $A$ is an $n \times n$ matrix (over any field) which is nilpotent -- i.e., $A^k = 0$ for some positive integer $k$, then $-1$ is not an eigenvalue of $A$ (or equivalently, $1$ is not an eigenvalue of $-A$). If you can prove this, you can prove a stronger statement and collect bonus points from Arturo Magidin. (Added: Adrian's answer -- which appeared while I was writing mine -- is similar, and probably better: simpler and more general. But I claim it is always a good idea to keep eigenvalues in mind when thinking about matrices!) Added: here's a hint for a solution that has nothing to do with eigenvalues (or, as Adrian rightly points out, really nothing to do with matrices either.) Recall the formula for the sum of an infinite geometric series: $\frac{1}{1-x} = 1 + x + x^2 + \ldots + x^n + \ldots$ As written, this is an analytic statement, so issues of convergence must be considered. (For instance, if $x$ is a real number, we need $|x| < 1$.) But if it happens that some power of $x$ is equal zero, then so are all higher powers and the series is not infinite after all...With only a little work, one can make purely algebraic sense out of this.
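The terminating geometric series is easy to watch in action (my addition): for a strictly upper triangular $3\times3$ matrix, $A^3=0$ and $(I+A)^{-1} = I - A + A^2$ exactly.

```python
import numpy as np

A = np.array([[0., 2., -1.],
              [0., 0.,  3.],
              [0., 0.,  0.]])   # strictly upper triangular, so A^3 = 0
I = np.eye(3)
print(np.allclose(np.linalg.inv(I + A), I - A + A @ A))   # True
```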
{ "language": "en", "url": "https://math.stackexchange.com/questions/19538", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13", "answer_count": 3, "answer_id": 0 }
extension/"globalization" of inverse function theorem I am curious as to what changes do we need to make to the hypotheses of the inverse function theorem in order to be able to find the global differentiable inverse to a differentiable function. We obviously need $f$ to be a bijection, and $f'$ to be non-zero. Is this sufficient for the existence of a global differentiable inverse? For functions $f\colon\mathbb{R}\to\mathbb{R}$, we have Motivation: $f^{-1}(f(x))=x$, so $(f')^{-1}(f(x))f'(x)=1$ Then, we could define $(f')^{^-1}(f(x))$ to be $1/f'(x)$ ( this is the special case of the formula for the differentiable inverse -- when it exists -- in the IFT) (and we are assumming $f'(x)\neq 0$) In the case of $\mathbb{R}^2$, I guess we could think of all the branches of $\log z$ and $\exp z$, and we do have at least a branch-wise global inverse , i.e., if/when $\exp z$ is 1-1 (and it is , of course onto $\mathbb{C}-{0}$), then we have a differentiable inverse. I guess my question would be: once the conditions of the IFT are satisfied: in how big of a neighborhood of $x$ can we define this local diffeomorphism, and, in which case would this neighborhood be the entire domain of definition of $f$? I guess the case for manifolds would be a generalization of the case of $\mathbb{R}^n$, but it seems like we would need for the manifolds to have a single chart. So, are the conditions of f being a bijective, differentiable map sufficient for the existence of a global differentiable inverse? And, if $f$ is differentiable, but not bijective, does the IFT hold in the largest subset of the domain of definition of $f$ where $f$ is a bijection? Thanks.
There is a theorem ("Introduction to Smooth Manifolds," Lee, Thm 7.15) for differentiable manifolds which says that: If $F: M \to N$ is a differentiable bijective map of constant rank, then $F$ is a diffeomorphism -- so in particular $F^{-1}$ is differentiable. Here, the rank of differentiable map $F\colon M \to N$ at a point $p \in M$ is defined to be the rank of its pushforward $F_*\colon T_pM \to T_{F(p)}N$. (Some authors use the word "differential" for "pushforward," and use the notation $d_pF$ for $F_*$.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/19588", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
Filtration in the Serre SS I knew this at one point, and in fact it is embarassing that I have forgotten it. I am wondering what filtartion of the total space of a fibration we use to get the Serre SS. I feel very comfortable with the Serre SS, I am just essentially looking for a one line answer. I checked have Hatcher, Mosher and Tangora and Stricklands note on Spectral Sequences. I think it has something to do with looking at cells where the bundle is trivializable..
Yup. I win the bet :) See J. M. McCleary's User's guide to spectral sequences, Chapter 5.
{ "language": "en", "url": "https://math.stackexchange.com/questions/19645", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Transformation T is... "onto"? I thought you have to say a mapping is onto something... like, you don't say, "the book is on the top of"... Our book starts out by saying "a mapping is said to be onto R^m", but thereafter, it just says "the mapping is onto", without saying onto what. Is that simply the author's version of being too lazy to write the codomain (sorry for saying something negative, but that's what it looks like to me at the moment), or does it have a different meaning?
You do indeed hear these terms in relation to functions. One-to-one means the same as injective. Onto means the same as surjective. One-to-one and onto means bijective. A function can have neither property, just one of them, or both at once (in which case it is bijective, so all three hold). To answer your specific question, onto means each value of the codomain is mapped to by a member of the domain.
{ "language": "en", "url": "https://math.stackexchange.com/questions/19686", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 5, "answer_id": 3 }
Combinatorial interpretations of elementary symmetric polynomials? I have some questions as to some good combinatorial interpretations for the sum of elementary symmetric polynomials. I know that for example, for n =2 we have that: $e_0 = 1$ $e_1 = x_1+x_2$ $e_2 = x_1x_2$ And each of these can clearly been seen as the coefficient of $t^k$ in $(1+x_1t)(1+x_2t)$. Now, in general, what combinatorial interpreations are there for say: $\sum_{i=0}^n e_i(x)$ for some $x = (x_1,...,x_n)$ ?
Here are two specific interesting cases to go with Mariano Suárez-Alvarez's general explanation. On $n$ variables, $e_k(1,1,\ldots,1) = \binom{n}{k}$ and $e_k(1,2,\ldots,n) = \left[ n+1 \atop n-k+1 \right]$, where the latter is a Stirling number of the first kind. (See Comtet's Advanced Combinatorics, pp. 213-214.) So summing the former over $k$ gives $2^n$, and summing the latter over $k$ gives $(n+1)!$.
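Both specializations can be checked from scratch (my addition), computing $e_k$ by brute force and the unsigned Stirling numbers of the first kind by their standard recurrence:

```python
from itertools import combinations
from math import comb, prod

def e(k, values):                 # elementary symmetric polynomial e_k
    return sum(prod(c) for c in combinations(values, k))

def stirling1(n, k):              # unsigned Stirling numbers, first kind
    if k < 0 or k > n:
        return 0
    if n == 0:
        return 1
    return stirling1(n - 1, k - 1) + (n - 1) * stirling1(n - 1, k)

n = 6
assert all(e(k, [1] * n) == comb(n, k) for k in range(n + 1))
assert all(e(k, range(1, n + 1)) == stirling1(n + 1, n + 1 - k) for k in range(n + 1))
print("both specializations verified for n =", n)
```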
{ "language": "en", "url": "https://math.stackexchange.com/questions/19734", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 2 }
Does this expression represent the largest real number? I'm not very good at this, so hopefully I'm not making a silly mistake here... Assuming that $\infty$ is larger than any real number, we can then assume that: $\dfrac{1}{\infty}$ is the smallest possible positive real number. It then stands to reason that anything less than $\infty$ is a real number. Therefore, if we take the smallest possible quantity from $\infty$, we end up with: $\infty-\dfrac{1}{\infty}$ Does that expression represent the largest real number? If not, what am I doing wrong?
Since $\infty$ is not a real number, you cannot assume that $\dfrac{1}{\infty}$ is a meaningful statement. It is not a real number. You might want to investigate non-standard, hyperreal and surreal numbers and infinitesimals.
{ "language": "en", "url": "https://math.stackexchange.com/questions/19790", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 2 }
Finding number of roots of a polynomial in the unit disk I would like to know how to find the number of (complex) roots of the polynomal $f(z) = z^4+3z^2+z+1$ inside the unit disk. The usual way to solve such a problem, via Rouché's theorem does not work, at least not in an "obvious way". Any ideas? Thanks! edit: here is a rough idea I had: For any $\epsilon >0$, let $f_{\epsilon}(z) = z^4+3z^2+z+1-\epsilon$. By Rouché's theorem, for each such $\epsilon$, $f_{\epsilon}$ has exactly 2 roots inside the unit disc. Hence, by continuity, it follows that $f$ has 2 roots on the closed unit disc, so it remains to determine what happens on the boundary. Is this reasoning correct? what can be said about the boundary?
This one is slightly tricky, but you can apply Rouché directly. Let $g(z) = 3z^2 + 1$. Note that $|g(z)| \geq 2$ for $|z| = 1$ with equality only for $z = \pm i$ (because $g$ maps the unit circle onto the circle with radius $3$ centered at $1$). On the other hand for all $|z| = 1$ we have the estimate $h(z) = |f(z) - g(z)| = |z(z^3 + 1)| \leq 2$ and for $z = \pm i$ we have $h(\pm i) = \sqrt{2} < 2 \leq |g(\pm i)|$. Therefore $|f(z) - g(z)| < |g(z)|$ for all $|z| = 1$ and thus Rouché can be applied.
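And numerically (my addition), NumPy agrees that exactly two of the four roots lie in the open unit disk:

```python
import numpy as np

roots = np.roots([1, 0, 3, 1, 1])       # z^4 + 3z^2 + z + 1
print(roots)
print(sum(abs(r) < 1 for r in roots))   # 2
```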
{ "language": "en", "url": "https://math.stackexchange.com/questions/19847", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
How to deduce trigonometric formulae like $2 \cos(\theta)^{2}=\cos(2\theta) +1$? Very important in integrating things like $\int \cos^{2}(\theta) d\theta$ but it is hard for me to remember them. So how do you deduce this type of formulae? If I can remember right, there was some $e^{\theta i}=\cos(\theta)+i \sin(\theta)$ trick where you took $e^{2 i \theta}$ and $e^{-2 i \theta}$. While I am drafting, I want your ways to remember/deduce things (hopefully fast). [About replies] * *About TPV's suggestion, how do you attack it geometrically?? $\cos^{2}(x) - \sin^{2}(x)=\cos(2x)$ plus $2\sin^{2}(x)$, then $\cos^{2}(x)+\sin^{2}(x)=\cos(2x)+2\sin^{2}(x)$ and now solve homogenous equations such that LHS=A and RHS=B, where $A\in\mathbb{R}$ and $B\in\mathbb{R}$. What can we deduce from their solutions?
In my experience, almost all trigonometric identities can be obtained by knowing a few values of $\sin x$ and $\cos x$, that $\sin x$ is odd and $\cos x$ is even, and the addition formulas: \begin{align*} \sin(\alpha+\beta) = \sin\alpha\cos\beta + \cos\alpha\sin\beta,\\ \cos(\alpha+\beta) = \cos\alpha\cos\beta - \sin\alpha\sin\beta. \end{align*} For example, to obtain the classic $\sin^2x + \cos^2x = 1$, simply set $\beta=-\alpha$ in the formula for the cosine, and use the facts that $\cos(0)=1$ and that $\sin(-a)=-\sin(a)$ for all $a$. For the one you have, we use the formula for the cosine with $\alpha=\beta=\theta$ to get $$\cos(2\theta) = \cos^2\theta - \sin^2\theta.$$ Then using $\sin^2 \theta+\cos^2\theta=1$, so $-\sin^2\theta = \cos^2\theta-1$ gives $$\cos(2\theta) = \cos^2\theta +\cos^2\theta - 1 = 2\cos^2\theta - 1.$$ If you know the basic values (at $\theta=0$, $\frac{\pi}{6}$, $\frac{\pi}{4}$, $\frac{\pi}{3}$, $\frac{\pi}{2}$, $\pi$, $\frac{3\pi}{2}$), parity, and the addition formulas, you can get almost any of the formulas.
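SymPy can replay the double-angle derivation in one step (my addition):

```python
import sympy as sp

th = sp.symbols('theta')
print(sp.expand_trig(sp.cos(2 * th)))   # 2*cos(theta)**2 - 1
```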
{ "language": "en", "url": "https://math.stackexchange.com/questions/19876", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 7, "answer_id": 0 }
If $b$ is the largest square divisor of $a$ and $a^2|c$ then $a|b$? I think this is false, a counter example could be: $c = 100,$ $b = 10,$ $a = 5$ But the book answer is true :( ! Did I misunderstand the problem or the book's answer was wrong? Thanks, Chan
With or without your edit, $b$ does not divide $a$. I suspect the question you want is: if $b$ is the largest square divisor of $c$ (not $a$) and $a^2 \mid c$, then does $a$ divide $b$? Then the answer would be true.
{ "language": "en", "url": "https://math.stackexchange.com/questions/19979", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Special arrows for notation of morphisms I've stumbled upon the definition of exact sequence, particularly on Wikipedia, and noted the use of $\hookrightarrow$ to denote a monomorphism and $\twoheadrightarrow$ for epimorphisms. I was wondering whether this notation was widely used, or if it is common to define a morphism in the general form and indicate its characteristics explicitly (e.g. "an epimorphism $f \colon X \to Y$"). Also, if epimorphisms and monomorphisms have their own special arrows, are isomorphisms notated by a special symbol as well, maybe a juxtaposition of $\hookrightarrow$ and $\twoheadrightarrow$? Finally, are there other kinds of morphisms (or more generally, relations) that are usually notated by different arrows depending on the type of morphism, particularly in the context of category theory? Thanks.
Some people use those notations, some don't. Using $\hookrightarrow$ to mean that the map is a mono is not a great idea, in my opinion, and I much prefer $\rightarrowtail$ and use the former only to denote inclusions. Even when using that notation, I would say things like «consider the epimorphism $f:X\twoheadrightarrow Y$». In some contexts (for example, when dealing with exact categories) one uses $\rightarrowtail$ and $\twoheadrightarrow$ to denote that the map is not only a mono or an epi, but that it has certain special properties (for example, that it is a split mono, a cofibration, or what not). Denoting isomorphisms by mixing $\twoheadrightarrow$ and $\rightarrowtail$ is something I don't recall seeing. You will find that there are no rules on notation, and that everyone uses pretty much whatever they like; the only important thing is that when you use what you like you make it clear to the reader.
{ "language": "en", "url": "https://math.stackexchange.com/questions/20015", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "23", "answer_count": 4, "answer_id": 1 }
Probability of "clock patience" going out Here is a question I've often wondered about, but have never figured out a satisfactory answer for. Here are the rules for the solitaire game "clock patience." Deal out 12 piles of 4 cards each with an extra 4 card draw pile. (From a standard 52 card deck.) Turn over the first card in the draw pile, and place it under the pile corresponding to that card's number 1-12 interpreted as Ace through Queen. Whenever you get a king you place that on the side and draw another card from the pile. The game goes out if you turn over every card in the 12 piles, and the game ends if you get four kings before this happens. My question is what is the probability that this game goes out? One thought I had is that the answer could be one in thirteen, the chances that the last card of a 52 card sequence is a king. Although this seems plausible, I doubt it's correct, mainly because I've played the game probably dozens of times since I was a kid, and have never gone out! Any light that people could shed on this problem would be appreciated!
The name clock patience (clock solitaire in the US) is appropriate, not just because of the shape of the layout but because the game is about cycles in the permutation. As you start with the King pile, a cycle ends when you find a King. If four cycles (one for each King) include all 52 cards, you win. You lose if the bottom card on any non-King pile matches its position, as that would be a one-card cycle in the permutation. You also lose if, for example, the bottom card in the Ace pile is a Two and the bottom card in the Two pile is an Ace. I'm trying to figure out the impact of the fact that suits are ignored. Maybe you can give each card its particular destination (always put the ace of spades on top, for example) and ask for a single cycle of the 52 cards. In that case, the probability would be 1/52. To make a single cycle, the first card cannot go to itself (51/52). The card the first card goes to cannot go back to the first (50/51). Then the next card in the chain cannot go back to the first (49/50), and so on.
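If you want to test any of this empirically, here is a minimal Monte Carlo sketch of the game as stated in the question (ranks are encoded 0–12, with 12 standing for the King; the helper name and pile encoding are my own choices):

```python
import random

def plays_out(deck):
    # deck: a shuffled list of 52 ranks 0..12, four of each
    piles = [deck[4 * i:4 * i + 4] for i in range(13)]  # pile 12 is the King/draw pile
    turned = 0
    card = piles[12].pop()
    while True:
        turned += 1
        if card == 12:                 # a King: set it aside, draw from the King pile
            if not piles[12]:
                return turned == 52    # game over; did we turn every card?
            card = piles[12].pop()
        else:
            card = piles[card].pop()   # place under pile `card`, turn its top card

deck = [r for r in range(13) for _ in range(4)]
trials = 100_000
wins = sum(plays_out(random.sample(deck, 52)) for _ in range(trials))
print(wins / trials)                   # comes out near 1/13 ~ 0.0769, the OP's guess
```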
{ "language": "en", "url": "https://math.stackexchange.com/questions/20087", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 4, "answer_id": 2 }
Best way to exactly solve a linear system (300x300) with integer coefficients I want to solve a system of linear equations of size 300x300 (300 variables & 300 equations) exactly (not with floating point, aka dgesl, but with fractions in the answer). All coefficients in this system are integers (e.g. 32 bit), and there is only 1 solution to it. There are non-zero constant terms in the right column (b). A*x = b * *where A is the matrix of coefficients, b is the vector of constant terms. *the answer is the x vector, given in rational numbers (fractions of pairs of very long integers). The matrix A is dense (general case), but it can have up to 60 % zero coefficients. What is the best way (fastest algorithm for x86/x86_64) to solve such a system? Thanks! PS: typical answers of such systems have integers in the fractions up to 50-80 digits long, so please don't suggest anything based only on floats/doubles. They don't have the needed precision.
This kind of thing can be done by symbolic computation packages like Maple, Mathematica, Maxima, etc. If you need a subroutine to do this that you'll call from a larger program, then the answer will depend a lot on the programming language that you're using and what kinds of licensing you're willing to work with.
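As one concrete option, SymPy in Python does exact rational linear algebra in a few lines (slow for a dense 300x300 system, but exact; the `randMatrix` call below is just a stand-in for however you load your actual coefficients):

```python
from sympy import randMatrix

n = 300                                  # try a smaller n first; 300 takes a while
A = randMatrix(n, n, min=-100, max=100)  # placeholder integer coefficient matrix
b = randMatrix(n, 1, min=-100, max=100)  # placeholder right-hand side
x = A.solve(b)                           # exact solution with Rational entries
print(x[0])                              # a fraction with very long numerator/denominator
```

For serious speed on exact integer systems, fraction-free (Bareiss) elimination or p-adic lifting are the usual techniques; the sketch above just shows the idea.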
{ "language": "en", "url": "https://math.stackexchange.com/questions/20102", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 7, "answer_id": 1 }
Intuition for the Yang-Baxter Equation (was: Giving relations via formal power series) I'm reading a book (Yangians and Classical Lie Algebras by Molev) which regularly uses (what appear to me to be) clever tricks with formal power series to encapsulate lots of relations. For instance, if we let $S_n$ act on $(\mathbb{C}^N)^{\otimes n}$ by permuting tensor components, so that e.g. $P_{(1 2)} (a \otimes b \otimes c) = b \otimes a \otimes c$, then working in $({\rm End} \mathbb{C}^N)^{\otimes 3}(u, v)$ we have the identity $\left(u - P_{(1 2)}\right)\left(u + v - P_{(1 3)}\right)\left(v - P_{(2 3)}\right) = \left(v - P_{(2 3)}\right)\left(u + v - P_{(1 3)}\right)\left(u - P_{(1 2)}\right)$ This is used to motivate the definition of an operator $R_{(j k)}(u) = 1 - P_{(j k)} u^{-1}$, the Yang R-matrix, which is then used to express an enormous family of relations on an algebra by multiplying by a matrix of formal power series. Of course it's straightforward to verify that the above expression holds if we multiply out the terms. That said, it seems considerably less straightforward to me how one would start from $S_3$ and end up at the equation above. Is this just a marvelous ad-hoc construction, or does it belong to some class of examples?
I share the thoughts above on the Yang-Baxter equation. My viewpoint on this is perhaps more algebraic: in light of quantum group theory, the Yang R-matrix, as well as its trigonometric and elliptic counterparts, are indeed somewhat miraculous objects. Since no classification of solutions of the Yang-Baxter equation is known, it is not clear how to put these examples in perspective. The R-matrix form of the defining relations does bring new tools for working with these algebras. As pointed out above, the whole variety of relations is written as a single matrix relation. This is a starting point for special matrix techniques.
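For what it's worth, the identity quoted in the question can at least be spot-checked numerically, say for $N = 2$ (a sanity check, not a proof):

```python
import numpy as np

I2 = np.eye(2)
S = np.array([[1, 0, 0, 0],
              [0, 0, 1, 0],
              [0, 1, 0, 0],
              [0, 0, 0, 1]])        # swap operator on C^2 (x) C^2

P12 = np.kron(S, I2)                # permutation operators on the triple tensor product
P23 = np.kron(I2, S)
P13 = P12 @ P23 @ P12               # conjugation: (1 2)(2 3)(1 2) = (1 3)

I8 = np.eye(8)
u, v = 0.7, -1.3                    # arbitrary test values
lhs = (u * I8 - P12) @ ((u + v) * I8 - P13) @ (v * I8 - P23)
rhs = (v * I8 - P23) @ ((u + v) * I8 - P13) @ (u * I8 - P12)
print(np.allclose(lhs, rhs))        # True
```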
{ "language": "en", "url": "https://math.stackexchange.com/questions/20147", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13", "answer_count": 1, "answer_id": 0 }
Probability of Sum of Different Sized Dice I am working on a project that needs to be able to calculate the probability of rolling a given value $k$ given a set of dice, not necessarily all the same size. So for instance, what is the distribution of rolls of a D2 and a D6? An equivalent question, if this is any easier, is how can you take the mass function of one dice and combine it with the mass function of a second dice to calculate the mass function for the sum of their rolls? Up to this point I have been using the combinatorics function at the bottom of the probability section of Wikipedia's article on dice, however I cannot see how to generalize this to different sized dice.
Suppose we have dice $Da$ and $Db$, with $a \le b$. Then there are three cases: * *If $2 \le n \le a$, the probability of throwing $n$ is $\frac{n-1}{ab}$. *If $a+1 \le n \le b$, the probability of throwing $n$ is $\frac{1}{b}$. *If $b+1 \le n \le a+b$, the probability of throwing $n$ is $\frac{a+b+1-n}{ab}$.
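If you would rather compute the whole distribution than apply the case formula, the mass function of the sum is the convolution of the individual mass functions, which answers the generalized question directly; a sketch (the function name is my own):

```python
from collections import Counter

def dice_sum_pmf(*sides):
    # Convolve the uniform mass functions of dice with the given numbers of sides.
    pmf = {0: 1.0}
    for s in sides:
        new = Counter()
        for total, p in pmf.items():
            for face in range(1, s + 1):
                new[total + face] += p / s
        pmf = new
    return dict(sorted(pmf.items()))

print(dice_sum_pmf(2, 6))   # D2 + D6: P(2) = P(8) = 1/12, P(3), ..., P(7) = 1/6 each
```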
{ "language": "en", "url": "https://math.stackexchange.com/questions/20195", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
Integrals as Probabilities Firstly, I'm not a mathematician as will become evident in a quick moment. I was pondering some maths the other day and had an interesting thought: If you encased an integrable function over some range in a primitive with an easily computable area, the probability that a random point within said primitive also exists below that function's curve, scaled by the area of the primitive, is the definite integral of the function over that domain. So let's say I want to "solve" for $\pi$. Exploiting a circle's symmetry, I can define $\pi$ as: $$4 \int_{0}^{1}\sqrt{1-x^2} \,dx$$ Which I can "encase" in the unit square. Since the area of the unit square is 1, $\pi$ is just 4 * the probability that a point chosen at random within the unit square is below the quarter-circle's arc. I'm sure this is well known, and so my questions are: * *What is this called? *Is there anything significant about this--for instance, is the relationship between the integral and the encasing object of interest--or is it just another way of phrasing definite integrals? Sorry if this is painfully elementary!
No, this is a very good observation! It is the basis of the modern definition of probability, where all probabilities are essentially defined as integrals. Your particular observation about $\pi$ being closely related to the probability that a point lands in a circle is also very good, and actually leads to a probabilistic algorithm to compute $\pi$ (an example of a Monte Carlo method). The subject in which probabilities are studied as integrals is, broadly speaking, called measure theory. Monte Carlo methods are also used to numerically compute other integrals; this is called Monte Carlo integration. Now that you have discovered this wonderful fact, here are some interesting exercises. I recommend that you try to draw the relevant regions when $n = 2, 3$ before tackling the general case. * *Choose $n$ numbers randomly in the interval $[0, 1]$. What is the probability that the first number you chose is the biggest one? *Choose $n$ numbers randomly in the interval $[0, 1]$. What is the probability that they are in decreasing order? *Choose $n$ numbers randomly in the interval $[0, 1]$. What is the probability that their sum is less than $1$?
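Here is the $\pi$ example as a few lines of Python, in case you want to see the Monte Carlo method in action:

```python
import random

n = 1_000_000
inside = sum(random.random()**2 + random.random()**2 <= 1 for _ in range(n))
print(4 * inside / n)   # converges (slowly) to pi as n grows
```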
{ "language": "en", "url": "https://math.stackexchange.com/questions/20251", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 2, "answer_id": 0 }
How to know that $a^3+b^3 = (a+b)(a^2-ab+b^2)$ Is there a way of go from $a^3+b^3$ to $(a+b)(a^2-ab+b^2)$ other than know the property by heart?
Keeping $a$ fixed and treating $a^3 + b^3$ as a polynomial in $b$, you should immediately notice that $-a$ will be a root of that polynomial. This tells you that you can divide it by $a + b$. Then you apply long division as mentioned in one of the comments to get the other factor.
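And once you have the candidate factorization, multiplying out confirms it: $$(a+b)(a^2-ab+b^2) = a^3 - a^2b + ab^2 + a^2b - ab^2 + b^3 = a^3 + b^3.$$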
{ "language": "en", "url": "https://math.stackexchange.com/questions/20301", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 5, "answer_id": 0 }
Converting recursive function to closed form My professor gave us a puzzle problem that we discussed in class that I could elaborate on if requested. But I interpreted the puzzle and formed a recursive function to model it, which is as follows: $$f(n) = \frac{n f(n-1)}{n - 1} + .01 \hspace{1cm} \textrm{where } f(1) = .01\text{ and } n\in\mathbb{N}.$$ The question that is asked is when (if ever) does $f(n) = 1000n$. About half of the students concluded that equality is eventually reached (they didn't have the formula I made) and that $n$ would be near infinity. My personal question is: can the function be reduced so that it isn't recursive? And so that it doesn't need to be solved by a brute-force computer algorithm (which would be about 3 lines of code)?
If you define $g(n) = \frac{f(n)}{n}$ then your recursion becomes $g(n) = g(n-1) + \frac{0.01}{n}$ with $g(1) = 0.01$. So $g(n) = 0.01\sum_{i=1}^n \frac{1}{i} = 0.01 h(n)$ where $h(n)$ is the $n^{\text{th}}$ harmonic number. Therefore $f(n) = 0.01\,n \cdot h(n)$. I think this is about as closed of a form as you're going to get. Since the sequence $h(n)$ diverges, you will eventually get $h(n)\geq 100000$, so $g(n) \geq 1000$, which means $f(n)\geq 1000n$.
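A quick numerical check of the closed form against the original recursion (floating point, so expect only tiny rounding differences; the function names are my own):

```python
def f_rec(n):
    # the recursion from the question
    return 0.01 if n == 1 else n * f_rec(n - 1) / (n - 1) + 0.01

def h(n):
    # the n-th harmonic number
    return sum(1.0 / i for i in range(1, n + 1))

print(f_rec(50), 0.01 * 50 * h(50))   # the two values agree
```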
{ "language": "en", "url": "https://math.stackexchange.com/questions/20349", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
How to find the sum of the following series How can I find the sum of the following series? $$ \sum_{n=0}^{+\infty}\frac{n^2}{2^n} $$ I know that it converges, and Wolfram Alpha tells me that its sum is 6 . Which technique should I use to prove that the sum is 6?
It is equal to $f(x)=\sum_{n \geq 0} n^2 x^n$ evaluated at $x=1/2$. To compute this function of $x$, write $n^2 = (n+1)(n+2)-3(n+1)+1$, so that $f(x)=a(x)-b(x)+c(x)$ with: $a(x)= \sum_{n \geq 0} (n+1)(n+2) x^n = \frac{d^2}{dx^2} \left( \sum_{n \geq 0} x^n\right) = \frac{2}{(1-x)^3}$ $b(x)=\sum_{n \geq 0} 3 (n+1) x^n = 3\frac{d}{dx} \left( \sum_{n \geq 0} x^n \right) = \frac{3}{(1-x)^2}$ $c(x)= \sum_{n \geq 0} x^n = \frac{1}{1-x}$ So $f(1/2)=\frac{2}{(1/2)^3}-\frac{3}{(1/2)^2} + \frac{1}{1/2} = 16-12+2=6$. The "technique" is to add a parameter in the series, to make the multiplication by $n+1$ appear as differentiation.
{ "language": "en", "url": "https://math.stackexchange.com/questions/20418", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 2 }
a coin toss probability question I keep tossing a fair coin until I see HTHT appears. What is the probability that I have already observed the sequence TTH before I stop? Edit: Okay. This problem is the same as before. I am trying to think of the following. Could anyone please tell me if this is also the same as before, and if not, how to solve it. Two players A and B toss two different fair coins. A tosses his coin until he sees HTHT appearing for the first time, and records the number of tosses, $X$. B tosses his coin until he sees TTH appearing for the first time, and records the number of tosses made, $Y$. What is the probability that $Y<X$?
By the standard techniques explained here in a similar context, one can show that the first time $T_A$ when the word A=HTHT is produced has generating function $$ E(s^{T_A})=\frac{s^4}{a(s)},\qquad a(s)=(2-s)(8-4s-s^3)-2s^3, $$ and that the first time $T_B$ when the word B=TTH is produced has generating function $$ E(s^{T_B})=\frac{s^3}{b(s)},\qquad b(s)=(2-s)(4-2s-s^2), $$ The next step would be to develop these as series of powers of $s$, getting $$ E(s^{T_A})=\sum_{n\ge4}p_A(n)s^n,\quad E(s^{T_B})=\sum_{k\ge3}p_B(k)s^k, $$ and finally, to compute the sum $$ P(T_B<T_A)=\sum_{k\ge3}\sum_{n\ge k+1}p_B(k)p_A(n). $$ An alternative solution is to consider the Markov chain on the space of the couples of prefixes of the words A and B and to solve directly the associated first hitting time problem for this Markov chain, as explained, once again, here. (+1 to Arturo's and Mike's comments.) Added later Introduce the decomposition into simple elements $$ \frac1{(1-s)b(s)}=\sum_{i=1}^4\frac{c_i}{1-s\gamma_i}. $$ Then, one can show that $$ P(T_B<T_A)=\sum_{i=1}^4\frac{c_i}{a(\gamma_i)}, $$ by decomposing the rational fractions $1/b(s)$ and $1/a(s)$ into simple elements, by relying on the fact that all their poles are simple, and by using some elementary algebraic manipulations. At this point, I feel it is my duty to warn the OP that to ask question after question on math.SE or elsewhere along the lines of this post is somewhat pointless. Much more useful would be to learn once and for all the basics of the theory of finite Markov chains, for instance by delving into the quite congenial Markov chains by James Norris, and, to learn to manipulate power series, into the magnificent generatingfunctionology by Herbert Wilf.
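In the meantime, one can at least estimate $P(T_B < T_A)$ by simulation as a sanity check on whatever closed form one derives (a rough sketch; `first_hit` is a helper name of my own choosing):

```python
import random

def first_hit(word):
    # number of fair coin tosses until `word` first appears
    seq = ""
    while True:
        seq += random.choice("HT")
        if seq.endswith(word):
            return len(seq)

trials = 200_000
wins = sum(first_hit("TTH") < first_hit("HTHT") for _ in range(trials))
print(wins / trials)   # Monte Carlo estimate of P(Y < X)
```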
{ "language": "en", "url": "https://math.stackexchange.com/questions/20468", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
How to map discrete numbers into a fixed domain? I have several numbers, say 3, 1, 4, 8, 5, and I want them to be mapped into a fixed domain [0.5, 3]. In this case, 1 should be mapped to 0.5 and 8 to 3. Then the rest of the numbers should be scaled to their corresponding values. So, my question is: what should I do to deal with this case? What's the name of this processing? Thanks, Mike
Suppose you have $I=[a,b]$ and you want to map it to $J=[c,d]$ such that there is a constant C and for every $x,y \in I$ and their images $f(x),f(y)\in J$ you have $(x-y)C = (f(x)-f(y))$ (I hope this is what you mean by "scaled down..."). You also want that $a\mapsto c$ and $b\mapsto d$ (in your case $1\mapsto 0.5,\; 8 \mapsto 3$), then you see that the constant is $\frac{c-d}{a-b}$. Now, for every other $x\in I$ you have (with y=a) $$ f(x) = C(x-a) + f(a) = Cx +(f(a)-Ca) $$
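In code, this kind of affine rescaling (often called min–max normalization) is a one-liner per value; a sketch with the numbers from the question (the function name is my own):

```python
def rescale(xs, c, d):
    # map min(xs) -> c and max(xs) -> d linearly; assumes max(xs) > min(xs)
    a, b = min(xs), max(xs)
    C = (d - c) / (b - a)
    return [c + C * (x - a) for x in xs]

print(rescale([3, 1, 4, 8, 5], 0.5, 3))   # 1 -> 0.5 and 8 -> 3.0, rest in between
```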
{ "language": "en", "url": "https://math.stackexchange.com/questions/20511", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Are there any series whose convergence is unknown? Are there any infinite series about which we don't know whether it converges or not? Or are the convergence tests exhaustive, so that in the hands of a competent mathematician any series will eventually be shown to converge or diverge? EDIT: People were kind enough to point out that without imposing restrictions on the terms it's trivial to find such "open problem" sequences. So, to clarify, what I had in mind were sequences whose terms are composed of "simple" functions, the kind you would find in an introductory calculus text: exponential, factorial, etc.
As kind of a joke answer, but technically correct, and motivated by Chandru's deleted reply, $$\sum_{n=0}^\infty \sin(2\pi n!\,x)$$ where $x$ is the Euler-Mascheroni constant, or almost any other number whose rationality has not been settled. (If $x$ is rational, the series converges. The implication does not go the other way.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/20555", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "191", "answer_count": 8, "answer_id": 3 }
Motivating differentiable manifolds I'm giving lectures for the first time starting next week ^_^ The subject is calculus on manifolds. I was told that the students I'll be lecturing are not motivated (at all), so I need to kick off the series with an impressive demonstration of what a cool subject it will be, which I'll be able to periodically return to later on when I need to motivate specific concepts. So far I came up with: 1) Projective plane; I'll demonstrate how conics morph into each other on different models, which should be cool enough and motivate general-topological manifolds. 2) Plücker coordinates and their applications to line geometry and computer graphics (not sure if it will be easy to demonstrate), which should motivate forms. 3) Expanding Universe, motion in relativity and perception of time, which should motivate tangent vectors. I'm not sure they will work, and I could always use more ideas. I would greatly appreciate examples and general advice too. P.S.: Please, wiki-hammer this question.
Personally I have always loved to link maths and physics. For instance, why not introduce hysteresis? Some hysteresis phenomena can be modelled using manifolds, I believe.
{ "language": "en", "url": "https://math.stackexchange.com/questions/20619", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 2 }
Maximize the product of two integer variables with given sum Let's say we have a value $K$. I want to find integers $A$ and $B$ with $A+B = K$ such that $AB$ is the highest possible value. I've found out that it's: * *$a = K/2$; $b = K/2$; when $K$ is even *$a = (K-1)/2$; $b = ((K-1)/2)+1$; when $K$ is odd. But is there a way to prove this?
We know that $B = K-A$. So you want to maximize $AB = A(K-A)$. So we have: $$AB = A(K-A)$$ $$ = AK-A^2$$ Find the critical points and use some derivative tests.
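Carrying that out: $$\frac{d}{dA}\left(AK - A^2\right) = K - 2A = 0 \iff A = \frac{K}{2},$$ and since $AK - A^2$ is a downward-opening parabola, this critical point is the maximum. For integer $A$ you take the nearest integer(s) to $K/2$: $A = B = K/2$ when $K$ is even, and $A = (K-1)/2$, $B = (K+1)/2$ when $K$ is odd, exactly as you found.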
{ "language": "en", "url": "https://math.stackexchange.com/questions/20667", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
$ \mathbb{C} $ is not isomorphic to the endomorphism ring of its additive group Let $M$ denote the additive group of $ \mathbb{C} $. Why is $ \mathbb{C}$, as a ring, not isomorphic to $\mathrm{End}(M)$, where addition is defined pointwise, and multiplication as endomorphisms composition? Thanks!
Method 1: $\operatorname{End}(M)$ is not an integral domain. Method 2: Count idempotents. There are only 2 idempotent elements of $\mathbb{C}$, but lots in $\operatorname{End}(M)$, including for example the real and imaginary projections along with the identity and zero maps. Method 3: Cardinality. Assuming a Hamel basis for $\mathbb{C}$ as a $\mathbb{Q}$ vector space, there are $2^\mathfrak{c}$ $\mathbb{Q}$ vector space endomorphisms of $\mathbb{C}$, but only $\mathfrak{c}$ elements of $\mathbb{C}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/20730", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 2, "answer_id": 1 }
Connected components of a fiber product of schemes The underlying set of the product $X \times Y$ of two schemes is by no means the set-theoretic product of the underlying sets of $X$ and $Y$. Although I am happy with the abstract definition of fiber products of schemes, I'm not confident with some very basic questions one might ask. One I am specifically thinking about has to do with the connected components of a fiber product. Say $X$ and $Y$ are schemes over $S$ and let's suppose that $Y$ is connected and that $X = \coprod_{i \in I} X_i$ is the decomposition of $X$ into connected components. Is the connected component decomposition of $X \times_S Y$ simply $\coprod_{i \in I} X_i \times_S Y$? If so, how can we see this? What about if we replace "connected" with "irreducible". Thank you for your consideration.
"No" in both cases: let $f\in K[X]$ be a separable irreducible polynomial of degree $d>1$ over some field $K$. Let $L$ be the splitting field of $f$ over $K$. Let $X:=\mathrm{Spec} (K[X]/fK[X])$, $Y:=\mathrm{Spec}(L)$ and $S:=\mathrm{Spec}(K)$. Then $X$ is irreducible and $X\times_K Y=\mathrm{Spec}(L[X]/fL[X])$ consists of $d$ points, which are the irreducible components of $X\times_K Y$. Moreover these points are also the connected components of $X\times_K Y$. H
{ "language": "en", "url": "https://math.stackexchange.com/questions/20784", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Graph for $f(x)=\sin x\cos x$ Okay, so in my math homework I'm supposed to draw a graph of the following function: $$f(x)=\sin x \cos x.$$ I have the solution in the textbook, but I just can't figure out how they got to it. So, if someone could please post a (slightly more detailed) explanation for this, it would be really appreciated. I have to turn the homework in this Wednesday, but since I already have the solution the answer doesn't have to be really swift. Bonus points if it is, though.
Here is a hint: Can you draw the graph of $\sin x$? What about $a\sin{(bx)}$? Then recall the identity $$\sin{x} \cos{x} = \frac{1}{2} \sin {2x}.$$ Maybe that helps. Edit: This is just one way; there are many. As asked in the comments above, how are you supposed to solve it? What tools do you have at your disposal? What type of things have you been taught so far?
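Once you have rewritten the function, plotting the two sides of the identity on top of each other makes the picture clear; for instance (assuming matplotlib is available):

```python
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0, 2 * np.pi, 400)
plt.plot(x, np.sin(x) * np.cos(x), label="sin(x) cos(x)")
plt.plot(x, 0.5 * np.sin(2 * x), "--", label="(1/2) sin(2x)")  # the same curve
plt.legend()
plt.show()
```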
{ "language": "en", "url": "https://math.stackexchange.com/questions/20858", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Uniform Boundedness/Hahn-Banach Question Let $X=C(S)$ where $S$ is compact. Suppose $T\subset S$ is a closed subset such that for every $g\in C(T),$ there is an $f\in C(S)$ such that: $f\mid_{T}=g$. Show that there exists a constant $C>0$ such that every $g\in C(T)$ can be continuously extended to $f\in C(S)$ such that: $\sup_{x\in S}\left|f(x)\right|\leq C\sup_{y\in T}\left|g(y)\right|$
$C(S) \to C(T)$ is a surjective bounded linear map of Banach spaces (with sup norms), so, taking $M \subset C(S)$ to be its kernel (a closed linear subspace), the induced map $C(S)/M \to C(T)$ is bijective and bounded with the quotient norm. The inverse mapping theorem says that the inverse is a bounded linear map. The statement then follows. By the way, how is this related to Banach-Steinhaus/Hahn-Banach?
{ "language": "en", "url": "https://math.stackexchange.com/questions/20909", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Showing $\frac{e^{ax}(b^2\cos(bx)+a^2\cos(bx))}{a^2+b^2}=e^{ax}\cos(bx).$ I've got: $$\frac{e^{ax}(b^2\cos(bx)+a^2\cos(bx))}{a^2+b^2}.$$ Could someone show me how it simplifies to: $e^{ax} \cos(bx)$? It looks like the denominator is canceled by the terms that are being added, but then how do I get rid of one of the cosines?
You use the distributive law, which says that $(X+Y)\cdot Z=(X\cdot Z)+(Y\cdot Z)$ for any $X$, $Y$, and $Z$. In your case, we have $X=b^2$, $Y=a^2$, and $Z=\cos(bx)$, and so $$\frac{e^{ax}(b^2\cos(bx)+a^2\cos(bx))}{a^2+b^2}=\frac{e^{ax}((b^2+a^2)\cos(bx))}{(a^2+b^2)}=\frac{e^{ax}((a^2+b^2)\cos(bx))}{(a^2+b^2)}=e^{ax}\cos(bx).$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/20957", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Equation for $y'$ from $\frac{y'}{ [1+(y')^2]^{1/2}} = c$ In a book there is a derivation for $y'$ that comes from $$\frac{y'}{[1+(y')^2]^{1/2}} = c,$$ where $c$ is a constant. The result they had was $$y' = \sqrt{\frac{c^2}{1-c^2}}.$$ How did they get this? I tried expanding the square and other tricks, but I can't seem to get their result.
Square both sides, solve for ${y^{\prime}}^2$ and then take the square root.
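Spelled out: $$c^2 = \frac{y'^2}{1 + y'^2} \implies c^2\left(1 + y'^2\right) = y'^2 \implies y'^2\left(1 - c^2\right) = c^2 \implies y' = \sqrt{\frac{c^2}{1-c^2}}.$$ Note that the original equation forces $|c| < 1$, and it also fixes the sign of $y'$ to be the sign of $c$ (information that squaring discards).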
{ "language": "en", "url": "https://math.stackexchange.com/questions/20977", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Is there an easy proof for ${\aleph_\omega} ^ {\aleph_1} = {2}^{\aleph_1}\cdot{\aleph_\omega}^{\aleph_0}$? The question contains 2 stages: * *Prove that ${\aleph_n} ^ {\aleph_1} = {2}^{\aleph_1}\cdot\aleph_n$ This one is pretty clear by induction and by applying Hausdorff's formula. *Prove ${\aleph_\omega} ^ {\aleph_1} = {2}^{\aleph_1}\cdot{\aleph_\omega}^{\aleph_0}$ Is there an easy proof for the second one? Thanks in advance.
As you mention, the first equation is a consequence of Hausdorff's formula and induction. For the second: Clearly the right hand side is at most the left hand side. Now: Either $2^{\aleph_1}\ge\aleph_\omega$, in which case in fact $2^{\aleph_1}\ge{\aleph_\omega}^{\aleph_1}$, and we are done, or $2^{\aleph_1}<\aleph_\omega$. I claim that in this case we have ${\aleph_\omega}^{\aleph_1}={\aleph_\omega}^{\aleph_0}$. Once we prove this, we are done. Note that $\aleph_\omega={\rm sup}_n\aleph_n\le\prod_n\aleph_n$, so $$ {\aleph_\omega}^{\aleph_1}\le\left(\prod_n\aleph_n\right)^{\aleph_1}=\prod_n{\aleph_n}^{\aleph_1}. $$ Now use part 1, to conclude that ${\aleph_n}^{\aleph_1}<\aleph_\omega$ for all $n$, since we are assuming that $2^{\aleph_1}<\aleph_\omega$. This shows that the last product is at most $\prod_n \aleph_\omega={\aleph_\omega}^{\aleph_0}$. This shows that ${\aleph_\omega}^{\aleph_1}\le {\aleph_\omega}^{\aleph_0}$. But the other inequality is obvious, and we are done.
{ "language": "en", "url": "https://math.stackexchange.com/questions/21036", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13", "answer_count": 1, "answer_id": 0 }
Prove that an odd integer $n>1$ is prime if and only if it is not expressible as a sum of three or more consecutive integers. Prove that an odd integer $n>1$ is prime if and only if it is not expressible as a sum of three or more consecutive integers. I can see how this works with various examples of the sum of three or more consecutive integers being prime, but I can't seem to prove it for all odd integers $n>1$. Any help would be great.
First of all, you can assume you're adding only positive numbers; otherwise the question isn't correct as written. Note that the sum of the numbers from $1$ to $n$ is ${\displaystyle {n^2 + n \over 2}}$. So the sum of the numbers from $m+1$ to $n$ is ${\displaystyle {n^2 + n \over 2} - {m^2 + m \over 2} = {n^2 - m^2 + n - m \over 2} = {(n - m)(n + m + 1) \over 2}}$. You want to know which odd numbers $k$ can be written in this form for $n - m \geq 3$. If $k$ were a prime $p$ that could be expressed this way, then you'd have $(n- m )(n+m+1) = 2p$. But $n - m \geq 3$, and $n + m + 1$ would only be bigger than that. Since $2p$ has only the factors $2$ and $p$, that can't happen. So suppose $k$ is an odd non-prime, which you can write as $k_1k_2$ where $k_1 \geq k_2$ are odd numbers that are at least $3$. You now want to solve $(n-m)(n+m+1) = 2k_1k_2$. It's natural to set $n - m = k_2$ (the smaller factor), and $2k_1 = n + m + 1$, the larger factor. Solving for $n$ and $m$ one gets ${\displaystyle n = {2k_1 + k_2 - 1 \over 2}}$ and ${\displaystyle m = {2k_1 - k_2 - 1 \over 2}}$. Since $k_1$ and $k_2$ are odd these are both integers. And since $k_1 \geq k_2$, the numbers $m$ and $n$ are nonnegative.
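For a concrete instance of the construction: with $k = 9 = 3 \cdot 3$, take $k_1 = k_2 = 3$, so $n = \frac{2 \cdot 3 + 3 - 1}{2} = 4$ and $m = \frac{2 \cdot 3 - 3 - 1}{2} = 1$, and indeed $2 + 3 + 4 = 9$.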
{ "language": "en", "url": "https://math.stackexchange.com/questions/21099", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 0 }
Physical meaning of the null space of a matrix What is an intuitive meaning of the null space of a matrix? Why is it useful? I'm not looking for textbook definitions. My textbook gives me the definition, but I just don't "get" it. E.g.: I think of the rank $r$ of a matrix as the minimum number of dimensions that a linear combination of its columns would have; it tells me that, if I combined the vectors in its columns in some order, I'd get a set of coordinates for an $r$-dimensional space, where $r$ is minimum (please correct me if I'm wrong). So that means I can relate rank (and also dimension) to actual coordinate systems, and so it makes sense to me. But I can't think of any physical meaning for a null space... could someone explain what its meaning would be, for example, in a coordinate system? Thanks!
It's the solution space of the matrix equation $AX=0$; it always contains the trivial solution, the zero vector $0$. If $A$ is row equivalent to the identity matrix, then the zero vector is the only element of the solution space. If it is not, i.e. when the column space of $A$ has dimension less than the number of columns of $A$, then the equation $AX=0$ has non-trivial solutions, which form a vector space whose dimension is termed the nullity.
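If you want to see null spaces concretely, a computer algebra system will produce a basis directly; for example with SymPy:

```python
from sympy import Matrix

A = Matrix([[1, 2, 3],
            [2, 4, 6]])       # rank 1, so the nullity is 2
print(A.nullspace())          # two basis vectors of the solution space of Ax = 0
```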
{ "language": "en", "url": "https://math.stackexchange.com/questions/21131", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "136", "answer_count": 10, "answer_id": 6 }
What is the simplification of $\frac{\sin^2 x}{(1+ \sin^2 x +\sin^4 x +\sin^6 x + \cdots)}$? What is the simplification of $$\frac{\sin^2 x}{(1+ \sin^2 x +\sin^4 x +\sin^6 x + \cdots)} \space \text{?}$$
What does $1 + \sin^2 x + \sin^4 x + \sin^6 x + ....$ simplify to? Or better, what does $1 + x^2 + x^4 + x^6 + ....$ simplify to? Or better, what does $1 + x + x^2 + x^3 + ....$ simplify to?
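Carrying the hints through (for $\cos x \neq 0$, so that $\sin^2 x < 1$ and the geometric series converges): $$1 + \sin^2 x + \sin^4 x + \cdots = \frac{1}{1 - \sin^2 x} = \frac{1}{\cos^2 x}, \qquad\text{so}\qquad \frac{\sin^2 x}{1/\cos^2 x} = \sin^2 x \cos^2 x = \tfrac{1}{4}\sin^2(2x).$$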
{ "language": "en", "url": "https://math.stackexchange.com/questions/21182", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Distribution of $\log X$ If $X$ has the density function $$ f_\vartheta (x) = \begin{cases} (\vartheta - 1)x^{-\vartheta} & x \geq 1\\ 0 & \text{otherwise} \end{cases}$$ How can I see that $\log X \sim Exp(\vartheta - 1)$? I had the idea to look at $f(\log x)$ but I think that's not right. Many thanks for your help!
One way is to use the cdf technique. Let $Y = \log X$. Then $$P(Y \leq y) = P(\log X \leq y) = P(X \leq e^y) = \int_1^{e^y} (\vartheta - 1)x^{-\vartheta} dx = \left.-x^{-\vartheta+1}\right|_1^{e^y} = 1-e^{-(\vartheta-1)y}.$$ Differentiating with respect to $y$ yields $(\vartheta-1) e^{-(\vartheta-1)y}$ as the pdf of $Y$, which is also the pdf of an $Exp(\vartheta - 1)$ random variable.
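An empirical cross-check (sampling $X$ by inverting its cdf and comparing the sample mean of $\log X$ with the exponential mean $1/(\vartheta - 1)$; assumes NumPy):

```python
import numpy as np

theta = 3.0
u = np.random.rand(100_000)
x = (1 - u) ** (-1 / (theta - 1))         # inverse cdf of f_theta
print(np.log(x).mean(), 1 / (theta - 1))  # both come out near 0.5
```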
{ "language": "en", "url": "https://math.stackexchange.com/questions/21253", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }