How do I find the Fourier transform of a function that is separable into a radial and an angular part, $f(r, \theta, \phi)=R(r)A(\theta, \phi)$? Thanks in advance for any answers!
You can use the expansion of a plane wave in spherical waves. If you integrate the product of your function with such a plane wave, you get integrals over $R$ times spherical Bessel functions and $A$ times spherical harmonics; you'll need to be able to solve those in order to get the Fourier coefficients.
CDF of a ratio of exponential variables Let $X$ and $Y$ be independent exponential variables with rates $\alpha$ and $\beta$, respectively. Find the CDF of $X/Y$. I tried out the problem, and wanted to check to see if my answer of: $\frac{\alpha}{ \beta/t + \alpha}$ is correct, where $t$ is the time, which we need in our final answer since we need a cdf. Can someone verify if this is correct?
Here is a one-line proof. $$ \mathbb P(X/Y \le t) = \mathbb P(Y \ge X/t) = \mathbb E[\exp(-\beta X/t)] = \text{MGF}_X(-\beta/t) = \left(1 - \frac{-\beta/t}{\alpha}\right)^{-1} = \frac{\alpha}{\alpha + \beta/t} = \frac{\alpha t}{\alpha t + \beta}. $$ N.B.: For the MGF of an exponential variable, see this table.
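A quick numerical sanity check of the resulting CDF $\frac{\alpha t}{\alpha t + \beta}$ is easy to run; here is a minimal Python sketch (my own, not part of the original answer; note that NumPy's exponential sampler takes the scale $1/\text{rate}$, not the rate):

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, beta, t, n = 2.0, 3.0, 1.5, 1_000_000

x = rng.exponential(scale=1 / alpha, size=n)   # Exp(rate = alpha)
y = rng.exponential(scale=1 / beta, size=n)    # Exp(rate = beta)

print(np.mean(x / y <= t))                     # empirical P(X/Y <= t)
print(alpha * t / (alpha * t + beta))          # formula, here 0.5
```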
Euclidean algorithm to find the GCD I have to find the greatest common divisor of $a=78$ and $b=132$. I have worked out to $$\begin{align} 132 & = 78 \times 1 + 54 \\ 78 & = 54 \times 1 + 24 \\ 54 & = 24 \times 2 + 6 \\ 24 & = 6 \times 4 + 0 \end{align}$$ and back substituted $$\begin{align} 6 & = 54 - 24 \times 2 \\ & = 54 - (78 - 54) \times 2 \\ & = 3 \times 54 - 78 \times 2 \end{align}$$ However I don't seem to be able to work back to $132$ ? Can someone explain / help?
From the first equation you have $54=132-78$. By plugging this into the last one you get $6=3(132-78)-2\cdot78=3\cdot132-5\cdot78.$
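The back-substitution can also be automated with the extended Euclidean algorithm; the following is a minimal Python sketch (my own code, not part of the answer), returning $g=\gcd(a,b)$ together with $x,y$ such that $ax+by=g$:

```python
def extended_gcd(a, b):
    # returns (g, x, y) with a*x + b*y == g == gcd(a, b)
    if b == 0:
        return a, 1, 0
    g, x, y = extended_gcd(b, a % b)
    return g, y, x - (a // b) * y

print(extended_gcd(132, 78))   # (6, 3, -5), i.e. 6 = 3*132 - 5*78
```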
What is the difference between "family" and "set"? The definition of "family" on mathworld (http://mathworld.wolfram.com/Family.html) is a collection of objects of the form $\{a_i\}_{i \in I}$, where $I$ is an index set. But, I think a set can also be represented in this form. So, what is the difference between the concept family and the concept set? Is there any example of a collection of objects that is a family, but not a set, or reversely? Many thanks!
A family is indeed a set, and it is defined by the indexing -- as you observed. Just as well, every set $A$ is a family of the form $\{i\}_{i\in A}$. However, often you want to have some property of the index set (e.g. some order relation, or some other structure) that you do not require from a general set. This additional structure on the index set can help you define further properties of the family, or prove things using the properties of the family (its elements are disjoint, co-prime, increasing in some order, every two elements have a supremum, and so on).
How to prove a function is positive or negative in $x \in \mathbb{R}$ A homework question: I know the solution but I don't know how to prove that the function is negative XOR positive for $x \in \mathbb{R}$ f is continuous in $\mathbb{R}$. $$\text{ prove that if } |f(x)| \ge x \text{ for } x \in \mathbb{R} \text { then: } \lim_{x\to\infty} f(x) = \infty \text{ or } \lim_{x\to\infty} f(x) = -\infty$$ Now once I prove that the function is negative XOR positive it's relatively simple to prove that the limits are at infinity. How do I prove that there is no $f(x) = 0$? Thanks!
The function, as you stated it, is not exclusively negative nor exclusively positive. There's a really simple counterexample. If $f(x)=x$ for all $x \in \mathbb{R}$, then $f$ is continuous everywhere, $|f(x)| \ge x$ everywhere, and even one of the limits is satisfied ($\lim_{x \to \infty} f(x) = \infty$). But clearly $f(0)=0$, $f(-1)=-1$, and $f(1)=1$. So there is a point where $f(x)=0$, and $f$ is neither exclusively positive nor exclusively negative. As to the question you were most likely asking, user6312 already answered it for you, but I'll type the same proof for completeness. (Also, does anyone know where I can find a guide on how to get latex to work properly in here? I can't get things like /mathbb{R} or /infinity or /in to work. Maybe the rules have changed since I last used latex...) If $f(1) \ge 1$, then $f$ is positive for all $x > 1$. (Suppose there exists $b > 1$ such that $f(b) < 1$. Since $|f(x)| \ge x$ for all $x$, and $b > 1$, we must have $f(b) \le -b < 0$. But if $f(1) > 0$ and $f(b) < 0$, there exists a point $c$ with $1 < c < b$ and $f(c) = 0$, by the Intermediate Value Theorem. Since $c > 0$ and $f(c) = 0$, this contradicts $|f(c)| \ge c$.) Since $f(x)$ is positive for all $x > 1$, we have $|f(x)| = f(x)$ for $x > 1$, so $f(x) \ge x$ there, and thus $\lim_{x \to \infty} f(x) = \infty$. If $f(1) \le -1$, let $g(x) = -f(x)$, run the same proof, and you end up with $\lim_{x \to \infty} f(x) = -\infty$.
Olympiad calculus problem This problem is from a qualifying round in a Colombian math Olympiad, I thought some time about it but didn't make any progress. It is as follows. Given a continuous function $f : [0,1] \to \mathbb{R}$ such that $$\int_0^1{f(x)\, dx} = 0$$ Prove that there exists $c \in (0,1) $ such that $$\int_0^c{xf(x) \, dx} = 0$$ I will appreciate any help with it.
This is a streamlined version of Thomas Andrews' proof: Put $F(x):=\int_0^x f(t)dt$ and consider the auxiliary function $\phi(x)={1\over x}\int_0^x F(t)dt$. Then $\phi(0)=0$, $\ \phi(1)=\int_0^1 F(t)dt=:\alpha$, and by partial integration one obtains $$\phi'(x)=-{1\over x^2}\int_0^xF(t)dt +{1\over x}F(x)={1\over x^2}\int_0^x t f(t)dt\ .$$ The mean value theorem provides a $\xi\in(0,1)$ with $\phi'(\xi)=\alpha$. If $\alpha$ happens to be $0$ we are done. Otherwise we invoke $F(1)=0$ and conclude that $\phi'(1)=-\alpha$. It follows that there is a $\xi'\in(\xi,1)$ with $\phi'(\xi')=0$.
Validate my reasoning for this logical equivalence I've basically worked out how to do this question but not sure about my reasoning: Question: Show 1) $(p \rightarrow q) \land (q \rightarrow (\lnot p \lor r))$ is logically equivalent to: 2) $p \rightarrow (q \land r)$ and I am given this equivalence as a hint: $u \rightarrow v$ is logically equivalent to $(\lnot u) \lor v)$ My reasoning: From statement (1): $(\lnot p \lor r)$ is equivalent to $(p \rightarrow r)$ (By the hint given) Hence statement (1) becomes: $(p \rightarrow q) \land (q \rightarrow (p \rightarrow r))$ We assume $p$ is true, therefore $q$ is true So $p$ also implies $r$ Therefore $p$ implies $q$ and $p$ also implies $r$ Hence $p \rightarrow (q \land r)$ I understand the basic ideas but I'm really confused as to how I can write it all down logically and clearly
There are several routes to a proof; I will list two: 1) You can make a list of all cases. Since you have three variables, there are 8 possibilities for them to have the values true/false. You can make a table with column titles $p,q,r,p \to q, \lnot p \lor q, \dots$ and enter the truth values, then compare the columns for the two expressions you want to be equivalent. 2) As indicated by the hint, you can transform all occurrences of $\to$ into $\lor$. Then you can use distributivity to bring both expressions to a normal form, for example to conjunctive normal form http://en.wikipedia.org/wiki/Conjunctive_normal_form
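Route 1) can be carried out mechanically; here is a minimal Python sketch (the function names are my own, not from the answer) that enumerates all 8 truth assignments and compares the two formulas:

```python
from itertools import product

def implies(a, b):
    # material implication: a -> b  is  (not a) or b
    return (not a) or b

def lhs(p, q, r):
    return implies(p, q) and implies(q, (not p) or r)

def rhs(p, q, r):
    return implies(p, q and r)

# check every row of the truth table; should print True
print(all(lhs(p, q, r) == rhs(p, q, r)
          for p, q, r in product([True, False], repeat=3)))
```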
Why are addition and multiplication commutative, but not exponentiation? We know that the addition and multiplication operators are both commutative, and the exponentiation operator is not. My question is why. As background there are plenty of mathematical schemes that can be used to define these operators. One of these is hyperoperation where $H_0(a,b) = b+1$ (successor op) $H_1(a,b) = a+b$ (addition op) $H_2(a,b) = ab $ (multiplication op) $H_3(a,b) = a^b$ (exponentiation op) $H_4(a,b) = a\uparrow \uparrow b$ (tetration op: $a^{(a^{(...a)})}$ nested $b$ times ) etc. Here it is not obvious to me why $H_1(a,b)=H_1(b,a)$ and $H_2(a,b)=H_2(b,a)$ but not $H_3(a,b)=H_3(b,a)$ Can anyone explain why this symmetry breaks, in a reasonably intuitive fashion? Thanks.
When I first read your question, I expected that it must mean that addition would possess some obscure property that multiplication lacks, after all, both the additive structure and multiplicative structure are abelian groups, so you'd expect something like this to just generalize. But after some thinking, I realized that this wasn't the case, and instead that the problem is that we aren't generalizing properly. For if we define "applying an operator $f$, $n$ times, ie $f^n$" as the recursive procedure $ f^n(x) = \begin{cases} x & \text{if n = 0} \\ f^1(f^{n - 1}(x)) & \text{otherwise} \end{cases} $ Then this definition actually uses addition, so if we'd want to generalize this procedure properly, we'd need to change our definition of "applying an operator $n$ times" as well. And indeed $a^n$ does equal $(a^2)^{n / 2}$, which induces a better generalization of commutativity.
Working with Conditions or Assumptions in Mathematica with boolean operators I have the following code: `$Assumptions = {x > 0}`, `b[x_] := x^2`, `b'[x] > 0`. In my (very basic) understanding of Mathematica, this should give me the output True, but I get `2 x > 0`. I also tried `b[x_] := x^2 /; x > 0` and `Assuming[x > 0, b'[x] > 0]`. I've searched the Mathematica help, but without success. What's my basic error and how do I get the desired output? EDIT: The original question is answered; now I wanted to adapt this solution to two variables: `c[x_, y_] := x^2 + y`, `$Assumptions = {y > 0}`, `$Assumptions = {x > 0}`, `Simplify[c[x, y] > 0]`. It follows the same logic as the first case, where I now get the desired output, but why not here? I realize that these are probably typical beginner questions, so could you explain the logic to me or give me a hint where to read up on this stuff? Neither the Mathematica help nor my university's (very short) guidebook is sufficient for my understanding.
Your first code (`$Assumptions = {x > 0}`, `b[x_] := x^2`, `b'[x] > 0`) works fine if you apply `Simplify` to the result (`2 x > 0`). Edit: For completeness, I also add the answer of J.M. in the comment to the second question: `$Assumptions = {x > 0}` overwrites `$Assumptions = {y > 0}`. Try `$Assumptions = x > 0 && y > 0`.
Half angle formulas Say I have the trig identity $$ \tan \frac{\theta}{2} = \frac{ 1 - \cos \theta }{ \sin \theta } $$ And you have the 3,4,5 triangle: taking the angle $\theta$ as marked, why can't we just say $$ \tan \frac{\theta}{2} = \frac{ 1.5 }{ 4 } = \frac{ 3 }{ 8 } $$ (Like, if you half $\theta$, then the opposite side length is seen as half, right?) But this doesn't work or check out with the identity: $$ \tan \frac{\theta}{2} = \frac{ 1 - \frac{4}{5} }{ \frac{3}{5} } = \frac{ \frac{1}{5} }{ \frac{3}{5} } = \frac{1}{3} $$
Actually, if you halve an angle, the bisector divides the opposite side proportionally to the two other sides (see http://en.wikipedia.org/wiki/Angle_bisector_theorem). In your case, 3 would be divided into parts $4/3$ and $5/3$. So you get $\tan \theta/2=(4/3)/4= 1/3$. Everything works out fine.
Simple set exercise seems not so simple Exercise about sets from Birkhoff's "Modern Applied Algebra". Prove that for operation $\ \Delta $ , defined as $\ R \Delta S = (R \cap S^c) \cup (R^c \cap S) $ following is true: $\ R \Delta ( S \Delta T ) = ( R \Delta S ) \Delta T $ ($\ S^c $ is complement of $\ S $) It's meant to be very simple, being placed in the first excercise block of the book. When I started to expand both sides of equations in order to prove that they're equal, I got this monster just for the left side: $\ R \Delta ( S \Delta T ) = \Bigl( R \cap \bigl( (S \cap T^c) \cup (S^c \cap T) \bigr)^c \Bigr) \cup \Bigl(R^c \cap \bigl( (S \cap T^c) \cup (S^c \cap T) \bigr) \Bigr) $ For the right: $\ ( R \Delta S ) \Delta T = \Bigl(\bigl( (R \cap S^c) \cup (R^c \cap S) \bigr) \cap T^c \Bigr) \cup \Bigl( \bigl( (R \cap S^c) \cup (R^c \cap S) \bigr)^c \cap T \Bigr) $ I've tried to simplify this expression, tried to somehow rearrange it, but no luck. Am I going the wrong way? Or what should I do with what I have?
This is the symmetric difference. It includes all elements that are in exactly one of the two sets. In binary terms, it's the XOR operation. Independent of the order of the operations on the three sets, the result will contain exactly the elements that are in an odd number of the sets.
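The XOR description also gives a quick way to convince yourself of the associativity law in the exercise numerically; a minimal Python sketch (my own, using random subsets of a small universe):

```python
import random

def sym_diff(a, b):
    # elements lying in exactly one of the two sets
    return (a - b) | (b - a)

random.seed(0)
universe = range(10)
for _ in range(1000):
    r, s, t = ({x for x in universe if random.random() < 0.5} for _ in range(3))
    assert sym_diff(r, sym_diff(s, t)) == sym_diff(sym_diff(r, s), t)
    # both orders equal the set of elements contained in an odd number of r, s, t
    odd = {x for x in universe if ((x in r) + (x in s) + (x in t)) % 2 == 1}
    assert sym_diff(sym_diff(r, s), t) == odd
print("all checks passed")
```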
How to compute homotopy classes of maps on the 2-torus? Let $\mathbb T^2$ be the 2-Torus and let $X$ be a topological space. Is there any way of computing $[\mathbb T^2,X]$, the set of homotopy class of continuous maps $\mathbb T^2\to X$ if I know, for instance, the homotopy groups of $X$? Actually, I am interested in the case $X=\mathbb{CP^\infty}$. I would like to classify $\mathbb T^1$-principal bundles over $\mathbb T^2$ (in fact $\mathbb T^2$-principal bundles, but this follows easily.)
This is a good chance to advertise the paper Ellis, G.J. Homotopy classification the J. H. C. Whitehead way. Exposition. Math. 6(2) (1988) 97-110. Graham Ellis is referring to Whitehead's paper "Combinatorial Homotopy II", not so well read as "Combinatorial Homotopy I". He writes:" Almost 40 years ago J.H.C. Whitehead showed in \cite{W49:CHII} that, for connected $CW$-complexes $X, Y$ with dim $X \le n$ and $\pi_i Y = 0$ for $2\le i \le \ n - 1$, the homotopy classification of maps $X \to Y$ can be reduced to a purely algebraic problem of classifying, up to an appropriate notion of homotopy, the $\pi_1$-equivariant chain homomorphisms $C_* \widetilde{X} \to C_* \widetilde{Y}$ between the cellular chain complexes of the universal covers. The classification of homotopy equivalences $Y \simeq Y$ can similarly be reduced to a purely algebraic problem. Moreover, the algebra of the cellular chains of the universal covers closely reflects the topology, and provides pleasant and interesting exercises. "These results ought to be a standard piece of elementary algebraic topology. Yet, perhaps because of the somewhat esoteric exposition given in \cite{W49:CHII}, and perhaps because of a lack of worked examples, they have remained largely ignored. The purpose of the present paper is to rectify this situation."
what name for a shape made from two intersecting circles of different sizes? what is the name of a shape made from two circles with different radii that intersect each other? Sort of like a snowman shape, made of a big and a small ball of snow, melted together a bit! :-) Thanks
I do know that a "figure 8" shape is known as a lemniscate: you can read more here: http://en.wikipedia.org/wiki/Lemniscate. But I'm not sure if that's what you're looking for. What you seem to describe is the union of two circles (of different size) which intersect at two points. Wikipedia has an interesting "taxonomy" of various shapes and variations of familiar shapes, etc.: http://en.wikipedia.org/wiki/List_of_geometric_shapes
How to calculate the new intersection on the x-axis after rotation of a rectangle? I've been trying to calculate the new intersection on the x-axis after rotation of any given rectangle. The rectangle's center is the point $(0,0)$. What do I know: * *length of B (that is half of the width of the given rectangle) *angle of a (that is the rotation of the rectangle) What do I want to know: length of A (or value of point c on the x-axis).
Hint: Try to divide into cases. Referring to your image, after the rotation by the angle $a$, does the vertex on the left side of the rectangle pass the x-axis or not? Suppose now that your rectangle has one side of length $2B$, and the other one "large", so that the vertex on the left side doesn't pass the x-axis. Then using Pythagoras you get $A=\sqrt{B^2 + B^2 \sin^2(a)}$. What about the other case?
What does $\ll$ mean? I saw two less than signs on this Wikipedia article and I was wonder what they meant mathematically. http://en.wikipedia.org/wiki/German_tank_problem EDIT: It looks like this can use TeX commands. So I think this is the symbol: $\ll$
Perhaps not its original intention, but we (my collaborators and former advisor) use $X \gg Y$ to mean that $X \geq c Y$ for a sufficiently large constant $c$. Precisely, we usually use it when we write things like: $$ f(x) = g(x) + O(h(x)) \quad \Longrightarrow \quad f(x) = g(x) (1 + o(1)) $$ when $g(x) \gg h(x)$.
Proving a property of homogeneous equation that is exact The following question was given to us in an exam: If $0=M dx + N dy$ is an exact equation, in addition to the fact that $\frac{M}{N} = f\Big(\frac{y}{x}\Big)$ is homogeneous, then $xM_x + yM_y = (xN_x + yN_y)f$. Now I had absolutely no idea how to prove this question. I tried doing $M = Nf$ and taking derivatives and multiplying by $x$ or $y$, and you get the required R.H.S. but with the extra term $N(\frac{-f_x}{x} + \frac{f_y}{x})$ added. How does one approach a question like that?? I have never encountered a question like that, not even when solving for different types of integrating factors to get an exact equation or when working with a homogeneous equation. Anyone got any ideas? Please don't post a complete solution. Thanks.
So the solution should be: as $\frac{M}{N} = f\Big(\frac{y}{x}\Big)$, the degrees of homogeneity of $M$ and $N$ must be equal. So $xM_x + yM_y = aM$ and $xN_x + yN_y = aN$ by Euler's homogeneity theorem, where $a$ is the common degree of homogeneity of $M$ and $N$. Dividing the first of these equations by the second, one gets $xM_x + yM_y = \frac{M}{N} (xN_x + yN_y) = f\Big(\frac{y}{x}\Big)(xN_x + yN_y )$. Is that correct?
Showing that a level set is not a submanifold Is there a criterion to show that a level set of some map is not an (embedded) submanifold? In particular, an exercise in Lee's smooth manifolds book asks to show that the sets defined by $x^3 - y^2 = 0$ and $x^2 - y^2 = 0$ are not embedded submanifolds. In general, is it possible that a level set of a map which does not has constant rank on the set still defines a embedded submanifold?
It is certainly possible for a level set of a map which does not have constant rank on the set to still be an embedded submanifold. For example, the set defined by $x^3 - y^3 = 0$ is an embedded curve (it is the same as the line $y=x$), despite the fact that $F(x,y) = x^3 - y^3$ has a critical point at $(0,0)$. The set defined by $x^2 - y^2 = 0$ is not an embedded submanifold, because it is the union of the lines $y=x$ and $y=-x$, and is therefore not locally Euclidean at the origin. To prove that no neighborhood of the origin is homeomorphic to an open interval, observe that any open interval splits into exactly two connected components when a point is removed, but any neighborhood of the origin in the set $x^2 - y^2$ has at least four components after the point $(0,0)$ is removed. The set $x^3-y^2 = 0$ is an embedded topological submanifold, but it is not a smooth submanifold, since the embedding is not an immersion. There are many ways to prove that this set is not a smooth embedded submanifold, but one possibility is to observe that any smooth embedded curve in $\mathbb{R}^2$ must locally be of the form $y = f(x)$ or $x = f(y)$, where $f$ is some differentiable function. (This follows from the local characterization of smooth embedded submanifolds as level sets of submersions, together with the Implicit Function Theorem.) The given curve does not have this form, so it cannot be a smooth embedded submanifold.
A Binomial Coefficient Sum: $\sum_{m = 0}^{n} (-1)^{n-m} \binom{n}{m} \binom{m-1}{l}$ In my work on $f$-vectors in polytopes, I ran across an interesting sum which has resisted all attempts of algebraic simplification. Does the following binomial coefficient sum simplify? \begin{align} \sum_{m = 0}^{n} (-1)^{n-m} \binom{n}{m} \binom{m-1}{l} \qquad l \geq 0 \end{align} Update: After some numerical work, I believe a binomial sum orthogonality identity is at work here because I see only $\pm 1$ and zeros. Any help would certainly be appreciated. I take $\binom{-1}{l} = (-1)^{l}$, $\binom{m-1}{l} = 0$ for $0 < m < l$ and the standard definition otherwise. Thanks!
$$\sum_{m=0}^n (-1)^{n-m} \binom{n}{m} \binom{m-1}{l} = (-1)^{l+n} + \sum_{l+1 \leq m \leq n} (-1)^{n-m} \binom{n}{m} \binom{m-1}{l}$$ So we need to compute this last sum. It is clearly zero if $l \geq n$, so we assume $l < n$. It is equal to $f(1)$ where $f(x)= \sum_{l+1 \leq m \leq n} (-1)^{n-m} \binom{n}{m} \binom{m-1}{l} x^{m-1-l}$. We have that $$\begin{eqnarray*} f(x) & = & \frac{1}{l!} \frac{d^l}{dx^l} \left( \sum_{l+1 \leq m \leq n} (-1)^{n-m} \binom{n}{m} x^{m-1} \right) \\ & = & \frac{1}{l!} \frac{d^l}{dx^l} \left( \frac{(-1)^{n+1}}{x} + \sum_{0 \leq m \leq n} (-1)^{n+1} \binom{n}{m} (-x)^{m-1} \right) \\ & = & \frac{1}{l!} \frac{d^l}{dx^l} \left( \frac{(-1)^{n+1}}{x} + \frac{(x-1)^n}{x} \right) \\ & = & \frac{(-1)^{n+1+l}}{x^{l+1}} + \frac{1}{l!} \sum_{k=0}^l \binom{l}{k} n(n-1) \ldots (n-k+1) (x-1)^{n-k} \frac{(-1)^{l-k} (l-k)!}{x^{1+l-k}} \end{eqnarray*}$$ (this last transformation thanks to Leibniz) and since $n>l$, $f(1)=(-1)^{l+n+1}$. In the end, your sum is equal to $(-1)^{l+n}$ if $l \geq n$, $0$ otherwise.
Functions, graphs, and adjacency matrices One naively thinks of (continuous) functions as of graphs1 (lines drawn in a 2-dimensional coordinate space). One often thinks of (countable) graphs2 (vertices connected by edges) as represented by adjacency matrices. That's what I learned from early on, but only recently I recognized that the "drawn" graphs1 are nothing but generalized - continuous - adjacency matrices, and thus graphs1 are more or less the same as graphs2. I'm quite sure that this is common (maybe implicit) knowledge among working mathematicians, but I wonder why I didn't learn this explicitly in any textbook on set or graph theory I've read. I would have found it enlightening. My questions are: Did I read my textbooks too superficially? Is the analogy above (between graphs1 and graphs2) misleading? Or is the analogy too obvious to be mentioned?
My opinion: the analogy is not misleading, is not too obvious to be mentioned, but is also not terribly useful. Have you found a use for it? EDIT: Here's another way to think about it. A $\it relation$ on a set $S$ is a subset of $S\times S$, that is, it's a set of ordered pairs of elements of $S$. A relation on $S$ can be viewed as a (directed) graph, with vertex set $S$ and edge set the relation. We draw this graph by drawing the vertices as points in the plane and the edges as (directed) line segments connecting pairs of points. Now consider "graph" in the sense of "draw the graph of $x^2+y^2=1$." That equation is a relation on the set of real numbers, and the graph is obtained by drawing the members of this relation as points in the plane. So the two kinds of graph are two ways of drawing a picture to illustrate a relation on a set.
Simplifying simple radicals $\sqrt{\frac{1}{a}}$ I'm having a problems simplifying this apparently simple radical: $\sqrt{\frac{1}{a}}$ The book I'm working through gives the answer as: $\frac{1}{a}\sqrt{a}$ Could someone break down the steps used to get there? I've managed to answer all the other questions in this chapter right, but my brain refuses to lock onto this one and I'm feeling really dense.
What do you get if you square both expressions and then simplify?
Which simple puzzles have fooled professional mathematicians? Although I'm not a professional mathematician by training, I felt I should have easily been able to answer straight away the following puzzle: Three men go to a shop to buy a TV and the only one they can afford is £30 so they all chip in £10. Just as they are leaving, the manager comes back and tells the assistant that the TV was only £25. The assistant thinks quickly and decides to make a quick profit, realising that he can give them all £1 back and keep £2. So the question is this: If he gives them all £1 back, which means that they all paid £9 each and he kept £2, where's the missing £1? 3 x £9 = £27 + £2 = £29...?? Well, it took me over an hour of thinking before I finally knew what the correct answer to this puzzle was, and I'm embarrassed. It reminds me of the embarrassment some professional mathematicians must have felt in not being able to give the correct answer to the famous Monty Hall problem answered by Marilyn vos Savant: http://www.marilynvossavant.com/articles/gameshow.html Suppose you're on a game show, and you're given the choice of three doors. Behind one door is a car, behind the others, goats. You pick a door, say #1, and the host, who knows what's behind the doors, opens another door, say #3, which has a goat. He says to you, "Do you want to pick door #2?" Is it to your advantage to switch your choice of doors? Yes; you should switch. It's also mentioned in the book The Man Who Loved Only Numbers that Paul Erdős was not convinced the first time either when presented by his friend with the solution to the Monty Hall problem. So what other simple puzzles are there which the general public can understand yet can fool professional mathematicians?
Along the same lines as the Monty Hall Problem is the following (lifted from Devlin's Angle on MAA and quickly amended): I have two children, and (at least) one of them is a boy born on a Tuesday. What is the probability that I have two boys? Read a fuller analysis here.
Gram matrix invertible iff set of vectors linearly independent Given a set of vectors $v_1 \cdots v_n$, the $n\times n$ Gram matrix $G$ is defined as $G_{i,j}=v_i \cdot v_j$ Due to symmetry in the dot product, $G$ is Hermitian. I'm trying to remember why $|G|=0$ iff the set of vectors are not linearly independent.
Here's another way to look at it. If $A$ is the matrix with columns $v_1,\ldots,v_n$, and the columns are not linearly independent, it means there exists some vector $u \in \mathbb{R}^n$ where $u \neq 0$ such that $A u = 0$. Since $G = A^T A$, this means $G u = A^T A u = A^T 0 = 0$ or that there exists a vector $u \neq 0$ such that $G u = 0$. So $G$ is not of full rank. This proves the "if" part. The "only if" part -- i.e. if $|G| = 0$, the vectors are not linearly independent -- follows because $|G| = |A^T A| = |A|^2 = 0$ which implies that $|A| = 0$ and so $v_1,\ldots,v_n$ are not linearly independent.
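To see the determinant criterion in action numerically, here is a small NumPy sketch (my own example vectors, not from the answer):

```python
import numpy as np

def gram(vectors):
    # Gram matrix G with G[i, j] = <v_i, v_j>
    A = np.column_stack(vectors)
    return A.T @ A

v1, v2 = np.array([1.0, 0.0, 2.0]), np.array([0.0, 1.0, 1.0])
print(np.linalg.det(gram([v1, v2])))        # nonzero: independent vectors

v3 = v1 + v2                                # deliberately dependent
print(np.linalg.det(gram([v1, v2, v3])))    # ~0 up to round-off
```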
If $n$ is any positive integer, prove that $\sqrt{4n-2}$ is irrational If $n$ is any positive integer, prove that $\sqrt{4n-2}$ is irrational. I've tried proving by contradiction but I'm stuck, here is my work so far: Suppose that $\sqrt{4n-2}$ is rational. Then we have $\sqrt{4n-2}$ = $\frac{p}{q}$, where $ p,q \in \mathbb{Z}$ and $q \neq 0$. From $\sqrt{4n-2}$ = $\frac{p}{q}$, I just rearrange it to: $n=\frac{p^2+2q^2}{4q^2}$. I'm having troubles from here, $n$ is obviously positive but I need to prove that it isn't an integer. Any corrections, advice on my progress and what I should do next?
$4n-2 = (a/b)^2$ with $\operatorname{gcd}(a,b) = 1$, so $b^2$ divides $a^2$ and hence $b = 1$. So now $a^2 = 4n-2$ is even, so $2$ divides $a$; write $a = 2k$. Then by substitution we get $2n-1 = 2k^2$. The left side is odd but the right side is even. Contradiction!
Inclusion-exclusion principle: Number of integer solutions to equations The problem is: Find the number of integer solutions to the equation $$ x_1 + x_2 + x_3 + x_4 = 15 $$ satisfying $$ \begin{align} 2 \leq &x_1 \leq 4, \\ -2 \leq &x_2 \leq 1, \\ 0 \leq &x_3 \leq 6, \text{ and,} \\ 3 \leq &x_4 \leq 8 \>. \end{align} $$ I have read some papers on this question, but none of them explain clearly enough. I am especially confused when you must decrease the total amount of solutions to the equation—with no regard to the restrictions—from the solutions that we don't want. How do we find the intersection of the sets that we don't want? Either way, in helping me with this, please explain this step.
If you don't get the larger question, start smaller first. * *How many solutions to $x_1 + x_2 = 15$, no restrictions? (infinite of course) *How many solutions where $0\le x_1$, $0\le x_2$? *How many solutions where $6\le x_1$, $0\le x_2$? *How many solutions where $6\le x_1$, $6\le x_2$? (these last questions don't really say anything about inclusion-exclusion yet) *How many solutions where $0\le x_1\le 5$, $0\le x_2$? Hint: exclude the complement. This is the first step of the exclusion. *How many solutions where $0\le x_1\le 5$, $0\le x_2\le7$? Hint: exclude both complements, but re-include where those two complements overlap (the intersection of those two excluded ranges - what is it), because you excluded the intersection twice. That is the gist of it. Now it gets harder, because you need to do it for 4 variables instead of just 2. But that's the exercise: figuring out how to manage including, excluding, then re-including what you threw away too much of, then excluding again the little bit that got over-counted in the last step. (A brute-force check of the full four-variable count is sketched below.)
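For the original four-variable problem the search space is small enough to enumerate directly, which makes a handy check on an inclusion-exclusion computation; a minimal Python sketch (my own, not part of the answer):

```python
from itertools import product

count = sum(1
            for x1, x2, x3, x4 in product(range(2, 5),     # 2 <= x1 <= 4
                                          range(-2, 2),    # -2 <= x2 <= 1
                                          range(0, 7),     # 0 <= x3 <= 6
                                          range(3, 9))     # 3 <= x4 <= 8
            if x1 + x2 + x3 + x4 == 15)
print(count)   # compare with your inclusion-exclusion answer
```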
What is the standard interpretation of order of operations for the basic arithmetic operations? What is the standard interpretation of the order of operations for an expression involving some combination of grouping symbols, exponentiation, radicals, multiplication, division, addition, and subtraction?
Any parts of an expression grouped with grouping symbols should be evaluated first, followed by exponents and radicals, then multiplication and division, then addition and subtraction. Grouping symbols may include parentheses/brackets, such as $()$ $[]$ $\{\}$, and vincula (singular vinculum), such as the horizontal bar in a fraction or the horizontal bar extending over the contents of a radical. Multiple exponentiations in sequence are evaluated right-to-left ($a^{b^c}=a^{(b^c)}$, not $(a^b)^c=a^{bc}$). It is commonly taught, though not necessarily standard, that ungrouped multiplication and division (or, similarly, addition and subtraction) should be evaluated from left to right. (The mnemonics PEMDAS and BEDMAS sometimes give students the idea that multiplication and division [or similarly, addition and subtraction] are evaluated in separate steps, rather than together at one step.) Implied multiplication (multiplication indicated by juxtaposition rather than an actual multiplication symbol) and the use of a $/$ to indicate division often cause ambiguity (or at least difficulty in proper interpretation), as evidenced by the $48/2(9+3)$ or $48÷2(9+3)$ meme. This is exacerbated by the existence of calculators (notably the obsolete Texas Instruments TI-81 and TI-85), which (at least in some instances) treated the $/$ division symbol as if it were a vinculum, grouping everything after it.
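For what it's worth, most programming languages hard-code the same left-to-right rule for ungrouped multiplication and division, and the right-to-left rule for exponentiation; a small Python illustration (this shows Python's own precedence rules, not a claim about the calculators mentioned above):

```python
# same-precedence operators / and * group left to right
print(48 / 2 * (9 + 3))   # (48 / 2) * 12 = 288.0, not 48 / (2 * 12) = 2.0

# exponentiation groups right to left
print(2 ** 3 ** 2)        # 2 ** (3 ** 2) = 512, not (2 ** 3) ** 2 = 64
```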
Exhibit an integral domain $R$ and a non-zero non-unit element of $R$ that is not a product of irreducibles. Exhibit an integral domain $R$ and a non-zero non-unit element of $R$ that is not a product of irreducibles. My thoughts so far: I don't really have a clue. Could anyone direct me on how to think about this? I'm struggling to get my head round irreducibles. Thanks.
Such an element can be factored, each factor can be factored, each factor can be factored, etc. Changing the problem into an additive one, you would want to find an element that can be written as a sum of two strictly smaller numbers, each of which can be written as a sum of two strictly smaller numbers, each of which... etc. Perhaps thinking along the lines of: $$1 = \frac{1}{2}+\frac{1}{2} = \left(\frac{1}{4}+\frac{1}{4}\right) + \left(\frac{1}{4}+\frac{1}{4}\right) = \cdots = \left(\frac{1}{2^n}+\frac{1}{2^n}\right) + \cdots + \left(\frac{1}{2^n}+\frac{1}{2^n}\right) = \cdots$$ Hmmm... Is there any way we could turn that into some kind of multiplicative, instead of additive, set of equalities?
Counting trails in a triangular grid A triangular grid has $N$ vertices, labeled from 1 to $N$. Two vertices $i$ and $j$ are adjacent if and only if $|i-j|=1$ or $|i-j|=2$. See the figure below for the case $N = 7$. How many trails are there from $1$ to $N$ in this graph? A trail is allowed to visit a vertex more than once, but it cannot travel along the same edge twice. I wrote a program to count the trails, and I obtained the following results for $1 \le N \le 17$. $$1, 1, 2, 4, 9, 23, 62, 174, 497, 1433, 4150, 12044, 34989, 101695, 295642, 859566, 2499277$$ This sequence is not in the OEIS, but Superseeker reports that the sequence satisfies the fourth-order linear recurrence $$2 a(N) + 3 a(N + 1) - a(N + 2) - 3 a(N + 3) + a(N + 4) = 0.$$ Question: Can anyone prove that this equation holds for all $N$?
Regard the same graph, but add an edge from $n-1$ to $n$ with weight $x$ (that is, a path passing through this edge contributes $x$ instead of 1). The enumeration is clearly a linear polynomial in $x$, call it $a(n,x)=c_nx+d_n$ (and we are interested in $a(n,0)=d_n$). By regarding the three possible edges for the last step, we find $a(1,x)=1$, $a(2,x)=1+x$ and $$a(n,x)=a(n-2,1+2x)+a(n-1,x)+x\,a(n-1,1)$$ (If the last step passes through the ordinary edge from $n-1$ to $n$, you want a trail from 1 to $n-1$, but there is the ordinary edge from $n-2$ to $n-1$ and a parallel connection via $n$ that passes through the $x$ edge and is thus equivalent to a single edge of weight $x$, so we get $a(n-1,x)$. If the last step passes through the $x$-weighted edge this gives a factor $x$, and you want a trail from $1$ to $n-1$ and now the parallel connection has weight 1 which gives $x\,a(n-1,1)$. If the last step passes through the edge $n-2$ to $n$, then we search a trail to $n-2$ and now the parallel connection has the ordinary possibility $n-3$ to $n-2$ and two $x$-weighted possibilities $n-3$ to $n-1$ to $n$ to $n-1$ to $n-2$, in total this gives weight $2x+1$ and thus $a(n-2,2x+1)$.) Now, plug in the linear polynomial and compare coefficients to get two linear recurrences for $c_n$ and $d_n$. \begin{align} c_n&=2c_{n-2}+2c_{n-1}+d_{n-1}\\ d_n&=c_{n-2}+d_{n-2}+d_{n-1} \end{align} Express $c_n$ with the second one, eliminate it from the first and you find the recurrence for $d_n$. (Note that $c_n$ and $a(n,x)$ are solutions of the same recurrence.)
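For readers who want to reproduce the sequence the question starts from, here is a small brute-force trail counter in Python (a sketch of the kind of program the asker mentions, not the code actually used; it counts a trail each time the walk reaches vertex $N$, since the trail may stop there):

```python
def count_trails(n):
    # vertices 1..n; i and j are adjacent iff |i - j| is 1 or 2
    edges = {(i, j) for i in range(1, n + 1) for j in range(i + 1, n + 1)
             if j - i in (1, 2)}

    def dfs(v, used):
        total = 1 if v == n else 0            # the trail may end here
        for e in edges:
            if v in e and e not in used:
                w = e[0] if e[1] == v else e[1]
                total += dfs(w, used | {e})
        return total

    return dfs(1, frozenset())

print([count_trails(n) for n in range(1, 11)])
# should reproduce 1, 1, 2, 4, 9, 23, 62, 174, 497, 1433
```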
Is product of two continuous functions still continuous? Let $f:\mathbb{R}\rightarrow \mathbb{R}$ and $g:\mathbb{R}\rightarrow \mathbb{R}$ be continuous. Is $h:\mathbb{R}\rightarrow \mathbb{R}$, where $h(x): = f(x) \times g(x)$, still continuous? I guess it is, but I feel difficult to manipulate the absolute difference: $$|h(x_2)-h(x_1)|=|f(x_2)g(x_2)-f(x_1)g(x_1)| \dots $$ Thanks in advance!
Hint: $$\left| f(x+h)g(x+h) - f(x)g(x) \right| = \left| f(x+h)\left( g(x+h) - g(x) \right) + \left( f(x+h) - f(x) \right) g(x) \right|$$ Can you proceed from here?
Evenly distribute points along a path I have a user defined path which a user has hand drawn - the distance between the points which make up the path is likely to be variant. I would like to find a set of points along this path which are equally separated. Any ideas how to do this?
If you measure distance along the path it is no different from a straight line. If the length is $L$ and you want $n$ points (including the ends) you put a point at one end and every $\frac{L}{n-1}$ along the way. If you measure distance as straight lines between the points there is no guarantee of a solution, but you could just start with this guess (or something a bit smaller) and "swing a compass" from each point, finding where it cuts the curve (could be more than once; this is problematic), and see how close to the end you wind up. Then a one-dimensional root finder (the parameter is the length of the radius) will do as well as possible.
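For the "distance along the path" interpretation, the usual computational recipe on a hand-drawn polyline is to accumulate segment lengths and interpolate; a minimal Python sketch (function and variable names are my own):

```python
import math

def resample_evenly(points, n):
    """Return n points spaced evenly by arc length along a polyline (n >= 2)."""
    # cumulative arc length at each input vertex
    cum = [0.0]
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        cum.append(cum[-1] + math.hypot(x1 - x0, y1 - y0))
    total = cum[-1]

    result, j = [], 0
    for i in range(n):
        target = total * i / (n - 1)
        while j < len(cum) - 2 and cum[j + 1] < target:
            j += 1
        seg = cum[j + 1] - cum[j]
        t = 0.0 if seg == 0 else (target - cum[j]) / seg   # position inside segment j
        (x0, y0), (x1, y1) = points[j], points[j + 1]
        result.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
    return result

print(resample_evenly([(0, 0), (1, 0), (1, 1)], 5))
```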
If for every $v\in V$ $\langle v,v\rangle_{1} = \langle v,v \rangle_{2}$ then $\langle\cdot,\cdot \rangle_{1} = \langle\cdot,\cdot \rangle_{2}$ Let $V$ be a finite-dimensional vector space over $\mathbb{C}$ or $\mathbb{R}$. How does one prove that if $\langle\cdot,\cdot\rangle_{1}$ and $\langle \cdot, \cdot \rangle_{2}$ are two inner products and $\langle v,v\rangle_{1} = \langle v,v\rangle_{2}$ for every $v\in V$, then $\langle\cdot,\cdot \rangle_{1} = \langle\cdot,\cdot \rangle_{2}$? The idea is clear to me, I just can't understand how to formalize it. Thank you.
You can use the polarization identity. $\langle \cdot, \cdot \rangle_1$ and $\langle \cdot, \cdot \rangle_2$ induces the norms $\| \cdot \|_1$ and $\| \cdot \|_2$ respectively, i.e.: $$\begin{align} \| v \|_1 = \sqrt{\langle v, v \rangle_1} \\ \| v \|_2 = \sqrt{\langle v, v \rangle_2} \end{align}$$ From this it is obvious that $\|v\|_1 = \|v\|_2$ for all $v \in V$, so we can write $\| \cdot \|_1 = \| \cdot \|_2 = \| \cdot \|$. By the polarization identity we get (for complex spaces): $$\begin{align} \langle x, y \rangle_1 &=\frac{1}{4} \left(\|x + y \|^2 - \|x-y\|^2 +i\|x+iy\|^2 - i\|x-iy\|^2\right) \ \forall\ x,y \in V \ \\ \langle x, y \rangle_2 &=\frac{1}{4} \left(\|x + y \|^2 - \|x-y\|^2 +i\|x+iy\|^2 - i\|x-iy\|^2\right) \ \forall\ x,y \in V \end{align}$$ since these expressions are equal, the inner products are equal.
Prove $e^{i \pi} = -1$ Possible Duplicate: How to prove Euler's formula: $\exp(i t)=\cos(t)+i\sin(t)$ ? I recently heard that $e^{i \pi} = -1$. WolframAlpha confirmed this for me, however, I don't see how this works.
This identity follows from Euler's Theorem, \begin{align} e^{i \theta} = \cos \theta + i \sin \theta, \end{align} which has many proofs. The one that I like the most is the following (sketched). Define $f(\theta) = e^{-i \theta}(\cos \theta + i \sin \theta)$. Use the product rule to show that $f^{\prime}(\theta)= 0$, so $f(\theta)$ is constant in $\theta$. Evaluate $f(0)$ to prove that $f(\theta) = f(0)$ everywhere. Take $\theta = \pi$ for your claim.
Help to understand material implication This question comes from from my algebra paper: $(p \rightarrow q)$ is logically equivalent to ... (then four options are given). The module states that the correct option is $(\sim p \lor q)$. That is: $$(p\rightarrow q) \iff (\sim p \lor q )$$ but I could not understand this problem or the solution. Could anybody help me?
$p \to q$ is only logically false if $p$ is true and $q$ is false. So if not-$p$ or $q$ (or both) are true, you do not have to worry about $p \to q$ being false. On the other hand, if both are false, then that's the same as saying $p$ is true and $q$ is false (De Morgan's Law), so $p \to q$ is false. Therefore, the two are logically equivalent.
What is the math behind the game Spot It? I just purchased the game Spot It. As per this site, the structure of the game is as follows: Game has 55 round playing cards. Each card has eight randomly placed symbols. There are a total of 50 different symbols through the deck. The most fascinating feature of this game is any two cards selected will always have ONE (and only one) matching symbol to be found on both cards. Is there a formula you can use to create a derivative of this game with different numbers of symbols displayed on each card. Assuming the following variables: * *S = total number of symbols *C = total number of cards *N = number of symbols per card Can you mathematically demonstrate the minimum number of cards (C) and symbols (S) you need based on the number of symbols per card (N)?
I have the game myself. I took the time to count out the appearance frequency of each object for each card. There are 55 cards, 57 objects, 8 per card. The interesting thing to me is that each object does not appear in equal frequency with others ... the minimum is 6, max 10, and mean 7.719. I am left curious why the makers of Spot It decided to take this approach. Apparently they favor the clover leaf over the flower, maple leaf, or snow man.
Lottery ball problem - How to go about solving? A woman works at a lottery ball factory. She's instructed to create lottery balls, starting from number 1, using the following steps: * *Open lottery ball package and remove red rubber ball. *Using two strips of digit stickers (0 through 9), create the current number by pasting the digits on the ball. *Digits not used in this way are put in a bowl where she may fish for other digits if she's missing some later. *Proceed to the next ball, incrementing the number by one. The lottery ball problem is, at what number will she arrive at before she's out of digits (granted, it's a large number, so assume these are basketball-sized rubber balls)? My question is not so much the solution as it is how to go about solving for this number? It seems evident that the first digit she'll run out of will be 1, since that's the number she starts with, however beyond that I wouldn't know how to go about determining that number. Any clues that could push me in the right direction would be greatly appreciated.
Yup, the first digit you run out of will be 1. As to how to solve it - try writing a formula for the number of $1$s in the decimal representations of the first $n$ numbers, and try and work out when it overtakes $2n$.
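Following that hint in code: the count of a given digit among $1,\dots,n$ has a standard positional closed form, which you can compare against the $2n$ stickers of each digit available after $n$ balls (two strips per ball). A Python sketch of the counting helper (my own; it only provides the tool, it does not by itself locate the first shortage):

```python
def count_digit_ones(n):
    # number of digit '1's appearing in the decimal representations of 1..n
    count, p = 0, 1
    while p <= n:
        higher, cur, lower = n // (p * 10), (n // p) % 10, n % p
        if cur == 0:
            count += higher * p
        elif cur == 1:
            count += higher * p + lower + 1
        else:
            count += (higher + 1) * p
        p *= 10
    return count

n = 123456
print(count_digit_ones(n), "ones used vs", 2 * n, "ones available")
```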
Showing the inequality $\frac{|f^{'}(z)|}{1-|f(z)|^{2}} \leq \frac{1}{1-|z|^{2}}$ I am trying to show if $|f(z)| \leq 1$, $|z| \leq 1$, then \begin{equation} \frac{|f^{'}(z)|}{1-|f(z)|^{2}} \leq \frac{1}{1-|z|^{2}} \end{equation}. I have used Cauchy's Inequality to derive $|f^{'}(z)| \leq \frac{1}{1-|z|}$ yet I still couldn't get the result I need. Also I am trying to find when equality would hold. Any tips or help would be much appreciated. Thanks!
That's the Schwarz–Pick theorem. The wikipedia page contains a proof.
Expected Value for summing over distinct random integers? Let $L=\{a_1,a_2,\ldots,a_k\}$ be a random (uniformly chosen) subset of length $k$ of the numbers $\{1,2,\ldots,n\}$. I want to find $E(X)$ where $X$ is the random variable that sums all numbers. We might want that $k < n$ too. My main problem is that I cannot get the function $q(a,k,n)$ that gives me the number of ways to write the number $a$ as the sum of exactly $k$ distinct addends less or equal $n$. This seems related but it doesn't limit the size of the numbers.
The expectation of each of the terms in the sum is $(n+1)/2$ so the expectation is $k(n+1)/2$. If you want to calculate the function $q(a,k,n)$ then you can use my Java applet here, by choosing "Partitions with distinct terms of:" $a$, "Exact number of terms:"$k$, "Each term no more than:" $n$, and then click on the "Calculate" button. If instead you start with "Compositions with distinct terms of:", then you will get a figure $k!$ times as big.
How to evaluate $\lim\limits_{h \to 0} \frac {3^h-1} {h}=\ln3$? How is $$\lim_{h \to 0} \frac {3^h-1} {h}=\ln3$$ evaluated?
There are at least two ways of doing this: Either you can use de l'Hôpital's rule, and as I pointed out in the comments the third example on Wikipedia gives the details. I think a better way of doing this (and Jonas seems to agree, as I saw after posting) is to write $f(h) = 3^{h} = e^{\log{3}\cdot h}$ and write the limit as $$\lim_{h \to 0} \frac{f(h) - f(0)}{h}$$ and recall the definition of a derivative. What comes out is $f'(0) = \log{3}$.
Computing the integral of $\log(\sin x)$ How to compute the following integral? $$\int\log(\sin x)\,dx$$ Motivation: Since $\log(\sin x)'=\cot x$, the antiderivative $\int\log(\sin x)\,dx$ has the nice property $F''(x)=\cot x$. Can we find $F$ explicitly? Failing that, can we find the definite integral over one of intervals where $\log (\sin x)$ is defined?
Series expansion can be used for this integral too. We use the following identity; $$\log(\sin x)=-\log 2-\sum_{k\geq 1}\frac{\cos(2kx)}{k} \phantom{a} (0<x<\pi)$$ This identity gives $$\int_{a}^{b} \log(\sin x)dx=-(b-a)\log 2-\sum_{k\ge 1}\frac{\sin(2kb)-\sin(2ka)}{2k^2}$$ ($a, b<\pi$) For example, $$\int_{0}^{\pi/4}\log(\sin x)dx=-\frac{\pi}{4}\log 2-\sum_{k\ge 1}\frac{\sin(\pi k/2)}{2k^2}=-\frac{\pi}{4}\log 2-\frac{1}{2}K$$ $$\int_{0}^{\pi/2} \log(\sin x)dx=-\frac{\pi}{2}\log 2$$ $$\int_{0}^{\pi}\log(\sin x)dx=-\pi \log 2$$ ($K$; Catalan's constant ... $\displaystyle K=\sum_{k\ge 1}\frac{(-1)^{k-1}}{(2k-1)^2}$)
How do we prove the existence of uncountably many transcendental numbers? I know how to prove the countability of sets using equivalence relations to other sets, but I'm not sure how to go about proving the uncountability of the transcendental numbers (i.e., numbers that are not algebraic).
If a number $t$ is algebraic, it is the root of some polynomial with integer coefficients. There are only countably many such polynomials (each having a finite number of roots), so there are only countably many such $t$. Since there are uncountably many real (or complex) numbers, and only countably many of them are algebraic, uncountably many of them must be transcendental.
Categorical description of algebraic structures There is a well-known description of a group as "a category with one object in which all morphisms are invertible." As I understand it, the Yoneda Lemma applied to such a category is simply a statement of Cayley's Theorem that every group G is isomorphic to a subset of the symmetric group on G (see the aside at the bottom of this post... I'm still a little confused on this). Assuming that I will make this clear in my own mind in the future, are there similar categorical descriptions of other algebraic object, eg rings, fields, modules, vector spaces? If so, what does the Yoneda Lemma tell us about the representability (or otherwise) of those objects? In particular, are there `nice' characterisations of other algebraic objects which correspond to the characterisation of a group arising from Cayley's Theorem as "subgroups of Sym(X) for some X"? Aside to (attempt to) work through the details of this: If $C$ is a category with one object $G$, then $h^G=\mathrm{Hom}(G,-)$ corresponds to the regular action of $G$ on itself (it takes $G$ to itself and takes the group element $f$ to the homomorphism $h_f(g)=f\circ g$). Any functor $F:C\to\mathbf{Set}$ with $F(G)=X$ gives a concrete model for the group, and the fact that natural transformations from $h^G$ to $F$ are 1-1 with elements of $X$ tells us that $G$ is isomorphic to a subgroup of $\mathrm{Sym}(X)$... somehow?
Let $C$ be a category, $C'$ the opposite category, and $S$ the category of sets. Recall that the natural embedding $e$ of $C$ in the category $S^{C'}$ of functors from $C'$ to $S$ is given by the following formulas. $\bullet$ If $c$ is an object of $C$, then $e(c)$ is the functor $C(\bullet,c)$ which attaches to each object $d$ of $C$ the set $C(d,c)$ of $C$-morphisms from $d$ to $c$. $\bullet$ If $x:c_1\to c_2$ is in $C(c_1,c_2)$, then $e(x)$ is the map from $C(c_2,d)$ to $C(c_1,d)$ defined by $$ e(x)(y)=yx. $$ In particular, if $C$ has exactly one object $c$, then $e$ is the Cayley isomorphism of the monoid $M:=C(c,c)$ onto the monoid opposite to the monoid of endomorphisms of $M$. One can also view $e$ as an anti-isomorphism of $M$ onto its monoid of endomorphisms.
Push forward and pullback in products I am reading this question about Serre duality, and there is one part in the answer that I'd like to know how it works. But after many tries I didn't get anywhere. So here is the problem. Let $X$ and $B$ be algebraic varieties over an algebraically closed field, $\pi_1$ and $\pi_2$ be the projections from $X\times B$ onto $X$ and $B$, respectively. Then it was claimed that $R^q\pi_{2,*} \pi_1^* \Omega_X^p \cong H^q(X, \Omega^p_X)\otimes \mathcal{O}_B$. I am guessing it works for any (quasi)coherent sheaf on $X$. Basically, I have two tools available, either Proposition III.8.1 of Hartshorne or going through the definition of the derived functors. Thank you.
Using flat base-change (Prop. III.9.3 of Hartshorne), one sees that $$R^q\pi_{2 *} \pi_1^*\Omega_X^p = \pi_1^* H^q(X,\Omega^p_X) = H^q(X,\Omega^p_X)\otimes \mathcal O_B.$$
Matrix Exponentiation for Recurrence Relations I know how to use Matrix Exponentiation to solve problems having linear Recurrence relations (for example Fibonacci sequence). I would like to know, can we use it for linear recurrence in more than one variable too? For example can we use matrix exponentiation for calculating ${}_n C_r$ which follows the recurrence C(n,k) = C(n-1,k) + C(n-1,k-1). Also how do we get the required matrix for a general recurrence relation in more than one variable?
@ "For example can we use matrix exponentiation for calculating nCr" There is a simple matrix as logarithm of P (which contains the binomial-coefficients): $\qquad \exp(L) = P $ where $ \qquad L = \small \begin{array} {rrrrrrr} 0 & . & . & . & . & . & . & . \\ 1 & 0 & . & . & . & . & . & . \\ 0 & 2 & 0 & . & . & . & . & . \\ 0 & 0 & 3 & 0 & . & . & . & . \\ 0 & 0 & 0 & 4 & 0 & . & . & . \\ 0 & 0 & 0 & 0 & 5 & 0 & . & . \\ 0 & 0 & 0 & 0 & 0 & 6 & 0 & . \\ 0 & 0 & 0 & 0 & 0 & 0 & 7 & 0 \end{array} $ and $ \qquad P =\small \begin{array} {rrrrrrr} 1 & . & . & . & . & . & . & . \\ 1 & 1 & . & . & . & . & . & . \\ 1 & 2 & 1 & . & . & . & . & . \\ 1 & 3 & 3 & 1 & . & . & . & . \\ 1 & 4 & 6 & 4 & 1 & . & . & . \\ 1 & 5 & 10 & 10 & 5 & 1 & . & . \\ 1 & 6 & 15 & 20 & 15 & 6 & 1 & . \\ 1 & 7 & 21 & 35 & 35 & 21 & 7 & 1 \end{array} $ L and P can be extended to arbitrary size in the obvious way
Branch cut of the logarithm I have a function $F$ holomorphic on some open set, and I have $ F(0) = 1 $ and $ F $ is non-vanishing. I want to show that there is a holomorphic branch of $ \log(F(z)) $. Now, I'm getting confused. The principal branch of logarithm removes $ (-\infty, 0] $. But if the point 0 is missing from the plane, what happens when we take $ \log{F(0)} = \log{1} + 0 = 0 $? (I'm sure we can take the principal branch, because $ \exp(z) $ satisfies the conditions in question). Any help would be appreciated. Thanks
That only means $F(z)$ cannot be 0, while $z$ can be 0.
How many different ways can you distribute 5 apples and 8 oranges among six children? How many different ways can you distribute 5 apples and 8 oranges among six children if every child must receive at least one piece of fruit? If there was a way to solve this using Pólya-Redfield that would be great, but I cannot figure out the group elements.
I am too lazy to calculate the numbers of elements with $k$ cycles in $S_8$ but if you do that yourself a solution could work as follows. (I will use this version of Redfield-Polya and $[n]$ shall denote $\{1,\dots,n\}$.) Let us take $X = [13]$, the set of fruits, where $G= S_5 \times S_8$ acts on $X$ such that the first five apples and the latter eight oranges are indistinguishable. Then $$K_n = |[n]^X/G|$$ is the number of ways to distribute these apples and oranges among $n$ distinguishable children. And $$ N_n = K_n -n\cdot K_{n-1}$$ is the number of ways to distribute these apples and oranges among $n$ distinguishable children such that every child must receive at least one piece of fruit. Now by the Theorem $$K_n = \frac{1}{|G|} \sum_{g\in G} n^{c(g)} = \frac{1}{5!\cdot 8!} \left(\sum_{g\in S_5} n^{c(g)}\right)\left(\sum_{g\in S_8} n^{c(g)}\right) = \frac{1}{5!\cdot 8!} \left(\sum_{i\in [5]} d_i n^{i}\right) \left(\sum_{i\in [8]} e_i n^{i}\right),$$ where $c(g)$ is the number of cycles of $g$, $d_i$ the number of permutations of $S_5$ with exactly $i$ cycles and $e_i$ the number of permutations of $S_8$ with exactly $i$ cycles. The number that we are looking for in the end is $N_6$.
Why truth table is not used in logic? One day, I bought Principia Mathematica and saw a lot of proofs of logical equations, such as $\vdash p \implies p$ or $\vdash \lnot (p \wedge \lnot p)$. (Of course there's bunch of proofs about rel&set in later) After reading these proofs, I suddenly thought that "why they don't use the truth table?". I know this question is quite silly, but I don't know why it's silly either (just my feeling says that). My (discrete math) teacher says that "It's hard question, and you may not understand until you'll become university student," which I didn't expected (I thought the reason would be something easy). Why people don't use truth table to prove logical equations? (Except for study logic (ex: question like "prove this logic equation using truth table"), of course.) PS. My teacher is a kind of people who thinks something makes sense iff something makes sense mathematically.
In Principia, the authors wanted to produce an explicit list of purely logical ideas, including an explicit finite list of axioms and rules of inference, from which all of mathematics could be derived. The method of truth tables is not such a finite list, and in any case would only deal with propositional logic. The early derivations of Principia are quite tedious, and could have been eliminated by adopting a more generous list of initial axioms. But for reasons of tradition, the authors wanted their list to be as small as possible. Remark: Principia is nowadays only of historical interest, since the subject has developed in quite different directions from those initiated by Russell and Whitehead. The idea of basing mathematics (including the development of the usual integers, reals, function spaces) purely on "logic" has largely been abandoned in favour of set-theory based formulations. And Principia does not have a clear separation between syntax and semantics. Such a separation is essential to the development of Model Theory in the past $80$ years.
Combination of smartphones' pattern password Have you ever seen this interface? Nowadays, it is used for locking smartphones. If you haven't, here is a short video on it. The rules for creating a pattern is as follows. * *We must use four nodes or more to make a pattern at least. *Once a node is visited, then the node can't be visited anymore. *You can start at any node. *A pattern has to be connected. *Cycle is not allowed. How many distinct patterns are possible?
I believe the answer can be found in OEIS. You have to add the paths of length $4$ through $9$ on a $3\times3$ grid, so $80+104+128+112+112+40=576$ I have validated the $80$, $4$ number paths. If we number the grid $$\begin{array}{ccc}1&2&3\\4&5&6\\7&8&9 \end{array}$$ The paths starting $12$ are $1236, 1254, 1258, 1256$ and there were $8$ choices of corner/direction, so $32$ paths start at a corner. Starting at $2$, there are $2145,2147,2369,2365,2541,2547,2587,2589,2563,2569$ for $10$ and there are $4$ edge cells, so $40$ start at an edge. Starting at $5$, there are $8$ paths-four choices of first direction and two choices of which way to turn Added per user3123's comment that cycles are allowed: unfortunately in OEIS there are a huge number of series titled "Number of n-step walks on square lattice" and "Number of walks on square lattice", and there is no specific definition to tell one from another. For $4$ steps, it adds $32$ more paths-four squares to go around, four places to start in each square, and two directions to cycle. So the $4$ step count goes up to $112$. For longer paths, the increase will be larger. But there still will not be too many.
Variance for summing over distinct random integers Let $L=\{a_1,a_2,\ldots,a_k\}$ be a random (uniformly chosen) subset of length $k$ of the numbers $\{1,2,\ldots,n\}$. I want to find $\operatorname{Var}(X)$ where $X$ is the random variable that sums all numbers with $k < n$. Earlier today I asked about the expected value, which I noticed was easier than I thought. But now I am sitting on the variance since several hours but cannot make any progress. I see that $E(X_i)=\frac{n+1}{2}$ and $E(X)=k \cdot \frac{n+1}{2}$, I tried to use $\operatorname{Var}\left(\sum_{i=1}^na_iX_i\right)=\sum_{i=1}^na_i^2\operatorname{Var}(X_i)+2\sum_{i=1}^{n-1}\sum_{j=i+1}^na_ia_j\operatorname{Cov}(X_i,X_j)$ but especially the second sum is hard to evaluate by hand ( every time I do this I get a different result :-) ) and I have no idea how to simplify the Covariance term. Furthermore I know that $\operatorname{Var}(X)=\operatorname{E}\left(\left(X-\operatorname{E}(X)\right)^2\right)=\operatorname{E}\left(X^2\right)-\left(\operatorname{E}(X)\right)^2$, so the main Problem is getting $=\operatorname{E}\left(X^2\right)$. Maybe there is also a easier way than to use those formulas. I think I got the correct result via trial and error: $\operatorname{Var}(X)=(1/12) k (n - k) (n + 1)$ but not the way how to get there..
So I actually assigned this problem to a class a couple weeks ago. You can do what you did, of course. But if you happen to know the "finite population correction" from statistics, it's useful here. This says that if you sample $k$ times from a population of size $n$, without replacement, the variance of the sum of your sample will be $(n-k)/(n-1)$ times the variance that you'd get summing with replacement. The variance if you sum with replacement is, of course, $k$ times the variance of a single element. So you get $Var(X) = k(n-k)/(n-1) \times Var(U)$, where $U$ is a uniform random variable on $\{1, 2, \ldots, n\}$. It's well-known that $Var(U) = (n^2-1)/12$ (and you can check this by doing the sums) which gives the answer. Of course this formula is derived by summing covariances, so in a sense I've just swept that under the rug...
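A quick Monte Carlo sanity check of $\operatorname{Var}(X)=\frac{k(n-k)(n+1)}{12}$ (my own sketch, sampling without replacement):

```python
import random
import statistics

n, k, trials = 20, 7, 200_000
random.seed(0)
sums = [sum(random.sample(range(1, n + 1), k)) for _ in range(trials)]

print(statistics.pvariance(sums))      # empirical variance, close to...
print(k * (n - k) * (n + 1) / 12)      # ...the formula: 159.25
```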
Using binomial expansion to derive Pascal's rule $\displaystyle \binom{n}{k}=\binom{n-1}{k} + \binom{n-1}{k-1}$ $\displaystyle \left(1+x\right)^{n} = \left(1+x\right)\left(1+x\right)^{n-1}$ How do I use binomial expansion on the right-hand side of the second equation and convert it to the first equation? The left-hand side is obvious, but I'm not sure how to do the right-hand side. Please give me some hints. Thanks!
Binomial expansion of both sides of $$\left(1+x\right)^{n} = \left(1+x\right)\left(1+x\right)^{n-1}$$ gives $$\sum_{k=0}^n \binom{n}{k} x^k = \left(1+x\right)\sum_{k=0}^{n-1} \binom{n-1}{k} x^k$$ By distributivity on the right hand side we find $$\left(\sum_{k=0}^{n-1} \binom{n-1}{k} x^k \right)+\left(\sum_{k=0}^{n-1} \binom{n-1}{k} x^{k+1} \right) = \left(\sum_{k=0}^{n} \binom{n-1}{k} x^k \right)+\left(\sum_{k=0}^{n} \binom{n-1}{k-1} x^{k}\right)$$ the limits of the summations do not change the sum because $\binom{n-1}{n} = 0$ and $\binom{n-1}{-1} = 0$. Thus we have $$\sum_{k=0}^n \binom{n}{k} x^k = \sum_{k=0}^{n} \left(\binom{n-1}{k} + \binom{n-1}{k-1}\right) x^k$$ and extracting the $x^k$ coefficients from both sides gives the identity $$\displaystyle \binom{n}{k}=\binom{n-1}{k} + \binom{n-1}{k-1}.$$
Equality for the Gradient We have that $f : \mathbb{R}^2 \to \mathbb{R}, f \in C^2$ and $h= \nabla f = \left(\frac{\partial f}{\partial x_1 },\frac{\partial f}{\partial x_2 } \right)$, $x=(x_1,x_2)$. Now the proposition I try to show says that $$\int_0^1 \! \langle \nabla f(x \cdot t),x \rangle\,dt = \int_0^{x_1} \! h_1(t,0)\,dt +\int_0^{x_2} \! h_2(x_1,t)\,dt$$ I know that $\langle \nabla f(x \cdot t),x \rangle=d f(tx) \cdot x$ but it doesn't seem to help, maybe you have to make a clever substitution? (Because the range of integration changes). Thanks in advance.
Before you read this answer, fetch a piece of paper and draw the following three points on it: $(0,0)$, $(x_{1},0)$ and $(x_{1},x_{2})$. These are the corners of a right-angled triangle whose hypotenuse I'd like to call $\gamma$ whose side on the $x_1$-axis I call $\gamma_{1}$ and whose parallel to the $x_2$-axis I call $\gamma_2$. More formally, let $\gamma: [0,1] \to \mathbb{R}^2$ be the path $t \mapsto tx$. Similarly, let $\gamma_{1} : [0,1] \to \mathbb{R}^2$ be the path $t \mapsto (tx_{1},0)$ and $\gamma_{2} : [0,1] \to \mathbb{R}^2$ be the path $t \mapsto (x_{1}, tx_{2})$. The integral on the left hand side can be written as $$\int_{0}^{1} df(\gamma(t))\cdot\dot{\gamma}(t)\,dt = \int_{0}^{1} \frac{d}{dt}(f \circ \gamma)(t)\,dt = f(\gamma(1)) - f(\gamma(0)) = f(x_1, x_2) - f(0,0).$$ Similarly, after some simple manipulations the right hand side is equal to $$\int_{0}^{1} \frac{d}{dt} (f \circ \gamma_{1})(t)\,dt + \int_{0}^{1} \frac{d}{dt} (f \circ \gamma_2)(t)\,dt = \left( f(\gamma_{1}(1)) - f(\gamma_{1}(0))\right) + \left(f(\gamma_2 (1)) - f(\gamma_2(0))\right)$$ and as $\gamma_1 (1) = (x_1,0) = \gamma_2 (0)$ two terms cancel out and what remains is $f(x_{1},x_{2}) - f(0,0)$. Thus the left hand side and the right hand side are equal.
Looking for the name of a Rising/Falling Curve I'm looking for a particular curve algorithm that is similar to a bell curve/distribution, but instead of approaching zero at its ends, it stops at its length/limit. You specify the length of the curve and its maximum peak, and the plot will approach its peak at the midpoint of the length (the middle) and then curve downward to its end. As a math noob, I may not be making any sense. Here's an image of the curve I'm looking for:
The curve which you are looking for is a parabola. When I plugged in the equation $$f(x) = -(x-3.9)^{2} + 4$$ I got this figure, which somewhat resembles what you are looking for.
How do I get a combination sequence formula? What would be a closed-form formula that would determine the $i$th value of the sequence 1, 3, 11, 43, 171... where each value is four times the previous value, minus one? Thanks!
The sequence can be written using the recurrence formula: $y_n = a+by_{n-1}$ . Then using the first 3 terms, one gets $y_1=1, a=-1, b=4 $ Sometimes it is easy to convert a recurrence formula to a closed form, by studying the structure of the math relations. $y_1 = 1$ $y_2 = a+by_1$ $y_3 = a+by_2 = a+b(a+by_1) = a+ab+b^2y_1$ $y_4 = a+by_3 = a+b(a+by_2) = a+ab+ab^2+b^3y_1$ $y_5 = a+ab+ab^2+ab^3+b^4y_1$ there is a geometric sequence $a+ab+ab^2+\cdots = a(1+b+b^2+\cdots) $ $ (1+b+b^2+\cdots+b^n) = (1-b^{n+1}) / (1-b) $ therefore the general formula is: $y_n = a+ab+ab^2+\cdots+ab^{n-2}+b^{n-1}y_1$ $y_n = a(1-b^{n-1}) / (1-b) + b^{n-1}y_1 $ Using the above parameters, $y_1=1 , a=-1, b=4, $ $y_n = -1(1-4^{n-1})/(1-4) +4^{n-1}y_1$ $y_n = -\frac{4^{n-1}}{3} +\frac13 +4^{n-1}y_1$ $y_n = \frac{2}{3}4^{n-1} + \frac{1}{3} $
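A few lines of Python (a sketch) confirm that the closed form agrees with the recurrence $y_n = 4y_{n-1}-1$:

```python
def closed_form(n):
    # y_n = (2/3)*4^(n-1) + 1/3, kept in exact integer arithmetic
    return (2 * 4 ** (n - 1) + 1) // 3

y = 1
for n in range(1, 10):
    assert y == closed_form(n), (n, y, closed_form(n))
    y = 4 * y - 1          # the recurrence y_{n+1} = 4*y_n - 1
print([closed_form(n) for n in range(1, 6)])   # [1, 3, 11, 43, 171]
```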
How to find a polynomial from a given root? I was asked to find a polynomial with integer coefficients from a given root/solution. Lets say for example that the root is: $\sqrt{5} + \sqrt{7}$. * *How do I go about finding a polynomial that has this number as a root? *Is there a specific way of finding a polynomial with integer coefficients? Any help would be appreciated. Thanks.
One can start from the equation $x=\sqrt5+\sqrt7$ and try to get rid of the square roots one at a time. For example, $x-\sqrt5=\sqrt7$, squaring yields $(x-\sqrt5)^2=7$, developing the square yields $x^2-2=2x\sqrt5$, and squaring again yields $(x^2-2)^2=20x^2$, that is, $x^4-24x^2+4=0$.
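As a sanity check, one can verify symbolically that $\sqrt5+\sqrt7$ is indeed a root of $x^4-24x^2+4$; here is a short sketch using sympy (assuming it is available):

```python
import sympy as sp

alpha = sp.sqrt(5) + sp.sqrt(7)
x = sp.Symbol('x')

print(sp.expand(alpha**4 - 24*alpha**2 + 4))   # 0
print(sp.minimal_polynomial(alpha, x))         # x**4 - 24*x**2 + 4
```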
How to calculate $\int_0^{2\pi} \sqrt{1 - \sin^2 \theta}\;\mathrm d\theta$ How to calculate: $$ \int_0^{2\pi} \sqrt{1 - \sin^2 \theta}\;\mathrm d\theta $$
Use Wolfram Alpha! Plug in "integrate sqrt(1-sin^2(x))". Then press "show steps". You can enter the bounds by hand... http://www.wolframalpha.com/input/?i=integrate+sqrt%281-sin%5E2%28x%29%29
A property of $J$-semisimple rings I'd like a little help on how to begin this problem. Show that a PID $R$ is Jacobson-semisimple $\Leftrightarrow$ $R$ is a field or $R$ contains infinitely many nonassociate irreducible elements. Thanks.
If $R$ is a PID and has infinitely many nonassociated irreducible elements, then given any nonunit $x\in R$ you can find an irreducible element that does not divide $x$; can you find a maximal ideal that does not contain $x$? If so, you will have proven that $x$ is not in the Jacobson radical of $R$. The case where $R$ is a field is pretty easy as well. Conversely, suppose $R$ is a PID that is not a field, but contains only finitely many nonassociated primes; can you exhibit an element that will necessarily lie in every maximal ideal of $R$?
Conditional probability Given the events $A, B$ the conditional probability of $A$ supposing that $B$ happened is: $$P(A | B)=\frac{P(A\cap B )}{P(B)}$$ Can we write that for the Events $A,B,C$, the following is true? $$P(A | B\cap C)=\frac{P(A\cap B\cap C )}{P(B\cap C)}$$ I have couple of problems with the equation above; it doesn't always fit my logical solutions. If it's not true, I'll be happy to hear why. Thank you.
Yes, you can; I see no fault. If you put $K = B \cap C$, you obtain the original formula (assuming $P(B\cap C)>0$, so that the conditional probability is defined).
Logic problem - what kind of logic is it? I would be most grateful if someone could verify my solution to this problem. 1) Of Aaron, Brian and Colin, only one man is smart. Aaron says truthfully: 1. If I am not smart, I will not pass Physics. 2. If I am smart, I will pass Chemistry. Brian says truthfully: 3. If I am not smart, I will not pass Chemistry. 4. If I am smart, I will pass Physics. Colin says truthfully: 5. If I am not smart, I will not pass Physics. 6. If I am smart, I will pass Physics. While I. The smart man is the only man to pass one particular subject. II. The smart man is also the only man to fail the other particular subject. Which one of the three men is smart? Why? I would say that it could have been any one of them, as the implications in every statement are not strong enough to disprove the statements I,II. But I'm not sure if my solution is enough, as I'm not sure what kind of logic it is.
You could just create a simple table of the possibilities and read off the solution.
A Curious Binomial Sum Identity without Calculus of Finite Differences Let $f$ be a polynomial of degree $m$ in $t$. The following curious identity holds for $n \geq m$, \begin{align} \binom{t}{n+1} \sum_{j = 0}^{n} (-1)^{j} \binom{n}{j} \frac{f(j)}{t - j} = (-1)^{n} \frac{f(t)}{n + 1}. \end{align} The proof follows by transforming it into the identity \begin{align} \sum_{j = 0}^{n} \sum_{k = j}^{n} (-1)^{k-j} \binom{k}{j} \binom{t}{k} f(j) = \sum_{k = 0}^{n} \binom{t}{k} (\Delta^{k} f)(0) = f(t), \end{align} where $\Delta^{k}$ is the $k^{\text{th}}$ forward difference operator. However, I'd like to prove the aforementioned identity directly, without recourse to the calculus of finite differences. Any hints are appreciated! Thanks.
This is just Lagrange interpolation for the values $0, 1, \dots, n$. This means that after cancelling the denominators on the left you can easily check that the equality holds for $t=0, \dots, n$.
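If it helps to see the identity confirmed before proving it, here is a small symbolic check (a sketch with sympy; the polynomial $f$ and the value $n=4$ are arbitrary choices with $\deg f \le n$):

```python
import sympy as sp

t = sp.symbols('t')
f = 3*t**2 - t + 5          # any polynomial in t of degree m <= n
n = 4

# binomial(t, n+1) written out explicitly as t(t-1)...(t-n)/(n+1)!
binom_t = sp.Mul(*[(t - i) for i in range(n + 1)]) / sp.factorial(n + 1)
lhs = binom_t * sum((-1)**j * sp.binomial(n, j) * f.subs(t, j) / (t - j)
                    for j in range(n + 1))
rhs = (-1)**n * f / (n + 1)
print(sp.simplify(lhs - rhs))   # 0
```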
Exact values of $\cos(2\pi/7)$ and $\sin(2\pi/7)$ What are the exact values of $\cos(2\pi/7)$ and $\sin(2\pi/7)$ and how do I work it out? I know that $\cos(2\pi/7)$ and $\sin(2\pi/7)$ are the real and imaginary parts of $e^{2\pi i/7}$ but I am not sure if that helps me...
There are various ways of construing and attacking your question. At the most basic level: it's no problem to write down a cubic polynomial satisfied by $\alpha = \cos(2 \pi/7)$ and hit it with Cardano's cubic formula. For instance, if we put $z = \zeta_7 = e^{2 \pi i/7}$, then $2\alpha = z + \overline{z} = z + \frac{1}{z}$. A little algebra leads to the polynomial $P(t) = t^3 + \frac{1}{2} t^2 - \frac{1}{2}t - \frac{1}{8}$ which is irreducible with $P(\alpha) = 0$. (Note that the noninteger coefficients of $P(t)$ imply that $\alpha$ is not an algebraic integer. In this respect, the quantity $2 \alpha$ is much better behaved, and it is often a good idea to work with $2 \alpha$ instead of $\alpha$.) To see what you get when you apply Cardano's formula, consult the other answers or just google for it: for instance I quickly found this page, among many others (including wikipedia) which does it. The expression is kind of a mess, which gives you the idea that having these explicit radical expressions for roots of unity (and related quantities like the values of the sine and cosine) may not actually be so useful: if I wanted to compute with $\alpha$ (and it has come up in my work!) I wouldn't get anything out of this formula that I didn't get from $2 \alpha = \zeta_7 + \zeta_7^{-1}$ or the minimal polynomial $P(t)$. On the other hand, if you know some Galois theory, you know that the Galois group of every cyclotomic polynomial is abelian, so there must exist a radical expression for $\zeta_n$ for any $n \in \mathbb{Z}^+$. (We will usually not be able to get away with only repeatedly extracting square roots; that could only be sufficient when Euler's totient function $\varphi(n)$ is a power of $2$, for instance, so not even when $n = 7$.) From this perspective, applying the cubic formula is a big copout, since there is no analogous formula in degree $d > 4$: the general polynomial of such a degree cannot be solved by radicals...but cyclotomic polynomials can. So what do you do in general? The answer was known to Gauss, and involves some classical algebra -- resolvents, Gaussian periods, etc. -- that is not very well remembered nowadays. In fact I have never gone through the details myself. But I cast around on the web for a while looking for a nice treatment, and I eventually found this writeup by Paul Garrett. I recommend it to those who want to learn more about this (not so useful, as far as I know, but interesting) classical problem: his notes are consistently excellent, and have the virtue of concision (which I admire especially for lack of ability to produce it myself).
Complete induction of $10^n \equiv (-1)^n \pmod{11}$ To prove $10^n \equiv (-1)^n\pmod{11}$, $n\geq 0$, I started an induction. It's $$11|((-1)^n - 10^n) \Longrightarrow (-1)^n -10^n = k*11,\quad k \in \mathbb{Z}. $$ For $n = 0$: $$ (-1)^0 - (10)^0 = 0*11 $$ $n\Rightarrow n+1$ $$\begin{align*} (-1) ^{n+1} - (10) ^{n+1} &= k*11\\ (-1)*(-1)^n - 10*(10)^n &= k*11 \end{align*}$$ But I don't get the next step.
Since $10 \equiv -1 \pmod{11}$, you can just raise both sides to the power of $n$. In particular, for the inductive step, $10^{n+1} = 10 \cdot 10^n \equiv (-1)\cdot(-1)^n = (-1)^{n+1} \pmod{11}$.
Limit of monotonic functions at infinity I understand that if a function is monotonic then the limit at infinity is either $\infty$,a finite number or $-\infty$. If I know the derivative is bigger than $0$ for every $x$ in $[0, \infty)$ then I know that $f$ is monotonically increasing but I don't know whether the limit is finite or infinite. If $f'(x) \geq c$ and $c \gt 0$ then I know the limit at infinity is infinity and not finite, but why? How do I say that if the limit of the derivative at infinity is greater than zero, then the limit is infinite?
You can also prove it directly by the Mean Value Theorem: for some $\alpha \in (0,x)$, $$f(x)-f(0)=f'(\alpha)(x-0) \geq cx \,.$$ Thus $f(x) \geq cx + f(0)$.
Primitive polynomials of finite fields there are two primitive polynomials which I can use to construct $GF(2^3)=GF(8)$: $p_1(x) = x^3+x+1$ $p_2(x) = x^3+x^2+1$ $GF(8)$ created with $p_1(x)$: 0 1 $\alpha$ $\alpha^2$ $\alpha^3 = \alpha + 1$ $\alpha^4 = \alpha^3 \cdot \alpha=(\alpha+1) \cdot \alpha=\alpha^2+\alpha$ $\alpha^5 = \alpha^4 \cdot \alpha = (\alpha^2+\alpha) \cdot \alpha=\alpha^3 + \alpha^2 = \alpha^2 + \alpha + 1$ $\alpha^6 = \alpha^5 \cdot \alpha=(\alpha^2+\alpha+1) \cdot \alpha=\alpha^3+\alpha^2+\alpha=\alpha+1+\alpha^2+\alpha=\alpha^2+1$ $GF(8)$ created with $p_2(x)$: 0 1 $\alpha$ $\alpha^2$ $\alpha^3=\alpha^2+1$ $\alpha^4=\alpha \cdot \alpha^3=\alpha \cdot (\alpha^2+1)=\alpha^3+\alpha=\alpha^2+\alpha+1$ $\alpha^5=\alpha \cdot \alpha^4=\alpha \cdot(\alpha^2+\alpha+1)=\alpha^3+\alpha^2+\alpha=\alpha^2+1+\alpha^2+\alpha=\alpha+1$ $\alpha^6=\alpha \cdot (\alpha+1)=\alpha^2+\alpha$ So now let's say I want to add $\alpha^2 + \alpha^3$ in both fields. In field 1 I get $\alpha^2 + \alpha + 1$ and in field 2 I get $1$. Multiplication is the same in both fields ($\alpha^i \cdot \alpha^j = \alpha^{(i+j)\bmod(q-1)}$). So does it work so, that when some $GF(q)$ is constructed with different primitive polynomials then addition tables will vary and multiplication tables will be the same? Or maybe one of the presented polynomials ($p_1(x), p_2(x)$) is not valid to construct the field (although both are primitive)?
The generator $\alpha$ for your field with the first description cannot be equal to the generator $\beta$ for your field with the second description. An isomorphism between $\mathbb{F}_2(\alpha)$ and $\mathbb{F}_2(\beta)$ is given by taking $\alpha \mapsto \beta + 1$; you can check that $\beta + 1$ satisfies $p_1$ iff $\beta$ satisfies $p_2$.
Reference for a proof of the Hahn-Mazurkiewicz theorem The Hahn-Mazurkiewicz theorem states that a space $X$ is a Peano Space if and only if $X$ is compact, connected, locally connected, and metrizable. If anybody knows a book with a proof, please let me know. Thanks. P.S. (added by t.b.) A Peano space is a topological space which is the continuous image of the unit interval.
Read the section on Peano spaces in General Topology by Stephen Willard.
Transitive groups Someone told me the only transitive subgroup of $A_6$ that contains a 3-cycle and a 5-cycle is $A_6$ itself. (1) What does it mean to be a "transitive subgroup?" I know that a transitive group action is one where if you have a group $G$ and a set $X$, you can get from any element in $X$ to any other element in $X$ by multiplying it by an element in $G$. Is a transitive subgroup just any group that acts transitively on a set? And if so, does its transitiveness depend on the set it's acting on? (2) Why is $A_6$ the only transitive subgroup of $A_6$ that contains a 3-cycle and a 5-cycle? Thank you for your help :)
Let $H\leq A_6$ be transitive and contain a 3-cycle and a 5-cycle. Suppose, if possible, that $H\neq A_6$, and let us compute $|H|$. $|H|$ is divisible by 15, and divides $360=|A_6|$, so it is one of $\{15,30,45,60,90,120,180\}$. * *$|H|$ cannot be $90$, $120$, or $180$, otherwise we get a subgroup of $A_6$ of index less than $6$. *$|H|$ cannot be 15, since then $A_6$ would have an element of order 15, which is not possible. *$|H|$ cannot be 45, since a group of order 45 is abelian and so it contains an element of order 15. *$|H|$ cannot be 30, since a group of order 30 has a normal Sylow-5 subgroup, and so it will contain a subgroup of order 15, hence an element of order 15. Hence $|H|$ should be $60$. Now in this subgroup of order 60, the Sylow-5 subgroup cannot be normal, since if it were normal, then it would also be normalized by an element of order 3, giving a subgroup of order 15, hence an element of order 15. So $H$ is a group of order $60$ which has no normal Sylow-5 subgroup; $H$ must be isomorphic to $A_5$. There are 6 Sylow-5 subgroups of $H\cong A_5$, hence 24 elements of order 5; they will be 5-cycles, each fixing an element in $\{1,2,...,6\}$. Let $(12345)\in H$. As $H$ is a transitive subgroup, there will be an element $\sigma \in H$ such that $\sigma(6)=1$, so $\sigma (12345)\sigma^{-1}\in H$ will be a 5-cycle fixing 1; in this way all Sylow-5 subgroups of $A_6$, and hence all elements of order 5 of $A_6$, will be in $H$, exceeding the size of $H$. Hence we must have $H=A_6$.
Proving that $\lim\limits_{x\to\infty}f'(x) = 0$ when $\lim\limits_{x\to\infty}f(x)$ and $\lim\limits_{x\to\infty}f'(x)$ exist I've been trying to solve the following problem: Suppose that $f$ and $f'$ are continuous functions on $\mathbb{R}$, and that $\displaystyle\lim_{x\to\infty}f(x)$ and $\displaystyle\lim_{x\to\infty}f'(x)$ exist. Show that $\displaystyle\lim_{x\to\infty}f'(x) = 0$. I'm not entirely sure what to do. Since there's not a lot of information given, I guess there isn't very much one can do. I tried using the definition of the derivative and showing that it went to $0$ as $x$ went to $\infty$ but that didn't really work out. Now I'm thinking I should assume $\displaystyle\lim_{x\to\infty}f'(x) = L \neq 0$ and try to get a contradiction, but I'm not sure where the contradiction would come from. Could somebody point me in the right direction (e.g. a certain theorem or property I have to use?) Thanks
Hint: If you assume $\lim _{x \to \infty } f'(x) = L \ne 0$, the contradiction would come from the mean value theorem (consider $f(x)-f(M)$ for a fixed but arbitrarily large $M$, and let $x \to \infty$). Intuition: if the limit of $f(x)$ exists, the graph has a horizontal asymptote, so the function flattens out as $x \to \infty$; since $f'(x)$ is assumed to have a limit, that limit can only be $0$.
Reference request: introduction to commutative algebra My goal is to pick up some commutative algebra, ultimately in order to be able to understand algebraic geometry texts like Hartshorne's. Three popular texts are Atiyah-Macdonald, Matsumura (Commutative Ring Theory), and Eisenbud. There are also other books by Reid, Kemper, Sharp, etc. Can someone outline the differences between these texts, their relative strengths, and their intended audiences? I am not listing my own background and strengths, on purpose, (a) so that the answers may be helpful to others, and (b) I might be wrong about myself, and I want to hear more general opinions than what might suite my narrow profile (e.g. If I said "I only like short books", then I might preclude useful answers about Eisenbud, etc.).
It's a bit late. But since no one has mentioned it, I would mention Gathmann's lecture notes on Commutative Algebra (https://www.mathematik.uni-kl.de/~gathmann/class/commalg-2013/commalg-2013.pdf). The exposition is excellent. The content is comparable to Atiyah-Macdonald, but contains much more explanation. It emphasizes the geometric intuitions throughout the lectures. For example, the chapters on integral ring extensions and Noether normalization have one of the best expositions of the geometric pictures behind these important algebraic concepts that I have read among several introductory books on commutative algebra. Chapters usually begin with a very good motivation and give many examples.
Equivalent Definitions of Positive Definite Matrix As Wikipedia tells us, a real $n \times n$ symmetric matrix $G = [g_{ij}]$ is positive definite if $v^TGv >0$ for all $0 \neq v \in \mathbb{R}^n$. By a well-known theorem of linear algebra it can be shown that $G$ is positive definite if and only if the eigenvalues of $G$ are positive. Therefore, this gives us two distinct ways to say what it means for a matrix to be positive definite. In Amann and Escher's Analysis II, exercise 7.1.8 seems to provide yet another way recognize a positive definite matrix. In this exercise, $G$ is defined to be positive definite if there exists a positive number $\gamma$ such that $$ \sum\limits_{i,j = 1}^n g_{ij}v^iv^j \geq \gamma |v|^2 $$ I have not before seen this characterization of a positive definite matrix and I have not been successful at demonstrating that this characterization is equivalent to the other two characterizations listed above. Can anyone provide a hint how one might proceed to demonstrate this apparent equivalence or suggest a reference that discusses it?
Let's number the definitions: * *$v^T G v > 0$ for all nonzero $v$. *$G$ has positive eigenvalues. *$v^T G v > \gamma v^T v$ for some $\gamma > 0$. You know that 1 and 2 are equivalent. It's not hard to see that 3 implies 1. So it remains to show that either 1 or 2 implies 3. A short proof: 2 implies 3 because we can take $\gamma$ to be, say, half the smallest eigenvalue of $G$. Another short proof: 1 implies 3 because 3 is equivalent to the condition that $v^T G v > \gamma$ for all $v$ on the unit sphere. But the unit sphere is compact, so if $v^T G v$ is positive on the unit sphere, it attains a positive minimum. (I'd like to take the time to complain about definition 2. It is a misleading definition in that the statement it is describing makes sense for all matrices, but it is not equivalent to the first definition in this generality. The problem is that positive-definiteness is a property of a bilinear form $V \times V \to \mathbb{R}$, whereas eigenvalues are a property of an endomorphism $V \to V$, and in complete generality there's no natural way to turn one into the other. To do this you really need something like an inner product.)
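For a concrete feel for the equivalence, one can compare the smallest eigenvalue of $G$ with a sampled approximation of the minimum of $v^TGv$ over the unit sphere; the following is only an illustrative sketch with an arbitrarily chosen matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
G = A @ A.T + np.eye(4)            # symmetric positive definite by construction

lam_min = np.linalg.eigvalsh(G).min()

# minimum of v^T G v over (a fine sample of) the unit sphere
V = rng.standard_normal((200_000, 4))
V /= np.linalg.norm(V, axis=1, keepdims=True)
quad_min = np.einsum('ij,jk,ik->i', V, G, V).min()

print(lam_min, quad_min)   # quad_min is at least lam_min, and close to it
```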
Number of possible sets for given N How many possible valid collections are there for a given positive integer N given the following conditions: All the sums from 1 to N should be possible to be made by selecting some of the integers. Also this has to be done in way such that if any integer from 1 to N can be made in more than one way by combining other selected integers then that set of integers is not valid. For example, with N = 7, The valid collections are:{1,1,1,1,1,1,1},{1,1,1,4},{1,2,2,2},{1,2,4} Invalid collections are: {1,1,1,2,2} because the sum adds up to 7 but 2 can be made by {1,1} and {2}, 3 can be made by {1,1,1} and {1,2}, 4 can be made by {1,1,2} and {2,2} and similarly 5, 6 and 7 can also be made in multiple ways using the same set. {1,1,3,6} because all from 1 to 7 can be uniquely made but the sum is not 7 (its 11).
The term I would use is "multiset". Note that your multiset must contain 1 (as this is the only way to get a sum of 1). Suppose there are $r$ different values $a_1 = 1, \ldots, a_r$ in the multiset, with $k_j$ copies of $a_j$. Then we must have $a_j = (k_{j-1}+1) a_{j-1}$ for $j = 2, \ldots, r$, and $N = (k_r + 1) a_r - 1$. Working backwards, if $A(N)$ is the number of valid multisets summing to $N$, for each factorization $N+1 = ab$ where $a$ and $b$ are positive integers with $b > 1$ you can take $a_r = a$, $k_r = b - 1$, together with any valid multiset summing to $a-1$. Thus $A(N) = \sum_{b | N+1, b > 1} A((N+1)/b - 1)$ for $N \ge 1$, with $A(0) = 1$. We then have, if I programmed it right, 1, 1, 2, 1, 3, 1, 4, 2, 3, 1, 8, 1, 3, 3, 8, 1, 8, 1, 8, 3 for $N$ from 1 to 20. This matches OEIS sequence A002033, "Number of perfect partitions of n".
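The recurrence is short to program; the following sketch (with memoization) reproduces the values quoted above:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def A(N):
    # A(N) = sum over divisors b > 1 of N+1 of A((N+1)/b - 1), with A(0) = 1
    if N == 0:
        return 1
    return sum(A((N + 1) // b - 1) for b in range(2, N + 2) if (N + 1) % b == 0)

print([A(N) for N in range(1, 21)])
# [1, 1, 2, 1, 3, 1, 4, 2, 3, 1, 8, 1, 3, 3, 8, 1, 8, 1, 8, 3]
```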
Real-world applications of prime numbers? I am going through the problems from Project Euler and I notice a strong insistence on Primes and efficient algorithms to compute large primes efficiently. The problems are interesting per se, but I am still wondering what the real-world applications of primes would be. What real tasks require the use of prime numbers? Edit: A bit more context to the question: I am trying to improve myself as a programmer, and having learned a few good algorithms for calculating primes, I am trying to figure out where I could apply them. The explanations concerning cryptography are great, but is there nothing else that primes can be used for?
Thought I'd mention an application (or more like an explicit effect, rather than a direct application) that prime numbers have on computing fast Fourier transforms (FFTs), which are of fundamental use to many fields (e.g. signal processing, electrical engineering, computer vision). It turns out that most algorithms for computing FFTs go fastest on inputs of power-of-two size and slowest on those of prime size. This effect is not small; in fact, it is often recommended, when memory is not an issue compared to time, to pad one's input to a power of 2 (increasing the input size to earn a speedup). Papers on this have been written: e.g. see Discrete Fourier transforms when the number of data samples is prime by Rader. And github issues like this suggest it is still an issue. Very specific algorithms (e.g. see this one using the Chinese remainder theorem for cases where the size is a product of relative primes) have been developed that, in my opinion, constitute some relevancy of primality to these applications.
Connections between metrics, norms and scalar products (for understanding e.g. Banach and Hilbert spaces) I am trying to understand the differences between $$ \begin{array}{|l|l|l|} \textbf{vector space} & \textbf{general} & \textbf{+ completeness}\\\hline \text{metric}& \text{metric space} & \text{complete space}\\ \text{norm} & \text{normed} & \text{Banach space}\\ \text{scalar product} & \text{pre-Hilbert space} & \text{Hilbert space}\\\hline \end{array} $$ What I don't understand are the differences and connections between metric, norm and scalar product. Obviously, there is some kind of hierarchy but I don't get the full picture. Can anybody help with some good explanations/examples and/or readable references?
When is a normed subspace of a vector space a pre-Hilbert space? The analogous concept to orthogonal and orthonormal sequences in a normed space is perpendicular and perpnormal sequences. If the norm squared of the sum of a linear combination of non-zero vectors equals the sum of the norm squared of each of the components, then the set of vectors is perpendicular. For example, $x,y$ are perpendicular vectors in a complex normed space if, for arbitrary complex numbers $a,b$, $\|ax + by\|^2 = \|ax\|^2 + \|by\|^2 = |a|^2 \|x\|^2 + |b|^2 \|y\|^2$, and perpnormal if $\|x\|^2 = \|y\|^2 = 1$, so that $\|ax + by\|^2 = |a|^2 + |b|^2$. Define the polarization product of two vectors $x,y$ in a normed space using the polarization identity from a pre-Hilbert space, $(x|y) = \frac{1}{4} \left\{ \|x + y\|^2 - \|x - y\|^2 + i \|x + i y\|^2 - i \|x - i y\|^2 \right\}$. Then, a normed space having a sequence of perpnormal vectors (vectors that are perpendicular and unit vectors) is equivalent to all pairs of vectors in the normed space satisfying the parallelogram law.
Energy norm. Why is it called that way? Let $\Omega$ be an open subset of $\mathbb{R}^n$. The following $$\lVert u \rVert_{1, 2}^2=\int_{\Omega} \lvert u(x)\rvert^2\, dx + \int_{\Omega} \lvert \nabla u(x)\rvert^2\, dx$$ defines a norm on $H^1(\Omega)$ space, that is sometimes called energy norm. I don't feel easy with the physical meaning this name suggests. In particular, I see two non-homogeneous quantities, $\lvert u(x)\rvert^2$ and $\lvert \nabla u(x)\rvert^2$, being summed together. How can this be physically consistent? Maybe some example could help me here. Thank you.
To expand on my comment (see e.g. http://online.itp.ucsb.edu/online/lnotes/balents/node10.html): The expression which you give can be interpreted as the energy of an $n$-dimensional elastic manifold being elongated in the $n+1$ dimension (e.g. for $n=2$, a membrane in three dimensions); $u$ is the displacement field. Let me put back the units $$E[u]= \frac{a}{2}\int_{\Omega} \lvert u(x)\rvert^2\, dx + \frac{b}{2} \int_{\Omega} \lvert \nabla u(x)\rvert^2\, dx.$$ The first term tries to bring the manifold back to equilibrium (with $u=0$), the second term penalizes fast changes in the displacement. The energy is not homogeneous and involves a characteristic length scale $$\ell_\text{char} = \sqrt{\frac{b}{a}}.$$ This is the scale over which the manifold returns back to equilibrium (in space) if elongated at some point. With $b=0$, the manifold would return immediately: you elongate it at some point and infinitely close by the manifold is back at $u=0$. With $a=0$ the manifold would never return to $u=0$. Only the competition between $a$ and $b$ leads to the physics which we expect for an elastic manifold. This competition is intimately related to the fact that there is a characteristic length scale appearing. It is important that physical laws are not homogeneous, in order to have characteristic length scales (like $\ell$ in your example, the Bohr radius for the hydrogen problem, $\sqrt{\hbar/m\omega}$ for the quantum harmonic oscillator, ...). The energy of systems only becomes scale invariant in the vicinity of second order phase transitions. This is a strong condition on energy functionals, to the extent that people classify all possible second order phase transitions.
Calculate Line Of Best Fit Using Exponential Weighting? I know how to calculate a line of best fit with a set of data. I want to be able to exponentially weight the data that is more recent so that the more recent data has a greater effect on the line. How can I do this?
Most linear least squares algorithms let you set the measurement error of each point. Errors in point $i$ are then weighted by $\frac{1}{\sigma_i}$. So assign a smaller measurement error to more recent points. One algorithm is available for free in the obsolete version of Numerical Recipes, chapter 15.
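Concretely, one way to implement this is to pass exponentially growing weights for more recent points; a sketch (the decay factor $0.9$ and the synthetic data are arbitrary choices; numpy's polyfit takes per-point weights that play the role of $1/\sigma_i$):

```python
import numpy as np

x = np.arange(20, dtype=float)          # e.g. time index, most recent point last
y = 2.0 * x + 1.0 + np.random.normal(scale=3.0, size=x.size)

decay = 0.9                              # 0 < decay < 1; smaller = forget faster
w = decay ** (x.max() - x)               # most recent point gets weight 1

# np.polyfit's `w` multiplies the residuals, i.e. it acts like 1/sigma_i
slope, intercept = np.polyfit(x, y, deg=1, w=w)
print(slope, intercept)
```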
Finding a vector in Euclidian space that minimizes a loss function subject to some constraints I'm trying to solve the following minimization problem, and I'm sure there must be a standard methodology that I could use, but so far I couldn't find any good references. Please let me know if you have anything in mind that could help or any references that you think would be useful for tackling this problem. Suppose you are given $K$ points, $p_i \in R^n$, for $i \in \{1,\ldots,K\}$. Assume also that we are given $K$ constants $\delta_i$, for $i \in \{1,\ldots,K\}$. We want to find the vector $x$ that minimizes: $\min_{x \in R^n} \sum_{i=1,\ldots,K} || x - p_i ||^2$ subject the following $K$ constraints: $\frac{ || x - p_i ||^2 } { \sum_{j=1,\ldots,K} ||x - p_j||^2} = \delta_i$ for all $i \in {1,\ldots,K}$. Any help is extremely welcome! Bruno edit: also, we know that $\sum_{i=1,\ldots,K} \delta_i = 1$.
You can try using the Lagrange multiplier method - see Wikipedia.
proper differentiable maps of manifolds Is the following statement true? If yes, why? Let $f: M\to N$ be a proper morphism between smooth manifolds. Let $x$ be a point of $N$, and $U$ a nbhd of $f^{-1}(x)$ in $M$. Then there exists a nbhd $V$ of $x$ in $N$ such that $f^{-1}(V)\subset U$. Here, proper means that the preimage of any compact set is compact. It seems to me this was used in an expository article that I am reading. If it is true, I expect it to be true for proper morphisms of locally compact topological spaces. But for some reason I wasn't able to find a proof. Thank you.
Suppose not. Then there is a sequence $(y_n)_{n\geq1}$ in $M\setminus U$ such that $f(y_n)\to x$. The set $S=\{f(y_n):n\geq1\}\cup\{x\}$ is compact, so its preimage $f^{-1}(S)$ is also compact. Since the sequence $(y_n)_{n\geq1}$ is contained in $f^{-1}(S)$, we can —by replacing it with one of its subsequences, if needed— assume that in fact $(y_n)_{n\geq1}$ converges to a point $y\in M$. Can you finish? (I am using that $x$ has a countable basis of neighborhoods here and that sequences in a compact subset of $M$ have convergent subsequences —to reduce to dealing with sequences— but more technology will remove that in order to generalize this to spaces other that manifolds)
Proving Stewart's theorem without trig Stewart's theorem states that in the triangle shown below, $$ b^2 m + c^2 n = a (d^2 + mn). $$ Is there any good way to prove this without using any trigonometry? Every proof I can find uses the Law of Cosines.
Geometric equivalents of the Law of Cosines are already present in Book II of Euclid, in Propositions $12$ and $13$ (the first is the obtuse angle case, the second is the acute angle case). Here are links to Proposition $12$, Book II, and to Proposition $13$. There is absolutely no trigonometry in Euclid's proofs. These geometric equivalents of the Law of Cosines can be used in a mechanical way as "drop in" replacements for the Law of Cosines in "standard" proofs of Stewart's Theorem. What in trigonometric approaches we think of as $2ab\cos\theta$ is, in Euclid, the area of a rectangle that is added to or subtracted from the combined area of two squares.
Simple (even toy) examples for uses of Ordinals? I want to describe Ordinals using as much low-level mathematics as possible, but I need examples in order to explain the general idea. I want to show how certain mathematical objects are constructed using transfinite recursion, but can't think of anything simple and yet not artificial looking. The simplest natural example I have are Borel sets, which can be defined via transfinite recursion, but I think it's already too much (another example are Conway's Surreal numbers, but that again may already be too much).
Some accessible applications of transfinite induction could be the following (depending on what the audience already knows): * *Defining the addition, multiplication (or even exponentiation) of ordinal numbers by transfinite recursion and then showing some of their basic properties. (Probably most of the claims for addition and multiplication can be proved more easily in a non-inductive way.) *$a.a=a$ holds for every cardinal $a\ge\aleph_0$. E.g. Ciesielski: Set theory for the working mathematician, Theorem 5.2.4, p.69. Using the result that any two cardinals are comparable, this implies $a.b=a+b=\max\{a,b\}$. See e.g. here *The proof that the Axiom of Choice implies Zorn's lemma. (This implication is understood as a theorem in ZF - in all other bullets we work in ZFC.) *Proof of Steinitz's theorem - every field has an algebraically closed extension. E.g. Antoine Chambert-Loir: A field guide to algebra, Theorem 2.3.3, proof is given on p.39-p.40. *Some constructions of interesting subsets of the plane are given in Ciesielski's book, e.g. Theorem 6.1.1 in which a set $A\subseteq\mathbb R\times\mathbb R$ is constructed such that $A_x=\{y\in\mathbb R; (x,y)\in A\}$ is a singleton for each $x$ and $A^y=\{x\in\mathbb R; (x,y)\in A\}$ is dense in $\mathbb R$ for every $y$.
Edge of factoring technology? Schneier in 1996's Applied Cryptography says: "Currently, a 129-decimal-digit modulus is at the edge of factoring technology" In the intervening 15 years has anything much changed?
The same algorithm is used for factoring, the Number-Field Sieve. It's probably been optimized further. But the main difference between now and then is computing power. Since the asymptotic running-time of the NFS is "known", you can even extrapolate into the future (assuming your favorite version of Moore's law).
Subset sum problem is NP-complete? If I know correctly, subset sum problem is NP-complete. Here you have an array of n integers and you are given a target sum t, you have to return the numbers from the array which can sum up to the target (if possible). But can't this problem be solved in polynomial time by dynamic programming method where we construct a table n X t and take cases like say last number is surely included in output and then the target becomes t- a[n]. In other case, the last number is not included, then the target remains same t but array becomes of size n-1. Hence this way we keep reducing size of problem. If this approach is correct, isn't the complexity of this n * t, which is polynomial? and so if this belongs to P and also NP-complete (from what I hear) then P=NP. Surely, I am missing something here. Where is the loophole in this reasoning? Thanks,
If you express the inputs in unary you get a different running time than if you express them in a higher base (binary, most commonly). So the question is, for subset sum, what base is appropriate? In computer science we normally default to the following: * *If the input is a list or collection, we express its size as the number of items *If the input is an integer, we express its size as the number of bits (binary digits) The intuition here is that we want to take the more "compact" representation. So for subset sum, we have a list of size $n$ and a target integer of value $t$. Therefore it's common to express the input size as $n$ and $t=2^k$ where $k$ is the number of bits needed to express $t$. So the running time is $O(n 2^k)$ which is exponential in $k$. But one could also say that $t$ is given in unary. Now the size of $t$ is $t$, and the running time is $O(n t)$, which is polynomial in $n$ and $t$. In reductions involving subset sum (and other related problems like partition, 3-partition, etc) we must use a non-unary representation if we want to use it as an NP-Hard problem to reduce from.
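For reference, here is the pseudo-polynomial dynamic program the question describes, as a short sketch; its $O(n\cdot t)$ running time is polynomial in the value $t$ but exponential in the number of bits used to write $t$ down:

```python
def subset_sum(nums, t):
    """Return a subset of nums summing to t, or None. O(len(nums) * t) time/space."""
    # reach[s] = index of the number used when sum s was first reached (-1 for s == 0)
    reach = {0: -1}
    for i, a in enumerate(nums):
        # iterate over a snapshot so each number is used at most once
        for s in list(reach):
            if s + a <= t and s + a not in reach:
                reach[s + a] = i
    if t not in reach:
        return None
    subset, s = [], t
    while s != 0:
        i = reach[s]
        subset.append(nums[i])
        s -= nums[i]
    return subset

print(subset_sum([3, 34, 4, 12, 5, 2], 9))   # e.g. [5, 4], a subset summing to 9
```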
Easy Proof Adjoint(Compact)=Compact I am looking for an easy proof that the adjoint of a compact operator on a Hilbert space is again compact. This makes the big characterization theorem for compact operators (i.e. compact iff image of unit ball is relatively compact iff image of unit ball is compact iff norm limit of finite rank operators) much easier to prove, provided that you have already developed spectral theory for C*-algebras. By the way, I'm using the definition that an operator $T\colon H \to H$ is compact if and only if given any [bounded] sequence of vectors $(x_n)$, the image sequence $(Tx_n)$ has a convergent subsequence. edited for bounded
Here is an alternative proof, provided that you know that an operator is compact iff it is the operator-limit of a sequence of finite-rank operators. Let $T: H \to H$ be a compact operator. Then $T= \lim_n T_n$ where the limit is w.r.t. the operatornorm and $T_n$ is a finite rank operator. Using that the $*$-involution is continuous, we get $$T^*= \lim_n T_n^*$$ where $T_n^*$ is also a finite rank operator for all $n$. Thus $T^*$ is the limit of finite-rank operators and it follows that $T^*$ is compact as well.
Weierstrass Equation and K3 Surfaces Let $a_{i}(t) \in \mathbb{Z}[t]$. We shall denote these by $a_{i}$. The equation $y^{2} + a_{1}xy + a_{3}y = x^{3} + a_{2}x^{2} + a_{4}x + a_{6}$ is the affine equation for the Weierstrass form of a family of elliptic curves. Under what conditions does this represent a K3 surface?
A good reference for this would be Abhinav Kumar's PhD thesis, which you can find here. In particular, look at Chapter 5, and Section 5.1. If an elliptic surface $y^2+a_1(t)xy+a_3(t)y = x^3+a_2(t)x^2+a_4(t)x+a_6(t)$ is K3, then the degree of $a_i(t)$ must be $\leq 2i$. I hope this helps.
$n$ lines cannot divide a plane region into $x$ regions, finding $x$ for $n$ I noticed that $3$ lines cannot divide the plane into $5$ regions (they can divide it into $2,3,4,6$ and $7$ regions). I have a line of reasoning for it, but it seems rather ad-hoc. I also noticed that there are no "gaps" in the number of divisions that can be made with $n=4$, i.e we can divide a plane into $\{2,3,\cdots,11\}$ using $4$ lines. Is there anyway to know that if there are $n$ lines with $x$ being the possible number of divisions $\{ 2,\ldots, x,\ldots x_{max} \}$ where $x_{max}$ is the maximum number of divisions of the plane with $n$ lines, then what are the $x \lt x_{max}$ which are not possible? For e.g, $n=3$, $x_{max}=7$, $x=5$ is not possible.
I don't have access to any papers, unfortunately, but I think I've found a handwavy proof sketch that shows there are no gaps other than $n = 3, x = 5$. Criticism is welcomed; I'm not sure how to make this argument rigorous, and I'm also curious if there's an article that already uses these constructions. Suppose $n - 1$ lines can divide the plane into 2 through $\frac{(n-1)n}{2} + 1$ regions. For sufficiently large $n$, we will show that $n$ lines can divide the plane into $\frac{n(n-1)}{2} + 2$ through $\frac{n(n-1)}{2} + n + 1 = \frac{n(n+1)}{2} + 1$ regions. Consider an arrangement of $n$ lines that splits the plane into $\frac{n(n+1)}{2} + 1$ regions, such that, for simplicity, lines are paired into groups of two, where each line in the $k$th pair has a root at $k$ and the negative slope of its partner. If $n$ is odd, there will be one line left over which can't be paired; put this line horizontally underneath the roots of the pairs (e.g. $y = -1$). If $n$ is even, take the last pair and put one line horizontally as described, and the other vertically at $x = 0$. We can hand-wave to "pull down" pairs one-by-one so their intersection rests on the horizontal line, subtracting one region for each pair "pulled down." This ends up removing $\frac{n-1}{2}$ regions for odd $n$, and $\frac{n}{2} - 1$ regions for even $n$. Then we can go through each pair of lines and adjust the line with negative slope to have the same slope as the next pair's positively sloped line, shaving one region off each time (and removing the same number of regions as the previous operation). So these operations will get us to $\frac{(n-1)n}{2} + 2$ for odd $n$, and $\frac{(n-1)n}{2} + 3$ for even $n$. To get to $\frac{(n-1)n}{2} + 2$ for even $n$, we take the last pair's positive line and put it parallel to the first two vertical lines (subtracting two regions), then nudge the first pair slightly above the horizontal line (adding back one). Now we have to consider when such operations fail, for both odd and even cases. We certainly can't "pull down" when $n \le 2$. For $n = 3$, we have just one pair above the horizontal line, so we can't adjust slopes as suggested, giving us a gap at $x = 5$. For $n = 4$, we have only one pair, and we can't make up the gap at $\frac{(n-1)n}{2} + 2$ — but luckily, not only can we cover up the 8-region gap using 3 parallel lines and one non-parallel one, but 4 parallel lines cover the 5-region gap introduced when $n = 3$. So we can use these techniques to complete the induction process for $n \ge 5$.
Why can any affine transformaton be constructed from a sequence of rotations, translations, and scalings? A book on CG says: ... we can construct any affine transformation from a sequence of rotations, translations, and scalings. But I don't know how to prove it. Even in a particular case, I found it still hard. For example, how to construct a shear transformation from a sequence of rotations, translations, and scalings? Can you please help? Thank you. EDIT: Axis scalings may use different scaling factors for the axes. Is there a matrix representation or proof for this? For example, to show that a two-dimensional rotation can be decomposed into three shear transformation, we can write $$ \begin{pmatrix} \cos\alpha & \sin\alpha\\ -\sin\alpha & \cos\alpha \end{pmatrix} = \begin{pmatrix} 1 & \tan\frac{\alpha}{2}\\ 0 & 1 \end{pmatrix} \begin{pmatrix} 1 & 0\\ -\sin\alpha & 1 \end{pmatrix} \begin{pmatrix} 1 & \tan\frac{\alpha}{2}\\ 0 & 1 \end{pmatrix} $$
Perhaps using the singular value decomposition? For the homogeneous case (linear transformation), we can always write $y = A x = U D V^t x$ for any square matrix $A$ with positive determinant, where $U$ and $V$ are orthogonal and $D$ is diagonal with positive real entries. $U$ and $V$ would then be the rotations and $D$ the scaling. Some (trivial?) details to polish: what if $A$ has negative determinant, what if $U$ and $V$ are not pure rotations but also involve axis reflections. It only remains to add the independent term to get the affine transformation ($y = Ax +b$) and that would be the translation.
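As a concrete illustration (a sketch; the shear matrix is an arbitrary example), numpy's SVD exhibits exactly this orthogonal-diagonal-orthogonal structure, with the caveat that the orthogonal factors may be reflections rather than pure rotations:

```python
import numpy as np

shear = np.array([[1.0, 1.0],
                  [0.0, 1.0]])          # unit shear along x

U, s, Vt = np.linalg.svd(shear)

print(np.allclose(U @ np.diag(s) @ Vt, shear))   # True: shear = U * D * V^T
print(np.allclose(U @ U.T, np.eye(2)), np.allclose(Vt @ Vt.T, np.eye(2)))  # orthogonal
print(np.linalg.det(U), np.linalg.det(Vt))       # +1 means rotation, -1 a reflection
print(s)                                         # the two scale factors
```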
$\lim (a + b)\;$ when $\;\lim(b)\;$ does not exist? Suppose $a$ and $b$ are functions of $x$. Is it guaranteed that $$ \lim_{x \to +\infty} a + b\text{ does not exist} $$ when $$ \lim_{x \to +\infty} a = c\quad\text{and}\quad \lim_{x \to +\infty} b\text{ does not exist ?} $$
Suppose, to get a contradiction, that our limit exists. That is, suppose $$\lim_{x\rightarrow \infty} a(x)+b(x)=d$$ exists. Then since $$\lim_{x\rightarrow \infty} -a(x)=-c,$$ and as limits are additive, we conclude that $$\lim_{x\rightarrow \infty} a(x)+b(x)-a(x)=d-c$$ which means $$\lim_{x\rightarrow \infty} b(x)=d-c.$$ But this is impossible since we had that $b(x)$ did not tend to a limit. Hope that helps,
how to determine if two graphs are not isomorphic What are some good ways of determining if two reasonably simple looking graphs are not isomorphic? I know that you can check their cycle or some weird property (for certain graphs), but are there some other tricks to do this?
With practice, one can often quickly tell that graphs are not isomorphic. When graphs G and H are isomorphic they have the same chromatic number, if one has an Eulerian or Hamiltonian circuit so does the other, if G is planar so is H, if one is connected so is the other. If one has drawings of the two graphs, our visual systems are so attuned to finding patterns that seeing that the two graphs have some property they don't share often makes it easy to show the graphs are not isomorphic.
Weak limit of an $L^1$ sequence We have functions $f_n\in L^1$ such that $\int f_ng$ has a limit for every $g\in L^\infty$. Does there exist a function $f\in L^1$ such that the limit equals $\int fg$? I think this is not true in general (really? - why?), then can this be true if we also know that $f_n$ belong to a certain subspace of $L^1$?
Perhaps surprisingly, the answer is yes. More generally, given any Banach space $X$, a sequence $\{x_n\} \subset X$ is said to be weakly Cauchy if, for every $\ell \in X^*$, the sequence $\{\ell(f_n)\} \subset \mathbb{R}$ (or $\mathbb{C}$) is Cauchy. If every weakly Cauchy sequence is weakly convergent, $X$ is said to be weakly sequentially complete. Every reflexive Banach space is weakly sequentially complete (a nice exercise with the uniform boundedness principle). $L^1$ is not reflexive, but it turns out to be weakly sequentially complete anyway. This theorem can be found in P. Wojtaszczyk, Banach spaces for analysts, as Corollary 14 on page 140. It works for $L^1$ over an arbitrary measure space.
How many points in the xy-plane do the graphs of $y=x^{12}$ and $y=2^x$ intersect? The question in the title is equivalent to finding the number of zeros of the function $$f(x)=x^{12}-2^x$$ Geometrically, it is not hard to determine that there is one intersection point in the second quadrant. And when $x>0$, $x^{12}=2^x$ is equivalent to $\log x=\frac{\log 2}{12}x$. There are two intersection points since $\frac{\log 2}{12}<\frac{1}{e}$. Is there another, quicker way to show this? Edit: This question is motivated from a GRE math subject test problem which is a multiple-choice one (A. None B. One C. Two D. Three E. Four). Usually, the ability for a student to solve such a problem as quickly as possible may be valuable, at least for this kind of test. In this particular case, geometric intuition may be misleading if one simply sketches the curves of the two functions to find the possible intersections.
If you are solving a multiple choice test like GRE you really need fast intuitive, but certain, thinking. I tried to put myself in this rushed set of mind when I read your question and thought this way: think of $x^{12}$ as something like $x^2$ but growing faster, think of $2^x$ as $e^x$ similarly, sketch both functions. It is immediate to see an intersection point for $x<0$ and another for $0<x<b$, for some positive $b$ since the exponential grows slower for small $x$ for a while, as the sketched graph suggests. So the answer is at least $2$. In fact it is $3$ because after the second intersection point you clearly see the graph of $x^{12}$ over $2^x$ but you should notice that $a^x\gg x^n$ at $+\infty$, and therefore the exponential must take over and have a third intersection point at a really big value of positive $x$. Once this happens the exponential function is growing so fast that a potential function cannot catch up so there are no further intersections. (To quickly see that $a^x\gg x^n$ at $+\infty$ just calculate $\lim_{x\to\infty}\frac{a^x}{x^n}=+\infty$ using L'Hopital's rule or Taylor expanding the numerator whose terms are of the form $\log^m(a)a^m/m!$). More rigorously, maybe you can find a way to study the signs of $g(x)=x^{12}-2^x$ using derivatives and monotony. There are 4 intervals giving signs + - + - resulting in 3 points of intersection by the intermediate value theorem. These intervals are straightforwardly seen as reasoned above just sketching the function and taking into account the behavior for big values of $x$. To be sure that there is no other change of sign you must prove that $g'$ is monotone after the third point of intersection. Just after this last point, the graph of $2^x$ can easily be seen over $x^{12}$ and both subfunctions are monotone along with their derivatives: since $2^x>x^{12}\Rightarrow \log(2)2^x>12x^{11}$ which means $g'(x)=12x^{11}-\log(2)2^x$ is indeed monotone afterwards and therefore there is no fourth intersection.
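If you want to back up the count numerically (certainly not something to do during the GRE), a crude sign-change scan works; this is just a sketch with an ad hoc range and step size:

```python
import numpy as np

g = lambda x: x**12 - 2.0**x
xs = np.linspace(-3, 120, 2_000_000)
vals = np.sign(g(xs))
crossings = xs[:-1][vals[:-1] * vals[1:] < 0]
print(len(crossings), crossings)   # three sign changes: roughly -0.94, 1.07, and 74.7
```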
Game theory textbooks/lectures/etc I looking for good books/lecture notes/etc to learn game theory. I do not fear the math, so I'm not looking for a "non-mathematical intro" or something like that. Any suggestions are welcome. Just put here any references you've seen and some brief description and/or review. Thanks. Edit: I'm not constrained to any particular subject. Just want to get a feeling of the type of books out there. Then I can decide what to read. I would like to see here a long list of books on the subject and its applications, together with reviews or opinions of those books.
Coursera.org offers an excellent game theory course by Dr.s Shoham, Leyton-Brown, and Jackson (https://www.coursera.org/course/gametheory).
Book/tutorial recommendations: acquiring math-oriented reading proficiency in German I'm interested in others' suggestions/recommendations for resources to help me acquire reading proficiency (of current math literature, as well as classic math texts) in German. I realize that German has evolved as a language, so ideally, the resource(s) I'm looking for take that into account, or else perhaps I'll need a number of resources to accomplish such proficiency. I suspect I'll need to include multiple resources (in multiple forms) in my efforts to acquire the level of reading proficiency I'd like to have. I do like "hard copy" material, at least in part, from which to study. But I'm also very open to suggested websites, multimedia packages, etc. In part, I'd like to acquire reading proficiency in German to meet a degree requirement, but as a native English speaker, I would also like to be able to study directly from significant original German sources. Finally, there's no doubt that a sound/solid reference/translation dictionary (or two or three!) will be indispensable, as well. Any recommendations for such will be greatly appreciated, keeping in mind that my aim is to be proficient in reading mathematically-oriented German literature (though I've no objections to expanding from this base!).
I realize this is a bit late, but I just saw by chance that the math department of Princeton has a list of German words online, seemingly for people who want to read German math papers.
Can a circle truly exist? Is a circle more impossible than any other geometrical shape? Is a circle is just an infinitely-sided equilateral parallelogram? Wikipedia says... A circle is a simple shape of Euclidean geometry consisting of the set of points in a plane that are a given distance from a given point, the centre. The distance between any of the points and the centre is called the radius. A geometric plane would need to have an infinite number of points in order to represent a circle, whereas, say, a square could actually be represented with a finite number of points, in which case any geometric calculations involving circles would involve similarly infinitely precise numbers(pi, for example). So when someone speaks of a circle as something other than a theory, are they really talking about a [ really big number ]-sided equilateral parallelogram? Or is there some way that they fit an infinite number of points on their geometric plane?
In the same sense as you think a circle is impossible, a square with truly perfect sides can never exist because the lines would have to have infinitesimal width, and we can never measure a perfect right angle, etc. You say that you think a square is physically possible to represent with 4 points, though. In this case, a circle is possible - you only need one point and a defined length. Then all the points of that length from the initial point define the circle, whether we can accurately delineate them or not. In fact, in this sense, I think a circle is more naturally and precisely defined than a given polygon.
What is the smallest number of $45^\circ-60^\circ-75^\circ$ triangles that a square can be divided into? What is the smallest number of $45^\circ-60^\circ-75^\circ$ triangles that a square can be divided into? The image below is a flawed example, from http://www.mathpuzzle.com/flawed456075.gif Laczkovich gave a solution with many hundreds of triangles, but this was just an demonstration of existence, and not a minimal solution. ( Laczkovich, M. "Tilings of Polygons with Similar Triangles." Combinatorica 10, 281-306, 1990. ) I've offered a prize for this problem: In US dollars, (\$200-number of triangles). NEW: The prize is won, with a 50 triangle solution by Lew Baxter.
I have no answer to the question, but here's a picture resulting from some initial attempts to understand the constraints that exist on any solution. $\qquad$ This image was generated by considering what seemed to be the simplest possible configuration that might produce a tiling of a rectangle. Starting with the two “split pentagons” in the centre, the rest of the configuration is produced by triangulation. In this image, all the additional triangles are “forced”, and the configuration can be extended no further without violating the constraints of triangulation. If I had time, I'd move on to investigating the use of “split hexagons”. The forcing criterion is that triangulation requires every vertex to be surrounded either (a) by six $60^\circ$ angles, three triangles being oriented one way and three the other, or else (b) by two $45^\circ$ angles, two $60^\circ$ angles and two $75^\circ$ angles, the triangles in each pair being of opposite orientations.
Are the inverses of these matrices always tridiagonal? While putzing around with the linear algebra capabilities of my computing environment, I noticed that inverses of $n\times n$ matrices $\mathbf M$ associated with a sequence $a_i$, $i=1\dots n$ with $m_{ij}=a_{\max(i,j)}$, which take the form $$\mathbf M=\begin{pmatrix}a_1&a_2&\cdots&a_n\\a_2&a_2&\cdots&a_n\\\vdots&\vdots&\ddots&a_n\\a_n&a_n&a_n&a_n\end{pmatrix}$$ (i.e., constant along "backwards L" sections of the matrix) are tridiagonal. (I have no idea if there's a special name for these matrices, so if they've already been studied in the literature, I'd love to hear about references.) How can I prove that the inverses of these special matrices are indeed tridiagonal?
Let $B_j$ be the $n\times n$ matrix with $1$s in the upper-left hand $j\times j$ block and zeros elsewhere. The space of $L$-shaped matrices you're interested in is spanned by $B_1,B_2,\dots,B_n$. I claim that if $b_1,\dots,b_n$ are non-zero scalars, then the inverse of $$ M=b_1B_1+b_2B_2+\dots + b_nB_n$$ is then the symmetric tridiagonal matrix $$N=c_1C_1+c_2C_2+\dots+c_nC_n$$ where $c_j=b_j^{-1}$ and $C_j$ is the matrix with zero entries except for a block matrix $\begin{pmatrix}1&-1\\-1&1\end{pmatrix}$ placed along the diagonal of $C_j$ in the $j$th and $j+1$th rows and columns, if $j<n$, and $C_n$ is the matrix with a single non-zero entry, $1$ in the $(n,n)$ position. The point is that $C_jB_k=0$ if $j\ne k$, and $C_jB_j$ is a matrix with at most two non-zero rows: the $j$th row is $(1,1,\dots,1,0,0,\dots)$, with $j$ ones, and if $j<n$ then the $j+1$th row is the negation of the $j$th row. So $NM=C_1B_1+\dots+C_nB_n=I$, so $N=M^{-1}$. If one of the $b_j$'s is zero, then $M$ not invertible since it's arbitrarily close to matrices whose inverses have arbitrarily large entries. Addendum: they're called type D matrices, and in fact the inverse of any irreducible nonsingular symmetric tridiagonal matrix is the entrywise product of a type D matrix and a "flipped" type D matrix (start the pattern in the lower right corner rather than the upper left corner). There's also a variant of this result characterising the inverse of arbitrary tridiagonal matrices. This stuff is mentioned in the introduction of this paper by Reinhard Nabben.
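A quick numerical check of the claim (a sketch; the particular sequence $a_i$ is arbitrary, chosen so that consecutive entries differ and $a_n\neq0$):

```python
import numpy as np

a = np.array([5.0, 3.0, 2.0, 7.0, 1.0])   # a_1, ..., a_n
n = len(a)
M = np.array([[a[max(i, j)] for j in range(n)] for i in range(n)])

Minv = np.linalg.inv(M)
# entries strictly outside the tridiagonal band
mask = np.abs(np.subtract.outer(np.arange(n), np.arange(n))) > 1
print(np.allclose(Minv[mask], 0.0))        # True: the inverse is tridiagonal
```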
(Organic) Chemistry for Mathematicians Recently I've been reading "The Wild Book" which applies semigroup theory to, among other things, chemical reactions. If I google for mathematics and chemistry together, most of the results are to do with physical chemistry: cond-mat, fluids, QM of molecules, and analysis of spectra. I'm more interested in learning about biochemistry, molecular biology, and organic chemistry — and would prefer to learn from a mathematical perspective. What other books aim to teach (bio- || organic) chemistry specifically to those with a mathematical background?
Organic chemistry S. Fujita's "Symmetry and combinatorial enumeration in chemistry" (Springer-Verlag, 1991) is one such endeavor. It mainly focuses on stereochemistry. Molecular biology and biochemistry A. Carbone and M. Gromov's "Mathematical slices of molecular biology" is recommended, although it is not strictly a book. R. Phillips, J. Kondev and J. Theriot have published "Physical biology of the cell", which contains biochemical topics (such as structures of hemoglobin) and is fairly accessible to mathematicians in my opinion.
Is there any mathematical operation on Integers that yields the same result as doing bitwise "AND"? I'll provide a little bit of background so you can better understand my question: let's say I have two positive, non-zero binary numbers (which can, obviously, be mapped to integers). I will then perform an "AND" operation on each pair of corresponding bits (I think that's called a bitwise operation), which yields yet another binary number. Now this new binary number can, in turn, also be mapped to an integer. My question is: is there any integer operation I can do on the mapped integer values of the two original binary numbers that would yield the same result? Thanks in advance. EDIT: I forgot to mention that what I'm looking for is a mathematical expression using things like +, -, /, pow(base, exp) and the like. I'm not 100% sure (I'm a computer scientist) but I think what I'm looking for is an isomorphism. LAST EDIT: I think this will clear any doubts as to what sort of mathematical expression I'm looking for. I wanted something like: "The bitwise AND of two Integers A and B is always equal to (AB)X(B)X(3)." The general feeling I got is that it's not possible, or extremely difficult to prove (either its validity or non-validity).
One way to do a bitwise AND would be to decompose each integer into a sequence of values in {0,1}, perform a Boolean AND on each pair of corresponding bits, and then recompose the result into an integer. A function for getting the $i$-th bit (zero-indexed, starting at the least significant bit) of an integer $n$ could be defined as $f(n, i) = \lfloor n/2^i\rfloor \bmod 2$; the bitwise AND of two integers $m$ and $n$ would then be $$\sum_{i=0}^\infty (f(n,i) \mbox{ AND } f(m,i)) 2^i$$ Expressing the simpler Boolean AND in terms of common mathematical functions is left as an exercise to the reader.
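A minimal sketch of that recipe in Python, assuming we take the Boolean AND of two bits $a,b\in\{0,1\}$ to be their product $ab$ (one possible way to do the "exercise"); only integer division, remainder, multiplication, powers, and addition are used:

```python
def get_bit(n, i):
    # f(n, i) = floor(n / 2^i) mod 2 : the i-th bit of n, least significant first.
    return (n // 2**i) % 2

def arithmetic_and(m, n):
    # Bitwise AND assembled from arithmetic only; AND of bits a, b is a*b.
    result = 0
    i = 0
    while m >= 2**i or n >= 2**i:          # only finitely many nonzero bits
        result += get_bit(m, i) * get_bit(n, i) * 2**i
        i += 1
    return result

# Quick check against the built-in operator on a few pairs.
for m in range(64):
    for n in range(64):
        assert arithmetic_and(m, n) == (m & n)
print("matches m & n for all m, n < 64")
```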
Why can ALL quadratic equations be solved by the quadratic formula? In algebra, all quadratic problems can be solved by using the quadratic formula. I read a couple of books, and they told me only HOW and WHEN to use this formula, but they don't tell me WHY I can use it. I have tried to figure it out by proving these two equations are equal, but I can't. Why can I use $x = \dfrac{-b\pm \sqrt{b^{2} - 4 ac}}{2a}$ to solve all quadratic equations?
Most answers are explaining the method of completing the square. Although it's the preferred method, I'll take another approach. Consider an equation $$ax^{2}+bx+c=0 \qquad (1)$$ and let its roots be $\alpha$ and $\beta$. A quadratic with these roots can be written, for some non-zero constant $k$, as $$k(x-\alpha)(x-\beta)=0 \qquad (2)$$ Equating the left-hand sides of equations (1) and (2), $$ax^{2}+bx+c=k(x-\alpha)(x-\beta)=k(x^{2}-\alpha x-\beta x+\alpha\beta)=kx^{2}-k(\alpha+\beta)x+k\alpha\beta$$ Comparing coefficients on both sides, we get $$a=k~;~~b=-k(\alpha+\beta)~;~~c=k\alpha\beta$$ From this, $$\alpha+\beta=\frac{-b}{a}~~;~~~\alpha\beta=\frac{c}{a} \qquad (3)$$ Now, to get the value of $\alpha$, we first find $\alpha-\beta$, so that we can eliminate $\beta$. Starting from $$(\alpha-\beta)^{2}=\alpha^{2}+\beta^{2}-2\alpha\beta$$ we add $4\alpha\beta$ to both sides: $$(\alpha-\beta)^{2}+4\alpha\beta=\alpha^{2}+\beta^{2}+2\alpha\beta=(\alpha+\beta)^{2}$$ so $$(\alpha-\beta)^{2}=(\alpha+\beta)^{2}-4\alpha\beta$$ $$\alpha-\beta=\pm\sqrt{(\alpha+\beta)^{2}-4\alpha\beta}$$ Substituting the values of $\alpha+\beta$ and $\alpha\beta$ from (3), we get $$\alpha-\beta=\pm\sqrt{\left(\frac{-b}{a}\right)^{2}-\frac{4c}{a}}=\pm\sqrt{\frac{b^{2}-4ac}{a^{2}}}$$ or $$\alpha-\beta=\frac{\pm\sqrt{b^{2}-4ac}}{a} \qquad (4)$$ Adding the first relation in (3) to equation (4), we get $$2\alpha=\frac{-b\pm\sqrt{b^{2}-4ac}}{a}$$ $$\alpha=\frac{-b\pm\sqrt{b^{2}-4ac}}{2a}$$ Taking the other sign of the square root gives $\beta$, so the single formula with $\pm$ covers both roots.
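As a sanity check (not part of the derivation), one can verify symbolically that both values produced by the formula are roots of $ax^{2}+bx+c$. A short SymPy sketch:

```python
import sympy as sp

a, b, c = sp.symbols('a b c')
disc = sp.sqrt(b**2 - 4*a*c)

for root in ((-b + disc) / (2*a), (-b - disc) / (2*a)):
    # Substitute the candidate root into a*x^2 + b*x + c and simplify;
    # both substitutions should reduce to 0.
    print(sp.simplify(a*root**2 + b*root + c))
```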
Symmetric matrix and diagonalization (Jacobi method): finding $p$ and $q$ Given this symmetric matrix $A$: $\begin{pmatrix} 14 &14 & 8 &12 \\ 14 &17 &11 &14 \\ 8& 11 &11 &10 \\ 12 & 14 &10 & 12 \end{pmatrix}$ I need to find $p,q$ such that $p$ is the number of $1$'s and $q$ is the number of $-1$'s in the diagonal matrix $D_{p,q}$, where $D_{p,q}=$ Diag $\{1,1,\ldots,1,-1,-1,\ldots,-1,0,0,\ldots,0\}$ and $D=P^{t}AP$, with $P$ the matrix that contains the eigenvectors of $A$ as columns. I tried to use the Jacobi method but found that $|A|=0$, so I can't use it; I do know now that $0$ is an eigenvalue of $A$. So do I really need to compute $P$ in order to find $p$ and $q$? It's a very long and messy process. Thank you
The characteristic polynomial of $A$ is $P(x)= x^4 - 54x^3 + 262x^2 - 192x $. It has $0$ as a simple root, and the other three are positive. Therefore $A$ has three positive eigenvalues and one equal to zero. Since the signature can be obtained from the signs of the eigenvalues, we are done. Therefore $p=3,q=0$.
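A quick numerical confirmation of the characteristic polynomial and of $p=3$, $q=0$ (a NumPy sketch; the tolerance is an arbitrary choice, not part of the answer):

```python
import numpy as np

A = np.array([[14, 14,  8, 12],
              [14, 17, 11, 14],
              [ 8, 11, 11, 10],
              [12, 14, 10, 12]], dtype=float)

# Characteristic polynomial coefficients, leading coefficient first;
# expect approximately [1, -54, 262, -192, 0].
print(np.poly(A))

eigvals = np.linalg.eigvalsh(A)      # A is symmetric, so eigvalsh applies
print("eigenvalues:", eigvals)

tol = 1e-9
p = int(np.sum(eigvals >  tol))
q = int(np.sum(eigvals < -tol))
print("p =", p, " q =", q)           # expect p = 3, q = 0
```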
Doubt in Discrete-Event System Simulation by Jerry Banks,4th Edition I'm new to the Math forum here, so pardon my question if it seems juvenile to some. I've googled intensively,gone through wikipedia,wolfram and after hitting dead ends everywhere have resorted to this site. My query is this- In chapter#8, "Random-Variate Generation", the problems are required to use a sequence of random numbers obtained from a table A.1 . But I find no correlation between the random numbers used and the numbers in the table. So how are the numbers generated exactly? Are they assumed?? Table A.1 is on page 501 in this link http://books.google.com/books?id=b0lgHnfe3K0C&pg=PA501&lpg=PA501&dq=78166+82521&source=bl&ots=nR33GcAzGF&sig=9LQjAPyGxDDxz1QLsEeMwN_UytA&hl=en&ei=3TTeTbPyNoqJrAe6zPGOCg&sa=X&oi=book_result&ct=result&resnum=6&ved=0CDUQ6AEwBTgo#v=onepage&q&f=false And the random numbers used in my problem are : R1=0.8353 R2=0.9952 R3=0.8004 How do you get these values of R1,R2,R3 from the table in the link??? If you cant view the table from the link up there, the table is as in the image shown below-
Here is an hypothesis. Since three coefficients only are obtained from a whole bunch of data, these could summarize some properties of the sample considered. Statisticians often use the symbol R2 for a coefficient of determination, which, roughly speaking, measures the proportion of variability in a data set. On the positive side, these are by definition between 0 and 1, like yours. On the negative side, one would still have to understand how one sample gave rise to three coefficients, perhaps the whole sample was split into three. (I was not able to check the pages around Table A.1 because I have access to no preview on googlebooks for this book.)
Characterization of linear independence by wedge product Let $V$ be a vector space of finite dimension. Show that $x_1,...,x_k$ is linearly independent iff $x_1\wedge ... \wedge x_k \neq 0$.
Hint for one direction: if there is a linear dependence, one of the $x_i$ is a linear combination of the others. Then substitute into $x_1\wedge\cdots \wedge x_k$. Hint for the other direction: You can do row operations $x_i\mapsto x_i+rx_j$ for $i\neq j$ without affecting the wedge $x_1\wedge\cdots\wedge x_k$. Similarly you can divide any $x_j$ by a scalar without affecting whether $x_1\wedge\cdots\wedge x_k$ is nonzero. I'm not sure what properties you already know about the wedge. If you know that wedges $e_{i_1}\wedge\cdots \wedge e_{i_k}$, $i_1<i_2<\cdots<i_k$ form a basis for $\wedge^kV$, when $e_i$ is a basis for $V$, then you're home free.
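If a computational illustration helps: the coordinates of $x_1\wedge\cdots\wedge x_k$ in the basis $e_{i_1}\wedge\cdots\wedge e_{i_k}$ are the $k\times k$ minors of the matrix whose rows are the $x_i$, so "the wedge is nonzero" can be checked against "the rank equals $k$". A rough NumPy sketch (my own illustration, not part of the hints; the example vectors are arbitrary):

```python
import numpy as np
from itertools import combinations

def wedge_coordinates(vectors):
    # Coordinates of x_1 ^ ... ^ x_k in the standard basis of the k-th
    # exterior power: the k x k minors of the matrix with rows x_i.
    X = np.array(vectors, dtype=float)
    k, n = X.shape
    return np.array([np.linalg.det(X[:, list(cols)])
                     for cols in combinations(range(n), k)])

independent = [[1, 0, 2, 0], [0, 1, 1, 0], [3, 0, 0, 1]]
dependent   = [[1, 0, 2, 0], [0, 1, 1, 0], [1, 1, 3, 0]]   # row 3 = row 1 + row 2

for vecs in (independent, dependent):
    coords = wedge_coordinates(vecs)
    print("rank =", np.linalg.matrix_rank(np.array(vecs)),
          " wedge nonzero?", bool(np.any(np.abs(coords) > 1e-12)))
```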
Intuitive explanation of the tower property of conditional expectation I understand how to define conditional expectation and how to prove that it exists. Further, I think I understand what conditional expectation means intuitively. I can also prove the tower property, that is if $X$ and $Y$ are random variables (or $Y$ a $\sigma$-field) then we have that $$\mathbb E[X] = \mathbb{E}[\mathbb E [X | Y]].$$ My question is: What is the intuitive meaning of this? It seems quite puzzling to me. (I could find similar questions but not this one.)
For simple discrete situations from which one obtains most basic intuitions, the meaning is clear. I have a large bag of biased coins. Suppose that half of them favour heads, probability of head $0.7$. Two-fifths of them favour heads, probability of head $0.8$. And the rest favour heads, probability of head $0.9$. Pick a coin at random, toss it, say once. To find the expected number of heads, calculate the expectations, given the various biasing possibilities. Then average the answers, taking into consideration the proportions of the various types of coin. It is intuitively clear that this formal procedure "should" give about the same answer as the highly informal process of say repeating the experiment $1000$ times, and dividing by $1000$. For if we do that, in about $500$ cases we will get the first type of coin, and out of these $500$ we will get about $350$ heads, and so on. The informal arithmetic mirrors exactly the more formal process described in the preceding paragraph. If it is more persuasive, we can imagine tossing the chosen coin $12$ times.
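For what it's worth, the coin example is easy to simulate. The following sketch (Python/NumPy, with the proportions and biases from the answer) compares $\mathbb E[\mathbb E[X\mid Y]]$, computed by averaging the conditional expectations over the coin mix, with a direct Monte Carlo estimate of $\mathbb E[X]$ for a single toss:

```python
import numpy as np

rng = np.random.default_rng(1)
trials = 1_000_000

# The bag of coins: proportions of each type and their head-probabilities.
proportions = np.array([0.5, 0.4, 0.1])
head_prob   = np.array([0.7, 0.8, 0.9])

# E[E[X | coin type]]: average the conditional expectations over the mix.
tower = float(np.dot(proportions, head_prob))

# Direct simulation of E[X]: draw a coin, toss it once, record heads (0/1).
coin  = rng.choice(3, size=trials, p=proportions)
heads = rng.random(trials) < head_prob[coin]

print("E[E[X|Y]]       =", tower)
print("simulated E[X] ~=", heads.mean())
```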
Group theory intricate problem This is Miklos Schweitzer 2009 Problem 6. It's a group theory problem hidden in a complicated language. A set system $(S,L)$ is called a Steiner triple system if $L \neq \emptyset$, any pair $x,y \in S, x \neq y$ of points lie on a unique line $\ell \in L$, and every line $\ell \in L$ contains exactly three points. Let $(S,L)$ be a Steiner triple system, and let us denote by $xy$ the third point on a line determined by the points $x \neq y$. Let $A$ be a group whose factor by its center $C(A)$ is of prime power order. Let $f,h:S \to A$ be maps, such that $C(A)$ contains the range of $f$, and the range of $h$ generates $A$. Show that if $$ f(x)=h(x)h(y)h(x)h(xy)$$ holds for all pairs of points $x \neq y$, then $A$ is commutative and there exists an element $k \in A$ such that $$ f(x)= k h(x),\ \forall x \in S $$ Here is what I've got: * *Because the image of $h$ generates $A$, for $A$ to be commutative is enough to prove that $h(x)h(y)=h(y)h(x)$ for every $x,y \in S$. *For the last identity to be true (if we have proved the commutativity) it is enough to have that the product $h(x)h(y)h(xy)=k$ for every $x \neq y$. *$h(y)h(x)h(xy)=h(xy)h(x)h(y)$ *I should use somewhere the fact that the factor $A /C(A)$ has prime power order.
Let $g:S\rightarrow A$ be defined as $g(x) = h(x)^{-1} f(x)$. Now, if $\{x,y,z\}\in L$, then $g(y) = h(z)g(x)h(z)^{-1}$. This means that the image of $g$ is closed under conjugation by elements of $A$ since $A$ is generated by the image of $h.$ Also, since this formula does not depend on the order of $x,y,z$, it means that $g(x)=h(z)g(y)h(z)^{-1}$. In particular, then $h(z)^2$ commutes with $g(x)$ for all $x$. But since $f(x)$ is in the center of $A$, that means that $h(z)^2$ commutes with $h(x)$ for all $x, z\in S$. Hence $h(z)^2$ commutes with all of $A$ - that is $h(z)^2\in C(A)$, so $A/C(A)$ is generated by elements of order $2$, so by the condition of the problem, $A/C(A)$ must be of order $2^n$ for some $n$. Now, since $g(x)=h(y)h(x)h(z) = h(z)h(x)h(y)$, we can see that: $$g(x)^2 = h(y)h(x)h(z)h(z)h(x)h(y) = h(x)^2 h(y)^2 h(z)^2$$ Therefore, $g(x)^2 = g(y)^2 = g(z)^2$, and in particular, for all $x,y \in S$, $g(x)^2 = g(y)^2$. So there is some $K\in C(A)$ such that $\forall x\in S, g(x)^2=K$. There are lots of things that can be concluded from knowing that $h(x)^2\in C(A)$. For example, that $f(x)f(y) {f(z)}^{-1}= h(x)^2 h(y)^2$. That can be used to show that $f(x)f(y)f(z) = h(x)^4h(y)^4h(z)^4 = K^2$. Not sure where to go from here.
Interesting integral related to the Omega Constant/Lambert W Function I ran across an interesting integral and I am wondering if anyone knows where I may find its derivation or proof. I looked through the site; if it is here and I overlooked it, I am sorry. $$\displaystyle\frac{1}{\int_{-\infty}^{\infty}\frac{1}{(e^{x}-x)^{2}+{\pi}^{2}}dx}-1=W(1)=\Omega$$ $W(1)=\Omega$ is often referred to as the Omega Constant; it is the solution to $xe^{x}=1$, namely $x\approx 0.567$. Thanks much. EDIT: Sorry, I had the integral written incorrectly. Thanks for the catch. I had also seen this: $\displaystyle\int_{-\infty}^{\infty}\frac{dx}{(e^{x}-x)^{2}+{\pi}^{2}}=\frac{1}{1+W(1)}=\frac{1}{1+\Omega}\approx 0.638$ EDIT: I do not know what is wrong, but I am trying to respond and cannot. All the buttons are unresponsive but this one. I have been trying to leave a greenie and add a comment, but neither will respond. I just wanted you to know this before you thought I was an ingrate. Thank you. That is an interesting site.
While this is by no means rigorous, it gives the correct solution. Any corrections to this are welcome! Let $$f(z) := \frac{1}{(e^z-z)^2+\pi^2}$$ Let $C$ be the canonical positively-oriented semicircular contour that traverses the real line from $-R$ to $R$ and then the arc $Re^{i \theta}$ for $0 \le \theta \le \pi$ (let this semicircular arc be called $C_R$), so $$\oint_C f(z)\, dz = \int_{-R}^R f(z)\,dz + \int_{C_R}f(z)\, dz$$ To estimate the latter integral, we see $$ \left| \int_{C_R} \frac{1}{(e^z-z)^2+\pi^2}\, dz \right| \le \int_{C_R} \left| \frac{1}{(e^z-z)^2+\pi^2}\right| \, |dz| \le \int_{C_R} \frac{1}{(|e^z-z|)^2-\pi^2} \, |dz| \le \int_{C_R} \frac{1}{(e^R-R)^2-\pi^2} \, |dz| = \frac{\pi R}{(e^R-R)^2-\pi^2} $$ and letting $R \to \infty$, the integral over $C_R$ vanishes. Looking at the denominator of $f$ for singularities: $$(e^z-z)^2 + \pi^2 = 0 \implies e^z-z = \pm i \pi \implies z = -W (1)\pm i\pi$$ using the Lambert W function (recall that $e^{-W(1)}=W(1)$). We now use the root with the positive $i\pi$, because when the sign is negative the pole does not fall within the contour, since $\Im (-W (1)- i\pi)<0$. $$z_0 := -W (1)+i\pi$$ We calculate the residue $b_0$ of $f$ at $z=z_0$: $$ b_0= \operatorname*{Res}_{z \to z_0} f(z) = \lim_{z\to z_0} \frac{(z-z_0)}{(e^z-z)^2+\pi^2} = \lim_{z\to z_0} \frac{1}{2(e^z-1)(e^z-z)} = \frac{1}{2(-W(1) -1)(-W(1)+W(1)-i\pi)} = \frac{1}{-2\pi i(-W(1) -1)} = \frac{1}{2\pi i(W(1)+1)} $$ using L'Hopital's rule to compute the limit, together with $e^{z_0}=-W(1)$. And finally, with the residue theorem, $$ \oint_C f(z)\, dz = \int_{-\infty}^\infty f(z)\,dz = 2 \pi i b_0 = \frac{2 \pi i}{2\pi i(W(1)+1)} = \frac{1}{W(1)+1} $$ An evaluation of this integral with real methods would also be intriguing.
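Independently of the contour argument, the identity is easy to check numerically. Here is a small SciPy sketch (my own check, using `scipy.integrate.quad` and `scipy.special.lambertw`):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import lambertw

def f(x):
    # Integrand 1 / ((e^x - x)^2 + pi^2); decays like 1/x^2 as x -> -inf.
    return 1.0 / ((np.exp(x) - x)**2 + np.pi**2)

with np.errstate(over='ignore'):        # exp(x) overflows harmlessly for large x
    value, err = quad(f, -np.inf, np.inf)

omega = lambertw(1).real                # W(1), the omega constant, about 0.5671
print("numerical integral :", value)
print("1 / (1 + W(1))     :", 1.0 / (1.0 + omega))
```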
Find a first order sentence in $\mathcal{L}=\{0,+\}$ which is satisfied by exactly one of $\mathbb{Z}\oplus \mathbb{Z}$ and $\mathbb{Z}$ I'm re-reading some material and came to a question, paraphrased below: Find a first order sentence in $\mathcal{L}=\{0,+\}$ which is satisfied by exactly one of the structures $(\mathbb{Z}\oplus \mathbb{Z}, (0,0), +)$ and $(\mathbb{Z}, 0, +)$. At first I was thinking about why they're not isomorphic as groups, but the reasons I can think of mostly come down to $\mathbb{Z}$ being generated by one element while $\mathbb{Z}\oplus \mathbb{Z}$ is generated by two, but I can't capture this with such a sentence. I'm growing pessimistic about finding a sentence satisfied in $\mathbb{Z}\oplus \mathbb{Z}$ but not in the other, since every relation I've thought of between some vectors in the plane seems to just be satisfied by integers, seen by projecting down on an axis. In any case, this is getting kind of frustrating because my guess is there should be some simple statement like "there exists three nonzero vectors that add to 0 in the plane, but there doesn't exist three nonzero numbers that add to 0 in the integers" (note: this isn't true).
Here's one: $$ (\forall x)(\forall y)\Bigl[(\exists z)(x=z+z) \lor (\exists z)(y=z+z) \lor (\exists z)(x+y=z+z)\Bigr] $$ This sentence is satisfied in $\mathbb{Z}$, since one of the numbers $x$, $y$, and $x+y$ must be even. It isn't satisfied in $\mathbb{Z}\oplus\mathbb{Z}$, e.g. if $x=(1,0)$, $y=(0,1)$, and $x+y=(1,1)$.
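A tiny brute-force illustration (Python; the finite window is only illustrative, not a proof): in $\mathbb{Z}$ the parity argument makes the sentence hold, while in $\mathbb{Z}\oplus\mathbb{Z}$ the witness $x=(1,0)$, $y=(0,1)$ defeats it.

```python
# In Z: among x, y, x+y at least one is even (checked on a finite window only).
ok = all(x % 2 == 0 or y % 2 == 0 or (x + y) % 2 == 0
         for x in range(-20, 21) for y in range(-20, 21))
print("sentence holds on the window in Z:", ok)

# In Z x Z: an element is of the form z + z exactly when both coordinates are even.
def is_double(v):
    return v[0] % 2 == 0 and v[1] % 2 == 0

x, y = (1, 0), (0, 1)
s = (x[0] + y[0], x[1] + y[1])
print("x, y, x+y of the form z+z?", is_double(x), is_double(y), is_double(s))
```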