http://mathhelpforum.com/algebra/28064-answer-checks.html
# Thread: Answer Checks!!

1. ## Answer Checks!!

OK, so I did these problems and I would like someone to check my answers and tell me how I did... thanks a lot.

Find an exact solution for each problem.

4x + 19 = e^(2ln(x))

I rearranged this equation to x^2 - 4x - 19 = 0, then used the quadratic equation and came up with x = 2 + sqrt(23) and x = 2 - sqrt(23).

Next one: (1 - 2sin(t))/ln(t) = 0. I rearranged this to get 1 - 2sin(t) = 0, then sin(t) = 1/2. I took the inverse sine of 1/2 and came up with pi/6. The problem wanted the exact solution for 2pi < t < 5pi/2, and with that I came up with 13pi/6.

And for the last one: f(x) = 4^x, and the slope of the tangent line of f(x) at x = 0 is f'(0) = ln 4. Use the tangent line to estimate f(0.11) and f(0.2). I used ln 4 for m in the equation y = mx + b, then plugged the point of tangency into 4^x to get a y value, then used that x and y to find b in the tangent line equation. Once I found b I plugged in the given values to find y, and came up with f(0.11) = 1.14 and f(0.2) = 1.28.

If someone could check my answers and let me know how I did, that would be great. Thanks a lot, mathlete.

2. mathlete, mathlete, mathlete... I expect more from you.

Rule #1 -- It's not an equation if you don't see one of these: "=".

Rule #2 -- NEVER mess with an equation without thinking about Domain Issues.

The very first thing I thought, when reading your first problem, was x > 0. After that, I wouldn't DARE suggest an answer that was zero or negative. My second thought was that 4x + 19 > 0. Why? This turns out to be less restrictive than x > 0, so it is of no consequence. After requiring x > 0, how do you feel about your answers?

3. Yes, I know it's not an equation without an = sign... I meant to put x^2 - 4x - 19 = 0, sorry. [Quoting TKHunny's post 2 above.]

4. You didn't address the Domain Issues.

5. OK, well, tell me what was wrong with the equation that I rearranged. [Quoting TKHunny's post 4 above.]

6. Already did. See Rule #2, above. $e^{2\ln(x)}$ is not quite $x^{2}$. What's the difference? Note: If we allow x to be complex, and the logarithm to provide complex values, then maybe they are EXACTLY the same. I still don't like it, but I guess I can deal with my emotional issues. :-)
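Not part of the thread: a quick numerical check of the three answers (reading the second equation as (1 - 2 sin t)/ln t = 0). It confirms the domain point raised in the replies, and shows that the tangent line through (0, 1) actually gives f(0.11) ≈ 1.15 rather than 1.14:

````
import math

# Problem 1: for x > 0, e^(2 ln x) = x^2, so 4x + 19 = x^2 becomes
# x^2 - 4x - 19 = 0. The domain x > 0 eliminates the negative root.
for x in (2 + math.sqrt(23), 2 - math.sqrt(23)):
    if x > 0:
        print(x, math.isclose(4 * x + 19, math.exp(2 * math.log(x))))
    else:
        print(x, "rejected: ln(x) requires x > 0")

# Problem 2: sin(t) = 1/2 with 2*pi < t < 5*pi/2.
t = 13 * math.pi / 6
print(math.isclose(1 - 2 * math.sin(t), 0, abs_tol=1e-12),
      2 * math.pi < t < 5 * math.pi / 2)

# Problem 3: the tangent line to 4^x at x = 0 is y = 1 + ln(4) * x.
for x in (0.11, 0.2):
    print(x, round(1 + math.log(4) * x, 2))   # prints 1.15 and 1.28
````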
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9539802074432373, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/301913/a-property-of-incidence-matrix-of-a-graph
# A property of the incidence matrix of a graph

Let $G$ be an oriented graph with incidence matrix $Q$, and let $B:=[b_{ij}]$ be a $k\times k$ sub-matrix of $Q$ which is non-singular. Can there exist two distinct permutations $\sigma$ and $\sigma^\prime$ of $1,\ldots ,k$ for which both the products $b_{1\sigma (1)}\cdots b_{k\sigma (k)}$ and $b_{1\sigma^\prime (1)}\cdots b_{k\sigma^\prime (k)}$ are non-zero?

- 2 The answer seems to be almost trivially yes -- clearly $B$ may contain a zero, and if it does, then the two permutations just have to map one index such that this zero is included in the products, and can differ arbitrarily on all other indices. Am I missing something? – joriki Feb 13 at 7:10

@joriki: Extremely sorry, I made a typo; I wanted to ask whether the products are non-zero; fixed. – pritam Feb 14 at 11:35

## 2 Answers

If a set of columns of the incidence matrix of an oriented graph is linearly independent, then the corresponding edges form a forest. Suppose we choose $k$ columns, and then choose $k$ rows from these to form a non-singular matrix $M$.

Claim: there is a column of $M$ with exactly one non-zero entry in it. For otherwise each column contains a $1$ and a $-1$, and so the sum of the rows of $M$ is zero. Since $M$ is invertible, this is impossible.

Note that any two permutations with nonzero products must both use this entry of $M$. Note also that if we delete from $M$ a column with exactly one nonzero entry, and also delete the row that contained it, the resulting $(k-1)\times(k-1)$ matrix is still non-singular. The result follows by induction on $k$.

-

MORE EDIT: The edit was wrong, the matrix I wrote down is singular. Nothing worth looking at here.

EDIT: let $Q$ have a submatrix $$\pmatrix{1&0&0&-1\cr-1&1&0&0\cr0&-1&1&0\cr0&0&-1&1\cr}$$ The submatrix is nonsingular, and you can find a permutation that gets all the $1$s, and a different permutation that gets all the $-1$s, and the products will be equal and nonzero.

Maybe I don't understand the problem, but if $$Q=\pmatrix{0&1&1&0&1\cr0&0&1&1&0\cr0&0&0&1&1\cr0&0&0&0&1\cr0&0&0&0&0\cr}$$ then $$\pmatrix{1&0&1\cr1&1&0\cr0&1&1\cr}$$ is a non-singular $3\times3$ submatrix, and $b_{13}b_{24}b_{35}=b_{15}b_{23}b_{34}=1\ne0$

I think the $b$s should be $q$s, or the indices should go from $1$ to $3$. – joriki Feb 14 at 11:45

I think you are assuming $G$ is undirected, but I mentioned $G$ is a directed graph; then each column (which corresponds to an edge) must have exactly one $1$, one $-1$, and the rest zeros. – pritam Feb 14 at 11:58
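Not from the thread: a brute-force sketch of the first answer's argument. It builds the incidence matrix of a small oriented graph (sign convention assumed: $-1$ at the tail of an edge, $+1$ at the head), picks a non-singular $k\times k$ submatrix, and confirms that exactly one permutation yields a nonzero product:

````
import itertools
import numpy as np

# Oriented graph on 4 vertices; edges given as (tail, head) pairs.
edges = [(0, 1), (1, 2), (2, 3), (0, 2), (1, 3)]
Q = np.zeros((4, len(edges)))
for j, (u, v) in enumerate(edges):
    Q[u, j] = -1.0   # -1 at the tail of edge j ...
    Q[v, j] = 1.0    # ... and +1 at its head

# A 3x3 submatrix (rows = vertices 0,1,2; columns = edges 0,1,4).
B = Q[np.ix_([0, 1, 2], [0, 1, 4])]
assert abs(np.linalg.det(B)) > 1e-9, "pick a non-singular submatrix"

# Count permutations sigma with b_{1,sigma(1)} ... b_{k,sigma(k)} != 0.
k = 3
nonzero = [s for s in itertools.permutations(range(k))
           if all(B[i, s[i]] != 0 for i in range(k))]
print(len(nonzero))  # the induction in the answer predicts exactly 1
````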
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 35, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9577510952949524, "perplexity_flag": "head"}
http://quant.stackexchange.com/questions/2906/what-causes-the-call-and-put-volatility-surface-to-differ?answertab=votes
# What causes the call and put volatility surface to differ?

I currently have a local volatility model that uses the standard Black-Scholes assumptions. When calculating the volatility surface, what causes the difference between the call volatility surface and the put surface?

- Recall that options on shares of stocks are usually American style, where put-call parity does not hold, so neither does the equality of implied volatilities. – FKaria Feb 5 '12 at 2:11

## 2 Answers

The reason for put and call volatilities to appear different is that the implied vol has been calculated using different drift parameters than those implied by the market. Let's take everything in the model as given except the interest rate $r$ and the volatility $\sigma$. For European options we have the Black-Scholes formula for put and call values $V_{P,C}$ $$V_{P,C}=BS_{P,C}(r,\sigma)$$ Now, although it is common practice to run this equation backwards to "imply" the volatility $\sigma$ $$\sigma_{\text{Imp}} = BS^{-1}_{\sigma}(r,V)$$ we can see that from a mathematical point of view we could imply $r$ instead: $$r_{\text{Imp}} = BS^{-1}_{r}(\sigma,V).$$ Obviously, using a different $r$ affects option prices and therefore implied volatilities.

Consider now the consequences of receiving prices from someone using the Black-Scholes model. For concreteness I will take $T=1, K=S=100$ and no carry cost. Let's say you think $r=1\%$ and I give you put and call prices of $7.95$ and $11.80$. You will get a put vol of $21.3\%$ and a call vol of $28.6\%$. Seem familiar? That's because I actually generated those prices using $r=4\%$. If you had used the same drift parameter $r$ as I had, you would have computed both volatilities to be $25\%$.

Generally, risk-free interest rates are not too hard to pin down, but we have other effects on drift where the parameters are not so obvious. This includes dividends, borrow costs and funding costs. Each of these terms is typically treated as a deterministic "carry cost", but even in the simple case of European options it is not necessarily clear what values should be used for them.

So to answer your question, the difference between put and call volatility surfaces is a symptom of your drift parameters failing to match those of the market.

-

Implied volatility is the same for European call and European put options (as can be seen from put-call parity). If you use a non-parametric local volatility model and fit it to the implied volatility surface, then you should get an exact fit. Therefore, the local volatility surface should be the same for call and put options.

- Put-call parity says that the implied volatility of a put and a call at the same strike and time is the same, but in the market that's not true. And I was wondering why, and what causes the put surface to be so different from the call surface. – Jeffrey Feb 4 '12 at 23:00

@Jeffrey Yes, it holds only for a put and call with the same strike and time to maturity. What do you mean by "but in the market that's not true"? Do you mean that you observe different implied volatility for a put and call with the same strike and time to maturity? – Alexey Kalmykov Feb 4 '12 at 23:10

Yes, that's exactly what I mean. For example, deep in-the-money puts have a super high implied volatility. Is this because there is no volume on them, so the price of the option is manipulated? – Jeffrey Feb 4 '12 at 23:54

1 @Jeffrey Yes, insofar as there is no volume, the price is not representative. You shouldn't use ITM option prices for calibration of your local volatility model. – Alexey Kalmykov Feb 5 '12 at 10:33
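Not part of the thread: a sketch reproducing the numbers in the first answer ($T=1$, $K=S=100$, no carry). Prices are generated at $r=4\%$ and $\sigma=25\%$, then implied vols are backed out at the "wrong" drift $r=1\%$. The Black-Scholes formula and root-finder are standard; the helper names are mine:

````
from math import exp, log, sqrt
from scipy.optimize import brentq
from scipy.stats import norm

def bs_price(S, K, T, r, sigma, kind="call"):
    """Black-Scholes price of a European option (no dividends or carry)."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    if kind == "call":
        return S * norm.cdf(d1) - K * exp(-r * T) * norm.cdf(d2)
    return K * exp(-r * T) * norm.cdf(-d2) - S * norm.cdf(-d1)

def implied_vol(price, S, K, T, r, kind):
    """Invert bs_price in sigma by a bracketing root search."""
    return brentq(lambda s: bs_price(S, K, T, r, s, kind) - price, 1e-6, 5.0)

S = K = 100.0
T = 1.0
call = bs_price(S, K, T, 0.04, 0.25, "call")     # ~11.8, as quoted
put = bs_price(S, K, T, 0.04, 0.25, "put")       # ~7.9, as quoted
print(implied_vol(call, S, K, T, 0.01, "call"))  # ~0.286: call vol inflated
print(implied_vol(put, S, K, T, 0.01, "put"))    # ~0.213: put vol deflated
````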
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 15, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9317549467086792, "perplexity_flag": "head"}
http://mathoverflow.net/questions/80039/tensor-product-of-structure-sheaves
Tensor product of structure sheaves

Let $\iota_A:A\hookrightarrow X$ and $\iota_B:B\hookrightarrow X$ be subschemes of a smooth ambient variety $X$. Then the derived tensor product $$\mathcal O_A\stackrel{L}{\otimes}\mathcal O_B\in D^b(X)$$ is a pushforward from $A$ (of $L\iota_A^*\mathcal O_B$), and a pushforward from $B$ (of $L\iota_B^*\mathcal O_A$). Is it also a pushforward from $A\cap B$?

I assume not, which is presumably what creates the need for all the homotopy complications of derived algebraic geometry? (By this I mean that if it is not such a pushforward, then one has to carry around some of the information of the embedding into $X$, which in derived algebraic geometry is a non-canonical local choice; finding a category in which these choices "glue" is then the nasty bit, I presume, though I know $\le0$ about it.)

- @Moosbrugger: If $A = B$ then the tensor product, being a pushforward from $A$, is a pushforward from $A \cap B = A$ as well. – Sasha Nov 5 2011 at 19:49

As Sasha points out, that's not a counterexample. Does someone have a simple one? – Richard Thomas Nov 7 2011 at 20:44
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 13, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9242134094238281, "perplexity_flag": "head"}
http://simple.wikipedia.org/wiki/Exponentiation_by_squaring
Exponentiation by squaring

Exponentiating by squaring is an algorithm. It is used for quickly working out large integer powers of a number. It is also known as the square-and-multiply algorithm or binary exponentiation. It implicitly uses the binary expansion of the exponent. It is of quite general use, for example in modular arithmetic. The algorithm has been known for a long time. It was already written down in a book called Chandah-sûtra. That book was published in India, around 200 BC.

Squaring algorithm

The following recursive algorithm computes x^n for a positive integer n:

$\mbox{Power}(x,\,n)= \begin{cases} x, & \mbox{if }n = 1 \\ \mbox{Power}(x^2,\,n/2), & \mbox{if }n\mbox{ is even} \\ x\times\mbox{Power}(x^2,\,(n-1)/2), & \mbox{if }n\mbox{ is odd} \end{cases}$

This algorithm is much faster than the ordinary method of computing such a value. Multiplying x by itself, about n operations are needed to calculate x^n. With the method shown above, only about log2(n) squarings and multiplications are needed.
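A direct transcription of the recursion into Python (not part of the article). For modular arithmetic, the same idea is used with a reduction after every multiplication, which is what Python's built-in pow(x, n, m) does:

````
def power(x, n):
    """Compute x**n by repeated squaring; n must be a positive integer."""
    if n == 1:
        return x
    if n % 2 == 0:
        return power(x * x, n // 2)         # even: square the base, halve n
    return x * power(x * x, (n - 1) // 2)   # odd: peel off one factor of x

assert power(3, 13) == 3 ** 13  # ~log2(13) recursive steps instead of 12 multiplications
````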
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 1, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9542452096939087, "perplexity_flag": "head"}
http://mathoverflow.net/questions/35430/correlation-in-graph-coloring/35433
## Correlation in graph coloring

Let $G$ be a (simple) graph. Given $k \ge \chi(G)$, define $Cor(G,k,u,v)$ to be the proportion of all $k$-colorings of $G$ for which the vertices $u$ and $v$ have the same color.

Questions:

Question 1. Given a graph $G$ and a positive integer $k \ge \chi(G)$, is there a better-than-greedy way to calculate $Cor(G,k,u,v)$? I suspect the answer to this question is "Yes, but not really"; for if there were an efficient way to calculate $Cor$, we would probably get $P=NP$.

Question 2. If not, is there a "good" way to estimate it?

Question 3. Is there any other information (e.g., the chromatic polynomial, etc.) that would yield an efficient way to calculate $Cor$?

-

## 2 Answers

As regards question 3: Chromatic polynomials provide the answer quite directly--but calculating them is anything but efficient. Naturally, if $u$ and $v$ are joined by an edge, the proportion you are asking about is 0. If they are not adjacent, then let $q$ be the chromatic polynomial of $G$ and $p$ be the chromatic polynomial of $G/\{u,v\}$, i.e. the result of identifying $u$ and $v$. The proportion you seek is then the rational function $p/q$ evaluated at $k$, which as you point out is only defined for $k$ at least the chromatic number.

-

Question 1. Yes, an efficient algorithm for $Cor$ exists for graphs of low tree-width.

Cast this in the framework of probabilistic graphical models and then use the junction tree algorithm, which scales exponentially in the tree-width of the graph. In particular, let $G$ be a graph over $n$ vertices with edges $E$. Let $x \in \{1,2,\ldots, k\}^n$. Define the probability distribution over $x$ to be uniform over proper colorings, $$p(x)=\frac{1}{Z}\prod_{(ij)\in E} \mathbf{I}(x_i\ne x_j)$$ where $Z$ is a constant chosen to make this a valid probability distribution. Then $$\text{Cor}(G,k,u,v)=\sum_{c=1}^k p(x_u=c,x_v=c)$$ Computing $p$ in the formula above in graphs that are not trees is not trivial, but can be done efficiently if the tree-width is low; look at page 370 of Koller's "Probabilistic Graphical Models" for details on that particular form of query.

Question 2. Yes, for graphs with low degree or a large number of colors. For instance, here the authors conjecture that coloring graphs of degree at most $k$ with at least $k+1$ colors exhibits "strong spatial mixing", which would imply that the algorithm they give to approximate the marginals could also give a guaranteed approximation to your problem in polynomial time.
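Not part of the thread: a brute-force check of the first answer's identity $Cor(G,k,u,v) = p(k)/q(k)$ on a small example, where $q$ counts proper $k$-colorings of $G$ and $p$ counts those of $G/\{u,v\}$. Enumeration stands in for the chromatic polynomials, so this is only feasible for tiny graphs:

````
from itertools import product

def num_colorings(n, edges, k):
    """Count proper k-colorings of a graph on vertices 0..n-1 by enumeration."""
    return sum(all(c[u] != c[v] for u, v in edges)
               for c in product(range(k), repeat=n))

def cor(n, edges, k, u, v):
    """Proportion of proper k-colorings in which u and v share a color."""
    same = sum(all(c[a] != c[b] for a, b in edges) and c[u] == c[v]
               for c in product(range(k), repeat=n))
    return same / num_colorings(n, edges, k)

# 4-cycle 0-1-2-3-0 with k = 3 colors; u = 0 and v = 2 are non-adjacent.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
k = 3

# Contracting u and v turns the 4-cycle into the path 1 - 0 - 2
# (vertex 0 is the merged vertex; old vertex 3 is relabeled 2).
p = num_colorings(3, [(0, 1), (0, 2)], k)  # chromatic polynomial of G/{u,v} at k
q = num_colorings(4, edges, k)             # chromatic polynomial of G at k
print(cor(4, edges, k, 0, 2), p / q)       # both print 2/3
````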
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 30, "mathjax_display_tex": 2, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9313977360725403, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/10225/how-to-calculate-the-projected-area-at-different-angles-vectors
# How to calculate the projected area at different angles/vectors?

Please help me with the following. I want to know if there is an equation/set of equations to find the projected area of a (3-D) cube when it is oriented at different angles of attack to the fluid flow, so that the velocity vector of the fluid impinging on the cube surface changes with each angle of attack. In particular, what is the projected area when the cube is oriented at such an angle that the flow velocity vector passes through one of the vertices and the geometric centre of the cube? Thanks.

- Which hydrodynamic problem are you going to calculate? There are few that can be solved with such precision as to need the exact projected area. In practice one regularly just takes the diameter of an "equivalent sphere". – Georg Jun 6 '11 at 13:32

## 1 Answer

The projected area is $\sqrt{3} \ l^2$.

Imagine the long diagonal of a cube. This diagonal is the same direction as the velocity vector you're interested in. The length of the diagonal is $\sqrt{3}l$, with $l$ the side length of the cube. This comes from the Pythagorean Theorem. If we project this diagonal onto a side, we get simply $l$. Therefore, the projection operation at this angle divides by $\sqrt 3$.

The cube has three faces exposed to a fluid coming in from the direction of the long diagonal, each with area $l^2$. Each of these is projected onto the plane perpendicular to the fluid flow, so the projected area is $(3/\sqrt{3})\ l^2 = \sqrt{3} \ l^2$.

- 3 General case: $$l^2 (|\hat{n_1} \cdot \hat{u}|+|\hat{n_2} \cdot \hat{u}|+|\hat{n_3} \cdot \hat{u}|)$$ where $\hat{u}$ is the unit vector of the fluid flow and $\hat{n_i}$ is the normal vector to one of the cube's three orthogonal sides. – David May 22 '11 at 1:32

@David: you might want to make that an answer. (If you also explain where it comes from, it'll be an even better answer.) – David Zaslavsky♦ May 22 '11 at 2:41
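Not part of the thread: a small sketch of the general formula from David's comment, checked against the $\sqrt{3}\,l^2$ special case. For an axis-aligned cube the face normals are the coordinate axes, so only three dot products are needed:

````
import numpy as np

def cube_projected_area(l, u):
    """Projected area of an axis-aligned cube of side l onto the plane
    perpendicular to the flow direction u (formula from David's comment)."""
    u = np.asarray(u, dtype=float)
    u /= np.linalg.norm(u)          # normalize: the formula needs a unit vector
    normals = np.eye(3)             # face normals: the x, y, z axes
    return l**2 * np.abs(normals @ u).sum()

l = 2.0
# Flow along the long diagonal: expect sqrt(3) * l^2.
print(cube_projected_area(l, [1, 1, 1]), np.sqrt(3) * l**2)
# Flow along a coordinate axis: the shadow is a single face, area l^2.
print(cube_projected_area(l, [0, 0, 1]), l**2)
````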
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 9, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.944476842880249, "perplexity_flag": "head"}
http://mathoverflow.net/questions/68788/completeness-vs-compactness-in-logic
## Completeness vs Compactness in logic

One standard approach to showing compactness of first-order logic is to show completeness, of which compactness is an easy corollary. I am told this approach is deprecated nowadays, as Compactness is seen as the "deeper" result. Could someone explain this intuition? In particular, are there logical systems which are compact but not complete?

EDIT: Andreas brings up an excellent point, which is that the Completeness theorem is really a combination of two very important but almost unrelated results. First, the Compactness theorem. Second, the recursive enumerability of logical validities. Note that neither of these results depends on the details of the syntactic or axiomatic system. What is the connection between these two aspects of completeness? Are there logical systems that have one of these properties but not the other?

- 4 See also Joel's excellent answer at mathoverflow.net/questions/9309/… . – Qiaochu Yuan Jun 25 2011 at 14:55

Thanks, Qiaochu... – Joel David Hamkins Jun 25 2011 at 15:34

As to the last question - I think so. If we reason ABOUT intuitionistic logic USING classical logic, then completeness is false: because consistent constructive theories can contradict classical tautologies, and therefore cannot be satisfiable from the classical point of view. But it looks like compactness is still true, based on what I'm reading below: because it is semantic, and therefore dependent only on the "meta" system, which is still classical, and not the "object" system (though I'm not sure here, is that right?) – Daniel Mehkeri Jun 25 2011 at 23:25

1 For an example of a system that has compactness but not completeness, take usual first-order logic, keep the semantics the same, and remove all the inference rules. Now nothing at all is provable, but compactness is unchanged. The point of completeness is to express a harmony between a semantics and a deductive system; you can break completeness by changing either. – Carl Mummert Jun 26 2011 at 3:01

2 There are a lot of answers here from model theorists talking about why compactness is important in model theory, but it's worth noting that the question isn't restricted to model theory. As Joel pointed out in the answer Qiaochu linked to, proof-theorists might focus on the completeness theorem rather than the compactness. – Mike Shulman Jun 26 2011 at 5:10

## 10 Answers

The point is that we care about the models, rather than about the proofs. The compactness theorem---the claim that a theory is satisfiable iff every finite subset of it is satisfiable---is fundamentally connected to the models, and the possibility of truth in these models. To use it, you need to understand your theory, the models of your theory and the models of finite pieces of your theory. And this is what we want to be thinking about and what we know about. In particular, when working with models, we can use all the mathematical tools and constructions at our disposal, with no need to remain inside any first-order language or formal system (well, perhaps we have set theory as our background system). We are free to reason about the models via reducts and ultrapowers and limits of systems of morphisms and so on, using any mathematical method at all.

The completeness theorem, in contrast, is fundamentally connected with the details of a formal deduction system.
And so when using it, one is thinking about whether certain tautologies might be provable or not, or whether a certain formal consequence is allowed in the system or not. But when we are studying a certain first-order class of groups or rings or whatever, such details about the proof system might seem to be an irrelevant distraction.

- Of course, as has been brought up elsewhere (and as you mentioned in your answer to the other question), the assertion that "we" care about the models, rather than the proofs, depends upon who "we" refers to. One might argue that at a foundational level, what we really care about is always the proofs. For instance, even when studying model theory, I would venture to assert that model theorists reach their conclusions by proving them. (-: – Mike Shulman Jun 26 2011 at 19:19

1 @Joel: I agree completely with your answer (for what that's worth -- you are 1000 times the expert I am on this subject). When I taught introductory model theory last summer and spoke to some colleagues who had had encounters with it, I came to the idea that perhaps model theory could be recast so as not to be part of mathematical logic at all. I found some sympathy for this in Poizat's introductory text, which phrases things in terms of "local isomorphisms" and "back and forth", but I didn't have the time (and perhaps not the expertise) to really follow up on this. What about you? – Pete L. Clark Jun 27 2011 at 4:33

Mike, yes indeed, and perhaps I stated the view more carefully in my answer to the other question. Of course the proof theorists are undertaking a fascinating and important foundational study. And although mathematicians generally strive to prove their results, wouldn't you say that this arises mostly from a concern with the mathematical objects themselves rather than with a concern specifically with the proof objects? (And one might view proof theory as a case where the proof objects become the mathematical object of study.) – Joel David Hamkins Jun 27 2011 at 21:14

Pete, you are very kind, but I think you are being modest. Like many maturing subjects, model theory increasingly touches other mathematical areas, and while the majority of model theorists I know continue to self-identify as logicians, I also know a number of model theorists who don't quite know what label to apply to their work. Of course, it is often work that crosses established boundaries that is the most valuable in mathematics, and perhaps the most difficult. Perhaps a similar situation has arisen in set theory, which has become vast, now touching many areas of mathematics. – Joel David Hamkins Jun 27 2011 at 21:25

Pete, I might also add that the methods of local isomorphism and back-and-forth were invented by Cantor in his proof on the uniqueness of the countable dense endless linear order, and logicians perhaps like to regard them as among the fundamental contributions of logic. – Joel David Hamkins Jun 27 2011 at 21:41

I have two, somewhat conflicting, views on this. From the point of view of modern model theory, the Compactness Theorem is central to almost everything in the model theory of first order logic, while the Completeness Theorem is almost irrelevant.
Completeness comes into play when proving the decidability of complete recursively axiomatized theories, or in dealing with recursively saturated models, for example, but these are not very central. Even when we want to apply Completeness, what's important is that the set of logical consequences of $T$ is recursively enumerable in $T$--the details of the proof system are of no importance whatsoever. This explains why in my model theory book, while I do explain that Compactness is an almost trivial consequence of Completeness, I give a direct Henkin-style proof.

Nevertheless, I view the Completeness Theorem as one of the great intellectual achievements of our subject. The fact that the semantic notion of "logical consequence" can be captured by the syntactic notion of "proof" is really surprising. The first is, a priori, $\Pi_1$ over the universe of sets, while the second is recursively enumerable. I find this amazing.

- You write: "Completeness comes into play when proving the decidability of complete recursively axiomatized theories ...". Don't you use here the notion of "completeness" in another sense (namely as a characteristic of a theory, as opposed to the one that appears in the Completeness Theorem, which characterizes a logic)? – Gyorgy Sereny Jun 26 2011 at 17:45

@Gyorgy--I am using "completeness" in two senses. We want to look at a "complete theory" $T$, i.e., one where $T\models \phi$ or $T\models\neg\phi$ for all sentences $\phi$. But we are also using the Completeness Theorem to argue that for any $T$ the set of logical consequences of $T$ is recursively enumerable in $T$. – Dave Marker Jun 26 2011 at 22:28

Thank you for your answer. As a matter of fact, I take it for granted that the completeness of a theory is a syntactic property, that is, one formulated in terms of provability rather than in terms of semantic consequence: $T$ is complete just in case $T\vdash\sigma$ or $T\vdash\lnot\sigma$ for all sentences $\sigma$. In this case, the decidability of complete recursively axiomatized theories can be shown without the Completeness Theorem. – Gyorgy Sereny Jun 30 2011 at 15:40

Really, compactness is seen as the deeper result? I have to say that I am also more interested in the models than in formulas and deduction systems, and hence like to teach students proofs of the compactness theorem that do not use the completeness theorem. Typically I prove compactness using ultraproducts. On the other hand, I still believe that the completeness theorem for first order logic is the most important theorem of mathematical logic. The theorem tells you that in principle there are computer-checkable proofs for all "true" theorems.

I know that many mathematicians are not really concerned with having a solid foundation for the concept of a "proof" (you know it when you see it). But having this formal concept of proof in the background helps tremendously when you want to fight off people who present their $n$-th proof of the inconsistency of PA or ZFC, or that there are no infinite sets, or that CH holds. Also, the completeness theorem does explain why we can do mathematics the way we do. Even though nobody really writes formal proofs of anything, ever, unless the correctness of the proofs is seriously challenged (see the FLYSPECK project at http://code.google.com/p/flyspeck/).

Also, I think that the proof of the completeness theorem is deeper than that of the compactness theorem.
In some sense the proof of the completeness theorem (I am thinking of the proof where one builds a canonical model of a maximally consistent Henkin theory) is more straightforward than, for example, the ultraproduct proof of the compactness theorem, but it is more complicated in the details and certainly less accessible to mainstream mathematics.

- 2 Stefan, the Henkin idea can also be used directly to prove the compactness theorem (see mathoverflow.net/questions/45487/…), and I think actually that this use of the Henkin method to prove compactness may even be easier than when it is used for completeness. But of course I agree with your main point that completeness is a fundamentally important theorem. – Joel David Hamkins Jun 25 2011 at 16:09

Joel, thanks for pointing this out. I had read the answer that you linked to at some point, but wasn't aware of the Henkin proof of the compactness theorem anymore. This is in some sense more transparent than the Henkin proof of the completeness theorem, since you don't have to go through the details of the deduction system. – Stefan Geschke Jun 25 2011 at 16:52

This is a side comment. There are several answers that explain why compactness is so important in model theory, and I agree with what they say. But I want to point out that the "in model theory" part is key here. In the overall study of logic, not restricted to model theory, both compactness and completeness are important, and each of those has areas of logic that favor it. Model theory, being a semantic field, naturally identifies more with semantic notions.

In mathematics outside logic, I think there is more implicit use of completeness than of compactness. Every time I prove that an identity is derivable from the axioms of a group by working semantically and showing that the identity holds in every group, I am implicitly using the completeness theorem. It is easy to miss this or take it for granted, because the completeness theorem is so well known.

There are systems that do not have complete deduction systems; one example is second-order logic with full second-order semantics. In this system it is perfectly possible for something to be true in every model without being provable in our usual proof system. Therefore, when we study this system in logic, we have to keep a close watch on whether we have shown something is provable, or just shown that it is logically valid. Imagine the difficulties in an alternate world where mathematicians have to distinguish between "true in all groups" and "provable from the axioms of a group". The completeness theorem is what lets us ignore this.

By comparison, it's more difficult to see reflections of the compactness theorem in everyday mathematics.

- 1 It might be worth noting that, when you have shown semantically that an identity holds in every group, you can conclude not only that the identity is formally derivable from the group axioms by means of first-order logic but also that it is derivable by purely equational reasoning (assuming you've expressed the group axioms as identities). In other words, you can invoke the completeness theorem of equational logic. – Andreas Blass Jun 26 2011 at 4:20

@Andreas: thanks for mentioning that, it's an even stronger example. – Carl Mummert Jun 26 2011 at 12:39

How strong is the statement "provable from the axioms of a group"? Does any such statement hold for all group objects in all categories with finite products?
(If so, that seems to be a nice motivation for caring about this point of view - you prove a first-order statement for all groups and subsequently it must be true for all group objects!) – Qiaochu Yuan Jun 26 2011 at 13:47

1 @Qiaochu: If what's being proved is an identity (a universally quantified equation between terms), then it will hold for group objects in categories with finite products. That follows from my previous comment (about equational deducibility) plus the fact that equational logic is sound in categories with finite products. (I suspect you don't even need the finite products, if you define a group object in a category as an object $G$ plus a lifting of the set-valued functor Hom$(-,G)$ to a group-valued functor.) For the non-equational case, see my next comment. – Andreas Blass Jun 27 2011 at 0:31

1 If a consequence $\phi$ of the group axioms is a first-order sentence but not an identity, then it's deducible from the group axioms in first-order logic. For that deduction to ensure $\phi$ for all group objects in a category $C$, you'd need to assume that $C$ is something like a Boolean pretopos, so that first-order formulas can be interpreted in structures in $C$ and first-order reasoning is sound. (If your proof of $\phi$ from the group axioms can be done in intuitionistic first-order logic, then you wouldn't need $C$ to be Boolean.) – Andreas Blass Jun 27 2011 at 0:35

I will address the most recent version of the question, which asks about the relationship between the following two features of a logic $L$, but note that a careful discussion of this topic should first clearly define what counts as a logic.

(1) Abstract Completeness of $L$: The set of valid $L$-sentences is recursively/computably enumerable [hereafter: r.e.].

(2) Compactness of $L$: A set $S$ of sentences of $L$ has a model if every finite subset of $S$ has a model.

(1) does not imply (2). For example, let $Q$ be the quantifier expressing "there are uncountably many", i.e., $Qx\phi(x)$ holds in a structure $\cal{M}$ with universe $M$ iff the set of $m\in M$ such that $\phi(m)$ holds in $\cal{M}$ is uncountable. Let $L_{FO}(Q)$ be the result of augmenting first order logic $L_{FO}$ with the new quantifier $Q$. Vaught proved that the set of valid sentences of $L_{FO}(Q)$ is r.e. Later [1970], in a seminal paper, Keisler gave an elegant axiomatization of $L_{FO}(Q)$.

On the other hand, it is easy to see that $L_{FO}(Q)$ does not satisfy compactness, e.g., for $\alpha < \aleph_1$ introduce constant symbols $c_{\alpha}$ and consider the set $S$ of sentences consisting of $\lnot Qx (x=x)$ [expressing "the universe is not uncountable"] plus sentences of the form $c_{\alpha}\neq c_\beta$ for $\alpha < \beta < \aleph_1$. It is easy to see that every countable subset of $S$ has a model, but $S$ itself does not have a model.

I should point out that $L_{FO}(Q)$ has a limited form of compactness known as countable compactness: if $S$ is a countable set of sentences of $L_{FO}(Q)$, then $S$ has a model if every finite subset of $S$ has a model [Vaught, ibid]. All of the above features of $L_{FO}(Q)$ [abstractly complete, countably but not fully compact] are shared by a number of other generalized quantifiers, including the stationary quantifier [introduced in the late 1970's and intensely studied in the 1980's]. However, as shown by Shelah, there are other generalized quantifiers that generate fully compact logics that are also abstractly complete.

(2) does not imply (1) either.
For example, consider the "logic" whose nonlogical symbols are the arithmetical ones, and whose axioms are the usual axioms of first order logic plus all the axioms of true arithmetic, i.e., arithmetical sentences that hold in $\Bbb{N}$. The semantics of the logic is the same as first order logic, so compactness continues to hold; but clearly abstract completeness fails by Tarski's undefinability-of-truth theorem, which says that $Th(\Bbb{N})$ is not arithmetical, let alone r.e.

PS: A "naturally occurring logic" that also serves to show that (2) does not imply (1) is the existential fragment of second order logic; its compactness follows from the usual proofs of compactness of first order logic [including the ultraproduct proof], but its set of valid sentences is known to be co-r.e. but not r.e.

- Fantastic, thanks – David Harris Jun 26 2011 at 21:20

Here is an interesting quote from Bruno Poizat's "A Course in Model Theory":

The compactness theorem, in the forms of Theorems 4.5 and 4.6, is due to Gödel; in fact, as explained in the beginning of Section 4.3 [Henkin's Method], the theorem was for Gödel a simple corollary (we could even say an unexpected corollary, a rather strange remark!) of his "completeness theorem" of logic, in which he showed that a finite system of rules of inference is sufficient to express the notion of consequence (see Chapter 7). It could also have been taken from [Herbrand 1928] or [Gentzen 1934], in which results of the same sort were proven.

This unfortunate compactness theorem was brought in by the back door, and we might say that its original modesty still does it wrong in logic textbooks. In my opinion it is much more essential and primordial (and thus also less sophisticated) than Gödel's completeness theorem, which states that we can formalize deduction in a certain arithmetic way; it is an error in method to deduce it from the latter. If we do it this way, it is by a very blind fidelity to the historic conditions that witnessed its birth. The weight of this tradition is apparent even in a work like [Chang-Keisler 1973], which was considered a bible of model theory in the 1970s; it begins with syntactic developments that have nothing to do with anything in the succeeding chapters.

This approach---deducing Compactness from the possibility of axiomatizing the notion of deduction---once applied to the propositional calculus gives the strangest proof on record of the compactness of $2^\omega$! It is undoubtedly more "logical," but it is inconvenient, to require the student to absorb a system of formal deduction, ultimately quite arbitrary, which can be justified only much later when we can show that it indeed represents the notion of semantic consequence. We should not lose sight of the fact that the formalisms have no raison d'être except insofar as they are adequate for representing notions of substance.

See also this old answer of mine where the same quote appears.

- 2 The remark about the compactness of $2^\omega$ is interesting. I personally don't have any use for deductive systems in propositional calculus. People closer to computer science might see this differently, though. I would always derive the compactness theorem for propositional logic from the topological compactness of $2^\omega$ (or something similar). – Stefan Geschke Jun 25 2011 at 17:00

Because of my natural inclination toward semantics rather than syntax, I tend to view the completeness theorem (for first-order logic) as being essentially the conjunction of two rather different facts.
One is the compactness theorem. The other is the recursive enumerability of the set of valid sentences (say, in any finite vocabulary). Both of these facts are often deduced from the completeness theorem, though they can also be proved by other methods (I like a version of Herbrand's theorem that doesn't mention any axioms or rules). But there is also a sort of converse, if one is willing to accept an unorthodox (but in my opinion fairly reasonable) notion of deduction. Namely, fix an algorithm $A$ enumerating the valid sentences, and define a "deduction" of a conclusion $\phi$ from a set $H$ of hypotheses to be a finite conjunction $\eta$ of members of $H$ together with a computation showing that $\eta\to\phi$ is enumerated by $A$. The completeness of this "deductive" system is an immediate consequence of the compactness theorem plus the fact that $A$ enumerates the valid sentences.

- 1 This is how the Wizard of Oz treats A. Miller's Micky Mouse system, which I mention in the answer mathoverflow.net/questions/9309/…, which Qiaochu linked to above. – Joel David Hamkins Jun 26 2011 at 11:27

The "other fact" is not just recursive enumerability of valid sentences, but of the finitary consequence operator (in other words, finite strong completeness). This distinction is of utmost importance in logical systems lacking the deduction theorem. – Emil Jeřábek Jun 27 2011 at 15:01

I was about to say that the answers given so far are all wrong and misleading, but thankfully I recalled that I am not a mathematician :-)

There are mainly two approaches to the concept of "logic".

1. The classical (or mathematical) approach to logic. Roughly speaking, a logic consists of two classes: a set of formulae and a class of models, together with a satisfiability relation saying what formulae are true in what models. Then, we may develop a proof system (or various proof systems) for the logic, which helps us --- in a systematic and coherent way --- derive satisfiability of formulae. Desirable properties of such proof systems are soundness (what we have derived is true) and completeness (what is true, we can derive). There is also compactness (if something follows from a theory, then it follows from a finite subset of the theory), which refers to the logic itself (here: the satisfiability relation). This is how mathematicians are taught logic.

2. The modern (or computer-science) approach to logic. Logic is a kind of formal system (deductive system). To help prove facts about such a system, we may introduce the concept of "models" (or various concepts of models) for the logic. Desirable properties of such classes of models are that the deductive system over them is sound and complete (that is --- for a given system --- we develop the appropriate concept of models such that the proof system is sound and complete; if we add/remove some axioms/rules to the system then we have to restrict/extend our class of models; this is most easily seen in temporal logics --- for example, LTL is sound and complete in linear models). There is also compactness (if something follows from a theory, then it follows from a finite subset of the theory), which refers to the logic itself (here: the proof system; if a logic allows only finitary proofs, then it is obviously compact). Simply, in this approach, the system is fundamental. This is how computer scientists are taught logic.

Of course, in the presence of soundness and completeness, classical compactness and modern compactness coincide.
So, moving back to your question --- I do agree with the other answers saying that completeness and compactness are just very different concepts, so neither is "deeper". However, I do not think that the classification of what belongs to models and what belongs to proofs is that obvious --- it is just all about how you think of logic.

- I don't understand why this answer was downvoted. Perhaps there are some mathematicians who don't realize that it's true that computer scientists have a different approach to logic, and that it might be equally valid? True, it may be unnecessarily combative to call #2 the "modern" approach. But on the other hand, it's worth pointing out that in order to even start talking about "models", you need some sort of ambient set-theory or something, which makes #1 somewhat questionable as a foundational perspective. – Mike Shulman Jun 26 2011 at 5:03

1 @Mike Shulman: I voted it up from -2 to -1, but I would have voted down if it had a positive score. I agree the word choice of "modern" isn't ideal, and I think the differences between the fields are being exaggerated. Even in mathematical logic, people in constructive mathematics and proof theory more generally are likely to start with a deductive system and move on to models. From my point of view, that is the "classical" approach, while the "semantics first" approach from model theory is more recent, as the influence of formalism from the early 20th century is receding. – Carl Mummert Jun 26 2011 at 12:23

1 Re: "semantics", I've recently learned that people in computer science and proof theory say "operational semantics" to refer to things that I, with a background in mathematical logic, might be more inclined to regard as part of "syntax" or "proof theory". Then they say "denotational semantics" for what I was trained to call just "semantics". After all, "semantics" literally means "meaning", and so does "denotation", so if one has to qualify "semantics" with "denotational" then it seems like a word got misused somewhere. (cont.) – Mike Shulman Jun 26 2011 at 19:12

1 To me, Michal's last three comments tend to confirm Mike's previous comment that some people use "semantics" to mean a sort of syntax. In particular, Michal seems happy to dismiss unnamed elements, so that models amount to just term models. My own (admittedly limited) experience with "operational semantics" also tends to confirm what Mike said; whenever I've seen an operational semantics explicitly exhibited, it looked (to me) just like a proof system. But see my next comment (this one's about to hit the length limit) for "on the other hand". – Andreas Blass Jun 27 2011 at 0:19

2 On the other hand, I think that this conflation of syntax and semantics is one of the beauties of (some versions of) category-theoretic semantics. I really like the idea that, for example, a (set-based) model of an equational theory is the same sort of thing as an interpretation into another equational theory, namely a finite-product-preserving functor. From a sufficiently abstract point of view, a syntactic interpretation of one theory $T$ in another $T'$ is a semantic model of $T$ in the "universe" generated by a generic model of $T'$ ("generic" in the category sense, not forcing). – Andreas Blass Jun 27 2011 at 0:25

Compactness is a "semantic" theorem, whose statement involves no "syntactic" concepts such as proofs or provability. So it seems one should not need the latter concepts to prove compactness (and of course, one does not).
-

I think that everything important that can be said about the differences between the Compactness and Completeness Theorems and their proofs from the technical point of view has been said. (I particularly like the detailed and elucidating answer given by Joel David Hamkins at http://mathoverflow.net/questions/9309/in-model-theory-does-compactness-easily-imply-completeness.) On the other hand, one of the most important differences between these theorems is a non-technical one, and indeed some previous answers contain hints to this effect. The Completeness Theorem has an obvious metamathematical (or even philosophical) flavour, as opposed to the Compactness Theorem. It is about the relation between the two most important mathematical notions, i.e., those of proof and truth.

And here I would like to argue with those (Carl Mummert and Stefan Geschke) who claim that the Completeness Theorem is sometimes used in everyday mathematics. As I see it, it is about everyday mathematics, but it does not belong to everyday mathematics. Contrary to what Carl Mummert says, I doubt that, in everyday mathematics, anybody at any time uses the completeness theorem in either an explicit or an implicit way. Obviously, one can successfully work in any field of mathematics (not intimately connected to logic) without any knowledge of mathematical logic. (Clearly she or he has to have a good sense of logic, but this is a completely different matter.) In other words (unlike Carl Mummert), I cannot imagine any "difficulties in an alternate world where mathematicians have to distinguish between 'true in all groups' and 'provable from the axioms of a group'". The reason is simple. I do not think that anyone proves "that a group identity is derivable from the axioms of a group by working semantically and showing that the identity holds in every group." Though I am not a group theorist, I think that no group theorist is interested in the statements that are provable from the axioms of group theory alone. (On the other hand, of course, the most important elementary statements needed to begin group theory at all are usually derived directly from the axioms.)

Most mathematicians work in intuitive set theory and freely make use of the different possibilities that this rich theory offers (independently of whether they are aware of the existence of ZFC). (Actually, the notion of a group itself is defined as a model, that is, generally in terms of sets rather than a first order theory. And, of course, this kind of definition is very practical, since otherwise every course on groups would have to be preceded by an introduction to logic.) I think that the pure first order theory of groups has only theoretical or didactic significance, being a nice, widely known example of a first order theory.

Likewise, I do not agree with Stefan Geschke that "the completeness theorem does explain why we can do mathematics the way we do." Just the other way around. Clearly, metamathematics is the study of real mathematics by exact mathematical means. Therefore, its notions are intended to mimic those of everyday informal mathematics as faithfully as possible. So a metamathematical result cannot explain or justify anything. What it can do is to describe in exact terms and clarify the way mathematics is normally done (and, of course, to draw consequences about everyday mathematics from the results of this description). But its results do not affect the way mathematics is normally done.
Obviously, we would do everyday mathematics in exactly the same way if the Completeness Theorem did not hold, just as those mathematicians do who have never heard of this theorem. And indeed, we do arithmetic in exactly the same way as mathematicians did before Gödel, who might well have thought that true arithmetic was recursively axiomatizable. -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 72, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9528945088386536, "perplexity_flag": "head"}
http://mathematica.stackexchange.com/questions/2496/how-to-cancel-floating-point-factors?answertab=votes
# How to cancel floating point factors? I am currently having problems with some floating points. I have a function, which gives as an intermediate result (for example) ````-(10000. - 10000. a) E^(-16.4157 kp^2 x^0.277)/((-1. + a) a x^0.252525 xj^4.47047) ```` It gets passed through into another function, which will fill in the value `a = 1`. Then Mathematica reacts with a `1/Sqrt[0]` error. However, as you can see, the factor `(10000. - 10000. a)` should cancel with the factor `(-1.+ a)` in the denominator to give 10000, the result is thus well-defined. Do you have any idea how to let Mathematica cancel these factors out? I have tried `Simplify` and `FullSimplify` with `Assumptions -> a!=1` but it doesn't work. I cannot change much to the function itself, as it is an intermediate result (and it is just an example for this post; other intermediate results also occur, sometimes with the same problem, sometimes they work fine). - 3 Floating point numbers are different than rational numbers. Since floating point numbers don't really follow the rules of algebra, you'll find that Simplify and the like will not do much at all wit them. - Run the calculations with increased precision using SetPrecision. (still essentially the same problem) I think there are two options: - Use the function Rationalize to turn the floating point numbers into rational numbers and then use FullSimplify. (I'm not a big fan of this because it ignores the fact that the numbers of a limited precision) – Searke Mar 1 '12 at 19:47 @Searke "Floats don't really follow the rules of algebra"?! – David Mar 1 '12 at 19:56 4 @David That is right. Addition/multiplication is not even associative. – Szabolcs Mar 1 '12 at 19:57 1 @David for example, `(1 + $MaxMachineNumber) == $MaxMachineNumber` evaluates to `True` – acl Mar 1 '12 at 20:02 @David try `1.*10^16 + 1. - 1.*10^16` which returns different answers depending on where `1.` is, however `-10.^16 + (10.^16 + 1.)` and `-10.^16 + (1. + 10.^16)` return the same incorrect answer. This indicates that addition is not associative, but commutative. Also, given `g[a_, b_, c_] := a^2 - 2 a b + b^2 - c^2`, `g[10.^10, 10.^10, 10.] == 0` implying that mma is doing speculative processing as the first three terms simplify to $(a - b)^2$ and should cancel. – rcollyer Mar 1 '12 at 20:43 show 3 more comments ## 3 Answers The problem is that symbolic computation doesn't mix well with floating point. If you use `Rationalize` first to convert to an exact representation, `Simplify` (or the faster `Cancel`) will be able to do it's job. - Use `Limit`, Mathematica's implementation of $\lim$: $\displaystyle\lim_{x\to a}f(x) =$ `Limit[f[x], x -> a]` Applied to your equation, this results in ````Limit[ -(((10000. - 10000. a) E^(-16.4157 kp^2 x^0.277)/((-1. + a) a x^0.252525 xj^4.47047))), a -> 1 ] ```` ````(10000. E^(-16.4157 kp^2 x^0.277))/(x^0.252525 xj^4.47047) ```` The error you are talking about is not caused by floating point arithmetic, it's how Mathematica (or any other language for that matter) handles computations in general. If you substitute $a=1$ then you've got a division by zero, and once this occurs you're out of the domain of, well, mathematics; therefore, you can't cancel the two factors anymore. (Actually, your function is not defined at $a=1$, it has a removable singularity there.) - The function is well defined at `a = 1`. The singularity is artificial, and has more to do with the order of evaluation than a being a property of the function, unlike in $\sin(x)/x$. 
Since mma evaluates the terms in isolation from one another, i.e. $(a - 1)^{-1}$ is evaluated separately from $1/a$, etc., this is likely to occur. – rcollyer Mar 1 '12 at 20:08

1 $\frac xx$ stands for the product of x and its multiplicative inverse, which does not exist for $x=0$. It's not well-defined or even defined at that point, although the limit exists of course. – David Mar 1 '12 at 20:10

2 This, of course, reminds me why I hated my analysis course. – rcollyer Mar 1 '12 at 20:44

Well... [Warning: This act is performed by trained professionals. Do not try it at home.]

````
In[24]:= expr = -(10000. - 10000. a) E^(-16.4157 kp^2 x^0.277)/
   ((-1. + a) a x^0.252525 xj^4.47047);

In[25]:= Cancel[expr]

Out[25]//InputForm= 10000./(a*E^(16.4157*kp^2*x^0.277)*x^0.252525*xj^4.47047)
````

Okay, I cheated. I'm using the development version of Mathematica. I will add some emphasis to remarks others have made: doing computer algebra with approximate numbers is almost never a safe bet. Even to the extent that it might be supported, things can go wrong. And do. On a remarkably frequent basis. -

+1, for cheek alone. :) – rcollyer Mar 1 '12 at 23:03
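For readers outside Mathematica, the Rationalize-then-Cancel idea from the accepted answer can be sketched with SymPy; this is a minimal illustration (the exponential factor is dropped for brevity, and `nsimplify` plays the role of `Rationalize`):

```python
from sympy import symbols, nsimplify, cancel

a = symbols('a')

# The problematic rational part of the expression, with machine floats
expr = -(10000.0 - 10000.0 * a) / ((-1.0 + a) * a)

exact = nsimplify(expr, rational=True)  # floats -> exact rationals
print(cancel(exact))                    # 10000/a -- the (a - 1) factor cancels
print(cancel(exact).subs(a, 1))         # 10000, now well-defined at a = 1
```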
http://mathoverflow.net/revisions/14860/list
## Return to Question

Revision 2 (put in escapes to fix formatting problems):

Associated to any `$A_\infty$` $k$-algebra $A$ the Hochschild cochain complex `$CH^*(A)$` has the structure of a dg-Lie algebra and a dg-algebra which are compatible enough that the cohomology is a Gerstenhaber algebra. If two `$A_\infty$` algebras are Morita equivalent, are their Hochschild cochain complexes isomorphic in (i) the category of $k$-dg-algebras and (ii) the category of $k$-dg-Lie algebras, both up to quasi-isomorphism? Are they isomorphic in some category that feels both structures together? Now suppose that $\mathcal{C}$ is a dg-category over a field $k$. We say that the $k$-dg-algebra `$CH^*(\mathcal{C}) = End(id_\mathcal{C})$` is the Hochschild cochain complex. Does `$CH^*(\mathcal{C})$` have a bracket that generalizes the known one in the case that $\mathcal{C}$ is a (derived) category of modules? If two dg-categories are quasi-equivalent are their Hochschild cochain complexes quasi-isomorphic? Is there a point of view that clarifies these issues?

Revision 1:

# Regarding the Gerstenhaber bracket on Hochschild cohomology and Morita equivalence

Associated to any $A_\infty$ $k$-algebra $A$ the Hochschild cochain complex $CH^*(A)$ has the structure of a dg-Lie algebra and a dg-algebra which are compatible enough that the cohomology is a Gerstenhaber algebra. If two $A_\infty$ algebras are Morita equivalent, are their Hochschild cochain complexes isomorphic in (i) the category of $k$-dg-algebras and (ii) the category of $k$-dg-Lie algebras, both up to quasi-isomorphism? Are they isomorphic in some category that feels both structures together? Now suppose that $\mathcal{C}$ is a dg-category over a field $k$. We say that the $k$-dg-algebra $CH^*(\mathcal{C}) = End(id_\mathcal{C})$ is the Hochschild cochain complex. Does $CH^*(\mathcal{C})$ have a bracket that generalizes the known one in the case that $\mathcal{C}$ is a (derived) category of modules? If two dg-categories are quasi-equivalent are their Hochschild cochain complexes quasi-isomorphic? Is there a point of view that clarifies these issues?
http://quant.stackexchange.com/tags/portfolio-management/new
# Tag Info

## New answers tagged portfolio-management

1

### Desired portfolio volume

One approach is to use an exponential utility function: $U(x) = -e^{-\lambda x}$. Here, $\lambda$ records what is known as the absolute risk aversion. Exponential utility functions are nice because they have a wealth independence property (of course, this may be seen as a drawback). As we will see below, the initial capital $X$ plays no part in the ...

1

### Calculating Geometric mean

I'm currently also using daily returns which I want to annualize. This is my approach: For every month, I calculate the simple return using the formula: (end-of-month closing price / beginning-of-month closing price) - 1. I use the Excel formula SUMPRODUCT(GEOMEAN(A1:A12+1)-1) to find the monthly compounded return. Finally, I annualize the result of step 2 ...

0

### Calculating Geometric mean

When you say the return on firms, I take it you mean the change in the stock price of firms. If you were talking about the return of firms' investment strategies, then you would have to deal with cash inflows, which makes the answer more complicated. If you are having problems, there are two equivalent approaches that should give you the correct answer. In ...
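The monthly-return/geometric-mean recipe in the first "Calculating Geometric mean" answer is easy to express outside Excel; here is a minimal Python sketch with made-up prices (the data and variable names are illustrative, not from the original answers):

```python
import numpy as np

# Hypothetical monthly closing prices (13 values -> 12 monthly returns)
close = np.array([100., 102., 101., 105., 107., 110., 108.,
                  112., 115., 114., 118., 121., 124.])

# Step 1: simple monthly returns, (end of month / start of month) - 1
monthly = close[1:] / close[:-1] - 1

# Step 2: geometric mean of the growth factors (cf. GEOMEAN(A1:A12+1)-1)
geo = np.prod(1 + monthly) ** (1 / len(monthly)) - 1

# Step 3: annualize the monthly compounded return
annual = (1 + geo) ** 12 - 1
print(f"monthly geometric mean: {geo:.4%}  annualized: {annual:.4%}")
```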
http://math.stackexchange.com/questions/79055/is-this-a-valid-function/79057
# Is this a valid function?

I am stuck with the question below.

Say whether the given function is one to one. $A=\mathbb{Z}$, $B=\mathbb{Z}\times\mathbb{Z}$, $f(a)=(a,a+1)$

I am a bit confused about $f(a)=(a,a+1)$: there appear to be two outputs $(a,a+1)$ for a single input $a$, which is against the definition of a function. Please help me out by sharing your view of the question. Thanks -

1 Is $Z$ the integers, and is $Z*Z$ the set of pairs of integers? If so, you can use `\mathbb{Z}` for $\mathbb{Z}$, and $\times$ (`\times`) instead of $*$ (the latter could be confused with the free product of groups). – Arturo Magidin Nov 4 '11 at 21:35

1 No, there's a single output, which happens to be a pair of two natural numbers. ($\mathbb Z\times \mathbb Z$ is the set of such pairs). There's only one pair coming out of the function. It doesn't matter that it has some internal structure. – Henning Makholm Nov 4 '11 at 21:35

@ArturoMagidin: Yes it is. – Fahad Uddin Nov 4 '11 at 21:40

1 Sigh. I meant "happens to be a pair of two integers". Damn the edit window. – Henning Makholm Nov 4 '11 at 22:48

## 2 Answers

If $f\colon A\to B$, then the inputs of $f$ are elements of $A$, and the outputs of $f$ are elements of $B$, whatever the elements of $B$ may be. If the elements of $B$ are sets with 17 elements each, then the outputs of the function will be sets with 17 elements each. If the elements of $B$ are books, then the output will be books. Here, the set $B$ is the set whose elements are ordered pairs. So every output of $f$ must be an ordered pair. This is a single item, the pair (just like your address may consist of many words and numbers, but it's still a single address). By the way: whether the function is one-to-one or not is immaterial here (and it seems to me, immaterial to your confusion...) -

The output is a single pair of two integers. Your definition of $B$ specifies that all outputs of your function are pairs of integers. -
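To make the "one output that happens to be a pair" point concrete, here is a small Python sketch (ours, not from the thread); the return type annotation makes it explicit that each input yields exactly one value, an ordered pair:

```python
def f(a: int) -> tuple[int, int]:   # Python 3.9+ for builtin tuple generics
    # One input, one output; the output just happens to be an ordered pair.
    return (a, a + 1)

print(f(3))  # (3, 4) -- a single value of type tuple[int, int]

# A spot-check of injectivity on a small range (evidence, not a proof):
values = [f(a) for a in range(-100, 101)]
assert len(values) == len(set(values))
```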
http://www.physicsforums.com/showthread.php?t=199800
Physics Forums

## Lorentz Invariance and Non-Galilean Invariance of Maxwell's Equations

I am having trouble going about proving the Lorentz invariance and non-Galilean invariance of Maxwell's equations. Can someone help me find a simple way to do it? I've looked online and in textbooks, but they hardly give any explicit examples.

You have to specify how the fields transform. To do it in general, it's easiest to do it tensorially. You could do it vectorially... or possibly less elegantly component-wise. Can you show some of your attempts so far?

I've tried transforming the coordinates of the wave equations for Maxwell's equations into Lorentz-transformed equations via the x and t components, excluding the y and z components of the wave equation for simplicity. I figured since the equations are homogeneous, the x and t components should be either equal to each other or each equal to zero when taking the second derivatives of each component (since the x - t components equal zero). I got a very messy x component after partially differentiating it twice, and noticed that the electric field doesn't have a time component in it, so it should equal zero, but I didn't see how my differentiated x part could equal zero too. Is this a good way to go about it? With the wave equations, substitute in the transformed coordinates?

Otherwise, I've started the tensor formulation that you suggested, with the field strength and the dual tensors; I derived Maxwell's equations via the four-vectors of current and potential. I figured I could simply transform the field strength tensor and the dual tensor each by Lorentz transformation matrices, then take those transformed tensors and try to derive Maxwell's equations by the same previous method, and get the same result. But I was confused as to what transformation matrices to use on the tensors, since they are second-rank tensors. What matrices would I use? Which way is better, if either of them is good?

You can show that the 1+1 wave equation is not invariant under a Galilean boost. [Take care with the Chain Rule.] It is invariant under a Lorentz boost (as suggested by the d'Alembert form of the solution). [Use the d'Alembert form and light-cone coordinates.] The calculations in terms of components are tedious. It's worth doing explicitly... then doing it tensorially. I don't have the patience right now to $$\LaTeX$$ the steps in this exercise. It might be best if you show your explicit steps, which we can comment on. You might find some help from http://farside.ph.utexas.edu/teachin...res/node6.html http://www2.maths.ox.ac.uk/~nwoodh/sr/index.html

Dahaka, "But, I was confused as to what transformation matrices to use on the tensors, since they are second-rank tensors. What matrices would I use?" Have a look here. I managed to boost (Lorentz transform) the F tensor after some help.
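To make the suggested 1+1 check explicit, here is a sketch of the Galilean case (our notation; $$\phi$$ is any field component and c the wave speed). Under the boost $$x' = x - vt,\quad t' = t,$$ the chain rule gives

$$\frac{\partial}{\partial x} = \frac{\partial}{\partial x'}, \qquad \frac{\partial}{\partial t} = \frac{\partial}{\partial t'} - v\,\frac{\partial}{\partial x'},$$

so that

$$\frac{\partial^2 \phi}{\partial t^2} - c^2\,\frac{\partial^2 \phi}{\partial x^2} = \frac{\partial^2 \phi}{\partial t'^2} - 2v\,\frac{\partial^2 \phi}{\partial t'\,\partial x'} + \left(v^2 - c^2\right)\frac{\partial^2 \phi}{\partial x'^2}.$$

The cross term spoils the form of the wave equation, so it is not Galilean invariant; repeating the computation with a Lorentz boost (or with light-cone coordinates $$u = x - ct,\ w = x + ct$$) leaves the operator unchanged.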
http://math.stackexchange.com/questions/62430/asymptotic-upper-bound-of-bisecting-trees
# Asymptotic upper bound of Bisecting trees

The question is:

B-3 Bisecting trees

Many divide-and-conquer algorithms that operate on graphs require that the graph be bisected into two nearly equal-sized subgraphs, which are induced by a partition of the vertices. This problem investigates bisections of trees formed by removing a small number of edges. We require that whenever two vertices end up in the same subtree after removing edges, then they must be in the same partition.

c. Show that by removing at most $O(\lg n)$ edges, we can partition the vertices of any $n$-vertex binary tree into two sets A and B such that $|A| = \lfloor\frac{n}2\rfloor$ and $|B| = \lceil\frac{n}2\rceil$

I've resolved a related question here, but I'm not sure if it's helpful for this problem. -

## 1 Answer

Strengthen the statement to: For any $k=1$, $\dots$, $n-1$, by removing at most $3 \lg n$ edges, we can partition the vertices of any $n$-vertex binary tree into two sets, one of which contains $k$ vertices and the other of which contains $n-k$ vertices. The statement can then be proven by induction on $n$. If $n=1$, the statement is vacuously true. Otherwise, use the previous result to remove an edge to partition the tree into subtrees whose sets of vertices, $A$ and $B$, say, are such that $|A|\le 3n/4$, $|B|\le 3n/4$. If $|A|=k$, we are done. If $|A|>k$, use the induction hypothesis to partition $A$ into a piece of size $k$ and a piece of size $|A|-k$; otherwise, $|A|<k$, so partition $B$ into a piece of size $k-|A|$ and a piece of size $n-k$. By the induction hypothesis, this removes at most $1+3 \lg(\frac{3}{4}n)\le 3\lg n$ edges in all. -
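One line of arithmetic makes the final estimate explicit: since the recursion is applied to a part of size at most $\frac{3}{4}n$,

$$1 + 3\lg\left(\tfrac{3}{4}n\right) = 1 + 3\lg n - 3\lg\tfrac{4}{3} \le 3\lg n \iff 3\lg\tfrac{4}{3} \ge 1,$$

which holds because $\lg\tfrac{4}{3} \approx 0.415 > \tfrac{1}{3}$.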
http://www.theresearchkitchen.com/archives/date/2012/04/12
# The Euler Method In R

Posted on April 12, 2012

The Euler Method is a very simple method used for the numerical solution of initial-value problems. Although there are much better methods in practice, it is a nice intuitive mechanism. The objective is to find a solution to the equation

$$\frac{dy}{dt} = f(t,y)$$

over a grid of points (equally spaced, in our case). Euler’s method uses the relation (basically a truncated form of Taylor’s approximation theorem with the error terms chopped off):

$$y(t_{i+1}) = y(t_i) + h\,f(t_i, y(t_i))$$

In R, we can express this iterative solution as:

```euler <- function(dy.dx=function(x,y){}, h=1E-7, y0=1, start=0, end=1) {
  # Assumes (end-start)/h is a whole number of steps
  nsteps <- (end-start)/h
  ys <- numeric(nsteps+1)
  ys[1] <- y0                            # initial condition
  for (i in 1:nsteps) {
    x <- start + (i-1)*h
    ys[i+1] <- ys[i] + h*dy.dx(x,ys[i])  # Euler update
  }
  ys
}
```

Note that given the start and end points, and the size of each step, we figure out the number of steps. Inside the loop, we calculate each successive approximation. An example using the differential equation

$$\frac{dy}{dx} = 3x - y + 8$$

is:

```dy.dx <- function(x,y) { 3*x - y + 8 }
euler(dy.dx, start=0, end=0.5, h=0.1, y0=3)
[1] 3.00000 3.50000 3.98000 4.44200 4.88780 5.31902
```

As a sanity check, the exact solution of this equation with $y(0)=3$ is $y(x) = 3x + 5 - 2e^{-x}$, so $y(0.5) \approx 5.28694$; the Euler estimate 5.31902 is close, and shrinking $h$ tightens the agreement.

Posted in Coding, R
http://mathoverflow.net/questions/59770/are-the-non-trivial-zeros-of-zeta-simple/60347
## Are the non-trivial zeros of Zeta simple?

Hello, a few years ago I found on arXiv an article in which the authors (I think there were at least two of them) claimed to have proven that the non-trivial zeros of the Riemann Zeta function are all simple, using the concept of Riemann surfaces. But unfortunately, I just can't find it again. Does someone know if such a result has been published and widely accepted by the mathematical community? Thank you in advance. -

## 4 Answers

This is wide open. Moreover, I think we will prove the Riemann Hypothesis much earlier than the simplicity of the zeros (if the latter is even true). The latter is somehow much more accidental; the only reasonable argument I know in favor of it is "why would two zeros ever coincide?" Note, however, that some automorphic $L$-functions do have multiple zeros. If I recall correctly, even a Dedekind $L$-function can have a multiple zero at the center. -

4 though there are results like a certain percentage of the zeros are simple... – shenghao Mar 28 2011 at 0:03

1 I am no expert, but can a Dedekind zeta function vanish multiply if there is no Armitage/Serre phenomenon, with a root number of -1 for an Artin representation in the decomposition? The MAGMA L-functions handbook code has an example. magma.maths.usyd.edu.au/magma/handbook/text/… – Junkie Mar 28 2011 at 0:31

15 @Junkie: "I am no expert, but can a Dedekind zeta function vanish multiply if there is no Armitage/Serre phenomenon"...That sounds suspiciously like a question an expert would ask! – Pete L. Clark Mar 28 2011 at 2:50

2 I let you guys sort this out. – GH Mar 28 2011 at 3:21

5 For a Galois extension, the Dedekind zeta function factors as a product over all the Artin L-functions of the irreducible representations, with multiplicity equal to the dimension. Thus for dim>1 (ie non-Abelian Galois groups) the Dedekind zeta function is forced to have multiple zeros, not just at 1/2 but all the way up the critical line. All this goes back to Artin, much before Armitage/Serre. – Stopple Mar 28 2011 at 15:26

Please note that in the article 0802.1764, "Riemann Hypothesis may be proved by induction" by R. M. Abrarov and S. M. Abrarov, the authors did not claim a proof of RH. They only suggested that an induction procedure may be used for RH. This is a nice paper. It contains useful equations that were not previously known in number theory. -

2 Why is this downvoted? I do not know the details of the preprint in question, but I just had a brief look, and it appears to be correct that the authors of that paper do not claim (contrary to what one might assume from the title and the listing in the other answer) to prove RH (what they seem to do is to establish an equivalence of RH with another assertion, as do various papers). So, this comment/answer seems useful (Gregory cannot comment); if it is not, for a more subtle reason, it would be nice if this was pointed out along with the vote. – quid Apr 2 2011 at 11:29

Here is the arxiv page: arxiv.org/abs/0802.1764 – BR Apr 2 2011 at 17:28

My apologies for including this article in my list.
I was misled by the title, by the sentence "At least one of these identities may be applied to prove the Riemann Hypothesis by induction" in the abstract, and by the statement, "The Induction Procedure can be applied over and over again for further validation of (19). Hence the Riemann Hypothesis is justified" toward the end of the paper. But the last sentence of the article seems to say they have only found a condition which, if true, implies RH. So far as I can tell, the authors publish only in the arXiv. – Gerry Myerson Apr 4 2011 at 0:04

To the best of my knowledge, it is still an open question as to whether all the zeros are simple. If you could find that article.... For what it's worth, any number of "proofs" of the Riemann Hypothesis have appeared on the arXiv. Here are a few (I've not included three more that were withdrawn by the authors).

1006.0381 The Riemann Hypothesis, Ilgar Sh. Jabbarov (Dzhabbarov)

0906.4604 A Proof for the Riemann Hypothesis, Ruiming Zhang

0903.3973 Concerning Riemann Hypothesis, Raghunath Acharya

0802.1764 Riemann Hypothesis may be proved by induction, R. M. Abrarov, S. M. Abrarov [EDIT: It appears that this paper does not actually claim a proof of RH - see Gregory's answer to the question (and my comment on Gregory's answer).]

0801.4072 The Riemann Hypothesis and the Nontrivial Zeros of the General L-Functions, Fayang Qiu

0801.0633 From Bombieri's Mean Value Theorem to the Riemann Hypothesis, Fu-Gao Song

0709.1389 One page proof of the Riemann hypothesis, Andrzej Madrecki

math/0308001 A Geometric Proof of Generalized Riemann Hypothesis, Kaida Shi

math/9909153 Riemann Hypothesis, Chengyan Liu -

2 Nice collection! I particularly liked "One page proof of the Riemann hypothesis" and "Riemann Hypothesis may be proved by induction". Here is a generalization: "One page induction proof of the Riemann Hypothesis AND the Twin Prime Conjecture". Can you beat that? Perhaps "Three-line proof that Peano Arithmetic is inconsistent"? – GH Mar 28 2011 at 1:13

1 @GH Maybe it would be beaten by something like Zagier's title. Say something like "A One Sentence Proof Of The Riemann Hypothesis" =) – Adrián Barquero Mar 28 2011 at 1:56

2 @GH, did you notice that "One page proof of the Riemann hypothesis" is 17 pages long? – Gerry Myerson Mar 28 2011 at 3:19

3 Nope :-) I am sure 16 pages are for non-experts and then there is 1 page of beef. – GH Mar 28 2011 at 3:23

I finally managed to find the article I was talking about. Just click on the green link in the first message of the following link: link text -

1 For convenience, I'm adding a link to the original Arxiv page. The paper has been updated twice with respect to the version you link (no, apparently the main result wasn't withdrawn or amended): arxiv.org/abs/0911.5138 – Federico Poloni Mar 28 2011 at 11:45

1 It appears that this paper has been very recently accepted by the International Journal of Mathematics and Mathematical Sciences, as per hindawi.com/journals/ijmms/aip/985323 – Gerry Myerson Mar 28 2011 at 23:27

3 I don't claim to have read it carefully, but a quick glance through makes me very suspicious: the arguments seem too abstract somehow, and there are quite a lot of computer-generated pictures that look as though they may be intended as substitutes for rigorous proofs. Basically, I can't find the beef anywhere -- for instance, I don't see any sign of hard estimates. Coupling that with GH's comment, I am left thinking I can safely ignore this one unless the experts suddenly get excited about it.
– gowers Apr 2 2011 at 9:42

1 Already the very beginning makes me very suspicious. They make a claim about Riemann surfaces and give two references. One is a page in Ahlfors' book, where we find no theorem, but an informal discussion of Riemann surfaces. Ahlfors says on the previous page "This idea leads to the notion of a Riemann surface. It is not our intention to give, in this connection, a rigorous definition of this notion. For our purposes it is sufficient to introduce Riemann surfaces in a purely descriptive manner. (cont. in next comment) – GH Apr 2 2011 at 14:12

(cont. from previous comment) We are free to do so as long as we use them merely for purposes of illustration, and never in logical proofs." The second one is a conference proceedings publication by the authors which does not even show up in MathSciNet. Also, it does not look good on the second author that his 33 publications published over 40 years only have 4 citations in MathSciNet. The first author is 80 years old, which raises further doubt about the credibility of the paper. In short: this paper is simply wrong in all likelihood. – GH Apr 2 2011 at 14:17
http://math.stackexchange.com/questions/319541/finding-speed-and-length-of-a-line
# Finding speed and length of a line

I have a math problem which I'm kind of lost in. I have the question:

Find the speed $ds/dt$ on the line $x=1+6t$, $y=2+3t$, $z=2t$. Integrate to find the length $s$ from $(1,2,0)$ to $(13,8,4)$. Check by using $12^{2} + 6^{2} + 4^{2}$.

I think I have found the speed: $\sqrt{6^{2}+3^{2}+2^{2}} = 7$. But I'm lost on integrating to find the length between the two points. Can anyone help me? David -

## 2 Answers

If a particle travels along the curve $s(t) = (1 + 6t, 2 + 3t, 2t)$ then its velocity is $ds/dt = (6, 3, 2)$. The length of the curve $s$ between two points $a = (1,2,0)$ and $b=(13, 8, 4)$ is computed as $\int_{t_a}^{t_b} \sqrt{(ds/dt)_x^2 + (ds/dt)_y^2 + (ds/dt)_z^2}\, dt$. Here $t_a$ and $t_b$ are the parameter values with the property $s(t_a) = a$ and $s(t_b) = b$; you need to determine them. Notice that $s(0) = a$ and $s(2) = b$. With the values inserted, $$\int_{t_a}^{t_b} \sqrt{6^2 + 3^2 + 2^2}\, dt= \int_0^2 \sqrt{49}\, dt = \int_0^2 7\, dt = 14$$ -

Thanks man - helped a lot! – user1090614 Mar 3 at 16:39

@user1090614 You are welcome! – goobie Mar 3 at 16:41

Since this sounds like homework, hints:

• speed is simply the magnitude of the velocity.
• the magnitude of a vector $\vec{x}$ is $\sqrt{\vec{x} \cdot \vec{x}}$
• given a time-dependent (or constant) expression for the speed, if you integrate it over a period, you'll find the length of the distance travelled.
• i.e., you want an intermediate expression $s(t) = ...$, and in your case this expression will be very simple. -

Am I correct in stating that the speed is 7, and that therefore integrating it between 0 and 2 gives 14? – user1090614 Mar 3 at 16:31

sounds good to me... – Eamon Nerbonne Mar 3 at 16:35

Super - I totally forgot that integrating a constant c gives xc + d... Sometimes the answer is right in front of you, but your eyes won't let you see it. – user1090614 Mar 3 at 16:37

Yeah, somehow particularly with math that's tricky. Wonder why... – Eamon Nerbonne Mar 3 at 16:37
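A quick numeric check of the arc-length computation above, as a minimal NumPy sketch (variable names are ours):

```python
import numpy as np

velocity = np.array([6.0, 3.0, 2.0])          # (dx/dt, dy/dt, dz/dt)
speed = np.linalg.norm(velocity)              # sqrt(6^2 + 3^2 + 2^2) = 7.0
t = np.linspace(0.0, 2.0, 1001)               # s(0) = (1,2,0), s(2) = (13,8,4)
length = np.trapz(np.full_like(t, speed), t)  # integral of constant speed = 14
check = np.linalg.norm([12.0, 6.0, 4.0])      # chord length sqrt(144+36+16)
print(speed, length, check)                   # 7.0 14.0 14.0
```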
http://en.wikipedia.org/wiki/Frame_of_reference
Frame of reference

See also: Inertial frame and basis (linear algebra)

In physics, a frame of reference (or reference frame) may refer to a coordinate system used to represent and measure properties of objects such as their position and orientation. It may also refer to a set of axes used for such representation. Alternatively, in relativity, the phrase can be used to refer to the relationship between a moving observer and the phenomenon or phenomena under observation. In this context, the phrase often becomes "observational frame of reference" (or "observational reference frame"). The context may itself include a coordinate system used to represent the observer and phenomenon or phenomena.

Different aspects of "frame of reference"

The need to distinguish between the various meanings of "frame of reference" has led to a variety of terms. For example, sometimes the type of coordinate system is attached as a modifier, as in Cartesian frame of reference. Sometimes the state of motion is emphasized, as in rotating frame of reference. Sometimes the way it transforms to frames considered as related is emphasized, as in Galilean frame of reference. Sometimes frames are distinguished by the scale of their observations, as in macroscopic and microscopic frames of reference.[1]

In this article, the term observational frame of reference is used when emphasis is upon the state of motion rather than upon the coordinate choice or the character of the observations or observational apparatus. In this sense, an observational frame of reference allows study of the effect of motion upon an entire family of coordinate systems that could be attached to this frame. On the other hand, a coordinate system may be employed for many purposes where the state of motion is not the primary concern. For example, a coordinate system may be adopted to take advantage of the symmetry of a system. In a still broader perspective, the formulation of many problems in physics employs generalized coordinates, normal modes or eigenvectors, which are only indirectly related to space and time. It seems useful to divorce the various aspects of a reference frame for the discussion below. We therefore take observational frames of reference, coordinate systems, and observational equipment as independent concepts, separated as below:

• An observational frame (such as an inertial frame or non-inertial frame of reference) is a physical concept related to state of motion.

• A coordinate system is a mathematical concept, amounting to a choice of language used to describe observations.[2] Consequently, an observer in an observational frame of reference can choose to employ any coordinate system (Cartesian, polar, curvilinear, generalized, …) to describe observations made from that frame of reference. A change in the choice of this coordinate system does not change an observer's state of motion, and so does not entail a change in the observer's observational frame of reference. This viewpoint can be found elsewhere as well.[3] This is not to dispute that some coordinate systems may be a better choice for some observations than others.

• Choice of what to measure and with what observational apparatus is a matter separate from the observer's state of motion and choice of coordinate system.
Here is a quotation applicable to moving observational frames $\mathfrak{R}$ and various associated Euclidean three-space coordinate systems [R, R' , etc.]: [4] “ We first introduce the notion of reference frame, itself related to the idea of observer: the reference frame is, in some sense, the "Euclidean space carried by the observer". Let us give a more mathematical definition:… the reference frame is... the set of all points in the Euclidean space with the rigid body motion of the observer. The frame, denoted $\mathfrak{R}$, is said to move with the observer.… The spatial positions of particles are labelled relative to a frame $\mathfrak{R}$ by establishing a coordinate system R with origin O. The corresponding set of axes, sharing the rigid body motion of the frame $\mathfrak{R}$, can be considered to give a physical realization of $\mathfrak{R}$. In a frame $\mathfrak{R}$, coordinates are changed from R to R' by carrying out, at each instant of time, the same coordinate transformation on the components of intrinsic objects (vectors and tensors) introduced to represent physical quantities in this frame. ” and this on the utility of separating the notions of $\mathfrak{R}$ and [R, R' , etc.]:[5] “ As noted by Brillouin, a distinction between mathematical sets of coordinates and physical frames of reference must be made. The ignorance of such distinction is the source of much confusion… the dependent functions such as velocity for example, are measured with respect to a physical reference frame, but one is free to choose any mathematical coordinate system in which the equations are specified. ” and this, also on the distinction between $\mathfrak{R}$ and [R, R' , etc.]:[6] “ The idea of a reference frame is really quite different from that of a coordinate system. Frames differ just when they define different spaces (sets of rest points) or times (sets of simultaneous events). So the ideas of a space, a time, of rest and simultaneity, go inextricably together with that of frame. However, a mere shift of origin, or a purely spatial rotation of space coordinates results in a new coordinate system. So frames correspond at best to classes of coordinate systems. ” and from J. D. Norton:[7] “ In traditional developments of special and general relativity it has been customary not to distinguish between two quite distinct ideas. The first is the notion of a coordinate system, understood simply as the smooth, invertible assignment of four numbers to events in spacetime neighborhoods. The second, the frame of reference, refers to an idealized system used to assign such numbers … To avoid unnecessary restrictions, we can divorce this arrangement from metrical notions. … Of special importance for our purposes is that each frame of reference has a definite state of motion at each event of spacetime.…Within the context of special relativity and as long as we restrict ourselves to frames of reference in inertial motion, then little of importance depends on the difference between an inertial frame of reference and the inertial coordinate system it induces. This comfortable circumstance ceases immediately once we begin to consider frames of reference in nonuniform motion even within special relativity.…More recently, to negotiate the obvious ambiguities of Einstein’s treatment, the notion of frame of reference has reappeared as a structure distinct from a coordinate system. 
” The discussion is taken beyond simple space-time coordinate systems by Brading and Castellani.[8] Extension to coordinate systems using generalized coordinates underlies the Hamiltonian and Lagrangian formulations[9] of quantum field theory, classical relativistic mechanics, and quantum gravity.[10][11][12][13][14]

Coordinate systems

Main article: Coordinate systems. See also: Generalized coordinates and Axes conventions

An observer O, situated at the origin of a local set of coordinates - a frame of reference F. The observer in this frame uses the coordinates (x, y, z, t) to describe a spacetime event, shown as a star.

Although the term "coordinate system" is often used (particularly by physicists) in a nontechnical sense, the term "coordinate system" does have a precise meaning in mathematics, and sometimes that is what the physicist means as well. A coordinate system in mathematics is a facet of geometry or of algebra,[15][16] in particular, a property of manifolds (for example, in physics, configuration spaces or phase spaces).[17][18] The coordinates of a point r in an n-dimensional space are simply an ordered set of n numbers:[19][20]

$\mathbf{r} =[x^1,\ x^2,\ \dots\ , x^n] \ .$

In a general Banach space, these numbers could be (for example) coefficients in a functional expansion like a Fourier series. In a physical problem, they could be spacetime coordinates or normal mode amplitudes. In a robot design, they could be angles of relative rotations, linear displacements, or deformations of joints.[21] Here we will suppose these coordinates can be related to a Cartesian coordinate system by a set of functions:

$x^j = x^j (x,\ y,\ z,\ \dots)\ ,$    $j = 1, \ \dots \ , \ n\$

where x, y, z, etc. are the n Cartesian coordinates of the point. Given these functions, coordinate surfaces are defined by the relations:

$x^j (x, y, z, \dots) = \mathrm{constant}\ ,$    $j = 1, \ \dots \ , \ n\ .$

The intersection of these surfaces defines coordinate lines. At any selected point, tangents to the intersecting coordinate lines at that point define a set of basis vectors {e1, e2, …, en} at that point. That is:[22]

$\mathbf{e}_i(\mathbf{r}) =\lim_{\epsilon \rightarrow 0} \frac{\mathbf{r}\left(x^1,\ \dots,\ x^i+\epsilon,\ \dots ,\ x^n \right) - \mathbf{r}\left(x^1,\ \dots,\ x^i,\ \dots ,\ x^n \right)}{\epsilon }\ ,$

which can be normalized to be of unit length. For more detail see curvilinear coordinates. Coordinate surfaces, coordinate lines, and basis vectors are components of a coordinate system.[23] If the basis vectors are orthogonal at every point, the coordinate system is an orthogonal coordinate system. An important aspect of a coordinate system is its metric gik, which determines the arc length ds in the coordinate system in terms of its coordinates:[24]

$(ds)^2 = g_{ik}\ dx^i\ dx^k \ ,$

where repeated indices are summed over. As is apparent from these remarks, a coordinate system is a mathematical construct, part of an axiomatic system. There is no necessary connection between coordinate systems and physical motion (or any other aspect of reality). However, coordinate systems can include time as a coordinate, and can be used to describe motion. Thus, Lorentz transformations and Galilean transformations may be viewed as coordinate transformations. General and specific topics of coordinate systems can be pursued following the See also links below.
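These definitions are easy to exercise concretely; as an illustration only, here is a minimal SymPy sketch computing the basis vectors and metric for plane polar coordinates, where the familiar $ds^2 = dr^2 + r^2\, d\theta^2$ drops out:

```python
import sympy as sp

# Plane polar coordinates (r, theta) related to Cartesian (x, y)
r, theta = sp.symbols('r theta', positive=True)
position = sp.Matrix([r * sp.cos(theta), r * sp.sin(theta)])

# Basis vectors: tangents to the coordinate lines, e_i = d(position)/dx^i
e_r = position.diff(r)            # (cos(theta), sin(theta))
e_theta = position.diff(theta)    # (-r*sin(theta), r*cos(theta))

# Metric components g_ik = e_i . e_k
g = sp.Matrix([[e_r.dot(e_r),     e_r.dot(e_theta)],
               [e_theta.dot(e_r), e_theta.dot(e_theta)]]).applyfunc(sp.simplify)
print(g)  # Matrix([[1, 0], [0, r**2]]), i.e. ds^2 = dr^2 + r^2 dtheta^2
```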
Observational frames of reference

Main article: Inertial frame of reference

Three frames of reference in special relativity. Black frame is at rest. Primed frame moves at 40% of light speed, double primed at 80%. Note scissors-like change as speed increases.

An observational frame of reference, often referred to as a physical frame of reference, a frame of reference, or simply a frame, is a physical concept related to an observer and the observer's state of motion. Here we adopt the view expressed by Kumar and Barve: an observational frame of reference is characterized only by its state of motion.[25] However, there is a lack of unanimity on this point. In special relativity, the distinction is sometimes made between an observer and a frame. According to this view, a frame is an observer plus a coordinate lattice constructed to be an orthonormal right-handed set of spacelike vectors perpendicular to a timelike vector. See Doran.[26] This restricted view is not used here, and is not universally adopted even in discussions of relativity.[27][28] In general relativity the use of general coordinate systems is common (see, for example, the Schwarzschild solution for the gravitational field outside an isolated sphere[29]).

There are two types of observational reference frame: inertial and non-inertial. An inertial frame of reference is defined as one in which all laws of physics take on their simplest form. In special relativity these frames are related by Lorentz transformations, which are parametrized by rapidity. In Newtonian mechanics, a more restricted definition requires only that Newton's first law holds true; that is, a Newtonian inertial frame is one in which a free particle travels in a straight line at constant speed, or is at rest. These frames are related by Galilean transformations. These relativistic and Newtonian transformations are expressed in spaces of general dimension in terms of representations of the Poincaré group and of the Galilean group.

In contrast to the inertial frame, a non-inertial frame of reference is one in which fictitious forces must be invoked to explain observations. An example is an observational frame of reference centered at a point on the Earth's surface. This frame of reference orbits around the center of the Earth, which introduces the fictitious forces known as the Coriolis force, centrifugal force, and gravitational force. (All of these forces including gravity disappear in a truly inertial reference frame, which is one of free-fall.)

Measurement apparatus

A further aspect of a frame of reference is the role of the measurement apparatus (for example, clocks and rods) attached to the frame (see Norton quote above). This question is not addressed in this article, and is of particular interest in quantum mechanics, where the relation between observer and measurement is still under discussion (see measurement problem).

In physics experiments, the frame of reference in which the laboratory measurement devices are at rest is usually referred to as the laboratory frame or simply "lab frame." An example would be the frame in which the detectors for a particle accelerator are at rest. The lab frame in some experiments is an inertial frame, but it is not required to be (for example the laboratory on the surface of the Earth in many physics experiments is not inertial). In particle physics experiments, it is often useful to transform energies and momenta of particles from the lab frame where they are measured, to the center of momentum frame ("COM frame") in which calculations are sometimes simplified, since potentially all kinetic energy still present in the COM frame may be used for making new particles.
In this connection it may be noted that the clocks and rods often used in thought experiments to describe observers' measurement equipment are, in practice, replaced by a much more complicated and indirect metrology that is connected to the nature of the vacuum, and uses atomic clocks that operate according to the standard model and that must be corrected for gravitational time dilation.[30] (See second, meter and kilogram). In fact, Einstein felt that clocks and rods were merely expedient measuring devices and that they should be replaced by more fundamental entities based upon, for example, atoms and molecules.[31]

Examples of inertial frames of reference

Simple example

Figure 1: Two cars moving at different but constant velocities observed from stationary inertial frame S attached to the road and moving inertial frame S' attached to the first car.

Consider a situation common in everyday life. Two cars travel along a road, both moving at constant velocities. See Figure 1. At some particular moment, they are separated by 200 metres. The car in front is travelling at 22 metres per second and the car behind is travelling at 30 metres per second. If we want to find out how long it will take the second car to catch up with the first, there are three obvious "frames of reference" that we could choose.

First, we could observe the two cars from the side of the road. We define our "frame of reference" S as follows. We stand on the side of the road and start a stop-clock at the exact moment that the second car passes us, which happens to be when they are a distance d = 200 m apart. Since neither of the cars is accelerating, we can determine their positions by the following formulas, where $x_1(t)$ is the position in meters of car one after time t seconds and $x_2(t)$ is the position of car two after time t.

$x_1(t)= d + v_1 t = 200\ + \ 22t\ ; \quad x_2(t)= v_2 t = 30t$

Notice that these formulas predict at t = 0 s the first car is 200 m down the road and the second car is right beside us, as expected. We want to find the time at which $x_1=x_2$. Therefore we set $x_1=x_2$ and solve for $t$, that is:

$200 + 22 t = 30t \quad$ $8t = 200 \quad$ $t = 25 \quad \mathrm{seconds}$

Alternatively, we could choose a frame of reference S' situated in the first car. In this case, the first car is stationary and the second car is approaching from behind at a speed of v2 − v1 = 8 m / s. In order to catch up to the first car, it will take a time of d /( v2 − v1) = 200 / 8 s, that is, 25 seconds, as before. Note how much easier the problem becomes by choosing a suitable frame of reference. The third possible frame of reference would be attached to the second car. That example resembles the case just discussed, except the second car is stationary and the first car moves backward towards it at 8 m / s.

It would have been possible to choose a rotating, accelerating frame of reference, moving in a complicated manner, but this would have served to complicate the problem unnecessarily. It is also necessary to note that one is able to convert measurements made in one coordinate system to another. For example, suppose that your watch is running five minutes fast compared to the local standard time. If you know that this is the case, when somebody asks you what time it is, you are able to deduct five minutes from the time displayed on your watch in order to obtain the correct time.
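The catch-up arithmetic in the two-car example above is trivial to check in code; here is a minimal Python sketch (values as in Figure 1):

```python
# Frame S (roadside observer): solve x1(t) == x2(t), i.e. d + v1*t == v2*t
d, v1, v2 = 200.0, 22.0, 30.0   # gap (m) and speeds (m/s)
t_catch = d / (v2 - v1)
print(t_catch)                  # 25.0 seconds

# Frame S' (attached to the first car): the same 200 m gap is closed
# at the relative speed v2 - v1 = 8 m/s
print(d / 8.0)                  # 25.0 seconds again
```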
As these examples show, the measurements that an observer makes about a system depend on the observer's frame of reference (you might say that the bus arrived at 5 past three, when in fact it arrived at three).

Additional example

Figure 2: Simple-minded frame-of-reference example

For a simple example involving only the orientation of two observers, consider two people standing, facing each other on either side of a north-south street. See Figure 2. A car drives past them heading south. For the person facing east, the car was moving toward the right. However, for the person facing west, the car was moving toward the left. This discrepancy is because the two people used two different frames of reference from which to investigate this system.

For a more complex example involving observers in relative motion, consider Alfred, who is standing on the side of a road watching a car drive past him from left to right. In his frame of reference, Alfred defines the spot where he is standing as the origin, the road as the x-axis and the direction in front of him as the positive y-axis. To him, the car moves along the x axis with some velocity v in the positive x-direction. Alfred's frame of reference is considered an inertial frame of reference because he is not accelerating (ignoring effects such as Earth's rotation and gravity).

Now consider Betsy, the person driving the car. Betsy, in choosing her frame of reference, defines her location as the origin, the direction to her right as the positive x-axis, and the direction in front of her as the positive y-axis. In this frame of reference, it is Betsy who is stationary and the world around her that is moving - for instance, as she drives past Alfred, she observes him moving with velocity v in the negative y-direction. If she is driving north, then north is the positive y-direction; if she turns east, east becomes the positive y-direction.

Finally, as an example of non-inertial observers, assume Candace is accelerating her car. As she passes by him, Alfred measures her acceleration and finds it to be a in the negative x-direction. Assuming Candace's acceleration is constant, what acceleration does Betsy measure? If Betsy's velocity v is constant, she is in an inertial frame of reference, and she will find the acceleration to be the same as Alfred - in her frame of reference, a in the negative y-direction. However, if she is accelerating at rate A in the negative y-direction (in other words, slowing down), she will find Candace's acceleration to be a' = a - A in the negative y-direction - a smaller value than Alfred has measured. Similarly, if she is accelerating at rate A in the positive y-direction (speeding up), she will observe Candace's acceleration as a' = a + A in the negative y-direction - a larger value than Alfred's measurement.

Frames of reference are especially important in special relativity, because when a frame of reference is moving at some significant fraction of the speed of light, then the flow of time in that frame does not necessarily apply in another frame. The speed of light is considered to be the only true constant between moving frames of reference.

Remarks

It is important to note some assumptions made above about the various inertial frames of reference. Newton, for instance, employed universal time, as explained by the following example. Suppose that you own two clocks, which both tick at exactly the same rate. You synchronize them so that they both display exactly the same time.
The two clocks are now separated and one clock is on a fast moving train, traveling at constant velocity towards the other. According to Newton, these two clocks will still tick at the same rate and will both show the same time. Newton says that the rate of time as measured in one frame of reference should be the same as the rate of time in another. That is, there exists a "universal" time and all other times in all other frames of reference will run at the same rate as this universal time irrespective of their position and velocity. This concept of time and simultaneity was later generalized by Einstein in his special theory of relativity (1905), where he developed transformations between inertial frames of reference based upon the universal nature of physical laws and their economy of expression (Lorentz transformations).

It is also important to note that the definition of inertial reference frame can be extended beyond three-dimensional Euclidean space. Newton assumed a Euclidean space, but general relativity uses a more general geometry. As an example of why this is important, let us consider the geometry of an ellipsoid. In this geometry, a "free" particle is defined as one at rest or traveling at constant speed on a geodesic path. Two free particles may begin at the same point on the surface, traveling with the same constant speed in different directions. After a length of time, the two particles collide at the opposite side of the ellipsoid. Both "free" particles traveled with a constant speed, satisfying the definition that no forces were acting. No acceleration occurred and so Newton's first law held true. This means that the particles were in inertial frames of reference. Since no forces were acting, it was the geometry of the situation which caused the two particles to meet each other again. In a similar way, it is now believed that we exist in a four dimensional geometry known as spacetime. It is believed that the curvature of this 4D space is responsible for the way in which two bodies with mass are drawn together even if no forces are acting. This curvature of spacetime replaces the force known as gravity in Newtonian mechanics and special relativity.

Non-inertial frames

Main articles: Fictitious force, Non-inertial frame, and Rotating frame of reference

Here the relation between inertial and non-inertial observational frames of reference is considered. The basic difference between these frames is the need in non-inertial frames for fictitious forces, as described below. An accelerated frame of reference is often delineated as being the "primed" frame, and all variables that are dependent on that frame are notated with primes, e.g. x' , y' , a' . The vector from the origin of an inertial reference frame to the origin of an accelerated reference frame is commonly notated as R. Given a point of interest that exists in both frames, the vector from the inertial origin to the point is called r, and the vector from the accelerated origin to the point is called r'. From the geometry of the situation, we get

$\mathbf r = \mathbf R + \mathbf r'$

Taking the first and second derivatives of this, we obtain

$\mathbf v = \mathbf V + \mathbf v'$

$\mathbf a = \mathbf A + \mathbf a'$

where V and A are the velocity and acceleration of the accelerated system with respect to the inertial system and v and a are the velocity and acceleration of the point of interest with respect to the inertial frame.
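As a numeric illustration of the relation $\mathbf a = \mathbf A + \mathbf a'$ for a non-rotating accelerated frame (a sketch with made-up values):

```python
import numpy as np

# Inertial-frame accelerations: the point of interest (a) and the
# accelerated frame's origin (A); a' = a - A is what the moving,
# non-rotating frame measures.
a = np.array([3.0, 0.0, 0.0])   # point of interest, inertial frame
A = np.array([1.0, 0.0, 0.0])   # accelerated frame's origin
a_prime = a - A
print(a_prime)                  # [2. 0. 0.]
assert np.allclose(a, A + a_prime)
```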
These kinematic relations allow transformations between the two coordinate systems; for example, we can now write Newton's second law as

$\mathbf F = m\mathbf a = m\mathbf A + m\mathbf a'$

When there is accelerated motion due to a force being exerted there is manifestation of inertia. If an electric car designed to recharge its battery system when decelerating is switched to braking, the batteries are recharged, illustrating the physical strength of the manifestation of inertia. However, the manifestation of inertia does not prevent acceleration (or deceleration), for manifestation of inertia occurs in response to change in velocity due to a force. Seen from the perspective of a rotating frame of reference, the manifestation of inertia appears to exert a force (either in the centrifugal direction, or in a direction orthogonal to an object's motion, the Coriolis effect).

A common sort of accelerated reference frame is a frame that is both rotating and translating (an example is a frame of reference attached to a CD which is playing while the player is carried). This arrangement leads to the equation (see Fictitious force for a derivation):

$\mathbf a = \mathbf a' + \dot{\boldsymbol\omega} \times \mathbf r' + 2\boldsymbol\omega \times \mathbf v' + \boldsymbol\omega \times (\boldsymbol\omega \times \mathbf r') + \mathbf A_0$

or, to solve for the acceleration in the accelerated frame,

$\mathbf a' = \mathbf a - \dot{\boldsymbol\omega} \times \mathbf r' - 2\boldsymbol\omega \times \mathbf v' - \boldsymbol\omega \times (\boldsymbol\omega \times \mathbf r') - \mathbf A_0$

Multiplying through by the mass m gives

$\mathbf F' = \mathbf F_\mathrm{physical} + \mathbf F'_\mathrm{Euler} + \mathbf F'_\mathrm{Coriolis} + \mathbf F'_\mathrm{centrifugal} - m\mathbf A_0$

where

$\mathbf F'_\mathrm{Euler} = -m\dot{\boldsymbol\omega} \times \mathbf r'$ (Euler force)

$\mathbf F'_\mathrm{Coriolis} = -2m\boldsymbol\omega \times \mathbf v'$ (Coriolis force)

$\mathbf F'_\mathrm{centrifugal} = -m\boldsymbol\omega \times (\boldsymbol\omega \times \mathbf r')=m(\omega^2 \mathbf r'- (\boldsymbol\omega \cdot \mathbf r')\boldsymbol\omega)$ (centrifugal force)

Notes

1. The distinction between macroscopic and microscopic frames shows up, for example, in electromagnetism where constitutive relations of various time and length scales are used to determine the current and charge densities entering Maxwell's equations. See, for example, Kurt Edmund Oughstun (2006). Electromagnetic and Optical Pulse Propagation 1: Spectral Representations in Temporally Dispersive Media. Springer. p. 165. ISBN 0-387-34599-X. These distinctions also appear in thermodynamics. See Paul McEvoy (2002). Classical Theory. MicroAnalytix. p. 205. ISBN 1-930832-02-8.

2. In very general terms, a coordinate system is a set of arcs xi = xi (t) in a complex Lie group; see Lev Semenovich Pontri͡agin. L.S. Pontryagin: Selected Works Vol. 2: Topological Groups (3rd ed.). Gordon and Breach. p. 429. ISBN 2-88124-133-6. Less abstractly, a coordinate system in a space of n dimensions is defined in terms of a basis set of vectors {e1, e2, … en}; see Edoardo Sernesi, J. Montaldi (1993). Linear Algebra: A Geometric Approach. CRC Press. p. 95. ISBN 0-412-40680-2. As such, the coordinate system is a mathematical construct, a language, that may be related to motion, but has no necessary connection to motion.

3. J X Zheng-Johansson and Per-Ivar Johansson (2006). Unification of Classical, Quantum and Relativistic Mechanics and of the Four Forces. Nova Publishers. p. 13.
ISBN 1-59454-260-0. 4. Jean Salençon, Stephen Lyle (2001). Handbook of Continuum Mechanics: General Concepts, Thermoelasticity. Springer. p. 9. ISBN 3-540-41443-6. 5. Patrick Cornille (Akhlesh Lakhtakia, editor) (1993). Essays on the Formal Aspects of Electromagnetic Theory. World Scientific. p. 149. ISBN 981-02-0854-5. 6. Graham Nerlich (1994). What Spacetime Explains: Metaphysical essays on space and time. Cambridge University Press. p. 64. ISBN 0-521-45261-9. 7. John D. Norton (1993). General covariance and the foundations of general relativity: eight decades of dispute, Rep. Prog. Phys., 56, pp. 835-7. 8. Katherine Brading & Elena Castellani (2003). Symmetries in Physics: Philosophical Reflections. Cambridge University Press. p. 417. ISBN 0-521-82137-1. 9. Oliver Davis Johns (2005). Analytical Mechanics for Relativity and Quantum Mechanics. Oxford University Press. Chapter 16. ISBN 0-19-856726-X. 10. Donald T Greenwood (1997). Classical dynamics (Reprint of 1977 edition by Prentice-Hall ed.). Courier Dover Publications. p. 313. ISBN 0-486-69690-1. 11. Matthew A. Trump & W. C. Schieve (1999). Classical Relativistic Many-Body Dynamics. Springer. p. 99. ISBN 0-7923-5737-X. 12. A S Kompaneyets (2003). Theoretical Physics (Reprint of the 1962 2nd Edition ed.). Courier Dover Publications. p. 118. ISBN 0-486-49532-9. 13. M Srednicki (2007). Quantum Field Theory. Cambridge University Press. Chapter 4. ISBN 978-0-521-86449-7. 14. Carlo Rovelli (2004). Quantum Gravity. Cambridge University Press. p. 98 ff. ISBN 0-521-83733-2. 15. William Barker & Roger Howe (2008). Continuous symmetry: from Euclid to Klein. American Mathematical Society. p. 18 ff. ISBN 0-8218-3900-4. 16. Arlan Ramsay & Robert D. Richtmyer (1995). Introduction to Hyperbolic Geometry. Springer. p. 11. ISBN 0-387-94339-0. 17. According to Hawking and Ellis: "A manifold is a space locally similar to Euclidean space in that it can be covered by coordinate patches. This structure allows differentiation to be defined, but does not distinguish between different coordinate systems. Thus, the only concepts defined by the manifold structure are those that are independent of the choice of a coordinate system." Stephen W. Hawking & George Francis Rayner Ellis (1973). The Large Scale Structure of Space-Time. Cambridge University Press. p. 11. ISBN 0-521-09906-4.  A mathematical definition is: M is called an n-dimensional manifold if each point of M is contained in an open set that is homeomorphic to an open set in Euclidean n-dimensional space. 18. Shigeyuki Morita, Teruko Nagase, Katsumi Nomizu (2001). Geometry of Differential Forms. American Mathematical Society Bookstore. p. 12. ISBN 0-8218-1045-6. 19. Granino Arthur Korn, Theresa M. Korn (2000). Mathematical handbook for scientists and engineers : definitions, theorems, and formulas for reference and review. Courier Dover Publications. p. 169. ISBN 0-486-41147-8. 20. See Encarta definition. Archived 2009-10-31. 21. Katsu Yamane (2004). Simulating and Generating Motions of Human Figures. Springer. pp. 12–13. ISBN 3-540-20317-6. 22. Achilleus Papapetrou (1974). Lectures on General Relativity. Springer. p. 5. ISBN 90-277-0540-2. 23. Wilford Zdunkowski & Andreas Bott (2003). Dynamics of the Atmosphere. Cambridge University Press. p. 84. ISBN 0-521-00666-X. 24. A. I. Borisenko, I. E. Tarapov, Richard A. Silverman (1979). Vector and Tensor Analysis with Applications. Courier Dover Publications. p. 86. ISBN 0-486-63833-2. 25. See Arvind Kumar & Shrish Barve (2003). 
How and Why in Basic Mechanics. Orient Longman. p. 115. ISBN 81-7371-420-7.
26. Chris Doran & Anthony Lasenby (2003). Geometric Algebra for Physicists. Cambridge University Press. §5.2.2, p. 133. ISBN 978-0-521-71595-9.
27. For example, Møller states: "Instead of Cartesian coordinates we can obviously just as well employ general curvilinear coordinates for the fixation of points in physical space.…we shall now introduce general "curvilinear" coordinates $x^i$ in four-space…." C. Møller (1952). The Theory of Relativity. Oxford University Press. p. 222 and p. 233.
28. A. P. Lightman, W. H. Press, R. H. Price & S. A. Teukolsky (1975). Problem Book in Relativity and Gravitation. Princeton University Press. p. 15. ISBN 0-691-08162-X.
29. Richard L Faber (1983). Differential Geometry and Relativity Theory: an introduction. CRC Press. p. 211. ISBN 0-8247-1749-X.
30. Richard Wolfson (2003). Simply Einstein. W W Norton & Co. p. 216. ISBN 0-393-05154-4.
31. See Guido Rizzi, Matteo Luca Ruggiero (2003). Relativity in rotating frames. Springer. p. 33. ISBN 1-4020-1805-3.
http://physics.stackexchange.com/questions/tagged/probability
# Tagged Questions

1 answer, 39 views
### Statistical sum of physical quantities in a quantum system
Let $C = A + B$ (statistical sum, so $\mathbb{E}[C] = \mathbb{E}[A] + \mathbb{E}[B]$), and let $p(A = a) = 1$. Are the following true? $\mathbb{E}[C^2] = a^2 + 2a\mathbb{E}[B] + \mathbb{E}[B^2]$ ...

2 answers, 82 views
### Independent systems and Lagrangians
Definition 1: The notion of independent systems has a precise meaning in probabilities. It states that the (joint) probability of finding the system ($S_1S_2$) in the configuration ($C_1C_2$) is ...

2 answers, 38 views
### Probability of position in linear shm?
The problem that got me thinking goes like this: Find $dp/dx$ where $p$ is the probability of finding a body at a random instant of time undergoing linear shm according to $x=a\sin(\omega t)$. ...

2 answers, 144 views
### Determinism, classical probabilities, and/or quantum mechanics?
[I]f you want a universe with certain very generic properties, you seem forced to one of three choices: (1) determinism, (2) classical probabilities, or (3) quantum mechanics. [My emphasis.] ...

0 answers, 48 views
### Rolling a perfect square [closed]
If a 20-sided fair die with sides distinctly numbered 1 through 20 is rolled, the probability that the answer is a perfect square can be expressed as $a/b$ where $a$ and $b$ are coprime positive ...

3 answers, 133 views
### Does entropy alter the probability of independent events?
So I have taken an introductory level quantum physics class and am currently taking an introductory level probability class. Then this simple scenario came up: Given a fair coin that has been tossed 100 ...

1 answer, 84 views
### Probabilistic vs Statistical interpretation of Double Slit experiment
Why is it assumed that the results seen in the double slit experiment are probabilistic and not just a statistical result of some unknown variable or set of variables within the system?

0 answers, 23 views
### Probability in Radiation Physics for electrons [closed]
The total linear attenuation coefficient for 10-keV electrons in water is 77.6 μm^-1, partitioned as follows: Elastic scattering 38.2 μm^-1 Ionization 37.4 ...

6 answers, 353 views
### Probability amplitude in Layman's Terms
I am basically a Computer Programmer, but Physics has always fascinated and often baffled me. I have tried to understand probability density in Quantum Mechanics for many many years. What I ...

3 answers, 139 views
### Operators explanation and momentum operator in QM
I know and understand why the equation below holds. But I am new to the operator thing in QM and would need some explanation on this. \langle x \rangle = \int\limits_{-\infty}^\infty |\Psi|^2 x \, ...

1 answer, 358 views
### 't Hooft for laypersons
I have looked at some of 't Hooft's recent papers and, unfortunately, they are well beyond my current level of comprehension. The same holds for the discussions that took place on this website. (See, ...

0 answers, 47 views
### Classical analogy of particle decay
Is there some classical system that mimics the decay law for particles $N(t)=N(0)e^{-(Q_1+Q_2..)t}$ with multiple decay modes? To help me visualize this process. Something like a barrel of water with ...

3 answers, 183 views
### Is "entanglement" unique to quantum systems?
My text shows (sections 0.2 and 0.3) that the joint "state space" of a system composed of two subsystems with $k$ and $l$ "bits of information", respectively, requires $kl$ bits to fully describe it. ...

0 answers, 54 views
### Equivalence of simple formulations of qubit entanglement
I'm reading some very elementary treatments of quantum computation and am unsure about the correspondence among "definitions" of qubit entanglement. One definition states that (1) the bits of a ...

3 answers, 166 views
### Could quantum mechanics work without the Born rule?
Slightly inspired by this question about the historical origins of the Born rule, I wondered whether quantum mechanics could still work without the Born rule. I realize it's one of the most ...

3 answers, 215 views
### Normalisation factor $\psi_0$ for wave function $\psi = \psi_0 \sin(kx-\omega t)$
I know that if I integrate the probability $|\psi|^2$ over a whole volume $V$ I am supposed to get 1. This equation describes this. $$\int \limits^{}_{V} \left|\psi \right|^2 \, \textrm{d} V = 1\\$$ ...

2 answers, 320 views
### Amplitude of probability amplitude. Which one is it?
QM begins with Born's rule, which states that the probability $P$ is equal to the modulus squared of the probability amplitude $\psi$: $$P = \left|\psi\right|^2.$$ If I write down a wave function like this ...

0 answers, 171 views
### Probability and probability amplitude
What made scientists believe that we should calculate the probability $P$ as $P = \left|\psi\right|^2$ in quantum mechanics? Was it the double slit experiment? How? Is there anywhere in the ...

2 answers, 148 views
### Why does the amplitude of a ripple tell us that it is a particle?
The quote below is from Matt Strassler's blog: a particle is a ripple with many crests and troughs; its amplitude, relative to its overall length, is what tells you that it is a single ...

3 answers, 253 views
### Probability and probability amplitude
The equation: $$P = |A|^2$$ appears in many books and lectures, where $P$ is a "probability" and $A$ is an "amplitude" or "probability amplitude". What led physicists to believe that the square of ...

1 answer, 37 views
### Is there an equivalent of a Galton box for a converging probability?
This is a question about probability. The Galton box (or quincunx) uses the physical process of shot moving down a pin-board to demonstrate the central limit theorem, e.g.: So I am interested in events ...

1 answer, 162 views
### Parallel universe and Infinite monkey theorem [closed]
Is the Infinite monkey theorem helpful for determining the existence of the very same universe as ours somewhere else?

2 answers, 122 views
### Computing microstate probabilities based on Boltzmann distribution for chemical systems - Is it rigorous?
One approach to predicting the folded structure of a polymer (DNA, RNA, protein) is to compute the probability that any particular part of the polymer $x_i$ is "paired" with another part of the ...

1 answer, 82 views
### Basic question about probability and measurements
Say I have a Galton box, i.e. a ball dropping on a row of solid bodies. Now I want to calculate the probability distribution of the movement of the ball based on the properties of the body (case A). ...

0 answers, 119 views
### Electron hopping among molecules - Marcus equation
I'm running out of professors to talk to, and I need to clarify a couple of things for the sake of making a realistic model of electron travel through a mesh. This is about calculations of electron ...

1 answer, 205 views
### Expressions for canonical partition function and probabilities $p(E_i)$
Given an atom with 4 allowed states corresponding to the energy levels $E_1 = 0$, $E_2 = E$, and $E_3 = 2E$ with degeneracies 1, 1, and 2 respectively. How do I find the expressions for the ...

1 answer, 124 views
### Probability in Quantum Mechanics
Do you need to take a probability/statistics course for Quantum Mechanics, or is the probability in quantum mechanics so rudimentary that you can just learn it along the way? I'm in doubt as to ...

0 answers, 59 views
### How to explain Tsirelson's inequality using extended probabilities?
How to explain Tsirelson's inequality using extended probabilities? Some people have tried explaining the Bell inequalities using extended probabilities. For instance, a pair of entangled photons ...

2 answers, 207 views
### What is the physical interpretation of the density matrix in a double continuous basis $|\alpha\rangle$, $|\beta\rangle$?
(a) Any textbook gives the interpretation of the density matrix in a single continuous basis $|\alpha\rangle$: The diagonal elements $\rho(\alpha, \alpha) = \langle \alpha |\hat{\rho}| \alpha$ ...

4 answers, 303 views
### If wave packets spread, why don't objects disappear?
If you have an electron moving in empty space, it will be represented by a wave packet. But packets can spread over time, that is, their width increases, with its uncertainty in position increasing. ...

1 answer, 72 views
### How to solve the transmission probability in an evolution of a quantum system
I've just been learning the evolution of quantum systems for about a week, and our homework is sometimes something like this. I don't quite have any idea of how to solve this kind of problem. Can you help ...

1 answer, 175 views
### Boundary conditions from single-valuedness of spherical wavefunctions
This question is a follow-up to David Bar Moshe's answer to my earlier question on the Aharanov-Bohm effect and flux-quantization. What I forgot was that it is not the wavefunction that must be ...

0 answers, 45 views
### Modeling the probability of a photodiode measuring photons targeted at a neighbor
In current digital cameras, sensors are arrays of photodiodes which "transform" photon energy to electrons. I am aware that the probability of a photon generating an electron is modeled by a Poisson ...

1 answer, 179 views
### Simulating quantum network of harmonic oscillators
Let's say that I have a system of $n$ particles $p_1,\ldots,p_n\in\mathbb{R}^3$ (where $n$ here is on the order of 10,000). Furthermore, suppose we have a graph $G=(V,E)$ describing some network, ...

2 answers, 117 views
### Probability wave speed of dispersion and interference
I'm a layperson learning about quantum mechanics and probability waves. My understanding is that the probability wave for the position of a particle disperses throughout all of the universe. I have ...

1 answer, 179 views
### How to determine the probabilities for a cuboid die?
Imagine we take a cuboid with sides $a, b$ and $c$ and throw it like a usual die. Is there a way to determine the probabilities of the different outcomes $P_{ab}, P_{bc}$ and $P_{ac}$? With $ab$, ...

3 answers, 262 views
### Propagators and Probabilities in the Heisenberg Picture
I'm trying to understand why $$\Bigl|\langle0|\phi(x)\phi(y)|0\rangle\Bigr|^2$$ is the probability for a particle created at $y$ to propagate to $x$, where $\phi$ is the Klein-Gordon field. What's ...

1 answer, 90 views
### How to interpret a negative failure rate?
In statistical engineering the "hazard rate" of a distribution is defined as: $$r(x)=\frac{f(x)}{1-F(x)}$$ where $f(x)$ and $F(x)$ are the PDF and CDF. Basically $r(x)$ is the odds that, having ...

2 answers, 101 views
### Diffusion of probability amplitudes
Let's say I have a probability amplitude $\psi:\Sigma\rightarrow\mathbb{C}$ for some domain $\Sigma$ (so, $\psi$ satisfies $\int_\Sigma |\psi|^2=1$). Is there a way to use $\psi$ as initial ...

2 answers, 91 views
### Similarity of probability amplitude functions
Let's say I have two probability amplitude functions given by $\psi_1$ and $\psi_2$. That is, $\psi_i:\Sigma\rightarrow\mathbb{C}$ for some domain $\Sigma$ with $\int_\Sigma|\psi_i|^2=1$ for ...

2 answers, 231 views
### Mathematical probabilistic interpretation of probability amplitude
As a warning, I come from an "applied math" background with next to no knowledge of physics. That said, here's my question: I'm looking at the possibility of using probability amplitude functions to ...

4 answers, 244 views
### Are probabilities really tangible physical real numbers?
Probabilities are usually considered to be a real number between 0 and 1. A real number has an infinite decimal expansion. Are probabilities really real numbers? Is the infinite decimal expansion ...

2 answers, 451 views
### How do I figure out the probability of finding a particle between two barriers?
Given a delta function $\alpha\delta(x+a)$ and an infinite energy potential barrier at $[0,\infty)$, calculate the scattered state, calculate the probability of reflection as a function of ...

1 answer, 128 views
### Probability using Klein-Gordon Equation
I read somewhere that the Klein-Gordon equation doesn't allow for conservation of probability. Can someone prove this mathematically?

1 answer, 174 views
### Probability, quantum physics, and why (can't it/does it) apply to macroscale events?
Quantum physics dictates that there are probabilities that determine the outcome of an event, i.e.: the probability of a quark passing through a wall is X, due to the size of the quark in comparison to ...

2 answers, 159 views
### Average Neighbouring Impurity Separation in a Random 1D chain [closed]
I have a finite and discrete 1D chain (edit: linear chain, i.e. a straight line) of atoms, with unit separation, with a set number of impurities randomly distributed in the place of these atoms in ...

3 answers, 281 views
### Probability vs. degree of belief in facts of nature ("Plausibility")
I just came across a line in a paper: "Assume the probability that a Lagrangian parameter lies between $a$ and $a + da$ is $dP(a) =$ [...]." This reminded me again of my single biggest qualm I have ...

1 answer, 68 views
### Computing an average escape distance for a particle
Somewhere in a two-dimensional convex bulk of particles (pic related), at a random position a reaction takes place and a particle is sent out in a random direction with a constant velocity $v$. What ...

2 answers, 58 views
### What is the minimal set of expectation values I need in a statistical model?
At least if $\vec v$ is really only a one dimensional parameter, measuring all the moments $\langle v^n \rangle_f$ seems to give me all the information to compute $\langle A \rangle_f$ with $A(v)$ ...

2 answers, 513 views
### Combining multiple theories with 5 $\sigma$ confidence level
Sadly I am not a physicist, but I am interested in the topic. Please have mercy with me if you find my question trivial or dumb. Here it comes: As far as I understand, physicists express their certainty ...
http://mathoverflow.net/questions/735?sort=oldest
## When is a map given by a word surjective?

Let w(x,y) be a word in x and y. Let x and y now vary in SL_n(K), where K is a field. (Assume, if you wish, that K is an algebraically closed field of characteristic bigger than a constant.) I would like to know for which words w the map y -> w(x,y) isn't surjective (or even dominant - that is, "almost surjective") for x generic.

It is clear, for example, that the map is surjective for w(x,y) = xy, and that it isn't surjective for w(x,y) = y x y^{-1}, or for w(x,y) = y x^n y^{-1}, n an integer: all elements of the image of y -> y x^n y^{-1} lie in the same conjugacy class. A moment's thought (thanks, Philipp!) shows that w(x,y) = x y x^n y^{-1} isn't surjective either: its image is just x · im(y -> y x^n y^{-1}), and, as we just said, y -> y x^n y^{-1} isn't surjective.

I would like to know if the only words w for which the map isn't surjective for x generic are the w's of the form w(x,y) = x^a v(x,y) x^b (v(x,y))^{-1} x^c, where v is some word and a, b, c are some integers. (This seems to me a sensible guess, though I would actually be quite glad if it weren't true.)

-

1 To make an obvious comment, y x^2 y^{-1} is also not surjective. – David Speyer Oct 16 2009 at 14:33

I would like to know the answer to this as well! – Andrew Critch Oct 16 2009 at 15:26

Thanks, David - I've changed the question above. – H A Helfgott Oct 16 2009 at 16:54

## 4 Answers

This paper http://front.math.ucdavis.edu/math.GR/0211302 seems related, although it asks a slightly different question. -

Thanks for the link, but the question seems pretty different. There it's clear what the answer ought to be - here I am not so positive of my guess. – H A Helfgott Oct 16 2009 at 16:55

Another paper treating a similarly-looking question is: N. Gordeev and U. Rehmann, On multicommutators for simple algebraic groups, J. Algebra 245 (2001), 275-296 mathematik.uni-bielefeld.de/LAG/man/056.html dx.doi.org/10.1006/jabr.2001.8929 . It is proved there that for a simple algebraic group $G$ of sufficiently large Coxeter number, the map $G^{n+1} \to G^n, (g,g_1, ..., g_n) \mapsto ([g,g_1], ..., [g,g_n])$ is dominant. But, again, probably the similarity is superficial. – Pasha Zusmanovich Sep 18 2010 at 16:29

I'm going to say some things which might be either (a) obvious, (b) wrong, or (c) useless. (Or some combination!) You could rephrase the question by asking that the map from SL_k x SL_k to SL_k x SL_k given by (x,y) -> (x, w(x,y)) is dominant. This seems like an improvement, because now you're talking about a map between two spaces of equal dimension. If the corresponding map on the tangent spaces at Id x Id is an isomorphism, then certainly the map is dominant. (And this map is easy to compute, given w -- we just replace the product in w by the corresponding sum of tangent vectors.) The map is dominant iff the map on tangent spaces is generically an isomorphism. I don't know how to check this, but my feeling is that it will be easier for this to fail than in the given conjecture. -

This is a useful rephrasing - I've been thinking along similar lines. – H A Helfgott Oct 16 2009 at 19:38

At least for n=2, the map $y \mapsto xyx^{-1}y^{-1}$ is not surjective for generic x.
Let us prove that the map is not surjective for diagonalizable x. Thus, it is not surjective for generic x. Suppose that x is diagonalizable and let $a=(a_{ij})$ be a matrix in $SL_2(K)$. We want to solve the equation $xyx^{-1}y^{-1}=a$. By conjugating with an appropriate matrix, we may wlog assume that $x=\mathrm{diag}(b,b^{-1})$ is a diagonal matrix. Let $y=(y_{ij})$. A short calculation shows that the diagonal entries in the matrix $xyx^{-1}y^{-1}$ are
$$a_{11}=y_{11}y_{22}-b^2y_{12}y_{21}=1+(1-b^2)y_{12}y_{21},$$
$$a_{22}=y_{11}y_{22}-b^{-2}y_{12}y_{21}=1+(1-b^{-2})y_{12}y_{21}.$$
This means that the image contains only matrices $a$ with $(a_{11}-1)/(a_{22}-1)=-b^2$. But $xyx^{-1}y^{-1}$ does not have the form $vx^kv^{-1}$. -

1 Philipp, this is very good - however, doesn't this follow directly from the examples we already know? Since f: y -> y x^{-1} y^{-1} isn't surjective, g: y -> x y x^{-1} y^{-1} isn't surjective either: im(g) = x im(f). So, it's essentially the same example. I'll change the phrasing of the question. Thanks. – H A Helfgott Oct 17 2009 at 19:00

Right. It follows quite simply from the above examples. – Philipp Lampe Oct 17 2009 at 19:11

The discussion has now moved to http://mathoverflow.net/questions/2082/surjective-maps-given-by-words-redux -
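For readers who want to reproduce Philipp's computation, here is a small verification sketch of my own (not part of the original thread) using a computer algebra system:

```python
import sympy as sp

b, y11, y12, y21, y22 = sp.symbols('b y11 y12 y21 y22')
x = sp.Matrix([[b, 0], [0, 1/b]])
y = sp.Matrix([[y11, y12], [y21, y22]])
yinv = sp.Matrix([[y22, -y12], [-y21, y11]])  # inverse of y when det(y) = 1

c = sp.expand(x * y * x.inv() * yinv)         # the commutator x y x^{-1} y^{-1}
a11 = c[0, 0].subs(y11*y22, 1 + y12*y21)      # impose det(y) = 1
a22 = c[1, 1].subs(y11*y22, 1 + y12*y21)

print(sp.expand(a11 - 1))                     # (1 - b**2) * y12 * y21, expanded
print(sp.simplify((a11 - 1) / (a22 - 1)))     # -b**2, as claimed
```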
http://physics.stackexchange.com/questions/6411/2d-cfts-and-permutation-orbifolds
# 2D CFTs and permutation orbifolds

Suppose we have 2 systems with the same partition function; does this mean the 2 systems are the same? For example, in 2D CFTs, would the equality of two partition functions imply that the underlying theories are the same (in the CFT sense, I mean same central charge, same OPE, etc.)?

Suppose we take the $\text{N}^{th}$ symmetric product of a mother CFT with a partition function $Z(\tau,\bar{\tau})$ and then I orbifold by the permutation group $S_N$ or any cyclic subgroup $\mathbb{Z}_N$ to get the partition function of the permutation orbifold $\mathbf{Z}$. Now suppose we find another system with the same partition function $\mathbf{Z}$; does this mean that this system should be equivalent to the permutation orbifold? Thank you all in advance! -

## 1 Answer

The partition function tells you about the spectrum of dimensions and degeneracies. It doesn't tell you about OPEs (and higher-point functions), so it doesn't determine the theory uniquely. The problem is somewhat reminiscent of the question "can you hear the shape of a drum?", for which the answer is negative: you can have drums of different shapes that produce the same harmonics. That would be analogous to two different CFTs that have the same spectrum and hence partition function, but differ in more detailed properties. -

1 BTW the most radical and trivial example are supersymmetric and other special CFTs whose $Z$ cancels, $Z(\tau,\bar\tau)=0$. There are many of them, so the value of $Z$ can't be uniquely determining the CFT. Well, it would be more interesting to quote examples with the same nonzero $Z$. ;-) – Luboš Motl Mar 6 '11 at 8:35

The analogy with drums is a bit misleading. The statement does hold under some further assumptions and it's definitely not obvious that at least some class of CFTs wouldn't in fact be completely determined by the spectrum. If you think it is, please update your answer with the reason :) – Marek Mar 6 '11 at 10:02

@Marek, it is only an analogy; some aspects of any analogy will not really be analogous, otherwise you'd have an isomorphism (or duality, I guess). To your question, I am not aware of a set of conditions under which two CFTs are guaranteed to be identical. – user566 Mar 6 '11 at 18:35

@Moshe: there are still better and worse analogies. I just wanted to know which one is yours :) – Marek Mar 6 '11 at 23:56

@Marek, mine is the excellent kind of course, distinguished by its insight and clarity. Or at least that's how it resonates with me... – user566 Mar 7 '11 at 0:52
http://math.stackexchange.com/questions/275669/prove-that-fx-x-is-riemann-integrable-on-0-1-with-int-01xdx-frac1/275682
# Prove that $f(x)=x$ is Riemann-integrable on $[0,1]$ with $\int_0^1 x\,dx=\frac{1}{2}$

I think that the best thing to do is prove that the upper and lower sums are equal in the limit. Since $f$ is monotonic, I know that for any partition $\{x_0,\dots,x_N\}$ the upper and lower sums are given by $$U=\sum_{i=1}^Nx_i(x_i-x_{i-1})$$ and $$L=\sum_{i=1}^Nx_{i-1}(x_i-x_{i-1})$$ respectively. I considered showing that the limit of $U-L$ as $N\rightarrow\infty$ is $0$, hoping that I would get some kind of telescoping situation, but that doesn't seem to be happening: $$U-L=\sum_{i=1}^N(x_i-x_{i-1})(x_i-x_{i-1})$$ I can't see a nice way to show that that is going to be less than any $\epsilon$. Does this seem like the right approach? Am I missing something? -

Since the function is bounded and continuous on $[0,1]$, it is Riemann integrable. – Mhenni Benghorbal Jan 11 at 6:48

Hint: because the $N$ points $x_i$ are $1/N$ units apart (modulo an off-by-one error), the product in your sum will be roughly $1/N^2$ and you'll be summing $N$ terms of that size. What will the result be, and can you formalize that? – Steven Stadnicki Jan 11 at 6:48

1 @MhenniBenghorbal I know that it is. I'm meant to show that it's true from the definition of the Riemann integral. – crf Jan 11 at 6:49

@StevenStadnicki can I specify that that's the case about the partition? Don't I need to show that it holds for any partition? – crf Jan 11 at 6:50

@crf Usually, yes, you need to work with an arbitrary partition. However, if you use a partition where the points are evenly spaced $1/N$ apart, you can use the formula $\sum_{k=1}^n k = \frac{n(n+1)}{2}$ to get a nice formula for the upper and lower sums for this sort of partition. Let $N \to \infty$ and use the fact that upper sums are upper bounds for lower sums, and lower sums are lower bounds for upper sums. – proximal Jan 11 at 7:05

## 2 Answers

I turned my comment into an answer. Let $P_N$ be the partition $\{0,\frac{1}{N},\frac{2}{N},...,1\}$, i.e. each point is evenly spaced with distance $1/N$. The upper and lower sums for such a partition are: $$U(P_N,f) = \sum_{i=1}^N \sup_{[\frac{i-1}{N},\frac{i}{N}]}x \cdot \Delta x_i = \sum_{i=1}^N \frac{i}{N} \cdot \frac{1}{N} = \frac{1}{N^2}\frac{N(N+1)}{2} = \frac{1}{2} + \frac{1}{2N}$$ $$L(P_N,f) = \sum_{i=1}^N \inf_{[\frac{i-1}{N},\frac{i}{N}]}x \cdot \Delta x_i = \sum_{i=1}^N \frac{i-1}{N} \cdot \frac{1}{N} = \frac{1}{N^2}\frac{N(N-1)}{2} = \frac{1}{2} - \frac{1}{2N}.$$ Let $N \to \infty$. Both $U(P_N,f)$ and $L(P_N,f)$ go to $\frac{1}{2}$, from above and below respectively. Since upper sums are upper bounds for lower sums, every lower sum is bounded above by $\frac{1}{2}$. Since there are lower sums that are arbitrarily close to $\frac{1}{2}$ (take $N$ large enough), it follows that $\frac{1}{2}$ is the least upper bound for the lower sums. Similarly we conclude that $\frac{1}{2}$ is the greatest lower bound for the upper sums. This shows that $f(x)$ is Riemann integrable on $[0,1]$ with integral $\frac{1}{2}$. -

This looks good, but I'm thinking of the case where we have a partition $P$ which includes an irrational number... Then there is no $P_N$ which is a refinement of $P$. Does that matter? – crf Jan 11 at 7:40

It doesn't matter. There's no need to consider refinements at all.
If you're looking to just show integrability and not the value of the integral, then just subtract these two expressions: you'll get $|U-L| = \frac{1}{N} \to 0$, so using the alternate definition of integrability (the difference of the upper and lower sums can be made arbitrarily small using partitions) we get our desired result. – proximal Jan 11 at 7:55

Yeah, reviewed my notes, saw the alternate and much nicer definition of integrability, and now I am happy. I actually suspected that we'd be using that $N(N+1)/2$ business; I was just worried about capturing every partition. But this is great, thank you very much! – crf Jan 13 at 5:38

Using what you have, observe that $$|U - L| \leq \sum_{i=1}^N |x_i - x_{i-1}|^2 \leq \sup_i |x_i - x_{i-1}| \sum_{i=1}^N (x_i - x_{i-1})$$ $$= \sup_i (x_i - x_{i-1})$$ where we have used that the sum on the right is telescoping and equals $1 - 0$. To say that something is Riemann integrable, we just need to show that if the mesh tends to zero this difference tends to zero. Since the term on the right actually is the mesh, we're done. -
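As a quick numeric illustration (my own addition, not from the thread), one can tabulate $U(P_N,f)$ and $L(P_N,f)$ for the uniform partition and watch both converge to $\frac{1}{2}$ with gap exactly $1/N$:

```python
# For the uniform partition P_N of [0, 1], the upper/lower sums of
# f(x) = x should come out as 1/2 + 1/(2N) and 1/2 - 1/(2N).
def upper_lower(N):
    pts = [i / N for i in range(N + 1)]
    U = sum(pts[i] * (pts[i] - pts[i - 1]) for i in range(1, N + 1))
    L = sum(pts[i - 1] * (pts[i] - pts[i - 1]) for i in range(1, N + 1))
    return U, L

for N in (10, 100, 1000):
    U, L = upper_lower(N)
    print(N, round(U, 6), round(L, 6), round(U - L, 6))  # gap is 1/N
```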
http://mathoverflow.net/revisions/93926/list
# Is $R(su_{4})\cong R(so_{6})$?

This is one of the small unsettled questions I had in my senior project. I want to prove that for type $D$, $R(T)$ is a free module over $R(G)$ by finding a basis. I think we should have $R(G)\cong R(g)$ and $R(T)\cong R(h_{g})$, but since $SO_{6}$ is not simply connected this probably does not work, and I have to "ascend" to spin groups, though I do not know how.

Define the representation ring of a Lie algebra to be the formal sums of its characters. It is not hard to show that $$R(su_{4})\cong \mathbb{Z}[x+y+z+w,\;xy+xz+xw+yz+yw+zw,\;xyz+xyw+xzw+yzw]/(xyzw-1)$$ and $$R(h_{su_{4}})\cong \mathbb{Z}[x,y,z,w]/(xyzw-1).$$ A typical basis of $R(h_{su_{4}})$ over $R(su_{4})$ consists of $x^{i}y^{j}z^{k}$, $0\le i\le 3$, $0\le j\le 2$, $0\le k\le 1$.

I proved that the weight lattices of $su_{4}$ and $so_{6}$ are isomorphic, and their Weyl groups are both isomorphic to $S_{4}$. So $R(h_{so_{6}})$ should be a free module over $R(so_{6})$ of rank 24 as well. But I found I could not use this to find a basis for $R(h_{so_{6}})$ over $R(so_{6})$, because we have: $$R(so_{6})\cong \mathbb{Z}[x+y+z+x^{-1}+y^{-1}+z^{-1},\;x^{\frac{1}{2}}y^{\frac{1}{2}}z^{\frac{1}{2}}+x^{\frac{1}{2}}y^{-\frac{1}{2}}z^{-\frac{1}{2}}+x^{-\frac{1}{2}}y^{-\frac{1}{2}}z^{\frac{1}{2}}+x^{-\frac{1}{2}}y^{\frac{1}{2}}z^{-\frac{1}{2}},\;x^{-\frac{1}{2}}y^{-\frac{1}{2}}z^{-\frac{1}{2}}+x^{\frac{1}{2}}y^{-\frac{1}{2}}z^{\frac{1}{2}}+x^{-\frac{1}{2}}y^{\frac{1}{2}}z^{\frac{1}{2}}+x^{\frac{1}{2}}y^{\frac{1}{2}}z^{-\frac{1}{2}}]$$ The first generator is the standard representation with weights $\pm L_{i}$; the second and the third are the spin representations, which one obtains from the Clifford algebra or by "ascending" to the spin group (this can be found in Fulton & Harris, Chapter 23.2). As one commentator noted, I am not clear about the relationship between $R(so_{6})$ and $R(h_{so_{6}})$.

We also have $$R(h_{so_{6}})\cong \mathbb{Z}[x,y,z,x^{-1},y^{-1},z^{-1}]$$ because we know the two diagonal submatrices in $so_{6}$ must be skew-symmetric; from $A+D^{T}=0$ we conclude that $T$ is isomorphic to $S^{1}\times S^{1}\times S^{1}$.

I thought it would be a simple change of variables to prove the two cases are just the same, but I found the isomorphism between $R(so_{6})$ and $R(su_{4})$ does not extend nicely to an isomorphism between $R(h_{so_{6}})$ and $R(h_{su_{4}})$. So I believe I must be confused. My advisor suggested that maybe there is some subtlety in $Spin_{6}$, but I still do not know how to establish an isomorphism or to find the basis right away.
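As a small sanity check on these presentations, here is a sketch of my own (not from the question) verifying with a computer algebra system that each displayed generator is invariant under the relevant Weyl group action: permutations of the weights for $su_4$, and permutations together with an even number of inversions for $so_6$. This checks Weyl-invariance only, not the freeness claim.

```python
import sympy as sp
from itertools import permutations

x, y, z, w = sp.symbols('x y z w', positive=True)

# Claimed generators of R(su_4): the elementary symmetric polynomials.
su4_gens = (x + y + z + w,
            x*y + x*z + x*w + y*z + y*w + z*w,
            x*y*z + x*y*w + x*z*w + y*z*w)

for g in su4_gens:                                 # S_4 permutes x, y, z, w
    for p in permutations((x, y, z, w)):
        h = g.subs(dict(zip((x, y, z, w), p)), simultaneous=True)
        assert sp.simplify(h - g) == 0

# Claimed generators of R(so_6): standard rep and the two half-spin reps,
# with the half-integer exponents written via square roots.
s = sp.sqrt
so6_gens = (x + y + z + 1/x + 1/y + 1/z,
            s(x*y*z) + s(x/(y*z)) + s(z/(x*y)) + s(y/(x*z)),
            1/s(x*y*z) + s(x*z/y) + s(y*z/x) + s(x*y/z))

# Weyl group of type D_3: permute x, y, z and invert an even number of them.
for g in so6_gens:
    for p in permutations((x, y, z)):
        for flips in [(), (0, 1), (0, 2), (1, 2)]:
            img = [1/q if i in flips else q for i, q in enumerate(p)]
            h = g.subs(dict(zip((x, y, z), img)), simultaneous=True)
            assert sp.simplify(h - g) == 0
```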
http://mathhelpforum.com/algebra/146116-write-equation.html
# Thread:

1. ## write an equation

a line parallel to 3x + 2y - 4 = 0 and passing through (2;3). Thank you very much. PS Still confused. Two lines are perpendicular if the product of their slopes is -1. Please explain in detail.

2. Originally Posted by ecdino2
a line parallel to 3x + 2y - 4 = 0 and passing through (2;3). Thank you very much. PS Still confused. Two lines are perpendicular if the product of their slopes is -1. Please explain in detail.
Re-write $3x+2y=4$. Any line parallel to it will be of the form: $3x+2y=z,\ \forall z\in\mathbb{R}$

3. Originally Posted by ecdino2
a line parallel to 3x + 2y - 4 = 0 and passing through (2;3). Thank you very much. PS Still confused. Two lines are perpendicular if the product of their slopes is -1. Please explain in detail.
Two lines are perpendicular if the product of their slopes is -1, and two lines are parallel if their slopes are equal. These should be stated in your book or should be taught by your instructor. The equation of a line with slope $m$ passing through a point $(x_1, y_1)$ is given by: $y-y_1= m(x-x_1)$ .............................(I) Now use this formula to find the equation of the line passing through (2,3).
------------------------------------------------------------------------------------
For the line $3x +2y - 4=0$, find the slope by re-writing the equation in the form $y = mx+b$, where m is the slope. So you have: $3x +2y - 4=0 \implies 2y = -3x+4 \implies y = -\frac{3}{2}x + 2$, so the slope of this line is $-\frac{3}{2}$. Since the two lines are parallel, the slope of the other line should also be $-\frac{3}{2}$. So you have the point (2,3) and the slope m = -3/2. Plug these into equation (I) to find the equation of the line. This question is similar to the one you had posted before.

4. ## write an equation of a line parallel to 3x +3y-4=0 passing through the point (2,3)

5. Originally Posted by lorica2000
of a line parallel to 3x +3y-4=0 passing through the point (2,3)
Refer to the steps given in Post #3 in this thread! Show your work if you are stuck.
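For the record, a worked final step for the original problem (my own addition, following the recipe in Post #3): with $m=-\tfrac{3}{2}$ and the point $(2,3)$,

$$y - 3 = -\tfrac{3}{2}(x-2) \implies 2y - 6 = -3x + 6 \implies 3x + 2y - 12 = 0,$$

which has the same slope as $3x+2y-4=0$ and passes through $(2,3)$, as required.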
http://math.stackexchange.com/questions/7873/is-this-a-linear-algebra-problem/7878
# Is this a linear algebra problem?

I'm solving this question, but I don't know what it is.

============================================================

Consider a plane $x - y + z = 0$, and a point $b = (1, 2, 0)^T$.

a) Find a basis of this plane, and a basis of the orthogonal subspace to this plane.

b) Find the closest point $\hat{b}$ on the plane to $b$.

c) What is the error between $b$ and $\hat{b}$?

============================================================

Is this a linear algebra problem? What's a basis of the orthogonal subspace, and what is the closest point $\hat{b}$? What's the error between $b$ and $\hat{b}$?

-

5 «Is this a linear algebra problem?» is a new angle :) – Mariano Suárez-Alvarez♦ Oct 26 '10 at 5:04

sorry, I can't understand.. – Brian Oct 26 '10 at 5:05

What Mariano means is that it's completely obvious that this is a linear algebra problem! – Hans Lundmark Oct 26 '10 at 6:26

In what context did you come across this? It looks like a typical exercise in a linear algebra course. If you are indeed taking such a course, your textbook should explain in detail what the word "basis" means, and how to solve this problem. And then you should also have access to teachers that you can ask; this sort of thing is easier to explain face to face than in writing. By the way, the word "error" here simply means the distance between the points $b$ and $\hat{b}$. – Hans Lundmark Oct 26 '10 at 6:31

## 2 Answers

Yes, this is a linear algebra problem. Looks like homework. Hint: You are working in $\mathbb{R}^3$. Your plane is given by 1 equation, which strongly suggests that it has dimension 2. Hence you should try to find 2 linearly independent vectors and you will have your first basis. Then, 2 vector spaces are orthogonal if any vector of the first one is orthogonal to any vector of the second one (if $v_1 = (x_1, y_1, z_1)^T$ and $v_2 = (x_2, y_2, z_2)^T$, then they are orthogonal if and only if $x_1x_2 + y_1y_2 + z_1z_2 = 0$). When we are talking about the orthogonal subspace in this context, you are supposed to find one with maximal dimension. For example, $V = \{(0,0,0)^T\}$ is always orthogonal to any other subspace, therefore try to find one with dimension > 0. Hint: the dimension of a subspace + the dimension of its orthogonal complement should equal the dimension of the space you are working in. When you have solved a), then we might help you to solve b) and c). -

(a) To find a basis for the plane $x-y+z = 0$, you could solve this equation in terms of $x$ and $z$: $y= x+z$. Then the set of vectors of your plane could be described as: $$V = \left\{ (x, x+z, z) \ \vert \ x,z \in \mathbb{R} \right\} \ .$$ From this description it's easy to find a basis for your plane $V$: it will have two vectors, say $u,v \in \mathbb{R}^3$: $$V = [u,v] \ .$$ Then, the orthogonal subspace to your plane is a straight line, generated by the cross product of $u$ and $v$: $$V^\bot = [u \times v] \ .$$ (b) The closest point $\hat{b}$ to $b$ in the plane $V$ is the orthogonal projection of $b$ onto $V$. In this case, you don't really need the formulae you'll see in Wikipedia, because it's just the intersection of $V$ with the straight line with direction $u\times v$ that passes through $b$. That is, you have to solve the system of linear equations $$\begin{align} x - y +z &= 0 \\ (x,y,z) &= (1,2,0) + \lambda u\times v \ . \end{align}$$ (c) The "error" between $b$ and $\hat{b}$ is the same as the distance from $b$ to $\hat{b}$; that is, $\| b - \hat{b} \|$. -
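For anyone who wants to check the final numbers, here is a small numeric sketch of my own (the specific basis vectors $u$, $v$ are one convenient choice, not taken from the answers above):

```python
import numpy as np

u = np.array([1.0, 1.0, 0.0])   # two independent solutions of x - y + z = 0,
v = np.array([0.0, 1.0, 1.0])   # i.e. a basis of the plane
n = np.array([1.0, -1.0, 1.0])  # normal vector; note u x v = n, a basis of
b = np.array([1.0, 2.0, 0.0])   # the orthogonal subspace

b_hat = b - (b @ n) / (n @ n) * n   # orthogonal projection onto the plane
print(b_hat)                        # [4/3, 5/3, 1/3]
print(np.linalg.norm(b - b_hat))    # error = 1/sqrt(3) ~ 0.5774
```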
http://mathoverflow.net/questions/52899?sort=oldest
## Examples of two different descriptions of a set that are not obviously equivalent?

I am teaching a course in enumerative combinatorics this semester and one of my students asked for deeper clarification regarding the difference between a "combinatorial" and a "bijective" proof. Specifically, they pointed out that when one is proving the validity of a combinatorial identity by counting a set in two different ways, this is a different activity than giving an explicit bijection between two different sets. However, in combinatorics we often use the phrases "combinatorial proof" and "bijective proof" as synonyms, and I have often heard people use the phrase "bijective proof" regarding a "count in two different ways" proof of an identity.

It seems to me that often a combinatorial proof arises from describing one set in two different ways; implicit in a proof of the equality of the two descriptions of this set is the identity bijection from the set to itself. In this sense, one might regard all "combinatorial" proofs as "bijective," but I feel that I am on quite shaky ground with this. These thoughts have led me to the following questions:

Question 1: What are some examples of combinatorial situations where the same set can be described in two different ways but it is not at all clear that the two descriptions yield the same object?

Question 2: What are some examples of situations where two bijective proofs have been given for a theorem or identity where the bijections turned out to be the same, but proving their equivalence was non-trivial?

I would also appreciate opinions regarding the distinction, if any, between combinatorial arguments where one proves identities by describing a set in two different ways and combinatorial arguments where one sets up bijections between genuinely different sets of objects.

EDIT Thanks for the answers and comments so far. Here are two examples that will hopefully clarify what I am asking. One example of a bijective proof between two different sets is showing that Dyck paths and nonnesting partitions are both Catalan-enumerated objects (even preserving the Narayana statistic with a good bijection). On the other hand, the identity $\sum_{k=0}^nk{n\choose k}=n2^{n-1}$ is usually proved by describing $k$-subsets of an $n$-set with a distinguished element in two different ways: in the first way, pick the set then specify the element; in the second way, pick the element then specify the rest of the set. These are both referred to as bijective or combinatorial proofs, yet somehow they each have a different feel to them. In the second case, it is pretty easy to see that the two descriptions of these objects yield the same set of objects, but surely there must be more situations where the same set is described in two different ways and the equivalence of their descriptions is difficult to ascertain. Similarly, there must be times where there are several bijections between different sets, like the first example, where the bijections are the same but not obviously so. What I am wondering about are examples of these two situations.

A non-combinatorial example of an answer to Q1 is the compact-group vs reflection-group definition of the Weyl group of a semi-simple Lie algebra, where it isn't immediately clear that the same group is obtained. However, I am looking for more combinatorial examples.
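As a quick empirical check of the double-counting identity from the EDIT above (this snippet is my own addition, not part of the original post), both counts can be compared by brute force:

```python
from math import comb

# k-subsets of an n-set with a distinguished element, counted two ways:
# pick the set then the element (left), or pick the element then the
# rest of the set (right).
for n in range(1, 10):
    left = sum(k * comb(n, k) for k in range(n + 1))
    right = n * 2 ** (n - 1)
    assert left == right
print("sum_k k*C(n,k) = n*2^(n-1) holds for n = 1..9")
```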
- 15 The set of non-trivial integer solutions of the equation $x^{71}+y^{71}=z^{71}$ coincides with the empty set. – Mariano Suárez-Alvarez Jan 23 2011 at 3:05

Do you want to perhaps limit your question to combinatorics? Because Question 1 is extremely vague, and almost any characterisation theorem will give an example. Generously interpreting the word "object", an example can be the Selberg trace formula; but these kinds of examples are probably useless for your pedagogical question. – Willie Wong Jan 23 2011 at 3:09

I am confused what "same" means in this context, especially given that you are attempting to distinguish "combinatorial" from "bijective" proofs. – Qiaochu Yuan Jan 23 2011 at 3:11

3 Noah Snyder's answer at mathoverflow.net/questions/4841/… is relevant: as he points out, the question "what do Catalan numbers count?" has many answers. The passage from combinatorial to bijective proofs is an important instance of categorification, and your first question is an instance of the question "what examples are there of structures that can be categorified in two different ways?" – Tom Leinster Jan 23 2011 at 3:43

I'm fairly certain that Stanley's list of Catalan objects contains not-obviously-equivalent definitions of the same objects, but unfortunately I don't know of any specific examples off the top of my head: math.mit.edu/~rstan/ec/catadd.pdf I feel like the various forms of RSK are probably a good example for question 2. – JBL Jan 23 2011 at 5:09

## 9 Answers

I'm not entirely sure I get the question, but I think the theory of partitions has many examples of the kind of thing you want. The number of partitions of $n$ into (say) 17 parts equals the number of partitions of $n$ into parts the largest of which is 17. There's a bijection between the two sets of partitions; the bijection is so natural (once you've seen it) that it's tempting to say we're just talking about one set of partitions but looking at it two different ways. The number of partitions (of $n$) into odd parts equals the number of partitions into distinct parts; here, too, there's a bijective proof, but it's quite a bit harder to find. There is a host of these things in the Andrews and Eriksson book, Integer Partitions (and in many other places where partitions are discussed in detail).

- 1 Here's a bijection that equates partitions into odd parts and partitions into distinct parts. Start with any partition: pick two parts of the same size and combine them into a single part (twice as large, naturally); continue until there are no repeated part sizes. This maps partitions into odd parts to partitions into distinct parts. The inverse --- repeatedly taking even parts and splitting them into two --- gives the inverse map. – Kevin O'Bryant Jan 23 2011 at 12:49

Here is a fact that I find interesting. Strictly speaking, it says that a certain number counts three completely unrelated things, but it seems easy to make these into a combinatorial problem or to define an explicit bijection between two (or three) sets. So, the concrete example is $168$, which is (1) the number of hours in a week; (2) the number of primes under $1000$; and (3) the size of the smallest simple group of Lie type. $168$ is also $4 \times 42$, where $42$ is that famous number Douglas Adams wrote about. (How did he know?)
Which is of course the reciprocal of the smallest positive number that can be written as $$1-\frac 1a -\frac 1b -\frac 1c$$ with $a,b,c\in \mathbb N$, and coincidentally (and relatedly) the largest size of the automorphism group of a smooth projective curve $C$ of genus at least $2$ is $42\cdot \deg K_C$, and of a smooth projective surface $S$ of general type it is $(42 K_S)^2$. So I guess one could add that $168$ is also (4) the maximum value of $\frac 4{1 -\frac 1a -\frac 1b -\frac 1c}$ with $a,b,c\in \mathbb N$; and (5) the largest possible size of the automorphism group of a smooth complex projective curve of genus $3$, but I admit the last two are a little artificial...

-

- 3 Why this answer got $\geq 4$ votes is beyond me. – Christian Blatter Jan 23 2011 at 16:24

1 There are more things in heaven and earth, Christian, than are dreamt of in your philosophy.... – Sándor Kovács Jan 23 2011 at 19:50

At the risk of plugging one of my own papers, may I recommend Producing New Bijections from Old by David Feldman and James Propp (published in Advances in Mathematics, volume 113 (1995), pages 1-44), which you can find here: http://jamespropp.org/cancel.ps.gz . Establishing a definition of bijective proof turns out to be difficult, since it hinges on the distinction between proving the existence of a bijection and using a bijection to establish an equality. One would like to trade in this syntactic problem (distinguishing a class of proofs) for a semantic problem - a mathematical universe where one can distinguish between mere numerical equality and the existence of a bijection. Topos theory offers one approach. There one can have two sheaves with a bijection between every stalk of one and every stalk of the other, but no global isomorphism of the sheaves. Alternatively, one can have two sets of the same cardinality that carry different actions by the same finite group $G$. This may all seem to be getting away from the real world, but what Jim and I show in the paper is that it all does have bankable implications for relative questions about the existence of bijections. For example, if you have sets $A$ and $B$ and you find a bijection between $A^2$ and $B^2$, our paper gives you a concrete, effective way to get a bijection between $A$ and $B$. We also give a group-theoretical criterion that predicts the existence of this effective reduction before you actually have it in your hands. On the other hand, we show that a bijection between $2^A$ and $2^B$ does not effectively give rise to a bijection between $A$ and $B$. Such a bijection might be too symmetrical, and we actually write down a concrete example to show you how things can go wrong.

Pedagogically speaking, I have often encountered two diametrically opposite and equally difficult lessons. I have encountered many students who don't understand why you don't need the axiom of choice to pick out one element from one non-empty set (with more than one element)...in classical mathematics, say ZF. And I have encountered just as many students who don't understand why you do need the axiom of choice to pick out one element from a two element set...in effective, or intuitionistic mathematics, or in a topos.

-

- 1 You need full AC to pick an element out of a two element set? Color me skeptical. Are you sure you don't mean something like the law of excluded middle? Doesn't Diaconescu's theorem prove that the axiom of choice implies the law of excluded middle, so constructive ZF + AC is isomorphic to ZFC?
– Harry Gindi Jan 23 2011 at 8:30

1 Right Harry, I was being sloppy. I only meant a) you need something and AC will suffice. – David Feldman Jan 23 2011 at 8:37

But I meant what I said: once an undergraduate understands that AC (over ZF) for a one-set family amounts to the tautology "a nonempty set is a nonempty set", and that AC thus does not exactly model naive ideas about choosing, it's hard to develop the intuition that in another context it doesn't turn out to be a tautology at all. I suppose some mathematicians see mathematical life beginning inside axiomatic systems and thus they prefer to pass over in silence the question of grounding the value of the axiom systems that command the most attention. But some students hunger for meaning. – David Feldman Jan 23 2011 at 9:20

1 Regarding your point about $2^A$ and $2^B$ in the penultimate paragraph, it is also known to be consistent with ZFC for infinite sets that $2^A$ and $2^B$ can have the same cardinality, even when $A$ and $B$ do not. For example, it is an easy matter to force $2^{\aleph_0}=\aleph_2=2^{\aleph_1}$, and this is true in Cohen's original $\neg CH$ model. See also mathoverflow.net/questions/1924/… – Joel David Hamkins Jan 23 2011 at 17:10

You should look up Catalan numbers in Enumerative Combinatorics by R. P. Stanley.

-

Here is an example where the question of whether the two countings are the same is independent of the axioms of set theory. Namely, if the Continuum Hypothesis holds, then the number of real numbers is the same as the number of countable ordinals. But if CH fails, then it is not.

-

You could discuss Beatty sequences. If $r>1$ is an irrational real number, define $\mathcal{B}_r =\{ \lfloor r \rfloor, \lfloor 2 r \rfloor, \dots\}$, a subset of the positive integers. The complement of $\mathcal{B}_r$ in the positive integers is $\mathcal{B}_s$, where $\frac{1}{r} + \frac{1}{s} = 1$. See, for example, http://en.wikipedia.org/wiki/Beatty_sequence

-

I am not sure I understand the question, but I would like to share the following ingenious observation produced by (elder) László Lovász on live TV in a contest (when he was a high school student).

Question. Take a convex $n$-gon in which the intersection points of the diagonals are all distinct. How many intersections are there?

Solution. The intersections are in bijection with the 4-element subsets of the vertices: to any pair of intersecting diagonals assign their 4 endpoints. Hence the number in question is $\binom{n}{4}$.

-

In response to question 1, there is some subtlety involving use of the phrase "same object". This more or less immediately suggests to me the question of whether we are thinking of a bijective proof as establishing an isomorphism between structures, or not. One of the simplest examples I can think of is the distinction between a permutation on an n-element set and a total ordering on the same set. There are $n!$ structures in each case, but in the one case we are counting the elements of a group, and in the other we are counting the elements of a torsor over the same group. To see that these objects are truly distinct, imagine that we have a bijection $f: S \to T$ between n-element sets; how would we transport the structures in each case? In the total order case, we would simply apply $f$ directly to a total order $s_1 < s_2 < \ldots < s_n$ on $S$ to get a total order $f(s_1) < f(s_2) < \ldots < f(s_n)$ on $T$. In the other case, given a permutation $\phi: S \to S$ on $S$, we'd have to conjugate by $f$ to get a permutation $f \phi f^{-1}$ on $T$.
These are very different actions; in the case where $S = T$, the action of $Aut(S)$ on total orders has just one orbit, and the action of $Aut(S)$ on permutations has many orbits, given by cycle type decompositions.

In the language of category theory, the issue is whether a bijective proof means an isomorphism between Joyal species, or not. For example, if $Tot$ is the species of total orders and $Perm$ the species of permutations, there is a non-isomorphic bijection between them. In such cases, one must typically make a choice of standard structure in order to effect the bijection (for example, one may choose the standard order on $\{1, 2, \ldots, n\}$ to give an explicit bijection between total orders and permutations, but a choice of different order would lead to a different explicit bijection). Cf. David Feldman's answer, where choice also enters.

This is a simple example of course, but it propagates to more elaborate examples. Many readers here will know of Joyal's beautiful proof of Cayley's theorem (as discussed elsewhere at MO), that there are $n^{n-2}$ tree structures one can put on an n-element set. This also involves a non-isomorphic bijection between Joyal species; in compact form it involves a non-isomorphic bijection between the two species $$Tot \circ Arbor \qquad \text{and} \qquad Perm \circ Arbor$$ where $Arbor$ is the species of what Joyal calls "arborescences", in other words rooted trees. For details, consult Joyal's original article (in French) in Adv. Math. 42 (1981), 1-82. Or see the book Combinatorial Species and Tree-like Structures by Bergeron, Labelle, and Leroux (Cambridge U. Press, 1998).

Andreas Blass has an interesting paper "Seven Trees in One" where there is an in-depth discussion of issues of choice and constructivity. The paper by Conway and Doyle on "$3A = 3B$ implies $A = B$" also comes to mind here (this has also been discussed at MO).

-

For Question 2, you may want to look at http://www.dmtcs.org/dmtcs-ojs/index.php/proceedings/article/view/dmAJ0146 (and the reference given in the abstract).

-
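To make the partition example from the first answer concrete, here is a sketch of the merge/split bijection described in Kevin O'Bryant's comment; the code and function names are my own, not from the thread.

```python
from collections import Counter

def odd_to_distinct(partition):
    """Repeatedly merge two equal parts into one part of twice the size,
    until all parts are distinct (O'Bryant's map, applied to odd parts)."""
    parts = Counter(partition)
    while True:
        repeated = [p for p, m in parts.items() if m >= 2]
        if not repeated:
            break
        p = repeated[0]
        parts[p] -= 2
        if parts[p] == 0:
            del parts[p]
        parts[2 * p] += 1
    return sorted(parts.elements(), reverse=True)

def distinct_to_odd(partition):
    """Inverse map: split even parts in two until only odd parts remain."""
    todo, out = list(partition), []
    while todo:
        p = todo.pop()
        if p % 2 == 0:
            todo.extend([p // 2, p // 2])
        else:
            out.append(p)
    return sorted(out, reverse=True)

# Round trip on a partition of 16 into odd parts: 7 + 3 + 3 + 1 + 1 + 1
odd = [7, 3, 3, 1, 1, 1]
dist = odd_to_distinct(odd)             # -> [7, 6, 2, 1], all parts distinct
assert distinct_to_odd(dist) == odd
print(odd, "<->", dist)
```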
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 69, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9458576440811157, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/36075/is-it-really-to-solve-problem-below-by-using-in-the-main-gauss-law?answertab=votes
Is it really possible to solve the problem below mainly by using Gauss's law?

There is an infinite cylindrical surface, uniformly charged along its length, with a surface charge density that can be represented as $$\sigma = \sigma_{0}\cos(\varphi),$$ where $\varphi$ is the polar angle in a cylindrical coordinate system (meaning the $z$-axis coincides with the cylinder axis). One needs to find the $z$-component of $\mathbf E$.

I have tried to learn to use Gauss's law, but problems where the density isn't constant on the surface are still hard for me. Can you help?

- This isn't solvable with Gauss's Law. You're going to have to solve this with an integral. – Jerry Schirmer Sep 10 '12 at 15:05

But how does one "make" an integral? – PhysiXxx Sep 10 '12 at 15:09

2 Answers

This is a superposition of two infinitesimally displaced cylinders with a uniform charge density in the interior: $$\rho(\theta) = \cos(\theta)$$ on the surface implies that $\rho(x,y,z)\propto x$ on the cylinder surface, so up to a factor $$\rho(x,y) = ((x+\epsilon)^2 + y^2) - (x^2 + y^2),$$ which is a superposition of two opposite charges. The solution for a uniform charge density is, by Gauss's law, $$\phi(x,y) \propto \log((x-a)^2+y^2)$$ for the exterior, and $$\phi(x,y) \propto (x-a)^2 + y^2$$ in the interior, with the two solutions matching. You differentiate with respect to $a$ and set $a$ to zero to find the superposition solution for two infinitesimally displaced cylinders of opposite charge density.

-

It seems to me that, due to the symmetry of the problem, $E_z$ vanishes identically. For example, the electric field can be regarded as an integral of the electric fields created by homogeneously charged straight lines parallel to the axis $z$, and it is obvious that the $z$-component of the electric field created by such a line vanishes due to the symmetry.

-

did you see that I gave a complete answer to this? Or did you think I made a mistake (I didn't). The z-symmetry is true and obvious. – Ron Maimon Sep 11 '12 at 15:48

I did not criticize your answer. However, it is my understanding that the only question asked in the problem is about the z-component of the electric field, so maybe the complete solution is overkill. – akhmeteli Sep 11 '12 at 17:25
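As a numerical cross-check of both answers (my own sketch, not part of the thread), one can build the cylinder out of infinite line charges $d\lambda = \sigma_0 \cos\varphi' \, R \, d\varphi'$. Each infinite line produces a purely radial two-dimensional field, so $E_z = 0$ identically, as akhmeteli argues; integrating over $\varphi'$ should also reproduce the uniform interior field $\mathbf E = -\sigma_0/(2\epsilon_0)\,\hat{\mathbf x}$ that follows from the displaced-cylinder construction (the radius and $\sigma_0$ values below are arbitrary test inputs):

```python
import numpy as np

EPS0 = 8.854e-12
R, sigma0 = 1.0, 1.0      # test radius and peak surface density (arbitrary)

def E_xy(x, y, nphi=20000):
    """Field at (x, y) from sigma = sigma0*cos(phi) on a cylinder of
    radius R, built from infinite line charges dl = sigma0*cos(phi)*R*dphi.
    Each infinite line gives a purely radial 2D field, so E_z = 0 exactly."""
    phi = np.linspace(0.0, 2.0 * np.pi, nphi, endpoint=False)
    dlam = sigma0 * np.cos(phi) * R * (2.0 * np.pi / nphi)
    dx, dy = x - R * np.cos(phi), y - R * np.sin(phi)
    r2 = dx**2 + dy**2
    k = dlam / (2.0 * np.pi * EPS0 * r2)
    return np.sum(k * dx), np.sum(k * dy)

# Inside the cylinder the field should be uniform: E = (-sigma0/(2*eps0), 0).
for pt in [(0.0, 0.0), (0.3, 0.2), (-0.5, 0.4)]:
    print(pt, E_xy(*pt))
print("expected E_x:", -sigma0 / (2.0 * EPS0))   # ~ -5.65e10 V/m
```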
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 6, "mathjax_display_tex": 5, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.964219331741333, "perplexity_flag": "middle"}
http://mathhelpforum.com/advanced-algebra/68066-need-help-problem-ax-b-print.html
# Need help with a problem. Ax=b

• January 13th 2009, 03:22 PM

prometheos

Need help with a problem. Ax=b

We just started class and our first HW problem, over a chapter we haven't gotten to yet, has me stumped. The instructor made this one up, I believe, and the following is from the chalkboard: the reduced row echelon form of $[A \mid b]$ for the equation $Ax=b$ is

$$\left[\begin{array}{ccccc|c} 1 & 2 & 0 & 1 & 3 & 4 \\ 0 & 0 & 1 & 2 & 4 & -3 \\ 0 & 0 & 0 & 0 & 0 & 0 \end{array}\right].$$

Find the solution for $x$.

From what I can gather by reading ahead in the book, for an $Ax=b$ situation the solution is given by $x=A^{-1}b$, or $x$ equals the inverse of $A$ times $b$. From what I have read, inverses can only be calculated for matrices that are square, i.e. of $n\times n$ dimensions. Therefore, I am stumped. Any help is greatly appreciated, even if you just show me a general method, so I can solve it on my own.

• January 13th 2009, 03:56 PM

Prove It

Quote: Originally Posted by prometheos (the problem quoted above)

It's going to have infinitely many solutions.

• January 13th 2009, 04:03 PM

NonCommAlg

Quote: Originally Posted by prometheos (the problem quoted above)

Just write $Ax=b$ as a system of equations and solve it: $\begin{cases} x_1 + 2x_2 + x_4 + 3x_5=4 \\ x_3 + 2x_4 + 4x_5 =-3 \end{cases}.$ We have two equations and 5 variables. The solutions are: $x_1=4-2x_2-x_4-3x_5, \ x_3=-3-2x_4-4x_5.$ Note that $x_2, x_4, x_5$ are free variables.

• January 13th 2009, 05:23 PM

prometheos

Ah, I think I see now what my problem was. The wording of the question led me to believe it wasn't a simple solution. Go go new math class language. Thank you.
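To reproduce NonCommAlg's parametric solution mechanically, here is a short SymPy sketch (my own addition, not part of the original thread):

```python
import sympy as sp

x1, x2, x3, x4, x5 = sp.symbols('x1:6')

# The two nonzero rows of the RREF, read off as equations.
eqs = [sp.Eq(x1 + 2*x2 + x4 + 3*x5, 4),
       sp.Eq(x3 + 2*x4 + 4*x5, -3)]

# Solve for the pivot variables x1, x3; x2, x4, x5 remain free.
sol = sp.solve(eqs, [x1, x3], dict=True)[0]
print(sol)   # {x1: -2*x2 - x4 - 3*x5 + 4, x3: -2*x4 - 4*x5 - 3}
```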
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.933607816696167, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/28560/voltage-and-current-of-positive-lightning?answertab=votes
# Voltage and current of positive lightning

For a physics issues investigation I chose to investigate what effects lightning could have on an aeroplane in flight if it was struck, and then go on to discuss some possible implications of engineers not taking into account the power of positive lightning.

Just in case you don't know what positive lightning is: my understanding of it, at least, is that when charges accumulate in clouds (I won't go into how), in most cases the underside of the cloud is negatively charged and the top of the cloud is positively charged. Basically, positive lightning is a lot more powerful than negative lightning, as it has a higher voltage and current.

Q1. How would you determine the potential difference between the underside of the cloud (given an overall charge) and the ground (given the overall charge), and hence the electric field strength? $E = V/d$? But how would I calculate the voltage?

Q2. I understand that $V = IR$, and this is why the voltage of a positive lightning strike is higher than a negative strike: the resistance for the positive strike is higher (it has to go out to the side of the cloud and THEN down). But why is the current higher? If $I = V/R$ and the resistance is higher, wouldn't the current be lower?

(This question probably isn't as high a level as many of the other questions on this site, so you should find it quite easy to answer.)

-

## 2 Answers

Ultimately, the one thing that matters for the airplane is the current, from which you can find the voltage across the airplane itself (knowing the resistance of the airplane). The length of the pulse may also matter if the heating of materials is an issue. There's also the magnetic field, which also depends on the current.

The initial 'voltage' between ground and cloud is not very relevant, other than perhaps in its effect on the length of the strike. If one is to go into very short-time details one may need to consider inductance, but I don't think it is very relevant for the airplane itself. And the airplane can be tested with relatively low voltages that induce the same current through the airplane.

-

1 Well that didn't help.... I really need an explanation of question 2 – Michael May 19 '12 at 6:06

(1) To address your first question: you have to treat the cloud and the earth below it as forming a capacitor. There's a good popular description of this at http://micro.magnet.fsu.edu/electromag/java/lightning/. A capacitor is described by its capacitance, and this is related to the voltage and charge by: $$C = \frac{Q}{V}$$ where $Q$ is the electric charge and $V$ is the voltage difference across the capacitor. You can approximate the cloud and earth as a parallel plate capacitor, and the capacitance is given by: $$C = \frac{\epsilon A}{d}$$ where $A$ is the area of the cloud base, $d$ is the spacing between the cloud base and the earth, and $\epsilon$ is the permittivity of air ($8.854 \times 10^{-12}C^2N^{-1}m^{-2}$). Combining the two equations and a quick rearrangement gives: $$V = \frac{Qd}{\epsilon A}$$ This is obviously a gross simplification, but it should give you a rough idea of the potential difference.

(2) As to your second question: as you say, positive lightning requires a higher voltage to get it started. Looking at the equation for the voltage, and assuming the cloud stays the same, the only way the voltage can be higher is if the charge is higher.
Current is defined as charge per unit time, and if the duration of the lightning strike is roughly constant, a positive lightning bolt has to transfer more charge in the same time and therefore has a higher current.

-
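To get a feel for the order of magnitude that $V = Qd/(\epsilon A)$ gives, here is a toy calculation (my own addition; the charge, height and area below are illustrative assumptions, not measured values):

```python
EPS0 = 8.854e-12      # permittivity of air, C^2 N^-1 m^-2 (approximately)

Q = 20.0              # assumed cloud charge, coulombs
d = 2000.0            # assumed cloud-base height, metres
A = 1.0e7             # assumed cloud-base area, m^2 (~3.2 km x 3.2 km)

V = Q * d / (EPS0 * A)        # V = Qd/(eps*A) from the answer above
print(f"V ~ {V:.2e} volts")   # on the order of 10^8 V
```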
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 9, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.964235246181488, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/172238/perimeter-of-triangle-inside-a-circle
# Perimeter of Triangle inside a circle

If the circle has a radius of 4, what is the perimeter of the inscribed equilateral triangle?

Answer: $12\sqrt{3}$

- 1 As you know, the best way to get the best possible answers is to tell us a little of the context of the problem (was it given in a class? Which class, what level?) and what your thoughts on it are (even if only to say you have no idea), rather than simply posting the problem with or without the final answer and expecting people to solve it for you. – Arturo Magidin Jul 18 '12 at 3:39

Well, I am practicing for a standardized test. I have posted the answer along with the question. I actually am not exactly asking for a solution... but for suggestions on how to solve it. – Rajeshwar Jul 18 '12 at 3:52

The general standardized test does not cover trigonometric ratios, thus I am trying to solve it without using them. – Rajeshwar Jul 18 '12 at 3:54

1 Your comment to Zev gives the lie to your statement; you are looking for a specific way of doing it (avoiding trig functions). You need to give the context and any notes about the kinds of tools you have available/want to use in the question itself! Otherwise, you are wasting people's time and just increasing the general level of frustration to no avail. – Arturo Magidin Jul 18 '12 at 4:08

2 @Rajeshwar: Yes! If you are looking for a specific kind of approach, mention it in the question. If you are looking to avoid a particular kind of approach, mention it in the question. If you have no idea what to do, mention it in the question. If you have had some ideas but they don't seem to lead anywhere, mention it in the question. The more you say about what you want, what you've tried, and what you don't want, the better the chances that you will get an answer that is useful! – Arturo Magidin Jul 18 '12 at 16:07

## 4 Answers

See here for the image I am referring to: the angle $\angle ABC$ is $60^\circ$, so the angle $\angle AOC$ is $120^\circ$ (an inscribed angle on a chord of a circle has half the value of the angle subtended by that same chord at the center $O$). In triangle $\triangle ACO$, $OA = OC = \text{radius}$, so $$\angle IAO = \angle ICO = (180^\circ - \angle AOC)/2 = 30^\circ$$ ($I$ is the midpoint of $AC$, and $OI$ is perpendicular to $AC$).

Triangle $\triangle AIO$ is both half an equilateral triangle and a right triangle at $I$; then $$OI = \frac{1}{2} OA = \frac{1}{2}\cdot 4 = 2$$ $$AI = \sqrt{OA^2 - OI^2} = \sqrt{4^2 - 2^2} = \sqrt{12} = 2\sqrt{3}$$ $$AC = 2 AI\text{ (as $I$ is the midpoint of $AC$)} = 4\sqrt{3}$$ so the perimeter of $\triangle ABC = 3 \cdot AC = \fbox{$12\sqrt{3}\;$}$

- 1 Please see here for how to typeset common math expressions with LaTeX, and see here for how to use Markdown formatting. If you need help with more esoteric math expressions, there are many excellent LaTeX references on the internet, including Stack Exchange's own TeX.SE. Here's a helpful trick: if you see a math expression on this site for which you want to know the LaTeX code, you can right click on it, go to "Show Math As", then choose "TeX Commands". – Zev Chonoles♦ Jul 18 '12 at 4:04

I've added a somewhat cleaner version of your image, hope you don't mind (feel free to change back if you want). Note that your labeling of $B$ and $C$ is reversed from the image that the question was using.
– Zev Chonoles♦ Jul 18 '12 at 4:14

I know the image was not very good; it's the only one I had on my site. I'm unable to post actual images here due to my low rep points (I joined the site last night), so a few rep points from you guys would be great! I am aware that the B and C are reversed, and as I said I couldn't upload any other image, so I figured you guys would correct it :) Thanks for the help. **Added: I looked at my rep points and I now have enough. – DiscreteGenius Jul 18 '12 at 19:01

Hint: Use the law of sines on an interior triangle:

-

I want to solve it without using trigonometric identities. – Rajeshwar Jul 18 '12 at 3:46

5 Don't you think that's something you ought to have mentioned in your question? – Zev Chonoles♦ Jul 18 '12 at 3:53

2 @Rajeshwar: That is precisely why you need to give context in the question! As it is, you've succeeded only in not getting the answer you sought, and making people waste their time. – Arturo Magidin Jul 18 '12 at 4:07

Without trigonometry: since $\,\Delta ABC\,$ is equilateral, its circumcenter is the same as its incenter, which is the same as the intersection point of its medians. Call this point $\,O\,$. Then, if $\,AM\,$ is the whole median from $\,A\,$ to $BC$, $$4=AO=\frac{2}{3}AM\Longrightarrow AM = 6$$ (since the intersection point of the medians cuts each of them in a $\,1:2\,$ proportion, the longer piece always being on the vertex side). Now draw the triangle $\,\Delta AMB\,$, with sides $\,x\,,\,x/2\,,\,6\,$, where $\,x\,$ is the triangle's side, and use Pythagoras (in an equilateral triangle, each median is also the bisector of its vertex angle and the height to the opposite side): $$x^2=6^2+\left(\frac{x}{2}\right)^2\Longrightarrow \frac{3}{4}x^2=36\Longrightarrow x=\frac{12}{\sqrt 3}\Longrightarrow P_{\Delta ABC}=3x=12\sqrt 3$$

-

As you said you are preparing for a standardized test; if you are preparing for a CAT/GRE/GMAT-level test then this is a somewhat fast approach: if you observe carefully, we are given the circumradius of the equilateral triangle. If $s$ and $h$ are the side and the height, respectively, of an equilateral triangle, then we know the circumradius is given by $\frac 23 \times h$. Thus, $$\frac 23 \times h = 4\implies h = 6$$ Again, $h=6=\frac{\sqrt{3}}2 \times s \implies s = \frac {12}{\sqrt{3}} \implies \text{ perimeter } = 3s = 12\sqrt{3}$

-

Wow thanks for the tip. – Rajeshwar Jul 18 '12 at 4:43

Glad to help :) – Quixotic Jul 18 '12 at 5:26
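As a quick coordinate cross-check of all the answers above (my own addition): place three vertices $120^\circ$ apart on a circle of radius 4 and sum the side lengths.

```python
import math

r = 4.0
# Vertices of the inscribed equilateral triangle, 120 degrees apart.
pts = [(r * math.cos(2 * math.pi * k / 3),
        r * math.sin(2 * math.pi * k / 3)) for k in range(3)]

perimeter = sum(math.dist(pts[i], pts[(i + 1) % 3]) for i in range(3))
print(perimeter, 12 * math.sqrt(3))   # both ~ 20.7846
```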
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 29, "mathjax_display_tex": 7, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9457447528839111, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/21086/when-are-intersections-of-finitely-generated-field-extensions-finitely-generated/21093
## When are intersections of finitely generated field extensions finitely generated?

Let $k$ be a field, and let $E$ and $F$ be fields extending $k$, both contained in some single extension of $k$. If $E$ and $F$ are finitely generated (as fields) over $k$, must $E\cap F$ also be finitely generated? If not, is there a simple counterexample?

- 3 Subextension inherits finite generation; this is a homework-level exercise. Please work on it by yourself to find a proof. (The version for $k$-algebras is of course a completely different story...) – BCnrd Apr 12 2010 at 10:17

2 But we should admit that it's a rather hard exercise. I've written it up here matheplanet.com/matheplanet/nuke/html/… (in German). – Martin Brandenburg Apr 12 2010 at 11:33

5 But it's OK for some exercises to be hard. To succeed in research one needs the experience of struggling with hard exercises on one's own. – BCnrd Apr 12 2010 at 15:39

## 1 Answer

As Brian Conrad remarked above, subextensions of finitely generated extensions are also finitely generated. Here is a proof. I wish there were a simpler one!

• If $L/K$ is a finitely generated field extension and $L'$ is an intermediate field, then $L'/K$ is also finitely generated.

Proof: Since $tr.deg_K(L) = tr.deg_{L'}(L) + tr.deg_K(L')$ is finite, the same is true for $tr.deg_K(L')$. Choose a transcendence basis $B'$ of $L'/K$. Replacing $K$ by $K(B')$, we may assume that $L'/K$ is algebraic. Now let $B$ be a transcendence basis of $L/K$. Then $L/K(B)$ is algebraic and a finitely generated field extension, thus finite. Let $C \subseteq L'$ be linearly independent over $K$. If we knew that $B$ is also algebraically independent over $L'$, we could conclude that $C$ is linearly independent over $K[B]$ and thus over $K(B)$. This implies $|L':K| \leq |L : K(B)| < \infty$. Thus it remains to prove:

• Let $L/L'/K$ be a tower of fields such that $L'/K$ is algebraic. Let $B \subseteq L$ be algebraically independent over $K$. Then $B$ is also algebraically independent over $L'$.

Proof: Since algebraic independence is of finite character, we may assume that $B$ is finite. Since $L'(B) / K(B)$ is algebraic, and $L'/K$ is algebraic as well, we have $$tr.deg_{L'}(L'(B)) = tr.deg_K(L'(B)) = tr.deg_K(K(B)) + tr.deg_{K(B)}(L'(B)) = |B|.$$ Since $B$ generates $L'(B)/L'$, some subset of $B$ is a transcendence basis of $L'(B)/L'$; but any such basis has cardinality $|B|$, and $B$ is finite, so $B$ is itself this basis.

-

- 2 Sigh. If students know they can search on MO for full solutions to many standard exercises, they'll never learn to figure stuff out for themselves, which is an important part of learning ideas and training to do creative work later. Please keep this in mind before future posting of arguments like this. I'll just say that there is a more geometric approach if one thinks in terms of irreducible components and nilpotence on the geometric fiber of a $K$-variety associated to $L$, but please try that one in private; no need to post it on MO. – BCnrd Apr 12 2010 at 17:09

2 I understand your objection. I posted the proof because I'm pretty sure that this exercise is interesting and not straightforward even for some graduate students. As for me, to be honest, I have no intuition for the proof. Perhaps you can provide some. – Martin Brandenburg Apr 12 2010 at 18:33
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 48, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9413048028945923, "perplexity_flag": "head"}
http://mathoverflow.net/questions/9746/who-invented-the-gamma-function/48825
## Who invented the gamma function?

Who was the first person who solved the problem of extending the factorial to non-integer arguments?

Detlef Gronau writes [1]: "As a matter of fact, it was Daniel Bernoulli who gave in 1729 the first representation of an interpolating function of the factorials in form of an infinite product, later known as gamma function."

On the other hand many other places say it was Leonhard Euler. Will the real inventor please stand up?

[1] "Why is the gamma function so as it is?" by Detlef Gronau, Teaching Mathematics and Computer Science, 1/1 (2003), 43-53.

- 2 What makes you so sure there is a sensible answer to this question? Bernoulli and Euler were contemporaries who knew each other. Is the "invention" the discovery of the integral formula, or the posing of the question of extension? Moreover the gamma function is ubiquitous enough it has probably been discovered countless times in different contexts. – Kevin McGerty Dec 26 2009 at 2:58

1 "What makes you so sure there is a sensible answer to this question?" I do not claim there is. I ask a question. Do you think that asking a question always has the precondition that there is a sensible answer? What I am looking for is an answer in accordance to the conventions of the historiography of science. Is this not obvious? – Bruce Arnold Dec 26 2009 at 12:48

Kevin, knowing more about a nonresolvable problem is a sensible answer. Anybody interested in a question wants to know more, even if no resolution is very likely or even makes sense. – Zoran Škoda Jul 19 2010 at 12:45

## 4 Answers

I don't have a complete answer. As you say, many sources say that Euler did it, but Gronau gives compelling reason to doubt this. The best source I have found for this issue is "The early history of the factorial function" by Dutka, and for what it's worth I am convinced that Gronau's assessment is a fair one.

First, I'll summarize the usual story. Kline discusses this in chapter 19, section 5 of Mathematical Thought from Ancient to Modern Times (which falls in volume 2 of the paperback printing), and a more thorough source is Davis's article "Leonhard Euler's Integral: A Historical Profile of the Gamma Function". There is agreement in these sources that Euler solved the problem after unsuccessful attempts by Stirling, D. Bernoulli, and Goldbach, and that the first record of Euler's solution appears in outline form in a 1729 letter from Euler to Goldbach. This was expanded in subsequent letters and written up in the article to which Kristal Cantwell links (apparently the article was written in 1729 but not published until 1738). Euler's letters to Goldbach start on the third page of this pdf.

However, Gronau cites a letter by Bernoulli that was written a few days before Euler's and that contains at least a partial solution, possibly contradicting Kline and Davis. Dutka's paper goes into more detail and also claims that Euler's work was influenced by Bernoulli's earlier solution. I could only speculate on what led to the confusion among other authors, and I won't do so here.

Perhaps it should be mentioned here (as is done by Gronau and Dutka) that Euler did much more than Bernoulli. For instance, Euler gave the first integral representations of the gamma function.

Edit: Because this answer is accepted and yet incomplete, I want to direct attention to Bruce Arnold's answer below.
It contains a link to a copy of the too often neglected letter of D. Bernoulli cited by Gronau and Dutka.

-

Davis's article "Leonhard Euler's Integral: A Historical Profile of the Gamma Function" received the Chauvenet Prize and can be downloaded free from maa.org [1]. It says that the problem was ".. bandied unsuccessfully by Daniel Bernoulli..". This is certainly the source for many attributions to Euler found in the literature. However, this paper was written in the year 1959 and Gronau's paper (which references Davis's paper) in the year 2003. Maybe there is sometimes advancement in the history of science? [1] mathdl.maa.org/mathDL/22/… – Bruce Arnold Dec 26 2009 at 12:21

Yes, I didn't think much about this, but perhaps the letter of Bernoulli hadn't been discovered by modern historians at the time Davis and Kline were writing. However, places like maa.org/editorial/euler/… still cite this as fact. – Jonas Meyer Dec 26 2009 at 12:27

The first person who gave a representation of the so-called gamma function was Daniel Bernoulli, in a letter to Goldbach from 1729-10-06. The letter can be seen here. The formula reads in modern notation, as given by Gronau in the article cited in the answer:

$x! = \lim_{n\rightarrow \infty}\left(n+1+\frac{x}{2}\right)^{x-1} \prod_{i=1}^n\frac{i+1}{i+x}$

Gronau also observes that "Numerical experiments show that the formula of Bernoulli converges much faster to its limit than that of Euler ..."; "that of Euler" refers here to a formula Euler gave in a letter to Goldbach dated 1729-10-13. Gronau writes: "Euler who, at that time, stayed together with D. Bernoulli in St. Petersburg gave a similar representation of this interpolating function. But then, Euler did much more. He gave further representations by integrals, and formulated interesting theorems on the properties of this function." Though this justifies the name 'Euler gamma function', Euler's representation was historically only second to Daniel Bernoulli's. The correspondence between Goldbach, Daniel Bernoulli and Euler which undoubtedly gave birth to the gamma function is well documented in Paul Heinrich Fuss's „Correspondance mathématique et physique de quelques célèbres géomètres du XVIIIeme siècle ..", St. Pétersbourg, 1843.

-

I'm still curious about 2 things: (1) If this correspondence was well documented by Fuss in the 19th century, why did so many 20th century scholars fail to acknowledge Bernoulli's contribution? (2) You are essentially following the source you provided in your question, so what has changed to make you so certain? – Jonas Meyer Dec 26 2009 at 23:08

@Jonas → (1) I do not know. → (2) That I found the letter, saw it with my own eyes, and could check that it says what others claim it says. – Bruce Arnold Dec 26 2009 at 23:37

2 @Jonas: Clearly your first question deserves a better answer. I think Davis's Chauvenet Prize article makes a serious error in not mentioning Daniel Bernoulli's letter and solution. I just looked it up again: Davis gives only Tome I of Fuss's 'Correspondance' as reference, whereas Daniel's letter is in Tome II. Perhaps Davis missed this important source.
Later his mistake was passed down by 'argumentum ad verecundiam' (which is the Latin translation of 'according to the wikipedia' :) – Bruce Arnold Dec 27 2009 at 23:37

In his first letter to Goldbach (already linked to in Jonas' answer) Euler writes that he communicated his interpolation of the sequence of factorials (or Wallis's hypergeometric series, as it was called back then) to Daniel Bernoulli: "I communicated this to Mr. Bernoulli, who by his own method arrived at nearly the same final expression" (this is the sentence starting with "communicavi haec . . . " on p. 4).

-

Well of course, these three noble men discussed this subject for a long time. Do you think the sentence you cite sheds any new light on the matter, or casts doubt on the claim that Daniel Bernoulli was the first to come up with the gamma function? Since my Latin is a little bit rusty I might not be able to extract further relevant information from this letter; on the other hand, Bernoulli in his letter, which he wrote a week earlier, gives not the slightest indication that he had the "term general pour la suite 1, 1*2, 1*2*3, etc." (the general term for the sequence 1, 1·2, 1·2·3, etc.) from Euler. He would certainly have mentioned this if it had been the case. – Bruce Arnold Dec 10 2010 at 17:35

After Euler communicated his result on the summation of the inverse squares to J. Bernoulli, he found his own proof (similar to Euler's) and even published it without crediting Euler - after all, it was his proof! And yes, I think Euler's letter clearly shows that it was him (and not D.B.) who first came up with the gamma function. – Franz Lemmermeyer Dec 10 2010 at 18:07

According to the Wikipedia article, the question of extending the factorial beyond the integers was first posed in the 1720s by Daniel Bernoulli and Christian Goldbach. It was first solved by Euler in 1729. Here is an English translation of a paper by Euler which contains his solution.

-

- 1 This does not answer the question in any way. I would like to get answers which rely on scholarly material, like academic and peer-reviewed publications. Wikipedia is not authoritative and is unreliable; the lack of fact checking on esoteric topics is notorious. And I would like to get answers which take seriously Gronau's claim that "it was Daniel Bernoulli who gave in 1729 the first representation [of the gamma function]". – Bruce Arnold Dec 25 2009 at 20:09

5 So, would you say that unless Gronau provides a reference for his assertion, then it is no more authoritative than Wikipedia? – Gerald Edgar Dec 25 2009 at 20:50

2 @Gerald: Please do not start a discussion on the value of Wikipedia here and respect the FAQ, which says: "Math Overflow is not a discussion forum. There's a place for discussion about mathematics, but it isn't Math Overflow. Blogs and threaded discussion forums are a more appropriate place for discussions." Any comment on the question is welcome. – Bruce Arnold Dec 25 2009 at 21:12

1 While I agree that this does not answer the question, I voted it up because of the link to an original source relevant to the issue, which was useful for the time when this was the only posted answer. I also agree that this is not the best place to discuss the value of Wikipedia, but I don't think that Gerald Edgar did start such a discussion. – Jonas Meyer Dec 27 2009 at 22:09
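As an aside, Bernoulli's 1729 product from Bruce Arnold's answer is easy to test numerically; the sketch below is my own and simply compares the partial products against $\Gamma(x+1)$:

```python
from math import gamma, prod

def bernoulli_factorial(x, n):
    """D. Bernoulli's 1729 interpolation of x! from his letter to Goldbach:
    x! ~ (n + 1 + x/2)**(x - 1) * prod_{i=1..n} (i + 1)/(i + x)."""
    return (n + 1 + x / 2) ** (x - 1) * prod((i + 1) / (i + x)
                                             for i in range(1, n + 1))

x = 0.5
for n in (10, 100, 1000):
    print(n, bernoulli_factorial(x, n))
print("Gamma(x + 1) =", gamma(x + 1))   # 0.8862269...
```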
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9633013010025024, "perplexity_flag": "middle"}
http://psychology.wikia.com/wiki/Unit_of_measurement
# Unit of measurement

## Introduction

The definition, agreement and practical use of units of measurement have played a crucial role in human endeavour from early ages up to this day. Disparate systems of measurement used to be very common. Now there is a global standard, the International System (SI) of units, a form of metric system. The SI has been or is in the process of being adopted throughout the world. The United States of America is almost certainly the last to adopt the system, but even there it is increasingly being used.

Standards are very important. Each unit is a set size. A distance or length or volume or mass or span of time being measured is described as a certain number of these units. A measurement may be quoted to a certain degree of accuracy. One example of the importance of agreed units is the failure of the NASA Mars Climate Orbiter, which was accidentally destroyed on a mission to the planet Mars in September 1999, instead of entering orbit, due to miscommunications about the value of forces: different computer programs used different units of measurement (newton versus pound force). Enormous amounts of effort, time and money were wasted.

In physics and metrology, units are standards for measurement of physical quantities that need clear definitions to be useful. Reproducibility of experimental results is central to the scientific method. A standard system of units facilitates this. Scientific systems of units are a refinement of the concept of weights and measures developed long ago for commercial purposes. Psychology and medicine often use larger and smaller units of measurement than those used in day-to-day life and talk about them more exactly. The judicious selection of the units of measure can aid researchers in both framing and solving the problem.

## History

Main article: History of measurement

Units of measurement were among the earliest tools invented by humans. Primitive societies needed rudimentary measures for many tasks: constructing dwellings of an appropriate size and shape, fashioning clothing, or bartering food or raw materials. The earliest known uniform systems of weights and measures seem to have all been created sometime in the 4th and 3rd millennia BC among the ancient peoples of Mesopotamia, Egypt and the Indus Valley, and perhaps also Elam in Persia as well. Many systems were based on the use of parts of the body and the natural surroundings as measuring instruments. Our present knowledge of early weights and measures comes from many sources.

## Systems of measurement

Main article: Systems of measurement

A number of metric systems of units have evolved since the adoption of the original metric system in France in 1791. The current international standard metric system is the International System of Units. Prior to the global adoption of the metric system, many different systems of measurement had been in use. Many of these were related to some extent or other. Often they were based on the dimensions of the human body. Both the Imperial units and US customary units derive from earlier English units.
Imperial units were mostly used in the British Commonwealth and the former British Empire. US customary units are the main system of measurement in the United States; however, some steps towards metrication have been made.

The above systems of units are based on arbitrary unit values, formalised as standards. Some unit values occur naturally in science. Systems of units based on these are called natural units. Also, a great number of strange and non-standard units may be encountered.

## Base and derived units

Different systems of units are based on different choices of a set of fundamental units. The most widely used system of units is the International System of Units, or SI. There are seven SI base units. All other SI units can be derived from these base units.

For most quantities a unit is absolutely necessary to communicate values of that physical quantity. For example, conveying to someone a particular length without using some sort of unit is impossible, because a length cannot be described without a reference used to make sense of the value given.

But not all quantities require a unit of their own. Using physical laws, units of quantities can be expressed as combinations of units of other quantities. Thus only a small set of units is required. These units are taken as the base units. Other units are derived units. Derived units are a matter of convenience, as they can be expressed in terms of basic units.

Which units are considered base units is a matter of choice. The base units of SI are actually not the smallest set. Smaller sets have been defined. There are sets in which the electric and magnetic field have the same unit. This is based on physical laws that show that the electric and magnetic field are actually different manifestations of the same phenomenon. In some fields of science such systems of units are highly favoured over the SI system.

## Calculations with units

### Units as dimensions

Any value of a physical quantity is expressed as a comparison to a unit of that quantity. For example, the value of a physical quantity Q is written as the product of a unit [Q] and a numerical factor:

$Q = n \times [Q] = n [Q]$

The multiplication sign is usually left out, just as it is left out between variables in scientific notation of formulas. In formulas the unit [Q] can be treated as if it were a kind of physical dimension: see dimensional analysis for more on this treatment.

A distinction should be made between units and standards. A unit is fixed by its definition, and is independent of physical conditions such as temperature. By contrast, a standard is a physical realization of a unit, and realizes that unit only under certain physical conditions. For example, the metre is a unit, while a metal bar is a standard. One metre is the same length regardless of temperature, but a metal bar will be one metre long only at a certain temperature.

### Guidelines

• Treat units like variables. Only add like terms. When a unit is divided by itself, the division yields a unitless one. When two different units are multiplied, the result is a new unit, referred to by the combination of the units. For instance, in SI, the unit of speed is metres per second (m/s). See dimensional analysis. A unit can be multiplied by itself, creating a unit with an exponent (e.g. m²/s²).

• Some units have special names; however, these should be treated like their equivalents. For example, one newton (N) is equivalent to one kg·m/s².
which creates the possibility of units with multiple designations: for example, the unit for surface tension can be referred to as either N/m (newtons per metre) or kg/s² (kilograms per second squared).
• Don't let definitions like "density is mass per unit volume" obscure your understanding of units. It sounds as if it says D = m/m³ (mass divided by the unit of volume), which is WRONG. The correct statement is that density is mass divided by volume: D = m/V (mass divided by volume, both variables).

### Expressing a physical value in terms of another unit

Conversion of units involves comparison of different standard physical values, either of a single physical quantity or of a physical quantity and a combination of other physical quantities. Starting with

$Q = n_i \times [Q]_i$

just replace the original unit $[Q]_i$ with its meaning in terms of the desired unit $[Q]_f$; e.g. if $[Q]_i = c_{ij} \times [Q]_f$, then:

$Q = n_i \times c_{ij} \times [Q]_f$

Now $n_i$ and $c_{ij}$ are both numerical values, so just calculate their product. Or, which is mathematically the same thing, multiply Q by unity; the product is still Q:

$Q = n_i \times [Q]_i \times ( c_{ij} \times [Q]_f/[Q]_i )$

For example, suppose you have an expression for a physical value Q involving the unit feet per second ($[Q]_i$) and you want it in terms of the unit miles per hour ($[Q]_f$):

1. Find facts relating the original unit to the desired unit: 1 mile = 5280 feet and 1 hour = 3600 seconds.
2. Next use the above equations to construct a fraction that has a value of unity and that contains units such that, when it is multiplied with the original physical value, it will cancel the original units: 1 = (1 mile) / (5280 feet) and 1 = (3600 seconds) / (1 hour).
3. Last, multiply the original expression of the physical value by the fraction, called a conversion factor, to obtain the same physical value expressed in terms of a different unit. Note: since the conversion factors have a numerical value of unity, multiplying any physical value by them will not change that value. (A worked numeric example is given after the list below.)

## See also

• English system
• Metric system
• SI
• Natural units
• Conversion of units
• Units conversion by factor-label
• International standard ISO 31: Quantities and units
• Metrication
• Metric system in the United States
• Orders of magnitude
• ISO 31 - style guide for units of measurement
• Scaling
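As promised above, a worked numeric example of the factor-label method (my own illustration; it uses only the facts quoted in step 1, and the starting value 60 feet per second is an arbitrary choice):

$60\ \tfrac{\text{feet}}{\text{second}} \times \tfrac{1\ \text{mile}}{5280\ \text{feet}} \times \tfrac{3600\ \text{seconds}}{1\ \text{hour}} = \tfrac{60 \times 3600}{5280}\ \tfrac{\text{miles}}{\text{hour}} \approx 40.9\ \tfrac{\text{miles}}{\text{hour}}$

Both fractions have the value unity, so the physical value is unchanged; only its expression in units differs.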
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 10, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9275429844856262, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/52902/proving-theorems-by-using-functions-with-fixed-points/52964
## Proving theorems by using functions with fixed points.

I am trying to get a better feel for solving questions where creating a function with a unique fixed point is the crux of the proof. In particular, the Inverse Function Theorem as well as the existence of solutions of certain ODEs can be proven by using contraction mappings (which have exactly one fixed point by Banach's Fixed Point Theorem). My question is then: what other problems can be solved by this technique? I am interested in any that come to mind. Hopefully this is not too vague. -

This may be relevant: mathoverflow.net/questions/19272/… – Nick Salter Jan 23 2011 at 4:11
Community wiki? – Yemon Choi Jan 23 2011 at 4:51
I can't help feeling this question is too broadly posed, and would best be addressed by waiting to learn more mathematics in different areas. – Yemon Choi Jan 23 2011 at 4:52
Also: try Googling Ryll-Nardzewski or Markov-Kakutani – Yemon Choi Jan 23 2011 at 22:04

## 10 Answers

The fixed point phenomenon has a lot of very useful applications. You mentioned the Contraction Mapping Theorem; that particular theorem pops up all over the place, for example in the proofs of:

1. The Stable/Unstable Manifold Theorem, for systems of differential equations.
2. The Hartman-Grobman Theorem, which relates the local behavior of a system of differential equations to the linearized system in a neighborhood of a hyperbolic singularity.

The Inverse Function Theorem also crops up everywhere (but mainly in non-constructive existence proofs), in particular in the proofs of:

1. The existence of the Poincaré first return map, for a periodic orbit of a system of differential equations
2. The Flowbox Theorem
3. Finding singular points for surfaces/curves (these are points where the Jacobian has determinant zero)

These are all I can think of at the moment. -

There is a very slick proof (discussed here on MO) that every prime $p=4k+1$ is a sum of two squares, which looks at the set $S= \{(x,y,z) \in \mathbb{N}^3: x^2+4yz=p \}$ and shows that a particular involution of $S$ has exactly one fixed point. -

If you are interested in applications of fixed point theorems, you'll find entire journals dedicated to them. For example, you see fixed point techniques pop up in approximation theory, where one is interested in finding the best approximation. For a specific problem, consider the following: You decide to take a break from the fast-paced world of academia to climb Mt. Fuji. You begin your ascent at sunrise along a narrow path. Along the way, you stop a few times to take in the scenery and eat, maybe even work out a math puzzle or two. You reach the top at sunset. The next day you begin your descent at sunrise, again making leisurely stops along the way. It's reasonable to assume that going downhill is easier than uphill, so let's assume your average downhill speed is greater than your average uphill speed. Show that there must be a place along the path that you occupy at the exact same time of day during your uphill and downhill trips. -

I am familiar with a good example from the theory of 2nd order elliptic PDE. Technicalities omitted...
A special case of the Leray-Schauder Theorem says the following: Let $T$ be a compact mapping of a Banach space $X$ into itself and suppose there exists a constant $M$ such that $\|x\| \leq M$ for all $x$ in the set $\{x \in X : Tx = \sigma x\ \text{for some}\ \sigma \in [0,1]\}$. Then $T$ has a fixed point. One proves this by applying a sort of infinite-dimensional Brouwer fixed-point theorem.

The clever bit comes next: Say you want to solve the Dirichlet problem for the 2nd order quasilinear elliptic PDE $Qu = a^{ij}(x,u,Du)D_{ij}u + b(x,u,Du) = 0$, and you know from more basic (Schauder) theory how to solve linear problems. Then I define an operator $T$ by sending $v \in C^{1,\beta}(\overline{\Omega})$ to the unique solution $u$ of the linear problem $a^{ij}(x,v,Dv)D_{ij}u + b(x,v,Dv) = 0$. (I won't bother writing in the boundary conditions.) Then a fixed point of this map is exactly a solution of the quasilinear problem!

The Leray-Schauder theorem thus advocates the a priori bound philosophy: To prove the existence of a solution, you can assume it exists and then just bound it in the relevant Banach space. The task is getting the bound $\|u\|_{C^{1,\beta}(\overline{\Omega})} < M$ for solutions of $a^{ij}(x,u,Du)D_{ij}u + \sigma b(x,u,Du) = 0$. This is in Gilbarg and Trudinger. -

Thurston's classification of diffeomorphisms of surfaces involves constructing a big space (projective measured lamination space) on which the diffeo acts, and studying the fixed points -- see the answer by Ryan Budney to this question. In the generic (pseudo-Anosov) case, there are exactly two fixed points, not one, though -- does that still count as an answer to your question? -

The standard proofs of the existence of Nash equilibria in game theory all use either Brouwer's or Kakutani's fixed point theorem. See for example Nash's 1951 paper "Non-Cooperative Games," where he defines his equilibrium notion and gives the Brouwer-based proof. Recent complexity-theory results by Daskalakis, Papadimitriou, etc. showing PPAD-completeness of computing Nash equilibria mean that in some sense a fixed point theorem (or equivalent) is necessary to prove the existence of Nash equilibria. A fixed point theorem like the Banach one does not in general apply to this problem, because there can be multiple equilibria. -

In the "non-fibered case," Thurston's proof of the Hyperbolization Theorem for Haken manifolds centers on finding a fixed point for a certain map between Teichmüller spaces. -

The Borel fixed point theorem can be used to prove transitivity of an algebraic group acting on a projective variety. -

I'm not sure if this is what you had in mind, but counting fixed points seems to come up often in elementary group theory, particularly in arguments involving group actions. In this setting, not only must you pick the right function (a homomorphism of $G$ into an appropriate permutation group) but you also have to pick the correct "domain" (a suitable group $G$). For example, one way to show that all $p$-Sylow subgroups of a group are conjugate involves counting fixed points of $p$-Sylow groups under conjugation by other $p$-Sylows; a simpler (and cuter!) example is the proof that every group of size divisible by $p$ has an order-$p$ element. To the best of my memory, this (standard) proof is found in Hungerford: Suppose $G$ is a group with $p \mid |G|$, and let $U = \{ (g_1,\dots,g_{p-1},x): (g_1\cdot \dots \cdot g_{p-1})\cdot x =1_{G} \}$, i.e. the set of all $p$-tuples of elements in $G$ whose product is the identity.
Since $x$ is uniquely determined by the $g_i$, $|U| = |G|^{p-1}$, so $p \mid |U|$ as well. Now, letting $\mathbb{Z}/p\mathbb{Z}$ act on $U$ by cyclic permutation, the number of fixed points is congruent to $|U|$ mod $p$, hence divisible by $p$. Since $(1_G,\dots,1_G)$ is a fixed point, there must be more than one; that is, there is at least one non-trivial element $(g_1,\dots,g_{p-1},x) \in U$ which is invariant under cyclic permutation, i.e. some $g_1=\dots=g_{p-1}=x \neq 1_{G}$. Consequently, we have $x^p = 1_{G}$, as desired. -

In addition to ODE existence theorems, there are also uses for PDE existence/uniqueness theorems. An example of that is constructing weak solutions to the linear Boltzmann equation. I think this example is interesting because it is more of a philosophy: there is no precise "fixed point theorem" used here. The linear Boltzmann equation is

$\partial_t f + v\cdot \nabla_x f = Kf -af + Q$

where $Kf = \int k(t,x,v,v') f(t,x,v')dv'$. By Duhamel's principle, we know that a strong solution would satisfy

$f(t,x,v) = f_0(x-tv,v) + \int_0^t (Kf - af + Q)(s,x-(t-s)v,v)ds.$

We basically use this as our definition of a weak solution. Thus, we can rephrase the search for a weak solution as looking for a fixed point of the operator $g \mapsto F[f_0,Q] + \tau g$, where $F[f_0,Q] = f_0(x-vt,v) + \int_0^t Q(s,x-(t-s)v,v)ds$ and $\tau g = \int_0^t (Kg - ag)(s,x-(t-s)v,v)ds$. Notice that the series $\sum_{n\geq 0} \tau^n[F[f_0,Q]]$ would be such a fixed point if we had appropriate convergence (just hit it with $\tau$ and see what happens), so basically, we've reduced the problem to bounding the operator $\tau$ in the appropriate space in which we would like weak solutions to live. As I mentioned above, this doesn't really use any "fixed point theorems" but is clearly still a fixed point argument. -
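As a concrete companion to the two-squares answer above (the slick proof alluded to is usually attributed to Zagier), here is a hedged Python enumeration; the prime p = 13 and all names are my own choices, not from the thread. Zagier's involution argument shows |S| is odd, so the simple swap (x,y,z) → (x,z,y) must also have a fixed point, and any such fixed point exhibits p as a sum of two squares:

```python
# Hedged illustration of the "sum of two squares" answer above.
# The prime p = 13 and the variable names are my own choices.

p = 13  # a prime with p = 4k + 1

# Enumerate S = {(x, y, z) in N^3 : x^2 + 4yz = p} (positive entries suffice).
S = [(x, y, z)
     for x in range(1, p)
     for y in range(1, p)
     for z in range(1, p)
     if x * x + 4 * y * z == p]

print(len(S))  # 3, an odd number, as the involution argument predicts

# Since |S| is odd, the swap (x, y, z) -> (x, z, y) must fix some triple,
# i.e. some element has y == z, which gives p = x^2 + (2y)^2.
for (x, y, z) in S:
    if y == z:
        print(p, "=", f"{x}^2 + {2*y}^2")
```

Here $S=\{(1,1,3),(1,3,1),(3,1,1)\}$, and the fixed point $(3,1,1)$ of the swap gives $13 = 3^2 + 2^2$.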
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 47, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9320189356803894, "perplexity_flag": "head"}
http://mathoverflow.net/questions/102010?sort=votes
## Eigenvalues of Product of two matrices

I have two matrices A, B. A relates to a physical system and all of its eigenvalues have negative real parts. B is a diagonal matrix with nonzero and unequal elements on its diagonal, which are in fact its eigenvalues. Let's say A and B are 5 by 5 and I want to know about the eigenvalues of A*B. If, for example, 2 elements of B are negative and the rest are positive, can I say that 2 eigenvalues of A*B have positive real parts, and the other eigenvalues of A*B have negative real parts? I tried with different choices of A and B, and this held for all of them (it seems as if the signs of the eigenvalues of A and B are multiplied by each other in A*B), so I thought that there might be a proof or theory about it! Can anybody help me? -

## 1 Answer

What would be true is the following: suppose $A$ is symmetric and negative definite, and let $(-A)^{1/2}$ be the positive definite square root of $-A$. Then by Sylvester's Law of Inertia, the numbers of positive and negative eigenvalues of $(-A)^{1/2} B (-A)^{1/2}$ are equal to the corresponding numbers for $B$; but the eigenvalues of $(-A)^{1/2} B (-A)^{1/2}$ are equal to the eigenvalues of $-AB$, so the number of positive (resp. negative) eigenvalues of $AB$ is equal to the number of negative (resp. positive) eigenvalues of $B$.

EDIT: Here is a $3 \times 3$ counterexample where $A$ is not symmetric. Take
$$A = \pmatrix{ -8.2 & 1 & 1\cr -45 & -3 & 0\cr 0 & 8.3 & 5.3\cr},\ B = \pmatrix{2 & 0 & 0\cr 0 & -1 & 0\cr 0 & 0 & -2\cr}$$
Then $A$ has eigenvalues $-3, -2.2, -0.7$, and $B$ has one positive and two negative eigenvalues, but $AB$ has one negative real eigenvalue (approximately $-23.9$) and two complex eigenvalues with negative real part (approximately $-.0432 \pm .878 i$). -
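A quick numerical re-check of the counterexample above (my own addition; it just re-derives the approximate eigenvalues quoted in the answer using numpy):

```python
# Numerically verify the 3x3 counterexample from the answer above.
import numpy as np

A = np.array([[-8.2, 1.0, 1.0],
              [-45.0, -3.0, 0.0],
              [0.0, 8.3, 5.3]])
B = np.diag([2.0, -1.0, -2.0])

print(np.linalg.eigvals(A))      # approx -3, -2.2, -0.7: all negative real parts
print(np.linalg.eigvals(B))      # 2, -1, -2: one positive, two negative
print(np.linalg.eigvals(A @ B))  # approx -23.9 and -0.0432 +/- 0.878i: all three
                                 # have negative real part, so the sign pattern
                                 # of B is NOT inherited by A*B
```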
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 17, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8989576697349548, "perplexity_flag": "head"}
http://mathhelpforum.com/pre-calculus/204991-few-questions-about-radical-functions.html
1Thanks • 1 Post By MarkFL

# Thread:

1. ## few questions about radical functions

The speed, s, in metres per second, of sound in dry air can be described by the function s = 331.3sqrt(1 + T/273.15), where T is the temperature in degrees Celsius.

a) Determine the domain and range of the function.

What I put: domain: {x | x >= -1, x ∈ R}, range: {y | y >= 0, y ∈ R}. IT WAS WRONG. GeoGebra said the x-intercept is 273.15, but I thought it would be -1 because it says +1 in the function, and the range is apparently wrong too. Anyways, it would be appreciated if someone helped.

b) What is the meaning of the x-intercept in this context?

2. The manufacturer of a new global positioning satellite (GPS) system wants to predict the consumer interest in its new device. The company uses the function I(w) = -3sqrt(-w-1) + 15 (the square root ends after the -1; the +15 stands alone) to model the number, I, in thousands, of pre-orders for the GPS as a function of the number, w, of weeks before the GPS release date.

a) What are the domain and range, and what do they mean in this situation?

I put: domain: {x | x <= 1, x ∈ R}, range: {y | y <= 15, y ∈ R}. Can you tell me if my answer is right and what they mean in the situation?

b) Identify the transformations represented by the function compared to y = sqrt(w).

c) Determine the number of pre-orders the manufacturer can expect to have 10 weeks before the release date.

3. Consider the function f(x) = 2(x-1)^2 - 3.

a) Determine the square root of the function. I put: g(x) = sqrt(2(x-1)^2 - 3) <-- tell me if the square root is right. They also told me to graph both equations, and I can't seem to find the x-intercepts for the first function, if anyone can help.

c) Describe the relationship between the domain and range of f(x) and the domain and range of g(x).

(I know they seem a lot, but I left most stuff out, and the stuff I understood I left out. These questions are all too hard for me to do; if anyone can help, it's test preparation and not cheating for homework. Thanks if you help.)

2. ## Re: few questions about radical functions

1.) We are given: $s(T)=331.3\sqrt{1+\frac{T}{273.15}}$ To find the domain, we set: $0\le1+\frac{T}{273.15}$ Now solve for $T$. Since the function is increasing as T increases, you will find the lower limit of the range by evaluating the function at the smallest value of T, and the upper range is unbounded. Once we get this problem done and understood, I will move on to the next. I like to take them one at a time, and also solving this one will help you with the next one.

3. ## Re: few questions about radical functions

Originally Posted by MarkFL2 (quoted above)

Okay, so I got the domain right, and as for the range, I got {y | y >= 0, y ∈ R}... so it has to be above 0 because it's a square root. Did I get it right?

4. ## Re: few questions about radical functions

A square root can be equal to zero, so your range is [0,∞). This is what you wrote using set-builder notation, but then you state it has to be above zero, so I just wanted to make it clear that it can include zero. I recommend using the variables given instead of converting the independent variable to x and the dependent variable to y.
For 2a, your domain and range would be correct if there were no other restrictions on I(w), but can we have a negative number of pre-orders?

5. ## Re: few questions about radical functions

Originally Posted by MarkFL2 (quoted above)

No, I put ">=", which includes 0 too, I guess. And no negative number of pre-orders. I just need an answer sheet with explanations for the last few remaining questions. I don't know if you can do them for me, but I showed what I did on my own, and I gotta go to bed now, it's like 2 am. Could you do the questions that are left, and I'll check in the morning? Thank you for the help so far, and thanks if you help with the rest. See you later, mate. (Edit: no, we can't have negative pre-orders.)

6. ## Re: few questions about radical functions

Actually, your domain is not right for 2a); you should have $w\le-1$. I cannot just finish your homework for you, but I am glad to help guide you.

7. ## Re: few questions about radical functions

Alright, well, thanks anyway. Gotta go.

8. ## Re: few questions about radical functions

That sounds fine.
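The thread ends before 2c) is worked out. As a hedged numeric sketch (my own reading, not from the thread: taking "10 weeks before release" to mean w = -10, which is consistent with the domain w <= -1 given in post 6):

```python
# Hedged check of problems 1a) and 2c); the reading w = -10 for
# "10 weeks before release" is my own assumption, not from the thread.
import math

# Problem 1: s(T) = 331.3*sqrt(1 + T/273.15) needs 1 + T/273.15 >= 0,
# i.e. T >= -273.15 (absolute zero), with range s >= 0.
T_min = -273.15
print(331.3 * math.sqrt(1 + T_min / 273.15))  # 0.0, the bottom of the range

# Problem 2c: I(w) = -3*sqrt(-w - 1) + 15, with domain w <= -1.
def I(w):
    return -3 * math.sqrt(-w - 1) + 15

print(I(-10))  # -3*sqrt(9) + 15 = 6, i.e. about 6000 pre-orders
```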
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 7, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9262469410896301, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/60743?sort=newest
## Tameness for the Galois closure of a map of curves

Say we are working over some $K=\overline{K}$, of characteristic $p>0$. Let $\phi: Y\rightarrow X$ be a nonconstant map of smooth projective curves. To this map we can associate a map $\psi: Z\rightarrow X$, where on the level of fields this is the Galois closure of $k(X)\subseteq k(Y)$. I would like to know about the tameness of this map. Let $e_P$ denote the ramification indices (with the maps understood to be either $\psi$ or $\phi$ depending on where $P$ lives). Now obviously if $p\mid e_P$ and if $Q$ lies above $P$, then $p\mid e_Q$ as well, so $\psi$ has wild ramification at $Q$. I am wondering when we can ensure this map is (everywhere) tamely ramified. For instance, if $d=\deg(\phi) < p$, then the Galois closure of $k(Y)$ over $k(X)$ has degree dividing $d!$, and hence $\psi$ remains tame.

My question is this: Suppose we can show, for each $P\in Y$ such that $e_P \geq p$, that each point above $P$ is tamely ramified. Can we conclude that $\psi$ is (everywhere) tamely ramified? It seems to me that this isn't true, but I cannot produce a counterexample. It would be fortuitous if it were true, however. Any help is greatly appreciated. -

## 2 Answers

Look at Lemma 2.1.3 i.v) from Grothendieck and Murre, "The Tame Fundamental Group of a Formal Neighbourhood of a Divisor with Normal Crossings on a Scheme". It says that, given a tame field extension $L \supset K$, its Galois closure will again be tame. Here, tameness is just defined with respect to one valuation of $K$, but the proof should apply in your situation as well. -

That's certainly true, but the question was unclearly enough phrased that I thought that he was asking the converse: if the Galois closure of $L\supset K$ is tame over $L$, is then $L$ tame over $K$? I'll wait for OP's comment. – Lubin Apr 6 2011 at 20:12
I was asking what Holger has posted. The converse seems clear to me. Just a silly error though--you meant iv) rather than v), but yes, this is exactly the sort of result I was hoping for. – Randy Reddick Apr 6 2011 at 20:26
No, the converse, as I stated it, is not true. – Lubin Apr 6 2011 at 22:04
The converse is true, since if Q lies over S and S over P, where Q is a place of a Galois closure of L/K, S is a place of L and P of K, then e(Q|P)=e(Q|S)e(S|P), and hence if p does not divide e(Q|P) then p does not divide e(S|P) (p=char(K)). – Alexey Zaytsev Apr 6 2011 at 22:16
Sorry, it seems that we understand the converse in different senses. If the property is being tame only over L, then you are right. – Alexey Zaytsev Apr 6 2011 at 22:21

At first glance it should follow from Abhyankar's lemma (see "Algebraic Function Fields" by Stichtenoth, Theorem 3.9.1) and the fact that the Galois closure is the composite of all the different embeddings of L over K into a fixed algebraic closure of K (so each of them has the same properties of tame ramification). Then we just apply the lemma and get the result that $p=\operatorname{char}(K)$ does not divide $e_P$ for any place P in K. -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 32, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9441925287246704, "perplexity_flag": "head"}
http://unapologetic.wordpress.com/2010/07/01/absolute-continuity-i/
# The Unapologetic Mathematician

## Absolute Continuity I

We've shown that indefinite integrals are absolutely continuous, but today we're going to revise and extend this notion. But first, to review: we've said that a set function $\nu$ defined on the measurable sets of a measure space $(X,\mathcal{S},\mu)$ is absolutely continuous if for every $\epsilon>0$ there is a $\delta$ so that $\mu(E)<\delta$ implies that $\lvert\nu(E)\rvert<\epsilon$.

But now I want to change this definition. Given a measurable space $(X,\mathcal{S})$ and two signed measures $\mu$ and $\nu$ defined on $\mathcal{S}$, we say that $\nu$ is absolutely continuous with respect to $\mu$ — and write $\nu\ll\mu$ — if $\nu(E)=0$ for every measurable set $E$ for which $\lvert\mu\rvert(E)=0$. It still essentially says that $\nu$ is small whenever $\mu$ is small, but here we describe "smallness" of $\nu$ by $\nu$ itself, while we describe "smallness" of $\mu$ by its total variation $\lvert\mu\rvert$. This situation is apparently asymmetric, but only apparently: if $\mu$ and $\nu$ are signed measures, then the conditions

$\displaystyle\begin{aligned}&\nu\ll\mu\\&\nu^+\ll\mu\quad\mathrm{and}\quad\nu^-\ll\mu\\&\lvert\nu\rvert\ll\mu\end{aligned}$

are equivalent. Indeed, if $X=A\uplus B$ is a Hahn decomposition with respect to $\nu$ then whenever $\lvert\mu\rvert(E)=0$ we have both

$\displaystyle\begin{aligned}0\leq\lvert\mu\rvert(E\cap A)&\leq\lvert\mu\rvert(E)=0\\0\leq\lvert\mu\rvert(E\cap B)&\leq\lvert\mu\rvert(E)=0\end{aligned}$

Thus if the first condition holds we find

$\displaystyle\begin{aligned}\nu^+(E)=\nu(E\cap A)&=0\\\nu^-(E)=-\nu(E\cap B)&=0\end{aligned}$

and the second condition must hold as well. If the second condition holds we use the definition

$\displaystyle\lvert\nu\rvert(E)=\nu^+(E)+\nu^-(E)$

to show that the third must hold. And if the third holds, then we use the inequality

$\displaystyle0\leq\lvert\nu(E)\rvert\leq\lvert\nu\rvert(E)$

to show that the first must hold.

Now, just because smallness in $\nu$ can be equivalently expressed in terms of its total variation does not mean that smallness in $\mu$ can be equivalently expressed in terms of the signed measure itself. Indeed, consider the following two functions on the unit interval $[0,1]\subseteq\mathbb{R}$ with Lebesgue measure $\mu$:

$\displaystyle\begin{aligned}f_1(x)&=\chi_{\left[0,\frac{1}{2}\right]}-\chi_{\left(\frac{1}{2},1\right]}\\f_2(x)&=x\end{aligned}$

and define $\nu_i$ to be the indefinite integral of $f_i$. We can tell that the total variation $\lvert\nu_1\rvert$ is the Lebesgue measure $\mu$ itself, since $\lvert f_1\rvert=1$. Thus if $\lvert\nu_1\rvert(E)=0$ then we can easily calculate

$\displaystyle\nu_2(E)=\int\limits_Ex\,d\mu=\int x\chi_E\,d\mu=0$

and so $\nu_2\ll\nu_1$. However, it is not true that $\nu_2(E)=0$ for every measurable $E$ with $\nu_1(E)=0$. Indeed, $\nu_1(\left[0,1\right])=0$, and yet we calculate

$\displaystyle\nu_2(\left[0,1\right])=\int x\,d\mu\geq\int\frac{1}{2}\chi_{\left[\frac{1}{2},1\right]}\,d\mu=\frac{1}{4}$

By the way: it's tempting to say that this integral is actually equal to $\frac{1}{2}$, but remember that we only really know how to calculate integrals by taking limits of integrals of simple functions, and that's a bit more cumbersome than we really want to get into right now.

One first quick result about absolute continuity: if $\mu$ and $\nu$ are any two measures, then $\nu\ll\mu+\nu$.
Indeed, if $\mu(E)+\nu(E)=0$ then by the positivity of measures we must have both $\mu(E)=0$ and $\nu(E)=0$, the latter of which shows the absolute continuity we're after.

Posted by John Armstrong | Analysis, Measure Theory
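The post pointedly declines to evaluate $\int x\,d\mu$ via simple functions. Purely as an informal sanity check (my own addition, relying on the standard fact that the Lebesgue integral agrees with the Riemann integral for these integrands), sympy confirms the numbers used above:

```python
# Informal sanity check of the nu_1, nu_2 example above; sympy's Riemann-style
# integration agrees with the Lebesgue integral for these simple integrands.
import sympy as sp

x = sp.symbols('x')
half = sp.Rational(1, 2)

# nu_1([0,1]) = integral of f_1, which is +1 on [0,1/2] and -1 on (1/2,1]
nu1_total = sp.integrate(1, (x, 0, half)) + sp.integrate(-1, (x, half, 1))
print(nu1_total)  # 0, so nu_1([0,1]) = 0 even though |nu_1|([0,1]) = 1

# nu_2([0,1]) = integral of x over [0,1]
print(sp.integrate(x, (x, 0, 1)))  # 1/2, consistent with the bound >= 1/4
```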
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 57, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9489136338233948, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/153510/find-nth-term-given-the-equation-of-a-series/153520
# Find nth term given the equation of a series

Given $S_n=3(2^n+1)$, find the fourth term.

I believe I can find the first term, $a$, assuming n=1 is the beginning: $3(2^1+1)=9$. To find the fourth term, I have tried subtracting $S_4$ and $S_3$. This gives me 24. Is this mathematically sound? When I try to determine the common ratio from this, namely $24=ar^3$, I get $r=1.386722549$. However, when I try to find another term, say $t_3$, $S_3-S_2 \neq ar^2$. Therefore, I must be doing something wrong here; specifically, finding the common ratio from only one equation is probably incorrect. So:

1) Is 24 the fourth term?
2) If so, how do I find the common ratio from this?

## 4 Answers

$S_n-S_{n-1}=a_n$, right? You get $S_n$ by adding $a_n$ to $S_{n-1}$. So then:
$$a_n=S_n-S_{n-1}=3(2^n+1)-3(2^{n-1}+1)=3(2^n-2^{n-1})=3\cdot2^{n-1}$$
So
$$a_4=3\cdot 2^3=24$$
Yes, the method is perfectly fine. The issue you had stems from finding an incorrect value of $a$. Use the method of subtracting sums that you used in the first half of the question:
$$a=S_1-S_0=9-6=3$$
which is the correct value. -

The elements of any sequence $\,\{a_n\}\,$ whose partial sums $\,\{S_n\}\,$ are given can be easily evaluated as you did:
$$a_n=S_n-S_{n-1}$$
Your problem is that you're wrongly assuming the sequence is geometric... and it isn't, as you can readily check by evaluating
$$\frac{a_{n+1}}{a_1}=\frac{3(2^{n+1}+1)}{3(2^n+1)}=\frac{2^{n+1}+1}{2^n+1}$$
-

What he has listed are the partial sums, not the sequence itself, so those fractions don't represent ratios of successive terms. – Robert Mastragostino Jun 3 '12 at 23:11
Indeed, you have $S_{n+1}/S_n$. – anon Jun 3 '12 at 23:12
That's interesting. I assumed it was geometric because that was what we were learning. Thank you. – InQ Jun 3 '12 at 23:28

As mentioned above, there is no need to assume that the sequence has an arithmetic or geometric form, and therefore there may not be a common difference or common ratio. You correctly deduced the 1st term, and could find the nth term iteratively, if necessary. Using the same method that you did for $a_1$, calculate $S_2, S_3,$ and $S_4$ and use your formula $a_n=S_n-S_{n-1}$ to find the rest. -

You are correct: the partial sum $S_n=\sum_{k=1}^n a_k$ has $n$-th term $a_n=S_{n}-S_{n-1}$, because $S_n=S_{n-1}+a_n$ for all $n>0$. Your fourth term is correct at 24: $S_4-S_3=3(17)-3(9)=3(8)=24$. Now you are assuming it is a geometric sum, that is, $a_n=ar^n$ for some choice of $r$ and $a$ that work for all $n$. Note this usually doesn't work. In this case it actually doesn't work... (edited to fix: sum isn't geometric :*( ) -
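A quick numeric check of the bookkeeping above (hedged; the convention $S_0 = 3(2^0+1) = 6$ follows the first answer's computation $a = S_1 - S_0$):

```python
# Verify a_n = S_n - S_{n-1} for S_n = 3*(2**n + 1), as in the answers above.
def S(n):
    return 3 * (2 ** n + 1)

a = {n: S(n) - S(n - 1) for n in range(1, 6)}
print(a)  # {1: 3, 2: 6, 3: 12, 4: 24, 5: 48} -- so a_4 = 24, as claimed

# Note S(1) = 9 is NOT the first term once S_0 = 6 is taken into account,
# which is exactly the error the first answer points out.
print(S(1), S(1) - S(0))  # 9 3
```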
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 31, "mathjax_display_tex": 5, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9685177803039551, "perplexity_flag": "head"}
http://mathhelpforum.com/calculus/85729-area-under-curve-help-print.html
# area under curve help

• April 26th 2009, 07:40 AM Tweety
1 Attachment(s)

area under curve help

Quote: A and B are two points which lie on the curve C, with equation $y = -x^2 + 5x +6$. The diagram shows C and the line l passing through A and B.
(a) Calculate the gradient of C at the point where x=2. The line l passes through the point with coordinates (2, 3) and is parallel to the tangent to C at the point where x=2.
(b) Find an equation of l.
(c) Find the coordinates of A and B. The point D is the foot of the perpendicular from B on to the x-axis.
(d) Find the area of the region bounded by C, the x-axis, the y-axis and BD.
(e) Hence find the area of the shaded region.

The equation of l is $y = x+1$, and the coordinates are A (-1,0) and B (5,6). For question (d) I got the area = $\int_{0}^5 [-x^2 + 5x + 6] dx$ = $50\frac{5}{6}$, which is correct.

I am stuck on question (e). If I work out the whole area between point D and A, would that be $\int_{-1}^5 [-x^2 +5x+6 ] dx$? And then I subtract the area of the triangle and the little area above the triangle on the left-hand side of the y-axis?

For $\int_{-1}^5 [-x^2 +5x+6 ] dx$, the antiderivative is $(-\frac{x^3}{3} + \frac{5x^2}{2} + 6x )$, so I get $[ -\frac{125}{3} + \frac{125}{2} + 30]$ - $[\frac{1}{3} + \frac{5}{2} - 6]$ = 54.

$\int_{-1}^0 (-x^2+5x+6)dx$ = $3\frac{1}{6}$

Area of triangle = $6 \times 6 \times \frac{1}{2} = 18$

$18+ 3\frac{1}{6} = 21\frac{1}{6}$

$54 - 21\frac{1}{6} = 32\frac{5}{6}$

The correct answer is $33\frac{1}{3}$. Can someone show me where I have gone wrong, or what's the correct method? Thanks.

• April 26th 2009, 08:41 AM curvature

Quote: Originally Posted by Tweety (quoted above)

Let the "big function" minus the "small one", and integrate from 0 to 5. Area = $\int_{0}^5 ((-x^2+5x+6)-(x+1))dx$ = $\frac{100}{3} =33\frac{1}{3}$

• April 26th 2009, 08:53 AM Tweety
1 Attachment(s)

I don't understand how you got that, because although it gives you the right answer, isn't that just the curve-minus-line expression? How does that give you the shaded region, as that expression also includes the area that's not shaded?

• April 26th 2009, 08:55 AM Tweety

Actually, ignore my other post. I get what you did. Thanks.

• April 26th 2009, 09:16 AM curvature

Quote: Originally Posted by Tweety (quoted above)

Always let the "big function" minus the "small one", and integrate from the left to the right, and you get the area between two curves.
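For readers checking along, a short symbolic verification of the two integrals in this thread (my own addition, using sympy):

```python
# Check the two areas computed in the thread above with sympy.
import sympy as sp

x = sp.symbols('x')
curve = -x**2 + 5*x + 6
line = x + 1

print(sp.integrate(curve, (x, 0, 5)))         # 305/6 = 50 5/6, part (d)
print(sp.integrate(curve - line, (x, 0, 5)))  # 100/3 = 33 1/3, part (e)
```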
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 31, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9312573075294495, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/83881/a-book-in-topology/83896
## A book in topology

I will have to teach a topology course: it starts in point set topology and ends at the fundamental group of $S^1$. In the past I have used two different books:

• Elementary Topology. Textbook in Problems, by O.Ya. Viro, O.A. Ivanov, V.M. Kharlamov and N.Y. Netsvetaev.
• A First Course in Algebraic Topology by Czes Kosniowski

I like both of these books and my students hate both of them. So I am thinking, maybe I should choose another book this time. Any suggestions? -

I'm very fond of Munkres - Topology. It covers all the usual point set topology and some dimension theory. Although the second part of the book dealing with algebraic topology is not as good as other specialized books in AT, such as Hatcher's book (which is free to download on Hatcher's site). – Asaf Dec 19 2011 at 17:55
Can you provide some more details? What year / level / major are these students? What have they seen and not seen yet? – Qiaochu Yuan Dec 19 2011 at 20:29

## 11 Answers

A fairly streamlined book, although initially gentle, is Essential Topology by Crossley. It goes up to homotopy and homology. See also Celebrating Swansea University Authors to view Crossley talking about his book. -

Essential Topology looks good, but not suitable for me. Crossley does not think that the fundamental group could be the highest point in the course. Nevertheless, this is the best answer I have got so far. Thank you very much. – ε-δ Jan 9 2012 at 0:24

I'd recommend a combination: Topology by Munkres for the point set stuff, and Algebraic Topology by Hatcher for the algebraic topology. You get all the advantages of two more specialized textbooks, and since Hatcher's text is free, your students won't need to buy two textbooks. -

Additionally, further courses in algebraic topology can continue using Hatcher. It's nice to get used to his writing style early. – Zack Wolske Dec 19 2011 at 20:19
And for Alg. Top., you can couple Hatcher's textbook with Munkres' Elements of Algebraic Topology (since Hatcher takes a geometric approach and Munkres takes an algebraic one). – Chris Gerig Dec 19 2011 at 21:17

I am bound to recommend my book Topology and Groupoids (2006), Ronald Brown, available from amazon.com. An e-version is also available from www.kagi.com for £5. See my web page http://www.bangor.ac.uk/r.brown/topgpds.html for links to reviews. It takes a geometric approach and, at the same time, a categorical view; that is, there is an emphasis on constructing continuous functions. The approach to the fundamental group via groupoids goes a long way beyond a first course, but then the results go beyond other books, for example on the fundamental group(oid) of an orbit space, and a gluing theorem on homotopy equivalences. -

I should say that I chose the groupoid view in the first 1968 edition as it seemed to me more intuitive and more powerful. For example, to describe journeys between towns, you look at all journeys, without a special emphasis on return journeys. – Ronnie Brown Dec 19 2011 at 18:45

Introduction to Topology: Pure and Applied, by Colin Adams and Robert Franzosa. Immediately after proving that there is no retraction from the disk onto its circle boundary, they use degree theory to analyze sudden cardiac death.
There is a chapter on knots, a chapter on dynamical systems, sections on Nash equilibrium and digital topology, and a chapter on cosmology. -

It's a great book to introduce applied topology, although it stops just short of using groups. – J W Dec 19 2011 at 20:19

I'm fond of Wilson Sutherland's book Introduction to Metric and Topological Spaces. It covers topics such as completeness and compactness extremely well. In particular, the motivation of compactness is the best I've seen. (It doesn't do any algebraic topology, though.) I just taught a class using it, and it was generally well liked. -

+1: Sutherland is where I learned point-set – Yemon Choi Dec 19 2011 at 19:46
Thank you, the book seems to be very good. But what would you choose for $\pi_1(S^1)$ from it? – ε-δ Dec 20 2011 at 21:35
I don't know. I don't have a favourite book for the fundamental group. – Tom Leinster Dec 21 2011 at 19:10
I like a book with lots of examples of applications of major theorems. So as part of a course in analysis I used as a source R.P. Boas, A Primer of Real Functions, for lots of fun applications of the Baire category theorem; I see these as the main point of the theorem. It is difficult to find a book at this level which also does, in a basic and example-oriented way, the Hausdorff metric on compact subsets of $\mathbb{R}^n$; there is an argument that graduands in maths should have heard of this background to fractals and chaos theory. Students do find this fun. – Ronnie Brown Dec 30 2011 at 18:04

The notes from when I learned topology were eventually published as a UTX book called "A Taste of Topology" by Volker Runde. It starts with metric spaces but ends at the same place as your intended course. -

A point-set topology book that students seem to love is Topology without Tears by Sidney A. Morris. And it doesn't cost anything. -

Do you know what all this business is about having to get a password in order to print the book? I can see that the second batch of files[^1] is encrypted, but I can't see what stops me from printing the first batch. (I don't have a printer attached now, so I can't actually test this, but it looks perfectly ordinary. Even with an American printer, it looks like I could print it with no more trouble than funny margins.) [^1]: I only looked at the first file in each batch, trusting that the translations work the same way. – Toby Bartels Dec 31 2011 at 6:17
I actually don't know. I only know that in a course I was a TA for, all students used this book as their reference for point set topology, instead of the assigned text. I never tried printing it. – Michael Greinecker Dec 31 2011 at 6:40

Willard's General Topology is my favourite book on point-set topology (together with Bourbaki, but the latter is not suited as a course text for several reasons). It also defines the fundamental group, but doesn't really do anything with it. More geometric is Lee's Introduction to Topological Manifolds; it is also very student friendly. -

Willard's book is great, but probably too advanced for the students in question. – Michael Greinecker Dec 19 2011 at 20:23
I took the course from Willard and found it fine. The textbook is very efficient and encyclopaedic. Very much a point-set-topology-is-a-subject-in-its-own-right kind of outlook. It's not designed for a very general audience. But for students that have had a strong set theory or analysis course(s) beforehand, it's a great book.
– Ryan Budney Dec 19 2011 at 23:57

I'm assuming that the students are not familiar with point-set topology and it's the first course in topology for them. I'd recommend a combination of Munkres and Intuitive Topology by V. V. Prasolov. There will be a great deal of precision and intuition all together. -

From several points of view, i.e. group theory, computability and visualization, I suggest 3 books (based on my experience with students):

1. Topology and Groupoids, Prof Ronnie Brown. Chapters 1-4 are one of the best approaches to topology I have ever seen. The students learn the concepts fast, their theoretical language is honed, and their visualization skills improve. From chapter 5 on it provides one of the most modern theoretical treatments of topology and group theory and their inter-relationships. The exercises are superbly chosen and the examples are wonderful in pushing the theory forwards. Both the language and presentation are modern and allow much room for visualization and computational development.
2. Topology, Klaus Jänich. This book is excellent for visualization and at the same time gives a precise theoretical treatment of the subject.
3. Counter-examples in Topology, Author?? (book is not with me right now). Lots of weird spaces, really great to flex muscles for the topological bodybuilders.

I do not recommend Munkres. I work with both his books on manifolds and topology, and the students did not grasp much of the theory. The presentation is old and tired.

Dara -

Jänich is gorgeous - there is no way your students won't like it! – Peter Arndt Dec 20 2011 at 0:11
Jänich is great for revisiting topology, but I don't think it is a good book for learning the material the first time. It is far too chaotic and chatty, and one needs a lot of background to appreciate the connections he draws to other areas of mathematics. It also doesn't have enough theorems and proofs to immerse oneself in the new concepts. Given that topology is very terminology-intensive, this is a real problem. – Michael Greinecker Dec 20 2011 at 5:39
Counterexamples in Topology, a real classic, is by Lynn Steen and Arthur Seebach. – Toby Bartels Dec 31 2011 at 6:19

I am an undergraduate student. I think that when you begin to study a new subject it is better to start from books that are not too broad. For a basic course in topology, I recommend these books (based on my experience as a student):

1. J. Dugundji, Topology;
2. C. Kosniowski, A First Course in Algebraic Topology;
3. L.C. Kinsey, Topology of Surfaces. -

It is better to read the question before giving an answer :) – ε-δ Jan 9 2012 at 0:17
Yes, I did. Your question was more or less "I will have to teach a topology course [...] should I choose another book?" My answer was that you should not change your first choice. A wise choice, because Kosniowski's "A First Course in Algebraic Topology" is a user-friendly book to learn basic definitions and theorems about general topology, homotopy theory and the fundamental group. If your students hate that book, they will grow up. Take your pick! – Daniele Jan 9 2012 at 15:21
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 3, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9461765289306641, "perplexity_flag": "middle"}
http://mathhelpforum.com/geometry/107509-scalar-product-vectors-question.html
Thread:

1. Scalar product of vectors question

I've studied vectors but I can't do this question in a book. PQRS is a rhombus of side 4 units. K, L, M and N are the mid-points of PQ, QR, RS and SP, respectively. $\vec{SN}$ is a representative of vector u and $\vec{SM}$ a representative of vector v. Show that $\vec{SK}\cdot\vec{SL}$ = 5u.v + 16. The rhombus has P at the top left corner and S directly below. Q is slightly lower than P and R is directly below Q.

2. Originally Posted by Stuck Man (quoted above)

If you draw a picture, you should be able to see that $\vec{SK} = 2\mathbf{u} + \mathbf{v}$ and $\vec{SL} = \mathbf{u} + 2\mathbf{v}$. Also, $\mathbf{u.u} = \mathbf{v.v} = 2\times2 = 4$. Now you just have to work out $(2\mathbf{u} + \mathbf{v})\mathbf{.}(\mathbf{u} + 2\mathbf{v})$.

3. I had got the first part. I didn't realise that the magnitude of u is 2, and the same for v. I suppose the answer can also be 36. Thanks.

4. The scalar product of two vectors is the two magnitudes multiplied together and multiplied by the cos of the angle (<=180). Why does the question not expect the result to be multiplied by the cos of the unknown angle? So the answer should be 36 × cos of the angle.

5. Originally Posted by Stuck Man (quoted above)

I think you're confusing $\mathbf{u.u}$ with $\mathbf{u.v}$. When you take the dot product of a vector with itself, the angle is 0, so you just get the square of the length of the vector. But when the vectors are different, as in $\mathbf{u.v}$, that no longer applies. That is why the answer to the question leaves $\mathbf{u.v}$ as it is, without attempting to evaluate it. Unless you know the angle of the rhombus, you can't evaluate $\mathbf{u.v}$.
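For completeness, here is the expansion post 2 leaves to the reader, using only the facts quoted there ($\vec{SK} = 2\mathbf{u}+\mathbf{v}$, $\vec{SL} = \mathbf{u}+2\mathbf{v}$, and $\mathbf{u.u} = \mathbf{v.v} = 4$):

$(2\mathbf{u} + \mathbf{v})\cdot(\mathbf{u} + 2\mathbf{v}) = 2\,\mathbf{u}\cdot\mathbf{u} + 4\,\mathbf{u}\cdot\mathbf{v} + \mathbf{v}\cdot\mathbf{u} + 2\,\mathbf{v}\cdot\mathbf{v} = 5\,\mathbf{u}\cdot\mathbf{v} + 2(4) + 2(4) = 5\,\mathbf{u}\cdot\mathbf{v} + 16$

The middle step combines $4\,\mathbf{u}\cdot\mathbf{v} + \mathbf{v}\cdot\mathbf{u} = 5\,\mathbf{u}\cdot\mathbf{v}$, since the dot product is commutative; the unknown angle of the rhombus stays hidden inside $\mathbf{u}\cdot\mathbf{v}$, which is why it never needs to be evaluated.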
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 9, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.960550844669342, "perplexity_flag": "head"}
http://mathhelpforum.com/differential-equations/103757-first-ode-course-first-quiz.html
# Thread:

1. ## first ODE course first quiz

We had a quiz, and there were two versions so you didn't have the same one as your neighbor. I could do the other one, but the one I was assigned seemed much more difficult. Is there something I'm missing here? Because I can't seem to do this one. Find the general solution of $x\sin(x+y)y' + x\sin(x+y) = \cos(x+y)$

2. Looks exact to me.

3. Originally Posted by jbpellerin (quoted above)

Try a change of variable: put $u=x+y$, then

$\frac{du}{dx}=1+\frac{dy}{dx}$

so the equation becomes:

$x \sin(u)\, u'= \cos(u)$

CB
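CB's substitution makes the equation separable. A hedged sketch of one way to finish (my own working, not from the thread), valid where $\cos(x+y)\neq 0$ and $x\neq 0$:

$\tan u \,\frac{du}{dx} = \frac{1}{x} \implies \int \tan u \, du = \int \frac{dx}{x} \implies -\ln\lvert\cos u\rvert = \ln\lvert x\rvert + c \implies x\cos(x+y) = C$

As a check, differentiating $x\cos(x+y)=C$ implicitly gives $\cos(x+y) - x\sin(x+y)(1+y') = 0$, which rearranges to the original equation — and this agrees with the observation in post 2 that the equation is exact.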
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 5, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9948089718818665, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/tagged/faster-than-light?sort=active&pagesize=30
# Tagged Questions

The faster-than-light tag has no wiki summary.

### Superluminal particles with causality (2 answers, 96 views)
What kind of CLASSICAL theories would allow to true (non-apparent) superluminal particles (beyond speed of light, BSOL) agreeing with causality to exist? I mean, are causal superluminal classical ...

### Quantum entanglement and speed of light $c$ (0 answers, 56 views)
On the topic of quantum entanglement, Wikipedia states: Repeated experiments have verified that this works even when the measurements are performed more quickly than light could travel between the ...

### Mass in special relativity (3 answers, 77 views)
I have just got a query about how this equation works if its right. We have Newtonian Physics saying $F=ma$, According to the 'Mass in special relativity' the mass changes according to m= ...

### Why is this thought experiment flawed: A vast lever rotating faster than the speed of light [duplicate] (1 answer, 76 views)
If there were a vast lever floating in free space, a rigid body with length greater than the width of a galaxy, made of a hypothetical material that could endure unlimited internal stress, and this ...

### Rotate a long bar in space and get close to (or even beyond) the speed of light $c$ (7 answers, 920 views)
Imagine a bar spinning like a helicopter propeller, At $\omega$ rad/s because the extremes of the bar goes at speed $$V = \omega * r$$ then we can reach near $c$ (speed of light) applying some ...

### Why is it theoretically impossible to travel back through time? [duplicate] (0 answers, 25 views)
I know scientist have basically proved the fact that time travel to the future is possible, but persay if on wanted to travel back 30 years, why exactly would this be rendered impossible if we can go ...

### Can space expand with unlimited speed? (3 answers, 140 views)
At the beginning, right after the Big Bang, the universe was the size of a coin. One millionth of a second after the universe was the size of the Solar System (acording to ...

### Could the shadow move with faster-than-light speed? [duplicate] (1 answer, 82 views)
If I make a huge laser with a figure for shadow in front of the laser, and I shine it on to the moon, will I see the light from the laser AND the shadow moving the same speed? (I read somewhere the ...

### Status of experimental searches for tachyons? (0 answers, 84 views)
Now that the dust has settled on the 2011 superluminal neutrino debacle at OPERA, I'm interested in understanding the current status of experimental searches for neutrinos. Although the OPERA claim ...

### Neutrinos vs. Photons: Who wins the race across the galaxy? (1 answer, 445 views)
Inspired by the wording of this answer, a thought occurred to me. If a photon and a neutrino were to race along a significant stretch of our actual galaxy, which would win the race? Now, neutrinos ...

### How can neutrinos "beat light"? (3 answers, 536 views)
Article in the CERN newsletter "symmetry breaking" has the following statement: "Neutrinos are often the first particles to bring news of events in space to Earth, beating even light.". What does this ...

### Stuff can't go at the speed of light - in relation to what? [duplicate] (3 answers, 102 views)
We all know that stuff can't go faster than the speed of light - it's length becomes negative and all kinds of weird stuff happens. However, this is in relation to what? If two objects, each moving ...

### Quantum tunneling is faster than light travel? (2 answers, 139 views)
Quantum tunneling is faster than light travel? My reasoning is that the particle cannot be detected inside the tunnel so if it travels from A to B it must be instantly going from A to B, hence ...

### Can something travel faster than light if it has always been travelling faster than light? (3 answers, 312 views)
I know there are zillions of questions about faster than light travel, but please hear me out. According to special relativity, it is impossible to accelerate something to the speed of light. However, ...

### Does entanglement not immediately contradict the theory of special relativity? (3 answers, 388 views)
Does entanglement not immediately contradict the theory of special relativity? Why are people still so convinced nothing can travel faster than light when we are perfectly aware of something that ...

### EPR-type experiments and faster-than-light communication using interference effects as signaling mechanism (1 answer, 307 views)
I understand that faster-than-light communication is impossible when making single measurements, because the outcome of each measurement is random. However, shouldn't measurement on one side collapse ...

### Is there absolute proof that an object cannot exceed the speed of light? (3 answers, 154 views)
Have any known experiments ruled out travelling faster than the speed of light? Or is this just a widely accepted theory?

### Why is the universe so big? (3 answers, 2k views)
The Universe is approximately 13.7 billion years old. But yet it is 80 billion light years across. Isn't this a contradiction?

### Has anyone ever measured the one way speed of light perpendicular to the Earth at the Earth's surface? (2 answers, 158 views)
1 - Has anyone ever measured the one way speed of photons traveling perpendicular to the Earth at the Earth's surface? 2 - Given our current understanding of Physics is there any way both the upward ...

### Can a dot of light travel faster than the speed of light? [duplicate] (2 answers, 152 views)
Say I have a laser. If I spin the laser so that the beam sweeps in an arc along a very distant object, could that dot travel faster than the speed of light? In Diagram form:

### Special Relativity Second Postulate (5 answers, 420 views)
That the speed of light is constant for all inertial frames is the second postulate of special relativity but this does not means that nothing can travel faster than light. so is it possible the ...

### Special Relativity and $E = mc^2$ (3 answers, 505 views)
I read somewhere that $E=mc^2$ shows that if something was to travel faster than the speed of light then they would have infinite mass and would have used infinite energy. How does the equation show ...

### Exploiting the Heisenberg Uncertainty Principle as a means to communicate (1 answer, 115 views)
It seems as though I've come across a rather unusual conclusion that could either simply be a misinterpretation or a contradictory discovery. I seem to have found a way to utilize the Heisenberg ...

### How can a quasar be 29 billion light-years away from Earth if Big Bang happened only 13.8 billion years ago? (3 answers, 415 views)
I was reading through the Wikipedia article on Quasars and came across the fact that the most distant Quasar is 29 Billion Light years. This is what the article exactly says The highest redshift ...

### What if a faster-than-light particle is found? (2 answers, 237 views)
What will be the consequence (severe ones) on laws of physics if a particle that travels faster than light is discovered? I am looking for a more general answer so that a high school student would be ...

### Impulse travelling faster than light (0 answers, 72 views)
There have been conducted many experiments in which light impulses traveled faster than light like the one in Princeton in 2000. This phenomenon has something to do with quantum entanglement. Does ...

### Size of the Observable Universe [duplicate] (1 answer, 166 views)
I wanted to know what the observable universe is so I was thinking and I thought, it must be age of the universe times 2. Well I was wrong. I found on one website that it is 46B LY across in each ...

### What is the age of universe? [closed] (1 answer, 63 views)
As we know at the time of big bang as mentioned by the scientist the universe expanded faster than the speed of light. So does it mean that at that time all the particles present travelled in the time ...

### How the effect travel's? [duplicate] (1 answer, 44 views)
Let us assume that we placed lot's of ball touching each other in a hollow cylindrical tube, now if we push one ball at the end the ball at the other end move's instantly. So how do the information ...

### Relationship between Alcubierre drive space-time evolution and speed of gravity (1 answer, 221 views)
The top rated answer to this question about the Alcubierre drive asserts, "spacetime can dynamically evolve in a way which apparently violates special relativity," but according to the Wikipedia ...

### Double light speed (4 answers, 751 views)
Let's say we have 2 participles facing each other and traveling at speed of light Let's say I'm sitting on #1 participle so in my point of view #2 participle's speed is c+c=2c, double light speed? ...

### Faster than the speed of light and future travel [closed] (1 answer, 127 views)
I had read somewhere that if a person attains twice the speed of light, he can actually reach the future. But since the world belongs to one time and space, how will it change for one particular ...

### Is the speed of light the ultimate speed limit? [duplicate] (0 answers, 43 views)
As we all know nothing can go faster than the speed of light as mentioned by most of our pioneer's in physics. But as I was listening to one of the statements of Sir. Stephen Hawkins he stated that at ...

### Superluminal expansion of the early universe how is this possible? (1 answer, 227 views)
Is this a postulate? I get the expansion of the universe, the addition of discrete bits of space time between me and a distant galaxy, until very distant parts of the universe are moving relative to ...

### How does faster than light travel violate causality? (3 answers, 353 views)
Let's say I have two planets that are one hundred thousand lightyears away from each other. I and my immortal friend on the other planet want to communicate, with a strong laser and a tachyon ...

### Thought experiment regarding an object approaching a mirror (4 answers, 451 views)
Here's a thought experiment I came up with in class today when my mind drifted (I however highly doubt I'm the first to think about this since it is pretty rudimentary): Let's say superman ...

### Universal expansion faster than the speed of light (1 answer, 114 views)
If the universe is expanding faster than the speed of light, and force carrier particles move at the speed of light, wouldn't this cause infinite universal expansion? Since no forces would be acting ...

### Faster-than-light communication using Alcubierre warp drive metric around a single qubit? (2 answers, 705 views)
The Alcubierre warp drive metric has been criticized on the points of requiring a large amount of exotic matter with negative energy, and conditions deadly for human travellers inside the bubble. What ...

### Does quantum mechanics allow faster than light (FTL) travel? (2 answers, 304 views)
Let's suppose I initially have a particle with a nice and narrow wave function[1] (I will leave these unnormed): $$e^{-\frac{x^2}{a}}$$ where $a$ is some small number (to make it narrow). Let's also ...

### Is it possible for information to be transmitted faster than light by using a rigid pole? (12 answers, 2k views)
Is it possible for information (like 1 and 0s) to be transmitted faster than light? For instance, take a rigid pole of several AU in length. Now say you have a person on each end, and one of them ...

### Is the "How to break the speed of light" minute physics video wrong? (2 answers, 686 views)
I am referring to this video, on YouTube, by minutephysics, which has quite a lot of views. In the video it states that if you flick your wrist while pointing a laser that reaches the moon, that the ...

### Harold White's work on the Alcubierre warp drive (3 answers, 1k views)
I've read a bit on Harold White's recent work. (A paper on Nasa's site) I haven't been able to find any comments by people claiming to know anything about the physics involved. Is this really serious? ...

### Extended Rigid Bodies in Special Relativity (1 answer, 87 views)
I was reading Landau & Lifhsitz's Classical Field Theory and I noticed that they mention that an extended rigid body isn't "relativistically correct". For example, if you consider a rigid rod ...

### In superluminal phase velocities, what is it that is traveling faster than light? (4 answers, 676 views)
I understand that information cannot be transmitted at a velocity greater than speed of light. I think of this in terms of the radio broadcast: the station sends out carrier frequencies $\omega_c$ but ...

### Relatively faster than light [duplicate] (0 answers, 30 views)
Possible Duplicate: Travelling faster than the speed of light. If one spaceship travels in one direction at 3/4 of the speed of light and another spaceship travels in the opposition ...

### If neutrinos travel faster than light, how much lead time would we have over detecting supernovas? (3 answers, 91 views)
In light of the recent story that neutrinos travel faster than photons, I realize the news about this is sensationalistic and many tests still remain, but let's ASSUME neutrinos are eventually proven ...

### Einstein's special relativity beyond the speed of light (4 answers, 2k views)
Could someone with access to this paper which claims to have new transformations between frames with relative motion faster than light which are supposedly consistent with special-relativity, say what ...

### Could someone transmit a signal with equally-tuned Casimir plates across the quantum field? (0 answers, 200 views)
It seems, one could exploit the Casimir effect to send messages across arbitrarily-large distances with carefully-tuned Casimir plates. Obviously, relativity would preclude FTL information transfer, ...

### Will acceleration rate of expansion of space become faster than speed of light? (1 answer, 168 views)
From watching cosmology lectures, it seems that the space between galaxies is expanding at an accelerating rate, my question is since it is the space that is (acceleratingly expanding), the special ...

### Far stellar object going away from us faster than light [duplicate] (0 answers, 37 views)
Possible Duplicate: What does Brian Greene mean when he claims we wont be able to observe light from distant stars due to the universe's expansion? Since the galaxies are fleeing us faster ...
http://mathoverflow.net/questions/104957/curvature-of-curves-in-the-space-of-gaussians-measures
## curvature of curves in the space of gaussian measures

I have a sequence of symmetric positive definite matrices in $GL(n)$ (in my case, covariance matrices of some gaussians) and vectors in $\mathbb{R}^n$ (the means of these gaussians). I consider this sequence to be the discretization of a curve embedded in $GL(n)\times \mathbb{R}^n$. I would like to compute the embedding curvature of this curve at each of my sample points, using the Wasserstein geometry (see Wasserstein Geometry of Gaussian Measures), not information geometry (like The Riemannian Geometry of the Space of Positive-Definite Matrices ...).

I have some difficulties understanding the parameterization of the space of SPD matrices in the Wasserstein Geometry article above, and thus I am not able to compute the Levi-Civita connection and its projection on my curve with respect to the metric. Could anyone help me understand that (or anything else you feel I might not have understood - I feel I barely understand these things)? Does anyone know of relevant work? Thanks!

## 1 Answer

Your question is not specific enough about what you do not understand in the quoted paper; if you want help on this, you should at least explain what you understand and where the problem appears. Here is a little information about bibliography, which might help (or miss the point, I am not sure). For an introduction to optimal transport and Wasserstein spaces, you can have a look at Villani's books ("Topics on ..." is more elementary, but the beginning of "... Old and New" is not as difficult to read as the size of the book might lead you to think, and I like it a lot). A more concise introduction can also be found in a nice little book by Nicola Gigli, a version of which seems to be at http://math.unice.fr/~gigli/Site_2/Publications_files/users_guide%20-%20final.pdf (but I am not sure this is exactly the text I read). You should also know about Lott's paper "Some geometric calculations on Wasserstein space", Comm. Math. Phys. 277, p. 423-437, which computes the curvature of the Wasserstein space of a manifold. Concerning the notion of curvature of a discretized curve, you might be interested in the concept of Menger's curvature, which applies in a very broad context.

- I read Villani's Topics, and parts of his "Old and New", and Lott's paper. What I don't understand here is specific to matrices and is not addressed in the refs above: what parameterization is used to represent the matrices? What is the basis to represent the tangent space (i.e., symmetric matrices)? To compute the Christoffel symbols of the Levi-Civita connection, I need to be able to differentiate the metric along the vectors of a local basis... but which one? Can I just take, for example for a 2x2 matrix, the basis consisting of the 3 matrices e0=[1 0; 0 0], e1=[0 1; 1 0], e2=[0 0; 0 1]? – WhitAngl Aug 18 at 19:33
- I do not know the details of Takatsu's paper; I think you should ask a more precise question about this parametrization. Are you familiar with Lie groups? If not, that is probably the problem. Then a good read is chapter 0 of Knapp's "Lie groups, beyond an introduction". – Benoît Kloeckner Aug 18 at 20:57
- I am not very comfortable with Lie groups, so I'll check this book. Thanks! :) – WhitAngl Aug 19 at 6:20
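As a concrete illustration of the Menger curvature suggestion at the end of the answer (an editorial sketch, not from the thread): for three consecutive sample points, the Menger curvature is $c(p,q,r) = \frac{4\,\mathrm{Area}(p,q,r)}{|pq|\,|qr|\,|rp|}$, the reciprocal of the circumradius of the triangle $pqr$. Below is a minimal Haskell sketch assuming each sample (covariance matrix plus mean) has been flattened into a coordinate vector; the Euclidean norm is a placeholder, and for the question's setting one would substitute the Wasserstein distance between the corresponding Gaussians.

```
type Point = [Double]

sub :: Point -> Point -> Point
sub = zipWith (-)

-- Placeholder metric: flat Euclidean norm on the chosen coordinates.
norm :: Point -> Double
norm = sqrt . sum . map (^ 2)

-- Menger curvature of three points, with the triangle area from Heron's formula.
mengerCurvature :: Point -> Point -> Point -> Double
mengerCurvature p q r = 4 * area / (a * b * c)
  where
    a = norm (sub p q)
    b = norm (sub q r)
    c = norm (sub r p)
    s = (a + b + c) / 2
    area = sqrt (max 0 (s * (s - a) * (s - b) * (s - c)))

-- Curvature estimate at each interior point of a sampled curve.
curveCurvatures :: [Point] -> [Double]
curveCurvatures ps = zipWith3 mengerCurvature ps (drop 1 ps) (drop 2 ps)
```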
http://stochastix.wordpress.com/tag/folding/
# Rod Carvalho

## Posts Tagged 'Folding'

### Function composition via folding

December 2, 2012

Let us suppose that we are given a list of functions. We would like to compose the functions in this list (if they are composable, of course) in the order in which they are arranged in the list, i.e., we would like to create a higher-order function `compose` such that

$\text{compose} \,\, [f_0, f_1, \dots, f_{n-1}] = f_0 \circ f_1 \circ \dots \circ f_{n-1}$

Here is one of many possible implementations in Haskell:

```
compose :: [(a -> a)] -> (a -> a)
compose = foldr (.) (\x -> x)
```

where we use a right-fold to compute the composition. Note that the identity function is defined anonymously. What happens if we apply `compose` to the empty list? We should obtain the identity function. Let us see if this is the case:

```
compose [] = foldr (.) (\x -> x) [] = (\x -> x)
```

Indeed, it is the case: if we apply `compose` to the empty list, we obtain the identity function. What is the type of this identity function? We load the two-line script above into GHCi and experiment a bit:

```
*Main> let id = compose []
*Main> :t id
id :: a -> a
*Main> map id [0..9]
[0,1,2,3,4,5,6,7,8,9]
*Main> map id ['a'..'z']
"abcdefghijklmnopqrstuvwxyz"
```

We thus obtain a polymorphic identity function that merely outputs its input, regardless of the type of the input. So far, so good. What happens if we apply `compose` to a non-empty list? Here is some more equational reasoning:

```
compose [f0,f1,f2] = foldr (.) (\x -> x) [f0,f1,f2]
                   = foldr (.) (\x -> x) (f0 : f1 : f2 : [])
                   = f0 . (f1 . (f2 . (\x -> x)))
```

A couple of days ago, we saw how to compose a function with itself $n$ times. This is a special case of composing a list of functions, of course. Using the function `replicate`, we can compose a function `f` with itself $n$ times in the following manner:

```
compose (replicate n f) = foldr (.) (\x -> x) (replicate n f)
```

as suggested by Reid Barton. Let us now repeat the experiment we performed a couple of days ago, but using the new `compose`:

```
*Main> let f = (\n -> (n+1) `mod` 64)
*Main> let h = compose (replicate 64 f)
*Main> map h [0..63]
[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,
 16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,
 32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,
 48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63]
```

In the next posts, we will do some interesting things with this `compose`.

Tags: Folding, Function Composition, Haskell, Higher-Order Functions
Posted in Haskell | 3 Comments »
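An addendum to the post above (editorial, not from the original): because `(.)` is associative and the anonymous identity is its unit, a left fold defines the very same function:

```
compose' :: [(a -> a)] -> (a -> a)
compose' = foldl (.) (\x -> x)
```

Indeed, `foldl (.) (\x -> x) [f0,f1,f2]` evaluates to `(((\x -> x) . f0) . f1) . f2`, which equals `f0 . (f1 . (f2 . (\x -> x)))` by associativity. The right fold is the more natural choice here only because it mirrors the nested composition derived in the post.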
http://quant.stackexchange.com/questions/3297/application-of-lie-groups-in-finance/3793
# application of lie groups in finance

Can someone kindly go over some of the applications and uses of Lie groups in finance? The math is very rigorous and I don't fully understand it or the potential it could have. Let me share some examples with you; I'm sure there are many more. There is even a financial math graduate program in Europe with required course work that teaches Lie groups.

http://www.sciencedirect.com/science/article/pii/S0096300308009156

http://math.sut.ac.th/school/faculty/sergey/filespublic/2006/SrihirunMeleshkoSchultz2006_I.pdf

http://www.phy.cuhk.edu.hk/~cflo/Finance/papers/publications/lie_QF.pdf

- 3 Could you please explain what Lie groups are and why you think they're applicable to finance (suggest areas of application)? This would make a much better question and it would be much more useful for other users of the site. – SRKX♦ Apr 20 '12 at 6:21
- 2 Please provide a link to a finance article that makes explicit use of Lie group theory – Alexey Kalmykov Apr 20 '12 at 10:22
- 3 Where did you see this? I've never heard of Lie groups applied to finance. – chrisaycock♦ Apr 20 '12 at 12:49
- 2 Well, we do use $\mathbb{R}^n$ all the time. – Brian B Apr 20 '12 at 15:49
- 3 I've updated the question with some papers – pyCthon Apr 21 '12 at 18:38

## 1 Answer

I didn't read the papers you linked, but I can understand that Lie groups may be used much as they are used in quantum field theories to build up gauge theories for the interaction of particles. The purpose is to have a model that is invariant under a given transformation group. This introduces interaction terms in the equations. I can imagine that the same technique can be used in financial stochastic processes to force the introduction of new terms in the equations.
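To make the answer's invariance idea concrete (an editorial example in the spirit of the symmetry analyses in the papers linked above, not a claim about their contents): the Black-Scholes equation

$$\frac{\partial u}{\partial t} + \frac{1}{2}\sigma^2 S^2 \frac{\partial^2 u}{\partial S^2} + r S \frac{\partial u}{\partial S} - r u = 0$$

is invariant under the dilation $S \mapsto \lambda S$, generated by the vector field $S\,\partial_S$: if $u(S,t)$ is a solution, so is $u(\lambda S, t)$ for every $\lambda > 0$, since the coefficients involve $S$ only through the combinations $S\,\partial_S$ and $S^2\,\partial_S^2$. Lie symmetry analysis systematizes the search for all such one-parameter groups and uses them to map known solutions to new ones.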
http://mathoverflow.net/questions/58009/non-locally-trivial-an-bundles/58011
## non-locally trivial A^n bundles

Let $f: X \to Y$ be a morphism of varieties such that its fibres are isomorphic to $\mathbb{A}^n$. Since the definition of a vector bundle stipulates that $f$ be locally the projection $U \times \mathbb{A}^n \to U$, it is likely that there exist morphisms that are not locally of that form, but I can't come up with an example. So the question is: what is an example of a morphism with fibres $\mathbb{A}^n$ that is not locally trivial? not locally isotrivial? UPDATE: what if one assumes vector space structure on the fibres?

- 2 Most of the non-locally trivial vector bundles I've seen show up are not locally trivial for 'another' reason: the rank is not constant. One gets them as kernels of maps between bundles: for example, the map $(x,v)\in\mathbb C\times\mathbb X\to(x,xv)\in\mathbb C\times\mathbb X$, viewed as an endomorphism of the trivial line bundle on $\mathbb C$, has a non-locally trivial kernel. – Mariano Suárez-Alvarez Mar 9 2011 at 23:12
- (The $\mathbb X$s should be $\mathbb C$s...) – Mariano Suárez-Alvarez Mar 9 2011 at 23:13
- 4 @Mariano, this is a little confusing, since no matter how you look at it, I'm pretty sure that a vector bundle should have locally constant rank. The example you give corresponds to an injective map of locally free sheaves which is not an inclusion of vector bundles, precisely because the kernel is not a vector bundle. (But it's a nice example for understanding that confusing notion!) – Dave Anderson Mar 10 2011 at 5:10
- 3 In the definition of vector bundle it's not only required that the total space locally looks like $U\times \mathbb{A}^n$, but also that the resulting transition functions be linear. The object you are interested in in your question (I think) would be called an $\mathbb{A}^n$-bundle, where $\mathbb{A}^n$ is thought of as a variety, not a vector space. [hence my edit in the title; feel free to restore the previous version if you prefer!] – Qfwfq Mar 10 2011 at 19:36
- 1 In the absence of local triviality, I'm not sure it makes sense to ask about vector space structure on fibers. As @unknown points out, the definition of a vector bundle requires that the gluing maps be linear isomorphisms on fibers, but this makes sense only when you have a local trivialization. Without this requirement, you can of course put any vector space structure you like on each ${\Bbb A}^n$ fiber. – Dave Anderson Mar 10 2011 at 21:34

## 4 Answers

In Jack's example the fiber is not scheme-theoretically $\mathbb A^1$. You can get a counterexample by taking $Y$ to be a nodal curve, $Y'$ its normalization, with one of the two points in the inverse image of the node removed, and $X = Y' \times \mathbb A^1$.

If we assume that the map is smooth, this becomes quite subtle. It is false in positive characteristic. Let $k$ be a field of characteristic $p > 0$. Take $Y = \mathbb A^1 = \mathop{\rm Spec}k[t]$, $Y' = \mathop{\rm Spec} k[t,x]/(x^p - t)$. Of course $Y' \simeq \mathbb A^1$, but the natural map $Y' \to Y$ is an inseparable homeomorphism. Now embed $Y'$ in $\mathbb P^1 \times Y$ over $Y$, and take $X$ to be the difference $(\mathbb P^1 \times Y) \smallsetminus Y'$.

On the other hand, it is not so hard to show that in characteristic 0 the answer is positive for $n = 1$ (if $Y$ is reduced), and I believe it is known to be true for $n = 2$. The general case seems extremely hard.

I am afraid that Sasha's argument does not work; if the fiber does not have a vector space structure, there is no reason that choosing points gives you a trivialization.

The question has been updated with "what if one assumes vector space structure on the fibres?" Well, $\mathbb A^n$ can always be given a vector space structure. In my first example, the fibers are canonically isomorphic to $\mathbb A^1$, so they have a natural vector space structure. However, if the map $X \to Y$ is smooth, and the vector space structure is allowed to "vary algebraically", that is, if the zero section $Y \to X$ is a regular function, the addition gives a regular function $X \times_Y X \to X$, and scalar multiplication gives a regular function $\mathbb A^1 \times X \to X$, then $X$ is in fact a vector bundle. The proof uses some machinery: one uses smoothness to construct bases locally in the étale topology, showing that $X$ is étale locally trivial over $Y$, and descent theory to show that in fact $X$ is Zariski locally trivial.

I think such a map is locally trivial if and only if it is smooth. The "only if" part is clear. So, assume $f$ is smooth. Take any point $y_0 \in Y$ and $n+1$ affine-linearly independent points in the fiber over $y_0$. Then choose local sections through these points (this is where you need smoothness), denote them by $x_0,\dots,x_n$. Let $U$ be the set of $y \in Y$ over which the $x_i$ are affine-linearly independent. Then we have a map $f^{-1}(U) \to A^n$, taking a point $x$ to $(t_0,t_1,\dots,t_n) \in \{ \sum t_i = 1 \} \subset A^{n+1}$ such that $x = \sum t_ix_i$. This gives a local trivialization.

Does your definition of varieties allow them to be disconnected? If so, let $Y$ be any variety, $P\in Y$ a closed point, $U$ the complement of $P$, $X_1$ a vector bundle over $U$, $X_2$ a vector bundle over $P$, and $X=X_1\cup X_2$.

- I meant $X$ and $Y$ to be irreducible – Dima Sustretov Mar 10 2011 at 11:01

EDIT: See the comments for why this isn't a good example. Angelo gives a similar example that actually works. Suppose that $Y=C$ is a cuspidal curve, and let $\tilde C\to C$ be the normalization. Put $X = \mathbb{A}^1\times \tilde C$, and let $X\to Y$ be the obvious map. The fibers are all $\mathbb{A}^1$'s, but over the singularity in $Y$ the map cannot locally be the projection. If the map $f$ is a submersion and $Y$ is smooth then I think things should work out, at least in characteristic zero.

- 3 But here the fiber over the origin is equal to two copies of ${\mathbb A}^1$, not one. – Steven Landsburg Mar 10 2011 at 7:25
- More precisely, the fiber over the origin is a nonreduced scheme of multiplicity 2, whose reduction is A^1. (He said cuspidal, not nodal.) – David Speyer Mar 10 2011 at 13:01
- Ah, yes of course. This is a bad example. – Jack Huizenga Mar 10 2011 at 14:11
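One detail worth spelling out in the characteristic-$p$ example above (an editorial gloss, assuming $k$ algebraically closed): over a closed point $t = a$ of $Y$, the curve $Y'$ meets the fiber $\mathbb P^1$ in the single point $x = a^{1/p}$, since Frobenius is a bijection on closed points; so every fiber of $X \to Y$ is $\mathbb P^1$ minus one point, hence isomorphic to $\mathbb A^1$. What fails is local triviality: the missing point $t \mapsto t^{1/p}$ does not vary as a regular function of $t$.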
http://quant.stackexchange.com/questions/4568/what-is-the-canonical-reference-for-minimum-variance-portfolios-uniqueness?answertab=active
What is the canonical reference for Minimum Variance Portfolio's uniqueness?

I am writing a white paper in which I am trying to compare a strategy to different well-known - and classic - asset allocation optimization approaches. One of the methods I chose is the minimum variance portfolio $w_\text{MV}$, defined as follows: $$w_\text{MV} = \underset{w}{\arg \min} ~ w' \Sigma w$$ where $\Sigma$ is the covariance matrix of the assets, under the linear constraints $Aw \leq b$ and $E w = d$. I have always heard that the MV portfolio was unique, and I know that this problem is linked to quadratic programming, which I believe guarantees a unique solution as long as $\Sigma$ is positive-definite. I wanted to add a reference to another paper where this uniqueness was discussed (proved), and I found several written quite recently. However, I was wondering if there was one paper that was more famously known for discussing that particular property?

- You should look at papdog's comment. When I read your question, my first thought was "Merton, has to be Merton..." At least on the econ side, he's the big name attached to early portfolio choice research. – Nate Jan 17 at 1:54

3 Answers

For academic references, you will likely have to look in the very early optimization literature. Uniqueness of the MV portfolio follows immediately from the lemma that a strictly convex function on a convex set has at most one minimum. The standard textbook reference is Convex Optimization by Boyd and Vandenberghe. See section 4.2.2 in particular. A free online copy is available at stanford.edu/~boyd/cvxbook/bv_cvxbook.pdf.

- Actually I don't think that's true. Uniqueness of a minimal variance is guaranteed by convexity, but nothing says that there might not be several portfolios giving the same minimal value. – SRKX♦ Jan 13 at 13:32
- That's not correct; if the covariance matrix is strictly positive definite then the variance is a strictly convex function of the portfolio weights, so it can only have one minimum on the (convex) constraint set. If you had two distinct min-var portfolio weights, then all linear combinations of those two would also have the same minimum variance, contradicting strict convexity. – Marc Shivers Jan 13 at 13:41
- Yes, but the covariance matrix is not strictly positive definite, it is positive semi-definite, isn't it? – SRKX♦ Jan 13 at 14:35
- If you could add references to what you just said it would be great, by the way (not to say it's wrong, but it was the base of my question). – SRKX♦ Jan 13 at 14:37
- If your covariance matrix is only positive semi-definite, that would mean there are non-zero portfolio weights so that the variance of that portfolio is zero. I don't know what kind of universe of securities you're considering, but in the equity space that doesn't usually happen. – Marc Shivers Jan 13 at 15:33

This article by Eric Falkenstein is exactly what you are looking for: Early Low Vol Literature Now Everywhere. EDIT: Falkenstein has a new post out on the academic origins of the approach: Here

- I had a look at the blog posts, but none of them really mention the discussion of the uniqueness of the MV portfolio, do they? This is really the main point of my question in fact... – SRKX♦ Jan 11 at 10:03
- No, not in the posts themselves, but I thought in the papers that are referenced there. I will check on that... – vonjd Jan 15 at 9:44

If short selling is allowed, I remember there's a unique analytical solution; otherwise it has to be solved numerically. Is your approach different? IMHO the issue of the min variance approach is really not how to solve this constrained optimization problem, but how to estimate the asset returns and var/covar matrix accurately.

- 2 Well, I was looking for a reference... – SRKX♦ Nov 20 '12 at 13:24
- 2 Journal of Financial and Quantitative Analysis, September 1972: "An Analytic Derivation of the Efficient Portfolio Frontier", Robert C. Merton. In this paper, the efficient portfolio frontiers are derived explicitly, and the characteristics claimed for these frontiers are verified. – papdog Nov 26 '12 at 1:37
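For reference, the closed form alluded to in the last answer (standard material, of the kind derived in the Merton paper cited in the comments): when short sales are allowed and the only constraint is the budget constraint $\mathbf{1}'w = 1$, the Lagrangian first-order condition $2\Sigma w = \lambda \mathbf{1}$ yields

$$w_\text{MV} = \frac{\Sigma^{-1}\mathbf{1}}{\mathbf{1}'\Sigma^{-1}\mathbf{1}},$$

which is well defined and unique whenever $\Sigma$ is strictly positive definite, in line with the strict-convexity argument above.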
http://math.stackexchange.com/questions/tagged/integral
Tagged Questions

Questions on the evaluation of definite and indefinite integrals.

### Polynomials, integrals convergence (0 answers, 7 views)
Let $P_n(x)= \frac{x^n(bx -a)^n}{n!}$, $a,b,n \in \mathbb{N}$. Prove that $\int_0^{\pi}P_n(x) \sin x\,dx \rightarrow 0$ and $\int_0^r P_n(x)e^x\,dx \rightarrow 0$ $(n ...$

### Flow of $rot \overrightarrow{F}$ (1 answer, 37 views)
We've got vector field: $\overrightarrow{F} = \begin{bmatrix} yz\\x^3z\\e^z\end{bmatrix}$. I want to compute the flow of $rot\overrightarrow{F}$ ($=\operatorname{curl} \overrightarrow{F}$) through the area of the side ...

### real analysis and integral (2 answers, 48 views)
Let $f$ be a continuous function on $[a, b]$ satisfying $$\int_a^b f(x)g^\prime(x)\,\mathrm{d}x = 0$$ whenever $g$ is a continuously differentiable function on $[a, b]$ satisfying $g(a) = g(b) = 0$. ...

### Problem involving the computation of the following integral (2 answers, 22 views)
I was solving the past exam papers and stuck on the following problem: Compute the integral $\displaystyle \oint_{C_1(0)} {e^{1/z}\over z} dz$, where $C_1(0)$ is the circle of radius $1$ around ...

### Checking $\displaystyle \int_{0}^{\infty} {\frac{\sin^2x}{\sqrt[3]{x^7 + 1}} dx}$ for convergence (2 answers, 33 views)
Given $\displaystyle\int_{0}^{\infty} {\frac{\sin^2x}{\sqrt[3]{x^7 + 1}} dx}$, prove that it converges. So of course, I said: We have to calculate $\displaystyle \lim_{b \to \infty} ...$

### $0$-th moment of product of gaussian and sinc function (1 answer, 22 views)
I would like to calculate the following integrals: $$\int_{-\infty}^{+\infty} \left(\frac{\sin(\pi a x)}{\pi ax}\right)^2 \exp(-bx^2)\,dx$$ $$\int_{-\infty}^{+\infty} ...$$

### Computation of $\int_0^{\pi} \frac{\sin^n \theta}{(1+x^2-2x \cdot \cos \theta)^{\frac{n}{2}}} \, d\theta$ (2 answers, 68 views)
Show that for all $x \in [-1,1]$: $\int_0^{\pi} \frac{\sin^n \theta}{(1+x^2-2x \cdot \cos \theta)^{\frac{n}{2}}} \, d\theta = c_n$ (1), and $\int_0^{\pi} \frac{\sin^{n+2} ...$

### A little help integrating this torus? (0 answers, 31 views)
Let $\mathbf{F}\colon \mathbb{R}^3 \rightarrow \mathbb{R}^3$ be given by $$\mathbf{F}(x,y,z)=(x,y,z).$$ Evaluate $$\iint\limits_S \mathbf{F}\cdot dS$$ where $S$ is the surface of the torus ...

### How to show that these integrals converge? (2 answers, 45 views)
What test do I use to show that the following integral converges? If you could provide me with the process that leads to the answer, that would really help. $\displaystyle ...$

### can a line integral of domain $C$ be negative when $C$ is the boundary of the region in the upper half plane? ($y>0$) (1 answer, 21 views)
ok so here is the question: evaluate $\oint \left(3y^2+e^{\cos x}\,dx\right) + \left(\sin y+5x^2\,dy\right)$, where $C$ is the boundary of the region in the upper half-plane ($y\ge 0$) between the ...

### Evaluate the following contour integral (1 answer, 26 views)
I was solving old exam papers and I am stuck on the following question: Evaluate the contour integral $\displaystyle \oint_{C} \frac{dz}{(\bar z-1)^2}$ where $C$ is the semi-circle $|z-1|=1$, $\Im ...$

### Integration $\int \left(x-\frac{1}{2x} \right)^2\,dx$ (1 answer, 40 views)
$$\int\!\left(x-\frac{1}{2x} \right)^2\,dx$$ From U-substitution, I got $u=x-\frac{1}{2x},\quad \dfrac{du}{dx} =1+ \frac{1}{2x^2}$, and $dx= 1+2x^2 du$, and in the end I come up with the answer to ...

### Integrate $\int {{{\left( {\cot x - \tan x} \right)}^2}dx}$ (3 answers, 44 views)
$\int \left( \cot x - \tan x \right)^2 dx = \int \left( \frac{\cos x}{\sin x} - \frac{\sin x}{\cos x} \right)^2 dx = \int \left( ...$

### Is the following differentiating under the integral sign correct? (1 answer, 50 views)
Suppose $\frac{\delta f[u]}{\delta u(x)}\equiv \frac{\partial f}{\partial u}-\frac{\partial }{\partial x}\frac{\partial f}{\partial u_x}+\left(\frac{\partial }{\partial x}\right)^2\frac{\partial ...$

### Evaluating the integral: $\int_{0}^{\infty} \frac{|2-2\cos(x)-x\sin(x)|}{x^4}~dx$ (2 answers, 41 views)
I am interested in evaluating the following integral: $$\int_{0}^{\infty} \frac{|2-2\cos(x)-x\sin(x)|}{x^4}~dx$$ Using Matlab, numerically it seems that the integral is convergent, but I'm not sure ...

### $k$-th moment of product of gaussian and sinc (1 answer, 34 views)
I would like to calculate the following integrals: $$\int_{-\infty}^{+\infty} x^k \left(\frac{\sin(\pi a x)}{\pi ax}\right)^2 \exp(-bx^2)\,dx$$ $$\int_{-\infty}^{+\infty} ...$$

### Upper and lower integration inequality (0 answers, 25 views)
I would like to learn how to prove that the following inequality holds. Let $F$ be a bounded function on an interval $[a,b]$, so that there exists $B\geq 0$ such that $|f(x)| \leq B$ for every $x\in ...$

### Stuck on a problem in multivariable calculus regarding flux help please! (0 answers, 25 views)
I need help on this problem please! I tried doing it but I have been stuck on it for a while... some tutors couldn't even help me with this one. Let $S$ be the portion of the sphere of radius $a$ given ...

### Finding an area of the portion of a plane? (3 answers, 42 views)
I need help with a problem I got in class today, any help would be appreciated! Find the area of the portion of the plane $6x+4y+3z=12$ that passes through the first octant where $x, ...$

### We are to evaluate the problem at the given limit using pi and radicals in our answer as needed. (1 answer, 26 views)
The Problem: $$\int\!\sin^5(4x)\,dx$$ The formula that I used from the integration tables is: $$\int\!\sin^n(u)\,du$$ My final answer is ...

### Exponential integral question (3 answers, 31 views)
How would I solve the following problem? $$f(x)=\int\!\frac{4}{\sqrt{e^x}}\,dx$$ Using $u$ substitution I have set $u=e^x$ and $du=e^x dx$, so would I have $$4\int\!\frac{du}{\sqrt{u}}$$ What would ...

### How to integrate $\cos\left(\sqrt{x^2 + y^2}\right)$ (1 answer, 80 views)
Could you help me solve this? $$\iint_{M}\!\cos\left(\sqrt{x^2+y^2}\right)\,dxdy;$$ $M: \frac{\pi^2}{4}\leq x^2+y^2\leq 4\pi^2$. I know that the region would look like this and I need to solve it as ...

### Integral of $\cot^2 x$? (3 answers, 80 views)
How do you find $\int \cot^2 x \, dx$? Please keep this at a calc AB level. Thanks!

### Integrate ${\sec 4x}$ (4 answers, 63 views)
How do I go about doing this? I try doing it by parts, but it seems to work out wrong: $\int \sec 4x\,dx$, $u = \sec 4x$, $\frac{du}{dx} = 4\sec 4x\tan 4x ...$

### How to determine a function of 2 variables from its derivative? (1 answer, 26 views)
Please, even the slightest advice would help! If I have a function $V$ made of 2 variables $x_1$ and $x_2$, and its derivative $\frac{dV}{dt} = \frac{dV}{dx_1}\frac{dx_1}{dt} + ...$

### Proof for an integral identity (3 answers, 54 views)
Is it true that $\int_0^A dx \int_0^B dy\, f(x) f(y) = 2 \int_0^A dx \int_0^x dy\, f(x) f(y)$? If so, can this be proved?

### What is the definite integral of… (4 answers, 47 views)
$$\int^L_{-L} x \sin(\frac{\pi nx}{L})$$ I've seen something like this in Fourier theory, but I'm still not sure how to approach this integral. Wolfram Alpha gives me the answer, but no method. ...

### How to prove that $\int_0^{\infty} \log^2(x) e^{-kx}dx = \dfrac{\pi^2}{6k} + \dfrac{(\gamma+ \ln(k))^2}{k}$? (1 answer, 82 views)
I was answering this question: $\int_0^\infty(\log x)^2(\mathrm{sech}\,x)^2\mathrm dx$, and in my answer I encountered the integral $$\int_0^{\infty} \log^2(x) e^{-kx}dx$$ which according to ...

### $\int_0^\infty(\log x)^2(\mathrm{sech}\,x)^2\mathrm dx$ (2 answers, 67 views)
Is there any closed-form representation for the following integral? $$\int_0^\infty(\log x)^2(\mathrm{sech}\,x)^2\mathrm dx,$$ where $\mathrm{sech}\,x$ is the hyperbolic secant, ...

### Describing Domain of Integration (Triple Integral) (2 answers, 43 views)
I'm really struggling to go about starting the following problem: This question concerns the integral $\int_{0}^{2}\int_0^{\sqrt{4-y^2}}\int_{\sqrt{x^2+y^2}}^{\sqrt{8-x^2-y^2}} z\ ...$

### Evaluate $\int x \cos x^2 dx$ (3 answers, 67 views)
Hey, I have hit this in my book: Evaluate $\int x \cos x^2 dx$. I got $x^2 \sin(x^2) / 2$. But I used an online calculator to check it and it is giving me $\sin(x^2)/2$. Where does my x go?

### Integrate by parts: $\int \ln (2x + 1) \, dx$ (4 answers, 98 views)
$\int \ln (2x + 1) \, dx$: $u = \ln (2x + 1)$, $v = x$, $\frac{du}{dx} = \frac{2}{2x + 1}$, $\frac{dv}{dx} = 1$, $\int \ln ...$

### Integral involving gaussian and triangle function (1 answer, 34 views)
I would like to calculate the following integral: $$\int_{-\infty}^{+\infty} \mathrm{tri}(x)\exp\left(-\frac{(x-x_0)^2}{a}\right)dx$$ Thanks!

### Convolution of $1/(1+x^2)$ and $\exp(-x^2/(4t))$ (2 answers, 49 views)
Is there a closed form formula for the convolution of $1/(1+x^2)$ and $\exp(-x^2/(4t))$, where $t>0$, i.e. the integral $\int_{-\infty}^\infty ...$

### An integral problem? (2 answers, 46 views)
How do you integrate $e^{e^x}$? I was able to get it down to $\frac{du}{\ln u}$ but I wasn't able to go further. Thanks!

### Evaluate this integral using Green's Theorem (1 answer, 60 views)
Let C be the boundary of the half-annulus $$1\leq(x^2+y^2)\leq4$$ where $$x\le0$$ in the xy plane, traversed in the positive direction. Evaluate: $\displaystyle \int_{C}(7\cosh^3(7x)-2y^3) ...$

### Integral of $\int^1_0\frac{dx}{\sqrt{x+3}-1}$ (2 answers, 37 views)
I want to solve this integral and need some directions. $$\int^1_0\frac{dx}{\sqrt{x+3}-1}$$ I decided to call $x+3 = t^2 \rightarrow 2tdt = dx$, then: $$\int^1_0 \frac {2tdt}{t^2-1}$$ Now what should ...

### Integral of product of Bessel functions of the first kind (1 answer, 39 views)
I would like to solve the integral: $$\int_0^{+\infty} rJ_n(ar)J_n(br)\, dr$$ Is there any closed form for it? Thanks!

### How to solve this integral for a hyperbolic bowl? (3 answers, 104 views)
$$\iint_{S} z\, dS$$ where $S$ is the surface given by $$z^2=1+x^2+y^2$$ and $1 \leq z\leq\sqrt5$ (hyperbolic bowl)

### The indefinite integral $\int \frac{1+\cos(x)}{\sin^2(x)}\,\mathrm dx$ (3 answers, 63 views)
I'm trying to solve this integral and I did the following steps to solve it, but don't know how to continue. $$\int \frac{1+\cos(x)}{\sin^2(x)}\,\mathrm dx$$ $\int \frac{\mathrm ...$

### Integral of $\int \frac{\sin(x)dx}{3-\cos(x)}$ (2 answers, 31 views)
I am trying to solve this integral and I need your suggestions. I don't know if it's OK to set $3-\cos(x)$ as $t$ ($\rightarrow dt = \sin(x)dx$) or just take $\cos(x)$ and set it as $t$: $\int ...$

### Complex-valued Fourier integral: $\int_{ - \infty }^{ + \infty } {\frac{{\cos (ax)}}{{{x^2} + 1}}{e^{ - ibx}}\,\mathrm dx}$ (2 answers, 44 views)
I'm working on the Fourier transform, but I don't know how to evaluate the integral: $$I = \int_{ - \infty }^{ + \infty } {\frac{{\cos (ax)}}{{{x^2} + 1}}{e^{ - ibx}}\,\mathrm dx}$$

### What's a better way to integrate this? (2 answers, 48 views)
$$\int \frac{1}{x^2 + z^2} dx$$ I tried substitution and also by parts. By parts is getting messy and I am not sure if I am getting the right answer. I am trying to figure out an easier way or the ...

### Flux integrals, parameterization (0 answers, 15 views)
Let S be the cylinder $x^2 + z^2 = 9$ where $-2 \le y \le 2$; parameterization: $\varphi(u,v) = \langle 3\cos v,\ u,\ 3\sin v\rangle$ where $-2 \le y \le 2$ and $0 \le v \le 2\pi$ ...

### Evaluating a line integral directly (0 answers, 41 views)
$F(x,y) = \dfrac{1}{x^2+ y^2}\langle -y,x\rangle$, and let $R$ be a circle of radius $a$, centered at the origin. a) Why can't Green's theorem be used to evaluate $\int_R F \cdot ds$? b) ...

### separating a variable from integral (0 answers, 36 views)
In the following integral, I would like to separate $\alpha$ from the rest of the equation. Can we solve the following equation for $\alpha$? $\int_{0}^{a} \int_{0}^{2\pi} ...$

### Piecewise defined integration [closed] (1 answer, 31 views)
Let $$f_n(x) = \begin{cases} 0 & x < -\tfrac{1}{n} \\ \tfrac{n}{2} & -\tfrac{1}{n} \leq x \leq \tfrac{1}{n} \\ 0 & x>\tfrac{1}{n} \\ \end{cases},$$ $n=1,2,3,\ldots$. Let $F(x) = ...$

### $\int_0^{\infty}\frac{x^3}{(x^4+1)(e^x-1)}\mathrm dx$ (2 answers, 126 views)
I need to find a closed-form for the following integral. Please give me some ideas how to approach it: $$\int_0^{\infty}\frac{x^3}{(x^4+1)(e^x-1)}\mathrm dx$$

### $\frac{d}{dx}\int_{0}^{e^{x^{2}}} \frac{1}{\sqrt{t}}dt$ (4 answers, 69 views)
I'm having trouble understanding how to apply the $\frac{d}{dx}$ when taking the anti-derivative. $$\frac{d}{dx}\int_{0}^{e^{x^{2}}} \frac{1}{\sqrt{t}}dt$$ In class it was mentioned we'll end up taking ...

### $\lim_{R \to \infty} \int_0^R \frac{dx}{(x^2+x+2)^3}$ (1 answer, 38 views)
$$\lim_{R \to \infty} \int_0^R \frac{dx}{(x^2+x+2)^3}$$ Please help me with this integral; I've tried some substitutions, but nothing worked. Thanks in advance!
http://mathoverflow.net/questions/43679/an-edge-partitioning-problem-on-cubic-graphs
## An edge partitioning problem on cubic graphs

Hello everyone, I already asked this question on the TCS Stack Exchange, but it has not been resolved yet. Maybe readers of this forum will have other ideas or information, although I suspect that the sets of users of both places form a large intersection. Has the complexity of the following problem been studied?

Input: a cubic (or $3$-regular) graph $G=(V,E)$, a natural upper bound $t$

Question: is there a partition of $E$ into $|E|/3$ parts of size $3$ such that the sum of the orders of the (not necessarily connected) corresponding subgraphs is at most $t$?

Related work

I found quite a few papers in the literature that prove necessary and/or sufficient conditions for the existence of a partition into some graphs containing three edges, which is somehow related, and some others on computational complexity matters of problems that intersect with the above (e.g. the partition must yield subgraphs isomorphic to $K_{1,3}$ or $P_4$, and no weight is associated with a given partition), but none of them dealt exactly with the above problem. Listing all those papers here would be a bit tedious, but most of them either cite or are cited by Dor and Tarsi.

A more closely related work is this paper by Goldschmidt et al., who prove that the problem of edge partitioning a graph into parts containing AT MOST $k$ edges, in such a way that the sum of the orders of the induced subgraphs is at most $t$, is NP-complete, even when $k=3$. Another difference between their problem and the one I describe is that they do not allow subgraphs to be disconnected. Is it obvious that their problem remains NP-complete on cubic graphs, when we require strict equality w.r.t. $t$ and drop the connectivity constraint?

Additional information

I've tried some strategies that failed. More precisely, I found some counterexamples that prove that:

• maximising the number of triangles does not lead to an optimal solution; which I find somehow counter-intuitive, since triangles are those subgraphs with lowest order among all possible graphs on three edges;
• partitioning the graph into connected components does not necessarily lead to an optimal solution either. The reason why it seemed promising may be less obvious, but in many cases one can see that swapping edges so as to connect a given subgraph leads to a solution with smaller weight (example: try that on a triangle with one additional edge connected to each vertex; the triangle is one part, the rest is a second, with total weight 3+6=9. Then exchanging two edges gives a path and a star, with total weight 4+4=8.)

I'm currently trying to work out reductions from related problems (see above), as well as other leads suggested by the kind readers of the TCS forum.

- 1 Question: is there a partition of $E$ into $|E|/3$ parts such that the sum of the orders of the (not necessarily connected) corresponding subgraphs is at most $t$? I don't really understand the question. You take a (not necessarily proper) 3-colouring of the edges of $G$, then look at the sum of the orders of the subgraphs defined by the colour classes. But what is the order of such a subgraph, if it isn't $n$? Do you mean the set of vertices incident to an edge of the colour? – Andrew D. King Oct 26 2010 at 18:07
- 5 The order of the subgraph is the cardinality of its vertex set. As an example, $K_4$ can be (edge-)partitioned into a triangle and a star, whose orders add up to 3+4=7. – Anthony Labarre Oct 26 2010 at 21:12
- 1 Note, it's an $|E|/3$-edge-colouring, not necessarily proper – Dave Pritchard Nov 2 2010 at 9:07
- 1 Dave: no, I want exactly $|E|/3$ parts, each of size $3$, and none can be empty. I agree that the problem is not especially natural; my motivation is that it is a special case of another problem I'm interested in (mathoverflow.net/questions/13364/…), so hardness of the former would imply hardness of the latter. – Anthony Labarre Nov 2 2010 at 12:21
- 1 @Anthony: What is a natural upper bound? Is being natural crucial? – Hans Stricker May 3 2012 at 12:25

## 1 Answer

Thank you for the clarifications (the original post did not say each part should have size 3, maybe you can add that). I will take a stab, but it is not very clever so possibly I missed something. Note,

• for a given triple of edges, its subgraph has $\ge 3$ nodes, with equality iff it is a triangle
• thus for a given partition into $|E|/3$ triples, the sum of this over all parts is $\ge |E|$, with equality iff every triple forms a triangle.

However, it is known, due to Holyer 1981, that it's NP-complete to determine whether a graph can be edge-partitioned into triangles. So I think your problem is also NP-complete on these instances (taking $t=|E|$).

RE: your comment, thanks, I forgot it is cubic!

- A cubic graph (and actually, any $k$-regular graph with $k$ odd) cannot be partitioned into triangles. I agree that intuition would suggest that maximizing the number of triangles would be the way to go, but the following counterexample will convince you otherwise: take the complement of a cycle of length 6 (wwwteo.informatik.uni-rostock.de/isgci/images/…). If you use both triangles, you get a solution of weight 12, while discarding one triangle gives you a solution of weight 11. – Anthony Labarre Nov 2 2010 at 22:33
- Thanks for pointing out the missing bit, I modified my question accordingly. – Anthony Labarre Nov 3 2010 at 7:26
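A short justification of the parity claim in the comment above (an editorial gloss): in an edge partition into triangles, each triangle through a vertex $v$ uses exactly two of the edges incident to $v$, so every vertex degree must be even. Hence no $k$-regular graph with $k$ odd, and in particular no cubic graph, can be edge-partitioned into triangles, which is why the Holyer-based argument cannot be run on cubic instances directly.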
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 28, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9435103535652161, "perplexity_flag": "head"}
http://mathoverflow.net/questions/109330/did-gauss-know-dirichlets-class-number-formula-in-1801
## Did Gauss know Dirichlet's class number formula in 1801?

Let $h_d$ be the number of $SL_{2}(\mathbb{Z})$ classes of primitive binary quadratic forms of discriminant $d$. It's natural to impose the hypothesis that $d$ is not a square, as we do below.

In Carl Ludwig Siegel's paper titled The Average Measure of Quadratic Forms With Given Discriminant and Signature, Siegel cites two formulae given by Gauss in Disquisitiones Arithmeticae:

(a) $\displaystyle\sum\limits_{d= -N }^{-1} h_d \sim \frac{\pi}{18 \zeta(3)}N^{3/2}$

(b) $\displaystyle\sum\limits_{d = 1}^N h_d \log{\epsilon}_d \sim \frac{{\pi}^2}{18 \zeta(3)}N^{3/2}$

where $N > 0$ and $\epsilon_{d} = \frac{1}{2}(t + u \sqrt{d})$, where $(t,u)$ is the smallest positive solution to $t^2 - du^2 = 4$. (Actually, Gauss restricts consideration to binary quadratic forms with even middle coefficient and correspondingly arrives at different formulae, but they're essentially the same as those above.)

Siegel gives two proofs of these formulae: one proceeding from Dirichlet's class number formula together with character sum estimates due to Pólya and Landau, and one via a direct lattice point counting argument. In light of the facts that (i) I haven't heard anyone say that Gauss was the one to discover the class number formula and (ii) the character sum estimates seem outside of the scope of Gauss's work, I imagine that his argument was via lattice point counting. Do we have any evidence otherwise? (I checked Gauss's book and he doesn't describe his methods there.)

A short comment for now: Gauss understood the connection between lattice points on spheres and class numbers of definite quadratic forms: The number of representations of $m$ as a sum of 3 squares is a constant times $h(-m)$ or $h(-4m)$ depending on the congruence of $m$ mod $8$, as I recall. The estimate (a) can probably be deduced from counting lattice points in $R^3$. – Marty Oct 10 at 21:14

@Marty - aah, good point, I forgot about that result of Gauss. I wonder if there's an analogous result involving class numbers of real quadratic fields. – Jonah Sinick Oct 10 at 22:08

@Marty - BTW, Shimura has a great article discussing a common framework for thinking about the ternary quadratic form given by the discriminant and the ternary quadratic form that you mention: ams.org/journals/bull/2006-43-03/… – Jonah Sinick Oct 10 at 22:10

Am I right that it is still not known whether there exists a set of $d>0$ of positive density for which $h_d=1$? I've heard a talk (a long time ago) where this was referred to as a "Gauss problem". There is also some nice connection between $h_d$ and the length of the period of the continued fraction expansion of $\sqrt{d}$ but I don't quite remember what it was. – Nikita Sidorov Oct 10 at 23:17

Jonah is right about rings of integers in number fields, but that's a slightly different question (the $h(5^{2k+1})$ example involves non-maximal orders). – Henry Cohn Oct 11 at 0:29

## 1 Answer

In 1801, Gauss certainly was aware of the general procedure for obtaining the class number formula (or asymptotic results) via counting lattice points. As a matter of fact, the approach using lattice points in general, and Gauss's circle problem in particular, can already be found in Legendre's Theorie des Nombres in 1798, in connection with his approach to the three-squares theorem.
There do exist a couple of posthumous papers by Gauss on this topic, which can be found in his collected works as well as in Maser's German translation of the Disquisitiones (but not, unfortunately, in the English translation). In fact Gauss attempted twice to publish his proof of the class number formula; the first attempt begins with the sentence "33 years have passed since the principles of the wonderful connection, to which this memoir is dedicated, were discovered, as I have remarked at the end of the Disquisitiones". Here Gauss refers to the last paragraph of the Disquisitiones, where he reports to have discovered the analytic solution to a problem stated in arts. 306 and 302. The second version of his manuscript begins with the same sentence, except that the 33 years have been replaced by 36 years.

In any case, what this means is that the question in your title should be answered with a firm "yes".

What prevented Gauss from publishing? – Igor Rivin Oct 11 at 18:03

You meant 1798? – Abdelmalek Abdesselam Oct 11 at 18:07

@Igor: probably his famous motto "pauca sed matura"... – Sylvain JULIEN Oct 11 at 19:35

@Franz Lemmermeyer - Thanks very much! I had never heard of this before. A few follow-up questions: (1) When you say that Gauss attempted to publish, do you mean that he was somehow unsuccessful in doing so? If so, what was the reason for this? (2) So it looks like he knew the class number formula, but is that what he used to obtain the results that I state above? I would still bet on a lattice point argument in light of the need for character sum estimates if one uses the class number formula approach... – Jonah Sinick Oct 11 at 20:52

@Igor: lack of time, that is, astronomy, physics, geography etc. – Franz Lemmermeyer Oct 11 at 21:04
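As a purely numerical illustration of formula (a) (my own sketch, not part of the thread), one can count reduced primitive forms by brute force and compare the partial sum against the asymptotic. The reduction conditions below are the standard ones for positive definite forms; treat the script as a sanity check, not a proof.

```python
from math import gcd, isqrt, pi

def h(d):
    """Class number of primitive binary quadratic forms of discriminant d < 0,
    counted by enumerating reduced forms ax^2 + bxy + cy^2 with |b| <= a <= c."""
    assert d < 0 and d % 4 in (0, 1)
    count = 0
    for a in range(1, isqrt(-d // 3) + 1):   # reduced forms force 3a^2 <= -d
        for b in range(-a, a + 1):
            if (b * b - d) % (4 * a):
                continue
            c = (b * b - d) // (4 * a)
            if c < a:
                continue
            if b < 0 and (a == -b or a == c):
                continue                      # avoid double-counting boundary forms
            if gcd(gcd(a, abs(b)), c) == 1:   # primitive forms only
                count += 1
    return count

N = 2000
partial_sum = sum(h(d) for d in range(-N, 0) if d % 4 in (0, 1))
asymptotic = pi / (18 * 1.2020569031595943) * N**1.5   # zeta(3) = 1.2020569...
print(partial_sum, asymptotic)   # the two numbers should be in rough agreement
```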
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 21, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9495183825492859, "perplexity_flag": "head"}
http://mathoverflow.net/questions/11239?sort=newest
## Conformal maps of doubly connected regions to annuli

In another question here on MO, Anweshi asks if any doubly connected region in the complex plane can be conformally mapped to some annulus. The answer to this is yes. But the fact is that two annuli are conformally equivalent iff the ratio of the outer radius and the inner radius is the same for the two. Thus each is conformally equivalent to a unique "standard" annulus $r < |z| < 1$ with $0 < r < 1$.

Now my question is the following: Is there any way to see what the radius of a "standard" annulus conformally equivalent to a doubly connected region is, just by looking at the region? That is, without constructing an explicit map.

This really depends on what you mean by "looking at" a region, but there are integrals that show up in conformal field theory that allow you to distinguish conformally inequivalent surfaces. – S. Carnahan♦ Jan 9 2010 at 18:25

re: Scott Carnahan's comment. see e.g. this section in Lawler's book: books.google.com/… – jc Jan 9 2010 at 18:28

Or see the wikipedia article on extremal length at en.wikipedia.org/wiki/Extremal_length . – Harald Hanche-Olsen Jan 9 2010 at 18:57

Ah, nice. Thank you very much! – Grétar Amazeen Jan 9 2010 at 19:06

## 2 Answers

By "see" I will assume you mean in a geometric sense. Then your question falls within a standard topic in geometric complex analysis. First some terminology: A doubly connected domain $R$ on the Riemann sphere is called a ring domain, and if you map it onto $r < |z| < s$ as a canonical domain, then $\mathrm{mod}(R) = (2\pi)^{-1}\log(s/r)$ is called the conformal modulus or just modulus of the ring domain. By the way I have defined it, it is nearly trivially a conformal invariant, but it need not be defined this way. There is a geometric theory, the Ahlfors-Beurling theory of extremal length of curve families, within which the modulus of a ring domain can be defined directly and geometrically, without any preliminary conformal mapping onto some canonical domain. Extremal length can be proved to be a conformal invariant, and then one quickly sees that the two definitions coincide. There is an exposition of the theory of extremal length in Conformal Invariants by Ahlfors.

It would be unreasonable to expect to "see" the exact value of the modulus of a ring domain. The boundary of a ring domain can be extremely complicated geometrically, and every tiny wiggle impacts on the modulus. But extremal length yields inequalities for the modulus from geometric data. I will quote a single, rather striking, result of this kind: If a ring domain $R$ contains no circle on the Riemann sphere separating its two boundary components, then $\mathrm{mod}(R) \leq 1/4$. The constant $1/4$ is sharp. The result is due to D. A. Herron, X. Y. Liu and D. Minda. Small modulus means a "thin" ring domain; if the modulus is large enough, the ring domain is so "fat" that it has to contain a separating circle.

Hi. I want to map an annulus onto a hollow (doubly connected) region. I've got the shape functions of the boundaries of the hollow region (x = f(theta), y = g(theta)). Can you give me a general solution?

This should be asked as a separate question with more context and also places you looked for answers.
Typically people aren't willing to do work for you if it doesn't seem like you've put in some effort yourself. – jc May 20 2010 at 18:01
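For readers who want to experiment with the definitions above, here is a trivial numerical companion (my own sketch, not from the thread): the modulus of a round annulus and the resulting conformal-equivalence test.

```python
from math import log, pi

def modulus(r, s):
    """Conformal modulus of the round annulus r < |z| < s, for 0 < r < s."""
    return log(s / r) / (2 * pi)

def equivalent(a1, a2, tol=1e-12):
    """Two round annuli are conformally equivalent iff their moduli agree."""
    return abs(modulus(*a1) - modulus(*a2)) < tol

print(modulus(0.5, 1.0))                   # ~0.1103
print(equivalent((0.5, 1.0), (1.0, 2.0)))  # True: same ratio of radii
```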
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 8, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8970116376876831, "perplexity_flag": "head"}
http://mathoverflow.net/questions/109885?sort=newest
## Second homotopy groups of 3-complexes and Fenn's spiders

Let $X$ be a finite CW complex with one zero cell. Then (up to homotopy) the 2-skeleton of $X$ is the same as a group presentation, via the Cayley complex construction. For a while I had been searching for some planar description of the second homotopy group, which would allow a concrete combinatorial description of the fundamental 2-groupoid of $X$ (up to equivalence). I found many discussions close to what I needed, before stumbling on the (IMHO beautiful) book "Techniques of Geometric Topology" by Roger Fenn.

In Chapter 2 he gives a description of $\pi_2(X)$ of a 3-complex in terms of certain diagrams modulo local relations. Each relation in the 2-complex gives a "relation spider" and the second homotopy group of $X$ is the group of isotopy classes of planar diagrams generated by these spider diagrams modulo certain "universal" local relations (analogous to $gg^{-1} = 1$ in the $\pi_1$ case) and relations given by the 3-cells of $X$. (The spider diagrams are roughly dual to Van Kampen diagrams.)

My questions are:

1) are spider diagrams Fenn's invention? Perhaps this way of thinking about $\pi_2$ was folklore?

2) what are other sources describing $\pi_2$ (or even better the fundamental 2-groupoid) concretely (ideally diagrammatically) for small dimensional complexes?

I am aware that all of this can be viewed as a concrete example (for $n = 2$) of the dictionary between n-groupoids and n-types. However, because of the applications I have in mind I am only looking for "concrete" sources!

The fundamental 2-groupoid of a CW-complex admits a presentation (in the category where it lives) analogous to (indeed extending) the presentation of the fundamental group. This is the philosophy of relations among relators discovered by Whitehead long ago. He used crossed modules, which are equivalent to 2-groups. – Fernando Muro Oct 17 at 9:40

Some references to early sources (van Kampen, Reidemeister, ...) can be found in the review pages.bangor.ac.uk/~mas010/pdffiles/… . The book reviewed gives colimit methods for calculating second relative homotopy groups (in fact homotopy 2-types) and hence some calculations of $\pi_2$. – Ronnie Brown Oct 17 at 9:44

I am not sure what Geordie and Fernando understand by "the fundamental 2-groupoid of a CW-complex". I find 2-groupoids not easy to manage compared with crossed modules (over groupoids), and not useful for proving theorems compared with double groupoids with connections, and these two are well defined for pairs with a set of base points, or more generally for filtered spaces. – Ronnie Brown Oct 17 at 10:36

@Ronnie: by fundamental 2-groupoid I mean the 2-groupoid with objects points, 1-morphisms paths between points, and 2-morphisms paths between paths up to homotopy. (A truncation of the fundamental $\infty$-groupoid). I am a complete novice at these things (hence the question), and am sure you are right that other models are easier to work with. Thank you for the link to the review, which looks interesting. – Geordie Williamson Oct 18 at 7:38

@Geordie: Philip Higgins and I really made progress in 1974 when we considered the (strict!) homotopy double groupoid (with connections) of a pair of spaces $(X,A)$ with a set $C$ of base points. This enabled us to prove a 2-d van Kampen theorem, which gave new info on second relative homotopy groups.
There are lots of pictures in the new book reviewed (pdf of the book on the web page of the book). – Ronnie Brown Oct 18 at 16:00

## 2 Answers

I am not sure (see the paper at www.cmi.univ-mrs.fr/~hamish/Papers/crmshort.pdf) but I think these are related to Igusa's pictures. There is a nice paper by Loday on the idea of homotopical syzygies (J.-L. Loday, 2000, Homotopical Syzygies, in Une dégustation topologique: Homotopy theory in the Swiss Alps, volume 265 of Contemporary Mathematics, 99-127, AMS.) which may help, and also the paper by Kapranov and Saito (M. Kapranov and M. Saito, 1999, Hidden Stasheff polytopes in algebraic K-theory and in the space of Morse functions, in Higher homotopy structure in topology and mathematical physics (Poughkeepsie, N.Y. 1996), volume 227 of Contemporary Mathematics, 191-225, AMS.) which is worth reading. The situations in these papers relate to when the 3-complex is to be constructed from its 2-skeleton by killing the $\pi_2$, but they are I think relevant.

Thank you very much for these excellent references. It seems Igusa's pictures are exactly Fenn's diagrams. Igusa says that the observation that one obtains a description of $\pi_2$ in this way is "essentially due to J. H. C. Whitehead". Also, the paper of Loday is exactly what I was hoping would exist somewhere in the literature. – Geordie Williamson Oct 18 at 7:32

I wrote up the ideas of Whitehead on free crossed modules in "On the second relative homotopy group of an adjunction space: an exposition of a theorem of J.H.C. Whitehead", J. London Math. Soc. (2) 22 (1980) 146-152. Analogous ideas were developed independently by Peiffer and Reidemeister; the book review pages.bangor.ac.uk/~mas010/pdffiles/… gives good historical background. – Ronnie Brown Nov 5 at 11:00

The web site Homological Algebra Programming by Graham Ellis gives methods of constructing resolutions of groups; the basic idea is to construct inductively a universal cover of a $K(G,1)$ together with a contracting homotopy; each inductive step gives another "home" for a contracting homotopy. This method is a higher dimensional version of constructing a tree in a Cayley graph, and is more computational than the traditional "killing kernels".
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 16, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9066674709320068, "perplexity_flag": "head"}
http://bpchesney.org/?tag=diversity
# bpchesney.org

A blog by Brian Chesney

## Nested Linear Programs to Find Agreement among Sensors

August 12th, 2012

Suppose you have 3 sensors: A, B, C. Each has a set of readings that goes into a column of a matrix, b. A matrix A is constructed such that b_A, b_B, b_C represent compressed measurements in some domain. A is Gaussian, since this is a universal sensing matrix for compressive sensing (CS), due to its poor correlation to anything other than itself. [1]

The solution for each x_i (i ∈ {A, B, C}) represents the data recovered from these compressive measurements and can be found with min ||x_i||_1, or some other sparsity-promoting technique.

I am interested in the condition when all sensors (A, B, C) agree. Consensus, agreement among multiple sensors, is important for many applications. One example is safing functions. Safing is checking for agreement among, sometimes redundant, sensors before engaging a drastic action, for example, deploying the airbags in the event of an automobile crash. When all sensors agree (or agree enough), then the dangerous function of deploying an airbag towards the driver and passengers can be performed. In this example, I am interested in the case when each sensor finds some nonzero sparse solution.

Previously, I developed the notion of the diverse solution to a set of linear equations that require a positive integer solution. The most diverse solution is the one whose entries are of roughly similar value. In another post, I showed that the diverse solution can be used to mitigate risk. For the safing function we want to mitigate the risk that our sensor data is in error, indicating that we should erroneously employ our drastic action. We use multiple redundant sensors to mitigate this risk. The agreement among sensors is maximized when diversity is maximized across all sensors. Even though we are seeking a sparse solution for each individual sensor (for example, min ||x_i||_1), we seek to diversify the number of sensors that have witnessed a significant event, that is, have found a nonzero sparse solution.

So the algorithm is a sparsity-based linear program wrapped inside a diversity-based linear program. When all sensors agree, a peak-to-sum ratio (PSR) is minimized, if the sensitivity and units of each sensor are normalized. Since we are nesting two linear programs together, it is important that the innermost program, recovering the compressively-sensed (CS) data, seeks a sparse solution, since this can be computationally efficient.

The l1-norm minimization routine is used as an example of a CS recovery routine and exists inside the maximum diversity routine. Here x(:,i) indicates the ith column of x. The objective function value of the l1-norm minimization of each column of x is stored in a vector, u. This is the innermost routine. The outermost routine seeks to maximize the diversity of u by minimizing its peak-to-sum ratio.

Let's look at some example solutions. We'll just look at types of solutions that we can reach with this algorithm, independent of the b and A that actually generated them.

In the first example, above, all sensors seem to be in agreement that something happened (of magnitude 10). This is indicated by the low PSR, which has a lower bound of 1/N (= 1/3, in this case). There must be a little bit of noise in the readings, since the agreement isn't perfect, but this could be considered close enough to declare consensus.
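To make the outer consensus check concrete, here is a minimal sketch (mine, not the author's code) of the PSR test applied to hypothetical objective values u, one entry per sensor, loosely mimicking the three examples discussed in this post.

```python
import numpy as np

def psr(u):
    """Peak-to-sum ratio of a nonnegative vector; ranges from 1/len(u) to 1."""
    u = np.asarray(u, dtype=float)
    return u.max() / u.sum()

# Hypothetical objective values ||x_i||_1, one per sensor, for three scenarios
scenarios = {
    "all sensors agree":      [10.0, 9.8, 10.1],  # PSR near 1/3: consensus
    "one sensor dead":        [10.0, 0.1, 10.1],  # PSR ~ 0.5: limp along
    "one sensor crying wolf": [0.1, 10.0, 0.2],   # PSR near 1: diagnose as broken
}
for name, u in scenarios.items():
    print(f"{name}: PSR = {psr(u):.3f}")
```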
Notice, also, that the sensors don't get exactly the same reading; the readings are basically shifted by one index. If the columns of x represent samples in time, then each sensor is phase-shifted with respect to the others. In this algorithm, we don't have to explicitly time-align the data — which can be a computationally-intensive step. We just accept the data as is, even with small differences between sensors.

In the second example, above, 2 of the 3 sensors seem to agree. One sensor appears to be broken; however, this has not completely destroyed our PSR. We could still limp along with 2 functional sensors for a little while, until we are in a position to address the potentially defective sensor. The algorithm has mitigated the risk of one sensor dying by using a diversity-promoting strategy.

In the third example, above, something is clearly busted. One sensor has detected a significant event, but the majority have not. A simple majority voting scheme would not be suspicious of this data set, but our nested diversity/sparsity linear program is. Notice that the PSR is getting closer to its upper bound, which is 1. At this level of PSR, our algorithm diagnoses our system of sensors as broken and can take action accordingly, instead of making a bad decision.

The strategy of reaching a consensus among sensors by using both sparsity- and diversity-promoting techniques has made this system more robust. However, the way the actual computation is performed hasn't been made clear yet. Sparsity-based recovery techniques have been well covered and, recently, I posted about how to solve a diversity-based integer program. Next I'll look at how to nest these linear programs.

References

[1] Baraniuk, Richard. "Compressive Sensing," IEEE Signal Processing Magazine. July 2007.

## Solving the Maximum Diversity Integer Program

July 29th, 2012

In an earlier post, we saw the benefits of finding the solution to a set of equations with the most diversity, which was found by minimizing the peak-to-sum ratio (PSR). This post discusses ways to formulate the linear program to arrive at that solution.

Consider a system of equations with coefficient matrix A and right-hand side b. We focus on finding an initial solution for the linear program using two methods.

Method 1: Elimination and Back-substitution

The first method for solving for the minimum PSR solution starts with Gaussian elimination and Jordanian back-substitution to find the initial solution. From there, the linear program iterates to find a feasible solution with the minimum PSR that is also an integer solution, greater than 0. With elimination and back-substitution, [A b] turns into its reduced form, which exhibits a solution to the system of equations, if non-integer solutions are allowed. However, in this problem, they are not. The first method takes this solution and then finds the nearest integer vector — not necessarily a solution — and then proceeds to the minimum PSR solution in the feasible set, considering only integer vectors.

For an underdetermined system of equations, using elimination and substitution will not yield a solution at all.
Instead it will describe the system of equations in terms of certain variables for which values need to be chosen, called "free variables." From the choice of free variables, the others are determined, and the linear program can iteratively make better and better choices of free variables. One common initial choice for the free variables is to set them all to 0. This, unfortunately, will yield the most sparse solution, which we saw earlier is the opposite of the most diverse solution. Starting with the sparsest possible solution means the linear program will have to work for as long as possible to get to the most diverse solution.

Method 2: The Intersection of a Boundary Condition and the Minimum PSR Line

Let's modify the system of equations slightly to look at them from a different angle. We'll turn the "greater-than" to "less-than" and add in another constraint. This looks more like the investing example from earlier. Note also that PSR is bounded for nonnegative integers: it lies between 1/N and 1.

The second method for finding the initial solution for the maximum diversity integer program exploits this fact: the PSR of an N-element, nonnegative, integer vector can never be below 1/N. So if one of our equations is a total-sum constraint of the form x_1 + ... + x_N = S, the minimum PSR integer program can reduce to minimizing the first norm of the difference between x and the evenly distributed vector (S/N)·1.

So you can use a sparsity-promoting integer program to find the minimum PSR, maximum diversity solution once you subtract off the evenly distributed vector. Note that this vector does not need to be in the feasible solution space – it can be used as an initial value. Now the minimum PSR integer program can take advantage of the computational efficiency of, and the extensive analysis devoted to, sparsity-promoting integer programs. We are looking for a vector that is not very different from the evenly distributed vector, just like with sparsity-promoting programs we look for a vector that is not very much different from 0.

Making Sure All the Units are the Same is a Big Deal

If N >> S, then subtracting off the evenly distributed vector doesn't do much and we're pretty much back to trying to solve a system of equations by minimizing the first norm. This only works if x is sparse; otherwise, it's a terrible approximation to the solution of Ax <= b. However, N >> S is telling us that there isn't much to distribute, compared to the number of bins we are distributing to. So we may not get a good, diverse answer anyway.

Another way of looking at the constraint is that it relates all of the elements of x to one another. This equation requires that all of the units of x are the same, for the equation to make any sense. Referring back to the investing example again, the A that gives the system of equations for this example, overall, is actually pretty sparse. Without the two row vectors of A that are not mostly made up of zeroes, 1^T and p^T, the possible values of x could take on wildly different values, if the values of b varied wildly. For instance, x2 could be on the order of 1000 and x3 could be on the order of 1,000,000. This would make it very difficult for an integer program that minimizes the first norm, as given in method 2, to arrive at a good answer.

So the choice of starting with a dense row of A and finding its most diverse solution as an initial vector is not arbitrary. For well-formed problems that seek a most diverse solution, it can actually be a great starting point.
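Here is a small sketch (my own, with invented function names) of that starting point: given a dense total-sum row x_1 + ... + x_N = S, build the evenly distributed integer vector directly.

```python
import numpy as np

def even_start(S, N):
    """Evenly distributed integer starting point for a row x_1 + ... + x_N = S."""
    x = np.full(N, S // N, dtype=int)
    x[: S % N] += 1   # hand out the remainder one unit at a time
    return x

print(even_start(12, 3))  # [4 4 4]: already the most diverse split of 12 over 3 bins
print(even_start(13, 3))  # [5 4 4]
```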
If your system of equations for a problem does not have such a vector then you may not really need the most diverse solution, or maybe you have not fully captured the problem. Usually, a system of equations has most, if not all, of its rows dense. Then it becomes a question of finding the best one for finding an initial solution to the maximum diversity integer program, which I'm sure is a whole investigation unto itself.

## Diversity in Investing

July 7th, 2012

Minimize risk by maximizing diversity. Whether you're a trader deciding which stocks to invest in, an investor deciding which companies to invest in, or a manager trying to decide which projects to support, you can minimize your risk by investing in a portfolio of products. This approach fits very nicely with the notion of diversity developed earlier.

Suppose you have 5 companies seeking investment. An investor can purchase up to 30% of a company. Each company must project its value at the end of the investment period and the investor has this information, along with the price per share for each company. Different companies have different prices per share and different projected valuations at the end of the investment period. The investor cashes out and earns a share of the value of the company when the investment period is complete, proportional to the amount of the company the investor owns. For example, if the investor owns 3000 of the company's 10000 shares, 30% of the valuation of the company at the end of the investment period goes to the investor. The investor can only purchase whole shares of the company (no fractional shares; this is like a minimum investment). The investor may not 'short' a company, that is, sell shares of a company that the investor does not own. So the number of shares an investor chooses to purchase must either be 0 or some positive integer less than or equal to the number of shares for sale.

Companies may or may not meet their valuation projections. Companies may go under, resulting in a valuation of 0 at the end of the investment period. The investor does not know how likely a company is to meet its goal or fold altogether. The goal is to avoid modeling this unknown and to develop a robust strategy based on the only two things the investor knows for sure: price per share and the projected valuation. The investor seeks a 10% return and has $100,000 to invest.

Let's write some equations to figure out what the investor should do. We'll use x to represent the amount of money invested in each company. The restriction on how much of each company the investor can own amounts to a restriction on the amount of money the investor can invest in each company. Let's say that for company 1, the maximum investment is 50000, for company 2, it's 40000, and for company 3, it's 60000.

Each company expects to turn the investment they receive into a return to the investor. We'll use p to represent the percentage return on investment each company claims they will return to the investor at the end of the investment period. If a company claims a 10% return on their investment, then their value for p is 1.1, since they will return the initial investment, plus another 10% on top of that. The investor has a goal of 10% return on total investment, so the weighted sum of p, weighted by the amount invested, must be greater than 110000.
That last equation can be rewritten so that all of our equations take the form Ax <= b. To find the most diverse solution, the optimization problem is then to minimize the PSR of x subject to those constraints.

This is as opposed to the most common way of solving this problem, which is to focus on maximizing return. One way of maximizing return is to ignore the constraint of getting at least 10% on the investment and just get as much as possible; that linear program simply maximizes the projected return p^T x subject to the remaining constraints. Another way of maximizing the return is to leave the constraint as a minimum bound on the return. That just tells the solver of the linear program to quit once a 10% return is reached. That kind of violates the spirit of maximizing your return and is not very illustrative when comparing it to maximizing diversity. We'll treat max return as truly getting as much return as possible, without regard for how difficult the problem is for the solver, and drop the minimum return requirement for max return.

Let's look at the two strategies: max diversity and max return. Let's say that as a company offers more return, its price per share is higher and, even accounting for differences in the number of shares available, that this translates into a higher maximum investment. So the companies with a higher maximum investment are promising a bigger return, for some particular choice of p.

The max return solution has a projected return of 14.1%. This solution is 3-sparse for N=5, or 60% sparse. It concentrates as much investment as possible into the highest returning company. Once that investment is maxed out, then the rest goes into the second highest returning company until the total investment limit is reached.

The most diverse solution has a PSR of 0.20834 and a return of 10.00016%.

What happens if one of the companies fails? If it's the high-return company, 4, the max return solution, since it is sparse, only returns 33.6% of the investor's money ($33,600). However, the diverse solution is much more robust and still returns over 86% of the investor's money ($86,041) if company 4 fails and the others meet their targets. For a 4% gain on return, the risk that you would lose over 50% of your money is clearly not worth it.

Maximizing return could give a sparse solution, even when you don't specifically seek it. The sparse solution is not robust to the risk of total failure, in this case, zero return. The minimum PSR solution sacrifices return to deliver a diverse solution which is more robust to failure, that is, it is much more likely to have an acceptable return. Notice that I am comparing max return vs. the diverse solution as strategies. Allowing the sparse solution is what makes max return less robust. When considering how to invest, the sparse solution is focusing your efforts, which could promise more return, but is less robust. The diverse solution is hedging your bets. Once the minimum return goal is met, the diverse solution mitigates risk by spreading it out across a portfolio of investments.

## Diversity, mathematically

June 6th, 2012

We know about sparsity for vectors, which is the property of having a lot of 0's (or elements close enough to 0). The 0's in a sparse matrix or vector, generally, make computation easier. This has been a boon to sampling systems in the form of compressive sampling, which allows recovery of sampled data by using computationally-efficient algorithms that exploit sparsity.
The opposite of sparsity is density. A vector whose entries are mostly nonzero is said to be "dense," both in the sense that it is not sparse, as well as being slow and difficult to muddle through calculations on this vector if it is really large. In general, dense vectors are not very useful. Or are they? In this post, I introduce the notion of diverse vectors, a subset of dense vectors, as a more interesting foil to sparsity than simply dense vectors.

This post explores a measure of diversity called the peak-to-sum ratio (PSR). We can use PSR to find the most diverse solution in a linear program, but what can it be used for? We'll find this linear program optimally distributes quantized, positive units, such as currency or genes. Maximizing diversity, by minimizing PSR, is desirable in several applications:

• Investing
• Workload Distribution
• Product Distribution
• Sensor Data Fusion and Machine Learning

In contrast to some systems which allow dense vectors, these systems actually desire the most diverse solution.

Consider the following system of equations:

x1 + x2 + x3 + x4 = 12
x2 - x3 = 0
x4 = 0

The following table gives a partial list of possible solutions (for x1, x2, and x3, since x4 is already given).

| x3 | x2 | x1 |
|----|----|----|
| 0 | 0 | 12 |
| 1 | 1 | 10 |
| 2 | 2 | 8 |
| 3 | 3 | 6 |
| 4 | 4 | 4 |
| 5 | 5 | 2 |
| 6 | 6 | 0 |
| 7 | 7 | -2 |

The sparsest solution is (12, 0, 0, 0). It has the most zeroes in it. It has everything concentrated in one element of the vector, the first element. All other elements are zero, making several computations on that vector faster and easier.

Notice the solution (4,4,4,0). It has very few zeros in it, in fact, none except the one variable that was explicitly set to 0. But there are other solutions that have only one zero, or are, as they say, 1-sparse. For instance, (10,1,1,0) is also 1-sparse. The vector (4,4,4,0) is highlighted and (10,1,1,0) is not because (4,4,4,0) is more diverse. What makes it more diverse? The (4,4,4,0) vector has the most equitable distribution among its elements, which is, conceptually, the opposite of being sparse. Whereas the most sparse solution has its energy concentrated in the fewest elements, the most diverse solution has its energy distributed to the most elements. In fact, (10,1,1,0) could be considered to be more sparse than (4,4,4,0), as it is closer to having its energy concentrated in one vector (first posited in [1]).

So if we're talking about more or less diverse/sparse, this implies that we can quantify it. One way to quantify it is to look at the peak-to-sum ratio (PSR). The peak-to-sum ratio is defined as the ratio of the largest element in x to the sum of all elements of x. PSR can also be expressed as the ratio of two norms, the infinity norm and the first norm: for nonnegative x, PSR(x) = ||x||_inf / ||x||_1. Let's see what this ratio looks like for the solutions we considered above, plus a few more.

| x3 | x2 | x1 | PSR |
|----|----|-----|-------|
| 0 | 0 | 12 | 1.000 |
| 1 | 1 | 10 | 0.833 |
| 2 | 2 | 8 | 0.667 |
| 3 | 3 | 6 | 0.500 |
| 4 | 4 | 4 | 0.333 |
| 5 | 5 | 2 | 0.417 |
| 6 | 6 | 0 | 0.500 |
| 7 | 7 | -2 | 0.438 |
| 8 | 8 | -4 | 0.400 |
| 9 | 9 | -6 | 0.375 |
| 10 | 10 | -8 | 0.357 |
| 11 | 11 | -10 | 0.344 |
| 12 | 12 | -12 | 0.333 |
| 13 | 13 | -14 | 0.325 |

The (4,4,4,0) solution appears to have the lowest PSR, until we proceed to solution (13,13,-14,0). As we proceed past (13,13,-14,0), PSR continues to shrink.
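The PSR column above is easy to reproduce; here is a short script (mine, not the author's) that walks the same family of solutions x = (12 - 2t, t, t, 0) and computes the ratio exactly as tabulated, the largest element over the first norm.

```python
def psr(x):
    """Peak-to-sum ratio as tabulated: largest element over the first norm."""
    return max(x) / sum(abs(v) for v in x)

# The family of solutions x = (12 - 2t, t, t, 0) from the tables above
for t in range(14):
    x = (12 - 2 * t, t, t, 0)
    print(x, round(psr(x), 3))  # dips to 1/3 at t = 4, then drops below it again
```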
In fact, we see that (4,4,4,0) was just a local minimum for PSR. This is unsatisfying, for a couple of reasons. The first is linear programming. We can't use a linear program to minimize PSR and arrive at the most diverse solution because PSR is not convex; that is, we can't be sure we're on a path to smaller and smaller PSR because we could encounter a local minimum, like we did at (4,4,4,0).

The second way this is unsatisfying is that (13,13,-14,0) does not seem more diverse than (4,4,4,0). If we're considering the elements of x to be bins of energy, or some other thing that is to be distributed, then having an element that is -14 does not make any sense. Instead of distributing, or deciding how things are to be allocated, we are actually taking them away, which defeats the whole purpose of what we are trying to do.

So let's add in a constraint that x has to be greater than or equal to 0. For this particular problem, we can proceed in a linear fashion from the first candidate solution (12,0,0,0) to the most diverse one (4,4,4,0). However, in general, is this problem convex? That is, could we choose another set of equations and be able to march towards the most diverse answer?

Consider the definition of convexity for a feasible set of solutions. A set is convex if, when proceeding from one point to the next, you stay in that set [2]. It helps to rewrite our problem in terms of linear algebra. We seek the most diverse solution to a set of linear equations, where Ax = b represents the equations we're trying to solve. Since x cannot be negative, we know that x >= 0.

Is this convex? The shaded region in the following graph shows the x's that satisfy the quantity (PSR) to be minimized when it's less than or equal to 1.

In the example in the graph above, there are only two elements in x. The possible values of x outside of the shaded region, when either x1 or x2 or both are negative, are not in the set of feasible solutions. So do the answers in this shaded region form a convex set? Yes, because we can pick two points in the shaded region and draw a straight line between them that has to stay in the shaded region.

Notice if we restrict values of the elements of x to the integers, we still have a convex set! This result is satisfying because we'll want to distribute units of energy, currency, etc., anyway. In the language of linear algebra, x must lie in Z^N with x >= 0, where Z represents the set of integers. So the two constraints we've added to our convex optimization problem to find the most diverse solution:

1. Must not contain a negative distribution (x greater than or equal to 0)
2. Must have a base unit of distribution (x must contain integers)

We can now use an integer program to perform the optimization. Restricted to positive, integer values, when we minimize PSR, our linear program will march algorithmically, inexorably to the most diverse solution to a system of linear equations.

References

[1] Zonoobi et al. "Gini Index as Sparsity Measure for Signal Reconstruction from Compressive Samples," IEEE Journal of Selected Topics in Signal Processing. vol 5, no. 5, Sept. 2011.

[2] Strang, Gilbert. Computational Science and Engineering. Wellesley, MA: Wellesley-Cambridge Press, 2007.
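To close the loop, here is one way to actually run this optimization (a sketch of mine using scipy's milp solver, not the author's code). Because the first constraint pins the sum of x at 12, minimizing the peak element t is equivalent to minimizing the PSR, which turns the problem into a plain mixed-integer linear program:

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# Variables z = (x1, x2, x3, x4, t); minimize t, the peak element of x
c = np.array([0, 0, 0, 0, 1])

A_eq = np.array([
    [1, 1, 1, 1, 0],    # x1 + x2 + x3 + x4 = 12
    [0, 1, -1, 0, 0],   # x2 - x3 = 0
    [0, 0, 0, 1, 0],    # x4 = 0
])
equalities = LinearConstraint(A_eq, [12, 0, 0], [12, 0, 0])

A_peak = np.hstack([np.eye(4), -np.ones((4, 1))])  # x_i - t <= 0
peak = LinearConstraint(A_peak, -np.inf, 0)

res = milp(c, constraints=[equalities, peak],
           integrality=np.array([1, 1, 1, 1, 0]),  # x integer, t continuous
           bounds=Bounds(0, np.inf))               # nonnegative distribution
print(res.x[:4])  # [4. 4. 4. 0.]: the most diverse solution
```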
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9203802943229675, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/20106/generalized-symmetric-algebras-and-dickson-algebras-over-mathbb-f-p/20158
## Generalized symmetric algebras and Dickson algebras over ${\mathbb F}_p$

Start with the really well-known fact that $R[x_1, \ldots, x_n]^{S_n}$, where $R$ is any commutative ring, is polynomial on elementary symmetric polynomials. Now consider the slight generalization of multiple collections of variables, namely $R[x(i)_1, \ldots, x(i)_n]^{S_n}$, where $i$ runs over some finite indexing set and $S_n$ still acts by permuting subscripts. These rings are generally not polynomial algebras, in particular when $R$ is ${\mathbb F}_p$.

Ten years ago, in the context of computing the cohomology of symmetric groups, Mark Feshbach gave generators and inductively-defined relations for these rings when $R$ is ${\mathbb F}_2$. My questions are:

(1) Does anyone know of calculations over ${\mathbb F}_p$ or other approaches over ${\mathbb F}_2$?

(2) Restricting to $R = {\mathbb F}_p$ and replacing $S_n$ by $GL_n({\mathbb F}_p)$ we get the Dickson algebras in the case of one collection of variables. Has anyone studied the analogues of Dickson algebras where there are multiple collections of variables?

As the answer below mentions, this has been studied a lot by combinatorialists when $k=0$ and $R$ is a field of characteristic 0, and it is interesting, but hard. Search for "diagonal coinvariants" or "Cherednik algebras" (which are a deformation of the smash product). – Ben Webster♦ Apr 1 2010 at 22:01

## 3 Answers

Four quick references that contain substantial info on your questions (for more, it'd be good to know what exactly you would like to know...):

de Concini, C.; Procesi, C. A characteristic free approach to invariant theory. Advances in Math. 21 (1976), no. 3, 330-354.

Grosshans, F. D. Vector invariants in arbitrary characteristic. Transform. Groups 12 (2007), no. 3, 499-514.

Stepanov, S. A. Vector invariants of symmetric groups in the case of a field of prime characteristic. Discrete Math. Appl. 10 (2000), no. 5, 455-468.

Stepanov, Serguei A. Orbit sums and modular vector invariants. Diophantine approximation, 381-412, Dev. Math., 16, Vienna, 2008.

The paper [P. Fleischmann, A new degree bound for vector invariants of symmetric groups, Trans. AMS Volume 350, Number 4, April 1998, Pages 1703-1712] shows that this ring is generated by homogeneous invariants whose degree does not exceed max{n, k(n − 1)} (where i runs over an index set of size k). Also, this bound is sharp if $n=p^s$ for some prime $p$ and either $R=\mathbb Z$ or $R$ has characteristic $p$.

Some work has been done on the Dickson invariants version as well. I think that is considered in the article [Steinberg, Robert, On Dickson's theorem on invariants. J. Fac. Sci. Univ. Tokyo Sect. IA Math. 34 (1987), no. 3, 699-707.]

Thank you for these references. – Dev Sinha Aug 2 2011 at 5:28

@Dev: You are welcome. Looking back at my answer I realize I misread question 2. Let i run over an index set of size k. Some things may be known for k=2 but I don't think anything is known for larger values of k, except there is a complete answer for general k for $GL(2,F)$. See the paper [Vector invariants for the two dimensional modular representation of a cyclic group of prime order, Campbell, Shank, Wehlau, Adv.
in Math., 225(2) 1069-1094]. – David Wehlau Dec 3 2011 at 15:38

Do you mean the ring of diagonal invariants with $k > 1$? This appears in the combinatorics literature (Garsia-Haiman and developments thereof) for $k=2$ and $R$ a field of characteristic 0, but the definition can be given for all $k$ and $R$. It is just the $S_n$ invariants of the following object: the polynomial ring (with $R$ as coefficients) generated by $nk$ variables, with the variables partitioned into $k$ disjoint sets of size $n$, and $S_n$ simultaneously (that is, "diagonally") permuting all of the $n$-sets.

Yes, I am asking about this ring of diagonal invariants, but in positive characteristic, as well as diagonal invariants (so to speak) of GL_n acting on $k$ disjoint sets of size $n$, in positive characteristic. – Dev Sinha Apr 2 2010 at 4:30
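For readers who want to play with the diagonal action computationally, here is a tiny sympy check (my own illustration, not from the thread) that a polarized elementary symmetric polynomial is a diagonal invariant for $n = k = 2$; the check is a pure permutation of variables, so it is independent of the coefficient ring.

```python
import sympy as sp

x1, x2, y1, y2 = sp.symbols('x1 x2 y1 y2')

# A "polarized" elementary symmetric polynomial in two sets of variables
f = x1*y2 + x2*y1

# Apply the diagonal S_2 action: swap subscripts in both sets simultaneously
swapped = f.subs({x1: x2, x2: x1, y1: y2, y2: y1}, simultaneous=True)
print(sp.expand(f - swapped) == 0)  # True: f is a diagonal invariant
```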
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 28, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.897605299949646, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/1892/the-practical-implication-of-p-vs-np-problem?answertab=votes
# The Practical Implication of the P vs NP Problem

Although the question of whether $P = NP$ is important from a theoretical computer science point of view, I fail to see any practical implication of it. Suppose that we can prove all questions that can be verified in polynomial time have polynomial time solutions; it won't help us in finding the actual solutions. Conversely, if we can prove that $P \ne NP$, then this doesn't mean that our current NP-hard problems have no polynomial time solutions.

From a practical point of view (practical in the sense that we can immediately use the solution in a real-world scenario), it shouldn't bother me whether P vs NP is proved or disproved any more than whether my current problem has a polynomial time solution. Am I right?

– Casebash Aug 9 '10 at 11:01

@Casebash, I read. But my point is precisely that even if we can prove that `P=NP` it won't help me find an efficient algorithm for the travelling salesman problem. – Graviton Aug 9 '10 at 11:10

– Akhil Mathew Aug 9 '10 at 12:35

"..this doesn't mean that our current NP-hard problems have no polynomial time solutions." - Yes it does. – BlueRaja - Danny Pflughoeft Aug 9 '10 at 19:42

## 6 Answers

Many of the problems we know to be in NP or NP-complete are problems that we actually want to solve, problems that arise, say, in circuit design or in other industrial design applications. Furthermore, since the diverse NP-complete problems are all polynomial time related to one another, if we should ever learn a feasible means of solving any of them, we would have feasible means for all of them. The result of this would be extraordinary, something like a second industrial revolution. It would be as though we suddenly had a huge permanent increase in computational power, allowing us to solve an enormous array of practical problems heretofore out of our computational reach. The P vs. NP question is important in part because of this tantalizing possibility.

If it were proved that P = NP and the proof provided a specific polynomial time algorithm for an NP-complete problem, then because of the existing reduction proofs, we could immediately produce polynomial time algorithms for all our other favorite NP problems. Of course, a proof may be indirect, and not provide a specific polynomial time algorithm, but you can be sure that if we have a proof of P=NP, then enormous resources will be put into extracting from the proof a specific algorithm.

Conversely, if someone were to prove $P \neq NP$, then it would mean that there could be no polynomial time solution for any NP complete problem. (In particular, the last sentence of your second paragraph is not correct.)

– David Speyer Aug 9 '10 at 12:42

Oops, got that slightly backwards. What is known is that this algorithm does solve SAT; what would follow from P=NP is that this algorithm is in P. – David Speyer Aug 9 '10 at 12:43

This is well and good, and shows why a proof of P=NP would change the world, but it doesn't answer the following question: Given that we already "know" P≠NP (it seems a widely held consensus among computer scientists), what would be the practical implications of a proof of this already-believed-to-be-true conjecture? (This may not have been the question originally asked, though.) – ShreevatsaR Aug 9 '10 at 13:01

@ShreevatsaR, I think I can answer your question. If P≠NP then for an NP-complete problem (such as the subset sum problem) there would never be any polynomial time solution.
– Graviton Aug 9 '10 at 14:42

@Ngu: I know the significance of P≠NP; I was asking about the significance of a proof of it. To answer my own question as best as I can: any proof would likely involve new insights about computation, research could move in new directions, etc. The consequences are to research, mainly, not immediately to practice (somewhat similar to "practical consequences of a proof of Fermat's Last Theorem"). – ShreevatsaR Aug 9 '10 at 20:52

If $P = NP$, computational revolution (once a specific algorithm is identified for an NP-hard problem, with explicit asymptotic runtime bounds). If $P < NP$ and one can prove it, secure (classical) cryptography provably exists, and a huge missing piece in our understanding of computation is filled in. The first already has significant implications for daily life, and developing the second would have much larger implications.

You should also understand that after 40 years of research, today P=NP carries a host of related ideas like: easy-to-hard phase transition in combinatorial problems; quantifiable boundaries between easy and hard approximate versions of specific NP-complete problems (so getting within 7/8 of the optimal solution is easy but anything closer is NP-complete); counting and randomly sampling combinatorial objects are the same problem; zero-knowledge proofs "that reveal nothing but their own validity" (unforgeable ID cards). It's a very rich universe of ideas and it doesn't run out of questions once you know the answer to P=NP.

BTW, cryptography doesn't commonly use NP-hard problems. It relies on the hardness of factoring, discrete log, etc., which are already believed not to be NP-hard. (I think this is because for cryptography we need average-case hardness, and such a property is not known for any NP-complete problem.) – ShreevatsaR Aug 12 '10 at 6:03

There are notions of average-case NP completeness but only a few problems are known to satisfy it. I think Levin showed completeness for some tiling problems and Ajtai showed that an NP problem on shortest vectors in a lattice has almost all cases as hard as the worst case. Theoretical cryptography uses one-way functions or other assumptions that are slightly stronger than P < NP; practical cryptosystems use problems that are not considered NP-complete (in part from the need for efficiency). – T.. Aug 12 '10 at 7:42

Actually $P \ne NP$ does mean that our current NP-hard problems have no polynomial time solutions. NP-complete problems are the hardest problems in NP and NP-hard problems are at least as hard as this. So if $P \ne NP$, then all these NP-hard problems must be harder than P. Whether the proof helps us find solutions will of course depend on the proof. If $P \ne NP$, then we know not to waste time looking for polynomial solutions. If $P=NP$, then the real practical benefits would of course come from the solution, rather than the proof. That is fine - there is no reason why all theoretical computer science needs to be directly practical.

"NP-hard problems are the hardest problems in P and if there are problems not in P, they must be harder than P." - Er, no. NP-Complete problems are the hardest problems in NP; NP-hard problems are at least as hard as NP-Complete problems (but might not be in NP at all) - if $P \neq NP$, then no NP-Hard problems are in P.
– BlueRaja - Danny Pflughoeft Aug 9 '10 at 19:40

@BlueRaja: Oops, my bad – Casebash Aug 9 '10 at 21:43

Currently, if a manager asks their software engineering team to look at implementing some utility, and the team says that the requirements are NP-hard, that's a reason that the project requirements need to be changed before work on implementation can begin. That's because no-one knows how to give feasible solutions to such problems. The plurality of complexity theorists furthermore believe P =/= NP, so that means that there's a widespread belief among experts that feasible solutions to these problems will never be found.

If someone shows P=NP, then if the team says the requirements are NP-complete, then the manager and the team will start to move from talk of theory to possible realisations and their efficiency.

Not directly related to the question, but definitely relevant. Three days ago a proof of P != NP was published. The community thinks it looks serious.

– J. M. Aug 9 '10 at 22:33

Updated link, thanks – n0vakovic Aug 9 '10 at 23:11

Yup, I saw that link and it was what prompted me to ask this question. – Graviton Aug 10 '10 at 4:28

Terry Tao and several others have said they think Deolalikar's proof is wrong, sadly :( – Jamie Banks Aug 13 '10 at 19:00

@Katie: Maybe it's not that bad, then there's still hope for P = NP :) – n0vakovic Aug 14 '10 at 2:08

There is an interesting heuristic to suggest that P is actually not NP. It is that, roughly, the task of finding a proof of a statement is an NP task, but that of verifying it is a P task. From our actual experience that verifying a proof is far easier than finding one, we can intuitively expect P != NP to hold true. The practical application is that a result which would agree with our intuition would be a great thing, and psychologically satisfying moreover.

The set of theorems is not NP, since the witnessing object, the proof, can be and often is much larger than the original query, the purported theorem. In particular, if being a theorem were an NP problem, then it would be decidable (in exponential time), but this is known not to be true. For example, if one could decide whether or not something was a theorem, even in exponential time, then one could solve the halting problem, since program $p$ halts on input $n$ if and only if this is provable in PA. – JDH Aug 10 '10 at 1:21
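The verify-versus-find asymmetry in the last answer is easy to make concrete (my own sketch, not from the thread): checking a satisfying assignment for a CNF formula takes time linear in the formula, while no polynomial-time method of finding one is known.

```python
def verify_sat(clauses, assignment):
    """Check a SAT certificate in time linear in the formula size.
    clauses: list of clauses, each a list of nonzero ints (negative = negated variable).
    assignment: dict mapping variable number -> bool."""
    return all(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in clauses
    )

# (x1 or not x2) and (x2 or x3): verifying a witness is easy;
# finding one is believed to be hard in the worst case.
clauses = [[1, -2], [2, 3]]
print(verify_sat(clauses, {1: True, 2: True, 3: False}))  # True
```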
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 10, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9504569172859192, "perplexity_flag": "middle"}
http://mathhelpforum.com/discrete-math/45308-combinatorics-r-digit-ternary-sequences.html
# Thread:

1. ## combinatorics - r-digit ternary sequences

Hi, I have found some examples online of ternary sequences but still don't understand how to count them. How many r-digit ternary sequences are there with: a) an even # of 0s? b) an even # of 0s and an even # of 1s? c) at least one 0 and at least one 1? Thanks for any help.

2. Originally Posted by dixie How many r-digit ternary sequences are there with: a) an even # of 0s? b) an even # of 0s and an even # of 1s? c) at least one 0 and at least one 1? I understand that the reply is really late, but I was stopped by part (b). The other two parts have rather simple solutions. But I found no simple way to model part (b). Here are my answers. (a) $\sum\limits_{k = 0}^{\left\lfloor r/2 \right\rfloor } \binom{r}{2k}\, 2^{r - 2k}$. (b) $\sum\limits_{k = 0}^{\left\lfloor r/2 \right\rfloor } \sum\limits_{j = 0}^{\left\lfloor r/2 \right\rfloor - 2k} \binom{r}{2k}\binom{r-2k}{2j}$. This is beautifully simple. (c) $3^r - 2^{r + 1} + 1$.

3. If the previous reply was late then this one is really, really late, but I can't resist applying some exponential generating functions to these problems. I don't know Dixie's background, so he/she may not be able to follow the generating functions, but maybe the final answers will be useful anyway. In what follows, $a_r$ is the number of r-digit ternary sequences and $f(x) = \sum_{r=0}^\infty \frac{a_r}{r!} x^r$ is the associated exponential generating function. a) $f(x) = (e^x + e^{-x}) \, e^{2x} / 2$, $a_r = (3^r + 1)/2$ b) $f(x) = [(e^x + e^{-x}) / 2]^2 \, e^x$, $a_r = (3^r + 2 + (-1)^r) / 4$ c) $f(x) = (e^x - 1)^2 \, e^x$, $a_r = 3^r - 2^{r+1} + 1$ I haven't attempted to figure out whether the answers to a) and b) are actually different from Plato's answers, or simply the same answers written in a different form.
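The closed forms above are easy to sanity-check by brute force; here is a short Python sketch (not from the thread; the helper name is ad hoc) that enumerates all $3^r$ sequences for small $r$ and compares the counts against the generating-function answers:

```python
from itertools import product

def counts(r):
    """Brute-force counts for parts (a), (b), (c) over all 3**r sequences."""
    a = b = c = 0
    for seq in product((0, 1, 2), repeat=r):
        zeros, ones = seq.count(0), seq.count(1)
        if zeros % 2 == 0:
            a += 1
            if ones % 2 == 0:
                b += 1
        if zeros >= 1 and ones >= 1:
            c += 1
    return a, b, c

for r in range(1, 9):
    a, b, c = counts(r)
    assert a == (3**r + 1) // 2                # part (a)
    assert b == (3**r + 2 + (-1)**r) // 4      # part (b)
    assert c == 3**r - 2**(r + 1) + 1          # part (c)
    print(r, (a, b, c))
```

The assertions pass for every small $r$, so in that range the exponential-generating-function answers agree with Plato's sums as well.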
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 11, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9569618701934814, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/288918/about-the-use-of-stirling-approximation
# About the use of Stirling approximation

How to prove this inequality: $$\ln \Gamma \left( x \right)-2\ln \Gamma \left( \frac{x+1}{2} \right)>\frac{2x}{3}$$ Sorry, I forgot to mention that $x>300$ - 2 Try plugging in $x=1$ to see what happens. – Byron Schmuland Jan 28 at 13:41 1 @gauss115 - please revise the question; as Byron pointed out, the statement as it stands is false. – nbubis Jan 28 at 13:54 @nbubis sorry! The condition is $x>300$ – gauss115 Jan 28 at 14:00 How do you know this is true? What did you try to show this is true? – Did Jan 28 at 14:06 ## 2 Answers Another approach would be to use approximations. There is a quickly convergent version of Stirling's formula which goes like this: $$\ln{\Gamma(x)}=\left(x-\frac{1}{2}\right)\ln{x}-x+\frac{\ln{2\pi}}{2}+\frac{1}{12(x+1)}+O(x^{-2})$$ (see http://goo.gl/9hsnO). Derive upper and lower bounds from this and plug back into your inequality. You'll end up with a relatively straightforward logarithmic inequality. - Simply take the derivative of your function: $$f'(x) = \psi(x)-\psi\left(\frac{x+1}{2}\right)$$ Then show that $f''(x) >0$ and that $f'(x)>2/3$ for, say, $x=20$. This shows that the derivative is larger for all $x>20$. Now calculate $f(300) > 200$ to prove that $f(x) > 2x/3$ for all $x>300$. -
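Before attempting either proof, the inequality is easy to probe numerically, since `math.lgamma` computes $\ln\Gamma$ directly. A short Python sketch (the grid of test points is arbitrary):

```python
import math

def f(x):
    """f(x) = ln Gamma(x) - 2 ln Gamma((x+1)/2)."""
    return math.lgamma(x) - 2.0 * math.lgamma((x + 1) / 2.0)

# Probe the margin f(x) - 2x/3; it should be positive for x > 300.
# Stirling gives f(x) ~ x ln 2 - (1/2) ln x + const, and ln 2 > 2/3,
# which is why the margin grows roughly linearly.
for x in (300.0, 301.0, 350.0, 500.0, 1e3, 1e4, 1e6):
    margin = f(x) - 2.0 * x / 3.0
    print(f"x = {x:>9.0f}: f(x) - 2x/3 = {margin:.3f}")
    if x > 300:
        assert margin > 0
```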
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 10, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9170681238174438, "perplexity_flag": "head"}
http://mathhelpforum.com/calculus/84650-integration-calculus-final.html
# Thread:

1. ## Integration Calculus Final

Hello everybody. I'm in sort of a bind here. For my calculus class we were given 60 integration problems with answers. Our job is to do the work from start to finish. My problem is I'm just so overwhelmed. We learn a new technique every class and I can't remember all the formulas and techniques to finish the problems. Now I don't want straight answers, I really want to learn this stuff. Here's an "easy" question from the final that I could not finish. Problem: $\int (\tan x + \cot x)^2 \, dx$ And here is the answer: $\tan x - \cot x + C$ Looks easy enough, but for some reason I just can't make integration work for me. I don't know why the LaTex didn't work.

2. Originally Posted by LanceyPants Hello everybody. I'm in sort of a bind here. [...] Looks easy enough, but for some reason I just can't make integration work for me. Try $(\tan x + \cot x)^2 = \tan^2 x + 2 \tan x \cot x + \cot^2 x$ and use some trig identities.

3. Originally Posted by danny arrigo Try $(\tan x + \cot x)^2 = \tan^2 x + 2 \tan x \cot x + \cot^2 x$ and use some trig identities. So at least I'm not completely forgetful. I was able to get that far. All I know is that $\tan x \cot x$ cancels. Other than that I'm still stumped. Here is another one that seems easy but I am just dumbfounded by: $\int 10^{nx} \, dx$ The "n" and the "x" are supposed to be superscripted together.

4. $\int a^u\,du=\frac{a^u}{\ln|a|}+C$

5. How do you all know so much about calculus? I study every day and night and get no progress whatsoever.

6. Study some more, and if that doesn't work out, then there must be something wrong in your approach.

7. Originally Posted by LanceyPants How do you all know so much about calculus? [...] Some of us have been studying Calculus for years. For example, Soroban is a retired Math Professor.

8. Originally Posted by LanceyPants How do you all know so much about calculus? [...] I just started learning this stuff this semester. But I find the worst thing you can ever do for yourself is think something's too hard. Not like your brain will try to comprehend something you've already said it can't, right? Just don't get hung up and you'll be fine.

9. Oops, would probably help to answer that other question, huh? =p $\tan^2 x + \cot^2 x + 2 = (\sec^2 x - 1) + (\csc^2 x - 1) + 2 = \sec^2 x + \csc^2 x$ Now you've got two basic integrals.

10. Originally Posted by derfleurer I just started learning this stuff this semester. [...] Just don't get hung up and you'll be fine. Well of course it's difficult, it's calculus. But for some reason when we started our study on integration, I just can't comprehend a dang thing. But I just can't not hang up on this. There is too much riding on me to get an A. I need a perfect grade.

11.
Originally Posted by LanceyPants But I just can't not hang up on this...I need a perfect grade. Contradiction in terms? =p Really, though, stress won't help you in the least. Most of what you do in calc isn't even calc. Remember that. All you're doing is playing with equations, getting them in a proper form, then performing integration or differentiation after. It's more algebra and trig application than it is how to differentiate/integrate. Check some online videos, they'll really help. Takes all the pressure off of you to have someone do the explaining for you for just a little while. 12. So it's come down to the fact that it's impossible for me to finish my final with a passing grade. I so want to curse down Calculus, but it's my fault for not having the intellectual capabilities to comprehend it. 13. Originally Posted by LanceyPants So it's come down to the fact that it's impossible for me to finish my final with a passing grade. I so want to curse down Calculus, but it's my fault for not having the intellectual capabilities to comprehend it. This thread is heading down a path that's best continued in the Chat Room. Thread closed.
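For anyone checking these two antiderivatives symbolically, a short sympy sketch (it assumes $n \neq 0$) simply differentiates the claimed answers:

```python
import sympy as sp

x = sp.symbols('x')
n = sp.symbols('n', nonzero=True)

# (1) d/dx [tan x - cot x] should equal (tan x + cot x)^2.
F1 = sp.tan(x) - sp.cot(x)
print(sp.simplify(sp.diff(F1, x) - (sp.tan(x) + sp.cot(x))**2))  # 0

# (2) d/dx [10**(n*x) / (n ln 10)] should equal 10**(n*x).
F2 = 10**(n * x) / (n * sp.log(10))
print(sp.simplify(sp.diff(F2, x) - 10**(n * x)))  # 0
```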
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 9, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9663959741592407, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/188567/when-can-a-series-be-integrated-term-by-term/188573
# When can a series be integrated term by term? I have a function that is defined as a harmonic series, I would like to integrate it over part of its domain. I have been doing this by integrating term by term and summing the result, but I seem to remember something in little Rudin that gave conditions under which this is valid; however, even if I could remember what it said, I still don't think I understood it. So my question is: when is the following true? $$\int_a^b \sum_{i} f_i\left(x\right)\,dx=\sum_i \int_a^b f_i\left(x\right)\,dx$$ - 1 – SL2 Aug 29 '12 at 22:04 ## 2 Answers If the sum is finite (in number of terms) then the formula always works, just by additivity of the integration operator. If the series is uniformly convergent and each $f_{n}(x)$ is integrable, then the formula works. I think there may be examples of pointwise- (but not uniformly-) convergent series for which the formula doesn't work, but I can't seem to find them at the moment. Basically, we need the sequence of partial sums to be uniformly convergent to some limit function, in which case the limit function is the series itself. The result uses the fact that the limit of a sequence of integrals of functions that converge uniformly is the integral of their uniform limit. Edit: This set of notes from UCSC (with references) includes a pretty good treatment of the whole situation, and also has some helpful examples. - If I remember correctly, as long as each of the individual terms is integrable, this is ok. It's when some of the terms might have discontinuities that make an individual term non-integrable, but are cancelled out by similar features in other terms, that this method falls apart. Not a rigorous definition, I know, but in the case of a harmonic series, this shouldn't be problematic. -
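Both answers can be illustrated numerically. In the sketch below (names ad hoc), the first part is a uniformly convergent series where the swap is valid, and the second is a pointwise-only telescoping series where the two sides of the displayed formula disagree:

```python
import math

# (1) Uniform convergence: sum_{m>=1} x**(m-1) = 1/(1-x) on [0, 1/2].
#     Term-by-term integration gives sum_{m>=1} (1/2)**m / m = ln 2,
#     which matches the integral of the limit function 1/(1-x).
term_by_term = sum(0.5**m / m for m in range(1, 200))
print(term_by_term, math.log(2.0))   # both ~ 0.6931...

# (2) Pointwise-only convergence can fail.  Let f_n(x) = n*x*exp(-n*x**2)
#     on [0, 1] and sum the telescoping series g_n = f_n - f_{n-1}.
#     Pointwise the series sums to lim f_n = 0, so integrating the sum
#     gives 0; but each f_n integrates to (1 - exp(-n))/2 -> 1/2, so the
#     summed integrals converge to 1/2 instead.
def integral_f(n):
    # exact value of integral_0^1 n*x*exp(-n*x**2) dx
    return (1.0 - math.exp(-n)) / 2.0 if n > 0 else 0.0

print(sum(integral_f(n) - integral_f(n - 1) for n in range(1, 60)))  # ~0.5
```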
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9735849499702454, "perplexity_flag": "head"}
http://mathoverflow.net/questions/98579/deformation-of-lagrangian-manifolds/98928
## Deformation of Lagrangian manifolds

I read recently that on a symplectic manifold $M$, the infinitesimal deformations of a Lagrangian manifold $L$ can be identified with closed 1-forms in $T^*L$ (the cotangent bundle of L). How can this correspondence be made? I suppose that one somehow has to use Weinstein's tubular neighborhood theorem, but I can't write down the required map. I am sure that this construction is standard in symplectic geometry, so if someone knows a good reference please let me know. - ## 4 Answers You don't need to use Weinstein's tubular neighborhood theorem to assign closed one-forms on L to deformations of L. Here is a construction which makes it clear the assignment is canonical. A smooth family of Lagrangian submanifolds is given by a pair of smooth maps $$\mathbb R \xleftarrow{t}X \xrightarrow{f} M$$ so that the map $t$ is a proper submersion and $f$ includes every fiber of $t$ as a Lagrangian submanifold of $M$. There is a vertical cotangent bundle of $X$ which is the quotient of $T^*X$ by the pullback of one-forms from $\mathbb R$. This vertical cotangent bundle should be regarded as putting together the cotangent bundles of the fibers of $t$ into a smooth vector bundle over $X$. Each differential form $\theta$ on $X$ has a well defined projection to a section $\pi\theta$ of the wedge of the vertical cotangent bundle, which is the definition of a smooth family of differential forms on the fibers of $t$. The fact that this is a family of Lagrangian submanifolds implies that $\pi(f^*\omega)=0$. Choose any smooth vector field $\frac \partial {\partial t}$ on $X$ so that $\frac\partial{\partial t} t=1$. Then $$\pi(\iota_{\frac \partial{\partial t}} f^*\omega)$$ is a family of one-forms on the fibers of $t$ which does not depend on the choice of $\frac \partial {\partial t}$. It is a family of closed one-forms because $\pi$ commutes with $d$ and $$\pi L_{\frac\partial{\partial t}}f^*\omega=0.$$ This construction reverses the assignment of a deformation of L to a closed one-form on L which uses the Weinstein neighborhood theorem. - Is this how deformations of Lagrangian manifolds are defined? It would sure make life easier if you just wrote "$X = \Bbb R \times L$" or something like that. Also, this is sort of what I asked. Where can I read more about this stuff? – The Common Crane Jun 6 at 12:35 If I wrote $X=\Bbb R\times L$, that would mean deformations of Lagrangian submanifolds parametrized by $L$. This is a different, and much larger, space, because it has more structure. The tangent space would then also have to include vector fields on $L$. – Brett Parker Jun 7 at 1:14 I'm afraid I don't know any reference where this kind of stuff is explained explicitly. – Brett Parker Jun 7 at 12:02 You already know that the pair $(M,L)$ of a symplectic manifold and a Lagrangian submanifold is locally isomorphic to $(T^*L, L)$. This is the beginning of Corollary 6.2 of Weinstein, who continues: "and the lagrangian submanifolds of $M$ "near" $L$ are in 1-1 correspondence with "small" closed forms on $L$." 
The correspondence in question (explained on the previous page of Weinstein's paper) is that "a submanifold of $T^*L$ transversal to the fibres is locally the graph of a 1-form $\sigma:L\to T^*L$. The graph of $\sigma$ is isotropic if and only if... $\sigma$ is a closed 1-form." In short, the map you want attaches to a closed 1-form (on $L$!) its graph in $M\simeq T^*L$. Update: This construction identifies a neighborhood of $f_0:L\hookrightarrow M$ in the space of embeddings (Whitney C$^1$ topologized), with a neighborhood of zero in the space of closed 1-forms on $L$. See Thm II.3.8 in Michèle Audin's notes (available here). She concludes that $Z^1(L)$ "can be considered as a neighbourhood of $f_0$ in the "manifold" of deformations of $f_0$, or as its tangent space at $f_0$." - I am aware of the things you wrote. I was asking for the correspondence between infinitesimal deformations of a Lagrangian manifold L and closed 1-forms in $T^*L$. So if you have a family $L_t$ of Lagrangian submanifolds in $T^*L$ close to $L$ (closed 1-forms, basically), then how does one associate "canonically" to "$\frac{d}{dt}L_t$" a closed 1-form? One can just differentiate with respect to $t$ for each $x \in L$ in $(L_t)_x$, but I am thinking that this might not be the canonical way, since along the smooth deformation $L_t$ one might afford variations that do not respect fibers! – The Common Crane Jun 1 at 17:42 Being transverse to the fibers is an open condition. I have added some details and a reference in my answer above; if this still doesn't answer your question, then please edit to make it more precise. – Francois Ziegler Jun 6 at 1:05 In general, deformations of a submanifold L of an ambient space M are identified with sections of L's normal bundle: $TM|_{L}/TL$. For your case, the normal bundle is canonically isomorphic to $T^*L$ by way of the symplectic form. To be more concrete: look at just the `exact' deformations, deformations whose one-form is exact and so given by a function on $L$. Take such a function $f$. Extend it arbitrarily to a function $F$ on M. Take the Hamiltonian vector field $X_F$ of $F$, restricted to $L$. That $X_F$ defines a vector field which tells you which way to push $L$ into $M$. Note that if $F, G$ are two different extensions of $f$ then they differ by a function which vanishes on $L$, so that their Hamiltonian vector fields $X_F, X_G$ differ by a vector field tangent to $L$: the vector field is well defined as a section of the normal bundle. In other words, we can think of `$X_f$' as a section of $L$'s normal bundle. You seem to want to go `the other way' and directly concoct a vector field out of `$dL_t/dt$'. How are you going to do that in the general case? - "You seem to want to go `the other way' and directly concoct a vector field out of `$dL_t/dt$'. How are you going to do that in the general case?" That still seems to be the question ... do you know a good reference for deformation of Lagrangian submanifolds? – The Common Crane Jun 1 at 21:12 Generally the calculation is like this: 1. you write down the tubular neighborhood and the exp map there; 2. you do a re-parametrization such that your symplectic form comes in "Darboux type"; then the section of the normal bundle will be a nearby Lagrangian. 
There are some simple examples where you can do the calculation explicitly. For example, consider the unit circle in R^2 with the standard symplectic form; choose polar coordinates to write down the exp map in the tubular neighborhood, and you will find you need a simple substitution to make the symplectic form of the "Darboux type". -
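The graph criterion quoted from Weinstein can be checked symbolically in a toy case. Below is a sketch (my notation, not from the thread): take $L=\mathbb R^2$ and $T^*L=\mathbb R^4$ with $\omega = dx\wedge dp_1 + dy\wedge dp_2$; pulling $\omega$ back along the graph of $\sigma = s_1\,dx + s_2\,dy$ leaves a multiple of $dx\wedge dy$ whose coefficient is exactly the $d\sigma$ obstruction.

```python
import sympy as sp

x, y = sp.symbols('x y')
s1 = sp.Function('s1')(x, y)   # sigma = s1 dx + s2 dy, a generic 1-form on L
s2 = sp.Function('s2')(x, y)

# On the graph (x, y) |-> (x, y, s1, s2) we have dp1 = s1_x dx + s1_y dy
# and dp2 = s2_x dx + s2_y dy, so
#   dx ^ dp1  pulls back to   s1_y dx ^ dy
#   dy ^ dp2  pulls back to  -s2_x dx ^ dy
pullback_coeff = sp.diff(s1, y) - sp.diff(s2, x)

# d(sigma) = (s2_x - s1_y) dx ^ dy, so the pullback of omega is -d(sigma):
d_sigma_coeff = sp.diff(s2, x) - sp.diff(s1, y)
print(sp.simplify(pullback_coeff + d_sigma_coeff))   # 0

# Hence the graph is isotropic (here: Lagrangian) iff sigma is closed.
```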
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 58, "mathjax_display_tex": 3, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.949491560459137, "perplexity_flag": "head"}
http://mathoverflow.net/revisions/80315/list
Informally speaking, taking the limit of two's complement as the number of bits goes to $\infty$, the integers are just the eventually constant binary sequences (which are naturally represented by finite binary sequences). For this to work, said sequences must start with the least significant bit, i.e., $1001011\overline{0}$ is interpreted as $2^0+2^3+2^5+2^6$ and $1001010\overline{1}$ is interpreted as $2^0+2^3+2^5-2^7$. The arithmetic and ordering of these strings is natural (and efficient for microprocessors when we restrict from $\mathbb{Z}$ to, say, $\{-2^{63},\ldots,2^{63}-1\}$). The above can be reinterpreted as the following less direct construction. If $R$ is the inverse limit of rings $\lim_{\infty\leftarrow n}\mathbb{Z}/2^n\mathbb{Z}$, then the diagonal map $\Delta\colon\mathbb{Z}\rightarrow R$ given by $m\mapsto \lim_{\infty\leftarrow n}(m\mod 2^n)$ is an injective ring homomorphism. [Edit: The image is characterized as the set of $\vec x\in R$ for which the truth value of $x(n+1)=x(n)$ is eventually constant.] Moreover, the ordering of $\mathbb{Z}$ is coded via: $m\geq 0\Leftrightarrow(m\mod 2^n: n\in\mathbb{N})$ is eventually constant. Update: I couldn't resist the temptation to write a functional programming implementation.
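The implementation linked in the answer is not reproduced on this page; the sketch below is a reconstruction in Python (my code, not the author's) of just the encoding and decoding, checked against the two examples in the text:

```python
def encode(n):
    """Two's complement, least significant bit first:
    returns (finite bits, eventually repeated bit)."""
    bits, rep = [], 0 if n >= 0 else 1
    while n not in (0, -1):      # >> is an arithmetic shift in Python
        bits.append(n & 1)
        n >>= 1
    return bits, rep

def decode(bits, rep):
    # The repeated tail 2^k + 2^(k+1) + ... behaves like -2^k,
    # matching the two worked examples above.
    k = len(bits)
    return sum(b << i for i, b in enumerate(bits)) - (rep << k)

assert decode([1, 0, 0, 1, 0, 1, 1], 0) == 2**0 + 2**3 + 2**5 + 2**6   # 105
assert decode([1, 0, 0, 1, 0, 1, 0], 1) == 2**0 + 2**3 + 2**5 - 2**7   # -87
assert all(decode(*encode(m)) == m for m in range(-100, 100))
```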
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 43, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9593685865402222, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/32969/acceleration-of-a-car-required-to-reach-point-b-with-velocity-d-from-point-a-wit
# Acceleration of a car required to reach point B with velocity D from point A with velocity C in time T How do I get the acceleration of a car required to reach point B with velocity D from point A with velocity C in time T? I understand the acceleration is not constant but how can I calculate the required acceleration as a function with time as its parameter. - Can you give a bit more detail? Do you need to get from A to B in a specified time and reach B with a specified velocity? If so does it matter how the acceleration varies with time? – John Rennie Jul 27 '12 at 10:47 Yes, in a specified time and then the car's velocity should be D when it reaches the point B. I am interested in the acceleration function and how it varies with time. – RickyTar Jul 27 '12 at 11:05 Very simple answer. Apply an immediate infinite acceleration (okay, deceleration) so as to bring the car to a complete stop. Accelerate to speed $|B-A|/T$ in the direction of point B. Upon arrival at point B, apply infinite deceleration to stop car, and infinite acceleration to give the desired final velocity. [Other solutions may be easier on the car and driver.] – Carl Brannen Jul 27 '12 at 11:40 You need some decision about the functional form of the acceleration, otherwise the problem is too poorly constrained and there are an infinite number of solutions. One possibility would be to travel at speed C for time $t_1$ then accelerate to speed D with some acceleration $a$ and finally travel at speed D until you reach point B. You can adjust $t_1$ and $a$ to make the travel time equal to your required time T. Is this the sort of solution you're looking for? – John Rennie Jul 27 '12 at 13:55 ## 1 Answer Let: $a(t) = a + Kt$ It is given that: $v(0) = C, \ v(T) = D$ $x(0) = A, \ x(T) = B$ Then, $v(T) = C + aT + \dfrac{1}{2}KT^2 = D$ $x(T) = A + CT +\dfrac{1}{2}aT^2 + \dfrac{1}{6}KT^3 = B$ You have two equations and two unknowns, $a$ and $K$. -
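Solving that 2x2 system explicitly and sanity-checking by integration takes only a few lines. A Python sketch (the sample values of A, B, C, D, T are arbitrary):

```python
def accel_profile(A, B, C, D, T):
    """Coefficients (a, K) of a(t) = a + K*t satisfying
    v(T) = C + a*T + K*T**2/2 = D  and
    x(T) = A + C*T + a*T**2/2 + K*T**3/6 = B."""
    K = (6.0 * (C + D) * T - 12.0 * (B - A)) / T**3
    a = (D - C) / T - K * T / 2.0
    return a, K

A, B, C, D, T = 0.0, 100.0, 5.0, 15.0, 8.0
a, K = accel_profile(A, B, C, D, T)

# Check by crude numerical integration of the motion.
x, v, t, dt = A, C, 0.0, 1e-4
while t < T:
    v += (a + K * t) * dt
    x += v * dt
    t += dt
print(round(x, 1), round(v, 2))   # ~ (100.0, 15.0)
```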
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 12, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9100015759468079, "perplexity_flag": "head"}
http://www.ams.org/cgi-bin/bookstore/booksearch?fn=100&pg1=CN&s1=Bertin_Jose&arg9=Jos%C3%A9_Bertin
Introduction to Hodge Theory José Bertin, University of Grenoble I, St. Martin D'Heres, France, Jean-Pierre Demailly, University of Grenoble I, St. Martin d'Heres, France, Luc Illusie, University of Paris-Sud, Orsay, France, and Chris Peters, University of Grenoble I, St. Martin d'Heres, France A co-publication of the AMS and Société Mathématique de France. SMF/AMS Texts and Monographs 2002; 232 pp; softcover Volume: 8 ISBN-10: 0-8218-2040-0 ISBN-13: 978-0-8218-2040-7 List Price: US$75 Member Price: US$60 Order Code: SMFAMS/8 Hodge theory originated as an application of harmonic theory to the study of the geometry of compact complex manifolds. The ideas have proved to be quite powerful, leading to fundamentally important results throughout algebraic geometry. This book consists of expositions of various aspects of modern Hodge theory. Its purpose is to provide the nonexpert reader with a precise idea of the current status of the subject. The three chapters develop distinct but closely related subjects: $$L^2$$ Hodge theory and vanishing theorems; Frobenius and Hodge degeneration; variations of Hodge structures and mirror symmetry. The techniques employed cover a wide range of methods borrowed from the heart of mathematics: elliptic PDE theory, complex differential geometry, algebraic geometry in characteristic $$p$$, cohomological and sheaf-theoretic methods, deformation theory of complex varieties, Calabi-Yau manifolds, singularity theory, etc. A special effort has been made to approach the various themes from their most natural starting points. Each of the three chapters is supplemented with a detailed introduction and numerous references. The reader will find precise statements of quite a number of open problems that have been the subject of active research in recent years. The reader should have some familiarity with differential and algebraic geometry, with other prerequisites varying by chapter. The book is suitable as an accompaniment to a second course in algebraic geometry. Titles in this series are co-published with Société Mathématique de France. SMF members are entitled to AMS member discounts. Readership Graduate students, research mathematicians, and physicists interested in Hodge theory. Reviews "This profound introduction to classical and modern Hodge theory, which discusses the subject in great depth and leads the reader to the forefront of contemporary research in many areas related to Hodge theory ... a masterly guide through Hodge theory and its various applications ... its significant role as an indispensable source for active researchers and teachers in the field ... its translation into English makes it now accessible to the entire mathematical and physical community worldwide. Without any doubt, this is exactly what both those communities and this excellent book on Hodge theory needed and deserved." -- Zentralblatt MATH From reviews of the French Edition: "The present book ... may be regarded as a masterly introduction to Hodge theory in its classical and very recent, analytic and algebraic aspects ... it is by far much more than only an introduction to the subject. 
The material leads the reader to the forefront of research in many areas related to Hodge theory, and that in a detailed highly self-contained manner ... this text is also a valuable source for active researchers and teachers in the field ..." -- Zentralblatt MATH "The book under review is a collection of three articles about Hodge theory and related developments, which are all aimed at non-experts and fulfill, in an extremely satisfactory manner, two functions. First, the basic methods used in the theories are discussed and developed in great detail; second, some newer developments are described, giving the reader a good overview of the more important applications. Furthermore, the style makes these articles a joy to work through, even for the mathematician not encountering these subjects for the first time." -- Mathematical Reviews • J.-P. Demailly -- $$L^2$$ Hodge theory and vanishing theorems
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9065252542495728, "perplexity_flag": "middle"}
http://www.aimath.org/textbooks/beezer/PDMsection.html
# Properties of Determinants of Matrices

We have seen how to compute the determinant of a matrix, and the incredible fact that we can perform expansion about any row or column to make this computation. In this largely theoretical section, we will state and prove several more intriguing properties about determinants. Our main goal will be the two results in Theorem SMZD and Theorem DRMM, but more specifically, we will see how the value of a determinant will allow us to gain insight into the various properties of a square matrix.

## Determinants and Row Operations

We start easy with a straightforward theorem whose proof presages the style of subsequent proofs in this subsection.

Theorem DZRC (Determinant with Zero Row or Column) Suppose that $A$ is a square matrix with a row where every entry is zero, or a column where every entry is zero. Then $\detname{A}=0$. Proof.

Theorem DRCS (Determinant for Row or Column Swap) Suppose that $A$ is a square matrix. Let $B$ be the square matrix obtained from $A$ by interchanging the location of two rows, or interchanging the location of two columns. Then $\detname{B}=-\detname{A}$. Proof.

So Theorem DRCS tells us the effect of the first row operation (Definition RO) on the determinant of a matrix. Here's the effect of the second row operation.

Theorem DRCM (Determinant for Row or Column Multiples) Suppose that $A$ is a square matrix. Let $B$ be the square matrix obtained from $A$ by multiplying a single row by the scalar $\alpha$, or by multiplying a single column by the scalar $\alpha$. Then $\detname{B}=\alpha\detname{A}$. Proof.

Let's go for understanding the effect of all three row operations. First, though, we need an intermediate result, and it is an easy one.

Theorem DERC (Determinant with Equal Rows or Columns) Suppose that $A$ is a square matrix with two equal rows, or two equal columns. Then $\detname{A}=0$. Proof.

Now we can explain the third row operation. Here we go.

Theorem DRCMA (Determinant for Row or Column Multiples and Addition) Suppose that $A$ is a square matrix. Let $B$ be the square matrix obtained from $A$ by multiplying a row by the scalar $\alpha$ and then adding it to another row, or by multiplying a column by the scalar $\alpha$ and then adding it to another column. Then $\detname{B}=\detname{A}$. Proof.

Is this what you expected? We could argue that the third row operation is the most popular, and yet it has no effect whatsoever on the determinant of a matrix! We can exploit this, along with our understanding of the other two row operations, to provide another approach to computing a determinant. We'll explain this in the context of an example. Example DRO: Determinant by row operations.

## Determinants, Row Operations, Elementary Matrices

As a final preparation for our two most important theorems about determinants, we prove a handful of facts about the interplay of row operations and matrix multiplication with elementary matrices with regard to the determinant. But first, a simple, but crucial, fact about the identity matrix.

Theorem DIM (Determinant of the Identity Matrix) For every $n\geq 1$, $\detname{I_n}=1$. Proof.

Theorem DEM (Determinants of Elementary Matrices) For the three possible versions of an elementary matrix (Definition ELEM) we have the determinants, 1. $\detname{\elemswap{i}{j}}=-1$ 2. $\detname{\elemmult{\alpha}{i}}=\alpha$ 3. $\detname{\elemadd{\alpha}{i}{j}}=1$ Proof. 
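Before moving on, the row-operation theorems above (DRCS, DRCM, DRCMA) are easy to watch in action numerically. A small numpy sketch (the matrix and scalars are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.integers(-5, 6, size=(4, 4)).astype(float)
d = np.linalg.det(A)

B = A.copy(); B[[0, 2]] = B[[2, 0]]           # swap rows 0 and 2
print(np.isclose(np.linalg.det(B), -d))       # Theorem DRCS: sign flips

C = A.copy(); C[1] *= 7.0                     # scale row 1 by alpha = 7
print(np.isclose(np.linalg.det(C), 7.0 * d))  # Theorem DRCM

E = A.copy(); E[3] += 2.5 * E[0]              # add 2.5*(row 0) to row 3
print(np.isclose(np.linalg.det(E), d))        # Theorem DRCMA: unchanged
```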
Theorem DEMMM (Determinants, Elementary Matrices, Matrix Multiplication) Suppose that $A$ is a square matrix of size $n$ and $E$ is any elementary matrix of size $n$. Then \begin{equation*} \detname{EA}=\detname{E}\detname{A} \end{equation*} Proof.

## Determinants, Nonsingular Matrices, Matrix Multiplication

If you asked someone with substantial experience working with matrices about the value of the determinant, they'd be likely to quote the following theorem as the first thing to come to mind.

Theorem SMZD (Singular Matrices have Zero Determinants) Let $A$ be a square matrix. Then $A$ is singular if and only if $\detname{A}=0$. Proof.

For the case of $2\times 2$ matrices you might compare the application of Theorem SMZD with the combination of the results stated in Theorem DMST and Theorem TTMI. Example ZNDAB: Zero and nonzero determinant, Archetypes A and B.

Since Theorem SMZD is an equivalence (technique E) we can expand on our growing list of equivalences about nonsingular matrices. The addition of the condition $\detname{A}\neq 0$ is one of the best motivations for learning about determinants.

Theorem NME7 (Nonsingular Matrix Equivalences, Round 7) Suppose that $A$ is a square matrix of size $n$. The following are equivalent. 1. $A$ is nonsingular. 2. $A$ row-reduces to the identity matrix. 3. The null space of $A$ contains only the zero vector, $\nsp{A}=\set{\zerovector}$. 4. The linear system $\linearsystem{A}{\vect{b}}$ has a unique solution for every possible choice of $\vect{b}$. 5. The columns of $A$ are a linearly independent set. 6. $A$ is invertible. 7. The column space of $A$ is $\complex{n}$, $\csp{A}=\complex{n}$. 8. The columns of $A$ are a basis for $\complex{n}$. 9. The rank of $A$ is $n$, $\rank{A}=n$. 10. The nullity of $A$ is zero, $\nullity{A}=0$. 11. The determinant of $A$ is nonzero, $\detname{A}\neq 0$. Proof.

Computationally, row-reducing a matrix is the most efficient way to determine if a matrix is nonsingular, though the effect of using division in a computer can lead to round-off errors that confuse small quantities with critical zero quantities. Conceptually, the determinant may seem the most efficient way to determine if a matrix is nonsingular. The definition of a determinant uses just addition, subtraction and multiplication, so division is never a problem. And the final test is easy: is the determinant zero or not? However, the number of operations involved in computing a determinant by the definition very quickly becomes so excessive as to be impractical.

Now for the coup de grâce. We will generalize Theorem DEMMM to the case of any two square matrices. You may recall thinking that matrix multiplication was defined in a needlessly complicated manner. For sure, the definition of a determinant seems even stranger. (Though Theorem SMZD might be forcing you to reconsider.) Read the statement of the next theorem and contemplate how nicely matrix multiplication and determinants play with each other.

Theorem DRMM (Determinant Respects Matrix Multiplication) Suppose that $A$ and $B$ are square matrices of the same size. Then $\detname{AB}=\detname{A}\detname{B}$. Proof.

It is amazing that matrix multiplication and the determinant interact this way. Might it also be true that $\detname{A+B}=\detname{A}+\detname{B}$? (See exercise PDM.M30.)
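The last two theorems, and the closing question, are just as easy to probe numerically. Another small numpy sketch (random matrices; nothing special about the seed):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.integers(-3, 4, size=(4, 4)).astype(float)
B = rng.integers(-3, 4, size=(4, 4)).astype(float)

# Theorem DRMM: det(AB) = det(A) det(B)
print(np.isclose(np.linalg.det(A @ B),
                 np.linalg.det(A) * np.linalg.det(B)))

# Theorem SMZD via Theorem DERC: equal rows force singularity, det = 0
S = A.copy(); S[2] = S[0]
print(np.isclose(np.linalg.det(S), 0.0))

# ...but det(A+B) and det(A) + det(B) generally differ:
print(np.linalg.det(A + B), np.linalg.det(A) + np.linalg.det(B))
```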
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 60, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.877512514591217, "perplexity_flag": "head"}
http://mathoverflow.net/questions/28160/is-there-another-proof-for-dirichlets-theorem
## Is there another proof for Dirichlet's theorem? [closed]

Possible Duplicate: Is a "non-analytic" proof of Dirichlet's theorem on primes known or possible? Dirichlet's theorem on primes in arithmetic progression states that there are infinitely many primes of the form $kn+h$ given that $k$ and $h$ are coprime. Is there a short proof for this? - 14 short answer: no. – KConrad Jun 14 2010 at 20:30 3 "It is rash to assert that a particular theorem cannot be proved in a particular way." Thought you were an endorser of that viewpoint, Professor K. C. – J. H. S. Jun 14 2010 at 20:40 5 Every proof I've ever seen takes the same form up until the final steps. (1) Introduce the character group of the unit group of $Z/N$. (2) Consider the sum $\sum \chi(k)/k$, where $\chi$ is a character of $Z/N$. Notice that this sum is much larger for $\chi$ trivial than for $\chi$ nontrivial. (3) Use the multiplicativity of $\chi$, and step 2, to deduce that $\sum 1/p$ grows much faster than $\sum \chi(p)/p$. (4) THE HARD STEP: Deduce somehow that $\sum \chi(k)/k \neq 0$, so $- \sum \chi(p)/p$ is also small. (5) Deduce the theorem. – David Speyer Jun 14 2010 at 20:52 5 J.H.S.: I didn't mean there can't be a short proof, but rather that (right now) there isn't one. That's what it seemed he was asking about, not some meta-mathematical query on the possible existence of a short proof. – KConrad Jun 14 2010 at 21:04 4 I have voted to close, but not for the reason that others have (they said "exact duplicate"). Rather, I think that the question "does there exist a short proof of Theorem X?" is inherently vague and subjective and could well lead to arguments of the form "Proof X which assumes Y and takes Z pages is / is not short." Please clarify what you actually want to know. There are proofs of Dirichlet's theorem which avoid complex or even real analysis, but I am not aware of a proof which could be given in an undergraduate course in less than a week of lectures. – Pete L. Clark Jun 15 2010 at 3:12

## 1 Answer

Well, there are short proofs of particular instances of the result. For example, emulating the Euclidean assault on the infinitude of primes, one can establish, almost effortlessly, that there are infinitely many primes of the form 4k+3. Nevertheless, you have to be warned that there is no way to strengthen this technique in order to get the result for every arithmetic progression. You may want to take a look at [1]. In that note, Professor Murty mentions that it was I. Schur who first derived a sufficient condition for the existence of a "Euclidean" proof for the infinitude of primes in the arithmetic progression $\{mk+a\}_{k \in \mathbb{N}}$. Edit: As David Speyer mentioned above, one of the main ingredients in the proof is a certain non-vanishing result for L-series. Hence, a way in which one might shorten the proof is by spotting the shortest demonstration for the corresponding non-vanishing theorem. I highly recommend that you take a look at the thread in [2] if you wish to learn more about this particular matter. References [1] M. R. Murty, Primes in certain arithmetic progressions, J. Madras Univ. (1988), 161-169. [2] Shortest/Most elegant proof for the non-vanishing of $L(1, \chi)$: http://mathoverflow.net/questions/25794/shortest-most-elegant-proof-for-l1-chi-neq-0/25815#25815 - Oh, that's a different matter! 
The person who asked the question should definitely clarify if he was asking for a short proof of the general theorem or if he'd be content with a short proof of some special cases (which usually don't give a flavor of the general proof). – KConrad Jun 14 2010 at 21:01 Point taken, Sire. Nonetheless, I decided to enter the reply because I thought it'd help to disseminate the fact that there are specific arithmetic progressions for which the related proofs are as short as possible. – J. H. S. Jun 14 2010 at 21:44 1 There's also a short proof for $4k+1$. Assume there were a finite number of such primes, take their product, square it, and add one. Then there is a prime $p$ that divides this result. This forces $-1$ to be a square in $F_p$, hence $p$ is congruent to 1 modulo 4, contradiction. – David Carchedi Jun 14 2010 at 22:09
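For readers who like to see the Euclid-style arguments run, here is a short sympy sketch (the seed lists of primes are arbitrary) of both the 4k+3 construction from the answer and the 4k+1 construction from the last comment:

```python
from sympy import factorint

# 4k+3: from primes p1..pm = 3 (mod 4), form 4*p1*...*pm - 1.  It is
# = 3 (mod 4), so some prime factor must be = 3 (mod 4), and none of
# the pi divide it.
N = 4 * 3 * 7 * 11 - 1                 # 923 = 13 * 71, and 71 = 3 (mod 4)
print(factorint(N), [p % 4 for p in factorint(N)])

# 4k+1 (Carchedi's comment): square the product of known 4k+1 primes
# and add one; every *odd* prime factor is then forced to be 1 (mod 4),
# since -1 must be a square mod p.
M = (5 * 13) ** 2 + 1                  # 4226 = 2 * 2113, 2113 = 1 (mod 4)
print(factorint(M), [p % 4 for p in factorint(M) if p != 2])
```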
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 22, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9457387924194336, "perplexity_flag": "head"}
http://crypto.stackexchange.com/questions/1614/rsa-cracking-the-same-message-is-sent-to-two-different-people-problem
# RSA cracking: the same message is sent to two different people

Suppose we have two people: Smith and Jones. Smith's public key is e=9, n=179 and Jones's public key is e=13, n=179. Bob sends them a message $M$. The encrypted message $C_s$ to Smith is 32. The encrypted message $C_j$ to Jones is 127. I tried to solve this problem with no luck. First I wrote both of them down as equations and tried to play with the modulus, multiplying and adding, but I can't retrieve the solution. I know there is a way to solve it but I haven't found it anywhere. EDIT: It's not a realistic situation, just practice. This is primitive RSA; it has no padding. I'm beginning to think that the problem is wrong, maybe because the results I get are not correct. This is the original problem text: Imagine that you are a CIA double agent. As a good spy, you have discovered that the agents Smith and Jones share the same modulus in their respective RSA public keys, namely ($e_s$ = 9, n = 179) and ($e_j$ = 13, n = 179). After some days sniffing the network, you see that the CIA director has sent the same message m to both agents. Concretely, he has sent $c_s$ = 32 and $c_j$ = 127. Can you recover the original message m? Solution: m = 10. My resolution: So I have the following equations: $c_s = m^9 \bmod 179$ and $c_j = m^{13} \bmod 179$. And I began to use the extended Euclidean algorithm: $9a + 13b = \gcd(9,13)$, which gives me $a = 3$ and $b = -2$. As $b$ is negative, we calculate $i = c_j^{-1} \bmod 179$, the inverse of $c_j$ (127), which is $-31$. And finally: $M = c_s^a \cdot i^{-b} \bmod n$, so $M = 32^3 \cdot (-31)^2 \bmod 179 = 10$. Great! thanks!!! - 2 Note that such "cracking" entails using RSA without padding (so it is not "the" RSA, only the core mathematical operation, but known to have a number of weaknesses), and also that Smith and Jones share the same modulus, which means that they have the "same" private key (at least they both know the factorization of the modulus, so they can compute each other's private key). That's not a realistic situation. – Thomas Pornin Jan 10 '12 at 13:03 I know it's just an example, but to extend Thomas' point, also note the modulus itself is prime, allowing you to decrypt a single message by computing the inverse of $e$ - in other words, you don't actually need the same message to be sent to two different people in this scenario - the inverse of $e=13$ is $d=137$ and you can compute $127^{137} \equiv 10 \mod(179)$. – Antony Vennard Jan 10 '12 at 13:58 ## 1 Answer I'm getting the following information from here (slide 26). $c_1 \equiv m^{e_1} \mod n$ and $c_2 \equiv m^{e_2} \mod n$. If $\gcd(e_1,e_2)=1$ then $\exists a,b\in\mathbb{Z} : e_1\cdot a + e_2\cdot b = 1$ ($a$ and $b$ can be found by the extended Euclidean algorithm). And, $m\equiv c_1^a\cdot c_2^b \mod n$. Note: In practice, either $a$ or $b$ will be negative. WLOG, let $b$ be negative. This leads to problems in the above equation. To get around this, use the following computation. Let $i\equiv c_2^{-1} \mod n$ (i.e., the modular inverse of $c_2$). Then $m\equiv c_1^a\cdot i^{-b} \mod n$. - Oh my god. That's why it fails! Thanks so much for your help, I'll resolve it right now! – Tomas Jan 10 '12 at 13:37
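The recipe in the answer drops straight into code. A Python sketch (function names are mine; the three-argument `pow` modular inverse needs Python 3.8+):

```python
def egcd(a, b):
    """Extended Euclid: returns (g, x, y) with a*x + b*y = g = gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = egcd(b, a % b)
    return g, y, x - (a // b) * y

def common_modulus_attack(n, e1, c1, e2, c2):
    g, a, b = egcd(e1, e2)
    assert g == 1, "exponents must be coprime"
    if b < 0:                  # handle the negative exponent via an inverse
        c2 = pow(c2, -1, n)
        b = -b
    return pow(c1, a, n) * pow(c2, b, n) % n

# The numbers from the exercise: recovers m = 10.
print(common_modulus_attack(179, 9, 32, 13, 127))
```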
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 50, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9398024082183838, "perplexity_flag": "middle"}
http://mathhelpforum.com/advanced-statistics/27199-m-t-calculus-can-you-pls-double-check.html
# Thread:

1. ## M(t) calculus - can you please double check?

Hello... Would someone please double check and see if I make mistakes in the calculation of this integral? Thanks in advance!

$M_X(t)=\int_0^\infty 2 \lambda x\, e^{-\lambda x^2+tx}\, dx$

The exponent can be rewritten as:

$-\lambda\left(x-\frac{t}{2\lambda}\right)^2+\lambda\left(\frac{t}{2\lambda}\right)^2$

Therefore:

$M_X(t)=e^{\lambda(\frac{t}{2\lambda})^2}\int_0^\infty 2 \lambda x\, e^{-\lambda(x-\frac{t}{2\lambda})^2}\, dx = e^{\lambda(\frac{t}{2\lambda})^2}\int_0^\infty 2 \lambda \left(x-\frac{t}{2\lambda}+\frac{t}{2\lambda}\right) e^{-\lambda(x-\frac{t}{2\lambda})^2}\, dx$

$= e^{\lambda(\frac{t}{2\lambda})^2}\left[\int_0^\infty 2 \lambda \left(x-\frac{t}{2\lambda}\right) e^{-\lambda(x-\frac{t}{2\lambda})^2}\, dx + t\int_0^\infty e^{-\lambda(x-\frac{t}{2\lambda})^2}\, dx\right]$

$= e^{\lambda(\frac{t}{2\lambda})^2}\left[-e^{-\lambda(x-\frac{t}{2\lambda})^2}\Big|_0^\infty + t\int_0^\infty e^{-\lambda(x-\frac{t}{2\lambda})^2}\, dx\right]$

I now convert the last remaining integral into a normal Gaussian of area 1/2 between zero and infinity:

$\mu=\frac{t}{2\lambda},\ \sigma^2=\frac{1}{2\lambda} \Rightarrow\ t\frac{\sigma\sqrt{2\pi}}{\sigma\sqrt{2\pi}}\int_0^\infty e^{-\frac{(x-\mu)^2}{2\sigma^2}}\,dx = \frac{t\sigma\sqrt{2\pi}}{2} = \mu\sqrt{\lambda\pi}$

Therefore, substituting:

$M_X(t)=e^{\lambda\mu^2}\left[-e^{-\lambda(x-\mu)^2}\Big|_0^\infty +\mu\sqrt{\lambda\pi}\right] = e^{\lambda\mu^2}\left[e^{-\lambda\mu^2} +\mu\sqrt{\lambda\pi}\right] = 1+\mu\sqrt{\lambda\pi}\,e^{\lambda\mu^2}$

2. Originally Posted by paolopiace Hello... Would someone please double check and see if I make mistakes in the calculation of this integral? [...] I now convert the last remaining integral into a normal Gaussian of area 1/2 between zero and infinity: Mr F says: This is your mistake. When the Gaussian has a mean of $\mu$, the area is 1/2 between $\mu$ and infinity. Only if the mean is equal to zero will what you've got here be true. [...]

3. ## Mr. F, Thanks...

... believe it or not, this error came to my brain tonight. 
Problem is that if I make $\mu=0$, the starting point of the integral changes and that area cannot be calculated analytically (or... can it?). It seems this problem is not solvable with regular analytical tools. Mr. F, what do you suggest?

4. Originally Posted by paolopiace ... believe it or not, this error came to my brain tonight. Problem is that if I make $\mu=0$, the starting point of the integral changes and that area cannot be calculated analytically (or... can it?). It seems this problem is not solvable with regular analytical tools. Mr. F, what do you suggest? When an integral does not have a known closed form, what we do is invent a new function whose derivative is the integrand.

$\int_0^\infty e^{-\lambda\left(x-\frac{t}{2\lambda}\right)^2}\, dx = \frac{1}{2}\sqrt{\frac{\pi}{\lambda}}\ \operatorname{erfc}\left(-\frac{t}{2\sqrt{\lambda}}\right) = \frac{1}{2}\sqrt{\frac{\pi}{\lambda}}\left[1-\operatorname{erf}\left(-\frac{t}{2\sqrt{\lambda}}\right)\right]$

In this case you need the error function $\operatorname{erf}(x)$, or the complementary error function $\operatorname{erfc}(x)$.
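A quick numerical check of that closed form with scipy (the values of $\lambda$ and $t$ are arbitrary):

```python
import math
from scipy import integrate, special

lam, t = 0.7, 0.4

# Left side: the leftover Gaussian-type integral, evaluated numerically.
num, _ = integrate.quad(
    lambda x: math.exp(-lam * (x - t / (2 * lam)) ** 2), 0.0, math.inf)

# Right side: (1/2) sqrt(pi/lambda) * erfc(-t / (2 sqrt(lambda))).
closed = 0.5 * math.sqrt(math.pi / lam) * special.erfc(-t / (2 * math.sqrt(lam)))
print(num, closed)   # agree to quadrature accuracy
```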
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 20, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9053366780281067, "perplexity_flag": "middle"}
http://nrich.maths.org/4838
# Litov's Mean Value Theorem

##### Stage: 3 Challenge Level:

Start with two numbers, say 8 and 2. This is the start of a sequence of numbers. The rule is that the next number in the sequence is the average of the last two numbers.

So the next number is $1/2$ of $(8+2)$, which equals $5$, so the sequence becomes $8, 2, 5$.

The next number is $1/2$ of $(2+5)$, which equals $3.5$, so the sequence becomes $8, 2, 5, 3.5$.

Continue the sequence until you know what will happen when you continue this process indefinitely. Choose two different starting numbers and repeat the process. Continue exploring with different start numbers, trying to discover what the rule is for finding the limiting value.

Can you find a relationship between the two start numbers and the limiting value? Can you explain why it works?

Now start with three numbers, say $4, 1, 10$. The new rule is that the next number is the mean of the last three numbers.

So the next number is $1/3$ of $(4+1+10)$, which equals $5$, so the sequence becomes $4, 1, 10, 5$.

The next number is $1/3$ of $(1+10+5)$, which equals $16/3$, so the sequence becomes $4, 1, 10, 5, 16/3$.

Continue this sequence and find the limiting value as the process is continued indefinitely. Can you find a rule for finding the limiting value in this case? Can you find a relationship between the three start numbers and the limiting value? Can you explain why it works?

General rule

Now explore what happens when you have $n$ start numbers and the rule for working out the next number changes to finding the average of the last $n$ numbers.
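If you would rather let a computer grind out the terms while you look for the pattern, a few lines of R iterate the process for any list of start numbers (a sketch of mine; the cut-off of 50 steps is arbitrary but ample for convergence at this scale):

```r
litov <- function(start, steps = 50) {
  s <- start
  n <- length(start)
  for (i in seq_len(steps)) {
    s <- c(s, mean(tail(s, n)))  # append the mean of the last n terms
  }
  tail(s, 1)                     # approximate limiting value
}

litov(c(8, 2))      # the two-number example above
litov(c(4, 1, 10))  # the three-number example
```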
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 19, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.928607702255249, "perplexity_flag": "middle"}
http://mathhelpforum.com/advanced-statistics/178529-expected-value-variance-arma-1-1-model.html
# Thread: 1. ## Expected value and variance of ARMA(1,1) model

I guess the forum's having LaTeX issues, so I'll try to write the model as best I can:

$y_t = 50 + 0.8y_{t-1} + a_t - 0.2a_{t-1}$

So, I've been poring over notes trying to figure out $E[y_t]$ and $Var[y_t]$, but I just can't find it. Can someone help me out please?

2. Can you re-express $y_t$ in terms of the past errors? NB: this may not be the only/easiest method!

3. Hello, An ARMA process is stationary, so we have $E[y_t]=E[y_{t-1}]=0$, don't we? I don't remember for the variance, can't find my notes atm and it's 2am...

4. I don't think this one is stationary, because of the nonzero constant (50) that is added to each term. Edit: the reasoning in this post is wrong, ignore it. There's an algebra mistake too...

5. My attempt, not sure if it's right (especially the limits on the sums): [formula image not preserved]. It should be (relatively) straightforward to compute the expected value and variance of that using the properties of $a_t$. But don't waste time doing that without verifying the expression is correct. PS: for the variance note that Spoiler: [image not preserved]. Edit: Maybe you're supposed to assume the process has an infinitely long history at time $t$, in which case the sums become infinite and the process might be stationary after all...

6. Ok, let's have a go at it. I guess that the $a$'s represent white noise, i.e. $\{a_t\}$ is $WN(0,\sigma^2)$.

By assuming (weak) stationarity, i.e. that the mean and autocovariance function are both independent of $t$, the expected value can be found by

$E(Y_t)=E(50+0.8Y_{t-1}+a_t-0.2a_{t-1})$
$E(Y_t)=50+0.8E(Y_t)$
$E(Y_t)=50/(1-0.8)=250$

The same expected value can be obtained while actually checking that the process is stationary, by computing its mean and autocovariance function without such prior assumptions (other than that the process has infinite history). For the mean/expectation we can do this by observing that (remember $E(a_t)=0$)

$E(Y_t)=E(50+0.8Y_{t-1}+a_t-0.2a_{t-1})$
$E(Y_t)=50+0.8E(50+0.8Y_{t-2}+a_{t-1}-0.2a_{t-2})$
$E(Y_t)=50+0.8(50+0.8E(50+0.8Y_{t-3}+a_{t-2}-0.2a_{t-3}))$
...
$E(Y_t)=50\sum_{k=0}^{n}0.8^k+0.8^{n+1}E(Y_{t-(n+1)})$

By letting $n\to\infty$, the last term goes to $0$ and the first term converges, giving $E(Y_t)=50/(1-0.8)$ again.

Now, for the variance: keep in mind that $Cov(Y_t,a_s)=0$ for $s>t$, but it may be nonzero elsewhere.

$Var(Y_t)=Var(50+0.8Y_{t-1}+a_t-0.2a_{t-1})$
$Var(Y_t)=0.8^2\,Var(Y_{t-1})+\sigma^2+0.2^2\sigma^2-2\cdot 0.8\cdot 0.2\;Cov(50+0.8Y_{t-2}+a_{t-1}-0.2a_{t-2},\ a_{t-1})$
$Var(Y_t)=0.8^2\,Var(Y_{t-1})+\sigma^2+0.2^2\sigma^2-2\cdot 0.8\cdot 0.2\,\sigma^2$
$Var(Y_t)=0.8^2\,Var(Y_{t-1})+0.72\sigma^2$

Iterating the recursion gives

$Var(Y_t)=(0.8^2)^{n+1}\,Var(Y_{t-(n+1)})+0.72\sigma^2\sum_{k=0}^{n}(0.8^2)^k$

Let $n\to\infty$ (the first term goes to $0$; alternatively assume stationarity, $Var(Y_t)=Var(Y_{t-1})$, or know something about fixed points(?)), so that we end up with

$Var(Y_t)=0.72\sigma^2/(1-0.8^2)=2\sigma^2$
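Both stationary values are easy to confirm by simulation in R (a sketch; $\sigma=1$ is assumed, and burn-in is handled crudely by starting the recursion at the stationary mean):

```r
set.seed(1)
n <- 2e5
a <- rnorm(n)   # white noise with sigma = 1
y <- numeric(n)
y[1] <- 250     # start at the stationary mean
for (t in 2:n) {
  y[t] <- 50 + 0.8 * y[t - 1] + a[t] - 0.2 * a[t - 1]
}
mean(y)  # should be close to 50 / (1 - 0.8)     = 250
var(y)   # should be close to 0.72 / (1 - 0.8^2) = 2
```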
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 17, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9320700764656067, "perplexity_flag": "middle"}
http://cs.stackexchange.com/questions/6576/minimum-vertex-cover-in-a-bipartite-graph
# Minimum Vertex cover in a bipartite graph [closed]

Show that the problem of finding the minimum vertex cover in a bipartite graph reduces to finding a maximum flow. Describe the reduction in a precise and concise way.

- Why linear programming as a tag? – The Unfun Cat Nov 9 '12 at 10:28
- @david: You are posting here for the first time, so let me point out some general rules to get a useful answer. Say what you have tried, and where you got stuck. Also it might be helpful not to phrase your question as a textbook question. – A.Schulz Nov 9 '12 at 10:40

## closed as not a real question by A.Schulz, JeffE, Gilles♦ Nov 9 '12 at 22:05

## 1 Answer

Call the two partitions of the nodes $A$ and $B$. Add two new nodes, a source $s$ and a sink $t$. Connect the source $s$ to all nodes in $A$ with edges of capacity one. Connect all the nodes in $B$ to the sink $t$ with edges of capacity one. Lastly, give all the original edges in the graph a capacity of one.

Now the value of a maximum flow from $s$ to $t$ equals the size of a maximum matching which, by König's theorem, equals the size of a minimum vertex cover. To read off the cover itself, let $Z$ be the set of vertices reachable from $s$ in the residual graph of a maximum flow: the cover consists of the vertices of $A$ not in $Z$ together with the vertices of $B$ in $Z$. Draw this and you will understand.
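Since the max flow on this unit-capacity network is just a maximum bipartite matching, the reduction is easy to play with in code. Below is a small self-contained R sketch of mine (Kuhn's augmenting-path algorithm rather than an explicit flow network); by König's theorem its return value is also the size of a minimum vertex cover:

```r
# adj[[u]] lists the neighbours (indices into B) of vertex u in A
max_matching <- function(adj, nB) {
  match_of <- rep(0L, nB)      # match_of[v]: vertex of A matched to v (0 = free)
  seen <- rep(FALSE, nB)
  try_augment <- function(u) { # search for an augmenting path starting at u
    for (v in adj[[u]]) {
      if (!seen[v]) {
        seen[v] <<- TRUE
        if (match_of[v] == 0L || try_augment(match_of[v])) {
          match_of[v] <<- u
          return(TRUE)
        }
      }
    }
    FALSE
  }
  size <- 0L
  for (u in seq_along(adj)) {
    seen <- rep(FALSE, nB)     # reset the marks for each new search
    if (try_augment(u)) size <- size + 1L
  }
  size
}

# A = {1,2,3}, B = {1,2}, edges 1-1, 2-1, 3-2: the minimum vertex cover has size 2
max_matching(list(1L, 1L, 2L), nB = 2)
```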
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 11, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9363282322883606, "perplexity_flag": "head"}
http://unapologetic.wordpress.com/2010/06/15/the-monotone-convergence-theorem/?like=1&_wpnonce=d19428bcb3
# The Unapologetic Mathematician

## The Monotone Convergence Theorem

We want to prove a strengthening of the dominated convergence theorem. If $\{f_n\}$ is an a.e. increasing sequence of extended real-valued, non-negative, measurable functions, and if $f_n$ converges to $f$ pointwise a.e., then

$\displaystyle\lim\limits_{n\to\infty}\int f_n\,d\mu=\int f\,d\mu$

If $f$ is integrable, then $f$ dominates the sequence $\{f_n\}$, and so the dominated convergence theorem itself gives us the result we assert. What we have to show is that if $\int f\,d\mu=\infty$, then the limit diverges to infinity. Or, contrapositively, if the limit doesn't diverge then $f$ must be integrable.

But this is the limit of a sequence of real numbers, and so if it converges then it's Cauchy. That is, we can conclude that

$\displaystyle\lim\limits_{m,n\to\infty}\left\lvert\int f_m\,d\mu-\int f_n\,d\mu\right\rvert=0$

Our assumption that $f_n$ is a.e. increasing tells us that for any fixed $m$ and $n$, the difference $f_m-f_n$ is either a.e. non-negative or a.e. non-positive. That is,

$\displaystyle\left\lvert\int f_m\,d\mu-\int f_n\,d\mu\right\rvert=\int\lvert f_m-f_n\rvert\,d\mu$

And thus the sequence $\{f_n\}$ is mean Cauchy, and thus mean convergent to some integrable function $g$, which must be equal to $f$ almost everywhere.

One nice use of this is when talking about series of functions. If $\{f_n\}$ is a sequence of integrable functions so that

$\displaystyle\sum\limits_{n=1}^\infty\int\lvert f_n\rvert\,d\mu<\infty$

then I say that the series

$\displaystyle\sum\limits_{n=1}^\infty f_n(x)$

converges a.e. to an integrable function $f$, and further that

$\displaystyle\int f\,d\mu=\int\sum\limits_{n=1}^\infty f_n\,d\mu=\sum\limits_{n=1}^\infty\int f_n\,d\mu$

To see this, we let $S_n(x)$ be the partial sum

$\displaystyle S_n(x)=\sum\limits_{i=1}^n\lvert f_i(x)\rvert$

which gives us a pointwise increasing sequence of non-negative measurable functions. The monotone convergence theorem tells us that these partial sums converge pointwise to some $S(x)$ and that

$\displaystyle\int S\,d\mu=\lim\limits_{n\to\infty}\int S_n\,d\mu=\lim\limits_{n\to\infty}\int\sum\limits_{i=1}^n\lvert f_i\rvert\,d\mu=\lim\limits_{n\to\infty}\sum\limits_{i=1}^n\int\lvert f_i\rvert\,d\mu=\sum\limits_{i=1}^\infty\int\lvert f_i\rvert\,d\mu$

But this is exactly the sum we assumed to converge before. Thus the function $S$ is integrable and the series of the $f_n$ is absolutely convergent. That is, since $S$ must be a.e. finite, the series

$\displaystyle\sum\limits_{n=1}^\infty f_n(x)$

is absolutely convergent for almost all $x$, and so it must be convergent pointwise almost everywhere. Since $S$ dominates the partial sums

$\displaystyle S(x)=\sum\limits_{i=1}^\infty\lvert f_i(x)\rvert\geq\sum\limits_{i=1}^n\lvert f_i(x)\rvert\geq\left\lvert\sum\limits_{i=1}^nf_i(x)\right\rvert$

the bounded convergence theorem tells us that limits commute with integrations here, and thus that

$\displaystyle\begin{aligned}\int f\,d\mu&=\int\sum\limits_{i=1}^\infty f_i\,d\mu\\&=\int\lim\limits_{n\to\infty}\sum\limits_{i=1}^nf_i\,d\mu\\&=\lim\limits_{n\to\infty}\int\sum\limits_{i=1}^nf_i\,d\mu\\&=\lim\limits_{n\to\infty}\sum\limits_{i=1}^n\int f_i\,d\mu\\&=\sum\limits_{i=1}^\infty\int f_i\,d\mu\end{aligned}$
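A concrete illustration of the series statement (my example, not from the original post): take $f_n(x)=x^n/n$ on $(0,1)$ with Lebesgue measure. The hypothesis holds, since

$\displaystyle\sum\limits_{n=1}^\infty\int_0^1\frac{x^n}{n}\,dx=\sum\limits_{n=1}^\infty\frac{1}{n(n+1)}=1<\infty$

and the theorem then says the series converges a.e. to an integrable function whose integral can be computed term by term. Indeed, $\sum x^n/n=-\log(1-x)$ for $0<x<1$, and $\int_0^1 -\log(1-x)\,dx=1$, as predicted.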
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 35, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9033103585243225, "perplexity_flag": "head"}
http://en.m.wikibooks.org/wiki/Solutions_To_Mathematics_Textbooks/Principles_of_Mathematical_Analysis_(3rd_edition)_(ISBN_0070856133)/Chapter_1
# Solutions To Mathematics Textbooks/Principles of Mathematical Analysis (3rd edition) (ISBN 0070856133)/Chapter 1

Unless the contrary is explicitly stated, all numbers that are mentioned in these exercises are understood to be real.

# Chapter 1

## 1

If $r$ is rational ($r\ne 0$) and $x$ is irrational, prove that $r+x$ and $rx$ are irrational.

Solution. Let $r+x=y$. If $y$ were rational then $x=y-r$ would be too. Similarly, if $rx$ were rational then so would be $x=(rx)/r$.

## 2

Prove that there is no rational number whose square is 12.

Solution. Suppose, if possible, that $p,q\ (q\ne 0)\in\mathbb{Z}$ satisfy $(p,q)=1$ and $\frac{p^2}{q^2}=12$. Then $12q^2=p^2$. By the fundamental theorem of arithmetic $p^2$, and therefore $p$, has both 2 and 3 in its factorization, say $p=6k$. So $36k^2=12q^2$, i.e. $3k^2=q^2$, whence $3\mid q$. Since also $3\mid p$, this contradicts $(p,q)=1$.

## 3

Prove Proposition 1.15.

Solution. The results follow from the facts related to $\mathbb{R}$ being a field.

## 4

Let $E$ be a nonempty subset of an ordered set; suppose $\alpha$ is a lower bound of $E$ and $\beta$ is an upper bound of $E$. Prove that $\alpha\le\beta$.

Solution. For any $x\in E$ we have $\alpha\le x\le\beta$, and the result follows.

## 5

Let $A$ be a nonempty set of real numbers which is bounded below. Let $-A$ be the set of all numbers $-x$, where $x\in A$. Prove that $\inf A=-\sup(-A)$.

Solution. Let $x=\inf A$ and $y=\sup(-A)$. We need to show that $-x=y$. We first show that $-x$ is an upper bound of $-A$: let $a\in -A$; then $-a\in A$, so $x\le -a$, i.e. $-x\ge a$. We now show that $-x$ is the least upper bound of $-A$: let $z$ be an upper bound of $-A$; then for all $a\in -A$, $a\le z$, i.e. $-a\ge -z$, so $-z$ is a lower bound of $A$. Since $x=\inf A$, we get $-z\le x$, i.e. $-x\le z$.

## 6

Fix $b>1$.

(a) If $m,n,p,q$ are integers, $n>0$, $q>0$, and $r=m/n=p/q$, prove that $(b^m)^{1/n}=(b^p)^{1/q}$. Hence it makes sense to define $b^r=(b^p)^{1/q}$.

(b) Prove that $b^{r+s}=b^rb^s$ if $r$ and $s$ are rational.

(c) If $x$ is real, define $B(x)$ to be the set of all numbers $b^t$, where $t$ is rational and $t\le x$. Prove that $b^r=\sup B(r)$ when $r$ is rational. Hence it makes sense to define $b^x=\sup B(x)$ for every real $x$.

(d) Prove that $b^{x+y}=b^xb^y$ for all real $x$ and $y$.

Solution. (a) Suppose $(m,n)=1$. Then $mq=pn$ and the fundamental theorem of arithmetic imply that $p=km$ and $q=kn$ with $k\in\mathbb{N}$. So $((b^m)^{1/n})^q=(b^m)^k=b^p$ and we are done. If $(m,n)\ne 1$, reduce $m/n$ to lowest terms, say $s/t$. Then $(b^m)^{1/n}=(b^s)^{1/t}=(b^p)^{1/q}$ by the already worked out case where the ratios are coprime.

(b) We let $r=m/n$ and $s=p/q$ and equivalently show that $b^{mq+pn}=(b^rb^s)^{nq}$. Clearly $(b^rb^s)^{nq}=(b^r)^{nq}(b^s)^{nq}=(b^{m/n})^{nq}(b^{p/q})^{nq}=b^{mq}b^{pn}=b^{mq+pn}$. The last equality holds as the exponents are integers.

(c) Clearly $b^r\in B(r)$, so we need merely show that $b^r$ is an upper bound for $B(r)$; being in $B(r)$, it is then automatically the supremum. Clearly $b^{1/n}>1$, so if $s=m/n$ is any positive rational then $b^s=(b^m)^{1/n}>1$. Now let $p,q$ be any rational numbers with $p<q$. As $b^{q-p}>1$, we get $b^pb^{q-p}=b^q>b^p$. In other words, for every $b^t\in B(r)$ we have $t\le r$ and so $b^t\le b^r$, i.e. $b^r$ is an upper bound.

(d) Suppose $r$ is rational with $r<x+y$. WLOG let $x<y$ and set $\delta=x+y-r>0$. Choose a rational $p$ such that $x-\delta<p<x$ and put $q=r-p$. Then $q<y$. By parts (b) and (c), $b^r=b^{p+q}=b^pb^q\le b^xb^y$.
So $b^xb^y$ is an upper bound for $\{b^r : r\le x+y\}$, hence $b^{x+y}\le b^xb^y$.

Now suppose $p,q$ are rationals with $p\le x$ and $q\le y$. Then $b^{p+q}\in B(x+y)$, so $b^pb^q=b^{p+q}\le b^{x+y}$ by (b), and thus $b^p\le b^{x+y}/b^q$. Since $b^p$ runs over all of $B(x)$ as $p$ varies, $b^{x+y}/b^q$ is an upper bound for $B(x)$, so by definition $b^x\le b^{x+y}/b^q$, i.e. $b^q\le b^{x+y}/b^x$. Again $q$ can be chosen arbitrarily, so $b^{x+y}/b^x$ is an upper bound for $B(y)$. As before this gives $b^y\le b^{x+y}/b^x$, i.e. $b^xb^y\le b^{x+y}$.

## 7

Fix $b>1$, $y>0$, and prove that there is a unique real $x$ such that $b^x=y$ by completing the following outline. (This $x$ is called the logarithm of $y$ to the base $b$.)

(a) For any positive integer $n$, $b^n-1\ge n(b-1)$.
(b) $b-1\ge n(b^{1/n}-1)$.
(c) If $t>1$ and $n>\frac{b-1}{t-1}$, then $b^{1/n}<t$.
(d) If $w$ is such that $b^w<y$, then $b^{w+(1/n)}<y$ for sufficiently large $n$.
(e) If $b^w>y$, then $b^{w-(1/n)}>y$ for sufficiently large $n$.
(f) Let $A$ be the set of all $w$ such that $b^w<y$, and show that $x=\sup A$ satisfies $b^x=y$.
(g) Prove that this $x$ is unique.

Solution. (a) Since $b^n-1=(b-1)(b^{n-1}+b^{n-2}+\cdots+b+1)$ and each of the $n$ terms of the second factor is at least $1$, the result follows.

(b) As $b^{1/n}>1$, (a) applied to $b^{1/n}$ gives $(b^{1/n})^n-1\ge n(b^{1/n}-1)$.

(c) $b^{1/n}=(b^{1/n}-1)+1\le (b-1)/n+1<t$.

(d) Note that $1<b^{-w}y=t$ (say). Choose $n>(b-1)/(t-1)$; then by (c), $b^{1/n}<b^{-w}y$, i.e. $b^{w+(1/n)}<y$ for sufficiently large $n$.

(e) Choose $t=b^w/y>1$. The rest is similar.

(f) From (a), $b^n\ge n(b-1)+1$ for all $n$. For each $z\in\mathbb{R}$, choose $n$ so that $n(b-1)>z-1$, i.e. $n(b-1)+1>z$. Hence for every $z$ there is an $n$ such that $b^n\ge n(b-1)+1>z$, so the set $\{b^n : n\in\mathbb{N}\}$ is unbounded.

Now consider the function $f:\mathbb{R}\to\mathbb{R}$ defined by $f(x)=b^x$. If $x<y$ then $B(x)\subseteq B(y)$, so $b^x\le b^y$; in fact, choosing rationals $p,q$ with $x<p<q<y$ gives $b^x\le b^p<b^q\le b^y$, so $f$ is strictly increasing.

Define $A=\{w : b^w<y\}$ as in the problem. The set $\{b^n : n\in\mathbb{N}\}$ being unbounded guarantees the existence of an $n$ such that $b^n>y$, so $n$ is an upper bound for $A$. Let $x=\sup A$.

Suppose $b^x<y$. By (d), for sufficiently large $n$, $b^{x+(1/n)}<y$, i.e. $x+1/n\in A$. But this is impossible as $x=\sup A$. So $b^x<y$ is not possible.

Suppose $b^x>y$. By (e), for sufficiently large $n$, $b^{x-(1/n)}>y$, i.e. $x-1/n\notin A$. Since $x-1/n$ cannot possibly be an upper bound of $A$ (it is less than the supremum), there is a $w\in A$ such that $x-1/n<w\le x$. But then, as $f$ is increasing, $b^{x-(1/n)}\le b^w<y$, a contradiction as $b^{x-(1/n)}>y$. So $b^x>y$ is not possible. Hence $b^x=y$.

(g) The function $f$ described in (f) is strictly increasing and hence one-to-one.

## 8

Prove that no order can be defined in the complex field that turns it into an ordered field.

Solution. Suppose an order $<$ had been defined. Then $i^2=-1>0$ by Proposition 1.18, since $i\ne 0$. But also $1=1^2>0$, and adding the two inequalities gives $0>0$, a contradiction.

## 9

Suppose $z=a+bi$, $w=c+di$. Define $z<w$ if $a<c$, and also if $a=c$ but $b<d$. Prove that this turns the set of all complex numbers into an ordered set. Does this ordered set have the least-upper-bound property?

Solution. Write $z=(a,b)$ and $w=(c,d)$. If $a<c$ then $z<w$. If $a=c$ then either $b<d$ (so $z<w$), or $b>d$ (so $z>w$), or $b=d$ (so $z=w$). If $a>c$ then $z>w$. So exactly one of the three relations holds. Also, if $z=(a,b)$, $w=(c,d)$, $v=(e,f)$ with $z<w$ and $w<v$, then $z<v$ can be established by considering the various cases. For example, if $a<c$ and $c<e$ then clearly $z<v$. The other cases may be handled similarly.

This ordered set does not have the least-upper-bound property: the imaginary axis $\{(0,b) : b\in\mathbb{R}\}$ is bounded above, for instance by $(1,0)$, but its set of upper bounds is $\{(c,d) : c>0\}$, which has no least element.
## 10

Suppose $z=a+bi$, $w=u+iv$ and $a=\Big(\frac{|w|+u}{2}\Big)^{1/2}$, $b=\Big(\frac{|w|-u}{2}\Big)^{1/2}$. Prove that $z^2=w$ if $v\ge 0$ and that $\bar{z}^2=w$ if $v\le 0$. Conclude that every complex number (with one exception!) has two complex square roots.
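A sketch of the verification for the case $v\ge 0$ (one way the computation can go; no solution is given on the original page): since $a^2-b^2=\frac{(|w|+u)-(|w|-u)}{2}=u$ and $2ab=2\sqrt{\frac{(|w|+u)(|w|-u)}{4}}=\sqrt{|w|^2-u^2}=\sqrt{v^2}=|v|=v$, we get $z^2=(a^2-b^2)+2abi=u+iv=w$. For $v\le 0$ one checks $\bar{z}^2=u-i|v|=u+iv=w$ similarly. The exceptional complex number is $0$, whose two square roots coincide.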
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 100, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9025200009346008, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/94901/characterization-of-big-divisors
Characterization of big divisors

In Kollár and Mori's Birational Geometry of Algebraic Varieties, Lemma 2.60 claims that some multiple of a big divisor induces a birational morphism onto its image in a projective space. But the proof only shows that such a multiple can be written as a sum of an ample divisor and an effective divisor, and says the result is obvious from there. I have tried to fill in the details but have no clue. Thank you for any answers or comments.

Furthermore, the lemma assumes the scheme is a projective variety; must it be integral? Can the result be extended to proper varieties?

The second question: the authors also claim a divisor is big iff its birational pullback is big. I know that when the varieties are integral, normal and proper, this can be done by Zariski's main theorem and the projection formula. Are these conditions necessary? Thank you!

- If $mD=A+E$, where A is wlog very ample, the dimension of the image of $\phi_{|mD|}$ is at least the dimension of the image of $\phi_{|A|}$, because $H^0(X,kA)\subset H^0(X,mkD)$ for $k\ge 0$. – J.C. Ottem Apr 23 2012 at 2:52
- And to see that the rational map you obtain is birational, take a non-zero section $s_E$ of $E$ whose divisor is $E$; then the rational map $x\mapsto [s_0\otimes s_E:\cdots:s_N\otimes s_E]$ defined outside of $E$ ($(s_i)$ being a basis of $H^0(X,kA)$ for $k$ large enough) is clearly birational onto its image, whose dimension is $\dim X$ as John explained above. – Henri Apr 23 2012 at 9:03
- By the way, the book you're referring to has two authors. – Artie Prendergast-Smith Apr 24 2012 at 20:11
- Actually, the missing author is Kollár... – Sándor Kovács Apr 25 2012 at 5:54
- But $mD$ may not be generated by global sections, right? So if the domain of definition of $\phi_{|mD|}$ is not all of $X$, while the image is a closed subscheme with its image scheme structure, why should $\phi_{|mD|}$ be a morphism onto its image? – MZWang Apr 28 2012 at 6:20
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 16, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9329888224601746, "perplexity_flag": "head"}
http://mathoverflow.net/questions/81251?sort=oldest
## Number of spanning trees which contain a given edge

Suppose I have a connected graph $G$ and a fixed edge $e = \langle u, v \rangle \in G$, and I want to count the number of spanning trees that involve $e$. I really only want to estimate the fraction of spanning trees containing $e$ compared to the total number of spanning trees of $G$; that is, I want to find a lower bound $c \leq \kappa(G \backslash e) / \kappa(G)$.

This lower bound should be in terms of the degrees of the vertices $u,v$. Let $c(d,d')$ be the smallest possible value of $\kappa(G \backslash e) / \kappa(G)$ when the vertices have degree $d, d'$. What can one say about $c(d,d')$?

For example, if $d = 1$ or $d' = 1$, then $c(d,d') = 1$ (the edge must be part of a spanning tree of $G$). If $d = 2$, then $c(d,d') \geq 1/2$, as every spanning tree must involve one of the two edges incident on $u$, and the spanning trees using only $e$ are in bijection with the spanning trees using the other edge. If $d = d' = 2$, then $c(d,d') \geq 2/3$; and so on. Is there a general formula for $c(d,d')$?

- Might be helpful: start with the matrix-tree theorem (en.wikipedia.org/wiki/Kirchhoff's_theorem) and replace the entries corresponding to the edge you're interested in with a parameter $t$. You get a polynomial in $t$ (it should be quadratic if you delete the row and column in which $t$ appears off the diagonal) and you want to estimate the ratio between its value at $0$ and its value at $1$. – Qiaochu Yuan Nov 18 2011 at 16:08
- The answer is highly dependent on the graph. Imagine uv being the only edge between two components of the graph. One can substitute other gadgets for an edge to reduce the fraction at will, which will remain largely independent of degree. Gerhard "Ask Me About System Design" Paseman, 2011.11.18 – Gerhard Paseman Nov 18 2011 at 20:00
- Gerhard -- I am asking for a lower bound over all possible graphs. – David Harris Nov 18 2011 at 20:02
- Doesn't $G\backslash e$ usually denote the graph $G$ with edge $e$ deleted? If so, then $\kappa(G \backslash e)$ is counting the number of spanning trees of $G$ that don't involve $e$. – Barry Cipra Nov 18 2011 at 20:04
- @David: I think $G/e$ is standard for contraction, while $G\setminus e$ and $G-e$ are standard for deletion. – Andreas Blass Nov 18 2011 at 21:05

## 3 Answers

The probability that an edge $e=(u,v)$ is part of a uniform spanning tree is equal to the resistance between $u$ and $v$ when the graph is considered as an electric network (see the book by Lyons with Peres, section 4.2). The bounds you get (in terms of the degrees $d_u,d_v$) are
$$\frac{1}{\min(d_u,d_v)} \le R_{eff}(u \leftrightarrow v) \le 1$$
when you allow multiple edges, or
$$\frac{(d_u-1)+(d_v-1)}{(d_u-1)+(d_v-1)+(d_u-1)(d_v-1)} < R_{eff}(u \leftrightarrow v) \le 1$$
when the graph is simple, and these bounds are sharp.

First of all, I think it's important to note that the number of spanning trees containing a given edge may depend on the global properties of a graph rather than just local properties like vertex degrees. For instance, if an edge is a cut-edge (also known as a bridge) then every spanning tree will contain it, but the vertex degrees won't necessarily tell you if that's the case.
To every graph $G$ we can associate a bivariate polynomial over $\mathbb{Z}$ called the Tutte polynomial $T_G(x,y)$. Any standard text on algebraic graph theory (e.g., Norman Biggs' Algebraic Graph Theory; Bela Bollobas' Modern Graph Theory) will contain a treatment of it. The Tutte polynomial encodes a really surprising amount of combinatorial information about a graph, particularly regarding its connectivity and cycle structure. Among other things:

(1) $T_G(1,1)$ is equal to the number of spanning trees of $G$ (which is always positive, if $G$ is connected).

(2) If $e$ is any edge of $G$ that is neither a loop nor a bridge, then $T_{G}(x,y) = T_{G\setminus e}(x,y) + T_{G*e}(x,y)$. $G\setminus e$ is, as in your notation, $G$ with $e$ excised, and $G*e$ is the graph obtained by merging the two vertices of $e$ together (and keeping any loops that form). With some base conditions, this is often taken as the definition of $T_G$.

Notice that $1 - T_{G*e}(1,1)/T_G(1,1) = T_{G\setminus e}(1,1)/T_G(1,1)$. If I understand your problem correctly, you're interested in a choice of $e$ that minimizes the right-hand side. So it might be helpful to consider maximizing the quotient on the left-hand side, but both seem pretty opaque to me. Also I should warn you that Tutte polynomial computations, as you might expect, are NP-hard in general. Hopefully this is of some use.

The assertion below regarding O(1/n) is in contradiction with another posted answer, so I leave the construction available while I check the assertion.

Let $M_n$ be the (graph of the Hasse diagram of the) modular lattice on $(n+2)$ elements. This will have $2n$ edges. Add two more edges on either side. Call the leaves $u$ and $v$, and let us add an edge (the problem edge, called $e$) between $u$ and $v$. I have a $u$-$v$ gadget with $2n+3$ edges on $n+4$ vertices, and $n$-many cycles of length 5. However, $u$ and $v$ have degree 2. Now to the $u$ side of this gadget, add an edge and then dangle your favorite nonempty graph off this edge, and choose a disjoint graph to dangle off of $v$. In this graph, $u$ and $v$ have degree 3. However, any spanning tree that contains $e$ can be modified to one of at least some number of other spanning trees; the analysis is more complicated than I originally imagined, but I think one can use this to show the ratio for this edge is at most O(1/n). So if $d$ and $d'$ are at least 3, I see no useful lower bound for the fraction in terms of the degrees themselves regarding edge $e$.
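Coming back to the first answer, the effective-resistance characterization is easy to sanity-check numerically via the matrix-tree theorem. A small base-R sketch (my example, a 4-cycle):

```r
n_spanning_trees <- function(A) {    # matrix-tree theorem: any cofactor of the Laplacian
  L <- diag(rowSums(A)) - A
  det(L[-1, -1])
}

# 4-cycle on vertices 1..4
A <- matrix(0, 4, 4)
edges <- rbind(c(1, 2), c(2, 3), c(3, 4), c(4, 1))
A[edges] <- 1
A[edges[, 2:1]] <- 1

kG  <- n_spanning_trees(A)           # 4 spanning trees in total
Ae  <- A; Ae[1, 2] <- Ae[2, 1] <- 0  # delete the edge (1,2)
kGe <- n_spanning_trees(Ae)          # exactly 1 spanning tree avoids (1,2)
1 - kGe / kG  # 0.75 = probability a uniform spanning tree uses (1,2),
              # matching R_eff: 1 ohm in parallel with 3 ohms gives 3/4
```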
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 57, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9258425235748291, "perplexity_flag": "head"}
http://www.reference.com/browse/consistency
# Consistency

In logic, a theory is consistent if it does not contain a contradiction. The lack of contradiction can be defined in either semantic or syntactic terms. The semantic definition states that a theory is consistent if it has a model; this is the sense used in traditional Aristotelian logic, although in contemporary mathematical logic the term satisfiable is used instead. The syntactic definition states that a theory is consistent if there is no formula P such that both P and its negation are provable from the axioms of the theory under its associated deductive system.

If these semantic and syntactic definitions are equivalent for a particular logic, the logic is complete. The completeness of sentential calculus was proved by Paul Bernays in 1918 and Emil Post in 1921, while the completeness of predicate calculus was proved by Kurt Gödel in 1930. Stronger logics, such as second-order logic, are not complete.

A consistency proof is a mathematical proof that a particular theory is consistent. The early development of mathematical proof theory was driven by the desire to provide finitary consistency proofs for all of mathematics as part of Hilbert's program. Hilbert's program was strongly impacted by the incompleteness theorems, which showed that sufficiently strong proof theories cannot prove their own consistency.

Although consistency can be proved by means of model theory, it is often done in a purely syntactical way, without any need to reference some model of the logic. The cut-elimination (or equivalently the normalization of the underlying calculus, if there is one) implies the consistency of the calculus: since there is obviously no cut-free proof of falsity, there is no contradiction in general.

## Consistency and completeness

The fundamental results relating consistency and completeness were proven by Kurt Gödel:

- Gödel's completeness theorem shows that any consistent first-order theory is complete with respect to a maximal consistent set of formulae which are generated by means of a proof search algorithm.
- Gödel's incompleteness theorems show that theories capable of expressing their own provability relation and of carrying out a diagonal argument are capable of proving their own consistency only if they are inconsistent. Such theories, if consistent, are known as essentially incomplete theories.

By applying these ideas, we see that we can find first-order theories of the following four kinds:

1. Inconsistent theories, which have no models;
2. Theories which cannot talk about their own provability relation, such as Tarski's axiomatisation of point and line geometry, and Presburger arithmetic. Since these theories are satisfactorily described by the model we obtain from the completeness theorem, such systems are complete;
3. Theories which can talk about their own consistency, and which include the negation of the sentence asserting their own consistency. Such theories are complete with respect to the model one obtains from the completeness theorem, but contain as a theorem the derivability of a contradiction, in contradiction to the fact that they are consistent;
4. Essentially incomplete theories.

In addition, it has recently been discovered that there is a fifth class of theory, the self-verifying theories, which are strong enough to talk about their own provability relation, but are too weak to carry out Gödelian diagonalisation, and so which can consistently prove their own consistency.
However, as with any theory, a theory proving its own consistency provides us with no interesting information, since inconsistent theories also prove their own consistency.

## Formulas

A set of formulas $\Phi$ in first-order logic is consistent (written $\operatorname{Con}\,\Phi$) if and only if there is no formula $\varphi$ such that $\Phi\vdash\varphi$ and $\Phi\vdash\lnot\varphi$. Otherwise $\Phi$ is inconsistent, written $\operatorname{Inc}\,\Phi$.

$\Phi$ is said to be simply consistent iff for no formula $\varphi$ of $\Phi$ are both $\varphi$ and the negation of $\varphi$ theorems of $\Phi$.

$\Phi$ is said to be absolutely consistent or Post consistent iff at least one formula of $\Phi$ is not a theorem of $\Phi$.

$\Phi$ is said to be maximally consistent if and only if for every formula $\varphi$, if $\operatorname{Con}(\Phi\cup\{\varphi\})$ then $\varphi\in\Phi$.

$\Phi$ is said to contain witnesses if and only if for every formula of the form $\exists x\,\varphi$ there exists a term $t$ such that $(\exists x\,\varphi\to\varphi\tfrac{t}{x})\in\Phi$. See first-order logic.

### Basic results

1. The following are equivalent: (a) $\operatorname{Inc}\,\Phi$; (b) for all $\varphi$, $\Phi\vdash\varphi$.
2. Every satisfiable set of formulas is consistent, where a set of formulas $\Phi$ is satisfiable if and only if there exists a model $\mathfrak{I}$ such that $\mathfrak{I}\vDash\Phi$.
3. For all $\Phi$ and $\varphi$: (a) if not $\Phi\vdash\varphi$, then $\operatorname{Con}(\Phi\cup\{\lnot\varphi\})$; (b) if $\operatorname{Con}\,\Phi$ and $\Phi\vdash\varphi$, then $\operatorname{Con}(\Phi\cup\{\varphi\})$; (c) if $\operatorname{Con}\,\Phi$, then $\operatorname{Con}(\Phi\cup\{\varphi\})$ or $\operatorname{Con}(\Phi\cup\{\lnot\varphi\})$.
4. Let $\Phi$ be a maximally consistent set of formulas which contains witnesses. For all $\varphi$ and $\psi$: (a) if $\Phi\vdash\varphi$, then $\varphi\in\Phi$; (b) either $\varphi\in\Phi$ or $\lnot\varphi\in\Phi$; (c) $(\varphi\lor\psi)\in\Phi$ if and only if $\varphi\in\Phi$ or $\psi\in\Phi$; (d) if $(\varphi\to\psi)\in\Phi$ and $\varphi\in\Phi$, then $\psi\in\Phi$; (e) $\exists x\,\varphi\in\Phi$ if and only if there is a term $t$ such that $\varphi\tfrac{t}{x}\in\Phi$.
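To see why (a) implies (b) in the first of these results (an illustrative derivation, not part of the original article): if $\Phi\vdash\varphi$ and $\Phi\vdash\lnot\varphi$, then for any formula $\psi$ the tautology $\varphi\to(\lnot\varphi\to\psi)$ (ex falso quodlibet) together with two applications of modus ponens yields $\Phi\vdash\psi$.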
### Henkin's theorem

Let $\Phi$ be a maximally consistent set of formulas containing witnesses.

Define a binary relation on the set of $S$-terms by $t_0\sim t_1$ if and only if $(t_0=t_1)\in\Phi$; let $\overline{t}$ denote the equivalence class of terms containing $t$; and let $T_\Phi:=\{\,\overline{t}\mid t\in T^S\,\}$, where $T^S$ is the set of terms based on the symbol set $S$.

Define the $S$-structure $\mathfrak{T}_\Phi$ over $T_\Phi$, the term structure corresponding to $\Phi$, by:

(1) for $n$-ary $R\in S$, $R^{\mathfrak{T}_\Phi}\,\overline{t_0}\ldots\overline{t_{n-1}}$ if and only if $Rt_0\ldots t_{n-1}\in\Phi$;
(2) for $n$-ary $f\in S$, $f^{\mathfrak{T}_\Phi}(\overline{t_0},\ldots,\overline{t_{n-1}}):=\overline{ft_0\ldots t_{n-1}}$;
(3) for $c\in S$, $c^{\mathfrak{T}_\Phi}:=\overline{c}$.

Let $\mathfrak{I}_\Phi:=(\mathfrak{T}_\Phi,\beta_\Phi)$ be the term interpretation associated with $\Phi$, where $\beta_\Phi(x):=\overline{x}$.

(*) For all $\varphi$, $\mathfrak{I}_\Phi\vDash\varphi$ if and only if $\varphi\in\Phi$.

### Sketch of proof

There are several things to verify. First, that $\sim$ is an equivalence relation. Then it needs to be verified that (1), (2), and (3) are well defined. This falls out of the fact that $\sim$ is an equivalence relation, and also requires a proof that (1) and (2) are independent of the choice of class representatives $t_0,\ldots,t_{n-1}$. Finally, $\mathfrak{I}_\Phi\vDash\Phi$ can be verified by induction on formulas.

## References

- The Cambridge Dictionary of Philosophy, "consistency"
- H.D. Ebbinghaus, J. Flum, W. Thomas, Mathematical Logic
- W.S. Jevons, Elementary Lessons in Logic, 1870
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 86, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9408237338066101, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/320757/is-this-classical-exercice-missing-a-hypothesis
# Is this (classical?) exercise missing a hypothesis?

A friend just told me about an exercise he was given quite a few years ago, but he wasn't sure whether he remembered all the hypotheses correctly. Does anybody recognize this?

Let $f$ be a smooth real-valued function on $\Bbb R$ such that for all $n\in\Bbb N$, $\sup_{\Bbb R}|f^{(n)}|=1$, and $f(0)=1$; then $f=\cos$.

He wasn't sure whether those were the exact hypotheses needed for this to work. He noticed that $f$ is automatically a power series with infinite radius of convergence, but wondered whether one ought to impose that $f$ be even, and whether the condition on the suprema of the derivatives is correct.

NOTE. If you know the hypotheses, please post them as an answer, and if you want to post a solution, please hide the text so that one needs to scroll over it to reveal it, thanks!

- Hum... if this is true, it is quite striking, +1. And I wonder in which spheres this is classical. Not mine. – julien Mar 4 at 20:56
- @Julien I agree :D I have tried a bit, and tried to prove that $f''(0)=-1$, which would be enough, but I don't make any progress, and doubt I'm on the right path. – Olivier Bégassat Mar 4 at 21:17
- I understand that if you can prove $f''(0)=-1$, then you apply this to $-f''$ to find $f^{(4)}(0)=1$ and so on. But you are probably also assuming $f$ even, in which case the result follows clearly. Otherwise, how do you deduce it? – julien Mar 4 at 21:27
- @Julien I'm not assuming it's even, it's just that $f'(0)$ must be equal to $0$ for $f$ not to exceed $1$ in a neighborhood of $0$, and if we can show that $f''(0)=-1$ we would be done. – Olivier Bégassat Mar 4 at 22:50
- @julien : Since $f(0) = 1$, $f(x) - 1$ has the sign of $f'(0)x$ in a neighborhood of $0$ (but it must remain $\le 0$). – Joel Cohen Mar 4 at 23:25

## 1 Answer

There is a 1980 paper by John Roe titled "A characterization of the sine function", with the following abstract; I suspect the specific result here follows from Roe's theorem.

"If a function and all its derivatives and integrals are absolutely uniformly bounded, then the function is a sine function with period 2π."

http://journals.cambridge.org/action/displayAbstract?fromPage=online&aid=2081636

- Yes, that's probably it. Do you know a link to the full paper? – julien Mar 4 at 21:35
- If you have a journal subscription, a link to that paper is there. The essence of the approach is to investigate the Fourier transform of $f(x)$. An exponential dampening factor is introduced to make the function $L^1$. – Christopher A. Wong Mar 4 at 21:47
- @ChristopherA.Wong Thanks for the sketch of proof. If I asked, it is precisely because I don't have a journal subscription... – julien Mar 4 at 21:50
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 25, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9368556141853333, "perplexity_flag": "head"}
http://mathoverflow.net/questions/81539?sort=newest
## Is the following construction of the 0-Hecke monoid (well) known?

Let W be a Coxeter group with Coxeter generators S. The corresponding 0-Hecke monoid H(W) has generating set S, the braid relations of W, and the relations that each element of S is an idempotent. If one specializes the Hecke algebra associated to W to q=0, one gets the monoid algebra of H(W) (replace generators by their negatives to see this). It is also the monoid generated by foldings across the walls of the fundamental chamber of the Coxeter complex of W. It has been studied by a number of people and is sometimes called the Springer-Richardson monoid.

Margolis and I came across the following construction of it and I wanted to know if it is known. Let P(W) be the power set of W. It is a monoid with the usual set product: $AB=\lbrace ab\mid a\in A, b\in B\rbrace$. Let $I(w)$ be the principal Bruhat ideal generated by $w\in W$, e.g., $I(s)=\lbrace 1,s\rbrace$ for $s\in S$. Then the principal Bruhat ideals form a submonoid of P(W) isomorphic to H(W).

Question: Does this construction appear explicitly in the literature, and what is a reference?

- Since there hasn't been an answer yet - I'd just like to comment that I think it is likely that this is known, since I know of two different contexts where this might be a useful statement. Unfortunately, it may be buried as a lemma. I suggest trying Deodhar's papers on combinatorial aspects of Kazhdan–Lusztig elements/polynomials and the Kostant–Kumar papers on cohomology and K-theory of flag varieties. – Alexander Woo Nov 24 2011 at 0:22
- @Alexander, thanks I will take a look. – Benjamin Steinberg Nov 24 2011 at 0:50
- Following up on Alexander's comment, you might also find it interesting to take a look at work of A. Knutson and E. Miller on subword complexes, where they study properties of the `Demazure product', which is exactly the 0-Hecke product. It also appears in work of Drew Armstrong on sorting orders and in work of mine on total positivity. I had given an answer along these lines, but decided it wasn't really an answer to your question. – Patricia Hersh Nov 10 at 22:28
- Why the down vote on this old question? – Benjamin Steinberg Jan 29 at 2:00

## 1 Answer

Here is one such reference:

Representation and classification of Coxeter monoids, by S. V. Tsaranov, European Journal of Combinatorics, Volume 11, Issue 2, Mar. 1990. http://dl.acm.org/citation.cfm?id=84891

- Many thanks Alexander. – Benjamin Steinberg Feb 3 2012 at 14:16
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 5, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9293454885482788, "perplexity_flag": "middle"}
http://www.r-bloggers.com/why-an-inverse-wishart-prior-may-not-be-such-a-good-idea/
# Why an inverse-Wishart prior may not be such a good idea

March 7, 2012. By simonbarthelme (this article was first published on dahtah » R, and kindly contributed to R-bloggers)

While playing around with Bayesian methods for random effects models, it occurred to me that inverse-Wishart priors can really bite you in the bum. Inverse-Wishart priors are popular priors over covariance functions. People like these priors because they are conjugate to a Gaussian likelihood, i.e., if you have data ${\mathbf{y}_{1},\ldots,\mathbf{y}_{n}}$ with each ${\mathbf{y}_{i}}$:

$\displaystyle \mathbf{y}_{i}\sim\mathcal{N}\left(0,\mathbf{S}\right)$

so that the ${\mathbf{y}_{i}}$'s are correlated Gaussian vectors, and you wish to infer the correlation matrix S, then putting an inverse-Wishart prior on S is convenient because the posterior distribution is very easy to sample from and its mean can be computed analytically.

The inverse-Wishart works like this: a Wishart distribution ${\mathcal{W}\left(m,\Lambda\right)}$ produces random positive definite matrices by first producing ${m}$ Gaussian vectors:

$\displaystyle \mathbf{x}_{i}\sim\mathcal{N}\left(0,\Lambda\right)$

the Wishart sample is ${m}$-times the sample covariance matrix:

$\displaystyle \mathbf{W}=\sum_{i=1}^{m}\mathbf{x}_{i}\mathbf{x}_{i}^{t}$

which is why for large ${m}$ Wishart samples will look like ${m\Lambda}$. ${m}$ is the number of degrees of freedom, and we want to keep it low if the prior is to be relatively noninformative. A matrix S has inverse-Wishart distribution ${\mathcal{IW}\left(m,\mathbf{M}\right)}$ if its inverse has Wishart distribution ${\mathcal{W}\left(m,\mathbf{M}^{-1}\right)}$. Again, since for large ${m}$ the inverse will look like ${m\mathbf{M}^{-1}}$, for large ${m}$ S will look like ${\mathbf{M}/m}$. More generally, the mean of the inverse Wishart is ${\frac{\mathbf{M}}{m-p-1}}$, where ${p}$ is the dimension.

Let's say we want to do Bayesian inference for the correlation of two Gaussian variables. Essentially, that's a special case of what we started out with: ${n}$ datapoints ${\mathbf{y}_{i}\sim\mathcal{N}\left(0,\mathbf{S}\right)}$, with each ${\mathbf{y}_{i}}$ a 2-dimensional vector containing the observations for the two Gaussian variables. We are interested in one aspect of ${p(\mathbf{S}|\mathbf{Y})}$, namely the marginal for the correlation coefficient, i.e. ${r=s_{21}/\sqrt{s_{11}s_{22}}}$.

What are generally reasonable assumptions about ${\mathbf{S}}$? I think we don't want the prior to be too sensitive to scale: we don't know how big or small the variances of the components are going to be, but we might have a reasonable range. As far as the correlation coefficient is concerned, it's fairly rare to have variables that are perfectly correlated, so we might want our prior to shrink correlation coefficients a bit.

A good default choice might then be to take ${\mathbf{S}\sim\mathcal{IW}\left(3,\mathbf{I}\right)}$, with I the identity matrix. Under the prior ${E\left(\mathbf{S}\right)=\mathbf{I}}$, and if we simulate from the prior we get a marginal distribution for the log-variance which is suitably spread over several orders of magnitude. The marginal distribution for the correlation coefficient turns out to be uniform – not what we wanted (we'd like very high absolute correlations to be less probable). That's not the main problem with this prior, though.
The big problem is this: For each sample ${\mathbf{S}_{j}}$ from the prior, I'm plotting the log-variance of the first component against the correlation coefficient. There's a clear lack of independence, which is even easier to see in the conditional distribution of the correlation coefficient. Below, the histogram of the correlation coefficient conditioned on both variances being more than one ("High variance"), or not ("Low variance").

What we see here is a strong dependence between variance and correlation: the prior says that high variance implies high correlation, and low variance implies low-to-moderate correlation. This is a disaster for inference, because it means that correlation will tend to be exaggerated if variance is higher than expected, which is the opposite of the shrinkage behaviour we'd like to see.

There are better priors for covariance matrices out there, but sometimes you might be stuck with the Wishart for computational reasons (for example, it's the only choice you have in INLA for random effects). An option is to estimate the variances first, then tweak the inverse-Wishart prior to have the right scale. Increasing the value of ${m}$ will provide correlation shrinkage. From a Bayesian point of view this is moderately dirty, but preferable to just sticking with the default choice (and see here for a prior choice with good frequentist properties). Kass & Natarajan (2006) is a much more sophisticated version of this strategy.

Below, some R code to reproduce the plots:

```r
require(plyr)
require(ggplot2)
require(mvtnorm)

# Given a 2x2 covariance matrix, compute the correlation coefficient
ccm <- function(M) {
  M[2, 1] / prod(sqrt(diag(M)))
}

# Generate n samples from the prior
rprior <- function(n = 1, r = 3, M = diag(rep(1, 2))) {
  Minv <- solve(M)
  rlply(n, chol2inv(chol(rwish(r, Minv))))
}

# Wishart samples
rwish <- function(r, R) {
  X <- rmvnorm(r, sig = R)
  t(X) %*% X
}

plot.marginal.variance <- function(df) {
  p <- ggplot(df, aes(var1)) + geom_histogram(aes(y = ..density..)) + scale_x_log10()
  p + theme_bw() + labs(x = "\nVariance of the first component", y = "Density\n")
}

plot.marginal.correlation <- function(df) {
  p <- ggplot(df, aes(cor)) + geom_histogram(aes(y = ..density..))
  p + theme_bw() + labs(x = "\nCorrelation coefficient", y = "Density\n") +
    scale_x_continuous(lim = c(-1, 1))
}

plot.corr.var <- function(df) {
  p <- ggplot(df, aes(var1, cor)) + geom_point() + scale_x_log10()
  p + theme_bw() + labs(x = "\nVariance of the first component", y = "Correlation coefficient\n")
}

plot.conditional.corr <- function(df) {
  # this assignment was garbled in the scraped original; reconstructed from context
  df$high.var <- factor(df$var1 > 1 & df$var2 > 1,
                        labels = c("Low variance", "High variance"))
  p <- ggplot(df, aes(cor)) + geom_histogram(aes(y = ..density..)) + facet_wrap(~ high.var)
  p + theme_bw() + labs(x = "\nCorrelation coefficient", y = "Density\n") +
    scale_x_continuous(lim = c(-1, 1))
}

df <- ldply(rprior(2000), function(M) data.frame(var1 = M[1, 1], var2 = M[2, 2], cor = ccm(M)))
```
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 30, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9183477163314819, "perplexity_flag": "head"}
http://crypto.stackexchange.com/questions/6245/is-such-a-crypto-system-available
# Is such a crypto-system available?

$E(k_1, pt) = c_1, E(k_2, c_1) = c_2, D(k_{new}, c_2) = pt$, where $k_{new} = f(k_1, k_2).$ Sharing $k_{new}$ and $k_2$ should reveal no information about $k_1$.

Clarifications:

1. Being able to decrypt the ciphertext $c_2$ knowing only $k_{new}$ is the first desired property of the system.
2. One cannot derive $k_1$ knowing both $k_{new}$ and $k_2$.
3. One cannot easily derive $k_1$ knowing other pairs $k_2'$ and $k_{new}'$. See @poncho's comments.

$D(k_{new}, c_2) = pt$ or $E(k_{new}, c_2) = pt$ doesn't matter as long as you recover the plaintext. Real-world example to clarify things even more:

1. Alice has a very large database which she chooses to store at Eve's site. Because Alice doesn't want Eve to read the data, nor does she know in advance which data will be shared with whom, she encrypts all the records with a single key ($k_1$).
2. Now Bob requests access to some specific information. Bob and Alice know each other, so Alice gives him access to that subset. However, Alice wants to do that efficiently. Also, Alice doesn't want Bob to be able to read anything else except the shared subset. Therefore she cannot: a) Decrypt the shared subset at Eve's site and then re-encrypt it with a different key (this would expose all the database contents to Eve), b) Retrieve the data from Eve, decrypt, re-encrypt and store the data back at Eve's site - this implies a round-trip delay which is prohibitively expensive.

Thus Alice needs a cryptosystem which would allow re-encrypting the data directly at Eve's site; however, Eve shouldn't be able to read any plaintext (neither the subset shared with Bob nor the rest of the database records). Sharing the decryption key ($k_{new}$) is done through a direct channel between Alice and Bob (e.g. using Diffie-Hellman). The "cryptosystem" stated in the question is just a way I saw that happening. Note that $k_1$ shouldn't become known to Bob (e.g. if Bob colludes with Eve or other people with whom Alice shares (possibly different sets of) data).

- 3 Knowledge of $k_{new}$ and $k_2$ will allow the decryption of any plaintext encrypted by $k_1$; what further information about $k_1$ do you want to be certain isn't revealed? – poncho Feb 5 at 15:35 The value of k1 – Eugen Feb 5 at 20:28 1 Typically, keys don't have any intrinsic value; the only reason we would prefer that people don't learn them is because of the ability that those keys would confer (for example, being able to decrypt traffic). Revealing k_new and k2 allows people to decrypt; what further ability would you prefer people not to have? – poncho Feb 5 at 20:33 1 @jug Searching for "DES is not a group" I gradually found proxy re-encryption, which seems like what I am looking for. – Eugen Feb 6 at 9:26 1 It doesn't matter for me as long as it satisfies the properties specified in the question. The originator of the data has key k1, Alice has key k_new. Even if Alice finds k2 or any other pairs (k2', k_new') she shouldn't be able to compute k1. – Eugen Feb 6 at 12:13

## 2 Answers

This can be done with RSA where k1 and k2 are encryption keys and f is multiplication modulo the totient of the public modulus. The last property, that knowledge of other pairs $k_2'$ and $k_{new}'$ does not reveal $k_1$, is satisfied by using a different modulus for each pair generated. - You mean that f is the inverse of the multiplication of k1 and k2. In any case, this has a subtle problem; if someone learns k2, f(k1, k2), k3, f(k1, k3), they can deduce k1. Does this matter?
Well, we can't tell from the question. – poncho Feb 5 at 16:12 1 Doesn't work. 1) k_new would be an encryption key, not a decryption key. 2) f is invertible. So you can calculate k1 = k_new/k2. – CodesInChaos Feb 5 at 16:13 @CodesInChaos 1) No. My definition of f is correct because the post specifies D(k_new,c2)=pt, not E(k_new,c2)=pt. Presumably the Decryption function magically obtains the decryption exponent from k_new! 2) Give me an example of inverting f which is not equivalent to factoring the modulus and hence breaking RSA. – Barack Obama Feb 5 at 16:20 @poncho I guess I could have specified that D=E and then f is the inverse as you said. – Barack Obama Feb 5 at 16:25 Checking my understanding of the suggested solution: $pt^{k_1} = c_1, c_1^{k_2} = c_2, k_{new} = (k_1 k_2)^{-1} \bmod \phi(N)$. $pt = c_2^{k_{new}} = pt^{k_1 k_2 (k_1 k_2)^{-1}}$ Is that right? – Eugen Feb 5 at 21:02

Actually, I believe that there is a solution, but it follows a different strategy than your proposed approach. The problem with your approach is that if Eve (with $k_{new}$) and Bob (with $k_2$) collude, there's nothing stopping them from reading the entire text (and not just the part that Alice wants Bob to read). Instead, consider that Alice divides up the plaintext into a series of blocks; she also selects a master key (H1), and creates a tree of keys as:

````
                 H1
        /--------/ \--------\
       /                     \
      H2                     H3
     /  \                   /  \
   H4    H5               H6    H7
   / \   / \              / \   / \
 H8  H9 H10 H11        H12 H13 H14 H15
````

For each internal node $H_i$, you would use some hash function to generate the two child nodes $H_{2i}$ and $H_{2i+1}$: $H_{2i} = F(H_i)$, $H_{2i+1} = G(H_i)$, where both $F$ and $G$ are noninvertible functions. You would use the values in the leaf nodes to encrypt the successive blocks (so in the example, the first block would be encrypted using H8, the second block would be encrypted using H9, etc.). If the entire database consists of $N$ blocks, this would take a tree of depth $\log_2(N)$.

Now, when Alice decided to reveal a portion of the plaintext to Bob, she would just reveal the internal nodes that would allow Bob to recompute the keys used to encrypt those sections. For example, if Alice wanted to reveal the sections protected by H10 through H14, all she would need to reveal would be H5 (which allows Bob to recompute H10 and H11), H6 (which allows Bob to recompute H12 and H13) and H14. In general, to reveal a section of $M$ consecutive blocks, Alice needs to reveal at most $O(\log(M))$ internal nodes.

Now, Bob only gets the data Alice has decided to reveal; and even if a group of Bobs (and Eve) collude, they can only read what Alice has revealed to one of the Bobs.

- The question to which this is a good answer is quite far away from the initial question to which my answer was perfectly correct. Could you separate this into two questions for which yours would be the answer to one of them and mine the other? – Barack Obama Feb 6 at 18:49 Yes, it's a ways from how the initial question was phrased. However, whether it's a good answer really depends on what Eugen is actually interested in. If he has an academic interest in this type of cipher (and gave his Real World example as a way it might be used), it's off topic. However, if his Real World Example is the problem he is really interested in (and he thought that a cipher as he described might solve it), well, he needs to be told that such a cipher doesn't have the security properties he wants, and here is a better method that does.
– poncho Feb 6 at 19:29 @poncho The solution satisfies the requirements, although I would like to share my thoughts on its deficiencies (btw. Bob knows k_new and Eve knows k2). First is the storage overhead needed to store the hash tree. With Barack Obama's solution I have to store k_new (and the modulus) only for the portions I share (although public-key crypto is much slower than symmetric encryption). If new data is added there is a problem again (if I shared H8, Bob would now also know H16 and H17, right?). Regarding my interests: they are mixed indeed, but solving the real example has a bigger priority for me. – Eugen Feb 6 at 20:04 @Eugen: as for the amount of storage needed, well, just storing H1 will allow you to recompute the tree as needed with $O(\log N)$ hash operations. If it becomes necessary to append data, well, you don't want to reuse external nodes as internal; you can extend the solution to make multiple trees (with each tree being bigger; this preserves the $O(\log N)$ scaling properties). One problem happens if Alice needs to update the database; if she's revealed the key for a block, Bob can decrypt that block even after it has been updated. I'm not sure how to work around that. – poncho Feb 6 at 20:30 @poncho Regarding the storage you are right: it's either storing it or recomputing it every time you need to retrieve/share some data. Making new trees requires recomputing not only the hashes but re-encrypting the database on each insertion :). I hope it is understood that this is just an open discussion. I am not nitpicking or something like that. – Eugen Feb 6 at 20:59
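For readers who want to see the key tree in code, below is a small sketch of poncho's construction. The digest package, SHA-256, and the "L"/"R" domain-separation prefixes standing in for $F$ and $G$ are all assumptions of this sketch, not part of the answer; R is used simply to match the one code language used in this document.

```
library(digest)

#F and G instantiated as prefixed SHA-256 hashes (our choice)
child_keys <- function(h) {
  c(digest(paste0("L", h), algo="sha256"),   #H_{2i}   = F(H_i)
    digest(paste0("R", h), algo="sha256"))   #H_{2i+1} = G(H_i)
}

#Build all keys of a complete tree of the given depth from master key H1
build_tree <- function(h1, depth) {
  keys <- h1                      #keys[i] is node H_i
  for (i in 1:(2^depth - 1)) {    #loop over the internal nodes
    ch <- child_keys(keys[i])
    keys[2*i]   <- ch[1]
    keys[2*i+1] <- ch[2]
  }
  keys
}

keys <- build_tree(digest("master secret", algo="sha256"), depth=3)
#Leaves H8..H15 encrypt blocks 1..8; revealing keys[5] lets Bob
#recompute H10 and H11 but nothing else in the tree.
keys[8:15]
```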
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 42, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.951422929763794, "perplexity_flag": "middle"}
http://en.wikipedia.org/wiki/Positive_temperature_coefficient
# Temperature coefficient

The temperature coefficient is the relative change of a physical property when the temperature is changed by 1 kelvin. In the following formula, let R be the physical property to be measured and T be the temperature at which the property is measured. T0 is the reference temperature, and ΔT is the difference between T and T0. Finally, α is the (linear) temperature coefficient. Given these definitions, the physical property is: $\operatorname{R}(T) = \operatorname{R}(T_0)(1 + \alpha\Delta T)$ Here α has the dimensions of an inverse temperature (1/K or $K^{-1}$). This equation is linear with respect to temperature. For quantities that vary polynomially or logarithmically with temperature, it may be possible to calculate a temperature coefficient that is a useful approximation for a certain range of temperatures. For quantities that vary exponentially with temperature, such as the rate of a chemical reaction, any temperature coefficient would be valid only over a very small temperature range. Different temperature coefficients are specified for various applications, including nuclear, electrical and magnetic.

## Negative temperature coefficient

A negative temperature coefficient (NTC) occurs when a physical property (such as thermal conductivity or electrical resistivity) of a material decreases with increasing temperature, typically in a defined temperature range. For most materials, electrical resistivity will increase with increasing temperature. Materials with a negative temperature coefficient have been used in floor heating since 1971. The negative temperature coefficient avoids excessive local heating beneath carpets, bean bag chairs, mattresses etc., which can damage wooden floors and may infrequently cause fires. Most ceramics exhibit NTC behaviour, which is governed by an Arrhenius equation over a wide range of temperatures: $R=A \cdot e^{\frac{B}{T}}$ where R is resistance, A and B are constants, and T is absolute temperature (K). The constant B is related to the energies required to form and move the charge carriers responsible for electrical conduction – hence, as the value of B increases, the material becomes more insulating. Practical and commercial NTC resistors aim to combine modest resistance with a value of B that provides good sensitivity to temperature. Such is the importance of the constant B that NTC thermistors can be characterized using the B-parameter equation: $R = r_{\infty}e^{\frac{B}{T}} = R_{0}e^{-\frac{B}{T_{0}}}e^{\frac{B}{T}}$ where $R_{0}$ is the resistance at temperature $T_{0}$. Materials that produce acceptable values of $R_{0}$ include those that have been alloyed or possess variable cation valence states and thus contain a high concentration of natural defect centers. The value of B strongly depends on the energy required to dissociate from these defect centers the charge carriers that are used for the electrical conduction.

## Reversible temperature coefficient

Residual magnetic flux density, or Br, changes with temperature and is one of the important characteristics of magnet performance. Some applications, such as inertial gyroscopes and traveling-wave tubes (TWTs), need to have a constant field over a wide temperature range.
The reversible temperature coefficient (RTC) of Br is defined as: $RTC = \frac{\Delta B_r}{B_r \, \Delta T} \times 100\%$ To address these requirements, temperature-compensated magnets were developed in the late 1970s.[1] For conventional SmCo magnets, Br decreases as temperature increases. Conversely, for GdCo magnets, Br increases as temperature increases within certain temperature ranges. By combining samarium and gadolinium in the alloy, the temperature coefficient can be reduced to nearly zero.

## Temperature coefficient of electrical resistance

See also: Table of materials' resistivities

The temperature dependence of electrical resistance, and thus of electronic devices (wires, resistors), has to be taken into account when constructing devices and circuits. The temperature dependence of conductors is to a great degree linear and can be described by the approximation below. $\operatorname{\rho}(T) = \rho_{0}[1 + \alpha_{0}(T-T_{0})]$ where $\alpha_{0}=\frac{1}{\rho_{0}}\left [ \frac{\delta \rho}{\delta T}\right ]_{T=T_{0}}$ and $\rho_{0}$ is the resistivity at the reference temperature $T_{0}$ (normally $T_0$ = 0 °C).[2] That of a semiconductor is, however, exponential: $\operatorname{\rho}(T) = S \alpha^{\frac{B}{T}}$ where $S$ is defined as the cross-sectional area and $\alpha$ and $B$ are coefficients determining the shape of the function and the value of resistivity at a given temperature. For both, $\alpha$ is referred to as the resistance temperature coefficient.[3] This property is used in devices such as thermistors.

### Positive temperature coefficient of resistance

A positive temperature coefficient (PTC) refers to materials that experience an increase in electrical resistance when their temperature is raised. Materials which have useful engineering applications usually show a relatively rapid increase with temperature, i.e. a higher coefficient. The higher the coefficient, the greater the increase in electrical resistance for a given temperature increase.

## Coefficient of thermal expansion

Main article: Coefficient of thermal expansion

The physical dimensions of matter can be affected by temperature. The coefficient of thermal expansion for a given sample of matter can be used to approximate its change in volume given a change in temperature. A similar coefficient, the linear thermal expansion coefficient, is often used to measure the change of length of an object in one dimension. The coefficient of thermal expansion is often used in the development of thermometers, where the length of a material can serve as an indication of temperature. The coefficient is also used for several types of thermostats.

## Temperature coefficient of elasticity

The elastic modulus of elastic materials varies with temperature, typically decreasing with higher temperature.

## Temperature coefficient of reactivity

In nuclear engineering, the temperature coefficient of reactivity is a measure of the change in reactivity (resulting in a change in power) brought about by a change in temperature of the reactor components or the reactor coolant. This may be defined as $\alpha_{T}=\frac{\partial \rho}{\partial T}$ where $\rho$ is reactivity and T is temperature. The relationship shows that $\alpha_{T}$ is the partial derivative of reactivity with respect to temperature and is referred to as the "temperature coefficient of reactivity". As a result, the temperature feedback provided by $\alpha_{T}$ has an intuitive application to passive nuclear safety.
A negative $\alpha_{T}$ is broadly cited as important for reactor safety, but wide temperature variations across real reactors (as opposed to a theoretical homogeneous reactor) limit the usability of a single metric as a marker of reactor safety.[4] In water-moderated nuclear reactors, the bulk of reactivity changes with respect to temperature are brought about by changes in the temperature of the water. However, each element of the core has a specific temperature coefficient of reactivity (e.g. the fuel or cladding). The mechanisms which drive fuel temperature coefficients of reactivity are different from water temperature coefficients. While water expands as temperature increases, causing longer neutron travel times during moderation, fuel material will not expand appreciably. Changes in reactivity in fuel due to temperature stem from a phenomenon known as Doppler broadening, where resonance absorption of fast neutrons in fuel filler material prevents those neutrons from thermalizing (slowing down).[5]

## Units

The thermal coefficient of electrical circuit parts is sometimes specified as ppm/°C. This specifies the fraction (expressed in parts per million) by which its electrical characteristics will deviate when taken to a temperature above or below the operating temperature.

## References

1.
2. Kasap, S. O. (2006). Principles of Electronic Materials and Devices (Third ed.). Mc-Graw Hill. p. 126.
3. Alenitsyn, Alexander G.; Butikov, Eugene I.; Kondraryez, Alexander S. (1997). Concise Handbook of Mathematics and Physics. CRC Press. pp. 331–332. ISBN 0-8493-7745-5.
4. Duderstadt & Hamilton 1976, pp. 259–261
5. Duderstadt & Hamilton 1976, pp. 556–559

## Bibliography

• Duderstadt, James J.; Hamilton, Louis J. (1976). Nuclear Reactor Analysis. Wiley. ISBN 0-471-22363-8.
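A small numerical illustration of the two resistance models above may help. The parameter values here (a 10 kΩ thermistor with B = 3950 K, and a copper-like α ≈ 0.0039 per °C) are illustrative assumptions, not values taken from this article.

```
#NTC thermistor, B-parameter form: R = R0*exp(-B/T0)*exp(B/T)
ntc_resistance <- function(T, R0=10e3, T0=298.15, B=3950) {
  R0*exp(B*(1/T - 1/T0))
}

#Linear conductor model: R(T) = R0*(1 + alpha0*(T - T0))
linear_resistance <- function(T, R0=100, T0=293.15, alpha0=0.0039) {
  R0*(1 + alpha0*(T - T0))
}

T <- seq(273.15, 373.15, by=25)  #0 degC to 100 degC, in kelvin
data.frame(T_K=T,
           ntc_ohm=round(ntc_resistance(T)),
           copper_ohm=round(linear_resistance(T), 1))
#The thermistor's resistance falls steeply with temperature (NTC),
#while the conductor's rises slowly and linearly (PTC).
```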
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 20, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8931790590286255, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/291323/isogeny-and-minimal-models-of-elliptic-curves
# Isogeny and minimal models of elliptic curves

Suppose I have two isogenous elliptic curves over $\mathbb{Q}$, $E$ and $E'$. Will the minimal models of $E$ and $E'$ still be isogenous? - 1 Dear user, what exactly do you mean by the minimal models being isogenous? Néron models are functorial, and so the isogeny $E \to E'$ will induce a morphism of Néron models $\cal E \to \cal E'$. Are you asking about the kernel of this morphism? Regards, – Matt E Jan 31 at 16:25 – user60194 Jan 31 at 16:51 2 Dear user, But isogenous in what sense? In the entry you linked, the minimal model is just another Weierstrass equation for the same curve. The notion of isogeny is certainly independent of Weierstrass equations, and so the answer to your question would then be "yes", for trivial reasons. But do you mean/want something more? Regards, – Matt E Jan 31 at 20:10 The definition of minimal model on PlanetMath is that of the minimal Weierstrass model, so they are schemes over $\mathbb{Z}$ and it doesn't really make sense to talk about isogeny unless you are asking for a finite morphism or use Néron models, as Matt E suggests. By the way, on PlanetMath they say the minimal discriminant is smaller than 12; this is incorrect. – QiL'8 Feb 7 at 22:10

## 1 Answer

If I interpret the question correctly, the answer is yes. Each elliptic curve is isomorphic to its minimal model by definition: the isomorphism is given by the change-of-variables transformation. Remember that isomorphisms are degree-$1$ isogenies and that the composition of isogenies is an isogeny. Therefore the minimal models are isogenous (with unchanged degree). -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 9, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9406909942626953, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/195589/what-is-this-statement-actually-asking?answertab=votes
# What is this statement actually asking?

Let T: $\mathbb{R}^3\rightarrow \mathbb{R}$ be linear. Show that there exist scalars a, b, and c such that $T(x,y,z)= ax + by + cz$ for all $(x,y,z) \in \mathbb{R}^3$. Can I just say "you can pick $a=b=c=0$" or do I have to actually expand out $T(kx+ x', ky + y', kz + z')$ and verify that T is linear, where $k\in \mathbb{R}$? - 2 You don't get to choose $T$. – Qiaochu Yuan Sep 14 '12 at 6:26 $T$ is given and is linear. It maps from $\mathbb{R}^3$ (which has a basis $(1,0,0), (0,1,0), (0,0,1)$) to $\mathbb{R}$. The question is to show how you would compute $a,b,c$ in terms of evaluating $T$ at specific points, and, of course, to ensure that $T$ actually equals the resulting form. – copper.hat Sep 14 '12 at 6:27 3 But if I had a choice, I would choose my $T$ to be Darjeeling, first cut. – copper.hat Sep 14 '12 at 6:30

## 2 Answers

No. If $a = b = c = 0$, then $T(x,y,z) = 0$. But, it could happen that $T$ is a nonzero linear transformation. To prove this, note that $(x,y,z)$ means $xe_1 + ye_2 + z e_3$, where $e_1$, $e_2$, and $e_3$ form a basis of $\mathbb{R}^3$. You have by linearity $T(x,y,z) = T(xe_1) + T(ye_2) + T(z e_3) = x T(e_1) + yT(e_2) + z T(e_3)$. Letting $a = T(e_1)$, $b = T(e_2)$ and $c = T(e_3)$, you have proven the desired result. - Thanks for the reply. I understand what you did, but I'm still confused about the question. I'm interpreting it as "show that there exist scalars a, b, and c such that [definition]." It doesn't seem like there is any constraint there. Maybe I'm just being stupid right now. – anon Sep 14 '12 at 6:55 1 The question is: I give you any linear transformation $T : \mathbb{R}^3 \rightarrow \mathbb{R}$. You have to find $a, b, c$ such that $T(x,y,z) = ax + by + cz$. In other words, you have to show that no matter which $T$ I give you, you can find $a,b,c$ (depending on that T) to make $T(x,y,z) = ax + by + cz$. – William Sep 14 '12 at 6:58 If I set $a$, $b$, and $c$ arbitrarily, then I modify the behavior of the $T$ you provide. It's still a little confusing because: how do I know what the behavior of $T$ is? Does that make sense? I'm going to reread this in the morning. – anon Sep 14 '12 at 7:16 @Anon No, I give you $T$, you need to find the $a$, $b$, and $c$ that work for my $T$. You don't get to change or pick the $T$. – William Sep 14 '12 at 7:31 Ok thank you so much. I think I understand it. – anon Sep 14 '12 at 7:34 You are supposed to show that there is a single triple $a,b,c$ such that $T(x,y,z)=ax+by+cz$ for all $x,y,z\in\mathbb R$. Hint: Let $a=T(1,0,0)$. Similarly... -
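As a concrete check of the argument above (this toy example is an addition, not part of the thread): pick any linear $T$, read off $a$, $b$, $c$ by evaluating $T$ at the standard basis vectors, and verify the identity numerically. The particular $T$ below is chosen arbitrarily.

```
#Toy check: a, b, c are obtained by evaluating T at e1, e2, e3
T_map <- function(v) 2*v[1] - 5*v[2] + 7*v[3]  #an arbitrary linear T

a  <- T_map(c(1,0,0))
b  <- T_map(c(0,1,0))
cc <- T_map(c(0,0,1))

v <- c(3,-1,4)
all.equal(T_map(v), a*v[1] + b*v[2] + cc*v[3])  #TRUE
```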
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 49, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9639019966125488, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/109394/notation-l-p-vs-ell-p/109400
# Notation: $L_p$ vs $\ell_p$

$L_p$ is often used to describe a norm, or a vector space with that norm (see e.g. wikipedia). Is $\ell_p$ (typically, or canonically) a different notation for the same concept, or is it used to indicate something different? - 1 $\ell^p$ spaces are particular cases of $\mathbb{L}^p$ spaces. Usually, one uses $\ell^p$ when the underlying space is $\mathbb{Z}$ or $\mathbb{N}$, but I believe I've already seen such things as $\ell^p (\mathbb{Z}^n)$ (everything is done with the counting measure). Since it is a special case, there are a few properties that hold for $\ell^p$ spaces and not for general $\mathbb{L}^p$ spaces, such as $\ell^p \subset \ell^q$ if $p \leq q$. – D. Thomine Feb 14 '12 at 21:47 2 Traditionally, $\ell^p$ is used when the norm involves a summation, while $L^p$ is used when the norm involves an integral. Of course, in modern Lebesgue theory, a summation is a special case of an integral. – Jim Belk Feb 15 '12 at 0:14

## 1 Answer

$\ell^p$ spaces are a special case of $L^p$ spaces. If $(X,\mu)$ is a measure space, $L^p(X)$ (or $L^p_{\mathbb{R}}(X)$) is the (Banach) space of all measurable functions $f\colon X\to \mathbb{R}$ such that $$\int_X |f|^p\,d\mu\lt \infty.$$ In the special case in which $X=\mathbb{N}$ and $\mu$ is the counting measure, functions $f\colon\mathbb{N}\to\mathbb{R}$ can be taken to be sequences of elements of $\mathbb{R}$, and the integral is the sum of the terms of the sequence. That is, $L^p(\mathbb{N})$ is the set of sequences $(x_i)$ such that $\sum |x_i|^p\lt\infty$. To denote this special case, which occurs very often, we use $\ell^p$. (You can replace $\mathbb{R}$ with any normed vector space, replacing the absolute value with the norm.) - For $L^p$ or $\ell^p$ spaces, I was wondering when to use a subscript and when to use a superscript. Why is that? – Tim Feb 15 '12 at 2:18 @Tim: One usually uses superscript, because it is mnemonic that the $p$ is the exponent in the condition, and because it leaves the subscript available to indicate the range of the functions. – Arturo Magidin Feb 15 '12 at 4:19 Shouldn't there be a $()^{1/p}$ in there somewhere (e.g. the Euclidean norm)? – Joe Feb 15 '12 at 8:54 @Joe: The $()^{1/p}$ is what you do in order to calculate the $p$-norm; but $f$ is in $L^p$ if and only if $\int |f|^p d\mu$ is finite. Similarly, the $p$-norm of a sequence in $\ell^p$ is $(\sum |x_i|^p)^{1/p}$, but whether a sequence is in $\ell^p$ or not is determined by looking at $\sum |x_i|^p$. – Arturo Magidin Feb 15 '12 at 16:06 @ArturoMagidin: Thanks! I suspect there is some inconsistency in superscript or subscript in your reply. – Tim Feb 15 '12 at 18:19
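For readers who want to see the definitions in action, here is a small numerical illustration (an addition, not from the thread) of the $\ell^p$ condition for the sequence $x_i = 1/i$, which lies in $\ell^p$ for every $p > 1$ but not in $\ell^1$:

```
#Truncated lp "norm": (sum |x_i|^p)^(1/p) over the first million terms
lp_norm <- function(x, p) sum(abs(x)^p)^(1/p)

x <- 1/(1:10^6)
lp_norm(x, 2)  #stabilizes near pi/sqrt(6), about 1.2825: x is in l^2
lp_norm(x, 1)  #keeps growing like log(n) with more terms: x is not in l^1
```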
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 43, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9304232597351074, "perplexity_flag": "head"}
http://nrich.maths.org/6554/solution?nomenu=1
## 'Coded Hundred Square' printed from http://nrich.maths.org/

Class 3 from Selside Endowed C of E Primary wrote to say: We began by working out that the number represented by three symbols must be $100$. This jigsaw piece helped us to identify other symbols. We identified that $1$ was represented by a large diamond and quickly found the opening jigsaw piece for the hundred square. By using the patterns of the hundred square we were able to complete it without much difficulty. Ilakya from Glenarm College used a similar method and said: Here is how I figured the code out: First I started with one hundred as it has three digits. I found the code with three shapes and put it in the place of one hundred. Now I realized that in one row all the shapes except the last shape must start with the same shape because numbers apart from $1$-$9$, in one row, start with the same number. e.g. $11$, $12$, $13$, $14$, $15$, $16$, $17$, $18$, $19$. All these numbers start with $1$ so they all must start with the same shape. If I could not figure out where in that row the shape should go I looked at the last number. In one column all the last numbers should be the same. e.g. $11$, $21$, $31$, $41$, $51$, $61$, $71$, $81$, $91$. All these numbers end in $1$ and are in the same column, so they must be represented by the same shape. By using this technique I built the whole hundred square! Sophie and Kyle from Holy Cross Primary explained clearly their way of working: First we noticed the only piece with three symbols had to fit in to the $100$ slot. From that you can tell the big diamond is the symbol for $1$ and the small diamond is the symbol for $0$. With this information we placed the number $1$ piece in to the right position. With this piece in place it gives you the symbols for $2$, $3$, $4$, and $5$. We then put the single symbols into place at the top using our information. Knowing the symbols for $0$ - $9$ it was a simple case of putting the rest of the pieces into place. Students from Newmarket College sent in very good reasoning too. They also sent in this image of the completed square: Eloise went about the problem slightly differently: My solution is each piece has a code on it so I started with the single symbols (the ones with one symbol on it), then found out that when I put the single symbols in I knew the numbers that matched with the different symbols, so then the rest of the pieces just fell in place. Robbie from Orchard Junior School worked in a similar way to Eloise. Kyle from Orchard Junior School used a different approach: I worked out that the diamond was $1$ because it was the only symbol that appeared $11$ times in the solution and that the smaller diamonds were $0$ because it was the only symbol that appeared $12$ times, which made the rest easy. I like this way of working, Kyle, but are you sure that the number $1$ only appears eleven times?
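Whether Kyle's counts are right can be settled with a quick tally (this check is an editorial addition, not one of the submitted solutions):

```
#Count how often each digit appears when 1 to 100 are written out
digits <- unlist(strsplit(as.character(1:100), ""))
table(digits)
#The digit 1 in fact appears 21 times (units place, tens place, and
#in 100), while 0 appears 11 times, which is why the closing question
#asks Kyle to double-check.
```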
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 38, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9669718742370605, "perplexity_flag": "middle"}
http://mathhelpforum.com/statistics/181902-shuffling-deck-cards-n-number-times-get-organized-order.html
# Thread:

1. ## Shuffling a deck of cards "n" number of times to get an organized order

The scenario is that a fair deck of 52 playing cards is shuffled a certain number of times (52! times). (a) What is the probability of shuffling a correctly ordered (Ace to 2) deck of cards? (b) What happens to the probability of shuffling a correctly ordered deck of cards as the number of shuffles increases? Here's what I did so far: 1. Probability of shuffling once and getting the correctly ordered deck = 1/52!. (52! arrangements of 52 cards, 1 permutation that works) 2. Probability of shuffling twice and getting the correctly ordered deck: P(2) = 1/52!^2, so in general: P(n) = 1/52!^n. We see that as the number of trials increases, P(n) approaches zero. So for (b) the probability of shuffling a correctly ordered deck decreases as "n" approaches infinity. (Counterintuitive here, so I'm a bit skeptical that this is right.) 3. P(52!) = 1/52!^(52!) = very close to 0. --- My friend disagrees, maintaining that: The probability of it NOT being the right deck is (1 - 1/52!). If we shuffle again and again, and we do NOT get the deck we are looking for, then the probability of NOT getting the right deck after n shuffles is (1 - 1/52!)^n. As n approaches infinity, this probability approaches 0, meaning that we WILL eventually break the chain of not getting the right deck. He thinks that the probabilities do not change for any specific shuffle. However, they do if you consider n shuffles, and are looking for the probability of success or failure at least once in these shuffles. He interprets it in reverse, Q(n) as opposed to P(n), and it makes sense intuitively. But I'm confused as to why my friend's interpretation contrasts with mine even though we seem to be modeling the same scenario, albeit in terms of complements.

2. Your assessment of (1) is correct. There are 52! permutations without replacement that can occur with a deck of 52 cards. It should be clear that the first card can come in 52 possible ways, the second in (52-1), the third in (52-2), ..., etc. Thus, we have 52*51*...*1 = 52! permutations of the entire deck. The case of a completely ordered deck is just one of those 52! and therefore the answer is, as you indicated, 1/52!. As for part 2, you need to be clear about what the probability is about. You do not want to commit the Gambler's Fallacy. The probability found in (1) will not change just because we shuffle the deck, no matter how many times we do it. The odds of a given hand in poker, for instance, do not change if the deck is randomized each time (and that is the key here). However, if you're taking the experiment to be "what are the odds of getting a completely ordered deck in n shuffles" then we're looking at a binomial experiment. In each of the n trials you either get it or you don't. The probability of a success comes from (1). From there it is a direct application of the binomial distribution. Therefore, your friend is correct in that "the probabilities do not change for any specific shuffle." That is what it means to have determined (1). But that is not the same thing as talking about observing a certain number of successes (in this case, one) in n such shuffles. Each shuffle is considered a Bernoulli trial, and as I indicated above, the binomial distribution is the extension of those trials. The probability of a joint event as represented in a binomial distribution is clearly going to be different from that of any one Bernoulli trial.

3.
Originally Posted by Masterthief1324 (b) What happens to the probability of shuffling a correctly ordered deck of cards as the number of shuffles increases? I'll add that the way this question is worded sounds to me like asking what happens to the probability found in (1) as you increase the number of times you repeat that experiment. The answer would be that it doesn't change the probability at all. You could ask a number of other questions, too, as I alluded to above. You can ask how many times you can expect to see the ordered deck in n shuffles, or you can ask how many shuffles are required before you can expect to see an ordered deck.

4. Thanks, I can see your point about ambiguity. To clarify, I wanted to ask about the probability of 1 success in 52! trials. I figured that this was a case of the binomial experiment, so I used the formula $P(r) = \binom{n}{r} p^r q^{n-r}$, where n = number of trials, r = number of successes, p = probability of success, to obtain the probability of 1 success. In a binomial experiment, the probability of success in each trial is the same, as you have suggested. I mentioned to my friend that his method was incomplete since this was a binomial trial and this scenario simplifies to P(1) = n*p*q^(n-1) and not q^n. r=1 is insignificant in the case that "n" is very large, but not so insignificant that it should be disregarded, because small "r" is significant in the case that n is small. Was I correct?

5. You can use latex to state the equation thusly: $_nC_k p^k (1-p)^{n-k}$ [tex]_nC_k p^k (1-p)^{n-k}[/tex] or $\binom{n}{k} p^k (1-p)^{n-k}$ [tex]\binom{n}{k} p^k (1-p)^{n-k}[/tex] In the case of exactly one success, we have $P(k = 1) = \binom{n}{1} p^1 (1-p)^{n-1} = npq^{n-1}, q = 1-p$ Of course, if n increases so does the probability. Even unlikely events become more probable if we consider them as a joint process. Consider a biased coin that turns up heads 1/52! of the time. In any one toss we would not expect a heads, but if we consider the process (experiment, event, etc.) of heads showing up in $10^{10^{10}}$ tosses, it becomes a lot more likely. We can model that process with the binomial distribution, which looks at such processes that consist of those single events with probability p. As for significance, you need to define what is 'significant'.
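For anyone who wants to see the numbers, here is a short R sketch of the "at least one success in n shuffles" computation (the choice of n is an arbitrary assumption; log1p/expm1 are used so the answer isn't rounded to 0 in floating point):

```
p <- exp(-lfactorial(52))  #1/52!, about 1.24e-68
n <- 1e9                   #a billion shuffles (an arbitrary choice)
-expm1(n*log1p(-p))        #P(at least one success) = 1 - (1-p)^n
n*p                        #first-order approximation; essentially identical
lfactorial(52)/log(10)     #log10(52!) is about 67.9
```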
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9500778317451477, "perplexity_flag": "head"}
http://mathoverflow.net/revisions/101871/list
I assume you are referring to the argument on page 313 of Jean's paper http://www.springerlink.com/content/a343g53033872345/ . The point here is that the bound does not hold for all $t$, but for a single $t$ (out of $J$ possible choices $t_1,\dots,t_J$); note that Jean crucially refers in the paper to a "suitable" $t$ rather than an arbitrary $t$. This is a pigeonholing argument, based on the estimation of $$\sum_{j=1}^J \| f * P_{\delta t_j} - f * P_{\delta^{-1} t_j} \|_{L^2}^2$$ which can be done by Plancherel's theorem and routine computations (if the $t_j$ are lacunary, as noted in Jean's paper).

The use of pigeonholing to turn qualitative results (such as dominated convergence) into quantitative ones (at the cost of losing some control on the parameter for which the bound is attained) is an important trick in the subject; I discuss it at http://terrytao.wordpress.com/2007/05/23/soft-analysis-hard-analysis-and-the-finite-convergence-principle/ .

Another key trick displayed here is to always be aware whether one needs to control the worst-case choice of parameter (i.e. uniform bounds), average-case choice of parameter (e.g. integrated or probabilistic bounds), or best-case choice of parameter (e.g. what comes from the pigeonhole principle). In this case, because one only needs the bound for a single $t$, best-case analysis suffices, and one can use many more tricks in this setting than in worst-case or average-case analysis.

Incidentally, I found the reading of Jean's papers as a graduate student to be simultaneously extremely frustrating and extremely rewarding. Decoding an offhand remark or a mysterious step in his paper was often as instructive (and as time-consuming) as reading several pages of arguments by some other authors. (But his papers do become much easier to read once one has internalised enough of his "box of tools"...)
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 17, "mathjax_display_tex": 4, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.919488787651062, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/31367/proving-that-a-binary-matrix-is-totally-unimodular
## Proving that a binary matrix is totally unimodular

I'm working on a set of problems for which I can formulate binary integer programs. When I solve the linear relaxations of these problems, I always get integer solutions. I would like to prove that this is always the case. I believe that this involves proving that the constraint matrix is totally unimodular. Are there any sufficient conditions for binary matrices to be totally unimodular that might be of use here? - 1 Related mathoverflow.net/questions/28135/… – Gjergji Zaimi Jul 11 2010 at 6:48

## 1 Answer

Here are some common ways of proving a matrix is TU.

1. The incidence matrix of a bipartite graph and the constraint matrices of network flow LPs are TU; these are standard examples usually taught in every book on TU.
2. The consecutive-ones property: if it is (or can be permuted into) a 0-1 matrix in which, for every row, the 1s appear consecutively, then it is TU. (The same holds for columns, since the transpose of a TU matrix is also TU.)
3. Every "network matrix," defined as follows, is TU (and they are a fundamental building block of the set of all TU matrices, according to Seymour's theorem). The rows correspond to a tree $T = (V, R)$, each of whose arcs has an orientation (i.e. it is not necessary that there exist a root vertex $r$ such that the tree is "rooted into $r$" or "out of $r$"). The columns correspond to another set $C$ of arcs on the same vertex set $V$. To compute the entry at row $R$ and column $C = st$, look at the $s$-to-$t$ path $P$ in $T$; then the entry is:
   • +1 if arc $R$ appears forward in $P$
   • -1 if arc $R$ appears backwards in $P$
   • 0 if arc $R$ does not appear in $P$
   [You can see more in Schrijver's 2003 book.]
4. Ghouila-Houri showed a matrix is TU iff for every subset $R$ of rows, there is an assignment $s : R \to \pm 1$ of signs to rows so that the signed sum $\sum_{r \in R} s(r)r$ (which is a row vector the same width as the matrix) has all its entries in $\{0, \pm1\}$.

There are other if-and-only-if conditions like Ghouila-Houri too (see Schrijver 1998) but the 4 conditions I gave above have been the most practical for me. - Thanks for the info. You should consider contributing this to the Wikipedia page on TU; the page is not very helpful at the moment. – unknown (google) Sep 9 2010 at 10:50
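For small matrices, a brute-force check directly against the definition can also be handy (this snippet is an added illustration, exponential in the matrix size, and tests the definition rather than the four criteria above): a matrix is TU iff every square submatrix has determinant in $\{0,\pm 1\}$.

```
is_TU <- function(A, tol=1e-9) {
  m <- nrow(A); n <- ncol(A)
  for (k in 1:min(m,n)) {
    for (r in combn(m, k, simplify=FALSE))
      for (cc in combn(n, k, simplify=FALSE)) {
        d <- det(A[r, cc, drop=FALSE])
        #fail as soon as a subdeterminant is not in {-1, 0, 1}
        if (min(abs(d - c(-1,0,1))) > tol) return(FALSE)
      }
  }
  TRUE
}

#Incidence matrix of the bipartite graph K_{2,2} (rows = edges,
#columns = vertices), a standard TU example from point 1 above
A <- rbind(c(1,0,1,0), c(1,0,0,1), c(0,1,1,0), c(0,1,0,1))
is_TU(A)  #TRUE
```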
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 22, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9262606501579285, "perplexity_flag": "head"}
http://mathhelpforum.com/advanced-algebra/58278-algebraically-closed-field-print.html
# Algebraically closed field

• November 7th 2008, 06:41 PM roporte
Algebraically closed field
Prove that every algebraically closed field is finite. thanks!!
• November 7th 2008, 07:33 PM whipflip15
The complex numbers are algebraically closed but they are infinite... In fact a finite field cannot be algebraically closed. This is easy to see because if F is a finite field and $F=\{a_{1},a_{2},...,a_{n}\}$ then the polynomial $(x-a_{1})(x-a_{2})...(x-a_{n})+1$ does not have a root in F.
• November 7th 2008, 07:47 PM roporte
Quote: Originally Posted by whipflip15 The complex numbers are algebraically closed but they are infinite... In fact a finite field cannot be algebraically closed. This is easy to see because if F is a finite field and $F=\{a_{1},a_{2},...,a_{n}\}$ then the polynomial $(x-a_{1})(x-a_{2})...(x-a_{n})+1$ does not have a root in F.
SORRY! I meant to say: prove that every algebraically closed field is infinite.
• November 7th 2008, 07:56 PM whipflip15
Yeah, I thought so. There are many ways to show it; however, the method above is probably the easiest.
• November 8th 2008, 03:18 PM ThePerfectHacker
It has already been addressed here.
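whipflip15's polynomial can even be checked mechanically for a small field. The following snippet (an added illustration, using $\mathbb{Z}/5\mathbb{Z}$ as the finite field; the choice of $p = 5$ is arbitrary) evaluates $(x-a_1)(x-a_2)\cdots(x-a_n)+1$ at every field element:

```
p <- 5
#One factor (x - a) vanishes at each field element, so f(x) is always 1
f <- function(x) (prod((x - 0:(p-1)) %% p) + 1) %% p
sapply(0:(p-1), f)  #every value is 1, never 0: no root in the field
```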
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9562550783157349, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/259287/big-o-proving-that-an-estimate-is-correct
# Big O - proving that an estimate is correct

I recently submitted this for homework. The question asked to give a big-O estimate for (1) below. I have included the feedback in bold. It seems the solution I proposed lacked a proof and I am unsure what I need to do to prove it. I wasn't even aware I had to offer a proof for a question that asked to just give a big-O estimate. Do I need to show how I arrived at (2) by including a statement like "if $f(n) = O(a(n))$ and $g(n) = O(b(n))$ then $f(n)g(n) = O(a(n)b(n))$"? Please could someone advise what a complete solution should look like, as I would like to correct my error. $$(1) \quad (n! + 2^{n+3})(111n^3 + 15\log(n^{201} +1))$$
• $n! = O(n^{n})$ <<--- "Better O(n!)"
• $2^{n+3}=O(2^{n+3})$
• $111n^{3}=O(n^{3})$
• $15\log(n^{201} +1)= O(15\log n^{201})$

Therefore the dominant term appears to derive from $(n!)\cdot(111n^{3})$, which would give us the following: (2) $O(n^{n+3})$ or $O(n!\,n^3)$. <<-- "You need to prove it" - 1 – Amzoti Dec 16 '12 at 14:42 1 cheers @Amzoti. – bosra Dec 16 '12 at 14:44
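For completeness, here is one way the requested proof might go (this sketch is an addition, not part of the original homework or its feedback): exhibit explicit witnesses $C$ and $n_0$. For $n \ge 6$ we have $2^{n+3} \le n!$ (true at $n = 6$ since $512 \le 720$, and the ratio $n!/2^{n+3}$ grows by a factor $(n+1)/2 \ge 1$ at each step), hence $n! + 2^{n+3} \le 2\,n!$. Also, for $n \ge 2$, $\log(n^{201}+1) \le \log(n^{202}) = 202\log n \le 202\,n$, so for $n \ge 6$ we get $15\log(n^{201}+1) \le 3030\,n \le 111\,n^3$, and therefore $111n^3 + 15\log(n^{201}+1) \le 222\,n^3$. Multiplying the two bounds gives $$(n! + 2^{n+3})(111n^3 + 15\log(n^{201}+1)) \le 444\,n!\,n^3 \quad\text{for all } n \ge 6,$$ which is precisely the definition of $O(n!\,n^3)$ with $C = 444$ and $n_0 = 6$.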
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 6, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9712129235267639, "perplexity_flag": "middle"}
http://crypto.stackexchange.com/questions/1054/is-there-a-group-of-prime-order-which-could-fit-the-ct-computational-diffie-hell?answertab=votes
# Is there a group of prime order which could fit the CT-Computational Diffie-Hellman assumption?

I'm trying to choose a group that is hard under the Chosen-Target Computational Diffie-Hellman assumption, according to the definition in this paper, in order to implement the oblivious transfer scheme defined in the top box on page 10 (=406). The (intimidating, to me) CT-CDH assumption is defined as follows (page 7 = 403):

Let $\mathbb{G}_q$ be a group of prime order $q$, $g$ be a generator of $\mathbb{G}_q$, $x\in \mathbb{Z}^*_q$. Let $H_1 : \{0, 1\}^* \rightarrow \mathbb{G}_q$ be a cryptographic hash function. The adversary $A$ is given input $(q, g, g^x, H_1)$ and two oracles: target oracle $TG(\cdot)$ that returns a random element $w_i \in \mathbb{G}_q$ at the $i$-th query and helper oracle $HG(\cdot)$ that returns $(\cdot)^x$. Let $q_T$ and $q_H$ be the number of queries $A$ made to the target oracle and helper oracle respectively. Assumption: The probability that $A$ outputs $k$ pairs $((v_1, j_1), (v_2, j_2), \dots, (v_k, j_k))$, where $v_i = (w_{j_i})^x$ for $i \in \{1, 2, \dots , k\}$, $q_H \lt k \leq q_T$, is negligible.

It should be noted that this assumption is equivalent to the standard Computational Diffie-Hellman assumption when $q_T=1$, according to this paper. Can anyone give an example of a group that fits the bill? I tried $\mathbb{Z}^*_q$ for a prime $q$ under multiplication, but that's of order $q-1$, which is clearly not prime. However, the complexity analysis on page 12 of the paper is in terms of modular exponentiations. Additionally (I can make a new question for this, if scolded), how would one implement the $(D_j)^{a_j^{-1}}$ operation in step 5 of the protocol? I can't figure out if it's equivalent to the discrete log problem. - 1 This assumption in the paper looks bogus. It does not say how the hash function is used, for example. – Paŭlo Ebermann♦ Oct 24 '11 at 19:46 Sorry, it says how $H_1$ is used in the paper. I don't think it's relevant for the assumption, though. The second paper I mentioned gives a definition without the hash function that is nearly identical. – oopsdude Oct 24 '11 at 20:10

## 1 Answer

The usual technique for having a group of prime size $q$ is to work modulo a prime $p$ such that $q$ divides $p-1$. The target group is then the subgroup of $q$-th roots of $1$ in $\mathbb{Z}_p$. To build such a group, first choose $q$, then select random values $r$ until you find one such that $p = qr+1$ is prime. This is the way it is defined in the DSA standard. The remaining part is: how to build $H_1$, the hash function which produces elements in the target group? For that, you first use a hash function which produces values modulo $p$ (e.g. you use a PRNG seeded with the hashed data, and produce bit sequences of the size of $p$ until you find one which is between $0$ and $p-1$); then, you raise that value to the power $(p-1)/q$. The result is necessarily a $q$-th root of 1, and the whole is a "hash function". As far as I know, such a group would fulfill the CT-CDH assumption -- i.e. there is no known way to break it. CT-CDH is a "weaker" assumption than standard CDH, but there is no proof that it is strictly weaker. For your additional question: $R$ knows the $a_j$, which are random non-zero integers modulo $q$. $R$ can thus compute each $a_j^{-1}$ modulo $q$ (that's regular modular inversion). In the expression "$(D_j)^{a_j^{-1}}$", $D_j$ is part of a group of size $q$, so any exponent can be taken modulo $q$. - Wow, awesome answer. Thanks!
Would you mind explaining CT-CDH in terms of CDH? I understand the latter, but only vaguely understand the former. – oopsdude Oct 24 '11 at 23:18 CDH is about computing $h^{xy}$ given $h$, $h^x$ and $h^y$; this is equivalent to computing $m^x$ given $g$, $m$ and $g^x$ (simply declare $m = g^y$; for any $m$ in the group there is a corresponding $y$). So CDH is about trying to raise a single given (random) $m$ to the unknown $x$-th power. With CT-CDH, we give access to a "raise to the $x$-th power" machine, but we allow strictly fewer than $k$ calls to that machine, and we ask for the "raised to the $x$-th power" version of $k$ random values (the $w_i$). The attacker has the choice of the target, but he must still be smart at some point. – Thomas Pornin Oct 25 '11 at 0:31 Now I get it. Thanks. – oopsdude Oct 25 '11 at 16:48
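As a concrete illustration of the construction in the answer above, here is a minimal Python sketch of the DSA-style parameter generation and the hash-into-subgroup trick. It is a toy, not a vetted implementation: sympy's primality routines and SHA-256 are stand-ins of my choosing, the function names are mine, and real deployments would use far larger parameters (e.g. 256-bit $q$, 3072-bit $p$).

```python
import hashlib
import random
from sympy import isprime, randprime  # any solid primality test would do

def gen_group(q_bits=160, p_bits=512):
    """Pick a prime q, then try random r until p = q*r + 1 is prime (DSA-style)."""
    q = randprime(2**(q_bits - 1), 2**q_bits)
    while True:
        r = random.getrandbits(p_bits - q_bits)
        p = q * r + 1
        if p.bit_length() == p_bits and isprime(p):
            return p, q

def hash_to_subgroup(data: bytes, p: int, q: int) -> int:
    """H1: hash to a value mod p, then raise it to (p-1)/q so it lands in
    the subgroup of q-th roots of 1 in Z_p^*."""
    ctr = 0
    while True:
        h = int.from_bytes(
            hashlib.sha256(data + ctr.to_bytes(4, "big")).digest(), "big") % p
        w = pow(h, (p - 1) // q, p)
        if w != 0 and w != 1:  # reject the degenerate elements
            return w
        ctr += 1
```

A generator $g$ can be obtained the same way: take any value modulo $p$, raise it to the power $(p-1)/q$, and keep it if the result is not $1$.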
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 73, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9197594523429871, "perplexity_flag": "head"}
http://mathhelpforum.com/discrete-math/107054-discrete-mathematics-problem-proofs.html
# Thread:

1. ## Discrete Mathematics Problem (Proofs) Prove for all M and N, if M and M-N are even, then N is even.

2. Originally Posted by sderosa518 Prove for all M and N, if M and M-N are even, then N is even. Assume, by contradiction, that $N$ is odd. Then $N=2k+1$ for some $k \in \mathbb{N}$. We also know that $M = 2r$ for some $r \in \mathbb{N}$. Then $M-N = 2r - (2k+1) = 2(r-k) + 1$, which is an odd number; thus $M-N$ is odd, a contradiction, and so N is even. .. Or: $N = M + (-1)(M - N)$, and the sum of two even integers is even.

3. Originally Posted by sderosa518 Prove for all M and N, if M and M-N are even, then N is even. Do you know that the sum or difference of even numbers is even? If you can't use (or don't know) this then prove it: it's trivial when we characterize an even number as 2k, where k is an integer. Well, since N = M - (M-N), we're done. Tonio

4. Originally Posted by sderosa518 Prove for all M and N, if M and M-N are even, then N is even. Or simply, if M is even then M = 2k for some integer k. If M-N is even, then M-N = 2j for some integer j. N = M-(M-N) = 2k - 2j = 2(k-j).
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 8, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9381881356239319, "perplexity_flag": "middle"}
http://en.wikipedia.org/wiki/Consumer_choice
# Consumer choice

In microeconomics, the theory of consumer choice relates preferences (for the consumption of both goods and services) to consumption expenditures; ultimately, this relationship between preferences and consumption expenditures is used to relate preferences to consumer demand curves. The link between personal preferences, consumption, and the demand curve is one of the most closely studied relations in economics. Consumer choice theory is a way of analyzing how consumers may achieve equilibrium between preferences and expenditures by maximizing utility subject to consumer budget constraints.[citation needed]

Preferences are the desires by each individual for the consumption of goods and services that translate into choices based on income or wealth for purchases of goods and services to be combined with the consumer's time to define consumption activities. Consumption is separated from production, logically, because two different consumers are involved. In the first case consumption is by the primary individual; in the second case, a producer might make something that he would not consume himself. Therefore, different motivations and abilities are involved. The models that make up consumer theory are used to represent prospectively observable demand patterns for an individual buyer on the hypothesis of constrained optimization. Prominent variables used to explain the rate at which the good is purchased (demanded) are the price per unit of that good, prices of related goods, and wealth of the consumer.

The fundamental theorem of demand states that the rate of consumption falls as the price of the good rises; this is called the substitution effect. Clearly, if one does not have enough money to pay the price, then one cannot buy any of that item. As prices rise, consumers will substitute away from higher priced goods and services, choosing less costly alternatives. Subsequently, as the wealth of the individual rises, demand increases, shifting the demand curve higher at all rates of consumption; this is called the income effect. As wealth rises, consumers will substitute away from less costly inferior goods and services, choosing higher priced alternatives.

## Model setup

Further information: Indifference curve and Budget constraint

Economists' modern solution to the problem of mapping consumer choices is indifference curve analysis. For an individual, indifference curves and an assumption of constant prices and a fixed income in a two-good world will give the following diagram. The consumer can choose any point on or below the budget constraint line BC. This line is diagonal since it comes from the equation $xp_X + y p_Y \leq \mathrm{income}$. In other words, the amount spent on both goods together is less than or equal to the income of the consumer. The consumer will choose the indifference curve with the highest utility that is within his budget constraint. Every point on I3 is outside his budget constraint so the best that he can do is the single point on I2 that is tangent to his budget constraint. He will purchase X* of good X and Y* of good Y.

Indifference curve analysis begins with the utility function. The utility function is treated as an index of utility.[1] All that is necessary is that the utility index change as more preferred bundles are consumed. Indifference curves are typically numbered with the number increasing as more preferred bundles are consumed.
However, it is not necessary that numbers be used - any indexing system would suffice - colors, for example. The advantage of numbers is that their use makes the math simpler. Numbers used to index indifference curves have no cardinal significance. For example, if three indifference curves are labeled 1, 4, and 16 respectively, that means nothing more than that the bundles "on" indifference curve 4 are preferred to the bundles "on" indifference curve 1. The fact that one index number is a multiple of another is of no significance. For example, the bundles on curve 4 are not four times as satisfying as those on curve 1. As noted, the labels merely indicate that those bundles are more preferred.

Income effect and price effect deal with how the change in price of a commodity changes the consumption of the good. The theory of consumer choice examines the trade-offs and decisions people make in their role as consumers as prices and their income change.

## Substitution effect

Main article: Substitute good

The substitution effect is the effect observed with changes in the relative price of goods. This effect corresponds to movement along the curve. These curves can be used to predict the effect of changes to the budget constraint. The graphic below shows the effect of a price increase for good Y. If the price of Y increases, the budget constraint will pivot from BC2 to BC1. Notice that because the price of X does not change, the consumer can still buy the same amount of X if he or she chooses to buy only good X. On the other hand, if the consumer chooses to buy only good Y, he or she will be able to buy less of good Y because its price has increased.

To maximize utility with the reduced budget constraint, BC1, the consumer will re-allocate consumption to reach the highest available indifference curve which BC1 is tangent to. As shown on the diagram below, that curve is I1, and therefore the amount of good Y bought will shift from Y2 to Y1, and the amount of good X bought will shift from X2 to X1. The opposite effect will occur if the price of Y decreases, causing the shift from BC2 to BC3, and I2 to I3. If these curves are plotted for many different prices of good Y, a demand curve for good Y can be constructed. The diagram below shows the demand curve for good Y as its price varies. Alternatively, if the price for good Y is fixed and the price for good X is varied, a demand curve for good X can be constructed.

## Income effect

Main article: Income effect

Another important item that can change is the money income of the consumer. The income effect is the phenomenon observed through changes in purchasing power. It reveals the change in quantity demanded brought by a change in real income (utility). Graphically, as long as the prices remain constant, changing the income will create a parallel shift of the budget constraint. Increasing the income will shift the budget constraint right since more of both can be bought, and decreasing income will shift it left.

Depending on the indifference curves, as income increases, the amount purchased of a good can either increase, decrease or stay the same. In the diagram below, good Y is a normal good since the amount purchased increased as the budget constraint shifted from BC1 to the higher income BC2. Good X is an inferior good since the amount bought decreased as income increased.
$\Delta y_1^n$ is the change in the demand for good 1 when we change income from $m'$ to $m$, holding the price of good 1 fixed at $p_1'$: $\Delta y_1^n = y_1(p_1', m) - y_1(p_1',m').$

## Price effect as sum of substitution and income effects

Further information: Slutsky equation and Hicksian demand

Every price change can be decomposed into an income effect and a substitution effect; the price effect is the sum of substitution and income effects. The substitution effect is a price change that alters the slope of the budget constraint but leaves the consumer on the same indifference curve. In other words, it illustrates the consumer's new consumption basket after the price change while being compensated so as to allow the consumer to be as happy as he or she was previously. By this effect, the consumer is posited to substitute toward the good that becomes comparatively less expensive. In the illustration below this corresponds to an imaginary budget constraint denoted SC being tangent to the indifference curve I1. If the good in question is a normal good, then the income effect from the rise in purchasing power from a price fall reinforces the substitution effect. If the good is an inferior good, then the income effect will offset to some degree the substitution effect. If the income effect for an inferior good is sufficiently strong, the consumer will buy less of the good when it becomes less expensive, a Giffen good (commonly believed to be a rarity).

In the figure, the substitution effect, $\Delta y_1^s$, is the change in the amount demanded for $y$ when the price of good $y$ falls from $p_1$ to $p_1'$ (increasing purchasing power for $y$) and, at the same time, the money income falls from $m$ to $m'$ to keep the consumer at the same level of utility on $I_1$: $\Delta y_1^s = y_1(p_1', m') - y_1(p_1,m).$ The substitution effect increases the amount demanded of good $y$ from $y_1$ to $y_s$. In the example, the income effect of the price fall partly offsets the substitution effect, as the amount demanded of $y$ goes from $y_s$ to $y_2$. Thus, the price effect is the algebraic sum of the substitution effect and the income effect.

## Assumptions

The behavioral assumption of the consumer theory proposed herein is that all consumers seek to maximize utility. In the mainstream economics tradition this activity of maximizing utility has been deemed the "rational" behavior of decision makers. More specifically, in the eyes of economists, all consumers seek to maximize a utility function subject to a budgetary constraint.[2] In other words, economists assume that consumers will always choose the "best" bundle of goods they can afford.[3] Consumer theory is therefore based around the problem of generating refutable hypotheses about the nature of consumer demand from this behavioral postulate.[2] In order to reason from the central postulate towards a useful model of consumer choice, it is necessary to make additional assumptions about the preferences that consumers employ when selecting their preferred "bundle" of goods.
These are relatively strict, allowing the model to generate more useful hypotheses with regard to consumer behaviour than weaker assumptions, which would allow any empirical data to be explained in terms of stupidity, ignorance, or some other factor, and hence would not be able to generate any predictions about future demand at all.[2] For the most part, however, they represent statements which would only be contradicted if a consumer was acting in (what was widely regarded as) a strange manner.[4] In this vein, the modern form of consumer choice theory assumes:

Preferences are complete
Consumer choice theory is based on the assumption that the consumer fully understands his or her own preferences, allowing for a simple but accurate comparison between any two bundles of goods presented.[3] That is to say, it is assumed that if a consumer is presented with two consumption bundles A and B, each containing different combinations of n goods, the consumer can unambiguously decide if (s)he prefers A to B, B to A, or is indifferent between both.[2][3] The few scenarios where it is possible to imagine that decision-making would be very difficult are thus placed "outside the domain of economic analysis".[3] However, discoveries in behavioral economics have found that decision making is affected by whether choices are presented together or separately, through the distinction bias.

Preferences are reflexive
This means that if A and B are in all respects identical, the consumer will consider A to be at least as good as (weakly preferred to) B.[3] Alternatively, the axiom can be modified to read that the consumer is indifferent with regard to A and B.[5]

Preferences are transitive
If A is preferred to B and B is preferred to C, then A must be preferred to C. This also means that if the consumer is indifferent between A and B and is indifferent between B and C, she will be indifferent between A and C. This is the consistency assumption. This assumption eliminates the possibility of intersecting indifference curves.

Preferences exhibit non-satiation
This is the "more is always better" assumption; that in general, if a consumer is offered two almost identical bundles A and B, but where B includes more of one particular good, the consumer will choose B.[6] Among other things this assumption precludes circular indifference curves. Non-satiation in this sense is not a necessary but a convenient assumption. It avoids unnecessary complications in the mathematical models.

Indifference curves exhibit diminishing marginal rates of substitution
This assumption assures that indifference curves are smooth and convex to the origin. It is implicit in the last assumption. It also sets the stage for using techniques of constrained optimization, because the shape of the curve assures that the first derivative is negative and the second is positive. The MRS tells how much y a person is willing to sacrifice to get one more unit of x. This assumption incorporates the theory of diminishing marginal utility. The primary reason to have these technical preferences is to replicate the properties of the real number system so the math will work.

Goods are available in all quantities
It is assumed that a consumer may choose to purchase any quantity of a good (s)he desires, for example, 2.6 eggs and 4.23 loaves of bread.
Whilst this makes the model less precise, it is generally acknowledged to provide a useful simplification to the calculations involved in consumer choice theory, especially since consumer demand is often examined over a considerable period of time. The more spending rounds are offered, the better an approximation the continuous, differentiable function is for its discrete counterpart. (Whilst the purchase of 2.6 eggs sounds impossible, an average consumption of 2.6 eggs per day over a month does not.)[6]

Note the assumptions do not guarantee that the demand curve will be negatively sloped. A positively sloped curve is not inconsistent with the assumptions.[7]

### Use value

In Marx's critique of political economy, any labor-product has a value and a use value, and if it is traded as a commodity in markets, it additionally has an exchange value, most often expressed as a money-price.[8] Marx acknowledges that commodities being traded also have a general utility, implied by the fact that people want them, but he argues that this by itself tells us nothing about the specific character of the economy in which they are produced and sold.

## Labor-leisure tradeoff

Main article: Backward bending supply curve of labour

Consumer theory can also be used to analyze a consumer's choice between leisure and labor. Leisure is considered one good (often put on the horizontal axis) and consumption is considered the other good. Since a consumer has a finite and scarce amount of time, he must make a choice between leisure (which earns no income for consumption) and labor (which does earn income for consumption). The previous model of consumer choice theory is applicable with only slight modifications.

First, the total amount of time that an individual has to allocate is known as his time endowment, and is often denoted as T. The amount an individual allocates to labor (denoted L) and leisure (l) is constrained by T such that: $l + L = T\,\!$ or $l + (T-l) = T\,\!$ A person's consumption is the amount of labor they choose multiplied by the amount they are paid per hour of labor (their wage, often denoted w). Thus, the amount that a person consumes is: $C = w(T-l)\,\!$ When a consumer chooses no leisure $(l=0)$ then $T-l = T$ and $C = wT$. From this labor-leisure tradeoff model, the substitution effect and income effect from various changes in price caused by welfare benefits, labor taxation, or tax credits can be analyzed. A toy numerical sketch of this model follows the references below.

## References

1. ^ a b c d
2. ^ a b
• Silberberg; Suen (2001). The Structure of Economics, A Mathematical Analysis. McGraw-Hill.
• Böhm, Volker; Haller, Hans (1987). "Demand theory". 1. pp. 785–92.
• Hicks, John R. (1946). (2nd ed.).
• Binger; Hoffman (1998). Microeconomics with Calculus (2nd ed.). Addison Wesley. pp. 141–43.
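Here is the promised numerical sketch of the labor-leisure tradeoff. The Cobb-Douglas utility $u(C, l) = \sqrt{C\,l}$ and all parameter values are assumptions of mine, not part of the article; everything else follows the budget identity $C = w(T-l)$ given above.

```python
import numpy as np

T, w = 16.0, 10.0                      # hours available, hourly wage (assumed)
l = np.linspace(0.01, T - 0.01, 2001)  # candidate leisure choices
C = w * (T - l)                        # consumption bought with labor income
u = np.sqrt(C * l)                     # assumed utility over (consumption, leisure)

best = np.argmax(u)
print(l[best], C[best])                # ~8 hours of leisure, ~80 of consumption
```

With equal Cobb-Douglas weights the consumer splits the time endowment evenly, $l^* = T/2$; raising or lowering $w$ in this sketch lets one trace the substitution and income effects the section describes.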
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 29, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9453458786010742, "perplexity_flag": "middle"}
http://mathhelpforum.com/calculus/108834-help-difficult-centroid-problem.html
# Thread: 1. ## Help with Difficult Centroid Problem Here is the problem, Show by direct computation that the centroid of the triangle with vertices (0,0), (r,0), and (0,h) is the point (r/3, h/3). Verify that this point lies on the line from the vertex (0,0) to the midpoint of the opposite side of the triangle, and two-thirds of the way from the vertex to the midpoint. I really don't even know where to begin... 2. Originally Posted by messianic Here is the problem, Show by direct computation that the centroid of the triangle with vertices (0,0), (r,0), and (0,h) is the point (r/3, h/3). Verify that this point lies on the line from the vertex (0,0) to the midpoint of the opposite side of the triangle, and two-thirds of the way from the vertex to the midpoint. I really don't even know where to begin... what do you know about finding the centroid of a planar region? 3. I know you first have to find the mass, but I don't know where to start for that 4. Originally Posted by messianic I know you first have to find the mass, but I don't know where to start for that $\bar{x} = \frac{\int_a^b x \cdot f(x) \, dx}{\int_a^b f(x) \, dx}$ $\bar{y} = \frac{\int_a^b \frac{1}{2}[f(x)]^2 \, dx}{\int_a^b f(x) \, dx}$ 5. No, you don't find the "mass", a geometric figure does not have a "mass"! You find the area of the figure. This is a triangle with base of length r and height h. What is its area? You still have not answered the question skeeter asked. He asked if you knew how to find the centroid and you answered that you first had to find the "mass". After you have found the area (not mass), what do you do? Since this is a right triangle, it will be particularly easy.
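Picking up skeeter's formulas, here is a short sympy sketch that carries out the direct computation; the choice of sympy and the variable names are mine. The hypotenuse of the triangle is $f(x) = h(1 - x/r)$ on $[0, r]$, so the region under $f$ is exactly the triangle.

```python
import sympy as sp

x, r, h = sp.symbols('x r h', positive=True)
f = h * (1 - x / r)                    # line through (r, 0) and (0, h)

A     = sp.integrate(f, (x, 0, r))                             # area = r*h/2
x_bar = sp.integrate(x * f, (x, 0, r)) / A                     # -> r/3
y_bar = sp.integrate(sp.Rational(1, 2) * f**2, (x, 0, r)) / A  # -> h/3

print(sp.simplify(x_bar), sp.simplify(y_bar))  # r/3, h/3
```

Both coordinates come out to $r/3$ and $h/3$, as claimed. The verification step then is that $(r/3, h/3) = \tfrac{2}{3}(r/2, h/2)$, i.e. the centroid lies two-thirds of the way along the segment from the vertex $(0,0)$ to the midpoint $(r/2, h/2)$ of the opposite side.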
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9594299793243408, "perplexity_flag": "head"}
http://mathematica.stackexchange.com/questions/11131/determine-if-solution-to-linear-system-exists?answertab=oldest
# Determine if solution to linear system exists

I'm trying to determine only if a solution to a linear system of equations exists. I have been using `LinearSolve`, which works fine, but it solves the system as well. Is there another, more efficient method for only checking the existence of a solution?

- 2 For a one-off problem where nothing special is known about the system beforehand, actually solving the equation gives you access to the most efficient methods. In MMA, `LinearSolve` is very fast and (for general-purpose work involving non-square matrices) provides solutions an order of magnitude faster than other methods using (say) `MatrixRank`, `RowReduce`, or `Minors`. As an example of how extra info can help, if it's known the coefficient matrix is orthogonal, then you already know a solution exists. – whuber Sep 26 '12 at 19:07 There is some important discussion in the comments to my answer. – Vitaliy Kaurov Sep 27 '12 at 8:24

## 3 Answers

==== Update ==== Please consider important discussion in the comments. ==== Original answer ====

If the matrix m has determinant zero, then there may be either no vector, or an infinite number of vectors x which satisfy m.x==b for a particular b. This occurs when the linear equations embodied in m are not independent. If you are interested only in well-defined systems, then, generally, confirming that you have a non-zero determinant is faster:

````m = RandomReal[1, 1000 {1, 1}]; b = RandomReal[1, 1000]; ````

Some timing tests:

````Mean@Table[LinearSolve[m, b]; // AbsoluteTiming, {30}][[All, 1]] ````

0.0712654

````Mean@Table[Det[m]; // AbsoluteTiming, {30}][[All, 1]] ````

0.0571327

- 2 I suspect the performance trade-off may fall the other way depending on the size and character of the system. The most efficient methods for solving large systems usually do not involve directly computing the determinant. (LinearSolve automatically picks an efficient method.) – george2079 Sep 26 '12 at 17:47 @george2079 the benchmark I posted is pretty general and the system is pretty large. Of course, some specific systems may show a different result. – Vitaliy Kaurov Sep 26 '12 at 17:53 3 This has two problems. For approximate matrices it is not numerically sound. Far safer is to check singular values for zeros below some tolerance. Else you can get a false positive, that is, a claim of solvability when the matrix is singular or nearly so. The other problem is that (to be cont'd) – Daniel Lichtblau Sep 27 '12 at 1:34 1 ... in the exact case, e.g. integers, Det computation can be slower than actual solving. Example: In[22]:= n = 2^10; mm = RandomInteger[{-1, 1}, {n, n}]; vec = RandomInteger[{-1, 1}, n]; In[27]:= AbsoluteTiming[sol = LinearSolve[mm, vec];] Out[27]= {11.3440000, Null} In[28]:= AbsoluteTiming[Det[mm] == 0] Out[28]= {53.7370000, False} – Daniel Lichtblau Sep 27 '12 at 1:34 1 A third problem is that computing a determinant works only when there are exactly as many equations as variables. There is a generalization (implementable via `Minors`), but it is likely to be relatively inefficient (there can be a lot of minors to compute and store). – whuber Sep 27 '12 at 1:35

Odd. If you get a solution, you know that a solution exists, doesn't it? Anyway, what you can do is suppress the output by adding a `;` to your input. Then ````LinearSolve[{{4, 8}, {9, 2}}, {x, y}]; ```` returns nothing, while ````LinearSolve[{{4, 8}, {9, 2}, {0, 0}}, {x, y, z}]; ```` returns ````LinearSolve::nosol: Linear equation encountered that has no solution. 
````

- `LinearSolve` carries out all of the steps of solving the system when all I need is a yes/no to whether or not a solution exists. So will `LinearSolve` be the most efficient way of doing this regardless? – A. R. S. Sep 26 '12 at 17:16 1 @A.R.S. - My guess is that the steps to check if there's a solution are the same as calculating the solution. (disclaimer: I'm not a mathematician, so you may wish confirmation from the Real Guys :-)) – stevenvh Sep 26 '12 at 17:19 @A.R.S., what mathematical "magic" do you think might be done to determine whether a linear system has a solution that would be substantially different from actually attempting to find a solution? Except in special cases, as others have noted, in each situation some sequence of transformations will need to be done (whether row reduction or something more sophisticated like a matrix decomposition). – murray Sep 27 '12 at 14:59 @murray - Maybe he's thinking about something like primality testing, where you can say a number is composite in a second, but may need a couple of months to find the factors. – stevenvh Sep 27 '12 at 15:03

The upshot of Vitaliy's note to check for a zero determinant is that 1. A matrix with inexact entries that is supposed to be singular (e.g. because it is rank-deficient) might not necessarily give a determinant that is exactly zero, due to roundoff. 2. A tiny determinant does not necessarily imply that the matrix is "nearly" singular. (Conversely, just because a matrix does not have a tiny determinant doesn't mean that `LinearSolve[]` won't have a problem handling it.) See for instance this answer I wrote at scicomp.SE. Thus, to safely determine if a matrix is singular, you can do either of two things: 1. Check if the output of `NullSpace[]` is an empty list. If its output on your matrix is `{}`, the matrix is nonsingular; otherwise, the number of null vectors it produces (the nullity) gives an indication of how rank-deficient it is. 2. Use the undocumented function `` LinearAlgebra`MatrixConditionNumber[] ``. Checking for singularity is as easy as seeing if its output on your matrix is $\infty$, in which case, your matrix is singular. As a bonus, if the value returned is huge, but not necessarily infinite, you still have a good warning sign that `LinearSolve[]` might treat your matrix as singular even if it isn't. See any good numerical linear algebra book (e.g. Golub/Van Loan) for details.

- 1 P.S. The nice thing about `NullSpace[]` is that it can flexibly deal with both exact and inexact matrices; IIRC it uses Gaussian elimination in the exact case, and the (safer) singular value decomposition in the inexact case. – J. M.♦ Sep 26 '12 at 23:10
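For comparison outside Mathematica, the same existence question can be phrased via the rank test (Rouché–Capelli): $m.x = b$ is solvable iff $\operatorname{rank}(m) = \operatorname{rank}([m\,|\,b])$. Here is a small numpy sketch of that test, with the same caveat the answers above raise: for inexact matrices the tolerance matters.

```python
import numpy as np

def has_solution(m, b, tol=1e-10):
    """Rank test: m.x == b is solvable iff rank(m) == rank of the augmented matrix."""
    aug = np.column_stack([m, b])
    return np.linalg.matrix_rank(m, tol=tol) == np.linalg.matrix_rank(aug, tol=tol)

m = np.array([[4., 8.], [9., 2.], [0., 0.]])
print(has_solution(m, np.array([1., 2., 0.])))  # True: last equation is 0 == 0
print(has_solution(m, np.array([1., 2., 3.])))  # False: last equation is 0 == 3
```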
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8818542957305908, "perplexity_flag": "middle"}
http://www.physicsforums.com/showthread.php?s=e78b8211e767063120e87157c1043ed7&p=4030592
Physics Forums

## Gravitational force on objects

If I take a feather and a rock and drop them at the same time, because of the effect of the gravitational force, we know that the rock will hit the ground first. (The rock experiences a larger gravitational force). My question is if these objects are dropped at the same time and vertically, then the velocity can be described as $v = gt$, for both objects. Therefore, at $t = 1, v = 9.81 m s^{-1}$ and at $t = 2, v = 19.6 m s^{-1}$. However, I find it difficult to believe the feather will have this velocity. So how are the differences in gravitational force accounted for in this equation? Many thanks

Mentor Quote by CAF123 If I take a feather and a rock and drop them at the same time, because of the effect of the gravitational force, we know that the rock will hit the ground first. (The rock experiences a larger gravitational force). The reason that the rock hits first is not because of its greater gravitational force. If only gravity acted, then both would have the same acceleration = g. But other forces are involved, in particular air resistance. Take away the air and the feather and rock will fall together.

I see, many thanks. So essentially the formula only works when considering zero air resistance.

Yes and no... Just before an object is dropped it has some energy; let's call it $E_{start}$, where $E_{start} = PE_{start} + KE_{start}$. Since it's stationary, $KE_{start} = 0$, so $E_{start} = PE_{start} = mgh$. When it hits the ground it has some energy; let's call it $E_{end}$. Then $E_{end} = PE_{end} + KE_{end}$, but $PE_{end} = 0$, so $E_{end} = KE_{end} = \frac{1}{2}mv^2$. If we assume there is no air resistance to be overcome (or any other way for energy to leave the system), then the law of conservation of energy says $E_{start} = E_{end}$, or $mgh = \frac{1}{2}mv^2$. Notice how the mass cancels and the velocity of impact is independent of mass: $v = \sqrt{2gh}$.

And if it takes $t$ seconds to cover this distance $h$, either $v = gt$ or $v = \sqrt{2gh}$ can be used to determine this impact velocity, I presume?

Quote by CAF123 $v = \sqrt{2gh}$ can be used to determine this impact velocity, I presume? That's what I said.

You have two objects of mass $M$ and $m$, with $M > m$. If you let them fall, you get from Newton's 2nd law: $Ma_1 = -Mg$ and $ma_2 = -mg$. From that you see immediately that $a_1 = a_2$, which means the speed of the mass $M$ will change by the same amount in the same time interval as will the speed of mass $m$. The mass of the falling object does not appear in your motion equations; it doesn't affect them. Even if I didn't use the "static model", that the gravitational force is $mg$, but used the force $F = GM'm/r^2$ ($M'$: Earth's mass), the masses of the falling objects would again drop out. Of course that is not obvious; it comes thanks to the equivalence principle, which says that the mass that appears in $ma$ and the mass $m^*$ that appears in $m^*(GM'/r^2)$ are the same, $m = m^*$.
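To make the mass-independence concrete, here is a quick Python check (a sketch under the thread's no-air-resistance assumption): both $v = gt$ and $v = \sqrt{2gh}$ give the same impact speed, with no mass anywhere in the calculation.

```python
import math

g = 9.81                       # m/s^2
t = 2.0                        # seconds of free fall
h = 0.5 * g * t**2             # distance fallen in that time

v_from_t = g * t
v_from_h = math.sqrt(2 * g * h)
print(v_from_t, v_from_h)      # both 19.62 m/s, for feather and rock alike
```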
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 8, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9201180338859558, "perplexity_flag": "middle"}
http://mathhelpforum.com/differential-equations/159245-find-general-solution.html
# Thread:

1. ## Find General Solution Find the general solution of y'' - iy' + 6y = 0. I know how to solve these, but the 'i' is throwing me off on this one... Can someone show this one? Thanks a lot...

2. Have you found and solved the characteristic equation?

3. The first thing I did was write: r^2 - ir + 6 = 0 Then factored it to be (r+2i)(r-3i) = 0 So, r = -2i and 3i What from here? Thanks.

4. Sounds like a good start. So you have complex solutions to your characteristic equation. What general form should you use? This example is really good to follow. Homogeneous linear equations with constant coefficients

5. The problem that I'm having is this: a = 0, but b = -2 AND b = 3 So, would my general solution look something like this? What am I missing? y(x) = e^0 (c1cos(-2) + c2sin(3))

6. Originally Posted by jzellt The problem that I'm having is this: a = 0, but b = -2 AND b = 3 So, would my general solution look something like this? What am I missing? y(x) = e^0 (c1cos(-2) + c2sin(3)) you're close as $e^0=1$ then $y(x) = c_1\cos(\beta x) + c_2\sin(\beta x)$ But now we have an additional problem, as the solutions aren't complex conjugates.

7. Exactly... That's my problem. What do I do from here?

8. Originally Posted by jzellt Exactly... That's my problem. What do I do from here? Since the DE is complex, why can't you just give $y = A e^{3ix} + B e^{-2ix}$ as the solution ....?

9. Yeah, not sure why I didn't realize that at first, but the important thing is that eventually I figured it out. Thanks
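For readers who want to double-check, here is a small sympy sketch (my own, not from the thread) verifying both the characteristic roots and the final general solution by direct substitution:

```python
import sympy as sp

x, A, B = sp.symbols('x A B')
y = A * sp.exp(3 * sp.I * x) + B * sp.exp(-2 * sp.I * x)

# Substitute the proposed general solution into y'' - i y' + 6 y
residual = sp.simplify(y.diff(x, 2) - sp.I * y.diff(x) + 6 * y)
print(residual)  # 0, so the solution satisfies the ODE

r = sp.symbols('r')
print(sp.solve(r**2 - sp.I * r + 6, r))  # [-2*I, 3*I], the characteristic roots
```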
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 3, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9579960703849792, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/56180/what-is-the-probability-that-you-toss-next-time-heads-turns-up
What is the probability that heads turns up on the next toss

A bag contains 5 coins. Four of them are fair and one has heads on both sides. You randomly pulled one coin from the bag and tossed it 5 times, and heads turned up all five times. What is the probability that heads turns up on the next toss? (All this time you don't know whether you were tossing a fair coin or not.)

- 1 What do you think? Where are you stuck? – cardinal Aug 7 '11 at 20:02 Should I consider this problem as independent of the previous tosses or not? – rohit Aug 7 '11 at 20:17 Hint: Consider how you'd answer if at least one of the previous tosses had been a tail. – cardinal Aug 7 '11 at 20:19

3 Answers

Alternately, you could fully build the tree. What is the probability that you picked a fair coin? What is the probability that it shows heads five times in a row? The unfair coin? Five times in a row? And don't forget conditional probability. -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 3, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9674738049507141, "perplexity_flag": "head"}
http://mathoverflow.net/questions/3624/nonprojective-surface/21832
## Nonprojective Surface

Let k be an algebraically closed field. It's well known that every complete curve, period, is projective. Also, that every smooth surface is, and that there are smooth 3-folds which are not, and people go to reasonable lengths to include these examples all over the place, so they're easy to find. However, Hartshorne does say that singular complete surfaces are not all projective. Is there a simple example? A complete normal surface that is not projective? Is there some "least singular" possible such surface? I suspect that normality is too much to hope for, but I can't quite phrase why I think this, so is every normal complete surface projective?

- What if we drop completeness -- is it easier to write down examples of non-quasi-projective varieties if we don't require them to be proper? – David Zureick-Brown♦ Nov 1 2009 at 0:56 1 @Igor: Kollar wrote an example for our paper arxiv.org/pdf/1109.4047.pdf (Example 34). The example is obtained by gluing two projective planes along three generic projective lines (with a twist). The result has no nontrivial line bundles, so it is not projective. – Misha Apr 21 2012 at 2:56 @Misha. But concerning this sort of twisted gluing, the result is non-algebraic, I believe. The main interest lies in algebraic examples. – Pelle Salomonsson Apr 21 2012 at 5:13

## 3 Answers

There is a construction of a proper normal non-projective surface here. There is an example given by Nagata in his paper "Existence theorems for nonprojective complete algebraic varieties" in the Illinois Journal, but I don't know where this is available on the web. Over a finite field, complete + normal implies projective for surfaces. -

A simple example of a proper non-projective surface can be found in Vakil's AG-notes: http://math.stanford.edu/~vakil/0506-216/216class4344.pdf - 1 Very nice. This is essentially Hironaka's 3-dimensional example, which locally looks like the blow-up of ℙ² along two lines which intersect in two points (but the blow-up is done "in the wrong order" at one of them). Here, you just take the "exceptional locus" (the surface lying over the two lines) and use the same proof to show that it's not projective. – Anton Geraschenko♦ Apr 19 2010 at 14:10 There are some typos in the construction in the notes, so read carefully. – Matt Aug 18 2010 at 22:08

There is also an example in an Exercise from Hartshorne's Algebraic Geometry involving infinitesimal extensions which I am trying to understand. Let me recall some definitions and properties in the first place:

• Infinitesimal lifting property: given an algebraically closed field $k$, a finitely generated $k$-algebra $A$ with $X=\mbox{Spec } A$ non-singular, and an exact sequence $0\rightarrow \mathcal{I} \rightarrow B' \rightarrow B \rightarrow 0$, where $B,B'$ are $k$-algebras and $\mathcal{I}\subset B'$ is an ideal such that $\mathcal{I}^2=0$, any $k$-algebra homomorphism $A\rightarrow B$ lifts to a $k$-algebra homomorphism $A\rightarrow B'$, and two such homomorphisms differ by a $k$-derivation of $A$ into $\mathcal{I}$, namely an element in $Hom_A(\Omega_{A/k},\mathcal{I})$.
• An infinitesimal extension of a k-scheme $X$ by a coherent sheaf $\mathcal{F}$ is a pair $(X',\mathcal{I})$ where $X'$ is a k-scheme and $\mathcal{I}$ is a sheaf of ideals on $X'$ with $\mathcal{I}^2=0$ and such that we have isomorphisms $(X',\mathcal{O}_{X'}/\mathcal{I})\cong (X,\mathcal{O}_X)$ (as k-schemes) and $\mathcal{I}\cong \mathcal{F}$ (as $\mathcal{O}_X$-modules). For instance, the trivial extension of $X$ by $\mathcal{F}$ is given by the pair $(X,\mathcal{F})$, where $X$ has structure sheaf $\mathcal{O}_X'=\mathcal{O}_X\oplus \mathcal{F}$ with product $(a\oplus f)\cdot (a'\oplus f')=aa'\oplus (af'+a'f)$, so that $\mathcal{F}$ becomes an ideal sheaf in $X'$.
• If $X=\mbox{Spec }A$ is affine and $\mathcal{F}$ is a coherent sheaf, then any extension is isomorphic to the trivial one: we just use the previous lifting property to construct a splitting of an appropriate short exact sequence.
• There is a correspondence between isomorphism classes of infinitesimal extensions of a k-scheme $X$ by a coherent sheaf $\mathcal{F}$ and the cohomology group $H^1(X,\mathcal{F}\otimes \mathcal{T}_X)$, where $\mathcal{T}_X$ is the tangent sheaf of $X$. If $(X',\mathcal{I})$ is an infinitesimal extension of $X$ by $\mathcal{F}$ and $\{U_i\}$ is an affine open cover of $X$ (so that sheaf cohomology is isomorphic to Čech cohomology), then on every open affine set the extension is trivial, namely of the form $(U_i,\mathcal{F}|_{U_i})$ with structure sheaf $\mathcal{O}_{U_i}\oplus \mathcal{F}|_{U_i}$. It is easy to see from the construction of the trivialisation (choosing a lift, and noting that the difference of two lifts gives an element of $Hom_A(\Omega_{A/k},\mathcal{I})$) that this gives a cocycle in `$H^1(X,\mathcal{F}\otimes \mathcal{T}_X)$`. Conversely, given a cocycle $\xi=(\xi_{ij})\in \check{H}^1(X,\mathcal{F}\otimes \mathcal{T}_X)$ and an open affine cover $\{U_i\}$, on each $U_i$ we have a trivial extension $(U_i,\mathcal{F}|_{U_i})$ with $\mathcal{O}'|_{U_i}\cong\mathcal{O}_{U_i}\oplus \mathcal{F}|_{U_i}$, and we can glue them all via $\xi=(\xi_{ij})$ to give an extension of $X$ by $\mathcal{F}$.

Then Hartshorne suggests that we perform the following computation: let $X=P_k^2$ and consider the sheaf of differential 2-forms $\omega_X$; then $H^1(X,\Omega_X^1)\cong H^1(X,\omega_X\otimes \mathcal{T}_X)$, and a nontrivial extension $X'$ of $X$ by $\omega_X$ is given by the cocycle $\xi \in H^1(X,\Omega_X^1)$ given over $U_{ij}=U_i\cap U_j$ (where the $\{U_i\}$ are the standard open subsets covering $P_k^2$) by $\xi_{ij}=\frac{x_j}{x_i}d\left(\frac{x_i}{x_j}\right)$. This is our target proper non-projective surface, and in order to see that it is indeed non-projective we shall prove that it has no ample invertible sheaves (in fact no nontrivial invertible sheaf at all, namely $Pic X'=0$). We have a short exact sequence $0\rightarrow \omega_X \rightarrow \mathcal{O}_{X'}^{\ast} \rightarrow \mathcal{O}_X^{\ast} \rightarrow 0$ inducing a long exact cohomology sequence ```$\cdots \rightarrow \underbrace{H^1(X,\omega_X)}_0 \rightarrow \underbrace{H^1(X',\mathcal{O}_{X'}^{\ast})}_{Pic(X')} \rightarrow \underbrace{H^1(X,\mathcal{O}_X^{\ast})}_{Pic(X)} \stackrel{\delta}{\longrightarrow} \underbrace{H^2(X,\omega_X)}_k \rightarrow \cdots$``` and in order to see that $Pic X'=0$ it suffices to prove that $\delta$ is injective. Since $Pic X\cong \mathbb{Z}$, any invertible sheaf is of the form $\mathcal{L}=\mathcal{O}_X(d)\cong \mathcal{O}_X(1)^{\otimes d}$, and it suffices to see that $\delta(\mathcal{O}_X(1))\neq 0$.
I am confused as to how to carry out this computation, since I guess I still do not understand very well the correspondence between infinitesimal extensions and the cohomology group. What I intend to do is to compute $\delta$ explicitly in the standard way, namely via the diagram ```$\begin{array}{ccccccccc} 0 & \rightarrow & \check{C}^1(U,\omega) & \rightarrow & \check{C}^1(U,\mathcal{O}_{X'}^{\ast}) & \rightarrow & \check{C}^1(U,\mathcal{O}_X^{\ast}) & \rightarrow & 0 \\ && \downarrow && \downarrow && \downarrow && \\ 0 & \rightarrow & \check{C}^2(U,\omega) & \rightarrow & \check{C}^2(U,\mathcal{O}_{X'}^{\ast}) & \rightarrow & \check{C}^2(U,\mathcal{O}_X^{\ast}) & \rightarrow & 0 \end{array}$``` The cocycle corresponding to $\mathcal{O}_X(1)$ in $\check{C}^1(U,\mathcal{O}_X^{\ast})$ is $\left(\frac{x_1}{x_0},\frac{x_2}{x_1},\frac{x_0}{x_2}\right)$. How does it map down to $\check{C}^2(U,\omega)$? Thanks in advance for any insight. -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 76, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9360969662666321, "perplexity_flag": "head"}