Columns: url (string, length 17–172), text (string, length 44–1.14M), metadata (string, length 820–832)
http://mathoverflow.net/questions/55867?sort=oldest
## The canonical divisor of the Hilbert scheme $Hilb^n P^2$?

Hey everyone, I was wondering if anyone knows what the canonical divisor of the Hilbert scheme $Hilb^n P^2$ is; here $Hilb^n P^2$ is the Hilbert scheme of degree-$n$ zero-dimensional subschemes of the projective plane $P^2$. Any references? Many thanks in advance.

## 2 Answers

There is an easy formula for the canonical divisor of the Hilbert scheme of $n$ points on any smooth projective surface $X$. Let's first fix some notation. Denote by $X^{n}$ the $n$-fold product with projections $pr_i\colon X^{n}\to X$. We can consider line bundles of the form $$L^{[n]}=pr_1^* L \otimes\cdots \otimes pr_n^* L.$$ It is not hard to show that this descends to a line bundle on the symmetric product, and hence induces a line bundle on the Hilbert scheme $X^{[n]}$; this defines a homomorphism $Pic(X)\to Pic(X^{[n]})$. In this notation, the canonical line bundle is given by $$\omega_{X^{[n]}}=\omega_{X}^{[n]}.$$ I think this is in Göttsche's book on Hilbert schemes of points.

- Thank you. It was very helpful. – Turkelli Feb 18 2011 at 18:37

$n=1$ already tells you that the anticanonical divisor is going to be nicer than the canonical one, in that it's effective. There, the divisor given by the three coordinate lines is anticanonical. Next, look at the Chow variety of $n$ points in $P^2$, with an anticanonical divisor given by "some point is on some coordinate line". Then use the fact that the morphism from Hilb to Chow is crepant to say that we can pull the anticanonical divisor back. So: the divisor given by "some point is on some coordinate line" is again anticanonical up on the Hilbert scheme.

- Oh, I see, thank you. In fact, I care about the anticanonical divisor (rather than the canonical one) as I am interested in Batyrev-Manin conjectures for Hilbert schemes. I wonder if the effective cone of the Hilbert schemes is known? – Turkelli Feb 19 2011 at 21:32
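A quick sanity check of the formula, added for illustration (it is not part of the original thread): for $n=1$ the Hilbert scheme is the surface itself, and the two answers agree.

```latex
% For n = 1 we have Hilb^1(P^2) = P^2, so the formula above gives
%   \omega_{(P^2)^{[1]}} = \omega_{P^2} = O_{P^2}(-3).
% The anticanonical class is O_{P^2}(3), which is effective: the union of the
% three coordinate lines V(x) + V(y) + V(z) is a degree-3 divisor, i.e. an
% anticanonical divisor, consistent with the second answer.
\omega_{(\mathbb{P}^2)^{[1]}} = \omega_{\mathbb{P}^2} = \mathcal{O}_{\mathbb{P}^2}(-3),
\qquad
-K_{\mathbb{P}^2} \sim V(x) + V(y) + V(z).
```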
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 14, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9288250803947449, "perplexity_flag": "head"}
http://mathoverflow.net/questions/29652/what-are-the-most-elegant-proofs-that-you-have-learned-from-mo
## What are the most elegant proofs that you have learned from MO?

One of the things that MO does best is provide clear, concise answers to specific mathematical questions. I have picked up ideas from areas of mathematics I normally wouldn't touch, simply because someone posted an eye-catching answer on MO. In particular, there have been some really elegant and surprising proofs. For example, this one by villemoes, when the questioner asked for a simple proof that there are uncountably many permutations of $\mathbb{N}$:

The fact that any conditionally convergent series [and that such exists] can be rearranged to converge to any given real number x proves that there is an injection P from the reals to the permutations of $\mathbb{N}$.

Or this one by André Henriques, when the questioner asked whether the Cantor set is the zero set of a continuous function:

The continuous function is very easy to construct: it's the distance to the closed set.

There must be many such proofs that most of us have missed, so I'd like to see a list, an MO Greatest Hits if you will. Please include a link to the answer, so that the author gets credit (and maybe a few more rep points), but also copy the proof, as it would be nice to see the proofs without having to move away from the page. (If anyone knows the best way to copy text with preservation of LaTeX, please advise.)

I realize that one person's surprise may be another person's old hat, so that's why I'm asking for proofs that you learned from MO. You don't have to guarantee that the proof is original.

- Shortly: mathoverflow.net/questions/26520/…. – Wadim Zudilin Jun 27 2010 at 1:32
- Also shortly: How to capture a sphere in a knot? mathoverflow.net/questions/8091/… – Gjergji Zaimi Jun 27 2010 at 2:10
- I've tried in vain to answer this question, and I've come to realize that for me, learning slick proofs has not been the most attractive or memorable part of the MO experience (though I must have learned a few here). – Thierry Zell Apr 16 2011 at 23:00

## 6 Answers

In this fantastic answer, Ashutosh proved that the Axiom of Choice is equivalent to the assertion that every set admits a group structure.

In ZF, the following are equivalent: (a) For every nonempty set there is a binary operation making it a group. (b) Axiom of choice.

Non-trivial direction [(a) -> (b)]: The trick is the Hartogs construction, which gives for every set $X$ an ordinal $\aleph(X)$ such that there is no injection from $\aleph(X)$ into $X$. Assume for simplicity that $X$ has no ordinals. Let $o$ be a group operation on $X \cup \aleph(X)$. Now for any $x \in X$ there must be an $\alpha \in \aleph(X)$ such that $x o \alpha \in \aleph(X)$, since otherwise we get an injection of $\aleph(X)$ into $X$. Using $o$, therefore, one may inject $X$ into $(\aleph(X))^{2}$ by sending $x \in X$ to the $<$-least pair $(\alpha, \beta)$ in $(\aleph(X))^{2}$ such that $x o \alpha = \beta$. Here, $<$ is the lexicographic well-ordering on the product $(\aleph(X))^{2}$. This induces a well-ordering on $X$. (The argument is due originally to Hajnal and Kertész, 1973.)
Unfortunately I can't find the link, but someone mentioned this proof that there are irrational numbers $a$ and $b$ such that $a^b$ is rational: if $\sqrt{2}^\sqrt{2}$ is rational then we are done; if it is irrational then $2 = (\sqrt{2}^\sqrt{2})^\sqrt{2}$ is an irrational raised to an irrational.

- @Wadim: Sorry but I don't understand your point. Obviously the statement is not deep or a difficult thing to prove using other means; I just found that proof to be very elegant. Different strokes for different folks, I guess. – Eric O. Korman Jun 27 2010 at 2:49
- @Wadim: the point is that without Gelfond or Gelfond-Schneider etc etc it is actually a neat little puzzle to find irrational a,b with a^b rational. Your continuity argument doesn't work without more effort, because you have to check that the map $x\mapsto x^{\sqrt{2}}$ does not have the property that the pre-image of every rational is rational. Of course it doesn't---far from it---but the issue is finding a proof without invoking a transcendence theory sledgehammer. – Kevin Buzzard Jun 27 2010 at 8:16
- Although this proof is pretty, I find it much more interesting as a demonstration of the meaning of the term "non-constructive proof", and I think this is the context in which it is usually presented. – Dan Piponi Jun 27 2010 at 17:23
- Here's a simple, constructive proof: $\sqrt{2}^{\log_{\sqrt{2}} 3} = 3$ and $\log_{\sqrt{2}} 3$ is irrational since otherwise $2^p = 3^q$ for some positive integers $p,q$. It's not as pretty as the $\sqrt{2}^{\sqrt{2}}$ proof, but it shows that no "transcendence theory sledgehammer" is needed to provide an explicit example. – Mark Schwarzmann Apr 15 2011 at 15:59
- sigfpe is right when he tacitly suggests that this argument is well-known and classical (which is why I wasn't much wowed by it myself). Come to think of it, sigfpe got everything about it right :-) – Todd Trimble Apr 15 2011 at 16:40

I found several very nice proofs which I enjoyed:

1. Brilliant proof of the fundamental theorem of algebra by Gian Maria Dall'Ara: http://mathoverflow.net/questions/10535/ways-to-prove-the-fundamental-theorem-of-algebra/10684#10684
2. Some proofs of quadratic reciprocity: http://mathoverflow.net/questions/1420/whats-the-best-proof-of-quadratic-reciprocity (I especially liked this one: http://mathoverflow.net/questions/1420/whats-the-best-proof-of-quadratic-reciprocity/1431#1431)
3. Proof that $\mathbb{R}^{2n+1}$ does NOT have a square root (quite elementary and beautiful): http://mathoverflow.net/questions/60375/is-r3-the-square-of-some-topological-space/60389#60389
4. Nullstellensatz using model theory: http://mathoverflow.net/questions/9667/what-are-some-results-in-mathematics-that-have-snappy-proofs-using-model-theory/9693#9693
5. If in a ring R every countably generated ideal is principal, then R is a PID: http://mathoverflow.net/questions/8042/do-there-exist-non-pids-in-which-every-countably-generated-ideal-is-principal/8067#8067
6. An infinite-dimensional vector space has smaller dimension than its dual: http://mathoverflow.net/questions/13322/slick-proof-a-vector-space-has-the-same-dimension-as-its-dual-if-and-only-if-it/13372#13372
7. Topological proof that Z is a Bezout domain: http://mathoverflow.net/questions/42512/awfully-sophisticated-proof-for-simple-facts/64039#64039

- "Nullstehlensatz" would be the zero stealing theorem. Instead, it's the zero point theorem, i.e. the "Nullstellensatz". – Alex Bartel Apr 16 2011 at 9:41
- Alex, now I want to learn enough analysis that I can state and prove a Nullstehlensatz. :-) – L Spice Apr 16 2011 at 14:14
- The Nullstehlensatz should be a security theorem about cryptographic protocols for digital cash. If you break such a protocol, that's a Positivstehlensatz. – Henry Cohn Apr 16 2011 at 15:46
- I imagined it as some sort of complement to pole-pushing. – L Spice Apr 16 2011 at 18:16

I asked a question a while ago about proving that the real line is connected: http://mathoverflow.net/questions/26537/connectedness-and-the-real-line Omar Antolín-Camarena's answer and comment prove that the closed interval $[0,1]$ is connected iff it is compact.

- That is a very nice proof! – David Roberts Jul 24 2011 at 23:18

My candidate is Jim Belk's one-line answer to the question about the existence of functions from $\Bbb{R}$ to $\Bbb{R}$ whose range is $\Bbb{R}$ on every open interval. I do wonder, however, if Jim Belk's solution was known to the founders of classical set theory (Cantor, Bernstein, Hausdorff, ...).

- @Ali: I have no idea who first came up with that argument, but my first guess would be Sierpinski. Cantor, Bernstein, and Hausdorff undoubtedly knew the result, but they probably used a "construction" by transfinite induction, like the standard construction of a Bernstein set. – Andreas Blass Jul 25 2011 at 8:13
- @Andreas: I agree that the existence of such a function must have been known to the "founding fathers" of set theory; it is amusing that even though the one-line proof uses AC, there are other proofs that are implementable in $ZF$. Indeed $ZF$ can produce such a function that is equal to $0$ almost everywhere. – Ali Enayat Jul 25 2011 at 13:36

My favorite is this proof by Bjorn Poonen that every finite Galois extension of $\mathbb{Q}$ has infinitely many completely split primes. Although Bjorn's proof does not give the density of such primes, as the proof using the Chebotarev Density Theorem does, it is refreshing to see that such an elementary proof exists.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 43, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9246616363525391, "perplexity_flag": "middle"}
http://all-science-fair-projects.com/science_fair_projects_encyclopedia/Sound
# Sound

"Sound is an alternation in pressure, particle displacement, or particle velocity propagated in an elastic material" (Olson 1957), or a series of mechanical compressions and rarefactions (longitudinal waves) that propagate through a medium that is at least slightly compressible (solid, liquid or gas, but not vacuum). In sound waves the parts of matter (molecules or groups of molecules) move in the direction of propagation of the disturbance (as opposed to transverse waves). The cause of sound waves is called the source of the waves, e.g. a violin string vibrating upon being bowed or plucked. A sound wave is usually represented graphically by a wavy, horizontal line; the upper part of the wave (the crest) indicates a compression and the lower part (the trough) indicates a rarefaction.

## Attributes of sound

The characteristics of sound are frequency, wavelength, amplitude and velocity.

### Frequency and wavelength

The frequency is the number of oscillations of a particular point in the course of the sound wave per second. One oscillatory cycle per second corresponds to 1 Hz (1/s). The wavelength is the distance between two successive crests, and is the path that the wave travels in the time of one oscillatory cycle. In the case of longitudinal harmonic sound waves we can describe the wave with the equation $$y(x,t) = y_0 \sin \omega\left(t-\frac{x}{c}\right)$$ where $y(x,t)$ is the displacement of the particles from their rest position in the direction of propagation, $y_0$ is the amplitude of that displacement, $x$ is the distance from the source of the waves, $c$ is the speed of the waves, $\omega$ is the angular frequency of the source, $x/c$ is the time the wave needs to travel the distance $x$, and $t$ is time.

### Amplitude

The amplitude is the magnitude of the sound pressure change within the wave. It is the maximal displacement of the particles of matter, attained in the compressions, where the particles of matter move towards each other and the pressure increases the most, and in the rarefactions, where the pressure decreases the most. See also particle displacement and particle velocity. While the pressure can be measured in pascals, the amplitude is more often referred to as sound pressure level and measured in decibels, or dBSPL, sometimes written as dBspl or dB(SPL). When the measurement is adjusted for how the human ear perceives loudness as a function of frequency, it is called dBA or A-weighting. See decibels for a more thorough discussion.

### Velocity

The speed of this propagation depends on the type, temperature and pressure of the medium. Under normal conditions, however, because air is nearly a perfect gas, the speed does not depend on the air pressure. In dry air at 20 °C (68 °F) the speed of sound is approximately 343 m/s. A real-world estimate is roughly 1 meter per 3 milliseconds.

## Types of sounds

Noises are irregular and disordered vibrations; they include all possible frequencies and their waveform does not repeat in time, so noise is an aperiodic series of waves. Sounds that are sine waves with fixed frequency and amplitude are perceived as pure tones.
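As a concrete illustration of the relations above (added here; it is not part of the original article), the following Python sketch evaluates the frequency-wavelength relation $\lambda = c/f$ and the harmonic displacement formula; the numbers used are just example values.

```python
import math

c = 343.0          # speed of sound in dry air at 20 °C, m/s
f = 440.0          # example frequency: the A above middle C, Hz

wavelength = c / f            # lambda = c / f
omega = 2 * math.pi * f       # angular frequency of the source

def displacement(x, t, y0=1e-6):
    """Particle displacement y(x, t) = y0 * sin(omega * (t - x / c))."""
    return y0 * math.sin(omega * (t - x / c))

print(f"wavelength of a 440 Hz tone: {wavelength:.3f} m")       # about 0.780 m
print(f"displacement 1 m from the source at t = 1 ms: "
      f"{displacement(1.0, 1e-3):.3e} m")
```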
While sound waves are usually visualised as sine waves, sound waves can have arbitrary shapes and frequency content, limited only by the apparatus that generates them and the medium through which they travel. In fact, most sound waves consist of multiple overtones or harmonics, and any sound can be thought of as being composed of sine waves (see additive synthesis). Waveforms commonly used to approximate harmonic sounds in nature include sawtooth waves, square waves and triangle waves. While a sound may still be referred to as being of a single frequency (for example, a piano striking the A above middle C is said to be playing a note at 440 Hz), the sound perceived by a listener will be colored by all of the sound wave's frequency components and their relative amplitudes (see timbre). For convenience in this article, however, it is best to think of sound waves as sine waves.

## Perception of sound

The frequency range of sound audible to humans is approximately between 20 and 20,000 Hz. This range varies by individual and generally shrinks with age. It is also an uneven curve: sounds near 3,500 Hz are often perceived as louder than a sound with the same amplitude at a much lower or higher frequency. Above and below this range are ultrasound and infrasound, respectively.

The amplitude range of sound for humans has a lower limit of 0 dBSPL, called the threshold of hearing. While there is technically no upper limit, sounds begin to damage the ears at 85 dBSPL, and sounds above approximately 130 dBSPL (called the threshold of pain) cause pain. Again, this range varies by individual and changes with age.

The perception of sound is the sense of hearing. In humans and many animals this is accomplished by the ears, but loud sounds and low-frequency sounds can be perceived by other parts of the body through the sense of touch. Sounds are used in several ways, most notably for communication through speech or music. Sound perception can also be used to acquire information about the surrounding environment, such as spatial characteristics and the presence of other animals or objects. For example, bats use a form of echolocation, ships and submarines use sonar, and humans can determine spatial information by the way in which they perceive sounds.

The study of sound is called acoustics and is performed by acousticians. A notable subset is psychoacoustics, which combines acoustics and psychology to study how people react to sounds.

## Reference

• Olson (1957) cited in Roads, Curtis (2001). Microsound. MIT. ISBN 0262182157.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 1, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9283384680747986, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/43028/symmetries-of-spacetime-and-objects-over-it
# Symmetries of spacetime and objects over it

I guess that, following the usual mathematical presentation, we first think of spacetime as a set, reason about elements of its topology, and then equip it with a metric. Apparently it is this Riemannian metric which people consider to be the object that induces the minimal symmetry requirements of spacetime.

1) Regarding the relation between Riemannian geometry and the Hamiltonian formalism of classical mechanics: does a setting for Riemannian geometry always imply that it's possible to cook up a symplectic structure on the cotangent bundle?

2) Are there some more natural structures which physicists might be tempted to put on spacetime, which might then also be restrictive regarding the (spacetime) symmetry structures? Is constructing quantum group symmetries (of non-commutative coordinate algebras, à la Connes) just this?

3) I'm given a solution to a differential equation which can be thought of as resulting from a Lagrangian with a set of $n$ symmetries (e.g. $n=10$ for some spacetime models). Can this solution also be the result of a Lagrangian with fewer symmetries? Here, I'm basically asking to what extent I can reconstruct the symmetries from a solution or from specific sets of solutions. It's kind of the inverse problem of the question "are there hidden/broken symmetries?".

## 1 Answer

The following description will be from the particle point of view, i.e., the spacetime manifold will refer to the configuration manifold on which a particle moves.

Remark: My wrong answer to the first question was corrected following the comment by Qmechanic.

1) There is no need for a metric to define a symplectic structure on a cotangent bundle. A cotangent bundle has a canonical symplectic structure independent of any metric: $\omega = dx^i\wedge dp_i$. However, given a metric on the configuration manifold, the cotangent bundle of a wide class of manifolds (for example compact manifolds) can be given a Kähler structure. No explicit expression is known in the general case, but there are implicit expressions for special cases such as Lie groups; please see the example of $T^{*}SU(2)$ in Hall's lectures. The advantage of having a Kähler structure on a cotangent bundle is that it enables quantization in terms of creation and annihilation operators, as in the case of a flat space.

2) A natural structure that one can put on a spacetime manifold is a principal bundle. In this case, given a metric on the base manifold and a connection on the principal bundle, a Poisson structure can be defined on this principal bundle. In this case, even with a vanishing Hamiltonian, there will be nontrivial dynamics determined by the constraints. The classical equations of motion are the Wong equations of a colored particle in a Yang-Mills field. Please see the following work by A. Duviryac for a clear exposition. Regarding the second part of the question, the quantization of this system leads to a quantum representation of the color group. The operator algebra of this representation has the structure of a noncommutative manifold. The best-known example of this type of algebra is in the case of $SU(2)$, where this manifold is a fuzzy sphere.

3) Consider a particle moving uniformly on a great circle of a two-dimensional sphere. This is a solution for a free particle on a circle, whose symmetry is $U(1)$, and also a solution for a free particle on a sphere, whose symmetry is $SO(3)$.
- Comment to the answer (v1): If $p_j$ is supposed to transform as a co-vector under coordinate transformations $x\to x^{\prime}$, then the rhs of the first eq. is not invariant under change of coordinates. – Qmechanic♦ Nov 6 '12 at 12:18
- @Qmechanic Thank you – David Bar Moshe Nov 6 '12 at 13:43
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 9, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.916456401348114, "perplexity_flag": "head"}
http://mathoverflow.net/questions/115636/properties-of-permutations-with-unknown-pattern-avoidance-descriptions/116699
## Background

Many properties of permutations can be stated in terms of classical patterns. For example:

• a permutation is stack-sortable if and only if it avoids 231 (Knuth 1975)
• a permutation corresponds to a smooth Schubert variety if and only if it avoids 1324 and 2143 (Lakshmibai and Sandhya 1990)

For other properties we need a stronger notion of a pattern, e.g., the mesh patterns introduced by Brändén and Claesson (2011). For example:

• a permutation corresponds to a factorial Schubert variety if and only if it avoids 1324 and (2143,{(2,2)}) (these are the so-called forest-like permutations, Bousquet-Mélou and Butler 2007)
• a permutation is sortable in two passes through a stack if and only if it avoids 2341 and (3241,{(1,4)}) (these are the so-called West-2-stack-sortable permutations, West 1990)

There are also properties which have not been translated into patterns (to my knowledge):

• meander permutations (http://theory.cs.uvic.ca/inf/perm/StampFolding.html)
• the involutions in the symmetric group
• ...

## The Question

What permutation properties do you know that have not been described by the avoidance of patterns?

## Motivation

I recently wrote an algorithm that, given a finite set of permutations, outputs the mesh patterns that the permutations avoid. This algorithm is called BiSC (derived from the last names of three people that inspired me to write the algorithm) and can conjecture the descriptions given in the first two lists above. It is available at http://staff.ru.is/henningu/programs/bisc/bisc.html and described in the paper http://arxiv.org/abs/1211.7110.

This is a community wiki question since there is obviously not a single best answer.

- What does it mean for a permutation to avoid a pattern? – Alexander Chervov Dec 18 at 19:50

## 4 Answers

Here's one idea. For every permutation $\pi$ of length $n$, there are $n^2+1$ permutations of length $n+1$ containing $\pi$. However, once you look at permutations of length $n+2$, this quantity depends on $\pi$. Ray and West gave a proof that for $\pi$ of length $n$ the number of permutations of length $n+2$ containing $\pi$ is $$(n^4+2n^3+n^2+4n+4-2j)/2,$$ where $0\le j\le k-1$ depends on $\pi$. Perhaps you could give a description of this statistic in terms of patterns of $\pi$? References and a bit more discussion can be found in this paper: http://www.math.ufl.edu/~vatter/publications/pp2007-problems/

Derangements. More generally, properties that allow superexponentially many permutations.

I hope I understood the question correctly. I have a feeling that questions on permutations of an algebraic, as opposed to combinatorial, nature could be candidates. Lakshmibai and Sandhya's theorem is a geometric question, and it is a significant theorem because it reduces geometry to combinatorics. With this understanding of your question let me attempt to give four examples:

(1) A permutation being of a specific order $m$. Suppose we attempt pattern avoidance like: for any $k$ relatively prime to $m$ it should not have a length-$k$ cycle. A permutation of order, for example, $m^2$ will also satisfy that criterion and will be accepted wrongly.

(2) A permutation being even. (An avoidance criterion may not work, because the presence of an even number of cycles of any particular length, as opposed to an odd number of them, will be fine.)

(3) Some irreducible character vanishing on it. This is a conjugacy class question and can be argued similarly.

(4) Commuting with another specific permutation.

A source of interesting examples may come from infinite groups with finite presentation, possibly extending your methods to words instead of just permutations (i.e. allowing repetitions). Given a set of generators $\{x,y,\dots,z\}$ of the group $G$, which words in the alphabet $\{x,\ x^{-1},y,\ y^{-1},\dots z,\ z^{-1}\}$ correspond to minimal-length presentations of elements of $G$? In this generality, of course, the problem is intractable, but in principle one optimal answer could be given (and actually is, in some concrete cases) precisely in terms of avoidance of a list of patterns (starting, of course, from avoiding $xx^{-1}$). Clearly, an algorithm such as yours may prove very useful for formulating conjectures about patterns.
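For readers unfamiliar with the terminology used throughout this question, here is a small illustration (added here, not part of the thread): a permutation $\sigma$ contains a classical pattern $\pi$ if some subsequence of $\sigma$ is in the same relative order as $\pi$, and avoids $\pi$ otherwise. A brute-force check in Python, adequate for small examples, reproduces Knuth's stack-sortability criterion from the Background section.

```python
from itertools import combinations

def standardize(seq):
    """Replace each entry by its rank, e.g. (5, 9, 2) -> (1, 2, 0)."""
    ranks = sorted(seq)
    return tuple(ranks.index(x) for x in seq)

def contains(sigma, pattern):
    """True if some subsequence of sigma is order-isomorphic to pattern."""
    pat = standardize(pattern)
    return any(standardize(sub) == pat
               for sub in combinations(sigma, len(pattern)))

def avoids(sigma, pattern):
    return not contains(sigma, pattern)

# Knuth 1975: a permutation is stack-sortable iff it avoids 231.
print(avoids((3, 1, 2, 4), (2, 3, 1)))   # True  -> stack-sortable
print(avoids((2, 3, 1), (2, 3, 1)))      # False -> not stack-sortable
```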
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 24, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9013331532478333, "perplexity_flag": "head"}
http://mathhelpforum.com/calculus/109140-differentiability.html
# Thread:

1. ## Differentiability

I don't understand this question. Could I get some help with it? (The problem is attached.) Thanks

2. It asks you to prove that the derivative of f(x,y) at (0,0) in the direction of a generic vector v (the directional derivative) exists. You can do this thinking of v as $(a,b)$ for example, or $(\cos\theta, \sin\theta)$. Then you have to prove that f(x,y) is not differentiable at that point. Do you understand why this would be "surprising", as the exercise says, and what two important concepts it is referring to?
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9655018448829651, "perplexity_flag": "middle"}
http://mathhelpforum.com/advanced-statistics/129824-unbiased-estimator-theta.html
# Thread:

1. ## unbiased estimator of theta

I'm trying to find an estimator for $\theta$. I have the following facts: $v_{ij}$ is a matrix that was obtained by multiplying a standard normal matrix and an unspecified data matrix, and $P(sign(v_{1j}) = sign(v_{2j})) = 1 - \frac{\theta}{\pi}$.

My problem is: how can I estimate $P(sign(v_{1j}) = sign(v_{2j}))$? Is this probability dependent on the distribution of $v_{ij}$? Or is the desired result simply $P(sign(v_1)=sign(v_2))= 1/3$, since the nine possible sign pairs are (+,+), (+,-), (+,0), (-,+), (-,-), (-,0), (0,+), (0,-), (0,0), and 3 of the 9 satisfy the condition?

Thanks
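The following sketch is not from the thread, and it assumes the standard random-projection reading of the setup (each row of the standard normal matrix is an independent Gaussian vector, so $v_{1j}$ and $v_{2j}$ are Gaussian projections of two fixed data columns). Under that assumption the sign-agreement probability depends only on the angle $\theta$ between the two data vectors and equals $1 - \theta/\pi$, which can be checked by Monte Carlo; the vectors below are made-up examples.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two fixed (hypothetical) data vectors; theta is the angle between them.
x1 = np.array([1.0, 0.0, 0.0])
x2 = np.array([1.0, 1.0, 0.0]) / np.sqrt(2.0)
theta = np.arccos(np.clip(x1 @ x2, -1.0, 1.0))   # pi/4 here

n_trials = 200_000
R = rng.standard_normal((n_trials, 3))   # rows: independent standard normal vectors
v1 = R @ x1                              # projections of x1, one per trial
v2 = R @ x2                              # projections of x2, one per trial

empirical = np.mean(np.sign(v1) == np.sign(v2))
print(f"empirical  P(sign agree) = {empirical:.4f}")
print(f"predicted  1 - theta/pi  = {1 - theta/np.pi:.4f}")   # about 0.75
```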
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 6, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8646955490112305, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/20717/how-to-find-solutions-of-linear-diophantine-ax-by-c/20727
# How to find solutions of linear Diophantine ax + by = c?

I want to find a set of integer solutions of the Diophantine equation $ax + by = c$, where apparently $gcd(a,b)|c$. Then what formula can I use to find $x$ and $y$?

I tried to play around with it: $x = (c - by)/a$, hence $a|(c - by)$. $a$, $c$ and $b$ are known. So to obtain an integer solution for $x$, we need $c - by = ak$, and I got lost from here, because $y = (c - ak)/b$. I kept repeating this routine and could not find a way out of it. Any hint?

Thanks, Chan

- Your condition is flipped; it's $\gcd(a,b)|c$, not the other way around. – Arturo Magidin Feb 6 '11 at 21:04
- @Arturo Magidin: Thanks, edited. – Chan Feb 6 '11 at 21:37

## 3 Answers

The Diophantine equation $ax+by = c$ has solutions if and only if $\gcd(a,b)|c$. If so, it has infinitely many solutions, and any one solution can be used to generate all the other ones.

To see this, note that the greatest common divisor of $a$ and $b$ divides both $ax$ and $by$, hence divides $c$ if there is a solution. This gives the necessity of the condition (which you have backwards). (fixed in edits)

The converse is actually a constructive proof, which you can find in pretty much every elementary number theory course or book, and which is essentially the same as yunone's answer above (but without dividing through first).

From the Extended Euclidean Algorithm, given any integers $a$ and $b$ you can find integers $s$ and $t$ such that $as+bt = \gcd(a,b)$; the numbers $s$ and $t$ are not unique, but you only need one pair. Once you find $s$ and $t$, since we are assuming that $\gcd(a,b)$ divides $c$, there exists an integer $k$ such that $\gcd(a,b)k = c$. Multiplying $as+bt=\gcd(a,b)$ through by $k$ you get $$a(sk) + b(tk) = \gcd(a,b)k = c.$$ So this gives one solution, with $x=sk$ and $y=tk$.

Now suppose that $ax_1 + by_1 = c$ is a solution, and $ax+by=c$ is some other solution. Taking the difference between the two, we get $$a(x_1-x) + b(y_1-y) = 0.$$ Therefore, $a(x_1-x) = b(y-y_1)$. That means that $a$ divides $b(y-y_1)$, and therefore $\frac{a}{\gcd(a,b)}$ divides $y-y_1$. Therefore, $y = y_1 + r\frac{a}{\gcd(a,b)}$ for some integer $r$. Substituting into the equation $a(x_1-x) = b(y-y_1)$ gives $$a(x_1 - x) = rb\left(\frac{a}{\gcd(a,b)}\right)$$ which yields $$\gcd(a,b)a(x_1-x) = rba$$ or $x = x_1 - r\frac{b}{\gcd(a,b)}$. Thus, if $ax_1+by_1 = c$ is any solution, then all solutions are of the form $$x = x_1 - r\frac{b}{\gcd(a,b)},\qquad y = y_1 + r\frac{a}{\gcd(a,b)},$$ exactly as yunone said.

To give you an example of this in action, suppose we want to find all integer solutions to $$258x + 147y = 369.$$ First, we use the Euclidean Algorithm to find $\gcd(147,258)$; the parenthetical equation on the far right is how we will use this equality after we are done with the computation. \begin{align*} 258 &= 147(1) + 111 &\quad&\mbox{(equivalently, $111=258 - 147$)}\\ 147 &= 111(1) + 36&&\mbox{(equivalently, $36 = 147 - 111$)}\\ 111 &= 36(3) + 3&&\mbox{(equivalently, $3 = 111-3(36)$)}\\ 36 &= 3(12). \end{align*} So $\gcd(147,258)=3$. Since $3|369$, the equation has integral solutions.

Then we find a way of writing $3$ as a linear combination of $147$ and $258$, using the Euclidean algorithm computation above and the equalities on the far right. We have: \begin{align*} 3 &= 111 - 3(36)\\ &= 111 - 3(147 - 111) = 4(111) - 3(147)\\ &= 4(258 - 147) - 3(147)\\ &= 4(258) -7(147). \end{align*} Then, we take $258(4) + 147(-7)=3$, and multiply through by $123$; why $123$? Because $3\times 123 = 369$. We get: $$258(492) + 147(-861) = 369.$$ So one solution is $x=492$ and $y=-861$. All other solutions will have the form \begin{align*} x &= 492 - \frac{147r}{3} = 492 - 49r,\\ y &= -861 + \frac{258r}{3} =86r - 861, &\qquad&r\in\mathbb{Z}. \end{align*} You can reduce those constants by making a simple change of variable. For example, if we let $r=t+10$, then \begin{align*} x &= 492 - 49(t+10) = 2 - 49t,\\ y &= 86(t+10) - 861 = 86t - 1,&\qquad&t\in\mathbb{Z}. \end{align*}

- +1, puts my answer to shame! – yunone Feb 6 '11 at 20:56
- All I have to say is AMAZING ANSWER ^_^! – Chan Feb 6 '11 at 21:40
- I think there was a typo on the line: $x = 592 - \frac{147r}{3} = 492 - 49r$. I believe it should be $492$ on the left hand side. – Chan Feb 28 '11 at 1:02
- @Chan: Yes, thank you. – Arturo Magidin Feb 28 '11 at 1:05
- I don't mean to bug you all these months later, but I believe there is an extraneous $t$ in the equation for $y$ right before the gray page break line. – yunone Jul 22 '11 at 2:15

As others have mentioned, one may employ the extended Euclidean algorithm. It deserves to be better known that this is most easily performed via row-reduction on an augmented matrix - analogous to methods used in linear algebra. See this excerpt from one of my old sci.math posts:

````
For example, to solve  mx + ny = gcd(x,y)  one begins with two rows
[m 1 0], [n 0 1], representing the two equations  m = 1m + 0n,  n = 0m + 1n.
Then one executes the Euclidean algorithm on the numbers in the first
column, doing the same operations in parallel on the other columns.
Here is an example:  d = x(80) + y(62)  proceeds as:

                      in equation form   | in row form
                    ---------------------+------------
                     80 =   1(80) + 0(62) | 80   1   0
                     62 =   0(80) + 1(62) | 62   0   1
     row1 - row2 ->  18 =   1(80) - 1(62) | 18   1  -1
   row2 - 3 row3 ->   8 =  -3(80) + 4(62) |  8  -3   4
   row3 - 2 row4 ->   2 =   7(80) - 9(62) |  2   7  -9
   row4 - 4 row5 ->   0 = -31(80) +40(62) |  0 -31  40

Above the row operations are those resulting from applying the Euclidean
algorithm to the numbers in the first column,

              row1 row2 row3 row4 row5
    namely:    80,  62,  18,   8,   2   = Euclidean remainder sequence
                          |    |
    for example  62 - 3(18) = 8,  the 2nd step in the Euclidean algorithm,
    becomes:  row2 - 3 row3 = row4  on the identity-augmented matrix.

In effect we have row-reduced the first two rows to the last two.
The matrix effecting the reduction is in the bottom right corner.
It starts as the identity, and is multiplied by each elementary
row operation matrix, hence it accumulates the product of all
the row operations, namely:

    [  7  -9] [ 80  1  0]  =  [2   7  -9]
    [-31  40] [ 62  0  1]     [0 -31  40]

The 1st row is the particular  solution:  2 =   7(80) -  9(62)
The 2nd row is the homogeneous solution:  0 = -31(80) + 40(62),
so the general solution is any linear combination of the two:

    n row1 + m row2  ->  2n = (7n - 31m) 80 + (40m - 9n) 62

The same row/column reduction techniques tackle arbitrary systems of
linear Diophantine equations. Such techniques generalize easily to
similar coefficient rings possessing a Euclidean algorithm, e.g.
polynomial rings F[x] over a field, Gaussian integers Z[i]. There are
many analogous interesting methods, e.g. search on keywords: Hermite /
Smith normal form, invariant factors, lattice basis reduction, continued
fractions, Farey fractions / mediants, Stern-Brocot tree / diatomic sequence.
````

- Thanks, I really like your Linear Algebra approach. – Chan Feb 6 '11 at 21:54
- @Chan: It irks me that most textbooks in elementary number theory present more obfuscated approaches. If you go on to study algebra you will learn more about the underlying theory when you study Hermite Smith normal forms and other module-theoretic generalizations of linear algebra results. – Gone Feb 6 '11 at 22:06
- This is actually discussed in Niven, Zuckerman, Montgomery. Just so you have a reference (pages 217-218 in the 5th edition). – Arturo Magidin Feb 6 '11 at 22:07
- @Arturo. Thanks for the reference. I'm happy to see that it finally made it into an edition of a popular textbook, but I'm sad that the presentation there leaves much to be desired. – Gone Feb 6 '11 at 22:21

Do you mean $\gcd(a,b)$ divides $c$? If so, you can divide both sides of the equation to get $$\frac{a}{g}x+\frac{b}{g}y=\frac{c}{g}$$ where $g=\gcd(a,b)$. But since $\gcd(a/g,b/g)=1$, you can use the extended Euclidean algorithm to find a solution $(x_0,y_0)$ to the equation $$\frac{a}{g}x+\frac{b}{g}y=1.$$ Once you have that, the solution $(X,Y)=(\frac{c}{g}\cdot x_0,\frac{c}{g}\cdot y_0)$ is a solution to your original equation. Furthermore, the values $$x=X + \frac{b}{g} t\quad y=Y - \frac{a}{g} t$$ give all solutions when $t$ ranges over $\mathbb{Z}$, I believe.

- @yunone: Yes, that gives all solutions. – Arturo Magidin Feb 6 '11 at 20:47
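To complement the answers above, here is a short Python sketch (an illustration, not code from the thread) of the same procedure: the extended Euclidean algorithm produces one particular solution of $ax + by = c$ whenever $\gcd(a,b)$ divides $c$, and all other solutions differ from it by multiples of $b/\gcd(a,b)$ and $a/\gcd(a,b)$.

```python
def extended_gcd(a, b):
    """Return (g, s, t) with g = gcd(a, b) and a*s + b*t == g."""
    if b == 0:
        return (a, 1, 0)
    g, s, t = extended_gcd(b, a % b)
    return (g, t, s - (a // b) * t)

def solve_diophantine(a, b, c):
    """One solution (x0, y0) of a*x + b*y = c, or None if none exists."""
    g, s, t = extended_gcd(a, b)
    if c % g != 0:
        return None
    k = c // g
    # General solution: (x0 - r*(b//g), y0 + r*(a//g)) for integer r.
    return (s * k, t * k)

# The worked example from the first answer: 258x + 147y = 369.
x0, y0 = solve_diophantine(258, 147, 369)
print(x0, y0, 258 * x0 + 147 * y0)   # 492 -861 369, the same particular solution
```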
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 77, "mathjax_display_tex": 10, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8716999888420105, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/269032/understanding-two-similar-definitions-frechet-urysohn-space-and-sequential-spac
# Understanding two similar definitions: Fréchet-Urysohn space and sequential space

Here are the definitions:

Fréchet-Urysohn space: A topological space $X$ where for every $A \subseteq X$ and every $x \in \text{cl}(A)$, there exists a sequence $(x_{n})_{n \in \mathbb{N}}$ in $A$ converging to $x$.

Sequential space: A topological space $X$ where a set $A \subseteq X$ is closed iff $A$ contains the limit points of every sequence contained in it.

As the title explains, I would like to know the difference between them. Thanks for any help.

## 4 Answers

Consider the following operation on a subset $A$ of a space $X$, defining a new subset of $X$: $$\mbox{s-cl}(A) = \{ x \in X \mid \mbox{ there exists a sequence } (x_n)_n \mbox{ from } A \mbox{ such that } x_n \rightarrow x \}\mbox{.}$$ This set, the sequential closure of $A$, contains $A$ (take constant sequences), and in all spaces $X$ it will be a subset of $\mbox{cl}(A)$, the closure of $A$ in $X$. We can define $\mbox{s-cl}^{0}(A) = A$ and for ordinals $\alpha > 0$ we define $\mbox{s-cl}^\alpha(A) = \mbox{s-cl}(\cup_{\beta < \alpha} \mbox{s-cl}^\beta(A))$, the so-called iterated sequential closure.

A space is Fréchet-Urysohn when $\mbox{s-cl}(A) = \mbox{cl}(A)$ for all subsets $A$ of $X$, so the first iteration of the sequential closure is already the closure. A space is sequential if some iteration $\mbox{s-cl}^\alpha(A)$ equals $\mbox{cl}(A)$, for all subsets $A$. So basically, by taking sequence limits we can reach all points of the closure eventually in a sequential space, but in a Fréchet-Urysohn space we are done after one step already. For more on the differences and the "canonical" example of a sequential non-Fréchet-Urysohn space (the Arens space), see this nice topology blog, and the links therein.

Both Fréchet-Urysohn and sequential spaces are related to first-countable spaces. In fact, first-countable $\Rightarrow$ Fréchet-Urysohn $\Rightarrow$ sequential.

• Fréchet-Urysohn gives a characterisation of what it means for a point to belong to the closure of a set: $x \in \overline{A}$ iff there is a sequence in $A$ converging to $x$.
• Sequentiality gives a characterisation of the closed subsets of a space: a set is closed exactly when it contains the limits of all its convergent sequences.

An example of a sequential space which is not Fréchet-Urysohn is as follows (this is essentially taken from Engelking's text with some added details): For $i \geq 1$ define $X_i = \left\{ \frac 1i \right\} \cup \left\{ \frac 1i + \frac 1{i^2 + k} : k \geq 0 \right\}$, and let $X = \{ 0 \} \cup \bigcup_{i=1}^\infty X_i$. (Note that $X_i \cap X_j = \emptyset$ for $i \neq j$.) We topologise $X$ as follows:

• all points of the form $\frac 1i + \frac 1{i^2+k}$ are isolated;
• the basic open neighbourhoods of $\frac 1i$ are the cofinite subsets of $X_i$ containing $\frac 1i$; and
• the basic open neighbourhoods of $0$ are of the form $\{ 0 \} \cup \bigcup_{i=1}^\infty Y_i$ where $Y_i \subseteq X_i$ for each $i$, and $Y_i \neq \emptyset$ for all but finitely many $i$, and if $Y_i \neq \emptyset$, then $\frac 1i \in Y_i$ and $Y_i$ is a cofinite subset of $X_i$.

It is easy to see that $0 \in \overline{ X \setminus \left\{ 0 , \frac 11 , \frac 12 , \frac 13 , \ldots \right\} }$, but no sequence in this set converges to $0$: If $\{ x_j \}_{j=1}^\infty$ is any sequence in this set, note that if $X_i \cap \{ x_j : j \geq 1 \}$ is infinite for only finitely many $i$, then we can easily form a neighbourhood of $0$ containing no points of this sequence. If $X_i \cap \{ x_j : j \geq 1 \}$ is infinite for infinitely many $i$, enumerate them as $\{ i_k : k \geq 1 \}$. Inductively pick a sequence $\{ j_{k} \}_{k=1}^\infty$ so that $j_{k+1}$ is the least $j > j_k$ such that $x_j \in X_{i_k}$. Note that $X \setminus \{ x_{j_{k}} : k \geq 1 \}$ is a neighbourhood of $0$ which does not include a tail of the sequence.

Nevertheless, $X$ is sequential. Suppose that $A \subseteq X$ contains the limits of all convergent sequences of its points. If $x \in \overline{A}$, note that if $x \neq 0$ then $x$ has a countable neighbourhood base, so it follows that there is a sequence in $A$ converging to $x$, meaning that $x \in A$. If $x = 0$, then assume that $0 \notin A$. Note that there must be a subsequence $\{ x_j \}_{j=1}^\infty$ of $\{ \frac 1 i \}_{i=1}^\infty$ such that every neighbourhood of each $x_j$ intersects $A$ (otherwise we could form a neighbourhood of $0$ which is disjoint from $A$). Then each $x_j \in A$ (since $x_j \in \overline{A}$, and we have observed above that for these points we can build a sequence in $A$ converging to $x_j$), and it follows that $\lim_j x_j = 0$ (every neighbourhood of $0$ contains all but finitely many points of the form $\frac 1i$). So $0 \in A$ after all, and $A$ is closed.

A space is a sequential space iff every sequentially closed set is closed. A space is Fréchet-Urysohn if the sequential closure of a set and the usual closure coincide.

A space is a sequential space iff every sequentially closed set is closed. But there can be closed sets which are not sequentially closed. A space is Fréchet-Urysohn if the sequential closure of a set and the usual closure coincide.

- That's awfully wrong! Any closed subset $A$ is sequentially closed since a limit point of a sequence in $A$ is always an adherence point (or limit point) of the set $A$. – Stefan H. Jan 2 at 17:45
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 93, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9528118968009949, "perplexity_flag": "head"}
http://mathhelpforum.com/math-topics/199962-geometric-sequences-how-find-maximum-value.html
# Thread:

1. ## Geometric Sequences - How to find maximum value

This question legitimately confuses me. I found that the formula for the geometric sequence is $a_n = 1800 \times 0.9^{n-1}$. How does one calculate the maximum value from that, however? I do not understand how there is a maximum value for geometric sequences.

2. ## Re: Geometric Sequences - How to find maximum value

I think you mean a value which the sum of the sequence approaches but never reaches, called the sum to infinity. The formula is a/(1-r). So in your case a = 1800 and r = 0.9, so the sum to infinity = 1800/0.1 = 18000. r has to be between -1 and +1 for a sequence to have a sum to infinity.

3. ## Re: Geometric Sequences - How to find maximum value

The maximum value of $a_n$ is simply the first term (assumed to be $a_1 = 1800$), because all subsequent terms get smaller.
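A quick numeric illustration of the second reply (added here, not part of the thread): the partial sums of $a_n = 1800 \times 0.9^{n-1}$ climb toward, but never reach, the sum to infinity $a/(1-r) = 18000$.

```python
a, r = 1800.0, 0.9

total = 0.0
for n in range(1, 201):          # add up the first 200 terms
    total += a * r ** (n - 1)
    if n in (1, 10, 50, 100, 200):
        print(f"sum of first {n:>3} terms: {total:12.4f}")

print(f"sum to infinity a/(1-r):  {a / (1 - r):12.4f}")   # 18000.0
```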
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9208516478538513, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/112555/lp-and-l-infty
# $L^p$ and $L^\infty$

So I am trying to prove that for a set $E$ of finite measure, and for $1 \leq p < \infty$, $||f||_p \leq (m(E))^{1 - 1/p}||f||_{\infty}$. But I think I have proved the wrong thing. Can you help me see where I went wrong? My proof is something like $$||f||_p =\left(\int_E |f|^p\right)^{1/p} \leq \left(\int_E ||f||_{\infty}^p\right)^{1/p} = \left(||f||_{\infty}^p \int_E 1\right)^{1/p} = ||f||_{\infty} (m(E))^{1/p},$$ which is not what was asked for in the problem. Thanks!

## 1 Answer

What you get is true, but it is not the wanted inequality. However, assuming that $f\in L^{\infty}$, you can write $|f|^p=|f|^{p-1}|f|\leq ||f||_{\infty}^{p-1}|f|$ and then apply Hölder's inequality.

- Oh right! Thanks :) For a while I thought the statements contradicted each other. – badatmath Feb 23 '12 at 19:40
- Wait, but how can my statement be true? If $p \to \infty$, $||f||_p \to 0$, and assuming $||f||_p \to ||f||_\infty$ I think that's a contradiction. – badatmath Feb 23 '12 at 19:46
- Why $||f||_p\to 0$? – Davide Giraudo Feb 23 '12 at 19:54
- Wait, never mind, it's not :P – badatmath Feb 23 '12 at 19:56
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 11, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9483696818351746, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/167346/implicit-differentiation-question
# Implicit differentiation question

Differentiate, given $$\frac{y}{x-y}=x^2+1$$ Initially I wanted to use the quotient rule to solve this, but then I tried differentiating it as it is: $$\frac {y_\frac{dy}{dx}}{1-y_\frac{dy}{dx}}=2x$$ $$\frac{dy}{dx}(y y^{-1})=2x$$ $$\frac{dy}{dx}=\frac{2x}{yy^{-1}}$$ $$\frac{dy}{dx}=\frac{2xy}{y}$$ $$\frac{dy}{dx}=2x$$ I am wondering how I can check to see if this is a valid answer?

- What is $y_{dy/dx}$? – Peter Tamaroff Jul 6 '12 at 5:44
- @PeterTamaroff: Quite possibly an error on my part. I arrived at $y_\frac{dy}{dx}$ by applying the chain rule to y. If I understand correctly, y is considered to be a function of x, so we apply the chain rule to y (when using implicit differentiation). – Kurt Jul 6 '12 at 5:51
- Explain to me what you mean by "$y_{dx/dy}$". In general, what do you mean by $f_g$, when $f$ and $g$ are functions? – Peter Tamaroff Jul 6 '12 at 5:57
- I meant to express that y is a function of x. My aim was to differentiate the top and bottom of the rational expression. Since the expression is a relation rather than a function, I thought I had to differentiate y using the chain rule. – Kurt Jul 6 '12 at 6:18
- Careful. The top is $y(x)$. The chain rule is used for compositions, and I see none (Do you?). What you ought to be doing is using the quotient rule and treating $y$ as $y(x)$ implicitly. I still can't understand why you would say a function is a function of its derivative (when the converse might make more sense) or how you arrived at the expression $y'(y y^{-1})$. If you write out your reasoning it might help. – Peter Tamaroff Jul 6 '12 at 6:24

## 3 Answers

$$\frac{y}{x-y}=x^2+1$$ You claim that $$y'=2x$$ so that $y=x^2+C$. This means $$\frac{x^2+C}{x-x^2-C}=x^2+1$$ This is absurd, since the quotient of two second-degree polynomials can't be a second-degree polynomial. In fact you get two non-vanishing terms $x^3$ and $x^4$ which are off. I also don't understand what your procedure is. I would proceed as follows: $$\displaylines{ \frac{y}{{x - y}} = {x^2} + 1 \cr \frac{d}{{dx}}\left( {\frac{y}{{x - y}}} \right) = \frac{d}{{dx}}\left( {{x^2} + 1} \right) \cr \frac{{y'\left( {x - y} \right) - \left( {1 - y'} \right)y}}{{{{\left( {x - y} \right)}^2}}} = 2x \cr \frac{{y'x - yy' - y + yy'}}{{{{\left( {x - y} \right)}^2}}} = 2x \cr \frac{{y'x - y}}{{{{\left( {x - y} \right)}^2}}} = 2x \cr y'x = 2x{\left( {x - y} \right)^2} + y \cr y' = 2{\left( {x - y} \right)^2} + \frac{y}{x} \cr}$$

An explicit approach: Rewrite as $y = (x-y)(x^2+1)$, and factor out $y$ to get $y = \frac{x^3+x}{x^2+2}$. This is straightforward to differentiate, yielding $\frac{d y}{d x} = \frac{x^4+5 x^2+2}{(x^2+2)^2}$.

You can integrate your final expression to get $y=x^2+c$ and see if this works in the original equation.
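As an independent check (added here, not part of the thread), a computer algebra system can carry out the implicit differentiation; the sketch below uses SymPy.

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')(x)

# The relation y/(x - y) = x**2 + 1, written as "expression = 0".
eq = y / (x - y) - (x**2 + 1)

# Differentiate with respect to x (y is treated as y(x)) and solve for y'.
dydx = sp.solve(sp.diff(eq, x), sp.Derivative(y, x))[0]
print(sp.simplify(dydx))   # should match 2*(x - y)**2 + y/x from the first answer

# Compare with the explicit solution y = (x**3 + x)/(x**2 + 2).
y_explicit = (x**3 + x) / (x**2 + 2)
print(sp.simplify(sp.diff(y_explicit, x)))   # (x**4 + 5*x**2 + 2)/(x**2 + 2)**2
```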
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 18, "mathjax_display_tex": 10, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9481683969497681, "perplexity_flag": "head"}
http://mathoverflow.net/questions/114801/normal-subgroups-of-free-products
## Normal Subgroups of Free Products

Let $G=A\ast \mathbb{Z}$ be the free product of a group $A$ and the cyclic group $\mathbb{Z}$, and suppose $K$ is a subgroup of $G$. By the Kurosh Subgroup Theorem we know that $K=F\ast (\ast_{i\in I}(K\cap A^{u_i}))$, where $F$ is a free group and the $u_i$ are some representatives of the double cosets $KxA$ in $G$. Now suppose further that $A$ has ACC on normal subgroups and $K$ is normal. Is it true that $K$ is finitely generated? (This will be true if we can show that $|I|$ and $rank\ F$ are finite.)

- Briefly, if $A$ has ACC on normal subgroups, show that $A\ast \mathbb{Z}$ also has ACC on normal subgroups, or give a counterexample. – M Shahryari Nov 28 at 18:35
- In fact a non-trivial free product $G=A*B$ (with $|A|\ge 3$ and $|B|\ge 2$) never has ACC on normal subgroups. This is because $G$ is a non-elementary relatively hyperbolic group, and there is a version of small cancellation theory over such groups, which, in particular, implies that every non-elementary rel. hyperbolic group possesses a proper non-elementary rel. hyperbolic quotient. – Ashot Minasyan Nov 28 at 21:31

## 1 Answer

Set $A$ equal to $\mathbb{Z}$, which satisfies the ascending chain condition ("ACC", every strictly ascending chain of (normal) subgroups eventually terminates). Then $G=\mathbb{Z}\ast\mathbb{Z}=F_2$, and $F_2$ contains normal subgroups that are not finitely generated. Examples:

1) The commutator subgroup is normal and not finitely generated.

2) The subgroup generated by $\left\{b^k a b^{-k}\ |\ k\in\mathbb{Z}\right\}$ is normal and not finitely generated.

- In fact, Greenberg proved that every normal subgroup of $F_2$ is trivial, of finite index or infinitely generated. – HW Nov 28 at 20:53
- This is true for all finitely generated free groups (Hatcher, p. 87, problem 7): If $N\leq F_n$ is a nontrivial normal subgroup of infinite index then $N$ is not finitely generated. This is an easy exercise that involves covering theory. – Sebastian Meinert Nov 28 at 21:04
- In a now-deleted answer, the OP says: Thank you for the answers. I was trying to prove some kind of Hilbert basis theorem for "Algebraic Geometry over Groups". Now, it is clear that there is no such generalization: It is not true to say that if a group $G$ has ACC on normal subgroups, then $G[X] = G \ast F(X)$ is so. Therefore there may exist $G$-groups which are not Equationally Noetherian. For Algebraic Geometry over Groups, see J. Alg. (Baumslag, Miasnikov, Remeslennikov). – S. Carnahan♦ Dec 4 at 14:21
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 31, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9211549758911133, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/294660/trying-to-prove-let-e-be-a-hilbert-a-module-then-e-langle-e-e-rangle-i
# Trying to prove: Let $E$ be a Hilbert $A$-module. Then, $E\langle E,E\rangle$ is norm dense in $E$. Let $E$ be a Hilbert $A$-module. Then, $E\langle E,E\rangle$ is norm dense in $E$. I am having trouble proving this. I believe $\langle E,E\rangle$ is a $C^*$-algebra. If I can show this, then the proof is easy since all $C^*$-algebras have an approximate identity. It seems like it shouldn't be too hard, but I am having trouble showing it. Thank you. - 1 What precisely do the notations $E\langle E,E\rangle$ and $\langle E,E\rangle$ refer to? In any case, a stronger statement can be found here: math.stackexchange.com/questions/163485/… – Jonas Meyer Feb 4 at 17:14 ## 1 Answer They are in fact equal See lemma 2.2.3. of book Hilbert C*-Modules by M. Manuilov page20 But I only point out that $\langle E,E\rangle$ is a C*-subalgebra of A and so $E\langle E,E\rangle\subset E$; conversely since any x in E can be written as $x=y\langle y,y\rangle$ we have $E\subset E\langle E,E\rangle.$ - 1 For those who don't have that book, could you describe a bit about what that citation says? – robjohn♦ Feb 5 at 7:20
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 17, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9236316680908203, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/117156/generators-and-subgroups-of-mathbf-z-15
# Generators and subgroups of $\mathbf Z_{15}$ Could you help me with following excercise? Find all generators of additive group Z15. Find all sub-groups of additive group Z15. Could you please explain how to do that and post a solution? Thanks, Mark - 1 In case this is a homework, please add the tag `(homework)`. In any ways, what did you try? In which step did you get stuck? – user2468 Mar 6 '12 at 17:33 1 To get started, can you think of one generator for $\mathbf Z_{15}$? Is $2$ a generator? Is $3$? – Dylan Moreland Mar 6 '12 at 17:35 2 It might be time consuming, but if you're completely lost try writing out the addition table. – you Mar 6 '12 at 17:39 Also, what do you know about the size of subgroups as compared to the size of the original group? – JavaMan Mar 6 '12 at 17:50 1 The answer is the $10$ (equivalence classes of) numbers from $0$ to $14$ that are relatively prime to $15$. This can be verified painfully, by hand, one at a time. There is of course a shortcut theorem that tells me this, but computing is good. It is easy to verify the others don't work. Start with $1$. Sure. What about $2$? Keep adding $2$ to itself, modulo $15$. For a mild shortcut, note that $2+2+\cdots +2$ (eight of them) is $1$. But since $1$ is a generator, $\dots$. – André Nicolas Mar 6 '12 at 18:05 show 1 more comment ## 2 Answers An element $\overline{a}$ in $\mathbf{Z}_m$ generates if and only if its order is $m$. If $0\leq a\lt m$, then the order of $\overline{a}$ is the least positive integer $k$ such that $m|ka$. Since $a|ka$ for every integer $k$, it follows that the order of $\overline{a}$ is the smallest integer $k$ such that $ka=\mathrm{lcm}(m,a)$. Under what conditions is $k=m$? Since $\mathbf{Z}_m$ is cyclic, every subgroup is cyclic. Can you show that if $\overline{a}$ and $\overline{b}$ have the same order, then they generate the same subgroup? - I'll work with $\mathbf Z_6$. The differences are superficial and translating everything to the situation of $\mathbf Z_{15}$ will be good practice. For $x \in \mathbf Z$ to be a generator for $\mathbf Z_6$, it is necessary and sufficient that some multiple of $x$ be congruent to $1 \bmod 6$, i.e. that $6 \mid (ax - 1)$ for some integer $a$. To expand this further, there exists an integer $b$ such that $b6 = ax - 1$, so $1 = ax - b6$. What does Bézout now tell you about $a$ and $6$? It should follow that the classes of $1$ and $5$ are the possible generators. For finding subgroups, you can do something analogous to how we characterize the subgroups of $\mathbf Z$. In fact, certain theorems make this more than an analogy. If $H$ is a subgroup of $\mathbf Z_6$, then let $y$ be the smallest integer among $\{1, \ldots, 6\}$ whose residue modulo $6$ is in $H$. Again using Bézout, show that $y$ must divide $6$ and that it generates $H$. You should find that there are four subgroups, generated by the classes of $1$, $2$, $3$, and $6$. -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 61, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9246519804000854, "perplexity_flag": "head"}
http://mathoverflow.net/questions/9981?sort=newest
## Coarse moduli spaces over Z and F_p ### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points) I would like to know to what extent it is possible to compare fibers over $\mathbb{F}_p$ of coarse moduli spaces over $\mathbb{Z}$, and coarse moduli spaces over $\mathbb{F}_p$. I ask a more precise question below. Let $\mathcal{M}_g^{\mathbb{Z}}$ be the moduli stack of smooth genus $g$ curves over $\mathbb{Z}$. Let $M_g^{\mathbb{Z}}$ be its coarse moduli space, and $(M_g^{\mathbb{Z}})_p$ the fiber of this coarse moduli space over $\mathbb{F}_p$. Let $\mathcal{M}_g^{\mathbb{F}_p}$ be the moduli stack of smooth genus $g$ curves over $\mathbb{F}_p$ and $M_g^{\mathbb{F}_p}$ its coarse moduli space. The universal property gives a map $\phi:M_g^{\mathbb{F}_p}\rightarrow(M_g^{\mathbb{Z}})_p$. My question is : is $\phi$ an isomorphism ? In fact, since $\phi$ is a bijection between geometric points, and $M_g^{\mathbb{F}_p}$ is normal, the question can be reformulated as : is $(M_g^{\mathbb{Z}})_p$ normal ? This shows that when $g$ is fixed, the answer is "yes" except for a finite number of primes $p$. - ## 1 Answer So you're asking if formation of coarse spaces commutes with (certain types of) base change. In general the answer is no; one needs the notion of a tame moduli space. A good starting point for this is Jarod Alper's paper "Good Moduli Spaces for Artin Stacks", available on his web page; he explains the notion and cites the relevant papers for tame moduli spaces. This should help you to work out your particular example (I don't know the answer off the top of my head). - Thanks for the reference ! However, I don't think it applies in this situation. Indeed, being "tame" is a condition on the automorphism groups of the geometric points. In the situation here, these groups are reduced, and I think the copndition is exactly "being of order prime to the characteristic". And there exist curves in characteristic $p$ with $p$-groups of automorphisms. Such arguments apply, however, for primes $p$ bigger than the known upper bounds for the order of the automorphism group of a smooth genus $g$ curve. – Olivier Benoist Dec 29 2009 at 23:54
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 24, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9157240390777588, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/27589/convert-state-vectors-to-bloch-sphere-angles?answertab=votes
# Convert state Vectors to Bloch Sphere angles I think this question is a bit low brow for the forum. I want to take a state vector $\alpha |0\rangle + \beta |1\rangle$ to the two bloch angles. What's the best way? I tried to just factor out the phase from $\alpha$, but then ended up with a divide by zero when trying to compute $\phi$ from $\beta$. - 1 This question should be migrated to physics.sx – Frédéric Grosshans May 4 '12 at 19:37 2 – Piotr Migdal May 8 '12 at 7:05 1 You have asked 13 questions, cast 0 votes, and marked only one question as accepted. Consider marking more questions as correct and definitely start voting on answers to your own questions, as well as other questions and answers that you have not provided. There is little incentive for anyone to answer your questions as it stands. – Mark S. Everitt May 10 '12 at 4:24 2 Sure, Ill do that more. I didnt understand that I could give back to people answering simply by upping their numbers. I like the community and will try to do better as a member – Ben Sprott May 17 '12 at 15:10 No problem. It can take a while to find your feet on SE. Sometimes we just need to make a little noise. ;) – Mark S. Everitt May 18 '12 at 9:08 ## 2 Answers You are probably dividing by $\alpha$ at some point to eliminate a global phase, leading to your divide by zero in some cases. It would be better to get the phase angles of $\alpha$ and $\beta$ with $\arg$, and set the relative phase $\phi=\arg(\beta)-\arg(\alpha)$. Angle $\theta$ is now simply extracted as $\theta = 2\cos^{-1}(|\alpha|)$ (note that the absolute value of $\alpha$ is used). This is all assuming that you want to get to $$|\psi\rangle = \cos(\theta/2)|0\rangle + \mathrm{e}^{i\phi}\sin(\theta/2)|1\rangle\,,$$ which neglects global phase. - 1 Your previous questions suggest that you are using Matlab, which has the `angle()` function for calculating `arg`. In other languages that support complex types, `arg` (or something similar such as `carg` for C99 complex doubles) is more common. – Mark S. Everitt May 10 '12 at 4:21 $\phi$ is the relative phase between $\alpha$ and $\beta$ (so the phase of $\alpha/\beta$). You will only get zero or divide-by-zero when $\alpha=0$ or $\beta=0$. But in that case, $\phi$ is arbitrary. And when $\alpha$ or $\beta$ are close to zero, you are near the poles of the Bloch sphere, and $\phi$ doesn't really matter that much. -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 22, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9662218689918518, "perplexity_flag": "middle"}
http://www.physicsforums.com/showthread.php?t=472541
Physics Forums ## Kinetics of Particles work and energy; A moving car 1. The problem statement, all variables and given/known data The 2Mg (i assumed mega-gram, not sure if that is the correct term) car has a velocity of v= 100km/h when the driver sees an obstacle in front of the car. If it takes 0.75 s for him to react and lock the brakes, causing the car to skid, determine the distance the car travels before it stops. The coefficient of kinetic friction between the tires and the road is Uk= 0.25. 2. Relevant equations Work and energy for a system of particles: $$\Sigma$$T1 + $$\Sigma$$U = $$\Sigma$$T2 T's represent initial and final kinetic energy respectively, 1/2*m*v^2 U represents all work done by external and internal forces acting on the system. Work of a constant force along a straight line: U = Fcos$$\vartheta$$($$\Delta$$s) 3. The attempt at a solution So i tried to apply my basic equation to the question $$\Sigma$$T1 + $$\Sigma$$U = $$\Sigma$$T2 T2 equals zero (i assumed) because the car will come to rest at the end of the question T1 will equal 1/2 mv^2 which is equal to 1/2 *(2Mg * 100km/h^2) = 10000 since the only work acting in this question is the friction force Caused by the car braking U = Fcos$$\vartheta$$($$\Delta$$s) therefore U = Ff(force of friction)*$$\Delta$$s which is = -4.905$$\Delta$$s (negative because friction force acts in the negative direction) so to summarize i now have 10000 -4905$$\Delta$$s = 0. i solved for delta s and got 2038, but this is obviously incorrect, i dont know how to apply the time delay into this question. For reference the correct answer is s = 178m I just started this chapter and dont have my bearings yet so if you could please explain clearly how to proceed it would be appreciated, thank you Attached Thumbnails PhysOrg.com science news on PhysOrg.com >> Front-row seats to climate change>> Attacking MRSA with metals from antibacterial clays>> New formula invented for microscope viewing, substitutes for federally controlled drug Without doing any of my own work, here are a few pointers: Be sure to account the fact that the given velocity is in km/h and not m/s. Remember that the work done by a constant force can be written as F*x where x is the distance along which the force did the work. Now, to account for the time delay, simply find the distance the car would have traveled in that time and add it to the final distance to find the distance the driver travels before stopping. use equations of motion to get this one Thread Tools | | | | |--------------------------------------------------------------------------|----------------------------------------------|---------| | Similar Threads for: Kinetics of Particles work and energy; A moving car | | | | Thread | Forum | Replies | | | Advanced Physics Homework | 1 | | | Advanced Physics Homework | 3 | | | Engineering, Comp Sci, & Technology Homework | 0 | | | Introductory Physics Homework | 4 | | | Special & General Relativity | 7 |
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 13, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9231876730918884, "perplexity_flag": "middle"}
http://mathhelpforum.com/pre-calculus/48951-what-perpendicular-distance-pt-5-4-line-y-1-2x-6-a.html
# Thread: 1. ## What is the perpendicular distance from the pt 5, 4 to the line y = 1/2x + 6? The title pretty much explains the question: What is the perpendicular distance from the pt 5, 4 to the line y = 1/2x + 6? If you could please tell the answer and explain how you got there that would be great. thanks 2. Let's call line $y = \frac{1}{2}x + 6$: line k. Find the equation of the line whose slope is the negative reciprocal of line K and it includes the point (5, 4). Call this line J. You will need to use the point-slope form equation: $y - y_1 = m(x - x_1)$ Find the point of intersection of these 2 lines. The perpendicular distance from the pt 5, 4 to the line $y = \frac{1}{2}x + 6$ is the distance from (5,4) to the point of intersection. 3. Hello, gobbajeezalus! What is the perpendicular distance from $P(5,4)$ to the line $L_1\!:\;y \:= \:\frac{1}{2}x + 6$ ? There is a formula for this problem, but I'll walk through it for you . . . The given line $(L_1)$ has slope $\frac{1}{2}$ The line perpendicular to it $(L_2)$ has slope $-2$ $L_2$ has point (5,4) and slope -2 . . Its equation is: . $y - 4 \:=\:-2(x - 5) \quad\Rightarrow\quad y \:=\:-2x + 14$ Where do $L_1$ and $L_2$ intersect? . . $\frac{1}{2}x + 6 \;=\;-2x + 14 \quad\Rightarrow\quad \frac{5}{2}x \:=\:8$ Hence: . $x \:=\:\frac{16}{5},\;\;y \:=\:\frac{38}{5}\quad\hdots$ . They intersect at: . $Q\left(\frac{16}{5},\:\frac{38}{5}\right)$ The desired distance is: . $PQ \;=\;\sqrt{\left(\frac{16}{5} - 5\right)^2 + \left(\frac{38}{5} - 4\right)^2} \;= \;\sqrt{\left(\text{-}\frac{9}{5}\right)^2 + \left(\frac{18}{5}\right)^2}$ . . $= \;\sqrt{\frac{81}{25} + \frac{324}{25}} \;=\;\sqrt{\frac{405}{25}} \;=\;\sqrt{\frac{81\cdot5}{25}} \;=\;\boxed{\frac{9\sqrt{5}}{5} \;\approx\;4.025}$ 4. Originally Posted by gobbajeezalus The title pretty much explains the question: What is the perpendicular distance from the pt 5, 4 to the line y = 1/2x + 6? If you could please tell the answer and explain how you got there that would be great. thanks ${\text{Perpendicular distance from a point }}$ $\left( {x_1 ,y_1 } \right){\text{ to the line }}ax + by + c = 0{\text{ is given by the formula:}} \hfill \\$ $= \frac{{\left| {ax_1 + by_1 + c} \right|}}<br /> {{\sqrt {a^2 + b^2 } }} \hfill \\$ ${\text{So, perpendicular distance from }}\left( {5,4} \right){\text{ to line }}\frac{1}<br /> {2}x - y + 6 = 0{\text{ is given as:}} \hfill \\$ $= \frac{{\left| {\frac{1}<br /> {2}\left( 5 \right) + \left( { - 1} \right)\left( 4 \right) + 6} \right|}}<br /> {{\sqrt {\left( {\frac{1}<br /> {2}} \right)^2 + \left( { - 1} \right)^2 } }} = \frac{{4.5}}<br /> {{\sqrt {1.25} }} \approx 4.025 \hfill \\ <br />$
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 23, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9112203121185303, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/53465/what-is-the-proof-that-a-force-applied-on-a-rigid-body-will-cause-it-to-rotate-a/53696
# What is the proof that a force applied on a rigid body will cause it to rotate around its center of mass? Say I have a rigid body in space. I've read that if I during some short time interval apply a force on the body at some point which is not in line with the center of mass, it would start rotating about an axis which is perpendicular to the force and which goes through the center of mass. What is the proof of this? - You mean like a mathematical proof on experimental proof? – Zetta Suro Feb 9 at 15:25 2 You could google for `cars crashing on ice site:youtube.com`. – dmckee♦ Feb 9 at 15:28 @phoenixheart6 A mathematical proof using the three laws of Newton for a particle as its axioms. – Alraxite Feb 9 at 15:31 @dmckee I know it's true, so an experimental poof isn't what I'm looking for. – Alraxite Feb 9 at 15:34 2 @joshphysics Yes, it isn't if the rod is fixed. That's why my object is in space. There is more than one force acting when it is fixed. – Alraxite Feb 10 at 0:53 show 5 more comments ## 2 Answers Assume a very small particle embedded in the Rigid body of mass $m$. Let us find out its Torque or moment of force $\vec{\tau}$ about an arbitrary point $p$. $\vec{\tau} = \vec{f} \times \vec{r}$ where $\vec{r}$ is a displacement of this particle from point $p$. The total Torque on the rigid body will be some of $\tau$ of all the particles. If this $\tau$ has a non-zero value then the body will be rotating. Lets find out the total Torque, $\Gamma$ $\Gamma = \Sigma{\tau}$ $=> \Gamma = \Sigma{ \vec{f} \times \vec{r}}$ $=> \Gamma = \Sigma{ m \, \vec{a} \times \vec{r}}$ As The body is said to be rigid, therefore all the points on this body will be having same accelerations at ever instance. Also, Cross product is distributive ref, therefore, we can take $\vec{a}$ out of summation. $=> \Gamma = \vec{a} \times \Sigma{ m \, \vec{r}}$ now, if point $p$ is center of mass then, $\Sigma{ m \, \vec{r}}$ is zero. ref Therefore, $\Gamma$ is zero and rigid body will not rotate at all. NOTE: $\times$ is the vector cross product operator. - "As The body is said to be rigid, therefore all the points on this body will be having same accelerations at ever instance." I don't think that's quite true. – Gugg Mar 5 at 17:34 It seems that you have put the conclusion to your answer in as a premise. – Gugg Mar 5 at 18:32 One can make reasonable assumptions to investigate the problem in a simple manner. Here is my reasoning about this question. For the sake of simplicity let us assume we have a spherical object of radius R in the outer space. Let there be a hook at the surface of the sphere from which we can attach a string. Imagine we are equipped with a rocket system that can give us momentum to move about. Now, we hold one end of the string and move away from the sphere in a direction that the string, when it becomes taut, is not parallel to the radius of the sphere. The force we exert on the sphere in that direction can be analysed into the tangent and the perpendicular to the surface of the sphere. If $\theta$ is the angle between the string and the normal to the sphere we have: Tangent component: $F_T=F\sin(\theta)$ Normal component: $F_N=F\cos(\theta)$. The normal component is parallel to the radius of the sphere and passes through the centre (CM) and has no moment. This component will pull the sphere in the normal direction. The tangent component has a moment with respect to the centre $M=FR\sin(\theta)$. This component would rotate the sphere, should the axis of the sphere be pivoted, but it is not! 
However, I believe that, due to the inertia of the mass of the sphere, it would be sufficient to give pivotal leverage for the tangent force to rotate the sphere. The law of conservation of energy must be written, for a short time interval of application of the force, in the form ${\bf {F.x}} = {\frac {1}{2}}mv^2+ {\frac {1}{2}}I{\omega}^2$ where:$\bf x$ is the displacement of the sphere, while the first term on the RHS is the kinetic energy due to the linear motion, and the second is the kinetic energy due to the rotational motion. Note that, as the sphere has no fixed axis, it will rotate about the axis which is peprepndicular to the great circle passing through the point of the hook, and the $F_T$ is tangent to it. Hence the axis will be perpendicular to $F_T$ and $F_N$ and so it is perpendicular to the force $\bf F$. This will be the case for any direction of $\bf F$. Why should the axis of rotation pass through the CM? The poitn here is that the object is rotating freely. Is not constrained to rotate about an arbitrary axis. Without going into mathematics, a quick argument from physics point of view is that, if the axis passed through another point, the rotational motion would be unstable. I mean that for a freely rotating object, there is a minimum state of energy, and this is when the axis of rotation passes through the CM. If it passed through some other point, then according to the parallel axis theorem, the inertia of the object would be higher, hence higher energy of the system. It is like you bring an object at a certain height near the surface of the earth and then you set it free. It will fall to the lowest energy state, and that is when it is on the ground. - I appreciate you writing the answer but consider removing it. This does not answer the question. You just gave an analysis of a force acting on a sphere and then you gave the expression for the kinetic and rotational energy for the sphere. That is all. – Alraxite Feb 12 at 1:39 You didn't give any arguments for why the axis should go through the center of mass even in the special case of a sphere. – Alraxite Feb 12 at 1:39 @Alraxite Sorry I did not have the chance to repond earlyer. Thanks for bringing to my attention the CM part of the question. I have edited my answer to include an argument about that. Please read it. – JKL Feb 12 at 11:32
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 29, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9408742785453796, "perplexity_flag": "head"}
http://en.wikiversity.org/wiki/Advanced_elasticity/Stress-strain_relation_for_thermoelasticity
# Advanced elasticity/Stress-strain relation for thermoelasticity From Wikiversity Relation between Cauchy stress and Green strain Show that, for thermoelastic materials, the Cauchy stress can be expressed in terms of the Green strain as $\boldsymbol{\sigma} = \rho~\boldsymbol{F}\cdot\frac{\partial e}{\partial \boldsymbol{E}}\cdot\boldsymbol{F}^T ~.$ Proof: Recall that the Cauchy stress is given by $\boldsymbol{\sigma} = \rho~\frac{\partial e}{\partial \boldsymbol{F}}\cdot\boldsymbol{F}^T \qquad \implies \qquad \sigma_{ij} = \rho~\frac{\partial e}{\partial F_{ik}}F^T_{kj} = \rho~\frac{\partial e}{\partial F_{ik}}F_{jk} ~.$ The Green strain $\boldsymbol{E} = \boldsymbol{E}(\boldsymbol{F}) = \boldsymbol{E}(\boldsymbol{U})$ and $e = e(\boldsymbol{F},\eta) = e(\boldsymbol{U},\eta)$. Hence, using the chain rule, $\frac{\partial e}{\partial \boldsymbol{F}} = \frac{\partial e}{\partial \boldsymbol{E}}:\frac{\partial \boldsymbol{E}}{\partial \boldsymbol{F}} \qquad \implies \qquad \frac{\partial e}{\partial F_{ik}} = \frac{\partial e}{\partial E_{lm}}~\frac{\partial E_{lm}}{\partial F_{ik}} ~.$ Now, $\boldsymbol{E} = \frac{1}{2}(\boldsymbol{F}^T\cdot\boldsymbol{F} - \boldsymbol{\mathit{1}}) \qquad \implies \qquad E_{lm} = \frac{1}{2}(F^T_{lp}~F_{pm} - \delta_{lm}) = \frac{1}{2}(F_{pl}~F_{pm} - \delta_{lm}) ~.$ Taking the derivative with respect to $\boldsymbol{F}$, we get $\frac{\partial \boldsymbol{E}}{\partial \boldsymbol{F}} = \frac{1}{2}\left(\frac{\partial \boldsymbol{F}^T}{\partial \boldsymbol{F}}\cdot\boldsymbol{F} + \boldsymbol{F}^T\cdot\frac{\partial \boldsymbol{F}}{\partial \boldsymbol{F}}\right) \qquad \implies \qquad \frac{\partial E_{lm}}{\partial F_{ik}} = \frac{1}{2}\left(\frac{\partial F_{pl}}{\partial F_{ik}}~F_{pm} + F_{pl}~\frac{\partial F_{pm}}{\partial F_{ik}}\right) ~.$ Therefore, $\boldsymbol{\sigma} = \frac{1}{2}~\rho~\left[\frac{\partial e}{\partial \boldsymbol{E}}: \left(\frac{\partial \boldsymbol{F}^T}{\partial \boldsymbol{F}}\cdot\boldsymbol{F} + \boldsymbol{F}^T\cdot\frac{\partial \boldsymbol{F}}{\partial \boldsymbol{F}}\right)\right]\cdot\boldsymbol{F}^T \qquad \implies \qquad \sigma_{ij} = \frac{1}{2}~\rho~\left[\frac{\partial e}{\partial E_{lm}} \left(\frac{\partial F_{pl}}{\partial F_{ik}}~F_{pm} + F_{pl}~\frac{\partial F_{pm}}{\partial F_{ik}}\right)\right]~F_{jk} ~.$ Recall, $\frac{\partial \boldsymbol{A}}{\partial \boldsymbol{A}} \equiv \frac{\partial A_{ij}}{\partial A_{kl}} = \delta_{ik}~\delta_{jl} \qquad \text{and} \qquad \frac{\partial \boldsymbol{A}^T}{\partial \boldsymbol{A}} \equiv \frac{\partial A_{ji}}{\partial A_{kl}} = \delta_{jk}~\delta_{il} ~.$ Therefore, $\sigma_{ij} = \frac{1}{2}~\rho~\left[\frac{\partial e}{\partial E_{lm}} \left(\delta_{pi}~\delta_{lk}~F_{pm} + F_{pl}~\delta_{pi}~\delta_{mk}\right)\right]~F_{jk} = \frac{1}{2}~\rho~\left[\frac{\partial e}{\partial E_{lm}} \left(\delta_{lk}~F_{im} + F_{il}~\delta_{mk}\right)\right]~F_{jk}$ or, $\sigma_{ij} = \frac{1}{2}~\rho~\left[\frac{\partial e}{\partial E_{km}}~F_{im} + \frac{\partial e}{\partial E_{lk}}~F_{il}\right]~F_{jk} \qquad \implies \qquad \boldsymbol{\sigma} = \frac{1}{2}~\rho~\left[\boldsymbol{F}\cdot\left(\frac{\partial e}{\partial \boldsymbol{E}}\right)^T + \boldsymbol{F}\cdot\frac{\partial e}{\partial \boldsymbol{E}}\right]\cdot\boldsymbol{F}^T$ or, $\boldsymbol{\sigma} = \frac{1}{2}~\rho~\boldsymbol{F}\cdot\left[\left(\frac{\partial e}{\partial \boldsymbol{E}}\right)^T + \frac{\partial e}{\partial \boldsymbol{E}}\right]\cdot\boldsymbol{F}^T ~.$ From the symmetry of the Cauchy 
stress, we have $\boldsymbol{\sigma} = (\boldsymbol{F}\cdot\boldsymbol{A})\cdot\boldsymbol{F}^T \qquad \text{and} \qquad \boldsymbol{\sigma}^T = \boldsymbol{F}\cdot(\boldsymbol{F}\cdot\boldsymbol{A})^T = \boldsymbol{F}\cdot\boldsymbol{A}^T\cdot\boldsymbol{F}^T \qquad \text{and} \qquad \boldsymbol{\sigma} = \boldsymbol{\sigma}^T \implies \boldsymbol{A} = \boldsymbol{A}^T ~.$ Therefore, $\frac{\partial e}{\partial \boldsymbol{E}} = \left(\frac{\partial e}{\partial \boldsymbol{E}}\right)^T$ and we get ${ \boldsymbol{\sigma} = ~\rho~\boldsymbol{F}\cdot\frac{\partial e}{\partial \boldsymbol{E}}\cdot\boldsymbol{F}^T ~. }$
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 16, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8552948832511902, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/39042/proving-function-for-stirling-numbers-of-the-second-kind?answertab=active
# Proving function for Stirling Numbers of the Second Kind I need to proof the following formula for Stirling Numbers of the Second Kind: $\sum\limits_{n \geq 0} S(n,k) x^n = \frac{x^k}{(1-x)(1-2x)\cdots(1-kx)}$ It is used widely around formularies but I neither have found any proof nor was I able to figure it out on my own. Thank you in advance! - 2 How do you define Stirling numbers? – Phira May 14 '11 at 11:07 For $n \geq1, k \geq 0$ is $c(n,k)$ the quantity of permutations $\pi \in S_n$ having k cycles. c(0,0) := 1, c(0,k) := 0 for $k \geq 1$. Set for $m,n \geq 0$ $s(m,n) := (-1)^{m-n}c(m,n)$ The numbers $s(m,n)$ alre called Stirling Numbers of the First Kind. – muffel May 14 '11 at 11:15 @thewilli So why is there a capital S in your formula? – Phira May 14 '11 at 11:33 Hmm, our prof seems to use other conventions that the guy writing the lecture notes. It definitely is the same.. – muffel May 14 '11 at 11:39 No, it isn't.... – Phira May 14 '11 at 12:00 show 5 more comments ## 2 Answers This is set out in the initial part of section 1.6 of Geneatingfunctionology with the result in equation 1.6.5. $$S(n,k) = S(n-1,k-1) + kS(n-1,k)$$ with $S(0,0)=1$. Of the natural generation functions which might express this, the simplest which deals with the factor of $k$ is likely to be of the form $$B_k(x) = \sum_n S(n,k) x^n$$ which with the recurrence leads to $$B_k(x) = xB_{k-1}(x) + kxB_k(x) = \frac{x}{1-kx} B_{k-1}(x)$$ for $k \ge 1$; $B_0(x)=1$. That in turn leads to the desired formula by multiplying successive terms. - could you please explain me what you did in the last step ($\cdots = \frac{x}{1-kx}B_{k-1}(x)$)? – muffel May 15 '11 at 18:50 If $B_k(x) = xB_{k-1}(x) + kxB_k(x)$ then $B_k(x) - kxB_k(x)= xB_{k-1}(x)$, i.e $(1- kx)B_k(x)= xB_{k-1}(x)$, so $B_k(x) = \frac{x}{1-kx} B_{k-1}(x)$ – Henry May 15 '11 at 21:24 Hmm, I cannot see the problem which came up in the comments. If we assume simply an error in the notation and that the actually the Stirling numbers 2'nd kind are meant (as verbally exposed in the question) then the identity holds. This can even be checked simply using negative integer $x$ and computing Eulersums of appropriate order. Let's write S2 the matrix of that numbers as $\qquad \small \begin{array} {rrrrrrr} 1 & . & . & . & . & . & . & . \\ 1 & 1 & . & . & . & . & . & . \\ 1 & 3 & 1 & . & . & . & . & . \\ 1 & 7 & 6 & 1 & . & . & . & . \\ 1 & 15 & 25 & 10 & 1 & . & . & . \\ 1 & 31 & 90 & 65 & 15 & 1 & . & . \\ 1 & 63 & 301 & 350 & 140 & 21 & 1 & . \\ 1 & 127 & 966 & 1701 & 1050 & 266 & 28 & 1 \\ \ldots & \end{array}$ where we use zero-based row- and columnindexes. Then the problem can be restated as summing by building the dot product of one column by one row vector $V(x) = [1,x,x^2,x^3,...]$ with manageable (ideally infinite) dimension. The numbers along a column can be seen as composed by finite compositions of geometric series. Column 0 is $[1,1,1,1,1,...]$ and the dot-product with the V(x)-vector is then $V(x)*S2[,0] = {1 \over 1-x}$ Column 1 is $[1-1,2-1,4-1,8-1,16-1,...]$ and the dot-product with the V(x)-vector is then $V(x)*S2[,1] = {1 \over 1-2x} - {1 \over 1-x} = { (1-1x) - (1-2x) \over (1-1x)(1-2x) } = {x \over (1-1x)(1-2x) }$ One needs the simple composition of the other columns (see for instance in wikipedia) to see more examples for that decompositions, and also a general description for that compositions (where the text is:"Another explicit expanding of the recurrence-relation(...)"). 
I think the idea behind that homework-assignment was, that the student should find the compositions of powers such that the problem is ocnverted to describe the (finite) composition of closed forms of geometric series. -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 24, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9271489977836609, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/145248/limits-problem-in-integration
# Limits problem in Integration please look at the following question, Let $X$ denote the diameter of an armored electric cable and $Y$ denote the diameter of the ceramic mold that makes the cable. Both $X$ and $Y$ are scaled so that they range between $0$ and $1$. Suppose that $X$ and $Y$ have the joint density $$f(x, y) =\begin{cases} \frac1y,&0<x<y<1\\\\ 0,&\text{elsewhere} \end{cases}$$ when i solved it i got the following limits of x and y, 0->1/2 and 0->(1/2-x) however according to the book the correct limits are 0->1/4 and x->1/2-x I am just confused how to plot this function in order to find out where that 1/4 came from ? According to the book the solution is, $$\begin{align*} P\left(X+Y > \frac12\right)&=1-P\left(X+Y < \frac12\right)\\ &=1-\int_0^{1/4}\int_x^{1/2-x}\frac1y dy\,dx\\ &= 1-\int_0^{1/4}\left[\ln\left(\frac12-x\right)-\ln x\right]dx\\ &=1+\left.\left[\left(\frac12-x\right)\ln\left(\frac12-x\right)-x\ln x\right]\right\vert_0^{1/4}\\ &=1+\frac14\ln\left(\frac14\right)\\ &=0.6534. \end{align*}$$ please guide me where that 1/4 came from and how can i plot that question to clearly understand that ? - ## 2 Answers It all hinges on drawing the right picture. Once you do that, the rest is almost automatic. The joint density is $0$ except on or inside the triangle with vertices $(0,0)$, $(1,1)$, and $(0,1)$. We are interested in the integral of the joint density over the part of this triangle that has $x+y \gt 1/2$. So draw the line $x+y=1/2$. We will want to be "above" this line. Note that the line $x+y=1/2$ meets the line $y=x$ at $(1/4,1/4)$. So we want to integrate the joint density over the quadrilateral-shaped region that has the following corners: $(1/4,1/4)$, $(1,1)$, $(0,1)$ and $(0,1/2)$. The region is slightly ugly. Whether we integrate first with respect to $x$ or with respect to $y$, we will have to break up the region. It is tempting to integrate over the part of our big triangle that has $x+y \le 1/2$, and subtract the result from $1$. This "complementary" region is the triangle with corners $(0,0)$, $(1/4,1/4)$, and $(0,1/2)$. Why $(1/4,1/4)$? Because that's where $y=x$ and $x+y=1/2$ meet. If we integrate first with respect to $y$, there is no need to break up the integral. For then $y$ goes from $x$ to $1/2-x$. Then we integrate with respect to $x$. The rightmost point of our region is at $x=1/4$, $y=1/4$ so we will integrate from $x=0$ to $x=1/4$. that is the book's solution. Remark: The book's solution is not optimal. Despite the need to break up the integral, I would prefer to set things up so that I integrate first with respect to $x$. Since the density function does not mention $x$ explicitly, the first integration is trivial. We can integrate over the part of the triangle that has $x+y>1/2$, or over the complementary region. Let's find the answer directly. For $y=1/4$ to $y=1/2$, we want to integrate from $x=1/2-y$ to $x=y$. From $y=1/2$ to $y=1$, we want to integrate from $x=0$ to $x=y$. No integration of $\ln$ is needed. We get $$\int_{1/4}^{1/2} \left(2-\frac{1}{2y}\right)dy+\int_{1/2}^{1}dy,$$ which is easy to calculate. - This is mostly a rewording of things already in Andre's answer, but still it might be helpful. When you let $y$ run from $0$ to $(1/2)-x$, you are allowing $y\le x$, but $f(x,y)=0$ there, so if you want to wind up integrating $1/y$, you want $y$ to run from $x$ to $(1/2)-x$. Also, if you want $x+y\lt1/2$, then that, together with $0\lt x\lt y$, forces $x\lt1/4$; if $x\ge1/4$, then $y\gt x\ge1/4$, and $x+y\gt1/4+1/4=1/2$. 
That's where the $1/4$ comes from. -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 68, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9560387134552002, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/189127/what-is-bayes-theorem-in-simplest-way
# What is Baye's Theorem in simplest way I am currently planning to take a course on artificial intelligence, there Bayes theorem is the basic. Now I tried to understand bayes theorem so many times. But to understand that I have to understand conditional probability, joint probability and total probability. So can anyone answer my following questions in a simplest way? 1. What is difference between conditional probability and joint probability? I would be glad if someone can explain intuitively in real life problems and mathematically also. 2. What is Total probability? 3. Explain bayes theorem in simple logic and where I can use them? Real life examples would be better.... - ## 2 Answers I think Bayes' theorem is intuitive if you just multiply through with the denominator. The formula is $$P(A, B) = P(A|B)P(B) = P(B|A)P(A).$$ This is, in a way, the definition of conditional probability. Look at the first one: $$P(A, B) = P(B|A)P(A).$$ There are many equivalent ways to interpret this. Let me give you one example. Let $A$ be the event "I am hungry", and $B$ the event "I go to a restaurant". Note that it's possible for me to be hungry and not go to a restaurant, and it's also possible for me to go to a restaurant without being hungry. But there is a correlation like this: The chance of me going to restaurant increases if I'm hungry. That means $P(B|A) > P(B)$. (This is only true in this example!) $P(B|A)$ is called the conditional probability because it is conditioned on $A$. It is the probability of $B$ happening given the knowledge that $A$ happens. The joint probability is $P(A, B)$, the chance of both $A$ and $B$ happening. If I know $P(A)$ and $P(B|A)$, I can compute $P(A, B)$ as follows. First, think about how probable $A$ can happen. That is $P(A)$. And assuming $A$ happens, how probable is it that $B$ also happens? That is $P(B|A)$ by definition. Multiplying them together, I get the probability that both $A$ and $B$ happen. - What does this have to do with Bayes' theorem? – Dilip Sarwate Aug 31 '12 at 11:52 It's just the first equation. – Tunococ Sep 1 '12 at 0:09 I believe that the following article on Wikipedia answers your Question 3 quite nicely: http://en.wikipedia.org/wiki/Bayes'_theorem (see the Introductory Example). Bayes Theorem is about "reversing" the conditional probability, i.e. finding $P(A\mid B)$ given $P(B\mid A)$. This is sometimes easier than directly finding $P(A\mid B)$. Another nice website is : http://betterexplained.com/articles/an-intuitive-and-short-explanation-of-bayes-theorem/. -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 23, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9507700800895691, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/55865/why-does-this-method-for-solving-matrix-equations-work
Why does this method for solving matrix equations work? I have this assignment: Given: $A = \begin{pmatrix} 2 & 4 \\ 0 & 3 \end{pmatrix}$ $C = \begin {pmatrix} -1 & 2 \\ -6 & 3 \end{pmatrix}$ Find all B that satisfy $AB = C$. I know that one option is to say $B = \left( \begin{smallmatrix} a & b \\ c & d \end{smallmatrix} \right)$ and multiply it with $A$. By making each member equal to the one in $C$, I have a system of linear equations which I can solve. However, I also know that I can set up a system like this: $$\left( \begin{array} {cc|cc} 2 & 4 & -1 & 2 \\ 0 & 3 & -6 & 3 \end{array} \right)$$ If I manipulate it like I would a system of linear equations (for example, by swapping rows, or adding a multiple of a row to another) to get the identity matrix $\left( \begin{smallmatrix} 1 & 0 \\ 0 & 1 \end{smallmatrix} \right)$, then what I'm looking for (matrix $B$) will appear in the right hand side, like this: $$\left( \begin{array} {cc|cc} 1 & 0 & 7/2 & -1 \\ 0 & 1 & -2 & 1 \end{array} \right)$$ In this case, $B = \left( \begin{smallmatrix} 7/2 & -1 \\ -2 & 1\end{smallmatrix} \right)$. My question is, quite simply, how does this work? It looks like magic to me right now. - 2 – lhf Aug 5 '11 at 21:39 2 Answers You want a matrix $B$ that satisfies $AB = C$. That is, if $A$ is invertible, you can left multiply both sides by $A^{-1}$ and get $B = A^{-1}C$. Notice that simple row operations are just left multiplication by matrices. You may need a minute or two to convince yourself of this, but try it: left multiplying a matrix $A$ by $\begin{pmatrix}1&0 \\ 0&3\end{pmatrix}$ is just multiplying (row 2) by 3; left multiplying by $\begin{pmatrix} 1&2 \\ 0&1\end{pmatrix}$ is just adding 2*(row 2) to (row 1). So performing the same row operations to $A$ and $C$ is essentially left multiplying $A$ and $C$ by the same matrix. When you manipulate $A$ until it becomes the identity, you must have ended up left multiplying it by $A^{-1}$, so the matrix you get on the right is $A^{-1}C$, i.e. $B$. (In the same way, if you wanted to solve $BA = C$ for $B$, you could use column operations, because right multiplication by matrices is just column operations.) - That makes sense, thanks. – Javier Badia Aug 5 '11 at 23:33 All the operations you perform on A and C (and on the intermediary matrices you get after some operations) make you replace A and C by PA and PC for some matrices P. If in the end A is transformed into the identity matrix Id, this means that the product K of the matrices P used is such that KA=Id. Hence K is the inverse A-1 of A and the matrix you get on the right is KC=A-1C, as desired. -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 27, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9485363960266113, "perplexity_flag": "head"}
http://www.physicsforums.com/showthread.php?t=107476
Physics Forums ## Applications of Integration-Volume I'm asked to find the volume generated by rotating the region bounded by the given curves about the y-axis (using the method of cylindrical shells). I'm given the functions $$y= 4(x-2)^2$$and $$y = x^2 - 4x +7$$. I'm not sure how to word this properly...they don't give me the domain of the function to find the volume...as in, most of the questions (and of course, all of the examples in the text) have given domains...find such and such when x=3 and x=0 or something of the like. I can do all the problems where the domain or range (in some cases) is given but I'm not entirely sure how to figure out my domains? Is that the intecept of the 2? Because then I would get x=1 and x=3. And if that's true then I'm just screwing something else up. Another general question I have is when I'm making equations (and I'm consistently having this problem for areas etc), I always seem to subtract the wrong function from the other one...in other words, I always seem to end up with a negative or incorrect area/volume. Say for volumes... $$\int_{a}^{b} 2 \pi xf(x)dx$$ for f(x) I always seem to subract the wrong function by the wrong function!!! How can I tell which one is going to be the correct one? And sometimes both answers are positive and one is correct and one is not. I asked my professor and he told us just to put a (+/-) at the front and then change it once you know what it is...???!!! At first I thought it was which function was "on top" of the graphed functions but that doesn't seem to work very well either!!! Sorry for the super long post!!! It's been almost 4 years since I've done calc and now I have to take another course (calc II) so if my questions seem dumb, I'm sorry but I'm still trying to catch up!!! Thanks! PhysOrg.com science news on PhysOrg.com >> Front-row seats to climate change>> Attacking MRSA with metals from antibacterial clays>> New formula invented for microscope viewing, substitutes for federally controlled drug for the domain, you are getting the right region you set the functions equal to each other to find out where they intersect.. this may help http://mathdemos.gcsu.edu/shellmetho...y/gallery.html furthermmore, the formula for the volume of a cylindrical shell is V=2pi (delta r) h so then 2*pi integral radius and your height here radius is the distance from the y-axis to the center of the shell, which in this case is an x-distance.. now for the height, the height of the function is the top function minus the lower function... hope this helps.. Thanks again!!!! Those animations are awesome!! Thread Tools | | | | |---------------------------------------------------------|-------------------------------|---------| | Similar Threads for: Applications of Integration-Volume | | | | Thread | Forum | Replies | | | Calculus & Beyond Homework | 4 | | | Calculus & Beyond Homework | 1 | | | Calculus & Beyond Homework | 4 | | | Introductory Physics Homework | 2 | | | Calculus | 2 |
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9277462363243103, "perplexity_flag": "middle"}
http://www.mathplanet.com/education/pre-algebra/probability-and-statistic/combinations-and-permutations
# Combinations and permutations Before we discuss permutations we are going to have a look at what the words combination means and permutation. A Waldorf salad is a mix of among other things celeriac, walnuts and lettuce. It doesn't matter in what order we add our ingredients but if we have a combination to our padlock that is 4-5-6 then the order is extremely important. If the order doesn't matter then we have a combination, if the order do matter then we have a permutation. One could say that a permutation is an ordered combination. The number of permutations of n objects taken r at a time is determined by the following formula: $\\P(n,r)=\frac{n!}{(n-r)!}\\$ n! is read n factorial and means all numbers from 1 to n multiplied e.g. $\\5!=5\cdot 4\cdot 3\cdot 2\cdot 1\\$ This is read five factorial. 0! Is defined as 1. $\\0!=1\\$ Example A code have 4 digits in a specific order, the digits are between 0-9. How many different permutations are there if one digit may only be used once? A four digit code could be anything between 0000 to 9999, hence there are 10,000 combinations if every digit could be used more than one time but since we are told in the question that one digit only may be used once it limits our number of combinations. In order to determine the correct number of permutations we simply plug in our values into our formula: $\\P(n,r)=\frac{10!}{(10-4)!}=\frac{10\cdot9\cdot8\cdot 7\cdot 6\cdot 5\cdot 4\cdot 3\cdot 2\cdot 1 }{6\cdot5\cdot 4\cdot 3\cdot 2\cdot 1}=5040\\$ In our example the order of the digits were important, if the order didn't matter we would have what is the definition of a combination. The number of combinations of n objects taken r at a time is determined by the following formula: $\\C(n,r)=\frac{n!}{(n-r)!r!}\\$ Video lesson: Solve Next Class:  Probability and statistic, Finding the odds • Pre-Algebra • Algebra 1 • Algebra 2 • Geometry • Sat • Act
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 5, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9245250225067139, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/19264/what-is-the-etymology-for-the-term-conductor
## What is the etymology for the term conductor? ### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points) This is related to the previous question of how to define a conductor of an elliptic curve or a Galois representation. What motivated the use of the word "conductor" in the first place? A friend of mine once pointed out the amusing idea that one can think of the conductor of an elliptic curves as "someone" driving a train which lets you off at the level of the associated modular form. A similar statement can be made concerning Szpiro's conjecture, which provides asymptotic bounds on several invariants of an elliptic curve in terms of its conductor. Here one might think of the conductor as "someone" who controls this symphony of invariants consisting of the minimal discriminant, the real period, the modular degree, and the order of the Shafarevich-Tate group (assuming BSD). Was there some statement of this sort which motivated Artin's original definition of the conductor? Does anyone have a reference for the first appearance of the word conductor in this context? I apologize if this question is inappropriate for MO. - 5 I believe it is a translation of the German term 'Führer' (which must have led to some awkward conversations between number theorists in the late 1930's and early 1940's). I don't know how and when the German term originated. – François G. Dorais♦ Mar 25 2010 at 3:05 "conductor" must be appropriate for $\mho$, if not for MO ;-) – Noam D. Elkies Jan 29 at 20:47 ## 3 Answers It is a translation from the German Führer (which also is the reason that in older literature, as well as a fair bit of current literature, the conductor is denoted as f in various fonts). Originally the term conductor appeared in complex multiplication and class field theory: the conductor of an abelian extension is a certain ideal that controls the situation. Then it drifted off into other areas of number theory to describe parameters that control other situations. Of course in English we tend not to think of conductor as a leader in the strong sense of Führer, but more in a musical sense, so it seems like a weird translation. But back in the 1930s the English translation was leader rather than conductor, at least once: see the review of Fueter’s book on complex multiplication in the 1931 Bulletin of the Amer. Math. Society, page 655. The reviewer writes in the second paragraph "First there is a careful treatment of those ray class fields whose leaders are multiples of the ideal..." You can find the review yourself at http://www.ams.org/bull/1931-37-09/S0002-9904-1931-05214-9/S0002-9904-1931-05214-9.pdf. I stumbled onto that reference quite by chance (a couple of years ago). If anyone knows other places in older papers in English where conductors were called leaders, please post them as comments below. Thanks! Concerning Artin's conductor, he was generalizing to non-abelian Galois extensions the parameter already defined for abelian extensions and called the conductor. So it was natural to use the same name for it in the general case. Edit: I just did a google search on "leader conductor abelian" and the first hit is this answer. Incredible: it was posted less than 15 minutes ago! - ### You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you. "Es steht alles schon bei Dedekind", as Emmy Noether was fond of saying. 
In fact, • R. Dedekind, Über die Anzahl der Idealklassen in den verschiedenen Ordnungen eines endlichen Körpers, Gauss Festschrift 1877 defined the "Führer" of an order in a number field. [BTW: in German, Führer does not actually mean a strong leader but rather someone who guides you (as in a tourist guide). But of course . . . ] Class groups of orders in quadratic number fields are ring class groups, which generalize immediately to ray class groups (Weber); from there the word spread to complex multiplication and class field theory. - The first time I ever saw a conductor defined is not in the sense mentioned above, but in linear algebra, from Hoffman & Kunze's book. In their chapter on elementary canonical forms they define the conductor of a vector $\alpha$ into a subspace $W$ with respect to a linear operator $T$ to be the ideal $S_T(\alpha;W) = \{ g \in F[x] \mid g(T)\alpha \in W \}$ where the ambient vector space is over the field $F$. Interestingly, they say that they themselves call this the 'stuffer' ideal (from the German, das einstopfende Ideal), but claim that "Conductor" is more commonly used, and add that this term is "preferred by those who envision a less aggressive operator $g(T)$, gently leading the vector $\alpha$ into $W$." Hoffman & Kunze, Linear Algebra 2nd Edition, p. 201 -
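As an aside to the Hoffman & Kunze definition quoted above, here is a small computational sketch (mine, not from the book or the thread) of the special case $W = \{0\}$, where the conductor ideal $S_T(\alpha;W)$ is just the $T$-annihilator of $\alpha$; the helper name `t_annihilator` and the example matrix are invented for illustration, and the code simply looks for the first linear dependence among $\alpha, T\alpha, T^2\alpha, \dots$

```python
from sympy import Matrix, Poly, symbols

x = symbols('x')

def t_annihilator(T, alpha):
    """Monic generator of S_T(alpha; {0}) = { g in F[x] : g(T) alpha = 0 }."""
    T, alpha = Matrix(T), Matrix(alpha)
    vecs = [alpha]
    while True:
        vecs.append(T * vecs[-1])              # alpha, T alpha, T^2 alpha, ...
        null = Matrix.hstack(*vecs).nullspace()
        if null:                               # first linear dependence found
            c = null[0] / null[0][-1]          # normalise so the generator is monic
            return Poly(sum(c[i] * x**i for i in range(len(c))), x)

T = [[2, 0, 0], [0, 2, 0], [0, 0, 3]]          # made-up example operator
print(t_annihilator(T, [1, 0, 1]).as_expr())   # x**2 - 5*x + 6 = (x - 2)(x - 3)
```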
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 8, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9347158074378967, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/104583/exponentiating-4-by-4-matrix-analytically
## Exponentiating 4 by 4 matrix analytically Does there exist an analytical method by which I can exponentiate a 4 by 4 matrix, in the same way as the general 2 by 2 matrix case in the Pauli matrix basis? I have Dirac matrices (which are composed of direct products of Pauli matrices) as my basis for 4 by 4 matrices. I need an analytical approach. Any reply is appreciated. Regards - 3 What is the result you are alluding to for $2\times 2$ matrices? – Igor Rivin Aug 12 at 23:22 By "analytically" you mean "explicitly"? Put your matrix $X$ in Jordan form. Then $X=S+N$ where $S$ is diagonal and $N$ nilpotent and $SN=NS$, and $\exp(X)=\exp(S)\cdot\exp(N)$ which is quite explicit to compute... – Qfwfq Aug 12 at 23:47 @Qfwfq: that is computationally tractable, but NOT explicit (try writing a formula in terms of matrix elements of $X$) – Igor Rivin Aug 13 at 0:12 3 Also, why all the votes to close? – Igor Rivin Aug 13 at 0:12 ## 3 Answers Perhaps I misunderstand the question. When you say you have Dirac matrices, does that mean that you are computing the exponential of a linear combination of Dirac matrices? If so, then there is a very simple analytical formula in any dimension: just use the Clifford relations in the exponential series. More concretely, suppose that you would like to compute the exponential of a matrix $X := \sum_i x^i \Gamma_i$, where the Dirac matrices $\Gamma_i$ obey the Clifford relation $$\Gamma_i \Gamma_j + \Gamma_j \Gamma_i = - 2 g_{ij} I~,$$ with $I$ the identity matrix. Then it follows from this relation that $$X^2 = - x^2 I~,$$ where I have introduced the (indefinite, if $g_{ij}$ has indefinite signature) "squared norm" $$x^2 = \sum_{i,j} x^i x^j g_{ij}~.$$ If $x^2 = 0$, then $$\exp X = I + X$$ and if $x^2 \neq 0$, then letting $x = \sqrt{x^2}$ (which could be imaginary), $$\exp X = \cos x I + \frac{\sin x}{x} X~.$$ Added (for the "heathens") Qiaochu's comment is correct. Here are some more details. Let $V$ be a finite-dimensional real vector space with a non-degenerate inner product $\left<-,-\right>$. Let $Cl(V)$ be the corresponding Clifford algebra. Let $\rho: Cl(V) \to \operatorname{End}(M)$ be an irreducible representation of $Cl(V)$. Let $(e_i)$ be a basis for $V$. Then $\Gamma_i := \rho(e_i)$ are called Dirac matrices of $Cl(V)$ in the representation $M$. - For us heathens: what are Dirac matrices? – Igor Rivin Aug 13 at 13:35 @Igor Rivin: as far as I understand (which is not to say much) the first display (Clifford relation) is sort of the definition. – quid Aug 13 at 14:08 2 @Igor: they are a particular matrix representation of a certain Clifford algebra. – Qiaochu Yuan Aug 13 at 16:05 There is a completely explicit formula in this paper of Bensauod and Mouline (Rendiconti Palermo, 2005), which is quite compact for low dimensions. - (There seems to be a problem with the link.) – Andres Caicedo Aug 13 at 0:12 @Andres: should be fixed now... – Igor Rivin Aug 13 at 0:35 4 It's explicit in terms of the solution of a differential equation related to the characteristic polynomial. Of course, to solve that differential equation explicitly you need the eigenvalues...
– Robert Israel Aug 13 at 1:29 Thanks all. Igor, are you sure the Dirac matrices satisfy this particular Clifford relation? I guess it's +/- depending upon the indices. I tried this derivation before and got stuck at the anticommutator relations in particular. -
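For what it's worth, the closed form in the accepted answer is easy to check numerically. The sketch below (my own, not from the thread) uses one common choice of $4\times 4$ Dirac matrices, the three "spatial" gammas of the Dirac representation built as Kronecker products of Pauli matrices; with this choice the relevant $g_{ij}$ is the identity on the indices used, so $x^2 > 0$ and the cosine/sine form applies directly.

```python
import numpy as np
from scipy.linalg import expm

s1 = np.array([[0, 1], [1, 0]], dtype=complex)        # Pauli matrices
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

# gamma^i = [[0, sigma_i], [-sigma_i, 0]]: each squares to -I and they anticommute
gammas = [np.kron(1j * s2, s) for s in (s1, s2, s3)]

a = np.array([0.3, -1.1, 0.7])                        # arbitrary example coefficients x^i
X = sum(c * g for c, g in zip(a, gammas))
r = np.linalg.norm(a)                                 # here x = sqrt(x^2) = |a|

closed_form = np.cos(r) * np.eye(4) + (np.sin(r) / r) * X
print(np.allclose(expm(X), closed_form))              # True, up to floating-point error
```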
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 24, "mathjax_display_tex": 5, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9124993681907654, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/251839/probabilities-for-unknown-finite-population-from-sample?answertab=active
# Probabilities for unknown finite population from sample? If I have a known population ($N$ marbles of which $M$ are black) and draw $n$ samples without replacement, the probability of drawing $x$ black marbles is given by the hypergeometric distribution. Is there a way to get probabilities for the total number of blacks $M$ from the number of black marbles $x$ in my sample? - Under suitable assumptions one can. Assume for example that $N$ is fixed, and Alicia decided on how many of these will be black, $0$ to $N$, using a uniform distribution, or any other known distribution. Then based on the sample proportion, we can calculate the probabilities that Alicia decided to put in $k$ black. – André Nicolas Dec 5 '12 at 21:37 Can you get any quantitative results without assumptions on the distribution of $M$ (i.e. Alicia's choice)? I want to find an upper bound on the number of remaining black marbles (i.e. $M-x$) that holds except with some small probability. – Tom S Dec 5 '12 at 21:47 – Jonathan Christensen Dec 5 '12 at 21:56 I do not have any insight on how to attack the problem without a prior. – André Nicolas Dec 5 '12 at 21:56 ## 1 Answer The probability of extracting exactly $x$ black marbles in a sample of size $n$ from a population of $N$ marbles of which $M$ are black can be calculated as: $$P(X=x|n,N,M) = \frac{\binom{M}{x}\binom{N-M}{n-x}}{\binom{N}{n}}$$ If you assume that all values of $M$ compatible with your result are equally likely (this would be André's prior distribution assumption, I believe), you can use the exact same formula with a different twist, considering the total number of black marbles, $M$, as the independent variable, and the number of black marbles in your sample, $x$, as a parameter instead: $$f(M) = P(X=x|n,N,M) = \frac{\binom{M}{x}\binom{N-M}{n-x}}{\binom{N}{n}}$$ The above function computes the relative likelihood of a value of $M$, and the value that maximizes it is the maximum likelihood estimator of $M$ for the population. If you plot the above function for all possible values, $M\in[x,N-n+x]$, after normalization you'll get a probability distribution which can be used to compute the probabilities you are after. - Right, this corresponds to a uniform prior on M. – Jonathan Christensen Dec 5 '12 at 22:23
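A short numerical sketch of the normalisation step described in the answer (the population/sample numbers and the 95% level below are invented purely for illustration): treat the hypergeometric probability as a likelihood in $M$, normalise it over $M\in[x,N-n+x]$, and read off whatever summaries are needed, e.g. the MLE or an upper bound on the remaining black marbles.

```python
import numpy as np
from scipy.stats import hypergeom

N, n, x = 50, 10, 3                                  # population, sample size, observed blacks
M_values = np.arange(x, N - n + x + 1)               # values of M compatible with the data
lik = np.array([hypergeom.pmf(x, N, M, n) for M in M_values])
posterior = lik / lik.sum()                          # normalised: a uniform prior on M

print(M_values[np.argmax(posterior)])                # maximum likelihood estimate of M
cdf = np.cumsum(posterior)
print((M_values - x)[np.searchsorted(cdf, 0.95)])    # bound on M - x holding with prob. >= 0.95
```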
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 22, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9026869535446167, "perplexity_flag": "head"}
http://mathhelpforum.com/advanced-algebra/208726-group-theory-question-print.html
# Group theory question Printable View • November 29th 2012, 03:07 PM Ant Group theory question Hi, Let $H_{1}, H_{2}, H_{3}$ be subgroups of a group $G$ under addition. Moreover suppose $x\in H_{1}, y\in H_{2}, x+y \in H_{3}$ I'm wondering if it then follows that $H_{1} = H_{2} = H_{3}$ Could anyone offer any help as to whether or not this is true? Thanks! • November 29th 2012, 03:25 PM GJA Re: Group theory question Hi Ant, Take a look at $H_{1}=2\mathbb{Z}, H_{2}=3\mathbb{Z}$ and $H_{3}=5\mathbb{Z}$ as subgroups of $(\mathbb{Z}, +)$ to see if you can come up with a counterexample. If this is too cryptic let me know and I'll try to provide more details. Good luck! • November 29th 2012, 03:36 PM Ant Re: Group theory question Yes of course. $2 \in H_{1}$, $3 \in H_{2}$ and $2+3=5 \in H_{3}$ yet clearly these subgroups are not equal. Thanks! No wonder I couldn't prove it! • November 29th 2012, 03:45 PM Ant Re: Group theory question The problem I'm working is actually: Let R be a commutative ring with unity. Prove that if the sum of two non units is a non unit then R has a unique maximal ideal. My working so far: Let x,y be non zero non units. So the ideals they generate are proper subgroups of R. Furthermore the ideal that x+y generates is also proper. We also know that every proper ideal is contained in a maximal ideal. so $(x) \subset J_{1}$, $(y) \subset J_{2}$, $(x+y) \subset J_{3}$. For $J_{1}, J_{2}, J_{3}$ maximal ideals. Our goal is to prove that $J_{1} = J_{2} = J_{3}$ i.e. that There is unique maximal ideal. The only thing I can think of to do at the moment, is use the closure of ideals to show that the intersection of (x) and (y) will contain xy = yx. but I'm not sure how, if at all, that helps me... • November 29th 2012, 04:12 PM GJA Re: Group theory question Seems like a fun problem! I think looking at the ideal generated by a non unit is a good idea. Here's my two cents (for what it's worth): By way of contradiction suppose $R$ contains two distinct maximal ideals $M_{1}$ and $M_{2}$. Without loss of generality take $x\in M_{1}-M_{2}.$ Since $x\in M_{1}$ and $M_{1}\neq R$, $x$ is a non-unit. Now take a look at the ideal $(x)+M_{2}$ and see if you can use the assumption to get a contradiction. Good luck! • November 29th 2012, 04:22 PM Ant Re: Group theory question Thanks! I'll try that and see if I can come up with anything • November 29th 2012, 08:20 PM Deveno Re: Group theory question Quote: Originally Posted by GJA Seems like a fun problem! I think looking at the ideal generated by a non unit is a good idea. Here's my two cents (for what it's worth): By way of contradiction suppose $R$ contains two distinct maximal ideals $M_{1}$ and $M_{2}$. Without loss of generality take $x\in M_{1}-M_{2}.$ Since $x\in M_{1}$ and $M_{1}\neq R$, $x$ is a non-unit. Now take a look at the ideal $(x)+M_{2}$ and see if you can use the assumption to get a contradiction. Good luck! oh i like that! (x) + M2 is an ideal containing M2, and so we have two choices: a)(x) + M2 = R b)(x) + M2 = M2. b) is out of the question since x is in (x) + M2 (as the element 1x + 0) and by supposition, x is not in M2. the key to ruling out a) is that M2 is proper, and thus doesn't contain any units, and neither does (x). but certainly 1 is in R. • November 30th 2012, 12:51 AM Ant Re: Group theory question Quote: Originally Posted by Deveno oh i like that! (x) + M2 is an ideal containing M2, and so we have two choices: a)(x) + M2 = R b)(x) + M2 = M2. 
b) is out of the question since x is in (x) + M2 (as the element 1x + 0) and by supposition, x is not in M2. the key to ruling out a) is that M2 is proper, and thus doesn't contain any units, and neither does (x). but certainly 1 is in R. This seems to work perfectly. However, as far as I can see, at no point in this argument do we use the fact that if x,y are non units then so is their sum, x+y. This concerns me! • November 30th 2012, 04:47 AM Deveno Re: Group theory question sure we do. take any element of (x) (which is to say rx for some r in R). this cannot be a unit, for if so, we have, say rx = u, then we have: (u^{-1}r)x = 1, contradicting the fact that x is not a unit (and we know x is not a unit, because x is in M1, and M1 ≠ R -this is using the fact that if an ideal of a commutative ring with unity contains a unit, it contains 1, and thus it is the entire ring). by the same reasoning, any element of M2 is ALSO not a unit. now if (x) + M2 = R, then: rx + m = 1, for some r in R, and some m in M2. so we have: non-unit + non-unit = unit, contradicting what we are given as a condition on R. thus any two maximal ideals of R cannot be distinct (the assumption that allowed us to assume x existed). • November 30th 2012, 06:20 AM Ant Re: Group theory question Ah okay, thanks. For some reason I was thinking that (x) + M2 was the union of (x) and M2. Which is why I thought we didn't need to use the closure under + of non units. BTW I've since realized that in fact considering the union isn't helpful as it may not even be an ideal. If anyone is interested, here's another proof (which I believe is also correct!): Consider the set $J$ of all non units in $R$. Claim 1: $J$ is an ideal of $R$. Proof: It's clear that $0$ is in $J$. Let $x$ be a non unit, and assume $-x$ is a unit. So there exists $u$ s.t. $-xu = 1$, then $x(-u) = 1$, so $x$ is a unit. So $-x$ must be a non unit. Closure follows by assumption. So $J$ forms an abelian group under $+$. The product of two non units is clearly a non unit, and so is the product of a unit with a non unit. (Let $u$ be a unit, $x$ be a non unit. Assume $ux$ is a unit. So there exists $w$ s.t. $uxw=wux =1 = (wu)x$. So $wu$ is an inverse of $x$ and thus $x$ is a unit. The contradiction proves $ux$ is a non unit). So $J$ is an ideal. Claim 2: $J$ is the unique maximal ideal. Proof: $J \ne R$ because $R$ contains $1$. So we must still prove unique maximality. (Uniqueness) Consider an arbitrary proper ideal $I$ of $R$. $I$ is proper and therefore cannot contain any units. As $J$ is the set of all non units, we have that $I \subset J$. (Maximality) Recall that $J$ contains all non units. This means that if we want to find an ideal of $R$ which is larger than $J$ we must include some unit of $R$. But the inclusion of a unit will immediately give us all of $R$. So $J$ is maximal. • November 30th 2012, 06:37 AM Deveno Re: Group theory question for claim 2 i would word it like so: let I be a maximal ideal of R. since I is a maximal ideal it is proper, and therefore contains no units. since J contains all non-units, I is contained in J, hence I = J (by the maximality of I, since J ≠ R). i would be curious to see what kind of ring R might have to be, since the integers don't qualify: -2 and 3 are not units, but -2+3 is. the ring Q[x] also doesn't appear to work: neither x nor 1-x are units, but their sum is. the only examples of such rings that spring to mind are fields (which have boring maximal ideals: {0}), but there might be others (i haven't thought about it too much).
• November 30th 2012, 06:49 AM Ant Re: Group theory question Quote: Originally Posted by Deveno for claim 2 i would word it like so: let I be a maximal ideal of R. since I is a maximal ideal it is proper, and therefore contains no units. since J contains all non-units, I is contained in J, hence I = J (by the maximality of I, since J ≠ R). i would be curious to see what kind of ring R might have to be, since the integers don't qualify: -2 and 3 are not units, but -2+3 is. the ring Q[x] also doesn't appear to work: neither x nor 1-x are units, but their sum is. the only examples of such rings that spring to mind are fields (which have boring maximal ideals: {0}), but there might be others (i haven't thought about it too much). Yes, that's a bit more succinct. Apparently they're called "local rings" Local ring - Wikipedia, the free encyclopedia
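A quick computational illustration of the property this thread converges on (commutative rings with unity in which the non-units are closed under addition are exactly the local rings), using two toy quotients of the integers; the helper functions are ad hoc and only meant to make the contrast between Z/8Z (local) and Z/6Z (not local) visible.

```python
from itertools import product

def units(n):
    return {a for a in range(n) if any((a * b) % n == 1 for b in range(n))}

def nonunits_closed_under_addition(n):
    non = set(range(n)) - units(n)
    return all((a + b) % n in non for a, b in product(non, repeat=2))

print(nonunits_closed_under_addition(8))   # True:  Z/8Z is local, unique maximal ideal (2)
print(nonunits_closed_under_addition(6))   # False: 3 and 4 are non-units, but 3 + 4 = 1 (mod 6)
```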
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 70, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9569063782691956, "perplexity_flag": "head"}
http://mathhelpforum.com/calculus/129361-vector-2-a.html
# Thread: 1. ## vector 2 the vector $\vec{a}=(2,3)$ is projected onto the x-axis. what is the scalar projection? what is the vector projection? what are the scalar and vector projections when $\vec{a}$ is projected onto the y-axis? 2. Originally Posted by william the vector $\vec{a}=(2,3)$ is projected onto the x-axis. what is the scalar projection? what is the vector projection? what are the scalar and vector projections when $\vec{a}$ is projected onto the y-axis? Have you thought about these at all yourself? These problems are about as trivial as you are going to see! Draw a picture. What does the projection of (2, 3) onto the x-axis look like?
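For reference, a minimal numerical sketch of the scalar and vector projection formulas this exercise is driving at (the function below is not part of the original thread):

```python
import numpy as np

def projections(a, u):
    """Scalar and vector projection of a onto the direction of u."""
    u_hat = u / np.linalg.norm(u)
    return a @ u_hat, (a @ u_hat) * u_hat

a = np.array([2.0, 3.0])
print(projections(a, np.array([1.0, 0.0])))   # onto the x-axis
print(projections(a, np.array([0.0, 1.0])))   # onto the y-axis
```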
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9663858413696289, "perplexity_flag": "middle"}
http://terrytao.wordpress.com/tag/covariance-matrices/
What’s new Updates on my research and expository papers, discussion of open problems, and other maths-related topics. By Terence Tao # Tag Archive You are currently browsing the tag archive for the ‘covariance matrices’ tag. ## Random covariance matrices: Universality of local statistics of eigenvalues 9 December, 2009 in math.PR, math.SP, paper | Tags: covariance matrices, Four Moment Theorem, universality, Van Vu, Wishart ensemble | by Terence Tao | 3 comments Van Vu and I have just uploaded to the arXiv our paper “Random covariance matrices: Universality of local statistics of eigenvalues“, to be submitted shortly. This paper draws heavily on the technology of our previous paper, in which we established a Four Moment Theorem for the local spacing statistics of eigenvalues of Wigner matrices. This theorem says, roughly speaking, that these statistics are completely determined by the first four moments of the coefficients of such matrices, at least in the bulk of the spectrum. (In a subsequent paper we extended the Four Moment Theorem to the edge of the spectrum.) In this paper, we establish the analogous result for the singular values of rectangular iid matrices ${M = M_{n,p}}$, or (equivalently) the eigenvalues of the associated covariance matrix ${\frac{1}{n} M M^*}$. As is well-known, there is a parallel theory between the spectral theory of random Wigner matrices and those of covariance matrices; for instance, just as the former has asymptotic spectral distribution governed by the semi-circular law, the latter has asymptotic spectral distribution governed by the Marcenko-Pastur law. One reason for the connection can be seen by noting that the singular values of a rectangular matrix ${M}$ are essentially the same thing as the eigenvalues of the augmented matrix $\displaystyle \begin{pmatrix} 0 & M \\ M^* & 0\end{pmatrix}$ after eliminating sign ambiguities and degeneracies. So one can view singular values of a rectangular iid matrix as the eigenvalues of a matrix which resembles a Wigner matrix, except that two diagonal blocks of that matrix have been zeroed out. The zeroing out of these elements prevents one from applying the entire Wigner universality theory directly to the covariance matrix setting (in particular, the crucial Talagrand concentration inequality for the magnitude of a projection of a random vector to a subspace does not work perfectly once there are many zero coefficients). Nevertheless, a large part of the theory (particularly the deterministic components of the theory, such as eigenvalue variation formulae) carry through without much difficulty. The one place where one has to spend a bit of time to check details is to ensure that the Erdos-Schlein-Yau delocalisation result (that asserts, roughly speaking, that the eigenvectors of a Wigner matrix are about as small in ${\ell^\infty}$ norm as one could hope to get) is also true for in the covariance matrix setting, but this is a straightforward (though somewhat tedious) adaptation of the method (which is based on the Stieltjes transform). 
As an application, we extend the sine kernel distribution of local covariance matrix statistics, first established in the case of Wishart ensembles (when the underlying variables are gaussian) by Nagao and Wadati, and later extended to gaussian-divisible matrices by Ben Arous and Peche, to any distribution which matches one of these distributions up to four moments, which covers virtually all complex distributions with independent iid real and imaginary parts, with basically the lone exception of the complex Bernoulli ensemble. Recently, Erdos, Schlein, Yau, and Yin generalised their local relaxation flow method to also obtain similar universality results for distributions which have a large amount of smoothness, but without any matching moment conditions. By combining their techniques with ours as in our joint paper, one should probably be able to remove both smoothness and moment conditions, in particular now covering the complex Bernoulli ensemble. In this paper we also record a new observation that the exponential decay hypothesis in our earlier paper can be relaxed to a finite moment condition, for a sufficiently high (but fixed) moment. This is done by rearranging the order of steps of the original argument carefully.
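The "augmented matrix" observation in the post, that the eigenvalues of the block matrix with off-diagonal blocks $M$ and $M^*$ are the singular values of $M$ together with their negatives (plus $|n-p|$ zeros when $M$ is rectangular), is easy to check numerically; here is a small sketch with made-up dimensions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 6, 4
M = rng.standard_normal((n, p)) + 1j * rng.standard_normal((n, p))

# Hermitian augmented matrix [[0, M], [M*, 0]]
aug = np.block([[np.zeros((n, n)), M], [M.conj().T, np.zeros((p, p))]])
eig = np.sort(np.linalg.eigvalsh(aug))
sv = np.linalg.svd(M, compute_uv=False)

expected = np.sort(np.concatenate([-sv, sv, np.zeros(abs(n - p))]))
print(np.allclose(eig, expected))   # True
```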
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 5, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8959894776344299, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/15611?sort=oldest
## To prove the Nullstellensatz, how can the general case of an arbitrary algebraically closed field be reduced to the easily-proved case of an uncountable algebraically closed field? ### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points) In his answer to a question about simple proofs of the Nullstellensatz (http://mathoverflow.net/questions/15226/elementary-interesting-proofs-of-the-nullstellensatz), Qiaochu Yuan referred to a really simple proof for the case of an uncountable algebraically closed field. Googling, I found this construction also in Exercise 10 of a 2008 homework assignment from a course of J. Bernstein (see the last page of http://www.math.tau.ac.il/~bernstei/courses/2008%20spring/D-Modules_and_applications/pr/pr2.pdf). Interestingly, this exercise ends with the following (asterisked, hard) question: (*) Reduce the case of arbitrary field $k$ to the case of an uncountable field. After some tries to prove it myself, I gave up and returned to googling. I found several references to the proof provided by Qiaochu Yuan, but no answer to exercise (*) above. So, my question is: To prove the Nullstellensatz, how can the general case of an arbitrary algebraically closed field be reduced to the easily-proved case of an uncountable algebraically closed field? The exercise is from a course of Bernstein called 'D-modules and their applications.' One possibility is that the answer arises somehow when learning D-modules, but unfortunately I know nothing of D-modules. Hence, proofs avoiding D-modules would be particularly helpful. - 3 It seems natural to try to use the model completeness of the theory of algebraically closed fields. But if you're going to use model theory, it seems to me that you might as well prove the Nullstellensatz outright, which is possible: see the accepted answer to mathoverflow.net/questions/9667/…. – Pete L. Clark Feb 18 2010 at 3:01 1 It's possible that Bernstein had in mind a more direct reduction, although I can't imagine what it would look like. – Qiaochu Yuan Feb 18 2010 at 3:24 1 Is there a non-model-theory approach? – Harry Gindi Feb 18 2010 at 4:44 @PLC: Thank you very much for your comment. Given the context of the question in the homework assignment, I tend to believe (or at least to hope) that there is a proof from commutative algebra. Clearly, this should not be an obvious proof, but I am still hoping that someone familiar with Bernstein's work in other fields will come up with the proof. Less ambitiously, perhaps a student from that course will reveal the secret... – unknown (google) Feb 18 2010 at 6:27 2 Also, +1 for the long but extremely informative title. It's good for people to realize that nothing is gained by making their titles shorter. – Qiaochu Yuan Feb 18 2010 at 7:15 ## 5 Answers These logic/ZFC/model theory arguments seem out of proportion to the task at hand. Let $k$ be a field and $A$ a finitely generated $k$-algebra over a field $k$. We want to prove that there is a $k$-algebra map from $A$ to a finite extension of $k$. Pick an algebraically closed extension field $k'/k$ (e.g., algebraic closure of a massive transcendental extension, or whatever), and we want to show that if the result is known in general over $k'$ then it holds over $k$. We just need some very basic commutative algebra, as follows. 
Proof: We may replace $k$ with its algebraic closure $\overline{k}$ in $k'$ and $A$ with a quotient $\overline{A}$ of $A \otimes_k \overline{k}$ by a maximal ideal (since if the latter equals $\overline{k}$ then $A$ maps to an algebraic extension of $k$, with the image in a finite extension of $k$ since $A$ is finitely generated over $k$). All that matters is that now $k$ is perfect and infinite. By the hypothesis over $k'$, there is a $k'$-algebra homomorphism $$A' := k' \otimes_k A \rightarrow k',$$ or equivalently a $k$-algebra homomorphism $A \rightarrow k'$. By expressing $k'$ as a direct limit of finitely generated extension fields of $k$ such an algebra homomorphism lands in such a field (since $A$ is finitely generated over $k$). That is, there is a finitely generated extension field $k'/k$ such that the above kind of map exists. Now since $k$ is perfect, there is a separating transcendence basis $x_1, \dots, x_n$, so $k' = K[t]/(f)$ for a rational function field $K/k$ (in several variables) and a monic (separable) $f \in K[t]$ with positive degree. Considering coefficients of $f$ in $K$ as rational functions over $k$, there is a localization $$R = k[x_1,\dots,x_n][1/h]$$ so that $f \in R[t]$. By expressing $k'$ as the limit of such $R$ we get such an $R$ so that there is a $k$-algebra map $$A \rightarrow R[t]/(f).$$ But $k$ is infinite, so there are many $c \in k^n$ such that $h(c) \ne 0$. Pass to the quotient by $x_i \mapsto c_i$. QED I think the main point is twofold: (i) the principle of proving a result over a field by reduction to the case of an extension field with more properties (e.g., algebraically closed), and (ii) spreading out (descending through direct limits) and specialization are very useful for carrying out (i). - +1: This works nicely. – Pete L. Clark Feb 18 2010 at 6:05 1 @Brian: when you edit a post significantly, it is nice to give some indication of what you have changed. Was there something wrong with your previous argument? – Pete L. Clark Feb 18 2010 at 6:22 5 The previous post had an integrality argument that didn't apply when k'/k is not algebraic. The ironic thing is that my immediate reaction upon seeing the question was "Oh, it's just the old spread out and specialize business", and while typing that I thought I found an even "slicker" argument (the original post) which I realized was not right about 2 seconds after I posted it. So I went back to my original idea, which is correct. Better to follow one's instincts and not try to be too slick. :) – BCnrd Feb 18 2010 at 6:27 Wow! this looks like what I am looking for. It will take me some time to process this proof, though. – unknown (google) Feb 18 2010 at 7:53 ### You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you. Well, this is the opposite of what you asked, but there is an easy reduction in the other direction. Namely, if the result is true for countable fields, then it is true for all fields. I can give two totally different proofs of this, both very soft, using elementary methods from logic. While we wait for a solution in the requested direction, let me describe these two proofs. Proof 1. Suppose k is any algebraically closed field, and J is an ideal in the polynomial ring k[x1,...,xn]. Consider the structure (k[x1,...,xn],k,J,+,.), which is the polynomial ring k[x1,...,xn], together with a predicate for the field k and for the ideal J. 
By the downward Löwenheim-Skolem theorem, there is a countable elementary substructure, which must have the form (F[x1,...,xn],F,I,+,.), where F is a countable subfield of k, and I is a proper ideal in F[x1,...,xn]. The "elementarity" part means that any statement expressible in this language that is true in the subring is also true in the original structure. In particular, I is a proper ideal in F[x1,...,xn] and F is algebraically closed. Thus, by assumption, there is a1,...,an in F making all polynomials in I zero simultaneously. This is a fact about a1,...,an that is expressible in the smaller structure, and so it is also true in the upper structure. That is, every polynomial in J is zero at a1,...,an, as desired. Proof 2. The second proof is much quicker, for it falls right out of simple considerations in set theory. Suppose that we can prove (in ZFC) that the theorem holds for countable fields. Now, suppose that k is any field and that J is a proper ideal in the ring k[x1,...,xn]. If V is the set-theoretic universe, let V[G] be a forcing extension where k has become countable. (It is a remarkable fact about forcing that any set at all can become countable in a forcing extension.) We may consider k and k[x1,...,xn] and J inside the forcing extension V[G]. Moving to the forcing extension does not affect any of our assumptions about k or k[x1,...,xn] or J, except that now, in the forcing extension, k has become countable. Thus, by our assumption, there is a1,...,an in k^n making all polynomials in J zero. This fact was true in V[G], but since the elements of k and J are the same in V and V[G], and the evaluations of polynomials are the same, it follows that this same solution works back in V. So the theorem is true for k in V, as desired. But I know, it was the wrong reduction, since I am reducing from the uncountable to the countable, instead of from the countable to the uncountable, as you requested... Nevertheless, I suppose that both of these arguments could be considered as alternative very soft short proofs of the uncountable case (assuming one has a proof of the countable case). - 1 I am sure this is a naive question, but: In your introduction your hypothesis was "if the result is true for all countable fields", but then in Proof 2 the hypothesis is "suppose we can prove in ZFC that the theorem holds for countable fields". Does the former imply the latter? I guess it does, by the completeness of the theory of algebraically closed fields of each characteristic. But can you conclude this without appealing to the completeness of this particular theory? – Tom Church Feb 18 2010 at 3:00 It's not a naive question. In the forcing argument, one needs the theorem for countable fields to be true in V[G], rather than V. So if you assumed only that it was true (i.e. true in V), then the argument wouldn't quite work. If you assume it is provable in ZFC, then we get to use it in any model of ZFC, including V[G]. – Joel David Hamkins Feb 18 2010 at 3:09 I've realized that the assertion that the claim is true for countable fields has complexity only Pi^1_1, and so if it is true in V, it will also be true in V[G] by the Shoenfield Absoluteness theorem. So there is no need for me to have assumed that the claim was provable, but rather only that it was true. – Joel David Hamkins Feb 18 2010 at 14:31 I know a way to do this, but it involves some very heavy machinery... The first component is effective bounds on the degrees of the polynomials in the conclusion of the Weak Nullstellensatz.
Such bounds are not that easy to get and there has been a lot of literature on the Effective Nullstellensatz. Perhaps the earliest effective bounds were found by Grete Hermann Die Frage der endlich vielen Schritte in der Theorie der Polynomideale (Mathematische Annalen 95, 1926), but there has been a lot of work on improving these bounds and also obtaining lower bounds over the years. [E.g., D. W. Brownawell, Bounds for the degrees in the Nullstellensatz, Ann. of Math. (2) 126 (1987), 577-591] It's interesting to read these papers, but I will only use the fact that effective bounds do exist. Using these bounds it is possible to find a sequence of first-order sentences $\phi_{n,k,r}$, which together are equivalent to the Weak Nullstellensatz; the sentence $\phi_{n,k,r}$ is a first order rendition of the following statement. If $p_1(\bar{x}),\dots,p_k(\bar{x})$ ($\bar{x} = x_1,\ldots,x_r$) are polynomials of degree at most $n$ without common zeros, then there are polynomials $q_1(\bar{x}),\dots,q_k(\bar{x})$ of degree at most $b(n,k,r)$ such that $p_1(\bar{x})q_1(\bar{x})+\cdots+p_k(\bar{x})q_k(\bar{x}) = 1$. The bounds $n$ and $b(n,k,r)$ are necessary so that the $p_i(\bar{x})$ and $q_i(\bar{x})$ have a bounded number of coefficients. Otherwise, we could not use a fixed number of variables for these coefficients. That said, the other piece of heavy machinery is the fact that the theory of algebraically closed fields of a given characteristic is complete, i.e. every first-order sentence is decided by the axioms. Therefore, if the above sentences $\phi_{n,k,r}$ are true in any algebraically closed field of a given characteristic, then they must be true in all algebraically closed fields of the same characteristic. In particular, the Weak Nullstellensatz for $\mathbb{C}$ implies the Weak Nullstellensatz for all algebraically closed fields of characteristic zero. From here, you can use the Rabinowitsch trick to get the Strong Nullstellensatz... PS: You do not need the Nullstellensatz to prove that the theory of algebraically closed fields of a given characteristic is complete. You implicitly need the Nullstellensatz to prove the effective upper bounds, but you only need them for the one field and you can think of them as wild guesses that turn out to be right. - After seeing Pete's comment, a simpler approach is to first prove quantifier elimination and use model completeness. (Well, I don't know which is easiest between getting very crude effective bounds and proving quantifier elimination.) However, there is a small benefit of my brute force approach, namely that the Nullstellensatz is actually expressible in first-order logic. – François G. Dorais♦ Feb 18 2010 at 3:37 Thank you very much for your answer. While I am hoping for a "trick" using only commutative algebra, this is still very interesting! – unknown (google) Feb 18 2010 at 6:27 This is a comment on Brian's answer, which is however a bit long to fit into the comment box. I wanted to remark that Brian's argument is ulimately not so different from the Noether normalization argument, nor is it so different to the argument linked to here, or to the argument in II.2 of Mumford--Oda using Chevalley's theorem. What they all have in common is the fact that any finite type variety can be projected to affine space with generically finite fibres and big image. 
On affine space (at least over an infinite field) we can find lots of points, and by the generic finiteness and big image assumptions we can even find such a point lying in the image of the original affine variety with finite fibres. Finding a point on this fibre then involves solving a finite degree polynomial, which we can do over the algebraic closure. Hence our original finite-type variety has a point. Here is a rewrite of Brian's argument which illustrates this: Following his reduction, we may assume that $k$ is infinite and perfect. We are given a non-zero finite type $k$-algebra $A$, and we want to show that Spec $A$ has a $\bar{k}$-point, i.e. that we can find a $k$-algebra homomorphism $A \to \bar{k}$. For this, we may as well replace $A$ by a quotient by a maximal ideal, and thus assume that $A$ is a field. As Brian notes, the theory of finitely generated field extensions allows us to write $A = k(X_1,\ldots,X_d)[t]/f(t)$ (because $k$ is perfect). We then observe that since $A$ is finite type over $k$, its generators involve only finitely many denominators, as do the coefficients of $f$, and so in fact $A = k[X_1,\ldots,X_d][1/h][t]/f(t)$ for some well-chosen non-zero $h$. Now because $k$ is infinite, $h$ is not identically zero on $k^d$, and so we are done: we choose a point $c_i$ where $h$ is non-zero, then solve $f(c_1,\ldots,c_d,t) = 0$ in $\bar{k}$. So one sees that the role of the theory of finitely generated field extensions is simply to provide a weaker version of the Noether normalization, with generic finiteness replacing finiteness. As I already wrote, the other "soft" arguments for the Nullstellensatz proceed along essentially the same lines. - Very helpful, +1 – Hailong Dao Feb 18 2010 at 16:04 I fixed your first link. – Harry Gindi Feb 18 2010 at 16:07 Thank you, fpqc – Emerton Feb 18 2010 at 16:36 1 Matt's formulation is more elegant, but I was trying to set it up to look more like a deduction from the case over a big extension field (hence my focus on the map to k' without changing A too much after the initial reduction step) since otherwise we're just giving a direct proof of Nullstellensatz over the ground field, which seems to violate the spirit of the question. That said, in some sense the whole question is kind of pointless because we have so many nice direct proofs of Null. over any field. The above is yet another. :) – BCnrd Feb 18 2010 at 17:20 1 Dear Brian, Yes, I realized that you were trying to conform to the spirit of the question, which I happily disregarded! :) – Emerton Feb 18 2010 at 18:55 The easiest way to reduce to the uncountable case may be as follows. Let $I$ be an ideal of $k[X_1,...,X_d]$ which does not contain $1$. Let $P_1,\dots,P_r$ be a generating family of $I$. Let $A=k^{\mathbf N}$ and let $m$ be a maximal ideal of $A$ which contains the ideal $N=k^{(\mathbf N)}$ of $A$. Then $K=A/m$ is an algebraically closed field which has at least the power of the continuum. (Alternative description: let $K$ be an ultrapower of $k$, with respect to a non-principal ultrafilter.) Lemma. For $i\in\{1,\dots,r\}$, let $a_i=(a_{i,n})\in A$. Assume that $(\bar a_1,\dots,\bar a_r)=0$ in $K^r$. Then the set of $n\in\mathbf N$ such that $(a_{1,n},\dots,a_{r,n})=0$ is infinite. Proof. Assume otherwise. For every $n$ such that $(a_{1,n},...,a_{r,n}) \neq 0$, choose $(b_{1,n},\dots,b_{r,n})$ such that $\sum a_{i,n}b_{i,n}=1$, and let $b_i=(b_{i,n})_n\in A$. Then $\sum a_i b_i - 1$ belongs to $N$, hence $\sum \bar a_i \bar b_i=1$.
Contradiction. Thanks to the lemma, one proves easily that the ideal $I_K$ of $K[X_1,...,X_d]$ generated by $I$ does not contain $1$. By the uncountable case, there exists $x=(x_1,...,x_d)\in K^d$ such that $P_j(x_1,...,x_d)=0$ for every $j$. Let $a=(a_{n})\in A^d$ be such that $\bar a=x$. By the lemma again, applied to the elements $P_j(a)$, the set of integers $n$ such that $P_j(a_n) = 0$ for every $j$ is infinite. In particular, there exists a point $y\in k^d$ such that $P_j(y)=0$ for every $j$. - I like this proof, it is completely elementary (and remark that also Brian's proof contains a choice of a maximal ideal). – Martin Brandenburg Jan 6 at 18:14
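As a concrete footnote to the "effective" formulation of the Weak Nullstellensatz discussed in this thread ($\sum p_i q_i = 1$ whenever the $p_i$ have no common zero), a Gröbner basis computation can exhibit such a certificate in small examples; the polynomials below are an arbitrary toy choice, not taken from any of the answers.

```python
from sympy import groebner, symbols

x, y = symbols('x y')
# x*y - 1 and x have no common zero, so 1 must lie in the ideal they generate:
G = groebner([x*y - 1, x], x, y, order='lex')
print(G.exprs)   # [1] -- a certificate that the ideal is the whole ring
```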
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 130, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9451663494110107, "perplexity_flag": "head"}
http://nrich.maths.org/6575/note?nomenu=1
## 'Fix Me or Crush Me' printed from http://nrich.maths.org/ ### Why do this problem? This problem gives students the opportunity to explore the effect of matrix multiplication on vectors, and lays the foundations for studying the eigenvectors and kernel of a matrix, ideas which are very important in higher level algebra with applications in science. ### Possible approach Start by asking students to work with the vector ${\bf F}$ to find a matrix which fixes it. Initially, let students find their own methods of working - some may choose to try to fit numbers in the matrix, some may straight away work with algebra. Once students have had a chance to try the task, allow some time to discuss methods, as well as the simplest and most complicated examples of matrices they have managed to find. Repeat the same process to find a matrix which crushes the vector $\bf Z$. The last part of the problem asks students to seek vectors which are fixed or crushed by each of the three matrices given. This works well if students are first given time to explore the properties of the matrices and to construct the conditions needed for a vector to be fixed or crushed by them. Then encourage discussion of their findings, particularly focussing on justification for matrices where appropriate vectors can't be found. ### Key questions What properties must a matrix have if it fixes $\bf F$? Or if it crushes $\bf Z$? What is the simplest matrix with these properties? What is the most general matrix you can write down? What properties must a vector have to be fixed or crushed by the three matrices given?
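Since the vectors $\bf F$ and $\bf Z$ from the problem are not reproduced in these notes, here is a generic numerical illustration with a stand-in vector: "fixing" means the vector is an eigenvector with eigenvalue $1$, and "crushing" means it lies in the kernel of the matrix.

```python
import numpy as np

v = np.array([2.0, 1.0])                      # stand-in for the problem's vector

fixes   = np.array([[3.0, -4.0],              # fixes @ v == v
                    [1.0, -1.0]])
crushes = np.array([[1.0, -2.0],              # crushes @ v == 0
                    [2.0, -4.0]])

print(np.allclose(fixes @ v, v), np.allclose(crushes @ v, 0.0))   # True True
```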
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 4, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.946140468120575, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/253432/improving-newtons-iteration-where-the-derivative-is-near-zero/253466
# Improving Newton's iteration where the derivative is near zero? I'm implementing a root-solver for finding the x-coordinates of a function f(x) at which it attains a given y-coordinate. The function is periodic, roughly sinusoidal with constant amplitude but non-linearly varying frequency; for an inverse I don't have a closed form (it is an infinite series), so I use the Newton iteration to find the x-value at a given y, beginning the iteration at an $x_0$ which is rather near the true value, by something like $x=newton(x=x_0,f(x)-y)$. In most cases this works fine; however, if the y is near a maximum (or minimum) of f, where the shape is very similar to the maximum of a sine curve, the Newton iteration does not converge. Wikipedia gives a bit of information about this, but not a workaround. The last way out would be to resort to binary search, which I'd like to avoid since the computation of f(x) is (relatively) costly. Does someone know an improvement in the spirit of the Newton iteration (which often has quadratic convergence) for this region of y-values? [update] hmmm... perhaps it needs only a tiny twist? It just occurred to me that it might be possible to proceed along the lines of how I find the easily approximated x for the maximum y: there I use Newton on the derivative of the function and search for its zero: $x_{max}=newton(x=x_0,f(x)') \qquad$ and this has the usual quadratic convergence. But how to apply this for some y near the maximum? - You could approximate your function at $x$ by a parabola, using $f(x)$, $f'(x)$ and $f''(x)$, instead of a line using just the first two... – Jaime Dec 7 '12 at 23:33 Do you mean that the starting value is very close to a local extremum or that the actual zero of the function is also an extremum? In the latter case, newton for $\sqrt f$ might help ... – Hagen von Eitzen Dec 7 '12 at 23:34 @Hagen : the $y$ for which it is difficult to find the $x$ are near $\sin(\pm \pi / 2)\cdot \alpha$ where $\alpha$ is the amplitude of my function $f(x)$ – Gottfried Helms Dec 7 '12 at 23:42 @Jaime : that looks like the sort of idea I'm looking for. Would you mind elaborating a bit more to help me get started? (Surely I'll also need time to test & adapt...) – Gottfried Helms Dec 7 '12 at 23:46 Maybe you could use quadratic interpolation along with bisection to (hopefully) converge faster? – copper.hat Dec 8 '12 at 0:05 ## 4 Answers Have you considered implementing a hybrid method? For example, at each step: IF a Newton step would result in an iteration that is outside the bounds where you have determined the root must lie, then take a bisection step (slower than Newton, but bisection always converges to a root and is not affected by extrema), or a step using a method other than Newton that is not prone to failing near extrema. ELSE proceed with a Newton step (since it converges quadratically, as you pointed out). - Yes, I'm considering bisection, but I'd like to find something else with a better rate of convergence – Gottfried Helms Dec 7 '12 at 23:38 Then replace "bisection" with another method of your choosing? I chose bisection because it is reliable, and it will only be used at steps near a maximum/minimum. – Eric Angle Dec 7 '12 at 23:56 By popular demand from the OP... In Newton's method you are replacing your function $y=f(x)$ by a linear approximation around the point $x_0$, $y = f(x_0) + f'(x_0) (x-x_0)$, which intersects the x axis ($y=0$) at $x=x_0-f(x_0)/f'(x_0)$.
You could instead approximate by a parabola as $y=f(x_0) + f'(x_0)(x-x_0) +\frac{1}{2}f''(x_0)(x-x_0)^2$, which intercepts the x-axis at $x = x_0 -\frac{f'(x_0)\mp\sqrt{f'(x_0)^2-2f(x_0)f''(x_0)}}{f''(x_0)}$. You will of course have the issue of having two, not one, possible next iteration points, but there are multiple ways to get around this: choose the closest one, always move up (or down), choose the one with the smallest $f(x_0)$... - Ahh, very nice - it looks as if, with my update, I was already close... So I'll just try it. That will need some time, then I'll come back to this. Thanks so far! – Gottfried Helms Dec 8 '12 at 0:14 Hmm, I couldn't manage to make this work with my specific application, maybe simply programming errors. Perhaps if I get the solver working by a more detailed study of the process in the extreme cases I'll come back to this and try to locate my errors. Thanks anyway for that answer! – Gottfried Helms Jan 3 at 15:37 I couldn't tell you the cost of this idea, but maybe you could work it out: You are trying to solve for a root of $g$, where $g(x)=f(x)-y$ and you want your solution in a neighborhood of $x_0$. Let's call that solution $x_s$. The problem is that $g'(x)$ is too small near $x_s$, so in the Newton algorithm you divide by very small things, yielding big changes from $x_i$ to $x_{i+1}$; possibly so big that the algorithm converges to a different solution or not at all. So what if you engineered a substitute function $\tilde{g}$ which still satisfies the demand that $\tilde{g}(x_s)=0$, but has $\tilde{g}'(x_s)$ not so small. For example, $\tilde{g}(x)$ could equal $g(x)\cdot\ln|g(x)|$. This has $\lim_{x\to x_s}\tilde{g}(x)=0$ and has $\tilde{g}'(x)=g'(x)\cdot(1+\ln|g(x)|)$. The absolute value of $\tilde{g}'(x_s)$ will be quite a bit larger than that of $g'(x_s)$. - If the minimum/maximum is negative, then an $x_0$ such that $f(x_0)$ is positive is preferred, or if the minimum/maximum is positive then an $x_0$ such that $f(x_0)$ is negative is more reasonable; but if the root is at the minimum/maximum then I don't think there should be a problem. Something else with a better rate of convergence is the Secant Method. This has a convergence rate of $\cfrac{1+\sqrt 5}{2}=1.618...$ mostly due to the fact that it starts with two initial values. -
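Putting the two suggestions above together (a quadratic local model, safeguarded by bisection whenever the step is undefined or leaves a bracketing interval) might look roughly like the following sketch; the function name and the toy sine example are my own, and this is only one of several reasonable ways to combine the ideas.

```python
import math

def parabolic_newton(g, dg, d2g, lo, hi, tol=1e-12, max_iter=100):
    """Find a root of g in [lo, hi] (g(lo), g(hi) of opposite sign): take the root of the
    local quadratic model when it stays inside the bracket, otherwise bisect."""
    glo = g(lo)
    x = 0.5 * (lo + hi)
    for _ in range(max_iter):
        gx = g(x)
        if abs(gx) < tol:
            return x
        if glo * gx < 0:                            # root lies in [lo, x]
            hi = x
        else:                                       # root lies in [x, hi]
            lo, glo = x, gx
        a, b, c = 0.5 * d2g(x), dg(x), gx           # model: c + b*t + a*t**2, t = step
        disc = b * b - 4 * a * c
        if a != 0 and disc >= 0:
            r = math.sqrt(disc)
            candidates = [x + (-b + r) / (2 * a), x + (-b - r) / (2 * a)]
        elif b != 0:                                # model degenerates to Newton's line
            candidates = [x - c / b]
        else:
            candidates = []
        inside = [t for t in candidates if lo < t < hi]
        x = min(inside, key=lambda t: abs(t - x)) if inside else 0.5 * (lo + hi)
    return x

# toy example in the spirit of the question: a y-value very close to the maximum of a sine
print(parabolic_newton(lambda t: math.sin(t) - 0.999, math.cos,
                       lambda t: -math.sin(t), 0.0, 0.5 * math.pi))
```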
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 43, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9509705305099487, "perplexity_flag": "head"}
http://mathoverflow.net/revisions/55021/list
## Return to Answer 2 added 4 characters in body Ian's answer is very elegant, but in case you're looking for a more computational approach, you could use the Seifert form. Namely, if you take a Seifert surface $\Sigma$ for a knot, look at the form $\Theta\colon H_1(\Sigma)\otimes H_1(\Sigma)\to \mathbb Z$ given by $\Theta(x,y)=lk(x^+,y)$ where $x^+$ is a push-off of $x$ along a consistently chosen positive normal direction. Then one can show that the Alexander polynomial is expressible as $\det(t\Theta-\Theta^T)$. Note that for a Whitehead double, there is an obvious Seifert surface with one band being a thickening of the original knot, and one band being a small twisted dual band. In particular, the Seifert form looks something like $$\left(\begin{array}{cc}0&1\\0&1\end{array}\right)$$ which yields a trivial Alexander polynomial, which is only well-defined in this formula up to powers of $t$. Or you could notice that the unknot has a Seifert surface with the same Seifert form as this, by Whitehead doubling the unknot! 1 Ian's answer is very elegant, but in case you're looking for a more computational approach, you could use the Seifert form. Namely, if you take a Seifert surface $\Sigma$ for a knot, look at the form $\Theta\colon H_1(\Sigma)\otimes H_1(\Sigma)\to \mathbb Z$ given by $\Theta(x,y)=lk(x^+,y)$ where $x^+$ is a push-off of $x$ along a consistently chosen positive normal direction. Then one can show that the Alexander polynomial is expressible as $\det(t\Theta-\Theta^T)$. Note that for a Whitehead double, there is an obvious Seifert surface with one band being a thickening of the original knot, and one band being a small twisted dual band. In particular, the Seifert form looks something like $$\left(\begin{array}{cc}0&1\\0&1\end{array}\right)$$ which yields a trivial Alexander polynomial, which is only well-defined in this formula up to powers of $t$. Or you could notice that the unknot has a Seifert surface with the same Seifert form as this, by Whitehead doubling the unknot!
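The determinant formula quoted above is easy to play with symbolically; the snippet below evaluates $\det(t\Theta-\Theta^T)$ for the $2\times 2$ form in the answer and, for comparison, for a standard Seifert matrix of the trefoil (the comparison matrix is not part of the original answer).

```python
from sympy import Matrix, symbols, factor

t = symbols('t')

def alexander(theta):
    theta = Matrix(theta)
    return factor((t * theta - theta.T).det())

print(alexander([[0, 1], [0, 1]]))     # t: trivial up to a unit, as for the Whitehead double
print(alexander([[-1, 1], [0, -1]]))   # t**2 - t + 1: the trefoil, for comparison
```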
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 14, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9481160640716553, "perplexity_flag": "head"}
http://unapologetic.wordpress.com/2012/02/09/the-propagation-velocity-of-electromagnetic-waves/?like=1&source=post_flair&_wpnonce=420e8d64d7
# The Unapologetic Mathematician

## The Propagation Velocity of Electromagnetic Waves

Now we’ve derived the wave equation from Maxwell’s equations, and we have worked out the plane-wave solutions. But there’s more to Maxwell’s equations than just the wave equation. Still, let’s take some plane-waves and see what we get. First and foremost, what’s the propagation velocity of our plane-wave solutions? Well, it’s $c$ for the generic wave equation $\displaystyle\frac{\partial^2F}{\partial t^2}-c^2\nabla^2F=0$ while our electromagnetic wave equation is $\displaystyle\begin{aligned}\frac{\partial^2E}{\partial t^2}-\frac{1}{\epsilon_0\mu_0}\nabla^2E&=0\\\frac{\partial^2B}{\partial t^2}-\frac{1}{\epsilon_0\mu_0}\nabla^2B&=0\end{aligned}$ so we find the propagation velocity of waves in both electric and magnetic fields is $\displaystyle c=\frac{1}{\sqrt{\epsilon_0\mu_0}}$

Hm. Conveniently, I already gave values for both $\epsilon_0$ and $\mu_0$: $\displaystyle\begin{aligned}\epsilon_0&=8.85418782\times10^{-12}\frac{\mathrm{F}}{\mathrm{m}}&=8.85418782\times10^{-12}\frac{\mathrm{s}^2\cdot\mathrm{C}^2}{\mathrm{m}^3\cdot\mathrm{kg}}\\\mu_0&=1.2566370614\times10^{-6}\frac{\mathrm{H}}{\mathrm{m}}&=1.2566370614\times10^{-6}\frac{\mathrm{m}\cdot\mathrm{kg}}{\mathrm{C}^2}\end{aligned}$ Multiplying, we find: $\displaystyle\epsilon_0\mu_0=8.85418782\times1.2566370614\times10^{-18}\frac{\mathrm{s}^2}{\mathrm{m}^2}=11.1265006\times10^{-18}\frac{\mathrm{s}^2}{\mathrm{m}^2}$ which means that $\displaystyle c=\frac{1}{\sqrt{\epsilon_0\mu_0}}=0.299792458\times10^9\frac{\mathrm{m}}{\mathrm{s}}=299\,792\,458\frac{\mathrm{m}}{\mathrm{s}}$

And this is a number which should look very familiar: it’s the speed of light. In an 1864 paper, Maxwell himself noted: The agreement of the results seems to show that light and magnetism are affections of the same substance, and that light is an electromagnetic disturbance propagated through the field according to electromagnetic laws. Indeed, this supposition has been borne out in experiment after experiment over the last century and a half: light is an electromagnetic wave.

## 3 Comments »

1. You have done remarkable work so far to expound the principles of electrostatics. I have to re-read all the posts and absorb. Here you have found the speed of light. That is based on two other constants. I have to read the older posts to see if those constants were calculated from first principles. Otherwise, there is some circularity here. I am still not sure whether you proved in all these posts Maxwell’s equations from vector calculus alone (generalized Stokes’ theorem). If that is the case, it proves the power of vector calculus. No matter the questions that linger in mind, what you have done relating calculus and electrostatics is truly wonderful. Congratulations! I hope you continue this effort. Comment by Soma Murthy | February 9, 2012 | Reply

2. I didn’t get into how $\epsilon_0$ and $\mu_0$ were calculated, but in fact they are determined from laboratory experiments which are specifically concerned with electric or magnetic phenomena, and not with light as such. Comment by | February 9, 2012 | Reply
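For readers who want to reproduce the arithmetic, here is a short Python check (my addition, not part of the original post) using the same truncated values of $\epsilon_0$ and $\mu_0$ quoted above.

```python
# Numerical check of c = 1/sqrt(eps0*mu0) with the truncated values from the post.
from math import sqrt

eps0 = 8.85418782e-12      # vacuum permittivity, F/m
mu0  = 1.2566370614e-6     # vacuum permeability, H/m

product = eps0 * mu0
c = 1.0 / sqrt(product)

print(f"eps0*mu0 = {product:.9e} s^2/m^2")   # ~1.11265006e-17
print(f"c        = {c:.0f} m/s")             # ~299792458 m/s
```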
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 11, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9311769008636475, "perplexity_flag": "middle"}
http://mathoverflow.net/revisions/40526/list
## Return to Answer

Here is a quick sketch (probably it can be made much cleaner). Using an inductive argument it should not be difficult to reduce to studying the case that $(X,<_1)$ is isomorphic to a cardinal, say $\kappa$. For convenience, let us identify $(\kappa,<)$ and $(X,<_1)$.

Let us now construct $Y$ using transfinite induction. Let $X'\subseteq X$ be the initial segment of $X$ (with respect to $<_2$) which, endowed with the order $<_2$, is isomorphic to $\kappa$. For $\alpha<\kappa$ define $$y_\alpha=\min\{\beta\in X': \forall \alpha'<\alpha\; \beta> y_{\alpha'}\textrm{\ and\ }\beta>_2 y_{\alpha'}\}.$$ The set above is not empty, and so $y_\alpha$ is well defined, because $|\{\gamma\in X':\exists \alpha'<\alpha\; \gamma\leq y_{\alpha'}\}|<\kappa$ and also $|\{\gamma\in X':\exists \alpha'<\alpha\; \gamma\leq_2 y_{\alpha'}\}|<\kappa$ (this depends on $\{y_{\alpha'}\}_{\alpha'<\alpha}$ not being cofinal in $(\kappa,<)$ and $(X',<_2)$), so their complements in $X'$ intersect.

The set $Y=\{y_\alpha\}_{\alpha<\kappa}$ is what we were looking for.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 42, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.955096960067749, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/260249/finding-the-lengths-of-lines-outside-of-a-circle-when-theyre-not-tangent?answertab=active
# Finding the lengths of lines outside of a circle when they're not tangent I recreated the question in paint above ^ The line is not tangent to the circle. - ## 1 Answer These products are powers of the point with respect to the circle, so it must be that 5·29 = x·7, assuming 5 and x are the lengths from the intersection point to the points at which the lines first cut the circle (the short ones). He's right, I made a mistake: from the formula, it must be what he says: $x(x+7)=5(5+29)$. The answer is 10. Again, it doesn't match your drawing, but your drawing is not accurate with the lengths of the segments. - That would mean the answer is 20.71? That doesn't make a lot of sense in the whole grand scheme of things, given that the line it's attached to is similar in length. – Nathan Abbott Dec 16 '12 at 21:08 1 He's got the formula wrong. Try $x(x+7)=5(5+29)$. – Mario Carneiro Dec 16 '12 at 21:12 Well, the problem is that the drawing doesn't make much sense either, because the 29 should look approx. like 6 times the 5 segment, and it doesn't. Try making the drawing with some geometry software to see if it's actually that 20.71; you can try GeoGebra, it's online and free. – MyUserIsThis Dec 16 '12 at 21:13
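For completeness, a short symbolic check (not from the original thread) that the corrected power-of-a-point equation $x(x+7)=5(5+29)$ does give $x=10$:

```python
# Solve x*(x + 7) = 5*(5 + 29) for the positive root.
import sympy as sp

x = sp.symbols('x', positive=True)
print(sp.solve(sp.Eq(x * (x + 7), 5 * (5 + 29)), x))   # [10]
```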
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9564018249511719, "perplexity_flag": "middle"}
http://stats.stackexchange.com/questions/34100/joint-distribution-of-mid-p-value-and-p-value
# Joint distribution of mid p-value and p-value I have a question about the joint distribution of the mid p-value and the p-value. We know that, for a right-tailed test with a discrete test statistic $X$ with distribution $F$, the p-value is defined as $P=\Pr(X \geq X_{obs})$ and the mid p-value is defined as $P_{mid}=\Pr(X \gt X_{obs})+\frac{1}{2}\Pr(X=X_{obs})$, where $X_{obs}$ is the observed value of $X$. I would like to find out $\Pr(P_{mid} \leq t~\text{and}~P \gt t)$ for some $t \in (0,1)$. Here $P$ is the p-value and $P_{mid}$ is the mid p-value. - What test do you have in mind? – whuber♦ Aug 10 '12 at 23:13 Thanks so much for the reply. I am hoping to get a general solution for any discrete distribution F. – user13154 Aug 11 '12 at 1:06 The answer depends strongly on the test statistic, not just the distribution: that's why you need to tell us what test you're using. – whuber♦ Aug 11 '12 at 21:05 1 Ok. We can focus on the Fisher's exact test. Thanks. – user13154 Aug 12 '12 at 2:31 Does anyone have any thought on it, please? Thanks – user13154 Aug 14 '12 at 14:01
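No answer was posted, but the two definitions are easy to compute for a concrete discrete statistic, which may help fix ideas. The snippet below is my own illustration, not from the thread; the binomial model, $n$, $p_0$ and the observed value are arbitrary choices made only to show the formulas.

```python
# p-value and mid p-value for a right-tailed test with X ~ Binomial(n, p0) under H0.
from scipy.stats import binom

n, p0 = 20, 0.5
x_obs = 14

p_value = binom.sf(x_obs - 1, n, p0)                              # Pr(X >= x_obs)
mid_p   = binom.sf(x_obs, n, p0) + 0.5 * binom.pmf(x_obs, n, p0)  # Pr(X > x_obs) + Pr(X = x_obs)/2

print(f"P     = {p_value:.4f}")
print(f"P_mid = {mid_p:.4f}")   # always <= P for a discrete statistic
```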
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 8, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9262561798095703, "perplexity_flag": "middle"}
http://unapologetic.wordpress.com/2009/05/18/galois-connections/?like=1&_wpnonce=d19428bcb3
# The Unapologetic Mathematician ## Galois Connections I want to mention a topic I thought I’d hit back when we talked about adjoint functors. We know that every poset is a category, with the elements as objects and a single arrow from $a$ to $b$ if $a\leq b$. Functors between such categories are monotone functions, preserving the order. Contravariant functors are so-called “antitone” functions, which reverse the order, but the same abstract nonsense as usual tells us this is just a monotone function to the “opposite” poset with the order reversed. So let’s consider an adjoint pair $F\dashv G$ of such functors. This means there is a natural isomorphism between $\hom(F(a),b)$ and $\hom(a,G(b))$. But each of these hom-sets is either empty (if $a\not\leq b$) or a singleton (if $a\leq b$). So the adjunction between $F$ and $G$ means that $F(a)\leq b$ if and only if $a\leq G(b)$. The analogous condition for an antitone adjoint pair is that $b\leq F(a)$ if and only if $a\leq G(b)$. There are some immediate consequences to having a Galois connection, which are connected to properties of adjoints. First off, we know that $a\leq G(F(a))$ and $F(G(b))\leq b$. This essentially expresses the unit and counit of the adjunction. For the antitone version, let’s show the analogous statement more directly: we know that $F(a)\leq F(a)$, so the adjoint condition says that $a\leq G(F(a))$. Similarly, $b\leq F(G(b))$. This second condition is backwards because we’re reversing the order on one of the posets. Using the unit and the counit of an adjunction, we found a certain quasi-inverse relation between some natural transformations on functors. For our purposes, we observe that since $a\leq G(F(a))$ we have the special case $G(b)\leq G(F(G(b)))$. But $F(G(b))\leq b$, and $G$ preserves the order. Thus $G(F(G(b)))\leq G(b)$. So $G(b)=G(F(G(b)))$. Similarly, we find that $F(G(F(a)))=F(a)$, which holds for both monotone and antitone Galois connections. Chasing special cases further, we find that $G(F(G(F(a))))=G(F(a))$, and that $F(G(F(G(b))))=F(G(b))$ for either kind of Galois connection. That is, $F\circ G$ and $G\circ F$ are idempotent functions. In general categories, the composition of two adjoint functors gives a monad, and this idempotence is just the analogue in our particular categories. In particular, these functions behave like closure operators, but for the fact that general posets don’t have joins or bottom elements to preserve in the third and fourth Kuratowski axioms. And so elements left fixed by $G\circ F$ (or $F\circ G$) are called “closed” elements of the poset. The images of $F$ and $G$ consist of such closed elements. Posted by John Armstrong | Algebra, Category theory, Lattices ## 6 Comments » 1. “In particular, these functions behave like closure operators” In fact, under the standard terminology, they are closure operators. What have been called closure operators here and in the UM post linked to above are usually called topological closure operators. Comment by | May 18, 2009 | Reply 2. Well, yes. I just wanted to keep readers from getting confused by the other two axioms. Comment by | May 18, 2009 | Reply 4. Typo: b <= F(G(b)) should be the other way around at the end of paragraph 3.
Comment by Mike Stay | May 26, 2009 | Reply 5. No, at the end of paragraph 3 I’m doing the antitone version, not the monotone one. It’s like working with contravariant functors. As a particular case, consider the motivating case: the closure of a subgroup of the Galois group contains the original subgroup, and the closure of a subfield of an extension field contains the original subfield. Both directions are increasing, because of the antitonicity. Comment by | May 26, 2009 | Reply
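To make the unit/counit and idempotence statements concrete, here is a small finite check of a toy monotone Galois connection (my own example, not from the post): on the poset of integers with the usual order, doubling is left adjoint to halving-with-floor.

```python
# F(a) = 2a is left adjoint to G(b) = floor(b/2):  F(a) <= b  iff  a <= G(b).
F = lambda a: 2 * a
G = lambda b: b // 2   # floor division

R = range(-20, 21)
assert all((F(a) <= b) == (a <= G(b)) for a in R for b in R)   # adjunction
assert all(a <= G(F(a)) for a in R)                            # unit
assert all(F(G(b)) <= b for b in R)                            # counit
assert all(G(F(G(b))) == G(b) for b in R)                      # G F G = G
assert all(F(G(F(a))) == F(a) for a in R)                      # F G F = F
assert all(G(F(G(F(a)))) == G(F(a)) for a in R)                # G o F is idempotent
print("all checks pass")
```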
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 34, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.913841962814331, "perplexity_flag": "head"}
http://mathoverflow.net/questions/82685/presheaves-on-a-complete-segal-space
## Presheaves on a complete Segal space Let C be an $(\infty,1)$-category, incarnated as a complete Segal space, hence in particular a bisimplicial set. Is there a model structure on the slice category of bisimplicial sets over C which presents the $(\infty,1)$-presheaf category of C? Ideally, such a model structure would be Quillen equivalent to the contravariant model structure over a quasicategory incarnation of C, and to the projective model structure for simplicial presheaves on a simplicial-category incarnation of C. - ## 1 Answer Yes. Let $W$ be a complete Segal space, thought of as a simplicial "space" $(W_q)$. The fibrant objects of your model category will be the fibrations $f:X\to W$ such that for each simplicial operator $\delta:[q]\to [p]$ with $\delta(q)=p$, the evident map from $X_p$ to the pullback of $$X_q \xrightarrow{f} W_q \xleftarrow{\delta} W_p$$ is a weak equivalence of spaces. (Edit: in fact, it suffices to require the evident map to the pullback to be a weak equivalence only for $\delta:[0]\to[p]$ with $\delta(0)=p$.) I worked out some of this years ago, but never finished it; somebody should do this (or perhaps someone has already?). Lurie has done pretty much exactly the same thing in the context of quasi-categories, in HTT. - Thanks! Two questions. (1) By "fibrations $f\colon X\to W$", do you mean Reedy fibrations? Or fibrations in the CSS model structure? (2) Is the "evident map" necessarily itself a fibration (say, if $f$ is a fibration according to your answer to (1))? – Mike Shulman Dec 6 2011 at 0:45 Also, (3) I presume that this model structure is constructed as a localization of some over-model-structure for simplicial spaces over W? And (4, I guess) does that imply that the weak equivalences between fibrant objects in this model structure are still just the levelwise equivalences? – Mike Shulman Dec 6 2011 at 0:47 (1) I mean Reedy fibration. It turns out that if W is a CSS, and f is a Reedy fibration with the above property, then X is also a CSS, and thus f (being a Reedy fibration between CSS's) is also a fibration in the CSS model category. (2) I believe it is a fibration if $\delta$ is injective, but not generally. (In the condition describing fibrant objects, it actually suffices to require a weak equivalence only in the case $\delta:[0]\to [p]$.) (3) Yup. (4) Yup. – Charles Rezk Dec 6 2011 at 16:22 Beautiful! That's all I could have hoped for. Someone should really work all of this out and write it up. – Mike Shulman Dec 7 2011 at 0:25
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 14, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9262704849243164, "perplexity_flag": "head"}
http://mathoverflow.net/questions/91928/ultrafilters-on-the-set-of-semialgebraic-subsets-of-r2
## Ultrafilters on the set of semialgebraic subsets of R^2 Let $R$ be a real closed field and $f: R \to R$ a map. Then let $\textrm{F}(f)$ be the set of semialgebraic subsets of $R^2$, which contain $(t,f(t))$ for all $0< t< \epsilon$ for some $\epsilon >0$. Clearly $\textrm{F}(f)$ is a filter on the set of semialgebraic subsets of $R^2$ and if $f$ is definable, then $\textrm{F}(f)$ is an ultrafilter. Another example would be the exponential function. Does anyone know for which functions $f$ the set $\textrm{F}(f)$ is also an ultrafilter? - ## 1 Answer Whenever $(R,f)$ is an o-minimal structure, $F(f)$ will be an ultrafilter. This includes your examples: $f$ definable and the exponential. Also, since the property defining $F(f)$ depends only locally on $f$, using the fact that $(R,f\upharpoonright _{[0,1]})$ is o-minimal for any analytic function $f$, one gets that $F(f)$ will be an ultrafilter for any analytic function $f$. I suspect some kind of converse should be true, which would give you a characterization (perhaps something along the lines "$F(f)$ is an ultrafilter iff $(R,f\upharpoonright _{(0,\epsilon)})$ is o-minimal for some $\epsilon>0$"), but I'm not so sure about that. -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 25, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9153425097465515, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/tagged/speed-of-light?page=5&sort=newest&pagesize=30
# Tagged Questions The speed of light is fundamental universal constant that marks the maximum speed at which information can propagate. Its value is $299792458\frac{\mathrm{m}}{\mathrm{s}}$. 4answers 582 views ### What if we could give photons some mass? I was reading an article and these paragraphs got me wondering... Before I list the replies, here is some background. The Higgs mechanism describes an invisible field that, it is argued, split one ... 1answer 253 views ### Speed of Light, Photons or WaveSpeed? The speed of light is almost 300 000 km/s. The photons have a speed along the wave, and the wave have a speed straight forwards. What is the speed of light? Is it the speed the photons have along ... 2answers 890 views ### why is mechanical waves faster in denser medium while EM waves slower? Why is it that mechanical waves/longitudinal waves/sound travel faster in a denser/stiffer medium as in steel compared to say air, while EM waves/trasverse waves/light travels slower in a (optically) ... 1answer 174 views ### $E=mc^2$ why is it $c^2$ and not just $c$? Why is constant for the conversion of mass to energy square of the ligths speed? is it bedside it's the fastest real matter? 2answers 149 views ### What are the implications of the speed of light broken? [duplicate] Possible Duplicate: What would be the effects on theoretical physics if neutrinos go faster than light? I don't know if it's been asked before, but I couldn't find a thread about it. I ... 2answers 357 views ### Do gravitational waves slow down as they pass through matter? I've heard that gravitational waves travel at the speed of light, and have some parallels to electromagnetic waves. EM waves slow down as they pass through matter (speed of light in glass is slower ... 2answers 210 views ### Does gravitational differences affect the distance light travels? (a thought experiment) My question is: How is light passing through different gravitational densities affected? The value of "c" is constant in a vacuum. I'm curious about if various time frames have any effect. This is ... 2answers 221 views ### What happens to body chemistry at the speed of light? Assume that I'm traveling at the speed of light in one direction. My brain is also traveling at the speed of light in that direction. Presumably there is at least one receptor site in my brain that is ... 3answers 889 views ### Does a photon have a rest frame? Quite a few of the questions given on this site mention a photon having a rest frame such as it having a zero mass in its rest frame. I find this contradictory since photons must travel at the seed of ... 2answers 363 views ### At what fraction of the speed of light have people traveled? I'm guessing that, this would be someone in a rocket or something... When they hit their top speed, at what fraction of $c$ are they traveling? 2answers 215 views ### Does the Special Theory of Relativity “form” the foundation of Modern Physics? My question is in reference to Geoff Brumfiel's Scientific American article "Particles Found to Travel Faster than Speed of Light", about which I have two questions. I have become engaged in ... 4answers 1k views ### Why is the speed-of-light “the upper limit” rather than the speed of “particle type X”? Basically, I can't stop wondering why light (the photon) is so special, compared to all the other particles known (and unknown) to modern day physics. Could it be that there exists an upper limit on ... 
1answer 194 views ### The relation between the speed of light and the Big Bang Theory I would like to know how much of the Big Bang theory is dependent on the constancy of the speed of light. P.S.: It might be guessed that I am asking this because of the recent CERN news. Yes, of ... 1answer 266 views ### Do the particles that were found to break the speed of light really break Einstein's theory of relativity? [duplicate] Possible Duplicate: What would be the effects on theoretical physics if neutrinos go faster than light? Update: Loose cable caused faulty results Apparently, researchers at CERN have found ... 0answers 472 views ### Can neutrinos travel faster than the speed of light? [duplicate] Possible Duplicate: Superluminal neutrinos What would be the immediate effects if light does not go at the maximum speed possible? This is a hot topic right now, so I thought we should ... 3answers 91 views ### If neutrinos travel faster than light, how much lead time would we have over detecting supernovas? In light of the recent story that neutrinos travel faster than photons, I realize the news about this is sensationalistic and many tests still remain, but let's ASSUME neutrinos are eventually proven ... 0answers 150 views ### What would the impact be on the physics world if neutrinos DO travel faster than light? [duplicate] Possible Duplicates: What would be the immediate effects if light does not go at the maximum speed possible? Superluminal neutrinos I was reading this article about a group of scientist ... 2answers 502 views ### What do physicists mean when they say “speed of light”? Does it make sense to say, "The speed of light varies?" Some may say right off the bat yes, it changes as a wave passes through a different medium. However, I'd like to say no, because when I hear ... 3answers 457 views ### High speed and low speed photons Looking at the discovery of the neutron, and I came across this page: http://www-outreach.phy.cam.ac.uk/camphy/neutron/neutron3_1.htm The animation on the left, talks about low energy photons and ... 3answers 2k views ### Reflection At Speed of Light I have looked online to no avail. There is two competing answers and I am curious to know which one is right. Someone asked me this question. If you are traveling at the speed of light can you see ... 2answers 267 views ### How to measure faster than light electric energy? According to relativity,nothing can break light barrier.But a recent preprint shows energy transmission of commercial electric power (f=60Hz) is faster than light. (It is not the drift velocity of ... 1answer 363 views ### Spin up, spin down and superposition I'm just starting to study quantum mechanics. Please explain the error in this thinking: You set up decay of two $\pi$ mesons and get $2\mathrm{e}^-$ on Mars and $2\mathrm{e}^+$ on Earth. On Earth ... 3answers 520 views ### At what speed does our universe expand? Conceivably it expands with the speed of light. I do not know, but curious, if there is an answer. At what velocity, does our universe expand? 1answer 298 views ### Microsecond trading with neutrinos The Spread Networks corporation recently laid down 825 miles of fiberoptic cable between New York and Chicago, stretching across Pennsylvania, for the sole purpose of reducing the latency of ... 2answers 301 views ### How can I explain the scientific basis of the constant speed of light to a C-Decay proponent? This question asks why and how the speed of light is constant. 
I would like to ask a related, almost converse question: Given that the speed of light is constant, how could I explain to a ... 2answers 407 views ### Does gravity spread instantly? [duplicate] Possible Duplicate: The speed of gravity I am real noob in physics, so sorry if this question is really stupid. Today in a casual conversation I claimed that if the sun were to instantly ... 3answers 880 views ### What really cause light/photons to appear slower in media? I know that if we solve the maxwell equation, we will end up with the phase velocity of light is related to the permeablity and the permitivity of the material. But this is not what I'm interested in, ... 4answers 751 views ### Double light speed Let's say we have 2 participles facing each other and traveling at speed of light Let's say I'm sitting on #1 participle so in my point of view #2 participle's speed is c+c=2c, double light speed? ... 4answers 214 views ### Massive particles and speed of their propagation Can one show that in quantum field theory at least some example massive particles propagate with speed less than speed of light, while massless travel at speed of light? Well, motion is a different ... 2answers 497 views ### Tachyon and Photons Is there a particle called "tachyons" that can travel faster than light? If so, would Einstein's relativity be wrong? According to Einstein no particle can travel faster than light. 2answers 164 views ### Thought experiment that seems to involve something growing at twice the speed of light. Is anything wrong? Let foo be some unit of distance and bar be some unit of time which have been chosen so that the speed of light c = 1 foo/bar. Position several observers along a line each separated by one foo, and ... 1answer 183 views ### What happens when light moves perpendicular to a moving object? Imagine the folllowing situation: A coherent light source is attached to a car such that the emitted light beam path is "being crossed over" by the car i.e. the long parallel light beams are struck by ... 2answers 853 views ### Michelson rotating mirror experiment Could someone explain the calculation required to answer this question. It is from a text book and the answer is recorded as 585Hz but I cannot replicate the answer. In 1931 Michelson used a ... 2answers 410 views ### Why is travelling around the speed of light a problem? I don't fully understand what would happen if we could travel at the speed of light. But I saw somewhere here that it would mean events happen out of order. But why is this a problem. It is said that ... 3answers 548 views ### Can the speed of light become complex inside a metamaterial? The speed of light in a material is defined as $c = \frac{1}{\sqrt{\epsilon \mu}}$. There are metamaterials with negative permittivity $\epsilon < 0$ and permeability $\mu < 0$ at the same time. ... 2answers 648 views ### Can you use a laser to measure the speed of light with a rotating mirror? I have learned that the classical measurement of the speed of light with a rotating mirror does not work with a laser (as opposed to, say, a mercury-vapor lamp). Can you tell me if and how coherency ... 3answers 419 views ### Max rocket speed in interstellar space? Interstellar space propulsion...if a spaceship were to get beyond our Sun's gravitational pull and since there is no atmosphere/wind/friction in space...does that mean, if an engine was constantly ... 
5answers 1k views ### Maximum speed of a rocket with a potential of relativistic speeds Ultimately, the factor limiting the maximum speed of a rocket is: the amount of fuel it carries the speed of ejection of the gases the mass of the rocket the length of the rocket ... 3answers 3k views ### Why is the speed of light defined as 299792458 m/s? Why is the speed of light defined as 299792458 m/s? Why did they choose that number and no other number? Or phrased differently: Why is a metre 1/299792458 of the distance light travels in a sec in ... 3answers 243 views ### Seeing light travelling at the speed of light Imagine there are two cars travelling "straight" at the speed of light*, $A$, and $B$. $B$ is following directly behind $A$. Suddenly, $B$ switches on its headlights. Will $A$ be able to see this ... 3answers 698 views ### Alcubierre Drive - Clarification on relativistic effects On the Wikipedia article on the Alcubierre drive, it says: Since the ship is not moving within this bubble, but carried along as the region itself moves, conventional relativistic effects such as ... 5answers 2k views ### Why can't light escape from a black hole? Photons do not have mass (that's why they can move at speed of "light"). So, my question is how the gravity of black hole can stop light from escaping? 1answer 986 views ### Can you see yourself in a mirror when you are riding on top of a light stream? What happens if you would ride on top of a light stream and you would look into a mirror that is in front of you, could you actually see your own face? I am asking this because I heard that nothing ... 4answers 1k views ### Travelling faster than the speed of light Let's say I fire a bus through space at the speed of light. If I'm inside the bus (sitting on the back seat) and I run up the aisle of the bus will I in fact be traveling faster than the speed of ... 3answers 857 views ### “Speed” of Gravity and Speed of Light Some threads here touching speed of gravity made me think about that. This lead to some questions. The speed of gravity was not measured until today (at least there are no undebated papers to that ... 6answers 1k views ### What does it mean to say that mass “approaches infinity”? What does it mean to say that mass "approaches infinity"? I have read that mass of a body increases with the speed and when the body reaches the speed of light, the mass becomes infinity. What ... 4answers 676 views ### In superluminal phase velocities, what is it that is traveling faster than light? I understand that information cannot be transmitted at a velocity greater than speed of light. I think of this in terms of the radio broadcast: the station sends out carrier frequencies $\omega_c$ but ... 3answers 212 views ### The speed of light also applies for 'distance' materials? The question is hard for me to put into one sentence so please try to completely read the example: If I had a stick that is 1000KM long and I would push it forward with 1 millimeter in lets say... ... 5answers 680 views ### How long would it take for electricity to flow from one terminal to other, via a 1 LY long wire? Basically, how long does it take for electricity to determine there is a closed circuit and how does it know that the circuit exists? I'm curious to know how it knows there is a closed circuit at any ... 1answer 376 views ### What's a better phrase than “speed of light” for the universal spacetime speed constant? 
[closed] The phrase "speed of light" is commonly used for the constant c =3E8 m/s, a feature that's "hardcoded" into the structure of spacetime. All massless waves and particles move at this speed, and it's a ...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 18, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9479044675827026, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/64362/linear-programming-with-a-matrix-of-decision-variables
Linear programming with a matrix of decision variables [closed] Is it possible to use linear programming when the decision variables are not simply a vector? For example, say I have $n$ countries and $m$ different potential charities I can donate to. The benefits per dollar spent are different for each charity-country pair, and are stored in the $m\times n$ matrix $b$ (if the charity $i$ does not exist in country $j$ then $b_{ij} = 0$). Assume I have a budget of $B$, $x_{ij}$ denotes how much money I donate to charity $i$ in country $j$ and I want to maximize the benefit. I can formulate this problem as: $\max_{x} \sum_i \sum_j b_{ij} x_{ij}$ subject to $\sum_i\sum_j x_{ij} \leq B$ [Assume obviously that I will add other constraints along the way (certain amounts donated in each country, maximum donated to each different charity etc), I am just setting this up trivially to begin with to determine if it is doable.] So rather than just setting up the simplex tableau with the vector $(x_1, x_2, ..., x_l)$ as the bottom row, I actually have an $m \times n$ matrix of decision variables. Is this solvable using linear programming? I am not interested in using a solver, I want to understand the basics of how a problem like this would be solved. - 1 en.wikipedia.org/wiki/… – Federico Poloni May 9 2011 at 8:29 Paul, I'm afraid this question is outside the scope of MathOverflow. Federico Poloni has answered your question, but if you want a more elaborate explanation, I suggest you ask at math.stackexchange.com – S. Carnahan♦ May 10 2011 at 21:18
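One standard way to see that this fits the usual framework: a matrix of decision variables is still just a finite collection of variables, so flatten $x_{ij}$ into a vector of length $mn$ (one simplex column per pair $(i,j)$) and proceed as with any LP. The sketch below is my own illustration with made-up data; it calls scipy only to show the flattened problem end to end, but the same flattening is what you would do by hand in a tableau.

```python
# Flatten the m-by-n matrix of decision variables x_ij into one vector of length m*n.
import numpy as np
from scipy.optimize import linprog

m, n = 3, 4                          # charities x countries
rng = np.random.default_rng(0)
b = rng.uniform(0, 2, size=(m, n))   # made-up benefit per dollar for each pair
B = 100.0                            # total budget

c = -b.flatten()                     # linprog minimizes, so negate to maximize
A_ub = np.ones((1, m * n))           # sum_ij x_ij <= B
b_ub = [B]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None))
x = res.x.reshape(m, n)              # back to matrix form
print(x)
print("total benefit:", (b * x).sum())
```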
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 15, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9169601798057556, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/48914/independent-families-of-subsets-of-mathbb-n-of-size-continuum/77276
## Independent families of subsets of $\mathbb N$ of size continuum A family $\mathcal F$ of subsets of $\mathbb N$ is independent if for any two finite, disjoint subsets $\mathcal A,\mathcal B\subseteq\mathcal F$ the set $$\bigcap_{A\in\mathcal A}A\cap\bigcap_{B\in\mathcal B}(\mathbb N\setminus B)$$ is infinite. It is well-known that there is an independent family on $\mathbb N$ of size $2^{\aleph_0}$. This for example implies that there are $2^{2^{\aleph_0}}$ ultrafilters on $\mathbb N$. My favourite proof of the existence of a large independent family uses the Hewitt-Marczewski-Pondiczery Theorem that says that the space $2^{\mathbb R}$ (with the product topology) is separable: Pick a countable dense subset $D\subseteq 2^{\mathbb R}$ and consider, for each $r\in\mathbb R$, the set $A_r$ of all functions $f\in D$ (from $\mathbb R$ to $2$) with $f(r)=1$. The $A_r$ form an independent family of the required size on $D$. There is a purely combinatorial proof as an exercise in Kunen's set theory book, but that proof is rather by computation than by visualization. There is a large number of nice proofs of the fact that there is a large almost disjoint family on $\mathbb N$. So here is my question: Does anyone know a nice proof of the existence of a large independent family (large=of size continuum) on $\mathbb N$? (Other than the two proofs mentioned above or a combinatorialized version of the H.M.P.-argument.) - Observation: If we force to add $2^{\omega}$ many Cohen Reals, then this collection will form an independent family of size $2^{\omega}$ in the extension. Just thought I'd point out this fact to anyone perusing the post. – Jason Dec 12 2010 at 8:02 @Jason: Actually, adding a single Cohen real already adds $2^\omega$ many Cohen reals which give rise to an independent family of size continuum. – Stefan Geschke Dec 12 2010 at 18:34 How do you prove the Hewitt-Marczewski-Pondiczery theorem without using the existence of large independent sets in the first place? Otherwise the argument is circular. – Emil Jeřábek Oct 6 2011 at 2:11 @Emil Jerabek: The set of functions from $\mathbb R$ to $2$ that are 1 on a finite union of disjoint open intervals with rational endpoints and otherwise zero is a countable dense subset of $2^{\mathbb R}$. How do you prove the HMP-theorem using independent families? – Stefan Geschke Oct 6 2011 at 7:19 To prove HMP using an independent family $F$ of size continuum, identify `$2^{\mathbb R}$` with `$2^F$` and let, for each $n\in\mathbb N$, `$f_n$` be the element of `$2^F$` defined by `$f_n(X)=1$` iff $n\in X$. The set of these countably many `$f_n$`'s is dense, because when you untangle the definition of what it means for it to be dense, it boils down to the independence of $F$. – Andreas Blass Oct 7 2011 at 14:15 ## 6 Answers I don't know whether this is the same as one of the proofs you already know, but the following seems to work. Take a set of irrationals of continuum size that is linearly independent over the rationals. (I don't know whether this can be done explicitly, but it's easy to show that it exists.) Now for each real r in this set, let $A_r$ be the set of all integers $n$ such that the integer part of $rn$ is even. - This is a nice construction! – Andres Caicedo Dec 11 2010 at 19:00 2 It can be done explicitly, as asked in this question: (unfortunately I don't know how to put links in comments).
[1]: mathoverflow.net/questions/23202/… – HenrikRüping Dec 11 2010 at 19:31 3 Amusingly, it seems that I gave an answer to that question. That's senility for you. – gowers Dec 11 2010 at 20:52 The original proof by Fichtenholz and Kantorovich is, in my opinion, a nice one. It suffices to find continuum many independent subsets of some countably infinite set $C$; let's take $C$ to be the collection of finite sets of rational numbers. Associate to each real number $r$, the subset `$A_r$` of $C$ consisting of those finite sets $s\in C$ that have an odd number of members `$<r$`. It is easy to check that the intersection of any finitely many sets of the form $A_r$ and the complements of any finitely many others is infinite; that is `$\{A_r:r\in\mathbb R\}$` is an independent family. (I don't have a copy of Kunen's book handy, but I assume the proof there, the one you don't want here, is Hausdorff's proof, which generalizes to give `$2^\kappa$` independent subsets of $\kappa$ for any infinite cardinal $\kappa$.) - Yes, Andreas, the exercise in Kunen's book is to construct an independent family of size $2^\kappa$ of subsets of $\kappa$. I agree that the proof that you sketch is nice. – Stefan Geschke Dec 10 2010 at 17:30 Let $2^{<\omega}$ be the binary tree and assign to each branch $x$ the family $F_x$ of finite sets that intersect it. If $x_1$, $x_2$, $\ldots$ $x_k$ is a finite set of (distinct) branches then there is a level, $n$ say, where they all differ. On each level above $n$ you can find finite sets that satisfy any Boolean combination of the $F_{x_i}$ you like. - An obvious remark to make here is that all you need as your starting point is continuum-many subsets of a countable set with all intersections finite: thus, the binary tree is one of many constructions that will do the job. It's nice to see such a direct link between the two problems. – gowers Dec 14 2010 at 14:58 KP, this is my favorite proof among several very nice arguments so far. Thanks for the answer. – Stefan Geschke Dec 14 2010 at 22:36 An easy construction of an almost disjoint family of size continuum is given by taking, for each real number with, say, decimal expansions 0.123411... the set {1, 12, 123, 1234, ... }. – mathahada Dec 19 2010 at 15:33 A variant of Hewitt-Marczewski-Pondiczery: Let the base set $B$ be the set of all polynomials with integer (or rational) coefficients. For each real $r$, let $A_r$ be the set of $p(x)\in B$ with $p(r)>0$. This is an independent family of size continuum. (I think I learned this from Menachem Kojman.) -
If we take finitely many infinite 0-1 sequences $f_1,\dots,f_n,g_1,\dots,g_m$, then for $k$ large enough $f_1|k,\dots,f_n|k,g_1|k,\dots,g_m|k$ all differ and so $Y_{f_1}\cap\cdots\cap Y_{f_n}-(Y_{g_1}\cup\cdots\cup Y_{g_m})$ contains an element in $S_k$. - I realize that the question is rather old, but here is another proof: Let $\mathcal{B}$ be a countable base for the topology of $\mathbb{R}$ closed under finite unions. For each $r \in \mathbb{R}$, let $A_r = \{ B \in \mathcal{B} : r \in B \}$. The family $\{A_r : r \in \mathbb{R} \}$ is an independent family on $\mathcal{B}$. - That's very nice and short. Thank you. – Stefan Geschke Oct 16 2011 at 9:51
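None of these proofs needs a computer, but a finite experiment can make the first answer's construction tangible. The snippet below is only an illustration of my own (a finite check proves nothing about independence, and floating-point square roots are used): it counts how often each Boolean combination of the four sets $A_r=\{n:\lfloor rn\rfloor \text{ is even}\}$, $r\in\{\sqrt2,\sqrt3,\sqrt5,\sqrt7\}$, occurs among $n\le 10000$.

```python
# Finite illustration of A_r = { n : floor(r*n) is even } for a few square roots.
from math import floor
from itertools import product

rs = [2 ** 0.5, 3 ** 0.5, 5 ** 0.5, 7 ** 0.5]
N = 10000

def in_A(r, n):
    return floor(r * n) % 2 == 0

for signs in product([True, False], repeat=len(rs)):
    count = sum(1 for n in range(1, N + 1)
                if all(in_A(r, n) == s for r, s in zip(rs, signs)))
    print(signs, count)   # every combination occurs, roughly N / 2**4 times
```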
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 81, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9421071410179138, "perplexity_flag": "head"}
http://scicomp.stackexchange.com/questions/3314/whats-a-good-numerical-optimization-software-package-for-solving-the-2-d-optima
# What's a good numerical/optimization software package for solving the 2-D optimal stopping problem? I am looking for a numerical software package to help me solve the 2-dimensional "free boundary" PDEs that arise in optimal stopping problems. In one dimension a standard optimal stopping problem in Economics is when to exercise an expansion option. In the end, solving this problem comes down to solving the following ODE $$\mu x u' + \frac{1}{2}\sigma^2 x^2 u'' + x k_0= r u$$ such that \begin{eqnarray} u(0) & =& 0 \\ u(\overline{x}) & =& \frac{\overline{x}k_1}{r-\mu} - p\\ u'(\overline{x}) & = & \frac{k_1}{r-\mu}. \end{eqnarray} A solution to this problem is a function $u$ together with the location of the boundary $\overline{x}$ - this is the so called "free boundary". This problem has a closed form solution. In two dimensions, the problem no longer has a closed form solution. The problem I want to solve is $$x k_0 + \mu x \frac{\partial u}{\partial x}+ rw\frac{\partial u}{\partial w} + \frac{1}{2} x^2 \left(\sigma_x^2\frac{\partial^2 u}{\partial x^2} + \sigma_w^2 \frac{\partial^2 u}{\partial w^2} + 2\sigma_x\sigma_w \frac{\partial^2 u}{\partial x\partial w}\right) = r u$$ such that \begin{eqnarray} u(0,w) & = & 0\\ u(\overline{x}(w),w) & = & v(\overline{x}(w),w) - p \\ \frac{\partial}{\partial x}u(\overline{x}(w),w) & = & \frac{\partial}{\partial x}v(\overline{x}(w),w) \\ u(x,0) &= & \ell x \\ \frac{\partial}{\partial w}u(x,\overline{w}) & = & -1 \\ \frac{\partial^2}{\partial w^2}u(x,\overline{w}) & = & 0 \\ \end{eqnarray} where $v$ is some known function of $x$ and $w$. The solution of this problem will then be a function $u$ together with functions $\overline{x}(w)$ and $\overline{w}(x)$ that give the locations of the boundaries. Do you know of any software packages that can solve these free boundary problems? If so, can someone point me towards a reference or tutorial that might help? - Hi Prof. Glaser, welcome to scicomp, and nice meeting you on the bus to the airport! I don't think you want OpenFOAM for this, but I'm not actually sure what the appropriate solver is. I'm going to reframe your question a little so that it gets more attention, feel free to revert or fix any edits I make. – Aron Ahmadia Sep 18 '12 at 18:50 Hi Aron, it was good to meet you too. Thanks so much for your help with the question! – Barney Hartman-Glaser Sep 19 '12 at 1:11
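While the 2-D problem really does call for free-boundary software (for example a finite-difference or finite-element discretization of the associated variational inequality), the 1-D benchmark quoted in the question has a closed form, which is handy for validating whatever numerical code one ends up using. The sketch below (mine, not from the thread) computes it; the parameter values are made up and only need $r>\mu$, $k_1>k_0$ and $p>0$.

```python
# Closed-form solution of the 1-D expansion-option ODE quoted in the question.
from math import sqrt

r, mu, sigma = 0.05, 0.01, 0.2     # made-up parameters with r > mu
k0, k1, p = 1.0, 2.0, 10.0

# positive root beta > 1 of 0.5*sigma^2*beta*(beta-1) + mu*beta - r = 0
a_ = 0.5 * sigma ** 2
b_ = mu - 0.5 * sigma ** 2
beta = (-b_ + sqrt(b_ ** 2 + 4 * a_ * r)) / (2 * a_)

# u(x) = A*x**beta + k0*x/(r - mu); value matching and smooth pasting at xbar
xbar = beta / (beta - 1) * p * (r - mu) / (k1 - k0)
A = ((k1 - k0) / (r - mu) * xbar - p) / xbar ** beta

u = lambda x: A * x ** beta + k0 * x / (r - mu)
print("beta =", beta, " xbar =", xbar)
print("value matching :", u(xbar), "=", k1 * xbar / (r - mu) - p)
print("smooth pasting :", A * beta * xbar ** (beta - 1) + k0 / (r - mu), "=", k1 / (r - mu))
```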
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 8, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8871250152587891, "perplexity_flag": "head"}
http://mathhelpforum.com/discrete-math/19065-functions-properties.html
Thread: 1. Functions and properties Hi guys, I am so frustrated right now. I suck at discrete math. I don't understand it at all, and the professor talks too fast. I'm on the verge of tears. Plus there's no tutoring for this class, and I really need help. Right now I have two problems. 1. Suppose g: A--->B and f: B---> C are both one to one, prove f (o) g (what does the little o mean?). I have no idea how to start this. 2. If E is an equivalence relation on A, then prove or give a counterexample: E (o) E is an equivalence relation on A. I'm not even sure what this is asking. I don't really get equivalence relations or sets and relations really; all I know is E is transitive, reflexive and symmetric. 2. Originally Posted by Wardub 1. Suppose g: A--->B and f: B---> C are both one to one, prove f (o) g (what does the little o mean?). I have no idea how to start this. This notation suggests the successive operation of the two functions. Map this through 'g' to the range of 'g'. This becomes the Domain of some subset of 'f'. Map the new value through 'f'. 3. But there is no value of f or g, what do I do when there's nothing to begin with? 4. "prove f (o) g" Is this REALLY the problem statement? Doesn't seem like much to go on. 5. I assume that #1 is, if $f:A \mapsto B\quad \& \quad g:B \mapsto C$ are both one-to-one then so is $g \circ f$. The proof is quite easy: $\begin{array}{rcl} g \circ f(p) & = & g \circ f(q) \\ f(p) & = & f(q),\; \mbox{g is 1-1} \\ p & = & q,\quad \mbox{f is 1-1} \end{array}$ For #2. If $R\quad \& \quad S$ are relations on a set that are reflexive, symmetric and transitive then $R \circ S$ is reflexive, symmetric and transitive.
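For question 1 (and for question 2, where the key fact is that E (o) E = E for any equivalence relation E: transitivity gives one inclusion, reflexivity the other), a brute-force check on small finite sets can build intuition before writing the proof. This snippet is my own illustration, not from the thread.

```python
# Brute-force checks on small finite sets (an illustration, not a proof).
from itertools import product

A, B, C = range(3), range(4), range(5)
g = {0: 1, 1: 3, 2: 0}          # an injective g : A -> B
f = {0: 2, 1: 4, 2: 1, 3: 0}    # an injective f : B -> C

fog = {a: f[g[a]] for a in A}   # the composite f o g
assert len(set(fog.values())) == len(A)      # f o g is one to one

# E o E = E for an equivalence relation E on {0,1,2,3} with classes {0,1}, {2,3}
classes = [{0, 1}, {2, 3}]
E = {(x, y) for cls in classes for x, y in product(cls, repeat=2)}
EoE = {(x, z) for (x, y1) in E for (y2, z) in E if y1 == y2}
assert EoE == E
print("both checks pass")
```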
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 6, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9244145154953003, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/248358/why-is-korns-inequality-not-trivial
# Why is Korn's inequality not trivial? Let $\Omega \subset \subset \mathbb{R}^N$ have smooth boundary, $N \geqslant 2$ and $$\mathcal{E} ( v) := \int_{\Omega} \sum_{i, j} \varepsilon_{i j} ( v) \varepsilon_{i j} ( v) \mathrm{d} x := \int_{\Omega} \sum_{i, j} \left( \frac{v_{i, j} + v_{j, i}}{2} \right)^2 \mathrm{d} x$$ be defined in $H^1 ( \Omega, \mathbb{R}^N)$. Having just read for the umpteenth time that the reason that Korn's inequality $$\mathcal{E} ( v) + \| v \|_{L^2}^2 \geqslant c \| v \|^2_{H^1}$$ is not trivial is that the left hand side "involves only certain combinations of derivatives", I wonder whether this is actually true and if yes, whether I understand things correctly, because to me it's a matter of some products (the $v_{i,j}v_{j,i}$ for $i \neq j$ ) possibly being negative. Did I get lost among the indices? Edit: in the context of linear elasticity it is often stated that this inequality is not a triviality (in the sense that it's not tautological), because of the different combinations of partial derivatives appearing at each side. Some authors affirm that only some (six) different partial derivatives appear at the left hand side (see [1], [2], [3]). However I see all partial derivatives at both sides, but combined differently (see my answer). I understand the actual difficulties in proving it and the implications for coercivity proofs for instance. It's this assertion that I find confusing. [1] G. Duvaut and J.-L. Lions, Inequalities in mechanics and physics, vol. 219. Springer-Verlag, 1976. [2] P. G. Ciarlet, An introduction to differential geometry with applications to elasticity. Springer, 2005, . [3] F. Demengel and G. Demengel, Functional spaces for the theory of elliptic partial differential equations, vol. 8. Springer, 2012. - ## 2 Answers Yes, it's "only" a matter of possible cancellation in the sum $v_{i,j}+v_{j,i}$. Like children, analysts can get excited about tiny little things of this kind. But seriously, this is an amazing result. For example, if you want to bound the $H^1$ norm of a scalar function $u$, you have to integrate the square of every single partial derivative: $\int \sum_{i=1}^n u_i^{2}$, or use another positive definite quadratic form of the derivatives. No semidefinite form will do. If $Q$ is positive semidefinite and $Q(\xi,\xi)=0$ for some vector $\xi\ne 0$, then there is a function $u$ such that $\nabla u$ is always parallel to $\xi$, and therefore $\int Q(\nabla u,\nabla u)=0$ while $\|u\|_{H^1}$ can be huge (and $\|u\|_{L^2}$ will not control $\|u\|_{H^1}$ either). In Korn's inequality we integrate the quadratic form $K(\xi,\xi)=\sum (\xi_{ij}+\xi_{ji})^2$. It is semidefinite with a huge kernel: there is an $n(n-1)/2$ dimensional subspace along which $K$ is zero (skew-symmetric matrices, to be precise). Recalling that in the scalar case even one-dimensional kernel killed the estimate, how can we expect that $\int K(\nabla v,\nabla v)$ will control $\|v\|_{H^1}$? Yet it does. - I didn't question the importance of the inequality, but whether the reason that it's non trivial truly is that the left hand side "involves only certain combinations of derivatives". You do concur that this is not the reason, but then why would Duvaut & Lions state it in their book "Inequalities in mechanics and physics"? They aren't exactly a couple of random guys who happened to write on some subject unknown them... 
– Miguel Dec 21 '12 at 11:24 I just found a quote by Ciarlet (not exactly some random guy either...): "This inequality is truly remarkable, since only six different combinations of first-order partial derivatives, viz., $\frac{1}{2} ( \partial_i v_j + \partial_j v_i)$, occur in its right-hand side, 2 while all nine partial derivatives $\partial_i v_j$ occur in its left-hand side!" (P. G. Ciarlet, An introduction to differential geometry with applications to elasticity. Springer, 2005, p.138) – Miguel Dec 21 '12 at 11:35 As I see it, we have: \begin{eqnarray*} \mathcal{E} ( v) + \| v \|_{L^2} & = & \int_{\Omega} \frac{1}{4} \sum_{i, j} ( v_{i, j} + v_{j, i})^2 \mathrm{d} x + \| v \|_{L^2}\\ & = & \int_{\Omega} \frac{1}{2} \sum_{i, j} ( v_{i, j}^2 + v_{i, j} v_{j, i}) \mathrm{d} x + \| v \|_{L^2}\\ & = & \frac{1}{2} \left( \| \nabla v \|^2_{L^2} + \underset{\geqslant 0}{\underbrace{\sum_{i = j} \| v_{i, j} \|^2_{L^2}}} + \sum_{i \neq j} \int_{\Omega} v_{i, j} v_{j, i} \mathrm{d} x \right) + \| v \|_{L^2}\\ & \geqslant & \frac{1}{2} \| v \|_{H^1}^2 + \frac{1}{2} \sum_{i \neq j} \int_{\Omega} v_{i, j} v_{j, i} \mathrm{d} x. \end{eqnarray*} And the problem is that that last term could be negative. Reasoning similarly one also concludes that the opposite inequality is easy using brute-force-Hölder: \begin{eqnarray*} \mathcal{E} ( v) + \| v \|_{L^2}^2 & \leqslant & \| v \|^2_{H^1} + \frac{1}{2} \sum_{i , j} \int_{\Omega} v_{i, j} v_{j, i} \mathrm{d} x\\ & \leqslant & \| v \|^2_{H^1} + \frac{N^2}{2} \| v \|_{H^1}^2\\ & \leqslant & N^2 \| v \|_{H^1}^2 . \end{eqnarray*} Any comments/corrections are more than welcome... - This could have been a part of question, perhaps separated with <hr> or such thing. – user53153 Dec 19 '12 at 7:49 I thought SE encouraged answering your own questions, that's why I did it. But it might have been a bad idea if that was the reason why it didn't get any attention... – Miguel Dec 21 '12 at 10:53 You did nothing wrong. But if it was me asking "why is Korn's inequality nontrivial" I would not be satisfies with the answer "if we expand it and move all terms to one side, some of them might be negative". – user53153 Dec 21 '12 at 19:28 I see your point. I should've phrased the question differently to stress that it's not its importance that I question, but the reasons alleged for the difficulty in proving it. This remains my question... – Miguel Dec 21 '12 at 22:53
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 26, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8957708477973938, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/tagged/acceleration+free-fall
# Tagged Questions 1answer 189 views ### Gravity and free fall In Wikipedia it's stated that "[..] gravity, is the natural phenomenon by which physical bodies appear to attract each other with a force proportional to their masses". Then I found many examples ... 3answers 233 views ### when an object moves downward, is its height negative? the question is: A ball is thrown directly downward with an initial speed of 8.00m/s from a height of 30.0m. After what time interval does it strike the ground. so i went through the problem and ... 4answers 149 views ### Why is the time taken for something to fall proprtional to acceleration due to gravity? This is a continuation of this question. I saw in a piece of writing (that I no longer seem to be able to aquire) on dimensional analysis that: $$t \propto h^\frac{1}{2} g^\frac{-1}{2}.$$ How can ... 4answers 4k views ### Free falling of object with no air resistance Why does an object with smaller mass hits the ground at same time compared to object with greater mass? I understand the acceleration due to gravity of earth will be same but won't the object with ... 9answers 7k views ### Don't heavier objects actually fall faster because they exert their own gravity? The common understanding is that, setting air resistance aside, all objects dropped to Earth fall at the same rate. This is often demonstrated through the thought experiment of cutting a large object ... 7answers 2k views ### Would it help if you jump inside a free falling elevator? Imagine you're trapped inside a free falling elevator. Would you decrease your impact impulse by jumping during the fall? When?
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9436497092247009, "perplexity_flag": "middle"}
http://physics.stackexchange.com/tags/density/hot
# Tag Info ## Hot answers tagged density 14 ### Why does a black hole have a finite mass? If something is infinitely dense, must it not also be infinitely massive? Nope. The singularity is a point where volume goes to zero, not where mass goes to infinity. It is a point with zero volume, but which still holds mass, due to the extreme stretching of space by gravity. The density is $\frac{mass}{volume}$, so we say that in the limit ... 13 ### How can super massive black holes have a lower density than water? Well, it can't (float), since a Black Hole is not a solid object that has any kind of surface. When someone says that a super massive black hole has less density than water, one probably means that since the density goes like $\frac{M}{R^3}$ where M is the mass and R is the typical size of the object, then for a black hole the typical size is the ... 9 ### Do glass panes become thicker at the bottom over time? The observation that old windows are sometimes found to be thicker at the bottom than at the top is often offered as supporting evidence for the view that glass flows over a timescale of centuries. The assumption being that the glass was once uniform, but has flowed to its new shape, which is a property of liquid. However, this assumption is incorrect; ... 8 ### Before a once-warm lake starts to freeze, must its temperature be 4°C throughout at some point? You have hit on the major explanation of the unusual thermal stability of surface-frozen lakes. The deep earth is temperature stable, since the surface seasonal fluctuations can't penetrate the heat by diffusion more than some meters into the deep ground. So the deep ground is at a temperature which is stable all year. Advection only raises heat to the ... 6 ### Are coffee's properties different enough from water's to cause increased spillage while walking? The article's preprint Mayer H. C., Krechetnikov R. "Walking with coffee: why does it spill?," Phys. Rev. E 85, 046117 (2012). is available from the UCSB site. From a glance of the article the phenomenon is not specific only to coffee. The authors make use of the next formula: The natural frequencies of oscillations of a frictionless, ... 6 ### Is there a compound denser than the densest element? Here is a table I made for you listing the elements with a density higher than 10 g/cm$^3$ and their approximate price per kg: I couldn't find any prices for Einsteinium or Actinium and some of the other prices might come from poor sources, but take it as a rough guide. Now you only have to figure out how much you need and your budgetetary constraints, ... 5 ### What is exactly the density of a black hole and how can it be calculated? Black holes are really hard to get a density. Basically, they are so dense that there is no known mechanism for providing sufficient outward force to counterbalance the inward pull of gravity, so they will collapse into an infinitesimally small size. Of course, that doesn't seem likely, it seems likely there is something that will keep the volume from being ... 5 ### Before a once-warm lake starts to freeze, must its temperature be 4°C throughout at some point? The only mechanically static situation is that at bottom of water column temperature is $T_\text{bottom} \le 4^\circ \text{C}$ and at top of the water column temperature is $0^\circ\text{C} \le T_\text{top} \le T_\text{bottom}$, with continuous drop between them. Of course, there will still be heat transfer due to thermal conductivity of water and ice. You ... 
5 ### speed of sound relative to density of medium through which sound travels Have a look at http://en.wikipedia.org/wiki/Speed_of_sound#Basic_formula for info on how the speed of sound depends on the medium it's passing through. Generally the important factors are the stiffness of the medium and it's density. To get sound to travel faster you need a stiffer lighter medium. For ordinary matter you'll never get speeds at anything like ... 5 ### Delta Dirac Charge Density question The nature (and glory) of the dirac delta function is that the volume integral $$\int_{\Delta V} dV' \delta ( \boldsymbol{r-r'} ) = \left\{ \begin{array}{cc} 1 & \text{if } \Delta V \text{ contains } \boldsymbol{r}\\ 0 & \text{if } \Delta V \text{ does not contain } \boldsymbol{r} \end{array} \right.$$ Using this function, you can write the ... 5 ### Suppose a hollow metal sphere filled with helium is dropped in a body of water Simply if the average density $\rho_\text{avg}$ of the sphere + helium (or your horse, for that matter) is less than the density of water $\rho_w$. This is because the weight is \begin{align} mg = \rho_\text{avg} V_\text{object} g \end{align} while the buoyancy force is \begin{align} F = \rho_w V_\text{displaced} g, \end{align} where $V_\text{object}$ is ... 5 ### Is it possible for an object to stop sinking after a while? I see this is a follow-up post to Suppose a hollow metal sphere filled with helium is dropped in a body of water Well, the situation you are describing is possible if the object in question can change its average density while in the water. It will stop sinking when $\rho_\text{average} = \rho_w$. In fact, there's a vessel that uses this exact principle: ... 4 ### How can I determine density of a gas only given temperature? The key's is the Bernoulli's equation for the compressible flow: $$\frac{v^2}{2} + \frac{p}{\rho} + u = \text{const}$$ $u$ is internal energy per unit mass, or using enthalpy $h$ per unit mass: $$\frac{v^2}{2} + h = \text{const}$$ The other equation to find $v$ you'll get from the definition of the volumetric flow. You have two equations to solve the ... 3 ### How can black holes be so dense? It's actually more extreme than you think. The short story is this: Associated with any amount of matter, there is an associated radius know as the Schwarzschild radius. There is theorem in General Relativity that essentially states that if ever all of the matter is contained within the associated Schwarzschild radius, that matter must collapse to ... 3 ### 'Density' of a proton A proton is a bound state of three quarks. The quarks themselves are (as far as we know) pointlike, but because you have the three of them bound together the proton has a finite size. It doesn't have a sharp edge any more than an atom has a sharp edge, but an edge is conventionally defined at a radius of 0.8768 femtometres. Protons are spherical in the same ... 3 ### Computing a density of states of Hamiltonian $H=xp$ This question (v1) is discussed near eq. (8) in Ref. 1. The simplest regularization is to truncate the variables $x\geq\ell_x$ and $p\geq\ell_p$ at cut-offs $\ell_x$ and $\ell_p$, respectively, in such a way that the product $\ell_x \ell_p = h$ is Planck's constant. In an $(x,p)$ diagram, the truncated area under the hyperbola $p=\frac{E}{x}$ reads in ... 3 ### How can super massive black holes have a lower density than water? 
The black hole would float in water, if you could make a large enough pool to submerge it, and with enough replenishment to replace the water that the black hole will sucks up. The black hole will remove water from its surroundings, but the water below will come into the horizon at higher pressure than the water above, so the velocity inward will not be ... 3 ### What is the definition of density as a function? As far as I know, there are a lot of kinds of density - you mentioned volume density, but there is linear density (amount of mass for unit of length), for example. If your string is very-very thin, there is no sense in definition that density = mass / volume, cause very thin string doesn't have volume and you should use linear density = mass / length or ... 3 ### Planetary Gravity and its effects 1/3) As Newton pointed out way back in the Principia, the gravitational attraction due to a spherically symmetric mass distribution is, assuming you are outside of it, the same as if all the mass were at a point at the center. Thus the gravitational acceleration at the surface of a sphere is determined solely by the total mass $M$ and the radius $R$. What is ... 3 ### Would Oscars made of pure gold bend? [duplicate] Solid metals are crystals, not liquids. The way any crystal plastically deforms is by motion of crystal dislocations. Every crystal obeys a stress-strain curve, where stresses up to a certain amount do not result in permanent deformation. Higher stresses do result in permanent (plastic) deformation because dislocations move. If a metal is "soft", all that ... 2 ### How can super massive black holes have a lower density than water? I think it is actually misleading to make the claim that is puzzling you. "Density" suggests that the mass is distributed more or less uniformly within the black hole, and this is non-sense. The black hole is mostly empty, and all the mass is concentrated within a tiny region (clasically a point) in the center of the black hole. If you ignore this and ... 2 ### How can super massive black holes have a lower density than water? The Schwarzschild radius scales with mass as $r~=~2GM/c^2$. What might be defined as a Schwarzschild volume would then be $V~=~4\pi r^3/3$ $=~(32/3)\pi(GM/c^2)^3$. So the density of matter defined by the horizon is $\rho~=~(3/32)(c^2/G)^3M^{-2}$. So density scales as the inverse square of the mass. A 10 billion solar mass black hole has a radius about ... 2 ### Is there anything special about the Sun's photosphere in terms of density? The "edge" of the sun that we see (the photosphere) arises not so much from any feature in its density profile, but from the properties of how light travels through the sun as the density drops. The photosphere is the point where the density drops enough that photons can begin to free stream away without interacting any more with the gas. Formally, this is ... 2 ### Does the Relative density of water change based on the state it is in Yes the density of water changes with temperature in a non-linear way (which is important if you want life on your planet). It has a maximum density at 4deg C and is unusual in that it expands (lower density) as a solid - see http://en.wikipedia.org/wiki/Water_(properties)#Density_of_water_and_ice As a gas it's density varies with temperature and pressure ... 2 ### How to find out the compress happen when some force act on water/oil/air? You must specify wether the temperature is constant or if there is some heat exchange. 
Then you just have to use the isothermal compressibility or adiabatic compressibilty factor. http://en.wikipedia.org/wiki/Compressibility 2 ### How to correctly solve density tolerance Assuming your pipette has a volume of 15ml, the error in measuring the volume of your sample is 0.02 in 15 i.e. 0.133%. The weight of 15cc of ethanol is 0.789 $\times$ 15 = 11.835g. The error in measuring the weight is 0.002 in 11.835 i.e. 0.025%. Hedge physicists like me would immediately note that the error in the weight is a lot lower than the volume, ... 2 ### Percentage of water that is void or empty space? I'm guessing that you're asking this because you've heard that atoms are mostly empty space. The trouble is that your question doesn't have an answer because how much empty space you think there is in an atom depends on how hard you're willing to press on the atom. As chance would have it, I've just answere a question that deals with exactly this question ... 2 ### How can black holes be so dense? You ask how atoms can be that tightly compressed. Atoms are made of electrons and quarks (the protons and neutrons are made of quarks) and as far as we know electrons and quarks are point like i.e. they have no size. So in principle they can be compressed to infinite density if you squeeze hard enough. At this point someone is going to point out that all ... 2 ### How wide does a wall of ice need to be to stay in place? Any cross section of your wall is supporting the weight of all the wall above it. In a first approximation, every cross section will be in a state of pure axial compression. The most heavily solicited cross section will be the one at the very bottom, which will be supporting a compressive pressure of $\rho h g$, where $\rho$ is the density of the ice, $h$ ... 2 ### Center Of Mass Troubles The book I am reading defines the position of the com of a two-particle system to be $x_{com}= \Large\frac{m_1x_1+m_2x_2}{m_1+m_2}$ I'm sorry if this seems like a trivial question, but could someone explain to me the interpretation of this definition? Perhaps even why they defined it to be this way. It's a weighted average of the position of the ... Only top voted, non community-wiki answers of a minimum length are eligible
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 31, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9439722895622253, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/109570?sort=oldest
## Can the level set of a critical value be a regular submanifold? ### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points) I know that there is a theorem that a non-empty level set of a regular value of a smooth function $f:M\rightarrow\mathbb{R}$ on a smooth manifold is a regular submanifold (or embedded submanifold) of codimension 1. Now I wonder if there is also a condition in which the converse holds true. I mean suppose $c\in\mathbb{R}$ is a critical value of $f$ is there a condition under which you know the level set of $c$ is not a regular submanifold. If $g:M\rightarrow\mathbb{R}$ is defined by mapping all of $M$ to $0$ then it seems to me that $0$ is a critical value of $g$. Now $g^{-1}(0)=M$ so definitely a regular submanifold of $M$. So I know that it is at least not true without an extra condition. Also if the function is second degree polynomial from $\mathbb{R}$ to $\mathbb{R}$ then the level set of a critical value is just a point which would again be a regular submanifold. If there is no such general condition on the function $f$ or the critical value then how in general does one go about showing that a critical level set is or is not a regular submanifold? Thanks in advance - By the way, I would love a reference for the statement in the first theorem, or even better, a textbook which I can pass to a student of mine who needs to know these things. – Andrej Bauer Oct 13 at 23:59 1 You probably should say amend the statement of the theorem to say that a non-empty level set of a regular value of a smooth function f:M→ℝ on a smooth manifold is a regular submanifold of codimension one. (That takes care of the problem with f identically zero.) – Dick Palais Oct 14 at 0:16 1 The most immediate converse is that a manifold with a trivial normal bundle is the level-set corresponding to a regular value. If you have a non-trivial normal bundle, you will need to allow for more degenerate functions. Bott-type Morse functions are a standard generalization. – Ryan Budney Oct 14 at 1:28 @Andrej If you or your student Google submersion theorem or constant rank theorem you or they should find it. Or look in any basic differential geometry text under those names. – Michael Murray Oct 14 at 12:39 @Andrej I found this theorem in "An Introduction to Manifolds" by Loring W. Tu. In the capter on submanifolds. The constant rank is indeed a generalization of this theorem. – Niek de Kleijn Oct 14 at 14:23 show 2 more comments ## 1 Answer Suppose that $0$ is a regular value of $f$. Then $0$ is a critical value of $g=f^2$, yet the level set $g^{-1}(0)$ is a regular submanifold. In this case all the points on $g^{-1}(0)$ are critical points of $g$. The general answer is difficult. You need to assume something about $f$. A natural assumption would be that the critical points of $f$ are isolated and of finite type. Near such points one can find local coordinates so that in these coordinate $f$ looks like a polynomial. (This is a generalization of the classical Morse lemma due to Tougeron.) In such cases you need to understand the zero sets of real polynomials which can be challenging. A nice place to consult for such issues is the book by Arnold, Gussein-Zade and Varchenko on singularities of differentiable mappings. - 2 A short version of this answer could be: if at a critical point the second derivative is non-degenerate but non-definite, then you know by Morse lemma that the level set is not a regular submanifold. 
This gives you a simple condition (that captures only a few cases, but is generic). – Benoît Kloeckner Oct 14 at 7:52 Thank you, I didn't think to check the book by Arnold, Gussein-Zade and Varchenko even though I had it lying around the house! – Niek de Kleijn Oct 14 at 14:22
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 24, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9262204170227051, "perplexity_flag": "head"}
http://mathoverflow.net/questions/115375?sort=oldest
## Algebraic maximal extension and algebraic closure ### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points) Let $K$ be a valued field. We say that $K$ is algebraic maximal if any algebraic extension of $K$ has either a bigger value group or a bigger residue field. Under which condition is an algebraic maximal valued field algebraically closed ? Thank you. - ## 2 Answers I think the answer is "hardly ever", because pretty much everything is algebraic maximal in your sense. For any complete discretely-valued field $K$, and any finite extension $L / K$, we have $[L : K] = e(L / K) f(L / K)$, where $f(L/K)$ is the degree of the extension of residue fields and $e(L / K)$ is the index of the value group of $K$ in that of $L$. So any complete discretely valued field is algebraic maximal, and such fields are very far from being algebraically closed! I can't actually think offhand of an example of a valued field which is not algebraic maximal. - Thank you! And if you consider fields whose residue fiels is algebraically closed and whose value group is divisible, when are such fields algebraically closed ? – Richard Dec 4 at 12:14 2 $\mathbf{C}_p$ has plenty of immediate extensions, that is extensions that have the same value group and the same residue field. The compositum of all these is its spherical completion. Now lots of intermediate fields between $\mathbf{C}_p$ and its spherical completion would not be algebraic maximal. – Laurent Berger Dec 4 at 15:49 ### You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you. If you take the compositum $K=TP$ of the maximal tamely ramified extension $T$ of $\mathbf{Q}_p$ with the cyclotomic $\mathbf{Z}_p$-extension $P$ of $\mathbf{Q}_p$, then $K$ is not algebraically closed, its residue field is $\bar{\mathbf{F}}_p$, and the value group is $\mathbf{Q}$. - This answers the question in your comment on David's answer. – Chandan Singh Dalawat Dec 4 at 12:46 Thank you very much ! – Richard Dec 4 at 13:31
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 21, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9433081746101379, "perplexity_flag": "head"}
http://unapologetic.wordpress.com/2010/02/12/properties-of-irreducible-root-systems-iii-2/?like=1&source=post_flair&_wpnonce=f9365a8c97
# The Unapologetic Mathematician ## Properties of Irreducible Root Systems III Today we conclude with our series of lemmas on irreducible root systems. If $\Phi$ is irreducible, then roots in $\Phi$ have at most two different lengths. Here I mean actual geometric lengths, as measured by the inner product, not the “length” of a Weyl group element. Further, any two roots of the same length can be sent to each other by the action of the Weyl group. Let $\alpha$ and $\beta$ be two roots. We just saw that the $\mathcal{W}$-orbit of $\alpha$ spans $V$, and so not all the $\sigma(\alpha)$ can be perpendicular to $\beta$. From what we discovered about pairs of roots, we know that if $\langle\alpha,\beta\rangle\neq0$, then the possible ratios of squared lengths $\frac{\lVert\beta\rVert^2}{\lVert\alpha\rVert^2}$ are limited. Indeed, this ratio must be one of $\frac{1}{3}$, $\frac{1}{2}$, ${1}$, ${2}$, or ${3}$. If there are three distinct root-lengths, let $\alpha$, $\beta$, and $\gamma$ be samples of each length in increasing order. We must then have $\frac{\lVert\beta\rVert^2}{\lVert\alpha\rVert^2}=2$ and $\frac{\lVert\gamma\rVert^2}{\lVert\alpha\rVert^2}=3$, and so $\frac{\lVert\gamma\rVert^2}{\lVert\beta\rVert^2}=\frac{3}{2}$, which clearly violates our conditions. Thus there can be at most two root lengths, as asserted. We call those of the smaller length “short roots”, and the others “long roots”. If there is only one length, we call all the roots long, by convention. Now let $\alpha$ and $\beta$ have the same length. By using the Weyl group as above, we may assume that these roots are non-orthogonal. We may also assume that they’re distinct, or else we’re already done! By the same data as before, we conclude that $\alpha\rtimes\beta=\beta\rtimes\alpha=\pm1$. We can replace one root by its negative, if need be, and assume that $\alpha\rtimes\beta=1$. Then we may calculate: $\displaystyle[\sigma_\alpha\sigma_\beta\sigma_\alpha](\beta)=[\sigma_\alpha\sigma_\beta](\beta-\alpha)=\sigma_\alpha(-\beta-\alpha+\beta)=\alpha$. We may note, in passing, that the unique maximal root $\beta$ is long. Indeed, it suffices to show that $\langle\beta,\beta\rangle\geq\langle\alpha,\alpha\rangle$ for all $\alpha\in\Phi$. We may, without loss of generality, assume $\alpha$ is in the fundamental doman $\overline{\mathfrak{C}(\Delta)}$. Since $\beta-\alpha\succ0$, we must have $\langle\gamma,\beta-\alpha\rangle\geq0$ for any other $\gamma\in\overline{\mathfrak{C}(\Delta)}$. In particular, we have $\langle\beta,\beta-\alpha\rangle\geq0$ and $\langle\alpha,\beta-\alpha\rangle\geq0$. Putting these together, we conclude $\displaystyle\langle\beta,\beta\rangle\geq\langle\alpha,\beta\rangle\geq\langle\alpha,\alpha\rangle$ and so $\beta$ must be a long root. ### Like this: Posted by John Armstrong | Geometry, Root Systems ## 4 Comments » 1. Typo: gamma squared over alpha squared should be three not two. Comment by | February 14, 2010 | Reply 2. thanks, fixed Comment by | February 14, 2010 | Reply 3. [...] in each of the irreducible cases we see that there are at most two distinct root lengths. And, in each case, the unique maximal root is the long root within the fundamental [...] Pingback by | February 15, 2010 | Reply 4. [...] to be the collection of vectors in the lattice of either one or two specified lengths (since there can be at most two root lengths). That is, we’re considering the intersection of a discrete collection of points with one or [...] 
Pingback by | March 1, 2010 | Reply « Previous | Next » ## About this weblog This is mainly an expository blath, with occasional high-level excursions, humorous observations, rants, and musings. The main-line exposition should be accessible to the “Generally Interested Lay Audience”, as long as you trace the links back towards the basics. Check the sidebar for specific topics (under “Categories”). I’m in the process of tweaking some aspects of the site to make it easier to refer back to older topics, so try to make the best of it for now.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 39, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9240663647651672, "perplexity_flag": "head"}
http://mathhelpforum.com/pre-calculus/1647-big-test-question.html
Thread: 1. Big test question Well, my math teacher gave us this question that he told us would be on the math test tomarow. Im not too sure how to do it but I have an idea. An underground cable runs from (-20,35) to (-10,-25). Jessica's house is at (40,5). Determine the shortest distance of cable required to connect Jessys house and the underground cable. So first what I did was get the slope of the underground wire. M=(y2-y1)/(x2-x1). M=-25-35/-10-(-20) M=-6 Now i know the perp. bisector is 1/6. Now i have to get the exuation of the line. y=mx+b y=5 x=40 m=1/6 When that was all done i got b=-1 1/3. I dont know if that is right but this is where i am stuck at. It would be real great if someone could help me out. Thanks. 2. the equation of the line in general form for (-20, 35) (-10, -25) is 6x + y + 85 = 0 the shortest distance between a point and a line is the perpendicular distance 6x + y + 85 = 0 (40, 5) $<br /> d= \frac{|6 * 40 + 1 * 5 + 85|}{\sqrt{6^2 +1^2}}<br />$ $<br /> d = \frac{330}{\sqrt{37}} = 54.25<br />$ 3. Let me just help explain jacs answer. If the equation of the a non-vertical line is $Ax+By+C$ (this is called standard form). And given a point $(x_0,y_0)$ Then the distance from the point to this line is $s=\frac{|Ax_0+By_0+C|}{\sqrt{A^2+B^2}}$. 4. opps sorry, i forgot to include the forumla....
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 5, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9236356616020203, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/21527/equivalence-principle-and-radiation-from-falling-particle
Equivalence principle and radiation from falling particle I am currently having a hard time solving a problem of GR from Lasenby's book. I can't make it more clear than by quoting the exercise: 7.2 A charged object held stationary in a laboratory on the surface of the Earth does not emit electromagnetic radiation. If the object is then dropped so that it is in free fall, it will begin to radiate. Reconcile these observations with the principle of equivalence. Hint: Consider the spatial extent of the electric field of the charge. Could someone give me a second hint, currently I am stuck because I try to think about an energetic argument: from the laboratory the particle is losing energy from radiation and potential energy from falling, but in the particle none of them is lost. And I am stuck there. - Don't think about it from an energetic argument. Think about it from the perspectives of what the time evolution of the electric field looks like in the co-moving frame of reference and then from the laboratory frame of reference. – Benjamin Franz Feb 26 '12 at 22:07 But, wait, I need a clarifier then, I thought that since in free-fall $\nabla_\mu u^\nu =0$ meant that the acceleration was zero, and hence, there should be no radiation emitted. – kηives Feb 27 '12 at 4:27 I would have thought that the body should emit radiation when it is stationary on Earths surface but not in free-fall (althgouh GR is not my forte to put it mildly). Come to think of it, can this be measured? I guess a charged object accelerating a g does not emit much radiation but still might a cute experiment. Maybe I should make this a q. – Bowler Feb 27 '12 at 10:55 1 – Benjamin Franz Feb 27 '12 at 14:12 2 Answers When the charge is fixed at the surface of the Earth, it is indeed accelerated. But so are we! When the charge falls with respect to the surface of the Earth, it gets accelerated with respect to us, and hence emits radiation in our reference frame. It is relative acceleration that matters, because one can write relativistic Maxwells equations in any reference frame, including the comoving frame of the observer (us). In this frame, near the world line of the observer the space-time is always flat, and the charge at rest with respect to it will create a static field. If the charge gets accelerated in this frame, than, as in flat case, it will emit radiation, as seen by the observer. - I investigated this question. I found the good analysis of this quastion in the arcticle "EQUIVALENCE PRINCIPLE AND RADIATION BY A UNIFORMLY ACCELERATED CHARGE". Now I adduce some conclusions, which is answed on your quastion: The notion of radiation in terms of receiving electromagnetic energy by an observer is not absolute, but this relative notion is consistent with the principle of equivalence. That is, in a static spacetime, a supported charge does not radiate according to another supported observer; neither does a freely falling charge according to a freely falling ob-server. Also, a freely falling charge does radiate according to a supported observer, and a supported charge does radiate according to a freely falling observer. -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9343613386154175, "perplexity_flag": "middle"}
http://mathoverflow.net/revisions/41026/list
## Return to Answer 2 clarified: integral curve -> trajectory of gradient When a function is very close to flat, the trajectories tend to be very erratic and wandering. The answer is NO, even if linear bounds are specified on the derivatives of the function. To be more specific: in the unit sphere, draw any smooth path $\gamma$ starts from the south pole, bottom, ends at the north pole, and weaves and wanders as much as you like, for as long as you like. Now define a function $f$ on a neighbhood of $\gamma$ that is 0 at the bottom, $1$ at the top, and for which $\gamma$ is an integral curve a trajectory of the gradient, and the north pole and south pole are critical points. You can do this by using closest-point projection in a regular neighborhood. Smooth functions on a regular neighborhood can be extended to $C^\infty$ funtions on $S^2$, and a smooth extension can be perturbed away from $\gamma$ to be a Morse function, so the particular curve is a gradient line of a Morse function with arbitrary length. Funtions that have a path like $\gamma$ in their gradient flow are obviously very inefficient. If you want functions with more efficiency, you could look at linear combinations of eigenfunctions of the Laplacian with small eigenvalue. In those cases, I think you can get reasonable inequalities concerning the average length of gradient flow lines; this is related to the known and widely used Cheeger type relationships between diameter of manifolds (or graphs), size of separators, and eigenvalues of the Laplacaian. I'm not sure what you can conclude about the maximum length of gradient flow lines, but I suspect something could be done, and may well be known. 1 When a function is very close to flat, the trajectories tend to be very erratic and wandering. The answer is NO, even if linear bounds are specified on the derivatives of the function. To be more specific: in the unit sphere, draw any smooth path $\gamma$ starts from the south pole, bottom, ends at the north pole, and weaves and wanders as much as you like, for as long as you like. Now define a function $f$ on a neighbhood of $\gamma$ that is 0 at the bottom, $1$ at the top, and for which $\gamma$ is an integral curve and the north pole and south pole are critical points. You can do this by using closest-point projection in a regular neighborhood. Smooth functions on a regular neighborhood can be extended to $C^\infty$ funtions on $S^2$, and a smooth extension can be perturbed away from $\gamma$ to be a Morse function, so the particular curve is a gradient line of a Morse function with arbitrary length. Funtions that have a path like $\gamma$ in their gradient flow are obviously very inefficient. If you want functions with more efficiency, you could look at linear combinations of eigenfunctions of the Laplacian with small eigenvalue. In those cases, I think you can get reasonable inequalities concerning the average length of gradient flow lines; this is related to the known and widely used Cheeger type relationships between diameter of manifolds (or graphs), size of separators, and eigenvalues of the Laplacaian. I'm not sure what you can conclude about the maximum length of gradient flow lines, but I suspect something could be done, and may well be known.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 18, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9515239596366882, "perplexity_flag": "head"}
http://mathhelpforum.com/calculus/87221-solved-optimization-problem-cost-function.html
# Thread: 1. ## [SOLVED] optimization problem (cost function) I can find the cost function, but I don't know how to find minimum cost. Which should = \$329.34 A rectangular storage container with an open top is to have a volume of 20 m^3. The length of its base is twice the width. Material for the base costs \$15 per square meter.Material for the sides costs \$7 per square meter. 1. Find the cost function for the container. 2. Find the cost of materials for the cheapest such container. $<br /> V=20=2x^2y <br />$ $<br /> Surface Area = 2x^2+6xy<br />$ Cost Function: $<br /> 15*(2x^2) + \frac{7*60}{x}<br />$ $<br /> 30x^2 + \frac{420}{x}<br />$ 2. Originally Posted by coolguy00777 I can find the cost function, but I don't know how to find minimum cost. Which should = \$329.34 A rectangular storage container with an open top is to have a volume of 20 m^3. The length of its base is twice the width. Material for the base costs \$15 per square meter.Material for the sides costs \$7 per square meter. 1. Find the cost function for the container. 2. Find the cost of materials for the cheapest such container. $<br /> V=20=2x^2y <br />$ $<br /> Surface Area = 2x^2+6xy<br />$ Cost Function: $<br /> 15*(2x^2) + \frac{7*60}{x}<br />$ $<br /> 30x^2 + \frac{420}{x}<br />$ To find the max or min,you just find the stationary point of the graph and identify its concavity. $\frac{dy}{dx}=0$ $\frac{d^2y}{dx^2}$ is positive or megative. 3. sorry, but I'm still lost. Is there not an algebraic way to find the answer? What part do I take the derivative of? 4. Originally Posted by coolguy00777 sorry, but I'm still lost. Is there not an algebraic way to find the answer? What part do I take the derivative of? $\frac{d}{dx}\left[C = 30x^2 + \frac{420}{x}\right]$ $\frac{dC}{dx} = 60x - \frac{420}{x^2} = 0$ $x = \sqrt[3]{7}$ $C(\sqrt[3]{7}) = 329.34$
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 14, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9047662615776062, "perplexity_flag": "middle"}
http://mathhelpforum.com/calculus/76857-multiple-integral.html
# Thread: 1. ## Multiple integral Need helping starting this problem find the volume bound by $x+2z=4$ and $2y+z=2$ in the postive octant. Will this be a triangle based pyramid? 2. I think it will be a square based pyramid, which mean it's volume is uniquely determined by it's base area and it's height. $V=\frac{1}{3}Ah$. 3. I need to gat the answer by using integration.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 3, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9560836553573608, "perplexity_flag": "middle"}
http://mathoverflow.net/revisions/61784/list
2 retagged 1 # cohomology of BG, G compact Lie group It has been stated in several papers that $H^{odd}(BG,\mathbb{R})=0$ for compact Lie group $G$. However, I've still not found a proof of this. I believe that the proof is as follows: --> $G$ compact $\Rightarrow$ it has a maximal toral subgroup, say $T$ --> the inclusion $T\hookrightarrow G$ induces inclusion $H^k(BG,\mathbb{R})\hookrightarrow H^k(BT,\mathbb{R})$ --> $H^*(BT,\mathbb{R})\cong \mathbb{R}[c_1,...,c_n]$ where the $c_i$'s are Chern classes of degree $\deg(c_k)=2k$ --> Thus, any polys in $\mathbb{R}[c_1,...,c_n]$ are necessarily of even degree. Hence, $H^{odd}(BG,\mathbb{R})=0$ Is this the correct reasoning? Could someone fill in the gaps; i.e., give a formal proof of this statement?
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 12, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9629527926445007, "perplexity_flag": "head"}
http://www.nag.com/numeric/cl/nagdoc_cl23/html/G13/g13dnc.html
# NAG Library Function Documentnag_tsa_multi_part_lag_corr (g13dnc) ## 1  Purpose nag_tsa_multi_part_lag_corr (g13dnc) calculates the sample partial lag correlation matrices of a multivariate time series. A set of ${\chi }^{2}$-statistics and their significance levels are also returned. A call to nag_tsa_multi_cross_corr (g13dmc) is usually made prior to calling this function in order to calculate the sample cross-correlation matrices. ## 2  Specification #include <nag.h> #include <nagg13.h> void nag_tsa_multi_part_lag_corr (Integer k, Integer n, Integer m, const double r0[], const double r[], Integer *maxlag, double parlag[], double x[], double pvalue[], NagError *fail) ## 3  Description Let ${W}_{\mathit{t}}={\left({w}_{1\mathit{t}},{w}_{2\mathit{t}},\dots ,{w}_{\mathit{k}\mathit{t}}\right)}^{\mathrm{T}}$, for $\mathit{t}=1,2,\dots ,n$, denote $n$ observations of a vector of $k$ time series. The partial lag correlation matrix at lag $l$, $P\left(l\right)$, is defined to be the correlation matrix between ${W}_{t}$ and ${W}_{t+l}$, after removing the linear dependence on each of the intervening vectors ${W}_{t+1},{W}_{t+2},\dots ,{W}_{t+l-1}$. It is the correlation matrix between the residual vectors resulting from the regression of ${W}_{t+l}$ on the carriers ${W}_{t+l-1},\dots ,{W}_{t+1}$ and the regression of ${W}_{t}$ on the same set of carriers; see Heyse and Wei (1985). $P\left(l\right)$ has the following properties. (i) If ${W}_{t}$ follows a vector autoregressive model of order $p$, then $P\left(l\right)=0$ for $l>p$; (ii) When $k=1$, $P\left(l\right)$ reduces to the univariate partial autocorrelation at lag $l$; (iii) Each element of $P\left(l\right)$ is a properly normalized correlation coefficient; (iv) When $l=1$, $P\left(l\right)$ is equal to the cross-correlation matrix at lag $1$ (a natural property which also holds for the univariate partial autocorrelation function). Sample estimates of the partial lag correlation matrices may be obtained using the recursive algorithm described in Wei (1990). They are calculated up to lag $m$, which is usually taken to be at most $n/4$. Only the sample cross-correlation matrices ($\stackrel{^}{R}\left(\mathit{l}\right)$, for $\mathit{l}=0,1,\dots ,m$) and the standard deviations of the series are required as input to nag_tsa_multi_part_lag_corr (g13dnc). These may be computed by nag_tsa_multi_cross_corr (g13dmc). Under the hypothesis that ${W}_{t}$ follows an autoregressive model of order $s-1$, the elements of the sample partial lag matrix $\stackrel{^}{P}\left(s\right)$, denoted by ${\stackrel{^}{P}}_{ij}\left(s\right)$, are asymptotically Normally distributed with mean zero and variance $1/n$. In addition the statistic $Xs=n∑i=1k∑j=1kP^ij s 2$ has an asymptotic ${\chi }^{2}$-distribution with ${k}^{2}$ degrees of freedom. These quantities, $X\left(l\right)$, are useful as a diagnostic aid for determining whether the series follows an autoregressive model and, if so, of what order. ## 4  References Heyse J F and Wei W W S (1985) The partial lag autocorrelation function Technical Report No. 32 Department of Statistics, Temple University, Philadelphia Wei W W S (1990) Time Series Analysis: Univariate and Multivariate Methods Addison–Wesley ## 5  Arguments 1:     k – IntegerInput On entry: $k$, the dimension of the multivariate time series. Constraint: ${\mathbf{k}}\ge 1$. 2:     n – IntegerInput On entry: $n$, the number of observations in each series. Constraint: ${\mathbf{n}}\ge 2$. 
3:     m – IntegerInput On entry: $m$, the number of partial lag correlation matrices to be computed. Note this also specifies the number of sample cross-correlation matrices that must be contained in the array r. Constraint: $1\le {\mathbf{m}}<{\mathbf{n}}$. 4:     r0[${\mathbf{k}}×{\mathbf{k}}$] – const doubleInput On entry: the sample cross-correlations at lag zero/standard deviations as provided by nag_tsa_multi_cross_corr (g13dmc), that is, ${\mathbf{r0}}\left[\left(\mathit{j}-1\right)k+\mathit{i}-1\right]$ must contain the $\left(\mathit{i},\mathit{j}\right)$th element of the sample cross-correlation matrix at lag zero if $\mathit{i}\ne \mathit{j}$ and the standard deviation of $\mathit{i}=\mathit{j}$, for $\mathit{i}=1,2,\dots ,k$ and $\mathit{j}=1,2,\dots ,k$. 5:     r[${\mathbf{k}}×{\mathbf{k}}×{\mathbf{m}}$] – const doubleInput On entry: the sample cross-correlations as provided by nag_tsa_multi_cross_corr (g13dmc), that is, ${\mathbf{r}}\left[\left(\mathit{l}-1\right){k}^{2}+\left(\mathit{j}-1\right)k+\mathit{i}-1\right]$ must contain the $\left(\mathit{i},\mathit{j}\right)$th element of the sample cross-correlation at lag $\mathit{l}$, for $\mathit{l}=1,2,\dots ,m$, $\mathit{i}=1,2,\dots ,k$ and $\mathit{j}=1,2,\dots ,k$, where series $\mathit{j}$ leads series $\mathit{i}$. 6:     maxlag – Integer *Output On exit: the maximum lag up to which partial lag correlation matrices (along with ${\chi }^{2}$-statistics and their significance levels) have been successfully computed. On a successful exit maxlag will equal m. If MATRIX_ILL_CONDITIONED on exit, then maxlag will be less than m. 7:     parlag[${\mathbf{k}}×{\mathbf{k}}×{\mathbf{m}}$] – doubleInput/Output On exit: ${\mathbf{parlag}}\left[\left(\mathit{l}-1\right){k}^{2}+\left(\mathit{j}-1\right)k+\mathit{i}-1\right]$ contains the $\left(\mathit{i},\mathit{j}\right)$th element of the sample partial lag correlation matrix at lag $\mathit{l}$, for $\mathit{l}=1,2,\dots ,{\mathbf{maxlag}}$, $\mathit{i}=1,2,\dots ,k$ and $\mathit{j}=1,2,\dots ,k$. 8:     x[m] – doubleOutput On exit: ${\mathbf{x}}\left[\mathit{l}-1\right]$ contains the ${\chi }^{2}$-statistic at lag $\mathit{l}$, for $\mathit{l}=1,2,\dots ,{\mathbf{maxlag}}$. 9:     pvalue[m] – doubleOutput On exit: ${\mathbf{pvalue}}\left[\mathit{l}-1\right]$ contains the significance level of the corresponding ${\chi }^{2}$-statistic in x, for $\mathit{l}=1,2,\dots ,{\mathbf{maxlag}}$. 10:   fail – NagError *Input/Output The NAG error argument (see Section 3.6 in the Essential Introduction). ## 6  Error Indicators and Warnings MATRIX_ILL_CONDITIONED The recursive equations used to compute the partial lag correlation matrices are ill-conditioned (they have been computed up to lag $〈\mathit{\text{value}}〉$). NE_ALLOC_FAIL Dynamic memory allocation failed. NE_BAD_PARAM On entry, argument $〈\mathit{\text{value}}〉$ had an illegal value. NE_INT On entry, ${\mathbf{k}}=〈\mathit{\text{value}}〉$. Constraint: ${\mathbf{k}}\ge 1$. On entry, ${\mathbf{m}}=〈\mathit{\text{value}}〉$. Constraint: ${\mathbf{m}}\ge 1$. On entry, ${\mathbf{n}}=〈\mathit{\text{value}}〉$. Constraint: ${\mathbf{n}}\ge 2$. NE_INT_2 On entry, ${\mathbf{m}}=〈\mathit{\text{value}}〉$ and ${\mathbf{n}}=〈\mathit{\text{value}}〉$. Constraint: ${\mathbf{m}}<{\mathbf{n}}$. NE_INTERNAL_ERROR An internal error has occurred in this function. Check the function call and any array sizes. If the call is correct then please contact NAG for assistance. 
## 7  Accuracy The accuracy will depend upon the accuracy of the sample cross-correlations. ## 8  Further Comments The time taken is roughly proportional to ${m}^{2}{k}^{3}$. If you have calculated the sample cross-correlation matrices in the arrays r0 and r, without calling nag_tsa_multi_cross_corr (g13dmc), then care must be taken to ensure they are supplied as described in Section 5. In particular, for $l\ge 1$, ${\stackrel{^}{R}}_{ij}\left(l\right)$ must contain the sample cross-correlation coefficient between ${w}_{i\left(t-l\right)}$ and ${w}_{jt}$. The function nag_tsa_multi_auto_corr_part (g13dbc) computes squared partial autocorrelations for a specified number of lags. It may also be used to estimate a sequence of partial autoregression matrices at lags $1,2,\dots \text{}$ by making repeated calls to the function with the argument nk set to $1,2,\dots \text{}$. The $\left(i,j\right)$th element of the sample partial autoregression matrix at lag $l$ is given by $W\left(i,j,l\right)$ when nk is set equal to $l$ on entry to nag_tsa_multi_auto_corr_part (g13dbc). Note that this is the ‘Yule–Walker’ estimate. Unlike the partial lag correlation matrices computed by nag_tsa_multi_part_lag_corr (g13dnc), when ${W}_{t}$ follows an autoregressive model of order $s-1$, the elements of the sample partial autoregressive matrix at lag $s$ do not have variance $1/n$, making it very difficult to spot a possible cut-off point. The differences between these matrices are discussed further by Wei (1990). Note that nag_tsa_multi_auto_corr_part (g13dbc) takes the sample cross-covariance matrices as input whereas this function requires the sample cross-correlation matrices to be input. ## 9  Example This example computes the sample partial lag correlation matrices of two time series of length $48$, up to lag $10$. The matrices, their ${\chi }^{2}$-statistics and significance levels and a plot of symbols indicating which elements of the sample partial lag correlation matrices are significant are printed. Three * represent significance at the $0.5$% level, two * represent significance at the 1% level and a single * represents significance at the 5% level. The * are plotted above or below the central line depending on whether the elements are significant in a positive or negative direction. ### 9.1  Program Text Program Text (g13dnce.c) ### 9.2  Program Data Program Data (g13dnce.d) ### 9.3  Program Results Program Results (g13dnce.r)
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 105, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.7482621669769287, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/213451/prove-that-w-frac-sqrtz2z1z2-3z2-analytic-on-z1
# Prove that $w=\frac{\sqrt{z^2+z+1}}{z^2-3z+2}$ analytic on $|z|<1$ I've self-learning of a complex analysis course. My first difficulties are in analytic functions. In that book, that I learning from, there is no even one example on how proving such problem, like in the title: Prove that $w=\frac{\sqrt{z^2+z+1}}{z^2-3z+2}$ analytic on $|z|<1$ Could you direct me by providing a full solution of the above? Thank you. - 1 – Pragabhava Oct 16 '12 at 22:06 ## 1 Answer One can use a few general facts about how one can compose analytic functions: • The sum and product of two analytic functions $f,g$ is analytic. This is clear from $(f+g)'=f'+g'$ and $(fg)'=fg'+f'g$. • If $f(z)$ is analytic on $\Omega$ and has no zero there, then $\frac1{f(z)}$ is analytic as well. This is clear from $\frac d{dz}\frac1{f(z)}=\frac{f'(z)}{f^2(z)}$. • If $f(z)$ is analytic on $\Omega$ and $\Omega$ is simply connected and $f(z)$ has no zero there, then $\sqrt{f(z)}$ is analytic on $\Omega$ as well. This follows by monodromy. • The open unit disc given by $|z|<1$ is simply connected: The disc and its complement are clearly connected, it is not even difficult to give explicit paths between points. If you know these four facts, you can combine them to conclude that $f(z)=\frac{\sqrt{z^2+z+1}}{(z-2)(z-1)}$ is analytic on $|z|<1$: The denominator is zero only at $z=1$ and $z=2$, hence not in the open disc; the radicand in the numerator has two complex root of absolute value $1$, hence none inside the disc. - nicely done (+1) – robjohn♦ Oct 20 '12 at 14:16
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 23, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.958726704120636, "perplexity_flag": "head"}
http://quant.stackexchange.com/questions/4810/price-of-a-cash-or-nothing-binary-call-option?answertab=votes
# price of a “Cash-or-nothing binary call option” I'm stuck with one homework problem here: Assume there is a geometric Brownian motion \begin{equation} dS_t=\mu S_t dt + \sigma S_t dW_t \end{equation} Assume the stock pays dividend, with the cont. compounded yield $q$. a) Find the risk-neutral version of the process for $S_t$. b) What is the market price of risk in this case? c) Assume no yield anymore. Now, there is a derivative written on this stock paying one unit of cash if the stock price is above the strike price $K$ at maturity time $T$, and 0 else (cash-or-nothing binary call option). Find the PDE followed by the price of this derivative. Write the appropriate boundary conditions. d) Write the expression for the price of this derivative at time $t<T$ as a risk-neutral expectation of the terminal payoff. e) Writte the price of this option in terms on $N(d_2)$, where $d_2$ has the usual Black-Scholes value. Here is what I came up with by now: for a): This should become $dS_t'=(r-q)S_t'dt + \sigma S_t'dW_t^\mathbb{Q}$ (is this correct?) for b): This would be $\zeta=\frac{\mu-(r-q)}{\sigma}$ (?) for c): The boundary conditions should be: Price at $t=T$ is $0$ if $S<K, 1$ else; I have no idea what to write for the PDE. for d): I can only think of $C(S_t,t)=e^{-r(T-t)}\mathbb{E}[C(S_t),T]$, where $C(S_t,T)$ is the value at time $T$, i.e. the payoff. for e): I don't know how to start here. Can anybody help me and solve this with me? - I found that $\mathbb{Q}_t(S_T\geq K)=N(d_2)$, where $\mathbb{Q}$ denotes risk-neutral probability, which should solve part e): The present value is the discounted future payoff, which is just $p$ if $p$ is the probability that $S_T\geq K$. Hence, the current value is $e^{-r(T-t)}\mathbb{Q}_t(S_T\geq K)=e^{-r(T-t)} N(d_2)$ – Marie. P. Dec 20 '12 at 16:50 1 Not entirely correct. You did not convert correctly from real probability measure to risk neutral probability measure. See my answer. – phubaba Dec 26 '12 at 17:52 ## 1 Answer a. is correct, but you should derive it using appropriate logic, not just guessing the answer. Ie the drift of discounted stock should be 0. Define a bond dB = rBdt. d(S/B) should have no drift. This can help you find the correct mu. You can find the sde for S/B using two dimensional ito b. don't really know about market price of risk. c. In this case the pde is the same as the black scholes pde using your risk neutral process. Can you think of why this is? Does the type of call option change how the underlying changes? What are the other boundary conditions ie (for S = 0 and S = infinity). Take a look at dirichlet (also known as zero gamma condition) and other types of boundary conditions. d. That is the right start, but what is the expectation? Lets define C = cash on payout. Then the payout(S) = C*I(S>K). Plug this into your formula. The expectation now looks like C*E(I(S>K)). The problem is that this expectation is in real probability space and you want it in your risk neutral space. You can use girsanov's theorem. Best proof (result to use) I found is (1) in http://math.ucsd.edu/~pfitz/downloads/courses/spring05/math280c/girsanov.pdf e. In d you will basically find that E(I(S>K)) a function(t)*P(S>K) in your risk neutral space. You need to find P(S>k) this turns out to be N(d2). You can define a new variable (S-E(S))/std(S) = Normal(0,1) to transform P(S>k) into N(d2) - As of writing this comment it seems as if there is a type in your first paragraph. Isn't it that S/B is a martingale under Q^B (that is, has drift zero). 
– Christian Fries Jan 5 at 20:23 Yes you are definitely right sorry about that. S_t/B_t = E_t(S_T/B_T) under risk neutral probability – phubaba Jan 31 at 0:49
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 21, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9072364568710327, "perplexity_flag": "middle"}
http://mathhelpforum.com/discrete-math/210354-graph-problem-print.html
# Graph problem Printable View • December 26th 2012, 12:48 PM Franek222 Graph problem I have big problem to solve, could someone help :) Prove or disprove: If $G$ is a k-edge-connected graph and $v,v_{1},v_{2},...,v_{k}$ are $k+1$ distinct vertices of $G$ then for $i = 1,2,...k$ there exist $v-v_{i}$ paths $P_{i}$ such that each path $P_{i}$ contains exactly one vertex of ${v,v_{1},v_{2},...,v_{k}}$, namely $v_{i}$, and for $i \neq j, P_{i}$ and $P_{j}$ are edge-disjoint • December 26th 2012, 02:05 PM Plato Re: Graph problem Quote: Originally Posted by Franek222 I have big problem to solve, could someone help :) Prove or disprove: If $G$ is a k-edge-connected graph and $v,v_{1},v_{2},...,v_{k}$ are $k+1$ distinct vertices of $G$ then for $i = 1,2,...k$ there exist $v-v_{i}$ paths $P_{i}$ such that each path $P_{i}$ contains exactly one vertex of ${v,v_{1},v_{2},...,v_{k}}$, namely $v_{i}$, and for $i \neq j, P_{i}$ and $P_{j}$ are edge-disjoint I find this description hard to follow. However, you may find this webpage useful. Note that the degree of any vertex is $\ge k$. That gives some bound on the number of edges. All times are GMT -8. The time now is 05:44 AM.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 25, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9443026185035706, "perplexity_flag": "head"}
http://mathoverflow.net/questions/88955/is-there-a-right-adjoint-to-the-contravariant-functor-hom-b-in-the-category-of/88958
## Is there a right adjoint to the contravariant functor Hom(-,B) in the category of Sets ### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points) Hi, I apologise if this is the wrong place for this question but i need to ask it somewhere. The question is whether the right, (perhaps left?), adjoint to $Hom(-,A)$ exists in $\bf Set$ and how to deduse it. I would very much like it to be the disjoint union $-\oplus A$ but im not quite sure how to deduce it or if its simply whishfull thinking. Alternativly, is there an other adjoint (left or right) to the disjoint union. I am writing an undergrad paper and I am aware of the coproduct. I'm asking since i hope the deduction of a dual adjoint will tell me a little of the general workings of category theory and help me understand how to make "computations" in concrete categories. Also, if you have the time and will, is there a right adjoint to the covariant Hom functor? Thanks in advance or sorry for wasting your time - ## 3 Answers To speak of right (or left) adjoints for a contravariant functor $F: C\to D$, one needs to decide whether to view it as a functor from $C^{op}$ to $D$ or as a functor from $C$ to $D^{op}$. What the one viewpoint calls a left adjoint, the other will call a right adjoint. One therefore often speaks instead of two contravariant functors being "adjoint on the right" (or on the left). In this language, $Hom(-,A)$ is adjoint on the right to itself, which means that morphisms from $X$ to $Hom(Y,A)$ are in natural bijective correspondence with morphisms from $Y$ to $Hom(X,A)$; here "natural" means (as usual for adjointness) with respect to both $X$ and $Y$. - Thx, I appreaciate all answers are good. what really had me confused was the contravariant nature and my own hopes. – Tobias Feb 19 2012 at 23:08 ### You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you. Left adjoints preserve colimits, but $\mathrm{Hom}(-,A)$ does not because $\mathrm{Hom}(\varnothing,A)\neq\varnothing$. Right adjoints preserve limits, but if $A$ is not terminal, $\mathrm{Hom}(-,A)$ does not because `$\mathrm{Hom}(\{*\},A)\neq\{*\}$`. (if $A$ is terminal, the functor is constant equal to `$\{*\}$` and has a left adjoint which is the constant functor equal to $\varnothing$). You can do the same reasoning to deduce that `$-\oplus{}A$` does not have adjoints, unless $A=\varnothing$. - Nice explanation. – Samuel Vidal Feb 19 2012 at 20:22 You forgot the contravariance. Whichever hand it is, we do have $\hom(\emptyset,A) = \lbrace\ast\rbrace$, so we're preserving that limit, at least. – Theo Johnson-Freyd Feb 20 2012 at 2:44 Covariant $Hom$ takes $x$-element sets to $x^n$-element sets. If $f$ is a right adjoint, we have $y^{x^n}=f(y)^x$, and there is no function with this property. But product is left adjoint to covariant $Hom$, as follows: $Hom(X,Hom(A,Y))=Hom(X \times A,Y)$ -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 30, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9343857765197754, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/67602/how-to-calculate-point-y-with-given-point-x-of-a-angled-line
# How to calculate point y with given point x of a angled line I dropped out of school to early I guess, but I bet you guys can help me here. I've got a sloped line starting from point a(0|130) and ending at b(700|0). I need an equation to calculate the y-coordinate when the point x is given, e.g. 300. Can someone help me please ? Sorry for asking such a dumb question, can't find any answer here, propably just too silly to get the math slang ;) - Math can be so simple when you're stupid :D – Anonymous Sep 26 '11 at 3:28 1 Thanks for your answers, helped me a lot, now I can draw some text in my php script at the right point :D – Anonymous Sep 26 '11 at 3:29 ## 2 Answers So we have a point $(0, 130)$ and another point $(700, 0)$. The equation for this line would then be $y = -\dfrac{130}{700} (x) + 130$. So to get the height at a particular x, you just plug x into this equation. Here is another reference. - You want the two point form of a linear equation. If your points are $(x_1,y_1)$ and $(x_2,y_2)$, the equation is $y-y_1=(x-x_1)\frac{y_2-y_1}{x_2-x_1}$. In your case, $y=-\frac{130}{700}(x-700)$ -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 7, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9316611886024475, "perplexity_flag": "middle"}
http://nrich.maths.org/6959/solution
Modular Fractions We only need 7 numbers for modulus (or clock) arithmetic mod 7 including working with fractions. Explore how to divide numbers and write fractions in modulus arithemtic. Readme Decipher a simple code based on the rule C=7P+17 (mod 26) where C is the code for the letter P from the alphabet. Rearrange the formula and use the inverse to decipher automatically. Double Time Crack this code which depends on taking pairs of letters and using two simultaneous relations and modulus arithmetic to encode the message. Inverting Rational Functions Stage: 5 Challenge Level: One of our most prolific solvers, Patrick from Woodbridge School, sent in his thoughts on this problem To invert a function, $f(x)$, the following procedure is used: say $$f(x) = \frac{2x+9}{ x+2}$$ then the graph is $$y = \frac{2x+9}{x+2}$$ It is inverted by replacing $x$ with $y$, and $y$ with $x$: x = \frac{2y + 9}{y + 2} and rearranging gives y = \frac{9-2x}{x-2} Thus $$g(x) = \frac{9-2x}{x-2}$$ However, for values $x = \pm 2$, we have a denominator $0$ in one of the fractions, so these must be excluded from the domain of the functions. For y= \frac{x-7}{2x+1} To invert this put x = \frac{y-7}{2y+1} Rearrange: 2xy + x - y = -7\,\quad y(2x-1) = -7-x\, \quad y = -\frac{x+7}{2x-1} Thus, the inverse of $h$ is k(x) = -\frac{x+7}{2x-1} The procedure used by Patrick can be used to invert more general rational functions as follows f(x)= \frac{ax+b}{cx+d}\quad\mbox{has inverse}\quad g(x)=\frac{b-dx}{cx-a} To check this properly, consider f(g(x))=\frac{a\left(\frac{b-dx}{cx-a}\right)+b}{c\left(\frac{b-dx}{cx-a}\right)+d} This reduces to f(g(x)) = \frac{a(b-dx)+b(cx-a)}{c(b-dx)+d(cx-a)}=\frac{(bc-ad)x}{bc-ad} Which cancels to $x$, provided that $bc-ad \neq 0$. The same holds for $g(f(x))$. On Ask NRICH, some discussion took place concerning the generalisation to rational functions involving quadratics. Some progress can be made, in that if y = f(x) = \frac{P(x)}{Q(x)} then you might try to construct the inverse using the idea that $x\leftrightarrow y$, corresponding to reflecting the graph in the line $x=y$ so that x = \frac{P(y)}{Q(y)} Rearranging gives Q(y)x = P(y) This is a polynomial equation in $y$ of degree equal to the maximum of the degree of $P$ and $Q$. However, a unique solution will only typically follow if $P(y)$ and $Q(y)$ are linear, and in general no algebraic solution will exist. Moreover, for the inverse to be unique, the function $f(x)$ must be one-to-one, which will not be the case for anything but linear rational functions. The NRICH Project aims to enrich the mathematical experiences of all learners. To support this aim, members of the NRICH team work in a wide range of capacities, including providing professional development for teachers wishing to embed rich mathematical tasks into everyday classroom practice. More information on many of our other activities can be found here.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 43, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8583172559738159, "perplexity_flag": "middle"}
http://mathhelpforum.com/pre-calculus/67134-triangle-inequality-problem.html
# Thread: 1. ## Triangle Inequality Problem Triangle TUV has sides TU, UV, and VT. TU=6x - 11, UV=3x - 1, and VT=2x + 3. Describe the possible values of x. Any help is appreciated. I've been trying to figure this one out for a while. 2. Originally Posted by jstark Triangle TUV has sides TU, UV, and VT. TU=6x - 11, UV=3x - 1, and VT=2x + 3. Describe the possible values of x. Any help is appreciated. I've been trying to figure this one out for a while. The vertices of the triangle are the points where these lines intersect (ie, the points where their equations are equal). Does that help? 3. I don't think I understand how the equations are equal there. 4. I don't think I understand how the equations are equal there. 5. Originally Posted by jstark I don't think I understand how the equations are equal there. All three equations represent straight lines. Those lines intersect each other at the points T, U and V. For example, your equation for TU and your equation for UV intersect at the point U(a,b). So x = a, and y = b are solutions to BOTH equations, since that point lies on BOTH lines. If you think of the equations being y = 6x-11 and y = 3x-1. The point at which these two lines intersect lies on both lines. And hence the y-coordinate of that point is equal for both lines. Hence y = y Hence 6x-11 = 3x-1 3x = 10 x = 10/3 This is the x-coordinate of the intersection (and hence the x coordinate of point U!). If you plug that value of x into either of the two equations, you'll get the y coordinated for U. 6. OK 7. OK. I think I understand. Thank you so much. 8. Originally Posted by jstark OK. I think I understand. Thank you so much. Good. Now, if you find the three points T U and V using the method above, then you should find which one has the LOWEST x coordinate, and which one has the HIGHEST x coordinate. The range of possible x values for this triangle is the values in between those two coordinates, including the coordinates themselves! Let me know what answer you get, I'll let you know if you're right or not. 9. Originally Posted by jstark Triangle TUV has sides TU, UV, and VT. TU=6x - 11, UV=3x - 1, and VT=2x + 3. Describe the possible values of x. Any help is appreciated. I've been trying to figure this one out for a while. If and only if the given terms refer to lengths of the triangles sides you have to use the triangle inequality: The triangle ABC has the sides a, b, c. Then $a+b>c\ \wedge\ a+c>b\ \wedge\ b+c>a$ So you get: $6x-11+3x-1>2x+3\ \wedge\ 6x-11+2x+3>3x-1\ \wedge\ 3x-1+2x+3>6x-11$ $x>\dfrac{15}7\ \wedge\ x>\dfrac95\ \wedge\ 13>x$ Therefore you know: $\boxed{\dfrac{15}7 < x< 13}$
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9338040947914124, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/54412/combining-two-finite-number-fock-spaces-into-one?answertab=active
Combining two finite number fock spaces into one Say I have two separate systems of identical Bosons, one with N Bosons the other with M. System one is described by a state $|\psi_1\rangle$ the other with $|\psi_2 \rangle$ which are expressed in a Fock space like $|\psi_1\rangle = \sum_{n_1,...,n_{max}} \alpha(n_1,..,n_{max}) |n_1,n_2,..,n_{max}\rangle$ $|\psi_2\rangle = \sum_{n_1,...,n_{max}} \beta(n_1,..,n_{max}) |n_1,n_2,..,n_{max}\rangle$ where $|n_1,n_2,..,n_{max}>=\prod_{k=1}^{max} \frac{(a^{\dagger}_k)^{n_k}}{\sqrt{n_k!}} |vac\rangle$ with "max" denoting the maximum occupied mode, $\alpha$ and $\beta$ some constants depending on each of the values (zero if $\sum_{k} n_k$ is not equal to $N$ for $\alpha$ or $M$ for $\beta$) and the wavefunction satisfying all the usual normalisation conditions. At someone point I wish to bring these two subsystems together, this state can be expressed as an $N+M$ body Fock space. $|\psi_{total}\rangle = |\psi_1\rangle \otimes|\psi_2\rangle$ For distinguishable particles this is fairly trivial, however the symmetry makes it somewhat unclear (to me) how to do this and give states with the appropriate amplitudes. Can anyone tell me, or point me to an appropriate book or paper? - – Michael Brown Feb 19 at 15:47 1 Answer Possible answer to the question (if anyone could confirm this that would be great) $$|\psi_1⟩⊗|\psi_2⟩ \propto \sum_{m_1,..,m_{max},n_1,...,n_{max}} \beta(m_1,..)\alpha(n_1,..) \prod_{k=1}^{max} \frac{(a^{\dagger}_k)^{n_k+m_k}}{\sqrt{n_k! m_k!}} |vac \rangle$$ Which seems to give the correct weighting with symmetry but does require a correction to the normalisation. -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 14, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9258964657783508, "perplexity_flag": "head"}
http://mathoverflow.net/questions/31384/what-are-the-computationally-useful-ways-of-thinking-about-killing-fields/31425
## What are the computationally useful ways of thinking about Killing fields? ### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points) One definition of the Killing field is as those vector fields along which the Lie Derivative of the metric vanishes. But for very many calculation purposes the useful way to think of them when dealing with the Riemann-Christoffel connection for a Riemannian metric is that they satisfy the differential equation, $\nabla_\nu X_\nu + \nabla_\nu X_\nu = 0$ I suppose solving the above complicated set of coupled partial differential equations is the only way to find Killing fields given a metric. (That's the only way I have ever done it!) Would be happy to know if there are more elegant ways. But what would be the slickest way to say prove that the scalar curvature is constant along a Killing field, i.e it will satisfy the equation $X^\mu \nabla_\mu R = 0$ ? Sometimes it is probably good to think of Killing fields as satisfying a Helmholtz-like equation sourced by the Ricci tensor. Then again very often one first starts off knowing what symmetries one wants on the Riemannian manifold and hence knows the Lie algebra that the Killing fields on it should satisfy. Knowing this one wants to find a Riemannian metric which has that property. As I had asked in this earlier question There is also the issue of being able to relate Killing fields to Laplacians on homogeneous spaces and H-connection like earlier raised in this question. Good literature on this has been very hard to find beyond the very terse and cryptic appendix of a paper by Camporesi and Higuchi. I would like to know what are the different ways of thinking about Killing fields some of which will say help easily understand the connection to Laplacians, some of which make calculations of metric from Killing fields easier and which ones give good proofs of the constancy of scalar curvature along them. It would be enlightening to see cases where the "original" definition in terms of Lie derivative is directly useful without needing to put it in coordinates. - ## 2 Answers what would be the slickest way to say prove that the scalar curvature is constant along a Killing Field? The (local) flow of the Killing field must preserve the metric and hence also the scalar curvature. Is that answer too terse? - @Michael Not terse but I was looking for a computational proof of this statement (may be I am sounding naive!) I get your intuitive argument but I was looking for something like starting from the differential equation that we know a Killing Field should satisfy can one prove the equation that $X^\mu \nabla _ \mu R=0$ ? – Anirbit Jul 11 2010 at 17:26 Michael's argument is not just intuitive. It's a rigorous proof. If you integrate the Killing field, you get a 1-parameter group of isometries $\phi_t$. So $g_t = g$ for each $t$ near zero, where $g_t = \phi_t^*g$. Therefore, $R\circ\phi_t = R(g_t) = R(g)$. Differentiating with respect to $t$ at $t = 0$ gives the equation you want. But you probably want to prove the equation directly from the definition of a Killing field. This is reasonable and can be extracted from the proof above. – Deane Yang Jul 11 2010 at 18:05 ### You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you. 
Sometimes it is probably good to think of Killing fields as satisfying a Helmholtz-like equation sourced by the Ricci scalar. Firstly, the source is not the Ricci scalar, but the Ricci tensor. The equation, up to a minus sign and some constant factors should be $\triangle_g \tau_a \propto R_{ab}\tau^b$. That expression sometimes is useful when you already know the metric and the Killing vector field, and are interested in a priori esimates in geometric analysis. My Ph.D. thesis contains some related examples. (Though in the Riemannian case on a compact manifold, the Bochner technique is generally more fruitful; the classical paper of Bochner's [in the Killing case] is, however, isomorphic to contracting the elliptic equation against the Killing field itself. http://projecteuclid.org/euclid.bams/1183509635 ) Anyway, the "Helmholtz like" equation is obtained by taking the trace of the Jacobi equation which is satisfied by any Killing vector field. So you are definitely losing information unless you work with only 2 or 3 manifolds. - Thanks for the references and pointing out the slip with the Ricci tensor. Corrected that. – Anirbit Jul 11 2010 at 17:20
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 11, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.945551872253418, "perplexity_flag": "head"}
http://www.chemeurope.com/en/encyclopedia/Gas_in_a_box.html
My watch list my.chemeurope.com my.chemeurope.com With an accout for my.chemeurope.com you can always see everything at a glance – and you can configure your own website and individual newsletter. • My watch list • My saved searches • My saved topics • Home • Encyclopedia • Gas_in_a_box # Gas in a box In quantum mechanics, the results of the quantum particle in a box can be used to look at the equilibrium situation for a quantum ideal gas in a box which is a box containing a large number of molecules which do not interact with each other except for instantaneous thermalizing collisions. This simple model can be used to describe the classical ideal gas as well as the various quantum ideal gases such as the ideal massive Fermi gas, the ideal massive Bose gas as well as black body radiation which may be treated as a massless Bose gas. Using the results from either Maxwell-Boltzmann statistics, Bose-Einstein statistics or Fermi-Dirac statistics we use the Thomas-Fermi approximation and go to the limit of a very large box, and express the degeneracy of the energy states as a differential, and summations over states as integrals. We will then be in a position to calculate the thermodynamic properties of the gas using the partition function or the grand partition function. These results will be carried out for both massive and massless particles. More complete calculations will be left to separate articles, but some simple examples will be given in this article. ## Thomas-Fermi approximation for the degeneracy of states For both massive and massless particles in a box, the states of a particle are enumerated by a set of quantum numbers. [nx, ny, nz]. The absolute value of the momentum is given by: $p=\frac{h}{2L}\sqrt{n_x^2+n_y^2+n_z^2}~~~~~~~~~n_i=1,2,3,\ldots$ where h is Planck's constant and L is the length of a side of the box. We can think of each possible state of a particle as a point on a 3-dimensional grid of positive integers. The distance from the origin to any point will be $n=\sqrt{n_x^2+n_y^2+n_z^2}=\frac{2Lp}{h}$ Suppose each set of quantum numbers specify f  states where f  is the number of internal degrees of freedom of the particle that can be altered by collision. For example, a spin 1/2 particle would have f=2, one for each spin state. The Thomas-Fermi approximation assumes that the quantum numbers are so large that they may be considered to be a continuum. For large values of n , we can estimate the number of states with absolute value of momentum less than or equal to p from the above equation as $g=\left(\frac{f}{8}\right) \frac{4}{3}\pi n^3 = \frac{4\pi f}{3} \left(\frac{Lp}{h}\right)^3$ which is just f  times the volume of a sphere of radius n  divided by eight since we only consider the octant with positive ni . The number of states with absolute value of momentum between p  and p+dp  is therefore $dg=\frac{\pi}{2}~f n^2\,dn = \frac{4\pi fV}{h^3}~ p^2\,dp$ where V=L3  is the volume of the box. Notice that in using this continuum approximation, we have lost the ability to characterize the low-energy states including the ground state where ni =1. For most cases this will not be a problem, but when considering Bose-Einstein condensation, in which a large portion of the gas is in or near the ground state, we will need to recover the ability to deal with low energy states. 
Without using the continuum approximation, the number of particles with energy εi is given by $N_i = \frac{g_i}{\Phi}$ where $\Phi=e^{\beta(\epsilon_i-\mu)}$ for particles obeying Maxwell-Boltzmann statistics $\Phi=e^{\beta(\epsilon_i-\mu)}-1$ for particles obeying Bose-Einstein statistics $\Phi=e^{\beta(\epsilon_i-\mu)}+1$ for particles obeying Fermi-Dirac statistics with β = 1/kT  with k  being Boltzmann's constant, T  being temperature, gi is the degeneracy of the state i, and μ being the chemical potential. Using the continuum approximation, the number of particles dN  with energy between E  and E+dE  is now written: $dN= \frac{dg}{\Phi}$ ## The energy distribution function We are now in a position to determine some distribution functions for the "gas in a box". The distribution function for any variable A is PAdA and is equal to the fraction of particles which have values for A  between A  and A+dA $P_A~dA = \frac{dN}{N} = \frac{dg}{N\Phi}$ It follows that: $\int_A P_A~dA = 1$ The distribution function for the absolute value of the momentum is: $P_p~dp = \frac{Vf}{N}~\frac{4\pi}{h^3\Phi}~p^2dp$ and the distribution function for the energy E  is: $P_E~dE = P_p\frac{dp}{dE}~dE$ For a particle in a box (and for a free particle as well), the relationship between energy E  and momentum p  is different for massive and massless particles. For massive particles, we have $E=\frac{p^2}{2m}$ while for massless particles: $E=pc\,$ where m  is the mass of the particle and c  is the speed of light. Using these relationships we have: • For massive particles $dg = \left(\frac{Vf}{\Lambda^3}\right) \frac{2}{\sqrt{\pi}}~\beta^{3/2}E^{1/2}~dE$ $P_E~dE = \frac{1}{N}\left(\frac{Vf}{\Lambda^3}\right) \frac{2}{\sqrt{\pi}}~\frac{\beta^{3/2}E^{1/2}}{\Phi}~dE$ where Λ is the thermal wavelength or thermal de Broglie wavelength of the gas. $\Lambda =\sqrt{\frac{h^2 \beta }{2\pi m}}$ This is an important quantity, since when Λ is on the order of the interparticle distance (V/N)1/3 , quantum effects begin to dominate and the gas can no longer be considered to be a Maxwell-Boltzmann gas. See configuration integral for a detailed derivation of the expression for $\displaystyle \Lambda$. • For massless particles $dg = \left(\frac{Vf}{\Lambda^3}\right)\frac{1}{2}~\beta^3E^2~dE$ $P_E~dE = \frac{1}{N}\left(\frac{Vf}{\Lambda^3}\right) \frac{1}{2}~\frac{\beta^3E^2}{\Phi}~dE$ where Λ is now the thermal wavelength for massless particles. $\Lambda = \frac{ch\beta}{2\,\pi^{1/3}}$ ## Specific examples The following sections give an example of results for some specific cases. ### Massive Maxwell-Boltzmann particles For this case: $\Phi=e^{\beta(E-\mu)}\,$ Integrating the energy distribution function and solving for N gives $N = \left(\frac{Vf}{\Lambda^3}\right)\,\,e^{\beta\mu}$ Substituting into the original energy distribution function gives $P_E~dE = 2 \sqrt{\frac{\beta^3 E}{\pi}}~e^{-\beta E}~dE$ which are the same results obtained classically for the Maxwell-Boltzmann distribution. Further results can be found in the article on the classical ideal gas. ### Massive Bose-Einstein particles For this case: $\Phi=e^{\beta \epsilon}/z-1\,$ where z is defined as $z=e^{\beta\mu}\,$ Integrating the energy distribution function and solving for N gives the particle number $N = \left(\frac{Vf}{\Lambda^3}\right)\textrm{Li}_{3/2}(z)$ Where Lis(z) is the polylogarithm function and Λ is the thermal wavelength. The polylogarithm term must always be positive and real, which means its value will go from 0 to ζ(3/2) as z  goes from 0 to 1. 
As the temperature drops towards zero, Λ will become larger and larger, until finally Λ will reach a critical value Λc where z=1 and $N = \left(\frac{Vf}{\Lambda_c^3}\right)\zeta(3/2)$ The temperature at which Λ=Λc is the critical temperature. For temperatures below this critical temperature, the above equation for the particle number has no solution. The critical temperature is the temperature at which a Bose-Einstein condensate begins to form. The problem is, as mentioned above, that the ground state has been ignored in the continuum approximation. It turns out, however, that the above equation for particle number expresses the number of bosons in excited states rather well, and so we may write: $N=\frac{g_0 z}{1-z}+\left(\frac{Vf}{\Lambda^3}\right)\textrm{Li}_{3/2}(z)$ where the added term is the number of particles in the ground state. (The ground state energy has been ignored.) This equation will hold down to zero temperature. Further results can be found in the article on the ideal Bose gas. ### Massless Bose-Einstein particles (e.g. black body radiation) For the case of massless particles, we must now use the massless energy distribution function. It is convenient to convert this function to a frequency distribution function: $P_\nu~d\nu = \frac{h^2}{N}\left(\frac{Vf}{\Lambda^3}\right) \frac{1}{2}~\frac{\beta^3\nu^2}{e^{(h\nu-\mu)/kT}-1}~d\nu$ where Λ is the thermal wavelength for massless particles. The spectral energy density (energy per unit volume per unit frequency) is then $U_\nu~d\nu = \left(\frac{N\,h\nu}{V}\right) P_\nu~d\nu = \frac{4\pi f h\nu^3 }{c^3}~\frac{1}{e^{(h\nu-\mu)/kT}-1}~d\nu$ Other thermodynamic parameters may be derived analogously to the case for massive particles. For example, integrating the frequency distribution function and solving for N gives the number of particles: $N=\frac{16\,\pi V}{c^3h^3\beta^3}\,\mathrm{Li}_3\left(e^{\mu/kT}\right)$ The most common massless Bose gas is a photon gas in a black body. Taking the "box" to be a black body cavity, the photons are continually being absorbed and re-emitted by the walls. When this is the case, the number of photons is not conserved. In the derivation of Bose-Einstein statistics, when the restraint on the number of particles is removed, this is effectively the same as setting the chemical potential (μ) to zero. Furthermore, since photons have two spin states, we have f=2. The spectral energy density is then $U_\nu~d\nu = \frac{8\pi h\nu^3 }{c^3}~\frac{1}{e^{h\nu/kT}-1}~d\nu$ which is just the spectral energy density for Planck's law of black body radiation. Note that if we had carried out this procedure for massless Maxwell-Boltzmann particles, we would recover Wien's distribution which approximates a Planck's distribution for high temperature or low density. In certain situations, the reactions involving photons will result in the conservation of the number of photons (e.g. light-emitting diodes, "white" cavities). In these cases, the photon distribution function will involve a non-zero chemical potential. (Hermann 2005) Another massless Bose gas is given by the Debye model for heat capacity. This considers a gas of phonons in a box and differs from the development for photons in that the speed of the phonons is less than light speed, and there is a maximum allowed wavelength for each axis of the box. This means that the integration over phase space cannot be carried out to infinity, and instead of results being expressed in polylogarithms, they are expressed in the related Debye functions. 
### Massive Fermi-Dirac particles (e.g. electrons in a metal) For this case: $\Phi=e^{\beta(E-\mu)}+1\,$ Integrating the energy distribution function gives $N=\left(\frac{Vf}{\Lambda^3}\right)\left[-\textrm{Li}_{3/2}(-z)\right]$ Where again, Lis(z) is the polylogarithm function and Λ is the thermal de Broglie wavelength. Further results can be found in the article on the ideal Fermi gas. ## References • Herrmann, F.; Würfel, P. (August 2005). "Light with nonzero chemical potential". American Journal of Physics 73 (8): 717-723. doi:10.1119/1.1904623. Retrieved on 2006-11-20. • Huang, Kerson (1967). Statistical Mechanics. New York: John Wiley & Sons. • Isihara, A. (1971). Statistical Physics. New York: Academic Press. • Landau, L. D.; E. M. Lifshitz (1996). Statistical Physics, 3rd Edition Part 1, Oxford: Butterworth-Heinemann. • Yan, Zijun (2000). "General thermal wavelength and its applications" (PDF). Eur. J. Phys. 21: 625-631. doi:10.1088/0143-0807/21/6/314. Retrieved on 2006-11-20.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 36, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8675549626350403, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/5784/why-is-there-a-breakdown-in-kolmogorov-scaling-in-turbulence?answertab=votes
# Why is there a breakdown in Kolmogorov scaling in turbulence? Why is there a breakdown of Kolmogorov scaling in turbulence? What causes intermittency? - ## 1 Answer The basic Kolomogorov theory is a mean-field theory -- dissipation rate is considered to be constant over the whole volume of liquid. While the dissipation rate should depend on position -- that is where intermittency comes from. Somewhere in 60's Kolmogorov and Obukhov attempted to account for this effect. But the problem is that there is the "Landau objection" -- the impossibility to devise universal formula, independent on the geometry of the problem (with scale $l$) for relatively small scales (which are $\ll l$): ... It may be thought that the possibility exists in principle of obtaining a universal formula, applicable to any turbulent flow, which should give $B_{rr}$ and $B_{tt}$ for all distances $r$ that are small compared with $l$. In fact, however, there can be no such formula, as we see from the following argument. The instantaneous value of $(v_{2i}-v_{1i})(v_{2k}-v_{1k})$ might in principle be expressed as a universal function of the energy dissipation $\epsilon$ at the instant considered. When we average these expressions, however, an important part will be played by the manner of variation of $\epsilon$ over times of the order of the periods of large eddies (with size $\sim l$) and this variation is different for different flows. The result of averaging therefore cannot be universal. (Fluid Mechanics, Landau and Lifshitz, chapter 34.) - Great question and a great answer @Kostya +1 – user346 Feb 24 '11 at 13:16
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 10, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9336321353912354, "perplexity_flag": "head"}
http://mathhelpforum.com/differential-geometry/115646-finite-infinite-sets-denumerable.html
# Thread: 1. ## finite and infinite sets...denumerable Given Q is denumerable, such that R is not denumerable. Show now that R\Q is not denumerable. 2. Find injective map from the union of two countable sets to the naturals. 3. Problem: Given that $\mathbb{Q}\cong\mathbb{N}$ and $\mathbb{R}$ is not, prove that $\mathbb{R}-\mathbb{Q}$ is not as well. Proof: Lemma: If $n\ge2\wedge n\in\mathbb{N}$ and $E_i\cong\mathbb{N}\quad 1\le i\le n$ then $\bigcup_{i=1}^{n}E_i\cong\mathbb{N}$. Proof: It suffices to prove this for the case when $A\cap B=\varnothing$(why?). Since $A,B\cong\mathbb{N}$ there exists $f,f'$ such that $f:A\mapsto\mathbb{N}$ and $f':B\mapsto\mathbb{N}$ bijectively. Define a new mapping $\tilde{f}:A\cup B\mapsto\mathbb{Z}-{0}$ by $\tilde{f}(x)=\begin{cases} f(x) & \mbox{if} \quad x\in A \\ -f'(x) & \mbox{if} \quad x\in B \end{cases}$. Clearly this mapping is bijective, therefore $A\cup B\cong\mathbb{Z}-{0}$. But it can easily be shown that $\mathbb{N}\cong\mathbb{Z}\cong\mathbb{Z}-{0}$. And since $\cong$ is an equivalence relation it follows that $A\cup B\cong\mathbb{N}$. If $A\cap B\ne\varnothing$. The lemma follows by induction. $\blacksquare$ So assume that $\mathbb{R}-\mathbb{Q}$ was countable, then by the above lemma $\mathbb{Q}\cup\left(\mathbb{R}-\mathbb{Q}\right)=\mathbb{R}$ is countable. Contradiction. 4. Hi, What I wrote before that that a union of countable sets is countable , but in this case we have Q u (R\Q)= R were its is countable U non countable = Non Countable, how do I show this law to be be correct given we know Q is countable and R is not countable? 5. Originally Posted by hebby Hi, What I wrote before that that a union of countable sets is countable , but in this case we have Q u (R\Q)= R were its is countable U non countable = Non Countable, how do I show this law to be be correct given we know Q is countable and R is not countable? What?? I don't even understand what you mean. It sounds like you are restating the question. Was there something unsatisfactory with my proof? 6. Originally Posted by hebby Hi, What I wrote before that that a union of countable sets is countable , but in this case we have Q u (R\Q)= R were its is countable U non countable = Non Countable, how do I show this law to be be correct given we know Q is countable and R is not countable? He was using proof by contradiction, and showing that if we assumed $\mathbb{R}\backslash\mathbb{Q}$ was countable, we could reach a false conclusion. So therefore $\mathbb{R}\backslash\mathbb{Q}$ must be uncountable. 7. Hi Thanks for the help, but your proof is rather complicated for me, maybe could you write a more simple proof so I can understand it? 8. Originally Posted by hebby Thanks for the help, but your proof is rather complicated for me, maybe could you write a more simple proof so I can understand it? Has your background prepared you for this question? Do you understand that the union of two countable set is countable? Do you understand that $\mathbb{Q}\cup(\mathbb{R}\setminus\mathbb{Q})=\mat hbb{R}?$ So what if $\mathbb{R}\setminus\mathbb{Q}$ is countable? 9. well i have been out of school for some time...well So what if is countable then that means there is contradiction because R is not denumerable...thanks for the breakdown 10. Originally Posted by hebby Hi Thanks for the help, but your proof is rather complicated for me, maybe could you write a more simple proof so I can understand it? 
Here is a list of things that you are assumed to know (if you doubt/don't know these then say so): - $\mathbb{R}=\mathbb{R}\backslash\mathbb{Q} \cup \mathbb{Q}$ - A countable set can be mapped bijectively to the natural numbers {1,2,3...} - $\mathbb{Z}$ is countable (take the mapping that sends negative integers to odd naturals and positive integers to even naturals) - The composition of two bijections is a bijection (so a countable set is one who is bijective to an other countable set) Suppose that R\Q is countable. Then as Q is countable we can bijectively map Q to N using a function (say f). Same is true for R\Q, say g maps R\Q to N bijectively. Notice that R and R\Q are disjoint, hence if we define $h:\mathbb{R}\rightarrow \mathbb{Z}$ by $<br /> h(x)=\begin{cases} f(x) & \mbox{if} \quad x\in \mathbb{Q} \\ -g(x) & \mbox{if} \quad x\in \mathbb{R}\backslash\mathbb{Q} \end{cases}<br />$ then we see that that h is a bijection. Hence R is countable (by the fact that Z is countable). 11. Originally Posted by Drexel28 Lemma: If $n\ge2\wedge n\in\mathbb{N}$ and $E_i\cong\mathbb{N}\quad 1\le i\le n$ then $\bigcup_{i=1}^{n}E_i\cong\mathbb{N}$. I don't like the way it's stated. Imho there are some wrong things. It suffices for n to be $\geq 1$. Then I don't understand why you put $1\le i \le n$, since i is just an index in the union, and we don't need any condition over it (it's given in the notation of the union). Then more generally, it is sufficient to put a countable union, not necessarily finite. Note : E is countable means that there exists an injection from E to $\mathbb{N}$ Lemma 1 : $\mathbb{N}^2$ is countable. Proof : The easiest way is to consider the attached picture. Then you denote... : point 1: (0,0) point 2: (1,0) point 3: (1,1) point 4: (0,1) point 5: (0,2) point 6: (1,2) ... This is a bijection from N² to N. Lemma 2 : A countable union of countable sets is countable. Proof : Suppose we have a sequence $(E_n)_{n\geq 0}$ of countable sets. Then $\forall n \in\mathbb{N}, \exists \varphi_n ~:~ E_n \to \mathbb{N}$, an injective mapping. Let $E=\bigcup_{n\geq 0} E_n$ And for any $x\in E$, there exists $n\in\mathbb{N},x\in E_n$. So define $N(x)=\min \{n ~:~ x\in E_n\}$ Now consider $\phi ~:~ E \to \mathbb{N}^2$, where $\phi(x)=(N(x),\varphi_{N(x)}(x))$ which is an injection (easy to prove) And for finishing it, (injection o bijection) is an injection. Attached Thumbnails 12. Originally Posted by Moo I don't like the way it's stated. Imho there are some wrong things. It suffices for n to be $\geq 1$. Then I don't understand why you put $1\le i \le n$, since i is just an index in the union, and we don't need any condition over it (it's given in the notation of the union). Then more generally, it is sufficient to put a countable union, not necessarily finite. Note : E is countable means that there exists an injection from E to $\mathbb{N}$ $\color{red}\star$ Lemma 1 : $\mathbb{N}^2$ is countable. Proof : The easiest way is to consider the attached picture. Then you denote... : point 1: (0,0) point 2: (1,0) point 3: (1,1) point 4: (0,1) point 5: (0,2) point 6: (1,2) ... This is a bijection from N² to N. Lemma 2 : A countable union of countable sets is countable. Proof : Suppose we have a sequence $(E_n)_{n\geq 0}$ of countable sets. Then $\forall n \in\mathbb{N}, \exists \varphi_n ~:~ E_n \to \mathbb{N}$, an injective mapping. Let $E=\bigcup_{n\geq 0} E_n$ And for any $x\in E$, there exists $n\in\mathbb{N},x\in E_n$. 
So define $N(x)=\min \{n ~:~ x\in E_n\}$ Now consider $\phi ~:~ E \to \mathbb{N}^2$, where $\phi(x)=(N(x),\varphi_{N(x)}(x))$ which is an injection (easy to prove) And for finishing it, (injection o bijection) is an injection. I forgot to change the index to $\scriptstyle k$. The reason why it is $n\ge 2$ is because its obvious if $n=1$ and wasn't worth the weriting. Lastly, doing the countable case requires your nice little pictures which: A) I don't like pictures, B) I can't draw pictures, C) was uneccessary here. Also, the starred section above is a little misleading. Most books use countable as countably infinite and "at most countable" for what you said. 13. Originally Posted by Drexel28 A) I don't like pictures Amen to that. 14. Originally Posted by Drexel28 Most books use countable as countably infinite and "at most countable" for what you said. That is certainly not been my experience in years of reviewing textbooks. It is true that many texts make that distinction between finite and denumerable sets. Then say a countable set is either. Originally Posted by redsoxfan325 I don't like pictures Amen to that. Amen again. I have never liked that zigzag proof. Here is a way that really teaches students the structure of that problem. Let $\mathbb{N}=\{0,1,2,\cdots\}$ then define $\Phi :\mathbb{N} \times \mathbb{N} \mapsto \mathbb{Z}^ +$ as $\Phi (m,n) = 2^m (2n + 1)$. In proving that $\Phi$ is a bijection a great many concepts are learned. 15. Who cares if you don't like sketches ???? You think I do too ? To be honest, my friend did this sketch, and had to explain it 3 times before I understood. You're just a bunch of selfish people, who don't think that sometimes people reply to those who ask questions, and that maybe the ones who ask questions may understand better this way !!!!!!!!!! And that's precisely the problem with you, M. Drexel28, who dare say that he doesn't like sketches, and (presumably) suppose it shouldn't be used in order to prove or to explain something. But what about your Greek letters, that everybody would be such in an ease to manipulate ? What about that isomorphic sign that, of course, everybody would recognize at first sight and understand ? What about your so-perfect proof that everybody would understand where A and B come from ? And what about that excellent example of you saying that in most books, countable means being in bijection with $\mathbb{N}$, with your so-long experience of analysis books ? For your information, I precised at the beginning of my message what countable would stand for in the rest of the message. And why the heck would you say that the sketch is unnecessary here ? Do you think you can just memorize the function Plato gave and present it one year after reading it ? The aim of my message was to give a way to prove a general thing, that's what "more generally" means. I forgot to change the index to $\scriptstyle k$ And so, where does k intervene ? I wonder !! Just things among others. And, in response to your PM, I won't change my way, if I see something I don't like, or where there are problems (and there were !), I will say it. I won't shut up in order to be pleasant to M. Drexel28.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 66, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9660066366195679, "perplexity_flag": "middle"}
http://mathhelpforum.com/differential-equations/88417-fourier-series-cosine.html
# Thread: 1. ## Fourier series cosine Hello, can anybody help me with this problem? Find the even (cosine) extension of the function given in Q. 6 as a Fourier series. Write down, without making any calculations, the odd extension as a Fourier series. This is the function given in question 6: $f(x) = \sin(x)$ for $0 \le x \le \pi$. 2. Originally Posted by math_lete Hello, can anybody help me with this problem? Find the even (cosine) extension of the function given in Q. 6 as a Fourier series. Write down, without making any calculations, the odd extension as a Fourier series. This is the function given in question 6: $f(x) = \sin(x)$ for $0 \le x \le \pi$. $$y_c = \frac{a_0}{2} + \sum_{n=1}^{\infty} a_n \cos n x$$ where $a_n = \frac{2}{\pi}\int_0^{\pi} \sin x \cos n x\,dx$ For the Fourier sine series there's just one term Spoiler: $y_s = \sin x$ 3. Originally Posted by math_lete Hello, can anybody help me with this problem? Find the even (cosine) extension of the function given in Q. 6 as a Fourier series. Write down, without making any calculations, the odd extension as a Fourier series. This is the function given in question 6: $f(x) = \sin(x)$ for $0 \le x \le \pi$. The "even extension" would be, of course, f(x) = -sin(x) for $-\pi\le x\le 0$, sin(x) for $0\le x\le \pi$. Apply the usual formulas for Fourier coefficients to that. Simplifications: because this is an even function, the sine coefficients will be 0 and you can get the cosine coefficients by integrating from 0 to $\pi$ and doubling. The "odd extension", which, as implied, you can do without computation, is just sin(x) for $-\pi\le x \le \pi$, and the Fourier series is just sin(x) itself; that is, all coefficients of cos(nx) are 0, and the coefficient of sin(nx) is 0 unless n = 1, in which case it is 1.
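(A worked sketch added here, not part of the original thread.) Neither reply evaluates the coefficient integral, so for completeness, using the product-to-sum identity $\sin x\cos nx=\tfrac{1}{2}[\sin(1+n)x+\sin(1-n)x]$, one gets
$$a_0=\frac{2}{\pi}\int_0^{\pi}\sin x\,dx=\frac{4}{\pi},\qquad a_1=\frac{2}{\pi}\int_0^{\pi}\sin x\cos x\,dx=\frac{1}{\pi}\int_0^{\pi}\sin 2x\,dx=0,$$
$$a_n=\frac{2}{\pi}\int_0^{\pi}\sin x\cos nx\,dx=\frac{1}{\pi}\int_0^{\pi}\bigl[\sin(1+n)x+\sin(1-n)x\bigr]\,dx=\frac{2}{\pi}\,\frac{1+(-1)^n}{1-n^2}\quad(n\neq 1),$$
so only the even cosine terms survive and the even (cosine) extension is
$$|\sin x|=\frac{2}{\pi}-\frac{4}{\pi}\sum_{k=1}^{\infty}\frac{\cos 2kx}{4k^2-1},\qquad -\pi\le x\le\pi,$$
while the odd extension is, as stated above, just $\sin x$ itself.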
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 6, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9083432555198669, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/5277/determine-whether-a-number-is-prime?answertab=active
# Determine whether a number is prime How do I determine if a number is prime? I'm writing a program where a user inputs any integer and from that the program determines whether the number is prime, but how do I go about that? - 4 If you down vote my question please let me know why so that I can improve my question asking in the future. – KronoS Sep 24 '10 at 15:55 I had accidentally down-voted the question it appears... fixed now – jericson Sep 28 '10 at 23:41 @jericson no prob just wanted to make the question better. – KronoS Sep 29 '10 at 0:16 1 – mixedmath♦ Sep 5 '11 at 3:01 @mixedmath: what is your rationale for voting to close this qustion instead of the one you link to? – t.b. Sep 7 '11 at 16:51 show 1 more comment ## 5 Answers Have the program find the remainder when dividing the input (say n) by 2, 3, 4, ..., $\sqrt n$ (or the following integer if $\sqrt n$ is not an integer.) If this value ever leaves a remainder of zero then your number is composite and you can stop checking divisors. If the remainder is non-zero for all of these values then your number is prime. There are more efficient ways to check primeness but this is probably the easiest way to program it and I imagine will suffice for your purposes. Unless you are checking very very large numbers? - 2 Of course, once you check 2 you don't need to check any multiples of 2, and the same for 3 etc. But if you just want a simple algorithm and space/time constraints aren't an issue then what I wrote above would be very easy to program and would be fine for not too huge inputs. – jericson Sep 23 '10 at 5:42 This is exactly what I was looking for. Not doing any crazy large numbers. – KronoS Sep 24 '10 at 2:42 1 You don't need to check the following integer if $\sqrt{n}$ is not integral, you can stop just before it. If you store a list of primes, just divide by those up to $\sqrt{n}$ – Ross Millikan Sep 5 '11 at 3:52 @RossMillikan: I think this is a guard against rounding errors. If you compute $\sqrt{p^2}$ as a little bit less than $p$ due to round-off you would mischaracterize $p^2$ as composite, but not if you checked to the next integer. – Charles Apr 8 at 15:28 How do I mathematically determine if a number is prime? If the number is n, then dividing it by every prime number less than or equal to sqrt(n) and showing that there is a remainder. There are a number of different sieve solutions for finding prime numbers, the oldest and most famous of which is the Sieve of Eratosthenes. These are generally easy to programme and can, for example, find all of the primes below 100,000 in a few milliseconds on a modern processor. - Algorithm posted by jericson is the best for basic purposes. IMHO, for programming competitions and for practical purposes randomized algorithms are best. Rabin-Miller is my favorite. Take a look at Rabin-Miller primality testing algorithm code @ TopCoder. As primes are in P, there is deterministic, polynomial time algorithm called AKS primality test. - 2 Calling the Miller-Rabin algorithm a “nondeterministic algorithm” is technically correct but misleading (in the same way as calling a rectangle a parallelogram). It is a randomized algorithm. – Tsuyoshi Ito Sep 23 '10 at 12:36 @Tsuyoshi Ito: Changed. Thanks! :) – Pratik Deoghare Sep 23 '10 at 12:53 1 It's worth mentioning that the efficient versions of AKS are no longer deterministic. – Charles Sep 27 '10 at 14:54 1 @Sri: That's actually the funny thing about AKS; thus, it remains more a curiosity than a practical test. – J. M. 
Sep 5 '11 at 2:57 1 @Srivatsan Narayanan: Glad to help. It's exciting times we're seeing! When the test first came out it was hard to imagine an efficient version, but thanks to improvements on the exponent (starting from the work of Berrizbeitia) and amazing millionfold improvements to the constant factor (due to Bernstein) the algorithm looks usable if not yet competitive. – Charles Sep 5 '11 at 3:56 show 3 more comments There are many different algorithms for primality testing. See the Wikipedia page for an introduction and see Henri Cohen's book "A course in computational algebraic number theory" for further details. See also Caldwell's Prime Pages. - For very small numbers (less than a million), trial division is the best way: divide by 2, 3, 5, and so on until the square root of the number. If you find a factor, the number is composite; otherwise, the number is prime. For larger numbers there are better methods, but choosing which one depends on how much work you're willing to put into the program. It is now known that there are no BPSW-pseudoprimes below $2^{64}$, so if you can write that test (see here for details) then you have a very quick test for primality. If you only need to test up to $2^{32}$, you can simply check if the number is a 2-strong pseudoprime. If so, test if it's one of 2314 exceptions (this can be done in 12 or 13 steps with a binary search); if the test fails or it's an exception, the number is composite, otherwise prime. (You can go higher than $2^{32}$ if you're willing to build an appropriate table of exceptions.) For larger numbers, the work is usually split into two parts: determining with high probability (say, 99.99999999%) that the number is prime, then actually proving that it is. What type of proof depends on the form and size of the number. - 4 Nice answer. Very informative! – jericson Sep 23 '10 at 5:43 4 This is definitely the best answer overall. – KronoS Sep 24 '10 at 2:43 @Charles you said "divide by 2, 3, 5, and so on until the square root of the number." Why is it sufficient to do check upto the square root of the number and why divide by only primes ? – Geek Aug 14 '12 at 4:59 A composite number is always divisible by a prime number <= its square root. You can think of it this way: if you find a composite proper factor, either that has a prime factor <= its square root (which will divide the original number) or it has a composite factor <= its square root (in which case iterate). It can't have factors onlly larger than its square root since you could take one and divide the number by it to find one smaller than the square root. The process of taking smaller and smaller composite factors can't continue forever since there can be at most n steps. – Charles Aug 14 '12 at 12:15
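(Not part of the original thread.) A minimal Python sketch of the trial-division test described in the answers above: check 2, then odd divisors up to $\sqrt n$. It is adequate for small inputs; the Miller-Rabin/BPSW machinery mentioned for larger numbers is not attempted here.

```python
import math

def is_prime(n: int) -> bool:
    """Trial division: fine for small n (say, below a few million)."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    # only odd candidate divisors up to floor(sqrt(n)) need to be checked
    for d in range(3, math.isqrt(n) + 1, 2):
        if n % d == 0:
            return False
    return True

print([p for p in range(50) if is_prime(p)])  # [2, 3, 5, 7, ..., 47]
```

Stopping at $\sqrt n$ is what keeps this fast enough for small inputs, for the reason given in the last comment: a composite number always has a prime factor no larger than its square root.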
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 10, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9148827791213989, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/12822/examples-of-class-group/12829
# Examples of Class Group I am trying to better understand how unique factorization of algebraic integers in an algebraic number ring implies that the class number of that number ring is 1. I am asking for some examples of this to get me started. - ## 3 Answers I'm not sure what you mean by giving you examples... the abstract argument seems simple enough to me: Every principal ideal that is generated by a prime element is a prime ideal: suppose $ab\in (p)$; then $p|ab$, and since $p$ is prime, $p|a$ or $p|b$, so $a\in (p)$ or $b\in (p)$. Conversely, a nonzero principal ideal is prime if and only if the generator is a prime element. Suppose $\mathfrak{M}$ is a maximal ideal of the number ring; let $a\in\mathfrak{M}$. Then $a = p_1\cdots p_k$ for some primes $p_i$ by unique factorization into primes, hence $(a) = (p_1\cdots p_k) = (p_1)(p_2)\cdots(p_k)\subseteq \mathfrak{M}$. Since $\mathfrak{M}$ is maximal, it is prime, so there exists $i$ such that $(p_i)\subseteq \mathfrak{M}$. But since $p_i$ is a prime element, then $(p_i)$ is a prime ideal, and in a number ring every nonzero prime ideal is maximal; hence $\mathfrak{M}=(p_i)$. Therefore, every maximal ideal is principal; in particular, every nonzero prime ideal is principal, since nonzero prime ideals are maximal. Now let $\mathfrak{I}$ be any ideal; if $\mathfrak{I}=(0)$, then it is principal. Otherwise, $\mathfrak{I}$ is a product of prime ideals by the Fundamental Theorem of Dedekind Domains, so $\mathfrak{I}=(p_1)\cdots (p_k)$ with $p_i$ a prime element (since primes are maximal, which are principal, and therefore generated by prime elements); but $(p_1)\cdots(p_k) = (p_1\cdots p_k)$ is principal, so $\mathfrak{I}$ is principal. Thus, if you have unique factorization, then every ideal is principal, and therefore the class number is $1$: given any two nonzero ideals $\mathfrak{I}$ and $\mathfrak{J}$, we can find elements $a$ and $b$ such that $\mathfrak{I}=(a)$ and $\mathfrak{J}=(b)$, and therefore we have that $b\mathfrak{I}=a\mathfrak{J}$, hence $\mathfrak{I}\sim \mathfrak{J}$ in the ideal class group. Since this holds for any two nonzero ideals, the ideal class group consists of exactly one class. The key really is that you have unique factorization into prime ideals and that nonzero primes are maximal; without that, you would be out of luck. One problem with the simplest examples ($\mathbb{Z}[i]$, $\mathbb{Z}[\sqrt{-2}]$, for instance) is that are also Euclidean domains, so that's really how you get the unique factorization (as a further consequence, a prior one already being that it is a PID). I think the smallest imaginary quadratic which is a PID but not Euclidean is $\mathbb{Z}[(1+\sqrt{-19})/2]$, and I'm not sure I would want to try to work out explicitly the argument above with specific elements of that ring. - Thanks. After thinking about this I realize that the point is that if A is a Dedekind Domain with unique factorization then A is a PID. Thus, Class number is 1 iff PID iff UFD for a number ring. – Jason Smith Dec 3 '10 at 17:16 $\rm PID\:$s are precisely the $\rm UFD\:$s having dimension $\le 1\$ (i.e. every nonzero prime ideal is maximal). Therefore $\rm UFD$ Dedekind domains are $\rm PID\:,$ so they have trivial class group. 
Here's the key result: THEOREM $\rm\ \ \ TFAE\$ for a $\rm UFD\ D$ $1)\$ prime ideals are maximal if nonzero $2)\$ prime ideals are principal $3)\$ maximal ideals are principal $4)\ \rm\ gcd(a,b) = 1\ \Rightarrow\ (a,b) = 1$ $5)\$ $\rm D$ is Bezout $6)\$ $\rm D$ is a $\rm PID$ Proof $\$ (sketch of $1 \Rightarrow 2 \Rightarrow 3 \Rightarrow 4 \Rightarrow 5 \Rightarrow 6 \Rightarrow 1$) $1\Rightarrow 2)$ $\rm\ \ P\supset (p)\ \Rightarrow\ P = (p)$ $2\Rightarrow 3)$ $\ \:$ Clear. $3\Rightarrow 4)$ $\ \ \rm (a,b) \subsetneq P = (p)\$ so $\rm\ (a,b) = 1$ $4\Rightarrow 5)$ $\ \ \rm c = \gcd(a,b)\ \Rightarrow\ (a,b) = c\ (a/c,b/c) = (c)$ $5\Rightarrow 6)$ $\ \ \rm 0 \ne I \subset D\:$ Bezout is generated by an elt with the least number of prime factors $6\Rightarrow 1)$ $\ \ \rm P \supset (p),\ a \not\in (p)\ \Rightarrow\ (a,p) = (1)\ \Rightarrow\ P = (p)$ - This is actually a frequently used (e.g. in the theory of divisors) fact of commutative algebra. A noetherian domain $A$ is a UFD if and only if every prime ideal of height one (by Krull's principal ideal theorem, this is the same thing as being minimal over a nonzero element) is principal (which corresponds, with a little work, to the statement that the Weil divisor class group of a normal ring is trivial iff it is factorial). In the case of a Dedekind domain (of which a ring of integers in a number field is a paradigm example), this means that every prime ideal is principal (as $(0)$ obviously is). In a Dedekind domain, every ideal is a product of prime ideals. So if every prime ideal is principal, so is every ideal. This is the statement that the class number is one. So this answers your question by reducing it to another result. You can find the proof and discussion of this result as Theorem 18.6 in http://people.fas.harvard.edu/~amathew/CAnotes.pdf - 1 @Akhil: Your cited proof should emphasize more clearly that it uses a very powerful and nontrivial result, viz. Krull's "principal ideal theorem". But the result at hand actually follows much more simply and generally as I indicated in my post. – Gone Dec 2 '10 at 23:05 1 @Akhil: You have done the math community a real service by nicely texing up notes from Jacob Lurie's lectures: thanks very much for this. May I recommend that you include a table of contents? This will make for easier "window shopping". – Pete L. Clark Dec 2 '10 at 23:22 @Bill: Yes. For the present application, however, nothing as fancy as Krull's PIT is necessary. If one defines height one as being minimal over a nonzerodivisor (as happened in the course whose notes I linked to above), then one can just prove the result in question directly. It is clear that "height one" under either definition (the usual one or the alternative one) will correspond to "maximal" for a Dedekind domain. – Akhil Mathew Dec 2 '10 at 23:24 @Pete: Thanks! Yes, I do plan on fixing the formatting of the notes up somewhat (there is actually a TOC, but it's currently a mess) and later posting a much edited version of them. – Akhil Mathew Dec 2 '10 at 23:26 1 @Akhil: thanks for the notes. @Pete: thanks for letting the rest of us know that "CAnotes.pdf" means Lurie's lectures. It is interesting that he delivered 40 lectures for a single class -- isn't that more than the typical US semester allows? – T.. Dec 3 '10 at 0:49 show 5 more comments
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 70, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9415053129196167, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/291823/changing-1-cosx-x-to-avoid-cancellation-error-for-0-and/291843
# Changing $(1-\cos(x))/x$ to avoid cancellation error for $0$ and $π$? I have to change this formula: $$\frac{1-\cos(x)}{x}$$ so that I can avoid the cancellation error. I can do this for 0 but not for $π$. So I get: $$\frac{\sin^2(x)}{x(1+\cos(x))}$$ which for $x$ close to $0$ gets rid of the cancellation error. But I don't know how to fix the error for $x$ close to $π$? I just want to know if I should be using trigonometric identities again? I've tried to use trig identities but nothing works. Any suggestions or hints? Edit: So for π I meant that sin(π) would be 0 so it wouldn't give me the correct value as (1-cosπ)/π=2/π. The second equation would overall give me 0. That's the error I meant for π. Sorry for the confusion there. - ## 3 Answers Another possiblity that avoids cancellation error at both places is $$\frac{2 \sin^2(x/2)}{x}$$ - The expression $\dfrac{1-\cos x}{x}$ becomes $0/0$ when $x=0$, but not when $x=\pi$. The expression $\dfrac{\sin^2 x}{x(1+\cos x)}$ becomes $0/0$ when $x=\pi$ but not when $x=0$. The two are equal to each other if $x$ is anything between $0$ and $\pi$. So you've got one form that "gets rid of the error" when $x=0$ and another form that "gets rid of the error" when $x=\pi$. The answer to your final question seems to be just to stick with what you started with. - If I got you right: $$\lim_{x \to 0} \frac{1-\cos x}{x}= \lim_{x \to 0} \frac{1-(1-\frac{x^2}{2}+O(x^4))}{x}=0\\ \lim_{x \to \pi} \frac{1-\cos x}{x}=\frac{2}{\pi}$$ The first limit is due to Taylor series expansion around 0, the second is since $\cos \pi=-1$ -
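(My own numerical illustration, not from the question or answers.) For very small $x$ the naive form $(1-\cos x)/x$ loses essentially all significant digits in double precision, while the rewritten form $2\sin^2(x/2)/x$ stays close to the true value, which is approximately $x/2$:

```python
import math

def naive(x):
    return (1.0 - math.cos(x)) / x           # catastrophic cancellation near 0

def stable(x):
    return 2.0 * math.sin(x / 2.0) ** 2 / x  # algebraically equal, numerically safe

for x in (1e-4, 1e-6, 1e-8):
    # the true value is approximately x/2 for small x; the naive form may even return 0
    print(f"x={x:.0e}  naive={naive(x):.6e}  stable={stable(x):.6e}  x/2={x/2:.6e}")
```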
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 22, "mathjax_display_tex": 4, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.956348180770874, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/210352/boolean-simplification?answertab=votes
Boolean Simplification I'm having some trouble getting a handle on this course. We are starting Boolean algebra and my professor wants us to simplify the following: (AB)'+(A'+B')'= (AB)'+BC+A'B'C'= I am assuming the "()" with "'" means the over-score above the variables. Forgive my ignorance, but my professor does not explain anything. He just says "Do!" in a Russian accent. I just want to understand. - 1 Answer By de Morgan's law $(AB)'=A'+B\,'$, and it's always true that $X+X'=1$, so $$(AB)'+(A'+B\,')'=(A'+B\,')+(A'+B\,')'=1\;.$$ Similarly, we can start simplifying $(AB)'+BC+A'B\,'C\,'$ by using de Morgan's law to expand the first term, getting $A'+B\,'+BC+A'B\,'C'$. Now use one of the distributive laws to get $$A'+A'B\,'C\,'=A'1+A'B\,'C\,'=A'(1+B\,'C\,')$$ and then an absorption law to get $$A'+A'B\,'C\,'=A'(1+B\,'C\,')=A'1=A'\;.$$ Thus, $$A'+B\,'+BC+A'B\,'C'=A'+B\,'+BC\;.$$ Note that I could have reached the same final result by simplifying $B\,'+A'B\,'C\,'$ to $B\,'$, using exactly the same approach. - Alright, that makes sense. My question now is: do we always follow those same steps, i.e. de Morgan's, distributive laws, absorption laws? Or does each equation go through a different approach, depending on how it is presented? – Leo Oct 10 '12 at 9:29 @Leo: In general it'll be a different approach each time, just as it was back in eighth- or ninth-grade algebra when you were asked to simplify an expression. In fact it is just algebra, though the rules are a little different from the ones for the familiar algebra of real numbers. – Brian M. Scott Oct 10 '12 at 9:35 Thanks Brian for the help. – Leo Oct 10 '12 at 9:48 @Leo: You're welcome. – Brian M. Scott Oct 10 '12 at 10:03
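(Added as a sanity check, not part of the answer above.) Both simplifications can be verified mechanically with a small truth table:

```python
from itertools import product

NOT = lambda x: 1 - x
for A, B, C in product((0, 1), repeat=3):
    # (AB)' + (A' + B')'  should always equal 1
    assert NOT(A & B) | NOT(NOT(A) | NOT(B)) == 1
    # (AB)' + BC + A'B'C'  should equal  A' + B' + BC
    lhs = NOT(A & B) | (B & C) | (NOT(A) & NOT(B) & NOT(C))
    rhs = NOT(A) | NOT(B) | (B & C)
    assert lhs == rhs
print("both identities hold for all eight assignments of A, B, C")
```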
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 6, "mathjax_display_tex": 4, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.92887282371521, "perplexity_flag": "middle"}
http://mathhelpforum.com/discrete-math/51925-big-o.html
# Thread: 1. ## Big-O Hello, I'm totally confused by Big-O notation. This is the definition that I was given: I don't understand how "k" fits into this equation. I also don't understand the significance of "C" or the relationship between the two constants (or witnesses). I'm still thinking through this... To prove that f(x) is big-O of g(x), do I just choose an arbitrary value for "C" that makes g(x) larger than or equal to f(x). By arbitrary I mean any value... that I have to prove that works. And then for "k"... does that just represent the lower boundary in the domain for that value of "C" that I chose out of thin air. So "k" might actually be some value of "x" that makes f(x) = C ( g(x) ). So all values of x greater than "k" will make C ( g(x) ) greater than f(x). My book isn't very clear about any of this and all the information on line contains formulas I don't understand anyways... Does anyone have a simple explanation of what the formula means? Is "C" an arbitrary value or is it's value more definitive? What is "k" and what is it's relationship to "C". Right now I'm just guessing.. Thanks, Robert Attached Thumbnails 2. Originally Posted by lord_bunny Hello, I'm totally confused by Big-O notation. This is the definition that I was given: I don't understand how "k" fits into this equation. I also don't understand the significance of "C" or the relationship between the two constants (or witnesses). I'm still thinking through this... To prove that f(x) is big-O of g(x), do I just choose an arbitrary value for "C" that makes g(x) larger than or equal to f(x). By arbitrary I mean any value... that I have to prove that works. And then for "k"... does that just represent the lower boundary in the domain for that value of "C" that I chose out of thin air. So "k" might actually be some value of "x" that makes f(x) = C ( g(x) ). So all values of x greater than "k" will make C ( g(x) ) greater than f(x). My book isn't very clear about any of this and all the information on line contains formulas I don't understand anyways... Does anyone have a simple explanation of what the formula means? Is "C" an arbitrary value or is it's value more definitive? What is "k" and what is it's relationship to "C". Right now I'm just guessing.. Thanks, Robert It means that for $x$ large enough $f(x)$ is less than (or equal) some multiple of $g(x).$ RonL 3. Hello, To be precise, you should mention where "x" moves. "f(x) is O(g(x)) as x goes to infinity" if there are constants C and k such that |f(x)|<C|g(x)| whenever x>k. "f(x) is O(g(x)) as x goes to minus infinity" if there are constants C and k such that |f(x)|<C|g(x)| whenever x<k. "f(x) is O(g(x)) as x goes to zero" if there are constants C and k such that |f(x)|<C|g(x)| whenever |x|<k. These are used to express the asymptotic behavior of f, and it is common to omit the part "as x goes to ..." if it is clear from the context. You usually know that |f(x)| and |g(x)| both go to infinity, but you compare their speed, how fast they grow. For example, 2x^2+4x+5=O(x^2) (as x goes to infinity). PROOF: Let C=3, k=10. Then, |2x^2+4x+5|<C|x^2| whenever x>k. QED Of course, you can choose C=4, k=100 etc., they make no difference, it just says that the terms 4x and 5 are irrelevant compared to the x^2 term. Other examples: x^n=O(x^m) if n<m. (any polynomial of x)=O(e^x). Bye. 4. Originally Posted by wisterville Hello, To be precise, you should mention where "x" moves.. No, only if you are interested in something other than x large, as the usual definition is for n large. 
RonL 5. Hello my friend, The big-O method is for approximating complexity for large x; you can also calculate it using limits. It is a way to measure the progress of a function as it grows, that is, to see how far and how fast the function grows. "C" is a constant and is not considered to affect the growth, so you can simply ignore it. Just take the largest power of the function, since a larger power makes a function grow faster: if f(x) = x^5 + 5x, you have to pick x^5 because it makes the function grow faster than any other term in that expression. "k" is where the two functions C·g(x) and f(x) meet each other; from then on C·g(x) stays at least as large as f(x), so that |f(x)| <= C|g(x)| for any x which is greater than k. I had the same problem understanding this issue; have a look at my post: http://www.mathhelpforum.com/math-he...-problems.html PEACE and good luck
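(A small numerical aside, not posted in the thread.) Wisterville's witnesses $C=3$, $k=10$ for $2x^2+4x+5=O(x^2)$ can be spot-checked directly:

```python
def f(x):
    return 2 * x**2 + 4 * x + 5

def g(x):
    return x**2

C, k = 3, 10
# |f(x)| <= C*|g(x)| must hold for every x > k; spot-check a range of integer values
assert all(abs(f(x)) <= C * abs(g(x)) for x in range(k + 1, 10_000))
print("witnesses C=3, k=10 work on the range tested")
```

Any larger C or k would of course work just as well, which is the point made above: the constants only witness the bound, they are not unique.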
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 3, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9552528858184814, "perplexity_flag": "middle"}
http://nanoexplanations.wordpress.com/tag/rectangular-partitioning/
the blog of Aaron Sterling # Tag Archives: rectangular partitioning ## Polygon rectangulation wrap up Posted on January 6, 2012 Tying up loose ends from my three posts in December about rectangulation of orthogonal polygons. 1. Derrick Stolee requested in a comment a resolution of the computational complexity of the 3D version of the problem of decomposing a shape into the minimum number of rectangles.  I found a reference that proves the problem is NP-complete, by directly reducing the problem to a variant of 3SAT.  The diagrams of the gadgets used are pretty cool — the gadgets look like children’s toys used to build 3D structures.  Rectangular partition is polynomial in two dimensions but NP-complete in three, by Victor J. Dielissen and Anne Kaldewaij, Information Processing Letters, April 1991. 2. The survey Polygon Decomposition by J. Mark Keil (1996) has much more information on exact algorithms for rectangulation, triangulation, and problems I did not mention at all, like covering. 3. There is an extensive literature on approximation algorithms for finding a minimum-length rectangulation of an orthogonal polygon with holes.  (The problem is NP-complete even for the case where the polygon is a rectangle and its interior holes are points.)  I can recommend the survey Minimum Edge-Length Rectangular Partitions, by Gonzalez and Zheng (in Handbook of Approximation Algorithms and Metaheuristics, 2007). Victor J. Dielissen, & Anne Kaldewaij (1991). Rectangular partition is polynomial in two dimensions but NP-complete in three Information Processing Letters, 38 (1), 1-6 : 10.1016/0020-0190(91)90207-X ### Like this: → 1 Comment Posted in Uncategorized Tagged computational geometry, rectangular partitioning ## Polygon rectangulation, part 3: Minimum-length rectangulation Posted on December 16, 2011 In this third (and final) post on polygon rectangulation, I will consider how to find the rectangulation of minimum total length for an orthogonal polygon.  In part one of this short series, we considered rectangulations with a minimum number of rectangles; and, in part two, we considered rectangulations with a minimum number of “fat” rectangles.  I’ve saved this post for last, because this may be the most useful rectangulation application in VLSI, and this is the rectangulation problem that Ming-Yang Kao and I have applied to self-assembly (though I won’t discuss our application in this post). The minimum-length rectangulation algorithm appeared in Minimum Edge Length Partitioning of Rectilinear Polygons, by Lingas, Pinter, Rivest and Shamir (1982).  The authors proved both a positive and a negative result.  The positive result — which I will focus on today — is a $O(n^4)$ dynamic programming algorithm that finds an optimal minimum-length rectangulation for any orthogonal polygon with no interior holes.  The negative result is a proof that, if the input polygon is allowed to have holes, then the problem is NP-complete.  (I discussed the proof of this result in a previous blog post.) Continue reading → ## Polygon rectangulation, part 2: Minimum number of fat rectangles Posted on December 9, 2011 This post is the second in a series on polygon rectangulation. In my previous post, I discussed methods to decompose an orthogonal polygon into a minimum number of rectangles.  (See that post for definitions and motivation.)  In my next post, I will consider finding a rectangulation of a minimum length — a topic very important in VLSI.  
In this post, I will consider a modification to the minimum-number-of-rectangles problem; the modification was motivated by concerns in VLSI, but, as yet, only a theoretical algorithm exists, with a running time of $O(n^{42})$.  (That is not a typo, and it is obtained through a “natural” dynamic programming solution to the problem I am about to state.) Printed circuits are created through a process called photolithography, in which electron beams etch a design onto a substrate.  While these electron beams are, in one sense, extremely narrow, as the Wikipedia article on VLSI states, current photolithography techniques “tend closer to the fundamental laws of optics.”  Among other things, this means that the fixed minimum width of an electron beam is suddenly important.  In principle, it implies a “fat fingers” problem.  Suppose our substrate is in the shape of orthogonal polygon $P$, and we use a rectangulation technique from the previous post to rectangulate $P$.  We may not be able to apply the rectangulation in real life, because we have no guarantee that all of the rectangles are wider than our electron beam.  Therefore, we would like to constrain the space of rectangulations we consider to ones that are feasible to etch — informally, ones that contain only “fat” rectangles.  We formalize this optimization problem as follows. Fat Rectangle Optimization Problem: Given an orthogonal polygon $P$, maximize the shortest side $\delta$ over all rectangulations of $P$.  Among the partitions with the same $\delta$, choose the partition with the fewest number of rectangles. This optimization problem has been studied by O’Rourke and co-authors in at least three papers.  In this blog post, I will focus on consideration of The Structure of Optimal Partitions of Orthogonal Polygons into Fat Rectangles, by O’Rourke and Tewari (2004). Continue reading → ## Polygon rectangulation, part 1: Minimum number of rectangles Posted on December 2, 2011 Over the next few posts, I will consider problems of polygon rectangulation: given as input $P$ an orthogonal polygon (all interior angles are 90 or 270 degrees), decompose $P$ into adjacent, nonoverlapping rectangles that fully cover $P$.  Different problems impose different conditions on what constitutes a “good” rectangulation.  Today we will discuss how to find a rectangulation with the least number of rectangles. Polygon decomposition is a method often used in computer graphics and other fields, in order to break a (perhaps very complex) shape into lots of small manageable shapes.  Polygon triangulation may be the best-studied decomposition problem.  (When triangulating, we don’t require that the input polygon be orthogonal, and our objective is to cut the polygon into triangles according to some notion of “best” decomposition.)  There is an extensive literature on polygon rectangulation as well, because of its connection to VLSI design.  Suppose, for example, that our input $P$ represents a circuit board, and we want to subdivide the board by placing as little additional “ink” on the board as possible, in order to save money on each unit.  However, because of mechanical limitations, we can only place ink horizontally or vertically — i.e., only create rectangulations of $P$.  Many questions in VLSI design are closely related to finding a rectangulation of minimum total length, which I will discuss in a future post.  
The algorithm for minimum-length rectangulation is more complicated than the one I will present today for minimum-number-of-rectangles rectangulation, so today’s post can be considered a warm-up. The attendees of the recent Midwest Theory Day know that Ming-Yang Kao and I found an application of rectangulation to DNA self-assembly.  I will blog about that in the new year.  The only other application of rectangulation to self-assembly that I know about is A Rectangular Partition Algorithm for Self-Assembly, by Li and Zhang, which appeared in a robotics conference.  (Readers interested in the latest Midwest Theory Day are invited to check out a “workshop report” I wrote on the CSTheory Community Blog.) These slides (pdf format) by Derrick Stolee contain many lovely pictures about polygon rectangulation.  I think they may be a bit hard to follow all the way through as there is no “attached narrative,” but I recommend them anyway. Continue reading → ## Minimum edge length partitioning of rectilinear polygons Posted on March 5, 2011 Figure 10 from Minimum Edge Length Partitioning of Rectilinear Polygons by Lingas et al., 1982. The title of this blog post is the same as that of a seminal paper in computational geometry and VLSI design by Lingas, Pinter, Rivest and Shamir from 1982.  The authors present an $O(n^4)$ algorithm to produce a minimum-length rectangular partitioning of a rectilinear polygon without holes.  (A rectilinear polygon, also called an orthogonal polygon, is one whose sides are all axis-parallel, i.e., all either “horizontal” or “vertical.”)  They also prove the NP-completeness of the same optimization problem in the case where the rectilinear polygon has rectilinear holes. Variations of this problem come up all the time in VLSI design.  There have been many subsequent papers on approximation algorithms, faster methods for tractable cases, and so on.  The Handbook of Discrete and Computational Geometry has a survey of known results in Chapter 26.  However, I recently needed to review the original NP-completeness proof, and apparently it exists nowhere online.  Dozens of papers (and books) cite the result, but no one seems to reprove it, and it did not appear in a journal with an online archive.  I took the liberty of scanning it, so it would be available for download somewhere (pdf file here: MinimumEdgeLengthPartitioningOfRectilinearPolygons), and with the blog post’s title identical to the article’s title, I hope that anyone searching for it in future might find this link without much effort. Continue reading → ### Like this: → 5 Comments Posted in Uncategorized Tagged computational geometry, NP-completeness, rectangular partitioning, VLSI
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 14, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8771289587020874, "perplexity_flag": "middle"}
http://en.wikisource.org/wiki/Page:A_Treatise_on_Electricity_and_Magnetism_-_Volume_1.djvu/375
# Page:A Treatise on Electricity and Magnetism - Volume 1.djvu/375 From Wikisource 280.] 333 SYSTEM OF LINEAR CONDUCTORS. radius of the sphere must diminish in order to maintain the potential constant when the charge is allowed to pass to earth through the conductor. In the electrostatic system, therefore, the conductivity of a conductor is a velocity, and of the dimensions $[LT^{-1}].$ The resistance of the conductor is therefore of the dimensions $[L^{-1}T].$ The specific resistance per unit of volume is of the dimension of $[T],$ and the specific conductivity per unit of volume is of the dimension of $[T^{-1}].$ The numerical magnitude of these coefficients depends only on the unit of time, which is the same in different countries. The specific resistance per unit of weight is of the dimensions $[L^{-3}MT].$ 279.] We shall afterwards find that in the electromagnetic system of measurement the resistance of a conductor is expressed by a velocity, so that in this system the dimensions of the resistance of a conductor are $[LT^{-1}].$ The conductivity of the conductor is of course the reciprocal of this. The specific resistance per unit of volume in this system is of the dimensions $[L^2T^{-1}],$ and the specific resistance per unit of weight is of the dimensions $[L^{-1}T^{-1}M].$ ### On Linear Systems of Conductors in general. 280.] The most general case of a linear system is that of $n$ points, $A_1, A_2, \ldots A_n,$ connected together in pairs by $\frac{{1}}{{2}}n(n-1)$ linear conductors. Let the conductivity (or reciprocal of the resistance) of that conductor which connects any pair of points, say $A_p$ and $A_q,$, be called $K_{pq},$ and let the current from $A_p$ to $A_q$ be $C_{pq}$. Let $P_p$ and $P_q$ be the electric potentials at the points $A_p$ and $A_q$ respectively, and let the internal electromotive force, if there be any, along the conductor from $A_p$ to $A_q$ be $E_{pq}.$ The current from $A_p$ to $A_q$ is, by Ohm's Law, $C_{pq} = K_{pq} (P_p - P_q + E_{pq}).$ (1) Among these quantities we have the following sets of relations: The conductivity of a conductor is the same in either direction, or $K_{pq} = K_{qp}.$ (2) The electromotive force and the current are directed quantities, so that $E_{pq} = -E_{qp},\quad \quad$and $\quad \quad C_{pq} = -C_{qp}.$ (3)
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 30, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9241228103637695, "perplexity_flag": "head"}
http://mathoverflow.net/questions/48963/asymptotic-expansion-of-an-integral
## Asymptotic expansion of an integral ### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points) I am looking for an asymptotic expansion of J(n) $J(n)=\frac {2} {\pi} \int_{0}^{\pi/n} \prod_{k=1}^n \frac {\sin kx} {\sin x} dx$, $n=2,3,4,\dots$ The first approximation I managed to get is $F_1(n)=\frac {n!} {\sqrt{\pi A}}$, $A=n(n-1)(2n+5)/36$. Is a general expansion known for this? - ## 1 Answer This is probably an overkill way to a solution, but applying the Euler reflection formula $$\sin(z)=\frac{\pi}{\Gamma(1-\frac{z}{\pi})\Gamma(\frac{z}{\pi})},$$ one gets products of gamma functions both in the numerator and denominator. From this point on, I think, your integral is computable/representable via a generalized Hypergeometric Function of Fox type: http://en.wikipedia.org/wiki/Fox_H-function (I am mentioning the wiki link mainly because of the references there), i.e. a Barnes integral whose kernel is a ratio of products of Gamma functions. If my memory serves me right, this type of integral has been treated also in the book of Bleistein and Handelsman. If that book does not treat it, then the relevant paper in this case is "Asymptotic expansions and analytic continuations for a class of Barnes-integrals" by Braaksma, Compositio Math. 15, 1964, p. 239–341. -
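(Added here, not part of the question or answer.) For anyone who wants to experiment, the integral is easy to evaluate numerically and compare against $F_1(n)$; the sketch below (Python 3.8+ with SciPy, my own addition, making no claim about the expansion itself) just performs that comparison.

```python
import math
from scipy.integrate import quad

def integrand(x, n):
    # product of sin(kx)/sin(x) for k = 1..n
    return math.prod(math.sin(k * x) / math.sin(x) for k in range(1, n + 1))

def J(n):
    val, _ = quad(integrand, 0.0, math.pi / n, args=(n,))
    return 2.0 / math.pi * val

def F1(n):
    A = n * (n - 1) * (2 * n + 5) / 36.0
    return math.factorial(n) / math.sqrt(math.pi * A)

for n in (2, 5, 10, 20):
    print(n, J(n), F1(n), J(n) / F1(n))
```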
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 4, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8665377497673035, "perplexity_flag": "middle"}
http://rip94550.wordpress.com/2008/08/04/pca-fa-malinowski-summary/
# Rip’s Applied Mathematics Blog ## PCA / FA Malinowski Summary August 4, 2008 — rip Malinowski’s work is considerably different from everything else we’ve seen before. First of all, he expects that in most cases one will neither standardize nor even center the data X. We can do his computations as an SVD of X, or an eigendecomposition of $X^{T}X$ or of $XX^T$ – but because the data isn’t even centered, $X^{T}X$ and $XX^T$ are not remotely covariance matrices. For this reason, I assert that preprocessing is a separate issue. Nevertheless, the underlying mathematics is the same: get either an SVD of X or an eigendecomposition of $X^{T}X$ and/or of $XX^T$. But what do we do to X first? That’s a separate question. Second, he is not primarily interested in eliminating “small” eigenvalues or singular values: he is interested in eliminating “experimental error”, i.e. “noise”. Although I worked thru his chapter 4 on estimating noise, I have not discussed it: his main interest is in deciding when x and $\hat{x}$ are “close enough”, and without a real-world application, and without a more rigorous treatment, I’d rather pass for now. I’ll come back to his error stuff, however, if I ever find myself anywhere else looking at error estimation in PCA / FA. In addition to omitting chapter 4, I stopped after chapter 5. What he has in common with Harman and Jolliffe is a lot of references to the literature. I didn’t see anything after chapter 5 that I could confirm the computations of. Third, he doesn’t much use the usual vocabulary of scores and loadings, although he does use the subscript “load” for his $\hat{X}$ in contrast to the subscript “basic” for his matrix X of successful test vectors x. (I sometimes think that he uses subscripts for emphasis rather than to distinguish entities, but I’m probably exaggerating.) In any case, I decided that the customary vocabulary was secondary to the mathematics: find the eigenvector matrix or matrices. Fourth, he has no graphical techniques; he provides none of the graphs we came to expect in Harman, Jolliffe, and Davis. Such graphs do, in fact, have a place in chemistry; the Brereton “Chemometrics” – which I have not yet discussed – has a few. We will not do much with it: their internet data is available only to owners of the book, so I can compute to my heart’s content, but I can’t very well publish the data and you can’t very well follow along without it. But I will see if there’s anything I need to say about it. Fifth, his notion of target testing (including using it to fill in missing values) is a whole new world, a brave new world, and I like it. I think I did it more simply than he did, but I was just cleaning up the math. I do wonder. Can target testing be used in PCA “rotations”, i.e. change of basis? For handling multicollinearity in OLS? To tell us we should have centered the data? To tell us to subtract a constant? I don’t know yet. I’ll keep my eyes open. I learned a lot from Malinowski about using the available tools. Davis taught me to use the SVD, Malinowski got me comfortable with using both the full and the cut-down SVDs. Not to mention using the u and v bases, and constructing the hat matrix. OTOH, there is at least one thing we did along with Malinowski that the other authors did not do. They might have, but they did not: reconstitute the data matrix using the reduced set of eigenvalues or singular values. 
The classical techniques generally just list the eigenvectors v1, possibly weighted by the w0 (equivalently, by the square roots of the eigenvalues). But in principle, they could have computed D1. In practice, they usually got no closer than describing the new correlation matrix or variances. If we compute the SVD of the data, $D = u\ w\ v^T$, and replace the smallest singular value in w by 0 (and call the new matrix w0), we can reconstitute the data D as $D1 = u\ w_0\ v^T$ (“Reconstitute” is intended to convey the impression that D1 is not the real data, not fresh orange juice.) I want to show you more detail of the difference between D and D1. I’ve been a little vague about it. Recall example 5 with noise. Here’s the data matrix: $D = \left(\begin{array}{lll} 1.9 & 3.2 & 3.9 \\ 1 & -0.2 & -1 \\ 4 & 4.9 & 6.1 \\ 3 & 1.9 & 1.1 \\ 6 & 6.9 & 7.9\end{array}\right)$ The singular values were $\{16.194,\ 2.41991,\ 0.238533\}$ We interpret 3 nonzero singular values to mean that the matrix D is technically of rank 3; we know we can go further, however, and say that D differs from a matrix of rank 2 by its smallest singular value, namely .238533. (I could go so far as to say the matrix D is of rank 2.238533, but perhaps it’s better to leave it at “differs from a matrix of rank 2 by .238533.” After all, if the smallest singular value were 10, the matrix would differ from a matrix of rank 2 by 10, and I don’t want to say that it’s of rank 12 = 2+10.) We replaced the smallest singular value (0.238533) by 0, and reconstituted the data, calling it D1. $D1 = \left(\begin{array}{lll} 1.95859 & 3.05258 & 3.98415 \\ 0.992857 & -0.182028 & -1.01026 \\ 3.95401 & 5.0157 & 6.03396 \\ 3.02136 & 1.84625 & 1.13068 \\ 6.0016 & 6.89597 & 7.9023\end{array}\right)$ What is the difference between D and D1? D1 is of rank 2, and the difference is supposed to be 0.238533. Just how do we compute that difference? That’s the question. The appropriate “norm” – appropriate because it gives this answer – is called the Frobenius norm, and it’s pretty simple: pretend the matrix is a vector, and compute its Euclidean norm (2-norm), namely the square root of the sum of squares. Here we are. The element-by-element differences between D and D1 are: $e1 = \left(\begin{array}{lll} -0.0585929 & 0.147418 & -0.0841454 \\ 0.00714334 & -0.0179725 & 0.0102586 \\ 0.0459858 & -0.115699 & 0.0660403 \\ -0.0213625 & 0.0537476 & -0.0306787 \\ -0.00160247 & 0.00403179 & -0.00230131\end{array}\right)$ Now take the square root of the sum of the squares of the “components”. In case you’re working thru this with me, the squares of those numbers are $\left(\begin{array}{lll} 0.00343313 & 0.0217322 & 0.00708044 \\ 0.0000510273 & 0.000323011 & 0.000105238 \\ 0.0021147 & 0.0133863 & 0.00436132 \\ 0.000456356 & 0.00288881 & 0.000941184 \\ 2.56792E^{-6} & 0.0000162553 & 5.29604E^{-6} \end{array}\right)$ and the sum of them is 0.0568979, and the square root of that is 0.238533. That number should seem familiar: it’s exactly the singular value that we set to zero; it’s exactly what it ought to be. By computing the difference e1, and then the square root of the sum of squares, I explicitly computed the difference D – D1 and confirmed that it was equal to the smallest singular value of D. In practice, of course, there is no need to compute the sum-of-squares, because we already know it. I’ll close by saying that if we set two singular values to zero, the difference between the original (D) and the reconstituted (D1) is the square root of the sum of squares of the two singular values.
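(An addition of mine, not the blog author's.) The computation of D1 from D and the Frobenius-norm check above take only a few lines of NumPy:

```python
import numpy as np

D = np.array([[1.9,  3.2,  3.9],
              [1.0, -0.2, -1.0],
              [4.0,  4.9,  6.1],
              [3.0,  1.9,  1.1],
              [6.0,  6.9,  7.9]])

u, w, vt = np.linalg.svd(D, full_matrices=False)  # w should be roughly [16.194, 2.41991, 0.238533]
w0 = w.copy()
w0[-1] = 0.0                                      # zero out the smallest singular value
D1 = u @ np.diag(w0) @ vt                         # reconstituted, rank-2 "data"

# the Frobenius norm of D - D1 equals the singular value we dropped
print(np.linalg.norm(D - D1), w[-1])
```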
What’s going on is that the norm of D is the same as the norm of w, and the norm of D1 is the same as the norm of w0; so the difference between D and D1 is the difference between w and w0. I think we’re done here. Oh, let me be clear. I’m glad I own Malinowski (ah, the book). If you’re going to be doing PCA / FA in the physical sciences, you probably want it, too. If you’re going to be doing PCA / FA in chemistry, don’t even think about not buying it. Just my opinion. ### 6 Responses to “PCA / FA Malinowski Summary” 1. Dana H. Says: August 8, 2008 at 7:06 am I’m a chemist who does statistical data modeling, and I must say that (from your description), I find Malinowski’s approach just plain weird. If I’m doing PCA to try to understand where the variance in a set of molecular descriptors is coming from, I’d be crazy not to mean-center-and-scale the data first. Otherwise, a property such as molecular weight (with typical values in the 100s) will completely dominate a property such as AlogP (with typical values in the 1s). Perhaps in spectroscopic applications, using raw data in PCAs makes sense, but not in QSAR/QSPR-type applications. And, yes, it’s true that “component” has a special meaning in chemistry, but that’s no reason for Malinowski to use “factor analysis” in preference to “principal component analysis”. Many words have more than one meaning; the context should make the meaning clear in a given case. To my knowledge, *no one* in the QSAR/molecular modeling world uses the term “FA” in preference to “PCA”. OK, having gotten that out of my system, I have a question for you. I have a piece of code that does PCA, and for some reason it incorporates a constant term in the analysis — i.e., the first column of the X matrix is all 1′s (with all other columns centered and scaled). This seems pointless to me, as you obviously get no additional variance from the column. However, when you uncenter and unscale to express the PCs in terms of the original variables, you do get a constant term left over. Could this be the reason for including the extra column — essentially as a placeholder? Just wondering if you ever ran across this. 2. Dana H. Says: August 8, 2008 at 11:45 am Correction: In the PCA code I have, the constant term only gets added at the end, when the loadings get expressed in terms of the uncentered, unscaled variables. So there’s no wasted column being put into the X matrix, and the whole thing makes a lot more sense. 3. rip Says: August 8, 2008 at 2:05 pm Hi Dana, Thank you very much for taking the time to talk about your experience with PCA. Maybe you’ll find things to say about my earlier posts on PCA / FA, too. They may seem terribly elementary to you. I have been trying to emphasize that the preprocessing of data – just whether to use raw or centered or standardized data – is a very important issue, and people do it differently, with good reason, I hope. Your comment reinforces my assessment: you do what Harman and Jolliffe recommend; but not what my geology text (Davis) recommends, at least for some data; and not what Malinowski recommends, at least for some data. The only example I have taken from Malinowski is my “example 6″. The data was hypothetical, he said, “involving the ultraviolet absorbances of five different mixtures of the same absorbing components measured at six wavelengths.” The subsequent SVD was computed for that clearly uncentered data. 
You did say, “Perhaps in spectroscopic applications, using raw data in PCAs makes sense, but not in QSAR/QSPR-type applications.” That sounds like Malinowski’s application to me, but I’m no chemist. Finally, you did make me check if I exaggerated Malinowski’s explanation for calling it FA instead of PCA. You be the judge; he said on p. 17, “_Principal component analysis_ (PCA) is another popular name for _eigenanalysis_. To chemists the word ‘component’ conjures up a different meaning and therefore the terminology _principal factor analysis_ (PFA) is offered as an alternative.” Thanks for letting me know that he’s apparently not mainstream on this. Again, thanks for throwing in your two cents. Please feel free to throw more coins or even rocks – small ones, I hope – for it’s nice to have an actual practitioner weighing in. (Separate comment about the column of 1s.) 4. rip Says: August 8, 2008 at 2:17 pm Hello again Dana, A column of 1s in the “X” matrix for regression is how we incorporate a constant term. As you say, it makes far more sense to put it in at the end, after the PCA is done, and when we move on to other parts of the analysis. (Is your code performing “principal component regression?) I have a strong suspicion, but only a suspicion, that “target testing” can say to us something like: “subtract a constant term from your data”. Ah, I did close my post with the bold, “If you’re going to be doing PCA / FA in chemistry, don’t even think about not buying it [Malinowski].” From what you say, Malinowski’s focus is not so wide as all of chemistry. (OTOH, I think “target testing” looks promising.) Is there a book (or books or even just chapters) on PCA / FA that you would recommend for chemists, or for scientists in general? 5. Dana H. Says: August 9, 2008 at 2:55 pm Yes, I know that the column of 1′s is used in regression calculations for the constant term. I just couldn’t understand why it was there for the PCA. It turns out that I had just misread the code. I can’t say that I have a good book on PCA to recommend to chemists specifically. The discussion in “Modern Applied Statistics with S” by Venables and Ripley is pretty good, but very brief. Hastie et al. in “The Elements of Statistical Learning” have a few interesting things to say about PCA and its applications, though none of their examples are chemistry-focused. 6. rip Says: August 10, 2008 at 7:17 am OK, thanks. I’ll keep them in mind. %d bloggers like this:
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 15, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9426864981651306, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/tagged/conservation-laws+energy
# Tagged Questions 2answers 142 views ### Perpendicular Elastic Collision (different masses, different velocities) I'm stuck on a mechanics problem and I can't make any headway past momentum and kinetic energy being conserved. Here is the problem: Two hover cars are approaching an intersection from ... 5answers 172 views ### Why is momentum conserved (or rather what makes an object carry on moving infinitely)? I know this is an incredibly simple question, but I am trying to find a very simple explanation to this other than the simple logic that energy is conserved when two items impact and bounce off each ... 2answers 78 views ### Ice skater increase of energy This may be a very basic question but I am not seeing how it works. Consider the standard example of an ice skate rotating about his/her center of mass and pulling in his/her arms. The torque is zero ... 2answers 199 views ### Can a neutron be created from pure energy Is it possible to create a neutron out of pure energy, i.e. not by bringing a bunch of already-existing quarks together? (A quick calculation using E = mc2 shows the energy required would be about 1.5 ... 1answer 134 views ### Mathematical question on Collisions [closed] A 2.5kg ball travelling with a speed of 7.5m/s makes an elastic collision with another ball of ... 1answer 181 views ### Tricky Conservation of Momentum problem: find the ratio of the carts by mass percentage lost [closed] A wagon is coasting at a speed $v_A$ along a straight and level road. When 42.5% of the wagon's mass is thrown off the wagon, parallel to the ground and in the forward direction, the wagon is brought ... 3answers 746 views ### On what basis do we trust Conservation of Energy? I'm happy to accept and use conservation of energy when I'm solving problems at Uni, but I'm curious about it to. For all of my adult life, and most of my childhood I've been told this law must hold ... 2answers 4k views ### Why can't energy be created or destroyed? My physics instructor told the class, when lecturing about energy, and that it can't be created or destroyed. Why is that? Is there a theory or scientific evidence that proves his statement true or ... 3answers 236 views ### Energy non-conservation for time-dependent potentials Written in a book I read that the "total energy is not preserved when the potential depends explicitly on time", i.e. $U=U(x,t)$. Is there any proof or explanation for this? 3answers 278 views ### Is the continuous transformation of energy from one form to another one free? Or it consumes some quantity of who knows what? Sorry for the unclarity. I probably created some bias in your mind having tagged my question with energy-conservation and conservation-laws. I simply consider that, relatively independently of ...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9334813952445984, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/tagged/factorial+approximation
# Tagged Questions 0answers 37 views ### Bound on Permutations [duplicate] I am trying to prove the following inequality, $$n^{(l)} = n(n-1)\cdots(n-l+1) \geq \frac{n^l}{e}\quad\text{ for }\quad 2 \leq l \leq \sqrt{n}\;.$$ So my approach is to observe that \$n^{(l)} = ... 0answers 40 views ### Approximation of factorial - Stirling formula [duplicate] Possible Duplicate: Elementary central binomial coefficient estimates How can I prove that $$\binom{n}{n/2} = \Theta\left(\frac{2^n}{\sqrt n}\right)$$ I tried with Stirlings ... 2answers 117 views ### How many bits are in factorial? I am interested in good integer approximation from below and from above for binary Log(N!). The question and the question provides only a general idea but not exact values. In other words I need ... 0answers 123 views ### Generating all positive integers from three operations This question arose on sci.math, but since almost all the competent mathematicians there have migrated here, I thought I'd give the question a wider audience. Starting with an integer $t>2$, ... 2answers 185 views ### How to approximate $\sum_{k=1}^n k!$ using Stirling's formula? How to find summation of the first $n$ factorials, $$1! + 2! + \cdots + n!$$ I know there's no direct formula, but how can it be estimated using Stirling's formula? Another question : Why can't ... 3answers 651 views ### Approximating log of factorial I'm wondering if people had a recommendation for approximating $\log(n!)$. I've been using Stirlings formula, $(n + \frac{1}{2})\log(n) - n + \frac{1}{2}\log(2\pi)$ but it is not so great for ... 9answers 2k views ### What is the purpose of Stirling's approximation to a factorial? Stirling approximation to a factorial is $$n! \sim \sqrt{2 \pi n} \left(\frac{n}{e}\right)^n.$$ I wonder what benefit can be got from it? From computational perspective (I admit I don't ... 2answers 2k views ### Approximating the logarithm of the binomial coefficient We know that by using Stirling approximation: $\log n! \approx n \log n$ So how to approximate $\log {m \choose n}$? 1answer 356 views ### A series problem by Knuth I came across the following problem, known as Knuth's Series which originally was an American Mathematical Monthly problem. Prove that \sum_{n=1}^\infty ... 5answers 688 views ### How best to explain the $\sqrt{2\pi n}$ term in Stirling's? I recently showed my Algorithms class how to bound $\ln n! = \sum \ln n$ by integrals, thereby obtaining the simple factorial approximation e \left(\frac{n}{e}\right)^{n} \leq n! \leq ...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 11, "mathjax_display_tex": 4, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.924665629863739, "perplexity_flag": "middle"}
http://quant.stackexchange.com/questions/1605/what-functional-form-describes-the-implied-volatility-curve/1612
# What functional form describes the implied volatility curve? It is often convenient to parametrize the implied volatility curve to allow easy interpolation of volatility for any strike or maturity. What functional form describes the implied volatility curve for options at varying strikes and fixed maturity? - 1 I've seen polynomial forms used. Jim Gatheral has some interesting research on parametrizing the volatility surface – Quant Guy Oct 5 '11 at 17:39 For fixed time and near the current price, the implied volatility as a function of price is "bilinear"-- a negative slope line that bottoms out at the current price, and then a positive slope line. However, this yields contradictions if extended too far from the current price AND doesn't help at all w/ volatility over time. Have you tried curve-fitting existing data? – barrycarter Oct 10 '11 at 2:32 ## 5 Answers OptionMetrics uses a kernel smoothing algorithm to interpolate the volatility surface. Their assumptions tend to be based on the academic consensus and have become somewhat industry standard, so the real answer to your question may be that there really is no good functional form. - First, note that there are actually quite a few implied volatility curves...I am afraid there is no "the" volatility curve. Right off the bat I can think of • The put and call bid and offer curves • The put and call midmarket price curves • The put and call midmarket vol curves • The out-of-the-money bid, offer, midmarket price and midmarket vol curves so that is 12 different curves right there. You can probably already tell that getting a single functional form to fit them all is not going to be easy. The most common function used is a parabola, though almost always on $\log(K)$ rather than on strike $K$. The second most common choice is cubic splines, either with nodes at every strike or smoothing. It is customary in these cases to specify "cutoffs", which are limiting high and low strikes beyond which volatility is assumed to be constant. That keeps the curve from going negative, or "too" positive. You will occasionally see implementations based on modifications of the terminal probability distribution, such as Edgeworth expansions. - - 3 Hi AMC, welcome to quant.SE and thanks for posting your answer. Your answer would be more helpful if you could synopsize Gatheral's recommendation for modeling the volatility surface, or if you could point out why you think Gatheral's book is relevant to this question. Otherwise, your answer does nothing but repeat @QuantGuy's comment. – Tal Fishman Oct 6 '11 at 16:18 A Polynomial of degree 2 or 3 ? But Linear interpolation on a datapoints vector works fine in my experience, let's say you have an index whose options strike : 80/82/84/86/88/90 You usually don't need to calculate vol @ 83. The only case is if you have a different volatility smile (estimated vol. for example) whose data points are 80/85/90 then you can just do linear interp to find your estimated vol @ 82/84. - 1 Historically, we have used a degree 2 polynomial (parabola) but currently it does not produce a good, arbitrage free, fit hence my question. – John Channing Aug 8 '11 at 7:43 Hi lliane, thanks for your answer and welcome to quant.SE. I have had the same question as @John Channing. I think the main problem is actually not one of interpolation but rather of smoothing noisy prices for the available strikes. Accurate "interpolation" may still be important when e.g. comparing IV in the cross section. 
– Tal Fishman Aug 8 '11 at 11:06 In the rates world (ie swaptions, caps and floors) I believe most banks are using some form of the SABR model (Stochastic Alpha Beta Rho) for building the volatility smile. When we say 'use the SABR model' what we really mean is that the smile shape function is derived from the shape of the smile in a theoretical model of form: $$\text{d} F_t = \sigma_t F_t^\beta \, \text{d} W^1_t$$ $$\text{d} \sigma_t = \alpha \sigma_t \, \text{d} W^2_t$$ $$\text{d}W^1_t \text{d}W^2_t = \rho \, \text{d}t$$ Some clever people found a way to get a good-quality closed-form approximation for the smile function, so effectively you can just plug in the parameters and get your volatity at a given strike for a given value of the forward. That said, the formula is known to break down at low strikes -- producing negative values for the implied probability distribution. Therefore most houses have put resources into fixing this in one way or another. -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9225186109542847, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/286911/condition-so-that-ypxyqxy-0-can-be-converted-in-a-ode-with-constant-c
# Condition so that $y''+p(x)y'+q(x)y=0$ can be converted in a ODE with constant coefficients I have to find a necessary and a sufficient condition for the functions $p$ and $q$ so that the linear differential equation : $y''+p(x)y'+q(x)y=0$ can be converted in a linear differential equation with constant coefficients by changing the independent variable of the equation. Can someone help with this or point me in the right direction? Thanks in advance! [EDIT:] First of all, I'm not sure if I should post this as an answer to my own question or as an edit. My apologies. With the tip of Antonio I've did the following, but I'm still not sure if it's completely correct. $\Phi(t)=y(x(t))$ $\Phi^{'}(t)=y^{'}(x(t)).x^{'}(t)$ $\Phi^{''}(t)=y^{''}(x(t)).(x^{'}(t))^{2}+y^{'}(x(t)).x^{''}(t)$ $y^{''}(t)+p(x)y^{'}(t)+q(x)y(t)=0$ $\frac{1}{q(x)}y^{''}(t)+\frac{p(x)}{q(x)}y^{'}(t)+y(t)=0$ $\frac{1}{q(x)}=(x^{'}(t))^{2}\Longrightarrow x^{'}(t)=\frac{1}{\sqrt{q(x)}}$ Condition 1: $\forall x:q(x)>0$ $x^{''}(t)=\frac{x^{'}(t).q^{'}(x)}{2\sqrt{q(x)}}$ $x^{''}(t)=\frac{q^{'}(x)}{2.q(x)}$ Condition 2: $p(x)=\frac{1}{2}q^{'}(x)$ - What have you tried? I would suggest letting $x=f(z)$ then writing all of the derivatives with respect to $x$ in terms of derivatives with respect to $z$. – Antonio Vargas Jan 25 at 21:09 @AntonioVargas I've tried something (see edit), but I'm still not sure if I'm following the correct path. – tim_a Jan 26 at 9:08
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 17, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9528762102127075, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/tagged/semiclassical+quantum-gravity
# Tagged Questions 0answers 87 views ### Hawking radiation for closely orbiting black holes Suppose we have two black holes of radius $R_b$ orbiting at a distance $R_r$. I believe semi-classical approximations describe correctly the case where $R_r$ is much larger than the average black body ...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 3, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.888599157333374, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/48044/matrix-operation-in-dirac-matrices/48184
Matrix operation in dirac matrices If we define $\alpha_i$ and $\beta$ as Dirac matrices which satisfy all of the conditions of spin 1/2 particles , p defines the momentum of the particle, then how can we get the matrix form ? \begin{equation} \alpha_i p_i= \begin{pmatrix} p_z & p_x-ip_y \\ p_x+ip_y & -p_z \end{pmatrix} . \end{equation} - Dirac matrices in 4 dimensions are 4x4. You've written a 2x2 matrix. Where did you find this equation? – Michael Brown Jan 1 at 4:05 – Unlimited Dreamer Jan 1 at 6:40 1 I see... you're breaking the Dirac equation down to 2x2 blocks. This is the standard way of solving it. Where in the argument are you having trouble? Unfortunately the equations in your source aren't numbered, but I can see you mean $\vec{\sigma}\cdot\vec{p}$ rather than $\alpha_i p_i$. – Michael Brown Jan 1 at 6:47 2 Answers The equation you wrote only makes one choice that should answer all questions about this context: it chooses a representation of the $\alpha_i$ matrices with $$\alpha_i = \sigma_i$$ where $\sigma_i$ are the three Pauli matrices. You may check that if you substitute the Pauli matrices (particular $2\times 2$ matrices listed in the Wikipedia article linked in the previous sentence) for $\alpha_i$ on the left hand side of your equation, you obtain the right hand side. If your formula had the Greek letter $\sigma$ instead of $\alpha$ on the left hand side, it would be uncontroversial. However, with $\alpha$, it is problematic. The $\alpha_i$ matrices are really $4\times 4$, not $2\times 2$, so all the equations above must be interpreted so that each matrix entry of the Pauli matrices is actually a block $$z \to \pmatrix {z&0 \\ 0&-z }.$$ We say that the Pauli matrices were tensor-multiplied by a $2\times 2$ unit matrix (in certain order). This extra tensor factor actually can't be the unit matrix because one couldn't find any matrix $\beta$ that anticommutes with all the $\alpha_i$ matrices. But it may be another $\sigma_z$, for example, in which case $\beta$ may be chosen to be ${\rm diag}(\sigma_x,\sigma_x)$, for example. Alternatively, you should ignore the source and learn some/all of the standard representations of the Dirac matrices. At any rate, something is sloppy about the notation in which $\alpha_i$ were written as $2\times 2$ matrices and the simplest recipe to get $4\times 4$ matrices (tensor product with the $2\times 2$ unit matrix) doesn't work. So one should first see what $4\times 4$ matrices your source (if it is correct at all) actually means. - It's just a matrix manipulation. Let $\sigma_i$ pauli matrices. \begin{equation} \alpha_i p_i= \begin{pmatrix} 0& \sigma_i \\ \sigma_i & 0 \end{pmatrix} p_i . \end{equation} $\alpha_i p_i= \begin{pmatrix} 0& p_1 \sigma_1 \\ p_1\sigma_1 & 0 \end{pmatrix} + \begin{pmatrix} 0& p_2 \sigma_2 \\ p_i\sigma_2 & 0 \end{pmatrix} + \begin{pmatrix} 0& p_3 \sigma_3 \\ p_3\sigma_3 & 0 \end{pmatrix}$ But $\sigma_1 p_1 = \begin{pmatrix} 0& 1 \\\ 1 & 0 \end{pmatrix}p_1=\begin{pmatrix} 0& p_1 \\\ p_1 & 0 \end{pmatrix}$ , $\sigma_2 p_2= \begin{pmatrix} 0& -i \\\ -i & 0 \end{pmatrix}p_2=\begin{pmatrix} 0& -ip_2 \\\ ip_2 & 0 \end{pmatrix}$ $\sigma_3 p_3= \begin{pmatrix} 1& 0 \\\ 0 & -1 \end{pmatrix}p_3= \begin{pmatrix} p_3& 0 \\\ 0 & -p_3 \end{pmatrix}$ Now adding these we get ($1\rightarrow x$, $2\rightarrow y$ ,$3\rightarrow z$) , \begin{equation} \alpha_i p_i= \begin{pmatrix} p_z & p_x-ip_y \\ p_x+ip_y & -p_z \end{pmatrix} . \end{equation} - Your final formula is 4x4 on the left and 2x2 on the right. 
You should have a 4x4 matrix with what you've written as the upper right and lower left blocks, with zeros everywhere else. – Michael Brown Jan 2 at 23:42 In the right $2\times 2$ makes the matrix $4 \times 4$ – Unlimited Dreamer Jan 3 at 5:19 No, the answer you want is $\alpha_i p_i = \left(\begin{matrix} 0 & p_i \sigma_i \\ p_i \sigma_i & 0 \end{matrix}\right)$ which has four 2x2 blocks. Your right hand side is the 2x2 matrix $p_i \sigma_i$, not $\alpha_i p_i$. You had it right until the very last line! – Michael Brown Jan 3 at 5:27
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 38, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.887740969657898, "perplexity_flag": "head"}
http://psychology.wikia.com/wiki/Principal_component_analysis
# Principal components analysis Talk0 31,735pages on this wiki ## Redirected from Principal component analysis Assessment | Biopsychology | Comparative | Cognitive | Developmental | Language | Individual differences | Personality | Philosophy | Social | Methods | Statistics | Clinical | Educational | Industrial | Professional items | World psychology | Statistics: Scientific method · Research methods · Experimental design · Undergraduate statistics courses · Statistical tests · Game theory · Decision theory Principal component analysis (PCA) is a mathematical procedure that uses an orthogonal transformation to convert a set of observations of possibly correlated variables into a set of values of uncorrelated variables called principal components. The number of principal components is less than or equal to the number of original variables. This transformation is defined in such a way that the first principal component has as high a variance as possible (that is, accounts for as much of the variability in the data as possible), and each succeeding component in turn has the highest variance possible under the constraint that it be orthogonal to (uncorrelated with) the preceding components. Principal components are guaranteed to be independent only if the data set is jointly normally distributed. PCA is sensitive to the relative scaling of the original variables. Depending on the field of application, it is also named the discrete Karhunen–Loève transform (KLT), the Hotelling transform or proper orthogonal decomposition (POD). PCA was invented in 1901 by Karl Pearson.[1] Now it is mostly used as a tool in exploratory data analysis and for making predictive models. PCA can be done by eigenvalue decomposition of a data covariance matrix or singular value decomposition of a data matrix, usually after mean centering the data for each attribute. The results of a PCA are usually discussed in terms of component scores (the transformed variable values corresponding to a particular case in the data) and loadings (the weight by which each standarized original variable should be multiplied to get the component score) (Shaw, 2003). PCA is the simplest of the true eigenvector-based multivariate analyses. Often, its operation can be thought of as revealing the internal structure of the data in a way which best explains the variance in the data. If a multivariate dataset is visualised as a set of coordinates in a high-dimensional data space (1 axis per variable), PCA can supply the user with a lower-dimensional picture, a "shadow" of this object when viewed from its (in some sense) most informative viewpoint. This is done by using only the first few principal components so that the dimensionality of the transformed data is reduced. PCA is closely related to factor analysis; indeed, some statistical packages (such as Stata) deliberately conflate the two techniques. True factor analysis makes different assumptions about the underlying structure and solves eigenvectors of a slightly different matrix. ## Details PCA is mathematically defined[2] as an orthogonal linear transformation that transforms the data to a new coordinate system such that the greatest variance by any projection of the data comes to lie on the first coordinate (called the first principal component), the second greatest variance on the second coordinate, and so on. 
Define a data matrix, $\mathbf X^\top$, with zero empirical mean (the empirical (sample) mean of the distribution has been subtracted from the data set), where each of the n rows represents a different repetition of the experiment, and each of the $m$ columns gives a particular kind of datum (say, the results from a particular probe). (Note that what we are calling $\mathbf X^\top$ is often alternatively denoted as $\mathbf X$ itself.) The singular value decomposition of $\mathbf X$ is $\mathbf X = \mathbf{W\Sigma V}^\top$, where the $m\times m$ matrix $\mathbf W$ is the matrix of eigenvectors of $\mathbf{XX}^\top$, the matrix $\mathbf\Sigma$ is an $m\times n$ rectangular diagonal matrix with nonnegative real numbers on the diagonal, and the matrix $\mathbf V$ is $n\times n$. The PCA transformation that preserves dimensionality (that is, gives the same number of principal components as original variables) is then given by: $\begin{align} \mathbf{Y}^\top & = \mathbf{X}^\top\mathbf{W} \\ & = \mathbf{V}\mathbf{\Sigma}^\top \end{align}$ ($\mathbf V$ is not uniquely defined in the usual case when $m < n -1$, but $\mathbf Y$ will usually still be uniquely defined.) Since $\mathbf W$ (by definition of the SVD of a real matrix) is an orthogonal matrix, each row of $\mathbf Y^\top$ is simply a rotation of the corresponding row of $\mathbf X^\top$. The first column of $\mathbf Y^\top$ is made up of the "scores" of the cases with respect to the "principal" component, the next column has the scores with respect to the "second principal" component, and so on. If we want a reduced-dimensionality representation, we can project $\mathbf X$ down into the reduced space defined by only the first $L$ singular vectors, $\mathbf W_L$: $\mathbf{Y}=\mathbf{W_L}^\top\mathbf{X} = \mathbf{\Sigma_L}\mathbf{V_L}^\top$ The matrix W of singular vectors of X is equivalently the matrix W of eigenvectors of the matrix of observed covariances C = X XT, $\mathbf{X}\mathbf{X}^\top = \mathbf{W}\mathbf{\Sigma}\mathbf{\Sigma}^\top\mathbf{W}^\top$ Given a set of points in Euclidean space, the first principal component corresponds to a line that passes through the multidimensional mean and minimizes the sum of squares of the distances of the points from the line. The second principal component corresponds to the same concept after all correlation with the first principal component has been subtracted out from the points. The singular values (in Σ) are the square roots of the eigenvalues of the matrix XXT. Each eigenvalue is proportional to the portion of the "variance" (more correctly of the sum of the squared distances of the points from their multidimensional mean) that is correlated with each eigenvector. The sum of all the eigenvalues is equal to the sum of the squared distances of the points from their multidimensional mean. PCA essentially rotates the set of points around their mean in order to align with the principal components. This moves as much of the variance as possible (using an orthogonal transformation) into the first few dimensions. The values in the remaining dimensions, therefore, tend to be small and may be dropped with minimal loss of information. PCA is often used in this manner for dimensionality reduction. PCA has the distinction of being the optimal orthogonal transformation for keeping the subspace that has largest "variance" (as defined above). This advantage, however, comes at the price of greater computational requirements if compared, for example and when applicable, to the discrete cosine transform. 
Nonlinear dimensionality reduction techniques tend to be more computationally demanding than PCA. PCA is sensitive to the scaling of the variables. If we have just two variables and they have the same sample variance and are positively correlated, then the PCA will entail a rotation by 45° and the "loadings" for the two variables with respect to the principal component will be equal. But if we multiply all values of the first variable by 100, then the principal component will be almost the same as that variable, with a small contribution from the other variable, whereas the second component will be almost aligned with the second original variable. This means that whenever the different variables have different units (like temperature and mass), PCA is a somewhat arbitrary method of analysis. (Different results would be obtained if one used Fahrenheit rather than Celsius for example.) Note that Pearson's original paper was entitled "On Lines and Planes of Closest Fit to Systems of Points in Space" – "in space" implies physical Euclidean space where such concerns do not arise. One way of making the PCA less arbitrary is to use variables scaled so as to have unit variance. ## Discussion Mean subtraction (a.k.a. "mean centering") is necessary for performing PCA to ensure that the first principal component describes the direction of maximum variance. If mean subtraction is not performed, the first principal component might instead correspond more or less to the mean of the data. A mean of zero is needed for finding a basis that minimizes the mean square error of the approximation of the data.[3] Assuming zero empirical mean (the empirical mean of the distribution has been subtracted from the data set), the principal component w1 of a data set X can be defined as: $\mathbf{w}_1 = \underset{\Vert \mathbf{w} \Vert = 1}{\operatorname{\arg\,max}}\,\operatorname{Var}\{ \mathbf{w}^\top \mathbf{X} \} = \underset{\Vert \mathbf{w} \Vert = 1}{\operatorname{\arg\,max}}\,E\left\{ \left( \mathbf{w}^\top \mathbf{X}\right)^2 \right\}$ (See arg max for the notation.) With the first k − 1 components, the kth component can be found by subtracting the first $k - 1$ principal components from X: $\mathbf{\hat{X}}_{k - 1} = \mathbf{X} - \sum_{i = 1}^{k - 1} \mathbf{w}_i \mathbf{w}_i^\top \mathbf{X}$ and by substituting this as the new data set to find a principal component in $\mathbf{w}_k = \underset{\Vert \mathbf{w} \Vert = 1}{\operatorname{arg\,max}}\,E\left\{ \left( \mathbf{w}^\top \mathbf{\hat{X}}_{k - 1} \right)^2 \right\}.$ PCA is equivalent to empirical orthogonal functions (EOF), a name which is used in meteorology. An autoencoder neural network with a linear hidden layer is similar to PCA. Upon convergence, the weight vectors of the K neurons in the hidden layer will form a basis for the space spanned by the first K principal components. Unlike PCA, this technique will not necessarily produce orthogonal vectors. PCA is a popular primary technique in pattern recognition. It is not, however, optimized for class separability.[4] An alternative is the linear discriminant analysis, which does take this into account. 
## Table of symbols and abbreviations Symbol Meaning Dimensions Indices $\mathbf{X} = \{ X[m,n] \}$ data matrix, consisting of the set of all data vectors, one vector per column $M \times N$ $m = 1 \ldots M$ $n = 1 \ldots N$ $N \,$ the number of column vectors in the data set $1 \times 1$ scalar $M \,$ the number of elements in each column vector (dimension) $1 \times 1$ scalar $L \,$ the number of dimensions in the dimensionally reduced subspace, $1 \le L \le M$ $1 \times 1$ scalar $\mathbf{u} = \{ u[m] \}$ vector of empirical means, one mean for each row m of the data matrix $M \times 1$ $m = 1 \ldots M$ $\mathbf{s} = \{ s[m] \}$ vector of empirical standard deviations, one standard deviation for each row m of the data matrix $M \times 1$ $m = 1 \ldots M$ $\mathbf{h} = \{ h[n] \}$ vector of all 1's $1 \times N$ $n = 1 \ldots N$ $\mathbf{B} = \{ B[m,n] \}$ deviations from the mean of each row m of the data matrix $M \times N$ $m = 1 \ldots M$ $n = 1 \ldots N$ $\mathbf{Z} = \{ Z[m,n] \}$ z-scores, computed using the mean and standard deviation for each row m of the data matrix $M \times N$ $m = 1 \ldots M$ $n = 1 \ldots N$ $\mathbf{C} = \{ C[p,q] \}$ covariance matrix $M \times M$ $p = 1 \ldots M$ $q = 1 \ldots M$ $\mathbf{R} = \{ R[p,q] \}$ correlation matrix $M \times M$ $p = 1 \ldots M$ $q = 1 \ldots M$ $\mathbf{V} = \{ V[p,q] \}$ matrix consisting of the set of all eigenvectors of C, one eigenvector per column $M \times M$ $p = 1 \ldots M$ $q = 1 \ldots M$ $\mathbf{D} = \{ D[p,q] \}$ diagonal matrix consisting of the set of all eigenvalues of C along its principal diagonal, and 0 for all other elements $M \times M$ $p = 1 \ldots M$ $q = 1 \ldots M$ $\mathbf{W} = \{ W[p,q] \}$ matrix of basis vectors, one vector per column, where each basis vector is one of the eigenvectors of C, and where the vectors in W are a sub-set of those in V $M \times L$ $p = 1 \ldots M$ $q = 1 \ldots L$ $\mathbf{Y} = \{ Y[m,n] \}$ matrix consisting of N column vectors, where each vector is the projection of the corresponding data vector from matrix X onto the basis vectors contained in the columns of matrix W. $L \times N$ $m = 1 \ldots L$ $n = 1 \ldots N$ ## Properties and limitations of PCA As noted above, the results of PCA depend on the scaling of the variables. The applicability of PCA is limited by certain assumptions[5] made in its derivation. ## Computing PCA using the covariance method The following is a detailed description of PCA using the covariance method (see also here). But note that it is better to use the singular value decomposition (using standard software). The goal is to transform a given data set X of dimension M to an alternative data set Y of smaller dimension L. Equivalently, we are seeking to find the matrix Y, where Y is the Karhunen–Loève transform (KLT) of matrix X: $\mathbf{Y} = \mathbb{KLT} \{ \mathbf{X} \}$ ### Organize the data set Suppose you have data comprising a set of observations of M variables, and you want to reduce the data so that each observation can be described with only L variables, L < M. Suppose further, that the data are arranged as a set of N data vectors $\mathbf{x}_1 \ldots \mathbf{x}_N$ with each $\mathbf{x}_n$ representing a single grouped observation of the M variables. • Write $\mathbf{x}_1 \ldots \mathbf{x}_N$ as column vectors, each of which has M rows. • Place the column vectors into a single matrix X of dimensions M × N. ### Calculate the empirical mean • Find the empirical mean along each dimension m = 1, ..., M. 
• Place the calculated mean values into an empirical mean vector u of dimensions M × 1. $u[m] = {1 \over N} \sum_{n=1}^N X[m,n]$ ### Calculate the deviations from the mean Mean subtraction is an integral part of the solution towards finding a principal component basis that minimizes the mean square error of approximating the data.[6] Hence we proceed by centering the data as follows: • Subtract the empirical mean vector u from each column of the data matrix X. • Store mean-subtracted data in the M × N matrix B. $\mathbf{B} = \mathbf{X} - \mathbf{u}\mathbf{h}$ where h is a 1 × N row vector of all 1s: $h[n] = 1 \, \qquad \qquad \text{for } n = 1, \ldots, N$ ### Find the covariance matrix • Find the M × M empirical covariance matrix C from the outer product of matrix B with itself: $\mathbf{C} = \mathbb{ E } \left[ \mathbf{B} \otimes \mathbf{B} \right] = \mathbb{ E } \left[ \mathbf{B} \cdot \mathbf{B}^{*} \right] = { 1 \over N } \sum_{} \mathbf{B} \cdot \mathbf{B}^{*}$ where $\mathbb{E}$ is the expected value operator, $\otimes$ is the outer product operator, and $* \$ is the conjugate transpose operator. Note that if B consists entirely of real numbers, which is the case in many applications, the "conjugate transpose" is the same as the regular transpose. • Please note that the information in this section is indeed a bit fuzzy. Outer products apply to vectors, for tensor cases we should apply tensor products, but the covariance matrix in PCA, is a sum of outer products between its sample vectors, indeed it could be represented as B.B*. See the covariance matrix sections on the discussion page for more information. ### Find the eigenvectors and eigenvalues of the covariance matrix • Compute the matrix V of eigenvectors which diagonalizes the covariance matrix C: $\mathbf{V}^{-1} \mathbf{C} \mathbf{V} = \mathbf{D}$ where D is the diagonal matrix of eigenvalues of C. This step will typically involve the use of a computer-based algorithm for computing eigenvectors and eigenvalues. These algorithms are readily available as sub-components of most matrix algebra systems, such as R (programming language), MATLAB,[7][8] Mathematica,[9] SciPy, IDL(Interactive Data Language), or GNU Octave as well as OpenCV. • Matrix D will take the form of an M × M diagonal matrix, where $D[p,q] = \lambda_m \qquad \text{for } p = q = m$ is the mth eigenvalue of the covariance matrix C, and $D[p,q] = 0 \qquad \text{for } p \ne q.$ • Matrix V, also of dimension M × M, contains M column vectors, each of length M, which represent the M eigenvectors of the covariance matrix C. • The eigenvalues and eigenvectors are ordered and paired. The mth eigenvalue corresponds to the mth eigenvector. ### Rearrange the eigenvectors and eigenvalues • Sort the columns of the eigenvector matrix V and eigenvalue matrix D in order of decreasing eigenvalue. • Make sure to maintain the correct pairings between the columns in each matrix. ### Compute the cumulative energy content for each eigenvector • The eigenvalues represent the distribution of the source data's energy This article or section may be confusing or unclear for some readers, and should be edited to rectify this. Please improve the article, or discuss the issue on the talk page. among each of the eigenvectors, where the eigenvectors form a basis for the data. 
The cumulative energy content g for the mth eigenvector is the sum of the energy content across all of the eigenvalues from 1 through m: $g[m] = \sum_{q=1}^m D[q,q] \qquad \mathrm{for} \qquad m = 1,\dots,M$[citation needed] ### Select a subset of the eigenvectors as basis vectors • Save the first L columns of V as the M × L matrix W: $W[p,q] = V[p,q] \qquad \mathrm{for} \qquad p = 1,\dots,M \qquad q = 1,\dots,L$ where $1 \leq L \leq M.$ • Use the vector g as a guide in choosing an appropriate value for L. The goal is to choose a value of L as small as possible while achieving a reasonably high value of g on a percentage basis. For example, you may want to choose L so that the cumulative energy g is above a certain threshold, like 90 percent. In this case, choose the smallest value of L such that $\frac{g[m=L]}{\sum_{q=1}^M D[q,q]} \ge 90%\,$ ### Convert the source data to z-scores • Create an M × 1 empirical standard deviation vector s from the square root of each element along the main diagonal of the covariance matrix C: $\mathbf{s} = \{ s[m] \} = \sqrt{C[p,q]} \qquad \text{for } p = q = m = 1, \ldots, M$ • Calculate the M × N z-score matrix: $\mathbf{Z} = { \mathbf{B} \over \mathbf{s} \cdot \mathbf{h} }$ (divide element-by-element) • Note: While this step is useful for various applications as it normalizes the data set with respect to its variance, it is not integral part of PCA/KLT! ### Project the z-scores of the data onto the new basis • The projected vectors are the columns of the matrix $\mathbf{Y} = \mathbf{W}^* \cdot \mathbf{Z} = \mathbb{KLT} \{ \mathbf{X} \}.$ • W* is the conjugate transpose of the eigenvector matrix. • The columns of matrix Y represent the Karhunen–Loeve transforms (KLT) of the data vectors in the columns of matrix X. ## Derivation of PCA using the covariance method Let X be a d-dimensional random vector expressed as column vector. Without loss of generality, assume X has zero mean. We want to find $(\ast)\,$ a $d \times d$ orthonormal transformation matrix P so that PX has a diagonal covariant matrix (i.e. PX is a random vector with all its distinct components pairwise uncorrelated). A quick computation assuming $P$ were unitary yields: $\begin{array}[t]{rcl} \operatorname{cov}(PX) &= &\mathbb{E}[PX~(PX)^{\dagger}]\\ &= &\mathbb{E}[PX~X^{\dagger}P^{\dagger}]\\ &= &P~\mathbb{E}[XX^{\dagger}]P^{\dagger}\\ &= &P~\operatorname{cov}(X)P^{-1}\\ \end{array}$ Hence $(\ast)\,$ holds if and only if $\operatorname{cov}(X)$ were diagonalisable by $P$. This is very constructive, as cov(X) is guaranteed to be a non-negative definite matrix and thus is guaranteed to be diagonalisable by some unitary matrix. ## Computing principal components iteratively In practical implementations especially with high dimensional data (large m), the covariance method is rarely used because it is not efficient. One way to compute the first principal component efficiently[10] is shown in the following pseudo-code, for a data matrix XT with zero mean, without ever computing its covariance matrix. Note that here a zero mean data matrix means that the columns of XT should each have zero mean. ```$\mathbf{p} =$ a random vector do c times: $\mathbf{t} = 0$ (a vector of length m) for each row $\mathbf{x} \in \mathbf{X^T}$ $\mathbf{t} = \mathbf{t} + (\mathbf{x} \cdot \mathbf{p})\mathbf{x}$ $\mathbf{p} = \frac{\mathbf{t}}{|\mathbf{t}|}$ return $\mathbf{p}$ ``` This algorithm is simply an efficient way of calculating XXTp, normalizing, and placing the result back in p. 
It avoids the nm2 operations of calculating the covariance matrix. p will typically get close to the first principal component of XT within a small number of iterations, c. (The magnitude of t will be larger after each iteration. Convergence can be detected when it increases by an amount too small for the precision of the machine.) Subsequent principal components can be computed by subtracting component p from XT (see Gram–Schmidt) and then repeating this algorithm to find the next principal component. However this simple approach is not numerically stable if more than a small number of principal components are required, because imprecisions in the calculations will additively affect the estimates of subsequent principal components. More advanced methods build on this basic idea, as with the closely related Lanczos algorithm. One way to compute the eigenvalue that corresponds with each principal component is to measure the difference in sum-squared-distance between the rows and the mean, before and after subtracting out the principal component. The eigenvalue that corresponds with the component that was removed is equal to this difference. ### The NIPALS method Main article: Non-linear iterative partial least squares For very high-dimensional datasets, such as those generated in the *omics sciences (e.g., genomics, metabolomics) it is usually only necessary to compute the first few PCs. The non-linear iterative partial least squares (NIPALS) algorithm calculates t1 and p1' from X. The outer product, t1p1' can then be subtracted from X leaving the residual matrix E1. This can be then used to calculate subsequent PCs.[11] This results in a dramatic reduction in computational time since calculation of the covariance matrix is avoided. ## Relation between PCA and K-means clustering It has been shown recently (2007) [12] [13] that the relaxed solution of K-means clustering, specified by the cluster indicators, is given by the PCA principal components, and the PCA subspace spanned by the principal directions is identical to the cluster centroid subspace specified by the between-class scatter matrix. Thus PCA automatically projects to the subspace where the global solution of K-means clustering lies, and thus facilitates K-means clustering to find near-optimal solutions. ## Correspondence analysis Correspondence analysis (CA) was developed by Jean-Paul Benzécri[14] and is conceptually similar to PCA, but scales the data (which should be non-negative) so that rows and columns are treated equivalently. It is traditionally applied to contingency tables. CA decomposes the chi-square statistic associated to this table into orthogonal factors.[15] Because CA is a descriptive technique, it can be applied to tables for which the chi-square statistic is appropriate or not. Several variants of CA are available including detrended correspondence analysis and canonical correspondence analysis. One special extension is multiple correspondence analysis, which may be seen as the counterpart of principal component analysis for categorical data.[16] ## Generalizations ### Nonlinear generalizations Most of the modern methods for nonlinear dimensionality reduction find their theoretical and algorithmic roots in PCA or K-means. Pearson's original idea was to take a straight line (or plane) which will be "the best fit" to a set of data points. 
Principal curves and manifolds[19] give the natural geometric framework for PCA generalization and extend the geometric interpretation of PCA by explicitly constructing an embedded manifold for data approximation, and by encoding using standard geometric projection onto the manifold, as it is illustrated by Fig. See also the elastic map algorithm and principal geodesic analysis. ### Multilinear generalizations In multilinear subspace learning, PCA is generalized to multilinear PCA (MPCA) that extracts features directly from tensor representations. MPCA is solved by performing PCA in each mode of the tensor iteratively. MPCA has been applied to face recognition, gait recognition, etc. MPCA is further extended to uncorrelated MPCA, non-negative MPCA and robust MPCA. ### Higher order N-way principal component analysis may be performed with models such as Tucker decomposition, PARAFAC, multiple factor analysis, co-inertia analysis, STATIS, and DISTATIS. ### Robustness - Weighted PCA While PCA finds the mathematically optimal method (as in minimizing the squared error), it is sensitive to outliers in the data that produce large errors PCA tries to avoid. It therefore is common practise to remove outliers before computing PCA. However, in some contexts, outliers can be difficult to identify. For example in data mining algorithms like correlation clustering, the assignment of points to clusters and outliers is not known beforehand. A recently proposed generalization of PCA [20] based on a Weighted PCA increases robustness by assigning different weights to data objects based on their estimated relevancy. ## Software/source code • "ViSta: The Visual Statistics System" a free software that provides principal components analysis, simple and multiple correspondence analysis. • "Spectramap" is software to create a biplot using principal components analysis, correspondence analysis or spectral map analysis. • XLSTAT is a statistical and multivariate analysis software including Principal Component Analysis among other multivariate tools. • The Unscrambler is a multivariate analysis software enabling Principal Component Analysis (PCA) with PCA Projection. • Computer Vision Library • Multivariate Data Analysis Software • In the MATLAB Statistics Toolbox, the functions `princomp` and `wmspca` give the principal components, while the function `pcares` gives the residuals and reconstructed matrix for a low-rank PCA approximation. Here is a link to a MATLAB implementation of PCA `PcaPress` . • NMath, a numerical library containing PCA for the .NET Framework. • in Octave, the free software equivalent to MATLAB, the function `princomp` gives the principal component. • in the open source statistical package R, the functions `princomp` and `prcomp` can be used for principal component analysis; `prcomp` uses singular value decomposition which generally gives better numerical accuracy. Recently there has been an explosion in implementations of principal component analysis in various R packages, generally in packages for specific purposes. For a more complete list, see here: [1]. • In XLMiner, the Principles Component tab can be used for principal component analysis. • In IDL, the principal components can be calculated using the function `pcomp`. • Weka computes principal components (javadoc). • Software for analyzing multivariate data with instant response using PCA ## Notes 1. ↑ Pearson, K. (1901). On Lines and Planes of Closest Fit to Systems of Points in Space. Philosophical Magazine 2 (6): 559–572. 2. 
↑ Jolliffe I.T. Principal Component Analysis, Series: Springer Series in Statistics, 2nd ed., Springer, NY, 2002, XXIX, 487 p. 28 illus. ISBN 978-0-387-95442-4 3. ↑ 4. ↑ Fukunaga, Keinosuke (1990). , Elsevier. 5. ↑ 6. ↑ 7. ↑ 8. ↑ 9. ↑ 10. ↑ Roweis, Sam. "EM Algorithms for PCA and SPCA." Advances in Neural Information Processing Systems. Ed. Michael I. Jordan, Michael J. Kearns, and Sara A. Solla The MIT Press, 1998. 11. ↑ Geladi, Paul (1986). Partial Least Squares Regression:A Tutorial. Analytica Chimica Acta 185: 1–17. 12. ↑ 13. ↑ 14. ↑ Benzécri, J.-P. (1973). L'Analyse des Données. Volume II. L'Analyse des Correspondances, Paris, France: Dunod. 15. ↑ Greenacre, Michael (1983). Theory and Applications of Correspondence Analysis, London: Academic Press. 16. ↑ Le Roux, Brigitte and Henry Rouanet (2004). Geometric Data Analysis, From Correspondence Analysis to Structured Data Analysis, Dordrecht: Kluwer. 17. ↑ 18. ↑ 19. ↑ 20. ↑ DOI:10.1007/978-3-540-69497-7_27 10.1007/978-3-540-69497-7_27 This citation will be automatically completed in the next few minutes. You can jump the queue or expand by hand ## References • Jolliffe, I. T. (1986). Principal Component Analysis, 487, Springer-Verlag. • R. Kramer, Chemometric Techniques for Quantitative Analysis, (1998) Marcel–Dekker, ISBN 0-8247-0198-4. • Shaw PJA, Multivariate statistics for the Environmental Sciences, (2003) Hodder-Arnold ISBN 0-3408-0763-6. • Patra sk et al., J Photochemistry & Photobiology A:Chemistry, (1999) 122:23–31 # Photos Add a Photo 6,465photos on this wiki • by Dr9855 2013-05-14T02:10:22Z • by PARANOiA 12 2013-05-11T19:25:04Z Posted in more... • by Addyrocker 2013-04-04T18:59:14Z • by Psymba 2013-03-24T20:27:47Z Posted in Mike Abrams • by Omaspiter 2013-03-14T09:55:55Z • by Omaspiter 2013-03-14T09:28:22Z • by Bigkellyna 2013-03-14T04:00:48Z Posted in User talk:Bigkellyna • by Preggo 2013-02-15T05:10:37Z • by Preggo 2013-02-15T05:10:17Z • by Preggo 2013-02-15T05:09:48Z • by Preggo 2013-02-15T05:09:35Z • See all photos See all photos >
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 116, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8718215227127075, "perplexity_flag": "middle"}
http://mathoverflow.net/revisions/6340/list
Return to Answer 2 corrected spelling It's my understanding that the octonians aren't associative enough to have a projective plane in the usual sense. That is, you'd want to define $\mathbb{O}P^{2}$ as the collection of Cayley lines in $\mathbb{R}^{16}$. However, "Cayley lines" in $\mathbb{R}^{16}$ doesn't make sense due to the lack of associativity. However, equivalent (for $\mathbb{K}P^{2}$ with $\mathbb{K} \in${$\mathbb{R}, \mathbb{C}, \mathbb{H}$} ) formulations of due still work. Set $k = dim_{\mathbb{R}} \mathbb{K}$. For example, topologically, $\mathbb{K}P^{2}$ is obtained by attaching a $2k$ ball to the sphere $S^{k}$ via the $k$-dimensional Hopf map (meaning, where the base sphere is $k$ dimensional). The same is true for $\mathbb{K} = \mathbb{O}$, topologically. The (black box) fact that only fibration with fiber $S^{7}$ and total space a sphere is the fibration $S^{7}\rightarrow S^{15}\rightarrow S^{8}$ leads to the fact that there is no higher $\mathbb{O}P^n$. From this description, it's not too hard (using the same techniques which work on $\mathbb{C}P^{2}$ and $\mathbb{H}P^{2}$), to show that $H^{*}(\mathbb{O}P^{2}, \mathbb{Z}) = \mathbb{Z}[x]/x^{3}$ with $|x| = 8$. As another example of an equivalent formulation, one can start with a $2k$-dimensional ball in $\mathbb{R}^{2k}$ and quotient out the boundary by the $k$ dimensional Hopf map. One can put a particular radial metric on the ball and check that it's well defined and smooth under the quotienting (I forget exactly what the metric is). This construction yields $\mathbb{K}P^2$ with the Fubini-Study metric. This construction also works when $\mathbb{K} = \mathbb{O}$. This construction is nice because it shows $\mathbb{O}P^2$ has a Fubini-Study metric so that curvatures lie between 1 and 4 and the cut locus relative to a point is an $S^8$. In other words, this construction shows the geometry is very similar to that of $\mathbb{K}P^2$ for the division algebras $\mathbb{K}$ over $\mathbb{R}$. One can also use this description (with some hard work, or so I'm told), to show that $\mathbb{O}P^2$ is isometric to the homogeneous space $F_{4}/Spin(9)$ with normal homogeneous metric. 1 It's my understanding that the octonians aren't associative enough to have a projective plane in the usual sense. That is, you'd want to define $\mathbb{O}P^{2}$ as the collection of Cayley lines in $\mathbb{R}^{16}$. However, "Cayley lines" in $\mathbb{R}^{16}$ doesn't make sense due to the lack of associativity. However, equivalent (for $\mathbb{K}P^{2}$ with $\mathbb{K} \in${$\mathbb{R}, \mathbb{C}, \mathbb{H}$} ) formulations of due still work. Set $k = dim_{\mathbb{R}} \mathbb{K}$. For example, topologically, $\mathbb{K}P^{2}$ is obtained by attaching a $2k$ ball to the sphere $S^{k}$ via the $k$-dimensional Hopf map (meaning, where the base sphere is $k$ dimensional). The same is true for $\mathbb{K} = \mathbb{O}$, topologically. The (black box) fact that only fibration with fiber $S^{7}$ and total space a sphere is the fibration $S^{7}\rightarrow S^{15}\rightarrow S^{8}$ leads to the fact that there is no higher $\mathbb{O}P^n$. From this description, it's not too hard (using the same techniques which work on $\mathbb{C}P^{2}$ and $\mathbb{H}P^{2}$), to show that $H^{*}(\mathbb{O}P^{2}, \mathbb{Z}) = \mathbb{Z}[x]/x^{3}$ with $|x| = 8$. As another example of an equivalent formulation, one can start with a $2k$-dimensional ball in $\mathbb{R}^{2k}$ and quotient out the boundary by the $k$ dimensional Hopf map. 
One can put a particular radial metric on the ball and check that it's well defined and smooth under the quotienting (I forget exactly what the metric is). This construction yields $\mathbb{K}P^2$ with the Fubini-Study metric. This construction also works when $\mathbb{K} = \mathbb{O}$. This construction is nice because it shows $\mathbb{O}P^2$ has a Fubini-Study metric so that curvatures lie between 1 and 4 and the cut locus relative to a point is an $S^8$. In other words, this construction shows the geometry is very similar to that of $\mathbb{K}P^2$ for the division algebras $\mathbb{K}$ over $\mathbb{R}$. One can also use this description (with some hard work, or so I'm told), to show that $\mathbb{O}P^2$ is isometric to the homogeneous space $F_{4}/Spin(9)$ with normal homogeneous metric.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 64, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.945958137512207, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/247395/limiting-distribution-and-initial-distribution-of-a-markov-chain
# Limiting distribution and initial distribution of a Markov chain For a Markov chain (can the following discussion be for either discrete time or continuous time, or just discrete time?), 1. if for an initial distribution i.e. the distribution of $X_0$, there exists a limiting distribution for the distribution of $X_t$ as $t \to \infty$, I wonder if there exists a limiting distribution for the distribution of $X_t$ as $t \to \infty$, regardless of the distribution of $X_0$? 2. When talking about limiting distribution of a Markov chain, is it in the sense that some distributions converge to a distribution? How is the convergence defined? Thanks! - ## 1 Answer 1. No, let $X$ be a Markov process having each state being absorbing, i.e. if you start from $x$ then you always stay there. For any initial distribution $\delta_x$, there is a limiting distribution which is also $\delta_x$ - but this distribution is different for all initial conditions. 2. The convergence of distributions of Markov Chains is usually discussed in terms of $$\lim_{t\to\infty}\|\nu P_t - \pi\| = 0$$ where $\nu$ is the initial distribution and $\pi$ is the limiting one, here $\|\cdot\|$ is the total variation norm. AFAIK there is at least a strong theory for the discrete-time case, see e.g. the book by S. Meyn and R. Tweedie "Markov Chains and Stochastic Stability" - the first edition you can easily find online. In fact, there are also extension of this theory by the same authors to the continuous time case - just check out their work to start with. - Thanks! I was wondering if the limiting distribution which is independent of initial distributions is unique when it exists? – Tim Dec 1 '12 at 15:10 @Tim: could you please define the uniqueness? Is far as my guess is true, you mean exactly its independence from the initial distribution. – S.D. Dec 1 '12 at 17:33 By "uniqueness" of the limiting distribution, I mean if there are two different probability measures on the state space s.t. they can both be the limiting distribution for a Markov chain, and the limiting distribution is defined to be the same for all initial distributions. – Tim Dec 1 '12 at 17:51 @Tim Given any initial distribution it admits (if admits) a unique limiting distribution. Thus, if the latter is independent of the former, the latter is unique. In other words, suppose there are two limiting distributions $\pi_1$ and $\pi_2$, then $\|\nu_1 P^n - \pi_1\| \to 0$ and $\|\nu_2 P^n - \pi_2\| \to 0$ for some $\nu_1, \nu_2$ which contradicts with the fact that limit of $\nu_1 P^n$ is the same as of $\nu_2 P^n$ – S.D. Dec 2 '12 at 9:43 – Tim Dec 2 '12 at 10:06 show 2 more comments
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 20, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9273475408554077, "perplexity_flag": "head"}
http://rosettacode.org/wiki/Matrix_arithmetic
# Matrix arithmetic From Rosetta Code Matrix arithmetic You are encouraged to solve this task according to the task description, using any language you may know. For a given matrix, return the determinant and the permanent of the matrix. The determinant is given by $\det(A) = \sum_\sigma\sgn(\sigma)\prod_{i=1}^n M_{i,\sigma_i}$ while the permanent is given by $\operatorname{perm}(A)=\sum_\sigma\prod_{i=1}^n M_{i,\sigma_i}$ In both cases the sum is over the permutations σ of the permutations of 1, 2, ..., n. (A permutation's sign is 1 if there are an even number of inversions and -1 otherwise; see parity of a permutation.) More efficient algorithms for the determinant are known: LU decomposition, see for example wp:LU decomposition#Computing the determinant. Efficient methods for calculating the permanent are not known. Cf. ## Contents C99 code. By no means efficient or reliable. If you need it for serious work, go find a serious library. `#include <stdio.h>#include <stdlib.h>#include <string.h> double det_in(double **in, int n, int perm){ if (n == 1) return in[0][0];  double sum = 0, *m[--n]; for (int i = 0; i < n; i++) m[i] = in[i + 1] + 1;  for (int i = 0, sgn = 1; i <= n; i++) { sum += sgn * (in[i][0] * det_in(m, n, perm)); if (i == n) break;  m[i] = in[i] + 1; if (!perm) sgn = -sgn; } return sum;} /* wrapper function */double det(double *in, int n, int perm){ double *m[n]; for (int i = 0; i < n; i++) m[i] = in + (n * i);  return det_in(m, n, perm);} int main(void){ double x[] = { 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24 };  printf("det:  %14.12g\n", det(x, 5, 0)); printf("perm: %14.12g\n", det(x, 5, 1));  return 0;}` A method to calculate determinant that might actually be usable: `#include <stdio.h>#include <stdlib.h>#include <tgmath.h> void showmat(const char *s, double **m, int n){ printf("%s:\n", s); for (int i = 0; i < n; i++) { for (int j = 0; j < n; j++) printf("%12.4f", m[i][j]); putchar('\n'); }} int trianglize(double **m, int n){ int sign = 1; for (int i = 0; i < n; i++) { int max = 0;  for (int row = i; row < n; row++) if (fabs(m[row][i]) > fabs(m[max][i])) max = row;  if (max) { sign = -sign; double *tmp = m[i]; m[i] = m[max], m[max] = tmp; }  if (!m[i][i]) return 0;  for (int row = i + 1; row < n; row++) { double r = m[row][i] / m[i][i]; if (!r) continue;  for (int col = i; col < n; col ++) m[row][col] -= m[i][col] * r; } } return sign;} double det(double *in, int n){ double *m[n]; m[0] = in;  for (int i = 1; i < n; i++) m[i] = m[i - 1] + n;  showmat("Matrix", m, n);  int sign = trianglize(m, n); if (!sign) return 0;  showmat("Upper triangle", m, n);  double p = 1; for (int i = 0; i < n; i++) p *= m[i][i]; return p * sign;} #define N 18int main(void){ double x[N * N]; srand(0); for (int i = 0; i < N * N; i++) x[i] = rand() % N;  printf("det: %19f\n", det(x, N)); return 0;}` This requires the modules from the Permutations and Permutations by swapping Tasks. 
Translation of: Python `import std.algorithm, std.range, std.traits, permutations2, permutations_by_swapping1; auto prod(Range)(/*in*/ Range r) /*pure nothrow*/ { return reduce!q{a * b}(cast(ForeachType!Range)1, r);} T permanent(T)(in T[][] a) /*pure nothrow*/in { foreach (const row; a) assert(row.length == a[0].length);} body { auto r = iota(cast()a.length); T tot = 0; foreach (sigma; permutations(r.array())) tot += r.map!(i => a[i][sigma[i]])().prod(); return tot;} T determinant(T)(in T[][] a) /*pure nothrow*/in { foreach (const row; a) assert(row.length == a[0].length);} body { immutable n = a.length; auto r = iota(n); T tot = 0; //foreach (sigma, sign; spermutations(n)) { foreach (sigma_sign; spermutations(n)) { const sigma = sigma_sign[0]; immutable sign = sigma_sign[1]; tot += sign * r.map!(i => a[i][sigma[i]])().prod(); } return tot;} void main() { import std.stdio;  foreach (const a; [[[1, 2], [3, 4]],  [[1, 2, 3, 4], [4, 5, 6, 7], [7, 8, 9, 10], [10, 11, 12, 13]],  [[ 0, 1, 2, 3, 4], [ 5, 6, 7, 8, 9], [10, 11, 12, 13, 14], [15, 16, 17, 18, 19], [20, 21, 22, 23, 24]]]) { writefln("[%([%(%2s, %)],\n %)]]", a); writefln("Permanent: %s, determinant: %s\n", permanent(a), determinant(a)); }}` Output: ```[[ 1, 2], [ 3, 4]] Permanent: 10, determinant: -2 [[ 1, 2, 3, 4], [ 4, 5, 6, 7], [ 7, 8, 9, 10], [10, 11, 12, 13]] Permanent: 29556, determinant: 0 [[ 0, 1, 2, 3, 4], [ 5, 6, 7, 8, 9], [10, 11, 12, 13, 14], [15, 16, 17, 18, 19], [20, 21, 22, 23, 24]] Permanent: 6778800, determinant: 0``` J has a conjunction for defining verbs which can act as determinant. This conjunction is symbolized as a space followed by a dot. For people who do not care to sort out what "recursive expansion by minors" means: `-/ .* y` gives the determinant of y and `+/ .* y` gives the permanent of y. For example, given the matrix: ` i. 5 5 0 1 2 3 4 5 6 7 8 910 11 12 13 1415 16 17 18 1920 21 22 23 24` Its determinant is 0. When we use IEEE floating point, we only get an approximation of this result: ` -/ .* i. 5 5_1.30277e_44` If we use exact (rational) arithmetic, we get a precise result: ` -/ .* i. 5 5x0` The permanent does not have this problem in this example (the matrix contains no negative values and permanent does not use subtraction): ` +/ .* i. 5 56778800` As an aside, note that for specific verbs (like -/ .*) J uses an algorithm which is more efficient than the brute force approach. ```П4 ИПE П2 КИП0 ИП0 П1 С/П ИП4 / КП2 L1 06 ИПE П3 ИП0 П1 Сx КП2 L1 17 ИП0 ИП2 + П1 П2 ИП3 - x#0 34 С/П ПП 80 БП 21 КИП0 ИП4 С/П КИП2 - * П4 ИП0 П3 x#0 35 Вx С/П КИП2 - <-> / КП1 L3 45 ИП1 ИП0 + П3 ИПE П1 П2 КИП1 /-/ ПП 80 ИП3 + П3 ИП1 - x=0 61 ИП0 П1 КИП3 КП2 L1 74 БП 12 ИП0 <-> ^ КИП3 * КИП1 + КП2 -> L0 82 -> П0 В/О ``` This program calculates the determinant of the matrix of order <= 5. Prior to startup, РE entered 13, entered the order of the matrix Р0, and the elements are introduced with the launch of the program after one of them, the last on the screen will be determinant. Permanent is calculated in this way. Determinant is a built in function Det `Permanent[m_List] := With[{v = Array[x, Length[m]]}, Coefficient[Times @@ (m.v), Times @@ v] ]` `a: matrix([2, 9, 4], [7, 5, 3], [6, 1, 8])$ determinant(a);-360 permanent(a);900` The determinant is built in: `matdet(M)` and the permanent can be defined as `matperm(M)=my(n=#M,t);sum(i=1,n!,t=numtoperm(n,i);prod(j=1,n,M[j,t[j]]))` Using the module file spermutations.py from Permutations by swapping. 
The algorithm for the determinant is a more literal translation of the expression in the task description and the Wikipedia reference. `from itertools import permutationsfrom operator import mulfrom math import fsumfrom spermutations import spermutations def prod(lst): return reduce(mul, lst, 1) def perm(a): n = len(a) r = range(n) s = permutations(r) return fsum(prod(a[i][sigma[i]] for i in r) for sigma in s) def det(a): n = len(a) r = range(n) s = spermutations(n) return fsum(sign * prod(a[i][sigma[i]] for i in r) for sigma, sign in s) if __name__ == '__main__': from pprint import pprint as pp  for a in ( [ [1, 2], [3, 4]],   [ [1, 2, 3, 4], [4, 5, 6, 7], [7, 8, 9, 10], [10, 11, 12, 13]],   [ [ 0, 1, 2, 3, 4], [ 5, 6, 7, 8, 9], [10, 11, 12, 13, 14], [15, 16, 17, 18, 19], [20, 21, 22, 23, 24]], ): print('') pp(a) print('Perm: %s Det: %s' % (perm(a), det(a)))` Sample output ```[[1, 2], [3, 4]] Perm: 10 Det: -2 [[1, 2, 3, 4], [4, 5, 6, 7], [7, 8, 9, 10], [10, 11, 12, 13]] Perm: 29556 Det: 0 [[0, 1, 2, 3, 4], [5, 6, 7, 8, 9], [10, 11, 12, 13, 14], [15, 16, 17, 18, 19], [20, 21, 22, 23, 24]] Perm: 6778800 Det: 0``` The second matrix above is that used in the Tcl example. The third matrix is from the J language example. Note that the determinant seems to be 'exact' using this method of calculation without needing to resort to other than Pythons default numbers. ` #lang racket(require math)(define determinant matrix-determinant) (define (permanent M) (define n (matrix-num-rows M)) (for/sum ([σ (in-permutations (range n))]) (for/product ([i n] [σi σ]) (matrix-ref M i σi)))) ` The determinant is provided by the linear algebra package in Tcllib. The permanent (being somewhat less common) requires definition, but is easily described: Library: Tcllib (Package: math::linearalgebra) Library: Tcllib (Package: struct::list) `package require math::linearalgebrapackage require struct::list proc permanent {matrix} { for {set plist {};set i 0} {$i<[llength $matrix]} {incr i} { lappend plist $i } foreach p [::struct::list permutations $plist] { foreach i $plist j $p { lappend prod [lindex $matrix $i $j] } lappend sum [::tcl::mathop::* {*}$prod[set prod {}]] } return [::tcl::mathop::+ {*}$sum]}` Demonstrating with a sample matrix: `set mat { {1 2 3 4} {4 5 6 7} {7 8 9 10} {10 11 12 13}}puts [::math::linearalgebra::det $mat]puts [permanent $mat]` Output: ```1.1315223609263888e-29 29556 ``` Tweet
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 2, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8654286861419678, "perplexity_flag": "middle"}
http://www.physicsforums.com/showthread.php?p=3635350
Physics Forums ## Left coset of a subgroup of Complex numbers. 1. The problem statement, all variables and given/known data For H $\leq$ G as specified, determine the left cosets of H in G. (ii) G = $\mathbb{C}$* H = $\mathbb{R}$* (iii) G = $\mathbb{C}$* H = $\mathbb{R}$$_{+}$ 3. The attempt at a solution I have the answers, it's just a little inconsistency I don't understand. For (ii) left cosets are {r(cos∅ + isin∅); r $\in$ (0,∞)} ∅ $\in$ [0, 2$\pi$) For (iii) {r(cos∅ + isin∅); r $\in$ $\mathbb{R}$ \ {0} } ∅ $\in$ [0, $\pi$) I'm told that the answers are different because the range of r and ∅ are different. It says in (ii) they are "half lines" coming out of the origin and in (iii) they are lines through the origin but excluding the origin itself. What I don't get, though, is that surely the answer for (ii) should be the answer for (iii)? And vice versa? Basically in (ii) we have H is the set of all the real numbers, while G is the set of all the complex numbers. So when we multiply an element of H by an element of G (and the constant multiplying the euler's forumla is positive), surely r would then range over all the real numbers (excluding zero). Yet in (iii) we have H is the set of all the positive real numbers, while G is still the set of all the complex numbers (and the constant multiplying the euler's forumla is positive). So when we multiply an element of H by an element of G, surely r would only range over the positive real numbers, as opposed to all the real numbers exluding zero like the answer says? Does anyone understand my problem? Thanks. PhysOrg.com science news on PhysOrg.com >> Front-row seats to climate change>> Attacking MRSA with metals from antibacterial clays>> New formula invented for microscope viewing, substitutes for federally controlled drug Recognitions: Gold Member Science Advisor Staff Emeritus The question does not even make sense. R+ is NOT a subgroup of C*. Are you sure you have copied the problem correctly? R+ is a subgroup of C+. Quote by HallsofIvy The question does not even make sense. R+ is NOT a subgroup of C*. Are you sure you have copied the problem correctly? R+ is a subgroup of C+. Yeah, I've double checked and I've copied it correctly. When saying R+, I assumed it was talking about the multiplication of positive real numbers (as opposed to addition of R+, which cannot be a group let alone a subgroup). Why would R+ under multiplication not be a subgroup of C*? Surely every possible value of R+ on the positive real line is some form a complex number. All R+ is in C*, as well as gh (where g and h are elements of R+) are elements of R+, and lastly the inverse of an element g, is 1/g which is in R+). Recognitions: Homework Help ## Left coset of a subgroup of Complex numbers. Yes, you are right. R+ is a group under multiplication. And I agree, solutions (ii) and (iii) should be swapped around. Here's my interpretation of (ii). Let's consider one specific element of C*, say z=a cis(phi). Multiply it with R* to get the left coset. This is a line excluding zero. Now if we consider z=a cis(phi + pi) we get the same coset. Indeed if pick any z in C* on the line, we get the same coset. So the coset is uniquely defined by an angle between 0 and pi, but is independent of a. So a specific coset is: {r cis(phi) | r in R*} 0 ≤ phi < pi Thread Tools | | | | |-------------------------------------------------------------------|----------------------------------------|---------| | Similar Threads for: Left coset of a subgroup of Complex numbers. 
| | | | Thread | Forum | Replies | | | General Math | 3 | | | High Energy, Nuclear, Particle Physics | 15 | | | Linear & Abstract Algebra | 0 | | | Calculus | 7 | | | Linear & Abstract Algebra | 2 |
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 13, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.911666989326477, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/100368?sort=newest
## Idempotent homomorphisms of von Neumann algebras ### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points) Is there any description of unital idempotent ($F^2(x)=F(x)$) morphisms of a von Neumann algebra into itself? Or, equivalently, of weakly closed subalgebras which are retracts as von Neumann algebras? - 2 Naive question perhaps - Why are the two questions equivalent? I don't see why the range of an unital idempotent is weakly closed. – mohanravi Jun 22 at 17:36 I had a doubt about that when asking. Indeed, these might be wo different questions. Just in my case I know that the image is weakly closed. – Yulia Kuznetsova Jun 22 at 20:22 So can I ask a naive question: is it correct that $F$ is just an algebra homomorphism (not assumed normal, or a $*$-map, etc.?) – Matthew Daws Jun 22 at 20:31 Matthew, mohanravi, I should have written "morphism", so normal and involutive. Then indeed the image is weakly closed since it is the kernel of $F-Id$. – Yulia Kuznetsova Jun 23 at 0:57 ## 1 Answer Yes. The kernel of F is an ultraweakly closed *-ideal of M generated by some central projection z. M splits as a direct sum of zM and (1-z)M. As a 2x2 matrix F has only two nonzero entries, one that corresponds to an idempotent automorphism (hence the identity map) of (1-z)M and another one to an arbitrary morphism from (1-z)M to zM. Thus idempotent morphisms are classified by central projections and morphisms from (1-z)M to zM. - Indeed, $G(X)=F(x)-(1-z)x$ is a morphism but one cannot get anything better... – Yulia Kuznetsova Jun 23 at 0:48
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 5, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9268656373023987, "perplexity_flag": "middle"}
http://nrich.maths.org/2686
Gold Again Without using a calculator, computer or tables find the exact values of cos36cos72 and also cos36 - cos72. Pythagorean Golden Means Show that the arithmetic mean, geometric mean and harmonic mean of a and b can be the lengths of the sides of a right-angles triangle if and only if a = bx^3, where x is the Golden Ratio. Golden Triangle Three triangles ABC, CBD and ABD (where D is a point on AC) are all isosceles. Find all the angles. Prove that the ratio of AB to BC is equal to the golden ratio. Golden Eggs Stage: 5 Challenge Level: 1) An ellipse with semi axes $a$ and $b$ fits between two circles of radii $a$ and $b$ (where $b> a$) as shown in the diagram. If the area of the ellipse is equal to the area of the annulus what is the ratio $b:a$? (2) Find the value of $R$ if this sequence of 'nested square roots' continues indefinitely: $$R=\sqrt{1 + \sqrt{1 + \sqrt {1 + \sqrt {1 + ...}}}}.$$ The NRICH Project aims to enrich the mathematical experiences of all learners. To support this aim, members of the NRICH team work in a wide range of capacities, including providing professional development for teachers wishing to embed rich mathematical tasks into everyday classroom practice. More information on many of our other activities can be found here.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 7, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8725770711898804, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/79681?sort=votes
## Fourier transform of a real-valued function. ### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points) My chemist roommate asked me the following question. Let $f : \mathbb{R} \rightarrow \mathbb{R}$ be a real-valued function and $F$ its Fourier transform. Suppose we know the modulus function $|F| : \mathbb{R} \rightarrow \mathbb{R}$. What can we deduce about $f$, can we determine it completely? Feel free to assume any regularity conditions on $f$. - If you translate $f$ you multiply $F$ by a function of absolute value $1$ so no. – Torsten Ekedahl Nov 1 2011 at 5:46 2 We know it up to a certain group of transformations, including translations. More specifically, take any function of absolute value $1$ with odd imaginary part and even real part, and multiply the fourier transformation by that. This will create a translation-ish transformation of the function that the modulus of the fourier transform cannot detect. – Will Sawin Nov 1 2011 at 6:32 ## 2 Answers To try to determine a function from the absolute value of its Fourier transform is actually the famous "hidden phase problem". In X-ray crystollography one measures the absolute value of the Fourier transform of a function that describes where the atoms in the molecule are located. However, using clever tricks and some a priori knowledge of the unknown functions (for instance the fact that it is non-negative) one has been able to handle this problem in practice. The Nobel prize in chemistry 1985 was awarded for progress on this problem. - Herbert Hauptman, who shared the 1985 Nobel prize in Chemistry for his work in X-ray crystallography, died on October 23. See this link for information on Hauptman and his work: ams.org/news?news_id=1289 – Jan Boman Nov 2 2011 at 23:43 ### You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you. See http://www.optics.rochester.edu/workgroups/fienup/PUBLICATIONS/OL78_RecModFT.pdf for a closely related question, and Reconstruction of a function from the modulus of its Fourier transform V. V. Bashurov (math notes, 1969) for the exact question. Your chemist friend is probably thinking of X-ray diffraction, where all you get is the modulus. There is an enormous body of work on this (usually the thing you are transforming has additional crystallographic symmetry. -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 9, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8949915170669556, "perplexity_flag": "middle"}
http://chempaths.chemeddl.org/services/chempaths/?q=book/Thermodynamics%3A%20Atoms%2C%20Molecules%20and%20Energy/2173/internal-energy
# Internal Energy Submitted by jwmoore on Sat, 03/26/2011 - 11:42 When matterAnything that occupies space and has mass; contrasted with energy. absorbs or releases heatEnergy transferred as a result of a temperature difference; a form of energy stored in the movement of atomic-sized particles. energyA system's capacity to do work., we cannot always explain this energy change on the microscopic level in terms of a speeding up or a slowing down of molecular motion as we do for heat capacities. This is particularly true when the heat change accompanies a chemical changeA process in which one or more substances, the reactant or reactants, change into one or more different substances, the products; chemical change involves rearrangement, combination, or separation of atoms. Also referred to as chemical reaction.. Here we must consider changes in the kinetic and potential energies of electrons in the atoms and molecules involved, that is, changes in the electronic energy. As a simple example of a chemical change, let us consider an exothermicDescribes a process in which energy is transferred to the surroundings as a result of a temperature difference. reaction involving only one kind of atom, the decomposition of ozone, O3: 2O3(g) → 3O2(g)      25°C      (1) O3 is a gas which occurs in very low but very important concentrations in the upper atmosphereA unit of pressure equal to 101.325 kPa or 760 mmHg; abbreviated atm. Also, the mixture of gases surrounding the earth.. It can be produced in somewhat higher concentrations in the laboratory by using an electric spark discharge and can then be concentratedIncreased the concentration of a mixture or solution (verb). Having a large concentration (adjective). and purified. The result is a blue gas which is dangerously unstable and liable to explode without warning. Let us now suppose that we have a pure sample of 2 mol O3(g) in a closed container at 25°C and are able to measure the quantity of heat evolved when it subsequently explodes to form O2 gas according to Eq. (1) (see Fig. 1). It is found that 287.9 kJ is released. This energy heats up the surroundings. Where do these 287.9 kJ come from? Certainly not entirely from the translational kinetic energy of the molecules. To begin with we had 2 mol O3 at 25°C and at the end 3 mol O2 at 25°C. Since the translational kinetic energy of any gas is 3/2 nRT, this corresponds to an increase of 3/2 (3 mol – 2 mol)RT = 3/2 × 1 mol × 8.314 J K–1 mol–1 × 298 K = 3.72 kJ. After the reaction the translational energy of the molecules is higher, and so heat should have been absorbed, not given off. In any case 3.73 kJ is only 1.3 percent of the total heat change. The changes in rotational and vibrational energy are even smaller, accounting for a decrease in energy of the substanceA material that is either an element or that has a fixed ratio of elements in its chemical formula. in the container by only 0.88 kJ. Figure 1 The reaction 2O3(g) → 3O2(g) (25°C). When the atoms in the O3 molecules rearrange to form O2 molecules, the result is a lowering in the electronic energy and a consequent release of 288 kJ of heat energy to the surroundings. From the standpoint of energy, the most important thing that happens as 2 mol O3 is converted into 3 mol O2 is rearrangement of the valence electrons so that they are closer to positively charged O nuclei. We see in other sections that the closer electrons are to nuclei, the lower their total energy. 
Thus three O2 molecules have less electronic energy than the two O3 molecules from which they were formed. The remaining energy first appears as kinetic energy of the O2 molecules. Immediately after the reaction the O2 is at a very high temperatureA physical property that indicates whether one object can transfer thermal energy to another object.. Eventually this energy finds its way to the surroundings, and the O2 cools to room temperature. A detailed summary of the various energy changes which occur when O3 reacts to form O2 is given in Table 1. The important message of this table is that 99 percent of the energy change is attributable to the change in electronic energy. This is a typical figure for gaseous reactions. What makes a gaseous reaction exothermic or endothermicIn chemical thermodynamics, describes a process in which energy is transferred from the surroundings to the system as a result of a temperature difference. is the change in the bonding. Changes in the energies of molecular motion can usually be neglected by comparison. TABLE 1 Detailed Balance Sheet of the Energy Changes Occurring in the Reaction 2O3(g) → 3O2(g)      25°C, constant volume Type of Energy Initial Value/kJ Final Value/kJ Change*in Energy/kJ Electronic x† x – 290.70 –290.70 Translational 7.43 11.15 +3.72 Rotational and vibrational 8.32 7.44 – 0.88 Total x + 15.75 x – 272.11 –287.86 * Final value – initial value. † There is no experimental means of determining the initial or final value of the electronic energy—only the change in electronic energy can be measured. Highly accurate calculations of electronic energy from wave-mechanical theory require complicated mathematics end a great deal of computer power. Therefore we have represented the initial electronic energy by x. The sum of all the different kinds of energy which the molecules of a substance can possess is called the internal energyA thermodynamic function corresponding to the energy of a system; represented by the symbol U or E. and given the symbol U. (The symbol E also widely used.) In a gas we can regard the internal energy as the sum of the electronic, translational, rotational, and vibrational energies. In the case of liquids and solids the molecules are closer together, and we must include the potential energy due to their interactions with each other. Noncovalent interactions (intermolecular forces) attract one molecule to others. In addition, the motion of one molecule now affects its neighbors, and we can no longer subdivide the energy into neat categories as in the case of a gas. ## Measurement of Internal Energy Equation (1) tells us how to detect and measure changes in the internal energy of a system. If we carry out any process in a closed container the volume remains constant), the quantity of heat absorbed by the system equals the increase in internal energy. qV = ΔU (1) A convenient device for making such measurements is a bomb calorimeterA sturdy, rigid container (bomb) and associated device for measuring the heat energy transferred out of or into a chemical system in which a reaction occurs; the sturdy bomb maintains constant volume. (Fig. 2), which contains a steel-walled vessel (bomb) with a screw-on gas-tight lid. In the bomb can be placed a weighed sample of a combustible substance together with O2(g) at about 3 MPa (30 atmAbbreviation for atmosphere, a unit of pressure equal to 101.325 kPa or 760 mmHg.) 
pressureForce per unit area; in gases arising from the force exerted by collisions of gas molecules with the wall of the container.. When the substance is ignited by momentarily passing electrical current through a heating wire, the heat energy released by its combustionVigorous combination of a material with oxygen gas, usually resulting in a flame. raises the temperature of water surrounding the bomb. Measurement of the change in temperature of the water permits calculation ofqV (and thus ΔU), provided the heat capacity of the calorimeter is known. The heat capacity can be determined as in Example 1 from Heat Capacities or by igniting a substance for which ΔU is already known. Figure 2 A schematic of a bomb calorimeter. EXAMPLE 1 When 0.7943g of glucose, C6H12O6, is ignited in a bomb calorimeter, the temperature rise is found to be 1.841 K. The heat capacity of the calorimeter is 6.746 kJ K–1. Find ΔUm for the reaction C6H12O6(s) + 6O2(g) → 6CO2(g) + 6H2O(l) under the prevailing conditions. SolutionA mixture of one or more substances dissolved in a solvent to give a homogeneous mixture. The heat energy absorbed by the calorimeter in increasing its temperature by 1.841 K is given by q = C ΔT = 6.745 kJ K–1 × 1.841 K = 12.42 kJ Since this heat energy was released by the reaction system, we must regard it as negative. Accordingly, qV = –12.42 kJ = ΔU We need now only to calculate the change in internal energy per mole, that is, ΔUm. Now $n_{\text{glucose}}=\text{0}\text{.7953 g }\times \text{ }\frac{\text{1 mol}}{\text{180}\text{.16 g}}=\text{4}\text{.409 }\times \text{ 10}^{-\text{3}}\text{ mol}$ Thus      $\Delta U_{m}=\frac{-\text{12}\text{.42 kJ}}{\text{4}\text{.409 }\times \text{ 10}^{-\text{3}}\text{ mol}}=-\text{2817 kJ mol}^{-\text{1}}$ • Printer-friendly version • Login or register to post comments ## Term of the Day A substanceA material that is either an element or that has a fixed ratio of elements in its chemical formula. that increases the concentrationA measure of the ratio of the quantity of a substance to the quantity of solvent, solution, or ore. Also, the process of making something more concentrated. of hydroxide ions in aqueousDescribing a solution in which the solvent is water. solutionA mixture of one or more substances dissolved in a solvent to give a homogeneous mixture.. ## Print Options • Printer Friendly Version will open a new window that has a clean view of the page you are viewing, plus any sub-pages. Warning: if you attempt to view a print version of the top page of a path, this could result in a very large file. • PDF Version will prompt a download of only the current page you are viewing.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 2, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9139116406440735, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/tagged/integral-inequality+measure-theory
# Tagged Questions 1answer 67 views ### Prove that $L^1$ is a Banach algebra with multiplication defined by convolution To be more specific, prove that $L^1(\mathbb{R}^n)$ with multiplication defined by convolution: $$(f\cdot g)(x)=\int_\mathbb{R^n}f(x-y)g(y)dy$$ is a Banach algebra. All the properties of Banach ... 1answer 587 views ### Do inequalities that hold for infinite sums hold for integrals too? Let $\mathbb{R}_{\geq0}$ denote the set of non-negative reals and $+\infty$, and $\mathbb{Z}^+$ denote the set of positive integers. I will also let $\lambda$ denote the Lebesgue measure on ... 1answer 372 views ### Hölder inequality from Jensen inequality I'm taking a course in Analysis in which the following exercise was given. Exercise Let $(\Omega, \mathcal{F}, \mu)$ be a probability space. Let $f\ge 0$ be a measurable function. Using Jensen's ...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 8, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9100854992866516, "perplexity_flag": "head"}