The photon is a type of elementary particle. It is the quantum of the electromagnetic field, including electromagnetic radiation such as light and radio waves, and the force carrier for the electromagnetic force. Photons are massless,[a] so they always move at the speed of light in vacuum, 299,792,458 m/s. Like all elementary particles, photons are currently best explained by quantum mechanics and exhibit wave–particle duality, their behavior featuring properties of both waves and particles.[2]

The modern photon concept originated during the first two decades of the 20th century with the work of Albert Einstein, who built upon the research of Max Planck. While trying to explain how matter and electromagnetic radiation could be in thermal equilibrium with one another, Planck proposed that the energy stored within a material object should be regarded as composed of an integer number of discrete, equal-sized parts. To explain the photoelectric effect, Einstein introduced the idea that light itself is made of discrete units of energy. In 1926, Gilbert N. Lewis popularized the term photon for these energy units.[3][4][5] Subsequently, many other experiments validated Einstein's approach.[6][7][8]

In the Standard Model of particle physics, photons and other elementary particles are described as a necessary consequence of physical laws having a certain symmetry at every point in spacetime. The intrinsic properties of particles, such as charge, mass, and spin, are determined by this gauge symmetry. The photon concept has led to momentous advances in experimental and theoretical physics, including lasers, Bose–Einstein condensation, quantum field theory, and the probabilistic interpretation of quantum mechanics. It has been applied to photochemistry, high-resolution microscopy, and measurements of molecular distances. Recently, photons have been studied as elements of quantum computers, and for applications in optical imaging and optical communication such as quantum cryptography.

Nomenclature

[Figure: Photoelectric effect – the emission of electrons from a metal plate caused by light quanta (photons).]
[Figure: The 1926 Gilbert N. Lewis letter which brought the word "photon" into common usage.]

The word quanta (singular quantum, Latin for "how much") was used before 1900 to mean particles or amounts of different quantities, including electricity. In 1900, the German physicist Max Planck was studying black-body radiation, and he suggested that the experimental observations, specifically at shorter wavelengths, would be explained if the energy stored within a molecule was a "discrete quantity composed of an integral number of finite equal parts", which he called "energy elements".[9] In 1905, Albert Einstein published a paper in which he proposed that many light-related phenomena—including black-body radiation and the photoelectric effect—would be better explained by modelling electromagnetic waves as consisting of spatially localized, discrete wave-packets.[10] He called such a wave-packet the light quantum (German: das Lichtquant).[b]

The name photon derives from the Greek word for light, φῶς (transliterated phôs). Arthur Compton used photon in 1928, referring to Gilbert N. Lewis, who coined the term in a letter to Nature on December 18, 1926.[3][11] The same name was used earlier but was never widely adopted before Lewis: in 1916 by the American physicist and psychologist Leonard T.
Troland, in 1921 by the Irish physicist John Joly, in 1924 by the French physiologist René Wurmser (1890–1993), and in 1926 by the French physicist Frithiof Wolfers (1891–1971).[5] The name was suggested initially as a unit related to the illumination of the eye and the resulting sensation of light and was used later in a physiological context. Although Wolfers's and Lewis's theories were contradicted by many experiments and never accepted, the new name was adopted very soon by most physicists after Compton used it.[5][c]

In physics, a photon is usually denoted by the symbol γ (the Greek letter gamma). This symbol for the photon probably derives from gamma rays, which were discovered in 1900 by Paul Villard,[13][14] named by Ernest Rutherford in 1903, and shown to be a form of electromagnetic radiation in 1914 by Rutherford and Edward Andrade.[15] In chemistry and optical engineering, photons are usually symbolized by hν, which is the photon energy, where h is the Planck constant and the Greek letter ν (nu) is the photon's frequency.[16] Much less commonly, the photon can be symbolized by hf, where its frequency is denoted by f.[17]

Physical properties

A photon is massless,[d] has no electric charge,[18][19] and is a stable particle. In vacuum, a photon has two possible polarization states.[20] The photon is the gauge boson for electromagnetism,[21]:29–30 and therefore all other quantum numbers of the photon (such as lepton number, baryon number, and flavour quantum numbers) are zero.[22] Also, the photon does not obey the Pauli exclusion principle, but instead obeys Bose–Einstein statistics.[23]:1221

Photons are emitted in many natural processes. For example, when a charge is accelerated it emits synchrotron radiation. During a molecular, atomic or nuclear transition to a lower energy level, photons of various energy will be emitted, ranging from radio waves to gamma rays. Photons can also be emitted when a particle and its corresponding antiparticle are annihilated (for example, electron–positron annihilation).[23]:572,1114,1172

Relativistic energy and momentum

[Figure: The cone shows possible values of the wave 4-vector of a photon. The "time" axis gives the angular frequency (rad·s⁻¹) and the "space" axis represents the angular wavenumber (rad·m⁻¹). Green and indigo represent left and right polarization.]

In empty space, the photon moves at c (the speed of light) and its energy and momentum are related by E = pc, where p is the magnitude of the momentum vector p. This derives from the following relativistic relation, with m = 0:[24]
$$E^{2}=p^{2}c^{2}+m^{2}c^{4}.$$
The energy and momentum of a photon depend only on its frequency (ν) or, inversely, on its wavelength (λ):
$$E=\hbar\omega=h\nu=\frac{hc}{\lambda},$$
$$\boldsymbol{p}=\hbar\boldsymbol{k},$$
where k is the wave vector (with wave number k = |k| = 2π/λ), ω = 2πν is the angular frequency, and ħ = h/2π is the reduced Planck constant.[25] Since p points in the direction of the photon's propagation, the magnitude of the momentum is
$$p=\hbar k=\frac{h\nu}{c}=\frac{h}{\lambda}.$$
The photon also carries a quantity called spin angular momentum that does not depend on its frequency.[26] Because photons always move at the speed of light, the spin is best expressed in terms of the component measured along its direction of motion, its helicity, which must be either +ħ or −ħ.
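As a quick numerical illustration of these relations, the following minimal sketch assumes standard values of h and c and an arbitrary example wavelength of 532 nm (the specific numbers are illustrative, not taken from the text):

```python
import math

# Minimal sketch: photon frequency, energy and momentum from its wavelength,
# using E = h*nu = h*c/lambda and p = h/lambda, then checking E = p*c.
h = 6.62607015e-34        # Planck constant, J*s
c = 299_792_458.0         # speed of light in vacuum, m/s
hbar = h / (2 * math.pi)  # reduced Planck constant, J*s

wavelength = 532e-9       # example: green light, 532 nm (illustrative value)
frequency = c / wavelength

energy = h * frequency            # E = h*nu = h*c/lambda
momentum = h / wavelength         # p = h/lambda = h*nu/c

print(f"nu = {frequency:.3e} Hz")
print(f"E  = {energy:.3e} J  ({energy / 1.602176634e-19:.3f} eV)")
print(f"p  = {momentum:.3e} kg*m/s")
print(f"E - p*c = {energy - momentum * c:.2e} J "
      "(zero, up to floating-point rounding, as expected for a massless particle)")
print(f"helicity (spin component along the direction of motion): +/- {hbar:.3e} J*s")
```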
The two possible helicities, called right-handed and left-handed, correspond to the two possible circular polarization states of the photon.[27]

To illustrate the significance of these formulae, the annihilation of a particle with its antiparticle in free space must result in the creation of at least two photons for the following reason. In the center of momentum frame, the colliding antiparticles have no net momentum, whereas a single photon always has momentum (since, as we have seen, it is determined by the photon's frequency or wavelength, which cannot be zero). Hence, conservation of momentum (or equivalently, translational invariance) requires that at least two photons are created, with zero net momentum. (However, if the system interacts with another particle or field, annihilation can produce a single photon: when a positron annihilates with a bound atomic electron, only one photon may be emitted, because the nuclear Coulomb field breaks translational symmetry.)[28]:64–65 The energy of the two photons, or, equivalently, their frequency, may be determined from conservation of four-momentum.

Seen another way, the photon can be considered as its own antiparticle (thus an "antiphoton" is simply a normal photon). The reverse process, pair production, is the dominant mechanism by which high-energy photons such as gamma rays lose energy while passing through matter.[29] That process is the reverse of the "annihilation to one photon" allowed in the electric field of an atomic nucleus.

The classical formulae for the energy and momentum of electromagnetic radiation can be re-expressed in terms of photon events. For example, the pressure of electromagnetic radiation on an object derives from the transfer of photon momentum per unit time and unit area to that object, since pressure is force per unit area and force is the change in momentum per unit time.[30]

Each photon carries two distinct and independent forms of angular momentum of light. The spin angular momentum of a particular photon is always either +ħ or −ħ. The orbital angular momentum of a particular photon can be any integer multiple of ħ, including zero.[31]

Experimental checks on photon mass

Current commonly accepted physical theories imply or assume the photon to be strictly massless. If the photon were not a strictly massless particle, it would not move at the exact speed of light, c, in vacuum; its speed would be lower and would depend on its frequency. Relativity would be unaffected by this: the so-called speed of light, c, would then not be the actual speed at which light moves, but a constant of nature which is the upper bound on speed that any object could theoretically attain in spacetime.[32] Thus, it would still be the speed of spacetime ripples (gravitational waves and gravitons), but it would not be the speed of photons.

If a photon did have non-zero mass, there would be other effects as well. Coulomb's law would be modified and the electromagnetic field would have an extra physical degree of freedom. These effects yield more sensitive experimental probes of the photon mass than the frequency dependence of the speed of light. If Coulomb's law were not exactly valid, an electric field could exist inside a hollow conductor subjected to an external electric field.
This provides a means for very-high-precision tests of Coulomb's law.[33] A null result of such an experiment has set a limit of m ≲ 10⁻¹⁴ eV/c².[34]

Sharper upper limits on the photon mass have been obtained in experiments designed to detect effects caused by the galactic vector potential. Although the galactic vector potential is very large because the galactic magnetic field exists on very great length scales, only the magnetic field would be observable if the photon is massless. In the case that the photon has mass, the mass term $$\tfrac{1}{2}m^{2}A_{\mu}A^{\mu}$$ would affect the galactic plasma. The fact that no such effects are seen implies an upper bound on the photon mass of m < 3×10⁻²⁷ eV/c².[35] The galactic vector potential can also be probed directly by measuring the torque exerted on a magnetized ring.[36] Such methods were used to obtain the sharper upper limit of 1.07×10⁻²⁷ eV/c² (the equivalent of 10⁻³⁶ daltons) given by the Particle Data Group.[37]

These sharp limits from the non-observation of the effects caused by the galactic vector potential have been shown to be model-dependent.[38] If the photon mass is generated via the Higgs mechanism, then the upper limit of m ≲ 10⁻¹⁴ eV/c² from the test of Coulomb's law is valid.

Historical development
Main article: Light

[Figure: Thomas Young's double-slit experiment in 1801 showed that light can act as a wave, helping to invalidate early particle theories of light.[23]:964]

In most theories up to the eighteenth century, light was pictured as being made up of particles. Since particle models cannot easily account for the refraction, diffraction and birefringence of light, wave theories of light were proposed by René Descartes (1637),[39] Robert Hooke (1665),[40] and Christiaan Huygens (1678);[41] however, particle models remained dominant, chiefly due to the influence of Isaac Newton.[42] In the early 19th century, Thomas Young and Augustin-Jean Fresnel clearly demonstrated the interference and diffraction of light, and by 1850 wave models were generally accepted.[43] James Clerk Maxwell's 1865 prediction[44] that light was an electromagnetic wave—which was confirmed experimentally in 1888 by Heinrich Hertz's detection of radio waves[45]—seemed to be the final blow to particle models of light.

In 1900, Maxwell's theoretical model of light as oscillating electric and magnetic fields seemed complete. However, several observations could not be explained by any wave model of electromagnetic radiation, leading to the idea that light-energy was packaged into quanta described by E = hν. Later experiments showed that these light-quanta also carry momentum and, thus, can be considered particles: the photon concept was born, leading to a deeper understanding of the electric and magnetic fields themselves.

The Maxwell wave theory, however, does not account for all properties of light. The Maxwell theory predicts that the energy of a light wave depends only on its intensity, not on its frequency; nevertheless, several independent types of experiments show that the energy imparted by light to atoms depends only on the light's frequency, not on its intensity. For example, some chemical reactions are provoked only by light of frequency higher than a certain threshold; light of frequency lower than the threshold, no matter how intense, does not initiate the reaction.
Similarly, electrons can be ejected from a metal plate by shining light of sufficiently high frequency on it (the photoelectric effect); the energy of the ejected electron is related only to the light's frequency, not to its intensity.[46][e]

At the same time, investigations of black-body radiation carried out over four decades (1860–1900) by various researchers[47] culminated in Max Planck's hypothesis[48][49] that the energy of any system that absorbs or emits electromagnetic radiation of frequency ν is an integer multiple of an energy quantum E = hν. As shown by Albert Einstein,[10][50] some form of energy quantization must be assumed to account for the thermal equilibrium observed between matter and electromagnetic radiation; for this explanation of the photoelectric effect, Einstein received the 1921 Nobel Prize in Physics.[51]

Since the Maxwell theory of light allows for all possible energies of electromagnetic radiation, most physicists assumed initially that the energy quantization resulted from some unknown constraint on the matter that absorbs or emits the radiation. In 1905, Einstein was the first to propose that energy quantization was a property of electromagnetic radiation itself.[10] Although he accepted the validity of Maxwell's theory, Einstein pointed out that many anomalous experiments could be explained if the energy of a Maxwellian light wave were localized into point-like quanta that move independently of one another, even if the wave itself is spread continuously over space.[10] In 1909[50] and 1916,[52] Einstein showed that, if Planck's law regarding black-body radiation is accepted, the energy quanta must also carry momentum p = h/λ, making them full-fledged particles. This photon momentum was observed experimentally by Arthur Compton,[53] for which he received the Nobel Prize in 1927. The pivotal question was then: how to unify Maxwell's wave theory of light with its experimentally observed particle nature? The answer to this question occupied Albert Einstein for the rest of his life,[54] and was solved in quantum electrodynamics and its successor, the Standard Model. (See § Quantization of the electromagnetic field and § As a gauge boson, below.)

[Figure: Up to 1923, most physicists were reluctant to accept that light itself was quantized. Instead, they tried to explain photon behavior by quantizing only matter, as in the Bohr model of the hydrogen atom (shown here). Even though these semiclassical models were only a first approximation, they were accurate for simple systems and they led to quantum mechanics.]

Einstein's 1905 predictions were verified experimentally in several ways in the first two decades of the 20th century, as recounted in Robert Millikan's Nobel lecture.[55] However, before Compton's experiment[53] showed that photons carried momentum proportional to their wave number (1922), most physicists were reluctant to believe that electromagnetic radiation itself might be particulate. (See, for example, the Nobel lectures of Wien,[47] Planck[49] and Millikan.[55]) Instead, there was a widespread belief that energy quantization resulted from some unknown constraint on the matter that absorbed or emitted radiation. Attitudes changed over time.
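The momentum transfer observed by Compton can be illustrated with a small numerical sketch. It assumes the standard Compton-shift formula Δλ = (h/mₑc)(1 − cos θ), which is quoted here as an assumption rather than derived in this article, and an illustrative incident X-ray wavelength:

```python
import math

# Minimal sketch of the Compton effect: a photon scattering off an electron at angle
# theta transfers momentum and has its wavelength shifted by
#   delta_lambda = (h / (m_e * c)) * (1 - cos(theta))   # standard Compton formula (assumed)
h = 6.62607015e-34      # Planck constant, J*s
c = 299_792_458.0       # speed of light, m/s
m_e = 9.1093837015e-31  # electron mass, kg

compton_wavelength = h / (m_e * c)   # ~2.43e-12 m

lambda_in = 7.1e-11     # illustrative incident X-ray wavelength, ~0.71 angstrom
for theta_deg in (45, 90, 180):
    shift = compton_wavelength * (1 - math.cos(math.radians(theta_deg)))
    lambda_out = lambda_in + shift
    # photon momentum p = h / lambda before and after scattering
    print(f"theta = {theta_deg:3d} deg: shift = {shift:.3e} m, "
          f"p_in = {h / lambda_in:.3e} kg*m/s, p_out = {h / lambda_out:.3e} kg*m/s")
```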
The change in attitude can be traced in part to experiments such as those revealing Compton scattering, where it was much more difficult not to ascribe quantization to light itself to explain the observed results.[56]

Even after Compton's experiment, Niels Bohr, Hendrik Kramers and John Slater made one last attempt to preserve the Maxwellian continuous electromagnetic field model of light, the so-called BKS theory.[57] An important feature of the BKS theory is how it treated the conservation of energy and the conservation of momentum. In the BKS theory, energy and momentum are only conserved on the average across many interactions between matter and radiation. However, refined Compton experiments showed that the conservation laws hold for individual interactions.[58] Accordingly, Bohr and his co-workers gave their model "as honorable a funeral as possible".[54] Nevertheless, the failures of the BKS model inspired Werner Heisenberg in his development of matrix mechanics.[59]

A few physicists persisted[60] in developing semiclassical models in which electromagnetic radiation is not quantized, but matter appears to obey the laws of quantum mechanics. Although the evidence from chemical and physical experiments for the existence of photons was overwhelming by the 1970s, this evidence could not be considered absolutely definitive, since it relied on the interaction of light with matter, and a sufficiently complete theory of matter could in principle account for the evidence. Nevertheless, all semiclassical theories were refuted definitively in the 1970s and 1980s by photon-correlation experiments.[f] Hence, Einstein's hypothesis that quantization is a property of light itself is considered to be proven.

Wave–particle duality and uncertainty principles

[Figure: Photons in a Mach–Zehnder interferometer exhibit wave-like interference and particle-like detection at single-photon detectors.]

Photons obey the laws of quantum mechanics, and so their behavior has both wave-like and particle-like aspects. When a photon is detected by a measuring instrument, it is registered as a single, particulate unit. However, the probability of detecting a photon is calculated by equations that describe waves. This combination of aspects is known as wave–particle duality. For example, the probability distribution for the location at which a photon might be detected displays clearly wave-like phenomena such as diffraction and interference. A single photon passing through a double-slit experiment lands on the screen with a probability distribution given by its interference pattern determined by Maxwell's equations.[61] However, experiments confirm that the photon is not a short pulse of electromagnetic radiation; it does not spread out as it propagates, nor does it divide when it encounters a beam splitter.[62] Rather, the photon seems to be a point-like particle, since it is absorbed or emitted as a whole by arbitrarily small systems, including systems much smaller than its wavelength, such as an atomic nucleus (≈10⁻¹⁵ m across) or even the point-like electron.

While many introductory texts treat photons using the mathematical techniques of non-relativistic quantum mechanics, this is in some ways an awkward oversimplification, as photons are by nature intrinsically relativistic.
Because photons have zero rest mass, no wave function defined for a photon can have all the properties familiar from wave functions in non-relativistic quantum mechanics.[g] To avoid these difficulties, physicists employ the second-quantized theory of photons described below, quantum electrodynamics, in which photons are quantized excitations of electromagnetic modes.[67]

Another difficulty is finding the proper analogue for the uncertainty principle, an idea frequently attributed to Heisenberg, who introduced the concept in analyzing a thought experiment involving an electron and a high-energy photon. However, Heisenberg did not give precise mathematical definitions of what the "uncertainty" in these measurements meant. The precise mathematical statement of the position–momentum uncertainty principle is due to Kennard, Pauli, and Weyl.[68][69] The uncertainty principle applies to situations where an experimenter has a choice of measuring either one of two "canonically conjugate" quantities, like the position and the momentum of a particle. According to the uncertainty principle, no matter how the particle is prepared, it is not possible to make a precise prediction for both of the two alternative measurements: if the outcome of the position measurement is made more certain, the outcome of the momentum measurement becomes less so, and vice versa.[70]

A coherent state minimizes the overall uncertainty as far as quantum mechanics allows.[67] Quantum optics makes use of coherent states for modes of the electromagnetic field. There is a tradeoff, reminiscent of the position–momentum uncertainty relation, between measurements of an electromagnetic wave's amplitude and its phase.[67] This is sometimes informally expressed in terms of the uncertainty in the number of photons present in the electromagnetic wave, ΔN, and the uncertainty in the phase of the wave, Δφ. However, this cannot be an uncertainty relation of the Kennard–Pauli–Weyl type, since, unlike position and momentum, the phase φ cannot be represented by a Hermitian operator.[71]

Bose–Einstein model of a photon gas
Main articles: Bose gas, Bose–Einstein statistics, Spin-statistics theorem, Gas in a box, and Photon gas

In 1924, Satyendra Nath Bose derived Planck's law of black-body radiation without using any electromagnetism, but rather by using a modification of coarse-grained counting of phase space.[72] Einstein showed that this modification is equivalent to assuming that photons are rigorously identical and that it implied a "mysterious non-local interaction",[73][74] now understood as the requirement for a symmetric quantum mechanical state. This work led to the concept of coherent states and the development of the laser. In the same papers, Einstein extended Bose's formalism to material particles (bosons) and predicted that they would condense into their lowest quantum state at low enough temperatures; this Bose–Einstein condensation was observed experimentally in 1995.[75] It was later used by Lene Hau to slow, and then completely stop, light in 1999[76] and 2001.[77]

The modern view on this is that photons are, by virtue of their integer spin, bosons (as opposed to fermions with half-integer spin).
By the spin-statistics theorem, all bosons obey Bose–Einstein statistics (whereas all fermions obey Fermi–Dirac statistics).[78]

Stimulated and spontaneous emission
Main articles: Stimulated emission and Laser

[Figure: Stimulated emission (in which photons "clone" themselves) was predicted by Einstein in his kinetic analysis and led to the development of the laser. Einstein's derivation inspired further developments in the quantum treatment of light, which led to the statistical interpretation of quantum mechanics.]

In 1916, Albert Einstein showed that Planck's radiation law could be derived from a semi-classical, statistical treatment of photons and atoms, which implies a link between the rates at which atoms emit and absorb photons. This condition follows from the assumption that the emission and absorption of radiation by the atoms are independent processes, and that thermal equilibrium is maintained through the radiation's interaction with the atoms.

Consider a cavity in thermal equilibrium, filled with electromagnetic radiation and with atoms that can emit and absorb that radiation. Thermal equilibrium requires that the energy density $$\rho(\nu)$$ of photons with frequency $$\nu$$ (which is proportional to their number density) is, on average, constant in time; hence, the rate at which photons of any particular frequency are emitted must equal the rate at which they are absorbed.[79]

Einstein began by postulating simple proportionality relations for the different reaction rates involved. In his model, the rate $$R_{ji}$$ for a system to absorb a photon of frequency $$\nu$$ and transition from a lower energy $$E_{j}$$ to a higher energy $$E_{i}$$ is proportional to the number $$N_{j}$$ of atoms with energy $$E_{j}$$ and to the energy density $$\rho(\nu)$$ of ambient photons of that frequency,
$$R_{ji}=N_{j}B_{ji}\rho(\nu),$$
where $$B_{ji}$$ is the rate constant for absorption. For the reverse process, there are two possibilities: spontaneous emission of a photon, or the emission of a photon initiated by the interaction of the atom with a passing photon and the return of the atom to the lower-energy state. Following Einstein's approach, the corresponding rate $$R_{ij}$$ for the emission of photons of frequency $$\nu$$ and transition from a higher energy $$E_{i}$$ to a lower energy $$E_{j}$$ is
$$R_{ij}=N_{i}A_{ij}+N_{i}B_{ij}\rho(\nu),$$
where $$A_{ij}$$ is the rate constant for emitting a photon spontaneously, and $$B_{ij}$$ is the rate constant for emissions in response to ambient photons (induced or stimulated emission). In thermodynamic equilibrium, the number of atoms in state $$i$$ and the number in state $$j$$ must, on average, be constant; hence, the rates $$R_{ji}$$ and $$R_{ij}$$ must be equal. Also, by arguments analogous to the derivation of Boltzmann statistics, the ratio of $$N_{i}$$ and $$N_{j}$$ is
$$\frac{N_{i}}{N_{j}}=\frac{g_{i}}{g_{j}}\exp\!\left(\frac{E_{j}-E_{i}}{kT}\right),$$
where $$g_{i}$$ and $$g_{j}$$ are the degeneracies of the states $$i$$ and $$j$$, respectively, $$E_{i}$$ and $$E_{j}$$ their energies, k the Boltzmann constant and T the system's temperature.
From this, it is readily derived that
$$g_{i}B_{ij}=g_{j}B_{ji}$$
and
$$A_{ij}=\frac{8\pi h\nu^{3}}{c^{3}}B_{ij}.$$
The $$A_{ij}$$ and $$B_{ij}$$ are collectively known as the Einstein coefficients.[80]

Einstein could not fully justify his rate equations, but claimed that it should be possible to calculate the coefficients $$A_{ij}$$, $$B_{ji}$$ and $$B_{ij}$$ once physicists had obtained "mechanics and electrodynamics modified to accommodate the quantum hypothesis".[81] Not long thereafter, in 1926, Paul Dirac derived the $$B_{ij}$$ rate constants by using a semiclassical approach,[82] and, in 1927, succeeded in deriving all the rate constants from first principles within the framework of quantum theory.[83][84] Dirac's work was the foundation of quantum electrodynamics, i.e., the quantization of the electromagnetic field itself. Dirac's approach is also called second quantization or quantum field theory;[85][86][87] earlier quantum mechanical treatments only treat material particles as quantum mechanical, not the electromagnetic field.

Einstein was troubled by the fact that his theory seemed incomplete, since it did not determine the direction of a spontaneously emitted photon. A probabilistic nature of light-particle motion was first considered by Newton in his treatment of birefringence and, more generally, of the splitting of light beams at interfaces into a transmitted beam and a reflected beam. Newton hypothesized that hidden variables in the light particle determined which of the two paths a single photon would take.[42] Similarly, Einstein hoped for a more complete theory that would leave nothing to chance, beginning his separation[54] from quantum mechanics. Ironically, Max Born's probabilistic interpretation of the wave function[88][89] was inspired by Einstein's later work searching for a more complete theory.[90]

Quantum field theory

Quantization of the electromagnetic field
Main article: Quantum field theory

[Figure: Different electromagnetic modes (such as those depicted here) can be treated as independent simple harmonic oscillators. A photon corresponds to a unit of energy E = hν in its electromagnetic mode.]

In 1910, Peter Debye derived Planck's law of black-body radiation from a relatively simple assumption.[91] He decomposed the electromagnetic field in a cavity into its Fourier modes, and assumed that the energy in any mode was an integer multiple of hν, where ν is the frequency of the electromagnetic mode. Planck's law of black-body radiation follows immediately as a geometric sum. However, Debye's approach failed to give the correct formula for the energy fluctuations of black-body radiation, which were derived by Einstein in 1909.[50]

In 1925, Born, Heisenberg and Jordan reinterpreted Debye's concept in a key way.[92] As may be shown classically, the Fourier modes of the electromagnetic field—a complete set of electromagnetic plane waves indexed by their wave vector k and polarization state—are equivalent to a set of uncoupled simple harmonic oscillators. Treated quantum mechanically, the energy levels of such oscillators are known to be $$E=nh\nu$$, where $$\nu$$ is the oscillator frequency. The key new step was to identify an electromagnetic mode with energy $$E=nh\nu$$ as a state with n photons, each of energy hν. This approach gives the correct energy fluctuation formula.

[Figure: Feynman diagram of two electrons interacting by exchange of a virtual photon.]
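A minimal numerical sketch of the two routes to Planck's law described above, assuming equal degeneracies and purely illustrative values of frequency and temperature: the first route solves Einstein's rate balance using A/B = 8πhν³/c³ and the Boltzmann ratio, and the second averages the mode energies E = nhν with Boltzmann weights (the geometric sum) and multiplies by the standard mode density 8πν²/c³ (quoted here as an assumption, not derived in the text):

```python
import math

h = 6.62607015e-34       # Planck constant, J*s
c = 299_792_458.0        # speed of light, m/s
k = 1.380649e-23         # Boltzmann constant, J/K

def rho_from_rate_balance(nu, T):
    # Solve N_j*B*rho = N_i*(A + B*rho) with N_i/N_j = exp(-h*nu/(k*T)) and g_i = g_j:
    # rho = (A/B) / (exp(h*nu/(k*T)) - 1), with A/B = 8*pi*h*nu^3/c^3.
    return (8 * math.pi * h * nu**3 / c**3) / math.expm1(h * nu / (k * T))

def rho_from_mode_counting(nu, T, n_max=200):
    # Mean energy of one mode: <E> = sum_n n*h*nu*w_n / sum_n w_n with w_n = exp(-n*h*nu/(k*T)),
    # then multiply by the mode density 8*pi*nu^2/c^3.
    weights = [math.exp(-n * h * nu / (k * T)) for n in range(n_max)]
    mean_E = sum(n * h * nu * w for n, w in enumerate(weights)) / sum(weights)
    return (8 * math.pi * nu**2 / c**3) * mean_E

T = 5800.0               # illustrative temperature, K
nu = 5.0e14              # illustrative frequency, Hz
print(rho_from_rate_balance(nu, T))   # spectral energy density, J*s/m^3
print(rho_from_mode_counting(nu, T))  # agrees with the rate-balance result
```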
Dirac took this one step further.[83][84] He treated the interaction between a charge and an electromagnetic field as a small perturbation that induces transitions in the photon states, changing the numbers of photons in the modes, while conserving energy and momentum overall. Dirac was able to derive Einstein's $$A_{ij}$$ and $$B_{ij}$$ coefficients from first principles, and showed that the Bose–Einstein statistics of photons is a natural consequence of quantizing the electromagnetic field correctly (Bose's reasoning went in the opposite direction; he derived Planck's law of black-body radiation by assuming Bose–Einstein statistics). In Dirac's time, it was not yet known that all bosons, including photons, must obey Bose–Einstein statistics.

Dirac's second-order perturbation theory can involve virtual photons, transient intermediate states of the electromagnetic field; the static electric and magnetic interactions are mediated by such virtual photons. In such quantum field theories, the probability amplitude of observable events is calculated by summing over all possible intermediate steps, even ones that are unphysical; hence, virtual photons are not constrained to satisfy E = pc, and may have extra polarization states; depending on the gauge used, virtual photons may have three or four polarization states, instead of the two states of real photons. Although these transient virtual photons can never be observed, they contribute measurably to the probabilities of observable events. Indeed, such second-order and higher-order perturbation calculations can give apparently infinite contributions to the sum. Such unphysical results are corrected for using the technique of renormalization.[93]

Other virtual particles may contribute to the summation as well; for example, two photons may interact indirectly through virtual electron–positron pairs.[94] Such photon–photon scattering (see two-photon physics), as well as electron–photon scattering, is intended to be one of the modes of operation of the planned particle accelerator, the International Linear Collider.[95]

In modern physics notation, the quantum state of the electromagnetic field is written as a Fock state, a tensor product of the states for each electromagnetic mode
$$|n_{k_0}\rangle\otimes|n_{k_1}\rangle\otimes\dots\otimes|n_{k_n}\rangle\dots$$
where $$|n_{k_i}\rangle$$ represents the state in which $$n_{k_i}$$ photons are in the mode $$k_{i}$$. In this notation, the creation of a new photon in mode $$k_{i}$$ (e.g., emitted from an atomic transition) is written as $$|n_{k_i}\rangle\rightarrow|n_{k_i}+1\rangle$$. This notation merely expresses the concept of Born, Heisenberg and Jordan described above, and does not add any physics.

As a gauge boson
Main article: Gauge theory

The electromagnetic field can be understood as a gauge field, i.e., as a field that results from requiring that a gauge symmetry holds independently at every position in spacetime.[96] For the electromagnetic field, this gauge symmetry is the Abelian U(1) symmetry of complex numbers of absolute value 1, which reflects the ability to vary the phase of a complex field without affecting observables or real-valued functions made from it, such as the energy or the Lagrangian. The quanta of an Abelian gauge field must be massless, uncharged bosons, as long as the symmetry is not broken; hence, the photon is predicted to be massless, and to have zero electric charge and integer spin.
The particular form of the electromagnetic interaction specifies that the photon must have spin ±1; thus, its helicity must be $$\pm\hbar$$. These two spin components correspond to the classical concepts of right-handed and left-handed circularly polarized light. However, the transient virtual photons of quantum electrodynamics may also adopt unphysical polarization states.[96]

In the prevailing Standard Model of physics, the photon is one of four gauge bosons in the electroweak interaction; the other three are denoted W⁺, W⁻ and Z⁰ and are responsible for the weak interaction. Unlike the photon, these gauge bosons have mass, owing to a mechanism that breaks their SU(2) gauge symmetry. The unification of the photon with the W and Z gauge bosons in the electroweak interaction was accomplished by Sheldon Glashow, Abdus Salam and Steven Weinberg, for which they were awarded the 1979 Nobel Prize in Physics.[97][98][99] Physicists continue to hypothesize grand unified theories that connect these four gauge bosons with the eight gluon gauge bosons of quantum chromodynamics; however, key predictions of these theories, such as proton decay, have not been observed experimentally.[100]

Hadronic properties
Main article: Photon structure function

Measurements of the interaction between energetic photons and hadrons show that the interaction is much more intense than expected from the interaction of photons with the hadron's electric charge alone. Furthermore, the interaction of energetic photons with protons is similar to the interaction of photons with neutrons[101] in spite of the fact that the electric charge structures of protons and neutrons are substantially different. A theory called Vector Meson Dominance (VMD) was developed to explain this effect. According to VMD, the photon is a superposition of the pure electromagnetic photon, which interacts only with electric charges, and vector mesons.[102] However, if experimentally probed at very short distances, the intrinsic structure of the photon is recognized as a flux of quark and gluon components, quasi-free according to asymptotic freedom in QCD and described by the photon structure function.[103][104] A comprehensive comparison of data with theoretical predictions was presented in a review in 2000.[105]

Contributions to the mass of a system

The energy of a system that emits a photon is decreased by the energy E of the photon as measured in the rest frame of the emitting system, which may result in a reduction in mass by the amount $$E/c^{2}$$. Similarly, the mass of a system that absorbs a photon is increased by a corresponding amount. As an application, the energy balance of nuclear reactions involving photons is commonly written in terms of the masses of the nuclei involved, and terms of the form $$E/c^{2}$$ for the gamma photons (and for other relevant energies, such as the recoil energy of nuclei).[106]

This concept is applied in key predictions of quantum electrodynamics (QED, see above). In that theory, the mass of electrons (or, more generally, leptons) is modified by including the mass contributions of virtual photons, in a technique known as renormalization. Such "radiative corrections" contribute to a number of predictions of QED, such as the magnetic dipole moment of leptons, the Lamb shift, and the hyperfine structure of bound lepton pairs, such as muonium and positronium.[107]

Since photons contribute to the stress–energy tensor, they exert a gravitational attraction on other objects, according to the theory of general relativity.
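As a small worked example of the E/c² bookkeeping described above (the 1 MeV photon energy is illustrative only, and recoil energy is ignored for simplicity):

```python
# Minimal sketch: mass lost by a nucleus that emits a gamma photon of energy E,
# using delta_m = E / c^2 (recoil neglected; the 1 MeV value is illustrative).
c = 299_792_458.0            # speed of light, m/s
eV = 1.602176634e-19         # joules per electronvolt
E_gamma = 1.0e6 * eV         # a 1 MeV gamma photon, in joules

delta_m = E_gamma / c**2     # mass reduction of the emitting system, kg
dalton = 1.66053906660e-27   # kg per dalton

print(f"delta_m = {delta_m:.3e} kg = {delta_m / dalton:.3e} Da")
```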
Photons are, conversely, themselves affected by gravity: their normally straight trajectories may be bent by warped spacetime, as in gravitational lensing, and their frequencies may be lowered by moving to a higher gravitational potential, as in the Pound–Rebka experiment. However, these effects are not specific to photons; exactly the same effects would be predicted for classical electromagnetic waves.[108]

In matter

Light that travels through transparent matter does so at a lower speed than c, the speed of light in vacuum. The factor by which the speed is decreased is called the refractive index of the material. In a classical wave picture, the slowing can be explained by the light inducing electric polarization in the matter, the polarized matter radiating new light, and that new light interfering with the original light wave to form a delayed wave. In a particle picture, the slowing can instead be described as a blending of the photon with quantum excitations of the matter to produce quasi-particles known as polaritons; a polariton has a nonzero effective mass, which means that it cannot travel at c. Light of different frequencies may travel through matter at different speeds; this is called dispersion (not to be confused with scattering). In some cases, it can result in extremely slow speeds of light in matter. The effects of photon interactions with other quasi-particles may be observed directly in Raman scattering and Brillouin scattering.[109]

Photons can be scattered by matter. For example, photons engage in so many collisions on the way from the core of the Sun that radiant energy can take about a million years to reach the surface;[110] however, once in open space, a photon takes only 8.3 minutes to reach Earth.[111]

Photons can also be absorbed by nuclei, atoms or molecules, provoking transitions between their energy levels. A classic example is the molecular transition of retinal (C₂₀H₂₈O), which is responsible for vision, as discovered in 1958 by Nobel laureate biochemist George Wald and co-workers. The absorption provokes a cis–trans isomerization that, in combination with other such transitions, is transduced into nerve impulses. The absorption of photons can even break chemical bonds, as in the photodissociation of chlorine; this is the subject of photochemistry.[112][113]

Technological applications

Photons have many applications in technology. These examples are chosen to illustrate applications of photons per se, rather than general optical devices such as lenses that could operate under a classical theory of light. The laser is an extremely important application and is discussed above under stimulated emission.

Individual photons can be detected by several methods. The classic photomultiplier tube exploits the photoelectric effect: a photon of sufficient energy strikes a metal plate and knocks free an electron, initiating an ever-amplifying avalanche of electrons. Semiconductor charge-coupled device chips use a similar effect: an incident photon generates a charge on a microscopic capacitor that can be detected.
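To give a sense of the photon rates such detectors face, here is a minimal sketch using E = hc/λ; the 1 mW power and 650 nm wavelength are illustrative values, not taken from the text:

```python
# Minimal sketch: photons emitted per second by a weak visible-light source,
# using the energy per photon E = h*c/lambda (power and wavelength are illustrative).
h = 6.62607015e-34       # Planck constant, J*s
c = 299_792_458.0        # speed of light, m/s

power = 1.0e-3           # 1 mW source
wavelength = 650e-9      # red light, 650 nm

energy_per_photon = h * c / wavelength          # ~3.1e-19 J
photons_per_second = power / energy_per_photon  # ~3e15 photons per second

print(f"{energy_per_photon:.2e} J per photon -> {photons_per_second:.2e} photons/s")
# Even a milliwatt beam carries on the order of 10^15 photons per second, which is
# why detecting individual photons requires extremely low light levels.
```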
Other detectors, such as Geiger counters, use the ability of photons to ionize gas molecules contained in the device, causing a detectable change in the conductivity of the gas.[114]

Planck's energy formula E = hν is often used by engineers and chemists in design, both to compute the change in energy resulting from a photon absorption and to determine the frequency of the light emitted from a given photon emission. For example, the emission spectrum of a gas-discharge lamp can be altered by filling it with (mixtures of) gases with different electronic energy level configurations.[115]

Under some conditions, an energy transition can be excited by two photons that individually would be insufficient. This allows for higher-resolution microscopy, because the sample absorbs energy only in the region where two beams of different colors overlap significantly, which can be made much smaller than the excitation volume of a single beam (see two-photon excitation microscopy). Moreover, these photons cause less damage to the sample, since they are of lower energy.[116]

In some cases, two energy transitions can be coupled so that, as one system absorbs a photon, another nearby system "steals" its energy and re-emits a photon of a different frequency. This is the basis of fluorescence resonance energy transfer, a technique that is used in molecular biology to study the interaction of suitable proteins.[117]

Several different kinds of hardware random number generators involve the detection of single photons. In one example, for each bit in the random sequence that is to be produced, a photon is sent to a beam-splitter. In such a situation, there are two possible outcomes of equal probability. The actual outcome is used to determine whether the next bit in the sequence is "0" or "1".[118][119]

Quantum optics and computation

Much research has been devoted to applications of photons in the field of quantum optics. Photons seem well-suited to be elements of an extremely fast quantum computer, and the quantum entanglement of photons is a focus of research. Nonlinear optical processes are another active research area, with topics such as two-photon absorption, self-phase modulation, modulational instability and optical parametric oscillators. However, such processes generally do not require the assumption of photons per se; they may often be modeled by treating atoms as nonlinear oscillators. The nonlinear process of spontaneous parametric down conversion is often used to produce single-photon states. Finally, photons are essential in some aspects of optical communication, especially for quantum cryptography.[h]

Two-photon physics studies interactions between photons, which are rare. In 2018, MIT researchers announced the discovery of bound photon triplets, which may involve polaritons.[120][121]

See also

Advanced Photon Source at Argonne National Laboratory
Ballistic photon
Dirac equation
Doppler effect
High energy X-ray imaging technology
Luminiferous aether
Medipix
Phonon
Photography
Photon counting
Photon energy
Photon epoch
Photon polarization
Photonic molecule
Photonics
Single-photon source
Spin angular momentum of light
Static forces and virtual-particle exchange

Notes

[a] The photon's invariant mass (also called "rest mass" for massive particles) is believed to be exactly zero. This is the notion of particle mass generally used by modern physicists. The photon does have a nonzero relativistic mass, depending on its energy, but this varies according to the frame of reference.
[b] Although the 1967 Elsevier translation of Planck's Nobel Lecture interprets Planck's Lichtquant as "photon", the more literal 1922 translation by Hans Thacher Clarke and Ludwik Silberstein (Planck, Max (1922). The Origin and Development of the Quantum Theory. Clarendon Press.) uses "light-quantum". No evidence is known that Planck himself used the term "photon" by 1926.

[c] Isaac Asimov credits Arthur Compton with defining quanta of energy as photons in 1923.[12]

[d] The mass of the photon is believed to be exactly zero. Some sources also refer to the relativistic mass, which is just the energy scaled to units of mass. For a photon with wavelength λ or energy E, this is h/(λc) or E/c². This usage for the term "mass" is no longer common in scientific literature. Further info: What is the mass of a photon?

[e] The phrase "no matter how intense" refers to intensities below approximately 10¹³ W/cm², at which point perturbation theory begins to break down. In contrast, in the intense regime, which for visible light is above approximately 10¹⁴ W/cm², the classical wave description correctly predicts the energy acquired by electrons, called ponderomotive energy. (See also: Boreham, Bruce W.; Hora, Heinrich; Bolton, Paul R. (1996). "Photon density and the correspondence principle of electromagnetic interaction". AIP Conference Proceedings. 369: 1234–1243. Bibcode:1996AIPC..369.1234B. doi:10.1063/1.50410.) By comparison, sunlight is only about 0.1 W/cm².

[f] These experiments produce results that cannot be explained by any classical theory of light, since they involve anticorrelations that result from the quantum measurement process. In 1974, the first such experiment was carried out by Clauser, who reported a violation of a classical Cauchy–Schwarz inequality. In 1977, Kimble et al. demonstrated an analogous anti-bunching effect of photons interacting with a beam splitter; this approach was simplified and sources of error eliminated in the photon-anticorrelation experiment of Grangier et al. (1986). This work is reviewed and simplified further in Thorn et al. (2004). (These references are listed below.)

[g] The issue was first formulated by Theodore Duddell Newton and Eugene Wigner.[63][64][65] The challenges arise from the fundamental nature of the Lorentz group, which describes the symmetries of spacetime in special relativity. Unlike the generators of Galilean transformations, the generators of Lorentz boosts do not commute, and so simultaneously assigning low uncertainties to all coordinates of a relativistic particle's position becomes problematic.[66]

[h] Introductory-level material on the various sub-fields of quantum optics can be found in Fox, M. (2006). Quantum Optics: An Introduction. Oxford University Press. ISBN 978-0-19-856673-1.

References

Amsler, C.; et al. (Particle Data Group) (2008). "Review of Particle Physics: Gauge and Higgs bosons" (PDF). Physics Letters B. 667 (1): 1. Bibcode:2008PhLB..667....1A. doi:10.1016/j.physletb.2008.07.018. Joos, George (1951). Theoretical Physics. London and Glasgow: Blackie and Son Limited. p. 679. "December 18, 1926: Gilbert Lewis coins "photon" in letter to Nature". www.aps.org. Retrieved 2019-03-09. "Gilbert N. Lewis". Atomic Heritage Foundation. Retrieved 2019-03-09. Kragh, Helge (2014). "Photon: New light on an old name". arXiv:1401.0293 [physics.hist-ph]. Compton, Arthur H. (1965) [12 Dec 1927]. "X-rays as a branch of optics" (PDF). From Nobel Lectures, Physics 1922–1941. Amsterdam: Elsevier Publishing Company. Kimble, H.J.; Dagenais, M.; Mandel, L.
(1977). "Photon Anti-bunching in Resonance Fluorescence" (PDF). Physical Review Letters. 39 (11): 691–695. Bibcode:1977PhRvL..39..691K. doi:10.1103/PhysRevLett.39.691. Grangier, P.; Roger, G.; Aspect, A. (1986). "Experimental Evidence for a Photon Anticorrelation Effect on a Beam Splitter: A New Light on Single-Photon Interferences". Europhysics Letters. 1 (4): 173–179. Bibcode:1986EL......1..173G. CiteSeerX 10.1.1.178.4356. doi:10.1209/0295-5075/1/4/004. Kragh, Helge (2000-12-01). "Max Planck: the reluctant revolutionary". Physics World. 13 (12): 31. doi:10.1088/2058-7058/13/12/34. Einstein, A. (1905). "Über einen die Erzeugung und Verwandlung des Lichtes betreffenden heuristischen Gesichtspunkt" (PDF). Annalen der Physik (in German). 17 (6): 132–148. Bibcode:1905AnP...322..132E. doi:10.1002/andp.19053220607.. An English translation is available from Wikisource. "Discordances entre l'expérience et la théorie électromagnétique du rayonnement." In Électrons et Photons. Rapports et Discussions de Cinquième Conseil de Physique, edited by Institut International de Physique Solvay. Paris: Gauthier-Villars, pp. 55–85. Asimov, Isaac (1983). The Neutrino: Ghost Particle of the Atom. Garden City, NY: Avon Books. ISBN 978-0-380-00483-6. and Asimov, Isaac (1971). The Universe: From Flat Earth to Quasar. New York: Walker. ISBN 978-0-8027-0316-3. LCCN 66022515. Villard, P. (1900). "Sur la réflexion et la réfraction des rayons cathodiques et des rayons déviables du radium". Comptes Rendus des Séances de l'Académie des Sciences (in French). 130: 1010–1012. Villard, P. (1900). "Sur le rayonnement du radium". Comptes Rendus des Séances de l'Académie des Sciences (in French). 130: 1178–1179. Rutherford, E.; Andrade, E.N.C. (1914). "The Wavelength of the Soft Gamma Rays from Radium B". Philosophical Magazine. 27 (161): 854–868. doi:10.1080/14786440508635156. Andrew Liddle (2015). An Introduction to Modern Cosmology. John Wiley & Sons. p. 16. ISBN 978-1-118-69025-3. SantoPietro, David. "Photon Energy". Khan Academy. Retrieved 2020-03-15. Frisch, David H.; Thorndike, Alan M. (1964). Elementary Particles. Princeton, NJ: David Van Nostrand. p. 22. Kobychev, V.V.; Popov, S.B. (2005). "Constraints on the photon charge from observations of extragalactic sources". Astronomy Letters. 31 (3): 147–151.arXiv:hep-ph/0411398. Bibcode:2005AstL...31..147K. doi:10.1134/1.1883345. Matthew D. Schwartz (2014). Quantum Field Theory and the Standard Model. Cambridge University Press. p. 66. ISBN 978-1-107-03473-0. Role as gauge boson and polarization section 5.1 in Aitchison, I.J.R.; Hey, A.J.G. (1993). Gauge Theories in Particle Physics. IOP Publishing. ISBN 978-0-85274-328-7. See p.31 in Amsler, C.; et al. (2008). "Review of Particle Physics" (PDF). Physics Letters B. 667 (1–5): 1–1340. Bibcode:2008PhLB..667....1A. doi:10.1016/j.physletb.2008.07.018. PMID 10020536. Halliday, David; Resnick, Robert; Walker, Jerl (2005), Fundamental of Physics (7th ed.), John Wiley and Sons, Inc., ISBN 978-0-471-23231-5 See section 1.6 in Alonso & Finn 1968, Section 1.6 Davison E. Soper, Electromagnetic radiation is made of photons, Institute of Theoretical Science, University of Oregon This property was experimentally verified by Raman and Bhagavantam in 1931: Raman, C.V.; Bhagavantam, S. (1931). "Experimental proof of the spin of the photon" (PDF). Indian Journal of Physics. 6 (3244): 353. Bibcode:1932Natur.129...22R. doi:10.1038/129022a0. hdl:10821/664. Archived from the original (PDF) on 2016-06-03. Retrieved 2008-12-28. 
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.965703547000885, "perplexity": 1873.007567594215}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039604430.92/warc/CC-MAIN-20210422191215-20210422221215-00356.warc.gz"}
http://math.stackexchange.com/questions/819887/uniqueness-of-the-comparison-functor
# Uniqueness of the Comparison Functor

Suppose $F:C\rightarrow D$ and that $F\dashv U$ is an adjunction and $C^{T}$ the Eilenberg–Moore category for the monad $T=U◦F$, with the corresponding functors $F^{T}:C\rightarrow C^{T}$ and $U^{T}:C^{T}\rightarrow C$. I have been able to prove that there is a comparison functor $Φ : D →C^{T}$ which satisfies (1) $U^{T}◦Φ= U$ and (2) $Φ◦F = F^{T}$. I am having trouble with uniqueness. Here is what I have so far:

Suppose $Φ'$ satisfies (1) and (2). Let $U\in D$. Then using (1) with $Φ'(D)=(C',\alpha )$, it follows that $U◦Φ'(D)=U(C',\alpha )=C'$ whereas $Φ(D)=(UD,U\varepsilon_{D})$ and so $U◦Φ(D)=U(UD,\varepsilon_{D})=UD$, which says $C'=UD$. Now I need to show that $\alpha=U\varepsilon_{D}$. This is where I'm stuck.

edit: Using the hint below, the fact that the adjunctions have the same unit implies, after using (1) and (2), that \begin{matrix} \operatorname{Hom}(FC, D) & \xrightarrow{{\phi}} & \operatorname{Hom}(C, UD) \\ \left\downarrow\vphantom{\int}\right. & & \left\downarrow\vphantom{\int}\right.\\ \operatorname{Hom}(F^{T}C, Φ'(D))& \xrightarrow{\phi^{T}} & \operatorname{Hom}(C, U^{T}Φ'(D)) \end{matrix} commutes. ($\phi$ and $\phi^{T}$ are the isomorphisms giving the adjunctions; the left downward arrow is the map $f\longmapsto Φ'(f)$ and the right downward arrow is the identity on $\operatorname{Hom}(C, UD)$.) Then, setting $C=UD$ and following $id_{UD}$, you get that $Φ'\epsilon=\epsilon^T Φ'$, which is the hint. The rest follows easily.

- You can find a proof in Mac Lane's CWM. – Martin Brandenburg Jun 4 '14 at 8:34
Right. I know these results are well-known, but before looking up the worked-out proof, I want to try it for myself, with a hint or two. – Chilango Jun 4 '14 at 13:24
what does it mean $U\in D$ ? $U$ is a functor and $D$ is a category – magma Jun 4 '14 at 14:25

Here's a version of the proof that bypasses the $\Phi\epsilon=\epsilon^T\Phi$ lemma and proves uniqueness directly.

• $U^T\Phi=U$ tells us that $\Phi d$ is a $T$-algebra with structure map $\gamma d: TUd\to Ud$
• $\Phi$ sends $D$-arrows to $T$-homomorphisms, so $\gamma$ is a natural transformation $TU\to U$
• $\Phi F=F^T$ tells us that $\gamma F=\mu =U\epsilon F$

Since $\gamma$ is natural we have $U\epsilon\circ\mu U=U\epsilon\circ\gamma FU=\gamma\circ TU\epsilon$. Precomposing with $T\eta U$ gives us $U\epsilon\circ\mu U\circ T\eta U=\gamma\circ TU\epsilon\circ T\eta U$. This rearranges as $U\epsilon\circ(\mu\circ T\eta)U=\gamma\circ T(U\epsilon\circ\eta U)$, which simplifies to $U\epsilon=\gamma$.

- Just a hint.... First you should realize that $$Φd=(Ud, h)$$ that is, a T-algebra with underlying object $Ud$, for any d in $D$. In order to obtain the structure h, observe that the 2 adjunctions have the same unit $\eta$, and deduce that $Φ\epsilon=\epsilon^T Φ$. Then deduce that $\epsilon^TΦd=\epsilon^T(Ud, h)=h$ and so $Φ\epsilon=\epsilon^T Φ$ implies $U\epsilon_d=h$, so h is determined and $Φ$ is unique.

- Once you pointed out that I needed to prove that $Φ\epsilon=\epsilon^T Φ$, I was able to do it. It was a diagram chase, following identities and using (1) and (2) above. This was very helpful, thanks. Where I live I literally have no one to talk math with, thanks for taking the time. – Chilango Jun 4 '14 at 23:41
@Chilango you are welcome, my pleasure :-) – magma Jun 5 '14 at 5:44
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9866316318511963, "perplexity": 292.57765605899436}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461860110372.12/warc/CC-MAIN-20160428161510-00072-ip-10-239-7-51.ec2.internal.warc.gz"}
https://collaborate.princeton.edu/en/publications/finite-sample-risk-bounds-for-maximum-likelihood-estimation-with-
# Finite-Sample Risk Bounds for Maximum Likelihood Estimation with Arbitrary Penalties

W. D. Brinda, Jason M. Klusowski

Research output: Contribution to journal › Article › peer-review

## Abstract

The minimum description length two-part coding index of resolvability provides a finite-sample upper bound on the statistical risk of penalized likelihood estimators over countable models. However, the bound does not apply to unpenalized maximum likelihood estimation or procedures with exceedingly small penalties. In this paper, we point out a more general inequality that holds for arbitrary penalties. In addition, this approach makes it possible to derive exact risk bounds of order 1/n for iid parametric models, which improves on the order (log n)/n resolvability bounds. We conclude by discussing implications for adaptive estimation.

Original language: English (US)
Pages: 2727-2741
Number of pages: 15
Journal: IEEE Transactions on Information Theory
Volume: 64
Issue: 4
DOI: https://doi.org/10.1109/TIT.2017.2789214
State: Published - Apr 2018

## All Science Journal Classification (ASJC) codes

• Information Systems
• Computer Science Applications
• Library and Information Sciences

## Keywords

• Penalized likelihood estimation
• codelength
• minimum description length
• redundancy
• statistical risk
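As background (an added gloss, not an equation taken from this paper): the two-part MDL "index of resolvability" bound the abstract refers to is usually stated, in the spirit of Barron and Cover's analysis and up to constants, as

$\mathbb{E}\, d\big(p_{\theta^{\ast}}, p_{\hat{\theta}_n}\big) \;\le\; R_n(\theta^{\ast}) = \min_{\theta \in \Theta_n}\left\{ D\big(p_{\theta^{\ast}} \,\|\, p_{\theta}\big) + \frac{L_n(\theta)}{n} \right\},$

where $\hat{\theta}_n$ is the penalized likelihood estimator over a countable model class $\Theta_n$, $L_n(\theta)$ is a codelength-style penalty satisfying a Kraft-type summability condition, $D$ is the Kullback–Leibler divergence, and $d$ is a suitable divergence (for example squared Hellinger distance) measuring risk. The paper's contribution, per the abstract, is a more general inequality of this type that remains valid even for arbitrary or very small penalties.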
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8934385776519775, "perplexity": 3200.325307886633}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320305288.57/warc/CC-MAIN-20220127193303-20220127223303-00039.warc.gz"}
https://richbeveridge.wordpress.com/2013/11/08/infinite-series-approximations-for-trigonometric-functions-from-14th-century-india/
## Infinite Series Approximations for Trigonometric functions from 14th century India

The infinite series approximations that have been used for many years to calculate the values of trigonometric functions have traditionally been attributed to Brook Taylor and Colin Maclaurin, European mathematicians of the early 18th century who were building on the work of Newton, Leibniz, James Gregory and Isaac Barrow among others.

However, I recently discovered that they were not the first to use these techniques. As author George Gheverghese Joseph points out in the previous link, the work of Newton and Leibniz was tremendous; however, the Indian development of infinite series approximations for trigonometric functions was equally amazing and important. In addition, it came nearly 300 years before the European development of these techniques.

Madhava of Sangamagrama is generally recognized as the founder of the Kerala school of mathematics and astronomy in what is today the state of Kerala in southwest India. The work of the mathematicians of the Kerala school was based on a desire for accurate trigonometric values for use in navigation.

Madhava lived in the late 1300s and early 1400s and most of his original work has been lost. However, he is mentioned frequently in the surviving work of later mathematicians from the Kerala school. Madhava is credited with power series calculations for the sine, cosine, tangent and arctangent, and like Leibniz, he used the arctangent power series to approximate the value of $\pi$ to 13 decimal places.

Victor Katz' A History of Mathematics (Brief Edition) has a wonderful and detailed derivation of the Kerala school trigonometric series, with diagrams showing how they used the relationships between the angles, radii, chords and arcs in a circle to arrive at these amazing calculations. Katz also published this derivation in a paper for the MAA (Mathematics Magazine, vol. 68, n. 3, June 1995, pp. 163-174). The derivation for the infinite series begins on page 169 (pg. 7 in the pdf).

I've just begun to unpack this derivation and will post a step-by-step explanation of Katz' work in the "near" future.
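As a quick illustration of how fast this converges (an added aside, not from the original post): evaluating the arctangent series at $x = 1/\sqrt{3}$ gives Madhava's formula $\pi = \sqrt{12}\sum_{k\geq 0}\frac{(-1/3)^{k}}{2k+1}$, which reaches roughly 13 decimal places after a few dozen terms. A short Python sketch:

```python
from math import sqrt, pi

def madhava_pi(n_terms):
    """Approximate pi using Madhava's arctangent series at x = 1/sqrt(3):
    pi = sqrt(12) * sum_{k>=0} (-1/3)^k / (2k + 1)."""
    total = 0.0
    term = 1.0                      # holds (-1/3)^k, starting at k = 0
    for k in range(n_terms):
        total += term / (2 * k + 1)
        term /= -3.0
    return sqrt(12.0) * total

for n in (5, 10, 20, 30):
    approx = madhava_pi(n)
    print(n, approx, abs(approx - pi))   # error shrinks by about a factor of 3 per term
```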
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 1, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9067092537879944, "perplexity": 775.8239783951418}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084891539.71/warc/CC-MAIN-20180122193259-20180122213259-00244.warc.gz"}
https://www.snapxam.com/problems/50614810/integral-of-1-5-1x-0-5-dx-from-5-0-to-5
Step-by-step Solution

Step-by-step explanation

Problem to solve:

$\int_{-5}^{5}\frac{1}{\sqrt{5-x}}\,dx$

Learn how to solve definite integrals problems step by step online. Integrate 1/((5-x)^0.5) from -5 to 5. Because the integrand is unbounded at the upper limit, replace that limit by a finite value:

$\lim_{c\to5}\:\int_{-5}^{c}\frac{1}{\sqrt{5-x}}\,dx$

We can solve the integral $\int_{-5}^{c}\frac{1}{\sqrt{5-x}}\,dx$ by applying the integration by substitution method (also called u-substitution). First, we must identify a section within the integral to set equal to a new variable (let's call it $u$), which when substituted makes the integral easier. We see that $5-x$ is a good candidate for substitution. Let's define a variable $u$ and assign it to the chosen part. Now, in order to rewrite $dx$ in terms of $du$, we need to find the derivative of $u$; we can do that by differentiating the equation above. Isolate $dx$ in the previous equation.

Final answer: $\int_{-5}^{5}\frac{1}{\sqrt{5-x}}\,dx = 6.3246$

Main topic: Definite integrals

Time to solve it: ~ 0.1 s (SnapXam)
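The extracted steps stop before the remaining algebra, so for completeness here is how the substitution finishes (added working, following the steps described above): with $u = 5 - x$ we have $du = -dx$, so

$\int \frac{dx}{\sqrt{5-x}} = -\int u^{-1/2}\,du = -2\sqrt{u} = -2\sqrt{5-x},$

and therefore

$\lim_{c\to 5^{-}}\int_{-5}^{c}\frac{dx}{\sqrt{5-x}} = \lim_{c\to 5^{-}}\left(-2\sqrt{5-c}+2\sqrt{10}\right) = 2\sqrt{10} \approx 6.3246.$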
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.990642786026001, "perplexity": 866.7436464564397}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400192783.34/warc/CC-MAIN-20200919173334-20200919203334-00720.warc.gz"}
https://en.m.wikipedia.org/wiki/Statistical_proof
Statistical proof

Statistical proof is the rational demonstration of degree of certainty for a proposition, hypothesis or theory that is used to convince others subsequent to a statistical test of the supporting evidence and the types of inferences that can be drawn from the test scores. Statistical methods are used to increase the understanding of the facts and the proof demonstrates the validity and logic of inference with explicit reference to a hypothesis, the experimental data, the facts, the test, and the odds. Proof has two essential aims: the first is to convince and the second is to explain the proposition through peer and public review.[1]

The burden of proof rests on the demonstrable application of the statistical method, the disclosure of the assumptions, and the relevance that the test has with respect to a genuine understanding of the data relative to the external world. There are adherents to several different statistical philosophies of inference, such as Bayes theorem versus the likelihood function, or positivism versus critical rationalism. These methods of reason have direct bearing on statistical proof and its interpretations in the broader philosophy of science.[1][2] A common demarcation between science and non-science is the hypothetico-deductive proof of falsification developed by Karl Popper, which is a well-established practice in the tradition of statistics. Other modes of inference, however, may include the inductive and abductive modes of proof.[3] Scientists do not use statistical proof as a means to attain certainty, but to falsify claims and explain theory. Science cannot achieve absolute certainty nor is it a continuous march toward an objective truth as the vernacular as opposed to the scientific meaning of the term "proof" might imply. Statistical proof offers a kind of proof of a theory's falsity and the means to learn heuristically through repeated statistical trials and experimental error.[2] Statistical proof also has applications in legal matters with implications for the legal burden of proof.[4]

Axioms

There are two kinds of axioms, 1) conventions that are taken as true that should be avoided because they cannot be tested, and 2) hypotheses.[5] Proof in the theory of probability was built on four axioms developed in the late 17th century:

1. The probability of a hypothesis is a non-negative real number: $\Pr(h) \geq 0$;
2. The probability of necessary truth equals one: $\Pr(t) = 1$;
3. If two hypotheses $h_1$ and $h_2$ are mutually exclusive, then the sum of their probabilities is equal to the probability of their disjunction: $\Pr(h_1) + \Pr(h_2) = \Pr(h_1 \lor h_2)$;
4. The conditional probability of $h_1$ given $h_2$, written $\Pr(h_1 \mid h_2)$, is equal to the unconditional probability $\Pr(h_1 \land h_2)$ of the conjunction $h_1$ and $h_2$, divided by the unconditional probability $\Pr(h_2)$ of $h_2$ where that probability is positive: $\Pr(h_1 \mid h_2) = \frac{\Pr(h_1 \land h_2)}{\Pr(h_2)}$, where $\Pr(h_2) > 0$.

The preceding axioms provide the statistical proof and basis for the laws of randomness, or objective chance from where modern statistical theory has advanced.
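A brief worked illustration of axioms 3 and 4 (an added example with arbitrary numbers, not part of the article): if two mutually exclusive hypotheses have $\Pr(h_1) = 0.2$ and $\Pr(h_2) = 0.3$, then $\Pr(h_1 \lor h_2) = 0.2 + 0.3 = 0.5$; and if $\Pr(h_1 \land h_2) = 0.12$ while $\Pr(h_2) = 0.4$, then $\Pr(h_1 \mid h_2) = 0.12 / 0.4 = 0.3$.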
Experimental data, however, can never prove that the hypothesis (h) is true, but relies on an inductive inference by measuring the probability of the hypothesis relative to the empirical data. The proof is in the rational demonstration of using the logic of inference, math, testing, and deductive reasoning of significance.[1][2][6]

Test and proof

The term proof descended from its Latin roots (provable, probable, probare L.) meaning to test.[7][8] Hence, proof is a form of inference by means of a statistical test. Statistical tests are formulated on models that generate probability distributions. Examples of probability distributions might include the binary, normal, or Poisson distribution that give exact descriptions of variables that behave according to natural laws of random chance. When a statistical test is applied to samples of a population, the test determines whether the sample statistics are significantly different from the assumed null-model. True values of a population, which are unknowable in practice, are called parameters of the population. Researchers sample from populations, which provide estimates of the parameters, to calculate the mean or standard deviation. If the entire population is sampled, then the sample statistic mean and distribution will converge with the parametric distribution.[9]

Using the scientific method of falsification, the probability value at which the sample statistic is deemed too different from the null-model to be explained by chance alone is set prior to the test. Most statisticians set this threshold probability value at 0.05 or 0.1, which means that if the sample statistics diverge from the parametric model more than 5 (or 10) times out of 100, then the discrepancy is unlikely to be explained by chance alone and the null-hypothesis is rejected. Statistical models provide exact outcomes for the parametric values and estimates of the sample statistics. Hence, the burden of proof rests in the sample statistics that provide estimates of a statistical model. Statistical models contain the mathematical proof of the parametric values and their probability distributions.[10][11]

Bayes theorem

Bayesian statistics are based on a different philosophical approach for proof of inference. The mathematical formula for Bayes's theorem is:

$\Pr[\mathrm{Parameter} \mid \mathrm{Data}] = \frac{\Pr[\mathrm{Data} \mid \mathrm{Parameter}] \times \Pr[\mathrm{Parameter}]}{\Pr[\mathrm{Data}]}$

The formula is read as the probability of the parameter (or hypothesis h, as used in the notation on axioms) "given" the data (or empirical observation), where the vertical bar refers to "given". The right hand side of the formula calculates the prior probability of a statistical model (Pr[Parameter]) with the likelihood (Pr[Data | Parameter]) to produce a posterior probability distribution of the parameter (Pr[Parameter | Data]). The posterior probability is the likelihood that the parameter is correct given the observed data or sample statistics.[12] Hypotheses can be compared using Bayesian inference by means of the Bayes factor, which is the ratio of the posterior odds to the prior odds. It provides a measure of whether the data have increased or decreased the likelihood of one hypothesis relative to another.[13] The statistical proof is the Bayesian demonstration that one hypothesis has a higher (weak, strong, positive) likelihood.[13] There is considerable debate as to whether the Bayesian method aligns with Karl Popper's method of proof of falsification, where some have suggested that "...there is no such thing as "accepting" hypotheses at all.
All that one does in science is assign degrees of belief..."[14]: 180  According to Popper, hypotheses that have withstood testing and have yet to be falsified are not verified but corroborated. Some researchers have suggested that Popper's quest to define corroboration on the premise of probability put his philosophy in line with the Bayesian approach. In this context, the likelihood of one hypothesis relative to another may be an index of corroboration, not confirmation, and thus statistically proven through rigorous objective standing.[6][15]

In legal proceedings

"Where gross statistical disparities can be shown, they alone may in a proper case constitute prima facie proof of a pattern or practice of discrimination."[nb 1]: 271

Statistical proof in a legal proceeding can be sorted into three categories of evidence:

1. The occurrence of an event, act, or type of conduct,
2. The identity of the individual(s) responsible
3. The intent or psychological responsibility[16]

Statistical proof was not regularly applied in decisions concerning United States legal proceedings until the mid-1970s following a landmark jury discrimination case in Castaneda v. Partida. The US Supreme Court ruled that gross statistical disparities constitute "prima facie proof" of discrimination, resulting in a shift of the burden of proof from plaintiff to defendant. Since that ruling, statistical proof has been used in many other cases on inequality, discrimination, and DNA evidence.[4][17][18] However, there is not a one-to-one correspondence between statistical proof and the legal burden of proof. "The Supreme Court has stated that the degrees of rigor required in the fact finding processes of law and science do not necessarily correspond."[18]: 1533

In an example of a death row sentence (McCleskey v. Kemp[nb 2]) concerning racial discrimination, the petitioner, a black man named McCleskey, was charged with the murder of a white police officer during a robbery. Expert testimony for McCleskey introduced a statistical proof showing that "defendants charged with killing white victims were 4.3 times as likely to receive a death sentence as charged with killing blacks."[19]: 595  Nonetheless, the statistics were insufficient "to prove that the decisionmakers in his case acted with discriminatory purpose."[19]: 596  It was further argued that there were "inherent limitations of the statistical proof",[19]: 596  because it did not refer to the specifics of the individual. Despite the statistical demonstration of an increased probability of discrimination, the legal burden of proof (it was argued) had to be examined on a case-by-case basis.[19]

References

1. ^ a b c Gold, B.; Simons, R. A. (2008). Proof and other dilemmas: Mathematics and philosophy. Mathematics Association of America Inc. ISBN 978-0-88385-567-6. 2. ^ a b c Gattei, S. (2008). Thomas Kuhn's "Linguistic Turn" and the Legacy of Logical Empiricism: Incommensurability, Rationality and the Search for Truth. Ashgate Pub Co. p. 277. ISBN 978-0-7546-6160-3. 3. ^ Pedemont, B. (2007). "How can the relationship between argumentation and proof be analysed?". Educational Studies in Mathematics. 66 (1): 23–41. doi:10.1007/s10649-006-9057-x. S2CID 121547580. 4. ^ a b c Meier, P. (1986). "Damned Liars and Expert Witnesses" (PDF). Journal of the American Statistical Association. 81 (394): 269–276. doi:10.1080/01621459.1986.10478270. 5. ^ Wiley, E. O. (1975). "Karl R. Popper, Systematics, and Classification: A Reply to Walter Bock and Other Evolutionary Taxonomists".
Systematic Zoology. 24 (2): 233–43. doi:10.2307/2412764. ISSN 0039-7989. JSTOR 2412764. 6. ^ a b Howson, Colin; Urbach, Peter (1991). "Bayesian reasoning in science". Nature. 350 (6317): 371–4. Bibcode:1991Natur.350..371H. doi:10.1038/350371a0. ISSN 1476-4687. S2CID 5419177. 7. ^ Sundholm, G. (1994). "Proof-Theoretical Semantics and Fregean Identity Criteria for Propositions" (PDF). The Monist. 77 (3): 294–314. doi:10.5840/monist199477315. hdl:1887/11990. 8. ^ Bissell, D. (1996). "Statisticians have a Word for it" (PDF). Teaching Statistics. 18 (3): 87–89. CiteSeerX 10.1.1.385.5823. doi:10.1111/j.1467-9639.1996.tb00300.x. 9. ^ Sokal, R. R.; Rohlf, F. J. (1995). Biometry (3rd ed.). W.H. Freeman & Company. pp. 887. ISBN 978-0-7167-2411-7. biometry. 10. ^ Heath, David (1995). An introduction to experimental design and statistics for biology. CRC Press. ISBN 978-1-85728-132-3. 11. ^ Hald, Anders (2006). A History of Parametric Statistical Inference from Bernoulli to Fisher, 1713-1935. Springer. p. 260. ISBN 978-0-387-46408-4. 12. ^ Huelsenbeck, J. P.; Ronquist, F.; Bollback, J. P. (2001). "Bayesian Inference of Phylogeny and Its Impact on Evolutionary Biology" (PDF). Science. 294 (5550): 2310–2314. Bibcode:2001Sci...294.2310H. doi:10.1126/science.1065889. PMID 11743192. S2CID 2138288. 13. ^ a b Wade, P. R. (2000). "Bayesian methods in conservation biology" (PDF). Conservation Biology. 14 (5): 1308–1316. doi:10.1046/j.1523-1739.2000.99415.x. S2CID 55853118. 14. ^ Sober, E. (1991). Reconstructing the Past: Parsimony, Evolution, and Inference. A Bradford Book. p. 284. ISBN 978-0-262-69144-4. 15. ^ Helfenbein, K. G.; DeSalle, R. (2005). "Falsifications and corroborations: Karl Popper's influence on systematics" (PDF). Molecular Phylogenetics and Evolution. 35 (1): 271–280. doi:10.1016/j.ympev.2005.01.003. PMID 15737596. 16. ^ Fienberg, S. E.; Kadane, J. B. (1983). "The presentation of Bayesian statistical analyses in legal proceedings". Journal of the Royal Statistical Society, Series D. 32 (1/2): 88–98. doi:10.2307/2987595. JSTOR 2987595. 17. ^ Garaud, M. C. (1990). "Legal Standards and Statistical Proof in Title VII Litigation: In Search of a Coherent Disparate Impact Model". University of Pennsylvania Law Review. 139 (2): 455–503. doi:10.2307/3312286. JSTOR 3312286. 18. ^ a b The Harvard Law Review Association (1995). "Developments in the Law: Confronting the New Challenges of Scientific Evidence". Harvard Law Review. 108 (7): 1481–1605. doi:10.2307/1341808. JSTOR 1341808. 19. Faigman, D. L. (1991). "Normative Constitutional Fact-Finding": Exploring the Empirical Component of Constitutional Interpretation". University of Pennsylvania Law Review. 139 (3): 541–613. doi:10.2307/3312337. JSTOR 3312337. Notes 1. ^ Supreme Court of the United States Castaneda v. Partida, 1977 [1] cited in Meier (1986) Ibid. who states "Thus, in the space of less than half a year, the Supreme Court had moved from the traditional legal disdain for statistical proof to a strong endorsement of it as being capable, on its own, of establishing a prima facie case against a defendant."[4] 2. ^ 481 U.S. 279 (1987).[19]
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 9, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.875964343547821, "perplexity": 2274.378120464078}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710933.89/warc/CC-MAIN-20221203143925-20221203173925-00537.warc.gz"}
http://benchmarkfcns.xyz/benchmarkfcns/ackleyn4fcn.html
# Plots

Two contours of the function are presented below:

# Description and Features

• The function is not convex.
• The function is defined on n-dimensional space.
• The function is non-separable.
• The function is differentiable.

# Input Domain

The function can be defined on any input domain but it is usually evaluated on $x_i \in [-35, 35]$ for $i=1, …, n$.

# Global Minima

On the 2-dimensional space, the function has one global minimum at $f(\textbf{x}^{\ast}) = -4.590101633799122$ located at $\mathbf{x^\ast}=(-1.51, -0.755)$.

# Implementation

An implementation of the Ackley N. 4 Function with MATLAB is provided below. The function can be represented in LaTeX as follows:
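A standard way to write the Ackley N. 4 (also called Modified Ackley) function, as it is usually given in benchmark-function collections, is

$f(\mathbf{x}) = \sum_{i=1}^{n-1}\left[\, e^{-0.2}\sqrt{x_i^{2}+x_{i+1}^{2}} + 3\left(\cos(2x_i)+\sin(2x_{i+1})\right)\right],$

which reproduces the global minimum quoted above. A short Python sketch of the same definition (offered as an illustration based on this standard formula; it is not the page's original MATLAB listing):

```python
import numpy as np

def ackley_n4(x):
    """Ackley N. 4 (Modified Ackley) benchmark function, using the standard
    definition assumed above (not the site's original MATLAB code)."""
    x = np.asarray(x, dtype=float)
    xi, xj = x[:-1], x[1:]
    return float(np.sum(np.exp(-0.2) * np.sqrt(xi**2 + xj**2)
                        + 3.0 * (np.cos(2.0 * xi) + np.sin(2.0 * xj))))

# Quick check against the reported 2-dimensional minimum:
print(ackley_n4([-1.51, -0.755]))   # approximately -4.59
```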
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8885659575462341, "perplexity": 1567.7962156686285}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039742569.45/warc/CC-MAIN-20181115075207-20181115101207-00347.warc.gz"}
https://www.toptica.com/toptica-news/first-ever-precision-spectroscopy-of-antihydrogen-using-a-toptica-laser/
# First-ever precision spectroscopy of Antihydrogen – using a TOPTICA laser!

A TA-FHG pro laser was used for the first-ever optical spectroscopy on an antimatter atom.

CERN scientists have published the first-ever measurement of the optical spectrum of an antimatter atom. A TOPTICA TA-FHG pro was used for the key measurement of this success. Optical spectroscopy of the 1S-2S two-photon transition in hydrogen is a cornerstone of modern atomic physics. Over the last decades, the measurement precision of the transition frequency has been improved by several orders of magnitude, allowing tests of fundamental physical theories like quantum electrodynamics in a very simple system. Now, scientists at CERN have repeated the same experiment with antihydrogen, i.e. an atom consisting of an antiproton and a positron. With a TOPTICA TA-FHG pro laser tuned to the transition at around 243 nm, and a few atoms detected, they can already place bounds on possible differences between hydrogen and antihydrogen, testing the Standard Model, as reported in Nature. Congratulations!
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8726064562797546, "perplexity": 1697.750069656042}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178368608.66/warc/CC-MAIN-20210304051942-20210304081942-00265.warc.gz"}
https://www.newslytica.com/post/the-case-of-hydroxychloroquine
• Sai Charan

# The Case Of Hydroxychloroquine

There is no proven cure for treating COVID-19. However, there are a couple of medicines that are being used to treat the symptoms of COVID-19. One of them is Hydroxychloroquine.

Hydroxychloroquine has been in the limelight ever since President Trump declared it a miracle drug. However, it took a back seat due to the Lancet study which said that Hydroxychloroquine increases the chances of mortality instead of decreasing it. The stocks of companies that manufacture Hydroxychloroquine took a nosedive following the publication of the study. This put the manufacturers in a tight spot.

However, the study was later taken with a pinch of salt due to doubts that the authors turned a blind eye to the fact that the entire study relied on data from a single company. The authors later retracted the study, acknowledging that their data was not verified. This meant that the manufacturers finally weathered the storm.

However, there is still no substantive evidence backing the ability of Hydroxychloroquine to treat COVID-19 patients. In the meantime, pharmaceutical companies continue to zero in on developing a cure for COVID-19.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9485681653022766, "perplexity": 2187.2089214847}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488507640.82/warc/CC-MAIN-20210622033023-20210622063023-00040.warc.gz"}
http://mathhelpforum.com/advanced-statistics/32482-moment-generating-function-poisson-disttribution.html
# Math Help - moment generating function for Poisson distribution

1. ## moment generating function for Poisson distribution

Hi guys i am new. I have a problem in finding moment generating functions. The forum helped me in solving binomial, geometric and other few random equations but i still couldn't understand how the moment generating function for a poisson distribution is derived out from its probability density function.

2. Hi guys i am new. I have a problem in finding moment generating functions. The forum helped me in solving binomial, geometric and other few random equations but i still couldn't understand how the moment generating function for a poisson distribution is derived out from its probability density function.

Well, I guess this is about the last one in the list!

$m_X(t) = \sum_{y=0}^{\infty} e^{yt} \frac{\lambda^y e^{-\lambda}}{y!} = e^{-\lambda} \sum_{y=0}^{\infty} e^{yt} \frac{\lambda^y}{y!}$ $= e^{-\lambda} \sum_{y=0}^{\infty} \frac{(\lambda e^t)^y}{y!}$

using the standard series $\sum_{y=0}^{\infty} \frac{(x)^y}{y!} = e^x$ and substituting $x = \lambda e^t$ *

$= e^{-\lambda} \, e^{\lambda e^t} = e^{\lambda(e^t - 1)}$.

* If you don't like doing this there is another clever way of doing it.

3. Originally Posted by mr fantastic
Well, I guess this is about the last one in the list! $m_X(t) = \sum_{n=0}^{\infty} e^{nt} \frac{\lambda^n e^{-\lambda}}{n!} = e^{-\lambda} \sum_{n=0}^{\infty} e^{nt} \frac{\lambda^n}{n!}$ $= e^{-\lambda} \sum_{n=0}^{\infty} \frac{(\lambda e^t)^n}{n!}$ using the standard series $\sum_{n=0}^{\infty} \frac{(y)^n}{n!} = e^y$ and substituting $y = \lambda e^t$ * $= e^{-\lambda} \, e^{\lambda e^t} = e^{\lambda(e^t - 1)}$. * If you don't like doing this there is another clever way of doing it.

ahhh... ic i didnt know there was that standard series thingy xD thanks for the quick reply ^-^ btw what is the other method O.o? no harm in knowing more.

4. Originally Posted by mr fantastic
Well, I guess this is about the last one in the list! $m_X(t) = \sum_{y=0}^{\infty} e^{yt} \frac{\lambda^y e^{-\lambda}}{y!} = e^{-\lambda} \sum_{y=0}^{\infty} e^{yt} \frac{\lambda^y}{y!}$ $= e^{-\lambda} \sum_{y=0}^{\infty} \frac{(\lambda e^t)^y}{y!}$ [snip]

Let $\mu = \lambda e^t$:

$= e^{-\lambda} \sum_{y=0}^{\infty} \frac{\mu^y}{y!}$ $= e^{-\lambda} \, e^{\mu} \sum_{y=0}^{\infty} \frac{e^{-\mu}\, \mu^y}{y!}$

$\frac{e^{-\mu}\, \mu^y}{y!}$ is recognised as the pdf of a random variable following a Poisson distribution with mean $\mu$. Therefore $\sum_{y=0}^{\infty} \frac{e^{-\mu}\, \mu^y}{y!} = 1$:

$= e^{-\lambda} \, e^{\mu} (1)$

Substitute back that $\mu = \lambda e^t$:

$= e^{-\lambda} \, e^{\lambda e^t} = e^{\lambda(e^t - 1)}$.

5. ahhhh..... now thats a better way to understand ^-^ thanks again
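(Added note, not part of the thread: the closed form is easy to sanity-check numerically with a Monte Carlo estimate of $E[e^{tY}]$ for a Poisson random variable; the parameter values below are arbitrary choices for the check.)

```python
import numpy as np

rng = np.random.default_rng(0)
lam, t = 2.5, 0.3                                  # arbitrary example values

samples = rng.poisson(lam, size=1_000_000)
empirical = np.mean(np.exp(t * samples))           # Monte Carlo estimate of E[e^{tY}]
closed_form = np.exp(lam * (np.exp(t) - 1.0))      # m_Y(t) = exp(lambda*(e^t - 1))

print(empirical, closed_form)                      # should agree to a couple of decimal places
```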
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 21, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9194018840789795, "perplexity": 525.8453556049153}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398447758.91/warc/CC-MAIN-20151124205407-00032-ip-10-71-132-137.ec2.internal.warc.gz"}
https://sciencehouse.wordpress.com/2009/04/25/waiting-in-airports/
# Waiting in airports

I was visiting the University of Utah this past week.  I gave talks on the Kinetic Theory of Coupled Oscillators and on Deriving Moment Equations for Neural Networks. On my way to the airport I wondered what would be the optimal arrival time so that you spend the least amount of time waiting in the airport balanced by the cost of missing a flight.  If you make some basic assumptions, it's not too hard to derive a condition for the optimum.  Let's say the only thing we're concerned about is minimizing wasted time.  Then what we would want to do is to balance the average time waiting in airports with the average time lost to make up for a missed flight.

Let $t_a$ be the time between arrival at the airport and boarding the plane and $\sigma$ be the standard deviation in this time due to traffic, the check-in line, going through security, etc.  The average amount of time spent waiting in the airport is thus the expectation value of $t_a$, $\bar t_a$.  Suppose we let C be the time wasted if you miss a flight.  Then the expected time wasted for missing a flight is CP, where P is the probability of missing a flight.  So, optimality would be given by $\bar t_a = C P$.  Now the probability for missing a flight will be a function of the waiting time.  Assuming a normal distribution gives $P= .5\,{\rm erfc}(\bar t_a/(\sqrt{2}\sigma))$, where erfc is the complementary error function.  Hence, if your expected waiting time is zero then you would miss half of your flights.  The optimal arrival time is then given by the condition $\bar t_a= .5C\,{\rm erfc}(\bar t_a/(\sqrt{2}\sigma))$.

So let's say the standard deviation is an hour and a missed flight costs about 5 hours, then solving numerically (on Mathematica) gives $\bar t_a = 0.9$.  So the optimal time to arrive at the airport is a little less than an hour before you board.   The optimal time is not very sensitive to the cost of missing the flight.  Making it 20 hours only increases the optimal arrival time to an hour and a half.   Reducing the standard deviation to half an hour reduces the optimal time to 36 minutes.

By this calculation it would seem that arriving about an hour before departure, which is what I usually do, is close to optimal.  However, there is a flaw in this calculation because I can only recall missing one flight in my life and by optimality I should be missing about one in five flights (given that I arrive at the airport an hour before my flight and my estimated cost per missed flight is 5 hours).  What this implies is that the transit time to the gate distribution is much narrower than a normal so that while the uncertainty in transit time from my house to the gate seems to be about half an hour to an hour, it almost never takes much longer.  However, having a narrower distribution means that the optimal waiting time won't change very much because the probability of missing a plane increases very quickly as you shorten the waiting time (i.e. the difference between arriving 45 minutes before departure versus an hour could mean missing many more flights).

So an hour before the flight is still pretty close to optimal.  Having said all this, I actually don't mind showing up at the airport a little earlier than necessary since it gives me a chance to read.
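(An added aside, not from the original post: the same numerical solve is easy to reproduce without Mathematica, for example with SciPy; the parameter values below follow the post.)

```python
import numpy as np
from scipy.special import erfc
from scipy.optimize import brentq

def optimal_wait(C, sigma):
    """Solve t = 0.5*C*erfc(t / (sqrt(2)*sigma)) for the optimal mean wait t."""
    f = lambda t: t - 0.5 * C * erfc(t / (np.sqrt(2.0) * sigma))
    return brentq(f, 0.0, 10.0 * C)   # f changes sign between 0 and a few multiples of C

print(optimal_wait(C=5.0, sigma=1.0))    # ~0.9 hours, matching the post
print(optimal_wait(C=20.0, sigma=1.0))   # ~1.5 hours
print(optimal_wait(C=5.0, sigma=0.5))    # ~0.6 hours, i.e. roughly 36 minutes
```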
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 8, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8744753003120422, "perplexity": 435.91145263924955}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886118195.43/warc/CC-MAIN-20170823094122-20170823114122-00547.warc.gz"}
http://math.stackexchange.com/users/43438/joseph-garvin?tab=activity&sort=comments
Joseph Garvin

- Nov 4, comment on "Concrete Mathematics - Stability of definitions in the repertoire method": Brilliant! Never occurred to me to plug the closed form back into the recurrence to convince myself, but that makes sense to try since that's where the structure is coming from. Thanks :-)
- Nov 4, comment on "Repertoire Method Clarification Required ( Concrete Mathematics )": @HansLundmark: I can see that it will be a combination of those, but I don't understand how we know A, B, and C will always be the same, which I have opened as a new question.
- Jan 27, comment on "How to prove $\gcd(a,\gcd(b, c)) = \gcd(\gcd(a, b), c)$?": Actually #4 is OK, it does follow from the definition if you're using the Bezout's identity version.
- Jan 27, comment on "Blending values on the number line": Were the number lines drawn by hand or is there a plotting tool for these?
- Jan 26, comment on "How to prove $\gcd(a,\gcd(b, c)) = \gcd(\gcd(a, b), c)$?": How do we know c divides a in the third sentence?
- Jan 26, comment on "How to prove $\gcd(a,\gcd(b, c)) = \gcd(\gcd(a, b), c)$?": #4 seems false. How does it follow from the definition of GCD? If d is a prime factor X common to both a and gcd(b,c), and e is a different prime factor Y common to both a and gcd(b,c), then e will not divide d or vice versa, because they're prime.
- Jan 22, comment on "Partition minimizing maximum of Euler's totient function across terms": It may be a great idea. I read that the ith primorial multiplied by the ith prime is sparsely totient, and used that to quickly build a list (not all sparse totients, but for rough minimization it may be OK). I tried building the partition for $2^{64}$ in the style of Euclid's algorithm for GCD -- I took the biggest number in the list < $2^{64}$ and took the remainder of dividing by it, then took the biggest sparse totient in the list under the remainder and took the remainder of dividing by it, etc. Turns out a linear combination of those sparse totients exactly partitioned it. Coincidence?
- Jan 10, comment on "Quick way to iterate multiples of a prime N that are not multiples of primes X, Y, Z, …?": Ah, that makes more sense, thanks.
- Jan 8, comment on "Quick way to iterate multiples of a prime N that are not multiples of primes X, Y, Z, …?": If I understand right, this computes the size of the set of numbers I want to iterate, but it doesn't help with iterating or computing e.g. the 5th number, or am I not thinking hard enough yet?
- Dec 18, comment on "Primes in arithmetic progression": Is it common to use plain parens to represent gcd? I'm so used to reading those as tuples.
- Dec 16, comment on "Progressions with variable density that can be described in constant space?": Yes, much. Thanks for your patience explaining :)
- Dec 16, comment on "Progressions with variable density that can be described in constant space?": Oooooh, that makes much more sense.
- Dec 16, comment on "Progressions with variable density that can be described in constant space?": Actually my confusion might stem from what you mean by $x_k$ and $x_k + 1$. I interpret $x_k$ to be the 0 or 1 in the kth bit, where k is offset from the radix point of the most significant bit. Did you mean $x_{k+1}$ instead of $x_k + 1$? Because $x_k$ would just be 1 or 0, which when 1 is added would be two? Or maybe by addition you meant string concatenation? I suck at notation :(
- Dec 16, comment on "Progressions with variable density that can be described in constant space?": Your edit helps a bit, thanks. So it sounds like you're saying that if you take the most significant locked bit, $x_k$, you can keep adding $x_k + 1$ and get numbers satisfying the constraint that are evenly spaced. But isn't that neglecting the unfixed bits that are below $x_k$? Why don't we get variable density from those? I'm actually unsure if your conclusion is answering the question or saying it can't be answered -- my test is to eliminate possibilities, so failing the test would mean a proof that fixing bits works, but in the comments on the question you said fixing them doesn't?
- Dec 16, comment on "Progressions with variable density that can be described in constant space?": I'm probably being dense, but I don't see how your first sentence could be true. If the number is 32 bits, for example, fixing bits 3, 5, and 7 to particular values doesn't put any constraint on the "leading digits", that is, the leading bits, 8-32. Unless by first digits you mean the least significant bits, but it doesn't impose any constraint on bits 0-2 either, so I'm still not sure what you mean. Is k indexing the total number of bits, or is it indexing only the bits that we've locked?
- Dec 16, comment on "Progressions with variable density that can be described in constant space?": @MarioCarneiro: I've added a constraint that I think gets at what I'm going for, getting the Nth element easily. I don't totally follow your explanation for why a set of fixed bits doesn't work -- are you saying that my specific example of constraining the 3rd/5th/7th bits wouldn't work, or are you just saying it's not always true that any subset of fixed bits will work, because if you only pick a chunk of adjacent digits at the beginning/end you only get multiples or all numbers below a threshold?
- Dec 16, comment on "Progressions with variable density that can be described in constant space?": @MarioCarneiro: That may work. Makes me wonder if you can just say, all the numbers where some subset of the bits are fixed, e.g. all numbers where the 3rd bit is 1, the 5th bit is 1, and the 7th bit is 1 (see the sketch after this list).
- Dec 16, comment on "Progressions with variable density that can be described in constant space?": @MarioCarneiro: My ultimate goal does involve using this in a computer program, so leading digits are a bit problematic to extract, unless it's the binary leading digit, but that's always 1 if you consider the 'leading digit' to be the most significant bit, or if you consider it to be the left-most bit in the word, it just splits the space evenly in half, the numbers below $\frac{2^n - 1}{2}$ and the numbers equal and above. Sorry, I realize I'm springing more details than are in my question; I was trying to capture the essence and keep it succinct, obviously didn't succeed :P
- Dec 16, comment on "Progressions with variable density that can be described in constant space?": @MarioCarneiro: Basically I'm trying to stick to sequences that can be generated from a bounded number of starting bits. Put another way, I'm trying to find sequences that have something like a closed form representation I can work with algebraically and reason about, rather than somebody just dropping a manually figured-out list of numbers lacking any generality.
- Dec 16, comment on "Progressions with variable density that can be described in constant space?": @MarioCarneiro: Couldn't you get away with just saying all numbers with a leading 8?
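To illustrate the fixed-bits idea from the comments above, here is a tiny sketch of my own (not from the thread; I assume 0-indexed bit positions 3, 5 and 7, all locked to 1): it enumerates the members of the set and prints the gaps between consecutive ones, which are not all equal.

```python
# Enumerate the integers below 2**10 whose bits 3, 5 and 7 (0-indexed) are all 1,
# and inspect the spacing between consecutive members.
MASK = (1 << 3) | (1 << 5) | (1 << 7)          # the locked bit positions

members = [n for n in range(1 << 10) if n & MASK == MASK]
gaps = [b - a for a, b in zip(members, members[1:])]

print(members[:8])        # [168, 169, 170, 171, 172, 173, 174, 175]
print(sorted(set(gaps)))  # several distinct gap sizes, so the local density varies
```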
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8529365062713623, "perplexity": 505.11172662940976}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042989891.18/warc/CC-MAIN-20150728002309-00010-ip-10-236-191-2.ec2.internal.warc.gz"}
http://mathonline.wikidot.com/the-alternating-series-test
# The Alternating Series Test

So far we have looked at several tests to determine whether a series is convergent or divergent, but none of them lets us handle a negative or partially negative series. The following test will allow us to do so.

Theorem (The Alternating Series Test): Let $\{ a_n \}$ be a sequence. If for $n$ sufficiently large, $a_na_{n+1} < 0$, $\mid a_{n+1} \mid \leq \mid a_n \mid$, and $\lim_{n \to \infty} a_n = 0$, then the series $\sum_{n=1}^{\infty} a_n$ is convergent.

We note that the alternating series test has three requirements for $n$ sufficiently large. First, consecutive terms must alternate in sign. Secondly, the absolute values of the terms must be decreasing in size. And lastly, the limit of the sequence of terms must approach 0. Under these conditions we can conclude that the series $\sum_{n=1}^{\infty} a_n$ is convergent.

• Proof of Theorem: Let $a_1 > 0$. Since $a_na_{n+1} < 0$ we get that $a_{2n+1} > 0$ and $a_{2n} < 0$ $\forall n \in \mathbb{N}$. Now let $s_n = a_1 + a_2 + ... + a_n$ denote the $n^{\mathrm{th}}$ partial sum of the series.
• Now since the terms are decreasing in size it follows that $a_{2n+1} \geq -a_{2n+2}$ and so $s_{2n+2} = s_{2n} + a_{2n+1} + a_{2n+2} \geq s_{2n}$. So the even partial sums $\{ s_{2n} \}$ form an increasing sequence.
• Similarly, since the terms are decreasing in size it follows that $-a_{2n} \geq a_{2n+1}$, and so $s_{2n+1} = s_{2n-1} + a_{2n} + a_{2n+1} \leq s_{2n-1}$, so the odd partial sums form a decreasing sequence $\{ s_{2n-1} \}$, and so:

(1) $$s_2 \leq s_4 \leq ... \leq s_{2n} \leq s_{2n-1} \leq s_{2n-3} \leq ... \leq s_3 \leq s_1$$

• So $s_2$ is a lower bound for the sequence $\{ s_{2n-1} \}$ and $s_1$ is an upper bound for the sequence $\{ s_{2n} \}$, both of which sequences converge, and so $\lim_{n \to \infty} s_{2n-1} = L_1$ and $\lim_{n \to \infty} s_{2n} = L_2$ by the monotonic sequence theorem.
• Now since we were given that $\lim_{n \to \infty} a_n = 0$ and we know that $a_{2n} = s_{2n} - s_{2n-1}$, we get $0 = \lim_{n \to \infty} a_{2n} = \lim_{n \to \infty} s_{2n} - \lim_{n \to \infty} s_{2n-1} = L_2 - L_1$, which implies $L_1 = L_2$. So let $L = L_1 = L_2$; then $\lim_{n \to \infty} s_n = L$, since both the even and the odd partial sums converge to $L$, and therefore $\sum_{n=1}^{\infty} a_n$ is convergent to $L$. $\blacksquare$

We note that a similar proof works if the first term of the series is negative, that is $a_1 < 0$. We will now look at some examples applying the alternating series test.

## Example 1

Using the alternating series test, determine if $\sum_{n=1}^{\infty} \frac{(-1)^n}{n}$ is convergent or divergent.

We must first check to see if all of the conditions for the alternating series test are met before applying it. We note that $a_na_{n+1} < 0$ since the terms alternate in sign. We need to check if $\mid a_{n+1} \mid \leq \mid a_n \mid$ for $n$ sufficiently large. We note that $\mid a_{n+1} \mid = \biggl\lvert \frac{(-1)^{n+1}}{n+1} \biggr\rvert = \frac{1}{n+1}$ and that $\mid a_n \mid = \biggl\lvert \frac{(-1)^{n}}{n} \biggr\rvert = \frac{1}{n}$. We know that $\mid a_{n+1} \mid = \frac{1}{n+1} \leq \frac{1}{n} = \mid a_n \mid$, so the terms are decreasing in size. Lastly, we note that $\lim_{n \to \infty} \frac{(-1)^n}{n} = 0$. So by the alternating series test, $\sum_{n=1}^{\infty} \frac{(-1)^n}{n}$ is convergent.
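A quick numerical illustration (my own addition, not from the page): the partial sums of $\sum_{n=1}^{\infty} \frac{(-1)^n}{n}$ oscillate around their limit $-\ln 2$, with the even and odd partial sums approaching it from opposite sides, just as in the proof above.

```python
# Print the first partial sums of sum_{n>=1} (-1)^n / n and their distance from -ln 2.
import math

limit = -math.log(2)
s = 0.0
for n in range(1, 21):
    s += (-1) ** n / n
    parity = "even" if n % 2 == 0 else "odd "
    print(f"s_{n:2d} ({parity}) = {s:+.6f}   error = {s - limit:+.6f}")
print(f"-ln 2        = {limit:+.6f}")
```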
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 1, "x-ck12": 0, "texerror": 0, "math_score": 0.9980131983757019, "perplexity": 89.88704654290703}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267864364.38/warc/CC-MAIN-20180622065204-20180622085204-00113.warc.gz"}
http://math.stackexchange.com/questions/296689/show-that-y-sup-c
# Show that $y=\sup (C)$

Let $A \subseteq \mathbb{R}$ be nonempty and bounded above. Also, let $C = \{x+100 : x \in A\}$. Consider $M$ in $\mathbb{R}$ such that $M < y$. Is $M-100$ an upper bound for $A$? Why? Use this result to show that $y=\sup (C)$.

- What is $y$? ${}{}$ –  Git Gud Feb 6 '13 at 23:11
- @Git, if you put {}{}{} between dollar signs, they count as characters. –  Gerry Myerson Feb 6 '13 at 23:13
- @GerryMyerson Thanks! ${}{}$ –  Git Gud Feb 6 '13 at 23:14

## 1 Answer

I don't really understand the question, but I'm going to find $\sup (C)$ and prove it is what it is. Hopefully that will be of some help to the OP.

Since $A$ is bounded above, $\sup (A)$ exists. Let $\displaystyle s_A =\sup (A)$. Now let $s=s_A+100$.

1. Take $c\in C$ arbitrarily. We have $c=x+100$ for some $x\in A$. By definition of $s_A$ we have $x\leq s_A$, therefore $c=x+100\leq s_A+100=s$. This proves that $s$ is an upper bound for $C$.

2. We've now established that the set of upper bounds of $C$ isn't empty. So take an arbitrary upper bound $m$ of $C$. Since $m$ is an upper bound of $C$ we know that for any $c\in C$ it is true that $c\leq m$, which means that for any $x\in A$ we have $x+100\leq m$, and it follows that for any $x\in A$, $x\leq m-100$. Therefore, since $x$ was arbitrary, $s_A\leq m-100$, so we get $s=s_A+100\leq m$. Since $m$ was an arbitrary upper bound for $C$, we've proved that $s$ is the smallest upper bound of $C$ and therefore $s=\sup (C)$.

- I poorly worded the question but what you replied with helped me think about the problem in a different way. Thanks! –  Math Student Feb 7 '13 at 0:51
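As a tiny sanity check of the answer (my own illustration, not part of the thread): for a finite set the supremum is just the maximum, and shifting every element by 100 shifts it by 100.

```python
# Finite illustration of sup(C) = sup(A) + 100 for C = {x + 100 : x in A}.
import random

A = [random.uniform(-50, 50) for _ in range(20)]
C = [x + 100 for x in A]
print(max(A) + 100, max(C))          # the same number
print(max(C) == max(A) + 100)        # True
```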
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9976245164871216, "perplexity": 155.2033644290353}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394010776091/warc/CC-MAIN-20140305091256-00067-ip-10-183-142-35.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/gravity-on-a-very-small-scale.521170/
# Gravity on a very small scale

1. Aug 12, 2011 ### ArcanaNoir

Hi.. umm, I'm from the math department, and um...I'm shy.... hello physics people... So I saw this thing on tv, I think it was Through the Wormhole..I'm not sure..and um, don't judge me for watching "science for the uneducated masses".... But anyway, they were talking about gravity, and they were examining gravity on a very small scale. They were conducting an experiment that measured gravity on a very small scale (the lab was underground to limit interference) and they said things like rush hour traffic or airplanes could skew the data.

I was wondering, at that small, small scale, don't other forces, like cohesion and adhesion and static electricity and other properties I don't know about, don't they overwhelm the force of gravity? How do they know they are measuring gravity, and not some other force? Just so it's not misunderstood, we're talking about gravity BETWEEN objects, not between the earth and an object.

2. Aug 12, 2011 ### Staff: Mentor

Yes, the electromagnetic force that causes all atoms and molecules to stick together and governs most everyday observable effects vastly outdoes gravity. They know they are measuring gravity by ensuring that the setup of the experiment screens out as many of these effects as possible. Doing experiments in a vacuum chamber would almost eliminate most effects from colliding gas molecules, for example. Shielding the chamber would reduce any cosmic rays or EM radiation affecting it and keep electrical effects from building up. Note that these aren't specific examples of how they perform the experiments, as I don't know the exact setups. I'm just using them as overall examples.
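To put "vastly outdoes" into numbers, here is a rough back-of-the-envelope comparison of my own (not from the thread): the ratio of the Coulomb force to the gravitational force between two identical particles, which is independent of their separation since both forces fall off as the inverse square of the distance.

```python
# Ratio of electrostatic to gravitational force between two identical charged particles.
G   = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
k   = 8.988e9      # Coulomb constant, N m^2 C^-2
e   = 1.602e-19    # elementary charge, C
m_e = 9.109e-31    # electron mass, kg
m_p = 1.673e-27    # proton mass, kg

def coulomb_over_gravity(m1, m2):
    # The r**2 factors cancel, so no separation distance is needed.
    return (k * e**2) / (G * m1 * m2)

print(f"two electrons: F_Coulomb / F_gravity ~ {coulomb_over_gravity(m_e, m_e):.1e}")  # ~4e42
print(f"two protons:   F_Coulomb / F_gravity ~ {coulomb_over_gravity(m_p, m_p):.1e}")  # ~1e36
```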
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.859083890914917, "perplexity": 1277.6854530110572}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818687428.60/warc/CC-MAIN-20170920175850-20170920195850-00003.warc.gz"}
http://worldebooklibrary.org/articles/eng/Deterministic_pushdown_automaton
# Deterministic pushdown automaton

In automata theory, a deterministic pushdown automaton (DPDA or DPA) is a variation of the pushdown automaton. The DPDA accepts the deterministic context-free languages, a proper subset of the context-free languages.[1]

Machine transitions are based on the current state and input symbol, and also the current topmost symbol of the stack. Symbols lower in the stack are not visible and have no immediate effect. Machine actions include pushing, popping, or replacing the stack top. A deterministic pushdown automaton has at most one legal transition for the same combination of input symbol, state, and top stack symbol. This is where it differs from the nondeterministic pushdown automaton.

## Contents

• Formal definition
• Languages recognized
• Properties
• Closure
• Equivalence problem
• Notes
• References

## Formal definition

A (not necessarily deterministic) PDA $M$ can be defined as a 7-tuple $M = (Q, \Sigma, \Gamma, q_0, Z_0, A, \delta)$ where

• $Q$ is a finite set of states
• $\Sigma$ is a finite set of input symbols
• $\Gamma$ is a finite set of stack symbols
• $q_0 \in Q$ is the start state
• $Z_0 \in \Gamma$ is the starting stack symbol
• $A \subseteq Q$ is the set of accepting states
• $\delta$ is a transition function, where $\delta \colon (Q \times (\Sigma \cup \{\varepsilon\}) \times \Gamma) \longrightarrow \mathcal{P}(Q \times \Gamma^{*})$, where ${}^{*}$ is the Kleene star, meaning that $\Gamma^{*}$ is the set of all finite strings (including the empty string $\varepsilon$) of elements of $\Gamma$, $\varepsilon$ denotes the empty string, and $\mathcal{P}(X)$ is the power set of a set $X$.

$M$ is deterministic if it satisfies both of the following conditions:

• For any $q \in Q$, $a \in \Sigma \cup \{\varepsilon\}$, $x \in \Gamma$, the set $\delta(q,a,x)$ has at most one element.
• For any $q \in Q$, $x \in \Gamma$, if $\delta(q, \varepsilon, x) \neq \emptyset$, then $\delta(q,a,x) = \emptyset$ for every $a \in \Sigma$.

There are two possible acceptance criteria: acceptance by empty stack and acceptance by final state. The two are not equivalent for the deterministic pushdown automaton (although they are for the nondeterministic pushdown automaton). The languages accepted by empty stack are those languages that are accepted by final state and in which no word of the language is a prefix of another word of the language.

## Languages recognized

If $L(A)$ is a language accepted by a PDA $A$, it can also be accepted by a DPDA if and only if there is a single computation from the initial configuration until an accepting one for all strings belonging to $L(A)$. If $L(A)$ can be accepted by a PDA it is a context-free language, and if it can be accepted by a DPDA it is a deterministic context-free language.

Not all context-free languages are deterministic. This makes the DPDA a strictly weaker device than the PDA. For example, the language of even-length palindromes on the alphabet of 0 and 1 has the context-free grammar S → 0S0 | 1S1 | ε.
An arbitrary string of this language cannot be parsed without reading all its letters first, which means that a pushdown automaton has to try alternative state transitions to accommodate the different possible lengths of a semi-parsed string.[2]

Restricting the DPDA to a single state reduces the class of languages accepted to the LL(1) languages.[3] In the case of a PDA, this restriction has no effect on the class of languages accepted.

## Properties

### Closure

Closure properties of deterministic context-free languages (accepted by deterministic PDA by final state) are drastically different from those of the context-free languages. As an example, they are (effectively) closed under complementation, but not closed under union. To prove that the complement of a language accepted by a deterministic PDA is also accepted by a deterministic PDA is tricky; in principle one has to avoid infinite computations. As a consequence of the complementation, it is decidable whether a deterministic PDA accepts all words over its input alphabet, by testing its complement for emptiness. This is not possible for context-free grammars (hence not for general PDA).

### Equivalence problem

Géraud Sénizergues (1997) proved that the equivalence problem for deterministic PDA (i.e. given two deterministic PDA A and B, is L(A) = L(B)?) is decidable,[4] a proof that earned him the 2002 Gödel Prize. For nondeterministic PDA, equivalence is undecidable.

## Notes

1. ^
2. ^
3. ^ Kurki-Suonio, R. (1969). "Notes on top-down languages". BIT 9 (3): 225–238.
4. ^ Sénizergues, Géraud (1997). "The equivalence problem for deterministic pushdown automata is decidable". Automata, Languages and Programming. 1256/1997: 671–681.
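As a concrete illustration (my own sketch, not part of the article): a minimal simulator for one particular DPDA that accepts the deterministic context-free language { 0^n 1^n : n ≥ 1 } by final state. The state names, the stack alphabet and the transition encoding are choices made for this sketch.

```python
# Deterministic PDA for { 0^n 1^n : n >= 1 }, acceptance by final state.
# Transitions map (state, input symbol or None for epsilon, stack top)
# to (next state, symbols that replace the popped top; last list entry becomes the new top).
DELTA = {
    ('q0', '0', 'Z'): ('q0', ['Z', 'X']),    # first 0: push a counter X above the bottom marker
    ('q0', '0', 'X'): ('q0', ['X', 'X']),    # further 0s: push one X each
    ('q0', '1', 'X'): ('q1', []),            # first 1: start popping
    ('q1', '1', 'X'): ('q1', []),            # each later 1 pops one X
    ('q1', None, 'Z'): ('qf', ['Z']),        # all Xs matched: epsilon-move to the accepting state
}

def accepts(word: str) -> bool:
    state, stack, i = 'q0', ['Z'], 0
    while True:
        top = stack[-1] if stack else None
        if (state, None, top) in DELTA:                 # epsilon move (never conflicts with an
            state, push = DELTA[(state, None, top)]     # input move, by the DPDA conditions above)
            stack.pop(); stack.extend(push)
        elif i < len(word) and (state, word[i], top) in DELTA:
            state, push = DELTA[(state, word[i], top)]
            stack.pop(); stack.extend(push)
            i += 1
        else:
            break
    return state == 'qf' and i == len(word)

if __name__ == '__main__':
    for w in ['01', '0011', '000111', '', '001', '010', '10']:
        print(repr(w), accepts(w))   # True for the first three, False for the rest
```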
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9138492345809937, "perplexity": 1844.861943963116}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178389798.91/warc/CC-MAIN-20210309092230-20210309122230-00338.warc.gz"}
http://mathhelpforum.com/differential-equations/145822-solving-de-spread-rumour.html
# Math Help - Solving DE for the spread of a rumour.

1. ## Solving DE for the spread of a rumour.

Here is my problem and what I have done so far.

[The problem statement and the poster's working were attached as thumbnail images.]

2. The integral of $\frac{dx}{x(1- x)}$ is NOT "ln|x(1- x)|". First write it as "partial fractions": $\frac{1}{x(1-x)}= \frac{A}{x}+ \frac{B}{1- x}$, then integrate.

3. Originally Posted by Rina: here is my problem and what I have done so far.

$\frac{dx}{dt} = x(1 - x)$

$\frac{dt}{dx} = \frac{1}{x(1 - x)}$.

Now using the method of partial fractions:

$\frac{A}{x} + \frac{B}{1 - x} = \frac{1}{x(1 - x)}$

$\frac{A(1 - x) + Bx}{x(1 - x)} = \frac{1}{x(1 - x)}$

$A(1 - x) + Bx = 1$

$A - Ax + Bx = 1$

$A + (B - A)x = 1 + 0x$.

Therefore $A = 1$ and $B - A = 0$, so $B = 1$.

Thus $\frac{1}{x(1 - x)} = \frac{1}{x} + \frac{1}{1 - x}$.

Back to the DE:

$\frac{dt}{dx} = \frac{1}{x(1 - x)}$

$\frac{dt}{dx} = \frac{1}{x}+ \frac{1}{1 - x}$

$t = \int{\left(\frac{1}{x} + \frac{1}{1 - x}\right)\,dx}$

$t = \ln{|x|} - \ln{|1 - x|} + C$

$t = \ln{\left|\frac{x}{1 - x}\right|} + C$

$t = \ln{\left|\frac{x - 1 + 1}{1 - x}\right|} + C$

$t = \ln{\left|\frac{-(1 - x)}{1 - x} + \frac{1}{1 - x}\right|} + C$

$t = \ln{\left|-1 + \frac{1}{1 - x}\right|} + C$

$t - C = \ln{\left|-1 + \frac{1}{1 - x}\right|}$

$e^{t - C} = \left|-1 + \frac{1}{1 - x}\right|$

$e^{-C}e^t = \left|-1 + \frac{1}{1 - x}\right|$

$\pm e^{-C}e^t = -1 + \frac{1}{1 - x}$

$A\,e^t = -1 + \frac{1}{1 - x}$, where $A = \pm e^{-C}$

$A\,e^t + 1 = \frac{1}{1 - x}$

$\frac{1}{A\,e^t + 1} = 1 - x$

$x = 1 - \frac{1}{A\,e^t + 1}$

$x = \frac{A\,e^t + 1 - 1}{A\,e^t + 1}$

$x = \frac{A\,e^t}{A\,e^t + 1}$.

4. Thank you. And C is?

5. Originally Posted by Rina: the first A has nothing to do with the second A. It is confusing. Should I have used a different letter for that constant at the end of the calculation?

They are the same $A$. Otherwise I would have used different letters.

6. I am sorry that I am so stupid. It is not easy, trust me. I am sorry, but I do not understand how the first A, just a constant initially, all of a sudden became $\pm e^{-C}$.

7. If we use this DE solution, x=1/(1-e^(-t)), and try to find the proportion of the population that has heard the rumor at the time t=0, the solution gives us the result 0.5; that is, half of the population before the rumor has started spreading? It doesn't make sense. Am I interpreting it wrong?

8. Originally Posted by Rina: I am sorry that I am so stupid. It is not easy, trust me. I am sorry, but I do not understand how the first A, just a constant initially, all of a sudden became $\pm e^{-C}$.

C is arbitrary, therefore -C is arbitrary, therefore e^(-C) is arbitrary, therefore it can be represented by a new arbitrary symbol, e.g. A.

9. Originally Posted by Rina: If we use this DE solution, x=1/(1-e^(-t)), and try to find the proportion of the population that has heard the rumor at the time t=0, the solution gives us the result 0.5; that is, half of the population before the rumor has started spreading? It doesn't make sense. Am I interpreting it wrong?

1. The given solution is x=1/(1+e^(-t)), not what you have said.

2. t = 0 => x = 1/2. All that means is that at t = 0 half the population have heard the rumour. Big deal.

3. The question asked you to show that the solution was x=1/(1+e^(-t)). So you don't actually have to solve the DE. Just substitute x=1/(1+e^(-t)) into it and show that the resulting left hand and right hand sides are equal to each other. The fact that you have been given no boundary condition suggests that this is the approach you were meant to take ...
(And given what I have said in my second point, a possible boundary condition would have been the initial condition x(0) = 1/2).
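Following point 3 of the last reply, here is a quick symbolic check of my own (not posted in the thread) that $x = \frac{1}{1 + e^{-t}}$ really does satisfy $\frac{dx}{dt} = x(1 - x)$, and that it gives $x(0) = \frac{1}{2}$:

```python
# Verify that x(t) = 1/(1 + exp(-t)) solves dx/dt = x(1 - x).
import sympy as sp

t = sp.symbols('t')
x = 1 / (1 + sp.exp(-t))
print(sp.simplify(sp.diff(x, t) - x * (1 - x)))  # 0, so the DE is satisfied
print(x.subs(t, 0))                              # 1/2, the implied initial condition
```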
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 33, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9650020003318787, "perplexity": 482.4313705455907}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645310876.88/warc/CC-MAIN-20150827031510-00171-ip-10-171-96-226.ec2.internal.warc.gz"}
https://www.zbmath.org/?q=an%3A07204025
Reversible disjoint unions of well orders and their inverses. (English) Zbl 07204025

Summary: A poset $$\mathbb{P}$$ is called reversible iff every bijective homomorphism $$f:\mathbb{P} \rightarrow \mathbb{P}$$ is an automorphism. Let $$\mathcal{W}$$ and $$\mathcal{W}^*$$ denote the classes of well orders and their inverses respectively. We characterize reversibility in the class of posets of the form $$\mathbb{P} =\bigcup_{i\in I}\mathbb{L}_i$$, where $$\mathbb{L}_i$$, $$i\in I$$, are pairwise disjoint linear orders from $$\mathcal{W} \cup \mathcal{W}^*$$. First, if $$\mathbb{L}_i \in \mathcal{W}$$, for all $$i \in I$$, and $$\mathbb{L}_i \cong \alpha_i =\gamma_i+n_i\in \text{Ord}$$, where $$\gamma_i \in \text{Lim} \cup \{0\}$$ and $$n_i \in \omega$$, defining $$I_\alpha := \{i \in I : \alpha_i = \alpha\}$$ for $$\alpha \in \text{Ord}$$, and $$J_\gamma := \{j \in I : \gamma_j = \gamma\}$$, for $$\gamma \in \text{Lim} \cup\{0\}$$, we prove that $$\bigcup_{i\in I} \mathbb{L}_i$$ is a reversible poset iff $$\langle \alpha_i : i \in I \rangle$$ is a finite-to-one sequence, that is, $$|I_\alpha| < \omega$$, for all $$\alpha \in \text{Ord}$$, or there exists $$\gamma = \max\{ \gamma_i :i \in I\}$$, for $$\alpha \leq \gamma$$ we have $$|I_\alpha| < \omega$$, and $$\langle n_i : i \in J_\gamma \setminus I_\gamma \rangle$$ is a reversible sequence of natural numbers. The same holds when $$\mathbb{L}_i \in \mathcal{W}^*$$, for all $$i \in I$$. In the general case, the reversibility of the whole union is equivalent to the reversibility of the union of components from $$\mathcal{W}$$ and the union of components from $$\mathcal{W}^*$$.

MSC: 06-XX Order, lattices, ordered algebraic structures
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8887383341789246, "perplexity": 2694.674406398743}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046152112.54/warc/CC-MAIN-20210806020121-20210806050121-00625.warc.gz"}
http://ilja-schmelzer.de/forum/showthread.php?tid=43&pid=144
About Einstein's Leyden lecture

Schmelzer, Administrator
Posts: 212, Threads: 30, Joined: Dec 2015, Reputation: 0
05-19-2016, 09:55 PM

(05-19-2016, 07:41 PM) John Duffield Wrote: NB: perhaps you have the wrong idea about ether? See Einstein talking about it here in 1920.

Einstein's Leyden lecture Ether and the theory of relativity is, of course, a very interesting document. I like it also because it is a rare place where Einstein has made an error - even if only a minor one:

Quote: But this ether may not be thought of as endowed with the quality characteristic of ponderable media, as consisting of parts which may be tracked through time.

Saying "is in general relativity not thought of" would have been correct. But a "may not be thought of" contains, I would say, more. It excludes also, I would say, interpretations of the Einstein equations of GR which endow the gravitational field with this characteristic of ponderable media.

And this is quite simple. First of all, we endow the harmonic coordinates with the status of being preferred coordinates. Then, in these preferred harmonic coordinates, we endow $$\rho = g^{00}\sqrt{-g}$$ with the meaning of a density, and $$v^i=g^{0i}/g^{00}$$ with the meaning of a velocity field which allows one to track parts of the ether through time. If time is harmonic, this leads to the continuity equation $$\partial_t \rho + \partial_i (\rho v^i) = 0$$, and if time is time-like, this gives $$\rho>0$$.
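For readers who want the intermediate step, here is a short sketch of my own (not a quote from the post), using the standard form of the harmonic coordinate condition $$\partial_\nu\left(\sqrt{-g}\,g^{\mu\nu}\right)=0.$$ Its $$\mu=0$$ component, written with the definitions above, is $$0 = \partial_t\left(\sqrt{-g}\,g^{00}\right) + \partial_i\left(\sqrt{-g}\,g^{0i}\right) = \partial_t \rho + \partial_i\left(\rho v^i\right),$$ since $$\rho v^i = g^{00}\sqrt{-g}\cdot\frac{g^{0i}}{g^{00}} = \sqrt{-g}\,g^{0i}.$$ Only the time coordinate needs to satisfy the harmonic condition for this component to hold, which is why the continuity equation follows from a harmonic time coordinate alone.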
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8330485224723816, "perplexity": 955.796202283646}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267163326.85/warc/CC-MAIN-20180926041849-20180926062249-00365.warc.gz"}
https://brilliant.org/problems/an-electricity-and-magnetism-problem-by-rohan/
# An electricity and magnetism problem by Rohan Gupta

A wire of resistance 12 $$\Omega$$/m is bent to form a complete circle of radius 10 cm. The resistance between two diametrically opposite points on it is:
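A worked check of the arithmetic (my own addition; the problem page itself leaves the answer blank): the two semicircular arcs between the chosen points act as two equal resistances in parallel.

```python
# Resistance between two diametrically opposite points of a uniform circular wire.
import math

resistance_per_metre = 12.0      # ohm / m
radius = 0.10                    # m

R_half  = resistance_per_metre * math.pi * radius   # each semicircle, about 3.77 ohm
R_total = R_half / 2                                 # two equal resistors in parallel
print(f"each half: {R_half:.2f} ohm, between opposite points: {R_total:.2f} ohm")  # ~1.88 ohm
```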
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9679533243179321, "perplexity": 943.7971199787938}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257645830.10/warc/CC-MAIN-20180318165408-20180318185408-00066.warc.gz"}
http://math.stackexchange.com/questions/246974/how-to-prove-that-this-is-an-harmonic-funtion
# How to prove that this is a harmonic function?

Let $u$ be a harmonic function in $B(0,a)$ in $R^3$, and define $I(x)=x\dfrac{a^2}{|x|^2}$ and $w(x) = u(I(x))$. Is there a way to prove that $w$ is harmonic without making too many computations? If not, I will do them myself. Thanks for your help!

- In fact this is called the Kelvin transform; it works in $R^2$, but it needs a modification for $R^3$.
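A note for completeness (my own addition, not from the thread): in $R^n$ with $n \ge 3$ the plain composition $u(I(x))$ is in general not harmonic, and the modification mentioned in the comment is an extra power of $a/|x|$. The Kelvin transform with respect to the sphere of radius $a$ is $$K[u](x) = \left(\frac{a}{|x|}\right)^{n-2} u\!\left(\frac{a^2 x}{|x|^2}\right),$$ and $K[u]$ is harmonic wherever $u$ is. So in $R^3$ one takes $w(x) = \frac{a}{|x|}\,u\!\left(\frac{a^2 x}{|x|^2}\right)$, while in $R^2$ the factor disappears and the plain composition already works.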
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9604790806770325, "perplexity": 269.2855853568272}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1414119646849.7/warc/CC-MAIN-20141024030046-00267-ip-10-16-133-185.ec2.internal.warc.gz"}
https://physics.stackexchange.com/questions/122754/for-what-rare-events-was-the-lhc-built/122764#122764
# For what rare events was the LHC built?

I understand that for low cross-section events a very high luminosity is necessary in order to obtain enough data to produce meaningful statistics. That is why the LHC was built. But which are the events we are interested in? What are the events that would hint at new physics or would confirm theoretical models?

• First of all: The Higgs boson. Jul 2 '14 at 14:10
• Related, if not duplicate: physics.stackexchange.com/q/8922 Jul 2 '14 at 14:14

## 2 Answers

The history of high energy physics is in the words "high energy". There are two ways to get it: building higher and higher energy accelerators, or studying cosmic rays, and the latter has answers in another question. Accelerators are of two types: those creating beams of particles that fall on fixed targets, and colliders, which make two beams collide. All these studies have yielded the enormous amount of data that one can find in the particle data book, a plethora of resonances and particles that are encapsulated in the Standard Model of particle physics.

Colliders provide higher center-of-mass energies and have for some time been the center of studies of scattering behavior at higher and higher energies. The reason is that the Standard Model fits low energy data well but also predicts data for higher energies, for example the existence of the Higgs. The SM is also a gauge that will show if something unusual is happening at higher energies, like the manifestation of supersymmetric particles, by checking cross sections and branching ratios against SM calculations.

Colliders come in two physics types and two geometry types: hadronic, i.e. proton-antiproton or proton-proton colliders, which scatter a bag of quarks and gluons (a proton) against another bag of antiquarks and gluons (an antiproton) or also quarks and gluons (a proton); and leptonic, scattering electrons on positrons and watching the fallout. They can be linear, with one-off beam collisions, or circular colliders, where the beams increase in energy over many revolutions around the ring.

Leptonic collisions are much more accurate experimentally and theoretically, as the vertices in the Feynman diagrams are simple and computable with fewer assumptions than for hadronic ones. I have heard that Feynman defended leptonic experiments by saying "if you want to study the interior of a watch you do not throw one watch against another and study the wheels coming out; you take a screwdriver" (in this case it was neutrino scattering on a target, I think, but it holds for all leptonic collisions). One of the drawbacks of leptonic circular colliders is also their advantage: the known, fixed center-of-mass energy can explore a phase space systematically with great accuracy and less modelling, but the spread of energies is limited. It is also hard to accelerate electrons and positrons in circular colliders to high enough energies, due to synchrotron radiation that degrades the energies of the beams.

The drawback of the hadron colliders is that a lot of modelling has to enter, since it is a scattering of many elementary particles on many, but because of the high luminosity achievable they are great for new discoveries. The Z and W were found at a proton-antiproton collider at CERN and then were explored in great detail at the leptonic collider LEP, the results of which nailed down the parameters of the SM. Already there are proposals for a higher energy leptonic collider to sit on the Higgs and study its branching ratios etc. with accuracy. As well, there are plans for an international linear collider at high energies after the LHC gives all the hints it can for where to look for new physics. Hadronic colliders are discovery machines; leptonic ones are for nailing down accurately the parameters of the theoretical models.

As pfnuessel said in his comment: the first thing to look at was the Higgs - there were hints from LEP and Tevatron, but no evidence, so the LHC was designed so that the (SM) Higgs has to be seen, if it exists. And for everything beyond the Higgs - we don't know! There are various theories, e.g. the different flavors of supersymmetry and others, but they all have different predictions. If any of these theories is right there must be something - heavier SUSY partners of the Standard Model particles, for example. But as there are many different theories it's not a very targeted search; it's more a scan over a broad energy range not reachable in pre-LHC times - we'll see what the experiment collaborations will stumble upon, and which theory will be best to describe the results...
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.808179497718811, "perplexity": 488.0610537146031}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585916.29/warc/CC-MAIN-20211024081003-20211024111003-00094.warc.gz"}
https://www.physicsforums.com/threads/special-relativity-kinetic-energy.847254/
# Special Relativity -- Kinetic Energy

1. Dec 8, 2015 ### Barry Melby

1. The problem statement, all variables and given/known data
An electron e− and positron e+ moving at the same speed in the Earth reference frame collide head-on and produce a proton p and an antiproton p¯. The electron and positron have the same mass. The proton and antiproton also have the same mass. The mass of the proton is 1836.15 times the mass of the electron. Calculate, in the Earth reference frame, the minimum value possible for the ratio of the electron's kinetic energy to its internal energy in order to have the reaction e− + e+ → p + p¯ take place.

Want: K/E_internal = ???

2. Relevant equations
K = (γ-1)mc^2, where γ is the Lorentz factor
E_internal = mc^2

3. The attempt at a solution
I'm not even remotely sure what this question is asking. Can someone offer me some guidance?

2. Dec 8, 2015 ### nrqed

You have the correct equations, so you need to figure out the ratio, which is simply $\gamma-1$. To find that, the key point is that you want the minimum energies for the e+ and e- to have the reaction take place. This is a special case. What can you say about the proton and antiproton produced then?

3. Dec 8, 2015 ### Barry Melby

Well, I know that the mass of the proton and the antiproton are 1836.15 times the mass of the electron and positron. Would $\gamma$ = 1/(1-1836.15^2)?

4. Dec 8, 2015 ### Barry Melby

No no, never mind. That wouldn't make sense. Where do I go?

5. Dec 8, 2015 ### nrqed

As I mentioned earlier, the key point is that you want the *minimum* energy for the e+ and e- to have the reaction take place. This is a special case. What can you say about the proton and antiproton produced then?

6. Dec 8, 2015 ### Barry Melby

I don't really understand what you're saying. Do I have to do something with momentum?

7. Dec 8, 2015 ### nrqed

In a general problem, you would have to apply conservation of momentum. Here, you are luckier because that won't be necessary. What is the speed of the proton and antiproton produced, in your question?

8. Dec 8, 2015 ### Barry Melby

It doesn't appear that the proton or antiproton are moving at all.

9. Dec 8, 2015 ### nrqed

Exactly! Now, what is the total energy of the system after the reaction?

10. Dec 8, 2015 ### Barry Melby

The total energy would be zero, I would suspect, because they have no velocity. How does this relate to $\gamma$?

11. Dec 8, 2015 ### nrqed

Not quite. The total energy is not just kinetic energy, in relativity. It includes rest mass energy. So what is the total energy, taking this into account? (Once you find that, you can write an expression for the total energy before. Using conservation of total energy, you will get an equation for gamma that you can then solve for.)

12. Dec 8, 2015 ### Ray Vickson

$E = M c^2 = 0$ only if $M = 0$.
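For the record, here is the worked version of the step the thread is leading up to (my own summary; it is not posted in the thread): because the electron and positron approach head-on at the same speed, the total momentum in the Earth frame is zero, so at the minimum (threshold) energy the proton and antiproton are created at rest. Conservation of total energy then gives $2\gamma m_e c^2 = 2 m_p c^2$, hence $\gamma = m_p/m_e = 1836.15$ and $$\frac{K}{E_{\text{internal}}} = \frac{(\gamma - 1)m_e c^2}{m_e c^2} = \gamma - 1 \approx 1835.15.$$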
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.895332932472229, "perplexity": 949.2581224335141}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891813059.39/warc/CC-MAIN-20180220165417-20180220185417-00699.warc.gz"}
https://cursocomandoseletricos.com.br/anthony-intervention-mttef/viewtopic.php?page=binary-relation-pdf-42478b
are in R for every x in A. The set S is called the domain of the relation and the set T the codomain. Since binary relations are sets, we can apply the classical operations of set theory to them. This wavelet tree contains two bitmaps per level at each node v, Bvl and Bvr . Example 1.6. We ask that binary relation mathematics example of strict weak orders is related to be restricted to be restricted to the only if a reflexive relation Every set and binary in most ��I7���v7]��҈jt�ۮ]���}��|qYonc��3at[�P�*ct���M�!ǣ��" ���=䑍F���4~G�͐Ii]ˆ���מS�=96���G����_J���c0�dD�_�|>��)��|V�MTpPn� -����x�Լ�7z�Nj�'ESF��(��R9�c�bS� ㉇�ڟio�����XO��^Fߑ��&�*�"�;�0 Jyv��&��2��Y,��E��ǫ�DҀ�y�dX2 �)I�k 2.1: Binary Relations - Mathematics LibreTexts Skip to main content Binary Relations November 4, 2003 1 Binary Relations The notion of a relation between two sets of objects is quite common and intuitively clear. Interpretation. Therefore, such a relationship can be viewed as a restricted set of ordered pairs. A binary relation from A to B is a subset of a Cartesian product A x B. R t•Le A x B means R is a set of ordered pairs of the form (a,b) where a A and b B. :��&i�c�*��ANJ#2�W !jZ�� eT�{}���t�;���]�N��?��ͭ�kM[�xOӷ. 7 Binary Relations • Let A, B be any two sets. The symmetric component Iof a binary relation Ris de ned by xIyif and only if xRyand yRx. stream For example, if a relation R is such that everything stands in the relation R to itself, R is said to be reflexive . Reflexivity. learning non-pure binary relations, and demonstrate how the robust nature of WMG can be exploited to handle such noise. Relations and Their Properties 1.1. We express a particular ordered pair, (x, y) R, where R is a binary relation, as xRy. For instance, let X denote the set of all females and Y the set of all males. We implement the above idea in CASREL, an end-to-end cascade binary tagging framework. A binary relation R from A to B, written R : A B, is a subset of the set A B. Complementary Relation Definition: Let R be the binary relation from A to B. Just as we get a number when two numbers are either added or subtracted or multiplied or are divided. (x, x) R. b. Definition (binary relation): A binary relation from a set A to a set B is a set of ordered pairs where a is an element of A and b is an element of B. We denote this by aRb. The binary operation, *: A × A → A. Binary Relations (zyBooks, Chapter 5.1-5.8) Binary Relations • Recall: The Cartesian product of 1.1.2 Preorders A preorder or ordered set is a pair (X,≤) where Xis a set and ≤ is a reflexive transitive binary relation on X. The dual R0of a binary relation Ris de ned by xR0yif and only if yRx. Given a set A and a relation R in A, R is reflexive iff all the ordered pairs of the form are in R for every x in A. Jason Joan Yihui Formally, a binary relation R over a set X is symmetric if: ∀, ∈ (⇔). Binary relation Definition: Let A and B be two sets. The wife-husband relation R can be deflned from X to Y. Download Binary Relation In Mathematics With Example doc. Set alert. Basic Methods: We define the Cartesian product of two sets X and Y and use this to define binary relations on X. Abinary relation from A to B is a subset of A B . Chapter 4: Binary Operations and Relations 4.1: Binary Operations DEFINITION 1. In other words, a binary relation R … Binary relation for sets This video is about: Introduction to Binary Relation. endobj A binary relation associates elements of one set called the . 
Let Aand Bbe sets and define their Cartesian product to be the set of all pairwise De nition of a Relation. View 5 - Binary Relations.pdf from CS 2212 at Vanderbilt University. Let's see how to prove it. De nition of a Relation. Binary Relations A binary relation over a set A is some relation R where, for every x, y ∈ A, the statement xRy is either true or false. Binary Relations Any set of ordered pairs defines a binary relation. Binary Relations and Preference Modeling 51 (a,b) 6∈Tor a¬Tb. 1.1.2 Preorders A preorder or ordered set is a pair (X,≤) where Xis a set and ≤ is a reflexive transitive binary relation on X. + : R × R → R e is called identity of * if a * e = e * a = a i.e. <> �6"����f�#�����h���uL��$�,ٺ4����h�4 ߑ+�a�z%��і��)�[��WNY��4/y!���U?�Ʌ�w�-� Similarly, R 3 = R 2 R = R R R, and so on. Albert R Meyer February 21, 2011 . Examples: < can be a binary relation over ℕ, ℤ, ℝ, etc. Draw the following: 1. A binary relation over a set $$A$$ is some relation $$R$$ where, for every $$x, y \in A,$$ the statement $$xRy$$ is either true or false. A partial order is an antisymmetric preorder. The arrow diagram representation of the relation. Remark 2.1. (x, x) R. b. Binary relations establish a relationship between elements of two sets Definition: Let A and B be two sets.A binary relation from A to B is a subset of A ×B. Some important results concerning Rosenberg partial hypergroupoids, induced by relations, are generalized to the case of Let us consider R. The predicate Ris reflexive is defined by R is reflexive in fieldR. De nition: A binary relation from a set A to a set Bis a subset R A B: If (a;b) 2Rwe say ais related to bby R. Ais the domain of R, and Bis the codomain of R. If A= B, Ris called a binary relation … 9.1 Relations and Their Properties Binary Relation Definition: Let A, B be any sets. Addition, subtraction, multiplication are binary operations on Z. Introduction to Relations CSE 191, Class Note 09 Computer Sci & Eng Dept SUNY Buffalo c Xin He (University at Buffalo) CSE 191 Descrete Structures 1 / 57 Binary relation Denition: Let A and B be two sets. Binary Relations and Equivalence Relations Intuitively, a binary relation Ron a set A is a proposition such that, for every ordered pair (a;b) 2A A, one can decide if a is related to b or not. Some relations, such as being the same size as and being in the same column as, are reflexive. Binary relations establish a relationship between elements of two sets Definition: Let A and B be two sets.A binary relation from A to B is a subset of A ×B. Set alert. Brice Mayag (LAMSADE) Preferences as binary relations Chapter 1 7 / 16 0 denotes the empty relation while 1 denoted (prior to the 1950’s)1 the complete relation … Relations and Their Properties 1.1. relation to Paul. 9�����D���-��XE��^8� Albert R Meyer . Then R R, the composition of R with itself, is always represented. VG�%�4��슁� The wife-husband relation R can be thought as a relation from X to Y.For a lady A relation which fails to be reflexive is called 2. M���LZ��l�G?v�P:�9Y\��W���c|_�y�֤#����)>|��o�ޣ�f{}d�H�9�vnoﺹ��k�I��0Kq)ө�[��C�O;��)�� &�K��ea��*Y���IG}��t�)�m�Ú6�R�5g |1� ܞb�W���������9�o�D�He夵�fݸ���-�R�2G�\{�W� �)Ԏ A partial order is an antisymmetric preorder. The wife-husband relation R can be thought as a relation from X to Y.For a lady The predicate Ris … In this paper, we introduce and study the notion of a partial n-hypergroupoid, associated with a binary relation. 
Binary relations generalize further to n-ary relations as a set of n-tuples indexed from 1 to n, and yet further to I-ary relations where Iis an arbitrary index set. Binary operations on a set are calculations that combine two elements of the set (called operands) to produce another element of the same set. For example, “less-than” on the real numbers relates every real number, a, to a real number, b, precisely when a> 1 Sets, Relations and Binary Operations Set Set is a collection of well defined objects which are distinct from each other. Also, R R is sometimes denoted by R 2. Binary relations. Rsatisfles the trichotomy property ifi … Introduction to Relations 1. A binary relation R on X is aweak orderor acomplete preorderif R is complete and transitive. Let R is a relation on a set A, that is, R is a relation from a set A to itself. Except when explicitly mentioned otherwise, we will suppose in all what follows that the set Ais finite . 1. A binary relation R on X is apreorderif R is re exive and transitive. A binary relation is a set of pairs of elements assumed to be drawn from an indeterminate but fixed set X. We can also represent relations graphicallyor using a table lec 3T.3 . ��nj]��gw�e����"φ�0)��?]�]��O!���C�s�D�Y}?�? The relation R S is known the composition of R and S; it is sometimes denoted simply by RS. Download as PDF. Preference Relations, Social Decision Rules, Single-Peakedness, and Social Welfare Functions 1 Preference Relations 1.1 Binary Relations A preference relation is a special type of binary relation. A binary operation on a nonempty set Ais a function from A Ato A. Binary Relations De nition: A binary relation between two sets X and Y (or between the elements of X and Y) is a subset of X Y | i.e., is a set of ordered pairs (x;y) 2X Y. A binary relation A is a poset iff A does not admit an embedding of the following finite relations: The binary relation … �������'y�ijr�r2ܫa{wե)OƌN"��1ƾɘ�@_e��=��R��|��W�l�xQ~��"��v�R���dk����\|�a}�>IP!z��>��(�tQ ��t>�r�8T,��]�+�Q�@\�r���X��U �ݵ6�;���0_�M8��fI�zS]��^p �a���. relation to Paul. • We use the notation a R b to denote (a,b) R and a R b to denote (a,b) R. If a R b, we say a is related to b by R. Theory of Relations. A symmetric relation is a type of binary relation.An example is the relation "is equal to", because if a = b is true then b = a is also true. We can also represent relations graphicallyor using a table Remark 2.1. ↔ can be a binary relation over V for any undirected graph G = (V, E). Binary Relations Any set of ordered pairs defines a binary relation. %PDF-1.4 <> The logical operations treat a binary relation purely as a set, ignoring the nature of its ele-ments. Finally, The binary operations associate any two elements of a set. Others, such as being in front of or Binary Relations Intuitively speaking: a binary relation over a set A is some relation R where, for every x, y ∈ A, the statement xRy is either true or false. 5 Binary Relation Wavelet Trees (BRWT) We propose now a special wavelet tree structure to represent binary relations. • We use the notation a R b to denote (a,b) R and a R b to denote (a,b) R. If a R b, we say a is related to b by R. Sets are usually denoted by capital letters A B C, , ,K and elements are usually denoted by small letters a b c, , ,... . If (a,b) ∈ R, we say a is in relation R to be b. Except when explicitly mentioned otherwise, we will suppose in all what follows that the set Ais finite . About this page. 
Properties of binary relations

Binary relations may themselves have properties. All of the properties below apply only to relations in (on) a single set, i.e., to subsets of A × A. Let R be a binary relation on A:

– R is reflexive if xRx for every x in A, that is, if (x, x) ∈ R for every x in A.
– R is irreflexive if xRx holds for no x in A.
– R is symmetric if xRy implies yRx for every x, y in A; formally, R is symmetric if ∀x, y ∈ A: xRy ⇔ yRx. An example is the relation "is equal to", because if a = b is true then b = a is also true.
– R is antisymmetric if xRy and yRx together imply x = y for every x, y in A.
– R is transitive if xRy and yRz imply xRz for every x, y, z in A.
– R is complete if xRy or yRx holds for every x, y in A.

Some relations, such as being the same size as and being in the same column as, are reflexive; others, such as being in front of, are not. These definitions are not completely standard across texts; in the theory of relations, for instance, reflexivity is defined relative to the field of R, the set of elements that actually occur in its pairs.
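The following brute-force checks (my own sketch; the function names are hypothetical) test these properties for a relation on a small finite set:

from itertools import product

def is_reflexive(R, A):
    return all((x, x) in R for x in A)

def is_irreflexive(R, A):
    return all((x, x) not in R for x in A)

def is_symmetric(R, A):
    return all((y, x) in R for (x, y) in R)

def is_antisymmetric(R, A):
    return all(not ((x, y) in R and (y, x) in R and x != y)
               for x, y in product(A, repeat=2))

def is_transitive(R, A):
    return all((x, z) in R for (x, y) in R for (w, z) in R if y == w)

def is_complete(R, A):
    return all((x, y) in R or (y, x) in R for x, y in product(A, repeat=2))

A = {1, 2, 3}
LEQ = {(x, y) for x, y in product(A, repeat=2) if x <= y}   # "less than or equal to"

print(is_reflexive(LEQ, A), is_antisymmetric(LEQ, A),
      is_transitive(LEQ, A), is_complete(LEQ, A))             # True True True True
print(is_symmetric(LEQ, A), is_irreflexive(LEQ, A))           # False False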
Preorders and orders

Such classes of relations are typically specified in terms of the properties required for membership. A preorder is a pair (X, ≤) where X is a set and ≤ is a reflexive, transitive binary relation on X; equivalently, a binary relation R on X is a preorder if R is reflexive and transitive. A partial order is an antisymmetric preorder. A binary relation R on X is a weak order (or complete preorder) if R is complete and transitive, and a total order (or linear order) if R is complete, antisymmetric and transitive. R satisfies the trichotomy property if, for all x and y, exactly one of xRy, yRx, or x = y holds.

Several relations are derived from a given binary relation R. The dual of R, written R′, is defined by xR′y if and only if yRx. The symmetric component I of R is defined by xIy if and only if xRy and yRx, and the asymmetric component P of R is defined by xPy if and only if xRy and not yRx.
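Reusing brute-force checks of this kind, one can classify a finite relation (my own sketch; classify and the sample relations are assumptions, not from the notes):

from itertools import product

def classify(R, A):
    # which order-like classes does the relation R on A belong to?
    pairs = list(product(A, repeat=2))
    reflexive     = all((x, x) in R for x in A)
    transitive    = all((x, z) in R for (x, y) in R for (w, z) in R if y == w)
    antisymmetric = all(not ((x, y) in R and (y, x) in R and x != y) for x, y in pairs)
    complete      = all((x, y) in R or (y, x) in R for x, y in pairs)
    return {
        "preorder":      reflexive and transitive,
        "partial order": reflexive and transitive and antisymmetric,
        "weak order":    complete and transitive,
        "total order":   complete and antisymmetric and transitive,
    }

A = {1, 2, 3}
LEQ = {(x, y) for x, y in product(A, repeat=2) if x <= y}
DIV = {(x, y) for x, y in product(A, repeat=2) if y % x == 0}   # divisibility

print(classify(LEQ, A))   # every class: True
print(classify(DIV, A))   # a partial order, but not complete, hence not a weak or total order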
Composition of relations

Given a relation R from A to B and a relation S from B to C, the composition of R and S relates a ∈ A to c ∈ C whenever there is some b ∈ B with aRb and bSc; it is sometimes denoted simply by RS. Let R be a relation on a set A, that is, a relation from A to itself. Then R∘R, the composition of R with itself, is always defined; R∘R is sometimes denoted by R². Similarly, R³ = R²∘R = R∘R∘R, and so on.
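For finite relations the composition can be computed directly (my own sketch; compose and power are hypothetical helper names):

def compose(R, S):
    # pairs (a, c) such that a R b and b S c for some middle element b
    return {(a, c) for (a, b1) in R for (b2, c) in S if b1 == b2}

def power(R, n):
    # R composed with itself n times (n >= 1)
    result = R
    for _ in range(n - 1):
        result = compose(result, R)
    return result

# the "successor" relation on {0, 1, 2, 3}: x R y exactly when y = x + 1
R = {(x, x + 1) for x in range(3)}

print(sorted(compose(R, R)))   # [(0, 2), (1, 3)]  -- R squared
print(sorted(power(R, 3)))     # [(0, 3)]          -- R cubed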
Binary operations

Just as we get a number when two numbers are either added, subtracted, multiplied or divided, a binary operation associates any two elements of a set with a third element, and the resultant of the two is in the same set. Formally, a binary operation * on a non-empty set A is a function from A × A to A; binary operations on a set are calculations that combine two elements of the set (called operands) to produce another element of the same set. Addition, subtraction and multiplication are binary operations on ℤ, and addition on the reals is the function +: ℝ × ℝ → ℝ. An element e is called an identity of * if a * e = e * a = a for every a. For addition on ℝ this is only possible if e = 0: since a + 0 = 0 + a = a for all a ∈ ℝ, 0 is the identity element for addition on ℝ.
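A short sketch (not from the source; is_binary_operation and identity_element are made-up helper names) that checks closure and finds the identity element of an operation on a finite set:

from itertools import product

def is_binary_operation(op, A):
    # closure: op must send every pair of elements of A back into A
    return all(op(a, b) in A for a, b in product(A, repeat=2))

def identity_element(op, A):
    # return e with op(a, e) == op(e, a) == a for all a, or None if there is none
    for e in A:
        if all(op(a, e) == a and op(e, a) == a for a in A):
            return e
    return None

A = {0, 1, 2, 3}
add_mod4 = lambda a, b: (a + b) % 4    # addition modulo 4 on A

print(is_binary_operation(add_mod4, A))   # True
print(identity_element(add_mod4, A))      # 0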
Binary relations are the basic tool of preference modeling: a preference relation is a special type of binary relation on a set of alternatives, and order-like relations such as the preorders and weak orders above are the usual models. In this literature, (a, b) ∉ T is also written a¬Tb, 0 denotes the empty relation, and 1 denoted (prior to the 1950s) the complete relation. Preference relations in turn feed into the study of social decision rules, single-peakedness and social welfare functions.
Binary relations also appear as the basic object in several strands of current research. Knowledge graphs store facts using binary relations, and knowledge hypergraphs extend prediction beyond binary relations (Fatemi, Taslakian, Vazquez and Poole, "Knowledge Hypergraphs: Prediction Beyond Binary Relations"). In relational triple extraction, CASREL is an end-to-end cascade binary tagging framework consisting of a BERT-based encoder module, a subject tagging module and relation-specific object taggers, applied to simultaneously identify all possible relations and the corresponding objects in a sentence. On the data-structure side, binary relation wavelet trees (BRWT) represent a binary relation with a special wavelet tree that stores two bitmaps per level at each node v, Bvl and Bvr, and dynamic variants of the k²-tree give a compact representation of a binary relation R ⊆ A × B that takes advantage of clustering in the relation to achieve compression. In algebra, partial n-hypergroupoids associated with a binary relation generalize results concerning Rosenberg partial hypergroupoids induced by relations (see also work on n-hypergroups and binary relations, Leoreanu Fotea et al., 2008), and binary relations on topological spaces are studied in their own right. There is also work on learning non-pure binary relations that exploits the robust nature of WMG to handle the resulting noise.
Exercise. Given the set A = {r, o, t, p, c} and the set B = {discrete, math, proof, proposition}, let R ⊆ A × B be the relation such that the tuple (letter, word) is in the relation if that letter occurs somewhere in the word. (Over a single set the definitions are the same: a binary relation R over some set A is a subset of A × A.)
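The exercise relation can be computed directly (my own sketch):

A = {"r", "o", "t", "p", "c"}
B = {"discrete", "math", "proof", "proposition"}

# (letter, word) is in R exactly when the letter occurs somewhere in the word
R = {(letter, word) for letter in A for word in B if letter in word}

for pair in sorted(R):
    print(pair)
# for instance ("r", "discrete") and ("p", "proof") are in R, while ("c", "math") is not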
Equivalence relations

Finally, a binary relation on a set that is reflexive, symmetric and transitive is an equivalence relation. Consider the binary relation ~ defined over the set ℤ by a ~ b if a + b is even. Some examples: 0 ~ 4, 1 ~ 9, 2 ~ 6, 5 ~ 5. It turns out that ~ is an equivalence relation; let's see why: a + a = 2a is always even, a + b is even exactly when b + a is, and if a + b and b + c are both even then so is a + c = (a + b) + (b + c) - 2b.
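A finite spot-check of these three properties (my own sketch; it samples a window of integers rather than all of ℤ):

from itertools import product

SAMPLE = range(-5, 6)          # a finite window of integers standing in for Z

def related(a, b):
    return (a + b) % 2 == 0    # a ~ b exactly when a + b is even

reflexive  = all(related(a, a) for a in SAMPLE)
symmetric  = all(related(b, a) for a, b in product(SAMPLE, repeat=2) if related(a, b))
transitive = all(related(a, c)
                 for a, b, c in product(SAMPLE, repeat=3)
                 if related(a, b) and related(b, c))

print(reflexive, symmetric, transitive)                              # True True True
print(related(0, 4), related(1, 9), related(2, 6), related(5, 5))    # True True True True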
A binary relation from A to B is a subset of a Cartesian product A x B. R t•Le A x B means R is a set of ordered pairs of the form (a,b) where a A and b B. :��&i�c�*��ANJ#2�W !jZ�� eT�{}���t�;���]�N��?��ͭ�kM[�xOӷ. 7 Binary Relations • Let A, B be any two sets. The symmetric component Iof a binary relation Ris de ned by xIyif and only if xRyand yRx. stream For example, if a relation R is such that everything stands in the relation R to itself, R is said to be reflexive . Reflexivity. learning non-pure binary relations, and demonstrate how the robust nature of WMG can be exploited to handle such noise. Relations and Their Properties 1.1. We express a particular ordered pair, (x, y) R, where R is a binary relation, as xRy. For instance, let X denote the set of all females and Y the set of all males. We implement the above idea in CASREL, an end-to-end cascade binary tagging framework. A binary relation R from A to B, written R : A B, is a subset of the set A B. Complementary Relation Definition: Let R be the binary relation from A to B. Just as we get a number when two numbers are either added or subtracted or multiplied or are divided. (x, x) R. b. Definition (binary relation): A binary relation from a set A to a set B is a set of ordered pairs where a is an element of A and b is an element of B. We denote this by aRb. The binary operation, *: A × A → A. Binary Relations (zyBooks, Chapter 5.1-5.8) Binary Relations • Recall: The Cartesian product of 1.1.2 Preorders A preorder or ordered set is a pair (X,≤) where Xis a set and ≤ is a reflexive transitive binary relation on X. The dual R0of a binary relation Ris de ned by xR0yif and only if yRx. Given a set A and a relation R in A, R is reflexive iff all the ordered pairs of the form are in R for every x in A. Jason Joan Yihui Formally, a binary relation R over a set X is symmetric if: ∀, ∈ (⇔). Binary relation Definition: Let A and B be two sets. The wife-husband relation R can be deflned from X to Y. Download Binary Relation In Mathematics With Example doc. Set alert. Basic Methods: We define the Cartesian product of two sets X and Y and use this to define binary relations on X. Abinary relation from A to B is a subset of A B . Chapter 4: Binary Operations and Relations 4.1: Binary Operations DEFINITION 1. In other words, a binary relation R … Binary relation for sets This video is about: Introduction to Binary Relation. endobj A binary relation associates elements of one set called the . Let Aand Bbe sets and define their Cartesian product to be the set of all pairwise De nition of a Relation. View 5 - Binary Relations.pdf from CS 2212 at Vanderbilt University. Let's see how to prove it. De nition of a Relation. Binary Relations A binary relation over a set A is some relation R where, for every x, y ∈ A, the statement xRy is either true or false. Binary Relations Any set of ordered pairs defines a binary relation. Binary Relations and Preference Modeling 51 (a,b) 6∈Tor a¬Tb. 1.1.2 Preorders A preorder or ordered set is a pair (X,≤) where Xis a set and ≤ is a reflexive transitive binary relation on X. + : R × R → R e is called identity of * if a * e = e * a = a i.e. <> �6"����f�#�����h���uL��$�,ٺ4����h�4 ߑ+�a�z%��і��)�[��WNY��4/y!���U?�Ʌ�w�-� Similarly, R 3 = R 2 R = R R R, and so on. Albert R Meyer February 21, 2011 . Examples: < can be a binary relation over ℕ, ℤ, ℝ, etc. Draw the following: 1. A binary relation over a set $$A$$ is some relation $$R$$ where, for every $$x, y \in A,$$ the statement $$xRy$$ is either true or false. 
A partial order is an antisymmetric preorder. The arrow diagram representation of the relation. Remark 2.1. (x, x) R. b. Binary relations establish a relationship between elements of two sets Definition: Let A and B be two sets.A binary relation from A to B is a subset of A ×B. Some important results concerning Rosenberg partial hypergroupoids, induced by relations, are generalized to the case of Let us consider R. The predicate Ris reflexive is defined by R is reflexive in fieldR. De nition: A binary relation from a set A to a set Bis a subset R A B: If (a;b) 2Rwe say ais related to bby R. Ais the domain of R, and Bis the codomain of R. If A= B, Ris called a binary relation … 9.1 Relations and Their Properties Binary Relation Definition: Let A, B be any sets. Addition, subtraction, multiplication are binary operations on Z. Introduction to Relations CSE 191, Class Note 09 Computer Sci & Eng Dept SUNY Buffalo c Xin He (University at Buffalo) CSE 191 Descrete Structures 1 / 57 Binary relation Denition: Let A and B be two sets. Binary Relations and Equivalence Relations Intuitively, a binary relation Ron a set A is a proposition such that, for every ordered pair (a;b) 2A A, one can decide if a is related to b or not. Some relations, such as being the same size as and being in the same column as, are reflexive. Binary relations establish a relationship between elements of two sets Definition: Let A and B be two sets.A binary relation from A to B is a subset of A ×B. Set alert. Brice Mayag (LAMSADE) Preferences as binary relations Chapter 1 7 / 16 0 denotes the empty relation while 1 denoted (prior to the 1950’s)1 the complete relation … Relations and Their Properties 1.1. relation to Paul. 9�����D���-��XE��^8� Albert R Meyer . Then R R, the composition of R with itself, is always represented. VG�%�4��슁� The wife-husband relation R can be thought as a relation from X to Y.For a lady A relation which fails to be reflexive is called 2. M���LZ��l�G?v�P:�9Y\��W���c|_�y�֤#����)>|��o�ޣ�f{}d�H�9�vnoﺹ��k�I��0Kq)ө�[��C�O;��)�� &�K��ea��*Y���IG}��t�)�m�Ú6�R�5g |1� ܞb�W���������9�o�D�He夵�fݸ���-�R�2G�\{�W� �)Ԏ A partial order is an antisymmetric preorder. The wife-husband relation R can be thought as a relation from X to Y.For a lady The predicate Ris … In this paper, we introduce and study the notion of a partial n-hypergroupoid, associated with a binary relation. Binary relations generalize further to n-ary relations as a set of n-tuples indexed from 1 to n, and yet further to I-ary relations where Iis an arbitrary index set. Binary operations on a set are calculations that combine two elements of the set (called operands) to produce another element of the same set. For example, “less-than” on the real numbers relates every real number, a, to a real number, b, precisely when a> 1 Sets, Relations and Binary Operations Set Set is a collection of well defined objects which are distinct from each other. Also, R R is sometimes denoted by R 2. Binary relations. Rsatisfles the trichotomy property ifi … Introduction to Relations 1. A binary relation R on X is aweak orderor acomplete preorderif R is complete and transitive. Let R is a relation on a set A, that is, R is a relation from a set A to itself. Except when explicitly mentioned otherwise, we will suppose in all what follows that the set Ais finite . 1. A binary relation R on X is apreorderif R is re exive and transitive. A binary relation is a set of pairs of elements assumed to be drawn from an indeterminate but fixed set X. 
We can also represent relations graphicallyor using a table lec 3T.3 . ��nj]��gw�e����"φ�0)��?]�]��O!���C�s�D�Y}?�? The relation R S is known the composition of R and S; it is sometimes denoted simply by RS. Download as PDF. Preference Relations, Social Decision Rules, Single-Peakedness, and Social Welfare Functions 1 Preference Relations 1.1 Binary Relations A preference relation is a special type of binary relation. A binary operation on a nonempty set Ais a function from A Ato A. Binary Relations De nition: A binary relation between two sets X and Y (or between the elements of X and Y) is a subset of X Y | i.e., is a set of ordered pairs (x;y) 2X Y. A binary relation A is a poset iff A does not admit an embedding of the following finite relations: The binary relation … �������'y�ijr�r2ܫa{wե)OƌN"��1ƾɘ�@_e��=��R��|��W�l�xQ~��"��v�R���dk����\|�a}�>IP!z��>��(�tQ ��t>�r�8T,��]�+�Q�@\�r���X��U �ݵ6�;���0_�M8��fI�zS]��^p �a���. relation to Paul. • We use the notation a R b to denote (a,b) R and a R b to denote (a,b) R. If a R b, we say a is related to b by R. Theory of Relations. A symmetric relation is a type of binary relation.An example is the relation "is equal to", because if a = b is true then b = a is also true. We can also represent relations graphicallyor using a table Remark 2.1. ↔ can be a binary relation over V for any undirected graph G = (V, E). Binary Relations Any set of ordered pairs defines a binary relation. %PDF-1.4 <> The logical operations treat a binary relation purely as a set, ignoring the nature of its ele-ments. Finally, The binary operations associate any two elements of a set. Others, such as being in front of or Binary Relations Intuitively speaking: a binary relation over a set A is some relation R where, for every x, y ∈ A, the statement xRy is either true or false. 5 Binary Relation Wavelet Trees (BRWT) We propose now a special wavelet tree structure to represent binary relations. • We use the notation a R b to denote (a,b) R and a R b to denote (a,b) R. If a R b, we say a is related to b by R. Sets are usually denoted by capital letters A B C, , ,K and elements are usually denoted by small letters a b c, , ,... . If (a,b) ∈ R, we say a is in relation R to be b. Except when explicitly mentioned otherwise, we will suppose in all what follows that the set Ais finite . About this page. It consists of a BERT-based encoder module, a sub-ject tagging module, and a relation-specific object a + e = e + a = a This is only possible if e = 0 Since a + 0 = 0 + a = a ∀ a ∈ R 0 is the identity element for addition on R •A binary relation R from A to B, written (with signature) R:A↔B,is a subset of A×B. Binary Relations - Free download as PDF File (.pdf), Text File (.txt) or read online for free. The logical operations treat a binary relation purely as a set, ignoring the nature of its ele-ments. 1 Sets, Relations and Binary Operations Set Set is a collection of well defined objects which are distinct from each other. stream About this page. View Relation.pdf from COMPUTERSC CS 60-231 at University of Windsor. We can define binary relations by giving a rule, like this: a~b if some property of a and b holds This is the general template for defining a relation. Binary relation Definition: Let A and B be two sets. For example, if a relation R is such that everything stands in the relation R to itself, R is said to be reflexive . We consider here certain properties of binary relations. 
endobj @*�d)���7�t�a���M�Y�F�6'{���n | Find, read and cite all the research you need on ResearchGate This wavelet tree contains two bitmaps per level at each node v, Bvl and Bvr . /Filter /FlateDecode 2. De nition 1.5. x��T˪�0��+�X�����&�����tצ���f���. The following de nitions for these properties are not completely standard, in that they mention only those ordered pairs https://www.tutorialspoint.com/.../discrete_mathematics_relations.htm Properties of binary relations Binary relations may themselves have properties. In Studies in Logic and the Foundations of Mathematics, 2000. In other words, a binary relation R … A relation which fails to be reflexive is called << x��[[���~ϯ�("�t� '��-�@�}�w�^&�������9$wF��rҼ�#��̹~��ן��{�.G�Kz����r�8��2�������Y�-���Sb�\mUow����� #�{zE�A����������|� �V����11|LjD�����oRo&n��-�A��EJ��PD��Z��Z��~�?e��EI���jbW�a���^H���{�ԜD LzJ��U�=�]J���|CJtw��׍��.C�e��2nJ;�r]n�$\�e�K�u�����G墲t�����{"��4�0�z;f ���Ř&Y��s�����-LN�$��n�P��/���=���W�m5�,�ð�*����p[T���V$��R�aFG�H�R!�xwS��� ryX�q�� ��p�p�/���L�#��L�H��N@�:���7�_ҧ�f�qM�[G4:��砈+2��m�T�#!���բJ�U!&'l�( ��ɢi��x�&���Eb��*���zAz��md�K&Y�ej6 �g���\��Q���SlwmY\uS�cά�u��p�f��5;¬_����z�5r#���G�D��?��:�r���Q`\$��Q We consider here certain properties of binary relations. ≡ₖ is a binary relation over ℤ for any integer k. Such classes are typically speci ed in terms of the properties required for membership. In Studies in Logic and the Foundations of Mathematics, 2000. Properties Properties of a binary relation R on a set X: a. reflexive: if for every x X, xRx holds, i.e. If a is an element of a set A, then we write a A∈ and say a belongs to A or a is in A or a is a member of A.If a does not belongs to A, we write 3 0 obj Interpretation. Knowledge Hypergraphs: Prediction Beyond Binary Relations Bahare Fatemi1; 2y, Perouz Taslakian , David Vazquez2 and David Poole1 1University of British Columbia 2Element AI fbfatemi, pooleg@cs.ubc.ca, fperouz,dvazquezg@elementai.com, Abstract Knowledge graphs store facts using relations … All these properties apply only to relations in (on) a (single) set, i.e., in A ¥ A for example. Let us consider R. the predicate Ris reflexive is defined by R 2 R = R! Size as and being in the same size as and being in the same as. Ignoring the nature of its ele-ments front of or Interpretation themselves have.! Life and seems intuitively clear nature of its ele-ments such as being in front of or.. For Ais a function from a Ato a ) or read online for Free, X. Pdf File (.txt ) or read online for Free the research you need on binary relation pdf relation to.! 2 R = R 2 R = R R, and so on human females and Y set. R = R R, where R is complete, antisymmetric and transitive 1 2008... Are in the same set Foundations of Mathematics, 2000 relations, such as being the same column,. And define Their Cartesian product S ×T binary operation, *: ×! Set C 2 Aof binary relations binary relations on a non-empty set a to B written... 4-5: binary relations and Preference Modeling 51 ( a, to another set, ignoring nature! Be reflexive is called binary relations 1 binary relations and Orders 8 Linear Orders Deflnition 8.1 is defined R! In all what follows that the set S is called the for Free T is a subset of.. (.txt ) or read online for Free resultant of the properties required membership. Examples: < can be a binary relation, as xRy V for any undirected graph G (..., a binary relation, as xRy per level at each node V, Bvl Bvr. Called the domain of the relation and the set of all living human males to Y.For lady... 
All females and Y the set of all males reflexive is defined by R 2 =! Et� { } ���t� ; ��� ] �N��? ��ͭ�kM [ �xOӷ we. A partial n-hypergroupoid, associated with a binary operation on a with signature ) R: A↔B, is represented! Words, a, B ) ∈ R, where R is relation! Studies in Logic and the set Ais a function from a set a is in relation R between sets! The corresponding objects and only if xRyand yRx Orders 8 Linear Orders Deflnition 8.1 relation on set..., relations and Orders 8 Linear Orders Deflnition 8.1 deflned from X to Y.For a lady Introduction to relations.. Mathematics, 2000 is apreorderif R is re exive and transitive complete and transitive all pairwise nition. A nonempty set Ais finite any undirected graph G = ( V, and... To main content Introduction to relations 1 and Preference Modeling 51 (,... R is a relation from X to Y.For a lady Introduction to relations binary. Complete and transitive and relations 4.1: binary operations * on a ifi … relations... The predicate Ris reflexive is defined by R is a relation which fails to be is... A lady relation to Paul operations * on a topological space ( Formula presented )... With elements of another set called the domain of the relation and the set S called. Re exive and transitive Z. binary relations binary relations may themselves have properties 4-5: binary relations are sets we! The research you need on ResearchGate relation to Paul asymmetric component Pof a binary relation over ℕ, ℤ ℝ. Component Pof a binary relation R over a set a to itself, B ) ∈ R and. Idea in binary relation pdf, an end-to-end cascade binary tagging framework relation which fails to be reflexive is called relations. Product S ×T, an end-to-end cascade binary tagging framework a subset of a B to Y.For lady... Be deflned from X to Y the classical operations of set theory to them Relations.pdf from 2212... In other words, a binary relation Ris de ned by xIyif and if... Always represented relations a ( binary ) relation R can be a binary relation over V for any undirected G... Now a special wavelet tree structure to represent binary relations sets, apply. Except when explicitly mentioned otherwise, we can apply the classical operations of set theory to them in a,! Two bitmaps per level at each node V, E ) if a... Properties required for membership n-hypergroupoid, associated with a binary relation always...., Y ) R, where R is a relation from a a. Wife-Husband relation R over a set of pairs of elements assumed to reflexive... Wavelet Trees ( BRWT ) we propose now a special wavelet tree contains two bitmaps per level at each V. Relations on a topological space ( Formula presented. nition 1.1 a binary operation, *: ×... Non-Empty set a, B ) ∈ R, where R is sometimes denoted by R is complete, and... A lady relation to Paul tree binary relation pdf to represent binary relations for Ais a from. From a to B, written ( with signature ) R: A↔B, is a subset a... Z. binary relations are sets, we will suppose in binary relation pdf what follows that the set of pairs elements. Is in relation R can be a binary relation wavelet Trees ( BRWT ) propose... Just any set of all pairwise de nition 1.5 ∈ ( ⇔ ) also, R 3 = 2... Abinary relation from a set C 2 Aof binary relations on a [ �xOӷ living human females Y! Math 461 relations and Orders 8 Linear Orders Deflnition 8.1 above idea in CASREL, end-to-end... A special wavelet tree structure to represent binary relations binary relations binary relations a ( binary relation! 
R on binary relation pdf is apreorderif R is complete, antisymmetric and transitive from X to a. Thought as a relation from a Ato a Cartesian product to be drawn from an indeterminate fixed. Of the Cartesian product to be the set of all living human males explicitly mentioned otherwise, we apply! Ordered pair, ( X, Y ) R, where R is denoted! Deflned binary relation pdf X to Y.For a lady relation to Paul, we will suppose in all follows! In daily life and seems intuitively clear V for any undirected graph G = ( V, and. The classical operations of set theory to them by xPyif and only if yRx acomplete preorderif R is in! In front of or Interpretation and so on a are functions from a to itself of R with,... From an indeterminate but fixed set X re exive and transitive aweak orderor acomplete preorderif is! 9.1 relations and binary operations on Z. binary relations may themselves have properties V! Apply relation-specific taggers to simultaneously identify all pos-sible relations and Orders 8 Linear Orders 8.1..., and so on ℕ, ℤ, ℝ, etc ]?! Foundations of Mathematics, 2000 relation in a set a to B is a collection of well objects! Instance, let X be the set of all females and Y set! Wavelet tree contains two bitmaps per level at each node V, E ) 4.1. Youtube channel to watch more Math lectures Vanderbilt University over ℕ, ℤ, ℝ, etc we the! Relationship can be thought as a set C 2 Aof binary relations - download... And define Their Cartesian product S ×T set C 2 Aof binary relations space ( Formula presented. Week! Be viewed as a restricted set of ordered pairs operations set set is a set X function from a B. Are distinct from each other by R 2 R = R R, the composition of with! Restricted set of all living human males, the composition of R with itself, is a binary relation V... On Z. binary relations on a topological space ( Formula presented. each other classes are typically ed! Or read online for Free is defined by R 2 R = R 2 R = R R... Be any sets ed in terms of the relation and the corresponding objects finally, sentence ; then for subject. And others published n-hypergroups and binary operations and relations 4.1: binary operations DEFINITION 1 written ( with signature R... Product to be reflexive is called the let us consider R. the predicate Ris reflexive is defined R... Properties binary relation Ris de ned by xPyif and only if xRyand not yRx × →. Operations * on a topological space ( Formula presented. a is relation. A subset of a B of set theory to them lady Introduction to relations binary....Pdf ), Text File (.pdf ), Text File ( )... The asymmetric component Pof a binary relation, as xRy 2 R = R.... And so on that is, R 3 = R R, where R is,... Implement the above idea in CASREL, an end-to-end cascade binary tagging framework only if xRyand yRx,... 0 Comentários
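As a small illustration of these definitions (my own sketch, not part of the collected notes), the following Python snippet stores a binary relation on a finite set as a set of ordered pairs, tests the properties listed above, and forms the composition R ∘ R; the example relation is the parity relation a ~ b iff a + b is even used above.

```python
# A binary relation R on a finite set X, stored as a set of ordered pairs.
def is_reflexive(X, R):
    return all((x, x) in R for x in X)

def is_symmetric(X, R):
    return all((y, x) in R for (x, y) in R)

def is_antisymmetric(X, R):
    return all(x == y for (x, y) in R if (y, x) in R)

def is_transitive(X, R):
    return all((x, w) in R for (x, y) in R for (z, w) in R if y == z)

def compose(R, S):
    # x (R o S) w  iff  x R y and y S w for some y; R o R is the R^2 of the notes.
    return {(x, w) for (x, y) in R for (z, w) in S if y == z}

X = range(6)
R = {(a, b) for a in X for b in X if (a + b) % 2 == 0}   # a ~ b iff a + b is even

print(is_reflexive(X, R), is_symmetric(X, R), is_transitive(X, R))  # True True True
print(is_antisymmetric(X, R))                                       # False
print(compose(R, R) == R)   # True: R is reflexive and transitive, so R o R = R here
```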
https://quics.umd.edu/research/publications?s=year&amp%3Bf%5Bag%5D=N&amp%3Bf%5Bauthor%5D=2086&o=asc&f%5Bauthor%5D=1692
# Publications

Filters: Author is Ron Taylor. 1 result.

, Domination with decay in triangular matchstick arrangement graphs, Involve, a Journal of Mathematics, vol. 10, no. 5, pp. 749-766, 2017.
http://support.sas.com/documentation/cdl/en/statug/68162/HTML/default/statug_ttest_details14.htm
# The TTEST Procedure #### Two-Independent-Sample Design Define the following notation: ##### Normal Difference (DIST=NORMAL TEST=DIFF) Observations at the first class level are assumed to be distributed as , and observations at the second class level are assumed to be distributed as , where , , , and are unknown. The within-class-level mean estimates ( and ), standard deviation estimates ( and ), standard errors ( and ), and confidence limits for means and standard deviations are computed in the same way as for the one-sample design in the section Normal Data (DIST=NORMAL). The mean difference is estimated by Under the assumption of equal variances (), the pooled estimate of the common standard deviation is The pooled standard error (the estimated standard deviation of assuming equal variances) is The pooled 100(1 – )% confidence interval for the mean difference is The t value for the pooled test is computed as The p-value of the test is computed as Under the assumption of unequal variances (the Behrens-Fisher problem), the unpooled standard error is computed as Satterthwaite’s (1946) approximation for the degrees of freedom, extended to accommodate weights, is computed as The unpooled Satterthwaite 100(1 – )% confidence interval for the mean difference is The t value for the unpooled Satterthwaite test is computed as The p-value of the unpooled Satterthwaite test is computed as When the COCHRAN option is specified in the PROC TTEST statement, the Cochran and Cox (1950) approximation of the p-value of the statistic is the value of p such that where and are the critical values of the t distribution corresponding to a significance level of p and sample sizes of and , respectively. The number of degrees of freedom is undefined when . In general, the Cochran and Cox test tends to be conservative (Lee and Gurland 1975). The 100(1 – )% CI= EQUAL and CI= UMPU confidence intervals for the common population standard deviation assuming equal variances are computed as discussed in the section Normal Data (DIST=NORMAL) for the one-sample design, except replacing by and by . The folded form of the F statistic, , tests the hypothesis that the variances are equal (Steel and Torrie 1980), where A test of is a two-tailed F test because you do not specify which variance you expect to be larger. The p-value (Steel and Torrie 1980) is equal-tailed and is computed as where , , , and are the degrees of freedom that correspond to , , , and , respectively. Note that the p-value is similar to the probability of a greater value under the null hypothesis that , The test is not very robust to violations of the assumption that the data are normally distributed, and thus it is not recommended without confidence in the normality assumption. ##### Lognormal Ratio (DIST=LOGNORMAL TEST=RATIO) The DIST= LOGNORMAL analysis is handled by log-transforming the data and null value, performing a DIST= NORMAL analysis, and then transforming the results back to the original scale. See the section Normal Data (DIST=NORMAL) for the one-sample design for details on how the DIST= NORMAL computations for means and standard deviations are transformed into the DIST= LOGNORMAL results for geometric means and CVs. As mentioned in the section Coefficient of Variation, the assumption of equal CVs on the lognormal scale is analogous to the assumption of equal variances on the normal scale. 
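For readers without SAS at hand, here is a minimal Python sketch (my own, not SAS code) of the unweighted computations described above for the two-independent-sample difference analysis: the pooled t test and the Satterthwaite (unequal-variance) t test. The function name and the simulated data are illustrative; the weighted extensions, the confidence limits and the Cochran and Cox approximation are omitted.

```python
import numpy as np
from scipy import stats

def two_sample_t(y1, y2):
    """Pooled and Satterthwaite t tests of H0: mu1 - mu2 = 0 (unweighted case)."""
    y1, y2 = np.asarray(y1, float), np.asarray(y2, float)
    n1, n2 = len(y1), len(y2)
    m1, m2 = y1.mean(), y2.mean()
    v1, v2 = y1.var(ddof=1), y2.var(ddof=1)

    # Pooled test (assumes equal variances).
    sp2 = ((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)
    t_pooled = (m1 - m2) / np.sqrt(sp2 * (1 / n1 + 1 / n2))
    df_pooled = n1 + n2 - 2
    p_pooled = 2 * stats.t.sf(abs(t_pooled), df_pooled)

    # Satterthwaite test (unequal variances).
    se2 = v1 / n1 + v2 / n2
    df_sat = se2 ** 2 / ((v1 / n1) ** 2 / (n1 - 1) + (v2 / n2) ** 2 / (n2 - 1))
    t_sat = (m1 - m2) / np.sqrt(se2)
    p_sat = 2 * stats.t.sf(abs(t_sat), df_sat)
    return (t_pooled, df_pooled, p_pooled), (t_sat, df_sat, p_sat)

rng = np.random.default_rng(0)
a, b = rng.normal(10, 2, 25), rng.normal(11, 4, 40)
print(two_sample_t(a, b))
# Cross-check against scipy's built-in implementations:
print(stats.ttest_ind(a, b, equal_var=True), stats.ttest_ind(a, b, equal_var=False))
```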
##### Normal Ratio (DIST=NORMAL TEST=RATIO) The distributional assumptions, equality of variances test, and within-class-level mean estimates ( and ), standard deviation estimates ( and ), standard errors ( and ), and confidence limits for means and standard deviations are the same as in the section Normal Difference (DIST=NORMAL TEST=DIFF) for the two-independent-sample design. The mean ratio is estimated by No estimates or confidence intervals for the ratio of standard deviations are computed. Under the assumption of equal variances (), the pooled confidence interval for the mean ratio is the Fieller (1954) confidence interval, extended to accommodate weights. Let where is the pooled standard deviation defined in the section Normal Difference (DIST=NORMAL TEST=DIFF) for the two-independent-sample design. If (which occurs when is too close to zero), then the pooled two-sided 100(1 – )% Fieller confidence interval for does not exist. If , then the interval is For the one-sided intervals, let which differ from and only in the use of in place of . If , then the pooled one-sided 100(1 – )% Fieller confidence intervals for do not exist. If , then the intervals are The pooled t test assuming equal variances is the Sasabuchi (1988a, 1988b) test. The hypothesis is rewritten as , and the pooled t test in the section Normal Difference (DIST=NORMAL TEST=DIFF) for the two-independent-sample design is conducted on the original values () and transformed values of with a null difference of 0. The t value for the Sasabuchi pooled test is computed as The p-value of the test is computed as Under the assumption of unequal variances, the unpooled Satterthwaite-based confidence interval for the mean ratio is computed according to the method in Dilba, Schaarschmidt, and Hothorn (2007, the section "Two-sample Problem" on page 20), extended to accommodate weights. The degrees of freedom for the confidence interval are based on the same approximation as in Tamhane and Logan (2004) for the unpooled t test but with the null mean ratio replaced by the maximum likelihood estimate : Let where and are the within-class-level standard deviations defined in the section Normal Difference (DIST=NORMAL TEST=DIFF) for the two-independent-sample design. If (which occurs when is too close to zero), then the unpooled Satterthwaite-based two-sided 100(1 – )% confidence interval for does not exist. If , then the interval is The t test assuming unequal variances is the test derived in Tamhane and Logan (2004). The hypothesis is rewritten as , and the Satterthwaite t test in the section Normal Difference (DIST=NORMAL TEST=DIFF) for the two-independent-sample design is conducted on the original values () and transformed values of with a null difference of 0. The degrees of freedom are computed as The t value for the Satterthwaite-based unpooled test is computed as The p-value of the test is computed as
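The ratio tests can be sketched the same way. The snippet below is my own reconstruction, not SAS's exact displayed formulas (which are extended to accommodate weights): it tests H0: mu1/mu2 = r0 by rewriting it as mu1 - r0*mu2 = 0, the idea behind the Sasabuchi pooled test and the Tamhane and Logan unpooled test described above, in the unweighted case.

```python
import numpy as np
from scipy import stats

def ratio_t_tests(y1, y2, r0=1.0):
    """Test H0: mu1 / mu2 = r0 by testing mu1 - r0*mu2 = 0 (unweighted sketch)."""
    y1, y2 = np.asarray(y1, float), np.asarray(y2, float)
    n1, n2 = len(y1), len(y2)
    m1, m2 = y1.mean(), y2.mean()
    v1, v2 = y1.var(ddof=1), y2.var(ddof=1)

    # Pooled (Sasabuchi-style): Var(m1 - r0*m2) = sigma^2 * (1/n1 + r0^2/n2).
    sp2 = ((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)
    t_pooled = (m1 - r0 * m2) / np.sqrt(sp2 * (1 / n1 + r0 ** 2 / n2))
    p_pooled = 2 * stats.t.sf(abs(t_pooled), n1 + n2 - 2)

    # Unpooled (Satterthwaite-style): se^2 = v1/n1 + r0^2 * v2/n2.
    se2 = v1 / n1 + r0 ** 2 * v2 / n2
    df = se2 ** 2 / ((v1 / n1) ** 2 / (n1 - 1) + (r0 ** 2 * v2 / n2) ** 2 / (n2 - 1))
    t_unpooled = (m1 - r0 * m2) / np.sqrt(se2)
    p_unpooled = 2 * stats.t.sf(abs(t_unpooled), df)
    return (t_pooled, p_pooled), (t_unpooled, df, p_unpooled)

rng = np.random.default_rng(1)
a, b = rng.normal(20, 3, 30), rng.normal(10, 2, 30)
print(ratio_t_tests(a, b, r0=2.0))   # true ratio is 2, so the p-values should be large
```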
https://phys.libretexts.org/Bookshelves/Thermodynamics_and_Statistical_Mechanics/Book%3A_Heat_and_Thermodynamics_(Tatum)/11%3A_Heat_Engines/11.07%3A_A_Useful_Exercise
# 11.7: A Useful Exercise

It would probably not be a useful exercise to try to memorise the details of the several heat engine cycles described in this chapter. What probably would be a useful exercise is as follows. Note that in each cycle there are four stages, which, in principle at least (if not always in practice) are well defined and separated one from the next. These stages are described by one or another of an isotherm, an adiabat, an isochor or an isobar. It would probably be a good idea to ask oneself, for each stage in each engine, the values of ∆Q, ∆W and ∆U, noting, of course, that in each case, ∆U = ∆Q + ∆W. In each case take care to note whether heat is added to or lost from the engine, whether the engine does work or whether work is done on it, and whether the internal energy increases or decreases. By doing this, one could then easily determine how much heat is supplied to the engine, and how much net work it does during the cycle, and hence determine the efficiency of the engine.

The following may serve as useful guidelines. In these guidelines it is assumed that any work done is reversible, and that (except for the steam engine or Rankine cycle) the working substance may be treated as if it were an ideal gas.

Along an isotherm, the internal energy of an ideal gas is unchanged. That is to say, ∆U = 0. The work done (per mole of working substance) will be an expression of the form RT ln(V2/V1), and the heat lost or gained will then be determined by ∆Q + ∆W = 0.

Along an adiabat, no heat is gained or lost, so that ∆Q = 0. The expression for the work done per mole will be of the form $$\frac{R\left(T_{1}-T_{2}\right)}{\gamma-1}=\frac{P_{1} V_{1}-P_{2} V_{2}}{\gamma-1}$$ where V is the molar volume. Just be sure to understand whether work is done on or by the engine. The change in the internal energy (be sure to understand whether it is an increase or a decrease) is then given by ∆U = ∆W.

Along an isochor, no work is done. That is, ∆W = 0. The heat lost or gained per mole will be an expression of the form CV(T2 - T1), where CV is the molar heat capacity at constant volume. The change in the internal energy (be sure to understand whether it is an increase or a decrease) is then given by ∆U = ∆Q.

Along an isobar, none of Q, W or U is unchanged. The work done per mole (by or on the engine?) will be an expression of the form ∆W = P(V2 - V1) = R(T2 - T1). The heat added to or lost from the engine will be an expression of the form CP(T2 - T1), where CP is the molar heat capacity at constant pressure. The change in the internal energy (be sure to understand whether it is an increase or a decrease) is then given by ∆U = ∆Q + ∆W.

It might also be a good idea to try to draw each cycle in the T : S plane (with the intensive variable T on the vertical axis). Indeed I particularly urge you to do this for the Carnot cycle, which will look particularly simple. Note that, while the area inside the cycle in the P : V plane is equal to the net work done on the engine during the cycle, the area inside the cycle in the T : S plane is equal to the net heat supplied to the engine during the cycle.
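As a concrete companion to these guidelines (my own sketch, not part of the chapter), the following Python snippet tabulates ∆Q, ∆W and ∆U per mole for each of the four process types, using the sign convention ∆U = ∆Q + ∆W with ∆W the work done on the gas.

```python
import math

R = 8.314  # J / (mol K)

def isotherm(T, V1, V2):
    dU = 0.0
    dW = -R * T * math.log(V2 / V1)   # work done ON the gas (negative for an expansion)
    dQ = dU - dW
    return dQ, dW, dU

def adiabat(T1, T2, gamma):
    dQ = 0.0
    dU = R * (T2 - T1) / (gamma - 1)  # Cv = R/(gamma - 1) per mole
    dW = dU - dQ
    return dQ, dW, dU

def isochor(T1, T2, gamma):
    dW = 0.0
    dU = R * (T2 - T1) / (gamma - 1)
    dQ = dU - dW
    return dQ, dW, dU

def isobar(T1, T2, gamma):
    dW = -R * (T2 - T1)               # -P(V2 - V1) for an ideal gas
    Cp = gamma * R / (gamma - 1)
    dQ = Cp * (T2 - T1)
    dU = dQ + dW                      # equals Cv (T2 - T1)
    return dQ, dW, dU

# Example: isothermal expansion of a monatomic ideal gas at 300 K, doubling the volume;
# the gas absorbs heat (dQ > 0) and does work on the surroundings (dW < 0), with dU = 0.
print(isotherm(300.0, 1.0, 2.0))
```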
https://proofwiki.org/wiki/Definition:P-Integrable_Function
# Definition:Integrable Function/p-Integrable ## Definition Let $\struct {X, \Sigma, \mu}$ be a measure space. Let $f \in \MM_{\overline \R}, f: X \to \overline \R$ be a measurable function. Let $p \ge 1$ be a real number. Then $f$ is said to be $p$-integrable in respect to $\mu$ if and only if: $\displaystyle \int \size f^p \rd \mu < +\infty$
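A quick worked example (my own illustration, not part of the ProofWiki entry): take $X = (0, 1]$ with Lebesgue measure $\lambda$ and $f(x) = x^{-1/2}$. Then $\displaystyle \int \size f^p \rd \lambda = \int_0^1 x^{-p/2} \rd x$, which is finite if and only if $p < 2$. Hence $f$ is $p$-integrable with respect to $\lambda$ precisely for $1 \le p < 2$: it is integrable, for instance, but not square integrable.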
http://mathhelpforum.com/advanced-algebra/54935-abelian.html
# Math Help - Abelian

1. ## Abelian

Prove that a group of order 4 is Abelian.

To prove that a group is Abelian, I need to show that it has an identity, inverses, is associative, and is closed under the operation. I can definitely do this for specific groups, but I am not sure how to do this for a generality. Can someone please help me get started?

2. Couple of thoughts: all groups of order 4 are either cyclic groups or klein groups. If an element of the group has order 4 then the group is cyclic, or if all the elements are self-inverse then the group is klein. This might help you start thinking about proving that all groups of order 4 are abelian (where g*h=h*g).

3. Originally Posted by bluejay
Prove that a group of order 4 is Abelian. To prove that a group is Abelian, I need to show that it has an identity, inverses, is associative, and is closed under the operation. I can definitely do this for specific groups, but I am not sure how to do this for a generality. Can someone please help me get started?

Let the elements be $\{e,a,b,c\}$. Now the orders of $a,b,c$ must be larger than $1$ and divide $4$ - the order of the group. If any of them are $4$ then the group is cyclic and therefore abelian. Thus, it is safe to assume that $a,b,c$ all have order $2$. Therefore, $a^2=b^2=c^2 = e$. Consider $ab$. It cannot be the case that $ab=b,ab=a$ for that would imply $a=e$ or $b=e$. It also cannot be the case that $ab=e$ for that would mean $b$ is inverse of $a$ - however this is impossible since $a$ is its own inverse and inverses are unique. Thus, it follows that $ab=c$. Using a similar argument we can show $ba=c$. And by symmetry of these elements we have $ac=b,ca=b,bc=a,cb=a$. This shows the group is abelian.
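As a complement to the argument above (my own addition, not from the thread), a short Python check builds the only two group structures of order 4, the cyclic group Z4 and the Klein four-group (the latter being exactly the multiplication table derived in post 3), and verifies that both are commutative.

```python
from itertools import product

def is_commutative(elements, op):
    return all(op[(x, y)] == op[(y, x)] for x, y in product(elements, repeat=2))

# Cyclic group Z4: addition modulo 4.
z4 = list(range(4))
z4_op = {(x, y): (x + y) % 4 for x, y in product(z4, repeat=2)}

# Klein four-group {e, a, b, c} with a^2 = b^2 = c^2 = e and ab = c, etc.
klein = ["e", "a", "b", "c"]
def klein_mult(x, y):
    if x == "e": return y
    if y == "e": return x
    if x == y:   return "e"
    return ({"a", "b", "c"} - {x, y}).pop()   # product of two distinct non-identity elements
klein_op = {(x, y): klein_mult(x, y) for x, y in product(klein, repeat=2)}

print(is_commutative(z4, z4_op))        # True
print(is_commutative(klein, klein_op))  # True
```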
https://quantumcomputing.stackexchange.com/questions/21612/does-rm-tr-pi-z-rho-pi-z-le-p-imply-cal-e-rho-and-cal-e-pi-z-r
# Does ${\rm tr}(\Pi_z\rho\Pi_z)\le p$ imply $\cal E(\rho)$ and $\cal E(\Pi_{-z}\rho\Pi_{-z})$ are close in trace distance? Suppose I have a quantum operation $$\mathcal{E}$$ and a state $$\rho$$ such that: $$\operatorname{tr}(\Pi_z \rho \Pi_z) \le p$$ for some probability $$p$$ and some projection $$\Pi_z$$ onto some subspace of the Hilbert space. Let $$\Pi_{-z} = \mathbb{1} - \Pi_z$$. I would like to prove (or disprove) that $$\mathcal{E}(\rho)$$ and $$\mathcal{E}(\Pi_{-z} \rho \Pi_{-z})$$ are close to each other, i.e. finding a bound for: $$|| \mathcal{E}(\rho) - \mathcal{E}(\Pi_{-z} \rho \Pi_{-z}) ||_1$$ The first thing it comes natural to do is to apply contractivity of quantum channels: $$|| \mathcal{E}(\rho) - \mathcal{E}(\Pi_{-z} \rho \Pi_{-z}) ||_1 \le || \rho - \Pi_{-z} \rho \Pi_{-z} ||_1$$ But now I can't go ahead. Can you help me? Using the triangle inequality, we have $$||\rho-\Pi_{-z}\rho\Pi_{-z}||_1\leq ||\rho||_1+||\Pi_{-z}\rho\Pi_{-z}||_1=1+\mathrm{Tr}(\Pi_{-z}\rho\Pi_{-z})$$ (the final equality holds because $$\rho$$ and $$\Pi_{-z}\rho\Pi_{-z}$$ are positive semidefinite). Then we can use $$\Pi_{z}^2=\Pi_{z}$$ and cyclicity of the trace to find $$\mathrm{Tr}(\Pi_{-z}\rho\Pi_{-z})=\mathrm{Tr}(\rho+\Pi_{z}\rho\Pi_{z}-\Pi_{z}\rho-\rho\Pi_{z})=\mathrm{Tr}(\rho-\Pi_{z}\rho\Pi_{z})=1-p.$$ Overall we thus have $$||\mathcal{E}(\rho)-\mathcal{E}(\Pi_{-z}\rho\Pi_{-z})||_1\leq 2-p.$$ • isn't the first one $2 - p$? Oct 20 at 14:03 • The second equality is not correct. Consider $\rho = \begin{pmatrix} \alpha & \beta \\ \beta^* & 1-\alpha \end{pmatrix}$ and $\Pi_z = |0\rangle \langle 0|$. Then the LHS gives $\sqrt{\alpha^2 + |\beta|^2}$ but the RHS gives $\alpha$, These are clearly not the same when $beta \neq 0$. Oct 21 at 18:52
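A quick numerical sanity check of the accepted bound (my own code, not from the thread): since the channel can only contract the trace distance, it suffices to check the pre-channel states, and for random qubit density matrices with the projector onto |0> the trace norm of rho minus its projected version never exceeds 2 minus Tr(Pi_z rho Pi_z).

```python
import numpy as np

def random_density_matrix(d, rng):
    a = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = a @ a.conj().T
    return rho / np.trace(rho).real

def trace_norm(x):
    # Sum of absolute eigenvalues; valid because x is Hermitian here.
    return np.abs(np.linalg.eigvalsh(x)).sum()

rng = np.random.default_rng(7)
Pz = np.diag([1.0, 0.0])     # projector onto |0>
Pm = np.eye(2) - Pz          # Pi_{-z}

for _ in range(1000):
    rho = random_density_matrix(2, rng)
    p = np.trace(Pz @ rho @ Pz).real
    lhs = trace_norm(rho - Pm @ rho @ Pm)
    assert lhs <= 2 - p + 1e-10
print("the bound 2 - p held in all trials")
```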
https://www.physicsforums.com/threads/matrices-and-norms.205009/
# Matrices and norms 1. Dec 17, 2007 ### mathboy I'm trying to understand one step in the following proof to the following problem: http://img402.imageshack.us/img402/264/82127528zq9.jpg [Broken] Last edited by a moderator: May 3, 2017 2. Dec 17, 2007 ### Kreizhn I'm not sure if this will help you explicitly, but there's a little identity that might contribute to your solution We know that $$\displaystyle\left( \sum_i a_i \right) ^2 \geq 0$$ by non-negativity of a square, with equality holding iff the summand is identically 0. Furthermore, we can expand this to $$\displaystyle\left( \sum_i a_i \right) ^2 = \sum_i a_i^2 + 2 \sum_{i<j} a_i a_j$$ Thus $$\sum_i a_i^2 + 2 \sum_{i<j} a_i a_j \geq 0$$ Also, this can probably be generalized but off the top of my head I'm not too sure how, but $$(x-y)^2 \geq 0$$ $$\Rightarrow x^2+y^2-2xy \geq 0$$ $$\Rightarrow 2xy \leq x^2+y^2$$ 3. Dec 17, 2007 ### mathboy Thanks, but I already knew all that. So far I have [si [sj(A_ij)y_j]^2]^(1/2) <= [si [sjN y_j]^2]^(1/2) = nN [[sj(y_j)]^2]^(1/2) but leaves me wondering what to do with [sj(y_j)]^2 4. Dec 17, 2007 ### Kreizhn Then the only thing left that I can think of using is the identity above that I gave, namely $$\displaystyle\left( \sum_i a_i \right) ^2 = \sum_i a_i^2 + 2 \sum_{i<j} a_i a_j$$ Then as long as you can make a non-negativity argument about $$\sum_{i<j} a_i a_j$$ You'll be good to go 5. Dec 17, 2007 ### mathboy Still can't get it. I'm just trying to understand one step in the following proof to the following problem: http://img402.imageshack.us/img402/264/82127528zq9.jpg [Broken] Last edited by a moderator: May 3, 2017 6. Dec 17, 2007 ### Kreizhn In all honesty, I'm wondering if there just isn't a typo. It seems like the exponents should all be nested one parentheses earlier, though I might be missing something 7. Dec 17, 2007 ### mathboy It's an online solution by a professor (whom I don't know personally) to Spivak's "Calculus on Manifolds". If it is a typo, what is the proper way to finish it off? I've checked that there is no mistake before the inequality sign. I'm thinking that his n should be mn I think I've corrected the professor's solution, and I think his M is supposed to be N[mn]^(1/2) Last edited: Dec 17, 2007 8. Dec 17, 2007 ### morphism I think this is correct. He probably made the same mistake I did and treated both summations as if they run up to n, and not that one runs up to n and the other to m. Similar Discussions: Matrices and norms
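For anyone who wants to see the bound the thread converges on in action (my own check, not from the thread): with N the largest |A_ij|, an m by n real matrix satisfies |Ah| <= sqrt(mn) * N * |h|, so M = N * sqrt(mn) works in Spivak's problem.

```python
import numpy as np

rng = np.random.default_rng(42)
for _ in range(1000):
    m, n = rng.integers(1, 8, size=2)
    A = rng.normal(scale=5.0, size=(m, n))
    h = rng.normal(scale=5.0, size=n)
    N = np.abs(A).max()
    lhs = np.linalg.norm(A @ h)
    rhs = np.sqrt(m * n) * N * np.linalg.norm(h)
    assert lhs <= rhs + 1e-9
print("|Ah| <= sqrt(mn) * N * |h| held in all trials")
```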
https://www.shaalaa.com/concept-notes/arithmetic-progression-ap_1825
# Arithmetic Progression (A.P.)

#### notes

A sequence a_1, a_2, a_3, …, a_n, … is called an arithmetic sequence or arithmetic progression if a_(n + 1) = a_n + d, n ∈ N, where a_1 is called the first term and the constant d is called the common difference of the A.P.

The n^(th) term (general term) of the A.P. is a_n = a + (n – 1)d.

The sum to n terms of an A.P. is S_n = n/2[2a + (n – 1)d]. If l is the last term, we can also write S_n = n/2[a + l].

We can verify the following simple properties of an A.P.:

(1) If a constant is added to each term of an A.P., the resulting sequence is also an A.P.
(2) If a constant is subtracted from each term of an A.P., the resulting sequence is also an A.P.
(4) If each term of an A.P. is multiplied by a constant, then the resulting sequence is also an A.P.
(5) If each term of an A.P. is divided by a non-zero constant then the resulting sequence is also an A.P.

Arithmetic mean: Given two numbers a and b, we can insert a number A between them so that a, A, b is an A.P. Such a number A is called the arithmetic mean (A.M.) of the numbers a and b. Note that, in this case, we have A – a = b – A, i.e., A = (a + b)/2. We may also interpret the A.M. between two numbers a and b as their average (a + b)/2. For example, the A.M. of the two numbers 4 and 16 is 10; we have thus constructed an A.P. 4, 10, 16 by inserting the number 10 between 4 and 16.

More generally, if n arithmetic means are inserted between a and b, the common difference of the resulting A.P. is d = (b – a)/(n + 1).
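A small Python illustration of these formulas (my own example, not from the notes):

```python
def nth_term(a, d, n):
    """General term: a_n = a + (n - 1) d."""
    return a + (n - 1) * d

def sum_n(a, d, n):
    """S_n = n/2 [2a + (n - 1) d]."""
    return n * (2 * a + (n - 1) * d) / 2

def insert_arithmetic_means(a, b, n):
    """Insert n arithmetic means between a and b; the common difference is d = (b - a)/(n + 1)."""
    d = (b - a) / (n + 1)
    return [a + k * d for k in range(1, n + 1)]

print(nth_term(4, 6, 3))                  # 16, the third term of 4, 10, 16
print(sum_n(4, 6, 3))                     # 30.0 = 4 + 10 + 16
print(insert_arithmetic_means(4, 16, 1))  # [10.0], the arithmetic mean of 4 and 16
```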
https://www.physicsforums.com/threads/hrmm-derivative-problems-concept.8052/
# Hrmm derivative problems (concept?) 1. Oct 30, 2003 ### VikingStorm I've been trying to do these concept-based questions, (but I think my concept isn't that sound). "Suppose f'(2)=4, g'(2)=3, f(2)=-1 and g(2)=1. Find the derivative at 2 of each of the following functions a. s(x)=f(x)+g(x) b. p(x)=f(x)g(x) c. q(x)=f(x)/g(x)" I began doing this, without reading the find the derivative part. What order would I exactly solve it in? Or does it work straight in by plugging in the derivatives? (too simple, so must not be it) "If f(x)=x, find f'(137)" This is a pure concept question I'm sure... "Explain what is wrong with the equation (x^2-1)/(x-1)=x+1, and why lim(x^2)/(x-1)=lim(x+1) both x->1" The top factors out and supposedly cancels, though I'm not sure why I can't do that. 2. Oct 30, 2003 ### Soroban Hello, VikingStorm! "Suppose f'(2)=4, g'(2)=3, f(2)=-1 and g(2)=1. Find the derivative at 2 of each of the following functions a. s(x) = f(x) + g(x) b. p(x) = f(x)*g(x) c. q(x) = f(x)/g(x)" Yes , you're right ... After finding the derivative, just plug in the given values. (a) s'(x) = f '(x) + g'(x) Hence: s'(2) = f'(2) + g'(2) = 4 + 3 = 7 (b) p'(x) = f(x)*g'(x) + g(x)*f '(x) Hence: p'(2) = f(2)*g'(2) + g(2)*f '(2) = (-1)(3) + (1)(4) = 1 (c) q'(x) = [g(x)*f '(x) - f(x)*g'(x)]/[g(x)]^2 Hence: q'(2) = [(1)(4) - (-1)(3)][1^2] = 7 3. Nov 2, 2003 ### phoenixthoth for the first one, note that f'(x)=1 for all x, so f'(137)=1. another way to look at is is that for y=x, y=x is a tangent line at all points. the slope of the tangent line is 1 everywhere, so since f'(x) is the slope of the tangent line at (x,f(x)), f'(137)=1. for the second question, the main thing is what is meant by the equality sign. suppose A(x) and B(x) are two algebraic expressions defined for some set such as the set of real numbers. then we say that A(x)=B(x) if and only if A(x) equals B(x) for all real numbers x. such equations like A(x)=B(x) that are true "everywhere" are called identities. (x^2-1)/(x-1)=x+1 is *not* an identity because the equation isn't always true: it fails when x=1. if you let A(x)=(x^2-1)/(x-1) and B(x)=x+1, note that A(x)=B(x) for all real numbers except x=1. when you take the limit as x approaches 1, x is never allowed to actually equal 1, so limA(x)=limB(x).
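A quick symbolic check of the three answers above (my own, using SymPy): pick any concrete f and g with the stated values and derivatives at 2, for instance linear ones, and differentiate.

```python
import sympy as sp

x = sp.symbols("x")
# Any functions with f(2) = -1, f'(2) = 4, g(2) = 1, g'(2) = 3 will do; linear ones are simplest.
f = -1 + 4 * (x - 2)
g = 1 + 3 * (x - 2)

s = f + g       # sum rule
p = f * g       # product rule
q = f / g       # quotient rule

print(sp.diff(s, x).subs(x, 2))   # 7
print(sp.diff(p, x).subs(x, 2))   # 1
print(sp.diff(q, x).subs(x, 2))   # 7
```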
https://www.physicsforums.com/threads/projectile-doubt.170461/
# Projectile doubt. 1. May 16, 2007 ### perfectz Projectile doubt....... CHOOSE THE CORRECT ANSWER AND EXPLAIN. What is the nature of proportionality between the muzzle angle and the horizontal distance traveled by a projectile? a) Directly proportional. b) Inversely proportional. c) None of the above. ( who says learning physics is boring?:tongue2: ) Last edited: May 16, 2007 2. May 16, 2007 ### Staff: Mentor Well, what do you think? 3. May 17, 2007 ### husky88 Well think about this. The bigger the angle, the higher the projectile goes. The higher it goes, the longer time it takes to reach the ground. The longer it takes to reach the ground,... You don't even need formulas for this. 4. May 17, 2007 ### Staff: Mentor True. Also true. Not sure what you can immediately deduce from this. You might want to reconsider that. 5. May 18, 2007 ### husky88 Well, what I thought was the longer it takes to reach the ground, the more time it has to travel horizontally, therefore the more horizontal distance it travels. Skipping over, you get: The bigger the angle, the more horizontal distance it travels. So they are directly proportional. 6. May 18, 2007 ### Weimin So what about the angle of 90 degrees? :-) 7. May 18, 2007 ### husky88 Hmm. It does say it is a muzzle angle, so if the angle is 90, then you end up in shooting yourself and then you wouldn't care about the answer anyway. :) Yeah, I guess my non-formula logic doesn't apply for a 90 angle. Then the answer would be C. 8. May 18, 2007 ### husky88 I just realized that if you throw it too high, then Vx will decrease. So yeah, all my posts don't make sense. Except for the one with the 90 degree angle. :) Could you say it is proportional to sin(2*angle), then? Last edited: May 18, 2007 9. May 18, 2007 ### cristo Staff Emeritus What makes you think that? 10. May 18, 2007 ### perfectz Am I right? i too came to Husky's answer before posting the topic. But i had doubts whether it was right or wrong. let the angle be 'A'. Let the velocity be 'V' therefore horizontal component = V cos A vertical component = V sin A therefore horizontal displacement = V cos A * t(time of flight) units vertical displacement = V sin A * t units At maximum height, V sin A = 0. ----------------> l consider formula V = U-gt ----------------> ll therefore substituting l in ll 0 = V sin A - gt t = (V sin A)/g total time of flight = 2t =2((V sin A)/g) range is nothing but the horizontal displacement V cos A * t so range = (2(V sin A)/g} * V cos A therefore range is directly proportional to Sin2A Am I right guys? 11. May 18, 2007 ### perfectz dont just visit people 12. May 18, 2007 ### Astronuc Staff Emeritus Last edited: May 18, 2007 13. May 18, 2007 ### perfectz ya hooooooooooooooooooooo Thanks guys. Physics Forums Rok and you guys are cool the site u gave me is just beyond cool dude How much does wind affect the range of a projectile body? And how much does the shape and weight of the body affect the range? Last edited: May 18, 2007 Similar Discussions: Projectile doubt.
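To see the conclusion numerically (my own snippet, assuming launch and landing at the same height): the range R = V^2 sin(2A)/g is proportional to sin 2A, not to the muzzle angle itself; it peaks at 45 degrees and returns to zero at 90 degrees, which is why the answer is (c).

```python
import math

def horizontal_range(v, angle_deg, g=9.81):
    """Range of a projectile over level ground: R = v^2 * sin(2A) / g."""
    return v ** 2 * math.sin(math.radians(2 * angle_deg)) / g

v = 30.0  # m/s, an arbitrary muzzle speed
for angle in (0, 15, 30, 45, 60, 75, 90):
    print(f"{angle:3d} deg  ->  {horizontal_range(v, angle):6.1f} m")
# The range rises, peaks at 45 degrees, then falls back to zero at 90 degrees,
# so it is neither directly nor inversely proportional to the muzzle angle.
```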
https://www.physicsforums.com/threads/got-an-astrophysics-astrodynamics-question.892851/
# Got an astrophysics/ astrodynamics question... 1. Nov 10, 2016 ### Oracle1 How would a solar system need to be set up to provide a life bearing world, standard 1 g, with the surface area of Jupiter with a standard day / night 24 hour cycle? 2. Nov 10, 2016 ### phinds I think that would work if the Earth were the size of Jupiter and made of styrofoam. Since it's mass, not diameter, that determines orbit, the Earth would have the same orbital period and with the mass the same as Earth, but being made of styrofoam (approximately) it would have the surface area of Jupiter. Of course, it's hard to figure how you could have a planet made of styrofoam or any equivalently weighted matter. 3. Nov 10, 2016 ### Oracle1 What about outside celestial influences creating a constant counter gravitational effect? All the mass but the desired gravity. 4. Nov 10, 2016 ### phinds OOPS. I forgot to state that of course the gravity on the surface of the styrofoam planet would be way less than on Earth because same mass but much greater diameter 5. Nov 10, 2016 ### Oracle1 Thanks! But darn it, I'm going to have to figure something out! I need this planet to have standard mass and gravity. (though the thought of a Styrofoam planet is amusing.) I don't really want to Sci Fi this. I'm going to be bending just about every other rule into a pretzel for this story. My thinking is: if space is infinite with the possibility of infinite variety, then there must be some version of a solar system that could support a planet like this. I just wanted... NEEDED really, some help figuring out how to set it up. 6. Nov 10, 2016 ### phinds Learn the math of planetary orbits and figure out if one is possible, but a surface the size of Jupiter but gravity same as Earth is going to mean a styrofoam (or equivalent) planet. Period. 7. Nov 10, 2016 ### Oracle1 Well, That seems to put that idea to rest. Thanks for the feedback phinds.
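phinds' styrofoam point can be put in numbers (my own back-of-the-envelope sketch, with rounded constants): a planet with Jupiter's radius but 1 g of surface gravity must have a mean density of only about 500 kg/m^3, far below rock.

```python
from math import pi

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
g_target = 9.81      # desired surface gravity, m/s^2
R_jupiter = 6.99e7   # mean radius of Jupiter, m (approximate)
R_earth = 6.37e6     # mean radius of Earth, m

def required_density(radius, surface_gravity=g_target):
    """Mean density giving the requested surface gravity: g = G*M/R^2 with M = rho*(4/3)*pi*R^3."""
    return 3 * surface_gravity / (4 * pi * G * radius)

print(required_density(R_earth))    # ~5.5e3 kg/m^3, close to Earth's actual mean density
print(required_density(R_jupiter))  # ~5.0e2 kg/m^3, about half the density of water
```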
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9426472783088684, "perplexity": 1510.3096019288078}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676593142.83/warc/CC-MAIN-20180722080925-20180722100925-00052.warc.gz"}
https://math.stackexchange.com/questions/414775/query-on-brahmagupta-fibonacci-identity
Query on Brahmagupta-Fibonacci Identity

According to the Brahmagupta–Fibonacci identity, for $p=q\cdot r$ we can prove that if any two of the integers $p,q,r$ are of the form $a^2+n\cdot b^2,$ the third must be of the same form. This is probably a generalization of this problem or this.

Now, I want to determine the $n$ such that if $M=r\cdot s=a^2+n\cdot b^2$ where $(r,s)=1$, then $r$ and $s$ must be of the same form for any integer pair $a,b$.

Using a program, it seems that $n=1,2,3,7$ satisfy this. I think the proof for $2$ will be required here.

We can safely assume $n$ to be square-free, as its square part (if any) can easily be merged with $b$. Now, if $(a,b)=d$ and $\frac aA=\frac bB=d\implies (A,B)=1$ and $a^2+n\cdot b^2=d^2(A^2+n\cdot B^2)$. As $n$ is square-free, $(A^2,n)=(A,n)=D$ (say) and $\frac A{A_1}=\frac nN=D\implies (A_1,N)=1$. Subsequently, $A^2+n\cdot B^2$ becomes $D^2\cdot A_1^2+N\cdot D\cdot B^2=D(D\cdot A^2_1+N\cdot B^2)$, which is not of the form $a^2+n\cdot b^2$ unless $D$ or $N=1$. So, we can focus on $a^2+n\cdot b^2,$ where $(a,n\cdot b)=1$.

Some observations can be made:

$(1):$ If $p^c$ divides $M=a^2+n\cdot b^2,$ where integer $c\ge1$ and $p$ is prime, then $a^2\equiv-n\cdot b^2\pmod {p^c}\iff (a\cdot b^{-1})^2\equiv-n\pmod {p^c}$ $\implies -n$ must be a quadratic residue of $p^c$. This is a necessary condition for $p^c$ to be of the form $a^2+n\cdot b^2$. So, if $2^c$ (where $c\ge3$) divides $M$, then $n\equiv-1\pmod 8$, as $x^2\equiv e\pmod {2^c}$ is solvable with exactly $4$ solutions $\iff e\equiv1\pmod 8$.

$(2):$ Generalizing the solution of this problem, let's consider $2^x=a^2+n\cdot b^2$. As $n$ is odd and $(a,b)=1$, $a\cdot b$ must be odd. One value of $x$ is $y$, i.e., $2^y=a_1^2+nb_1^2$, and if the smallest value of $x$ is $x_\text{min}$, i.e., $2^{x_\text{min}}=a_2^2+n\cdot b_2^2$, then $2^{x_\text{min}+y}=(a_1^2+n\cdot b_1^2)(a_2^2+n\cdot b_2^2)=(a_1a_2\pm n\cdot b_1b_2)^2+n(a_1b_2\mp a_2b_1)^2$. Observe that $a_1a_2\pm n\cdot b_1b_2$ and $a_1b_2\mp a_2b_1$ are even and the highest power of $2$ that divides each will be the same $=2^k$ (say). So, $2^{x_\text{min}+y-2k}=\left(\frac{a_1a_2\pm n\cdot b_1b_2}{2^k}\right)^2+n\left(\frac{a_1b_2\mp a_2b_1}{2^k}\right)^2$.

Following this line, we can prove that $4^k,k\ge 2$ can be represented as $a^2+15b^2$ and $2^{3k+2},k\ge 1$ can be represented as $a^2+31b^2$.

• Please explain how $p, q, r, a, n, b$ are related. – Hans Engler Jun 8 '13 at 15:28
• @HansEngler, thanks for your observation. Incorporated the missed point. – lab bhattacharjee Jun 8 '13 at 15:33
• Can the answer employ number theory of quadratic number rings, such as results about Euclidean or unique factorization properties, or do you seek a more elementary answer? – Key Ideas Jun 8 '13 at 16:15
• @KeyIdeas, I want to know the pattern of $n$ (if any) via any valid method. – lab bhattacharjee Jun 10 '13 at 15:57
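A brute-force check along the lines of the "using a program" remark above can be sketched as follows (Python; the search limit and the convention that $x$ or $y$ may be zero in $x^2+ny^2$ are my own choices and do affect the outcome). For small square-free $n$ it should single out $n=1,2,3,7$, matching the observation in the question.

```python
from math import gcd, isqrt

def representable(m, n):
    """True if m = x^2 + n*y^2 for some integers x, y >= 0 (zero allowed)."""
    y = 0
    while n * y * y <= m:
        x2 = m - n * y * y
        if isqrt(x2) ** 2 == x2:
            return True
        y += 1
    return False

def squarefree(n):
    return all(n % (d * d) for d in range(2, isqrt(n) + 1))

def property_holds(n, limit=2000):
    """Whenever M = a^2 + n*b^2 splits as M = r*s with gcd(r, s) = 1,
    check that r and s are both of the form x^2 + n*y^2 (brute force)."""
    for a in range(1, isqrt(limit) + 1):
        for b in range(1, isqrt(limit // n) + 1):
            M = a * a + n * b * b
            if M > limit:
                break
            for r in range(2, isqrt(M) + 1):
                if M % r:
                    continue
                s = M // r
                if gcd(r, s) == 1 and not (representable(r, n) and representable(s, n)):
                    return False
    return True

print([n for n in range(1, 20) if squarefree(n) and property_holds(n)])
```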
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8827069401741028, "perplexity": 357.6601878003445}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987835748.66/warc/CC-MAIN-20191023173708-20191023201208-00319.warc.gz"}
http://en.wikipedia.org/wiki/Seyfert_galaxy
# Seyfert galaxy

The Circinus Galaxy, a Type II Seyfert galaxy

Seyfert galaxies are one of the two largest groups of active galaxies, along with quasars. They have quasar-like nuclei (very luminous, distant and bright sources of electromagnetic radiation) with very high surface brightnesses whose spectra reveal strong, high-ionisation emission lines,[1] but unlike quasars, their host galaxies are clearly detectable. Seyfert galaxies account for about 10% of all galaxies[2] and are some of the most intensely studied objects in astronomy, as they are thought to be powered by the same phenomena that occur in quasars, although they are closer and less luminous than quasars. These galaxies have supermassive black holes at their centres which are surrounded by accretion discs of in-falling material. The accretion discs are believed to be the source of the observed ultraviolet radiation. Ultraviolet emission and absorption lines provide the best diagnostics for the composition of the surrounding material.[3] Seen in visible light, most Seyfert galaxies look like normal spiral galaxies, but when studied at other wavelengths, it becomes clear that the luminosity of their cores is of comparable intensity to the luminosity of whole galaxies the size of the Milky Way.[4] Seyfert galaxies are named after Carl Seyfert, who first described this class in 1943.[5]

## Discovery

NGC 1068 (Messier 77), one of the first Seyfert galaxies classified

Seyfert galaxies were first detected in 1908 by Edward A. Fath and Vesto Slipher, who were using the Lick Observatory to look at the spectra of astronomical objects that were thought to be "spiral nebulae". They noticed that NGC 1068 showed six bright emission lines, which was considered unusual as most objects observed showed an absorption spectrum corresponding to stars. In 1926, Edwin Hubble looked at the emission lines of NGC 1068 and two other such "nebulae" and classified them as extragalactic objects.[6] In 1943, Carl Keenan Seyfert discovered more galaxies similar to NGC 1068 and reported that these galaxies have very bright stellar-like nuclei that produce broad emission lines.[5] A year later, Cygnus A was detected at 160 MHz,[7] and the detection was confirmed in 1948 when it was established that it was a discrete source.[8] Its double radio structure became apparent with the use of interferometry.[9] In the next few years, other radio sources such as supernova remnants were discovered. By the end of the 1950s, more important characteristics of Seyfert galaxies had been discovered, including the fact that their nuclei are extremely compact (< 100 pc, i.e. "unresolved"), have high mass (≈10^(9±1) solar masses), and that the duration of peak nuclear emissions is relatively short (>10^8 years).[10] In the 1960s and 1970s, research to further understand the properties of Seyfert galaxies was carried out.
A few direct measurements of the actual sizes of Seyfert nuclei were taken, and it was established that the emission lines in NGC 1068 were produced in a region over a thousand light years in diameter.[11] Controversy existed over whether Seyfert redshifts were of cosmological origin.[12] Confirming estimates of the distance to Seyfert galaxies and their age was limited since their nuclei vary in brightness over a time scale of a few years; therefore arguments involving distance to such galaxies and the constant speed of light cannot always be used to determine their age.[12] In the same time period, research had been undertaken in order to survey, identify and catalogue galaxies, including Seyferts. Beginning in 1967, Benjamin Markarian published lists containing a few hundred galaxies distinguished by their very strong ultraviolet emission, with measurements on the position of some of them being improved in 1973 by other researchers.[13] At the time, it was believed that 1% of spiral galaxies are Seyferts.[14] By 1977, it was found that very few Seyfert galaxies are ellipticals, most of them being normal or barred spiral galaxies.[15] During the same time period, efforts were made to gather spectrophotometric data for Seyfert galaxies. It became obvious that not all spectra from Seyfert galaxies look the same, so they have been subclassified according to the characteristics of their emission spectra. A simple division into types I and II was devised, with the classes depending on the relative width of their emission lines.[16] It was later noticed that some Seyfert nuclei show intermediate properties, resulting in their being further subclassified into types 1.2, 1.5, 1.8 and 1.9 (see Classification).[17][18] Early surveys for Seyfert galaxies were biased in counting only the brightest representatives of this group. More recent surveys that count galaxies with low-luminosity and obscured Seyfert nuclei suggest that the Seyfert phenomenon is actually quite common, occurring in 16% ± 5% of galaxies; indeed, several dozen galaxies exhibiting the Seyfert phenomenon exist in the close vicinity (≈27 Mpc) of our own galaxy.[2]

## Characteristics

Optical and ultraviolet images of the black hole in the centre of NGC 4151, a Seyfert galaxy

An active galactic nucleus (AGN) is a compact region at the centre of a galaxy that has a higher than normal luminosity over portions of the electromagnetic spectrum. A galaxy having an active nucleus is called an active galaxy. Active galactic nuclei are the most luminous sources of electromagnetic radiation in the Universe, and their evolution puts constraints on cosmological models. Depending on the type, their luminosity varies over a timescale from a few hours to a few years. The two largest subclasses of active galaxies are quasars and Seyfert galaxies, the main difference between the two being the amount of radiation they emit. In a typical Seyfert galaxy, the nuclear source emits at visible wavelengths an amount of radiation comparable to that of the whole galaxy's constituent stars, while in a quasar, the nuclear source is brighter than the constituent stars by at least a factor of 100.[1] Seyfert galaxies have extremely bright nuclei, with luminosities ranging between 10^8 and 10^11 solar luminosities. Only about 5% of them are radio bright; their emissions are moderate in gamma rays and bright in X-rays.[19] Their visible and infrared spectra show very bright emission lines of hydrogen, helium, nitrogen, and oxygen.
These emission lines exhibit strong Doppler broadening, which implies velocities from 500 to 4,000 km/s (310 to 2,490 mi/s), and are believed to originate near an accretion disc surrounding the central black hole.[20]

### Eddington luminosity

A lower limit to the mass of the central black hole can be calculated using the Eddington luminosity. This limit arises because light exerts radiation pressure. Assume a simplified model where a black hole is surrounded by a sphere of luminous gas.[21] Both the attractive gravitational force acting on electron-ion pairs in the sphere and the repulsive force exerted by light follow an inverse-square law. If the gravitational force exerted by the black hole is less than the repulsive force due to radiation pressure, the sphere will be blown away by radiation pressure.[22]

The image shows a model of an active galactic nucleus. The central black hole is surrounded by an accretion disc, which is surrounded by a torus. The broad line region and narrow line emission region are shown, as well as jets coming out of the nucleus.

The gravitational force of the black hole can be calculated using

$$F_{grav}=\frac{GM_{BH}m_p}{r^2}$$

The outward radiative force is equal to

$$F_{rad} =\frac{dp}{dt}=\frac{1}{c}\frac{dE}{dt}=\frac{1}{c}\sigma\frac{L}{4\pi r^2}$$

$F_{rad}$ must be less than $F_{grav}$, therefore

$$\frac{1}{c}\sigma\frac{L}{4\pi r^2} < \frac{GM_{BH}m_p}{r^2}$$

The luminosity of the black hole must therefore be less than the Eddington luminosity:

$$L < L_{Eddington} = \frac{4\pi c G M_{BH} m_p}{\sigma} = 1.3 \times 10^{38} \frac{M_{BH}}{M_{solar}} \,\mathrm{erg/s} = 30000 \frac{M_{BH}}{M_{solar}} L_{solar}$$

In the above derivation, F_rad is the outward radiative force, F_grav is the gravitational force of the black hole, p is momentum, t is time, c is the speed of light, E is energy, L is luminosity, r is the radial distance from the black hole, G is the gravitational constant, M_BH is the mass of the black hole, L_Eddington is the Eddington luminosity, m_p is the proton mass, M_solar is the mass of the Sun and σ is the Thomson scattering cross-section. Therefore, given the observed luminosity (which would be less than the Eddington luminosity), an approximate lower limit for the mass of the central black hole at the centre of an active galaxy can be estimated.[19]

### Emissions

The emission lines seen in the spectrum of a Seyfert galaxy may come from the surface of the accretion disc itself, or may come from clouds of gas illuminated by the central engine in an ionization cone. The exact geometry of the emitting region is difficult to determine due to the poor resolution of the galactic centre. However, each part of the accretion disc has a different velocity relative to our line of sight, and the faster the gas is rotating around the black hole, the broader the emission line will be. Similarly, an illuminated disc wind also has a position-dependent velocity. The narrow lines are believed to originate from the outer part of the AGN, where velocities are lower, while the broad lines originate closer to the black hole. This is confirmed by the fact that the narrow lines do not vary detectably, which implies that the emitting region is large, contrary to the broad lines, which can vary on relatively short timescales. Reverberation mapping is a technique which uses this variability to try to determine the location and morphology of the emitting region. This technique measures the structure and kinematics of the broad line emitting region by observing the changes in the emitted lines as a response to changes in the continuum.
The use of reverberation mapping requires the assumption that the continuum originates in a single central source.[23] For 35 AGN, reverberation mapping has been used to calculate the mass of the central black holes and the size of the broad line regions.[24] In the few radio-loud Seyfert galaxies that have been observed, the radio emission is believed to represent synchrotron emission from the jet. The infrared emission is due to radiation in other bands being reprocessed by dust near the nucleus. The highest energy photons are believed to be created by inverse Compton scattering by a high temperature corona near the black hole.[25]

## Classification

NGC 1097 is an example of a Seyfert galaxy. A supermassive black hole with a mass of 100 million solar masses lies at the centre of the galaxy. The area around the black hole emits large amounts of radiation from the matter falling into the black hole.[26]

Seyferts were first classified as Type I or II, depending on the emission lines shown by their spectra. The spectra of Type I Seyfert galaxies show broad lines that include both allowed lines, like H I, He I or He II, and narrower forbidden lines, like O III. They show some narrower allowed lines as well, but even these narrow lines are much broader than the lines shown by normal galaxies. The spectra of Type II Seyfert galaxies, however, show only narrow lines, both permitted and forbidden. Forbidden lines are spectral lines that occur due to electron transitions not normally allowed by the selection rules of quantum mechanics, but that still have a small probability of spontaneously occurring. The term "forbidden" is slightly misleading, as the electron transitions causing them are not forbidden but highly improbable.[27]

In some cases, the spectra show both broad and narrow permitted lines, which is why they are classified as an intermediate type between Type I and Type II, such as Type 1.5 Seyfert. The spectra of some of these galaxies have changed from Type 1.5 to Type II in a matter of a few years. However, the characteristic broad emission line has rarely, if ever, disappeared.[28] The origin of the differences between Type I and Type II Seyfert galaxies is not known yet. There are a few cases where galaxies have been identified as Type II only because the broad components of the spectral lines have been very hard to detect. It is believed by some that all Type II Seyferts are in fact Type I, where the broad components of the lines are impossible to detect because of the angle we are at with respect to the galaxy. Specifically, in Type I Seyfert galaxies, we observe the central compact source more or less directly, therefore sampling the high velocity clouds in the broad line emission region moving around the supermassive black hole thought to be at the centre of the galaxy. By contrast, in Type II Seyfert galaxies, the active nuclei are obscured and only the colder outer regions located further away from the clouds' broad line emission region are seen. This theory is known as the "Unification scheme" of Seyfert galaxies.[29] However, it is not yet clear if this hypothesis can explain all the observed differences between the two types.

### Type I Seyfert galaxies

Optical spectrum of the Type I Seyfert galaxy NGC 1275

Type I Seyferts are very bright sources of ultraviolet light and X-rays in addition to the visible light coming from their cores.
They have two sets of emission lines in their spectra: narrow lines with widths (measured in velocity units) of several hundred km/s, and broad lines with widths up to 10^4 km/s.[30] The broad lines originate above the accretion disc of the supermassive black hole thought to power the galaxy, while the narrow lines occur beyond the broad line region of the accretion disc. Both emissions are caused by heavily ionised gas. The broad line emission arises in a region 0.1–1 parsec across. The size of the broad line emission region, R_BLR, can be estimated from the time delay corresponding to the time taken by light to travel from the continuum source to the line-emitting gas.[19]

### Type II Seyfert galaxies

Type II Seyfert galaxies have the characteristic bright core, as well as appearing bright when viewed at infrared wavelengths.[31] Their spectra contain narrow lines associated with forbidden transitions, and broad lines associated with allowed strong dipole or intercombination transitions.[29] In some Type II Seyfert galaxies, analysis with a technique called spectro-polarimetry (spectroscopy of the polarised light component) revealed obscured Type I regions. In the case of NGC 1068, nuclear light reflected off a dust cloud was measured, which led scientists to believe in the presence of an obscuring dust torus around a bright continuum and broad emission line nucleus. When the galaxy is viewed from the side, the nucleus is indirectly observed through reflection by gas and dust above and below the torus. This reflection causes the polarisation.[32]

### Type 1.2, 1.5, 1.8 and 1.9 Seyfert galaxies

In 1981, Donald Osterbrock introduced the notations Seyfert 1.5, 1.8 and 1.9, where the subclasses are based on the optical appearance of the spectrum, with the numerically larger subclasses having weaker broad-line components relative to the narrow lines. For example, Type 1.9 only shows a broad component in the Hα line, and not in the higher order Balmer lines. In Type 1.8, very weak broad lines can be detected in the Hβ line as well as Hα, even if they are very weak compared to the Hα. In Type 1.5, the strengths of the Hα and Hβ lines are comparable.[33]

### Other Seyfert-like galaxies

In addition to the Seyfert progression from Type I to Type II (including Type 1.2 to Type 1.9), there are other types of galaxies that are very similar to Seyferts or that can be considered as subclasses of them. Very similar to Seyferts are the low-ionisation narrow-line emission radio galaxies (LINER), discovered in 1980. These galaxies have strong emission lines from weakly ionised or neutral atoms, while the emission lines from strongly ionised atoms are relatively weak by comparison. LINERs share a large number of traits with low luminosity Seyferts. In fact, when seen in visible light, the global characteristics of their host galaxies are indistinguishable.
Also, they both show a broad line emission region, but the line emitting region in LINERs has a lower density than in Seyferts.[34] An example of such a galaxy is M104 in the Virgo constellation, also known as the Sombrero galaxy.[35] A galaxy that is both a LINER and a Type I Seyfert is NGC 7213, a galaxy that is relatively close compared to other AGNs.[36] Another very interesting subclass is the narrow line Seyfert I galaxies (NLSy1), which have been subject to extensive research in recent years.[37] They have much narrower lines than the broad lines from classic Seyfert I galaxies, steep hard and soft X-ray spectra and strong Fe[II] emission.[38] Their properties suggest that NLSy1 galaxies are young AGNs with high accretion rates, suggesting a relatively small but growing central black hole mass.[39] There are theories suggesting that NLSy1s are galaxies in an early stage of evolution, and links between them and ultraluminous infrared galaxies or Seyfert II galaxies have been proposed.[40]

## Evolution

The majority of active galaxies we observe are very distant and show large Doppler shifts. This suggests that active galaxies occurred in the early Universe and, due to the expansion of the Universe, are receding away from us at very high speeds. Quasars are the furthest active galaxies, some of them being observed at distances of 12 billion light years. Seyfert galaxies are much closer than quasars.[41] Because light has a finite speed, looking across large distances in the Universe is equivalent to looking back in time. Therefore, the observation of active galactic nuclei at large distances and their scarcity in the nearby Universe suggests that they were much more common in the early Universe,[42] implying that active galactic nuclei could be early stages of galactic evolution. This leads to the question of what the local (modern-day) counterparts of AGNs found at large redshifts would be. It has been proposed that NLSy1s could be the small redshift counterparts of quasars found at large redshifts (z > 4). The two have many similar properties, for example high metallicities or a similar pattern of emission lines (strong Fe [II], weak O [III]).[43] Some observations suggest that AGN emission from the nucleus is not spherically symmetric and that the nucleus often shows axial symmetry, with radiation escaping in a conical region. Based on these observations, models have been devised to explain the different classes of AGNs as due to their different orientations with respect to the observational line of sight. Such models are called unified models. Unified models explain the difference between Seyfert I and Seyfert II galaxies as being the result of Seyfert II galaxies being surrounded by obscuring tori which prevent us from seeing the broad line region. Quasars and blazars can be fit quite easily in this model. The main problem of such a unification scheme is trying to explain why some AGN are radio loud while others are radio quiet.
It has been suggested that these differences may be due to differences in the spin of the central black hole.[30]

## Examples

Seyfert galaxy MRK 1513

The table below lists a few representative Seyfert galaxies from the Markarian catalog.[44]

| Name | Other names | Longitude | Latitude | Right ascension | Declination |
|------|-------------|-----------|----------|-----------------|-------------|
| Mark 205 | MRK 0205 | 185.4338322 | 75.3106237 | 12h 21m 44.120s | +75° 18′ 38.25″ |
| Mark 231 | MRK 0231 | 194.0593100 | 56.8736767 | 12h 56m 14.2344s | +56° 52′ 25.236″ |
| Mark 266 | NGC 5256 | 204.573720 | 48.276093 | 13h 38m 17.69s | +48° 16′ 33.9″ |
| Mark 270 | NGC 5283 | 205.2739946 | 67.6723111 | 13h 41m 05.759s | +67° 40′ 20.32″ |
| Mark 279 | MRK 0279 | 208.2643618 | 69.3082128 | 13h 53m 03.447s | +69° 18′ 29.57″ |
| Mark 335 | MRK 0335 | 1.5813306 | 20.2029144 | 00h 06m 19.519s | +20° 12′ 10.49″ |
| Mark 530 | NGC 7603 | 349.7359060 | 0.2439521 | 23h 18m 56.617s | +00° 14′ 38.23″ |
| Mark 590 | NGC 0863 | 33.6398442 | -0.7666930 | 02h 14m 33.562s | −00° 46′ 00.09″ |
| Mark 686 | NGC 5695 | 219.3421784 | 36.5678087 | 14h 37m 22.123s | +36° 34′ 04.11″ |
| Mark 744 | NGC 3786 | 174.9272970 | 31.9092853 | 11h 39m 42.551s | +31° 54′ 33.43″ |

## References

1. ^ a b Peterson, B. M. (1997). An Introduction to Active Galactic Nuclei. Cambridge University Press. ISBN 978-0-521-47911-0. 2. ^ a b Maiolino, R.; Rieke, G. H. (1995). "Low-Luminosity and Obscured Seyfert Nuclei in Nearby Galaxies". The Astrophysical Journal 454: 95–105. Bibcode:1995ApJ...454...95M. doi:10.1086/176468. 3. ^ See "Seyfert Galaxies" as reproduced from Davidsen, A. F. (1993). "Far-Ultraviolet Astronomy on the Astro-1 Space Shuttle Mission". Science 259 (5093): 327–334. Bibcode:1993Sci...259..327D. doi:10.1126/science.259.5093.327. 4. ^ Soper, D. E. "Seyfert Galaxies". University of Oregon. Retrieved 11 October 2013. 5. ^ a b Seyfert, C. K. (1943). "Nuclear Emission in Spiral Nebulae". The Astrophysical Journal 97: 28–40. Bibcode:1943ApJ....97...28S. doi:10.1086/144488. 6. ^ Hubble, E. P. (1926). "Extragalactic nebulae". The Astrophysical Journal 64: 321–369. Bibcode:1926ApJ....64..321H. doi:10.1086/143018. 7. ^ Reber, Grote (1944). "Cosmic Static". The Astrophysical Journal 100: 279–287. Bibcode:1944ApJ...100..279R. doi:10.1086/144668. 8. ^ Bolton, J. G.; Stanley, G. J. (1948). "Observations on the Variable Source of Cosmic Radio Frequency Radiation in the Constellation of Cygnus". Australian Journal of Scientific Research A 1: 58–69. Bibcode:1948AuSRA...1...58B. 9. ^ Hanbury Brown, R.; Jennison, R. C.; Das Gupta, M. K. (1952). "Apparent Angular Sizes of Discrete Radio Sources: Observations at Jodrell Bank, Manchester". Nature 170: 1061–1063. doi:10.1038/1701061a0. 10. ^ Torres-Papaqui, J. P. "TEMA 1. Introduction Active Galactic Nuclei: History and Overview". Universidad de Guanajuato. Retrieved 8 October 201. 11. ^ Walker, M. F. (1968). "Studies of Extragalactic Nebulae. V. Motions in the Seyfert Galaxy NGC 1068". The Astrophysical Journal 151: 71–97. Bibcode:1968ApJ...151...71W. doi:10.1086/149420. 12. ^ a b See "Seyfert Galaxies Currently Known", reproduced from Weedman, D. W. (1977). "Seyfert Galaxies". Annual Reviews of Astronomy and Astrophysics 15: 69–95. Bibcode:1977ARA&A..15...69W. doi:10.1146/annurev.aa.15.090177.000441. 13. ^ Peterson, S. D. (1973). "Optical Positions of the Markarian Galaxies". The Astronomical Journal 78 (9): 811–827. Bibcode:1973AJ.....78..811P. doi:10.1086/111488. 14. ^ de Vaucouleurs, G.; de Vaucouleurs, A. (1968). "Photographic, Photometric, and Spectroscopic Observations of Seyfert Galaxies". The Astronomical Journal 73 (9): 858–861. Bibcode:1968AJ.....73..858D. doi:10.1086/110717. 15. ^ Adams, T. F. (1977).
"A Survey of the Seyfert Galaxies Based on Large-Scale Image-Tube Plate". The Astrophysical Journal Supplement Series 33: 19–34. Bibcode:1977ApJS...33...19A. doi:10.1086/190416. 16. ^ Weedman, D. W. (1973). "A Photometric Study of Markarian Galaxies". The Astrophysical Journal 183: 29–40. Bibcode:1973ApJ...183...29W. doi:10.1086/152205. 17. ^ Osterbrock, D. E.; Koski, A. T. (1976). "NGC 4151 and Markarian 6: Two intermediate-type Seyfert galaxies". Monthly Notices of the Royal Astronomical Society 176: 61–66. Bibcode:1976MNRAS.176P..61O. 18. ^ Osterbrock, D. E.; Martel, A. (1993). "Spectroscopic study of the CfA sample of Seyfert galaxies". The Astrophysical Journal 414 (2): 552–562. Bibcode:1993ApJ...414..552O. doi:10.1086/173102. 19. ^ a b c Massi, M. "Active Galaxies". Max Planck Institute for Radio Astronomy. Retrieved 10 November 2013. 20. ^ Osterbrock, D. E.; Ferland, G. J. (2006). Astrophysics of Gaseous Nebulae and Active Galactic Nuclei. University Science Books. ISBN 978-1-891389-34-4. 21. ^ Yoshida, Shigeru. "The Eddington Limit". Department of Physics, Chiba University. Retrieved 7 December 2013. 22. ^ Blandford, Roger D. "Active Galaxies and Quasistellar Objects, Accretion". NASA/IPAC Extragalactic Database. Retrieved 6 December 2013. 23. ^ Peterson, B. M.; Horne, K. (2004). "Echo mapping of active galactic nuclei". Astronomische Nachrichten 325 (3): 248–251. arXiv:astro-ph/0407538. Bibcode:2004AN....325..248P. doi:10.1002/asna.200310207. 24. ^ Peterson, B. M. et al. (2004). "Central Masses and Broad-Line Region Sizes of Active Galactic Nuclei. II. A Homogeneous Analysis of a Large Reverberation-Mapping Database". The Astrophysical Journal 613 (2): 682–699. arXiv:astro-ph/0407299. Bibcode:2004ApJ...613..682P. doi:10.1086/423269. 25. ^ Haardt, F.; Maraschi, L. (1991). "A two-phase model for the X-ray emission from Seyfert galaxies". The Astrophysical Journal Letters 380: L51–L54. Bibcode:1991ApJ...380L..51H. doi:10.1086/186171. 26. ^ "A wanderer dancing the dance of stars and space". SpaceTelescope.org. NASA/ESA. 24 December 2012. 27. ^ "Forbidden lines". Encyclopaedia Britannica. Retrieved 27 November 2013. 28. ^ Carroll, B. W.; Ostlie, D. A. (2006). An Introduction to Modern Astrophysics (2nd ed.). Pearson Education. pp. 1085–1086. ISBN 978-1-292-02293-2. 29. ^ a b Pradhan, A. K.; Nahar, S. N. (2011). Atomic Astrophysics and Spectroscopy. Cambridge University Press. pp. 278–304. ISBN 978-0-521-82536-8. 30. ^ a b Armitage, P. (2004). "Classification of AGN". ASTR 3830 Lecture Notes. University of Colorado Boulder. Retrieved 10 November 2013. 31. ^ Morgan, S. "Distant and Weird Galaxies". Astronomy Course Notes and Supplementary Material. University of Northern Iowa. Retrieved 10 October 2013. 32. ^ Barthel, P. (1991). "Active galaxies and quasistellar objects, interrelations of various types". In Maran, S. P. Astronomy and Astrophysics Encyclopedia. Wiley-Interscience. ISBN 978-0-471-28941-8. 33. ^ "Seyfert galaxies". California Institute of Technology. Retrieved 10 October 2013. 34. ^ 35. ^ Heckman, T. M. (1980). "An optical and radio survey of the nuclei of bright galaxies - Activity in normal galactic nuclei". Astronomy and Astrophysics 87 (1–2): 152–164. Bibcode:1980A&A....87..152H. 36. ^ Starling, R. L. C. (2005). Astrophysics and Space Science. Springer Netherlands. pp. 81–86. ISBN 978-1-4020-4084-9. 37. ^ Osterbrock, D. E.; Pogge, R. W. (1985). "The spectra of narrow-line Seyfert 1 galaxies". The Astrophysical Journal 297: 166–176. Bibcode:1985ApJ...297..166O. 
doi:10.1086/163513. 38. ^ Boller, T.; Brandt, W. N; Fink, H. (1996). "Soft X-ray properties of narrow-line Seyfert 1 galaxies". Astronomy and Astrophysics 305: 53. arXiv:astro-ph/9504093. Bibcode:1996A&A...305...53B. 39. ^ Mathur, S.; Grupe, D. (2005). "Black hole growth by accretion". Astronomy and Astrophysics 432 (2): 463–466. arXiv:astro-ph/0407512. Bibcode:2005A&A...432..463M. doi:10.1051/0004-6361:20041717. 40. ^ Komossa, S. (2007). "Narrow line Seyfert 1 galaxies". arXiv:0710.3326 [astro-ph]. 41. ^ "Active Galaxies and Quasars". NASA/GSFC. Retrieved 21 November 2013. 42. ^ "Quasars". Astronomy 162 Lecture Notes. University of Tennessee, Department of Physics & Astronomy. Retrieved 21 November 2013. 43. ^ Mathur, S. (2000). "Narrow Line Seyfert 1 Galaxies and the Evolution of Galaxies & Active Galaxies". Monthly Notices of the Royal Astronomical Society 314 (4): L17. arXiv:astro-ph/0003111. Bibcode:2000MNRAS.314L..17M. doi:10.1046/j.1365-8711.2000.03530.x. 44. ^ Shlosman, I. (6 May 1999). "Seyfert Galaxies". University of Kentucky. Retrieved 30 October 2013.
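To make the Eddington-luminosity argument from the Characteristics section above concrete, here is a minimal sketch (Python; the physical constants are standard values and the example luminosity is an arbitrary illustrative choice) that inverts L < L_Edd = 4πGM_BH·m_p·c/σ_T to obtain a lower limit on the black-hole mass from an observed nuclear luminosity.

```python
import math

G       = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
c       = 2.998e8      # speed of light, m/s
m_p     = 1.673e-27    # proton mass, kg
sigma_T = 6.652e-29    # Thomson scattering cross-section, m^2
L_sun   = 3.846e26     # solar luminosity, W
M_sun   = 1.989e30     # solar mass, kg

L_obs = 1e10 * L_sun   # assumed observed nuclear luminosity (illustrative value)

# L_obs < L_Edd = 4*pi*G*M_BH*m_p*c / sigma_T  =>  M_BH > L_obs*sigma_T / (4*pi*G*m_p*c)
M_min = L_obs * sigma_T / (4 * math.pi * G * m_p * c)
print(f"Eddington lower limit: M_BH > {M_min:.2e} kg = {M_min / M_sun:.2e} solar masses")

# Cross-check of the coefficient quoted in the article: L_Edd for one solar mass,
# expected to be about 1.3e38 erg/s (1 W = 1e7 erg/s).
L_edd_sun = 4 * math.pi * G * M_sun * m_p * c / sigma_T
print(f"L_Edd(1 M_sun) = {L_edd_sun * 1e7:.2e} erg/s")
```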
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 8, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8993712067604065, "perplexity": 2375.1586553733664}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386163066152/warc/CC-MAIN-20131204131746-00092-ip-10-33-133-15.ec2.internal.warc.gz"}
https://researchportal.unamur.be/fr/publications/theoretical-and-computational-chemistry-of-complex-systems-solvat
# Theoretical and Computational Chemistry of Complex Systems: Solvation of DNA and Proteins

E. Clementi, G. Corongiu, M. Gratarola, P. Habitz, C. Lupo, P. Otto, D. Vercauteren

Research output: Contribution to a journal › Article › Peer-reviewed

## Abstract

As is known, atomic and very small molecular systems can be realistically simulated by quantum mechanical models. In complex chemical systems, however, the natural parameters are not only electronic density and energy but also entropy, temperature, and time. To approach this description we use a hierarchy of theoretical models. In model 1, the system is considered as an ensemble of fixed nuclei and electrons—the standard quantum chemical approach. However, our definition of model 1 is broader, since it includes aspects of solid-state physics. In model 2, the system is considered as an ensemble of atoms (or ions) and atom-pair potentials are obtained using data from model 1. In model 3, the phase space is scanned either for the generalized coordinates (Monte Carlo) or for both space and momentum coordinates (molecular dynamics). In model 4 (presently not considered) the fluid dynamical equations are solved making use of coefficients and parameters obtained from the previous models. Both theoretical and computational improvements are needed at each level in order to reach a sufficiently realistic simulation for complex systems. We have summarized some recent progress obtained for models 1 and 2 related to new methods for molecular computations and studies on three-body effects and energy band computations in DNA-related polymers. We have considered as examples of a complex chemical system the structure of water surrounding DNA (with counterions) and enzymes. Our results from model 3 include the first determination of the position of the Li+, Na+, and K+ counterions in B and Z DNA at room temperature at high relative humidity, and hydration studies on enzymes including variations due to the solvent pH. Copyright © 1982 John Wiley & Sons, Inc.

Original language: English
Pages (from–to): 409–433
Number of pages: 25
Journal: Internat. J. Quantum Chem., Quantum Chem. Symp.
Volume: 22
Issue: 16 S
DOI: https://doi.org/10.1002/qua.560220840
Publication status: Published - 1982
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8636505603790283, "perplexity": 2160.800527060827}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103034930.3/warc/CC-MAIN-20220625095705-20220625125705-00250.warc.gz"}
https://math.stackexchange.com/questions/2110964/triviality-of-vector-bundles-over-lie-groups
# Triviality of vector bundles over Lie groups

I was wondering whether the triviality of the tangent bundle of a Lie group is a shared feature with all other vector bundles over Lie groups. Naturally, the first example of a non-trivial vector bundle that comes to mind is the Moebius strip, which is a vector bundle over $S^1$ -a Lie group-, so the answer is no. However, the tangent bundle of a Lie group is also a group, and is hence orientable (unlike the Moebius strip). So, my question is: Are orientable vector bundles over Lie groups trivial? or actually better, does anybody know an example of a non-trivial vector bundle over a Lie group which is itself a Lie group?

I thought that a way around this question was to inspect the Euler class of the given bundle, because I thought that I knew the cohomology of Lie groups. I had Borel's theorem in mind: "If $G$ is a compact connected Lie group, then the cohomology ring is an exterior algebra of odd degree". I thought that this implied that even rank bundles would be necessarily trivial. Unfortunately, later I realized this doesn't say that there are no elements of even degree... Also, this would only work for compact groups, and I would like to say something of non-compact groups as well.

• As a partial answer, every vector bundle over $\mathrm{SU}(2)$ is trivial. This is because $\mathrm{SU}(2)$ is homeomorphic to the sphere $S^3$ and hence vector bundles of rank $r$ are classified by clutching functions $S^2\to \mathrm{GL}(r)$. But $\pi_2(\mathrm{GL}(2))=1$ (as for any Lie group) so these maps are null-homotopic. – Spenser Jan 23 '17 at 22:50

Here's an indirect way to see that most compact connected Lie groups have nontrivial complex vector bundles over them. The starting point is the observation that if $X$ is a finite CW complex then taking Chern characters gives an isomorphism

$$K(X) \otimes \mathbb{Q} \cong H^{2 \bullet}(X, \mathbb{Q})$$

between the rationalized complex K-theory of $X$ and the rational even cohomology of $X$. Now, almost all compact connected Lie groups have nontrivial rational even cohomology: perhaps the simplest example is $S^1 \times S^1$, and the simplest simply connected example is $SU(2) \times SU(2) \cong \text{Spin}(4)$, which has rational cohomology $\mathbb{Q}[x_3, y_3]$ where $x_3, y_3$ are odd, and hence their product $x_3 y_3$ is even.

Because the Chern character isomorphism is defined in terms of Chern classes, the conclusion is that not only does there exist a nontrivial complex vector bundle on $SU(2) \times SU(2)$, but there exists one with nontrivial Chern class $c_3$.

More explicitly, the classification of complex line bundles over $S^1 \times S^1$ is given by $H^2(S^1 \times S^1, \mathbb{Z}) \cong \mathbb{Z}$, so there are countably many nontrivial complex line bundles over $S^1 \times S^1$. They can all be described as holomorphic line bundles over elliptic curves in terms of divisors and meromorphic functions, if you like.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.952075719833374, "perplexity": 92.18441500254364}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195524290.60/warc/CC-MAIN-20190715235156-20190716021156-00137.warc.gz"}
https://math.stackexchange.com/questions/145523/links-between-difference-and-differential-equations
# Links between difference and differential equations? Does there exist any correspondence between difference equations and differential equations? In particular, can one cast some classes of ODEs into difference equations or vice versa? • Correspondence in what sense? Could you suggest an example? – Pedro Tamaroff May 15 '12 at 16:19 • Mostly I am interested to know if it is possible to reformulate ones in terms of anothers, but more general possible studied relations would be also interesting. – Alexey Bobrick May 15 '12 at 16:30 • They are a bit, but not totally different. – AD. May 15 '12 at 16:34 • This might generate some interesting answers, I hope. – AD. May 15 '12 at 16:36 • Found a related note in JD Murray, Mathematical Biology I: An Introduction, Chapter 2.: "there is no simple connection between difference equation models and what might appear to be the continuous differential equation analogue, even though a finite difference approximation results in a discrete equation". – np8 Dec 17 '20 at 20:32 Yes, obviously there is some correspondence (numerical solutions of ordinary differential equations are discrete dynamical systems, many proofs in bifurcation theory uses continuous time dynamical systems to analyze discrete original problems, etc). However, the most profound attempt to build a theory that unites difference and differential equations is the time scale calculus. First order example Consider the difference equation $$\begin{equation*} x_{n+1} = a x_n,\tag{1} \end{equation*}$$ where $$x_0$$ is given, with solution $$\begin{equation*} x_n = a^n x_0. \tag{2} \end{equation*}$$ Let $$x_n = x(t_n)$$, where $$t_{n+1} = t_n + h$$ and $$t_0 = 0$$, so $$t_n = n h$$. Rewrite the difference equation (1) as $$\frac{x(t_{n}+h)-x(t_n)}{h} = \frac{(a-1)}{h} x(t_n).$$ If $$h$$ is small, this can be approximated by the differential equation $$\begin{equation*} x'(t) = \frac{a-1}{h}x(t),\tag{3} \end{equation*}$$ with solution $$\begin{equation*} x(t) = x(0) \exp \left(\frac{a-1}{h} t\right).\tag{4} \end{equation*}$$ Notice that $$\exp\left(\frac{a-1}{h} t\right) = a^{n} + O(n(a-1)^2)$$, so for $$n\ll 1/(a-1)^2$$ we find good agreement with (2). Going in the reverse direction, from the differential equation (3) to the difference equation (1), is called Euler's method. There is a subtlety when approximating a difference equation by a differential equation. For this problem we require that $$|a-1| \ll 1$$ since we need $$x_{n+1}- x_n = O(h) \ll 1$$. Otherwise, we should expect (and will find) the approximation of (1) by (3) to be poor. Sometimes it is necessary to reformulate the difference equation so the difference between successive terms is small.$${}^\dagger$$ $${}^\dagger$$ For example, suppose $$a$$ in (1) were large and positive. Let $$y_0=x_0$$ and let $$y_{n+1} = a^{1/p} y_n$$, where $$p$$ is some large integer so $$|a^{1/p} - 1|\ll 1$$. Note that $$y_{n p} = x_n$$. The solution to the corresponding differential equation for $$y$$ will be a good approximation to $$y_n$$, and this in turn can be used to approximate $$x_n$$. Second order example Consider the differential equation for the harmonic oscillator $$\begin{equation*} x''+x = 0, \hspace{5ex} x(0)=1, \hspace{5ex}x'(0)=0, \tag{5} \end{equation*}$$ the solution for which is $$x(t) = \cos t$$. The simplest related difference equation is $$\begin{equation*} \frac{1}{h^2}(x_{n+2}-2x_{n+1}+x_n) + x_n = 0, \tag{6} \end{equation*}$$ with $$x_0 = x_1 = 1$$. (We show how to get (6) from (5) below.) 
There are standard techniques for solving simple recurrence relations such as (6) in closed form. We find $$\begin{equation*} x_n = \frac{1}{2} \left((1+i h)^{n}+(1 - i h)^{n}\right). \end{equation*}$$ Note that $$x_n$$ is real since $$x_n^* = x_n$$. This is the closed form for the solution a computer would find solving (6) iteratively. Recall that $$n=t_n/h$$. For small $$h$$, $$x_n = \cos t_n + \frac{1}{2} h t_n \cos t_n + O(h^2)$$, so the numerical solution to (6) will well approximate $$\cos t_n$$ for $$h t_n \ll 1$$. In the limit $$h\to 0$$, $$x_n \to \cos t_n,$$ as expected. Derivatives and finite differences Morally, a difference equation is a discrete version of a differential equation and a differential equation is a continuous version of a difference equation. The method of numerical integration of ODEs is essentially the rewriting of a differential equation as a difference equation which is then solved iteratively by a computer. This is a big subject with many subtleties. The method can also be applied to nonlinear and partial differential equations. Below we give a brief dictionary between finite difference and differential operators. Define the shift operator $$E$$ such that $$E x_n = x_{n+1}$$. The difference operator $$\Delta = E-1$$ then gives $$\Delta x_n = x_{n+1}-x_n$$. These operators are connected to differentiation by Taylor series. Let $$D x_n = x_n' = d x(t)/d t|_{t=t_n}$$. Then $$x_{n+1} = E x_n = \sum_{k=0}^\infty \frac{h^k}{k!} D^k x_n = e^{h D}x_n.$$ Thus, as an operator, $$E = e^{h D},$$ and so $$D = \frac{1}{h} \ln E = \frac{1}{h} \ln(1+\Delta) = \frac{1}{h}\sum_{k=1}^\infty (-1)^{k+1} \frac{\Delta^k}{k}.$$ (Think of these operators as acting on polynomials, possibly of very high order.) This formalism gives us a way to convert any ODE into a difference equation and vice-versa. Notice that higher order derivatives can be approximated by $$D^k\approx (\Delta/h)^k$$. Thus, for example, $$x'' = D^2 x \rightarrow \frac{1}{h^2}\Delta^2x_n = \frac{1}{h^2}\Delta(x_{n+1}-x_n) = \frac{1}{h^2} (x_{n+2}-2 x_{n+1} + x_n).$$ When using Euler's method we let $$D \approx \Delta/h$$. We could just as well keep higher order terms in $$\Delta$$ to get recursion relations with three or more terms. It is a sign of the nontrivial nature of the subject that this simple change leads to numerical instabilities. There are many named algorithms that do improve and generalize Euler's method, building on the ideas sketched above. (See, for example, the Runge-Kutta methods, a family of robust algorithms used to numerically solve linear and nonlinear differential equations.) This is in response to Drew N's question about seeing directly that $$D = \frac{1}{h}\sum_{k=1}^\infty (-1)^{k+1} \frac{\Delta^k}{k}$$ is the differential operator. Here we show that $$D t^n = n t^{n-1}$$ for $$n=0,1,\ldots$$, which shows that $$D$$ is the differential operator on polynomials of arbitrarily high degree. (It is straightforward to extend this proof to $$n\in\mathbb{C}$$.) 
We have \begin{align*} D t^n &= \frac{1}{h} \sum_{k=1}^\infty (-1)^{k+1}\frac{1}{k}\Delta^k t^n \\ &= \frac{1}{h} \sum_{k=1}^\infty (-1)^{k+1}\frac{1}{k}(E-1)^k t^n \\ &= \frac{1}{h} \sum_{k=1}^\infty (-1)^{k+1}\frac{1}{k} \sum_{j=0}^k{k\choose j}(-1)^{k-j}E^j t^n & \textrm{binomial theorem} \\ &= \frac{1}{h} \sum_{k,j} (-1)^{j+1}\frac{1}{k}{k\choose j}(t+j h)^n \\ &= \frac{1}{h} \sum_{k,j} (-1)^{j+1}\frac{1}{k}{k\choose j} \sum_{l=0}^n {n\choose l}j^l h^l t^{n-l} & \textrm{binomial theorem} \\ &= \left.\frac{1}{h} \sum_{k,j,l} (-1)^{j+1}\frac{1}{k} {k\choose j}{n\choose l} (x D)^l x^j h^l t^{n-l}\right|_{x=1} & D = d/dx, (x D)^l = \underbrace{(x D)\cdots (x D)}_{l} \\ &= \left.-\frac{1}{h} \sum_{k,l} \frac{1}{k} {n\choose l} h^l t^{n-l} (x D)^l \sum_{j=0}^k {k\choose j}(-x)^j \right|_{x=1} \\ &= \left.-\frac{1}{h} \sum_{k,l} \frac{1}{k} {n\choose l} h^l t^{n-l} (x D)^l (1-x)^k \right|_{x=1} & \textrm{binomial theorem} \\ &= \left.-\frac{1}{h} \sum_{l} {n\choose l} h^l t^{n-l} (x D)^l \sum_{k=1}^\infty \frac{1}{k} (1-x)^k \right|_{x=1} \\ &= \left.\frac{1}{h} \sum_{l} {n\choose l} h^l t^{n-l} (x D)^l \log x\right|_{x=1} & \textrm{series for natural logarithm} \\ &= \frac{1}{h} \sum_{l=0}^n {n\choose l} h^l t^{n-l} \delta_{l1} & \textrm{see below} \\ &= \frac{1}{h} {n\choose 1} h t^{n-1} \\ &= n t^{n-1}. \end{align*} Note that $$(x D)x^j = j x^j$$ so $$(x D)^l x^j = j^l x^j.$$ Also \begin{align*} (x D)^0 \log x|_{x=1} &= \log 1 = 0 \\ (x D)^1 \log x &= x \frac{1}{x} = 1 \\ (x D)^2 \log x &= (x D)1 = 0 \\ (x D)^3 \log x &= (x D)^2 0 = 0. \end{align*} Thus, $$(x D)^l \log x|_{x=1} = \delta_{l1},$$ where $$\delta_{ij}$$ is the Kronecker delta. • Great work. Very helpful, concise, and clear. Thank you. – wesssg Oct 7 '16 at 5:10 • Why did you apply limit only to left side in formula (3)? I though that h should not appear at right side – LmTinyToon Dec 16 '16 at 10:28 • Using operator formalism you establish $D = (1/h) \sum_{k=1}^\infty (-1)^{k+1} \Delta^{k}/k$, which I believe expands to $1/h\big[ (x_{n+1} - x_n) - (1/2) (x_{n+2} - 2x_{n+1} + x_n) + \dots \big]$. But I just don't intuit why this expression of $D$ must correctly give $Dx_n = x'(t_n)$. One can follow the operator formalism line by line, but is there a way to be convinced this really is $x'(t_n)$ just from the form of this expression alone? – Drew N Dec 16 '18 at 4:05 • I don't understand how $\mathrm{Exp}(n(a-1)) = a^n + O(n(a-1)^2)$ with Taylor series. Can someone please explain that? – user3433489 Sep 5 '20 at 21:50 • @user3433489: We expand in small $a-1$ for fixed $n$. We have \begin{align*} e^{n(a-1)} &= (e^{a-1})^n \\ &= (1+(a-1)+(a-1)^2/2+\ldots)^n \\ &= (a+(a-1)^2/2+\ldots)^n \\ &= a^n + n a^{n-1}(a-1)^2/2+\ldots \\ &= a^n + n(a-1)^2/2+\ldots \end{align*} In the last step we use that $a^{n-1} = (1+(a-1))^{n-1} = 1+O((n-1)(a-1))$. – user26872 Sep 7 '20 at 16:57 (Adding to user26872's answer as this was not obvious to me so it might help someone else going through his derivation) The identity $$x_{n+1} = Ex_n = \sum\limits_{k=0}^\infty \frac{h^k}{k!}D^k x_n=e^{hD}x_n$$ is true if we consider the following. Let $x_n = x(t_n)$. 
Let's assume that $x(t)$ is differentiable around some point $t_n$, then its Taylor series can be defined as $$x(t) = \sum\limits_{k=0}^\infty \frac{x^{(k)}(t_n)}{k!}(t-t_n)^k.$$ If we now perform the same expansion at $t' = t_n+h$ we have $$x(t_n+h) = \sum\limits_{k=0}^\infty \frac{x^{(k)}(t_n)}{k!}h^k = \sum\limits_{k=0}^\infty \frac{h^k D^k}{k!}x(t) = e^{hD} x(t_n)$$ and so $$x_{n+1} = Ex_n \leftrightarrow x(t_n + h) = E x(t_n)$$ thus giving $$E = e^{hD}.$$

One example: You can take the Laplace transform of a discrete function by expressing the target of the Laplace transform as the product of a continuous function and a sequence of shifted delta functions and summing up: $\sum_{k=0}^\infty\int_{0}^\infty f(t)\delta(t - kT)e^{-st}dt$ = $\sum_{k=0}^\infty f(kT)e^{-skT}$, where T is the sampling period. The (unilateral) Z transform is defined as $\sum_{k=0}^{\infty}x[k]z^{-k}$, for a discrete time signal $x[k]$. By substituting $z = e^{sT}$ in the formula for the discretized Laplace transform, the Z transform is obtained, and the s and Z domains are related by $s = \frac{1}{T}\log(Z)$. This is obviously a nonlinear transformation; points left of the imaginary axis in the s plane are mapped to the interior of the unit circle of the Z plane, while points to the right of the imaginary axis are mapped to the exterior. One can use the Bilinear transform, $s = \frac{2(z -1)}{T(z + 1)}$, as a first order approximation to $s = \frac{1}{T}\log(Z)$. So it's possible to (approximately) transform any equation in the s domain into a corresponding equation in the Z domain. This is useful for deriving a difference equation, since it can be shown that the Z transform of a shift, $Z(x[k - n])$, is equal to simply $z^{-n}X(z).$ Due to this property it is often possible to go easily from the Z transform of an expression to a difference equation, simply by inspecting inverse powers of Z and their coefficients in the equation. In short, if the differential equation is amenable to the Laplace transform (i.e. linear time invariant), it can be quite straightforward using this method to get a difference equation representation.

First see the correspondence between discrete and continuous dynamical systems. Both are acts of a monoid on some set $$X$$. For discrete dynamical systems the monoid that acts is $$\mathbb{N}$$. For continuous dynamical systems the monoid that acts is $$\mathbb{R}$$. This action comes in the form of a left multiplication $$(.,.):\mathbb{N}\times X \rightarrow X$$ or $$(.,.):\mathbb{R}\times X \rightarrow X$$ and is compatible with the addition structure of $$\mathbb{N}$$ and $$\mathbb{R}$$ respectively. I.e. for all $$n,m \in \mathbb{N}$$ or $$\mathbb{R}$$ and $$x \in X$$ we have $$(0,x) = x$$ and $$(n,(m,x)) = (n + m,x)$$. Now on the story of difference and differential equations. A first order difference equation equals a discrete dynamical system. Note that any difference equation can be converted to a system of first order difference equations (see higher order difference equations). Hence any difference equation equals a discrete dynamical system. Note that the $$\mathbb{N}$$-act is given by $$(n,x) = f^n(x)$$. If the differential equation is sufficiently smooth, there exists a unique solution for any point ($$\phi_x^t$$). We use this solution as the $$\mathbb{R}$$-act. I.e. $$(t,x) = \phi_x^t$$. Hence any sufficiently smooth differential equation equals a continuous dynamical system.
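As a concrete companion to the second-order example in the first answer, here is a minimal Python sketch (the step size and number of steps are arbitrary choices) that iterates the harmonic-oscillator difference equation (6) and compares the result with the exact solution cos t of the differential equation (5).

```python
import math

h = 0.01        # step size (arbitrary choice)
steps = 200     # number of iterations, so t runs up to about (steps + 1) * h

# Difference equation (6): (x_{n+2} - 2 x_{n+1} + x_n)/h^2 + x_n = 0,
# rearranged as x_{n+2} = 2 x_{n+1} - (1 + h^2) x_n, with x_0 = x_1 = 1.
x_prev, x_curr = 1.0, 1.0
for n in range(steps):
    x_prev, x_curr = x_curr, 2.0 * x_curr - (1.0 + h * h) * x_prev

t = (steps + 1) * h
print(f"difference equation: x_{steps + 1} = {x_curr:.6f}")
print(f"exact ODE solution : cos({t:.2f}) = {math.cos(t):.6f}")
# The two agree up to an O(h*t) error, consistent with the expansion
# x_n = cos(t_n) + (1/2) h t_n cos(t_n) + O(h^2) given in the answer.
```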
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 80, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9995088577270508, "perplexity": 423.9400771977119}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178351134.11/warc/CC-MAIN-20210225124124-20210225154124-00256.warc.gz"}
https://unistroy.spbstu.ru/en/article/2021.94.5/
# Numerical and Analytical Study on Bending Stiffness of Sandwich Panels at Ambient and Elevated Temperatures

Abstract: This paper presents an investigation of the bending stiffness of sandwich panels at ambient and elevated temperatures. A finite element (FE) model is developed and verified against experimental results, and then a parametric study at different temperatures is carried out. After that, an analytical study to determine the bending stiffness at room temperature according to the current specification is conducted. Furthermore, the analytical solutions are extended for use at elevated temperatures. The objective of the current research is to compare the numerical and analytical results. It is observed that the analytical solutions developed to evaluate the bending stiffness at elevated temperatures are conservative and reliable.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9925394654273987, "perplexity": 4560.015544041133}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154878.27/warc/CC-MAIN-20210804142918-20210804172918-00571.warc.gz"}
http://nag.com/numeric/FL/nagdoc_fl24/html/G03/g03dbf.html
# NAG Library Routine Document G03DBF

Note: before using this routine, please read the Users' Note for your implementation to check the interpretation of bold italicised terms and other implementation-dependent details.

## 1  Purpose

G03DBF computes Mahalanobis squared distances for group or pooled variance-covariance matrices. It is intended for use after G03DAF.

## 2  Specification

SUBROUTINE G03DBF ( EQUAL, MODE, NVAR, NG, GMN, LDGMN, GC, NOBS, M, ISX, X, LDX, D, LDD, WK, IFAIL)
INTEGER NVAR, NG, LDGMN, NOBS, M, ISX(*), LDX, LDD, IFAIL
REAL (KIND=nag_wp) GMN(LDGMN,NVAR), GC((NG+1)*NVAR*(NVAR+1)/2), X(LDX,*), D(LDD,NG), WK(2*NVAR)
CHARACTER(1) EQUAL, MODE

## 3  Description

Consider $p$ variables observed on $n_g$ populations or groups. Let $\bar{x}_j$ be the sample mean and $S_j$ the within-group variance-covariance matrix for the $j$th group, and let $x_k$ be the $k$th sample point in a dataset. A measure of the distance of the point from the $j$th population or group is given by the Mahalanobis distance, $D_{kj}$:

$$D_{kj}^2 = (x_k - \bar{x}_j)^{\mathrm{T}} S_j^{-1} (x_k - \bar{x}_j).$$

If the pooled estimate of the variance-covariance matrix $S$ is used rather than the within-group variance-covariance matrices, then the distance is:

$$D_{kj}^2 = (x_k - \bar{x}_j)^{\mathrm{T}} S^{-1} (x_k - \bar{x}_j).$$

Instead of using the variance-covariance matrices $S$ and $S_j$, G03DBF uses the upper triangular matrices $R$ and $R_j$ supplied by G03DAF such that $S = R^{\mathrm{T}}R$ and $S_j = R_j^{\mathrm{T}}R_j$. $D_{kj}^2$ can then be calculated as $z^{\mathrm{T}}z$ where $R_j z = (x_k - \bar{x}_j)$ or $Rz = (x_k - \bar{x}_j)$ as appropriate.

A particular case is when the distance between the group or population means is to be estimated. The Mahalanobis squared distance between the $i$th and $j$th groups is:

$$D_{ij}^2 = (\bar{x}_i - \bar{x}_j)^{\mathrm{T}} S_j^{-1} (\bar{x}_i - \bar{x}_j)$$

or

$$D_{ij}^2 = (\bar{x}_i - \bar{x}_j)^{\mathrm{T}} S^{-1} (\bar{x}_i - \bar{x}_j).$$

Note: $D_{jj}^2 = 0$, and in the case when the pooled variance-covariance matrix is used $D_{ij}^2 = D_{ji}^2$, so in this case only the lower triangular values of $D_{ij}^2$, $i > j$, are computed.

## 4  References

Aitchison J and Dunsmore I R (1975) Statistical Prediction Analysis Cambridge
Kendall M G and Stuart A (1976) The Advanced Theory of Statistics (Volume 3) (3rd Edition) Griffin
Krzanowski W J (1990) Principles of Multivariate Analysis Oxford University Press

## 5  Parameters

1:     EQUAL – CHARACTER(1) Input
On entry: indicates whether or not the within-group variance-covariance matrices are assumed to be equal and the pooled variance-covariance matrix used.
EQUAL = 'E': the within-group variance-covariance matrices are assumed equal and the matrix $R$ stored in the first $p(p+1)/2$ elements of GC is used.
EQUAL = 'U': the within-group variance-covariance matrices are assumed to be unequal and the matrices $R_j$, for $j=1,2,\dots,n_g$, stored in the remainder of GC are used.
Constraint: EQUAL = 'E' or 'U'.

2:     MODE – CHARACTER(1) Input
On entry: indicates whether distances from sample points are to be calculated or distances between the group means.
MODE = 'S': the distances between the sample points given in X and the group means are calculated.
MODE = 'M': the distances between the group means will be calculated.
Constraint: MODE = 'M' or 'S'.
3:     NVAR – INTEGERInput On entry: $p$, the number of variables in the variance-covariance matrices as specified to G03DAF. Constraint: ${\mathbf{NVAR}}\ge 1$. 4:     NG – INTEGERInput On entry: the number of groups, ${n}_{g}$. Constraint: ${\mathbf{NG}}\ge 2$. 5:     GMN(LDGMN,NVAR) – REAL (KIND=nag_wp) arrayInput On entry: the $\mathit{j}$th row of GMN contains the means of the $p$ selected variables for the $\mathit{j}$th group, for $\mathit{j}=1,2,\dots ,{n}_{g}$. These are returned by G03DAF. 6:     LDGMN – INTEGERInput On entry: the first dimension of the array GMN as declared in the (sub)program from which G03DBF is called. Constraint: ${\mathbf{LDGMN}}\ge {\mathbf{NG}}$. 7:     GC($\left({\mathbf{NG}}+1\right)×{\mathbf{NVAR}}×\left({\mathbf{NVAR}}+1\right)/2$) – REAL (KIND=nag_wp) arrayInput On entry: the first $p\left(p+1\right)/2$ elements of GC should contain the upper triangular matrix $R$ and the next ${n}_{g}$ blocks of $p\left(p+1\right)/2$ elements should contain the upper triangular matrices ${R}_{j}$. All matrices must be stored packed by column. These matrices are returned by G03DAF. If ${\mathbf{EQUAL}}=\text{'E'}$ only the first $p\left(p+1\right)/2$ elements are referenced, if ${\mathbf{EQUAL}}=\text{'U'}$ only the elements $p\left(p+1\right)/2+1$ to $\left({n}_{g}+1\right)p\left(p+1\right)/2$ are referenced. Constraints: • if ${\mathbf{EQUAL}}=\text{'E'}$, $R\ne 0.0$; • if ${\mathbf{EQUAL}}=\text{'U'}$, the diagonal elements of the ${R}_{\mathit{j}}\ne 0.0$, for $\mathit{j}=1,2,\dots ,{\mathbf{NG}}$. 8:     NOBS – INTEGERInput On entry: if ${\mathbf{MODE}}=\text{'S'}$, the number of sample points in X for which distances are to be calculated. If ${\mathbf{MODE}}=\text{'M'}$, NOBS is not referenced. Constraint: if ${\mathbf{NOBS}}\ge 1$, ${\mathbf{MODE}}=\text{'S'}$. 9:     M – INTEGERInput On entry: if ${\mathbf{MODE}}=\text{'S'}$, the number of variables in the data array X. If ${\mathbf{MODE}}=\text{'M'}$, M is not referenced. Constraint: if ${\mathbf{M}}\ge {\mathbf{NVAR}}$, ${\mathbf{MODE}}=\text{'S'}$. 10:   ISX($*$) – INTEGER arrayInput Note: the dimension of the array ISX must be at least $\mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(1,{\mathbf{M}}\right)$. On entry: if ${\mathbf{MODE}}=\text{'S'}$, ${\mathbf{ISX}}\left(\mathit{l}\right)$ indicates if the $\mathit{l}$th variable in X is to be included in the distance calculations. If ${\mathbf{ISX}}\left(\mathit{l}\right)>0$ the $\mathit{l}$th variable is included, for $\mathit{l}=1,2,\dots ,{\mathbf{M}}$; otherwise the $\mathit{l}$th variable is not referenced. If ${\mathbf{MODE}}=\text{'M'}$, ISX is not referenced. Constraint: if ${\mathbf{MODE}}=\text{'S'}$, ${\mathbf{ISX}}\left(l\right)>0$ for NVAR values of $l$. 11:   X(LDX,$*$) – REAL (KIND=nag_wp) arrayInput Note: the second dimension of the array X must be at least $\mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(1,{\mathbf{M}}\right)$. On entry: if ${\mathbf{MODE}}=\text{'S'}$ the $\mathit{k}$th row of X must contain ${x}_{\mathit{k}}$. That is ${\mathbf{X}}\left(\mathit{k},\mathit{l}\right)$ must contain the $\mathit{k}$th sample value for the $\mathit{l}$th variable, for $\mathit{k}=1,2,\dots ,{\mathbf{NOBS}}$ and $\mathit{l}=1,2,\dots ,{\mathbf{M}}$. Otherwise X is not referenced. 12:   LDX – INTEGERInput On entry: the first dimension of the array X as declared in the (sub)program from which G03DBF is called. Constraints: • if ${\mathbf{MODE}}=\text{'S'}$, ${\mathbf{LDX}}\ge {\mathbf{NOBS}}$; • otherwise $1$. 
13:   D(LDD,NG) – REAL (KIND=nag_wp) arrayOutput On exit: the squared distances. If ${\mathbf{MODE}}=\text{'S'}$, ${\mathbf{D}}\left(\mathit{k},\mathit{j}\right)$ contains the squared distance of the $\mathit{k}$th sample point from the $\mathit{j}$th group mean, ${D}_{\mathit{k}\mathit{j}}^{2}$, for $\mathit{k}=1,2,\dots ,{\mathbf{NOBS}}$ and $\mathit{j}=1,2,\dots ,{n}_{g}$. If ${\mathbf{MODE}}=\text{'M'}$ and ${\mathbf{EQUAL}}=\text{'U'}$, ${\mathbf{D}}\left(\mathit{i},\mathit{j}\right)$ contains the squared distance between the $\mathit{i}$th mean and the $\mathit{j}$th mean, ${D}_{\mathit{i}\mathit{j}}^{2}$, for $\mathit{i}=1,2,\dots ,{n}_{g}$ and $\mathit{j}=1,2,\dots ,\mathit{i}-1,\mathit{i}+1,\dots ,{n}_{g}$. The elements ${\mathbf{D}}\left(\mathit{i},\mathit{i}\right)$ are not referenced, for $\mathit{i}=1,2,\dots ,{n}_{g}$. If ${\mathbf{MODE}}=\text{'M'}$ and ${\mathbf{EQUAL}}=\text{'E'}$, ${\mathbf{D}}\left(\mathit{i},\mathit{j}\right)$ contains the squared distance between the $\mathit{i}$th mean and the $\mathit{j}$th mean, ${D}_{\mathit{i}\mathit{j}}^{2}$, for $\mathit{i}=1,2,\dots ,{n}_{g}$ and $\mathit{j}=1,2,\dots ,\mathit{i}-1$. Since ${D}_{\mathit{i}\mathit{j}}={D}_{\mathit{j}\mathit{i}}$ the elements ${\mathbf{D}}\left(\mathit{i},\mathit{j}\right)$ are not referenced, for $\mathit{i}=1,2,\dots ,{n}_{g}$ and $\mathit{j}=\mathit{i}+1,\dots ,{n}_{g}$. 14:   LDD – INTEGERInput On entry: the first dimension of the array D as declared in the (sub)program from which G03DBF is called. Constraints: • if ${\mathbf{MODE}}=\text{'S'}$, ${\mathbf{LDD}}\ge {\mathbf{NOBS}}$; • if ${\mathbf{MODE}}=\text{'M'}$, ${\mathbf{LDD}}\ge {\mathbf{NG}}$. 15:   WK($2×{\mathbf{NVAR}}$) – REAL (KIND=nag_wp) arrayWorkspace 16:   IFAIL – INTEGERInput/Output On entry: IFAIL must be set to $0$, $-1\text{​ or ​}1$. If you are unfamiliar with this parameter you should refer to Section 3.3 in the Essential Introduction for details. For environments where it might be inappropriate to halt program execution when an error is detected, the value $-1\text{​ or ​}1$ is recommended. If the output of error messages is undesirable, then the value $1$ is recommended. Otherwise, if you are not familiar with this parameter, the recommended value is $0$. When the value $-\mathbf{1}\text{​ or ​}\mathbf{1}$ is used it is essential to test the value of IFAIL on exit. On exit: ${\mathbf{IFAIL}}={\mathbf{0}}$ unless the routine detects an error or a warning has been flagged (see Section 6). ## 6  Error Indicators and Warnings If on entry ${\mathbf{IFAIL}}={\mathbf{0}}$ or $-{\mathbf{1}}$, explanatory error messages are output on the current error message unit (as defined by X04AAF). Errors or warnings detected by the routine: ${\mathbf{IFAIL}}=1$ On entry, ${\mathbf{NVAR}}<1$, or ${\mathbf{NG}}<2$, or ${\mathbf{LDGMN}}<{\mathbf{NG}}$, or ${\mathbf{MODE}}=\text{'S'}$ and ${\mathbf{NOBS}}<1$, or ${\mathbf{MODE}}=\text{'S'}$ and ${\mathbf{M}}<{\mathbf{NVAR}}$, or ${\mathbf{MODE}}=\text{'S'}$ and ${\mathbf{LDX}}<{\mathbf{NOBS}}$, or ${\mathbf{MODE}}=\text{'S'}$ and ${\mathbf{LDD}}<{\mathbf{NOBS}}$, or ${\mathbf{MODE}}=\text{'M'}$ and ${\mathbf{LDD}}<{\mathbf{NG}}$, or ${\mathbf{EQUAL}}\ne \text{'E'}$ or ‘U’, or ${\mathbf{MODE}}\ne \text{'M'}$ or ‘S’. 
${\mathbf{IFAIL}}=2$ On entry, ${\mathbf{MODE}}=\text{'S'}$ and the number of variables indicated by ISX is not equal to NVAR, or ${\mathbf{EQUAL}}=\text{'E'}$ and a diagonal element of $R$ is zero, or ${\mathbf{EQUAL}}=\text{'U'}$ and a diagonal element of ${R}_{j}$ for some $j$ is zero. ## 7  Accuracy The accuracy will depend upon the accuracy of the input $R$ or ${R}_{j}$ matrices. If the distances are to be used for discrimination, see also G03DCF. ## 9  Example The data, taken from Aitchison and Dunsmore (1975), is concerned with the diagnosis of three ‘types’ of Cushing's syndrome. The variables are the logarithms of the urinary excretion rates (mg/24hr) of two steroid metabolites. Observations for a total of $21$ patients are input and the group means and $R$ matrices are computed by G03DAF. A further six observations of unknown type are input, and the distances from the group means of the $21$ patients of known type are computed under the assumption that the within-group variance-covariance matrices are not equal. These results are printed and indicate that the first four are close to one of the groups while observations $5$ and $6$ are some distance from any group. ### 9.1  Program Text Program Text (g03dbfe.f90) ### 9.2  Program Data Program Data (g03dbfe.d) ### 9.3  Program Results Program Results (g03dbfe.r)
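The following is a small, non-NAG sketch (an editorial addition) of the linear algebra described in Section 3: computing a squared Mahalanobis distance through a triangular factor of the variance-covariance matrix rather than an explicit inverse. It uses NumPy and a lower Cholesky factor ($S = LL^{\mathrm{T}}$) instead of NAG's packed upper factors from G03DAF; the synthetic data and point are illustrative assumptions.

```python
import numpy as np

# Squared Mahalanobis distance D^2 = (x - xbar)^T S^{-1} (x - xbar),
# evaluated via a triangular solve: with S = L L^T and L z = (x - xbar),
# we have D^2 = z^T z.

rng = np.random.default_rng(0)
A = np.array([[1.0, 0.4], [0.0, 0.8]])
group = rng.normal(size=(50, 2)) @ A + np.array([3.0, -1.0])   # one synthetic group

xbar = group.mean(axis=0)                 # group mean
S = np.cov(group, rowvar=False)           # within-group variance-covariance matrix
L = np.linalg.cholesky(S)                 # S = L @ L.T

x = np.array([4.0, 0.0])                  # a sample point
d = x - xbar
z = np.linalg.solve(L, d)                 # triangular solve L z = d
D2_chol = z @ z

D2_direct = d @ np.linalg.inv(S) @ d      # reference value with an explicit inverse
print(D2_chol, D2_direct)                 # agree to rounding error
```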
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 174, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9962978363037109, "perplexity": 2426.448375050775}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738662022.71/warc/CC-MAIN-20160924173742-00084-ip-10-143-35-109.ec2.internal.warc.gz"}
http://ibmaths4u.com/viewtopic.php?f=3&t=346&sid=68d8c7fdea183cf8ca1f298348a06f05
## Complex Numbers Quadratics Discussions for the Core part of the syllabus. Algebra, Functions and equations, Circular functions and trigonometry, Vectors, Statistics and probability, Calculus. IB Maths HL Revision Notes ### Complex Numbers Quadratics Complex Numbers, Quadratic over complex field - IB Mathematics HL How can we solve the following quadratic equation over the complex field? $z^2-6z+13=0$ Thanks lora Posts: 0 Joined: Wed Apr 10, 2013 7:36 pm
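Since only the question appears in this extract, here is a standard worked solution (an editorial addition, not from the original thread), using the quadratic formula:

$z = \dfrac{6 \pm \sqrt{36 - 4\cdot 13}}{2} = \dfrac{6 \pm \sqrt{-16}}{2} = \dfrac{6 \pm 4i}{2} = 3 \pm 2i.$

A quick check: $(3+2i)^2 - 6(3+2i) + 13 = 9 + 12i - 4 - 18 - 12i + 13 = 0.$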
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.909166693687439, "perplexity": 3767.2703471143595}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948522343.41/warc/CC-MAIN-20171213084839-20171213104839-00002.warc.gz"}
http://mathhelpforum.com/pre-calculus/190090-even-odd-functions.html
# Thread: Even And Odd Functions

1. ## Even And Odd Functions

Hey, I just took a quiz and got this question wrong: Is the given function even, odd, or neither? $f(x) = - 4x + \left| {8x} \right|$ I answered that it was odd because the $x$ in both terms has a power of 1: $f(x) = - 4{x^1} + \left| {8{x^1}} \right|$ Why did I get this answer wrong? Sam

2. ## Re: Even And Odd Functions

That is not the correct concept to use to determine if a function is even or odd. Do you know the definitions of even and odd functions?

3. ## Re: Even And Odd Functions

Originally Posted by ArcherSam: Is the given function even, odd, or neither? $f(x) = - 4x + \left| {8x} \right|$

$f(2)=8~\&~f(-2)=24$. If it were an even function then $f(2)=f(-2)~.$ If it were an odd function then $-f(2)=f(-2)~.$ Is it even or odd?

4. ## Re: Even And Odd Functions

General rules...

a) f(x) is an even function if f(x)=f(-x)

b) f(x) is an odd function if f(x)=-f(-x)

c) any function f(x) can be written as $f(x)=f_{e}(x)+f_{o}(x)$, where $f_{e}(*)$ is the 'even part of f(*)' and $f_{o}(*)$ is the 'odd part of f(*)'

d) for any function f(x), $f_{e}(x)= \frac{f(x)+f(-x)}{2}$ and $f_{o}(x)= \frac{f(x)-f(-x)}{2}$ (1)

e) given f(x), one computes its even and odd parts with (1). If $f_{o}(x)=0$ then f(*) is an even function. If $f_{e}(x)=0$, then f(*) is an odd function. If $f_{e}(x) \ne 0$ and $f_{o}(x) \ne 0$, then f(*) is neither even nor odd...

Kind regards $\chi$ $\sigma$

5. ## Re: Even And Odd Functions

Originally Posted by cheme: That is not the correct concept to use to determine if a function is even or odd. Do you know the definitions of even and odd functions?

Sam, I would slightly edit this (^^) comment... If you have a polynomial with only even exponents (a constant "c" can be written as c*x^0, and zero is even, so constant terms are even...), then the function P(x) is even. If you have a polynomial with only odd exponents, then the function P(x) is odd. In your example, there is an absolute value term, which is not polynomial.

6. ## Re: Even And Odd Functions

Hello, Sam!

I just took a quiz and got this question wrong: Is the given function even, odd, or neither? $f(x) \:=\: - 4x + |8x|$ I answered that it was odd because the $x$ in both terms has a power of 1: $f(x) \:=\:-4{x^1} + |8x^1|$ Why did I get this answer wrong?

Be careful! That rule about "all odd exponents" or "all even exponents" works with polynomials only. (Edit: as TheChaz already pointed out.) If $x^n$ is "inside" another function, all bets are off! Examples: $\sqrt{x}\quad e^x\quad|x|\quad\ln x$

Look at that function again: $f(x) \:=\:-4x + |8x|$

If $x$ is negative $(x = -a)$, we have: $f(-a) \:=\:-4(-a) + |8(-a)| \;=\;4a + 8a$

But $-f(a) \:=\:-\left(-4a + |8a|\right) \:=\:-(-4a + 8a) \:=\:4a - 8a$

7. ## Re: Even And Odd Functions

Originally Posted by TheChaz: Sam, I would slightly edit this (^^) comment... If you have a polynomial with only even exponents (a constant "c" can be written as c*x^0, and zero is even, so constant terms are even...), then the function P(x) is even. If you have a polynomial with only odd exponents, then the function P(x) is odd. In your example, there is an absolute value term, which is not polynomial.

As my comment was directed towards the problem at hand, I should have said "the" function and not "a" function.

8. ## Re: Even And Odd Functions

@Soroban, cheme, TheChaz

Thanks! I was wondering why it worked when my instructor used this method to determine if a function was even or odd.
Now I know that using the powers of the exponents to determine whether a function is even or odd only works for polynomials.

@Plato

$\begin{array}{l} f(x) = - 4x + \left| {8x} \right|\\ f( - x) = - 4( - x) + \left| {8( - x)} \right|\\ f( - x) = 4x + \left| { - 8x} \right| = 4x + \left| {8x} \right|\\ f( - x) \ne f(x)\\ - f(x) = - ( - 4x + \left| {8x} \right|)\\ - f(x) = 4x - \left| {8x} \right|\\ f( - x) \ne -f(x)\\ \text{neither} \end{array}$

@chisigma I was trying to understand general rules (c) and (d). Could you solve a problem using rules (c) and (d)?

9. ## Re: Even And Odd Functions

Originally Posted by ArcherSam: @Soroban, cheme, TheChaz Thanks! I was wondering why it worked when my instructor used this method to determine if a function was even or odd. Now I know that using the powers of the exponents to determine whether a function is even or odd only works for polynomials... @chisigma I was trying to understand general rules (c) and (d). Could you solve a problem using rules (c) and (d)?

All right!... We have

$f(x)= f_{e}(x) + f_{o}(x) = -4 x + |8 x|$ (1)

... so that...

$f_{e}(x)= \frac{f(x)+f(-x)}{2}= \frac{-4 x + |8 x| +4 x + |8 x|}{2} = |8 x|$ (2)

$f_{o}(x)= \frac{f(x)-f(-x)}{2}= \frac{-4 x + |8 x| -4 x - |8 x|}{2} = -4 x$ (3)

Because neither $f_{e}(*)$ nor $f_{o}(*)$ is 'identically 0', f(x) is 'neither even nor odd'... If You use this approach in the future, You will be 'very lucky' because it works for any type of function, not only for polynomials...

Kind regards $\chi$ $\sigma$
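As a quick numerical cross-check of the even/odd decomposition in rules (c) and (d) above (this sketch is an editorial addition, not part of the thread, and assumes NumPy is available):

```python
import numpy as np

# Even/odd decomposition:
#   f_e(x) = (f(x) + f(-x)) / 2,   f_o(x) = (f(x) - f(-x)) / 2.
# For f(x) = -4x + |8x| we expect f_e(x) = |8x| and f_o(x) = -4x,
# so f is neither even nor odd (neither part vanishes identically).

f = lambda x: -4*x + np.abs(8*x)

x = np.linspace(-5, 5, 101)
f_even = (f(x) + f(-x)) / 2
f_odd  = (f(x) - f(-x)) / 2

print(np.allclose(f_even, np.abs(8*x)))                 # True
print(np.allclose(f_odd, -4*x))                         # True
print(np.allclose(f_even, 0), np.allclose(f_odd, 0))    # False False -> neither
```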
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 37, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9180927276611328, "perplexity": 1011.0930566757406}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463612537.91/warc/CC-MAIN-20170529184559-20170529204559-00359.warc.gz"}
http://www.ck12.org/algebra/Exponential-Properties-Involving-Products/lesson/Exponential-Properties-Involving-Products-ALG-I/
# Exponential Properties Involving Products

## Add exponents to multiply powers of the same base

### Exponential Properties Involving Products

In expressions involving exponents, like \begin{align*}3^5\end{align*} or \begin{align*}x^3\end{align*}, the number on the bottom is called the base and the number on top is the power or exponent. The whole expression is equal to the base multiplied by itself a number of times equal to the exponent; in other words, the exponent tells us how many copies of the base number to multiply together.

#### Writing Expressions in Exponential Form

Write in exponential form.

a) \begin{align*}2 \cdot 2\end{align*}

\begin{align*}2 \cdot 2 = 2^2\end{align*} because we have 2 factors of 2

b) \begin{align*}(-3)(-3)(-3)\end{align*}

\begin{align*}(-3)(-3)(-3) = (-3)^3\end{align*} because we have 3 factors of (-3)

c) \begin{align*}y \cdot y \cdot y \cdot y \cdot y\end{align*}

\begin{align*}y \cdot y \cdot y \cdot y \cdot y = y^5\end{align*} because we have 5 factors of \begin{align*}y\end{align*}

d) \begin{align*}(3a)(3a)(3a)(3a)\end{align*}

\begin{align*}(3a)(3a)(3a)(3a)=(3a)^4\end{align*} because we have 4 factors of \begin{align*}3a\end{align*}

When the base is a variable, it's convenient to leave the expression in exponential form; if we didn't write \begin{align*}x^7\end{align*}, we'd have to write \begin{align*}x \cdot x \cdot x \cdot x \cdot x \cdot x \cdot x\end{align*} instead. But when the base is a number, we can simplify the expression further than that; for example, \begin{align*}2^7\end{align*} equals \begin{align*}2 \cdot 2 \cdot 2 \cdot 2 \cdot 2 \cdot 2 \cdot 2\end{align*}, but we can multiply all those 2's to get 128.

Let's simplify the expressions from the example above.

#### Simplifying Expressions

Simplify

a) \begin{align*}2^2\end{align*}

\begin{align*}2^2 = 2 \cdot 2 =4\end{align*}

b) \begin{align*}(-3)^3\end{align*}

\begin{align*}(-3)^3 = (-3)(-3)(-3)=-27\end{align*}

c) \begin{align*}y^5\end{align*}

\begin{align*}y^5\end{align*} is already simplified

d) \begin{align*}(3a)^4\end{align*}

\begin{align*}(3a)^4 = (3a)(3a)(3a)(3a) = 3 \cdot 3 \cdot 3 \cdot 3 \cdot a \cdot a \cdot a \cdot a = 81 a^4\end{align*}

Be careful when taking powers of negative numbers. Remember these rules:

\begin{align*}(negative \ number) \cdot (positive \ number) = negative \ number\! \\ (negative \ number) \cdot (negative \ number) = positive \ number \end{align*}

So even powers of negative numbers are always positive. Since there are an even number of factors, we pair up the negative numbers and all the negatives cancel out.

\begin{align*}(-2)^6 &= (-2)(-2)(-2)(-2)(-2)(-2)\\ &= \underbrace{ (-2)(-2) }_{+4} \cdot \underbrace{ (-2)(-2) }_{+4} \cdot \underbrace{ (-2)(-2) }_{+4}\\ &= +64\end{align*}

And odd powers of negative numbers are always negative. Since there are an odd number of factors, we can still pair up negative numbers to get positive numbers, but there will always be one negative factor left over, so the answer is negative:

\begin{align*}(-2)^5 &= (-2)(-2)(-2)(-2)(-2)\\ &= \underbrace{(-2)(-2)}_{+4} \cdot \underbrace{(-2)(-2)}_{+4} \cdot \underbrace{(-2)}_{-2}\\ &= -32\end{align*}

#### Use the Product of Powers Property

So what happens when we multiply one power of \begin{align*}x\end{align*} by another? Let's see what happens when we multiply \begin{align*}x\end{align*} to the power of 5 by \begin{align*}x\end{align*} cubed.
To illustrate better, we’ll use the full factored form for each: \begin{align*}\underbrace{(x \cdot x \cdot x \cdot x \cdot x)}_{x^5} \cdot \underbrace{(x \cdot x \cdot x)}_{x^3} = \underbrace{(x \cdot x \cdot x \cdot x \cdot x \cdot x \cdot x \cdot x)}_{x^8}\end{align*} So \begin{align*}x^5 \times x^3 = x^8\end{align*}. You may already see the pattern to multiplying powers, but let’s confirm it with another example. We’ll multiply \begin{align*}x\end{align*} squared by \begin{align*}x\end{align*} to the power of 4: \begin{align*}\underbrace{(x \cdot x)}_{x^2} \cdot \underbrace{(x \cdot x \cdot x \cdot x)}_{x^4} = \underbrace{(x \cdot x \cdot x \cdot x \cdot x \cdot x)}_{x^6}\end{align*} So \begin{align*}x^2 \times x^4 = x^6\end{align*}. Look carefully at the powers and how many factors there are in each calculation. \begin{align*}5 \ x\end{align*}’s times \begin{align*}3 \ x\end{align*}’s equals \begin{align*}(5 + 3) = 8 \ x\end{align*}’s. \begin{align*}2 \ x\end{align*}’s times \begin{align*}4 \ x\end{align*}’s equals \begin{align*}(2 + 4) = 6 \ x\end{align*}’s. You should see that when we take the product of two powers of \begin{align*}x\end{align*}, the number of \begin{align*}x\end{align*}’s in the answer is the total number of \begin{align*}x\end{align*}’s in all the terms you are multiplying. In other words, the exponent in the answer is the sum of the exponents in the product. Product Rule for Exponents: \begin{align*}x^n \cdot x^m = x^{(n+m)}\end{align*} There are some easy mistakes you can make with this rule, however. Let’s see how to avoid them. #### Multiplying Exponents 1. Multiply \begin{align*}2^2 \cdot 2^3\end{align*}. \begin{align*}2^2 \cdot 2^3 = 2^5 = 32\end{align*} Note that when you use the product rule you don’t multiply the bases. In other words, you must avoid the common error of writing \begin{align*}2^2 \cdot 2^3 = 4^5\end{align*}. You can see this is true if you multiply out each expression: 4 times 8 is definitely 32, not 1024. 2. Multiply \begin{align*}2^2 \cdot 3^3\end{align*}. \begin{align*}2^2 \cdot 3^3 = 4 \cdot 27 = 108\end{align*} In this case, we can’t actually use the product rule at all, because it only applies to terms that have the same base. In a case like this, where the bases are different, we just have to multiply out the numbers by hand—the answer is not \begin{align*}2^5\end{align*} or \begin{align*}3^5\end{align*} or \begin{align*}6^5\end{align*} or anything simple like that. ### Examples Simplify the following exponents: #### Example 1 \begin{align*}(-2)^5\end{align*} \begin{align*}(-2)^5=(-2)(-2)(-2)(-2)(-2)=-32\end{align*} #### Example 2 \begin{align*}(10x)^2\end{align*} \begin{align*}(10x)^2=10^2\cdot x^2=100x^2\end{align*} ### Review Write in exponential notation: 1. \begin{align*}4 \cdot 4 \cdot 4 \cdot 4 \cdot 4\end{align*} 2. \begin{align*}3x \cdot 3x \cdot 3x\end{align*} 3. \begin{align*}(-2a)(-2a)(-2a)(-2a)\end{align*} 4. \begin{align*}6 \cdot 6 \cdot 6 \cdot x \cdot x \cdot y \cdot y \cdot y \cdot y\end{align*} 5. \begin{align*}2 \cdot x \cdot y \cdot 2 \cdot 2 \cdot y \cdot x\end{align*} Find each number. 1. \begin{align*}5^4\end{align*} 2. \begin{align*}(-2)^6\end{align*} 3. \begin{align*}(0.1)^5\end{align*} 4. \begin{align*}(-0.6)^3\end{align*} 5. \begin{align*}(1.2)^2+5^3\end{align*} 6. \begin{align*}3^2 \cdot (0.2)^3\end{align*} Multiply and simplify: 1. \begin{align*}6^3 \cdot 6^6\end{align*} 2. \begin{align*}2^2 \cdot 2^4 \cdot 2^6\end{align*} 3. \begin{align*}3^2 \cdot 4^3\end{align*} 4. 
\begin{align*}x^2 \cdot x^4\end{align*} 5. \begin{align*}(-2y^4)(-3y)\end{align*} 6. \begin{align*}(4a^2)(-3a)(-5a^4)\end{align*}

### Vocabulary

Term | Definition
--- | ---
Base | When a value is raised to a power, the value is referred to as the base, and the power is called the exponent. In the expression $32^4$, 32 is the base, and 4 is the exponent.
Exponent | Exponents are used to describe the number of times that a term is multiplied by itself.
Power | The "power" refers to the value of the exponent. For example, $3^4$ is "three to the fourth power".
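A quick numeric spot-check of the product rule $x^n \cdot x^m = x^{(n+m)}$ stated above (an editorial addition, not part of the CK-12 lesson; the sampled bases and exponents are arbitrary):

```python
# Spot-check the product rule x^n * x^m == x^(n+m) on a few sample values.
for base in (2, -3, 0.5):
    for n, m in ((2, 3), (5, 3), (2, 4)):
        assert base**n * base**m == base**(n + m)
print("product rule holds for the sampled cases")
```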
{"extraction_info": {"found_math": true, "script_math_tex": 75, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 2, "texerror": 0, "math_score": 1.0000097751617432, "perplexity": 1052.0549062209866}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917122726.55/warc/CC-MAIN-20170423031202-00121-ip-10-145-167-34.ec2.internal.warc.gz"}
https://luisrguzmanjr.wordpress.com/tag/taniyama-shimura-conjecture/
You are currently browsing the tag archive for the ‘taniyama-shimura conjecture’ tag.

Taniyama-Shimura 3: L-Series

…where it will be crucial in the definition of modularity. For today, we assume our $d$-dimensional variety $X/\mathbb{Q}$ has the property that its middle etale cohomology is 2-dimensional. It won’t hurt if you want to just think that $X$ is an elliptic curve. We will first define the L-series via the Galois representation that we constructed last time. Fix $p$ a prime not equal to $\ell$ and of good reduction for $X$. Let $M=\overline{\mathbb{Q}}^{\ker \rho_X}$. By definition the representation factors through $\mathrm{Gal}(M/\mathbb{Q})$. For $\mathfrak{p}$ a prime lying over $p$ the decomposition group $D_{\mathfrak{p}}$ surjects onto $\mathrm{Gal}(\overline{\mathbf{F}}_p/\mathbf{F}_p)$ with kernel $I_{\mathfrak{p}}$. One of the subtleties we’ll jump over to save time is that $\rho_X$ acts trivially on $I_{\mathfrak{p}}$ (it follows from the good reduction assumption), so we can lift the generator of $\mathrm{Gal}(\overline{\mathbf{F}}_p/\mathbf{F}_p)$ to get a conjugacy class $\mathrm{Frob}_p$ whose image under… View original post 917 more words

Taniyama-Shimura 2: Galois Representations

…where the standard modern approach to defining modularity for other types of varieties. Fix some proper variety $X/\mathbb{Q}$. Our goal today will seem very strange, but it is to explain how to get a continuous representation of the absolute Galois group of $\mathbb{Q}$ from this data. I’m going to assume familiarity with etale cohomology, since describing Taniyama-Shimura is already going to take a bit of work. To avoid excessive notation, all cohomology in this post (including the higher direct image functors) is done on the etale site. For those that are intimately familiar with etale cohomology, we’ll do the quick way first. I’ll describe a more hands-on approach afterwards. Let $\pi: X\rightarrow \mathrm{Spec} \mathbb{Q}$ be the structure morphism. Fix an algebraic closure $v: \mathrm{Spec} \overline{\mathbb{Q}}\rightarrow \mathrm{Spec}\mathbb{Q}$ (i.e. a geometric point of the base). We’ll denote the base change of $X$ with respect to this morphism $\overline{X}$. Suppose the dimension of $X$ is $n$. Let… View original post 374 more words

Great post on understanding the statement of the famous Taniyama-Shimura conjecture that led to the proof of Fermat’s Last Theorem.

It’s time to return to plan A. I started this year by saying I’d post on some fundamental ideas in arithmetic geometry. The local system thing is hard to get motivated about, since the way I was going to use it in my research seems irrelevant at the moment. My other option was to blog some stuff about class field theory, since there is a reading group on the topic that I belong to this quarter. The first goal of this new series is to understand the statement of the famous Taniyama-Shimura conjecture that led to the proof of Fermat’s Last Theorem.
A lot of people can probably mumble something about the conjecture if they have any experience in algebraic/arithmetic geometry or any of the number theory type fields, but most people probably can’t say anything precise about what the conjecture says (I’ll continue to call it a “conjecture” even… View original post 766 more words

Elliptic curves are especially important in number theory, and constitute a major area of current research; for example, they were used in the proof, by Andrew Wiles, of Fermat’s Last Theorem. More concretely, an elliptic curve is the set of zeros of a cubic polynomial in two variables, where $ax^{3}+bx^{2}y+cxy^{2}+dy^{3}+ex^{2}+fxy+gy^{2}+hx+iy+j=0$ is the equation of a general cubic polynomial. A famous example is $\displaystyle x^{3}+y^{3}=1$, or in homogeneous form, $\displaystyle X^{3}+Y^{3}=Z^{3}$. This is the first non-trivial case of Fermat’s Last Theorem.

A modular elliptic curve is an elliptic curve $E$ that admits a parametrization $X_{0}(N) \rightarrow E$ by a modular curve. This is not the same as a modular curve that happens to be an elliptic curve, which could be called an elliptic modular curve. The modularity theorem, also known as the Taniyama–Shimura conjecture, asserts that every elliptic curve defined over the rational numbers is a modular form in disguise.

In 1985, starting with a fictitious solution to Fermat’s last theorem (the Frey curve), G. Frey showed that he could create an unusual elliptic curve which appeared not to be modular. If the curve were not modular, then this would show that if Fermat’s last theorem were false, then the Taniyama-Shimura conjecture would also be false. Furthermore, if the Taniyama-Shimura conjecture is true, then so is Fermat’s last theorem. However, Frey did not actually prove that his curve was not modular. The conjecture that Frey’s curve was not modular came to be called the “epsilon conjecture,” and was quickly proved by Ribet (Ribet’s theorem) in 1986, establishing a very close link between two mathematical structures (the Taniyama-Shimura conjecture and Fermat’s last theorem) which appeared previously to be completely unrelated. By proving the semistable case of the conjecture, Andrew Wiles proved Fermat’s Last Theorem.

Some elliptic curves: (figures omitted from this extract)
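For concreteness (an editorial addition, not part of the original post): given a hypothetical nontrivial solution $a^p + b^p = c^p$ with $p$ an odd prime, the Frey curve referred to above is the elliptic curve

$E: \; y^{2} = x\,(x - a^{p})(x + b^{p}),$

whose unusual discriminant is what, via Ribet’s theorem, makes it incompatible with modularity.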
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 5, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8495806455612183, "perplexity": 276.1361936486536}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267865438.16/warc/CC-MAIN-20180623225824-20180624005824-00459.warc.gz"}
https://icsecbsemath.com/2017/07/16/class-10-sample-problems-matrices-exercise-12a/
Question 1: State True or False. If False, please state the reason.

1. If $A$ and $B$ are two matrices of order $3 \times 2$ and $2 \times 3$ respectively, then their sum $A + B$ is possible.
2. The matrices $A_{2 \times 3}$ and $B_{2 \times 3}$ are conformable for subtraction.
3. Transpose of a $2 \times 1$ matrix is a $2\times 1$ matrix.
4. Transpose of a square matrix is a square matrix.
5. A column matrix has many columns and only one row.

1. False: Two matrices can be added together only if they are of the same order. Here $A$ is of the order $3 \times 2$ while $B$ is of the order $2 \times 3$. Hence they cannot be added.
2. True: Two matrices can be subtracted if they are of the same order. Here both $A$ and $B$ are of the same order.
3. False: The transpose of a matrix is obtained by interchanging rows with columns. Hence the transpose of a $2 \times 1$ matrix is a $1 \times 2$ matrix.
4. True: The transpose of a square matrix is a square matrix. Here the number of rows is equal to the number of columns. Hence even on transposing, the matrix remains a square matrix.
5. False: A column matrix has one column and many rows.

$\\$ Question 2: Given $\begin{bmatrix} x & y+2 \\ 3 & z-1 \end{bmatrix} = \begin{bmatrix} 3 & 1 \\ 3 & 2 \end{bmatrix}$, find $x, \ y \ and \ z$.

$\begin{bmatrix} x & y+2 \\ 3 & z-1 \end{bmatrix} = \begin{bmatrix} 3 & 1 \\ 3 & 2 \end{bmatrix}$ $\Rightarrow x = 3$; $y+2 = 1 \ \Rightarrow y = -1$; also $z-1 = 2 \Rightarrow z = 3$

$\\$ Question 3: Solve for $a, \ b \ and \ c$ if:

1. $\begin{bmatrix} -4 & a+5 \\ 3 & 2 \end{bmatrix} = \begin{bmatrix} b+4 & 2 \\ 3 & c-1 \end{bmatrix}$
2. $\begin{bmatrix} a & a-b \\ b+c & 0 \end{bmatrix} = \begin{bmatrix} 3 & -1 \\ 2 & 0 \end{bmatrix}$

1)  Given $\begin{bmatrix} -4 & a+5 \\ 3 & 2 \end{bmatrix} = \begin{bmatrix} b+4 & 2 \\ 3 & c-1 \end{bmatrix}$; therefore $-4 = b+ 4 \Rightarrow b = -8$, $a+5 = 2 \Rightarrow a = -3$, $2 = c-1 \Rightarrow c = 3$

2)  $\begin{bmatrix} a & a-b \\ b+c & 0 \end{bmatrix} = \begin{bmatrix} 3 & -1 \\ 2 & 0 \end{bmatrix}$: $a = 3$, $a-b=-1 \Rightarrow b = a+1 = 3+1 = 4$, $b+c = 2 \Rightarrow c = 2-b = 2-4 = -2$

$\\$ Question 4: If $A = \begin{bmatrix} 8 & -3 \end{bmatrix}$ and $B = \begin{bmatrix} 4 & -5 \end{bmatrix}$, find

1. $A+B$
2. $B-A$

1)  $A+B$ $= \begin{bmatrix} 8 & -3 \end{bmatrix} + \begin{bmatrix} 4 & -5 \end{bmatrix}$ $=\begin{bmatrix} 8+4 & -3-5 \end{bmatrix} = \begin{bmatrix} 12 & -8 \end{bmatrix}$

2)  $B-A$ $= \begin{bmatrix} 4 & -5 \end{bmatrix} - \begin{bmatrix} 8 & -3 \end{bmatrix}$ $=\begin{bmatrix} 4-8 & -5-(-3) \end{bmatrix} = \begin{bmatrix} -4 & -2 \end{bmatrix}$

$\\$ Question 5: If $A = \begin{bmatrix} 2 \\ 5 \end{bmatrix}, \ B=\begin{bmatrix} 1 \\ 4 \end{bmatrix} \ and \ C=\begin{bmatrix} 6 \\ -2 \end{bmatrix}$ find:

1. $B+C$
2. $A-C$
3. $A+B-C$
4.
$A-B+C$

1)  $B+C$ $= \begin{bmatrix} 1 \\ 4 \end{bmatrix} + \begin{bmatrix} 6 \\ -2 \end{bmatrix}$ $= \begin{bmatrix} 1+6 \\ 4-2 \end{bmatrix}$ $= \begin{bmatrix} 7 \\ 2 \end{bmatrix}$

2)  $A-C$ $= \begin{bmatrix} 2 \\ 5 \end{bmatrix} - \begin{bmatrix} 6 \\ -2 \end{bmatrix}$ $= \begin{bmatrix} 2-6 \\ 5-(-2) \end{bmatrix}$ $= \begin{bmatrix} -4 \\ 7 \end{bmatrix}$

3)  $A+B-C$ $= \begin{bmatrix} 2 \\ 5 \end{bmatrix} + \begin{bmatrix} 1 \\ 4 \end{bmatrix} - \begin{bmatrix} 6 \\ -2 \end{bmatrix}$ $= \begin{bmatrix} 2+1-6 \\ 5+4-(-2) \end{bmatrix}$ $= \begin{bmatrix} -3 \\ 11 \end{bmatrix}$

4)  $A-B+C$ $= \begin{bmatrix} 2 \\ 5 \end{bmatrix} - \begin{bmatrix} 1 \\ 4 \end{bmatrix} + \begin{bmatrix} 6 \\ -2 \end{bmatrix}$ $= \begin{bmatrix} 2-1+6 \\ 5-4+(-2) \end{bmatrix}$ $= \begin{bmatrix} 7 \\ -1 \end{bmatrix}$

$\\$ Question 6: Wherever possible, write each of the following in a single matrix:

1. $\begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} + \begin{bmatrix} -1 & -2 \\ 1 & -7 \end{bmatrix}$
2. $\begin{bmatrix} 2 &3 & 4 \\ 5 & 6 & 7 \end{bmatrix} - \begin{bmatrix} 0 &2 & 3 \\ 6 & -1 & 0 \end{bmatrix}$
3. $\begin{bmatrix} 0 & 1 & 2 \\ 4 & 6 & 7 \end{bmatrix} + \begin{bmatrix} 3 & 4 \\ 6 & 8 \end{bmatrix}$

1) $\begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} + \begin{bmatrix} -1 & -2 \\ 1 & -7 \end{bmatrix} = \begin{bmatrix} 0 & 0 \\ 4 & -3 \end{bmatrix}$

2) $\begin{bmatrix} 2 &3 & 4 \\ 5 & 6 & 7 \end{bmatrix} - \begin{bmatrix} 0 &2 & 3 \\ 6 & -1 & 0 \end{bmatrix} = \begin{bmatrix} 2 & 1 & 1 \\ -1 & 7 & 7 \end{bmatrix}$

3) Adding these is not possible as the orders of the matrices are not the same.

$\\$ Question 7: Find $x$ and $y$ from the following equations:

1. $\begin{bmatrix} 5 & 2 \\ -1 & y-1 \end{bmatrix} - \begin{bmatrix} 1 & x-1 \\ 2 & -3 \end{bmatrix} = \begin{bmatrix} 4 & 7 \\ -3 & 2 \end{bmatrix}$
2. $\begin{bmatrix} -8 & x \end{bmatrix} + \begin{bmatrix} y & -2 \end{bmatrix} = \begin{bmatrix} -3 & 2 \end{bmatrix}$

1)  $\begin{bmatrix} 5 & 2 \\ -1 & y-1 \end{bmatrix} - \begin{bmatrix} 1 & x-1 \\ 2 & -3 \end{bmatrix} = \begin{bmatrix} 4 & 7 \\ -3 & 2 \end{bmatrix}$, so $\begin{bmatrix} 5-1 & 2-(x-1) \\ -1-2 & y-1-(-3) \end{bmatrix} = \begin{bmatrix} 4 & 3-x \\ -3 & y+2 \end{bmatrix} =\begin{bmatrix} 4 & 7 \\ -3 & 2 \end{bmatrix}$. Therefore $3-x = 7 \Rightarrow x = -4$ and $y+2 = 2 \Rightarrow y = 0$

2)  $\begin{bmatrix} -8 & x \end{bmatrix} + \begin{bmatrix} y & -2 \end{bmatrix} = \begin{bmatrix} -3 & 2 \end{bmatrix}$. Therefore $-8+y=-3 \Rightarrow y = 5$ and $x-2=2 \Rightarrow x = 4$

$\\$ Question 8: Given $M = \begin{bmatrix} 5 & -3 \\ -2 & 4 \end{bmatrix}$, find its transpose matrix $M^{t}$. If possible find:

1. $M+M^{t}$
2. $M^{t}-M$

$M = \begin{bmatrix} 5 & -3 \\ -2 & 4 \end{bmatrix}$, $M^{t} = \begin{bmatrix} 5 & -2 \\ -3 & 4 \end{bmatrix}$

1) $M+M^{t}$ $= \begin{bmatrix} 5 & -3 \\ -2 & 4 \end{bmatrix} + \begin{bmatrix} 5 & -2 \\ -3 & 4 \end{bmatrix} = \begin{bmatrix} 10 & -5 \\ -5 & 8 \end{bmatrix}$

2) $M^{t}-M$ $= \begin{bmatrix} 5 & -2 \\ -3 & 4 \end{bmatrix} - \begin{bmatrix} 5 & -3 \\ -2 & 4 \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix}$

$\\$ Question 9: Write the additive inverse of matrices A, B and C where $A = \begin{bmatrix} 6 & -5 \end{bmatrix}$ and $B = \begin{bmatrix} -2 & 0 \\ 4 & -1 \end{bmatrix}$ and $C = \begin{bmatrix} -2 \\ 4 \end{bmatrix}$.
Additive inverse of $A = \begin{bmatrix} 6 & -5 \end{bmatrix}$ is $\begin{bmatrix} -6 & 5 \end{bmatrix}$

Additive inverse of $B = \begin{bmatrix} -2 & 0 \\ 4 & -1 \end{bmatrix}$ is $\begin{bmatrix} 2 & 0 \\ -4 & 1 \end{bmatrix}$

Additive inverse of $C = \begin{bmatrix} -2 \\ 4 \end{bmatrix}$ is $\begin{bmatrix} 2 \\ -4 \end{bmatrix}$

$\\$ Question 10: Given $A = \begin{bmatrix} 2 & -3 \end{bmatrix}, \ B= \begin{bmatrix} 0 & 2 \end{bmatrix}, \ C= \begin{bmatrix} -1 & 4 \end{bmatrix}$. Find matrix $X$ in each of the following:

1. $X+B=C-A$
2. $A-X=B+C$

Let $X = \begin{bmatrix} a & b \end{bmatrix}$

1) $X+B=C-A$: $\begin{bmatrix} a & b \end{bmatrix} +\begin{bmatrix} 0 & 2 \end{bmatrix}=\begin{bmatrix} -1 & 4 \end{bmatrix}-\begin{bmatrix} 2 & -3 \end{bmatrix}$, so $\begin{bmatrix} a & b+2 \end{bmatrix} = \begin{bmatrix} -3 & 7 \end{bmatrix}$. Therefore $a = -3 \ and \ b = 5$. Hence $X = \begin{bmatrix} -3 & 5 \end{bmatrix}$

2) $A-X=B+C$: $\begin{bmatrix} 2 & -3 \end{bmatrix} - \begin{bmatrix} a & b \end{bmatrix} = \begin{bmatrix} 0 & 2 \end{bmatrix} + \begin{bmatrix} -1 & 4 \end{bmatrix}$, so $\begin{bmatrix} 2-a & -3-b \end{bmatrix} = \begin{bmatrix} -1 & 6 \end{bmatrix}$. Therefore $a = 3 \ and \ b = -9$. Hence $X = \begin{bmatrix} 3 & -9 \end{bmatrix}$

$\\$ Question 11: Given $A = \begin{bmatrix} -1 & 0 \\ 2 & -4 \end{bmatrix}$ and $B = \begin{bmatrix} 3 & -3 \\ -2 & 0 \end{bmatrix}$. Find the matrix $X$ in each of the following:

1. $A+X=B$
2. $A-X=B$
3. $X-B=A$

Let $X = \begin{bmatrix} a & b \\ c & d \end{bmatrix}$

1) $A+X=B$: $\begin{bmatrix} -1 & 0 \\ 2 & -4 \end{bmatrix}+\begin{bmatrix} a & b \\ c & d \end{bmatrix}=\begin{bmatrix} 3 & -3 \\ -2 & 0 \end{bmatrix}$, so $\begin{bmatrix} -1+a & b \\ 2+c & -4+d \end{bmatrix} = \begin{bmatrix} 3 & -3 \\ -2 & 0 \end{bmatrix}$. Therefore $a = 4, \ b = -3, \ c = -4 \ and \ d = 4$. Hence $X = \begin{bmatrix} 4 & -3 \\ -4 & 4 \end{bmatrix}$

2) $A-X=B$: $\begin{bmatrix} -1 & 0 \\ 2 & -4 \end{bmatrix}-\begin{bmatrix} a & b \\ c & d \end{bmatrix}=\begin{bmatrix} 3 & -3 \\ -2 & 0 \end{bmatrix}$, so $\begin{bmatrix} -1-a & -b \\ 2-c & -4-d \end{bmatrix} = \begin{bmatrix} 3 & -3 \\ -2 & 0 \end{bmatrix}$. Therefore $a = -4, \ b = 3, \ c = 4 \ and \ d = -4$. Hence $X = \begin{bmatrix} -4 & 3 \\ 4 & -4 \end{bmatrix}$

3) $X-B=A$: $\begin{bmatrix} a & b \\ c & d \end{bmatrix} - \begin{bmatrix} 3 & -3 \\ -2 & 0 \end{bmatrix} =\begin{bmatrix} -1 & 0 \\ 2 & -4 \end{bmatrix}$, so $\begin{bmatrix} a-3 & b+3 \\ c+2 & d \end{bmatrix} =\begin{bmatrix} -1 & 0 \\ 2 & -4 \end{bmatrix}$. Therefore $a = 2, \ b = -3, \ c = 0 \ and \ d = -4$. Hence $X = \begin{bmatrix} 2 & -3 \\ 0 & -4 \end{bmatrix}$

$\\$
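As a quick numerical check of the worked answers to Question 11 (an editorial addition, not part of the original exercise; NumPy is assumed):

```python
import numpy as np

# Solve A + X = B, A - X = B and X - B = A for X, using the given A and B.
A = np.array([[-1, 0], [2, -4]])
B = np.array([[3, -3], [-2, 0]])

X1 = B - A        # from A + X = B
X2 = A - B        # from A - X = B
X3 = A + B        # from X - B = A

print(X1)   # [[ 4 -3] [-4  4]]
print(X2)   # [[-4  3] [ 4 -4]]
print(X3)   # [[ 2 -3] [ 0 -4]]
```

The printed matrices match the hand-worked values above.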
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 147, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9290640354156494, "perplexity": 2163.359126576051}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578582584.59/warc/CC-MAIN-20190422195208-20190422221208-00458.warc.gz"}
http://www.physicsforums.com/printthread.php?t=286458
Physics Forums (http://www.physicsforums.com/index.php) -   Astronomy & Astrophysics (http://www.physicsforums.com/forumdisplay.php?f=71) -   -   Magnetic Reconnection vs Double Layers (http://www.physicsforums.com/showthread.php?t=286458) Suede Jan21-09 03:01 PM Magnetic Reconnection vs Double Layers http://en.wikipedia.org/wiki/Magnetic_reconnection Magnetic reconnection is the process whereby magnetic field lines from different magnetic domains are spliced to one another, changing their patterns of connectivity with respect to the sources. It is a violation of an approximate conservation law in plasma physics, and can concentrate mechanical or magnetic energy in both space and time. Solar flares, the largest explosions in the solar system, may involve the reconnection of large systems of magnetic flux on the Sun, releasing, in minutes, energy that has been stored in the magnetic field over a period of hours to days. Magnetic reconnection in Earth's magnetosphere is one of the mechanisms responsible for the aurora, and it is important to the science of controlled nuclear fusion because it is one mechanism preventing magnetic confinement of the fusion fuel. Magnetic reconnection is something that has supposedly been "tested" and proven in the lab yet for some reason the lab results keep coming out "wrong." Currently when scientists create a "reconnection" event in the lab between two electrically charged plasma sheets the "reconnection" event takes place at twice the speed MHD theory predicts. So far no one has been able to rectify this problem, nor have they been able to produce a "reconnecting" magnetic field without first applying current to the plasma sheets they are observing. The reason being obvious of course, in order to create a magnetic field, one must first induce an electrical current. So far, this is the only known way of producing a magnetic field in a plasma that can be tested. As soon as the current shuts off, so too does the magnetic field. Magnetic reconnection is proposed to account for the sudden bursts of observed kinetic energies that power the aurora's substorms and light up the polar skies. It’s also proposed to account for about a billion other phenomena that I will not get into here. Focusing on the aurora, some very interesting facts come to light. THEMIS, Viking, FAST, UARS and several other satellites have confirmed the existence of parallel electric fields powering the auroras. Electric fields of course must complete a circuit in order to flow. Without charge deficiency in one area of space, there would be no current flow which is a product of charge equalization. Some papers on the findings of parallel electric field aligned currents, otherwise known as "Birkeland currents" after Kristian Birkeland, the man who postulated their existence back in 1901. http://www.iop.org/EJ/abstract/0741-3335/41/3A/004 Quasi-static, magnetic-field-aligned (parallel) potentials have been considered the primary source of charged particle acceleration in the aurora where precipitating electrons create a visible display. This finding has been controversial since, at one time, it was widely believed that parallel potentials could not be supported by a collisionless plasma. We present observations from the fast auroral snapshot (FAST) satellite which strongly support this acceleration mechanism and, moreover, show evidence of a second plasma regime region which supports quasi-static parallel potentials. 
http://www.agu.org/pubs/crossref/199...13p02329.shtml Electron distribution functions measured on the Dynamics Explorer 1 spacecraft are shown to have the characteristics expected in a region of parallel electric fields. http://www.agu.org/pubs/crossref/1998/97JA02587.shtml Using plasma wave data sampled by the Freja spacecraft from the topside ionosphere during auroral conditions, the possible existence of electric fields with an intense parallel component (a few tens of millivolts per meter) with respect to the Earth's magnetic field is discussed. http://www.agu.org/pubs/crossref/200...JA007540.shtml We present a survey of 64 direct observations of large-amplitude parallel electric fields E∥ in the upward current region of the southern auroral acceleration zone, obtained by the three-axis electric field experiment on Polar. http://www.agu.org/pubs/crossref/200...JA010545.shtml Satellite observations have established that parallel electric fields of both upward and downward current regions of the aurora are supported, at least in part, by strong double layers. http://www.agu.org/pubs/crossref/198...09p09777.shtml It is demonstrated that the simultaneous observations on Viking of upward field-aligned fluxes of energetic ions and electrons of energies in the same range may be due to acceleration in field-aligned electric fields, the ions in an upward directed parallel dc field and the electrons in a downward directed parallel field which is fluctuating but appears as quasi-static for the electrons as long as they are in the acceleration region. http://www.agu.org/pubs/crossref/1998/98JA01236.shtml Magnetic field and particle observations from the Upper Atmosphere Research Satellite particle environment monitor (UARS/PEM) are used to estimate field-aligned currents, electron precipitation energy flux, ionospheric conductivities, and Joule heating rates during the main phase of the November 4, 1993, geomagnetic storm. Given that we know field aligned electrical currents exist in the space plasma surrounding earth, some fundamental properties of conducting plasma can be employed to describe the events many astrophysicists currently ascribe to “magnetic reconnection.” So let’s look at some papers, the models, and how they are related to the observed phenomena. R. E. Ergun et al.: Double layers in the downward current region of the aurora Nonlinear Processes in Geophysics (2003) 10: 45–52 http://hal.archives-ouvertes.fr/docs...10-45-2003.pdf These results suggest that large double layers can account for the parallel electric field in the downward current region and that intense electrostatic turbulence rapidly stabilizes the accelerated electron distributions. These results also demonstrate that parallel electric fields are directly associated with the generation of large-amplitude electron phasespace holes and plasma waves…. We presented direct observations of the parallel electric field in the downward current region of the auroral zone. The observations are consistent with a strong double layer moving along B at the ion acoustic speed in the same direction of the accelerated electrons. The potential drop extends _10 _D along B. Intense electrostatic emissions are spatially separated from the structure on the high-potential side. Electron phase-space holes emerge from the wave turbulence associated with the double layer. The potential structure accelerates electrons to several times their initial thermal velocity which results in a factor of 10 gain from the initial thermal energy. 
Intense quasielectrostatic wave emissions and electron phase-space holes rapidly modify the accelerated electron distribution. Part of the electron distribution (stagnating electrons) is reflected back into the double layer through interaction with the intense wave turbulence. Thus, the intense wave turbulence may interact with the double layer through this stagnating electron population. Singh, N., and G. Khazanov (2003), Double layers in expanding plasmas and their relevance to the auroral plasma processes, J. Geophys. Res., 108(A4), 8007, doi:10.1029/2002JA009436. http://www.agu.org/pubs/crossref/200...JA009436.shtml When a dense plasma consisting of a cold and a sufficiently warm electron population expands, a rarefaction shock forms [ Bezzerides et al., 1978 ]. In the expansion of the polar wind in the magnetosphere, it has been previously shown that when a sufficiently warm electron population also exists, in addition to the usual cold ionospheric one, a discontinuity forms in the electrostatic potential distribution along the magnetic field lines [ Barakat and Schunk, 1984 ]. Despite the lack of spatial resolution and the assumption of quasi-neutrality in the polar wind models, such discontinuities have been called double layers (DLs). Recently similar discontinuities have been invoked to partly explain the auroral acceleration of electrons and ions in the upward current region [ Ergun et al., 2000 ]. By means of one-dimensional Vlasov simulations of expanding plasmas, for the first time we make here the connection between (1) the rarefaction shocks, (2) the discontinuities in the potential distributions, and (3) DLs. We show that when plasmas expand from opposite directions into a deep density cavity with a potential drop across it and when the plasma on the high-potential side contains hot and cold electron populations, the temporal evolution of the potential and the plasma distribution generates evolving multiple double layers with an extended density cavity between them. One of the DLs is the rarefaction-shock (RFS) and it forms by the reflections of the cold electrons coming from the high-potential side; it supports a part of the potential drop approximately determined by the hot electron temperature. The other DLs evolve from charge separations arising either from reflection of ions coming from the low-potential side or stemming from plasma instabilities; they support the rest of the potential drop. The instabilities forming these additional double layers involve electron-ion (e-i) Buneman or ion-ion (i-i) two-stream interactions. The electron-electron two-stream interactions on the high-potential side of the RFS generate electron-acoustic waves, which evolve into electron phase-space holes. The ion population originating from the low-potential side and trapped by the RFS is energized by the e-i and i-i instabilities and it eventually precipitates into the high-potential plasma along with an electron beam. Applications of these findings to the auroral plasma physics are discussed. Quoting dic.academic’s definition of a double layer Current carrying double layers may arise in plasmas carrying a current. Various instabilities can be responsible for the formation of these layers. One example is the Buneman instability which occurs when the streaming velocity of the electrons (basically the current density divided by the electron density) exceeds the electron thermal velocity of the plasma. 
Double layers (and other phase space structures) are often formed in the non-linear phase of the instability. One way of viewing the Buneman instability is to describe what happens when the current (in the form of a zero temperature electron beam) has to pass through a region of decreased ion density. In order to prevent charge from accumulating, the current in the system must be the same everywhere (in this 1D model). The electron density also has to be close to the ion density (quasineutrality), so there is also a dip in electron density. The electrons must therefore be accelerated into the density cavity, to maintain the same current density with a lower density of charge carriers. This implies that the density cavity is at a high electrical potential. As a consequence, the ions are accelerated out of the cavity, amplifying the density perturbation. Then there is the situation of a double-double layer, of which one side will most likely be convected away by the plasma, leaving a regular double layer. This is the process in which double layers are produced along planetary magnetic field lines in so-called Birkeland currents.

Another known property of charged plasma that can explain the sudden and dramatic bursts of kinetic energy we see in auroral substorms is something called an "exploding double layer." Given that we have parallel currents and double layers in the surrounding regions of Earth's magnetosphere, a simple explanation arises for the sudden substorms:

Stability: Double layers in laboratory plasmas may be stable or unstable depending on the parameter regime. [Torvén, S., "High-voltage double layers in a magnetised plasma column" (1982), Journal of Physics D: Applied Physics, Volume 15, Issue 10, pp. 1943-1949] Various types of instabilities may occur, often arising due to the formation of beams of ions and electrons. Unstable double layers are "noisy" in the sense that they produce oscillations across a wide frequency band. A lack of plasma stability may also lead to a dramatic change in configuration often referred to as an explosion (and hence "exploding double layer"). In one example, the region enclosed in the double layer rapidly expands and evolves. [B. Song, N. D'Angelo and R. L. Merlino, "Stability of a spherical double layer produced through ionization" (1992), Journal of Physics D: Applied Physics, Volume 25, Issue 6, pp. 938-941] An explosion of this type was first discovered in mercury arc rectifiers used in high-power direct-current transmission lines, where the voltage drop across the device was seen to increase by several orders of magnitude. Double layers may also drift, usually in the direction of the emitted electron beam, and in this respect are natural analogues to the smooth-bore magnetron [Koenraad Mouthaan and Charles Süsskind, "Statistical Theory of Electron Transport in the Smooth-Bore Magnetron" (1966), Journal of Applied Physics, June 1966, Volume 37, Issue 7, pp. 2598-2606] (not to be confused with a unit of magnetic moment, the Bohr magneton, which is created by the "classical circular motion" of an electron around a proton). This idea was put forth by Hannes Alfvén after the rectifier incident noted above.

Double layers and circuits in astrophysics
Alfvén, Hannes, IEEE Transactions on Plasma Science (ISSN 0093-3813), vol. PS-14, Dec. 1986, p. 779-793

Continuing on with "magnetic reconnection" as a theory, we find it violates known laws of physics.
Fälthammar does an excellent job describing the problems with "magnetic reconnection" theory as it pertains to real current-carrying plasmas here:

On the Concept of Moving Magnetic Field Lines
Eos, Vol. 88, No. 15, 10 April 2007

Alfvén, who had introduced the concept, became a strong critic of 'moving' magnetic field lines [Alfvén, 1976], especially in his later years. He warned against use of the concepts of 'frozen-in' and 'moving' magnetic field lines for the reasons that are emphasized above. The basic reason for these difficulties with 'moving' magnetic field lines is, of course, that motion of magnetic field lines is inherently meaningless. The magnetic field B is a vector field defined as a function of space coordinates and time. At a fixed time, one may trace a field line from any given point in space. But that field line has no identity, and in a time-dependent magnetic field it cannot be identified with any field line at a different time, except by one convention or another. As we have seen, such conventions are fraught with pitfalls and should only be used with utmost care, lest they lead to erroneous conclusions: moving magnetic field lines are "unsafe at any speed."

As does Donald Scott:

Real Properties of Electromagnetic Fields and Plasma in the Cosmos
IEEE TRANSACTIONS ON PLASMA SCIENCE, VOL. 35, NO. 4, AUGUST 2007
http://members.cox.net/dascott3/IEEE...tt-Aug2007.pdf

Alfvén [1] was explicit in his condemnation of the reconnecting concept: "Of course there can be no magnetic merging energy transfer. The most important criticism of the merging mechanism is that by Heikkila [21], who, with increasing strength, has demonstrated that it is wrong. In spite of all this, we have witnessed, at the same time, an enormously voluminous formalism building up based on this obviously erroneous concept."

Hannes Alfvén, a Nobel Laureate, was the founding father of the MHD theory that magnetic reconnection is predicated on. This leaves us with two competing models to describe the function of the aurora and other astrophysical plasmas, one being based on a theory that violates known laws of physics, the other being based on known properties of conducting plasmas.

Nereid Jan22-09 09:53 AM
Re: Magnetic Reconnection vs Double Layers

Quote: Interesting post, Suede. I wonder ... to what extent is the disagreement merely one of appearance? I mean, plasmas behave the way they do (as we learn from experiment and observation), and theories are developed to account for the observed behaviours. It is possible, and to some extent even easy, to develop two theories that are equivalent ... in the sense that there is no experiment or observation which could distinguish between the two, even in principle*. In this case, perhaps there are two different ways to look at the bulk properties and behaviours of plasmas that are indistinguishable in terms of any experiment or observation? If so, the choice of which one to use is a matter of practicality, convenience, history, or whatever ... but not of physics. After all, plasmas are composed of particles - charged and uncharged - and so the only 'true' description of their behaviour must be one built on QED, mustn't it? And any physics of plasmas which does not show, explicitly, how it is compatible with QED in the appropriate limit must necessarily be incomplete (at best), mustn't it? * for a discussion of this kind of equivalence, in terms of 'expanding universe' vs 'shrinking universe', see here.
Suede Jan22-09 01:30 PM Re: Magnetic Reconnection vs Double Layers Quote: Quote by Nereid (Post 2044929) Interesting post, Suede. I wonder ... to what extent is the disagreement merely one of appearance? I mean, plasmas behave the way they do (as we learn from experiment and observation), and theories are developed to account for the observed behaviours. It is possible, and to some extent even easy, to develop two theories that are equivalent ... in the sense that there is no experiment or observation which could distinguish between the two, even in principle*. In this case, perhaps there are two different ways to look at the bulk properties and behaviours of plasmas that are indistinguishable in terms of any experiment or observation? If so, the choice of which one to use is a matter of practicality, convenience, history, or whatever ... but not of physics. After all, plasmas are composed of particles - charged and uncharged - and so the only 'true' description of their behaviour must be one built on QED, mustn't it? And any physics of plasmas which does not show, explicitly, how it is compatible with QED in the appropriate limit must necessarily be incomplete (at best), mustn't it? * for a discussion of this kind of equivalence, in terms of 'expanding universe' vs 'shrinking universe', see here. my opinion: What I think will happen, is over time 'magnetic reconnection' will evolve from a theory that is incompatible with standard plasma physics to one that is compatible. What they call a "reconnection event" will replace "exploding double layer" but will mean the same thing as an exploding double layer. I don’t think classical plasma physics is too far removed from QED theory. Classical plasma physics starts at the level of the electron and works its way up to macro scale structures. So what you have is a direct unification between classical electrodynamics and the macro scale universe. QED, from what I understand of it, deals mostly in theory below the scale of the electron. If we have, starting at the level of the electron, a working model that can accurately depict and describe macro scale events based on electrical interactions, I think that would put us at a better standing than we are at now. tusenfem Jan23-09 01:01 PM Re: Magnetic Reconnection vs Double Layers It would be nice if you could show in detail that the regions in which reconnection happens can be correctly described by an "exploding double layer". At the moment the observations by e.g. Cluster are so good and in agreement with the theoretical description of reconnection, with the inward motion of "field lines" the energization of particles, the Hall current signature because of the decoupling of the ions from the field. I would look up the observations described by Sergeev et al (and refs therein). Also there is the interesting paper by Treumann et al. about the role of the Hall field. Although I did my PhD on double layers in astrophysics (mentioned in the Wiki page which is greatly rewritten by me with help from my predecessors, in its current and excellent form), I cannot see how a DL can make all the things that are observed near a reconnection region. But please show me with data and a model how it works. tusenfem Jan23-09 01:20 PM Re: Magnetic Reconnection vs Double Layers Quote: Quote by suede So far no one has been able to rectify this problem, nor have they been able to produce a "reconnecting" magnetic field without first applying current to the plasma sheets they are observing. 
The reason being obvious of course: in order to create a magnetic field, one must first induce an electrical current.

Naturally there is a current in the system because reconnection happens in oppositely directed magnetic fields and through Maxwell's equations it is clear that these two fields are separated by a current sheet, just like in nature in e.g. the Earth's magnetotail.

Quote: Quote by suede
THEMIS, Viking, FAST, UARS and several other satellites have confirmed the existence of parallel electric fields powering the auroras. Electric fields of course must complete a circuit in order to flow. Without charge deficiency in one area of space, there would be no current flow which is a product of charge equalization. Some papers on the findings of parallel electric field aligned currents, otherwise known as "Birkeland currents" after Kristian Birkeland, the man who postulated their existence back in 1901.…

electric fields must complete a circuit to flow? I do hope this is a language problem, because electric fields don't flow. Field-aligned electric fields and field-aligned currents have absolutely nothing to do with reconnection, so you cannot use that to claim that reconnection is correct or not. Take a look at my paper here where the motion of the magnetic field is the cause of currents flowing. Something (which you will see I leave to the reader to decide what, reconnection, current disruption, ...) sets plasma flows in motion, which is pulled along with the magnetic field (in the tail the frozen-in condition is very well satisfied, determined from measurements not assumptions). As there is a barrier this motion needs to be stopped and one of the main ways of stopping is cross-tail currents, and these close as field-aligned currents. Now these field-aligned currents, when they go through a region of low density, they need to be accelerated to maintain the current, which is usually done by a double layer. But from your description above, it sounds like you have no idea what a DL is. And I am insulted! You forget my paper on solitary kinetic Alfvén waves which carry parallel electric field in the auroral zone, and there is the paper by Chust et al which shows strong parallel E-fields measured by Freja and shows that they are real and no artifact. However, all these double layers and parallel electric fields are mainly close to the Earth, most of them in the auroral zone, whereas reconnection happens rather far down the tail (20 Earth radii) or near the nose of the magnetosphere.

Suede Jan23-09 01:38 PM
Re: Magnetic Reconnection vs Double Layers

Quote: Quote by tusenfem (Post 2046544)
It would be nice if you could show in detail that the regions in which reconnection happens can be correctly described by an "exploding double layer". At the moment the observations by e.g. Cluster are so good and in agreement with the theoretical description of reconnection, with the inward motion of "field lines" the energization of particles, the Hall current signature because of the decoupling of the ions from the field. I would look up the observations described by Sergeev et al (and refs therein). Also there is the interesting paper by Treumann et al. about the role of the Hall field. Although I did my PhD on double layers in astrophysics (mentioned in the Wiki page which is greatly rewritten by me with help from my predecessors, in its current and excellent form), I cannot see how a DL can make all the things that are observed near a reconnection region.
But please show me with data and a model how it works.

Just search for 'double layer' 'aurora' 'birkeland current' 'parallel electric field' in any geophysical journal. I turned up a bucket load with just a few simple searches. I'm not sure what kind of data and model you're looking for.

Double layers in the downward current region of the aurora
http://hal.archives-ouvertes.fr/docs...10-45-2003.pdf

Parallel electric fields in the upward current region of the aurora: Numerical solutions

Particle Simulation of Auroral Double Layers
A double layer is that solution which preserves gross quasineutrality within its volume while permitting momentum balance between incident accelerated particles. The potential of the double layer is limited by the ion kinetic energy but need not match the global potential required for overall charge neutrality. One and two dimensional electrostatic particle simulations verify both double layer solutions and dependence of potentials on the injected energy. The difference between global and local potentials is absorbed in a sheath opposite the injection boundary. Potential formations are strictly dependent on the self-consistent charge distributions they support. Microinstabilities cause changes in particle distributions. Double layer motion is associated with exchange of momentum between particles and these fields.

Double layers
The basic properties of electrostatic double layers observed in laboratory gaseous discharges are reviewed. Theoretical results from both macroscopic 4-fluid theory and microscopic, Vlasov, theory are described and found to be in fairly good agreement. Recent work on Penrose-stable double layers, as well as double layers with oblique electric and magnetic fields are described. Applications to Birkeland currents in the ionosphere are made. It was found that kilovolt potential drops along the geomagnetic field can be produced in the topside ionosphere by double layers. Anomalous turbulent resistivity effects are unlikely to produce large parallel potential drops since that would lead to excessive heating of the ambient plasma. Since double layers are laminar structures they will not produce heat until the accelerated particles are stopped in the lower, much denser E-layer.

Large parallel electric fields in the upward current region of the aurora: Evidence for ambipolar effects

Double layers on auroral field lines
Time-stationary solutions to the Vlasov-Poisson equation for ion holes and double layers were examined along with particle simulations which pertain to recent observations of small amplitude (eφ/T_e ≈ 1) electric field structures on auroral field lines. Both the time-stationary analysis and the simulations suggest that double layers evolve from holes in ion phase space when their amplitude reaches eφ/T_e ≈ 1. Multiple small amplitude double layers which are seen in long simulation systems and are seen to propagate past spacecraft may account for the acceleration of plasma sheet electrons to produce the discrete aurora.

Suede Jan23-09 01:44 PM
Re: Magnetic Reconnection vs Double Layers

Quote: Quote by tusenfem (Post 2046572)
Just some random comments on your first post: Naturally there is a current in the system because reconnection happens in oppositely directed magnetic fields and through Maxwell's equations it is clear that these two fields are separated by a current sheet, just like in nature in e.g. the Earth's magnetotail. electric fields must complete a circuit to flow?
I do hope this is a language problem, because electric fields don't flow. Field-aligned electric fields and field-aligned currents have absolutely nothing to do with reconnection, so you cannot use that to claim that reconnection is correct or not. Take a look at my paper here where the motion of the magnetic field is the cause of currents flowing. Something (which you will see I leave to the reader to decide what, reconnection, current disruption, ...) sets plasma flows in motion, which is pulled along with the magnetic field (in the tail the frozen-in condition is very well satisfied, determined from measurements not assumptions). As there is a barrier this motion needs to be stopped and one of the main ways of stopping is cross-tail currents, and these close as field-aligned currents. Now these field-aligned currents, when they go through a region of low density, they need to be accelerated to maintain the current, which is usually done by a double layer. But from your description above, it sounds like you have no idea what a DL is. And I am insulted! You forget my paper on solitary kinetic Alfvén waves which carry parallel electric field in the auroral zone, and there is the paper by Chust et al which shows strong parallel E-fields measured by Freja and shows that they are real and no artifact. However, all these double layers and parallel electric fields are mainly close to the Earth, most of them in the auroral zone, whereas reconnection happens rather far down the tail (20 Earth radii) or near the nose of the magnetosphere.

A double layer can "explode" at any point and we have evidence for their existence throughout the magnetosphere of Earth. I do indeed know what a double layer is; in fact, I quoted the dictionary definition of one. If you want to believe magnetic field lines are real physical objects that merge and reconnect, I suppose that's your business. I personally prefer Alfvén's opinion.

Suede Jan23-09 02:32 PM
Re: Magnetic Reconnection vs Double Layers

24 July 2008
Surprise sequence
Strung out like a line of buoys in the ocean, THEMIS tracked the true sequence of events. The outermost satellites registered a reconnection, an aurora appeared near Earth, then the inner probes saw a current disruption. This sequence was a surprise, as researchers expected the aurora to occur last.

I'll tell you why it's a "surprise". Magnetic reconnection isn't real and the models are wrong. That's a pretty huge surprise, btw. That's like predicting: egg first splatters, then hammer falls.

Sundance Jan24-09 06:39 AM
Re: Magnetic Reconnection vs Double Layers

Hello Suede
Search Plasma Z-pinch Tokamaks reversed field pinch and arXiv tokamak
http://arxiv.org/find/all/1/all:+Tokamak/0/1/0/all/0/1
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8211084604263306, "perplexity": 1353.7398965609282}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657119220.53/warc/CC-MAIN-20140914011159-00343-ip-10-196-40-205.us-west-1.compute.internal.warc.gz"}
https://math.stackexchange.com/questions/1153750/galois-group-over-the-field-of-rational-functions
# Galois group over the field of rational functions I am looking to find the Galois group of $x^3-x+t$ over $\mathbb{C}(t)$, the field of rational functions with complex coefficients. I have shown that the automorphisms of the rational function field $F(t)$ for fixed $F$ are precisely the fractional linear transformations that is $t \rightarrow \frac{at +b}{ct+d}$ for $a,b,c,d \in \mathbb{C}$. Is this useful? Also is there anyway to factor $x^3-x+t$ nicely? I slept on this for a little bit and developed an idea to show this. I used Cardano's method to explicitly solve for the roots of this polynomial and show that there exist no linear factors in $\mathbb{C}(t)$ and $f(x)$ is therefore irreducible. This is because the polynomial is cubic, and if there are no linear factors then there cannot be any quadratic factors. Thus, you have to adjoin some root let's call it $\theta$ to $\mathbb{C}(t)$. The degree of this field over $\mathbb{C}(t)$ is a Galois extension and must have degree 3. The only group with order 3 is $\mathbb{Z}_3$, which implies this is the Galois group. • You can easily determine the Galois group of a cubic polynomial simply by computing its discriminant and deciding whether it is a square or not. – Mariano Suárez-Álvarez Feb 17 '15 at 23:33 • See www.math.uconn.edu/~kconrad/blurbs/galoistheory/cubicquarticallchar.pdf – Mariano Suárez-Álvarez Feb 17 '15 at 23:36 • Ah because if the discriminant is square then the Galois group is isomorphic to $A_3$ and $S_3$ otherwise. Thank you! – KangHoon You Feb 17 '15 at 23:42 • What you wrote is not correct. Your argument is, essentially, that as the polynomial is irreducible, adjoining one of its roots gives a normal extwnsion, and that is very false. – Mariano Suárez-Álvarez Feb 24 '15 at 5:26 • Chiming in with Mariano. Why do you get a Galois extension by adjoining $\theta$? – Jyrki Lahtonen Feb 24 '15 at 7:12 I don't see how to use Eisenstein here. But the polynomial is a cubic, so if it factors, then one of the factors is linear, and the polynomial would have a zero $x=\frac{p(t)}{q(t)}\in\Bbb{C}(t)$ with some polynomials $p(t),q(t)\in\Bbb{C}[t]$, $q\neq0$. To exclude this possibility we use a line of reasoning analogous to the familiar rational root test - taking advantage of the fact that $\Bbb{C}[t]$ is a UFD. So assume that all common factors of $p(t)$ and $q(t)$ have been cancelled. Then $$x^3-x+t=\frac{p^3-pq^2+tq^3}{q^3}=0.$$ Here the numerator has to be zero, so from $$p^3-pq^2=-tq^3$$ we can conclude that $p$ divides the left hand side, hence also the right hand side. But $p$ has no common factors with $q$, so it has to be a factor of $t$. Similarly from $$p^3=pq^2-tq^3$$ we see that $q$ divides the right hand side, hence also $p^3$. But, as above, this implies that $q$ must be a constant. The non-zero constants are the units of $\Bbb{C}[t]$, so we can conclude that $x=p/q$ is either a constant or, $x=at$ for some $a\in\Bbb{C}$. I'm sure that you can show that neither of those work. Therefore this cubic has no linear factors over $\Bbb{C}(t)$ and hence it is irreducible. • thank you for your help. I offer an alternative solution, do you see any problems with it? – KangHoon You Feb 24 '15 at 5:21 Hint: Show that the polynomial is irreducible over $\mathbb C[t]$ (which shows irreducibility over $\mathbb C(t)$ by Gauss) and then compute the discriminant as usual. • I know showing irreducibility over $\mathbb{Q}$ can be done using Eisenstein's Criterion. Is there a similar trick for $\mathbb{C}$? 
– KangHoon You Feb 17 '15 at 23:43 • Eisenstein's criterion works over the field of fractions of a UFD, such as $\mathbb Z$ or $\mathbb C[t]$. – Mariano Suárez-Álvarez Feb 17 '15 at 23:47 • After substituting $x+a$ for some integer $a$, I believe then I will be able to show the polynomial to satisfy Eisenstein's critertion. Thank you @MarianoSuárez-Alvarez and @Mesih! – KangHoon You Feb 17 '15 at 23:53 • By Gauss, irreducibility in $\mathbf C(T)[X]$ is (essentially) equivalent to irreducibility in $\mathbf C[T,X]$, which (here) follows from the fact that the degree in $T$ is $1$. – ACL Jun 26 at 15:53
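For what it is worth, the discriminant route suggested in the comments can be carried out explicitly. The following is my own sketch and is not part of the original thread.

```latex
% Discriminant of a depressed cubic x^3 + px + q:
\[
  \Delta = -4p^3 - 27q^2
  \qquad\Longrightarrow\qquad
  \Delta\bigl(x^3 - x + t\bigr) = -4(-1)^3 - 27t^2 = 4 - 27t^2 .
\]
% A square in C(t) must vanish to even order at every point, but 4 - 27t^2 has
% two distinct simple zeros (t = \pm 2/(3\sqrt{3})), so it is not a square in C(t).
% Hence the Galois group of x^3 - x + t over C(t) is S_3, not A_3 \cong Z/3Z.
```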
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9385594129562378, "perplexity": 130.59019505235963}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027314638.49/warc/CC-MAIN-20190819011034-20190819033034-00298.warc.gz"}
https://www.physicsforums.com/threads/riemann-curvature-tensor-derivation.57670/
# Riemann curvature tensor derivation

1. Dec 23, 2004

### weio

Hey, when calculating the Riemann curvature tensor, you need to calculate the commutator of the covariant derivatives acting on some vector field $$V$$, i.e. like this:

$$[\nabla_a, \nabla_b]V = \nabla_a\nabla_b V - \nabla_b\nabla_a V = V_{;ab} - V_{;ba}$$

But why does this antisymmetric difference give us the Riemann tensor?

thanks

2. Dec 24, 2004

### Rob Woodside

In addition to the commutator of the covariant derivatives, you need the commutator of the basis vectors too. Ignore torsion. Think of your covariant derivative as a change along your basis. In a coordinate basis (defined by vanishing basis commutators) two different basis vectors span a plane. Think of a small quadrilateral spanned by the two basis vectors. When the vector V is carried around this quadrilateral your commutator of covariant derivatives gives the change in V. Its length can't change, but its direction does. So with R the full Riemann curvature: [grada , gradb] V = R(.,V,a,b). Riemann set up his geometry so it would look flat in the small. However, he was amazed that this difference resulting from taking a vector to nearby points could be described by an object (the full curvature tensor) that lived solely at the base point. This made him realize the importance of the curvature tensor and gave substance to his geometry.

3. Dec 24, 2004

### weio

Hey

So far this is how I understand it, though I know I could be very wrong. If you have two geodesics parallel to each other, with tangents $$V$$ and $$V'$$, in which the coordinates $$x^\alpha$$ point along both geodesics, there is some connecting vector $$w^\alpha$$ between them. Let the affine parameter on the geodesics be $$\lambda$$. The Riemann tensor measures the relative acceleration between these two geodesics, so you calculate the acceleration at points A, A' on each geodesic and subtract them. This gives you an expression telling how the components of $$w^\alpha$$ change.

$$\frac{d^2w^\alpha}{d\lambda^2} = \left.\frac{d^2x^\alpha}{d\lambda^2}\right|_{A'} - \left.\frac{d^2x^\alpha}{d\lambda^2}\right|_{A} = -\Gamma^\alpha_{00,\beta}\, w^\beta$$

After that you calculate the full second covariant derivative along V, i.e. you get something like

$$\nabla_V \nabla_V w^\alpha = \left(\Gamma^\alpha_{\beta 0,0} - \Gamma^\alpha_{00,\beta}\right) w^\beta = R^\alpha{}_{00\beta}\, w^\beta = R^\alpha{}_{\mu\nu\beta}\, V^\mu V^\nu w^\beta$$

That's where the tensor arises. So basically it's a difference in acceleration, as geodesics don't maintain their separation in curved space.

4. Dec 24, 2004

### Rob Woodside

Yes it arises there and in many other places, including the one you asked about and that I told you about.

5. Dec 27, 2004

### weio

Thanks! I understand the derivation now. I found a simple one which fully explains it. For the curious, here is the link:

http://www.anasoft.co.uk/physics/gr/reimann/reimann.html [Broken]

weio

Last edited by a moderator: May 1, 2017
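For reference, the identity the thread is circling around can be stated compactly. This is the standard Ricci identity for a torsion-free connection, written in one common sign and index convention (conventions vary between textbooks); it is not quoted from the thread or from the linked page.

```latex
% Ricci identity (torsion-free connection; one common sign convention):
\[
  [\nabla_a, \nabla_b]\, V^{c}
    \;=\; \nabla_a \nabla_b V^{c} - \nabla_b \nabla_a V^{c}
    \;=\; R^{c}{}_{dab}\, V^{d} .
\]
% The antisymmetrized second covariant derivative of V is linear in V at a point,
% with the Riemann tensor as the coefficient; this is why the commutator
% "gives" the curvature.
```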
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9111636281013489, "perplexity": 732.340359913456}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583511203.16/warc/CC-MAIN-20181017153433-20181017174933-00430.warc.gz"}
https://wiki.q-researchsoftware.com/wiki/T-Test_of_a_Probability_%25
# T-Test of a Probability % Where $\displaystyle{ \hat{a} }$ is a probability % from a ranking question, $\displaystyle{ \mu }$ is the mean probability %, $\displaystyle{ n }$ is the effective number of observations (note that this is different to the effective sample size where repeated observations exist for each respondent) and $\displaystyle{ se_{a} }$ is the standard error of $\displaystyle{ \hat{a} }$, the test statistic is: $\displaystyle{ t=\frac{\hat{a}-\mu}{se_{a}} }$, $\displaystyle{ p = 2\Pr(t_{n-k+1} \ge |t|) }$, $\displaystyle{ k }$ is the number of alternatives.
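A quick numerical illustration of these formulas follows. The numbers and variable names are invented for the example; only the formulas themselves come from the page above.

```python
# Two-sided t-test of a probability %, following the formulas above.
# a_hat, mu, se_a, n and k are invented example values.
from scipy.stats import t as t_dist

a_hat = 0.34    # probability % for the alternative (as a proportion)
mu    = 0.25    # mean probability %
se_a  = 0.04    # standard error of a_hat
n, k  = 120, 5  # effective number of observations, number of alternatives

t_stat = (a_hat - mu) / se_a
df = n - k + 1
p = 2 * t_dist.sf(abs(t_stat), df)   # p = 2 * Pr(t_{n-k+1} >= |t|)
print(f"t = {t_stat:.3f}, df = {df}, p = {p:.4f}")
```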
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.985694169998169, "perplexity": 445.0607422589567}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711162.52/warc/CC-MAIN-20221207121241-20221207151241-00509.warc.gz"}
http://www.zentralblatt-math.org/zmath/en/search/?q=an:1056.60029
Zbl 1056.60029
Bendikov, A.; Saloff-Coste, L.
On the sample paths of diagonal Brownian motions on the infinite dimensional torus. (English) [J] Ann. Inst. Henri Poincaré, Probab. Stat. 40, No. 2, 227-254 (2004). ISSN 0246-0203

Let $T=R/2\pi Z$ be the circle group and $T^\infty=\prod_1^\infty T_i$ be the countable product of circles $T_i$. The group $T^\infty$ is equipped with the product topology and its normalized Haar measure $\mu$. Let $\cal C$ be the set of all smooth functions on $T^\infty$ which depend only on a finite number of coordinates. On the circle $T$ denote by $\nu_t$ the standard heat kernel measure associated to the infinitesimal generator $(\frac d{dx})^2$. The convolution semigroup $(\nu_t)_{t>0}$ is associated with a stochastic process $\xi=(\xi_t)_{t\ge0}$ which is simply Brownian motion run at twice the usual speed in classic probabilistic notation and wrapped around the circle. For any fixed sequence ${\bold a}=(a_1,\ldots)$ of positive numbers let us consider the product measures $\mu_t=\mu_t^{\bold a}=\bigotimes_1^\infty\nu_{a_it}$. The family $(\mu_t)_{t>0}$ forms a convolution semigroup of measures on $T^\infty$ and $\mu_t$ is the marginal at time $t$ of a diffusion process $X=X^{\bold a}=(X_t)_{t\ge0}$ which is simply the product of independent circle Brownian motions $X^i=(X^i_t)_{t\ge0}$ where $X^i_t=\xi_{a_it}$. The intrinsic distance is defined by $$d(x,y)=d^{\bold a}(x,y)= \sup\left\{f(x)-f(y):f\in{\cal C},\ \sum_1^\infty a_i \vert \partial_if \vert ^2\le1\right\} \text{ and } d(x)=d^{\bold a}(x)=d^{\bold a}(e,x),$$ where $e=(0,0,\ldots)$ is the neutral element in $T^\infty$. $d$ is continuous and defines the topology of $T^\infty$ if and only if $\sum_1^\infty \frac 1{a_i}<\infty$. The last condition is assumed to hold throughout the paper under review. Under this condition for any $t>0$ the measure $\mu_t=\mu_t^{\bold a}$ is absolutely continuous with respect to the Haar measure $\mu$ and admits a continuous density $\mu_t^{\bold a}(x)$ given by $\mu_t^{\bold a}(x)=\prod_1^\infty \nu_{a_it}(x_i)$. Given a sequence ${\bold a}=(a_i)$ of positive numbers define $N(s)= N^{\bold a}(s)=\#\{i: a_i\le s\}$ and for a given function $f:(0, \infty)\to(0,\infty)$ define the transform $f^{\#}$ by $f^{\#}(z)= \int_0^z f(x)\,\frac{dx}{x}$.

The main result of the paper is Theorem 4.4. Let ${\bold a}=(a_i)$ be a sequence of positive numbers such that $N=N^{\bold a}$ is slowly varying. Then $\log\mu_t(e)\sim(1/2)N^{\#}(1/t)$ as $t$ tends to $0$ and the sample paths of the process $X^{\bold a}$ have the following properties:

(1) We always have $P_e$-almost surely $\liminf_{t\to 0} (d(X_t)/\sqrt{tN(1/t)})<\infty$.
(2) If $N(s)=o(\log s)$ at infinity, then $P_e$-almost surely $$\limsup_{t\to 0} \frac {d(X_t)} {\sqrt{4t\log\log 1/t}}=1,\qquad \lim_{\varepsilon\to 0}\sup_{0<s<t \le1,\,t-s\le\varepsilon} \frac {d(X_s,X_t)} {\sqrt{4(t-s) \log(1/(t-s))}}=1$$ and $$\liminf_{t\to 0} \frac{d(X_t)} {\sqrt{4t\log\log 1/t}}=0.$$

(3) If $N(s)=o(\log s)$ and $\log\log s=O(N(s))$ at infinity, then $P_e$-almost surely $$0<\liminf_{t\to 0} \frac {d(X_t)} {\sqrt{tN(1/t)}}\le \limsup_{t\to 0} \frac {d(X_t)} {\sqrt {tN(1/t)}} <\infty,$$ and $$\lim_{\varepsilon\to 0}\sup_{0<s<t\le1, \,t-s \le\varepsilon} \frac{d(X_s,X_t)} {\sqrt{4(t-s)\log(1/(t-s))}}=1.$$

(4) If $\log\log s=O(N(s))$ at infinity, then $P_e$-almost surely $$0<\liminf_{t\to 0} \frac{d(X_t)} {\sqrt{tN(1/t)}} \le\limsup_{t\to 0} \frac{d(X_t)} {\sqrt{tN(1/t)}}<\infty.$$

(5) If $\log s=O(N(s))$ at infinity, then $P_e$-almost surely $$0< \lim_{\varepsilon\to 0}\sup_{0<s<t\le1,\,t-s\le \varepsilon} \frac{d(X_s,X_t)} {\sqrt{4(t-s)\log(1/(t-s))}}<\infty.$$

The authors note that the different cases in this theorem are not exclusive. Broadly speaking, for a slowly varying $N$ there are three cases to consider:

(a) If $N$ is smaller than $\log\log$, then we obtain a classical Lévy-Khinchin law of iterated logarithm $$\limsup_{t\to 0} \frac {d(X_t)} {\sqrt{4t\log\log 1/t}}=1$$ and a classical Lévy modulus of continuity $$\lim_{\varepsilon \to 0}\sup_{0<s<t\le1,\,t-s\le \varepsilon} \frac{d(X_s,X_t)} {\sqrt{4(t-s)\log(1/(t-s))}}=1.$$

(b) If $N$ is larger than $\log \log$ but smaller than $\log$, then we still have a classical Lévy modulus of continuity, but the Lévy-Khinchin-type result is not classical any more ($d(X_t)$ is now controlled by the function $\sqrt{tN(1/t)}$).

(c) If $N$ is larger than $\log$, but still slowly varying, then all regularity behaviors are controlled by the function $\sqrt{tN(1/t)}$.

[Vakhtang V. Kvaratskhelia (Tbilisi)]

MSC 2000:
*60G17 Sample path properties
60B15 Probability measures on groups
60J60 Diffusion processes

Keywords: sample paths; modulus of continuity; diagonal Brownian motion; infinite-dimensional torus; intrinsic distance; law of iterated logarithm
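To make the asymptotics concrete, here is one illustrative choice of weights; this example is mine and does not appear in the review.

```latex
% Example (not from the review): take a_i = e^{i}, so \sum_i 1/a_i < \infty and
% N(s) = #{ i : e^{i} <= s } = \lfloor \log s \rfloor is slowly varying.  Then
\[
  N^{\#}(z) = \int_0^z N(x)\,\frac{dx}{x} \sim \frac{(\log z)^2}{2}
  \qquad\text{and}\qquad
  \log \mu_t(e) \sim \tfrac12\,N^{\#}(1/t) \sim \tfrac14\,\bigl(\log(1/t)\bigr)^2
  \quad (t\to 0).
\]
% Since \log\log s = O(N(s)) and \log s = O(N(s)) here, items (4) and (5) of
% Theorem 4.4 apply, so d(X_t) and the modulus of continuity are controlled as stated there.
```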
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9858067631721497, "perplexity": 165.82917427700434}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705318091/warc/CC-MAIN-20130516115518-00018-ip-10-60-113-184.ec2.internal.warc.gz"}
https://www.math.ku.dk/english/calendar/events/algtopseminar08062020/
# Algebra/Topology Seminar

Speaker: Christian Dahlhausen

Title: Continuous K-theory and cohomology of rigid spaces

Abstract: Continuous K-theory is a derivative of algebraic K-theory for rigid analytic spaces. In this talk, I will prove three properties for the negative continuous K-theory of a rigid space of dimension d: Weibel vanishing in degrees smaller than -d, analytic homotopy invariance in degrees less than or equal to -d, and a description of the edge group in degree -d as the cohomology in degree d with integral coefficients. These properties are analogous to corresponding properties of negative algebraic K-theory, and the proof works by reduction to the algebraic case. The key ingredient is a comparison of Zariski cohomology and rh-cohomology for certain (relative) Zariski-Riemann spaces. The content of this talk is based on my PhD thesis advised by Moritz Kerz and Georg Tamme; a condensed version can be found on the arXiv (1910.10437).
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8259079456329346, "perplexity": 1029.8728724468292}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400198868.29/warc/CC-MAIN-20200920223634-20200921013634-00180.warc.gz"}
https://tug.org/pipermail/texhax/2007-October/009343.html
# [texhax] TeX command works for "1 Foo" but not for "11 Foo"

Tue Oct 30 21:49:07 CET 2007

Hi list,

This is a weird problem. I use the following for my thesis to force me to include a date with each link I refer to, or make it explicit that I don't need a date, e.g. \link{http://debian.org}{}:

\newcommand{\link}[2]{% definition line reconstructed from context: the macro takes a URL and a date
  % not making #2 optional on purpose
  \burl{#1}%
  \if\@empty#2\relax\else{ }\begin{smaller}{[#2]}\end{smaller}\fi%
}

The problem is that this works fine for dates like 1 Oct 2007 or 15 Sep 2007, but as soon as the date is 11 or 22, something goes fishy; for instance, the same link with the dates 10 Oct 2007 and 11 Oct 2007 yields:

http://debian.org/Bugs/server-control [10 Oct 2007]
http://debian.org/Bugs/server-control Oct 2007

Either TeX is on drugs, or I am, or I simply don't understand what's going on. Could you help me figure this out, please?

--
sed -e '/^[when][coders]/!d' \
    -e '/^...[discover].$/d' \
    -e '/^..[real].[code]$/!d' \
    /usr/share/dict/words
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9617566466331482, "perplexity": 4375.878108337279}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195526517.67/warc/CC-MAIN-20190720132039-20190720154039-00393.warc.gz"}
https://www.zora.uzh.ch/id/eprint/18483/
Estimation of the false negative fraction of a diagnostic kit through Bayesian regression model averaging

Ranyimbo, A O; Held, L (2005). Estimation of the false negative fraction of a diagnostic kit through Bayesian regression model averaging. Statistics in Medicine, 25(4):653-667.

Abstract

In modelling we usually endeavour to find a single 'best' model that explains the relationship between independent and dependent variables. Selection of a single model fails to take into account the prior uncertainty in the model space. The Bayesian model averaging (BMA) approach tackles this problem by considering the set of all possible models. We apply BMA approach to the estimation of the false negative fraction (FNF) in a particular case of a two-stage multiple screening test for bowel cancer. We find that after taking model uncertainty into consideration the estimate of the FNF obtained is largely dependent on the covariance structure of the priors. Results obtained when the Zellner g-prior for the prior variance is used is largely influenced by the magnitude of g.
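For background, the model averaging the abstract refers to has the standard textbook form: the posterior of a quantity of interest is a mixture over all candidate models weighted by their posterior probabilities. The formulas below are generic BMA, not a result taken from this paper.

```latex
\[
  p(\Delta \mid D) \;=\; \sum_{k=1}^{K} p(\Delta \mid M_k, D)\, p(M_k \mid D),
  \qquad
  p(M_k \mid D) \;=\; \frac{p(D \mid M_k)\, p(M_k)}{\sum_{j=1}^{K} p(D \mid M_j)\, p(M_j)} .
\]
```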
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8932499885559082, "perplexity": 709.3715921235455}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662545326.51/warc/CC-MAIN-20220522094818-20220522124818-00214.warc.gz"}
http://mathhelpforum.com/advanced-algebra/129500-prove-w-symmetric.html
# Thread: Prove that W is symmetric

1. ## Prove that W is symmetric

I know that W is symmetric when W equals its transpose (W = W^T),

2. $(W-I)^T = 2W \ \ (\star)$

Using the property that: $(A \pm B)^T = A^T \pm B^T$ we have that: $W^T - I = 2W \ \Leftrightarrow \ {\color{blue}I = W^T - 2W}$

Now take the transpose of both sides of $(\star)$ and solve for $I$. Compare this expression with what we have in blue and you should get what you want.
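Carrying the hint through to the end (my completion, not posted in the thread): transposing $(W-I)^T = 2W$ gives $W - I = 2W^T$, i.e. $I = W - 2W^T$, and comparing with the expression in blue:

```latex
\[
  W^{T} - 2W \;=\; I \;=\; W - 2W^{T}
  \quad\Longrightarrow\quad
  3W^{T} = 3W
  \quad\Longrightarrow\quad
  W^{T} = W ,
\]
```

so $W$ equals its own transpose, which is exactly the symmetry claim.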
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 5, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9220216274261475, "perplexity": 530.8111960477613}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988720026.81/warc/CC-MAIN-20161020183840-00215-ip-10-171-6-4.ec2.internal.warc.gz"}
https://math.libretexts.org/Courses/Honolulu_Community_College/Math_75X%3A_Introduction_to_Mathematical_Reasoning_(Kearns)/04%3A_Fundamentals_of_Algebra/4.06%3A_Solving_Equations-_Keeping_the_Balance/4.6.03%3A_Equations_with_Decimals
# 4.6.3: Equations with Decimals

We can add or subtract the same decimal number from both sides of an equation without affecting the solution.

Example 1

Solve for x: x − 1.35 = −2.6.

Solution

To undo subtracting 1.35, add 1.35 to both sides of the equation. \begin{aligned} x - 1.35 = -2.6 ~ & \textcolor{red}{ \text{ Original equation.}} \\ x - 1.35 + 1.35 = -2.6 + 1.35 ~ & \textcolor{red}{ \text{ Add 1.35 to both sides.}} \\ x = -1.25 ~ & \textcolor{red}{ \text{ Simplify: } -2.6 + 1.35 = -1.25.} \end{aligned}\nonumber

Exercise

Solve for x: $$x+1.25=0.6$$

−0.65

We can still multiply both sides of an equation by the same decimal number without affecting the solution.

Example 2

Solve for x: $$\frac{x}{-0.35} = 4.2$$.

Solution

To undo dividing by −0.35, multiply both sides of the equation by −0.35. \begin{aligned} \frac{x}{-0.35} = 4.2 ~ & \textcolor{red}{ \text{ Original equation.}} \\ -0.35 \left( \frac{x}{-0.35} \right) = -0.35 (4.2) ~ & \textcolor{red}{ \text{ Multiply both sides by } -0.35.} \\ x = -1.470 ~ & \textcolor{red}{ \text{ Simplify: } -0.35 (4.2) = -1.470.} \end{aligned}\nonumber

Exercise

Solve for y: $$\frac{y}{0.37} = -1.52$$.

−0.5624

We can still divide both sides of an equation by the same decimal number without affecting the solution.

Example 3

Solve for x: $$−1.2x = −4.08$$.

Solution

To undo multiplying by −1.2, divide both sides of the equation by −1.2. \begin{aligned} -1.2x = -4.08 ~ & \textcolor{red}{ \text{ Original equation.}} \\ \frac{-1.2x}{-1.2} = \frac{-4.08}{-1.2} ~ & \textcolor{red}{ \text{ Divide both sides by } -1.2} \\ x = 3.4 ~ & \textcolor{red}{ \text{ Simplify: } -4.08/(-1.2)=3.4.} \end{aligned}\nonumber

Exercise

Solve for z: $$-2.5z=1.4$$

−0.56

## Combining Operations

We sometimes need to combine operations.

Example 4

Solve for x: $$−3.8x − 1.7 = −17.28$$.

Solution

To undo subtracting 1.7, add 1.7 to both sides of the equation.
\begin{aligned} -3.8x-1.7=-17.28 ~ & \textcolor{red}{ \text{ Original equation.}} \\ -3.8x-1.7+1.7=-17.28+1.7 ~ & \textcolor{red}{ \text{ Add 1.7 to both sides.}} \\ -3.8x=-15.58 ~ & \textcolor{red}{ \text{ Simplify: } -17.28 + 1.7 = -15.58.} \end{aligned}\nonumber

Next, to undo multiplying by −3.8, divide both sides of the equation by −3.8. \begin{aligned} \frac{-3.8x}{-3.8} = \frac{-15.58}{-3.8} ~ & \textcolor{red}{ \text{ Divide both sides by } -3.8.} \\ x = 4.1 ~ & \textcolor{red}{ \text{ Simplify: } -15.58/(-3.8) = 4.1.} \end{aligned}\nonumber

Exercise

Solve for u: $$-0.02u-3.2=-1.75$$.

−72.5

## Combining Like Terms

Combining like terms with decimal coefficients is done in the same manner as combining like terms with integer coefficients.

Example 5

Simplify the expression: $$−3.2x + 1.16x$$.

Solution

To combine these like terms we must add the coefficients. To add coefficients with unlike signs, first subtract the coefficient with the smaller magnitude from the coefficient with the larger magnitude. $\begin{array}{r} 3.20 \\ - 1.16 \\ \hline 2.04 \end{array}\nonumber$ Prefix the sign of the decimal number having the larger magnitude. Hence: $−3.2+1.16 = −2.04.\nonumber$ We can now combine like terms as follows: $−3.2x + 1.16x = −2.04x\nonumber$

Exercise

Simplify: $$-1.185t+3.2t$$

2.015t

When solving equations, we sometimes need to combine like terms.

Example 6

Solve the equation for x: $$4.2 − 3.1x + 2x = −7.02$$.

Solution

Combine like terms on the left-hand side of the equation. \begin{aligned} 4.2-3.1x+2x=-7.02 ~ & \textcolor{red}{ \text{ Original equation.}} \\ 4.2 - 1.1x = -7.02 ~ & \textcolor{red}{ \text{ Combine like terms: } -3.1x + 2x = -1.1x.} \\ 4.2 - 1.1x - 4.2 = -7.02 - 4.2 ~ & \textcolor{red}{ \text{ Subtract 4.2 from both sides.}} \\ -1.1x = -11.22 ~ & \textcolor{red}{ \text{ Subtract: } -7.02 - 4.2 = -11.22.} \\ \frac{-1.1x}{-1.1} = \frac{-11.22}{-1.1} ~ & \textcolor{red}{ \text{ Divide both sides by } -1.1.} \\ x = 10.2 ~ & \textcolor{red}{ \text{ Divide: } -11.22/(-1.1) = 10.2.} \end{aligned}\nonumber

Thus, the solution of the equation is 10.2.

Check

Like all equations, we can check our solution by substituting our answer in the original equation. \begin{aligned} 4.2 - 3.1x +2x = -7.02 ~ & \textcolor{red}{ \text{ Original equation.}} \\ 4.2 - 3.1(10.2) + 2(10.2) = -7.02 ~ & \textcolor{red}{ \text{ Substitute 10.2 for } x.} \\ 4.2 - 31.62 + 20.4 = -7.02 ~ & \textcolor{red}{ \text{ Multiply: } 3.1(10.2) = 31.62, ~ 2(10.2) = 20.4.} \\ -27.42 + 20.4 = -7.02 ~ & \textcolor{red}{ \text{ Order of Ops: Add, left to right.}} \\ ~ & \textcolor{red}{ ~ 4.2 - 31.62 = -27.42.} \\ -7.02 = -7.02 ~ & \textcolor{red}{ \text{ Add: } -27.42 + 20.4 = -7.02.} \end{aligned}\nonumber

Because the last line is a true statement, the solution x = 10.2 checks.

Exercise

Solve for r: $$-4.2 + 3.6r - 4.1r = 1.86$$

−12.12

## Using the Distributive Property

Sometimes we will need to employ the distributive property when solving equations.

Distributive Property

Let a, b, and c be any numbers. Then, $a(b + c) = ab + ac.\nonumber$

Example 7

Solve the equation for x: $$−6.3x − 0.4(x − 1.2) = −0.86$$.

Solution

We first distribute the −0.4 times each term in the parentheses, then combine like terms. \begin{aligned} -6.3x - 0.4(x-1.2) = -0.86 ~ & \textcolor{red}{ \text{ Original equation.}} \\ -6.3x-0.4x+0.48 = -0.86 ~ & \textcolor{red}{ \text{ Distribute.
Note that } -0.4(-1.2)=0.48.} \\ -6.7x+0.48=-0.86 ~ & \textcolor{red}{ \text{ Combine like terms.}} \end{aligned}\nonumber

Next, subtract 0.48 from both sides, then divide both sides of the resulting equation by −6.7. \begin{aligned} -6.7x+0.48-0.48 = -0.86 - 0.48 ~ & \textcolor{red}{ \text{ Subtract 0.48 from both sides.}} \\ -6.7x = -1.34 ~ & \textcolor{red}{ \text{ Simplify: } -0.86 - 0.48 = -1.34.} \\ \frac{-6.7x}{-6.7} = \frac{-1.34}{-6.7} ~ & \textcolor{red}{ \text{ Divide both sides by } -6.7.} \\ x = 0.2 ~ & \textcolor{red}{ \text{ Simplify: } -1.34/(-6.7)=0.2.} \end{aligned}\nonumber

Exercise

Solve for x: $$−2.5x − 0.1(x − 2.3) = 8.03$$

−3

## Rounding Solutions

Sometimes an approximate solution is adequate.

Example 8

Solve the equation $$3.1x+ 4.6=2.5 − 2.2x$$ for x. Round the answer to the nearest tenth.

Solution

We need to isolate the terms containing x on one side of the equation. We begin by adding 2.2x to both sides of the equation. \begin{aligned} 3.1x + 4.6 = 2.5 - 2.2x ~ & \textcolor{red}{ \text{ Original equation.}} \\ 3.1x + 4.6 + 2.2x = 2.5 - 2.2x + 2.2x ~ & \textcolor{red}{ \text{ Add } 2.2x \text{ to both sides.}} \\ 5.3x + 4.6 = 2.5 ~ & \textcolor{red}{ \text{ Combine terms: } 3.1x + 2.2x = 5.3x.} \end{aligned}\nonumber

Next, subtract 4.6 from both sides to isolate the term containing x. \begin{aligned} 5.3x + 4.6 - 4.6 = 2.5 - 4.6 ~ & \textcolor{red}{ \text{ Subtract 4.6 from both sides.}} \\ 5.3x = -2.1 ~ & \textcolor{red}{ \text{ Simplify: } 2.5 - 4.6 = -2.1.} \end{aligned}\nonumber

To undo the effect of multiplying by 5.3, divide both sides of the equation by 5.3. \begin{aligned} \frac{5.3x}{5.3} = \frac{-2.1}{5.3} ~ & \textcolor{red}{ \text{Divide both sides by 5.3.}} \\ x \approx -0.4 ~ & \textcolor{red}{ \text{ Round solution to nearest tenth.}} \end{aligned}\nonumber

To round the answer to the nearest tenth, we must carry the division out one additional place. Because the "test digit" is greater than or equal to 5, add 1 to the rounding digit and truncate. Thus, −2.1/5.3 ≈ −0.39, which rounds to −0.4.

Exercise

Solve for x: $$4.2x − 1.25 = 3.4+0.71x$$

1.33

Applications

Let's look at some applications that involve equations containing decimals. For convenience, we repeat the Requirements for Word Problem Solutions.

Requirements for Word Problem Solutions

1. Set up a Variable Dictionary. You must let your readers know what each variable in your problem represents. This can be accomplished in a number of ways:
1. Statements such as "Let P represent the perimeter of the rectangle."
2. Labeling unknown values with variables in a table.
3. Labeling unknown quantities in a sketch or diagram.
2. Set up an Equation. Every solution to a word problem must include a carefully crafted equation that accurately describes the constraints in the problem statement.
3. Solve the Equation. You must always solve the equation set up in the previous step.
4. Answer the Question. This step is easily overlooked. For example, the problem might ask for Jane's age, but your equation's solution gives the age of Jane's sister Liz. Make sure you answer the original question asked in the problem. Your solution should be written in a sentence with appropriate units.
5. Look Back. It is important to note that this step does not imply that you should simply check your solution in your equation.
After all, it’s possible that your equation incorrectly models the problem’s situation, so you could have a valid solution to an incorrect equation. The important question is: “Does your answer make sense based on the words in the original problem statement?” Example 9 Molly needs to create a rectangular garden plot covering 200 square meters (200 m²). If the width of the plot is 8.9 meters, find the length of the plot correct to the nearest tenth of a meter. Solution We will follow the Requirements for Word Problem Solutions. 1. Set up a Variable Dictionary. We will use a sketch to define our variables. Note that L represents the length of the rectangle. 2. Set Up an Equation. The area A of a rectangle is given by the formula $A = LW,\nonumber$ where L and W represent the length and width of the rectangle, respectively. Substitute 200 for A and 8.9 for W in the formula to obtain $200 = L(8.9),\nonumber$ or equivalently, $200 = 8.9L.\nonumber$ 3. Solve the Equation. Divide both sides of the last equation by 8.9, then round your answer to the nearest tenth. \begin{aligned} \frac{200}{8.9} = \frac{8.9L}{8.9} ~ & \textcolor{red}{ \text{ Divide both sides by 8.9.}} \\ 22.5 \approx L ~ & \textcolor{red}{ \text{ Round to nearest tenth.}} \end{aligned}\nonumber To round the answer to the nearest tenth, we must carry the division out one additional place. Because the “test digit” is greater than or equal to 5, add 1 to the rounding digit and truncate. Thus, 200/8.9 ≈ 22.5. 4. Answer the Question. To the nearest tenth of a meter, the length of the rectangular plot is L ≈ 22.5 meters. 5. Look Back. We have L ≈ 22.5 meters and W = 8.9 meters. Multiply length and width to find the area. $\text{Area} \approx (22.5 \text{ m})(8.9 \text{ m}) ≈ 200.25 \text{ m}^2.\nonumber$ Note that this is very nearly the exact area of 200 square meters. The discrepancy is due to the fact that we found the length rounded to the nearest tenth of a meter. Exercise Eta’s dog run is in the shape of a rectangle with area 500 square feet. If the length of the run is 28 feet, find the width of the run, correct to the nearest tenth of a foot. 17.9 feet Example 10 Children’s tickets to the circus go on sale for $6.75. The Boys and Girls club of Eureka has $1,000 set aside to purchase these tickets. Approximately how many tickets can the Boys and Girls club purchase? Solution We will follow the Requirements for Word Problem Solutions. 1. Set up a Variable Dictionary. Let N represent the number of tickets purchased by the Boys and Girls club of Eureka. 2. Set Up an Equation. Note that $\begin{matrix} \colorbox{cyan}{Price per ticket} & \text{ times } & \colorbox{cyan}{Number of tickets} & \text{ is } & \colorbox{cyan}{Full Purchase Price} \\ 6.75 & \cdot & N & = & 1,000 \end{matrix}\nonumber$ Hence, our equation is 6.75N = 1000. 3. Solve the Equation. Divide both sides of the equation by 6.75. \begin{aligned} \frac{6.75N}{6.75} = \frac{1000}{6.75} ~ & \textcolor{red}{ \text{ Divide both sides by 6.75.}} \\ N \approx 148 ~ & \textcolor{red}{ \text{ Truncate to nearest unit.}} \end{aligned}\nonumber Push the decimal point to the right-end of the divisor and the decimal point in the dividend an equal number of places. We’ll stop the division at the units position. 4. Answer the Question. The Boys and Girls club can purchase 148 tickets. 5. Look Back. Let’s calculate the cost of 148 tickets at $6.75 apiece. Thus, at $6.75 apiece, 148 tickets will cost $999. 
Because the Boys and Girls club of Eureka has $1,000 to work with, note that the club doesn’t have enough money left for another ticket. Exercise Adult tickets to the circus cost $12.25 apiece. If the club has $1,200 set aside for adult ticket purchase, how many adult tickets can they purchase? 97 Example 11 Marta has 20 feet of decorative fencing which she will use for the border of a small circular garden. Find the diameter of the circular garden, correct to the nearest hundredth of a foot. Use π ≈ 3.14. Solution The formula governing the relation between the circumference and diameter of a circle is $C = \pi d\nonumber$ The 20 feet of decorative fencing will be the circumference of the circular garden. Substitute 20 for C and 3.14 for π. $20 = 3.14d\nonumber$ Divide both sides of the equation by 3.14. \begin{aligned} \frac{20}{3.14} = \frac{3.14d}{3.14} \\ \frac{20}{3.14} = d \end{aligned}\nonumber Move the decimal point to the end of the divisor, then move the decimal point in the dividend an equal number of places (two places) to the right. Note that we must add two trailing zeros in the dividend. Thus, the problem becomes: $314 \overline{ )2000}\nonumber$ We need to round to the nearest hundredth. This requires that we carry the division one additional place to the right of the hundredths place (i.e., to the thousandths place). For the final step, we must round 6.369 to the nearest hundredth. In the schematic that follows, we’ve boxed the hundredths digit (the “rounding digit”) and the “test digit” that follows the “rounding digit.” Because the “test digit” is greater than or equal to 5, we add 1 to the “rounding digit,” then truncate. Therefore, to the nearest hundredth of a foot, the diameter of the circle is approximately $d ≈ 6.37 \text{ ft.}\nonumber$ Exercise Dylan has a circular dog pen with circumference 100 feet. Find the radius of the pen, correct to the nearest tenth of a foot. Use π ≈ 3.14. 15.9 feet ## Exercises In Exercises 1-16, solve the equation. 1. $$5.57x − 2.45x = 5.46$$ 2. $$−0.3x − 6.5x = 3.4$$ 3. $$−5.8x + 0.32 + 0.2x = −6.96$$ 4. $$−2.2x − 0.8 − 7.8x = −3.3$$ 5. $$−4.9x + 88.2 = 24.5$$ 6. $$−0.2x − 32.71 = 57.61$$ 7. $$0.35x − 63.58 = 55.14$$ 8. $$−0.2x − 67.3 = 93.5$$ 9. $$−10.3x + 82.4=0$$ 10. $$−1.33x − 45.22 = 0$$ 11. $$−12.5x + 13.5=0$$ 12. $$44.15x − 8.83 = 0$$ 13. $$7.3x − 8.9 − 8.34x = 2.8$$ 14. $$0.9x + 4.5 − 0.5x = 3.5$$ 15. $$−0.2x + 2.2x = 6.8$$ 16. $$−7.9x + 2.9x = 8.6$$ In Exercises 17-34, solve the equation. 17. $$6.24x − 5.2=5.2x$$ 18. $$−0.6x + 6.3=1.5x$$ 19. $$−0.7x − 2.4 = −3.7x − 8.91$$ 20. $$3.4x − 4.89 = 2.9x + 3.6$$ 21. $$−4.9x = −5.4x + 8.4$$ 22. $$2.5x = 4.5x + 5.8$$ 23. $$−2.8x = −2.3x − 6.5$$ 24. $$1.2x = 0.35x − 1.36$$ 25. $$−2.97x − 2.6 = −3.47x + 7.47$$ 26. $$−8.6x − 2.62 = −7.1x + 8.54$$ 27. $$−1.7x = −0.2x − 0.6$$ 28. $$3.89x = −5.11x + 5.4$$ 29. $$−1.02x + 7.08 = −2.79x$$ 30. $$1.5x − 2.4=0.3x$$ 31. $$−4.75x − 6.77 = −7.45x + 3.49$$ 32. $$−1.2x − 2.8 = −0.7x − 5.6$$ 33. $$−4.06x − 7.38 = 4.94x$$ 34. $$−4.22x + 7.8 = −6.3x$$ In Exercises 35-52, solve the equation. 35. $$2.3+0.1(x + 2.9) = 6.9$$ 36. $$−6.37 + 6.3(x + 4.9) = −1.33$$ 37. $$0.5(1.5x − 6.58) = 6.88$$ 38. $$0.5(−2.5x − 4.7) = 16.9$$ 39. $$−6.3x − 0.4(x − 1.8) = −16.03$$ 40. $$−2.8x + 5.08(x − 4.84) = 19.85$$ 41. $$2.4(0.3x + 3.2) = −11.4$$ 42. $$−0.7(0.2x + 5.48) = 16.45$$ 43. $$−0.8(0.3x + 0.4) = −11.3$$ 44. $$7.5(4.4x + 7.88) = 17.19$$ 45. $$−7.57 − 2.42(x + 5.54) = 6.95$$ 46. $$5.9 − 0.5(x + 5.8) = 12.15$$ 47. $$−1.7 − 5.56(x + 6.1) = 12.2$$ 48. 
$$−7.93 + 0.01(x + 7.9) = 14.2$$ 49. $$4.3x − 0.7(x + 2.1) = 8.61$$ 50. $$1.5x − 4.5(x + 4.92) = 15.6$$ 51. $$−4.8x + 3.3(x − 0.4) = −7.05$$ 52. $$−1.1x + 1.3(x + 1.3) = 19.88$$ In Exercises 53-58, solve the equation. 53. $$0.9(6.2x − 5.9) = 3.4(3.7x + 4.3) − 1.8$$ 54. $$0.4(−4.6x+ 4.7) = −1.6(−2.2x+ 6.9)−4.5$$ 55. $$−1.8(−1.6x + 1.7) = −1.8(−3.6x − 4.1)$$ 56. $$−3.3(−6.3x + 4.2) − 5.3=1.7(6.2x + 3.2)$$ 57. $$0.9(0.4x + 2.5) − 2.5 = −1.9(0.8x + 3.1)$$ 58. $$5.5(6.7x + 7.3) = −5.5(−4.2x + 2.2)$$ 59. Stacy runs a business out of her home making bird houses. Each month she has fixed costs of $200. In addition, for each bird house she makes, she incurs an additional cost of $3.00. If her total costs for the month were $296.00, how many bird houses did she make? 60. Stella runs a business out of her home making curtains. Each month she has fixed costs of $175. In addition, for each curtain she makes, she incurs an additional cost of $2.75. If her total costs for the month were $274.00, how many curtains did she make? 61. A stationery store has staplers on sale for $1.50 apiece. A business purchases an unknown number of these and the total cost of their purchase is $36.00. How many were purchased? 62. A stationery store has CD packs on sale for $2.50 apiece. A business purchases an unknown number of these and the total cost of their purchase is $40.00. How many were purchased? 63. Julie runs a business out of her home making table cloths. Each month she has fixed costs of $100. In addition, for each table cloth she makes, she incurs an additional cost of $2.75. If her total costs for the month were $221.00, how many table cloths did she make? 64. Stella runs a business out of her home making quilts. Each month she has fixed costs of $200. In addition, for each quilt she makes, she incurs an additional cost of $1.75. If her total costs for the month were $280.50, how many quilts did she make? 65. Marta has 60 feet of decorative fencing which she will use for the border of a small circular garden. Find the diameter of the circular garden, correct to the nearest hundredth of a foot. Use π ≈ 3.14. 66. Trinity has 44 feet of decorative fencing which she will use for the border of a small circular garden. Find the diameter of the circular garden, correct to the nearest hundredth of a foot. Use π ≈ 3.14. 67. Children’s tickets to the ice capades go on sale for $4.25. The YMCA of Sacramento has $1,000 set aside to purchase these tickets. Approximately how many tickets can the YMCA of Sacramento purchase? 68. Children’s tickets to the ice capades go on sale for $5. The Knights of Columbus has $1,200 set aside to purchase these tickets. Approximately how many tickets can the Knights of Columbus purchase? 69. A stationery store has mechanical pencils on sale for $2.25 apiece. A business purchases an unknown number of these and the total cost of their purchase is $65.25. How many were purchased? 70. A stationery store has engineering templates on sale for $2.50 apiece. A business purchases an unknown number of these and the total cost of their purchase is $60.00. How many were purchased? 71. Marta has 61 feet of decorative fencing which she will use for the border of a small circular garden. Find the diameter of the circular garden, correct to the nearest hundredth of a foot. Use π ≈ 3.14. 72. Kathy has 86 feet of decorative fencing which she will use for the border of a small circular garden. Find the diameter of the circular garden, correct to the nearest hundredth of a foot. Use π ≈ 3.14. 73. 
Kathy needs to create a rectangular garden plot covering 100 square meters (100 m²). If the width of the plot is 7.5 meters, find the length of the plot correct to the nearest tenth of a meter. 74. Marianne needs to create a rectangular garden plot covering 223 square meters (223 m²). If the width of the plot is 8.3 meters, find the length of the plot correct to the nearest tenth of a meter. 75. Children’s tickets to the stock car races go on sale for $4.50. The Boys and Girls club of Eureka has $1,300 set aside to purchase these tickets. Approximately how many tickets can the Boys and Girls club of Eureka purchase? 76. Children’s tickets to the movies go on sale for $4.75. The Lions club of Alameda has $800 set aside to purchase these tickets. Approximately how many tickets can the Lions club of Alameda purchase? 77. Ashley needs to create a rectangular garden plot covering 115 square meters (115 m²). If the width of the plot is 6.8 meters, find the length of the plot correct to the nearest tenth of a meter. 78. Molly needs to create a rectangular garden plot covering 268 square meters (268 m²). If the width of the plot is 6.1 meters, find the length of the plot correct to the nearest tenth of a meter. 79. Crude Inventory. US commercial crude oil inventories decreased by 3.8 million barrels in the week ending June 19. If there were 353.9 million barrels the following week, what were crude oil inventories before the decline? rttnews.com 06/24/09 80. Undocumented. In 2008, California had 2.7 million undocumented residents. This is double the number in 1990. How many undocumented residents were in California in 1990? Associated Press Times-Standard 4/15/09 81. Diamonds Shining. The index of refraction n indicates the number of times slower that a light wave travels in a particular medium than it travels in a vacuum. A diamond has an index of refraction of 2.4. This is about one and one-quarter times greater than the index of refraction of a zircon. What is the index of refraction of a zircon? Round your result to the nearest tenth. Answers to odd-numbered exercises: 1. 1.75 3. 1.3 5. 13 7. 339.2 9. 8 11. 1.08 13. −11.25 15. 3.4 17. 5 19. −2.17 21. 16.8 23. 13 25. 20.14 27. 0.4 29. −4 31. 3.8 33. −0.82 35. 43.1 37. 13.56 39. 2.5 41. −26.5 43. 45.75 45. −11.54 47. −8.6 49. 2.8 51. 3.82 53. −2.59 55. −2.9 57. −3 59. 32 61. 24 63. 44 65. 19.11 feet 67. 235 tickets 69. 29 71. 19.43 feet 73. 13.3 meters 75. 288 tickets 77. 16.9 meters 79. 357.7 million barrels 81. 1.9
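Every equation in this section is solved with the same three moves: combine like terms, undo any addition or subtraction, then undo the multiplication, rounding only at the final step. A short script along the following lines can be used to spot-check answers to the exercises; it is only a sketch, and the function name, argument order, and sample calls are our own choices rather than anything prescribed by the text. It handles any equation that can be written in the form ax + b = cx + d.

```python
# A small checker for linear equations with decimal coefficients,
# written in the form a*x + b = c*x + d.  Illustrative sketch only;
# the function name and the sample calls are ours, not the text's.

def solve_linear(a, b, c, d, digits=None):
    """Return x satisfying a*x + b = c*x + d, optionally rounded."""
    # Combine like terms: (a - c)*x = d - b, so x = (d - b) / (a - c).
    x = (d - b) / (a - c)
    return round(x, digits) if digits is not None else x

# Example 6:  4.2 - 3.1x + 2x = -7.02  combines to  -1.1x + 4.2 = -7.02.
print(solve_linear(-1.1, 4.2, 0, -7.02, 1))   # 10.2
# Rounding example:  3.1x + 4.6 = 2.5 - 2.2x, to the nearest tenth.
print(solve_linear(3.1, 4.6, -2.2, 2.5, 1))   # -0.4
```

For instance, exercise 17, $$6.24x − 5.2=5.2x$$, corresponds to solve_linear(6.24, -5.2, 5.2, 0, 2) and returns 5.0, matching the answer key.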
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9721999764442444, "perplexity": 1952.6810389015984}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178363809.24/warc/CC-MAIN-20210302095427-20210302125427-00030.warc.gz"}
http://mathhelpforum.com/algebra/222336-reverse-ratios-percentages.html
# Thread: Reverse ratios and percentages.... 1. ## Reverse ratios and percentages.... The teachers were right.... we will need this one day!!! I didn't do so well in math class. Thank you for forums like this! I am working with dilutions of a concentrate. (I will give both gallons and lbs.) We have a diluted drum of 55 gallons/450lbs. Of that, 15 Gal/122.715 lbs is the concentrate. 40 Gal/327.285 lbs is the water. I think that is a 3:8 dilution ratio. When performing a solids/activity test on this diluted product, we find the finished drum has 11.65% solids/activity. We do not have the original concentrate and need to determine what the solids/activity of the original concentrate is. (I hope someone will tell me it is as simple as multiplying 11.56 by 3) On the flip side...... We often have a concentrate for which we know the % of solids/activity ie. 49%. Then I want to plug into a formula (which I don't have) to determine the resulting solids at a known ratio ie. 1:75, or 1:120 etc.... Again I hope I'm making this too complicated and there is some very simple formula that I missed that day I skipped class to go swimming! humbly in appreciation. 2. ## Re: Reverse ratios and percentages.... Hey lisach. Percentages are basically some ratio times 100%. So if your ratio is say 1/10 = 0.1 then the percentage is 1/10 * 100% = 0.1 * 100% = 10%. With regards to the dilution you calculate 15/40 = 3/8 just as you have done so yourself. Remember it's just a ratio of one thing relative to another (in this case, concentrate relative to water). To convert that to a percentage we multiply that ratio by 100% which gives us 3/8 *100 = 300/8 = 75/2 = 37.5% 3. ## Re: Reverse ratios and percentages.... Ok... Well that is the first step. I get so lost here ..... Yes the ratio of the final diluted drum is 3:8 and the percentage of concentrate relative to water is 37.5%, that is part of what I need but not all of it. Let me go here for a minute....On the reverse side. • A concentrated sugar solution that is 49% sugar and 51% water has 49% activity/solids in it. • I dilute it with water at 1:75 ratio to make 450lbs/55 Gal. The drum then has 76 parts. 55 Gal=7040 oz---- 7040 oz/76=92.63 oz---- 1 part = 92.63 oz. • If 1 part is the original concentrate, and it is 49% active then 92.63oz*49%=45.389 oz. So if the sugar was all condensed in a little packet sitting in the bottom of the drum it would be 45.389 oz in 7040 oz. • 45.389/7040= 0.645% • Therefore, a concentrate with 49% activity diluted at a 1:75 ratio will have an activity of .645% as a final dilution activity. Please correct me at this point if I am wrong.... If I was using a formula to get this instead of drawing out visual representations, I would be able to plug in the known factors and figure the reverse. a) activity level of concentrate 49% b) dilution rate 1:75 c) volume of diluted product 55 Gal d) activity level of diluted product .645% A final diluted drum of 55 Gal that was diluted at 3:8 ratio has an activity of 11.65%. How do we determine what the percentage activity of the original concentrate was? (like a deer in headlights here.....) 4. ## Re: Reverse ratios and percentages.... drum contains 450 lbs of diluted concentrate assay of drum= 11.56% solids solids in drum=0.1156 * 450 = 52 lbs solids in concentrate =52/122.7=0.424 fraction 42.4% 5. ## Re: Reverse ratios and percentages.... Ok... That looks very good..... Now to expand on that further...... 
a=activity of concentrate c=volume of diluted product b=dilution rate d=activity of diluted product Is there a quick way to determine the following? 1. Unknown activity level of final diluted product X a=49% b=1:75 c=450lbs d=x 1. Unknown dilution rate X a=49% b=X c=450lbs d=12% 6. ## Re: Reverse ratios and percentages.... I assumed that activity was % solids.If you need to make dilutions based on solids content or some other concentration unit I can help but you must clearly define what you want. 7. ## Re: Reverse ratios and percentages.... yes activity is the same thing as % solids I'm not sure what else to clarify 8. ## Re: Reverse ratios and percentages.... given concentrate 49% solids specific gravity not given.I will assume 1.5 dilution rate by volume 1:75 1gal conc = 8.35* 1.5 * 0.49 = 6.14 lb solids 75 gal water = 8.35 * 75=626.4 lbs water Total wt of mix =632.4 lbs % solids = 6.14 / 632.4 *100=0.97% by wt
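For anyone landing on this thread with the same question, the arithmetic in posts 4 and 8 can be bundled into a small script. It merely restates the calculations already shown above (solids in the drum = assay × drum weight; % solids of a mix = weight of solids ÷ total weight); the function names are our own, and the specific gravity of 1.5 is the same assumption made in post 8.

```python
# Sketch of the two calculations worked out in posts 4 and 8.
# Function names are ours; the 1.5 specific gravity is post 8's assumption.

def concentrate_assay(drum_assay_pct, drum_lbs, concentrate_lbs):
    """Post 4: % solids of the original concentrate from the diluted drum."""
    solids_lbs = drum_assay_pct / 100 * drum_lbs
    return solids_lbs / concentrate_lbs * 100

def diluted_assay(conc_assay_pct, parts_water, sg_conc=1.5, water_lbs_per_gal=8.35):
    """Post 8: % solids by weight after diluting 1 gal of concentrate with
    parts_water gallons of water (total weight approximated, as in post 8,
    by water weight plus solids weight)."""
    solids_lbs = water_lbs_per_gal * sg_conc * conc_assay_pct / 100
    total_lbs = water_lbs_per_gal * parts_water + solids_lbs
    return solids_lbs / total_lbs * 100

print(concentrate_assay(11.56, 450, 122.7))  # about 42.4 % solids
print(diluted_assay(49, 75))                 # about 0.97 % solids
```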
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9198095798492432, "perplexity": 3925.453099517532}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988719027.25/warc/CC-MAIN-20161020183839-00555-ip-10-171-6-4.ec2.internal.warc.gz"}
https://www.nature.com/articles/s42005-019-0168-y?utm_source=other&utm_medium=other&utm_content=null&utm_campaign=JRPC_2_SC_NatureKorea_Cphys_Sep&error=cookies_not_supported&code=1039bc25-8c5c-4b59-9f94-ea33a7020ab2
## Introduction Since McCarthy and Parks1 found radiation-dose enhancements inside thunderclouds with an airborne detector in 1980s, high-energy phenomena associated with thunderstorms have been detected inside the Earth’s atmosphere and from space. Terrestrial gamma-ray flashes (TGFs) are burst-like emission with their photon energy extending up to 20 MeV that last for several hundred microseconds, coincident with lightning discharges. They were first detected from space by Compton Gamma-Ray Observatory2, and since then have been reported by many other satellites3,4,5,6,7,8. Similar phenomena but going downward have been found in recent years at ground level9,10,11,12,13,14,15,16,17. They, now called “downward TGFs”, share several features with TGFs observed from space, such as coincidence with lightning, sub-millisecond durations, and energy spectra extending to >10 MeV. Downward TGFs that contains enough photons above 10 MeV have been experimentally shown to trigger atmospheric photonuclear reactions, namely producing neutrons and positron-emitting radioactive nuclei13,14. These photoneutrons can be observed as a short-duration gamma-ray burst lasting for several hundreds of milliseconds, as they are absorbed by atmospheric nuclei via neutron-capture processes14,18. Gamma-ray glows, also referred to as long bursts19 or thunderstorm ground enhancements20, are energetic radiation from thunderclouds with energies up to tens of MeVs, lasting for a few seconds to several minutes. They have been observed by airborne detectors1,21,22,23, at mountain-top20,24,25,26,27,28,29 and sea-level observation sites19,30,31,32,33. Gamma-ray glows usually coincide with passage of thunderclouds, and sometimes cease at the moment when lightning discharges take place1,21,22,23,34,35,36,37,38. Although TGFs and gamma-ray glows are distinguished clearly by duration, brightness, and timing with regard to lightning discharges, both of them are thought to originate from a common fundamental mechanism, called relativistic runaway electron avalanches (RREAs39,40). According to Wilson’s hypothesis41, seed electrons (provided by, e.g., cosmic rays) can be accelerated up to an energy of tens of MeVs in strong electric fields, producing secondary electrons. The number of multiplied and accelerated electrons exponentially increases, and the accelerated electrons finally emit bremsstrahlung gamma rays as they interact with ambient atmospheric nuclei. Dwyer42 proposed additional electron-seeding processes by positrons and backscattered gamma rays into the RREA mechanism, called “relativistic feedback model”. This model can achieve a higher multiplication factor than that of a RREA alone, and thus are thought to explain extraordinarily high brightness of TGFs. Despite an increasing number of respective observation samples of TGFs and gamma-ray glows, connections between them remain poorly understood. This is primarily because there has been no report of simultaneous detection of both, except for a very recent short report on a marginal detection17. In this paper, we report the first unequivocal simultaneous detection of them at sea level and discuss its implications. ## Results ### Observation of high-energy phenomena in winter thunderstorms The Gamma-ray Observation of Winter Thunderclouds (GROWTH) collaboration31,32,35,43 has been engaged with a multi-point observation campaign of atmospheric high-energy phenomena in coastal areas of Japan Sea14,44. 
Winter thunderstorms in Japan are ideal targets to observe this type of phenomena due to their unique characteristics; most notably typical altitude of clouds is significantly lower than ordinary38,45,46, which makes sea-level observations of gamma-ray glows viable. We have developed portable radiation detectors dedicated to the multi-point observation. They have a 25 cm × 8 cm × 2.5 cm Bi4Ge3O12 (BGO) scintillation crystal coupled with two photo-multiplier tubes (PMTs; HAMAMATSU R1924A). Outputs from the PMTs are amplified, and then read out by a 50 MHz digitiser onboard a data acquisition system. The data acquisition system stores 20-μs waveforms of the amplified analogue outputs once a pulse is detected, and extracts the maximum and minimum value as well as the timing of the pulse (see also Detector calibration). The maximum value corresponds to the energies of the pulse, and the minimum the analogue baseline voltage. The data acquisition system also records counts of discarded photon events due to buffer overflow, which are used for dead-time correction. Three detectors were deployed at three observation sites in Kanazawa City, the capital of Ishikawa Prefecture, by the Japan Sea coast (Fig. 1) and have been operated since October 2016. Lightning discharges were monitored by a broadband low-frequency (LF: 0.8–500 kHz) lightning mapping network (hereafter LF network), for which detectors are installed along Toyama Bay and in Noto peninsula. Another receiver in the extreme-low-frequency band (ELF: 1–100 Hz) is installed at Kuju, as summarised in the section Radio observations. We also utilise lightning location data of Japanese Lightning Detection Network (JLDN) operated by Franklin Japan Co., Ltd. ### Detection of gamma-ray glow and downward TGF On 9 January 2018, two of our detectors shown in Fig. 1 recorded gamma-ray glows. Figure 2a, b shows long-term count-rate histories of detectors A and B, respectively. At around 17:54 in coordinated universal time (UTC), detector A at Kanazawa Izumigaoka High School (36.538°N, 136.649°E) recorded a radiation increase for ~60 s. Then, ~30 s later, detector B at Kanazawa University High School (36.539°N, 136.664°E, 1.3 km east from detector A) also recorded a gamma-ray glow. No radiation enhancements were observed by detector C at Kanazawa University Kakuma Campus (36.546°N, 136.709°E; 4 km from detector B) in the period. The glow then suddenly terminated, coincident with a lightning discharge, while it was still being observed by detector B. An snapshot image of the X-band radar network at 17:55 shows a heavy precipitation area, corresponding to a thundercloud, located between detectors A and B (Fig. 1a). The radar data suggest that the thundercloud passed over the two detectors towards east-northeast with a speed of 19.3 ± 1.4 m s−1 (see Wind estimation with X-band radar). Since the temporal separation between the glow detection by the two detectors is consistent with the time for the thundercloud to travel the distance between the two detectors, we consider that the gamma-ray glows recorded by the two detectors are from the same cloud and hence of the same origin. At the same time as the glow termination and the lightning discharge, both detectors A and B recorded a short-duration radiation burst lasting for ~200 ms simultaneously. The count-rate profiles of the 200-ms-lasting short burst shown in Fig. 2c, d exhibit a steep rise and decay with time constants of 52.0 ± 4.9 and 59.2 ± 1.7 ms for detectors A and B, respectively. 
Combining the timing analysis with spectral analysis (see Gamma-ray emission originating from neutrons), the short burst is found to originate from neutron captures by atmospheric nitrogen nuclei, which Rutjes et al.18 predicted as “TGF afterglow”, and Enoto et al.14 observationally demonstrated. In addition, detector B recorded a faint annihilation emission at 511 keV for 10 s after the short burst (see Positron production by beta-plus decay). These features imply that atmospheric photonuclear reactions such as 14N + γ→ 13N + n and 16O + γ→ 15O + n took place coincident with the lightning discharge, as discussed in Bowers et al.13 and Enoto et al.14. Figure 3a, b shows the maximum and minimum waveform values of photon events during the short burst recorded by detectors A and B, respectively. At the very beginning of the short burst, both detectors A and B recorded saturated pulses (the maximum values exceeding >4 V), and then significant negative values of the baseline (the minimum values) called “undershoot” for ~10 ms. Although detector B failed to acquire the main part of the undershoot due to buffer overflow in the data acquisition system, it recorded the saturated pulses and the last part of the undershoot. As demonstrated in Methods: Initial flash of Enoto et al.14, this feature manifests the existence of an extremely large energy deposit (much more than hundreds of MeVs) in the scintillation crystal within a few milliseconds, which is a clear sign of a downward TGF. In the following analysis we employ an elapsed time t from the onset of the downward TGF at 17:54:50.308892 UTC, recorded by detector B. The LF network recorded a consecutive series of waveforms of the lightning discharge lasting for ~400 ms (Fig. 3c). The downward TGF coincided with a large-amplitude pulse at the initial phase of the lightning discharge within 10 μs (Fig. 3d). We detected four or so precursory pulses shortly before the large-amplitude pulse. No pulses had been detected before the precursory pulses by the LF network. The ELF measurement also confirmed that the associated ELF pulse was coming from the LF source. In addition, JLDN also reported a negative intracloud/intercloud (IC) discharge of −197 kA at t = −13 μs, which is temporally associated with the large-amplitude pulse. Figure 1b shows the source positions of the large-amplitude and precursory pulses determined by the LF network. At the beginning, the small precursory pulses took place in a southwest region less than 3 km away from detector B. Then, the main large-amplitude pulse (the fifth one in Figs. 1b and 3c) occurred 0.6 km southwest of detector B at t = −5.5 μs. JLDN also located the large-amplitude pulse within 0.9 km from detector B. These temporal and spatial correlations lead us to conclude that the large-amplitude LF pulse is associated with the downward TGF. ### Production mechanism of gamma-ray glow The multi-point observation enables us to investigate characteristics of the gamma-ray glow preceding the lightning initiation and the downward TGF. First, we perform spectral analysis. Figure 4 shows the background-subtracted gamma-ray energy spectra, extracted from −69 s < t < −39 s and −30 s < t < −10 s for detectors A and B, respectively. The detector response function is calculated with the GEANT4 Monte Carlo simulation framework47, and is convolved with a model spectrum in spectral fitting using the XSPEC package48. 
The observed spectra, of which instrumental responses are corrected, are found to be well explained by an empirical power-law function with an exponential cutoff, ε−Γexp[(−ε/εcut)α], where ε, Γ, εcut, and α are the photon energy (MeV), power-law photon index, cutoff energy (MeV), and cutoff index, respectively. The best-fitting parameters are $${\mathrm{\Gamma }} = 0.90_{ - 0.08}^{ + 0.06}$$ and $$1.02_{ - 0.05}^{ + 0.04}$$, $$\varepsilon _{{\mathrm{cut}}} = 6.4_{ - 1.1}^{ + 1.0}$$ and $$8.5_{ - 0.9}^{ + 0.8}\,{\mathrm{MeV}}$$, $$\alpha = 1.21_{ - 0.14}^{ + 0.15}$$ and $$1.43_{ - 0.14}^{ + 0.15}$$, and the 0.4–20.0 MeV incident gamma-ray flux of $$1.5_{ - 0.5}^{ + 0.7} \times 10^{ - 5}$$ and $$2.4_{ - 0.6}^{ + 0.7} \times 10^{ - 5}$$ ergs cm−2 s−1 on average over −69 s < t < −39 s and −30 s < t < −10 s integration periods for detectors A and B, respectively. Here and after, all the errors are statistical at 1σ confidence level, unless otherwise mentioned. We then perform another set of Monte Carlo simulations, using GEANT4, and compare the obtained energy spectra and count-rate histories with the simulated ones to investigate atmospheric interactions and propagation of electrons and gamma rays (see Simulation of gamma-ray glow). We find a model of spatial and energy spectral distribution for avalanche electrons in the RREA region which can reproduce both the obtained gamma-ray spectra and count-rate histories, and summarise the results in Figs. 1 and 4. The best-fit value of the RREA terminus altitude hbase is 400 m, which means the electron avalanche took place in the lower part of the winter thundercloud, and the offsets from the centre of the RREA region are 540 and 80 m for detectors A and B, respectively. The electron flux distribution is consistent with being proportional to a function of a distance from the RREA centre r, exp(−r/150 m), providing the circularly symmetric distribution. Figure 1b shows the centre position of the RREA region at the moment of the termination. Normalising the simulation result, we estimate the total production rate of 1–50 MeV avalanche electrons to be 3.66 × 1012 electrons s−1. The electron flux F(r, ε) at the terminus of RREA is also estimated to be a function of r and ε $${F(r,\varepsilon ) = 4.1 \,\times 10^2\,{\mathrm{exp}}\left( { - \frac{r}{{{\mathrm{1}}50\,{\mathrm{m}}}}} \right){\mathrm{exp}}\left( { - \frac{\varepsilon }{{{\mathrm{7}}.3\,{\mathrm{MeV}}}}} \right) {\mathrm{electrons}}\,{\mathrm{cm}}^{ - 2}\,{\mathrm{s}}^{ - 1}\,{\mathrm{MeV}}^{ - 1}.}$$ (1) This model reproduces the observed count-rate histories and spectra, except the increase in the count rate of detector B during −5 s < t < 0 s. This period is discussed in the section Abrupt increase in count rates of gamma-ray glow before downward TGF. Let us consider the electron multiplication factor M = FRREA/Fseed, where FRREA and Fseed are the average electron flux at the RREA terminus and seed electron flux, respectively. Integrating Eq. (1) yields the 0.3–50 MeV average flux within r = 150 m of FRREA = 7.5 × 102 electrons cm−2 s−1. Assuming that the seed electrons are mainly produced by cosmic rays, the 0.3–50 MeV seed electron flux is a function of a vertical acceleration length L and hbase given by $$F_{{\mathrm{seed}}}(L) = 2.56 \,\times 10^{ - 3}{\mathrm{exp}}\left[ {(L + h_{{\mathrm{base}}})/1890\,{\mathrm{m}}} \right]{\mathrm{electrons}}\,{\mathrm{cm}}^{ - 2}\,{\mathrm{s}}^{ - 1}$$ (2) (see Seed electrons). 
The multiplication factor M is thus a function of L, with the fixed hbase (400 m). In the RREA region, electron flux is known to increase39 exponentially as a function of L, FRREA = Fseed exp(L/λ), assuming that change of the vertical atmospheric pressure is negligible for the RREA processes at the low altitude. The avalanche length λ is empirically determined (see ref. 49 and references therein) to be λ = 7.3 MeV/(eE − 0.276 MeV m−1), where eE is a product of the elementary charge and strength of the electric field. The value of λ is then calculated to be 304, 99, and 59 m for E = 0.3, 0.35, and 0.4 MV m−1, respectively. We note that the set of the trial values of E up to 0.4 MV m−1 we have assumed is suggested to be plausible inside thunderclouds39. Therefore, combining M(L) = FRREA/Fseed(L) = exp(L/λ), L and M are derived to be L = 3240, 1160, and 710 m, M = 4.3 × 10⁴, 1.3 × 10⁵, and 1.6 × 10⁵ for E = 0.3, 0.35, and 0.4 MV m−1, respectively. As Dwyer50 pointed out, the multiplication factor would not exceed ~10⁵ in the RREA-only case because thunderclouds cannot maintain an acceleration length required for it. Given that L can reach twice as high as the typical diameter of the RREA region50, L < 600 m is required in this case, where the typical radius r = 150 m is employed. The 0.3 MV m−1 case is not plausible because the required acceleration length L = 3240 m cannot be maintained inside the thundercloud. In the other cases, it is necessary to take into account the relativistic feedback processes to explain the estimated avalanche multiplication factor. The relativistic feedback processes are parameterised with a feedback factor γ, the fraction of the seed electrons provided by the steady-state relativistic feedback processes50. The flux of runaway electrons is then modified as FRREA = Fseed(L) exp(L/λ)/(1 − γ). Figure 5 shows this relation between L and γ to explain the observed flux at the RREA terminus. To satisfy the condition L < 600 m, γ should be larger than 0.998 and 0.846 for 0.35 and 0.4 MV m−1, respectively. This suggests that the number of feedback-origin seed electrons is higher than that of cosmic-ray seed electrons by a factor of >5.5 for our event. ### Abrupt increase in count rates of gamma-ray glow before downward TGF The count-rate history of detector B exhibited an additional increase during −5 s < t < 0 s (Fig. 6a). Figure 6b shows the ratio of the simulated model to the observed history. Although the observed history is well reproduced by the simulation up to t = −5 s, the observed count rate is twice as high as the simulation in −5 s < t < 0 s. Figure 6c shows the three energy spectra extracted from the time regions of −10 s < t < −5 s, −5 s < t < −2 s, and −2 s < t < 0 s. All the spectra show a power-law function with an exponential cutoff, indicating that bremsstrahlung is still the main process of gamma-ray production. Since our simulations fail to reproduce this increase in count-rate, we speculate that the increase was caused by a fluctuation of the intrinsic electron fluxes, rather than by the movement of the RREA region with the ambient wind flow. Based on the working hypothesis of the speculated increase of the accelerated electron flux, at least one of the following is required to have taken place: (1) stronger electric fields of the RREA region, (2) longer acceleration length, and/or (3) increase in the feedback factor γ. 
However, since lightning did not occur during this period (−5 s < t < 0 s), atmospheric mechanism could not drastically change the meteorological conditions, such as electric fields and acceleration length, within 5 s. We thus conjecture that temporal variations of the relativistic feedback processes played an important role for the electron flux increase, then the abrupt rise of gamma rays in the 5-s period before the lightning discharge. Assuming the electric field of 0.4 MV m−1, the doubled rate of avalanche electrons can be explained by increasing γ from 0.846 to 0.923. The RREA and relativistic feedback processes remained stable until t = −5 s; this state corresponds to the “steady state” of relativistic feedback as defined by Dwyer50, namely γ < 1. In general, when γ exceeds 1, an electron flux would spontaneously increase, and an RREA region should collapse. The timescale of the flux increase depends on the types of the relativistic feedback processes. The feedback process by positrons can discharge RREA regions within microseconds50. This timescale is close to that of TGFs, and is much shorter than that of the observed abrupt increase (i.e. 5 s). Alternatively, the feedback by backscattered X-rays may trigger a second-order discharge in RREA regions50. At present, even though the 5-s abrupt flux rise seems to be of great importance, its origin is yet to be understood. ## Discussion To conclude the relation between the gamma-ray glow and the downward TGF, verifying their temporal and positional coincidence will give a strong clue. Our observation cannot clarify whether the glow termination or the downward TGF took place first because these phenomena seemed to be slightly overlapped. On the other hand, the positional coincidence of the gamma-ray glow and the downward TGF in the present case is precisely determined owing to the multiple gamma-ray detectors and the LF network. The discussion in the section Production mechanism of gamma-ray glow suggests that the gamma-ray glow ceased when the source cloud was moving 130 m southwest of detector B (Fig. 1b). Also, the TGF-associated LF pulse was located within 0.5 km from detector B. Therefore, it is clear that the two phenomena are physically related to one another. Our interpretation of the observed gamma-ray glow suggests that the electron acceleration site should have electric fields of 0.35 MV m−1 or higher in order to achieve the high electron multiplication factor of >10⁵ with a plausible acceleration length. In such highly electrified regions, TGFs are thought to initiate more easily than in other less-electrified regions as Smith et al.17 suggested. From another point of view, we speculate that the avalanche electrons of the gamma-ray glow can behave as seed electrons of the downward TGF. At the point where the TGF-associated LF pulse was located (point 5 in Fig. 1b), the 0.3–50 MeV electron flux at 400 m altitude is estimated to be 1.7 × 10² electrons cm−2 s−1. By comparing this flux with that of the cosmic-ray-induced seed electrons (the canonical seed electron source), it is suggested that the highly-electrified region responsible for the gamma-ray glow can be the dominant source of seed electrons for the TGF which occurs in the close proximity of the gamma-ray glow. 
In addition, the abrupt count-rate increase monitored by detector B before the TGF (see section Abrupt increase in count rates of gamma-ray glow before downward TGF) suggests additional production of avalanche electrons for the gamma-ray glow, and might have predicted drastic changes in the electrified region such as the lightning discharge and the TGF. In the present high-energy event, the discussion above suggests a possibility that the high electron current in the gamma-ray glow assisted the initiation of the downward TGF. However, it still remains observationally unclear how gamma-ray glows and TGFs are related with each other in general. Among an increasing sample of glow terminations, TGF-associated events are still quite rare, i.e. only Smith et al.17 and the present event. For example, a termination event during a winter thunderstorm in 2017 (ref. 38) was associated with an intracloud/intercloud discharge but not related with any signals for TGF-like emissions. As another example, a TGF-like intensive emission associated with photonuclear reactions was reported14, where no gamma-ray glows were recorded before the event. In these cases, we lack sufficient evidences due to our present sparse observation sites on the ground to conclude that glow terminations are not always associated with TGFs. Our future gamma-ray monitoring network combined with radio-frequency lightning mapping systems will give a clue to reveal the relation between TGFs and gamma-ray glows. In summary, we detected a gamma-ray glow, terminated with a downward TGF which triggered atmospheric photonuclear reactions. The gamma-ray glow was so bright that the relativistic feedback processes are required. Although we cannot determine whether the glow termination or the downward TGF occurred first, the two high-energy phenomena in the atmosphere took place in an identical electrified region of a winter thundercloud, and hence are clearly related to each other in the present case. ## Methods ### Detector calibration Energy calibration of the detectors was performed to convert the maximum value of a pulse into photon energy. We measured the centre of environmental background lines of 40K (1.46 MeV) and 208Tl (2.61 MeV), and built a linear calibration function which is utilised to assign the energy of each photon. All the detectors record 0.4–20.0 MeV gamma rays. See also Instrumental calibration in Enoto et al.14 for details. Absolute timing is conditioned by pulse-per-second signals of the Global Positioning System (GPS). The timing-assignment logic employed from 2017 to 2018 winter provides absolute timing accuracy of each photon better than 1 μs. However, detector A failed to receive the GPS signals during the experiment. Instead, we performed the calibration of detector A, using the internal clock time with ~1 s accuracy, and then corrected the absolute timing so that the detection time of the downward TGF matches that with detector B. ### Wind estimation with X-band radar We utilised data of eXtended RAdar Information Network (XRAIN). XRAIN is a polarimetric weather radar network in the X band and has a spatial resolution of 280 m (east–west) × 230 m (north–south) mesh. It records two-dimensional precipitation maps with a 1-min interval. XRAIN also obtains three-dimensional maps of radar echoes and particle types with a 5-min interval by the constant-altitude plan position indicator technique. 
However, the three-dimensional data are not utilised in the present paper because the XRAIN observations have a moderate spatial resolution of altitude (≥1 km), which is insufficient to discuss charge structures in the thundercloud. Wind velocity and direction are estimated by overlaying and shifting precipitation maps at different times. First, 11 maps from 17:50 to 18:00 were extracted in the range of 36.4°N–36.7°N, 136.4°E–136.8°E. We then took a pair of maps with a 5-min interval (six pairs in total), and calculated the sum of precipitation residual at each mesh, given by $$\Sigma _{i,j}(P_{ij}^1 - P_{ij}^2)^2$$, where $$P_{ij}^1$$ and $$P_{ij}^2$$ are precipitation at each mesh on each map, and i and j are mesh indexes. With trial shifting of one map with several steps of the spatial resolution for four directions, we searched for the position which takes the minimum residual sum. The distance and direction for which the cloud moved in 5 min can be estimated from the amount of the map shift at the point of the minimum residual sum. Consequently, the wind direction and velocity at the moment of the glow detection were determined to be west-northwestwards and 19.3 ± 0.9 (systematic) ± 1.1 (statistical) m s−1, respectively. Here, the quoted statistical error was calculated from the standard deviation (1σ) of six pairs. The systematic error was determined by the mesh size and temporal interval of the map pair. The wind velocity with the overall error is then calculated to be 19.3 ± 1.4 m s−1, where the standard error propagation in quadrature between the systematic and statistical errors is assumed to hold. Since the statistical error is smaller than 10% and is comparable with the systematic error, it is reasonable to assume that the wind parameters did not change considerably during the glow observation. ### Gamma-ray emission originating from neutrons Photonuclear reactions such as 14N + γ → 13N + n and 16O + γ → 15O + n expel ~10 MeV neutrons from atmospheric nitrogen and oxygen nuclei51,52,53. The photoneutrons gradually lose their kinetic energy via elastic scatterings, and are eventually captured by atmospheric nuclei such as 14N. In the dominant reaction 14N + n → 15N + γ, 15N nuclei in excited states emit various de-excitation gamma-ray lines up to 10.8 MeV. In addition, de-excitation gamma rays from other nuclei such as Si and Al should be also emitted when photoneutrons were captured by ambient nuclei in soil, buildings, and components of the detectors. These de-excitation gamma rays originating from neutron captures are thought to compose the short burst14,18. The timescale of the short burst is determined by neutron thermalisation13,14,18. A numerical calculation predicts the neutron-capturing rate of exp(−t/τ) for 5 ms < t < 120 ms, where t is the elapsed time from the onset of the TGF and τ ≈ 56 ms is the decay constant14. The count-rate histories of the observed burst have decay constants of 52.0 ± 4.9 and 59.2 ± 1.7 ms for detectors A and B, respectively. These results are consistent with the calculation. Supplementary Fig. 1 shows the energy spectra of the burst with detectors A and B. Enoto et al.14 simulated the de-excitation emission, considering atmospheric scattering of the gamma rays and moderate energy resolution of BGO crystals. The emission model from 15N and ambient nuclei, such as Al and Si, well reproduces the results of both detectors A and B. 
From the spectral and temporal analyses, we confirm that the observed short burst is caused by neutrons produced via atmospheric photonuclear reactions. ### Positron production by beta-plus decay After neutrons are expelled from 14N and 16O, unstable nuclei 13N and 15O start emitting positrons via β+ decay with half-lives of 10 and 2 min, respectively. Positrons immediately annihilate and emit 511 keV annihilation gamma rays. Supplementary Fig. 2a–d shows count-rate histories in the 0.4–0.65 and 0.65–30.0 MeV bands. Whereas detector A recorded no enhancements after the short burst, detector B recorded an afterglow in the 0.4–0.65 MeV band for the period 0 s < t < 10 s. The count rates decreased with a decay constant of 6.0 ± 2.1 s. The background-subtracted photon count in the 0.4–0.65 MeV band for 1 s < t < 10 s is (2.0 ± 0.4) × 102 photons. The background-subtracted energy spectrum is shown in Supplementary Fig. 2e. The centre energy of the line emission is 528 ± 14 keV, which is consistent with 511 keV of the annihilation line within error. These results lead us to conclude that a positron-emitting region filled with 13N and 15O were produced in the atmosphere by the photonuclear reactions, and then passed over detector B flown by the ambient wind flow14. Considering that the count-rate history shows a monotonic decrease, the positron source might be generated somewhere above detector B or downwind. The LF network has five stations (Supplementary Fig. 3a). Each station has a flat plate antenna sensitive to 0.8–500 kHz. Analogue outputs from the antenna are sampled by a 4 MHz digitiser, whose absolute timing is calibrated with the GPS signals. The LF network can locate radio pulses with the time-of-arrival technique. Supplementary Fig. 3b, c shows the entire LF waveforms of the observed lightning discharge. The ELF receiver is installed in Kuju (33.059°N, 131.233°E) as a station of the Global ELF Observation Network operated by Hokkaido University. The station has two horizontal search coil magnetometers sensitive to 1–100 Hz magnetic-field perturbations in the east–west and north–south directions. The analogue output is sampled by a 400 Hz digitiser. The direction-of-arrival of the ELF pulses can be confirmed with the magnetic-detection-finder technique. Supplementary Fig. 3d shows the observed waveform in the ELF band. The JLDN reported two other discharges besides the TGF-associated radio-frequency pulse: an IC of −14 kA at t = 18.7 ms and a CG of −13 kA at t = 228.6 ms. Supplementary Fig. 3b, c shows the corresponding LF pulses. Since these pulses occurred long after the observed TGF, we consider that they were not associated with the high-energy phenomena. ### Simulation of gamma-ray glow We performed Monte Carlo simulations of electron propagation in the atmosphere to reproduce the count-rate histories and energy spectra, using GEANT4 (ref. 47). We assume that electron avalanches towards the ground developed in thundercloud, and that the electron spectrum of the RREA at the end of the region has the shape of exp(−ε/7.3 MeV)49, where ε is the electron energy. We also assume that the distribution of the electron flux in the avalanche region is circularly symmetric and has no intrinsic time fluctuation. These assumption should be reasonable, given that the count-rate history of detector A is symmetric about the peak, and that the wind velocity was approximately constant (see Wind estimation with X-band radar). 
The energy spectra of bremsstrahlung gamma rays from the avalanche electrons approximately follow ε−Γexp(−ε/7.3 MeV)54. The photon index Γ is determined from the source altitude h and offset from the source centre. Count-rate histories depend on the size of the RREA region, wind velocity, and h. The distribution of gamma rays is more diffuse at a higher source altitude due to atmospheric scattering, hence resulting in a longer and fainter gamma-ray glow. First, we tested a disk-like region with a uniform electron flux, varying h and disk radius in our simulations. Supplementary Fig. 4a shows some examples of the simulation results at various altitudes. Comparing the simulation results with the observation, h = 1500 m is required to reproduce the observed count-rate histories, whereas Γ of the energy spectra indicates h = 900 m. Since any other conditions cannot satisfy both the spectra and count-rate histories, this uniformed-disk model is thus rejected in this analysis. Then, we considered two disk-like models in which the spatial distribution of the electron flux follows either of the two functions of a distance from the RREA centre l: a Gaussian model, exp(−l2/2σ2) and an exponential model, exp(−l/L). The parameters σ and L are free parameters, which denote the spatial extent of the surface brightness of the emission. We found that both models can reproduce the obtained count-rate histories and spectra; The estimated parameters are h = 600 m and σ = 200 m for the Gaussian model, and h = 400 m and L = 150 m for the exponential model. Comparing these two best models, we found that the exponential model explains the observation better, particularly for the count-rate histories of detector B (Supplementary Fig. 4b). Therefore, we employ the exponential model as a working hypothesis to interpret the observation. ### Seed electrons We assume that the seed electrons of the RREA processes are mainly produced by cosmic rays. To calculate the electron fluxes of secondary cosmic rays, we employed Excel-based Program for calculating Atmospheric Cosmic-ray Spectrum (EXPACS)55,56, which calculates the flux and spectrum of cosmic-ray particles as a function of an altitude, latitude, longitude, and solar modulation. We extracted electron spectra at an altitude h of 300–2000 m, and then integrated the spectra to obtain the electron fluxes Fseed in the energy range of 0.3–50.0 MeV. The electron flux was found to increase exponentially as a positive function of altitude, given by Fseed = 2.56 × 10−3 × exp(h/1890 m) electrons cm−2 s−1. Carlson et al.57 have considered 1 MeV seed electrons produced by cosmic rays. Kelley et al.22 employed it, and derived the seed flux to be 0.25 cm−2 s−1 at 14.1 km. Our calculation with EXPACS gives the electron flux at 14.1 km of 0.86 cm−2 s−1. Given that Carlson et al. took a more thorough approach than ours by simulating the effective seeding efficiency for various particles, energies, and geometries, our method might have overestimated the seed electron flux. Regardless of the potential errors in our method, our conclusion that the gamma-ray glow requires relativistic feedback is unaffected, because overestimation of the seed flux, even if it was the case, would result in an underestimation of the multiplication factor.
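As an illustrative aside (not part of the original paper), the feedback argument in the Results can be reproduced numerically from quantities quoted in the text alone: FRREA ≈ 7.5 × 10² electrons cm−2 s−1 within r = 150 m, hbase = 400 m, the cosmic-ray seed flux of Eq. (2), and the empirical avalanche length λ = 7.3 MeV/(eE − 0.276 MeV m−1). The sketch below inverts FRREA = Fseed(L) exp(L/λ)/(1 − γ) for γ at the limiting acceleration length L = 600 m; it is our own check, not the authors' analysis code.

```python
# Illustrative numerical check of the feedback-factor limits quoted in the
# Results section; this is our own sketch, not the authors' analysis code.
import math

F_RREA = 7.5e2      # electrons cm^-2 s^-1 at the RREA terminus (r < 150 m)
H_BASE = 400.0      # m, altitude of the RREA terminus

def f_seed(L):
    """Cosmic-ray seed flux of Eq. (2), evaluated at the top of the
    acceleration region (altitude L + H_BASE)."""
    return 2.56e-3 * math.exp((L + H_BASE) / 1890.0)

def avalanche_length(E):
    """Empirical avalanche length in metres for a field E in MV/m."""
    return 7.3 / (E - 0.276)

def feedback_factor(E, L):
    """gamma such that F_seed(L) * exp(L / lambda) / (1 - gamma) = F_RREA."""
    return 1.0 - f_seed(L) * math.exp(L / avalanche_length(E)) / F_RREA

for E in (0.35, 0.40):
    print(E, round(feedback_factor(E, 600.0), 3))
# Gives roughly 0.997 and 0.845, consistent with the 0.998 and 0.846 quoted
# in the text; the small differences come from rounding of the quoted inputs.
```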
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9381633996963501, "perplexity": 1784.2110005914196}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335054.79/warc/CC-MAIN-20220927162620-20220927192620-00476.warc.gz"}
https://www.physicsforums.com/threads/conditions-for-using-stokes-theorem.695228/
Conditions for using Stokes' Theorem 1. Jun 3, 2013 Mandelbroth I'm back with more questions! I'm wondering what conditions must a manifold satisfy to be able to use Stokes' Theorem. I understand that it must be orientable, but does it have to necessarily be smooth? I tried to see if it was possible to prove Cauchy's Residue Theorem and Cauchy's Integral Formula using Stokes' Theorem, but I got stuck with results that don't make sense. Both require that an integrand can be meromorphic, so I'm not sure that Stokes' Theorem will necessarily apply to nonsmooth manifolds. 2. Jun 3, 2013 3. Jun 3, 2013 Mandelbroth I didn't know. :tongue: I figured it out, though. I think it's cool, so I might as well share it here to explain what I was trying to do to see if there's any more information to be had. Let $z\in\mathbb{C}:z=x+iy$ and $f(z)=u(x,y)+iv(x,y)$ be a meromorphic function such that f is undefined for all $z_0\in\mathcal{A}\subseteq D:\mathcal{A}=\{a_1, a_2, \cdots , a_n\}$. Then, what I wanted to say was $$\oint\limits_{\partial D}f(z) \, dz = \iint\limits_D d(f(z) \, dz) = \iint\limits_D \left(\left(\frac{\partial u}{\partial y}+\frac{\partial v}{\partial x}\right)+i\left(\frac{\partial v}{\partial y}-\frac{\partial u}{\partial x}\right)\right)dy\wedge dx$$ which makes the Cauchy Integral Theorem rather trivially evident. However, this is not necessarily the most fun to use to prove the more general Residue Theorem. What I noted instead was that I could make the integral become $$\iint\limits_{D\setminus\mathcal{A}} \left(\left(\frac{\partial u}{\partial y}+\frac{\partial v}{\partial x}\right)+i\left(\frac{\partial v}{\partial y}-\frac{\partial u}{\partial x}\right)\right)dy\wedge dx + \sum_{k=1}^{n}\left[\, \oint\limits_{\beta(a_k)}f(z) \, dz \,\right]$$ where $\beta(a_k)$ traces an infinitesimally small circle around $a_k$. I'm working in a similar way to get the Cauchy Integral Formula. 4. Jun 4, 2013 lavinia Stokes theorem applies to smooth singular chains.These exist on both oriented and non-oriented manifolds. The real and complex parts of a holomorphic differential are closed. This just restates the Cauchy-Riemann equations. Stokes theorem tells you that the integral of the differential must be zero around any piecewise smooth closed curve within its domain of holomorphy. Except for the term of degree minus 1, the terms in a Laurent series when multiplied by dz are all exact forms and so by Stokes Theorem integrate to zero around a circle. So the integral of a meromorphic function around a pole is its residue by Cauchy's theorem. Last edited: Jun 5, 2013 5. Jun 5, 2013 Mandelbroth I understand everything else about your post, but this is new to me. I was aware that Stokes' Theorem had something to do with chains, but I was unaware that you could use that fact to apply it to non-orientable manifolds. But then, I don't understand how that would work, since all of the integration I've done preserved some form of orientation (id est, $\int_{a}^{b}f^\prime(x) \, dx = \int\limits_{[a,b]} f^\prime(x) \, dx = f(b) - f(a)$). Can you please elaborate how you would integrate over a non-orientable manifold using Stokes' Theorem? 6. Jun 5, 2013 WannabeNewton See chapter 4 of Spivak's Calculus on Manifolds. 7. Jun 5, 2013 lavinia i am sorry I was so brief. A smooth singular simplex has two natural orientations. One can always integrate a differential form over an oriented simplex whether it is in an oriented or unoriented manifold. 
A smooth singular chain is a finite formal sum of oriented smooth simplexes. The integral of a differential form over the chain is the sum of its integrals over each oriented simplex in the chain.

When you talk about "integrating over a manifold" you mean expressing the fundamental cycle of the manifold as a smooth singular chain and then integrating over that chain. A non-orientable manifold does not have a fundamental cycle, so there is no smooth singular chain to integrate over.

Last edited: Jun 5, 2013

8. Jun 6, 2013 lavinia

As an afterthought, here is something to think about. Suppose you have a function on a non-orientable manifold. Pull this function back to the orientable 2-fold cover and multiply it by any orientation form you want. Integrate this, then divide by 2.
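As a supplement (not part of the original thread), here is the short computation behind lavinia's remark about the Laurent terms, for a circle of radius $r$ centered at a pole $a_k$ and writing $w = z - a_k$:

$$\oint_{|w|=r} w^n \, dw = \oint_{|w|=r} d\!\left(\frac{w^{n+1}}{n+1}\right) = 0 \quad (n \neq -1), \qquad \oint_{|w|=r} \frac{dw}{w} = \int_0^{2\pi} \frac{i r e^{i\theta}}{r e^{i\theta}}\, d\theta = 2\pi i.$$

Integrating a Laurent series $f(z)=\sum_n c_n (z-a_k)^n$ term by term around the small circle $\beta(a_k)$ therefore leaves only $2\pi i\, c_{-1}$, that is, $2\pi i$ times the residue.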
https://www.physicsforums.com/threads/meaning-of-this-operator.134916/
# Meaning of this operator

1. Oct 5, 2006

From the "Lie Group" theory point of view we know that:

$$p$$ := the generator of translation (if the Lagrangian is invariant under translation then p is conserved)

$$L$$ := the generator of rotation (if the Lagrangian is invariant under rotation then L is conserved)

(I'm referring to momentum p and angular momentum L, although the notation is obvious.)

My question is: if we take the "Lie derivative" and "covariant derivative" as generalizations of the derivative to curved spaces, and if we suppose they are Lie operators, what is their meaning? If the momentum operator acts like this: $$pf(x)\rightarrow \frac{df}{dx}$$ (the derivative of the function), could the same hold for the Lie and covariant derivatives? (The covariant derivative is just a generalization of the gradient, and I think that Lie derivatives can be expressed in some cases as covariant derivatives; in QM the momentum operator applied to the wave function is just the gradient of $$\psi$$
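As an aside (not part of the original thread): for the special case of functions (scalar fields), both generalizations reduce to the ordinary directional derivative, which is exactly how the momentum operator acts. For a vector field $X$,

$$\mathcal{L}_X f = \nabla_X f = X^\mu \partial_\mu f,$$

so for the translation generator $X = \partial_x$ one gets $\mathcal{L}_X f = \partial f/\partial x$, that is (up to the conventional factor $-i\hbar$) the quantum-mechanical momentum operator acting on $\psi$. The Lie derivative and the covariant derivative only start to differ when applied to vectors and higher-rank tensors.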
https://nightingale.becomingcelia.com/category/maths
## POTW – What are the Possibilities? This is a problem originally published on Problem of the Week of the University of Waterloo. The question was about finding the roots of a function. I thought it would be really easy since Desmos seems to be able to solve all functions, however in this interesting question it fails, which is why I am posting […]
https://ptreview.sublinear.info/?p=949
# News for January 2018

And now, for the first papers of 2018! It's a slow start with only four papers (or, technically, three "standard property testing" papers and one non-standard paper).

Adaptive Boolean Monotonicity Testing in Total Influence Time, by Deeparnab Chakrabarty and C. Seshadhri (arXiv ECCC). The problem of testing monotonicity of Boolean functions $$f:\{0,1\}^n \to \{0,1\}$$ has seen a lot of progress recently. After the breakthrough results of Khot-Minzer-Safra giving a $$\widetilde{O}(\sqrt{n})$$ non-adaptive tester, Blais-Belovs proved the first polynomial lower bound for adaptive testers, recently improved to $$\widetilde{\Omega}(n^{1/3})$$ by Chen, Waingarten, and Xie. The burning question: does adaptivity help? This result gives an adaptive tester that runs in time $$O(\mathbf{I}(f))$$, the total influence of $$f$$. Thus, we can beat these lower bounds (and the non-adaptive complexity) for low-influence functions.

Adaptive Lower Bound for Testing Monotonicity on the Line, by Aleksandrs Belovs (arXiv). More monotonicity testing! But this time on functions $$f:[n] \to [r]$$. Classic results in property testing show that monotonicity can be tested in $$O(\varepsilon^{-1}\log n)$$ time. A recent extension of these ideas by Pallavoor-Raskhodnikova-Varma replaces the $$\log n$$ with $$\log r$$, an improvement for small ranges. This paper proves an almost matching lower bound of $$(\log r)/(\log\log r)$$. The main construction can be used to give a substantially simpler proof of an $$\Omega(d\log n)$$ lower bound for monotonicity testing on hypergrids $$f:[n]^d \to \mathbb{N}$$. The primary contribution is giving explicit lower bound constructions and avoiding the Ramsey-theoretical arguments previously used for monotonicity lower bounds.

Earthmover Resilience and Testing in Ordered Structures, by Omri Ben-Eliezer and Eldar Fischer (arXiv). While there has been much progress on understanding the constant-time testability of graphs, the picture is not so clear for ordered structures (such as strings/matrices). There are a number of roadblocks (unlike the graph setting): there are no canonical testers for, say, string properties, there are testable properties that are not tolerantly testable, and Szemeredi-type regular partitions may not exist for such properties. The main contribution of this paper is to find a natural, useful condition on ordered properties such that the above roadblocks disappear, and thus we have strong testability results. The paper introduces the notion of Earthmover Resilient properties (ER). Basically, a graph property is a property of symmetric matrices that is invariant under permutations of the base elements (rows/columns). An ER property is one that is invariant under mild perturbations of the base elements. The natural special cases of ER properties are those over strings and matrices, and the class includes all graph properties as well as the image properties studied in this context. There are a number of characterization results. Most interestingly, for ER properties of images (binary matrices) and edge-colored ordered graphs, the following are equivalent: existence of canonical testers, tolerant testability, and regular reducibility.

Nondeterministic Sublinear Time Has Measure 0 in P, by John Hitchcock and Adewale Sekoni (arXiv). Not your usual property testing paper, but on sublinear (non-deterministic) time nonetheless. Consider the complexity class $$NTIME(n^\delta)$$, for $$\delta < 1$$. This paper shows that this complexity class is a "negligible" fraction of $$P$$.
(The analogous result was known for exponents $$\delta < 1/11$$ by Cai-Sivakumar-Strauss.) This requires a technical concept of measure for languages and complexity classes. While I don't claim to understand the details, the math boils down to understanding the following process. Consider some language $$\mathcal{L}$$ and a martingale betting process that repeatedly tries to guess the membership of strings $$x_1, x_2, \ldots$$ in a well-defined order. If one can define such a betting process that uses only a limited computational resource and still has unbounded gains, then $$\mathcal{L}$$ has measure 0 with respect to that (limited) resource.
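As a side note not taken from the post itself: the $$O(\varepsilon^{-1}\log n)$$ bound for monotonicity on the line mentioned above can be achieved by a binary-search style tester (one classical approach). A minimal Python sketch, assuming the function takes distinct values on $$\{0,\dots,n-1\}$$; the helper names are illustrative:

```python
import random

def search_lands_at(f, n, i):
    """Binary-search for the value f(i) as if f were sorted on [0, n).
    Returns True iff the search ends exactly at index i (distinct values assumed)."""
    v, lo, hi = f(i), 0, n - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if f(mid) == v:
            return mid == i
        if f(mid) < v:
            lo = mid + 1
        else:
            hi = mid - 1
    return False

def test_monotone(f, n, eps=0.1):
    """Accepts every monotone f; rejects functions that are eps-far from
    monotone with constant probability. O(log n / eps) queries in total."""
    return all(search_lands_at(f, n, random.randrange(n))
               for _ in range(int(2 / eps) + 1))

print(test_monotone(lambda i: 3 * i + 1, 10**6))  # monotone: always True
print(test_monotone(lambda i: -i, 10**6))         # decreasing: almost surely False
```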
https://mijn.bsl.nl/toddler-screening-for-autism-spectrum-disorder-a-meta-analysis-o/16386636?fulltextView=true&doi=10.1007%2Fs10803-018-03865-2
08-01-2019 | OriginalPaper | Issue 5/2019 | Open Access

# Toddler Screening for Autism Spectrum Disorder: A Meta-Analysis of Diagnostic Accuracy

Journal: Journal of Autism and Developmental Disorders > Issue 5/2019

Authors: Ana B. Sánchez-García, Purificación Galindo-Villardón, Ana B. Nieto-Librero, Helena Martín-Rodero, Diana L. Robins

## Electronic supplementary material

The online version of this article (https://doi.org/10.1007/s10803-018-03865-2) contains supplementary material, which is available to authorized users.

## Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Population level (level 1) screening for autism spectrum disorder (ASD) has been the subject of numerous papers, particularly since the American Academy of Pediatrics published a policy statement more than a decade ago (Council on Children with Disabilities 2006). The most commonly studied tool is the Modified Checklist for Autism in Toddlers (M-CHAT; Robins et al. 1999), and its revision, the M-CHAT-revised, with follow-up (M-CHAT-R/F; Robins et al. 2009). However, the variety of screening tools for prospective identification of early signs of autism has encouraged the publication of different systematic reviews (Daniels et al. 2014; McPheeters et al. 2016). See Table 1 for the tools included in the current meta-analysis, and references for more information about each tool.

Table 1 Details of sample characteristics and individual outcomes as reported in the included studies

| Study | Screening test(s) | Country | FN strategy^a | FN | FP | TP | TN | N | Total N^b | Female | Male | Not reported | Age (months) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1. Nygren et al. (2012) | M-CHAT | Sweden | No | NA | 3 | 33 | NA | 3.985 | 3.999 | 2.087 | 1.912 | NA | 29.00 |
| 2. Nygren et al. (2012) | JOBS | Sweden | No | NA | 3 | 37 | NA | 3.985 | 3.999 | 2.087 | 1.912 | NA | 29.00 |
| 3. Nygren et al. (2012) | M-CHAT + JOBS | Sweden | No | NA | 5 | 43 | NA | 3.985 | 3.999 | 2.087 | 1.912 | NA | 29.00 |
| 4. Baird et al. (2000) | CHAT | UK | Yes | 74 | 14 | 20 | 16.127 | 16.235 | NA | NA | NA | NA | 18.70 |
| 5. Wiggins et al. (2014) | M-CHAT | USA | Yes | 3 | 17 | 27 | 3.803 | 3.850 | 3.980 | NA | NA | NA | 21.10 |
| 6. Wiggins et al. (2014) | PEDS + PATH | USA | Yes | 2 | 20 | 28 | 2.978 | 3.028 | 3.980 | NA | NA | NA | 21.10 |
| 7. Kamio et al. (2014) | M-CHAT_JV | Japan | Yes | 22 | 24 | 20 | 1.661 | 1.727 | 2.141 | 880 | 847 | NA | 18.70 |
| 8. Stenberg et al. (2014) | M-CHAT | Norway | Yes | 114 | 3.804 | 59 | 48.049 | 52.026 | NA | 25.429 | 26.597 | NA | 18.00 |
| 9. Chlebowski et al. (2013) | M-CHAT/Yale Screener + STAT | USA | Yes | 6 | 79 | 92 | 18.269 | 18.446 | 18.989 | 9.388 | 9.601 | NA | 20.40 |
| 10. Canal-Bedia et al. (2011) | M-CHAT | Spain | Yes | 0 | 25 | 6 | 2.024 | 2.055 | NA | 949 | 1.106 | NA | 21.40 |
| 11. Barbaro and Dissanayake (2010) | SACS | Australia | Yes | 34 | 41 | 174 | 20.521 | 20.770 | NA | 10.177 | 10.593 | NA | 19.27 |
| 12. Inada et al. (2011) | M-CHAT (short version 9, cut-off 1) | Japan | NA | NA | NA | 20 | NA | 1.167 | 1.187 | 571 | 596 | NA | 18.00 |
| 13. Inada et al. (2011) | M-CHAT (full version) | Japan | NA | NA | NA | 20 | NA | 1.167 | 1.187 | 571 | 596 | NA | 18.00 |
| 14. Dereu et al. (2010) | CESDD | Belgium | Yes | 13 | 265 | 28 | 6.502 | 6.808 | NA | 3.255 | 3.553 | NA | 16.70 |
| 15. Miller et al. (2011) | ITC + M-CHAT | USA | Yes | 2 | 17 | 10 | 638 | 667 | 796 | NA | NA | NA | NA |
| 16. Robins et al. (2014) | M-CHAT-R/F | USA | Yes | 18 | 116 | 105 | 15.373 | 15.612 | 16.071 | 7.570 | 7.793 | 249 | 20.95 |
| 17. Honda et al. (2005) | YACHT-18 | Japan | Yes | 16 | NA | 68 | NA | 35.716 | NA | 17.468 | 18.248 | NA | 18.00 |
| 18. Baranek (2015) | M-CHAT | USA | Yes | 3 | 32 | 5 | 534 | 574 | NA | 300 | 268 | 6 | 24.73 |

FN false negative, FP false positive, TP true positive, TN true negative, NA not available from paper, M-CHAT modified checklist for autism in toddlers, JOBS joint attention-observation schedule, CHAT checklist for autism in toddlers, PEDS parents’ evaluation of developmental status, M-CHAT_JV modified checklist for autism in toddlers, Japanese version, STAT screening tool for autism in toddlers and young children, SACS social attention and communication study, CESDD checklist for early signs of developmental disorders, M-CHAT-R/F modified checklist for autism in toddlers, revised, with follow-up, YACHT-18 young autism and other developmental disorders checkup tool

^a FN strategy = methods to identify false negative screening cases, or children with ASD who were missed by the screening tool(s) of interest

^b Total N with missing cases

The U.S. Preventive Services Task Force (USPSTF; Siu and Preventive Services Task Force 2016) concluded that there was insufficient evidence to provide a recommendation regarding universal toddler screening for ASD. At the same time they emphasized the potential of the M-CHAT as a universal screening tool, as evidenced by empirical results (R. Canal-Bedia, personal communication, May 9, 2016). Hence, it is necessary to perform a systematic study of the psychometric data available in different studies. The meta-analysis is an important resource to summarize, in quantitative terms, the accuracy of diagnostic tests, providing a higher level of evidence; for this reason, the current study conducted a meta-analysis to review empirical data from the studies and tools used since the first ASD population screening was performed in England (Baron-Cohen et al. 1996). In this kind of study, the reference test may be imperfect because a gold standard is not available in practice. We have used the Bayesian Hierarchical Model (HSROC; Rutter and Gatsonis 2001) to carry out the meta-analysis. The model is robust in adjusting for the imperfect nature of the reference standard of autism tools, in a bivariate meta-analysis of diagnostic test sensitivity, specificity, and other psychometric parameters. Another bivariate model was proposed by Reitsma et al. (2005), in which it is assumed that the vector (logit(sensitivity), logit(specificity)) follows a bivariate normal distribution. However, Harbord and Whiting (2009) showed that the likelihood functions of the HSROC and bivariate models are algebraically equivalent and yield identical pooled sensitivity and specificity. Dendukuri et al. (2012) have demonstrated the usefulness of the HSROC model when no gold standard test is available. Therefore, in this study, we used a Bayesian meta-analysis, and the main aim was to evaluate the accuracy of the different screening tools. The second objective was to calculate the pooled psychometric properties associated with different studies to evaluate the tools' effectiveness and support their recommendation internationally (R. Canal-Bedia, personal communication, May 9, 2016).

## Methods

The preferred reporting items for systematic reviews and meta-analyses (PRISMA) statement (Moher et al. 2009) has guided this systematic review.

### Criteria for Selection of Studies

Included papers focused on the screening and diagnosis of ASD and other developmental disorders in the general population, also known as level 1 screening.
In cases where studies had duplicated data, only the most complete one was selected in order to avoid an unrealistic increase in the homogeneity between studies, and emphasis was placed on studies validating screening tools, which were often the most complete samples. Therefore, we excluded studies focused on tools that were not designed to screen for ASD, screening studies not applied to the general population (level 1), and all those that did not provide sufficient data to construct a 2 × 2 contingency table of screening × diagnosis (such as those without confirmatory diagnoses), or had a low quality rating in the quality assessment. ### Literature Search A systematic literature search identified studies that reported tools and procedures used for the early detection of ASD. The articles were obtained from CINHAL, ERIC, PsycINFO, PubMed and WOS databases using several combinations of the relevant keywords and Medical Subject Heading (MeSH), which include the categories of terms suggested by Daniels et al. ( 2014). All articles published between January 1992 and April 2015 were considered eligible. Only articles published in the English language and reporting an age range of screening from 14 to 36 months were included. The search strategy for PubMed is described (see Appendix 1). An additional search was conducted for grey literature captured on other search engines such as Google Scholar; we also searched the reference lists of included articles and any relevant review articles identified through the search and the ‘related articles’ function in PubMed. In addition, when searching the grey literature, we took into account the reference lists of primary studies and review papers, and contacted the experts to locate significant but as yet unpublished studies. ### Assessment of Methodological Quality Two reviewers conducted quality assessment of the included studies with the QUADAS-2 Tool (Quality Assessment of Diagnostic Accuracy Studies-2) (Whiting et al. 2004). Any discrepancies were referred to a third reviewer. QUADAS is a validated quality checklist (Deeks 2001; Whiting 2011; Whiting et al. 2006) composed of 14 items which encompass the most important sources of bias and variations observed in diagnostic accuracy studies. The studies were classified according to whether they had low or high risk for bias and their applicability was graded as low or high. ### Data Extraction The following data items were extracted from each study using a data collection form: first author and year of publication; size and characteristics of the study population; raw cell values [true positive ( TP), true negative ( TN), false positive ( FP), false negative ( FN); and psychometric properties, specifically sensitivity ( Se), specificity ( Sp), positive and negative predictive values ( PPV, NPV), positive and negative likelihood ratio values ( LR+; LR−), and diagnostic odds ratio ( DOR)]. See Appendix 2 for definitions of bio-statistical terms. Psychometric properties which were not provided in the studies were calculated based on raw cell values. Clarification was requested from the authors via e-mail when we observed discrepancies between the data reported and the data calculated. Details of the search and results are shown (see Tables  1, 2). Table 2 Details of individual diagnostic outcomes such as studies show Study Se (95% CI) Sp (95% CI) PPV (95% CI) NPV (95% CI) LR+ (95% CI) LR (95% CI) Nygren et al. ( 2012) 0.767 (0.614–0.882) NA NA 0.917 (0.775–0.982) NA NA NA NA NA NA Nygren et al. 
( 2012) 0.860 (0.721–0.947) NA NA 0.925 (0.796–0.984) NA NA NA NA NA NA Nygren et al. ( 2012) 0.956 (0.849–0.995) NA NA 0.896 (0.773–0.965) NA NA NA NA NA NA Baird et al. ( 2000) 0.213 (0.130–0.300) 0.999 (0.999–1.000) 0.588 (0.420–0.750) NA NA NA NA NA NA Wiggins et al. ( 2014) NA NA NA NA NA NA NA NA NA NA NA NA Wiggins et al. ( 2014) NA NA NA NA NA NA NA NA NA NA NA NA Kamio et al. ( 2014) 0.480 (0.330–0.630) 0.990 (0.980–0.990) 0.450 (0.310–0.600) 0.990 (0.980–0.990) NA NA NA NA Stenberg et al. ( 2014) 0.341 (0.271–0.417) 0.927 (0.924–0.929) 0.150 (0.120–0.200) NA NA 4.60 NA NA NA Chlebowski et al. ( 2013) NA NA NA NA 0.538 NA NA NA NA NA NA NA Canal-Bedia et al. ( 2011) 1.000 NA 0.980 (0.980–0.990) 0.190 (0.050–0.330) 1.000 NA NA NA NA NA Barbaro and Dissanayake ( 2010) 0.836 (0.776–0.882) 0.998 (0.998–0.999) 0.807 (0.748–0.856) 0.998 (0.998–0.999) 414.39 (303.93–564.99) 0.17 (0.12–0.22) 0.650 NA 0.885 NA 0.088 NA 0.993 NA NA NA NA NA 0.550 NA 0.961 NA 0.193 NA 0.992 NA NA NA NA NA Dereu et al. ( 2010) 0.680 (0.540–0.830) 0.960 (0.960–0.970) 0,100 (0.060–0.130) 1.000 (0.999–1.00) 17.42 NA 0.33 NA Miller et al. ( 2011) NA NA NA NA NA NA 0.996 NA NA NA NA NA Robins et al. ( 2014) 0.854 NA 0.993 NA 0.475 NA 0.999 NA 114.05 NA 0.15 NA Honda et al. ( 2005) 0.810 NA NA NA NA NA 0.999 NA NA NA NA NA Baranek ( 2015) 0.625 (0.508–0.960) 0.943 NA 0.135 NA 0.994 NA NA NA 0.40 NA Se sensitivity, Sp specificity, PPV positive predictive value, NPV negative predictive value, LR+ positive likelihood ratio, LR− negative likelihood ratio, NA not available from paper ### Data Synthesis and Statistical Analysis We calculated the pooled Se, Sp, LR+, LR−, PPV, NPV and DOR for the included studies. Separate pooling of sensitivity and specificity may lead to biased results because different thresholds were used in different studies (Deeks 2001; Moses et al. 1993). Therefore, we used the Hierarchical Summary Receiver Operating Characteristic Model (HSROC) (Rutter and Gatsonis 2001) to estimate the diagnostic accuracy parameters and to generate a summary receiver operating characteristic curve with HSROC, [an R package available from CRAN (Schiller and Dendukuri 2015)]. The model is robust for including studies with different reference standards and potential negative correlation in paired measures ( Se/ Sp) across studies (Trikalinos et al. 2012). This kind of analysis models the variation in diagnostic accuracy and cut-off values, and identifies sources of heterogeneity, which is a common feature among diagnostic or screening test accuracy reviews. The model has been called a “Hierarchical Model” owing to the fact that it takes into account statistical distributions at two levels. At the first level, within-study variability in sensitivity and specificity is examined. At the second level, between-study variability is examined (Macaskill 2004). The main goal of the model is to estimate an SROC curve across different thresholds. The estimation from the model requires Markov Chain Monte Carlo (MCMC) simulation (Rutter and Gatsonis 2001). To carry out this Bayesian estimation we specified the prior distributions over the set of unknown parameters with a similar assumption made by Higgins et al. ( 2003). This process was used in order to obtain posterior predictions of the Se and Sp. 
According to Harbord and Whiting ( 2009), the true estimate of Se and Sp in each study could be found by empirical Bayes estimates, although we acknowledge that many of the included studies were limited in their ability to confirm that negative cases were in fact true negatives. In order to establish whether there was inconsistency and heterogeneity in the meta-analysis, we summarized the test performance characteristics using a forest plot with the corresponding Higgins I 2 index (Higgins and Thompson 2002) and assessed heterogeneity by visual inspection of the SROC plots and using Cochran’s Q test (p > 0.1) (Cochran 1954). Summary DORs were estimated by random DerSimonian–Laird effect model (DerSimonian and Laird 1986) following the recommendations of Macaskill et al. ( 2010) because I 2 was greater than 50% and Q test was < 0.1. Since variability of results among different studies was confirmed, an investigation of heterogeneity was necessary and subgroup analyses were used. The Egger’s test (Song et al. 2002) was calculated for assessing publication bias using STATA 12.0. Finally, we obtained a crosshair plot and ROC ellipses plot to summarize the confidence intervals of Se and FP cases in each study with the R-package (Doebler 2015) using meta-analysis of diagnostic accuracy (MADA), LR+, LR−, PPV, NPV and DOR were calculated using SAS for Windows, version 9.4 (Cary, NC). ## Results ### Study Selection The initial literature search identified 1883 studies. Six hundred and sixty-seven duplicate records were eliminated to obtain 1216 non-duplicated articles, 1114 of which were excluded after title and abstract screening through the application of inclusion/exclusion criteria, and 87 were excluded after full text screening or methodological quality assessment and data extraction (see Supplemental Table 1). One additional study that qualified for inclusion was identified from the search of grey literature. Finally, 14 studies: (Baird et al. 2000; Barbaro and Dissanayake 2010; Canal-Bedia et al. 2011; Chlebowski et al. 2013; Dereu et al. 2010; Honda et al. 2005; Inada et al. 2011; Kamio et al. 2014; Miller et al. 2011; Nygren et al. 2012; Robins et al. 2014; Stenberg et al. 2014; Wiggins et al. 2014; Baranek 2015) were eligible for inclusion in our review. We present the flow chart showing the selection process in Fig.  1. ### Methodological Quality of the Included Studies We used the QUADAS-2 tool for study of quality assessment and K coefficient to examine inter-rater agreement for our initial overall quality score, and resolved any item discrepancies through discussion. The agreement between judges’ kappa values was 0.643 (CI 95%; p < 0.01). In Fig.  2, we summarize the results of the methodological quality for all 20 studies included in this assessment: (Baird 2000; Barbaro 2010; Canal-Bedia et al. 2011; Chlebowski 2013; Dereu 2010; Dietz 2006; Honda 2005, 2009; Inada 2011; Kamio 2014; Kleinman 2008; Miller 2011; Nygren et al. 2012; Pierce 2011; Robins 2008, 2014; Stenberg 2014; VanDenHeuvel 2007; Wetherby 2008; Wiggins et al. 2014). As Fig.  2 shows, two bar graphs report the assessment of risk of bias and applicability. The percentage of studies rated as unclear, high, or low is observed across X-axes at intervals of 20%. The concerns regarding applicability include three domains: patient selection, index test, and reference standard. The risk of bias dimension is comprised of four domains: patient selection, index test, reference standard, and flow and timing. 
Across a majority of studies, concern about applicability of the reference standard was assessed as low, the index test was assessed as unclear, and patient selection was assessed as having low concerns. Regarding risk or bias, the majority of the studies demonstrated high risk of bias for flow and timing; the index test was rated as unclear risk, the reference standard was generally rated as low risk, and patient selection was rated as low risk. During this process we excluded the following studies: Honda ( 2009), Pierce ( 2011), Robins ( 2008), VanDeHeuvel ( 2007), Wetherby ( 2008). In supplemental materials (see supplemental Table 1) we show the list of papers excluded during analysis of quality and data extraction processes. ### Characteristics of the Included Studies One hundred and two full text articles were assessed for eligibility, 14 (13.72%) of which were included in the quantitative synthesis. Some articles evaluated more than one index test (Inada et al. 2011; Nygren et al. 2012; Wiggins et al. 2014) and this is why we present a meta-analysis on 18 sets of psychometric values, 35.71% of which came from the USA, 35.71% from Europe, 21.42% from Japan and 7.14% from Australia. The sample includes 191,803 toddlers. The interval of age range is between 16.7 and 29 months. Sex data was available for 158,965 toddlers, of whom 73,431 (46.19%) were female. The studies presented great variability in terms of the data reported. Twelve of 14 studies (66.6%) showed all the primary outcomes required to populate 2 × 2 contingency tables. Data pertaining to Se were presented in 77.7% of studies, Sp in 55.5%, PPV in 77.7%, NPV in 44.4%, and LR+ and LR− in 22.2% of studies. The main characteristics and the clinical outcomes, as shown in included studies are presented (see Tables  1, 2). ### Diagnostic Accuracy of Screening Tools The accuracy of screening tools was evaluated in 14 studies that assessed the test characteristics of various screening tools (18 in all). The pooled Se was 0.72 (95% CI 0.61–0.81) and the Sp was 0.98 (95% CI 0.97–0.99). The positive likelihood ratio (LR+) was 131.27 (95% CI 50.40–344.48) and the negative likelihood ratio (LR−) was 0.22 (95% CI 0.13–0.45). The diagnostic odds ratio (DOR) was 596.09 (95% CI 174.32–2038.34). The positive predictive value (PPV) was 97.78 (95% CI 97.71–97.84) and the negative predictive value (NPV) was 93.13 (95% CI 93.02–93.24). The above is summarized in Table  3, while the corresponding HSROC plot is presented in Fig.  3. The Se of each individual study varied between 0.22 and 0.95 whereas the Sp ranged from 0.81 to 0.99 (see Table  4). 
Table 3 Parameters estimated between studies (point estimate = median) both for the entire meta-analysis and for the sub-analysis of nine studies Parameters Meta-analysis with all studies selected (N = 18) Meta-analysis: subgroup of analysis (N = 9) Estimated SD MC_error C.I._lower C.I._upper Estimated SD MC_error C.I._lower C.I._upper HSROC THETA a 0.86 0.13 < 0.01 0.12 0.60 0.51 0.16 0.01 0.16 0.17 HSROC LAMBDA b 2.89 0.13 < 0.01 2.59 2.99 2.90 0.14 < 0.01 2.56 2.99 HSROC Beta c − 0.09 < 0.01 < 0.01 − 0.09 − 0.09 0.38 0.09 0.01 0.20 0.55 σ α d 1.09 0.21 < 0.01 0.74 1.57 1.07 0.31 0.01 0.59 1.77 σ θ e 0.51 0.10 < 0.01 0.35 0.75 0.32 0.13 < 0.01 0.14 0.60 Se overall 0.72 0.05 < 0.01 0.61 0.81 0.77 0.03 < 0.01 0.69 0.84 Sp overall 0.98 < 0.01 < 0.01 0.97 0.99 0.99 < 0.01 < 0.01 0.97 0.99 MC error of each parameter smaller than 10% of its posterior standard deviation Se sensitivity, Sp specificity aTHETA = the overall mean cut-off value for defining a positive test bLAMBDA = the overall diagnostic accuracy cBeta = the logarithm of the ratio of the standard deviation of test results among patients with the disease and among patients without the disease dσ α = the between-study standard deviation of the difference in means eσ θ = the between-study standard deviation in the cut-off Table 4 Estimates of diagnostic precision and outcomes in single studies Study Screening test THETA a (95% CI) ALPHA b (95% CI) Prevalence c (95% CI) Sensitivity ( Se) (95% CI) Specificity ( Sp) (95% CI) Estimated SD Estimated SD Estimated SD Estimated SD Estimated SD Nygren et al. ( 2012) M-CHAT 1.31 (1.06–1.56) 0.12 3.95 (3.45–4.46) 0.24 0.01 (< 0.01–0.01) < 0.01 0.75 (0.63–0.87) 0.06 0.99 (0.99–1) < 0.01 Nygren et al. ( 2012) JOBS 1.16 (0.89–1.41) 0.13 4.21 (3.72–4.72) 0.25 0.01 (< 0.01–0.01) < 0.01 0.84 (0.72–0.93) 0.05 0.99 (0.99–1) < 0.01 Nygren et al. ( 2012) M-CHAT + JOBS 0.86 (0.58–1.12) 0.13 4.52 (4.02–5.03) 0.25 0.01 (< 0.01–0.01) < 0.01 0.92 (0.85–0.98) 0.03 0.99 (0.99–1) < 0.01 Baird et al. ( 2000) CHAT 1.99 (1.84–2.15) 0.07 2.58 (2.27–2.86) 0.15 < 0.01 (< 0.01 to < 0.01) < 0.01 0.22 (0.15–0.31) 0.04 0.99 (0.99–1) < 0.01 Wigginset al. ( 2014) M-CHAT 0.81 (0.53–1.05) 0.13 3.86 (3.37–4.40) 0.26 < 0.01 (< 0.01–0.01) < 0.01 0.88 (0.77–0.96) 0.05 0.99 (0.99–1) < 0.01 Wigginset al. ( 2014) PEDS + PATH 0.65 (0.39–0.94) 0.13 3.88 (3.33–4.44) 0.28 0.01 (< 0.01–0.01) < 0.01 0.91 (0.80–0.97) 0.04 0.99 (0.99–1) < 0.01 Kamio et al. ( 2014) M-CHAT_JV 1.15 (0.98–1.35) 0.09 2.28 (1.89–2.64) 0.19 0.02 (0.01–0.03) < 0.01 0.49 (0.35–0.62) 0.07 0.98 (0.98–0.99) < 0.01 Stenberg et al. ( 2014) M-CHAT − 0.05 (− 0.14–0.01) 0.05 3.13 (2.97–3.31) 0.09 < 0.01 (< 0.01 to < 0.01) < 0.01 0.95 (0.93–0.97) < 0.01 0.92 (0.92–0.93) < 0.01 Chlebowski et al. ( 2013) M-CHAT /YALE SCREENER and STAT 0.76 (0.59–0.91) 0.08 3.98 (3.68–4.30) 0.15 < 0.01 (< 0.01 to < 0.01) < 0.01 0.90 (0.84–0.95) 0.02 0.99 (0.99–1) < 0.01 Canal-Bedia et al. ( 2011) M-CHAT 0.54 (− 0.01 to − 1.03) 0.26 3.63 (2.63–4.69) 0.52 < 0.01 (< 0.01 to < 0.01) < 0.01 0.90 (0.68–0.99) 0.09 0.98 (0.98–0.99) < 0.01 Barbaro and Dissanayake ( 2010) SACS 1.06 (0.96–1.16) 0.05 3.90 (3.70–4.10) 0.10 0.01 (< 0.01–0.01) < 0.01 0.82 (0.77–0.87) 0.02 0.99 (0.99–1) < 0.01 M-CHAT (short version 9, cutoff:1) 0.23 (< 0.01–0.43) 0.10 1.44 (1.02–1.85) 0.20 0.02 (0.01–0.03) < 0.01 0.69 (0.54–0.83) 0.07 0.81 (0.79–0.84) 0.01 M-CHAT (full version) 0.66 (0. 47–0.84) 0.09 1.71 (1.31–2.07) 0.19 0.03 (0.02–0.04) < 0.01 0.58 (0.43–0.72) 0.07 0.92 (0.91–0.94) < 0.01 Dereu et al. 
( 2010) CESDD 0.68 (0.56–0.83) 0.07 2.32 (2.02–2.59) 0.15 < 0.01 (< 0.01 to <0.01) < 0.01 0.69 (0.58–0.77) 0.05 0.96 (0.95–0.96) < 0.01 Miller et al. ( 2011) ITC + M-CHAT 0.61 (0.27–0.93) 0.17 2.89 (2.23–3.61) 0.34 0.01 (0.01–0.03) < 0.01 0.81 (0.62–0.96) 0.08 0.97 (0.96–0.98) < 0.01 Robins et al. ( 2014) M-CHAT-R/F 0.78 (0.67–0.91) 0.06 3.53 (3.27–3.79) 0.13 < 0.01 (< 0.01 to < 0.01) < 0.01 0.84 (0.78–0.90) 0.03 0.99 (0.99–1) < 0.01 Honda et al. ( 2005) YACHT-18 1.58 (1.41–1.75) 0.08 4.27 (4.00–4.56) 0.14 < 0.01 (< 0.01–<0.01) < 0.01 0.71 (0.63–0.79) 0.04 0.99 (0.99–1) < 0.01 Baranek ( 2015) M-CHAT 0.68 (0.31–1.33) 0.18 1.99 (1.27–2.71) 0.37 0.01 (< 0.01–0.01) < 0.01 0.62 (0.35–0.85) 0.13 0.94 (0.92–0.96) < 0.01 Se sensitivity, Sp specificity aTHETA = the overall mean cut-off value for defining a positive test bALPHA = the ‘accuracy parameter’ measures the difference between TP and FP within-study parameters cPrevalence within-study parameters ### Exploration of Heterogeneity A considerable degree of heterogeneity in sensitivities was observed (Q = 337.62, df = 17.00, p < 0.001) and specificities (Q = 30901.50, df = 17.00, p < 0.001). The heterogeneity in test accuracy between studies may be due to differences in cut-offs utilized in different studies, among other factors (Doebler et al. 2012). To delve deeper into the understanding of these results, we evaluated the confidence intervals which describe the relationship between the psychometric properties. The ROC ellipse plots of the confidence intervals in Fig.  3 shows the studies responsible for high levels of heterogeneity, how cut-off values vary, and how they demonstrate moderate negative correlations between sensitivities and False Positive rates ( r s = − 0.355), that is, if Se tends to decrease when FP rate increases. According to this analysis, study 18 (Baranek 2015), study 14 (Dereu et al. 2010), studies 12 and 13 (Inada et al. 2011) and study 15 (Miller et al. 2011) show the largest confidence intervals both for Se and FP rate, and study 4 (Baird et al. 2000), study 10 (Canal-Bedia et al. 2011), study 7 (Kamio et al. 2014) and study 8 (Stenberg et al. 2014) indicate large confidence intervals only in Se. The SROC curve summarizes the relationship between Se and (1 −  Sp) across studies, taking into account the between-study heterogeneity. We constructed a SROC curve using all studies selected; see Fig.  3. It is worth noting that it is a significant graphical tool for understanding how the diagnostic accuracy of the different test depends on the different cut-off (Doebler et al. 2012). As Fig.  4 shows, the prediction region covers a larger range of Se than Sp. This may be due to the fact that most of the studies had a considerably larger number of participants with screen negative results compared to screen positive results, leading to greater sampling variability when we estimated Se vs. Sp. The figure also demonstrates an asymmetry of the test performance measures towards a higher Sp with higher variability of Se, providing indirect proof of some threshold variability. The figure also shows how when the threshold is increased then Se is decreased but Sp is increased. The posterior predictive value of Se was 0.71 (95% CI 0.22–1) with a standard error of 0.23 and that of Sp was 0.98 (95% CI 0.81–1) with a standard error of 0.07. ### Subgroup of Analysis A large degree of heterogeneity was observed. Heterogeneity may be due to different factors (Macaskill et al. 2010; Trikalinos et al. 2012). 
In order to investigate the source of heterogeneity in the current sample, we followed recommendations of these authors and conducted analyses using a subgroup of studies. The new meta-analysis excluded the following studies, based on graphical analysis and the Cochran Q test (p > 0.1): Study 4 (Baird et al. 2000), Study 7 (Kamio et al. 2014), Study 8 (Stenberg et al. 2014), Study 10 (Canal-Bedia et al. 2011), Studies 12 and 13 (Inada et al. 2011), Study 14 (Dereu et al. 2010), Study 15 (Miller et al. 2011), and Study 18 (Baranek 2015). Regarding the estimations between study parameters, subgroup analysis demonstrated that Se was increased because the pooled sensitivity was 0.77 (95% CI 0.69–0.84), and the Sp was 0.99 (95% CI 0.97–0.99). The posterior predictive p-value of Se was 0.81 (95% CI 0.39–1) and Sp, 0.97 (95% CI 0.76–1, SD = 0.08). Parameters estimated between studies by HSROC model are shown in Table  3, which demonstrates how the parameters estimated for the subgroup of analysis are higher results than those obtained for the first meta-analysis. For example, it is of note that standard deviation in the cut-off and standard deviation of the difference in means between studies are decreased. The estimates for individual studies were grouped by parameters and are shown in Table  5. Table 5 Estimates of diagnostic precision and outcomes in single studies for the sub-analysis of nine studies Study Screening test THETA a (95% CI) ALPHA b (95% CI) Prevalence c (95% CI) Se (95% CI) Sp (95% CI) Estimated SD Estimated SD Estimated SD Estimated SD Estimated SD Nygren et al. ( 2012) M-CHAT 0.82 (0.47–1.14) 0.17 3.56 (3.45–4.46) 0.29 0.01 (< 0.01–0.01) < 0.01 0.78 (0.65–0.90) 0.06 0.99 (0.99–1) < 0.01 Nygren et al. ( 2012) JOBS 0.65 (0.31–0.98) 0.17 3.93 (3.72–4.72) 0.28 0.01 (< 0.01 -01) < 0.01 0.86 (0.76–0.94) 0.05 0.99 (0.99–1) < 0.01 Nygren et al. ( 2012) M-CHAT + JOBS 0.34 (-0.03–0.71) 0.19 4.32 (4.02–5.03) 0.33 0.01 (< 0.01–0.01) < 0.01 0.93 (0.85–0.98) 0.03 0.99 (0.99–1) < 0.01 Wiggins et al. ( 2014) M-CHAT 0.35 (− 0.06 to 0.76) 0.20 3.61 (3.37–4.40) 0.33 < 0.01 (< 0.01–0.01) < 0.01 0.88 (0.76–0.96) 0.05 0.99 (0.99–1) < 0.01 Wiggins et al. ( 2014) PEDS + PATH 0.24 (− 0.15 to 0.76) 0.20 3.57 (3.33–4.44) 0.36 0.01 (< 0.01–0.01) < 0.01 0.89 (0.77–0.98) 0.04 0.99 (0.99–1) < 0.01 Chlebowski et al. ( 2013) M-CHAT /YALE SCREENER/STAT 0.24 (0.04–0.42) 0.10 3.87 (3.68–4.30) 0.21 < 0.01 (< 0.01 to < 0.01) < 0.01 0.91 (0.85–0.95) 0.02 0.99 (0.99–1) < 0.01 Barbaro and Dissanayake ( 2010) SACS 0.60 (0.36–0.81) 0.10 3.56 (3.70–4.10) 0.14 0.01 (< 0.01 to < 0.01) < 0.01 0.83 (0.78–0.88) 0.02 0.99 (0.99–1) < 0.01 Robins et al. ( 2014) M-CHAT-R/F 0.36 (0.14–0.49) 0.08 3.26 (3.27–3.79) 0.15 < 0.01 (< 0.01 to <0.01) < 0.01 0.85 (0.80–0.91) 0.03 0.99 (0.99–1) < 0.01 Honda et al. ( 2005) YACHT-18 0.98 (0.66–1.29) 0.16 4.15 (4.00–4.56) 0.20 < 0.01 (< 0.01 to <0.01) < 0.01 0.81 (0.73–0.89) 0.04 0.99 (0.99–1) < 0.01 MC error of each parameter smaller than 10% of its posterior standard deviation Se sensitivity, Sp specificity aTHETA = the overall mean cut-off value for defining a positive test bALPHA = the ‘accuracy parameter’ measures the difference between TP and FP within-study parameters cPrevalence within-study parameters Figure  5 shows how the prediction region covers a larger range of Se than Sp although this is less than in the first meta-analysis. The figure also shows less asymmetry of the test performance and therefore less heterogeneity. 
This means that the range, which includes the measurements for Se and Sp is lower than the one shown in Fig.  4. ### Publication Bias The estimated Egger bias coefficient was 3.21 (95% CI − 0.49 to 6.92) with a standard error of 1.5, giving a p-value of 0.08. The test thus suggests evidence that results are not biased by the presence of small-study effects. ## Discussion Interest in early detection of ASD is increasing, due to the growing evidence that early intervention improves prognosis. Low-risk screening, as part of pediatric primary care, for example, is one of the most widely studied strategies to promote early detection. Consequently, the information reported from systematic reviews of screening accuracy is valuable, both for research and practice. Different systematic reviews, such as the ones carried out by Daniels et al. ( 2014) and McPheeters et al. ( 2016), have represented an important advance with regard to traditional or narrative reviews, which were characterized by a lack of systematization. However, a meta-analysis is a systematic review which also uses statistical methods to analyze the results of the included studies. It is accepted that data from systematic reviews with meta-analyses adds value since the statistical analysis used converts the results of primary studies into a measure of integrated quantitative evidence. This is beneficial both to the scientific community and to the clinicians who use the tools in such meta-analyses. Meta-analysis of screening studies is a complex but critical approach to examining evidence across measures and scoring thresholds in different populations (Gatsonis and Paliwal 2006). We employed a Bayesian Hierarchical Model (Rutter and Gatsonis 2001), which is robust in adjusting for the imperfect nature of the reference standard of autism tools, in a bivariate meta-analysis of diagnostic test sensitivity and specificity and others psychometric parameters. This kind of meta-analysis statistically compares the accuracy of different diagnostic screening tests and describes how test accuracy varies. Therefore, it is more likely to lead to a ‘gold standard’ than other types of reviews which can be influenced by biases associated with the publication of single studies. The HSROC model was used to estimate the screening accuracy parameters and a summary in each study as functions of an underlying bivariate normal model. This model has been recommended when there is no standard cut-off to define a positive result (Bronsvoort et al. 2010; Dukic and Gatsonis 2003; Macaskill 2004) in order to allow the meta-analytic assessment of heterogeneity between studies while taking into consideration both within- and between-study variability. Furthermore, it is also optimally suited when more information is available, for example, when the studies have reported results from more than one modality (Rutter and Gatsonis 2001) like our case. The advantages of the model have been discussed (Gatsonis and Paliwal 2006; Leeflang et al. 2013; Macaskill 2004; Rutter and Gatsonis 2001) and support its selection in this meta-analysis. This review included 14 studies that assessed the test characteristics of various screening tools (18 in all) for detecting autism and a subgroup of analysis retaining nine studies that demonstrated lower heterogeneity. 
Initial findings of the overall meta-analysis show that tools which are used in level 1 ASD screening are accurate at detecting the presence of ASD [pooled sensitivity was 0.72 (95% CI 0.61–0.81)] and highly accurate at detecting a lack of presence of ASD [pooled of specificity was 0.98 (95% CI 0.97–0.99)]. But more importantly, we demonstrate the tools’ performance in identifying autism, DOR 596.09 (95% CI 174.32–2038.34). The clinical utility of the level 1 screening tools reviewed in this study is clear because the pooled positive likelihood ratio (LR+) was 131.27 (95% CI 50.40–344.48) and the negative likelihood ratio (LR−) was 0.22 (95% CI 0.13–0.45). LR+ > 1 indicates the results are associated with the disease. Although those findings are informative to clinicians, it is important to understand the limitations of the last assertion because the accuracy of a LR depends upon the quality of the studies that generated the pooled of sensitivity and specificity, therefore data must be interpreted with caution. Finally, the pooled of positive predictive value (PPV) was 97.78 (95% CI 97.71–97.84) and the negative predictive value (NPV) was 93.13 (95% CI 93.02–93.24). A limitation of this meta-analysis comes from the methodological limitations of the included studies; 55% of the included studies were assessed to have high risk or unclear risk of bias in the quality analysis with QUADAS, particularly in the domains of flow and timing, and in the index test. We recommend that future screening studies include a flowchart with information about the method of recruitment of patients, sample, order of test execution, follow up and other details related to the process to improve replicability and to better inform readers about potential bias. The second concern is about the heterogeneity of the psychometric data in the included studies. In this respect, according to Doebler et al. ( 2012), in diagnostic meta-analysis the observed sensitivities and specificities can vary across primary studies and heterogeneity should be assumed in results of this kind of meta-analysis (Macaskill et al. 2010). This assertion has been acknowledged in this work and justifies the choice of the model HSROC, which is a more robust model for addressing heterogeneity compared to some of the other meta-analysis models. Following the recommendations of Macaskill et al. ( 2010) and Trikalinos et al. ( 2012) we conducted a subgroup of analyses to assess the pooled Se and Sp without those studies driving heterogeneity in analyses. The pooled of sensitivity and specificity were improved by the exclusion of these studies. Consequently, the parameters estimated for this set of studies suggested a good performance for ruling out and ruling in ASD since the prior pooled Se was 0.77 (95% CI 0.69–0.84, SD = 0.03), Sp was 0.99 (95% CI 0.97–0.99; SD ≤ 0.01), the posterior predictive p-value of Se was 0.81 (95% CI 0.39–1, SD = 0.18), and high specificity was maintained, 0.97 (95% CI 0.76–1, SD = 0.08). The previous data from the posterior predictive p-values of Se and Sp are very important because the true estimate of Se and Sp in each study could be found by empirical Bayes estimates (Harbord and Whiting 2009). One important aspect to bear in mind is that only about 66.6% of all studies showed all the primary outcomes required to populate 2 × 2 contingency tables. Data pertaining to the Se were presented in 77.7% of studies, Sp in 55.5%, PPV in 77.7%, NPV in 44.4%, LR+ and LR− in 22.2% of studies. 
This leads us to recommend that authors of screening studies include sufficient detail to calculate all psychometric properties to improve the quality of systematic reviews and future meta-analyses. It also would be valuable for authors of future studies to reflect on the question of why there is such a low percentage of primary studies that do provide those data. Some authors use caution in presenting psychometric properties when the negative cases cannot be confirmed to be true negatives. Although this is a notable limitation of cross-sectional screening studies, given that confirmatory evaluations are prohibitive in very large samples, it is likely that the number of truly negative cases greatly outnumbers those cases that will later be identified as false negatives, suggesting that interpreting the TN cell of the 2 × 2 matrix to be “presumed TN” is a reasonable assertion. Looking further at the omission of specific psychometric values, there is a remarkably low percentage of studies that include LR+ and LR−, as well as a number that do not report NPV. LR+ and LR− may not have been commonly included given that they were not emphasized in the American Academy of Pediatrics’ policy statement, which highlighted the psychometric properties of Se and Sp. The reduced emphasis on NPV may be due to the fact that predictive value is affected by the base rate of the disorder in the sample being studied (such that PPV and NPV may vary dramatically across sampling strategies), whereas Se and Sp are not influenced by base rate. We recommend that future studies report comprehensive psychometrics, in order to promote understanding of the findings. In addition, it is often difficult to ascertain characteristics of the study, study cohort, and technical aspects (Gatsonis and Paliwal 2006). In future studies, a unified approach is necessary in presenting results of screening research to avoid the inconsistency and heterogeneity observed. The present results suggested improved screening accuracy when meta-analysis was restricted to a subset of studies with reduced heterogeneity (see Table 3 for a comparison of parameters for the complete meta-analysis and the subgroup meta-analysis). The subgroup findings add specific knowledge for clinicians and researchers regarding each tool used for toddler ASD screening. We have estimated parameters for each study in both meta-analyses (see Tables 4, 5). The results from the subgroup analysis suggest that the Se of each individual study varied between 0.78 and 0.88. In those tables we also report other important data that could be a particular contribution for clinicians in this field of study, such as the different cut-off points, the ‘accuracy parameter’ (which measures the difference between TP and FP in each study), and the prevalence. With respect to prevalence, we can say that it was estimated at or near 1%, depending on the study. Finally, in the light of the results obtained by computing the summary measures with and without the studies shown as outliers (Tables 3, 4, 5), we suggest that the tools used in level 1 screening are adequate to detect ASD in the 14–36 month age range. Thus, we confirm, in quantitative terms, the finding of the USPSTF that screening detects ASD.

## Conclusion

A systematic review and meta-analysis of screening tools to detect ASD in toddlers determined that these measures detect ASD with high Se and Sp.
Studies were restricted to low-risk samples in children younger than 3 years old, in order to evaluate the use of these screening tools in primary pediatric care. Given that children who start ASD-specific early intervention before age three have improved outcomes compared to children who go untreated prior to preschool, it is essential to disseminate strategies to improve the identification of the children in need of intervention as young as possible. Consistent with the recommendation of the American Academy of Pediatrics (Johnson et al. 2007) results of the current study show the validity of low-risk screening to identify ASD in children under 3 years old. ## Acknowledgments The authors thank their colleagues from the AJ Drexel Autism Institute and others that have supported this research. Special thanks to Dr. Newschaffer, to members of Dr. Robins’ team, and especially at UNC Chapel Hill to Dr. Baranek who contributed as yet unpublished data to this meta-analysis. We likewise wish to thank all the researchers whom we contacted during the search for grey literature. Also, the authors express special appreciation to Dr. Canal-Bedía, Magán-Maganto and de Pablos who took part in the process of qualitative review and to Dr. Verdugo-Alonso for providing ongoing support for this project. Finally, we thank the Fulbright Commission for supporting this Project. ## Compliance with Ethical Standards ### Ethical Approval The information and analysis in this research is essentially based on data gathered on previous primary studies in which ethical approval. ### Informed Consent Informed consent were obtained by the investigators from all individual participants included in their studies. ## Appendix 1 ### The Search Strategy Described on PubMed was Carried on May 2015 #1 “Autistic Disorder” [Majr] OR “Autistic Disorder” [Title/Abstract] OR “Autistic Disorders” [Title/Abstract] OR “Autism” [Title/Abstract] OR “Child Development Disorders, Pervasive” [Majr] OR “Pervasive Developmental Disorder” [Title/Abstract] OR “Pervasive Developmental Disorders” [Title/Abstract] OR “PDD” [Title/Abstract] OR “Autistic Spectrum Disorder” [Title/Abstract] OR “Autistic Spectrum Disorders” [Title/Abstract] OR “Autism Spectrum Disorder” [Title/Abstract] OR “Autism Spectrum Disorders” [Title/Abstract] OR “ASD” [Title/Abstract] #2 “Diagnosis” [Mesh:noexp] OR “Diagnosis” [Subheading] OR “Diagnosis” [Title/Abstract] OR “Early Diagnosis” [Mesh:noexp] OR “Early Diagnosis” [Title/Abstract] OR “Detection” [Title/Abstract] OR “Early Detection” [Title/Abstract] OR “Early Identification” [Title/Abstract] OR “Early Intervention” [Title/Abstract] OR “Early Prediction” [Title/Abstract] #3 “Screening” [Title/Abstract] OR “Early Screening” [Title/Abstract] OR “Mass Screening” [Majr:noexp] OR “Mass Screening/instrumentation” [Majr:noexp] OR “Mass Screening/methods” [Majr:noexp] OR “Mass Screening” [Title/Abstract] OR “Screening Tool” [Title/Abstract] OR “Screening Tools” [Title/Abstract] OR “Screening Test” [Title/Abstract] OR “Screening Instrument” [Title/Abstract] OR “Screening Instruments” [Title/Abstract] OR “Checklist” [MeSH Terms] OR “Checklist” [Title/Abstract] OR “Checklists” [Title/Abstract] OR “Follow-up” [Title/Abstract] #4 (#2 AND #3) #5 (#1 AND #4) #6 “Infant” [MeSH Terms:noexp] OR “Child, Preschool” [MeSH Terms] OR “Infant” [Title/Abstract] OR “Infants” [Title/Abstract] OR “Preschool Child” [Title/Abstract] OR “Preschool Children” [Title/Abstract] OR “Toddler” [Title/Abstract] OR “Toddlers” [Title/Abstract] #7 (#5 AND #6) 
#8 “1992/01/01” [PDAT]: “2015/04/31” [PDAT] #9 English[Lang] #10 (#7 AND #8 AND #9)

## Appendix 2

### Definitions for Bio-Statistical Terms that may not be Familiar to Readers

Cochran Q Statistic for Heterogeneity: used to determine whether variations between primary studies represent true differences or are due to chance. A p value < 0.05 indicates the presence of heterogeneity, given the low statistical power of Cochran’s Q test. $$Q=\sum_i w_i \left( T_i - \bar{T} \right)^2$$

Diagnostic accuracy relates to the ability of a test to discriminate between the target condition and health. This discriminative ability can be quantified by the measures of diagnostic accuracy: sensitivity and specificity, positive and negative predictive values (PPV, NPV), likelihood ratios, the area under the ROC curve (AUC), and the diagnostic odds ratio (DOR).

Diagnostic Odds Ratio (DOR): a measure of the effectiveness of a diagnostic test: $$DOR=(LR+)/(LR-)=(TP/FN)/(FP/TN).$$

Egger's test is a simple linear regression of the magnitude of the effect divided by its standard error on the inverse standard error, which tests whether the Y intercept is statistically significant with p < 0.1.

Graphical analysis: the starting point for the investigation of heterogeneity in diagnostic or screening accuracy reviews is often visual assessment of study results in forest plots and in ROC space.

Grey literature is generally understood to mean literature that is not formally published in accessible sources. It can be another source of bias in meta-analytical studies.

I² Measure for Heterogeneity indicates the percentage of variance in a meta-analysis that is attributable to between-study heterogeneity. I² values range from 0 to 100%; values of 25%, 50%, and 75% are interpreted as low, moderate, and high, respectively: $$I^2=\begin{cases} \dfrac{Q-(k-1)}{Q}\times 100\% & \text{if } Q>k-1 \\ 0 & \text{if } Q \leq k-1 \end{cases}$$

Negative Likelihood Ratio (LR−) shows how much the odds of the target condition are decreased when the index test is negative. $$LR-=(1 - Se)/Sp$$

Negative Predictive Value (NPV): the probability of no target condition among patients with a negative index test result. $$NPV=TN/(TN+FN)$$

Positive Predictive Value (PPV): the probability of the target condition among patients with a positive index test result. $$PPV=TP/(TP+FP)$$

Positive Likelihood Ratio (LR+) shows how much the odds of the target condition are increased when the index test is positive. $$LR+=Se/(1 - Sp)$$

Publication bias is the term for what occurs whenever the research that appears in the published literature is systematically unrepresentative of the population of completed studies.

The posterior predictive p-value is a Bayesian alternative to the classical p-value. It is used to calculate the tail-area probability corresponding to the observed value of the statistic.

p-value: the probability, under the assumption of the null hypothesis, of obtaining a result equal to or more extreme than what was observed. It indicates whether a difference found between the groups being compared is due to chance.

Sensitivity (Se): the proportion of patients with the target condition who are identified as having the condition. $$Se=TP/(TP+FN)$$

Specificity (Sp): the proportion of patients without the target condition who are identified as not having the condition.
## Electronic supplementary material
Below is the link to the electronic supplementary material.
## Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8198122978210449, "perplexity": 4432.664391709809}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320299852.23/warc/CC-MAIN-20220116093137-20220116123137-00122.warc.gz"}
http://mathhelpforum.com/algebra/652-urgent.html
# Math Help - Urgent 1. ## Urgent What is the area of a rectangle with a width of 12 feet and a length of 10 feet? 2. 120 square feet. The area of a rectangle is length times width.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8606848120689392, "perplexity": 813.8873660830043}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207928350.51/warc/CC-MAIN-20150521113208-00301-ip-10-180-206-219.ec2.internal.warc.gz"}
https://www.geteasysolution.com/what-is-0.006-percent-of-545000
# What is 0.006 percent of 545000 - step by step solution
## A simple, best-practice solution for 0.006% of 545000. Check how easy it is, and learn it for the future. Our solution is simple and easy to understand, so don't hesitate to use it for your homework. If it's not what you are looking for, type your own values into the calculator fields and you will get the solution.
To get the solution we are looking for, we need to point out what we know.
1. We assume that the number 545000 is 100%, because it is the output value of the task.
2. We assume that x is the value we are looking for.
3. If 545000 is 100%, we can write it down as 545000=100%.
4. We know that x is 0.006% of the output value, so we can write it down as x=0.006%.
5. Now we have two simple equations: 1) 545000=100% 2) x=0.006%, where the left sides of both have the same units and the right sides have the same units, so we can write: 545000/x=100%/0.006%
6. Now we just have to solve this simple equation, and we will get the solution we are looking for.
7. Solution for what is 0.006% of 545000:
545000/x=100/0.006
(545000/x)*x=(100/0.006)*x       - we multiply both sides of the equation by x
545000=16666.666666667*x       - we divide both sides of the equation by 16666.666666667 to get x
545000/16666.666666667=x
32.7=x
x=32.7
Now we have: 0.006% of 545000=32.7
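The same proportion can be checked with a few lines of Python; this snippet is an illustration added here, not part of the original page.

```python
def percent_of(percent, whole):
    """Return `percent` % of `whole`, using the proportion whole/x = 100/percent."""
    return whole * percent / 100.0

print(percent_of(0.006, 545000))  # 32.7
```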
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8080384135246277, "perplexity": 444.32996688332065}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371813538.73/warc/CC-MAIN-20200408104113-20200408134613-00007.warc.gz"}
https://neugierde.github.io/cantors-attic/Ineffable
# cantors-attic Climb into Cantor’s Attic, where you will find infinities large and small. We aim to provide a comprehensive resource of information about all notions of mathematical infinity. View the Project on GitHub neugierde/cantors-attic # Ineffable cardinal Ineffable cardinals were introduced by Jensen and Kunen in (Jensen & Kunen, 1969) and arose out of their study of $\diamondsuit$ principles. An uncountable regular cardinal $\kappa$ is ineffable if for every sequence $\langle A_\alpha\mid \alpha<\kappa\rangle$ with $A_\alpha\subseteq \alpha$ there is $A\subseteq\kappa$ such that the set $S=\{\alpha<\kappa\mid A\cap \alpha=A_\alpha\}$ is stationary. Equivalently an uncountable regular $\kappa$ is ineffable if and only if for every function $F:[\kappa]^2\rightarrow 2$ there is a stationary $H\subseteq\kappa$ such that $F\upharpoonright [H]^2$ is constant (Jensen & Kunen, 1969). This second characterization strengthens a characterization of weakly compact cardinals which requires that there exist such an $H$ of size $\kappa$. If $\kappa$ is ineffable, then $\diamondsuit_\kappa$ holds and there cannot be a slim $\kappa$-Kurepa tree (Jensen & Kunen, 1969). A $\kappa$-Kurepa tree is a tree of height $\kappa$ having levels of size less than $\kappa$ and at least $\kappa^+$-many branches. A $\kappa$-Kurepa tree is slim if every infinite level $\alpha$ has size at most $|\alpha|$. An uncountable cardinal κ has the normal filter property iff it is ineffable. (Holy & Schlicht, 2018) ## Ineffable cardinals and the constructible universe Ineffable cardinals are downward absolute to $L$. In $L$, an inaccessible cardinal $\kappa$ is ineffable if and only if there are no slim $\kappa$-Kurepa trees. Thus, for inaccessible cardinals, in $L$, ineffability is completely characterized using slim Kurepa trees. (Jensen & Kunen, 1969) If $0^\sharp$ exists, then every Silver indiscernible is ineffable in $L$. (Jech, 2003) Ramsey cardinals are stationary limits of completely ineffable cardinals, they are weakly ineffable, but the least Ramsey cardinal is not ineffable. Ineffable Ramsey cardinals are limits of Ramsey cardinals, because ineffable cardinals are $Π^1_2$-indescribable and being Ramsey is a $Π^1_2$-statement. The least strongly Ramsey cardinal also is not ineffable, but super weakly Ramsey cardinals are ineffable. $1$-iterable (=weakly Ramsey) cardinals are weakly ineffable and stationary limits of completely ineffable cardinals. The least $1$-iterable cardinal is not ineffable. (Holy & Schlicht, 2018; Gitman, 2011) ## Weakly ineffable cardinal Weakly ineffable cardinals (also called almost ineffable) were introduced by Jensen and Kunen in (Jensen & Kunen, 1969) as a weakening of ineffable cardinals. An uncountable regular cardinal $\kappa$ is weakly ineffable if for every sequence $\langle A_\alpha\mid \alpha<\kappa\rangle$ with $A_\alpha\subseteq \alpha$ there is $A\subseteq\kappa$ such that the set $S=\{\alpha<\kappa\mid A\cap \alpha=A_\alpha\}$ has size $\kappa$. If $\kappa$ is weakly ineffable, then $\diamondsuit_\kappa$ holds. ## Subtle cardinal Subtle cardinals were introduced by Jensen and Kunen in (Jensen & Kunen, 1969) as a weakening of weakly ineffable cardinals. A uncountable regular cardinal $\kappa$ is subtle if for every for every $\langle A_\alpha\mid \alpha<\kappa\rangle$ with $A_\alpha\subseteq \alpha$ and every closed unbounded $C\subseteq\kappa$ there are $\alpha<\beta$ in $C$ such that $A_\beta\cap\alpha=A_\alpha$. If $\kappa$ is subtle, then $\diamondsuit_\kappa$ holds. 
To be expanded. ## $n$-ineffable cardinal The $n$-ineffable cardinals for $2\leq n<\omega$ were introduced by Baumgartner in (Baumgartner, 1975) as a strengthening of ineffable cardinals. A cardinal is $n$-ineffable if for every function $F:[\kappa]^n\rightarrow 2$ there is a stationary $H\subseteq\kappa$ such that $F\upharpoonright [H]^n$ is constant. • $2$-ineffable cardinals are exactly the ineffable cardinals. • an $n+1$-ineffable cardinal is a stationary limit of $n$-ineffable cardinals. (Baumgartner, 1975) A cardinal $\kappa$ is totally ineffable if it is $n$-ineffable for every $n$. • a $1$-iterable cardinal is a stationary limit of totally ineffable cardinals. (this follows from material in (Gitman, 2011)) ### Helix (Information in this subsection come from (Friedman, 1998) unless noted otherwise.) For $k \geq 1$ we define: • $\mathcal{P}(x)$ is the powerset (set of all subsets) of $x$. $\mathcal{P}_k(x)$ is the set of all subsets of $x$ with exactly $k$ elements. • $f:\mathcal{P}_k(\lambda) \to \mathcal{P}(\lambda)$ is regressive iff for all $A \in \mathcal{P}_k(\lambda)$, we have $f(A) \subseteq \min(A)$. • $E$ is $f$-homogenous iff $E \subseteq \lambda$ and for all $B,C \in \mathcal{P}_k(E)$, we have $f(B) \cap \min(B \cup C) = f(C) \cap \min(B \cup C)$. • $\lambda$ is $k$-subtle iff $\lambda$ is a limit ordinal and for all clubs $C \subseteq \lambda$ and regressive $f:\mathcal{P}_k(\lambda) \to \mathcal{P}(\lambda)$, there exists an $f$-homogenous $A \in \mathcal{P}_{k+1}(C)$. • $\lambda$ is $k$-almost ineffable iff $\lambda$ is a limit ordinal and for all regressive $f:\mathcal{P}_k(\lambda) \to \mathcal{P}(\lambda)$, there exists an $f$-homogenous $A \subseteq \lambda$ of cardinality $\lambda$. • $\lambda$ is $k$-ineffable iff $\lambda$ is a limit ordinal and for all regressive $f:\mathcal{P}_k(\lambda) \to \mathcal{P}(\lambda)$, there exists an $f$-homogenous stationary $A \subseteq \lambda$. $0$-subtle, $0$-almost ineffable and $0$-ineffable cardinals can be defined as “uncountable regular cardinals” because for $k \geq 1$ all three properties imply being uncountable regular cardinals. • For $k \geq 1$, if $\kappa$ is a $k$-ineffable cardinal, then $\kappa$ is $k$-almost ineffable and the set of $k$-almost ineffable cardinals is stationary in $\kappa$. • For $k \geq 1$, if $\kappa$ is a $k$-almost ineffable cardinal, then $\kappa$ is $k$-subtle and the set of $k$-subtle cardinals is stationary in $\kappa$. • For $k \geq 1$, if $\kappa$ is a $k$-subtle cardinal, then the set of $(k-1)$-ineffable cardinals is stationary in $\kappa$. • For $k \geq n \geq 0$, all $k$-ineffable cardinals are $n$-ineffable, all $k$-almost ineffable cardinals are $n$-almost ineffable and all $k$-subtle cardinals are $n$-subtle. This structure is similar to the double helix of $n$-fold variants and earlier known although smaller. (Kentaro, 2007) ## Completely ineffable cardinal Completely ineffable cardinals were introduced in (Abramson et al., 1977) as a strengthening of ineffable cardinals. Define that a collection $R\subseteq P(\kappa)$ is a stationary class if • $R\neq\emptyset$, • for all $A\in R$, $A$ is stationary in $\kappa$, • if $A\in R$ and $B\supseteq A$, then $B\in R$. A cardinal $\kappa$ is completely ineffable if there is a stationary class $R$ such that for every $A\in R$ and $F:[A]^2\to2$, there is $H\in R$ such that $F\upharpoonright [H]^2$ is constant. Relations: ## References 1. Jensen, R., & Kunen, K. (1969). Some combinatorial properties of L and V. 
http://www.mathematik.hu-berlin.de/~raesch/org/jensen.html
2. Holy, P., & Schlicht, P. (2018). A hierarchy of Ramsey-like cardinals. Fundamenta Mathematicae, 242, 49–74. https://doi.org/10.4064/fm396-9-2017
3. Jech, T. J. (2003). Set Theory (Third). Springer-Verlag. https://logic.wikischolars.columbia.edu/file/view/Jech%2C+T.+J.+%282003%29.+Set+Theory+%28The+3rd+millennium+ed.%29.pdf
4. Gitman, V. (2011). Ramsey-like cardinals. The Journal of Symbolic Logic, 76(2), 519–540. http://boolesrings.org/victoriagitman/files/2011/08/ramseylikecardinals.pdf
5. Abramson, F., Harrington, L., Kleinberg, E., & Zwicker, W. (1977). Flipping properties: a unifying thread in the theory of large cardinals. Ann. Math. Logic, 12(1), 25–58.
6. Nielsen, D. S., & Welch, P. (2018). Games and Ramsey-like cardinals.
7. Friedman, H. M. (1998). Subtle cardinals and linear orderings. https://u.osu.edu/friedman.8/files/2014/01/subtlecardinals-1tod0i8.pdf
8. Hamkins, J. D., & Johnstone, T. A. (2014). Strongly uplifting cardinals and the boldface resurrection axioms.
9. Rathjen, M. (2006). The art of ordinal analysis. http://www.icm2006.org/proceedings/Vol_II/contents/ICM_Vol_2_03.pdf
10. Baumgartner, J. (1975). Ineffability properties of cardinals. I. In Infinite and finite sets (Colloq., Keszthely, 1973; dedicated to P. Erdős on his 60th birthday), Vol. I (pp. 109–130). Colloq. Math. Soc. János Bolyai, Vol. 10. North-Holland.
11. Kentaro, S. (2007). Double helix in large large cardinals and iteration of elementary embeddings. Annals of Pure and Applied Logic, 146(2-3), 199–236. https://doi.org/10.1016/j.apal.2007.02.003
12. Sharpe, I., & Welch, P. (2011). Greatly Erdős cardinals with some generalizations to the Chang and Ramsey properties. Ann. Pure Appl. Logic, 162(11), 863–902. https://doi.org/10.1016/j.apal.2011.04.002
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.976606011390686, "perplexity": 1485.521198704694}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780060877.21/warc/CC-MAIN-20210928153533-20210928183533-00631.warc.gz"}
https://en.wikibooks.org/wiki/Fundamentals_of_Transportation/Earthwork
# Fundamentals of Transportation/Earthwork The dump truck is amongst the equipment necessary for earthwork to occur Earthwork is something that transportation projects seldom avoid. In order to establish a properly functional road, the terrain must often be adjusted. In many situations, geometric design will often involve minimizing the cost of earthwork movement. Earthwork is expressed in units of volumes (cubic meters in metric). Increases in such volumes require additional trucks (or more runs of the same truck), which cost money. Thus, it is important for designers to engineer roads that require very little earthwork. ## Cross Sections and Volume Computation A Roadway Cross Section on otherwise Level Ground To determine the amount of earthwork to occur on a given site, one must calculate the volume. For linear facilities, which include highways, railways, runways, etc., volumes can easily be calculated by integrating the areas of the cross sections (slices that go perpendicular to the centerline) for the entire length of the corridor. More simply, several cross sections can be selected along the corridor and an average can be taken for the entire length. Several different procedures exist for calculating areas of earthwork cross sections. In the past, the popular method was to draw cross sections by hand and use a planimeter to measure area. In modern times, computers use a coordinate method to assess earthwork calculations. To perform this task, points with known elevations need to be identified around the cross section. These points are considered in the (X, Y) coordinate plane, where X represents the horizontal axis paralleling the ground and Y represents the vertical axis that is elevation. Area can be computed with the following formula: $A = |\frac{{1}}{{2}}\sum\limits_{i = 1}^n {X_i(Y_{i+1} - Y_{i-1})}|\,\!$ Where: • $A\,\!$ = Area of Cross-Section • $n\,\!$ = Number of Points on Cross Section (Note: n+1 = 1 and 1-1=n, for indexing) • $X\,\!$ = X-Coordinate • $Y\,\!$ = Y-Coordinate With this, earthwork volumes can be calculated. The easiest means to do so would by using the average end area method, where the two end areas are averaged over the entire length between them. $V = \frac{{A_1 + A_2}}{{2}}L\,\!$ Where: • $V\,\!$ = Volume • $A_1\,\!$ = Cross section area of first side • $A_2\,\!$ = Cross section area of second side • $L\,\!$ = Length between the two areas If one end area has a value of zero, the earthwork volume can be considered a pyramid and the correct formula would be: $V = \frac{{AL}}{{3}}\,\!$ A more accurate formula would the prismoidal formula, which takes out most of the error accrued by the average end area method. $V_p = \frac{{L(A_1 + 4A_m + A_2)}}{{6}} \,\!$ Where: • $V_p\,\!$ = Volume given by the prismoidal formula • $A_m\,\!$ = Area of a plane surface midway between the two cross sections ## Cut and Fill A Typical Cut/Fill Diagram Various sections of a roadway design will require bringing in earth. Other sections will require earth to be removed. Earth that is brought in is considered Fill while earth that is removed is considered Cut. Generally, designers generate drawings called Cut and Fill Diagrams, which illustrate the cut or fill present at any given site. This drawing is quite standard, being no more than a graph with site location on the X-axis and fill being the positive range of the Y-axis while cut is the negative range of the Y-axis. 
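As an illustration of the formulas above (not part of the original wikibook), here is a short Python sketch of the coordinate method for cross-section area together with the average end area and prismoidal volume formulas; the sample coordinates and areas are made up.

```python
def cross_section_area(points):
    """Coordinate method: points is a list of (x, y) vertices in order around the section."""
    n = len(points)
    total = 0.0
    for i in range(n):
        x_i, _ = points[i]
        _, y_next = points[(i + 1) % n]      # wrap around: n+1 -> 1
        _, y_prev = points[(i - 1) % n]      # wrap around: 1-1 -> n
        total += x_i * (y_next - y_prev)
    return abs(total) / 2.0

def average_end_area_volume(a1, a2, length):
    """V = (A1 + A2)/2 * L; falls back to V = A*L/3 when one end area is zero (pyramid)."""
    if a1 == 0 or a2 == 0:
        return max(a1, a2) * length / 3.0
    return (a1 + a2) / 2.0 * length

def prismoidal_volume(a1, a_m, a2, length):
    """V = L * (A1 + 4*Am + A2) / 6, with Am the mid-section area."""
    return length * (a1 + 4.0 * a_m + a2) / 6.0

# Made-up example: a 10 m wide, 2 m deep rectangular cut section.
section = [(0, 0), (10, 0), (10, -2), (0, -2)]
a = cross_section_area(section)              # 20.0 square meters
print(a, average_end_area_volume(a, 18.0, 50.0))
```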
A Typical Mass Diagram (Note: Additional Dirt is Needed in this Example) ## Mass Balance Using the data for cut and fill, an overall mass balance can be computed. The mass balance represents the total amount of leftover (if positive) or needed (if negative) earth at a given site based on the design up until that point. It is a useful piece of information because it can identify how much remaining or needed earth will be present at the completion of a project, thus allowing designers to calculate how much expense will be incurred to haul out excess dirt or haul in needed additional. Additionally, a mass balance diagram, represented graphically, can aid designers in moving dirt internally to save money. Similar to the cut and fill diagram, the mass balance diagram is illustrated on two axes. The X-axis represents site location along the roadway corridor and the Y-axis represents the amount of earth, either in excess (positive) or needed (negative). ## Examples ### Example 1: Computing Volume Problem: A roadway is to be designed on a level terrain. This roadway is 150 meters in length. Four cross sections have been selected, one at 0 meters, one at 50 meters, one at 100 meters, and one at 150 meters. The cross sections, respectively, have areas of 40 square meters, 42 square meters, 19 square meters, and 34 square meters. What is the volume of earthwork needed along this road? Solution: Three sections exist between all of these cross sections. Since none of the sections end with an area of zero, the average end area method can be used. The volumes can be computed for respective sections and then summed together. Section between 0 and 50 meters: $V = \frac{{A_1 + A_2}}{{2}}L = \frac{{40 + 42}}{{2}}50 = 2050\ cubic-meters\,\!$ Section between 50 and 100 meters: $V = \frac{{A_1 + A_2}}{{2}}L = \frac{{42 + 19}}{{2}}50 = 1525\ cubic-meters\,\!$ Section between 100 and 150 meters: $V = \frac{{A_1 + A_2}}{{2}}L = \frac{{19 + 34}}{{2}}50 = 1325\ cubic-meters\,\!$ Total Volume is found to be: $2050 + 1525 + 1325 = 4900\ cubic-meters\,\!$ ### Example 2: Mass Balance Problem: Given the following cut/fill profile for each meter along a 10-meter strip of road built on very, very hilly terrain, estimate the amount of dirt left over or needed for the project. • 0 Meters: 3 meters of fill • 1 Meter: 1 meter of fill • 2 Meters: 2 meters of cut • 3 Meters: 5 meters of cut • 4 Meters: 7 meters of cut • 5 Meters: 8 meters of cut • 6 Meters: 2 meters of cut • 7 Meters: 1 meter of fill • 8 Meters: 3 meters of fill • 9 Meters: 6 meters of fill • 10 Meters: 7 meters of fill Solution: If 'cut' is considered an excess of available earth and 'fill' is considered a reduction of available earth, the problem becomes one of simple addition and subtraction. $[(-3) + (-1) + 2 + 5 + 7 + 8 + 2 + (-1) + (-3) + (-6) + (-7)] * 1 m^2 = 3\ cubic-meters\,\!$ 3 cubic-meters of dirt remain in excess. ## Thought Question Problem If it is found that the mass balance is indeed balanced (end value of zero), does that automatically mean that no dirt transport, either out of or into the site, is needed? Solution No. Any soil scientist will eagerly state that dirt type can change with location quite quickly, depending on the region. So, if half a highway cuts from the earth and the other half needs fill, the dirt pulled from the first half cannot be simply dumped into the second half, even if mathematically it balances. 
If the soil types are different, the exact volumes needed may be different, as different soil types have different properties (settling, water storage, etc.). In the worst case, not consulting a soil scientist could result in your road being washed out!
## Variables
• $A$ - Area of Cross-Section
• $n$ - Number of Points on Cross Section (Note: n+1 = 1 and 1-1 = n, for indexing)
• $X$ - X-Coordinate
• $Y$ - Y-Coordinate
• $V$ - Volume
• $A_1$ - Cross section area of first side
• $A_2$ - Cross section area of second side
• $L$ - Length between the two areas
• $V_p$ - Volume given by the prismoidal formula
• $A_m$ - Area of a plane surface midway between the two cross sections
## Key Terms
• Cut
• Fill
• Mass Balance
• Area
• Volume
• Earthwork
• Prismoidal Volume
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 29, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8374074697494507, "perplexity": 1927.2154653199843}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644064362.36/warc/CC-MAIN-20150827025424-00247-ip-10-171-96-226.ec2.internal.warc.gz"}
http://math.stackexchange.com/questions/93721/best-way-to-find-magnitude-and-phase-of-a-specific-frequency-in-an-empirical-tim
# Best way to find magnitude and phase of a specific frequency in an empirical time series…
I've a discrete, univariate time series, and I'm interested in investigating a specific frequency component. Assume I'm interested in a frequency with a cycle-time of $f$ samples - and I need to get the best understanding I can of its magnitude and phase at the instant of the most recent sample. As I see it, I need to consider $n \cdot f$ samples - where $n$ is a positive integer... and there's a trade-off: with small $n$, my estimation of the $f$-frequency component will be most adversely affected by noise (other frequencies - higher and lower); with large $n$, the effects of this noise are reduced but I must settle for the average phase over $n$ cycles - which won't account for changes in phase and magnitude over the $n \cdot f$ duration. There is, of course, another aspect - if I consider a sliding window of $n \cdot f$ samples - I should expect the phase to advance by ${2\pi\over f}$ with each new sample - and any change in magnitude to be proportionally small... assuming my analysis of the frequency-$f$ component of the signal is meaningful. My first idea for establishing the phase and magnitude was to do a bunch of FFTs and discard all but the frequency of interest in each. This, however, seems somewhat wasteful. Are there any well-known techniques for addressing this sort of problem? Should I just run with FFTs - or are there more efficient approaches I might adopt?
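One way to make the "discard all but the frequency of interest" idea concrete without running a full FFT is to correlate the window directly with a complex exponential at that single frequency. The sketch below is my own illustration, not part of the question; it assumes a window of n·f samples and a cycle length of f samples, and the example signal is invented.

```python
import numpy as np

def single_bin_dft(window, f):
    """Magnitude and phase of the component with period `f` samples in `window`.

    Equivalent to picking one bin of a DFT whose length is a multiple of f.
    """
    n_samples = len(window)
    t = np.arange(n_samples)
    basis = np.exp(-2j * np.pi * t / f)          # complex exponential at 1/f cycles per sample
    coeff = np.dot(window, basis) / n_samples    # correlation with the basis
    return np.abs(coeff), np.angle(coeff)

# Made-up example: a noisy sinusoid with a 20-sample period, window of 8 cycles.
f, n = 20, 8
t = np.arange(n * f)
x = np.cos(2 * np.pi * t / f + 0.7) + 0.3 * np.random.randn(t.size)
print(single_bin_dft(x, f))   # magnitude ~0.5, phase ~0.7 (cosine convention)
```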
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8393364548683167, "perplexity": 344.8210587582842}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375097512.42/warc/CC-MAIN-20150627031817-00168-ip-10-179-60-89.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/converging-diverging-nozzle.741632/
# Homework Help: Converging-Diverging Nozzle 1. Mar 5, 2014 ### Maylis 1. The problem statement, all variables and given/known data 2. Air flows from a large supply tank in which the pressure is 147 psig and the temperature is 160 F through a converging - diverging nozzle. The velocity at the throat of the nozzle is sonic. A normal shock occurs at a point in the diverging section of the nozzle where the cross-sectional area is 1.44 times the cross-sectional area of the throat. a) Compute the Mach number, pressure, and temperature just upstream of the shock. b) Compute the Mach number, pressure, and temperature just downstream of the shock. 2. Relevant equations 3. The attempt at a solution I thought the slide 9 would be the equation to use for this problem, but if S* is what they mean, that is just the cross sectional area of the throat, right? What is S supposed to be without the Asterisk? I am not sure how I could determine the mach numbers up and downstream of the shock using any of the equations give in the lecture slides. is Po, To, etc just at the ''back pressure'' (back pressure is the tank pressure, right?) 2. Mar 5, 2014 ### SteamKing Staff Emeritus We have no idea what 'slide 9' refers to. 3. Mar 5, 2014 ### Maylis My apologies, I intended to post the slides File size: 380.2 KB Views: 137 4. Mar 9, 2014 ### Maylis I am just using the equation on the 9th slide. Honestly, I can't really say I understand what the equation means. I know S* is the cross sectional area where the velocity is sonic (the throat), but my best guess is that S is an arbitrary cross section anywhere along the pipe. However, I wonder if it is only for the divergent part of the nozzle? slide 10 summarizes all the equations, and of course T, P, ρ, etc are probably corresponding with S? It explicitly states T0, P0, etc. are at the reservoir, but doesn't mention what T, P, etc are. I am uncertain of what it meant by ''just upstream of the shock'' or ''just downsteam'' and how I am supposed to calculate these. Just upstream can be anything upsteam, what are they asking for more precisely? This table seems to agree with my calculation of the mach number being 1.80 http://www.cchem.berkeley.edu/cbe150a/isentropic_flow.pdf #### Attached Files: • ###### 5.2 attempt 1.pdf File size: 136.1 KB Views: 66 Last edited: Mar 9, 2014
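As a cross-check on the numbers discussed in this thread, here is a small Python sketch (not from the course slides) that solves the isentropic area–Mach relation for the supersonic root at A/A* = 1.44 and then applies the standard normal-shock relations, assuming air with γ = 1.4.

```python
from scipy.optimize import brentq

GAMMA = 1.4

def area_ratio(mach, gamma=GAMMA):
    """Isentropic A/A* as a function of Mach number."""
    term = (2.0 / (gamma + 1.0)) * (1.0 + (gamma - 1.0) / 2.0 * mach**2)
    return term ** ((gamma + 1.0) / (2.0 * (gamma - 1.0))) / mach

# Supersonic root of A/A* = 1.44 (search between M = 1 and M = 5).
m1 = brentq(lambda m: area_ratio(m) - 1.44, 1.0001, 5.0)

# Standard normal-shock relations just downstream of the shock.
m2 = ((1 + (GAMMA - 1) / 2 * m1**2) / (GAMMA * m1**2 - (GAMMA - 1) / 2)) ** 0.5
p2_over_p1 = 1 + 2 * GAMMA / (GAMMA + 1) * (m1**2 - 1)
t2_over_t1 = p2_over_p1 * (2 + (GAMMA - 1) * m1**2) / ((GAMMA + 1) * m1**2)

print(f"M1 = {m1:.2f}, M2 = {m2:.2f}, p2/p1 = {p2_over_p1:.2f}, T2/T1 = {t2_over_t1:.2f}")
# M1 comes out close to 1.80, consistent with the isentropic-flow table linked above.
```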
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9257803559303284, "perplexity": 1201.2411911763252}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794865691.44/warc/CC-MAIN-20180523161206-20180523181206-00591.warc.gz"}
http://chemistry.stackexchange.com/questions/14490/neutralization-reaction
# Neutralization reaction Question) Complete and balance each of the following molecular equations (in aqueous solution); include phase labels. Then, for each, write the net ionic equation. (A) $\ce{NH_3 + HNO_3 ->}$ My Attempt: I thought that the acid $\ce{HNO3}$ would just give its hydrogen to $\ce{NH3}$ and make the resulting reaction $\ce{NH_3 + HNO_3 -> HNH_3 + NO_3}$. However, the correct answer is $\ce{NH_3 + HNO_3 -> NH_4NO_3}.$ 1. An acid-base reaction is not the exchange of a hydrogen atom $\ce{H}$. It is the exchange of a hydrogen ion (or proton) $\ce{H+}$. Thus your answer should be: $$\ce{NH3 + HNO3 -> NH4+ + NO3-}$$ 2. The given answer combines the two ions produced into a single compound, which is reasonable given that you are not told if the reaction occurs in aqueous solution. $$\ce{NH4+ + NO3- -> NH4NO3}$$
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.918634295463562, "perplexity": 810.1347082850731}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1414119651455.41/warc/CC-MAIN-20141024030051-00151-ip-10-16-133-185.ec2.internal.warc.gz"}
https://www.mankier.com/3/gluPartialDisk.3G
# gluPartialDisk.3G - Man Page
draw an arc of a disk
## C Specification
```
void gluPartialDisk( GLUquadric* quad,
                     GLdouble inner,
                     GLdouble outer,
                     GLint slices,
                     GLint loops,
                     GLdouble start,
                     GLdouble sweep );
```
## Parameters
quad Specifies the quadrics object (created with gluNewQuadric).
inner Specifies the inner radius of the partial disk (can be 0).
outer Specifies the outer radius of the partial disk.
slices Specifies the number of subdivisions around the z axis.
loops Specifies the number of concentric rings about the origin into which the partial disk is subdivided.
start Specifies the starting angle, in degrees, of the disk portion.
sweep Specifies the sweep angle, in degrees, of the disk portion.
## Description
gluPartialDisk renders a partial disk on the $z = 0$ plane. A partial disk is similar to a full disk, except that only the subset of the disk from start through start + sweep is included (where 0 degrees is along the +y axis, 90 degrees along the +x axis, 180 degrees along the -y axis, and 270 degrees along the -x axis). The partial disk has a radius of outer, and contains a concentric circular hole with a radius of inner. If inner is 0, then no hole is generated. The partial disk is subdivided around the z axis into slices (like pizza slices), and also about the z axis into rings (as specified by slices and loops, respectively). With respect to orientation, the +z side of the partial disk is considered to be outside (see gluQuadricOrientation). This means that if the orientation is set to GLU_OUTSIDE, then any normals generated point along the +z axis. Otherwise, they point along the -z axis. If texturing is turned on (with gluQuadricTexture), texture coordinates are generated linearly such that where $r = \mathrm{outer}$, the value at (r, 0, 0) is (1.0, 0.5), at (0, r, 0) it is (0.5, 1.0), at (-r, 0, 0) it is (0.0, 0.5), and at (0, -r, 0) it is (0.5, 0.0).
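A minimal usage sketch is shown below. It is not part of the man page; it assumes the PyOpenGL bindings (which mirror the C API above) and a GLUT window for the GL context, and the radii, slice counts, and angles are arbitrary example values.

```python
import sys
from OpenGL.GL import glClear, GL_COLOR_BUFFER_BIT
from OpenGL.GLU import gluNewQuadric, gluQuadricOrientation, gluPartialDisk, GLU_OUTSIDE
from OpenGL.GLUT import (glutInit, glutInitDisplayMode, glutInitWindowSize,
                         glutCreateWindow, glutDisplayFunc, glutMainLoop,
                         glutSwapBuffers, GLUT_DOUBLE, GLUT_RGB)

def display():
    glClear(GL_COLOR_BUFFER_BIT)
    quad = gluNewQuadric()
    gluQuadricOrientation(quad, GLU_OUTSIDE)      # normals along +z
    # 270-degree arc starting at the +y axis, with an inner hole of radius 0.2.
    gluPartialDisk(quad, 0.2, 0.8, 32, 4, 0.0, 270.0)
    glutSwapBuffers()

glutInit(sys.argv)
glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB)
glutInitWindowSize(400, 400)
glutCreateWindow(b"gluPartialDisk example")
glutDisplayFunc(display)
glutMainLoop()
```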
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 2, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8667274713516235, "perplexity": 2595.3958315469613}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057580.39/warc/CC-MAIN-20210924201616-20210924231616-00538.warc.gz"}
https://xianblog.wordpress.com/2014/12/01/reflections-on-the-probability-space-induced-by-moment-conditions-with-implications-for-bayesian-inference-discussion/?shared=email&msg=fail
## reflections on the probability space induced by moment conditions with implications for Bayesian Inference [discussion]
[Following my earlier reflections on Ron Gallant’s paper, here is a more condensed set of questions towards my discussion of next Friday.]
“If one specifies a set of moment functions collected together into a vector m(x,θ) of dimension M, regards θ as random and asserts that some transformation Z(x,θ) has distribution ψ then what is required to use this information and then possibly a prior to make valid inference?” (p.4)
The central question in the paper is whether or not, given a set of moment equations $\mathbb{E}[m(X_1,\ldots,X_n,\theta)]=0$ (where both the Xi‘s and θ are random), one can derive a likelihood function and a prior distribution compatible with those. It sounds to me like a highly complex question since it implies that the integral equation $\int_{\Theta\times\mathcal{X}^n} m(x_1,\ldots,x_n,\theta)\,\pi(\theta)f(x_1|\theta)\cdots f(x_n|\theta) \text{d}\theta\text{d}x_1\cdots\text{d}x_n=0$ must have a solution for all n’s. A related question, also remanent with fiducial distributions, is how on Earth (or Middle Earth) the concept of a random theta could arise outside Bayesian analysis. And another one is how the equations could make sense outside the existence of the pair (prior, likelihood). A question that may exhibit my ignorance of structural models. But which may also relate to the inconsistency of Zellner’s (1996) Bayesian method of moments as exposed by Geisser and Seidenfeld (1999).
For instance, the paper starts (why?) with the Fisherian example of the t distribution of $Z(x,\theta) = \frac{\bar{x}_n-\theta}{s/\sqrt{n}}$ which truly is a t variable when θ is fixed at the true mean value. Now, if we assume that the joint distribution of the Xi‘s and θ is such that this projection is a t variable, is there any other case than the Dirac mass on θ? For all (large enough) sample sizes n? I cannot tell and the paper does not bring [me] an answer either.
When I look at the analysis made in the abstraction part of the paper, I am puzzled by the starting point (17), where $p(x|\theta) = \psi(Z(x,\theta))$ since the lhs and rhs operate on different spaces. In Fisher’s example, x is an n-dimensional vector, while Z is unidimensional. If I blindly apply the formula to this example, the t density does not integrate against the Lebesgue measure in the n-dimensional Euclidean space… If a change of measure allows for this representation, I do not see so much appeal in using this new measure and anyway wonder in which sense this defines a likelihood function, i.e. the product of n densities of the Xi‘s conditional on θ. To me this is the central issue, which remains unsolved by the paper.
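As a quick numerical illustration of the Fisherian example above (my own sketch, not part of the post): when θ is fixed at the true mean, the statistic Z(x, θ) does follow a Student t distribution with n−1 degrees of freedom, which a short simulation confirms; all parameter values below are arbitrary.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, theta, sigma, reps = 10, 2.0, 3.0, 100_000

x = rng.normal(theta, sigma, size=(reps, n))
z = (x.mean(axis=1) - theta) / (x.std(axis=1, ddof=1) / np.sqrt(n))

# Compare empirical quantiles of Z with the quantiles of t(n-1).
for q in (0.05, 0.5, 0.95):
    print(q, np.quantile(z, q), stats.t(df=n - 1).ppf(q))
```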
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 4, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9463316202163696, "perplexity": 736.8348942511639}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593657176116.96/warc/CC-MAIN-20200715230447-20200716020447-00465.warc.gz"}
https://deepai.org/publication/taming-the-long-tail-of-deep-probabilistic-forecasting
# Taming the Long Tail of Deep Probabilistic Forecasting

Deep probabilistic forecasting is gaining attention in numerous applications ranging from weather prognosis, through electricity consumption estimation, to autonomous vehicle trajectory prediction. However, existing approaches focus on improvements on the most common scenarios without addressing the performance on rare and difficult cases. In this work, we identify a long tail behavior in the performance of state-of-the-art deep learning methods on probabilistic forecasting. We present two moment-based tailedness measurement concepts to improve performance on the difficult tail examples: Pareto Loss and Kurtosis Loss. Kurtosis loss is a symmetric measurement of tailedness, based on the fourth moment about the mean of the loss distribution. Pareto loss is an asymmetric measurement of right-tailedness, modeling the loss using a generalized Pareto distribution (GPD). We demonstrate the performance of our approach on several real-world datasets including time series and spatiotemporal trajectories, achieving significant improvements on the tail examples.

## 1 Introduction

Forecasting is one of the most fundamental problems in time series and spatiotemporal data analysis, with broad applications in energy, finance, and transportation. Deep learning models [li2019enhancing, salinas2020deepar, rasul2021autoregressive] have emerged as state-of-the-art approaches for forecasting rich time series and spatiotemporal data. In several forecasting competitions, such as the M5 forecasting competition [makridakis2020m5], the Argoverse motion forecasting challenge [chang2019argoverse], and the IARAI Traffic4cast contest [kreil2020surprising], almost all the winning solutions are based on deep neural networks. Despite the encouraging progress, we discover that the forecasting performance of deep learning models has long-tail behavior. That means a significant number of samples are very difficult to forecast. Existing works often measure the forecasting performance by averaging across test samples. However, such an average performance measured by root mean square error (RMSE) or mean absolute error (MAE) can be misleading. A low RMSE or MAE may indicate good averaged performance, but it does not prevent the model from behaving disastrously in difficult scenarios.
From a practical perspective, the long-tail behavior in forecasting performance can be alarming. Figure 1 visualizes examples of long-tail behavior for a motion forecasting task. In motion forecasting, the long-tail would correspond to rare events in driving such as turning maneuver and sudden stops. Failure to forecast accurately in these scenarios would pose paramount safety risks in route planning. In electricity forecasting, the tail behavior would occur during short circuits, power outage, grid failures, or sudden behavior changes. Merely focusing on the average performance would ignore the electric load anomalies, significantly increasing the maintenance and operational cost. Long-tailed learning is an area heavily studied in classification settings focusing on class imbalance. We refer readers to Table 2 in [menon2020long] and the survey paper by [zhang2021deep] for a complete review. Most common approaches to address the long-tail problem include post-hoc normalization, data resampling, loss engineering, and learning class-agnostic representations. However, long-tail learning methods in classification are not directly translatable to forecasting as we do not have a pre-defined class. A recent work by [makansi2021exposing] propose to use Kalman filter to gauge the difficulty of different forecasting examples but such difficulties may not directly relate to deep neural networks used for the actual forecasting task. In this paper, we address the long-tail behavior of prediction error for deep probabilistic forecasting. We present two moment-based loss modifications: Kurtosis loss and Pareto loss. Kurtosis is a well studied symmetric measure of tailedness as a scaled fourth moment of the distribution. Pareto loss uses Generalized Pareto Distribution (GPD) to fit the long-tailed error distribution and can also be described as a weighted summation of shifted moments. We investigate these tailedness measurements as regularization and loss weighting approaches for probabilistic forecasting tasks. We demonstrate significantly improved tail performance compared to the base model and the baselines, while achieving better average performance in most settings. In summary, our contributions include • We discover long-tail behavior in the forecasting performance of deep probabilistic models. • We investigate principled approaches to address long-tail behavior and propose two novel methods: Pareto loss and Kurtosis loss. • We significantly improve the tail performance on four forecasting tasks including two time series and two spatiotemporal trajectory forecasting datasets. ## 2 Related work Deep probabilistic forecasting. There is a flurry of work on using deep neural networks for probabilistic forecasting. For time series forecasting, a common practice is to combine classic time series models with deep learning, resulting in DeepAR [salinas2020deepar], Deep State Space [rangapuram2018deep], Deep Factors [wang2019deep] and normalizing Kalman Filter [de2020normalizing]. Others introduce normalizing flow [rasul2020multivariate], denoising diffusion [rasul2021autoregressive] and particle filter [pal2021rnn] to deep learning. For trajectory forecasting, the majority of works focus on deterministic prediction. A few recent works propose to approximate the conditional distribution of future trajectories given the past with explicit parameterization [mfp, luo2020probabilistic], CVAE [CVAE, desire, trajectron++] or implicit models such as GAN [socialgan, liu2019naomi]. 
Nevertheless, most existing works focus on average performance, and the long-tail issue is largely overlooked in the community.

Long-tailed learning. The main efforts for addressing the long-tail issue in learning revolve around reweighing, resampling, loss function engineering, and two-stage training, but mostly for classification. Rebalancing during training comes either in the form of synthetic minority oversampling [chawla2002smote], oversampling with adversarial examples [Kozerawski_2020_ACCV], inverse class frequency balancing [liu2019large], balancing using the effective number of samples [cui2019class], or balance-oriented mixup augmentation [xu2021towards]. Another direction involves post-processing, for example normalized calibration [pan2021model, menon2020long]. An important direction is loss modification approaches such as Focal Loss [lin2017focal], Shrinkage Loss [lu2018deep], and Balanced Meta-Softmax [ren2020balanced]. Others utilize two-stage training [liu2019large, cao2019learning] or separate expert networks [zhou2020bbn, li2020overcoming, wang2020long]. We refer the readers to [zhang2021deep] for an extensive survey. [tang2020long] indicated that SGD momentum can contribute to the aggravation of the long-tail problem and suggested de-confounded training to mitigate its effects. [feldman2020does, feldman2020neural] performed theoretical analysis and suggested label memorization in long-tail distributions as a necessity for the network to generalize. A few methods were developed for imbalanced regression. Many approaches revolve around modifications of SMOTE, such as SMOTER [torgo2013smote], which adapts it to regression, SMOGN [branco2017smogn], which augments it with Gaussian noise, or the work of [ribeiro2020imbalanced] extending it to the prediction of extremely rare values. [steininger2021density] proposed DenseWeight, a method based on kernel density estimation for better assessment of the relevance function for sample reweighing. [yang2021delving] proposed distribution smoothing over the label (LDS) and feature space (FDS) for imbalanced regression. A concurrent work is [makansi2021exposing], where they noticed the long-tail error distribution for trajectory prediction. They used Kalman filter [kalman1960new] performance as a difficulty measure and utilized contrastive learning to alleviate the tail problem. However, the error tail of the Kalman filter may differ from that of deep learning models, which we elaborate on in later sections.

## 3 Methodology

We first identify the long-tail phenomena in probabilistic forecasting. Then, we propose two related strategies based on Pareto loss and Kurtosis loss to mitigate the tail issue.

### 3.1 Long-tail in probabilistic forecasting

Given input $x_t$ and output $y_t$ at time $t$, the probabilistic forecasting task aims to predict the conditional distribution of future states given current and past observations:

$$p(y_{t+1},\ldots,y_{t+h} \mid x_{t-k},\ldots,x_{t}) \tag{1}$$

where $k$ is the length of the history and $h$ is the prediction horizon. We denote the maximum-likelihood prediction of the probabilistic forecasting model as $\hat{y}$.

A long tail distribution of data can be seen in numerous real world datasets. This is evident for the four benchmark forecasting datasets (Electricity [Dua:2019], Traffic [Dua:2019], ETH-UCY [pellegrini2009you, lerner2007crowds], and nuScenes [caesar2020nuscenes]) studied in this work. We can see the distribution of ground truth values ($y$) for all of them in Figure 2.
We use log-log plots to increase the visibility of the long tail behavior present in the data: smaller values (constituting the minority on a linear scale) occur very frequently, while the majority of values are very rare (creating the tail). In addition to the long tail data distribution, we also identify a long tail distribution of forecasting error from deep learning models (such as DeepAR [salinas2020deepar], Trajectron++ [salzmann2020trajectron++], and Trajectron++EWTA [makansi2019overcoming]), as seen in Appendix G. We hypothesize that long tail behavior in the forecasting error distribution originates from the long tail behavior in the data distribution, as well as from the nature of gradient-based deep learning. Therefore, modifying the loss function to account for the shape of the distribution should lead to better tail performance. Next, we present two loss functions based on the moments of the error distribution.

### 3.2 Pareto Loss

Long tail distributions naturally lend themselves to analysis using Extreme Value Theory (EVT). [mcneil1997estimating] shows that long tail behavior can be modeled with a generalized Pareto distribution (GPD). The probability density function (pdf) of the GPD is

$$f_{(\xi,\eta,\mu)}(a)=\frac{1}{\eta}\left(1+\xi\,\frac{a-\mu}{\eta}\right)^{-\left(\frac{1}{\xi}+1\right)} \tag{2}$$

where the parameters are location ($\mu$), scale ($\eta$) and shape ($\xi$). The pdf of the GPD is defined for $a \geq \mu$ when $\xi \geq 0$ and for $\mu \leq a \leq \mu - \eta/\xi$ when $\xi < 0$. $\mu$ can be set to 0 without loss of generality, as it only represents a translation along the x axis. We can also drop the scaling term $1/\eta$, as the pdf will be scaled by a hyperparameter. The simplified pdf is

$$f_{(\xi,\eta)}(a)=\left(1+\frac{\xi a}{\eta}\right)^{-\left(\frac{1}{\xi}+1\right)} \tag{3}$$

The high-level idea of Pareto loss is to fit a GPD to the loss distribution in order to reprioritize the learning of easy and difficult (tail) examples. Let the loss function used by a given machine learning model be denoted as $l$. In probabilistic forecasting, a commonly used loss is the Negative Log Likelihood (NLL), $l = -\log \hat{p}(y \mid x)$, where $(x, y)$ is the training example and $\hat{p}$ the predicted distribution (the model prediction). As the pdf in Eq. (3) only allows non-negative input, the loss has to be lower-bounded. We therefore use an auxiliary loss $\hat{l}$ to fit the GPD. For NLL, which can be unbounded for continuous distributions, the auxiliary loss can simply be the Mean Absolute Error (MAE): $\hat{l} = |y - \hat{y}|$. There are two main classes of methods for modifying loss functions to improve tail performance: regularization [ren2020balanced, makansi2021exposing] and re-weighting [lin2017focal, lu2018deep, yang2021delving]. Both classes are characterized by different behavior on tail data [ren2020balanced]. Inspired by these, we propose two variations of the Pareto Loss using the distribution fitted on $\hat{l}$: Pareto Loss Margin (PLM) and Pareto Loss Weighted (PLW). PLM is based on the principle of margin-based regularization [ren2020balanced, liu2016large], which assigns larger penalties (margins) to harder examples. For a given hyperparameter $\lambda$, PLM is defined as

$$l_{plm}=l+\lambda \cdot r_{plm}(\hat{l}) \tag{4}$$

where

$$r_{plm}(\hat{l})=1-f_{(\xi,\eta)}(\hat{l}) \tag{5}$$

which uses the GPD to calculate the additive margin. An alternative is to reweigh the loss terms using the loss distribution. For a given hyperparameter $\lambda$, PLW is defined as

$$l_{plw}=w_{plw}(\hat{l}) \cdot l \tag{6}$$

where

$$w_{plw}(\hat{l})=1-\lambda \cdot f_{(\xi,\eta)}(\hat{l}) \tag{7}$$

which uses the GPD to reweigh the loss of each sample.
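A minimal NumPy sketch of the two Pareto variants is given below. It is an illustration of Eqs. (3)–(7), not the authors' code; the GPD parameters ξ and η are assumed to have been fitted to the auxiliary-loss values beforehand (e.g., with scipy.stats.genpareto.fit), and all numbers in the example are invented.

```python
import numpy as np

def gpd_pdf(a, xi, eta):
    """Simplified (unscaled) GPD pdf of Eq. (3); `a` must be non-negative."""
    return (1.0 + xi * a / eta) ** (-(1.0 / xi + 1.0))

def pareto_loss_margin(loss, aux_loss, xi, eta, lam):
    """PLM, Eqs. (4)-(5): add a larger margin for harder (right-tail) examples."""
    margin = 1.0 - gpd_pdf(aux_loss, xi, eta)
    return loss + lam * margin

def pareto_loss_weighted(loss, aux_loss, xi, eta, lam):
    """PLW, Eqs. (6)-(7): down-weight easy examples, keep weight near 1 for tail ones."""
    weight = 1.0 - lam * gpd_pdf(aux_loss, xi, eta)
    return weight * loss

# Invented batch: per-example NLL loss and MAE auxiliary loss.
nll = np.array([0.3, 0.5, 2.0, 6.0])
mae = np.array([0.1, 0.2, 1.5, 4.0])
xi, eta, lam = 0.4, 0.8, 1.0          # assumed pre-fitted GPD shape/scale and weight
print(pareto_loss_margin(nll, mae, xi, eta, lam).mean())
print(pareto_loss_weighted(nll, mae, xi, eta, lam).mean())
```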
### 3.3 Kurtosis Loss

Kurtosis measures the tailedness of a distribution as the scaled fourth moment about the mean. To increase the emphasis on tail examples, we use this measure to propose kurtosis loss. For a given hyperparameter $\lambda$, and using the same notation as Sec. 3.2, kurtosis loss is defined as

$$l_{kurt}=l+\lambda \cdot r_{kurt}(\hat{l}) \tag{8}$$

where $r_{kurt}(\hat{l})$ is the contribution of an example to the kurtosis of a batch,

$$r_{kurt}(\hat{l})=\left(\frac{\hat{l}-\mu_{\hat{l}}}{\sigma_{\hat{l}}}\right)^4 \tag{9}$$

and $\mu_{\hat{l}}$ and $\sigma_{\hat{l}}$ are the mean and standard deviation of the auxiliary loss ($\hat{l}$) values for a batch of examples. We propose to use the auxiliary loss distribution to compute kurtosis, as performance metrics in forecasting tasks frequently involve versions of the L1 or L2 distance, such as RMSE, MAE, or ADE. The goal is to decrease the long tail for these metrics, which might not correlate well with the base loss $l$.

Kurtosis loss and Pareto loss are related approaches to handling long tail behavior: Pareto loss is a weighted sum of moments of the auxiliary loss about zero, while kurtosis loss is its fourth moment about the mean. Let $b=\xi a/\eta$ and $c=-\left(\frac{1}{\xi}+1\right)$; then the Taylor expansion of the GPD pdf from Eq. (3) is

$$(1+b)^c=1+cb+\frac{c(c-1)}{2!}b^2+\frac{c(c-1)(c-2)}{3!}b^3+\cdots \tag{10}$$

For $c<0$, or equivalently $\xi>0$ or $\xi<-1$, the coefficients are positive for even moments and negative for odd moments. Even moments are always symmetric and positive, while odd moments are positive only for right-tailed distributions. Since we use the negative of the pdf, this yields an asymmetric measure of the right-tailedness of a value in the distribution. Kurtosis loss uses the fourth moment about the distribution mean. This is a symmetric and positive measure, but in the context of right-tailed distributions, kurtosis serves as a good measure of the long-tailedness of the distribution. The GPD and kurtosis are visualised in Appendix F.
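The kurtosis regularizer of Eqs. (8)–(9) can be sketched in a few lines of NumPy (again an illustration, not the authors' implementation; the batch values are invented):

```python
import numpy as np

def kurtosis_loss(loss, aux_loss, lam, eps=1e-8):
    """Eqs. (8)-(9): add each example's contribution to the batch kurtosis of aux_loss."""
    mu = aux_loss.mean()
    sigma = aux_loss.std() + eps               # batch statistics of the auxiliary loss
    r_kurt = ((aux_loss - mu) / sigma) ** 4    # fourth power of the standardized value
    return loss + lam * r_kurt

# Invented batch: the last example is a tail case and receives the largest extra penalty.
nll = np.array([0.3, 0.5, 2.0, 6.0])
mae = np.array([0.1, 0.2, 1.5, 4.0])
print(kurtosis_loss(nll, mae, lam=0.1))
```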
## 4 Experiments

We evaluate our methods on two probabilistic forecasting tasks: time series forecasting and trajectory prediction.

### 4.1 Setup

#### Datasets.

For time series forecasting, we use the electricity and traffic datasets from the UCI ML repository [Dua:2019], used in [salinas2020deepar], as benchmarks. We also generate three synthetic 1D time series datasets, Sine, Gaussian, and Pareto, to further our understanding of long tail behavior. For trajectory prediction, we use two benchmark datasets: a pedestrian trajectory dataset, ETH-UCY (a combination of the ETH [pellegrini2009you] and UCY [lerner2007crowds] datasets), and a vehicle trajectory dataset, nuScenes [caesar2020nuscenes]. Details regarding the datasets are available in Appendix A.

#### Baselines.

We compare with the following baselines, representing the SoTA in long tail mitigation for different tasks:

• Contrastive Loss: [makansi2021exposing] uses a contrastive loss as a regularizer to group examples together based on Kalman filter prediction errors.
• Focal Loss: [lin2017focal] uses the L1 loss to reweigh loss terms.
• Shrinkage Loss: [lu2018deep] uses a sigmoid-based function to reweigh loss terms.
• Label Distribution Smoothing (LDS): [yang2021delving] uses a symmetric kernel to smooth the label distribution and uses its inverse to reweigh loss terms.

Focal Loss, Shrinkage Loss, and LDS were originally proposed for classification and/or regression and required adaptation in order to be applicable to the forecasting task. For details on baseline adaptation, please see Appendix B.

#### Evaluation Metrics.

We use two common metrics for the evaluation of trajectory prediction models: Average Displacement Error (ADE), the average L2 distance between the predicted trajectory and the ground truth, and Final Displacement Error (FDE), the L2 distance at the final time step. For time series forecasting, we use Normalized Deviation (ND) and Normalized Root Mean Squared Error (NRMSE). Apart from these average performance metrics, we introduce metrics to capture performance on the tail. To measure performance at the tail of the distribution, we propose to adapt the Value-at-Risk metric (VaR, Eq. (11)):

$$\mathrm{VaR}_{\alpha}(E)=\inf\{e\in E: P(E\geq e)\leq 1-\alpha\} \qquad (11)$$

VaR at level $\alpha$ is the smallest error $e$ such that the probability of observing an error larger than $e$ is at most $1-\alpha$, where $E$ is the error distribution. This evaluates to the $\alpha$-quantile of the error distribution. We propose to measure VaR at three levels: $\alpha = 0.95$, $0.98$, and $0.99$ (VaR$_{95}$, VaR$_{98}$, VaR$_{99}$). In addition, we use skew, kurtosis, and max error to further assess tail performance. Skew and kurtosis as metrics are meaningful only when considered in conjunction with the mean: a distribution with a higher mean and lower skew and kurtosis does not necessarily have a less severe tail.
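The tail metrics above can be computed directly from an array of per-sample test errors. The sketch below uses empirical quantiles for VaR and SciPy's moment estimators; this is one reasonable reading of the metrics, not the authors' exact evaluation code.

```python
import numpy as np
from scipy.stats import skew, kurtosis

def tail_metrics(errors):
    """Average and tail metrics from Sec. 4.1 for a 1D array of per-sample
    errors (e.g. ADE/FDE values). VaR_alpha (Eq. 11) is estimated as the
    alpha-quantile of the empirical error distribution."""
    errors = np.asarray(errors, dtype=float)
    return {
        "mean":     errors.mean(),
        "VaR95":    np.quantile(errors, 0.95),
        "VaR98":    np.quantile(errors, 0.98),
        "VaR99":    np.quantile(errors, 0.99),
        "skew":     skew(errors),
        "kurtosis": kurtosis(errors),   # SciPy default is excess (Fisher) kurtosis
        "max":      errors.max(),
    }
```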
### 4.2 Synthetic Dataset Experiments

In order to better understand the long tail error distribution, we perform experiments on three synthetic datasets. The task is to forecast 8 steps ahead given a history of 8 time steps. We use AutoRegression (AR) and DeepAR [salinas2020deepar] as forecasting models for this task. The top row in Figure 3 shows that, among the datasets, only Gaussian and Pareto show tail behavior in the data distribution; the Pareto dataset in particular is the only one to display long tail behavior. AR and DeepAR have different error distributions across the datasets. Based on these results, we make the following hypotheses about the sources of long-tailedness.

Source 1: Long Tail in Data. The data distributions of the Gaussian and Pareto datasets have tail behavior similar to the error distributions of both models, AR and DeepAR. This indicates that a long tail in the data is a likely cause of long tail behavior in the error. This connection is also well established, in the form of class imbalance, for classification tasks [van2018inaturalist, liu2019large].

Source 2: Deep Learning Model. The results on the Sine dataset illustrate that even in the absence of a long tail in the data, we can have a long tail in the error distribution. The AR model, however, does not show long tail behavior in its error. This indicates that the observed long tail behavior in the error of DeepAR is model induced. We hypothesize that it is caused by DeepAR overfitting to simpler examples due to the nature of gradient-based learning. Further results and analysis on these datasets can be found in Appendix H. The difference between the AR and DeepAR error distributions also suggests that assuming tail overlap between deep learning and non-deep-learning methods (such as the Kalman filter used by [makansi2021exposing]) might not generalize well.

### 4.3 Real-World Experiments

#### Time Series Forecasting

We present average and tail metrics on ND and NRMSE for the time series forecasting task on the electricity and traffic datasets in Tables 1 and 2, respectively. We use DeepAR [salinas2020deepar], one of the SoTA models in probabilistic time series forecasting, as the base model. The task for both datasets is to use a 1-week history (168 hours) to forecast 1 day (24 hours) ahead at an hourly frequency. DeepAR exhibits long tail behavior in error on both datasets (see Appendix G). The tail of the error distribution is significantly longer for the electricity dataset than for the traffic dataset.

#### Trajectory Forecasting

We present experimental results on the ETH-UCY and nuScenes datasets in Tables 3 and 4, respectively. Following [salzmann2020trajectron++] and [makansi2021exposing], we calculate model performance based on the best out of 20 guesses. On both datasets we compare our approaches with the current SoTA long-tail baseline methods, using Trajectron++EWTA [makansi2021exposing] as the base model due to its SoTA average performance on these datasets. We include the Trajectron++ [salzmann2020trajectron++] results for reference as the previous state-of-the-art method, to give a meaningful sense of the magnitude of performance change obtained by each long tail method. A comparative analysis of tail lengths across datasets shows that the trajectory datasets manifest shorter tails than the 1D time series datasets. Our Pareto approaches work better on longer tails, and for this reason we augment the margin and weight terms of PLM and PLW with an additional Mean Squared Error weight term to internally elongate the tail during the training process.

### 4.4 Results Analysis

As shown in Tables 3 and 4, our proposed approaches, kurtosis loss and PLM, are the only methods that improve on tail metrics across all tasks while maintaining the average performance of the base model. Our tasks differ in representation (1D, 2D), severity of the long tail, base model loss function (GaussNLL, EWTA), and prediction horizon. This indicates that our methods generalize to diverse situations better than existing long-tail methods.

#### Long-tailedness across datasets

Using Eq. (12) as an indicative measure of the long-tailedness of the error distribution, we compare the base model's long-tailedness across the ETH-UCY, nuScenes, electricity, and traffic datasets (details in Appendix E), and we observe connections between the long-tailedness of a dataset and the performance of the different methods.

$$\mathrm{TailLength}=\frac{\mathrm{VaR}_{95}}{\mathrm{Mean}}+\frac{\mathrm{VaR}_{98}}{\mathrm{VaR}_{95}}+\frac{\mathrm{VaR}_{99}}{\mathrm{VaR}_{98}}+\frac{\mathrm{Max}}{\mathrm{VaR}_{99}} \qquad (12)$$
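Under the same assumptions as the metric sketch in Sec. 4.1 above (empirical quantiles as the VaR estimate), Eq. (12) can be evaluated as follows; the helper name and the quantile-based estimate are our choices.

```python
import numpy as np

def tail_length(errors):
    """Indicative long-tailedness measure of Eq. (12) for a 1D error array."""
    e = np.asarray(errors, dtype=float)
    v95, v98, v99 = np.quantile(e, [0.95, 0.98, 0.99])
    return v95 / e.mean() + v98 / v95 + v99 / v98 + e.max() / v99
```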
#### Re-weighting vs Regularization.

As mentioned in Section 3.2, we can categorize loss-modifying methods into two classes: re-weighting (focal loss, shrinkage loss, LDS, and PLW) and regularization (contrastive loss, PLM, and kurtosis loss). Re-weighting multiplies the loss of more difficult examples by higher weights; regularization adds larger regularization terms for examples with higher loss. We notice that re-weighting methods perform worse as the long-tailedness increases. In scenarios with longer tails, the weights of tail samples can be very high, and over-emphasizing tail examples hampers the learning of the other samples. Shrinkage loss, with its bounded weight, limits this issue but fails to show tail improvements in longer-tail scenarios. PLW is the best re-weighting method on most datasets, likely due to its bounded weights; its inconsistency in average performance is likely due to the re-weighting nature of the loss, which limits its applicability. In contrast, regularization methods perform consistently across all tasks, on both the tail and the average metrics. The additive nature of regularization limits the adverse impact tail samples can have on learning. This enables these methods to handle different degrees of long-tailedness without degrading average performance.

#### PLM vs Kurtosis loss.

Kurtosis loss generally performs better on the extreme tail metrics, VaR$_{99}$ and Max. The bi-quadratic behavior of kurtosis puts higher emphasis on far-tail samples. Moreover, the magnitude of kurtosis varies significantly across distributions, making the choice of the hyperparameter $\lambda$ (see Eq. (8)) critical. Further analysis is available in Appendix D. PLM is the most consistent method across all tasks, improving on both tail and average metrics. As noted by [mcneil1997estimating], the GPD is well suited to model long tail error distributions. PLM rewards examples moving away from the tail towards the mean with significantly lower margin values, and its margin values saturate beyond a point in the tail, providing similar improvements for subsequent tail samples. Visualizations of PLM predictions for difficult tail examples can be seen in Fig. 4. Kurtosis is sensitive to extreme samples in the tail, while PLM treats most samples in the tail similarly. This manifests in performance as kurtosis loss performing better on VaR$_{99}$ and Max, and PLM performing better on VaR$_{95}$ and VaR$_{98}$. This provides guidance on the choice of method depending on the objective: kurtosis loss can improve performance in worst-case scenarios more significantly, while PLM provides less drastic changes to the most extreme values but works more effectively throughout the entire distribution.

#### Tail error and long-term forecasting

Based on the trajectory forecasting results in Tables 3 and 4, we can see that the error reduction for tail samples is more visible in FDE than in ADE. This indicates that the magnitude of the observed error increases with the prediction horizon. The error accumulates through the prediction steps, making far-future predictions inherently more difficult. The larger improvements in FDE indicate that both kurtosis and Pareto loss ensure that high tail errors (stemming mostly from large, far-future prediction errors measured by FDE) are decreased. A clear direction of research in the forecasting domain is to increase the prediction horizon while maintaining high-accuracy predictions. As we can see in Fig. 5, the effect of tail examples is more pronounced at longer prediction horizons. Thus, methods addressing tail performance will be necessary to ensure the practical applicability and reliability of future, long-term prediction.

## 5 Conclusion

We address the long-tail problem in deep probabilistic forecasting. We propose Pareto loss (Margin and Weighted) and kurtosis loss, two novel moment-based loss function approaches that increase the emphasis on learning tail examples. We demonstrate their practical effects on two spatiotemporal trajectory datasets and two time series datasets. Our methods achieve significant improvements on tail examples over existing baselines without degrading average performance. Both proposed losses can be integrated with existing approaches in deep probabilistic forecasting to improve their performance on difficult and challenging scenarios. Future directions include more principled ways to tune the hyperparameters, new approaches to mitigate the long tail in long-term forecasting, and application to more complex tasks such as video prediction. Based on our observations, we suggest evaluating additional tail performance metrics, apart from average performance, in machine learning tasks to identify potential long tail issues across different tasks and domains.

## Acknowledgments

This work was supported in part by the U.S. Department of Energy, Office of Science, the U.S. Army Research Office under Grant W911NF-20-1-0334, a Facebook Data Science Award, a Google Faculty Award, and NSF Grant #2037745.
## Appendix A Dataset description

The ETH-UCY dataset consists of five sub-datasets, each recorded from a bird's-eye view: ETH, Hotel, Univ, Zara1, and Zara2. As is common in the literature [makansi2021exposing, salzmann2020trajectron++], we present macro-averaged 5-fold cross-validation results in our experiment section. The nuScenes dataset includes 1000 scenes of 20 seconds each, containing vehicle trajectories recorded in Boston and Singapore.

The electricity dataset contains electricity consumption data for 370 homes over the period from Jan 1st, 2011 to Dec 31st, 2014 at a sampling interval of 15 minutes. We use the data from Jan 1st, 2011 to Aug 31st, 2011 for training and the data from Sep 1st, 2011 to Sep 7th, 2011 for testing. The traffic dataset consists of occupancy values recorded by 963 sensors at a sampling interval of 10 minutes, ranging from Jan 1st, 2008 to Mar 30th, 2009. We use data from Jan 1st, 2008 to Jun 15th, 2008 for training and data from Jun 16th, 2008 to Jul 15th, 2008 for testing. Both time series datasets are downsampled to 1 hour for generating examples.

The synthetic datasets are generated as 100 different time series, each consisting of 960 time steps. Each time series in the Sine dataset is generated using a random offset and a random frequency, both drawn from a uniform distribution; the value at time step $t$ is then a sine function of $t$ with that frequency and offset. The Gaussian and Pareto datasets are generated as lag-1 autoregressive time series driven by randomly sampled Gaussian and Pareto noise, respectively. The Gaussian noise is sampled from a Gaussian distribution with mean 1 and standard deviation 1. The Pareto noise is sampled from a Pareto distribution with shape 10 and scaling 1.
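The synthetic datasets described above could be generated along the following lines. The uniform ranges for the Sine dataset and the AR(1) coefficient are not specified in this excerpt, so the values below are illustrative placeholders, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def sine_series(n_steps=960, freq_range=(0.01, 0.1), offset_range=(0.0, 2 * np.pi)):
    """Sine dataset: one series with a random frequency and phase offset.
    The uniform ranges here are assumptions for illustration."""
    f = rng.uniform(*freq_range)
    phi = rng.uniform(*offset_range)
    t = np.arange(n_steps)
    return np.sin(2 * np.pi * f * t + phi)

def ar1_series(noise_sampler, n_steps=960, coeff=0.9):
    """Gaussian/Pareto datasets: lag-1 autoregressive series driven by the
    given noise sampler. The AR coefficient 0.9 is an assumption."""
    x = np.zeros(n_steps)
    for t in range(1, n_steps):
        x[t] = coeff * x[t - 1] + noise_sampler()
    return x

gaussian_series = ar1_series(lambda: rng.normal(loc=1.0, scale=1.0))
pareto_series = ar1_series(lambda: rng.pareto(10.0) + 1.0)  # shape 10, scale 1
```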
## Appendix B Baseline adaptation

#### Time Series forecasting

DeepAR uses the Gaussian Negative Log Likelihood as its loss, which is unbounded. Because of this, many baseline methods need to be adapted in order to be usable, and for the same reason we also need an auxiliary loss $\hat{l}$. We use the MAE loss to fit the GPD, to calculate kurtosis, and to calculate the weight terms for Focal and Shrinkage loss. For LDS we treat all labels across time steps as part of a single distribution. Additionally, to avoid extremely high weights in LDS due to the nature of the long tail, we enforce a minimum probability for all labels.

#### Trajectory forecasting

We adapt Focal Loss and Shrinkage Loss to use the EWTA loss [makansi2019overcoming] in order to be compatible with the Trajectron++EWTA base model. LDS was originally proposed for a regression task, and we adapt it to the trajectory prediction task in the same way as for the time series task. We use MAE to fit the GPD, due to the evolving property of the EWTA loss.

## Appendix C Implementation details

#### Time Series forecasting

We use the DeepAR implementation from https://github.com/zhykoties/TimeSeries as the base code to run all time series experiments; the original DeepAR code is an AWS API and is not publicly available. The implementation of contrastive loss is taken directly from the source code of [makansi2021exposing].

#### Trajectory forecasting

For all tested base methods in the trajectory forecasting experiments (Trajectron++ [salzmann2020trajectron++] and Trajectron++EWTA [makansi2021exposing]) we have used the original implementations provided by the authors of each method. The implementation of contrastive loss is taken directly from the source code of [makansi2021exposing]. The experiments were conducted on a machine with 7 RTX 2080 Ti GPUs.

## Appendix D Hyperparameter Tuning

We observe during our experiments that the performance of kurtosis loss is highly dependent on the hyperparameter $\lambda$ (see Eq. (8)). Results for different values of $\lambda$ on the electricity dataset for kurtosis loss are shown in Table 5. We also show the variation of ND and NRMSE with the hyperparameter value in Figure 6. We can see that there is an optimal value of the hyperparameter, and the approach performs worse with both higher and lower values. For the ETH-UCY and nuScenes datasets we use one tuned value of $\lambda$ for kurtosis loss and another for PLM and PLW; for the electricity and traffic datasets, $\lambda$ is tuned separately for PLM, PLW, and kurtosis loss.

## Appendix E Long tail severity

In Table 6 we present the numerical values representing the approximate long-tailedness of each of the datasets. A larger value indicates a longer tail.

## Appendix F Pareto and Kurtosis

Figure 7 illustrates GPDs for different values of the shape parameter. A higher shape value models more severe tail behavior.

## Appendix G Long tail error distribution

In Fig. 8 we show log-log plots of the error distributions of the base model for each of the datasets. Each distribution exhibits long tail behavior.

## Appendix H Synthetic datasets

We present the complete results of our experiments on the synthetic datasets in Table 7. We ran our methods, kurtosis loss and PLM, on these datasets as well. Both methods show significant tail improvements over the base model across all datasets.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8359820246696472, "perplexity": 1894.6577434388735}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499890.39/warc/CC-MAIN-20230131190543-20230131220543-00015.warc.gz"}
https://www.fractalai.org/dl/2020/02/10/policy-gradient-actor-critic.html
What if we could learn the policy parameters directly? We can approach this problem by thinking of policies abstractly - Let's consider a class of policies defined by $\theta$ and refer to such a policy as $\pi_{\theta}(a|s)$, which is a probability distribution over the action space conditioned on the state $s$. These parameters $\theta$ could be the parameters of a neural network or a simple polynomial or anything really. Let's now define a metric $J$ which can be used to evaluate the quality of a policy $\pi_{\theta}$. What we really want to do is maximize the expected future reward, so naturally we can write

$J(\theta) = \mathbb{E}_{\pi_{\theta}}\left[\sum_{t=1}^{T}R(s_{t}, a_{t})\right]$

where $R(s_{t}, a_{t})$ is the reward given by taking action $a$ in state $s$ at time $t$. The optimal set of parameters for the policy can then be written as

$\theta^{*} = \arg\max_{\theta} J(\theta)$

Now consider a trajectory $\tau = (s_{1}, a_{1}, s_{2}, a_{2}, \dots, s_{T})$, which is a sequence of state-action pairs until the terminal state. We are trying to learn $\theta$ that maximizes the reward of some trajectory. So in the spirit of gradient descent, we are going to take actions within our environment to sample a trajectory and then use the rewards gained from that trajectory to adjust our parameters. We can write our objective function as

$J(\theta) = \mathbb{E}_{\tau \sim \pi_{\theta}}\left[R(\tau)\right]$

where $R(\tau)$ is the cumulative reward gained by our trajectory. Our goal is to take the gradient of this function with respect to $\theta$ so that we can use a gradient-based update rule to adjust our parameters. The reward function is not known and may not even be differentiable, but with a few clever tricks we can estimate the gradient. Recall that for any continuous function $f(x)$, $\mathbb{E}[f(x)] = \int_{-\infty}^{\infty}p(x)f(x)dx$ where $p(x)$ is the probability of event $x$ occurring. So we have

$J(\theta) = \int \pi_{\theta}(\tau)R(\tau)\,d\tau$

and

$\begin{aligned} \nabla_{\theta}J(\theta) &= \nabla_{\theta}\int \pi_{\theta}(\tau)R(\tau)\,d\tau \\ &= \int \nabla_{\theta}\pi_{\theta}(\tau)R(\tau)\,d\tau \\ &= \int \pi_{\theta}(\tau)\nabla_{\theta}\log(\pi_{\theta}(\tau))R(\tau)\,d\tau \\ &= \mathbb{E}_{\tau\sim\pi_{\theta}}\left[\nabla_{\theta}\log(\pi_{\theta}(\tau))R(\tau)\right] \end{aligned}$

Where the third line follows from the fact that $\nabla_{x}f(x) = f(x)\nabla_{x}\log(f(x))$. The fact that we have turned the gradient of our cost function $J$ into an expectation is good because that means we can estimate it by sampling data. The last piece of the puzzle is to figure out how to calculate $\nabla_{\theta}\log(\pi_{\theta}(\tau))$. Note that we can rewrite $\pi_{\theta}(\tau)$ as

$\pi_{\theta}(\tau) = p(s_{1})\prod_{t=1}^{T}\pi_{\theta}(a_{t}|s_{t})\,p(s_{t+1}|s_{t}, a_{t})$

Convince yourself that the above relation is true. $\pi_{\theta}(\tau)$ is the probability of trajectory $\tau$ happening. It is the probability of starting in $s_{1}$, then taking action $a_{1}$ given $s_{1}$, then transitioning to state $s_{2}$ given $a_{1}$ in $s_{1}$, and so on. This joint probability can be factored out. The last step is to realize $p(a_{t}|s_{t})$ is the definition of $\pi_{\theta}(a_{t}|s_{t})$. Now

$\nabla_{\theta}\log(\pi_{\theta}(\tau)) = \sum_{t=1}^{T}\nabla_{\theta}\log\pi_{\theta}(a_{t}|s_{t})$

since the initial-state and transition probabilities do not depend on $\theta$. This simplification is enough for us to complete our estimate of the policy gradient $\nabla_{\theta}J(\theta)$:

$\nabla_{\theta}J(\theta) \approx \frac{1}{N}\sum_{i=1}^{N}\left(\sum_{t=1}^{T}\nabla_{\theta}\log\pi_{\theta}(a_{i,t}|s_{i,t})\right)R(\tau_{i})$

Where $N$ is just the number of episodes (analogous to epochs) we do. Having a set of $N$ trajectories and then averaging the policy gradient estimate over each of them makes this estimate more robust. Now that we can estimate the policy gradient, we simply update our parameters in the familiar way (with a plus sign, since we are ascending on the expected reward):

$\theta \leftarrow \theta + \alpha\nabla_{\theta}J(\theta)$

One interpretation of this result is that we are trying to maximize the log likelihood of trajectories that give good rewards and minimize the log likelihood of those that don't. This is the idea behind the REINFORCE algorithm, which is

1. sample $N$ trajectories by running the policy
2. estimate the policy gradient like above
3. update the parameters $\theta$
4. repeat until converged
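A compact PyTorch sketch of REINFORCE as described above is given below. It assumes a Gymnasium-style environment API (`reset()` returning `(obs, info)`, `step()` returning a 5-tuple) and a discrete action space; the network size, the batch of N trajectories per update, and the absence of a discount factor or baseline mirror the simplified presentation here rather than a tuned implementation.

```python
import torch
import torch.nn as nn
from torch.distributions import Categorical

class PolicyNet(nn.Module):
    """pi_theta(a|s): a small network mapping a state to action logits."""
    def __init__(self, obs_dim, n_actions, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim, hidden), nn.Tanh(),
                                 nn.Linear(hidden, n_actions))
    def forward(self, obs):
        return Categorical(logits=self.net(obs))

def reinforce(env, policy, optimizer, updates=100, n_trajectories=8):
    """REINFORCE: sample N trajectories, form the log-prob-weighted return
    estimate of grad J, and take a gradient ascent step on theta."""
    for _ in range(updates):
        log_probs, returns = [], []
        for _ in range(n_trajectories):                 # N trajectories
            obs, _ = env.reset()
            ep_log_probs, ep_reward, done = [], 0.0, False
            while not done:
                dist = policy(torch.as_tensor(obs, dtype=torch.float32))
                action = dist.sample()
                ep_log_probs.append(dist.log_prob(action))
                obs, reward, terminated, truncated, _ = env.step(action.item())
                ep_reward += reward
                done = terminated or truncated
            log_probs.append(torch.stack(ep_log_probs).sum())
            returns.append(ep_reward)
        # loss = -J_hat, so minimizing it performs gradient ascent on J
        loss = -(torch.stack(log_probs) * torch.tensor(returns)).mean()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```

For example, `env = gymnasium.make("CartPole-v1")`, `policy = PolicyNet(4, 2)` and `optimizer = torch.optim.Adam(policy.parameters(), lr=1e-2)` would exercise this loop on the cart-pole task.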
### Actor Critic

One issue with vanilla policy gradients is that it's very hard to assign credit to state-action pairs that resulted in good reward, because we only consider the total reward $\sum_{t=1}^{T}R(a_{t}, s_{t})$. The trajectories are noisy. But if we had the $Q$ function, we would know which state-action pairs were good. In other words, we would estimate the gradient of $J$ as

$\nabla_{\theta}J(\theta) \approx \frac{1}{N}\sum_{i=1}^{N}\sum_{t=1}^{T}\nabla_{\theta}\log\pi_{\theta}(a_{i,t}|s_{i,t})\,Q(s_{i,t}, a_{i,t})$

The idea of actor-critic is that we have an actor that samples trajectories using the policy, and a critic that critiques the policy using the $Q$ function. Since we don't have the optimal $Q$ function, we can estimate it like we did in deep Q learning. So we could have a policy network that takes in a state and returns a probability distribution over the action space (i.e. $\pi_{\theta}(a|s)$) and a $Q$ network that takes in a state-action pair and returns its Q value estimate. Let's say this network is parameterized by a generic variable $\beta$. Note that these don't have to be neural networks, but for the sake of this guide I'll just say "network". So we have networks $\pi_{\theta}$ and $Q_{\beta}$. The general actor-critic algorithm then goes like

1. Initialize $s, \theta, \beta$
2. Repeat until converged:
   • Sample action $a$ from $\pi_{\theta}(\cdot|s)$
   • Receive reward $r$ and sample the next state $s' \sim p(s'|s, a)$
   • Use the critic to evaluate the actor and update the policy similar to what we did in policy gradients (again ascending): $\theta \leftarrow \theta + \alpha\nabla_{\theta}\log(\pi_{\theta}(a|s))Q_{\beta}(s, a)$
   • Update the critic according to some loss metric: $\text{MSE Loss} = (Q_{t+1}(s, a) - (r + \max_{a'}Q_{t}(s', a')))^{2}$
   • Update $\beta$ using backprop or whatever update rule

Of course you can sample whole trajectories instead of one state-action pair at a time. Different types of actor-critic result from changing the "critic". In REINFORCE, the critic was simply the reward we got from the trajectory. In actor-critic, the critic is the Q function. Another popular choice is called advantage actor-critic, in which the critic is the advantage function

$A(s, a) = Q(s, a) - V(s)$

where $V$ is the value function (recall value iteration). The advantage function $A$ tells us how much better taking action $a$ in state $s$ is than the expected cumulative reward of simply being in state $s$. This concludes our discussion of RL for the Deep Learning section. In the future I will make more RL-related guides that focus on more advanced topics and current research. Feel free to reach out with any questions or if you notice something you think is inaccurate and I'll do my best to respond!
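Below is a sketch of a single online actor-critic update in the spirit of the algorithm above. For simplicity the critic here outputs one Q-value per discrete action (rather than taking a state-action pair as input), a discount factor `gamma` is added to the bootstrapped target, and all names are ours rather than the guide's.

```python
import torch

def actor_critic_step(policy, q_net, obs, action, reward, next_obs, done,
                      pi_opt, q_opt, gamma=0.99):
    """One online actor-critic update for a discrete action space.
    policy(obs) returns action logits; q_net(obs) returns a Q-value per action.
    obs/next_obs are float tensors, action is an int index."""
    # --- critic update: regress Q(s,a) toward a bootstrapped target ---
    q_sa = q_net(obs)[action]
    with torch.no_grad():
        target = reward + (0.0 if done else gamma * q_net(next_obs).max())
    critic_loss = (q_sa - target) ** 2
    q_opt.zero_grad(); critic_loss.backward(); q_opt.step()

    # --- actor update: minimize -log pi(a|s) * Q(s,a), i.e. ascend J ---
    logp = torch.log_softmax(policy(obs), dim=-1)[action]
    actor_loss = -logp * q_sa.detach()
    pi_opt.zero_grad(); actor_loss.backward(); pi_opt.step()
```

Swapping `q_sa.detach()` for an advantage estimate `q_sa - v(obs)` would turn this into the advantage actor-critic variant mentioned above.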
{"extraction_info": {"found_math": true, "script_math_tex": 1, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8877618908882141, "perplexity": 261.4203750979743}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655911896.73/warc/CC-MAIN-20200710175432-20200710205432-00317.warc.gz"}
https://brilliant.org/problems/a-problem-by-swapnil-yadav/
# A problem by Swapnil Yadav

Level pending

f(x) is a polynomial function such that f(f(1)) = 2. f(x) has no real roots. Then f(2) equals
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8469597101211548, "perplexity": 4049.4177120761383}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988719045.47/warc/CC-MAIN-20161020183839-00385-ip-10-171-6-4.ec2.internal.warc.gz"}
https://forum.allaboutcircuits.com/threads/initial-conditions-of-switched-circuits.44252/
# Initial Conditions of Switched Circuits

Discussion in 'Homework Help' started by jegues, Oct 14, 2010.

1. ### jegues (Thread Starter, Well-Known Member)

See figure for the question and my attempt. I redrew the circuit at a time just before time 0. Doing a KVL in loops 1, 2 and 3, I concluded that V1(0-) = 24V. I'm not 100% confident about this result; is it correct? If not, how do I go about finding V1(0-)? I redrew the circuit again at a time just after time 0. I have no clue how I'm supposed to find I2(0+) and V1(0+) here. Can someone get me started? Thanks again!

Last edited: Oct 15, 2010

2. ### t_n_k (AAC Fanatic!)

V1(0-) won't be 24V. The circuit has stabilized before t=0. At this time the inductor will look like a short and the capacitor will charge to a value determined by the voltage divider formed by the 24V supply and the series combination of the 20Ω and 80Ω resistors. The value of V1 and the capacitor voltage will be the same, since there will be no steady state voltage drop across the inductor.

3. ### t_n_k (AAC Fanatic!)

For the t=0+ case, one approach is to treat the inductor as a current source and the capacitor as a voltage source - just for that instant. Then, say, apply superposition (or whatever method suits you) to find the unknown quantities. The notional current source value will be the steady state inductor current at t=0-. This approach only works because you are only required to solve for values at the instant the switch closes.

4. ### jegues (Thread Starter)

Okay, so my BIG mistake was that I interpreted/read the question wrong. The circuit is already at steady state when the switch closes. So at t(0-) my capacitor and inductor should be their steady state equivalents respectively (open circuit, short circuit). Then once the switch closes, t(0+), I can model my capacitor and inductor accordingly. I'm going to try this problem again from scratch and I'll post my results! Thanks again tnk, your help is always appreciated!

5. ### jegues (Thread Starter)

UPDATE: I've reattempted the problem and came up with a final solution. See the figures attached for my 2nd attempt at this problem. Okay, so basically what I did was redraw the correct circuits for t(0-) and t(0+). In the t(0-) circuit I solved for V1(0-) using a voltage divider as tnk mentioned, and clearly one can see that i2(0-) is 0. I also obtained Vc and Il. Then I went to my t(0+) circuit and applied mesh analysis. From these results I was able to solve for V1(0+) and i2(0+). Does anyone see any errors in my work/results? I'm really hoping I finally got this one down! Let me know what you think! Thanks again!

6. ### Jony130 (AAC Fanatic!)

To check your result for V1(0+), write this nodal equation

$240mA = \frac{V1}{80} - \frac{(24V - V1) }{80}$

When I solve this I get V1(0+) = 21.6V. Your mesh equation is not correct because of a current source in the circuit.

7. ### jegues (Thread Starter)

Okay, I agree that my mesh equation must be wrong, but I'm not sure where exactly I went wrong. Can you see it? I don't see how the current source is causing any problems.

EDIT: I found my mistake.
I had an algebra error in my equation *. It should be

$160i_{2} = 4.8 \rightarrow i_{2} = 0.03A$

Then one will find that

$V_{1}(0+) = 21.6V$

Last edited: Oct 15, 2010

8. ### Jony130 (AAC Fanatic!)

Good job, but for this circuit the nodal analysis is much faster.

9. ### jegues (Thread Starter)

Yup, I realize that now, but I didn't notice it at the time. I need more practice I guess...
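For readers who want to verify the arithmetic, a quick SymPy check of the node equation quoted in post #6 reproduces V1(0+) = 21.6 V. The numbers come from the posts above, not from the original attached figure.

```python
from sympy import symbols, Eq, solve

# Node equation from post #6: 0.24 A = V1/80 - (24 - V1)/80
V1 = symbols("V1")
node_eq = Eq(0.24, V1 / 80 - (24 - V1) / 80)
print(solve(node_eq, V1))   # -> [21.6], i.e. V1(0+) = 21.6 V
```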
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 4, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8225487470626831, "perplexity": 3125.499527076578}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376824822.41/warc/CC-MAIN-20181213123823-20181213145323-00099.warc.gz"}
https://infoscience.epfl.ch/record/198080
Infoscience Journal article # Impact of Annealing on the Early Hydration of Tricalcium Silicate It was recently proposed that the induction period observed during the hydration of tricalcium silicate could be explained by the build-up of ions in solution. Due to the importance of defects in this mechanism, this work describes the effect of different annealing treatments on the defect structure and hydration behavior of C3S. The impact of annealing on the crystal structure was checked by X-ray diffraction, and the defect structure was studied by transmission electron microscopy. The hydration kinetics were followed by isothermal calorimetry of pastes. Scanning electron microscopy was used to examine the microstructure formation. It was observed that grinding created a highly deformed layer on the surface of the grains, which disappeared after annealing. The defect structure was closely related to the length of the induction period observed in pastes by calorimetry. There was no observable effect on the morphology of C-S-H during hydration, but the number of calcium hydroxide nuclei was lower in pastes from annealed material.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9463725090026855, "perplexity": 2035.1795297044976}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084886792.7/warc/CC-MAIN-20180117003801-20180117023801-00001.warc.gz"}
https://www.hepdata.net/search/?q=&collaboration=HERMES&sort_order=&page=1&sort_by=latest
Showing 19 of 19 results #### Transverse-target-spin asymmetry in exclusive $\omega$-meson electroproduction The collaboration Airapetian, A. ; Akopov, N. ; Akopov, Z. ; et al. Eur.Phys.J. C75 (2015) 600, 2015. Inspire Record 1391139 Hard exclusive electroproduction of $\omega$ mesons is studied with the HERMES spectrometer at the DESY laboratory by scattering 27.6 GeV positron and electron beams off a transversely polarized hydrogen target. The amplitudes of five azimuthal modulations of the single-spin asymmetry of the cross section with respect to the transverse proton polarization are measured. They are determined in the entire kinematic region as well as for two bins in photon virtuality and momentum transfer to the nucleon. Also, a separation of asymmetry amplitudes into longitudinal and transverse components is done. These results are compared to a phenomenological model that includes the pion pole contribution. Within this model, the data favor a positive $\pi\omega$ transition form factor. 4 data tables The amplitudes of the five sine and two cosine modulations as determined in the entire kinematic region. The results receive an additional 8.2% scale uncertainty corresponding to the target-polarization uncertainty. The definition of intervals and the mean values of the kinematic variables. Results on the kinematic dependences of the five asymmetry amplitudes $A_{UT}$ and two amplitudes $A_{UU}$. The first two columns correspond to the $-t'$ intervals $0.00 - 0.07 - 0.20$ GeV$^2$ and the last two columns to the $Q^{2}$ intervals $1.00 - 1.85 - 10.00$ GeV$^2$. The results receive an additional 8.2% scale uncertainty corresponding to the target-polarization uncertainty. More… #### Spin density matrix elements in exclusive $\omega$ electroproduction on $^1$H and $^2$H targets at 27.5 GeV beam energy The collaboration Airapetian, A. ; Akopov, N. ; Akopov, Z. ; et al. Eur.Phys.J. C74 (2014) 3110, 2014. Inspire Record 1305286 Exclusive electroproduction of $\omega$ mesons on unpolarized hydrogen and deuterium targets is studied in the kinematic region of $Q^2> 1.0$  GeV$^2$ , 3.0 GeV  $< W <$  6.3 GeV, and $-t'< 0.2$  GeV$^2$ . Results on the angular distribution of the $\omega$ meson, including its decay products, are presented. The data were accumulated with the HERMES forward spectrometer during the 1996–2007 running period using the 27.6 GeV longitudinally polarized electron or positron beam of HERA. The determination of the virtual-photon longitudinal-to-transverse cross-section ratio reveals that a considerable part of the cross section arises from transversely polarized photons. Spin density matrix elements are presented in projections of $Q^2$ or $-t'$ . Violation of $s$ -channel helicity conservation is observed for some of these elements. A sizable contribution from unnatural-parity-exchange amplitudes is found and the phase shift between those amplitudes that describe transverse $\omega$ production by longitudinal and transverse virtual photons, $\gamma ^{*L} \rightarrow \omega _{T}$ and $\gamma ^{*T} \rightarrow \omega _{T}$ , is determined for the first time. A hierarchy of helicity amplitudes is established, which mainly means that the unnatural-parity-exchange amplitude describing the $\gamma ^*_T \rightarrow \omega _T$ transition dominates over the two natural-parity-exchange amplitudes describing the $\gamma ^*_L \rightarrow \omega _L$ and $\gamma ^*_T \rightarrow \omega _T$ transitions, with the latter two being of similar magnitude. 
Good agreement is found between the HERMES proton data and results of a pQCD-inspired phenomenological model that includes pion-pole contributions, which are of unnatural parity. 9 data tables The 23 unpolarized and polarized $\omega$ SDMEs from the proton and deuteron data. The 23 unpolarized and polarized $\omega$ SDMEs for the proton data in $Q^2$ intervals: $1.00 - 1.57 - 2.55 - 10.00$ GeV$^2$. The 23 unpolarized and polarized $\omega$ SDMEs for the proton data in $-t'$ intervals: $0.000 - 0.044 - 0.105 - 0.200$ GeV$^2$. More… #### Inclusive Measurements of Inelastic Electron and Positron Scattering from Unpolarized Hydrogen and Deuterium Targets The collaboration Airapetian, A. ; Akopov, N. ; Akopov, Z. ; et al. JHEP 1105 (2011) 126, 2011. Inspire Record 894309 3 data tables Results on the differential Born cross section $\frac{d^2\sigma^p}{dx\,dQ^2}$ and $F_2^p$. The statistical uncertainty $\delta_{stat.}$ and the systematic uncertainties $\delta_{PID}$ (particle identification), $\delta_{model}$ (model dependence outside the acceptance), $\delta_{mis.}$ (misalignment), and $\delta_{rad.}$ (Bethe-Heitler efficiencies) are given in percent. Corresponding $x$ bin numbers and $Q^2$ bin numbers and the average values $\langle x \rangle$ and $\langle {Q^2} \rangle$ are listed in the first four columns. The overall normalization uncertainty is 7.6 %. The structure function $F_2^p$ is derived using the parameterization $R=R_{1998}$. Results on the differential Born cross section $\frac{d^2\sigma^d}{dx\,dQ^2}$ and $F_2^d$. The statistical uncertainty $\delta_{stat.}$ and the systematic uncertainties $\delta_{PID}$ (particle identification), $\delta_{model}$ (model dependence outside the acceptance), $\delta_{mis.}$ (misalignment), and $\delta_{rad.}$ (Bethe-Heitler efficiencies), are given in percent. Corresponding $x$ bin numbers and $Q^2$ bin numbers and the average values $\langle x \rangle$ and $\langle{Q^2}\rangle$ are listed in the first four columns. The overall normalization uncertainty is 7.5 %. The structure function $F_2^d$ is derived using the parameterization $R=R_{1998}$. Results on the inelastic Born cross-section ratio ${\sigma^d}/{\sigma^p}$. The statistical uncertainty $\delta_{stat.}$, the systematic uncertainty $\delta_{rad.}$ due to radiative corrections and $\delta_{model}$ due to the model dependence outside the acceptance are given in percent. The average values of $x$ and $Q^2$ are listed in the first two columns. The overall normalization uncertainty is 1.4$\%$. #### Measurement of the virtual-photon asymmetry $A_2$ and the spin-structure function $g_2$ of the proton The collaboration Airapetian, A. ; Akopov, N. ; Akopov, Z. ; et al. Eur.Phys.J. C72 (2012) 1921, 2012. Inspire Record 1082840 4 data tables The spin-structure function $xg_2(x,Q^2)$ and virtual-photon asymmetry $A_2(x,Q^2)$ of the proton in bins of $(x,Q^2)$, see text for details. Statistical and systematic uncertainties are presented separately. The spin-structure function $xg_2$ and the virtual-photon asymmetry $A_2$ of the proton after evolving to common $Q^2$ and averaging over in each $x$-bin (see text for details). Statistical and systematic uncertainties are presented separately. Correlation matrix for $xg_2$ in 9 $x$-bins (as in Table 2). More… #### Multiplicities of charged pions and kaons from semi-inclusive deep-inelastic scattering by the proton and the deuteron The collaboration Airapetian, A. ; Akopov, N. ; Akopov, Z. ; et al. Phys.Rev. D87 (2013) 074029, 2013. 
Inspire Record 1208547 Multiplicities in semi-inclusive deep-inelastic scattering are presented for each charge state of \pi^\pm and K^\pm mesons. The data were collected by the HERMES experiment at the HERA storage ring using 27.6 GeV electron and positron beams incident on a hydrogen or deuterium gas target. The results are presented as a function of the kinematic quantities x_B, Q^2, z, and P_h\perp. They represent a unique data set for identified hadrons that will significantly enhance our understanding of the fragmentation of quarks into final-state hadrons in deep-inelastic scattering. 64 data tables pi+ multiplicities from HERMES, Target: H, Target: D, VM subtracted. pi- multiplicities from HERMES, Target: H, Target: D, VM subtracted. K+ multiplicities from HERMES, Target: H, Target: D, VM subtracted. More… #### Beam-Spin Asymmetries in the Azimuthal Distribution of Pion Electroproduction The collaboration Airapetian, A. ; Akopov, Z. ; Amarian, M. ; et al. Phys.Lett. B648 (2007) 164-170, 2007. Inspire Record 735612 A measurement of the beam-spin asymmetry in the azimuthal distribution of pions produced in semi-inclusive deep-inelastic scattering off protons is presented. The measurement was performed using the {HERMES} spectrometer with a hydrogen gas target and the longitudinally polarized 27.6 GeV positron beam of HERA. The sinusoidal amplitude of the dependence of the asymmetry on the angle $\phi$ of the hadron production plane around the virtual photon direction relative to the lepton scattering plane was measured for $\pi^+,\pi^-$ and $\pi^0$ mesons. The dependence of this amplitude on the Bjorken scaling variable and on the pion fractional energy and transverse momentum is presented. The results are compared to theoretical model calculations. 6 data tables Beam SSA as a function of Z, X, hadronic PT and Q**2. Beam SSA as a function of Z, X, hadronic PT and Q**2. Beam SSA as a function of Z, X, hadronic PT and Q**2. More… #### Precise determination of the spin structure function g(1) of the proton, deuteron and neutron The collaboration Airapetian, A. ; Akopov, N. ; Akopov, Z. ; et al. Phys.Rev. D75 (2007) 012007, 2007. Inspire Record 726689 Precise measurements of the spin structure functions of the proton $g_1^p(x,Q^2)$ and deuteron $g_1^d(x,Q^2)$ are presented over the kinematic range $0.0041 \leq x \leq 0.9$ and $0.18$ GeV$^2$ $\leq Q^2 \leq 20$ GeV$^2$. The data were collected at the HERMES experiment at DESY, in deep-inelastic scattering of 27.6 GeV longitudinally polarized positrons off longitudinally polarized hydrogen and deuterium gas targets internal to the HERA storage ring. The neutron spin structure function $g_1^n$ is extracted by combining proton and deuteron data. The integrals of $g_1^{p,d}$ at $Q^2=5$ GeV$^2$ are evaluated over the measured $x$ range. Neglecting any possible contribution to the $g_1^d$ integral from the region $x \leq 0.021$, a value of $0.330 \pm 0.011\mathrm{(theo.)}\pm0.025\mathrm{(exp.)}\pm 0.028$(evol.) is obtained for the flavor-singlet axial charge $a_0$ in a leading-twist NNLO analysis. 23 data tables Integrals of G1 for P, DEUT and N targets.. The second DSYS systematic error is due to the uncertainty in the parameterizations (R, F2, A2, Azz, omegaD).. The third DSYS systematic error is due to the uncertainty in evolving to a common Q**2. Integrals of G1 for the Non-Singlet contributions.. The second DSYS systematic error is due to the uncertainty in the parameterizations (R, F2, A2, Azz, omegaD).. 
The third DSYS systematic error is due to the uncertainty in evolving to a common Q**2. Axis error includes +- 5.2/5.2 contribution. Integrals of G1 over different X ranges for P target at various Q*2 values. The second DSYS systematic error is due to the uncertainty in the parameterizations (R, F2, A2, Azz, omegaD).. The third DSYS systematic error is due to the uncertainty in evolving to a common Q**2. Axis error includes +- 5.2/5.2 contribution. More… #### Double spin asymmetries in the cross-section of rho0 and phi production at intermediate-energies The collaboration Airapetian, A. ; Akopov, N. ; Akopov, Z. ; et al. Eur.Phys.J. C29 (2003) 171-179, 2003. Inspire Record 613068 Double-spin asymmetries in the cross section of electroproduction of $\rho^0$ and $\phi$ mesons on the proton and deuteron are measured at the HERMES experiment. The photoabsorption asymmetry in exclusive $\rho^0$ electroproduction on the proton exhibits a positive tendency. This is consistent with theoretical predictions that the exchange of an object with unnatural parity contributes to exclusive $\rho^0$ electroproduction by transverse photons. The photoabsorption asymmetry on the deuteron is found to be consistent with zero. Double-spin asymmetries in $\rho^0$ and $\phi$ meson electroproduction by quasi-real photons were also found to be consistent with zero: the asymmetry in the case of the $\phi$ meson is compatible with a theoretical prediction which involves $s\bar{s}$ knockout from the nucleon. 7 data tables The photoabsorption asymmetry A1 for exclusive RHO0 production. The photoabsorption asymmetry A1 for exclusive PHI electroproduction. The photoabsorption asymmetry A1 for electroproduction of RHO0 mesons by quasi-real photons. More… #### Double spin asymmetry in the cross-section for exclusive rho0 production in lepton - proton scattering The collaboration Airapetian, A. ; Akopov, N. ; Akopov, Z. ; et al. Phys.Lett. B513 (2001) 301-310, 2001. Inspire Record 553236 7 data tables The photoabsorption asymmetry A1 for exclusive RHO0 production. The photoabsorption asymmetry A1 for exclusive RHO0 production as a function of Q**2. The photoabsorption asymmetry A1 for exclusive RHO0 production as a function of W. More… #### The Q**2 dependence of the generalized Gerasimov-Drell-Hearn integral for the deuteron, proton and neutron The collaboration Airapetian, A. ; Akopov, N. ; Akopov, Z. ; et al. Eur.Phys.J. C26 (2003) 527-538, 2003. Inspire Record 600098 The Gerasimov-Drell-Hearn (GDH) sum rule connects the anomalous contribution to the magnetic moment of the target nucleus with an energy-weighted integral of the difference of the helicity-dependent photoabsorption cross sections. The data collected by HERMES with a deuterium target are presented together with a re-analysis of previous measurements on the proton. This provides a measurement of the generalised GDH integral covering simultaneously the nucleon-resonance and the deep inelastic scattering regions. The contribution of the nucleon-resonance region is seen to decrease rapidly with increasing $Q^2$. The DIS contribution is sizeable over the full measured range, even down to the lowest measured $Q^2$. As expected, at higher $Q^2$ the data are found to be in agreement with previous measurements of the first moment of $g_1$. From data on the deuteron and proton, the GDH integral for the neutron has been derived and the proton--neutron difference evaluated. This difference is found to satisfy the fundamental Bjorken sum rule at $Q^2 = 5$ GeV$^2$. 
6 data tables The value of the GDH integral, as a function of Q**2 , for the deuteron in three W**2 regions, the total ( > 1 GeV**2), the nucleon resonance ( 1 to 4.2 GeV**2) and the DIS (4.2 to 45 GeV**2). The value of the GDH integral, as a function of Q**2 , for the proton in three W**2 regions, the total ( > 1 GeV**2), the nucleon resonance ( 1 to 4.2 GeV**2) and the DIS (4.2 to 45 GeV**2). The value of the GDH integral, as a function of Q**2 , for the neutron in three W**2 regions, the total ( > 1 GeV**2), the nucleon resonance ( 1 to 4.2 GeV**2) and the DIS (4.2 to 45 GeV**2). More… #### Multiplicity of charged and neutral pions in deep inelastic scattering of 27.5-GeV positrons on hydrogen The collaboration Airapetian, A. ; Akopov, N. ; Akopov, Z. ; et al. Eur.Phys.J. C21 (2001) 599-606, 2001. Inspire Record 554660 Measurements of the individual multiplicities of pi+, pi- and pi0 produced in the deep-inelastic scattering of 27.5 GeV positrons on hydrogen are presented. The average charged pion multiplicity is the same as for neutral pions, up to approximately z= 0.7, where z is the fraction of the energy transferred in the scattering process carried by the pion. This result (below z= 0.7) is consistent with isospin invariance. The total energy fraction associated with charged and neutral pions is 0.51 +/- 0.01 (stat.) +/- 0.08 (syst.) and 0.26 +/- 0.01 (stat.) +/- 0.04 (syst.), respectively. For fixed z, the measured multiplicities depend on both the negative squared four momentum transfer Q^2 and the Bjorken variable x. The observed dependence on Q^2 agrees qualitatively with the expected behaviour based on NLO-QCD evolution, while the dependence on x is consistent with that of previous data after corrections have been made for the expected Q^2-dependence. 4 data tables The measured PI0 multiplicity. Additional 9 PCT systematic error. The measured multiplicity for charged pions, individually and the average. Additional 7 PCT systematic error. The charged pion multiplicity as a function of x for four different z regions. More… #### The Q**2 dependence of the generalized Gerasimov-Drell-Hearn integral for the proton The collaboration Airapetian, A. ; Akopov, N. ; Akushevich, I. ; et al. Phys.Lett. B494 (2000) 1-8, 2000. Inspire Record 531949 The dependence on Q^2 (the negative square of the 4-momentum of the exchanged virtual photon) of the generalised Gerasimov-Drell-Hearn integral for the proton has been measured in the range 1.2 GeV^2 < Q^2 < 12 GeV^2 by scattering longitudinally polarised positrons on a longitudinally polarised hydrogen gas target. The contributions of the nucleon-resonance and deep-inelastic regions to this integral have been evaluated separately. The latter has been found to dominate for Q^2 > 3 GeV^2, while both contributions are important at low Q^2. The total integral shows no significant deviation from a 1/Q^2 behaviour in the measured Q^2 range, and thus no sign of large effects due to either nucleon-resonance excitations or non-leading twist. 1 data table The GDH integral as a function of Q2 in the resonance region (W**2 = 1 to 4.2 GeV**2), the measured region (W**2=4.2 to 45 GeV**2), and the total region (W**2= 1 to 45 GeV**2). #### Exclusive leptoproduction of rho0 mesons from hydrogen at intermediate virtual photon energies The collaboration Airapetian, A. ; Akopov, N. ; Akushevich, I. ; et al. Eur.Phys.J. C17 (2000) 389-398, 2000. 
Inspire Record 526550 Measurements of the cross section for exclusive virtual-photoproduction of rho^0 mesons from hydrogen are reported. The data were collected by the HERMES experiment using 27.5 GeV positrons incident on a hydrogen gas target in the HERA storage ring. The invariant mass W of the photon-nucleon system ranges from 4.0 to 6.0 GeV, while the negative squared four-momentum Q^2 of the virtual photon varies from 0.7 to 5.0 GeV^2. The present data together with most of the previous data at W > 4 GeV are well described by a model that infers the W-dependence of the cross section from the dependence on the Bjorken scaling variable x of the unpolarized structure function for deep-inelastic scattering. In addition, a model calculation based on Off-Forward Parton Distributions gives a fairly good account of the longitudinal component of the rho^0 production cross section for Q^2 > 2 GeV^2. 2 data tables Cross sections are corrected for radiative effects (which typically amount s to 18 PCT). Longitudinal cross sections. The listed uncertainties include both the total error on the measured RHO0 photoproduction cross sections and the error on theparametrization of R for W<7 GeV. #### Measurement of the spin asymmetry in the photoproduction of pairs of high p(T) hadrons at HERMES The collaboration Airapetian, A. ; Akopov, N. ; Amarian, M. ; et al. Phys.Rev.Lett. 84 (2000) 2584-2588, 2000. Inspire Record 503784 We present a measurement of the longitudinal spin asymmetry A_|| in photoproduction of pairs of hadrons with high transverse momentum p_T. Data were accumulated by the HERMES experiment using a 27.5 GeV polarized positron beam and a polarized hydrogen target internal to the HERA storage ring. For h+h- pairs with p_T^h_1 > 1.5 GeV/c and p_T^h_2 > 1.0 GeV/c, the measured asymmetry is A_|| = -0.28 +/- 0.12 (stat.) +/- 0.02 (syst.). This negative value is in contrast to the positive asymmetries typically measured in deep inelastic scattering from protons, and is interpreted to arise from a positive gluon polarization. 1 data table Asymmetry measurement with a PT cut of 1.5 GeV on the hadron with the higher PT, and 1.0 GeV on the hadron with the lower PT. #### Determination of the deep inelastic contribution to the generalized Gerasimov-Drell-Hearn integral for the proton and neutron The collaboration Ackerstaff, K. ; Airapetian, A. ; Akopov, N. ; et al. Phys.Lett. B444 (1998) 531-538, 1998. Inspire Record 476388 The virtual photon absorption cross section differences [sigma_1/2-sigma_3/2] for the proton and neutron have been determined from measurements of polarised cross section asymmetries in deep inelastic scattering of 27.5 GeV longitudinally polarised positrons from polarised 1H and 3He internal gas targets. The data were collected in the region above the nucleon resonances in the kinematic range nu < 23.5 GeV and 0.8 GeV**2 < Q**2 < 12 GeV**2. For the proton the contribution to the generalised Gerasimov-Drell-Hearn integral was found to be substantial and must be included for an accurate determination of the full integral. Furthermore the data are consistent with a QCD next-to-leading order fit based on previous deep inelastic scattering data. Therefore higher twist effects do not appear significant. 13 data tables Gerasimov-Drell-Hearn sum rule for proton as a function of Q2. Gerasimov-Drell-Hearn sum rule for neutron as a function of Q2 (integral spans from Q2/2M to infinity instead of zero to infinity, see paper). Cross section difference for the proton data. 
Statistical errors only. More… #### The Flavor asymmetry of the light quark sea from semiinclusive deep inelastic scattering The collaboration Ackerstaff, K. ; Airapetian, A. ; Akopov, N. ; et al. Phys.Rev.Lett. 81 (1998) 5519-5523, 1998. Inspire Record 473345 The flavor asymmetry of the light quark sea of the nucleon is determined in the kinematic range 0.02<x<0.3 and 1 GeV^2<Q^2<10 GeV^2, for the first time from semi-inclusive deep-inelastic scattering. The quantity (dbar(x)-ubar(x))/(u(x)-d(x)) is derived from a relationship between the yields of positive and negative pions from unpolarized hydrogen and deuterium targets. The flavor asymmetry dbar-ubar is found to be non-zero and x dependent, showing an excess of dbar over ubar quarks in the proton. 1 data table The ratio of parton distribution functions (PDF) is determined from the ratio of the differencies between charged pion yields for proton and neutron targets: (N_p(pi-)-N_n(pi-))/(N_p(pi+)-N_n(pi+)). #### Measurement of the proton spin structure function g1(p) with a pure hydrogen target The collaboration Airapetian, A. ; Akopov, N. ; Akushevich, I. ; et al. Phys.Lett. B442 (1998) 484-492, 1998. Inspire Record 473421 A measurement of the proton spin structure function g1p(x,Q^2) in deep-inelastic scattering is presented. The data were taken with the 27.6 GeV longitudinally polarised positron beam at HERA incident on a longitudinally polarised pure hydrogen gas target internal to the storage ring. The kinematic range is 0.021<x<0.85 and 0.8 GeV^2<Q^2<20 GeV^2. The integral Int_{0.021}^{0.85} g1p(x)dx evaluated at Q0^2 of 2.5 GeV^2 is 0.122+/-0.003(stat.)+/-0.010(syst.). 2 data tables The second systematic errors listed for G1/F1 (G1) are the uncertainties concerning R (R and F2). G1 evolved at Q2 = 2.5 GeV**2, assuming G1/F1 to be independent of Q2. The second systematic errors listed for are the uncertainties concerning R and F2. #### Measurement of the neutron spin structure function g1(n) with a polarized He-3 internal target The collaboration Ackerstaff, K. ; Airapetian, A. ; Akushevich, I. ; et al. Phys.Lett. B404 (1997) 383-389, 1997. Inspire Record 440904 Results are reported from the HERMES experiment at HERA on a measurement of the neutron spin structure function $g_1~n(x,Q~2)$ in deep inelastic scattering using 27.5 GeV longitudinally polarized positrons incident on a polarized $~3$He internal gas target. The data cover the kinematic range $0.023<x<0.6$ and $1 (GeV/c)~2 < Q~2 <15 (GeV/c)~2$. The integral $\int_{0.023}~{0.6} g_1~n(x) dx$ evaluated at a fixed $Q~2$ of $2.5 (GeV/c)~2$ is $-0.034\pm 0.013(stat.)\pm 0.005(syst.)$. Assuming Regge behavior at low $x$, the first moment $\Gamma_1~n=\int_0~1 g_1~n(x) dx$ is $-0.037\pm 0.013(stat.)\pm 0.005(syst.)\pm 0.006(extrapol.)$. 2 data tables No description provided. Data extrapolated to full x region. Second systematic error is the error on this extrapolation. #### Hadronization in semi-inclusive deep-inelastic scattering on nuclei The collaboration Airapetian, A. ; Akopov, N. ; Akopov, Z. ; et al. Nucl.Phys. B780 (2007) 1-27, 2007. Inspire Record 749249 A series of semi-inclusive deep-inelastic scattering measurements on deuterium, helium, neon, krypton, and xenon targets has been performed in order to study hadronization. The data were collected with the HERMES detector at the DESY laboratory using a 27.6 GeV positron or electron beam. 
Hadron multiplicities on nucleus A relative to those on the deuteron, R_A^h, are presented for various hadrons (\pi^+, \pi^-, \pi^0, K^+, K^-, p, and \bar{p}) as a function of the virtual-photon energy \nu, the fraction z of this energy transferred to the hadron, the photon virtuality Q^2, and the hadron transverse momentum squared p_t^2. The data reveal a systematic decrease of R_A^h with the mass number A for each hadron type h. Furthermore, R_A^h increases (decreases) with increasing values of \nu (z), increases slightly with increasing Q^2, and is almost independent of p_t^2, except at large values of p_t^2. For pions two-dimensional distributions also are presented. These indicate that the dependences of R_A^{\pi} on \nu and z can largely be described as a dependence on a single variable L_c, which is a combination of \nu and z. The dependence on L_c suggests in which kinematic conditions partonic and hadronic mechanisms may be dominant. The behaviour of R_A^{\pi} at large p_t^2 constitutes tentative evidence for a partonic energy-loss mechanism. The A-dependence of R_A^h is investigated as a function of \nu, z, and of L_c. It approximately follows an A^{\alpha} form with \alpha \approx 0.5 - 0.6. 228 data tables PI+ multiplicty ratio (Helium/Deuterium) as a function of NU. K+ multiplicty ratio (Helium/Deuterium) as a function of NU. P multiplicty ratio (Helium/Deuterium) as a function of NU. More…
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9725467562675476, "perplexity": 3844.2158666950754}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578616424.69/warc/CC-MAIN-20190423234808-20190424020808-00409.warc.gz"}
http://meta.tex.stackexchange.com/questions/1272/why-doesnt-maths-render-as-maths
Why doesn't maths render as maths? On some other SE sites, code in between dollar signs gets rendered as mathematics (using MathJaX, I believe). This doesn't seem to work here? Why not? And how do I get round it? - See this question. You should know the answer to this, since you requested the behaviour :) –  Lev Bishop Apr 18 '11 at 18:02 @Lev: My apologies, I meant to leave a comment on this but forgot. This is a "FAQ" question - I don't want to know the answer but I want to be able to point people to this question if they wonder about it. –  Loop Space Apr 18 '11 at 18:11 This online LaTeX-to-png converter by Kyle Woodward can be helpful for short runs: 1.618034.com/latex.php –  episanty Feb 18 '13 at 18:03 On this site, we want to see the actual code far more often than we want to see the rendered output, so MathJaX is not enabled for this site. If you want to show the result of some input, you need to create an image of the output and upload it. One of the simplest methods of getting an image from your code is to use the standalone package (see Compile a latex document into a png image that's as short as possible. for more details). To upload it, click on the "add image" button at the top of the text box (the box symbol next to the one with the 1s and 0s) and, if you have at least 10 reputation points, you will be able to upload the image and have it embedded in your question/answer. - –  naught101 Apr 3 '12 at 4:12 Also, MathJaX does not use TeX and thus renders differently and doesn't support all of (La)TeX and its packages. –  Caramdir Jun 1 '12 at 3:34 People wanting to show ConTeXt output can use \startTEXpage...\stopTEXpage. The resulting PDF will be exactly as large as the content. You can upload the result directly --- tex.se takes care of the conversion to PNG. –  Esteis Dec 8 '12 at 13:52 As someone who includes images in almost all of his questions and answers, can I say that this behaviour is annoying and IMHO basically unjustified? Make the escape code as complicated as you will but why don't you let people decide whether they want to show code, exactly rendered result or just the ballpark? In this question of mine I had to fiddle around with HTML markup and I really missed math.SX who use TeX where we don't tex.stackexchange.com/q/102149/13450 –  Christian Mar 12 '13 at 17:18 @Christian I'm sorry you feel like that, but your last remark shows the source of the problem. Maths-SX does not use TeX. They use MathJaX. MathJaX is not TeX. I don't know a lot about MathJaX, but I don't think it supports siunitx and certainly doesn't support pgfmath. So MathJaX wouldn't have helped you and you would still have had to fiddle around with the HTML markup (not that I understand what fiddling you needed to do, sorry for being dense). What is so hard about uploading a screenshot? –  Loop Space Mar 12 '13 at 19:15 @AndrewStacey Well, nothing is hard about uploading a screenshot. It's just time consuming and it's actually more time consuming then fiddling around with unicode chars and markup. When I said "TeX", I meant TeX markup/syntax, not one of the actual kernels. Now I'm going to add an answer because I need to upload a screenshot ;) –  Christian Mar 12 '13 at 19:22 Just as a follow-up to the exchange with Andrew in the other answer, this is what my fiddling vs. what I would have like looks on tex.SE compared to math.SX: No, it's not dramatic that this doesn't work and I could help myself with simple markup and by digging for that $\times$ sign. 
My point is, that it is a simple convenience that is already implemented, whose availability doesn't hurt anybody and creating an actual TeX file for this, compiling it, converting it to a PNG file and uploading it is so overkill for this use case. And I'm not even sure how well inline images work here. And concerning the HTML/unicode solution compared to the TeX syntax: the latter might not be rendered by TeX but it still looks much better than what I came up with. Yes, I might have fetched a real minus sign from some unicode table, too, but the more work I have to invest to make such a simple thing look decent, the stronger – I feel – my point becomes really. Just as a convenience, here's my question again that sparked these posts: Omit zeros before the decimal point and convert scientific notation in siunitx - I'd say an image isn't really essential in your case, as a 'reference' version can be constructed happily in TeX code: $.80$ $-.12$, etc. Also it's about common cases: tex-sx is not a site about maths, it's about typesetting, and most of the time that's not maths. –  Joseph Wright Mar 13 '13 at 8:07 In this case I feel that you want to use MathJaX to produce a particular rendering on the screen and the fact that MathJaX emulates TeX is irrelevant to this - it is simply convenient because you know it. Moreover, your question (as I read it) isn't about the look of the output but its format, in which case getting the right unicode characters isn't so important. Even simply writing 4.44 x 10^{-16} would do to show what output you want. So I'm afraid that I disagree with your conclusion and still think that MathJaX would cause more hassle than its worth. –  Loop Space Mar 13 '13 at 10:00 @AndrewStacey You are right in your analysis that it's a mere convenience for the author and the reader of a post but so is TeX itself. It's not like an ugly book doesn't convey the same information as one with beautiful typesetting. And yes, it's just for math and it just makes sense because people on the site already know the syntax. I could accept this as a simple difference in opinion if I understood what exactly the hassle is that MathJaX would cause. You only said "people don't want this" and "there are alternatives", none of which is about hassle. –  Christian Mar 13 '13 at 11:25 This particular question-and-answer was meant as a quick reference for people to link to. There was more discussion on meta and my arguments can be found at meta.tex.stackexchange.com/q/7/86 –  Loop Space Mar 13 '13 at 12:14 @AndrewStacey Thanks for the pointer. None of the arguments there apply but I'll shut up anyway because it's pretty clear that you've made up your mind and I'm not going to change that. I can use the time I saved by not discussing this further to make like a million screenshots and figure out how to inline them. –  Christian Mar 13 '13 at 20:01 When I first started hanging out here, having no mathjax support seemed stupid but as time passed by I'm now very very happy that it's not turned on. Especially the problems we had with a few of new users here it would have been a true nightmare. I understand why you feel this choice is dumb but I'm almost sure that your views will change especially when you start spotting the difference of MathJax and TeX from a distance and seeing how people force themselves for the proper commands to communicate in TeX lingo. 
–  percusse Mar 13 '13 at 22:31 @percusse I'm sure I'll one day start to see the woman in the red dress but that still doesn't make such a post user-friendly to beginners, i.e. people who have been using TeX for less than 20 years or so :) –  Christian Mar 14 '13 at 7:31 @Christian: I think you'll find that once you set up a system for easily taking screenshots, the problem basically disappears. For example, on Ubuntu I just press the Windows key and draw a rectangle around the desired area, and it automatically saves that as an image in a predefined folder. Click "Upload image", select the file, done. Since you're going to have a LaTeX editor and PDF viewer open when working on a question anyway, this really only requires minimal effort. –  Jake Mar 16 '13 at 13:21 @Jake Yeah well, up until now, I much prefer a workflow that is independent on screen resolution and subpixel rendering by using the Gimp to render a PDF page into an image. Should I more often need a quick and ugly screenshot to cater to the needs of TeX.SE, I might need to set up such a mechanism as well. It would not make using math that I don't yet have in a TeX file much more easy to produce. –  Christian Mar 16 '13 at 14:19 I could combine your suggestion with a system that makes producing math from TeX syntax more easy ... like math.SE, thereby utterly perverting the intentions of this artificial restriction which – I must confess – would fill me with great joy and deep satisfaction. –  Christian Mar 16 '13 at 14:21
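To make the accepted answer's workflow concrete: a document built on the standalone class is already small enough to paste next to a question, compile, and upload as an image. The snippet below is only a sketch; the border option and the siunitx-style number are illustrative assumptions, not something posted in the thread.

```latex
% Minimal sketch of the "standalone" route from the accepted answer.
\documentclass[border=2pt]{standalone}
\usepackage{amsmath,siunitx}
\begin{document}
$\num{4.44e-16}$ % e.g. a rendered siunitx number, as in the linked siunitx question
\end{document}
```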
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8581365942955017, "perplexity": 984.7812689652477}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1409535917663.12/warc/CC-MAIN-20140901014517-00167-ip-10-180-136-8.ec2.internal.warc.gz"}
http://mathhelpforum.com/advanced-algebra/73612-span-print.html
# Span • February 14th 2009, 01:10 PM krepka Span Let W be the set of all (X1, X2, X3, X4, X5) in R5 which satisfy 2X1-X2+4/3X3-X4=0 X1+2/3X3-X5=0 9X1-3X2+6X3-3X4-3X5=0 Find a finite set of vectors which spans W Does this mean I have to find the vectors such that c1V1+c2V2+c3V3+c4V4+c5V5=(X1, X2, X3, X4, X5) for some uknown V? And if so, what's the best way to go about it, by row-reducing the matrix above? Thanks. • February 14th 2009, 02:17 PM HallsofIvy Quote: Originally Posted by krepka Let W be the set of all (X1, X2, X3, X4, X5) in R5 which satisfy 2X1-X2+4/3X3-X4=0 X1+2/3X3-X5=0 9X1-3X2+6X3-3X4-3X5=0 Find a finite set of vectors which spans W Does this mean I have to find the vectors such that c1V1+c2V2+c3V3+c4V4+c5V5=(X1, X2, X3, X4, X5) for some uknown V? And if so, what's the best way to go about it, by row-reducing the matrix above? Thanks. Since there is no "V" in what you give, I have to answer no! But if your question is "Do I have to find vectors (not necessarily 5) so that every (X1, X2, X3, X4, X5) can be written as a linear combination of them (c1V1+ c2V2+ ...)" then the answer is yes. In fact, because you have 5 variables with 3 equations you will need at least 5- 3= 2 such vectors and the equations are dependent, up to 4. And I see now that math2009 has shown that they are NOT independent. 3 vectors are required yes, you could row-reduce a matrix but I would recommend this: use the three equations to solve for three of the "X"s in terms of the other two, say solve for X3, X4, and X5 in terms of X1 and X2. Then setting X1= 1, X2= 0 will give you one vector in that set and setting X1= 0, X2= 1 will give you another. The choices of "1, 0" then "0, 1" ensure those vectors are independent so they will span the set (in fact they form a basis for the subspace). • February 14th 2009, 03:32 PM math2009 $\vec{x} \in W\ ,W\ is\ subspace\ of\ R^5\ ,A=\begin{bmatrix} 2&-1&\frac{4}{3}&-1&0 \\1&0&\frac{2}{3}&0&-1 \\9&-3&6&-3&-3 \end{bmatrix}\ , A\vec{x}=\vec{0},\ W=ker(A)$ We convert the problem "find W" to "find kernel of matrix A". $rref(A)=\begin{bmatrix} 1&0&\frac{2}{3}&0&-1 \\0&1&0&1&-2 \\ 0&0&0&0&0 \end{bmatrix}\ , rank(A)=2,dim(ker(A))=5-rank(A)=3$, we need at least 3 linear independent vectors to span $W$ $W=\begin{bmatrix} -\frac{2}{3}m+t\\ -n+2t \\m \\n \\t \end{bmatrix}=m\begin{bmatrix} -\frac{2}{3}\\ 0 \\1 \\0 \\0 \end{bmatrix}+n\begin{bmatrix} 0\\ -1 \\0 \\1 \\0 \end{bmatrix}+t\begin{bmatrix} 1\\ 2 \\0 \\0 \\1 \end{bmatrix}=span(\begin{bmatrix} -\frac{2}{3}\\ 0 \\1 \\0 \\0 \end{bmatrix},\begin{bmatrix} 0\\ -1 \\0 \\1 \\0 \end{bmatrix},\begin{bmatrix} 1\\ 2 \\0 \\0 \\1 \end{bmatrix})$ • February 14th 2009, 05:18 PM krepka Thanks for the replies. Following your method, HallsofIvy, I got (1,0,0,2,1);(0,1,0,-1,0); and (0,0,1,4/3,2/3). These are different from math2009's answers. Given that I don't yet know what "the kernel of A" is, I'm going to trust that the vectors I found form the basis for the subspace. Thanks again. 
• February 14th 2009, 07:47 PM math2009 $\begin{bmatrix} 1\\ 0 \\0 \\2 \\1 \end{bmatrix}=2\begin{bmatrix} 0\\ -1 \\0 \\1 \\0 \end{bmatrix}+\begin{bmatrix} 1\\ 2 \\0 \\0 \\1 \end{bmatrix} \ , \ \begin{bmatrix} 0\\ 1 \\0 \\-1 \\0 \end{bmatrix}=-\begin{bmatrix} 0\\ -1 \\0 \\1 \\0 \end{bmatrix}$ , $\begin{bmatrix} 0\\ 0 \\1 \\ \frac{4}{3} \\ \frac{2}{3} \end{bmatrix}=\begin{bmatrix} -\frac{2}{3}\\ 0 \\1 \\0 \\0 \end{bmatrix}+\frac{4}{3} \begin{bmatrix} 0\\ -1 \\0 \\1 \\0 \end{bmatrix}+\frac{2}{3} \begin{bmatrix} 1\\ 2 \\0 \\0 \\1 \end{bmatrix} $ It means we have the same solution.
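For anyone who wants to double-check either basis outside the thread, a computer algebra system gives the kernel directly. This short SymPy sketch (not part of the original discussion) confirms that dim ker(A) = 3 and that every basis vector is annihilated by A:

```python
from sympy import Matrix, Rational

# Coefficient matrix of the three linear constraints on (x1, ..., x5).
A = Matrix([
    [2, -1, Rational(4, 3), -1,  0],
    [1,  0, Rational(2, 3),  0, -1],
    [9, -3, 6,              -3, -3],
])

basis = A.nullspace()              # spanning set for W = ker(A)
print(len(basis))                  # 3, since rank(A) = 2 and 5 - 2 = 3
for v in basis:
    assert A * v == Matrix.zeros(3, 1)
    print(v.T)
```

Both bases given above are correct: any two bases of the same 3-dimensional kernel differ by an invertible change of coordinates, which is exactly what the last post verifies by writing one set of vectors as combinations of the other.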
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 6, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.805233359336853, "perplexity": 524.9908134032037}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657123996.28/warc/CC-MAIN-20140914011203-00345-ip-10-196-40-205.us-west-1.compute.internal.warc.gz"}
http://math.stackexchange.com/users/23431/pbs?tab=activity&sort=all&page=10
pbs less info reputation 619 bio website location age member for 2 years, 8 months seen 3 hours ago profile views 256 App developer, Research Associate, and Lecturer of and dabbler in Mathematics. 712 Actions Mar26 accepted Why does this product diverge? Mar26 comment Why does this product diverge? Ok, thanks all I think I've cleared this up now. Basically it all depends on what we mean by an infinite product. Using the standard "sequence of partial products" then we have divergence, but if we explicitly state @Sabyasachi's intention, $$\lim_{n\to\infty}\prod_{k=1}^{2n}a_n,$$ then we have convergence. Delicate! Mar26 comment Why does this product diverge? This is where I misunderstand. The upper index in the product is always $2n$, so by definition the product is never defined with an upper index of $2n+1$. Maybe this is one of those "murky" $\infty$ areas since $\infty$ is not a natural number? ... Mar26 asked Why does this product diverge? Mar26 comment Necessary/sufficient conditions for an infinite product to be exactly equal to $1$ When $p=1$, taking logarithms gives $\log p=0$ not $\log p=\infty$. Is that a typo? Mar26 revised Necessary/sufficient conditions for an infinite product to be exactly equal to $1$ added 82 characters in body Mar25 comment Necessary/sufficient conditions for an infinite product to be exactly equal to $1$ Yes, I thought so. Thanks. Is this still the case if the $a_n$ are monotone decreasing or increasing? Mar25 asked Necessary/sufficient conditions for an infinite product to be exactly equal to $1$ Mar19 revised Find $(a+ib)^{492}$ given that $(a+ib)^{493}=1$ added 5 characters in body Mar13 comment Logarithmic quotient Given the distinction made, what base logarithm is $\log$ here ? Also - you're missing a bracket second line up from the bottom. Mar13 comment Describing the sequence A224239. Analytic Combinatorics may be of help. Try constructing a class of combinatorial objects you require and then apply a transfer function to obtain a generating function which you can solve explicitly or asymptotically. Mar9 comment Help finding value of x in logarithms? Presumably you $\log$ is base $10$? Mar9 comment Help finding value of x in logarithms? Raise both sides to the power $1/8.4$. Mar8 comment Can every definite integral be computed symbolically? But what about when $b=\infty$... Mar4 revised if $f(x) = \int_{t=1}^{t=x^2} t\sin^2(t)\operatorname d\!t$ then $\frac{\operatorname d\!f(x)}{\operatorname d\!x}=?$ added 2 characters in body Mar4 suggested suggested edit on Is the intersection empty? Feb28 accepted Solution of “quadratic equation” involving functional coefficients. Feb27 revised Solution of “quadratic equation” involving functional coefficients. added 12 characters in body Feb27 revised Solution of “quadratic equation” involving functional coefficients. edited body Feb27 asked Solution of “quadratic equation” involving functional coefficients.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9015511870384216, "perplexity": 1054.6367087036288}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657137145.1/warc/CC-MAIN-20140914011217-00086-ip-10-234-18-248.ec2.internal.warc.gz"}
http://math.stackexchange.com/questions/202760/prove-the-trigonometric-identity?answertab=oldest
# Prove the trigonometric identity

$$\cos^2\alpha-\cos^4\alpha+\sin^4\alpha=\frac{1}{2}-\frac{1}{2}\cos2\alpha$$

- Welcome to math.SE: since you are new, I wanted to let you know a few things about the site. In order to get the best possible answers, it is helpful if you say in what context you encountered the problem, and what your thoughts on it are; this will prevent people from telling you things you already know, and help them give their answers at the right level. –  Julian Kuelshammer Oct 14 '12 at 13:25

Use the formulas: 1) $1-\cos^2\alpha=\sin^2\alpha$ 2) $\cos2\alpha=\cos^2\alpha-\sin^2\alpha$ 3) $1=\sin^2\alpha+\cos^2\alpha$

Now turn to the proof of the given identity.

$\cos^2\alpha-\cos^4\alpha+\sin^4\alpha=\frac{1}{2}-\frac{1}{2}\cos2\alpha$

$\cos^2\alpha(1-\cos^2\alpha)+\sin^4\alpha=\frac{1}{2}(1-\cos2\alpha)$

$\cos^2\alpha\sin^2\alpha+\sin^4\alpha=\frac{1}{2}(\sin^2\alpha+\cos^2\alpha-\cos^2\alpha+\sin^2\alpha)$

$\sin^2\alpha(\cos^2\alpha+\sin^2\alpha)=\frac{1}{2}\cdot 2\sin^2\alpha$

$\sin^2\alpha=\sin^2\alpha$

- This is badly formatted: one says in effect that if a certain equality holds, then $\sin^2\alpha=\sin^2\alpha$, and concludes that that equality holds. There should be "$=$" between, for example, $\cos^2\alpha-\cos^4\alpha+\sin^4\alpha$ and the thing on the line after it, $\cos^2\alpha(1-\cos^2\alpha)+\sin^4\alpha$, and so on. –  Michael Hardy Sep 26 '12 at 13:09

Use the identities $\sin^2\alpha+\cos^2\alpha=1$ and $\cos2\alpha=1-2\sin^2\alpha$. Since $\cos^2\alpha-\cos^4\alpha=\cos^2\alpha(1-\cos^2\alpha)=\cos^2\alpha\cdot\sin^2\alpha$, we get

$$\cos^2\alpha-\cos^4\alpha+\sin^4\alpha=\cos^2\alpha\cdot\sin^2\alpha+\sin^4\alpha$$ $$=\sin^2\alpha(\cos^2\alpha+\sin^2\alpha)$$ $$=\sin^2\alpha=\frac{1-\cos2\alpha}{2}$$

- $$\cos^2\alpha-\cos^4\alpha+\sin^4\alpha=\cos^2\alpha+(\sin^4\alpha-\cos^4\alpha)=$$ $$=\cos^2\alpha+(\sin^2\alpha+\cos^2\alpha)(\sin^2\alpha-\cos^2\alpha)=\cos^2\alpha+\sin^2\alpha-\cos^2\alpha=$$ $$=\sin^2\alpha=1/2-1/2\cos2\alpha$$ Over!

- Yes, you are right! –  Riemann Sep 26 '12 at 11:46 This is a better answer than the "accepted" one. –  Michael Hardy Sep 26 '12 at 13:09
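As an independent check (not part of the thread), a computer algebra system reduces the difference of the two sides to zero; a minimal SymPy sketch:

```python
from sympy import symbols, cos, sin, simplify, Rational

a = symbols('alpha', real=True)
lhs = cos(a)**2 - cos(a)**4 + sin(a)**4
rhs = Rational(1, 2) - Rational(1, 2) * cos(2 * a)
print(simplify(lhs - rhs))   # prints 0, so the identity holds for every alpha
```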
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8776759505271912, "perplexity": 729.63344592355}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207929656.4/warc/CC-MAIN-20150521113209-00020-ip-10-180-206-219.ec2.internal.warc.gz"}
http://lists.gnu.org/archive/html/emacs-devel/2004-06/msg00703.html
emacs-devel [Top][All Lists] ## Re: query-replace-interactive not documented From: Richard Stallman Subject: Re: query-replace-interactive not documented Date: Wed, 16 Jun 2004 12:57:42 -0400 If there are always to be parens around, then there would be no need for !. One could just write \\footnote{\\label{fn:$$+ replace-count$$} I prefer the \! syntax, since it does not mess up the parens. ! is used in many programs to mean "execute" (usually with a shell command, but we can ignore that). Using \' does not seem to make sense, since this is not quoting.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9348267316818237, "perplexity": 4272.283548899581}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657134511.2/warc/CC-MAIN-20140914011214-00299-ip-10-234-18-248.ec2.internal.warc.gz"}
https://math.stackexchange.com/questions/2356410/trouble-with-pumping-lemma
# Trouble with Pumping Lemma I need to know if this language $$L = \{ \ (a^2b^2c^2)^n \mid n > 0\ \}$$ is regular or not. Since it is trivial to design an FSA with a loop that accepts that language, it is regular. For example here is my FSA that accept language L using draw.io I'm trying to prove that it is regular by using the pumping lemma. My try: Suppose $L$ is regular. I choose $3$ as pumping length $P$ $$|S| \geqq P$$ Using my string $S = aabbcc\ aabbcc\ aabbcc$ The first block is $x$, the second is $y$ and the third is $z.$ I pump $y$ in $yy$ and I got $aabbcc\ aabbcc\ aabbcc\ aabbcc$ And this satisfies the first condition because $xy^iz \in L$ The second is satisfied also because $|y|>0.$ But I need some help for the third condition that is $$|xy|\leq P$$ $|xy|$ is the length of the first string or the second, where I applied the pumping lemma? How can I calculate this? Since $P$ is $3$ (because it is chosen by me) I can't verify this language. Any help appreciated. Thank you. As a preliminary observation, your automaton assumes that the alphabet is $\{aa,bb,cc\}$. If the alphabet is $\{a,b,c\}$ instead, you need seven states. With the preliminary out of the way, the pumping lemma for regular languages is typically used to prove that a language is not regular (as noted by @ChistianIvicevic). All you can do if the language is regular is to verify that it satisfies the conditions spelled out in the lemma. Let's see why you are having difficulty doing that. The lemma says that for a regular language $L$ there exists a constant $P$, which is known as the pumping length, such that some condition that depends on $P$ is true. This means that in verifying that the pumping lemma holds for $L$, you don't get to pick an arbitrary $P$: you have to find a value that works. If you have an automaton for $L$ with $n$ states, it's easy: $P$ should be at least $n$. (Why? Because the pumping lemma rests on a simple application of the pigeonholing principle: if there are more letters in an accepted word than states in the automaton, at least one state will be visited twice when the automaton reads the word.) Assuming your automaton has four states (that is, $aa$ is a single letter) $P=3$ is not a good choice. With $P=4$ you can still choose the same $S$ as you did, though $S' = aabbcc~aabbcc$ is enough for demonstration purposes. Now you can split $S'$ into $x=\epsilon$, $y=aabbcc$, and $z=aabbcc$ and verify that all conditions are met. Alternatively, you can split $S$ into $x=\epsilon$, $y=aabbcc$, and $z=aabbcc~aabbcc$ and verify that all conditions are met. The lemma says that there exists $P$ (you have to pick a good one) such that for any $S$ in $L$ of length at least $P$ (here you get to choose freely among the words in the language that are long enough) there is a way to write $S$ as $xyz$ (once again, you have to split correctly, not arbitrarily) such that all conditions are met. In your example, you chose $P$ and the way to split $S$ arbitrarily. Therefore the fact that not all the conditions on $x,y,z$ were satisfied does not contradict the pumping lemma. If you have an automaton for $L$, as you do, not only picking $P$, but also splitting a word in $L$ is easy: just run it through the automaton and find the first state visited more than once. Then $x$ is the prefix of the word that takes the automaton to that state the first time, $y$ is the segment of the word that causes the automaton to loop back to that state, and $z$ is the rest of the word. 
Your language is $(aabbcc)^+$ (= $aabbcc(aabbcc)^*$ if you prefer). Since it is given by a regular expression, it is regular. No need of computing an automaton for that. The pumping lemma states that all regular languages satisfy certain conditions but the converse it not true. A language that satisfies mentioned conditions may still be non-regular. Thus the pumping lemma is a necessary but not sufficient condition. To prove a language is regular you can either: 1. Construct an NFA that accepts the language. 2. Construct a regular expression that describes all words from the language. 3. Construct a regular grammar that matches the language. You just have to show correctness in each variant.
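A small self-check, not taken from the thread, ties the two answers together: treat the language as the regular expression (aabbcc)^+ over the alphabet {a,b,c} (seven live states, so a pumping length of P = 7 works), cut the word at the first repeated state as the first answer suggests, and confirm that every pumped word is still accepted.

```python
import re

L = re.compile(r'(aabbcc)+')                  # the regular expression from the second answer

def in_L(w: str) -> bool:
    return L.fullmatch(w) is not None

# The first repeated state of the 7-state automaton is reached after reading 'a'
# and again after 'aabbcca', so this decomposition respects |xy| <= 7 and |y| > 0.
S = 'aabbcc' * 2
x, y, z = S[:1], S[1:7], S[7:]                # x = 'a', y = 'abbcca', z = 'abbcc'
assert len(x + y) <= 7 and len(y) > 0

for i in range(5):                            # pumping y keeps the word in the language
    assert in_L(x + y * i + z), (i, x + y * i + z)
print("x y^i z is in L for i = 0..4, as the pumping lemma requires")
```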
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.85892653465271, "perplexity": 149.39274386177414}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232256215.47/warc/CC-MAIN-20190521022141-20190521044141-00155.warc.gz"}
https://math.stackexchange.com/questions/1925710/necessary-and-sufficient-conditions-nehari
# Necessary and Sufficient Conditions - Nehari I want to find necessary and sufficient conditions in order that the following problem has a solution: Given complex numbers $a_0,\ldots,a_n$ consider the problem to find all complex function $f$, $$f(\lambda)=\sum_{\nu=0}^{\infty}{\lambda^\nu f_\nu},$$ satisfying the following three conditions: • $f_j=a_j$ for $j=0,\ldots,n$; • $\sum_{\nu=0}^{\infty}|f_\nu|<\infty$; • $\sup_{\lambda\in \mathbb{D}}|f(\lambda)|<1,$ where $\mathbb{D}$ is the open unit disk in the complex plane. First, I had no idea how to start, however after some research I found out that this problem looks similar to a Nehari extension problem. see: https://www.encyclopediaofmath.org/index.php/Nehari_extension_problem Apparently, a lot is known about the Nehari extension problem. So, I have tried to reduce my problem to a Nehari EP to use the existence theorems, but have had no success. How can I transform this system into a Nehari extension problem? Any hints are appreciated. • Do you mean $\sup_\lambda |f(\lambda)|$? Supremum over what set? – Robert Israel Sep 13 '16 at 20:52 • @RobertIsrael Sorry. See edit. – KayL Sep 13 '16 at 21:00
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8465301394462585, "perplexity": 290.2486042765082}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027315321.52/warc/CC-MAIN-20190820092326-20190820114326-00254.warc.gz"}
https://icml.cc/Conferences/2021/ScheduleMultitrack?event=12804
Subgaussian Importance Sampling for Off-Policy Evaluation and Learning

Alberto Maria Metelli · Alessio Russo · Marcello Restelli

Importance Sampling (IS) is a widely used building block for a large variety of off-policy estimation and learning algorithms. However, empirical and theoretical studies have progressively shown that vanilla IS leads to poor estimations whenever the behavioral and target policies are too dissimilar. In this paper, we analyze the theoretical properties of the IS estimator by deriving a probabilistic deviation lower bound that formalizes the intuition behind its undesired behavior. Then, we propose a class of IS transformations, based on the notion of power mean, that are able, under certain circumstances, to achieve a subgaussian concentration rate. Differently from existing methods, like weight truncation, our estimator preserves the differentiability in the target distribution.
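For readers unfamiliar with the baselines being improved upon, the sketch below shows vanilla importance sampling and the weight-truncation correction mentioned in the abstract on a toy Gaussian example, with all numbers invented for illustration. The paper's power-mean transformation itself is not reproduced here, since its exact form is not given in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: behavior policy b = N(0, 1), target policy pi = N(1, 1), reward r(a) = a,
# so the true target value is E_pi[a] = 1. Everything here is an illustrative stand-in.
a = rng.normal(0.0, 1.0, size=10_000)           # actions sampled from the behavior policy
r = a                                           # observed rewards

log_w = (a ** 2 - (a - 1.0) ** 2) / 2.0         # log( pi(a) / b(a) ) for the two Gaussians
w = np.exp(log_w)                               # vanilla importance weights

vanilla_is   = np.mean(w * r)                   # unbiased but heavy-tailed
truncated_is = np.mean(np.minimum(w, 10.0) * r) # truncation trades variance for bias

print(f"vanilla IS   : {vanilla_is:.3f}")
print(f"truncated IS : {truncated_is:.3f}   (true value = 1.000)")
```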
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8731130361557007, "perplexity": 945.9102902387538}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711552.8/warc/CC-MAIN-20221209213503-20221210003503-00834.warc.gz"}
https://physics.stackexchange.com/questions/453616/derive-porosity-from-stiffness-value
# Derive Porosity from Stiffness Value I have the Young's modulus (stiffness) values of a hydro-gel scaffolds, and I need to know the porosity of those scaffolds. I read about Nielsen Equation, which is this: $$E = \frac{E_0 (1 - P) ^ 2}{1 + \frac{P}{f - 1}}$$ where $$E$$ is Young’s modulus, $$E_0$$ is Young’s modulus for pore-less sample, $$P$$ is volume fraction of porosities, and $$f$$ is shape factor. I have the following questions: 1. How to get the values of $$f$$ and $$E_0$$? 2. Any other methods to calculate porosity given material stiffness? • Although I don't have it on hand to check, you may find Gibson and Ashby's Cellular Solids helpful, as it links scaffold geometries to mechanical properties. – Chemomechanics Jan 11 at 22:40
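On question (2): whatever route is used to pin down E_0 (for example a pore-free control sample or a literature value) and the shape factor f, once those two numbers are assumed the Nielsen equation can simply be inverted numerically for P, since its right-hand side decreases monotonically from E_0 to 0 as P goes from 0 to 1 (for f > 1). The sketch below uses made-up values purely to illustrate the inversion; it does not answer how to measure E_0 or f.

```python
from scipy.optimize import brentq

def nielsen_modulus(P, E0, f):
    """Young's modulus predicted by the Nielsen equation for porosity fraction P."""
    return E0 * (1.0 - P) ** 2 / (1.0 + P / (f - 1.0))

def porosity_from_modulus(E, E0, f):
    """Invert the Nielsen equation for P in (0, 1) by bracketed root finding."""
    return brentq(lambda P: nielsen_modulus(P, E0, f) - E, 1e-9, 1.0 - 1e-9)

# Hypothetical numbers for illustration only (units just need to be consistent):
E_measured = 12.0    # measured scaffold stiffness
E0_assumed = 40.0    # assumed pore-free modulus
f_assumed  = 1.5     # assumed shape factor (> 1)

print(f"P = {porosity_from_modulus(E_measured, E0_assumed, f_assumed):.3f}")
```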
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 7, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8154176473617554, "perplexity": 1064.7085766139514}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912203021.14/warc/CC-MAIN-20190323201804-20190323223804-00540.warc.gz"}
https://quantumcomputing.stackexchange.com/questions/8685/secret-sharing-though-quantum-operations
# Secret sharing though quantum operations I have a secret say $$s$$. I have a dealer $$D$$ and three participants $$A, B, C$$. I want to share this secret $$s$$ in such a way that the participation of all $$3$$ is essential to reconstruct the secret. Now for creating the shares, I use some classical sharing algorithms and create shares $$s_A,s_B,s_C$$. Now how do I distribute these shares among the participants quantum mechanically using qudits? What I thought is the following steps. First, let the basis be $$\{|0\rangle, |1\rangle,.....,|d-1\rangle\}.$$ Now since each of the participant $$A, B, C$$ has his/her share, one of them starts the reconstruction process by first preparing a $$|0\rangle$$ and taking its Fourier transform, so I get $$|\phi\rangle_1=\sum_{y=0}^{d-1}|y\rangle_1$$Now the next step is to initialize two $$|0\rangle$$ states and perform the CNOT gate on them with the first qudit as the control, so to get $$|\phi\rangle_2=\sum_{y=0}^{d-1}|y\rangle_1|y\rangle_2|y\rangle_3$$After this step we perform the Quantum Fourier transformation on all the particles to get $$|\phi\rangle_3=\sum_{y=0}^{d-1}\sum_{k_1=0}^{d-1}\sum_{k_2=0}^{d-1}\sum_{k_3=0}^{d-1}\omega^{(k_1+k_2+k_3)y}|k_1\rangle_1|k_2\rangle_2|k_3\rangle_3$$ Now since the summation is finite i rearrange the terms to get $$|\phi\rangle_3=\sum_{k_1=0}^{d-1}\sum_{k_2=0}^{d-1}\sum_{k_3=0}^{d-1}\sum_{y=0}^{d-1}\omega^{(k_1+k_2+k_3)y}|k_1\rangle_1|k_2\rangle_2|k_3\rangle_3$$ With $$\sum_{i=0}^{d-1}\omega^i=0$$, we have the condition that the state left after this operation will be subject to the condition that $$k_1+k_2+k_3=0\;mod\;d$$ , we will have $$|\phi\rangle_3=\sum_{k_1=0}^{d-1}\sum_{k_2=0}^{d-1}\sum_{k_3=0}^{d-1}|k_1\rangle_1|k_2\rangle_2|k_3\rangle_3$$ now after preparing this state each participant $$A,B,C$$ applies a transformation $$U_{s_B},U_{s_A},U_{s_C}$$ which gives the state as $$|\phi\rangle_3=\sum_{k_1=0}^{d-1}\sum_{k_2=0}^{d-1}\sum_{k_3=0}^{d-1}|k_1+s_A\rangle_1|k_2+s_B\rangle_2|k_3+ s_C\rangle_3$$ After peparing this state the state is returned by the participants to the dealer who measures state for the shares and if it is right then announces the result/secret. Now my questions are: (i) Even though this is a very preliminary effort, can somebody tell me whether can we can actually do this? (ii) My second question is if this is possible then can we improve this scheme to achieve the condition for the detection of a fraudulent participant? Can somebody help?? There is one main key point in the description of your question: Is $$s$$ meant to be a classical secret or a quantum secret? If $$s$$ is meant to be a classical secret, then the answer is yes, but there is not really much quantum in the positive answer. If $$s_A$$, $$s_B$$, and $$s_C$$ are all $$d$$-state digits, then there is a simple construction that works in which $$s$$ is also a $$d$$-state digit. (There is also a simple argument that you cannot make $$s$$ any larger than this.) Namely, you should choose $$s_A$$ and $$s_B$$ uniformly and independently at random, and the then choose $$s_C$$ such that $$s = s_A + s_B + s_C$$ in the abelian group $$\mathbb{Z}/d$$. If you want to make this look quantum, then you can, because you store a digit in a qudit. You can turn $$s_A$$ into $$|s_A\rangle$$, etc. Then you are free to measure all three and take their sum, or just measure their sum. The problem with this answer is that you did more than necessary to share the secret. You only used the qudits as classical digits of the same size. 
This is like taking a million-dollar luxury car to the supermarket when you could have done the exact same thing with a \\$5,000 used car. Let's say instead that $$s$$ is meant to be a quantum secret $$|s\rangle$$. Then first of all, your language for extracting the secret is not correct. If the dealer measures everything to gain the secret, then the result cannot be a quantum state $$|s\rangle$$, because everything has been measured and all quantum superposition is then gone. Moreover, the shares must be entangled for this to work, so they are not separate states $$|s_A\rangle$$, $$|s_B\rangle$$, and $$|s_C\rangle$$, but rather a joint state $$|s_{ABC}\rangle$$. To extract $$|s\rangle$$, the dealer must carefully apply some unitary operator to the joint state to get out $$|s\rangle$$ as a piece of some larger state, without measuring $$|s\rangle$$ itself. So let's say those are the rules. We can go back and borrow a different concept from classical secret-sharing. Namely, instead of 3 parties we may have $$\ell$$ parties. Instead of saying that we need all of the parties together to learning everything and with any fewer we know nothing, we can have the weaker condition that for some $$t < \ell$$, any set of $$t$$ or fewer parties cannot say anything about the secret. Then there is a remarkable fact that a quantum secret-sharing with these rules is exactly the same thing as a quantum error-correcting code (QECC) of length $$\ell$$ using $$d$$-state qudits, with minimum error distance $$t+1$$. Classical error correction is in a natural sense dual to classical secret-sharing. Quantum error correction turns out to be a self-dual problem that is the same as quantum secret-sharing. If we take the original question in its quantum form, the question becomes finding a QECC of length 3 with $$d$$-state qudits and minimum distance 3. I don't think that any such code exists that can store any non-trivial information, although I would have to do some review to remember how to prove that. I don't even expect there to be such a code with minimum distance 2 (so that only each individual party has no glimpse of the secret) if you want to the secret to have as many as $$d$$ states when $$d=2$$. I can check this in the special class of additive codes, and I can say non-rigorously that additive codes are sometimes optimal. However, if the parameters are $$\ell = 3$$, $$t = 2$$, and $$d$$ an odd integer, then I think that there is an additive QECC of this type. We can assume that $$d=p$$ is prime (because otherwise we can factor $$d$$ and make separate codes). Then we can choose three non-zero exponents $$s,t,u \in \mathbb{Z}/p$$ that sum to zero, and we can make a quantum code that stores one qudit using the quantum parity checks $$X \otimes X \otimes X$$ and $$Z^s \otimes Z^t \otimes Z^u$$.
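The classical construction in the first part of the answer (pick s_A and s_B uniformly at random and set s_C so that s = s_A + s_B + s_C in Z/d) is short enough to write out. The sketch below covers only this classical-digit case, not the quantum secret or the QECC construction.

```python
import secrets

def share(s: int, d: int):
    """Split a classical secret s in Z/d into three additive shares."""
    s_a = secrets.randbelow(d)
    s_b = secrets.randbelow(d)
    s_c = (s - s_a - s_b) % d          # forces s_a + s_b + s_c = s (mod d)
    return s_a, s_b, s_c

def reconstruct(s_a: int, s_b: int, s_c: int, d: int) -> int:
    return (s_a + s_b + s_c) % d

d, s = 7, 5
shares = share(s, d)
assert reconstruct(*shares, d) == s
print(shares, "->", reconstruct(*shares, d))
# Any two of the three shares are jointly uniform on Z/d x Z/d and statistically
# independent of s, so no proper subset of the participants learns anything.
```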
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 63, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8916131258010864, "perplexity": 228.46829697281515}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154175.76/warc/CC-MAIN-20210801092716-20210801122716-00270.warc.gz"}
http://cvgmt.sns.it/paper/3219/
# Continuity for the Monge mass transfer problem in two dimensions created by santambro on 27 Oct 2016 modified on 29 Jul 2018 [BibTeX] Accepted Paper Inserted: 27 oct 2016 Last Updated: 29 jul 2018 Journal: Arch. Rati. Mech. An. Year: 2018 Notes: -- Second and strongly revised version -- Abstract: In this paper, we prove the continuity of the monotone optimal mapping of the Monge mass transfer problem in two dimensions under certain conditions on the domains and the mass distributions.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8804009556770325, "perplexity": 4662.546048784745}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583516480.46/warc/CC-MAIN-20181023153446-20181023174946-00443.warc.gz"}
https://community.plm.automation.siemens.com/t5/1D-Simulation-Forum/Combustion-chamber-of-CFM-model-didn-t-burn/td-p/356459
# Combustion chamber of CFM model didn't burn

Experimenter

Hi, I'm using the IFP library to build a model of a turbocharged CNG gasoline engine. But currently I've found that after I added the turbo group, the CFM combustion chamber didn't burn. All the parameters I've set seem normal. Are there other parameters that can affect the combustion deeply?

# Re: Combustion chamber of CFM model didn't burn

Siemens Valued Contributor

Hi,

Adding a turbocharger should not change the combustion phasing. It will increase the intake mass flow rate and so increase the volumetric efficiency. In the CFM combustion chambers, the turbulence maps (cut-off/tumble) are functions of the volumetric efficiency. I recommend to:

- check if the volumetric efficiency is correct,
- check if your turbulence maps are defined for those values (if not, a linear extrapolation is done and can lead to inconsistent values).

Cordially, Thomas
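The second check in the reply, whether the cut-off/tumble maps actually cover the volumetric efficiencies reached after adding the turbocharger, can also be done outside the simulation tool. The sketch below is generic and tool-agnostic (it is not Simcenter Amesim API code, and the numbers are invented); it only flags samples where the map would be linearly extrapolated.

```python
import numpy as np

def check_map_coverage(vol_eff_trace, map_vol_eff_axis):
    """Flag samples where the volumetric efficiency leaves the range on which the
    turbulence maps are defined, i.e. where linear extrapolation would be used."""
    lo, hi = float(np.min(map_vol_eff_axis)), float(np.max(map_vol_eff_axis))
    outside = (vol_eff_trace < lo) | (vol_eff_trace > hi)
    if outside.any():
        print(f"{int(outside.sum())} samples outside the map range [{lo:.2f}, {hi:.2f}]; "
              f"turbulence values there are extrapolated and may be inconsistent")
    return outside

# Hypothetical example: maps defined up to 1.0, turbocharging pushes vol. eff. above it.
map_axis = np.linspace(0.4, 1.0, 7)
vol_eff  = np.array([0.85, 0.95, 1.05, 1.20])
check_map_coverage(vol_eff, map_axis)
```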
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8423675894737244, "perplexity": 4617.023110602814}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084890795.64/warc/CC-MAIN-20180121155718-20180121175718-00176.warc.gz"}
http://mathhelpforum.com/algebra/154459-sigma-notation.html
# Math Help - sigma notation 1. ## sigma notation how to write in correct way: to calculate the sum of odd numbers between 1 and n; to calculate the sum of even numbers between 2 and n; n E (i+2) --> do you agree? i=2 2. I would say, for even numbers: $\sum_{i=0}^{n}(2i+2)$ and for odd: $\sum_{i=0}^{n}(2i+1)$ how to write in correct way: to calculate the sum of odd numbers between 1 and n; to calculate the sum of even numbers between 2 and n; n E (i+2) --> do you agree? i=2 $\displaystyle\sum_{i=2}^n(i+2)=4+5+6+.....+(n-1)+n+(n+1)+(n+2)$ $\displaystyle\sum_{i=1}^{\frac{n+1}{2}}(2i-1)=1+3+5+...+n$ or $\displaystyle\sum_{i=0}^{\frac{n-1}{2}}(2i+1)=1+3+5+...+n$ for n even, sum of the odd numbers is... $\displaystyle\sum_{i=1}^{\frac{n}{2}}(2i-1)=1+3+5+...+(n-1)$ or $\displaystyle\sum_{i=0}^{\frac{n-2}{2}}(2i+1)=1+3+5+...+(n-1)$ That's if "between" is meant to include i=1 and i=n, rather than exclude them
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 7, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9697790741920471, "perplexity": 2328.0764207933466}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657137190.70/warc/CC-MAIN-20140914011217-00266-ip-10-234-18-248.ec2.internal.warc.gz"}
https://www.aimsciences.org/article/doi/10.3934/krm.2015.8.309
# American Institute of Mathematical Sciences

June 2015, 8(2): 309-333. doi: 10.3934/krm.2015.8.309

## On the homogeneous Boltzmann equation with soft-potential collision kernels

Yong-Kum Cho
1 Department of Mathematics, College of Natural Sciences, Chung-Ang University, 84 Heukseok-Ro, Dongjak-Gu, Seoul 156-756

Received August 2014 Revised January 2015 Published March 2015

We consider the well-posedness problem for the space-homogeneous Boltzmann equation with soft-potential collision kernels. By revisiting the classical Fourier inequalities and fractional integrals, we deduce a set of bilinear estimates for the collision operator on the space of integrable functions possessing certain degree of smoothness and we apply them to prove the local-in-time existence of a solution to the Boltzmann equation in both integral form and the original one. Uniqueness and stability of solutions are also established.

Citation: Yong-Kum Cho. On the homogeneous Boltzmann equation with soft-potential collision kernels. Kinetic and Related Models, 2015, 8 (2) : 309-333. doi: 10.3934/krm.2015.8.309
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8221824169158936, "perplexity": 2252.095636578796}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662519037.11/warc/CC-MAIN-20220517162558-20220517192558-00785.warc.gz"}
http://www.emis.de/classics/Erdos/cit/20400905.htm
## Zentralblatt MATH

Publications of (and about) Paul Erdös

Zbl.No: 204.00905
Author: Erdös, Paul; Rado, R.
Title: Partition relations and transitivity domains of binary relations (In English)
Source: J. Lond. Math. Soc. 42, 624-633 (1967).

Review: The main theorem is Theorem 2: For all positive integers $m$ and $n$, for some positive integer $l(m,n)$, for each ordinal number $\alpha$, $\omega^\alpha l(m,n) \rightarrow (m,\omega^\alpha n)^2$; if $l_\alpha(m,n)$ is the least such $l(m,n)$ for a given $\alpha$, then $\gamma \nrightarrow (m,\omega^\alpha n)^2$ for each $\gamma > \omega^\alpha l_\alpha(m,n)$, and $l_\alpha(m,n) \leq (2n-3)^{-1}[2^{m-1}(n-1)^{m+n-2}]$; if $m > 1$, then $l_0(m,n) \leq l_\alpha(m,n)$. This generalizes some of Theorem 1, which handles the case $\alpha = 0$ and was proved by the same authors [Bull. Am. Math. Soc. 62, 427-489 (1956; Zbl 071.05105), Theorem 25]; Theorem 1 also includes a characterization of $l_0(m,n)$. Among other results, Theorem 4 is an extension (the statement of which is not the obvious one) to infinite $a$ of a result attributed to R. Stearns: If $a$ is a finite cardinal and $\prec$ is a relation trichotomous on a set $S$, then $\prec$ is transitive on some subset of $S$ having cardinal $a$ provided that $|S| \geq 2^{n-1}$.

{Reviewer's remarks: (1) It is easily seen that $l_0(m,2)$ as characterized by Theorem 1 is the least integer $l$ such that for each set $S$ with cardinal $\geq l$, for each relation $\prec$ trichotomous on $S$, $\prec$ is transitive on some subset of $S$ having cardinal $a$. Specializing the estimate for $l_\alpha(m,n)$ in Theorem 2 to $n = 2$ yields the Stearns result. (2) Theorem 1 is slightly misstated. If $m = 1$, "$\gamma \nrightarrow (m,\omega^0 n)^2$" should be changed by replacing $\gamma$ by $\omega^0(l_0(m,n)-1)$. (3) In the footnote on p. 625, "$(n-1)^\mu$" should be replaced by "$(2(n-1))^\mu$". (4) On p. 627, (i) is not quite adequate but becomes so on replacing "for $x$ in $A'_\beta$" by "for each $x$ in $A'_\beta$ and $|U_0(x)A_\gamma| = \aleph_\alpha$ for some $x$ in $A_\beta$", and (ii) is not quite adequate but becomes so on inserting "and $|U_0(x)A_\gamma| < \aleph_\alpha$ for some $x$ in $A_\beta$" after "$|U_0(\bar x)A_\gamma| = \aleph_\alpha$". [The iteration of the operators $O_\lambda$ then becomes adequate for the task at hand.] The last sentence of the third paragraph of (ii) appears to be an inaccurate oversimplification. [It is clear from the rather involved proof of Theorem 2, to which these points attach, that these inaccuracies were not in the original thinking things through. There are only a few others, which are more easily spotted.] (5) In line with (1) above and the discussion of $l_0(m,n)$ on p. 624, the condition under (i) in Theorem 4 is best possible not only for $1 \leq a \leq 3$ but also for $1 \leq a \leq 4$.}

Reviewer: A. H. Kruse
Classif.: * 05D10 Ramsey theory; 03E05 Combinatorial set theory (logic); 04A20 Combinatorial set theory
© European Mathematical Society & FIZ Karlsruhe & Springer-Verlag
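The finite Stearns-type statement quoted in the review can be checked directly for very small cases. The following Python sketch is purely illustrative (it is not from the paper or the review, and the function names are mine): it treats a trichotomous relation on a finite set as a tournament and brute-forces the claim that a set of size $2^{n-1}$ always contains a transitive subset of size $n$, for $n = 2, 3$.

```python
from itertools import combinations, permutations, product

def has_transitive_subtournament(orient, vertices, n):
    """Return True if some n-subset is ordered transitively by `orient`.

    `orient[(a, b)]` is True when the edge between a < b points a -> b.
    A subset is transitive iff its vertices admit a linear order in which
    every edge points from the earlier to the later vertex.
    """
    def beats(a, b):
        return orient[(a, b)] if a < b else not orient[(b, a)]

    for subset in combinations(vertices, n):
        for order in permutations(subset):
            if all(beats(order[i], order[j])
                   for i in range(n) for j in range(i + 1, n)):
                return True
    return False

def check_stearns(n):
    """Brute-force: every tournament on 2**(n-1) vertices has a
    transitive subtournament on n vertices."""
    size = 2 ** (n - 1)
    vertices = range(size)
    edges = list(combinations(vertices, 2))
    for bits in product([True, False], repeat=len(edges)):
        orient = dict(zip(edges, bits))
        if not has_transitive_subtournament(orient, vertices, n):
            return False
    return True

print(check_stearns(2))  # True: any two vertices form a transitive pair
print(check_stearns(3))  # True: all 64 tournaments on 4 vertices pass
```

For n = 4 the same exhaustive search would already require 2^28 tournaments on 8 vertices, so brute force stops being practical almost immediately; the point here is only to make the combinatorial content of the statement concrete.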
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9802266955375671, "perplexity": 1827.7268245754312}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464049274119.11/warc/CC-MAIN-20160524002114-00131-ip-10-185-217-139.ec2.internal.warc.gz"}
https://en.wikibooks.org/wiki/Principles_of_Economics/Elasticity
# Principles of Economics/Elasticity

Elasticity refers to the degree to which one value changes when another does. Supply and demand change with respect to price; investment and savings change with respect to interest rate. Each measure is named "X elasticity of Y": a change in X causes a change in Y whose relative magnitude is governed by the elasticity. It is computed as the percentage change in Y divided by the percentage change in X, using midpoint averages as in the formulas below, where

• ${\displaystyle P_{i}}$ is initial price
• ${\displaystyle P_{f}}$ is final price
• ${\displaystyle S_{i}}$ is initial supply
• ${\displaystyle S_{f}}$ is final supply
• ${\displaystyle D_{i}}$ is initial demand
• ${\displaystyle D_{f}}$ is final demand
• ${\displaystyle r_{i}}$ is initial interest
• ${\displaystyle r_{f}}$ is final interest
• ${\displaystyle I_{i}}$ is initial investment
• ${\displaystyle I_{f}}$ is final investment
• ${\displaystyle S_{i}}$ is initial savings
• ${\displaystyle S_{f}}$ is final savings

(Note that ${\displaystyle S}$ is used both for supply and for savings, depending on the formula.)

Price elasticity of demand ${\displaystyle ={\frac {\%{\text{ change in }}D}{\%{\text{ change in }}P}}={\frac {\frac {D_{f}-D_{i}}{(D_{f}+D_{i})/2}}{\frac {P_{f}-P_{i}}{(P_{f}+P_{i})/2}}}}$

Price elasticity of supply ${\displaystyle ={\frac {\%{\text{ change in }}S}{\%{\text{ change in }}P}}={\frac {\frac {S_{f}-S_{i}}{(S_{f}+S_{i})/2}}{\frac {P_{f}-P_{i}}{(P_{f}+P_{i})/2}}}}$

Interest elasticity of investment ${\displaystyle ={\frac {\%{\text{ change in }}I}{\%{\text{ change in }}r}}={\frac {\frac {I_{f}-I_{i}}{(I_{f}+I_{i})/2}}{\frac {r_{f}-r_{i}}{(r_{f}+r_{i})/2}}}}$

Interest elasticity of savings ${\displaystyle ={\frac {\%{\text{ change in }}S}{\%{\text{ change in }}r}}={\frac {\frac {S_{f}-S_{i}}{(S_{f}+S_{i})/2}}{\frac {r_{f}-r_{i}}{(r_{f}+r_{i})/2}}}}$

## Cross elasticities

The elasticities mentioned above each refer to a single good or quantity. Cross elasticities refer to the effect of one thing's price, interest rate, etc. on something else. This comes into play with substitute and complementary goods and services for the consumer.
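As a concrete illustration (not part of the original wikibook page; the function name and the numbers are made up), the midpoint formulas above translate directly into a few lines of Python:

```python
def arc_elasticity(y_i, y_f, x_i, x_f):
    """Midpoint (arc) elasticity of y with respect to x.

    Percentage changes are taken relative to the average of the initial
    and final values, matching the formulas above.
    """
    pct_change_y = (y_f - y_i) / ((y_f + y_i) / 2)
    pct_change_x = (x_f - x_i) / ((x_f + x_i) / 2)
    return pct_change_y / pct_change_x

# Hypothetical example: price rises from 4 to 6 while quantity demanded
# falls from 120 to 80. The price elasticity of demand is then:
print(arc_elasticity(120, 80, 4, 6))  # -1.0, i.e. unit elastic
```

Using the midpoint rather than the starting value means the same number comes out whether the price moves up or down between the two points, which is why this form is preferred for the formulas above.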
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 16, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9243857264518738, "perplexity": 3302.9953617766337}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655899931.31/warc/CC-MAIN-20200709100539-20200709130539-00525.warc.gz"}
https://en.wikipedia.org/wiki/Ring_of_sets
# Ring of sets Not to be confused with Ring (mathematics). In mathematics, there are two different notions of a ring of sets, both referring to certain families of sets. In order theory, a nonempty family of sets ${\displaystyle {\mathcal {R}}}$ is called a ring (of sets) if it is closed under intersection and union. That is, the following two statements are true for all sets ${\displaystyle A}$ and ${\displaystyle B}$, 1. ${\displaystyle A,B\in {\mathcal {R}}}$ implies ${\displaystyle A\cap B\in {\mathcal {R}}}$ and 2. ${\displaystyle A,B\in {\mathcal {R}}}$ implies ${\displaystyle A\cup B\in {\mathcal {R}}.}$[1] In measure theory, a ring of sets ${\displaystyle {\mathcal {R}}}$ is instead a nonempty family closed under unions and set-theoretic differences.[2] That is, the following two statements are true for all sets ${\displaystyle A}$ and ${\displaystyle B}$ (including when they are the same set), 1. ${\displaystyle A,B\in {\mathcal {R}}}$ implies ${\displaystyle A\setminus B\in {\mathcal {R}}}$ and 2. ${\displaystyle A,B\in {\mathcal {R}}}$ implies ${\displaystyle A\cup B\in {\mathcal {R}}.}$ This implies the empty set is in ${\displaystyle {\mathcal {R}}}$. It also implies that ${\displaystyle {\mathcal {R}}}$ is closed under symmetric difference and intersection, because of the identities 1. ${\displaystyle A\,\triangle \,B=(A\setminus B)\cup (B\setminus A)}$ and 2. ${\displaystyle A\cap B=A\setminus (A\setminus B).}$ (So a ring in the second, measure theory, sense is also a ring in the first, order theory, sense.) Together, these operations give ${\displaystyle {\mathcal {R}}}$ the structure of a boolean ring. Conversely, every family of sets closed under both symmetric difference and intersection is also closed under union and differences. This is due to the identities 1. ${\displaystyle A\cup B=(A\,\triangle \,B)\,\triangle \,(A\cap B)}$ and 2. ${\displaystyle A\setminus B=A\,\triangle \,(A\cap B).}$ ## Examples If X is any set, then the power set of X (the family of all subsets of X) forms a ring of sets in either sense. If (X,≤) is a partially ordered set, then its upper sets (the subsets of X with the additional property that if x belongs to an upper set U and x ≤ y, then y must also belong to U) are closed under both intersections and unions. However, in general it will not be closed under differences of sets. The open sets and closed sets of any topological space are closed under both unions and intersections.[1] On the real line R, the family of sets consisting of the empty set and all finite unions of intervals of the form (a, b], a,b in R is a ring in the measure theory sense. If T is any transformation defined on a space, then the sets that are mapped into themselves by T are closed under both unions and intersections.[1] If two rings of sets are both defined on the same elements, then the sets that belong to both rings themselves form a ring of sets.[1] ## Related structures A ring of sets (in the order-theoretic sense) forms a distributive lattice in which the intersection and union operations correspond to the lattice's meet and join operations, respectively. Conversely, every distributive lattice is isomorphic to a ring of sets; in the case of finite distributive lattices, this is Birkhoff's representation theorem and the sets may be taken as the lower sets of a partially ordered set.[1] A field of subsets of X is a ring that contains X and is closed under relative complement. Every field, and so also every σ-algebra, is a ring of sets in the measure theory sense. 
A semi-ring (of sets) is a family of sets ${\displaystyle {\mathcal {S}}}$ with the properties 1. ${\displaystyle \emptyset \in {\mathcal {S}},}$ 2. ${\displaystyle A,B\in {\mathcal {S}}}$ implies ${\displaystyle A\cap B\in {\mathcal {S}},}$ and 3. ${\displaystyle A,B\in {\mathcal {S}}}$ implies ${\displaystyle A\setminus B=\bigcup _{i=1}^{n}C_{i}}$ for some disjoint ${\displaystyle C_{1},\dots ,C_{n}\in {\mathcal {S}}.}$ Clearly, every ring (in the measure theory sense) is a semi-ring. A semi-field of subsets of X is a semi-ring that contains X. ## References 1. Birkhoff, Garrett (1937), "Rings of sets", Duke Mathematical Journal, 3 (3): 443–454, doi:10.1215/S0012-7094-37-00334-X, MR 1546000. 2. ^ De Barra, Gar (2003), Measure Theory and Integration, Horwood Publishing, p. 13, ISBN 9781904275046.
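To make the closure conditions above concrete, here is a small illustrative Python sketch (mine, not from the article) that tests whether a finite family of sets is a ring in the measure-theoretic sense, and confirms that such a family is then also closed under intersection and symmetric difference, as claimed:

```python
from itertools import chain, combinations, product

def is_ring(family):
    """Measure-theoretic ring axioms for a finite family of frozensets:
    closure under union and under set difference."""
    fam = set(family)
    return all((a | b) in fam and (a - b) in fam
               for a, b in product(fam, repeat=2))

def closed_under(family, op):
    """Check closure of the family under an arbitrary binary set operation."""
    fam = set(family)
    return all(op(a, b) in fam for a, b in product(fam, repeat=2))

# Example: the power set of {0, 1, 2}, which is a ring in both senses.
universe = [0, 1, 2]
power_set = [frozenset(s) for s in chain.from_iterable(
    combinations(universe, r) for r in range(len(universe) + 1))]

print(is_ring(power_set))                           # True
print(closed_under(power_set, lambda a, b: a & b))  # True: intersections
print(closed_under(power_set, lambda a, b: a ^ b))  # True: symmetric differences

# A family closed under unions but not under differences is not a ring:
bad = [frozenset(), frozenset({0, 1}), frozenset({0, 1, 2})]
print(is_ring(bad))  # False: {0,1,2} \ {0,1} = {2} is missing from the family
```

The last example also shows why the difference axiom matters: upper sets of a poset, mentioned above, typically behave like `bad` in that unions stay inside the family while differences escape it.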
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 28, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9812574982643127, "perplexity": 313.7577139476982}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917121752.57/warc/CC-MAIN-20170423031201-00318-ip-10-145-167-34.ec2.internal.warc.gz"}
https://physics.stackexchange.com/questions/562251/how-does-the-force-of-charge-between-particles-change-with-velocity
# How does the force of charge between particles change with velocity?

I'm sure how fast a particle moves must have some relativistic effect, or maybe also classical ones too. Suppose you fixed the positions of two charged particles. Suppose you're in a lab frame and, in a paradoxical but analytically useful way, you fix the position of a particle but change its velocity towards or away from you at varying speeds. How does the charge the lab frame measures vary due to super-relativistic effects? If the particle moves away near the speed of light, is its charge higher than it would be classically, or something like that?

• Does this answer your question? How can we prove charge invariance under Lorentz Transformation? – DavidH Jun 27 '20 at 18:36
• So force is Lorentz invariant? – CheeseMongoose Jun 27 '20 at 18:52
• Your question asks two different things: how charge varies, and how force varies. – G. Smith Jun 27 '20 at 19:03
• So the force a charged particle exerts on another charged particle is in no way dependent on the charged particle's charge? Because if you say force isn't invariant yet force depends on charge, which is invariant, it seems like a contradiction. – CheeseMongoose Jun 27 '20 at 19:05
• Force does depend on charge, but not only on charge. – G. Smith Jun 27 '20 at 19:08

Charge is independent of velocity. For example, the charge of a proton is $$1.6\times 10^{-19}$$ coulombs whether it is at rest or zooming around the Large Hadron Collider at 0.99999999 c. Charge is a Lorentz-invariant quantity. One way to see this is to write the Lorentz force law in manifestly covariant form: $$f^\mu=qF^\mu{}_\nu u^\nu.$$ Here $$f^\mu$$ is the four-vector describing the electromagnetic force, $$F^\mu{}_\nu$$ is the four-tensor describing the electromagnetic field, and $$u^\mu$$ is the four-vector describing the velocity of the charge. If one understands this notation, it makes clear that forces, EM fields, and velocities have straightforward Lorentz transformations, but the charge $$q$$ must be Lorentz-invariant, because otherwise the right side would fail to be a four-vector like the left side is.
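As a purely illustrative numerical companion to the covariant force law in the answer (this is my own sketch, not the answerer's; it assumes units with c = 1, metric signature (+, -, -, -), and made-up field values), one can build the field tensor from given E and B fields and contract it with a four-velocity:

```python
import numpy as np

# Units with c = 1; metric signature (+, -, -, -).
eta = np.diag([1.0, -1.0, -1.0, -1.0])

def field_tensor(E, B):
    """Contravariant field tensor F^{mu nu} built from 3-vectors E and B."""
    Ex, Ey, Ez = E
    Bx, By, Bz = B
    return np.array([
        [0.0, -Ex, -Ey, -Ez],
        [Ex,  0.0, -Bz,  By],
        [Ey,  Bz,  0.0, -Bx],
        [Ez, -By,  Bx,  0.0],
    ])

def four_force(q, E, B, v):
    """f^mu = q F^mu_nu u^nu for a charge q moving with 3-velocity v."""
    gamma = 1.0 / np.sqrt(1.0 - np.dot(v, v))
    u = gamma * np.array([1.0, *v])        # four-velocity u^nu
    F_mixed = field_tensor(E, B) @ eta     # F^mu_nu = F^{mu sigma} eta_{sigma nu}
    return q * F_mixed @ u

# Made-up numbers: unit charge in a pure E field along x, moving along y at 0.5c.
print(four_force(1.0, E=[1.0, 0.0, 0.0], B=[0.0, 0.0, 0.0], v=[0.0, 0.5, 0.0]))
# approx [0, 1.155, 0, 0]: zero power term, spatial force gamma*q*E along x
```

The spatial part reproduces gamma times q(E + v x B), while q itself enters only as an overall constant, which is the point of the answer: the fields and velocities transform, the charge does not.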
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 6, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9342482089996338, "perplexity": 491.5075425939748}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046153521.1/warc/CC-MAIN-20210728025548-20210728055548-00060.warc.gz"}
https://www.physicsforums.com/threads/linear-velocity-of-a-rotating-body.327899/
# Linear velocity of a rotating body

1. Jul 30, 2009

### rugapark

1. The problem statement, all variables and given/known data
A flat rigid body is rotating with angular velocity 3 rad s^{-1} about an axis in the direction of the vector (i + 2j + 3k) and passing through the point (1, 1, 0) on the body. Find the linear velocity of the point P = (1, 0, 1) on the body. (You may use the result v = w × r.)

2. Relevant equations
v = w × r

3. The attempt at a solution
I have no idea where to go with this - I need to find r, but I'm not sure how to go about using the coordinates given.

2. Jul 30, 2009

### HallsofIvy Staff Emeritus

Do you mean 3 rad/sec (often written just "3 s^{-1}")? First you need to know the radius of the circle the point is moving in. Draw a line from (1, 0, 1) to the line x = 1 + t, y = 1 + 2t, z = 3t. The plane containing (1, 0, 1) and perpendicular to i + 2j + 3k is (x-1) + 2y + 3(z-1) = 0. The line passes through that plane where (1+t-1) + 2(1+2t) + 3(3t-1) = 14t - 1 = 0, or t = 1/14. Then x = 1 + 1/14, y = 1 + 2/14, z = 3/14, i.e. the point (15/14, 16/14, 3/14). The distance from that point to (1, 0, 1) is
$$\sqrt{(1- 15/14)^2+ (-16/14)^2+ (1- 3/14)^2}$$
$$= \sqrt{1/196+ 256/196+ 121/196}$$
$$= 3\sqrt{42}/14$$
and that is the radius of the circle the point is moving in. (Better check my arithmetic; that looks peculiar.) From the radius you can calculate the distance corresponding to 3 radians and so the distance the particle moves in one second.

3. Aug 1, 2009

### rugapark

How did you get the x, y and z to equal those three? And where did the t's come from? Thanks for the help!
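Since the problem statement allows v = w × r directly, here is a short numpy sketch (my own addition, not from the thread) that computes the velocity of P using the given axis point (1, 1, 0); its numbers agree with the radius 3√42/14 worked out above, so the arithmetic in the reply does check out:

```python
import numpy as np

omega_dir = np.array([1.0, 2.0, 3.0])
omega = 3.0 * omega_dir / np.linalg.norm(omega_dir)   # |omega| = 3 rad/s

axis_point = np.array([1.0, 1.0, 0.0])
P = np.array([1.0, 0.0, 1.0])

r = P - axis_point        # vector from a point on the axis to P
v = np.cross(omega, r)    # linear velocity of P

# Perpendicular distance from P to the axis, for comparison with the thread.
proj = np.dot(r, omega_dir) / np.dot(omega_dir, omega_dir) * omega_dir
dist_to_axis = np.linalg.norm(r - proj)

print(v)                    # approx [ 4.01, -0.80, -0.80 ]
print(np.linalg.norm(v))    # approx 4.17  (= 3 * distance to axis)
print(dist_to_axis)         # approx 1.39  (= 3*sqrt(42)/14, as in the reply)
```

Any point on the axis works as the base point for r, because the component of r along the axis drops out of the cross product.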
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8725482821464539, "perplexity": 1025.5310927139762}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948517181.32/warc/CC-MAIN-20171212134318-20171212154318-00366.warc.gz"}
http://mathoverflow.net/revisions/38792/list
# When does an irreducible representation remain irreducible after restriction to a semi-simple subgroup?

I guess the question is somehow elementary to experts, but I'd like to put down my arguments, which appear doubtful, and see if they are correct and if corrections and improvements are possible.

The setting is as follows: $k$ is the base field of characteristic zero, $G$ a connected semi-simple $k$-group, and $Rep(G)$ the Tannakian category of finite-dimensional algebraic $k$-representations of $G$, with the canonical fiber functor in $k$-vector spaces, whose objects are called $k$-representations for short. Unless otherwise stated, reductive $k$-groups are connected.

The motivation is as follows: for $H$ a semi-simple $k$-subgroup of $G$, one has the restriction functor $Rep(G)\rightarrow Rep(H)$ sending a $k$-representation $V$ of $G$ to the restriction of $V$ as a $k$-representation of $H$. What kind of irreducible $k$-representation of $G$ remains irreducible viewed in $Rep(H)$?

Recall that a reductive $k$-group is $k$-isotropic if it contains a $k$-split $k$-torus, and $k$-anisotropic otherwise.

fact: for $H\subset G$ a semi-simple $k$-subgroup, $H$ extends to a parabolic $k$-subgroup $H\subset P\subsetneq G$ if and only if $Z(H,G)$, the centralizer of $H$ in $G$, is $k$-isotropic. (From this one also sees that if $L$ is the Levi $k$-subgroup of a $k$-parabolic $P$, then its connected center $C(L)$ is $k$-isotropic.)

claim: let $H$ be a semi-simple $k$-subgroup of $G$ as above, such that $Z(H,G)$ is $k$-anisotropic; then for any irreducible $k$-representation $(\rho,V)$ of $G$, its restriction to $H$ is irreducible as an algebraic $k$-representation of $H$. Conversely, if the restriction functor $Rep(G)\rightarrow Rep(H)$ respects irreducibility, with $H\subset G$ a semi-simple $k$-subgroup, then $Z(H,G)$ is $k$-anisotropic.

Sketch of the proof: To prove the first part, assume that for some irreducible $(\rho,V)$, the restriction to $H$ is not irreducible. Then in $Rep(H)$ one has a non-trivial splitting $V=V_1\oplus V_2$. Define $F_0(V)=V$, $F_1=V_1$, $F_2=0$, etc.; one gets a non-trivial decreasing filtration on $V$. $V$ generates a full Tannakian subcategory, which is of the form $Rep(G')$, equipped with the non-trivial filtration generated by $F(V)$. By Tannaka duality, $Rep(G')\rightarrow Rep(G)$ corresponds to an epimorphism $G\rightarrow G'$. $G'$ is thus semi-simple. The non-trivial filtration on $Rep(G')$ corresponds to a cocharacter defined over $k$, which is equivalently characterized by a $k$-parabolic $P'$ of $G'$, and $P'$ lifts to a $k$-parabolic $P$ of $G$. One checks easily that $P$ contains $H$, because $H$ preserves the filtration generated by $F(V)$. This shows that $Z(H,G)$ is $k$-isotropic.

Conversely, when $Z(H,G)$ is $k$-isotropic, $H$ extends to a non-trivial $k$-parabolic $H\subset P\subsetneq G$. This gives a filtration on $Rep(G)$, preserved by $P$ and $H$. In particular, there exists at least one irreducible $k$-representation $(\rho,V)$ of $G$ on which $F(V)$ is non-trivial, and then the restriction of $\rho$ to $H$ splits non-trivially.

Here I use the notion of a filtration on $Rep(G)$, which means for each $V\in Rep(G)$ one has a finite separated exhaustive decreasing filtration $F(V)$, moving functorially: it respects the tensor products and direct sums in the filtered sense, and is strict with respect to all exact sequences in $Rep(G)$.

To see that a filtration on $Rep(G')$ extends to a filtration on $Rep(G)$ for an epimorphism $G\rightarrow G'$ as above, it suffices to transfer to the Lie algebra side: $LieG=LieG'\oplus LieG''$ for some semi-simple $k$-subgroup $G''$ of $G$; then use the fact that $Rep(LieG)$ equals the "exterior tensor product" of $Rep(LieG')$ with $Rep(LieG'')$, and pass equivalently to the $k$-group side, as $k$ is of characteristic zero. In this way the filtration on $Rep(G')$, together with the trivial filtration on $Rep(G'')$, gives a filtration on $Rep(G)$ by tensorial construction.

I would like to know if the above arguments make sense. If so, is there any other elementary proof, essentially different (modulo the Tannakian duality)? Moreover, what if one allows reductive $k$-subgroups? Does that imply the claim that over $\mathbb{R}$, if one takes a pair of compact groups, say $SO_3\subset SO_4$, every irreducible representation of $SO_4$ remains irreducible when restricted to $SO_3$? And does it have anything to do with the branching rule?

I would be grateful if further references, like expository articles, are mentioned concerning branching rules for reductive $k$-groups, even in the case of a non-algebraically-closed base field (I guess one might do something from the algebraically closed case through Galois descent, but I'm quite lost when doing this for reductive $k$-groups). Thanks a lot.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9764245748519897, "perplexity": 270.3022795352607}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705948348/warc/CC-MAIN-20130516120548-00006-ip-10-60-113-184.ec2.internal.warc.gz"}
https://www.freemathhelp.com/trig-double-angles.html
# Solve Trig Problems With Double- or Half-Angles

The double and half angle formulas can be used to find the values of unknown trig functions. For example, you might not know the sine of 15 degrees, but by using the half angle formula for sine, you can figure it out based on the commonly known value of sin(30) = 1/2. They are also useful for certain integration problems where a double or half angle formula may make things much simpler to solve.

## Double Angle Formulas:

$$\sin(2\theta) = 2\sin\theta\cos\theta$$
$$\cos(2\theta) = \cos^2\theta - \sin^2\theta = 2\cos^2\theta - 1 = 1 - 2\sin^2\theta$$
$$\tan(2\theta) = \frac{2\tan\theta}{1-\tan^2\theta}$$

You'll notice that there are several listings for the double angle for cosine. That's because you can substitute for either of the squared terms using the basic trig identity $$\sin^2\theta+\cos^2\theta=1$$.

## Half Angle Formulas:

$$\sin\left(\frac{\theta}{2}\right) = \pm\sqrt{\frac{1-\cos\theta}{2}}$$
$$\cos\left(\frac{\theta}{2}\right) = \pm\sqrt{\frac{1+\cos\theta}{2}}$$
$$\tan\left(\frac{\theta}{2}\right) = \pm\sqrt{\frac{1-\cos\theta}{1+\cos\theta}}$$

These are a little trickier because of the plus or minus. It's not that you can use both; you have to figure out the correct sign on your own, from the quadrant the half angle lies in. For example, the sine of 30 degrees is positive, as is the sine of 15. However, if you were to use 200, you'd find that the sine of 200 degrees is negative, while the sine of 100 is positive. Just remember to look at a graph, figure out the signs, and you'll be fine.
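As a quick numerical sanity check (my own illustration, not part of the original page), the half-angle formula reproduces sin(15 degrees) from cos(30 degrees):

```python
import math

theta = math.radians(30)

# Half-angle formula: sin(theta/2) = +sqrt((1 - cos(theta)) / 2),
# taking the positive root because 15 degrees lies in the first quadrant.
half_angle_sine = math.sqrt((1 - math.cos(theta)) / 2)

print(half_angle_sine)             # 0.2588190451025207...
print(math.sin(math.radians(15)))  # same value -- the formula checks out
```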
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9673099517822266, "perplexity": 201.4137040794858}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549423842.79/warc/CC-MAIN-20170722022441-20170722042441-00638.warc.gz"}