url: string (lengths 14 to 1.76k)
text: string (lengths 100 to 1.02M)
metadata: string (lengths 1.06k to 1.1k)
https://tug.org/pipermail/metapost/2013-December/002897.html
# [metapost] Bug in MetaPost
Ariel Barton barto106 at math.umn.edu
Sun Dec 15 16:24:37 CET 2013

I've been poking at this particular bug too. It seems to be a problem with MetaPost's low-level "put a string on the figure" routine; you don't even need btex/etex. Example:

beginfig(0);
string test;
test = "ABC" & char0 & "XYZ";
label(test,(0,0));
endfig;
end

This will print not "ABC\Gamma XYZ" but "ABC\Gamma\Gamma\Gamma\Gamma": if the zeroth character of a font shows up in a string, all of the following characters in that string are converted to that same character. The main zeroth characters to worry about in practice are $-$ (minus), $\Gamma$, and $\big($. This is why "$-\infty$" comes out as "--".

"$-\,\infty$" comes out correctly because the kern (the \,) separates the $-$ and the $\infty$, so PostScript has to treat them as two separate strings. Similarly, a "$-3$" would come out correctly because the $-$ and the $3$ are from different fonts, and so PostScript has to treat them as two separate strings.

If you need a $\Gamma$ or $\big($, you can get the correct behavior with ${\Gamma\mkern0.01mu}$. You have to be a little careful with the $-$ to get the spacing after it right. (Various TeX-internal workarounds, like $\Gamma\null$ or $\Gamma\,\!$, don't work; you need a nonzero amount of space to avoid this low-level bug.)

On Sun, Dec 15, 2013 at 5:30 AM, Pétiard François <petiard.francois at free.fr> wrote:

> Hello!
>
> I've already written about this (2012/03/24), but there has been no change...
>
> I'm on Windows 7, MiKTeX 2.9 (64 bits), MetaPost 1.803.
>
> Here is my file test.mp:
>
> %%%%%%%%%%%%%%%%%%%%%%%%%%
> prologues:=3;
> outputtemplate:="%j.eps";
> beginfig(0)
> label(btex $-\infty$ etex,(0,20));%%%%%%% 1
> label(btex $\null-\infty$ etex,(0,0));%%% 2
> label(btex $-\!\infty$ etex,(0,-20));%%%% 3
> label(btex $-\,\infty$ etex,(0,-40));%%%% 4
> label(btex $-\,\!\infty$ etex,(0,-60));%% 5
> label(btex $+\infty$ etex,(0,-80));%%%%%% 6
> endfig;
> end;
> %%%%%%%%%%%%%%%%%%%%%%%%%
>
> When I run:
>
> mpost -debug test.mp
>
> I obtain a curious file test.eps (see labels 1 and 5).
>
> Is there a chance of seeing this bug repaired?
>
> Cheers
>
> François
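Putting the workaround into a complete file, for reference (a minimal sketch assembled from the fixes described in this thread, not code from the original post; any tiny nonzero kern should work equally well):

prologues := 3;
outputtemplate := "%j.eps";
beginfig(0);
  % \mkern0.01mu adds a tiny but nonzero kern after \Gamma, so the
  % zeroth character ends up in its own PostScript string.
  label(btex ${\Gamma\mkern0.01mu}$ etex, (0,0));
  % for minus, a thin-space kern does the same job:
  label(btex $-\,\infty$ etex, (0,-20));
endfig;
end;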
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8441216349601746, "perplexity": 11230.788990439276}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514573570.6/warc/CC-MAIN-20190919183843-20190919205843-00501.warc.gz"}
https://advances.sciencemag.org/content/2/3/e1500901
Research Article: MOLECULAR PHYSICS

# Universal diffraction of atoms and molecules from a quantum reflection grating

Science Advances, Vol. 2, no. 3, e1500901

## Abstract

Since de Broglie’s work on the wave nature of particles, various optical phenomena have been observed with matter waves of atoms and molecules. However, the analogy between classical and atom/molecule optics is not exact because of different dispersion relations. In addition, according to de Broglie’s formula, different combinations of particle mass and velocity can give the same de Broglie wavelength. As a result, even for identical wavelengths, different molecular properties such as electric polarizabilities, Casimir-Polder forces, and dissociation energies modify (and potentially suppress) the resulting matter-wave optical phenomena such as diffraction intensities or interference effects. We report on the universal behavior observed in matter-wave diffraction of He atoms and He2 and D2 molecules from a ruled grating. Clear evidence for emerging beam resonances is observed in the diffraction patterns, which are quantitatively the same for all three particles and only depend on the de Broglie wavelength. A model, combining secondary scattering and quantum reflection, permits us to trace the observed universal behavior back to the peculiar principles of quantum reflection.

Keywords

• Quantum reflection
• emerging beam resonance
• helium dimer
• matter-wave optics
• grazing incidence atom optics
• Rayleigh-Wood anomaly

## INTRODUCTION

On the basis of the quantum-mechanical wave nature of particles, optical effects such as refraction, diffraction, and interferometry have been observed with “matter waves” of atoms, molecules, and more recently, clusters and macromolecules (1, 2). In these experiments, unlike in classical optics with light, the interaction of the particle either with an external field or with the material of an optical element introduces a particle-dependent disturbance that, in general, tends to be detrimental for observing the optical effect of interest. For instance, diffraction patterns of atoms and molecules diffracted by a nanoscale transmission grating strongly depend on the van der Waals interaction between the particle and the grating material; an increase in interaction strength causes a narrowing of the effective width of the grating slits (3). Also, the diffraction peak intensities of He and D2 scattered from a crystal surface were found to strongly differ because of the different surface corrugations that result from different particle-solid interaction strengths (4). The effect of the molecule-surface interaction becomes severe for macromolecules and clusters, where it can result in a strong reduction of the fringe visibility as observed in matter-wave interferometry (5).

One possible way of overcoming this problem was recently demonstrated by Brand et al., who succeeded in using atomically thin nanoscale gratings made from a single-layer material such as graphene, thereby minimizing the particle-grating interaction (6). An alternative approach could be to use conventional diffraction gratings in such a way that the atoms or molecules do not come close to the solid surfaces, thereby strongly reducing the effects of the particle-grating interaction. Here, we demonstrate universal (viz., interaction-independent) diffraction by the quantum reflection of atoms and molecules from a conventional reflection grating.
We have observed emerging beam resonances of identical shape for He atoms, D2 molecules, and even helium dimers, He2, under grazing incidence conditions. Coherent scattering of the particles results from quantum reflection from the long-range Casimir-Polder particle-surface potential tens of nanometers above the actual grating surface (7). By applying a secondary scattering model, we show how universal diffraction results from the peculiar principles governing quantum reflection. Here, universal diffraction means that the diffraction phenomena, including both angles and relative intensities of the diffraction peaks, depend solely on the de Broglie wavelength λ and are independent of the different strengths of the specific particle-grating interaction.

When an atom or molecule approaches a solid surface, it is exposed to the long-range attractive Casimir-Polder particle-surface interaction potential. In a classical description, this results in an acceleration of the particle toward the surface where, at the classical turning point, it will scatter back from the steep repulsive inner branch of the particle-surface potential. However, if the particle’s incident velocity is sufficiently small, the classical picture needs to be replaced by a quantum mechanical description. According to quantum mechanics, the particle’s de Broglie wavelength will vary along the slope of the attractive Casimir-Polder potential. If the length of the slope appears to be short on a length scale set by the de Broglie wavelength, the attractive potential effectively acts as an impedance discontinuity to the particle’s wave function. As a result, there is a detectable probability for the wave function to be quantum-reflected at the Casimir-Polder potential, way in front of the actual surface (8–11). In the limit of vanishing incident particle velocity (corresponding to infinite incident wavelength), the Casimir-Polder potential effectively resembles a step in the potential, and thus, the probability for quantum reflection approaches unity. Quantum reflection from a solid has been observed with ultracold, metastable atoms (12, 13), atom beams (14–16), and even Bose-Einstein condensates (17, 18). Recently, coherent and nondestructive quantum reflection from a diffraction grating was reported for the van der Waals clusters He2 (7) and He3 (19). The exceptionally small binding energies of 10−7 eV (He2) and 10−5 eV (He3) are orders of magnitude smaller than the cluster-surface potential well depth of 10−2 eV. However, because dimers and trimers are quantum-reflected tens of nanometers above the surface, they do not come close to where surface-induced forces would inevitably break up the fragile bonds.

Emerging beam resonance, also known as the Rayleigh-Wood anomaly, is a phenomenon that occurs in grating diffraction when conditions (wavelength, grating period, and incidence angle) are such that a diffracted beam of order m emerges from the grating plane. For instance, when, for given wavelength and grating period, the incidence angle is continuously varied from grazing toward normal incidence, the mth-order diffraction beam will, at some point, change from an evanescent wave state (pre-emergence) to emerging and, eventually, to a freely propagating wave above the grating plane (post-emergence). The incidence angle at which the emergence occurs is referred to as the mth-order Rayleigh angle. The emergence of a new beam causes abrupt intensity variations of the other diffraction beams marking the emerging beam resonance.
The effect was first observed with visible light by Wood (20) and explained by Rayleigh in 1907 (21). Only recently was it observed in atom diffraction (22). Here, we report evidence for the emerging beam resonance effect for the helium dimer.

## RESULTS

A schematic of the experimental setup is shown in Fig. 1. The 20-μm-period echelette grating is described in Materials and Methods, whereas further details of the apparatus are provided in the Supplementary Materials. Diffraction data observed with a helium beam, containing both atoms and dimers at de Broglie wavelengths of 0.33 and 0.16 nm, respectively, are shown in Fig. 2A for a range of incidence and diffraction angles. The –1st-order diffraction peak of helium atoms, which emerges at an incidence angle of θin = 1.047 mrad, shows the strongest overall signal, larger than the specular (0th-order) beam. The –2nd-order diffraction beam, emerging at θin = 1.480 mrad, and the first-order beam of the atoms are clearly visible. Higher-order (n = 2 and n = 3) atomic diffraction beams show up with less intensity, which decays with increasing incidence angle. In addition, –1st- and –3rd-order peaks of He2 are clearly visible for incidence angles larger than 0.75 and 1.3 mrad, respectively (7). Furthermore, for incidence angles from 0.7 to 0.9 mrad, a weak signal of the dimer’s first-order peak appears. For both atoms and dimers, the larger intensities of negative-order peaks result from the grating blaze (23). At Rayleigh angles, indicated in Fig. 2A, diffraction peaks appear at grazing emergence, with their intensities steeply increasing with incidence angle.

Figure 2B shows angular spectra (corresponding to cross sections of the contour plot along the y axis) for incidence angles around the –1st-order Rayleigh angle θR,−1(He) = 1.04 mrad, where the monomer’s –1st-order peak and the dimer’s –2nd-order peak emerge. For θin ≤ 0.99 mrad, the –1st-order diffraction beam has not yet appeared (pre-emergence); there is no peak at angles θ ≤ 0.5 mrad (greenish traces in Fig. 2B). In this regime, the specular and first-order peaks of the atoms as well as the –1st-order peak of the dimers (inset in Fig. 2B) show little or no change with incidence angle. For incidence angles in the range θin = 1.0 to 1.05 mrad, the new diffraction peak emerges progressively at θ ≤ 0.5 mrad (red traces in Fig. 2B). Because of a finite incident beam divergence of about 50 μrad, the emergence of the new peak does not occur at a well-defined incidence angle but rather is spread out over an interval of angles (23). This is reflected by the fact that for θin = 1.040 and 1.050 mrad, the partly emerged peaks share the left slope (24). Concurrent with the emergence of a new peak, the specular peak and the first-order peak of the atoms exhibit a steep increase from 340 to 500 counts/s and from 70 to 105 counts/s, respectively. An even stronger increase of about 100% is found for the –1st-order diffraction peak of the helium dimers, as can be seen in the inset of Fig. 2B. We interpret these rather abrupt intensity variations as a manifestation of the emerging beam resonance effect for He and He2 upon the emergence of the –1st- and –2nd-order peak, respectively. For incidence angles θin ≥ 1.066 mrad, the new diffraction beam appears fully emerged from the grating (post-emergence; bluish traces in Fig. 2B). Figure 2C shows diffraction efficiencies analyzed from the data shown in Fig. 2A.
It is evident in the graph that at θR,−1(He) = θR,−2(He2) = 1.04 mrad, the diffraction efficiencies not only for He (n = 0 and 1) but also for He2 (n = −1) exhibit cusps characteristic of the emerging beam resonance effect (22). In addition, when the dimer –3rd-order beam emerges at θR,−3(He2) = 1.28 mrad, the dimer –1st-order diffraction efficiency exhibits a rapid decrease.

## DISCUSSION

To analyze the emerging beam resonance behavior, we apply the multiple scattering model introduced by Rayleigh (21) and Fano (25, 26). As depicted in Fig. 3, the nth-order diffraction beam amplitude An is approximated as the constructive interference of direct and secondary scattering waves; An = An(1) + An(2). For θin = θR,m, the geometrical path length difference between direct and secondary scattering, deff(1 − cosθin), is equal to |m|λ, giving rise to fully constructive interference. An additional phase shift Φ is induced by the particle-surface interaction potential for an atom or molecule propagating along the path of length deff between the first and second scattering occurrences (Fig. 3). For quantum reflection under Rayleigh conditions, one finds Φ = −mπ[1 + cosθR,m] ≈ −2πm (see Materials and Methods), which also corresponds to fully constructive interference. Although this simple model cannot account for the detailed shape of the emerging beam resonance effect displayed in Fig. 2C, it allows us to derive two main aspects. First, under Rayleigh conditions, we expect constructive interference between direct and secondary scattering. Thus, the emergence of an mth-order beam is expected to increase the other diffraction peaks, including the specular peak, the more so the more intense the emerging beam is. Consequently, the overall reflectivity shows a steep variation under Rayleigh conditions, which is in full agreement with experimental results (22). Second, the calculated phase shift depends on the de Broglie wavelength as the sole parameter. Hence, at a given de Broglie wavelength, the model predicts the same (universal) behavior for any atom or molecule. This comes as a surprise because quantum reflection is inherently linked with the Casimir-Polder potential, which is particle-specific.

The derivation of the phase shift Φ is based on two assumptions (see Materials and Methods). The particle-surface potential probed by the particle along its path between the first and second scattering occurrences is (i) constant and (ii) equal to the particle’s incident perpendicular kinetic energy, that is, the energy associated with the incident velocity component perpendicular to the grating plane. The second assumption is an approximation that follows from the principles of quantum reflection (9–11). It holds independent of the specific particle properties. As a result, different atoms or molecules at the same de Broglie wavelength will be quantum-reflected at different heights above the surface, but their wave functions will acquire the same phase shift Φ. This peculiarity of quantum reflection is the origin of universal (interaction-independent) diffraction.

To check this prediction of universal behavior, we repeated the experiment with He and D2 under conditions such that their de Broglie wavelengths are identical to the wavelength of He2 in the data shown in Fig. 2. All other experimental parameters were kept unchanged. Figure 4 shows a direct comparison of the –1st-order diffraction efficiency curves.
We find excellent agreement of the data for the three species (except for incident angles larger than about 1.25 mrad), thereby confirming the prediction of universal diffraction. At larger incident angles, He and D2 diffraction efficiencies still overlap, but the He2 efficiency is found to taper off. A possible explanation for this deviation could be that some dimers start to break up as they approach closer to the surface with increasing incidence angle. Furthermore, we note that universal behavior was also found for the –3rd-order diffraction efficiency curves of He, He2, and D2. In conclusion, we have observed emerging beam resonances for He, He2, and D2 quantum-reflected from an echelette diffraction grating at grazing incidence. Our observation indicates that He2, despite its fragile bond, can undergo double coherent, nondestructive scattering; under Rayleigh conditions, dimers scattered at a grating unit propagate parallel to the surface, scatter a second time at another grating unit without breakup, and interfere with directly scattered dimers. Furthermore, a simple approximate calculation of the relative phase between the direct and the secondary scattering paths indicates constructive interference under Rayleigh conditions independent of the particle-specific Casimir-Polder interaction with the grating. Diffraction data of He, He2, and D2 under conditions of identical de Broglie wavelength confirm this universal behavior. Because the effect is independent of the particle-specific properties, universal diffraction from a quantum-reflection grating can, in principle, be applied to larger molecules as well. The only prerequisite is the preparation of a sufficiently large de Broglie wavelength corresponding to the velocity component perpendicular to the grating plane, thereby providing a sufficient quantum reflection probability. In future experiments, this could possibly be achieved by applying a state-of-the-art molecular-beam deceleration technique (27) or by choosing an even smaller incidence angle than the ones shown here. ## MATERIALS AND METHODS ### Diffraction grating The commercial plane ruled echelette grating (Newport 20RG050-600-1; period d = 20 μm; blaze angle, 14 mrad) is aligned in a conical mount (28); the grooves are almost parallel to the incidence plane. We define the azimuth angle φ as the angle between the grooves and incidence plane. Here, a negative azimuth angle was chosen to enhance the intensities of emerging diffraction beams (22). The exact value of φ is determined from fitting the diffraction angle curves shown in Fig. 2A. An agreement between the lines and the positions of the observed peaks is found for φ = −33.5 mrad corresponding to deff = 597 μm. ### Rayleigh angles The nth-order diffraction angle θn can be calculated by the approximated grating equation for conical diffraction, cosθin − cosθn = n(λ/deff) with effective period deff = d/|sinφ| (23). The Rayleigh angle θR,m is derived by inserting cosθR,m − 1 = mλ/deff into the grating equation. Because the de Broglie wavelength of a particle is inversely proportional to its mass, Rayleigh angles for monomers and dimers at the same particle velocity follow a simple relationship: θR,m(He) = θR,2m(He2). For instance, in the measurements shown in Fig. 2, the de Broglie wavelength λ is 0.327 nm for He and 0.164 nm for He2, resulting in Rayleigh angles θR,−1(He) = θR,−2(He2) = 1.047 mrad. 
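As an illustration of these relations, the Rayleigh angles can be reproduced numerically (a short check added here, not part of the original article; the constants are the fitted values quoted above):

import math

d = 20e-6                       # grating period (m)
phi = 33.5e-3                   # |azimuth angle| (rad), fitted value
d_eff = d / abs(math.sin(phi))  # effective period, ~597 um

def rayleigh_angle(m, lam):
    """Incidence angle (rad) at which the mth-order beam emerges,
    from cos(theta_R) - 1 = m*lam/d_eff (emerging orders have m < 0)."""
    return math.acos(1.0 + m * lam / d_eff)

lam_He = 0.327e-9               # He de Broglie wavelength (m)
print(rayleigh_angle(-1, lam_He) * 1e3)      # -> ~1.047 (mrad)
print(rayleigh_angle(-2, lam_He / 2) * 1e3)  # He2 in the same beam: ~1.047 (mrad)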
### Diffraction efficiencies

We define the diffraction efficiency of an nth-order peak as the ratio of its area to the incident beam area. Diffraction peak areas are determined by fitting each peak of an individual diffraction pattern, like the ones shown in Fig. 2B, with a Gaussian. The incident beam area is determined from an angular spectrum measured with the grating removed from the beam path. The incident beam signal is dominated by the atomic beam component, with just a small contribution of a few percent due to helium dimers. Therefore, the normalization to the incident beam peak area results in a slight underestimation of the actual diffraction efficiencies (given as the ratio of the nth-order beam intensity to the incident beam intensity for either atoms or dimers) for the atoms and in a severe underestimation for the dimers. Thus, the diffraction efficiencies for dimers plotted in Fig. 2C should be considered as having arbitrary units and cannot be compared quantitatively to their atomic counterparts.

### Diffractive evanescent waves

Evanescent waves (29) result from diffraction into higher-order beams whose wave vector normal component is imaginary (that is, nonpropagating) (26, 29). A diffraction beam of order m is freely propagating as long as the incidence angle is larger than the Rayleigh angle, θin > θR,m; it is emerging under the Rayleigh condition, θin = θR,m; and it is evanescent for θin < θR,m. Because they propagate parallel to the grating surface plane, evanescent waves contribute to An(2) in the secondary scattering model. Close to the Rayleigh angle, θin ≤ θR,m, a significant contribution of the mth-order evanescent wave to An(2) can be expected (26). This is why the emerging beam resonance can, potentially, affect the other diffraction peak intensities already in the pre-emergence regime at θin ≤ θR,m (23).

### Quantum reflection

Quantum reflection of a particle from the attractive particle-surface potential takes place at a range of heights above the surface where the reflection probability is nonzero. As a rule of thumb, this range of heights is around the location where the kinetic energy associated with the velocity normal component is equal to the absolute magnitude of the attractive particle-surface interaction potential (9–11). (See the Supplementary Materials for information on how the rule of thumb is derived.) The attractive part of the potential can be approximated by a Casimir-Polder surface potential, V(z) = −C3l/[(l + z)z³]. Here, z denotes the distance from the surface, and the product of the van der Waals coefficient C3 and a characteristic length l (l = 9.3 nm for He) indicates the transition from the van der Waals (z << l) to the retarded Casimir-Polder regime (z >> l) (11). For He2, one expects C3 to be two times larger and l to be the same as compared to the He atom, because the extremely weak van der Waals bond of the dimer is too feeble to cause a significant disturbance to the electron shells of the two He atoms, which can thus be treated as separate atoms (30). Because He and He2, coexisting in a helium beam, have the same velocity, the dimer’s kinetic energy is twice that of the atoms. As a result, at a given incidence angle, He and He2 in a beam are quantum-reflected at about the same distance from the surface, because the increased incidence energy of He2 is compensated for by its larger C3 coefficient.
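This rule of thumb can be checked numerically (a rough sketch using the He parameters quoted in the surrounding text; He2 comes out at nearly the same heights because its doubled kinetic energy is offset by its doubled C3. The printed heights agree with the values quoted in the next paragraph to within a few percent, the residual difference presumably coming from rounding of the inputs):

import math

h = 6.62607015e-34      # Planck constant (J s)
m_He = 6.6465e-27       # He atomic mass (kg)
lam = 0.327e-9          # He de Broglie wavelength from Fig. 2 (m)
C3, l = 0.202, 9.3      # He-Al: C3 (meV nm^3) and length l (nm)

def reflection_height(theta_in):
    """Height z (nm) where E_perp = |V(z)| = C3*l / ((l + z) * z**3)."""
    v = h / (m_He * lam)                               # ~305 m/s
    E_perp = 0.5 * m_He * (v * math.sin(theta_in))**2  # J
    E_perp /= 1.602176634e-22                          # J -> meV
    lo, hi = 1.0, 500.0                                # bracket (nm)
    for _ in range(60):                                # bisection
        z = 0.5 * (lo + hi)
        if C3 * l / ((l + z) * z**3) > E_perp:
            lo = z    # potential still exceeds E_perp: move outward
        else:
            hi = z
    return z

for th in (0.740e-3, 1.047e-3, 1.282e-3, 1.480e-3):
    print(f"{th * 1e3:.3f} mrad -> z ~ {reflection_height(th):.0f} nm")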
Using C3 = 0.202 meV nm³ between helium and aluminum (31), at θin = 0.740, 1.047, 1.282, and 1.480 mrad, which are the Rayleigh angles for the emergence of the –1st- to –4th-order He2 peaks in Fig. 2, the surface distance where quantum reflection takes place is estimated from the rule of thumb to be 35.3, 29.2, 26.2, and 24.2 nm, respectively. However, for identical de Broglie wavelengths, as in Fig. 4, quantum reflection of the three species is expected to occur at different heights above the surface. This can easily be seen by considering the rule of thumb and the strength of the particle-surface interaction, which is stronger for D2 than for He. As for He and He2, for identical de Broglie wavelengths the dimer’s incident kinetic energy is just one-half of the monomer’s kinetic energy. Therefore, quantum reflection of He2 is expected to occur at larger distances above the grating surface as compared to He.

### Secondary scattering phase shift

It is straightforward to calculate the additional phase shift Φ induced by the particle-surface potential for an atom or molecule of mass M and incident kinetic energy E propagating along the path deff between the first and second scattering (see Fig. 3). We apply the rule of thumb stating that quantum reflection takes place at about that distance to the surface where the particle’s incident kinetic energy (corresponding to the motion along the surface-normal coordinate) equals the absolute magnitude of the Casimir-Polder potential energy (9–11). Therefore, for secondary scattering, we can approximate the potential energy probed by the particle along the additional path of length deff to be equal to Eperp = (1/2)Mvperp². Here, vperp denotes the normal component of the incident particle velocity. The particle-surface potential–induced phase shift can be calculated as Φ = (k − k0)deff = (2π/λ)[√(1 + Eperp/E) − 1]deff ≈ (πdeff/λ)(Eperp/E), where the square root has been approximated by its first-order Taylor expansion, justified by Eperp << E. k and k0 denote the particle’s wave vector in the presence and absence of the particle-surface interaction potential, respectively. In the former case, the kinetic energy is increased by the potential energy of the atom-surface interaction. We assume k to be constant along the path deff between the first and second scattering, corresponding to a constant height of the particle above the surface of the grating facet. The phase shift Φ ≈ (πdeff/λ)sin²θin, which follows from Eperp/E = sin²θin, can be further simplified. For Rayleigh conditions of mth-order emergence, given by cosθR,m − 1 = mλ/deff, we get sin²θR,m = (1 + cosθR,m)(1 − cosθR,m) = −(1 + cosθR,m)(mλ/deff). As a result, one finds that, for quantum reflection under Rayleigh conditions, the phase shift induced by the potential is Φ = −mπ[1 + cosθR,m] ≈ −2πm.

## SUPPLEMENTARY MATERIALS

Source and helium beam.
Slits, apparatus geometry, and definition of angles.
Mass spectrometer detector and apparatus resolution.
Derivation of the “rule of thumb” of quantum reflection.
Fig. S1. Schematic of the quantum-reflection diffraction setup.
References (32, 33)

This is an open-access article distributed under the terms of the Creative Commons Attribution license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Acknowledgments: We thank P. R. Bunker for proofreading the manuscript and L. Y. Kim for help with preparing the figures. Funding: B.S.Z. acknowledges support from the T.J. Park Science Fellowship, and W.Z. acknowledges support from the Alexander von Humboldt Foundation.
This work was further supported by a grant from the Creative and Innovation Project (1.120025.01) at UNIST and by the National Research Foundation of the Ministry of Education, Science and Technology, Korea (NRF-2012R1A1A1041789). Author contributions: B.S.Z. and W.S. conceived the experiment, W.Z. and B.S.Z. made the measurements and analyzed the data, and B.S.Z. and W.S. derived the model description and wrote the manuscript. Competing interests: The authors declare that they have no competing interests. Data and materials availability: All data needed to evaluate the conclusions in the paper are present in the paper and/or the Supplementary Materials. Additional data related to this paper may be requested from the authors.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8935216665267944, "perplexity": 1356.0545064596763}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627998581.65/warc/CC-MAIN-20190617223249-20190618005249-00109.warc.gz"}
http://fds.duke.edu/db/aas/Physics/faculty/goshaw/publications/306710
## Publications [#306710] of Alfred T. Goshaw

Papers Published

1. Aaltonen, T; Amerio, S; Amidei, D; Anastassov, A; Annovi, A; Antos, J; Apollinari, G; Appel, JA; Arisawa, T; Artikov, A; Asaadi, J; Ashmanskas, W; Auerbach, B; Aurisano, A; Azfar, F; Badgett, W; Bae, T; Barbaro-Galtieri, A; Barnes, VE; Barnett, BA; Barria, P; Bartos, P; Bauce, M; Bedeschi, F; Behari, S; Bellettini, G; Bellinger, J; Benjamin, D; Beretvas, A; Bhatti, A; Bland, KR; Blumenfeld, B; Bocci, A; Bodek, A; Bortoletto, D; Boudreau, J; Boveia, A; Brigliadori, L; Bromberg, C; Brucken, E et al., Signature-based search for delayed photons in exclusive photon plus missing transverse energy events from $p\bar{p}$ collisions with $\sqrt{s}=1.96$ TeV, Physical Review D - Particles, Fields, Gravitation, and Cosmology, vol. 88 no. 3 (August 2013), pp. 031103 [doi].

Abstract: We present the first signature-based search for delayed photons using an exclusive photon plus missing transverse energy final state. Events are reconstructed in a data sample from the CDF II detector corresponding to $6.3\ \text{fb}^{-1}$ of integrated luminosity from $\sqrt{s}=1.96$ TeV proton-antiproton collisions. Candidate events are selected if they contain a photon with an arrival time in the detector larger than expected from a promptly produced photon. The mean number of events from standard model sources predicted by the data-driven background model based on the photon timing distribution is $286 \pm 24$. A total of 322 events are observed. A $p$-value of 12% is obtained, showing consistency of the data with standard model predictions.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9746968150138855, "perplexity": 27492.134674858335}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676589932.22/warc/CC-MAIN-20180717222930-20180718002930-00631.warc.gz"}
https://www.splashlearn.com/math-vocabulary/measurements/mile
# Mile – Definition with Examples

A mile is a customary unit of length equal to 5,280 feet, or 1.6093 km. It is a large unit, used to denote the distance between cities and the lengths of rivers and roads.
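For example, a distance of 5 miles is 5 × 5,280 = 26,400 feet, or 5 × 1.6093 ≈ 8.05 km.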
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.98945552110672, "perplexity": 6630.371676454308}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662587158.57/warc/CC-MAIN-20220525120449-20220525150449-00131.warc.gz"}
https://arxiv.org/abs/astro-ph/0609066
# The Magnificent Seven: Magnetic fields and surface temperature distributions

Authors: F. Haberl

Abstract: Presently, seven nearby radio-quiet isolated neutron stars, discovered in ROSAT data and characterized by thermal X-ray spectra, are known. They exhibit very similar properties, and despite intensive searches their number has remained constant since 2001, which led to their name “The Magnificent Seven”. Five of the stars exhibit pulsations in their X-ray flux with periods in the range of 3.4 s to 11.4 s. XMM-Newton observations revealed broad absorption lines in the X-ray spectra, which are interpreted as cyclotron resonance absorption lines of protons or heavy ions and/or atomic transitions shifted to X-ray energies by strong magnetic fields of the order of 10^13 G. New XMM-Newton observations indicate more complex X-ray spectra with multiple absorption lines. Pulse-phase spectroscopy of the best-studied pulsars RX J0720.4-3125 and RBS 1223 reveals variations in derived emission temperature and absorption line depth with pulse phase. Moreover, RX J0720.4-3125 shows long-term spectral changes which are interpreted as due to free precession of the neutron star. Modeling of the pulse profiles of RX J0720.4-3125 and RBS 1223 provides information about the surface temperature distribution of the neutron stars, indicating hot polar caps which have different temperatures and different sizes and are probably not located in antipodal positions.

Comments: 10 pages, 8 figures, to appear in Astrophysics and Space Science, in the proceedings of "Isolated Neutron Stars: from the Interior to the Surface", edited by D. Page, R. Turolla and S. Zane
Subjects: Astrophysics (astro-ph)
DOI: 10.1007/s10509-007-9342-x
Cite as: arXiv:astro-ph/0609066

## Submission history

From: Frank Haberl
[v1] Mon, 4 Sep 2006 07:50:47 UTC (221 KB)
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5251227021217346, "perplexity": 4763.28742276576}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027330962.67/warc/CC-MAIN-20190826022215-20190826044215-00522.warc.gz"}
https://hsm.stackexchange.com/questions/2617/when-did-people-know-that-all-real-polynomials-of-degree-greater-than-2-are-redu
# When did people know that all real polynomials of degree greater than 2 are reducible?

Let $f(x) \in \mathbb{R}[x]$, and write $d = \deg f$. It is well known that if $\deg f > 2$, then $f$ is reducible over $\mathbb{R}$. This fact can easily be proved with the fundamental theorem of algebra. Indeed, by the fundamental theorem of algebra, $f(x)$ splits over $\mathbb{C}$, and since $f(x) = \overline{f(x)}$ when $x$ is real, it follows that the linear factors of $f$ must be real or come in conjugate pairs. Therefore, irreducible polynomials over $\mathbb{R}$ have degree 1 or 2 only, and it is easy to find examples of degree-two polynomials with real coefficients which are irreducible.

This fact has a profound impact on the theory of partial fractions, a staple of first-year calculus. Indeed, calculus II (at least at my university) tends to spend an inordinate amount of time on 'integration' via the method of anti-derivatives, which I believe most mathematicians know is ineffective in solving the majority of problems, considering most functions do not have elementary anti-derivatives. However, in the context of rational functions, anti-differentiation and partial fractions completely solve the problem, as any rational function can be written as the sum of simpler rational functions, each with an elementary anti-derivative. However, this fact is (I believe) far from obvious if you do not know that the only irreducible polynomials over $\mathbb{R}$ are linear or quadratic.

That said, it seems to me that the method of partial fractions is much older than the fundamental theorem of algebra. So when did people know (perhaps before the first proof of the fundamental theorem of algebra) that the only irreducible polynomials over $\mathbb{R}$ are linear or quadratic? When was the 'complete' solution of anti-derivatives of rational functions obtained? Is this history accounted for anywhere?

I apologize in advance if this question is trivial.

Migrated from mathoverflow.net, Aug 2 '15 at 12:36. This question came from our site for professional mathematicians.

• If someone knew long ago that all polynomials over the reals can be factored into polynomials of degree $\leq 2$, then all that would be needed to get the fundamental theorem of algebra would be that real quadratics have complex roots. That would be known pretty much immediately after complex numbers were invented. – Andreas Blass Aug 2 '15 at 2:58
• As I understand it, one of the reasons proving the FTA was important was to ensure that partial fractions would always work, at least in principle; I do not believe that it was a known fact before then. – Arturo Magidin Aug 2 '15 at 4:39
• This question would be more suitable on the History of Science and Mathematics stackexchange site. You are essentially asking who first proved the fundamental theorem of algebra. That is generally attributed to Gauss, in his doctoral thesis, although I believe his argument had some topological gaps. – KCd Aug 2 '15 at 6:34
• A very complete answer is contained in the Wikipedia article "Fundamental theorem of algebra". – Alexandre Eremenko Aug 2 '15 at 12:59
• I wrote an essay about the mathematics of integrating rational functions in a 15 April 2006 sci.math post, and in my brief historical comments I made a reference to the MacTutor History of Mathematics archive entry on the fundamental theorem of algebra, which seems to be more focused on what you're looking for than the Wikipedia article. See also the comments at the end of my sci.math post about Leibniz. – Dave L Renfro Aug 3 '15 at 19:16
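For concreteness, the conjugate-pairing step in the question can be spelled out (a standard computation, added here as illustration): if $z$ is a non-real root of $f$, then so is $\bar{z}$, and the pair contributes the real quadratic factor $(x - z)(x - \bar{z}) = x^2 - 2\,\mathrm{Re}(z)\,x + |z|^2 \in \mathbb{R}[x]$. For example, the roots of $x^4 + 1$ are $e^{\pm i\pi/4}$ and $e^{\pm 3i\pi/4}$, giving the factorization $x^4 + 1 = (x^2 - \sqrt{2}\,x + 1)(x^2 + \sqrt{2}\,x + 1)$ into two real irreducible quadratics.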
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8889865279197693, "perplexity": 189.35054184039356}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987751039.81/warc/CC-MAIN-20191021020335-20191021043835-00220.warc.gz"}
http://www.koreascience.or.kr/search.page?keywords=%EC%83%81%EB%8C%80+%EC%A0%88%EC%A0%90+%EB%B3%80%EC%9C%84
• Title, Summary, Keyword: relative nodal displacement

### A Relative Nodal Displacement Method for Element Nonlinear Analysis

• Kim Wan Goo; Bae Dae Sung • Transactions of the Korean Society of Mechanical Engineers A • v.29 no.4 • pp.534-539 • 2005

Nodal displacements are referred to the initial configuration in the total Lagrangian formulation and to the last converged configuration in the updated Lagrangian formulation. This research proposes a relative nodal displacement method to represent the position and orientation of a node in truss structures. Since the proposed method measures nodal displacements relative to an adjacent nodal reference frame, these displacements remain small even for a truss structure undergoing large deformations, provided the elements are small. As a consequence, element formulations developed under the small deformation assumption are still valid for structures undergoing large deformations, which significantly simplifies the equations of equilibrium. A structural system is represented by a graph to systematically develop the governing equations of equilibrium for general systems. A node and an element are represented by a node and an edge in the graph representation, respectively. Closed loops are opened to form a spanning tree by cutting edges. Two computational sequences are defined in the graph representation. One is the forward path sequence, which is used to recover the Cartesian nodal displacements from relative nodal displacements and traverses the graph from the base node towards the terminal nodes. The other is the backward path sequence, which is used to recover the nodal forces in the relative coordinate system from the known nodal forces in the absolute coordinate system and traverses from the terminal nodes towards the base node. One open loop and one closed loop structure undergoing large deformations are analyzed to demonstrate the efficiency and validity of the proposed method.

### Non-Prismatic Beam Element for Beams with RBS Connection

• Kim, Kee Dong; Ko, Man Gi; Hwang, Byoung Kuk; Pae, Chang Kyu • Journal of Korean Society of Steel Construction • v.16 no.6 • pp.833-846 • 2004

This study presents a non-prismatic beam element for modeling the elastic behavior of steel beams, which have the post-Northridge connections in steel moment frames. The elastic stiffness matrix, including the shear effects for non-prismatic members with a reduced beam section (RBS) connection, is given in closed form. A simplified approach is also suggested, which uses a prismatic beam element to model beams with the RBS connection. This method can estimate quite exactly the maximum story drift ratios of frames with the RBS connection. The effects of the RBS connection on the elastic stiffness of steel moment frames were investigated. The selection of a proper model to account for deformations at the joint might play a more important role in estimating the maximum story drift ratios of frames with good accuracy than the RBS cutouts.

### A Relative Nodal Coordinate Method for Finite Element Nonlinear Structural Analysis

• Kang, Ki-Rang; Cho, Heui-Je; Bae, Dae-Sung • Proceedings of the Korean Society for Noise and Vibration Engineering Conference • pp.788-791 • 2005

Nodal displacements are referred to the initial configuration in the total Lagrangian formulation and to the last converged configuration in the updated Lagrangian formulation. This research proposes a relative nodal displacement method to represent the position and orientation of a node in truss structures. Since the proposed method measures nodal displacements relative to an adjacent nodal reference frame, these displacements remain small even for a truss structure undergoing large deformations, provided the elements are small. As a consequence, element formulations developed under the small deformation assumption are still valid for structures undergoing large deformations, which significantly simplifies the equations of equilibrium. A structural system is represented by a graph to systematically develop the governing equations of equilibrium for general systems. A node and an element are represented by a node and an edge in the graph representation, respectively. Closed loops are opened to form a spanning tree by cutting edges. Two computational sequences are defined in the graph representation. One is the forward path sequence, which is used to recover the Cartesian nodal displacements from relative nodal displacements and traverses the graph from the base node towards the terminal nodes. The other is the backward path sequence, which is used to recover the nodal forces in the relative coordinate system from the known nodal forces in the absolute coordinate system and traverses from the terminal nodes towards the base node. One closed loop structure undergoing large deformations is analyzed to demonstrate the efficiency and validity of the proposed method.

### Evaluation of the Stress Intensity Factor for a Crack in a Bimaterial Plate by the Boundary Element Method

• Kim, Sang-Cheol; Im, Won-Gyun • Journal of the Korean Society for Precision Engineering • v.9 no.2 • pp.108-115 • 1992

A boundary element analysis was performed for a crack perpendicular to the interface of a bimaterial, and practically usable numerical approximations were obtained. To model the crack accurately, a subregion method treating the crack surfaces as separate regions was adopted, and, to improve the accuracy of the solution, the boundary was divided into isoparametric quadratic elements and the traction singularity at the crack tip was represented. The stress intensity factors were determined using the relative displacements of nodes on the crack surfaces. In addition, a simplified analysis method for readily obtaining stress intensity factors for cracks in bimaterials is proposed, and its range of applicability is presented.

### Experimental Study on the Inelastic Behavior of Single-layer Latticed Dome with New Connection

• Kim, Myeong Han; Oh, Myoung Ho; Jung, Seong Yeol; Kim, Sang Dae • Journal of Korean Society of Steel Construction • v.21 no.2 • pp.145-154 • 2009

### Behavior of Back Ground of the Laterally Loaded Single Pile

• Bae, Jong-Soon; Kim, Sung-Ho • Journal of the Korean Geotechnical Society • v.24 no.8 • pp.53-60 • 2008

In this study, various behavior characteristics, such as the deformation zone of the back ground, the failure angle, and the rotation point, are examined for a laterally loaded single pile in homogeneous ground through a model test. The main conclusions are summarized as follows: in the back ground of a single pile to which a lateral load is applied, the failure surface shows almost linear movement characteristics and tends to converge to constant values regardless of the pile length and the pile head displacement.

### Experimental Study on the Inelastic Behavior of Single-layer Latticed Dome

• Kim, Jong-Soo; Kim, Sang-Dae; Kim, Myeong-Han; Oh, Myoung-Ho; Shin, Chang-Hoon • Proceeding of KASS Symposium • pp.165-170 • 2008
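The forward- and backward-path sequences described in the first abstracts above lend themselves to a short sketch (an illustrative reconstruction, not the authors' code; orientations are ignored, nodes carry translational displacements only, and the tree and data are invented):

import numpy as np

# Spanning tree of the structure: children[i] lists the nodes whose
# reference frame is attached to node i; the base node is 0.
children = {0: [1, 2], 1: [3], 2: [], 3: []}
# Displacement of each node measured relative to its parent node.
rel = {0: np.zeros(2), 1: np.array([1.0, 0.0]),
       2: np.array([0.0, 1.0]), 3: np.array([0.5, 0.5])}

def forward_path(node=0, origin=np.zeros(2), out=None):
    """Base -> terminal nodes: accumulate relative displacements
    to recover the Cartesian (absolute) nodal displacements."""
    if out is None:
        out = {}
    out[node] = origin + rel[node]
    for child in children[node]:
        forward_path(child, out[node], out)
    return out

def backward_path(abs_forces):
    """Terminal nodes -> base: each node's resultant carries its own
    absolute force plus the resultants of its whole subtree."""
    total = dict(abs_forces)
    order, stack = [], [0]
    while stack:                   # preorder listing of the tree
        n = stack.pop()
        order.append(n)
        stack.extend(children[n])
    for n in reversed(order):      # children are visited before parents
        for c in children[n]:
            total[n] = total[n] + total[c]
    return total

print(forward_path())  # absolute displacements of all four nodes
forces = {i: np.array([0.0, -1.0]) for i in rel}   # unit load on every node
print(backward_path(forces))   # base node carries the whole structure: [0, -4]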
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8146430850028992, "perplexity": 6152.62448340444}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738892.21/warc/CC-MAIN-20200812112531-20200812142531-00235.warc.gz"}
https://tex.stackexchange.com/questions/467729/in-the-figure-form-adjust-the-whole-size-of-text-and-math-format-at-once-i
# In the figure form, adjust the whole size of "text" and "math" format at once (II): from \twocolumngrid to \onecolumngrid

This follows from a previous post, "In the figure form, how to adjust the whole size of 'text' and 'math' format at once?". In the figure form, how do I adjust the whole size of "text" and "math" format "as a combined figure" at once, from \twocolumngrid to \onecolumngrid (say, in revtex)?

My trouble is that in the "twocolumn" format of revtex4-1, if I use the following method from @Herbert:

\documentclass[aps,prl,twocolumn,superscriptaddress,floatfix,letterpaper,nofootinbib]{revtex4-1}
\usepackage{mathtools,amssymb,varwidth}
\usepackage{showframe}
\begin{document}
\onecolumngrid
\begin{widetext}
\begin{figure}[!h]
\centering
\begin{center}
\resizebox{\linewidth}{!}{%
\begin{varwidth}{\linewidth}
\begin{gather*}
\overbrace{\underbrace{A \times B}_E\times \underbrace{C\times {D}}_{EFG}}^{\text{ABCDEFG}}\\[-\normalbaselineskip]
\underbrace{\hphantom{A\times B\times C\times D}}_{\text{family}}
\end{gather*}
\end{varwidth}}
\end{center}
\caption{}
\end{figure}
\end{widetext}
\twocolumngrid
\end{document}

then the output (figure omitted) is not what I wanted. I want the figure to stand in the middle, with adjustable size (likely 2/3 or 3/4 of the document width). However, the \begin{gather*} ... \end{gather*} seems to cause the trouble. If I remove \begin{gather*} ... \end{gather*} and simply use $...$, I get a compilation problem and a troublesome output, where the braces end up in the wrong position (figure omitted).

PS. My last figure (omitted) is what I am hoping to get (with tunable size, like 2/3 or 3/4 of the whole document width). But it should also be scale-invariant, the same as the smaller one in my first figure, and the code should be compilable.

Edit 1: The trouble seems to be caused by \\[-\normalbaselineskip], which produces "LaTeX Error: There's no line here to end." The following attempt cannot be compiled fully:

\documentclass[aps,prl,twocolumn,superscriptaddress,floatfix,letterpaper,nofootinbib]{revtex4-1}
\usepackage{mathtools,amssymb,varwidth}
\usepackage{showframe}
\begin{document}
\onecolumngrid
\begin{widetext}
\begin{figure}[!h]
\centering
\begin{center}
\resizebox{\linewidth}{!}{%
\begin{varwidth}{\linewidth}
$\overbrace{\underbrace{A \times B}_E\times \underbrace{C\times {D}}_{EFG}}^{\text{ABCDEFG}}\\[-\normalbaselineskip]
\underbrace{\hphantom{A\times B\times C\times D}}_{\text{family}}$
\end{varwidth}}
\end{center}
\caption{}
\end{figure}
\end{widetext}
\twocolumngrid
\end{document}

• Can you make a mock-up of what you're after? All you talk about are problems, without a clear indication of the expected output. Also, there is no \onecolumngrid or \twocolumngrid. – Werner Dec 28 '18 at 22:05
• Thank you Werner; what I am after is similar to the "last figure." (But I did not get an error-free compilation for that figure. Also, the figure is not scale-invariant relative to the original smaller one.) – wonderich Dec 28 '18 at 22:07
• There "are" \onecolumngrid and \twocolumngrid in my template. – wonderich Dec 28 '18 at 22:08
• @Werner, I am still in trouble trying to make it work --- any enlightenment will count as an answer to me. – wonderich Dec 28 '18 at 22:50
• It seems that my compilation error "LaTeX Error: There's no line here to end." is due to something from "\resizebox{\linewidth}{!} \begin{varwidth}{\linewidth}". – wonderich Dec 28 '18 at 23:41

Since displayed math uses the entire width of the column, varwidth won't help.
Instead, you need to measure the width separately and use a normal minipage: box register 0 holds the typeset underbrace, so \wd0 (its natural width) makes the minipage exactly as wide as the widest line before \resizebox scales the whole assembly to \linewidth. Note that the actual width of the underbrace is slightly larger than its measured width. One gets the same result using \sbox0{$\displaystyle A\times B\times C\times D$}%.

\documentclass[aps,prl,twocolumn,superscriptaddress,floatfix,letterpaper,nofootinbib]{revtex4-1}
\usepackage{mathtools,amssymb,varwidth}
\usepackage{showframe}
\begin{document}
\onecolumngrid
\begin{widetext}
\begin{figure}[htp]% I have yet to find a case where ! makes any difference whatsoever
\centering
\sbox0{$\displaystyle \underbrace{\hphantom{A\times B\times C\times D}}_\text{family}$}% measure width
\resizebox{\linewidth}{!}{\begin{minipage}{\wd0}
\begin{gather*}
\overbrace{\underbrace{A \times B}_E\times \underbrace{C\times {D}}_{EFG}}^{\text{ABCDEFG}}\\[-\normalbaselineskip]
\underbrace{\hphantom{A\times B\times C\times D}}_{\text{family}}
\end{gather*}
\end{minipage}}
\caption{}
\end{figure}
\end{widetext}
\twocolumngrid
\end{document}
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7915777564048767, "perplexity": 2023.15511381411}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046155458.35/warc/CC-MAIN-20210805063730-20210805093730-00418.warc.gz"}
https://runestone.academy/runestone/books/published/fopp/Files/WritingCSVFiles.html
# 10.11. Writing data to a CSV File

The typical pattern for writing data to a CSV file is to write a header row and loop through the items in a list, outputting one row for each. Here we have a list of tuples, each representing one Olympian, a subset of the rows and columns from the file we have been reading from; code along these lines is sketched at the end of this section.

There are a few things worth noting in that code. First, using .format() makes it really clear what we're doing when we create the variable row_string: we are making a comma-separated set of values, and the {} curly braces indicate where to substitute in the actual values. The equivalent string concatenation would be very hard to read. An alternative, also clear, way to do it would be with the .join method: row_string = ','.join([olympian[0], str(olympian[1]), olympian[2]]).

Second, unlike the print statement, the .write() method on a file object does not automatically insert a newline; instead, we have to explicitly add the character \n at the end of each line.

Third, we have to explicitly refer to each of the elements of olympian when building the string to write. Note that just putting .format(olympian) wouldn't work, because the interpreter would see only one value (a tuple) when it was expecting three values to substitute into the string template. Later in the book we will see that Python provides an advanced technique for automatically unpacking the three values from the tuple, with .format(*olympian).

As described previously, if one or more columns contain text, and that text could contain commas, we need to do something to distinguish a comma in the text from a comma that separates different values (cells in the table). If we want to enclose each value in double quotes, it can start to get a little tricky, because we will need to have the double quote character inside the string output. But it is doable. Indeed, one reason Python allows strings to be delimited with either single quotes or double quotes is so that one can be used to delimit the string and the other can be a character in the string. If you get to the point where you need to quote all of the values, we recommend learning to use Python's csv module.
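The interactive code block this section refers to did not survive extraction; the following sketch is consistent with the description above (the file name and sample rows are invented for illustration):

olympians = [("John Aalberg", 31, "Cross Country Skiing"),
             ("Minna Maarit Aalto", 30, "Sailing"),
             ("Win Valdemar Aaltonen", 54, "Art Competitions")]

outfile = open("reduced_olympics.csv", "w")
# write the header row first
outfile.write("Name,Age,Sport")
outfile.write("\n")
# then output one row for each olympian
for olympian in olympians:
    row_string = "{},{},{}".format(olympian[0], olympian[1], olympian[2])
    outfile.write(row_string)
    outfile.write("\n")
outfile.close()

And if you do need every value quoted, the csv module mentioned in the last paragraph handles it directly:

import csv

with open("reduced_olympics.csv", "w", newline="") as f:
    writer = csv.writer(f, quoting=csv.QUOTE_ALL)  # put every value in quotes
    writer.writerow(["Name", "Age", "Sport"])
    writer.writerows(olympians)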
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3595482110977173, "perplexity": 599.583914725852}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593657176116.96/warc/CC-MAIN-20200715230447-20200716020447-00018.warc.gz"}
https://purerims.smu.ac.za/en/publications/trace-metals-bioaccumulation-potentials-of-three-indigenous-grass
# Trace metals bioaccumulation potentials of three indigenous grasses grown on polluted soils collected around mining areas in Pretoria, South Africa

Gladness Nteboheng Lion*, Joshua Oluwole Olowoyo, T. A. Modise

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

2 Citations (Scopus)

## Abstract

The rapid increase in the number of industries may have increased the levels of trace metals in the soil. Phytoremediation of such polluted soils using indigenous grasses is now considered an alternative method of remediating them. The present study investigated and compared the ability of three indigenous grasses to act as bioaccumulators of trace metals from polluted soils. Seeds of these grasses were introduced into pots containing polluted soil samples after the addition of organic manure. The seeds were allowed to germinate and grow to maturity before harvesting. The harvested grasses were then separated into shoots and roots, and the trace metal contents were determined using ICP-MS. In all the grasses, the concentrations of trace metals in the roots exceeded those recorded in the shoots, with a significant difference (P < 0.05). The transfer factor (TF) showed that Zn was the most bioaccumulated trace metal in all the grasses, followed by Pb, Mn, and Cu respectively. Chromium concentration in the shoots of the grasses was in the order Urochloa mosambicensis > Themeda triandra > Cynodon dactylon. The study concluded that the three grasses were all able to bioaccumulate trace metals in similar proportions from the polluted soils. However, since livestock feed on these grasses, they should not be allowed to feed on the grasses used in this study, especially when harvested from polluted soil, given their bioaccumulative potentials.

Original language: English
Pages: 43-51
Number of pages: 9
Journal: West African Journal of Applied Ecology
Volume: 24
Issue: 1
Publication status: Published - 2016
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8373563885688782, "perplexity": 11584.940124741097}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500837.65/warc/CC-MAIN-20230208155417-20230208185417-00412.warc.gz"}
https://brilliant.org/discussions/thread/lets-solve-together/
# Let's solve together

HELLO EVERYONE, I am a 15-year-old boy from Bangladesh who has always loved doing GEOMETRY and probably always will. Actually, I am quite insistent on finding the solution of any problem. I even solved some of them after trying for several months continuously!! However, here I would like to introduce a geometry problem that I have been trying for about 3 months without finding much of a clue. WHY DON'T WE SOLVE IT TOGETHER?!

In the diagram above, ▲ADB is a right-angled triangle, where ∠ADB is the right angle. E is a point on DB. EF is perpendicular to AB. AE meets the circumcircle of ▲ADB at H, and FH meets DB at G. Given that DE = 5 and EG = 3, find the value of BG.

All I have found is some similar triangles; everything new I tried only made things more complex. I will of course share what I have found so far. Let's discuss together! You can share any of your thoughts regarding the problem and ask any question freely!! HOPE TO TALK TO YOU SOON! ;))

Note by Shamin Yeaser
9 months, 1 week ago

Sort by:

BG = 12

Hint: Drop the altitude from D onto AB and extend it to intersect the circumcircle. Compare with the point of intersection of FH with the circumcircle (on the other side).

- 9 months, 1 week ago

OH MY GOD!!! You don't even know how many emotions I gathered around this problem. For a while I wondered whether I would ever be able to solve it. THANK YOU SO MUCH!!! I tried your method and found that they both meet at the same point; then I worked with the similar triangles!! Again, thank you!! Your favor truly means a lot to me. Gonna follow you now. ;))

- 9 months, 1 week ago
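For anyone curious, here is a brute-force coordinate verification of the posted answer; this is only a check of BG = 12 under my own coordinate setup, not the hinted synthetic proof. Place $D$ at the origin with $DB$ along the $x$-axis, so that $A = (0, a)$, $B = (b, 0)$, $E = (5, 0)$ and, taking $G$ on the far side of $E$, $G = (8, 0)$. The circumcircle of $\triangle ADB$ is $x^2 + y^2 - bx - ay = 0$. Intersecting line $AE$ with this circle gives

$$H = \left( \frac{5(a^2+5b)}{25+a^2},\ \frac{5a(5-b)}{25+a^2} \right),$$

and the foot of the perpendicular from $E$ to $AB$ is

$$F = \left( \frac{b(a^2+5b)}{a^2+b^2},\ \frac{ab(b-5)}{a^2+b^2} \right).$$

The line $FH$ then crosses $DB$ at

$$x_G = \frac{10b(a^2+5b)}{a^2 b + 5a^2 + 5b^2 + 25b}.$$

Setting $x_G = 8$ and simplifying yields $(b-20)(a^2+5b) = 0$, hence $b = 20$ for every valid triangle, and $BG = DB - DG = 20 - 8 = 12$. (Placing $G$ on the other side of $E$ instead would force $b < 5$, which is impossible since $DE = 5$.)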
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8929376602172852, "perplexity": 5613.201800588136}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676589417.43/warc/CC-MAIN-20180716174032-20180716194032-00129.warc.gz"}
https://www.gradesaver.com/textbooks/math/algebra/elementary-algebra/chapter-4-proportions-percents-and-solving-inequalities-chapters-1-4-cumulative-review-problem-set-page-186/19
## Elementary Algebra

$5$

Combining like terms, the given expression simplifies to:

$5x-7y-8x+3y$
$=5x-8x+3y-7y$
$=x(5-8)+y(3-7)$
$=-3x-4y$

We then substitute $x=9$ and $y=-8$ into the expression and simplify:

$-3x-4y$
$=-3(9)-4(-8)$
$=-27+32$
$=5$
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9317671656608582, "perplexity": 296.43706851470284}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794867904.94/warc/CC-MAIN-20180526210057-20180526230057-00016.warc.gz"}
https://wiki.kidzsearch.com/wiki/Nano-
International System of Units
(Redirected from Nano-)

[Image: Links between the seven SI base unit definitions. Clockwise from top: kelvin (temperature), second (time), metre (length), kilogram (mass), candela (luminous intensity), mole (amount of substance) and ampere (electric current).]

The International System of Units is the standard modern form of the metric system. The name of this system can be shortened or abbreviated to SI, from the French name Système International d'unités.

The International System of Units is a system of measurement based on 7 base units: the metre (length), kilogram (mass), second (time), ampere (electric current), kelvin (temperature), mole (quantity), and candela (brightness). These base units can be used in combination with each other. This creates SI derived units, which can be used to describe other quantities, such as volume, energy, pressure, and velocity. The system is used almost globally. Only Myanmar, Liberia, and the United States do not use SI as their official system of measurement.[1] In these countries, though, SI is commonly used in science and medicine.

History and use

The metric system was created in France after the French Revolution in 1789. The original system only had two standard units, the kilogram and the metre. The metric system became popular amongst scientists.

In the 1860s, James Clerk Maxwell and William Thomson (later known as Lord Kelvin) suggested a system with three base units - length, mass, and time. Other units would be derived from those three base units. Later, this suggestion would be used to create the centimetre-gram-second system of units (CGS), which used the centimetre as the base unit for length, the gram as the base unit for mass, and the second as the base unit for time. It also added the dyne as the base unit for force and the erg as the base unit for energy.

As scientists studied electricity and magnetism, they realized other base units were needed to describe these subjects. By the middle of the 20th century, many different versions of the metric system were being used. This was very confusing. In 1954, the 9th General Conference on Weights and Measures (CGPM) created the first version of the International System of Units. The six base units that they used were the metre, kilogram, second, ampere, kelvin, and candela.[2] The seventh base unit, the mole, was added in 1971.[3]

SI is now used almost everywhere in the world, except in the United States, Liberia and Myanmar, where the older imperial units are still widely used. Other countries, most of them historically related to the British Empire, are slowly replacing the old imperial system with the metric system or using both systems at the same time.

Units of measurement

Base units

The SI base units are measurements used by scientists and other people around the world. All the other units can be written by combining these seven base units in different ways. These other units are called "derived units".

• metre (m) — length. Originally defined as one ten-millionth of the distance from the equator to the North Pole; currently defined as the distance light travels in a vacuum in 1⁄299,792,458 of a second.
• kilogram (kg) [note 1] — mass. Originally defined as the mass of one litre of water; later defined as the mass of the international prototype kilogram.
• second (s) — time. Original (Medieval): 1⁄86,400 of a day. Current (1967): the time needed for 9,192,631,770 periods or cycles of the radiation created by electrons moving between two energy levels of the caesium-133 atom.
• ampere (A) — electric current. Original (1881): a tenth of the abampere, the unit of current used in the electromagnetic CGS system.[4] Current (1946): the current passing through two very long and thin wires placed 1 m apart that produces an attractive force equal to 2×10⁻⁷ newton per metre of length.
• kelvin (K) — temperature. Original (1743): the centigrade scale is obtained by assigning 0° to the freezing point of water and 100° to the boiling point of water. Current (1967): the fraction 1⁄273.16 of the thermodynamic temperature of the triple point of water.
• mole (mol) — amount of substance. Original (1900): the molecular weight of a substance in mass grams. Current (1967): the same amount as the number of atoms in 0.012 kilogram of carbon-12.[note 2]
• candela (cd) — luminous intensity. Original (1946): 1⁄60 of the brightness per square centimetre of a black body at the temperature where platinum freezes. Current (1979): the luminous intensity, in a given direction, of a source that emits monochromatic radiation of frequency 540×10¹² hertz and that has a radiant intensity in that direction of 1⁄683 watt per steradian.

Notes
1. The kilogram is the SI base unit of mass and is used in the definitions of derived units. However, units of mass are named using prefixes as if the gram were the base unit.
2. When the mole is used, the substance being measured must be specified and may be atoms, molecules, ions, electrons, other particles, or specified groups of such particles.

Derived units

Derived units are created by combining the base units. The base units can be divided, multiplied, or raised to powers. Some derived units have special names. Usually these were created to make calculations simpler.

Named units derived from SI base units:

Name | Symbol | Quantity | In other units | In SI base units
hertz | Hz | frequency | — | s⁻¹
newton | N | force, weight | — | m·kg·s⁻²
pascal | Pa | pressure, stress | N/m² | m⁻¹·kg·s⁻²
joule | J | energy, work, heat | N·m | m²·kg·s⁻²
watt | W | power, radiant flux | J/s | m²·kg·s⁻³
coulomb | C | electric charge | — | s·A
volt | V | voltage, electrical potential difference, electromotive force | W/A = J/C | m²·kg·s⁻³·A⁻¹
farad | F | electrical capacitance | C/V | m⁻²·kg⁻¹·s⁴·A²
ohm | Ω | electrical resistance, impedance, reactance | V/A | m²·kg·s⁻³·A⁻²
siemens | S | electrical conductance | 1/Ω | m⁻²·kg⁻¹·s³·A²
weber | Wb | magnetic flux | J/A | m²·kg·s⁻²·A⁻¹
tesla | T | magnetic field strength | Wb/m² = V·s/m² = N/(A·m) | kg·s⁻²·A⁻¹
henry | H | inductance | Wb/A = V·s/A | m²·kg·s⁻²·A⁻²
degree Celsius | °C | temperature relative to 273.15 K | T(°C) = T(K) − 273.15 | K
lumen | lm | luminous flux | cd·sr | cd
lux | lx | illuminance | lm/m² | m⁻²·cd
becquerel | Bq | radioactivity (decays per unit time) | — | s⁻¹
gray | Gy | absorbed dose (of ionizing radiation) | J/kg | m²·s⁻²
sievert | Sv | equivalent dose (of ionizing radiation) | J/kg | m²·s⁻²
katal | kat | catalytic activity | — | s⁻¹·mol

Prefixes

Very large or very small measurements can be written using prefixes. Prefixes are added to the beginning of the unit to make a new unit. For example, the prefix kilo- means "1000" times the original unit and the prefix milli- means "0.001" times the original unit. So one kilometre is 1000 metres and one milligram is a 1000th of a gram.

Standard prefixes for the SI units of measure:

Multiples:
Name:   deca-  hecto-  kilo-  mega-  giga-  tera-  peta-  exa-   zetta-  yotta-
Prefix: da     h       k      M      G      T      P      E      Z       Y
Factor: 10¹    10²     10³    10⁶    10⁹    10¹²   10¹⁵   10¹⁸   10²¹    10²⁴

Fractions:
Name:   deci-  centi-  milli-  micro-  nano-  pico-  femto-  atto-  zepto-  yocto-
Prefix: d      c       m       μ       n      p      f       a      z       y
Factor: 10⁻¹   10⁻²    10⁻³    10⁻⁶    10⁻⁹   10⁻¹²  10⁻¹⁵   10⁻¹⁸  10⁻²¹   10⁻²⁴

References
1. "Appendix G: Weights and Measures". The World Factbook. Central Intelligence Agency. 2013. Retrieved 5 April 2013.
2. International Bureau of Weights and Measures. 9th session, Resolution 6.
3. International Bureau of Weights and Measures (1971). Unité SI de quantité de matière (SI unit of amount of substance). 14th session, Resolution 3.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8408257365226746, "perplexity": 3864.1691077277846}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986660067.26/warc/CC-MAIN-20191015155056-20191015182556-00028.warc.gz"}
http://openstudy.com/updates/50cfda98e4b06d78e86d6ee4
## Danelle96 · 2 years ago

Find the midpoint of PQ.
(3, 2) (3, 3) (2, 2) (2, 3)

1. Danelle96 (2 years ago): [1 Attachment]

2. jazy (2 years ago): P (-2, 8), Q (8, -4). Midpoint formula: $(\frac{x_1 + x_2}{2}, \frac{y_1 + y_2}{2})$ = $(\frac{-2 + 8}{2}, \frac{8 - 4}{2})$ = $(\frac{6}{2}, \frac{4}{2})$ = (3, 2)

3. jazy (2 years ago): @Danelle96

4. Danelle96 (2 years ago): Thank you sooo much!

5. jazy (2 years ago): You're welcome! (:
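For anyone who wants to check midpoints mechanically, here is a tiny sketch of the same computation (the function name is my own):

```python
def midpoint(p, q):
    """Average the x- and y-coordinates of two points."""
    return ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)

print(midpoint((-2, 8), (8, -4)))  # -> (3.0, 2.0), matching the answer (3, 2)
```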
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9995662569999695, "perplexity": 17178.024424973817}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-14/segments/1427131300280.0/warc/CC-MAIN-20150323172140-00196-ip-10-168-14-71.ec2.internal.warc.gz"}
https://jart.guilan.ac.ir/article_3687.html
# On the Class of Subsets of Residuated Lattice which Induces a Congruence Relation

Document Type: Research Paper

Author
Department of Mathematics, Faculty of Mathematical Sciences and Computer, Shahid Chamran University of Ahvaz, Ahvaz, Iran

Abstract
In this manuscript, we study the class of special subsets connected with a subset in a residuated lattice and investigate some related properties. We describe the union of the elements of this class. Using the intersection of all special subsets connected with a subset, we give a necessary and sufficient condition for a subset to be a filter. Finally, by defining some operations, we endow this class with a residuated lattice structure and prove that it is isomorphic to the set of all congruence classes with respect to a filter.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8293581604957581, "perplexity": 409.24174668095077}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540532624.3/warc/CC-MAIN-20191211184309-20191211212309-00521.warc.gz"}
https://blog.supplysideliberal.com/post/141129134773/how-negative-interest-rates-prevail-in-market
# How Negative Interest Rates Prevail in Market Equilibrium

Many people have the intuition that even if paper currency were out of the picture, other things that pay a zero interest rate would still create a zero lower bound, so that an attempt to take the target rate into deep negative territory would fail. Among them is one of the greatest economics bloggers of them all: John Cochrane. In his Grumpy Economist post "Cancel Currency?" he writes:

Suppose we have substantially negative interest rates – -5% or -10%, say, and lasting a while. But there is no currency. How else can you ensure yourself a zero riskless nominal return? Here are the ones I can think of:

• Prepay taxes. The IRS allows you to pay as much as you want now, against future taxes.
• Gift cards. At a negative 10% rate, I can invest in about $10,000 of Peets' coffee cards alone. There is now apparently a hot secondary market in gift cards, so large values and resale could take off.
• Likewise, stored value cards, subway cards, stamps. Subway cards are anonymous so you could resell them.
• Prepay bills. Send $10,000 to the gas company, electric company, phone company.
• Prepay rent or mortgage payments.
• Businesses: prepay suppliers and leases. Prepay wages, or at least pre-fund benefits that workers must stay employed to earn.

My brother Chris and I answer this argument in "However Low Interest Rates Might Go, the IRS Will Never Act Like a Bank." The set of things that can create a zero lower bound can be narrowed down considerably by the two key principles we explain there:

1. Giving a zero interest rate when market interest rates are in deep negative territory (say -5%) is a money-losing proposition. Private firms are unlikely to continue very long in providing such an above-market interest rate to individuals wanting to store money with them.

2. Anything that can vary in price cannot create a zero lower bound: negative interest rates will either cause its price to go up enough that expected depreciation gives it a negative expected return, or potential price variation will make its return risky enough that it is clear there is no risk-free arbitrage to be had. This rules out things such as gold or foreign assets from creating a zero lower bound, unless a credibly fixed exchange rate or an established price of gold is in play.

What is left? The only other category I can see is opportunities to lend to a government within the central bank's currency zone at a fixed interest rate. But even there, there is another logical proviso on what can create a zero lower bound. I explain in "How to Keep a Zero Interest Rate on Reserves from Creating a Zero Lower Bound":

3. … a zero interest rate that only applies to a limited quantity of funds does not create a zero lower bound. The reason that our current paper currency policy creates a zero lower bound is that under current policy banks can withdraw an unlimited quantity of paper currency and redeposit it later on at par. By contrast, within-year prepayment of taxes is possible but practically limited to the amount of the tax liability. (Between tax years a typically nonzero interest rate based on the market yield of short-term U.S. obligations applies.)

Thus, other than paper currency: IRS interest rates between tax years are set by the Secretary of the Treasury in line with market short-term rates, such as the Treasury bill rate. They are no stickier than the Secretary of the Treasury wants them to be.
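To get a feel for why quantity-limited zero rates don't bind, here is a back-of-the-envelope sketch; the dollar figures and the half-year timing shift are my own illustrative assumptions, not numbers from the post:

```python
# Back-of-the-envelope: the value of prepaying a year's taxes when
# the market risk-free rate is deeply negative.
tax_liability = 12_000   # dollars a household owes over the year (assumption)
market_rate = -0.04      # -4% per year market interest rate
avg_shift = 0.5          # payments shifted forward ~half a year on average (assumption)

# Parking the liability at 0% instead of -4% for ~6 months saves:
saving = tax_liability * (-market_rate) * avg_shift
print(f"Saved by prepaying: ${saving:.2f}")  # ~$240, and capped by the liability itself
```

The point of the sketch is the cap: however attractive the zero rate is, the amount that can be parked this way is bounded by the tax liability, so it cannot absorb the much larger stock of funds facing negative rates.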
It is true that a sufficiently determined Secretary of the Treasury could probably thwart a Fed move to negative interest rates by offering convenient saving at a zero interest rate through the tax system. But I don't think the Fed would be likely to go to deep negative rates in any case without some degree of tacit backing from the Executive Branch. (I do think that with the Executive Branch's tacit backing, the Fed might go to deep negative rates despite complaints in Congress if it thought that was necessary for the economy.)

Finally, suppose I am wrong about the willingness of private firms to lose money by continuing to offer a zero interest rate when many market rates have gone substantially negative. In "Banking at the IRS," John Cochrane argues that private interest rates are sticky at zero. There is still a limit to how much a firm can allow individuals to store at an above-market zero interest rate without going bankrupt, and in practice, the quantity limit of how much value a firm will allow people to store at a zero interest rate is much tighter than that.

Let's get more concrete about the sheer magnitude of the task of finding zero interest ways of storing one's money when the central bank is bidding up the price–and therefore down the interest rate–of Treasury bills as far as it can before investors sell over the whole stock of Treasury bills. To make the calculations easier, let me imagine that before going to negative rates, a central bank has done enough quantitative easing that most of the national debt in private hands is in the form of short-term Treasury bills that have a negative rate. In that case, the net debt-to-GDP ratio (based on government debt in private hands) puts a floor on how much in funds private individuals will be trying to shift into zero interest rate opportunities. Actually, to this should be added the paper currency to GDP ratio too, since under my proposal, paper currency carries a negative rate of return because of its gradual depreciation against electronic money. (It is only because paper currency would have a negative rate of return under my proposed policy that the discussion in this post even arises.)

Here are a few interesting net debt/GDP ratios rounded to the nearest full percent as of the latest update of the Wikipedia article "List of countries by public debt" in 2012. I doubt many of these numbers have gone down since then:

• Australia: 17%
• Austria: 51%
• Belgium: 106%
• Denmark: 8%
• Finland: -51%
• France: 84%
• Germany: 57%
• Greece: 155%
• Ireland: 102%
• Israel: 70%
• Italy: 103%
• Japan: 134%
• Netherlands: 32%
• Norway: -166%
• Portugal: 112%
• Spain: 72%
• Sweden: -18%
• Switzerland: 28%
• United Kingdom: 83%
• United States: 88%

Certainly, in the eurozone, Japan, the United Kingdom, the US and Canada, the task of finding zero interest rate opportunities for all the funds that start out in government debt is daunting. Countries like Norway that have a substantial sovereign wealth fund show that the amount of money the public holds in government bills and bonds–surely a positive number–can be larger than the government debt with assets netted out–in Norway's case, a negative number. So the net debt-to-GDP ratios above are only the start of the funds that might be trying to escape a negative interest rate. The biggest single opportunity for getting a zero interest rate when rates in general are negative is typically the tax system.
I suspect that most countries have much less wiggle room for playing with the timing of tax payments than the US. For example, the rules for the timing of paying VAT taxes probably don't have the same kind of wiggle room. And even in the US, the wiggle room on the timing of payments is probably much greater for households than for firms.

In the US, tax revenue as a percentage of GDP is something like 27%. But shifting tax payments from being paid each month as the income comes in to being paid on January 1, say, only shifts that 27% forward by 6.5 months on average, since some of the payments are already early in the year. Or for those who pay quarterly, things might be shifted forward by 7.5 months. (7.5/12) * 27% is less than 17%. (This is composed of up to 27% of GDP at a zero interest rate at the beginning of the year, and much less at a zero interest rate toward the end of the year.) That limit of 17% of annual GDP (averaged over the year) that can get a zero interest rate is far short of the 88% that individuals and firms in the US and abroad will want to find in zero US interest rates.

Along the lines of "How to Handle Worries about the Effect of Negative Interest Rates on Bank Profits with Two-Tiered Interest-on-Reserves Policies," throw in bank account assets amounting to 4% of annual GDP in individual bank accounts exempted from a negative interest rate supported by subsidies through the interest on reserves formula. (The effective subsidy needed is not 4% of GDP, but the absolute value of the interest rate times 4% of a year's GDP, say |-4%| per year times 4% of yearly GDP, or .16% of GDP on a flow basis.) Beyond the bank accounts subsidized to have a zero interest rate, then throw in a generous several percent of annual GDP worth of prepayment opportunities that the private sector will allow, and still those now holding government debt will fall far short of finding enough zero interest opportunities to shift their liquid assets into. When I say that is generous, remember that the flow that can be prepaid needs to be multiplied by the length of time it can be prepaid to get the stock of wealth that can be shielded from zero interest rates. Other than prepayment of mortgages–which is already a big issue even at positive interest rates–most opportunities to prepay are limited to about 90 days, which is much less than in the tax system.

Even with substantial opportunities to get a zero interest rate, if individuals and firms have liquid assets left over that can't get a zero interest rate, then the key market rates can go into deep negative territory as the central bank bids up the price of Treasury bills so that, say, it costs $10,100 to buy a promise from the Treasury of $10,000 three months from now: a -4% annual yield (compounding quarterly, (10,000/10,100)^4 − 1 ≈ −3.9% per year).

So far, central banks that have gone to negative interest rates have done so tentatively. Still, interesting adjustments are beginning to happen. Here is a passage from Tommy Stubbington's December 8, 2015 Wall Street Journal article "Less Than Zero: Living With Negative Interest Rates":

Danish companies pay taxes early to rid themselves of cash. At one small Swiss bank, customer deposits will shrink by an eighth of a percent a year. But it isn't all bad. Some Danes with floating-rate mortgages are discovering that their banks are paying them every month to borrow, instead of charging interest on their home loans. … other peculiar consequences are sprouting. In Denmark, thousands of homeowners have ended up with negative-interest mortgages.
Instead of paying the bank principal plus interest each month, they pay principal minus interest.

"Hopefully, it's a temporary phenomenon," said Soren Holm, chief financial officer at Nykredit, Denmark's biggest mortgage lender by volume. Mr. Holm said the administration of negative rates has gone smoothly, but he isn't trumpeting the fact that some borrowers get paid. "We wouldn't use it as a marketing tool," he said.

Negative rates have cost Danish banks more than 1 billion kroner ($145 million) this year, according to a lobbying group for Denmark's banking sector. "It's the banks that are paying for this," said Erik Gadeberg, managing director for capital markets at Jyske Bank. If it worsens, Jyske might charge smaller corporate depositors, he said, then maybe ordinary customers. "One way or another, we would have to pass it on to the market," Mr. Gadeberg said.

In Switzerland, one bank already has. In October, Alternative Bank Schweiz, a tiny lender, sent letters to customers with some bad news: They were going to be charged for keeping money in their accounts. The Swiss central bank has a deposit rate of minus 0.75%, and Martin Rohner, chief executive of ABS, decided enough was enough. The costs were eating up the firm's entire profit, he said. He set a rate of minus 0.125% on all accounts.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.33172640204429626, "perplexity": 2913.5343214207282}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084887832.51/warc/CC-MAIN-20180119065719-20180119085719-00144.warc.gz"}
https://mario-gregorio.blogspot.com/2022/01/
## Wednesday, 26 January 2022

### Virus Destruction by Resonance

Very interesting paper on the Rife machine.

Source: https://www.scirp.org/journal/paperinformation.aspx?paperid=106157

Virus Destruction by Resonance

Abstract

Viruses and other microbes can be inactivated in a selective way by subjecting them to an oscillating electric field of adequate frequency. Royal R. Rife discovered this method already about 100 years ago. He proved its efficiency by means of high resolution microscopes and, in 1934, by controlled clinical tests. However, these results seemed to be unbelievable, since the underlying mechanism was not yet understood. Actually, we are faced with three problems: 1) the functioning of Rife's supermicroscopes, 2) his observation that bacteria can undergo size reduction, and 3) the decisive resonance phenomenon. We explain the high magnification and resolving power of Rife's microscopes and show that new discoveries confirm that the postulate of invariable forms of bacteria has to be abandoned. Then we prove that forced oscillations of virus spikes lead to a peculiar resonance, because of nonlinear effects. It causes total destruction of the virus by rupture of its coating. The same theory applies to bacteria and nanobacteria, because of their pili. The worldwide coronavirus pandemic, the constant threat of unpredictable mutations and the now available explanations should make it obvious that biophysical methods cannot be neglected anymore.

Share and Cite: Meessen, A. (2020) Virus Destruction by Resonance. Journal of Modern Physics, 11, 2011-2052. doi: 10.4236/jmp.2020.1112128.

1. Introduction

Royal Raymond Rife (1888-1971) was an outstanding scientific inventor, who wanted to help humanity by discovering and fighting still unknown causes of sickness and death. His achievements and the opposition that he encountered were described in the remarkable book of Barry Lynes [1]. In his youth, Rife had been impressed by the discoveries of Louis Pasteur and Robert Koch during the second half of the 19th century. They used microscopes to see lethal microbes. They could then isolate and culture them. When the resulting entities caused always the same sickness after repeated cycles, the next step was to find specific vaccines to help their recognition by our immune system.

Rife suspected that cancer does also result from microbes, but that they are too small to be seen by means of standard optical microscopes. This limitation results from the wave nature of light. It leads to interference effects. The physicist Ernst Abbe, working for the Carl Zeiss Company, established even in 1873 a relation between the best possible resolving power of optical microscopes and the wavelength of light. It is at best about λ/3, where λ is the wavelength of visible light. The limit is thus about 180 nm (for green light, λ ≈ 550 nm gives λ/3 ≈ 180 nm), but for ultraviolet light, λ < 400 nm. The Carl Zeiss Company developed therefore UV fluorescence microscopes. The light source was an intense electric arc lamp and the UV light was concentrated on the specimen by means of quartz lenses. Since the specimen converted this light by fluorescence into visible light, the objective and ocular were glass lenses. In 1913, the Carl Zeiss Company was ready to sell this new type of microscopes [2].

Rife started in 1917 to construct himself a better microscope for visible light and finished it in 1922 [1]. It magnified up to 17,000 times, while the best standard optical microscopes could only reach 2000 - 2500 times.
Moreover, the images were well-contrasted, with unprecedented high resolution. How this was achieved remained an unsolved puzzle. Rife constructed even more powerful optical microscopes. His second one was finished in 1929 and the third one in 1933. He called it a "universal microscope", because of its great adaptability. It could even magnify up to 60,000 times. These supermicroscopes allowed him to make extraordinary discoveries, but they contradicted the beliefs of medical authorities. Using only conventional microscopes, they were unable to observe what Rife saw by means of his supermicroscopes. Moreover, Rife discovered the cancer microbe, although it was generally assumed that cancer is not an infectious sickness.

Science progresses, of course, by discovering previously unknown facts and by explaining them, but that may require correcting previous assumptions. Unfortunately, it is very difficult to modify deeply rooted ideas. One objective of this article is to illustrate this fact by analyzing Rife's discoveries and achievements, to show the complementarity of experimental and theoretical methods. It has been claimed [3], for instance, that it is impossible to reach the high resolution of Rife's microscope, by repeating the standard theory. It is also incorrect to believe that Rife did suppress the interference effects [4]. The Nobel Prize in Chemistry of 2014 was actually awarded for the development of super-resolved microscopy [5]. This was achieved by using powerful lasers, while Rife succeeded with purely classical means. Nevertheless, his name has been wiped out from official medical and scientific literature. Fortunately, some documents were preserved and their analysis did help us to uncover the secret that escaped attention. Moreover, they provide first-hand information about the discoveries that Rife made with his new microscope. The first one concerned the existence of "nanobacteria", resulting from size-reduction of known bacteria. They are more virulent, but Rife discovered also a method to destroy them, by resonance. To observe both phenomena very clearly, he constructed the more powerful microscopes.

The size of typical bacteria is of the order of 1000 nm. For viruses, it ranges from about 400 to 20 nm, but their existence had already been established at about 1900 by means of porcelain filters with very narrow pores. The filtrate contained no bacteria, but remained virulent. This fact justified Rife's conviction that these mysterious entities should be observable by means of better microscopes. A description of his universal microscope was published in 1944 by a scientific journal [6]. It mentioned also other new optical microscopes and compared them with electron microscopes, developed in the 1930s. Their magnification is greater and allows for instance to determine the size of the polio virus. It is merely 30 nm. The diameter of the spherical core of the Covid-19 virus is 85 nm and it carries 20 nm long spikes. However, electron microscopes can only yield images of dead microbes. The article of 1944 insisted thus on the "great interest" of the new type of optical microscopes, since they allow scientists to penetrate the unseen world of the minute and disease-causing organism when they are still living and active. This was essential for Rife, but how did he acquire the competence for his apparently impossible achievements? We know that he began medical studies in 1905 at Johns Hopkins University in Maryland, but he broke them off.
His aim was to construct as soon as possible a better microscope and to find the cancer microbe. It has been reported [7] that he married in 1912 and moved then to San Diego, California. Speaking German, he travelled often before WWI to Germany for the US Navy. Rife provided himself reliable information in 1960, by answering 137 questions of the lawyer of his collaborator John Crane [8]. He was on trial, since the powerful American Medical Association (AMA) had decided to abolish the Rife/Crane cancer therapy. It was declared to be bogus, even without examining Rife's methods and results. After Rife's trial, orchestrated already in 1939 by AMA, he was forced to stop his research. He was bankrupt and desperate. His laboratory had been raided. His microscopes were made unusable and documents were stolen. They included also movies of the behavior of the tiny living beings that Rife made visible. He took refuge in Mexico. Even his written answers of 1960 were not allowed to appear in court during Crane's trial in 1961, but they were conserved.

Rife answered (to question Q.38) that he acquired his expertise in optics by working for 6 years with Hans Lukel, who was Carl Zeiss's scientist in the USA. Rife mentions there also that he made "all the photomicrographs for the Atlas of Parasites which was done at the University of Heidelberg". This book was published in 1914 and contained more than thirteen hundred colored lithographs [9]. Our inquiries at the archives of the universities of Heidelberg and Gießen, as well as at the Carl Zeiss History Support in Jena, revealed no recorded or easily accessible traces. The authors of the book did only thank their academic colleagues. Anyway, it took nine years before Rife felt ready to begin his own research (Q.39). The link [8] provides photographs of his microscopes and the copy of a certificate, where the Institute for Scientific Research of the Andean Anthropological Expedition recognized Rife's "outstanding contributions to science" and the attribution of a Research Fellowship in biochemistry. Rife confirmed this (Q.135-137) and added that he studied bacteriology at Johns Hopkins University (Q.56). He acquired there a PhD diploma, since he signed a scientific paper [10] with this title and was often called Dr. Rife.

When he was running out of financial resources, he took a job as chauffeur of the multi-millionaire Henry Timken. He manufactured ball bearings and Rife provided a way to control their quality by constructing an X-ray machine. Timken valued his creativity and technical abilities, demonstrated in various ways. Impressed by Rife's competence and his strong motivation to construct better microscopes for identifying the cancer microbe, Timken offered him a laboratory at Point Loma, California, with full equipment and financial support. Rife constructed there his microscopes and the basement provided facilities for 1000 animals with precisely controlled air-conditioning (Q.10). There were about 800 albino rats, but also Guinea pigs and rabbits (Q.59). In 1929, a local newspaper published an article about Rife's extraordinary microscope and his fascinating discoveries [11]. The journalist praised his competence in bacteriology, chemistry, metallurgy and engineering. Rife had developed methods to observe and document previously unobservable living entities. He found even a technique to destroy harmful microbes, but "refuses to make money from it".
He is concerned with pure science and wants to free humanity from cancer and other dreadful sicknesses. In his interview [8], Rife provides important details concerning his research and the difficulties that he encountered. He did use cancer tissues and prepared over 15,000 slides, since he was convinced that his microscope should allow him to see the microbe that causes cancer, but it remained hidden. He tried numerous molecules for staining and this lasted over 10 years, to no avail (Q.22). Eventually, he thought that chemical substances might destroy the mini-microbes and started therefore to exploit other capabilities of his microscope. Although this was time-consuming, it allowed him to select a color that made microbes visible by their own luminescence, without having to add some staining substance. It was also necessary to discover a method for multiplying the microbes that he expected to be present in cancer tissue. He found that they had to be cultured in the "K medium", developed by the microbiologist Arthur Kendall. This medium is protein based, but poor in peptones (polypeptides and amino acids). It was superior to all other ones (Q.29 and 30).

Nevertheless, the decisive breakthrough was quasi-accidental [12]. "After many long procedures and unsuccessful attempts", Rife had fortuitously placed a test tube near an argon discharge tube. It ionizes air and produces $\text{O}_2^-$ ions, becoming ozone. Rife noted some cloudiness, but the microscope revealed nothing special. After incubation during 24 hours at 37.5°C on K-medium, the solution was "teeming with cancer virus". These entities were very small and motile, but easily discernable by means of their purplish fluorescence. Their proliferation resulted from subjecting bacteria to stress. Since Rife had been searching these mysterious entities during many years, he called them X-bacteria or simply "BX". To verify if they did really cause cancer, he inoculated albino rats with a solution derived from human breast cancer tissue. In almost 3 to 4 days there appeared a lesion at the point of inoculation in the mammary gland of these rats. Pathological examination disclosed typical malignancy. Rife mentioned that the same procedure was repeated 411 times with identical results (Q. 24). He wanted to be absolutely sure. The culprit of cancer, or at least a new possible cause, had thus been uncovered. It was a tiny ovoid particle, about 50 nm large and 70 nm long. Its motility indicated that it was flagellated [10]. Rife observed also with his supermicroscope that these BX entities could be transformed again into bacteria or fungi, according to the medium where they were living. In their reduced form, they could not be destroyed by UV radiation or X-rays, but they were killed after two days at 42°C.

The newspaper article [11] attracted the attention of Dr. Arthur Kendall, who had developed the K medium. He was professor and head of the department of bacteriology at the Northwestern University in Chicago and wanted to check the reality of these amazing claims. He invited Rife in 1931 to bring his microscope to his institution, where he used a type of bacteria that he had cultured himself. In their "filterable state", they appeared in Rife's scope as tiny, motile granules of turquoise-blue color. These observations were so astonishing that the whole procedure was repeated eight times. In 1931, Prof. Dr. Kendall published with Rife an article [10] that described these experiments and their results.
Kendall was then invited to speak in 1932 at a meeting of the Association of American Physicians. Dr. Thomas Rivers, virologist at the Rockefeller Foundation, tried to get Kendall's talk cancelled. This was not accepted, but Rivers went immediately to the podium after his talk to express fierce opposition. He impressed the audience by daring to address Dr. Kendall as if he were a liar. Dr. Rivers and Dr. Zinser, bacteriologist at Harvard, had tried to reproduce Kendall's results. They did use the K-medium, but their observations were made with conventional optical microscopes. Without Rife's supermicroscope and his method to provoke luminescence, they were unable to see the relevant nanoparticles. Moreover, they believed in monomorphism, advocated by Louis Pasteur. He had attributed infectious diseases to bacteria of specific size and form. A similar postulate is used for classifications in botany and zoology, but metamorphosis is not impossible.

Since Rife and Kendall were facing ideology, Dr. Edward Rosenow, who was heading the Mayo Foundation's Experimental Bacteriology program at Rochester, Minnesota, wanted to check the facts. He spent 3 days at Kendall's laboratory where Rife's microscope was still installed and reported his observations in Science [13]. He wrote: "there can be no question of the existence of filterable turquoise blue bodies". They appear in large numbers in filtrates of cultures of infected tissues and constitute a "stage in the development of microorganisms."

Rife mentioned in his interview [8] that the BX entities are actively moving around (Q. 51), but that he found a method to "devitalize all microorganisms as desired" (Q.19). It required EM waves of particular frequency, depending on the kind of microbes. This frequency was determined by observing the microbes with his microscope (Q.76). Some types did then "explode or disintegrate" and other types did aggregate (Q.80). This happened for various types of bacteria (Q.78), but they were never affected by inadequate frequencies (Q.118). This observation led to the development of the "ray machine". Its production was continued by John Crane when Rife fled to Mexico, but Crane was prosecuted. AMA accused him, though it refused to test his system (Q.131). The American Cancer Society was interested, until they found out that Crane and Rife were not medical doctors (Q.127).

However, Dr. Milbank Johnson, Professor of Physiology and Clinical Medicine at the University of Southern California, gathered in 1934 a Research Committee to test the clinical efficiency of Rife's cancer therapy. Sixteen terminally ill patients with different types of malignancies were selected and treated with the instrument that produced an oscillating electric field of the required frequency for deactivating the BX entities. The sessions lasted only 3 minutes, every third day. The patients felt neither pain nor any other sensation, but "the virus or bacteria is destroyed and the body then recovers itself" [1]. The interval between treatments was needed to eliminate the toxic debris. "After 3 months, 14 of these hopeless cases were signed off as clinically cured by the staff of five medical doctors". Other ones participated in overviewing this test and the pathologist of the group, Dr. Foord, certified the healing. The two other patients were cured after 20 more days [7]. The first cancer clinic that used the Rife technology was opened in 1934 and the Rife Beam Ray Company was operating in 1938. That was intolerable for the powerful AMA.
The capitalistic pharma industry wanted also to prevent the development of alternative forms of medical treatments, of course, but had merely to encourage AMA's request of legal prohibition and its use of other methods of coercion. This procedure has been analyzed in the context of social anthropology, related to science and technology [14]. It is necessary, indeed, to realize that it was profoundly unjust, unscientific and even harmful, by preventing possible cure.

The structure of this article results from the need to solve three basic problems. Section 2 describes Rife's supermicroscope and explains its functioning. Section 3 justifies the concept of pleomorphism. Section 4 provides a detailed scientific explanation of the physical mechanism of targeted microbe destruction by resonance. This part illustrates also the usefulness of physical reasoning for solving scientific riddles, even in medicine. Section 5 presents conclusions and some recommendations.

2. Optical Supermicroscopes

2.1. Rife's Universal Microscope

The basic principles of microscopy result from the laws of geometrical optics. All light rays that emerge from a point A and pass through a lens L do converge at the point A'. To determine its position, it is sufficient to consider two rays, as shown in Figure 1. The ray that passes through the center of a thin lens is not deviated, while the ray passing through the focal point F is refracted by the lens to become parallel to its symmetry axis. This applies even to combinations of thin lenses. The position of the image A' of A is determined by 5 parameters: their distances D and d from the plane of the lens, their heights H and h above and below the symmetry axis, as well as the focal length f of the lens L. Indeed,

$$\frac{1}{f}=\frac{1}{d}+\frac{1}{D} \quad\text{and}\quad \frac{H}{h}=\frac{D}{d} \qquad (1)$$

For instance, with f = 16 mm and d = 17 mm, relation (1) gives D = 272 mm, so that the magnification D/d of this single lens is 16. Analogous relations apply to the second lens L'. Since the rays that emerge from A' and pass through this lens are refracted as if they did emerge from I, this is a virtual image. The global magnification is the product of the magnifications provided by the objective and ocular lenses.

[Figure 1. Successive formation of two images A' and I of the point A in optical microscopes.]

Rife modified this system to get a much greater magnification by increasing the distance D. He did that by intercalating many prisms as indicated in Figure 2. We represent there only a narrow bundle of rays, forming a beam that would coincide with the symmetry axis if it were not deviated. Rife stated in his description [15] that the beam was subjected to "21 light bends". There are only 20 prisms, but the image A' does also produce a deviation to yield the image I. Rife insisted that the rays of light have to remain nearly parallel, although the rays emerging from the lens L are converging. The 90° prisms did thus not simply act like mirrors. One of their faces was slightly concave. Every time before the rays would tend to meet, they were thus made less convergent. This stratagem required extremely high precision, since the total deviation from the ideal one could not exceed one wavelength. The distance that light rays traveled zigzag fashion was 449 mm, while the length of the barrel was only 229 mm. For usual microscopes it is 160 - 190 mm. All prisms and lenses were made of block-crystal quartz. The barrel and screws were made of "magnalium".

[Figure 2. Rife achieved great magnification by increasing the distance D between the lens L and A'.]
This Al/Mg alloy was very expensive, but it has the same thermal dilatation coefficient as quartz. Figure 1 shows that both focal distances f and f' should be as small as possible. Rife opted thus for a pair of identical, high-quality objectives with quartz lenses, developed by Zeiss. Why did Rife not follow the contemporary trend to improve resolution by using UV light? It would not be converted everywhere into visual light when it passes through the specimen, and is dangerous for visual observation. Though it could be eliminated by a filter, Rife preferred to use a normal light source, since his initial intention was to use chemical staining.

Rife knew that molecular fluorescence results from processes that are represented in Figure 3(a). The curves show how the energy E depends on the structure parameter s for the ground state and the first excited state. Photons of blue or near ultraviolet light can thus excite an electron from the most probable ground state to another possible state. The molecule modifies then its configuration before emitting a photon of lower energy. Since absorption and emission of a photon are instantaneous processes, they are represented by vertical transitions. There are also vibrational states, leading to spectral bands instead of lines, but we can simplify the graph, since the essential point is that molecules can rapidly lose any excess energy.

However, it was necessary that the light source could provide light of any required color to elicit optimal fluorescence. An electric arc is a very intense light source, but there are spectral lines. Rife preferred thus an incandescent filament. It yields a continuous spectrum and Rife did use a special filter to select the adequate color. It had been invented in 1889 by the ophthalmologist Risley and is represented in Figure 3(b). Indeed, two identical 90° prisms can be rotated with respect to one another, while their oblique faces remain in contact. At normal incidence light is only refracted at the interface. This leads to dispersion, but by rotating the prisms with respect to one another, we get only light of a particular color that leaves the second prism at normal incidence. Since quartz is birefringent, the angle of refraction depends not only on the color, but also on the polarization of light waves. Rife could thus produce polarized light of any desired color. This is useful for mineralogy, since thin slices reveal then colored parts.

[Figure 3. (a) Molecular luminescence is optimal when it is excited by light of a particular color. (b) It was selected by a Risley filter. (c) Rife's secret was a pinpoint filter near A'.]

Figure 3(c) represents the pinpoint filter that Rife did place before the first image A', but he did not mention it in his dialogue with Crane [16]. He stated there (#43) that the universal microscope has "a resolution of 31,000 times and a magnification of 60,000 times". These values have often been repeated, without explaining how they were determined, but Rife mentioned that he used a Zeiss silver-coated slide, carrying 800 lines per mm with perfect definition. They were thus separated by 1.25 μm. Under a standard microscope, there appeared 40 - 60 of these lines with distortions, due to spherical aberration of the lenses. "Under the universal microscope, we see only 1 - 3 lines and they are perfectly parallel through the entire field" (#44 and 45). The greater magnification kept only the central part of usual images.
Rife claimed that “the resolution is 31,000 times”, which is unusual, but he considered that the sharpness of the image reached at least half the separation of the lines. The high resolution remained very puzzling, since Feynman noted in his famous lectures on physics (I, 27-7) that we might build a “system of lenses that magnifies 10,000 diameters, but we still could not see two points that are too close together.” We mentioned already that the resolving power of optical microscopes is limited by interference effects. However, Rife answered Crane’s question concerning this limit (#43) only by saying that the lack of sharpness is due to “factors of error that come into the field of optics by the illumination and things of all sort.” The use of a pinpoint filter just before the first image was not mentioned. Even in the article of 1944, Rife noted merely [6] that “if one pierces a black strip of paper or cardboard with the point of a needle and then brings the card up close to the eye so that the hole is in the optic axis, a small brilliantly lighted object will appear larger and clearer, revealing more fine detail.” This hint has not been perceived, but Rife’s method is equivalent to a recent discovery [5], which even deserved a Nobel prize.

2.2. Modern High-Resolution Microscopes and Diffraction

Stefan Hell, who pioneered the use of lasers for high-resolution optical microscopy, explained the basic idea in his Nobel lecture [16]. He began by underlining the advantage of minimally invasive optical microscopy with respect to electron microscopy. However, the resolution of optical microscopes is limited and “scientists believed throughout the 20th century” that this barrier cannot be overcome. Knowledge of Rife’s achievement had been wiped out, but Hell thought that so many new phenomena had been discovered that higher resolution of optical microscopes should be realizable. He was teaching biophysical chemistry in Göttingen and attributed the difficulty to the fact that light is emitted by different molecules inside a small volume of the specimen. They are excited together and emit light by the molecular processes that were depicted in Figure 3(a). Since it is impossible to excite only a very small group of molecules, he tried to find out if we can “manage to keep some molecules dark”. Searching for an answer to this question, he recalled that physics knows about stimulated emission of radiation. Einstein discovered this phenomenon in 1917 by proving that it accounts for Planck’s spectrum of EM radiation in an oven or any cavity at sufficiently high temperature. With lasers, it is possible to silence excited molecules by “stimulated emission depletion” (STED). This process is represented in a simplified way in Figure 4(a). Blue laser light (1) of moderate intensity is absorbed by a molecule. An electron is there raised from the ground state So to the first excited state S1. The molecule relaxes, by changing its configuration. The interrupted line represents this loss of energy, since it is sufficient to represent the initial and final excited states to account for Figure 3(a). Light emission results from the transition (2) in Figure 4(a), but stimulated emission (2’) can be caused by intense laser light of the adequate frequency. The excited state is then constantly emptied and normal light emission is quenched. Figure 4(b) represents the spatial domains of illumination by means of the two lasers.
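Hell’s idea can be made quantitative with the commonly cited STED scaling law d ≈ λ/(2NA√(1 + I/Is)). The following Python sketch is our own illustration, not taken from [16]; the wavelength and numerical aperture are assumed values:

import math

# Commonly cited STED scaling law: the effective resolution shrinks with the
# depletion intensity I relative to the saturation intensity I_sat.
lam, NA = 600e-9, 1.4   # illustrative wavelength (m) and numerical aperture

def sted_resolution(I_over_Isat):
    # d = lambda / (2 NA sqrt(1 + I/I_sat)); I/I_sat = 0 gives back the
    # ordinary diffraction limit of about lambda / (2 NA).
    return lam / (2 * NA * math.sqrt(1 + I_over_Isat))

for ratio in (0, 10, 100, 1000):
    print(f"I/I_sat = {ratio:5d}  ->  d = {sted_resolution(ratio)*1e9:6.1f} nm")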
These light distributions cannot be point-like, but it is possible to produce a blob of blue light (1) and to superpose a ring of very intense green light (2’). It abolishes light emission in this ring and leaves only the small central spot where light emission (2) is still possible. Once the spell was broken, other scientists tackled the same problem during the last two decades and discovered various methods to solve it. They were reviewed in a comprehensive and interesting way [17]. The basic technique always consisted in separating light emission in space or time. How could Rife succeed already in the 1920s in reaching high resolution without lasers? To understand this fact, it is useful to return to basics. Grimaldi’s careful observation in 1660 of the shadow cast by a rod revealed that it cannot be explained by geometrical optics, unless light rays that pass close to obstacles are divided (at least) in two rays. This led to the concept of “diffraction”.

Figure 4. (a) Transitions between energy levels, used for high-resolution microscopy by the method of stimulated emission depletion (STED). (b) Superposed illuminations and the resulting emission.

Grimaldi thought already that light may not be constituted of particles that are moving along straight lines in any homogeneous medium. Young’s famous double-slit experiment of 1801 proved that light propagates like waves. It was then generally believed that there are “light waves”, but Planck discovered in 1900 that there are grains of light energy and Einstein proved in 1905 that they are particles. These photons do not always move along a single well-defined trajectory, but do also allow for interference. In 1815, Fraunhofer discovered spectral lines in sunlight by improving the quality of lenses and prisms, but he did also use gratings with many equidistant narrow slits. The resulting interference decomposes white light into different colors. Fraunhofer found that a single slit also produces a set of closely spaced interference fringes. Even when light encounters a screen with a small circular hole of radius r, as shown in Figure 5(a), it produces a brilliant spot of light that is surrounded by fainter rings. Fresnel explained this fact in 1819 by considering that all points inside this hole emit spherical light waves, but on the symmetry axis, they always interfere constructively with one another. Figure 5(b) shows that for the angle θ with respect to the symmetry axis, light emitted by any pair of points that are radially separated by the distance r interferes destructively for monochromatic light of given wavelength λ, when

$r\sin\theta = \lambda/2$ (2)

For other angles, the wave amplitudes are not opposite to one another. Fresnel calculated the resulting variation of the light intensity and found that it varies as indicated in Figure 5(c). It is the square of a Bessel function. The arrows define the width of the central spot, resulting from relation (2). Distant stars are point-like light sources, but for telescopes, their image is a spot of light that is surrounded by rings of decreasing intensity. This was proven in 1835 and led to the term “Airy disc”. It is represented in Figure 6(a), where light intensities are shown in black/white inversion. Figure 6(b) recalls that all light rays that emerge from the point A and pass through the lens L will converge at the point A’.
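The intensity distribution computed by Fresnel can be reproduced numerically. The sketch below is our illustration of the standard Airy pattern I ∝ (2J1(x)/x)², with x = kr sinθ; it also recovers the numerical factor that appears in relation (3) below:

import numpy as np
from scipy.special import j1

# Airy pattern of a circular aperture: I(x) = I0 * (2 J1(x)/x)^2,
# where x = k * r * sin(theta), k = 2*pi/lambda, r = hole radius.
x = np.linspace(1e-6, 12.0, 2000)
intensity = (2 * j1(x) / x) ** 2

# The first zero of J1 lies at x ~ 3.8317, which yields the familiar
# factor 0.61 (= 3.8317 / (2*pi)) in the resolution formula (3).
first_zero = x[np.argmax(intensity < 1e-6)]
print(f"first dark ring near x = {first_zero:.3f} (exact: 3.8317)")
print(f"0.61 factor check: {3.8317 / (2*np.pi):.3f}")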
Ernst Abbe realized in 1873 that a microscope is thus equivalent to a circular hole of radius

$r = 0.61\frac{\lambda}{NA}$ where $NA = n\sin\theta$ (3)

Figure 5. (a) Light passing through a small circular opening leads to interference. (b) It is always destructive for a particular angle θ. (c) The angular distribution of the resulting light intensity I yields a central blob, surrounded by fainter luminous rings.

Figure 6. (a) Black/white image of an Airy disc. (b) Formation of the first image in a microscope. (c) The resolving power defines the smallest distance between separable point-like sources.

The value of r results from (2) and the fact that light rays are refracted by the lens. NA is called the “numerical aperture” of the lens system, since it depends on the angular opening θ of the diaphragm and the index of refraction n of the medium between the specimen and the lens. It refracts light waves, but on the other side of the lens L there is air, where n = 1. Figure 6(c) shows that the blurred images of two points near A can only be separated when their separation is greater than r. The resolving power of microscopes is thus defined by (3). Abbe concluded that the resolution of optical microscopes can be improved by filling the space between the sample and the lens with a transparent fluid, where n ≈ 1.5. Immersion thus reduces r roughly from λ/2 to λ/3.

2.3. Rife’s Method and Other Supermicroscopes

It has been asserted [3] that Rife could not overcome the diffraction barrier and that he suppressed it by applying the principle of reversibility [4]. However, this principle is only valid in geometrical optics, since it implies that the direction of propagation of light is irrelevant for light rays. Diffraction of light that passes through a small circular hole or a narrow slit results from interference and cannot be eliminated, but Figure 4 showed that the difficulty can be bypassed. How could Rife also do that without lasers? Rife’s contact with the Zeiss Company and his own observations made him aware of diffraction phenomena, but he found a very simple way to reduce their effects. He combined very high magnification with a pinpoint filter, which reduces the size of the Airy disc at A’, as shown in Figure 3(c). Probably, Rife did not mention this filter in order to protect his invention. It is even impossible to verify that he used a pinhole filter, since his microscopes were dismantled. Rife constructed in 1932 a supermicroscope for a friend, who was a watchmaker and may have given advice for high-precision tooling. This instrument was sold at auction in 2009, but the objective and ocular lens systems were missing, as well as the pinpoint filter [18]. It is very remarkable that Gaston Naessens (1924-2018) developed in 1949 another type of optical supermicroscope with specialists of the Leitz Company in Wetzlar, Germany. This instrument reached a magnification of 30,000 and a sufficiently high resolution to see very small living entities. Naessens called them “somatides” and his microscope a “somatoscope”. He observed that these somatides resulted from changes of size and form of normal bacteria. They could pass through a simple or a complex life-cycle [19]. This is demonstrated in a short and very interesting video [20]. His microscope was also based on bioluminescence, but the exciting radiation was very intense UV and blue light of fixed wavelengths, respectively of 185 nm and 330 nm.
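A minimal numeric illustration of relation (3), with freely chosen example values:

import math

# Resolution limit r = 0.61 * lambda / NA, with NA = n * sin(theta).
lam = 550e-9                 # green light (m)
theta = math.radians(64)     # illustrative half-angle of the objective

for n, medium in ((1.0, "air"), (1.5, "immersion oil")):
    NA = n * math.sin(theta)
    r = 0.61 * lam / NA
    print(f"{medium:14s}: NA = {NA:.2f}, r = {r*1e9:6.0f} nm")
# Immersion improves r by the factor n = 1.5, which is Abbe's conclusion.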
Since excitation like that of Figure 3(a) requires a specific color, this was achieved by means of magnetic and electric fields, modifying molecular spectra in an adaptable way. Since we do not know whether he also used a pinhole filter to achieve very high resolution, we wonder if other methods might have been used. This is conceivable, since Figure 7(a) accounts for excitation by means of UV light that allows for transitions (1) between the ground state So and a higher excited state S2. The energy is then reduced, by modifying the configuration parameter s of the molecule. However, the curve S2(s) cuts the curve S1(s). The excited electron then goes over to the lower energy state, which allows for immediate light emission (2), called fluorescence. However, the excited electron can also go over from S1 to another state T. It is a triplet state, since the spin of the excited electron has been reversed. We get thus an electron pair, where the total spin is 1 instead of 0 for singlet S states, and the transition T → So is forbidden. The electron is thus trapped until it is thermally excited to reach again the level S1. This allows for delayed light emission, known as phosphorescence. Since there is no fixed phase relation anymore with the exciting light, the emitted light waves are incoherent and cannot interfere. Another method consists in placing a pinpoint filter precisely at the point A’, as shown in Figure 7(b). This has mainly the advantage of improving depth sharpness, since light emerging from other planes than at A will not be focalized in the image plane A’. This type of confocal microscopy was invented and patented in the 1950s [21]. It allowed for laser scanning. Optical microscopy thus made remarkable progress, but the pioneers, Rife and Naessens, were unjustly discarded from the list of honorable scientists. They used their microscopes to observe that bacteria can reduce their size and then developed methods for fighting sicknesses. Rife did this by resonance and Naessens by discovering drugs that he could observe to be efficient, but he was prosecuted for illegal exercise of medicine [22]. Rife and Naessens did not belong to the required corporation or caste. This mentality is akin to racism, since it was not even deemed necessary to check whether their observations were correct and whether their treatments were helpful or not.

Figure 7. (a) Excitation of fluorescence and phosphorescence. (b) Confocal microscopy.

3. Biological Discoveries and Medical Applications

3.1. Pleomorphism Revisited

Antoine Béchamp, a compatriot of Pasteur, proved already by extensive experimentation that the size and form of bacteria can be modified [23]. He introduced the term pleomorphism, meaning more possible forms. He found that the smaller entities survive in chalk, but can reacquire their usual size and even become fungi. In 1886, Béchamp described “microzymas” (small ferments) that subsist as living entities after the death of organs, although it was believed that nothing can survive. These microzymas can also be present in living beings, where they cause diseases, but Pasteur fiercely defended his conviction that all infectious diseases are only due to microbes of external origin. Béchamp was a doctor of pharmacy and professor of chemistry. He became a medical doctor and then professor of medicine. He was a member of the French Academy, like Louis Pasteur, who was a chemist. Pasteur made very important discoveries, but was quite intolerant.
He even had a feud with Robert Koch [24] and reacted so aggressively against Béchamp that monomorphism prevailed by docile consent of colleagues. Nevertheless, Mrs. Henri of the Pasteur Institute in Paris observed in 1914 that anthrax bacilli change their form when they are subjected to ultraviolet light [25]. Monomorphism was also contested by the Swedish biologist Ernest Almquist [26]. He wrote in 1922: “Nobody can pretend to know the complete life cycle and all the varieties of even a single bacterial species. It would be an assumption to think so.” Rife was either not aware of these controversies or not interested. His aim was to observe the tiny microbes that he expected to exist. He succeeded and found even that “any alteration of artificial media (for culturing bacteria) or slight metabolic variation in tissues will induce an organism of one group to change over into any other organism included in that same group”. These transformations are reversible, but the bacillus typhosus always yields turquoise-blue luminescent entities. For the bacillus coli it has a mahogany color, for tuberculosis it is emerald green and for the BX purplish-red. Rife found that inoffensive bacteria become virulent when their size is reduced. He thought therefore that it is not the bacteria themselves that produce diseases, but the microorganisms that result from their transformation. “If the metabolism of the human body is perfectly balanced or poised, it is susceptible to no disease”. Rife added nine years later [12]: “with a pure culture of bacillus coli, by altering the media as little as two parts per million by volume, we can change that organism in 36 hours”. In the initial medium, the micro-organism becomes again a bacillus coli. Continued microscopic study and stop-motion photography confirmed the reversibility and revealed that the BX allows for “many changes and cycles.” In experimental animals, the BX produces “a typical tumor with all the pathology of true neoplastic tissue, from which we can again recover the BX micro-organism”. They are even abundantly present. To be absolutely sure of this astounding result, Rife repeated these observations several hundred times. It should be noted that he tended to use the term “virus” for any type of filterable microbes, since the specific mode of virus reproduction was not yet known. The biologist Gaston Naessens observed pleomorphism in more detail. He presented clear evidence in a video [27] that the somatid found in blood can have a simple life-cycle or one with 16 different stages. Can these facts simply be negated, since they are not observable with standard optical microscopes? The books of Lynes [1] and Bird [23] denounce the enormous lack of scientific honesty in regard to the discoveries of Rife and Naessens. Fortunately, it is possible to rediscover what is happening in Nature. Emmy Klieneberger-Nobel published in 1951 a thorough study of filterable forms of bacteria. She used a phase-contrast microscope, developed in 1942. It converts subtle phase changes of light waves passing through the specimen into amplitude modifications. They are made perceptible by reducing the amplitude of the background waves. This method made it possible to discover reversible changes of various types of bacteria, resulting from modifications of their environment [28]. According to her long list of references, she was not aware of the discoveries of Rife and Naessens, but the reality of pleomorphism was confirmed.
Kurath and Morita demonstrated in 1983 that marine bacteria are able to adapt themselves for survival under conditions of nutrient starvation [29]. The geologist Robert Folk discovered in the early 1990s that precipitation of carbonates in hot springs of central Italy is due to modified bacteria. He studied them by scanning electron microscopy (SEM), proving that their average size was 200 nm. In 1994, Folk attributed the observed carbonate precipitation to negatively charged cell walls. They attract Ca++ ions, neutralized by CO3 ions, which yields chalk (CaCO3) [30]. Kajander and Ciftcioglu proved in 1998 that blood can contain nanobacteria, associated with Ca++, CO3 and PO4 [31]. Nanobacteria act then as crystallization centers that lead to the formation of kidney stones. The calcified envelope seems to provide a shelter for survival, since SEM micrographs revealed that they are still capable of division. It appeared also that these nanobacteria are covered with a “hairy layer”. Transmission electron microscopy (TEM) confirmed the existence of this coat of mineralized whiskers. However, the existence of nanobacteria raised controversies, since it was generally believed that bacteria have to be big enough to accommodate DNA or RNA, as well as the required enzymes for autonomous replication. The opposition to the concept of pleomorphism resurged, but now because of molecular genetics. Ciftcioglu, Kajander et al. responded in 2006 with more facts: pathological calcification caused by nanobacteria also accounts for gallstones, arterial heart disease, prostatitis, cancer and Alzheimer’s disease [32]. They stated that research in this area has been “paralyzed for decades by attributing the calcifications to insignificant, passive, degenerative processes of aging”. The microbiologist Milton Wainwright published already in 1990 a book [33], where he presented important results concerning microbiological causes of cancer. He discovered that scleroderma, a rare form of cancer that leads to hardening of the skin and inevitably to death, is caused by microbes. Other outstanding physicians and medical research scientists had also demonstrated in the meantime that various forms of cancer result from nanobacteria, but the medical establishment recklessly rejected all proofs that contradicted the standard dogma. Wainwright’s book of 2005 provides ample proof of this kind of “medical politics” [34]. Milton Wainwright recalled the battle between pleomorphism and monomorphism [35], since it is still going on. By dismissing this possibility, “we may be missing something of fundamental importance”. A review [36] concluded in 2011 that “the evidence that nanobacteria exist in the human body and are closely associated with many kinds of diseases is now overwhelming”. Nevertheless, a categorical denial appeared still in 2011 [37]. It referred unjustly to a study of Ciftcioglu and McKay [38], which was much more careful and mentioned that calcifying nanoparticles “do not fit the typical definition of life”, but that this definition could be too restrictive. Smaller forms have detrimental effects on human health and should be the “focus of future effort.” Cantwell insisted again in 2014 on existing evidence that cancer is caused by pleomorphic bacteria [39]. More and more justifications of the relevance of modified forms of bacteria in human pathologies have been published.
Russian scientists presented in 2012 a review [40] where coccoid nanobacteria were reported to appear in sea water, soil, sedimentary and granite rocks, the Greenland ice sheet, glaciers and permafrost soils, as well as in humans and insects, for instance. Their size can be reduced to about 150 nm. This article contains several micrographs showing that ultramicrobacteria can make contact with one another by means of hair-like appendices. Banfield’s team at the University of California, Berkeley, collected nanobacteria by filtering acetate-amended ground water [41]. These cells were immediately deep-frozen to prevent deterioration and brought to the laboratory, where they were analyzed by CryoTEM. It appeared that these nanobacteria are even quite common and well-adapted for survival under adverse conditions. A widely published micrograph [42] shows a coated spherical particle. Its diameter is close to 300 nm and its surface is covered with numerous pili, radiating outwards. Their length is about 50 nm. Their presence attracted our attention and will turn out to be very important. Lida Mattman was an outstanding microbiologist. She confirmed pleomorphism and described the life cycle of various bacteria. Her textbook [43] concentrated on “cell wall deficient forms”. They could result from an adaptation to increase protection against natural antibiotics, but they remained hidden in blood, though they are damaging. “Current bacteriology holds the belief that each cell species has only one simple form and retains it by reproduction”. Nevertheless, they are able to “pass through stages with markedly different morphologies”. Dennis Claessen and his group at the Institute of Biology in Leiden discovered filamentous bacteria that are present in soils. He called them “actinomycetes”. In response to stress, they can become cell wall deficient [44]. This may be a transient state, but also a lasting one. Prof. Claessen commented in an interview: “It gives a whole new understanding of the flexibility of microorganisms under stress and their ability to adapt to their environment.” He published an article in 2019, where he reviewed recent progress concerning stress-induced wall deficiency and pleomorphism [45]. He concluded that during the last decade, it became increasingly clear that “cells have a wide range of defense mechanisms to cope with various stresses.” This is also clinically relevant, since morphological plasticity can cause reinfections. It is particularly noteworthy that Rife discovered that deadly nanobacteria thrived in tissues that had been treated by α, β and γ radiation or cobalt-60 gamma rays [46]. The idea that cancer results only from mutations, due to radiation damage and some chemical substances, needs revision, since pleomorphism caused by stress was not taken into account.

3.2. Rife’s Method for Producing EM Waves

Rife’s aim was not only to see small microbes, causing sicknesses and death, but also to destroy them. He suspected already in 1915 that this might be possible by subjecting them to oscillating electric fields. He declared [8] that systematic experimentation started in 1920. At that time, radio amateurs were very active. The carrier wave for radio frequencies had initially been produced by means of a sequence of sparks (in German: Funken, meaning also radio-emission). The sparks result from high tension, obtained by means of a Ruhmkorff induction coil and an interrupter. The electric field accelerates some stray electrons that become then able to ionize air molecules.
This produces more free electrons in avalanche fashion, but recombination processes lead to a spark and rapid discharge. This is also relevant for lightning. When it was realized that electrons can be emitted by a hot filament, it became possible to fabricate vacuum diodes. However, Lee De Forest used a vacuum tube and invented the triode, where the amplified electron flux can be controlled. This “audion” could thus modulate a high-frequency carrier wave at audio frequencies and was patented in 1908. Public broadcasting began in 1922 from the Eiffel tower in Paris and in 1926, NBC started radio emissions in the USA, but Dr. Rife wanted only to produce radio waves of limited range. Their frequency should be adaptable and stable to allow for selective action on particular microbes. Being technically gifted and very creative, he did that by combining two existing inventions. On one hand, the German physicist and glassblower Geissler had created in 1857 a gas-discharge tube, still used today as a source of colored light. It consists of a sealed glass vessel that contains a noble gas at low pressure and two electrodes. There is no hot filament, but a constant high potential difference is applied to these electrodes. The resulting field accelerates free electrons that will then ionize atoms. At low pressure, they are diffused by collisions before emitting light. The whole tube becomes thus luminous and the color depends on the chosen gas. On the other hand, Coolidge invented in 1913 an X-ray tube that was a glass sphere. A hot filament emitted electrons, bundled and accelerated in high vacuum by a strong electric field to impact the other electrode at high velocity. They were there suddenly decelerated, but basic laws of electromagnetism predict that when the velocity of electrically charged particles is reduced, they emit braking radiation (Bremsstrahlung). Deceleration of high-velocity electrons produces X-rays. Their frequency spectrum is continuous, but there are also superposed narrow peaks, resulting from excitation of strongly bound electrons inside atoms of the anode. They return to their normal state by emitting photons of particular energy. These processes yield a beam of X-rays that is emitted at 90˚ when the flat anode is inclined at 45˚ with respect to the electron beam. Rife created a new system, represented in Figure 8(a). Actually, he replaced Geissler’s tube by a hand-sized glass sphere and the hot filament of the Coolidge sphere by a cold metal plate. When the applied potential difference between the two electrodes exceeds a particular value, there appears a glow discharge. It results from the fact that a local space charge is created near the negative electrode and pushes some ions towards the cathode. It emits then electrons that are accelerated and ionize atoms of the noble gas. Its pressure is such that diffusion of excited atoms is limited, but the accelerated electrons are constantly slowed down and hit the anode at moderate energy. Being suddenly stopped, they produce braking radiation, but the energy of the emitted photons is lower than for Coolidge’s X-ray tubes. The resulting beam of radio waves is also perpendicular to the electron beam and represented by points in Figure 8(a). The lateral divergence of this beam is greater than for X-rays, because of the longer wavelength. The gas that surrounds the beam becomes luminous and such a diode is called a “phanotron”. Rife tried different noble gases.
He found that helium is preferable, since the impact of the lighter He+ ions on the cathode increases its lifetime. Figure 8(b) indicates in a schematic way that the electric tension can be applied by modulating a signal of high frequency F with rectangular pulses of lower frequency f.

Figure 8. (a) Rife’s phanotron is a gas-discharge diode, where an electron beam produces at 90˚ a beam of EM waves. (b) An electric signal of fixed high frequency F is combined with pulses at a particular audio-frequency f to produce sinusoidal oscillations of desired frequencies f'.

Since avalanche phenomena distort these signals, the resulting EM wave is equivalent to a superposition of sinusoidal waves of various frequencies f' by Fourier transformation.

3.3. The Rife Frequencies and Evidence of Exploding Microbes

The history of the development of Rife’s machine has been reconstructed in detail [47]. The report contains many photographs and includes new measurements. They were performed on rediscovered instruments or reconstructed ones, according to original schemes. This study was essentially concerned with engineering problems, but for us it is important that this system could produce electric fields that oscillate at any desired frequency f'. Rife provided a list of resonance frequencies for different types of microbes by observing their reaction with his microscope. Since they were definitively inactivated, he called them “mortal oscillatory rates”. He communicated his initial list in 1935 to the electronic engineer Philip Hoyland, who had been hired by Dr. Milbank Johnson. Johnson operated a “cancer clinic” that used Rife’s method. Hoyland’s mission was to construct a more compact version of Rife’s machine, since electronics was constantly progressing. He also determined the resonance frequencies with Rife’s microscope and Rife controlled his measurements. He agreed with the fine-trimmed values of 1936. Table 1 provides the carefully established list. It appears in the report (p. 103), but is rearranged here for increasing frequencies instead of listing the pathologies in alphabetical order.

Table 1. The Rife-Hoyland list of resonance frequencies for different pathologies.

139.20 kHz     Anthrax
191.80 kHz     Streptothrix
233.00 kHz     Gonorrhea
234.00 kHz     Tetanus
369.43 kHz     Tuberculosis (rod)
416.51 kHz     Coli (rod)
426.86 kHz     Spinal Meningitis
477.66 kHz     Staphylococcus Aureus
549.07 kHz     Staphylococcus Albus
719.15 kHz     Streptococcus Pyrogen.
759.45 kHz     Typhoid Fever (rod)
769.000 kHz    Tuberculosis (filterable)
769.035 kHz    Coli (filterable)
788.70 kHz     Syphilis
1.4452 MHz     Typhoid Fever (filterable)
1.52952 MHz    Cancer Sarcoma (BY)
1.60745 MHz    Cancer Carcinoma (BX)

To produce these frequencies, Rife bought commercially available equipment of top quality. He needed a sinusoidal carrier wave of authorized frequency and was allowed to emit at F = 3,300,000 Hz. This signal, modulated by rectangular pulses, was fed to the phanotron. The pulses were distorted by avalanche effects and possible frequencies were then determined by

$F + nf = n'f'$ (4)

The numbers n and n' can be ±1, ±2, ±3, … The sidebands have decreasing intensities for greater values of these numbers. Table 1 tells us that the resonance frequency for BX is 1607.45 kHz. It was thus produced by choosing f = 21.275 kHz, while n = -4 and n' = 2. This procedure is quite complicated, but similar to normal practice for broadcasting.
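Relation (4) is easy to check numerically; the following sketch reproduces the published example for the BX frequency:

# Sideband relation (4): F + n*f = n' * f'.
# Rife's example: carrier F = 3.3 MHz, audio f = 21.275 kHz,
# n = -4 and n' = 2 should yield the BX resonance f' = 1.60745 MHz.
F = 3_300_000.0       # authorized carrier frequency (Hz)
f = 21_275.0          # modulation frequency (Hz)
n, n_prime = -4, 2

f_prime = (F + n * f) / n_prime
print(f"f' = {f_prime/1e6:.5f} MHz")   # -> 1.60745 MHz, matching Table 1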
It also protected Rife’s invention, without requiring patenting, since it was not easy to find the adequate value of f. When Hoyland passed away, Verne Thomson became Rife’s new engineer. He built several ray machines and verified that the fine-trimmed resonance frequencies of Table 1 are correct. We see there that the resonance frequencies are different, but unique for any particular type of microbes. Rife attributed this fact to their “chemical constitution” (p. 20 of the report). It was not yet possible to be more explicit, but the empirical result was already very important. Today, it is possible to use frequency generators and hand-held electrodes. We were surprised when we examined a system, promoted in a book (Frequenz Therapie, German edition, 2014). It contains an enormous list of resonance frequencies, without explaining how they were determined. It states for instance that they are 21,275 Hz for the BX carcinoma and 20,080 Hz for the BY sarcoma. These values differ from those of Table 1. Moreover, the book recommends for instance 11 frequencies against “lack of appetite”. They are very low, since 10 of them are even situated between 20 and 1865 Hz, while 63 frequencies were listed to treat hepatitis C. This contradicts Rife’s result: there is only one resonance frequency for every type of microbes. Another list of frequencies, claimed to be Rife frequencies [48], also provides many relatively low frequencies, without any detailed justification. Only 4 of the 11 frequencies of the previous list are given here for lack of appetite, but 52 frequencies are provided for cancer. They range between 120 Hz and 795.6 kHz. It is prudently stated that these frequencies are “only for experiments”, but “you can see if they work for you”. Anyway, the apparatus is sold with much publicity. We are afraid, of course, that this kind of business can constitute a serious obstacle to scientific recognition of Rife’s method and have to conclude that objective control and authorizations are needed, as for most professions and technical equipment. The biomedical engineer Marcello Allegretti reviewed medical applications of EM waves in several books, to make them understandable in general terms [49]. However, we never found an explanation of targeted destruction of microbes by resonance. Yet an explanation has to be possible for any real phenomenon. Our first task was therefore to verify whether there are visual proofs of the efficiency of Rife’s method. In addition to the trustworthy test of curing cancer patients, Rife made numerous microphotographs and movies to document his observations [50], but nearly all of them were stolen or destroyed. Nevertheless, some proofs were saved. We recommend two easily available videos [51] to see with our own eyes what happens. The video “Royal Rife - in his own words” shows instruments, procedures and (at minute 6:58) the turquoise-blue nanobacteria that he discovered. “The Royal Rife story” shows bursting bacteria several times (6:45, 6:59, 7:12, 8:37 and 9:07). Other videos [52] provide more background information. John Crane presents in “Dr. Royal Rife’s 1939 lab film” the working place and equipment used by Rife. The first pictures of the BX (14:33) were taken with Rife’s microscope of 1922. Bacteria are much larger and could already be observed with standard optical microscopes, even when they exploded (34:25, 34:45 and 35:15). The longer version of “Rife in his own words” displays turquoise-blue nanobacteria (7:39) and green ones (8:53).
Drawings of the BX and the BY entities (15:23, 16:23) indicate the presence of pili on their surface. Nanobacteria can even be seen together with the corresponding bacteria (41:12). “The Rife story” includes more images of exploding bacteria (15:48, 16:02, 16:15, 1:56:03 and 1:56:14).

4. Explanation of Microbe Destruction by Resonance

4.1. Formulation of the Basic Problem

The first method that is customarily used when we are faced with unexplained phenomena is to make them plausible by referring to analogies. In a video of outstanding pedagogical quality, the music professor Anthony Holland used this method, since it is possible to shatter a wine glass by sound waves of adequate frequency [53]. The resonance frequency can be determined by tapping the wine glass or by rubbing its rim with a moist finger. The resulting stick-slip friction yields a continuous sound, as for the bow on a vibrating violin string. Though there are several modes of vibration for wine glasses and bells, the dominant one corresponds to deformations of the rim along two orthogonal axes, as indicated in Figure 9(a). When the intensity of the exciting sound wave is strong enough, the glass is shattered, since it is quite brittle. The relevant concept can be visualized in a more transparent way by means of a loop of steel wire, attached to a vertical vibrator that is activated by a frequency generator. There appear then “standing waves” with 4 nodes for the lowest resonance frequency. More nodes than in Figure 9(a) are also possible at higher frequencies. Figure 9(b) represents a “singing bowl”. This is a very old musical instrument, still used in China, Tibet and other Asian countries. It was made of bronze and vibrates like a wine glass, but does not break. When it is partly filled with water, bouncing droplets appear for strong enough excitation [54]. Nevertheless, these systems provide merely examples of normal resonance phenomena. Small-scale surface waves of water result from surface tension and it is well known that cohesion can then lead to the formation of drops.

Figure 9. (a) Deformations of a vibrating wine glass. (b) A singing bowl or standing bell.

The spectacular collapse of the Tacoma Narrows Bridge is often cited as an example of resonance phenomena, but this resonance concerned a twisting vibration of the bridge that caused turbulent wind. This turbulence amplified the vibration and vice-versa, as visually explained in a video [55]. Positive feedback can thus lead to auto-amplification and stronger resonances than normal ones. We will show that targeted destruction of viruses and other microbes results from such a mechanism and does also end with fatal disruption. Anthony Holland performed experiments himself concerning the effects of EM waves. He showed the results for a harmless unicellular organism, called “blepharisma”. Its usual size is 75 - 300 μm and it can thus easily be observed with a standard optical microscope. These entities are elongated and their surface is uniformly covered with short hair-like organelles (see Wikipedia). Holland’s video [53] shows (at minute 10:43) that the membrane displays a blister before breaking up (11:13). It is interesting that another blister was forming at the other side (11:03) and that a small, apparently exploring entity was unaffected at the imposed frequency. Holland then also showed exploding leukemia cells (13:00-13:24), obtained by using the phanotron of a “Bare/Rife device”. James Bare is an MD, wanting to help patients.
It is respectable that he tried to understand what happens, but it is instructive to examine his attempted explanations to become aware of possible pitfalls. The term “resonant light technology” is not adequate, since the resonance is due to an EM wave and not to the luminous electron beam. Dr. Bare knew that, but he thought that it constitutes a “small antenna”. He mentions even “solitons” [56], but an antenna is only a conducting wire, where electrons are set in longitudinal oscillation to produce a standing wave with maximal amplitude at both ends. This is the usual way to create EM waves, but we explained already that Rife produced EM waves by deceleration of electrons in the anode. Since the microscope revealed that the membrane of nearly spherical cells was locally broken up, Dr. Bare and collaborators thought that the resonance is due to the hole. A spherical vessel with a short neck constitutes a “Helmholtz resonator”, since the air in any closed vessel acts like a spring on the oscillating air in the neck. The resonance appears spontaneously when an airstream is blowing fast enough over the opening, since stationary waves are then excited by turbulence, as inside some flutes. However, there is only one resonance frequency for a given size of the vessel and its neck, since there is a single oscillator. For flutes, there are several possible standing waves of different frequencies. Actually, the hole in the membrane of an exploding cell is a consequence of Rife’s resonances and not its cause. To explain the destruction of viruses and other microbes by resonance, it is necessary to start with adopting an adequate model. Holland’s analogy with shattering a wine glass could suggest that Rife excited deformations of the cell body. This would yield an ensemble of resonance frequencies [57], while Table 1 attributes only one resonance frequency to any particular type of microbes. Another possible hypothesis would be an electric polarization of the cell body. This mechanism leads to a resonance for particles that are very small compared to the wavelength λ of EM waves. They are bathing in a homogeneous electric field that induces surface charges, creating a secondary electric field inside the particle. It leads to a resonance for dipole oscillations. This accounts for the blue sky, since the electric field of sunlight polarizes very small particles in atmospheric air. This resonance frequency is situated in the domain of UV light and energy is only absorbed at this resonance frequency, but scattered at neighboring frequencies. This so-called “Rayleigh scattering” of sunlight results from the induced polarization. It is more intense for blue than for red light. This theory applies also to collective oscillations of electrons in very small metal particles and allowed us to explain the “anomalous optical absorption” of thin granular metal films [58]. The same theory explains the resonance that radio waves produce in red blood cells [59]. However, the mechanism of Rife’s resonance phenomenon is different and has not yet been explained.

4.2. Structure and Function of Virus Spikes

The Covid-19 pandemic and images of the relevant viruses suggested to the author that their spikes might be set in oscillation and could perhaps lead to a devastating resonance. We will test this hypothesis by establishing the relevant equation and solving it, but first of all, we had to know more about the structure and function of virus spikes.
They are not merely flower-like decorations, but very efficient weapons for attacking cells, in order to use their machinery to reproduce themselves in great numbers. The so-called “Spanish flu” of 1918 was due to a virus. It infected some 500 million people and killed at least 20 million persons [60]. This virus became more lethal during the pandemic, since its mutation led to a second wave, where lungs were filled with blood. The Covid-19 virus appeared in China and was partially controlled by containment, but not stopped. Social proximity, increased population densities, worldwide mobility and interwoven economies are major factors of its spreading. We see also that the search for and testing of vaccines take time. Even a vaccine cannot protect us against unpredictable new mutations and the possible appearance of more virulent forms. It would thus be irresponsible to neglect already existing evidence that an efficient, safe and rapidly adaptable biophysical method is possible. Rife’s method could have been tested at least by means of experimental procedures, but it was rejected since it seemed to be unbelievable. Nevertheless, it is also necessary to explain why the reported facts are true. We are thus confronted with a basic scientific problem. There are numerous examples, indeed, where something unexpected was discovered. It concerned still hidden realities. How can we get access to anything that is not directly observable? We have to imagine what might happen, draw logical consequences from this hypothesis and then verify whether they are true or not. The first problem is thus to select a plausible hypothesis, deserving rigorous analysis. It became possible around 1980 to combine cryogenic electron microscopy with gene sequencing to understand the role of spikes. The resulting knowledge is summarized in Figure 10, but for clarity, we represent only one pair of macromolecules, while the flower-like hemagglutinin (HA) spikes contain 3 identical pairs. We are now accustomed to artistic representations, but they do not reveal what is essential. Figure 10(a) represents the normal state of the associated HA1 and HA2 proteins. The HA1 molecule contains negative charges (COO-) that belong to small sialic acid molecules that can move inside the molecule. Because of their mutual repulsion, they create surface charges. Electrostatics tells us that their density depends on the configuration of the surface and has to be greater near the pointed tip of the HA1 molecule. These charges polarize the electrically neutral HA2 molecule, which is a slender, helicoidally wound molecule. It tends to be straight, but is bent and kept in this state, because of the increased surface charges at the lower end of the HA1 molecule. The middle part of the coil spring is more irregular than indicated here, but it is only important that we get a loaded spring trap. It is anchored in the membrane of the virus. Its extremity, represented in green in Figure 10(a), is situated on the other side of the membrane of the virus. It is electrically charged and thus hydrophilic. This is usually sufficient to secure the implantation, but not for very strong traction. Figure 10(b) shows what happens in the vicinity of a cell. Its pores allow ion transport that maintains a potential difference between the inner and outer surface of its membrane. This leads to an increased density of H+ ions near the external surface.
When a virus comes close to the membrane, these protons reduce the surface charge density of the HA1 molecule, since COO- + H+ → COOH. The remaining surface charge is then too small to keep the HA2 spring in its bent state. It is suddenly released and its tip carries a fusion protein, indicated in red. It penetrates the membrane and this occurs simultaneously for the three HA pairs.

Figure 10. Schematic representation of one pair of HA1 and HA2 molecules, constituting spikes of viruses. (a) In the normal state, the helically wound molecule HA2 is bent and kept in place like a loaded spring. (b) It is released by neutralization of the HA1 molecule.

The redundancy strengthens the grabbing, as well as the rooting. Moreover, the structure of the HA1 part is modifiable by mutations to prevent the immune system from recognizing the potential invader. Viruses are terrorists, carrying jackknives under a coat. They are ready for stabbing cells that are in reach and their camouflage can be modified to escape detection. The HA2 part is on the contrary strictly conserved. This ingenious system was elaborated by natural selection, governed by survival of the fittest. The existence of this war machine was recognized in 1993 [61]. To highlight the rapidly expanding research on this subject, we mention the review of the virologist Suzuki and a molecular biochemist [62]. Professor Stephen Harrison of Harvard Medical School presented this theory in an excellent video [63] and explained how the spikes help the virus to penetrate into the cell [64]. We insist more on explaining the mechanism of automatic triggering. It is known that spikes of the SARS-2 virus have a conical form [65]. It differs from usual representations, although the spike is also attached to the membrane by a thin stalk.

4.3. Other Spikes of Viruses and the Pili of Bacteria

So far, we described only virus spikes that are made of hemagglutinin (HA), but viruses also have another type of spikes. They are constituted of 4 neuraminidase (NA) proteins, which are also electrically charged [66]. 18 variants of HA and 11 subtypes of NA are known [67]. The phylogenic tree of the Corona virus [68] thus indicates quite great variability. Viruses were modified to become more efficient. The influenza virion carries 300 to 400 HA spikes and 40 to 50 NA spikes. Their relative numbers have to be balanced [69], since they cooperate to help the virus to get inside the pirated cell. The diameter of the spherical core of usual viruses is about 120 nm. HA spikes can be about 12.5 nm long, while the length of NA spikes is slightly smaller or greater. Rotaviruses are different. They contain double-stranded RNA, instead of single-stranded RNA, and are said to be non-enveloped. Actually, the lipid bilayer is replaced by a tightly fitting protein coat, but there are always spikes that undergo conformational changes and perforate the lipid bilayer of the target cell [70]. Bacteria are endowed with long corkscrew-like flagella. They allow for efficient propulsion, since they are activated by small individual motors, embedded in the cell membrane, but the whole surface of bacteria is covered with pili. The growth and molecular constitution of these hair-like structures are now well known. The major part is a helical fiber with a sticky tip. This allows for adhesion and some stretching that causes retracting forces. The rod is hollow and can even allow for DNA transfer, but it is also an electron conductor.
The main function of pili or cilia of bacteria is to sense their environment and to facilitate sexual reproduction by fusion with the same type of bacteria. Antibacterial molecules interfere with this process, but do not prevent survivors from producing protective mutations. For us, it is essential that pili also carry positive and negative parts [71], since that offers the possibility to act on them by means of an oscillating electric field.

4.4. The Normal Resonance and Parametric Amplifiers

To facilitate a rational understanding of Rife’s resonance, we begin with analyzing the properties of a simple and familiar system. A pendulum is merely composed of a mass M, suspended at a fixed point by means of a string of constant length L. Figure 11(a) shows that an angular displacement θ with respect to the vertical implies a horizontal displacement $x = L\sin\theta$. The mass M is subjected to the gravitational force F = Mg. Its tangential component $F\sin\theta = Mgx/L$ is proportional to x, but opposite. This restoring force tends to reestablish equilibrium, but once the pendulum has started to oscillate, it has the tendency to remain in this state of motion. To show why, and to make scientific reasoning accessible to non-specialists, also for later generalizations, we recall Newton’s law. It states that the product of the mass M and its instantaneous acceleration $\ddot{x}$ is always equal to the sum of all applied forces. When air friction is negligible, we get thus for small angular displacements

$M\ddot{x} = -\frac{Mgx}{L}$ or $\ddot{x} + \omega_o^2 x = 0$ where $\omega_o^2 = \frac{g}{L}$ (5)

Every dot indicates a derivation with respect to the time variable t. The solution of this equation is $x(t) = C\sin(\omega_o t + \phi)$ and accounts for free oscillations. The (angular) frequency ωo is determined by the value of the local gravitational acceleration g and the length L of the pendulum. The values of the amplitude C and the phase φ depend on the initial conditions. Indeed, a child that is sitting on a swing can initiate free oscillations by pushing with its feet on the ground. This means that x(0) = 0 and therefore φ = 0, but the initial velocity $\dot{x}(0) = \omega_o C$. The amplitude C depends then on the impulse for starting the oscillation. If air friction were really negligible, the amplitude C would remain constant. Actually, the oscillation is progressively damped, since the mass M is subjected to air friction. This braking force is proportional to the instantaneous velocity $\dot{x}$ and depends on the cross-sectional area. However, the oscillations can be sustained by means of a push at the adequate rhythm. The mathematical treatment is simplified for a constantly applied driving force that oscillates at some frequency ω with constant strength S. The equation of motion is then given by (6).

Figure 11. (a) Parameters needed to describe the motion of a simple pendulum. (b) The peak for possible amplitudes of forced oscillation is very high and narrow for low friction.
$\ddot{x} + \omega_o^2 x + 2\nu\dot{x} = S\sin(\omega t)$ (6)

Thus,

$x(t) = A\sin\omega t + B\cos\omega t = C\sin(\omega t + \phi)$ (7)

The solution (7) describes forced oscillations that subsist when the initial conditions are not relevant anymore, because of damping. The mass M is oscillating at the imposed frequency ω, but the constants A and B are determined by substituting Equation (7) in Equation (6). We get then

$(\omega_o^2 - \omega^2)A - 2\nu\omega B = S$ and $(\omega_o^2 - \omega^2)B + 2\nu\omega A = 0$

Thus,

$A = \frac{(\omega_o^2 - \omega^2)S}{(\omega_o^2 - \omega^2)^2 + (2\nu\omega)^2}$ and $B = \frac{-2\nu\omega S}{(\omega_o^2 - \omega^2)^2 + (2\nu\omega)^2}$

Because of (7), $C^2 = A^2 + B^2$. The amplitude of the forced oscillation is therefore

$C = \frac{S}{\sqrt{(\omega_o^2 - \omega^2)^2 + (2\nu\omega)^2}}$ while $\tan\phi = \frac{2\nu\omega}{\omega^2 - \omega_o^2}$ (8)

These expressions describe the normal resonance phenomenon. Rife’s resonance is more powerful, but the first relation (8) is already very useful to understand the physical meaning of resonances. The amplitude C of forced oscillations is maximal when the denominator is minimal. This happens when its derivative with respect to ω is zero and thus when ω = ων, where $\omega_\nu^2 = \omega_o^2 - 2\nu^2$. When friction is sufficiently small ($\nu \ll \omega_o$), the graph of C(ω) displays a nearly symmetric peak that is centered on ωo. The resonance frequency has thus the same value ωo as for free oscillations. Figure 11(b) provides the graph of C(ω) when S = 1, ωo = 1 and ν = 0.02. The height of the peak is then close to S/(2νωo) and its width scales with ν. The resonance becomes thus sharper for decreasing friction. The amplitude C would even become infinite in the absence of friction. The second relation (8) predicts the phase difference φ for different values of ω. This function is represented in Figure 12(a) for ωo = 1 and ν = 0.02. The change of sign means that the oscillator is able to follow the rhythm of the applied force when the imposed frequency is lower than ωo, but it stays behind when it is too fast. We are now ready to answer a simple question: is a child, sitting on a swing, able to sustain the oscillations by itself, without any external help? Probably, you found the answer yourself by trial and error. It is fun to make discoveries and to master a situation oneself. It is even much more interesting and surprising to understand why this is possible. The child has to straighten up when it is approaching the highest points and to incline backwards when it is moving towards the other side. The effective length L of the pendulum is thus modified as shown in Figure 12(b).

Figure 12. (a) The phase difference between the forced oscillation and the applied force. (b) This type of periodic variation of the effective length L of the pendulum yields a parametric amplifier.
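Relation (8) can be evaluated directly; the sketch below uses the same illustrative values S = 1, ωo = 1 and ν = 0.02 as Figure 11(b) and confirms the position and height of the peak:

import numpy as np

# Amplitude of forced oscillations, relation (8):
# C(w) = S / sqrt((w0^2 - w^2)^2 + (2*nu*w)^2)
S, w0, nu = 1.0, 1.0, 0.02        # the values used for Figure 11(b)
w = np.linspace(0.5, 1.5, 100001)

C = S / np.sqrt((w0**2 - w**2)**2 + (2.0 * nu * w)**2)
w_peak = w[np.argmax(C)]

print(f"peak at w = {w_peak:.4f}, predicted sqrt(w0^2 - 2 nu^2) = "
      f"{np.sqrt(w0**2 - 2*nu**2):.4f}")
print(f"peak height = {C.max():.1f}, predicted S/(2 nu w0) = {S/(2*nu*w0):.1f}")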
The variation of L can be approximated by the function

$L(t) = L_o(1 + 2\gamma\sin 2\omega t)$ when $x(t) = A\sin\omega t$

The variation of the parameter L is adapted to the actual frequency of oscillation, but the equation of motion (6) is then modified. It yields

$\ddot{x} + \omega_o^2(1 - 2\gamma\sin 2\omega t)x + 2\nu\dot{x} = 0$ when $S = 0$ (9)

To solve this equation, we note that

$2\sin 2\omega t \sin\omega t = \cos\omega t - \cos 3\omega t$ (10)

This relation was proven in trigonometry, without mentioning its physical importance. The fact that the product of two oscillating functions is equivalent to a sum of two oscillating functions implies that Equation (9) is equivalent to

$\ddot{x} + \omega_o^2 x + 2\nu\omega A\cos\omega t = \gamma\omega_o^2 A(\cos\omega t - \cos 3\omega t)$

The periodic variation of the length L thus produces two oscillating driving forces. The first one exactly compensates friction when ω = ωo and γ = 2ν/ωo. The other force has negligible effects, since the frequency 3ω is too far away from the resonance frequency. The child can thus sustain the oscillations by its own muscular forces. Greater values of γ will not only compensate energy losses, but increase the amplitude. The system becomes then a parametric amplifier until equilibrium is reached. This can easily be verified by attaching a small mass to a string that is partially wrapped around a pencil. It is sufficient to rotate the pencil towards the left and the right at the adequate rhythm. Trapeze artists apply this method at a human scale. We transposed this theory to explain the mysterious phenomenon of ball lightning [72]. Since skeptics simply attributed it to illusions, in spite of numerous testimonies of serious witnesses, it is useful to show how the explanation was found. The example is also adequate to illustrate the method of imagining what is hidden and verifying if the logical consequences do agree with all observed facts. We assumed that ball lightning is similar to a soap bubble, but that the membrane is luminous, since it contains ionized air. However, light emission results from recombination processes. The luminosity should decrease very rapidly, as for ordinary lightning. The difference is that the membrane allows for collective motions of the free electrons with respect to the heavier ions inside the membrane. The frequency ωo of this “plasma oscillation” is determined by the density of free electrons, but they are radially oscillating inside the membrane. This makes the external surface alternately positive and negative. Ball lightning thus alternately attracts electrons and positive ions that are present in the ambient air near its surface. This not only replenishes the stock of charged particles, but also modulates the oscillation frequency as in Equation (9). This leads to auto-oscillations. They are sustained without requiring constant excitation by an oscillating electric field of external origin. How can we test the validity of this explanation? The answer is that it accounts for apparently mysterious observations. Indeed, ball lightning is usually moving around in an apparently erratic way.
Back to ball lightning: we can now "mentally see" that it is constantly attracted towards the greatest density of charged particles in the ambient medium. It thus behaves like a living being that survives by feeding itself. Moreover, ball lightning can suddenly disappear in midair, as if the light had been extinguished. This is due to entering a region where the ambient air is not sufficiently ionized. It can also happen that ball lightning explodes, even very violently. The invisible cause is a locally greater ionization of the ambient air, which leads to fatal parametric amplification. Although this was a digression, it is useful to understand an essential feature of science and to be prepared for another intellectual adventure. It concerns the destruction of viruses and other microbes by means of resonances, but these resonances are quite special.

4.5. The Peculiar Resonance of Virus Spikes

The analogy with shattering a wine glass [49] is useful as a first approximation. It also begins with determining the resonance frequency, and an experienced singer is able to produce a sound wave of precisely the same frequency. With laboratory instrumentation, we can prove that there are several possible frequencies and select one of them to produce a resonance [73]. Spikes can also be set in forced oscillations, but the resulting resonance is more powerful than normal resonances. Figure 13(a) shows the outline of the spikes of the coronavirus, according to data obtained by means of cryo-electron microscopy [74]. The spike is normally straight and we know that its upper part is negatively charged. Figure 13(b) reduces the spike to its essential elements. It is then similar to a cantilever that is implanted in a larger body, but can be bent when its effective tip mass M_o is subjected to a lateral force F, which produces a displacement x with respect to the symmetry axis.

Figure 13. (a) The global shape of a coronavirus spike in its normal state. (b) The applied force produces a lateral displacement and allows for a powerful resonance.

How does the displacement vary for an oscillating force? Fortunately, Friswell's team already established in 2012 the equation of motion for energy harvesters [75]. These consist of a homogeneous elastic beam that carries a tip mass and is implanted on a chariot. It is set in motion by random oscillations of a bridge, and the resulting kinetic energy is transformed by detectors into electric energy. We adapt their equation to describe possible vibrations of virus spikes. The only difference is that their base remains fixed, while the oscillating force is directly applied to the tip mass M_o. Although the stem of the spike is not homogeneous, we can keep the model of a uniform beam by defining average values. However, the spike is surrounded by a fluid instead of air and is thus subjected to viscous friction. The equation of motion is then

$M_o\ddot{x} = F(t) - \sigma\dot{x} - [k + a(x\ddot{x} + \dot{x}^2) + bx^2]x$ (11)

where

$M_o = M + 0.227L\mu + \frac{2.467}{L^2}I$ and $k = \frac{3.044}{L^3}EI - \frac{1.234}{L}gM - 0.367g\mu$

$a = \frac{1.523}{L^2}M + \frac{0.276}{L}\mu + \frac{6.091}{L^4}I$ and $b = \frac{3.756}{L^5}EI$

The effective mass M_o is thus defined by the real tip mass M, the length L of the beam and its averaged mass density μ. It is also necessary to account for the moment of inertia I of the spike with respect to the fixed point.
The stiffness of the beam is mainly determined by the coefficient k, which depends on Young's modulus E of the constituting material, the length L of the cantilever and its moment of inertia I. For energy harvesters with a very flexible beam, the weight of the tip mass M and of the beam reduces the restoring force. This allows for two lateral equilibrium positions along a given direction. The nonlinear terms of Equation (11) then lead to chaotic behavior, where the tip mass can execute small oscillations around one of the possible equilibrium positions, but can also pass from one side to the other. Indeed, very small changes of the instantaneous position and velocity at critical moments will produce qualitatively different motions. For spikes, surrounded by a fluid, we can neglect the weights because of buoyancy, but the nonlinear effects remain important. Actually, the equation of motion (11) reduces to

$\ddot{x} + \omega_o^2 x + 2\nu\dot{x} + 4\gamma[x\ddot{x} + \dot{x}^2 + \beta x^2]x = S\sin\omega t$ (12)

where

$\omega_o^2 = \frac{k}{M_o} = \frac{3.044EI}{L^3 M_o}$, $\gamma = \frac{a}{4M_o} = \frac{0.381}{3.467L^2}$ and $\beta = \frac{b}{aM_o} = \frac{2.466}{1.523}\frac{EI}{L^3 M_o} = \frac{1.619}{3.044}\omega_o^2$

The spike does not simply behave like an inverted pendulum, since Equation (6) is generalized: the nonlinear terms depend on the instantaneous values of the displacement x, the velocity $\dot{x}$ and the acceleration $\ddot{x}$. When the tip mass is dominant, I ≈ ML² and M_o ≈ 3.467M. Measuring the displacement x in units of L = 1, we then get $\gamma = 0.110$ and $\beta = 0.532\omega_o^2$. Since forced oscillations occur at the imposed frequency ω,

$x = C\sin(\omega t + \phi)$, $\dot{x} = \omega C\cos(\omega t + \phi)$ and $\ddot{x} = -\omega^2 x$

The value of φ depends on ν, but setting $\alpha = \omega t + \phi$, we always get

$4x^3 = C^3(3\sin\alpha - \sin 3\alpha)$ and $4\dot{x}^2 x = \omega^2 C^3(\sin 3\alpha + \sin\alpha)$

The nonlinear terms in Equation (12) are thus again equivalent to adding two forces. They oscillate at the frequencies ω and 3ω. The first one modifies the applied force and the second force can be neglected, since the frequency 3ω is too far off resonance. To concentrate on the essential features of the resonance, we assume that friction is negligible (ν = 0 and φ = 0). The amplitude C of forced oscillations is then determined by Equation (12), which yields

$(\omega_o^2 - \omega^2)C\sin\alpha + 2\gamma(\omega_1^2 - \omega^2)C^3\sin\alpha = S\sin\omega t$

Indeed, $4(x\ddot{x} + \dot{x}^2 + \beta x^2)x = (3\beta - 2\omega^2)C^3\sin\alpha$ up to an off-resonant term in sin 3α, and $\omega_1^2 = 3\beta/2 = 0.798\omega_o^2$. The amplitude of the forced oscillation is thus determined by the cubic equation

$(\omega_o^2 - \omega^2)C + 2\gamma(\omega_1^2 - \omega^2)C^3 = S$ (13)
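The reduction to (13) can be retraced explicitly. Besides the two identities quoted above, one needs $4x^2\ddot{x} = -\omega^2 \cdot 4x^3 = -\omega^2 C^3(3\sin\alpha - \sin 3\alpha)$, which follows from $\ddot{x} = -\omega^2 x$ (this intermediate line is used implicitly in the text). Collecting terms:

$4(x\ddot{x} + \dot{x}^2 + \beta x^2)x = (3\beta - 2\omega^2)C^3\sin\alpha + (2\omega^2 - \beta)C^3\sin 3\alpha$

Dropping the off-resonant sin 3α contribution and matching the coefficients of sin α in (12), with ν = 0, gives exactly the cubic (13), with $\omega_1^2 = 3\beta/2$.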
Thus, when γ = 0, i.e. without nonlinear effects,

$C = C_o(\omega) = \frac{S}{\omega_o^2 - \omega^2}$

This is the normal resonance. It corresponds to (8), but is simpler, since ν = 0. It is represented by the black graph in Figure 14(a) and gives the actual amplitude of forced oscillations, with a change of sign below and above the resonance frequency. It differs from Figure 11(b), which accounted only for the magnitude of C(ω) when ν = 0.02, while the sign was determined by Figure 12(a). Nonlinear effects modify the response of spikes, since Equation (13) allows for three possible solutions, and in particular for

$C = 0$ or $C^2 = \frac{\omega_o^2 - \omega^2}{2\gamma(\omega^2 - \omega_1^2)}$ when $S = 0$

Figure 14. (a) The amplitude of forced oscillations of spikes would be C_o(ω) without nonlinear effects. The resonance frequency is then ω_o = 1. Nonlinear effects allow for auto-oscillations, characterized by C_1(ω). (b) This leads to a synergy when it is combined with constant excitation, yielding C(ω).

We expected, of course, that without excitation there is no response, but because of nonlinear effects, spikes are able to produce a driving force that yields auto-oscillations. This is similar to what happened for a pendulum whose effective length was periodically modified, but the reason is different. Here it does not compensate friction, since we even considered the case where ν = 0. The stiffness of spikes allows them to oscillate by themselves, but only in a limited frequency domain, since C² has to be positive to yield a real value for the amplitude C. It is necessary that $\omega_1 \le \omega \le \omega_o$. The resulting function C_1(ω) is represented by the red curve in Figure 14(a). It defines the amplitude of the auto-oscillation and diverges when $\omega \to \omega_1 \approx 0.9\omega_o$. There is a resonance, but at a lower frequency than for the normal resonance.

Equation (13) tells us what happens when the spike is constantly subjected to an excitation of strength S, oscillating at some given frequency ω. The normal resonance is then modified, since C_o(ω) has to be replaced by a function C(ω), represented in Figure 14(b). It can only be defined by real solutions of (13). This yields three curves, represented in different colors. They are simultaneously possible in a very small frequency domain (from 0.8933 to 0.89621 when ω_o = 1). The rigorous treatment of this problem was necessary because of past contestations. It is also instructive from a theoretical point of view, since it illustrates the usually hidden mechanism of nonlinear systems. Moreover, it is of great practical importance, and we can now harvest the fruits.

We measured displacements in units of L = 1, but when the mass M is dominant, the moment of inertia I is proportional to L². The value of $\omega_1^2$ is thus proportional to 1/L. The resonance frequency is greater for short spikes, since they are stiffer. However, γ ≈ 0.11/L². The amplitude of forced oscillations is thus proportional to L²S and greater for long spikes than for short ones. Moreover, there is only one resonance frequency for every particular type of microbe, in agreement with Rife's measurements. Since its value ω_1 does not depend on the strength of the applied force, it is sufficient to determine ω_1 for one particular strength S, although greater values of S do increase the amplitude of forced oscillations. Empirical discoveries can make sense, of course, without understanding the underlying mechanism. It is then only necessary to verify that the observed facts are trustworthy.
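The narrow tri-valued window quoted above can be reproduced numerically. The following sketch (our own illustration; it takes γ = 0.110 and ω_1² = 0.798 from the text with ω_o = 1, and assumes S = 1, which turns out to reproduce the quoted window) finds the real roots of the cubic (13) by scanning for sign changes and refining them by bisection:

    // Real roots of Eq. (13): (w0^2 - w^2) C + 2 gamma (w1^2 - w^2) C^3 = S.
    const w0 = 1, gamma = 0.110, w1sq = 0.798, S = 1; // S = 1 is an assumption
    const f = (C, w) =>
      (w0 * w0 - w * w) * C + 2 * gamma * (w1sq - w * w) * C * C * C - S;

    function realRoots(w, lo = -100, hi = 100, steps = 200000) {
      const roots = [], dC = (hi - lo) / steps;
      for (let C = lo; C < hi; C += dC) {
        let a = C, b = C + dC;
        if (f(a, w) * f(b, w) > 0) continue; // no sign change in this cell
        for (let i = 0; i < 60; i++) {       // bisection refinement
          const m = 0.5 * (a + b);
          if (f(a, w) * f(m, w) <= 0) b = m; else a = m;
        }
        roots.push(0.5 * (a + b));
      }
      return roots;
    }

    // Only for w between w1 = 0.8933 and about 0.8962 are three real
    // solutions found; outside that window the cubic has a single real root.
    for (const w of [0.85, 0.894, 0.95]) {
      console.log(w, realRoots(w).map(r => r.toFixed(2)));
    }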
When apparently unbelievable facts turn out to be real, they have to be explained, but that requires other methods and may be delayed.

4.6. The Mechanism of Targeted Microbe Destruction

Constructing a theory consists in choosing a plausible hypothesis and deducing logical consequences. Even a correct theory is only valid when its predictions agree with all known facts and with further experimental verifications. Figure 15(a) represents a cantilever that is set in forced oscillation at a frequency ω close to the resonance frequency. When ω = ω_1, the amplitude C of the oscillation becomes quasi-infinite, even when friction is not totally negligible.

Figure 15. (a) Forced oscillations of a spike or pilus. (b) At resonance, it is extracted and creates a hole in the membrane, which destroys the microbe by extrusion of its content.

Resonance would thus imply enormous stretching of spikes or pili if they were solidly implanted in the membrane or any equivalent structure. Normally, this would cause plastic deformations and eventually fracture, but here that is replaced by extraction from the membrane and the creation of a hole. This causes expulsion of the content of the virus, as represented in Figure 15(b). The virus is definitively destroyed. We recall that spikes are only fastened by the hydrophilic tail of their backbone, represented in green. This fixation is usually sufficient, but not as secure as what can be achieved by means of bolts and screws. That system was invented in Antiquity, around 400 BC, and it was recognized that a screw has to be well tightened. The explanation is that dry friction between non-moving surfaces is greater than kinetic friction and is increased by pressure. This insight came much later and even required repeated discoveries (by da Vinci, Amontons and Coulomb, around 1500, around 1700 and in 1785, respectively). The fastening of spikes is improved by triple implanting, but not enough to resist at the resonance frequency. Even trees can be uprooted by strong wind, in spite of their distributed roots.

This theory also applies to pili, which cover the surface of bacteria and nanobacteria. Since they carry electrical charges, they can also be set in forced vibration by an oscillating electric field. They behave like cantilevers that can be elastically bent, but because of nonlinear effects, they allow for auto-oscillation and extremely great amplitudes of oscillation at their resonance frequency ω_1. The creation of a hole and the destruction of microbes could be observed. Holland's video [53] even shows local bulging of the membrane of bacteria before the explosive eruption. This means that a bunch of pili was involved at a place where the electric field was perpendicular to these structures. Bulging also began at the opposite side, where the electric field is equally effective. Moreover, microbes are immobilized before breaking up, since the propelling function of pili is already disturbed; it does not only depend on motorized flagella. The sophisticated spikes of viruses are highly efficient weapons, but they have an inherent weakness that can be exploited when we are aware of this possibility. Although the pili of bacteria and nanobacteria have other functions, they also lead to total destruction, without survivors that could initiate securing mutations. It is only necessary to impose a frequency that is precisely equal to the resonance frequency.
It was a huge error to claim that the observed facts had to be bogus, even without verifying, since it deprived humanity of a simple and secure method to fight sicknesses, plagues and even cancer. Rife found not only that BX and BY bacteria are involved in cancer, but also that they proliferate after radiotherapy. This merits special attention. It has been rediscovered that oscillating electric fields can be used for cancer therapy. A possible mechanism has been proposed [76] [77], but it is still hypothetical and should be reexamined in a larger context. Since pleomorphic transformations are reversible, it is plausible that the total number of pili is conserved during size reduction. Their surface density would then be increased for nanobacteria, which would explain why they become more virulent. Nevertheless, they remain vulnerable to Rife's resonances. It should be noted that a cantilever constituted of a steel wire also allows for another mode of oscillation, where a node appears near the free end. However, its resonance frequency is significantly higher, and this type of deformation can be excluded for virus spikes. We mention also that it is possible to use UV light for disinfecting surfaces. Low-pressure mercury lamps produce UVC light at 254 nm. It is absorbed by RNA and produces local defects there [78], but this method is only applicable to viruses that are directly exposed to this radiation on a surface, in aerosol droplets or in very thin liquid films. The FDA issued a warning, since UVC light damages human skin and is dangerous for our eyes [79]. Rife's method is not dangerous at all and is efficient for destroying virulent microbes anywhere inside our body.

5. Summary and Conclusions

Rife proved that optical microscopes allow for much greater magnification and higher resolution than had been assumed. Since he constructed several supermicroscopes himself, with purely classical means, his scientific creativity and accomplished craftsmanship deserve recognition and respect. With his instruments, he could clearly observe that when bacteria are exposed to stress, they reduce their size. They keep only what is essential to survive, but are more aggressive. Naessens also proved the reality of pleomorphism, but some authorities did not like that. They declared that these pioneers were quacks, but refused to verify whether the reported facts were real or not. This attitude is more than strange. Moreover, Rife discovered already about 100 years ago that bacteria and nanobacteria can be destroyed by resonance. His phanotron was an ingenious instrument that produced the required electric field by means of braking radiation (bremsstrahlung). Instead of emitting EM waves in all directions by means of an antenna, as Hertz did and as is usual for broadcasting, his system generated a local beam of EM waves. The destruction of bacteria by resonance was already observable with standard optical microscopes, but Rife was also able to see exploding nanobacteria. Actually, he had to persevere for more than a decade to make these tiny microbes visible. He then proved that some of them are present in cancer tissue and that, after being cultured, they also cause cancer. He determined the resonance frequencies for targeted destruction of various microbes. These values were confirmed by two collaborators, using different equipment.
The efficiency of his method for healing cancer was clinically tested, but even these results were declared to be bogus, since accepting them would have required modifying cherished postulates. For those who had the power to crush Rife, it did not matter that the reported results might be true. Rife was prosecuted. His documents were stolen. Instruments were wrecked and all traces of his achievements were eradicated from official medical records. That was profoundly unjust, truly antiscientific and detrimental to the worldwide health system.

We solved the basic scientific puzzle with mathematical rigor by considering that the spikes of viruses, as well as the pili of bacteria and nanobacteria, can be set in forced oscillation. Nonlinear effects make the resonance stronger and sharper than usual ones. There is only one resonance frequency for every particular type of microbe. It depends on the length of the oscillating structures and is thus specific. This theory is experimentally confirmed and leaves no rational reason to deny the possibility of targeted destruction of microbes. The author of this study has no financial interest whatsoever in a general implementation of this method. His concerns are merely truth and justice, but also the reduction of unnecessary suffering and death. Other persons and institutions now have to go on in the direction that was pointed out by Rife.

The coronavirus pandemic and its social implications should make everyone attentive to the existence of a biophysical method that differs from the usual biochemical one, but is effective, secure and rapidly adaptable. It is only necessary to determine the adequate resonance frequency very carefully, but this can be done with inexpensive equipment and by means of direct microscopic observation. Even the constant threat of unpredictable mutations can rapidly be countered. The treatment of patients requires only a few minutes, at intervals of three days. Because of the sharp resonance, this method is selective and can eliminate side effects when it is objectively controlled, like cars and traffic, for instance. There is also rising evidence that our precious arsenal of antibiotics is endangered. Multiple antibacterial resistances are increasing [80]. The US Centers for Disease Control and Prevention concluded already in 2015 that "coordinated efforts to implement new policies, renew research efforts, and pursue steps to manage the crisis are greatly needed." This was thought to "require considerable investment of human and financial resources", but that is not true for Rife's method. Basically, it only requires changing some habits of thought.

We have to insist that dogmatism and concentration of power endanger scientific progress by blocking freedom of thought. When research is guided by observation of reality and rational analysis, it is more efficient than ideology, but dictatorship and reckless use of power are always a temptation. In this context, we have to mention the research of the Russian chemist Tamara Lebedeva [81]. She made intensive research on trichomonad parasites and proved that they can cause cancer. Their reproduction rate was found to be increased by radiotherapy, but Lebedeva's work was not recognized by medical authorities. Alfons Weber studied the impact on cancer of the protozoa causing malaria [82]. He published his results in 1967 and promptly lost his approbation as a medical doctor. It is now appearing that there are connections between malaria and cancer [83].
The plasmodium parasite is involved in both fields, and cross-fertilization in medicine should be encouraged. The common feature of the research of Lebedeva and Dr. Weber was to consider that cancer is an infectious disease. We have to add that these types of parasites are not the only possible cause of cancer, but their effects should also be investigated.

To provide a concrete and still topical example of tenacious preconceptions related to medicine, we mention that Dr. Jacques Benveniste experimentally demonstrated the existence of "water memory" in the 1980s. Nevertheless, it was declared (even by a world-famous scientific journal) that this is impossible. This claim was based on the apparently obvious idea that when biologically active molecules are repeatedly diluted in water, they cannot be active any more once all of them have been eliminated. Since Benveniste found that the final solution produced the same effect as if these molecules were still present, this fact required an explanation. From a scientific and human point of view, it is shocking that it was accepted to pretend that Dr. Benveniste and his team were merely victims of an illusion. The real problem required verifying whether the active molecules could be replaced by substitutes resulting from a structuring of water. Is it really impossible that thermal agitation of water molecules in the liquid state could cause structuring of water molecules according to the initial template? Since this concerns condensed matter physics, we tried to solve this problem and published a rational explanation in 2018 [84]. Since water molecules are dipolar, they can constitute very dense crystallites, where all these dipoles have the same orientation. We proved that they are spherical and so small that they could not be observed with standard optical microscopes. These nano-pearls are formed by the electric field of the charged parts of biologically active molecules. Being also dipolar, these "water pearls" constitute chains, where they are set in rotation at the frequency of the vibrating charged part of the active molecules. The resulting standing wave limits the length of these chains by preventing further growth. Since the length of these chains depends on the frequency of oscillation of the charged part of the active molecules, they become relevant information carriers. Their existence is real. They are even multiplied by successive dilutions, always accompanied by vigorous shaking, since some chains are then broken, but they are reconstituted by the oscillating electric field of the majority of intact chains. This process yields numerous efficient substitutes of the active molecules. It also explains about a dozen other experimental observations made in the meantime. It even accounts for homeopathy, which is at present under harsh attack by those who claim that these products can merely act as placebos. Without understanding the real mechanism, it is more difficult to abandon preconceptions, but it is surprising that even a high-level "Scientific Advisory Council" participated in lobbying to eliminate homeopathy by legal procedures [85]. Ideology and financial interests can even overrule normal scientific procedures. This fact has to be combined with objective evidence that pharmaceutical industries are radically and recklessly oriented towards big profits in a capitalistic way. This dreadful situation was recently documented by Luc Hartmann [86] and shows that Big Pharma is out of control.
This is not always true, of course, but according to trustworthy authorities [86], they can indirectly pay the institutions that are supposed to evaluate efficacy, as well as already reported side effects. Political and legislative authorities should urgently react to avoid more scandals and flagrant injustices. Rife made a great discovery. We only explained why apparently unbelievable, but observed, facts are true. Now it is up to other persons and institutions to carry on. The fast development of vaccines merits praise, but there is now much more at stake than the protection of already gigantic industrial empires. We appeal therefore to the World Health Organization, all Ministers of Health and Governments, as well as the United Nations, to initiate and organize also another way to cope with unnecessary suffering and death. That is possible and urgently needed.

Acknowledgements

The author wants to thank Dr. Med. Jens Wurster for informing him about Rife's work and for many interesting exchanges of ideas, as well as a careful, unknown referee for constructive remarks.

Conflicts of Interest

The author declares no conflicts of interest regarding the publication of this paper.

References

[1] Lynes, B. (1987) The Cancer Cure That Worked! Fifty Years of Suppression. BioMed Publishing Group, South Lake Tahoe.
[2] Masters, B.R. (2010) The Development of Fluorescence Microscopy. Wiley Online Library. https://doi.org/10.1002/9780470015902.a0022093
[3] Anonymous (2017) Royal Rife's Universal Microscope (and Why It Can't Exist). Blog. http://sustainable-nano.com/2017/08/18/royal-rifes-universal-microscope-and-why-it-cant-exist
[4] Wade, G. (2014) The First Rife Microscope and the Gained Ability to Observe Viruses and the Fine Structure of Bacteria with an Optical Microscope. https://www.rife.de/observe-viruses-and-bacteria.html
[5] Kungliga Vetenskapsakademien (2014) The Nobel Prize in Chemistry 2014. Scientific Background. https://www.nobelprize.org/prizes/chemistry/2014/advanced-information
[6] Seidel, R.E. and Winter, M.E. (1944) Journal of the Franklin Institute, 237, 103-130. http://www.pulsedtechresearch.com/wp-content/uploads/2013/04/New-Microscopes-SeidelWinter.pdf https://doi.org/10.1016/S0016-0032(44)90203-6
[7] BRMI (2018) History—Dr. Royal Raymond Rife Jr. https://www.brmi.online/royal-raymond-rife
[8] Comparet, B.L. (1960) Questions and Answers. Rife Research Europe: An Interview with Rife. https://www.rife.de/an-interview-with-rife.html
[9] Neuman, R.O. and Mayer, M. (1914) Atlas und Lehrbuch wichtiger tierischer Parasiten und ihrer Überträger. https://jamanetwork.com/journals/jama/fullarticle/438846
[10] Kendall, A.K. and Rife, R.R. (1931) California and Western Medicine, 35, 409-411. https://europepmc.org/article/med/18741967
[11] San Diego Union (1929) Local Man Bares Wonders of Germ Life. New Apparatuses Unveil Hidden Microbe Universe to Human Eye. https://rifevideos.com/local_man_bares_wonders_of_germ_life.html
[12] Rife, R.R. (1953) History of the Development of a Successful Treatment for Cancer and Other Virus, Bacteria and Fungi. Research Lab. Data, San Diego. https://www.rife.de/files/history_rife_cancer_treatment.pdf
[13] Rosenow, E.C. (1932) Science, 76, 192-193. https://www.rife.de/observations-with-the-rife-microscope.html https://doi.org/10.1126/science.76.1965.192
[14] Hess, D.J. (1996) Medical Anthropology Quarterly, 10, 657-674. https://doi.org/10.1525/maq.1996.10.4.02a00140
[15] Rife, R.R. and Crane, J. (1950) The Universal Microscope. RifeVideos.com. https://www.rifevideos.com/dr_rife_talks_with_john_crane_about_his_universal_microscope.html
[16] Hell, S.W. (2014) Nobel Lecture. https://www.youtube.com/watch?v=9BzGB1SUPGQ https://www.nobelprize.org/uploads/2018/06/hell-lecture.pdf
[17] Vangindertael, J., et al. (2018) Methods and Applications in Fluorescence, 6, Article ID: 022003. https://doi.org/10.1088/2050-6120/aaae0c https://www.chem.kuleuven.be/pd/static/publications/Vangindertael2018.pdf
[18] Bonhams (2009) An Exceptionally Rare Rife Microscope. https://www.bonhams.com/auctions/16871/lot/113
[19] Elswick, S.R. (1994) The Amazing Wonders of Gaston Naessens. Super Microscopes and Suppressed Cancer Treatments. Nexus Magazine. http://www.whale.to/v/naessens.html
[20] Naessens, G. (2010) The Somatoscope. https://www.youtube.com/watch?v=KGJW94ciq4c
[
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 58, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7714560627937317, "perplexity": 1733.929272084753}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320305242.48/warc/CC-MAIN-20220127072916-20220127102916-00566.warc.gz"}
https://cs.stackexchange.com/questions/106056/the-time-complexity-of-finding-the-kth-smallest-number-using-buckets
The time complexity of finding the kth smallest number using buckets

I've implemented finding the kth smallest number using buckets representing the current nibble value of each element in the array, where current is a value starting at up to 64 (for at most 64-bit integers) and decreasing by 4 (the nibble size) on each iteration. I was wondering what the worst-case time complexity of this implementation is. I think it's O(n^(64/4)), which is O(n^16). Is that correct?

    function nthSmallest(array, k, sizeOfInt) {
      let buckets = [
        [], [], [], [], [], [], [], [],
        [], [], [], [], [], [], [], []
      ];
      // put numbers in buckets - O(n)
      for (let i = 0; i < array.length; i++) {
        const high = (array[i] >> (sizeOfInt - 4)) & 0xF; // 4 = nibble size
        buckets[high].push(array[i]);
      }
      let numbers = [];
      for (let i = 0; i < buckets.length && numbers.length < k; i++) {
        if (numbers.length === k - 1 && buckets[i].length === 1) {
          return buckets[i][0];
        }
        for (let j = 0; j < buckets[i].length; j++) {
          numbers.push(buckets[i][j]);
        }
      }
      return nthSmallest(numbers, k, sizeOfInt - 4);
    }

Answer: Consider k = n, with the two greatest numbers differing only in their least significant 4 bits. Then the two greatest numbers will always land in the same bucket in each iteration except the last. This means you have to iterate for sizeOfInt, sizeOfInt - 4, ... down to 4, so the overall complexity is O(nw), where w is sizeOfInt.

• Shouldn't it at least be O(nw/4)? Also, how does it compare to other solutions for the problem, like a max-heap or partial quicksort? Should I consider using this over partial quicksort, for example, since that is O(n^2) in the worst case? – Jorayen Mar 26 '19 at 17:35
• @Jorayen O(nw/4) = O(nw). You may want a smarter solution. – xskxzr Mar 26 '19 at 17:43
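To see the O(nw) bound concretely, here is a small driver (our illustration, not part of the thread) that wraps the function above with a call counter and runs it on the adversarial input described in the answer: all elements share their high bits and differ only in the lowest nibble, with k = n. We use sizeOfInt = 32, since JavaScript's >> operator works on 32-bit values.

    // Wrap the function from the question so every (recursive) call is counted.
    // This works because the recursive call resolves through the outer
    // `nthSmallest` binding, which we reassign here.
    let calls = 0;
    const original = nthSmallest;
    nthSmallest = (array, k, sizeOfInt) => {
      calls++;
      return original(array, k, sizeOfInt);
    };

    // Adversarial input: all 16 values agree on every nibble except the lowest,
    // so each level dumps everything into one bucket until the very last level.
    // (16 distinct values keep the example small and let the last level resolve.)
    const n = 16;
    const array = Array.from({ length: n }, (_, i) => 0x7FFFFFF0 | (i % 16));

    console.log(nthSmallest(array, n, 32)); // prints the maximum, 2147483647
    console.log('recursive calls:', calls); // 32 / 4 = 8 levels, each doing O(n) work

Eight levels of O(n) work each matches the (w/4) * n = O(nw) bound from the answer, rather than anything exponential in n.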
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 3, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.44059792160987854, "perplexity": 2497.8013890521825}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703587074.70/warc/CC-MAIN-20210125154534-20210125184534-00549.warc.gz"}
https://code.tutsplus.com/courses/perfect-workflow-in-sublime-text-2/lessons/regular-expressions-in-sublime
# 5.1 Regular Expressions in Sublime
In this video, I'll provide you with the basics of using regular expressions in Sublime Text.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8684618473052979, "perplexity": 13215.862472633135}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499695.59/warc/CC-MAIN-20230128220716-20230129010716-00855.warc.gz"}
https://www.physicsforums.com/threads/calculus-differentiation-question.128045/
# Homework Help: Calculus Differentiation Question

1. Aug 4, 2006

### ohlhauc1

I am just learning calculus, and I have to differentiate a problem. I have worked on it and I have asked people, but they do not know. Here is what I have done thus far:

f(x) = 3x^{-6} - 8x^5 + 9x^{2/5} + √7
f(x) = 3(dx^{-6}/dx) - 8(dx^5/dx) + 9(dx^{2/5}/dx) + √7(d/dx)
f(x) = 3(-6x^5) - 8(5x^4) + 9((2/5)x^{-3/5}) + 0
f(x) = -18x^5 - 40x^4 + (18/5)x^{-3/5}

I am wondering if this is it, or whether I need to do more? If I need to do more, could you say which rule I should use or give some other advice? Thanks

P.S. If it is wrong, could you please tell me as well.

2. Aug 4, 2006

### Staff: Mentor

Your notation is a bit confusing. What is the starting function f(x)? And are you asked to find d[f(x)]/dx? Are all the "x" in your equations the unknown "x", or are some of them multiplication symbols?

3. Aug 4, 2006

### Data

If you mean that your function is

$$f(x) = 3x^{-6} - 8x^5 + 9x^{\frac{2}{5}}+\sqrt{7}$$

and you found that the derivative is

$$f^\prime (x) = -18x^5 - 40 x^4 + \frac{18}{5}x^{-\frac{3}{5}},$$

(I don't know why you've written f(x) = at every line when you are trying to differentiate) then you have made a small error, because

$$\frac{d(x^{-6})}{dx} = -6x^{-7},$$

not $-6x^5$. Other than that it's ok (if I have translated your rather cryptic notation correctly - try to learn LaTeX: https://www.physicsforums.com/showthread.php?t=8997)

Last edited: Aug 4, 2006
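For completeness, applying Data's correction to the first term gives the fully corrected derivative (our summary of where the thread ends up):

$$f'(x) = -18x^{-7} - 40x^{4} + \frac{18}{5}x^{-\frac{3}{5}}$$

since the power rule $\frac{d(x^n)}{dx} = nx^{n-1}$ applies to each term and the constant $\sqrt{7}$ differentiates to zero.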
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8544702529907227, "perplexity": 1086.4386785896857}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267158279.11/warc/CC-MAIN-20180922084059-20180922104459-00296.warc.gz"}
http://cds.cern.ch/collection/ATLAS%20Preprints?ln=ru&as=1
# ATLAS Preprints

Latest additions:

2019-09-20 22:59
Search for direct production of electroweakinos in final states with one lepton, missing transverse momentum and a Higgs boson decaying into two $b$-jets in $pp$ collisions at $\sqrt{s}=13$ TeV with the ATLAS detector
The results of a search for electroweakino pair production $pp \rightarrow \tilde\chi^\pm_1 \tilde\chi^0_2$ in which the chargino ($\tilde\chi^\pm_1$) decays into a $W$ boson and the lightest neutralino ($\tilde\chi^0_1$), while the heavier neutralino ($\tilde\chi^0_2$) decays into the Standard Model 125 GeV Higgs boson and a second $\tilde\chi^0_1$ are presented. [...] CERN-EP-2019-188. - 2019. Fulltext - Previous draft version

2019-09-19 07:44
Search for squarks and gluinos in final states with same-sign leptons and jets using 139 fb$^{-1}$ of data collected with the ATLAS detector / ATLAS Collaboration
A search for supersymmetric partners of gluons and quarks is presented, involving signatures with jets and either two isolated leptons (electrons or muons) with the same electric charge, or at least three isolated leptons. [...] arXiv:1909.08457; CERN-EP-2019-161. - 2019. - 42 p. Fulltext - Previous draft version - Fulltext

2019-09-06 16:03
Combined measurements of Higgs boson production and decay using up to 80 fb$^{-1}$ of proton-proton collision data at $\sqrt{s}=13$ TeV collected with the ATLAS experiment / ATLAS Collaboration
Combined measurements of Higgs boson production cross sections and branching fractions are presented. [...] arXiv:1909.02845; CERN-EP-2019-097. - 2019. - 80 p. Fulltext - Previous draft version - Fulltext

2019-09-04 14:59
Measurement of azimuthal anisotropy of muons from charm and bottom hadrons in $pp$ collisions at $\sqrt{s}=13$ TeV with the ATLAS detector / ATLAS Collaboration
The elliptic flow of muons from the decay of charm and bottom hadrons is measured in $pp$ collisions at $\sqrt{s}=13$ TeV using a data sample with an integrated luminosity of 150 pb$^{-1}$ recorded by the ATLAS detector at the LHC. [...] arXiv:1909.01650; CERN-EP-2019-166. - 2019. - 27 p. Fulltext - Previous draft version - Fulltext

2019-09-04 14:03
Rare top quark production at the LHC: $t\bar{t}Z$, $t\bar{t}W$, $t\bar{t}\gamma$, $tZq$, $t\gamma q$, and $t\bar{t}t\bar{t}$ / Knolle, Joscha (DESY) / ATLAS and CMS Collaborations
A comprehensive set of measurements of top quark pair and single top quark production in association with electroweak bosons (W, Z, or $\gamma$) is presented. The results are compared to standard model (SM) predictions and used to set limits on new physics effects that would induce deviations from the SM. CMS-CR-2019-078. - Geneva: CERN, 2019 - 7 p. Fulltext: PDF; In: The XXVII International Workshop on Deep Inelastic Scattering and Related Subjects, Turin, Italy, 8 - 12 Apr 2019

2019-09-04 13:59
Electroweak measurements at the High-Luminosity LHC / Savin, Alexander (Wisconsin U., Madison) / ATLAS and CMS Collaborations
A set of selected standard model measurements proposed for the ATLAS and CMS experiments after the high-luminosity upgrade of the LHC is discussed. The measurements are separated into two categories: precise measurements that benefit from both improved systematic uncertainties and increased luminosity, like W or top mass, or weak mixing angle measurements; and measurements of low-cross-section production that benefit mainly from the luminosity increase and detector improvements, like VV VBS polarized cross section measurements or the study of VVV production. CMS-CR-2019-120. - Geneva: CERN, 2019 - 7 p. Fulltext: PDF; In: 7th Edition of the Large Hadron Collider Physics Conference, Puebla, Mexico, 20 May 2019

2019-09-04 13:59
Differential measurements of Higgs production at ATLAS and CMS / Sculac, Toni (Split Tech. U.) / ATLAS and CMS Collaborations
Differential Higgs boson production cross sections are sensitive probes for physics beyond the Standard Model. New physics may contribute in the gluon-gluon fusion loop, the dominant Higgs boson production mechanism at the LHC, and manifest itself through deviations from the distributions predicted by the Standard Model. [...] CMS-CR-2019-113. - Geneva: CERN, 2019 - 8 p. Fulltext: PDF; In: 7th Edition of the Large Hadron Collider Physics Conference, Puebla, Mexico, 20 May 2019

2019-09-03 21:58
Search for light long-lived neutral particles produced in $pp$ collisions at $\sqrt{s}=13$ TeV and decaying into collimated leptons or light hadrons with the ATLAS detector / ATLAS Collaboration
Several models of physics beyond the Standard Model predict the existence of dark photons, light neutral particles decaying into collimated leptons or light hadrons. [...] arXiv:1909.01246; CERN-EP-2019-140. - 2019. - 42 p. Fulltext - Previous draft version - Fulltext

2019-09-02 17:41
Performance of electron and photon triggers in ATLAS during LHC Run 2 / ATLAS Collaboration
Electron and photon triggers covering transverse energies from 5 GeV to several TeV are essential for the ATLAS experiment to record signals for a wide variety of physics: from Standard Model processes to searches for new phenomena in both proton-proton and heavy-ion collisions. [...] arXiv:1909.00761; CERN-EP-2019-169. - 2019. - 55 p. Fulltext - Fulltext

2019-08-22 22:53
Search for flavour-changing neutral currents in processes with one top quark and a photon using 81 fb$^{-1}$ of $pp$ collisions at $\sqrt{s}=13$ TeV with the ATLAS experiment / ATLAS Collaboration
A search for flavour-changing neutral current (FCNC) events via the coupling of a top quark, a photon, and an up or charm quark is presented using 81 fb$^{-1}$ of proton-proton collision data taken at a centre-of-mass energy of 13 TeV with the ATLAS detector at the LHC. [...] arXiv:1908.08461; CERN-EP-2019-155. - 2019. - 34 p. Fulltext - Previous draft version - Fulltext
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9953795075416565, "perplexity": 3201.947530609418}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514574765.55/warc/CC-MAIN-20190922012344-20190922034344-00214.warc.gz"}
http://graduate.physics.sunysb.edu/announ/
Defending Students: It is a requirement of the Graduate School that your thesis defense is posted on the webpage of the Graduate School at least three weeks in advance of the defense. Please check! I will need the abstract at least 4 weeks before your defense.

10 May 2018 | MT 6-125 | Ningqiang Song | Extraterrestrial Neutrinos as Probes of Their Sources and Interactions
9 May 2018 | C 133 | Hans Niederhausen | Measurement of High Energy Astrophysical Neutrino Flux Using Electron and Tau Neutrinos Observed in Four Years of IceCube Data
2 May 2018 | MT 6-125 | Xinan Zhou | Holographic Mellin Amplitudes
1 May 2018 | MT 6-125 | Yanliang Shi | Study of Chiral Gauge Theories and Heavy-Quark Hadrons
25 April 2018 | B 131 | Juan Pablo Nery | Renormalization of Electronic Properties of Semiconductors from Electron-Phonon Interaction
19 April 2018 | MT 6-125 | Gongjun Choi | Study of Renormalization Scheme Transformation and RG Flow of Asymptotically Free Theories
10 April 2018 | ESS 450 | Taeho Ryu | Study of Formation of Binary Black Holes, their Interactions with Surroundings and their Mergers
15 December 2017 | C 133 | Vladimir Khachatryan | The Quark Gluon Plasma Probed by Low Momentum Direct Photons in Au+Au Collisions at $\sqrt{s_{NN}}$ = 62.4 GeV and $\sqrt{s_{NN}}$ = 39 GeV Beam Energies
13 September 2017 | D 122 | Alyssa Montalbano | Search for Standard Model Higgs Boson Produced with an Associated W Boson and Decaying to a Pair of Quarks at $\sqrt{s}$ = 13 TeV with the ATLAS Detector
15 August 2017 | C 133 | Huijun Ge | Jet-Medium Interactions via Direct Photon-Hadron Correlations Measurements in Au+Au Collisions at $\sqrt{s}$ = 200 GeV
10 August 2017 | B 131 | Adrian Soto | Simulating Complex Problems in Condensed Matter Physics: from Liquid Water to Dark Matter-Electron Interactions
4 August 2017 | MT 6125 | Hualong Gervais | Soft Radiation Theorems at All Loop Order in Quantum Field Theory
2 August 2017 | B 131 | Nicolas Tarantino | Exactly Solvable Models for Topological Phases of Matter
10 July 2017 | C 133 | Rasmus Larsen | Description of Gauge Theory Phenomena from Topological Objects
3 July 2017 | MT 6125 | Xinyu Zhang | Exploring the BPS/CFT Correspondence
14 June 2017 | S 141 | Arthur Zhao | Ionic Dynamics in Molecular Strong Field Ionization
13 June 2017 | MT 6125 | Harikrishnan Ramani | Higgs Collider Phenomenology: Important Backgrounds, Naturalness Probes and the Electroweak Phase
24 May 2017 | MT 6125 | Yihong Wang | Applications of 2D CFT: Entanglement and Soft Theorem
12 May 2017 | MT 6125 | Naveen Prabhakar | Nonperturbative Studies in Supersymmetric Field Theories via String Theory
12 May 2017 | MT 6125 | Martin Polacek | Aspects of T-Dually Extended Superspaces
8 March 2017 | C 133 | Yachao Qian | Spin and QCD Instanton and Stringy Pomeron
26 January 2017 | C 133 | Aleksas Mazeliauska | Fluctuations in Ultra-Relativistic Heavy Ion Collisions
30 November 2016 | P 119 | Fen Guan | Effect of Strain on the Mechanical and Transport Properties of Graphene
14 November 2016 | Laufer Center 101 | Michael Hazoglou | Topics in Statistical Physics: Protein Stability, Non-Equilibrium Thermodynamics and Bibliometrics
2 November 2016 | Laufer Center 101 | Elizaveta Guseva | Physical Polymerization Mechanisms in the Chemistry-to-Biology Transition
28 October 2016 | B 131 | Daniel Elton | Understanding the Dielectric Properties of Liquid Water
15 August 2016 | ESS 450 | Adam Jacobs | The Explosive Possibilities of Little Dwarfs: Low-Mach Number Modeling of Thin Helium Shells on Sub-Chandrasekhar Mass White Dwarfs
9 August 2016 | ESS 450 | Maximilian Katz | White Dwarf Mergers on Adaptive Meshes
8 August 2016 | P 119 | John Schneeloch | Investigations in the Crystal Growth and Neutron Scattering of Superconductors and Relaxor Ferroelectrics
5 August 2016 | MT 6125 | Chia-Yi Ju | Some Applications of Superspace
4 August 2016 | P 119 | Sooraj Radhakrishnan | Study of Long-Range Azimuthal and Longitudinal Correlations in High Energy Nuclear Collisions at the LHC Using the ATLAS Detector
3 August 2016 | P 119 | Yutong Pang | New Designs and Characterization Techniques for Thin-Film Solar Cells
2 August 2016 | D 122 | Angshuman Zaman | Study of Vector Boson Scattering in the Semi-Leptonic Final State
20 July 2016 | B 131 | Gustavo Monteiro | Anomalous Transport in Chiral Systems
15 July 2016 | MT 6125 | Michael Spillane | Studies of Entanglement Entropy, and Relativistic Fluids for Thermal Field Theories
11 July 2016 | P 119 | Omer Habib Rahman | Funneling Electron Beams from Gallium Arsenide Photocathodes
5 July 2016 | ESS 450 | Mathew Madhavacheril | Exploring the Dark Universe with Cosmic Microwave Background and Optical Data
17 June 2016 | P 119 | Zhedong Zhang | Theories and Applications of Topological Insulators
19 May 2016 | P 119 | Xu-Gang He | Theories and Applications of Topological Insulators
12 May 2016, 10 a.m. | B 131 | Mohammed Yusuf | Control of Graphene Through Ferroelectric Switching
9 May 2016, 10.00 a.m. | P 119 | Ahsan Ashraf | Nanoscale Spatial Inhomogeneity in Photovoltaic Devices
6 May 2016, 9.30 a.m. | B 133 | Benjamin Bein | In-Situ X-Ray Diffraction, Atomic Force Microscopy and Photocatalytic Characterization of Ferroelectric Perovskite Surfaces
5 May 2016, 11.30 a.m. | MT 6-125 | Colin West | Application of Tensor Network Algorithms in Quantum Many-Body Physics
5 May 2016, 10.00 a.m. | MT 6125 | Andrea Massari | Searching for Dark Matter with the Fermi-LAT and Through New Experimental Ideas
4 May 2016, 10.30 a.m. | MT 6-125 | Yiming Zhong | Searching for Dark Sectors
26 April 2016, 9.30 a.m. | P 119 | Tianmu Xin | Electron Source Based on Superconducting RF
13 April 2016, 2 p.m. | C 120 | Jiayin Sun | Measurement of Dielectron Invariant Mass Spectra in Au+Au Collisions at $\sqrt{s}$ = 200 GeV with HBD in PHENIX
18 January 2016, 10 a.m. | S 141 | Peter Sandor | Molecular Strong Field Ionization Viewed with Photo-Electron Velocity Map Imaging
13 December 2015, 11.00 a.m. | D 122 | Zaman Aungshuman | Study of Vector Boson Scattering in the Semi-Leptonic Final State
10 December 2015, 3.30 p.m. | B 131 | Jian Liu | A First-Principles Study of Structural, Electronic and Optical Properties of GaN, ZnO and $(GaN)_{1-x}(ZnO)_x$
7 December 2015, 10.00 a.m. | P 119 | Daniel McNally | Strong Electronic Correlations in Manganese Pnictide Compounds
4 December 2015, 11.00 a.m. | P 119 | Cong Chen | Non-adiabatic Dynamics of Gene Regulatory Network
22 September 2015, 1.00 p.m. | B 131 | Piranavan Kumaravadivel | Quantum Transport in Ballistic Graphene Devices
16 September 2015, 2.00 p.m. | ESS 450 | Rahul Patel | Census of Warm Debris Disks in the Solar Neighborhood from WISE and Hipparcos
19 August 2015, 1.00 p.m. | ESS 450 | Melissa Louie | Evolution of Gas Across Spiral Arms in the Whirlpool Galaxy
13 August 2015, 10.00 a.m. | D 121 | Angel Campoverde | Search for Gravitons Decaying to Vector Bosons in Hadronic Final States in Proton-Proton Collisions Collected with the ATLAS Detector
3 August 2015, 11.00 a.m. | D 122 | Jay Hyun Jo | Measurement of the Inclusive Charged Current $\nu_e$ Interaction Rate on Water with the T2K Neutrino Detector
30 July 2015, 1.00 p.m. | S 141 | John Elgin | Study of the Velocity Dependence of the Adiabatic Rapid Passage (ARP) Optical Force in Helium
29 July 2015, 10.00 a.m. | S 141 | Jeremy Reeves | Dynamics of Matter Waves in Optical Lattices
21 July 2015, 1.30 p.m. | MT 6125 | Ricardo Vaz | Resurgence and the Large N Expansion
12 June 2015, 11.00 a.m. | MT 6125 | Wolfger Peelaers | Exact Results in Supersymmetric and Superconformal Quantum Field Theories
11 June 2015, 2.30 p.m. | MT 6125 | Madalena Duarte de Almeida Lemos | Exploring the Space of Superconformal Field Theories
10 June 2015, 11.00 a.m. | MT 6125 | Yiwen Pan | 5d N=1 Supersymmetry and Contact Geometry
5 June 2015, 4.00 p.m. | MT 6125 | Mao Zeng | QCD Factorization and Effective Field Theories at the LHC
4 June 2015, 11.00 a.m. | B 131 | Andrey Gromov | Geometric Aspects of Quantum Hall States
26 May 2015, 3.00 p.m. | B 131 | Shawn Pollard | Understanding Magnetic Vortex Dynamics and Ordering in Artificial Spin Ices with Transmission Electron Microscopy
13 May 2015, 11.00 a.m. | MT 6125 | Jun Nian | Some Studies on Partition Functions in Quantum Field Theory and Statistical Mechanics
8 May 2015, 11.00 a.m. | D 122 | Karen Chen | Measurement of WWW Production Cross Section in Proton-Proton Collisions at $\sqrt{s}$ = 8 TeV with the ATLAS Detector and Limits on Anomalous Triple-Gauge-Boson Couplings
30 April 2015, 11.00 a.m. | MT 6125 | Tyler Corbett | Effective Lagrangians for Higgs Physics
20 April 2015, 2.00 p.m. | MT 6125 | You Quan Chong | Algebraic Bethe Ansatz and Tensor Networks
3 April 2015, 2.00 p.m. | P 119 | Andrey Elizarov | Advances in Theory of Coherent Electron Cooling
17 March 2015, 2.00 p.m. | MT 6125 | Pin-Ju Tien | Investigating Electroweak Physics at the Large Hadron Collider
11 December 2014, 9.00 a.m. | B-131 | Wei Wu | Potential and Flux Field Landscape Theory of Spatially Inhomogeneous Non-Equilibrium Systems
21 November 2014, 1.30 p.m. | MT-6125 | Michael Assis | Integrability and Non-integrability in the Ising Model
21 November 2014, 10.00 a.m. | S-141 | Christopher Corder | Optical Forces on Metastable Helium
23 October 2014, 10.30 a.m. | P-119 | Nathan Cook | Alternative Approaches to Rapid Acceleration of Ion Beams: Harmonic Ratcheting for Fast RF Acceleration and Laser Driven Acceleration of Gas Jet Targets
23 September 2014, 8.30 a.m. | B-131 | Hyejin Ryu | Phase Separation and Neighboring Ground States of Superconductivity in $K_xFe_{2-y}Se_2$
11 August 2014, 2 p.m. | MT 6-125 | Hong Zhang | QCD Factorization for Heavy Quarkonium Production and Fragmentation Functions
11 August 2014, 10.00 a.m. | D-122 | David Puldon | Measurement of the WW and WZ Production Cross Section in the Semi-Leptonic Final State Using $\sqrt{s}$ = 7 TeV pp Collisions with the ATLAS Detector
7 August 2014, 4.30 p.m. | B-131 | Heli Vora | Bolometric Effect and Phonon Cooling in Graphene-Superconductor Junctions (Postdoc, NIST Boulder)
4 August 2014, 9.30 a.m. | B-131 | Betul Pamuk | Nuclear Quantum Effects in Ice Phases and Water from First Principles Calculations
21 July 2014, 11.00 a.m. | C-133 | Frasher Loshaj | Real-Time Dynamics of the Confining String
26 June 2014, 2.00 p.m. | Math 6-125 | A. Ozan Erdogan | Quantum Field Theory in Coordinate Space
19 June 2014, 11.00 a.m. | D-122 | Karen Gilje | Neutral Current Single $\pi^0$ Production Rate Measurement On-Water Using the $\pi^0$ Detector in the Near Detector of the T2K Experiment
19 May 2014, 11.00 a.m. | C-120 | Ciprian Gal | Measuring the Anti-quark Contribution to the Proton Spin Using Parity Violating W Production in Polarized Proton-Proton Collisions
14 May 2014, 11.00 a.m. | C-120 | Andrew Manion | Double Longitudinal Helicity Asymmetries in Pion Production from Proton Collisions, Studies of Relative Luminosity Determination, and the Impact on Determination of the Gluon Spin in the Proton
12 May 2014, 1.00 p.m. | Math 6-125 | Dharmesh Jain | The Case of Extended Supersymmetry and A Study in Superspace
8 May 2014, 10.00 a.m. | Math 6-125 | Sujan Dabholkar | Exploring Warped Compactifications of Extra Dimensions
1 May 2014, 2.00 p.m. | P 119 | Jaehyung Choi | Applications of Physics and Geometry to Finance
30 April 2014, 1.00 p.m. | D 122 | Josh Hignight | Observation of $\nu_e$ Appearance from an Off-Axis $\nu_\mu$ Beam Utilizing Neutrino Energy Spectrum
30 April 2014, 12.00 p.m. | Math 6-125 | Matthew von Hippel | Explorations in Planar N=4 Super Yang-Mills
22 April 2014, 10.30 a.m. | Math 6-125 | Stanislav Srednyak | Aspects of Perturbative QFT
15 April 2014, 1.30 p.m. | C 133 | Yi Gu | Measurement of the Azimuthal Anisotropy for Particle Identified Charged Hadrons in Au+Au Collisions via Long-Range Two-Particle Correlation Method at $\sqrt{s_{NN}}$ = 200, 62.4 and 39 GeV
14 April 2014, 2.30 a.m. | C 120 | Haijiang Gong | Measurement of Direct Photons in Ultra-Relativistic Au+Au Collisions
09 December 2013, 1.00 p.m. | B 131 | Chia-Hui Lin | Translational Symmetry Breaking in Materials: First Principles Wannier Function Study
06 December 2013, 3.00 p.m. | S 141 | Yuan Sun | Stimulated Raman Adiabatic Passage between Metastable and Rydberg States of Helium
13 November 2013, 11.00 a.m. | C 120 | Richard Petti | Low Momentum Direct Photons as a Probe of Heavy Ion Collisions
09 August 2013, 4.00 p.m. | P 119 | Tin Yau Pang | Study of Topological Properties of Bacterial Metabolic Network
05 August 2013, 2.00 p.m. | D 122 | Soumya Mohapatra | Measurement of the Azimuthal Anisotropy for Charged Particle Production in Pb+Pb Collisions at $\sqrt{s_{NN}}$ = 2.76 TeV and in p+Pb Collisions at $\sqrt{s_{NN}}$ = 5.02 TeV with the ATLAS Detector at the LHC
26 July 2013, 2.00 p.m. | MT 6-125 | Patricio Marcos Crichigno | Aspects of Supersymmetric Field Theories and Complex Geometry
25 July 2013, 3.00 p.m. | B131 | Liusuo Wu | Quantum Critical Behavior in Magnetic Systems: $Yb_3Pt_4$, $YFe_2Al_{10}$, $Yb_2Pt_2Pb$
23 July 2013, 1.00 p.m. | MT 6-125 | Melvin Elroy Irizarry-Gelpi | Eikonal Scattering at Strong Coupling
23 July 2013, 11.00 a.m. | MT 6-125 | Raul Santos | Entanglement in Low Dimensional Theories
23 July 2013, 11.00 a.m. | C133 | Alexander Stoffers | Holographic Pomeron
22 July 2013, 2.00 p.m. | C133 | Savvas Zafeiropoulos | Chiral Random Matrix Theories for Lattice QCD Dirac Operators
19 July 2013, 10.00 a.m. | C133 | Mario del Pilar Staig Fernandez | A Look at Heavy Ion Collisions Through the SO(3)-Invariant Flow
15 July 2013, 2.00 p.m. | B131 | Qiang Deng | Quantum Computation and Quantum Measurement with Mesoscopic Superconducting Structures
10 July 2013, 11.00 a.m. | C133 | Li Yan | A Hydrodynamic Analysis of Collective Flow in Heavy-Ion Collisions
21 June 2013, 10.00 a.m. | MT 6-125 | Pedro Liendo | Uncovering the Structure of (Super-)Conformal Field Theories
18 June 2013, 10.00 a.m. | D122 | Rafael Coelho Lopes de Sá | Measurements of the W Boson Mass with the D0 Detector
25 April 2013, 3.00 p.m. | C120 | Nicole Apadula | Single Electrons from Decays of Heavy Quarks Produced in Cu+Cu Collisions at the Relativistic Heavy Ion Collider (RHIC)
11 April 2013, 3.00 p.m. | B131 | Sara Callori | $PbTiO_3$ Based Ferroelectric Superlattices with Conventional and Novel Dielectric Components
8 April 2013, 2.00 p.m. | C 120 | Sook Hyun Lee | Measurement of Cross Section and Double Longitudinal Asymmetries in $\pi$ Production to Constrain the Gluon Polarization Contribution to the Proton Spin
S 141 Dominik Geissler Insight into Melecular Dynamics and Strong Field Ionization gained by Belocity Map Imaging and Ultrafast Pulse Shaping 29 January 2013, 11.00 a.m. ESS 450 Constantinos Constantinou Thermal Effects in Supernova Matter 18 December 2012, 10.00 a.m. B 131 Nikita Simonian Design and Simulation of Single-Electron Molecular Devices 17 December 2012, 10.30 a.m. D 122 Wanyu Ye Search for the Standard Model Higgs Boson at D0 in the$\mu+\tau$(hadrons) + 2 jets Final State 14 December 2012, 3.30 p.m. B 131 John Sinsheimer Engineering Enhanced Piezoelectric Response in Ferroelectric Superlattices 11 September 2012, 10.45 a.m. MT 6-125 Francis Paraan Some Results on One-Dimensional Models with Broken and Deformed Symmetries 7 August 2012, 3.30 p.m. MT 6-125 Wenbin Yan The Spectrum of Superconformal Theories 7 August 2012, 2.00 p.m. C 120 Jason Kamin A Search for Charm and Beauty in a Very Strange World 3 August 2012, 10.00 a.m. D 122 Xiaoyang Gong Azimuthal Anisotropy Measurement of Neutral Pion and Inclusive Charge Hadron Production in Au+Au Collisions at \sqrt_NN = 62 and 39 GeV 3 August 2012, 10.00 a.m. C 133 Juhee Hong Transport Processes in High Temperature QCD Plasmas 23 July 2012, 2.00 p.m. ESS 450 Brendan Krueger Muidimensional Simulations of Type Ia Supernovae and Classical Novae 9 July 2012, 4.00 p.m. S-140 Daniel Stack Optical Forces from Periodic Adiabatic Rapid Passage Sequences in Metastable Helium 6 July 2012, 10.00 a.m. B 131 Sriram Ganeshan Quantum effects in condensed matter systems in 1, 2 and 3 dimensions 26 June 2012, 10.00 a.m. MT 6-125 Prerit Jaiswal Higgs Physics in Supersymmetric Models 20 June 2012, 2.00 p.m. D 122 Burton Dewilde A Search for Second-generation Leptoquarks in pp Collisions at \sqrt s = 7 TeV with the ATLAS Detector 18 June 2012, 11.00 a.m. S-141 Bryce Gadway Matter-wave dynamics in tailored optical and atomic lattices 14 June 2012, 2.00 p.m. ESS 450 Yeuhwan Lim Theory of Nuclear Matter for Neutron Stars and Supernovae 14 May 2012, 9.30 a.m. D-122 Dmitry Beznosky Data to NEUT and GENIE MC Generators Prediction CC1\pi+/CCQE Ratio Comparison for Neutrino Interactions with T2K PD Detector as input to T2K 11 May 2012, 11.30 a.m. B-131 Saul Lapidus Powder Diffraction Tells You What Your Sample Really Is: Case Studies 8 May 2012, 2.00 p.m. D-122 John Stupak III A Search for First Generation Leptoquarks in \sqrt s = 7 TeV pp Collisions with the ATLAS Detector at the LHC 7 May 2012, 10.00 a.m. B-131 Htay Hlaing Integration of Nanostructured Semiconducting/Conducting Polymers in Organic Photovoltaic Devices 4 May 2012, 12.30 p.m. C-133 Huan Dong Nuclear matter Equation of State and Brown-Rho scaling 3 May 2012, 11.00 a.m. S-141 Marija Kotur Strong-field Dissociative Ionization as a Probe of Molecular Dynamics and Structure 26 April 2012, 4.00 p.m. Math 6-125 Yan Xu Potts Model and Generalizations: Exact Results and Statistical Physics 17 April 2012, 12.30 p.m. D-128 Jeremiah Jet Goodson Search for Supersymmetry in States with Large Missing Transverse Momentum and Three Leptons including a Z-boson 17 Februari 2012, 10.00 a.m. D-122 Glenn Lopez Measurement of the Single Neutral Pion Production Cross Section in Neutral-Current Neutrino Interactions in the T2K Pi-zero Detector 31 January 2012, 9.00 a.m. S-141 Chien-hung Tseng Ultrafast Coherent Control Spectroscopy 19 December 2011, 10.00 a.m. C-133 Julia Gray Dijet Angular Decorrelation with the ATLAS Detector at the LHC 14 December 2011, 2.00 p.m. 
B-131 Adrien Poissier Theoretical Aproach of Water/Surfaces Interactions 28 November 2011, 2.30 p.m. C-133 Lee Hammons A Study of Higher-Order-Mode Damping in the Superconducting Energy Recovery LINAC at BNL 6 September 2011, 3.00 p.m. C-120 Megan Connors Direct Photon Tagged Jets oin 200 GeV Au+Au Collisions at PHENIX 27 September 2011 C-120 Harry William Joseph Themann A Measurement of Electrons From Heavy Quarks in p p Collisions at sqrt{s} = 200 GeV/c2 ) 6 September 2011, 3.00 p.m. D-120 Regina Caputo A Search for First Generation Leptoquarks at the ATLAS Detector 11 August 2011, 2.00 p.m. ESS-450 Joshua Edward Schlieder New Low-Mass Members of Nearby Young Moving Groups 9 August 2011, 2.00 p.m. S-141 Daniel A. Pertot Two-Component Bosons in State-Dependent Optical Lattices 8 August 2011, 1.00 p.m. S-141 Abhijit Gadde Aspects of Superconformal Field Theories 1 August 2011, 2.00 p.m. B-131 Nathan Borggren Probabilistic and Flux Landscape of the$\lambda\$ Phage Genetic Switch 1 August 2011, 1.00 p.m. B-131 Chen-Hao Chen Jet-Medium Interaction in Quark-Gluon Plasma 28 July 2011, 1.00 p.m. B-131 Tom Berlijn Effects of Disordered Dopants on The Electronic Structure of Functional Materials: Wannier Function-Based First Principles Methods for Disordered Systems 27 July 2011, 1.00 p.m. ESS-450 Christopher Michael Malone Multidimensional Simulations of Convection Preceding a Type I X-ray Burst 15 July 2011, 2.30 p.m. ESS-450 Tatjana Vavilkin Properties and Distribution of Luminous Stellar Clusters in a Large Sample of Luminous Infrared Galaxies 6 July 2011, 2.00 p.m. C-120 Sarah Catherine Campbell Dielectron Mass Spectra in \sqrt{s_{NN}} =200 GeV Cu Cu Collisions at PHENIX 22 June 2011, 2.00 p.m. B-131 Manas Kulkarni Hydrodynamics and transport in low-dimensional interacting systems 8 June 2011, 2.30 p.m. ESS-450 Aaron Jackson Exploring Systematic Effects in Thermonuclear Supernovae 24 May 2011, 2.00 p.m. D-122 Kathryn Tschann-Grimm Search for the Standard Model Higgs Boson at D0 in the final state tau tau jet 19 May 2011, 2.00 p.m. Math Tower 6-125 Chee Sheng Fong Soft Leptogenesis as a Viable Model of Barygenesis 16 May 2011, 10.00 a.m. C-120 John Matthew Durham Cold Nuclear Matter Effects on Heavy Quarks at RHIC 12 May 2011, 3.00 p.m. D-122 Stephen Webb Theoretical Considerations for Coherent Electron Cooling 10 May 2011, 10.30 a.m. B-131 Jue Wang First Principle Study of Water: From Fundamental Properties to Phtotocatalytic Reactions 5 May 2011, 1.00 p.m. Math Tower 6-125 Ning Chen A Study of Beyond Standard Model Physics at TeV Scale, LHC Signals and Dark Matter 2 May 2011, 3.00 p.m. B-131 Li Li Theoretical and Computational Studies Related to Solar Water Splitting with Semiconductor Alloys 25 April 2011, 2.30 p.m. C-133 Jie Ren Physically and Logically Reversible Superconducting Circuits 25 April 2011, 1.00 p.m. B-131 Megumi Kinoshita Optoelectronics with Carbon-Nanotube Devices 30 March 2011, 10.15 a.m. B-131 Yan Zhang Electronic Transport Properties of Semiconductor Nanostructures 28 March 2011, 2 p.m.. S-141 Xiaoxu Lu Excitation of Helium to Rydberg States Using STIRAP 24 March 2011, 10 a.m.. C-120 Zvi Citron Probing the Nucleus with d+Au Cillisions at RHIC 16 February 2010, 1.30 p.pm B131 Zhongkui Tan Experimental Study of Transport through Few-nm Metal Oxide Tunnel Barriers 15 December 2010, 11 a.m.. S-141 Hyunoo Shim Dynamics of Two-Component Bose-Einstein Condensates in an Optical Lattice 12 November 2010, 1 p.m.. 
ESS 450 Jacqueline Fahertie The Brown Dwarf Kinematics Project (BDKP) 20 August 2010, 10.30 p.m.. Math Tower, 6-125 Itai Ryb Generalized Isometries in Superspace 13 August 2010, 10.00 a.m. Physics, C133 Prasad Hegde Charge Fluctuations in Lattice QCD with Domain-Wall Fermions 6 August 2010, 10.00 a.m. Physics, C133 Rui Wei High p_T Azimuthal Anisotropy in Au+Au Collisions at \sqrt s_{NN} = 200 GeV 4 August 2010, 10.30 a.m. Physics, D122 Johanna Nelson X-ray diffraction microscopy on frozen hydrated specimens 3 August 2010, 4.30 p.m. Physics, D122 Christian Holzner Hard X-ray Phase Contrast Microscopy Techniques and Applications 30 July 2010, 4.30 p.m. Physics, C133 Clint Young Charmonium in strongly-coupled quark-gluon plasma 30 July 2010, 2.00 p.m. Physics, S141 Jason Reeves Neutral Atom Lithography Using the 389 nm Transition in He 26 July 2010, 12.30 p.m. Physics, B131 Lei Huang Understanding Nanoscale Magnetization Reversal and Spin Dynamics by Using Advanced Transmission Electron Microscopy 26 July 2010, 10.00 a.m. Physics, D122 Jan Steinbrener X-ray Diffraction Microscopy: Computational Methods and Scanning-type Experiments 17 June 2010, 1.30 p.m. Physics, D122 Feng Guo Ratio Method of Measuring the Mass of the W-Boson 15 June 2010, 1.30 p.m. Physics, D128 Stephen Clow Strong Field Control of Multilevel Quantum Systems 13 May 2010, 1.00 p.m. Math Tower, 6-125 Leandro Almeida Threshold Resummation in Pair Production 11 May 2010, 12.00 p.m. Math Tower, 6-125 Elli Pomoni AdS/CFT beyond the N=4 SYM paradigm 7 May 2010, 10.00 a.m. Math Tower, 6-125 Ilmo Sung Heavy Quarks and Interjet Radiation 4 May 2010, 1.00 p.m. Physics, B131 Ping Lin Quantum Transport in Electron Fabry-Perot Interferometers in Quantum Hall Regime 31 March 2010, 12.30 p.m. Physics, C133 Shu Lin Heavy Ion Collisions from AdS/CFT Correspondence 16 February 2010, 1.30 p.m. Physics, B131 Zhongkui Tan Experimental Study of Transport through Few-nm Metal Oxide Tunnel Barriers T2K Pizero Detector 10 December 2009, 10.00 a.m. Physics, D122 Le Trung Event Reconstruction and Energy Callibration Using Cosmic Muons for the T2K Pizero Detector 3 December 2009, 3.00 p.m. Physics, D122 Xiaojing Huang Cryo Soft X-ray Diffraction Microscopuy with Biological Specimens 3 December 2009, 11.15 a.m. Physics, B133 Kevin Stone Structual Studies from Powder Diffraction 1 December 2009, 2 p.m. Physics, P119 Cosmin Blaga Atoms and Molecules in Strong Mid-infrared Laser Fields 19 November 2009, 4.00 p.m. Chemistry, 410 David Lepzelter Noise and Oscillation in Simple Gene Networks 11 November 2009, 11.00 am Mathematics, 6-125 Peng Dai World Graph Approach to Amplitudes 22 October 2009, 1.00 pm Physics, C122 Sung Tae Cho Classical Stongly Coupled Quark-Gluon Plasma 24 August 2009, 1.00 pm Physics, C120 Michael McCumber Fast Parton Interactions with Hot Dense Matter 10 August 2009, 12.30 pm Physics, D122 Emanuel Strauss From ZZ to ZH : How Low Can These Cross Sections Go or Everybody, Let's Cross Section Limbo! 
10 August 2009, 10.00 am Physics, D122 Jun Guo A Precision Measurement of the W Boson Mass 10 August 2009, 10.00 am Javits 223 Luigi Longobardi Studies of Quantum Transitions of Magnetic Flux in a rf SQUID Qubit 6 August 2009, 10.00 am Physics, B131 Philip Schiff Low temperature thermal conductivity in a d-wave superconductor with coexisting order parameters 4 August 2009, 10.00 am Physics, B131 Xiao Shen Theory of ZnO and GaN: Nanostructures, Surfaces and Heterogeneous Photo-catalysis 23 July 2009, 10.00 am Physics, C133 Enrique Morenos Mendez Black-Hole Binaries As Relics Of Gamma-Ray Burst / Hypernova Explosions This web page is for Physics and Astronomy faculty, students hosts and guests participating in the visit of prospective graduate students. Updated 1/27/2012 by Jacobus Verbaarschot.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8872131109237671, "perplexity": 15565.329555611412}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221216333.66/warc/CC-MAIN-20180820101554-20180820121554-00630.warc.gz"}
https://www.physicsforums.com/threads/not-quite-clear-in-application-of-fourier-series.794379/
# Not quite clear in application of Fourier series

- Thread starter: cgw

I am not quite clear on the use of Fourier series to solve the Schrödinger equation. Can you point me to a source of some simple one dimensional examples?
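For what it's worth, the standard simple one-dimensional example is the particle in an infinite square well, where solving the time-dependent Schrödinger equation is literally a Fourier sine series: expand the initial wavefunction in the well's eigenfunctions and let each coefficient rotate with its own energy phase. A minimal numerical sketch (my own illustration, not from the thread; the tent-shaped initial state and the units ħ = m = L = 1 are arbitrary choices):

```python
import numpy as np

# Infinite square well on [0, L]: eigenfunctions phi_n(x) = sqrt(2/L) sin(n pi x/L),
# energies E_n = (n pi)^2 / 2 in units with hbar = m = L = 1.
L, N = 1.0, 50
x = np.linspace(0.0, L, 1001)

def phi(n):
    return np.sqrt(2.0 / L) * np.sin(n * np.pi * x / L)

E = (np.arange(1, N + 1) * np.pi) ** 2 / 2.0

# Arbitrary normalized initial state (a "tent" function).
psi0 = np.minimum(x, L - x)
psi0 = psi0 / np.sqrt(np.trapz(psi0**2, x))

# Fourier (sine-series) coefficients c_n = <phi_n | psi0>.
c = np.array([np.trapz(phi(n) * psi0, x) for n in range(1, N + 1)])

def psi(t):
    """psi(x, t) = sum_n c_n exp(-i E_n t) phi_n(x)."""
    return sum(c[n - 1] * np.exp(-1j * E[n - 1] * t) * phi(n)
               for n in range(1, N + 1))

# Unitarity check: the norm stays 1 as the state evolves.
print(np.trapz(np.abs(psi(0.7))**2, x))   # ~1.0
```

Any introductory quantum mechanics text's treatment of the particle in a box works through exactly this expansion.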
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8250985741615295, "perplexity": 9701.67409935949}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178355944.41/warc/CC-MAIN-20210226001221-20210226031221-00420.warc.gz"}
https://www.lmfdb.org/L/rational/8/280%5E4/1.1
## Results (22 matches)

| Label | $\alpha$ | $A$ | $d$ | $N$ | $\chi$ | $\mu$ | $\nu$ | $w$ | prim | $\epsilon$ | $r$ | First zero | Origin |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 8-280e4-1.1-c0e4-0-0 | 0.373 | 0.000381 | 8 | $2^{12} \cdot 5^{4} \cdot 7^{4}$ | 1.1 | | 0.0, 0.0, 0.0, 0.0 | 0 | | 1 | 0 | 1.19131 | Modular form 280.1.c.a |
| 8-280e4-1.1-c1e4-0-0 | 1.49 | 24.9 | 8 | $2^{12} \cdot 5^{4} \cdot 7^{4}$ | 1.1 | | 1.0, 1.0, 1.0, 1.0 | 1 | | 1 | 0 | 0.369327 | Modular form 280.2.n.a |
| 8-280e4-1.1-c1e4-0-1 | 1.49 | 24.9 | 8 | $2^{12} \cdot 5^{4} \cdot 7^{4}$ | 1.1 | | 1.0, 1.0, 1.0, 1.0 | 1 | | 1 | 0 | 0.520589 | Modular form 280.2.bl.a |
| 8-280e4-1.1-c1e4-0-10 | 1.49 | 24.9 | 8 | $2^{12} \cdot 5^{4} \cdot 7^{4}$ | 1.1 | | 1.0, 1.0, 1.0, 1.0 | 1 | | 1 | 4 | 1.80849 | Modular form 280.2.bv.a |
| 8-280e4-1.1-c1e4-0-2 | 1.49 | 24.9 | 8 | $2^{12} \cdot 5^{4} \cdot 7^{4}$ | 1.1 | | 1.0, 1.0, 1.0, 1.0 | 1 | | 1 | 0 | 0.525921 | Modular form 280.2.bv.b |
| 8-280e4-1.1-c1e4-0-3 | 1.49 | 24.9 | 8 | $2^{12} \cdot 5^{4} \cdot 7^{4}$ | 1.1 | | 1.0, 1.0, 1.0, 1.0 | 1 | | 1 | 0 | 0.678926 | Modular form 280.2.bj.d |
| 8-280e4-1.1-c1e4-0-4 | 1.49 | 24.9 | 8 | $2^{12} \cdot 5^{4} \cdot 7^{4}$ | 1.1 | | 1.0, 1.0, 1.0, 1.0 | 1 | | 1 | 0 | 0.706594 | Modular form 280.2.bj.c |
| 8-280e4-1.1-c1e4-0-5 | 1.49 | 24.9 | 8 | $2^{12} \cdot 5^{4} \cdot 7^{4}$ | 1.1 | | 1.0, 1.0, 1.0, 1.0 | 1 | | 1 | 0 | 1.06518 | Modular form 280.2.bv.c |
| 8-280e4-1.1-c1e4-0-6 | 1.49 | 24.9 | 8 | $2^{12} \cdot 5^{4} \cdot 7^{4}$ | 1.1 | | 1.0, 1.0, 1.0, 1.0 | 1 | | 1 | 0 | 1.07820 | Modular form 280.2.bv.d |
| 8-280e4-1.1-c1e4-0-7 | 1.49 | 24.9 | 8 | $2^{12} \cdot 5^{4} \cdot 7^{4}$ | 1.1 | | 1.0, 1.0, 1.0, 1.0 | 1 | | 1 | 0 | 1.08428 | Modular form 280.2.q.d |
| 8-280e4-1.1-c1e4-0-8 | 1.49 | 24.9 | 8 | $2^{12} \cdot 5^{4} \cdot 7^{4}$ | 1.1 | | 1.0, 1.0, 1.0, 1.0 | 1 | | 1 | 4 | 1.48247 | Modular form 280.2.bj.b |
| 8-280e4-1.1-c1e4-0-9 | 1.49 | 24.9 | 8 | $2^{12} \cdot 5^{4} \cdot 7^{4}$ | 1.1 | | 1.0, 1.0, 1.0, 1.0 | 1 | | 1 | 4 | 1.68517 | Modular form 280.2.bj.a |
| 8-280e4-1.1-c2e4-0-0 | 2.76 | $3.38\times 10^{3}$ | 8 | $2^{12} \cdot 5^{4} \cdot 7^{4}$ | 1.1 | | 2.0, 2.0, 2.0, 2.0 | 2 | | 1 | 0 | 0.0273502 | Modular form 280.3.bi.a |
| 8-280e4-1.1-c2e4-0-1 | 2.76 | $3.38\times 10^{3}$ | 8 | $2^{12} \cdot 5^{4} \cdot 7^{4}$ | 1.1 | | 2.0, 2.0, 2.0, 2.0 | 2 | | 1 | 0 | 0.241217 | Modular form 280.3.c.e |
| 8-280e4-1.1-c2e4-0-2 | 2.76 | $3.38\times 10^{3}$ | 8 | $2^{12} \cdot 5^{4} \cdot 7^{4}$ | 1.1 | | 2.0, 2.0, 2.0, 2.0 | 2 | | 1 | 0 | 0.263552 | Modular form 280.3.bi.b |
| 8-280e4-1.1-c2e4-0-3 | 2.76 | $3.38\times 10^{3}$ | 8 | $2^{12} \cdot 5^{4} \cdot 7^{4}$ | 1.1 | | 2.0, 2.0, 2.0, 2.0 | 2 | | 1 | 0 | 0.426550 | Modular form 280.3.be.a |
| 8-280e4-1.1-c2e4-0-4 | 2.76 | $3.38\times 10^{3}$ | 8 | $2^{12} \cdot 5^{4} \cdot 7^{4}$ | 1.1 | | 2.0, 2.0, 2.0, 2.0 | 2 | | 1 | 0 | 0.557054 | Modular form 280.3.c.f |
| 8-280e4-1.1-c3e4-0-0 | 4.06 | $7.44\times 10^{4}$ | 8 | $2^{12} \cdot 5^{4} \cdot 7^{4}$ | 1.1 | | 3.0, 3.0, 3.0, 3.0 | 3 | | 1 | 0 | 0.394803 | Modular form 280.4.bg.a |
| 8-280e4-1.1-c5e4-0-0 | 6.70 | $4.06\times 10^{6}$ | 8 | $2^{12} \cdot 5^{4} \cdot 7^{4}$ | 1.1 | | 5.0, 5.0, 5.0, 5.0 | 5 | | 1 | 4 | 1.09797 | Modular form 280.6.a.h |
| 8-280e4-1.1-c5e4-0-1 | 6.70 | $4.06\times 10^{6}$ | 8 | $2^{12} \cdot 5^{4} \cdot 7^{4}$ | 1.1 | | 5.0, 5.0, 5.0, 5.0 | 5 | | 1 | 4 | 1.37505 | Modular form 280.6.a.g |
| 8-280e4-1.1-c7e4-0-0 | 9.35 | $5.85\times 10^{7}$ | 8 | $2^{12} \cdot 5^{4} \cdot 7^{4}$ | 1.1 | | 7.0, 7.0, 7.0, 7.0 | 7 | | 1 | 4 | 0.954482 | Modular form 280.8.a.b |
| 8-280e4-1.1-c7e4-0-1 | 9.35 | $5.85\times 10^{7}$ | 8 | $2^{12} \cdot 5^{4} \cdot 7^{4}$ | 1.1 | | 7.0, 7.0, 7.0, 7.0 | 7 | | 1 | 4 | 1.16106 | Modular form 280.8.a.a |
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9716721773147583, "perplexity": 326.2237390478739}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046153474.19/warc/CC-MAIN-20210727170836-20210727200836-00470.warc.gz"}
https://socratic.org/questions/how-do-you-simplify-y-2-2y-3-y-2-3y-2-and-what-are-the-restrictions
# How do you simplify (y^2 + 2y - 3) / (y^2 - 3y + 2) and what are the restrictions?

##### 1 Answer

Jun 4, 2015

You can factorise the numerator and the denominator of the fraction:

$= \frac{\left(y - 1\right) \left(y + 3\right)}{\left(y - 1\right) \left(y - 2\right)}$

Restrictions: $y \ne 1$ and $y \ne 2$

Then you can simplify to:

$= \frac{y + 3}{y - 2}$

graph{(x+3)/(x-2) [-32.47, 32.52, -16.23, 16.23]}
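A quick machine check of the factorisation (not part of the original answer, just an illustration using sympy):

```python
from sympy import symbols, factor, cancel

y = symbols('y')
print(factor(y**2 + 2*y - 3))                      # (y - 1)*(y + 3)
print(factor(y**2 - 3*y + 2))                      # (y - 2)*(y - 1)
print(cancel((y**2 + 2*y - 3) / (y**2 - 3*y + 2))) # (y + 3)/(y - 2)
```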
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 3, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9238961338996887, "perplexity": 5331.754850005178}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251681625.83/warc/CC-MAIN-20200125222506-20200126012506-00297.warc.gz"}
http://sharif.ir/~mtefagh/
# Mojtaba Tefagh

Assistant Professor
Sharif Optimization and Applications Laboratory
Department of Mathematical Sciences
Sharif University of Technology

## Contact

- Math 205, Department of Mathematical Sciences, Sharif University of Technology
- (+98) 21 6616 5617
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8417301177978516, "perplexity": 9774.595401783732}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320300810.66/warc/CC-MAIN-20220118092443-20220118122443-00359.warc.gz"}
http://www.koreascience.or.kr/article/JAKO201611059006962.page
# INEQUALITIES FOR THE (q, k)-DEFORMED GAMMA FUNCTION EMANATING FROM CERTAIN PROBLEMS OF TRAFFIC FLOW

- Nantomah, Kwara (Department of Mathematics, University for Development Studies, Navrongo Campus)
- Prempeh, Edward (Department of Mathematics, Kwame Nkrumah University of Science and Technology)
- Accepted: 2015.12.22
- Published: 2016.03.25

#### Abstract

In this paper, the authors establish some double inequalities concerning the (q, k)-deformed Gamma function. These inequalities emanate from certain problems of traffic flow. The procedure makes use of the integral representation of the (q, k)-deformed Gamma function.

#### Keywords

Gamma function; q-deformed Gamma function; k-deformed Gamma function; (q, k)-deformed Gamma function; q-integral; Inequality
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8818289637565613, "perplexity": 18022.22026491768}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540537212.96/warc/CC-MAIN-20191212051311-20191212075311-00010.warc.gz"}
https://okayama.pure.elsevier.com/en/publications/momentum-dependent-magnon-lifetime-in-the-metallic-noncollinear-t
# Momentum-Dependent Magnon Lifetime in the Metallic Noncollinear Triangular Antiferromagnet CrB2

Pyeongjae Park, Kisoo Park, Taehun Kim, Yusuke Kousaka, Ki Hoon Lee, T. G. Perring, Jaehong Jeong, Uwe Stuhr, Jun Akimitsu, Michel Kenzelmann, Je Geun Park

Research output: journal article (peer-reviewed); 2 citations (Scopus)

## Abstract

Noncollinear magnetic order arises for various reasons in several magnetic systems and exhibits interesting spin dynamics. Despite its ubiquitous presence, little is known of how magnons, otherwise stable quasiparticles, decay in these systems, particularly in metallic magnets. Using inelastic neutron scattering, we examine the magnetic excitation spectra in a metallic noncollinear antiferromagnet CrB2, in which Cr atoms form a triangular lattice and display incommensurate magnetic order. Our data show intrinsic magnon damping and continuumlike excitations that cannot be explained by linear spin wave theory. The intrinsic magnon linewidth Γ(q, E_q) shows very unusual momentum dependence, which our analysis shows to originate from the combination of two-magnon decay and the Stoner continuum. By comparing the theoretical predictions with the experiments, we identify where in the momentum and energy space one of the two factors becomes more dominant. Our work constitutes a rare comprehensive study of the spin dynamics in metallic noncollinear antiferromagnets. It reveals, for the first time, definite experimental evidence of the higher-order effects in metallic antiferromagnets.

- Original language: English
- Article number: 027202
- Journal: Physical Review Letters, Volume 125, Issue 2
- DOI: https://doi.org/10.1103/PhysRevLett.125.027202
- Published: Jul 10 2020
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9016333818435669, "perplexity": 9662.822544751934}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337287.87/warc/CC-MAIN-20221002052710-20221002082710-00177.warc.gz"}
http://tex.stackexchange.com/questions/13914/toc-numbering-problem/13915
# ToC numbering problem

My LaTeX document is acting strangely. Here is a simplified version of it:

    \documentclass{article}
    \begin{document}
    \tableofcontents
    \newpage
    \addcontentsline{toc}{part}{A Part of My Document}
    \include{includedfile}
    \end{document}

And in includedfile.tex:

    \section{My Section Title}
    Quack.

Clearly, in the table of contents, the heading for the part should precede the one for the section, but it doesn't! What's wrong?

## migrated from stackoverflow.com Mar 20 '11 at 5:51

The delaying issue several people have mentioned is that TeX delays all \write commands until \shipout time. If for some reason you need an immediate \write, you can use \immediate\write. To that end, here's a simple new macro that acts like \addcontentsline, but writes to the aux file immediately.

    \documentclass{article}
    % an \addcontentsline variant that writes to the aux file immediately
    \newcommand{\immediateaddcontentsline}[3]{%
      \begingroup
      \let\origwrite\write
      \def\write{\immediate\origwrite}%
      \addcontentsline{#1}{#2}{#3}%
      \endgroup
    }
    \begin{document}
    \tableofcontents
    \newpage
    \immediateaddcontentsline{toc}{part}{A Part of My Document}
    \include{includedfile}
    \end{document}

- This also solved a problem where \addcontentsline was not adding bookmark entries in the resulting PDF, unless there was additional text after the command. Bizarre. – jevon Oct 13 '11 at 2:07

This is a tricky little issue. It turns out that \include differs from \input in an important way; it doesn't just add a couple of \clearpages. I think the right solution is to make a custom \include command which functions almost like the usual one:

    \newcommand{\myinclude}[1]{\clearpage\input{#1}\clearpage}

When you use \addcontentsline, directly or indirectly, it writes a line on the aux file saying "write this and that to the toc file". Then it reads the aux file and follows that instruction. When you run latex again, the toc file has the right stuff in it and you get a nice table of contents. But the TeX \write command has some sort of delay to it (that I don't understand). When you use \addcontentsline several times in a row, it doesn't matter because they all go on the write stack in the right order. But here's the tricky part: when you use \include, it makes a separate aux file for the file you're including and immediately writes a command in the main aux file saying "go look at that other aux file for instructions" (with no weird delay). So if you use \include immediately after an \addcontentsline, the "go look at the other aux file" command gets written before the "write some stuff in the toc file" command. So all the contents entries from the included file get written first!

- This is what I ended up using. Thanks! – Ben Alpert Aug 3 '09 at 0:37

It works for me when I replace \include by \input. I think \include is for chapters (it forces a \clearpage or something like that), so I never use it in practice.

- I do actually want to have the included file on a separate page, though, and I'd also like to figure out why it doesn't work as written. – Ben Alpert Aug 1 '09 at 18:01
- You can add an explicit \clearpage or \cleardoublepage. The only thing you lose is that \include is for partial compilation (i.e. with \includeonly). As for why it does not work as written, I have no idea… experience with LaTeX shows that sometimes you probably don't want to understand :) – Damien Pollet Aug 1 '09 at 18:08

What if you replace \addcontentsline{toc}{part}{A Part of My Document} with \part{A Part of My Document}?

- Agreed. The usual sectioning commands call \addcontentsline, so it is generally not necessary to call it explicitly yourself. @Ben Alpert: If you have a special reason for making the explicit call, it might help to describe it. – dmckee Aug 2 '09 at 4:59
- @dmckee: The usual sectioning commands also produce other output. He probably doesn't want to have a separate page saying "Part I" that you flip through when you're reading. If you use \addcontentsline, the table of contents is the only place you'll get any output. – Anton Geraschenko Aug 2 '09 at 23:58

Try moving the \addcontentsline above the \tableofcontents. Updated: incorrect ordering occurs if \addcontentsline is on the same level as \include. A workaround is to have the \addcontentsline in the included file:

    \documentclass{article}
    \begin{document}
    \tableofcontents
    \newpage
    \include{includedfile}
    \include{some-other-file}
    \end{document}

Contents of includedfile.tex:

    \addcontentsline{toc}{part}{First Part of My Document}
    \section{My Section Title}
    Quack.

If you try the file

    \documentclass{article}
    \begin{document}
    \tableofcontents
    \newpage
    \part{A Part of My Document}
    \include{includedfile}
    \end{document}

you may get a clue as to what is happening. The \addcontentsline instruction is normally invoked automatically by the document sectioning commands... If you do not want a heading number (starred form) but you do want an entry in the .toc file, you can use \addcontentsline with or without \numberline ... (Mittelbach and Goossens (2004), see below). Hence, for example,

    \documentclass{article}
    \begin{document}
    \tableofcontents
    \newpage
    \part*{A Part of My Document}
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8882081508636475, "perplexity": 1871.206049857766}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696382360/warc/CC-MAIN-20130516092622-00046-ip-10-60-113-184.ec2.internal.warc.gz"}
http://wiki.freepascal.org/Mod
Mod

Mod (modulus) divides two numbers and returns only the remainder. For instance, the expression a := 13 mod 4; would evaluate to 1 (a=1), while b := 12 mod 4; would evaluate to 0 (b=0).

From the language reference: "The sign of the result of a Mod operator is the same as the sign of the left side operand of the Mod operator. In fact, the Mod operator is equivalent to the following operation:"

    I mod J = I - (I div J) * J

For example, c := -13 mod 4; results in c = -1 and c := 10 mod -3; results in c = 1. This is also what most other languages like C++ and Java do (see the note below).

From version 3.1.1 FreePascal also supports the mod operator for floating point values when you include the math unit. The precision used is the highest precision available for the platform. For instance, the expression a := 12.9 mod 2.2; would evaluate to 1.9. This is equivalent to

    I mod J = I - Int(I / J) * J

The result is the same as the fmod function for the highest precision available for the platform.

In older versions of FPC that support operator overloading you can add this feature yourself. Here's an example for double precision modulo:

    operator mod(const a, b: double) c: double; inline;
    begin
      c := a - b * Int(a / b);
    end;

note regarding Delphi compatibility

Delphi conforms to ISO 7185 Pascal instead of the de facto standard. This ISO standard states: "Evaluation of a term of the form x mod y is an error if y is less than or equal to zero; otherwise there is an integer k such that x mod y satisfies the following relation: 0 <= x mod y = x - k * y < y." That means that in Delphi c := 10 mod -3; results in an error and an exception will be thrown at run-time.
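For comparison with languages that use floored division (an aside, not part of the wiki page): Python's integer % follows the sign of the right operand, so it disagrees with FPC's mod exactly when the operands' signs differ, while a truncating version reproduces the FPC/C/Java behavior quoted above:

```python
import math

def trunc_mod(i, j):
    """FPC-style mod: I - (I div J) * J, with div truncating toward zero."""
    return i - int(i / j) * j

print(-13 % 4, trunc_mod(-13, 4))   # Python: 3, FPC-style: -1
print(10 % -3, trunc_mod(10, -3))   # Python: -2, FPC-style: 1
print(math.fmod(12.9, 2.2))         # ~1.9, matches FPC's floating-point mod
```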
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7487975358963013, "perplexity": 1344.6222677939159}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912202872.8/warc/CC-MAIN-20190323141433-20190323163433-00414.warc.gz"}
https://pretextbook.org/doc/guide/html/section-178.html
## Section 35.5 Unachievable Conversions

By authoring WeBWorK problems within PreTeXt you do not need to learn all the ins and outs of PGML markup, and you can concentrate on simply becoming proficient with PreTeXt. However, there are a few PreTeXt constructions which are not achievable in a WeBWorK problem for one reason or another. We list the exceptions here (illustrated after this list), and also try to use source-checking tools to alert you to any differences.

- Anything that is the numbered target of a cross-reference, such as a figure, may not be inside a WeBWorK exercise. The exercise may go on to have a life of its own independent of its parent PreTeXt project, and then such a number makes no sense.
- Certain aspects of specifying borders of a PreTeXt <tabular> are not realizable in a PGML table. Specifically,
  - Specifying column-specific top border attributes is not implemented.
  - Cell-specific bottom border attributes are not implemented.
  - medium and major table rule-thickness attributes will be handled as if they were minor.
- When constructing a list (<ul> or <ol>), specifying some number of columns (using the @cols attribute) will be ignored. PGML markup has no way to declare multicolumn lists.
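As a concrete illustration (a hypothetical PreTeXt fragment written to match the list above, not an example from the guide), the border and column markings here are the ones a PGML conversion would not honor:

```xml
<tabular>
  <col top="minor"/>             <!-- column-specific top border: not realizable in PGML -->
  <row>
    <cell bottom="minor">a</cell> <!-- cell-specific bottom border: not implemented -->
    <cell>b</cell>
  </row>
</tabular>

<ul cols="2">                     <!-- @cols is ignored: PGML has no multicolumn lists -->
  <li><p>first item</p></li>
  <li><p>second item</p></li>
</ul>
```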
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.15498197078704834, "perplexity": 2769.256050851415}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304600.9/warc/CC-MAIN-20220124185733-20220124215733-00489.warc.gz"}
http://blog.inferis.org/blog/2014/03/06/categories-for-at-protocols/
# Categories for @protocols

March 06th, 2014 · objc, development

Yup. Categories for protocols. You heard that right.

It started with me asking about it on twitter. This question caused much confusion. "How is that even possible?" "Categories are used to extend existing classes, surely how can you extend a protocol?" I also got the suggestion that protocols can extend other protocols. I knew about that, and it was not what I was looking for. I genuinely want categories for protocols. Let me explain.

## Categories and protocols

Suppose I have these two protocols:

```objc
@protocol EffectsView <NSObject>
@property (nonatomic, strong) NSArray* effects;
// more ...
@end

@protocol LayeredEffectsView <EffectsView>
@property (nonatomic, strong, readonly) UIView<EffectsView>* mainEffectsView;
@property (nonatomic, strong, readonly) UIView<EffectsView>* overlaidEffectsView;
// more ...
@end
```

Basically, the first one describes a view/object that has effects, the second one describes a view/object that has 2 layered effect views (the actual code is different but equivalent - I changed the original code to protect the innocent). My UI uses these two view types interchangeably. I do not want to use classes since I want to be free to embed any type of view, as long as it implements effects. Sometimes I want to have a layered effect view, sometimes I want to have a single effect view.

Now I also have a container view that handles these effects. But it takes a layered view (for reasons not disclosed here):

```objc
@interface EffectsContainerView
- (id)initWithEffectsView:(UIView<LayeredEffectsView>*)effectView;
@end
```

So when I want to use a simple effect view in this container view I need to wrap it into a layered view.

```objc
StandardEffectsView* standard = [StandardEffectsView new];
LayeredEffectsView* layers = [[LayeredEffectsView alloc] initWithMainEffectsView:standard];
EffectsContainerView* container = [[EffectsContainerView alloc] initWithEffectsView:layers];
```

While this works, it's a bit verbose (but hey, it's Cocoa, what did you expect). Still, while I was writing code I was thinking how useful it would be to be able to write:

```objc
EffectsContainerView* container = [[EffectsContainerView alloc] initWithEffectsView:[[StandardEffectsView new] layeredAsMain]];
```

Now I hear you say: you can do that! Of course I can. Just write a category on StandardEffectsView that handles that:

```objc
@interface StandardEffectsView (Layered)
- (UIView<LayeredEffectsView>*)layeredAsMain;
@end
```

Sure. But what if I have a SliderEffectsView and a ButtonEffectsView and a TableEffectsView? I now have to write the same code for each class.

- I can't make a common superclass to define the category on, because I want the effects on a UISlider, UIButton and UITableView, for example. This is also tedious and error prone since every new class I add that implements this protocol would need to add this method.
- I could make a category on UIView, but then all UIViews would gain this method, which is not what I want since the views on the LayeredEffectsView protocol are expected to implement EffectsView. So that wouldn't work, really.
- I could extend the EffectsView protocol to include the layeredAsMain method, but that would still require me to implement the same code on each class.

So I thought: wouldn't it be cool if you could create a category on a protocol? Something like:

```objc
@interface UIView<EffectsView> (Layered)
- (UIView<LayeredEffectsView>*)layeredAsMain;
@end
```

This would have all UIView subclasses conforming to the EffectsView protocol gain the layeredAsMain method.

But how would you implement it? A protocol is just that: a definition of how a class should act, and you don't have any specific instance to work with. But do we need one? Adding a category on a protocol would allow you to extend all classes conforming to the protocol, and you only have the protocol information to work with. But that's just fine. The implementation of our category method above would become:

```objc
@implementation UIView<EffectsView> (Layered)
- (UIView<LayeredEffectsView>*)layeredAsMain {
    // we could also use self.effects here, for example
    return [[LayeredEffectsView alloc] initWithMainEffectsView:self];
}
@end
```

Which works fine. We can use self because the protocol has to be implemented on an actual class, so at runtime an instance will be there. And that instance will conform to the protocol, so we can use it to its full potential. And I only have to implement it once, and all UIView classes with <EffectsView> would be convertible to a layered view with one short, simple method.

To elaborate a bit: in this case, self would be at least a UIView instance so we could use properties and methods from that class too, in addition to the protocol's stuff. In this case, we have an actual underlying class, but I guess it would work for "pure" protocols too:

```objc
@protocol EffectsView (Layered)
- (UIView<LayeredEffectsView>*)layeredAsMain;
@end
```

Everything conforming to this protocol would gain this (regardless of class). The confusing part becomes the implementation, because that would mean a .m file for the protocol category, but that's not too problematic. Just something we're not used to. There would be no class methods/properties to use, just protocol methods/properties.

I might hear you make a last complaint: "how would an object know what method to run, if more than one category supplies the same method?" The same problem arises with plain class categories, so this is as problematic as it is now. So really no larger problem than before.

## In closing

So yeah, categories for protocols. A generic way to extend a range of classes without having to resort to class hierarchies. Unconventional, I admit. For the moment, this isn't possible. The syntax described above doesn't work. Sure, I can work around it. Sure, it's a bit more verbose. Nothing much. But it would be cool to be able to do this, I think. :)
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.36192891001701355, "perplexity": 1849.021854508202}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988775.25/warc/CC-MAIN-20210507060253-20210507090253-00499.warc.gz"}
https://www.physicsforums.com/threads/pressure-with-gases.15869/
# Pressure with Gases

### Antepolleo (Mar 7, 2004)

Here's the problem:

4. A diving bell in the shape of a cylinder with a height of 2.10 m is closed at the upper end and open at the lower end. The bell is lowered from air into sea water (ρ = 1.025 g/cm³). The air in the bell is initially at 16.0°C. The bell is lowered to a depth (measured to the bottom of the bell) of 47.0 fathoms or 86.0 m. At this depth the water temperature is 4.0°C, and the bell is in thermal equilibrium with the water. (a) How high does sea water rise in the bell? (b) To what minimum pressure must the air in the bell be raised to expel the water that entered?

My question is, how are you supposed to figure this out without knowing the diameter of the bell?

### Antepolleo (Mar 8, 2004)

Hmm.. I tried an approach where I let the number of moles of the gas be constant, but it got me nowhere.

### Janitor (Mar 8, 2004)

My approach would be this...

P(z) = g·ρ·z + P_0

where P(z) is the pressure as a function of depth z below sea level, g is the acceleration of gravity, ρ (rho) is the density of water, and P_0 is the pressure at sea level (i.e. one atmosphere). You have been given ρ and z, and you can look up g and P_0, so you can solve for P(z).

Then if we make the assumption that none of the air in the cylinder dissolves in the water as the cylinder is lowered (number of moles is constant, as you say), we should have

P(z)·V(z)/T(z) = P_0·V_0/T_0

where V(z) is the volume of air in the cylinder at depth z, T(z) is the absolute temperature at depth z, V_0 is the volume of the entire cylinder, and T_0 is the absolute temperature at sea level. You can solve this for V(z), since you know all the other quantities. The height to which the water rises is

h = L·[1 - V(z)/V_0]

where L is the length of the cylinder. (Note that the cross-sectional area, and hence the diameter, cancels out of this ratio, which answers the original question.)

Last edited: Mar 8, 2004
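Plugging numbers into Janitor's relations is straightforward; here is a quick numerical check (my own sketch, with standard values assumed for g and atmospheric pressure, and ignoring the small difference between the depth of the air-water interface and the bottom of the bell):

```python
rho = 1025.0          # kg/m^3  (1.025 g/cm^3)
g   = 9.80            # m/s^2
P0  = 1.013e5         # Pa, one atmosphere
z   = 86.0            # m, depth to the bottom of the bell
L   = 2.10            # m, height of the bell
T0  = 16.0 + 273.15   # K, air temperature at the surface
Tz  = 4.0 + 273.15    # K, water temperature at depth

Pz = P0 + rho * g * z                 # pressure at depth z

# P0*V0/T0 = Pz*Vz/Tz; with a constant cross-section the volumes
# reduce to column heights, so the unknown diameter drops out.
Lz = L * (P0 / Pz) * (Tz / T0)        # height of the compressed air column
h  = L - Lz                           # (a) rise of the water inside the bell
print(f"(a) water rises about {h:.2f} m")            # roughly 1.9 m

# (b) to push the water back out, the air must match the pressure
# at the open bottom end, i.e. P(z).
print(f"(b) minimum air pressure about {Pz:.3g} Pa")  # roughly 9.7e5 Pa
```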
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9571967124938965, "perplexity": 817.560823060112}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128321961.50/warc/CC-MAIN-20170627235941-20170628015941-00606.warc.gz"}
https://arxiv-export-lb.library.cornell.edu/abs/2205.06689?context=stat
stat

Title: Heavy-Tail Phenomenon in Decentralized SGD

Abstract: Recent theoretical studies have shown that heavy-tails can emerge in stochastic optimization due to "multiplicative noise", even under surprisingly simple settings, such as linear regression with Gaussian data. While these studies have uncovered several interesting phenomena, they consider conventional stochastic optimization problems, which exclude decentralized settings that naturally arise in modern machine learning applications. In this paper, we study the emergence of heavy-tails in decentralized stochastic gradient descent (DE-SGD), and investigate the effect of decentralization on the tail behavior. We first show that, when the loss function at each computational node is twice continuously differentiable and strongly convex outside a compact region, the law of the DE-SGD iterates converges to a distribution with polynomially decaying (heavy) tails. To have a more explicit control on the tail exponent, we then consider the case where the loss at each node is a quadratic, and show that the tail-index can be estimated as a function of the step-size, batch-size, and the topological properties of the network of the computational nodes. Then, we provide theoretical and empirical results showing that DE-SGD has heavier tails than centralized SGD. We also compare DE-SGD to disconnected SGD where nodes distribute the data but do not communicate. Our theory uncovers an interesting interplay between the tails and the network structure: we identify two regimes of parameters (stepsize and network size), where DE-SGD can have lighter or heavier tails than disconnected SGD depending on the regime. Finally, to support our theoretical results, we provide numerical experiments conducted on both synthetic data and neural networks.

Subjects: Machine Learning (stat.ML); Machine Learning (cs.LG); Optimization and Control (math.OC)
Cite as: arXiv:2205.06689 [stat.ML] (or arXiv:2205.06689v2 [stat.ML] for this version)

Submission history
From: Yuanhan Hu
[v1] Fri, 13 May 2022 14:47:04 GMT (2446kb,D)
[v2] Mon, 16 May 2022 14:31:35 GMT (2446kb,D)
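To see the mechanism the abstract refers to in its simplest centralized, one-node form, here is a toy experiment (my own sketch, not the authors' code; the stepsize, horizon, and Hill-estimator settings are arbitrary choices): constant-stepsize SGD on one-dimensional Gaussian linear regression has iterates θ_{t+1} = (1 − η·x_t²)·θ_t + η·x_t·y_t, and the random multiplicative factor produces polynomially decaying tails whose index can be estimated from the largest iterates.

```python
import numpy as np

rng = np.random.default_rng(0)
eta, T = 0.5, 200_000          # constant stepsize, number of iterations

theta, tail = 0.0, []
for t in range(T):
    x = rng.normal()
    y = rng.normal()            # y = 0 * x + noise: the true parameter is 0
    grad = (theta * x - y) * x  # gradient of the squared loss on one sample
    theta -= eta * grad         # theta <- (1 - eta x^2) theta + eta x y
    tail.append(abs(theta))

# Hill estimator of the tail index from the k largest |theta_t|.
k = 1000
s = np.sort(tail)[-k:]
alpha_hat = 1.0 / np.mean(np.log(s[1:] / s[0]))
print(f"estimated tail index ~ {alpha_hat:.2f}")  # finite and small: heavy tails
```

With a small enough stepsize the same recursion concentrates like a Gaussian; the heavy tail appears only once the multiplicative factor 1 − η·x² fluctuates strongly, which is the regime the paper analyzes (and extends to networks of nodes).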
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9402021765708923, "perplexity": 1140.647424142469}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571222.74/warc/CC-MAIN-20220810222056-20220811012056-00567.warc.gz"}
http://tex.stackexchange.com/questions/95058/error-texstudio-could-not-start-command/95086
# Error: TeXstudio “Could not start command”

I just installed TeXstudio 2.5.2 and compiled the following

    \documentclass{article}
    \begin{document}
    \end{document}

but I get this error

    Error: Could not start the command: "/usr/texbin/pdflatex" -synctex=1 -interaction=nonstopmode "texstudio_qH5268".tex

I already installed MiKTeX, so why do I get this error?

- I'm having this error with TeXLive also. It happens when the editor does not find the bin files. You can do this: go to Options > Configure and insert the whole path for pdflatex. If you use Windows, look for C:\PROGRA~2\MIKTEX~2.9\miktex\bin\pdflatex.exe or something similar. – Sigur Jan 23 '13 at 18:38
- Also the discussion here sourceforge.net/p/texstudio/discussion/907839/thread/6363740c – Sigur Jan 23 '13 at 18:40
- Hi Sigur, in Options > Configure > Build > Default Compiler I have inserted the path C:\Program Files\MiKTeX 2.9\miktex\bin\x64\pdflatex.exe but it still doesn't work. – user24846 Jan 23 '13 at 18:55
- Try to use double quotes and the extension, but not on the default compiler; try it on the other field (Commands > pdflatex): "C:\....exe" %.tex – Sigur Jan 23 '13 at 18:58
- This works, thanks a lot! – user24846 Jan 23 '13 at 19:22

Open TeXstudio and go to the Options menu. Then Configure TeXstudio. On the left panel choose the second item, Commands. In the pdflatex field, fill in the full path to your pdflatex.exe. For example, in Windows with MiKTeX, something similar to

    C:\Program Files\MiKTeX 2.9\miktex\bin\x64\pdflatex.exe

Don't forget to put double quotes around it. Then write %.tex to denote the current tex file. In summary, you'll have something like this (ignoring highlights):

    "C:\Program Files\MiKTeX 2.9\miktex\bin\x64\pdflatex.exe" %.tex

You can do the same for other tools, like dvips for example.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9313029646873474, "perplexity": 8199.465327609396}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1404776420526.72/warc/CC-MAIN-20140707234020-00085-ip-10-180-212-248.ec2.internal.warc.gz"}
https://math.stackexchange.com/questions/1503122/can-four-consecutive-numbers-all-be-powers-of-whole-numbers
# Can four consecutive numbers all be powers of whole numbers?

I need a direction. I was given a hint: the claim is somehow related to another claim, that for 2 consecutive even numbers, one is divisible by 4 and the other isn't.

- Can three?????? – barak manos Oct 29 '15 at 7:36
- @barakmanos: the hint cannot be used for three numbers. – Yves Daoust Oct 29 '15 at 7:37
- If they are first powers you can have as many as you like. – Mark Bennet Oct 29 '15 at 7:49
- Even two consecutive numbers can't be powers, except for $8,9$. This is because of Mihăilescu's theorem. – user236182 Oct 29 '15 at 9:41
- $2 = 2^1, 3=3^1, 4=2^2, 5=5^1$ – Michael Stocker Oct 29 '15 at 10:55

Among the four consecutive numbers, one must be of the form $4n+2=2(2n+1)$. As the multiplicity of $2$ in its prime decomposition is $1$, this number cannot be a power.

- However, $2,3,4,5$ works if you count primes as powers of primes. – John Dvorak Oct 29 '15 at 13:17
- I know Mihăilescu's theorem, but this thread makes me wonder if there is an elementary way to see that the three consecutive integers $4n-1$, $4n$ and $4n+1$ can never be simultaneously perfect powers. Because four consecutive are very easy to rule out, and two consecutive were a tough problem for a long time. – Jeppe Stig Nielsen Oct 29 '15 at 14:33
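The $4n+2$ argument settles the question outright, and a brute-force scan (a throwaway check, not from the thread) confirms how rare even pairs of consecutive perfect powers are:

```python
# Collect perfect powers m^k (k >= 2) up to N and look for consecutive ones.
N = 10**7
powers = set()
m = 2
while m * m <= N:
    p = m * m
    while p <= N:
        powers.add(p)
        p *= m
    m += 1

pairs = sorted(n for n in powers if n + 1 in powers)
print(pairs)   # [8] -- only 8, 9; so no three (or four) in a row below N
```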
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5834964513778687, "perplexity": 405.1977984980787}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145438.12/warc/CC-MAIN-20200221014826-20200221044826-00385.warc.gz"}
https://www.bradford-delong.com/2018/12/jacob-levy-populisms-dangerous-companions.html
Jacob Levy: Populism's Dangerous Companions: "It’s an interesting and illuminating exercise to single out a shift from debates about cultural moralism to those about nationalism as the important destabilizing influence.... Jerry Falwell Jr.’s sycophancy toward a serial adulterer, multiple divorcé, and seeker of sexual favors from nude models and porn actresses is a nice synecdoche for the apparent surrender of the old religious right, even in the United States, on everything except abortion... ...But... economic anxiety... the new style of politics. But of course it remains true that it’s not the genuinely poor and precarious who have embraced it.... The kind of politics Davies is describing is a fearful one, and always ripe for demagoguery and hysteria. The more terrifying a picture can be painted of aliens and immigrants, foreigners and elites, the better for the electoral prospects of those promoting borders closed to trade and migration.... National collectivism has predecessors and ancestors.... There have been normal parties committed to economic protectionism, but hostility to immigration and immigrants calls forth something else.... The combination of a longing for unity and distrust of elites makes populism congenial to one-man rule. A would-be autocrat can speak in one unified voice, as competing elites cannot. He can offer the many an alliance against the few, marking them as enemies of the true people.... Davies argues that unlike the free market conservatives, the national collectivists “support an active economic role for government and a large and generous but strictly national welfare state.” But at least in the United States, supposed “free market conservatives” have held that latter position for generations... agricultural subsidies, Social Security, the Federal Housing Administration, and the GI Bill amount to a large and generous but mostly white welfare state from the 1930s on.... None of this was ever seriously challenged even under Reagan or Gingrich. It was the additions to the redistributionist state of the 1960s, aimed at urban and minority recipients, that were stigmatized.... The broad base of voters electing people Davies calls “free market conservatives” were only very rarely really free market conservatives as Davies imagines that position. And the distance they have traveled thus isn’t as large as he imagines it to be. Race is always one of the dimensions of alignment in the United States, and it exerts a gravitational pull on the others.... Davies may... understate the exceptional character of... Trump... by treating it as the birth pangs of a new normal partisan alignment.... He may... overstate its novelty in substantive ideological terms... #shouldread
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.21453876793384552, "perplexity": 8378.801228390377}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627999620.99/warc/CC-MAIN-20190624171058-20190624193058-00050.warc.gz"}
https://link.springer.com/referenceworkentry/10.1057/978-1-349-95189-5_667
# The New Palgrave Dictionary of Economics 2018 Edition | Editors: Macmillan Publishers Ltd

# Aggregate Demand Theory

• H. Sonnenschein

Reference work entry
DOI: https://doi.org/10.1057/978-1-349-95189-5_667

## Abstract

Aggregate demand theory investigates the properties of market demand functions. These functions are obtained by summing the preference maximizing actions of individual agents. The study of aggregate demand theory is primarily motivated by the fact that market demand functions, rather than individual demand functions, are the data of economic analysis. In general, market demand functions do not inherit the structure which is imposed on individual demand functions by the utility hypothesis. Such structure, when present, enables us to obtain stronger predictions from available data.

## Keywords

Aggregate demand theory; Consumer demand functions; Demand theory; Individual demand functions; Market demand functions

## JEL Classifications

E1
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8322323560714722, "perplexity": 23922.200528252037}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676589752.56/warc/CC-MAIN-20180717144908-20180717164908-00536.warc.gz"}
https://www.physicsforums.com/threads/a-question-in-group-theory.557224/
# A question in group theory

1. Dec 5, 2011

### Ledger

This is not homework. Self-study. And I'm really enjoying it. But, as I'm going through this book ("A Book of Abstract Algebra" by Charles C. Pinter), every so often I run into a problem or concept I don't understand.

Let $G$ be a finite abelian group, say $G = \{e, a_1, a_2, a_3, \ldots, a_n\}$. Prove that $(a_1 a_2 \cdots a_n)^2 = e$.

So, it has a finite number of elements and it's a group. So it's associative, has an identity element and an inverse for each element of G, and as it's abelian it's also commutative. But I don't see how squaring the product of its elements leads to the identity element e.

Wait. Writing this has me thinking that each element might be being 'multiplied' by its inverse, yielding e for every pair, which when all multiplied together still yields e, even when ultimately squared. Could that be the answer, even though I may not have stated it elegantly? There's no one I can ask, so I brought it to this forum.

2. Dec 5, 2011

### spamiam

You're on the right track. But the product is squared for a reason. What if your group has elements of order 2? This won't cause problems, but it's necessary to consider it.

3. Dec 5, 2011

### Ledger

'If the group has elements of order 2': I don't really understand that. The terminology in this book I understand (so far) is that if the group is of order 2, that means it is a finite group with two elements. Things are sometimes squared to get rid of a negative sign. But if the elements are numbers, I would think that multiplying a negative number by its inverse (which would also be negative, so the outcome is 1) would take care of that. But perhaps not, so I'll go with: squaring would knock a negative out of the final e. Is that it?

4. Dec 5, 2011

### micromass

What spamiam means is that there might be an element $a_i$ such that $a_i=a_i^{-1}$. In that case, your proof would not hold anymore. Indeed, its inverse does not occur in the list since it equals $a_i$.

5. Dec 5, 2011

### Ledger

So there could be an element of G that equals its own inverse. So squaring the product ensures that this is reduced to e as well? Since they equal each other, they should square to the identity, I think. Is this it?

6. Dec 5, 2011

Yes!
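As a quick sanity check of the claim (an illustrative Python sketch), take the cyclic groups $\mathbb{Z}_n$ written additively: the "product" of all elements is the sum $0+1+\cdots+(n-1)$, "squaring" is doubling, and the identity is $0$:

    # Verify (a1*a2*...*an)^2 = e in the abelian group Z_n, written additively.
    for n in range(1, 50):
        total = sum(range(n)) % n        # "product" of all elements of Z_n
        assert (2 * total) % n == 0      # "squaring" it gives the identity 0
    print("verified for Z_1 through Z_49")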
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8297340869903564, "perplexity": 470.80278927720553}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084886860.29/warc/CC-MAIN-20180117082758-20180117102758-00436.warc.gz"}
https://hal-normandie-univ.archives-ouvertes.fr/hal-02143804
# Anaphore associative et anaphore possessive : quelles différences pour les relations de cohérence ? (Associative anaphora and possessive anaphora: what differences for coherence relations?)

Abstract: This paper is devoted to associative anaphora and possessive anaphora with two kinds of nouns. One is meronyms, where a noun denotes a concept that is a part of another, as in trunk/tree. The other is what I call "functionally localized" nouns, such as church with respect to village, where the latter denotes a concept that normally implies the existence of the concept denoted by the former: see Kleiber (2001). We first evoke the referential properties of these two kinds of anaphora. The properties of the possessive determiner are then discussed in order to explain its occasional incompatibility with meronyms or functionally localized nouns. Finally, we elucidate the consequences of the choice between associative and possessive anaphora upon textual coherence relations, in particular Claim-Evidence (Cornish, 2009a, b) and Explanation.

Document type: Journal articles

### Citation

Mathilde Salles. Anaphore associative et anaphore possessive : quelles différences pour les relations de cohérence ?. Revue Romane, John Benjamins Publishing, 2013, 48 (1), pp.51-78. ⟨10.1075/rro.48.1.03sal⟩. ⟨hal-02143804⟩
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8934894800186157, "perplexity": 12412.727915443347}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107890273.42/warc/CC-MAIN-20201026031408-20201026061408-00694.warc.gz"}
https://www.techwhiff.com/learn/b2-the-multiplicity-of-a-monatomic-ideal-gas-is/225361
# B.2 The multiplicity of a monatomic ideal gas

###### Question:

B.2 The multiplicity of a monatomic ideal gas is given by $\Omega = f(N)\, V^N U^{3N/2}$, where $V$ is the volume occupied by the gas, $U$ its internal energy, $N$ the number of particles in the gas and $f(N)$ a complicated function of $N$.

(i) Show that the entropy $S$ of this system is given by

$S = N k_B \ln V + \tfrac{3}{2} N k_B \ln U + g(N),$

where $g(N)$ is some function of $N$. [2]

(ii) Define the temperature $T$ of a thermodynamic system in terms of its entropy. Use the result in part (i) to calculate the temperature of a monatomic ideal gas as a function of its internal energy $U$. Then invert this relation to obtain an expression for $U$. Compare the result with the prediction for $U$ from the equipartition theorem. [6]

(iii) State the thermodynamic identity. Explain all the terms you introduce. Use the thermodynamic identity to show that

$P = T \left( \frac{\partial S}{\partial V} \right)_{U,N},$

where $P$ is the pressure of the gas, and the subscripts indicate that $U$ and $N$ are held constant. Then use this result and that of part (i) to derive the ideal-gas law. [5]

(iv) Consider the gas confined in a container of volume $V$. What is the probability of finding all $N$ atoms in the leftmost 99% of the container if (a) $N = 100$; (b) $N = 10{,}000$; (c) $N = 10^{23}$? [2]
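For orientation, the core of part (ii) is the following standard derivation (a sketch built on the result of part (i), not a model answer):

$\displaystyle \frac{1}{T} = \left( \frac{\partial S}{\partial U} \right)_{V,N} = \frac{3}{2} \frac{N k_B}{U} \quad \Longrightarrow \quad U = \frac{3}{2} N k_B T,$

which agrees with the equipartition theorem: $\tfrac{1}{2} k_B T$ for each of the three translational degrees of freedom per atom.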
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7322901487350464, "perplexity": 1254.4272554070233}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334912.28/warc/CC-MAIN-20220926144455-20220926174455-00187.warc.gz"}
https://ai.stackexchange.com/questions/118/how-can-fuzzy-logic-be-used-in-creating-ai/122
# How can fuzzy logic be used in creating AI? Fuzzy logic is the logic where every statement can have any real truth value between 0 and 1. How can fuzzy logic be used in creating AI? Is it useful for certain decision problems involving multiple inputs? Can you give an example of an AI that uses it? A classical example of fuzzy logic in an AI is the expert system Mycin. Fuzzy logic can be used to deal with probabilities and uncertainties. If one looks at, for example, predicate logic, then every statement is either true or false. In reality, we don't have this mathematical certainty. For example, let's say a physician (or expert system) sees a symptom that can be attributed to a few different diseases (say A, B and C). The physician will now attribute a higher likelihood to the possibility of the patient having any of these three diseases. There is no definite true or false statement, but there is a change of weights. This can be reflected in fuzzy logic, but not so easily in symbolic logic. My impression is that fuzzy logic has mostly declined in relevance and probabilistic logic has taken over its niche. (See the comparison on Wikipedia.) The two are somewhat deeply related, and so it's mostly a change in perspective and language. That is, fuzzy logic mostly applies to labels which have uncertain ranges. An object that's cool but not too cool could be described as either cold or warm, and fuzzy logic handles this by assigning some fractional truth value to the 'cold' and 'warm' labels and no truth to the 'hot' label. Probabilistic logic focuses more on the probability of some fact given some observations, and is deeply focused on the uncertainty of observations. When we look at an email, we track our belief that the email is "spam" and shouldn't be shown to the user with some number, and adjust that number as we see evidence for and against it being spam. • Probabilistic logic is a progressive term that is difficult to distinguish from the classical meaning of fuzzy logic. Both Drools and Prolog are in use in business and industrial fuzzy logic control. – han_nah_han_ Dec 11 '18 at 12:11
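To make the cold/warm example concrete, here is a minimal Python sketch of fuzzy membership; the temperature breakpoints are invented for illustration and are not from any of the systems mentioned above:

    # Minimal fuzzy-membership sketch: an object that is "cool but not too cool"
    # gets partial truth in both 'cold' and 'warm', and none in 'hot'.
    def triangular(x, left, peak, right):
        """Triangular membership: rises from left to peak, falls to right."""
        if x <= left or x >= right:
            return 0.0
        if x <= peak:
            return (x - left) / (peak - left)
        return (right - x) / (right - peak)

    temp = 16.0  # degrees Celsius; breakpoints below are arbitrary assumptions
    memberships = {
        "cold": triangular(temp, -10, 5, 20),
        "warm": triangular(temp, 10, 22, 30),
        "hot":  triangular(temp, 25, 35, 45),
    }
    print(memberships)  # approximately {'cold': 0.27, 'warm': 0.5, 'hot': 0.0}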
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8443382382392883, "perplexity": 673.566550057123}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250595282.35/warc/CC-MAIN-20200119205448-20200119233448-00040.warc.gz"}
http://web.archive.org/web/20070620232045/http:/www.vim.org/tips/tip.php?tip_id=661
# Tip #661: LaTeX: Addition to latex-suite: folds the preamble

created: February 24, 2004 13:36
complexity: basic
author: Christian
as of Vim: 5.7

In /ftplugin/latex-suite/folding.vim I added the lines you see below, so that the latex-suite folding command \rf also folds the preamble (the part between \documentclass and \begin{document}).

    " {{{ Preamble
    call AddSyntaxFoldItem (
        \ '^\s*\\documentclass',
        \ '^\s*\\begin{document}',
        \ 0,
        \ 0
        \ )
    " }}}

happy LaTeXing
chris
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8083932995796204, "perplexity": 28260.814387795275}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802773066.29/warc/CC-MAIN-20141217075253-00088-ip-10-231-17-201.ec2.internal.warc.gz"}
https://brilliant.org/discussions/thread/interesting-construction-problems/
# Interesting construction problems Here are some interesting construction problems that I stumbled upon. In the following problems, constructions can only be done using an unmarked straightedge and compass. I came across the first problem accidentally while watching my friend fold origami. However, it might have been posed long ago by some mathematician. The third problem is easier than the others. 1. Given a line segment of length $l$ and any positive integer $n$. Show that, using straightedge and compass, it is always possible to divide the line into $n$ equal segments, no matter what $n$ is. 2. Given a line segment of length $\sqrt{2}+\sqrt{3}+\sqrt{5}$, is it possible to construct a line segment of length $1$? 3. Given triangle ABC, construct the circumcircle, and the incircle. 4. Given the perpendicular from A and two medians from A, B onto BC, AC respectively, reconstruct triangle ABC. Note by Joel Tan 6 years, 6 months ago This discussion board is a place to discuss our Daily Challenges and the math and science related to those challenges. Explanations are more than just a solution — they should explain the steps and thinking strategies that you used to obtain the solution. Comments should further the discussion of math and science. When posting on Brilliant: • Use the emojis to react to an explanation, whether you're congratulating a job well done , or just really confused . • Ask specific questions about the challenge or the steps in somebody's explanation. Well-posed questions can add a lot to the discussion, but posting "I don't understand!" doesn't help anyone. • Try to contribute something new to the discussion, whether it is an extension, generalization or other idea related to the challenge. MarkdownAppears as *italics* or _italics_ italics **bold** or __bold__ bold - bulleted- list • bulleted • list 1. numbered2. list 1. numbered 2. list Note: you must add a full line of space before and after lists for them to show up correctly paragraph 1paragraph 2 paragraph 1 paragraph 2 [example link](https://brilliant.org)example link > This is a quote This is a quote # I indented these lines # 4 spaces, and now they show # up as a code block. print "hello world" # I indented these lines # 4 spaces, and now they show # up as a code block. print "hello world" MathAppears as Remember to wrap math in $$ ... $$ or $ ... $ to ensure proper formatting. 2 \times 3 $2 \times 3$ 2^{34} $2^{34}$ a_{i-1} $a_{i-1}$ \frac{2}{3} $\frac{2}{3}$ \sqrt{2} $\sqrt{2}$ \sum_{i=1}^3 $\sum_{i=1}^3$ \sin \theta $\sin \theta$ \boxed{123} $\boxed{123}$ Sort by: 2: Follows directly from the definition of a constructible number. Constructively, show that $\sqrt{n}$ is always constructible when $n \in \mathbb{Z}^{+}$ and use compass equivalence theorem to add them up. 3: Let $O$ be the circumcenter of $\Delta ABC$. It's pretty obvious that $\Delta AOB$ is isoceles, and that the angle bisector of $\angle AOB$ is the perpendicular bisector of $AB$; hence, by similar argument for the other sides, $O$ is the intersection of the perpendicular bisectors of the sides of $\Delta ABC$. Circumcircle construction is just a corollary. Let $O^{\prime}$ be the incenter of $\Delta ABC$. Let the shortest distance from the incenter to $AB$ intersect the latter at $X$; do similarly for $AC$, with the intersection $Y$. $O^{\prime}X \equiv O^{\prime}Y$ (radii), and $AX \equiv AY$ (convergent tangents). 
It's easy to see the angle bisector of $\angle ABC$ passes through the incenter; thus, by a similar argument for the other angles, $O^{\prime}$ is the intersection of the angle bisectors of the interior angles of $\Delta ABC$. Again, the incircle is just a corollary (albeit a slightly more complicated one). - 6 years, 2 months ago

I'm new to compass and straightedge, so I'm sorry if I used any theorems incorrectly. As for the incircle corollary, my method is to pick any side of $\Delta ABC$ and construct any circle with center $O^{\prime}$ that intersects that side. The midpoint of the two intersections on the side will be a point on the circumference of the incircle. - 6 years, 2 months ago
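On problem 2: $\sqrt{2}+\sqrt{3}+\sqrt{5}$ lies in a tower of quadratic extensions of $\mathbb{Q}$, so it and its reciprocal are constructible numbers, which is what the solution above appeals to. One symptom of this is that its minimal polynomial has degree $8 = 2^3$; a quick symbolic check (an illustrative SymPy sketch):

    from sympy import sqrt, Symbol, minimal_polynomial

    x = Symbol('x')
    alpha = sqrt(2) + sqrt(3) + sqrt(5)
    p = minimal_polynomial(alpha, x, polys=True)
    print(p)           # a degree-8 polynomial with integer coefficients
    print(p.degree())  # 8, a power of 2

(A power-of-two degree is necessary, not by itself sufficient, for constructibility; here constructibility also follows directly from the square-root expression.)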
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 36, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9466005563735962, "perplexity": 834.5670153330383}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178389472.95/warc/CC-MAIN-20210309061538-20210309091538-00419.warc.gz"}
https://quizplus.com/quiz/150410-quiz-14-chi-square-tests
## Quiz 16 : Chi-Square Tests

To compute a pooled sample proportion, each of the sample proportions is weighted by the size of the population from which the sample was selected.
True False (answer: False)

The degrees of freedom for a test of independence involving a contingency table with 10 rows and 11 columns are 90.
True False (answer: True)

One hundred people were sampled from each of three populations and asked a question. The responses are shown in the table: If the three populations represented here contain the same proportion of yes responses, we would have expected to see 23.33 yes responses in each sample.
True False (answer: True)

The degrees of freedom for a test of independence involving a contingency table with 10 rows and 8 columns are 63.
True False

In a chi-square calculation involving 6 independent terms (that is, with df = 6), there is a 5% probability that the result will be less than 1.635.
True False

Two hundred items were sampled from each of two recent shipments. The results are shown in the table: If the two shipments contain the same proportion of defective items, we would have expected to see 14 defective items in each sample.
True False

The table-based approach to testing for differences in population proportions yields more accurate results than the squared standardized normal random variable approach.
True False

Samples of equal size have been selected from each of three populations, producing sample proportions of 0.4, 0.3, and 0.5. To compute a pooled sample proportion, we can compute the simple average of the three sample proportions.
True False

Samples of equal size have been selected from each of three populations, producing sample proportions of 0.4, 0.3, and 0.5. To compute a pooled sample proportion, each of the sample proportions is weighted by the size of the population from which the sample was selected.
True False

The degrees of freedom for a test of independence involving a contingency table with 12 rows and 12 columns are 144.
True False

In a chi-square distribution, χ² is the sum of squared standardized normal random variables, with degrees of freedom equal to the number of independent terms included in the sum.
True False

In a chi-square test of proportion differences, we will reject the "all proportions are equal" null hypothesis if the p-value for the chi-square statistic is less than the significance level for the test.
True False

Samples of equal size have been selected from each of three populations, producing sample proportions of 0.5, 0.7, and 0.6. The pooled sample proportion would be 0.65.
True False

The degrees of freedom in a goodness of fit test for a multinomial distribution with 5 categories are 4.
True False

In a chi-square calculation involving 5 independent terms (that is, with df = 5), there is a 5% probability that the result will be greater than 16.750.
True False

The degrees of freedom in a goodness of fit test for a multinomial distribution with 5 categories are (5 − 2) = 3.
True False
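The contingency-table items all turn on the same formula: a test of independence on an r × c table has (r − 1)(c − 1) degrees of freedom. A quick check of the counts quoted above (an illustrative Python sketch):

    # Degrees of freedom for a chi-square test of independence: (r - 1)(c - 1).
    def df_independence(rows, cols):
        return (rows - 1) * (cols - 1)

    print(df_independence(10, 11))  # 90  -> the 10 x 11 item is True
    print(df_independence(10, 8))   # 63  -> the 10 x 8 item is True
    print(df_independence(12, 12))  # 121 -> not 144, so the 12 x 12 item is False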
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8969221115112305, "perplexity": 604.3382240904839}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711712.26/warc/CC-MAIN-20221210042021-20221210072021-00773.warc.gz"}
https://reference.wolframcloud.com/language/ref/SymmetricMatrixQ.html
# SymmetricMatrixQ

SymmetricMatrixQ[m] gives True if m is explicitly symmetric, and False otherwise.

# Details and Options

• A matrix m is symmetric if m == Transpose[m].
• SymmetricMatrixQ works for symbolic as well as numerical matrices.
• The following options can be given:
  SameTest -> Automatic: function to test equality of expressions
  Tolerance -> Automatic: tolerance for approximate numbers
• For exact and symbolic matrices, the option SameTest -> f indicates that two entries mij and mkl are taken to be equal if f[mij, mkl] gives True.
• For approximate matrices, the option Tolerance -> t can be used to indicate that all entries Abs[mij] <= t are taken to be zero.
• For matrix entries Abs[mij] > t, equality comparison is done except for the last bits, at the scale of $MachineEpsilon for MachinePrecision matrices and of the corresponding epsilon for matrices of higher Precision.

# Examples

## Basic Examples(2)

Test if a 2×2 numeric matrix is explicitly symmetric:
Test if a 3×3 symbolic matrix is explicitly symmetric:

## Scope(10)

### Basic Uses (6)

Test if a real machine-precision matrix is symmetric:
A real symmetric matrix is also Hermitian:
Test if a complex matrix is symmetric:
A complex symmetric matrix has symmetric real and imaginary parts:
Test if an exact matrix is symmetric:
Make the matrix symmetric:
Use SymmetricMatrixQ with an arbitrary-precision matrix:
A random matrix is typically not symmetric:
Use SymmetricMatrixQ with a symbolic matrix:
The matrix becomes symmetric when the appropriate condition on the parameters holds:
SymmetricMatrixQ works efficiently with large numerical matrices:

### Special Matrices (4)

Use SymmetricMatrixQ with sparse matrices:
Use SymmetricMatrixQ with structured matrices:
Use with a QuantityArray structured matrix:
The identity matrix is symmetric:
HilbertMatrix is symmetric:

## Options(2)

### SameTest(1)

This matrix is symmetric for a positive real parameter, but SymmetricMatrixQ gives False:
Use the option SameTest to get the correct answer:

### Tolerance(1)

Generate a real-valued symmetric matrix with some random perturbation of order 10^-14:
Adjust the option Tolerance to accept this matrix as symmetric:
The norm of the difference between the matrix and its transpose:

## Applications(13)

### Generating Symmetric Matrices(4)

Any matrix generated from a symmetric function is symmetric:
The function is symmetric:
Using Table generates a symmetric matrix:
SymmetrizedArray can generate matrices (and general arrays) with symmetries:
Convert back to an ordinary matrix using Normal:
Check that matrices drawn from GaussianOrthogonalMatrixDistribution are symmetric:
Matrices drawn from CircularOrthogonalMatrixDistribution are symmetric and unitary:
Every Jordan matrix is similar to a symmetric matrix. Since any square matrix is similar to its Jordan form, this means that any square matrix is similar to a symmetric matrix.
Define a function for generating a Jordan block for eigenvalue λ:
For example, here is the Jordan matrix of dimension 4 for the eigenvalue λ:
Define a function for generating a corresponding complex similarity transformation:
The matrix is a sum of a multiple of the identity matrix and a multiple of the backward identity matrix:
Then the transformed matrix is symmetric, which shows that the Jordan matrix is similar to a symmetric matrix:
Confirm the matrix is symmetric:

### Examples of Symmetric Matrices(5)

The Hessian matrix of a function is symmetric:
Many special matrices are symmetric, including FourierMatrix:
And HilbertMatrix:
Visualize the matrix types:
Many filter kernel matrices are symmetric, including DiskMatrix:
Visualize the matrices:
AdjacencyMatrix of an undirected graph is symmetric:
As is KirchhoffMatrix:
Visualize adjacency and Kirchhoff matrices for different graphs:
Several statistical measures are symmetric matrices, including Covariance:

### Uses of Symmetric Matrices(4)

A positive-definite, real symmetric matrix or metric defines an inner product:
Verify that the metric is in fact symmetric and positive definite:
Orthogonalize the standard basis to find an orthonormal basis:
Confirm that this basis is orthonormal with respect to the inner product:
The moment of inertia tensor is the equivalent of mass for rotational motion. For example, kinetic energy is ω.(I.ω)/2, with the inertia tensor I taking the place of the mass and angular velocity ω taking the place of linear velocity in the formula m v²/2. The tensor can be represented by a positive-definite symmetric matrix. Compute the moment of inertia for a tetrahedron with endpoints at the origin and positive coordinate axes:
Verify that the matrix is symmetric:
Compute the kinetic energy for a given angular velocity ω:
The kinetic energy is positive as long as ω is nonzero, showing the matrix was positive definite:
Determine if a sparse matrix is structurally symmetric:
The matrix is not symmetric:
But it is structurally symmetric:
Use a different method for symmetric matrices, with failover to a general method:
Construct real-valued matrices for testing:
For a non-symmetric matrix m, the function myLS just uses Gaussian elimination:
For a symmetric indefinite matrix ms, try Cholesky and continue with Gaussian elimination:
For a symmetric positive-definite matrix mpd, try Cholesky, which succeeds:

## Properties & Relations(13)

SymmetricMatrixQ trivially returns False for any x that is not a matrix:
A matrix is symmetric if m == Transpose[m]:
A real-valued symmetric matrix is Hermitian:
But a complex-valued symmetric matrix may not be:
Use Symmetrize to compute the symmetric part of a matrix:
This equals the average of m and Transpose[m]:
Any matrix can be represented as the sum of its symmetric and antisymmetric parts:
Use AntisymmetricMatrixQ to test whether a matrix is antisymmetric:
If m is a symmetric matrix with real entries, then I m is antihermitian:
MatrixExp[I m] for real symmetric m is unitary:
A real-valued symmetric matrix is always a normal matrix:
A complex-valued symmetric matrix need not be normal:
Real-valued symmetric matrices have all real eigenvalues:
Use Eigenvalues to find eigenvalues:
Note that a complex-valued symmetric matrix may have both real and complex eigenvalues:
The characteristic polynomial for real symmetric m can be factored into linear terms:
Real-valued symmetric matrices have a complete set of eigenvectors:
As a consequence, they must be diagonalizable:
Use Eigenvectors to find eigenvectors:
Note that a complex-valued symmetric matrix need not have these properties:
The inverse of a symmetric matrix is symmetric:
Matrix functions
of symmetric matrices are symmetric, including MatrixPower:
And any univariate function representable using MatrixFunction:

## Possible Issues(1)

SymmetricMatrixQ uses the definition m == Transpose[m] for both real- and complex-valued matrices:
These complex matrices need not be normal or possess many properties of self-adjoint (real symmetric) matrices:
HermitianMatrixQ tests the condition for self-adjoint matrices:
Alternatively, test if the entries are real to restrict to real symmetric matrices:

## Neat Examples(1)

Images of symmetric matrices including FourierMatrix:

Wolfram Research (2008), SymmetricMatrixQ, Wolfram Language function, https://reference.wolfram.com/language/ref/SymmetricMatrixQ.html (updated 2014).
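For readers working outside the Wolfram system, a rough NumPy analogue conveys the documented Tolerance semantics (an illustrative sketch: entries below the tolerance are zeroed, and the remaining comparison uses a small relative error standing in for the bit-level comparison described above):

    import numpy as np

    def symmetric_matrix_q(m, tol=0.0):
        m = np.asarray(m)
        if m.ndim != 2 or m.shape[0] != m.shape[1]:
            return False  # non-matrix / non-square input is not symmetric
        cleaned = np.where(np.abs(m) <= tol, 0.0, m)  # Tolerance: small entries -> 0
        return bool(np.allclose(cleaned, cleaned.T, rtol=1e-12, atol=0.0))

    a = np.array([[1.0, 2.0], [2.0, 3.0]])
    b = a.copy()
    b[0, 1] += 1e-14   # perturbation of order 10^-14, as in the Tolerance example
    print(symmetric_matrix_q(a))  # True
    print(symmetric_matrix_q(b))  # True: within the relative comparison margin
    b[0, 1] += 1e-3    # a perturbation too large to ignore
    print(symmetric_matrix_q(b))  # False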
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8705314993858337, "perplexity": 2770.472929884401}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499804.60/warc/CC-MAIN-20230130070411-20230130100411-00573.warc.gz"}
https://probabilityexam.wordpress.com/tag/soa-exam-p/
Exam P Practice Problem 104 – two random insurance losses

Problem 104-A

Two random losses $X$ and $Y$ are jointly modeled by the following density function:

$\displaystyle f(x,y)=\frac{1}{32} \ (4-x) \ (4-y) \ \ \ \ \ \ 0<y<x<4$

Suppose that both of these losses had occurred. Given that $X$ is exactly 2, what is the probability that $Y$ is less than 1?

$\displaystyle (A) \ \ \ \ \ \ \ \ \ \ \ \ \frac{7}{24}$

$\displaystyle (B) \ \ \ \ \ \ \ \ \ \ \ \ \frac{11}{24}$

$\displaystyle (C) \ \ \ \ \ \ \ \ \ \ \ \ \frac{12}{24}$

$\displaystyle (D) \ \ \ \ \ \ \ \ \ \ \ \ \frac{13}{24}$

$\displaystyle (E) \ \ \ \ \ \ \ \ \ \ \ \ \frac{14}{24}$

Problem 104-B

Two random losses $X$ and $Y$ are jointly modeled by the following density function:

$\displaystyle f(x,y)=\frac{1}{96} \ (x+2y) \ \ \ \ \ \ 0<x<4, \ 0<y<4$

Suppose that both of these losses had occurred. Determine the probability that $Y$ exceeds 2 given that the loss $X$ is known to be 2.

$\displaystyle (A) \ \ \ \ \ \ \ \ \ \ \ \ \frac{13}{36}$

$\displaystyle (B) \ \ \ \ \ \ \ \ \ \ \ \ \frac{24}{36}$

$\displaystyle (C) \ \ \ \ \ \ \ \ \ \ \ \ \frac{26}{36}$

$\displaystyle (D) \ \ \ \ \ \ \ \ \ \ \ \ \frac{28}{36}$

$\displaystyle (E) \ \ \ \ \ \ \ \ \ \ \ \ \frac{29}{36}$

$\copyright$ 2018 – Dan Ma

Exam P Practice Problem 103 – randomly selected auto collision claims

Problem 103-A

The size of an auto collision claim follows a distribution that has density function $f(x)=2(1-x)$ where $0<x<1$. Two randomly selected claims are examined. Compute the probability that one claim is at least twice as large as the other.

$\displaystyle (A) \ \ \ \ \ \ \ \ \ \ \ \ \frac{10}{36}$

$\displaystyle (B) \ \ \ \ \ \ \ \ \ \ \ \ \frac{15}{36}$

$\displaystyle (C) \ \ \ \ \ \ \ \ \ \ \ \ \frac{20}{36}$

$\displaystyle (D) \ \ \ \ \ \ \ \ \ \ \ \ \frac{21}{36}$

$\displaystyle (E) \ \ \ \ \ \ \ \ \ \ \ \ \frac{23}{36}$

Problem 103-B

Auto collision claims follow an exponential distribution with mean 2. For two randomly selected auto collision claims, compute the probability that the larger claim is more than four times the size of the smaller claim.

$\displaystyle (A) \ \ \ \ \ \ \ \ \ \ \ 0.2$

$\displaystyle (B) \ \ \ \ \ \ \ \ \ \ \ 0.3$

$\displaystyle (C) \ \ \ \ \ \ \ \ \ \ \ 0.4$

$\displaystyle (D) \ \ \ \ \ \ \ \ \ \ \ 0.5$

$\displaystyle (E) \ \ \ \ \ \ \ \ \ \ \ 0.6$

$\copyright$ 2018 – Dan Ma

Exam P Practice Problem 102 – estimating claim costs

Problem 102-A

Insurance claims are modeled by a distribution with the following cumulative distribution function.

$\displaystyle F(x) = \left\{ \begin{array}{ll} \displaystyle 0 &\ \ \ \ \ \ x \le 0 \\ \text{ } & \text{ } \\ \displaystyle \frac{1}{1536} \ x^4 &\ \ \ \ \ \ 0 < x \le 4 \\ \text{ } & \text{ } \\ \displaystyle 1-\frac{2}{3} x+\frac{1}{8} x^2- \frac{1}{1536} \ x^4 &\ \ \ \ \ \ 4 < x \le 8 \\ \text{ } & \text{ } \\ \displaystyle 1 &\ \ \ \ \ \ x > 8 \\ \end{array} \right.$

The insurance company is performing a study on all claims that exceed 3. Determine the mean of all claims being studied.
$\displaystyle (A) \ \ \ \ \ \ \ \ \ \ \ \ 4.8$

$\displaystyle (B) \ \ \ \ \ \ \ \ \ \ \ \ 4.9$

$\displaystyle (C) \ \ \ \ \ \ \ \ \ \ \ \ 5.0$

$\displaystyle (D) \ \ \ \ \ \ \ \ \ \ \ \ 5.1$

$\displaystyle (E) \ \ \ \ \ \ \ \ \ \ \ \ 5.2$

Problem 102-B

Insurance claims are modeled by a distribution with the following cumulative distribution function.

$\displaystyle F(x) = \left\{ \begin{array}{ll} \displaystyle 0 &\ \ \ \ \ \ x \le 0 \\ \text{ } & \text{ } \\ \displaystyle \frac{1}{50} \ x^2 &\ \ \ \ \ \ 0 < x \le 5 \\ \text{ } & \text{ } \\ \displaystyle -\frac{1}{50} x^2+\frac{2}{5} x- 1 &\ \ \ \ \ \ 5 < x \le 10 \\ \text{ } & \text{ } \\ \displaystyle 1 &\ \ \ \ \ \ x > 10 \\ \end{array} \right.$

The insurance company is performing a study on all claims that exceed 4. Determine the mean of all claims being studied.

$\displaystyle (A) \ \ \ \ \ \ \ \ \ \ \ 5.9$

$\displaystyle (B) \ \ \ \ \ \ \ \ \ \ \ 6.0$

$\displaystyle (C) \ \ \ \ \ \ \ \ \ \ \ 6.1$

$\displaystyle (D) \ \ \ \ \ \ \ \ \ \ \ 6.2$

$\displaystyle (E) \ \ \ \ \ \ \ \ \ \ \ 6.3$

$\copyright$ 2018 – Dan Ma

Exam P Practice Problem 101 – auto collision claims

Problem 101-A

The amount paid on an auto collision claim by an insurance company follows a distribution with the following density function.

$\displaystyle f(x) = \left\{ \begin{array}{ll} \displaystyle \frac{1}{96} \ x^3 \ e^{-x/2} &\ \ \ \ \ \ x > 0 \\ \text{ } & \text{ } \\ \displaystyle 0 &\ \ \ \ \ \ \text{otherwise} \\ \end{array} \right.$

The insurance company paid 64 claims in a certain month. Determine the approximate probability that the average amount paid is between 7.36 and 8.84.

$\displaystyle (A) \ \ \ \ \ \ \ \ \ \ \ \ 0.8320$

$\displaystyle (B) \ \ \ \ \ \ \ \ \ \ \ \ 0.8376$

$\displaystyle (C) \ \ \ \ \ \ \ \ \ \ \ \ 0.8435$

$\displaystyle (D) \ \ \ \ \ \ \ \ \ \ \ \ 0.8532$

$\displaystyle (E) \ \ \ \ \ \ \ \ \ \ \ \ 0.8692$

Problem 101-B

The amount paid on an auto collision claim by an insurance company follows a distribution with the following density function.

$\displaystyle f(x) = \left\{ \begin{array}{ll} \displaystyle \frac{1}{1536} \ x^3 \ e^{-x/4} &\ \ \ \ \ \ x > 0 \\ \text{ } & \text{ } \\ \displaystyle 0 &\ \ \ \ \ \ \text{otherwise} \\ \end{array} \right.$

The insurance company paid 36 claims in a certain month. Determine the approximate 25th percentile for the average claim paid in that month.
$\displaystyle M(t)=\biggl( \frac{1}{1-7.5 \ t} \biggr)^2$ The amount of loss in profit due to the plant being inoperative is given by $Y=12 X + 1.25 X^2$. Determine the variance of the loss in profit. $\text{ }$ $\displaystyle (A) \ \ \ \ \ \ \ \ \ \ \ \ \text{279,927.20}$ $\displaystyle (B) \ \ \ \ \ \ \ \ \ \ \ \ \text{279,608.20}$ $\displaystyle (C) \ \ \ \ \ \ \ \ \ \ \ \ \text{475,693.76}$ $\displaystyle (D) \ \ \ \ \ \ \ \ \ \ \ \ \text{583,358.20}$ $\displaystyle (E) \ \ \ \ \ \ \ \ \ \ \ \ \text{601,769.56}$ $\text{ }$ $\text{ }$ $\text{ }$ Problem 100-B The weekly amount of time $X$ (in hours) that a manufacturing plant is down (due to maintenance or repairs) has an exponential distribution with mean 8.5 hours. The cost of the downtime, due to lost production and maintenance and repair costs, is modeled by $Y=15+5 X+1.2 X^2$. Determine the variance of the cost of the downtime. $\text{ }$ $\displaystyle (A) \ \ \ \ \ \ \ \ \ \ \ \text{130,928.05}$ $\displaystyle (B) \ \ \ \ \ \ \ \ \ \ \ \text{149,368.45}$ $\displaystyle (C) \ \ \ \ \ \ \ \ \ \ \ \text{181,622.05}$ $\displaystyle (D) \ \ \ \ \ \ \ \ \ \ \ \text{188,637.67}$ $\displaystyle (E) \ \ \ \ \ \ \ \ \ \ \ \text{195,369.15}$ $\text{ }$ $\text{ }$ $\text{ }$ $\text{ }$ $\text{ }$ $\text{ }$ $\text{ }$ $\text{ }$ probability exam P actuarial exam math Daniel Ma mathematics dan ma actuarial science Daniel Ma actuarial $\copyright$ 2017 – Dan Ma Exam P Practice Problem 99 – When Random Loss is Doubled Problem 99-A A business owner faces a risk whose economic loss amount $X$ follows a uniform distribution over the interval $0. In the next year, the loss amount is expected to be doubled and is expected to be modeled by the random variable $Y=2X$. Suppose that the business owner purchases an insurance policy effective at the beginning of next year with the provision that any loss amount less than or equal to 0.5 is the responsibility of the business owner and any loss amount that is greater than 0.5 is paid by the insurer in full. When a loss occurs next year, determine the expected payment made by the insurer to the business owner. $\text{ }$ $\displaystyle (A) \ \ \ \ \ \ \ \ \ \ \ \ \frac{8}{16}$ $\displaystyle (B) \ \ \ \ \ \ \ \ \ \ \ \ \frac{9}{16}$ $\displaystyle (C) \ \ \ \ \ \ \ \ \ \ \ \ \frac{13}{16}$ $\displaystyle (D) \ \ \ \ \ \ \ \ \ \ \ \ \frac{15}{16}$ $\displaystyle (E) \ \ \ \ \ \ \ \ \ \ \ \ \frac{17}{16}$ $\text{ }$ $\text{ }$ $\text{ }$ Problem 99-B A business owner faces a risk whose economic loss amount $X$ has the following density function: $\displaystyle f(x)=\frac{x}{2} \ \ \ \ \ \ 0 In the next year, the loss amount is expected to be doubled and is expected to be modeled by the random variable $Y=2X$. Suppose that the business owner purchases an insurance policy effective at the beginning of next year with the provision that any loss amount less than or equal to 1 is the responsibility of the business owner and any loss amount that is greater than 1 is paid by the insurer in full. When a loss occurs next year, what is the expected payment made by the insurer to the business owner? 
$\displaystyle (A) \ \ \ \ \ \ \ \ \ \ \ 0.6667$

$\displaystyle (B) \ \ \ \ \ \ \ \ \ \ \ 1.5833$

$\displaystyle (C) \ \ \ \ \ \ \ \ \ \ \ 1.6875$

$\displaystyle (D) \ \ \ \ \ \ \ \ \ \ \ 1.7500$

$\displaystyle (E) \ \ \ \ \ \ \ \ \ \ \ 2.6250$

$\copyright$ 2017 – Dan Ma

Exam P Practice Problem 98 – flipping coins

Problem 98-A

Coin 1 is an unbiased coin, i.e. when flipping the coin, the probability of getting a head is 0.5. Coin 2 is a biased coin such that when flipping the coin, the probability of getting a head is 0.6. One of the coins is chosen at random. Then the chosen coin is tossed repeatedly until a head is obtained. Suppose that the first head is observed on the fifth toss. Determine the probability that the chosen coin is Coin 2.

$\displaystyle (A) \ \ \ \ \ \ \ \ \ \ \ 0.2856$

$\displaystyle (B) \ \ \ \ \ \ \ \ \ \ \ 0.3060$

$\displaystyle (C) \ \ \ \ \ \ \ \ \ \ \ 0.3295$

$\displaystyle (D) \ \ \ \ \ \ \ \ \ \ \ 0.3564$

$\displaystyle (E) \ \ \ \ \ \ \ \ \ \ \ 0.3690$

Problem 98-B

Box 1 contains 3 red balls and 1 white ball while Box 2 contains 2 red balls and 2 white balls. The two boxes are identical in appearance. One of the boxes is chosen at random. A ball is sampled from the chosen box with replacement until a white ball is obtained. Determine the probability that the chosen box is Box 1 if the first white ball is observed on the 6th draw.

$\displaystyle (A) \ \ \ \ \ \ \ \ \ \ \ 0.7530$

$\displaystyle (B) \ \ \ \ \ \ \ \ \ \ \ 0.7632$

$\displaystyle (C) \ \ \ \ \ \ \ \ \ \ \ 0.7825$

$\displaystyle (D) \ \ \ \ \ \ \ \ \ \ \ 0.7863$

$\displaystyle (E) \ \ \ \ \ \ \ \ \ \ \ 0.7915$

$\copyright$ 2017 – Dan Ma

Exam P Practice Problem 97 – Variance of Claim Sizes

Problem 97-A

For a type of insurance policies, the following is the probability that the size of a claim is greater than $x$.

$\displaystyle P(X>x) = \left\{ \begin{array}{ll} \displaystyle 1 &\ \ \ \ \ \ x \le 0 \\ \text{ } & \text{ } \\ \displaystyle \biggl(1-\frac{x}{10} \biggr)^6 &\ \ \ \ \ \ 0 < x < 10 \\ \text{ } & \text{ } \\ \displaystyle 0 &\ \ \ \ \ \ x \ge 10 \\ \end{array} \right.$

Calculate the variance of the claim size for this type of insurance policies.

$\displaystyle (A) \ \ \ \ \ \ \ \ \ \ \ \frac{10}{7}$

$\displaystyle (B) \ \ \ \ \ \ \ \ \ \ \ \frac{75}{49}$

$\displaystyle (C) \ \ \ \ \ \ \ \ \ \ \ \frac{95}{49}$

$\displaystyle (D) \ \ \ \ \ \ \ \ \ \ \ \frac{15}{7}$

$\displaystyle (E) \ \ \ \ \ \ \ \ \ \ \ \frac{25}{7}$

Problem 97-B

For a type of insurance policies, the following is the probability that the size of a claim is greater than $x$.

$\displaystyle P(X>x) = \left\{ \begin{array}{ll} \displaystyle 1 &\ \ \ \ \ \ x \le 0 \\ \text{ } & \text{ } \\ \displaystyle \biggl(\frac{250}{x+250} \biggr)^{2.25} &\ \ \ \ \ \ x>0 \\ \end{array} \right.$

Calculate the expected claim size for this type of insurance policies.
Problem 97-B
For a type of insurance policies, the following is the probability that the size of a claim is greater than $x$.

$\displaystyle P(X>x) = \left\{ \begin{array}{ll} \displaystyle 1 &\ \ \ \ \ \ x \le 0 \\ \displaystyle \biggl(\frac{250}{x+250} \biggr)^{2.25} &\ \ \ \ \ \ x>0 \end{array} \right.$

Calculate the expected claim size for this type of insurance policies.

$\displaystyle (A) \ \ 200.00$
$\displaystyle (B) \ \ 203.75$
$\displaystyle (C) \ \ 207.67$
$\displaystyle (D) \ \ 217.32$
$\displaystyle (E) \ \ 232.74$

$\copyright$ 2017 – Dan Ma

Exam P Practice Problem 96 – Expected Insurance Payment

Problem 96-A
An insurance policy is purchased to cover a random loss subject to a deductible of 1. The cumulative distribution function of the loss amount $X$ is:

$\displaystyle F(x) = \left\{ \begin{array}{ll} \displaystyle 0 &\ \ \ \ \ \ x<0 \\ \displaystyle \frac{3}{25} \ x^2 - \frac{2}{125} \ x^3 &\ \ \ \ \ \ 0 \le x<5 \\ \displaystyle 1 &\ \ \ \ \ \ x \ge 5 \end{array} \right.$

Given a random loss $X$, determine the expected payment made under this insurance policy.

$\displaystyle (A) \ \ 0.50$
$\displaystyle (B) \ \ 1.54$
$\displaystyle (C) \ \ 1.72$
$\displaystyle (D) \ \ 4.63$
$\displaystyle (E) \ \ 6.26$

Problem 96-B
An insurance policy is purchased to cover a random loss subject to a deductible of 2. The density function of the loss amount $X$ is:

$\displaystyle f(x) = \left\{ \begin{array}{ll} \displaystyle \frac{3}{8} \biggl(1- \frac{1}{4} \ x + \frac{1}{64} \ x^2 \biggr) &\ \ \ \ \ \ 0<x<8 \\ \displaystyle 0 &\ \ \ \ \ \ \text{otherwise} \end{array} \right.$

Given a random loss $X$, what is the expected benefit paid by this insurance policy?

$\displaystyle (A) \ \ 0.51$
$\displaystyle (B) \ \ 0.57$
$\displaystyle (C) \ \ 0.63$
$\displaystyle (D) \ \ 1.60$
$\displaystyle (E) \ \ 2.00$
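A brief worked check of 96-A (an added illustration, not in the original post): with deductible 1, the expected payment is

$\displaystyle \int_1^5 \bigl(1-F(x)\bigr) \ dx = \biggl[x - \frac{x^3}{25} + \frac{x^4}{250}\biggr]_1^5 = 2.5 - 0.964 = 1.536 \approx 1.54$

which matches choice (B).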
_______________________________________________

$\copyright \ 2016 - \text{Dan Ma}$

Exam P Practice Problem 95 – Measuring Dispersion

Problem 95-A
The lifetime (in years) of a machine for a manufacturing plant is modeled by the random variable $X$. The following is the density function of $X$.

$\displaystyle f(x) = \left\{ \begin{array}{ll} \displaystyle \frac{3}{2500} \ (100x-20x^2+ x^3) &\ \ \ \ \ \ 0<x<10 \\ \displaystyle 0 &\ \ \ \ \ \ \text{otherwise} \end{array} \right.$

Calculate the standard deviation of the lifetime of such a machine.

$\displaystyle (A) \ \ 2.0$
$\displaystyle (B) \ \ 2.7$
$\displaystyle (C) \ \ 3.0$
$\displaystyle (D) \ \ 4.0$
$\displaystyle (E) \ \ 4.9$

______________________________________________________________________

Problem 95-B
The travel time to work (in minutes) for an office worker has the following density function.

$\displaystyle f(x) = \left\{ \begin{array}{ll} \displaystyle \frac{3}{1000} \ \biggl(50-5x+\frac{1}{8} \ x^2\biggr) &\ \ \ \ \ \ 0<x<20 \\ \displaystyle 0 &\ \ \ \ \ \ \text{otherwise} \end{array} \right.$

Calculate the variance of the travel time to work for this office worker.

$\displaystyle (A) \ \ 3.87$
$\displaystyle (B) \ \ 5.00$
$\displaystyle (C) \ \ 6.50$
$\displaystyle (D) \ \ 8.75$
$\displaystyle (E) \ \ 15.00$

______________________________________________________________________

$\copyright \ 2016 \ \ \text{Dan Ma}$
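One more worked check, added here for illustration (assuming the support $0<x<10$ reconstructed for 95-A): $E[X]=\frac{3}{2500}\int_0^{10}(100x^2-20x^3+x^4) \ dx=4$ and $E[X^2]=\frac{3}{2500}\int_0^{10}(100x^3-20x^4+x^5) \ dx=20$, so $Var(X)=20-16=4$ and the standard deviation is $2$, choice (A).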
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 280, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9999958276748657, "perplexity": 130.54515672381612}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195526386.37/warc/CC-MAIN-20190719223744-20190720005744-00163.warc.gz"}
https://mathematica.stackexchange.com/questions/47285/generating-complete-lists-of-polynomials
# Generating complete lists of polynomials

I would like to generate a list of all $3$-variable Laurent polynomials with non-negative integer coefficients using a looping construct so that I can, one-by-one, check them for specific specializations. Of course, by "all" I mean "all within certain bounds", e.g. on the coefficients, exponents, number of terms... so long as these bounds can be flexibly changed. Any help would be greatly appreciated.

Sample input/output: Say we want to produce all ordinary (vs. Laurent) polynomials with non-negative integer coefficients in two variables x,y, subject to:

Maximum coefficient is $2$
Maximum x and y exponents are $1$

Input> {2,1,1}
Output> 0, 1, 2, x, 1+x, 2+x, 2x, 1+2x, 2+2x, y, 1+y, 2+y, 2y, 1+2y, 2+2y, x+y, 1+x+y, 2+x+y, 2x+y, 1+2x+y, 2+2x+y, x+2y, 1+x+2y, 2+x+2y, 2x+2y, 1+2x+2y, 2+2x+2y, xy, 1+xy, 2+xy, x+xy, 1+x+xy, 2+x+xy, $\ldots$, 2+2x+2xy+2y

for a total of $3^4 = 81$ items in the list. Note that they do not need to be produced in any particular order. Again, the "output" should be produced via a loop or otherwise, so that I can perform some analysis on one item of the list and then either "terminate" or move to the next item on the list. For this simplified example, an input of {a,b,c} will produce a list with $(a+1)^{(b+1)(c+1)}$ items.

• Please give an example of the input and output that you desire. – Mr.Wizard May 4 '14 at 23:13
• Sure. I've included a simplified example in the original post. – Ross Elliot May 4 '14 at 23:58

I propose this:

poly[C_, exp_, var_] :=
  Times @@@ Tuples[var^Range[0, exp]] //
    Tuples[Range[0, C], Length @ #].# &

Test:

poly[2, {1, 1}, {x, y}] // Sort // Short

{0, 1, 2, x, 2 x, 1+x, <<69>>, x+2 y+2 x y, 1+x+2 y+2 x y, 2+x+2 y+2 x y, 2 x+2 y+2 x y, 1+2 x+2 y+2 x y, 2+2 x+2 y+2 x y}

This is also much faster than lp:

lp[1, {3, 2}, {x, y}] // Timing // First
poly[1, {3, 2}, {x, y}] // Timing // First

0.327
0.015

Allowing even:

poly[2, {3, 2}, {x, y}] // Length // Timing

{2.886, 531441}

## Extension to massive sets

Although not requested, here is an approach for extending this code to larger sets. Since huge sets will consume too much memory, we could use an incremental approach, relying on IntegerDigits to construct the Dot vectors one at a time. Although this would make generating an entire series slower, it allows exploration within a series.

mem : poly[C_, exp_, var_, "VEC"] :=
  mem = (* memoization *)
    Times @@@ Tuples[ var^Range[0, exp] ] // {Length @ #, #} &

poly[C_, exp_, var_, part_] :=
  IntegerDigits[part - 1, C + 1, #].#2 & @@ poly[C, exp, var, "VEC"]

Now we can look at e.g. the 5,141,324,824th polynomial in this sequence:

poly[3, {4, 1, 2}, {x, y, z}, 5141324824]

x^3 + x^4 + 3 x^2 y + 2 x^3 y + x^4 y + x^2 z + 3 x^3 z + x^3 y z + x^4 y z + 2 x^2 y z^2 + 2 x^3 y z^2 + 3 x^4 y z^2

Or a list of polynomials in a smaller sequence:

poly[2, {1, 1, 3}, {x, y, z}, {997, 998, 999}] // Column

x z + 2 x y z + x z^2 + 2 x y z^2
x z + 2 x y z + x z^2 + 2 x y z^2 + x y z^3
x z + 2 x y z + x z^2 + 2 x y z^2 + 2 x y z^3

Memoization of the terms vector is included to speed sampling in truly massive sets:

poly[17, {14, 35, 137}, {x, y, z}, 22]
poly[17, {14, 35, 137}, {x, y, z}, 7^14913] // LeafCount

x^14 y^35 z^136 + 3 x^14 y^35 z^137
4 x^14 y^35 z^135 + 2 x^14 y^35 z^136 + 8 x^14 y^35 z^137

102378

• @Mr.Wizard...thanks for timings as well...(and much faster as well as neater and clearer)...one day will learn how to be more efficient...
– ubpdqn May 5 '14 at 7:54
• @ubpdqn You're welcome, and thanks for being gracious. – Mr.Wizard May 5 '14 at 7:58
• @ubpdqn Sorry to keep hammering on your code, but since you're open to comments, it looks like my method is also about ten times more memory efficient. – Mr.Wizard May 5 '14 at 8:03
• I am very grateful. I will edit my answer to look at your answer. I am grateful that the seeds of the idea are the same (tuples of components and coefficients and assembling... I hadn't appreciated how nicely Range and Tuples would work... the inner product was what I wanted to get to but 'bailed' with use of the CoefficientRules construct. – ubpdqn May 5 '14 at 8:08
• Very nice use of the padded base representation (IntegerDigits) to convert an element index to a vector. However, should it be IntegerDigits[part - 1, C + 1, #]? As, e.g., for the 81 polynomials of the original test case: they go from 1 to 81 <-> {0,0,0,0} -> {2,2,2,2} (<=> 54+18+6+2 = 80 = 81-1)? – ubpdqn May 5 '14 at 8:48

EDIT See Mr. Wizard's answer (better in terms of conciseness, efficiency and memory usage).

Does this achieve your desired result:

lp[r_, exp_, var_] :=
  Module[{num, tup, v},
    num = Times @@ (1 + exp);
    tup = Tuples[Range[0, r], num];
    v = Flatten[Outer[List, ##], Length @ exp - 1] & @@ (Range[0, #] & /@ exp);
    FromCoefficientRules[#, var] & /@ (Thread[v -> #] & /@ tup)
  ]

where r is the range of integer coefficients, exp is the list of exponents and var is the variable names.

lp[2, {1, 1}, {x, y}]

yields 81 polynomials:

{0, x y, 2 x y, x, x + x y, x + 2 x y, 2 x, 2 x + x y, 2 x + 2 x y, y, y + x y, y + 2 x y, x + y, x + y + x y, x + y + 2 x y, 2 x + y, 2 x + y + x y, 2 x + y + 2 x y, 2 y, 2 y + x y, 2 y + 2 x y, x + 2 y, x + 2 y + x y, x + 2 y + 2 x y, 2 x + 2 y, 2 x + 2 y + x y, 2 x + 2 y + 2 x y, 1, 1 + x y, 1 + 2 x y, 1 + x, 1 + x + x y, 1 + x + 2 x y, 1 + 2 x, 1 + 2 x + x y, 1 + 2 x + 2 x y, 1 + y, 1 + y + x y, 1 + y + 2 x y, 1 + x + y, 1 + x + y + x y, 1 + x + y + 2 x y, 1 + 2 x + y, 1 + 2 x + y + x y, 1 + 2 x + y + 2 x y, 1 + 2 y, 1 + 2 y + x y, 1 + 2 y + 2 x y, 1 + x + 2 y, 1 + x + 2 y + x y, 1 + x + 2 y + 2 x y, 1 + 2 x + 2 y, 1 + 2 x + 2 y + x y, 1 + 2 x + 2 y + 2 x y, 2, 2 + x y, 2 + 2 x y, 2 + x, 2 + x + x y, 2 + x + 2 x y, 2 + 2 x, 2 + 2 x + x y, 2 + 2 x + 2 x y, 2 + y, 2 + y + x y, 2 + y + 2 x y, 2 + x + y, 2 + x + y + x y, 2 + x + y + 2 x y, 2 + 2 x + y, 2 + 2 x + y + x y, 2 + 2 x + y + 2 x y, 2 + 2 y, 2 + 2 y + x y, 2 + 2 y + 2 x y, 2 + x + 2 y, 2 + x + 2 y + x y, 2 + x + 2 y + 2 x y, 2 + 2 x + 2 y, 2 + 2 x + 2 y + x y, 2 + 2 x + 2 y + 2 x y}

3 variable case yields 6561:

lp[2, {1, 1, 1}, {x, y, z}]

• Perfect. Thanks a million! – Ross Elliot May 5 '14 at 4:42
• You have my vote, but see my answer for how this might be done considerably faster. – Mr.Wizard May 5 '14 at 7:50
• @RossElliot Mr. Wizard's answer is the better answer...see commentary – ubpdqn May 5 '14 at 8:11
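To make the indexing in the comments above concrete, here is a small check of my own (an illustration, not part of the original thread). The 1-based index 81 maps to 81 - 1 = 80 written in base C + 1 = 3, padded to one digit per term:

(* index 81 -> base-3 digit vector, one digit per term of the polynomial *)
IntegerDigits[81 - 1, 2 + 1, 4]

{2, 2, 2, 2}

Dotting this digit vector with the terms vector {1, y, x, x y} gives 2 + 2 x + 2 y + 2 x y, the last polynomial in the poly[2, {1, 1}, {x, y}] series, which is consistent with the part - 1 offset in Mr. Wizard's code.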
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.33793336153030396, "perplexity": 1140.662653233821}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986693979.65/warc/CC-MAIN-20191019114429-20191019141929-00298.warc.gz"}
https://maths.anu.edu.au/study/student-projects/computational-problems
# Computational problems

The problem of numerically computing eigenvalues and eigenfunctions of the Laplacian, with Dirichlet (zero) boundary conditions, on a plane domain is computationally intensive, and there is a lot of theory behind finding efficient algorithms. Proving convergence rates is likewise an interesting theoretical problem. Recently, Barnett and Barnett-Hassell have shown that the method of particular solutions (MPS), a standard method, is more accurate by an order of E^(1/2), where E is the eigenvalue, than previously shown. Analyzing the scaling method, which is a more efficient method for finding large blocks of eigenvalues simultaneously, is planned for 2009. There are good projects possible here for those who like to combine theory and computation.
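For reference, the eigenvalue problem in question can be stated as follows (an explanatory aside, not part of the original project description): find numbers E and nonzero functions u on the plane domain Ω satisfying

-Δu = E u in Ω, with u = 0 on the boundary ∂Ω.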
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8890975713729858, "perplexity": 490.0474517486312}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376826968.71/warc/CC-MAIN-20181215174802-20181215200802-00034.warc.gz"}
https://chemistrycorners.com/first-term-of-chemistry-stpm/
# First Term

## 1 Atoms, Molecules and Stoichiometry

### 1.1 Fundamental particles of an atom

Candidates should be able to:
(a) describe the properties of protons, neutrons and electrons in terms of their relative charges and relative masses;
(b) predict the behaviour of beams of protons, neutrons and electrons in both electric and magnetic fields;
(c) describe the distribution of mass and charges within an atom;
(d) determine the number of protons, neutrons and electrons present in both neutral and charged species of a given proton number and nucleon number;
(e) describe the contribution of protons and neutrons to atomic nuclei in terms of proton number and nucleon number;
(f) distinguish isotopes based on the number of neutrons present, and state examples of both stable and unstable isotopes.

### 1.2 Relative atomic, isotopic, molecular and formula masses

Candidates should be able to:
(a) define the terms relative atomic mass, Ar, relative isotopic mass, relative molecular mass, Mr, and relative formula mass based on ¹²C;
(b) interpret mass spectra in terms of relative abundance of isotopes and molecular fragments;
(c) calculate the relative atomic mass of an element from the relative abundance of its isotopes or its mass spectrum.

### 1.3 The mole and the Avogadro constant

Candidates should be able to:
(a) define mole in terms of the Avogadro constant;
(b) calculate the number of moles of reactants, volumes of gases, volumes of solutions and concentrations of solutions;
(c) deduce stoichiometric relationships from the calculations above.

## 2 Electronic Structures of Atoms

### 2.1 Electronic energy levels of atomic hydrogen

Candidates should be able to:
(a) explain the formation of the emission line spectrum of atomic hydrogen in the Lyman and Balmer series using Bohr's Atomic Model.

### 2.2 Atomic orbitals: s, p and d

Candidates should be able to:
(a) deduce the number and relative energies of the s, p and d orbitals for the principal quantum numbers 1, 2 and 3, including the 4s orbitals;
(b) describe the shape of the s and p orbitals.

### 2.3 Electronic configuration

Candidates should be able to:
(a) predict the electronic configuration of atoms and ions given the proton number (and charge);
(b) define and apply the Aufbau principle, Hund's rule and the Pauli Exclusion Principle.

### 2.4 Classification of elements into s, p, d and f blocks in the Periodic Table

Candidates should be able to:
(a) identify the position of the elements in the Periodic Table as (i) block s, with valence shell configurations s1 and s2, (ii) block p, with valence shell configurations from s2p1 to s2p6, (iii) block d, with valence shell configurations from d1s2 to d10s2;
(b) identify the position of elements in block f of the Periodic Table.

## 3 Chemical Bonding

### 3.1 Ionic bonding

Candidates should be able to:
(a) describe ionic (electrovalent) bonding as exemplified by NaCl and MgCl2.

### 3.2 Covalent bonding

Candidates should be able to:
(a) draw the Lewis structure of covalent molecules (octet rule as exemplified by NH3, CCl4, H2O, CO2, N2O4, and exceptions to the octet rule as exemplified by BF3, NO, NO2, PCl5, SF6);
(b) draw the Lewis structure of ions as exemplified by SO4²⁻, CO3²⁻, NO3⁻ and CN⁻;
(c) explain the concept of overlapping and hybridisation of the s and p orbitals as exemplified by BeCl2, BF3, CH4, N2, HCN, NH3 and H2O molecules;
(d) predict and explain the shapes of and bond angles in molecules and ions using the principle of valence shell electron pair repulsion, e.g.
linear, trigonal planar, tetrahedral, trigonal bipyramid, octahedral, V-shaped, T-shaped, seesaw and pyramidal;
(e) explain the existence of polar and non-polar bonds (including C-Cl, C-N, C-O, C-Mg) resulting in polar and/or non-polar molecules;
(f) relate bond lengths and bond strengths with respect to single, double and triple bonds;
(g) explain the inertness of the nitrogen molecule in terms of its strong triple bond and non-polarity;
(h) describe typical properties associated with ionic and covalent bonding in terms of bond strength, melting point and electrical conductivity;
(i) explain the existence of covalent character in ionic compounds such as Al2O3, AlI3 and LiI;
(j) explain the existence of coordinate (dative covalent) bonding as exemplified by H3O+, NH4+, Al2Cl6 and [Fe(CN)6]³⁻.

### 3.3 Metallic bonding

Candidates should be able to:
(a) explain metallic bonding in terms of the electron sea model.

### 3.4 Intermolecular forces: van der Waals forces and hydrogen bonding

Candidates should be able to:
(a) describe hydrogen bonding and van der Waals forces (permanent, temporary and induced dipole);
(b) deduce the effect of van der Waals forces between molecules on the physical properties of substances;
(c) deduce the effect of hydrogen bonding (intermolecular and intramolecular) on the physical properties of substances.

## 4 States of Matter

### 4.1 Gases

Candidates should be able to:
(a) explain the pressure and behaviour of an ideal gas using the kinetic theory;
(b) explain qualitatively, in terms of molecular size and intermolecular forces, the conditions necessary for a gas approaching ideal behaviour;
(c) define Boyle's law, Charles' law and Avogadro's law;
(d) apply the pV = nRT equation in calculations, including the determination of the relative molecular mass, Mr;
(e) define Dalton's law, and use it to calculate the partial pressure of a gas and its composition;
(f) explain the limitation of ideality at very high pressures and very low temperatures.

### 4.2 Liquids

Candidates should be able to:
(a) describe the kinetic concept of the liquid state;
(b) describe the melting of solid to liquid, vaporisation and vapour pressure using simple kinetic theory;
(c) define the boiling point and freezing point of liquids.

### 4.3 Solids

Candidates should be able to:
(a) describe qualitatively the lattice structure of a crystalline solid which is: (i) ionic, as in sodium chloride, (ii) simple molecular, as in iodine, (iii) giant molecular, as in graphite, diamond and silicon(IV) oxide, (iv) metallic, as in copper;
(b) describe the allotropes of carbon (graphite, diamond and fullerenes), and their uses.

### 4.4 Phase diagrams

Candidates should be able to:
(a) sketch the phase diagrams for water and carbon dioxide, and explain the anomalous behaviour of water;
(b) explain phase diagrams as graphical plots of experimentally determined results;
(c) interpret phase diagrams as curves describing the conditions of equilibrium between phases and as regions representing single phases;
(d) predict how a phase may change with changes in temperature and pressure;
(e) discuss vaporisation, boiling, sublimation, freezing, melting, triple and critical points of H2O and CO2;
(f) explain qualitatively the effect of a non-volatile solute on the vapour pressure of a solvent, and hence, on its melting point and boiling point (colligative properties);
(g) state the uses of dry ice.
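As a worked illustration of objective 4.1(d) (an added example with assumed numbers, not part of the syllabus): since n = m/Mr, the ideal gas equation gives Mr = mRT/(pV). For instance, 0.32 g of a gas occupying 249 cm³ at 100 kPa and 300 K gives

Mr = (0.32 × 8.31 × 300) / (100000 × 249 × 10⁻⁶) ≈ 32

consistent with the gas being O2.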
## 5 Reaction Kinetics

### 5.1 Rate of reaction

Candidates should be able to:
(a) define rate of reaction, rate equation, order of reaction, rate constant, half-life of a first-order reaction, rate-determining step, activation energy and catalyst;
(b) explain qualitatively, in terms of collision theory, the effects of concentration and temperature on the rate of a reaction.

### 5.2 Rate law

Candidates should be able to:
(a) calculate the rate constant from initial rates;
(b) predict an initial rate from rate equations and experimental data;
(c) use the titrimetric method to study the rate of a given reaction.

### 5.3 The effect of temperature on reaction kinetics

Candidates should be able to:
(a) explain the relationship of the rate constant with the activation energy and temperature using the Arrhenius equation k = A e^(-Ea/RT);
(b) use the Boltzmann distribution curve to explain the distribution of molecular energy.

### 5.4 The role of catalysts in reactions

Candidates should be able to:
(a) explain the effect of catalysts on the rate of a reaction;
(b) explain how a reaction, in the presence of a catalyst, follows an alternative path with a lower activation energy;
(c) explain the role of atmospheric oxides of nitrogen as catalysts in the oxidation of atmospheric sulphur dioxide;
(d) explain the role of vanadium(V) oxide as a catalyst in the Contact process;
(e) describe enzymes as biological catalysts.

### 5.5 Order of reactions and rate constants

Candidates should be able to:
(a) deduce the order of a reaction (zero-, first- and second-) and the rate constant by the initial rates method and graphical methods;
(b) verify that a suggested reaction mechanism is consistent with the observed kinetics;
(c) use the half-life (t½) of a first-order reaction in calculations.
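As an added note on objective 5.3(a) (illustrative, not part of the syllabus): writing the Arrhenius equation at two temperatures T1 and T2 and subtracting the logarithmic forms gives the two-temperature relation

ln(k2/k1) = -(Ea/R)(1/T2 - 1/T1)

which lets the activation energy Ea be estimated from rate constants measured at two temperatures.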
## 6 Equilibria

### 6.1 Chemical equilibria

Candidates should be able to:
(a) describe a reversible reaction and dynamic equilibrium in terms of forward and backward reactions;
(b) state the mass action law from a stoichiometric equation;
(c) deduce expressions for equilibrium constants in terms of concentrations, Kc, and partial pressures, Kp, for homogeneous and heterogeneous systems;
(d) calculate the values of the equilibrium constants in terms of concentrations or partial pressures from given data;
(e) calculate the quantities present at equilibrium from given data;
(f) apply the concept of dynamic chemical equilibrium to explain how the concentration of stratospheric ozone is affected by the photodissociation of NO2, O2 and O3 to form reactive oxygen radicals;
(g) state Le Chatelier's principle and use it to discuss the effect of catalysts, changes in concentration, pressure or temperature on a system at equilibrium in the following examples: (i) the synthesis of hydrogen iodide, (ii) the dissociation of dinitrogen tetroxide, (iii) the hydrolysis of simple esters, (iv) the Contact process, (v) the Haber process, (vi) the Ostwald process;
(h) explain the effect of temperature on the equilibrium constant from the equation ln K = -ΔH/(RT) + C.

### 6.2 Ionic equilibria

Candidates should be able to:
(a) use the Arrhenius, Brønsted-Lowry and Lewis theories to explain acids and bases;
(b) identify conjugate acids and bases;
(c) explain qualitatively the different properties of strong and weak electrolytes;
(d) explain and calculate the terms pH, pOH, Ka, pKa, Kb, pKb, Kw and pKw from given data;
(e) explain changes in pH during acid-base titrations;
(f) explain the choice of suitable indicators for acid-base titrations;
(g) define buffer solutions;
(h) calculate the pH of buffer solutions from given data;
(i) explain the use of buffer solutions and their importance in biological systems, such as the role of H2CO3/HCO3⁻ in controlling pH in blood.

### 6.3 Solubility equilibria

Candidates should be able to:
(a) define solubility product, Ksp;
(b) calculate Ksp from given concentrations and vice versa;
(c) describe the common ion effect, including buffer solutions;
(d) predict the possibility of precipitation from solutions of known concentrations;
(e) apply the concept of solubility equilibria to describe the industrial procedure for water softening.

### 6.4 Phase equilibria

Candidates should be able to:
(a) state and apply Raoult's law for two miscible liquids;
(b) interpret the boiling point-composition curves for mixtures of two miscible liquids in terms of 'ideal' behaviour or positive or negative deviations from Raoult's law;
(c) explain the principles involved in the fractional distillation of ideal and non-ideal liquid mixtures;
(d) explain the term azeotropic mixture;
(e) explain the limitations on the separation of two components forming an azeotropic mixture;
(f) explain qualitatively the advantages and disadvantages of fractional distillation under reduced pressure.
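A short illustration of objective 6.2(h) (an added example, not part of the syllabus): for a weak acid buffer, taking logarithms of the Ka expression gives

pH = pKa + log([A⁻]/[HA])

so, for example, an acetate buffer containing equal concentrations of CH3COOH and CH3COO⁻ has pH = pKa ≈ 4.76.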
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8928998708724976, "perplexity": 3525.6200012660292}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610704804187.81/warc/CC-MAIN-20210126233034-20210127023034-00068.warc.gz"}
http://gmatclub.com/forum/-post-your-questions-to-gmac-official-representative-here-133929.html?kudos=1
# Post Your Questions to GMAC Official Representative Here

Founder | 05 Jun 2012, 01:04
Dear members, this is your opportunity to raise questions about the test process, the new integrated reasoning section, as well as anything you need an official confirmation for, such as:
- Score reports
- Cancellation policy
- Identification required, etc.
If there are any questions, please feel free to post them here.

Official GMAC Representative | 19 Sep 2013, 13:25
Hi bagdbmba. Yes, that's correct. You can create customized practice sessions from practice questions only, not practice exams. So, you can customize practice sessions using the 90 questions from the free GMATPrep software and the 404 questions from Question Pack 1. You will never see questions from a practice exam in a customized practice session. To clarify:
Free GMATPrep software: 90 practice questions - customizable; 2 free practice exams - non-customizable
GMATPrep Question Pack 1: 404 additional practice questions - customizable
GMATPrep Exam Pack 1: 2 additional practice exams - non-customizable
I hope that makes sense! Thanks, Tina
_________________
Leah Customer Care Coordinator gmac.com | mba.com

Official GMAC Representative | 05 Sep 2014, 08:35
ansh0809 wrote:
Hi, I took the GMAT last month and selected 5 schools to send my scores to on the test day. I did not score quite to my expectations and I am going to take the GMAT again. My question is: should I choose a different set of schools to send my scores to this time? To be clear, my top 5 schools are still the ones I selected on the 1st attempt and I want them to see my 2nd attempt scores as well. So will sending the scores to them once allow them to see all my scores when I apply to them (and therefore I can choose a different set of schools), or do I need to resend my scores to my top 5?

If you want the schools to see your newest scores, then you will need to send the score report to the same 5 schools again.
I hope this helps!
_________________
Leah Customer Care Coordinator gmac.com | mba.com

Official GMAC Representative | 21 Nov 2014, 11:12
minwoswoh wrote:
Hello Rebecca, Hope you are doing well. Since my test day is approaching, I would really like to know your thoughts on my previously mentioned points (or at least some of them). I know they can be regarded as minor things compared to the whole GMAT preparation, but I believe your answer would provide me with a better understanding of what to expect and of how one's test-taking experience is going to be. abane has kindly replied to me but I would really appreciate to know the official response to my queries. Thank you so much!

OfficialGMAT wrote:
Hello minwoswoh, We understand the importance of details when it comes to the GMAT exam. So, we want to make sure we provide you with accurate information and answers to your questions. We are looking into these questions, and will respond as soon as we can. We appreciate your patience. Thank you, Jeremy

Hello minwoswoh, We have tracked down the answers to your questions. If you have any further questions, please do not hesitate to reach out.

1- Tutorial: since I understand that this section is timed, how much time do we have to complete it? In addition, is each screen in the tutorial timed individually or does the tutorial section have an overall timing?
The tutorial is timed as an overall section. The candidates have 4 minutes to review the tutorial.

2- Selecting 5 MBA programs to send the scores to: does this section come before, during or after the tutorial? In any case, is this section timed?
The school selection screen comes before the Tutorial. It is an untimed section.

3- Breaks and writing on the erasable scratchpad: are we allowed to write notes on the provided scratchpad during our 8-minute breaks? For example, it is very helpful to me to write down some notes about the section to come or to write down a timeframe on what times I must accomplish each question.
The candidates are not permitted to remain in the testing room during their breaks, nor are they permitted to write on their noteboards during any untimed portion/section of the exam.

4- Breaks and returning to the test center room: if, for example, I leave the test center room and go to the bathroom during a break, and when I come back to log in to the room I still have 3 minutes, am I allowed to sit in the room in front of my computer just waiting for those 3 minutes to pass? Or will the proctor make me wait until the 8 minutes are reached? Or once I enter the room, will those 3 minutes disappear and I will be prompted to the next section regardless of whether I actually had 3 minutes in my favor? Please let me know if this and my point #2 depend on the specific policy of the test center.
GMAT policy requires all candidates to leave the testing room during their break. Candidates are not permitted to sit at their workstation during their break. If a candidate returns from their break early and requests to be checked back in to the exam, the candidate must resume the exam at that time, regardless of break time remaining. This policy does not vary depending on the testing center - it is universal. It should also be noted that candidates are responsible for keeping track of their 8-minute break and the Test Administrator will not tell them when their break is done and they need to be checked back in. Candidates must also leave time in their break for check-in and check-out of the testing room, as this is not separate from the break time.

5- Background Information Questions: I understand that these questions come right after the Verbal section and before the Report or Cancel section. How many screens are there? Are these questions the same questions that I encounter in the "My Account > My Profile" section of my mba.com account (e.g. Demographic Information, Contact Information, Communication Preferences, Undergraduate Information, Work Experience Information, and Graduate Management Degree Information)? I have already filled the mandatory fields and some of the optional fields. Can I keep the same information during this stage of the test (I mean just skipping these sections since I already completed the information I wanted to complete), or is it mandatory for me to fill the remaining optional fields? Lastly, is this section timed? And if so, how much time do we have?
Unfortunately, I am not sure of how many screens there are for the BIQ section, but as far as I can tell it is only one screen. The information is Graduation Date, Undergraduate GPA, Highest Education Level, Undergrad Institution, Undergrad Major, and Intended Graduate Study. They do fill this out when they create their mba.com profile; however, I would advise against skipping it, as the information listed on the score report pulls from what is entered at the testing center. The candidates are allotted 30 seconds for this section.

Thank you and best of luck, Jeremy
_________________
Leah Customer Care Coordinator gmac.com | mba.com

Official GMAC Representative | 21 Mar 2016, 08:52
OfficialGMAT wrote:
Jeremy, You mentioned a late-2015 ETA for Exam Pack 2. Can you update this for us? This software is by far the most valuable prep tool, but desperately needs a larger question bank and more analytics so that motivated studiers can better gauge their preparedness. Has GMAC ever considered open-sourcing any aspect of this software? For example, let's assume GMAC is unwilling to reveal code or specific details related to the scoring or CAT algorithms. But maybe instead of open-sourcing the whole thing, GMAC could expand the question bank by accepting user-written questions. These user-written questions could be evaluated through a similar tool as used for "experimental" questions on the real exam, and then be uploaded into a massive question-bank database that drives the software. Alternatively, open-source the whole damn thing on github. There are a lot of really smart people on here, some percentage of which are talented software developers. So you could get thousands of hours of expert code-writing for free. Overall, there is infinite demand for more GMAC prep material. Someone there needs to figure out how to provide it.

Hello, That did not happen as originally expected (clearly - since we're now in 2016), and we are still developing a final launch date for the new software. Once I have an update on this, I'll be sure to share it here. We know there is a desire for more prep materials and are looking to provide that for our users. Thank you for your input and ideas - I will certainly share this with our product teams. Thanks, Jeremy

Exam Pack 2 is now live and available on our website: http://www.mba.com/store/store-catalog/gmat-preparation/gmatprep-exam-pack-2.aspx?utm_campaign=Global_Exam-pack-2_03212016&utm_medium=social&utm_source=gmatclub.
_________________
Leah Customer Care Coordinator gmac.com | mba.com

Official GMAC Representative | 06 Jun 2012, 11:10
balkan wrote:
Hi! I understood that the new section will have a separate score between 1 and 8, but I noticed that the number of questions for the math and verbal is changing as well. The math section will have only 30 questions and the verbal will have 45 questions. What about the time dedicated to the math and verbal sections? Is it changing as well? Thanks for your time and consideration.

Hello, Balkan. Thank you for coming to us with this question! The number of questions you will receive in the quant and verbal sections is not changing. You will still have 37 questions in the quant section, and 41 in the verbal section. You will still have 75 minutes per section. I hope that answers your question!
_________________
Leah Customer Care Coordinator gmac.com | mba.com

Official GMAC Representative | 06 Jun 2012, 11:33
abk wrote:
Good afternoon! I have to change my official name (my last name has been changed) in my profile to match my government documents. Is this a complicated process? I have yet to register for the test, if that helps. I had created an account last year, which is why this is an issue now.

Hello, ABK! Do you have a GMAT ID yet? If you have already received a GMAT ID, you will need to contact Pearson Vue to update your name. If you do not have a GMAT ID yet, you can use your new name when you create your profile. Thank you!
_________________
Leah Customer Care Coordinator gmac.com | mba.com

Official GMAC Representative | 25 Jun 2012, 12:14
GPT55 wrote:
OfficialGMAT wrote:
GPT55 wrote:
Hello Rebecca, Do you know if the 13th edition OG will have (and if yes, when?) a Kindle version, like the 12th edition has? Thanks!

This is awkward, but now I know the reason why I didn't find it. Amazon.com says it's "currently not available", thus it didn't even pop up in searches. Do you know anything about the reason for this? See here: http://www.amazon.com/dp/B008846CAC/ref ... le_ext_tmb (I could only navigate to the Kindle edition by taking a look at the preview offered on the print version's page, and then switching there to Kindle. It's otherwise inaccessible.) Edit: I contacted Amazon and they said they don't know why it's unavailable and that I should contact GMAC about it.

Thank you for letting me know about this! I passed this on to our products department and they are looking into it. I will let you know as soon as I have an answer for you.
_________________
Leah Customer Care Coordinator gmac.com | mba.com

Official GMAC Representative | 04 Mar 2013, 05:38
TheNona wrote:
I took the GMAT on Feb the 13th, booking by phone, and a few minutes ago I scheduled it online for the 3rd of April, received the confirmation email and the payment receipt, but still my profile under the title "future exams" has this sentence: "You do not have any exams scheduled." Is this normal?

It can take up to 24 hours for a GMAT exam to show up in your mba.com account. If you have the email confirmation then your appointment has been scheduled.
If it has been more than 24 hours and your appointment still has not appeared in the account, you can contact Pearson Vue to confirm.
_________________
Leah Customer Care Coordinator gmac.com | mba.com

Official GMAC Representative | 27 Mar 2013, 12:11
debayan222 wrote:
Hi Rebecca, Can you please get us any update on Exam Pack 1? When might we get to download it for our prep? Are this year's R1 applicants going to get to use it well in advance? Multiple sources including NBC News have reported that GMAC will release an add-on package, Exam Pack 1, featuring additional computer-adaptive practice GMAT exams with new questions, later this year. This is a much needed addition. I think most of us will look forward to the same.

You can purchase the supplementary question pack for GMATPrep here: http://www.mba.com/store/product-info.a ... ID=80.0044. We have another supplementary pack scheduled to come out later this year, but we do not yet have a target release date. Thank you!
_________________
Leah Customer Care Coordinator gmac.com | mba.com

Director | 24 Apr 2014, 05:31
gspatwal81 wrote:
Dear GMAC team, It's been more than 3 months and I am yet to get the answers to my questions...so much for your customer service. I do NOT want to give up on my thought that GMAC is a reasonable and ethical organization. GMAC, though, has not been acting like one. Sincerely, Govind Singh Patwal

Gspatwal81, your issue has been responded to and closed; your requests and arguments at this point are getting a bit ridiculous. While it is unfortunate the computer crashed the first time you took the test, I can assure you, from personal experience, the first score has no impact on your second score. Your second score was problem free, so I don't really understand why you are still spamming this forum. They didn't reduce your 2nd score because of your 1st score. I got a 70 point increase between my 1st and 2nd try, and there are people on this forum - who are easy to find - who obtained 150+ GMAT jumps between the 1st and 2nd time. Many people do worse on the real test than the practice test...sometimes because of nerves, sometimes because they had previously seen the practice questions that showed up on their practice tests (which inflated their scores), sometimes because the practice test isn't well calibrated, etc. Consider this post a moderator warning to chill out. I'd also recommend being careful where you put your name on the internet; this conversation stream would look VERY negative if an adcom stumbled upon it when you submit apps to business school.

Official GMAC Representative | 05 Sep 2014, 08:33
sa2222 wrote:
Hello Rebecca, Thanks for your reply. I do have another question. I have scheduled my GMAT appointment in Boston, MA. Usually after scheduling an appointment, I thought GMAC sends a mail to your mailing address - with appointment confirmation and a CD. Obviously, I have received an appointment confirmation in email and I am able to download the tests through mba.com (official website).
But, just wanted to be sure - if I have missed the mail or GMAC no longer sends the appointment confirmation mail with the CD. Thanks

No need to worry! We do not mail anything physically. Thanks!
_________________
Leah Customer Care Coordinator gmac.com | mba.com

Intern | 29 Oct 2014, 15:21
Hi, I answered your queries in the attached file (there is some problem on this site that prevents me from submitting the queries in a mail format). Cheers!

Official GMAC Representative | 14 Nov 2014, 07:13
Vetrik wrote:
Hi GMAC representative, I have my GMAT in another ten days. A surgery was done on my left ankle [due to an ankle fracture] two weeks back. Due to this, I cannot walk using my left leg temporarily and I cannot make movements in my left ankle. Hence, I am using crutches to walk. Will I be allowed to bring my crutches to my seat in the test center? And in case I am not able to keep my left foot hanging/placed softly on the floor, will I be given a stool/chair to support my leg? Do I need to get prior permission for this case, and how do I do so? Thank you.

Hello Vetrik, Thank you for reaching out. Please email testingaccommodations@gmac.com with your question and they will work with you to ensure you have what you need during your test. I've notified them in advance, so they will be expecting your message. If there's anything further we can help with, please let us know. Thanks, Jeremy
_________________
Leah Customer Care Coordinator gmac.com | mba.com

Official GMAC Representative | 14 Nov 2014, 08:03
abane wrote:
Hello, I mailed a query last week to PearsonVUE regarding an issue with additional GMAT score report order charges. However, since then, I have not received ANY response or update to my query/complaint - not even an automated email, OR a response to my follow-up email on the status. I have also tried calling several times (STD) to the Asia-Pacific Customer Service (+91 120-439-7830) during India business hours but to NO avail. The automated response does not respond to any of my inputs and merely runs on a loop no matter which option I select. This is the result even after multiple attempts at various times of the day! It is hard to comprehend that an institute like PearsonVUE offers such poor service and cannot even establish a reliable way for customers to reach them. The only other option I have is to try the other international PearsonVUE centers, but looking at numerous complaints, I am not very hopeful (not to mention less than enthused about having to make costly international calls because the number for my region/country DID NOT WORK). I am stuck with a transaction charge for a score report which I am not even sure I ordered (I never received a post-transaction-notification email for this, as for earlier orders). How soon can I expect a turnaround on the status? I can provide additional information specific to my complaint if it means getting some response. Thank you.

Hello abane, If you could submit your email or GMAT ID to me via direct message, we can reach out to Pearson to help resolve this issue.
Thanks, Jeremy
_________________
Leah Customer Care Coordinator gmac.com | mba.com

Official GMAC Representative | 28 Jan 2015, 12:24
Im2bz2p345 wrote:
I posted a topic here (gmat-prep-software-can-you-review-answers-191649.html) without realizing there is a separate topic devoted to GMAT related questions. My question from the topic mentioned above: "I was wondering if in the latest version of the official GMAT Prep software, are you able to 'go back' and review your answers after you've completed a test? I know that in earlier versions of the software, once you quit the browser test, your answers were NOT saved... so there was no way to go back and review the test with your answers."

Hello, Thanks, Jeremy
_________________
Leah Customer Care Coordinator gmac.com | mba.com

GMAC Cancels Scores [Mass cancellation in Bangalore]
Official GMAC Representative | 05 May 2015, 05:27
gmatquant25 wrote:
Hello GMAC Officials: I am feeling tired and extremely distressed by the findings of the APAC GMAT officials. I raised a concern and my incident ticket number is 206868407. It is my humble request to the GMAC official here to kindly intervene in the ongoing investigation conducted by the APAC GMAT Team official. The complaint was against unfair practice by the test coordinators at the test center - THINK EDUCATION ADVISORY SERVICES LLP (Bangalore, India). The first response by the APAC GMAT Team official, that the wait time was less than 20 mins and then the exam was resumed, is absolutely false. I can be as sure as death that the exam was resumed after 30 mins of break. So, I have replied to his incorrect finding with hard evidence. I am pretty sure that the next finding will be that "we did not see anyone behind you during the test". Before it gets too late, I kindly request the GMAC officials to have someone assigned and conduct a fair investigation. I am ready to answer any question they have for me. My questions are: why is the investigation not conducted in a fair manner? What have they done to look into my first concern - just called up the TA? I don't know how fairly the tests are conducted and how biased the reports are after this incident, but I am starting to lose faith. I gave the evidence that I received a call from the test center to come back and write the test at a certain hour (narrated in the mail to the APAC official). Did they not see in the recording what time I left the chair after the outage started and at what time I sat on the chair after I was called back to the test center? Or are the videos tampered with? My only fault is that I chose the wrong GMAT center to appear for the test, one that has highly unprofessional test coordinators. I have raised this concern to <pvapcustomerservice@pearson.com>. I request the GMAC officials on GC to please understand the turmoil one has to go through because of this. Even if I reappear for the test again, I will be scared that this incident could repeat and no one is there to listen to me. Please guide me in the right direction, as I know the subsequent findings are going to be false like the first one.
UPDATE: I spoke to a customer representative named Nisha & her supervisor named Murphy.
Nisha updated me on call that the staff at the the test center have acknowledged that there was a server error in between the test. However, I have not received any such update on mail. There is no mention of tampering with the system that I have clearly specified in the Mail. For some reasons the GMAC - Asia team is showing excessive unfair partiality to the TAs at the test center. Moreover,trying to manipulate statements so that I am not entitled to retake the test . What they do not understand is a candidate will not go through this much effort to ask for a fair chance if he/she is manipulating . Murphy ( customer representative) asked me to wait for more days and then wait for a response from Aditya P who is doing the research . How much long they want me to wait now ....?It takes 1 week from the coordinator to reply and that too incorrectly . I am tired of following up with them. Regards Hello, I understand your deep concern about this issue. We have your information and are escalating this through the proper channels. Thank you for providing the level of detail you have here and on other channels. This really helps us to assist you. I will be sure to reach out as soon as I have an update for you. Thank you. Jeremy _________________ Leah Customer Care Coordinator gmac.com | mba.com Official GMAC Representative Joined: 04 Jun 2012 Posts: 342 Followers: 316 Kudos [?]: 86 [1] , given: 1 ### Show Tags 05 May 2015, 05:42 1 KUDOS Expert's post OfficialGMAT wrote: gmatquant25 wrote: Hello GMAC Officials : I am feeling tired and extremely distressed on the findings by APAC GMAT officals . I raised a concern and my incident ticket number is 206868407. It is my humble request to GMAC official here to kindly intervene in the ongoing investigation conducted by APAC GMAT Team official . The complain was against the unfair practice by the test coordinators at test center - THINK EDUCATION ADVISORY SERVICES LLP. ( Bangalore , India) The first response by APAC GMAT Team official that wait time was less than 20 mins and then the exam was resumed is absolutely false .I can be as sure as death that exam conducted after 30 Mins of break . so , I have replied to his incorrect finding with the hard core evidence . I am pretty sure that the next finding will be that "we did not see any one behind you during test" .Before it gets too late , I kindly request the GMAC officials to have some one assigned and conduct a fair investigation. I am ready to answer any question they have for me . My questions are -why is the investigation not conducted in a fair manner ? What have they done to find out my first concern - Just called up the TA ? I don't know how fairly the tests are conducted and how biased the reports are after this incident , but I am starting to lose faith . I gave the evidence that I received a call from the test center to come back and write the test at cetain hour ( narrated in the mail to APAC offical ) . Did they not see the recording what time I left the chair after the outage started and at what time I sat on the chair after I was called to the test center . Or are the videos tampered ? My only fault is - I choose a wrong GMAT Center to appear for the test that has highly unprofessional test coordinators . I have raised this concern to <pvapcustomerservice@pearson.com> I request the GMAC officials in GC to please understand the turmoil one has to go through because of this . 
Hello, I understand your deep concern about this issue. [...]

Update: Our Customer Care team is currently following up with Pearson VUE's coordinators to determine next steps. We will get you this information as soon as possible, of course, but due to the processes that must be completed, it could take as long as two working days to provide you with a response. Thank you.

_________________
Leah, Customer Care Coordinator
gmac.com | mba.com

### 05 May 2015, 08:37

OfficialGMAT wrote:
OfficialGMAT wrote:
gmatquant25 wrote: Hello GMAC officials: I am feeling tired and extremely distressed [...]
Hello, I understand your deep concern about this issue. [...]

Update: Our Customer Care team is currently following up with Pearson VUE's coordinators to determine next steps. [...]

Update: This incident has been forwarded to the Exam Review Team for further investigation. You will be contacted via email as soon as the investigation has been completed. If we do not have a final answer/update by the end of this week, you will receive a message informing you of where your issue is in the process. Thank you for your patience.

_________________
Leah, Customer Care Coordinator
gmac.com | mba.com

### 20 Nov 2015, 11:23

Verve42 wrote:
Hi there. I would like a refund for my GMAT test. What is the refund process? How much of the US$250 can I get back? Please reply soon; my GMAT test date is approaching. Regards, Verve42

I am sorry to hear of your need to cancel. If you cancel your appointment more than seven calendar days before the scheduled test date and time: US$80.00 refund. Within seven calendar days before the scheduled test date and time: no refund.
However, if you are testing in South Korea and cancel your appointment more than seven calendar days before the scheduled test date and time: US$150.00 refund. Within seven calendar days before the scheduled test date and time: US$50.00 refund.

Thanks, Jeremy

_________________
Leah, Customer Care Coordinator
gmac.com | mba.com
https://www.lmfdb.org/ModularForm/GL2/Q/holomorphic/75/2/e/b/
# Properties

- Label: 75.2.e.b
- Level: $75$
- Weight: $2$
- Character orbit: 75.e
- Analytic conductor: $0.599$
- Analytic rank: $0$
- Dimension: $4$
- CM discriminant: $-3$
- Inner twists: $8$

## Newspace parameters

- Level: $N = 75 = 3 \cdot 5^{2}$
- Weight: $k = 2$
- Character orbit: $[\chi] = {}$75.e (of order $4$, degree $2$, minimal)

## Newform invariants

- Self dual: no
- Analytic conductor: $0.598878015160$
- Analytic rank: $0$
- Dimension: $4$
- Relative dimension: $2$ over $\mathbb{Q}(i)$
- Coefficient field: $\mathbb{Q}(i, \sqrt{6})$
- Defining polynomial: $x^{4} + 9$
- Coefficient ring: $\mathbb{Z}[a_1, \ldots, a_{4}]$
- Coefficient ring index: $1$
- Twist minimal: yes
- Sato-Tate group: $\mathrm{U}(1)[D_{4}]$

## $q$-expansion

Coefficients of the $q$-expansion are expressed in terms of a basis $1, \beta_1, \beta_2, \beta_3$ for the coefficient ring described below. We also show the integral $q$-expansion of the trace form.

$$f(q) = q + \beta_1 q^{3} - 2 \beta_{2} q^{4} + \beta_{3} q^{7} + 3 \beta_{2} q^{9} + O(q^{10})$$

$$f(q) = q + \beta_1 q^{3} - 2 \beta_{2} q^{4} + \beta_{3} q^{7} + 3 \beta_{2} q^{9} - 2 \beta_{3} q^{12} - 3 \beta_1 q^{13} - 4 q^{16} + \beta_{2} q^{19} - 3 q^{21} + 3 \beta_{3} q^{27} + 2 \beta_1 q^{28} + 7 q^{31} + 6 q^{36} - 4 \beta_{3} q^{37} - 9 \beta_{2} q^{39} + 7 \beta_1 q^{43} - 4 \beta_1 q^{48} + 4 \beta_{2} q^{49} + 6 \beta_{3} q^{52} + \beta_{3} q^{57} - 13 q^{61} - 3 \beta_1 q^{63} + 8 \beta_{2} q^{64} - 9 \beta_{3} q^{67} - 8 \beta_1 q^{73} + 2 q^{76} - 4 \beta_{2} q^{79} - 9 q^{81} + 6 \beta_{2} q^{84} + 9 q^{91} + 7 \beta_1 q^{93} + 11 \beta_{3} q^{97} + O(q^{100})$$

$$\operatorname{Tr}(f)(q) = 4 q - 16 q^{16} - 12 q^{21} + 28 q^{31} + 24 q^{36} - 52 q^{61} + 8 q^{76} - 36 q^{81} + 36 q^{91} + O(q^{100})$$

Basis of the coefficient ring in terms of a root $\nu$ of $x^{4} + 9$:

$$\beta_{1} = \nu, \qquad \beta_{2} = \nu^{2}/3, \qquad \beta_{3} = \nu^{3}/3;$$
$$\nu = \beta_1, \qquad \nu^{2} = 3\beta_{2}, \qquad \nu^{3} = 3\beta_{3}.$$

## Character values

We give the values of $\chi$ on generators for $\left(\mathbb{Z}/75\mathbb{Z}\right)^\times$.

| $n$ | $26$ | $52$ |
| --- | --- | --- |
| $\chi(n)$ | $-1$ | $-\beta_{2}$ |

## Embeddings

For each embedding $\iota_m$ of the coefficient field, the values $\iota_m(a_n)$ are shown below.

| Label | $\iota_m(\nu)$ | $a_{2}$ | $a_{3}$ | $a_{4}$ | $a_{5}$ | $a_{6}$ | $a_{7}$ | $a_{8}$ | $a_{9}$ | $a_{10}$ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 32.1 | $-1.22474 + 1.22474i$ | $0$ | $-1.22474 + 1.22474i$ | $2.00000i$ | $0$ | $0$ | $1.22474 + 1.22474i$ | $0$ | $-3.00000i$ | $0$ |
| 32.2 | $1.22474 - 1.22474i$ | $0$ | $1.22474 - 1.22474i$ | $2.00000i$ | $0$ | $0$ | $-1.22474 - 1.22474i$ | $0$ | $-3.00000i$ | $0$ |
| 68.1 | $-1.22474 - 1.22474i$ | $0$ | $-1.22474 - 1.22474i$ | $-2.00000i$ | $0$ | $0$ | $1.22474 - 1.22474i$ | $0$ | $3.00000i$ | $0$ |
| 68.2 | $1.22474 + 1.22474i$ | $0$ | $1.22474 + 1.22474i$ | $-2.00000i$ | $0$ | $0$ | $-1.22474 + 1.22474i$ | $0$ | $3.00000i$ | $0$ |
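As a quick numerical sanity check on the embeddings table (an illustrative sketch, not part of the database page itself): the values $\iota_m(\nu)$ are the four complex roots of the defining polynomial $x^4 + 9$, whose real and imaginary parts are $\pm\sqrt{3/2} \approx \pm 1.22474$.

```python
# Sanity check: the embedding values of nu are the complex roots of x^4 + 9.
import numpy as np

roots = np.roots([1, 0, 0, 0, 9])  # coefficients of x^4 + 0x^3 + 0x^2 + 0x + 9
print(np.sort_complex(roots))      # four roots (+/-)1.22474 + (+/-)1.22474i
print(np.sqrt(1.5))                # 1.224744871391589
```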
## Inner twists

| Char | Parity | Ord | Mult | Type |
| --- | --- | --- | --- | --- |
| 1.a | even | 1 | 1 | trivial |
| 3.b | odd | 2 | 1 | CM by $\mathbb{Q}(\sqrt{-3})$ |
| 5.b | even | 2 | 1 | inner |
| 5.c | odd | 4 | 2 | inner |
| 15.d | odd | 2 | 1 | inner |
| 15.e | even | 4 | 2 | inner |

## Twists

By twisting character orbit:

| Char | Parity | Ord | Mult | Type | Twist | Dim |
| --- | --- | --- | --- | --- | --- | --- |
| 1.a | even | 1 | 1 | trivial | 75.2.e.b | 4 |
| 3.b | odd | 2 | 1 | CM | 75.2.e.b | 4 |
| 4.b | odd | 2 | 1 |  | 1200.2.v.f | 4 |
| 5.b | even | 2 | 1 | inner | 75.2.e.b | 4 |
| 5.c | odd | 4 | 2 | inner | 75.2.e.b | 4 |
| 12.b | even | 2 | 1 |  | 1200.2.v.f | 4 |
| 15.d | odd | 2 | 1 | inner | 75.2.e.b | 4 |
| 15.e | even | 4 | 2 | inner | 75.2.e.b | 4 |
| 20.d | odd | 2 | 1 |  | 1200.2.v.f | 4 |
| 20.e | even | 4 | 2 |  | 1200.2.v.f | 4 |
| 60.h | even | 2 | 1 |  | 1200.2.v.f | 4 |
| 60.l | odd | 4 | 2 |  | 1200.2.v.f | 4 |

By twisted newform orbit:

| Twist | Dim | Char | Parity | Ord | Mult | Type |
| --- | --- | --- | --- | --- | --- | --- |
| 75.2.e.b | 4 | 1.a | even | 1 | 1 | trivial |
| 75.2.e.b | 4 | 3.b | odd | 2 | 1 | CM |
| 75.2.e.b | 4 | 5.b | even | 2 | 1 | inner |
| 75.2.e.b | 4 | 5.c | odd | 4 | 2 | inner |
| 75.2.e.b | 4 | 15.d | odd | 2 | 1 | inner |
| 75.2.e.b | 4 | 15.e | even | 4 | 2 | inner |
| 1200.2.v.f | 4 | 4.b | odd | 2 | 1 |  |
| 1200.2.v.f | 4 | 12.b | even | 2 | 1 |  |
| 1200.2.v.f | 4 | 20.d | odd | 2 | 1 |  |
| 1200.2.v.f | 4 | 20.e | even | 4 | 2 |  |
| 1200.2.v.f | 4 | 60.h | even | 2 | 1 |  |
| 1200.2.v.f | 4 | 60.l | odd | 4 | 2 |  |

## Hecke kernels

This newform subspace can be constructed as the kernel of the linear operator $T_{2}$ acting on $S_{2}^{\mathrm{new}}(75, [\chi])$.

## Hecke characteristic polynomials

| $p$ | $F_p(T)$ |
| --- | --- |
| $2$ | $T^{4}$ |
| $3$ | $T^{4} + 9$ |
| $5$ | $T^{4}$ |
| $7$ | $T^{4} + 9$ |
| $11$ | $T^{4}$ |
| $13$ | $T^{4} + 729$ |
| $17$ | $T^{4}$ |
| $19$ | $(T^{2} + 1)^{2}$ |
| $23$ | $T^{4}$ |
| $29$ | $T^{4}$ |
| $31$ | $(T - 7)^{4}$ |
| $37$ | $T^{4} + 2304$ |
| $41$ | $T^{4}$ |
| $43$ | $T^{4} + 21609$ |
| $47$ | $T^{4}$ |
| $53$ | $T^{4}$ |
| $59$ | $T^{4}$ |
| $61$ | $(T + 13)^{4}$ |
| $67$ | $T^{4} + 59049$ |
| $71$ | $T^{4}$ |
| $73$ | $T^{4} + 36864$ |
| $79$ | $(T^{2} + 16)^{2}$ |
| $83$ | $T^{4}$ |
| $89$ | $T^{4}$ |
| $97$ | $T^{4} + 131769$ |
https://stats.stackexchange.com/questions/23547/train-nn-or-svm-to-classify-stock-signals
# Train NN or SVM to classify stock signals

I am applying a neural network and an SVM to predict buy/hold/sell signals. I have trained the NN and the SVM in R, using the nnet function for the NN and the svm function for the SVM. I provided 20,000 data points for training and 2,000 data points for testing. The training data set contains 10-15 technical indicators plus the buy/hold/sell signals. The problem I am having is that it does not predict the buy/hold/sell signals with good accuracy on the testing data. I have used a sigmoid function in nnet and a radial kernel in svm. Any suggestions on how to improve the accuracy of the prediction?

• Use cross-validation; basically you need to find the best values of C and gamma to improve the accuracy of the prediction. Hope this helps... – lakesh Feb 21 '12 at 14:39
• I tried various values of C and gamma, varying between 1-100 for C and 0.001-1 for gamma. Still no success. – user395882 Feb 22 '12 at 5:05
• Sometimes the gamma values can go higher as well... like 8... – lakesh Feb 22 '12 at 9:05
• Look at this question: stackoverflow.com/questions/9047459/… It was posted by me... look at Amro's answer... – lakesh Feb 22 '12 at 9:06
• Or you could use grid.py for grid search. – lakesh Feb 22 '12 at 9:29

In the work I've done, much lower values of C work better: in the range of $10^{-4}$ to $10^{-2}$. 1-100 is a pretty high value of C, at least for the data I've worked with, which means you are not allowing much 'slack', and so it's not surprising that your model over-fits the data. I would recommend trying much smaller values of C, in order-of-magnitude increments.

Another alternative is to try nu-SVM rather than C-SVM. The parameter nu ranges from 0 to 1 (0.1 to 0.8 in practice) and is much more intuitive: 0.1 means a small proportion of your data points are support vectors (and therefore you have a narrow margin and little slack); 0.8 means a very large percentage are support vectors (and therefore a wide margin and a good deal of slack).
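A minimal sketch of the advice above (order-of-magnitude steps for C, plus the nu-SVM variant). It is in Python with scikit-learn rather than the R nnet/svm functions the question uses, and `X`/`y` are random placeholders standing in for the indicator matrix and buy/hold/sell labels:

```python
# Hypothetical sketch: coarse grid search over C and gamma for an RBF-kernel
# SVM, with C stepped in orders of magnitude as suggested in the answer.
import numpy as np
from sklearn.model_selection import GridSearchCV, TimeSeriesSplit
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC, NuSVC

X = np.random.randn(1000, 12)           # placeholder: 12 technical indicators
y = np.random.choice([-1, 0, 1], 1000)  # placeholder: sell / hold / buy labels

pipe = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
grid = {
    "svc__C": [1e-4, 1e-3, 1e-2, 1e-1, 1, 10],  # orders of magnitude
    "svc__gamma": [1e-3, 1e-2, 1e-1, 1, 8],
}
# TimeSeriesSplit avoids training on future bars and testing on past ones.
search = GridSearchCV(pipe, grid, cv=TimeSeriesSplit(n_splits=5))
search.fit(X, y)
print(search.best_params_, search.best_score_)

# nu-SVM alternative: nu in (0, 1] controls the fraction of support vectors.
nu_model = make_pipeline(StandardScaler(), NuSVC(kernel="rbf", nu=0.3)).fit(X, y)
```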
https://hal.science/hal-01351604
Hodge-Dirac, Hodge-Laplacian and Hodge-Stokes operators in L^p spaces on Lipschitz domains

Journal article, Revista Matemática Iberoamericana, 2018.
Authors: Alan McIntosh, Sylvie Monniaux

Abstract: This paper concerns Hodge-Dirac operators $D = d + \delta$ acting in $L^p(\Omega, \Lambda)$, where $\Omega$ is a bounded open subset of $\mathbb{R}^n$ satisfying some kind of Lipschitz condition, $\Lambda$ is the exterior algebra of $\mathbb{R}^n$, $d$ is the exterior derivative acting on the de Rham complex of differential forms on $\Omega$, and $\delta$ is the interior derivative with tangential boundary conditions. In $L^2(\Omega, \Lambda)$, $\delta = d^*$ and $D$ is self-adjoint, thus having bounded resolvents $\{(I + itD)^{-1}\}_{t\in\mathbb{R}}$ as well as a bounded functional calculus in $L^2(\Omega, \Lambda)$. We investigate the range of values $p_H < p < p^H$ about $p = 2$ for which $D$ has bounded resolvents and a bounded holomorphic functional calculus in $L^p(\Omega, \Lambda)$. On domains which we call very weakly Lipschitz, we show that this is the same range of values as that for which $L^p(\Omega, \Lambda)$ has a Hodge (or Helmholtz) decomposition, being an open interval that includes $2$. The Hodge-Laplacian $\Delta$ is the square of the Hodge-Dirac operator, i.e. $-\Delta = D^2$, so it also has a bounded functional calculus in $L^p(\Omega, \Lambda)$ when $p_H < p < p^H$. But the Stokes operator with Hodge boundary conditions, which is the restriction of $-\Delta$ to the subspace of divergence-free vector fields in $L^p(\Omega, \Lambda^1)$ with tangential boundary conditions, has a bounded holomorphic functional calculus for further values of $p$, namely for $\max\{1, p_{HS}\} < p < p^H$, where $p_{HS}$ is the Sobolev exponent below $p_H$, given by $1/p_{HS} = 1/p_H + 1/n$, so that $p_{HS} < 2n/(n+2)$. In 3 dimensions, $p_{HS} < 6/5$. We show also that for bounded strongly Lipschitz domains $\Omega$, $p_H < 2n/(n+1)$ and $2n/(n-1) < p^H$, in agreement with the known results that $p_H < 4/3 < 4 < p^H$ in dimension 2, and $p_H < 3/2 < 3 < p^H$ in dimension 3. In both dimensions 2 and 3, $p_{HS} < 1$, implying that the Stokes operator has a bounded functional calculus in $L^p(\Omega, \Lambda^1)$ when $\Omega$ is strongly Lipschitz and $1 < p < p^H$.

Dates and versions: hal-01351604, version 1 (04-08-2016)

Cite: Alan McIntosh, Sylvie Monniaux. Hodge-Dirac, Hodge-Laplacian and Hodge-Stokes operators in L^p spaces on Lipschitz domains. Revista Matemática Iberoamericana, 2018, 34 (4), pp.1711-1753. ⟨hal-01351604⟩
http://gmatclub.com/forum/photovoltaic-power-plants-produce-electricity-from-sunlight-16327.html
# Photovoltaic power plants produce electricity from sunlight.

### 11 May 2005, 16:08

12. Photovoltaic power plants produce electricity from sunlight. As a result of astonishing recent technological advances, the cost of producing electric power at photovoltaic power plants, allowing for both construction and operating costs, is one-tenth of what it was 20 years ago, whereas the corresponding cost for traditional plants, which burn fossil fuels, has increased. Thus, photovoltaic power plants offer a less expensive approach to meeting demand for electricity than do traditional power plants.

The conclusion of the argument is properly drawn if which one of the following is assumed?

(A) The cost of producing electric power at traditional plants has increased over the past 20 years.
(B) Twenty years ago, traditional power plants were producing 10 times more electric power than were photovoltaic plants.
(C) None of the recent technological advances in producing electric power at photovoltaic plants can be applied to producing power at traditional plants.
(D) Twenty years ago, the cost of producing electric power at photovoltaic plants was less than 20 times the cost of producing power at traditional plants.
(E) The cost of producing electric power at photovoltaic plants is expected to decrease further, while the cost of producing power at traditional plants is not expected to decrease.

### 12 May 2005, 03:46

Between B and D, D is straightforward. But B also seems fine with the following reasoning. Twenty years ago, assume that for $100 the photovoltaic plants (P) were producing x power. For the same amount, the traditional plants (T) were producing 10 times that, i.e. 10x power. Now, as the cost of producing power from P has fallen tenfold, $100 buys 10x power from P. However, since by assumption T's costs have increased, $100 now buys less than 10x power from T; assume 9x. Hence the stated conclusion is valid.

### 12 May 2005, 05:54

I will go for C. I don't think B and D are acceptable because one can't be sure of the amount of the increase in the cost of traditional plants. C, because even if the cost of traditional plants increased and the new technology were applicable to the burning of fossil fuels, it would still be possible for that procedure to turn out less expensive than it was 20 years ago.
_________________
It's not over until it's OVER!

### 12 May 2005, 10:10

"C". No explanation needed!

### 12 May 2005, 10:18

Meenu, B is wrong; your comparison for B doesn't hold. According to the stem, 20 years ago P and T were not producing x power for the same amount. It should be like this, as the cost of x power: P(before) 100, P(now) 100/10 = 10; T(before) 120, T(now) 120/5 (say) = 24. C is also wrong, because "tech advances for P cannot be applied to T" is obvious, a prerequisite rather than an assumption. D is best.

_________________
When you are about to make ends meet, someone moves the ends.

### 12 May 2005, 11:04

I pick D. We are comparing cost here, so the technology in C is irrelevant. In B, even if we know the traditional plants produced more electricity, we still cannot figure out the cost.

### 12 May 2005, 21:16

C is the most logical. It is possible that the new technology could be applied to the traditional fossil-burning plants.

### 13 May 2005, 06:56

The OA is D.

### 13 May 2005, 22:27

Can you please elaborate on D? I'm still not very clear. Do we have to show that the cost of producing electric power at photovoltaic plants has remained lower than that of producing power at traditional plants throughout, that is, 20 years before (as explained in D), with the rest in the question stem? Isn't it possible that, though the cost at photovoltaic plants fell to 1/10 of its former level, it was not necessarily lower than at traditional plants? Perhaps earlier they were cheaper and are now 1/10 of the original cost (not compared with traditional plants). Best.

### 13 May 2005, 23:07

D is the best answer, IMO; however, it would fit the question better if the operational costs of regular power plants "had increased significantly" instead of simply "increased". The assumption in D increases the likelihood of solar plants' cost dropping below regular plants' cost.
https://gitee.com/nlfox/bugplatform/blob/master/artisan
## nlfox / bugplatform

artisan, 1.61 KB. nlfox committed on 2015-12-12 21:41: init

#!/usr/bin/env php
<?php

/*
|--------------------------------------------------------------------------
| Register The Auto Loader
|--------------------------------------------------------------------------
|
| Composer provides a convenient, automatically generated class loader
| for our application. We just need to utilize it! We'll require it
| into the script here so that we do not have to worry about the
| loading of any of our classes "manually".
|
*/

require __DIR__.'/bootstrap/autoload.php';

$app = require_once __DIR__.'/bootstrap/app.php';

/*
|--------------------------------------------------------------------------
| Run The Artisan Application
|--------------------------------------------------------------------------
|
| When we run the console application, the current CLI command will be
| executed in this console and the response sent back to a terminal
| or another output device for the developers. Here goes nothing!
|
*/

$kernel = $app->make(Illuminate\Contracts\Console\Kernel::class);

$status = $kernel->handle(
    $input = new Symfony\Component\Console\Input\ArgvInput,
    new Symfony\Component\Console\Output\ConsoleOutput
);

/*
|--------------------------------------------------------------------------
| Shutdown The Application
|--------------------------------------------------------------------------
|
| Once Artisan has finished running, we will fire off the shutdown events
| so that any final work may be done by the application before we shut
| down the process. This is the last thing to happen to the request.
|
*/

$kernel->terminate($input, $status);

exit($status);
http://mahrabu.blogspot.com/2008/05/blog-post_12.html
## Monday, May 12, 2008

### ברוך שכיונתי (roughly, "blessed is He that I arrived at the same idea")

A while ago I said:

You might say that an inverse-square force is infinite when r=0, but I would respond that classical electromagnetism doesn't really allow point charges, but only finite-density charge distributions. This probably means we should be a little bit more careful about using point charges in all our examples, but they're useful approximations if we don't think too hard about it. But point charges can't exist because they would result in infinite electric fields, which result in an infinite energy density, and if you integrate the energy over any finite volume containing a point charge, you get infinite energy.

So I certainly wasn't the first one with that idea. The Feynman Lectures, volume II chapter 8, does the same integral, gets an infinite result, and concludes:

We must conclude that the idea of locating the energy in the field is inconsistent with the assumption of the existence of point charges. One way out of the difficulty would be to say that elementary charges, such as an electron, are not points but are really small distributions of charge. Alternatively, we could say that there is something wrong in our theory of electricity at very small distances, or with the idea of the local conservation of energy. There are difficulties with either point of view. These difficulties have never been overcome; they exist to this day.

I'm going to go with choice B ("there is something wrong in our theory of electricity at very small distances"), where "our theory of electricity" means classical electromagnetism. In quantum mechanics, even "point charges" aren't really localized at a point. But then I wonder what Feynman meant when he said "these difficulties ... exist to this day", since he himself was one of the main people responsible for quantum electrodynamics. So I'll have to learn QED one day and get back to you.

#### 1 comment:

1. Much as I enjoy reading all of your posts, it's nice to see a bit of physics again :).
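For reference, the divergent integral the post describes (the one Feynman does in Volume II, Chapter 8) takes only a couple of lines in SI units; this is a sketch, with $q$ a point charge at the origin, $R$ any finite radius, and field energy density $u = \tfrac{\epsilon_0}{2}E^2$:

$$E(r) = \frac{q}{4\pi\epsilon_0 r^2}, \qquad u(r) = \frac{\epsilon_0}{2}E(r)^2 = \frac{q^2}{32\pi^2\epsilon_0 r^4},$$

$$U = \int_{r<R} u\, dV = \int_0^R \frac{q^2}{32\pi^2\epsilon_0 r^4}\, 4\pi r^2\, dr = \frac{q^2}{8\pi\epsilon_0} \int_0^R \frac{dr}{r^2} = \frac{q^2}{8\pi\epsilon_0}\left[-\frac{1}{r}\right]_0^R \to \infty .$$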
http://mathhelpforum.com/differential-equations/176916-spring-mass-undamped-motion-print.html
# spring mass undamped motion

• April 5th 2011, 12:02 PM, duaneg37

A mass weighing 10 pounds stretches a spring 1/4 foot. This mass is removed and replaced with a mass of 1.6 slugs, which is initially released from a point 1/3 foot above the equilibrium position with a downward velocity of 5/4 ft/s. At what time does the mass attain a displacement below the equilibrium position numerically equal to 1/2 the amplitude?

My position function is $x(t)=-\frac{1}{3}\cos(160t)+\frac{1}{128}\sin(160t)$. I have tried to put it into the form $x(t)=A\sin(\omega t+\phi)$ and I am getting $x(t)=\frac{1}{3}\sin(160t+\frac{\pi}{2})$. The time I am getting is $\frac{\pi}{480}$ s initially, but I don't know if this is right. I'm also having trouble writing the answer as a series. Can anyone help me? Thanks

• April 5th 2011, 01:25 PM, TheEmptySet

Quote: Originally Posted by duaneg37 [...]

First, I don't think your ODE is correct. By Hooke's law we have $F=kx \iff 10=k(0.25) \iff k=40$. Now the ODE is $mx''=-kx \iff x''+\frac{1}{5}x=0$, so the solution should have the form $x(t)=c_1\cos\left( \frac{t}{5}\right)+c_2\sin\left( \frac{t}{5}\right)$. Now use the initial conditions.

• April 5th 2011, 03:18 PM, duaneg37

I thought I had to convert the mass into slugs by $W=mg$, which gives the force as $320$ slugs $=k(\frac{1}{4})$ for Hooke's law. This gives $k=1280$. Then I found $m$ for the weight of $1.6$ slugs: $1.6=m(32)$, where $m=0.05$. This gives $\omega^{2}=25600$. Is this incorrect? Shouldn't your equation be $x(t)=c_{1}\cos 2t+c_{2}\sin 2t$? I did it again without rounding off as much and got $x(t)=\frac{\sqrt{16393}}{384}\sin(160t-1.547)$, and the time I got was $t=.026+\frac{n\pi}{160}$ s, for $n=0,1,2,3,\ldots$ Am I on the wrong track with this? Thanks a lot!

• April 5th 2011, 03:40 PM, TheEmptySet

Weight is a force; slugs are mass. So in Hooke's law you need a force, and since pounds are a force you do not need to do any conversions.

• April 5th 2011, 04:29 PM, topsquark

Quote: Originally Posted by TheEmptySet: $mx''=-kx \iff x''+\frac{1}{5}x=0$

Actually $\frac{k}{m} = \frac{40}{1.6} = 25$, not 1/25. Thus $x(t) = c_1\cos(5t) + c_2\sin(5t)$. -Dan

• April 5th 2011, 04:35 PM, duaneg37

I think I've done them all wrong! Thanks a lot for your help!

• April 5th 2011, 07:26 PM, duaneg37

My position function is $x(t)=\frac{5}{12}\sin(5t-.927)$. To find the time I set $\frac{1}{2}=\sin(5t-.927)$. I know the sine of $\frac{\pi}{6}$ and $\frac{5\pi}{6}$ will give $\frac{1}{2}$, so I get $t=.29$ and $t=.709$. Do I use both of these times to get my answer? I found the period to be $T=\frac{2\pi}{5}$. I said $t=.29+\frac{2n\pi}{5}$ s, where $n=0,1,2,3,\ldots$, and $t=.709+\frac{2n\pi}{5}$ s, where $n=0,1,2,3,\ldots$, as my answer. Am I doing this right?
• April 5th 2011, 07:40 PM, topsquark

Quote: Originally Posted by duaneg37 [...]

Looks good to me. (Nod)

-Dan
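As a quick numerical check of the thread's final answer (a sketch; the constants come from the corrected working above: $k=40$ lb/ft, $m=1.6$ slugs, so $\omega=5$ rad/s):

```python
# Verify that x(t) = (5/12) sin(5t - 0.927) reaches half the amplitude at
# t ~ 0.29 s and t ~ 0.709 s, as derived in the thread.
import numpy as np

m, k = 1.6, 40.0              # slugs, lb/ft, so omega^2 = k/m = 25
omega = np.sqrt(k / m)        # 5 rad/s
c1, c2 = -1/3, (5/4) / omega  # from x(0) = -1/3 ft, x'(0) = +5/4 ft/s
A = np.hypot(c1, c2)          # amplitude = 5/12
phi = np.arctan2(c1, c2)      # phase ~ -0.927 rad

def x(t):
    return A * np.sin(omega * t + phi)

for t in (0.2902, 0.7090):
    print(t, x(t), A / 2)     # both x values ~ A/2 ~ 0.2083
```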
http://electronics.stackexchange.com/questions/10812/recommendation-to-learn-verilog
# recommendation to learn verilog

To learn Verilog, can anyone recommend a web page or book? I have never seen this type of language before, so whatever you recommend should be for a beginner.

After trying to pick up Verilog from online sources for a while, I finally decided to try VHDL instead. A lot of digital design courses around the world are taught using VHDL. This means that there is a wealth of information out on the Internet that is relatively good and readily accessible. Verilog, on the other hand, is more popular in industry, where people are considerably less share-happy. I still utterly despise the VHDL syntax, but I found it a lot easier to get a working knowledge of it by using course material than I did with Verilog. – drxzcl Jul 25 '12 at 12:23

I've recently embarked on a similar journey myself, and so far I've found a few resources useful, including this book by Pong P. Chu (FPGA Prototyping By Verilog Examples: Xilinx Spartan-3 Version).

When I was learning Verilog I mainly used two resources: the Sutherland online Verilog reference guide and the FPGA4Fun website. Verilog (for me) required a massive brain shift, so take it slowly. You can do a lot of stuff in simulation, but sometimes it's difficult to create the test waveforms you need. Actually having an FPGA to program helps a lot. You should be able to get a simple board for ~$100. FPGA4Fun sells some useful boards.

• opencores.org is cool too. – user3045 Feb 28 '11 at 16:32

Probably the most important thing to understand about Verilog is that the tools you use to convert Verilog (or VHDL) to gates have certain idioms that they use to map Verilog to certain types of gates; you need to write using these to get what you want. As the simplest rule:

    always @(posedge clk) q <= d;

will give you a flop, while

    assign w = a|b;

will give you combinatorial logic (a logic net). Understanding the differences between '=' and '<=' is important, but more important is simply making sure you use <= for all flops and = elsewhere.
http://www.tug.org/pipermail/xy-pic/2008-September/000471.html
# [Xy-pic] bad positioning of an arrow tail

Chris Heunen heunen at cs.ru.nl
Mon Sep 15 15:22:47 CEST 2008

That is because latex treats the @ sign differently in sty files. If you tell it not to, by

\makeatother
\newdir{ >}{...}
\makeatletter

then it works out fine.

Best wishes,
Chris Heunen

> I have a problem with the positioning of an arrow tail using the
> \xymatrix command in latex. The problem appears when I load the xypic
> package and define the tail inside a custom package. I attach a couple
> of examples.
>
> The following code produces the expected result, an arrow of shape >---->
>
> File: test1.tex
>
> \documentclass[a4paper,11pt]{article}
>
> \usepackage[all]{xy}
> \newdir{ >}{{}*!/-10pt/@{>}}
>
> \begin{document}
> $$\xymatrix{A \ar@{ >->}[d]\\ B}$$
> \end{document}
>
> But I want to include the \usepackage and \newdir commands inside a
> custom package in order to avoid large preambles in my latex files. A
> minimal example of this goes as follows:
>
> File: testpkg.sty
>
> \ProvidesPackage{testpkg}
> \RequirePackage[all]{xy}
> \newdir{ >}{{}*!/-10pt/@{>}}
>
> File: test2.tex
>
> \documentclass[a4paper,11pt]{article}
>
> \usepackage{testpkg}
>
> \begin{document}
> $$\xymatrix{A \ar@{ >->}[d]\\ B}$$
> \end{document}
>
> When I compile test2.tex the arrow tail is drawn shifted to the right
> from the expected position.
>
> Does anyone have an idea of what is happening and how to solve it?
>
> Thank you very much.
>
> Abdó.

_______________________________________________
xy-pic mailing list
http://tug.org/mailman/listinfo/xy-pic
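For completeness, applying the fix to the testpkg.sty from the question gives the following (a sketch assembled from this thread; only the \makeatother/\makeatletter lines differ from Abdó's file):

\ProvidesPackage{testpkg}
\RequirePackage[all]{xy}
\makeatother  % give @ its usual (non-letter) catcode while \newdir reads its argument
\newdir{ >}{{}*!/-10pt/@{>}}
\makeatletter % restore the package-internal catcode of @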
http://clay6.com/qa/19368/how-many-atp-molecules-are-formed-in-electron-transport-from-the-reduced-ni
How many ATP molecules are formed in electron transport from the reduced nicotinamide adenine dinucleotides generated in one turn of the Krebs cycle?

(1) Three  (2) Six  (3) Nine  (4) Twelve
http://physics.stackexchange.com/questions/23547/why-doesnt-the-gravitational-force-causes-body-to-fall-towards-orbital-center-m/23904
# Why doesn't the gravitational force cause a body to fall towards the orbital center while it moves with constant tangential speed? [duplicate]

Suppose a satellite is orbiting the Earth. The gravitational and centripetal forces supposedly point towards the Earth. Therefore, the net force is towards the Earth. Since the satellite doesn't immediately fall towards the Earth, what's the third force holding the satellite in orbit? First impression tells me it's the elusive "centrifugal force", but I was told to use this with caution, as it's often a misnomer. What's the correct line of reasoning behind this?

## marked as duplicate by CR Drost, Daniel Griscom, ACuriousMind, Gert, MAFIA36790 Jan 25 at 3:12

• Net forces can end up being centripetal. Centripetal force isn't a *type* of force the way gravity or the electric force is. – Jerry Schirmer Apr 10 '12 at 21:29
• Gravity and centripetal force are one and the same thing in this case... in a manner of speaking, gravity provides the necessary centripetal force required to make the Earth move around the Sun. – Vineet Menon Apr 18 '12 at 7:08
• There is no force. The satellite has a constant velocity tangential to the Earth, and that's enough. This velocity is the square root of the gravitational potential. – centralcharge Jun 6 '13 at 13:08
• Essentially a duplicate of physics.stackexchange.com/q/9049/50583 – ACuriousMind Jan 24 at 18:28
• Of course not, @ACuriousMind! The question you are comparing with is different enough, though somewhat related to this one. You may read my answer to understand why this is different. – Sufyan Naeem Jan 24 at 18:50

In this case, the gravitational force *is* the centripetal force, i.e. the force which keeps the satellite moving in orbit. As you have correctly surmised, the net force is towards the Earth, and the satellite will accelerate in that direction. What makes you think there is another force "holding the satellite in orbit"?

• True, it's actually the gravitational force that keeps the satellite in orbit. Were it not for it, the satellite would fly away. – Lev Levitsky Apr 10 '12 at 22:42
• What makes the satellite move, though? I'm currently on this topic and I really don't get how an acceleration can produce a velocity vector that is perpendicular to it. – user11355 Dec 29 '14 at 1:01

Elaborating on tmarthal's and tmac's answers:

**Centripetal force**: the force which acts on a body undergoing circular motion, always pointing towards the centre of the circle. This force ensures that the body remains at a fixed distance from the center of the circle. It can be gravitational, electrostatic, magnetic, etc. Edit: it can also be a combination of two or more different kinds of forces.

Think of it this way: the green thingy is a ball whose velocity is initially along the black horizontal line. Now I apply a force along the red line, which is perpendicular to the black one. This causes the ball to turn ever so slightly downwards, so that it has a new velocity (v'). At the same time, I adjust the direction of my force so that I have a new force (F') which is perpendicular to v'. So the velocity keeps on changing direction, and I keep on adjusting the direction of my force so that the ball moves in a circle. At any point, the force is perpendicular to the velocity. Notice that had there been no force, the ball would simply have moved in a horizontal straight line. The force 'forces' the ball to move in a circle. ;-) No force is required to 'balance' the centripetal force.
Also, in this example I would have to continuously change the direction in which I apply a force on the ball, so as to keep it moving in a circle. In your example, the gravitational force on the satellite is the centripetal force which keeps the satellite moving in a circle. It is always acting towards the center of the Earth, so the 'adjusting' happens automatically as the satellite moves around the Earth.

**Centrifugal force**: this one is a little more complicated. It's best explained through an example. Imagine that you're sitting on a merry-go-round which is spinning in a circle. You are moving in a circle on the merry-go-round. Hence, there must be some force acting on you, pulling you towards the center of the merry-go-round. In the absence of such a force, you would fly off the merry-go-round. It's easy to guess that here, the centripetal force acting on you is composed of (1) the friction between you and the floor of the merry-go-round and (2) the force applied by the handlebars and other parts of the thing on you, as a result of you pushing against them.

Now this is the important part: you, sitting on the merry-go-round, believe that you are not accelerating. (A person standing on the ground knows that you are accelerating, as your velocity is continuously changing direction thanks to the centripetal force acting on you.) Since you believe that you are not accelerating, you believe that the net force acting on you is zero. Thus, you feel an outward force acting on you, which 'balances' the centripetal force provided by the handlebars and friction. This is the so-called centrifugal force. This is true for all accelerating frames: anyone inside the frame thinks that they are not accelerating, and hence feels the effect of a "pseudo" force which acts opposite to whatever real force is acting on them and making them accelerate.

To summarize: centripetal force is a real force which must act on a body for the body to travel in a circle, and it acts towards the center of the circle. Centrifugal force is a force felt by a body sitting in a frame which is travelling in a circle, and it acts away from the center of the circle.

• Where does the velocity come from, if no force is exerted on the object in its direction? – user11355 Dec 29 '14 at 1:40
• Also, about that outward force: so basically the reason the horse in the circus running in a circle does not get pulled into the center of the circle is that the horse exerts an opposite force onto the ground?? – user11355 Dec 29 '14 at 1:43

You're actually thinking that the satellite's momentum is keeping it in orbit. A huge man-made force (a rocket) put the satellite in orbit; this orbit is really the horizontal velocity (relative to the Earth's surface) that the satellite is travelling with. Remember: if there were a rock at the same height as the satellite with no horizontal velocity, it would fall directly back to Earth. Satellites usually travel at very high absolute speeds, around 10 km s$^{-1}$. Depending on the orbit type, the direction of the velocity is not quite perpendicular to the radius. Note that the Earth's radius is 6,371 km. So you have a satellite going 10 km s$^{-1}$ around an object about 600 times its length. Remember that gravity is an acceleration downward of 9.8 m s$^{-2}$, or about 0.01 km s$^{-2}$. So after 100 seconds of gravity working on the satellite, it has travelled a distance of the same order of magnitude as the size of the body that it is orbiting.
If the satellite travels further than the radius of the body (in this case, Earth) in the time gravity pulls down on it, then it remains in orbit. So there is a certain speed, the orbital velocity, that the satellite must maintain to stay in its orbit; if it moves more slowly than that, it gradually spirals down to Earth. (The separate notion of "escape velocity", larger by a factor of about 1.4, is the speed beyond which the satellite leaves orbit entirely.) - Looks like you are thinking that the motion should be along the line on which the net force lies, and in the direction of that net force. This is not necessary. A force only changes the velocity of the body, and the resulting velocity is not necessarily along the line of the acceleration, i.e. of the change of velocity. The centripetal force changes the instantaneous velocity, and the resulting velocity again points along the tangent at the body's instantaneous position on the circular path. If you wanted the body to fall towards the center of the circular path while it is performing circular motion, you would have to apply a force that produces a final velocity directed towards the center of the circular path. In that case your force could not be called a "centripetal force", by definition. -
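A standard textbook check (my addition, not from the thread) that makes these statements quantitative: for a circular orbit, gravity alone supplies the required centripetal acceleration, which fixes the orbital speed, and the escape speed is larger by a factor of $\sqrt{2}$:

$$\frac{GMm}{r^2} = \frac{m v^2}{r} \quad\Rightarrow\quad v_{\text{orbit}} = \sqrt{\frac{GM}{r}}, \qquad v_{\text{esc}} = \sqrt{\frac{2GM}{r}}$$

For a low Earth orbit with $r \approx 6.8 \times 10^6\ \text{m}$ and $GM \approx 4.0 \times 10^{14}\ \text{m}^3/\text{s}^2$, this gives $v_{\text{orbit}} \approx 7.7\ \text{km/s}$, consistent with the "very high absolute speeds" quoted in the answer above.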
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7985653281211853, "perplexity": 273.7974875776941}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257824624.99/warc/CC-MAIN-20160723071024-00240-ip-10-185-27-174.ec2.internal.warc.gz"}
http://physics.stackexchange.com/questions/102970/good-ways-for-learning-and-cramming-formulas
# Good ways for learning and cramming formulas? [duplicate]

I got an exam coming up and it's numerical-based (it's a pre-university level exam) but really tough. I want to know about various ways and methods to learn formulas. I know how I can derive them, but that is time consuming. I want to know: how did you people learn these formulas?

-

## marked as duplicate by jinawee, Qmechanic♦ Mar 11 '14 at 11:27

I'm afraid that for the most part we learned them by deriving them. Always know the reason; never just cram the formulas. If you do, you might pass the exam, but you'll find it hard to make progress afterwards. Maybe not so much help to you now, but you asked how we learnt all those formulas, and for most of us that's how it is. – Nathaniel Mar 11 '14 at 10:30

No, I didn't cram any formulas (maybe some). I know how to derive them and can derive them at any point, but in an exam that's not viable. What I am asking is: if I want to learn them fast, how can I do it? – user4678 Mar 11 '14 at 10:34

This question appears to be off-topic because it is about studying techniques. – jinawee Mar 11 '14 at 10:52

IMO formulas are best learned through osmosis. Not just derivation, but repetitive usage. – Stan Liou Mar 11 '14 at 11:15

@user4678 if you practice deriving them enough, you should be able to do it in your head in seconds. It's not about writing out the full derivation, it's about having such a clear understanding of where the formula comes from that the formula itself just becomes obvious. As Stan Liou says, just using them will help with this as well. Sooner or later they will just become second nature. – Nathaniel Mar 11 '14 at 11:16

The best way to learn formulas is to know the units of all the different quantities. You can then figure out almost any formula you want by reasoning it out. As a simple example, consider the kinetic energy formula. The units of energy are $$\text{Joules} = \text{kg} \cdot \frac{\text{m}^2}{\text{s}^2}$$ If you remember that kinetic energy depends on the mass and velocity (or you can just reason out what kinetic energy could possibly depend on...) then there is only one option: $$E_K \propto m v^2$$ At this point you do need to memorize the factor of $1/2$ in front, but units got you almost the entire way there. This idea is extremely helpful for checking that you remember your formulas correctly, since if the units on the left and right hand sides can't be converted into one another then you know you got your formula wrong. -

You can use formulas you know to work out the units as well, e.g. $E=mc^2$, so $[J] = [\text{kg}][\text{m}\,\text{s}^{-1}]^2 = [\text{kg}][\text{m}^2][\text{s}^{-2}]$ – user288447 Mar 11 '14 at 11:13

A great tool for quickly deriving a formula for a quantity is dimensional analysis. Essentially, you identify the dimensions, or units, of all relevant quantities, and derive the formula for another quantity, e.g. energy, by combining them in such a way that the dimensions are correct. A famous example of the power of this formalism is due to the physicist G. I. Taylor in the 1950s. He wanted to compute the energy released by an atomic explosion. He identified the relevant variables: shock front radius $[R]$ = length; time from explosion $[t]$ = time; and air density $[\rho]$ = mass per length$^3$. Since energy has dimensions of mass times length$^2$ times time$^{-2}$, he deduced: $$E = C \frac{\rho R^5}{t^2}$$ up to a dimensionless constant $C$. -
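As an illustration of the unit bookkeeping described in both answers (my own sketch, not from the thread), dimensions can be tracked as exponent vectors over (mass, length, time); a formula is dimensionally consistent when the vectors on both sides match:

```python
# Dimensions as (mass, length, time) exponent tuples.
KG, M, S = (1, 0, 0), (0, 1, 0), (0, 0, 1)

def mul(*dims):
    """Multiply quantities: add their dimension exponents componentwise."""
    return tuple(sum(d[i] for d in dims) for i in range(3))

def power(dim, n):
    """Raise a quantity to the n-th power: scale its exponents by n."""
    return tuple(n * e for e in dim)

ENERGY = mul(KG, power(M, 2), power(S, -2))       # J = kg m^2 / s^2

# Kinetic energy ~ m v^2, where velocity has dimension m/s:
V = mul(M, power(S, -1))
assert mul(KG, power(V, 2)) == ENERGY

# Taylor's blast-wave formula E ~ rho R^5 / t^2, with rho = kg/m^3:
RHO = mul(KG, power(M, -3))
assert mul(RHO, power(M, 5), power(S, -2)) == ENERGY

print("both formulas are dimensionally consistent")
```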
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 2, "x-ck12": 0, "texerror": 0, "math_score": 0.7868975400924683, "perplexity": 511.5637935205335}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-06/segments/1422115867507.22/warc/CC-MAIN-20150124161107-00135-ip-10-180-212-252.ec2.internal.warc.gz"}
https://electronics.stackexchange.com/questions/387538/how-to-trigger-another-clk-in-mainclk-verilog
how to trigger another clk in mainclk (verilog)

I wrote some kind of prescaler in Verilog to make the sclk_adc signal from clk_i. By now my code looks like:

always @(posedge clk_i) begin
  ...
end

Now I wonder: is there any possibility to load the shift registers on a pos/negedge of the generated clock inside the always @(posedge clk_i) block? When I wrote this outside the main always block:

always @(negedge sclk_adc) begin
  transdata = transtmp;
  dataInCh[ch_cnt][15:0] = transdata;
end

always @(posedge sclk_adc) begin
  dout = shiftOut[ch_cnt][15];
  shiftOut[ch_cnt][15:0] = { shiftOut[ch_cnt][14:0], 1'b0 };
end

it simulated well in GTKWave, but Quartus started to complain about multiple drivers of dout (my output), deservedly, I think. So it seems like I have to load them in the main always block, but when I add those lines there, the shift registers load with clk_i, not with sclk_adc, which is logical. But how do I avoid this? Please give me any clue, thanks.

• You should share all the code you can. You haven't shared code that shows two references to dout, so it's hard for us to comment on how you can improve your code. – The Photon Jul 25 '18 at 5:11
• pastebin.com/NXUHLb3u – dshee Jul 25 '18 at 6:32
• Now it is here (I am sorry, it is my first Verilog code; it is a driver for an ADC circuit). But I faced another problem :c I almost got what I wanted (link): I needed dout loaded WITH posedge sclk_adc and transdata shifted WITH negedge sclk, but they happen one clk_i after the pos/negedge event. Why? Please help me explain this – dshee Jul 25 '18 at 6:41

You can just update dout on the edge of sclk_adc:

always @(posedge sclk_adc) begin
  dout <= shiftOut[ch_cnt][15];
end

But if you do this, you can only update dout in this always block. You can use an asynchronous reset if you are concerned that your reset signal won't be asserted long enough to see an edge of the slow clock:

always @(posedge sclk_adc or posedge reset) begin
  if (reset) begin
    dout <= 0;
  end
  else begin
    dout <= shiftOut[ch_cnt][15];
  end
end

You can't reset dout in the always block timed by clk_i if you're also going to set it in the block timed by sclk_adc. Alternatively, since you are generating sclk_adc using a counter timed by clk_i, you could just arrange to assert sclk_adc one cycle earlier than you are doing now. It's quite common when generating slow clocks to actually generate several clocks at the same frequency, but delayed from each other by 1 or more cycles of the master clock, to allow timing different events on different phases of the slow clock.

• Following your advice, I will rewrite dout like this: assign dout = shiftOut[15]; assign out_next = { shiftOut[14:0], 1'b0 }; so it is not mentioned in the main block, and will make another always block with sclk_adc. But it is still a mystery why my pos and neg detectors did not work. Anyway, thank you a lot! – dshee Jul 26 '18 at 4:00

Well, I beat this one. I just needed to build a rising and falling edge detector. All I need is these few lines:

reg main_reg;
wire pos_sclk, neg_sclk;

assign pos_sclk = sclk_adc & ~main_reg;   // high for one clk_i period after a rising edge of sclk_adc
assign neg_sclk = ~sclk_adc & main_reg;   // high for one clk_i period after a falling edge

and in the main clk block:

main_reg <= sclk_adc;   // one-cycle-delayed copy of the slow clock
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.180922269821167, "perplexity": 5563.2292764147205}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540504338.31/warc/CC-MAIN-20191208021121-20191208045121-00244.warc.gz"}
https://www.lmno.cnrs.fr/node/339?mini=2019-09
# Elliptic integrals of the third kind and 1-motives

Friday, 5 April 2019 - 14:00 - 15:00

Speaker: Cristiana Bertolin

Abstract: In our PhD thesis we have shown that the generalized Grothendieck's Conjecture of Periods, applied to 1-motives whose underlying semi-abelian variety is a product of elliptic curves and of tori, is equivalent to a transcendental conjecture involving elliptic integrals of the first and second kind, and logarithms of complex numbers. In this talk we investigate the generalized Grothendieck's Conjecture of Periods in the case of 1-motives whose underlying semi-abelian variety is a nontrivial extension of a product of elliptic curves by a torus. This requires introducing elliptic integrals of the third kind for the computation of the period matrix of the 1-motive, and therefore the generalized Grothendieck's Conjecture of Periods applied to such 1-motives will be equivalent to a transcendental conjecture involving elliptic integrals of the first, second and third kind.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8752935528755188, "perplexity": 305.7509080349906}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250598217.23/warc/CC-MAIN-20200120081337-20200120105337-00487.warc.gz"}
https://community.hpe.com/t5/System-Administration/How-to-trim-log-file-with-out-restarting-syslogd/m-p/4652697
## How to trim log file without restarting syslogd

Honored Contributor

## Re: How to trim log file without restarting syslogd

F=user_log;k=200;t=`sed -n "\$=" \$F`;d=`expr \$t - \$k` ; perl -i -ne "print unless 1 .. \$d" \$F
#Hth

Honored Contributor

## Re: How to trim log file without restarting syslogd

Senthil,

> I would like to trim to 1 or 2 MB ...

In the above example (details given): F = filename to trim = user_log. You may want to keep the last few lines of the log file, say the last 200 lines. Keep = k = 200. Total_lines = t (calculated automatically). Delete/Trim lines = d (calculated automatically). So here is the command you need to run to trim the file to just 200 lines, or any minimum number you desire:

# F=user_log ; k=200 ; t=`sed -n "\$=" \$F`; d=`expr \$t - \$k` ; perl -i -ne "print unless 1 .. \$d" \$F
# wc -l user_log
200 user_log

Now to see the size, you can use:

# ls -l user_log

(It should have been trimmed to a small size in KB.) Enjoy, have fun! Raj.

Honored Contributor

## Re: How to trim log file without restarting syslogd

F=user_log;k=200;t=`sed -n "\$=" \$F`;d=`expr \$t - \$k`;perl -i -ne "print unless 1 .. \$d" \$F

Acclaimed Contributor

## Re: How to trim log file without restarting syslogd

Hi (again):

@ Raj: Using 'perl -ni -e...' to perform an in-place file update will *NOT* work for trimming files where a (daemon) process is required to continue to update the trimmed file without a restart. Perl performs this sleight-of-hand by renaming the original input file and thus changing its inode. Thus, the process continues to write to the *old* file, and any logging to the new file doesn't occur.

Regards!

...JRF...

Honored Contributor

## Re: How to trim log file without restarting syslogd

> Perl performs this sleight-of-hand by renaming the original input file and thus changing its inode.

This is really true. Thanks for correcting. Yes, perl changes the inode of the file, so log updates no longer reach that file unless the daemon is restarted. James, thanks again for the valuable input.

Honored Contributor

## Re: How to trim log file without restarting syslogd

Hi Senthil,

What I used to do is copy this file with a time stamp and trim the file to 0 bytes, like:

#cp -p user_log user_log.24Jun2010
#>user_log

Suraj
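For the same reason James gives, a scripted trim has to rewrite the file through its existing inode rather than rename it. Here is a minimal sketch of that approach in Python (my own illustration, not from the thread); it assumes the daemon opened the log in append mode, as syslogd normally does, so subsequent writes land at the new end of file:

```python
def trim_log_in_place(path, keep=200):
    """Keep only the last `keep` lines of `path` without replacing the file.

    Opening with "r+b" and calling truncate() rewrites the file through the
    SAME inode, so a daemon holding the file open keeps logging to it,
    unlike `perl -i`, which renames the file and orphans the daemon's handle.
    """
    with open(path, "r+b") as f:
        tail = f.readlines()[-keep:]   # the lines to preserve
        f.seek(0)
        f.writelines(tail)
        f.truncate()                   # cut off everything past the kept tail

trim_log_in_place("user_log", keep=200)
```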
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8806248903274536, "perplexity": 9936.960323088617}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987826436.88/warc/CC-MAIN-20191022232751-20191023020251-00139.warc.gz"}
https://indico.cern.ch/event/331032/contributions/1720171/
# SUSY 2015, 23rd International Conference on Supersymmetry and Unification of Fundamental Interactions 23-29 August 2015 Lake Tahoe US/Pacific timezone ## Searches for vector-like partners of top and bottom quarks at CMS 28 Aug 2015, 14:30 30m Court View () ### Court View Alternative Theories ### Speaker Huaqiao Zhang (Chinese Academy of Sciences (CN)) ### Description We present new results on searches for massive top and bottom quark partners using proton-proton collision data collected with the CMS detector at the CERN LHC at a center-of-mass energy of 8 TeV. These fourth-generation vector-like quarks are postulated to solve the Hierarchy problem and stabilize the Higgs mass, while escaping constraints on the Higgs cross section measurement. The vector-like quark decays result in a variety of final states, containing boosted top and bottom quarks, gauge and Higgs bosons. We search using several categories of reconstructed objects, from multi-leptonic to fully hadronic final states. We set exclusion limits on both the vector-like quark mass and pair-production cross sections, for combinations of the vector-like quark branching ratios.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9736162424087524, "perplexity": 4036.1330701546735}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027317339.12/warc/CC-MAIN-20190822172901-20190822194901-00526.warc.gz"}
https://api-project-1022638073839.appspot.com/questions/how-do-you-test-the-series-sigma-5-n-6-n-2-n-7-n-from-n-is-0-oo-for-convergence
# How do you test the series Sigma (5^n+6^n)/(2^n+7^n) from n is [0,oo) for convergence?

Mar 20, 2017

The series

$$\sum_{n=0}^{\infty} \frac{5^n + 6^n}{2^n + 7^n}$$

is convergent.

#### Explanation:

Use the ratio test by evaluating:

$$\left| \frac{a_{n+1}}{a_n} \right| = \frac{\dfrac{5^{n+1}+6^{n+1}}{2^{n+1}+7^{n+1}}}{\dfrac{5^n+6^n}{2^n+7^n}}$$

$$\left| \frac{a_{n+1}}{a_n} \right| = \frac{5^{n+1}+6^{n+1}}{2^{n+1}+7^{n+1}} \cdot \frac{2^n+7^n}{5^n+6^n}$$

$$\left| \frac{a_{n+1}}{a_n} \right| = \frac{5^{n+1}+6^{n+1}}{5^n+6^n} \cdot \frac{2^n+7^n}{2^{n+1}+7^{n+1}}$$

$$\left| \frac{a_{n+1}}{a_n} \right| = \frac{6^{n+1}}{6^n} \cdot \frac{(5/6)^{n+1}+1}{(5/6)^n+1} \cdot \frac{7^n}{7^{n+1}} \cdot \frac{(2/7)^n+1}{(2/7)^{n+1}+1}$$

$$\left| \frac{a_{n+1}}{a_n} \right| = \frac{6}{7} \cdot \frac{(5/6)^{n+1}+1}{(5/6)^n+1} \cdot \frac{(2/7)^n+1}{(2/7)^{n+1}+1}$$

We can now pass to the limit for $n \to \infty$. Since $5/6 < 1$ we have:

$$\lim_{n\to\infty} (5/6)^n = \lim_{n\to\infty} (5/6)^{n+1} = 0$$

and similarly:

$$\lim_{n\to\infty} (2/7)^n = \lim_{n\to\infty} (2/7)^{n+1} = 0$$

So:

$$\lim_{n\to\infty} \left| \frac{a_{n+1}}{a_n} \right| = \frac{6}{7} < 1,$$

which proves the series to be convergent.
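A quick numeric sanity check of the limit (my own snippet, not part of the original answer): the successive-term ratios should approach 6/7 ≈ 0.857.

```python
def term(n):
    # General term of the series as (numerator, denominator), exact integers.
    return (5**n + 6**n, 2**n + 7**n)

def ratio(n):
    # a(n+1)/a(n) computed as p1*q0 / (q1*p0) to stay in exact arithmetic.
    p0, q0 = term(n)
    p1, q1 = term(n + 1)
    return (p1 * q0) / (q1 * p0)

for n in (1, 5, 10, 50):
    print(n, ratio(n))   # tends to 6/7 = 0.857142...
```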
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 12, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7496652007102966, "perplexity": 4476.2750022942455}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323588282.80/warc/CC-MAIN-20211028065732-20211028095732-00402.warc.gz"}
https://proofwiki.org/wiki/Definition:Modal_Logic
# Definition:Modal Logic

## Definition

Modal logic is a branch of logic in which truth values are more complex than being merely true or false, and which distinguishes between different "modes" of truth. There are two operators in classical modal logic: Necessity, represented by $\Box$, and Possibility, represented by $\Diamond$. There are also variants of modal logic with other operators, including: Temporal logic, which uses several operators, including ones for the present and the future; Epistemic logic, which uses the operators "an individual knows that" and "for all an individual knows, it might be true that"; Multi-Modal logic, which uses more than two unary modal operators.

## Also see

• Results about modal logic can be found here.
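Two standard facts that complement the definition above (supplementary material, not from the ProofWiki page itself): the two classical operators are interdefinable, and in Kripke semantics necessity quantifies over accessible worlds:

$$\Diamond \phi \iff \neg \Box \neg \phi$$

$$w \models \Box \phi \quad\text{iff}\quad \forall v \, \big( w \mathrel{R} v \implies v \models \phi \big)$$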
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.25749513506889343, "perplexity": 2371.706998719114}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627999853.94/warc/CC-MAIN-20190625152739-20190625174739-00159.warc.gz"}
https://tlamsblog.wordpress.com/2011/01/09/mik-tex-2-9-cjk-font-problem/
# MikTeX 2.9 CJK Font Problem

I am a LaTeX user and I love it very much. Recently I upgraded the system to MikTeX 2.9 (on a PC, of course). However, after I installed the CJK package, the tex file compiled OK but the dvi file could not be displayed. And when I used pdflatex to compile the file, it kept saying that the "bsmip source file cannot be found". After a few searches on Google, the following command fixed the problem.

```initexmf --mkmaps
```

The remaining task was to re-install my Big5 Kai font. I would also like to mention that I had already successfully installed UTF-8 Chinese fonts using the instructions from http://www.math.nus.edu.sg/aslaksen/cs/cjk.html.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9984444975852966, "perplexity": 9059.441228045487}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818689823.92/warc/CC-MAIN-20170924010628-20170924030628-00265.warc.gz"}
https://www.valdostamuseum.com/hamsmith/stringbraneStdModel.html
## E6, Strings, Branes, and the Standard Model

In his paper hep-th/0112261 entitled Algebraic Dreams, Pierre Ramond says: "... Nature shows that space-time symmetries with dynamics associated with gravity, and internal symmetries with their dynamics described by Yang-Mills theories, can coexist peacefully. How does She do it? ... there remain important unanswered questions. ...".

According to a superstring theory web site: "... For bosonic strings ...[you]... can ... do quantum mechanics sensibly only if the spacetime dimensions number 26. For superstrings we can whittle it down to 10. ...

### A Brief Table of String Theories

• Bosonic (26 spacetime dimensions): Only bosons, no fermions means only forces, no matter, with both open and closed strings. Major flaw: a particle with imaginary mass, called the tachyon.
• I (10): Supersymmetry between forces and matter, with both open and closed strings, no tachyon, group symmetry is SO(32).
• IIA (10): Supersymmetry between forces and matter, with closed strings only, no tachyon, massless fermions spin both ways (nonchiral).
• IIB (10): Supersymmetry between forces and matter, with closed strings only, no tachyon, massless fermions only spin one way (chiral).
• HO (10): Supersymmetry between forces and matter, with closed strings only, no tachyon, heterotic, meaning right moving and left moving strings differ, group symmetry is SO(32).
• HE (10): Supersymmetry between forces and matter, with closed strings only, no tachyon, heterotic, meaning right moving and left moving strings differ, group symmetry is E8 x E8.

... There are higher dimensional objects in string theory with dimensions from zero (points) to nine, called p-branes. In terms of branes, what we usually call a membrane would be a two-brane, a string is called a one-brane and a point is called a zero-brane. ... A special class of p-branes in string theory are called D branes. Roughly speaking, a D brane is a p-brane where the ends of open strings are localized on the brane. A D brane is like a collective excitation of strings. ... the five superstring theories are connected to one another as if they are each a special case of some more fundamental theory ... an eleven dimensional theory of supergravity, which is supersymmetry combined with gravity ... didn't work as a unified theory of particle physics, because it doesn't have a sensible quantum limit as a point particle theory. But this eleven dimensional theory ... came back to life in the strong coupling limit of superstring theory in ten dimensions ... M theory is the unknown eleven-dimensional theory whose low energy limit is the supergravity theory in eleven dimensions ... many people have taken to also using M theory to label the unknown theory believed to be the fundamental theory from which the known superstring theories emerge as special limits ... We still don't know the fundamental M theory ...".

The purpose of this paper is to give an example of

### A String Theory with E6 Structure that accurately represents Gravity and the Standard Model.

The E6 exceptional Lie algebra string theory is a counterexample to Pierre Ramond's statement: "... M-theory and Superstring theories ... are the only examples of theories where ... union ...[of]... gravity ... and internal symmetries ... appears possible ...", but is consistent with Pierre Ramond's statement: "... Nature relishes unique mathematical structures. ... The Exceptional Algebras are most unique and beautiful among Lie Algebras, and no one should be surprised if Nature uses them.
...". Although Raymond sees the tensor-spinor relationships of exceptional groups as an obstacle, saying "... The use of exceptional groups to describe space-time symmetries has not been as fruitful [as the use of classical groups] ... One obstacle has been that exceptional algebras relate tensor and spinor representations of their orthogonal subgroups, while Spin_Statistics requires them to be treated differently. ...", I see the exceptional tensor-spinor relationships of E6 as a way to introduce fermions into String Theory without naive 1-1 fermion-boson supersymmetry. "... The traceless Jordan matrices [ J3(O)o ] ... (3x3) traceless octonionic hermitian matrices, each labelled by 26 real parameters ... span the 26 representation of [ the 52-dimensional exceptional Lie algebra F4 ]. One can supplement the F4 transformations by an additional 26 parameters ... leading to a group with 78 parameters. These extra transformations are non-compact, and close on the F4 transformations, leading to the exceptional group E6(-26). The subscript in parenthesis denotes the number of non-compact minus the number of compact generators. ...". The following is my proposal to use the exceptional Lie algebra E6(-26), which I will for the rest of this message write as E6, to introduce fermions into string theory in a new way, based on the exceptional E6 relations between bosonic vectors/bivectors and fermionic spinors, in which 16 of the 26 dimensions are seen as orbifolds whose 8 + 8 singularities represent first-generation fermion particles and antiparticles. This structure allows string theory to be physically interpreted as a theory of interaction among world-lines in the Many-Worlds. According to Soji Kaneyuki, in Graded Lie Algebras, Related Geometric Structures, and Pseudo-hermitian Symmetric Spaces, Analysis and Geometry on Complex Homogeneous Domains, by Jacques Faraut, Soji Kaneyuki, Adam Koranyi, Qi-keng Lu, and Guy Roos (Birkhauser 2000), E6 as a Graded Lie Algebra with 5 grades: g = E6 = g(-2) + g(-1) + g(0) + g(1) + g(2) such that • g(0) = so(8) + R + R • dimR g(-1) = dimR g(1) = 16 = 8 + 8 • dimR g(-2) = dimR g(2) = 8 ### Here, step-by-step, is a description of the E6 structure: Step 1: g(0) = so(8) 28 gauge bosons _ + R + R | |dimR g(-1) = dimR g(1) = 16 = 8 + 8 |- 26-dim string spacetime | with J3(O)o structuredimR g(-2) = dimR g(2) = 8 _| Step 2: The E6 GLA has an Even Subalgebra gE (Bosonic) and an Odd Part gO (Fermionic): BOSONIC gE = g(-2) + g(0) + g(2) FERMIONIC gO = g(-1) + g(1) Step 3: BOSONICg(0) = so(8) 28 gauge bosons _ + R + R |dimR g(-2) = dimR g(2) = 8 |- 10-dim spacetime _|FERMIONIC dimR g(-1) = dimR g(1) = 16 = 8 8-dim orbifold + 8 8-dim orbifold Giving the Fermionic sector orbifold structure gives each point of the string/world-line a discrete value corresponding to one of the 8+8 = 16 fundamental first-generation fermion particles or antiparticles. 
Step 4:
• BOSONIC: g(0) = so(8), giving 28 gauge bosons; R + R together with dimR g(-2) = dimR g(2) = 8 gives 10-dim spacetime.
• FERMIONIC: dimR g(-1) = dimR g(1) = 16 = 8 + 8: 8 fermions plus 8 antifermions.

Step 5:
• BOSONIC: g(0) = so(8) = 16-dim conformal U(2,2) + 12-dim SU(3)xSU(2)xU(1); dimR g(-2) = dimR g(2) = 4 + 4, where R + R together with the first 4 gives 6-dim conformal spacetime, and the other 4 gives 4-dim internal symmetry space.
• FERMIONIC: dimR g(-1) = dimR g(1) = 16 = 8 + 8: 8 fermions (3 generations) plus 8 antifermions (3 generations).

Step 6:
• BOSONIC: g(0) = so(8) = 16-dim conformal U(2,2) + 12-dim SU(3)xSU(2)xU(1); R + R gives the 2 spacetime conformal dimensions; dimR g(-2) = dimR g(2) = 4 + 4 gives 4-dim physical spacetime plus 4-dim internal symmetry space.
• FERMIONIC: dimR g(-1) = dimR g(1) = 16 = 8 + 8: 8 fermions (3 generations) plus 8 antifermions (3 generations).

The 2 spacetime conformal dimensions R+R are related to the complex structure of
• spacetime g(-2) + g(2) and
• fermionic g(-1) + g(1).

### The E6 String Structure described above allows construction of a Realistic String Theory:

This construction was motivated by a March 2004 sci.physics.research thread Re: photons from strings? in which John Baez asked:

### "... has anyone figured out a way to ... start with string theory ... to get just photons on Minkowski spacetime ..." ?

Lubos Motl noted "... string theory always contains gravity ... Gravity is always contained as a vibration of a closed string, and closed strings can always be created from open strings....". Urs Schreiber said "... the low energy effective worldsheet theory of a single flat D3 brane of the bosonic string is, to lowest nontrivial order, just U(1) gauge theory in 4D ...". Aaron Bergman noted "... there are a bunch of scalars describing the transverse fluctuations of the brane ...". Urs Schreiber said "... I guess that's why you have to put the brane at the singularity of an orbifold if you want to get rid of the scalars ... if the number of dimensions is not an issue the simplest thing probably would be to consider the single space-filling D25 brane of the bosonic string. This one does not have any transverse fluctuations and there is indeed only the U(1) gauge field ...". Aaron Bergman replied "... Unfortunately, there's a tadpole in that configuration. You need 8192 D25 branes to cancel it. ...". Lubos Motl pointed out the existence of brane structures other than massless vectors, saying "... A D-brane contains other massless states, e.g. the transverse scalars (and their fermionic superpartners). It also contains an infinite tower of excited massive states. Finally, a D-brane in the full string theory is coupled to the bulk which inevitably contains gravity as well as other fields and particles. ... N coincident D-branes carry a U(N) gauge symmetry (and contain the appropriate gauge N^2 bosons, as you explained). Moreover, if this stack of N D-branes approaches an orientifold, they meet their mirror images and U(N) is extended to O(2N) or USp(2N). The brane intersections also carry new types of matter - made of the open strings stretched from one type of brane to the other - but these new fields are *not* gauge fields, and they don't lead to new gauge symmetries. For example, there are scalars whose condensation is able to join two intersecting D2-branes into smooth, connected, hyperbolically shaped objects (D2-branes). ... the number of D-branes can be determined or bounded by anomaly cancellation and similar requirements. For example, the spacetime filling D9-branes in type I theory must generate the SO(32) gauge group, otherwise the theory is anomalous.
(There are other arguments for this choice of 16+16 branes, too.)...".

What follows on this page is my construction of

### a specific example of a String Theory with E6 structure containing gravity and the U(1)xSU(2)xSU(3) Standard Model.

As to how my simple model is affected by some of the complications mentioned by Lubos Motl: string theory Tachyons are related to interactions among strings considered as world-lines in the Many-Worlds. In short, the complications are either taken care of in the construction of the model or are useful in describing the Bohm-type quantum potential interactions among strings considered as world-lines in the Many-Worlds.

Here is some further background, from Joseph Polchinski's book String Theory vol. 1 (Cambridge 1998), in Chapter 8 and the Glossary:

"... a ... D-brane ...[is]... a dynamical object ... a flat hyperplane ...[for which]... a certain open string state corresponds to fluctuation of its shape ... A D25-brane fills space, so the string endpoint can be anywhere ... When no D-branes coincide there is just one massless vector on each, giving the gauge group U(1)^n in all. If r D-branes coincide, there are new massless states because strings that are stretched between these branes can have vanishing length: ... Thus, there are r^2 vectors, forming the adjoint of a U(r) gauge group. ... there will also be r^2 massless scalars from the components normal to the D-brane. ... The massless fields on the world-volume of a Dp-brane are a U(1) vector plus 25 - p world-brane scalars describing the fluctuations. ... The fields on the brane are the embedding X^u(x) and the gauge field A_a(x) ... For n separated D-branes, the action is n copies of the action for a single D-brane. ... when the D-branes are coincident there are n^2 rather than n massless vectors and scalars on the brane ... The fields X^u(x) and A_a(x) will now be nxn matrices ... the gauge field ... becomes a non-Abelian U(n) gauge field ... the collective coordinates ... X^u ... for the embedding of n D-branes in spacetime are now enlarged to nxn matrices. This 'noncommutative geometry' ...[may be]... an important hint about the nature of spacetime. ...

...[an]... orbifold ...(noun)...[is]... a coset space M/H, where H is a group of discrete symmetries of a manifold M. The coset is singular at the fixed points of H ...(verb)...[is]... to produce such a ... string theory by gauging H ...

... To determine the actual value of the D-brane tension ... Consider two parallel Dp-branes ...[They]... can feel each other's presence by exchanging closed strings ...[which is equivalent to]... a vacuum loop of an open string with one end on each D-brane ... The ... analogous ... field theory graph ... is the exchange of a single graviton or dilaton between the D-branes....".

### Here, step-by-step, is the String/Brane construction:

Step 1: Consider the 26 Dimensions of String Theory as the 26-dimensional traceless part J3(O)o

a     O+    Ov
O+*   b     O-
Ov*   O-*   -a-b

(where Ov, O+, and O- are in Octonion space with basis {1,i,j,k,E,I,J,K} and a and b are real numbers with basis {1}) of the 27-dimensional Jordan algebra J3(O) of 3x3 Hermitian Octonion matrices.

Step 2: Take Urs Schreiber's D3 brane to correspond to the Imaginary Quaternionic associative subspace spanned by {i,j,k} in the 8-dimensional Octonionic Ov space.
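Behind the Step 1 count (my own arithmetic, standard for Jordan algebras): a 3x3 Hermitian octonionic matrix has 3 real diagonal entries and 3 independent octonionic entries above the diagonal, and removing the trace leaves exactly the 26 dimensions used here:

$$\dim J_3(\mathbb{O}) = 3 + 3 \cdot 8 = 27, \qquad \dim J_3(\mathbb{O})_o = 27 - 1 = 26$$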
Step 3: Compactify the 4-dimensional co-associative subspace spanned by {E,I,J,K} in the Octonionic Ov space as a CP2 = SU(3)/U(2), with its 4 world-brane scalars corresponding to the 4 covariant components of a Higgs scalar. Add this subspace to D3, to get D7. Step 4: Orbifold the 1-dimensional Real subspace spanned by {1} in the Octonionic Ov space by the discrete multiplicative group Z2 = {-1,+1}, with its fixed points {-1,+1} corresponding to past and future time. This discretizes time steps and gets rid of the world-brane scalar corresponding to the subspace spanned by {1} in Ov. It also gives our brane a 2-level timelike structure, so that its past can connect to the future of a preceding brane and its future can connect to the past of a succeeding brane. Add this subspace to D7, to get D8. D8, our basic Brane, looks like two layers (past and future) of D7s. Beyond D8 our String Theory has 26 - 8 = 18 dimensions, of which 25 - 8 have corresponding world-brane scalars: • 8 world-brane scalars for Octonionic O+ space; • 8 world-brane scalars for Octonionic O- space; • 1 world-brane scalars for real a space; and • 1 dimension, for real b space, in which the D8 branes containing spacelike D3s are stacked in timelike order. Step 5: To use Urs Schreiber's idea to get rid of the world-brane scalars corresponding to the Octonionic O+ space, orbifold it by the 16-element discrete multiplicative group Oct16 = {+/-1,+/-i,+/-j,+/-k,+/-E,+/-I,+/-J,+/-K} to reduce O+ to 16 singular points {-1,-i,-j,-k,-E,-I,-J,-K,+1,+i,+j,+k,+E,+I,+J,+K}. • Let the 8 O+ singular points {-1,-i,-j,-k,-E,-I,-J,-K} correspond to the fundamental fermion particles {neutrino, red up quark, green up quark, blue up quark, electron, red down quark, green down quark, blue down quark} located on the past D7 layer of D8. • Let the 8 O+ singular points {+1,+i,+j,+k,+E,+I,+J,+K} correspond to the fundamental fermion particles {neutrino, red up quark, green up quark, blue up quark, electron, red down quark, green down quark, blue down quark} located on the future D7 layer of D8. This gets rid of the 8 world-brane scalars corresponding to O+, and leaves: • 8 world-brane scalars for Octonionic O- space; • 1 world-brane scalars for real a space; and • 1 dimension, for real b space, in which the D8 branes containing spacelike D3s are stacked in timelike order. Step 6: To use Urs Schreiber's idea to get rid of the world-brane scalars corresponding to the Octonionic O- space, orbifold it by the 16-element discrete multiplicative group Oct16 = {+/-1,+/-i,+/-j,+/-k,+/-E,+/-I,+/-J,+/-K} to reduce O- to 16 singular points {-1,-i,-j,-k,-E,-I,-J,-K,+1,+i,+j,+k,+E,+I,+J,+K}. • Let the 8 O- singular points {-1,-i,-j,-k,-E,-I,-J,-K} correspond to the fundamental fermion anti-particles {anti-neutrino, red up anti-quark, green up anti-quark, blue up anti-quark, positron, red down anti-quark, green down anti-quark, blue down anti-quark} located on the past D7 layer of D8. • Let the 8 O- singular points {+1,+i,+j,+k,+E,+I,+J,+K} correspond to the fundamental fermion anti-particles {anti-neutrino, red up anti-quark, green up anti-quark, blue up anti-quark, positron, red down anti-quark, green down anti-quark, blue down anti-quark} located on the future D7 layer of D8. This gets rid of the 8 world-brane scalars corresponding to O-, and leaves: • 1 world-brane scalars for real a space; and • 1 dimension, for real b space, in which the D8 branes containing spacelike D3s are stacked in timelike order. 
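The Step 5 and Step 6 assignments amount to a fixed lookup from orbifold singular points to fermion types. A small sketch of that bookkeeping (my own illustration of the assignment stated above; the function name is mine, and the sign of the singular point selects the past or future D7 layer):

```python
# Octonion basis unit -> first-generation fermion particle (Step 5, O+ space).
# The same table, with antiparticles, applies to the O- space (Step 6).
FERMION_OF_UNIT = {
    "1": "neutrino",
    "i": "red up quark",   "j": "green up quark",   "k": "blue up quark",
    "E": "electron",
    "I": "red down quark", "J": "green down quark", "K": "blue down quark",
}

def singular_point_to_fermion(sign, unit):
    """Map a singular point such as (-1, 'i') to its (layer, particle) pair."""
    layer = "past D7 layer" if sign < 0 else "future D7 layer"
    return layer, FERMION_OF_UNIT[unit]

print(singular_point_to_fermion(-1, "E"))   # ('past D7 layer', 'electron')
```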
Step 7: Let the 1 world-brane scalar for real a space correspond to a Bohm-type Quantum Potential acting on strings in the stack of D8 branes. Interpret strings as world-lines in the Many-Worlds, short strings representing virtual particles and loops.

Step 8: Fundamentally, physics is described on HyperDiamond Lattice structures. There are 7 independent E8 lattices, each corresponding to one of the 7 imaginary octonions. They can be denoted by iE8, jE8, kE8, EE8, IE8, JE8, and KE8; they are related to both D8 adjoint and half-spinor parts of E8, and each has 240 first-shell vertices. An 8th 8-dim lattice 1E8 with 240 first-shell vertices, related to the D8 adjoint part of E8 Cl(8), is related to the 7 octonion imaginary lattices (viXra 1301.0150). Give each D8 brane structure based on Planck-scale E8 lattices so that each D8 brane is a superposition/intersection/coincidence of the eight E8 lattices.

Step 9: Since Polchinski says "... If r D-branes coincide ... there are r^2 vectors, forming the adjoint of a U(r) gauge group ...", make the following assignments:
• a gauge boson emanating from D8 only from its 1E8 lattice is a U(1) photon;
• a gauge boson emanating from D8 only from its 1E8 and EE8 lattices is a U(2) weak boson;
• a gauge boson emanating from D8 only from its IE8, JE8, and KE8 lattices is a U(3) gluon.

Note that I do not consider it problematic to have U(2) and U(3) instead of SU(2) and SU(3) for the weak and color forces, respectively. Here is some further discussion of the global Standard Model group structure. Here is some discussion of the root vector structures of the Standard Model groups.

Step 10: Since Polchinski says "... there will also be r^2 massless scalars from the components normal to the D-brane. ... the collective coordinates ... X^u ... for the embedding of n D-branes in spacetime are now enlarged to nxn matrices. This 'noncommutative geometry' ...[may be]... an important hint about the nature of spacetime. ...", make the following assignment: The 8x8 matrices for the collective coordinates linking a D8 brane to the next D8 brane in the stack are needed to connect the eight E8 lattices of the D8 brane to the eight E8 lattices of the next D8 brane in the stack.

We have now accounted for all the scalars, and, since, as Lubos Motl noted, "... string theory always contains gravity ...", we have here at Step 10 a specific example of a String Theory containing gravity and the U(1)xSU(2)xSU(3) Standard Model.

Step 11: We can go a bit further by noting that we have not described gauge bosons emanating from D8 from its iE8, jE8, or kE8 lattices. Therefore, make the following assignment:
• a gauge boson emanating from D8 only from its 1E8, iE8, jE8, and kE8 lattices is a U(2,2) conformal gauge boson.

We have here at Step 11 a String Theory containing the Standard Model plus two forms of gravity: I conjecture that those two forms of gravity are not only consistent, but that the structures of each will shed light on the structures of the other, and that the conformal structures are related to the conformal gravity ideas of I. E. Segal.

Step 12: Going a bit further leads to consideration of the exceptional E-series of Lie algebras, as follows: a gauge boson emanating from D8 only from its 1E8, iE8, jE8, kE8, and EE8 lattices is a U(5) gauge boson related to Spin(10) and Complex E6. a gauge boson emanating from D8 only from its 1E8, iE8, jE8, kE8, EE8, and IE8 lattices is a U(6) gauge boson related to Spin(12) and Quaternionic E7.
a gauge boson emanating from D8 only from its 1E8, iE8, jE8, kE8, EE8, IE8, and JE8 lattices is a U(7) gauge boson related to Spin(14) and possibly to Sextonionic E(7+(1/2)). a gauge boson emanating from D8 only from its 1E8, iE8, jE8, kE8, EE8, IE8, JE8, and KE8 lattices is a U(8) gauge boson related to Spin(16) and Octonionic E8. These correspondences are based on the natural inclusion of U(N) in Spin(2N) and on Magic Square constructions of the E series of Lie algebras, roughly described as follows: • 78-dim E6 = 45-dim Adjoint of Spin(10) + 32-dim Spinor of Spin(10) + Imaginary of C; • 133-dim E7 = 66-dim Adjoint of Spin(12) + 64-dim Spinor of Spin(12) + Imaginaries of Q; • 248-dim E8 = 120-dim Adjoint of Spin(16) + 128-dim half-Spinor of Spin(16) Physically, • E6 corresponds to 26-dim String Theory, related to traceless J3(O)o and the symmetric space E6 / F4. • E7 corresponds to 27-dim M-Theory, related to the Jordan algebra J3(O) and the symmetric space E7 / E6 x U(1). • E8 corresponds to 28-dim F-Theory, related to the Jordan algebra J4(Q) and the symmetric space E8 / E7 x SU(2). Note on Sextonions: I am not yet clear about how the Sextonionic E(7+(1/2)) works. It was only recently developed by J. M. Landsberg and Laurent Manivel in their paper "The sextonions and $E_{7\frac 12}$" at math.RT/0402157. Of course, the Sextonion algebra is not a real division algebra, but it does have interesting structure. In their paper, Landsberg and Manivel say: "... We fill in the "hole" in the exceptional series of Lie algebras that was observed by Cvitanovic, Deligne, Cohen and deMan. More precisely, we show that the intermediate Lie algebra between $E_7$ and $E_8$ satisfies some of the decomposition and dimension formulas of the exceptional simple Lie algebras. A key role is played by the sextonions, a six dimensional algebra between the quaternions and octonions. Using the sextonions, we show simliar results hold for the rows of an expanded Freudenthal magic chart. We also obtain new interpretations of the adjoint variety of the exceptional group $G_2$. ... ... the orthogonal space to a null-plane U, being equal to the kernel of a rank-two derivation, is a six-dimensional subalgebra of O. ... ... The decomposition ... into the direct sum of two null-planes, is unique. ...[this]... provides an interesting way to parametrize the set of quaternionic subalgebras of O. ...". Some possibly related facts of which I am aware include: • The set of Quaternionic subalgebras of Octonions = SU(3) = G2 / Spin(4). • G2 / SU(3) = S6 is almost complex but not complex and is not Kaehler. Its almost complex structure is not integrable. See chapter V of Curvature and Homology, rev. ed., by Samuel I. Goldberg (Dover 1998). • It may be that the sextonions and S6 are related to Spin(4) as the 6-dim conformal vector space of SU(2,2) = Spin(2,4) is related to 4-dim Minkowski space. Note on the Monster: The 26 dimensions of String Theory might be related to the 26 Sporadic Finite Simple Groups, the largest of which, the Monster, has about 8 x 10^53 elements. If you use positronium (electron-positron bound state of the two lowest-nonzero-mass Dirac fermions) as a unit of mass Mep = 1 MeV, then it is interesting that the product of the squares of the Planck mass Mpl = 1.2 x 10^22 MeV and W-boson mass Mw = 80,000 MeV gives ( ( Mpl/Mep )( Mw/Mep) )^2 = 9 x 10^53 which is roughly the Monster order. 
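The closing arithmetic is easy to check numerically (a sketch using the mass values quoted in the text):

```python
mpl = 1.2e22    # Planck mass in MeV, as quoted above
mw  = 8.0e4     # W-boson mass in MeV (80,000 MeV)
mep = 1.0       # positronium mass unit Mep = 1 MeV

estimate = ((mpl / mep) * (mw / mep)) ** 2
print(f"{estimate:.2e}")   # ~9.2e53, versus the Monster order ~8.1e53
```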
Maybe the Monster shows how, in the world of particle physics, "big" things like Planck mass and W-bosons are related to "little" (but not zero-mass) things like electrons and positrons, thus giving you some perspective on the world of fundamental particles.

[ July 2004 note by Frank D. (Tony) Smith, Jr., on Sporadic Finite Groups

The 26 Sporadic Finite Groups correspond to the 26 dimensions of J3(O)o, the traceless 3x3 Hermitian Octonionic matrices,

a      Os+    Ov
Os+*   b      Os-
Ov*    Os-*   -a-b

as follows: the 8 red groups correspond to the 8-dim octonionic Os+, the 8 green groups correspond to the 8-dim octonionic Os-, the 4+4 = 8 blue groups correspond to the 4+4 = 8-dim octonionic Ov, and the 2 black groups correspond to a and b. The 8 + 8 + 4 = 20 groups above the dashed --- line correspond to the Monster Family, as they are all part of the Monster Group F1.

Here are 3 tables:
• Involvements among the Sporadic Finite Simple Groups, from A Brief Introduction to the Finite Simple Groups, by Robert L. Griess, Jr., in Vertex Operators in Mathematics and Physics - Proceedings of a Conference November 10-17, 1983 (Springer-Verlag 1984), and from Sporadic Groups, by Michael Aschbacher (Cambridge 1994);
• Orders of the Sporadic Finite Simple Groups, from The Classification of the Finite Simple Groups, by Daniel Gorenstein, Richard Lyons, and Ronald Solomon (AMS 1994); and
• a combined table of both Involvements and Orders.

See also the Atlas of Sporadic Groups at http://web.mat.bham.ac.uk/atlas/v2.0/spor/ on the web.

Involvements:
F1
F2 in F1
F3 in F1 F2
F5 in F1 F2
F7 in F1 Fi24
Fi24 in F1
Fi23 in F1 F2 Fi24
Fi22 in F1 F2 Fi24 Fi23
Co1 in F1
Co2 in F1 F2 Co1
Co3 in F1 Co1
M24 in F1 Fi24 Co1 J4 ?in F2?
M23 in F1 F2 Fi24 Fi23 Co1 Co2 Co3 M24 J4
M22 in F1 F2 F5 Fi24 Fi23 Fi22 Co1 Co2 Co3 M24 M23 Mc HS J4 Ly
M12 in F1 F2 F5 Fi24 Fi23 Fi22 Co1 Co3 M24 Suz J4
M11 in F1 F2 F5 Fi24 Fi23 Fi22 Co1 Co2 Co3 M24 M23 M12 Suz Mc HS ON J4 Ly
Suz in F1 Co1
J2 in F1 Co1 Suz ?in F2 Fi24 Fi23?
Mc in F1 F2 Co1 Co2 Co3 Ly
HS in F1 F2 F5 Co1 Co2 Co3
-----------------------------------------------------
ON
J1 in ON ?in F1 F2?, also in G2(11)
J3 also in E6(4)
Ru also in E7(5)
J4
Ly

Primes below 72 not used in sporadic finite groups: 53 61

Orders:
F1   2^46 3^20 5^9 7^6 11^2 13^3 17 19 23 29 31 41 47 59 71
F2   2^41 3^13 5^6 7^2 11 13 17 19 23 31 47
F3   2^15 3^10 5^3 7^2 13 19 31
F5   2^14 3^6 5^6 7 11 19
F7   2^10 3^3 5^2 7^3 17
Fi24 2^21 3^16 5^2 7^3 11 13 17 23 29
Fi23 2^18 3^13 5^2 7 11 13 17 23
Fi22 2^17 3^9 5^2 7 11 13
Co1  2^21 3^9 5^4 7^2 11 13 23
Co2  2^18 3^6 5^3 7 11 23
Co3  2^10 3^7 5^3 7 11 23
M24  2^10 3^3 5 7 11 23
M23  2^7 3^2 5 7 11 23
M22  2^7 3^2 5 7 11
M12  2^6 3^3 5 11
M11  2^4 3^2 5 11
Suz  2^13 3^7 5^2 7 11 13
J2   2^7 3^3 5^2 7
Mc   2^7 3^6 5^3 7 11
HS   2^9 3^2 5^3 7 11
-----------------------------------------------------
ON   2^9 3^4 5 7^3 11 19 31
J1   2^3 3 5 7 11 19
J3   2^7 3^5 5 17 19
Ru   2^14 3^3 5^3 7 13 29
J4   2^21 3^3 5 7 11^3 23 29 31 37 41 43 47 59 71
Ly   2^8 3^7 5^6 7 11 31 37 67

Primes below 72 not used in sporadic finite groups: 53 61

Combined Involvements and Orders:
F1 : 2^46 3^20 5^9 7^6 11^2 13^3 17 19 23 29 31 41 47 59 71
F2 in F1 : 2^41 3^13 5^6 7^2 11 13 17 19 23 31 47
F3 in F1 F2 : 2^15 3^10 5^3 7^2 13 19 31
F5 in F1 F2 : 2^14 3^6 5^6 7 11 19
F7 in F1 Fi24 : 2^10 3^3 5^2 7^3 17
Fi24 in F1 : 2^21 3^16 5^2 7^3 11 13 17 23 29
Fi23 in F1 F2 Fi24 : 2^18 3^13 5^2 7 11 13 17 23
Fi22 in F1 F2 Fi24 Fi23 : 2^17 3^9 5^2 7 11 13
Co1 in F1 : 2^21 3^9 5^4 7^2 11 13 23
Co2 in F1 F2 Co1 : 2^18 3^6 5^3 7 11 23
Co3 in F1 Co1 : 2^10 3^7 5^3 7 11 23
M24 in F1 Fi24 Co1 J4 ?in F2? : 2^10 3^3 5 7 11 23
M23 in F1 F2 Fi24 Fi23 Co1 Co2 Co3 M24 J4 : 2^7 3^2 5 7 11 23
M22 in F1 F2 F5 Fi24 Fi23 Fi22 Co1 Co2 Co3 M24 M23 Mc HS J4 Ly : 2^7 3^2 5 7 11
M12 in F1 F2 F5 Fi24 Fi23 Fi22 Co1 Co3 M24 Suz J4 : 2^6 3^3 5 11
M11 in F1 F2 F5 Fi24 Fi23 Fi22 Co1 Co2 Co3 M24 M23 M12 Suz Mc HS ON J4 Ly : 2^4 3^2 5 11
Suz in F1 Co1 : 2^13 3^7 5^2 7 11 13
J2 in F1 Co1 Suz ?in F2 Fi24 Fi23? : 2^7 3^3 5^2 7
Mc in F1 F2 Co1 Co2 Co3 Ly : 2^7 3^6 5^3 7 11
HS in F1 F2 F5 Co1 Co2 Co3 : 2^9 3^2 5^3 7 11
-----------------------------------------------------
ON : 2^9 3^4 5 7^3 11 19 31
J1 in ON ?in F1 F2?, also in G2(11) : 2^3 3 5 7 11 19
J3 also in E6(4) : 2^7 3^5 5 17 19
Ru also in E7(5) : 2^14 3^3 5^3 7 13 29
J4 : 2^21 3^3 5 7 11^3 23 29 31 37 41 43 47 59 71
Ly : 2^8 3^7 5^6 7 11 31 37 67 ]

Tony's Home Page
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4750096797943115, "perplexity": 3080.2463066246296}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585507.26/warc/CC-MAIN-20211022114748-20211022144748-00349.warc.gz"}
https://electronics.stackexchange.com/questions/313337/is-gain-block-can-be-used-a-power-amplifier
# Can a gain block be used as a power amplifier?

I have chosen 2 datasheets of MMIC amplifiers:

1. HMC311ST89 MMIC AMPLIFIER – DC-6GHZ – 16 dB gain
2. HMC637ALP5E – DC-6GHZ – 13 dB gain

My intention is to use the 1st one as a power amplifier. I was wondering whether this is the right choice, because the 1st one is described as a GAIN BLOCK, whereas the 2nd one is described as a Power Amplifier in its respective datasheet.

• You need to look at the 1 dB compression point (or, alternatively, at the third-order intercept point) spec in the datasheets in order to make a decision. – Enric Blanco Jun 28 '17 at 9:58
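Following Enric Blanco's comment, the practical comparison is between P1dB figures, which datasheets quote in dBm. A small conversion helper (my own snippet; the example values are placeholders, not the datasheet numbers):

```python
def dbm_to_mw(p_dbm):
    """Convert a power level in dBm to milliwatts: P[mW] = 10**(P[dBm]/10)."""
    return 10 ** (p_dbm / 10)

# Hypothetical P1dB figures for two amplifiers (check the real datasheets):
for name, p1db in [("gain block", 17.0), ("power amp", 30.0)]:
    print(f"{name}: P1dB = {p1db} dBm = {dbm_to_mw(p1db):.0f} mW")
```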
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9139423966407776, "perplexity": 2721.7001403390573}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046153931.11/warc/CC-MAIN-20210730025356-20210730055356-00636.warc.gz"}
https://www.physicsforums.com/threads/best-physics-problem-ever.352668/
# Best physics problem ever

1. Nov 7, 2009

### DanP

Dear Sir,

I am a high school student and have a problem. My teacher and I were talking about Satan. Of course you know that when he fell from heaven, he fell for nine days, and nine nights, at 32 feet a second and was increasing his speed every second. I was told there was a foluma [formula] to it. I know you don't have time for such little things, but if possible please send me the foluma.

Thank you,
Jerry

Quite possibly a letter sent to Einstein by a child, I was told.

2. Nov 7, 2009

### Danger

The formula is simple; there's no such thing as Satan.

3. Nov 7, 2009

### DanP

Some theorists believe the LHC will spawn him. Watch and laugh:

Last edited by a moderator: Sep 25, 2014

4. Nov 7, 2009

### FredGarvin

That assumes that the acceleration due to gravity between Heaven and Hell is constant and equal to that of the Earth.

5. Nov 7, 2009

### Redd

Then we could calculate the distance between heaven and hell. It seems surprisingly small. Maybe this is actually a telling metaphor?

6. Nov 7, 2009

### DaveC426913

Just for fun though... At what altitude is Heaven, by this logic? 9 days' fall to Earth, accounting for gravitational gradient, would give us what altitude?

7. Nov 7, 2009

### slider142

There is a small problem. We are told that he falls at 32 feet per second, a constant velocity, and then told that he increases his speed every second (by an unknown amount, unless he is falling to Earth or another known mass). This is contradictory, unless the problem statement is that he is accelerating at 32 feet per second every second, which is approximately the acceleration due to gravity near sea level. Unfortunately, a 9 day fall cannot be anywhere near sea level, unless he is falling through a hole cut in the Earth.

Last edited: Nov 7, 2009

8. Nov 7, 2009

### slider142

Is there atmosphere present at the time of Satan's fall? If so, we must account for air friction; as he enters the atmosphere near the end of his fall he will brake to terminal velocity in air, which depends on his cross-section and mass. Assuming no friction and that he is actually falling to Earth's sea level, where Earth has mass M, we can use energy methods:

$$9 \text{ days} = \int_\text{sea level}^x \frac{\pm dx}{\sqrt\frac{2GM}{x}}$$

Solving for the displacement x gives us:

$$x = \left(\pm 3\sqrt\frac{GM}{2}(9 \text{ days}) + (\text{sea level})^\frac{3}{2}\right)^\frac{2}{3}$$

where the numerical value of G must be adjusted for days instead of seconds. The Earth page on Wikipedia gives average values for sea level and mass and assuming exact 24-hour days, we have approximately 6434 km, which is pretty close to the radius of the Earth.

Last edited: Nov 7, 2009

9. Nov 7, 2009

### Staff: Mentor

Yeah, got to agree with Danger. The problem is non-existent.

10. Nov 7, 2009

### DanP

Henceforth, this shall be known as "Satan's law".

11. Nov 7, 2009

### Loren Booda

I once saw it calculated that heaven is hotter than hell - but that might violate the guidelines (no kidding).

12. Nov 7, 2009

### Staff: Mentor

Was that this?

13. Nov 7, 2009

### Pengwuino

I believe I saw this same mathematics prove that Women = Evil.

14. Nov 7, 2009

### NeoDevin

Well, women take time and money, so $\mbox{Women} = \mbox{Time}\cdot\mbox{Money}$. I have often heard it claimed that time is money, $\mbox{Time} = \mbox{Money}$, so we have $\mbox{Women} = \mbox{Money}^2$.
But since money is the root of evil, $\mbox{Money} = \sqrt{\mbox{Evil}}$, we know that $\mbox{Women} = \left(\sqrt{\mbox{Evil}}\right)^2 = \mbox{Evil}$.

15. Nov 7, 2009

### whs

Fall speed from heaven = 2 * X, where X is a constant for adjusting the fall speed to your liking.

16. Nov 7, 2009

### DaveC426913

17. Nov 7, 2009

### Loren Booda

I think I saw the calculation in an old Ripley's Believe It or Not. There were references from a religious text, saying how high heaven is and where hell is located. Then using some earth physics - voila! I wonder if the student you quote ended up in engineering, divinity or their unification: theoretical physics.

18. Nov 8, 2009

### NeoDevin

19. Nov 8, 2009

### DanP

Would be funny if you still have the proof somewhere and you can post it as a joke.

20. Nov 8, 2009

### arildno

This then yields the equation:

$$women=\frac{evil}{{love}^{2}}$$

or:

$${love}^{2}*women=evil$$

or:

$$love*(love*women)=evil$$

Thus, the problem is that it is evil to love the fact that you love women. Straight men should regret the fact that they are not gay, while lesbians ought to regret they are not straight, I suppose.

Last edited: Nov 8, 2009
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9315000176429749, "perplexity": 2751.339016503232}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948517845.16/warc/CC-MAIN-20171212173259-20171212193259-00580.warc.gz"}
https://scholar.lib.ntnu.edu.tw/en/publications/an-alternative-approach-for-a-distance-inequality-associated-with-2
# An alternative approach for a distance inequality associated with the second-order cone and the circular cone

Xin He Miao, Yen chi Roger Lin, Jein Shan Chen*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

## Abstract

It is well known that the second-order cone and the circular cone have many analogous properties. In particular, there exists an important distance inequality associated with the second-order cone and the circular cone. The inequality indicates that the distances of arbitrary points to the second-order cone and the circular cone are equivalent, which is crucial in analyzing the tangent cone and normal cone for the circular cone. In this paper, we provide an alternative approach to achieve the aforementioned inequality. Although the proof is a bit longer than the existing one, the new approach offers a way to clarify when the equality holds. Such a clarification is helpful for further study of the relationship between the second-order cone programming problems and the circular cone programming problems.

Original language: English
Article number: 291
Journal: Journal of Inequalities and Applications
Volume: 2016
Issue: 1
DOI: https://doi.org/10.1186/s13660-016-1243-5
Publication status: Published - 2016 Dec 1

## Keywords

• circular cone
• distance
• projection
• second-order cone

## ASJC Scopus subject areas

• Analysis
• Discrete Mathematics and Combinatorics
• Applied Mathematics
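For context, the two cones in question have standard definitions in the literature (my own summary, not quoted from this paper): writing $x = (x_1, \bar{x}) \in \mathbb{R} \times \mathbb{R}^{n-1}$,

$$\mathcal{K}^n = \{ (x_1, \bar{x}) : \|\bar{x}\| \le x_1 \}, \qquad \mathcal{L}_\theta = \{ (x_1, \bar{x}) : \|\bar{x}\| \le x_1 \tan\theta \}, \quad \theta \in (0, \pi/2),$$

so the second-order cone $\mathcal{K}^n$ is the special case $\theta = \pi/4$ of the circular cone $\mathcal{L}_\theta$.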
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9266403317451477, "perplexity": 950.8813285018666}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338280.51/warc/CC-MAIN-20221007210452-20221008000452-00209.warc.gz"}
https://adventuresindatascience.wordpress.com/category/uncategorized/
## Picking regularization parameters the easy way

(tl;dr: Use reciprocal distributions in your scikit-learn randomized-search cross-validation. If you don't believe that's easy, scroll down to see how little Python code you need to do this.)

Picking model parameters ("hyperparameters") is a common problem. I've written before on a powerful online-learning approach to parameter optimization using Gaussian Process Regression. There are other similar approaches out there, like Spearmint etc. But a lot of the time we don't necessarily need such a powerful tool – we'd rather have something quick and easy that is available in scikit-learn (sklearn). For example, let's say we want to use a classification model with one or two regularization parameters – what's an easy way to pick values for them?

## Cross-validation and grid-search

Cross-validation (CV) has been explained well by other folks so I won't rehash it here. But let's talk about deciding which parameter value choices to try. Let's say we expect our regularization parameter to have its optimal value between 1e-7 and 1e2. In this case we might try this set of ten choices:

$\{10^{-7}, 10^{-6}, 10^{-5}, 10^{-4}, 10^{-3}, 10^{-2}, 10^{-1}, 1, 10^1, 10^2\}$

If we exhaustively try these, we'd have to run ten CVs. We could also try more values, which would take longer, or fewer values, which might miss optimality by quite a bit.

What if we have two parameters? If we again try ten choices per parameter, we're now talking a hundred CVs. This kind of exhaustive search is called a grid search because we are searching over a grid of every combination of parameter choices. If you have even more parameters, or are trying to do a search that is more fine-grained or over a larger range, you can see that the number of CVs to run will really balloon into a very time-consuming endeavor.

## Randomized search

Instead of a grid search exhaustively combing through every combination of parameter choices, what if we just picked a limited number of combinations – say fifty – at random. Obviously, this would make the process quicker than running a hundred, or a thousand, or a million CVs. But the results would obviously be worse, right?

In fact, it turns out that randomized search can do about as well as a much longer exhaustive search. This paper by Bergstra & Bengio explains why, and below is a beautiful figure from the paper that illustrates one mechanism of how this works:

In the figure above the two parameters are shown on the vertical and horizontal axis, and their contribution is shown in green and yellow. You can see that randomized search does a better job of nailing the sweet spot for the parameter that really matters – so long as we don't just use the same grid points for the random search, but are actually searching in the continuous space. We'll see how to do this in a moment.

## Scikit-learn

Scikit-learn has very convenient built-in support for CV-based parameter search for both the exhaustive grid and randomized method. Both can be used with any sklearn model, or any sklearn-compatible model for that matter. I'll focus on the randomized search method, which is called RandomizedSearchCV().

You'll notice the documentation in the link above echoes what we said in the last section: "it is highly recommended to use continuous distributions for continuous parameters." So let's talk about how to do this.
## Choosing a continuous space for regularization parameters

Look at what we intuitively did for the grid search case: we laid out a few options like this:

$\{10^{-7}, 10^{-6}, 10^{-5}, 10^{-4}, 10^{-3}, 10^{-2}, 10^{-1}, 1, 10^1, 10^2\}$

What kind of selection is this? Or to put it formally, if you think about these values as sample pulls from a distribution, what kind of distribution is this?

One way to think about this is: we want about an equal chance of ending up with a number of any order of magnitude within our range of interest. Let's put this a little more concretely: we'd like equal chances of ending up with a number in the interval [1e-7, 1e-6] as in the interval [1e1, 1e2]. If you think about it a little, this is not a uniform distribution. A uniform distribution would be many orders of magnitude more likely to give you a number in the latter interval than in the former. Exponential then? Nope, not that either.

I had to figure this out on my own with some head-scratching and math-scribbling. It turns out that what we need here is a reciprocal distribution: this is a distribution with the probability density function (pdf) given by

$f(x) = \text{constant}\times\cfrac{1}{x}$

where $x$ is limited to a specified range. In our case, the range is our range of interest: [1e-7, 1e2]. Defining this distribution for our regularization parameters will give us the kind of random picks we want – equiprobable for any order of magnitude within the range. Try it out:

# Pick ten random choices from our reciprocal distribution
import scipy.stats
scipy.stats.reciprocal.rvs(a=1e-7, b=1e2, size=10)

## Putting it all together

Finally the fun part! Here's the Python code for the whole thing:

# Imports
from sklearn.model_selection import RandomizedSearchCV, train_test_split
import scipy.stats
from polylearn import FactorizationMachineClassifier

# X, y: your feature matrix and labels (assumed already loaded)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33)

# specify parameters and distributions to sample from
param_dist = {
    "alpha": scipy.stats.reciprocal(a=1e-7, b=1e2),
    "beta": scipy.stats.reciprocal(a=1e-7, b=1e2),
    "loss": ["squared_hinge", "logistic"],
    "degree": [2, 3]
}

# Model type: in this case, a Factorization Machine (FM)
fm = FactorizationMachineClassifier(max_iter=1000)

# Now do the search
random_search = RandomizedSearchCV(
    fm,
    param_distributions=param_dist,
    n_iter=50,
    scoring='roc_auc',
    return_train_score=False
)
random_search.fit(X_train, y_train)

# Show key results; details are in random_search.cv_results_
print("\nBest model:", random_search.best_estimator_)
print("\nTest score for best model:", random_search.score(X_test, y_test))

#### Notes:

• Here the model I'm trying to design is polylearn's FactorizationMachineClassifier. This may be overkill for this toy dataset, but I'm using it because:
  • It shows that you can use this approach not just for sklearn models but also for any sklearn-compatible model
  • It's a good showcase for multiple parameters of which some are continuous and some discrete
  • Of course you could instead use any other model you like, like LogisticRegression
• You can see how convenient it is to specify the continuous parameters alpha and beta as random variables. The 50 search iterations will automatically pull values to use from the specified distributions (reciprocal in this case).
• I'm using area under the ROC curve as my success criterion; you could use any other choice you like.
• For a convenient way to check out the search results in more detail, check out the sample code on this page (specifically the report() function).

Hope that helps. Happy reciprocal-distribution random-searching!

## Postscript

Sergey Feldman points out a simpler and more intuitive way to think about the reciprocal distribution: if we have a random variable X with a uniform distribution, then Y = 10^X has a reciprocal distribution. In other words, the distribution we use above (a reciprocal distribution in the range [10^-7, 10^2]) gives us a uniform sampling of exponents in the range [-7, 2], which is what we want.

## "Uncovering Big Bias with Big Data": A Review

Lawyer-turned-data-scientist David Colarusso recently came out with a very interesting and important analysis highlighting the effects of race, sex, and (imputed) income on criminal sentencing – it's called "Uncovering Big Bias with Big Data". (I came across this via MetaFilter via mathbabe.) Colarusso's findings are that defendants who are black, poor, or male can expect longer sentences for the same charges than defendants who are, respectively, white, rich, or female – a finding that is both a sad comment on our justice system and an unsurprising one. But before taking his quantitative assertions as empirically valid, I wanted to look at it a little deeper, and here is what I found:

• For a model to show merit, it's crucial for it to perform predictably on unseen data. Colarusso pays lip service to testing a model with held-out data (which he inaccurately calls "cross-validation"), but that's pretty much it. The main post linked above actually doesn't present any details on it at all. When I dug deeper in the supporting iPython notebook, things got even weirder. Instead of using coefficients derived from training data to make predictions on the held-out data and then assess the validity of the predictions, he simply runs a regression training a second time on the held-out data, producing a new set of coefficients. What?! He says "the code below doesn't really capture how I go about cross-validation", but there is no other description of how he did go about testing with held-out data.
• Using a single predictor – charge seriousness – the R² score drops by half when applying the log function to the outcome. Thereafter, it does not rise when adding more predictors. So from an explanation-of-variance standpoint, the very first simplistic model is better than the final one.
• Speaking of predictors, race and income were treated as independent covariates, when they are obviously correlated. Regularization could help with this problem, but was not considered. Interactions weren't considered either – why not?
• Finally, despite all the significant issues I mention above, this is perhaps the most worthy and important piece of analysis I've seen recently. Why do I say this? We have a glut of data scientists doing analyses on things that simply do not matter. Meanwhile, Colarusso has taken incarceration, something that deeply and destructively impacts the lives of not just individuals but entire communities, and scrutinized the notion many take for granted: that convicts deserve the sentences they get, and the oft-repeated (and racist) lie that disproportions in the justice system merely reflect the demographics of those who commit crimes. For this he deserves commendation.
Since both data and his code are freely available, I'd encourage those who find fault with his analysis (and I include myself in this group) to not merely criticize, but try to do better.

## Integrating Spark with scikit-learn, visualizing eigenvectors, and fun!

Three topics in this post, to make up for the long hiatus!

1. Apache Spark's MLlib has built-in support for many machine learning algorithms, but not everything of course. But one can nicely integrate scikit-learn (sklearn) functions to work inside of Spark, distributedly, which makes things very efficient. That's what I'm going to be talking about here.

As a practical example, let's consider k-Nearest-Neighbors (k-NN). Spark's MLlib doesn't have built-in support for this, but scikit-learn does. So let's talk about sklearn for a minute. If you have a large number of points, say a million or more, and you want to obtain nearest neighbors for all of them (as may be the case with a k-NN-based recommender system), sklearn's NearestNeighbors on a single machine can be hard to work with. The fit() method isn't what takes a long time, it's subsequently producing the results for the large number of queries with kneighbors() that is expensive: In the most straightforward deployment, if you try to send kneighbors() all point vectors in a single large matrix and ask it to come up with nearest neighbors for all of them in one fell swoop, it quickly exhausts the RAM and brings the machine to a crawl. Alternatively, the batch iteration method that I mentioned before is a good solution: after performing the initial fit, you can break the large matrix into chunks and obtain their neighbors chunk by chunk. This eases memory consumption, but can take a long time.

There are of course approximate nearest-neighbor implementations such as Spotify's Annoy. In my use case, Annoy actually did worse than sklearn's exact neighbors, because Annoy does not have built-in support for matrices: if you want to evaluate nearest neighbors for n query points, you have to loop through each of your n queries one at a time, whereas sklearn's k-NN implementation can take in a single matrix containing many query points and return nearest neighbors for all of them at a blow, relatively quickly. Your mileage may vary.

To summarize the problem:
• sklearn has good support for k-NN; Spark doesn't.
• sklearn's k-NN fit() isn't a problem
• sklearn's k-NN kneighbors() is a computational bottleneck for large data sets and is a good candidate for parallelization

This is where Spark comes in. All we have to do is insert kneighbors() into a Spark map function after setting the stage for it. This is especially neat if you're already working in Spark and/or if your data is already in HDFS to begin with, as is commonly the case. Below is a simplified Python (PySpark) code snippet to make this approach clear:

# Imports
from pyspark import SparkConf, SparkContext
from sklearn.neighbors import NearestNeighbors

# Let's say we already have a SparkContext sc and a Spark RDD
# containing all our vectors, called myvecs
myvecs.cache()

# Create kNN tree locally, and broadcast it to the workers
myvecscollected = myvecs.collect()
knnobj = NearestNeighbors().fit(myvecscollected)
bc_knnobj = sc.broadcast(knnobj)

# Get neighbors for each point, distributedly
# (wrap x in a list so kneighbors receives a 2-D array)
results = myvecs.map(lambda x: bc_knnobj.value.kneighbors([x]))

Boom! That's all you need.
The key point in the above code is that we were able to pass sklearn's NearestNeighbors' kneighbors() method inside of Spark's map(), which means that it can be handled nicely, in parallel, by Spark.

(You can do the same thing using Annoy instead of sklearn, except that instead of broadcasting the Annoy object to workers, you need to serialize it to a file and distribute the file to workers. This code shows you how.)

In my use case, harnessing Spark to distribute my sklearn code brought my runtime down from hours to minutes!

Update: between the time I first considered this problem and now, there has also emerged a Spark package for distributing sklearn functionality over Spark, as well as a more comprehensive integration called sparkit-learn. So there are several solutions available now. I still like the approach shown above for its simplicity, and for not requiring any extraneous code.

2. A beautiful interactive visualization of eigenvectors, courtesy of the wonderful folks at Setosa. The thing that I love about this viz is that it doesn't just show how eigenvectors are computed, it gives you an intuition for what they mean.

3. Lastly, and just for fun: Is it Pokemon or Big Data? ☺

## Minibatch learning for large-scale data, using scikit-learn

Let's say you have a data set with a million or more training points ("rows"). What's a reasonable way to implement supervised learning?

One approach, of course, is to only use a subset of the rows. This has its merits, but there may be various reasons why you want to use the entire available data. What then?

Andy Müller created an excellent cheat sheet, thumbnailed below, showing which machine learning techniques are likely to work best in different situations (clickable version here). It's obviously not meant to be a rigid rule, but it's still a good place to start answering the question above, or most similar questions.

What we see from the above is that our situation points us towards Stochastic Gradient Descent (SGD) regression or classification.

Why SGD? The problem with standard (usually gradient-descent-based) regression/classification implementations, support vector machines (SVMs), random forests etc is that they do not effectively scale to the data size we are talking, because of the need to load all the data into memory at once and/or nonlinear computation time. SGD, however, can deal with large data sets effectively by breaking up the data into chunks and processing them sequentially, as we will see shortly; this is often called minibatch learning. The fact that we only need to load one chunk into memory at a time makes it useful for large-scale data, and the fact that it can work iteratively allows it to be used for online learning as well. SGD can be used for regression or classification with any regularization scheme (ridge, lasso, etc) and any loss function (squared loss, logistic loss, etc).

What is SGD? It's been explained very nicely by Andrew Ng in his Coursera class (Week 10: Large Scale Machine Learning), and Léon Bottou has a somewhat more in-depth tutorial on it. Their explanations are excellent, and there's no point in my duplicating them, so I'll move on to implementation using Python and the scikit-learn (sklearn) library.

The key feature of sklearn's SGDRegressor and SGDClassifier classes that we're interested in is the partial_fit() method; this is what supports minibatch learning. Whereas other estimators need to receive the entire training data in one go, there is no such necessity with the SGD estimators.
One can, for instance, break up a data set of a million rows into a thousand chunks, then successively execute partial_fit() on each chunk. Each time one chunk is complete, it can be thrown out of memory and the next one loaded in, so memory needs are limited to the size of one chunk, not the entire data set.

(It's worth mentioning that the SGD estimators are not the only ones in sklearn that support minibatch learning; a variety of others are listed here. One can use this approach with any of them.)

Finally, the use of a generator in Python makes this easy to implement. Below is a piece of simplified Python code for instructional purposes showing how to do this. It uses a generator called 'batcherator' to yield chunks one at a time, to be iteratively trained on using partial_fit() as described above.

from sklearn.linear_model import SGDRegressor

def iter_minibatches(chunksize):
    # Provide chunks one by one
    chunkstartmarker = 0
    while chunkstartmarker < numtrainingpoints:
        chunkrows = range(chunkstartmarker, chunkstartmarker + chunksize)
        X_chunk, y_chunk = getrows(chunkrows)
        yield X_chunk, y_chunk
        chunkstartmarker += chunksize

def main():
    batcherator = iter_minibatches(chunksize=1000)
    model = SGDRegressor()

    # Train model
    for X_chunk, y_chunk in batcherator:
        model.partial_fit(X_chunk, y_chunk)

    # Now make predictions with trained model
    y_predicted = model.predict(X_test)

We haven't said anything about the getrows() function in the code above, since it pretty much depends on the specifics of where the data resides. Common situations might involve the data being stored on disk, stored in distributed fashion, obtained from an interface etc. Also, while this simplistic code calls SGDRegressor with default arguments, this may not be the best thing to do. It is best to carry out careful cross-validation to determine the best hyperparameters to use, especially for regularization. There is a bunch more practical info on using sklearn's SGD estimators here. Hopefully this post, and the links within, give you enough info to get started. Happy large-scale learning!

## Online Learning with Gaussian Process Regression

In my post at the RichRelevance Engineering Blog, I describe how one can use Gaussian Process Regression for online learning. Excerpt:

Consider a situation where you have a dial to tweak, and this dial setting may influence a reward of some kind. For example, the dial may be a weight used in a personalization algorithm, and the reward may be clickthrough or revenue. The problem is, we don't know beforehand how the dial affects the reward, and the reward behavior may be noisy. How then can we choose a dial setting that maximizes the reward? A handy way to approach this problem is to model the unknown reward function as an instance of a Gaussian Process – this method is called kriging or Gaussian Process Regression (GPR)…
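To make the excerpt concrete, here is a minimal sketch of the GPR idea in Python. It uses the modern sklearn API (GaussianProcessRegressor), which is an assumption on my part since the original post predates this interface; the dial settings, rewards, and acquisition rule below are all illustrative.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.RandomState(0)

# Dial settings tried so far, and their noisy observed rewards (made up)
X_observed = rng.uniform(0, 1, size=(8, 1))
y_observed = np.sin(3 * X_observed).ravel() + rng.normal(0, 0.1, 8)

# WhiteKernel models the observation noise in the reward
gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel())
gpr.fit(X_observed, y_observed)

# Posterior mean and uncertainty over candidate dial settings; a simple
# acquisition rule (mean + k * std) picks the next setting to try
X_candidates = np.linspace(0, 1, 100).reshape(-1, 1)
mean, std = gpr.predict(X_candidates, return_std=True)
next_setting = X_candidates[np.argmax(mean + 1.0 * std)]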
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 4, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.49661558866500854, "perplexity": 1311.2255090189406}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948572676.65/warc/CC-MAIN-20171215133912-20171215155912-00447.warc.gz"}
http://www.r-bloggers.com/essa2013-conference/
# ESSA2013 Conference

November 24, 2012

By (This article was first published on R snippets, and kindly contributed to R-bloggers)

It has been just announced that during the ESSA2013 conference I am planning to organize a special track on "Statistical analysis of simulation models". I hope to get some presentations using GNU R to promote it in the social simulation community.

It is obvious that GNU R excels in analysis of simulation data. However, very often it can be neatly used to implement simulations themselves. For instance, I have recently implemented a simulation model proposed in Section 4 of the Volatility Clustering in Financial Markets: Empirical Facts and Agent-Based Models paper by Rama Cont.

The model is formulated as follows (I give only a brief description; please refer to the paper for more details). Consider a market with n trading agents and one asset. We simulate the market for times periods. In each period each agent can buy the asset, sell it or do nothing. The asset return r[i] in period i equals the number of buy orders minus the number of sell orders, divided by the number of agents n and multiplied by a normalizing constant max.r. Thus it will always lie in the interval [-max.r, max.r]. Agents make buy and sell decisions based on random public information about the asset. The stream of signals consists of IID normal random variables with mean 0 and standard deviation signal.sd. Each investor holds an internal non-negative decision-making threshold. If the signal is higher than the threshold level, a buy decision is made. If it is lower than minus the threshold level, the asset is sold. If the signal is not strong enough, the investor does nothing. After the return r[i] is determined, each investor with probability p.update updates the threshold to abs(r[i]).

As you can see, the description is quite lengthy. However, the implementation of the model in GNU R is a genuine snippet, as can be seen below:

cont <- function(times, n, signal.sd, max.r, p.update) {
    threshold <- vector("numeric", n)
    signal <- rnorm(times, 0, signal.sd)
    r <- vector("numeric", times)
    for (i in 1:times) {
        r[i] <- max.r * (sum(signal[i] > threshold) - sum(signal[i] < (-threshold))) / n
        threshold[runif(n) < p.update] <- abs(r[i])
    }
    return(r)
}

And an additional benefit is that one can analyze the simulation results in GNU R also. Here is a very simple example showing the relationship between signal.sd and the standard deviation of simulated returns (the initial burn-in period in the simulation is discarded):

cont.sd <- function(signal.sd) {
    sd(cont(10000, 1000, signal.sd, 0.1, 0.05)[1000:10000])
}

sd.in <- runif(100, 0.01, 0.1)
sd.out <- sapply(sd.in, cont.sd)
plot(sd.in, sd.out)

and here is the resulting plot:
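For readers who want to cross-check the model outside of R, here is a rough Python port of the same cont() function. This is my own translation, not from the post, and it has not been validated against the R version; NumPy's vectorized comparisons play the role of R's sum(signal[i] > threshold).

import numpy as np

def cont(times, n, signal_sd, max_r, p_update, seed=None):
    rng = np.random.default_rng(seed)
    threshold = np.zeros(n)
    signal = rng.normal(0.0, signal_sd, times)
    r = np.zeros(times)
    for i in range(times):
        # buy orders minus sell orders, normalized to [-max_r, max_r]
        r[i] = max_r * ((signal[i] > threshold).sum() -
                        (signal[i] < -threshold).sum()) / n
        # each agent updates its threshold with probability p_update
        threshold[rng.random(n) < p_update] = abs(r[i])
    return r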
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4231365919113159, "perplexity": 3335.7252838479426}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510274979.56/warc/CC-MAIN-20140728011754-00080-ip-10-146-231-18.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/quantum-physics-rayleigh-jeans-wiens-law.396534/
# Homework Help: Quantum physics - Rayleigh-Jeans/Wien's law

1. Apr 19, 2010

Show that the Rayleigh-Jeans radiation law is not consistent with the Wien displacement law, λ_max T = constant, or ν_max proportional to T.

2. Apr 19, 2010

### Gordianus

The displacement law states that at any temperature T the black body spectrum reaches its peak at a wavelength given by the displacement law. If you happen to plot the Rayleigh-Jeans formula, you'll find there is no maximum. The shorter the wavelength, the higher the spectral power. This is known as the "ultra-violet catastrophe" and, in the search of a "cure", Planck came up with his famous proposal.
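For reference, here is a sketch of the standard argument behind that answer (my own summary, not from the thread). In the Rayleigh-Jeans law the spectral energy density is strictly monotonic in wavelength:

$$u_\lambda(T) = \frac{8\pi k T}{\lambda^4}, \qquad \frac{\partial u_\lambda}{\partial \lambda} = -\frac{32\pi k T}{\lambda^5} < 0 \quad \text{for all } \lambda > 0,$$

so the spectrum has no interior maximum and no λ_max exists at any temperature, contradicting λ_max T = constant. Likewise, the frequency form u_ν ∝ ν² T increases without bound, so there is no ν_max either.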
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9566909670829773, "perplexity": 1737.437349901993}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267859817.15/warc/CC-MAIN-20180617213237-20180617233237-00552.warc.gz"}
http://mathhelpforum.com/calculators/53640-ti-how-find-holes-asymptotes-graphs-print.html
TI: How to find holes and asymptotes in graphs?

• October 14th 2008, 07:07 AM
shailen.sobhee

TI: How to find holes and asymptotes in graphs?

I have 3 questions:

1) There is a "hole" at the point where x=4 in the graph:

$\frac{(x-4)(x+3)}{(x-4)(x+5)}$

However, the hole is not visible when my TI-89 Titanium plots the graph. How can I configure my calculator to show the hole? I have tried all the possible ways of plotting the graphs (dot, line, thick, square, etc), but TI always fills it. Any ideas? Do you know any TI applications that can encircle the hole and make it more conspicuous?

2) Do you know any TI apps that can calculate the period of trigonometric functions? E.g.: 2sin(x)+4cot(0.5x)+3cos(x)=0

3) Are there any TI apps that can define the asymptote of a graph for a specific domain? (lower and upper bounds stated)

• October 14th 2008, 12:47 PM
mr fantastic

Quote:

Originally Posted by shailen.sobhee
[the three questions above]

There are some things you actually have to be able to do for yourself rather than rely on a calculator to do them for you.

• October 15th 2008, 07:08 PM
Bobqwerty

... true. But since we follow pre-defined rules for finding most of these things, you should be able to find something similar (or simply write the calc. program yourself. It's not that hard :P)

• October 15th 2008, 07:30 PM
Chris L T521

Quote:

Originally Posted by shailen.sobhee
[question 1 above]

Here's the graph of $y=\frac{(x-4)(x+3)}{(x-4)(x+5)}$ on my TI-89 Titanium (with a viewing window of [-8,8] x [-4,4]):

http://img.photobucket.com/albums/v4...sLT521/B01.jpg

Now, if you go to the zoom menu and select "ZoomDec"...

http://img.photobucket.com/albums/v4...sLT521/B02.jpg

You will get this:

http://img.photobucket.com/albums/v4...sLT521/B03.jpg

Do you notice something different between these two graphs? :D

--Chris

• October 15th 2008, 10:10 PM
shailen.sobhee

Now I see the "hole". Besides, discontinuity correction should be turned on to see the small gap. Like Bob said, I guess I shall start writing programs. I'm a programmer; asm or basic should be very easy to learn. I'm planning to write an app that can encircle a hole and make it noticeable. Sometimes, one cannot easily deduce the existence of a hole by looking at the mathematical expression. For instance, what if the expression was in expanded form?

Thanks to all who contributed.
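Off the calculator, holes like this can also be located symbolically. Here is a small sketch in Python using sympy (my own illustration, not TI software): a removable discontinuity is a zero of the original denominator that disappears after cancellation, and the limit there gives the y-value of the hole.

import sympy as sp

x = sp.symbols('x')
f = (x - 4)*(x + 3) / ((x - 4)*(x + 5))

# Zeros of the denominator before cancellation: candidates for
# holes and vertical asymptotes
denom_zeros = sp.solve(sp.denom(sp.together(f)), x)
simplified = sp.cancel(f)

# A candidate is a hole if the simplified denominator no longer
# vanishes there; otherwise it is a vertical asymptote
holes = [r for r in denom_zeros if sp.denom(simplified).subs(x, r) != 0]
print(holes)              # [4]
print(sp.limit(f, x, 4))  # 7/9, the y-coordinate of the hole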
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7978535294532776, "perplexity": 1317.532872545096}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997877881.80/warc/CC-MAIN-20140722025757-00221-ip-10-33-131-23.ec2.internal.warc.gz"}
https://wias-berlin.de/publications/wias-publ/run.jsp?template=abstract&type=Preprint&year=2017&number=2371
WIAS Preprint No. 2371, (2017)

# Homogenization theory for the random conductance model with degenerate ergodic weights and unbounded-range jumps

Authors

• Flegel, Franziska
• Heida, Martin
• Slowik, Martin

2010 Mathematics Subject Classification

• 60H25 60K37 35B27 35R60 47B80 47A75

Keywords

• Random conductance model, homogenization, Dirichlet eigenvalues, local times, percolation

DOI

10.20347/WIAS.PREPRINT.2371

Abstract

We study homogenization properties of the discrete Laplace operator with random conductances on a large domain in Z^d. More precisely, we prove almost-sure homogenization of the discrete Poisson equation and of the top of the Dirichlet spectrum. We assume that the conductances are stationary, ergodic and nearest-neighbor conductances are positive. In contrast to earlier results, we do not require uniform ellipticity but certain integrability conditions on the lower and upper tails of the conductances. We further allow jumps of arbitrary length. Without the long-range connections, the integrability condition on the lower tail is optimal for spectral homogenization. It coincides with a necessary condition for the validity of a local central limit theorem for the random walk among random conductances. As an application of spectral homogenization, we prove a quenched large deviation principle for the normalized and rescaled local times of the random walk in a growing box. Our proofs are based on a compactness result for the Laplacian's Dirichlet energy, Poincaré inequalities, Moser iteration and two-scale convergence.

Appeared in

• Ann. Inst. H. Poincare Probab. Statist., 55 (2019), pp. 1226--1257, DOI 10.1214/18-AIHP917.
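For readers outside the field, the operator in question typically takes the standard random conductance form (a generic statement of the setup, not quoted from the preprint): for a function $f$ on $\mathbb{Z}^d$ and random weights $\omega_{x,y} \ge 0$,

$$(\mathcal{L}^\omega f)(x) = \sum_{y} \omega_{x,y}\,\bigl(f(y) - f(x)\bigr),$$

where the sum runs over nearest neighbors in the classical uniformly elliptic setting, and over all of $\mathbb{Z}^d$ when unbounded-range jumps are allowed, as in this preprint.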
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8268874287605286, "perplexity": 1205.2742808699472}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107879362.3/warc/CC-MAIN-20201022082653-20201022112653-00388.warc.gz"}
https://mathoverflow.net/users/39229/mauricio-g-tec?tab=topactivity
Mauricio G Tec

### Questions (8)

33 Inference using Topological Data Analysis: Is it worth it for a regular statistician to learn TDA?
7 Knots and Dynamics. Recent breakthroughs?
5 Is every $C^1$-domain which is homeomorphic to the unit ball in $\mathbb{R}^d$ Lipschitz equivalent to the unit ball?
4 What is the space for the coefficients of the connection 1-form of a connection in a vector bundle?
4 Implications of a recent result on Benford's law

### Reputation (669)

This user has no recent positive reputation changes. This user has not answered any questions.

### Tags (19)

0 data-analysis × 2, 0 gauge-theory, 0 real-analysis × 2, 0 connections, 0 vector-bundles × 2, 0 gt.geometric-topology, 0 dg.differential-geometry × 2, 0 graph-theory, 0 principal-bundles, 0 computational-topology

### Accounts (20)

Mathematics: 2,194 rep, 8 silver badges, 22 bronze badges
MathOverflow: 669 rep, 7 silver badges, 10 bronze badges
Cross Validated: 233 rep, 1 gold badge, 1 silver badge, 9 bronze badges
TeX - LaTeX: 195 rep, 6 bronze badges
Philosophy: 183 rep, 6 bronze badges
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.88133704662323, "perplexity": 10498.687318222486}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496668644.10/warc/CC-MAIN-20191115120854-20191115144854-00310.warc.gz"}
http://swmath.org/software/8675
# FLRBFN-AMF

Computed force control system using functional link radial basis function network with asymmetric membership function for piezo-flexural nanopositioning stage.

A computed force control system using a functional link radial basis function network with asymmetric membership function (FLRBFN-AMF) for three-dimensional motion control of a piezo-flexural nanopositioning stage (PFNS) is proposed in this study. First, the dynamics of the PFNS mechanism with the introduction of a lumped uncertainty including the equivalent hysteresis friction force are derived. Then, a computed force control system with an auxiliary control is proposed for the tracking of the reference contours with improved steady-state response. Since the dynamic characteristics of the PFNS are non-linear and time varying, a computed force control system using FLRBFN-AMF is designed to improve the control performance for the tracking of various reference trajectories, where the FLRBFN-AMF is employed to estimate a non-linear function including the lumped uncertainty of the PFNS. Moreover, by using the asymmetric membership function, the learning capability of the network can be upgraded and the number of fuzzy rules can be optimised for the functional link radial basis function network. Furthermore, the adaptive learning algorithms for the training of the parameters of the FLRBFN-AMF online are derived using the Lyapunov stability theorem. Finally, some experimental results for the tracking of various reference contours of the PFNS are given to demonstrate the validity of the proposed control system.
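As a rough illustration of the "asymmetric membership function" idea (my own sketch; the paper's exact functional form may differ), an asymmetric Gaussian uses a different spread on each side of its center, which lets a single fuzzy rule cover a lopsided region of the input space and can reduce the number of rules needed:

import numpy as np

def asym_gaussian(x, c, sigma_left, sigma_right):
    """Asymmetric Gaussian membership: separate spreads on each side of c."""
    sigma = np.where(x < c, sigma_left, sigma_right)
    return np.exp(-((x - c) ** 2) / (2.0 * sigma ** 2))

# Example: membership peaked at 0.0, wider on the right than on the left
x = np.linspace(-2, 2, 9)
print(asym_gaussian(x, c=0.0, sigma_left=0.3, sigma_right=1.0))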
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8071131706237793, "perplexity": 1363.0186927414713}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698540909.75/warc/CC-MAIN-20161202170900-00456-ip-10-31-129-80.ec2.internal.warc.gz"}
http://blog.urremote.com/2012/08/
## Tuesday, August 7, 2012

### Climbing Gunung Batur While Avoiding Trouble

I'm fond of ascending peaks. In 1980 my first peak outside Australia and first live volcano was Gunung (meaning mountain) Batur, Kintamani, Bali. Unlike nearby Gunung Agung, which is tough, it is a modest peak with a five kilometre ascent that takes two to three hours. That makes it great for a group looking for some adventure bonding in which everyone can participate. Strenuous enough to be memorable but not too difficult.

This easy access is not ideal for the guiding profession so, on Batur, they counter with a hard sell. Locals don't use guides and while foreign visitors often like to be guided, many would prefer to climb on their own. Guideless climbers still need directions, particularly as Batur is usually climbed at night with the aim of being on the summit for the sunrise.

### The Route

There are multiple routes up the cone. The most used originates from a car park and from there proceeds up through the village, ascending the mountain by one route and descending by another. 3.00am to 4.00am is a good starting time to view the dawn from the summit.

Animation of the most common route up Batur and some attractions along the way.

The guiding office is near the car park and anyone starting from here without a guide will be accosted by the guiding enforcers. Climbers starting from elsewhere may be able to avoid this confrontation. If you are climbing at night without a guide you will need a route map as you will cross many paths leading elsewhere and sometimes the correct path is the least obvious.

View Climbing Gunung Batur in a larger map with icon legends and route notes. The second icon is the carpark at the start of the climb and the first is at hotel Volcano III where we commenced. I recorded the route with My Tracks and it is available as a GPX file for import into your mapping app.

Most people walk around the rim but some of our party didn't want to go further after reaching the summit. Alternatively, you could follow the torch light of another guided group but that may lead to trouble and is poor form if you have refused guiding services.

### Then and Now

Despite being a modest climb, my first ascent in 1980 seemed quite an adventure. I had no idea Batur was there until I arrived on a trail bike after traversing roads impassable to cars.

The road close to Toya Bungkah in 1980 with the village in the background. Image: Around Gunung Batur - in the past set.

Heading towards Toya Bungkah, probably close to Kedisan, the village at the bottom of the descent into the caldera, in 1983. Image: Batur - Change From 1967 Until Now gallery.

The road near Toya Bungkah in 2009, still nothing special, but much improved. Nearby roads are built to a much lower standard and are still frequently impassable. Image: Batur and Trunyan in 2009 set.

The village was recent. In 1967 it had not existed and in 1980 a warung was the sole commercial establishment.

This warung, typical of the style in Bali at that time, was the only commercial establishment in Toya Bungkah in 1980. I slept in this guy's home. Image: Around Gunung Batur - in the past set.

It was an obviously poor village with the children spending their days tending to cows that aren't kept there any more.

These kids spent their days cutting grass for the cows and didn't attend school. I don't know if all the kids go to school today but there are plenty of people here in their twenties who didn't and weren't taught to read.
Image: Around Gunung Batur - in the past set.

Today Toya Bungkah looks reasonably affluent but the surrounding area remains poorer than you will see elsewhere in Bali. Apart from the climb, the principal attraction of Toya Bungkah is the hot springs which the locals use as a communal bath. The public facility was pretty good in 1983 but has been developed and privatised so that there are now several facilities exclusively for tourists, some quite expensive. There is one new facility next to the car park that is free for Toya Bungkah residents and Rp50K for all visitors including Balinese.

The hot springs were a well constructed public facility in 1983. Image: Batur - Change From 1967 Until Now gallery.

#### It's Economically Based

No one likes extortion (essentially taxation by non-government entities), but it's most common in exploitative social systems when governance is weak. Modern China is sometimes described as a kleptocracy, and under Suharto this was an apt description of Indonesia too; such conditions weaken government authority. While Indonesia has been rapidly changing, these are strong traditions extending back to Dutch rule and they are more obvious in Batur than some other places.

Apart from natural beauty, the area has limited resources and a long history of exploitation, as discussed in Custodians of the Sacred Mountain; Thomas A. Reuter; University of Hawai'i Press; 2002. This leads to distrust of authority and widespread opposition to taking advantage of opportunities like geothermal energy, which will impact many but from which only a few elite are likely to benefit. The best assets are privatised by the politically influential and the poor barely get by labouring at agriculture. The only ways to escape hardship are to leave or exploit, and with the best assets already taken, only services remain. Many tourists will pay sums for half a day's guiding that would otherwise require several weeks of agricultural toil to earn. Others stay away.

In the absence of strong governance, groups emerge to capture the opportunity and, without official authority, ultimately resort to stand-over tactics to get their way. This is the environment that has bred the guiding cartel whose members can expect to do well as long as the monopoly can be maintained. One guide told me they have 63 members and work is divided amongst the members, with each guide going to the bottom of the list after each job. My informant said they averaged 20 climbers a day. While pricing is variable, they have strong pricing discipline, in my case only dropping the guiding offer to Rp280,000 after things had become so unpleasant that hiring a guide at any price was unlikely. This can only be possible when they are effective in suppressing competition.

Batur's climbers are a mere quarter of 1 percent of Bali's 2.8 million visitors in 2011. Guides from elsewhere will usually bring tourists only as far as Kintamani for the view from the caldera rim and will not offer the climbing opportunity. The road into the caldera is tough on their vehicles, so much so that I've known drivers to refuse the descent, and they are not keen to share guiding revenue with their colleagues in Toya Bungkah. While much effort goes into maximising revenue from those that turn up, there is little obvious effort put into promoting the climb. Climbing is not everyone's thing but there is surely an opportunity to increase Batur's current visitor numbers. One tourist in a hundred ought to be easy.
Working against that is the community inequality, coercion and distrust of government that makes it difficult to achieve more cooperation and investment in shared infrastructure, particularly roads, that would be required to attract large visitor numbers. The guiding association's ambition of Batur being an upmarket experience is consistent with Governor Pastika's view that Bali should be an expensive destination, but it is incongruous for the opulence of the Ayu resort to be in stark contrast with the poor roads, poor infrastructure and obvious poverty over the fence.

With electricity and good mobile phone/internet reception Toya Bungkah is already not that weird "other world" I experienced in 1980, but increased development would mean losing some of the current atmosphere in the same way as Kuta has lost the atmosphere it had in 1980, as visitor numbers increased. Locals would welcome better roads and increased opportunities. Most visitors coming up for the day from Ubud probably wouldn't notice what was lost. While Kuta sadly destroyed much of its natural beauty as it developed, a different sort of magic has emerged from the mayhem, and the Kintamani region could develop its own different sort of magic and even maintain its awesome natural assets.

#### The Positives

In 1980 when I first visited, Batur was pristine, probably because visitation was infrequent. On intermediate trips it was a free-for-all, covered in rubbish. Today it is clean, neat and not overbuilt. Someone must be responsible for this improvement and the new hot springs provide a facility rivalling that available to Toya Bungkah residents back in 1980. The guiding cartel at least benefits locals rather than absentee landlords.

### What To Do - Specifics

I've climbed without a guide, as have others. The hassles will be at the base of the climb. Once you get part way up you are unlikely to have trouble and drink sellers may offer unofficial guiding services. You will need a flashlight, which can be purchased cheaply, some warm clothing and ideally rain protection. If you are self driving/riding, avoid vehicle damage by leaving your vehicle at the hotel rather than the car park at the base of the climb. One guy who suffered vehicle damage thought the cost of tyre repairs was still a bargain compared to guiding fees.

Gentle persuasion will be tried first, so be prepared for a forceful discussion and to resist strong demands. Some people have reported violence and though I experience fear arguing with strangers in the dark, I don't think it usually gets too violent; or else I've been lucky. You might not get much immediate help in a confrontation but extreme violence seems unlikely and I've not heard of robbery. Confrontation is unpleasant and leaves a bad taste but once on the mountain a nice camaraderie develops and even the guiding fraternity seems not to hold a grudge.

Be careful looking down the 150 metres into the crater from the precipice near the bat cave (see the map) as at least one tourist fell to their death. I wouldn't want to be having an argument on this unfenced precipice. Most of all, enjoy the experience because the view is great after it's earned and there's lots to do and see along the way.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.20718418061733246, "perplexity": 3518.5251687802247}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335054.79/warc/CC-MAIN-20220927162620-20220927192620-00597.warc.gz"}
https://jyx.jyu.fi/handle/123456789/64185?show=full
dc.contributor.author: Salvioni, Gianluca
dc.date.accessioned: 2019-05-24T11:07:21Z
dc.date.available: 2019-05-24T11:07:21Z
dc.date.issued: 2019
dc.identifier.isbn: 978-951-39-7775-7
dc.identifier.uri: https://jyx.jyu.fi/handle/123456789/64185
dc.description.abstract: This monograph focused on a method to link nuclear energy density functionals to the ab initio solution of the nuclear many-body problem. This method, proposed in Ref. [1], was discussed here in many aspects and applied to a state-of-the-art ab initio approach. We introduced the basics of density functional theory, paying attention to the concept of generators of the functional. In parallel, we explored the Self-Consistent Green's Function approach as an ab initio framework to calculate ground-state energies. We derived the model functional based on the Levy-Lieb constrained variation, which exploits the response of the nucleus to an external perturbation. Using the Green's function technique and the NNLOsat chiral interaction in the ab initio Hamiltonian, seven semi-magic nuclei were probed with perturbations induced by generators of two- and three-body contact interactions (Skyrme-like). We employed the same generators to build model functionals, after which the coupling constants were fitted to reproduce the perturbed ground-state energies. Several parametrizations of the functionals were obtained for given choices of generators, selections of data points, and assumed uncertainties. We analysed the derived parametrizations according to their statistical performance, the magnitude of the propagated errors, and the corresponding nuclear-matter description. Two parametrizations emerged as the most promising, but the model functionals built from them did not produce meaningful results. As it turned out, zero-range generators provide a poor description of the chiral interaction. Moreover, the performed error analysis suggested that the actual precision of the ab initio approach may not be sufficient to improve the quality of the novel energy density functionals.
dc.relation.ispartofseries: JYU dissertations
dc.title: Model nuclear energy density functionals derived from ab initio calculations
dc.identifier.urn: URN:ISBN:978-951-39-7775-7
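To illustrate the fitting step described in the abstract, here is a schematic least-squares sketch (entirely synthetic: the generator matrix and "ab initio" energies below are random stand-ins, not NNLOsat results; the dimensions are made up for illustration):

```python
# Fit coupling constants c so that the model energies G @ c reproduce
# "ab initio" ground-state energies E, then propagate the fit uncertainty.
import numpy as np

rng = np.random.default_rng(1)
n_data, n_couplings = 40, 6                  # data points vs. Skyrme-like couplings
G = rng.normal(size=(n_data, n_couplings))   # generator expectation values (synthetic)
c_true = rng.normal(size=n_couplings)
E = G @ c_true + rng.normal(scale=0.05, size=n_data)  # energies + assumed noise

c_fit, residuals, *_ = np.linalg.lstsq(G, E, rcond=None)
sigma2 = residuals[0] / (n_data - n_couplings)        # residual variance estimate
cov = np.linalg.inv(G.T @ G) * sigma2                 # covariance of the couplings
print("fitted couplings:", c_fit)
print("1-sigma errors:  ", np.sqrt(np.diag(cov)))     # propagated uncertainties
```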
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9452100992202759, "perplexity": 1773.24727680628}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780058415.93/warc/CC-MAIN-20210927090448-20210927120448-00606.warc.gz"}
https://brilliant.org/problems/reversed-branched-logs/
# Reversed Branched logs

Algebra Level 4

$\begin{cases} \log_3(\log_2x)+\log_{\frac{1}{3}}(\log_{\frac{1}{2}}y)=1 \\ xy^2=4 \end{cases}$

If the above equations hold for some values of $$x$$ and $$y$$, then find the value of $$xy$$.
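A short worked solution (not shown on the original problem page): since $\log_{1/3} t = -\log_3 t$, the first equation gives $\log_3\left(\frac{\log_2 x}{\log_{1/2} y}\right) = 1$, so $\log_2 x = 3\log_{1/2} y = -3\log_2 y$ and hence $x = y^{-3}$. Substituting into $xy^2 = 4$ gives $y^{-1} = 4$, so $y = \frac{1}{4}$, $x = 64$, and $xy = 16$.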
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9185349345207214, "perplexity": 521.0040051688721}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988719754.86/warc/CC-MAIN-20161020183839-00141-ip-10-171-6-4.ec2.internal.warc.gz"}
https://sri-uq.kaust.edu.sa/publications/detail/tucker-tensor-analysis-of-matern-functions-in-spatial-statistics
## Tucker tensor analysis of Matérn functions in spatial statistics

by Alexander Litvinenko, David Keyes, Venera Khoromskaia, Boris N. Khoromskij, Hermann G. Matthies

Manuscripts

Year: 2018

#### Bibliography

Alexander Litvinenko, David Keyes, Venera Khoromskaia, Boris N. Khoromskij and Hermann G. Matthies, Tucker tensor analysis of Matérn functions in spatial statistics, accepted to J. CMAM, 2018

## Acknowledgements:

The research reported in this publication was supported by funding from King Abdullah University of Science and Technology (KAUST).

## Bibtex:

@ARTICLE{2017arXiv171106874L, author = {{Litvinenko}, A. and {Keyes}, D. and {Khoromskaia}, V. and {Khoromskij}, B.~N. and {Matthies}, H.~G.}, title = "{Tucker Tensor analysis of Matern functions in spatial statistics}", journal = {ArXiv e-prints}, archivePrefix = "arXiv", eprint = {1711.06874}, primaryClass = "math.NA", keywords = {Mathematics - Numerical Analysis, 62F99, 62P12, 65F30, 65F40, G.3, G.4, J.2}, year = 2017, month = nov, adsnote = {Provided by the SAO/NASA Astrophysics Data System} }

#### Abstract

In this work, we describe advanced numerical tools for working with multivariate functions and for the analysis of large data sets. These tools drastically reduce the required computing time and storage cost, and therefore allow us to consider much larger data sets or finer meshes. Covariance matrices are crucial in spatio-temporal statistical tasks, but are often very expensive to compute and store, especially in 3D. Therefore, we approximate covariance functions by cheap surrogates in a low-rank tensor format. We apply the Tucker and canonical tensor decompositions to a family of Matérn- and Slater-type functions with varying parameters and demonstrate numerically that their approximations exhibit exponentially fast convergence. We prove the exponential convergence of the Tucker and canonical approximations in the tensor rank parameters. Several statistical operations are performed in this low-rank tensor format, including evaluating the conditional covariance matrix, spatially averaged estimation variance, computing a quadratic form, determinant, trace, loglikelihood, inverse, and Cholesky decomposition of a large covariance matrix. Low-rank tensor approximations substantially reduce the computing and storage costs. For example, the storage cost is reduced from an exponential $\mathcal{O}(n^d)$ to a linear scaling $\mathcal{O}(drn)$, where $d$ is the spatial dimension, $n$ is the number of mesh points in one direction, and $r$ is the tensor rank. Prerequisites for applicability of the proposed techniques are the assumptions that the data, locations, and measurements lie on a tensor (axes-parallel) grid and that the covariance function depends on a distance, $\| x-y\|$.

## ISSN:

accepted to Journal Comput. Methods Appl. Math. (CMAM), DE GRUYTER
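As a rough illustration of the storage argument (an independent sketch assuming a 3D axes-parallel grid and a Slater-type kernel $e^{-\|x\|}$; this is not the authors' code), a truncated higher-order SVD compresses the covariance tensor from $n^3$ entries to a small core plus three factor matrices:

```python
# Compress the covariance "column" for one reference point, sampled on an
# n x n x n grid, with a rank-r truncated HOSVD (Tucker format).
import numpy as np

n, r = 40, 8
x = np.linspace(0.0, 1.0, n)
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
C = np.exp(-np.sqrt(X**2 + Y**2 + Z**2))     # Slater-type kernel as an n*n*n tensor

def unfold(T, mode):
    """Mode-m matricization of a 3D tensor."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

# Factor matrices: leading r left singular vectors of each unfolding
U = [np.linalg.svd(unfold(C, m), full_matrices=False)[0][:, :r] for m in range(3)]
core = np.einsum("ijk,ia,jb,kc->abc", C, U[0], U[1], U[2])
C_hat = np.einsum("abc,ia,jb,kc->ijk", core, U[0], U[1], U[2])

print("relative error:", np.linalg.norm(C - C_hat) / np.linalg.norm(C))
print("storage: full =", n**3, " tucker =", r**3 + 3 * n * r)  # ~O(r^3 + d*n*r)
```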
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7864254117012024, "perplexity": 4721.97144927632}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571745.28/warc/CC-MAIN-20220812170436-20220812200436-00074.warc.gz"}
https://stats.stackexchange.com/questions/76973/inspecting-assumption-of-homoscedasticity
# Inspecting assumption of homoscedasticity

Using a Fligner test to infer whether the assumption of homoscedasticity is respected is not very smart, given that the Fligner test tests the null hypothesis that there is no difference in variance between the groups. With small samples the test lacks power, so a non-significant result wrongly suggests the assumption holds, as has been said by @Michael Mayer here. How can we further investigate whether the assumption of homoscedasticity is respected? Is it worth plotting the model's residuals versus the fitted values? The lines below are R code:

m = aov(myFormula, myData)
plot(residuals(m), m$fit)

I don't have much experience in statistics and it seems rather hard for me to decide from this plot whether the assumption of homoscedasticity is respected. What else can I do?

• If you plot(m) you should get 4 plots, including a scale-location plot, which should be easier to interpret - if the mean of the scale-location plot clearly changes, you don't have homoskedasticity. – Glen_b Nov 20 '13 at 1:37
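A rough Python analogue of the scale-location panel Glen_b mentions (an illustration with simulated data, not from the thread): a roughly flat trend in sqrt(|standardized residuals|) against fitted values is consistent with homoscedasticity.

```python
# Fit a linear model and draw a scale-location plot (like panel 3 of R's plot(m)).
import numpy as np
import matplotlib.pyplot as plt
import statsmodels.api as sm

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 200)
y = 2.0 + 0.5 * x + rng.normal(scale=1.0, size=200)   # homoscedastic by construction

model = sm.OLS(y, sm.add_constant(x)).fit()
std_resid = model.get_influence().resid_studentized_internal

plt.scatter(model.fittedvalues, np.sqrt(np.abs(std_resid)), s=10)
plt.xlabel("Fitted values")
plt.ylabel("sqrt(|standardized residuals|)")
plt.title("Scale-location: a flat trend suggests homoscedasticity")
plt.show()
```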
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7709670662879944, "perplexity": 1293.652901469105}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487635724.52/warc/CC-MAIN-20210618043356-20210618073356-00338.warc.gz"}
https://www.physicsforums.com/threads/object-moving-at-speed-of-light-as-reference-frame.612620/
# Object moving at speed of light as reference frame

1. Jun 9, 2012

### aleemudasir

Is there any object other than the photon which moves at the speed of light? Why can't an object moving at the speed of light be taken as a reference frame? Can we use the equation m = m_0/sqrt(1 - v^2/c^2) for an object moving at the speed of light?

2. Jun 9, 2012

### harrylin

Part of the answer follows directly from your questions: using your equation, you will find that only an object with zero rest mass can propagate at the speed of light; and such objects are called photons (it is assumed that photons have exactly zero rest mass). Note that as 0/0 is useless, for the "mass equivalent" of light you can use m = p/c. And how would you use a photon as a reference frame? A reference frame is a system for comparing (measuring) such things as time and distance. If a clock and ruler could be accelerated to light speed (although this is impossible), they would stop ticking and have zero length.

3. Jun 9, 2012

### Staff: Mentor

Here is a FAQ which explains why.

4. Jun 9, 2012

### aleemudasir

So does that mean it is impossible for an object with non-zero rest mass to move at the speed of light? And why?

5. Jun 9, 2012

### harrylin

Again, use your own equation! How much relativistic mass will it have at the speed of light? How much energy is needed to bring it to that speed?

6. Jun 9, 2012

### bobc2

Yes, other massless bosons. Your question has been answered quite well here. Also, you might consider the problem in the context of space-time diagrams (google it or find discussions of space-time diagrams in other posts). The sketches below show a sequence in which an observer (blue frames of reference) moves at ever greater relativistic velocities with respect to a rest frame (black perpendicular coordinates). One aspect of the photon (any massless boson) that makes it so special is that its worldline always bisects the angle between the time axis and the spatial axis for any observer, no matter what the observer's speed (thus, the speed of light is the same for all observers). Notice in the sequence that the moving observer's X4 and X1 axes rotate toward each other, getting closer and closer to each other as the speed of light is approached. In the limit, the X4 axis and the X1 axis overlay each other. So, if the observer were actually moving at the speed of light, both his time axis and his spatial axis would be colinear with the photon worldline. How would you define that as a coordinate system?

Last edited: Jun 9, 2012

7. Jun 9, 2012

### aleemudasir

I am not talking of an observer moving at the speed of light; rather, I am talking about an observer observing an object x with respect to an object y (moving at the speed of light). I didn't get this graphical explanation well; would you please elaborate?

8. Jun 9, 2012

### HallsofIvy

Staff Emeritus

Then you will have to explain what you mean by that. What do you mean by "observing x with respect to y"? Any observer sees objects with respect to himself, not with respect to any other frame of reference.

9. Jun 9, 2012

### aleemudasir

I don't think that is necessary; let's talk, as an example, about a 3-dimensional coordinate system in which an observer observes the motion of any object w.r.t. the origin (0,0,0).

10. Jun 9, 2012

### Staff: Mentor

Ah, but is the observer moving with respect to that origin? If so, we have three frames (observer, object, and frame-with-origin-at-(0,0,0)) to transform between, not two.
None of these frames can have a velocity greater than or equal to the speed of light relative to any of the others.

11. Jun 9, 2012

### aleemudasir

The observer is at rest w.r.t. the origin.

12. Jun 9, 2012

### Staff: Mentor

Yes, any object with non-zero rest mass must move slower than c in any inertial frame. As far as why, that is inherently a tricky question. What are you allowing to be assumed when answering? And what kind of answer are you looking for? If I were asking the question I would be looking for a geometric answer and I would allow the Minkowski metric to be assumed. Then the answer is that a massive object has a timelike four-momentum by definition, and any timelike four-momentum corresponds to a three-velocity < c. If that doesn't answer the question then you will need to clarify better what you want.

13. Jun 10, 2012

### harrylin

OK, then my answer in post #5 applies. How much energy do you think is required? If you did not manage to calculate that the division by zero gives an infinite result, the answer is given in section 10 of http://www.fourmilab.ch/etexts/einstein/specrel/www/.
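To make harrylin's point concrete, here is a small numerical sketch (my own illustration, not from the thread): the kinetic energy (gamma - 1) m_0 c^2 grows without bound as v approaches c, so no finite energy brings a massive object to light speed.

```python
# Relativistic kinetic energy of a 1 kg object as its speed approaches c.
c = 299_792_458.0          # speed of light, m/s
m0 = 1.0                   # rest mass, kg

def kinetic_energy(v):
    gamma = 1.0 / (1.0 - (v / c) ** 2) ** 0.5
    return (gamma - 1.0) * m0 * c * c

for frac in (0.5, 0.9, 0.99, 0.999, 0.999999):
    print(f"v = {frac} c  ->  E_k = {kinetic_energy(frac * c):.3e} J")
# The printed energies diverge; at v = c the expression is undefined (division by zero).
```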
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8114892244338989, "perplexity": 613.9397941991273}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463607846.35/warc/CC-MAIN-20170524131951-20170524151951-00213.warc.gz"}
http://psychology.wikia.com/wiki/Prior_probability?oldid=29370
# Prior probability

A prior probability is a marginal probability, interpreted as a description of what is known about a variable in the absence of some evidence. The posterior probability is then the conditional probability of the variable taking the evidence into account. The posterior probability is computed from the prior and the likelihood function via Bayes' theorem. As prior and posterior are not terms used in frequentist analyses, this article uses the vocabulary of Bayesian probability and Bayesian inference. Throughout this article, for the sake of brevity, the term variable encompasses observable variables, latent (unobserved) variables, parameters, and hypotheses.

## Prior probability distribution

In Bayesian statistical inference, a prior probability distribution, often called simply the prior, of an uncertain quantity p (for example, suppose p is the proportion of voters who will vote for the politician named Smith in a future election) is the probability distribution that would express one's uncertainty about p before the "data" (for example, an opinion poll) are taken into account. It is meant to attribute uncertainty, rather than randomness, to the uncertain quantity. One applies Bayes' theorem, multiplying the prior by the likelihood function and then normalizing, to get the posterior probability distribution, which is the conditional distribution of the uncertain quantity given the data. A prior is often the purely subjective assessment of an experienced expert. Some will choose a conjugate prior when they can, to make calculation of the posterior distribution easier.

## Informative priors

An informative prior expresses specific, definite information about a variable. An example is a prior distribution for the temperature at noon tomorrow. A reasonable approach is to make the prior a normal distribution with expected value equal to today's noontime temperature, with variance equal to the day-to-day variance of atmospheric temperature. This example has a property in common with many priors, namely, that the posterior from one problem (today's temperature) becomes the prior for another problem (tomorrow's temperature); pre-existing evidence which has already been taken into account is part of the prior, and as more evidence accumulates the posterior is determined largely by the evidence rather than by any original assumption, provided that the original assumption admitted the possibility of what the evidence is suggesting. The terms "prior" and "posterior" are generally relative to a specific datum or observation.

## Uninformative priors

An uninformative prior expresses vague or general information about a variable. The term "uninformative prior" is a misnomer; such a prior might be called a not very informative prior. Uninformative priors can express information such as "the variable is positive" or "the variable is less than some limit". Some authorities prefer the term objective prior. In parameter estimation problems, the use of an uninformative prior typically yields results which are not too different from conventional statistical analysis, as the likelihood function often yields more information than the uninformative prior. Some attempts have been made at finding probability distributions in some sense logically required by the nature of one's state of uncertainty; these are a subject of philosophical controversy. For example, Edwin T.
Jaynes has published an argument (Jaynes 1968) based on Lie groups that suggests that the prior for the proportion $p$ of voters voting for a candidate, given no other information, should be $p^{-1}(1-p)^{-1}$. If one is so uncertain about the value of the aforementioned proportion $p$ that one knows only that at least one voter will vote for Smith and at least one will not, then the conditional probability distribution of $p$ given this information alone is the uniform distribution on the interval [0, 1], which is obtained by applying Bayes' theorem to the data set consisting of one vote for Smith and one vote against, using the above prior.

Priors can be constructed which are proportional to the Haar measure if the parameter space $X$ carries a natural group structure. For example, in physics we might expect that an experiment will give the same results regardless of our choice of the origin of a coordinate system. This induces the group structure of the translation group on $X$, and the resulting prior is a constant improper prior. Similarly, some measurements are naturally invariant to the choice of an arbitrary scale (i.e., it doesn't matter if we use centimeters or inches, we should get results that are physically the same). In such a case, the scale group is the natural group structure, and the corresponding prior on $X$ is proportional to $1/x$. It sometimes matters whether we use the left-invariant or right-invariant Haar measure. For example, the left and right invariant Haar measures on the affine group are not equal. Berger (1985, p. 413) argues that the right-invariant Haar measure is the correct choice.

Another idea, championed by Edwin T. Jaynes, is to use the principle of maximum entropy. The motivation is that the Shannon entropy of a probability distribution measures the amount of information contained in the distribution. The larger the entropy, the less information is provided by the distribution. Thus, by maximizing the entropy over a suitable set of probability distributions on $X$, one finds the distribution that is least informative in the sense that it contains the least amount of information consistent with the constraints that define the set. For example, the maximum entropy prior on a discrete space, given only that the probability is normalized to 1, is the prior that assigns equal probability to each state. And in the continuous case, the maximum entropy prior given that the density is normalized with mean zero and variance unity is the standard normal distribution.

A related idea, reference priors, was introduced by Jose M. Bernardo. Here, the idea is to maximize the expected Kullback-Leibler divergence of the posterior distribution relative to the prior. This maximizes the expected posterior information about $x$ when the prior density is $p(x)$. The reference prior is defined in the asymptotic limit, i.e., one considers the limit of the priors so obtained as the number of data points goes to infinity. Reference priors are often the objective prior of choice in multivariate problems, since other rules (e.g., Jeffreys' rule) may result in priors with problematic behavior.

Philosophical problems associated with uninformative priors are associated with the choice of an appropriate metric, or measurement scale. Suppose we want a prior for the running speed of a runner who is unknown to us.
We could specify, say, a normal distribution as the prior for his speed, but alternatively we could specify a normal prior for the time he takes to complete 100 metres, which is proportional to the reciprocal of the first prior. These are very different priors, but it is not clear which is to be preferred. Similarly, if asked to estimate an unknown proportion between 0 and 1, we might say that all proportions are equally likely and use a uniform prior. Alternatively, we might say that all orders of magnitude for the proportion are equally likely, which gives a prior uniform on the logarithmic scale. The Jeffreys prior attempts to solve this problem by computing a prior which expresses the same belief no matter which metric is used. The Jeffreys prior for an unknown proportion $p$ is $p^{-1/2}(1-p)^{-1/2}$, which differs from Jaynes' recommendation.

Practical problems associated with uninformative priors include the requirement that the posterior distribution be proper. The usual uninformative priors on continuous, unbounded variables are improper. This need not be a problem if the posterior distribution is proper. Another issue of importance is that if an uninformative prior is to be used routinely, i.e., with many different data sets, it should have good frequentist properties. Normally a Bayesian would not be concerned with such issues, but it can be important in this situation. For example, one would want any decision rule based on the posterior distribution to be admissible under the adopted loss function. Unfortunately, admissibility is often difficult to check, although some results are known (e.g., Berger and Strawderman 1996). The issue is particularly acute with hierarchical Bayes models; the usual priors (e.g., Jeffreys' prior) may give badly inadmissible decision rules if employed at the higher levels of the hierarchy.

## Improper priors

If Bayes' theorem is written as

$P(A_i|B) = \frac{P(B | A_i) P(A_i)}{\sum_j P(B|A_j)P(A_j)}\, ,$

then it is clear that it would remain true if all the prior probabilities P(Ai) and P(Aj) were multiplied by a given constant; the same would be true for a continuous random variable. The posterior probabilities will still sum (or integrate) to 1 even if the prior values do not, and so the priors only need be specified in the correct proportion. Taking this idea further, in many cases the sum or integral of the prior values may not even need to be finite to get sensible answers for the posterior probabilities. When this is the case, the prior is called an improper prior.

Some statisticians use improper priors as uninformative priors. For example, if they need a prior distribution for the mean and variance of a random variable, they may assume p(m, v) ∝ 1/v (for v > 0), which would suggest that any value for the mean is equally likely and that a value for the positive variance becomes less likely in inverse proportion to its value. Since

$\int_{-\infty}^{\infty} dm\, = \int_{0}^{\infty} \frac{1}{v} \,dv = \infty,$

this would be an improper prior both for the mean and for the variance.
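A small numerical check of this proportionality point (an illustrative sketch using a discretized Beta-binomial example): multiplying the prior by any constant leaves the normalized posterior unchanged.

```python
# Scale a flat prior by arbitrary constants; the posterior mean never changes.
import numpy as np

p = np.linspace(0.001, 0.999, 999)     # grid over a proportion
prior = np.ones_like(p)                # flat prior, known only up to a constant
k, n = 7, 10                           # observed: 7 successes in 10 trials
likelihood = p**k * (1 - p)**(n - k)

for scale in (1.0, 42.0, 1e6):
    unnorm = likelihood * (scale * prior)
    posterior = unnorm / unnorm.sum()  # normalization absorbs the constant
    print(scale, (p * posterior).sum())  # identical posterior mean each time
```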
## References

- Berger, J. O. (1985). Statistical Decision Theory and Bayesian Analysis. Springer-Verlag.
- Berger, J. O.; Strawderman, W. E. (1996). "Choice of hierarchical priors: admissibility in estimation of normal means". Annals of Statistics 24: 931-951.
- Jaynes, E. T. (1968). "Prior Probabilities". IEEE Transactions on Systems Science and Cybernetics 4 (3): 227-241.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 15, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9609204530715942, "perplexity": 513.9400188766543}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-14/segments/1427131298576.76/warc/CC-MAIN-20150323172138-00103-ip-10-168-14-71.ec2.internal.warc.gz"}
https://scoop.eduncle.com/74057
UGC NET

December 23, 2019 3:43 pm

Match the following two sets given below. Set-I consists of terms related to specific deviations and Set-II consists of resultant impairments.

(a) Cerebral palsy is a term used to describe a set of neurological conditions that affect movement.

(b) Congenital cytomegalovirus ...
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9285840392112732, "perplexity": 19143.63404295254}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606696.26/warc/CC-MAIN-20200122042145-20200122071145-00031.warc.gz"}
https://human.libretexts.org/Courses/Achieving_the_Dream/Book%3A_Accelerated_English_(Ashley_Paul)/08%3A_Unit_2%3A_Writing_Process/08.6%3A_Revising
# 8.6: Revising

## Introduction

### Learning Objectives

• identify the process of seeking input on writing from others
• identify strategies for incorporating personal and external editorial comments
• identify methods for re-seeing a piece of writing
• identify higher order concerns for revision

Taken literally, revision is re-vision — literally re-seeing the paper in front of you. The act of revision centers heavily around the practice of questioning your work. As you read through this section, and consider your own habits when it comes to revision, consider this list of guiding questions from The Writing Center at UNC-Chapel Hill.

## Revision Checklist

### Subject, Audience, Purpose

1. What's the most important thing I want to say about my subject?
2. Who is my reader, and what does my reader already know about my subject?
3. Why do I think the subject is worth writing about? Will my reader think the paper was worth reading?
4. What verb explains what I'm trying to do in this paper (tell a story, compare X and Y, describe Z)?
5. Does my first paragraph answer questions 1-4? If not, why not?

### Organization

1. How many specific points do I make about my subject? Did I overlap or repeat any points? Did I leave any points out or add some that aren't relevant to the main idea?
2. How many paragraphs did I use to talk about each point?
3. Why did I talk about them in this order? Should the order be changed?
4. How did I get from one point to the next? What signposts did I give the reader?

### Paragraphing (Ask these questions of every paragraph)

1. What job is this paragraph supposed to do? How does it relate to the paragraph before and after it?
2. What's the topic idea? Will my reader have trouble finding it?
3. How many sentences did it take to develop the topic idea? Can I substitute better examples, reasons, or details?
4. How well does the paragraph hold together? How many levels of generality does it have? Are the sentences different lengths and types? Do I need transitions? When I read the paragraph out loud, did it flow smoothly?

### Sentences (Ask these questions of every sentence)

1. Which sentences in my paper do I like the most? The least?
2. Can my reader "see" what I'm saying? What words could I substitute for people, things, this/that, aspect, etc.?
3. Is this sentence "fat"?
4. Can I combine this sentence with another one?

### Things to Check Last

1. Did I check spelling and punctuation? What kinds of grammar or punctuation problems did I have in my last paper?
2. How does my paper end?
Did I keep the promises I made to my reader at the beginning of the paper?
3. When I read the assignment again, did I miss anything?
4. What do I like best about this paper? What do I need to work on in the next paper?

— from A Rhetoric for Writing Teachers by Erika Lindemann

## Respond and Redraft

There are several steps to turn a first (or second, or third!) draft of a piece of writing into the final version. There is no way to get to that wonderful final draft without all the steps in between. Professors often ask for draft essays in order to guide you as your writing develops.

As you progress from 1st to 2nd draft, or from 2nd (3rd or 4th) to final draft, seeking input from others can help you get a fresh perspective on your work. A survival tip for college is to develop relationships with people whose opinions you trust. You'll want to be able to draw on these people to give valuable, helpful, supportive feedback on your writing. As you first get started with college classes, you'll likely participate in peer reviews for essay assignments. Show your appreciation to your classmates who offer you helpful feedback. Note which classmates' writing you admire. Try to continue working with these people as much as possible.

Also take advantage of your school's Writing Center, if possible. Most tutoring centers will welcome talking with you at any stage of your essay-writing process. Note: tutors won't just "fix" a paper draft. They will talk with you about what areas you are concerned with, and offer strategies to help focus YOU as YOU revise your paper.

Finally, your professor will likely be happy to talk over a draft with you, as well. Some classes will require you to turn in a rough draft for a grade and instructor comments, but most won't. Nonetheless, your professors expect you to write multiple drafts, and will welcome a visit during office hours to talk about how to make your paper as strong as it can possibly be.

Really going from draft to final version requires rethinking the flow of logic in your writing. For instance, you might realize that a sentence buried on the 3rd page of your paper would be an excellent "hook." To use it well, you will need to redraft, moving it to the opening and altering the rest of the material on page 3 as well. Redrafting means looking again at how each piece of your argument fits together in the whole.

• Delete unnecessary information–or if you think it fits better elsewhere, re-place it.
• Outlining your paper as it stands in the current draft can be very helpful for figuring out how you are presenting your ideas and can make it much easier to see where you need to reorder your information, add more support, or delete unnecessary material.
• If you are a visual person, try a craftsy approach. Print your essay out (single-sided) and cut it into paragraph-long pieces. Shuffle the pieces around so that you've mixed up their original order entirely. Then individually read and place the pieces/paragraphs in the order that the ideas connect. As you tape or pin the parts together, you might find that the paragraphs are coming together in different ways than in your original draft.

## Higher Order Concerns

You've written a draft of your paper. Now your work is done, so you should just turn it in, right? No, WAIT! Step away from the computer, take a deep breath, and don't submit that assignment just yet.

You should always revise and proofread your paper. A first draft is usually a very rough draft. It takes time and at least two (or more!)
additional passes through to really make sure your argument is strong, your writing is polished, and there are no typos or grammatical errors. Making these efforts will always give you a better paper in the end.

Try to wait a day or two before looking back over your paper. If you are on a tight deadline, then take a walk, grab a snack, drink some coffee, or do something else to clear your head so you can read through your paper with fresh eyes. The longer you wait, the more likely it is you will see what is actually on the page and not what you meant to write.

### What to Look for in the First Pass(es): Higher-Order Concerns

Typically, early review passes of a paper should focus on the larger issues, which are known as higher-order concerns. Higher-order concerns relate to the strength of your ideas, the support for your argument, and the logic of how your points are presented. Some important higher-order concerns are listed below, along with some questions you can ask yourself while proofreading and editing to see if your paper needs work in any of these areas:

• The Thesis Statement:
  • Does your paper have a clear thesis statement? If so, where is it?
  • Does the introduction lead up to that thesis statement?
  • Does each paragraph directly relate back to your thesis statement?
• The Argument:
  • Is your thesis statement supported by enough evidence?
  • Do you need to add any explanations or examples to better make your case?
  • Is there any unnecessary or irrelevant information that should be removed?
• Large-Scale Organization:
  • Could your paper be easily outlined or tree-diagrammed?
  • Are your paragraphs presented in a logical order?
  • Are similar ideas grouped together?
  • Are there clear transitions (either verbal or logical) that link each paragraph to what came before?
• Organization within Paragraphs:
  • Is each paragraph centered around one main idea?
  • Is there a clear topic sentence for each paragraph?
  • Are any of your paragraphs too short or too long?
  • Do all the sentences in each paragraph relate back to their respective topic sentences?
  • Are the sentences presented in a logical order, so each grows out of what came before?
• The Assignment Instructions:
  • Have you completed all of the tasks required by the instructor?
  • Did you include all necessary sections (for example, an abstract or reference list)?
  • Are you following the required style for formatting the paper as a whole, the reference list, and/or your citations? (That last question is technically a lower-order concern, but it falls under the assignment instructions and is something where you could easily lose points if you don't follow instructions.)

When reading through your early draft(s) of your paper, mark up your paper with those concerns in mind first. Keep revising until you have fixed all of these larger-scale issues. Your paper may change a lot as you do this – that's completely normal! You might have to add more material; cut sentences, paragraphs, or even whole sections; or rewrite significant portions of the paper to fix any problems related to these higher-order concerns. This is why you should be careful not to get too bogged down with small-scale problems early on: there is no point in spending a lot of time fixing sentences that you end up cutting because they don't actually fit in with your topic.

8.6: Revising is shared under a not declared license and was authored, remixed, and/or curated by LibreTexts.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.47020062804222107, "perplexity": 1703.2297268036168}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949387.98/warc/CC-MAIN-20230330194843-20230330224843-00606.warc.gz"}
http://eprint.iacr.org/2005/140/20050623:082222
## Cryptology ePrint Archive: Report 2005/140

How to Split a Shared Secret into Shared Bits in Constant-Round

Ivan Damgård and Matthias Fitzi and Jesper Buus Nielsen and Tomas Toft

Abstract: We show that if a set of players hold shares of a value $a\in Z_p$ for some prime $p$ (where the set of shares is written $[a]_p$), it is possible to compute, in a constant number of rounds and with unconditional security, sharings of the bits of $a$, i.e., compute sharings $[a_0]_p, \ldots, [a_{l-1}]_p$ such that $l = \lceil \log_2(p) \rceil$, $a_0, \ldots, a_{l-1} \in \{0,1\}$ and $a = \sum_{i=0}^{l-1} a_i 2^i$. Our protocol is secure against active adversaries and works for any linear secret sharing scheme with a multiplication protocol. This result immediately implies solutions to other long-standing open problems, such as constant-round and unconditionally secure protocols for comparing shared numbers and deciding whether a shared number is zero. The complexity of our protocol is $O(l \log(l))$ invocations of the multiplication protocol for the underlying secret sharing scheme, carried out in $O(1)$ rounds.

Category / Keywords: cryptographic protocols / secret sharing, unconditional security

Date: received 13 May 2005, last revised 23 Jun 2005

Contact author: buus at daimi au dk

Available format(s): Postscript (PS) | Compressed Postscript (PS.GZ) | PDF | BibTeX Citation

[ Cryptology ePrint archive ]
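To make the input/output relation concrete, here is a small Python sketch (an illustration only: it uses plain additive sharing and openly reconstructs values to check the identity; it is not the paper's constant-round secure protocol):

```python
# Check the defining identity a = sum_i a_i 2^i for bit sharings mod p.
import random

p = 2**61 - 1                        # an arbitrary prime modulus
l = p.bit_length()                   # l = ceil(log2(p))

def additive_share(x, n_players, p):
    """Split x into n additive shares mod p (a stand-in for the paper's
    linear secret sharing scheme)."""
    shares = [random.randrange(p) for _ in range(n_players - 1)]
    shares.append((x - sum(shares)) % p)
    return shares

a = random.randrange(p)
bits = [(a >> i) & 1 for i in range(l)]            # a_0 ... a_{l-1}
bit_shares = [additive_share(b, 3, p) for b in bits]

# Reconstruct each bit and recombine; this recovers the original value.
recon = sum((sum(s) % p) << i for i, s in enumerate(bit_shares))
assert recon == a
```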
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6940047144889832, "perplexity": 2519.6610989704973}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438043058631.99/warc/CC-MAIN-20150728002418-00122-ip-10-236-191-2.ec2.internal.warc.gz"}
https://mathstodon.xyz/@11011110/100908632866635208
An upper bound for Lebesgue's universal covering problem vixra.org/abs/1801.0292

Philip Gibbs makes progress on the smallest area needed to cover a congruent copy of every diameter-one curve in the plane, with additional contributions from John Baez, Karine Bagdasaryan, and Greg Egan. See Baez's blog post johncarlosbaez.wordpress.com/2 for more. But why vixra??
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8961891531944275, "perplexity": 2195.232500232908}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232257605.76/warc/CC-MAIN-20190524104501-20190524130501-00091.warc.gz"}
http://physics.stackexchange.com/questions/21793/entropy-change-in-a-cycle-with-two-isochoric-and-two-adiabatic-processes/21795
# Entropy change in a cycle with two isochoric and two adiabatic processes [closed]

Prove that the change of entropy in a cycle with two isochoric and two adiabatic processes is 0. How can I prove that? Thanks!

-

closed as too localized by David Z♦ Mar 3 '12 at 19:29

Welcome to Physics! This is a site for conceptual questions about physics, not general homework help. If you can edit your question to ask about the specific physics concept that is giving you trouble, I'll be happy to reopen it. For instance, describe how you tried to prove it and what went wrong in the process. See our FAQ and homework policy for more information. – David Z Mar 3 '12 at 19:31

We don't give full solutions here. But here is a hint: Write $dS=\frac{\delta q}{T}$ and apply the first law. Or just say: entropy is a state function, so it does not change in a cyclic process.

-

Entropy is a state function, so if your system ends up in the same state as it started, the entropy change must be zero.

Are you allowed to assume it's an ideal gas? If so, draw a PV diagram of the cycle. The two isochoric stages are vertical lines, and the two adiabatic stages are adiabats, curves of $PV^\gamma = \text{const}$. Combining this with $PV = nRT$ you can show that the temperature ratio across the two isochoric stages is the same, and therefore that their entropy changes $nC_v\ln(T_f/T_i)$ cancel (assuming $C_v$ is constant). This means the net entropy change in the cycle is zero.

To do the calculation start at one point ($P_1$, $V_1$) and work your way round the cycle using these relations to calculate the state at the remaining three points. If you're not allowed to assume it's an ideal gas I'd have to think about it, because you can no longer use $PV = nRT$.

-
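A quick numerical check of the ideal-gas argument above (a sketch of my own, not from the answers): for two isochores joined by two adiabats, the isochoric entropy changes cancel exactly.

```python
# Otto-like cycle: adiabats carry dS = 0; the two isochoric dS terms cancel.
import math

gamma = 1.4                # ideal diatomic gas
Cv = 2.5 * 8.314           # J/(mol K), constant by assumption

V1, V2 = 1.0, 0.25         # the two isochore volumes (arbitrary choices)
T1 = 300.0                 # before adiabatic compression (arbitrary)
T3 = 900.0                 # after isochoric heating (arbitrary)

T2 = T1 * (V1 / V2) ** (gamma - 1)   # adiabat 1 -> 2 (dS = 0)
T4 = T3 * (V2 / V1) ** (gamma - 1)   # adiabat 3 -> 4 (dS = 0)

dS_heat = Cv * math.log(T3 / T2)     # isochoric heating 2 -> 3
dS_cool = Cv * math.log(T1 / T4)     # isochoric cooling 4 -> 1
print(dS_heat + dS_cool)             # ~0 up to floating-point rounding
```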
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6656960248947144, "perplexity": 320.49023160996467}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464053252010.41/warc/CC-MAIN-20160524012732-00211-ip-10-185-217-139.ec2.internal.warc.gz"}
http://gmatclub.com/forum/admitted-to-wharton-class-of-74072-120.html
# Admitted to Wharton Class of 2011!

Intern
Joined: 19 Dec 2008
Posts: 29
Schools: Wharton, Haas, Anderson, Booth, Kellogg MMM, Stanford

10 Mar 2009, 09:06

Did anyone else receive an email last week that the financial aid office was waiting on other materials to complete the fellowship process? I was curious if this went out to everyone who hadn't submitted all materials. I am still hanging on to the delusion that this meant I might actually get a dollar or two in fellowship money.

On another note, how are you folks planning on finding housing? Specifically those who think they will live in a brownstone. Out in California everyone seems to use the same website, so I was wondering if there is something comparable in Philly. Craigslist never seems to be the most reliable or complete source. I want to start to get a better feel for what is out there and the range of prices before I make the next visit.

Anyone else starting to feel like their life is about to get pretty crazy? I know it's almost five months away, but it feels like next week.

Director
Joined: 12 Jul 2008
Posts: 518
Schools: Wharton

10 Mar 2009, 09:19

dayman wrote: [quoted above]

Wharton housing guide.

Current Student
Joined: 09 Apr 2008
Posts: 197
Schools: Chicago Booth '11

12 Mar 2009, 14:37

When is the next admit weekend?

Senior Manager
Joined: 13 Jun 2007
Posts: 410
Schools: Wharton, Booth, Stern

13 Mar 2009, 03:19

pguard wrote: When is the next admit weekend?

April 16 - 19. Pissed off I won't be there, should be terrific fun - away with the rugby team for a tournament.
_________________
Wharton admits, join the rugby team!!
It'll be by far the best experience of your MBA life

Current Student
Joined: 02 Aug 2007
Posts: 187
Schools: Wharton

18 Mar 2009, 10:20

Hey guys- For those of you who have already received your Award Notification Letter, is the "expected minimum contribution from student" ALWAYS $8k? Just wondering where they get that figure from.. and whether most people just meet the minimum or go above and beyond it?

Current Student
Joined: 26 Mar 2008
Posts: 114

18 Mar 2009, 11:09

On my ANL, the total for first year is given as $83K, so I'm assuming the student contribution would be $8.3K. Of course, the $83K figure is based on the assumption you will be spending as much as what is listed on the proposed budget (which is a bit bloated, IMHO).

sterny wrote: Hey guys- [quote truncated]

[...] I also found out that utilities are much cheaper compared to Boston, typically ranging from $30-70 for heat, electric, and water. Only one of the 6 places I visited allowed subletting. A leasing agent did suggest to me, however, that I could add someone to the lease for 3 months and then take them off.

Finally, a note about timing. I'm glad I went early. Apparently apartments go fast during welcome weekend, and I didn't want to get caught playing that game. The only drawback of being early is that the places don't know their availability for the July / August timeframe yet. Not a big deal, though. They'll put you on a waitlist and notify you immediately when something you want is available (this requires a $50-100 non-refundable credit check fee plus a refundable security deposit that ranges from $500 to 1 month's rent). Being early just means you're at the top of the waitlist.

Two less relevant side notes: 1) The street parking is ridiculously expensive: 7.5 minutes per 25 cents. 2) Banking seems like a little bit of a pain. There are 6-7 major banks in Philly, which seems to dilute the no-fee ATM density in the city. Compared to Boston, which is dominated by Bank of America, this could be slightly annoying.

I'll write up my notes on the individual buildings in the next six posts.

Director
Joined: 12 Jul 2008
Posts: 518
Schools: Wharton

23 Mar 2009, 20:07

Riverloft
http://www.riverloftapartments.com
2300 Walnut St.
Studio, 1BR, 3BR

Overall impressions
This is probably the nicest building I visited. It's a mid-rise w/ 8-ish floors. The management/leasing agent was the friendliest and clearly the most responsive of all the places I visited. I've already swapped 6 e-mails and spent 30 min on the phone with the woman who showed me around since my visit on Saturday. The apartments are relatively nice with very high ceilings. There's a decent gym with free weights and standard cardio machines. The cardio machines don't have individual TVs, though, which was disappointing (I'm spoiled). Although I knew this was the nicest place I was going to see, I was skeptical because apartmentratings.com has two complaints about mice in the building. The leasing agent explained that this was a temporary 1-month problem that was quickly taken care of and due to an unusual construction circumstance. I confirmed with a Wharton student living there that neither she nor her neighbors have had any pest problems during their stay.
**Location** This is the best compromise between being close to school and being in Center City. It is right next to the bridge that crosses over to University City. Basically this is as close as you can get to Wharton while still being in Center City.

**Lease** Allows a 22-month lease but locks you into a 3% rent increase for the second year. One-time amenities fee: $175.

**Public transportation** Bus across the street that goes to University City. Cab light outside for hailing taxis.

**Shopping** Rite Aid with pharmacy 1 block away.

**Apartments**

Small studio ($1500-1650, 658 sq ft): The small studios are definitely cozy. You can view the floor plans on their website, so I won't go into detail. The one note I will make is that there is a small den that is not part of the living area that can be used as the bedroom. When I say bedroom, I mean you can fit a bed in there and that's it. Possibly a nightstand, but I'm not sure about that. There is not a lot of closet space, so if you're a girl or a metrosexual (I'm looking at you, solaris1), these studios may not be ideal. In addition, for those who enjoy bubble baths (again, I'm looking at you, solaris1), there is no full bathtub, just a shower. The floor is all hardwood. All in all, if you want a reasonably priced place and are willing to sacrifice some space for quality and location, this is not a bad option.

Large studio ($1900, ~950 sq ft): I didn't get to see this layout. They only have 8 of these layouts in the entire building. Essentially a 1BR, but the BR is officially a den, because it has no windows, which is why it's called a studio.

1BR loft ($1970-2200, ~950 sq ft): These are swank apartments, and this is what I ended up taking. The leasing agent showed me a model apartment, and everything about it was very nice. I really appreciate the large space and ended up being talked into splurging by my friends and family (who would've thought my cheap Asian parents would convince me to spend money?). These also have 1.5 baths for those who don't like guests using their main bathroom. If you're a social person, this is a perfect party apartment. Hardwood floors in the living room (for easy beer-spill cleaning). Carpets upstairs in the loft bedroom. Tons of closet space.

1BR flat / 3BR: Didn't see these, so I know nothing about them. I know they're available, though.

Note: If you go visit this place, PM me. I will give you my name and you can put me down as your referral. I will split the referral bonus with you 50/50 if you end up living here.

**SVP (Yale SOM 2011 alum), 23 Mar 2009, 20:13**

Zoinnk made some really great points about the apartment situation. As someone who actually did a thorough analysis of Center City (Philly) for the purpose of applying to Wharton last year (before I accidentally got off at the wrong subway stop, walked through the wrong neighborhood that surrounds the UPenn campus, and decided that UPenn wasn't gonna be an option... yes, I know, New Haven might be worse...), I highly recommend the apartment complexes around Rittenhouse Square. Doorman buildings are relatively inexpensive (a 2-bedroom I looked at was only $1600 to $1800/month with a 2-year lease) and there's a vibrant nightlife scene around the area.
All the best cafes, shopping malls, the Amish market, B&N, bars, a 24-hour CVS, etc. are conveniently located in the area. Trolley or subway stations are nearby for a convenient 3- or 4-stop ride to the Wharton campus. Click here for the Rittenhouse Square website.

**zoinnk (Director), 23 Mar 2009, 20:32** (2 kudos)

### 1500 Locust

http://www.scullycompany.com/communitie ... b=1&id=210. Studio, 1BR, 2BR.

**Overall impressions** This is your typical high-rise with 40+ floors. There's a gym (with brand-new equipment that they were just about to move in), a pool, a hot tub, locker rooms, saunas, a roof deck, and community activities (yoga, pilates, public speaking classes, etc.). While they seem to promote the community within 1500 Locust, I definitely felt that the place had an impersonal air to it. With so many apartments to manage, I find it hard to believe that management/maintenance is responsive or adds a personal touch. This place, however, is clearly Wharton central. The leasing agent even knew that Thursdays were Wharton party days, since there are no classes on Friday. If you want to live in the social center of the Wharton class, this place is for you. Obviously, there are disadvantages that come with that. Specifically, I've heard that noise can be an issue here. If you want a quieter living situation, I would investigate the noise issue thoroughly here before committing to the building.

**Location** This building was the furthest away from Wharton.

**Lease** Does not allow 22-month leases. Running a special right now due to the housing market: 2 months free on a 13-month lease. This means that typically expensive apartments are very affordable! The apartment I saw is regularly $1840 but is now $1557 (with the 2 free months prorated over the 13 months). Be forewarned, though: if you want to renew and do a 10-month lease to get you through May/June of second year, you will be kicked to the regular rent plus the rent increase plus a $650 penalty for a short-term (i.e., <12-month) lease.

**Public transportation** Public transportation is close by (I forget how close), but the leasing agent says that all the students take cabs in the morning rather than public transportation anyway.

**Shopping** Super convenient. Almost everything you need is within the complex: dry cleaning, tailoring, convenience store, small food market. If you're looking for easy errand-running, this place is for you!

**Apartments** The apartments here vary greatly. Some are renovated, some aren't. Make sure to ask to see the specific apartment you will be leasing. The leasing agent will tell you if it's renovated or not, but you should still see it at least.

**zoinnk (Director), 23 Mar 2009, 21:01** (2 kudos)

### Lofts at 1835 Arch

http://www.1835arch.com/. 1BR, 2BR.

**Overall impressions** If I were living in Philly but not going to Wharton, this is the building I would choose. The area is by far the nicest of the six places. It is right next to all the museums in Philly / Logan Square. It is also a typical high-rise, but it spoke to me more than 1500 Locust did. One nice perk is the monthly merchant spotlight they do in their lobby, which highlights a local business (restaurant or shop). All in all, if you're married and/or not into the party scene, this would be an excellent option for you.
**Location** This building is on the border of Center City and the Art Museum area (i.e., the north border of Center City). Probably the most inconvenient location of the places I visited, requiring the longest commute to school. The leasing agent said 20-30 min by public transportation to school.

**Lease** 22-month leases are available with no baked-in rent increase. Extra storage unit for an additional $25-50/month (depending on size).

**Public transportation** Public transportation is close by (I forget how close), but the leasing agent says that all the students take cabs in the morning rather than public transportation anyway.

**Shopping** Everything you need (grocery, pharmacy, etc.) is within a 5-block radius. Most of the places, though, are on the border of that radius, so it's on the long end of a short walk. 20% food & drink discount at the on-site restaurant, Mission Grill.

**Apartments** They have six different floor plans for 1BRs: 3 flat styles and 3 loft styles. The lofts aren't true lofts; the bedroom area is just slightly raised (3-4 steps). The variations within the styles are just sizes (615-850 sq ft). The smallest apartments have less-than-desirable closet space. Pricing for 1BRs is very affordable: $1250-2000 depending on floor, size, and style, and even the nicest style starts at $1700.

**zoinnk (Director), 23 Mar 2009, 21:24** (1 kudos)

### Wanamaker House

http://www.wanamakerhouse.com/, 2020 Walnut St.

**Overall impressions** This is a condo building, so Wanamaker management does not handle leasing; Allan Domb Realty takes care of leasing for the individual owners of the condos. The building itself is fine. It is solidly average. I don't have many good or bad things to say about this place. It did not blow me away, nor did it cause me to convulse in disgust. The gym is a bit small and left a little to be desired. The rooftop pool is a nice perk. One annoying thing is that the place does not have in-unit washer/dryers, which would be fine except that each floor has one washer and one dryer ($2 for wash and $2 for dry). Overall, I came out of this visit completely indifferent to Wanamaker. Seems like it would be OK for 2 years.

**Location** This is smack dab in the middle of Center City, maybe two blocks away from Rittenhouse Square. Basically splits the difference between Riverloft and 1500 Locust.

**Lease** Leases vary depending on the individual owner.

**Public transportation** Public transportation is close by (I forget how close), but the leasing agent says that all the students take cabs in the morning rather than public transportation anyway.

**Shopping** Everything you need (grocery, pharmacy, etc.) is very close by, since it's in the middle of Center City. I didn't scope out exactly how close, but my sense from 5 minutes of walking around is that convenience would not be a concern.

**Apartments** Very difficult to comment. I saw a studio, a 1BR, and a 2BR. Two had carpets and one had hardwood floors. It completely depends on the owners. All three of the apartments had an older look to them. Definitely not the modern stylings and renovations of most high-rises.
**zoinnk (Director), 23 Mar 2009, 21:38** (1 kudos)

### 2400 Chestnut

http://www.2400chestnut.com/

**Overall impressions** This is the poor man's high-rise. Prices are super low for a reason. The apartments were not very nice and had an old, musky smell. The gym, in the basement, left a lot to be desired. It resembled a prison gym: all the weights were rusted over, and the cardio machines were packed in very closely. No in-unit washer/dryer; 2 W/Ds per floor. I don't have much detail on anything, as I mentally checked out the moment I walked into the first apartment. To be fair, if you're on a tight budget, this is a very good option. You sacrifice a decent amount of quality, but everyone who lives at 2400 Chestnut seems quite happy there. It isn't a horrible bang for your buck.

**Location** Like Riverloft, this is located on the very west edge of Center City, right next to a bridge over to University City. In this case, however, Chestnut St. is one-way the wrong way.

**Lease** Allows 22-month leases. Surprisingly expensive waitlist deposit of $300 (I don't know if it's refundable, but I assume it must be). The only high-rise that allows subletting. Rent includes free DirecTV with only basic cable channels.

**Public transportation** 2400 Chestnut runs its own shuttles to University City. Super convenient!

**Shopping** Just as convenient as Riverloft. Trader Joe's is 4 blocks away. The same Rite Aid is also a block or two away.

**Apartments** The apartments are really not very nice for a professionally managed building. They will re-carpet and re-paint for new tenants, but the appliances, cabinets, and closets are all clearly very old. The pricing on the website is accurate.

**zoinnk (Director), 23 Mar 2009, 21:51** (2 kudos)

### Left Bank

http://www.leftbankapts.com/, 3131 Walnut St. Studio, 1BR, 2BR, 3BR.

**Overall impressions** This is the last place I saw on Saturday. Maybe I was just exhausted by then and didn't care, but I didn't come out with a good feeling, even though the quality of the apartments was about the same as 1835 Arch, which is to say quite nice. A large contributing factor was that the leasing agent was clearly selling me hard, and I react very negatively to boilerplate hard sells. The gym was the nicest of the six buildings I visited.

**Location** This is the only building I visited that is not in Center City. Left Bank is on the University City side of the river (hence "Left Bank"), so it's only 200-300 yards away from the edge of Center City. Very convenient to school but less convenient to the Center City social life.

**Lease** Does not allow 22-month leases.

**Public transportation** You probably won't need public transportation to get to school from here, but you can pick up buses to Wharton if you can't handle the 10-minute walk.

**Shopping** The leasing agent freely admitted you would need a car to track down groceries or do any other shopping.

**Apartments** Very similar to 1835 Arch: very nice and modern. The leasing agent was not particularly detailed about the different options for 1BRs, but I saw a 1BR flat and a 1BR loft. Prices for 1BRs start at $1700 and go to $2350. Studios $1500-1650.

**sterny (Current Student), 24 Mar 2009, 08:00**

zoinnk - you are amazing, buddy! Great job posting your review of these places.
I am kinda curious - how much does the average Wharton student spend on housing? I am definitely leaning towards getting a 2BR and splitting it with a roommate to bring the cost down closer to $1000. Do you think that's a realistic possibility?

**zoinnk (Director), 24 Mar 2009, 08:13**

I think that's totally reasonable. I would say a 2BR in a nice high-rise will run you $1200-1400. Don't quote me on that, though, as I was not looking at 2BRs particularly closely.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.16598853468894958, "perplexity": 5495.441784849524}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461860121561.0/warc/CC-MAIN-20160428161521-00029-ip-10-239-7-51.ec2.internal.warc.gz"}
https://quantumcomputing.stackexchange.com/tags/fidelity/new
# Tag Info

The fidelity case was already worked in the other answer. Here is an idea for the trace distance one. The trace distance between $\rho$ and some $|\psi\rangle\langle\psi|$ is
$$\|\rho - |\psi\rangle\langle\psi|\|_1 = \operatorname{Tr}\bigl|\,\rho - |\psi\rangle\langle\psi|\,\bigr|,$$
which is equal to the sum of the singular values of $\rho - |\psi\rangle\langle\psi|$ ...

Recall that for any Hermitian operator $A$ and any unit vector $|\psi\rangle$ the real number $\langle\psi|A|\psi\rangle$, known as the Rayleigh quotient, is bounded by the largest eigenvalue $\lambda_{\max}$ of $A$:
$$\langle\psi|A|\psi\rangle \le \lambda_{\max}.$$
Moreover, the maximum is achieved when $|\psi\rangle$ is the unit-norm eigenvector of $A$ ...
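Neither excerpt includes code; purely as a numerical sanity check of both statements, here is a small numpy sketch (the example state is my own, not from the answers):

```python
import numpy as np

# A mixed qubit state rho and a pure state |psi>.
psi = np.array([1.0, 1.0]) / np.sqrt(2)
proj = np.outer(psi, psi.conj())                 # |psi><psi|
rho = 0.7 * np.diag([1.0, 0.0]) + 0.3 * proj     # a valid density matrix

# Trace norm of rho - |psi><psi| equals the sum of its singular values.
diff = rho - proj
trace_norm = np.linalg.svd(diff, compute_uv=False).sum()

# <psi|rho|psi> is a Rayleigh quotient, bounded by rho's largest eigenvalue.
rayleigh = float(np.real(psi.conj() @ rho @ psi))
lam_max = np.linalg.eigvalsh(rho).max()
print(trace_norm, rayleigh, lam_max, rayleigh <= lam_max + 1e-12)
```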
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9928565621376038, "perplexity": 268.86556570432015}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988775.25/warc/CC-MAIN-20210507060253-20210507090253-00471.warc.gz"}
https://code.kx.com/q/cloud/autoscale/cost-risk/
# Cost/risk analysis

To determine how much cost savings our cluster of RDBs can make, we will deploy the stack and simulate a day in the market. t3 instances were used here for simplicity: their small sizes meant the clusters could scale in and out to demonstrate cost savings without using a huge amount of data. In reality they can incur a significant amount of excess cost due to their burstable performance, so for production systems fixed-cost instances like r5, m5, and i3 should really be used.

## Initial simulation

First we just want to see the cluster in action so we can see how it behaves. To do this we will run the cluster with t3a.micro instances.

In the "Auto Scaling the RDB" section above, data is distributed evenly throughout the day. This will not be the case in most of our systems, as data volumes will be highly concentrated between market open and close. To simulate this as closely as possible, we will generate data following the distribution below.

Figure 2.1: Simulation data volume distribution

In this simulation we will aim to send in 6GB of mock trade and quote data. The peak data volume will be almost 1GB per hour (15% of the daily data). The t3a.micro instances only have 1GB of RAM, so we should see the cluster scaling out quite quickly while markets are open.

The behavior of the cluster was monitored using CloudWatch metrics. Each RDB server published the results of the Linux free command. First we will take a look at the total capacity of the cluster throughout the day.

Figure 2.2: Total memory capacity of the t3a.micro cluster (CloudWatch metrics)

As expected, the number of servers stayed at one until the market opened. The RDBs then started to receive data and the cluster scaled up to eight instances. At end-of-day the data was flushed from memory and all but the live server were terminated, so the capacity was reduced back to 1GB and the cycle continued the day after.

Plotting the memory usage of each server, we see that the rates at which they rose were higher in the middle of the day when the data volumes were highest.

Figure 2.3: Memory usage of each of the t3a.micro servers (CloudWatch metrics)

Focusing on just two of the servers, we can see the relationship between the live server and the one it eventually launches.

Figure 2.4: Scaling thresholds of t3a.micro servers (CloudWatch metrics)

At 60% memory usage the live server increased the ASG's DesiredCapacity and launched the new server. We can see the new server then waited for about twenty minutes until the live RDB reached the roll threshold of 80%. The live server then unsubscribed from the tickerplant and the next server took over.
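The actual implementation of this scale-and-roll logic lives in the companion scripts; purely to make the control flow concrete, here is an illustrative Python sketch of one monitoring poll. The ASG name, the use of psutil in place of parsing the free output, and the boto3 wiring are my assumptions, not part of the article:

```python
import boto3
import psutil  # assumption: psutil stands in for parsing `free` output

ASG_NAME = "rdb-cluster"            # hypothetical ASG name
SCALE_AT, ROLL_AT = 0.60, 0.80      # thresholds used in the simulation above

asg = boto3.client("autoscaling")

def roll_to_next_server():
    """Placeholder for the .u.asg handover: unsubscribe from the
    tickerplant so the queued server becomes the live subscriber."""
    print("rolling: unsubscribing from tickerplant")

def poll(already_scaled: bool) -> bool:
    """One poll of the live RDB's threshold logic: add a server to the
    ASG at 60% memory used, hand over to the next server at 80%."""
    vm = psutil.virtual_memory()
    used = 1.0 - vm.available / vm.total
    if used >= SCALE_AT and not already_scaled:
        desired = asg.describe_auto_scaling_groups(
            AutoScalingGroupNames=[ASG_NAME]
        )["AutoScalingGroups"][0]["DesiredCapacity"]
        asg.set_desired_capacity(AutoScalingGroupName=ASG_NAME,
                                 DesiredCapacity=desired + 1)
        already_scaled = True
    if used >= ROLL_AT:
        roll_to_next_server()
        already_scaled = False
    return already_scaled
```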
## Cost factors

Now that we can see the cluster working as expected, we can take a look at its cost-efficiency; more specifically, how much of the computing resources we provisioned we actually used. To do that we can compare the capacity of the cluster against its memory usage.

Figure 2.5: t3a.micro cluster's total memory capacity vs. total memory usage (CloudWatch metrics)

We can see from the graph above that the cluster's capacity follows the demand line quite closely. As we pay per GB of RAM we use, the capacity line can be taken as the cost of the cluster, and the gap between it and the usage line is where the cluster can make savings.

Our first option is to reduce the size of each step up in capacity by reducing the size of our cluster's servers. To bring the step itself closer to the demand line, we need to either scale the server as late as possible or have each RDB hold more data. To summarize, there are three factors we can adjust in our cluster:

- the server size
- the scale threshold
- the roll threshold

### Risk analysis

Care will be needed when adjusting these factors for cost-efficiency, as each one will increase the risk of failure. First and foremost, a roll threshold should be chosen so that the chance of losing an RDB to a `'wsfull` error is minimized.

The main risk associated with scaling comes from not being able to scale out fast enough. This will occur if the lead time for an RDB server is greater than the time it takes for the live server to roll after it has told the ASG to scale out.

Figure 2.6: t3a.micro server waiting to become the live subscriber (CloudWatch metrics)

Taking a closer look at Figure 2.4, we can see the t3a.micro took around one minute to initialize. It then waited another 22 minutes for the live server to climb to its roll threshold of 80% before taking its place. So for this simulation the cluster had a 22-minute cushion. With a one-minute lead time, the data volumes would have to increase to 22 times that of the mock feed before the cluster started to fall behind.

We could reduce this time by narrowing the gap between scaling and rolling, but it may not be worth it. Falling behind the tickerplant will mean recovering data from its log. This issue is a compounding one: each subsequent server that comes up will be farther and farther behind the tickerplant, more and more data will need to be recovered, and live data will be delayed. One of the mantras of Auto Scaling is to stop guessing demand. By keeping a cushion for the RDBs in the tickerplant's queue, we will likely not have to worry about large spikes in demand affecting our system.

Further simulations will be run to determine whether the cost savings associated with adjusting these factors are worth the risk.
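That cushion can be backed out with simple arithmetic. This is my own illustration of the numbers quoted above, not a calculation from the article:

```python
def cushion_minutes(capacity_gb, scale_at, roll_at, ingest_gb_per_hr):
    """Time between the scale-out signal and the roll, i.e. how long a
    replacement server can take to appear before data is delayed."""
    return (roll_at - scale_at) * capacity_gb / ingest_gb_per_hr * 60.0

# t3a.micro: 1 GB of RAM, thresholds at 60% and 80%.
print(cushion_minutes(1.0, 0.60, 0.80, 0.55))  # ~22 min at ~0.55 GB/hr
print(cushion_minutes(1.0, 0.60, 0.80, 0.90))  # ~13 min at the ~0.9 GB/hr peak
```

On these numbers, the 22-minute cushion in Figure 2.4 corresponds to a mid-morning ingest rate of roughly 0.55 GB/hr; at the quoted ~0.9 GB/hr peak the same thresholds would leave about 13 minutes.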
## Server size comparison

To determine the impact of using smaller instances, four clusters were launched, each with a different instance type. The instances used had capacities of 2, 4, 8, and 16GB.

Figure 3.1: t3a instance types used for the cost-efficiency comparison

As in the first simulation, the data volumes were distributed in order to simulate a day in the market. However, in this simulation we aimed to send in around 16GB of data to match the total capacity of one t3a.xlarge (the largest instance type of the clusters). .sub.i was published from each of the live RDBs, allowing us to plot the upd message throughput.

Figure 3.2: t3a clusters' upd throughput (CloudWatch metrics)

Since there was no great difference between the clusters, the assumption could be made that the amount of data in each cluster at any given time throughout the day was equivalent, so any further comparisons between the four clusters would be valid. Next the total capacity of each cluster was plotted.

Figure 3.3: t3a clusters' total memory capacity

Strangely, the capacity of the t3a.small cluster (the smallest instance) rose above the capacity of the larger ones. Intuitively they should scale together, and the smaller steps of the t3a.small cluster should still have kept it below the others. When the memory usage of each server was plotted, we saw that the smaller instances once again rose above the larger ones.

Figure 3.4: t3a clusters' memory usage (CloudWatch metrics)

This comes down to the latent memory of each server: when an empty RDB process is running, its memory usage is approximately 150MB.

```
(base) [ec2-user@ip-10-0-1-212 ~]$ free
              total        used        free      shared  buff/cache   available
Mem:        2002032      150784     1369484         476      481764     1694748
Swap:             0           0           0
```

So for every instance that we add to the cluster, the overall memory usage will increase by 150MB. This extra 150MB will be negligible when the data volumes are scaled up, as much larger servers will be used. The effect is less prominent in the 4, 8, and 16GB servers, so going forward we will use them to compare costs.

Figure 3.5: Larger t3a clusters' memory usage (CloudWatch metrics)

The three clusters here behave as expected. The smallest cluster's capacity stays far closer to the demand line, although it does move towards the larger ones as more instances are added. This is the worst-case scenario for the t3a.xlarge cluster: its 16GB means it has to scale up to safely meet the demand of the simulation's data, but the second server stays mostly empty until end-of-day. The cluster will still have major savings over a t3a.2xlarge with 32GB.

The cost of running each cluster was calculated; the results are shown below. We can see that the two smaller instances have significant savings compared to the larger ones: around 50% when compared to running a t3a.2xlarge, where the clusters with larger instances saw just 35 and 38%.

| instance | capacity (GB) | total cost ($) | cost saving (%) |
|---|---|---|---|
| t3a.small | 2 | 3.7895 | 48 |
| t3a.medium | 4 | 3.5413 | 51 |
| t3a.large | 8 | 4.4493 | 38 |
| t3a.xlarge | 16 | 4.7175 | 35 |
| t3a.2xlarge | 32 | 7.2192 | 0 |

Figure 3.6: t3a clusters' cost savings

If data volumes are scaled up, the savings could become even greater as the ratio of server size to total daily data volume grows. However, it is worth noting that the larger servers did have more capacity left when the data volumes stopped, so the differences may also be slightly exaggerated.

Taking a look at Figure 3.5, we can intuitively split the day into three stages:

1. end-of-day to market open
2. market open to market close
3. market close to end-of-day

Savings in the first stage will only be achieved by reducing the instance size. In the second stage savings look to be less significant, but could be achieved by both reducing server size and reducing the time servers spend waiting in the queue.

From market close to end-of-day the clusters have scaled out fully. In this stage cost-efficiency is determined by how much data is in the final server. If it is only holding a small amount of data when the market closes, there will be idle capacity in the cluster until end-of-day. This is rather random and depends mainly on how much data is generated by the market, although having smaller servers reduces the maximum amount of capacity that could be left unused. The worst-case scenario in this stage is that the amount of data held by the last live server falls in the range between the scale and roll thresholds. This would mean an entire RDB server sits idle until end-of-day. To reduce the likelihood of this occurring, it may be worth increasing the scale threshold and risking falling behind the tickerplant in the case of high data volumes.
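The article does not show the cost calculation itself, but the baseline row is consistent with cost being the on-demand rate multiplied by instance-hours (for t3a.2xlarge in us-east-1, 0.3008 $/hr over 24 h gives 7.2192). A sketch under that assumption, with illustrative on-demand rates that should be checked against current AWS pricing:

```python
# Illustrative us-east-1 on-demand rates in $/hr (not from the article).
PRICES = {"t3a.small": 0.0188, "t3a.medium": 0.0376, "t3a.large": 0.0752,
          "t3a.xlarge": 0.1504, "t3a.2xlarge": 0.3008}

def cluster_cost(instance: str, instance_hours: float) -> float:
    """Daily cost = on-demand rate x total instance-hours across the cluster."""
    return PRICES[instance] * instance_hours

baseline = cluster_cost("t3a.2xlarge", 24)            # 7.2192, as in the table
saving = 1 - cluster_cost("t3a.xlarge", 24 + 7.4) / baseline
print(baseline, round(100 * saving))                  # ~35% saving
```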
## Threshold window comparison

To test the effects of the scale threshold on cost, another stack was launched (also with four RDB clusters). In this stack all four clusters used t3a.medium EC2 instances (4GB) and a roll threshold of 85% was set. Data was generated in the same fashion as the previous simulation. The scale thresholds were set to 20, 40, 60, and 80%, and the memory capacity was plotted as in Figure 3.4.

Figure 4.1: t3a.medium clusters' memory capacity vs. memory usage (CloudWatch metrics)

As expected, the clusters with the lower scale thresholds scale out farther away from the demand line. Their new servers then have a longer wait time in the tickerplant queue. This reduces the risks associated with the second stage but also increases its costs. The difference can be seen more clearly if only the 20% and 80% clusters are plotted.

Figure 4.2: t3a.medium 20% and 80% clusters' memory capacity vs. memory usage (CloudWatch metrics)

Most importantly, we can see that in the third stage the clusters with lower thresholds started an extra server, so a whole instance was left idle in those clusters from market close to end-of-day. The costs associated with each cluster were calculated below. (The final column is each cluster's cost relative to the t3a.2xlarge baseline, which is how the figures are discussed in the text that follows.)

| instance | threshold | capacity (GB) | total cost ($) | cost vs. t3a.2xlarge (%) |
|---|---|---|---|---|
| t3a.medium | 80% | 4 | 3.14 | 43 |
| t3a.medium | 60% | 4 | 3.19 | 44 |
| t3a.medium | 40% | 4 | 3.56 | 49 |
| t3a.medium | 20% | 4 | 3.61 | 50 |
| t3a.2xlarge | n/a | 32 | 7.21 | 100 |

The 20% and 40% clusters and the 60% and 80% clusters started the same number of servers as each other throughout the day, so we can compare their costs to analyze cost-efficiency in the second stage (market open to close). With differences of under 1% of the t3a.2xlarge cost within each pair, the savings we can make from this stage are not that significant. Comparing the two pairs, however, costs jump from 44 to 49%. Therefore the final stage, where there is an extra server sitting idle until end-of-day, has a much larger impact.

Even though raising the scale threshold has a significant impact when no extra server is added at market close, choosing whether to do so will still be dependent on the needs of each system. A 5% decrease in costs may not be worth the risk of falling behind the tickerplant.

## Taking it further

### Turning the cluster off

The saving estimates in the previous sections could be taken a step further by adding scheduled scaling. When the RDBs are not in use, we could scale the cluster down to zero, effectively turning off the RDB. Weekends are a prime example of when this could be useful, but it could also be extended to the period between end-of-day and market open. If data only starts coming into the RDB at around 07:00 when markets open, there is no point having a server up, so we could schedule the ASG to turn down to zero instances at end-of-day. We then have a few options for scaling back out, each with some pros and cons (a sketch of the first, scheduled option follows the table):

| option | remarks |
|---|---|
| Schedule the ASG to scale out at 05:30 before the market opens | Data will not be available until then if it starts to come in before. |
| Monitor the tickerplant for the first upd message and scale out when it is received | Data will not be available until the RDB comes up and recovers from the tickerplant log. There will not be much data to recover. |
| Scale out when the first query is run | Useful because data is not needed until it is queried. RDBs may come up before there is any data. A large amount of data may need to be recovered if queries start to come in later in the day. |
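The scheduled option needs no new infrastructure, since ASG scheduled actions can drive it directly. An illustrative boto3 sketch; the action names, times, and ASG name are my assumptions:

```python
import boto3

asg = boto3.client("autoscaling")

# Hypothetical schedule: zero servers after end-of-day, one back before open.
for action, cron, size in [("rdb-scale-in",  "30 21 * * 1-5", 0),
                           ("rdb-scale-out", "30 5 * * 1-5",  1)]:
    asg.put_scheduled_update_group_action(
        AutoScalingGroupName="rdb-cluster",   # hypothetical ASG name
        ScheduledActionName=action,
        Recurrence=cron,                      # standard cron expression, UTC
        MinSize=size,
        MaxSize=8,
        DesiredCapacity=size,
    )
```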
The least complex way to run this solution would be in tandem with a write-down database (WDB) process. The RDBs would then not have to save down to disk at end-of-day, so scaling in would be quicker and the complexity would be reduced. If the RDBs were to write down at end-of-day, a separate process would be needed to coordinate the writes of each one and to sort and part the data.

As the cluster will most likely be deployed alongside a WDB process, an intraday write-down solution could also be incorporated. If we were to write to the HDB every hour, the RDBs could then flush their data from memory, allowing the cluster to scale in each time. Options for how to set up an intraday write-down solution have been discussed in a whitepaper by Colm McCarthy.

### Querying distributed RDBs

As discussed, building a gateway to query the RDBs is beyond the scope of this paper. When a gateway process is set up, distributed RDBs could offer some advantages over a regular RDB:

- RDBs can be filtered out by the gateway pre-query, based on which data sets they are holding.
- Each RDB will be holding a fraction of the day's data, decreasing query memory and duration.
- Queries across multiple RDBs can be done in parallel.

## Conclusion

This article has presented a solution for a scalable real-time database cluster. The simulations carried out showed that savings of up to 50% could be made. These savings, along with the increased availability of the cluster, could make holding a whole day's data in memory more feasible for our kdb+ databases.

If not, the cluster can be used alongside an intraday write-down process. If an intraday write is incorporated in a system, it is usually one that needs to keep memory usage low. The scalability of the cluster can guard against large spikes in intraday data volumes crippling the system. Used in this way, very small instances could be used to reduce costs.

The .u.asg functionality in the tickerplant also gives the opportunity to run multiple clusters at different levels of risk. Highly important data can be placed in a cluster with a low scale threshold or a larger instance size. If certain data sources do not need to be available with low latency, clusters with smaller instances and higher scale thresholds can be used to reduce costs.

## Author

Jack Stapleton is a kdb+ consultant for KX who has worked for some of the world's largest financial institutions. Based in Dublin, Jack is currently working on the design, development, and maintenance of a range of kdb+ solutions in the cloud for a leading financial institution.

kxcontrib/cloud-autoscaling companion scripts
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.35871124267578125, "perplexity": 1454.7085287656782}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046153803.69/warc/CC-MAIN-20210728220634-20210729010634-00346.warc.gz"}
http://uk.mathworks.com/help/econ/var-models.html?nocookie=true
# Vector Autoregressive Models

## Introduction to Vector Autoregressive (VAR) Models

### Types of VAR Models

The multivariate time series models used in Econometrics Toolbox™ functions are based on linear, autoregressive models. The basic models are:

| Model Name | Abbreviation | Equation |
|---|---|---|
| Vector Autoregressive | VAR(p) | $y_t = a + \sum_{i=1}^{p} A_i y_{t-i} + \epsilon_t$ |
| Vector Moving Average | VMA(q) | $y_t = a + \sum_{j=1}^{q} B_j \epsilon_{t-j} + \epsilon_t$ |
| Vector Autoregressive Moving Average | VARMA(p, q) | $y_t = a + \sum_{i=1}^{p} A_i y_{t-i} + \sum_{j=1}^{q} B_j \epsilon_{t-j} + \epsilon_t$ |
| Vector Autoregressive Moving Average with eXogenous inputs | VARMAX(p, q, r) | $y_t = a + X_t \cdot b + \sum_{i=1}^{p} A_i y_{t-i} + \sum_{j=1}^{q} B_j \epsilon_{t-j} + \epsilon_t$ |
| Structural Vector Autoregressive Moving Average with eXogenous inputs | SVARMAX(p, q, r) | $A_0 y_t = a + X_t \cdot b + \sum_{i=1}^{p} A_i y_{t-i} + \sum_{j=1}^{q} B_j \epsilon_{t-j} + B_0 \epsilon_t$ |

The following variables appear in the equations:

- $y_t$ is the vector of response time series variables at time t. $y_t$ has n elements.
- $a$ is a constant vector of offsets, with n elements.
- $A_i$ are n-by-n matrices for each i. The $A_i$ are autoregressive matrices. There are p autoregressive matrices.
- $\epsilon_t$ is a vector of serially uncorrelated innovations of length n. The $\epsilon_t$ are multivariate normal random vectors with a covariance matrix Q, where Q is an identity matrix unless otherwise specified.
- $B_j$ are n-by-n matrices for each j. The $B_j$ are moving average matrices. There are q moving average matrices.
- $X_t$ is an n-by-r matrix representing exogenous terms at each time t, where r is the number of exogenous series. Exogenous terms are data (or other unmodeled inputs) in addition to the response time series $y_t$.
- $b$ is a constant vector of regression coefficients of size r, so the product $X_t \cdot b$ is a vector of size n.

Generally, the time series $y_t$ and $X_t$ are observable. In other words, if you have data, it represents one or both of these series. You do not always know the offset $a$, coefficient $b$, autoregressive matrices $A_i$, and moving average matrices $B_j$; you typically want to fit these parameters to your data. See the vgxvarx function reference page for ways to estimate unknown parameters. The innovations $\epsilon_t$ are not observable, at least in data, though they can be observable in simulations.

### Lag Operator Representation

There is an equivalent representation of the linear autoregressive equations in terms of lag operators. The lag operator L moves the time index back by one: $L y_t = y_{t-1}$. The operator $L^m$ moves the time index back by m: $L^m y_t = y_{t-m}$.

In lag operator form, the equation for a SVARMAX(p, q, r) model becomes

$$\left(A_0 - \sum_{i=1}^{p} A_i L^i\right) y_t = a + X_t b + \left(B_0 + \sum_{j=1}^{q} B_j L^j\right) \epsilon_t.$$

This equation can be written as

$$A(L)\, y_t = a + X_t b + B(L)\, \epsilon_t,$$

where $A(L) = A_0 - \sum_{i=1}^{p} A_i L^i$ and $B(L) = B_0 + \sum_{j=1}^{q} B_j L^j$.

### Stable and Invertible Models

A VAR model is stable if

$$\det\left(I_n - A_1 z - A_2 z^2 - \cdots - A_p z^p\right) \neq 0 \quad \text{for } |z| \le 1.$$

This condition implies that, with all innovations equal to zero, the VAR process converges to $a$ as time goes on. See Lütkepohl [69] Chapter 2 for a discussion.

A VMA model is invertible if

$$\det\left(I_n + B_1 z + B_2 z^2 + \cdots + B_q z^q\right) \neq 0 \quad \text{for } |z| \le 1.$$

This condition implies that the pure VAR representation of the process is stable. For an explanation of how to convert between VAR and VMA models, see Changing Model Representations. See Lütkepohl [69] Chapter 11 for a discussion of invertible VMA models.

A VARMA model is stable if its VAR part is stable. Similarly, a VARMA model is invertible if its VMA part is invertible.

There is no well-defined notion of stability or invertibility for models with exogenous inputs (e.g., VARMAX models). An exogenous input can destabilize a model.
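For p = 1 the stability condition reduces to every eigenvalue of $A_1$ lying strictly inside the unit circle, since $\det(I_n - A_1 z) = 0$ exactly when $z$ is the reciprocal of an eigenvalue of $A_1$. As a worked check (my own example, using the VAR(1) coefficient matrix that appears later on this page):

$$A_1 = \begin{bmatrix} 0.5 & 0 & 0 \\ 0.1 & 0.1 & 0.3 \\ 0 & 0.2 & 0.3 \end{bmatrix}, \qquad \operatorname{eig}(A_1) \approx \{\,0.5,\ 0.465,\ -0.065\,\},$$

so $\det(I_3 - A_1 z) \neq 0$ for all $|z| \le 1$, and a VAR(1) model with this $A_1$ is stable.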
## Building VAR Models

To understand a multiple time series model, or multiple time series data, you generally perform the following steps:

1. Import and preprocess data.
2. Specify a model. See:
   - Specifying Models, to set up a model using vgxset
   - Determining an Appropriate Number of Lags, to determine an appropriate number of lags for your model
3. Fit the model to data. See:
   - Fitting Models to Data, to use vgxvarx to estimate the unknown parameters in your models
4. Analyze and forecast using the fitted model. This can involve:
   - Checking Stability, to determine whether your model is stable and invertible
   - Forecasting with vgxpred, to forecast directly from models
   - Forecasting with vgxsim, to simulate a model
   - Generate Impulse Responses for a VAR model, to calculate impulse responses, which give forecasts based on an assumed change in an input to a time series
   - Comparing Forecasts with Forecast Period Data, to compare the results of your model's forecasts to your data

Your application need not involve all of the steps in this workflow. For example, you might not have any data, but want to simulate a parameterized model. In that case, you would perform only steps 2 and 4 of the generic workflow. You might also iterate through some of these steps.

## Data Structures

### Multivariate Time Series Data

Often, the first step in creating a multiple time series model is to obtain data. There are two types of multiple time series data: the response series $y_t$ that you model, and, optionally, exogenous data $X_t$. Before using Econometrics Toolbox functions with the data, put the data into the required form. Use standard MATLAB commands, or preprocess the data with a spreadsheet program, database program, Perl, or other utilities.

There are several freely available sources of data sets, such as the St. Louis Federal Reserve Economics Database (known as FRED): http://research.stlouisfed.org/fred2/. If you have a license, you can use Datafeed Toolbox™ functions to access data from various sources.

### Response Data Structure

Response data for multiple time series models must be in the form of a matrix. Each row of the matrix represents one time, and each column represents one time series. The earliest data is the first row; the latest data is the last row. The data represents $y_t$ in the notation of Types of VAR Models. If there are T times and n time series, put the data in the form of a T-by-n matrix:

$$\begin{bmatrix} Y1_1 & Y2_1 & \cdots & Yn_1 \\ Y1_2 & Y2_2 & \cdots & Yn_2 \\ \vdots & \vdots & \ddots & \vdots \\ Y1_T & Y2_T & \cdots & Yn_T \end{bmatrix}$$

$Y1_t$ represents time series 1, ..., $Yn_t$ represents time series n, for $1 \le t \le T$.

#### Multiple Paths

Response time series data can have an extra dimension corresponding to separate, independent paths. For this type of data, use a three-dimensional array Y(t,j,p), where:

- t is the time index of an observation, 1 ≤ t ≤ T.
- j is the index of a time series, 1 ≤ j ≤ n.
- p is the path index, 1 ≤ p ≤ P.

For any path p, Y(t,j,p) is a time series.

#### Example: Response Data Structure

The file Data_USEconModel ships with Econometrics Toolbox software. It contains time series from the St. Louis Federal Reserve Economics Database (known as FRED).
Enter

```
load Data_USEconModel
```

This loads, among other variables:

- Data, a 249-by-14 matrix containing the 14 time series,
- DataTable, a 249-by-14 tabular array that packages the data,
- dates, a 249-element vector containing the dates for Data,
- Description, a character array containing a description of the data series and the key to the labels for each series,
- series, a 1-by-14 cell array of labels for the time series.

Examine the data structures:

```
firstPeriod = dates(1)
lastPeriod = dates(end)

firstPeriod =
      711217
lastPeriod =
      733863
```

- dates is a vector containing MATLAB serial date numbers, the number of days since the putative date January 1, 0000. (This "date" is not a real date, but is convenient for making date calculations; for more information, see Date Formats in the Financial Toolbox™ User's Guide.)
- The Data matrix contains 14 columns, representing the time series labeled by the cell vector of strings series.

| FRED Series | Description |
|---|---|
| COE | Paid compensation of employees in $ billions |
| CPIAUCSL | Consumer price index |
| FEDFUNDS | Effective federal funds rate |
| GCE | Government consumption expenditures and investment in $ billions |
| GDP | Gross domestic product in $ billions |
| GDPDEF | Gross domestic product price deflator |
| GDPI | Gross private domestic investment in $ billions |
| GS10 | Ten-year treasury bond yield |
| HOANBS | Non-farm business sector index of hours worked |
| M1SL | M1 money supply (narrow money) |
| PCEC | Personal consumption expenditures in $ billions |
| TB3MS | Three-month treasury bill yield |
| UNRATE | Unemployment rate |

DataTable is a tabular array containing the same data as Data, but you can call variables using the tabular array and the name of the variable. Use dot notation to access a variable; for example, DataTable.UNRATE calls the unemployment rate time series.

### Data Preprocessing

Your data might have characteristics that violate assumptions for linear multiple time series models. For example, you can have data with exponential growth, or data from multiple sources at different periodicities. You must preprocess your data to convert it into an acceptable form for analysis.

- For time series with exponential growth, you can preprocess the data by taking the logarithm of the growing series. In some cases you then difference the result. For an example, see Transforming Data for Stationarity.
- For data from multiple sources, you must decide how best to fill in missing values. Commonly, you take the missing values as unchanged from the previous value, or as interpolated from neighboring values.

Note: If you take a difference of a series, the series becomes 1 shorter. If you take a difference of only some time series in a data set, truncate the other series so that all have the same length, or pad the differenced series with initial values.

#### Testing Data for Stationarity

You can test each time series data column for stability using unit root tests. For details, see Unit Root Nonstationarity.

### Partitioning Response Data

To fit a lagged model to data, partition the response data into up to three sections:

- presample data
- estimation data
- forecast data

The purpose of presample data is to provide initial values for lagged variables. When trying to fit a model to the estimation data, you need to access earlier times. For example, if the maximum lag in a model is 4, and if the earliest time in the estimation data is 50, then the model needs to access data at time 46 when fitting the observations at time 50. vgxvarx assumes the value 0 for any data that is not part of the presample data. The estimation data contains the observations $y_t$.
Use the estimation data to fit the model, and use the forecast data to compare fitted-model predictions against data. You do not have to have a forecast period; use one to validate the predictive power of a fitted model. The following figure shows how to arrange the data in the data matrix, with j presample rows and k forecast rows.
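The figure itself did not survive extraction; the arrangement it describes is simply vertical stacking of the three partitions (my reconstruction from the surrounding text):

$$\begin{bmatrix} \text{presample data} \\ \text{estimation data} \\ \text{forecast data} \end{bmatrix} \qquad \begin{aligned} &\text{rows } 1, \dots, j \\ &\text{rows } j+1, \dots, j+T \\ &\text{rows } j+T+1, \dots, j+T+k \end{aligned}$$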
## Model Specification Structures

### Models for Multiple Time Series

Econometrics Toolbox functions require a model specification structure as an input before they simulate, calibrate, forecast, or perform other calculations. You can specify a model with or without time series data. If you have data, you can fit models to the data as described in VAR Model Estimation. If you do not have data, you can specify a model with parameters you provide, as described in Specification Structures with Known Parameters.

### Specifying Models

Create an Econometrics Toolbox multiple time series model specification structure using the vgxset function. Use this structure for calibrating, simulating, predicting, analyzing, and displaying multiple time series. There are three ways to create model structures using the vgxset function:

- Specification Structures with Known Parameters. Use this method when you know the values of all relevant parameters of your model.
- Specification Structures with No Parameter Values. Use this method when you know the size, type, and number of lags in your model, but do not know the values of any of the AR or MA coefficients, or the value of your Q matrix.
- Specification Structures with Selected Parameter Values. Use this method when you know the size, type, and number of lags in your model, and also know some, but not all, of the values of AR or MA coefficients. This method includes the case when you want certain parameters to be zero. You can specify as many parameters as you like. For example, you can specify certain parameters, request that vgxvarx estimate others, and specify other parameters with [] to indicate default values.

#### Specification Structures with Known Parameters

If you know the values of model parameters, create a model structure with the parameters. The following are the name-value pairs you can pass to vgxset for known parameter values.

**Model Parameters**

| Name | Value |
|---|---|
| a | An n-vector of offset constants. The default is empty. |
| AR0 | An n-by-n invertible matrix representing the zero-lag structural AR coefficients. The default is empty, which means an identity matrix. |
| AR | An nAR-element cell array of n-by-n matrices of AR coefficients. The default is empty. |
| MA0 | An n-by-n invertible matrix representing the zero-lag structural MA coefficients. The default is empty, which means an identity matrix. |
| MA | An nMA-element cell array of n-by-n matrices of MA coefficients. The default is empty. |
| b | An nX-vector of regression coefficients. The default is empty. |
| Q | An n-by-n symmetric innovations covariance matrix. The default is empty, which means an identity matrix. |
| ARlag | A monotonically increasing nAR-vector of AR lags. The default is empty, which means all the lags from 1 to p, the maximum AR lag. |
| MAlag | A monotonically increasing nMA-vector of MA lags. The default is empty, which means all the lags from 1 to q, the maximum MA lag. |

vgxset infers the model dimensionality, given by n, p, and q in Types of VAR Models, from the input parameters. These parameters are n, nAR, and nMA respectively in vgxset syntax. For more information, see Specification Structures with No Parameter Values.

The ARlag and MAlag vectors allow you to specify which lags you want to include. For example, to specify AR lags 1 and 3 without lag 2, set ARlag to [1 3]. This setting corresponds to nAR = 2 for two specified lags, even though this is a third-order model, since the maximum lag is 3.

The following example shows how to create a model structure when you have known parameters. Consider the VAR(1) model

$$y_t = a + \begin{bmatrix} 0.5 & 0 & 0 \\ 0.1 & 0.1 & 0.3 \\ 0 & 0.2 & 0.3 \end{bmatrix} y_{t-1} + \epsilon_t,$$

where $a = [0.05,\ 0,\ -0.05]'$ and the innovations $\epsilon_t$ are distributed as standard three-dimensional normal random variables. Create a model specification structure with vgxset:

```
A1 = [.5 0 0;.1 .1 .3;0 .2 .3];
Q = eye(3);
Mdl = vgxset('a',[.05;0;-.05],'AR',{A1},'Q',Q)

Mdl =
  Model: 3-D VAR(1) with Additive Constant
      n: 3
    nAR: 1
    nMA: 0
     nX: 0
      a: [0.05 0 -0.05] additive constants
     AR: {1x1 cell} stable autoregressive process
      Q: [3x3] covariance matrix
```

vgxset identifies this model as a stable VAR(1) model with three dimensions and additive constants.

#### Specification Structures with No Parameter Values

By default, vgxvarx fits all unspecified additive (a), AR, regression coefficient (b), and Q parameters. You must specify the number of time series and the type of model you want vgxvarx to fit. The following are the name-value pairs you can pass to vgxset for unknown parameter values.

**Model Orders**

| Name | Value |
|---|---|
| n | A positive integer specifying the number of time series. The default is 1. |
| nAR | A nonnegative integer specifying the number of AR lags (corresponds to p in Types of VAR Models). The default is 0. |
| nMA | A nonnegative integer specifying the number of MA lags (corresponds to q in Types of VAR Models). The default is 0. Currently, vgxvarx cannot fit MA matrices, so specifying an nMA greater than 0 does not yield estimated MA matrices. |
| nX | A nonnegative integer specifying the number of regression parameters (corresponds to r in Types of VAR Models). The default is 0. |
| Constant | Additive offset logical indicator. The default is false. |

The following example shows how to specify the model in Specification Structures with Known Parameters, but without explicit parameters:

```
Mdl = vgxset('n',3,'nAR',1,'Constant',true)

Mdl =
  Model: 3-D VAR(1) with Additive Constant
      n: 3
    nAR: 1
    nMA: 0
     nX: 0
      a: []
     AR: {}
      Q: []
```

#### Specification Structures with Selected Parameter Values

You can create a model structure with some known parameters, and have vgxvarx fit the unknown parameters to data. Here are the name-value pairs you can pass to vgxset for requested parameter values.

**Model Parameter Estimation**

| Name | Value |
|---|---|
| asolve | An n-vector of additive offset logical indicators. The default is empty, which means true(n,1). |
| ARsolve | An nAR-element cell array of n-by-n matrices of AR logical indicators. The default is empty, which means an nAR-element cell array of true(n). |
| AR0solve | An n-by-n matrix of AR0 logical indicators. The default is empty, which means false(n). |
| MAsolve | An nMA-element cell array of n-by-n matrices of MA logical indicators. The default is empty, which means false(n). |
| MA0solve | An n-by-n matrix of MA0 logical indicators. The default is empty, which means false(n). |
| bsolve | An nX-vector of regression logical indicators. The default is empty, which means true(n,1). |
| Qsolve | An n-by-n symmetric covariance matrix logical indicator. The default is empty, which means true(n), unless the 'CovarType' option of vgxvarx overrides it. |

Specify a logical 1 (true) for every parameter that you want vgxvarx to estimate. Currently, vgxvarx cannot fit the AR0, MA0, or MA matrices.
Therefore, vgxvarx ignores the AR0solve, MA0solve, and MAsolve indicators. However, you can examine the Example_StructuralParams.m file for an approach to estimating the AR0 and MA0 matrices. Enter help Example_StructuralParams at the MATLAB command line for information. See Lütkepohl [69] Chapter 9 for algorithms for estimating structural models.

Currently, vgxvarx also ignores the Qsolve matrix. vgxvarx can fit either a diagonal or a full Q matrix; see vgxvarx.

This example shows how to specify the model in Specification Structures with Known Parameters, but with requested AR parameters with a diagonal autoregressive structure. The dimensionality of the model is known, as is the parameter vector a, but the autoregressive matrix A1 and covariance matrix Q are not known:

```
Mdl = vgxset('ARsolve',{logical(eye(3))},'a',...
    [.05;0;-.05])

Mdl =
  Model: 3-D VAR(1) with Additive Constant
      n: 3
    nAR: 1
    nMA: 0
     nX: 0
      a: [0.05 0 -0.05] additive constants
     AR: {}
ARsolve: {1x1 cell of logicals} autoregressive lag indicators
      Q: []
```

### Displaying and Changing a Specification Structure

After you set up a model structure, you can examine it in several ways:

- Call the vgxdisp function.
- Double-click the structure in the MATLAB Workspace browser.
- Call the vgxget function.
- Enter Spec at the MATLAB command line, where Spec is the name of the model structure.
- Enter Spec.ParamName at the MATLAB command line, where Spec is the name of the model structure and ParamName is the name of the parameter you want to examine.

You can change any part of a model structure named, for example, Spec, using vgxset as follows:

```
Spec = vgxset(Spec,'ParamName',value,...)
```

This syntax changes only the 'ParamName' parts of the Spec structure.

### Determining an Appropriate Number of Lags

There are two Econometrics Toolbox functions that can help you determine an appropriate number of lags for your models:

- The lratiotest function performs likelihood ratio tests to help identify the appropriate number of lags.
- The aicbic function calculates the Akaike and Bayesian information criteria to determine the minimal appropriate number of required lags.

#### Example: Using the Likelihood Ratio Test to Calculate the Minimal Requisite Lag

lratiotest requires inputs of the loglikelihood of an unrestricted model, the loglikelihood of a restricted model, and the number of degrees of freedom (DoF). DoF is the difference in the number of active parameters between the unrestricted and restricted models. lratiotest returns a Boolean: 1 means reject the restricted model in favor of the unrestricted model; 0 means there is insufficient evidence to reject the restricted model.

In the context of determining an appropriate number of lags, the restricted model has fewer lags, and the unrestricted model has more lags. Otherwise, test models with the same type of fitted parameters (for example, both with full Q matrices, or both with diagonal Q matrices).

- Obtain the loglikelihood (LLF) of a model as the third output of vgxvarx: `[EstSpec,EstStdErrors,LLF,W] = vgxvarx(...)`
- Obtain the number of active parameters in a model as the second output of vgxcount: `[NumParam,NumActive] = vgxcount(Spec)`

For example, suppose you have four fitted models with varying lag structures. The models have loglikelihoods LLF1, LLF2, LLF3, and LLF4, and active parameter counts n1p, n2p, n3p, and n4p. Suppose model 4 has the largest number of lags.
Perform likelihood ratio tests of models 1, 2, and 3 against model 4, as follows:

```
reject1 = lratiotest(LLF4,LLF1,n4p - n1p)
reject2 = lratiotest(LLF4,LLF2,n4p - n2p)
reject3 = lratiotest(LLF4,LLF3,n4p - n3p)
```

If reject1 = 1, you reject model 1 in favor of model 4, and similarly for models 2 and 3. If any of the models have rejectI = 0, you have an indication that you can use fewer lags than in model 4.

#### Example: Using the Akaike Information Criterion to Calculate the Minimal Requisite Lag

aicbic requires inputs of the loglikelihood of a model and of the number of active parameters in the model. It returns the value of the Akaike information criterion; lower values are better than higher values. aicbic accepts vectors of loglikelihoods and vectors of active parameters, and returns a vector of Akaike information criteria, which makes it easy to find the minimum.

- Obtain the loglikelihood (LLF) of a model as the third output of vgxvarx: `[EstSpec,EstStdErrors,LLF,W] = vgxvarx(...)`
- Obtain the number of active parameters in a model as the second output of vgxcount: `[NumParam,NumActive] = vgxcount(Spec)`

For example, suppose you have four fitted models with varying lag structures. The models have loglikelihoods LLF1, LLF2, LLF3, and LLF4, and active parameter counts n1p, n2p, n3p, and n4p. Calculate the Akaike information criteria as follows:

```
AIC = aicbic([LLF1 LLF2 LLF3 LLF4],[n1p n2p n3p n4p])
```

The most suitable model has the lowest value of the Akaike information criterion.
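For reference, the criteria that aicbic computes are the standard ones; these formulas are not spelled out on this page. With LLF the loglikelihood, k the number of active parameters, and T the sample size:

$$\mathrm{AIC} = -2\,\mathrm{LLF} + 2k, \qquad \mathrm{BIC} = -2\,\mathrm{LLF} + k \ln T.$$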
You can specify that the conversion give more terms, or give terms to a specified accuracy. See [69] for more information on these transformations.

Convert a VARMA Model to a VAR Model. 
This example creates a VARMA model, then converts it to a pure VAR model.

Create a VARMA model specification structure.

A1 = [.2 -.1 0;.1 .2 .05;0 .1 .3];
A2 = [.3 0 0;.1 .4 .1;0 0 .2];
A3 = [.4 .1 -.1;.2 -.5 0;.05 .05 .2];
MA1 = .2*eye(3);
MA2 = [.3 .2 .1;.2 .4 0;.1 0 .5];
Spec = vgxset('AR',{A1,A2,A3},'MA',{MA1,MA2})

Spec =
  Model: 3-D VARMA(3,2) with No Additive Constant
      n: 3
    nAR: 3
    nMA: 2
     nX: 0
     AR: {3x1 cell} stable autoregressive process
     MA: {2x1 cell} invertible moving average process
      Q: []

Convert the structure to a pure VAR structure:

SpecAR = vgxar(Spec)

SpecAR =
  Model: 3-D VAR(3) with No Additive Constant
      n: 3
    nAR: 3
    nMA: 0
     nX: 0
     AR: {3x1 cell} unstable autoregressive process
      Q: []

The converted process is unstable; see the AR row. An unstable model could yield inaccurate predictions. Taking more terms in the series gives a stable model:

specAR4 = vgxar(Spec,4)

specAR4 =
  Model: 3-D VAR(4) with No Additive Constant
      n: 3
    nAR: 4
    nMA: 0
     nX: 0
     AR: {4x1 cell} stable autoregressive process
      Q: []

Convert a VARMA Model to a VMA Model. 
This example uses a VARMA model and converts it to a pure VMA model.

Create a VARMA model specification structure.

A1 = [.2 -.1 0;.1 .2 .05;0 .1 .3];
A2 = [.3 0 0;.1 .4 .1;0 0 .2];
A3 = [.4 .1 -.1;.2 -.5 0;.05 .05 .2];
MA1 = .2*eye(3);
MA2 = [.3 .2 .1;.2 .4 0;.1 0 .5];
Spec = vgxset('AR',{A1,A2,A3},'MA',{MA1,MA2})

Spec =
  Model: 3-D VARMA(3,2) with No Additive Constant
      n: 3
    nAR: 3
    nMA: 2
     nX: 0
     AR: {3x1 cell} stable autoregressive process
     MA: {2x1 cell} invertible moving average process
      Q: []

Convert the structure to a pure VAR structure:

SpecAR = vgxar(Spec)

SpecAR =
  Model: 3-D VAR(3) with No Additive Constant
      n: 3
    nAR: 3
    nMA: 0
     nX: 0
     AR: {3x1 cell} unstable autoregressive process
      Q: []

Convert the model specification structure Spec to a pure MA structure:

SpecMA = vgxma(Spec)

SpecMA =
  Model: 3-D VMA(3) with No Additive Constant
      n: 3
    nAR: 0
    nMA: 3
     nX: 0
     MA: {3x1 cell} non-invertible moving average process
      Q: []

Notice that the pure VMA process has more MA terms than the original process. The number is the maximum of nMA and nAR, and nAR = 3.

The converted VMA model is not invertible; see the MA row. A noninvertible model can yield inaccurate predictions. Taking more terms in the series results in an invertible model.

specMA4 = vgxma(Spec,4)

specMA4 =
  Model: 3-D VMA(4) with No Additive Constant
      n: 3
    nAR: 0
    nMA: 4
     nX: 0
     MA: {4x1 cell} invertible moving average process
      Q: []

Converting the resulting VMA model to a pure VAR model results in the same VAR(3) model as SpecAR.

SpecAR2 = vgxar(SpecMA);
vgxdisp(SpecAR,SpecAR2)

Model 1: 3-D VAR(3) with No Additive Constant
Conditional mean is not AR-stable and is MA-invertible
Model 2: 3-D VAR(3) with No Additive Constant
Conditional mean is not AR-stable and is MA-invertible
   Parameter       Model 1        Model 2
  -------------- -------------- --------------
   AR(1)(1,1)     0.4            0.4
        (1,2)    -0.1           -0.1
        (1,3)    -0             -0
        (2,1)     0.1            0.1
        (2,2)     0.4            0.4
        (2,3)     0.05           0.05
        (3,1)    -0             -0
        (3,2)     0.1            0.1
        (3,3)     0.5            0.5
   AR(2)(1,1)     0.52           0.52
        (1,2)     0.22           0.22
        (1,3)     0.1            0.1
        (2,1)     0.28           0.28
        (2,2)     0.72           0.72
        (2,3)     0.09           0.09
        (3,1)     0.1            0.1
        (3,2)    -0.02          -0.02
        (3,3)     0.6            0.6
   AR(3)(1,1)     0.156          0.156
        (1,2)    -0.004         -0.004
        (1,3)    -0.18          -0.18
        (2,1)     0.024          0.024
        (2,2)    -0.784         -0.784
        (2,3)    -0.038         -0.038
        (3,1)    -0.01          -0.01
        (3,2)     0.014          0.014
        (3,3)    -0.17          -0.17
   Q(:,:)         []             []

Conversion Types and Accuracy. 
Some conversions occur when explicitly requested, such as those initiated by calls to vgxar and vgxma. Other conversions occur automatically as needed for calculations. For example, vgxpred internally converts models to VMA models to calculate forecast statistics.

By default, conversions give terms up to the largest lag present in the model. However, for more accuracy in conversion, you can specify that the conversion use more terms. You can also specify that it continue until a residual term is below a threshold you set. The syntax is

SpecAR = vgxar(Spec,nAR,ARlag,Cutoff)
SpecMA = vgxma(Spec,nMA,MAlag,Cutoff)

• nMA and nAR represent the number of terms in the series.
• ARlag and MAlag are vectors of particular lags that you want in the converted model.
• Cutoff is a positive parameter that truncates the series if the norm of a converted term is smaller than Cutoff. Cutoff is 0 by default.

For details, see the function reference pages for vgxar and vgxma.

Fitting Models to Data

The vgxvarx function performs parameter estimation. vgxvarx only estimates parameters for VAR and VARX models. In other words, vgxvarx does not estimate moving average matrices, which appear, for example, in VMA and VARMA models. For definitions of these terms, see Types of VAR Models.

The vgxar function converts a VARMA model to a VAR model. Currently, it does not handle VARMAX models.

You have two choices in fitting parameters to a VARMA model or VARMAX model:
• Set the vgxvarx 'IgnoreMA' parameter to 'yes' (the default is 'no'). In this case vgxvarx ignores VMA parameters, and fits the VARX parameters.
• Convert a VARMA model to a VAR model using vgxar. Then fit the resulting VAR model using vgxvarx.

Each of these options is effective on some data. Try both if you have VMA terms in your model. See Fit a VARMA Model for an example showing both options.

Fit a VAR Model. 
This example uses two series: the consumer price index (CPI) and the unemployment rate (UNRATE) from the data set Data_USEconModel.

Obtain the two time series, and convert them for stationarity:

load Data_USEconModel
cpi = DataTable.CPIAUCSL;
cpi = log(cpi);
dCPI = diff(cpi);
unem = DataTable.UNRATE;
Y = [dCPI,unem(2:end)];

Create a VAR model:

Spec = vgxset('n',2,'nAR',4,'Constant',true)

Spec =
  Model: 2-D VAR(4) with Additive Constant
      n: 2
    nAR: 4
    nMA: 0
     nX: 0
      a: []
     AR: {}
      Q: []

Fit the model to the data using vgxvarx:

[EstSpec,EstStdErrors,logL,W] = vgxvarx(Spec,Y);
vgxdisp(EstSpec)

Model  : 2-D VAR(4) with Additive Constant
Conditional mean is AR-stable and is MA-invertible
a Constant:
   0.00184568
   0.315567
AR(1) Autoregression Matrix:
   0.308635    -0.0032011
  -4.48152      1.34343
AR(2) Autoregression Matrix:
   0.224224     0.00124669
   7.19015     -0.26822
AR(3) Autoregression Matrix:
   0.353274     0.00287036
   1.48726     -0.227145
AR(4) Autoregression Matrix:
  -0.0473456   -0.000983119
   8.63672      0.0768312
Q Innovations Covariance:
   3.51443e-05 -0.000186967
  -0.000186967  0.116697

Fit a VARMA Model. 
This example uses artificial data to generate a time series, then shows two ways of fitting a VARMA model to the series.
Specify the model:

AR1 = [.3 -.1 .05;.1 .2 .1;-.1 .2 .4];
AR2 = [.1 .05 .001;.001 .1 .01;-.01 -.01 .2];
Q = [.2 .05 .02;.05 .3 .1;.02 .1 .25];
MA1 = [.5 .2 .1;.1 .6 .2;0 .1 .4];
MA2 = [.2 .1 .1; .05 .1 .05;.02 .04 .2];
Spec = vgxset('AR',{AR1,AR2},'Q',Q,'MA',{MA1,MA2})

Spec =
  Model: 3-D VARMA(2,2) with No Additive Constant
      n: 3
    nAR: 2
    nMA: 2
     nX: 0
     AR: {2x1 cell} stable autoregressive process
     MA: {2x1 cell} invertible moving average process
      Q: [3x3] covariance matrix

Generate a time series using vgxsim:

YF = [100 50 20;110 52 22;119 54 23]; % starting values
rng(1); % For reproducibility
Y = vgxsim(Spec,190,[],YF);

Fit the data to a VAR model by calling vgxvarx with the 'IgnoreMA' option:

estSpec = vgxvarx(Spec,Y(3:end,:),[],Y(1:2,:),'IgnoreMA','yes');

Compare the estimated model with the original:

vgxdisp(Spec,estSpec)

Model 1: 3-D VARMA(2,2) with No Additive Constant
Conditional mean is AR-stable and is MA-invertible
Model 2: 3-D VAR(2) with No Additive Constant
Conditional mean is AR-stable and is MA-invertible
   Parameter       Model 1        Model 2
  -------------- -------------- --------------
   AR(1)(1,1)     0.3            0.723964
        (1,2)    -0.1            0.119695
        (1,3)     0.05           0.10452
        (2,1)     0.1            0.0828041
        (2,2)     0.2            0.788177
        (2,3)     0.1            0.299648
        (3,1)    -0.1           -0.138715
        (3,2)     0.2            0.397231
        (3,3)     0.4            0.748157
   AR(2)(1,1)     0.1           -0.126833
        (1,2)     0.05          -0.0690256
        (1,3)     0.001         -0.118524
        (2,1)     0.001          0.0431623
        (2,2)     0.1           -0.265387
        (2,3)     0.01          -0.149646
        (3,1)    -0.01           0.107702
        (3,2)    -0.01          -0.304243
        (3,3)     0.2            0.0165912
   MA(1)(1,1)     0.5
        (1,2)     0.2
        (1,3)     0.1
        (2,1)     0.1
        (2,2)     0.6
        (2,3)     0.2
        (3,1)     0
        (3,2)     0.1
        (3,3)     0.4
   MA(2)(1,1)     0.2
        (1,2)     0.1
        (1,3)     0.1
        (2,1)     0.05
        (2,2)     0.1
        (2,3)     0.05
        (3,1)     0.02
        (3,2)     0.04
        (3,3)     0.2
   Q(1,1)         0.2            0.193553
   Q(2,1)         0.05           0.0408221
   Q(2,2)         0.3            0.252461
   Q(3,1)         0.02           0.00690626
   Q(3,2)         0.1            0.0922074
   Q(3,3)         0.25           0.306271

The estimated Q matrix is close to the original Q matrix. However, the estimated AR terms are not close to the original AR terms. Specifically, nearly all the AR(2) coefficients are of the opposite sign, and most AR(1) coefficients are off by about a factor of 2.

Alternatively, before fitting the model, convert it to a pure AR model. To do this, specify the model and generate a time series as above. Then, convert the model to a pure AR model:

specAR = vgxar(Spec);

Fit the converted model to the data:

estSpecAR = vgxvarx(specAR,Y(3:end,:),[],Y(1:2,:));

Compare the fitted model to the original model:

vgxdisp(specAR,estSpecAR)

Model 1: 3-D VAR(2) with No Additive Constant
Conditional mean is AR-stable and is MA-invertible
Model 2: 3-D VAR(2) with No Additive Constant
Conditional mean is AR-stable and is MA-invertible
   Parameter       Model 1        Model 2
  -------------- -------------- --------------
   AR(1)(1,1)     0.8            0.723964
        (1,2)     0.1            0.119695
        (1,3)     0.15           0.10452
        (2,1)     0.2            0.0828041
        (2,2)     0.8            0.788177
        (2,3)     0.3            0.299648
        (3,1)    -0.1           -0.138715
        (3,2)     0.3            0.397231
        (3,3)     0.8            0.748157
   AR(2)(1,1)    -0.13          -0.126833
        (1,2)    -0.09          -0.0690256
        (1,3)    -0.114         -0.118524
        (2,1)    -0.129          0.0431623
        (2,2)    -0.35          -0.265387
        (2,3)    -0.295         -0.149646
        (3,1)     0.03           0.107702
        (3,2)    -0.17          -0.304243
        (3,3)     0.05           0.0165912
   Q(1,1)         0.2            0.193553
   Q(2,1)         0.05           0.0408221
   Q(2,2)         0.3            0.252461
   Q(3,1)         0.02           0.00690626
   Q(3,2)         0.1            0.0922074
   Q(3,3)         0.25           0.306271

The model coefficients between the pure AR models are closer than between the original VARMA model and the fitted AR model. Most model coefficients are within 20% of the original. Notice, too, that estSpec and estSpecAR are identical. This is because both are AR(2) models fitted to the same data series.

How vgxvarx Works. 
vgxvarx finds maximum likelihood estimators of the AR and Q matrices, and of the a and b vectors if present. For VAR models, if the response series do not contain NaN values, vgxvarx uses a direct solution algorithm that requires no iterations. For VARX models, or if the response data contain missing values, vgxvarx optimizes the likelihood using the expectation-conditional-maximization (ECM) algorithm. The iterations usually converge quickly, unless two or more exogenous data streams are proportional to each other. In that case, there is no unique maximum likelihood estimator, and the iterations might not converge. You can set the maximum number of iterations with the MaxIter parameter, which has a default value of 1000.

vgxvarx does not support exogenous series containing NaN values.

vgxvarx calculates the loglikelihood of the data, giving it as an output of the fitted model. Use this output in testing the quality of the model. For example, see Determining an Appropriate Number of Lags and Examining the Stability of a Fitted Model.

Examining the Stability of a Fitted Model

When you enter the name of a fitted model at the command line, you obtain a summary. This summary contains a report on the stability of the VAR part of the model, and the invertibility of the VMA part. You can also find whether a model is stable or invertible by entering:

[isStable,isInvertible] = vgxqual(Spec)

This gives a Boolean value of 1 for isStable if the model is stable, and a Boolean value of 1 for isInvertible if the model is invertible. This stability or invertibility does not take into account exogenous terms, which can disrupt the stability of a model.

Stable, invertible models give reliable results, while unstable or noninvertible ones might not. Stability and invertibility are equivalent to all eigenvalues of the associated lag operators having modulus less than 1. In fact, vgxqual evaluates these quantities by calculating eigenvalues. For more information, see the vgxqual function reference page or Hamilton [48].

VAR Model Forecasting, Simulation, and Analysis

VAR Forecasting

When you have models with parameters (known or estimated), you can examine the predictions of the models. For information on specifying models, see Model Specification Structures. For information on calibrating models, see VAR Model Estimation.

The main methods of forecasting are:
• Generating forecasts with error bounds using vgxpred
• Generating simulations with vgxsim
• Generating sample paths with vgxproc

These functions base their forecasts on a model specification and initial data. The functions differ in their innovations processes:
• vgxpred assumes zero innovations. Therefore, vgxpred yields a deterministic forecast.
• vgxsim assumes the innovations are jointly normal with covariance matrix Q. vgxsim yields pseudorandom (Monte Carlo) sample paths.
• vgxproc uses a separate input for the innovations process. vgxproc yields a sample path that is deterministically based on the innovations process.

vgxpred is faster and takes less memory than generating many sample paths using vgxsim. However, vgxpred is not as flexible as vgxsim. For example, suppose you transform some time series before making a model, and want to undo the transformation when examining forecasts. The error bounds given by transforms of vgxpred error bounds are not valid bounds. In contrast, the error bounds given by the statistics of transformed simulations are valid.

Forecast a VAR Model. 
This example shows how to use vgxpred to forecast a VAR model.
vgxpred enables you to generate forecasts with error estimates. vgxpred requires:
• A fully specified model (for example, impSpec in what follows)
• The number of periods for the forecast (for example, FT in what follows)

vgxpred optionally takes:
• An exogenous data series
• A presample time series (e.g., Y(end-3:end,:) in what follows)
• Extra paths

Load the Data_USEconModel data set. This example uses two time series: the logarithm of real GDP, and the real 3-month T-bill rate, both differenced to be approximately stationary. Suppose that a VAR(4) model is appropriate to describe the time series.

load Data_USEconModel
DEF = log(DataTable.CPIAUCSL);
GDP = log(DataTable.GDP);
rGDP = diff(GDP - DEF); % Real GDP is GDP - deflation
TB3 = 0.01*DataTable.TB3MS;
dDEF = 4*diff(DEF); % Scaling
rTB3 = TB3(2:end) - dDEF; % Real interest is deflated
Y = [rGDP,rTB3];

Fit a VAR(4) model specification:

Spec = vgxset('n',2,'nAR',4,'Constant',true);
impSpec = vgxvarx(Spec,Y(5:end,:),[],Y(1:4,:));
impSpec = vgxset(impSpec,'Series',...
    {'Transformed real GDP','Transformed real 3-mo T-bill rate'});

Predict the evolution of the time series:

FDates = datenum({'30-Jun-2009'; '30-Sep-2009'; '31-Dec-2009'; '31-Mar-2010'; '30-Jun-2010'; '30-Sep-2010'; '31-Dec-2010'; '31-Mar-2011'; '30-Jun-2011'; '30-Sep-2011'; '31-Dec-2011'; '31-Mar-2012'; '30-Jun-2012'; '30-Sep-2012'; '31-Dec-2012'; '31-Mar-2013'; '30-Jun-2013'; '30-Sep-2013'; '31-Dec-2013'; '31-Mar-2014'; '30-Jun-2014' });
FT = numel(FDates);
[Forecast,ForecastCov] = vgxpred(impSpec,FT,[],...
    Y(end-3:end,:));

View the forecast using vgxplot:

vgxplot(impSpec,Y(end-10:end,:),Forecast,ForecastCov);

Forecast a VAR Model Using Monte Carlo Simulation. 
This example shows how to use Monte Carlo simulation via vgxsim to forecast a VAR model.

vgxsim enables you to generate simulations of time series based on your model. If you have a trustworthy model structure, you can use these simulations as sample forecasts. vgxsim requires:
• A model (impSpec in what follows)
• The number of periods for the forecast (FT in what follows)

vgxsim optionally takes:
• An exogenous data series
• A presample time series (Y(end-3:end,:) in what follows)
• Presample innovations
• The number of realizations to simulate (2000 in what follows)

Load the Data_USEconModel data set. This example uses two time series: the logarithm of real GDP, and the real 3-month T-bill rate, both differenced to be approximately stationary. For illustration, a VAR(4) model describes the time series.

load Data_USEconModel
DEF = log(DataTable.CPIAUCSL);
GDP = log(DataTable.GDP);
rGDP = diff(GDP - DEF); % Real GDP is GDP - deflation
TB3 = 0.01*DataTable.TB3MS;
dDEF = 4*diff(DEF); % Scaling
rTB3 = TB3(2:end) - dDEF; % Real interest is deflated
Y = [rGDP,rTB3];

Fit a VAR(4) model specification:

Spec = vgxset('n',2,'nAR',4,'Constant',true);
impSpec = vgxvarx(Spec,Y(5:end,:),[],Y(1:4,:));
impSpec = vgxset(impSpec,'Series',...
    {'Transformed real GDP','Transformed real 3-mo T-bill rate'});

Define the forecast horizon.
FDates = datenum({'30-Jun-2009'; '30-Sep-2009'; '31-Dec-2009'; '31-Mar-2010'; '30-Jun-2010'; '30-Sep-2010'; '31-Dec-2010'; '31-Mar-2011'; '30-Jun-2011'; '30-Sep-2011'; '31-Dec-2011'; '31-Mar-2012'; '30-Jun-2012'; '30-Sep-2012'; '31-Dec-2012'; '31-Mar-2013'; '30-Jun-2013'; '30-Sep-2013'; '31-Dec-2013'; '31-Mar-2014'; '30-Jun-2014' });
FT = numel(FDates);

Simulate the model for FT steps, replicated 2000 times:

rng(1); % For reproducibility
Ysim = vgxsim(impSpec,FT,[],Y(end-3:end,:),[],2000);

Calculate the mean and standard deviation of the simulated series:

Ymean = mean(Ysim,3); % Calculate means
Ystd = std(Ysim,0,3); % Calculate std deviations

Plot the means +/- 1 standard deviation for the simulated series:

subplot(2,1,1)
plot(dates(end-10:end),Y(end-10:end,1),'k')
hold('on')
plot([dates(end);FDates],[Y(end,1);Ymean(:,1)],'r')
plot([dates(end);FDates],[Y(end,1);Ymean(:,1)]+[0;Ystd(:,1)],'b')
plot([dates(end);FDates],[Y(end,1);Ymean(:,1)]-[0;Ystd(:,1)],'b')
datetick('x')
title('Transformed real GDP')
subplot(2,1,2)
plot(dates(end-10:end),Y(end-10:end,2),'k')
hold('on')
axis([dates(end-10),FDates(end),-.1,.1]);
plot([dates(end);FDates],[Y(end,2);Ymean(:,2)],'r')
plot([dates(end);FDates],[Y(end,2);Ymean(:,2)]+[0;Ystd(:,2)],'b')
plot([dates(end);FDates],[Y(end,2);Ymean(:,2)]-[0;Ystd(:,2)],'b')
datetick('x')
title('Transformed real 3-mo T-bill rate')

How vgxpred and vgxsim Work. 
vgxpred generates two quantities:
• A deterministic forecast time series based on 0 innovations
• Time series of forecast covariances based on the Q matrix

The simulations for models with VMA terms use presample innovation terms. Presample innovation terms are values of εt for times before the forecast period that affect the MA terms. For definitions of the terms MA, Q, and εt, see Types of VAR Models. If you do not provide all requisite presample innovation terms, vgxpred assumes the value 0 for missing terms.

vgxsim generates random time series based on the model using normal random innovations distributed with Q covariances. The simulations of models with MA terms use presample innovation terms. If you do not provide all requisite presample innovation terms, vgxsim assumes the value 0 for missing terms.

Data Scaling

If you scaled any time series before fitting a model, you can unscale the resulting time series to understand its predictions more easily.
• If you scaled a series with log, transform predictions of the corresponding model with exp.
• If you scaled a series with diff(log), transform predictions of the corresponding model by applying cumsum, then exp. cumsum is the inverse of diff; it calculates cumulative sums. As in integration, you must choose an appropriate additive constant for the cumulative sum. For example, take the log of the final entry in the corresponding data series, and use it as the first term in the series before applying cumsum.

Calculating Impulse Responses

You can examine the impulse responses of models with the vgxproc function. An impulse response is the deterministic response of a time series model to an innovations process that has the value of one standard deviation in one component at the initial time, and zeros in all other components and times. vgxproc simulates the evolution of a time series model from a given innovations process. Therefore, vgxproc is appropriate for examining impulse responses. The only difficulty in using vgxproc is determining exactly what is "the value of one standard deviation in one component at the initial time."
This value can mean different things depending on your model.
• For a structural model, B0 is usually a known diagonal matrix, and Q is an identity matrix. In this case, the impulse response to component i is the square root of B0(i,i).
• For a nonstructural model, there are several choices. The simplest choice, though not necessarily the most accurate, is to take component i as the square root of Q(i,i). Other possibilities include taking the Cholesky decomposition of Q, or diagonalizing Q and taking the square root of the diagonal matrix.

Generate Impulse Responses for a VAR Model. 
This example shows how to generate impulse responses of an interest rate shock on real GDP using vgxproc.

Load the Data_USEconModel data set. This example uses two time series: the logarithm of real GDP, and the real 3-month T-bill rate, both differenced to be approximately stationary. Suppose that a VAR(4) model is appropriate to describe the time series.

load Data_USEconModel
DEF = log(DataTable.CPIAUCSL);
GDP = log(DataTable.GDP);
rGDP = diff(GDP - DEF); % Real GDP is GDP - deflation
TB3 = 0.01*DataTable.TB3MS;
dDEF = 4*diff(DEF); % Scaling
rTB3 = TB3(2:end) - dDEF; % Real interest is deflated
Y = [rGDP,rTB3];

Define the forecast horizon.

FDates = datenum({'30-Jun-2009'; '30-Sep-2009'; '31-Dec-2009'; '31-Mar-2010'; '30-Jun-2010'; '30-Sep-2010'; '31-Dec-2010'; '31-Mar-2011'; '30-Jun-2011'; '30-Sep-2011'; '31-Dec-2011'; '31-Mar-2012'; '30-Jun-2012'; '30-Sep-2012'; '31-Dec-2012'; '31-Mar-2013'; '30-Jun-2013'; '30-Sep-2013'; '31-Dec-2013'; '31-Mar-2014'; '30-Jun-2014' });
FT = numel(FDates);

Fit a VAR(4) model specification:

Spec = vgxset('n',2,'nAR',4,'Constant',true);
impSpec = vgxvarx(Spec,Y(5:end,:),[],Y(1:4,:));
impSpec = vgxset(impSpec,'Series',...
    {'Transformed real GDP','Transformed real 3-mo T-bill rate'});

Generate the innovations processes both with and without an impulse (shock):

W0 = zeros(FT, 2); % Innovations without a shock
W1 = W0;
W1(1,2) = sqrt(impSpec.Q(2,2)); % Innovations with a shock

Generate the processes with and without the shock:

Yimpulse = vgxproc(impSpec,W1,[],Y); % Process with shock
Ynoimp = vgxproc(impSpec,W0,[],Y); % Process with no shock

Undo the scaling for the GDP processes:

Yimp1 = exp(cumsum(Yimpulse(:,1))); % Undo scaling
Ynoimp1 = exp(cumsum(Ynoimp(:,1)));

Compute and plot the relative difference between the calculated GDPs:

RelDiff = (Yimp1 - Ynoimp1) ./ Yimp1;
plot(FDates,100*RelDiff);dateaxis('x',12)
title(...
'Impact of Interest Rate Shock on Real Gross Domestic Product')
ylabel('% Change')

The graph shows that an increased interest rate causes a dip in the real GDP for a short time. Afterwards, the real GDP begins to climb again, reaching its former level in about 1 year.

Multivariate Time Series Models with Regression Terms

Incorporate feedback from exogenous predictors, or study their linear associations with the response series, by including a regression component in multivariate time series models. By order of increasing complexity, examples of multivariate time series regression models include:
• Modeling the effects of an intervention, or including shared intercepts among several responses. In these cases, the exogenous series are indicator variables.
• Modeling the contemporaneous linear associations of a subset of exogenous series with each response. Applications include CAPM analysis and studying the effects of prices of items on their demand. These applications are examples of seemingly unrelated regression (SUR).
For more details, see Implement Seemingly Unrelated Regression Analyses and Estimate the Capital Asset Pricing Model Using SUR.
• Modeling the linear associations between contemporaneous and lagged exogenous series and the response, as part of a multivariate distributed lag model. Applications include determining how a change in monetary growth affects real gross domestic product (GDP) and gross national income (GNI).
• Any combination of SUR and the distributed lag model that includes the lagged effects of responses, also known as simultaneous equation models. VARMAX modeling is an example (see Types of VAR Models).

The general equation for a multivariate, time series, regression model is

$y_t = a + X_t b + \sum_{i=1}^{p} A_i y_{t-i} + \sum_{j=1}^{q} B_j \epsilon_{t-j} + \epsilon_t,$

where, in particular:
• Xt is an n-by-r design matrix.
• Row j of Xt contains the observations of the regression variables that correspond to the period t observation of response series j.
• Column k of Xt corresponds to the period t observations of regression variable k. (There are r regression variables composed from the exogenous series. For details, see Design Matrix Structure for Including Exogenous Data.)
• Xt can contain lagged exogenous series.
• b is an r-by-1 vector of regression coefficients corresponding to the r regression variables. The column entries of Xt share a common regression coefficient for all t. That is, the regression component of the response series (yt = [y1t,y2t,...,ynt]′) for period t is

$\left[\begin{array}{c} X(1,1)_t b_1 + \cdots + X(1,r)_t b_r \\ X(2,1)_t b_1 + \cdots + X(2,r)_t b_r \\ \vdots \\ X(n,1)_t b_1 + \cdots + X(n,r)_t b_r \end{array}\right].$

• a is an n-by-1 vector of intercepts corresponding to the n response series.

Design Matrix Structure for Including Exogenous Data

Overview. 
For maximum flexibility, construct a design matrix that linearly associates the exogenous series with each response series. It helps to think of the design matrix as a vector of T smaller, block design matrices. The rows of block design matrix t correspond to observation t of the response series, and the columns correspond to the regression coefficients of the regression variables.

vgxvarx estimates the regression component of multivariate time series models using the Statistics Toolbox™ function mvregress. Therefore, you must pass the design matrix as a T-by-1 cell vector, where cell t is the n-by-r numeric block design matrix at period t, n is the number of response series, and r is the number of regression variables in the design. That is, the entire design matrix is the cell vector {X1; X2; ...; XT}.

At each time t, the n-by-r matrix Xt multiplies the r-by-1 vector b, yielding an n-by-1 vector of linear combinations. This setup implies that:
• The number of regression variables might differ from the number of exogenous series. That is, you can associate different sets of exogenous series among response series.
• Each block design matrix in the cell vector must have the same dimensionality. That is, the multivariate time series framework does not accommodate time-varying models. The state-space framework does accommodate time-varying, multivariate time series models. For details, see ssm.

vgxinfer, vgxpred, vgxproc, and vgxsim accommodate multiple response paths. You can associate a common design matrix for all response paths by passing in a cell vector of design matrices.
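For instance, here is a minimal sketch of building such a cell vector for a single path, assuming every exogenous series enters every response equation. The variable names and sizes are illustrative only; the same kron/mat2cell pattern appears in the SUR examples later in this section.

% Illustrative sizes: T periods, n response series, nExo exogenous series
T = 50; n = 3; nExo = 2;
X = randn(T,nExo);                     % hypothetical exogenous data
Design = kron(X,eye(n));               % (T*n)-by-(n*nExo) stacked block design matrix
CellX = mat2cell(Design,n*ones(T,1));  % T-by-1 cell vector; cell t is n-by-(n*nExo)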
You can also associate a different design matrix to each response path by passing in a T-by-M cell matrix of design matrices, where M is the number of response paths and cell (t,m) is an n-by-r numeric design matrix at period t (denoted Xt(m)). That is, the entire design matrix for all paths is the cell matrix whose column m is the cell vector {X1(m); X2(m); ...; XT(m)}.

For more details on how to structure design matrices for mvregress, see Set Up Multivariate Regression Problems.

Examples of Design Matrix Structures.
• Intervention model — Suppose a tariff is imposed over some time period. You suspect that this tariff affects GNP and three other economic time series. To determine the effects of the tariff, use an intervention model, where the response series are the four econometric series, and the exogenous, regression variables are indicator variables representing the presence of the tariff in the system. Here are two ways of including the exogenous tariffs.
• Responses share a regression coefficient — Each block design matrix (or cell) consists of either ones(4,1) or zeros(4,1), where a 1 indicates that the tariff is in the system, and 0 otherwise.
• Responses do not share regression coefficients — Each block design matrix (or cell) consists of either eye(4) or zeros(4). In this case, the sole exogenous, indicator variable expands to four regression variables.

The advantage of the larger (replicated) formulation is that it allows vgxvarx to estimate the influence of the tariff on each response series separately. The resulting estimated regression coefficient vector $\hat{b}$ can have differing values for each component. The different values reflect the different direct influences of the tariff on each time series.

Once you have the entire design matrix (denoted Design), you must put each block design matrix that composes Design into the respective cells of a T-by-1 cell vector (or cell matrix for multiple paths). To do this, use mat2cell. Specify to break up Design into T, 4-by-size(Design,2) block design matrices using

DesignCell = mat2cell(Design,4*ones(T,1),size(Design,2))

DesignCell is the properly structured regression variables that you can now pass into vgxvarx to estimate the model parameters.
• SUR that associates all exogenous series to each response series — If the columns of a matrix X are the exogenous series, then, to associate all exogenous series to each response:
1. Create the entire design matrix by expanding X using its Kronecker product with the n-by-n identity matrix, e.g., if there are four responses, then the entire design matrix is
Design = kron(X,eye(4));
2. Put each block design matrix into the cells of a T-by-1 cell vector using mat2cell. Each block matrix has four rows and size(Design,2) columns.
• Linear trend — You can model linear trends in your data by including the exogenous matrix eye(n)*t in cell t of the entire design matrix.

Estimation of Models that Include Exogenous Data

Before you estimate a multivariate, time series, regression model using vgxvarx, specify the number of regression variables in the created model. (For details on specifying a multivariate time series model using vgxset, see Model Specification Structures.) Recall from Design Matrix Structure for Including Exogenous Data that the number of regression variables in the model is the number of columns in each design matrix, denoted r.
You can indicate the number of regression variables several ways:
• For a new model:
• Specify the nX name-value pair argument as the number of regression variables when you create the model using vgxset.
• Specify the bsolve name-value pair argument as the logical vector true(r,1).
vgxset creates a multivariate time series model object, and fills in the appropriate properties. In what follows, Mdl denotes a created multivariate time series model in the Workspace.
• For a model in the Workspace, set either of the nX or bsolve properties to r or true(r,1), respectively, using dot notation, e.g., Mdl.nX = r.

You can also exclude a subset of regression coefficients from being estimated. For example, to exclude the first regression coefficient, set 'bsolve',[false(1);true(r-1,1)]. Be aware that if the model is new (i.e., Mdl.b = []), then the software sets any coefficient it doesn't estimate to 0. To fix a coefficient to a value:
1. Enter Mdl.b = ones(r,1);
2. Specify values for the elements you want to hold fixed during estimation in the b property. For example, to specify that the first regression coefficient should be held at 2 during estimation, enter Mdl.b(1) = 2;
3. Enter Mdl.bsolve = [false(1);true(r-1,1)];

The software does not estimate regression intercepts (a) by default. To include a different regression intercept for each response series, specify 'Constant',true when you create the model using vgxset, or set the Constant property of a model in the Workspace to true using dot notation. Alternatively, you can specify 'asolve',true(n,1) or set the asolve property to true(n,1). To exclude a regression intercept from estimation, follow the same steps as for excluding a regression coefficient.

To estimate the regression coefficients, pass the model, response data, and the cell vector of design matrices (see Design Matrix Structure for Including Exogenous Data) to vgxvarx. For details on how vgxvarx works when it estimates regression coefficients, see How vgxvarx Works. Be aware that the presence of exogenous series in a multivariate time series model might destabilize the fitted model.

Implement Seemingly Unrelated Regression Analyses

This example shows how to prepare exogenous data for several seemingly unrelated regression (SUR) analyses. The response and exogenous series are random paths from a standard Gaussian distribution.

In seemingly unrelated regression (SUR), each response variable is a function of a subset of the exogenous series, but not of any endogenous variable. That is, for $j = 1,...,n$ and $t = 1,...,T$, the model for response $j$ at period $t$ is

$y_{jt} = a_j + x_{jt}' b_j + \epsilon_{jt}.$

The indices of the regression coefficients and exogenous predictors indicate that:
• You can associate each response with a different subset of exogenous predictors.
• The response series might not share intercepts or regression coefficients.

SUR accommodates intra-period innovation heteroscedasticity and correlation, but inter-period innovation independence and homoscedasticity. That is, $E[\epsilon_t \epsilon_t'] = \Sigma$ for all $t$, and $E[\epsilon_t \epsilon_s'] = 0$ for all $t \ne s$.

Simulate Data from the True Model

Suppose that the true model is

$y_t = a + X_t b + \epsilon_t,$

where the innovations $\epsilon_1,...,\epsilon_T$ are multivariate Gaussian random variables, each having mean zero and jointly having the covariance matrix InnovCov specified in the code below. Suppose that the paths represent different econometric measurements, e.g. stock returns.

Simulate four exogenous predictor paths from the standard Gaussian distribution.
rng(1); % For reproducibility
n = 3; % Number of response series
nExo = 4; % Number of exogenous series
T = 100;
X = randn(100,nExo);

The multivariate time series analysis functions of Econometrics Toolbox™ require you to input the exogenous data in a T-by-1 cell vector. Cell t of the cell vector is a design matrix indicating the linear relationship of the exogenous variables with each response series at period t. Specifically, each design matrix in the cell array:
• Has n rows, each corresponding to a response series.
• Has n*nExo columns since, in this example, all exogenous variables are in the regression component of each response series.

To create the cell vector of design matrices for this case, first expand the exogenous predictor data by finding its Kronecker product with the n-by-n identity matrix.

ExpandX1 = kron(X,eye(n));
r1 = size(ExpandX1,2); % Number of regression variables

ExpandX1 is an (n*T)-by-r1 numeric matrix formed by multiplying each element of X by the n-by-n identity matrix, and then putting the product in the corresponding position of a T-by-nExo block matrix of n-by-n matrices.

Create the cell vector of design matrices by putting each consecutive n-by-r1 block matrix of ExpandX1 into the cells of a T-by-1 cell vector. Verify that one of the cells contains the expected design matrix (e.g., the third cell).

CellX1 = mat2cell(ExpandX1,n*ones(T,1));
CellX1{3}
X(3,:)

ans =
  Columns 1 through 7
   -0.7585         0         0    1.9302         0         0    1.8562
         0   -0.7585         0         0    1.9302         0         0
         0         0   -0.7585         0         0    1.9302         0
  Columns 8 through 12
         0         0    1.3411         0         0
    1.8562         0         0    1.3411         0
         0    1.8562         0         0    1.3411
ans =
   -0.7585    1.9302    1.8562    1.3411

In period 3, all observed predictors are associated with each response series.

Create a multivariate time series model object that characterizes the true model using vgxset.

aTrue = [1; -1; 0.5];
bTrue = [2; 4; -2; -1.5; 2.5; 0.5; 0.5; -1.75; -1.5; 0.75; -0.05; 0.7];
InnovCov = [1 0.5 -0.05; 0.5 1 0.25; -0.05 0.25 1];
TrueMdl = vgxset('n',n,'b',bTrue,'a',aTrue,'Q',InnovCov)
Y = vgxsim(TrueMdl,100,CellX1);

TrueMdl =
  Model: 3-D VARMAX(0,0,12) with Additive Constant
      n: 3
    nAR: 0
    nMA: 0
     nX: 12
      a: [1 -1 0.5] additive constants
      b: [12x1] regression coefficients
      Q: [3x3] covariance matrix

SUR Using All Predictors for Each Response Series

Create a multivariate time series model suitable for SUR using vgxset. You must specify the number of response series ('n'), the number of regression variables ('nX'), and whether to include different regression intercepts for each response series ('Constant').

Mdl1 = vgxset('n',n,'nX',r1,'Constant',true)

Mdl1 =
  Model: 3-D VARMAX(0,0,12) with Additive Constant
      n: 3
    nAR: 0
    nMA: 0
     nX: 12
      a: []
      b: []
      Q: []

Mdl1 is a multivariate time series model object. Unlike TrueMdl, none of the coefficients, intercepts, and intra-period covariance matrix have values. Therefore, Mdl1 is suitable for estimation.

Estimate the regression coefficients using vgxvarx. Extract the residuals. Display the estimated model using vgxdisp.

[EstMdl1,~,~,W] = vgxvarx(Mdl1,Y,CellX1);
vgxdisp(EstMdl1)

Model  : 3-D VARMAX(0,0,12) with Additive Constant
a Constant:
   0.978981
  -1.06438
   0.453232
b Regression Parameter:
   1.76856
   3.85757
  -2.20089
  -1.55085
   2.44067
   0.464144
   0.69588
  -1.71386
  -1.6414
   0.670357
  -0.0564374
   0.565809
Q Innovations Covariance:
   1.38503     0.667301   -0.159136
   0.667301    0.973123    0.216492
  -0.159136    0.216492    0.993384

EstMdl1 is a multivariate time series model containing the estimated parameters. W is a T-by-n matrix of residuals. By default, vgxvarx models a full, intra-period innovations covariance matrix.
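If a full covariance matrix is not appropriate, you can restrict the fit instead. The following minimal sketch reuses Mdl1, Y, and CellX1 from above and passes the 'CovarType' name-value pair that the case study at the end of this section also uses; restricting Q to be diagonal here is an assumption for illustration, not part of this example's analysis.

% Refit the SUR model, restricting the innovations covariance to be diagonal
[EstMdlDiag,~,~,WDiag] = vgxvarx(Mdl1,Y,CellX1,[],'CovarType','Diagonal');
vgxdisp(EstMdlDiag) % off-diagonal entries of Q are constrained to zero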
Alternatively, and in this case, you can use the backslash operator on X and Y. However, you must include a column of ones in X for the intercepts.

coeff = [ones(T,1) X]\Y

coeff =
    0.9790   -1.0644    0.4532
    1.7686    3.8576   -2.2009
   -1.5508    2.4407    0.4641
    0.6959   -1.7139   -1.6414
    0.6704   -0.0564    0.5658

coeff is an (nExo + 1)-by-n matrix of estimated regression coefficients and intercepts. The estimated intercepts are in the first row, and the rest of the matrix contains the estimated regression coefficients.

Compare all estimates to their true values.

fprintf('\n');
fprintf(' Intercepts \n');
fprintf(' True | vgxvarx | backslash\n');
fprintf('--------------------------------------\n');
for j = 1:n
fprintf(' %8.4f | %8.4f | %8.4f\n',aTrue(j),EstMdl1.a(j),coeff(1,j));
end
cB = coeff';
cB = cB(:);
fprintf('\n');
fprintf(' Coefficients \n');
fprintf(' True | vgxvarx | backslash\n');
fprintf('--------------------------------------\n');
for j = 1:r1
fprintf(' %8.4f | %8.4f | %8.4f\n',bTrue(j),...
EstMdl1.b(j),cB(n + j));
end
fprintf('\n');
fprintf(' Innovations Covariance\n');
fprintf(' True | vgxvarx\n');
fprintf('----------------------------------------------------------\n');
for j = 1:n
fprintf('%8.4f %8.4f %8.4f | %8.4f %8.4f %8.4f\n',...
InnovCov(j,:),EstMdl1.Q(j,:));
end

 Intercepts
 True | vgxvarx | backslash
--------------------------------------
   1.0000 |   0.9790 |   0.9790
  -1.0000 |  -1.0644 |  -1.0644
   0.5000 |   0.4532 |   0.4532

 Coefficients
 True | vgxvarx | backslash
--------------------------------------
   2.0000 |   1.7686 |   1.7686
   4.0000 |   3.8576 |   3.8576
  -2.0000 |  -2.2009 |  -2.2009
  -1.5000 |  -1.5508 |  -1.5508
   2.5000 |   2.4407 |   2.4407
   0.5000 |   0.4641 |   0.4641
   0.5000 |   0.6959 |   0.6959
  -1.7500 |  -1.7139 |  -1.7139
  -1.5000 |  -1.6414 |  -1.6414
   0.7500 |   0.6704 |   0.6704
  -0.0500 |  -0.0564 |  -0.0564
   0.7000 |   0.5658 |   0.5658

 Innovations Covariance
 True | vgxvarx
----------------------------------------------------------
  1.0000   0.5000  -0.0500 |   1.3850   0.6673  -0.1591
  0.5000   1.0000   0.2500 |   0.6673   0.9731   0.2165
 -0.0500   0.2500   1.0000 |  -0.1591   0.2165   0.9934

The estimates from implementing vgxvarx and the backslash operator are the same, and are fairly close to their corresponding true values.

One way to check the relationship strength between the predictors and responses is to compute the coefficient of determination (i.e., the fraction of variation explained by the predictors), which is

$R^2 = 1 - \frac{\sum_{j=1}^{n} \hat{\sigma}_{w_j}^2}{\sum_{j=1}^{n} \hat{\sigma}_{y_j}^2},$

where $\hat{\sigma}_{w_j}^2$ is the estimated variance of residual series $j$, and $\hat{\sigma}_{y_j}^2$ is the estimated variance of response series $j$.

R2 = 1 - sum(diag(cov(W)))/sum(diag(cov(Y)))

R2 =
    0.9118

The SUR model and predictor data explain approximately 91% of the variation in the response data.

SUR Using a Unique Predictor for Each Response Series

For each period $t$, create block design matrices such that response series $j$ is linearly associated with predictor series $j$, $j = 1,...,n$. Put the block design matrices in the cells of a T-by-1 cell vector in chronological order.

CellX2 = cell(T,1);
for j = 1:T
CellX2{j} = diag(X(j,1:n));
end
r2 = size(CellX2{1},2);

Create a multivariate time series model by using vgxset and specifying the number of response series, the number of regression variables, and whether to include different regression intercepts for each response series.

Mdl2 = vgxset('n',n,'nX',r2,'Constant',true);

Estimate the regression coefficients using vgxvarx. Display the estimated parameters. Compute the coefficient of determination.
[EstMdl2,~,~,W2] = vgxvarx(Mdl2,Y,CellX2);
vgxdisp(EstMdl2)
R2 = 1 - sum(diag(cov(W2)))/sum(diag(cov(Y)))

Model  : 3-D VARMAX(0,0,3) with Additive Constant
a Constant:
   1.07752
  -1.43445
   0.674376
b Regression Parameter:
   1.01491
   3.83837
  -2.71834
Q Innovations Covariance:
   4.96205    4.91571   -1.86546
   4.91571   20.8263   -11.0945
  -1.86546  -11.0945     7.75392
R2 =
    0.1177

The model and predictors explain approximately 12% of the variation in the response series. This should not be surprising since the model is not the same as the response-generating process.

SUR Using a Shared Intercept for All Response Series

Create block design matrices such that each response series is linearly associated with all predictor series. Prepend the resulting design matrix with a vector of ones representing the common intercept.

ExpandX3 = [ones(T*n,1) kron(X,eye(n))];
r3 = size(ExpandX3,2);

Put the block design matrices into the cells of a T-by-1 cell vector in chronological order.

CellX3 = mat2cell(ExpandX3,n*ones(T,1));

Create a multivariate time series model by using vgxset and specifying the number of response series and the number of regression variables. By default, vgxset excludes regression intercepts.

Mdl3 = vgxset('n',n,'nX',r3);

Estimate the regression coefficients using vgxvarx. Display the estimated parameters. Compute the coefficient of determination.

[EstMdl3,~,~,W3] = vgxvarx(Mdl3,Y,CellX3);
vgxdisp(EstMdl3)
a = EstMdl3.b(1)
R2 = 1 - sum(diag(cov(W3)))/sum(diag(cov(Y)))

Model  : 3-D VARMAX(0,0,13) with No Additive Constant
b Regression Parameter:
   0.388833
   1.73468
   3.94099
  -2.20458
  -1.56878
   2.48483
   0.462187
   0.802394
  -1.97614
  -1.62978
   0.63972
   0.0190058
   0.562466
Q Innovations Covariance:
   1.72265   -0.164059  -0.122294
  -0.164059   3.02031    0.12577
  -0.122294   0.12577    0.997404
a =
    0.3888
R2 =
    0.9099

The shared, estimated regression intercept is 0.389, and the other coefficients are similar to those from the first SUR implementation. The model and predictors explain approximately 91% of the variation in the response series. This should not be surprising since the model is almost the same as the response-generating process.

Estimate the Capital Asset Pricing Model Using SUR

This example shows how to implement the capital asset pricing model (CAPM) using the Econometrics Toolbox™ multivariate time series framework.

The CAPM model characterizes comovements between asset and market prices. Under this framework, individual asset returns are linearly associated with the return of the whole market (for details, see [1], [2], and [3]). That is, given the return series of all stocks in a market ($r_m$) and the return of a riskless asset ($r_f$), the CAPM model for the return series of asset $j$ ($r_j$) is

$r_j - r_f = \alpha_j + \beta_j (r_m - r_f) + \epsilon_j$

for all assets $j = 1,...,n$ in the market. $\alpha$ is an n-by-1 vector of asset alphas that should be zero, and it is of interest to investigate assets whose asset alphas are significantly away from zero. $\beta$ is an n-by-1 vector of asset betas that specify the degree of comovement between the asset being modeled and the market. An interpretation of element $j$ of $\beta$ is:
• If $\beta_j = 1$, then asset $j$ moves in the same direction and with the same volatility as the market, i.e., is positively correlated with the market.
• If $\beta_j = -1$, then asset $j$ moves in the opposite direction, but with the same volatility as the market, i.e., is negatively correlated with the market.
• If $\beta_j = 0$, then asset $j$ is uncorrelated with the market.

In general:
• The sign of $\beta_j$ determines the direction that the asset is moving relative to the market, as described in the previous bullets.
• $|\beta_j|$ is the factor that determines how much more or less volatile asset $j$ is relative to the market.
For example, if $|\beta_j| = 10$, then asset $j$ is 10 times as volatile as the market.

Load the CAPM data set included in the Financial Toolbox™.

load CAPMuniverse
varWithNaNs = Assets(any(isnan(Data),1))
dateRange = datestr([Dates(1) Dates(end)])

varWithNaNs =
    'AMZN' 'GOOG'
dateRange =
03-Jan-2000
07-Nov-2005

The variable Data is a 1471-by-14 numeric matrix containing the daily returns of a set of 12 stocks (columns 1 through 12), one riskless asset (column 13), and the return of the whole market (column 14). The returns were measured from 03Jan2000 through 07Nov2005. AMZN and GOOG had their IPO during sampling, and so they have missing values.

Assign variables for the response and predictor series.

Y = bsxfun(@minus,Data(:,1:12),Data(:,14));
X = Data(:,13) - Data(:,14);
[T,n] = size(Y)

T =
        1471
n =
    12

Y is a 1471-by-12 matrix of the returns adjusted by the riskless return. X is a 1471-by-1 vector of the market return adjusted by the riskless return.

Create block design matrices for each period, and put them into cells of a T-by-1 cell vector. You can specify that each response series has its own intercept when you create the multivariate time series model object. Therefore, do not consider the intercept when you create the block design matrices.

Design = kron(X,eye(n));
CellX = mat2cell(Design,n*ones(T,1));
nX = size(Design,2);

Create the Multivariate Time Series Model

Create a multivariate time series model that characterizes the CAPM model. You must specify the number of response series, whether to give each response series equation an intercept, and the number of regression variables.

Mdl = vgxset('n',n,'Constant',true,'nX',nX);

Mdl is a multivariate time series model object that characterizes the desired CAPM model.

Estimate the Multivariate Time Series Model

Pass the CAPM model specification (Mdl), the response series (Y), and the cell vector of block design matrices (CellX) to vgxvarx. Request to return the estimated multivariate time series model and the estimated coefficient standard errors.

vgxvarx maximizes the likelihood using the expectation-conditional-maximization (ECM) algorithm. ECM accommodates missing response values directly (i.e., without imputation), but at the cost of computation time.

[EstMdl,EstCoeffSEMdl] = vgxvarx(Mdl,Y,CellX);

EstMdl and EstCoeffSEMdl have the same structure as Mdl, but EstMdl contains the parameter estimates and EstCoeffSEMdl contains the estimated standard errors of the parameter estimates. EstCoeffSEMdl:
• Contains the biased maximum likelihood standard errors.
• Does not include the estimated standard errors of the intra-period covariances.

To include the standard errors of the intra-period covariances, specify the name-value pair 'StdErrType','all' in vgxvarx.

Analyze Coefficient Estimates

Display the regression estimates, their standard errors, and their t statistics. By default, the software estimates, stores, and displays standard errors from maximum likelihood. Specify to use the unbiased least squares standard errors.

dispMdl = vgxset(EstMdl,'Q',[]) % Suppress printing covariance estimates

dispMdl =
  Model: 12-D VARMAX(0,0,12) with Additive Constant
      n: 12
    nAR: 0
    nMA: 0
     nX: 12
 asolve: [12x1 logical] additive constant indicators
      b: [12x1] regression coefficients
 bsolve: [12x1 logical] regression coefficient indicators
      Q: []
 Qsolve: [12x12 logical] covariance matrix indicators

Model  : 12-D VARMAX(0,0,12) with Additive Constant
Standard errors with DoF adjustment (least-squares)
   Parameter       Value         Std. Error     t-Statistic
  -------------- -------------- -------------- --------------
   a(1)           0.00116454     0.000869904    1.3387
   a(2)           0.000715822    0.00121752     0.587932
   a(3)          -0.000223753    0.000806185   -0.277546
   a(4)          -2.44513e-05    0.000689289   -0.0354732
   a(5)           0.00140469     0.00101676     1.38153
   a(6)           0.00412219     0.000910392    4.52793
   a(7)           0.000116143    0.00068952     0.168441
   a(8)          -1.37697e-05    0.000456934   -0.0301351
   a(9)           0.000110279    0.000710953    0.155114
   a(10)         -0.000244727    0.000521036   -0.469693
   a(11)          3.2336e-05     0.000861501    0.0375346
   a(12)          0.000128267    0.00103773     0.123603
   b(1)           1.22939        0.0741875     16.5714
   b(2)           1.36728        0.103833      13.1681
   b(3)           1.5653         0.0687534     22.7669
   b(4)           1.25942        0.0587843     21.4245
   b(5)           1.34406        0.0867116     15.5003
   b(6)           0.617321       0.0776404      7.95103
   b(7)           1.37454        0.0588039     23.375
   b(8)           1.08069        0.0389684     27.7326
   b(9)           1.60024        0.0606318     26.3928
   b(10)          1.1765         0.0444352     26.4767
   b(11)          1.50103        0.0734709     20.4303
   b(12)          1.65432        0.0885002     18.6928

To determine whether the parameters are significantly away from zero, suppose that a t statistic of 3 or more indicates significance. Response series 6 has a significant asset alpha.

sigASymbol = Assets(6)

sigASymbol =
    'GOOG'

As a result, GOOG has exploitable economic properties.

All asset beta t statistics are greater than 3, indicating that all assets are significantly correlated with the market. However, GOOG has an asset beta of approximately 0.62, whereas all other asset betas are greater than 1. This indicates that the magnitude of the volatility of GOOG is approximately 62% of the market volatility. The reason for this is that GOOG steadily and almost consistently appreciated in value while the market experienced volatile horizontal movements.

For more details and an alternative analysis, see Capital Asset Pricing Model with Missing Data.

Simulate Responses of an Estimated VARX Model

This example shows how to estimate a multivariate time series model that contains lagged endogenous and exogenous variables, and how to simulate responses.

The response series are the quarterly:
• Changes in real gross domestic product (rGDP) rates ($y_{1t}$)
• Real money supply (rM1SL) rates ($y_{2t}$)
• Short-term interest rates (i.e., three-month treasury bill yield, $y_{3t}$)
from March 1959 through March 2009. The exogenous series is the quarterly changes in the unemployment rate ($x_t$).

Suppose that a model for the responses is this VARX(4,3) model:

$y_t = a + X_t b + \sum_{i=1}^{4} A_i y_{t-i} + \epsilon_t.$

Preprocess the Data

Load the U.S. macroeconomic data set. Flag the series and their periods that contain missing values (indicated by NaN values).

load Data_USEconModel
varNaN = any(ismissing(DataTable),1); % Variables containing NaN values
seriesWithNaNs = series(varNaN)

seriesWithNaNs =
  Columns 1 through 3
    '(FEDFUNDS) Effec...' '(GS10) Ten-year ...' '(M1SL) M1 money ...'
  Columns 4 through 5
    '(M2SL) M2 money ...' '(UNRATE) Unemplo...'

In this data set, the variables that contain missing values entered the sample later than the other variables. There are no missing values after sampling started for a particular variable. vgxvarx accommodates missing values for responses, but not for regression variables. Flag all periods corresponding to a missing regression variable value.

idx = ~isnan(DataTable.UNRATE);

For the rest of the example, consider only those values of the series indicated by a true in idx.

Compute rGDP and rM1SL, and the growth rates of rGDP, rM1SL, short-term interest rates, and the unemployment rate. Description contains a description of the data and the variable names.
Reserve the last three years of data to investigate the out-of-sample performance of the estimated model.

rGDP = DataTable.GDP(idx)./(DataTable.GDPDEF(idx)/100);
rM1SL = DataTable.M1SL(idx)./(DataTable.GDPDEF(idx)/100);
dLRGDP = diff(log(rGDP)); % rGDP growth rate
dLRM1SL = diff(log(rM1SL)); % rM1SL growth rate
d3MTB = diff(DataTable.TB3MS(idx)); % Change in short-term interest rate (3MTB)
dUNRATE = diff(DataTable.UNRATE(idx)); % Change in unemployment rate
T = numel(d3MTB); % Total sample size
oosT = 12; % Out-of-sample size
estT = T - oosT; % Estimation sample size
estIdx = 1:estT; % Estimation sample indices
oosIdx = (T - 11):T; % Out-of-sample indices
dates = dates((end - T + 1):end);
EstY = [dLRGDP(estIdx) dLRM1SL(estIdx) d3MTB(estIdx)]; % In-sample responses
estX = dUNRATE(estIdx); % In-sample exogenous data
n = size(EstY,2);
OOSY = [dLRGDP(oosIdx) dLRM1SL(oosIdx) d3MTB(oosIdx)]; % Out-of-sample responses
oosX = dUNRATE(oosIdx); % Out-of-sample exogenous data

Create the Design Matrices

Create an estT-by-1 cell vector of block design matrices that associate the predictor series with each response such that the responses do not share a coefficient.

EstExpandX = kron(estX,eye(n));
EstCellX = mat2cell(EstExpandX,n*ones(estT,1));
nX = size(EstExpandX,2);

Specify the VARX Model

Specify a multivariate time series model object that characterizes the VARX(4,3) model using vgxset.

Mdl = vgxset('n',n,'nAR',4,'nX',nX,'Constant',true);

Estimate the VARX(4,3) Model

Estimate the parameters of the VARX(4,3) model using vgxvarx. Display the parameter estimates.

EstMdl = vgxvarx(Mdl,EstY,EstCellX);
vgxdisp(EstMdl)

Model  : 3-D VARMAX(4,0,3) with Additive Constant
Conditional mean is AR-stable and is MA-invertible
a Constant:
   0.00811792
   0.000709263
   0.0465824
b Regression Parameter:
  -0.0162116
  -0.00163933
  -1.50115
AR(1) Autoregression Matrix:
  -0.0375772   -0.0133236    0.00108218
  -0.00519697   0.177963    -0.00501432
  -0.873992    -6.89049     -0.165888
AR(2) Autoregression Matrix:
   0.0753033    0.0775643   -0.001049
   0.00282857   0.29064     -0.00159574
   4.00724      0.465046    -0.221024
AR(3) Autoregression Matrix:
  -0.0927688   -0.0240239   -0.000549057
   0.0627837    0.0686179   -0.00212185
  -7.52241     10.247        0.227121
AR(4) Autoregression Matrix:
   0.0646951   -0.0792765   -0.000176166
   0.0276958    0.00922231  -0.000183861
   1.38523    -11.8774       0.0518154
Q Innovations Covariance:
   3.57524e-05  7.05807e-06 -4.23542e-06
   7.05807e-06  9.67992e-05 -0.00188786
  -4.23542e-06 -0.00188786   0.777151

EstMdl is a multivariate time series model object containing the estimated parameters.

Simulate Out-Of-Sample Response Paths Using the Same Exogenous Data per Path

Simulate 1000 response series paths over the three-year out-of-sample horizon from the estimated model, assuming that the exogenous unemployment rate is a fixed series. Since the model contains 4 lags per endogenous variable, specify the last 4 observations in the estimation sample as presample data.

OOSExpandX = kron(oosX,eye(n));
OOSCellX = mat2cell(OOSExpandX,n*ones(oosT,1));
numPaths = 1000;
Y0 = EstY((end-3):end,:);
rng(1); % For reproducibility
YSim = vgxsim(EstMdl,oosT,OOSCellX,Y0,[],numPaths);

YSim is a 12-by-3-by-1000 numeric array of simulated responses. The rows of YSim correspond to out-of-sample periods, the columns correspond to the response series, and the leaves correspond to paths.

Plot the response data and the simulated responses. Identify the 5%, 25%, 75% and 95% percentiles, and the mean and median of the simulated series at each out-of-sample period.
YSimBar = mean(YSim,3);
YSimQrtl = quantile(YSim,[0.05 0.25 0.5 0.75 0.95],3);
RepDates = repmat(dates(oosIdx),1,1000);
respNames = {'dLRGDP' 'dLRM1SL' 'd3MTB'};
figure;
for j = 1:n;
subplot(3,1,j);
h1 = plot(dates(oosIdx),squeeze(YSim(:,j,:)),'Color',0.75*ones(3,1));
hold on;
h2 = plot(dates(oosIdx),YSimBar(:,j),'.-k','LineWidth',2);
h3 = plot(dates(oosIdx),squeeze(YSimQrtl(:,j,:)),':r','LineWidth',1.5);
h4 = plot(dates((end - 30):end),[EstY((end - 18):end,j);OOSY(:,j)],...
'b','LineWidth',2);
title(sprintf('%s',respNames{j}));
datetick;
axis tight;
hold off;
end
legend([h1(1) h2(1) h3(1) h4],{'Simulated Series','Simulation Mean',...
'Simulation Quartile','Data'},'Location',[0.4 0.1 0.01 0.01],...
'FontSize',8);

Simulate Out-Of-Sample Response Paths Using Random Exogenous Data

Suppose that the change in the unemployment rate is an AR(4) model, and fit the model to the estimation sample data.

MdlUNRATE = arima('ARLags',1:4);
EstMdlUNRATE = estimate(MdlUNRATE,estX,'Display','off');

EstMdlUNRATE is an arima model object containing the parameter estimates.

Simulate 1000 three-year paths from the estimated AR(4) model for the change in unemployment rate. Since the model contains 4 lags, specify the last 4 observations in the estimation sample as presample data.

XSim = simulate(EstMdlUNRATE,oosT,'Y0',estX(end-3:end),...
'NumPaths',numPaths);

XSim is a 12-by-1000 numeric matrix of simulated exogenous paths. The rows correspond to periods and the columns correspond to paths.

Create a cell matrix of block design matrices to organize the exogenous data, where each column corresponds to a path.

ExpandXSim = kron(XSim,eye(n));
size(ExpandXSim)
CellXSim = mat2cell(ExpandXSim,n*ones(oosT,1),n*ones(1,numPaths));
size(CellXSim)
CellXSim{1,1}

ans =
    36 3000
ans =
    12 1000
ans =
    0.7901 0 0
    0 0.7901 0
    0 0 0.7901

ExpandXSim is a 36-by-3000 numeric matrix, and CellXSim is a 12-by-1000 cell matrix of mutually exclusive, neighboring, 3-by-3 block matrices in ExpandXSim.

Simulate 1000 future response series paths over the three-year horizon from the estimated model using the simulated exogenous data. Since the model contains 4 lags per endogenous variable, specify the last 4 observations in the estimation sample as presample data.

YSimRX = vgxsim(EstMdl,oosT,CellXSim,Y0,[],numPaths);

YSimRX is a 12-by-3-by-1000 numeric array of simulated responses.

Plot the response data and the simulated responses. Identify the 5%, 25%, 75% and 95% percentiles, and the mean and median of the simulated series at each out-of-sample period.

YSimBarRX = mean(YSimRX,3);
YSimQrtlRX = quantile(YSimRX,[0.05 0.25 0.5 0.75 0.95],3);
figure;
for j = 1:n;
subplot(3,1,j);
h1 = plot(dates(oosIdx),squeeze(YSimRX(:,j,:)),'Color',0.75*ones(3,1));
hold on;
h2 = plot(dates(oosIdx),YSimBarRX(:,j),'.-k','LineWidth',2);
h3 = plot(dates(oosIdx),squeeze(YSimQrtlRX(:,j,:)),':r','LineWidth',1.5);
h4 = plot(dates((end - 30):end),[EstY((end - 18):end,j);OOSY(:,j)],...
'b','LineWidth',2);
title(sprintf('%s with Simulated Unemployment Rate',respNames{j}));
datetick;
axis tight;
hold off;
end
legend([h1(1) h2(1) h3(1) h4],{'Simulated Series','Simulation Mean',...
'Simulation Quartile','Data'},'Location',[0.4 0.1 0.01 0.01],...
'FontSize',8)

VAR Model Case Study

Overview of the Case Study

This section contains an example of the workflow described in Building VAR Models. The example uses three time series: GDP, M1 money supply, and the 3-month T-bill rate. The example shows:
1. Loading the data and transforming it for stationarity
2.
VAR Model Case Study

Overview of the Case Study

This section contains an example of the workflow described in Building VAR Models. The example uses three time series: GDP, M1 money supply, and the 3-month T-bill rate. The example shows:

1. Loading the data and transforming it for stationarity
2. Partitioning the transformed data into presample, estimation, and forecast intervals to support a backtesting experiment
3. Making several models
4. Fitting the models to the data
5. Deciding which of the models is best
6. Making forecasts based on the best model

Loading Data.  The file Data_USEconModel ships with Econometrics Toolbox software. The file contains time series from the St. Louis Federal Reserve Economics Database. This example uses three of the time series: GDP, M1 money supply (M1SL), and the 3-month T-bill rate (TB3MS). Load the data into a time series matrix Y as follows:

load Data_USEconModel
gdp = DataTable.GDP;
m1 = DataTable.M1SL;
tb3 = DataTable.TB3MS;
Y = [gdp,m1,tb3];

Transforming Data for Stationarity.  Plot the data to look for trends:

figure
subplot(3,1,1)
plot(dates,Y(:,1),'r');
title('GDP')
datetick('x')
grid on
subplot(3,1,2);
plot(dates,Y(:,2),'b');
title('M1')
datetick('x')
grid on
subplot(3,1,3);
plot(dates,Y(:,3),'k')
title('3-mo T-bill')
datetick('x')
grid on
hold off

Not surprisingly, both the GDP and M1 data appear to grow, while the T-bill returns show no long-term growth. To counter the trends in GDP and M1, take a difference of the logarithms of the data. Taking a difference shortens the time series, as described in Transforming Data for Stationarity. Therefore, truncate the T-bill series and the date series X so that the Y data matrix has the same number of rows for each column:

Y = [diff(log(Y(:,1:2))), Y(2:end,3)]; % Transformed data
X = dates(2:end);

figure
subplot(3,1,1)
plot(X,Y(:,1),'r');
title('GDP')
datetick('x')
grid on
subplot(3,1,2);
plot(X,Y(:,2),'b');
title('M1')
datetick('x')
grid on
subplot(3,1,3);
plot(X,Y(:,3),'k'), title('3-mo T-bill')
datetick('x')
grid on

You see that the scale of the first two columns is about 100 times smaller than the third. Multiply the first two columns by 100 so that the time series are all roughly on the same scale. This scaling makes it easy to plot all the series on the same plot. More importantly, this type of scaling makes optimizations more numerically stable (for example, maximizing loglikelihoods).

Y(:,1:2) = 100*Y(:,1:2);

figure
plot(X,Y(:,1),'r');
hold on
plot(X,Y(:,2),'b');
datetick('x')
grid on
plot(X,Y(:,3),'k');
legend('GDP','M1','3-mo T-bill');
hold off

Selecting and Fitting Models

Selecting Models.  You can choose many different models for the data. This example rather arbitrarily chooses four models:

• VAR(2) with diagonal autoregressive and covariance matrices
• VAR(2) with full autoregressive and covariance matrices
• VAR(4) with diagonal autoregressive and covariance matrices
• VAR(4) with full autoregressive and covariance matrices

Make the series the same length, and transform them to be stationary and on a similar scale.

dGDP = 100*diff(log(gdp(49:end)));
dM1 = 100*diff(log(m1(49:end)));
dT3 = diff(tb3(49:end));
Y = [dGDP dM1 dT3];

Create the four models as follows:

dt = logical(eye(3));
VAR2diag = vgxset('ARsolve',repmat({dt},2,1),...
    'asolve',true(3,1),'Series',{'GDP','M1','3-mo T-bill'});
VAR2full = vgxset(VAR2diag,'ARsolve',[]);
VAR4diag = vgxset(VAR2diag,'nAR',4,'ARsolve',repmat({dt},4,1));
VAR4full = vgxset(VAR2full,'nAR',4);

The matrix dt is a diagonal logical matrix. dt specifies that the autoregressive matrices for both VAR2diag and VAR4diag are diagonal. In contrast, the specifications for VAR2full and VAR4full have empty matrices instead of dt. Therefore, vgxvarx fits the defaults, which are full matrices for the autoregressive and correlation matrices.
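To make the restriction concrete: each cell of ARsolve is a logical matrix whose true entries mark the coefficients that vgxvarx estimates. Displaying dt shows the pattern (a quick illustration, not part of the original example):

dt = logical(eye(3))

% dt =
%      1     0     0
%      0     1     0
%      0     0     1
%
% With ARsolve{k} = dt, vgxvarx estimates only the diagonal entries of each
% AR(k) matrix and fixes the off-diagonal entries at 0, so each series depends
% only on its own lags. The 'CovarType','Diagonal' option used in the fitting
% step below restricts the innovations covariance in the same spirit.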
Choosing Presample, Estimation, and Forecast Periods.  To assess the quality of the models, divide the response data Y into three periods: presample, estimation, and forecast. Fit the models to the estimation data, using the presample period to provide lagged data. Compare the predictions of the fitted models to the forecast data. The estimation period is in-sample, and the forecast period is out-of-sample; comparing predictions to out-of-sample data is also known as backtesting.

For the two VAR(4) models, the presample period is the first four rows of Y. Use the same presample period for the VAR(2) models so that all the models are fit to the same data. This is necessary for model fit comparisons. For both model types, the forecast period is the final 10% of the rows of Y. The estimation period goes from row 5 to the 90% row. The code for defining these data periods follows:

Ypre = Y(1:4,:);
T = ceil(.9*size(Y,1));
Yest = Y(5:T,:);
YF = Y((T+1):end,:);
TF = size(YF,1);

Fitting with vgxvarx.  Now that the models and time series exist, you can easily fit the models to the data:

[EstSpec1,EstStdErrors1,LLF1,W1] = ...
    vgxvarx(VAR2diag,Yest,[],Ypre,'CovarType','Diagonal');
[EstSpec2,EstStdErrors2,LLF2,W2] = ...
    vgxvarx(VAR2full,Yest,[],Ypre);
[EstSpec3,EstStdErrors3,LLF3,W3] = ...
    vgxvarx(VAR4diag,Yest,[],Ypre,'CovarType','Diagonal');
[EstSpec4,EstStdErrors4,LLF4,W4] = ...
    vgxvarx(VAR4full,Yest,[],Ypre);

• The EstSpec structures are the fitted models.
• The EstStdErrors structures contain the standard errors of the fitted models.
• The LLF are the loglikelihoods of the fitted models. Use these to help select the best model, as described in Checking Model Adequacy.
• The W are the estimated innovations (residuals) processes, the same size as Yest.
• The specifications for EstSpec1 and EstSpec3 include diagonal covariance matrices.

Checking Stability.  You can check whether the estimated models are stable and invertible with the vgxqual function. (There are no MA terms in these models, so the models are necessarily invertible.) The test shows that all the estimated models are stable:

[isStable1,isInvertible1] = vgxqual(EstSpec1);
[isStable2,isInvertible2] = vgxqual(EstSpec2);
[isStable3,isInvertible3] = vgxqual(EstSpec3);
[isStable4,isInvertible4] = vgxqual(EstSpec4);
[isStable1,isStable2,isStable3,isStable4]

ans =
     1     1     1     1

You can also look at the estimated specification structures. Each contains a line stating whether the model is stable:

EstSpec4

EstSpec4 =
      Model: 3-D VAR(4) with Additive Constant
     Series: {'GDP'  'M1'  '3-mo T-bill'}
          n: 3
        nAR: 4
        nMA: 0
         nX: 0
          a: [0.524224 0.106746 -0.671714] additive constants
     asolve: [1 1 1 logical] additive constant indicators
         AR: {4x1 cell} stable autoregressive process
    ARsolve: {4x1 cell of logicals} autoregressive lag indicators
          Q: [3x3] covariance matrix
     Qsolve: [3x3 logical] covariance matrix indicators
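If you want to see why vgxqual reports stability, you can build the companion-form matrix from the estimated AR coefficients yourself and check that its eigenvalues lie inside the unit circle. This is a sketch that assumes the AR field of the fitted specification is a cell vector of 3-by-3 coefficient matrices, as the EstSpec4 display above suggests:

% Companion matrix of a VAR(p); all eigenvalues inside the unit circle
% is equivalent to stability of the process
AR = EstSpec2.AR;                 % {A1; A2} for the full VAR(2) model
p = numel(AR);
n = size(AR{1},1);
F = [cell2mat(AR'); eye(n*(p-1)) zeros(n*(p-1),n)];
max(abs(eig(F)))                  % should be strictly less than 1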
Likelihood Ratio Tests.  You can compare the restricted (diagonal) AR models to their unrestricted (full) counterparts using the lratiotest function. The test rejects or fails to reject the hypothesis that the restricted models are adequate, with a default 5% tolerance. This is an in-sample test. To perform the test:

1. Count the parameters in the models using the vgxcount function:

[n1,n1p] = vgxcount(EstSpec1);
[n2,n2p] = vgxcount(EstSpec2);
[n3,n3p] = vgxcount(EstSpec3);
[n4,n4p] = vgxcount(EstSpec4);

2. Compute the likelihood ratio tests:

reject1 = lratiotest(LLF2,LLF1,n2p - n1p)
reject3 = lratiotest(LLF4,LLF3,n4p - n3p)
reject4 = lratiotest(LLF4,LLF2,n4p - n2p)

reject1 =
     1
reject3 =
     1
reject4 =
     0

The 1 results indicate that the likelihood ratio test rejected both restricted models, the diagonal VAR(2) model (EstSpec1) and the diagonal VAR(4) model (EstSpec3), in favor of the corresponding unrestricted models. Therefore, based on this test, the unrestricted VAR(2) and VAR(4) models are preferable. However, the test does not reject the unrestricted VAR(2) model in favor of the unrestricted VAR(4) model. (This test regards the VAR(2) model as a VAR(4) model with the restriction that the autoregression matrices AR(3) and AR(4) are 0.) Therefore, it seems that the unrestricted VAR(2) model is best among the models fit.

Akaike Information Criterion.  To find the best model among a set, minimize the Akaike information criterion. This is an in-sample calculation. Here is how to calculate the criterion for the four models:

AIC = aicbic([LLF1 LLF2 LLF3 LLF4],[n1p n2p n3p n4p])

AIC =
   1.0e+03 *
    1.5129    1.4462    1.5122    1.4628

The best model according to this criterion is the unrestricted VAR(2) model. Notice, too, that the unrestricted VAR(4) model has lower Akaike information than either of the restricted models. Based on this criterion, the unrestricted VAR(2) model is best, with the unrestricted VAR(4) model coming next in preference.

Comparing Forecasts with Forecast Period Data.  To compare the predictions of the four models against the forecast data YF, use the vgxpred function. This function returns both a prediction of the mean time series and an error covariance matrix that gives confidence intervals about the means. This is an out-of-sample calculation.

[FY1,FYCov1] = vgxpred(EstSpec1,TF,[],Yest);
[FY2,FYCov2] = vgxpred(EstSpec2,TF,[],Yest);
[FY3,FYCov3] = vgxpred(EstSpec3,TF,[],Yest);
[FY4,FYCov4] = vgxpred(EstSpec4,TF,[],Yest);

A plot shows the predictions in the shaded region to the right:

figure
vgxplot(EstSpec2,Yest,FY2,FYCov2)

It is now straightforward to calculate the sum-of-squares error between the predictions and the data, YF:

error1 = YF - FY1;
error2 = YF - FY2;
error3 = YF - FY3;
error4 = YF - FY4;

SSerror1 = error1(:)' * error1(:);
SSerror2 = error2(:)' * error2(:);
SSerror3 = error3(:)' * error3(:);
SSerror4 = error4(:)' * error4(:);

figure
bar([SSerror1 SSerror2 SSerror3 SSerror4],.5)
ylabel('Sum of squared errors')
set(gca,'XTickLabel',...
    {'AR2 diag' 'AR2 full' 'AR4 diag' 'AR4 full'})
title('Sum of Squared Forecast Errors')

The predictive performance of the four models is similar. The full VAR(2) model seems to be the best and most parsimonious fit. Its model parameters are:

vgxdisp(EstSpec2)

Model  : 3-D VAR(2) with Additive Constant
         Conditional mean is AR-stable and is MA-invertible
Series : GDP
Series : M1
Series : 3-mo T-bill
a Constant:
    0.687401
    0.3856
   -0.915879
AR(1) Autoregression Matrix:
    0.272374   -0.0162214    0.0928186
    0.0563884   0.240527    -0.389905
    0.280759   -0.0712716   -0.32747
AR(2) Autoregression Matrix:
    0.242554    0.140464    -0.177957
    0.00130726  0.380042    -0.0484981
    0.260414    0.024308    -0.43541
Q Innovations Covariance:
    0.632182    0.105925     0.216806
    0.105925    0.991607    -0.155881
    0.216806   -0.155881     1.00082

Forecasting

This example shows two ways to make predictions or forecasts based on the EstSpec2 fitted model (the full VAR(2) model selected above):

• Running vgxpred based on the last few rows of YF.
• Simulating several time series with the vgxsim function.

In both cases, transform the forecasts so that they are directly comparable to the original time series.
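Both forecasting subsections below undo the same transformation (rescale, undifference, and exponentiate the logged series). If you prefer to keep that logic in one place, a small helper along these lines could be saved as undiff.m; undiff is a hypothetical name for this sketch, not a toolbox function:

function y = undiff(dy,endpt)
% Invert the transform y -> [100*diff(log(y(:,1:2))), diff(y(:,3))].
%   dy    - transformed forecasts, horizon-by-3
%   endpt - last row of the original (untransformed) data, 1-by-3
endpt(1:2) = log(endpt(1:2));  % differencing was applied to the logs
dy(:,1:2) = dy(:,1:2)/100;     % undo the scaling by 100
y = cumsum([endpt; dy]);       % undo the differencing
y(:,1:2) = exp(y(:,1:2));      % undo the logarithms
end

Calling y = undiff(ypred,[gdp(end),m1(end),tb3(end)]) would then reproduce the manual steps shown next.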
Forecasting with vgxpred.  This example shows predictions 10 steps into the future.

1. Generate the prediction time series from the fitted model, beginning at the latest times:

[ypred,ycov] = vgxpred(EstSpec2,10,[],YF);

2. Transform the predictions by undoing the scaling and differencing applied to the original data. Make sure to insert the last observation at the beginning of the time series before using cumsum to undo the differencing. And, because differencing occurred after taking logarithms, insert the logarithm before using cumsum:

yfirst = [gdp,m1,tb3];
yfirst = yfirst(49:end,:);       % Remove NaNs
dates = dates(49:end);
endpt = yfirst(end,:);
endpt(1:2) = log(endpt(1:2));
ypred(:,1:2) = ypred(:,1:2)/100; % Rescale the percentages
ypred = [endpt; ypred];          % Prepare for cumsum
ypred(:,1:3) = cumsum(ypred(:,1:3));
ypred(:,1:2) = exp(ypred(:,1:2));
lastime = dates(end);
timess = lastime:91:lastime+910; % Insert the forecast horizon

figure
subplot(3,1,1)
plot(timess,ypred(:,1),':r')
hold on
plot(dates,yfirst(:,1),'k')
datetick('x')
grid on
title('GDP')
subplot(3,1,2);
plot(timess,ypred(:,2),':r')
hold on
plot(dates,yfirst(:,2),'k')
datetick('x')
grid on
title('M1')
subplot(3,1,3);
plot(timess,ypred(:,3),':r')
hold on
plot(dates,yfirst(:,3),'k')
datetick('x')
grid on
title('3-mo T-bill')
hold off

The plot shows the extrapolations in dotted red, and the original data series in solid black.

Look at the last few years in this plot to get a sense of how the predictions relate to the latest data points:

ylast = yfirst(170:end,:);
timeslast = dates(170:end);

figure
subplot(3,1,1)
plot(timess,ypred(:,1),'--r')
hold on
plot(timeslast,ylast(:,1),'k')
datetick('x')
grid on
title('GDP')
subplot(3,1,2);
plot(timess,ypred(:,2),'--r')
hold on
plot(timeslast,ylast(:,2),'k')
datetick('x')
grid on
title('M1')
subplot(3,1,3);
plot(timess,ypred(:,3),'--r')
hold on
plot(timeslast,ylast(:,3),'k')
datetick('x')
grid on
title('3-mo T-bill')
hold off

The forecast shows increasing GDP, little growth in M1, and a slight decline in the interest rate. However, the forecast has no error bars. For a forecast with errors, see Forecasting with vgxsim.

Forecasting with vgxsim.  This example shows forecasting 10 steps into the future, with a simulation replicated 2000 times, and generates the means and standard deviations.

1. Simulate a time series from the fitted model, beginning at the latest times:

rng(1); % For reproducibility
ysim = vgxsim(EstSpec2,10,[],YF,[],2000);

2. Transform the predictions by undoing the scaling and differencing applied to the original data. Make sure to insert the last observation at the beginning of the time series before using cumsum to undo the differencing. And, because differencing occurred after taking logarithms, insert the logarithm before using cumsum:

yfirst = [gdp,m1,tb3];
endpt = yfirst(end,:);
endpt(1:2) = log(endpt(1:2));
ysim(:,1:2,:) = ysim(:,1:2,:)/100;
ysim = [repmat(endpt,[1,1,2000]); ysim];
ysim(:,1:3,:) = cumsum(ysim(:,1:3,:));
ysim(:,1:2,:) = exp(ysim(:,1:2,:));
3. Compute the mean and standard deviation of each series, and plot the results. The plot has the mean in black, with ±1 standard deviation in red:

ymean = mean(ysim,3);
ystd = std(ysim,0,3);

figure
subplot(3,1,1)
plot(timess,ymean(:,1),'k')
datetick('x')
grid on
hold on
plot(timess,ymean(:,1)+ystd(:,1),'--r')
plot(timess,ymean(:,1)-ystd(:,1),'--r')
title('GDP')
subplot(3,1,2);
plot(timess,ymean(:,2),'k')
hold on
datetick('x')
grid on
plot(timess,ymean(:,2)+ystd(:,2),'--r')
plot(timess,ymean(:,2)-ystd(:,2),'--r')
title('M1')
subplot(3,1,3);
plot(timess,ymean(:,3),'k')
hold on
datetick('x')
grid on
plot(timess,ymean(:,3)+ystd(:,3),'--r')
plot(timess,ymean(:,3)-ystd(:,3),'--r')
title('3-mo T-bill')
hold off

The series show increasing growth in GDP, moderate to little growth in M1, and uncertainty about the direction of T-bill rates.
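If you prefer asymmetric, simulation-based bands to the ±1 standard deviation bands above, the same ysim array yields empirical quantiles directly (a sketch using the variables already in the workspace):

% 90% Monte Carlo forecast band from the simulated paths
yband = quantile(ysim,[0.05 0.95],3);  % (horizon+1)-by-3-by-2
figure
plot(timess,ymean(:,1),'k')
hold on
plot(timess,squeeze(yband(:,1,:)),'--r')
datetick('x')
grid on
title('GDP with a 90% simulation band')
hold off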
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 12, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7430362105369568, "perplexity": 4337.469030124298}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802768831.100/warc/CC-MAIN-20141217075248-00035-ip-10-231-17-201.ec2.internal.warc.gz"}
http://mathhelpforum.com/advanced-math-topics/157079-continuity-degree-spline-curves.html
# Math Help - Continuity degree of spline curves?

1. ## Continuity degree of spline curves?

I'm trying to solve a question that simply asks what the degree of smoothness (I assume this is the continuity degree) is for certain spline curves.

For a cardinal spline and a Kochanek-Bartels spline:
Degree of polynomial: 2n - 1
Gives continuity degree of: C^(n-1)

For a B-spline:
Degree of polynomial: d - 1
Gives continuity degree: C^(d-2)

Are these correct? I've tried to find what the continuity degree is for the Bézier curve, but I can't find it anywhere.

2. A Bezier curve is a cubic spline. d - 1 = 3.
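For what it's worth, the B-spline line in the question follows the standard rule that a B-spline whose pieces are polynomials of degree $$p$$, with simple knots, is $$C^{p-1}$$ across the knots; in the question's convention of writing the degree as $$d-1$$,

$$p = d - 1 \;\Longrightarrow\; C^{p-1} = C^{d-2}.$$

A single Bézier curve of degree $$n$$ is one polynomial, hence infinitely differentiable on its interior; continuity only becomes a question at the joints of a composite (piecewise) Bézier curve, where the achievable smoothness, up to $$C^{n-1}$$, depends on how the control points of adjacent segments are placed.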
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9797763824462891, "perplexity": 3128.8828481882856}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398471441.74/warc/CC-MAIN-20151124205431-00150-ip-10-71-132-137.ec2.internal.warc.gz"}
https://motls.blogspot.com/2016/02/ligo-discovers-black-hole-merger-12.html
## Thursday, February 11, 2016 ... //

### LIGO discovers a black hole merger 1.3 billion light years away

I guess that many of you have both watched and listened to (!) the National Science Foundation event in the National Press Club, Washington D.C., too. If you haven't, you should listen to the press conference. YouTube succeeded in guaranteeing a smooth transmission of the live stream to the 90,000 (peak) viewers. Before NSF posted the official recorded video, I used the earlier Ruptly copy. The sound started at 0:18:05 and ended at 1:29:30 in it. If I pick a reaction in the newspapers, you may check the NYT movie and article, too.

All the rumors I have heard of were 100% true – and those 50 percent of TRF readers who said Yes Yes Yes in our poll were right – but we still learned some numbers that surprised me.

The event was headed by the NSF director France Anne-Dominic Córdova. Caltech LIGO director David H. Reitze was the first scientist who quickly announced "we did it" a few minutes after the sound in the YouTube video started at 10:30 DC time. He was followed by Gabriela Gonzalez, the spokeswoman of the LIGO Scientific Collaboration (LSC). Founders of LIGO, Kip Thorne (Caltech) and the "main" inventor Rainer Weiss (MIT), spoke a lot. The third co-founder, Ronald Drever (Caltech), is unfortunately ill now. Joseph Weber, who tried to detect the waves by the Weber bars (he died in 2000), was mentioned. A Russian journalist argued that the idea was stolen from the Soviet Union, anyway. ;-) See John Preskill's article for some history and credits or a LIGO timeline.

Kip Thorne had to answer this "Russian connection" question on Russia Today (TV), of course. ;-) Braginsky was like Tsiolkovsky but izvinite, pozhaluysta ("excuse me, please"), like the moonwalking, this is an overwhelmingly American achievement.

All of the comments at the press event were a pleasure to listen to. They have said lots of things, shown various simulations, and played sounds similar to those that the readers of the LIGO category on this blog have been exposed to. Well, but you may still listen to the actual chirp [also: YouTube, II] from the merger (hidden in some noise, just like in the Black Hole Hunter game). Davide Castelvecchi and Seth Borenstein, two journalists I recently interacted with via Twitter (positively in the first case!), asked questions, too.

Shortly after the press conference started, the discovery paper in PRL (which has already been peer-reviewed) appeared on the web:

Observation of Gravitational Waves from a Binary Black Hole Merger (at ligo.org; official PRL URL, a Dropbox backup)

It may be useful to rewrite the abstract for you. I am sure that the data is freely available, but I will reformulate the abstract in my words so that I internalize it, too:

On September 14th, 2015, days after the Advanced LIGO run started, Italian postdoc Marco Drago (from Padua) already observed this event on his screen (in Hanover, Germany) at 9:50 UTC. The frequency goes from 35 to 250 hertz, and the peak gravitational-wave strain was $$10^{-21}$$. The signal appeared in both LIGO detectors (see the image at the top) with a relative delay of 0.007 seconds (the maximum possible delay is 0.010 seconds; Louisiana heard it first) and is fully consistent with a merger of black holes whose masses, in multiples of the Sun's $$E=mc^2$$, are $$36\pm 2$$ plus $$29\pm 2$$, which goes to $$62\pm 2$$ plus $$3.0\pm 0.3$$ worth of gravitational waves. My errors are approximate standard deviations.
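To get a feeling for the amount, convert those three solar masses to joules; this back-of-the-envelope arithmetic is mine, not a line from the paper:

$$E \approx 3M_\odot c^2 \approx 3\times(2\times 10^{30}\,{\rm kg})\times(3\times 10^{8}\,{\rm m/s})^2 \approx 5\times 10^{47}\,{\rm J},$$

and almost all of it was radiated within a fraction of a second.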
The final black hole is a rotating ("Kerr") black hole with angular momentum equal to $$a=c|\vec S|/GM^2=67\pm 3\,\%$$ of the maximum value allowed for the same mass. The distance of the object is $$410\pm 90$$ megaparsecs, about 1.3 billion light years (plus minus 20 percent), according to the luminosity, and the redshift calculated from that is $$z=0.09\pm 0.02$$. The direction is approximately that of the Magellanic Clouds (but those are much closer than the observed black hole[s]).

The significance of the signal – which looks nice in a window that is 0.2 seconds long – is the previously mentioned 5.1 sigma. An even more amusing way to express this information is that the false detection rate is "once in 230 thousand years". With this low frequency of false positives, even the Nobel committee should be willing to take the risk.

Eleven companion papers were made instantly available. New rumors say that there exist 7 other candidate detections. Unfortunately, they are weaker than the one that was already announced. (To say the least, GW150914 was the strongest signal between September 12th and October 20th.) That's puzzling given the fact that this "strongest one" was detected within days after the Advanced LIGO run began. Well, this discovery actually took place two days before I wrote about the beginning of the Advanced LIGO run.

It's actually more puzzling than that. The discovery took place four days before the Advanced LIGO run began, according to the official LIGO tweet. And that's despite the fact that during the press event today, Kip Thorne denied that he knew how to use LIGO as a time machine. (More seriously, I sincerely hope that the September 18th start of Advanced LIGO was just some official bureaucratic event and they had totally OK data for days before that. Science agreed and added details.)

The distance of the source came as a shock to me. Over one billion light years. I expected the distance to be shorter by an order of magnitude, some 100 million light years, perhaps inside the Virgo supercluster whose size is about 100 million light years. With more patience (and the planned 3-fold improvement of the Advanced LIGO's sensitivity), LIGO should see many stronger (closer) black hole mergers as well as the other things that this totally new method to observe the Universe may discover (and cosmic strings aren't impossible).

Graphs of the discovered merger, taken from the PRL paper. Click it to zoom in. You see that the intensity at the two LIGO clones differs a bit. But the increasing-frequency profile is very clear in both pictures separately – compare the blue picture with the very similar 2010 fake injection – and the agreement between them eliminates any doubt that this is a gravitational wave from a cosmic merger.

Many of us were amazed by the fact that the black hole merger has converted three solar masses of $$E=mc^2$$ – which used to be stars of regular matter some time ago – into ripples of the spacetime. Kip Thorne has described the unbelievable power of the black hole merger differently: during the peak power, the black hole merger releases 50 times more watts in gravitational waves than all damn stars in the visible Universe combined. This is just shocking.

The gravitational wave GW150914 was so strong that the wave immediately created its own Twitter account, too. Ladies and Gentlemen, a new era in astrophysics – and, if we're very lucky, even in experimental fundamental physics – has begun.
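It is also fun to translate the peak strain into a displacement; again, my arithmetic rather than the paper's. With $$h\sim 10^{-21}$$ and arms of length $$L=4\,{\rm km}$$, each arm's proper length oscillates by about

$$\Delta L \sim hL \sim 10^{-21}\times 4\times 10^{3}\,{\rm m} = 4\times 10^{-18}\,{\rm m},$$

a few hundred times smaller than the size of a proton.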
By the way, there are various technicalities they must have mastered that are still bothering me. One of them is that the two LIGO clones' signals shouldn't differ just by the overall intensity and a time delay. They measure polarizations with respect to slightly different axes (a "plus" in Washington state, a combination of "plus" and "cross" in Louisiana), so the precise functions of time may differ as well, right? Appreciate that the incoming signal may be partly circularly polarized, i.e. an out-of-sync superposition of the two "cross/plus" polarizations. I think it would be helpful to have at least 8 clones of LIGO, to measure the location of the source more accurately and to be more clear about the different polarizations. (The location of GW150914 was determined to belong to a mostly Southern-Hemisphere strip of size 590 square degrees. Antares and IceCube saw no neutrinos at that moment from that direction.)

Also, the Hubble redshift $$z\sim 0.09$$ is "somewhat non-negligible". But it's possible that even if you neglected it completely, you would just produce something like 10% errors in the determination of other parameters, which is not too big a deal. After all, the black holes could have some non-cosmological speeds as well. I think that this speed could be confused with other changes of the parameters.

This educator's booklet talking about GW150914 (it's really great when GW doesn't stand for global warming) is nicely colorful. You may also like these eight new computer visualizations of the dramatic event created by black-holes.org. Before the traffic fades away, you may see the places from which people arrive to this blog – about ten visits a minute when I am writing these lines.

One more comment. On Twitter, I have discussed a potentially puzzling issue with a cosmologist – because I was prepared for that by the same confusion weeks ago, one which I hopefully resolved at the end. Why does LIGO see anything at all? Do the arms really stretch, and does the wavelength of the laser light stretch etc.?

I think that the answer is that one must appreciate the hierarchy between time scales. It takes just 60 microseconds for the laser light to get back and forth through the 4-kilometer arms. That's much shorter than one period of the gravitational wave, so the laser pretty much measures the "instantaneous" proper length of the arms at a given moment, and we may neglect the fact that the wave is changing things as functions of time. A slower process is the gravitational wave, with a period of 0.01 seconds or so, which really changes the metric tensor, so the same coordinate differences are translated to oscillating proper lengths. So the proper length of the arms is really oscillating.

And now, the rigidity of the Earth is working hard to return the proper length of the arms to the "normal" equilibrium value. But this process is the slowest one – the signals move at the speed of sound, so it takes a second or so to get through the solid materials. So this "sound" simply doesn't have enough time to return the length of the arm to "what it should be" before the gravitational wave makes another change. Note that it's important that the speed of sound is lower than the speed of light, and the period of the gravitational wave is in between the times needed by sound and by light to travel the distance (size of LIGO).

Some fun: the information was leaked "highly unofficially" months ago. But even on Thursday, an embargo was broken by a few minutes by a photograph of a cake (picture).
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.590887188911438, "perplexity": 1225.7504185569535}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988775.80/warc/CC-MAIN-20210507090724-20210507120724-00077.warc.gz"}